Organized by:
Westminster University

Area 1 - Global Communication Information Systems and Services
Area 2 - Security and Reliability in Information Systems and Networks
Area 3 - Wireless Communication Systems and Networks
Area 4 - Multimedia Signal Processing 
Workshop 1: The First International Workshop on Web Personalization, Recommender Systems and Intelligent User Interfaces (WPRSIUI 2005)
Workshop 3: The First International Workshop on Requirements Engineering for Information Systems in Digital Economy (REISDE 2005)

Authors: Daniel A. Nagy
Abstract: This paper presents a novel approach to on-line payment that tackles several issues of digital cash which, in the author's opinion, have contributed to the fact that, despite the technology being available for more than a decade, it has not achieved even a fraction of its anticipated popularity. The basic assumptions and requirements for such a system are revisited, clear economic objectives are formulated, and cryptographic techniques to achieve them are proposed.
Authors: Ghazal Afroozi Milani, Koorush Ziarati and Alireza Tamaddoni-Nezhad
Abstract: The increasing use of information and communication technology and the growth of globalization are leading to fundamental changes in the classic principles of organization, and organizations that disregard these changes frequently fail. One of the most important of these changes is the shift to the virtual organization, a label many corporations now claim. A criterion for measuring how successful they have been in acquiring virtual characteristics is therefore of crucial importance. In this paper, we measure the degree of virtual characteristics by using the properties of an ideal virtual enterprise as a reference point and comparing each company against that point. Factor analysis is used to assess the degree of virtual characteristics. Our aim is both to carry out an empirical study and to improve the methods previously used to measure the degree of virtual characteristics. As an application, the degree of virtual characteristics of Iranian petrochemical corporations, one of the country's most important industries, is assessed.
Authors: Jan Gerke, Peter Reichl and Burkhard Stiller
Abstract: Recently, the advance of service-oriented architectures and peer-to-peer networks has led to the creation of service-oriented peer-to-peer networks, which enable a distributed and decentralized services market. Apart from the usage of single services, this market supports the merging of services into new services, a process called service composition. It is argued, however, that for the time being this process can only be carried out by specialized peers, called service composers. This paper describes the new market created by these service composers and mathematically models the building blocks required for such a service composition. The general service composition algorithm developed can be used independently of solutions to the semantic difficulties and interface adaptation problems of service composition. In a scenario for buying a distributed computing service, simulated strategies are evaluated according to their scalability and the market welfare they create.
Title: TRANSFORMATION OF TRADITIONAL BUSINESS TO ELECTRONIC BUSINESS: A transformation cum maturity model and transformation matrix
Authors: Asif Ali Munshi and Fareed Hussain
Abstract: Electronic business provides many benefits to organizations that embrace it. E-business improves business processes, integrates business processes with their corresponding value chains, and speeds up internal and external organizational activities, but it also poses a great challenge to today's organizations: transforming their traditional businesses into electronic businesses. To assist organizations in making a successful transformation to e-business, this paper first distinguishes between electronic commerce and electronic business on the basis of three parameters: scope, support and technology. An e-business transformation cum maturity model is then proposed, whose five levels act as a benchmark for assessing an organization at different maturity levels. Emphasis is placed not on the maturity levels themselves but on the transformation stages that lie between any two levels of maturity. For this purpose, a transformation matrix is proposed, which guides organizations in taking critical organizational and information technology (IT) domains into consideration during each transformation stage.
Title: SECURE TRANSPARENT MOBILITY - Secure Mobility Infrastructure using Mobile IP
Authors: Mark W. Andrews, Ronan J. Skehill, Michael Barry and Sean McGrath
Abstract: Mobility has become an integral part of modern computing. It increases user flexibility by releasing the potential of fixed data. Reliance on a static computing platform is not sufficient for the future needs of nomadic users. Portable e-mail devices have become popular in recent years due to their simplicity and functionality. These devices give the average user transparent access to their e-mail from any location. Similar transparent access does not exist for general notebook or Personal Digital Assistant (PDA) computing environments. This paper addresses such access and details a secure mobility architecture from which users can extract greater value. It utilises Mobile IP, IP Security (IPsec), Internet Key Exchange and firewalls to provide a comprehensive mobility solution. It evaluates a test-bed in which this secure mobility solution was deployed, and discusses the viability of a secure, transparent architecture which supports mobility.
Authors: Wilfred W. K. Lin, Allan K. Y. Wong, Richard S.L. Wu and Tharam S. Dillon
Abstract: The self-similarity (S^2) filter is proposed for real-time applications. It can be used independently or as an extra component of the enhanced RTPD (real-time traffic pattern detector), or E-RTPD. The basis of the S^2 filter is the "asymptotically second-order self-similarity" concept (also called statistical 2nd OSS or S2nd OSS) for stationary time series. The focus is inter-arrival time (IAT) traffic. The filter is original in that no similar approaches for detecting self-similar traffic patterns on the fly are found in the literature. Different experiments confirm that, with help from the S^2 filter, the FLC (Fuzzy Logic Controller) dynamic buffer size tuner controls more accurately. As a result, the FLC improves the reliability of the client/server interaction path, leading to shorter round-trip times (RTT).
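The second-order self-similarity the filter tests for is commonly quantified by the Hurst parameter H. As an illustration only (this is not the authors' S^2 filter), a minimal aggregated-variance estimate of H in Python, using the property that for a self-similar series the variance of the m-aggregated series scales as m^(2H-2):

```python
import math
import statistics

def hurst_aggvar(series, scales=(1, 2, 4, 8, 16)):
    """Estimate the Hurst parameter H by the aggregated-variance method:
    for a second-order self-similar series, Var(X^(m)) ~ m^(2H-2), so the
    slope of log Var(X^(m)) against log m is 2H - 2."""
    xs, ys = [], []
    for m in scales:
        k = len(series) // m
        # m-aggregated series: block means over non-overlapping blocks
        agg = [sum(series[i * m:(i + 1) * m]) / m for i in range(k)]
        v = statistics.pvariance(agg)
        if v > 0:
            xs.append(math.log(m))
            ys.append(math.log(v))
    # least-squares slope of log-variance versus log-scale
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return 1 + slope / 2
```

For uncorrelated traffic the estimate should fall near H = 0.5, while genuinely self-similar traffic (e.g. heavy-tailed IATs) pushes H toward 1.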
Authors: Antonia Stefani, Dimitris Stavrinoudis and Michalis Xenos
Abstract: This paper provides an in-depth analysis of selected important topics related to the quality assessment of e-commerce systems. It briefly introduces to the reader a quality assessment model based on Bayesian Networks and presents in detail the practical application of this model, highlighting practical issues related to the involvement of human subjects, conflict resolution, and calibration of the measurement instruments. Furthermore, the paper presents the application process of the model for the quality assessment of various e-commerce systems; it also discusses in detail how particular features (data) of the assessed e-commerce systems can be identified and, using the described automated assessment process, lead to higher abstraction information (desiderata) regarding the quality of the assessed e-commerce systems.
Title: ADVERTISING VIA MOBILE TERMINALS - Delivering context sensitive and personalized advertising while guaranteeing privacy
Authors: Rebecca Bulander, Michael Decker, Gunther Schiefer and Bernhard Kölmel
Abstract: Mobile terminals like cellular phones and PDAs are a promising target platform for mobile advertising: the devices are widely spread, are able to present interactive multimedia content and, as personal communication devices carried along almost permanently, offer a high degree of reachability. Precisely because of the latter feature, however, it is important to pay great attention to privacy aspects and the avoidance of spam messages when designing an application for mobile advertising. The limited user interface of mobile devices poses a further challenge. The following article describes the mobile advertising approach developed within the project ****, which is financed by the Federal Ministry of Economics and Labour of Germany (BMWA). **** enables highly personalized and context-sensitive mobile advertising while guaranteeing data protection. To achieve this, public and private context information must be distinguished.
Title: SHARING SERVICE RESOURCE INFORMATION FOR APPLICATION INTEGRATION IN A VIRTUAL ENTERPRISE - Modeling the communication protocol for exchanging service resource information
Authors: Hiroshi Yamada and Akira Kawaguchi
Abstract: Grid computing and web service technologies enable us to use networked resources in a coordinated manner. An integrated service is made of individual services running on coordinated resources. In order to achieve such coordinated services autonomously, the initiator of a coordinated service needs to know detailed service resource information. This information ranges from static attributes like the IP address of the application server to highly dynamic ones like the CPU load. The best-known wide-area service discovery mechanism based on names is DNS; its hierarchical tree organization and caching methods take advantage of the static nature of the information managed. However, in order to integrate business applications in a virtual enterprise, we need a discovery mechanism that searches for the optimal resources based on a given set of criteria (search keys). In this paper, we propose a communication protocol for exchanging service resource information among wide-area systems. We introduce the concept of the service domain, which consists of service providers managed under the same management policy; this concept is similar to that of autonomous systems (ASs). In each service domain, the service resource information provider manages the service resource information of the service providers that exist in that domain, and exchanges this information with the service resource information providers belonging to different service domains. We also verify the protocol's behavior and effectiveness using a simulation model developed for the proposed protocol.
Authors: Ilhem Abdelhedi Abdelmoula, Hella Kaffel Ben Ayed and Farouk Kamoun
Abstract: This paper presents a new hierarchical distributed communication architecture, called AHS (Auction Handling System), based on clusters. This architecture uses IRC channels and protocol facilities in order to support real-time auctions. To resolve the winning bid within a cluster, cooperation between distributed auctioneers is needed to exchange and update relevant auction information. The problem is how to determine the best location of the coordinator. For this purpose, we suggest using the Floyd-Warshall all-pairs shortest-path algorithm from graph theory.
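As an illustrative sketch (not the AHS implementation), Floyd-Warshall can be used to pick the coordinator as the node whose worst-case shortest-path distance to the other auctioneers is smallest; the graph and link latencies below are hypothetical:

```python
INF = float("inf")

def floyd_warshall(n, edges):
    """All-pairs shortest paths for an undirected weighted graph
    given as (u, v, weight) triples over nodes 0..n-1."""
    dist = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for u, v, w in edges:
        dist[u][v] = min(dist[u][v], w)
        dist[v][u] = min(dist[v][u], w)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

def best_coordinator(n, edges):
    """Node minimizing the maximum distance to any other node
    (the graph centre), a natural coordinator placement."""
    dist = floyd_warshall(n, edges)
    return min(range(n), key=lambda i: max(dist[i]))

# Example: 4 auctioneers with hypothetical link latencies.
edges = [(0, 1, 2), (1, 2, 2), (2, 3, 5), (0, 3, 10)]
print(best_coordinator(4, edges))  # → 2
```

Minimizing the sum of distances instead of the maximum (the graph median) is an equally plausible criterion; which one the paper optimizes is not stated in the abstract.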
Authors: Evgenii Krouk and Sergei Semenov
Abstract: A fundamental characteristic of the majority of communications networks is the mean message delay. In a packet-switching network, the mean packet delay and the mean message delay may differ considerably from each other, and their distributions often take different forms. The mean message delay depends both on the mean packet delay and on the dispersion of the packet delay. Obviously, by reducing the mean packet delay one can also reduce the message delay; however, it is not always possible to decrease the mean packet delay in the network. The proposed method of transmitting data in a network is based on the use of error-correcting coding, which reduces the dispersion of the packet delay at the cost of some increase in the mean packet delay. Conditions are obtained under which an increase in the mean packet delay with a simultaneous reduction in its dispersion leads to a reduction in the mean message delay. In many real-time networks there exist restrictions on the message delay; the use of transport coding makes it possible to deliver messages over the network within a limited time with high probability.
Authors: Jinmi Jung, Hyun Kim, Hyungsun Kim and Joohaeng Lee
Abstract: In the modern business environment, inter-organizational collaborative product development is very important. CPC (Collaborative Product Commerce) is a new, emerging category of software that supports inter-enterprise collaboration throughout the product life-cycle. So far, however, there are no striking solutions to support collaboration among collaborative engineering groups. In this paper, we describe an engine to support building a CPC solution, which is being developed by the Electronics and Telecommunications Research Institute (ETRI) as part of a CPC project. It makes collaboration among geographically dispersed enterprises possible through the sharing of product information. We examine the problems that have to be solved in designing the engine and propose solutions. In addition, we describe the CPC solution developed using the engine.
Authors: C. W. Cheng and C. G. Chung
Abstract: The liberalization of regulations and the expansion of the mobile telecommunication market encourage more service providers to join the market. When customers have more opportunities to change service providers, the number portability (NP) service, which allows a user to keep a unique number when changing service provider, becomes essential. However, existing NP solutions all face the same problems: a huge NP database (NPDB), long delays for NPDB queries and NP call processing, and extra transmission resource consumption for number translation and call routing. Although previous research proposed utilizing caches to alleviate the workload of the NPDB and to reduce NPDB query delay, those approaches do not work for the NP service of mobile communication. Mobile communication is necessary to people. In large organizations, members move among the service areas or the subsidiaries of the same organization, and many mobile calls are made between organization members. Furthermore, organizations usually have frequently contacted parties, so calls made from an organization often exhibit locality. If caches are applied to organization-based mobile telecommunication components, the efficiency of NP communication can be improved: the call setup delay can be reduced when organization members call NP subscribers, and the NPDB and network resources of a mobile telecom service provider can be utilized efficiently. In this paper, we propose an organization-based mobile communication system to clarify the operational model of applying organization-based caches to the global mobile telecommunication system. In addition, the time delay of call setup and handoff is investigated to illustrate the feasibility and efficiency of applying caches to an organization-based mobile communication system.
Authors: Charles A Shoniregun, Ziyang Duan, Subhra Bose and  Alex Logvynovskiy
Abstract: A business object is a set of well-structured, persistent data associated with some predefined transactional operations. Maintaining the transactional correctness of business objects is very important, especially in financial applications. The object's correctness has to be guaranteed at any time during its lifecycle. This requires that each simple operation is correct, i.e., satisfies the ACID properties, and that the object is in an acceptable state before and after each operation. The correctness of each simple transaction can be secured and guaranteed by using a transactional database or a transaction monitor. However, the combined effect of executing a set of simple transactions may violate some business rules and leave the object in an unacceptable state. The proposed model is based on hierarchical statecharts to specify the allowable states of and transitions on a business object during its life cycle. The paper describes an XML-based framework to support application development based on this model. The framework includes an XML language for model specification, a set of tools for model definition, testing and simulation, and a set of APIs providing business object management functionality at runtime. The model and framework allow the secure transactional properties of a business object to be defined formally and declaratively, and provide correctness guarantees at runtime. The framework facilitates fast product development and integration in a service-oriented architectural model, and provides great flexibility for persisting data in either XML or relational databases. Experience with using the framework to develop a financial transaction system is reported, and the tradeoffs are discussed based on a comparison between XML and relational databases.
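The idea of guarding a business object with a statechart can be sketched as follows; the states, events and transition table are hypothetical examples, and the paper's XML model language is not reproduced here:

```python
# A minimal statechart guard for a business object: every operation is an
# event, and only transitions listed in the chart are allowed, so the
# object can never be driven into an unacceptable state.
# (state, event) -> next state; this toy account model is illustrative.
ALLOWED = {
    ("open",   "deposit"):  "open",
    ("open",   "withdraw"): "open",
    ("open",   "close"):    "closed",
}

def apply_event(state: str, event: str) -> str:
    """Return the next state, or reject any transition the statechart
    does not allow (e.g. withdrawing from a closed account)."""
    try:
        return ALLOWED[(state, event)]
    except KeyError:
        raise ValueError(f"illegal transition: {event!r} in state {state!r}")
```

In the paper's framework the equivalent table would come from the XML model specification rather than a hard-coded dictionary, and the check would run inside the transaction boundary.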
Title: WEB PERSONALIZED INTELLIGENT USER INTERFACES AND PROCESSES - An Enabler of Multi-Channel eBusiness Services Sustainability
Authors: Panagiotis Germanakos, Constantinos Mourlas and Chara Isaia
Abstract: The explosive growth in the size and use of the World Wide Web as a communication medium has been enthusiastically adopted by the mass market to provide an electronic connection between progressive businesses and millions of customers, bringing to light the eBusiness sector. However, the nature of most Web structures is static and complicated, and users often lose sight of the goal of their inquiry, look for stimulating rather than informative material, or use the navigational features unwisely. Hence, practitioners need to alleviate such navigational difficulties and satisfy the heterogeneous needs of users if web applications of this nature are to survive. The main emphasis is on saving costs, improving efficiency, growth and competitiveness, expanding markets and creating more business opportunities, for local and regional governments which aim at providing better and affordable public services to citizens and building "smart communities" by attracting business investment while guaranteeing both quality of life and economic health in the European e-Economy. The rapid growth of communication developments and technologies has had a profound economic and social impact and has introduced a wide variety of new channels over which different forms of contact and service delivery can take place. A predominant trend in eBusiness research concerns the creation of new infrastructures, methodologies and techniques to support high-level business-to-business and business-to-consumer activities on the Web. This paper presents the research implications and challenges of the Web Personalization concept as an enabler of eBusiness services sustainability through the development of Personalized Intelligent User Interfaces and processes.
Authors: Markus Schranz
Abstract: Modern Internet technologies have been opening innovative aspects in research for more than two decades, and entrepreneurs and pioneers have been transferring the latest technical developments to successful businesses. As the WWW and application-oriented Web Services become increasingly important in distributed computing, business-related application domains like media are getting interested in applying such modern technologies, often extended with intelligent algorithms to improve the service value for their end users and customers. The specific application domain of news publishing and news distribution faces significant performance requirements and high information-handling loads, transferring hundreds of thousands of messages daily to the research and commercial audience. Although communication technologies are well employed by modern organizations to generate, publish and distribute their information, specific application domains like online news publishing and distribution involve multinational and multilingual requirements and enormous amounts of data to be transported. This paper discusses an approach that integrates news agency services from existing European organizations, supported by university research in the areas of global information services, distributed information management and AI, in order to form an intelligent multinational business news publishing and distribution network based on Web Services and a peer-to-peer inter-agency communication network.
Title: E-COMMERCE TICKETING TOOL FOR MULTIFUNCTIONAL ENCLOSURES - Low-cost solutions to the best advantage of traditional venues
Authors: Ricardo Colomo, Ángel García and Edmundo Tovar
Abstract: The explosion of the Internet has made consumers much more familiar with new ways to shop. Sellers can distribute their products and accept orders from buyers 24 hours a day. The field of e-ticketing constitutes one of the mightiest lines of business in e-commerce. Traditional venues have a vital importance in the leisure circuits of many places in Europe, but they suffer from the disadvantage of obsolete systems and procedures. This paper describes an initiative carried out in Spain which allows the companies that manage traditional venues to sell tickets through the Internet without large investments, in order to reach new channels and to improve the operation and yield of the present ones.
Authors: Malamati Louta, Angelos Michalas, Ioannis Psoroulas and Evangelos Loutas
Abstract: Highly competitive and open environments should encompass mechanisms that assist service providers in accounting for their interests, i.e., offering adequate quality services at a given period of time in a cost-efficient manner, which is closely tied to efficiently managing and fulfilling current user requests. In this paper, the service task assignment problem is addressed from one of the possible theoretical perspectives, and new functionality is introduced into service architectures that run in open environments in order to support the proposed solution. The pertinent problem is concisely defined, mathematically formulated, solved in a computationally efficient manner and evaluated through simulation experiments.
Authors: Noh-sam Park and Gil-haeng Lee
Abstract: With the evolution of wireless networks, the wireless community has been increasingly looking for a framework that can provide policy-based SLA management. In this paper we first construct such a framework and then describe how SLA-based control can be used to achieve QoS in a wireless environment. We provide a common generic framework whose components interwork via XML. The proposed framework offers effective wireless LAN QoS control and management from service initiation to service termination.
Title: ENHANCED DISCOVERY OF WEB SERVICES - Using Semantic Context Descriptions
Authors: Simone A. Ludwig and S.M.S. Reyhani
Abstract: Automatic discovery of services is a crucial task for the e-Science and e-Business communities. Finding a suitable way to address this issue has become one of the key points to convert the Web into a distributed source of computation, as they enable the location of distributed services to perform a required functionality. To provide such an automatic location, the discovery process should be based on a semantic match between a declarative description of a service being sought and a description being offered. This problem requires not only an algorithm to match these descriptions, but also a language to declaratively express the capabilities of services. This paper presents a context-aware ontology selection framework which allows an increase in precision of the retrieved results by taking the contextual information into account.
Authors: Israel González-Carrasco, Jose Luis López-Cuadrado, Belen Ruiz-Mezcua and Angel García-Crespo
Abstract: E-commerce can be defined, in a broad sense, as any form of commercial transaction based on remote data transmission over communication networks. To facilitate this process, the market currently offers a broad range of electronic payment systems that allow electronic purchases to be made simply and transparently, helping to boost sales and to manage them efficiently. This article first presents the current situation of electronic commerce in Spain, detailing the state of the technology used, its real possibilities of use, the new payment methods, the security employed in the process and its influence on the market. Secondly, it proposes a virtual store in which different technologies are integrated to handle the purchase of software products. The designed website innovates in its payment modality, complies with the legislation currently in force in Spain, and streamlines and secures the purchase process by activating each product individually.
Authors: S. Pasqualini, S. Verbrugge, A. Kirstädter, A. Iselt, D. Colle, M. Pickavet, and P. Demeester
Abstract: This paper provides a detailed analysis and modelling of the Operational Expenditures (OPEX) of a network provider. The traditional operational processes are elaborated, and the changes expected when using GMPLS are described. GMPLS is promoted as a major technology for the automation of network operations and is often claimed to allow a reduction of OPEX; however, detailed analysis and quantitative evaluation of the changes induced by such technologies are rare. In this paper we quantify the cost reduction potential of GMPLS. In the case of a traditional network, we show an important impact of the resilience scheme used on the expenses directly related to continuous infrastructure costs (floor space, energy, etc.) and on the planning and repair costs. Concerning the service provisioning costs, we show that the introduction of GMPLS leads to a reduction on the order of 50% in OPEX compared to the traditional case.
Title: END TO END ADAPTATION FOR THE WEB: Matching Content to Client Connections
Authors: Kristoffer Getchell, Martin Bateman, Colin Allison and Alan Miller
Abstract: The size and heterogeneity of the Internet mean that the bandwidth available for a particular download may range from many megabits per second to a few kilobits. Yet Web servers today provide a one-size-fits-all service, and consequently the delay experienced by users accessing the same Web page may range from a few milliseconds to minutes. This paper presents a framework for making Web servers aware of the Quality of Service that is likely to be available for a user session, by utilizing measurements of past traffic conditions. The Web server adapts the fidelity of content delivered to users in order to control the delay experienced and thereby optimize the browsing experience. Where high-bandwidth connectivity and low congestion exist, high-fidelity content will be delivered; where the connectivity is low-bandwidth or the path congested, lower-fidelity content will be served and the delay controlled.
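The adaptation described can be sketched as a simple policy that serves the highest fidelity whose estimated delivery time meets a delay target; the target, variant names and page sizes below are illustrative assumptions, not the paper's values:

```python
def choose_fidelity(bandwidth_kbps: float, rtt_ms: float,
                    page_kb: dict) -> str:
    """Pick the highest-fidelity page variant whose estimated download
    delay stays under a (hypothetical) 2-second target, falling back to
    the lowest fidelity when even that target cannot be met."""
    TARGET_S = 2.0  # illustrative delay budget per page
    for level in ("high", "medium", "low"):
        # crude delay estimate: one RTT plus transfer time (KB -> kilobits)
        est = rtt_ms / 1000 + page_kb[level] * 8 / bandwidth_kbps
        if est <= TARGET_S:
            return level
    return "low"

variants = {"high": 200, "medium": 50, "low": 10}  # sizes in KB
print(choose_fidelity(1000, 50, variants))  # fast link → "high"
print(choose_fidelity(100, 50, variants))   # slow link → "low"
```

A real deployment would estimate bandwidth and RTT from measurements of past traffic on the same path, as the framework proposes, rather than taking them as inputs.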
Authors: Tao-Shen Li and Jing-Li Wu
Abstract: With the rapid development of e-commerce, the logistics industry has also experienced a new reform. Intelligent logistics is an important part of it and plays a key role in realizing highly effective logistics. The process of logistics poses abundant operational and decision-making problems that need to be solved, and the logistic vehicle routing problem is one of them. However, the vehicle routing problem with time windows (VRPTW) is a combinatorial optimization problem and is NP-complete, so exact approaches and ordinary heuristics do not yield satisfactory results. In this paper, an improved genetic algorithm to solve the VRPTW is developed, which uses an improved Route Crossover operator (RC') suited to the needs of the VRPTW. Computational experiments show that the GA based on RC' obtains generally good values for all evaluated indexes while satisfying every customer's demand, and that its performance is superior to GAs based on PMX or RC.
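The paper's RC' operator is not reproduced here, but the class of permutation crossovers such a GA builds on can be illustrated with the standard order crossover (OX) over a sequence of customer visits:

```python
import random

def order_crossover(p1, p2, rng=random):
    """Order crossover (OX): copy a random slice from parent 1, then fill
    the remaining positions with the missing customers in parent 2's
    order. Route-aware operators like PMX, RC and the paper's RC' are
    refinements of this basic scheme."""
    n = len(p1)
    a, b = sorted(rng.sample(range(n), 2))
    child = [None] * n
    child[a:b + 1] = p1[a:b + 1]          # inherit the slice from p1
    kept = set(child[a:b + 1])
    fill = [c for c in p2 if c not in kept]  # p2's order for the rest
    idx = 0
    for i in range(n):
        if child[i] is None:
            child[i] = fill[idx]
            idx += 1
    return child

# Two parent visiting orders over customers 0..7 (illustrative).
child = order_crossover(list(range(8)), list(reversed(range(8))))
print(child)  # a valid permutation mixing both parents' orderings
```

For the VRPTW the fitness of each child would additionally penalize violations of vehicle capacity and customer time windows, which is where the paper's improvements lie.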
Authors: C. W. Cheng and C. G. Chung
Abstract: The liberalization of regulations attracts more operators to join the telecommunication service and encourages the expansion of the telecommunication market. Users need to keep their phone number when changing service providers, so the number portability (NP) service becomes an important factor in enhancing the competitiveness of a service provider. Although NP service is available now, NP database (NPDB) queries and number translation are performed on an all-call basis, and the time required to process an NP call is long. Hence, it is an overhead for the service provider and the caller, and reducing the impact of processing NP calls is an important issue for all service providers. Considering that the contact parties of an organization often exhibit locality, we suggest locating caches at the PBX to record the destination addresses associated with portable numbers. Thus, the workload of translating portable numbers into destination addresses is shared with the PBX, and the traffic load on the NPDB decreases as the number of translation requests is reduced. In this paper, an interoperation of PBXs and telecommunication service providers is introduced. In addition, we demonstrate that caches on the PBX can remarkably reduce the time delay of call setup and alleviate the cost of NPDB queries.
AREA 2  
Authors: Chunlei Yang, Guiyun Tian and Steve Ward
Abstract: Fingerprints have been increasingly used in authentication applications. Smart cards are becoming more and more common and are moving toward a multi-function era. The integration of biometrics and smart cards is a trend for the future of smart cards. As part of our research project concerning a novel security card, we propose to integrate the fingerprint sensor with the smart card itself, instead of the usual solution in which the sensor is installed in a terminal machine. This solution has advantages regarding security, user privacy and flexibility. In this paper, we study biometric security and outline our solution. In addition, for the authentication decision part of the system, a novel adaptive decision algorithm combining biometrics with a PIN (personal identification number) is introduced. This algorithm offers a better trade-off between user convenience and security.
Authors: Patrik Salmela and Jan Melén
Abstract: The Host Identity Protocol (HIP) (Moskowitz (1), 2004) is one of the more recent designs that challenge the current Internet architecture. The main features of HIP are security and the identifier-locator split, which solves the problem of overloading the IP address with two separate tasks. This paper studies the possibility of providing HIP services to legacy hosts via a HIP proxy. Making a host HIP enabled requires that the IP-stack of the host is updated to support HIP. From a network administrator's perspective this can be a large obstacle. However, by providing HIP from a centralized point, a HIP proxy, the transition to begin using HIP can be made smoother. This and other arguments for a HIP proxy will be presented in this paper along with an analysis of a prototype HIP proxy and its performance.
Authors: Georgia Frantzeskou, Efstathios Stamatatos and Stefanos Gritzalis
Abstract: Source code authorship analysis is the field that attempts to identify the author of a computer program by treating each program as a linguistically analyzable entity, usually on the basis of other undisputed program samples from the same author. There are several cases where the application of such a method could be of major benefit, such as tracing the source of code left in the system after a cyber attack, authorship disputes, proof of authorship in court, etc. In this paper, we present our approach, which is based on byte-level n-gram profiles and is an extension of a method that has been successfully applied to natural language text authorship attribution. We propose a simplified profile and a new similarity measure which is less complicated than the algorithm followed in text authorship attribution and seems more suitable for source code identification, since it is better able to deal with very small training sets. Experiments were performed on two different data sets, one with programs written in C++ and the other with programs written in Java. Unlike the traditional language-dependent metrics used by previous studies, our approach can be applied to any programming language at no additional cost. The accuracy rates presented are much better than the best results reported for the same data sets.
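The byte-level n-gram idea can be sketched as follows: build a simplified profile from each author's known code (the most frequent n-grams) and attribute an unknown sample to the author whose profile it shares the most n-grams with. The parameter values and toy corpus below are illustrative, not the paper's:

```python
from collections import Counter

def profile(code: str, n: int = 3, L: int = 1500):
    """Simplified profile: the set of the L most frequent
    byte-level n-grams of the source text (n and L are assumptions)."""
    grams = Counter(code[i:i + n] for i in range(len(code) - n + 1))
    return {g for g, _ in grams.most_common(L)}

def similarity(p1: set, p2: set) -> int:
    """A simple intersection-based similarity: the number of
    n-grams the two profiles share."""
    return len(p1 & p2)

def attribute(unknown: str, corpus: dict) -> str:
    """Assign the unknown sample to the known author whose
    profile overlaps it most."""
    p = profile(unknown)
    return max(corpus, key=lambda a: similarity(p, profile(corpus[a])))

corpus = {  # toy "undisputed samples", one per author
    "alice": "for (int i = 0; i < n; i++) sum += a[i];",
    "bob":   "while(x!=0){x=x>>1;count++;}",
}
unknown = "for (int j = 0; j < m; j++) total += b[j];"
print(attribute(unknown, corpus))  # → alice
```

Because the features are raw bytes, the same code runs unchanged on C++, Java or any other language, which is the language-independence the abstract claims.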
Authors: Hehong Fan, Mingde Zhang and Xiaohan Sun
Abstract: The restoration techniques used in large-scale network systems make it possible for networks to survive destruction in a degraded manner. The assumption that network systems have only two opposite working states is therefore no longer justifiable. For WDM networks, where the failure of some elements affects only certain wavelength channels in a link, a novel network reliability evaluation model is presented in this paper, in which a network is composed of three different kinds of elements: nodes, links and channel-related elements. Reliability parameters for multiple working-states objects (MWSO) are introduced, and an algorithm for the network performability index - the value of the network under given link capacity and hop requirements - is described for the first time. Finally, the reliability evaluation model is used to analyze a WDM network with the CERNET topology, whose elements obey a Weibull failure distribution. Simulation results indicate that different working-state requirements may lead to fairly different reliability evaluation results; different link capacity requirements may also yield different results, especially when failure rates are high; and the differences between link capacity requirements grow quickly as failure rates increase. This implies that network reliability studies should be performed under the multiple working-states assumption, and that adding the new kind of network element - wavelength channel-related elements - is necessary for the reliability analysis of WDM networks.
Authors: Solange GHERNAOUTI-HÉLIE and Mohamed Ali SFAXI
Abstract: Protocols and applications could profit from quantum cryptography to secure communications. The applications of quantum cryptography are linked to telecommunication services that require a very high level of security, such as bank transactions. The aim of this paper is to present the possibility of using quantum cryptography in critical financial transactions, to analyse the use of quantum key distribution within IPSEC to secure these transactions, and to present the estimated performance of this solution. After introducing basic concepts in quantum cryptography, we describe a scenario of using quantum key distribution in bank transactions in Switzerland. We then propose a solution that integrates quantum key distribution into IPSEC. A performance analysis demonstrates the operational feasibility of this solution.
Authors: Marek Karpinski and Yakov Nekrich
Abstract: In this paper we describe optimal trade-offs between time and space complexity of Merkle tree traversals with their associated authentication paths, improving on the previous results of Jakobsson, Leighton, Micali, and Szydlo (Jakobsson et al., 03) and Szydlo (Szydlo, 03). In particular, we show that our algorithm requires $2 \log n/\log^{(3)} n$ hash function computations and storage for less than $(\log n/\log^{(3)} n + 1)\log\log n + 2 \log n$ hash values, where $n$ is the number of leaves in the Merkle tree. We also prove that these trade-offs are optimal, i.e. there is no algorithm that requires less than $O(\log n/\log t)$ time and less than $O(t\log n/\log t)$ space for any choice of parameter $t\geq 2$. Our algorithm could be of special use in the case when both time and space are limited.
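For readers unfamiliar with the object being traversed, the sketch below builds a Merkle tree and produces authentication paths naively, storing all O(n) nodes; that storage cost is exactly what the paper's traversal algorithm avoids. SHA-256 and a power-of-two leaf count are assumptions of the sketch.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_levels(leaves):
    """All levels of the tree, leaves up to root (assumes 2^k leaves)."""
    levels = [[h(x) for x in leaves]]
    while len(levels[-1]) > 1:
        lvl = levels[-1]
        levels.append([h(lvl[i] + lvl[i + 1]) for i in range(0, len(lvl), 2)])
    return levels

def auth_path(levels, idx):
    """Sibling hashes needed to verify leaf `idx` against the root."""
    path = []
    for lvl in levels[:-1]:
        path.append(lvl[idx ^ 1])  # sibling at this level
        idx //= 2
    return path

def verify(leaf, idx, path, root):
    """Recompute the root from a leaf and its authentication path."""
    node = h(leaf)
    for sib in path:
        node = h(sib + node) if idx & 1 else h(node + sib)
        idx //= 2
    return node == root
```

The trade-off the paper optimizes is between recomputing inner nodes on demand (time) and caching them across successive leaves (space).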
Authors: Debdeep Mukhopadhyay and Dipanwita RoyChowdhury
Abstract: This paper proposes a new key agreement protocol using Cellular Automata (CA). The primitives on which the protocol is based have been developed using specially designed CA structures. The paper aims at developing a key agreement technique that is not based on the Diffie-Hellman problem. The removal of exponentiations makes the protocol fast, giving it linear time complexity. The protocol has been found to resist known forms of attack. Indeed, the initial review promises a key agreement protocol that nicely meets the conflicting ends of security and efficiency.
Title: ADAPTIVE REAL-TIME NETWORK MONITORING SYSTEM - Detecting Anomalous Activity with Evolving Connectionist System
Authors: Muhammad Fermi Pasha, Rahmat Budiarto and Masashi Yamada
Abstract: When diagnosing network problems, it is desirable to have a view of the traffic inside the network. This can be achieved by profiling the traffic. A fully profiled traffic trace can contain significant information about the network's current state, help detect anomalous traffic, and can further be used to manage the network better. Many have addressed the problem of profiling network traffic, but unfortunately no specific profile can last forever for one particular network, since traffic characteristics change continually with the number of nodes, the software being used, the type of access, etc. This paper introduces an online adaptive system using Evolving Connectionist Systems to profile network traffic continuously while at the same time detecting anomalous activity inside the network in real time and adapting to changes when necessary. Unlike an offline approach, which usually profiles network traffic using previously captured data for a certain period of time, an online and adaptive approach can use a shorter period of data capture and evolve its profile when the characteristics of the network traffic change.
Authors: Liu Yan, Yin Xia and Wu Jianping
Abstract: Network management is the process of controlling networks efficiently. So far, network management protocols have evolved from SNMPv1 and SNMPv2 to SNMPv3. This paper briefly compares and analyses these versions with respect to SMI & MIB, protocol operations, security and access control. It also thoroughly summarizes the main research aspects of recent SNMP developments, especially regarding next-generation networks and new technologies, and points out the most promising directions.
Authors: Ahmad Fadlallah and Ahmed Serhrouchni
Abstract: Denial of service (DoS) attacks figure highly among the dangers that face the Internet. Many research studies deal with DoS, proposing models and/or architectures to stop this threat. The proposed solutions vary between prevention, detection, filtering and traceback of the attack. The latter (attack traceback) constitutes an important part of DoS defense. The most complex issue it faces is that attackers often use spoofed or incorrect IP addresses, thus disguising the true origin of the attack. In this work, we propose a security-oriented signaling protocol named SSSP (Simple Security Signaling Protocol). This protocol makes it easier to trace DoS and other types of attack back to their sources; it is simple, robust and efficient against IP spoofing. SSSP thus constitutes a novel and efficient approach to the attack traceback problem.
Authors: Seny Kamara, Breno de Medeiros and Susanne Wetzel
Abstract: Biometrics play an increasingly important role in the context of access control techniques, as they promise to overcome the problems of forgotten passwords or passwords that can be guessed easily. In this paper we introduce and provide a formal definition of the notion of secret locking, which generalizes a previously introduced concept for cryptographic key extraction from biometrics. We give details of an optimized implementation of the scheme which show that its performance allows the system to be used in practice. In addition, we introduce an extended framework to analyze the security of the scheme.
Authors: Somchai Lekcharoen and  Chanintorn Jittawiriyanukoon
Abstract: High-performance frame communication networks, including VDSL, have been conceived to carry traffic sources and support a continuum of transport rates ranging from low to high bit-rate traffic. As bursty telecommunications traffic fluctuates across such a network, congestion results. Traditional policing mechanisms are finite-sized buffers with queue management techniques and a fixed leak rate. Most queue management schemes employ fixed thresholds, or a limited number of arriving frames, to determine when to admit or discard frames. However, traditional policing mechanisms have proved inefficient in coping with the conflicting requirements of an ideal policing mechanism, that is, few dropped frames and many conforming frames. An alternative solution based on artificial intelligence techniques, specifically fuzzy systems, is introduced. In this paper, fuzzy control of the queue leak rate (QLR) of the buffer prior to the policing mechanism is investigated. The performance of this alternative method is then compared with traditional policing mechanisms. Simulation results show that, over a VDSL network, the fuzzy control scheme helps improve the QLR performance of policing mechanisms. The proposed method performs much better than traditional policing mechanisms.
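As an illustration of the general idea only (not the authors' controller), a minimal Mamdani-style rule base might map buffer occupancy to a leak rate as below; the triangular membership shapes and the three rule outputs are invented for the sketch.

```python
def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_leak_rate(occupancy: float, base_rate: float) -> float:
    """Map buffer occupancy in [0, 1] to a leak rate via three fuzzy rules."""
    mu = {
        "low":  tri(occupancy, -0.5, 0.0, 0.5),
        "mid":  tri(occupancy,  0.0, 0.5, 1.0),
        "high": tri(occupancy,  0.5, 1.0, 1.5),
    }
    # Rule consequents: drain slowly when nearly empty, fast when nearly full.
    rate = {"low": 0.5 * base_rate, "mid": 1.0 * base_rate, "high": 2.0 * base_rate}
    num = sum(mu[k] * rate[k] for k in mu)   # weighted-average defuzzification
    den = sum(mu.values())
    return num / den if den else base_rate
```

The point of such a controller, as in the abstract, is that the leak rate adapts smoothly to bursty occupancy instead of being fixed.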
Title: A PRELIMINARY EXPLORATION OF STRIPED HASHING: A probabilistic scheme to speed up existing hash algorithms
Authors: George I. Davida and Jeremy A. Hansen
Abstract: Hash algorithms generate a fixed-size output from a variable-size input. Typical algorithms process every byte of the input to generate their output, which, on a very large input, can be time-consuming. The hashes' potential slowness, coupled with recently reported attacks on the MD5 and SHA-1 hash algorithms, prompted a look at hashing from a different perspective. Pre-processing the input to a hash algorithm so that only part of the input is used may save time and possibly defend against these new attacks. By generating several "striped" hashes, we may speed up hash verification by a factor of the chosen stripe size.
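One plausible reading of the striping idea, sketched in Python with SHA-256 standing in for the underlying hash: stripe s covers every stripes-th byte starting at offset s, and a verifier may check only one randomly chosen stripe, doing roughly 1/stripes of the work. The stripe layout and the one-stripe verification policy are assumptions for the sketch, not the paper's exact scheme.

```python
import hashlib
import random

def striped_hashes(data: bytes, stripes: int = 4):
    """One digest per stripe; stripe s covers bytes s, s+stripes, s+2*stripes, ..."""
    return [hashlib.sha256(data[s::stripes]).hexdigest() for s in range(stripes)]

def verify_one_stripe(data: bytes, expected: list, stripe: int = None) -> bool:
    """Probabilistically verify: check a single (possibly random) stripe."""
    stripes = len(expected)
    s = random.randrange(stripes) if stripe is None else stripe
    return hashlib.sha256(data[s::stripes]).hexdigest() == expected[s]
```

A single-byte modification falls in exactly one stripe, so a random one-stripe check catches it with probability 1/stripes; checking all stripes catches it with certainty.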
Authors: Michael Höding
Abstract: This contribution discusses management software for Application Service Providing (ASP). The development of such software systems must take numerous specific requirements into account; as examples, we discuss aspects of heterogeneity. Because of this, a flexible software engineering approach covering design and implementation is necessary. To that end, we propose design patterns and component technology. The application of design patterns is demonstrated with examples.
Authors: George Davida and Jon Peccarelli
Abstract: Microsoft Windows systems have been the favourite target of viruses, worms and spyware in recent years. Through modifications of the file association and file type mappings, and a kernel-level process, we can secure Windows processes by running them in their own "security boxes". These "security boxes" are configured to grant only a subset of rights to the process, thereby limiting the ability of a rogue or compromised process to inflict damage on the rest of the system.
Title: RELIABLE MULTICAST PROTOCOL - A modified retransmission-based loss-recovery scheme based on the selective repeated ARQ protocols
Authors: Chih-Shing Tau and Tzone-I Wang
Authors: Yazeed A. Al-Sbou, Reza Saatchi, Samir Al-Khayatt and Rebecca Strachan
Abstract: As networks grow in complexity and scale, the importance of network performance monitoring and measurement also increases significantly. High data rates often lead to large amounts of measurement data, so to prevent exhaustion of network resources and to reduce the measurement cost, a reduction of the collected data is required. A performance measurement method for estimating the actual network performance experienced by the user is proposed. The basic procedure is as follows: select a suitable number of sampled packets from the ongoing traffic; measure the network performance by measuring the QoS parameters (delay, jitter, and throughput) of the sampled packets; and convert these measurements into the actual performance experienced by the user by weighting each sample with the number of user packets arriving between consecutive sampled packets, which is counted passively. This study focuses on monitoring network performance and estimating its main Quality of Service (QoS) parameters (delay, throughput, and jitter) through a non-intrusive passive measurement method based on sampling. The method therefore overcomes drawbacks of both active and passive monitoring, because it measures the actual performance experienced by the user while computing QoS parameters only from the sampled packets. The idea was validated and verified through simulations. Three different sampling techniques (systematic, random, and stratified) were investigated, and a comparison among them was carried out for different sample sizes. The study indicated that an accurate estimation of the QoS parameters can be obtained without measuring every packet of the traffic.
As a result, the scheme provides an estimate of the detailed performance characteristics of each individual user. For the bottleneck-based network topology and traffic conditions used, random sampling showed the best overall performance.
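The weighting step described above can be sketched as follows, assuming packets are recorded as (arrival_time, delay) pairs and using systematic sampling; the structure is illustrative rather than the paper's exact estimator.

```python
def systematic_sample(packets, interval):
    """Every `interval`-th packet; packets are (arrival_time, delay) pairs."""
    return packets[::interval]

def weighted_mean_delay(packets, interval):
    """Estimate mean delay from sampled packets, weighting each sample
    by the number of user packets it stands for (itself plus the packets
    up to, but excluding, the next sampled packet)."""
    samples = systematic_sample(packets, interval)
    total = w_sum = 0.0
    for i, (_, delay) in enumerate(samples):
        start = i * interval
        weight = min(interval, len(packets) - start)  # packets represented
        total += weight * delay
        w_sum += weight
    return total / w_sum
```

Only the sampled packets need timestamping and delay computation; the per-sample weights come from passively counting packets, which is the cost reduction the abstract describes. Random or stratified sampling would change only `systematic_sample`.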
Authors: Delphine Charlet and Victor Peral Lecha
Abstract: Driven by an increasing need for personalizing and protecting access to voice services, a state-of-the-art speaker recognition system has been used as a framework for experimenting with voices in a family context. With the aim of evaluating this task, 33 families were recruited. Particular attention was given to 2 main scenarios: a name-based scenario and a common sentence-based scenario. Moreover, this paper presents a database collection and first experiments for family voices. The results of these experiments show the particular difficulties of speaker recognition within the family, depending on the scenario, the gender and age of the speaker, and the physiological nature of the imposture.
Authors: David Buchmann, Dominik Jungo and Ulrich Ultes-Nitsche
Abstract: We present in this paper our system for configuring networks. The Verified Network Configuration (VeriNeC) project aims to manage the complete network from one central XML database. This approach makes it possible to verify whether a configuration is usable before applying it to the network devices. The configuration is distributed to each managed device by a modular framework that can be extended to support all kinds of services and devices. Verifying the configuration of a network helps build more reliable networks, as conflicting or potentially problematic setups are detected. It also increases the security of a network, because weak points, e.g. unnecessarily open channels and running services, can be avoided.
Authors: Eunjin Ko, Junwoo Lee, Gilhaeng Lee and Youngsun Kim
Abstract: To provide a high level of network service quality and prevent contract disputes, a Service Level Agreement (SLA) is an increasingly essential factor in the telecommunication industry. While most network service providers offer an SLA based on their legacy monitoring or reporting systems, an SLA system architecture is needed that accommodates different operating legacy systems, regardless of each system's environment, and provides an integrated SLA function between network service providers and telecommunication customers instead of relying on individual legacy systems. To achieve these goals, this paper presents an integrated SLA system built on web-based interworking with legacy systems: the Web-based Service Level Agreement Monitoring and Reporting (WSMR) system. It can easily be modified and adjusted as the system environment changes. In WSMR, it is important to gather raw data in time from legacy systems or other Operating Support Systems and transfer it accurately to the WSMR modules that support the SLA monitoring and reporting functions. To provide in-time data distribution, this paper designs a Data Dispatcher Module (DDM) which provides interworking interfaces; classifies and modifies raw data according to pre-defined configuration information; transfers it to other modules; and logs all data flow through the DDM. With the DDM, raw data is managed and controlled to provide an integrated SLA in time.
Authors: Rodrigo S. Alves, Clarissa C. Marquezan, Philippe O. A. Navaux and Lisandro Z. Granville
Abstract: The management of high-performance clusters, in several situations, needs to be integrated with the management of computer networks. In order to achieve such integration, a network management architecture and protocol, such as the SNMP, could be used for cluster management. This paper investigates the required integration through the implementation of SNMP agents deployed in the cluster infrastructures, as well as the development of a management system for these SNMP agents.
Authors: Jarrod Trevathan and Alan McCabe
Abstract: This paper presents a secure real-time remote user authentication system based on dynamic handwritten signature verification. The system allows users to establish their identities to other parties in real-time via a trusted verification server. The system can be used to gain remote access to restricted content on a server or to verify a signature on a legal document. State of the art dynamic verification techniques are combined with proven cryptographic methods to develop a secure model for remote handwritten signature verification.
Authors: Jarrod Trevathan, Wayne Read and Hossein Ghodosi
Abstract: Extensive research has been conducted in order to improve the security and efficiency of electronic auctions. However, little attention has been paid to the design issues. This paper discusses design issues and contrasts the differing security requirements between various auction types. We demonstrate that poor design for an electronic auction breaches the security of the system and degrades its practicality, irrespective of how secure/efficient the building blocks of an electronic auction are. This is accomplished by illustrating design flaws in several existing electronic auction schemes. Furthermore, we provide a solution to these flaws using a group signature scheme and give recommendations for sound auction design.
Authors: João Afonso, Edmundo Monteiro and Carlos Ferreira
Abstract: In this work we propose a computing platform that aims to unify the tasks of monitoring, diagnosis, error detection, alarm management and intrusion detection (IDS) associated with the administration of a computer network and its related critical services. Our main objective is to develop a user-intuitive program that does not require specialized computer skills from the operators who assume full responsibility for the system. Open-source solutions were used whenever possible, namely for the server operating systems, application development tools, database engine and integrated Web solution. The project started by studying existing solutions, exploring their strengths and shortcomings and iteratively defining the specific requirements to be implemented. The development phase was conceptually divided into three levels: the agents and connectors collecting data from the different areas being monitored; the database engine cataloguing the information; and the Web interface (Security Portal) that allows the management of all functionalities and guarantees the operability of the solution. An alarm management tool was also specified which, according to programmed warnings for certain malfunctions, triggers warning messages through a pre-defined medium - e-mail, SMS (short message service) or IM (instant messaging) - using a Unified Messaging (UM) solution. According to the defined specifications, the solution was designed and a functional analysis was produced. Finally, the projected solution was implemented and applied to a case study: the Department of Fisheries Inspection of the General-Directorate of Fisheries and Aquiculture. The preliminary results from the reliability and user-friendliness tests were very positive and a decision was made to move into the production phase.
The platform was developed in line with current accessibility requirements and can be operated and consulted by users with disabilities.
Authors: Weider D. Yu and Archana Mansukhani
Abstract: Service Oriented Architecture (SOA) is a recent evolution in distributed software technology, and Web Services (WS) are a new way of thinking in distributed computing. They are an important step towards a service-oriented architecture that proposes the idea of providing a service rather than providing software. Web Services are used to obtain services in an open, platform-independent way. Recent focus on Web Services has been in the area of security. Security is an ongoing concern in many areas but is particularly pertinent to Web Services technology. Many standards exist today for different aspects of Web Services security; however, no standard exists in the area of Web Services authorization. This paper describes the design of a reusable authorization layer for Web Services software. This layer resides separately from the Web Services themselves and uses a rule-based inference engine to determine authorization and access rights. It also uses different types of access control to formulate feature-rich rules.
Authors: Nancy Alonistioti, Alexandros Kaloxylos and Andreas Maras
Abstract: Reconfiguration is the action of modifying the operation or behaviour of a system, a network node, or functional entity when specific events take place. This paper describes an integrated control and management plane framework for end-to-end reconfiguration, and maps this model to a Beyond 3G mobile network architecture. Emphasis is provided on the reconfiguration actions that take place when a session setup or a handover are executed in a heterogeneous mobile network.
Authors: Mohammad Saraireh, Reza Saatchi, Samir Al-khayatt and Rebecca Strachan
Abstract: The fast growth and development of wireless computer networks and multimedia applications make the Quality of Service (QoS) provided for their transmission an important issue. This paper investigates the impact of varying the number of active stations on network performance. This was carried out using different data rates. The investigation also considered both MAC protocol access mechanisms, i.e. basic access and Request To Send / Clear To Send (RTS/CTS). The effect of traffic type, i.e. Constant Bit Rate (CBR) and Variable Bit Rate (VBR), was also examined. The findings revealed that in large networks (more than 15 stations), the RTS/CTS access mechanism outperformed the basic access mechanism, since the performance of the latter was more sensitive to changes in the number of active stations. Increasing the data rate improved the network performance in terms of delay and jitter but degraded it in terms of channel utilisation and packet loss ratio.
Authors: Shobhan Adhikari and Teerapat Sanguankotchakorn
Abstract: Due to concerns about the imminent exhaustion of available addresses in the previous version of the Internet Protocol, IPv4, and to offer additional functionality for new devices, IPv6 was proposed. Mobile IPv6 (MIPv6), an extension to IPv6, manages Mobile Nodes' movements between wireless IPv6 networks. One of the most important considerations in Mobile IPv6 is handover management: handovers should be fast and lossless. Seamless handovers are those that incur minimum packet loss and delay. Various proposals have been made for seamless handover in MIPv6. By forwarding the packets destined for the Mobile Node towards the new point of attachment and buffering them there until the Mobile Node has attached, packet loss can be significantly decreased, and the forwarding delay is also lower than when forwarding from the previous point. In this paper, we study the performance of one such scheme, which optimizes fast handover over a hierarchical structure with buffering, using NS-2 simulations to evaluate packet loss and delay for UDP streams. It was observed that, with the buffering scheme used, the handover was seamless. There was a difference in latencies with and without handover, as expected. Most of the performance factors studied were found to depend more on the data rate of the traffic than on the speed of the Mobile Node.
Authors: Gráinne Hanley, Seán Murphy and Liam Murphy
Abstract: This paper examines, via simulation, the performance of an 802.11e MAC over an 802.11g PHY operating in DSSS-OFDM mode. The DSSS-OFDM scheme provides data rates of up to 54Mb/s as well as interoperability with 802.11b nodes. Due to the widespread use of 802.11b nodes, such interoperability is an important consideration. This paper involves a study of the number of simultaneous bidirectional G.711 VoIP calls that can be supported by such a WLAN. The results show that this mode of operation introduces a very significant overhead. The actual number of calls that can be carried is limited to 12 when using the 24Mb/s data rate and 13 when using either the 36Mb/s or 54Mb/s rates. These results demonstrate the well-known disparity between uplink and downlink performance, with the downlink imposing the limit on the number of calls that can be carried by the system in the cases studied. The results also show that when a significant amount of lower priority traffic is introduced into the system, it can have a significant impact on VoIP call capacity despite the use of 802.11e.
Authors: Christian Veigner and Chunming Rong
Abstract: In the next-generation Internet protocol (IPv6), mobility support by means of Mobile IPv6 (MIPv6) is a default feature. As part of the MIPv6 protocol, route optimization is used to transmit packets directly to the address a mobile node currently uses at its visited subnet. Return Routability is the protocol suggested by the IETF in [1] for managing this task. Route optimization is often carried out during handovers, where a mobile node changes network attachment from one subnet to another. To offer seamless handovers to the user, it is important that route optimizations are carried out quickly. In this paper we present an attack that was discovered during the design of a new protocol that is more seamless than Return Routability. Our improved route optimization protocol for Mobile IPv6 [2] is vulnerable to this attack; we therefore investigated whether a similar attack is feasible on the Return Routability protocol suggested by the IETF. We show that our new route optimization protocol offers no less security in this respect than the already standardized Return Routability protocol.
Authors: M. Vastram Naik, A. Mahanta, R. Bhattacharjee and H. B. Nemade
Abstract: This paper addresses the Automatic Blind Modulation Recognition (ABMR) problem, utilizing a Mean Square Error (MSE) decision rule to recognize and differentiate M-ary PSK modulated signals in the presence of noise and fading. The performance of the modulation recognition scheme has been evaluated by simulating different types of PSK signals. By putting an appropriate Mean Square Error Difference Threshold (MSEDT) on the Mean Square Error (MSE), the proposed scheme has been found to recognize the different modulated signals with 100% recognition accuracy at a Signal-to-Noise Ratio (SNR) as low as 1 dB in AWGN channels. The number of data samples required for recognition is very small, greatly reducing the time complexity of the recognizer. For fading signals, Constant Modulus (CM) equalization is applied prior to recognition. It has been observed that when CM equalization is used, 100% recognition can be achieved at an SNR as low as 6 dB.
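The flavour of an MSE-difference decision rule can be sketched as follows. Since M-PSK constellations nest (BPSK within QPSK within 8-PSK), the MSE to the nearest ideal point only shrinks as M grows, so a natural rule picks the smallest M beyond which the MSE stops improving by more than a threshold. The threshold value and candidate orders below are illustrative choices, not the paper's exact scheme, and no CM equalization is included.

```python
import cmath
import math

def mse_to_mpsk(samples, M):
    """Mean squared distance from complex samples to the nearest ideal M-PSK point."""
    points = [cmath.exp(2j * math.pi * k / M) for k in range(M)]
    return sum(min(abs(s - p) ** 2 for p in points) for s in samples) / len(samples)

def recognize_mpsk(samples, threshold=0.05, orders=(2, 4, 8, 16)):
    """Pick the smallest order M whose MSE gain over the next order is negligible."""
    mses = [mse_to_mpsk(samples, M) for M in orders]
    for M, e_this, e_next in zip(orders, mses, mses[1:]):
        if e_this - e_next < threshold:  # doubling M no longer helps: stop at M
            return M
    return orders[-1]
```

With noisy samples the MSE never reaches zero, which is why a difference threshold, rather than an absolute one, is the natural decision statistic.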
Authors: Jason C.T. Lo, Allan K.Y. Wong and Wilfred W.K. Lin
Abstract: The novel statistical distribution independent transfer policy model (SDITPM) is proposed to improve the serviceability of a logical agent server in a pervasive computing environment. Serviceability is defined as the "chance of obtaining a required service within a defined period". The SDITPM helps the agent make sound migration decisions by leveraging different primary metrics, from which secondary ones are derived for the proportional (P), derivative (D) and integral (I) control elements. These elements are combined in a timely manner by planar and vertical integrations to form the final transfer probability that affirms a transfer policy decision to migrate. The SDITPM is thus essentially a PID controller that facilitates the decision-making process.
Authors: Ziad Hunaiti, Ammar Rahman, Wamadeva Balachandran and Zayed Huneiti
Abstract: This paper discusses the results obtained from testing and evaluating the performance of public Wireless Local Area Network (WLAN) hotspots in real life. A fully detailed analysis of a specially constructed testbed is given. The testbed was based on standard user equipment and provided near-real-world performance results. The paper also presents an overview of the bandwidth and quality of communication received by end-users. BT Openzone was chosen as the hotspot provider for implementing and carrying out the tests. The results of these tests show the suitability of public wireless networks in the United Kingdom for quality-critical applications.
Authors: Matthew D'Souza, Adam Postula, Neil Bergmann and Montserrat Ros
Abstract: This paper presents a Bluetooth-based communications protocol used for multimedia guidebooks on mobile computing devices. Multimedia guidebooks are used in museums to allow users to access information about museum exhibits. The multimedia guidebook protocol was successfully implemented on personal digital assistant and mobile (cell) phone platforms. The protocol overcomes some of the wireless file transfer limitations of mobile computing devices. It uses Bluetooth wireless connections as the communications medium and can be used to transfer various file formats, such as image or audio files. The protocol also identifies the language content of the information file. Future work on this protocol involves expanding it to allow for other languages and other user preferences, such as personal interests and file download options.
Authors: Adrian Spalka, Armin B. Cremers and Marcel Winandy
Abstract: Adaptive mobile applications are expected to play an important role in the future of mobile communication. Adaptation offers a convenient and resource-saving way of providing tailored functionality to an application at the time the user requests it. But to make this versatile technology a success, the security of all concerned parties must be addressed. This work, which is in part embedded in a T-Com project, presents a multilateral security examination in two stages. We first introduce a co-operation model and state the security requirements from the perspective of each party. In the second stage we investigate the set of all requirements with respect to conflicts, state each party's role in the enforcement and suggest a realisation. The result is a comprehensive picture of the security aspects of adaptive applications in mobile environments.
Authors: Juan P. Pece, Carlos Fernández and Carlos J. Escudero
Abstract: Nowadays the amount of information accessible across the Internet is enormous. Context-aware systems try to filter and adapt that information to the environment of the user who requests it. This paper introduces a context-aware system that filters information using the physical location of the request origin. The system is designed for mobile phones and is composed of three subsystems: mobile, access point and information server. Access points are used to transmit and receive information from mobile phones and also to locate the request origin. Communication and location are based on Bluetooth technology, an open standard for wireless communication. The paper shows an application of this system to a tourist information service.
Title: MOBILE LOCALITY-AWARE MULTIMEDIA ON MOBILE COMPUTING DEVICES-A Bluetooth Wireless Network Infrastructure for a Multimedia Guidebook
Authors: Matthew D'Souza, Adam Postula, Neil Bergmann and Montserrat Ros
Abstract: This paper describes the implementation of a Bluetooth Village Guide Book (VGB) scenario for use in the Kelvin Grove Urban Village located in Brisbane, Australia. An Information Point Station Network (IPSN) was developed, along with software for two types of mobile computing devices. The implementation consists of several Information Point Stations (IPSs) placed at locations of significance, with access to information items on a centralized server. Once a user is registered on the network, he/she is given the opportunity to experience context- (and eventually user-) aware information on demand and in various multimedia formats. These information items are selected by the user, either by way of a menu system appearing on their mobile computing device or a more intuitive pointer-tag system. Information items are then 'beamed' to the user's mobile computing device for the user to view. Bluetooth was selected as the medium of choice due to its prevalence on most modern mobile computing devices such as Personal Digital Assistants and mobile (cell) phones. This fact, coupled with the fact that Bluetooth communications do not require line-of-sight, meant users could retrieve location-aware information at the "locality" level. The implementation was found to be successful and was tested with multiple users accessing information items from a given IPS, as well as multiple IPSs attached to the centralized server. Still, there is further work to be done on the VGB software, the user-registration system and on creating an embedded solution for the individual Information Point Stations.
Authors: Tsung-Han Lee, Alan Marshall and Bosheng Zhou
Abstract: We present a novel scheme for conserving energy in multi-rate multi-hop wireless networks such as 802.11. In our approach, energy conservation is achieved by controlling the rebroadcast times of Route Request (RREQ) packets during path discovery in on-demand wireless routing protocols. The scheme is cross-layer in nature. At the network layer, the RREQ rebroadcast delay is controlled by the energy consumption information, and at the Physical layer, an energy consumption model is used to select both the rate and transmission range. The paper describes the energy-conserving algorithm at the network layer (ECAN), along with simulation results that compare the energy consumption of Ad-hoc On-Demand Distance Vector routing (AODV) with and without ECAN.
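For illustration only (not the authors' algorithm), a delay function of the kind the abstract describes, where a node's RREQ rebroadcast wait grows with its forwarding energy cost so that energy-cheap paths tend to be discovered first, could be sketched as follows; the linear scaling and jitter are hypothetical:

```python
import random

def rreq_rebroadcast_delay(energy_cost, max_cost, max_delay=0.1):
    """Hypothetical ECAN-style delay (seconds): nodes whose forwarding
    would consume more energy wait longer before rebroadcasting a Route
    Request, biasing on-demand path discovery toward energy-cheap routes.
    A small random jitter avoids synchronized rebroadcasts."""
    jitter = random.uniform(0.0, 0.1 * max_delay)
    return max_delay * (energy_cost / max_cost) + jitter

# A low-cost node rebroadcasts sooner than a high-cost one.
delay_cheap = rreq_rebroadcast_delay(energy_cost=1.0, max_cost=10.0)
delay_costly = rreq_rebroadcast_delay(energy_cost=9.0, max_cost=10.0)
```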
Authors: Hiroshi Masuyama, Yuuta Fukudome and Tetsuo Ichimori
Abstract: In this paper, a one-to-all broadcasting algorithm on a mobile cellular network is first discussed as a minimum diameter spanning tree problem in a graph where every arc has a constant weight. An all-to-all broadcasting algorithm is next discussed on a subject of avoiding heavy traffic conditions. Finally, a fault-tolerant broadcasting scheme is presented.
Authors: Andrey Lyakhov, Vladimir Vishnevsky and Pavel Poupyrev
Abstract: IEEE 802.11 technology is becoming attractive for implementing various broadcast applications. For these applications, one of the main performance indices is the mean notification time, that is, the interval between consecutive successful receipts of the same source's packets. In this paper, we develop the first analytical method to study the performance of an ad hoc 802.11 network with broadcasting stations. The method, based on a Markov model, allows estimation of the mean notification time and optimization of the packet generation time. The developed method is validated by simulation and shows high accuracy in notification time estimation as well as efficiency in broadcasting optimization.
Authors: João Carlos Silva, Nuno Souto, Francisco Cercas and Rui Dinis
Abstract: An MMSE (Minimum Mean Square Error) DS-CDMA (Direct Sequence-Code Division Multiple Access) receiver coupled with a low-complexity iterative interference suppression algorithm was devised for a MIMO/BLAST (Multiple Input, Multiple Output / Bell Laboratories Layered Space Time) system in order to improve system performance, considering frequency-selective fading channels. The scheme is compared against the simple MMSE receiver, for both QPSK and 16QAM modulations, under SISO (Single Input, Single Output) and MIMO systems, the latter with 2Tx by 2Rx and 4Tx by 4Rx antennas (MIMO order 2 and 4, respectively). To assess its performance in an existing system, the uncoded UMTS HSDPA (High Speed Downlink Packet Access) standard was considered.
Title: SEARCHING FOR RESOURCES IN MANETS - A cluster based flooding approach
Authors: Rodolfo Oliveira, Luis Bernardo and Paulo Pinto
Abstract: In this paper, we propose a searching service optimized for highly dynamic mobile ad-hoc networks based on a flooding approach. A lightweight clustering algorithm is proposed to reduce the flooding overhead. MANETs' unreliability and routing costs prevent the use of central servers or global infrastructure-based services on top of a priori defined virtual overlay networks; a flooding approach over a virtual overlay network created on demand performs better. A clustering algorithm based on link stability is proposed in order to avoid broadcast storm problems. The paper compares the relative efficiency of two clustering approaches using 1.5-hop and 2.5-hop neighborhood information. It presents a set of simulation results on clustering efficiency and on searching efficiency for low and high movement, showing that the 1.5-hop algorithm is more resilient to load and to node movement than the 2.5-hop algorithm. It also shows that a pure source-routing approach fails in both scenarios.
Authors: Fernando da Costa Junior, Luciano Gaspary, Jorge Barbosa, Gerson Cavalheiro and Luciano Pfitscher
Abstract: Despite offering the possibility to develop and distribute a new set of applications to its users, the widespread and unrestricted use of mobile computing depends on the provisioning of a secure network environment. For communication established from mobile devices such as PDAs (Personal Digital Assistants), one of the most widely used standards is IEEE 802.11b, which has known security flaws. To overcome them, some alternative setups are commonly deployed, based on link-, network-, transport- or application-layer mechanisms. In this paper we evaluate the impact of IPSec-based PDA access to 802.11b (WiFi) wireless LANs on data reception rate and energy consumption. As a result of this work we identify the overhead imposed by the security mechanisms and the capacity of the device to run CPU- and network-intensive applications.
Authors: Carlos Fernández, Juan P. Pece and Daniel I. Iglesia
Abstract: Nowadays, many of the services offered through SMS by mobile telephony do not provide the user with a simple and intuitive tool to fill in and send information. Users have to encode the format of the message themselves, with the corresponding annoyance and possibility of making errors. One solution to these problems is to provide the user with forms that he/she can fill in from a mobile phone and send in a transparent way. SMSFormKit is an application which makes the creation of new XForms-based services for mobile phones easier. It was implemented using J2ME as the development environment and has the following functionality: form display, validation of the values provided by the user, submission of requests, and form updates. Forms are displayed using high-level graphic elements, which guarantees compatibility with most of the mobile phones on the market. Validation is carried out in the application itself (local validation), avoiding the inconvenience of sending the form to the server only to have it returned because fields were filled in incorrectly. With regard to form updates, there are two ways to obtain them: through SMS or through GPRS.
Authors: Akira Takahashi, Yoshitaka Takahashi, Shigeru Kaneda, Yoshikazu Akinaga and Noriteru Shinagawa
Abstract: In this paper, we analyze and synthesize a multi-server loss system with repeated customers, arising in NTT DoCoMo-developed telecommunication networks. We first provide the numerical solution for a Markovian model with exponential retrial intervals. Applying Little's formula, we derive the main system performance measures (blocking probability and mean waiting time) for general non-Markovian models. We compare the numerical and simulated results for the Markovian model in order to check the accuracy of the simulations. By performing extensive simulations for non-Markovian models (non-exponential retrial intervals), we find robustness in the blocking probability and the mean waiting time; that is, the performance measures are shown to be insensitive to the retrial interval distribution except for its mean.
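For context (this is textbook background, not the paper's retrial model), the blocking probability of a multi-server loss system without retrials is given by the classical Erlang-B formula, which can be computed with a numerically stable recursion:

```python
def erlang_b(servers, offered_load):
    """Erlang-B blocking probability for an M/M/c/c loss system.

    servers: number of servers c; offered_load: a = lambda / mu in Erlangs.
    Uses the stable recursion B(0) = 1, B(c) = a*B(c-1) / (c + a*B(c-1)).
    Retrial models such as the one in the paper generalize this baseline.
    """
    b = 1.0
    for c in range(1, servers + 1):
        b = offered_load * b / (c + offered_load * b)
    return b

# 10 servers offered 5 Erlangs of traffic block under 2% of arrivals.
blocking = erlang_b(10, 5.0)
```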
Authors: Albert Mráz, Mihály Katona and Sándor Imre
Abstract: In recent years mobile telecommunication networks have undergone major development. The systems' services have expanded very quickly, as has the number of subscribers. Various multimedia and Internet systems have rapidly become part of our lives, and this phenomenon affects mobile communication too. Third-generation mobile networks could be the solution, eliminating the defects of current systems and applying good solutions to both the access and transport systems. These 3G systems are able to meet growing user demands and mobile Internet requirements. Our goal is to use the scarce resources as efficiently as possible. We implemented a new call admission control that is more efficient and faster than former methods, and our algorithm can be adapted directly to 3G mobile networks.
Authors: Nadia Chetta and Nadjib Badache
Abstract: Mobile location management is one of the most important problems in mobile network systems. Reducing the search cost increases the update cost and vice versa, so a trade-off between the search cost and the update cost must be found. This paper proposes a location management scheme based on quorums. Our method reduces the communication cost because information about a mobile's location is saved in an efficient manner in a subset of location registers that changes as the mobile moves. The proposed algorithm is evaluated in terms of the total cost of search and update, and is compared to the Ihn-Han Bae algorithm.
Authors: Saravut Yaipairoj, Fotios Harmantzis and Vinoth Gunasekaran
Abstract: As wireless services have become increasingly integrated and demand for them is mounting, Wi-Fi provides an appealing opportunity for GSM/GPRS operators to enhance their data capability. By integrating both networks, operators are able to provide 3G-like services. However, the two networks have different data rates and capacity, which makes pricing a challenging issue. In this paper we propose a pricing model for GPRS networks integrated with Wi-Fi, which applies to data users with high service demand ("heavy" users). The model identifies how the integration can play a significant role in increasing operators' overall revenue and potentially improving the performance of GPRS networks.
Authors: Suresh Venkatachalaiah and Richard J. Harris
Abstract: In this paper we propose to improve handoff performance by applying a mobility prediction technique that is optimised using evolutionary algorithms, namely a genetic algorithm and particle swarm optimisation. We describe a hybrid technique that uses the Grey model in combination with fuzzy logic and evolutionary algorithms. Handoff is the call handling mechanism invoked when a mobile node moves from one cell to another, and accuracy in predicting mobility holds the key to handoff performance. Our model uses the received signal strength from the base stations to help the mobile device during handoff. We also describe the optimisation criterion adopted in this paper and compare the self-tuning algorithm and the two evolutionary algorithms in terms of accuracy and convergence time. The improved accuracy of the approaches is shown by comparing results of simulations and experiments.
Authors: Eisuke Hanada and Takato Kudou
Abstract: Computer systems, often called hospital information systems (HIS), have been installed in most large Japanese hospitals for administering the basic medical information of patients, making entries on medical charts, and prescribing medication. In almost all cases, an HIS has a server/client structure, with the servers and client terminals connected by a LAN. For voice communication among hospital staff, landline telephones are often used, and fixed-line call systems (nurse call systems) are used for communication between patients and nurses. The potential demand for introducing wireless data/voice communication devices into hospitals is high because of the savings these technologies promise by improving patient service and labour efficiency. However, because of guidelines intended to reduce problems that might be caused by electromagnetic interference (EMI) with medical electric devices, and administrative fears about potential problems, the introduction of these systems had, until recently, been shelved in almost all cases. Because in recent years it has become possible to control the electromagnetic waves emitted by mobile communications apparatus and to protect against the possible occurrence of EMI, the number of hospitals introducing such wireless communications has grown. We report the case of a university hospital in which wireless data and voice communication have been safely and efficiently introduced.
Authors: Andrey Krendzel, Jarmo Harju and Sergey Lopatin
Abstract: Wireless network planning is a very complex process, the result of which influences the success of network operators. A poorly planned network cannot achieve the required Quality of Service; it also involves extra costs and fewer benefits for its network operator. Wireless network planning actually deals with a large number of different aspects. In this paper, Core Network (CN) planning aspects for third generation (3G) wireless systems are discussed. The problem of performance evaluation of 3G CN nodes for the Internet Protocol Multimedia Core Network Subsystem (IM CN subsystem) is considered in detail, taking into account the self-similarity caused by the high variability of burstiness of multiservice traffic in 3G wireless networks. The method for solving the problem is based on the use of the FBM/D/1/W queueing system (FBM: Fractional Brownian Motion).
Authors: Xabiel G. Pañeda, Roberto García, David Melendi, Manuel Vilas and Víctor G. García
Abstract: This paper presents a method for developing lab experiments for audio/video services, using both testbed and simulation models. Audio/video services on the Internet have special characteristics which make them very difficult to configure. Our research group has designed a methodology (Pañeda, 2004) for video-on-demand service analysis and configuration. In this methodology, the analysis phase is divided into two independent parts: one which works on data extracted from the behaviour of the real service, and another which works on predictions. The latter uses simulation models and testbeds to evaluate situations which may appear in the near future. In all cases, a method must be used to specify the experiments. This method must determine elements such as goal establishment, the experiment generation process, and the input data for the workload definition.
Authors: David Melendi, Manuel Vilas, Xabiel G. Pañeda, Roberto García and Víctor G. García
Abstract: This paper presents a test environment designed to improve Internet radio services through the evaluation of different service features. The environment comprises the generation of audio streams, the delivery of those streams through different communication networks, and the access of final users to those contents. A broad set of service architectures can be emulated, and several network configurations can be deployed using the available communication devices. It is also possible to simulate users' behaviour thanks to a workload generator that can be configured using statistical information obtained from real service access data. A case study is also presented, where a glimpse of the possibilities of the environment can be caught. This test environment will allow service administrators or research teams to predict what will happen in a real service if its configuration is modified, or if user behaviour changes. Furthermore, managers will be capable of knowing whether an Internet radio service can be improved or not.
Authors: Xiaoyun Wang and Kefei Chen
Abstract: In this paper we review the existing home network DRM schemes and recommend Philips' invention as the most suitable scheme for today's digital home. We then propose using a proxy signature to simplify the authentication of the certificate chain in Philips' scheme, and we give an efficient proxy signature scheme based on the characteristics of the certificate chain. A detailed security analysis shows that our scheme meets the six properties of proxy signatures. Finally, the paper points out that we should go on to improve the efficiency of home network DRM schemes and present a friendlier interface to digital home consumers.
Authors: Karol Wnukowicz and Wladyslaw Skarbek
Abstract: The concept of color temperature is formulated as a psychophysical feature referring to the perceptual feeling of perceived light. This perceptual feeling results from an innate characteristic of the human visual system. Color temperature also has a physics-based definition, and thus the color temperature of scenes and visual objects can be modelled mathematically as a one-parameter characteristic of perceived light. In the Amendment to the Visual Part of the MPEG-7 Standard the Color Temperature descriptor is proposed, which refers to the color temperature of image illumination. To extend the functionality of content-based image searching by color temperature, we propose the Dominant Color Temperatures descriptor, which allows a user to perform query-by-example and query-by-value searches. The extraction algorithm was originally adapted from that of the Dominant Color descriptor, which utilizes vector quantization in 3D color space. We propose a second, much faster algorithm based on scalar quantization in the one-dimensional color temperature space. In this paper we present a comparison of the two extraction algorithms. We also compare the querying results of the Dominant Color Temperatures descriptor and two conceptually related descriptors: Dominant Color and Color Temperature.
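As background on the physics-based definition (this is standard colorimetry, not the descriptor's extraction algorithm), the correlated color temperature of a chromaticity point can be approximated with McCamy's cubic formula:

```python
def mccamy_cct(x, y):
    """Approximate correlated color temperature (Kelvin) from CIE 1931
    chromaticity coordinates (x, y) using McCamy's cubic formula:
    n = (x - 0.3320) / (y - 0.1858),
    CCT = -449 n^3 + 3525 n^2 - 6823.3 n + 5520.33."""
    n = (x - 0.3320) / (y - 0.1858)
    return -449.0 * n**3 + 3525.0 * n**2 - 6823.3 * n + 5520.33

# The D65 white point (x = 0.3127, y = 0.3290) comes out near 6500 K.
cct_d65 = mccamy_cct(0.3127, 0.3290)
```

A scalar-quantization scheme like the one the abstract mentions would bin such one-dimensional CCT values rather than quantizing in a 3D color space.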
Title: SELF-ORGANIZING AND SELF-REPAIRING MASS MEMORIES FOR AUTOSOPHY MULTIMEDIA ARCHIVING SYSTEMS: Replacing the Data Processing Computer with Self-Learning Machines based on the Autosophy Information Theory
Authors: Klaus Holtz, Eric Holtz and Diana Kalienky
Abstract: The programmed data processing computer may soon be eclipsed by a next generation of brain-like learning machines based on the "Autosophy" information theory. This will require a paradigm shift in memory technology, from randomly addressable memories to self-organizing failure-proof memories. The computer is essentially a blind calculating machine that cannot find "meaning" as our own brains obviously can; all that can be achieved are mere programmed simulations. The problem can be traced to an outdated (Shannon) information theory, which treats all data as "quantities". The new Autosophy information theory, in contrast, treats all data as "addresses". The original research explains the functioning of self-assembling natural structures, such as chemical crystals or living trees. The same principles can also grow self-assembling data structures, which grow like data crystals or data trees in electronic memories without computing or programming. The resulting brain-like systems would require virtually unlimited-capacity failure-proof memories. Features include self-checking, self-repair, self-healing, memory cloning, both random and content addressability, very low power consumption, and small memory size for mobile robots. Replacing the programmed data processing "computer" with brain-like "autosophers" promises a true paradigm shift in technology, resulting in system architectures with true "learning" and eventually true Artificial Intelligence.
Title: AVATAR: A Flexible Approach to Improve the Personalized TV by Semantic Inference
Authors: Yolanda Blanco Fernández, Jose J. Pazos Arias, Alberto Gil Solla and Manuel Ramos Cabrer
Abstract: Both the TV recommender systems and the search engines developed on the Internet are intended to lighten the user's burden by automatically offering the required information, personalized according to their preferences or needs. In recent years, with the goal of improving these search engines, an important research line has developed in the context of the WWW, known as the Semantic Web. The Semantic Web describes resources by metadata and reasons about them to discover new knowledge. Taking advantage of the Semantic Web in the field of personalized TV, we propose an intelligent assistant named AVATAR, which uses semantic inference as a novel recommendation strategy. This approach overcomes an important limitation identified in the personalization strategies adopted in other systems: an excessive similarity between the programs known by the user and those suggested by the recommender. In this regard, our approach diversifies and personalizes the elaborated recommendations by inferring semantic associations of different natures between the user preferences and the suggested TV contents. This inference process requires a formal representation of both the knowledge of our application domain and the user preferences. To that end, we resort to an OWL ontology to identify resources and relations typical of the TV field, and to reason about them.
Title: Identifying User and Group Information from Collaborative Filtering Data Sets
Authors: Josephine Griffith, Colm O'Riordan and Humphrey Sorensen
Abstract: This paper considers the information that can be captured about users and groups from a collaborative filtering data set with a view to creating user models and group models. The approach outlined defines a number of user and group features which are represented using a graph model where links exist between users and items, between users and users, and between items and items. The main focus of this paper is to extract implicit information about users and groups that exists in a collaborative filtering data set.
Title: Incorporating Context into Recommender Systems Using Multidimensional Rating Estimation Methods
Authors: Gediminas Adomavicius and Alexander Tuzhilin
Abstract: Traditionally, recommendation technologies have focused on recommending items to users (or users to items) and typically do not consider additional contextual information, such as time or location. In this paper we discuss a multidimensional approach to recommender systems that supports additional dimensions capturing the context in which recommendations are made. One of the most important questions in recommender systems research is how to estimate unknown ratings, and in this paper we address this issue for the multidimensional recommendation space. We present a classification of multidimensional rating estimation methods, discuss how to extend traditional two-dimensional recommendation approaches to the multidimensional space, and identify research directions for the multidimensional rating estimation problem.
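One way to extend a two-dimensional recommender to the multidimensional space is a reduction-based approach: filter the ratings by the current context and hand the result to an ordinary user-item recommender. A toy sketch (the data, dimension names and tuple layout are hypothetical):

```python
def contextual_ratings(ratings, context):
    """Reduction-based sketch: keep only ratings whose context matches,
    turning the multidimensional problem into a standard 2D one.

    ratings: list of (user, item, context_dict, rating) tuples.
    context: dict of dimension -> value, e.g. {"time": "weekend"}.
    """
    return [
        (user, item, r)
        for user, item, ctx, r in ratings
        if all(ctx.get(dim) == val for dim, val in context.items())
    ]

data = [
    ("alice", "movie1", {"time": "weekend"}, 5),
    ("alice", "movie2", {"time": "weekday"}, 2),
    ("bob",   "movie1", {"time": "weekend"}, 4),
]
# Only weekend ratings survive; a 2D recommender is then trained on them.
weekend = contextual_ratings(data, {"time": "weekend"})
```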
Title: Personalizing the Search for Persons: A Recommender-based Approach
Authors: Tobias Keim, Jochen Malinowski, Gregor Heinrich, and Oliver Wendt
Abstract: Recommendation systems are widely used on the Internet to assist customers in finding the products or services that best fit their individual preferences. While current implementations successfully reduce information overload by generating personalized suggestions when searching for objects such as books or movies, recommendation systems have so far not appeared in another potential field of application: the personalized search for subjects such as business partners or employees. This is astonishing, as (1) the amount of CV, assessment and social network data available on the Internet is growing and (2) the complexity and scope of selecting the right partner is much higher than when buying a book. We argue that recommendation systems personalizing the search for people need to rest on two pillars: unary attributes on the one hand and relational attributes on the other. We present a framework meeting these requirements, together with an outline of a first prototypical implementation.
Title: Comparative evaluation of personalization algorithms for content recommendation
Authors: Carlos R. C. Alves and Lúcia V. L. Filgueiras
Abstract: Personalization techniques that combine user characteristics, user behavior and content organization can be used to help users find content on the web more effectively. The main contribution of this text is the multidisciplinary study that was conducted, integrating different areas of human knowledge in order to find the best way to direct content, including wide-ranging research on personalization concepts and applications. This study also presents the development of the Argo software, which comprises a web site, a component that captures and stores information about the user's navigation, and three different personalization algorithms. Using navigation data it is possible to generate a user profile, which is used to recommend content. Tests were conducted to check the efficiency of the personalization algorithms.
Title: SMS Communication and Announcement Classification in Managed Learning Environments
Authors: Ross Clement, Mark Baldwin, Clive Vassell and Nadia Amin
Abstract: A prototype system for sending SMS text messages to students telling them about announcements has been designed and partially implemented. Experiments have been performed to test whether automatic text classification can be used to decide which announcements posted by tutors are urgent, so that an SMS text message should be sent to inform students. The accuracy of a naive Bayes classifier is not sufficient in itself to decide this, but a flexible classifier combined with the ability of tutors to override its decisions shows promise. How the system would be used would depend on management policies concerning the effects of classification errors.
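For readers unfamiliar with the technique, a minimal naive Bayes text classifier of the kind mentioned above can be sketched as follows; the announcement texts and labels are invented for illustration:

```python
import math
from collections import Counter

def train_nb(docs):
    """Train a naive Bayes text classifier.

    docs: list of (list_of_words, label) pairs. Returns per-label priors,
    per-label word counts, and the vocabulary, for Laplace smoothing.
    """
    priors, counts, vocab = Counter(), {}, set()
    for words, label in docs:
        priors[label] += 1
        counts.setdefault(label, Counter()).update(words)
        vocab.update(words)
    return priors, counts, vocab

def classify_nb(model, words):
    """Return the label maximizing log P(label) + sum log P(word | label),
    with add-one (Laplace) smoothing of the word probabilities."""
    priors, counts, vocab = model
    total = sum(priors.values())
    best, best_score = None, float("-inf")
    for label in priors:
        n = sum(counts[label].values())
        score = math.log(priors[label] / total)
        for w in words:
            score += math.log((counts[label][w] + 1) / (n + len(vocab)))
        if score > best_score:
            best, best_score = label, score
    return best

# Hypothetical announcements labelled urgent / routine:
model = train_nb([
    (["exam", "moved", "tomorrow"], "urgent"),
    (["room", "change", "today"], "urgent"),
    (["lecture", "notes", "posted"], "routine"),
    (["reading", "list", "updated"], "routine"),
])
label = classify_nb(model, ["exam", "today"])
```

A deployed system would add tokenization and, as the abstract notes, a way for tutors to override the classifier's decisions.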
Title: User Profile Generation Based on a Memory Retrieval Theory
Authors: Fabio Gasparetti and Alessandro Micarelli
Abstract: Several statistical approaches to user profiling have been proposed in order to recognize users' information needs during their interaction with information sources. Human memory processes, in terms of learning and retrieval, are without doubt among the fundamental elements at work during this interaction, but so far only a few models for user profiling have been devised that explicitly consider these processes. For this reason, a new approach to user modeling is proposed and evaluated. Its grounding in the well-studied Search of Associative Memory (SAM) model provides a clear definition of the structure used to store information and of the processes of learning and retrieval. These assets are missing in other works based, for example, on simplified versions of semantic memory models and co-occurrence data.
Title: Koba4MS: Knowledge-based Recommenders for Marketing and Sales
Authors: Alexander Felfernig
Abstract: Due to the increasing size and complexity of products offered by online stores and electronic marketplaces, identifying solutions that fit the wishes and needs of a customer is a challenging task. Customers can differ greatly in their expertise and level of knowledge with respect to the product domain, which requires sales assistance systems allowing personalized dialogs, explanations and repair proposals in the case of inconsistent requirements. In this context, knowledge-based recommenders allow a flexible mapping of product, marketing and sales knowledge to the formal representation of a knowledge base. This paper presents the knowledge-based recommender environment Koba4MS, which assists customers and sales representatives in the identification of appropriate solutions. Based on application examples from the domain of financial services, basic Koba4MS technologies are presented which support the effective implementation of customer-oriented sales dialogs.
Title: Electronic Programming Guide Recommender for Viewing on a Portable Device
Authors: Matthew Y. Ma, Jinhong K. Guo, Jingbo Zhu and Guiran Chang
Abstract: With the emergence of DTV and the exponential growth of broadcasting networks, an overwhelming amount of information has become available in viewers' homes. It therefore becomes increasingly challenging for consumers to receive the right amount of information at the right time for their entertainment needs. We propose an electronic programming guide (EPG) recommender based on natural language processing techniques. In particular, the recommender has been implemented as a service on a home network that facilitates the browsing and recommendation of TV programs on a portable remote device, and such a system was found to be feasible. Preliminary experiments have shown a precision of 81%.
Title: Generalizing e-Bay.NET: An Approach to Recommendation Based on Probabilistic Computing
Authors: Luis M. de Campos, Juan M. Fernández-Luna and Juan F. Huete
Abstract: In this paper, we present theoretical developments that extend the existing e-Bay.NET recommendation system in order to improve its expressiveness. In particular, we make it more flexible and more general by enabling it to handle evidence items at a finer granularity, so that more accurate information may be obtained when user preferences are elicited. The model is based on the formalism of Bayesian networks, and this extension requires the design of new methods to estimate conditional probability distributions and also a new algorithm to compute the posterior probabilities of relevance.
Title: Software Engineering Aspects of an Intelligent User Interface based on Multi Criteria Decision Making
Authors: Katerina Kabassi and Maria Virvou
Abstract: Decision making theories have proved very successful for evaluating users' interests and preferences in Intelligent User Interfaces (IUIs). However, their application and incorporation in the reasoning of an IUI requires empirical studies throughout the software life-cycle that lead to requirements analysis and specification, important design decisions and evaluation of the resulting IUI. This paper presents a life-cycle model of how a decision making theory can be applied effectively in an IUI and gives detailed information about the experiments conducted. More specifically, the Simple Additive Weighting (SAW) model has been used as a theory test bed and has been applied in an IUI called MBIFM. MBIFM is a file manipulation system that works in a similar way to Windows/NT Explorer. However, the system constantly reasons about every user action and provides spontaneous advice when this is considered necessary.
Title: Trust Building in Recommender Agents
Authors: Li Chen and Pearl Pu
Abstract: Trust has long been regarded as an important factor influencing users' decision to buy a product in an online shop or to return to the shop for more product information. However, most notions of trust focus on the aspects of benevolence and integrity, and less on competence. Although benefits clearly exist for websites to employ competent recommender agents, the exact nature of these benefits to users' trusting intentions remains unclear. This paper presents preliminary results on these issues based on a trust model that we have developed for recommender agents. We describe a carefully constructed survey in an attempt to reveal the relationship between users' perception of the agent's trustworthiness based on its competence and consumer trusting intentions, and more importantly, the role of explanation-based recommendation interfaces and their media format in trust promotion.
Title: Data Quality and Sparsity Issues in Collaborative Filtering on Web Logs
Authors: Miha Grcar, Dunja Mladenic and Marko Grobelnik
Abstract: In this paper, we present our experience in applying collaborative filtering to real-life corporate data in the light of data quality and sparsity. The quality of collaborative filtering recommendations is highly dependent on the quality of the data used to identify users' preferences. To understand the influence that highly sparse server-side collected data has on the accuracy of collaborative filtering, we ran a series of experiments using, on the one hand, publicly available datasets and, on the other, a real-life corporate dataset that does not fit the profile of ideal data for collaborative filtering. We have also experimentally compared two standard distance measures (Pearson correlation and Cosine similarity) used by the k-Nearest Neighbor classifier, showing that depending on the dataset one outperforms the other, but no consistent difference can be claimed.
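The two similarity measures compared in this paper can each be stated in a few lines; the sketch below computes both between two users' rating vectors, with Pearson correlation expressed as cosine similarity of mean-centred vectors (the rating data is illustrative, not from the paper's experiments):

```python
import math

def cosine_sim(u: list, v: list) -> float:
    # Cosine similarity between two rating vectors over co-rated items.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def pearson_sim(u: list, v: list) -> float:
    # Pearson correlation = cosine similarity of mean-centred vectors.
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    return cosine_sim([a - mu for a in u], [b - mv for b in v])

alice = [5, 3, 4, 4]   # ratings on four co-rated items
bob   = [3, 1, 2, 3]
print(round(cosine_sim(alice, bob), 3))   # 0.975
print(round(pearson_sim(alice, bob), 3))  # 0.853
```

Mean-centring makes Pearson insensitive to per-user rating bias (Bob rates everything lower than Alice), which is one reason the two measures can rank neighbours differently on sparse data.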
Title: Using Agent Technology to Overcome Project Failure in Distributed Organizations
Authors: Holly Parsons-Hann and Kecheng Liu
Abstract: As organisations become more global and interest groups more widely distributed, reaching a consensus among stakeholders when conducting a globe-spanning project becomes increasingly hard to achieve. Many software projects have failed because their requirements were poorly negotiated among stakeholders, and this problem must be solved if project failure is to decrease. Although many stakeholder negotiation methods have been suggested, validated and employed in projects across the globe, project success rates are still very low, suggesting that there is still work to be done in the distributed organisational domain to increase the probability of project success. This paper highlights the current project management problems in distributed organisations and suggests a new agent-based method of solving them.
Title: A Normative Approach to Capture and Analyze Quality of Service Requirements of Distributed Multimedia Systems
Authors: Mangtang Chan and Kecheng Liu
Abstract: With the availability of powerful hardware, it is now possible to manipulate multimedia data on ordinary desktop computers. Further enhanced by the World Wide Web, the Internet serves as a platform for truly distributed multimedia systems (DMS). Non-functional requirements play an important role in the analysis and design of DMS, and one of them is quality of service (QoS). This paper proposes a methodology that integrates QoS modeling under a semiotic framework, which can then be used to analyse both the functional and non-functional requirements of DMS. The semiotic framework is agent-oriented. QoS characteristics and requirements for agents are first defined; a normative approach is then used for static model checking, admission testing, and run-time QoS monitoring and policing. The methodology is demonstrated with examples of some common DMS, and related work in QoS modeling and specification is also reviewed.
Title: An Empirical Study into Governance Requirements for Autonomic E-Health Clinical Care Path Systems
Authors: Philip Miseldine and Azzelarabe Taleb-Bendiab
Abstract: Information technology has been widely recognized as a key building block of the Government's modernization agenda for the NHS, and a vital component in assisting continuous improvement in clinical practice, patient safety and standard of care. Medicine is far from a static field, and this is especially true for research into the prevention and treatment of breast cancer, which is thankfully ever changing and advancing towards more comprehensive care and therapy for the condition. With such a fluid and fluctuating set of requirements, software that aids in the delivery and prognosis of therapy faces real challenges in its design so that it can adapt successfully, as and when required, to new requirements in the field. This paper discusses the challenges that designers of such software must solve, and highlights the issues facing current state-of-the-art solutions in the domain of breast cancer prognosis. The paper then introduces the notion of system self-governance to produce a rigid yet highly dynamic system, which is evaluated through a case study involving several leading UK cancer hospitals. The paper concludes with an analysis of how the principles introduced here can be applied to the wider domain of eHealthcare.
Title: A Practical Approach to Goal Modelling for Time-Constrained Projects
Authors: Kenneth Boness, Marc Bartsch, Stephen Cook and Rachel Harrison
Abstract: Goal modelling is a well-known rigorous method for analysing problem rationale and developing requirements. Under the pressures typical of time-constrained projects, however, its benefits are not accessible. This is because of the effort and time needed to create the goal graph, and because reading the results can be difficult owing to the effects of crosscutting concerns. Here we introduce an adaptation of KAOS to meet the needs of rapid turnaround and clarity. The main aim is to help the stakeholders gain an insight into the larger issues that might be overlooked if they make a premature start on implementation. The method emphasises the use of obstacles, accepts under-refined goals and offers new methods for managing crosscutting concerns and strategic decision making. It is expected to be of value to agile as well as traditional processes.
Title: An Isomorphic Architecture for Enterprise Information Systems Integration
Authors: Mingxin Gan, Lily Sun and Botang Han
Abstract: Enterprise information systems often face difficulties in linking various components and external systems. Interoperability among collaborating participants is hard to tackle in both the business and IT domains. These difficulties call for effective architectural solutions that coordinate powerful technologies with business applications to enable seamless inter-organizational integration. The inter-organizational integration of information systems needs to be considered from an architectural point of view on issues around business, organization and information technology. In this paper, an isomorphic architecture for systems integration (IASI) is proposed with a focus on supply chain management systems. This architecture allows business agility and the IT infrastructure to be integrated by consolidating processes between these two domains, and facilitates business changes with simultaneous evolution of the IT infrastructure.
Title: Modelling the role of e-learning in developing collaborative skills
Authors: David King, Sharon Cox and Richard Midgley
Abstract: This paper presents a model to assist in planning the use of e-learning to support the teaching and learning of the transferable skills needed for collaboration. The model is derived from the practical experience of tutors teaching technical undergraduates at a UK university. The findings suggest a three-stage model taking a student through participation and collaborative engagement with the teaching material, learning by reflection upon its content in groups, and finally an innovative method of supporting their comprehension of the skills being taught.
Title: Towards Purposeful Collaboration in E-Business: A Case of Industry and Academia
Authors: John Perkins and Sharon Cox
Abstract: Information and Communication Technology (ICT) helps to remove barriers and improve mechanisms that support collaboration in e-business. This paper proposes a model of purposeful collaboration analysis that helps identify the extent to which ICT supports collaboration. It is argued that the ICT components of e-business are necessary to support collaboration but are in themselves often insufficient as enablers of collaboration. The model encourages the examination of issues left unsupported by ICT and allows more focused consideration of further initiatives that might be applied to improve the purposefulness of the collaboration task. The case of a retail manufacturer in a long-term e-business collaborative exercise with an academic institution is used to illustrate the model. Concepts from the social practice literature are identified that might contribute to a hybrid approach to addressing the gap between generic technology and situated business applications.