5 results for service performance

in Helda - Digital Repository of the University of Helsinki


Relevance:

30.00%

Publisher:

Abstract:

The mobile phone has, as a device, taken the world by storm in the past decade: from only 136 million phones globally in 1996, it is now estimated that by the end of 2008 roughly half of the world's population will own a mobile phone. Over the years, the capabilities of the phones as well as the networks have increased tremendously, reaching the point where the devices are better described as miniature computers than as simple mobile phones. The mobile industry is currently undertaking several initiatives to develop new generations of mobile network technologies; technologies that to a large extent focus on offering ever-increasing data rates. This thesis seeks to answer the question of whether the future mobile networks in development and the future mobile services are in sync: taking a forward-looking timeframe of five to eight years into the future, will there be services that will need the high-performance new networks being planned? The question is seen to be especially pertinent in light of the slower-than-expected takeoff of 3G data services. Current and future mobile services are analyzed from two viewpoints: first, by looking at the gradual, evolutionary development of the services, and second, by seeking to identify potentially revolutionary new mobile services. With information on both current and future mobile networks as well as services, a mapping of network capabilities to service requirements is performed to identify which services will work in which networks. Based on the analysis, it is far from certain whether the new mobile networks, especially those planned for deployment after HSPA, will be needed as soon as their current roadmaps suggest. The true service-based demand for the "beyond HSPA" technologies may lie many years in the future, or may indeed never materialize, given the increasing deployment of local-area wireless broadband technologies.
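The capability-to-requirements mapping described in this abstract can be pictured with a small sketch. The following Python fragment is purely illustrative: the network generations, data rates, and service names are assumptions chosen for the example, not figures from the thesis.

```python
# Hypothetical sketch of a network-capability vs. service-requirement mapping.
# All data rates and service names are illustrative assumptions, not figures
# taken from the thesis.

NETWORKS_MBPS = {          # assumed peak downlink capability per network generation
    "GPRS": 0.1,
    "3G (WCDMA)": 0.384,
    "HSPA": 14.0,
    "beyond HSPA": 100.0,
}

SERVICES_MBPS = {          # rough bandwidth a service needs to work acceptably
    "mobile email": 0.05,
    "web browsing": 0.5,
    "music streaming": 0.3,
    "mobile TV": 2.0,
    "HD video on demand": 8.0,
}

def supported_services(network_mbps, services=SERVICES_MBPS):
    """Return the services whose bandwidth requirement fits within the network capability."""
    return [name for name, need in services.items() if need <= network_mbps]

if __name__ == "__main__":
    for network, capability in NETWORKS_MBPS.items():
        ok = supported_services(capability)
        print(f"{network:>12}: supports {len(ok)}/{len(SERVICES_MBPS)} services -> {ok}")
```

With numbers like these, most of the example services already fit within HSPA, which mirrors the abstract's point that service-based demand for "beyond HSPA" capacity is uncertain.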

Relevance:

30.00%

Publisher:

Abstract:

In recent years, XML has been accepted as the message format for several applications. Prominent examples include SOAP for Web services, XMPP for instant messaging, and RSS and Atom for content syndication. This usage of XML is understandable, as the format itself is a well-accepted standard for structured data and has excellent support in many popular programming languages, so inventing an application-specific format no longer seems worth the effort. Simultaneously with XML's rise to prominence, there has been an upsurge in the number and capabilities of various mobile devices. These devices are connected through various wireless technologies to larger networks, and a goal of current research is to integrate them seamlessly into these networks. These two developments seem to be at odds with each other: XML, as a fully text-based format, takes up more processing power and network bandwidth than binary formats would, whereas the battery-powered nature of mobile devices dictates that energy, both in processing and in transmitting, be used efficiently. This thesis presents the work we have performed to reconcile these two worlds. We present a message transfer service that we have developed to address what we have identified as the three key issues: XML processing at the application level, a more efficient XML serialization format, and the protocol used to transfer messages. Our presentation includes both a high-level architectural view of the whole message transfer service and detailed descriptions of the three new components. These components consist of an API, and an associated data model, for XML processing designed for messaging applications; a binary serialization format for the data model of the API; and a message transfer protocol providing two-way messaging capability with support for client mobility. We also present relevant performance measurements for the service and its components. As a result of this work, we do not consider XML to be inherently incompatible with mobile devices. As the fixed networking world moves toward XML for interoperable data representation, so too should the wireless world, to provide a better-integrated networking infrastructure. However, the problems raised by XML adoption touch all of the higher layers of application programming, so rather than concentrating solely on the serialization format, we conclude that improvements need to be made in an integrated fashion across all of these layers.
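The bandwidth argument in this abstract can be made concrete with a toy comparison. The sketch below encodes the same two fields once as plain-text XML and once as a naive length-prefixed binary record; it is not the binary serialization format developed in the thesis, only an illustration of why a binary encoding saves bytes on the wire.

```python
# Illustrative size comparison: plain-text XML vs. a naive length-prefixed
# binary encoding of the same message fields. This is NOT the thesis'
# serialization format; field tags and message contents are invented.

import struct
import xml.etree.ElementTree as ET

def xml_message(sender: str, body: str) -> bytes:
    msg = ET.Element("message")
    ET.SubElement(msg, "sender").text = sender
    ET.SubElement(msg, "body").text = body
    return ET.tostring(msg, encoding="utf-8")

def binary_message(sender: str, body: str) -> bytes:
    # 1-byte field tag, 2-byte big-endian length, then the UTF-8 payload.
    out = bytearray()
    for tag, value in ((1, sender), (2, body)):
        data = value.encode("utf-8")
        out += struct.pack(">BH", tag, len(data)) + data
    return bytes(out)

if __name__ == "__main__":
    x = xml_message("alice@example.org", "Meet at 10?")
    b = binary_message("alice@example.org", "Meet at 10?")
    print(f"XML:    {len(x)} bytes")
    print(f"binary: {len(b)} bytes  ({100 * len(b) / len(x):.0f}% of the XML size)")
```

The saving comes entirely from dropping element names and markup, which is the same intuition behind replacing textual XML with a schema-aware or tokenized binary format on battery-powered devices.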

Relevance:

30.00%

Publisher:

Abstract:

With the proliferation of wireless and mobile devices equipped with multiple radio interfaces to connect to the Internet, vertical handoff between different wireless access technologies will enable users to get the best connectivity and service quality over the lifetime of a TCP connection. A vertical handoff may introduce an abrupt, significant change in the access link characteristics, and as a result the end-to-end path characteristics, such as the bandwidth and the round-trip time (RTT) of a TCP connection, may change considerably. TCP may take several RTTs to adapt to these changes in path characteristics, and during this interval there may be packet losses and/or inefficient utilization of the available bandwidth. In this thesis we study the behaviour and performance of TCP in the presence of a vertical handoff. We identify the different handoff scenarios that adversely affect TCP performance. We propose several enhancements to the TCP sender algorithm, specific to the different handoff scenarios, to adapt TCP better to a vertical handoff. Our algorithms are conservative in nature and make use of cross-layer information obtained from the lower layers regarding the characteristics of the access links involved in a handoff. We evaluate the proposed algorithms by extensive simulation of the various handoff scenarios involving access links with a wide range of bandwidths and delays. We show that the proposed algorithms are effective in improving TCP behaviour in various handoff scenarios and do not adversely affect the performance of TCP in the absence of cross-layer information.
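The general idea of cross-layer-assisted adaptation can be sketched as follows. This is a simplified illustration, not the sender algorithms proposed in the thesis: it merely resets the congestion-control state from the bandwidth and RTT of the new access link, which are assumed to arrive as a lower-layer handoff notification; the MSS value and the example link parameters are also assumptions.

```python
# Simplified sketch of cross-layer assisted TCP adaptation after a vertical
# handoff. Not the thesis' algorithm: it only re-derives congestion-control
# state from the new link's advertised bandwidth and RTT.

MSS = 1460  # assumed segment size in bytes

def on_vertical_handoff(state, new_bandwidth_bps, new_rtt_s):
    """Re-initialise cwnd/ssthresh from the new link's bandwidth-delay product.

    `state` holds 'cwnd' and 'ssthresh' in segments and 'rto' in seconds; the
    bandwidth and RTT are assumed to come from a cross-layer notification.
    """
    bdp_segments = max(1, int(new_bandwidth_bps * new_rtt_s / 8 / MSS))
    # Conservative choice: restart from a small window, but aim slow start
    # at the new path's estimated capacity instead of the old one.
    state["ssthresh"] = bdp_segments
    state["cwnd"] = min(state["cwnd"], 2)   # avoid bursting into the new link
    state["rto"] = max(1.0, 2 * new_rtt_s)  # refresh the retransmission timer
    return state

if __name__ == "__main__":
    tcp = {"cwnd": 40, "ssthresh": 64, "rto": 0.6}
    # Example: handoff from a WLAN to a 3G link (384 kbit/s, 200 ms RTT).
    print(on_vertical_handoff(tcp, new_bandwidth_bps=384_000, new_rtt_s=0.2))
```

A downward handoff (fast link to slow link) is the case where such a reset helps most, since continuing with the old, large congestion window would otherwise overload the new, slower access link.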

Relevance:

30.00%

Publisher:

Abstract:

A randomised and population-based screening design with new technologies has been applied to the organised cervical cancer screening programme in Finland. In this experiment the women invited to routine five-yearly screening are individually randomised to be screened with automation-assisted cytology, a human papillomavirus (HPV) test, or conventional cytology. By using the randomised design, the ultimate aim is to assess and compare the long-term outcomes of the different screening regimens. The primary aim of the current study was to evaluate, based on the material collected during the implementation phase of the Finnish randomised screening experiment, the cross-sectional performance and validity of automation-assisted cytology (Papnet system) and primary HPV DNA testing (Hybrid Capture II assay for 13 oncogenic HPV types) within service screening, in comparison to conventional cytology. The parameters of interest were test positivity rate, histological detection rate, relative sensitivity, relative specificity and positive predictive value. Also, the effect of variation in performance by screening laboratory on age-adjusted cervical cancer incidence was assessed. Based on the cross-sectional results, almost no differences were observed in the performance of conventional and automation-assisted screening. Instead, primary HPV screening found 58% (95% confidence interval 19-109%) more cervical lesions than conventional screening. However, this was mainly due to overrepresentation of mild- and moderate-grade lesions and is thus likely to result in overtreatment, since many of these lesions would never progress to invasive cancer. Primary screening with an HPV DNA test alone caused a substantial loss in specificity in comparison to cytological screening. With the use of a cytology triage test, the specificity of HPV screening improved to close to the level of conventional cytology. The specificity of primary HPV screening was also increased by raising the test positivity cutoff from the level recommended for clinical use, but the increase was more modest than that gained with cytology triage. The performance of the cervical cancer screening programme varied widely between the screening laboratories, but the variation in overall programme effectiveness between the respective populations was more marginal from the very beginning of the organised screening activity. Thus, conclusive interpretations of the quality or success of screening should not be based on performance parameters only. In the evaluation of cervical cancer screening, the outcome should be selected as closely as possible to the true measure of programme effectiveness, which is the number of invasive cervical cancers and subsequent deaths prevented in the target population. The evaluation of the benefits and adverse effects of each newly suggested screening technology should be performed before the technology becomes an accepted routine in the existing screening programme. At best, the evaluation is performed in a randomised fashion, within the population and screening programme in question, which makes the results directly applicable to routine use.
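The cross-sectional performance parameters named in this abstract can be illustrated with a small worked example. All counts below are invented for illustration (chosen so the detection-rate ratio roughly matches the 58% excess reported above); they are not data from the Finnish screening experiment, and the specificity calculation uses the usual cross-sectional simplification of treating screen-negatives as disease-free.

```python
# Hypothetical worked example of cross-sectional screening parameters:
# test positivity rate, detection rate, PPV, specificity, and a relative
# detection rate between two screening arms. Counts are invented.

def screening_parameters(n_screened, n_test_positive, n_lesions_detected):
    positivity_rate = n_test_positive / n_screened
    detection_rate = n_lesions_detected / n_screened
    ppv = n_lesions_detected / n_test_positive
    # Approximation: all screen-negatives are treated as disease-free, since
    # their true disease status is unknown in cross-sectional screening data.
    true_negatives = n_screened - n_test_positive
    without_disease = n_screened - n_lesions_detected
    specificity = true_negatives / without_disease
    return {
        "test positivity rate": positivity_rate,
        "detection rate": detection_rate,
        "PPV": ppv,
        "specificity": specificity,
    }

if __name__ == "__main__":
    conventional = screening_parameters(10_000, 450, 60)
    hpv_primary = screening_parameters(10_000, 800, 95)
    for name, arm in (("conventional", conventional), ("HPV primary", hpv_primary)):
        print(name, {k: round(v, 4) for k, v in arm.items()})
    relative_detection = hpv_primary["detection rate"] / conventional["detection rate"]
    print(f"relative detection rate (HPV vs conventional): {relative_detection:.2f}")
```

With these invented counts, the HPV arm detects more lesions but at a lower specificity and PPV, which is the qualitative trade-off the abstract describes.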

Relevance:

30.00%

Publisher:

Abstract:

The management and coordination of business-process collaboration is changing because of globalization, specialization, and innovation. Service-oriented computing (SOC) is a means towards business-process automation, and recently many industry standards have emerged to become part of the service-oriented architecture (SOA) stack. In a globalized world, organizations face new challenges in setting up and carrying out collaborations in semi-automating ecosystems for business services. To be efficient and effective, many companies express their services electronically in what we term business-process as a service (BPaaS). Companies then source BPaaS on the fly from third parties if they are not able to create all service value in-house, for reasons such as a lack of resources, a lack of know-how, or cost- and time-reduction needs. Thus, a need emerges for BPaaS-HUBs that not only store service offers and requests together with information about their issuing organizations and assigned owners, but that also allow an evaluation of trust and reputation in an anonymized electronic service marketplace. In this paper, we analyze the requirements, design architecture, and system behavior of such a BPaaS-HUB to enable a fast setup and enactment of business-process collaboration. Moving into a cloud-computing setting, the results of this paper allow system designers to quickly evaluate which services they need for instantiating the BPaaS-HUB architecture. Furthermore, the results show the protocol of a backbone service bus that allows communication between the services implementing the BPaaS-HUB. Finally, the paper analyzes where an instantiation must assign additional computing resources to avoid performance bottlenecks.
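The core idea of a BPaaS-HUB, storing service offers and requests with their issuing organizations and owners and supporting a reputation evaluation, can be sketched minimally as below. The class and field names are illustrative assumptions, not the paper's architecture or API, and the reputation score is a deliberately naive average.

```python
# Minimal, hypothetical sketch of a BPaaS-HUB registry: it stores service
# offers and requests together with the issuing organization and an assigned
# owner, and keeps a naive reputation score per organization.

from dataclasses import dataclass, field
from statistics import mean

@dataclass
class ServiceEntry:
    kind: str            # "offer" or "request"
    description: str
    organization: str
    owner: str

@dataclass
class BPaaSHub:
    entries: list = field(default_factory=list)
    ratings: dict = field(default_factory=dict)   # organization -> list of scores

    def publish(self, entry: ServiceEntry) -> None:
        self.entries.append(entry)

    def match(self, keyword: str) -> list:
        """Return service offers whose description mentions the keyword."""
        return [e for e in self.entries
                if e.kind == "offer" and keyword.lower() in e.description.lower()]

    def rate(self, organization: str, score: float) -> None:
        self.ratings.setdefault(organization, []).append(score)

    def reputation(self, organization: str) -> float:
        return mean(self.ratings.get(organization, [0.0]))

if __name__ == "__main__":
    hub = BPaaSHub()
    hub.publish(ServiceEntry("offer", "invoice processing as a service", "AcmeBPO", "alice"))
    hub.publish(ServiceEntry("request", "payroll processing", "Globex", "bob"))
    hub.rate("AcmeBPO", 4.5)
    print([e.description for e in hub.match("invoice")], hub.reputation("AcmeBPO"))
```

In a real deployment the matching, trust, and messaging concerns would be separate services communicating over the backbone service bus mentioned in the abstract; the single class here only collapses them for readability.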