18 results for THIRD GENERATION SYSTEMS
in Aston University Research Archive
Abstract:
Third Generation cellular communication systems are expected to support a mixed cell architecture in which picocells, microcells and macrocells are used to achieve full coverage and increase the spectral capacity. Supporting larger numbers of mobile terminals and using smaller cells will increase the number of handovers, and consequently the time delays required to perform them. Longer delays will cause call interruptions and forced terminations, particularly for time-sensitive applications such as real-time multimedia and data services. Currently in the Global System for Mobile communications (GSM), the handover procedure is initiated and performed by the fixed part of the Public Land Mobile Network (PLMN). The mobile terminal is only capable of detecting candidate base stations suitable for the handover; it is the role of the network to interrogate a candidate base station for a free channel. Handover signalling is exchanged via the fixed network, and the time delay required to perform the handover is greatly affected by the levels of teletraffic handled by the network. In this thesis, a new handover strategy is developed to reduce the total time delay for handovers in a microcellular system. The handover signalling is diverted from the fixed network to the air interface to prevent extra delays due to teletraffic congestion, and to allow the mobile terminal to exchange signalling directly with the candidate base station. The new strategy utilises the Packet Reservation Multiple Access (PRMA) technique as a mechanism to transfer control of the handover procedure from the fixed network to the mobile terminal. Simulation results are presented to show a dramatic reduction in handover delay compared with the delays obtained using fixed channel allocation and dynamic channel allocation schemes.
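The essence of the PRMA mechanism described above is slotted contention followed by reservation: a terminal with signalling to send transmits in a free uplink slot with a permission probability and, if it is the only transmitter in that slot, keeps the slot in subsequent frames. The sketch below is a minimal, illustrative Python model of that contention/reservation step only; the slot counts, permission probability and delay metric are assumptions, not the simulation parameters used in the thesis.

```python
import random

def frames_until_reservation(n_slots=20, n_contenders=5, p=0.3, max_frames=1000):
    """Frames needed for terminal 0 (the handover terminal) to reserve an uplink slot."""
    waiting = set(range(n_contenders))   # terminals still contending for a reservation
    reserved = {}                        # slot -> terminal currently holding it
    for frame in range(1, max_frames + 1):
        for slot in range(n_slots):
            if slot in reserved or not waiting:
                continue
            tx = [t for t in waiting if random.random() < p]   # who transmits in this slot
            if len(tx) == 1:             # exactly one transmitter: no collision, slot reserved
                reserved[slot] = tx[0]
                waiting.discard(tx[0])
                if tx[0] == 0:
                    return frame
    return max_frames

delays = [frames_until_reservation() for _ in range(1000)]
print("mean frames for the handover terminal to reserve a slot:",
      sum(delays) / len(delays))
```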
Abstract:
Fibre-optic communications systems have traditionally carried data using binary (on-off) encoding of the light amplitude. However, next-generation systems will use both the amplitude and phase of the optical carrier to achieve higher spectral efficiencies and thus higher overall data capacities(1,2). Although this approach requires highly complex transmitters and receivers, the increased capacity and many further practical benefits that accrue from a full knowledge of the amplitude and phase of the optical field(3) more than outweigh this additional hardware complexity and can greatly simplify optical network design. However, use of the complex optical field gives rise to a new dominant limitation to system performance: nonlinear phase noise(4,5). Developing a device to remove this noise is therefore of great technical importance. Here, we report the development of the first practical ('black-box') all-optical regenerator capable of removing both phase and amplitude noise from binary phase-encoded optical communications signals.
Abstract:
The advent of personal communication systems within the last decade has depended upon the utilization of advanced digital schemes for source and channel coding and for modulation. The inherent digital nature of the communications processing has allowed the convenient incorporation of cryptographic techniques to implement security in these communications systems. There are various security requirements, of both the service provider and the mobile subscriber, which may be provided for in a personal communications system. Such security provisions include the privacy of user data, the authentication of communicating parties, the provision for data integrity, and the provision for both location confidentiality and party anonymity. This thesis investigates the private-key and public-key cryptographic techniques pertinent to the security requirements of personal communication systems, and presents an analysis of the security provisions of Second-Generation personal communication systems. Particular attention has been paid to the properties of the cryptographic protocols employed in current Second-Generation systems. It has been found that certain security-related protocols implemented in the Second-Generation systems have specific weaknesses. A theoretical evaluation of these protocols has been performed using formal analysis techniques, and certain assumptions made during the development of the systems are shown to contribute to the security weaknesses. Various attack scenarios which exploit these protocol weaknesses are presented. The Fiat-Shamir zero-knowledge cryptosystem is presented as an example of how asymmetric algorithm cryptography may be employed as part of an improved security solution. Various modifications to this cryptosystem have been evaluated and their critical parameters are shown to be capable of being optimized to suit a particular application. The implementation of such a system using current smart card technology has been evaluated.
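For context on the Fiat-Shamir scheme mentioned above, the sketch below shows one round of the basic zero-knowledge identification protocol: the prover holds a secret s with public key v = s^2 mod n, commits to x = r^2 mod n, answers a one-bit challenge e with y = r·s^e mod n, and the verifier checks y^2 ≡ x·v^e (mod n). The tiny modulus is purely for illustration; a real deployment would use a large RSA-type modulus and many repeated rounds.

```python
import random

n = 3233                      # toy modulus n = 61 * 53, for illustration only
s = 123                       # prover's secret, coprime to n
v = (s * s) % n               # public key v = s^2 mod n

def round_ok() -> bool:
    r = random.randrange(1, n)                    # prover's random blinding value
    x = (r * r) % n                               # commitment sent to the verifier
    e = random.randint(0, 1)                      # verifier's one-bit challenge
    y = (r * pow(s, e, n)) % n                    # prover's response
    return (y * y) % n == (x * pow(v, e, n)) % n  # check y^2 = x * v^e (mod n)

print(all(round_ok() for _ in range(20)))         # an honest prover passes every round
```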
Abstract:
The use of antibiotics was investigated in twelve acute hospitals in England. Data were collected electronically and by questionnaire for the financial years 2001/2, 2002/3 and 2003/4. Hospitals were selected on the basis of their Medicines Management Self-Assessment Scores (MMAS) and included a cohort of three hospitals with integrated electronic prescribing systems. The total sample size was 6.65% of English NHS activity for 2001/2 based on Finished Consultant Episode (FCE) numbers. Data collected included all antibiotics dispensed (ATC category J01), hospital activity (FCEs and bed-days), Medicines Management Self-Assessment Scores, Antibiotic Medicines Management Scores (AMS), the Primary Care Trust (PCT) of origin of referral populations, PCT antibiotic prescribing rates, and the Index of Multiple Deprivation for each PCT. The DDD/FCE (Defined Daily Doses per FCE) was found to correlate with the DDD/100 bed-days (r = 0.74 p
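To make the two consumption measures concrete, the sketch below computes antibiotic use per FCE and per 100 bed-days for a handful of hospitals and the Pearson correlation between them. All figures are invented for illustration and are not data from the study.

```python
import numpy as np

ddd_dispensed = np.array([52000, 48000, 61000, 39000, 45500])   # total DDDs (ATC J01) per hospital
fces          = np.array([60000, 55000, 70000, 42000, 50000])   # finished consultant episodes
bed_days      = np.array([210000, 190000, 250000, 150000, 175000])

ddd_per_fce    = ddd_dispensed / fces             # DDD/FCE
ddd_per_100_bd = 100 * ddd_dispensed / bed_days   # DDD/100 bed-days

r = np.corrcoef(ddd_per_fce, ddd_per_100_bd)[0, 1]   # Pearson correlation between the two measures
print("DDD/FCE:         ", ddd_per_fce.round(2))
print("DDD/100 bed-days:", ddd_per_100_bd.round(2))
print(f"Pearson r = {r:.2f}")
```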
Abstract:
Purpose: To describe the methodology, sampling strategy and preliminary results for the Aston Eye Study (AES), a cross-sectional study to determine the prevalence of refractive error and its associated ocular biometry in a large multi-racial sample of school children from the metropolitan area of Birmingham, England. Methods: A target sample of 1700 children aged 6–7 years and 1200 aged 12–13 years is being recruited from Birmingham schools selected randomly with stratification by area deprivation index (a measure of socio-economic status). Schools with pupils predominantly (>70%) from a single race are excluded. Sample size calculations account for the likely participation rate and the clustering of individuals within schools. Procedures involve standardised protocols to allow for comparison with international population-based data. Visual acuity, non-contact ocular biometry (axial length, corneal radius of curvature and anterior chamber depth) and cycloplegic autorefraction are measured in both eyes. Distance and near oculomotor balance, height and weight are also assessed. Questionnaires for parents and older children will allow the influence of environmental factors on refractive error to be examined. Results: Recruitment and data collection are ongoing (currently N = 655). Preliminary cross-sectional data on 213 South Asian, 44 black African Caribbean and 70 white European children aged 6–7 years and 114 South Asian, 40 black African Caribbean and 115 white European children aged 12–13 years found myopia prevalences of 9.4% and 29.4% for the two age groups respectively. A more negative mean spherical equivalent refraction (SER) was observed in older children (-0.21 D vs +0.87 D). Ethnic differences in myopia prevalence are emerging, with South Asian children having higher levels than white European children (36.8% vs 18.6% for the older children). Axial length, corneal radius of curvature and anterior chamber depth were normally distributed, while SER was leptokurtic (p < 0.001) with a slight negative skew. Conclusions: The AES will allow ethnic differences in the ocular characteristics of children from a large metropolitan area of the UK to be examined. The findings to date indicate the emergence of higher levels of myopia by early adolescence in second and third generation British South Asians, compared to white European children. The continuation of the AES will allow the early determinants of these ethnic differences to be studied.
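A brief note on how the refractive summary statistics quoted above are typically derived: spherical equivalent refraction (SER) is the sphere power plus half the cylinder power, and myopia prevalence is the proportion of children whose SER falls at or below a chosen cut-off. The sketch below assumes the commonly used -0.50 D cut-off, which is an assumption here since the abstract does not state the AES criterion; the refraction values are invented.

```python
def spherical_equivalent(sphere_d: float, cylinder_d: float) -> float:
    """SER in dioptres: sphere + cylinder / 2."""
    return sphere_d + cylinder_d / 2.0

# (sphere, cylinder) pairs in dioptres, invented for illustration
refractions = [(+1.00, -0.50), (-1.25, -0.75), (+0.50, 0.00), (-2.00, -1.00)]
sers = [spherical_equivalent(s, c) for s, c in refractions]
myopes = [ser for ser in sers if ser <= -0.50]    # assumed myopia cut-off of -0.50 D

print("SER values (D):", sers)
print(f"myopia prevalence: {100 * len(myopes) / len(sers):.1f}%")
```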
Abstract:
The advent of the Integrated Services Digital Network (ISDN) led to the standardisation of the first video codecs for interpersonal video communications, followed closely by the development of standards for the compression, storage and distribution of digital video in the PC environment, mainly targeted at CD-ROM storage. At the same time, second-generation digital wireless networks, and the third-generation networks being developed, have enough bandwidth to support digital video services. The radio propagation medium is a difficult environment in which to deploy low bit error rate, real-time services such as video. The video coding standards designed for ISDN and storage applications were targeted at bit error rates orders of magnitude lower than those typically experienced on wireless networks. This thesis is concerned with the transmission of digital, compressed video over wireless networks. It investigates the behaviour of motion compensated, hybrid interframe DPCM/DCT video coding algorithms, which form the basis of current coding algorithms, in the presence of the high bit error rates commonly found on digital wireless networks. A group of video codecs, based on the ITU-T H.261 standard, are developed which are robust to the burst errors experienced on radio channels. The radio link is simulated at a low level, to generate typical error files that closely model real-world situations, in a Rayleigh fading environment perturbed by co-channel interference, and on frequency-selective channels which introduce intersymbol interference. Typical anti-multipath techniques, such as antenna diversity, are deployed to mitigate the effects of the channel. Link layer error control techniques are also investigated.
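The error-file idea mentioned above can be illustrated with a crude model: generate a time-correlated Rayleigh fading gain, convert it to an instantaneous SNR per bit, and flag bits sent during deep fades as errored, which produces the bursty error pattern typical of radio channels. The AR(1) correlation, threshold and other parameters below are assumptions for illustration, not the thesis's link-level simulator.

```python
import numpy as np

rng = np.random.default_rng(1)
n_bits = 50_000
mean_snr_db = 15.0
rho = 0.999                     # per-bit correlation of the fading process (assumption)

# Correlated complex Gaussian channel gain (AR(1)); |h|^2 is Rayleigh-faded power with unit mean.
w = rng.normal(scale=np.sqrt(0.5), size=n_bits) + 1j * rng.normal(scale=np.sqrt(0.5), size=n_bits)
h = np.empty(n_bits, dtype=complex)
h[0] = w[0]
for k in range(1, n_bits):
    h[k] = rho * h[k - 1] + np.sqrt(1 - rho**2) * w[k]

inst_snr_db = mean_snr_db + 10 * np.log10(np.abs(h) ** 2)
error_mask = inst_snr_db < 5.0          # crude threshold: deep fades cause bit errors
print(f"bit error ratio: {error_mask.mean():.3e}")

# Burst statistics: lengths of runs of consecutive errored bits (the "error file").
flags = np.concatenate(([0], error_mask.astype(int), [0]))
starts, ends = np.where(np.diff(flags) == 1)[0], np.where(np.diff(flags) == -1)[0]
if starts.size:
    print(f"{starts.size} error bursts, mean length {(ends - starts).mean():.1f} bits")
```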
Abstract:
This study investigated the variability of response associated with various perimetric techniques, with the aim of improving the clinical interpretation of automated static threshold perimetry. Evaluation of a third generation of perimetric threshold algorithms (SITA) demonstrated a reduction in test duration of approximately 50% both in normal subjects and in glaucoma patients. SITA produced a slightly higher, but clinically insignificant, Mean Sensitivity than the previous generations of algorithms. This was associated with a decreased between-subject variability in sensitivity and hence lower confidence intervals for normality. In glaucoma, the SITA algorithms gave rise to more statistically significant visual field defects and a similar between-visit repeatability to the Full Threshold and FASTPAC algorithms. The higher estimated sensitivity observed with SITA compared to Full Threshold and FASTPAC was not attributed to a reduction in the fatigue effect. The investigation of a novel method of maintaining patient fixation, a roving fixation target which paused immediately prior to the stimulus presentation, revealed a greater degree of fixational instability with the roving fixation target than with the conventional static fixation target. Previous experience with traditional white-on-white perimetry did not eradicate the learning effect in short-wavelength automated perimetry (SWAP) in a group of ocular hypertensive patients. The learning effect was smaller in an experienced group of patients than in a naive group, but remained significant enough to require that patients undertake a series of at least three familiarisation tests with SWAP.
Abstract:
Power generation from biomass is a sustainable energy technology which can contribute to substantial reductions in greenhouse gas emissions, but with greater potential for environmental, economic and social impacts than most other renewable energy technologies. It is therefore important in assessing bioenergy systems to take account of not only technical, but also environmental, economic and social parameters on a common basis. This work addresses the challenge of analysing, quantifying and comparing these factors for bioenergy power generation systems. A life-cycle approach is used to analyse the technical, environmental, economic and social impacts of entire bioelectricity systems, with a number of life-cycle indicators as outputs to facilitate cross-comparison. The results show that similar greenhouse gas savings are achieved with the wide variety of technologies and scales studied, but the land-use efficiency of greenhouse gas savings and specific airborne emissions varied substantially. Also, while specific investment costs and electricity costs vary substantially from one system to another, the number of jobs created per unit of electricity delivered remains roughly constant. Recorded views of stakeholders illustrate that diverging priorities exist for different stakeholder groups, and this will influence the appropriate choice of bioenergy systems for different applications.
Abstract:
HSDPA (High-Speed Downlink Packet Access) is a 3.5-generation asynchronous mobile communications service based on the third generation of W-CDMA. In Korea, it is mainly provided through videophone services. Because of the diffusion of more powerful and diversified services, along with steep advances in mobile communications technology, consumers demand a wide range of choices. However, because of the variety of technologies, which tend to overflow the market regardless of consumer preferences, consumers feel increasingly confused. Therefore, we should not adopt strategies that focus only on developing new technology on the assumption that new technologies are next-generation projects. Instead, we should understand the process by which consumers accept new forms of technology and devise schemes to lower market entry barriers through strategies that enable developers to understand and provide what consumers really want.
Abstract:
This work is part of a larger project which aims to research the potential development of commercial opportunities for the re-use of batteries, after their use in low carbon vehicles, on an electricity grid or microgrid system. There are three main revenue streams (peak load lopping on the distribution network to allow network reinforcement deferral, National Grid primary/secondary/high-frequency response, and customer energy management optimization). These income streams are dependent on the grid system being present. However, there is additional opportunity to be gained from also using these batteries to provide UPS backup when the grid is no longer present. Most UPS or ESS on the market use new batteries in conjunction with a two-level converter interface. This produces a reliable backup solution in the case of loss of mains power, but may be expensive to implement. This paper introduces a modular multilevel cascade converter (MMCC) based ESS using second-life batteries for use on a grid-independent industrial plant without any additional onsite generator, as a potentially cheaper alternative. The number of modules has been designed for a given reliability target, and these modules could be used to minimize or eliminate the output filter. An appropriate strategy to provide voltage and frequency control in a grid-independent system is described and simulated under different disturbance conditions such as load switching, fault conditions or a large motor starting. A comparison of the results from the modular topology against a traditional two-level converter is provided to demonstrate similar performance. The proposed ESS and control strategy are an acceptable way of providing backup power in the event of loss of grid. Additional financial benefit to the customer may be obtained by using a second-life battery in this way.
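Sizing the number of modules for a reliability target, as mentioned above, is commonly treated as a k-out-of-n redundancy problem: with k modules needed to support the load and each module working with probability r over the mission time, redundant modules are added until the probability that at least k survive meets the target. The sketch below illustrates that calculation; the module reliability, load requirement and target are invented and are not the paper's figures.

```python
from math import comb

def k_of_n_reliability(n: int, k: int, r_module: float) -> float:
    """Probability that at least k of n independent modules are working."""
    return sum(comb(n, i) * r_module**i * (1 - r_module)**(n - i) for i in range(k, n + 1))

k_needed = 8          # modules required to support the load (assumption)
r_module = 0.98       # single-module reliability over the mission time (assumption)
target = 0.9999       # system reliability target (assumption)

n = k_needed
while k_of_n_reliability(n, k_needed, r_module) < target:
    n += 1
print(f"{n} modules ({n - k_needed} redundant) meet the {target} reliability target")
```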
Abstract:
The future broadband information network will undoubtedly integrate the mobility and flexibility of wireless access systems with the huge bandwidth capacity of photonics solutions to enable a communication system capable of handling the anticipated demand for interactive services. Towards wide-coverage and low-cost implementations of such broadband wireless photonics communication networks, various aspects of the enabling technologies are continuing to generate intense research interest. Among the core technologies, the optical generation and distribution of radio frequency signals over fibres, and the fibre-optic signal processing of optical and radio frequency signals, have been the subjects of study in this thesis. Based on the intrinsic properties of single-mode optical fibres, and in conjunction with the concepts of optical fibre delay line filters and fibre Bragg gratings, a number of novel fibre-based devices, potentially suitable for applications in future wireless photonics communication systems, have been realised. Special single-mode fibres, namely the high birefringence (Hi-Bi) fibre and the Er/Yb doped fibre, have been employed so as to exploit their merits to achieve practical and cost-effective all-fibre architectures. A number of fibre-based complex signal processors for optical and radio frequencies using novel Hi-Bi fibre delay line filter architectures have been illustrated. In particular, operations such as multichannel flat-top bandpass filtering, simultaneous complementary outputs and bidirectional nonreciprocal wavelength interleaving have been demonstrated. The proposed configurations greatly reduced the environmental sensitivity typical of coherent fibre delay line filter schemes, and offered reconfigurable transfer functions, negligible chromatic dispersion, and an ease of implementation not easily achieved with other techniques. A number of unique fibre grating devices for signal filtering and fibre laser applications have been realised. The concept of superimposed fibre Bragg gratings has been extended to non-uniform grating structures and into Hi-Bi fibres to achieve highly useful grating devices, such as an overwritten phase-shifted fibre grating structure and widely/narrowly spaced polarization-discriminating filters, that are not limited by the intrinsic fibre properties. In terms of fibre-based optical millimetre-wave transmitters, unique approaches based on fibre laser configurations have been proposed and demonstrated. The ability of dual-mode distributed feedback (DFB) fibre lasers to generate high-spectral-purity, narrow-linewidth heterodyne signals without complex feedback mechanisms has been illustrated. A novel co-located dual DFB fibre laser configuration, based on the proposed superimposed phase-shifted fibre grating structure, has been further realised with highly desirable operating characteristics, without the need for costly high-frequency synthesizers and complex feedback controls. Lastly, a novel cavity mode condition monitoring and optimisation scheme for short-length, linear-cavity fibre lasers has been proposed and demonstrated. Based on the concept and simplicity of the superimposed fibre laser cavities structure, in conjunction with feedback controls, enhanced output performance from the fibre lasers has been achieved. The importance of such cavity mode assessment and feedback control for optimised fibre laser output performance has been illustrated.
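As background to the delay line filter processors discussed above, the frequency response of an N-tap incoherent fibre delay line filter takes the standard finite-impulse-response form below, with the unit delay setting the free spectral range; the particular Hi-Bi architectures in the thesis realise specific choices of the tap weights and delays, which are not reproduced here.

```latex
% Standard N-tap delay line filter response (tap weights a_k, unit delay \tau):
H(f) = \sum_{k=0}^{N-1} a_k \, e^{-j 2 \pi f k \tau},
\qquad \mathrm{FSR} = \frac{1}{\tau}.
```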
Abstract:
In this second talk on dissipative structures in fiber applications, we overview theoretical aspects of the generation, evolution and characterization of self-similar parabolic-shaped pulses in fiber amplifier media. In particular, we present a perturbation analysis that describes the structural changes induced by third-order fiber dispersion on the parabolic pulse solution of the nonlinear Schrödinger equation with gain. Promising applications of parabolic pulses in optical signal post-processing and regeneration in communication systems are also discussed.
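For reference, the propagation model underlying the parabolic (similariton) pulses discussed above is usually written as the nonlinear Schrödinger equation with distributed gain, whose asymptotic self-similar solution has a parabolic intensity profile; third-order dispersion, the perturbation analysed in the talk, adds a third-order time-derivative term weighted by the beta_3 coefficient (sign depending on convention). The form below is the standard statement, not the talk's specific perturbation expansion.

```latex
% Nonlinear Schrödinger equation with gain g (normal dispersion \beta_2 > 0, nonlinearity \gamma):
i \frac{\partial \psi}{\partial z}
  = \frac{\beta_2}{2} \frac{\partial^2 \psi}{\partial t^2}
  - \gamma |\psi|^2 \psi
  + i \frac{g}{2} \psi,
\qquad
|\psi(z,t)|^2 \simeq P_0(z)\left[ 1 - \left(\frac{t}{T_p(z)}\right)^{2} \right]
\;\; \text{for } |t| \le T_p(z).
```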
Abstract:
Modelling architectural information is particularly important because of the acknowledged crucial role of software architecture in raising the level of abstraction during development. In the MDE area, the level of abstraction of models has frequently been related to low-level design concepts. However, model-driven techniques can be further exploited to model software artefacts that take into account the architecture of the system and its changes according to variations of the environment. In this paper, we propose model-driven techniques and dynamic variability as concepts useful for modelling the dynamic fluctuation of the environment and its impact on the architecture. Using the mappings from the models to the implementation, generative techniques allow the (semi-)automatic generation of artefacts, making the process more efficient and promoting software reuse. The automatic generation of configurations and reconfigurations from models provides the basis for safer execution. The architectural perspective offered by the models shifts the focus away from implementation details to the whole view of the system and its runtime changes, promoting high-level analysis. © 2009 Springer Berlin Heidelberg.
Abstract:
In this article we envision factors and trends that shape the next generation of environmental monitoring systems. One key factor in this respect is the combined effect of end-user needs and the general development of IT services and their availability. Currently, an environmental (monitoring) system is assumed to be reactive. It delivers measurement data and computational results only if the user explicitly asks for them, either by query or by subscription. There is a temptation to automate this by simply pushing data to end-users. This, however, easily leads to an "advertisement strategy", where data is pushed to end-users regardless of users' needs. Under this strategy, the sheer volume of received data obscures the individual messages; any "automatic" service, regardless of its fitness, overruns a system that requires the user's initiative. The foreseeable problem is that, unless there is overall management, each new environmental service is going to compete for end-users' attention and, thus, inadvertently hinder the use of existing services. As the main contribution we investigate the nature of proactive environmental systems, and how they should be designed to avoid the aforementioned problem. We also discuss how semantics, participatory sensing, uncertainty management, and situational awareness link to proactive environmental systems. We illustrate our proposals with some real-life examples.