33 results for exponential wide band model
in Aston University Research Archive
Abstract:
Orthogonal frequency division multiplexing (OFDM) is becoming a fundamental technology in future-generation wireless communications. Call admission control is an effective mechanism for guaranteeing resilient, efficient service and quality of service (QoS) in wireless mobile networks. In this paper, we present several call admission control algorithms for OFDM-based wireless multiservice networks. Call connection requests are differentiated into narrow-band calls and wide-band calls. For both classes of calls, the traffic process is characterized as a batch arrival process, since each call may request multiple subcarriers to satisfy its QoS requirement. The batch size is a random variable following a probability mass function (PMF) with a realistic maximum value. In addition, the service times for wide-band and narrow-band calls are different. We then perform a tele-traffic queueing analysis for OFDM-based wireless multiservice networks. Formulae for the key performance metrics, call blocking probability and bandwidth utilization, are developed. Numerical investigations are presented to demonstrate the interaction between key parameters and performance metrics. The performance tradeoff among different call admission control algorithms is discussed. Moreover, the analytical model has been validated by simulation. The methodology and the results provide an efficient tool for planning next-generation OFDM-based broadband wireless access systems.
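As a rough illustration of the kind of model analysed (not the paper's algorithms), the following sketch estimates per-class blocking probabilities for a complete-sharing admission policy with batch subcarrier requests; all parameters are assumed values.

```python
import random

# Illustrative parameters (assumed values, not taken from the paper)
TOTAL_SUBCARRIERS = 64
ARRIVAL_RATE = {"narrow": 1.0, "wide": 0.5}      # Poisson call arrival rates
MEAN_HOLDING = {"narrow": 1.0, "wide": 2.0}      # class-dependent mean service times
BATCH_PMF = {                                    # subcarrier-request PMF with a finite maximum
    "narrow": {1: 0.6, 2: 0.4},
    "wide":   {4: 0.5, 6: 0.3, 8: 0.2},
}

def draw_batch(cls):
    """Sample the number of subcarriers requested by a call of the given class."""
    r, acc = random.random(), 0.0
    for size, p in BATCH_PMF[cls].items():
        acc += p
        if r <= acc:
            return size
    return max(BATCH_PMF[cls])

def simulate(t_end=200_000.0):
    """Estimate per-class blocking under a complete-sharing admission rule."""
    t, in_service = 0.0, []                      # in_service: (departure time, subcarriers held)
    offered = {"narrow": 0, "wide": 0}
    blocked = {"narrow": 0, "wide": 0}
    total_rate = sum(ARRIVAL_RATE.values())
    while t < t_end:
        t += random.expovariate(total_rate)      # next arrival of the merged Poisson stream
        in_service = [(d, s) for d, s in in_service if d > t]   # release finished calls
        used = sum(s for _, s in in_service)
        cls = "narrow" if random.random() < ARRIVAL_RATE["narrow"] / total_rate else "wide"
        need = draw_batch(cls)
        offered[cls] += 1
        if used + need <= TOTAL_SUBCARRIERS:     # admit only if enough free subcarriers remain
            in_service.append((t + random.expovariate(1.0 / MEAN_HOLDING[cls]), need))
        else:
            blocked[cls] += 1
    return {c: blocked[c] / offered[c] for c in offered}

print(simulate())
```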
Abstract:
We present results on the characterization of lasers with ultra-long cavity lengths of up to 84 km, the longest cavity ever reported. We have analyzed the mode structure, the shape and width of the generated spectra, and the dependence of intensity fluctuations on cavity length and intra-cavity power. The RF spectra exhibit an ultra-dense cavity mode structure (the mode spacing is 1.2 kHz for the 84 km cavity), in which the width of the mode beating is proportional to the intra-cavity power, while the optical spectra broaden with power according to a square-root law, acquiring a specific shape with exponential wings. A model based on the wave turbulence formalism has been developed to describe the observed effects.
Abstract:
We report a characterization of the acoustic sensitivity of microstructured polymer optical fiber (mPOF) interferometric sensors at ultrasonic frequencies from 100 kHz to 10 MHz. Wide-band ultrasonic fiber-optic sensors are a promising alternative to conventional piezoelectric transducers in biomedical ultrasonic and optoacoustic applications. These sensors, made of biocompatible polymers, are good candidates for the sensing element of an optoacoustic endoscope because of their high sensitivity, their shape, and their non-brittle and non-electric nature. The acoustic sensitivity of an intrinsic fiber-optic interferometric sensor depends strongly on the material of which it is composed. In this work we experimentally compare the intrinsic ultrasonic sensitivity of a PMMA mPOF with that of three other optical fibers: a single-mode silica optical fiber, a single-mode polymer optical fiber and a multimode graded-index perfluorinated polymer optical fiber. © 2014 SPIE.
Abstract:
Chalcogenide suspended-core fibers are a valuable solution for obtaining supercontinuum generation in the mid-infrared, thanks to the high transparency of the glass, the high index contrast, the small core diameter and the widely tunable dispersion. In this work the dispersion and nonlinear properties of several chalcogenide suspended-core microstructured fibers are numerically evaluated, and the effects of all the structural parameters are investigated. The design is optimized to provide a fiber suitable for wide-band supercontinuum generation in the mid-infrared.
Abstract:
We propose a long-range, high-precision optical time domain reflectometry (OTDR) scheme based on an all-fiber supercontinuum source. The source simply consists of a CW pump laser of moderate power and a section of fiber whose zero-dispersion wavelength lies near the laser's central wavelength. The spectral and time-domain properties of the source are investigated, showing that it is well suited to nonlinear-optics applications such as correlation OTDR thanks to its ultra-wide-band chaotic behavior, and mm-scale spatial resolution is demonstrated. We then analyze the key factors limiting the operational range of such an OTDR, e.g., integrated Rayleigh backscattering and fiber loss, which degrade the optical signal-to-noise ratio at the receiver, and discuss guidelines for counteracting such signal fading. Finally, we experimentally demonstrate a correlation OTDR with a 100 km sensing range and 8.2 cm spatial resolution (1.2 million resolved points), as a verification of the theoretical analysis.
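The ranging principle of correlation OTDR can be illustrated with a toy numerical example: the recorded backscatter is cross-correlated with the reference probe waveform and the correlation peak gives the round-trip delay. The sampling rate, delay, noise level and group index below are assumptions, not the experimental parameters of the paper.

```python
import numpy as np

# Toy correlation-OTDR ranging example (all numbers are assumed values).
rng = np.random.default_rng(0)
fs = 1e9                                  # 1 GS/s sampling of the probe intensity
n = 2**18
probe = rng.standard_normal(n)            # stand-in for the wide-band chaotic waveform
delay_samples = 12_345                    # round-trip delay of a single reflection
echo = 0.05 * np.roll(probe, delay_samples) + 0.02 * rng.standard_normal(n)

# Cross-correlate the recorded trace with the reference probe; the peak
# position of the correlation gives the round-trip delay in samples.
corr = np.fft.ifft(np.fft.fft(echo) * np.conj(np.fft.fft(probe))).real
peak = int(np.argmax(corr))

c, n_group = 3e8, 1.468                   # vacuum light speed, fibre group index (assumed)
distance = peak / fs * c / (2 * n_group)  # one-way distance to the reflection
print(f"estimated reflection distance: {distance:.2f} m")
```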
Abstract:
Ophthalmophakometric measurements of ocular surface radius of curvature and alignment were evaluated on physical model eyes encompassing a wide range of human ocular dimensions. The results indicated that defocus errors arising from imperfections in the ophthalmophakometer camera telecentricity and light source collimation were smaller than experimental errors. Reasonable estimates emerged for anterior lens surface radius of curvature (accuracy: 0.02–0.10 mm; precision 0.05–0.09 mm), posterior lens surface radius of curvature (accuracy: 0.10–0.55 mm; precision 0.06–0.20 mm), eye rotation (accuracy: 0.00–0.32°; precision 0.06–0.25°), lens tilt (accuracy: 0.00–0.33°; precision 0.05–0.98°) and lens decentration (accuracy: 0.00–0.07 mm; precision 0.00–0.07 mm).
Abstract:
A new general linear model (GLM) beamformer method is described for processing magnetoencephalography (MEG) data. A standard nonlinear beamformer is used to determine the time course of neuronal activation for each point in a predefined source space. A Hilbert transform gives the envelope of oscillatory activity at each location in any chosen frequency band (not necessary in the case of sustained (DC) fields), enabling the general linear model to be applied and a volumetric T statistic image to be determined. The new method is illustrated by a two-source simulation (sustained field and 20 Hz) and is shown to provide accurate localization. The method is also shown to localize accurately the increasing and decreasing gamma activity to the temporal and frontal lobes, respectively, in the case of a scintillating scotoma. The new method brings the advantages of the general linear model to the analysis of MEG data and should prove useful for the localization of changing patterns of activity across all frequency ranges including DC (sustained fields). © 2004 Elsevier Inc. All rights reserved.
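A minimal sketch of the envelope-plus-GLM step for a single virtual-sensor time course is given below (synthetic data; the frequency band, filter and regressor are illustrative assumptions, and serial correlation in the envelope is ignored).

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

# Synthetic single-location beamformer output: a 20 Hz oscillation whose
# amplitude follows a boxcar "task" regressor, plus noise (assumed values).
fs = 250.0
t = np.arange(0, 60, 1 / fs)
design = (np.sin(2 * np.pi * t / 20) > 0).astype(float)      # boxcar task regressor
signal = np.sin(2 * np.pi * 20 * t) * (0.5 + design) + np.random.randn(t.size)

# 1) band-pass the virtual-sensor time course in the chosen band (here 15-25 Hz)
b, a = butter(4, [15 / (fs / 2), 25 / (fs / 2)], btype="band")
band = filtfilt(b, a, signal)

# 2) Hilbert transform -> envelope of oscillatory activity
env = np.abs(hilbert(band))

# 3) fit the general linear model env = X @ beta + error and form a T statistic
X = np.column_stack([np.ones_like(env), design])
beta, res, *_ = np.linalg.lstsq(X, env, rcond=None)
dof = env.size - X.shape[1]
sigma2 = res[0] / dof
cov = sigma2 * np.linalg.inv(X.T @ X)
t_stat = beta[1] / np.sqrt(cov[1, 1])
print(f"T statistic for the task regressor: {t_stat:.2f}")
```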
Abstract:
A multi-scale model of edge coding based on normalized Gaussian derivative filters successfully predicts perceived scale (blur) for a wide variety of edge profiles [Georgeson, M. A., May, K. A., Freeman, T. C. A., & Hesse, G. S. (in press). From filters to features: Scale-space analysis of edge and blur coding in human vision. Journal of Vision]. Our model spatially differentiates the luminance profile, half-wave rectifies the 1st derivative, and then differentiates twice more, to give the 3rd derivative of all regions with a positive gradient. This process is implemented by a set of Gaussian derivative filters with a range of scales. Peaks in the inverted normalized 3rd derivative across space and scale indicate the positions and scales of the edges. The edge contrast can be estimated from the height of the peak. The model provides a veridical estimate of the scale and contrast of edges that have a Gaussian integral profile. Therefore, since scale and contrast are independent stimulus parameters, the model predicts that the perceived value of either of these parameters should be unaffected by changes in the other. This prediction was found to be incorrect: reducing the contrast of an edge made it look sharper, and increasing its scale led to a decrease in the perceived contrast. Our model can account for these effects when the simple half-wave rectifier after the 1st derivative is replaced by a smoothed threshold function described by two parameters. For each subject, one pair of parameters provided a satisfactory fit to the data from all the experiments presented here and in the accompanying paper [May, K. A. & Georgeson, M. A. (2007). Added luminance ramp alters perceived edge blur and contrast: A critical test for derivative-based models of edge coding. Vision Research, 47, 1721-1731]. Thus, when we allow for the visual system's insensitivity to very shallow luminance gradients, our multi-scale model can be extended to edge coding over a wide range of contrasts and blurs. © 2007 Elsevier Ltd. All rights reserved.
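The filter cascade can be sketched in one dimension as follows. This is an assumption-laden illustration rather than the paper's implementation: the edge blur, contrast and scale range are arbitrary, and an s^1.5 scale normalisation is used here because it makes the peak scale track the blur of a Gaussian-integral edge.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.special import erf

# 1-D sketch: differentiate the luminance profile, half-wave rectify, then apply
# second-derivative Gaussian filters over a range of scales and locate the peak
# of the inverted, scale-normalised 3rd derivative (all parameters assumed).
x = np.arange(-256.0, 256.0)
edge_blur, contrast = 8.0, 0.4
luminance = 0.5 + 0.5 * contrast * erf(x / (edge_blur * np.sqrt(2)))

grad = np.maximum(np.gradient(luminance), 0.0)      # 1st derivative, half-wave rectified
best = (-np.inf, None, None)
for s in np.geomspace(1.0, 32.0, 40):
    d3 = gaussian_filter1d(grad, s, order=2)        # two further derivatives at scale s
    response = -(s ** 1.5) * d3                     # inverted, scale-normalised 3rd derivative
    i = int(np.argmax(response))
    if response[i] > best[0]:
        best = (response[i], x[i], s)

_, position, scale = best
print(f"edge located at x = {position:.1f}, estimated blur ~ {scale:.1f} (true blur {edge_blur})")
```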
Abstract:
We describe a template model for perception of edge blur and identify a crucial early nonlinearity in this process. The main principle is to spatially filter the edge image to produce a 'signature', and then find which of a set of templates best fits that signature. Psychophysical blur-matching data strongly support the use of a second-derivative signature, coupled to Gaussian first-derivative templates. The spatial scale of the best-fitting template signals the edge blur. This model predicts blur-matching data accurately for a wide variety of Gaussian and non-Gaussian edges, but it suffers a bias when edges of opposite sign come close together in sine-wave gratings and other periodic images. This anomaly suggests a second general principle: the region of an image that 'belongs' to a given edge should have a consistent sign or direction of luminance gradient. Segmentation of the gradient profile into regions of common sign is achieved by implementing the second-derivative 'signature' operator as two first-derivative operators separated by a half-wave rectifier. This multiscale system of nonlinear filters predicts perceived blur accurately for periodic and aperiodic waveforms. We also outline its extension to 2-D images and infer the 2-D shape of the receptive fields.
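A compact sketch of the signature-and-template idea follows: the second-derivative 'signature' is built as two first-derivative stages separated by a half-wave rectifier, and the best-matching Gaussian first-derivative template signals the blur. The edge profile, template scales and dot-product matching criterion are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.special import erf

# Build the 'signature' of a blurred edge and find the best-fitting template
# (edge blur, template scales and matching criterion are assumed values).
x = np.arange(-256.0, 256.0)
true_blur = 6.0
luminance = 0.5 * (1 + erf(x / (true_blur * np.sqrt(2))))

grad = np.maximum(np.gradient(luminance), 0.0)          # 1st derivative + half-wave rectifier
signature = gaussian_filter1d(grad, 1.0, order=1)       # second first-derivative stage (lightly smoothed)

def template(scale):
    """Unit-norm Gaussian first-derivative template at a given spatial scale."""
    d = np.gradient(np.exp(-x**2 / (2 * scale**2)))
    return d / np.linalg.norm(d)

scales = np.geomspace(1.0, 30.0, 60)
fits = [abs(np.dot(signature, template(s))) for s in scales]
print(f"best-fitting template scale: {scales[int(np.argmax(fits))]:.1f} (true blur {true_blur})")
```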
Abstract:
B-ISDN is a universal network which supports diverse mixes of services, applications and traffic. ATM has been accepted worldwide as the transport technique for future use in B-ISDN. ATM, being a simple packet-oriented transfer technique, provides a flexible means of supporting a continuum of transport rates and is efficient thanks to the possible statistical sharing of network resources by multiple users. In order to fully exploit this potential statistical gain while at the same time supporting diverse service and traffic mixes, efficient traffic controls must be designed. Traffic controls, which include congestion and flow control, are a fundamental necessity for the success and viability of future B-ISDN. Congestion and flow control are difficult in the broadband environment because of the high link speeds, wide-area distances, diverse service requirements and diverse traffic characteristics. Most congestion and flow control approaches in conventional packet-switched networks are reactive in nature and are not applicable in the B-ISDN environment. In this research, traffic control procedures based mainly on preventive measures are proposed for a private ATM-based network and their performance is evaluated. The traffic controls include connection admission control (CAC), traffic flow enforcement, priority control and an explicit feedback mechanism. These functions operate at the call level and the cell level, and are carried out distributively by the end terminals, the network access points and the internal elements of the network. During the connection set-up phase, the CAC decides whether to accept or deny a connection request and allocates bandwidth to the new connection according to one of three schemes: peak bit rate, statistical rate, or average bit rate. The statistical multiplexing rate is based on a 'bufferless fluid flow model', which is simple and robust. Allocating an average bit rate to data traffic, at the expense of delay, clearly improves network bandwidth utilisation.
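A toy version of a bufferless fluid-flow admission test, sketched below for homogeneous on-off sources, admits a further call only while the probability that the instantaneous aggregate rate exceeds the link capacity stays below a QoS target; the rates, activity factor and target are assumed values, not those of the thesis.

```python
from math import comb

# Assumed parameters for homogeneous on-off sources on a single link.
PEAK_RATE = 2.0e6        # bit/s per source when active
ACTIVITY = 0.4           # probability a source is transmitting
LINK_CAPACITY = 45.0e6   # bit/s
QOS_TARGET = 1e-6        # allowed rate-overflow probability

def overflow_probability(n_sources):
    """P(aggregate instantaneous rate > capacity) for n on-off sources."""
    max_active = int(LINK_CAPACITY // PEAK_RATE)   # most sources that fit simultaneously
    prob = 0.0
    for k in range(max_active + 1, n_sources + 1):
        prob += comb(n_sources, k) * ACTIVITY**k * (1 - ACTIVITY)**(n_sources - k)
    return prob

def admit(current_calls):
    """CAC decision for one more call of the same class."""
    return overflow_probability(current_calls + 1) <= QOS_TARGET

n = 0
while admit(n):
    n += 1
print(f"statistical multiplexing admits {n} calls "
      f"(peak-rate allocation would admit {int(LINK_CAPACITY // PEAK_RATE)})")
```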
A profile of low vision services in England: the Low Vision Service Model Evaluation (LOVSME) project
Abstract:
In the UK, low vision rehabilitation is delivered by a wide variety of providers, with different strategies being used to integrate services from health, social care and the voluntary sector. In order to capture the current diversity of service provision, the Low Vision Service Model Evaluation (LOVSME) project aimed to profile selected low vision services using published standards for service delivery as a guide. Seven geographically and organizationally varied low vision services across England were chosen for their diversity, and all agreed to participate. A series of questionnaires and follow-up visits were undertaken to obtain a comprehensive description of each service, including the staff workloads and the cost of providing the service. In this paper the strengths of each model of delivery are discussed and examples of good practice are identified. As a result of the project, an Assessment Framework tool has been developed that aims to help other service providers evaluate different aspects of their own service to identify any gaps in existing provision, and will act as a benchmark for future service development.
Abstract:
Non-linear relationships are common in microbiological research and often necessitate the use of the statistical techniques of non-linear regression or curve fitting. In some circumstances, the investigator may wish to fit an exponential model to the data, i.e., to test the hypothesis that a quantity Y either increases or decays exponentially with increasing X. This type of model is straightforward to fit, as taking logarithms of the Y variable linearises the relationship, which can then be treated by the methods of linear regression.
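For example (a minimal sketch with simulated data and arbitrary parameters), a quantity decaying as Y = A·exp(bX) can be fitted by ordinary linear regression after log-transforming Y:

```python
import numpy as np

# Simulated exponential decay with multiplicative noise (arbitrary parameters),
# linearised by taking logarithms of Y and fitted by linear regression.
rng = np.random.default_rng(1)
x = np.linspace(0, 10, 30)
y = 50.0 * np.exp(-0.35 * x) * rng.lognormal(0.0, 0.05, x.size)

slope, intercept = np.polyfit(x, np.log(y), 1)   # linear regression on log(Y)
print(f"estimated rate b = {slope:.3f}, estimated A = exp(intercept) = {np.exp(intercept):.1f}")
```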
Abstract:
Shropshire Energy Team initiated this study to examine energy consumption and associated emissions in the predominantly rural county of Shropshire. Current use of energy is not sustainable in the long term, and there are various approaches to dealing with the environmental problems it creates. Energy planning by a local authority for a sustainable future requires detailed energy consumption and environmental information. This information would enable target setting and the implementation of policies designed to encourage energy efficiency improvements and the exploitation of renewable energy resources. This could aid regeneration strategies by providing new employment opportunities. Associated reductions in carbon dioxide and other emissions would help to meet national and international environmental targets. In the absence of this detailed information, the objective was to develop a methodology to assess energy consumption and emissions on a regional basis from 1990 onwards for all local planning authorities. This would enable a more accurate assessment of the relevant issues, such that plans are more appropriate and longer lasting. A first comprehensive set of data was gathered from a wide range of sources, and a strong correlation was found between population and energy consumption for a variety of regions across the UK. In this case the methodology was applied to the county of Shropshire to give, for the first time, estimates of primary fuel consumption, electricity consumption and associated emissions in Shropshire for 1990 to 2025. The estimates provide a suitable baseline for assessing the potential contribution renewable energy could make to meeting electricity demand in the county and to reducing emissions. The assessment indicated that in 1990 total primary fuel consumption was 63,518,018 GJ/y, increasing to 119,956,465 GJ/y by 2025. This is associated with emissions of 1,129,626 t/y of carbon in 1990, rising to 1,303,282 t/y by 2025. In 1990, 22,565,713 GJ/y of the primary fuel consumption was used for generating electricity, rising to 23,478,050 GJ/y in 2025. If targets to reduce primary fuel consumption are reached, emissions of carbon would fall to 1,042,626 t/y by 2025; if renewable energy targets were also reached, emissions of carbon would fall to 988,638 t/y by 2025.
Abstract:
This thesis is concerned with the inventory control of items that can be considered independent of one another. The decisions of when to order and in what quantity are the controllable, or independent, variables in the cost expressions which are minimised. The four systems considered are referred to as (Q,R), (nQ,R,T), (M,T) and (M,R,T). With (Q,R), a fixed quantity Q is ordered each time the order cover (i.e., stock in hand plus stock on order) equals or falls below R, the re-order level. With the other three systems, reviews are made only at intervals of T. With (nQ,R,T), an order for nQ is placed if, on review, the order cover is less than or equal to R, where n, an integer, is chosen at the time so that the new order cover just exceeds R. In (M,T), each order increases the order cover to M. Finally, in (M,R,T), when on review the order cover does not exceed R, enough is ordered to increase it to M. The (Q,R) system is examined at several levels of complexity, so that the theoretical savings in inventory costs obtained with more exact models could be compared with the increases in computational costs. Since the exact model was preferable for the (Q,R) system, only exact models were derived for the other three systems. Several methods of optimization were tried, but most were found inappropriate for the exact models because of non-convergence; however, one method did work for each of the exact models. Demand is considered continuous and, with one exception, the distribution assumed is the normal distribution truncated so that demand is never less than zero. Shortages are assumed to result in backorders, not lost sales. However, the shortage cost is a function of three terms, one of which, the backorder cost, may be a linear, quadratic or exponential function of the length of time of a backorder, with or without a period of grace. Lead times are assumed to be constant or gamma distributed. Lastly, the actual supply quantity is itself allowed to be a random variable. All the sets of equations were programmed for a KDF 9 computer and the computed performances of the four inventory control procedures are compared under each assumption.
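As an illustration of the (Q,R) rule only, the sketch below simulates a continuous-review policy with truncated-normal demand, backorders and a linear backorder cost; the demand, lead time and cost figures are assumed values, not those of the thesis.

```python
import random

# Continuous-review (Q, R) policy: order a fixed quantity Q whenever order cover
# (stock in hand plus stock on order) falls to R or below (parameters assumed).
Q, R = 60, 35
HOLDING_COST, ORDER_COST, BACKORDER_COST = 1.0, 50.0, 10.0   # per unit-period / per order / per unit-period

def simulate(periods=10_000, mean_demand=10.0, sd_demand=3.0, lead_time=3):
    stock, on_order, total_cost = R + Q, [], 0.0
    for t in range(periods):
        # receive any outstanding orders whose lead time has elapsed
        stock += sum(q for due, q in on_order if due == t)
        on_order = [(due, q) for due, q in on_order if due != t]
        # truncated-normal demand (never negative); shortages become backorders
        demand = max(0.0, random.gauss(mean_demand, sd_demand))
        stock -= demand
        # holding cost on positive stock, linear backorder cost on negative stock
        total_cost += HOLDING_COST * max(stock, 0.0) + BACKORDER_COST * max(-stock, 0.0)
        # (Q, R) rule applied to order cover
        cover = stock + sum(q for _, q in on_order)
        if cover <= R:
            on_order.append((t + lead_time, Q))
            total_cost += ORDER_COST
    return total_cost / periods

print(f"average cost per period under (Q,R) = ({Q},{R}): {simulate():.1f}")
```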