971 results for "Employing systems"

Relevance: 30.00%

Abstract:

In this article, a novel method to generate an ultra-wideband (UWB) doublet using the cross-phase modulation (XPM) effect is proposed and experimentally demonstrated. The main component of the proposed architecture is an SOA Mach-Zehnder interferometer (SOA-MZI) pumped with a modulated Gaussian pulse. The maximum and minimum conversion points are analyzed through the system's transfer function in order to determine the most effective operating stage. By tuning the SOA currents to different values, it is possible to identify a conversion step in which the input pulse is large enough to saturate the SOA-MZI, leading to the generation of a UWB doublet pulse.
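The target waveform can be illustrated numerically: a UWB doublet is essentially the second derivative of a Gaussian pulse. The sketch below only models the waveform shape; the pulse width and time grid are illustrative assumptions, not the experimental values.

```python
import numpy as np

# Illustrative sketch only: the article generates the doublet optically in a
# saturated SOA-MZI; here we just construct the target waveform numerically.
# The 60 ps width and 1 ns window are assumed values, not the experiment's.
t = np.linspace(-0.5e-9, 0.5e-9, 2001)        # time grid, seconds
tau = 60e-12                                   # assumed Gaussian width

gaussian = np.exp(-t**2 / (2 * tau**2))
# A UWB doublet is the second derivative of a Gaussian pulse.
doublet = np.gradient(np.gradient(gaussian, t), t)
doublet /= np.max(np.abs(doublet))             # normalise peak amplitude

# Characteristic shape: a central lobe flanked by two lobes of opposite
# polarity, i.e. two zero crossings within the significant part of the pulse.
significant = doublet[np.abs(doublet) > 1e-3]
zero_crossings = int(np.sum(np.diff(np.sign(significant)) != 0))
```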

Relevance: 30.00%

Abstract:

In the present article, an innovative approach for the generation of a UWB monocycle is proposed and experimentally demonstrated. The proposed design combines an interferometric device (an SOA Mach-Zehnder interferometer) with an optical processor unit. Together, these components make it possible to generate, combine and customize UWB pulses. An optical pulse is used as the pump signal and two optical carriers represent the optical input of the system. The selection of a specific wavelength, and therefore of a particular port, makes it possible to modify the polarity of the system's output pulse. The capacity to transmit several data sequences has also been demonstrated.
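For reference, the monocycle shape and the polarity switching described above can be sketched numerically: a UWB monocycle is the first derivative of a Gaussian pulse, and selecting the other port corresponds to a sign flip. The pulse width and time grid below are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch only: the article generates the monocycle optically;
# here we just model the target waveform and the polarity inversion that
# selecting a different wavelength/output port provides.
t = np.linspace(-0.5e-9, 0.5e-9, 2001)
tau = 60e-12                                   # assumed Gaussian width

gaussian = np.exp(-t**2 / (2 * tau**2))
# A UWB monocycle is the first derivative of a Gaussian pulse.
monocycle = np.gradient(gaussian, t)
monocycle /= np.max(np.abs(monocycle))

# Selecting the complementary port flips the pulse polarity.
inverted = -monocycle
```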

Relevance: 30.00%

Abstract:

Modern mobile devices have become increasingly powerful in functionality and entertainment as next-generation mobile computing and communication technologies rapidly advance. However, battery capacity has not experienced an equivalent increase. The user experience of modern mobile systems is therefore greatly affected by the battery lifetime, which is an unstable factor that is hard to control. To address this problem, previous works proposed energy-centric power management (PM) schemes that provide a strong guarantee on the battery lifetime by globally managing energy as a first-class resource in the system. As the processor scheduler plays a pivotal role in power management and application performance guarantees, this thesis explores the user experience optimization of energy-limited mobile systems from the perspective of energy-centric processor scheduling.
This thesis first analyzes the general contributing factors of the mobile system user experience. It then determines the essential requirements of energy-centric processor scheduling for user experience optimization: proportional power sharing, time-constraint compliance and, when necessary, a tradeoff between the power share and time-constraint compliance. To meet these requirements, the classical fair queuing algorithm and its reference model are extended from the network and CPU bandwidth sharing domains to the energy sharing domain, and on that basis the energy-based fair queuing (EFQ) algorithm is proposed for performing energy-centric processor scheduling. The EFQ algorithm is designed to provide proportional power shares to tasks by scheduling them based on their energy consumption and weights. The power share of each time-sensitive task is protected against changes in the scheduling environment to guarantee stable performance, and any instantaneous power share that is over-allocated to one time-sensitive task can be fairly re-allocated to the other tasks. In addition, to better support real-time and multimedia scheduling, a real-time-friendly mechanism is combined with the EFQ algorithm to give time-limited scheduling preference to the most urgent time-sensitive tasks. The properties of the EFQ algorithm are evaluated through high-level modelling and simulation. The simulation results indicate that the essential requirements of energy-centric processor scheduling can be achieved. The EFQ algorithm is then implemented in the Linux kernel. To assess the properties of the Linux-based EFQ scheduler, an experimental test-bench was developed based on an embedded platform, a multithreaded test-bench program, and an open-source benchmark suite.
Through specifically designed experiments, this thesis first verifies the properties of EFQ in power share management and real-time scheduling, and then explores the potential benefits of employing EFQ scheduling in the user experience optimization of energy-limited mobile systems. Experimental results on power share management show that EFQ is more effective than the Linux CFS scheduler in managing power shares, achieving a proportional sharing of the system power regardless of which device the energy is spent on. Experimental results on real-time scheduling demonstrate that EFQ can achieve effective, flexible and robust time-constraint compliance even as the energy estimation error and the number of tasks increase. Finally, a comparative analysis of the experimental results on user experience optimization demonstrates that EFQ is more effective and flexible than traditional processor scheduling algorithms, such as those of the default Linux scheduler, in optimizing and preserving the user experience of energy-limited mobile systems.
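The core idea of EFQ can be sketched in a few lines: classical fair queuing orders tasks by virtual time, i.e. service received divided by weight, and EFQ substitutes energy consumed for service. This is a minimal illustrative sketch, not the thesis's kernel implementation; the task names, per-quantum energy costs and weights are assumed.

```python
import heapq

def efq_schedule(tasks, quanta):
    """Minimal EFQ sketch: tasks maps name -> (energy_per_quantum, weight).
    Each quantum, run the task with the smallest virtual energy time,
    defined as cumulative energy consumed divided by weight."""
    heap = [(0.0, name) for name in tasks]
    heapq.heapify(heap)
    consumed = {name: 0.0 for name in tasks}
    order = []
    for _ in range(quanta):
        _, name = heapq.heappop(heap)
        energy, weight = tasks[name]
        consumed[name] += energy               # charge the task its energy
        order.append(name)
        heapq.heappush(heap, (consumed[name] / weight, name))
    return order, consumed

# Hypothetical workload: "video" burns twice the energy per quantum but
# holds twice the weight, so both should receive energy in a 2:1 ratio.
order, consumed = efq_schedule({"video": (2.0, 2.0), "sync": (1.0, 1.0)}, 30)
```

Because scheduling is keyed on energy per unit weight rather than CPU time, a task that burns power on any device is charged the same way, which is what enables proportional sharing of system power.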

Relevance: 30.00%

Abstract:

The problem of channel estimation for multicarrier communications is addressed. We focus on systems employing the Discrete Cosine Transform Type-I (DCT1) even at both the transmitter and the receiver, presenting an algorithm which achieves an accurate estimation of symmetric channel filters using only a small number of training symbols. The solution is obtained by using either matrix inversion or compressed sensing algorithms. We provide the theoretical results which guarantee the validity of the proposed technique for the DCT1. Numerical simulations illustrate the good behaviour of the proposed algorithm.
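As a rough illustration of the estimation step (not the paper's DCT1-specific derivation), a symmetric channel can be recovered from a short training sequence by writing the received signal as a convolution matrix times the filter taps and solving by least squares or matrix inversion. The channel taps and training length below are assumed.

```python
import numpy as np

# Sketch under assumptions: a generic least-squares recovery of a symmetric
# channel filter from training symbols; the paper's contribution is doing
# this efficiently in the DCT Type-I (even) domain, which is not shown here.
rng = np.random.default_rng(0)
h = np.array([0.2, 0.5, 1.0, 0.5, 0.2])   # symmetric channel (assumed taps)
x = rng.standard_normal(32)                # known training sequence
y = np.convolve(x, h)                      # received (noiseless) signal

# Build the convolution matrix X such that y = X @ h, then invert.
L = len(h)
X = np.column_stack(
    [np.concatenate([np.zeros(k), x, np.zeros(L - 1 - k)]) for k in range(L)]
)
h_est, *_ = np.linalg.lstsq(X, y, rcond=None)
```

Exploiting the symmetry h[k] = h[L-1-k], as the paper does, halves the number of unknowns, and compressed-sensing solvers can replace the inversion when the channel is sparse.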

Relevance: 30.00%

Abstract:

High-quality software, delivered on time and budget, constitutes a critical part of most products and services in modern society. Our government has invested billions of dollars to develop software assets, often to redevelop the same capability many times. Recognizing the waste involved in redeveloping these assets, in 1992 the Department of Defense issued the Software Reuse Initiative. The vision of the Software Reuse Initiative was "To drive the DoD software community from its current 're-invent the software' cycle to a process-driven, domain-specific, architecture-centric, library-based way of constructing software." Twenty years after issuing this initiative, there is evidence of this vision beginning to be realized in nonembedded systems. However, virtually every large embedded system undertaken has incurred large cost and schedule overruns. Investigations into the root cause of these overruns implicate reuse. Why are we seeing improvements in the outcomes of these large-scale nonembedded systems and worse outcomes in embedded systems? This question is the foundation for this research. The experiences of the Aerospace industry have led to a number of questions about reuse and how the industry is employing reuse in embedded systems. For example, does reuse in embedded systems yield the same outcomes as in nonembedded systems? Are the outcomes positive? If the outcomes are different, it may indicate that embedded systems should not use data from nonembedded systems for estimation. Are embedded systems using the same development approaches as nonembedded systems? Does the development approach make a difference? If embedded systems develop software differently from nonembedded systems, it may mean that the same processes do not apply to both types of systems. What about the reuse of different artifacts? Perhaps there are certain artifacts that, when reused, contribute more or are more difficult to use in embedded systems.
Finally, what are the success factors and obstacles to reuse? Are they the same in embedded systems as in nonembedded systems? The research in this dissertation comprises a series of empirical studies using professionals in the aerospace and defense industry as its subjects. The main focus has been to investigate the reuse practices of embedded systems professionals and nonembedded systems professionals and compare the methods and artifacts used against the outcomes. The research has followed a combined qualitative and quantitative design approach. The qualitative data were collected by surveying software and systems engineers, interviewing senior developers, and reading numerous documents and other studies. Quantitative data were derived by converting survey and interview respondents' answers into codings that could be counted and measured. From the search of existing empirical literature, we learned that reuse in embedded systems is in fact significantly different from reuse in nonembedded systems, particularly in effort under a model-based development approach and in quality where the development approach was not specified. The questionnaire showed differences between the development approaches used in embedded projects and nonembedded projects; in particular, embedded systems were significantly more likely to use a heritage/legacy development approach. There was also a difference in the artifacts used, with embedded systems more likely to reuse hardware, test products, and test clusters. Nearly all the projects reported using code, but the questionnaire showed that the reuse of code brought mixed results. One of the differences expressed by the respondents to the questionnaire was the difficulty of reusing code in embedded systems when the platform changed. The semi-structured interviews were performed to tell us why the phenomena in the review of literature and the questionnaire were observed.
We asked respected industry professionals, such as senior fellows, fellows and distinguished members of technical staff, about their experiences with reuse. We learned that many embedded systems used heritage/legacy development approaches because their systems had been around for many years, before models and modeling tools became available. We learned that reuse of code is beneficial primarily when the code does not require modification, but, especially in embedded systems, once it has to be changed, reuse of code yields few benefits. Finally, while platform independence is a goal for many in nonembedded systems, it is certainly not a goal for the embedded systems professionals and in many cases it is a detriment. However, both embedded and nonembedded systems professionals endorsed the idea of platform standardization. Finally, we conclude that while reuse in embedded systems and nonembedded systems is different today, they are converging. As heritage embedded systems are phased out, models become more robust and platforms are standardized, reuse in embedded systems will become more like nonembedded systems.

Relevance: 30.00%

Abstract:

Voters try to avoid wasting their votes even in PR systems. In this paper we make the case that this type of strategic voting can be observed and predicted even in PR systems. Contrary to the literature, we do not see weak institutional incentive structures as making the study of strategic voting a hopeless endeavor. The crucial question for strategic voting is how institutional incentives constrain an individual's decision-making process. Based on expected utility maximization, we put forward a micro-logic of an individual's expectation formation process driven by institutional and dispositional incentives. All well-known institutional incentives to vote strategically that are channelled through the district magnitude are moderated by dispositional factors in order to become relevant for voting decisions. Employing data from Finland, whose electoral system makes it a particularly hard testing ground, we find considerable evidence for observable implications of our theory.
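The expected-utility logic can be made concrete with a toy calculation. This is purely illustrative, not the paper's estimation model; the party labels, utilities and seat probabilities are assumed.

```python
def best_vote(utilities, seat_probabilities):
    """Toy expected-utility voter: discount each party's utility by the
    probability that a vote for it is not wasted (proxied here by the
    party's chance of winning a seat in the district)."""
    expected = {p: utilities[p] * seat_probabilities[p] for p in utilities}
    return max(expected, key=expected.get)

# Hypothetical district: the voter sincerely prefers A, but A is hopeless
# locally, so the expected-utility-maximising (strategic) vote goes to B.
utilities = {"A": 1.0, "B": 0.7, "C": 0.4}
seat_prob = {"A": 0.05, "B": 0.60, "C": 0.90}
strategic_choice = best_vote(utilities, seat_prob)
sincere_choice = best_vote(utilities, {p: 1.0 for p in utilities})
```

In the paper's terms, district magnitude shapes the seat probabilities (the institutional incentive), while dispositional factors determine whether the voter actually forms and acts on such expectations.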

Relevance: 30.00%

Abstract:

Shock tubes have been used successfully by a number of investigators to study the biological effects of variations in environmental pressures (1,2,3). Recently an unusually versatile laboratory pressurization source became available with the capability of consistently reproducing a wide variety of pressure-time phenomena of durations equal to and well beyond those associated with the detonation of nuclear devices (4). Thus it became possible to supplement costly full-scale field research in blast biology carried out at the Nevada Test Site (5,6) by using an economical yet realistic laboratory tool. In one exploratory study employing pressure pulses of 5 to 10 sec duration, wherein the times to maximum overpressure and the magnitudes of the overpressures were varied, a relatively high tolerance of biological media to pressures well over 150 psi was demonstrated (7). In contrast, the present paper will describe the relatively high biological susceptibility to long-duration overpressures in which the pressure rises occurred in single and double fast-rising steps.

Relevance: 30.00%

Abstract:

This paper presents a rectangular array antenna with a suitable signal-processing algorithm that is able to steer the beam in azimuth over a wide frequency band. In a previous approach reported in the literature, an inverse discrete Fourier transform technique was proposed for obtaining the signal-weighting coefficients. That approach was demonstrated for large arrays in which the physical parameters of the antenna elements were not considered. In this paper, a modified signal-weighting algorithm that works for arbitrary-size arrays is described. Its validity is demonstrated in examples of moderate-size arrays with real antenna elements. It is shown that in some cases the original beam-forming algorithm fails, while the new algorithm is able to form the desired radiation pattern over a wide frequency band. The performance of the new algorithm is assessed for two cases: when the mutual coupling between array elements is neglected, and when it is taken into account.
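The baseline being improved on is narrowband phase steering of a uniform array. The sketch below shows that baseline for a single linear cut of the array; the element count, spacing and frequency are illustrative assumptions, and the paper's modified wideband weighting algorithm is not reproduced here.

```python
import numpy as np

# Sketch under assumptions: conventional narrowband phase steering for a
# uniform linear array (one row of a rectangular array), ignoring element
# patterns and mutual coupling.
c = 3e8
f = 3e9                       # assumed operating frequency
d = 0.05                      # assumed element spacing (half wavelength at 3 GHz)
n = np.arange(8)              # 8 elements
steer_deg = 30.0

k = 2 * np.pi * f / c
# Conjugate-phase weights point the main beam at steer_deg.
weights = np.exp(-1j * k * d * n * np.sin(np.radians(steer_deg)))

def array_factor(theta_deg):
    """Normalised array-factor magnitude at angle theta_deg."""
    steering = np.exp(1j * k * d * n * np.sin(np.radians(theta_deg)))
    return float(np.abs(np.sum(weights * steering)) / len(n))

af_at_steer = array_factor(steer_deg)   # unity at the steered angle
af_broadside = array_factor(0.0)        # suppressed away from it
```

Because these weights are correct only at the design frequency, the beam squints as the band widens, which is the problem the paper's frequency-dependent weighting addresses.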

Relevance: 30.00%

Abstract:

The aim of this work was to design and build an instrument that can detect ferrous and non-ferrous objects in conveyed commodities, discriminate between them, and locate the object both along the belt and across its width. The magnetic induction mechanism was used as a means of achieving the objectives of this research. In order to choose the appropriate geometry and size of the induction field source, the field distributions of different source geometries and sizes were studied in detail. From these investigations it was found that the square-loop geometry is the most appropriate field-generating source for the purpose of this project. The phenomenon of field distribution in the conductors was also investigated. An instrument based on a flux-gate magnetometer, able to detect only ferrous objects, was designed and built in the preliminary stages of the work. The instrument was designed such that it could be used to detect ferrous objects in the coal conveyors of power stations. The advantages of employing this detector in the power industry over the present ferrous-metal electromagnetic separators were also considered. The objectives of this project culminated in the design and construction of a ferrous and non-ferrous detector with the ability to discriminate between ferrous and non-ferrous metals and to locate the objects on the conveying system. An experimental study was carried out to test the performance of the equipment in the detection of ferrous and non-ferrous objects of a given size carried on the conveyor belt. The ability of the equipment to discriminate between the types of metals and to locate the object on the belt was also evaluated experimentally. The benefits which can be gained from the industrial implementation of the equipment were considered. Further topics which may be investigated as an extension of this work are given.
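The attraction of the square-loop source can be checked in closed form: the field at the loop centre follows from summing the Biot-Savart contributions of four finite straight segments. The loop side and current below are illustrative assumptions, not the thesis's design values.

```python
import numpy as np

MU0 = 4e-7 * np.pi   # permeability of free space, H/m

def square_loop_centre_field(side, current):
    """Flux density at the centre of a square current loop (Biot-Savart).
    Each side is a finite wire a distance side/2 from the centre,
    subtending 45 degrees on either side:
        B_side = mu0*I/(4*pi*d) * (sin 45 + sin 45),
    and the four sides add constructively."""
    d = side / 2.0
    b_side = MU0 * current / (4 * np.pi * d) * 2 * np.sin(np.pi / 4)
    return 4 * b_side

# Hypothetical generating loop: 30 cm side carrying 1 A.
b_centre = square_loop_centre_field(side=0.3, current=1.0)
```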

Relevance: 30.00%

Abstract:

This thesis presents improvements to optical transmission systems through the use of optical solitons as a digital transmission format, both theoretically and experimentally. An introduction to the main concepts and to the impairments optical fibre imposes on pulse transmission is included, before introducing the concept of solitons in optically amplified communications and the problems of soliton system design. The theoretical work studies two fibre dispersion profiling schemes and a soliton launch improvement. The first provides superior pulse transmission by optimally tailoring the fibre dispersion to better follow the power, and hence nonlinearity, decay, thus allowing soliton transmission for longer amplifier spacings and shorter pulse widths than normally possible. The second profiling scheme examines the use of dispersion compensating fibre in the context of soliton transmission over existing, standard fibre systems. The limits for solitons in uncompensated standard fibre are assessed, before the potential benefits of dispersion compensating fibre included as part of each amplifier are shown. The third theoretical investigation provides a simple improvement to the propagation of solitons in a highly perturbed system. By introducing a section of fibre of the correct length prior to the first amplifier span, the soliton shape can be better coupled into the system, thus providing an improved "average soliton" propagation model. The experimental work covers two areas. The first is pulse sources, an important issue for soliton systems: three potential lasers are studied, two ring-laser configurations and one semiconductor device with external pulse shaping. The second area studies soliton transmission using a recirculating loop, reviewing the advantages and drawbacks of such an experiment in system testing and design. One particular application of the recirculating loop is also examined, using a novel method of pulse-shape stabilisation over long distances with low jitter.
The future for nonlinear optical communications is considered with the thesis conclusions.
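The design balance underlying the thesis can be stated in one formula: a fundamental soliton forms when the soliton order N = sqrt(gamma * P0 * T0^2 / |beta2|) equals one, which fixes the peak power for a given pulse width and fibre. The parameter values below are typical illustrative numbers, not the thesis's systems.

```python
import numpy as np

# Sketch with assumed, typical fibre values (not the thesis's parameters):
beta2 = -20e-27     # group-velocity dispersion, s^2/m (anomalous)
gamma = 1.3e-3      # Kerr nonlinearity coefficient, 1/(W*m)
t0 = 5e-12          # soliton width parameter, s

# Peak power at which dispersion and nonlinearity exactly balance (N = 1):
p0 = abs(beta2) / (gamma * t0**2)
order = np.sqrt(gamma * p0 * t0**2 / abs(beta2))

# Shorter pulses shrink the dispersion length, making lumped amplification
# a stronger perturbation; this is why tailoring the dispersion profile to
# the decaying power helps.
dispersion_length = t0**2 / abs(beta2)    # metres
```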

Relevance: 30.00%

Abstract:

We investigate the feasibility of simultaneously suppressing amplification noise and nonlinearity, the two most fundamental limiting factors in modern optical communication. To accomplish this task we developed a general design optimisation technique based on the concepts of noise and nonlinearity management. We demonstrate the efficiency of the approach by applying it to the design optimisation of transmission lines with periodic dispersion compensation using Raman and hybrid Raman-EDFA amplification. Moreover, using nonlinearity management considerations, we show that the optimal performance in high bit-rate dispersion-managed fibre systems with hybrid amplification is achieved for a certain amplifier spacing, which differs from the well-known optimum for noise performance corresponding to fully distributed amplification. Complete knowledge of the signal statistics, required for an accurate estimation of the bit error rate (BER), is crucial for modern transmission links with strong inherent nonlinearity. We therefore implemented the advanced multicanonical Monte Carlo (MMC) method, acknowledged for its efficiency in estimating distribution tails. We have accurately computed marginal probability density functions for soliton parameters by numerically modelling the Fokker-Planck equation with the MMC simulation technique. Moreover, applying the MMC method, we have studied the BER penalty caused by deviations from the optimal decision level in systems employing in-line 2R optical regeneration. We have demonstrated that in such systems an analytical linear approximation that better fits the central part of the regenerator's nonlinear transfer function produces a more accurate approximation of the BER and of the BER penalty. Finally, we present a statistical analysis of the RZ-DPSK optical signal at a direct-detection receiver with Mach-Zehnder interferometer demodulation.
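Why a multicanonical method is needed can be seen from a plain Monte Carlo tail estimate: resolving a probability p requires on the order of 1/p samples, which is hopeless at BER levels of 1e-9 and below. The toy example below (assumed Gaussian statistics, arbitrary threshold) shows plain MC barely resolving even a 1e-3 tail.

```python
import math
import random

# Toy illustration, not the MMC method itself: plain Monte Carlo estimation
# of a Gaussian tail probability. Threshold and sample count are arbitrary.
random.seed(1)
threshold = 3.0
samples = 100_000
hits = sum(1 for _ in range(samples) if random.gauss(0.0, 1.0) > threshold)
p_mc = hits / samples

# Exact tail probability Q(3) = 0.5 * erfc(3 / sqrt(2)), about 1.35e-3.
p_exact = 0.5 * math.erfc(threshold / math.sqrt(2))
```

With only ~135 expected hits the relative error is already near 10%; at BER ~ 1e-9 plain MC would need around 1e11 samples, whereas MMC iteratively biases the sampling towards the tail to reach such levels efficiently.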

Relevance: 30.00%

Abstract:

This research primarily focused on identifying the formulation parameters which control the efficacy of liposomes as delivery systems to enhance the delivery of poorly soluble drugs. Preliminary studies focused on the drug loading of ibuprofen within vesicle systems. Initially both liposomal and niosomal formulations were screened for their drug-loading capacity: liposomal systems were shown to offer significantly higher ibuprofen loading, and thereafter lipid-based systems were further investigated. Given the key role cholesterol is known to play in the stability of bilayer vesicles, the optimum cholesterol content in terms of drug loading and release of poorly soluble drugs was then investigated. From these studies a concentration of 11 total molar % of cholesterol was used as a benchmark for all further formulations. Investigating the effect of liposome composition on several low-solubility drugs, drug loading was shown to be enhanced by adopting longer-chain lipids, cationic lipids, and drugs of decreasing molecular weight. Drug release was increased by using cationic lipids and drugs of lower molecular weight; conversely, a reduction was noted when employing longer-chain lipids, supporting the rationale that longer-chain lipids produce more stable liposomes, a theory also supported by results obtained via Langmuir studies, although it was revealed that stability is also dependent on geometric features associated with the lipid chain moiety. Interestingly, a reduction in drug loading appeared to be induced when symmetrical phospholipids were substituted with lipids constituting asymmetrical alkyl chain groups, further highlighting the importance of lipid geometry. Combining a symmetrical lipid with an asymmetrical derivative enhanced the encapsulation of one hydrophobic drug while reducing that of another, suggesting the importance of drug characteristics.
Phosphatidylcholine liposomes could successfully be prepared (and visualised using transmission electron microscopy) from fatty alcohols, therefore offering an alternative liposomal stabiliser to cholesterol. The results obtained revealed that liposomes containing tetradecanol within their formulation share similar vesicle size, drug encapsulation, surface charge, and toxicity profiles with liposomes formulated with cholesterol; however, the tetradecanol preparation appeared to release considerably more drug during stability studies. Langmuir monolayer studies revealed that the condensing influence of tetradecanol is less than that of cholesterol, suggesting that this reduced intercalation could explain why the tetradecanol formulation released more drug than the cholesterol formulations. Environmental scanning electron microscopy (ESEM) was used to analyse the morphology and stability of liposomes. These investigations indicated that the presence of drugs within the liposomal bilayer was able to enhance the stability of the bilayers against collapse under reduced hydration conditions. A similar effect was suggested for the presence of charged lipids within the formulation under reduced hydration conditions compared with the neutral counterpart. However, the applicability of using ESEM as a new method to investigate liposome stability appears less valid than first hoped, since the results are often open to varied interpretation and do not in some cases provide a robust set of data to support conclusions.

Relevance: 30.00%

Abstract:

Firstly, we numerically model a practical 20 Gb/s undersea configuration employing the Return-to-Zero Differential Phase Shift Keying (RZ-DPSK) data format. The modelling is performed using the Split-Step Fourier Method to solve the Generalised Nonlinear Schrödinger Equation. We optimise the dispersion map and per-channel launch power of these channels and investigate how the choice of pre-/post-compensation can influence the performance. After obtaining these optimal configurations, we investigate the bit error rate estimation of these systems, and we see that estimation based on Gaussian statistics of the electrical current is appropriate for systems of this type, indicating quasi-linear behaviour. The introduction of narrower pulses due to the deployment of quasi-linear transmission decreases the tolerance to chromatic dispersion and intra-channel nonlinearity. We used tools from mathematical statistics to study the behaviour of these channels in order to develop new methods to estimate the bit error rate. In the final section, we consider the estimation of the eye closure penalty, a popular measure of signal distortion. Using a numerical example and assuming the symmetry of the eye closure, we see that we can simply estimate the eye closure penalty using Gaussian statistics. We also see that the statistics of the logical ones dominate the statistics of signal distortion in the case of Return-to-Zero On-Off Keying configurations.
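The numerical core of the modelling, one symmetric split step of the Split-Step Fourier Method, can be sketched compactly: dispersion is applied in the frequency domain, nonlinearity in the time domain. The grid and fibre parameters below are illustrative (a lossless fibre and a fundamental soliton test input), not the modelled undersea link.

```python
import numpy as np

# Illustrative SSFM sketch for the lossless nonlinear Schroedinger equation;
# all parameter values are assumptions, not the 20 Gb/s configuration.
beta2 = -20e-27          # dispersion, s^2/m
gamma = 1.3e-3           # nonlinearity, 1/(W*m)
nt, t_span = 1024, 400e-12
t = np.linspace(-t_span / 2, t_span / 2, nt, endpoint=False)
w = 2 * np.pi * np.fft.fftfreq(nt, t[1] - t[0])

def ssfm_step(a, dz):
    """One symmetric split step: half dispersion, full nonlinearity,
    half dispersion."""
    half_disp = np.exp(1j * (beta2 / 2) * w**2 * (dz / 2))
    a = np.fft.ifft(half_disp * np.fft.fft(a))
    a = a * np.exp(1j * gamma * np.abs(a) ** 2 * dz)
    return np.fft.ifft(half_disp * np.fft.fft(a))

# Fundamental soliton input: dispersion and nonlinearity should balance,
# so the peak power is preserved under propagation.
t0 = 5e-12
p0 = abs(beta2) / (gamma * t0**2)
a0 = np.sqrt(p0) / np.cosh(t / t0)
a = a0.copy()
for _ in range(100):
    a = ssfm_step(a, 50.0)               # 100 steps of 50 m = 5 km
peak_drift = abs(np.max(np.abs(a) ** 2) - p0) / p0
```

Each step is a pair of unitary operations, so the pulse energy is conserved exactly and the step size controls only the splitting error.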

Relevance: 30.00%

Abstract:

We experimentally demonstrate the performance enhancements enabled by the weighted digital back propagation method for 28 Gbaud PM-16QAM transmission systems over a 250 km ultra-large-area fibre, using only one back-propagation step for the entire link and enabling up to 3 dB improvement in power tolerance with respect to linear compensation only. We observe that this is roughly the same improvement that can be obtained with the conventional, computationally heavy, non-weighted digital back propagation compensation with one step per span. As a further benchmark, we analyze the performance improvement as a function of the number of steps, and show that it saturates at approximately 20 steps per span, at which point a 5 dB improvement in power tolerance is obtained with respect to linear compensation only. Furthermore, we show that coarse-step self-phase modulation compensation is inefficient in wavelength-division-multiplexed transmission.
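The principle that the weighted one-step method approximates can be shown in miniature: digital back propagation passes the received field through a virtual fibre with negated dispersion and nonlinearity, applying the split-step operators in reverse order. With one inverse step per forward step the channel inverts essentially exactly; the paper's point is retaining most of the gain with a single weighted step for the whole link. The parameters below are illustrative, not the 28 Gbaud PM-16QAM experiment.

```python
import numpy as np

# Toy single-channel, lossless illustration of digital back propagation;
# fibre and pulse parameters are assumed, not the paper's experiment.
beta2, gamma = -20e-27, 1.3e-3
nt, t_span = 1024, 400e-12
t = np.linspace(-t_span / 2, t_span / 2, nt, endpoint=False)
w = 2 * np.pi * np.fft.fftfreq(nt, t[1] - t[0])

def propagate(a, dz, steps):
    """Forward split-step propagation: dispersion, then nonlinearity."""
    disp = np.exp(1j * (beta2 / 2) * w**2 * dz)
    for _ in range(steps):
        a = np.fft.ifft(disp * np.fft.fft(a))
        a = a * np.exp(1j * gamma * np.abs(a) ** 2 * dz)
    return a

def back_propagate(a, dz, steps):
    """DBP: undo each forward step with negated operators, reverse order."""
    disp_inv = np.exp(-1j * (beta2 / 2) * w**2 * dz)
    for _ in range(steps):
        a = a * np.exp(-1j * gamma * np.abs(a) ** 2 * dz)
        a = np.fft.ifft(disp_inv * np.fft.fft(a))
    return a

tx = np.sqrt(0.5) / np.cosh(t / 10e-12)       # transmitted pulse (assumed)
rx = propagate(tx, 100.0, 50)                  # 5 km of nonlinear fibre
eq = back_propagate(rx, 100.0, 50)             # receiver-side DBP
recovery_error = float(np.max(np.abs(eq - tx)))
```

Coarsening the back propagation to a handful of lumped steps, or one weighted step as in the paper, trades this exactness for computational cost.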