949 results for Performance degradation
Abstract:
IEEE 802.15.4 is a relatively new standard designed for low-power, low-data-rate wireless sensor networks (WSNs), which have a wide range of applications, e.g., environment monitoring, e-health, and home and industry automation. In this paper, we investigate the problem of hidden devices in coverage-overlapped IEEE 802.15.4 WSNs, which is likely to arise when multiple 802.15.4 WSNs are deployed closely and independently. We consider a typical scenario of two 802.15.4 WSNs with partial coverage overlap and propose a Markov-chain-based analytical model to reveal the performance degradation due to the hidden devices arising from the coverage overlap. The impacts of the hidden devices and network sleeping modes on saturated throughput and energy consumption are modeled. The analytical model is verified by simulations and can provide insights into network design and planning when multiple 802.15.4 WSNs are deployed closely. © 2013 IEEE.
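The hidden-device effect described above can be illustrated with a toy Monte Carlo sketch. This is a deliberately simplified slotted model with hypothetical parameters, not the paper's Markov-chain analysis:

```python
import random

def collision_rate(n_a, n_b, p, hidden, slots=20000, seed=1):
    """Fraction of slots in which two co-located networks A and B
    collide.  Each of the n_a + n_b devices transmits with probability
    p per slot.  With perfect carrier sensing (hidden=False), one
    network defers to the other, so cross-network collisions never
    happen in this idealization; hidden devices (hidden=True) cannot
    sense the other network, so simultaneous activity collides."""
    rng = random.Random(seed)
    collisions = 0
    for _ in range(slots):
        tx_a = any(rng.random() < p for _ in range(n_a))
        tx_b = any(rng.random() < p for _ in range(n_b))
        if hidden and tx_a and tx_b:
            collisions += 1
    return collisions / slots
```

With five devices per network and p = 0.1, roughly (1 − 0.9⁵)² ≈ 17% of slots suffer a cross-network collision once the devices are hidden from each other, hinting at the throughput degradation the analytical model quantifies.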
Abstract:
As optical coherence tomography (OCT) becomes widespread, validation and characterization of systems become important. Reference standards are required to qualitatively and quantitatively compare the performance of different systems; they would also allow the performance degradation of a system over time to be monitored. In this report, the properties of femtosecond-inscribed structures from three different systems for making suitable OCT characterization artefacts (phantoms) are analyzed. The parameter test samples are directly inscribed inside transparent materials. The structures are characterized using an optical microscope and a swept-source OCT. The high reproducibility of the inscribed structures shows high potential for producing multi-modality OCT calibration and characterization phantoms, such that a single artefact can be used to characterize multiple performance parameters such as resolution, linearity, distortion, and imaging depth. © 2012 SPIE.
Abstract:
Relay selection has been considered an effective method to improve the performance of cooperative communication. However, the Channel State Information (CSI) used in relay selection can be outdated, yielding severe performance degradation of cooperative communication systems. In this paper, we investigate relay selection under outdated CSI in a Decode-and-Forward (DF) cooperative system to improve its outage performance. We formulate an optimization problem in which the set of relays that forward data is optimized to minimize the probability of outage conditioned on the outdated CSI of all the decodable relays' links. We then propose a novel multiple-relay selection strategy based on the solution of the optimization problem. Simulation results show that the proposed relay selection strategy achieves a large improvement in outage performance compared with the existing relay selection strategies combating outdated CSI given in the literature.
Abstract:
In this paper we evaluate and compare two representative and popular distributed processing engines for large-scale big data analytics: Spark and the graph-based engine GraphLab. We design a benchmark suite including representative algorithms and datasets to compare the performance of the computing engines in terms of running time, memory and CPU usage, and network and I/O overhead. The benchmark suite is tested on both a local computer cluster and virtual machines on the cloud. By varying the number of computers and the amount of memory, we examine the scalability of the computing engines with increasing computing resources (such as CPU and memory). We also run cross-evaluation of generic and graph-based analytic algorithms over graph-processing and generic platforms to identify the potential performance degradation if only one processing engine is available. It is observed that both computing engines show good scalability with an increase of computing resources. While GraphLab largely outperforms Spark for graph algorithms, it has running time performance close to Spark's for non-graph algorithms. Additionally, the running time with Spark for graph algorithms over cloud virtual machines is observed to increase by almost 100% compared to local computer clusters.
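The per-algorithm measurements described (running time, memory usage) can be collected with a small harness. This is a hypothetical single-process helper for illustration, not the paper's benchmark suite; cluster-wide CPU, network and I/O metrics would need external tooling:

```python
import time
import tracemalloc

def measure(fn, *args, **kwargs):
    """Run fn and report its result, wall-clock time, and peak Python
    heap usage.  A toy stand-in for the per-algorithm measurement a
    benchmark suite collects."""
    tracemalloc.start()
    t0 = time.perf_counter()
    result = fn(*args, **kwargs)
    elapsed = time.perf_counter() - t0
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return result, elapsed, peak

# Example: measure a toy "analytic algorithm"
result, secs, peak_bytes = measure(lambda n: sum(i * i for i in range(n)), 100_000)
```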
Abstract:
In order to cope with the ever-increasing demand for larger transmission bandwidth, Radio over Fiber (RoF) technology is a very beneficial solution. These systems are expected to play a major role within future fifth-generation wireless networks due to their inherent capillary distribution properties. Nonlinear compensation techniques are becoming increasingly important to improve the performance of telecommunication channels by compensating for channel nonlinearities. Indeed, significant bounds on the technology's usability and performance degradation arise from the nonlinear characteristics of the optical transmitter and the nonlinear generation of spurious frequencies, which, in the case of RoF links exploiting Directly Modulated Lasers, has the combined effect of laser chirp and optical fiber dispersion among its prevailing causes. The purpose of this research is to analyze some of the main causes of harmonic and intermodulation distortion present in RoF links, and to suggest a solution to reduce their effects through a digital predistortion technique. Predistortion is an effective solution for linearization, and the analysis demonstrates that the laser's chirp and the optical fiber's dispersion are the main causes of harmonic distortion. The improvements illustrated are only theoretical, presented from a feasibility point of view. The simulations performed lead to significant improvements for short and long radio-over-fiber link lengths. The simulation algorithm has been implemented in MATLAB. The effects of chirp and fiber nonlinearity in a directly modulated fiber transmission system are investigated by simulation, and a cost-effective and rather simple technique for compensating these effects is discussed. A detailed description of its functional model is given, and its attractive features, both in terms of quality improvement of the received signal and cost-effectiveness of the system, are illustrated.
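The digital predistortion idea can be sketched with a memoryless polynomial model. This is a simplification for illustration only — the impairments in the abstract (chirp plus dispersion) are dynamic, and the coefficient a2 here is hypothetical:

```python
import math

def channel(x, a2=0.1):
    """Toy memoryless transmitter nonlinearity: y = x + a2*x^2."""
    return x + a2 * x * x

def predistort(x, a2=0.1):
    """First-order inverse: subtract the estimated 2nd-order term so
    that channel(predistort(x)) = x + O(a2^2)."""
    return x - a2 * x * x

def second_harmonic_power(a2, amp=1.0, n=1024, pre=False):
    """Power at 2*f0 of a sinusoid passed through the channel,
    estimated by projecting onto the 2*f0 DFT bin."""
    re = im = 0.0
    for k in range(n):
        t = k / n
        x = amp * math.sin(2 * math.pi * t)
        if pre:
            x = predistort(x, a2)
        y = channel(x, a2)
        re += y * math.cos(2 * math.pi * 2 * t)
        im += y * math.sin(2 * math.pi * 2 * t)
    return (re * re + im * im) / (n * n)
```

Passing a sine through the raw channel leaves a second-harmonic term of amplitude a2/2; with the first-order inverse applied before the channel, the residual second harmonic shrinks to O(a2³).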
Abstract:
This paper presents a multi-class AdaBoost based on incorporating an ensemble of binary AdaBoosts organized as a Binary Decision Tree (BDT). Binary AdaBoost has proved extremely successful at producing accurate classifiers, but it does not perform very well on multi-class problems. To avoid this performance degradation, the multi-class problem is divided into a number of binary problems, and binary AdaBoost classifiers are invoked to solve these classification problems. This approach is tested with a dataset consisting of 6500 binary images of traffic signs. Haar-like features of these images are computed and the multi-class AdaBoost classifier is invoked to classify them. Classification rates of 96.7% and 95.7% are achieved for the traffic sign borders and pictograms, respectively. The proposed approach is also evaluated using a number of standard datasets such as Iris, Wine, and Yeast. The performance of the proposed BDT classifier is quite high compared with the state of the art, and it converges very fast to a solution, indicating that it is a reliable classifier.
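The divide-into-binary-problems scheme can be sketched as a small decision tree of binary classifiers. This is a toy 1-D illustration in which a trivial threshold stump stands in for binary AdaBoost; the class and feature choices are hypothetical:

```python
class MidpointStump:
    """Trivial 1-D binary learner standing in for binary AdaBoost:
    thresholds at the midpoint of the two class means."""
    def fit(self, xs, ys):
        m0 = sum(x for x, y in zip(xs, ys) if y == 0) / ys.count(0)
        m1 = sum(x for x, y in zip(xs, ys) if y == 1) / ys.count(1)
        self.thr = (m0 + m1) / 2
        self.flip = m1 < m0
        return self
    def predict(self, x):
        side = x >= self.thr
        return int(side != self.flip)

class BDTNode:
    """Binary-decision-tree multi-class scheme: each internal node
    splits the remaining label set in half and trains one binary
    classifier to route samples; leaves hold a single label."""
    def __init__(self, labels, xs, ys, learner=MidpointStump):
        self.labels = sorted(labels)
        if len(self.labels) == 1:
            self.leaf = self.labels[0]
            return
        self.leaf = None
        half = len(self.labels) // 2
        self.left_set = set(self.labels[:half])
        data = [(x, 0 if y in self.left_set else 1)
                for x, y in zip(xs, ys) if y in set(self.labels)]
        bx, by = [d[0] for d in data], [d[1] for d in data]
        self.clf = learner().fit(bx, by)
        self.left = BDTNode(self.labels[:half], xs, ys, learner)
        self.right = BDTNode(self.labels[half:], xs, ys, learner)
    def predict(self, x):
        if self.leaf is not None:
            return self.leaf
        branch = self.left if self.clf.predict(x) == 0 else self.right
        return branch.predict(x)
```

Each internal node routes a sample toward one half of the remaining labels, so K classes need only K − 1 binary classifiers along the tree.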
Abstract:
The performance of supersonic engine inlets and external aerodynamic surfaces can be critically affected by shock wave/boundary layer interactions (SBLIs), whose severe adverse pressure gradients can cause boundary layer separation. Currently such problems are avoided primarily through the use of boundary layer bleed/suction, which can be a source of significant performance degradation. This study investigates a novel type of flow control device called micro-vortex generators (µVGs), which may offer similar control benefits without the bleed penalties. µVGs have the ability to alter the near-wall structure of compressible turbulent boundary layers to provide increased mixing of high speed fluid, which improves the boundary layer health when subjected to a flow disturbance. Due to their small size, µVGs are embedded in the boundary layer, which provides reduced drag compared to traditional vortex generators, while they are cost-effective, physically robust and do not require a power source. To examine the potential of µVGs, a detailed experimental and computational study of micro-ramps in a supersonic boundary layer at Mach 3 subjected to an oblique shock was undertaken. The experiments employed a flat plate boundary layer with an impinging oblique shock with downstream total pressure measurements. The moderate Reynolds number of 3,800 based on displacement thickness allowed the computations to use Large Eddy Simulation without the subgrid stress model (LES-nSGS). The LES predictions indicated that the shock changes the structure of the turbulent eddies and the primary vortices generated from the micro-ramp. Furthermore, they generally reproduced the experimentally obtained mean velocity profiles, unlike similarly-resolved RANS computations.
The experiments and the LES results indicate that the micro-ramps, whose height is h≈0.5δ, can significantly reduce boundary layer thickness and improve downstream boundary layer health as measured by the incompressible shape factor, H. Regions directly behind the ramp centerline tended to have increased boundary layer thickness indicating the significant three-dimensionality of the flow field. Compared to baseline sizes, smaller micro-ramps yielded improved total pressure recovery. Moving the smaller ramps closer to the shock interaction also reduced the displacement thickness and the separated area. This effect is attributed to decreased wave drag and the closer proximity of the vortex pairs to the wall. In the second part of the study, various types of µVGs are investigated including micro-ramps and micro-vanes. The results showed that vortices generated from µVGs can partially eliminate shock induced flow separation and can continue to entrain high momentum flux for boundary layer recovery downstream. The micro-ramps resulted in thinner downstream displacement thickness in comparison to the micro-vanes. However, the strength of the streamwise vorticity for the micro-ramps decayed faster due to dissipation especially after the shock interaction. In addition, the close spanwise distance between each vortex for the ramp geometry causes the vortex cores to move upwards from the wall due to induced upwash effects. Micro-vanes, on the other hand, yielded an increased spanwise spacing of the streamwise vortices at the point of formation. This resulted in streamwise vortices staying closer to the wall with less circulation decay, and the reduction in overall flow separation is attributed to these effects. Two hybrid concepts, named “thick-vane” and “split-ramp”, were also studied where the former is a vane with side supports and the latter has a uniform spacing along the centerline of the baseline ramp. 
These geometries behaved similarly to the micro-vanes in terms of the streamwise vorticity and the ability to reduce flow separation, but are more physically robust than the thin vanes. Next, the Mach number effect on flow past the micro-ramps (h≈0.5δ) is examined in a supersonic boundary layer at M=1.4, 2.2 and 3.0, but with no shock waves present. The LES results indicate that micro-ramps have a greater impact at lower Mach number near the device, but their influence decays faster than in the higher Mach number cases. This may be due to the additional dissipation caused by the primary vortices with smaller effective diameter at the lower Mach number, such that their coherency is easily lost, causing the streamwise vorticity and the turbulent kinetic energy to decay quickly. The normal distance between the vortex core and the wall had similar growth, indicating weak correlation with the Mach number; however, the spanwise distance between the two counter-rotating cores further increases with lower Mach number. Finally, various µVGs including the micro-ramp, the split-ramp and a new hybrid concept, the “ramped-vane”, are investigated under normal shock conditions at a Mach number of 1.3. In particular, the ramped-vane was studied extensively by varying its size, the interior spacing of the device and its streamwise position with respect to the shock. The ramped-vane provided increased vorticity compared to the micro-ramp and the split-ramp. This significantly reduced the separation length downstream of the device centerline, where a larger ramped-vane with increased trailing edge gap yielded a fully attached flow at the centerline of the separation region. The results from coarse-resolution LES studies show that the larger ramped-vane provided the greatest reductions in turbulent kinetic energy and pressure fluctuation downstream of the shock compared to the other devices.
Additional benefits include negligible drag, while reductions in displacement thickness and shape factor were seen compared to other devices. Increased wall shear stress and pressure recovery were found with the larger ramped-vane in the baseline-resolution LES studies, which also showed decreased amplitudes of the pressure fluctuations downstream of the shock.
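The incompressible shape factor H used above as the boundary-layer "health" metric is the ratio of displacement thickness to momentum thickness. A minimal numerical sketch, assuming a 1/7-power-law velocity profile purely for illustration:

```python
def shape_factor(u_of_eta, n=10000):
    """Incompressible shape factor H = delta*/theta, where
    delta* = integral of (1 - u/U) dy  and
    theta  = integral of (u/U)(1 - u/U) dy,
    integrated over eta = y/delta by the midpoint rule."""
    disp = mom = 0.0
    for k in range(n):
        eta = (k + 0.5) / n
        u = u_of_eta(eta)
        disp += (1.0 - u) / n
        mom += u * (1.0 - u) / n
    return disp / mom

# 1/7-power-law turbulent profile: u/U = eta**(1/7); analytically H = 9/7
H = shape_factor(lambda eta: eta ** (1.0 / 7.0))
```

Lower H means a fuller, healthier profile; a turbulent 1/7-power-law layer gives H ≈ 1.29, whereas layers approaching separation show much larger values.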
Abstract:
Transmitting sensitive data over non-secret channels has always required encryption technologies to ensure that the data arrives without exposure to eavesdroppers. The Internet has made it possible to transmit vast volumes of data more rapidly and cheaply and to a wider audience than ever before. At the same time, strong encryption makes it possible to send data securely, to digitally sign it, to prove it was sent or received, and to guarantee its integrity. The Internet and encryption make bulk transmission of data a commercially viable proposition. However, there are implementation challenges to solve before commercial bulk transmission becomes mainstream. Powerful encryption algorithms have a performance cost, and may affect quality of service. Without encryption, intercepted data may be illicitly duplicated and re-sold, or its commercial value diminished because its secrecy is lost. Performance degradation and the potential for commercial loss discourage the bulk transmission of data over the Internet in any commercial application. This paper outlines technical solutions to these problems. We develop new technologies and combine existing ones in new and powerful ways to minimise commercial loss without compromising performance or inflating overheads.
Abstract:
Electrical Submersible Pumps (ESPs) are used as an artificial lift technique. However, pumping viscous oil is generally associated with low Reynolds number flows. This condition leads to a performance degradation with respect to the performance expected from regular operation with water, for which most centrifugal pumps are originally designed. These issues are considered in this investigation through a numerical study of the flow in two different multistage, semi-axial-type ESPs. The investigation is carried out numerically using a Computational Fluid Dynamics (CFD) package, in which the transient RANS equations are solved numerically. The turbulence is modeled using the SST model. Head curves for several operating conditions are compared with the manufacturer's curves and experimental data for a three-stage ESP, showing good agreement for a wide range of fluid viscosities and rotational speeds. Dimensionless numbers, among them the normalized specific speed and the rotational Reynolds number Re, are used to investigate performance degradation of the ESPs. In addition, flow phenomena through the impellers of the ESPs are investigated using the flow fields from the numerical results. Results show that performance degradation is directly related to the rotational Reynolds number, Re. In addition, it was verified that performance degradation occurs at constant normalized specific speed, which shows that performance degradation occurs similarly for different centrifugal pumps. Moreover, experimental data and numerical results agreed with a correlation from the literature between head and flow correction factors proposed by Stepanoff (1967). A definition of a modified Reynolds number was proposed that relates the head correction factor to viscosity. A correlation between the head correction factor and the modified Reynolds number was proposed, which agreed well with numerical and experimental data. A method to predict performance degradation based on these correlations was then proposed and compared with others from the literature.
In general, results and conclusions from this work can also be useful in providing more information about the flow of highly viscous fluids in pumps, especially in semi-axial, multistage ESPs.
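The role of the rotational Reynolds number can be sketched numerically. The power-law form and constants of the head correction below are assumptions for illustration only, not the correlation proposed in the abstract:

```python
import math

def rotational_reynolds(omega_rpm, d_impeller_m, nu_m2s):
    """Rotational Reynolds number Re = omega * D^2 / nu, with omega in
    rad/s, impeller diameter D in m, kinematic viscosity nu in m^2/s."""
    omega = omega_rpm * 2.0 * math.pi / 60.0
    return omega * d_impeller_m ** 2 / nu_m2s

def head_correction(re, re_ref=1.0e7, exponent=0.1):
    """Hypothetical head correction factor CH = min(1, (Re/Re_ref)^k):
    CH -> 1 recovers the water performance, smaller CH means stronger
    viscous degradation.  Re_ref and the exponent are made up here."""
    return min(1.0, (re / re_ref) ** exponent)

# Example: same pump at 3500 rpm with water (~1 cSt) vs heavy oil (~500 cSt)
re_water = rotational_reynolds(3500, 0.1, 1.0e-6)
re_oil = rotational_reynolds(3500, 0.1, 5.0e-4)
```

The viscous case yields a much lower Re and hence a lower head correction factor, matching the trend the abstract reports: degradation is governed by the rotational Reynolds number.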
Abstract:
Over the last decade, the success of social networks has significantly reshaped how people consume information. Recommendation of content based on user profiles is well received. However, as users become dominantly mobile, little is done to consider the impacts of the wireless environment, especially the capacity constraints and the changing channel. In this dissertation, we investigate a centralized wireless content delivery system, aiming to optimize overall user experience given the capacity constraints of the wireless networks by deciding what contents to deliver, when, and how. We propose a scheduling framework that incorporates content-based reward and deliverability. Our approach utilizes the broadcast nature of wireless communication and the social nature of content, through multicasting and precaching. Results indicate that this novel joint optimization approach outperforms existing layered systems that separate recommendation and delivery, especially when the wireless network is operating at maximum capacity. Utilizing a limited number of transmission modes, we significantly reduce the complexity of the optimization. We also introduce the design of a hybrid system to handle transmissions for both system-recommended contents ('push') and active user requests ('pull'). Further, we extend the joint optimization framework to a wireless infrastructure with multiple base stations. The problem becomes much harder in that there are many more system configurations, including but not limited to power allocation and how resources are shared among the base stations ('out-of-band', in which base stations transmit with dedicated spectrum resources and thus no interference; and 'in-band', in which they share the spectrum and need to mitigate interference). We propose a scalable two-phase scheduling framework: 1) each base station obtains delivery decisions and resource allocation individually; 2) the system consolidates the decisions and allocations, reducing redundant transmissions.
Additionally, if the social network applications could provide the predictions of how the social contents disseminate, the wireless networks could schedule the transmissions accordingly and significantly improve the dissemination performance by reducing the delivery delay. We propose a novel method utilizing: 1) hybrid systems to handle active disseminating requests; and 2) predictions of dissemination dynamics from the social network applications. This method could mitigate the performance degradation for content dissemination due to wireless delivery delay. Results indicate that our proposed system design is both efficient and easy to implement.
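The two-phase idea above — independent per-base-station decisions followed by consolidation of redundant transmissions — can be sketched as a toy model. The reward (number of interested users) and all names here are hypothetical; the dissertation's actual rewards, capacities and interference handling are richer:

```python
def phase1_per_bs(bs_users, interests, capacity):
    """Phase 1: each base station greedily picks up to `capacity`
    contents covering the most of its own users (toy reward =
    number of interested users)."""
    plans = {}
    for bs, users in bs_users.items():
        scores = {}
        for u in users:
            for c in interests.get(u, ()):
                scores[c] = scores.get(c, 0) + 1
        ranked = sorted(scores, key=lambda c: (-scores[c], c))
        plans[bs] = ranked[:capacity]
    return plans

def phase2_consolidate(plans, bs_users, interests):
    """Phase 2: drop a content from a BS plan when every interested
    user it would serve is already covered by a BS kept earlier,
    reducing redundant transmissions."""
    covered = set()  # (user, content) pairs already served
    final = {}
    for bs in sorted(plans):
        kept = []
        for c in plans[bs]:
            targets = {(u, c) for u in bs_users[bs] if c in interests.get(u, ())}
            if targets - covered:
                kept.append(c)
                covered |= targets
        final[bs] = kept
    return final
```

For two base stations that both plan to multicast the same item to an overlapping user, consolidation keeps one transmission and drops the redundant one.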
Abstract:
The atomic-level structure and chemistry of materials ultimately dictate their observed macroscopic properties and behavior. As such, an intimate understanding of these characteristics allows for better materials engineering and improvements in the resulting devices. In our work, two material systems were investigated using advanced electron and ion microscopy techniques, relating the measured nanoscale traits to overall device performance. First, transmission electron microscopy and electron energy loss spectroscopy (TEM-EELS) were used to analyze interfacial states at the semiconductor/oxide interface in wide bandgap SiC microelectronics. This interface contains defects that significantly diminish SiC device performance, and their fundamental nature remains generally unresolved. The impacts of various microfabrication techniques were explored, examining both current commercial and next-generation processing strategies. In further investigations, machine learning techniques were applied to the EELS data, revealing previously hidden Si, C, and O bonding states at the interface, which help explain the origins of mobility enhancement in SiC devices. Finally, the impacts of SiC bias temperature stressing on the interfacial region were explored. In the second system, focused ion beam/scanning electron microscopy (FIB/SEM) was used to reconstruct 3D models of solid oxide fuel cell (SOFC) cathodes. Since the specific degradation mechanisms of SOFC cathodes are poorly understood, FIB/SEM and TEM were used to analyze and quantify changes in the microstructure during performance degradation. Novel strategies for microstructure calculation from FIB-nanotomography data were developed and applied to LSM-YSZ and LSCF-GDC composite cathodes, aged with environmental contaminants to promote degradation. In LSM-YSZ, migration of both La and Mn cations to the grain boundaries of YSZ was observed using TEM-EELS. 
Few substantial changes, however, were observed in the overall microstructure of the cells, correlating with a lack of performance degradation induced by the H2O. Using similar strategies, a series of LSCF-GDC cathodes aged in H2O, CO2, and Cr-vapor environments were analyzed. FIB/SEM observation revealed considerable formation of secondary phases within these cathodes, and quantifiable modifications of the microstructure. In particular, Cr-poisoning was observed to cause substantial byproduct formation, which was correlated with drastic reductions in cell performance.
Abstract:
Lithium-ion batteries provide high energy density while being compact and light-weight and are the most pervasive energy storage technology powering portable electronic devices such as smartphones, laptops, and tablet PCs. Considerable efforts have been made to develop new electrode materials with ever higher capacity, while being able to maintain long cycle life. A key challenge in those efforts has been characterizing and understanding these materials during battery operation. While it is generally accepted that the repeated strain/stress cycles play a role in long-term battery degradation, the detailed mechanisms creating these mechanical effects and the damage they create still remain unclear. Therefore, development of techniques which are capable of capturing in real time the microstructural changes and the associated stress during operation are crucial for unravelling lithium-ion battery degradation mechanisms and further improving lithium-ion battery performance. This dissertation presents the development of two microelectromechanical systems sensor platforms for in situ characterization of stress and microstructural changes in thin film lithium-ion battery electrodes, which can be leveraged as a characterization platform for advancing battery performance. First, a Fabry-Perot microelectromechanical systems sensor based in situ characterization platform is developed which allows simultaneous measurement of microstructural changes using Raman spectroscopy in parallel with qualitative stress changes via optical interferometry. Evolutions in the microstructure creating a Raman shift from 145 cm−1 to 154 cm−1 and stress in the various crystal phases in the LixV2O5 system are observed, including both reversible and irreversible phase transitions. 
Also, a unique way of controlling electrochemically-driven stress and stress gradients in lithium-ion battery electrodes is demonstrated using the Fabry-Perot microelectromechanical systems sensor integrated with an optical measurement setup. By stacking alternately stressed layers, the average stress in the stacked electrode is greatly reduced, by 75% compared to an unmodified electrode. After 2,000 discharge-charge cycles, the stacked electrodes retain only 83% of their maximum capacity while unmodified electrodes retain 91%, illuminating the importance of the stress gradient within the electrode. Second, a buckled-membrane microelectromechanical systems sensor is developed to enable in situ characterization of quantitative stress and microstructure evolution in a V2O5 lithium-ion battery cathode by integrating atomic force microscopy and Raman spectroscopy. Using dual-mode measurements in the voltage range of 2.8 V – 3.5 V, both the induced stress (~40 MPa) and Raman intensity changes due to lithium cycling are observed. Upon lithium insertion, tensile stress in the V2O5 increases gradually until the α- to ε-phase and ε- to δ-phase transitions occur. The Raman intensity change at 148 cm−1 shows that the level of disorder increases during lithium insertion and that the V2O5 lattice progressively recovers during lithium extraction. Results are in good agreement with the expected mechanical behavior and disorder change in V2O5, highlighting the potential of microelectromechanical systems as enabling tools for advanced scientific investigations. The work presented here will eventually be utilized for optimization of thin film battery electrode performance by achieving a fundamental understanding of how stress and microstructural changes are correlated, which will also provide valuable insight into battery performance degradation mechanisms.
Abstract:
Current industry proposals for Hardware Transactional Memory (HTM) focus on best-effort solutions (BE-HTM) where hardware limits are imposed on transactions. These designs may show a significant performance degradation due to high contention scenarios and different hardware and operating system limitations that abort transactions, e.g. cache overflows and hardware and software exceptions. To deal with these events and to ensure forward progress, BE-HTM systems usually provide a software fallback path that executes a lock-based version of the code. In this paper, we propose a hardware implementation of an irrevocability mechanism as an alternative to the software fallback path, to gain insight into the hardware improvements that could enhance the execution of such a fallback. Our mechanism anticipates the abort that causes the transaction serialization, and stalls other transactions in the system so that transactional work loss is minimized. In addition, we evaluate the main software fallback path approaches and propose the use of ticket locks, which hold precise information on the number of transactions waiting to enter the fallback. Thus, the separation of transactional and fallback execution can be achieved in a precise manner. The evaluation is carried out using the Simics/GEMS simulator and the complete range of STAMP transactional suite benchmarks. We obtain significant performance benefits, around twice the speedup and an abort reduction of 50% over the software fallback path, for a number of benchmarks.
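The ticket-lock property mentioned above — a lock whose counters expose precisely how many threads are waiting to enter — can be sketched as follows. This is a simplified software illustration in Python, not the paper's hardware mechanism:

```python
import threading

class TicketLock:
    """FIFO ticket lock.  Unlike a plain mutex, the two counters
    expose exactly how many threads are queued -- the property the
    abstract exploits to separate transactional and fallback
    execution precisely."""
    def __init__(self):
        self._mutex = threading.Lock()
        self._cond = threading.Condition(self._mutex)
        self._next_ticket = 0   # ticket handed to the next arrival
        self._now_serving = 0   # ticket currently allowed to proceed

    def acquire(self):
        with self._cond:
            my = self._next_ticket
            self._next_ticket += 1
            while self._now_serving != my:
                self._cond.wait()

    def release(self):
        with self._cond:
            self._now_serving += 1
            self._cond.notify_all()

    def waiting(self):
        """Number of threads queued behind the current holder."""
        with self._mutex:  # _cond shares this same lock
            return max(0, self._next_ticket - self._now_serving - 1)
```

A BE-HTM runtime could poll `waiting()` to decide when all queued fallback executions have drained before resuming transactional mode.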
Abstract:
Organic acids are important constituents of fruit juices. They render tartness, flavour and a specific taste to fruit juices. Shelf life and stability of fruit juices are important factors which determine their nutritional quality and freshness. In this view, the effect of storage on the concentration of organic acids in commercially packed fruit juices is studied by reverse-phase high performance liquid chromatography (RP-HPLC). Ten packed fruit juices from two different brands are stored at 30 °C for 24, 48 and 72 hours. A reverse-phase high performance liquid chromatographic method is used to determine the concentration of oxalic, tartaric, malic, ascorbic and citric acid in the fruit juices during storage. The chromatographic analysis of organic acids is carried out using a 0.5% (w/v) ammonium dihydrogen orthophosphate buffer (pH 2.8) mobile phase on a C18 column with a UV-Vis detector. The results show that the concentration of organic acids in the juices under study generally decreases with increasing storage time. All the fruit juices belonging to the Tropicana brand underwent less organic acid degradation in comparison to juices of the Real brand. Orange fruit juice is found to be the least stable among the juices under study after the span of 72 hours. Amongst all the organic acids under investigation, minimum stability is shown by ascorbic acid, followed by malic and citric acid.
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)