972 results for Performance improvements
Abstract:
In this paper, we consider the secure beamforming design for an underlay cognitive radio multiple-input single-output broadcast channel in the presence of multiple passive eavesdroppers. Our goal is to design a jamming noise (JN) transmit strategy to maximize the secrecy rate of the secondary system. By utilizing the zero-forcing method to eliminate the interference caused by JN to the secondary user, we study the joint optimization of the information and JN beamforming for secrecy rate maximization of the secondary system while satisfying all the interference power constraints at the primary users, as well as the per-antenna power constraint at the secondary transmitter. For an optimal beamforming design, the original problem is a nonconvex program, which can be reformulated as a convex program by applying the rank relaxation method. We prove that the rank relaxation is tight and propose a barrier interior-point method to solve the resulting saddle point problem based on a duality result. To find the global optimal solution, we transform the considered problem into an unconstrained optimization problem. We then employ the Broyden-Fletcher-Goldfarb-Shanno (BFGS) method to solve the resulting unconstrained problem, which reduces the complexity significantly compared to conventional methods. Simulation results show the fast convergence of the proposed algorithm and substantial performance improvements over existing approaches.
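A minimal sketch of the final quasi-Newton step described above, assuming the constrained design has already been reduced to a smooth unconstrained objective (the function below is a hypothetical stand-in, not the paper's secrecy-rate formulation); SciPy's BFGS routine is used for illustration.

```python
# Hedged sketch: stand-in smooth objective, not the secrecy-rate function from the paper.
import numpy as np
from scipy.optimize import minimize

def objective(x):
    # Hypothetical placeholder for the unconstrained reformulation.
    return 0.5 * np.sum(x ** 2) + np.log(1.0 + np.sum(np.cos(x) ** 2))

x0 = np.ones(8)                               # assumed problem size, illustrative only
res = minimize(objective, x0, method="BFGS")  # quasi-Newton iterations as in the abstract
print(res.x, res.fun, res.nit)                # solution, objective value, iteration count
```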
Abstract:
Lately, various programming frameworks have been developed for building web applications. These frameworks focus on improving the user experience through performance improvements such as faster render times and response times. One of these frameworks is React, which has introduced a completely new architectural pattern for managing both the state and the data flow of an application. React also offers support for native application development and makes server-side rendering possible, something that is difficult to accomplish with an application developed with Angular 1.5, which is used by the company Dewire today. The aim of this thesis was to compare React with an existing Angular project, in order to determine whether React could be a potential replacement for Angular. To gain knowledge about the subject, a theoretical study of web-based sources was made, while the practical part consisted of rebuilding a web application with React together with the Flux architecture, based on a view from the Angular project. The implementation process was repeated until the view was completed and a desired data flow, as in the Angular application, was reached. The resulting React application was later compared with the Angular application developed by the company, and the outcome of the comparison showed that React performed better than Angular in all tests. In conclusion, due to the timeframe of the project, only the most important parts of the Angular project were implemented in order to carry out the measurements that were of interest to the company. By recreating most of the functionality, or the entire Angular application, more interesting comparisons could have been made.
Abstract:
Successful implementation of fault-tolerant quantum computation on a system of qubits places severe demands on the hardware used to control the many-qubit state. It is known that an accuracy threshold P_a exists for any quantum gate that is to be used for such a computation to be able to continue for an unlimited number of steps. Specifically, the error probability P_e for such a gate must fall below the accuracy threshold: P_e < P_a. Estimates of P_a vary widely, though P_a ~ 10^-4 has emerged as a challenging target for hardware designers. I present a theoretical framework based on neighboring optimal control that takes as input a good quantum gate and returns a new gate with better performance. I illustrate this approach by applying it to a universal set of quantum gates produced using non-adiabatic rapid passage. Performance improvements are substantial compared to the original (unimproved) gates, both for ideal and non-ideal controls. Under suitable conditions detailed below, all gate error probabilities fall by 1 to 4 orders of magnitude below the target threshold of 10^-4. After applying the neighboring optimal control theory to improve the performance of quantum gates in a universal set, I further apply the general control theory in a two-step procedure for fault-tolerant logical state preparation, and I illustrate this procedure by preparing a logical Bell state fault-tolerantly. The two-step preparation procedure is as follows: Step 1 provides a one-shot procedure using neighboring optimal control theory to prepare a physical qubit state which is a high-fidelity approximation to the Bell state |β01⟩ = 1/√2(|01⟩ + |10⟩). I show that for ideal (non-ideal) control, an approximate |β01⟩ state can be prepared with error probability ε ~ 10^-6 (10^-5) using one-shot local operations. Step 2 then takes a block of p pairs of physical qubits, each prepared in the |β01⟩ state using Step 1, and fault-tolerantly prepares the logical Bell state for the C4 quantum error detection code.
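The threshold condition and the target Bell state quoted above, restated in standard notation (no new content, only the relations already given in the abstract):

```latex
P_e < P_a \approx 10^{-4}, \qquad
|\beta_{01}\rangle = \tfrac{1}{\sqrt{2}}\bigl(|01\rangle + |10\rangle\bigr)
```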
Abstract:
Modern System-on-a-Chip (SoC) systems have grown rapidly in processing power while maintaining the size of the hardware circuit. The number of transistors on a chip continues to increase, but current SoC designs may not be able to exploit the potential performance, especially with energy consumption and chip area becoming two major concerns. Traditional SoC designs usually separate software and hardware, so improving system performance is a complicated task for both software and hardware designers. The aim of this research is to develop a hardware acceleration workflow for software applications, so that system performance can be improved under constraints on energy consumption and on-chip resource costs. The characteristics of software applications can be identified by using profiling tools. Hardware acceleration can provide significant performance improvements for highly mathematical calculations or repeated functions; the performance of SoC systems can then be improved if hardware acceleration is used to accelerate the element that incurs performance overheads. The concepts presented in this study can be readily applied to a variety of sophisticated software applications. The contributions of SoC-based hardware acceleration in the hardware-software co-design platform include the following: (1) Software profiling methods are applied to an H.264 Coder-Decoder (CODEC) core; the hotspot function of the target application is identified using critical attributes such as cycles per loop and loop rounds. (2) A hardware acceleration method based on Field-Programmable Gate Array (FPGA) is used to resolve system bottlenecks and improve system performance; the identified hotspot function is converted to a hardware accelerator and mapped onto the hardware platform. Two types of hardware acceleration methods, central bus design and co-processor design, are implemented for comparison in the proposed architecture. (3) System specifications, such as performance, energy consumption, and resource costs, are measured and analyzed; the trade-off among these three factors is compared and balanced, and different hardware accelerators are implemented and evaluated based on system requirements. (4) A system verification platform is designed based on an Integrated Circuit (IC) workflow, with hardware optimization techniques used for higher performance and lower resource costs. Experimental results show that the proposed hardware acceleration workflow for software applications is an efficient technique: the system reaches a 2.8X performance improvement and saves 31.84% of energy consumption with the Bus-IP design, while the co-processor design achieves a 7.9X performance improvement and saves 75.85% of energy consumption.
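The benefit of accelerating a profiled hotspot is commonly summarized by Amdahl's law; the sketch below uses illustrative numbers only, not the thesis's H.264 measurements.

```python
def overall_speedup(hotspot_fraction: float, accel_speedup: float) -> float:
    """Amdahl's law: whole-application speedup when only the profiled hotspot
    (hotspot_fraction of total runtime) runs accel_speedup times faster."""
    return 1.0 / ((1.0 - hotspot_fraction) + hotspot_fraction / accel_speedup)

# Illustrative numbers only: a hotspot taking 70% of runtime, accelerated 20x in hardware.
print(overall_speedup(0.70, 20.0))  # ~2.99x overall
```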
Abstract:
In modern power electronics equipment, it is desirable to design a low-profile, high-power-density, fast-dynamic-response converter. Increases in switching frequency reduce the size of passive components such as transformers, inductors, and capacitors, which results in a compact size and a reduced requirement for energy storage. In addition, fast dynamic response can be achieved by operating at high frequency. However, achieving high-frequency operation while keeping the efficiency high requires new advanced devices, higher-performance magnetic components, and new circuit topologies. These are required to absorb and utilize the parasitic components and also to mitigate the frequency-dependent losses, including switching loss, gating loss, and magnetic loss. The required performance improvements can be achieved through the use of Radio Frequency (RF) design techniques. To reduce switching losses, resonant converter topologies such as resonant RF amplifiers (inverters) combined with a rectifier are an effective solution for maintaining high efficiency at high switching frequencies, using techniques such as device parasitic absorption, Zero Voltage Switching (ZVS), Zero Current Switching (ZCS), and resonant gating. Gallium Nitride (GaN) device technologies are being broadly used in RF amplifiers due to their lower on-resistance and device capacitances compared with silicon (Si) devices; this kind of semiconductor is therefore well suited for high-frequency power converters. The major problems involved with high-frequency magnetics are skin and proximity effects, increased core and copper losses, unbalanced magnetic flux distribution generating localized hot spots, and reduced coupling coefficient. In order to eliminate the magnetic core losses, which play a crucial role at higher operating frequencies, a coreless PCB transformer can be used. Compared to a conventional wire-wound transformer, a planar PCB transformer, in which the windings are laid out on the Printed Circuit Board (PCB), has a low-profile structure, excellent thermal characteristics, and ease of manufacturing. Therefore, the work in this thesis demonstrates the design and analysis of an isolated low-profile class DE resonant converter operating at a 10 MHz switching frequency with a nominal output of 150 W. The power stage consists of a class DE inverter using GaN devices along with a sinusoidal gate drive circuit on the primary side and a class DE rectifier on the secondary side. To meet the stringent converter height requirement, isolation is provided by a 10-layer coreless PCB transformer with a 1:20 turns ratio. It is designed and optimized using 3D Finite Element Method (FEM) tools and radio frequency (RF) circuit design software. Simulation and experimental results are presented for a 10-layer coreless PCB transformer operating at 10 MHz.
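A small numerical check of the series-resonance relation that class DE and similar resonant stages are built around; the component values are illustrative, not the thesis's design.

```python
import math

def resonant_frequency_hz(inductance_h: float, capacitance_f: float) -> float:
    """Series LC resonance: f0 = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

# Illustrative values: roughly 253 nH with 1 nF resonates near 10 MHz.
print(resonant_frequency_hz(253e-9, 1e-9) / 1e6, "MHz")
```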
Abstract:
String searching within a large corpus of data is an important component of digital forensic (DF) analysis techniques such as file carving. The continuing increase in capacity of consumer storage devices requires corresponding improvements to the performance of string searching techniques. As string searching is a trivially-parallelisable problem, GPGPU approaches are a natural fit, but previous studies have found that local storage presents an insurmountable performance bottleneck. We show that this need not be the case with modern hardware, and demonstrate substantial performance improvements from the use of single and multiple GPUs when searching for strings within a typical forensic disk image.
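A minimal CPU-only sketch of why string searching parallelises so cleanly: the image is split into chunks that overlap by one pattern length minus one byte, so matches spanning chunk boundaries are not lost. This illustrates the decomposition only; it does not model the GPU or storage behaviour studied above.

```python
from concurrent.futures import ProcessPoolExecutor

def find_in_chunk(args):
    data, pattern, base = args
    hits, pos = [], data.find(pattern)
    while pos != -1:
        hits.append(base + pos)                # offset back into the full image
        pos = data.find(pattern, pos + 1)
    return hits

def parallel_search(image: bytes, pattern: bytes, chunks: int = 4):
    """Split the image into overlapping chunks and search them in parallel."""
    n, overlap = len(image), len(pattern) - 1
    size = -(-n // chunks)                     # ceiling division
    jobs = [(image[i:i + size + overlap], pattern, i) for i in range(0, n, size)]
    with ProcessPoolExecutor() as pool:
        results = pool.map(find_in_chunk, jobs)
    return sorted(set(h for hits in results for h in hits))

if __name__ == "__main__":
    hits = parallel_search(b"abcXYZabcXYZ" * 1000, b"XYZ")
    print(len(hits), hits[:3])
```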
Abstract:
Embedding intelligence in extreme edge devices allows distilling raw data acquired from sensors into actionable information, directly on IoT end-nodes. This computing paradigm, in which end-nodes no longer depend entirely on the Cloud, offers undeniable benefits and drives a large research area (TinyML) aimed at deploying leading Machine Learning (ML) algorithms on micro-controller class devices. To fit the limited memory storage capability of these tiny platforms, full-precision Deep Neural Networks (DNNs) are compressed by representing their data down to byte and sub-byte formats in the integer domain, yielding Quantized Neural Networks (QNNs). However, the current generation of micro-controller systems can barely cope with the computing requirements of QNNs. This thesis tackles the challenge from many perspectives, presenting solutions at both the software and hardware levels and exploiting parallelism, heterogeneity, and software programmability to guarantee high flexibility and high energy-performance proportionality. The first contribution, PULP-NN, is an optimized software computing library for QNN inference on parallel ultra-low-power (PULP) clusters of RISC-V processors, showing one order of magnitude improvements in performance and energy efficiency compared to current State-of-the-Art (SoA) STM32 micro-controller systems (MCUs) based on ARM Cortex-M cores. The second contribution is XpulpNN, a set of RISC-V domain-specific instruction set architecture (ISA) extensions for sub-byte integer arithmetic computation. The solution, including the ISA extensions and the micro-architecture to support them, achieves energy efficiency comparable with dedicated DNN accelerators and surpasses the efficiency of SoA ARM Cortex-M based MCUs, such as the low-end STM32M4 and the high-end STM32H7 devices, by up to three orders of magnitude. To overcome the Von Neumann bottleneck while guaranteeing the highest flexibility, the final contribution integrates an Analog In-Memory Computing accelerator into the PULP cluster, creating a fully programmable heterogeneous fabric that demonstrates end-to-end inference capabilities on SoA MobileNetV2 models, showing two orders of magnitude performance improvements over current SoA analog/digital solutions.
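A tiny sketch of the sub-byte integer arithmetic such kernels exploit: two unsigned 4-bit weights packed per byte, unpacked and accumulated in the integer domain. This is plain NumPy for illustration, not the PULP-NN or XpulpNN kernels themselves.

```python
import numpy as np

def pack_int4(values: np.ndarray) -> np.ndarray:
    """Pack pairs of unsigned 4-bit values (0..15) into single bytes."""
    v = values.astype(np.uint8)
    return (v[0::2] | (v[1::2] << 4)).astype(np.uint8)

def dot_int4(packed_w: np.ndarray, x: np.ndarray) -> int:
    """Unpack 4-bit weights and accumulate an integer dot product with int8 activations."""
    lo = (packed_w & 0x0F).astype(np.int32)
    hi = (packed_w >> 4).astype(np.int32)
    w = np.empty(lo.size + hi.size, dtype=np.int32)
    w[0::2], w[1::2] = lo, hi
    return int(np.dot(w, x.astype(np.int32)))

w = np.array([1, 7, 3, 15], dtype=np.uint8)   # illustrative 4-bit weights
x = np.array([2, -1, 4, 1], dtype=np.int8)    # illustrative int8 activations
print(dot_int4(pack_int4(w), x))              # 1*2 + 7*(-1) + 3*4 + 15*1 = 22
```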
Abstract:
We start in Chapter 2 by investigating linear matrix-valued SDEs and the Itô-stochastic Magnus expansion. The Itô-stochastic Magnus expansion provides an efficient numerical scheme to solve matrix-valued SDEs. We show convergence of the expansion up to a stopping time τ and provide an asymptotic estimate of the cumulative distribution function of τ. Moreover, we show how to apply it to solve SPDEs with one and two spatial dimensions by combining it with the method of lines, achieving high accuracy. We will see that the Magnus expansion allows us to use GPU techniques, leading to major performance improvements compared to a standard Euler-Maruyama scheme. In Chapter 3, we study a short-rate model in a Cox-Ingersoll-Ross (CIR) framework for negative interest rates. We define the short rate as the difference of two independent CIR processes and add a deterministic shift to guarantee a perfect fit to the market term structure. We show how to use the Gram-Charlier expansion to efficiently calibrate the model to the market swaption surface and price Bermudan swaptions with good accuracy. We take two different perspectives on rating transition modelling. In Section 4.4, we study inhomogeneous continuous-time Markov chains (ICTMC) as a candidate for a rating model with deterministic rating transitions. We extend this model by taking a Lie group perspective in Section 4.5, to allow for stochastic rating transitions. In both cases, we compare the most popular choices for a change-of-measure technique and show how to efficiently calibrate both models to the available historical rating data and market default probabilities. Finally, we apply the techniques shown in this thesis to minimize the collateral-inclusive Credit/Debit Valuation Adjustments under the constraint of small collateral postings by using a collateral account dependent on rating triggers.
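For reference, a minimal Euler-Maruyama scheme for a linear matrix-valued SDE dX_t = A X_t dt + B X_t dW_t, the baseline against which the Magnus-expansion scheme is compared above; the matrices below are illustrative, not the thesis's examples.

```python
import numpy as np

def euler_maruyama(A, B, X0, T=1.0, steps=1000, seed=0):
    """Euler-Maruyama scheme for dX_t = A X_t dt + B X_t dW_t (scalar Brownian motion)."""
    rng = np.random.default_rng(seed)
    dt = T / steps
    X = X0.copy()
    for _ in range(steps):
        dW = rng.normal(0.0, np.sqrt(dt))     # Brownian increment over dt
        X = X + A @ X * dt + B @ X * dW
    return X

A = np.array([[0.0, 1.0], [-1.0, 0.0]])       # illustrative drift matrix
B = 0.1 * np.eye(2)                           # illustrative diffusion matrix
print(euler_maruyama(A, B, np.eye(2)))
```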
Abstract:
An essential role in the global energy transition is attributed to Electric Vehicles (EVs). The energy for EV traction can be generated by renewable energy sources (RES), also at a local level through distributed power plants such as photovoltaic (PV) systems. However, EV integration with electrical systems might not be straightforward: intermittent RES, combined with high and uncontrolled aggregate EV charging, require an evolution toward new planning approaches and paradigms for energy systems. In this context, this work aims to provide a practical solution for integrating EV charging in electrical systems with RES. A method for predicting the power required by an EV fleet at a charging hub (CH) is developed in this thesis. The proposed forecasting method considers the main parameters on which charging demand depends, and its results are analyzed in depth under different scenarios. To reduce the intermittency of the EV load, methods for managing the charging power of EVs are proposed. The main target was to provide Charging Management Systems (CMS) that modulate EV charging to optimize specific performance indicators such as system self-consumption, peak load reduction, and PV exploitation. Controlling the EV charging power to achieve specific optimization goals is also known as Smart Charging (SC). The proposed techniques are applied to real-world scenarios, demonstrating the performance improvements obtained with SC strategies. A viable alternative for maximizing integration with intermittent RES generation is the integration of energy storage: Battery Energy Storage Systems (BESS) may act as a buffer between peak load and RES production. A sizing algorithm for PV+BESS integration in EV charging hubs is provided; the sizing optimization aims to optimize the system's energy and economic performance. The results provide an overview of the optimal size that the PV+BESS plant should have to improve whole-system performance in different scenarios.
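A toy rule-based sketch of the Smart Charging idea described above: per interval, the charging setpoint follows local PV generation within charger limits, favouring self-consumption. The profiles and limits are illustrative; the thesis's CMS and optimization targets are not reproduced here.

```python
def smart_charge(pv_kw, requested_kw, p_max_kw=22.0):
    """Per-interval charging setpoints that track PV output within the charger limit."""
    setpoints = []
    for pv, requested in zip(pv_kw, requested_kw):
        setpoints.append(round(min(requested, p_max_kw, max(pv, 0.0)), 2))
    return setpoints

# Illustrative hourly profiles (kW): PV generation and requested EV charging power.
pv = [0, 2, 6, 10, 12, 9, 4, 0]
req = [7, 7, 7, 7, 7, 7, 7, 7]
print(smart_charge(pv, req))   # charging is modulated to follow PV where possible
```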
Abstract:
In this study, an attempt was made to measure and evaluate the eco-efficiency performance of a pultruded composite processing company. For this purpose, the recommendations of the World Business Council for Sustainable Development (WBCSD) and the directives of the ISO 14301 standard were followed and applied. The main general indicators of eco-efficiency, as well as the specific indicators, were defined and determined. Based on the indicators' figures, the value profile, the environmental profile, and the pertinent eco-efficiency ratios were established and analyzed. In order to evaluate potential improvements in the company's eco-performance, new indicator values and eco-efficiency ratios were estimated taking into account the implementation of new processes and procedures, both upstream and downstream of the production process, namely: i) adoption of a new heating system for the pultrusion die-tool in the manufacturing process, more effective and with lower heat losses; ii) a recycling approach, with partial reuse of scrap material derived from the manufacturing, cutting, and assembly processes of GFRP profiles. These features lead to significant improvements in the subsequently assessed eco-efficiency ratios of the present case study, yielding a more sustainable product and manufacturing process for pultruded GFRP profiles.
Abstract:
This thesis deals with the development of the upcoming aeronautical mobile airport communications system (AeroMACS). We analyzed the performance of AeroMACS and investigated potential solutions for enhancing it. Since the most critical results correspond to the channel scenario with the least diversity, we tackled this problem by investigating potential solutions for increasing the diversity of the system and therefore improving its performance. We considered different forms of diversity, such as space diversity and time diversity. More specifically, space (antenna and cooperative) diversity and time diversity are analyzed as countermeasures for the harsh fading conditions that are typical of airport environments. Among the analyzed techniques, two novel concepts are introduced, namely unequal diversity coding and flexible packet-level codes. The proposed techniques have been analyzed on a novel airport channel model, derived from a measurement campaign at the airport of Munich (Germany). The introduced techniques largely improve the performance of the conventional AeroMACS link, thus representing appealing solutions for the long-term evolution of the system.
Abstract:
Life Cycle Climate Performance (LCCP) is an evaluation method by which heating, ventilation, air conditioning and refrigeration systems can be evaluated for their global warming impact over the course of their complete life cycle. LCCP is more inclusive than previous metrics such as Total Equivalent Warming Impact. It is calculated as the sum of direct and indirect emissions generated over the lifetime of the system, “from cradle to grave”. Direct emissions include all effects from the release of refrigerants into the atmosphere during the lifetime of the system, including annual leakage and losses during the disposal of the unit. The indirect emissions include emissions from the energy consumption during the manufacturing process, lifetime operation, and disposal of the system. This thesis proposes a standardized approach to the use of LCCP and traceable data sources for all aspects of the calculation. An equation is proposed that unifies the efforts of previous researchers, and data sources are recommended for average values of all LCCP inputs. A residential heat pump sample problem is presented to illustrate the methodology; the heat pump is evaluated at five U.S. locations in different climate zones. An Excel tool was developed for residential heat pumps using the proposed method. The primary factor in the LCCP calculation is the energy consumption of the system. The effects of advanced vapor compression cycles are then investigated for heat pump applications. Advanced cycle options attempt to reduce the energy consumption in various ways and fall into three categories: subcooling cycles, expansion loss recovery cycles and multi-stage cycles. The cycles selected for research are the suction line heat exchanger cycle, the expander cycle, the ejector cycle, and the vapor injection cycle. The cycles are modeled using Engineering Equation Solver and the results are applied to the LCCP methodology. The expander cycle, ejector cycle and vapor injection cycle are effective in reducing the LCCP of a residential heat pump by 5.6%, 8.2% and 10.5%, respectively, in Phoenix, AZ. The advanced cycles are evaluated with the use of low-GWP refrigerants and are capable of reducing the LCCP of a residential heat pump by 13.7%, 16.3% and 18.6% using a refrigerant with a GWP of 10. To meet the U.S. Department of Energy’s goal of reducing residential energy use by 40% by 2025, with a proportional reduction in all other categories of residential energy consumption, a reduction in the energy consumption of a residential heat pump of 34.8%, with a refrigerant GWP of 10, is necessary for Phoenix, AZ. A combination of advanced cycles, control options and low-GWP refrigerants is necessary to meet this goal.
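A small sketch of the usual LCCP decomposition described above (direct refrigerant-related emissions plus indirect energy- and manufacturing-related emissions). Variable names and the sample numbers are illustrative; this is not the unified equation or the data sources proposed in the thesis.

```python
def lccp_kg_co2e(charge_kg, gwp, annual_leak_rate, eol_loss_rate, lifetime_yr,
                 annual_energy_kwh, grid_factor_kg_per_kwh, embodied_kg_co2e):
    """Lifetime 'cradle to grave' emissions: direct (refrigerant release) + indirect."""
    direct = charge_kg * gwp * (annual_leak_rate * lifetime_yr + eol_loss_rate)
    indirect = annual_energy_kwh * lifetime_yr * grid_factor_kg_per_kwh + embodied_kg_co2e
    return direct + indirect

# Illustrative residential heat pump inputs (not the thesis's recommended data sources).
print(lccp_kg_co2e(charge_kg=3.0, gwp=2088, annual_leak_rate=0.04, eol_loss_rate=0.15,
                   lifetime_yr=15, annual_energy_kwh=5000,
                   grid_factor_kg_per_kwh=0.45, embodied_kg_co2e=1500))
```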
Abstract:
ARTIOLI, G. G., B. GUALANO, A. SMITH, J. STOUT, and A. H. LANCHA, JR. Role of beta-Alanine Supplementation on Muscle Carnosine and Exercise Performance. Med. Sci. Sports Exerc., Vol. 42, No. 6, pp. 1162-1173, 2010. In this narrative review, we present and discuss the current knowledge available on carnosine and beta-alanine metabolism as well as the effects of beta-alanine supplementation on exercise performance. Intramuscular acidosis has been identified as one of the main causes of fatigue during intense exercise. Carnosine has been shown to play a significant role in muscle pH regulation. Carnosine is synthesized in skeletal muscle from the amino acids L-histidine and beta-alanine; the rate-limiting factor of carnosine synthesis is beta-alanine availability. Supplementation with beta-alanine has been shown to increase muscle carnosine content and therefore total muscle buffer capacity, with the potential to elicit improvements in physical performance during high-intensity exercise. Studies on beta-alanine supplementation and exercise performance have demonstrated improvements in performance during multiple bouts of high-intensity exercise and in single bouts of exercise lasting more than 60 s. Similarly, beta-alanine supplementation has been shown to delay the onset of neuromuscular fatigue. Although beta-alanine does not improve maximal strength or VO2max, some aspects of endurance performance, such as anaerobic threshold and time to exhaustion, can be enhanced. Symptoms of paresthesia may be observed if a single dose higher than 800 mg is ingested. The symptoms, however, are transient and related to the increase in plasma concentration, and they can be prevented by using controlled-release capsules and smaller dosing strategies. No important side effects have been related to the use of this amino acid so far. In conclusion, beta-alanine supplementation seems to be a safe nutritional strategy capable of improving high-intensity anaerobic performance.
Abstract:
In this study, further improvements regarding the fault location problem for power distribution systems are presented. The proposed improvements relate to considering the capacitive effect in impedance-based fault location methods, by using an exact line segment model for the distribution line. The proposed developments, consisting of a new formulation for the fault location problem and a new algorithm that considers the line shunt admittance matrix, are presented. The proposed equations are developed for any fault type and result in a single equation for all ground fault types and another equation for line-to-line faults. Results obtained with the proposed improvements are presented. Also, in order to assess the improvements' performance and demonstrate how the line shunt admittance affects state-of-the-art impedance-based fault location methodologies for distribution systems, results obtained with two other existing methods are presented. Comparative results show that, in overhead distribution systems with laterals and intermediate loads, the line shunt admittance can significantly affect the response of the state-of-the-art methodologies, whereas the proposed developments provide great improvements by considering this effect.
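For context, the simplest impedance-based estimate that methods of this family refine is the classical reactance method, which uses only the series line impedance and therefore ignores exactly the shunt-admittance effect addressed above; the sketch below is a generic textbook illustration, not the proposed formulation.

```python
def fault_distance_km(v_phasor: complex, i_phasor: complex, x_line_ohm_per_km: float) -> float:
    """Classical reactance method: imaginary part of the apparent impedance seen at the
    substation divided by the per-km series reactance of the line."""
    z_apparent = v_phasor / i_phasor
    return z_apparent.imag / x_line_ohm_per_km

# Illustrative measured phasors (V, A) during a fault and a 0.4 ohm/km line reactance.
print(fault_distance_km(complex(2300, 900), complex(1200, -300), 0.4))
```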
Abstract:
Cementitious stabilization of aggregates and soils is an effective technique to increase the stiffness of base and subbase layers. Furthermore, cementitious bases can improve the fatigue behavior of asphalt surface layers and reduce subgrade rutting over the short and long term. However, stabilization can lead to additional distresses such as shrinkage and fatigue in the stabilized layers. Extensive research has tested these materials experimentally and characterized them; however, very little of this research attempts to correlate the mechanical properties of the stabilized layers with their performance. The Mechanistic-Empirical Pavement Design Guide (MEPDG) provides a promising theoretical framework for the modeling of pavements containing cementitiously stabilized materials (CSMs). However, significant improvements are needed to bring the modeling of semirigid pavements in the MEPDG to the same level as that of flexible and rigid pavements. Furthermore, the MEPDG does not model CSMs in a manner similar to that used for hot-mix asphalt or portland cement concrete materials. As a result, performance gains from stabilized layers are difficult to assess using the MEPDG. The current characterization of CSMs was evaluated, and issues with CSM modeling and characterization in the MEPDG were discussed. Addressing these issues will help designers quantify the benefits of stabilization for pavement service life.