77 results for analysis of performance
Abstract:
In this work, we explain the behavior of multijunction solar cells under light profiles that are non-uniform, both spatially and in spectral content, in general, and in particular when Gaussian light profiles cause a photo-generated current density that locally exceeds the peak current density of the tunnel junction. We have analyzed the implications of the tunnel junction's limitation, that is, the loss of efficiency due to the appearance of a dip in the I–V curve. For that, we have carried out simulations with our three-dimensional distributed model for multijunction solar cells, which includes a full description of the tunnel junction and also takes into account the lateral resistances in the tunnel junction. The main finding is that the photo-generated current density spreads out through the lateral resistances of the device, mainly through the tunnel junction layers and the back contact. Therefore, under non-uniform light profiles these resistances are determinant not only for avoiding the tunnel junction's limitation but also for mitigating losses in the fill factor. Taking these lateral resistances into account could thus be the key to jointly optimizing the concentrator photovoltaic system (concentrator optics, front grid layout and semiconductor structure).
Abstract:
This article analyses the long-term performance of collective off-grid photovoltaic (PV) systems in rural areas. The use of collective PV systems for the electrification of small and medium-size villages in developing countries has increased in recent years. They are basically set up as stand-alone installations (diesel-hybrid or pure PV) with no connection to other electrical grids. Their isolated conditions and usual installation sites (far from commercial/industrial centers) require an autonomous and reliable technology. Different but related factors affect their performance and energy supply; some are strictly technical, but others depend on external issues such as the solar energy resource and the users' energy and power consumption. The work presented is based on the field operation of twelve collective PV installations supplying electricity to off-grid villages located in the province of Jujuy, Argentina. Five of them have PV generators as their only power source, while the other seven include the support of diesel generator sets. Load demand evolution, energy productivity and fuel consumption are analyzed. In addition, energy generation strategies (PV/diesel) are discussed.
Abstract:
Introduction: Lithium-based ceramics (silicates, titanates, etc.) possess a series of advantages as alternatives to liquid lithium and lithium-lead alloys for fusion breeders. They have a sufficient lithium atomic density (up to 540 kg·m⁻³), high-temperature stability (up to 1300 K), and good chemical compatibility with structural materials. Nevertheless, little research has been done on the diffusion behavior of He and H isotopes through the polycrystalline structures of porous ceramics, which is crucial in order to understand the mobility of gas coolants as well as the release of tritium. Moreover, under the operating conditions of current breeder blanket (BB) concepts, the extraction rate of the helium produced during lithium transmutation can be affected by the composition and structure of the near-surface region, modifying the performance of BB materials.
Abstract:
In this work, the robustness and stability of continuum damage models applied to material failure in soft tissues are addressed. In implicit damage models equipped with softening, the presence of negative eigenvalues in the tangent elemental matrix degrades the condition number of the global matrix, reducing the computational performance of the numerical model. Two strategies have been adapted from the literature to mitigate this degradation: the IMPL-EX integration scheme [Oliver, 2006], which renders the elemental matrix contribution positive definite, and arc-length continuation methods [Carrera, 1994], which make it possible to capture the unstable softening branch in brittle ruptures. The major drawback of the IMPL-EX integration scheme is the need to use small time steps to keep the numerical error below an acceptable value. A convergence study, limiting the maximum allowed increment of the internal variables of the damage model, is presented. Finally, the numerical simulation of failure problems with fibre-reinforced materials illustrates the performance of the adopted methodology.
Abstract:
This paper reports the studies carried out to develop and calibrate models suited to the objectives of this work. In particular, a quarter-bogie model for the vehicle, wheel-rail contact via the Lagrange multiplier method, and a 2D spatial discretization were selected as the optimal choices. Furthermore, a 3D model of the coupled vehicle-track system has also been developed to contrast the results obtained with the 2D model. The calculations were carried out in the time domain, and envelopes of the relevant results were obtained for several track profiles and speed ranges. Distributed elevation irregularities were generated based on power spectral density (PSD) distributions. The results obtained include the wheel-rail contact forces and the forces transmitted to the bogie by the primary suspension. The latter loads are relevant for the purpose of evaluating the performance of the infrastructure.
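The PSD-based generation of distributed elevation irregularities mentioned in this abstract can be sketched with a standard spectral-representation method: each wavenumber band contributes a cosine whose amplitude follows the PSD and whose phase is random. This is a minimal illustration, not the paper's implementation; the power-law spectrum and all numerical values below are assumptions.

```python
import numpy as np

def profile_from_psd(psd, wavenumbers, x):
    """Synthesize a random elevation profile z(x) from a one-sided PSD
    sampled at uniformly spaced wavenumbers (rad/m), via random phases."""
    rng = np.random.default_rng(0)
    dk = wavenumbers[1] - wavenumbers[0]   # uniform spacing assumed
    amp = np.sqrt(2.0 * psd * dk)          # harmonic amplitudes from the PSD
    phase = rng.uniform(0.0, 2.0 * np.pi, len(wavenumbers))
    # superpose all harmonics: z(x) = sum_i amp_i * cos(k_i * x + phi_i)
    return (amp[:, None] * np.cos(wavenumbers[:, None] * x + phase[:, None])).sum(axis=0)

# hypothetical power-law track spectrum S(k) = A / k^3 over 1 m .. 100 m wavelengths
k = np.linspace(2 * np.pi / 100.0, 2 * np.pi / 1.0, 400)
S = 1e-6 / k**3
x = np.linspace(0.0, 500.0, 5001)          # 500 m of track, 0.1 m step
z = profile_from_psd(S, k, x)              # elevation irregularity in metres
```

A useful property of this construction is that the RMS of the generated profile equals the square root of the PSD integrated over the band, so refining the wavenumber grid does not change the irregularity level.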
Abstract:
In this paper, the results of six years of research in engineering education, namely the application of the European Higher Education Area (EHEA) framework to improve student performance in the subject Analysis of Circuits of Telecommunication Engineering, are analysed, considering the possibility that hidden variables both separate students into subgroups and reveal the connection among several basic subjects, such as Analysis of Circuits (AC) and Mathematics (Math). The discovery of these variables would help us explain the characteristics of the students through the teaching and learning methodology, and would show that there are characteristics that instructors do not take into account but that are of paramount importance.
Abstract:
As a result of advances in mobile technology, new services that benefit from the ubiquity of these devices are appearing. Some of these services require the identification of the subject, since they may access private user information. In this paper, we propose to identify each user by the drawing of his/her handwritten signature in the air (in-air signature). In order to assess the feasibility of the in-air signature as a biometric feature, we have analysed the performance of several well-known pattern recognition techniques (hidden Markov models, Bayes classifiers and dynamic time warping) on this problem. Each technique has been tested in the identification of the signatures of 96 individuals. Furthermore, the robustness of each method against spoofing attacks has also been analysed using six impostors who attempted to emulate every signature. The best results in both experiments were obtained with a technique based on dynamic time warping, which carries out the recognition by calculating distances to an average template extracted from several training instances. Finally, a permanence analysis has been carried out in order to assess the stability of the in-air signature over time.
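For readers unfamiliar with the best-performing technique in this abstract, identification by dynamic time warping against per-user average templates can be sketched as below. This is generic textbook DTW, not the authors' system: sequences are simplified to 1-D, the user names are invented, and the template-averaging step is glossed over.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # best of insertion, deletion and match moves
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def identify(sample, templates):
    """Return the enrolled user whose (pre-averaged) template is DTW-closest."""
    return min(templates, key=lambda user: dtw_distance(sample, templates[user]))

# hypothetical averaged templates, e.g. one acceleration trace per user
templates = {"user_a": [0.0, 1.0, 2.0, 3.0], "user_b": [3.0, 2.0, 1.0, 0.0]}
who = identify([0.0, 1.1, 1.9, 3.2], templates)   # closest to user_a's template
```

DTW is attractive here because in-air signatures of the same user differ in speed from one repetition to the next, and the warping path absorbs exactly that kind of temporal variation.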
Abstract:
This article presents and illustrates a practical approach to the dataflow analysis of constraint logic programming languages using abstract interpretation. It is first argued that, from the framework point of view, it suffices to propose relatively simple extensions of traditional analysis methods that have already been proved useful and practical and for which implementations exist. This is shown by proposing a simple extension of Bruynooghe's traditional framework which allows it to analyze constraint logic programs. Then, using this generalized framework, two abstract domains and their required abstract functions are presented: the first abstract domain approximates definiteness information and the second one freeness. Finally, an approach for combining those domains is proposed. The two domains and their combination have been implemented and used in the analysis of CLP and Prolog III applications. Results from this implementation, showing its performance and accuracy, are also presented.
Abstract:
Several models for context-sensitive analysis of modular programs have been proposed, each with different characteristics and representing different trade-offs. The advantage of these context-sensitive analyses is that they provide information which is potentially more accurate than that provided by context-free analyses. Such information can then be applied to validating/debugging the program and/or to specializing it in order to obtain important performance improvements. Some very preliminary experimental results have also been reported for some of these models, providing initial evidence of their potential. However, further experimentation, which is needed in order to understand the many issues left open and to show that the proposed models scale and are usable in the context of large, real-life modular programs, was left as future work. The aim of this paper is two-fold. On the one hand, we provide an empirical comparison of the different models proposed in previous work, as well as experimental data on the different choices left open in those designs. On the other hand, we explore the scalability of these models by using larger modular programs as benchmarks. The results have been obtained from a realistic implementation of the models, integrated in a production-quality compiler (CiaoPP/Ciao). Our experimental results shed light on the practical implications of the different design choices and of the models themselves. We also show that context-sensitive analysis of modular programs is indeed feasible in practice, and that in certain critical cases it provides better performance results than those achievable by analyzing the whole program at once, especially in terms of memory consumption and when reanalyzing after making changes to a program, as is often the case during program development.
Abstract:
This paper addresses the issue of the practicality of global flow analysis in logic program compilation, in terms of both the speed and the precision of the analysis. It discusses design and implementation aspects of two practical abstract interpretation-based flow analysis systems: MA3, the MCC And-parallel Analyzer and Annotator; and Ms, an experimental mode inference system developed for SB-Prolog. The paper also provides performance data obtained from these implementations. Based on these results, it is concluded that the overhead of global flow analysis is not prohibitive, while the results of the analysis can be quite precise and useful.
Abstract:
Abstract interpretation has been widely used for the analysis of object-oriented languages and, in particular, of Java source and bytecode. However, while most existing work deals with the problem of finding expressive abstract domains that accurately track the characteristics of a particular concrete property, the underlying fixpoint algorithms have received comparatively less attention. In fact, many existing (abstract interpretation-based) fixpoint algorithms rely on relatively inefficient techniques for solving inter-procedural call graphs, or are specific and tied to particular analyses. We also argue that the design of an efficient fixpoint algorithm is pivotal to supporting the analysis of large programs. In this paper we introduce a novel algorithm for the analysis of Java bytecode which includes a number of optimizations in order to reduce the number of iterations. The algorithm is parametric, in the sense that it is independent of the abstract domain used and can be applied to different domains as "plug-ins"; it is also multivariant and flow-sensitive. In addition, it is based on a program transformation, prior to the analysis, that results in a highly uniform representation of all the features in the language and therefore simplifies the analysis. Detailed descriptions of the decompilation solutions are given and discussed with an example. We also provide some performance data from a preliminary implementation of the analysis.
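The "parametric in the abstract domain" idea from this abstract can be illustrated with a generic worklist fixpoint, where the domain supplies its operations as plug-ins. This is a minimal sketch of the general technique, not the paper's algorithm (which is additionally multivariant and flow-sensitive); the function names and the toy domain are ours.

```python
def fixpoint(nodes, dependants, transfer, bottom, join):
    """Generic worklist fixpoint. The abstract domain is a plug-in:
    'transfer' recomputes a node's value from the current state,
    'join' accumulates values, 'bottom' builds the least element."""
    state = {n: bottom() for n in nodes}
    work = list(nodes)
    while work:
        n = work.pop()
        new = join(state[n], transfer(n, state))
        if new != state[n]:
            state[n] = new
            work.extend(dependants.get(n, []))  # re-queue affected nodes
    return state

# toy plug-in domain: finite sets of facts with union as join
# (a simple reachability-of-facts analysis over a three-node graph)
preds = {"B": ["A"], "C": ["B"]}
gen = {"A": {1}, "B": {2}, "C": set()}
result = fixpoint(
    nodes=["A", "B", "C"],
    dependants={"A": ["B"], "B": ["C"]},
    transfer=lambda n, st: gen[n].union(*[st[p] for p in preds.get(n, [])] or [set()]),
    bottom=set,
    join=lambda a, b: a | b,
)
```

Termination follows from the usual argument: the toy domain is finite and both operations are monotone, so each node's value can only grow a bounded number of times.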
Abstract:
The design of an electrodynamic tether is a complex task that involves the control of dynamic instabilities, optimization of the generated power (or the descent time in deorbiting missions), and minimization of the tether mass. The electrodynamic forces on an electrodynamic tether are responsible for variations in the mechanical energy of the tethered system and can also drive the system to dynamic instability. Energy sources and sinks in this system include the following: 1) ionospheric impedance, 2) the potential drop at the cathodic contactor, 3) ohmic losses in the tether, 4) the corotational plasma electric field, and 5) generated power and/or 6) input power. The analysis of each of these energy components, or bricks, establishes parameters that are useful tools for tether design. In this study, the nondimensional parameters that govern the orbital energy variation, dynamic instability, and power generation were characterized, and their mutual interdependence was established. A space-debris mitigation mission was taken as an example of this approach for the assessment of tether performance. Numerical simulations using a dumbbell model for tether dynamics, the International Geomagnetic Reference Field for the geomagnetic field, and the International Reference Ionosphere for the ionosphere were performed to test the analytical approach. The results obtained herein stress the close relationships that exist among the velocity of descent, dynamic stability, and generated power. An optimal tether design requires a detailed tradeoff among these performances in a real-world scenario.
Abstract:
In recent years there has been increasing interest in direct liquid fuel cells as power sources for portable devices; in the future, power plants for electric vehicles and other means of transport, such as ships, will join those applications. Methanol is considerably more convenient and easier to use than gaseous hydrogen, and considerable work is devoted to the development of direct methanol fuel cells. Ethanol, however, has much lower toxicity and, from an ecological viewpoint, is exceptional among all other types of fuel, as it is the only chemical fuel in renewable supply. The aim of this study is to investigate the possibility of using direct alcohol fuel cells fed with alcohol mixtures. For this purpose, a comparative exergy analysis of a direct alcohol fuel cell fed with alcohol mixtures against the same fuel cell fed with single alcohols is performed. The exergetic efficiency and the exergy loss and destruction are calculated and compared in each case. When alcohol mixtures are fed to the fuel cell, the contribution of each fuel to the fuel cell performance is weighted according to its relative proportion in the aqueous solution. The optimum alcohol composition for methanol/ethanol mixtures has been determined.
Abstract:
Strong-motion records obtained in instrumented short-span bridges show the importance of the abutments in the dynamic response of the structure. Existing models study the influence of the pier foundations but not the performance of the abutments. This work proposes two- and three-dimensional boundary element models in the frequency domain and studies the dimensionless dynamic stiffness of standard bridge abutments.
Abstract:
The deviation over time of the calibration coefficients of five cup anemometer models was analyzed. The analysis was based on a series of laboratory calibrations performed between January 2001 and August 2010 on two different groups of anemometers: (1) anemometers not used for any industrial purpose (that is, just stored); and (2) anemometers used in different industrial applications (mainly field, i.e. outdoor, applications such as wind farms). The results indicate a loss of performance of the studied anemometers over time. In the case of the unused anemometers the degradation shows a clear pattern, whereas for the anemometers used in the field the data also suggest a loss of performance, yet the degradation does not follow a clear trend. A recalibration schedule is proposed based on the observed performance variations.
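The trend extraction behind a recalibration schedule of this kind can be sketched as a linear fit to a coefficient's calibration history. Cup anemometer calibrations conventionally report a slope and an offset of the transfer function v = A·f + B; the history below is entirely hypothetical and is not data from the study.

```python
import numpy as np

# hypothetical calibration history: years since the first calibration
years = np.array([0.0, 1.1, 2.0, 3.2, 4.1, 5.0])
# slope coefficient A of the anemometer transfer function v = A * f + B
A = np.array([0.620, 0.622, 0.625, 0.629, 0.631, 0.634])

# linear degradation trend: drift per year and fitted initial value
drift_per_year, A_initial = np.polyfit(years, A, 1)

def years_until(threshold, A0=A_initial, rate=drift_per_year):
    """Years until the fitted coefficient drifts past a tolerance threshold."""
    return (threshold - A0) / rate
```

Given a tolerance on the coefficient drift, `years_until` turns the fitted trend into a candidate recalibration interval; a real schedule would of course also account for the scatter around the fit.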