917 results for power engineering computing
Abstract:
In vitro studies have provided conflicting evidence of temperature changes in the tooth pulp chamber after low-level laser irradiation of the tooth surface. The present study was an in vitro evaluation of temperature increases in the human tooth pulp chamber after diode laser irradiation (GaAlAs, λ = 808 nm) using different power densities. Twelve human teeth (three incisors, three canines, three premolars and three molars) were sectioned in the cervical third of the root and enlarged for the introduction of a thermocouple into the pulp chamber. The teeth were irradiated with 417 mW, 207 mW and 78 mW power outputs for 30 s on the vestibular surface approximately 2 mm from the cervical line of the crown. The highest average increase in temperature (5.6 °C) was observed in incisors irradiated with 417 mW. None of the teeth (incisors, canines, premolars or molars) irradiated with 207 mW showed temperature increases higher than 5.5 °C that could potentially be harmful to pulp tissue. Teeth irradiated with 78 mW showed lower temperature increases. The study showed that diode laser irradiation with a wavelength of 808 nm at 417 mW power output increased the pulp chamber temperature of certain groups of teeth, especially incisors and premolars, to critical threshold values for the dental pulp (5.5 °C). Thus, this study serves as a warning to clinicians that "more" is not necessarily "better".
Abstract:
This paper describes a new mechanical sample positioning system that allows the safe placement and removal of biological samples for prolonged irradiation in a nuclear reactor during full-power continuous operation. The materials of construction and operating principles are also presented. Additionally, this sample positioning system is compared with an existing pneumatic, automated transfer system already available at research reactors. The system consists of a mechanical arm with a claw, which can deliver the samples for irradiation without reactor shutdown. It was installed in the IEA-R1 research reactor at the Instituto de Pesquisas Energéticas e Nucleares (IPEN), São Paulo, Brazil, and for the past 5 years the system has operated successfully and allowed important experiments to be conducted. As a result of its introduction, the facility has been able to respond positively to the increased demand for studies in biology, medicine, physics, engineering, detector/dosimeter calibrations, etc. It is one example of appropriate technologies that save energy and resources. (C) 2010 Elsevier Ltd. All rights reserved.
Abstract:
Large-scale simulations of parts of the brain using detailed neuronal models to improve our understanding of brain functions are becoming a reality with the use of supercomputers and large clusters. However, the high acquisition and maintenance costs of these computers, including physical space, air conditioning, and electrical power, limit the number of simulations of this kind that scientists can perform. Modern commodity graphics cards, based on the CUDA platform, contain graphics processing units (GPUs) composed of hundreds of processors that can simultaneously execute thousands of threads and thus constitute a low-cost solution for many high-performance computing applications. In this work, we present a CUDA algorithm that enables the execution, on multiple GPUs, of simulations of large-scale networks composed of biologically realistic Hodgkin-Huxley neurons. The algorithm represents each neuron as a CUDA thread, which solves the set of coupled differential equations that model the neuron. Communication among neurons located on different GPUs is coordinated by the CPU. We obtained speedups of 40 for the simulation of 200k neurons receiving random external input, and speedups of 9 for a network of 200k neurons with 20M neuronal connections, on a single computer with two graphics boards holding two GPUs each, compared with a modern quad-core CPU. Copyright (C) 2010 John Wiley & Sons, Ltd.
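The per-neuron computation described in this abstract — each thread integrating the coupled Hodgkin-Huxley equations of one neuron — can be sketched on the CPU with NumPy standing in for the per-thread GPU kernels. This is an illustrative minimal sketch, not the paper's implementation: the parameter values are the standard textbook Hodgkin-Huxley constants, and network connectivity is omitted.

```python
import numpy as np

def hh_step(V, m, h, n, I_ext, dt=0.01):
    """One forward-Euler step of the Hodgkin-Huxley equations,
    vectorized over all neurons (each array has shape (N,))."""
    # Gating-variable rate functions (ms^-1), standard parameterization
    a_m = 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
    b_m = 4.0 * np.exp(-(V + 65.0) / 18.0)
    a_h = 0.07 * np.exp(-(V + 65.0) / 20.0)
    b_h = 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
    a_n = 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
    b_n = 0.125 * np.exp(-(V + 65.0) / 80.0)
    # Membrane currents (uA/cm^2): sodium, potassium, leak
    I_Na = 120.0 * m**3 * h * (V - 50.0)
    I_K = 36.0 * n**4 * (V + 77.0)
    I_L = 0.3 * (V + 54.4)
    # Euler updates (membrane capacitance C_m = 1 uF/cm^2)
    V = V + dt * (I_ext - I_Na - I_K - I_L)
    m = m + dt * (a_m * (1.0 - m) - b_m * m)
    h = h + dt * (a_h * (1.0 - h) - b_h * h)
    n = n + dt * (a_n * (1.0 - n) - b_n * n)
    return V, m, h, n

def simulate(n_neurons=100, t_ms=50.0, dt=0.01, I_ext=10.0):
    """Integrate all neurons in lockstep and track each one's peak V."""
    V = np.full(n_neurons, -65.0)   # resting potential (mV)
    m = np.full(n_neurons, 0.05)    # gating variables near rest
    h = np.full(n_neurons, 0.6)
    n = np.full(n_neurons, 0.32)
    V_max = V.copy()
    for _ in range(int(t_ms / dt)):
        V, m, h, n = hh_step(V, m, h, n, I_ext, dt)
        V_max = np.maximum(V_max, V)
    return V_max

# With a sustained 10 uA/cm^2 drive, each neuron fires action potentials,
# so the peak membrane voltage crosses 0 mV.
peaks = simulate()
```

On a GPU, the loop body of `hh_step` would run as one CUDA thread per neuron; the NumPy vectorization above mimics that data-parallel structure.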
Abstract:
The capacitor test process at ABB Capacitors in Ludvika must be improved to meet future demands for high-voltage products. To find out how to improve the test process, an investigation was performed to establish which parts of the process are used and how they operate. Several parts that could improve the process were identified. One of them was selected to be improved in line with the subject, mechanical engineering. Four concepts were generated, and decision matrices were used to systematically select the best concept. Improving the process has added several benefits: more units can be tested and lead time is reduced. As lead time is reduced, the cost of each unit falls, workers spend fewer hours on the same number of tested units, and future work to further improve the process is also identified. The selected concept was concept 1, the sway-stop concept, which is used to reduce the sway of the capacitors once they have entered the test facility, the box. By improving this part of the test process, a time saving of 20 seconds per unit can be achieved, equivalent to a 7% time reduction, or an additional 1400 tested units each year.
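The quoted figures (20 s saved per unit, a 7% cycle-time reduction, roughly 1400 extra units per year) can be checked for consistency with a back-of-the-envelope calculation. The annual operating time of the test box is inferred here from those figures, not stated in the abstract:

```python
# Figures quoted in the abstract
saving_s = 20.0      # seconds saved per unit
reduction = 0.07     # fraction of the old cycle time
extra_units = 1400   # additional units per year

# Implied cycle times before and after the improvement
old_cycle_s = saving_s / reduction    # ~286 s per unit
new_cycle_s = old_cycle_s - saving_s  # ~266 s per unit

# Throughput gain (units per second) and the annual operating time
# of the test box implied by a gain of 1400 units/year
per_unit_gain = 1.0 / new_cycle_s - 1.0 / old_cycle_s
implied_hours = extra_units / per_unit_gain / 3600.0
```

The implied operating time comes out to roughly 1500 hours per year, a plausible utilization for a single test facility, so the three quoted figures are mutually consistent.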
Abstract:
The ever-increasing spurt in digital crimes such as image manipulation, image tampering, signature forgery, image forgery, illegal transactions, etc. has hard-pressed the demand to combat these forms of criminal activity. In this direction, biometrics - the computer-based validation of a person's identity - is becoming more and more essential, particularly for high-security systems. The essence of biometrics is the measurement of a person's physiological or behavioral characteristics, which enables authentication of a person's identity. Biometric-based authentication is also becoming increasingly important in computer-based applications because the amount of sensitive data stored in such systems is growing. The new demands on biometric systems are robustness, high recognition rates, the capability to handle imprecision and uncertainties of a non-statistical kind, and great flexibility. It is exactly here that soft computing techniques come into play. The main aim of this write-up is to present a pragmatic view of the applications of soft computing techniques in biometrics and to analyze their impact. It is found that soft computing has already made inroads, in terms of individual methods or in combination. Applications of varieties of neural networks top the list, followed by fuzzy logic and evolutionary algorithms. In a nutshell, soft computing paradigms are used for biometric tasks such as feature extraction, dimensionality reduction, pattern identification, pattern mapping and the like.
Abstract:
Architectural description languages (ADLs) are used to specify a high-level, compositional view of a software application. ADLs usually come equipped with a rigorous state-transition-style semantics, facilitating the specification and analysis of distributed and event-based systems. However, enterprise system architectures built upon newer middleware (implementations of Java's EJB specification, or Microsoft's COM+/.NET) require additional expressive power from an ADL. The TrustME ADL is designed to meet this need. In this paper, we describe several aspects of TrustME that facilitate the specification and analysis of middleware-based architectures for the enterprise.
Abstract:
This work evaluates the environmental impact resulting from natural gas and diesel combustion in thermoelectric power plants that use combined cycle (CC) technology, under Brazilian conditions according to the Thermopower Priority Plan (PPT). In regions where natural gas is not available, the option has been to use diesel, with consequently higher pollutant emissions. The ecological efficiency concept is applied, which broadly evaluates the environmental impact caused by CO2, SO2, NOx and particulate matter (PM) emissions. The combustion gases of thermoelectric power plants burning natural gas (less polluting) and diesel (more polluting) cause problems for the environment, since their components harm human life, animals and, directly, plants. The pollution resulting from natural gas and diesel combustion is analyzed by considering the CO2, SO2, NOx and particulate matter emissions separately and comparing them with the international air quality standards in use. It is concluded that quantitative and qualitative environmental factors can be calculated for a thermoelectric power plant; from the ecological standpoint, for a plant with a total power of 41,441 kW (27,170 kW from the gas turbine and 14,271 kW from the steam turbine), natural gas is a better fuel than diesel, presenting an ecological efficiency of 0.944 versus 0.914 for the latter, considering a thermal efficiency of 54% for the combined cycle. (c) 2006 Elsevier Ltd. All rights reserved.
Abstract:
The distribution of natural gas is carried out by means of long ducts and intermediate compression stations that compensate for the pressure drops due to friction. The natural gas compressors are usually driven by an electric motor or a gas turbine system, offering possibilities for energy management, one of which consists in generating electricity for in-plant use or for sale as an independent power producer. This can be done by matching the natural gas demand, at the minimum pressure allowed at the reception point, and the storage capacity of the feed duct with the maximum compressor capacity, storing the natural gas at the maximum permitted pressure. This allows the gas turbine to drive an electric generator during the time in which the decreasing pressure in the duct remains above the minimum acceptable to the sink unit. In this paper, a line-pack management analysis is performed for an existing compression station, considering its actual demand curve, to determine the economic feasibility of keeping the gas turbine driver generating electricity under a peak and off-peak tariff structure. The potential for cost reduction from the point of view of energy resources (natural gas and electricity costs) is also analyzed. (C) 2010 Elsevier Ltd. All rights reserved.
Abstract:
A summary of different topological arrangements of a ZCS-PWM cell is presented, based on the analysis of its application in boost rectifying preregulators controlled by the technique of instantaneous average values of the input current, with the purpose of obtaining a high-input-power-factor rectifier with high efficiency for single-phase applications in telecommunication systems. The main characteristics of each switching cell are described, providing the conditions to establish a qualitative comparison among the structures. In addition, experimental results are presented for a prototype of the latest version of the ZCS-PWM boost rectifier, implemented for nominal values of 1200 W output power and 400 V average output voltage, at 220 V RMS input voltage and 50 kHz switching frequency.
Abstract:
This work presents a methodology to analyze the first-swing transient stability of electric power systems using a neural network based on the adaptive resonance theory (ART) architecture, called the Euclidean ARTMAP neural network. ART architectures present plasticity and stability characteristics, which are very important for training and for executing the analysis quickly. The Euclidean ARTMAP version provides more accurate and faster solutions when compared to the fuzzy ARTMAP configuration. Three steps are necessary for the network to work: training, analysis and continuous training. The training step requires considerable effort (processing), while the analysis is carried out almost without computational effort. The proposed network can handle several topologies of the electric system at the same time; it is therefore an alternative for real-time transient stability analysis of electric power systems. To illustrate the proposed neural network, an application is presented for a multi-machine electric power system composed of 10 synchronous machines, 45 buses and 73 transmission lines. (C) 2010 Elsevier B.V. All rights reserved.
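The core ART mechanism the abstract relies on — match an input against stored category prototypes, commit a new category when no prototype resonates (plasticity), and otherwise nudge the winning prototype (stability) — can be sketched minimally with a Euclidean match criterion. This is an illustrative simplification, not the paper's full Euclidean ARTMAP (match tracking and the map field are collapsed into a per-label search), and the vigilance/learning-rate values are assumptions:

```python
import numpy as np

class EuclideanART:
    """Minimal ART-style classifier with a Euclidean match criterion."""

    def __init__(self, vigilance=1.0, lr=0.5):
        self.rho = vigilance   # max distance at which a prototype resonates
        self.lr = lr           # how far a winning prototype moves
        self.protos = []       # one prototype vector per committed category
        self.labels = []       # class label of each category

    def train(self, x, label):
        x = np.asarray(x, dtype=float)
        # Find the nearest prototype of the same label within vigilance
        best, best_d = None, np.inf
        for i, p in enumerate(self.protos):
            d = np.linalg.norm(x - p)
            if d < best_d and d <= self.rho and self.labels[i] == label:
                best, best_d = i, d
        if best is None:
            # No resonance: commit a new category (plasticity)
            self.protos.append(x.copy())
            self.labels.append(label)
        else:
            # Resonance: move the prototype toward the input (stability)
            self.protos[best] += self.lr * (x - self.protos[best])

    def predict(self, x):
        d = [np.linalg.norm(np.asarray(x, dtype=float) - p)
             for p in self.protos]
        return self.labels[int(np.argmin(d))]

# Tiny demo: two well-separated clusters of operating points
art = EuclideanART(vigilance=1.0)
for x in [[0.0, 0.0], [0.1, 0.0], [0.0, 0.1]]:
    art.train(x, "stable")
for x in [[5.0, 5.0], [5.1, 5.0], [5.0, 5.1]]:
    art.train(x, "unstable")
```

Because committed categories are only nudged (never overwritten) and new inputs outside vigilance open new categories, the network can keep learning new system topologies online — the "continuous training" step the abstract mentions.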
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
In this paper we present the results of a methodology for multinodal load forecasting using an artificial neural network of the Multilayer Perceptron type, with radial basis functions as the activation function and the Backpropagation algorithm to train the network. This methodology makes it possible to produce forecasts at various points of the power system, considering the different types of consumers (residential, commercial, industrial) on the electric grid, and is applied to the problem of short-term electric load forecasting (24 hours ahead). We use a database (Centralised Dataset - CDS) provided by the Electricity Commission of New Zealand for this work.
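The network structure the abstract describes — radial basis (Gaussian) activations feeding a linear output that maps hour-of-day to load — can be sketched on a toy curve. This is an illustrative sketch under stated assumptions: the data is a synthetic sinusoidal load, not the New Zealand Centralised Dataset, and the iterative Backpropagation training of the paper is replaced by the closed-form least-squares solution of the linear output layer:

```python
import numpy as np

# Synthetic 24-hour load curve standing in for one load node
hours = np.arange(24.0)
load = 0.6 + 0.3 * np.sin(2.0 * np.pi * (hours - 6.0) / 24.0)

# Radial-basis layer: Gaussian bumps on fixed centers across the day
centers = np.linspace(0.0, 23.0, 8)
width = 3.0

def rbf_features(t):
    """Gaussian activation of each hidden unit for each input hour."""
    t = np.atleast_1d(np.asarray(t, dtype=float))
    return np.exp(-((t[:, None] - centers[None, :]) ** 2)
                  / (2.0 * width ** 2))

# Fit the linear output weights (Backpropagation in the paper; here the
# single linear layer admits a closed-form least-squares solution)
X = rbf_features(hours)
w, *_ = np.linalg.lstsq(X, load, rcond=None)

# "Forecast" the next day's curve on the toy data
pred = rbf_features(hours) @ w
rmse = float(np.sqrt(np.mean((pred - load) ** 2)))
```

In the multinodal setting, one such model (or one output per node) would be fitted per forecast point, with consumer-type features added to the input vector.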
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)