Abstract:
Numerical weather prediction and climate simulation have been among the most computationally demanding applications of high-performance computing ever since they began in the 1950s. Since the 1980s, the most powerful computers have featured an ever larger number of processors; by the early 2000s, this number often reached several thousand. An operational weather model must use all these processors in a highly coordinated fashion. The critical resource in running such models is not computation but the amount of communication required between the processors. The communication capacity of parallel computers often falls far short of their computational power. The articles in this thesis cover fourteen years of research into how to harness thousands of processors for a single weather forecast or climate simulation, so that the application can benefit as much as possible from the power of parallel high-performance computers. The results attained in these articles have already been widely applied: currently, most of the organizations that carry out global weather forecasting or climate simulation anywhere in the world use methods introduced in them. Some further studies extend parallelization opportunities into other parts of the weather forecasting environment, in particular to the data assimilation of satellite observations.
Abstract:
This study compares different rotor structures of permanent magnet motors with fractional slot windings. The surface-mounted magnet and the embedded magnet rotor structures are studied. This thesis analyses the characteristics of a concentrated two-layer winding, each coil of which is wound around one tooth and which has fewer slots per pole and per phase than one (q < 1). Compared to the integer slot winding, the fractional winding (q < 1) has shorter end windings, thereby saving space as well as manufacturing cost. Several possible ways of winding a fractional slot machine with fewer slots per pole and per phase than one are examined. The winding factor and the winding harmonic components are calculated. The benefits attainable from a machine with concentrated windings are considered. Rotor structures with surface magnets, radially embedded magnets, and embedded magnets in V-position are discussed. The finite element method is used to solve the main values of the motors. The waveform of the induced electromotive force, the no-load and rated-load torque ripple, as well as the dynamic behaviour of the current-driven and voltage-driven motor are solved. The results obtained from the different finite element analyses are given. A simple analytical method for calculating fractional slot machines is introduced, and its values are compared to those obtained with the finite element analysis. Several different fractional slot machines are first designed using the simple analytical method and then computed using the finite element method. All the motors are of the same 225 frame size and have approximately the same amount of magnet material, the same rated torque demand, and a 400–420 rpm speed. An analysis of the computation results gives new information on the character of fractional slot machines. A fractional slot prototype machine with 0.4 slots per pole and per phase, 45 kW output power, and 420 rpm speed was constructed to verify the calculations. The measurement results and the finite element method results are found to be in agreement.
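The "slots per pole and per phase" figure that defines a fractional slot winding follows directly from the slot, pole, and phase counts. A minimal sketch of that relation; the 12-slot, 10-pole combination is a hypothetical example chosen because it reproduces the prototype's q = 0.4:

```python
from fractions import Fraction

def slots_per_pole_per_phase(slots, poles, phases=3):
    """q = slots / (poles * phases); the winding is fractional slot when q < 1."""
    return Fraction(slots, poles * phases)

# Hypothetical 12-slot, 10-pole, three-phase machine: q = 2/5 = 0.4,
# matching the prototype's slots per pole and per phase.
q = slots_per_pole_per_phase(12, 10)
print(q, float(q))  # 2/5 0.4
```

An integer slot winding by contrast gives whole-number q, e.g. 36 slots on 4 poles yields q = 3.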
Abstract:
This research deals with the dynamic modelling of gas-lubricated tilting pad journal bearings provided with spring-supported pads, including experimental verification of the computation. On the basis of a mathematical model of a film bearing, a computer program has been developed that can be used to simulate, as a function of time, a special type of tilting pad gas journal bearing supported by a rotary spring under different loading conditions (transient running conditions due to externally imposed, time-varying geometry). On the basis of the literature, different transformations have been used in the model to simplify the calculation. The numerical simulation is used to solve a non-stationary case of a gas film. The simulation results were compared with literature results for a stationary case (steady running conditions) and were found to agree. In addition, comparisons were made with a number of stationary and non-stationary bearing tests performed at Lappeenranta University of Technology using bearings designed with the simulation program. A study was also made, using numerical simulation and the literature, of the influence of the different bearing parameters on the stability of the bearing. Comparisons were made with the literature on tilting pad gas bearings. This bearing type is rarely used; only one literature reference has studied the same bearing type as that used at LUT. A new design of tilting pad gas bearing is introduced. It is based on a stainless steel body and electron beam welding of the bearing parts. It has good operating characteristics and is easier to tune and faster to manufacture than traditional constructions. It is also suitable for large-scale serial production.
Abstract:
Observers are often required to adjust their actions to objects that change their speed. However, no evidence for a direct sense of acceleration has been found so far. Instead, observers seem to detect changes in velocity within a temporal window when confronted with motion in the frontal plane (2D motion). Furthermore, recent studies suggest that motion-in-depth is detected by tracking changes of position in depth. Therefore, in order to sense acceleration in depth, a kind of second-order computation would have to be carried out by the visual system. In two experiments, we show that observers misperceive the acceleration of head-on approaches, at least within the ranges we used [600-800 ms], resulting in an overestimation of arrival time. Regardless of the viewing condition (monocular only, or monocular and binocular), the response pattern conformed to a constant-velocity strategy. However, when binocular information was available, the overestimation was greatly reduced.
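The constant-velocity strategy and the resulting overestimation can be made explicit with a standard kinematic argument (not taken from the article itself): for an object at initial distance \(d\) approaching with speed \(v\) and acceleration \(a\), the actual arrival time solves a quadratic, while a constant-velocity observer predicts \(d/v\):

```latex
d = v\,t_{\text{actual}} + \tfrac{1}{2}\,a\,t_{\text{actual}}^{2},
\qquad
\hat{t}_{\mathrm{CV}} = \frac{d}{v},
\qquad
a > 0 \;\Longrightarrow\; \hat{t}_{\mathrm{CV}} > t_{\text{actual}} .
```

An accelerating approach therefore arrives earlier than the constant-velocity estimate, which is exactly an overestimation of arrival time.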
Abstract:
The resource utilization level in the open laboratories of several universities has been shown to be very low. Our aim is to take advantage of those idle resources for parallel computation without disturbing the local load. In order to provide a system that lets us execute parallel applications in such a non-dedicated cluster, we use an integral scheduling system that considers both space- and time-sharing concerns. For the time-sharing (TS) aspect, we use a technique based on the communication-driven coscheduling principle. This kind of TS system has implications for the space-sharing (SS) system that force us to modify the way job scheduling is traditionally done. In this paper, we analyze the relation between the TS and SS systems in a non-dedicated cluster. As a consequence of this analysis, we propose a new technique, termed 3DBackfilling. This proposal implements the well-known SS technique of backfilling, but applied to an environment with a multiprogramming level (MPL) of parallel applications greater than one. In addition, 3DBackfilling considers the requirements of the local workload running on each node. Our proposal was evaluated in a PVM/MPI Linux cluster, and it was compared with several more traditional SS policies applied to non-dedicated environments.
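Classical backfilling, which 3DBackfilling extends, can be sketched compactly: the head of the queue reserves nodes at the earliest time enough of them free up, and later jobs may jump ahead only if they fit in the currently idle nodes and finish before that reservation. The sketch below is a minimal EASY-style backfilling illustration under assumed inputs; it does not model MPL > 1 or the local workload that 3DBackfilling adds:

```python
def easy_backfill(jobs, total_nodes):
    """Minimal EASY-backfilling sketch (illustration only, not 3DBackfilling).

    jobs  -- list of (name, nodes_needed, runtime) in arrival order
    Returns a dict {name: start_time}.
    """
    free, now = total_nodes, 0
    running = []                      # (end_time, nodes_held)
    queue = list(jobs)
    starts = {}

    while queue:
        name, need, run = queue[0]
        if need <= free:              # head job fits: start it now
            starts[name] = now
            free -= need
            running.append((now + run, need))
            queue.pop(0)
            continue
        # Head is blocked: find its reservation ("shadow") time, i.e. the
        # earliest completion by which enough nodes will have been released.
        avail, shadow = free, now
        for end, held in sorted(running):
            avail += held
            shadow = end
            if avail >= need:
                break
        # Backfill: a later job may start now if it fits in the idle nodes
        # and is guaranteed to finish before the head's reservation.
        for cand in queue[1:]:
            cname, cneed, crun = cand
            if cneed <= free and now + crun <= shadow:
                starts[cname] = now
                free -= cneed
                running.append((now + crun, cneed))
                queue.remove(cand)
                break
        else:
            # Nothing can backfill: advance time to the next job completion.
            now = min(end for end, _ in running)
            for end, held in [r for r in running if r[0] <= now]:
                running.remove((end, held))
                free += held
    return starts
```

On a hypothetical 6-node cluster with jobs A(4 nodes, 10 units), B(4, 10), C(2, 5), job C backfills at time 0 alongside A without delaying B.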
Abstract:
In the classical theorems of extreme value theory the limits of suitably rescaled maxima of sequences of independent, identically distributed random variables are studied. The vast majority of the literature on the subject deals with affine normalization. We argue that more general normalizations are natural from a mathematical and physical point of view and work them out. The problem is approached using the language of renormalization-group transformations in the space of probability densities. The limit distributions are fixed points of the transformation and the study of its differential around them allows a local analysis of the domains of attraction and the computation of finite-size corrections.
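The affine normalization referred to above is the one in the classical Fisher–Tippett (extremal types) theorem: maxima are centered and scaled by sequences \(a_n > 0\), \(b_n\), and the only possible nondegenerate limits form the generalized extreme value family (a standard statement, included here for orientation):

```latex
\Pr\!\left(\frac{M_n - b_n}{a_n} \le x\right)
\xrightarrow[n \to \infty]{} G_\gamma(x),
\qquad M_n = \max(X_1, \dots, X_n),
\qquad
G_\gamma(x) = \exp\!\left\{ -\left(1 + \gamma x\right)^{-1/\gamma} \right\},
\quad 1 + \gamma x > 0,
```

where \(\gamma > 0\), \(\gamma < 0\), and the limit \(\gamma \to 0\) (Gumbel, \(G_0(x) = e^{-e^{-x}}\)) recover the Fréchet, Weibull, and Gumbel types. The renormalization-group view generalizes the affine map \(x \mapsto a_n x + b_n\) to broader classes of transformations.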
Abstract:
An analytical approach for the interpretation of multicomponent heterogeneous adsorption or complexation isotherms in terms of multidimensional affinity spectra is presented. A Fourier transform, applied to analyze the corresponding integral equation, leads to an inversion formula which allows the computation of the multicomponent affinity spectrum underlying a given competitive isotherm. Although a different mathematical methodology is used, this procedure can be seen as the extension to multicomponent systems of Sips's classical work on monocomponent systems. Furthermore, a methodology which yields analytical expressions for the main statistical properties (mean free energies of binding and the covariance matrix) of multidimensional affinity spectra is reported. Thus, the level of binding correlation between the different components can be quantified. It should be highlighted that the reported methodology does not require knowledge of the affinity spectrum to calculate the means, variances, and covariances of the binding energies of the different components. The non-ideal competitive consistent adsorption (NICCA) isotherm, widely used for metal/proton competitive complexation to environmental macromolecules, and the Frumkin competitive isotherm are selected to illustrate the application of the reported results. Explicit analytical expressions for the affinity spectrum as well as for the correlation matrix are obtained for the NICCA case. © 2004 American Institute of Physics.
Abstract:
Possibilistic Defeasible Logic Programming (P-DeLP) is a logic programming language which combines features from argumentation theory and logic programming, incorporating the treatment of possibilistic uncertainty at the object-language level. In spite of its expressive power, an important limitation in P-DeLP is that imprecise, fuzzy information cannot be expressed in the object language. One interesting alternative for solving this limitation is the use of PGL+, a possibilistic logic over Gödel logic extended with fuzzy constants. Fuzzy constants in PGL+ allow expressing disjunctive information about the unknown value of a variable, in the sense of a magnitude, modelled as a (unary) predicate. The aim of this article is twofold: firstly, we formalize DePGL+, a possibilistic defeasible logic programming language that extends P-DeLP through the use of PGL+ in order to incorporate fuzzy constants and a fuzzy unification mechanism for them. Secondly, we propose a way to handle conflicting arguments in the context of the extended framework.
Abstract:
Tissue analysis is a useful tool for the nutrient management of fruit orchards. The mineral composition of diagnostic tissues, expressed as nutrient concentrations on a dry-weight basis, has long been used to assess the status of 'pure' nutrients. When nutrients are mixed and interact in plant tissues, their proportions or concentrations change relative to each other as a result of synergism, antagonism, or neutrality, hence producing resonance within the closed space of tissue composition. Ternary diagrams and nutrient ratios are early representations of interacting nutrients in the compositional space. Dual and multiple interactions were integrated into nutrient indexes by the Diagnosis and Recommendation Integrated System (DRIS) and into centered log ratios by Compositional Nutrient Diagnosis (CND-clr). DRIS has some computational flaws, such as using a dry matter index that is not a part, as well as nutrient products (e.g. N×Ca) instead of ratios. DRIS and CND-clr integrate all possible nutrient interactions without defining an ad hoc interaction model. They diagnose D components, while D-1 could be diagnosed in the D-compositional Hilbert space. Isometric log ratio (ilr) coordinates overcome these problems by using orthonormal binary nutrient partitions instead of dual ratios. In this study, a nutrient interaction model is presented, together with computation methods for DRIS, CND-clr, and CND-ilr coordinates, using leaf analytical data from an experimental apple orchard in southwestern Quebec, Canada. The Aitchison and Mahalanobis distances across ilr coordinates were computed as measures of nutrient imbalance. The effect of changing nutrient concentrations on ilr coordinates is simulated to identify those contributing the most to nutrient imbalance.
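The Aitchison distance used here as an imbalance measure is the Euclidean distance between centered log-ratio (clr) transformed compositions. A minimal sketch; the three-part compositions in the comments are hypothetical, not the orchard data:

```python
import math

def clr(x):
    """Centered log-ratio transform of a composition (all parts > 0):
    clr_i(x) = ln(x_i / g(x)), with g(x) the geometric mean of the parts."""
    g = math.exp(sum(math.log(v) for v in x) / len(x))
    return [math.log(v / g) for v in x]

def aitchison_distance(x, y):
    """Aitchison distance: Euclidean distance between clr-transformed parts."""
    return math.dist(clr(x), clr(y))

# Two hypothetical 3-part compositions (proportions summing to 1):
d = aitchison_distance([0.5, 0.3, 0.2], [0.4, 0.4, 0.2])
```

CND-ilr works in the same geometry: ilr coordinates form an orthonormal basis of the clr hyperplane, so distances computed in either representation coincide.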
Abstract:
As part of the Universal Converter (UNICON) project, an embedded system based on a digital signal processor (DSP) was designed for the control and measurement of electric motor drives. To ensure sufficient computing power, a multiprocessor system was chosen. The DSP chip for the processor system was selected on the basis of the processing power and multiprocessor support offered by the candidate chips. The SHARC-series DSPs from Analog Devices best met the requirements: in addition to an efficient instruction set, they offer a large internal memory and built-in multiprocessor support. Because the system is essentially a measurement instrument, a central design goal was to create fast data transfer links from the measurement sensors to the DSP system. This was implemented using programmable FPGA logic chips for receiving and preprocessing the digital measurement data. The data transfer link to a PC was implemented using a dedicated interface card between the DSP system and the PC. The main task of the interface card is to buffer the transferred data. This arrangement prevents the PC from affecting the operation of the DSP system, so that real-time operation can be guaranteed under all conditions.
Abstract:
Purpose: Atheromatous plaque progression is affected by, among other phenomena, biomechanical, biochemical, and physiological factors. In this paper, the authors introduce a novel framework able to provide both morphological (vessel radius, plaque thickness and type) and biomechanical (wall shear stress and von Mises stress) indices of coronary arteries. Methods: First, the approach reconstructs the three-dimensional morphology of the vessel from intravascular ultrasound (IVUS) and angiographic sequences, requiring minimal user interaction. Then, a computational pipeline automatically assesses fluid-dynamic and mechanical indices. Ten coronary arteries are analyzed, illustrating the capabilities of the tool and confirming previous technical and clinical observations. Results: The relations between the arterial indices obtained by IVUS measurement and by simulation have been quantitatively analyzed along the whole surface of the artery, extending the analysis of the coronary arteries presented in previous state-of-the-art studies. Additionally, for the first time in the literature, the framework allows the computation of the membrane stresses using a simplified mechanical model of the arterial wall. Conclusions: Circumferentially (within a given frame), statistical analysis shows an inverse relation between wall shear stress and plaque thickness. At the global level (comparing a frame with the entire vessel), it is observed that heavy plaque accumulations are in general calcified and located in areas of the vessel with high wall shear stress. Finally, in these experiments the inverse proportionality between fluid and structural stresses is observed.
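The two biomechanical indices named above are standard quantities; for orientation, their textbook definitions (not the authors' specific implementation): wall shear stress is the viscous tangential traction exerted by the blood on the lumen wall, and the von Mises stress condenses the principal stresses of the wall model into a single scalar:

```latex
\tau_w = \mu \left. \frac{\partial u_t}{\partial n} \right|_{\text{wall}},
\qquad
\sigma_{vM} = \sqrt{ \tfrac{1}{2}\left[ (\sigma_1 - \sigma_2)^2
  + (\sigma_2 - \sigma_3)^2 + (\sigma_3 - \sigma_1)^2 \right] },
```

where \(\mu\) is the dynamic viscosity, \(u_t\) the tangential flow velocity, \(n\) the wall-normal direction, and \(\sigma_1, \sigma_2, \sigma_3\) the principal stresses.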
Abstract:
The purpose of this work was to study the applicability of the pinch method to the operating UPM-Kymmene Oyj Kaukas pulp mill and to map, by means of pinch analysis, the heat energy demands and surpluses of the cooking plant and the evaporation plant in summer and winter conditions for the year 1999. August was taken to represent the summer situation and February the winter situation. The beginning of the work briefly reviews the theory of the pinch technique and gives a general introduction to the Pro_pi2 calculation program used in the work. The calculation results were produced by processing, with the Pro_pi2 program, data on the inlet and target temperatures and mass flows of the flowing liquids retrieved from the data acquisition systems. The results are presented in Chapter 5, where the results obtained for the summer and winter situations are compared with each other. For comparison, data for the winter of 2000 were also included in addition to the 1999 data. The Pro_pi2 calculation program based on the pinch method was found to be better suited to the preliminary design of a heat recovery system and of a mill's energy use than to the study of an existing mill.
Abstract:
The purpose of this work is to examine the impact of future development prospects on district heating in Vaasa. The transfer capacity of the district heating network under present and future load conditions is studied using Komartek's Flowra 32 network calculation program. As part of the work, a growth forecast for district heating over the next ten years is drawn up, and the power demand corresponding to the design temperature of -29°C is determined by statistical analysis. In addition, possible solutions for producing peak and reserve power are investigated. The profitability of short-term heat storage in the energy procurement system is also examined. Based on the analysis, the transfer capacity of the district heating network is reasonably good, but as connected loads grow, the pressure differences at the far end of the network become too low. The best solution to the pressure-difference problem is to build an intermediate pumping station in Hovioikeudenpuisto. The analysis indicates that in ten years' time about 40 MW of additional district heating reserve capacity will be needed, and that the most profitable option is to build a heating plant fired with heavy fuel oil as reserve capacity. At current energy prices, short-term heat storage is reasonably profitable, especially if the Ministry of Trade and Industry grants the project the full 30% investment subsidy.
Abstract:
The computer simulation of reaction dynamics has nowadays reached a remarkable degree of accuracy. Triatomic elementary reactions are rigorously studied in great detail on a straightforward basis using a considerable variety of quantum dynamics computational tools available to the scientific community. In our contribution we compare the performance of two quantum scattering codes in the computation of reaction cross sections for a triatomic benchmark reaction, the gas-phase reaction Ne + H2+ → NeH+ + H. The computational codes are selected as representative of time-dependent (Real Wave Packet) and time-independent (ABC) methodologies. The main conclusion to be drawn from our study is that the two strategies are, to a great extent, not competing but rather complementary. While time-dependent calculations offer advantages with respect to the energy range that can be covered in a single simulation, time-independent approaches provide much more detailed information from each single-energy calculation. Further details, such as the calculation of reactivity at very low collision energies and the computational effort required to account for the Coriolis couplings, are analyzed in this paper.
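Both code families ultimately deliver the same observable: a partial-wave sum over total angular momentum \(J\) of S-matrix probabilities. Schematically, in the standard scattering-theory form (with \(k_i\) the initial-channel wavenumber; initial-state degeneracy factors omitted):

```latex
\sigma_{i \to f}(E) = \frac{\pi}{k_i^{2}} \sum_{J} (2J + 1)\, \bigl| S^{J}_{fi}(E) \bigr|^{2} .
```

A time-independent code obtains \(S^{J}\) at one energy per run, whereas a single wave-packet propagation yields it over a whole band of energies, which is the complementarity noted above.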
Abstract:
We consider the numerical treatment of the optical flow problem by evaluating the performance of the trust region method versus the line search method. To the best of our knowledge, the trust region method is studied here for the first time for variational optical flow computation. Four different optical flow models are used to test the performance of the proposed algorithm, combining linear and nonlinear data terms with quadratic and TV regularization. We show that trust region often performs better than line search, especially in the presence of non-linearity and non-convexity in the model.
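The four model variants correspond to choices of data term and regularizer in a variational energy of the generic form below; this is the standard form (the quadratic-data, quadratic-regularizer case is the classical Horn–Schunck functional), not necessarily the authors' exact notation:

```latex
E(u, v) = \int_\Omega \Psi\!\left( (I_x u + I_y v + I_t)^2 \right) dx
        \;+\; \alpha \int_\Omega \Phi\!\left( |\nabla u|^2 + |\nabla v|^2 \right) dx ,
```

where \((u, v)\) is the flow field, \(I_x, I_y, I_t\) are image derivatives, and \(\alpha > 0\) weights the regularizer. Taking \(\Psi\) and \(\Phi\) as the identity gives the quadratic/quadratic model; robust (nonlinear) data terms and TV regularization replace them with sub-quadratic penalties, introducing exactly the non-linearity and non-convexity for which trust region methods prove advantageous.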