947 results for dynamic parameters identification


Relevance:

30.00%

Publisher:

Abstract:

Large Dynamic Message Signs (DMSs) have been increasingly used on freeways, expressways, and major arterials to better manage traffic flow by providing accurate and timely information to drivers. Overhead truss structures are typically employed to support these DMSs, allowing them to span more lanes with a wider display. In recent years, there has been increasing evidence that the truss structures supporting these large and heavy signs are subjected to much more complex loadings than are typically accounted for in the codified design procedures. Consequently, some of these structures have required frequent inspections, retrofitting, and even premature replacement. Two manufacturing processes are primarily utilized on truss structures: welding and bolting. Recently, cracks at weld toes were reported for structures employed in some states. Extremely large loads (e.g., due to high winds) could cause brittle fracture, and cyclic vibration (e.g., due to diurnal variation in temperature or oscillations in the wind force induced by vortex shedding behind the DMS) may lead to fatigue damage; these are the two major failure modes for metallic materials. Wind and strain resulting from temperature changes are the main loads that affect the structures during their lifetime. The American Association of State Highway and Transportation Officials (AASHTO) Specification defines the limit loads for dead load, wind load, ice load, and fatigue design for natural wind gust and truck-induced gust. The objectives of this study are to investigate wind and thermal effects in bridge-type overhead DMS truss structures and to improve the current design specifications (e.g., for thermal design). To accomplish this objective, it is necessary to study the structural behavior and the detailed strain and stress state of the truss structures caused by wind load on the DMS cabinet and thermal load on the truss supporting it. The study is divided into two parts. The Computational Fluid Dynamics (CFD) component and part of the structural analysis component were conducted at the University of Iowa, while the field study and related structural analysis computations were conducted at Iowa State University. The CFD simulations were used to determine the air-induced forces (wind loads) on the DMS cabinets, and finite element analysis was used to determine the response of the supporting trusses to these pressure forces. The field observation portion consisted of short-term monitoring of several DMS cabinet/truss systems and long-term monitoring of one DMS cabinet/truss. The short-term monitoring was a one- or two-day event in which several message sign panel/truss systems were tested. The long-term monitoring field study extended over several months. Analysis of the data focused on identifying important behaviors under both ambient and truck-induced winds and the effect of daily temperature changes. Results of the CFD investigation, field experiments, and structural analysis of the wind-induced forces on the DMS cabinets and their effect on the supporting trusses showed that the passage of trucks cannot be responsible for the problems observed to develop at trusses supporting DMS cabinets. Rather, the data pointed toward the important effect of the thermal load induced by cyclic (diurnal) variations of temperature. Thermal influence is not discussed in the specification, either in limit load or fatigue design.
Although the frequency of the thermal load is low, results showed that when the temperature range is large, the resulting stress range can be significant for the structure, especially near welded areas where stress concentrations may occur. Moreover, stress amplitude and range are the primary parameters for brittle fracture and fatigue life estimation. Long-term field monitoring of one of the overhead truss structures in Iowa was used as the research baseline to estimate the effect of diurnal temperature changes on fatigue damage. The evaluation of the collected data is an important approach for understanding the structural behavior and for the advancement of future code provisions. A finite element model was developed to estimate the strain and stress magnitudes, which were compared with the field monitoring data. The fatigue life of the truss structures was also estimated based on AASHTO specifications and the numerical modeling. The main conclusion of the study is that thermally induced fatigue damage of the truss structures supporting DMS cabinets is likely a significant contributing cause of the cracks observed to develop in such structures. Other probable causes of fatigue damage not investigated in this study are the cyclic oscillations of the total wind load associated with vortex shedding behind the DMS cabinet at high wind conditions, and fabrication tolerances and stresses induced by the fitting of tube-to-tube connections.
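To make the fatigue reasoning above concrete, the following sketch applies a generic AASHTO-style finite-life S-N relation and Miner's rule to a once-per-day thermal stress cycle. It only illustrates the type of calculation described, not the study's actual model: the detail constant and the stress range are hypothetical placeholder values.

```python
# Illustrative sketch only (not the study's model): fatigue damage from one
# diurnal thermal stress cycle per day, using an AASHTO-style finite-life
# relation N = A / S^3 and Miner's rule. A and the stress range are hypothetical.

def cycles_to_failure(stress_range_ksi: float, detail_constant: float) -> float:
    """Finite-life S-N relation: N = A / S^3."""
    return detail_constant / stress_range_ksi ** 3

def annual_thermal_damage(stress_range_ksi: float,
                          detail_constant: float,
                          cycles_per_year: int = 365) -> float:
    """Miner's-rule damage accumulated by one thermal stress cycle per day."""
    return cycles_per_year / cycles_to_failure(stress_range_ksi, detail_constant)

A = 44.0e8       # hypothetical detail constant, ksi^3 (a Category C-like value)
d_sigma = 6.0    # hypothetical diurnal stress range near a weld toe, ksi
print(f"Miner damage per year: {annual_thermal_damage(d_sigma, A):.2e}")
```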

Relevance:

30.00%

Publisher:

Abstract:

Several methods and approaches for measuring parameters to determine fecal sources of pollution in water have been developed in recent years. No single microbial or chemical parameter has proved sufficient to determine the source of fecal pollution. Combinations of parameters involving at least one discriminating indicator and one universal fecal indicator offer the most promising solutions for qualitative and quantitative analyses. The universal (nondiscriminating) fecal indicator provides quantitative information regarding the fecal load. The discriminating indicator contributes to the identification of a specific source. The relative values of the parameters derived from both kinds of indicators could provide information regarding the contribution to the total fecal load from each origin. It is also essential that both parameters characteristically persist in the environment for similar periods. Numerical analysis, such as inductive learning methods, could be used to select the most suitable and the smallest number of parameters to develop predictive models. These combinations of parameters provide information on factors affecting the models, such as dilution, specific types of animal source, persistence of microbial tracers, and complex mixtures from different sources. The combined use of the enumeration of somatic coliphages and the enumeration of Bacteroides phages using different host-specific strains (one from humans and another from pigs), both selected using the suggested approach, provides a feasible model for quantitative and qualitative analyses of fecal source identification.
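As a hedged illustration of the "inductive learning" idea mentioned above (and not the authors' actual model), the sketch below trains a small decision tree on hypothetical indicator counts to classify the dominant fecal source; all values are invented for the example.

```python
# Hedged illustration of the "inductive learning" idea (not the authors' model):
# a small decision tree trained on hypothetical indicator counts to classify
# the dominant fecal source. All values are invented for the example.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Columns: log10 somatic coliphages (universal indicator),
#          log10 human-specific Bacteroides phages,
#          log10 pig-specific Bacteroides phages (discriminating indicators)
X = np.array([
    [4.1, 2.6, 0.9],   # human-impacted samples (hypothetical)
    [4.3, 2.8, 1.1],
    [3.9, 0.7, 2.5],   # pig-impacted samples (hypothetical)
    [4.0, 0.9, 2.7],
])
y = np.array(["human", "human", "pig", "pig"])

clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(clf.predict([[4.2, 2.4, 1.0]]))   # expected: ['human']
```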

Relevance:

30.00%

Publisher:

Abstract:

Disasters are often perceived as fast and random events. While the triggers may be sudden, disasters are the result of an accumulation of consequences of inappropriate actions and decisions and of global change. To modify this perception of risk, advocacy tools are needed. Quantitative methods have been developed to identify the distribution and the underlying factors of risk.

Disaster risk results from the intersection of hazards, exposure and vulnerability. The frequency and intensity of hazards can be influenced by climate change or by the decline of ecosystems, population growth increases exposure, while changes in the level of development affect vulnerability. Given that each of its components may change, risk is dynamic and should be reassessed periodically by governments, insurance companies or development agencies. At the global level, these analyses are often performed using databases of reported losses. Our results show that these are likely to be biased, in particular by improvements in access to information. International loss databases are not exhaustive and do not give information on exposure, intensity or vulnerability. A new approach, independent of reported losses, is therefore necessary.

The research presented here was mandated by the United Nations and by agencies working in development and the environment (UNDP, UNISDR, GTZ, UNEP and IUCN). These organizations needed a quantitative assessment of the underlying factors of risk, to raise awareness among policymakers and to prioritize disaster risk reduction projects.

The method is based on geographic information systems, remote sensing, databases and statistical analysis. It required a large amount of data (1.7 TB on both the physical environment and socio-economic parameters) and several thousand hours of processing. A comprehensive global risk model was developed to reveal the distribution of hazards, exposure and risk, and to identify the underlying risk factors for several hazards (floods, tropical cyclones, earthquakes and landslides). Two multiple-risk indexes were generated to compare countries. The results include an evaluation of the role of hazard intensity, exposure, poverty and governance in the patterns and trends of risk. It appears that the vulnerability factors change depending on the type of hazard and that, contrary to exposure, their weight decreases as the intensity increases.

At the local level, the method was tested to highlight the influence of climate change and ecosystem decline on hazards. In northern Pakistan, deforestation increases landslide susceptibility. Research in Peru (based on satellite imagery and ground data collection) revealed rapid glacier retreat and provided an assessment of the remaining ice volume as well as scenarios of its possible evolution.

These results were presented to different audiences, including 160 governments, and the results and data generated are available online through an open-source SDI (http://preview.grid.unep.ch). The method is flexible and easily transferable to different scales and issues, offering good prospects for adaptation to other research areas. Risk characterization at the global level and the identification of the role of ecosystems in disaster risk are rapidly developing fields. This research revealed many challenges; some were resolved, while others remain as limitations. It is nevertheless clear that the level of development, and moreover unsustainable development, configures a large part of disaster risk, and that the dynamics of risk are governed primarily by global change.
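The multiplicative hazard-exposure-vulnerability formulation described above can be sketched as follows. This is a schematic illustration only: the exponents, proxies and country values are hypothetical, not the coefficients fitted in the study.

```python
# Schematic sketch of a multiplicative risk formulation (risk as the
# intersection of hazard, exposure and vulnerability). The exponents and the
# proxy values below are hypothetical, not the coefficients fitted in the study.
import numpy as np

def expected_risk(hazard_freq, exposed_pop, vulnerability_proxies, exponents):
    """Multiplicative model: R = H * Pop * prod(V_i ** a_i)."""
    v = np.prod([p ** a for p, a in zip(vulnerability_proxies, exponents)])
    return hazard_freq * exposed_pop * v

# Hypothetical country record: flood frequency (events/year), exposed population,
# and two vulnerability proxies (a poverty index and a governance index).
print(expected_risk(hazard_freq=0.8,
                    exposed_pop=2.5e6,
                    vulnerability_proxies=[0.45, 0.60],
                    exponents=[1.2, 0.7]))
```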

Relevance:

30.00%

Publisher:

Abstract:

The pancreatic β-cell mass is a dynamic tissue which adapts to variations in metabolic demand in order to ensure normoglycemia. This adaptation occurs through a change in both insulin secretion and the total mass of β-cells. An absolute or relative loss of β-cells leads to type 1 and type 2 diabetes, respectively. The mechanisms that regulate the pancreatic β-cell mass and maintain the fully differentiated phenotype of the insulin-secreting β-cells are still poorly defined. Their identification is required not only to understand the progression of diabetes but also to design strategies for its treatment. Islet transplantation is a promising therapeutic approach for type 1 diabetes, but it is still limited by an early graft loss due to cytokine-induced apoptosis. In order to improve β-cell survival during islet transplantation, our first goal was to find novel peptide blockers of FasL- and TNF-α-mediated cell death. To that end, we screened two phage display libraries to select Fas DD- or TNFR1 DD-binding peptides, and identified six different small peptides. However, none of these peptides was able to protect cells from FasL- or TNF-α-mediated apoptosis. Secondly, GLP-1 is a hormone that has been shown to stimulate insulin secretion and to be involved in β-cell proliferation, differentiation and inhibition of apoptosis. We hypothesized that GLP-1 plays a crucial role in controlling the mass and function of β-cells. To evaluate this hypothesis, we performed a cDNA microarray analysis comparing GLP-1-treated βTC-Tet cells with untreated cells. We found 376 regulated genes; among these, RGS2, CREM, ICER1 and DUSP14 were significantly upregulated by GLP-1. We confirmed that both their mRNA and protein levels were strongly and rapidly increased after GLP-1 treatment. Moreover, we found that GLP-1 activates their expression mainly through the cAMP/PKA signaling pathway and that this induction requires extracellular calcium entry. Based on their biological functions, we then hypothesized that these genes might act as negative regulators of GLP-1 signaling and, in particular, might restrain the effects of GLP-1 on β-cell proliferation. To verify this hypothesis, siRNAs against these genes were developed; their effect on GLP-1-induced β-cell proliferation will be evaluated in further work.

Relevance:

30.00%

Publisher:

Abstract:

Electroencephalography (EEG) is an easily accessible and low-cost modality that might prove to be a particularly powerful tool for the identification of subtle functional changes preceding structural or metabolic deficits in progressive mild cognitive impairment (PMCI). Most previous contributions in this field assessed quantitative EEG differences between healthy controls, MCI and Alzheimer's disease (AD) cases, leading to contradictory data. In terms of MCI conversion to AD, certain longitudinal studies proposed various quantitative EEG parameters for an a priori distinction between PMCI and stable MCI. However, cross-sectional comparisons revealed a substantial overlap in these parameters between MCI patients and elderly controls. Methodological differences, including variable clinical definitions of MCI cases and substantial interindividual differences within the MCI group, could partly explain these discrepancies. Most importantly, EEG measurements without cognitive demand, in both cross-sectional and longitudinal designs, have demonstrated limited sensitivity and generally do not produce significant group differences in spectral EEG parameters. Since the evolution of AD is characterized by the progressive loss of functional connectivity within neocortical association areas, event-modulated EEG dynamic analysis, which makes it possible to investigate the functional activation of neocortical circuits, may represent a more sensitive method to identify early alterations of neuronal networks predictive of AD development among MCI cases. The present review summarizes clinically significant results of EEG activation studies in this field and discusses future perspectives of research aiming to reach an early and individual prediction of cognitive decline in healthy elderly controls.
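As a hedged example of the kind of quantitative EEG parameter discussed above, the sketch below computes relative band power from a synthetic single-channel signal with a Welch periodogram; the band limits and the signal are placeholders, not values from the reviewed studies.

```python
# Hedged example of a quantitative EEG parameter: relative spectral band power
# from a Welch periodogram. The synthetic signal and the band limits are
# placeholders, not values taken from the reviewed studies.
import numpy as np
from scipy.signal import welch

fs = 256.0                                    # sampling rate, Hz
t = np.arange(0, 30, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)   # synthetic channel

f, psd = welch(eeg, fs=fs, nperseg=int(4 * fs))

def relative_power(fmin, fmax, f=f, psd=psd):
    """Band power divided by total 1-30 Hz power."""
    band = (f >= fmin) & (f < fmax)
    total = (f >= 1.0) & (f < 30.0)
    return psd[band].sum() / psd[total].sum()

print("relative theta (4-8 Hz): ", relative_power(4, 8))
print("relative alpha (8-13 Hz):", relative_power(8, 13))
```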

Relevance:

30.00%

Publisher:

Abstract:

The objective of this work was to estimate the repeatability of adaptability and stability parameters of common bean between years, within each biennium from 2003 to 2012, in Minas Gerais state, Brazil. Grain yield data from common bean trials of value for cultivation and use were analyzed. Grain yield, ecovalence, regression coefficient, and coefficient of determination were estimated considering location and sowing season per year, within each biennium. Subsequently, an analysis of variance of these estimates was carried out, and repeatability was estimated in the biennia. The repeatability estimate for grain yield was relatively high in most of the biennia, but for ecovalence and the regression coefficient it was null or of small magnitude, which indicates that the identification of common bean lines for recommendation is more reliable when based on yield means than on stability parameters.
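For readers unfamiliar with the stability parameters named above, the sketch below computes Wricke's ecovalence and a joint-regression coefficient for a small, hypothetical genotype-by-environment yield table; it illustrates the definitions only and does not reproduce the trial data.

```python
# Illustration of two stability parameters used above: Wricke's ecovalence and
# a joint-regression coefficient, computed for a hypothetical genotype x
# environment yield table (this is not the trial data).
import numpy as np

# rows = common bean lines, columns = environments (location/sowing season), t/ha
Y = np.array([
    [2.1, 2.8, 3.0, 2.5],
    [2.4, 2.6, 3.3, 2.9],
    [1.9, 2.9, 2.7, 2.4],
])

g_mean = Y.mean(axis=1, keepdims=True)        # line means
e_mean = Y.mean(axis=0, keepdims=True)        # environment means
grand = Y.mean()

# Wricke's ecovalence: each line's contribution to the G x E interaction
ecovalence = ((Y - g_mean - e_mean + grand) ** 2).sum(axis=1)

# Regression of each line on the environmental index I_j = e_mean_j - grand mean
I = (e_mean - grand).ravel()
b = ((Y - g_mean) * I).sum(axis=1) / (I ** 2).sum()

print("ecovalence:", ecovalence)
print("regression coefficients:", b)
```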

Relevance:

30.00%

Publisher:

Abstract:

In this paper we develop a new linear approach to identify the parameters of a moving average (MA) model from the statistics of the output. First, we show that, under some constraints, the impulse response of the system can be expressed as a linear combination of cumulant slices. Then, this result is used to obtain a new well-conditioned linear method to estimate the MA parameters of a non-Gaussian process. The proposed method presents several important differences with existing linear approaches. The linear combination of slices used to compute the MA parameters can be constructed from different sets of cumulants of different orders, providing a general framework where all the statistics can be combined. Furthermore, it is not necessary to use second-order statistics (the autocorrelation slice), and therefore the proposed algorithm still provides consistent estimates in the presence of colored Gaussian noise. Another advantage of the method is that while most linear methods developed so far give totally erroneous estimates if the order is overestimated, the proposed approach does not require a previous estimation of the filter order. The simulation results confirm the good numerical conditioning of the algorithm and the improvement in performance with respect to existing methods.
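The sketch below is a hedged baseline that makes the notion of a cumulant slice concrete: it estimates MA parameters with the classical closed-form ratio b(k) = c3(q, k)/c3(q, 0) (Giannakis' formula). It is not the linear combination-of-slices method proposed in the paper, and it assumes the order q is known.

```python
# Hedged baseline, NOT the method proposed above: Giannakis' closed-form
# formula b(k) = c3(q, k) / c3(q, 0) recovers MA coefficients from a single
# third-order cumulant slice, assuming the order q is known and the driving
# noise is non-Gaussian (here zero-mean exponential).
import numpy as np

def third_order_cumulant(x, tau1, tau2):
    """Sample estimate of c3x(tau1, tau2) = E[x(n) x(n+tau1) x(n+tau2)]."""
    n = len(x) - max(tau1, tau2, 0)
    return np.mean(x[:n] * x[tau1:tau1 + n] * x[tau2:tau2 + n])

rng = np.random.default_rng(0)
b_true = np.array([1.0, -0.9, 0.385])           # MA(2) coefficients, b(0) = 1
w = rng.exponential(1.0, 200_000) - 1.0         # zero-mean, skewed innovations
x = np.convolve(w, b_true)[: w.size]            # MA(2) output

q = 2
c_q0 = third_order_cumulant(x, q, 0)
b_est = np.array([third_order_cumulant(x, q, k) / c_q0 for k in range(q + 1)])
print(b_est)                                    # should be close to b_true
```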

Relevance:

30.00%

Publisher:

Abstract:

The aim of this work was to develop an empirical multivariable function for the mixing capability of a particular dynamic in-line mixer. The multivariable function was to be developed on the basis of the mixer's rotational speed, the materials being mixed, and the volumetric flow rates. The literature part of the work mainly examined dynamic in-line mixers and their operating principles, as well as measurement methods suitable for determining mixing capability. The inhomogeneity of the mixed substances was determined by measuring, with thermocouples, the temperature fluctuations of the flow downstream of the mixer. Three different mixtures were used as the materials to be mixed: water-water, fiber suspension-water, and carboxymethyl cellulose-water mixtures. The other variables in the experiments were four different mixer rotational speeds, four different main-flow volumetric flow rates, and three ratios of the main flow to the side flow added to it. The multivariable function was developed only on the basis of the fiber suspension-water runs, because the test runs with the other mixtures were partly unsuccessful. The mixer's rotational speed proved to be the dominant parameter in the multivariable function. The partial failure of the experiments was due partly to the unsuitability of thermocouples for determining mixing capability and partly to the choice of the materials to be mixed. With regard to further research, electrical tomography was judged to be the most usable of the other available methods for measuring mixing capability.
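A minimal sketch of fitting an empirical multivariable function of the kind described above is given below: a power-law model of the mixing index in the rotational speed, main flow rate and side-to-main flow ratio, fitted by least squares in log space. All data values are hypothetical.

```python
# Hedged sketch of fitting an empirical multivariable (power-law) function,
# mixing_index ≈ C * N**a * Q**b * r**c, by linear least squares in log space.
# All data values below are hypothetical.
import numpy as np

# columns: rotational speed N (1/s), main flow Q (L/s), side-to-main flow ratio r
X = np.array([
    [10, 2.0, 0.05], [10, 4.0, 0.10], [20, 2.0, 0.10],
    [20, 4.0, 0.05], [30, 3.0, 0.07], [40, 4.0, 0.10],
], dtype=float)
mixing_index = np.array([0.30, 0.26, 0.52, 0.55, 0.68, 0.85])   # hypothetical responses

A = np.column_stack([np.ones(len(X)), np.log(X)])
coeffs, *_ = np.linalg.lstsq(A, np.log(mixing_index), rcond=None)
logC, a, b, c = coeffs
print(f"mixing_index ~ {np.exp(logC):.3f} * N^{a:.2f} * Q^{b:.2f} * r^{c:.2f}")
```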

Relevance:

30.00%

Publisher:

Abstract:

Active magnetic bearings have recently been intensively developed because noncontact support offers several advantages over conventional bearings. Owing to improved materials, control strategies, and electrical components, the performance and reliability of active magnetic bearings are improving. However, additional bearings, the retainer bearings, still play a vital role in active magnetic bearing applications. The most crucial moment when the retainer bearings are needed is when the rotor drops from the active magnetic bearings onto the retainer bearings due to component or power failure. Without appropriate knowledge of the retainer bearings, there is a risk that a drop-down event will be fatal for an active magnetic bearing supported rotor system. This study introduces a detailed simulation model of a rotor system in order to describe a rotor drop-down situation onto the retainer bearings. The introduced simulation model couples a finite element model with component mode synthesis and detailed bearing models. Electrical components and electromechanical forces are not the focus of this study. The research looks at the theoretical background of the finite element method with component mode synthesis, which can be used in the dynamic analysis of flexible rotors. The retainer bearings are described by using two ball bearing models, which include damping and stiffness properties, oil film, inertia of the rolling elements, and friction between the races and the rolling elements. The first bearing model assumes that the cage of the bearing is ideal and holds the balls precisely in their predefined positions. The second bearing model is an extension of the first and describes the behavior of a cageless bearing. In this bearing model, each ball is described by using two degrees of freedom. The models introduced in this study are verified against a corresponding actual structure. Using the verified bearing models, the effects of the parameters of the rotor system on its dynamics during emergency stops are examined. As shown in this study, the misalignment of the retainer bearings has a significant influence on the behavior of the rotor system in a drop-down situation. A stability map of the rotor system as a function of the rotational speed of the rotor and the misalignment of the retainer bearings is presented. In addition, the effects of the parameters of the simulation procedure and of the rotor system on the dynamics of the system are studied.
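The component mode synthesis step mentioned above can be sketched with a standard Craig-Bampton reduction. The snippet below reduces a small spring-mass chain, purely for illustration; it is not the thesis' rotor model.

```python
# Minimal Craig-Bampton reduction, a standard form of component mode synthesis
# (not the thesis' rotor model). The spring-mass chain below is illustrative only.
import numpy as np
from scipy.linalg import eigh

def craig_bampton(M, K, boundary_dofs, n_modes):
    """Reduce (M, K) to boundary DOFs plus n_modes fixed-interface modes."""
    n = M.shape[0]
    b = np.asarray(boundary_dofs)
    i = np.setdiff1d(np.arange(n), b)

    Kii, Kib = K[np.ix_(i, i)], K[np.ix_(i, b)]
    psi = -np.linalg.solve(Kii, Kib)              # static constraint modes
    _, phi = eigh(Kii, M[np.ix_(i, i)])           # fixed-interface normal modes
    phi = phi[:, :n_modes]

    T = np.zeros((n, len(b) + n_modes))
    T[b, :len(b)] = np.eye(len(b))
    T[i, :len(b)] = psi
    T[i, len(b):] = phi
    return T.T @ M @ T, T.T @ K @ T, T

n = 5                                             # 5-DOF chain, ends kept as boundary
M = np.eye(n)
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
Mr, Kr, T = craig_bampton(M, K, boundary_dofs=[0, n - 1], n_modes=2)
print(Mr.shape, Kr.shape)                         # (4, 4) reduced system
```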

Relevance:

30.00%

Publisher:

Abstract:

In distributed energy production, permanent magnet synchronous generators (PMSG) are often connected to the grid via frequency converters, such as voltage source line converters. The price of the converter may constitute a large part of the cost of a generating set. Some of the permanent magnet synchronous generators with converters, and some traditional separately excited synchronous generators, could be replaced by direct-on-line (DOL) non-controlled PMSGs. Small directly network-connected generators are likely to have large markets in the area of distributed electric energy generation. Typical prime movers could be windmills, watermills and internal combustion engines. DOL PMSGs could also be applied in island networks, such as ships and oil platforms, and in various back-up power generating systems. The benefits would be a lower price of the generating set and the robustness and ease of use of the system. The performance of DOL PMSGs is analyzed. The electricity distribution companies have regulations that constrain the design of generators being connected to the grid; these general guidelines and recommendations are applied in the analysis. By analyzing the results produced by the simulation model for the permanent magnet machine, guidelines for efficient damper winding parameters for DOL PMSGs are presented. The simulation model is used to simulate grid connections and load transients. The damper winding parameters are calculated by the finite element method (FEM) and determined from experimental measurements. Three-dimensional finite element analysis (3D FEA) is carried out. The results from the simulation model and the 3D FEA are compared with practical measurements from two prototype axial flux permanent magnet generators provided with damper windings. The dimensioning of the damper winding parameters is case specific: the damper winding should be dimensioned based on the moment of inertia of the generating set. It is shown that the damper winding has optimal values for reaching synchronous operation in the shortest period of time after a transient. With optimal dimensioning, interference on the grid is minimized.
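A hedged toy model of the synchronization question discussed above is sketched below: a classical swing equation with a damper torque proportional to slip, integrated for a weak and a strong damper. All parameter values are illustrative placeholders, not the prototype generator data.

```python
# Hedged toy model of synchronisation after a disturbance: the classical swing
# equation with a damper torque proportional to slip, integrated for a weak and
# a strong damper. Parameter values are illustrative, not the prototype data.
import numpy as np
from scipy.integrate import solve_ivp

J, T_max, T_m = 2.0, 100.0, 40.0    # inertia [kg m^2], pull-out and prime-mover torque [N m]
w_s = 2 * np.pi * 50 / 2            # synchronous mechanical speed: 50 Hz grid, 2 pole pairs

def swing(t, y, D):
    delta, w = y                    # load angle [rad], rotor speed [rad/s]
    d_delta = w - w_s
    d_w = (T_m - T_max * np.sin(delta) - D * (w - w_s)) / J
    return [d_delta, d_w]

for D in (2.0, 20.0):               # weak vs. strong damper action [N m s/rad]
    sol = solve_ivp(swing, (0.0, 5.0), [0.0, 0.95 * w_s], args=(D,), rtol=1e-8)
    print(f"D = {D:4.1f}: speed error after 5 s = {sol.y[1, -1] - w_s:+.4f} rad/s")
```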

Relevance:

30.00%

Publisher:

Abstract:

This research deals with the dynamic modeling of gas lubricated tilting pad journal bearings provided with spring supported pads, including experimental verification of the computation. On the basis of a mathematical model of a film bearing, a computer program has been developed which can be used for the time-dependent simulation of a special type of tilting pad gas journal bearing supported by a rotary spring under different loading conditions (transient running conditions due to externally imposed geometry variations in time). On the basis of the literature, different transformations have been used in the model to simplify the calculation. The numerical simulation is used to solve the non-stationary case of a gas film. The simulation results were compared with results from the literature for a stationary case (steady running conditions) and were found to agree. In addition, comparisons were made with a number of stationary and non-stationary bearing tests, which were performed at Lappeenranta University of Technology using bearings designed with the simulation program. A study was also made, using numerical simulation and the literature, to establish the influence of the different bearing parameters on the stability of the bearing. Comparisons were made with the literature on tilting pad gas bearings; this bearing type is rarely used, and one literature reference has studied the same bearing type as that used at LUT. A new design of tilting pad gas bearing is introduced. It is based on a stainless steel body and electron beam welding of the bearing parts. It has good operating characteristics, is easier to tune and faster to manufacture than traditional constructions, and is also suitable for large-scale serial production.
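For orientation, gas-film models of this kind typically solve the transient compressible Reynolds equation, whose character is governed by the bearing (compressibility) number and the squeeze number. The sketch below only evaluates these two dimensionless groups for hypothetical bearing data; it is not the simulation program described above.

```python
# Orientation only: such gas-film models solve the transient compressible
# Reynolds equation, whose behaviour is governed by the bearing (compressibility)
# number and the squeeze number. The values below are hypothetical, not the
# LUT test bearing data.
import math

mu = 1.8e-5                  # air viscosity, Pa s
R = 0.025                    # journal radius, m
c = 20e-6                    # radial clearance, m
p_a = 1.0e5                  # ambient pressure, Pa
omega = 2 * math.pi * 500    # rotational speed, rad/s (30 000 rpm)
omega_sq = omega             # squeeze frequency assumed equal to the running speed

bearing_number = 6 * mu * omega * (R / c) ** 2 / p_a
squeeze_number = 12 * mu * omega_sq * (R / c) ** 2 / p_a
print(f"bearing number = {bearing_number:.2f}, squeeze number = {squeeze_number:.2f}")
```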

Relevance:

30.00%

Publisher:

Abstract:

Simulation is a useful tool in cardiac SPECT to assess quantification algorithms. However, simple equation-based models are limited in their ability to simulate realistic heart motion and perfusion. We present a numerical dynamic model of the left ventricle, which allows us to simulate normal and anomalous cardiac cycles, as well as perfusion defects. Bicubic splines were fitted to a number of control points to represent the endocardial and epicardial surfaces of the left ventricle. A transformation from each point on the surface to a template of activity was made to represent the myocardial perfusion. Geometry-based and patient-based simulations were performed to illustrate this model. Geometry-based simulations modeled (1) a normal patient, (2) a well-perfused patient with abnormal regional function, (3) an ischaemic patient with abnormal regional function, and (4) a patient study including tracer kinetics. The patient-based simulation consisted of a left ventricle with a realistic shape and motion obtained from a magnetic resonance study. We conclude that this model has the potential to study the influence of several physical parameters and of left ventricle contraction in myocardial perfusion SPECT and gated-SPECT studies.
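A minimal sketch of the surface-construction step described above: a bicubic spline fitted through a small grid of control points describing a crude ventricle-like radius map. The control values are invented for illustration; the paper's control points and activity template are not reproduced.

```python
# Sketch of the surface-construction step: a bicubic spline through a small grid
# of control points describing a crude ventricle-like radius map. Control values
# are invented; the paper's control points and activity template are not reproduced.
import numpy as np
from scipy.interpolate import RectBivariateSpline

phi = np.linspace(0, 2 * np.pi, 9)        # circumferential control angles
z = np.linspace(0, 1, 6)                  # normalized base-to-apex control levels
radius = 25.0 * np.sqrt(1 - z[None, :] ** 2) + np.outer(np.cos(2 * phi), 0.5 + z)

surface = RectBivariateSpline(phi, z, radius, kx=3, ky=3)   # bicubic surface fit

# evaluate a dense endocardial-like surface for rendering or activity mapping
phi_d, z_d = np.linspace(0, 2 * np.pi, 90), np.linspace(0, 1, 50)
print(surface(phi_d, z_d).shape)          # (90, 50) dense radius map
```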


Relevance:

30.00%

Publisher:

Abstract:

We present molecular dynamics simulations of a simple model for polymer melts with intramolecular barriers. We investigate structural relaxation as a function of the barrier strength. Dynamic correlators can be consistently analyzed within the framework of the mode coupling theory of the glass transition. Control parameters are tuned in order to induce a competition between general packing effects and polymer-specific intramolecular barriers as mechanisms for dynamic arrest. This competition yields unusually large values of the so-called mode coupling theory exponent parameter and rationalizes qualitatively different observations for simple bead-spring and realistic polymers. The systematic study of the effect of intramolecular barriers presented here also establishes a fundamental difference between the nature of the glass transition in polymers and in simple glass formers.
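As a hedged numerical aside on the exponent parameter mentioned above: in mode coupling theory the critical exponents a and b follow from lambda through a known transcendental relation, which the sketch below solves for an arbitrary lambda value (not one taken from the simulations).

```python
# Hedged numerical aside: in mode coupling theory the critical exponents a and b
# follow from the exponent parameter lambda through
#   lambda = Gamma(1-a)^2 / Gamma(1-2a) = Gamma(1+b)^2 / Gamma(1+2b).
# The lambda value below is arbitrary, chosen only to show the calculation.
from math import gamma
from scipy.optimize import brentq

lam = 0.80

a = brentq(lambda a: gamma(1 - a) ** 2 / gamma(1 - 2 * a) - lam, 1e-6, 0.3999)
b = brentq(lambda b: gamma(1 + b) ** 2 / gamma(1 + 2 * b) - lam, 1e-6, 1.0)
print(f"a = {a:.3f}, b = {b:.3f}")
```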

Relevance:

30.00%

Publisher:

Abstract:

This work describes the formation of transformation products (TPs) by the enzymatic degradation, at laboratory scale, of two highly consumed antibiotics: tetracycline (Tc) and erythromycin (ERY). The analysis of the samples was carried out by a fast and simple method based on a novel configuration of an on-line turbulent flow system coupled to a hybrid linear ion trap-high resolution mass spectrometer. The method was optimized and validated for the complete analysis of ERY, Tc and their transformation products within 10 min, without any other sample manipulation. Furthermore, the applicability of the on-line procedure was evaluated for 25 additional antibiotics, covering a wide range of chemical classes, in different environmental waters, with satisfactory quality parameters. The degradation rates obtained for Tc by the laccase enzyme and for ERY by the EreB esterase enzyme, without the presence of mediators, were ∼78% and ∼50%, respectively. Concerning the identification of TPs, three suspected compounds for Tc and five for ERY have been proposed. In the case of Tc, tentative molecular formulas with mass errors within 2 ppm have been assigned based on the hypothesis of dehydroxylation, (bi)demethylation and oxidation of rings A and C as the major reactions. In contrast, the major TP detected for ERY has been identified as the dehydrated ERY-A, which has the same molecular formula as its parent compound. In addition, the evaluation of the antibiotic activity of the samples along the enzymatic treatments showed a decrease of around 100% in both cases.
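The 2 ppm mass-accuracy criterion used above for assigning molecular formulas can be illustrated with a short sketch; the m/z values below are hypothetical examples, not the measured transformation products.

```python
# Simple illustration of the 2 ppm mass-accuracy criterion used when assigning
# molecular formulas to transformation products. The m/z values are hypothetical.

def ppm_error(measured_mz: float, theoretical_mz: float) -> float:
    """Mass error in parts per million."""
    return (measured_mz - theoretical_mz) / theoretical_mz * 1e6

theoretical = 445.1553      # hypothetical [M+H]+ of a candidate formula
measured = 445.1547         # hypothetical measured m/z
err = ppm_error(measured, theoretical)
print(f"{err:+.2f} ppm -> {'accept' if abs(err) <= 2.0 else 'reject'} candidate formula")
```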