932 results for ENERGY-PARTITIONING ANALYSIS
Abstract:
Assessing the total energy expenditure (TEE) and the levels of physical activity in free-living conditions with non-invasive techniques remains a challenge. The purpose of the present study was to investigate the accuracy of a new uniaxial accelerometer for assessing TEE and physical-activity-related energy expenditure (PAEE) over a 24 h period in a respiratory chamber, and to establish activity levels based on the accelerometry ranges corresponding to the operationally defined metabolic equivalent (MET) categories. In study 1, measurement of the 24 h energy expenditure of seventy-nine Japanese subjects (40 (SD 12) years old) was performed in a large respiratory chamber. During the measurements, the subjects wore a uniaxial accelerometer (Lifecorder; Suzuken Co. Ltd, Nagoya, Japan) on their belt. Two moderate walking exercises of 30 min each were performed on a horizontal treadmill. In study 2, ten male subjects walked at six different speeds and ran at three different speeds on a treadmill for 4 min, with the same accelerometer. O2 consumption was measured during the last minute of each stage and was expressed in MET. The measured TEE was 8447 (SD 1337) kJ/d. The accelerometer significantly underestimated TEE and PAEE (91.9 (SD 5.4) and 92.7 (SD 17.8) % of the chamber value respectively); however, there was a significant correlation between the two values (r 0.928 and 0.564 respectively; P<0.001). There was a strong correlation between the activity levels and the measured MET while walking (r^2 0.93; P<0.001). Although TEE and PAEE were systematically underestimated during the 24 h period, the accelerometer assessed energy expenditure well during both the exercise period and the non-structured activities. Individual calibration factors may help to improve the accuracy of TEE estimation, but the average calibration factor for the group is probably sufficient for epidemiological research. This method is also important for assessing the diurnal profile of physical activity.
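The group calibration factor mentioned above amounts to dividing an accelerometer estimate by the mean fractional recovery observed in the calibration sample. A minimal sketch in Python (the 91.9% recovery is the group mean reported above; the example reading is invented for illustration):

```python
def calibrate_tee(accel_tee_kj_per_day, group_recovery=0.919):
    """Rescale an accelerometer TEE estimate by the group calibration factor.

    group_recovery: mean accelerometer/chamber ratio (91.9 % in this study).
    """
    return accel_tee_kj_per_day / group_recovery

# Hypothetical accelerometer reading of 7763 kJ/d:
corrected = calibrate_tee(7763.0)  # close to the measured chamber mean of 8447 kJ/d
```

Individual calibration would replace `group_recovery` with each subject's own accelerometer/chamber ratio.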
Abstract:
The IncP alpha promiscuous plasmid (R18, R68, RK2, RP1 and RP4) comprises 60,099 bp of nucleotide sequence, encoding at least 74 genes. About 40 kb of the genome, designated the IncP core and including all essential replication and transfer functions, can be aligned with equivalent sequences in the IncP beta plasmid R751. The compiled IncP alpha sequence revealed several previously unidentified reading frames that are potential genes. IncP alpha plasmids carry genetic information very efficiently: the coding sequences of the genes are closely packed but rarely overlap, and occupy almost 86% of the genome's nucleotide sequence. All of the 74 genes should be expressed, although there is as yet experimental evidence for expression of only 60 of them. Six examples of tandem in-frame initiation sites specifying two gene products each are known. Two overlapping gene arrangements occupy different reading frames of the same region. Intergenic regions include most of the 25 promoters; transcripts are usually polycistronic. Translation of most of the open reading frames seems to be initiated independently, each from its own ribosomal binding and initiation site, although a few cases of coupled translation have been reported. The most frequently used initiation codon is AUG, but translation of a few open reading frames begins at GUG or UUG. The most common stop codon is UGA, followed by UAA and then UAG. Regulatory circuits are complex and largely dependent on two components of the central control operon. KorA and KorB are transcriptional repressors controlling at least seven operons. KorA and KorB act synergistically in several cases by recognizing and binding to conserved nucleotide sequences. Twelve KorB binding sites were found around the IncP alpha sequence, and these are conserved in R751 (IncP beta) with respect to both sequence and location.
Replication of IncP alpha plasmids requires oriV and the plasmid-encoded initiator protein TrfA in combination with the host-encoded replication machinery. Conjugative plasmid transfer depends on two separate regions occupying about half of the genome. The primary segregational stability system designated Par/Mrs consists of a putative site-specific recombinase, a possible partitioning apparatus and a post-segregational lethality mechanism, all encoded in two divergent operons. Proteins related to the products of F sop and P1 par partitioning genes are separately encoded in the central control operon.
Abstract:
Several definitions of paediatric abdominal obesity have been proposed, but it is unclear whether they lead to similar results. We assessed the prevalence of abdominal obesity using five different waist circumference-based definitions and their agreement with total body fat (TBF) and abdominal fat (AF). Data from 190 girls and 162 boys (Ballabeina), and from 134 girls and 113 boys (Kinder-Sportstudie, KISS) aged 5-11 years were used. TBF was assessed by bioimpedance (Ballabeina) or dual energy X-ray absorptiometry (KISS). Depending on the definition used, the prevalence of abdominal obesity varied between 3.1 and 49.4% in boys and 4.7 and 55.5% in girls (Ballabeina), and between 1.8 and 36.3% in boys and 4.5 and 37.3% in girls (KISS). Among children considered abdominally obese by at least one definition, 32.0% (Ballabeina) and 44.7% (KISS) were considered as such by at least two (out of five possible) definitions. Using excess TBF or AF as reference, the areas under the receiver operating characteristic curve varied between 0.577 and 0.762 (Ballabeina), and 0.583 and 0.818 (KISS). We conclude that current definitions of abdominal obesity in children lead to wide prevalence estimates and should not be used until a standard definition can be proposed.
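The agreement statistic used above, the area under the ROC curve, can be computed directly from the rank-sum identity AUC = P(a random case scores higher than a random control), with ties counted as 1/2. A small stand-alone illustration in Python (all waist-circumference values below are invented, not data from the study):

```python
def auc(cases, controls):
    """Area under the ROC curve via the Mann-Whitney identity:
    the probability that a random case outranks a random control,
    counting ties as one half."""
    wins = 0.0
    for c in cases:
        for k in controls:
            if c > k:
                wins += 1.0
            elif c == k:
                wins += 0.5
    return wins / (len(cases) * len(controls))

# Hypothetical waist circumferences (cm): children with vs without excess body fat
with_excess = [62, 66, 70, 74]
without_excess = [55, 58, 60, 63]
print(auc(with_excess, without_excess))  # 0.9375
```

An AUC near 0.58, as found for the weakest definitions, is barely better than the 0.5 of a random classifier.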
Abstract:
BACKGROUND: This study validates the use of phycoerythrin (PE) and allophycocyanin (APC) for fluorescence resonance energy transfer (FRET) analyzed by flow cytometry. METHODS: FRET was detected when a pair of antibody conjugates directed against two noncompetitive epitopes on the same CD8alpha chain was used. FRET was also detected between antibody conjugate pairs specific for the two chains of the heterodimeric alpha(4)beta(1) integrin. Similarly, the association of the T-cell receptor (TCR) with a soluble antigen ligand was detected by FRET when anti-TCR antibody and MHC class I/peptide complexes ("tetramers") were used. RESULTS: FRET efficiency was always less than 10%, probably because of steric effects associated with the size and structure of PE and APC. Some suggestions are given to take this and other effects (e.g., donor and acceptor concentrations) into account for a better interpretation of FRET results obtained with this pair of fluorochromes. CONCLUSIONS: We conclude that FRET assays can be carried out easily with commercially available antibodies and flow cytometers to study arrays of multimolecular complexes.
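FRET efficiencies such as those discussed above are commonly estimated from donor quenching as E = 1 - F_DA/F_D, where F_DA and F_D are the donor intensities with and without the acceptor present. A minimal sketch (this is the standard donor-quenching estimate, not a formula quoted from the paper; the intensity values are invented):

```python
def fret_efficiency(donor_alone, donor_with_acceptor):
    """FRET efficiency from donor quenching: E = 1 - F_DA / F_D."""
    return 1.0 - donor_with_acceptor / donor_alone

# Hypothetical mean fluorescence intensities:
e = fret_efficiency(1000.0, 930.0)  # about 0.07, consistent with the <10 % reported
```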
Abstract:
The computer simulation of reaction dynamics has nowadays reached a remarkable degree of accuracy. Triatomic elementary reactions can be rigorously studied in great detail using a considerable variety of Quantum Dynamics computational tools available to the scientific community. In our contribution we compare the performance of two quantum scattering codes in the computation of reaction cross sections of a triatomic benchmark reaction, the gas phase reaction Ne + H2+ -> NeH+ + H. The computational codes are selected as representative of time-dependent (Real Wave Packet [ ]) and time-independent (ABC [ ]) methodologies. The main conclusion to be drawn from our study is that both strategies are, to a great extent, not competing but rather complementary. While time-dependent calculations offer advantages with respect to the energy range that can be covered in a single simulation, time-independent approaches offer much more detailed information from each single energy calculation. Further details, such as the calculation of reactivity at very low collision energies or the computational effort required to account for the Coriolis couplings, are analyzed in this paper.
Abstract:
The federal government is aggressively promoting biofuels as an answer to global climate change and dependence on imported sources of energy. Iowa has quickly become a leader in the bioeconomy and wind energy production, but meeting the United States Department of Energy’s goal of having 20% of U.S. transportation fuels come from biologically based sources by 2030 will require a dramatic increase in ethanol and biodiesel production and distribution. At the same time, much of Iowa’s rural transportation infrastructure is near or beyond its original design life. As Iowa’s rural roadway structures, pavements, and unpaved roadways become structurally deficient or functionally obsolete, public sector maintenance and rehabilitation costs rapidly increase. More importantly, costs to move all farm products will rapidly increase if infrastructure components are allowed to fail; longer hauls, slower turnaround times, and smaller loads result. When these results occur on a large scale, Iowa will start to lose its economic competitive edge in the rapidly developing bioeconomy. The primary objective of this study was to document the current physical and fiscal impacts of Iowa’s existing biofuels and wind power industries. A four-county cluster in north-central Iowa and a two-county cluster in southeast Iowa were identified through a local agency survey as having a large number of diverse facilities and were selected for the traffic and physical impact analysis. The research team investigated the large truck traffic patterns on Iowa’s secondary and local roads from 2002 to 2008 and associated those with the pavement condition and county maintenance expenditures. The impacts were quantified to the extent possible and visualized using geographic information system (GIS) tools. In addition, a traffic and fiscal assessment tool was developed to understand the impact of the development of the biofuels industry on Iowa’s secondary road system.
Recommended changes in public policies relating to the local government and to the administration of those policies included standardizing the reporting and format of all county expenditures, conducting regular pavement evaluations on a county’s system, cooperating and communicating with cities (adjacent to a plant site), considering utilization of tax increment financing (TIF) districts as a short-term tool to produce revenues, and considering alternative ways to tax the industry.
Abstract:
The concept of energy gap(s) is useful for understanding the consequence of a small daily, weekly, or monthly positive energy balance and the inconspicuous shift in weight gain ultimately leading to overweight and obesity. Energy gap is a dynamic concept: an initial positive energy gap incurred via an increase in energy intake (or a decrease in physical activity) is not constant, may fade out with time if the initial conditions are maintained, and depends on the 'efficiency' with which the readjustment of the energy imbalance gap occurs with time. The metabolic response to an energy imbalance gap and the magnitude of the energy gap(s) can be estimated by at least two methods: i) assessment by longitudinal overfeeding studies, imposing (by design) an initial positive energy imbalance gap; ii) retrospective assessment based on epidemiological surveys, whereby the accumulated endogenous energy storage per unit of time is calculated from the change in body weight and body composition. In order to illustrate the difficulty of accurately assessing an energy gap we have used, as an illustrative example, a recent epidemiological study which tracked changes in total energy intake (estimated by gross food availability) and body weight over 3 decades in the US, combined with total energy expenditure predicted from body weight using doubly labelled water data. At the population level, the study attempted to assess the cause of the energy gap, which was purported to be entirely due to increased food intake. Based on an estimate of change in energy intake judged to be more reliable (i.e. in the same study population), together with calculations of simple energetic indices, our analysis suggests that conclusions about the fundamental causes of obesity development in a population (excess intake vs. low physical activity, or both) are clouded by a high level of uncertainty.
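The retrospective method (ii) reduces to a back-of-the-envelope computation: the average daily energy gap implied by a weight change is the stored energy divided by the elapsed time. A hedged sketch (the 7700 kcal/kg energy density of gained tissue and the 9.1 kg example are common illustrative assumptions, not values from this abstract):

```python
def energy_gap_kcal_per_day(delta_weight_kg, years,
                            energy_density_kcal_per_kg=7700):
    """Average daily energy gap implied by a body-weight change.

    energy_density_kcal_per_kg: energy content of stored tissue; ~7700 kcal/kg
    is a common assumption for adipose-dominated gain (an assumption here,
    not a value from the study).
    """
    return delta_weight_kg * energy_density_kcal_per_kg / (years * 365.0)

# Hypothetical 9.1 kg mean population gain over 30 years:
gap = energy_gap_kcal_per_day(9.1, 30)  # only a few kcal/day
```

The smallness of such estimates, relative to the uncertainty in intake and expenditure data, is exactly why the abstract argues that causal conclusions are clouded.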
Abstract:
This paper addresses the surprising lack of quality control in the analysis and selection of energy policies observable over the last decades. As an example, we discuss the delusional idea that it is possible to replace fossil energy with large-scale ethanol production from agricultural crops. But if large-scale ethanol production is not practical in energetic terms, why have huge amounts of money been invested in it, and why are they still being invested? In order to answer this question we introduce two concepts useful to frame, in general terms, the predicament of quality control in science: (i) the concept of “granfalloons” proposed by K. Vonnegut (1963), flagging the danger of the formation of “crusades to save the world” void of real meaning; these granfalloons are often used by powerful lobbies to distort policy decisions; and (ii) the concept of Post-Normal science by S. Funtowicz and J. Ravetz (1990), indicating a standard predicament faced by science when producing information for governance: when uncertainty, multiple scales and legitimate but contrasting views mix together, it becomes impossible to deal with complex issues using the conventional scientific approach based on reductionism. We finally discuss the implications of a different approach to the assessment of alternative energy sources by introducing the concept of Promethean technology.
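The claim that large-scale ethanol is "not practical in energetic terms" is usually framed through the energy return on investment (EROI): the ratio of energy delivered to energy spent producing it. A trivial sketch (the numbers below are placeholders for illustration, not measured values from the paper):

```python
def eroi(energy_out, energy_in):
    """Energy return on investment: energy delivered / energy invested."""
    return energy_out / energy_in

# Illustrative only -- placeholder inputs, not measurements.
# An EROI near 1 means the fuel returns barely more energy than it consumes,
# which is the energetic objection raised against crop-based ethanol.
corn_ethanol = eroi(energy_out=1.2, energy_in=1.0)
```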
Abstract:
BACKGROUND: Environmental conditions play a crucial role in mite growth, and optimal environmental control is key in the prevention of airway inflammation in chronic allergic rhinoconjunctivitis or asthma. OBJECTIVE: To evaluate the relationship between building energy performance and indoor mite allergen concentration in a cross-sectional study. METHODS: Major allergen concentrations (Der f 1, Der p 1, mite group 2, Fel d 1 and Bla g 2) were determined by quantitative dot blot analysis from mattress and carpet dust samples in five buildings designed for low energy use (LEB) and in six control buildings (CB). Four weeks prior to the mite measurements, inhabitants had received a validated personal questionnaire on their perceived state of health and comfort of living. RESULTS: Cumulative mite allergen concentration (with Der f 1 as the major contributor) was significantly lower in LEB as compared with CB, both in mattresses and in carpets. In contrast, the two categories of buildings did not differ in Bla g 2 and Fel d 1 concentration, or in the amount of dust and airborne mould collected. Whereas temperature was higher in LEB, relative humidity was significantly lower than in CB. Perceived overall comfort was better in LEB. CONCLUSIONS: Major mite allergen Der f 1 preferentially accumulates in buildings not specifically designed for low energy use, reaching levels at risk for sensitization. We hypothesize that the controlled mechanical ventilation present in all audited LEB may favour lower air humidity and hence lower mite growth and allergen concentration, while preserving optimal perceived comfort.
Abstract:
OBJECTIVE: Critically ill patients are at high risk of malnutrition. Insufficient nutritional support still remains a widespread problem despite guidelines. The aim of this study was to measure the clinical impact of a two-step interdisciplinary quality nutrition program. DESIGN: Prospective interventional study over three periods (A, baseline; B and C, intervention periods). SETTING: Mixed intensive care unit within a university hospital. PATIENTS: Five hundred seventy-two patients (age 59 ± 17 yrs) requiring >72 hrs of intensive care unit treatment. INTERVENTION: Two-step quality program: 1) bottom-up implementation of a feeding guideline; and 2) additional presence of an intensive care unit dietitian. The nutrition protocol was based on the European guidelines. MEASUREMENTS AND MAIN RESULTS: Anthropometric data, intensive care unit severity scores, energy delivery, cumulated energy balance (daily, day 7, and discharge), feeding route (enteral, parenteral, combined, none, oral), length of intensive care unit and hospital stay, and mortality were collected. Altogether 5800 intensive care unit days were analyzed. Patients in period A were healthier, with lower Simplified Acute Physiologic Scale scores and a lower proportion of "rapidly fatal" McCabe scores. Energy delivery and balance increased gradually: the impact was particularly marked on the cumulated energy deficit on day 7, which improved from -5870 kcal to -3950 kcal (p < .001). Feeding technique changed significantly, with a progressive increase in days with nutrition therapy (A: 59% of days, B: 69%, C: 71%, p < .001); use of enteral nutrition increased from A to B (stable in C), and days on combined and parenteral nutrition increased progressively. Oral energy intakes were low (mean: 385 kcal/day, 6 kcal/kg/day). Hospital mortality increased with severity of condition in periods B and C. CONCLUSION: A bottom-up protocol improved nutritional support.
The presence of the intensive care unit dietitian provided significant additional progress, which was related to the early introduction and route of feeding, and which achieved overall better early energy balance.
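The cumulated energy balance used as the main outcome above is simply the running sum of daily delivered minus target energy. A minimal sketch (the 7-day course below is invented for illustration; the study's actual day-7 deficits were -5870 and -3950 kcal):

```python
def cumulative_balance(delivered_kcal, target_kcal):
    """Running energy balance: cumulative sum of (delivered - target) per ICU day."""
    total, series = 0.0, []
    for d, t in zip(delivered_kcal, target_kcal):
        total += d - t
        series.append(total)
    return series

# Hypothetical 7-day course: 1800 kcal/day target, partial delivery early on
delivered = [400, 800, 1200, 1500, 1700, 1800, 1800]
targets = [1800] * 7
print(cumulative_balance(delivered, targets)[-1])  # day-7 cumulated deficit: -3400.0
```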
Abstract:
Networks are evolving toward a ubiquitous model in which heterogeneous devices are interconnected. Cryptographic algorithms are required for developing security solutions that protect network activity. However, the computational and energy limitations of network devices jeopardize the actual implementation of such mechanisms. In this paper, we perform a wide analysis of the expenses of launching symmetric and asymmetric cryptographic algorithms, hash chain functions, elliptic curve cryptography and pairing-based cryptography on personal agendas, and compare them with the costs of basic operating system functions. Results show that although cryptographic power costs are high and such operations shall be restricted in time, they are not the main limiting factor of the autonomy of a device.
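The paper's comparison of cryptographic costs against basic operating-system functions can be mimicked on any machine by timing both kinds of operation. A minimal sketch using Python's standard library (hashing stands in for a cryptographic primitive and a buffer copy for a basic system operation; the paper's own measurements were made on personal agendas, not desktops, so the ratio here is purely illustrative):

```python
import hashlib
import os
import timeit

# Time a SHA-256 hash of a 1 kB buffer against a plain memory copy.
buf = os.urandom(1024)

t_hash = timeit.timeit(lambda: hashlib.sha256(buf).digest(), number=1000)
t_copy = timeit.timeit(lambda: bytes(buf), number=1000)

print(f"hash: {t_hash:.4f}s  copy: {t_copy:.4f}s  ratio: {t_hash / t_copy:.1f}")
```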
Abstract:
Plants are essential for human society, because our daily food, construction materials and sustainable energy are derived from plant biomass. Yet, despite this importance, the multiple developmental aspects of plants are still poorly understood and represent a major challenge for science. With the emergence of high-throughput devices for genome sequencing and high-resolution imaging, data have never been so easy to collect, generating huge amounts of information. Computational analysis is one way to integrate those data and to decrease the apparent complexity towards an appropriate scale of abstraction, with the aim of eventually providing new answers and directing further research perspectives. This is the motivation behind this thesis work, i.e. the application of descriptive and predictive analytics combined with computational modeling to answer problems that revolve around morphogenesis at the subcellular and organ scale. One of the goals of this thesis is to elucidate how the auxin-brassinosteroid phytohormone interaction determines cell growth in the root apical meristem of Arabidopsis thaliana (Arabidopsis), the plant model of reference for molecular studies. The pertinent information about signaling protein relationships was obtained from the literature to reconstruct the entire hormonal crosstalk. Due to a lack of quantitative information, we employed a qualitative modeling formalism. This work allowed us to confirm the synergistic effect of the hormonal crosstalk on cell elongation, to explain some of our paradoxical mutant phenotypes and to predict a novel interaction between the BREVIS RADIX (BRX) protein and the transcription factor MONOPTEROS (MP), which turned out to be critical for the maintenance of the root meristem. On the same subcellular scale, another study in the monocot model Brachypodium distachyon (Brachypodium) revealed an alternative wiring of auxin-ethylene crosstalk as compared to Arabidopsis.
In the latter, increasing interference with auxin biosynthesis results in progressively shorter roots. By contrast, a hypomorphic Brachypodium mutant isolated in this study in an enzyme of the auxin biosynthesis pathway displayed a dramatically longer seminal root. Our morphometric analysis confirmed that more anisotropic cells (thinner and longer) are principally responsible for the mutant root phenotype. Further characterization pointed towards an inverted regulatory logic in the relation between ethylene signaling and auxin biosynthesis in Brachypodium as compared to Arabidopsis, which explains the phenotypic discrepancy. Finally, the morphometric analysis of hypocotyl secondary growth that we applied in this study was performed with the image-processing pipeline of our quantitative histology method. During its secondary growth, the hypocotyl reorganizes its primary bilateral symmetry to a radial symmetry of highly specialized tissues comprising several thousand cells, starting from a few dozen. However, such a scale only permits observations in thin cross-sections, severely hampering a comprehensive analysis of the morphodynamics involved. Our quantitative histology strategy overcomes this limitation. We acquired hypocotyl cross-sections from tiled high-resolution images and extracted their information content using custom high-throughput image processing and segmentation. Coupled with an automated cell-type recognition algorithm, this allows precise quantitative characterization of vascular development and reveals developmental patterns that were not evident from visual inspection, for example the steady interspace distance of the phloem poles. Further analyses indicated a change in growth anisotropy of cambial and phloem cells, which appeared in phase with the expansion of the xylem.
Combining genetic tools and computational modeling, we showed that the reorientation of the growth anisotropy axis of peripheral tissue layers occurs only when the growth rate of the central tissue is higher than that of the periphery. This was confirmed by the calculation of the ratio of xylem to phloem growth rates throughout secondary growth: high ratios are indeed observed, concomitant with the homogenization of cambium anisotropy. These results suggest a self-organization mechanism, promoted by a gradient of division in the cambium that generates a pattern of mechanical stresses. This, in turn, reorients the growth anisotropy of peripheral tissues to sustain secondary growth.
Abstract:
Centrifugal compressors are widely used, for example, in the process industry, the oil and gas industry, and in small gas turbines and turbochargers. In order to achieve lower energy consumption and operation costs, the efficiency of the compressor needs to be improved. In the present work, different pinches and low solidity vaned diffusers were utilized in order to improve the efficiency of a medium-size centrifugal compressor. In this study, pinch means the decrement of the diffuser flow passage height. First, different geometries were analyzed using computational fluid dynamics. The Navier-Stokes flow solver Finflo was used to solve the flow field; the solver is capable of solving compressible, incompressible, steady and unsteady flow fields. Chien's k-epsilon turbulence model was used. One of the numerically investigated pinched diffusers and one low solidity vaned diffuser were studied experimentally. The overall performance of the compressor and the static pressure distribution before and after the diffuser were measured. The flow entering and leaving the diffuser was measured using a three-hole Cobra probe and Kiel probes. The pinch and the low solidity vaned diffuser increased the efficiency of the compressor. The highest isentropic efficiency increment obtained was 3% of the design isentropic efficiency of the original geometry. The numerical results showed that a pinch made to both the hub and the shroud wall was most beneficial to the operation of the compressor, and that a pinch made to the hub was better than a pinch made to the shroud. The pinch did not affect the operation range of the compressor, but the low solidity vaned diffuser slightly decreased the operation range. The unsteady phenomena in the vaneless diffuser were studied experimentally and numerically. The unsteady static pressure was measured at the diffuser inlet and outlet, and time-accurate numerical simulation was conducted.
The unsteady static pressure showed that most of the pressure variations lay at the passing frequency of every second blade. The pressure variations did not vanish in the diffuser and were still visible at the diffuser outlet; however, their amplitude decreased in the diffuser. The time-accurate calculations showed quite good agreement with the measured data. Agreement was very good at the design operation point, even though the computational grid was not dense enough in the volute and in the exit cone. The time-accurate calculation over-predicted the amplitude of the pressure variations at high flow.
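The observation that most pressure variation lies at the passing frequency of every second blade is the kind of result a discrete Fourier transform of the unsteady pressure signal yields directly. A synthetic illustration with NumPy (blade count, shaft speed and amplitudes are invented for the sketch, not the study's values):

```python
import numpy as np

fs = 100_000            # sampling rate, Hz (invented)
rev = 300.0             # shaft speed, rev/s (invented)
f_half_bpf = 9 * rev    # passing frequency of every second of 18 blades: 2700 Hz

t = np.arange(0, 0.1, 1 / fs)
rng = np.random.default_rng(0)
# Synthetic pressure trace dominated by the half-blade-passing frequency:
pressure = (1.0 * np.sin(2 * np.pi * f_half_bpf * t)
            + 0.2 * np.sin(2 * np.pi * 2 * f_half_bpf * t)
            + 0.05 * rng.standard_normal(t.size))

spectrum = np.abs(np.fft.rfft(pressure))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
peak = freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin
print(peak)  # dominant component near 2700 Hz
```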
Abstract:
Performance optimization of a complex computer system requires an understanding of the system's runtime behaviour. As software grows in size and complexity, performance optimization becomes an increasingly important part of the product development process. With the use of more powerful processors, energy consumption and heat generation have also become ever greater problems, particularly in small, portable devices. To limit heat and energy problems, performance-scaling methods have been developed, which further increase system complexity and the need for performance optimization. In this work, a visualization and analysis tool was developed to make runtime behaviour easier to understand. In addition, a performance metric was developed that allows different scaling methods to be compared and evaluated independently of the execution environment, based either on an execution trace or on theoretical analysis. The tool presents a trace collected at runtime in an easily understandable way. It displays, among other things, the processes, the processor load, the operation of the scaling methods, and the energy consumption, using three-dimensional graphics. The tool also produces numerical data from a user-selected portion of the execution view, including several relevant performance values and statistics. The applicability of the tool was examined by analysing an execution trace obtained from a real device and a simulation of performance scaling. The effect of the scaling mechanism's parameters on the performance of the simulated device was analysed.
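A minimal sketch of the kind of trace analysis such a tool performs: given intervals during which the processor was busy, the load over a user-selected window is the busy time overlapping the window divided by the window length. All interval values below are invented for illustration:

```python
def cpu_load(busy_intervals, window_start, window_end):
    """Fraction of the window during which the processor was busy.

    busy_intervals: list of (start, end) times from an execution trace.
    """
    busy = 0.0
    for s, e in busy_intervals:
        overlap = min(e, window_end) - max(s, window_start)
        if overlap > 0:
            busy += overlap
    return busy / (window_end - window_start)

intervals = [(0.0, 0.2), (0.5, 0.9), (1.4, 1.6)]
print(cpu_load(intervals, 0.0, 1.0))  # roughly 0.6 (60 % load in the window)
```

The same windowed aggregation extends naturally to energy: replacing each interval with an (interval, power) pair and summing power times overlap yields energy consumed per window.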