932 results for Dynamic performance
Abstract:
Thermochromic windows are able to modulate their transmittance in both the visible and the near-infrared range as a function of their temperature. As a consequence, they make it possible to control solar gains in summer, thus reducing the energy needs for space cooling. However, they may also reduce daylight availability, which increases the energy consumption for indoor artificial lighting. This paper investigates, by means of dynamic simulations, the application of thermochromic windows to an existing office building in terms of energy savings on an annual basis, while also focusing on the effects on daylighting and thermal comfort. In particular, due attention is paid to daylight availability, described through illuminance maps and by the calculation of the daylight factor, which in several countries is subject to regulatory thresholds. The study considers both a commercially available thermochromic pane and a series of theoretical thermochromic glazings. The expected performance is compared to static clear and reflective insulating glass units. The simulations are repeated in different climatic conditions, showing that the overall energy savings compared to clear glazing can range from around 5% in cold climates to around 20% in warm climates, without compromising daylight availability. Moreover, the role played by the transition temperature of the pane is examined, pointing out an optimal transition temperature that is irrespective of the climatic conditions.
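As a minimal illustration of the switching behaviour the abstract describes, the sketch below models solar transmittance as a function of pane temperature. All numbers (transition temperature, clear/tinted transmittances, transition width) are illustrative assumptions, not values from the paper.

import math

def solar_transmittance(T_pane, T_t=30.0, tau_clear=0.60,
                        tau_tinted=0.25, width=2.0):
    """Solar transmittance of a thermochromic pane vs. temperature (degC).
    Hypothetical parameters: the pane darkens around T_t, smoothed
    with a logistic transition of the given width."""
    s = 1.0 / (1.0 + math.exp(-(T_pane - T_t) / width))  # 0 = clear, 1 = tinted
    return tau_clear + s * (tau_tinted - tau_clear)

for T in (20, 30, 40):  # below, at and above T_t
    print(T, round(solar_transmittance(T), 3))

In such a model, the choice of T_t decides how early the pane darkens, which is exactly the trade-off between cooling savings and daylight availability that the paper optimizes.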
Abstract:
Cool materials are characterized by a high solar reflectance r, which reduces heat gains during daytime, and a high thermal emissivity ε, which enables them to dissipate at night the heat absorbed throughout the day. Although the concept of cool roofs, i.e. the application of cool materials to roof surfaces, has been well known in the US since the 1990s, and many studies have focused on their performance in both the residential and commercial sectors under various US climatic conditions, only a few case studies have been analyzed in EU countries. The present work aims at analyzing the thermal benefits of their application to existing office buildings located in EU countries. Indeed, given their share of the existing building stock, as well as the very low rate of new building construction, the retrofit of office buildings is a topic of great concern worldwide. After an in-depth characterization of the existing building stock in the EU, the book gives an insight into the roof energy balance under different technological solutions, showing in which cases and to what extent cool roofs are preferable. A detailed description of the physical properties of cool materials and their availability on the market provides a solid background for the parametric analysis, carried out by means of detailed numerical models, that aims at evaluating cool roof performance for various climates and office building configurations. With the help of dynamic simulations, the thermal behavior of office buildings representative of the existing EU building stock is assessed in terms of thermal comfort and energy needs for air conditioning. The results, which consider several variations of building features that may affect the resulting energy balance, show that cool roofs are an effective strategy for reducing overheating occurrences and thus improving thermal comfort in any climate. On the other hand, potential heating penalties due to a reduction in the incoming heat fluxes through the roof are taken into account, as well as the aging process of cool materials. Finally, an economic analysis of the best performing configurations shows the boundaries of their economic convenience.
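The claim that high r and high ε both reduce the roof heat gain can be made concrete with the textbook steady-state energy balance of an opaque roof surface (a generic sketch, not the book's own model):

\[
(1 - r)\, I_{\mathrm{sol}} \;=\; h_c \,\bigl(T_s - T_{\mathrm{air}}\bigr) \;+\; \varepsilon\,\sigma\,\bigl(T_s^{4} - T_{\mathrm{sky}}^{4}\bigr) \;+\; q_{\mathrm{cond}}
\]

Raising r lowers the absorbed solar term on the left, while raising ε enlarges the long-wave emission term, so both drive the surface temperature T_s, and hence the heat conducted indoors q_cond, downwards.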
Abstract:
This paper presents a new technique and two algorithms to bulk-load data into multi-way dynamic metric access methods, based on the covering radius of the representative elements employed to organize data in hierarchical data structures. The proposed algorithms are sample-based, and they always build a valid and height-balanced tree. We compare the proposed algorithms with existing ones, showing their behavior when bulk-loading data into the Slim-tree metric access method. After having identified the worst case of our first algorithm, we describe adequate countermeasures, deriving the second algorithm in an elegant way. Experiments performed to evaluate their performance show that our bulk-loading methods build trees faster than sequential insertion, and that they also significantly improve search performance.
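A minimal sketch of the sample-based idea is given below; it is not the paper's algorithm (in particular, this naive recursion does not guarantee the height balance the paper's methods do), but it shows how representatives drawn from a sample partition the data and how covering radii are maintained.

import random

class Node:
    def __init__(self, rep, children=None, objects=None):
        self.rep = rep                  # representative element
        self.children = children or []  # inner-node entries
        self.objects = objects or []    # leaf entries
        self.radius = 0.0               # covering radius of the subtree

def bulk_load(objects, dist, fanout=4, leaf_size=8):
    if len(objects) <= leaf_size:
        node = Node(objects[0], objects=objects)
        node.radius = max(dist(node.rep, o) for o in objects)
        return node
    reps = random.sample(objects, fanout)  # sample-based choice of representatives
    groups = {i: [] for i in range(fanout)}
    for o in objects:                      # nearest-representative assignment
        i = min(range(fanout), key=lambda k: dist(reps[k], o))
        groups[i].append(o)
    children = [bulk_load(g, dist, fanout, leaf_size)
                for g in groups.values() if g]
    node = Node(reps[0], children=children)
    # upper bound on the covering radius via the triangle inequality
    node.radius = max(dist(node.rep, c.rep) + c.radius for c in children)
    return node

# toy usage with 1-D points and the absolute-difference metric
root = bulk_load(list(range(200)), dist=lambda a, b: abs(a - b))
print(root.radius)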
Abstract:
Assessing routing protocols for mobile wireless networks is a difficult task because of the networks' dynamic behavior and the absence of benchmarks. However, some of these networks, such as intermittent wireless sensor networks, periodic or cyclic networks, and some delay-tolerant networks (DTNs), have more predictable dynamics, as the temporal variations in the network topology can be considered deterministic, which may make them easier to study. Recently, a graph-theoretic model, the evolving graph, was proposed to help capture the dynamic behavior of such networks, in view of the construction of least-cost routing and other algorithms. The algorithms and insights obtained through this model are theoretically very efficient and intriguing. However, there has been no study of the use of these theoretical results in practical situations. Therefore, the objective of our work is to analyze the applicability of evolving graph theory to the construction of efficient routing protocols in realistic scenarios. In this paper, we use the NS2 network simulator to first implement an evolving-graph-based routing protocol, and then use it as a benchmark when comparing the four major ad hoc routing protocols (AODV, DSR, OLSR and DSDV). Interestingly, our experiments show that evolving graphs have the potential to be an effective and powerful tool in the development and analysis of algorithms for dynamic networks, at least those with predictable dynamics. In order to make this model widely applicable, however, some practical issues, such as adaptive algorithms, still have to be addressed and incorporated into the model. We also discuss such issues in this paper, as a result of our experience.
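As a sketch of the underlying idea (assumed semantics, not the authors' implementation), a foremost journey, i.e. the earliest-arrival route in an evolving graph, can be computed with a Dijkstra-like traversal over time-labelled edges:

import heapq

def foremost_journey(adj, src, dst, t0=0):
    """adj maps u -> [(v, sorted times at which edge (u, v) exists)].
    Returns the earliest arrival time at dst, or None if unreachable.
    Traversing an edge is assumed to take one time step."""
    best = {src: t0}
    heap = [(t0, src)]
    while heap:
        t, u = heapq.heappop(heap)
        if u == dst:
            return t
        if t > best.get(u, float("inf")):
            continue  # stale queue entry
        for v, times in adj.get(u, []):
            nxt = next((s for s in times if s >= t), None)  # wait until the edge exists
            if nxt is not None and nxt + 1 < best.get(v, float("inf")):
                best[v] = nxt + 1
                heapq.heappush(heap, (nxt + 1, v))
    return None

# (a, b) exists at times {0, 5}; (b, c) only at time 3: c is reached at t = 4
print(foremost_journey({"a": [("b", [0, 5])], "b": [("c", [3])]}, "a", "c"))

A protocol built on this principle has obvious benchmark value: with the topology schedule known in advance, it yields the best achievable routes against which AODV, DSR, OLSR and DSDV can be compared.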
Abstract:
High-performance liquid chromatography (HPLC) conditions are described for the separation of 2,4-dinitrophenylhydrazone (2,4-DNPH) derivatives of carbonyl compounds on a 10 cm long C-18 reversed-phase monolithic column. Using a linear gradient from 40 to 77% acetonitrile (acetonitrile-water system), the separation was achieved in about 10 min, a time significantly shorter than that obtained with a packed-particle column. The method was applied to the determination of formaldehyde and acetaldehyde in Brazilian sugar cane spirits. The linear dynamic range was between 30 and 600 μg L-1, and the detection limits were 8 and 4 μg L-1 for formaldehyde and acetaldehyde, respectively.
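For readers unfamiliar with how such figures are derived, the snippet below runs a linear calibration and a 3σ/slope detection-limit estimate, a common convention in chromatographic work; the standard concentrations and peak areas are invented for illustration and are not the paper's data.

import numpy as np

conc = np.array([30, 100, 200, 400, 600])     # standards, ug/L (hypothetical)
area = np.array([1.1, 3.6, 7.3, 14.4, 21.8])  # detector response, a.u. (hypothetical)

slope, intercept = np.polyfit(conc, area, 1)  # least-squares calibration line
resid = area - (slope * conc + intercept)
sigma = resid.std(ddof=2)                     # residual standard deviation of the fit
lod = 3 * sigma / slope                       # 3*sigma/slope detection limit, ug/L
print(f"slope = {slope:.4f}, LOD ~ {lod:.1f} ug/L")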
Abstract:
Genetic algorithms are commonly used to solve combinatorial optimization problems. The implementation evolves using genetic operators (crossover, mutation, selection, etc.). However, genetic algorithms, like some other methods, have parameters (population size, probabilities of crossover and mutation) which need to be tuned or chosen. In this paper, our project builds on an existing hybrid genetic algorithm for the multiprocessor scheduling problem. We propose a hybrid fuzzy-genetic algorithm (FLGA) approach to solve the multiprocessor scheduling problem. The algorithm consists in adding a fuzzy logic controller to control and dynamically tune different parameters (probabilities of crossover and mutation), in an attempt to improve the algorithm's performance. For this purpose, we design a fuzzy logic controller based on fuzzy rules to control the probabilities of crossover and mutation. Compared with the standard genetic algorithm (SGA), the results clearly demonstrate that the FLGA method performs significantly better.
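A minimal sketch of such a controller (assumed rules, not the paper's actual rule base): each generation, population diversity and recent fitness improvement are fuzzified, and a stalled or inbred population fires a rule that raises the mutation probability and lowers the crossover probability.

def fuzzy_tune(diversity, improvement, pc=0.8, pm=0.05):
    """diversity and improvement are normalized to [0, 1].
    Returns updated (crossover, mutation) probabilities."""
    low_div = max(0.0, 1.0 - 2.0 * diversity)    # membership of "low diversity"
    low_imp = max(0.0, 1.0 - 2.0 * improvement)  # membership of "stalled progress"
    fire = max(low_div, low_imp)                 # rule: IF diversity low OR stalled
    pm = pm + fire * (0.25 - pm)                 # THEN raise mutation toward 0.25
    pc = pc - fire * (pc - 0.60)                 # AND lower crossover toward 0.60
    return pc, pm

print(fuzzy_tune(diversity=0.1, improvement=0.0))  # stalled, inbred population

The call would sit inside the generation loop of the genetic algorithm, so pc and pm track the population state instead of staying fixed as in the SGA.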
Abstract:
Objective: To design, develop and set up a web-based system enabling graphical visualization of the upper limb motor performance (ULMP) of Parkinson's disease (PD) patients for clinicians.

Background: Sixty-five patients diagnosed with advanced PD used a test battery, implemented in a touch-screen handheld computer, in their home environment over the course of a 3-year clinical study. The test items consisted of objective measures of ULMP through a set of upper limb motor tests (finger tapping and spiral drawing). For the tapping tests, patients were asked to perform alternate tapping of two buttons as fast and accurately as possible, first using the right hand and then the left hand. The test duration was 20 seconds. For the spiral drawing test, patients traced a pre-drawn Archimedes spiral using the dominant hand, and the test was repeated 3 times per test occasion. In total, the study database consisted of symptom assessments from 10079 test occasions.

Methods: Visualization of ULMP. The web-based system is used by two neurologists to assess the performance of PD patients during the motor tests collected over the course of the study. The system employs animations, scatter plots and time-series graphs to visualize the ULMP of patients for the neurologists. Performance during spiral tests is depicted by animating the three spiral drawings, allowing the neurologists to observe, in real time, accelerations, hesitations and sharp changes during the actual drawing process. Tapping performance is visualized by displaying different types of graphs. The information presented includes the distribution of taps over the two buttons, horizontal tap distance vs. time, vertical tap distance vs. time, and tapping reaction time over the test length.

Assessments: Different scales are utilized by the neurologists to assess the observed impairments. For the spiral drawing performance, the neurologists rated, firstly, the 'impairment' on a 0 (no impairment) to 10 (extremely severe) scale; secondly, three kinematic properties, 'drawing speed', 'irregularity' and 'hesitation', on a 0 (normal) to 4 (extremely severe) scale; and thirdly, the probable 'cause' of the impairment, with 3 choices: tremor, bradykinesia/rigidity and dyskinesia. For the tapping performance, a 0 (normal) to 4 (extremely severe) scale is used, first to rate four tapping properties, 'tapping speed', 'accuracy', 'fatigue' and 'arrhythmia', and then the 'global tapping severity' (GTS). To achieve a common basis for assessment, one neurologist (DN) initially performed preliminary ratings by browsing through the database to collect and rate at least 20 samples of each GTS level and at least 33 samples of each 'cause' category. These preliminary ratings were then reviewed by the two neurologists (DN and PG) and used as templates when rating subsequent tests. In a separate track, the system randomly selected one test occasion per patient and visualized its items, that is, the tapping and spiral drawings, to the two neurologists.

Statistical methods: Inter-rater agreement was assessed using the weighted Kappa coefficient. The internal consistency of the properties of the tapping and spiral drawing tests was assessed using Cronbach's α. A one-way ANOVA followed by Tukey's multiple comparisons test was used to test whether the mean scores of the properties of the tapping and spiral drawing tests differed among GTS and 'cause' categories, respectively.
Results: When rating tapping graphs, inter-rater agreement (Kappa) was as follows: GTS (0.61), 'tapping speed' (0.89), 'accuracy' (0.66), 'fatigue' (0.57) and 'arrhythmia' (0.33). The poor inter-rater agreement when assessing 'arrhythmia' may be the result of the two raters attending to different features of the graphs. When rating animated spirals, the raters showed very good agreement when assessing the severity of spiral drawings, that is, 'impairment' (0.85) and 'irregularity' (0.72). However, there was poor agreement between the two raters when assessing 'cause' (0.38) and time-information properties like 'drawing speed' (0.25) and 'hesitation' (0.21). The tapping properties, that is, 'tapping speed', 'accuracy', 'fatigue' and 'arrhythmia', had satisfactory internal consistency, with a Cronbach's α coefficient of 0.77. In general, the mean scores of the tapping properties worsened with increasing levels of GTS. The mean scores of the four properties were significantly different from each other, although only at some levels. In contrast to the tapping properties, the kinematic properties of the spirals, that is, 'drawing speed', 'irregularity' and 'hesitation', had questionable internal consistency, with a coefficient of 0.66. Bradykinetic spirals were associated with more impaired speed (mean = 83.7% worse, P < 0.001) and hesitation (mean = 77.8% worse, P < 0.001) than dyskinetic spirals. Both of these 'cause' categories had similar mean scores for 'impairment' and 'irregularity'.

Conclusions: In contrast to current approaches used in the clinical setting for the assessment of PD symptoms, this system enables clinicians to easily and realistically animate the ULMP of patients while the patients remain at home. Dynamic access to visualized motor tests may also be useful when observing and evaluating therapy-related complications such as under- and over-medication. In the future, we plan to utilize these manual ratings to develop and validate computer methods that automate the assessment of the ULMP of PD patients.
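The agreement statistic reported above is the weighted Kappa; for concreteness, a small self-contained implementation with linear weights is sketched below (the example ratings are invented, not study data).

import numpy as np

def weighted_kappa(r1, r2, n_levels):
    """Linearly weighted Cohen's kappa for two raters on an ordinal scale."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    levels = np.arange(n_levels)
    w = 1 - np.abs(np.subtract.outer(levels, levels)) / (n_levels - 1)
    obs = np.zeros((n_levels, n_levels))
    for a, b in zip(r1, r2):
        obs[a, b] += 1
    obs /= obs.sum()                        # observed joint distribution
    exp = np.outer(obs.sum(1), obs.sum(0))  # expected agreement under independence
    return ((w * obs).sum() - (w * exp).sum()) / (1 - (w * exp).sum())

# two raters scoring five tests on the 0-4 GTS scale (hypothetical)
print(round(weighted_kappa([0, 1, 2, 4, 3], [0, 1, 3, 4, 3], 5), 2))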
Abstract:
Objective: For the evaluation of the energy performance of combined renewable heating systems that supply space heating and domestic hot water for single-family houses, the dynamic behaviour, component interactions, and control of the system play a crucial role and should be included in test methods. Methods: New dynamic whole-system test methods were developed based on “hardware in the loop” concepts. Three similar approaches are described and their differences discussed. The methods were applied to test solar thermal systems in combination with fossil fuel boilers (heating oil and natural gas), biomass boilers, and/or heat pumps. Results: All three methods were able to show the performance of combined heating systems under transient operating conditions. The methods often detected unexpected behaviour of the tested system that cannot be detected by the steady-state performance tests usually applied to single components. Conclusion: Further work will be needed to harmonize the different test methods in order to reach comparable results between laboratories. Practice implications: A harmonized approach to whole-system tests may lead to new test standards and improve the accuracy of performance prediction, as well as reduce the need for field tests.
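To make the “hardware in the loop” concept concrete, here is a schematic test cycle. The BuildingModel and Bench classes are hypothetical stand-ins invented for this sketch, not any laboratory's actual rig: the building and climate are emulated in software, the real heating unit sits on a test bench, and each time step couples the two.

class BuildingModel:
    """Toy emulated single-family house (hypothetical dynamics)."""
    def __init__(self):
        self.temp = 18.0                           # indoor temperature, degC
    def heat_demand(self):
        return max(0.0, (20.0 - self.temp) * 2.0)  # kW needed to approach 20 degC
    def advance(self, supplied_kw, dt_s):
        self.temp += (supplied_kw - 1.0) * dt_s / 3600.0  # 1 kW standing losses

class Bench:
    """Stand-in for the physical heating unit on the test bench."""
    def impose_load(self, kw):
        self._kw = kw
    def measure_heat_output(self):
        return 0.95 * self._kw                     # hardware under-delivers by 5%

def hil_test(model, bench, hours=6, dt_s=60):
    energy_kwh = 0.0
    for _ in range(int(hours * 3600 / dt_s)):
        demand = model.heat_demand()               # load from the simulation
        bench.impose_load(demand)                  # emulated on the hardware
        supplied = bench.measure_heat_output()     # measured response
        model.advance(supplied, dt_s)              # fed back into the model
        energy_kwh += supplied * dt_s / 3600.0
    return energy_kwh

print(round(hil_test(BuildingModel(), Bench()), 2))  # delivered energy, kWh

It is precisely this closed loop that exposes transient behaviour (start-ups, control hunting, component interactions) that steady-state single-component tests cannot show.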
Abstract:
Recent studies have shown that the optical properties of building exterior surfaces are important in terms of energy use and thermal comfort. While the majority of studies concern exterior surfaces, the radiation properties of interior surfaces are less thoroughly investigated. Developments in the coil-coating industry now make it possible to assign different optical properties to the exterior and interior surfaces of steel-clad buildings. The aim of this thesis is to investigate the influence of surface radiation properties, with a focus on the thermal emittance of interior surfaces, the modeling approaches, and their consequences for building energy performance and the indoor thermal environment.

The study consists of both numerical and experimental investigations. The experimental investigations include parallel field measurements on three similar test cabins with different interior and exterior surface radiation properties in Borlänge, Sweden, and on two ice rink arenas with normal- and low-emittance ceilings in Luleå, Sweden. The numerical methods include comparative simulations using dynamic heat flux models, Building Energy Simulation (BES), Computational Fluid Dynamics (CFD), and a coupled BES-CFD model. Several parametric studies and thermal performance analyses were carried out in combination with the different numerical methods.

The parallel field measurements on the test cabins include air, surface and radiation temperatures and energy use during passive and active (heating and cooling) measurements. Both the measurements and the comparative simulations indicate an improvement in the indoor thermal environment when the interior surfaces have low emittance. In the ice rink arenas, surface and radiation temperature measurements indicate a considerable reduction in ceiling-to-ice radiation through the use of low-emittance surfaces, in agreement with a ceiling-to-ice radiation model using schematic dynamic heat flux calculations. The measurements in the test cabins indicate that low-emittance surfaces can increase the vertical indoor air temperature gradients, depending on the time of day and outdoor conditions. This agrees with transient CFD simulations with the boundary conditions assigned at the exterior surfaces. Sensitivity analyses were performed under different outdoor conditions and surface thermal radiation properties. The spatially resolved simulations indicate an increase in the air and surface temperature gradients with low-emittance coatings, which can allow a lower air temperature in the occupied zone during summer.

The combined effect of interior and exterior reflective coatings on energy use was investigated using building energy simulation for different climates and internal heat loads. The results indicate possible energy savings through a smart choice of optical properties for the interior and exterior surfaces of a building. Overall, it is concluded that interior reflective coatings can contribute to building energy savings and to an improvement of the indoor thermal environment, and that this can be investigated numerically by choosing appropriate models with respect to the level of detail and computational load. This thesis includes comparative simulations at different levels of detail.
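The ceiling-to-ice result has a compact textbook explanation. For two large parallel grey surfaces, the net radiative exchange is

\[
q_{\mathrm{rad}} \;=\; \frac{\sigma\bigl(T_{\mathrm{ceiling}}^{4} - T_{\mathrm{ice}}^{4}\bigr)}{\dfrac{1}{\varepsilon_{\mathrm{ceiling}}} + \dfrac{1}{\varepsilon_{\mathrm{ice}}} - 1}
\]

so with ice at ε ≈ 0.95, lowering the ceiling emittance from 0.9 to 0.1 grows the denominator from about 1.16 to about 10.1, cutting the radiative load on the ice by roughly a factor of nine under this idealized parallel-plate model.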
Abstract:
This thesis presents DCE, or Dynamic Conditional Execution, as an alternative to reduce the cost of mispredicted branches. The basic idea is to fetch all paths produced by a branch that obey certain restrictions regarding complexity and size. As a result, fewer predictions are performed and, therefore, fewer branches are mispredicted. DCE fetches through selected branches, avoiding disruptions in the fetch flow when these branches are fetched. Both paths of a selected branch are executed, but only the correct path commits. In this thesis we propose an architecture to execute multiple paths of selected branches. Branches are selected based on their size and other conditions. Simple and complex branches can be dynamically predicated without requiring a special instruction set or special compiler optimizations. Furthermore, a technique to reduce part of the overhead generated by the execution of multiple paths is proposed. Speedups of up to 12% are achieved when comparing a local predictor used in DCE against a global predictor used in the reference machine; when both machines use a local predictor, the speedup averages 3-3.5%.
Abstract:
Dynamic Conditional Execution (DCE) is an alternative for reducing the cost of mispredicted branches. The basic idea is to fetch all paths produced by a branch that obey certain restrictions regarding complexity and size. As a consequence, fewer predictions are performed and thus fewer branches are mispredicted. However, like other multi-path solutions, DCE requires a more complex control structure. In the DCE architecture, it is observed that several replicas of the same instruction are dispatched to the functional units, blocking resources that could be used by other instructions. These replicas are generated after the convergence point of the paths in execution and are necessary to guarantee correct semantics among data-dependent instructions. Moreover, DCE keeps producing replicas until the branch that generated the paths is resolved. Thus, an entire section of code may be replicated, reducing performance. A natural alternative to this problem is to reuse the replicated sections (or traces). The goal of this work is to analyze and evaluate the effectiveness of value reuse in the DCE architecture. As will be shown, the reuse principle, at different granularities, can effectively reduce the replica problem and lead to performance gains.
Abstract:
We study the effects of a conditional transfer program on school enrollment and performance in Mexico. We provide a theoretical framework for analyzing the dynamic educational decision process, including the endogeneity and uncertainty of performance (passing grades) and the effect of a conditional cash transfer program for children enrolled at school. Careful identification of the program's impact in this model is studied. This framework is used to study the Mexican social program Progresa, in which a randomized experiment has been implemented, allowing us to identify the effect of the conditional cash transfer program on enrollment and performance at school. Using the rules of the conditional program, we can explain the different incentive effects it provides. We also derive the formal identifying assumptions needed to provide consistent estimates of the average treatment effects on enrollment and performance at school. We estimate these effects empirically and find that Progresa always had a positive impact on school continuation, whereas for performance it had a positive impact at primary school but a negative one at secondary school, a possible consequence of disincentives due to the program's termination after the third year of secondary school.
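The identification logic leans on a standard property of randomized experiments (a textbook identity, not the paper's full structural model): random assignment makes potential outcomes independent of treatment status, so the average treatment effect is estimable from a simple difference of group means,

\[
\tau_{\mathrm{ATE}} \;=\; E\bigl[Y_i(1) - Y_i(0)\bigr] \;=\; E\bigl[Y_i \mid D_i = 1\bigr] - E\bigl[Y_i \mid D_i = 0\bigr],
\]

where D_i indicates assignment to the program and Y_i is enrollment or performance; the additional assumptions the paper derives address the cases where performance is endogenous and uncertain.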
Abstract:
My dissertation focuses on dynamic aspects of coordination processes such as the reversibility of early actions, the option to delay decisions, and learning about the environment from the observation of other people's actions. This study proposes the use of tractable dynamic global games in which players privately and passively learn about their actions' true payoffs and are able to adjust early investment decisions to the arrival of new information. These games are used to investigate the consequences of liquidity shocks for the performance of a Tobin tax as a policy intended to foster coordination success (chapter 1) and the adequacy of a Tobin tax as a means of reducing an economy's vulnerability to sudden stops (chapter 2). The dissertation then analyzes players' incentive to acquire costly information in a sequential decision setting (chapter 3). In chapter 1, a continuum of foreign agents decide whether or not to enter an investment project. A fraction λ of them are hit by liquidity restrictions in a second period and are forced to withdraw early investment, or are precluded from investing in the interim period, depending on the actions they chose in the first period. Players not affected by the liquidity shock are able to revise early decisions. Coordination success is increasing in aggregate investment and decreasing in the aggregate volume of capital exit. Without liquidity shocks, aggregate investment is (in a pivotal contingency) invariant to frictions like a tax on short-term capital; in this case, a Tobin tax always increases the incidence of success. In the presence of liquidity shocks, this invariance result no longer holds in equilibrium. A Tobin tax becomes harmful to aggregate investment, which may reduce the incidence of success if the economy does not benefit enough from avoiding capital reversals. It is shown that the Tobin tax that maximizes the ex-ante probability of successfully coordinated investment is decreasing in the liquidity shock. Chapter 2 studies the effects of a Tobin tax in the same setting as the global game model proposed in chapter 1, except that the liquidity shock is stochastic, i.e., there is also aggregate uncertainty about the extent of the liquidity restrictions. It identifies conditions under which, in the unique equilibrium of the model with a low probability of liquidity shocks but large dry-ups, a Tobin tax is welfare improving, helping agents to coordinate on the good outcome. The model provides a rationale for a Tobin tax in economies that are prone to sudden stops. The optimal Tobin tax tends to be larger when capital reversals are more harmful and when the fraction of agents hit by liquidity shocks is smaller. Chapter 3 focuses on information acquisition in a sequential decision game with payoff complementarity and information externality. When information is cheap relative to players' incentive to coordinate actions, only the first player chooses to process information; the second player learns about the true payoff distribution from observing the first player's decision and follows her action. Miscoordination requires that both players privately process information, which tends to happen when information is expensive and the prior knowledge about the distribution of payoffs has a large variance.
Abstract:
In the present study, a simple and sensitive methodology based on dynamic headspace solid-phase microextraction (HS-SPME) followed by thermal desorption gas chromatography with quadrupole mass detection (GC–qMSD) was developed and optimized for the determination of volatile (VOC) and semi-volatile (SVOC) compounds in different alcoholic beverages: wine, beer and whisky. Key experimental factors influencing the equilibrium of the VOCs and SVOCs between the sample and the SPME fibre, such as the type of fibre coating, extraction time and temperature, sample stirring and ionic strength, were optimized. The performance of five commercially available SPME fibres was evaluated and compared, namely polydimethylsiloxane (PDMS, 100 μm); polyacrylate (PA, 85 μm); polydimethylsiloxane/divinylbenzene (PDMS/DVB, 65 μm); Carboxen™/polydimethylsiloxane (CAR/PDMS, 75 μm); and divinylbenzene/carboxen on polydimethylsiloxane (DVB/CAR/PDMS, 50/30 μm) (StableFlex). An objective comparison among the different alcoholic beverages was established in terms of qualitative and semi-quantitative differences in volatile and semi-volatile compounds. These compounds belong to several chemical families, including higher alcohols, ethyl esters, fatty acids, higher alcohol acetates, isoamyl esters, carbonyl compounds, furanic compounds, terpenoids, C13-norisoprenoids and volatile phenols. The optimized extraction conditions and GC–qMSD led to the successful identification of 44 compounds in white wines, 64 in beers and 104 in whiskies. Some of these compounds were found in all of the examined beverage samples. The main components of the HS-SPME extracts of white wines were ethyl octanoate (46.9%), ethyl decanoate (30.3%), ethyl 9-decenoate (10.7%), ethyl hexanoate (3.1%) and isoamyl octanoate (2.7%). As for beers, the major compounds were isoamyl alcohol (11.5%), ethyl octanoate (9.1%), isoamyl acetate (8.2%), 2-ethyl-1-hexanol (5.9%) and octanoic acid (5.5%). Ethyl decanoate (58.0%), ethyl octanoate (15.1%) and ethyl dodecanoate (13.9%), followed by 3-methyl-1-butanol (1.8%) and isoamyl acetate (1.4%), were found to be the major VOCs in the whisky samples.
Abstract:
A suitable analytical procedure based on static headspace solid-phase microextraction (SPME) followed by thermal desorption gas chromatography with ion trap mass spectrometry detection (GC–ITDMS) was developed and applied for the qualitative and semi-quantitative analysis of the volatile components of Portuguese Terras Madeirenses red wines. The headspace SPME method was optimised in terms of fibre coating, extraction time and extraction temperature. The performance of three commercially available SPME fibres, viz. 100 μm polydimethylsiloxane, PDMS; 85 μm polyacrylate, PA; and 50/30 μm divinylbenzene/carboxen on polydimethylsiloxane, was evaluated and compared. The highest amounts extracted, in terms of the maximum signal recorded for the total volatile composition, were obtained with the PA-coated fibre at 30 °C during an extraction time of 60 min with constant stirring at 750 rpm, after saturation of the sample with NaCl (30%, w/v). More than sixty volatile compounds, belonging to different biosynthetic pathways, were identified, including fatty acid ethyl esters, higher alcohols, fatty acids, higher alcohol acetates, isoamyl esters, carbonyl compounds, and monoterpenols/C13-norisoprenoids.