951 results for In-loop-simulations
Abstract:
Cool materials are characterized by high solar reflectance and high thermal emittance; when applied to the external surface of a roof, they make it possible to limit the amount of solar irradiance absorbed by the roof and to increase the rate of heat flux emitted by radiation to the environment, especially during nighttime. However, a roof also releases heat by convection on its external surface; this mechanism is not negligible, and an incorrect evaluation of its magnitude might introduce significant inaccuracy in the assessment of the thermal performance of a cool roof, in terms of surface temperature and rate of heat flux transferred to the indoors. This issue is particularly relevant in numerical simulations, which are essential at the design stage, and it therefore deserves adequate attention. In the present paper, a review of the most common algorithms used for the calculation of the convective heat transfer coefficient due to wind on horizontal building surfaces is presented. Then, with reference to a case study in Italy, the simulated results are compared to the outcomes of a measurement campaign. Hence, the most appropriate algorithms for the convective coefficient are identified, and the errors deriving from an incorrect selection of this coefficient are discussed.
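As an illustration of how strongly the choice of algorithm can affect the predicted convective exchange, the sketch below compares two widely cited wind-driven correlations (a Jurges/McAdams-type linear fit and the Watmuff et al. fit) for an assumed roof surface and air temperature. The correlations and the temperatures are illustrative assumptions, not the specific algorithms reviewed in the paper.

```python
# Illustrative sketch (not from the paper): comparison of two widely cited
# wind-driven convective coefficient correlations for an external surface.
# The correlations and the example temperatures are assumptions for illustration.

def h_mcadams(v):
    """Jurges/McAdams-type linear correlation, h in W/(m^2 K), v in m/s."""
    return 5.7 + 3.8 * v

def h_watmuff(v):
    """Watmuff et al. correlation, h in W/(m^2 K), v in m/s."""
    return 2.8 + 3.0 * v

T_surface = 45.0   # cool-roof surface temperature, degC (assumed)
T_air = 30.0       # outdoor air temperature, degC (assumed)

for v in (1.0, 3.0, 5.0):
    q1 = h_mcadams(v) * (T_surface - T_air)   # convective flux, W/m^2
    q2 = h_watmuff(v) * (T_surface - T_air)
    print(f"v = {v:.0f} m/s: q_McAdams = {q1:6.1f} W/m2, q_Watmuff = {q2:6.1f} W/m2")
```

Even this crude comparison shows deviations of tens of W/m2 between correlations at moderate wind speeds, which is the kind of discrepancy the paper quantifies against measured data.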
Abstract:
The influence of the aspect ratio (building height/street canyon width) and the mean building height of cities on local energy fluxes and temperatures is studied by means of an Urban Canopy Model (UCM) coupled with a one-dimensional second-order turbulence closure model. The UCM presented is similar to the Town Energy Balance (TEB) model in most of its features but differs in a few important aspects. In particular, the street canyon walls are treated separately which leads to a different budget of radiation within the street canyon walls. The UCM has been calibrated using observations of incoming global and diffuse solar radiation, incoming long-wave radiation and air temperature at a site in São Paulo, Brazil. Sensitivity studies with various aspect ratios have been performed to assess their impact on urban temperatures and energy fluxes at the top of the canopy layer. In these simulations, it is assumed that the anthropogenic heat flux and latent heat fluxes are negligible. Results show that the simulated net radiation and sensible heat fluxes at the top of the canopy decrease and the stored heat increases as the aspect ratio increases. The simulated air temperature follows the behavior of the sensible heat flux. (C) 2010 Elsevier Ltd. All rights reserved.
Abstract:
The Wolf-Rayet (WR) stars are hot, luminous objects undergoing extreme mass loss via a continuous stellar wind. The high mass loss rates and high terminal velocities of WR stellar winds constitute a challenge to the theories of radiation-driven winds. Several authors have incorporated magnetic forces into the line-driven mechanism in order to explain these characteristics of the wind. Observations indicate that WR stellar winds may reach, at the photosphere, velocities of the order of the terminal values, which means that an important part of the wind acceleration occurs in the optically thick region. The aim of this study is to analyze a model in which the wind of a WR star begins to be accelerated in the optically thick part of the wind. We used stellar parameters taken from the literature as initial conditions and solved the energy, mass and momentum equations. We demonstrate that acceleration by radiative forces alone is prevented by the general behavior of the opacities. Combining radiative forces with a flux of Alfven waves, we found in the simulations a fast drop in the wind density profile, which strongly reduces the extension of the optically thick region, and the wind becomes optically thin too close to its base. Understanding how the WR wind is initiated is still an open issue. (C) 2010 COSPAR. Published by Elsevier Ltd. All rights reserved.
Abstract:
The diffusion of astrophysical magnetic fields in conducting fluids in the presence of turbulence depends on whether magnetic fields can change their topology via reconnection in highly conducting media. Recent progress in understanding fast magnetic reconnection in the presence of turbulence reassures that the magnetic field behavior in computer simulations and turbulent astrophysical environments is similar, as far as magnetic reconnection is concerned. This makes it meaningful to perform MHD simulations of turbulent flows in order to understand the diffusion of magnetic field in astrophysical environments. Our studies of magnetic field diffusion in turbulent medium reveal interesting new phenomena. First of all, our three-dimensional MHD simulations initiated with anti-correlating magnetic field and gaseous density exhibit at later times a de-correlation of the magnetic field and density, which corresponds well to the observations of the interstellar medium. While earlier studies stressed the role of either ambipolar diffusion or time-dependent turbulent fluctuations for de-correlating magnetic field and density, we get the effect of permanent de-correlation with a one-fluid code, i.e., without invoking ambipolar diffusion. In addition, in the presence of gravity and turbulence, our three-dimensional simulations show the decrease of the magnetic flux-to-mass ratio as the gaseous density at the center of the gravitational potential increases. We observe this effect both in the situations when we start with equilibrium distributions of gas and magnetic field and when we follow the evolution of collapsing dynamically unstable configurations. Thus, the process of turbulent magnetic field removal should be applicable both to quasi-static subcritical molecular clouds and cores and to violently collapsing supercritical entities. The increase of the gravitational potential as well as the magnetization of the gas increases the segregation of the mass and magnetic flux in the saturated final state of the simulations, supporting the notion that the reconnection-enabled diffusivity relaxes the magnetic field + gas system in the gravitational field to its minimal energy state. This effect is expected to play an important role in star formation, from its initial stages of concentrating interstellar gas to the final stages of accretion onto the forming protostar. In addition, we benchmark our codes by studying the heat transfer in magnetized compressible fluids and confirm the high rates of turbulent advection of heat obtained in an earlier study.
Abstract:
In this paper we describe and evaluate a geometric mass-preserving redistancing procedure for the level set function on general structured grids. The proposed algorithm is adapted from a recent finite element-based method and preserves the mass by means of a localized mass correction. A salient feature of the scheme is the absence of adjustable parameters. The algorithm is tested in two and three spatial dimensions and compared with the widely used partial differential equation (PDE)-based redistancing method using structured Cartesian grids. Through the use of quantitative error measures of interest in level set methods, we show that the overall performance of the proposed geometric procedure is better than PDE-based reinitialization schemes, since it is more robust with comparable accuracy. We also show that the algorithm is well-suited for the highly stretched curvilinear grids used in CFD simulations. Copyright (C) 2010 John Wiley & Sons, Ltd.
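For context, the sketch below is a minimal one-dimensional version of the PDE-based reinitialization that the geometric procedure is compared against: the Hamilton-Jacobi equation d(phi)/dtau = sign(phi0) (1 - |grad phi|) is marched toward steady state with first-order Godunov upwinding. The discretization choices and the test field are assumptions for illustration, not the schemes actually used in the paper.

```python
import numpy as np

def reinitialize_1d(phi0, dx, n_steps=50, dtau=None):
    """First-order PDE-based reinitialization in 1D:
       d(phi)/dtau = sign(phi0) * (1 - |dphi/dx|),
       using Godunov upwinding for |dphi/dx| (illustrative sketch)."""
    if dtau is None:
        dtau = 0.5 * dx                       # CFL-type pseudo-time step
    phi = phi0.copy()
    s = phi0 / np.sqrt(phi0**2 + dx**2)       # smoothed sign of the initial field
    for _ in range(n_steps):
        dm = np.diff(phi, prepend=phi[0]) / dx    # backward differences
        dp = np.diff(phi, append=phi[-1]) / dx    # forward differences
        a_p, a_m = np.maximum(dm, 0.0), np.minimum(dm, 0.0)
        b_p, b_m = np.maximum(dp, 0.0), np.minimum(dp, 0.0)
        # Godunov Hamiltonian for |phi_x|, depending on the sign of phi0
        grad = np.where(s > 0,
                        np.sqrt(np.maximum(a_p**2, b_m**2)),
                        np.sqrt(np.maximum(a_m**2, b_p**2)))
        phi = phi - dtau * s * (grad - 1.0)
    return phi

# usage: a distorted level set whose zero crossings are at x = 0.3 and x = 0.7
x = np.linspace(0.0, 1.0, 101)
phi0 = (x - 0.3) * (0.7 - x)          # positive inside, not a distance function
phi = reinitialize_1d(phi0, dx=x[1] - x[0])
```

This kind of iterative scheme is what the geometric, parameter-free procedure in the paper aims to replace, since the pseudo-time marching can perturb the interface location and lose mass.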
Abstract:
This paper explores the distortions in the cost of education, associated with government policies and institutional factors, as an additional determinant of cross-country income differences. Agents are finitely lived and the model takes into account life-cycle features of human capital accumulation. There are two sectors, one producing goods and the other providing educational services. The model is calibrated and simulated for 89 economies. We find that human capital taxation has a relevant impact on incomes, which is amplified by its indirect effect on the returns to physical capital. Life expectancy plays an important role in determining long-run output: the expansion of the population's working life increases the present value of the flow of wages, which induces further human capital investment and raises incomes. Although in our simulations the largest gains are observed when productivity is equated across countries, changes in longevity and in the incentives for educational investment are too relevant to ignore.
Abstract:
Leverage in hedge funds has concerned investors and researchers in recent years. Recent examples of such strategies proved advantageous in periods of low economic uncertainty, but disastrous in times of crisis. In quantitative finance, the goal has been to find the leverage level that optimizes the return of an investment given the risk taken. In the literature, studies have been more qualitative than quantitative, and little use has been made of computational methods to find a solution. One way to assess whether one leverage strategy yields higher gains than another is to define an objective function relating risk and return for each strategy, establish the problem's constraints, and solve it numerically through Monte Carlo simulations. This dissertation adopted this approach to study the investment in a long-short strategy within an equity investment fund under different scenarios: different forms of leverage, stock price dynamics, and levels of correlation between those prices. Simulations of the dynamics of the invested capital were performed as a function of changes in stock prices over time. Credit guarantee (margin) criteria were considered, as well as the possibility of buying and selling stocks during the investment period and the investor's risk profile. Finally, the distribution of investment returns was studied for different leverage levels, and it was possible to quantify which of these levels is most advantageous for the investment strategy given the risk constraints.
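A minimal sketch of the kind of Monte Carlo experiment described above is given below: two correlated stock-price paths drive the daily profit and loss of a leveraged long-short position, and the distribution of the final capital is examined. All parameter values (drifts, volatilities, correlation, leverage, and the absence of margin calls and financing costs) are illustrative assumptions, not the dissertation's scenarios.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative parameters (assumptions, not values from the dissertation)
n_paths, n_steps, dt = 10_000, 252, 1.0 / 252
mu_long, mu_short = 0.12, 0.08        # drifts of the long and short stocks
sigma_long, sigma_short = 0.30, 0.30  # volatilities
rho = 0.6                             # correlation between the two stocks
leverage = 2.0                        # gross exposure per unit of capital

# correlated Brownian increments
z1 = rng.standard_normal((n_paths, n_steps))
z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal((n_paths, n_steps))

# simple daily log-return dynamics for both legs
r_long = (mu_long - 0.5 * sigma_long**2) * dt + sigma_long * np.sqrt(dt) * z1
r_short = (mu_short - 0.5 * sigma_short**2) * dt + sigma_short * np.sqrt(dt) * z2

# daily P&L of a leveraged long-short position (long leg minus short leg),
# ignoring financing costs, margin calls and rebalancing rules for simplicity
pnl = leverage * 0.5 * (np.expm1(r_long) - np.expm1(r_short))
capital = np.prod(1.0 + pnl, axis=1)   # capital after one year, per path

print("mean final capital :", capital.mean())
print("5% worst outcome   :", np.quantile(capital, 0.05))
```

Repeating the experiment for several leverage values and comparing a risk-adjusted objective (for instance, mean return against the 5% quantile) reproduces, in miniature, the optimization question the dissertation studies.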
Abstract:
The objective of this work is to perform a back-test of the Magic Formula on Bovespa, gathering evidence on violations of the Efficient Market Hypothesis in the Brazilian market. Developed by Joel Greenblatt, the Magic Formula is a portfolio-building methodology that consists of choosing stocks with high ROICs and Earnings Yields, following the Value Investing philosophy. Several portfolios were assembled between December 2002 and May 2014 using different combinations of number of assets per portfolio and holding periods. All portfolios, regardless of the number of assets or holding period, outperformed the Ibovespa. The differences between the portfolios' CAGRs and that of the Ibovespa were significant, with the worst-performing portfolio showing a CAGR of 27.7% against 14.1% for the Ibovespa. The portfolios also obtained positive results after being adjusted for risk. The worst return-to-volatility ratio was 1.2, compared to 0.6 for the Ibovespa. The portfolios with the worst scores also showed good results in most scenarios, contrary to initial expectations and to results observed in other studies. Additionally, simulations were performed for several 5-year periods in order to analyze the robustness of the results. All portfolios showed a higher CAGR than the Ibovespa in every simulated period, regardless of the number of assets included or the holding periods. These results indicate that it is possible to achieve above-market returns in Brazil using only public historical data. This is a violation of the weak form of the Efficient Market Hypothesis.
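The ranking step of the Magic Formula can be sketched as below: each stock receives a rank by ROIC and a rank by Earnings Yield, the two ranks are summed, and the portfolio takes the best-scored stocks. The column names and toy data are assumptions for illustration; the actual back-test uses Bovespa data from December 2002 to May 2014.

```python
import pandas as pd

def magic_formula_portfolio(snapshot: pd.DataFrame, n_assets: int = 10) -> pd.DataFrame:
    """snapshot: one row per stock with columns 'ticker', 'roic', 'earnings_yield'
    (hypothetical column names). Returns the n_assets best-scored stocks."""
    ranked = snapshot.copy()
    ranked["rank_roic"] = ranked["roic"].rank(ascending=False)
    ranked["rank_ey"] = ranked["earnings_yield"].rank(ascending=False)
    ranked["score"] = ranked["rank_roic"] + ranked["rank_ey"]   # lower is better
    return ranked.nsmallest(n_assets, "score")[["ticker", "roic", "earnings_yield", "score"]]

# usage with toy data (not real Bovespa figures)
data = pd.DataFrame({
    "ticker": ["AAA3", "BBB4", "CCC3", "DDD4"],
    "roic": [0.25, 0.10, 0.18, 0.30],
    "earnings_yield": [0.12, 0.20, 0.09, 0.05],
})
print(magic_formula_portfolio(data, n_assets=2))
```

In a back-test, this selection would be repeated at each rebalancing date with the fundamentals available at that date, and the resulting portfolios held for the chosen holding period.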
Abstract:
The synthetic control (SC) method has recently been proposed as an alternative method to estimate treatment effects in comparative case studies. Abadie et al. [2010] and Abadie et al. [2015] argue that one of the advantages of the SC method is that it imposes a data-driven process to select the comparison units, providing more transparency and less discretionary power to the researcher. However, an important limitation of the SC method is that it does not provide clear guidance on the choice of the predictor variables used to estimate the SC weights. We show that such lack of specific guidance provides significant opportunities for the researcher to search for specifications with statistically significant results, undermining one of the main advantages of the method. Considering six alternative specifications commonly used in SC applications, we calculate in Monte Carlo simulations the probability of finding a statistically significant result at 5% in at least one specification. We find that this probability can be as high as 13% (23% for a 10% significance test) when there are 12 pre-intervention periods, and that it decays slowly with the number of pre-intervention periods. With 230 pre-intervention periods, this probability is still around 10% (18% for a 10% significance test). We show that the specification that uses the average pre-treatment outcome values to estimate the weights performed particularly badly in our simulations. However, the specification-searching problem remains relevant even when we do not consider this specification. We also show that this specification-searching problem is relevant in simulations with real datasets looking at placebo interventions in the Current Population Survey (CPS). In order to mitigate this problem, we propose a criterion to select among different SC specifications based on the prediction error of each specification in placebo estimations.
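A minimal sketch of the proposed selection idea is given below: each candidate specification is evaluated by its mean squared prediction error on placebo (untreated) units over the pre-intervention periods, and the specification with the smallest error is kept. The placebo outcomes and the three named specifications are stand-ins for illustration; a real application would obtain the predictions from an SC estimator as in Abadie et al. [2010].

```python
import numpy as np

rng = np.random.default_rng(0)

def placebo_mspe(predicted: np.ndarray, observed: np.ndarray) -> float:
    """Mean squared prediction error over placebo pre-intervention periods."""
    return float(np.mean((predicted - observed) ** 2))

# toy placebo outcomes: 20 placebo runs x 12 pre-intervention periods
observed = rng.normal(size=(20, 12))

# three hypothetical specifications producing placebo fits of different quality;
# in practice these would be SC predictions using different predictor sets
specs = {
    "all_pre_treatment_outcomes": observed + rng.normal(scale=0.2, size=observed.shape),
    "average_pre_treatment_outcome": observed + rng.normal(scale=0.8, size=observed.shape),
    "odd_period_outcomes": observed + rng.normal(scale=0.4, size=observed.shape),
}

scores = {name: placebo_mspe(pred, observed) for name, pred in specs.items()}
best = min(scores, key=scores.get)
print(scores)
print("selected specification:", best)
```

Choosing the specification before looking at the treated unit's post-intervention fit is what removes the researcher's discretion that drives the specification-searching problem documented in the paper.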
Abstract:
Nowadays, evaluation methods to measure the thermal performance of buildings have been developed in order to improve thermal comfort in buildings and reduce the use of energy by active cooling and heating systems. However, in developed countries, the criteria used in rating systems to assess the thermal and energy performance of buildings have shown some limitations when applied to naturally ventilated buildings in tropical climates. The main objective of the present research is to propose a method to evaluate the thermal performance of low-rise residential buildings in warm humid climates through computational simulation. The method was developed in order to conceive a suitable rating system for the thermal performance assessment of such buildings, using as criteria the indoor air temperature and an adaptive thermal comfort model. The research made use of the software VisualDOE 4.1 in two simulation runs of a base case modeled for two basic types of occupancy: living room and bedroom. In the first simulation run, sensitivity analyses were made to identify the variables with the highest impact on the cases' thermal performance. Besides that, the results also allowed the formulation of design recommendations for warm humid climates toward an improvement in the thermal performance of residential buildings in similar situations. The results of the second simulation run were used to identify the so-called Thermal Performance Spectrum (TPS) of both occupancy types, which reflects the variations in thermal performance considering the local climate, building typology, chosen construction materials and studied occupancies. This analysis generates an index named IDTR (Thermal Performance Resultant Index), which was configured as a thermal performance rating system. It correlates the thermal performance with the number of hours that the indoor air temperature falls within each of six pre-defined thermal comfort bands, which received weights to measure the discomfort intensity. The use of this rating system proved to be appropriate when applied to one of the simulated cases, presenting advantages over other evaluation methods and becoming a tool for the understanding of building thermal behavior.
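The band-weighting idea behind the IDTR can be sketched as follows: hourly indoor air temperatures are binned into comfort bands, the hours in each band are multiplied by a discomfort weight, and the weighted sum is normalized by the total number of hours. The band limits and weights used here are assumptions for illustration only; the thesis defines its own six bands and weights.

```python
import numpy as np

# Assumed illustrative bands: upper limits of the first five bands in degC,
# plus one open-ended band above; one discomfort weight per band.
band_upper = [18.0, 21.0, 24.0, 27.0, 30.0]
band_weights = [3.0, 1.0, 0.0, 0.0, 1.0, 3.0]

def weighted_discomfort_index(hourly_temps):
    """Sum over comfort bands of (hours in band) * (band weight), per hour simulated."""
    temps = np.asarray(hourly_temps, dtype=float)
    band_index = np.digitize(temps, band_upper)          # band (0..5) of each hourly value
    hours_per_band = np.bincount(band_index, minlength=len(band_weights))
    return float(np.dot(hours_per_band, band_weights)) / temps.size

# usage: one simulated year (8760 h) of synthetic indoor air temperature for a bedroom
rng = np.random.default_rng(1)
indoor_temp = 26.0 + 3.0 * np.sin(np.linspace(0, 2 * np.pi * 365, 8760)) + rng.normal(0, 1, 8760)
print(f"discomfort index: {weighted_discomfort_index(indoor_temp):.3f}")
```

A lower value of such an index indicates more hours inside the comfortable bands, which is the direction in which the rating system proposed in the thesis ranks design alternatives.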
Abstract:
Oil production and exploration techniques have evolved in recent decades in order to increase fluid flow rates and optimize how the required equipment is used. The basic principle of the Electric Submersible Pumping (ESP) lift method is the use of an electric downhole motor to drive a centrifugal pump and transport the fluids to the surface. Electric Submersible Pumping is an option that has been gaining ground among artificial lift methods due to its ability to handle large liquid flow rates in onshore and offshore environments. The performance of a well equipped with an ESP system is intrinsically related to the centrifugal pump operation; it is the pump that converts the motor power into head. In the present work, a computer model to analyze the three-dimensional flow in a centrifugal pump used in Electric Submersible Pumping has been developed. Using the commercial program ANSYS® CFX®, initially with water as the working fluid, the geometry and simulation parameters were defined in order to obtain an approximation of the flow inside the channels of the pump impeller and diffuser. Three different geometry conditions were initially tested to determine which is most suitable for solving the problem. After choosing the most appropriate geometry, three mesh conditions were analyzed and the obtained values were compared to the experimental head characteristic curve provided by the manufacturer. The results approached the experimental curve, and the simulation time and model convergence were satisfactory considering that the studied problem involves numerical analysis. After the tests with water, oil was used in the simulations, and the results were compared to a methodology used in the petroleum industry to correct for viscosity. In general, for the models with water and oil, the results with single-phase fluids were coherent with the experimental curves and, through three-dimensional computer models, they provide a preliminary basis for the analysis of the two-phase flow inside the channels of centrifugal pumps used in ESP systems.
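As a small illustration of the comparison step, the sketch below converts a stage pressure rise extracted from a CFD run into head via H = dp / (rho * g) and computes the deviation from points of a manufacturer's characteristic curve. All flow rates, pressure rises and catalog values are hypothetical.

```python
# Illustrative sketch (names and numbers are assumptions): converting the
# stage pressure rise from a CFD run into head and comparing it with
# points of a manufacturer's characteristic curve.

RHO_WATER = 997.0   # kg/m^3 (assumed reference density)
G = 9.81            # m/s^2

def head_from_pressure_rise(delta_p_pa, rho=RHO_WATER):
    """Head in meters of pumped fluid from the total pressure rise in Pa."""
    return delta_p_pa / (rho * G)

# hypothetical CFD results: flow rate (m^3/h) -> total pressure rise (Pa)
cfd_results = {20.0: 68_000.0, 30.0: 61_000.0, 40.0: 49_000.0}

# hypothetical catalog head points at the same flow rates (m)
catalog_head = {20.0: 7.2, 30.0: 6.4, 40.0: 5.1}

for q, dp in cfd_results.items():
    h_cfd = head_from_pressure_rise(dp)
    err = 100.0 * (h_cfd - catalog_head[q]) / catalog_head[q]
    print(f"Q = {q:5.1f} m3/h: H_cfd = {h_cfd:4.2f} m, "
          f"H_catalog = {catalog_head[q]:4.2f} m, dev = {err:+.1f}%")
```

The same comparison, repeated for each mesh and geometry condition, is how the simulated head curve is validated against the manufacturer's data before moving on to viscous fluids.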
Abstract:
Steam injection is a method usually applied to very viscous oils; it consists of injecting heat to reduce the viscosity and, therefore, increase the oil mobility, improving oil production. Designing a steam injection project requires a reservoir simulation in order to define the various parameters necessary for efficient heat management of the reservoir and, with this, improve its recovery factor. The purpose of this work is to show the influence of wellbore/reservoir coupling on the thermal simulation of reservoirs under cyclic steam stimulation. The methodology used in the solution of the problem involved the development of a wellbore model for steam flow in injection wells, VapMec, and a black-oil reservoir model for cyclic steam injection in oil reservoirs. Case studies were developed for shallow and deep reservoirs, considering the usual injection well configurations found in the oil industry, i.e., conventional tubing without packer, conventional tubing with packer, and insulated tubing with packer. A comparative study of the injection and production parameters was performed, always under the same operational conditions, for the two simulation models, non-coupled and coupled. It was observed that the results are very similar when the well injection rate is specified, whereas significant differences appear when the well pressure is specified. Finally, on the basis of computational experiments, it was concluded that the influence of wellbore/reservoir coupling in thermal simulations using cyclic steam injection as an enhanced oil recovery method is greater when the well pressure is specified, while for a specified well injection rate the steam flow model for the injection well and the reservoir may be simulated in a non-coupled way.
Abstract:
This study aimed: 1) to classify ingredients according to their digestible amino acid (AA) profile; 2) to determine the ingredients with AA profiles closest to the ideal for broiler chickens; and 3) to compare the digestible AA profiles of simulated diets with the ideal protein profile. The digestible AA levels of 30 ingredients were compiled from the literature and presented as percentages of lysine, according to the ideal protein concept. Cluster and principal component analyses (exploratory analyses) were used to compose and describe groups of ingredients according to AA profiles. Four ingredient groups were identified by cluster analysis, and the classification of the ingredients within each of these groups was obtained from a principal component analysis, yielding 11 classes of ingredients with similar digestible AA profiles. The ingredients with AA profiles closest to the ideal protein were meat and bone meal 45, fish meal 60 and wheat germ meal, all of them constituting Class 1; the ingredients from the other classes gradually diverged from the ideal protein. Soybean meal, which is the main protein source for poultry, showed good AA balance, since it was included in Class 3. On the contrary, corn, which is the main energy source in poultry diets, was classified in Class 8. Dietary AA profiles were improved when corn and/or soybean meal were partially or totally replaced in the simulations by ingredients with better AA balance.
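The normalization and grouping steps can be sketched as follows: each ingredient's digestible AA levels are expressed as percentages of its lysine content (the ideal protein convention) and ingredients with similar normalized profiles are grouped by hierarchical clustering. The toy values and the use of Ward linkage are assumptions for illustration; the study compiled 30 ingredients from the literature and also used principal component analysis.

```python
import pandas as pd
from scipy.cluster.hierarchy import linkage, fcluster

# toy digestible AA levels, % of the ingredient (assumed values, not the compiled data)
aa_levels = pd.DataFrame(
    {
        "lys": [2.8, 1.5, 0.24, 3.9],
        "met": [0.7, 0.6, 0.17, 1.1],
        "thr": [1.7, 1.3, 0.28, 1.9],
        "trp": [0.6, 0.5, 0.06, 0.5],
    },
    index=["soybean_meal", "wheat_germ_meal", "corn", "fish_meal_60"],
)

# ideal-protein style normalization: each AA as a percentage of lysine
profile = aa_levels.div(aa_levels["lys"], axis=0) * 100.0

# hierarchical clustering on the normalized profiles (Ward linkage, 2 groups)
groups = fcluster(linkage(profile.values, method="ward"), t=2, criterion="maxclust")
print(profile.round(1))
print(dict(zip(profile.index, groups)))
```

Distances between the normalized profiles and a reference ideal-protein vector would then indicate how far each class sits from the ideal, which is how the classes are ordered in the study.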
Abstract:
The main purpose of this work was the development of ceramic dielectric substrates of bismuth niobate (BiNbO4) doped with vanadium pentoxide (V2O5), with high permittivity, used in the construction of microstrip patch antennas with applications in wireless communications systems. The high electrical permittivity of the ceramic substrate provided a reduction of the antenna dimensions. The numerical results obtained in the simulations and the measurements performed with the microstrip patch antennas showed good agreement. These antennas can be used in wireless communication systems in various frequency bands. Results were satisfactory for antennas operating at frequencies in the S band, in the range between 2.5 GHz and 3.0 GHz.
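The size reduction obtained with a high-permittivity substrate can be illustrated with the standard transmission-line design equations for a rectangular microstrip patch, as sketched below. The permittivity assumed for the BiNbO4:V2O5 ceramic and the substrate thickness are illustrative assumptions, not measured values from the work.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def patch_dimensions(f0_hz, eps_r, h_m):
    """Standard transmission-line design equations for a rectangular microstrip
    patch: returns (width, length) in meters for resonance frequency f0."""
    w = C / (2.0 * f0_hz) * math.sqrt(2.0 / (eps_r + 1.0))
    eps_eff = (eps_r + 1.0) / 2.0 + (eps_r - 1.0) / 2.0 * (1.0 + 12.0 * h_m / w) ** -0.5
    dl = 0.412 * h_m * ((eps_eff + 0.3) * (w / h_m + 0.264)) / ((eps_eff - 0.258) * (w / h_m + 0.8))
    length = C / (2.0 * f0_hz * math.sqrt(eps_eff)) - 2.0 * dl
    return w, length

# compare a conventional substrate with a high-permittivity ceramic at 2.75 GHz;
# the ceramic permittivity below is an assumed round number for illustration
for name, eps_r in [("FR-4 (eps_r ~ 4.4)", 4.4), ("BiNbO4 ceramic (eps_r ~ 40, assumed)", 40.0)]:
    w, l = patch_dimensions(2.75e9, eps_r, h_m=1.5e-3)
    print(f"{name}: W = {w * 1000:.1f} mm, L = {l * 1000:.1f} mm")
```

The roughly threefold reduction in patch dimensions given by these textbook formulas is the effect the abstract attributes to the high permittivity of the ceramic substrate.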
Abstract:
Simulations based on cognitively rich agents can become a very intensive computing task, especially when the simulated environment represents a complex system. This situation becomes worse when time constraints are present. This kind of simulation would benefit from a mechanism that improves the way agents perceive and react to changes in such environments; in other words, an approach to improve the efficiency (performance and accuracy) of the decision process of autonomous agents in a simulation would be useful. In complex environments full of variables, it is possible that not all the information available to the agent is necessary for its decision-making process, depending on the task being performed. The agent would then need to filter the incoming perceptions in the same way as we do with our focus of attention. By using a focus of attention, only the information that really matters to the agent's running context is perceived (cognitively processed), which can improve the decision-making process. The architecture proposed herein presents a structure for cognitive agents divided into two parts: 1) the main part contains the reasoning/planning process, the knowledge and the affective state of the agent, and 2) a set of behaviors that are triggered by the planner in order to achieve the agent's goals. Each of these behaviors has a focus of attention that is dynamically adjustable at runtime, according to the variation of the agent's affective state. The focus of each behavior is divided into a qualitative focus, which is responsible for the quality of the perceived data, and a quantitative focus, which is responsible for the quantity of the perceived data. Thus, the behavior is able to filter the information sent by the agent's sensors and build a list of perceived elements containing only the information necessary to the agent, according to the context of the behavior that is currently running. In addition to this human-inspired focus of attention, the agent is also endowed with an affective state, based on theories of human emotion, mood and personality. This model serves as the basis for the mechanism of continuous adjustment of the agent's attention focus, both qualitative and quantitative. With this mechanism, the agent can adjust its focus of attention during the execution of a behavior in order to become more efficient in the face of environmental changes. The proposed architecture can be used very flexibly: the focus of attention can work in a fixed way (neither the qualitative nor the quantitative focus changes), or with different combinations of qualitative and quantitative focus variation. The architecture was built on a platform for BDI agents, but its design allows it to be used with any other type of agent, since the implementation is made only in the perception layer of the agent. In order to evaluate the proposed contribution, an extensive series of experiments was conducted in an agent-based simulation of a fire-spreading scenario. In the simulations, agents using the architecture proposed in this work are compared with similar agents (with the same reasoning model) that process all the information sent by the environment. Intuitively, one would expect the omniscient agents to be more efficient, since they can consider all possible options before making a decision. However, the experiments showed that attention-focus-based agents can be as efficient as the omniscient ones, with the advantage of being able to solve the same problems in a significantly reduced time. Thus, the experiments indicate the efficiency of the proposed architecture.
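A minimal sketch of the perception-filtering idea is given below: a behavior keeps a qualitative focus (which kinds of percepts are relevant to it) and a quantitative focus (how many percepts it processes), with the quantitative focus widened or narrowed by a scalar stand-in for the agent's affective state. The class and attribute names are assumptions for illustration, not the thesis's actual implementation on the BDI platform.

```python
from dataclasses import dataclass
from typing import Any, List

@dataclass
class Percept:
    kind: str        # e.g. "fire", "smoke", "civilian", "temperature"
    salience: float  # importance estimated by the sensor
    data: Any = None

class FocusedBehavior:
    def __init__(self, relevant_kinds: List[str], base_capacity: int):
        self.relevant_kinds = set(relevant_kinds)  # qualitative focus
        self.base_capacity = base_capacity         # quantitative focus (baseline)

    def perceive(self, percepts: List[Percept], arousal: float) -> List[Percept]:
        """Return only the percepts this behavior will cognitively process.
        Higher arousal (affective-state stand-in) widens the quantitative focus."""
        relevant = [p for p in percepts if p.kind in self.relevant_kinds]
        capacity = max(1, int(self.base_capacity * (1.0 + arousal)))
        return sorted(relevant, key=lambda p: p.salience, reverse=True)[:capacity]

# usage in a fire-fighting scenario: only fire/smoke percepts pass the filter
behavior = FocusedBehavior(relevant_kinds=["fire", "smoke"], base_capacity=2)
sensed = [Percept("fire", 0.9), Percept("civilian", 0.8), Percept("smoke", 0.4),
          Percept("temperature", 0.3), Percept("fire", 0.7)]
print(behavior.perceive(sensed, arousal=0.5))
```

The omniscient baseline described in the experiments corresponds to removing both filters, i.e., passing every percept to the reasoning layer regardless of the current behavior or affective state.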