951 results for In-loop-simulations


Relevance: 90.00%

Abstract:

The Arctic sea ice cover is thinning and retreating, causing changes in surface roughness that in turn modify the momentum flux from the atmosphere through the ice into the ocean. New model simulations incorporating variable sea ice drag coefficients at both the air and water interfaces demonstrate that heterogeneity in sea ice surface roughness significantly affects the spatial distribution and trends of ocean surface stress over recent decades. Simulations with constant sea ice drag coefficients, as used in most climate models, show an increase in annual mean ocean surface stress (0.003 N/m^2 per decade, 4.6%) because the reduction in ice thickness weakens the ice and accelerates ice drift. In contrast, with variable drag coefficients our simulations show annual mean ocean surface stress declining at a rate of 0.002 N/m^2 per decade (3.1%) over the period 1980-2013 because of a significant reduction in surface roughness associated with an increasingly thinner and younger sea ice cover. The effectiveness of sea ice in transferring momentum depends not only on its resistive strength against the wind forcing but also on its top and bottom surface roughness, which vary with ice type and ice conditions. This reveals the need to account for variations in sea ice surface roughness in climate simulations in order to correctly represent the implications of sea ice loss under global warming.
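The momentum transfer described in this abstract can be illustrated with a standard quadratic drag law; the drag coefficients and velocities below are illustrative assumptions for the sketch, not values from the study:

```python
RHO_WATER = 1027.0  # seawater density, kg/m^3

def ocean_surface_stress(u_ice, u_ocean, c_dw):
    """Quadratic drag law for the ice-ocean interface.

    u_ice, u_ocean : ice and ocean surface velocities (m/s)
    c_dw           : ice-water drag coefficient (dimensionless)
    Returns the stress magnitude in N/m^2.
    """
    du = u_ice - u_ocean
    return RHO_WATER * c_dw * du * abs(du)

# Constant drag coefficient (typical climate-model value, assumed here)
tau_const = ocean_surface_stress(0.10, 0.02, 5.5e-3)

# Variable drag: rougher multiyear ice vs smoother young ice (illustrative values)
tau_myi = ocean_surface_stress(0.10, 0.02, 8.0e-3)  # rough multiyear ice
tau_fyi = ocean_surface_stress(0.10, 0.02, 3.0e-3)  # smooth first-year ice

print(f"constant c_dw: {tau_const:.4f} N/m^2")
print(f"multiyear ice: {tau_myi:.4f} N/m^2, first-year ice: {tau_fyi:.4f} N/m^2")
```

At fixed drift speed, replacing rough multiyear ice with smooth young ice lowers the transmitted stress, which is the mechanism behind the declining trend reported above.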

Relevance: 90.00%

Abstract:

Heavy precipitation affected Central Europe in May/June 2013, triggering damaging floods on both the Danube and the Elbe rivers. Based on a modelling approach with COSMO-CLM, moisture fluxes, backward trajectories, cyclone tracks and precipitation fields are evaluated for the relevant period 30 May–2 June 2013. We identify potential moisture sources and quantify their contribution to the flood event, focusing on the Danube basin, through sensitivity experiments: control simulations are performed with undisturbed ERA-Interim boundary conditions, while multiple sensitivity experiments are driven with modified evaporation characteristics over selected marine and land areas. Two relevant cyclones, which moved counter-clockwise along a retrograde path from Southeastern Europe over Eastern Europe towards the northern slopes of the Alps, are identified both in the reanalysis and in our simulations. The control simulations represent the synoptic evolution of the event reasonably well, although the simulated precipitation shows some differences from observations in its spatial and temporal characteristics. With respect to moisture sources, the main precipitation event can be separated into two phases. Our modelling results provide evidence that the two main sources contributing to the event were continental evapotranspiration (moisture recycling; both phases) and the North Atlantic Ocean (first phase only). The Mediterranean Sea played only a minor role as a moisture source. This study confirms the importance of continental moisture recycling for heavy precipitation events over Central Europe during the summer half-year.

Relevance: 90.00%

Abstract:

In the present study, a three-dimensional Eulerian photochemical model was employed to estimate the impact of organic compounds on tropospheric ozone formation in the Metropolitan Area of Sao Paulo (MASP). Base case simulations were conducted for two periods in the year 2000: August 22-24 and March 13-15. Based on the pollutant concentrations calculated by the model, the correlation coefficient relative to observations for ozone ranged from 0.91 to 0.93 in the two periods. In the simulations employed to evaluate the ozone formation potential of individual VOCs, as well as the sensitivity of ozone to the VOC/NOx emission ratio, anthropogenic emissions were varied by 15% (tests performed previously showed that variations of 15% were stable). Although there were significant differences between the two periods, ozone concentrations were found to be much more sensitive to VOCs than to NOx in both periods and throughout the study domain. In addition, considering their individual rates of emission from vehicles, the species/classes most important for ozone formation were as follows: aromatics with a kOH > 2 x 10^4 ppm^-1 min^-1; olefins with a kOH of 7 x 10^4 ppm^-1 min^-1; ethene; and formaldehyde, the principal species related to the production, transport, storage and combustion of fossil fuels.
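The VOC-versus-NOx sensitivity comparison described here can be expressed as a normalized sensitivity coefficient (fractional ozone change per fractional emission change under the ±15% perturbation). The ozone values below are illustrative, not results from the MASP runs:

```python
def relative_sensitivity(o3_base, o3_perturbed, delta=0.15):
    """Normalized sensitivity: fractional ozone change divided by the
    fractional emission change applied in the perturbed run (here 15%)."""
    return ((o3_perturbed - o3_base) / o3_base) / delta

# Illustrative peak-ozone values (ppb) from a hypothetical base run
# and two hypothetical +15%-emission runs
s_voc = relative_sensitivity(120.0, 135.0)  # +15% VOC emissions
s_nox = relative_sensitivity(120.0, 121.5)  # +15% NOx emissions
print(f"VOC sensitivity: {s_voc:.2f}, NOx sensitivity: {s_nox:.2f}")
```

A value near 1 means ozone responds almost proportionally to the emission change; in a VOC-limited regime such as the one reported, the VOC coefficient is much larger than the NOx one.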

Relevance: 90.00%

Abstract:

Classical nova remnants are important scenarios for improving photoionization modeling. This work describes the pseudo-three-dimensional code RAINY3D, which drives the photoionization code Cloudy as a subroutine. Photoionization simulations of old nova remnants are also presented and discussed. In these simulations we analyze the effect of condensations on the remnant spectra. The condensed mass fraction affects the Balmer lines by a factor greater than 4 compared with homogeneous models, which directly impacts the shell mass determination. The He II 4686/H-beta ratio decreases by a factor of 10 in clumpy shells. These lines are also affected by the clump size and density distributions. The behavior of the strongest nebular line observed in nova remnants is also analyzed for heterogeneous shells. Gas diagnostics in nova ejecta are thought to be most accurate during the nebular phase, but we find that at this phase the matter distribution can strongly affect the derived shell physical properties and chemical abundances.
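Why clumping biases the shell mass can be seen from a standard nebular-physics scaling (a sketch of the general argument, not the RAINY3D calculation): at fixed recombination-line flux, F ∝ n^2 f V, the inferred density scales as f^(-1/2) with filling factor f, so the mass, M ∝ n f V, scales as sqrt(f):

```python
import math

def shell_mass_ratio(filling_factor):
    """Mass inferred for a clumpy shell relative to a homogeneous one
    (filling factor f = 1), at fixed recombination-line flux and volume:
    n scales as f**-0.5, so M ∝ n * f * V scales as sqrt(f)."""
    return math.sqrt(filling_factor)

# e.g. a shell with ~6% of its volume occupied by clumps
print(shell_mass_ratio(0.0625))  # -> 0.25: a 4x smaller derived mass
```

This factor-of-a-few effect on the derived mass is the same order as the Balmer-line changes quoted above.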

Relevance: 90.00%

Abstract:

The ejection of gas out of the disc in late-type galaxies is related to star formation and is mainly due to the explosion of Type II supernovae (SN II). In a previous paper, we considered the evolution of a single Galactic fountain, that is, a fountain powered by a single SN cluster. Using three-dimensional hydrodynamical simulations, we studied in detail the fountain flow and its dependence on several factors, such as the Galactic rotation, the distance to the Galactic centre and the presence of a hot gaseous halo. As a natural follow-up, this paper investigates the dynamical evolution of multiple generations of fountains generated by ~100 OB associations. We have considered the observed size-frequency distribution of young stellar clusters within the Galaxy in order to appropriately fuel the multiple fountains in our simulations. Most of the results of the previous paper have been confirmed, such as the formation of intermediate-velocity clouds above the disc by the multiple fountains. This work also confirms the localized nature of the fountain flows: the freshly ejected metals tend to fall back close to the same Galactocentric region where they are delivered. Therefore, the fountains do not significantly change the radial profile of the disc chemical abundance. The multiple-fountain simulations also allowed us to consistently calculate the feedback of the star formation on the halo gas. We found that the hot gas gains about 10 per cent of all the SN II energy produced in the disc. Thus, the SN feedback more than compensates for the halo radiative losses and allows a quasi-steady-state disc-halo circulation to exist. Finally, we have also considered the possibility of mass infall from the intergalactic medium and its interaction with the clouds formed by the fountains. Although our simulations are not suited to reproducing the slow rotational pattern typically observed in the haloes around disc galaxies, they indicate that the presence of an external gas infall may help to slow down the rotation of the gas in the clouds and thus reduce the amount of angular momentum that they transfer to the coronal gas, as previously suggested in the literature.
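Fuelling the multiple fountains from a cluster size-frequency distribution can be sketched with inverse-transform sampling from a power law; the exponent, lower cutoff and per-SN energy below are illustrative assumptions, not the distribution used in the paper, while the 10 per cent coupling is the result quoted above:

```python
import random

def sample_cluster_sn_count(n_min=10.0, alpha=2.0, rng=None):
    """Inverse-transform sample of the number of SN II per OB association,
    drawn from a power law p(n) ∝ n**-alpha for n >= n_min
    (illustrative exponent and cutoff)."""
    rng = rng or random
    u = rng.random()
    return n_min * (1.0 - u) ** (-1.0 / (alpha - 1.0))

rng = random.Random(1)
sn_counts = [sample_cluster_sn_count(rng=rng) for _ in range(100)]  # ~100 associations

E_SN = 1e51  # erg per supernova (canonical value)
e_disc = sum(sn_counts) * E_SN
e_halo = 0.10 * e_disc  # the simulations find ~10% of the SN II energy heats the halo gas
print(f"total SN energy: {e_disc:.2e} erg, deposited in halo: {e_halo:.2e} erg")
```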

Relevance: 90.00%

Abstract:

By means of self-consistent three-dimensional magnetohydrodynamic (MHD) numerical simulations, we analyze magnetized solar-like stellar winds and their dependence on the plasma-beta parameter (the ratio of thermal to magnetic energy density). This is the first study to perform such an analysis by solving the fully ideal three-dimensional MHD equations. We adopt in our simulations a heating parameter, gamma, which is responsible for the thermal acceleration of the wind. We analyze winds with polar magnetic field intensities ranging from 1 to 20 G. We show that the wind structure presents characteristics similar to those of the solar coronal wind. The steady-state magnetic field topology is similar for all cases, presenting a helmet streamer-type configuration in which zones of closed and open field lines coexist. Higher magnetic field intensities lead to faster and hotter winds. For the maximum simulated magnetic intensity of 20 G and solar coronal base density, the wind velocity reaches ~1000 km s^-1 at r ~ 20 r0, and the temperature reaches a maximum of ~6 x 10^6 K at r ~ 6 r0. Increasing the field intensity generates a larger "dead zone" in the wind, i.e., the closed loops that prevent matter from escaping at latitudes below ~45 degrees extend farther away from the star. The Lorentz force naturally leads to a latitude-dependent wind. We show that by increasing the density while maintaining B0 = 20 G, the system returns to slower and cooler winds. For a fixed gamma, we show that the key parameter in determining the wind velocity profile is the beta-parameter at the coronal base. Therefore, there is a group of magnetized flows that present the same terminal velocity regardless of their individual thermal and magnetic energy densities, as long as the plasma-beta parameter is the same. This degeneracy, however, can be removed by comparing other physical parameters of the wind, such as the mass-loss rate. We analyze the influence of gamma on our results and show that it is also important in determining the wind structure.
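The plasma-beta parameter that controls the degeneracy described above is straightforward to evaluate at the coronal base; the base density and temperature below are illustrative solar-like values, not the simulation setup:

```python
import math

K_B = 1.380649e-16  # Boltzmann constant, erg/K (cgs)

def plasma_beta(n, T, B):
    """Ratio of thermal to magnetic energy density, in cgs units.
    n : number density (cm^-3), T : temperature (K), B : field (gauss)."""
    p_thermal = n * K_B * T
    p_magnetic = B ** 2 / (8.0 * math.pi)
    return p_thermal / p_magnetic

# Illustrative solar-like coronal base: n = 1e9 cm^-3, T = 1e6 K
for B in (1.0, 5.0, 20.0):
    print(f"B = {B:4.1f} G -> beta = {plasma_beta(1e9, 1e6, B):.3e}")
```

Since beta scales as 1/B^2, doubling the base density at fixed B = 20 G doubles beta, pushing the wind back toward the slower, cooler regime described above.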

Relevance: 90.00%

Abstract:

We consider a four-dimensional field theory with target space CP(N), which constitutes a generalization of the usual Skyrme-Faddeev model defined on CP(1). We show that it possesses an integrable sector presenting an infinite number of local conservation laws, which are associated with the hidden symmetries of the zero-curvature representation of the theory in loop space. We construct an infinite class of exact solutions for that integrable submodel in which the fields are meromorphic functions of the combinations (x1 + i x2) and (x3 + x0) of the Cartesian coordinates of four-dimensional Minkowski space-time. Among those solutions we have static vortices and also vortices with waves traveling along them at the speed of light. The energy per unit length of the vortices reveals an interesting and intricate interaction between the vortices and the waves.

Relevance: 90.00%

Abstract:

This paper proposes a spatial-temporal downscaling approach to constructing intensity-duration-frequency (IDF) relations at a local site in the context of climate change and variability. More specifically, the proposed approach combines a spatial downscaling method, which links large-scale climate variables given by General Circulation Model (GCM) simulations with daily extreme precipitation at a site, and a temporal downscaling procedure, which describes the relationship between daily and sub-daily extreme precipitation based on the scaling General Extreme Value (GEV) distribution. The feasibility and accuracy of the suggested method were assessed using rainfall data available at eight stations in Quebec (Canada) for the 1961-2000 period and climate simulations under four different climate change scenarios provided by the Canadian (CGCM3) and UK (HadCM3) GCMs. Results of this application indicate that it is feasible to link sub-daily extreme rainfall at a local site with large-scale GCM-based daily climate predictors in order to construct IDF relations for present (1961-1990) and future (2020s, 2050s, and 2080s) periods at a given site under different climate change scenarios. In addition, annual maximum rainfall downscaled from the HadCM3 displayed a smaller change in the future, while values estimated from the CGCM3 indicated a large increasing trend for future periods. This result demonstrates the high uncertainty in climate simulations provided by different GCMs. In summary, the proposed spatial-temporal downscaling method provides an essential tool for the estimation of the extreme rainfall required for various climate-related impact assessment studies for a given region.
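The temporal-downscaling step based on the scaling GEV can be sketched as follows; the daily GEV parameters and the scaling exponent H below are illustrative assumptions, not estimates from the Quebec stations:

```python
import math

def downscale_gev(mu_D, sigma_D, xi, D, d, H):
    """Scaling-GEV hypothesis: location and scale follow (d/D)**H,
    while the shape parameter is scale-invariant."""
    factor = (d / D) ** H
    return mu_D * factor, sigma_D * factor, xi

def gev_quantile(mu, sigma, xi, T):
    """GEV quantile (return level) for return period T years, xi != 0:
    x_T = mu + (sigma/xi) * ((-ln(1 - 1/T))**(-xi) - 1)."""
    y = -math.log(1.0 - 1.0 / T)
    return mu + (sigma / xi) * (y ** (-xi) - 1.0)

# Illustrative daily (24 h) annual-maximum parameters and scaling exponent
mu24, s24, xi = 30.0, 10.0, 0.1          # mm, mm, dimensionless
mu1, s1, xi1 = downscale_gev(mu24, s24, xi, D=24.0, d=1.0, H=0.7)

q24 = gev_quantile(mu24, s24, xi, T=100)  # 100-year 24-hour depth
q1 = gev_quantile(mu1, s1, xi1, T=100)    # 100-year 1-hour depth
print(f"100-yr 24-h depth: {q24:.1f} mm, 100-yr 1-h depth: {q1:.1f} mm")
```

Repeating the quantile calculation over a grid of durations and return periods yields the IDF table directly from the daily parameters.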

Relevance: 90.00%

Abstract:

Differences-in-Differences (DID) is one of the most widely used identification strategies in applied economics. However, how to draw inferences in DID models when there are few treated groups remains an open question. We show that the usual inference methods used in DID models might not perform well when there are few treated groups and errors are heteroskedastic. In particular, we show that when there is variation in the number of observations per group, inference methods designed to work when there are few treated groups tend to (under-) over-reject the null hypothesis when the treated groups are (large) small relative to the control groups. This happens because larger groups tend to have lower variance, generating heteroskedasticity in the group x time aggregate DID model. We provide evidence from Monte Carlo simulations and from placebo DID regressions with the American Community Survey (ACS) and the Current Population Survey (CPS) datasets that this problem is relevant even in datasets with large numbers of observations per group. We then derive an alternative inference method that provides accurate hypothesis testing when there are few treated groups (or even just one) and many control groups in the presence of heteroskedasticity. Our method assumes that we can model the heteroskedasticity of a linear combination of the errors. We show that this assumption can be satisfied without imposing strong assumptions on the errors in common DID applications. With many pre-treatment periods, we show that this assumption can be relaxed; instead, we provide an alternative inference method that relies on strict stationarity and ergodicity of the time series. Finally, we consider two recent alternatives to DID when there are many pre-treatment periods. We extend our inference methods to linear factor models when there are few treated groups. We also derive conditions under which a permutation test for the synthetic control estimator proposed by Abadie et al. (2010) is robust to heteroskedasticity, and we propose a modification of the test statistic that provided a better heteroskedasticity correction in our simulations.
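The mechanism behind the heteroskedasticity, group means of individual-level errors having variance inversely proportional to group size, can be checked with a minimal Monte Carlo (a sketch of the problem being described, not of the paper's inference method):

```python
import random
import statistics

def cell_error(n, rng):
    """Error of one group x time cell in the aggregate DID model:
    the mean of n iid individual-level errors."""
    return statistics.fmean([rng.gauss(0.0, 1.0) for _ in range(n)])

rng = random.Random(0)
reps = 2000
var_small = statistics.pvariance([cell_error(10, rng) for _ in range(reps)])
var_large = statistics.pvariance([cell_error(1000, rng) for _ in range(reps)])

# Theory: Var = sigma^2 / n, so the small groups here have ~100x the
# error variance of the large ones, i.e. the aggregate model is heteroskedastic
print(f"var(n=10) = {var_small:.4f}, var(n=1000) = {var_large:.5f}")
```

Inference methods that treat the aggregate errors as homoskedastic therefore misstate the variance of the treated-group cells whenever treated and control groups differ in size.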

Relevance: 90.00%

Abstract:

Differences-in-Differences (DID) is one of the most widely used identification strategies in applied economics. However, how to draw inferences in DID models when there are few treated groups remains an open question. We show that the usual inference methods used in DID models might not perform well when there are few treated groups and errors are heteroskedastic. In particular, we show that when there is variation in the number of observations per group, inference methods designed to work when there are few treated groups tend to (under-) over-reject the null hypothesis when the treated groups are (large) small relative to the control groups. This happens because larger groups tend to have lower variance, generating heteroskedasticity in the group x time aggregate DID model. We provide evidence from Monte Carlo simulations and from placebo DID regressions with the American Community Survey (ACS) and the Current Population Survey (CPS) datasets that this problem is relevant even in datasets with large numbers of observations per group. We then derive an alternative inference method that provides accurate hypothesis testing when there are few treated groups (or even just one) and many control groups in the presence of heteroskedasticity. Our method assumes that we know how the heteroskedasticity is generated, which is the case when it is generated by variation in the number of observations per group. With many pre-treatment periods, we show that this assumption can be relaxed; instead, we provide an alternative application of our method that relies on assumptions about stationarity and convergence of the moments of the time series. Finally, we consider two recent alternatives to DID when there are many pre-treatment periods. We extend our inference method to linear factor models when there are few treated groups. We also propose a permutation test for the synthetic control estimator that provided a better heteroskedasticity correction in our simulations than the test suggested by Abadie et al. (2010).

Relevance: 90.00%

Abstract:

In this thesis, the development of a dynamic model of a multirotor unmanned aerial vehicle with vertical take-off and landing characteristics, considering input nonlinearities, and a full-state robust backstepping controller are presented. The dynamic model is derived using the Newton-Euler formalism, aiming at a better mathematical representation of the mechanical system for analysis and control design, not only when hovering but also when taking off, landing, or flying to perform a task. The input nonlinearities are deadzone and saturation, through which the gravitational effect and the inherent physical constraints of the rotors are addressed. The experimental multirotor aerial vehicle is equipped with an inertial measurement unit and a sonar sensor, which provide measurements of attitude and altitude, respectively. A real-time attitude estimation scheme based on an extended Kalman filter using quaternions was developed. For the robustness analysis, each sensor was modeled as the ideal value plus an unknown bias and unknown white noise. The bounded robust attitude/altitude controllers were derived based on the notion of global uniform practical asymptotic stability for real systems, which remain globally uniformly asymptotically stable if and only if their solutions are globally uniformly bounded; this addresses convergence and stability to a ball of non-null radius in the state space, under some assumptions. Lyapunov analysis was used to prove the stability of the closed-loop system, compute bounds on the control gains, and guarantee desired bounds on the attitude tracking errors in the presence of measurement disturbances. The control laws were tested in numerical simulations and on an experimental hexarotor developed at the UFRN Robotics Laboratory.
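The prediction step of a quaternion-based attitude estimator like the one described can be sketched as first-order integration of the quaternion kinematics (a minimal illustration under simplified assumptions, not the thesis's EKF):

```python
import math

def quat_normalize(q):
    """Renormalize to unit length to keep q a valid rotation."""
    n = math.sqrt(sum(c * c for c in q))
    return tuple(c / n for c in q)

def quat_propagate(q, omega, dt):
    """One Euler step of the quaternion kinematics q_dot = 0.5 * q ⊗ [0, omega].
    q = (w, x, y, z) unit quaternion; omega = body angular rate (rad/s)."""
    w, x, y, z = q
    p, r, s = omega
    dq = (
        0.5 * (-x * p - y * r - z * s),
        0.5 * ( w * p + y * s - z * r),
        0.5 * ( w * r - x * s + z * p),
        0.5 * ( w * s + x * r - y * p),
    )
    return quat_normalize(tuple(qi + dqi * dt for qi, dqi in zip(q, dq)))
```

In a full EKF this propagation would be paired with a covariance update and corrections from the IMU accelerometer/magnetometer and sonar measurements.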

Relevance: 90.00%

Abstract:

The paper describes a novel neural model for electrical load forecasting in transformers. The network acts as an identifier of structural features of the forecasting process, so that output parameters can be estimated and generalized from an input parameter set. The model was trained and assessed with load data from a Brazilian electric utility, taking into account time, current, voltage and active power in the three phases of the system. The results obtained in the simulations show that the developed technique can be used as an alternative tool for the planning of electric power systems.

Relevance: 90.00%

Abstract:

The paper describes a novel neural model to estimate electrical losses in transformers during the manufacturing phase. The network acts as an identifier of structural features of the electrical-loss process, so that output parameters can be estimated and generalized from an input parameter set. The model was trained and assessed with experimental data, taking into account core losses, copper losses, resistance, current and temperature. The results obtained in the simulations show that the developed technique can be used as an alternative tool to make the analysis of electrical losses in distribution transformers more appropriate with respect to the manufacturing process. Thus, this research has led to an improvement in the rational use of energy.
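One of the loss components in the input set, the copper (I²R) loss, has a closed-form baseline against which such a model can be sanity-checked; the current and resistance values below are illustrative, not from the paper's data:

```python
def copper_loss(i_phase, r_ref, t_op, t_ref=75.0):
    """Three-phase copper loss 3 * I^2 * R(T), using the standard copper
    resistance-temperature correction R(T) = R_ref * (234.5 + T) / (234.5 + T_ref).

    i_phase : phase current (A), r_ref : winding resistance (ohm) at t_ref (C).
    Returns watts.
    """
    r_op = r_ref * (234.5 + t_op) / (234.5 + t_ref)
    return 3.0 * i_phase ** 2 * r_op

# e.g. 100 A per phase, 0.05 ohm per winding referenced to 75 C
print(copper_loss(100.0, 0.05, 75.0))   # at reference temperature: 1500.0 W
print(copper_loss(100.0, 0.05, 100.0))  # hotter winding -> higher loss
```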

Relevance: 90.00%

Abstract:

DBMODELING is a relational database of annotated comparative protein structure models and their metabolic pathway characterization. It focuses on enzymes identified in the genomes of Mycobacterium tuberculosis and Xylella fastidiosa. The main goal of the database is to provide structural models for use in docking simulations and drug design. However, since the accuracy of structural models depends strongly on the sequence identity between template and target, users should be aware that only models of high structural quality should be used in such efforts. Molecular modeling of these genomes generated a database in which all structural models were built from alignments with more than 30% sequence identity, yielding models of medium and high accuracy. All models in the database are publicly accessible at http://www.biocristalografia.df.ibilce.unesp.br/tools. The DBMODELING user interface provides user-friendly menus, so that all information can be retrieved in one step from any web browser. Furthermore, DBMODELING provides a docking interface that allows the user to carry out geometric docking simulations against the molecular models available in the database. There are three other important homology model databases: MODBASE, SWISSMODEL, and GTOP; their main applications are described in the present article. © 2007 Bentham Science Publishers Ltd.

Relevance: 90.00%

Abstract:

In this study, the flocculation process in continuous systems with chambers in series was analyzed using the classical kinetic model of aggregation and break-up proposed by Argaman and Kaufman, which incorporates two main parameters, Ka and Kb. Typical values for these parameters were used, i.e., Ka = 3.68 x 10^-5 to 1.83 x 10^-4 and Kb = 1.83 x 10^-7 to 2.30 x 10^-7 s^-1. The analysis consisted of simulating system behavior under different operating conditions, including variations in the number of chambers used and the utilization of fixed or scaled velocity gradients in the units. The response variable analyzed in all simulations was the total retention time necessary to achieve a given flocculation efficiency, determined by conventional solution methods for the nonlinear algebraic equations corresponding to the material balances on the system. Numbers of chambers ranging from 1 to 5, velocity gradients of 20-60 s^-1 and flocculation efficiencies of 50-90% were adopted.
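For identical completely mixed chambers in series, the steady-state Argaman-Kaufman balance gives the recurrence n_i = (n_{i-1} + Kb·G²·n0·t) / (1 + Ka·G·t) for the primary-particle number, and the required retention time can be found with a root solver. The sketch below uses parameter values from the ranges above; the bisection solver and the specific operating point are illustrative, not the study's method:

```python
def outlet_fraction(m, G, t_chamber, ka, kb):
    """Primary-particle number fraction n_m / n0 leaving m identical
    completely mixed chambers in series (Argaman-Kaufman kinetics)."""
    n = 1.0  # n / n0 entering the first chamber
    for _ in range(m):
        n = (n + kb * G ** 2 * t_chamber) / (1.0 + ka * G * t_chamber)
    return n

def total_time(eff, m, G, ka, kb, t_max=36000.0):
    """Bisection on the per-chamber retention time so that the overall
    flocculation efficiency 1 - n_m / n0 reaches `eff` (monotone in t)."""
    lo, hi = 0.0, t_max
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if 1.0 - outlet_fraction(m, G, mid, ka, kb) < eff:
            lo = mid
        else:
            hi = mid
    return m * 0.5 * (lo + hi)

ka, kb = 1.83e-4, 2.30e-7               # upper ends of the quoted ranges
t1 = total_time(0.90, 1, 30.0, ka, kb)  # single chamber, G = 30 s^-1
t3 = total_time(0.90, 3, 30.0, ka, kb)  # three chambers in series
print(f"90% efficiency: 1 chamber -> {t1:.0f} s, 3 chambers -> {t3:.0f} s")
```

The comparison reproduces the well-known compartmentalization effect: for the same target efficiency, several chambers in series need a much shorter total retention time than a single chamber.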