73 results for dynamic factor models
Abstract:
This paper presents a new method to calculate sky view factors (SVFs) from high-resolution urban digital elevation models using a shadow-casting algorithm. By utilizing weighted annuli to derive the SVF from hemispherical images, the distant light-source positions can be predefined and spread uniformly over the whole hemisphere, whereas another method applies a random set of light-source positions with a cosine-weighted distribution of sun altitude angles. The two methods give similar results over a large number of SVF images. However, when pixel-level variations are compared between an image generated using the new method presented in this paper and an image from the random method, anisotropic patterns occur. The absolute mean difference between the two methods is 0.002, ranging up to 0.040; the maximum difference can be as much as 0.122. Since the SVF is a geometrically derived parameter, the anisotropic errors created by the random method must be considered significant.
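As a rough illustration of the annulus-weighting idea (a minimal sketch, not the paper's implementation), the hemisphere can be discretized into annuli whose weights follow the cosine-weighted solid angle, so that a fully open sky sums to an SVF of 1. The sky_visible interface below is hypothetical; in practice the visible fraction per annulus would be read off a hemispherical shadow image.

```python
import numpy as np

def svf_weighted_annuli(sky_visible, n_annuli=36):
    """Sky view factor from the fraction of unobstructed sky per annulus.

    sky_visible(theta) -> fraction of azimuth directions at zenith
    angle theta (radians) with a clear view of the sky.
    """
    d_theta = (np.pi / 2) / n_annuli
    theta = (np.arange(n_annuli) + 0.5) * d_theta   # annulus midpoints
    # Cosine-weighted solid angle of each annulus: sin(2*theta)*d_theta;
    # these weights integrate to 1 over the full hemisphere.
    weights = np.sin(2.0 * theta) * d_theta
    visible = np.array([sky_visible(t) for t in theta])
    return float(np.sum(weights * visible))

print(svf_weighted_annuli(lambda theta: 1.0))   # fully open sky -> ~1.0
```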
Abstract:
Tropical cyclones (TCs) under different climate conditions in the Northern Hemisphere have been investigated with the Max Planck Institute (MPI) coupled (ECHAM5/MPIOM) and atmosphere-only (ECHAM5) climate models. The intensity and size of the TCs depend crucially on resolution, with higher wind speeds and smaller scales at the higher resolutions. The typical size of a TC, measured as the distance of the maximum wind speed from the center of the storm, is reduced by a factor of 2.3 from T63 to T319. The full three-dimensional structure of the storms becomes increasingly realistic as the resolution is increased. For the T63 resolution, three ensemble runs are explored for the period 1860 to 2100 using the IPCC SRES scenario A1B and evaluated for three 30-year periods at the end of the 19th, 20th and 21st centuries, respectively. While there is no significant change between the 19th and the 20th century, there is a considerable reduction, by some 20%, in the number of TCs in the 21st century, but no change in the number of the more intense storms. The reduction in the number of storms occurs in all regions. A single additional experiment at T213 resolution was run for the latter two 30-year periods. The T213 run is an atmosphere-only experiment using the transient sea surface temperatures (SSTs) of the T63 experiment. In this case too, there is a reduction of some 10% in the number of simulated TCs in the 21st century compared to the 20th century, but a marked increase in the number of intense storms: the number of storms with maximum wind speeds greater than 50 m s^-1 increases by a third. Most of the intensification takes place in the Eastern Pacific and in the Atlantic, where the number of storms also stays more or less the same. We identify two competing processes affecting TCs in a warmer climate. First, the increase in static stability and the reduced vertical circulation are suggested to contribute to the reduction in the number of storms. Second, the increases in temperature and water vapor provide more energy for the storms, so that when favorable conditions occur, the higher SST and higher specific humidity contribute to more intense storms. As the maximum intensity depends crucially on resolution, higher resolution is required for this effect to appear in full. The distribution of storms between different regions does not, to a first approximation, depend on the temperature itself but on the distribution of the SST anomalies and their influence on the atmospheric circulation. Two additional transient experiments at T319 resolution were run for 20 years at the end of the 20th and 21st centuries, respectively, using the same conditions as in the T213 experiments. The results are consistent with the T213 study: the total number of tropical cyclones was similar to the T213 experiment, but the storms were generally more intense, and the change from the 20th to the 21st century was likewise fewer TCs in total but more intense cyclones.
Abstract:
A number of recent experiments suggest that, at a given wetting speed, the dynamic contact angle formed by an advancing liquid-gas interface with a solid substrate depends on the flow field and geometry near the moving contact line. In the present work, this effect is investigated in the framework of an earlier-developed theory based on the fact that dynamic wetting is, by its very name, a process of formation of a new liquid-solid interface (a newly "wetted" solid surface) and hence should be considered not as a singular problem but as a particular case of a general class of flows with forming and/or disappearing interfaces. The results demonstrate that, in the flow configuration of curtain coating, where a liquid sheet ("curtain") impinges onto a moving solid substrate, the actual dynamic contact angle indeed depends not only on the wetting speed and the material constants of the contacting media, as in the so-called slip models, but also on the inlet velocity of the curtain, its height, and the angle between the falling curtain and the solid surface. In other words, for the same wetting speed the dynamic contact angle can be varied by manipulating the flow field and geometry near the moving contact line. The obtained results have important experimental implications: given that the dynamic contact angle is determined by the values of the surface tensions at the contact line, and hence depends on the distributions of the surface parameters along the interfaces, which can be influenced by the flow field, one can use the overall flow conditions and the contact angle as a macroscopic multiparametric signal-response pair that probes the dynamics of the liquid-solid interface. This approach would allow one to investigate experimentally such properties of the interface as, for example, its equation of state and the rheological properties involved in the interface's response to an external torque, and would help to measure its parameters, such as the coefficient of sliding friction, the surface-tension relaxation time, and so on.
Abstract:
This paper describes benchmark testing of six two-dimensional (2D) hydraulic models (DIVAST, DIVASTTVD, TUFLOW, JFLOW, TRENT and LISFLOOD-FP) in terms of their ability to simulate surface flows in a densely urbanised area. The models are applied to a 1.0 km × 0.4 km urban catchment within the city of Glasgow, Scotland, UK, and are used to simulate a flood event that occurred at this site on 30 July 2002. An identical numerical grid describing the underlying topography is constructed for each model, using a combination of airborne laser altimetry (LiDAR) fused with digital map data, and used to run a benchmark simulation. Two numerical experiments were then conducted to test the response of each model to topographic error and uncertainty over friction parameterisation. While all the models tested produce plausible results, subtle differences between particular groups of codes give considerable insight into both the practice and science of urban hydraulic modelling. In particular, the results show that the terrain data available from modern LiDAR systems are sufficiently accurate and resolved for simulating urban flows, but such data need to be fused with digital map data of building topology and land use to gain maximum benefit from the information contained therein. When such terrain data are available, uncertainty in friction parameters becomes a more dominant factor than topographic error for typical problems. The simulations also show that flows in urban environments are characterised by numerous transitions to supercritical flow and numerical shocks. However, the effects of these are localised and they do not appear to affect overall wave propagation. In contrast, inertia terms are shown to be important in this particular case, but the specific characteristics of the test site may mean that this does not hold more generally.
Abstract:
Applications such as neuroscience, telecommunication, online social networking, transport and retail trading give rise to connectivity patterns that change over time. In this work, we address the resulting need for network models and computational algorithms that deal with dynamic links. We introduce a new class of evolving range-dependent random graphs that gives a tractable framework for modelling and simulation. We develop a spectral algorithm for calibrating a set of edge ranges from a sequence of network snapshots and give a proof-of-principle illustration on some neuroscience data. We also show how the model can be used computationally and analytically to investigate the scenario where an evolutionary process, such as an epidemic, takes place on an evolving network. This allows us to study the cumulative effect of two distinct types of dynamics.
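For intuition, here is a minimal sketch of a static range-dependent random graph in the spirit of the model class named above: nodes are ordered on a line, and the probability of an edge decays geometrically with the range |i - j|. The parameters alpha and lam are illustrative; the paper's evolving version additionally correlates successive snapshots rather than drawing them independently.

```python
import numpy as np

def range_dependent_graph(n, alpha=0.9, lam=0.8, rng=None):
    """Adjacency matrix of a range-dependent random graph: an edge
    joins nodes i and j with probability alpha * lam**(|i-j| - 1),
    so short-range links dominate."""
    rng = rng or np.random.default_rng()
    a = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < alpha * lam ** (j - i - 1):
                a[i, j] = a[j, i] = 1
    return a

# Independent snapshots stand in here for a network evolving in time.
snapshots = [range_dependent_graph(50) for _ in range(10)]
```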
Abstract:
There is increasing concern about soil enrichment with K+ and subsequent potential losses following long-term application of poor-quality water to agricultural land. Models are increasingly being used for predicting or analyzing water flow and chemical transport in soils and groundwater. The convective-dispersive equation (CDE) and the convective log-normal transfer function (CLT) models were fitted to the potassium (K+) leaching data, and the two produced an equivalent goodness of fit. Simulated breakthrough curves for a range of CaCl2 concentrations, based on parameters estimated at 15 mmol l^-1 CaCl2, were characterised by an earlier peak position associated with higher K+ concentration as the CaCl2 concentration used in the leaching experiments decreased. In a second approach, the parameters estimated from the 15 mmol l^-1 CaCl2 solution were used for all other CaCl2 concentrations and the best value of the retardation factor (R) was optimised for each data set; this gave better predictions. With decreasing CaCl2 concentration, the required value of R is larger than that measured (except for 10 mmol l^-1 CaCl2) if the parameters estimated at 15 mmol l^-1 CaCl2 are used. Both models suffer from the fact that they need to be calibrated against a data set, and some of their parameters are not measurable and cannot be determined independently.
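For reference, a standard one-dimensional form of the CDE with a retardation factor R for a reactive solute such as K+ is shown below; the exact form fitted in the paper may include additional terms.

```latex
R \frac{\partial C}{\partial t}
  = D \frac{\partial^{2} C}{\partial x^{2}}
  - v \frac{\partial C}{\partial x}
```

Here C is the solute concentration, D the dispersion coefficient, v the pore-water velocity, and R the retardation factor that is optimised in the second approach described above.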
Observations of the depth of ice particle evaporation beneath frontal cloud to improve NWP modelling
Abstract:
The evaporation (sublimation) of ice particles beneath frontal ice cloud can provide a significant source of diabatic cooling which can lead to enhanced slantwise descent below the frontal surface. The strength and vertical extent of the cooling play a role in determining the dynamic response of the atmosphere, and an adequate representation is required in numerical weather-prediction (NWP) models for accurate forecasts of frontal dynamics. In this paper, data from a vertically pointing 94 GHz radar are used to determine the characteristic depth-scale of ice particle sublimation beneath frontal ice cloud. A statistical comparison is made with equivalent data extracted from the NWP mesoscale model operational at the Met Office, defining the evaporation depth-scale as the distance for the ice water content to fall to 10% of its peak value in the cloud. The results show that the depth of the ice evaporation zone derived from observations is less than 1 km for 90% of the time. The model significantly overestimates the sublimation depth-scales by a factor of between two and three, and underestimates the local ice water content by a factor of between two and four. Consequently the results suggest the model significantly underestimates the strength of the evaporative cooling, with implications for the prediction of frontal dynamics. A number of reasons for the model discrepancy are suggested. A comparison with radiosonde relative humidity data suggests part of the overestimation in evaporation depth may be due to a high RH bias in the dry slot beneath the frontal cloud, but other possible reasons include poor vertical resolution and deficiencies in the evaporation rate or ice particle fall-speed parametrizations.
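The depth-scale definition quoted above is straightforward to compute from a vertical profile. A purely illustrative helper (not the paper's radar processing chain, and using a synthetic Gaussian profile) might look like:

```python
import numpy as np

def evaporation_depth_scale(height_m, iwc):
    """Depth below the ice water content (IWC) peak at which the IWC
    first falls to 10% of its peak value; profiles run bottom-up."""
    iwc = np.asarray(iwc, dtype=float)
    k_peak = int(np.argmax(iwc))
    below = iwc[:k_peak + 1]                       # levels at/below the peak
    under = np.nonzero(below <= 0.1 * iwc[k_peak])[0]
    if under.size == 0:
        return np.nan                              # ice reaches the ground
    return height_m[k_peak] - height_m[under[-1]]

z = np.linspace(0.0, 8000.0, 81)                   # height grid, m
iwc = np.exp(-((z - 5000.0) / 800.0) ** 2)         # synthetic profile
print(evaporation_depth_scale(z, iwc))             # ~1300 m
```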
Abstract:
Ice clouds are an important yet largely unvalidated component of weather forecasting and climate models, but radar offers the potential to provide the necessary data to evaluate them. In this paper, coordinated aircraft in situ measurements and scans by a 3-GHz radar are first presented, demonstrating that, for stratiform midlatitude ice clouds, radar reflectivity in the Rayleigh-scattering regime may be reliably calculated from aircraft size spectra if the "Brown and Francis" mass-size relationship is used. The comparisons spanned radar reflectivity values from -15 to +20 dBZ, ice water contents (IWCs) from 0.01 to 0.4 g m^-3, and median volumetric diameters between 0.2 and 3 mm. In mixed-phase conditions the agreement is much poorer because of the higher-density ice particles present. A large midlatitude aircraft dataset is then used to derive expressions that relate radar reflectivity and temperature to ice water content and visible extinction coefficient. The analysis is an advance over previous work in several ways: the retrievals vary smoothly with both input parameters, different relationships are derived for the common radar frequencies of 3, 35, and 94 GHz, and the problem of retrieving the long-term mean and the horizontal variance of ice cloud parameters is considered separately. It is shown that the dependence on temperature arises because of the temperature dependence of the number concentration "intercept parameter" rather than mean particle size. A comparison is presented of ice water content derived from scanning 3-GHz radar with the values held in the Met Office mesoscale forecast model, for eight precipitating cases spanning 39 h over Southern England. It is found that the model predicted mean IWC to within 10% of the observations at temperatures between -30°C and -10°C but tended to underestimate it by around a factor of 2 at colder temperatures.
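A sketch of the forward calculation described in the first part of the abstract, with the mass-size coefficients a and b left as placeholders (consult the Brown and Francis paper for the published values and the size definition they apply to): each particle is replaced by a solid-ice sphere of equal mass, whose equivalent diameter enters the usual Rayleigh sum of D^6. The dielectric factors below are commonly quoted values, not ones taken from this paper.

```python
import numpy as np

RHO_ICE = 917.0                   # kg m^-3, solid ice
K2_ICE, K2_WATER = 0.174, 0.93    # commonly quoted dielectric factors

def reflectivity_dbz(n_d, d, a, b):
    """Rayleigh-regime reflectivity from an ice particle size spectrum.

    n_d : concentration density [m^-4] at maximum dimensions d [m]
    a, b: mass-size coefficients, m = a * d**b [kg]  (placeholders)
    """
    mass = a * d ** b
    d_eq = (6.0 * mass / (np.pi * RHO_ICE)) ** (1.0 / 3.0)  # ice-sphere diameter, m
    # Integrate n(D) * D_eq^6 over the spectrum; expressing D_eq in mm
    # gives Z in the conventional units of mm^6 m^-3.
    z_lin = (K2_ICE / K2_WATER) * np.sum(n_d * (d_eq * 1e3) ** 6 * np.gradient(d))
    return 10.0 * np.log10(z_lin)
```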
Abstract:
Space weather effects on technological systems originate with energy carried from the Sun to the terrestrial environment by the solar wind. In this study, we present results of modeling of solar corona-heliosphere processes to predict solar wind conditions at the L1 Lagrangian point upstream of Earth. In particular, we calculate performance metrics for (1) empirical, (2) hybrid empirical/physics-based, and (3) full physics-based coupled corona-heliosphere models over an 8-year period (1995–2002). L1 measurements of the radial solar wind speed are the primary basis for validation of the coronal and heliosphere models studied, though other solar wind parameters are also considered. The models are from the Center for Integrated Space-Weather Modeling (CISM), which has developed a coupled model of the whole Sun-to-Earth system, from the solar photosphere to the terrestrial thermosphere. Simple point-by-point analysis techniques, such as mean-square error and correlation coefficients, indicate that the empirical corona-heliosphere model currently gives the best forecast of solar wind speed at 1 AU. A more detailed analysis shows that errors in the physics-based models are predominantly the result of small timing offsets to solar wind structures and that the large-scale features of the solar wind are actually well modeled. We suggest that additional "tuning" of the coupling between the coronal and heliosphere models could lead to a significant improvement of their accuracy. Furthermore, we note that the physics-based models accurately capture dynamic effects at solar wind stream interaction regions, such as magnetic field compression, flow deflection, and density buildup, which the empirical scheme cannot.
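The finding that point-by-point metrics are dominated by small timing offsets is easy to reproduce with a toy series (all numbers below are illustrative, not CISM output): a forecast with perfect structure but a three-day shift scores a large MSE while remaining visibly "right".

```python
import numpy as np

t = np.arange(400.0)                                    # days
# Synthetic recurring fast streams with a ~27-day solar rotation period.
obs = 450.0 + 200.0 * np.clip(np.sin(2 * np.pi * t / 27.0), 0, None)
model = np.roll(obs, 3)                                 # 3-day timing error only

mse = np.mean((model - obs) ** 2)
corr = np.corrcoef(obs, model)[0, 1]
print(f"MSE = {mse:.0f} (km/s)^2, r = {corr:.2f}")
```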
Abstract:
To test for magnetic flux buildup in the heliosphere from coronal mass ejections (CMEs), we simulate heliospheric flux as a constant background open flux with a time-varying interplanetary CME (ICME) contribution. As flux carried by ejecta can only contribute to the heliospheric flux budget while it remains closed, the ICME flux opening rate is an important factor. Two separate forms for the ICME flux opening rate are considered: (1) constant and (2) exponentially decaying with time. Coronagraph observations are used to determine the CME occurrence rates, while in situ observations are used to estimate the magnetic flux content of a typical ICME. Both static equilibrium and dynamic simulations, using the constant and exponential ICME flux opening models, require flux opening timescales of ∼50 days in order to match the observed doubling in the magnetic field intensity at 1 AU over the solar cycle. Such timescales are equivalent to a change in the ICME closed flux of only ∼7–12% between 1 and 5 AU, consistent with CSE signatures; no flux buildup results. The dynamic simulation yields a solar cycle flux variation with high variability that matches the overall variability of the observed magnetic field intensity remarkably well, including the double peak forming the Gnevyshev gap.
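A minimal sketch of the exponential variant of this flux budget, assuming illustrative flux units and CME rates (the paper's calibration uses observed occurrence rates and per-ICME flux content): each ICME adds a parcel of closed flux that opens on a timescale tau of about 50 days.

```python
import numpy as np

def heliospheric_flux(t_days, cme_times, phi_icme=1.0, phi_open=10.0, tau=50.0):
    """Constant open background plus the still-closed flux of each ICME,
    with exponential flux opening: a CME at t0 contributes
    phi_icme * exp(-(t - t0) / tau) for t >= t0.  Units illustrative."""
    total = np.full_like(t_days, phi_open, dtype=float)
    for t0 in cme_times:
        dt = t_days - t0
        total += np.where(dt >= 0.0, phi_icme * np.exp(-dt / tau), 0.0)
    return total

t = np.arange(0.0, 4000.0)                          # ~ a solar cycle, days
rate = 0.5 + 2.5 * np.sin(np.pi * t / 4000.0) ** 2  # CME rate per day (toy)
rng = np.random.default_rng(0)
cme_times = np.repeat(t, rng.poisson(rate))         # Poisson launch times
flux = heliospheric_flux(t, cme_times)              # rises toward solar maximum
```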
Abstract:
Nonlinear adjustment toward long-run price equilibrium relationships in the sugar-ethanol-oil nexus in Brazil is examined. We develop generalized bivariate error correction models that allow for cointegration between sugar, ethanol, and oil prices, where dynamic adjustments are potentially nonlinear functions of the disequilibrium errors. A range of models are estimated using Bayesian Markov chain Monte Carlo (MCMC) algorithms and compared using Bayesian model selection methods. The results suggest that the long-run driver of Brazilian sugar prices is the oil price and that there are nonlinearities in the adjustment of sugar and ethanol prices to the oil price but linear adjustment between ethanol and sugar prices.
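As a stylized example of this model class (a threshold specification; the paper's generalized functional forms may differ), the sugar-oil pair could be written as:

```latex
e_t = p^{\mathrm{sugar}}_t - \beta_0 - \beta_1\, p^{\mathrm{oil}}_t ,
\qquad
\Delta p^{\mathrm{sugar}}_t
  = \alpha_1\, e_{t-1}\,\mathbf{1}\{e_{t-1} \le \gamma\}
  + \alpha_2\, e_{t-1}\,\mathbf{1}\{e_{t-1} > \gamma\}
  + \varepsilon_t
```

so that the speed of adjustment toward the long-run relation depends on the sign and size of the disequilibrium error e_{t-1}.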
Abstract:
Variations in demographic rates due to differential resource allocation between individuals are important considerations in the development of accurate population dynamic models. Systematic harvesting can alter age structure and/or reduce population density, conferring indirect positive benefits on the source population as a result of a consequent redistribution of resources between the remaining individuals. Independently of effects mediated through changes in density and competition, demographic rates can also be influenced by within-individual competition for resources. Harvesting dependent life stages can reduce an individual's current reproductive costs, allowing increased investment in its future fecundity and survival. Although such changes in demographic rates are well known, there has been little exploration of the potential impact on population dynamics. We use empirical data collected from a successfully reintroduced population of the Mauritius kestrel Falco punctatus to explore the population consequences of manipulating reproductive effort through harvesting. Consequent increases in an individual's future fecundity and survival allow source populations to withstand longer and more intensive harvesting regimes without being exposed to an increase in extinction risk, increasing maximum sustainable yields. These effects may also buffer populations against the impacts of stochastic events, but directional shifts in environmental conditions that increase reproductive costs may have detrimental population-level effects.
Abstract:
If the fundamental precepts of Farming Systems Research were taken literally, it would imply that 'unique' solutions should be sought for each farm. This is an unrealistic expectation, but it has led to the idea of a recommendation domain, which implies creating a taxonomy of farms in order to increase the general applicability of recommendations. Mathematical programming models are an established means of generating recommended solutions, but to be effective such models have to be constructed for 'truly' typical or representative situations. Multivariate statistical techniques provide a means of creating the required typologies, particularly when an exhaustive database is available. This paper illustrates the application of this methodology in two different studies that shared the common purpose of identifying types of farming systems in their respective study areas. The issues related to the use of factor and cluster analyses for farm typification, prior to building representative mathematical programming models for Chile and Pakistan, are highlighted.
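A compact sketch of the typification pipeline described above, substituting PCA for factor analysis and k-means for the clustering step (X is a hypothetical farm-survey matrix; the studies' variable sets and cluster counts differ):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 12))     # 200 farms x 12 survey variables (synthetic)

# Reduce correlated survey variables to a few factors, then group farms
# so that one representative programming model is built per cluster.
scores = PCA(n_components=3).fit_transform(StandardScaler().fit_transform(X))
farm_type = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(scores)
```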
Abstract:
Experimental data for the title reaction were modeled using master equation (ME)/RRKM methods based on the MultiWell suite of programs. The starting point for the exercise was the empirical fitting provided by the NASA (Sander, S. P.; Finlayson-Pitts, B. J.; Friedl, R. R.; Golden, D. M.; Huie, R. E.; Kolb, C. E.; Kurylo, M. J.; Molina, M. J.; Moortgat, G. K.; Orkin, V. L.; Ravishankara, A. R. Chemical Kinetics and Photochemical Data for Use in Atmospheric Studies, Evaluation Number 15; Jet Propulsion Laboratory: Pasadena, CA, 2006) [1] and IUPAC (Atkinson, R.; Baulch, D. L.; Cox, R. A.; Hampson, R. F., Jr.; Kerr, J. A.; Rossi, M. J.; Troe, J. J. Phys. Chem. Ref. Data 2000, 29, 167) [2] data evaluation panels, which represents the data in the experimental pressure ranges rather well. Despite the availability of quite reliable parameters for these calculations (molecular vibrational frequencies (Parthiban, S.; Lee, T. J. J. Chem. Phys. 2000, 113, 145) [3] and a value (Orlando, J. J.; Tyndall, G. S. J. Phys. Chem. 1996, 100, 19398) [4] of the bond dissociation energy, D_298(BrO-NO2) = 118 kJ mol^-1, corresponding to ΔH°_0 = 114.3 kJ mol^-1 at 0 K) and the use of RRKM/ME methods, fitting calculations to the reported data or the empirical equations was anything but straightforward. Using these molecular parameters resulted in a discrepancy between the calculations and the database of rate constants of a factor of ca. 4 at, or close to, the low-pressure limit. Agreement between calculation and experiment could be achieved in two ways, either by increasing ΔH°_0 to an unrealistically high value (149.3 kJ mol^-1) or by increasing