975 results for Average models
Abstract:
Sea surface temperatures and sea-ice extent are the most critical variables for evaluating the paleoceanographic evolution of the Southern Ocean in relation to the development of the global carbon cycle, atmospheric CO2 variability and ocean-atmosphere circulation. In contrast to the Atlantic and Indian sectors, the Pacific sector of the Southern Ocean has so far been insufficiently investigated. To fill this gap we present diatom-based estimates of summer sea surface temperature (SSST) and winter sea-ice concentration (WSI) from 17 sites in the polar South Pacific for the Last Glacial Maximum (LGM) at the EPILOG time slice (19,000-23,000 cal. years BP). The statistical methods applied are the Imbrie and Kipp Method (IKM) for temperature and the Modern Analog Technique (MAT) for sea-ice concentration. Our data display a distinct LGM east-west differentiation in SSST and WSI, with steeper latitudinal temperature gradients and a winter sea-ice edge located consistently north of the Pacific-Antarctic Ridge in the Ross Sea sector. In the eastern sector of our study area, which is governed by the Amundsen Abyssal Plain, the estimates yield weaker latitudinal SSST gradients together with a variably extended winter sea-ice field. In this sector, sea ice at its maximum LGM expansion may sporadically have reached the area of the present Subantarctic Front. This pattern points to topographic forcing as the major control on the location of the frontal system and on sea-ice extent in the western Pacific sector, whereas atmospheric conditions such as the Southern Annular Mode and ENSO affected the oceanographic conditions in the eastern Pacific sector. Although it is difficult to determine the location and physical nature of the frontal systems that separated the glacial Southern Ocean water masses into different zones, we found a distinct temperature gradient at the latitudes straddled by the modern Southern Subtropical Front.
Considering that glacial temperatures north of this zone are similar to modern ones, we suggest that it represents the Glacial Southern Subtropical Front (GSSTF), which delimits to its north the zone of strongest glacial SSST cooling (>4 K). The southern boundary of the zone of maximum cooling is close to the glacial 4°C isotherm. This isotherm, which is in the range of SSST at the modern Antarctic Polar Front (APF), represents a circum-Antarctic feature and marks the northern edge of the glacial Antarctic Circumpolar Current (ACC). We also assume that a glacial front, comparable to the modern Southern Antarctic Circumpolar Current Front (SACCF), was established at the northern average winter sea-ice edge; during the glacial, this front would have been located in the area of the modern APF. The northward deflection of colder-than-modern surface waters along the South American continent led to a significant cooling of the glacial Humboldt Current surface waters (4-8 K), which affected temperature regimes as far north as tropical latitudes. The glacial reduction of ACC temperatures may also have produced the significant cooling in the Atlantic and Indian sectors of the Southern Ocean, thus enhancing the thermal differentiation of the Southern Ocean and Antarctic continental cooling. Comparison with numerical simulations of last-glacial temperature and sea ice shows that the majority of modern models overestimate summer and winter sea-ice cover and that only a few models reproduce our temperature data reasonably well.
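As a rough illustration of the Modern Analog Technique (MAT) named above, the sketch below averages the environmental value of the k modern samples most similar to a fossil assemblage. All assemblage data and SSST values are hypothetical, and the squared chord distance is only one commonly used dissimilarity measure; this is not the authors' actual implementation.

```python
import math

def squared_chord_distance(a, b):
    # Dissimilarity between two assemblages given as relative abundances:
    # sum over taxa of (sqrt(p) - sqrt(q))^2.
    return sum((math.sqrt(p) - math.sqrt(q)) ** 2 for p, q in zip(a, b))

def mat_estimate(fossil, modern_assemblages, modern_values, k=3):
    """Average the environmental value of the k closest modern analogs."""
    ranked = sorted(range(len(modern_assemblages)),
                    key=lambda i: squared_chord_distance(fossil, modern_assemblages[i]))
    return sum(modern_values[i] for i in ranked[:k]) / k

# Hypothetical relative abundances of 3 diatom taxa at 5 modern sites,
# with the summer SSTs (degrees C) observed at those sites.
modern = [[0.7, 0.2, 0.1], [0.6, 0.3, 0.1], [0.1, 0.3, 0.6],
          [0.2, 0.2, 0.6], [0.5, 0.4, 0.1]]
ssst = [2.0, 3.0, 8.0, 9.0, 4.0]
fossil = [0.65, 0.25, 0.10]
estimate = mat_estimate(fossil, modern, ssst, k=3)
```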
Abstract:
The province of Salta is located in northwestern Argentina, on the border with Bolivia, Chile and Paraguay. Its capital, the city of Salta, concentrates half of the inhabitants of the province and has grown to about 600,000 inhabitants from the small Spanish town founded in 1583. The city is crossed by the Arenales River, which descends from the nearby mountains to the north and serves both as a source of water and as the outlet for sewers. With the city's present growth, the river has become a focus of infection and of remarkable unhealthiness. It is necessary to undertake a plan for the recovery of the river, directed at achieving the well-being of the community and improving its quality of life. The fundamental idea of the plan is to achieve an ordering of the river basin and an integral management of the channel and its surroundings, including cleaning it out. The improvement of water quality, the healthiness of the surroundings and the improvement of the environment must go hand in hand with the development of sport, relaxation and tourism activities, the establishment of breeding grounds, kitchen gardens and micro-enterprises with clean production, and other actions that allow society to benefit from the river, that being a basic factor for its care and sustainable use. The present pollution is organic, chemical, industrial and domestic, caused by the dumping of refuse and sewer effluents, and it affects not only the flora and small fauna, destroying the biodiversity, but also the health of the people living on the river's margins. Besides hydric and environmental cleaning and the prevention of floods, the plan will need to consider the planning of aggregate extraction, infrastructure and bank-consolidation works, and the arrangement of the whole river basin. It will be necessary to consider public intervention at the state, provincial and local levels, as well as private intervention.
The model includes a sub-model for selecting the entity best suited to serve as the instrument for reaching the proposed objectives, responding to the social, environmental and economic requirements. For this the authors have used multi-criteria decision methods to qualify and select alternatives and to program their implementation. The model contemplates short-, medium- and long-term actions. Together they constitute a Pareto-optimal alternative that secures the orderly, integral and suitable management of the Arenales River basin, focusing on its passage through the city of Salta.
Abstract:
Drought is a natural phenomenon originating in a decrease of rainfall relative to the average that results in insufficient water availability for some activities. Increasing pressure on water resources has aggravated the impacts of drought, just as water scarcity has become an additional problem in many parts of the planet. Countries with a Mediterranean climate are especially vulnerable to drought, and their water-dependent economic growth leads to significant impacts. To reduce these negative impacts it is necessary to reduce drought vulnerability, which requires more efficient management and better preparedness. For this it is very important to have information about the impacts and the scope of this natural phenomenon. This research addresses the issue of drought impacts: it characterizes all the impact types that may occur and compares their effects in two countries (Spain and Chile). Impact-attribution models are proposed to measure the economic losses caused by the lack of water. The proposed models are econometric and include the key variables for evaluating the impacts: a variable related to water availability, plus others such as crop prices and time trends to distinguish the effects caused by other sources of variation. These models are adapted to each part of the study. First, the direct impacts on irrigation are measured, and a source of randomness is introduced into the model to assess the economic risk of drought. This is performed at two geographic levels (province and Agricultural Demand Unit); at the latter, not only the water-supply risk but also the water-demand risk is considered. The introduction of the risk perspective turns the model into an economic risk-management tool that can be used in planning strategies. An extension of the econometric model is then developed to measure the impacts on the agricultural sector (direct impacts on irrigated and rainfed production and indirect impacts on the agri-food industry); for this purpose the model is adapted and concatenated elasticities between the lack of water and the secondary impacts are estimated. Finally, an econometric model is proposed for the Chilean case study to evaluate the impact of droughts, especially those caused by the El Niño Southern Oscillation. The overall results show the value of more precise knowledge of the impacts, since the damage actually produced by the lack of water is often overestimated. The indirect impacts of drought confirm its scope, while they are also diluted as we approach the macroeconomic level. In the case of Chile, the country's different management shows the role played by the El Niño and La Niña phenomena in the prices of the country's main crops and in the growth of the sector. To reduce losses and their scope, more mitigation measures focused on efficient resource management are needed. In addition, prevention must play an important role in reducing the risks that may be suffered in situations of scarcity.
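The concatenated elasticities mentioned in the abstract chain a water-to-crop elasticity with a crop-to-industry elasticity to trace indirect impacts; a minimal sketch with purely hypothetical values:

```python
def chained_elasticity(e_water_to_crop, e_crop_to_industry):
    """Concatenated elasticity: % change in agri-food industry output per
    % change in water availability, via the crop production channel."""
    return e_water_to_crop * e_crop_to_industry

# Hypothetical values: a 1% water shortfall cuts irrigated output by 0.6%,
# and a 1% output drop cuts agri-food industry output by 0.3%.
indirect = chained_elasticity(0.6, 0.3)
```

The smaller product illustrates the dilution of impacts reported as one moves toward the macroeconomic level.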
Abstract:
Many cities in Europe have difficulty meeting the air quality standards set by European legislation, most particularly the annual mean limit value for NO2. Road transport is often the main source of air pollution in urban areas, so there is a growing need to estimate current and future traffic emissions as accurately as possible. As a consequence, a number of specific emission models and emission-factor databases have been developed recently. They present important methodological differences, may result in largely diverging emission figures, and may thus lead to different policy recommendations. This study compares two approaches to estimating road traffic emissions in Madrid (Spain): the COmputer Programme to calculate Emissions from Road Transport (COPERT4 v.8.1) and the Handbook Emission Factors for Road Transport (HBEFA v.3.1), representative of the ‘average-speed’ and ‘traffic situation’ model types, respectively. The input information (e.g. fleet composition, vehicle kilometres travelled, traffic intensity, road type) was provided by the traffic model developed by the Madrid City Council, along with observations from field campaigns. Hourly emissions were computed for nearly 15 000 road segments distributed over 9 management areas covering the city of Madrid and its surroundings. Total annual NOX emissions predicted by HBEFA were 21% higher than those of COPERT. The discrepancies for NO2 were lower (13%) because the resulting average NO2/NOX ratios are lower for HBEFA. The largest differences relate to diesel vehicle emissions under “stop & go” traffic conditions, very common on the distributor/secondary roads of the Madrid metropolitan area. To assess the representativeness of these results, the resulting emissions were integrated into an urban-scale inventory used to drive mesoscale air quality simulations with the Community Multiscale Air Quality (CMAQ) modelling system (1 km² resolution).
Modelled NO2 concentrations were compared with observations through a series of statistics. Although there are no remarkable differences between the two model runs, the results suggest that HBEFA may overestimate traffic emissions. The results are, however, strongly influenced by methodological issues and limitations of the traffic model. This study was useful for providing a first alternative estimate to the official emission inventory in Madrid and for identifying the main features of the traffic model that should be improved to support the application of an emission system based on “real world” emission factors.
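An ‘average-speed’ emission estimate of the COPERT type can be sketched as flow × segment length × EF(mean speed), with the emission factor a polynomial in mean speed. The coefficients below are invented placeholders for illustration, not actual COPERT factors:

```python
def ef_average_speed(speed_kmh, a=0.6, b=-0.011, c=0.00008):
    # Illustrative average-speed emission-factor curve (g/km).
    # The coefficients a, b, c are invented placeholders, not COPERT values.
    return a + b * speed_kmh + c * speed_kmh ** 2

def segment_emissions_g_h(flow_veh_h, length_km, speed_kmh):
    """Hourly emissions (g/h) for one road segment: flow * length * EF(mean speed)."""
    return flow_veh_h * length_km * ef_average_speed(speed_kmh)

# Hypothetical segment: 1200 veh/h over 0.5 km at a mean speed of 30 km/h.
total = segment_emissions_g_h(1200, 0.5, 30)
```

A ‘traffic situation’ model such as HBEFA instead assigns factors per discrete driving condition (e.g. “stop & go”), which is where the study found the largest divergence.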
Abstract:
This paper presents a new methodology for building parametric models that estimate global solar irradiation adjusted to specific on-site characteristics, based on the evaluation of variable importance. Those variables highly correlated with solar irradiation at a site are implemented in the model, so different models may be proposed under different climates. The methodology is applied to a case study in the La Rioja region (northern Spain). A new model is proposed and evaluated for stability and accuracy against a review of twenty-two existing parametric models based on temperatures and rainfall, using seventeen meteorological stations in La Rioja. Model evaluation is based on bootstrapping, which achieves a high level of confidence in model calibration and validation from short time series (in this case five years, from 2007 to 2011). The proposed model improves on the estimates of the other twenty-two models, with an average mean absolute error (MAE) of 2.195 MJ/m² day and an average confidence interval width (95% C.I., n=100) of 0.261 MJ/m² day. 41.65% of the daily residuals in the case of SIAR, and 20.12% in that of SOS Rioja, fall within the uncertainty tolerance of the pyranometers of the two networks (10% and 5%, respectively). Relative differences between measured and estimated irradiation on an annual cumulative basis are below 4.82%. Thus, the proposed model may be useful for estimating annual sums of global solar irradiation, with insignificant differences from pyranometer measurements.
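The bootstrapped MAE evaluation described above can be sketched as follows. The residual values are hypothetical, and the percentile interval is only one simple bootstrap variant, not necessarily the authors' exact procedure:

```python
import random

def bootstrap_mae(residuals, n_boot=100, seed=42):
    """Bootstrap the mean absolute error to get a confidence interval
    from a short residual series."""
    rng = random.Random(seed)
    maes = sorted(
        sum(abs(rng.choice(residuals)) for _ in residuals) / len(residuals)
        for _ in range(n_boot)
    )
    mean_mae = sum(maes) / n_boot
    # Simple 95% percentile interval over the bootstrap replicates.
    ci = (maes[int(0.025 * n_boot)], maes[int(0.975 * n_boot) - 1])
    return mean_mae, ci

# Hypothetical daily residuals (MJ/m2 day) of an irradiation model.
residuals = [1.2, -0.8, 2.5, -1.9, 0.4, -2.2, 1.1, -0.6, 3.0, -1.4]
mean_mae, ci = bootstrap_mae(residuals)
```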
Abstract:
Assessing wind conditions on complex terrain has become a hard task as terrain complexity increases. That is why there is a need to extrapolate in a reliable manner some wind parameters that determine wind farms viability such as annual average wind speed at all hub heights as well as turbulence intensities. The development of these tasks began in the early 90´s with the widely used linear model WAsP and WAsP Engineering especially designed for simple terrain with remarkable results on them but not so good on complex orographies. Simultaneously non-linearized Navier Stokes solvers have been rapidly developed in the last decade through CFD (Computational Fluid Dynamics) codes allowing simulating atmospheric boundary layer flows over steep complex terrain more accurately reducing uncertainties. This paper describes the features of these models by validating them through meteorological masts installed in a highly complex terrain. The study compares the results of the mentioned models in terms of wind speed and turbulence intensity.
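For context on the extrapolation task named above: over simple terrain and neutral conditions, mean wind speed is often extrapolated to hub height with the logarithmic wind profile, a far simpler technique than the linear or CFD models the paper compares. All values below are hypothetical:

```python
import math

def log_law_speed(u_ref, z_ref, z_hub, z0):
    """Neutral logarithmic wind profile:
    u(z) = u_ref * ln(z / z0) / ln(z_ref / z0)."""
    return u_ref * math.log(z_hub / z0) / math.log(z_ref / z0)

# Hypothetical: 6 m/s measured at 40 m, roughness length 0.05 m, hub at 100 m.
u_hub = log_law_speed(6.0, 40.0, 100.0, 0.05)
```

Over steep complex terrain this assumption breaks down, which is exactly why CFD approaches are pursued.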
Abstract:
This article is a U.S. government work and is not subject to copyright in the United States. Potential consequences of climate change on crop production can be studied using mechanistic crop simulation models. While a broad variety of maize simulation models exist, it is not known whether different models diverge on grain yield responses to changes in climatic factors, or whether they agree in their general trends related to phenology, growth, and yield. With the goal of analyzing the sensitivity of simulated yields to changes in temperature and atmospheric carbon dioxide concentration [CO2], we present the largest maize crop model intercomparison to date, including 23 different models. These models were evaluated at four locations representing a wide range of maize production conditions in the world: Lusignan (France), Ames (USA), Rio Verde (Brazil) and Morogoro (Tanzania). While individual models differed considerably in absolute yield simulation at the four sites, an ensemble of a minimum number of models was able to simulate absolute yields accurately at all four, even with limited calibration data, suggesting that using an ensemble of models has merit. Temperature increase had a strong negative influence on modeled yield, with a response of roughly 0.5 Mg ha⁻¹ per °C. Doubling [CO2] from 360 to 720 µmol mol⁻¹ increased grain yield by 7.5% on average across models and sites. That would make temperature the main factor altering maize yields at the end of this century. Furthermore, there was a large uncertainty in the yield response to [CO2] among models. Model responses to temperature and [CO2] did not differ between simulations with low and with high levels of calibration information.
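The ensemble-average sensitivities reported above (roughly -0.5 Mg ha⁻¹ per °C of warming, +7.5% for doubled [CO2]) can be combined in a back-of-the-envelope adjustment. This linear sketch is ours, not one of the 23 intercompared models:

```python
def adjusted_yield(base_yield_mg_ha, delta_t_c, co2_doubled=False):
    """Back-of-the-envelope maize yield under warming and CO2 doubling,
    using the ensemble-average sensitivities from the abstract."""
    y = base_yield_mg_ha - 0.5 * delta_t_c   # about -0.5 Mg/ha per degree C
    if co2_doubled:
        y *= 1.075                           # about +7.5% for doubled [CO2]
    return y

# Hypothetical 10 Mg/ha baseline under 3 degrees C of warming with doubled CO2.
y = adjusted_yield(10.0, 3.0, co2_doubled=True)
```

Even with the CO2 benefit, the temperature term dominates, which is the abstract's point.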
Abstract:
This work proposes an automatic methodology for modeling complex systems. Our methodology is based on the combination of Grammatical Evolution and classical regression to obtain an optimal set of features that form part of a linear, convex model. This technique provides both feature engineering and symbolic regression in order to infer accurate models without requiring effort or designer expertise. As advanced Cloud services become mainstream, the contribution of data centers to the overall power consumption of modern cities is growing dramatically. These facilities consume 10 to 100 times more power per square foot than typical office buildings. Modeling the power consumption of these infrastructures is crucial for anticipating the effects of aggressive optimization policies, but accurate and fast power modeling is a complex challenge for high-end servers that analytical approaches do not yet satisfy. In this case study, our methodology minimizes the error in power prediction. The approach has been tested on real Cloud applications, resulting in an average error in power estimation of 3.98%. Our work improves the possibilities of deriving energy-efficient policies in Cloud data centers and is applicable to other computing environments with similar characteristics.
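An average percentage error such as the 3.98% figure above is typically computed as a MAPE over measured versus predicted values; a minimal sketch with hypothetical server power readings (the abstract does not specify its exact error metric):

```python
def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs(t - p) / abs(t)
                       for t, p in zip(y_true, y_pred)) / len(y_true)

# Hypothetical measured vs. predicted server power (watts).
measured = [120.0, 150.0, 180.0, 210.0]
predicted = [118.0, 155.0, 176.0, 215.0]
err = mape(measured, predicted)
```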
Abstract:
The present study explores a “hydrophobic” energy function for folding simulations of the protein lattice model. The contribution of each monomer to the conformational energy is the product of its “hydrophobicity” and the number of contacts it makes, i.e., E(h⃗, c⃗) = −Σ_{i=1}^{N} c_i h_i = −(h⃗·c⃗) is the negative scalar product between two vectors in N-dimensional Cartesian space: h⃗ = (h_1, … , h_N), which represents monomer hydrophobicities and is sequence-dependent; and c⃗ = (c_1, … , c_N), which represents the number of contacts made by each monomer and is conformation-dependent. A simple theoretical analysis shows that restrictions are imposed concomitantly on both sequences and native structures if the stability criterion for protein-like behavior is to be satisfied. Given a conformation with vector c⃗, the best sequence is a vector h⃗ along the direction upon which the projection of c⃗ − c̄⃗ is maximal, where c̄⃗ is the diagonal vector with components equal to c̄, the average number of contacts per monomer in the unfolded state. Best native conformations are suggested to be not maximally compact, as assumed in many studies, but the ones with the largest variance of contacts among their monomers, i.e., with monomers tending to occupy completely buried or completely exposed positions. This inside/outside segregation is reflected in an apolar/polar distribution on the corresponding sequence. Monte Carlo simulations in two dimensions corroborate this general scheme. Sequences targeted to conformations with large contact variances folded cooperatively, with the thermodynamics of a two-state transition. Sequences targeted to maximally compact conformations, which have lower contact variance, were found either to have degenerate ground states or to fold with much lower cooperativity.
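The energy function E = −(h⃗·c⃗) and the contact variance singled out above are straightforward to compute; a small sketch with a hypothetical 6-monomer chain (the hydrophobicities and contact counts are made up for illustration):

```python
def conformational_energy(h, c):
    """E(h, c) = -(h . c): negative scalar product of per-monomer
    hydrophobicities and contact counts."""
    return -sum(hi * ci for hi, ci in zip(h, c))

def contact_variance(c):
    """Variance of the number of contacts among monomers; conformations
    with larger variance are the suggested best native structures."""
    mean = sum(c) / len(c)
    return sum((ci - mean) ** 2 for ci in c) / len(c)

# Hypothetical 6-monomer chain: hydrophobicities and contact counts,
# with hydrophobic monomers buried (many contacts) and polar ones exposed.
h = [1.0, 0.2, 0.9, 0.1, 0.8, 0.3]
c = [3, 0, 3, 0, 2, 1]
energy = conformational_energy(h, c)
variance = contact_variance(c)
```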
Abstract:
The fact that fast-oscillating homogeneous scalar fields behave on average as perfect fluids, together with their intrinsic isotropy, has made these models very fruitful in cosmology. In this work we analyse the dynamics of perturbations in these theories assuming general power-law potentials V(ϕ) = λ|ϕ|^n / n. At leading order in the wavenumber expansion, a simple expression for the effective sound speed of perturbations is obtained, c_eff² = ω = (n − 2)/(n + 2), with ω the effective equation of state. We also obtain the first-order correction in k²/ω_eff², valid when the wavenumber k of the perturbations is much smaller than the background oscillation frequency ω_eff. For the standard massive case we have also analysed general anharmonic contributions to the effective sound speed. These results are reached through a perturbed version of the generalized virial theorem and also by studying the exact system, both in the super-Hubble limit, deriving the natural ansatz for δϕ, and for sub-Hubble modes, exploiting Floquet's theorem.
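The leading-order result c_eff² = ω = (n − 2)/(n + 2) quoted above can be checked directly against the familiar limits of the oscillating-field equation of state:

```python
from fractions import Fraction

def effective_sound_speed_sq(n):
    """Leading-order effective sound speed squared for V(phi) ~ |phi|^n:
    c_eff^2 = w = (n - 2) / (n + 2)."""
    return Fraction(n - 2, n + 2)

# n = 2 (massive field) oscillates like pressureless dust (w = 0);
# n = 4 (quartic potential) behaves like radiation (w = 1/3).
assert effective_sound_speed_sq(2) == 0
assert effective_sound_speed_sq(4) == Fraction(1, 3)
```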
Abstract:
Underwater video transects have become a common tool for quantitative analysis of the seafloor. However, a major difficulty remains in the accurate determination of the area surveyed, as underwater navigation can be unreliable and image scaling does not always compensate for distortions due to perspective and topography. Depending on the camera set-up and available instruments, different methods of surface measurement are applied, which makes it difficult to compare data obtained by different vehicles. 3-D modelling of the seafloor based on 2-D video data and a reference scale can be used to compute subtransect dimensions. Focussing on the length of the subtransect, the data obtained from 3-D models created with the software PhotoModeler Scanner are compared with those determined from underwater acoustic positioning (ultra-short baseline, USBL) and bottom tracking (Doppler velocity log, DVL). 3-D model building and scaling were successfully conducted on all three tested set-ups, and the distortion of the reference scales due to substrate roughness was identified as the main source of imprecision. Acoustic positioning was generally inaccurate, and bottom tracking was unreliable on rough terrain. Subtransect lengths assessed with PhotoModeler were on average 20% longer than those derived from acoustic positioning, due to the higher spatial resolution and the inclusion of slope. On a high-relief wall, bottom tracking and 3-D modelling yielded similar results. At present, 3-D modelling is the most powerful, albeit the most time-consuming, method for accurate determination of video subtransect dimensions.
Abstract:
Substantial retreat or disintegration of numerous ice shelves has been observed on the Antarctic Peninsula. The ice shelf in the Prince Gustav Channel retreated gradually from the late 1980s and broke up in 1995. Tributary glaciers reacted with speed-up, surface lowering and increased ice discharge, consequently contributing to sea level rise. We present a detailed long-term study (1993-2014) of the dynamic response of the Sjögren Inlet glaciers to the disintegration of the Prince Gustav Ice Shelf. We analyzed various remote sensing datasets to observe the reactions of the glaciers to the loss of the buttressing ice shelf. A strong increase in ice surface velocities was observed, with maximum flow speeds reaching 2.82±0.48 m/d in 2007 and 1.50±0.32 m/d in 2004 at Sjögren and Boydell glaciers, respectively. Subsequently the flow velocities decelerated; however, in late 2014 we still measured about twice the values of our first measurements in 1996. The tributary glaciers retreated 61.7±3.1 km² behind the former grounding line of the ice shelf. In regions below 1000 m a.s.l., a mean surface lowering of -68±10 m (-3.1 m/a) was observed in the period 1993-2014. The lowering rate decreased to -2.2 m/a in recent years. Based on the surface lowering rates, geodetic mass balances of the glaciers were derived for different time steps. A high mass loss rate of -1.21±0.36 Gt/a was found in the earliest period (1993-2001). Due to the dynamic adjustment of the glaciers to the new boundary conditions, the ice mass loss decreased to -0.59±0.11 Gt/a in the period 2012-2014, resulting in an average mass loss rate of -0.89±0.16 Gt/a (1993-2014). Including the retreat of the ice front and grounding line, a total mass change of -38.5±7.7 Gt and a contribution to sea level rise of 0.061±0.013 mm were computed. Analysis of the ice flux revealed that available bedrock elevation estimates at Sjögren Inlet are too shallow and are the major source of uncertainty in ice flux computations.
This temporally dense time series analysis of the Sjögren Inlet glaciers shows that the adjustment of the tributary glaciers to the ice shelf disintegration is still going on, and it provides detailed information on the changes in glacier dynamics.
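A geodetic mass balance of the kind derived above converts a mean surface-lowering rate over a glacier area into a mass change rate via an assumed density. The area, lowering rate and density below are hypothetical illustration values, not the paper's inputs:

```python
ICE_DENSITY = 900.0  # kg/m^3, a common volume-to-mass conversion assumption

def geodetic_mass_balance_gt_a(area_km2, lowering_m_a, density=ICE_DENSITY):
    """Mass change rate (Gt/a) from a mean surface elevation change rate
    applied over a glacier area."""
    volume_m3_a = area_km2 * 1e6 * lowering_m_a   # km^2 -> m^2
    return volume_m3_a * density / 1e12           # kg/a -> Gt/a

# Hypothetical catchment: 400 km^2 lowering at 3.1 m/a on average.
rate = geodetic_mass_balance_gt_a(400.0, -3.1)
```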
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06