903 results for Generalization of Ehrenfest’s urn Model


Relevance: 100.00%

Publisher:

Abstract:

The rejection of the European Constitution marks an important crystallization point for debate about the European Union (EU) and the integration process. The European Constitution was envisaged as the founding document of a renewed and enlarged European Union, and it was therefore generally assumed to find wide public support. Its rejection was not anticipated. The negative referenda in France and the Netherlands therefore led to a controversial debate about the more fundamental meaning and the consequences of the rejection, both for the immediate state of affairs and for the further integration process. The rejection of the Constitution and the controversy about its correct interpretation therefore present an intriguing puzzle for political analysis. Although the treaty rejection was taken up widely in the field of European Studies, the focus of existing analyses has predominantly been on explaining why the current situation occurred. Underlying these approaches is the premise that by establishing the reasons for the rejection it is possible to derive the ‘true’ meaning of the event for the EU integration process. In my paper I rely on an alternative, discourse-theoretical approach which aims to overcome the positivist perspective dominating the existing analyses. I argue that the meaning of the event ‘treaty rejection’ is not fixed or inherent to it but discursively constructed. The critical assessment of this concrete meaning-production is of high relevance, as the specific meaning attributed to the treaty rejection effectively constrains the scope of supposedly ‘reasonable’ options for action, both in the concrete situation and in the further European integration process more generally. I will argue that the overall framing suggests a fundamentally technocratic approach to governance on the part of the Commission. Political struggle and public deliberation are no longer foreseen, as the concrete solutions to the citizens’ general concerns are designed by supposedly apolitical experts. Through the communicative diffusion and the active implementation of this particular model of governance, the Commission shapes the future integration process in a more substantial way than is obvious from its seemingly limited immediate problem-solving orientation of overcoming the ‘constitutional crisis’. As the European Commission is a central actor in the discourse production, my analysis focuses on the specific interpretation of the situation put forward by the Commission. In order to work out the Commission’s particular take on the event, I conducted a frame analysis (according to Benford/Snow) on a body of key sources produced in the context of coping with the treaty rejection.

Relevance: 100.00%

Publisher:

Abstract:

In the field of structural dynamics, computer-aided model validation techniques are now widely used. Experimental modal data are used to correct a numerical model for further analyses. However, the validated model only represents the dynamic behaviour of the structure that was actually tested. In reality, there are many factors that will inevitably lead to varying modal test results: changing environmental conditions during a test, slightly different test setups, a test on a nominally identical but different structure (e.g. from series production), etc. Before a stochastic simulation can be carried out, a number of assumptions must be made for the random variables involved. Consequently, an inverse method is needed that makes it possible to identify a stochastic model from experimental modal data. This work describes the development of a parameter-based approach for identifying stochastic simulation models in the field of structural dynamics. The developed method relies on first-order sensitivities, by means of which parameter means and covariances of the numerical model can be determined from stochastic experimental modal data.
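
The first-order sensitivity step can be pictured with the minimal Python sketch below. It is an illustration of the idea, not the implementation developed in the thesis: a sensitivity matrix of modal frequencies with respect to model parameters is inverted in the least-squares sense to map the scatter of repeated modal tests back onto parameter means and covariances. All names and array shapes are assumptions.

# Minimal sketch (not the thesis implementation): first-order, sensitivity-based
# identification of parameter means and covariances from scattered modal data.
import numpy as np

def identify_parameters(S, f_obs, f_nom, p_nom):
    """S: (n_modes x n_params) sensitivity matrix d(frequency)/d(parameter),
    f_obs: (n_tests x n_modes) measured eigenfrequencies from repeated tests,
    f_nom: nominal model eigenfrequencies, p_nom: nominal parameter values."""
    T = np.linalg.pinv(S)                      # linearized inverse map
    # Mean update: shift the parameters so the model matches the mean test data.
    dp_mean = T @ (f_obs.mean(axis=0) - f_nom)
    p_mean = p_nom + dp_mean
    # Covariance: propagate the observed frequency scatter back onto the parameters.
    cov_f = np.cov(f_obs, rowvar=False)
    cov_p = T @ cov_f @ T.T
    return p_mean, cov_p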

Relevance: 100.00%

Publisher:

Abstract:

The Upper Blue Nile River Basin (UBNRB), located in the western part of Ethiopia between 7° 45’ and 12° 45’ N and 34° 05’ and 39° 45’ E, has a total area of 174,962 km². More than 80% of the population in the basin is engaged in agricultural activities. Because of the particularly dry climate in the basin, as in most other regions of Ethiopia, agricultural productivity depends to a very large extent on the occurrence of the seasonal rains. This situation makes agriculture highly vulnerable to the potential climate hazards that are about to afflict Africa as a whole and Ethiopia in particular. To analyze these possible impacts of future climate change on the water resources of the UBNRB, in the first part of the thesis climate projections for precipitation and minimum and maximum temperatures in the basin have been carried out, using downscaled predictors from three GCMs (ECHAM5, GFDL21 and CSIRO-MK3) under SRES scenarios A1B and A2. The two statistical downscaling models used are SDSM and LARS-WG, whereby SDSM is used to downscale ECHAM5 predictors alone and LARS-WG is applied both in mono-model mode with predictors from ECHAM5 and in multi-model mode with combined predictors from ECHAM5, GFDL21 and CSIRO-MK3. For the calibration/validation of the downscaling models, observed as well as NCEP climate data for the 1970-2000 reference period are used. The future projections are made for two time periods: 2046-2065 (2050s) and 2081-2100 (2090s). For the 2050s the downscaled climate predictions indicate a rise of 0.6°C to 2.7°C for the seasonal maximum temperatures Tmax, and of 0.5°C to 2.44°C for the minimum temperatures Tmin. Similarly, during the 2090s the seasonal Tmax increases by 0.9°C to 4.63°C and Tmin by 1°C to 4.6°C, whereby these increases are generally higher for the A2 than for the A1B scenario. For most sub-basins of the UBNRB, the predicted changes of Tmin are larger than those of Tmax. For precipitation, both downscaling tools predict large changes which, depending on the GCM employed, are such that the spring and summer seasons will experience decreases of between -36% and 1%, and the autumn and winter seasons changes of between -8% and +126%, for the two future time periods, regardless of the SRES scenario used. In the second part of the thesis the semi-distributed, physically based hydrologic model SWAT (Soil and Water Assessment Tool) is used to evaluate the impacts of the above-predicted future climate change on the hydrology and water resources of the UBNRB. Hereby the downscaled future predictors are used as input to the SWAT model to predict streamflow of the Upper Blue Nile as well as other relevant water resources parameters in the basin. Calibration and validation of the streamflow model are again done with the 1970-2000 measured discharge at the outlet gauge station Eldiem, whereby the most sensitive out of the numerous “tuneable” calibration parameters in SWAT have been selected by means of a sophisticated sensitivity analysis. Consequently, a good calibration/validation model performance with a high NSE coefficient of 0.89 is obtained. The results of the future simulations of streamflow in the basin, using both SDSM- and LARS-WG-downscaled output in SWAT, reveal a decline of -10% to -61% in the future Blue Nile streamflow. As expected, these obviously adverse effects on future UBNRB water availability are more exacerbated for the 2090s than for the 2050s, regardless of the SRES scenario.
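
For reference, the NSE coefficient quoted above (0.89) is the Nash-Sutcliffe efficiency, a standard goodness-of-fit measure for hydrological calibration. A minimal sketch of its computation from observed and simulated discharge series is given below; this is illustrative only and not the SWAT calibration code.

# Nash-Sutcliffe efficiency (NSE), the calibration metric quoted above;
# values close to 1 indicate that simulated discharge reproduces observations well.
import numpy as np

def nse(observed, simulated):
    observed, simulated = np.asarray(observed, float), np.asarray(simulated, float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)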

Relevance: 100.00%

Publisher:

Abstract:

Rice straw is used in Northeastern Thailand as an alternative to organic fertilizer for crop production. This enables farmers to reduce the use of chemical fertilizers which leads to a decrease in production costs. In spite of the beneficial effects in agricultural production, rice straw compost cannot be produced in large amounts because the burning of rice straws is a common farming practice. The decisions of farmers who use rice straw compost have been investigated by interviewing 120 households belonging to the members of an organic fertilizer user group using a household questionnaire. The study was conducted to evaluate the factors that affect the use of rice straw compost in Khon Kaen Province in Northeastern Thailand. Results of the logit model showed that the farmers’ education, number of rice straw compost trainings in which the farmer participated, lack of knowledge about technology, insufficient labour and difficulty in making rice straw compost had a significant impact on the farmer’s decision to use rice straw compost. Difficulty in making rice straw compost appeared to be the root cause because the procedure of making rice straw compost is complex and labour intensive. Repeated trainings thus, will have a positive and significant influence on farmers’ adoption of the technology. Training provides more knowledge and will presumably change the perception of the farmers towards new technologies and the awareness of positive effects of rice straw compost utilization.
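
A logit adoption model of the kind described can be sketched as below. The data file and column names are hypothetical, chosen only to mirror the factors listed in the abstract; this is not the study's actual code or data.

# Illustrative only: a binary logit model of compost adoption with hypothetical
# variable names mirroring the factors discussed (education, trainings, labour, ...).
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("household_survey.csv")        # hypothetical survey file
X = sm.add_constant(df[["education_years", "n_trainings",
                        "lacks_knowledge", "insufficient_labour",
                        "compost_difficulty"]])
y = df["uses_rice_straw_compost"]               # 1 = adopter, 0 = non-adopter
result = sm.Logit(y, X).fit()
print(result.summary())                         # coefficients and p-values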

Relevance: 100.00%

Publisher:

Abstract:

The ongoing depletion of the coastal aquifer in the Gaza Strip due to groundwater overexploitation has led to seawater intrusion, which is becoming an increasingly serious problem in Gaza, as seawater has invaded many sections along the coastal shoreline. As a first step towards getting a hold on the problem, an artificial neural network (ANN) model has been applied as a new approach and an attractive tool to study and predict groundwater levels without requiring physically based hydrologic parameters, to improve the understanding of the complex groundwater system, and to show the effects of hydrologic, meteorological and anthropogenic impacts on groundwater conditions. Prediction of the future behaviour of the seawater intrusion process in the Gaza aquifer is thus of crucial importance to safeguard the already scarce groundwater resources in the region. In this study the coupled three-dimensional groundwater flow and density-dependent solute transport model SEAWAT, as implemented in Visual MODFLOW, is applied to the Gaza coastal aquifer system to simulate the location and the dynamics of the saltwater–freshwater interface in the aquifer in the period 2000-2010. A very good agreement between simulated and observed TDS salinities, with correlation coefficients of 0.902 and 0.883 for the steady-state and transient calibrations respectively, is obtained. After successful calibration of the solute transport model, simulations of future management scenarios for the Gaza aquifer have been carried out in order to get a more comprehensive view of the effects of the artificial recharge that has been planned in the Gaza Strip for some time to forestall, or even remedy, the presently existing adverse aquifer conditions, namely low groundwater heads and high salinity, by the end of the target simulation period, year 2040. To that end, numerous management scenario schemes are examined to maintain the groundwater system and to control the salinity distributions within the target period 2011-2040. In the first, pessimistic scenario, it is assumed that pumping from the aquifer continues to increase in the near future to meet the rising water demand, and that there is no further recharge to the aquifer beyond what is provided by natural precipitation. The second, optimistic scenario assumes that treated surficial wastewater can be used as a source of additional artificial recharge to the aquifer which, in principle, should not only lead to an increased sustainable yield of the latter, but could, in the best of all cases, even revert some of the adverse present-day conditions in the aquifer, i.e., seawater intrusion. This scenario is examined in three different cases, which differ in the locations and extents of the injection fields for the treated wastewater. The results obtained with the first (do-nothing) scenario indicate that there will be ongoing negative impacts on the aquifer, such as a higher propensity for strong seawater intrusion into the Gaza aquifer. This scenario illustrates that, compared with the 2010 situation of the baseline model, by the end of the simulation period (year 2040) the amount of saltwater intrusion into the coastal aquifer will have increased by about 35%, and the salinity by 34%. In contrast, all three cases of the second (artificial recharge) scenario group can partly revert the present seawater intrusion. From the water budget point of view, compared with the first (do-nothing) scenario, for year 2040 the water added to the aquifer by artificial recharge will reduce the amount of water entering the aquifer by seawater intrusion by 81, 77 and 72% for the three recharge cases, respectively. Meanwhile, the salinity in the Gaza aquifer will be decreased by 15, 32 and 26% for the three cases, respectively.
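
The ANN approach mentioned above can be illustrated with a minimal sketch. The data file, input columns and network size below are hypothetical assumptions, not the study's actual setup.

# Illustrative only: an ANN of the kind described above, mapping hydrologic,
# meteorological and anthropogenic inputs to groundwater levels (hypothetical columns).
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

df = pd.read_csv("gaza_wells.csv")              # hypothetical data file
X = df[["rainfall", "abstraction_rate", "recharge", "distance_to_coast"]]
y = df["groundwater_level"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(20, 10), max_iter=5000, random_state=0)
model.fit(X_train, y_train)
print("R^2 on held-out data:", model.score(X_test, y_test))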

Relevance: 100.00%

Publisher:

Abstract:

An electronic theory is developed which describes the ultrafast demagnetization in itinerant ferromagnets following the absorption of a femtosecond laser pulse. The present work intends to elucidate the microscopic physics of this ultrafast phenomenon by identifying its fundamental mechanisms. In particular, it aims to reveal the nature of the involved spin excitations and of the angular-momentum transfer between spin and lattice, which are still subjects of intensive debate. In the first, preliminary part of the thesis the initial stage of the laser-induced demagnetization process is considered. In this stage the electronic system is highly excited by the spin-conserving elementary excitations involved in the laser-pulse absorption, while the spin or magnon degrees of freedom remain very weakly excited. The role of electron-hole excitations in the stability of the magnetic order of one- and two-dimensional 3d transition metals (TMs) is investigated by using ab initio density-functional theory. The results show that the local magnetic moments are remarkably stable even at very high levels of local energy density and therefore indicate that these moments preserve their identity throughout the entire demagnetization process. In the second, main part of the thesis a many-body theory is proposed which takes into account these local magnetic moments and the local character of the involved spin excitations, such as spin fluctuations, from the very beginning. In this approach the relevant valence 3d and 4p electrons are described in terms of a multiband model Hamiltonian which includes Coulomb interactions, interatomic hybridizations, spin-orbit interactions, as well as the coupling to the time-dependent laser field on the same footing. An exact numerical time evolution is performed for small ferromagnetic TM clusters. The dynamical simulations show that after ultrashort laser-pulse absorption the magnetization of these clusters decreases on a time scale of a hundred femtoseconds. In particular, the results reproduce the experimentally observed laser-induced demagnetization in ferromagnets and demonstrate that this effect can be explained in terms of the following purely electronic non-adiabatic mechanism: first, on a time scale of 10–100 fs after laser excitation, the spin-orbit coupling yields local angular-momentum transfer between the spins and the electron orbits, while subsequently the orbital angular momentum is very rapidly quenched in the lattice, on the time scale of one femtosecond, due to interatomic electron hoppings. In combination, these two processes result in a demagnetization within a hundred or a few hundred femtoseconds after laser-pulse absorption.
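
A toy illustration of what an exact numerical time evolution involves is sketched below for a two-level system driven by a pulse. The thesis itself treats a much richer multiband Hamiltonian with Coulomb, hybridization and spin-orbit terms; this sketch, with made-up parameters, only shows the propagation step.

# Toy sketch of exact numerical time evolution for a small time-dependent Hamiltonian
# (a two-level system driven by a Gaussian pulse); not the multiband model of the thesis.
import numpy as np
from scipy.linalg import expm

def propagate(psi0, H_of_t, t_grid):
    """Step-wise exact propagation: psi(t+dt) = exp(-i H(t_mid) dt) psi(t), with hbar = 1."""
    psi = psi0.astype(complex)
    for t0, t1 in zip(t_grid[:-1], t_grid[1:]):
        dt = t1 - t0
        psi = expm(-1j * H_of_t(0.5 * (t0 + t1)) * dt) @ psi
    return psi

def H(t, delta=1.0, omega0=0.5, t0=5.0, sigma=1.5):
    # Two-level Hamiltonian with a pulsed off-diagonal coupling (arbitrary units).
    coupling = omega0 * np.exp(-((t - t0) ** 2) / (2 * sigma ** 2))
    return np.array([[0.0, coupling], [coupling, delta]])

psi = propagate(np.array([1.0, 0.0]), H, np.linspace(0.0, 10.0, 2001))
print("Excited-state population after the pulse:", abs(psi[1]) ** 2)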

Relevance: 100.00%

Publisher:

Abstract:

In the theory of the Navier-Stokes equations, the proofs of some basic known results, like for example the uniqueness of solutions to the stationary Navier-Stokes equations under smallness assumptions on the data or the stability of certain time discretization schemes, actually only use a small range of properties and are therefore valid in a more general context. This observation leads us to introduce the concept of SST spaces, a generalization of the functional setting for the Navier-Stokes equations. It allows us to prove (by means of counterexamples) that several uniqueness and stability conjectures that are still open in the case of the Navier-Stokes equations have a negative answer in the larger class of SST spaces, thereby showing that proof strategies used for a number of classical results are not sufficient to affirmatively answer these open questions. More precisely, in the larger class of SST spaces, non-uniqueness phenomena can be observed for the implicit Euler scheme, for two nonlinear versions of the Crank-Nicolson scheme, for the fractional step theta scheme, and for the SST-generalized stationary Navier-Stokes equations. As far as stability is concerned, a linear version of the Euler scheme, a nonlinear version of the Crank-Nicolson scheme, and the fractional step theta scheme turn out to be non-stable in the class of SST spaces. The positive results established in this thesis include the generalization of classical uniqueness and stability results to SST spaces, the uniqueness of solutions (under smallness assumptions) to two nonlinear versions of the Euler scheme, two nonlinear versions of the Crank-Nicolson scheme, and the fractional step theta scheme for general SST spaces, the second order convergence of a version of the Crank-Nicolson scheme, and a new proof of the first order convergence of the implicit Euler scheme for the Navier-Stokes equations. For each convergence result, we provide conditions on the data that guarantee the existence of nonstationary solutions satisfying the regularity assumptions needed for the corresponding convergence theorem. In the case of the Crank-Nicolson scheme, this involves a compatibility condition at the corner of the space-time cylinder, which can be satisfied via a suitable prescription of the initial acceleration.
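
For orientation, two of the time-stepping schemes referred to above have the following standard semi-discrete forms when written for the incompressible Navier-Stokes equations (time step k, viscosity \nu, forcing f); the notation here is generic and not taken from the thesis.

\[
\frac{u^{n+1}-u^{n}}{k} - \nu\,\Delta u^{n+1} + (u^{n+1}\cdot\nabla)u^{n+1} + \nabla p^{n+1} = f^{n+1},
\qquad \nabla\cdot u^{n+1} = 0
\quad\text{(implicit Euler)}
\]

\[
\frac{u^{n+1}-u^{n}}{k}
+ \tfrac{1}{2}\bigl[-\nu\,\Delta u^{n+1} + (u^{n+1}\cdot\nabla)u^{n+1}\bigr]
+ \tfrac{1}{2}\bigl[-\nu\,\Delta u^{n} + (u^{n}\cdot\nabla)u^{n}\bigr]
+ \nabla p^{n+1/2} = \tfrac{1}{2}\bigl(f^{n+1}+f^{n}\bigr),
\qquad \nabla\cdot u^{n+1} = 0
\quad\text{(Crank-Nicolson)}
\]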

Relevance: 100.00%

Publisher:

Abstract:

This thesis describes the development of a model-based vision system that exploits hierarchies of both object structure and object scale. The focus of the research is to use these hierarchies to achieve robust recognition based on effective organization and indexing schemes for model libraries. The goal of the system is to recognize parameterized instances of non-rigid model objects contained in a large knowledge base despite the presence of noise and occlusion. Robustness is achieved by developing a system that can recognize viewed objects that are scaled or mirror-image instances of the known models or that contain component sub-parts with different relative scaling, rotation, or translation than in the models. The approach taken in this thesis is to develop an object shape representation that incorporates a component sub-part hierarchy (to allow for efficient and correct indexing into an automatically generated model library, as well as for relative parameterization among sub-parts) and a scale hierarchy (to allow for a general-to-specific recognition procedure). After an analysis of the issues and inherent tradeoffs in the recognition process, a system is implemented using a representation based on significant contour curvature changes and a recognition engine based on geometric constraints of feature properties. Examples of the system's performance are given, followed by an analysis of the results. In conclusion, the system's benefits and limitations are presented.

Relevance: 100.00%

Publisher:

Abstract:

Since robots are typically designed with an individual actuator at each joint, the control of these systems is often difficult and non-intuitive. This thesis explains a more intuitive control scheme called Virtual Model Control. This thesis also demonstrates the simplicity and ease of this control method by using it to control a simulated walking hexapod. Virtual Model Control uses imagined mechanical components to create virtual forces, which are applied through the joint torques of real actuators. This method produces a straightforward means of controlling joint torques to produce a desired robot behavior. Due to the intuitive nature of this control scheme, the design of a virtual model controller is similar to the design of a controller with basic mechanical components. The ease of this control scheme facilitates the use of a high-level control system, which can be used above the low-level virtual model controllers to modulate the parameters of the imaginary mechanical components. In order to apply Virtual Model Control to parallel mechanisms, a solution to the force distribution problem is required. This thesis uses an extension of Gardner's Partitioned Force Control method which allows for the specification of constrained degrees of freedom. This virtual model control technique was applied to a simulated hexapod robot. Although the hexapod is a highly non-linear, parallel mechanism, the virtual models allowed textbook control solutions to be used while the robot was walking. Using a simple linear control law, the robot walked while simultaneously balancing a pendulum and tracking an object.
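
The central mapping behind virtual model control, from a virtual force to real joint torques via the Jacobian transpose, can be sketched as follows. The gains, numbers and 2-DOF Jacobian are hypothetical, this is not the thesis code, and it ignores the force distribution problem for parallel mechanisms.

# Minimal sketch of the virtual-model idea: a virtual spring-damper "attached" to the
# body produces a force that is mapped to joint torques via the Jacobian transpose.
import numpy as np

def virtual_spring_damper_torques(J, x, x_des, xdot, k=200.0, b=20.0):
    """J: (task_dims x n_joints) Jacobian of the attachment point,
    x, xdot: current position/velocity of that point, x_des: virtual set point."""
    f_virtual = k * (x_des - x) - b * xdot     # force of the imagined component
    return J.T @ f_virtual                     # equivalent real joint torques

# Example with a hypothetical 2-DOF leg Jacobian:
J = np.array([[0.30, 0.15],
              [0.00, 0.25]])
tau = virtual_spring_damper_torques(J, x=np.array([0.10, -0.40]),
                                    x_des=np.array([0.00, -0.45]),
                                    xdot=np.array([0.05, 0.00]))
print("joint torques:", tau)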

Relevance: 100.00%

Publisher:

Abstract:

The purpose of this work was to establish a taxonomy of handmade model construction as a platform from which to project an operative method in architecture. To this end, a broad body of model production in the work of ARX was studied and catalogued in a systematic way. A wide range of families and sub-families of models was found, with different purposes according to each phase of development, from exploratory steps towards a new possible configuration to detailed, refined decisions. The most relevant characteristics revealed by this working method were the grounds it provides for personal reflection and open discussion on project method, its flexibility in space modelling, its accuracy in representing real construction situations, and its constant and stimulating openness to new suggestions. This research supported a meta-reflection on the method, creating an awareness of processes that intend to become an autonomous language, knowledge that may be useful to those who intend to implement a haptic modus operandi in the work of an architectural project.

Relevance: 100.00%

Publisher:

Abstract:

Previous assessments of the impacts of climate change on heat-related mortality use the "delta method" to create temperature projection time series that are applied to temperature-mortality models to estimate future mortality impacts. The delta method means that climate model bias in the modelled present does not influence the temperature projection time series and impacts. However, the delta method assumes that climate change will result only in a change in the mean temperature, but there is evidence that there will also be changes in the variability of temperature with climate change. The aim of this paper is to demonstrate the importance of considering changes in temperature variability with climate change in impacts assessments of future heat-related mortality. We investigate future heat-related mortality impacts in six cities (Boston, Budapest, Dallas, Lisbon, London and Sydney) by applying temperature projections from the UK Meteorological Office HadCM3 climate model to the temperature-mortality models constructed and validated in Part 1. We investigate the impacts for four cases based on various combinations of mean and variability changes in temperature with climate change. The results demonstrate that higher mortality is attributed to increases in the mean and variability of temperature with climate change rather than to the change in mean temperature alone. This has implications for interpreting existing impacts estimates that have used the delta method. We present a novel method for the creation of temperature projection time series that includes changes in the mean and variability of temperature with climate change and is not influenced by climate model bias in the modelled present. The method should be useful for future impacts assessments. Few studies consider the implications that the limitations of the climate model may have on the heat-related mortality impacts. Here, we demonstrate the importance of considering this by conducting an evaluation of the daily and extreme temperatures from HadCM3, which demonstrates that the estimates of future heat-related mortality for Dallas and Lisbon may be overestimated due to positive climate model bias. Likewise, estimates for Boston and London may be underestimated due to negative climate model bias. Finally, we briefly consider uncertainties in the impacts associated with greenhouse gas emissions and acclimatisation. The uncertainties in the mortality impacts due to different emissions scenarios of greenhouse gases in the future varied considerably by location. Allowing for acclimatisation to an extra 2°C in mean temperatures reduced future heat-related mortality by approximately half that of no acclimatisation in each city.
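
One simple way to construct a projection series that carries both the change in mean and the change in variability, while keeping present-day climate model bias out of the series, is sketched below. This is an illustration of the general idea only and not necessarily the exact method developed in the paper.

# Illustrative only (not necessarily the paper's exact method): build a projected
# temperature series from observations by applying the climate model's *changes* in
# mean and standard deviation, so present-day model bias does not enter directly.
import numpy as np

def project_series(t_obs, model_present, model_future):
    t_obs = np.asarray(t_obs, float)
    model_present = np.asarray(model_present, float)
    model_future = np.asarray(model_future, float)
    mean_change = model_future.mean() - model_present.mean()   # shift in mean
    var_scaling = model_future.std() / model_present.std()     # change in variability
    anomalies = t_obs - t_obs.mean()
    return t_obs.mean() + mean_change + var_scaling * anomalies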

Relevance: 100.00%

Publisher:

Abstract:

This paper examines to what extent crops and their environment should be viewed as a coupled system. Crop impact assessments currently use climate model output offline to drive process-based crop models. However, in regions where local climate is sensitive to land surface conditions more consistent assessments may be produced with the crop model embedded within the land surface scheme of the climate model. Using a recently developed coupled crop–climate model, the sensitivity of local climate, in particular climate variability, to climatically forced variations in crop growth throughout the tropics is examined by comparing climates simulated with dynamic and prescribed seasonal growth of croplands. Interannual variations in land surface properties associated with variations in crop growth and development were found to have significant impacts on near-surface fluxes and climate; for example, growing season temperature variability was increased by up to 40% by the inclusion of dynamic crops. The impact was greatest in dry years where the response of crop growth to soil moisture deficits enhanced the associated warming via a reduction in evaporation. Parts of the Sahel, India, Brazil, and southern Africa were identified where local climate variability is sensitive to variations in crop growth, and where crop yield is sensitive to variations in surface temperature. Therefore, offline seasonal forecasting methodologies in these regions may underestimate crop yield variability. The inclusion of dynamic crops also altered the mean climate of the humid tropics, highlighting the importance of including dynamical vegetation within climate models.

Relevance: 100.00%

Publisher:

Abstract:

We describe a new methodology for comparing satellite radiation budget data with a numerical weather prediction (NWP) model. This is applied to data from the Geostationary Earth Radiation Budget (GERB) instrument on Meteosat-8. The methodology brings together, in near-real time, GERB broadband shortwave and longwave fluxes with simulations based on analyses produced by the Met Office global NWP model. Results for the period May 2003 to February 2005 illustrate the progressive improvements in the data products as various initial problems were resolved. In most areas the comparisons reveal systematic errors in the model's representation of surface properties and clouds, which are discussed elsewhere. However, for clear-sky regions over the oceans the model simulations are believed to be sufficiently accurate to allow the quality of the GERB fluxes themselves to be assessed and any changes in time of the performance of the instrument to be identified. Using model and radiosonde profiles of temperature and humidity as input to a single-column version of the model's radiation code, we conduct sensitivity experiments which provide estimates of the expected model errors over the ocean of about ±5–10 W m⁻² in clear-sky outgoing longwave radiation (OLR) and ±0.01 in clear-sky albedo. For the more recent data the differences between the observed and modeled OLR and albedo are well within these error estimates. The close agreement between the observed and modeled values, particularly for the most recent period, illustrates the value of the methodology. It also contributes to the validation of the GERB products and increases confidence in the quality of the data, prior to their release.

Relevance: 100.00%

Publisher:

Abstract:

An expert elicitation exercise was undertaken to determine those components and processes that are most important for modeling plant uptake of organic chemicals. The state of our knowledge of these processes was also assessed. This semi-quantitative analysis allowed the construction of an idealized model with seven compartments: soil bulk, soil water, roots, stem, leaves, fruit, and air. Three main areas were identified for further research: 1) the uptake of organic chemicals by fruit; 2) the internal transfer of organic chemicals between plant structures (e.g., stem and leaves); and 3) the transfer via the soil-air-plant pathway. Until new data become available to quantify these processes, it is proposed that an equilibrium partitioning approach be used between plant components other than fruit, or that models consist of both an edible and an inedible compartment.
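
The proposed equilibrium partitioning fallback can be illustrated in a few lines. The concentration and partition coefficients below are hypothetical placeholders, not elicited values.

# Illustrative only: equilibrium partitioning between soil water and plant compartments,
# C_compartment = K_compartment_water * C_soil_water, with hypothetical coefficients.
c_soil_water = 0.02                       # mg/L, assumed dissolved concentration
partition_coefficients = {                # L/kg, hypothetical compartment-water values
    "roots": 5.0, "stem": 2.0, "leaves": 3.5,
}
concentrations = {part: k * c_soil_water for part, k in partition_coefficients.items()}
print(concentrations)                     # mg/kg in each compartment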

Relevance: 100.00%

Publisher:

Abstract:

Models of the dynamics of nitrogen in soil (soil-N) can be used to aid the fertilizer management of a crop. The predictions of soil-N models can be validated by comparison with observed data. Validation generally involves calculating non-spatial statistics of the observations and predictions, such as their means, their mean squared difference, and their correlation. However, when the model predictions are spatially distributed across a landscape the model requires validation with spatial statistics. There are three reasons for this: (i) the model may be more or less successful at reproducing the variance of the observations at different spatial scales; (ii) the correlation of the predictions with the observations may be different at different spatial scales; (iii) the spatial pattern of model error may be informative. In this study we used a model, parameterized with spatially variable input information about the soil, to predict the mineral-N content of soil in an arable field, and compared the results with observed data. We validated the performance of the N model spatially with a linear mixed model of the observations and model predictions, estimated by residual maximum likelihood. This novel approach allowed us to describe the joint variation of the observations and predictions as: (i) independent random variation that occurred at a fine spatial scale; (ii) correlated random variation that occurred at a coarse spatial scale; (iii) systematic variation associated with a spatial trend. The linear mixed model revealed that, in general, the performance of the N model changed depending on the spatial scale of interest. At the scales associated with random variation, the N model underestimated the variance of the observations, and the predictions were poorly correlated with the observations. At the scale of the trend, the predictions and observations shared a common surface. The spatial pattern of the error of the N model suggested that the observations were affected by the local soil condition, but this was not accounted for by the N model. In summary, the N model would be well suited to field-scale management of soil nitrogen, but poorly suited to management at finer spatial scales. This information was not apparent from a non-spatial validation.
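
The non-spatial validation statistics mentioned at the start of the abstract (means, mean squared difference, correlation) can be computed as in the short sketch below; the spatial linear mixed model fitted by residual maximum likelihood is a more involved analysis and is not reproduced here.

# The non-spatial validation statistics mentioned above: means, mean squared
# difference (MSD), and correlation between observed and predicted mineral N.
import numpy as np

def nonspatial_validation(observed, predicted):
    observed, predicted = np.asarray(observed, float), np.asarray(predicted, float)
    return {
        "mean_obs": observed.mean(),
        "mean_pred": predicted.mean(),
        "msd": np.mean((observed - predicted) ** 2),
        "correlation": np.corrcoef(observed, predicted)[0, 1],
    }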