890 results for continuous model theory
Abstract:
Models of root system growth emerged in the early 1970s and were based on mathematical representations of root length distribution in soil. The last decade has seen the development of more complex architectural models and the use of computer-intensive approaches to study developmental and environmental processes in greater detail. There is a pressing need for predictive technologies that can integrate root system knowledge, scaling from molecular to ensembles of plants. This paper makes the case for more widespread use of simpler models of root systems based on continuous descriptions of their structure. A new theoretical framework is presented that describes the dynamics of root density distributions as a function of individual root developmental parameters such as rates of lateral root initiation, elongation, mortality, and gravitropism. The simulations resulting from such equations can be performed most efficiently in discretized domains that deform as a result of growth, and that can be used to model the growth of many interacting root systems. The modelling principles described help to bridge the gap between continuum and architectural approaches, and enhance our understanding of the spatial development of root systems. Our simulations suggest that root systems develop in travelling wave patterns of meristems, revealing order in otherwise spatially complex and heterogeneous systems. Such knowledge should assist physiologists and geneticists to appreciate how meristem dynamics contribute to the pattern of growth and functioning of root systems in the field.
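As a rough illustration of the kind of continuum description this abstract argues for, the sketch below integrates a one-dimensional transport equation for meristem density with hypothetical rates of elongation, branching and mortality. It is not the authors' framework; all names and parameter values are made up for illustration.

```python
import numpy as np

# Toy continuum model of root meristem density n(x, t) (illustrative only):
#   dn/dt + e * dn/dx = (b - m) * n
# where e is the elongation rate, b the lateral-initiation (branching) rate
# and m the meristem mortality rate. All parameter values are hypothetical.

e, b, m = 1.0, 0.4, 0.3          # cm/day, 1/day, 1/day (made-up values)
L, nx, T = 20.0, 400, 10.0       # domain depth (cm), grid points, days
dx = L / nx
dt = 0.4 * dx / e                # CFL-stable time step
x = np.linspace(0.0, L, nx)

n = np.exp(-((x - 1.0) ** 2) / 0.1)   # initial meristems near the surface

t = 0.0
while t < T:
    # first-order upwind advection (growth is downward, toward larger x)
    adv = -e * np.diff(n, prepend=n[0]) / dx
    n = n + dt * (adv + (b - m) * n)
    t += dt

# The density profile moves downward at speed e while its amplitude grows
# or decays at rate (b - m): a simple travelling-wave pattern of meristems.
print("depth of peak meristem density: %.2f cm" % x[np.argmax(n)])
```

The travelling-wave behaviour the abstract reports corresponds here to the density pulse moving downward at the elongation speed while branching and mortality set its amplitude.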
Abstract:
We consider the relation between so-called continuous localization models (i.e. non-linear stochastic Schrödinger evolutions) and the discrete GRW model of wave function collapse. The former can be understood as a scaling limit of the GRW process. The proof relies on a stochastic Trotter formula, which is of interest in its own right. Our Trotter formula also allows us to complement results on the existence theory of stochastic Schrödinger evolutions by Holevo and Mora/Rebolledo.
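For readers unfamiliar with the discrete GRW process, here is a minimal numerical sketch of its collapse ("hit") mechanism for a single particle on a 1-D grid. The free Schrödinger evolution between hits is omitted for brevity, and the rate lam and width r are illustrative values, not the physical GRW constants.

```python
import numpy as np

rng = np.random.default_rng(0)

# Single-particle GRW jump process on a 1-D grid (illustrative sketch).
# Hits occur at Poisson rate lam; the collapse centre x is drawn with
# probability density ||L_x psi||^2, where L_x multiplies psi by a
# Gaussian of width r centred at x.
nq = 512
q = np.linspace(-10.0, 10.0, nq)
dq = q[1] - q[0]

# initial state: superposition of two well-separated Gaussian packets
psi = np.exp(-(q - 4.0) ** 2) + np.exp(-(q + 4.0) ** 2)
psi = psi / np.sqrt(np.sum(np.abs(psi) ** 2) * dq)

lam, r, t_max = 0.5, 1.0, 20.0   # hit rate, width, total time (made up)

t = 0.0
while True:
    t += rng.exponential(1.0 / lam)   # waiting time to the next GRW hit
    if t > t_max:
        break
    # probability density of the collapse centre: ||L_x psi||^2
    prob = np.array([np.sum(np.exp(-(q - x) ** 2 / r ** 2) * np.abs(psi) ** 2)
                     for x in q])
    x_c = rng.choice(q, p=prob / prob.sum())
    # apply the localization operator and renormalize
    psi = psi * np.exp(-(q - x_c) ** 2 / (2 * r ** 2))
    psi = psi / np.sqrt(np.sum(np.abs(psi) ** 2) * dq)

print("final mean position: %.2f" % (np.sum(q * np.abs(psi) ** 2) * dq))
```

Each hit localizes the superposition near one of the packets; the continuous localization models discussed in the paper arise when many weak hits per unit time replace these rare strong ones.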
Abstract:
The Fredholm properties of Toeplitz operators on the Bergman space A^2 have been well known for continuous symbols since the 1970s. We investigate the case p = 1 with continuous symbols under a mild additional condition, namely that of logarithmic vanishing mean oscillation in the Bergman metric. Most differences are related to boundedness properties of Toeplitz operators acting on A^p that arise when we no longer have 1 < p < ∞.
Abstract:
Data assimilation refers to the problem of finding trajectories of a prescribed dynamical model in such a way that the output of the model (usually some function of the model states) follows a given time series of observations. Typically, though, these two requirements cannot both be met at the same time: tracking the observations is not possible without the trajectory deviating from the proposed model equations, while adherence to the model requires deviations from the observations. Thus, data assimilation faces a trade-off. In this contribution, the sensitivity of the data assimilation with respect to perturbations in the observations is identified as the parameter which controls the trade-off. A relation between the sensitivity and the out-of-sample error is established, which allows the latter to be calculated under operational conditions. A minimum out-of-sample error is proposed as a criterion to set an appropriate sensitivity and to settle the discussed trade-off. Two approaches to data assimilation are considered, namely variational data assimilation and Newtonian nudging, also known as synchronization. Numerical examples demonstrate the feasibility of the approach.
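The trade-off described here can be made concrete with a toy experiment: the sketch below applies Newtonian nudging (synchronization) to a chaotic logistic map and reports, for several nudging gains, both the deviation from the model dynamics and the deviation from the observations. The map, the gains, and the noise level are illustrative choices, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Newtonian nudging on a scalar logistic map. The gain k plays the role
# of the sensitivity parameter: k = 0 ignores observations entirely,
# while k near 1 tracks them (and their noise) closely.

def model(x):
    return 3.9 * x * (1.0 - x)          # chaotic logistic map

n, sigma = 2000, 0.05
truth = np.empty(n)
truth[0] = 0.3
for i in range(n - 1):
    truth[i + 1] = model(truth[i])
obs = truth + sigma * rng.normal(size=n)    # noisy observations

for k in (0.0, 0.2, 0.5, 0.9):
    x = obs[0]
    err_model, err_obs = 0.0, 0.0
    for i in range(n - 1):
        x_new = model(x) + k * (obs[i + 1] - model(x))   # nudged update
        err_model += (x_new - model(x)) ** 2  # deviation from the model
        err_obs += (x_new - obs[i + 1]) ** 2  # deviation from observations
        x = x_new
    print("k=%.1f  model-error=%.4f  obs-error=%.4f"
          % (k, err_model / n, err_obs / n))
```

As k grows, the deviation from the observations shrinks while the deviation from the model dynamics grows, which is exactly the trade-off the abstract identifies.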
Abstract:
The continuous ranked probability score (CRPS) is a frequently used scoring rule. In contrast with many other scoring rules, the CRPS evaluates cumulative distribution functions. An ensemble of forecasts can easily be converted into a piecewise constant cumulative distribution function with steps at the ensemble members. This renders the CRPS a convenient scoring rule for the evaluation of ‘raw’ ensembles, obviating the need for sophisticated ensemble model output statistics or dressing methods prior to evaluation. In this article, a relation between the CRPS and the quantile score is established. The evaluation of ‘raw’ ensembles using the CRPS is discussed in this light. It is shown that latent in this evaluation is an interpretation of the ensemble as quantiles but with non-uniform levels. This needs to be taken into account if the ensemble is evaluated further, for example with rank histograms.
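For the piecewise constant CDF with steps of 1/M at the M members, the CRPS reduces to the standard kernel form CRPS = E|X - y| - 0.5 E|X - X'|, with expectations taken over the empirical ensemble distribution. A minimal sketch, using a made-up five-member ensemble:

```python
import numpy as np

def crps_ensemble(members, y):
    """CRPS of a 'raw' ensemble, i.e. of the piecewise constant CDF with
    steps of 1/M at the M ensemble members (standard kernel form):
        CRPS = mean|X - y| - 0.5 * mean|X - X'|
    where X, X' are independent draws from the empirical distribution."""
    x = np.asarray(members, dtype=float)
    term1 = np.mean(np.abs(x - y))
    term2 = 0.5 * np.mean(np.abs(x[:, None] - x[None, :]))
    return term1 - term2

# usage: a hypothetical 5-member ensemble and a verifying observation
print(crps_ensemble([1.2, 0.8, 1.5, 1.1, 0.9], 1.0))
```

The quantile-score connection discussed in the article enters through the same empirical CDF: the sorted members act as quantile estimates, but at non-uniform levels.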
Abstract:
An analytical model of orographic gravity wave drag due to sheared flow past elliptical mountains is developed. The model extends the domain of applicability of the well-known Phillips model to wind profiles that vary relatively slowly in the vertical, so that they may be treated using a WKB approximation. The model illustrates how linear processes associated with wind profile shear and curvature affect the drag force exerted by the airflow on mountains, and how it is crucial to extend the WKB approximation to second order in the small perturbation parameter for these effects to be taken into account. For the simplest wind profiles, the normalized drag depends only on the Richardson number, Ri, of the flow at the surface and on the aspect ratio, γ, of the mountain. For a linear wind profile, the drag decreases as Ri decreases, and this variation is faster when the wind is across the mountain than when it is along the mountain. For a wind that rotates with height maintaining its magnitude, the drag generally increases as Ri decreases, by an amount depending on γ and on the incidence angle. The results from WKB theory are compared with exact linear results and also with results from a non-hydrostatic nonlinear numerical model, showing generally encouraging agreement down to values of Ri of order one.
Abstract:
In 'Avalanche', an object is lowered, with players staying in contact with it throughout. Normally the task is easily accomplished. However, with larger groups counter-intuitive behaviours appear. The paper proposes a formal theory for the underlying causal mechanisms. The aim is not only to provide an explicit, testable hypothesis for the source of the observed modes of behaviour, but also to exemplify the contribution that formal theory building can make to understanding complex social phenomena. Mapping reveals the importance of geometry to the Avalanche game; each player has a pair of balancing loops, one involved in lowering the object, the other ensuring contact. For more players, sets of balancing loops interact and these can allow dominance by reinforcing loops, causing the system to chase upwards towards an ever-increasing goal. However, a series of other effects concerning human physiology and behaviour (HPB) is posited as playing a role. The hypothesis is therefore rigorously tested using simulation. For simplicity a 'One Degree of Freedom' case is examined, allowing all of the effects to be included whilst rendering the analysis more transparent. Formulation and experimentation with the model gives insight into the behaviours. Multi-dimensional rate/level analysis indicates that there is only a narrow region in which the system is able to move downwards. Model runs reproduce the single 'desired' mode of behaviour and all three of the observed 'problematic' ones. Sensitivity analysis gives further insight into the system's modes and their causes. The behaviour is seen to arise only when the geometric effects apply (number of players greater than degrees of freedom of the object) in combination with a range of HPB effects. An analogy exists between the co-operative behaviour required here and various examples: conflicting strategic objectives in organizations, the Prisoner's Dilemma, and integrated bargaining situations. Additionally, the game may be relatable in more direct algebraic terms to situations involving companies in which the resulting behaviours are mediated by market regulations. Finally, comment is offered on the inadequacy of some forms of theory building, and the case is made for formal theory building involving the use of models, analysis and plausible explanations to create deep understanding of social phenomena.
Abstract:
We model the behavior of rational forward-looking agents in a spatial economy. The economic geography structure is built on the racetrack economy of Fujita et al. (1999). Workers choose optimally what to consume in each period, as well as which spatial itinerary to follow in the geographical space. The spatial extent of the resulting agglomerations increases with the taste for variety and the expenditure share on manufactured goods, and decreases with transport costs. Because forward-looking agents anticipate the future formation of agglomerations, they are more responsive to spatial utility differentials than myopic agents. As a consequence, the emerging agglomerations are larger under perfect foresight spatial adjustments than under myopic ones.
Abstract:
Existing numerical characterizations of the optimal income tax have been based on a limited number of model specifications. As a result, they do not reveal which properties are general. We determine the optimal tax in the quasi-linear model under weaker assumptions than have previously been used; in particular, we remove the assumption of a lower bound on the utility of zero consumption and the need to permit negative labor incomes. A Monte Carlo analysis is then conducted in which economies are selected at random and the optimal tax function constructed. The results show that in a significant proportion of economies the marginal tax rate rises at low skills and falls at high. The average tax rate is equally likely to rise or fall with skill at low skill levels, rises in the majority of cases in the centre of the skill range, and falls at high skills. These results are consistent across all the specifications we test. We then extend the analysis to show that these results also hold for Cobb-Douglas utility.
Abstract:
The extent and thickness of the Arctic sea ice cover have decreased dramatically in the past few decades, with minima in sea ice extent in September 2005 and 2007. These minima were not predicted by the IPCC AR4 report, suggesting that the sea ice component of climate models should represent the processes controlling the sea ice mass balance more realistically. One of the processes poorly represented in sea ice models is the formation and evolution of melt ponds. Melt ponds accumulate on the surface of sea ice from snow and sea ice melt, and their presence reduces the albedo of the ice cover, leading to further melt. Toward the end of the melt season, melt ponds cover up to 50% of the sea ice surface. We have developed a melt pond evolution theory. Here, we have incorporated this melt pond theory into the Los Alamos CICE sea ice model, which has required us to include the refreezing of melt ponds. We present results showing that the presence, or otherwise, of a representation of melt ponds has a significant effect on the predicted sea ice thickness and extent. We also present a sensitivity study to uncertainty in the sea ice permeability, the number of thickness categories in the model representation, the meltwater redistribution scheme, and the pond albedo. We conclude with a recommendation that our melt pond scheme be included in sea ice models, and that the number of thickness categories be increased and concentrated at lower thicknesses.
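To illustrate the albedo feedback that makes melt ponds matter, here is a deliberately crude toy model, not the CICE scheme or the authors' pond theory: pond fraction lowers the surface albedo, the extra absorbed shortwave melts ice, and part of the meltwater feeds the ponds. Every number below is invented for illustration.

```python
# Toy pond-albedo feedback (NOT the CICE melt pond scheme). All
# parameter values are made up; the point is only the feedback loop:
# more ponds -> lower albedo -> more absorbed shortwave -> more melt.

alb_ice, alb_pond = 0.65, 0.25    # bare-ice and pond albedos
F_sw = 200.0                      # incident shortwave flux, W/m^2
L_melt = 3.0e8                    # energy to melt 1 m^3 of ice, J/m^3
c_pond = 0.3                      # meltwater fraction retained in ponds

h, f = 2.0, 0.05                  # ice thickness (m), pond fraction
dt, t_end = 3600.0, 30 * 86400.0  # one-hour steps over 30 days

t = 0.0
while t < t_end and h > 0.0:
    albedo = (1.0 - f) * alb_ice + f * alb_pond
    melt = (1.0 - albedo) * F_sw / L_melt       # m of ice melted per second
    h -= melt * dt
    f = min(0.5, f + c_pond * melt * dt / 0.1)  # ponds spread with melt,
    t += dt                                     # capped near 50% coverage

print("thickness after 30 days: %.2f m, pond fraction: %.2f" % (h, f))
```

Even this caricature shows why omitting ponds biases thickness: once ponds spread, the same shortwave flux melts noticeably more ice.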
Abstract:
A simple four-dimensional assimilation technique, called Newtonian relaxation, has been applied to the Hamburg climate model (ECHAM) to enable comparison of model output with observations for short periods of time. The prognostic model variables vorticity, divergence, temperature, and surface pressure have been relaxed toward European Centre for Medium-Range Weather Forecasts (ECMWF) global meteorological analyses. Several experiments have been carried out in which the values of the relaxation coefficients were varied to find out which values are most suitable for our purpose. To be able to use the method for validation of model physics or chemistry, good agreement of the model-simulated mass and wind fields is required. In addition, the model physics should not be disturbed too strongly by the relaxation forcing itself. Both aspects have been investigated. Good agreement with basic observed quantities, like wind, temperature, and pressure, is obtained for most simulations in the extratropics. Derived variables, like precipitation and evaporation, have been compared with ECMWF forecasts and observations. Agreement for these variables is weaker than for the basic observed quantities. Nevertheless, considerable improvement is obtained relative to a control run without assimilation. Differences between tropics and extratropics are smaller than for the basic observed quantities. Results also show that precipitation and evaporation are affected by a sort of continuous spin-up introduced by the relaxation: the bias (ECMWF minus ECHAM) increases with increasing relaxation forcing. Consistent with this result, we found that with increasing relaxation forcing the vertical exchange of tracers by turbulent boundary layer mixing and, to a lesser extent, by convection is reduced.
Abstract:
The relationship between price volatility and competition is examined. Atheoretic vector autoregressions on farm prices of wheat and retail prices of wheat derivatives (flour, bread, pasta, bulgur and cookies) are compared to results from a dynamic, simultaneous-equations model with theory-based farm-to-retail linkages. Analytical results yield insights about the number of firms and its impact on demand- and supply-side multipliers, but the applications to Turkish time series (1988:1-1996:12) yield mixed results.
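As a sketch of the atheoretic side of this comparison, the snippet below fits a VAR to synthetic monthly series standing in for the farm and retail price data (the Turkish 1988:1-1996:12 series are not reproduced here), using the statsmodels VAR API.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(2)

# Synthetic stand-ins for farm wheat and retail flour prices: a
# random-walk farm price with lagged pass-through to the retail price.
n = 108  # nine years of monthly observations, as in 1988:1-1996:12
wheat = np.cumsum(rng.normal(0.2, 1.0, n))
flour = 0.6 * np.roll(wheat, 1) + rng.normal(0.0, 0.5, n)

# difference to (approximate) stationarity before fitting the VAR
data = pd.DataFrame({"wheat": wheat, "flour": flour}).diff().dropna()

model = VAR(data)
results = model.fit(maxlags=6, ic="aic")   # lag order chosen by AIC
print(results.summary())

# impulse responses trace how a farm-price shock passes to retail prices
irf = results.irf(12)
```

A theory-based simultaneous-equations alternative would replace the unrestricted lag matrices with farm-to-retail linkages derived from the market structure, which is the comparison the paper performs.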
Abstract:
This paper aims to develop a mathematical model based on semi-group theory that makes it possible to improve quality of service (QoS), including reducing the carbon footprint, in a pervasive environment of a Mobile Virtual Network Operator (MVNO). The paper generalises an interrelationship Machine-to-Machine (M2M) mathematical model based on semi-group theory, and demonstrates that, using available technology together with a solid mathematical model, it is possible to streamline relationships between building agents and to control pervasive spaces so as to reduce the carbon footprint through the reduction of greenhouse gas (GHG) emissions.
Abstract:
We discuss the modeling of dielectric responses of electromagnetically excited networks which are composed of a mixture of capacitors and resistors. Such networks can be employed as lumped-parameter circuits to model the response of composite materials containing conductive and insulating grains. The dynamics of the excited network systems are studied using a state space model derived from a randomized incidence matrix. Time and frequency domain responses from synthetic data sets generated from state space models are analyzed for the purpose of estimating the fraction of capacitors in the network. Good results were obtained by using either the time-domain response to a pulse excitation or impedance data at selected frequencies. A chemometric framework based on the Successive Projections Algorithm (SPA) enables the construction of multiple linear regression (MLR) models which can efficiently determine the ratio of conductive to insulating components in composite material samples. The proposed method avoids restrictions commonly associated with Archie’s law, the application of percolation theory or Kohlrausch-Williams-Watts models, and is applicable to experimental results generated by either time domain transient spectrometers or continuous-wave instruments. Furthermore, it is quite generic and applicable to tomography and acoustics, as well as to other spectroscopies such as nuclear magnetic resonance and electron paramagnetic resonance, and should therefore be of general interest across the dielectrics community.
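A simplified sketch of the SPA-plus-MLR pipeline mentioned above: a greedy successive-projections selection of low-collinearity variables followed by ordinary least-squares regression. The synthetic data, the number of selected variables, and the "informative frequency" indices are all invented for illustration.

```python
import numpy as np

def spa_select(X, n_vars, start=0):
    """Successive Projections Algorithm (simplified sketch): greedily pick
    columns of X with the largest norm after projecting out the columns
    already selected, yielding a low-collinearity variable subset."""
    P = X.astype(float).copy()
    selected = [start]
    for _ in range(n_vars - 1):
        v = P[:, selected[-1]]
        # project every column onto the orthogonal complement of v
        P = P - np.outer(v, v @ P) / (v @ v)
        norms = np.linalg.norm(P, axis=0)
        norms[selected] = -1.0            # never pick a column twice
        selected.append(int(np.argmax(norms)))
    return selected

rng = np.random.default_rng(3)
# synthetic stand-in: impedance magnitudes at 50 frequencies for 40
# samples, with the capacitor fraction y hidden in a few frequencies
y = rng.uniform(0.1, 0.9, 40)
X = rng.normal(size=(40, 50))
X[:, [5, 17, 33]] += np.outer(y, [3.0, -2.0, 1.5])

cols = spa_select(X, n_vars=3)
A = np.column_stack([np.ones(40), X[:, cols]])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)  # MLR on selected variables
print("selected frequencies:", cols)
```

The selection step is what lets a plain MLR model work on highly collinear spectra, which is the role SPA plays in the chemometric framework described.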
Abstract:
Despite many decades investigating scalp recordable 8–13-Hz (alpha) electroencephalographic activity, no consensus has yet emerged regarding its physiological origins or its functional role in cognition. Here we outline a detailed, physiologically meaningful theory for the genesis of this rhythm that may provide important clues to its functional role. In particular, we find that electroencephalographically plausible model dynamics, obtained with physiologically admissible parameterisations, reveal a cortex perched on the brink of stability, which when perturbed gives rise to a range of unanticipated complex dynamics that include 40-Hz (gamma) activity. Preliminary experimental evidence, involving the detection of weak nonlinearity in resting EEG using an extension of the well-known surrogate data method, suggests that nonlinear (deterministic) dynamics are more likely to be associated with weakly damped alpha activity. Thus, rather than being an idling rhythm, the “alpha rhythm” may be more profitably conceived of as a readiness rhythm.
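The surrogate data method referred to, in its standard phase-randomization variant rather than the authors' extension, is easy to sketch: generate surrogates that preserve the power spectrum but destroy nonlinear structure, then compare a nonlinear statistic on the original series against its surrogate distribution. Below, a noisy logistic map stands in for an EEG epoch; all settings are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

def phase_randomized_surrogate(x):
    """Surrogate with the same power spectrum (hence the same linear
    autocorrelation) as x, but randomized Fourier phases, destroying
    any nonlinear or deterministic structure."""
    spec = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, spec.size)
    phases[0] = 0.0            # keep the DC component real
    if x.size % 2 == 0:
        phases[-1] = 0.0       # keep the Nyquist component real
    return np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n=x.size)

def time_asymmetry(x):
    """Third-order time-reversal asymmetry: a simple nonlinear statistic,
    near zero for linear Gaussian (hence time-reversible) processes."""
    d = x[1:] - x[:-1]
    return np.mean(d ** 3) / np.mean(d ** 2) ** 1.5

# noisy logistic map as a stand-in for a nonlinear deterministic epoch
x = np.empty(2048)
x[0] = 0.4
for i in range(x.size - 1):
    x[i + 1] = 3.8 * x[i] * (1.0 - x[i])
x += 0.01 * rng.normal(size=x.size)

stat = time_asymmetry(x)
surr = np.array([time_asymmetry(phase_randomized_surrogate(x))
                 for _ in range(99)])
# if the original statistic falls outside the surrogate distribution,
# a purely linear stochastic origin is rejected for this series
print("original: %.3f  surrogate range: [%.3f, %.3f]"
      % (stat, surr.min(), surr.max()))
```

With 99 surrogates, the original statistic lying outside their range corresponds to a two-sided test at roughly the 2% level, the usual way such evidence for weak nonlinearity in resting EEG is quantified.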