967 results for Models : mixing length
Abstract:
The term reliability of an equipment or device is often meant to indicate the probability that it carries out the functions expected of it adequately, without failure and within specified performance limits, at a given age and for a desired mission time, when put to use under the designated application and operating environmental stress. The approaches employed in reliability studies can be broadly classified as probabilistic and deterministic. The main interest in the former is to devise tools and methods to identify the random mechanism governing the failure process through a proper statistical framework, while the latter addresses the question of finding the causes of failure and the steps needed to reduce individual failures, thereby enhancing reliability. In the probabilistic approach, to which the present study subscribes, the concept of a life distribution, a mathematical idealisation that describes the failure times, is fundamental, and a basic question a reliability analyst has to settle is the form of the life distribution. It is for this reason that a major share of the literature on the mathematical theory of reliability is focussed on methods of arriving at reasonable models of failure times and on identifying the failure patterns that induce such models. The application of the methodology of lifetime distributions is not confined to the assessment of the endurance of equipment and systems; it ranges over a wide variety of scientific investigations where the word lifetime may not refer to the length of life in the literal sense, but can be conceived in its most general form as a non-negative random variable. Thus the tools developed in connection with modelling lifetime data have found applications in other areas of research such as actuarial science, engineering, the biomedical sciences, economics, and extreme value theory.
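For reference (standard definitions, not specific to this thesis): if the lifetime is modelled as a non-negative random variable T with density f, the reliability function and the failure rate are

\[ R(t) = P(T > t), \qquad h(t) = \frac{f(t)}{R(t)} = -\frac{d}{dt}\ln R(t), \qquad t \ge 0, \]

and settling the form of the life distribution amounts to choosing R, or equivalently h.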
Abstract:
The objective of the study of "Queueing models with vacations and working vacations" was twofold: to minimize the server idle time and to improve the efficiency of the service system. Keeping this in mind, we considered queueing models in different set-ups in this thesis. Chapter 1 introduced the concepts and techniques used in the thesis and also provided a summary of the work done. In chapter 2 we considered an M/M/2 queueing model, where one of the two heterogeneous servers takes multiple vacations. We studied the performance of the system with the help of busy period analysis and computation of the mean waiting time of a customer in the stationary regime. A conditional stochastic decomposition of the queue length was derived. To improve the efficiency of this system we came up with a modified model in chapter 3, in which the vacationing server attends to customers during vacation, but at a slower service rate. Chapter 4 analyzed a working vacation queueing model in a more general set-up. The introduction of an N-policy makes this MAP/PH/1 model different from all working vacation models available in the literature. A detailed performance analysis of the model was provided with the help of measures such as the mean waiting time of a customer who gets service in normal mode and in vacation mode.
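A minimal simulation sketch of the chapter 2 setting (the rates, and the rule that server 1 takes a lone customer, are illustrative assumptions, not the thesis's exact model); it estimates the mean number in system and, via Little's law, the mean sojourn time:

```python
import random

def mm2_vacation_sim(lam=1.0, mu1=1.5, mu2=1.0, theta=0.5,
                     t_end=2.0e5, seed=1):
    """Continuous-time (Gillespie-style) simulation of an M/M/2 queue
    with heterogeneous servers in which server 2 takes multiple
    vacations: after a service completion that leaves no customer
    waiting for it, and at every vacation completion that finds no
    waiting customer, it begins another exp(theta) vacation.
    Returns the mean number in system L and W = L/lam (Little's law)."""
    rng = random.Random(seed)
    t, n, vacation, area = 0.0, 0, True, 0.0
    while t < t_end:
        rates = [lam,                                        # arrival
                 mu1 if n >= 1 else 0.0,                     # server 1 busy
                 mu2 if (not vacation and n >= 2) else 0.0,  # server 2 busy
                 theta if vacation else 0.0]                 # vacation ends
        total = sum(rates)
        dt = rng.expovariate(total)
        area += n * dt
        t += dt
        u, ev = rng.random() * total, 0
        while u > rates[ev]:
            u -= rates[ev]
            ev += 1
        if ev == 0:
            n += 1
        elif ev == 1:
            n -= 1
        elif ev == 2:
            n -= 1
            if n < 2:                # nobody left waiting for server 2
                vacation = True
        else:                        # vacation completion epoch
            if n >= 2:
                vacation = False     # else another vacation starts
    L = area / t_end
    return L, L / lam

print(mm2_vacation_sim())
```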
Abstract:
In this paper, we study some dynamic generalized information measures between a true distribution and an observed (weighted) distribution, which are useful in life length studies. Further, some bounds and inequalities related to these measures are also studied.
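One standard example of such a measure (given for orientation; the paper's generalized family need not reduce to it) is the dynamic Kullback-Leibler divergence between the residual lives under a true distribution F and an observed distribution G:

\[ I(F,G;t) = \int_t^{\infty} \frac{f(x)}{\bar F(t)} \,\ln\!\left(\frac{f(x)/\bar F(t)}{g(x)/\bar G(t)}\right) dx, \]

where \(\bar F = 1-F\) and \(\bar G = 1-G\); a weighted observation corresponds to g(x) proportional to w(x)f(x) for a weight function w.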
Abstract:
In this paper, we study the relationship between the failure rate and the mean residual life of doubly truncated random variables. Accordingly, we develop characterizations for the exponential, Pareto II and beta distributions. Further, we generalize the identities for the Pearson and the exponential family of distributions given respectively in Nair and Sankaran (1991) and Consul (1995). Applications of these measures in the context of length-biased models are also explored.
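As background, in the untruncated case the failure rate h(t) and the mean residual life m(t) = E(X - t | X > t) are linked by the classical identity

\[ h(t) = \frac{1 + m'(t)}{m(t)}, \]

so either function determines the other; the doubly truncated relationships studied in the paper generalize identities of this kind.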
Abstract:
It is well known that the density of a dissolved substance can decisively determine the direction and strength of its movement in the subsurface. Numerous studies have shown that the distribution of permeabilities in a porous medium can amplify or attenuate these density effects. How this coupled effect influences the mixing of two fluids was investigated in this work, coupling the experimental model with both the numerical and the analytical model. The stochastic theory of macrodispersion based on perturbation theory was further developed in this work for the case of transverse macrodispersion. For the case of stable stratification, a series of carefully controlled two-dimensional experiments was carried out on a stochastically heterogeneous model aquifer in a model tank (10 m x 1.2 m x 0.1 m) at the University of Kassel. Test series with varying concentration differences (250 ppm to 100,000 ppm) and flow velocities (u = 1 m/d to 8 m/d) were conducted on three differently anisotropically packed porous media with varying variances and correlations of the lognormally distributed permeabilities. The steady-state spatial concentration distribution of the spreading saltwater plume was measured by means of electrical conductivity, and the dispersion was calculated from the height difference between the 84% and 16% relative-concentration breakthroughs. In parallel, a numerical model was set up with the density-dependent finite-element flow and transport program SUTRA. With the calibrated numerical model, predictions for possible transport scenarios, sensitivity analyses and stochastic simulations using the Monte Carlo method were carried out. The flow velocity was set, in both the experimental and the numerical model, via constant pressure boundaries at the inlet and outlet tanks. A strong sensitivity of the spatial concentration distribution to local pressure variations became apparent. The investigations showed that, with increasing distance from the inflow edge, the concentration plume approaches an effective value in a wave-like manner, from which the macrodispersivity can be determined. Visible non-ergodic effects appeared, i.e. strong deviations of the second spatial moments of the concentration distribution in the deterministic experiments from the expected values of the stochastic theory. The transverse macrodispersivity increased in proportion to the variance and correlation of the lognormal permeability distribution, and in inverse proportion to the flow velocity and the density difference of the two fluids. Starting from the density-dependent macrodispersion tensor developed by Welty et al. [2003] by means of perturbation theory, the stochastic formula for transverse macrodispersion could be further developed in this work and verified both experimentally and numerically.
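A minimal sketch (illustrative helper names, assuming an erf-shaped steady profile as in Fickian transverse mixing) of the 84%/16% evaluation described above:

```python
import numpy as np

def mixing_width(z, c_rel):
    """Half the height difference between the 84% and 16% relative-
    concentration levels; for an erf-shaped profile this equals the
    Gaussian mixing width sigma.  c_rel must increase with z
    (np.interp requires monotonically increasing sample points)."""
    z84 = np.interp(0.84, c_rel, z)
    z16 = np.interp(0.16, c_rel, z)
    return 0.5 * abs(z84 - z16)

def transverse_dispersivity(x, sigma):
    """alpha_T from the Fickian relation sigma**2 = 2 * alpha_T * x,
    where x is the travel distance from the inflow edge."""
    return sigma**2 / (2.0 * x)
```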
Abstract:
The theory of compositional data analysis is often focused on the composition only. In practical applications, however, we often treat a composition together with covariables on some other scale. This contribution systematically gathers and develops statistical tools for this situation. For instance, for the graphical display of the dependence of a composition on a categorical variable, a colored set of ternary diagrams might be a good idea for a first look at the data, but it will quickly hide important aspects if the composition has many parts or takes extreme values. On the other hand, colored scatterplots of ilr components may not be very instructive for the analyst if the conventional, black-box ilr is used. Thinking in terms of the Euclidean structure of the simplex, we suggest setting up appropriate projections which on one side show the compositional geometry and on the other side are still comprehensible by a non-expert analyst, readable for all locations and scales of the data. This is done, e.g., by defining special balance displays with carefully selected axes. Following this idea, we need to ask systematically how to display, explore, describe, and test the relation of a composition to complementary or explanatory data on categorical, real, ratio or again compositional scales. This contribution shows that it is sufficient to use some basic concepts and very few advanced tools from multivariate statistics (principal covariances, multivariate linear models, trellis or parallel plots, etc.) to build appropriate procedures for all these combinations of scales. This has some fundamental implications for their software implementation, and for how they might be taught to analysts who are not already experts in multivariate analysis.
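A minimal sketch of one such interpretable projection, a single balance coordinate computed directly from its definition (plain numpy; the group indices are illustrative):

```python
import numpy as np

def balance(x, num, den):
    """One ilr 'balance' coordinate of a composition x (positive parts):
    b = sqrt(r*s/(r+s)) * ln( g(x[num]) / g(x[den]) ),
    where g() is the geometric mean and r, s are the group sizes."""
    x = np.asarray(x, dtype=float)
    gn = np.exp(np.mean(np.log(x[num])))   # geometric mean, numerator group
    gd = np.exp(np.mean(np.log(x[den])))   # geometric mean, denominator group
    r, s = len(num), len(den)
    return np.sqrt(r * s / (r + s)) * np.log(gn / gd)

# e.g. a 3-part composition: balance of part 0 against parts 1 and 2
print(balance([0.6, 0.3, 0.1], num=[0], den=[1, 2]))
```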
Abstract:
Across Europe, elevated phosphorus (P) concentrations in lowland rivers have made them particularly susceptible to eutrophication. This is compounded in the southern and central UK by increasing pressures on water resources, which may be further enhanced by the potential effects of climate change. The EU Water Framework Directive requires an integrated approach to water resources management at the catchment scale and highlights the need for modelling tools that can distinguish the relative contributions of multiple nutrient sources and are consistent with the information content of the available data. Two such models are introduced and evaluated within a stochastic framework using daily flow and total phosphorus concentrations recorded in a clay catchment typical of many areas of the lowland UK. Both models disaggregate empirical annual load estimates, derived from land use data, as a function of surface/near-surface runoff generated using a simple conceptual rainfall-runoff model. Estimates of the daily load from agricultural land, together with those from baseflow and point sources, feed into an in-stream routing algorithm. The first model assumes constant concentrations in runoff via surface/near-surface pathways and incorporates an additional P store in the river-bed sediments, depleted above a critical discharge, to explicitly simulate resuspension. The second model, which is simpler, simulates P concentrations as a function of surface/near-surface runoff, thus emphasising the influence of non-point source loads during flow peaks and the mixing of baseflow and point sources during low flows. The temporal consistency of parameter estimates, and thus the suitability of each approach, is assessed dynamically following a new approach based on Monte Carlo analysis.
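A minimal sketch of the second, simpler model's mixing structure (illustrative names and a constant point-source load; not the calibrated model):

```python
def stream_tp(q_runoff, q_base, c_runoff, c_base, point_load):
    """Flow-weighted daily in-stream total P concentration from three
    sources: surface/near-surface runoff, baseflow, and a constant
    point-source mass flux (point_load, in mass per unit time).
    Non-point loads dominate at high runoff; baseflow and point
    sources control concentrations at low flows."""
    q_total = q_runoff + q_base
    load = q_runoff * c_runoff + q_base * c_base + point_load
    return load / q_total
```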
Abstract:
This paper presents a hybrid control strategy integrating dynamic neural networks and feedback linearization into a predictive control scheme. Feedback linearization is an important nonlinear control technique which transforms a nonlinear system into a linear system using nonlinear transformations and a model of the plant. In this work, empirical models based on dynamic neural networks have been employed. Dynamic neural networks are mathematical structures described by differential equations, which can be trained to approximate general nonlinear systems. A case study based on a mixing process is presented.
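A minimal sketch of the feedback-linearization step on a toy scalar plant; f_hat and g_hat stand in for trained plant models such as dynamic neural networks (the names are illustrative, not from the paper):

```python
def fb_lin_control(x, v, f_hat, g_hat):
    """Feedback-linearizing control for a control-affine system
        dx/dt = f(x) + g(x) * u     (single input, relative degree 1).
    Choosing u = (v - f(x)) / g(x) yields the linear dynamics dx/dt = v."""
    g = g_hat(x)
    assert abs(g) > 1e-8, "feedback linearization needs g(x) != 0"
    return (v - f_hat(x)) / g

# toy plant dx/dt = -x**3 + u, driven to behave like dx/dt = -2*x
f_hat = lambda x: -x**3
g_hat = lambda x: 1.0
x, dt = 1.0, 0.01
for _ in range(500):
    u = fb_lin_control(x, v=-2.0 * x, f_hat=f_hat, g_hat=g_hat)
    x += dt * (f_hat(x) + g_hat(x) * u)   # closed loop is linear
print(round(x, 4))                        # decays toward 0
```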
Abstract:
To test the effectiveness of stochastic single-chain models in describing the dynamics of entangled polymers, we systematically compare one such model, the slip-spring model, to a multichain model solved using stochastic molecular dynamics (MD) simulations (the Kremer-Grest model). The comparison involves investigating whether the single-chain model can adequately describe both a microscopic dynamical and a macroscopic rheological quantity for a range of chain lengths. Choosing a particular chain length in the slip-spring model, the parameter values that best reproduce the mean-square displacement of a group of monomers are determined by fitting to MD data. Using the same set of parameters, we then test whether the predictions of the mean-square displacements for other chain lengths agree with the MD calculations. We follow this with a comparison of the time-dependent stress relaxation moduli obtained from the two models for a range of chain lengths. After identifying a limitation of the original slip-spring model in describing the static structure of the polymer chain as seen in MD, we remedy it by introducing a pairwise repulsive potential between the monomers in the chains. Poor agreement of the mean-square monomer displacements at short times can be rectified by the use of generalized Langevin equations for the dynamics, which results in significantly improved agreement.
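A minimal sketch (plain numpy, illustrative) of the microscopic quantity being compared, the time-origin-averaged mean-square displacement of monomers:

```python
import numpy as np

def mean_square_displacement(traj):
    """Mean-square displacement averaged over time origins and monomers.
    traj has shape (n_frames, n_monomers, 3); entry [lag] is the MSD at
    that lag.  A plain O(n_frames**2) estimator kept simple for clarity."""
    n = traj.shape[0]
    msd = np.zeros(n)
    for lag in range(1, n):
        disp = traj[lag:] - traj[:-lag]        # displacements at this lag
        msd[lag] = np.mean(np.sum(disp**2, axis=-1))
    return msd
```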
Abstract:
This paper seeks to illustrate the point that physical inconsistencies between thermodynamics and dynamics usually introduce nonconservative production/destruction terms in the local total energy balance equation in numerical ocean general circulation models (OGCMs). Such terms potentially give rise to undesirable forces and/or diabatic terms in the momentum and thermodynamic equations, respectively, which could explain some of the observed errors in simulated ocean currents and water masses. In this paper, a theoretical framework is developed to provide a practical method to determine such nonconservative terms, which is illustrated in the context of a relatively simple form of the hydrostatic Boussinesq primitive equation used in early versions of OGCMs, for which at least four main potential sources of energy nonconservation are identified; they arise from: (1) the “hanging” kinetic energy dissipation term; (2) assuming potential or conservative temperature to be a conservative quantity; (3) the interaction of the Boussinesq approximation with the parameterizations of turbulent mixing of temperature and salinity; (4) some adiabatic compressibility effects due to the Boussinesq approximation. In practice, OGCMs also possess spurious numerical energy sources and sinks, but they are not explicitly addressed here. Apart from (1), the identified nonconservative energy sources/sinks are not sign definite, allowing for possible widespread cancellation when integrated globally. Locally, however, these terms may be of the same order of magnitude as actual energy conversion terms thought to occur in the oceans. Although the actual impact of these nonconservative energy terms on the overall accuracy and physical realism of the oceans is difficult to ascertain, an important issue is whether they could impact on transient simulations, and on the transition toward different circulation regimes associated with a significant reorganization of the different energy reservoirs. Some possible solutions for improvement are examined. It is thus found that the term (2) can be substantially reduced by at least one order of magnitude by using conservative temperature instead of potential temperature. Using the anelastic approximation, however, which was initially thought as a possible way to greatly improve the accuracy of the energy budget, would only marginally reduce the term (4) with no impact on the terms (1), (2) and (3).
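For orientation (a generic diagnostic form, not the paper's specific derivation), the framework amounts to writing the local total energy budget with an explicit residual term,

\[ \frac{\partial E}{\partial t} + \nabla\cdot\mathbf{F} = S, \qquad E = \rho\left(\tfrac{1}{2}\lvert\mathbf{u}\rvert^{2} + gz + e\right), \]

where \(\mathbf{F}\) is the total energy flux and e the specific internal energy; an exactly conservative model has S = 0 pointwise, whereas the four sources listed above contribute nonzero and generally non-sign-definite terms to S.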
Abstract:
Most existing models of language production and speech motor control do not explicitly address how language requirements affect speech motor functions, as these domains are usually treated as separate and independent from one another. This investigation compared lip movements during bilabial closure between five individuals with mild aphasia and five age- and gender-matched control speakers when the linguistic characteristics of the stimuli were varied by increasing the number of syllables. Upper and lower lip movement data were collected for mono-, bi- and tri-syllabic nonword sequences using an AG 100 EMMA system. Each task was performed under both normal and fast rate conditions. Single articulator kinematic parameters (peak velocity, amplitude, duration, and cyclic spatio-temporal index) were measured to characterize lip movements. Results revealed that compared to control speakers, individuals with aphasia showed significantly longer movement duration and lower movement stability for longer items (bi- and tri-syllables). Moreover, utterance length affected the lip kinematics, in that the monosyllables had smaller peak velocities, smaller amplitudes and shorter durations compared to bi- and tri-syllables, and movement stability was lowest for the tri-syllables. In addition, the rate-induced changes (smaller amplitude and shorter duration with increased rate) were most prominent for the short items (i.e., monosyllables). These findings provide further support for the notion that linguistic changes have an impact on the characteristics of speech movements, and that individuals with aphasia are more affected by such changes than control speakers.
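A minimal sketch of the spatio-temporal index named above, as commonly defined (the sum of standard deviations of amplitude- and time-normalized movement trajectories); the study's exact preprocessing may differ:

```python
import numpy as np

def spatiotemporal_index(cycles, n_points=50):
    """Cyclic spatio-temporal index (STI): each movement cycle is
    amplitude-normalized (z-scored) and time-normalized (resampled to
    n_points), and the STI is the sum over time points of the standard
    deviation across cycles.  Lower STI = more stable movement."""
    norm = []
    for c in cycles:
        c = np.asarray(c, dtype=float)
        z = (c - c.mean()) / c.std()              # amplitude normalization
        t_old = np.linspace(0.0, 1.0, len(z))
        t_new = np.linspace(0.0, 1.0, n_points)
        norm.append(np.interp(t_new, t_old, z))   # time normalization
    return float(np.sum(np.std(np.array(norm), axis=0)))
```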
Abstract:
Many numerical models for weather prediction and climate studies are run at resolutions that are too coarse to resolve convection explicitly, but too fine to justify the local equilibrium assumed by conventional convective parameterizations. The Plant-Craig (PC) stochastic convective parameterization scheme, developed in this paper, solves this problem by removing the assumption that a given grid-scale situation must always produce the same sub-grid-scale convective response. Instead, for each timestep and gridpoint, one of the many possible convective responses consistent with the large-scale situation is randomly selected. The scheme requires as input the large-scale state as opposed to the instantaneous grid-scale state, but must nonetheless be able to account for genuine variations in the large-scale situation. Here we investigate the behaviour of the PC scheme in three-dimensional simulations of radiative-convective equilibrium, demonstrating in particular that the necessary space-time averaging required to produce a good representation of the input large-scale state is not in conflict with the requirement to capture large-scale variations. The resulting equilibrium profiles agree well with those obtained from established deterministic schemes, and with corresponding cloud-resolving model simulations. Unlike the conventional schemes the statistics for mass flux and rainfall variability from the PC scheme also agree well with relevant theory and vary appropriately with spatial scale. The scheme is further shown to adapt automatically to changes in grid length and in forcing strength.
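A minimal sketch of the random-selection idea (parameter names are illustrative; this shows the idea, not the scheme's full closure):

```python
import numpy as np

def sample_convective_mass_flux(M_ls, m_mean, rng):
    """One random sub-grid convective realization consistent with a
    large-scale mass flux M_ls: the number of plumes is Poisson with
    mean M_ls/m_mean, and each cloud's mass flux is drawn from an
    exponential distribution with mean m_mean, so the ensemble mean
    equals M_ls while individual draws fluctuate."""
    n = rng.poisson(M_ls / m_mean)
    return rng.exponential(m_mean, size=n).sum()

rng = np.random.default_rng(0)
draws = [sample_convective_mass_flux(M_ls=0.02, m_mean=2e-3, rng=rng)
         for _ in range(1000)]
print(np.mean(draws))   # close to 0.02, the imposed large-scale value
```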
Abstract:
This study describes the turbulent processes in the upper ocean boundary layer forced by a constant surface stress in the absence of the Coriolis force using large-eddy simulation. The boundary layer that develops has a two-layer structure, a well-mixed layer above a stratified shear layer. The depth of the mixed layer is approximately constant, whereas the depth of the shear layer increases with time. The turbulent momentum flux varies approximately linearly from the surface to the base of the shear layer. There is a maximum in the production of turbulence through shear at the base of the mixed layer. The magnitude of the shear production increases with time. The increase is mainly a result of the increase in the turbulent momentum flux at the base of the mixed layer due to the increase in the depth of the boundary layer. The length scale for the shear turbulence is the boundary layer depth. A simple scaling is proposed for the magnitude of the shear production that depends on the surface forcing and the average mixed layer current. The scaling can be interpreted in terms of the divergence of a mean kinetic energy flux. A simple bulk model of the boundary layer is developed to obtain equations describing the variation of the mixed layer and boundary layer depths with time. The model shows that the rate at which the boundary layer deepens does not depend on the stratification of the thermocline. The bulk model shows that the variation in the mixed layer depth is small as long as the surface buoyancy flux is small.
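For reference, the shear production term of the turbulent kinetic energy budget (standard definition; the scaling proposed in the study builds on it) is

\[ P = -\,\overline{u'w'}\,\frac{\partial \bar U}{\partial z}, \]

which is largest where a strong turbulent momentum flux coincides with strong mean shear, here at the base of the mixed layer.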
Abstract:
Models of root system growth emerged in the early 1970s, and were based on mathematical representations of root length distribution in soil. The last decade has seen the development of more complex architectural models and the use of computer-intensive approaches to study developmental and environmental processes in greater detail. There is a pressing need for predictive technologies that can integrate root system knowledge, scaling from molecular to ensembles of plants. This paper makes the case for more widespread use of simpler models of root systems based on continuous descriptions of their structure. A new theoretical framework is presented that describes the dynamics of root density distributions as a function of individual root developmental parameters such as rates of lateral root initiation, elongation, mortality, and gravitropism. The simulations resulting from such equations can be performed most efficiently in discretized domains that deform as a result of growth, and that can be used to model the growth of many interacting root systems. The modelling principles described help to bridge the gap between continuum and architectural approaches, and enhance our understanding of the spatial development of root systems. Our simulations suggest that root systems develop in travelling wave patterns of meristems, revealing order in otherwise spatially complex and heterogeneous systems. Such knowledge should assist physiologists and geneticists to appreciate how meristem dynamics contribute to the pattern of growth and functioning of root systems in the field.
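A minimal sketch of a continuous density description of the kind advocated here (the equation form and parameter values are assumed for illustration; the paper's framework is considerably richer):

```python
import numpy as np

def simulate_tip_density(n=200, depth=1.0, e=0.02, b=0.05, mu=0.01,
                         dt=0.1, steps=2000):
    """Upwind finite-difference solution of a toy meristem-density model
        da/dt = -e * da/dz + (b - mu) * a
    (downward elongation at rate e, net lateral initiation b minus
    mortality mu), with a constant density held at the soil surface."""
    dz = depth / n
    assert e * dt / dz <= 1.0, "CFL condition for the upwind scheme"
    a = np.zeros(n)
    a[0] = 1.0
    for _ in range(steps):
        grad = np.empty_like(a)
        grad[0] = 0.0                       # no inflow gradient from above
        grad[1:] = a[1:] - a[:-1]           # backward (upwind) difference
        a = a + dt * (-e * grad / dz + (b - mu) * a)
        a[0] = 1.0                          # surface boundary condition
    return a                                # tip density versus depth
```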
Abstract:
A method to solve a quasi-geostrophic two-layer model including the variation of static stability is presented. The divergent part of the wind is incorporated by means of an iterative procedure. The procedure is rather fast, and the computation time is only 60–70% longer than for the usual two-layer model. The method of solution is justified by the conservation of the difference between the gross static stability and the kinetic energy. To eliminate the side-boundary conditions, the experiments have been performed on a zonal channel model. The investigation falls mainly into three parts. The first part (section 5) contains a discussion of the significance of some physically inconsistent approximations. It is shown that physical inconsistencies are rather serious, and for the inconsistent models studied the total kinetic energy increased faster than the gross static stability. In the next part (section 6) we study the effect of a Jacobian difference operator which conserves the total kinetic energy. The use of this operator in two-layer models gives a slight improvement but probably does not have any practical use in short-period forecasts. It is also shown that the energy-conservative operator will change the wave speed in an erroneous way if the wave number or the grid length is large in the meridional direction. In the final part (section 7) we investigate the behaviour of baroclinic waves for some different initial states and for two energy-consistent models, one with constant and one with variable static stability. According to linear theory, the waves adjust rather rapidly in such a way that the temperature wave lags behind the pressure wave independent of the initial configuration. Thus, both models give rise to a baroclinic development even if the initial state is quasi-barotropic. The effect of the variation of static stability is very small; qualitative differences in the development are only observed during the first 12 hours. For an amplifying wave we get a stabilization over the troughs and a destabilization over the ridges.
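The classic example of such an operator is Arakawa's (1966) nine-point Jacobian, whose three-term average conserves both the mean kinetic energy and the mean enstrophy; a compact sketch on a doubly periodic grid (offered as an illustration, not necessarily the operator tested in the paper):

```python
import numpy as np

def arakawa_jacobian(p, q, d):
    """Arakawa (1966) Jacobian J(p, q) = p_x q_y - p_y q_x on a doubly
    periodic grid with spacing d (axis 0 is x, axis 1 is y).  Averaging
    the three basic finite-difference forms suppresses the spurious
    energy growth discussed above."""
    def s(a, di, dj):                 # value of a at (i+di, j+dj), periodic
        return np.roll(np.roll(a, -di, axis=0), -dj, axis=1)
    j1 = ((s(p, 1, 0) - s(p, -1, 0)) * (s(q, 0, 1) - s(q, 0, -1))
        - (s(p, 0, 1) - s(p, 0, -1)) * (s(q, 1, 0) - s(q, -1, 0)))
    j2 = (s(p, 1, 0) * (s(q, 1, 1) - s(q, 1, -1))
        - s(p, -1, 0) * (s(q, -1, 1) - s(q, -1, -1))
        - s(p, 0, 1) * (s(q, 1, 1) - s(q, -1, 1))
        + s(p, 0, -1) * (s(q, 1, -1) - s(q, -1, -1)))
    j3 = (s(q, 0, 1) * (s(p, 1, 1) - s(p, -1, 1))
        - s(q, 0, -1) * (s(p, 1, -1) - s(p, -1, -1))
        - s(q, 1, 0) * (s(p, 1, 1) - s(p, 1, -1))
        + s(q, -1, 0) * (s(p, -1, 1) - s(p, -1, -1)))
    return (j1 + j2 + j3) / (12.0 * d * d)
```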