950 results for Scaling Strategies
Abstract:
This paper is a tutorial introduction to pseudospectral optimal control. With pseudospectral methods, a function is approximated as a linear combination of smooth basis functions, which are often chosen to be Legendre or Chebyshev polynomials. Collocation of the differential-algebraic equations is performed at orthogonal collocation points, which are selected to yield interpolation of high accuracy. Pseudospectral methods directly discretize the original optimal control problem to recast it into a nonlinear programming format. A numerical optimizer is then employed to find approximate local optimal solutions. The paper also briefly describes the functionality and implementation of PSOPT, an open source software package written in C++ that employs pseudospectral discretization methods to solve multi-phase optimal control problems. The software implements the Legendre and Chebyshev pseudospectral methods, and it has useful features such as automatic differentiation, sparsity detection, and automatic scaling. The use of pseudospectral methods is illustrated in two problems taken from the literature on computational optimal control.
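The core approximation step is easy to demonstrate. A minimal Python sketch (PSOPT itself is written in C++; the node count and test function here are arbitrary) fits a Chebyshev series at Chebyshev-Gauss-Lobatto collocation points and exhibits the spectral accuracy that motivates the method:

```python
import numpy as np

# Chebyshev-Gauss-Lobatto collocation nodes on [-1, 1]
N = 16
k = np.arange(N + 1)
nodes = np.cos(np.pi * k / N)

# Approximate a smooth function as a linear combination of Chebyshev
# polynomials fitted at the collocation points.
f = lambda x: np.exp(x) * np.sin(2 * x)
coeffs = np.polynomial.chebyshev.chebfit(nodes, f(nodes), N)

# Evaluate the interpolant on a fine grid and check the error: for
# smooth functions the error decays exponentially with N.
x = np.linspace(-1, 1, 1000)
err = np.max(np.abs(np.polynomial.chebyshev.chebval(x, coeffs) - f(x)))
print(err)  # near machine precision for this smooth f
```

The same idea, applied to the state and control trajectories with the dynamics enforced at the collocation points, yields the nonlinear program described above.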
Abstract:
We present molecular dynamics (MD) and slip-springs model simulations of the chain segmental dynamics in entangled linear polymer melts. The time-dependent behavior of the segmental orientation autocorrelation functions and mean-square segmental displacements is analyzed for both flexible and semiflexible chains, with particular attention paid to the scaling relations among these dynamic quantities. Effective combination of the two simulation methods at different coarse-graining levels allows us to explore the chain dynamics for chain lengths ranging from Z ≈ 2 to 90 entanglements. For a given chain length of Z ≈ 15, the time scales accessed span more than 10 decades, covering all of the interesting relaxation regimes. The obtained time dependence of the monomer mean-square displacement, g1(t), is in good agreement with the tube theory predictions. Results on the first- and second-order segmental orientation autocorrelation functions, C1(t) and C2(t), demonstrate a clear power-law relationship C2(t) ∝ C1(t)^m, with m = 3, 2, and 1 in the initial, free Rouse, and entangled (constrained Rouse) regimes, respectively. The return-to-origin hypothesis, which leads to inverse proportionality between the segmental orientation autocorrelation functions and g1(t) in the entangled regime, is convincingly verified by the simulation result C1(t) ∝ g1(t)^(-1) ∝ t^(-1/4) in the constrained Rouse regime, where for well-entangled chains both C1(t) and g1(t) are rather insensitive to constraint release effects. However, the second-order correlation function, C2(t), shows much stronger sensitivity to constraint release effects and experiences a protracted crossover from the free Rouse to the entangled regime. This crossover region extends for at least one decade in time longer than that of C1(t). The predicted time-scaling behavior C2(t) ∝ t^(-1/4) is observed in slip-springs simulations only at a chain length of 90 entanglements, whereas shorter chains show higher scaling exponents. The reported simulation work can be applied to interpret observations from NMR experiments.
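The two correlators have simple definitions: C1(t) = ⟨P1(u(0)·u(t))⟩ and C2(t) = ⟨P2(u(0)·u(t))⟩ for a unit bond vector u, where P1 and P2 are Legendre polynomials. A minimal Python sketch (a toy ensemble undergoing free rotational diffusion, not an entangled melt) reproduces the m = 3 relation of the initial regime, since for free rotational diffusion C_l(t) = exp(-l(l+1)Dt) implies C2 = C1^3:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ensemble: M unit vectors undergoing free isotropic rotational
# diffusion (small random kicks followed by renormalisation).
M, steps, dt = 5000, 50, 0.02
u = np.tile([0.0, 0.0, 1.0], (M, 1))
u0 = u.copy()

for _ in range(steps):
    u = u + np.sqrt(dt) * rng.standard_normal((M, 3))
    u /= np.linalg.norm(u, axis=1, keepdims=True)

cos = np.sum(u0 * u, axis=1)
C1 = cos.mean()                    # <P1(u(0).u(t))>
C2 = (1.5 * cos**2 - 0.5).mean()   # <P2(u(0).u(t))>
print(C1, C2, C1**3)               # C2 is close to C1**3
```

In the melt, it is precisely the departure from this free-diffusion picture (m dropping to 2 and then 1) that signals the Rouse and constrained Rouse regimes.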
Abstract:
Models of root system growth emerged in the early 1970s, and were based on mathematical representations of root length distribution in soil. The last decade has seen the development of more complex architectural models and the use of computer-intensive approaches to study developmental and environmental processes in greater detail. There is a pressing need for predictive technologies that can integrate root system knowledge, scaling from molecular to ensembles of plants. This paper makes the case for more widespread use of simpler models of root systems based on continuous descriptions of their structure. A new theoretical framework is presented that describes the dynamics of root density distributions as a function of individual root developmental parameters such as rates of lateral root initiation, elongation, mortality, and gravitropism. The simulations resulting from such equations can be performed most efficiently in discretized domains that deform as a result of growth, and that can be used to model the growth of many interacting root systems. The modelling principles described help to bridge the gap between continuum and architectural approaches, and enhance our understanding of the spatial development of root systems. Our simulations suggest that root systems develop in travelling wave patterns of meristems, revealing order in otherwise spatially complex and heterogeneous systems. Such knowledge should assist physiologists and geneticists to appreciate how meristem dynamics contribute to the pattern of growth and functioning of root systems in the field.
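The continuous description can be caricatured in a few lines. A minimal sketch (all rates and grid parameters hypothetical; one root system, one spatial dimension) advects a meristem (root-tip) density downward at the elongation rate, with first-order mortality and lateral initiation near the surface; the leading edge of the density indeed propagates as a travelling wave:

```python
import numpy as np

# Minimal 1-D density sketch (all parameters hypothetical): root-tip
# (meristem) density n(z, t) advected downward at elongation rate e with
# first-order mortality d; lateral initiation b feeds tips in at the top.
nz, dz, dt = 200, 0.5, 0.1        # cells, cm per cell, days per step
e, d, b = 1.0, 0.02, 0.5          # cm/day, 1/day, tips/day
n = np.zeros(nz)
n[0] = 1.0                        # initial tips at the soil surface

for _ in range(400):              # 40 simulated days
    outflow = e * n * dt / dz     # upwind flux: growth is strictly downward
    n = n - outflow
    n[1:] += outflow[:-1]
    n -= dt * d * n               # mortality
    n[0] += dt * b                # new laterals near the surface

front = np.argmax(n < 1e-3) * dz  # leading edge of the tip-density wave
print(front)                      # advances roughly as e * t (cm)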
Abstract:
An analysis of the climate of precipitation extremes as simulated by six European regional climate models (RCMs) is undertaken in order to describe/quantify future changes and to examine/interpret differences between models. Each model has adopted boundary conditions from the same ensemble of global climate model integrations for present (1961–1990) and future (2071–2100) climate under the Intergovernmental Panel on Climate Change A2 emission scenario. The main diagnostics are multiyear return values of daily precipitation totals estimated from extreme value analysis. An evaluation of the RCMs against observations in the Alpine region shows that model biases for extremes are comparable to or even smaller than those for wet day intensity and mean precipitation. In winter, precipitation extremes tend to increase north of about 45°N, while there is an insignificant change or a decrease to the south. In northern Europe the 20-year return value of future climate corresponds to the 40- to 100-year return value of present climate. There is a good agreement between the RCMs, and the simulated change is similar to a scaling of present-day extremes by the change in average events. In contrast, there are large model differences in summer when RCM formulation contributes significantly to scenario uncertainty. The model differences are well explained by differences in the precipitation frequency and intensity process, but in all models, extremes increase more or decrease less than would be expected from the scaling of present-day extremes. There is evidence for a component of the change that affects extremes specifically and is consistent between models despite the large variation in the total response.
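The central diagnostic, a multiyear return value from extreme value analysis, can be sketched with SciPy (synthetic annual maxima standing in for one model grid cell; all numbers illustrative). The m-year return value is the level exceeded with probability 1/m in any year, i.e. the (1 - 1/m) quantile of a GEV distribution fitted to block maxima:

```python
import numpy as np
from scipy.stats import genextreme

# Synthetic "annual maximum daily precipitation" series (mm), standing in
# for one RCM grid cell over a 30-year period; values are illustrative.
annual_max = np.array([
    42.1, 37.5, 55.0, 48.2, 33.9, 61.7, 45.3, 39.8, 52.6, 44.0,
    36.2, 70.4, 41.9, 47.7, 58.3, 35.1, 43.5, 50.9, 38.6, 66.2,
    46.8, 40.3, 53.7, 34.5, 49.1, 57.4, 44.8, 62.9, 39.2, 51.5,
])

# Fit a GEV distribution to the block maxima and read off the 20-year
# return value as the upper 1/20 quantile.
c, loc, scale = genextreme.fit(annual_max)
rv20 = genextreme.isf(1.0 / 20.0, c, loc, scale)
print(rv20)  # 20-year return value (mm/day)
```

The statement above that a future 20-year return value matches a present 40- to 100-year return value corresponds to comparing rv20 fitted on the scenario period against longer-period quantiles of the control fit.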
Abstract:
We consider the relation between so-called continuous localization models—i.e. non-linear stochastic Schrödinger evolutions—and the discrete GRW model of wave function collapse. The former can be understood as a scaling limit of the GRW process. The proof relies on a stochastic Trotter formula, which is of interest in its own right. Our Trotter formula also allows us to complement results on the existence theory of stochastic Schrödinger evolutions by Holevo and Mora/Rebolledo.
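For orientation, the GRW jump process referred to here can be written in its standard form (sketched from the standard literature; conventions for the constants vary): each particle suffers, at a rate λ, a localization of width r_C around a random centre x,

```latex
L_x = (\pi r_C^2)^{-3/4}\, \exp\!\Big(-\frac{(\hat q - x)^2}{2 r_C^2}\Big),
\qquad
\psi \mapsto \frac{L_x \psi}{\lVert L_x \psi \rVert},
\qquad
\mathbb{P}(x \in \mathrm{d}x) = \lVert L_x \psi \rVert^2 \,\mathrm{d}x ,
```

and the continuous (stochastic Schrödinger) evolutions emerge, roughly, when the jumps become frequent and individually weak at a fixed ratio; making that limit rigorous is the role of the stochastic Trotter formula.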
Abstract:
The characteristics of the boundary layer separating a turbulence region from an irrotational (or non-turbulent) flow region are investigated using rapid distortion theory (RDT). The turbulence region is approximated as homogeneous and isotropic far away from the bounding turbulent/non-turbulent (T/NT) interface, which is assumed to remain approximately flat. Inviscid effects resulting from the continuity of the normal velocity and pressure at the interface, in addition to viscous effects resulting from the continuity of the tangential velocity and shear stress, are taken into account by considering a sudden insertion of the T/NT interface, in the absence of mean shear. Profiles of the velocity variances, turbulent kinetic energy (TKE), viscous dissipation rate (epsilon), turbulence length scales, and pressure statistics are derived, showing an excellent agreement with results from direct numerical simulations (DNS). Interestingly, the normalized inviscid flow statistics at the T/NT interface do not depend on the form of the assumed TKE spectrum. Outside the turbulent region, where the flow is irrotational (except inside a thin viscous boundary layer), epsilon decays as z^{-6}, where z is the distance from the T/NT interface. The mean pressure distribution is calculated using RDT, and exhibits a decrease towards the turbulence region due to the associated velocity fluctuations, consistent with the generation of a mean entrainment velocity. The vorticity variance and epsilon display large maxima at the T/NT interface due to the inviscid discontinuities of the tangential velocity variances existing there, and these maxima are quantitatively related to the thickness delta of the viscous boundary layer (VBL). For an equilibrium VBL, the RDT analysis suggests that delta ~ eta (where eta is the Kolmogorov microscale), which is consistent with the scaling law identified in a very recent DNS study for shear-free T/NT interfaces.
Abstract:
A mechanism for the enhancement of the viscous dissipation rate of turbulent kinetic energy (TKE) in the oceanic boundary layer (OBL) is proposed, based on insights gained from rapid-distortion theory (RDT). In this mechanism, which complements mechanisms purely based on wave breaking, preexisting TKE is amplified and subsequently dissipated by the joint action of a mean Eulerian wind-induced shear current and the Stokes drift of surface waves, the same elements thought to be responsible for the generation of Langmuir circulations. Assuming that the TKE dissipation rate epsilon saturates to its equilibrium value over a time of the order of one eddy turnover time of the turbulence, a new scaling expression, dependent on the turbulent Langmuir number, is derived for epsilon. For reasonable values of the input parameters, the new expression predicts an increase of the dissipation rate near the surface by orders of magnitude compared with usual surface-layer scaling estimates, consistent with available OBL data. These results establish on firmer grounds a suspected connection between two central OBL phenomena: dissipation enhancement and Langmuir circulations.
Abstract:
A rapid-distortion model is developed to investigate the interaction of weak turbulence with a monochromatic irrotational surface water wave. The model is applicable when the orbital velocity of the wave is larger than the turbulence intensity, and when the slope of the wave is sufficiently high that the straining of the turbulence by the wave dominates over the straining of the turbulence by itself. The turbulence suffers two distortions. Firstly, vorticity in the turbulence is modulated by the wave orbital motions, which leads to the streamwise Reynolds stress attaining maxima at the wave crests and minima at the wave troughs; the Reynolds stress normal to the free surface develops minima at the wave crests and maxima at the troughs. Secondly, over several wave cycles the Stokes drift associated with the wave tilts vertical vorticity into the horizontal direction, subsequently stretching it into elongated streamwise vortices, which come to dominate the flow. These results are shown to be strikingly different from turbulence distorted by a mean shear flow, when `streaky structures' of high and low streamwise velocity fluctuations develop. It is shown that, in the case of distortion by a mean shear flow, the tendency for the mean shear to produce streamwise vortices by distortion of the turbulent vorticity is largely cancelled by a distortion of the mean vorticity by the turbulent fluctuations. This latter process is absent in distortion by Stokes drift, since there is then no mean vorticity. The components of the Reynolds stress and the integral length scales computed from turbulence distorted by Stokes drift show the same behaviour as in the simulations of Langmuir turbulence reported by McWilliams, Sullivan & Moeng (1997). 
Hence we suggest that turbulent vorticity in the upper ocean, such as produced by breaking waves, may help to provide the initial seeds for Langmuir circulations, thereby complementing the shear-flow instability mechanism developed by Craik & Leibovich (1976). The tilting of the vertical vorticity into the horizontal by the Stokes drift tends also to produce a shear stress that does work against the mean straining associated with the wave orbital motions. The turbulent kinetic energy then increases at the expense of energy in the wave. Hence the wave decays. An expression for the wave attenuation rate is obtained by scaling the equation for the wave energy, and is found to be broadly consistent with available laboratory data.
Abstract:
The turbulent mixing in thin ocean surface boundary layers (OSBL), which occupy the upper 100 m or so of the ocean, controls the exchange of heat and trace gases between the atmosphere and ocean. Here we show that current parameterizations of this turbulent mixing lead to systematic and substantial errors in the depth of the OSBL in global climate models, which then leads to biases in sea surface temperature. One reason, we argue, is that current parameterizations are missing key surface-wave processes that force Langmuir turbulence that deepens the OSBL more rapidly than steady wind forcing. Scaling arguments are presented to identify two dimensionless parameters that measure the importance of wave forcing against wind forcing, and against buoyancy forcing. A global perspective on the occurrence of wave-forced turbulence is developed using re-analysis data to compute these parameters globally. The diagnostic study developed here suggests that turbulent energy available for mixing the OSBL is under-estimated without forcing by surface waves. Wave forcing, and hence Langmuir turbulence, could be important over wide areas of the ocean and in all seasons in the Southern Ocean. We conclude that surface-wave-forced Langmuir turbulence is an important process in the OSBL that requires parameterization.
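The conventional measure of wave forcing against wind forcing is the turbulent Langmuir number. A minimal sketch of its computation (the drag coefficient, densities, and "developed sea" inputs are illustrative, not the paper's exact parameters):

```python
import math

# Turbulent Langmuir number La_t = sqrt(u_* / u_s): small values indicate
# wave-dominated (Langmuir) turbulence.  All inputs are illustrative.
rho_air, rho_water, C_d = 1.2, 1025.0, 1.2e-3

def langmuir_number(U10, u_s):
    """U10: 10-m wind speed (m/s); u_s: surface Stokes drift (m/s)."""
    tau = rho_air * C_d * U10**2          # wind stress (N/m^2)
    u_star = math.sqrt(tau / rho_water)   # waterside friction velocity
    return math.sqrt(u_star / u_s)

print(langmuir_number(10.0, 0.1))         # ~0.34 for these inputs
```

A second parameter of the same kind, comparing wave forcing against buoyancy forcing, can be formed analogously from u_s and a surface buoyancy flux.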
Abstract:
A manageable, relatively inexpensive model was constructed to predict the loss of nitrogen and phosphorus from a complex catchment to its drainage system. The model used an export coefficient approach, calculating the total nitrogen (N) and total phosphorus (P) load delivered annually to a water body as the sum of the individual loads exported from each nutrient source in its catchment. The export coefficient modelling approach permits scaling up from plot-scale experiments to the catchment scale, allowing application of findings from field experimental studies at a suitable scale for catchment management. The catchment of the River Windrush, a tributary of the River Thames, UK, was selected as the initial study site. The Windrush model predicted nitrogen and phosphorus loading within 2% of observed total nitrogen load and 0.5% of observed total phosphorus load in 1989. The export coefficient modelling approach was then validated by application in a second research basin, the catchment of Slapton Ley, south Devon, which has markedly different catchment hydrology and land use. The Slapton model was calibrated within 2% of observed total nitrogen load and 2.5% of observed total phosphorus load in 1986. Both models proved sensitive to the impact of temporal changes in land use and management on water quality in both catchments, and were therefore used to evaluate the potential impact of proposed pollution control strategies on the nutrient loading delivered to the River Windrush and Slapton Ley.
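The export coefficient calculation itself is a weighted sum over sources. A minimal sketch (the coefficients and source sizes below are invented for illustration, not the calibrated Windrush or Slapton values):

```python
# Export coefficient sketch: annual nutrient load to the drainage network
# as the sum of per-source exports (units x coefficient).  All numbers
# are hypothetical.
sources = {
    # source: (number of units, export coefficient in kg N per unit per yr)
    "cereals (ha)":       (12000, 30.0),
    "permanent grass (ha)": (8000,  8.0),
    "woodland (ha)":       (3000,   2.0),
    "livestock (head)":   (20000,   1.5),
    "people (sewage)":    (15000,   2.5),
}

total_N = sum(units * coeff for units, coeff in sources.values())
print(total_N)  # kg N per year delivered to the water body
```

Calibration then amounts to adjusting the coefficients until the summed load matches the observed riverine load, which is how the 2% agreements quoted above are obtained.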
Abstract:
The contribution that non-point P sources make to the total P loading on water bodies in agricultural catchments has not been fully appreciated. Using data derived from plot-scale experimental studies, and modelling approaches developed to simulate system behaviour under differing management scenarios, a fuller understanding of the processes controlling P export and transformations along non-point transport pathways can be achieved. One modelling approach which has been successfully applied to large UK catchments (50-350 km2 in area) is applied here to a small, 1.5 km2 experimental catchment. The importance of scaling is discussed in the context of how such approaches can extrapolate the results from plot-scale experimental studies to full catchment scale. However, the scope of such models is limited, since they do not at present directly simulate the processes controlling P transport and transformation dynamics. As such, they can only simulate total P export on an annual basis, and are not capable of prediction over shorter time scales. The need for development of process-based models to help answer these questions, and for more comprehensive UK experimental studies, is highlighted as a pre-requisite for the development of suitable and sustainable management strategies to reduce non-point P loading on water bodies in agricultural catchments.
Abstract:
1. It has been postulated that climate warming may pose the greatest threat to species in the tropics, where ectotherms have evolved more thermal specialist physiologies. Although species could rapidly respond to environmental change through adaptation, little is known about the potential for thermal adaptation, especially in tropical species. 2. In the light of the limited empirical evidence available and predictions from mutation-selection theory, we might expect tropical ectotherms to have limited genetic variance to enable adaptation. However, as a consequence of thermodynamic constraints, we might expect this disadvantage to be at least partially offset by a fitness advantage, that is, the ‘hotter-is-better’ hypothesis. 3. Using an established quantitative genetics model and metabolic scaling relationships, we integrate the consequences of the opposing forces of thermal specialization and thermodynamic constraints on adaptive potential by evaluating extinction risk under climate warming. We conclude that the potential advantage of a higher maximal development rate can in theory more than offset the potential disadvantage of lower genetic variance associated with a thermal specialist strategy. 4. Quantitative estimates of extinction risk are fundamentally very sensitive to estimates of generation time and genetic variance. However, our qualitative conclusion that the relative risk of extinction is likely to be lower for tropical species than for temperate species is robust to assumptions regarding the effects of effective population size, mutation rate and birth rate per capita. 5. With a view to improving ecological forecasts, we use this modelling framework to review the sensitivity of our predictions to the model’s underpinning theoretical assumptions and the empirical basis of macroecological patterns that suggest thermal specialization and fitness increase towards the tropics. We conclude by suggesting priority areas for further empirical research.
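The metabolic-scaling ingredient of the 'hotter-is-better' argument can be made concrete. A minimal sketch under the metabolic theory of ecology (E = 0.65 eV is a commonly quoted activation energy; the comparison temperatures are illustrative, not the paper's):

```python
import math

# 'Hotter is better' via metabolic scaling: mass-specific biological
# rates scale roughly as r ~ M^(-1/4) * exp(-E / (k * T)).
k = 8.617e-5           # Boltzmann constant (eV/K)
E = 0.65               # activation energy (eV), a typical quoted value

def dev_rate(mass_g, T_celsius):
    T = T_celsius + 273.15
    return mass_g**-0.25 * math.exp(-E / (k * T))

# Same-mass ectotherm at a tropical 27 C versus a temperate 17 C:
ratio = dev_rate(1.0, 27.0) / dev_rate(1.0, 17.0)
print(ratio)  # roughly a 2-2.5x higher maximal development rate
```

It is this thermodynamic rate advantage that, in the model, can outweigh the lower genetic variance of tropical thermal specialists.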
Abstract:
In a world where massive amounts of data are recorded on a large scale we need data mining technologies to gain knowledge from the data in a reasonable time. The Top Down Induction of Decision Trees (TDIDT) algorithm is a very widely used technology to predict the classification of newly recorded data. However, alternative technologies have been derived that often produce better rules but do not scale well on large datasets. Such an alternative to TDIDT is the PrismTCS algorithm. PrismTCS performs particularly well on noisy data but does not scale well on large datasets. In this paper we introduce Prism and investigate its scaling behaviour. We describe how we improved the scalability of the serial version of Prism and investigate its limitations. We then describe our work to overcome these limitations by developing a framework to parallelise algorithms of the Prism family and similar algorithms. We also present the scale-up results of a first prototype implementation.
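For readers unfamiliar with the Prism family, the 'separate and conquer' step can be sketched in a few lines (a toy serial version for illustration; PrismTCS adds rule ordering, pruning, and other refinements not shown):

```python
# Minimal sketch of Prism-style 'separate and conquer' rule induction:
# for a target class, greedily add the attribute-value test with the
# highest precision until the rule covers only that class, then separate
# off the covered rows and repeat.
def induce_rules(rows, target):
    """rows: list of dicts with a 'class' key; target: class label."""
    remaining = list(rows)
    rules = []
    while any(r["class"] == target for r in remaining):
        covered, rule = remaining, []
        while any(r["class"] != target for r in covered):
            # pick the test (attr, val) maximising precision on `covered`
            best = max(
                ((a, v) for r in covered for a, v in r.items() if a != "class"),
                key=lambda t: (
                    sum(r["class"] == target for r in covered if r.get(t[0]) == t[1])
                    / sum(r.get(t[0]) == t[1] for r in covered)
                ),
            )
            rule.append(best)
            covered = [r for r in covered if r.get(best[0]) == best[1]]
        rules.append(rule)
        remaining = [r for r in remaining if r not in covered]
    return rules

data = [
    {"outlook": "sunny", "windy": "no", "class": "play"},
    {"outlook": "sunny", "windy": "yes", "class": "stay"},
    {"outlook": "rainy", "windy": "no", "class": "stay"},
]
print(induce_rules(data, "play"))
```

Unlike TDIDT, each rule is induced independently of the others, which is what makes the family modular and amenable to the parallelisation described above.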
Abstract:
In a world where data is captured on a large scale the major challenge for data mining algorithms is to be able to scale up to large datasets. There are two main approaches to inducing classification rules: one is the divide and conquer approach, also known as the top down induction of decision trees; the other is called the separate and conquer approach. A considerable amount of work has been done on scaling up the divide and conquer approach. However, very little work has been conducted on scaling up the separate and conquer approach. In this work we describe a parallel framework that allows the parallelisation of a certain family of separate and conquer algorithms, the Prism family. Parallelisation helps the Prism family of algorithms to harvest additional computer resources in a network of computers in order to make the induction of classification rules scale better on large datasets. Our framework also incorporates a pre-pruning facility for parallel Prism algorithms.
Abstract:
Inducing rules from very large datasets is one of the most challenging areas in data mining. Several approaches exist to scaling up classification rule induction to large datasets, namely data reduction and the parallelisation of classification rule induction algorithms. In the area of parallelisation of classification rule induction algorithms, most of the work has been concentrated on the Top Down Induction of Decision Trees (TDIDT), also known as the ‘divide and conquer’ approach. However, powerful alternative algorithms exist that induce modular rules. Most of these alternative algorithms follow the ‘separate and conquer’ approach of inducing rules, but very little work has been done to make the ‘separate and conquer’ approach scale better on large training data. This paper examines the potential of the recently developed blackboard-based J-PMCRI methodology for parallelising modular classification rule induction algorithms that follow the ‘separate and conquer’ approach. A concrete implementation of the methodology is evaluated empirically on very large datasets.