966 results for Dynamic prediction


Relevance: 30.00%

Abstract:

The ECMWF operational grid point model (with a resolution of 1.875° of latitude and longitude) and its limited area version (with a resolution of 0.47° of latitude and longitude) with boundary values from the global model have been used to study the simulation of the typhoon Tip. The fine-mesh model was capable of simulating the main structural features of the typhoon and predicting a fall in central pressure of 60 mb in 3 days. The structure of the forecast typhoon, with a warm core (maximum potential temperature anomaly 17 K), intense swirling wind (maximum 55 m s-1 at 850 mb) and spiralling precipitation patterns, is characteristic of a tropical cyclone. Comparison with the lower resolution forecast shows that the horizontal resolution is a determining factor in predicting not only the structure and intensity but even the movement of these vortices. However, an accurate and refined initial analysis is considered to be a prerequisite for a correct forecast of this phenomenon.

Relevance: 30.00%

Abstract:

Neurovascular coupling in response to stimulation of the rat barrel cortex was investigated using concurrent multichannel electrophysiology and laser Doppler flowmetry. The data were used to build a linear dynamic model relating neural activity to blood flow. Local field potential time series were subject to current source density analysis, and the time series of a layer IV sink of the barrel cortex was used as the input to the model. The model output was the time series of the changes in regional cerebral blood flow (CBF). We show that this model can provide an excellent fit to the CBF responses for stimulus durations of up to 16 s. The structure of the model consisted of two coupled components representing vascular dilation and constriction. The complex temporal characteristics of the CBF time series were reproduced by the relatively simple balance of these two components. We show that the impulse response obtained under the 16-s duration stimulation condition generalised to provide a good prediction of the data from the shorter duration stimulation conditions. Furthermore, by optimising three out of the total of nine model parameters, the variability in the data can be well accounted for over a wide range of stimulus conditions. By establishing linearity, classic system analysis methods can be used to generate and explore a range of equivalent model structures (e.g., feed-forward or feedback) to guide the experimental investigation of the control of vascular dilation and constriction following stimulation.
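
The linear model described in this abstract can be sketched in a few lines: the CBF change is the convolution of the neural input with an impulse response built from two opposing components, one for dilation and one for constriction. The kernel shapes and all parameter values below are illustrative assumptions, not the authors' fitted model.

```python
import math

def impulse_response(n, dt=0.1, tau_d=1.0, tau_c=3.0, gain_c=0.2):
    """Dilation minus constriction, each modelled as a decaying exponential."""
    h = []
    for k in range(n):
        t = k * dt
        dilation = math.exp(-t / tau_d)
        constriction = gain_c * math.exp(-t / tau_c)
        h.append(dt * (dilation - constriction))
    return h

def predict_cbf(neural, h):
    """Discrete convolution of the neural input with impulse response h."""
    out = []
    for i in range(len(neural)):
        s = 0.0
        for k in range(min(i + 1, len(h))):
            s += h[k] * neural[i - k]
        out.append(s)
    return out

# A 16-s stimulus sampled at 10 Hz (dt = 0.1 s), followed by 8 s of rest.
stimulus = [1.0] * 160 + [0.0] * 80
h = impulse_response(200)
cbf = predict_cbf(stimulus, h)
```

Because the model is linear, the same impulse response can be reused to predict responses to shorter stimuli, which is the generalisation test the abstract describes.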

Relevance: 30.00%

Abstract:

Current feed evaluation systems for ruminants are too imprecise to describe diets in terms of their acidosis risk. The dynamic mechanistic model described herein arises from the integration of a lactic acid (La) metabolism module into an extant model of whole-rumen function. The model was evaluated using published data from cows and sheep fed a range of diets or infused with various doses of La. The model performed well in simulating peak rumen La concentrations (coefficient of determination = 0.96; root mean square prediction error = 16.96% of observed mean), although frequency of sampling for the published data prevented a comprehensive comparison of prediction of time to peak La accumulation. The model showed a tendency for increased La accumulation following feeding of diets rich in nonstructural carbohydrates, although less-soluble starch sources such as corn tended to limit rumen La concentration. Simulated La absorption from the rumen remained low throughout the feeding cycle. The competition between bacteria and protozoa for rumen La suggests a variable contribution of protozoa to total La utilization. However, the model was unable to simulate the effects of defaunation on rumen La metabolism, indicating a need for a more detailed description of protozoal metabolism. The model could form the basis of a feed evaluation system with regard to rumen La metabolism.
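
The fit statistics quoted above (coefficient of determination, and root mean square prediction error expressed as a percentage of the observed mean) can be computed as in this sketch; the data values are invented for illustration, not the study's measurements.

```python
import math

def rmspe_percent(observed, predicted):
    """Root mean square prediction error, as % of the observed mean."""
    n = len(observed)
    mse = sum((o - p) ** 2 for o, p in zip(observed, predicted)) / n
    return 100.0 * math.sqrt(mse) / (sum(observed) / n)

def r_squared(observed, predicted):
    """Coefficient of determination (1 - SS_res / SS_tot)."""
    mean_o = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean_o) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

# Hypothetical peak rumen lactic acid concentrations: observed vs simulated.
obs = [5.0, 12.0, 25.0, 40.0, 8.0]
pred = [6.0, 10.0, 27.0, 38.0, 9.0]
error_pct = rmspe_percent(obs, pred)
r2 = r_squared(obs, pred)
```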

Relevance: 30.00%

Abstract:

Objective: For the evaluation of the energetic performance of combined renewable heating systems that supply space heat and domestic hot water for single family houses, dynamic behaviour, component interactions, and control of the system play a crucial role and should be included in test methods. Methods: New dynamic whole system test methods were developed based on “hardware in the loop” concepts. Three similar approaches are described and their differences are discussed. The methods were applied for testing solar thermal systems in combination with fossil fuel boilers (heating oil and natural gas), biomass boilers, and/or heat pumps. Results: All three methods were able to show the performance of combined heating systems under transient operating conditions. The methods often detected unexpected behaviour of the tested system that cannot be detected based on steady state performance tests that are usually applied to single components. Conclusion: Further work will be needed to harmonize the different test methods in order to reach comparable results between the different laboratories. Practice implications: A harmonized approach for whole system tests may lead to new test standards and improve the accuracy of performance prediction as well as reduce the need for field tests.
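
The "hardware in the loop" idea behind these whole system tests can be sketched minimally: a building/load model runs in software while the heating system under test reacts to the simulated load. Here the physical unit is replaced by a trivial stand-in function, and the building model and all parameter values are illustrative assumptions only.

```python
def building_heat_load(t_outdoor, t_indoor=20.0, ua_w_per_k=150.0):
    """Simple static building model: load grows with the indoor-outdoor gap."""
    return max(0.0, ua_w_per_k * (t_indoor - t_outdoor))

def heater_output(requested_w, max_w=8000.0):
    """Stand-in for the physical unit under test: saturates at rated power."""
    return min(requested_w, max_w)

# One simulated day of hourly outdoor temperatures (degrees C).
outdoor = [-5, -6, -7, -7, -6, -4, -2, 0, 2, 4, 5, 6,
           6, 5, 4, 2, 0, -1, -2, -3, -4, -4, -5, -5]

delivered_kwh = 0.0
for t_out in outdoor:
    load = building_heat_load(t_out)            # software side of the loop
    delivered_kwh += heater_output(load) / 1000.0  # hardware side, 1-h steps
```

In a real test bench, `heater_output` would be the measured response of the physical boiler, heat pump, or solar combisystem, which is what lets the method expose transient behaviour that steady-state component tests miss.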

Relevance: 30.00%

Abstract:

Accurate speed prediction is a crucial step in the development of a dynamic vehicle activated sign (VAS). A previous study showed that the optimal trigger speed of such signs will need to be pre-determined according to the nature of the site and to the traffic conditions. The objective of this paper is to find an accurate predictive model based on historical traffic speed data to derive the optimal trigger speed for such signs. Adaptive neuro-fuzzy inference system (ANFIS), classification and regression tree (CART) and random forest (RF) models were developed to predict one-step-ahead speed during all times of the day. The developed models were evaluated and compared to the results obtained from artificial neural network (ANN), multiple linear regression (MLR) and naïve prediction using traffic speed data collected at four sites located in Sweden. The data were aggregated into two periods, a short term period (5 min) and a long term period (1 hour). The results of this study showed that RF is a promising method for predicting mean speed in the two proposed periods. It is concluded that, in terms of performance and computational complexity, a simple set of input features to the predictive model markedly improved the response time of the model while still delivering a low prediction error.
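
The naïve prediction baseline the paper compares against can be sketched directly: the forecast for the next period is simply the last observed mean speed. The speed series below is invented for illustration (5-min aggregates, km/h).

```python
def naive_one_step(series):
    """Naive one-step-ahead forecast: predict each value as the previous one."""
    return series[:-1]

def mean_absolute_error(actual, predicted):
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

speeds = [62.0, 61.0, 58.0, 45.0, 40.0, 43.0, 55.0, 60.0]
preds = naive_one_step(speeds)            # predictions for speeds[1:]
mae = mean_absolute_error(speeds[1:], preds)
```

Any learned model (RF, ANFIS, CART, ANN, MLR) has to beat this baseline on the same one-step-ahead task to justify its extra complexity.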

Relevance: 30.00%

Abstract:

This thesis presents DCE, or Dynamic Conditional Execution, as an alternative to reduce the cost of mispredicted branches. The basic idea is to fetch all paths produced by a branch that obey certain restrictions regarding complexity and size. As a result, a smaller number of predictions is performed and, therefore, fewer branches are mispredicted. DCE fetches through selected branches, avoiding disruptions in the fetch flow when these branches are fetched. Both paths of selected branches are executed, but only the correct path commits. In this thesis we propose an architecture to execute multiple paths of selected branches. Branches are selected based on their size and other conditions. Simple and complex branches can be dynamically predicated without requiring a special instruction set or special compiler optimizations. Furthermore, a technique to reduce part of the overhead generated by the execution of multiple paths is proposed. The performance gain reaches up to 12% when comparing a local predictor used in DCE against a global predictor used in the reference machine. When both machines use a local predictor, the average speedup is 3-3.5%.
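
The core arithmetic of the idea can be made explicit: branches that are dynamically predicated (both paths executed) need no prediction at all, so only the remaining branches can mispredict. The fractions and accuracy below are illustrative numbers, not figures from the thesis.

```python
# Back-of-the-envelope sketch of why executing both paths of selected
# branches reduces mispredictions. All numbers are illustrative.
branches = 1_000_000
predicated_fraction = 0.30        # selected branches executed on both paths
accuracy = 0.95                   # predictor accuracy on remaining branches

predicted = branches * (1.0 - predicated_fraction)
mispredictions = predicted * (1.0 - accuracy)
baseline = branches * (1.0 - accuracy)      # every branch predicted
reduction = 1.0 - mispredictions / baseline
```

With a uniform misprediction rate, the reduction in mispredictions equals the fraction of branches that are predicated; the cost is the extra fetch and execution bandwidth spent on the wrong path, which is the overhead the thesis's technique targets.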

Relevance: 30.00%

Abstract:

In this dissertation, different ways of combining neural predictive models or neural-based forecasts are discussed. The proposed approaches consider mostly Gaussian radial basis function networks, which can be efficiently identified and estimated through recursive/adaptive methods. Two different ways of combining are explored to get a final estimate – model mixing and model synthesis –, with the aim of obtaining improvements both in terms of efficiency and effectiveness. In the context of model mixing, the usual framework for linearly combining estimates from different models is extended, to deal with the case where the forecast errors from those models are correlated. In the context of model synthesis, and to address the problems raised by heavily nonstationary time series, we propose hybrid dynamic models for more advanced time series forecasting, composed of a dynamic trend regressive model (or, even, a dynamic harmonic regressive model), and a Gaussian radial basis function network. Additionally, using the model mixing procedure, two approaches for decision-making from forecasting models are discussed and compared: either inferring decisions from combined predictive estimates, or combining prescriptive solutions derived from different forecasting models. Finally, the application of some of the models and methods proposed previously is illustrated with two case studies, based on time series from finance and from tourism.
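
For the model-mixing case with correlated forecast errors, the minimum-variance linear combination has the classical closed form w = S⁻¹1 / (1ᵀS⁻¹1), where S is the error covariance matrix. The sketch below works out the two-forecast case; the variances, covariance, and point forecasts are invented for illustration and are not taken from the dissertation.

```python
def min_variance_weights_2(var1, var2, cov):
    """Minimum-variance combination weights for two correlated forecasts."""
    det = var1 * var2 - cov * cov
    # Components of S^-1 @ [1, 1], up to the common 1/det factor.
    a = (var2 - cov) / det
    b = (var1 - cov) / det
    total = a + b
    return a / total, b / total

w1, w2 = min_variance_weights_2(var1=4.0, var2=1.0, cov=0.5)
combined = w1 * 10.5 + w2 * 9.8   # combine two hypothetical point forecasts
```

When the errors are uncorrelated (cov = 0) this reduces to the familiar inverse-variance weighting; the covariance term is what the extended framework in the dissertation accounts for.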

Relevance: 30.00%

Abstract:

The problem of dynamic camera calibration considering moving objects in close range environments using straight lines as references is addressed. A mathematical model for the correspondence of a straight line in the object and image spaces is discussed. This model is based on the equivalence between the vector normal to the interpretation plane in the image space and the vector normal to the rotated interpretation plane in the object space. In order to solve the dynamic camera calibration, Kalman Filtering is applied; an iterative process based on the recursive property of the Kalman Filter is defined, using the sequentially estimated camera orientation parameters to feed back into the feature extraction process in the image. For the dynamic case, e.g. an image sequence of a moving object, a state prediction and a covariance matrix for the next instant are obtained using the available estimates and the system model. Filtered state estimates can be computed from these predicted estimates using the Kalman Filtering approach and based on the system model parameters with good quality, for each instant of an image sequence. The proposed approach was tested with simulated and real data. Experiments with real data were carried out in a controlled environment, considering a sequence of images of a moving cube in a linear trajectory over a flat surface.
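
The predict/update cycle at the heart of this approach can be sketched for a one-dimensional constant-velocity target, in the spirit of tracking the moving cube between frames. The state model, noise values, and measurements below are illustrative assumptions, not the paper's multi-parameter orientation model.

```python
def kf_predict(x, v, p, q=0.01, dt=1.0):
    """State prediction: advance position by velocity, grow uncertainty."""
    return x + v * dt, p + q

def kf_update(x_pred, p_pred, z, r=0.25):
    """Filtered estimate: blend prediction and measurement z by Kalman gain."""
    k = p_pred / (p_pred + r)          # Kalman gain, between 0 and 1
    x = x_pred + k * (z - x_pred)
    p = (1.0 - k) * p_pred
    return x, p

# Track an object moving at ~1 unit/frame from noisy position measurements.
x, p, v = 0.0, 1.0, 1.0
for z in [1.1, 1.9, 3.2, 3.9, 5.1]:
    x_pred, p_pred = kf_predict(x, v, p)
    x, p = kf_update(x_pred, p_pred, z)
```

The predicted state (here `x_pred`) is exactly what can be fed back to narrow the search window of the feature extraction step before the next measurement arrives.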

Relevance: 30.00%

Abstract:

The cost of maintenance makes up a large part of total energy costs in ruminants. Metabolizable energy (ME) requirement for maintenance (MEm) is the daily ME intake that exactly balances heat energy (HE). The net energy requirement for maintenance (NEm) is estimated by subtracting from MEm the HE produced by the processing of the diet. MEm cannot be directly measured experimentally and is estimated by measuring basal metabolism in fasted animals or by regression, measuring the recovered energy in fed animals. MEm and NEm usually, but not always, are expressed in terms of BW^0.75. However, this scaling factor is substantially empirical and its exponent is often inadequate, especially for growing animals. MEm estimates by different feeding systems (AFRC, CNCPS, CSIRO, INRA, NRC) were compared using dairy cattle data. The comparison showed that these systems differ in the approaches used to estimate MEm and in its quantification. The CSIRO system estimated the highest MEm, mostly because it includes a correction factor to increase ME as the feeding level increases. Relative to CSIRO estimates, those of NRC, INRA, CNCPS, and AFRC were on average 0.92, 0.86, 0.84, and 0.78, respectively. MEm is affected by the previous nutritional history of the animals. This phenomenon is best predicted by dynamic models, of which several have been published in the last decades. They are based either on energy flows or on nutrient flows. Some of the different approaches used are described and discussed.
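
The metabolic body weight scaling discussed above is a one-line formula, MEm = a · BW^0.75; the point of the exponent is that requirement grows more slowly than body weight. The coefficient below (MJ of ME per kg^0.75 per day) is an illustrative placeholder, not a value from any of the feeding systems compared.

```python
def me_maintenance(bw_kg, coeff=0.5):
    """MEm (MJ/day) = coeff * BW^0.75 (coeff is an illustrative placeholder)."""
    return coeff * bw_kg ** 0.75

mem_600 = me_maintenance(600.0)   # e.g. a 600-kg dairy cow
mem_300 = me_maintenance(300.0)   # e.g. a 300-kg heifer

# Doubling body weight raises MEm by a factor of 2^0.75 (about 68%), not 100%.
ratio = mem_600 / mem_300
```

This sub-linear scaling is also why the abstract notes the exponent is often inadequate for growing animals, whose composition changes as well as their weight.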

Relevance: 30.00%

Abstract:

The prediction of traffic behavior could help in making decisions about the routing process, and enables gains in effectiveness and productivity in physical distribution. This need motivated the search for technological improvements in routing performance in metropolitan areas. The purpose of this paper is to present computational evidence that an Artificial Neural Network (ANN) could be used to predict the traffic behavior in a metropolitan area such as São Paulo (around 16 million inhabitants). The proposed methodology involves the application of Rough-Fuzzy Sets to define an inference morphology for inserting the behavior of Dynamic Routing into a structured rule basis, without the aid of a human expert. The dynamics of the traffic parameters are described through membership functions. Rough Sets Theory identifies the attributes that are important and suggests Fuzzy relations to be inserted in a Rough Neuro Fuzzy Network (RNFN) of type Multilayer Perceptron (MLP) and type Radial Basis Function (RBF), in order to obtain an optimal response surface. To measure the performance of the proposed RNFN, the responses of the unreduced rule basis are compared with those of the reduced one. The results show that, by making use of Feature Reduction through the RNFN, it is possible to reduce the need for a human expert in the construction of the Fuzzy inference mechanism in flow processes such as traffic breakdown. © 2011 IEEE.
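
Describing a traffic parameter through a membership function, as the abstract mentions, can be illustrated with the simplest case, a triangular membership. The linguistic label and breakpoints below are hypothetical examples, not the paper's rule basis.

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership: 0 at a, rising to 1 at b, back to 0 at c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Degree to which a measured speed of 45 km/h is "moderate" (30-50-70 km/h).
mu = triangular(45.0, 30.0, 50.0, 70.0)
```

A rough-set step would then prune the attributes (and hence the membership functions) that do not help discriminate the traffic states before training the neuro-fuzzy network.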

Relevance: 30.00%

Abstract:

Objectives. Verify the influence of different filler distributions on the subcritical crack growth (SCG) susceptibility, Weibull parameters (m and σ0) and longevity estimated by the strength-probability-time (SPT) diagram of experimental resin composites. Methods. Four composites were prepared, each one containing 59 vol% of glass powder with different filler sizes (d50 = 0.5, 0.9, 1.2 and 1.9 μm) and distributions. Granulometric analyses of glass powders were done by a laser diffraction particle size analyzer (Sald-7001, Shimadzu, USA). SCG parameters (n and σf0) were determined by dynamic fatigue (10^-2 to 10^2 MPa/s) using a biaxial flexural device (12 × 1.2 mm; n = 10). Twenty extra specimens of each composite were tested at 10^0 MPa/s to determine m and σ0. Specimens were stored in water at 37 °C for 24 h. Fracture surfaces were analyzed under SEM. Results. In general, the composites with broader filler distribution (C0.5 and C1.9) presented better results in terms of SCG susceptibility and longevity. C0.5 and C1.9 presented higher n values (respectively, 31.2 ± 6.2 (a) and 34.7 ± 7.4 (a)). C1.2 (166.42 ± 0.01 (a)) showed the highest and C0.5 (158.40 ± 0.02 (d)) the lowest σf0 value (in MPa). Weibull parameters did not vary significantly (m: 6.6 to 10.6 and σ0: 170.6 to 176.4 MPa). Predicted reductions in failure stress (Pf = 5%) for a lifetime of 10 years were approximately 45% for C0.5 and C1.9 and 65% for C0.9 and C1.2. Crack propagation occurred through the polymeric matrix around the fillers and all the fracture surfaces showed brittle fracture features. Significance. Composites with broader granulometric distribution showed higher resistance to SCG and, consequently, higher longevity in vitro. (C) 2012 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
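
In dynamic fatigue, fracture strength scales with stress rate as σf ∝ (stress rate)^(1/(n+1)), so the SCG parameter n is recovered from the slope of log(σf) versus log(stress rate). The sketch below fits n by least squares on synthetic data generated with n = 30; the numbers are not the measured values from the study.

```python
import math

def fit_n(rates, strengths):
    """Least-squares slope of log(strength) vs log(rate); n = 1/slope - 1."""
    xs = [math.log10(r) for r in rates]
    ys = [math.log10(s) for s in strengths]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return 1.0 / slope - 1.0

rates = [1e-2, 1e-1, 1e0, 1e1, 1e2]   # stress rates in MPa/s, as in the study
true_n = 30.0
strengths = [160.0 * r ** (1.0 / (true_n + 1.0)) for r in rates]
n_est = fit_n(rates, strengths)
```

A larger n means strength is less sensitive to loading rate, i.e. lower susceptibility to subcritical crack growth, which is why the broader-distribution composites with higher n also show better predicted longevity.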

Relevance: 30.00%

Abstract:

The research is aimed at contributing to the identification of reliable fully predictive Computational Fluid Dynamics (CFD) methods for the numerical simulation of equipment typically adopted in the chemical and process industries. The apparatuses selected for the investigation, specifically membrane modules, stirred vessels and fluidized beds, were characterized by a different and often complex fluid dynamic behaviour, and in some cases the momentum transfer phenomena were coupled with mass transfer or multiphase interactions. First of all, a novel modelling approach based on CFD for the prediction of the gas separation process in membrane modules for hydrogen purification is developed. The reliability of the gas velocity field calculated numerically is assessed by comparison of the predictions with experimental velocity data collected by Particle Image Velocimetry, while the applicability of the model to properly predict the separation process under a wide range of operating conditions is assessed through a strict comparison with permeation experimental data. Then, the effect of numerical issues on the RANS-based predictions of single phase stirred tanks is analysed. The homogenisation process of a scalar tracer is also investigated and simulation results are compared to original passive tracer homogenisation curves determined with Planar Laser Induced Fluorescence. The capability of a CFD approach based on the solution of RANS equations is also investigated for describing the fluid dynamic characteristics of the dispersion of organics in water. Finally, an Eulerian-Eulerian fluid-dynamic model is used to simulate mono-disperse suspensions of Geldart Group A particles fluidized by a Newtonian incompressible fluid, as well as binary segregating fluidized beds of particles differing in size and density.
The results obtained under a number of different operating conditions are compared with literature experimental data and the effect of numerical uncertainties on axial segregation is also discussed.

Relevance: 30.00%

Abstract:

During the last few years, a great deal of interest has arisen concerning the applications of stochastic methods to several biochemical and biological phenomena. Phenomena like gene expression, cellular memory, bet-hedging strategy in bacterial growth and many others cannot be described by continuous stochastic models due to their intrinsic discreteness and randomness. In this thesis I have used the Chemical Master Equation (CME) technique to model some feedback cycles and to analyze their properties, including experimental data. In the first part of this work, the effect of stochastic stability is discussed on a toy model of the genetic switch that triggers the cellular division, whose malfunctioning is known to be one of the hallmarks of cancer. The second system I have worked on is the so-called futile cycle, a closed cycle of two enzymatic reactions that adds and removes a chemical compound, called a phosphate group, to and from a specific substrate. I have thus investigated how adding noise to the enzyme (which is usually present in the order of a few hundred molecules) modifies the probability of observing a specific number of phosphorylated substrate molecules, and confirmed theoretical predictions with numerical simulations. In the third part, the results of the study of a chain of multiple phosphorylation-dephosphorylation cycles are presented. We discuss an approximation method for the exact solution in the bidimensional case and the relationship that this method has with the thermodynamic properties of the system, which is an open system far from equilibrium. In the last section, the agreement between the theoretical prediction of the total protein quantity in a mouse cell population and the observed quantity, measured via fluorescence microscopy, is shown.
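
Systems governed by a Chemical Master Equation are commonly simulated exactly with Gillespie's stochastic simulation algorithm. The sketch below applies it to the simplest such system, a birth-death process with constant production and linear degradation; the rate constants are illustrative, not taken from the systems studied in the thesis.

```python
import random

def gillespie_birth_death(n0, t_end, k_plus=10.0, k_minus=1.0, seed=42):
    """Exact stochastic simulation of production (k_plus) and decay (k_minus*n)."""
    rng = random.Random(seed)
    t, n = 0.0, n0
    trajectory = [(t, n)]
    while t < t_end:
        a_plus, a_minus = k_plus, k_minus * n      # reaction propensities
        a_total = a_plus + a_minus
        t += rng.expovariate(a_total)              # waiting time to next event
        if rng.random() < a_plus / a_total:        # pick which reaction fires
            n += 1
        else:
            n -= 1
        trajectory.append((t, n))
    return trajectory

traj = gillespie_birth_death(n0=0, t_end=50.0)
final_n = traj[-1][1]
```

For this process the stationary distribution is Poisson with mean k_plus/k_minus, so individual trajectories fluctuate around 10 molecules, exactly the kind of discreteness and randomness that continuous models miss.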

Relevance: 30.00%

Abstract:

A simple dependency between contact angle θ and velocity or surface tension has been predicted for the wetting and dewetting behavior of simple liquids. According to the hydrodynamic theory, this dependency was described by Cox and Voinov as θ ∼ Ca^(1/3) (Ca: capillary number). For more complex liquids like surfactant solutions, this prediction is not directly given.

Here I present a rotating drum setup for studying wetting/dewetting processes of surfactant solutions on the basis of velocity-dependent contact angle measurements. With this new setup I showed that surfactant solutions do not follow the predicted Cox-Voinov relation, but show a stronger contact angle dependency on surface tension. All surfactants, independent of their charge, showed this deviation from the prediction, so that electrostatic interactions could be excluded as a reason. Instead, I propose the formation of a surface tension gradient close to the three-phase contact line as the main reason for the strong contact angle decrease with increasing surfactant concentration. Surface tension gradients are not only formed locally close to the three-phase contact line, but also globally along the air-liquid interface due to the continuous creation/destruction of the interface by the drum moving out of/into the liquid. By systematically hindering the equilibration routes of the global gradient along the interface and/or through the bulk, I was able to show that the setup geometry is also important for the wetting/dewetting of surfactant solutions. Further, surface properties like roughness or chemical homogeneity of the wetted/dewetted substrate influence the wetting/dewetting behavior of the liquid, i.e. the three-phase contact line is pinned differently on rough/smooth or homogeneous/inhomogeneous surfaces.

Altogether I showed that the wetting/dewetting of surfactant solutions does not depend on the surfactant type (anionic, cationic, or non-ionic) but on the surfactant concentration and strength, the setup geometry, and the surface properties.

Surfactants do not only influence the wetting/dewetting behavior of liquids, but also the impact behavior of drops on free-standing films or solutions. In a further part of this work, I dealt with the stability of the air cushion between drop and film/solution. To allow coalescence between drop and substrate, the air cushion has to vanish. In the presence of surfactants, the vanishing of the air is slowed down due to a change in the boundary condition from slip to no-slip, i.e. coalescence is suppressed or slowed down.
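
The Cox-Voinov scaling quoted above can be made concrete with a small worked example: the capillary number is Ca = μU/γ, and the dynamic contact angle is predicted to scale as Ca^(1/3). The fluid properties and speeds below are illustrative, roughly water-like values, not measurements from the rotating-drum experiments.

```python
def capillary_number(viscosity_pa_s, velocity_m_s, surface_tension_n_m):
    """Ca = mu * U / gamma (dimensionless)."""
    return viscosity_pa_s * velocity_m_s / surface_tension_n_m

ca_slow = capillary_number(1e-3, 0.01, 0.072)   # contact line at 1 cm/s
ca_fast = capillary_number(1e-3, 0.08, 0.072)   # contact line at 8 cm/s

# Cox-Voinov: theta ~ Ca^(1/3), so an 8x increase in speed should change the
# contact angle scale by a factor of 8^(1/3) = 2.
ratio = (ca_fast / ca_slow) ** (1.0 / 3.0)
```

The thesis's observation is precisely that surfactant solutions deviate from this weak one-third-power dependence, showing a stronger sensitivity to surface tension instead.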

Relevance: 30.00%

Abstract:

Dimensional modeling, GT-Power in particular, has been used for two related purposes: to quantify and understand the inaccuracies of transient engine flow estimates that cause transient smoke spikes, and to improve empirical models of opacity or particulate matter used for engine calibration. It has been proposed by dimensional modeling that exhaust gas recirculation flow rate was significantly underestimated and volumetric efficiency was overestimated by the electronic control module during the turbocharger lag period of an electronically controlled heavy duty diesel engine. Factoring in cylinder-to-cylinder variation, it has been shown that the electronic control module estimate of the fuel-oxygen ratio was lower than actual by up to 35% during the turbocharger lag period but within 2% of actual elsewhere, thus hindering fuel-oxygen ratio limit-based smoke control. The dimensional modeling of transient flow was enabled by a new method of simulating transient data in which the manifold pressures and exhaust gas recirculation system flow resistance, characterized as a function of exhaust gas recirculation valve position at each measured transient data point, were replicated by quasi-static or transient simulation to predict engine flows. Dimensional modeling was also used to transform the engine operating parameter model input space to a more fundamental lower dimensional space so that a nearest neighbor approach could be used to predict smoke emissions. This new approach, intended for engine calibration and control modeling, was termed the "nonparametric reduced dimensionality" approach. It was used to predict federal test procedure cumulative particulate matter within 7% of the measured value, based solely on steady-state training data. Very little correlation between the model inputs in the transformed space was observed, as compared to the engine operating parameter space. This smaller, more uniform model input space might explain how the nonparametric reduced dimensionality approach could successfully predict federal test procedure emissions even though roughly 40% of all transient points were classified as outliers with respect to the steady-state training data.
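
The nearest-neighbor step of the "nonparametric reduced dimensionality" approach can be sketched as follows: transform each operating point into a low-dimensional feature space, then predict emissions from the closest steady-state training point. The feature names and all numbers below are hypothetical, invented for illustration.

```python
import math

def nearest_neighbor_predict(query, training):
    """training: list of (feature_vector, target); return target of closest."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    best = min(training, key=lambda pair: dist(pair[0], query))
    return best[1]

# Hypothetical 2-D reduced features (e.g. a fuel-oxygen ratio proxy and an
# EGR fraction) mapped to a smoke opacity value from steady-state tests.
train = [((0.2, 0.10), 1.0), ((0.5, 0.20), 4.0), ((0.8, 0.30), 9.0)]
opacity = nearest_neighbor_predict((0.55, 0.22), train)
```

The value of the dimensionality reduction is that transient operating points land close to steady-state training points in the transformed space, so a lookup like this can generalize from steady-state data to transient prediction.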