984 results for Empirical Modeling


Relevance:

100.00%

Abstract:

The authors have endeavored to create a verified, a posteriori model of a planktonic ecosystem. Verification of an empirically derived set of first-order quadratic differential equations proved elusive because of the sensitivity of the model system to changes in initial conditions. Efforts to verify a similarly derived set of linear differential equations were more encouraging, yielding reasonable behavior for half of the ten ecosystem compartments modeled. The well-behaved species models gave indications of the rate-controlling processes in the ecosystem.
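
As a rough illustration of the linear, empirically derived approach described above, the sketch below fits a linear compartment model dx/dt = Ax to time-series data and then checks it by forward simulation. The data, the number of compartments, and the sampling are hypothetical stand-ins, not values from the original study.

```python
# A minimal sketch of empirically fitting a linear compartment model
# dx/dt = A x to observed time series, then simulating it forward to
# "verify" the fit. All data here are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 30.0, 120)                  # days (assumed sampling)
x = np.abs(rng.normal(1.0, 0.2, (t.size, 3)))    # stand-in compartment biomass

# Central-difference estimate of dx/dt at interior time points.
dxdt = (x[2:] - x[:-2]) / (t[2:, None] - t[:-2, None])

# Least-squares fit of the rate matrix A from dx/dt ~ x A^T.
A, *_ = np.linalg.lstsq(x[1:-1], dxdt, rcond=None)
A = A.T

# Simulate the fitted linear system with a simple Euler step and
# compare against the data as a crude verification.
sim = np.empty_like(x)
sim[0] = x[0]
for i in range(1, t.size):
    sim[i] = sim[i - 1] + (t[i] - t[i - 1]) * (A @ sim[i - 1])

print("fitted rate matrix:\n", A)
print("RMS verification error:", np.sqrt(np.mean((sim - x) ** 2)))
```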

Relevance:

70.00%

Abstract:

This paper proposes the use of empirical modeling techniques for building microarchitecture-sensitive models for compiler optimizations. The models we build relate program performance to settings of compiler optimization flags, associated heuristics, and key microarchitectural parameters. Unlike traditional analytical modeling methods, this relationship is learned entirely from data obtained by measuring performance at a small number of carefully selected compiler/microarchitecture configurations. We evaluate three different learning techniques in this context, namely linear regression, adaptive regression splines, and radial basis function networks. We use the generated models to a) predict program performance at arbitrary compiler/microarchitecture configurations, b) quantify the significance of complex interactions between optimizations and the microarchitecture, and c) efficiently search for 'optimal' settings of optimization flags and heuristics for any given microarchitectural configuration. Our evaluation using benchmarks from the SPEC CPU2000 suite suggests that accurate models (< 5% average prediction error) can be generated using a reasonable number of simulations. We also find that using compiler settings prescribed by a model-based search can improve program performance by as much as 19% (9.5% on average) over highly optimized binaries.
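
The core idea admits a compact sketch: learn performance as a function of flag settings from a small measured sample, then predict and search over unseen configurations. In the sketch below the flag vector, the synthetic "measured" runtimes, and the model choice (an RBF-kernel ridge regression standing in for an RBF network) are all assumptions, not the paper's setup.

```python
# Minimal sketch: fit a model from sampled flag configurations, then
# (a) predict arbitrary configurations and (c) search for 'optimal'
# flags under the model. Data and model choice are illustrative only.
from itertools import product

import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(1)
n_flags = 8
X = rng.integers(0, 2, (60, n_flags)).astype(float)   # 60 sampled configs

# Hypothetical ground truth with an interaction between two flags,
# mimicking optimization/microarchitecture interactions.
y = 10.0 - 1.5 * X[:, 0] - 0.8 * X[:, 3] + 2.0 * X[:, 0] * X[:, 5]
y += rng.normal(0.0, 0.1, y.size)                     # measurement noise

model = KernelRidge(kernel="rbf", alpha=1e-2, gamma=0.5).fit(X, y)

# a) predict performance at arbitrary, unmeasured configurations
X_new = rng.integers(0, 2, (5, n_flags)).astype(float)
print("predicted runtimes:", model.predict(X_new))

# c) brute-force search over all 2^8 flag settings under the model
grid = np.array(list(product([0.0, 1.0], repeat=n_flags)))
best = grid[np.argmin(model.predict(grid))]
print("model-recommended flags:", best)
```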

Relevance:

70.00%

Abstract:

The accurate determination of the thermophysical properties of milk is very important for the design, simulation, optimization, and control of food processing operations such as evaporation, heat exchange, and spray drying. Generally, polynomial methods based on empirical correlation to experimental data are used to predict these properties. Artificial neural networks are better suited to processing noisy and extensive data. This article proposes the application of neural networks for the prediction of the specific heat, thermal conductivity, and density of milk over temperatures ranging from 2.0 to 71.0 °C, water contents from 72.0 to 92.0% (w/w), and fat contents from 1.350 to 7.822% (w/w). The artificial neural networks showed better prediction capability for the specific heat, thermal conductivity, and density of milk than polynomial modeling, and they represent a reasonable alternative to empirical modeling of the thermophysical properties of foods.
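
A minimal sketch of this kind of network, assuming hypothetical measurements in place of the article's experimental data: a small neural network predicting specific heat from temperature, water content, and fat content over the stated ranges.

```python
# Minimal sketch: neural-network prediction of milk specific heat from
# temperature, water, and fat content. The "measured" data below are
# invented purely so the example runs.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
n = 200
T = rng.uniform(2.0, 71.0, n)        # temperature, °C
W = rng.uniform(72.0, 92.0, n)       # water content, % w/w
F = rng.uniform(1.35, 7.822, n)      # fat content, % w/w
X = np.column_stack([T, W, F])

# Stand-in specific heat (kJ/kg·K) with noise; the functional form is
# an assumption, not a published correlation.
cp = 1.6 + 0.025 * W - 0.02 * F + 0.0008 * T + rng.normal(0, 0.01, n)

nn = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0),
).fit(X, cp)

print("NN prediction at 40 °C, 85% water, 3% fat:",
      nn.predict([[40.0, 85.0, 3.0]]))
```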

Relevance:

70.00%

Abstract:

Model-based calibration of steady-state engine operation is commonly performed with highly parameterized empirical models that are accurate but not very robust, particularly when predicting highly nonlinear responses such as diesel smoke emissions. To address this problem, and to boost the accuracy of more robust non-parametric methods to the same level, GT-Power was used to transform the empirical model input space into multiple input spaces that simplified the input-output relationship and improved the accuracy and robustness of smoke predictions made by three commonly used empirical modeling methods: multivariate regression, neural networks, and the k-nearest neighbor method. The availability of multiple input spaces allowed the development of two committee techniques: a 'Simple Committee' technique that averaged predictions from a set of 10 input spaces pre-selected using the training data, and a 'Minimum Variance Committee' technique in which the input spaces for each prediction were chosen on the basis of disagreement between the three modeling methods. The latter technique equalized the performance of the three modeling methods. The successively increasing improvements resulting from the use of a single best transformed input space (the 'Best Combination' technique), the Simple Committee technique, and the Minimum Variance Committee technique were verified with hypothesis testing. The transformed input spaces were also shown to improve outlier detection and to improve k-nearest neighbor performance when predicting dynamic emissions with steady-state training data. An unexpected finding was that the benefits of input space transformation were unaffected by changes in the hardware or the calibration of the underlying GT-Power model.
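
The two committee ideas lend themselves to a compact sketch. Below, generic scikit-learn regressors and trivially "transformed" input spaces stand in for the paper's GT-Power-derived spaces and tuned models; this is an illustration of the committee logic, not the paper's implementation.

```python
# Minimal sketch of the two committee techniques over multiple input
# spaces, with hypothetical stand-ins for the transformed spaces and
# the three modeling methods.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.neural_network import MLPRegressor

def simple_committee(models_by_space, x_spaces):
    """Average predictions over a fixed, pre-selected set of input spaces."""
    preds = [np.mean([m.predict(x) for m in models], axis=0)
             for models, x in zip(models_by_space, x_spaces)]
    return np.mean(preds, axis=0)

def min_variance_committee(models_by_space, x_spaces):
    """Per point, pick the input space where the three methods disagree
    least, then average their predictions in that space."""
    all_preds = np.array([[m.predict(x) for m in models]
                          for models, x in zip(models_by_space, x_spaces)])
    disagreement = all_preds.var(axis=1)        # (n_spaces, n_points)
    best_space = disagreement.argmin(axis=0)    # (n_points,)
    mean_preds = all_preds.mean(axis=1)         # (n_spaces, n_points)
    return mean_preds[best_space, np.arange(best_space.size)]

# Tiny demo with two stand-in "transformed" input spaces.
rng = np.random.default_rng(8)
X = rng.uniform(-1, 1, (100, 3))
y = X[:, 0] ** 2 + 0.1 * rng.normal(size=100)
spaces = [X, np.abs(X)]
models_by_space = [
    [LinearRegression().fit(s, y),
     KNeighborsRegressor().fit(s, y),
     MLPRegressor(max_iter=2000, random_state=0).fit(s, y)]
    for s in spaces
]
x_spaces = [s[:5] for s in spaces]              # predict the first 5 points
print(min_variance_committee(models_by_space, x_spaces))
```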

Relevance:

60.00%

Abstract:

As a key issue in ionospheric weather studies, systematic study of ionospheric storms can not only further improve our understanding of the response of the ionosphere to solar and geomagnetic disturbances, but also help to reveal the chemical, dynamic, and electrodynamic mechanisms at work during storms. Empirical modelling of regional ionospheric storms is also very useful, because it provides tools and references for forecasting and for further practical applications. This thesis focuses on describing and forecasting ionospheric storms at middle and low latitudes. The main findings are as follows. (1) Using magnetic storms spanning a period of more than 50 years, the dependence of the type, onset time, and time delay of ionospheric storms on magnetic latitude, season, and local time at middle and low latitudes in the East-Asian sector is studied. The results show that the occurrence of each type of ionospheric disturbance differs with latitude and season, and that the onset of ionospheric storms depends on local time. At middle latitudes, most negative phase onsets fall within the local time interval from night to early morning, and they rarely occur in the local noon and afternoon sectors. At low latitudes, positive phases commence most frequently in the daytime and pre-midnight sectors. The average time delays of both positive and negative ionospheric storms increase with decreasing latitude. The time delay also depends significantly on the local time of the main phase onset (MPO): the delay of the positive response is shorter for daytime MPO and longer for night-time MPO, whereas the opposite holds for the negative response. (2) Building on previous research, a preliminary empirical model of mid-latitude ionospheric disturbances is established. Fitted to the observed data, the model achieves high accuracy, with a mean RMSE of only 12-14% in summer and at equinox. Compared with the STORM model, our model performs much better in summer and slightly better for some mid-latitude stations at equinox. In particular, for two-step geomagnetic storms, our model reproduces the double decrease of foF2 very well. In addition, our model can forecast positive ionospheric storms.
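
For readers unfamiliar with this class of model, the sketch below illustrates the general shape of an empirical storm-time correction in the spirit of the STORM model the thesis compares against: regress the relative foF2 disturbance on a weighted time-history of geomagnetic activity. The filter length, weights, and data are all assumptions, not the thesis's model.

```python
# Minimal sketch of an empirical storm-time correction: fit the foF2
# disturbance ratio to an exponentially weighted history of the ap
# index. Everything here (data, filter, functional form) is illustrative.
import numpy as np

rng = np.random.default_rng(3)
ap = rng.gamma(2.0, 6.0, 24 * 60)              # synthetic ap time series

# Exponentially weighted sum over the preceding 33 samples of ap,
# a commonly assumed storm-time input parameter.
tau = 33.0
kernel = np.exp(-np.arange(33) / tau)
ap_filtered = np.convolve(ap, kernel, mode="valid")

# Stand-in observed disturbance ratio foF2_obs / foF2_quiet.
ratio = 1.0 - 0.004 * ap_filtered + rng.normal(0, 0.03, ap_filtered.size)

# Fit a quadratic empirical model and report its RMSE in percent.
coef = np.polyfit(ap_filtered, ratio, deg=2)
fit = np.polyval(coef, ap_filtered)
rmse = 100 * np.sqrt(np.mean((ratio - fit) ** 2))
print(f"mean RMSE of empirical fit: {rmse:.1f}%")
```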

Relevance:

60.00%

Abstract:

Empirical modeling of high-frequency currency market data reveals substantial evidence for nonnormality, stochastic volatility, and other nonlinearities. This paper investigates whether an equilibrium monetary model can account for nonlinearities in weekly data. The model incorporates time-nonseparable preferences and a transaction cost technology. Simulated sample paths are generated using Marcet's parameterized expectations procedure. The paper also develops a new method for estimation of structural economic models. The method forces the model to match (under a GMM criterion) the score function of a nonparametric estimate of the conditional density of observed data. The estimation uses weekly U.S.-German currency market data, 1975-90. © 1995.
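
The estimation idea can be sketched in a heavily simplified form: choose structural parameters so that simulated sample paths match features of the observed data under a quadratic, GMM-style criterion. The paper's actual method matches the score of a nonparametric estimate of the conditional density; the sketch below substitutes plain unconditional moments and a stand-in stochastic-volatility model for brevity, so every modeling choice here is an assumption.

```python
# Minimal simulated-moments sketch of matching a structural model to
# currency-return data under a GMM-style criterion. The model, moments,
# and data are illustrative stand-ins.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
data = 0.01 * rng.standard_t(df=5, size=800)    # stand-in weekly returns

def simulate(theta, n=2000):
    """Hypothetical model: AR(1) log-volatility driving returns."""
    mu, rho, sigma = theta
    r = np.random.default_rng(0)                # common random numbers
    eps, eta = r.standard_normal(n), r.standard_normal(n)
    h = np.zeros(n)
    for t in range(1, n):
        h[t] = np.clip(rho * h[t - 1] + sigma * eps[t], -10.0, 10.0)
    return mu + 0.01 * np.exp(h / 2) * eta

def moments(x):
    return np.array([x.mean(), x.var(), ((x - x.mean()) ** 4).mean()])

def criterion(theta):
    g = moments(simulate(theta)) - moments(data)
    return g @ g                                # identity weighting

res = minimize(criterion, x0=[0.0, 0.9, 0.1], method="Nelder-Mead")
print("estimated (mu, rho, sigma):", res.x)
```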

Relevance:

60.00%

Abstract:

A conjugate heat transfer (CHT) method was used to perform the aerothermal analysis of an internally cooled turbine vane, and was validated against experimental and empirical data.
Firstly, the method was validated with regard to internal cooling by reproducing heat transfer test data in a channel with pin-fin heat augmenters under a steady, constant wall temperature. The computed Nusselt numbers for the two tested configurations (full-length circular pin fins attached to both walls, and partial pin fins attached to one wall only) showed good agreement with the measurements. Sensitivity to mesh density was evaluated for this simplified case in order to establish the mesh requirements for the analysis of the full component.
Secondly, the CHT method was applied to a turbine vane test case from an actual engine. The predicted vane airfoil metal temperature was compared with the measured thermal paint data and with in-house empirical predictions. The CHT results agreed well with the thermal paint data and gave better predictions than the current empirical modeling approach.
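
For context on the empirical side of such comparisons, the sketch below evaluates a textbook internal-flow correlation (Dittus-Boelter) for a cooling channel. The flow conditions are hypothetical, and this generic correlation is only a stand-in: the paper's in-house empirical methods and pin-fin correlations are not public.

```python
# Minimal sketch: a textbook empirical Nusselt-number estimate for an
# internal cooling channel (Dittus-Boelter), with assumed conditions.
def nusselt_dittus_boelter(re: float, pr: float, heating: bool = True) -> float:
    """Nu = 0.023 Re^0.8 Pr^n, with n = 0.4 when the wall heats the coolant."""
    n = 0.4 if heating else 0.3
    return 0.023 * re ** 0.8 * pr ** n

re, pr = 3.0e4, 0.7        # assumed channel Reynolds and Prandtl numbers
nu = nusselt_dittus_boelter(re, pr)
k, d_h = 0.026, 0.004      # air conductivity (W/m·K), hydraulic diameter (m)
h = nu * k / d_h           # convective heat transfer coefficient (W/m²·K)
print(f"Nu = {nu:.1f}, h = {h:.0f} W/m^2·K")
```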

Relevance:

60.00%

Abstract:

This study describes a combined empirical/modeling approach to assessing the possible impact of climate variability on rice production in the Philippines. We collated climate data for the last two decades (1985-2002) as well as yield statistics for six provinces of the Philippines, selected along a North-South gradient. Data from NASA's climate information system were used as input parameters for the model ORYZA2000 to determine potential yields and, in a next step, the yield gap, defined as the difference between potential and actual yields. Both simulated and actual yields of irrigated rice varied strongly between years. However, no climate-driven trends were apparent, and the variability in actual yields showed no correlation with climatic parameters. The observed variation in simulated yields was attributable to seasonal variations in climate (dry/wet season) and to climatic differences between provinces and agro-ecological zones. The actual yield variation between provinces was related not to differences in the climatic yield potential but rather to soil and management factors. The resulting yield gap was largest in remote and infrastructurally disfavored provinces (low external input use) with a high production potential (high solar radiation and day-night temperature differences). In turn, the yield gap was lowest in central provinces with good market access but a relatively low climatic yield potential. We conclude that neither long-term trends nor the variability of the climate can explain current rice yield trends, and that agroecological, seasonal, and management effects override any possible climatic variations. On the other hand, the absence of a climate-driven trend in the present situation may be superseded by ongoing climate change in the future.
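
The yield-gap computation at the heart of the study is simple to express. The sketch below uses hypothetical province-level numbers in place of the ORYZA2000 output and the Philippine yield statistics, which are not reproduced in the abstract; the province names are placeholders.

```python
# Minimal sketch of the yield-gap analysis: gap = potential - actual,
# ranked by province. All figures below are invented placeholders.
import pandas as pd

df = pd.DataFrame({
    "province":  ["Ilocos", "Isabela", "Laguna", "Iloilo", "Davao", "Agusan"],
    "potential": [9.8, 9.2, 8.1, 8.6, 8.9, 9.5],   # simulated, t/ha
    "actual":    [5.1, 4.8, 5.9, 4.2, 4.6, 3.1],   # reported,  t/ha
})

# Yield gap as defined in the abstract: potential minus actual yield.
df["yield_gap"] = df["potential"] - df["actual"]

# The largest gaps flag high-potential, low-input provinces.
print(df.sort_values("yield_gap", ascending=False))
```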

Relevance:

60.00%

Abstract:

This paper reviews the main theoretical approaches to human resources in science and technology and the empirical modeling of academic and scientific careers using the curriculum vitae (CV) as the principal source of information. It also presents the results of several studies carried out in Colombia based on knowledge-capital theory. These studies have established a line of research on evaluating the behavior of human resources, the transition into scientific communities, and the study of researchers' academic careers. They also show that the information contained in the ScienTI Platform (Grup-Lac and Cv-Lac) makes it possible to establish concretely the country's scientific and technological capabilities. Keywords: human resources, academic and scientific careers, discrete regression and qualitative choice models. JEL classification: C25, O15.
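
The keywords name discrete regression and qualitative choice models; the sketch below shows the general shape of such a model applied to CV data, using hypothetical covariates and an invented outcome. It illustrates the technique class only, not any of the Colombian studies.

```python
# Minimal sketch of a qualitative-choice (logit) model on CV-derived
# covariates: does a researcher transition into an established
# scientific community? All data and covariate names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n = 500
pubs = rng.poisson(5, n)                  # publication count from the CV
years = rng.uniform(0, 20, n)             # years since first degree
phd = rng.integers(0, 2, n)               # holds a doctorate

# Hypothetical transition outcome generated so the example runs.
logit = -3.0 + 0.25 * pubs + 0.05 * years + 1.2 * phd
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([pubs, years, phd])
model = LogisticRegression().fit(X, y)
print("coefficients (pubs, years, PhD):", model.coef_.ravel())
```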

Relevance:

60.00%

Abstract:

This paper reviews the main theoretical approaches to human resources in science and technology and the empirical modeling of academic and scientific careers using CVs as the principal source of information. It also presents the results of several studies carried out in Colombia based on knowledge-capital theory. These studies have established a line of research on evaluating the behavior of human resources, the transition into scientific communities, and the study of researchers' academic careers. They also show that the information contained in the ScienTI Platform (Grup-Lac and Cv-Lac) makes it possible to establish concretely the country's scientific and technological capabilities.

Relevance:

60.00%

Abstract:

This paper offers a way of operationalizing resilience according to the sub-systems that comprise a business (technical, social, and economic), the kinds of perturbations that might impact a business (local, national, and global), and the criteria that determine resilient capacity (redundancy, requisite variety, and resources). When a business system has incorporated redundancy, developed requisite variety, and adequately monitors its resources, we may conclude that it is a resilient business. The model offered here is theoretical and has yet to undergo empirical scrutiny. Empirical modeling will enable us to ascertain the strength of a business's internal characteristics against different levels and kinds of external perturbation. Sensitivity analysis of this kind will lead to a more in-depth understanding of the dynamics that generate resilient businesses in a complex world.
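
The framework's structure of sub-systems and criteria can be expressed as a simple data structure. The sketch below is one possible operationalization under assumed scoring conventions; the 0-1 scale and the threshold are inventions, not part of the paper's model.

```python
# Minimal sketch of the sub-system x criterion framing as a scoring
# matrix; the scale and threshold are assumptions.
SUBSYSTEMS = ("technical", "social", "economic")
CRITERIA = ("redundancy", "requisite_variety", "resources")

def is_resilient(scores: dict[tuple[str, str], float],
                 threshold: float = 0.5) -> bool:
    """scores maps (subsystem, criterion) -> a 0..1 assessment; the
    business counts as resilient only if every cell clears the bar."""
    return all(scores[(s, c)] >= threshold
               for s in SUBSYSTEMS for c in CRITERIA)

example = {(s, c): 0.7 for s in SUBSYSTEMS for c in CRITERIA}
print(is_resilient(example))   # True under the assumed threshold
```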

Relevance:

60.00%

Abstract:

Second-order polynomial models have been used extensively to approximate the relationship between a response variable and several continuous factors. However, sometimes polynomial models do not adequately describe the important features of the response surface. This article describes the use of fractional polynomial models. It is shown how the models can be fitted, an appropriate model selected, and inference conducted. Polynomial and fractional polynomial models are fitted to two published datasets, illustrating that a fractional polynomial can sometimes fit the data as well as a polynomial model while exhibiting much more plausible behavior between the design points. © 2005 American Statistical Association and the International Biometric Society.
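
A minimal sketch of first-order fractional polynomial (FP1) fitting, using synthetic data rather than the article's published datasets: try each power in the conventional FP power set, fit by least squares, keep the best, and compare with an ordinary quadratic.

```python
# Minimal FP1 sketch: select the best power from the standard set by
# residual sum of squares, and compare with a quadratic polynomial.
import numpy as np

POWERS = (-2, -1, -0.5, 0, 0.5, 1, 2, 3)     # standard FP power set

def fp_term(x, p):
    return np.log(x) if p == 0 else x ** p   # power 0 denotes log(x)

rng = np.random.default_rng(6)
x = rng.uniform(0.5, 10.0, 80)
y = 3.0 + 2.0 / x + rng.normal(0, 0.1, x.size)   # true power is -1

def fit_rss(design, y):
    beta, rss, *_ = np.linalg.lstsq(design, y, rcond=None)
    return beta, (rss[0] if rss.size else np.sum((design @ beta - y) ** 2))

best = min(
    ((p, *fit_rss(np.column_stack([np.ones_like(x), fp_term(x, p)]), y))
     for p in POWERS),
    key=lambda t: t[2],
)
quad = fit_rss(np.column_stack([np.ones_like(x), x, x ** 2]), y)
print(f"best FP1 power: {best[0]}, RSS {best[2]:.3f} "
      f"vs quadratic RSS {quad[1]:.3f}")
```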

Relevance:

60.00%

Abstract:

Novel brominated amorphous hydrogenated carbon (a-C:H:Br) films were produced by the plasma polymerization of acetylene-bromoform mixtures. The main parameter of interest was the degree of bromination, which depends on the partial pressure of bromoform in the plasma feed, expressed as a percentage of the total pressure, R_B. When bromoform is present in the feed, deposition rates of up to about 110 nm min⁻¹ may be obtained. The structure and composition of the films were characterized by Transmission Infrared Reflection Absorption Spectroscopy (IRRAS) and X-ray Photoelectron Spectroscopy (XPS). The latter revealed that films with Br:C atomic ratios of up to 0.58 may be produced. Surface contact angles, measured using goniometry, could be increased from ~63° (for an unbrominated film) to ~90° for R_B of 60 to 80%. Film surface roughness, measured using a profilometer, does not depend strongly on R_B. The optical properties, namely the refractive index, n, the absorption coefficient, α(E), where E is the photon energy, and the optical gap, E_g, were determined from film thicknesses and data obtained by Transmission Ultraviolet-Visible-Near-Infrared Spectroscopy (UVS). Control of n was possible via selection of R_B. The measured optical gap increases with increasing F_BC, the atomic ratio of Br to C in the film, and semi-empirical modeling accounts for this tendency. A typical hardness of the brominated films, determined via nano-indentation, was ~0.5 GPa. © 2013 Elsevier B.V. All rights reserved.
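
One common way an optical gap E_g is extracted from UV-Vis absorption data for amorphous films is a Tauc plot, fitting the linear region of (αE)^(1/2) versus photon energy E. The sketch below uses a synthetic spectrum, since the paper's raw data are not reproduced in the abstract, and the Tauc form is a standard assumption rather than the paper's stated procedure.

```python
# Minimal Tauc-plot sketch: estimate the optical gap from the energy-
# axis intercept of a linear fit to (alpha*E)^0.5 vs E. Synthetic data.
import numpy as np

E = np.linspace(1.5, 4.0, 200)                    # photon energy, eV
E_g_true, B = 2.4, 5.0e5
alpha = np.where(E > E_g_true, B * (E - E_g_true) ** 2 / E, 0.0)
alpha += np.random.default_rng(7).normal(0, 50.0, E.size).clip(0)

# Fit (alpha*E)^0.5 = b*(E - E_g) over the strongly absorbing region;
# the intercept with the energy axis estimates the optical gap.
y = np.sqrt(alpha * E)
mask = alpha > 0.1 * alpha.max()                  # keep the linear region
slope, intercept = np.polyfit(E[mask], y[mask], 1)
print(f"estimated optical gap: {-intercept / slope:.2f} eV")
```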

Relevance:

60.00%

Abstract:

This is the first part of a study investigating a model-based transient calibration process for diesel engines. The motivation is to populate hundreds of calibratable parameters in a methodical and optimal manner by using model-based optimization in conjunction with the manual process, so that, relative to the manual process used by itself, a significant improvement in transient emissions and fuel consumption and a sizable reduction in calibration time and test cell requirements are achieved. Empirical transient modelling and optimization are addressed in the second part of this work, while the required data for model training and generalization are the focus of the current work. Transient and steady-state data from a turbocharged multicylinder diesel engine have been examined from a model training perspective. A single-cylinder engine with external air-handling has been used to expand the steady-state data to encompass the transient parameter space. Based on comparative model performance and differences in the non-parametric space, primarily driven by a large difference between exhaust and intake manifold pressures (engine ΔP) during transients, it has been recommended that transient emission models be trained with transient training data. It has been shown that electronic control module (ECM) estimates of transient charge flow and the exhaust gas recirculation (EGR) fraction cannot be accurate at the high engine ΔP frequently encountered during transient operation, and that such estimates do not account for cylinder-to-cylinder variation. The effects of high engine ΔP must therefore be incorporated empirically by using transient data generated from a spectrum of transient calibrations. Specific recommendations have been made on how to choose such calibrations, how many data to acquire, and how to specify transient segments for data acquisition. Methods to process transient data to account for transport delays and sensor lags have been developed. The processed data have then been visualized using statistical means to understand transient emission formation. Two modes of transient opacity formation have been observed and described. The first mode is driven by high engine ΔP and low fresh air flowrates, while the second mode is driven by high engine ΔP and high EGR flowrates. The EGR fraction is inaccurately estimated in both modes, while EGR distribution effects have been shown to be present but unaccounted for by the ECM. The two modes and associated phenomena are essential to understanding why transient emission models are calibration dependent and, furthermore, how to choose training data that will result in good model generalization.
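
Two of the data-processing steps mentioned above, correcting transport delays and sensor lags, admit a short sketch. Below, the delay is estimated by cross-correlation and a first-order sensor lag is inverted analytically; the signal names, sample rate, and time constant are assumptions, not the study's values.

```python
# Minimal sketch of transient-data processing: align an emissions
# analyzer signal to engine signals (transport delay) and compensate a
# first-order sensor lag. All signals and constants are hypothetical.
import numpy as np

def remove_transport_delay(engine_sig, analyzer_sig):
    """Shift the analyzer signal by the lag that maximizes correlation."""
    n = len(engine_sig)
    corr = np.correlate(analyzer_sig - analyzer_sig.mean(),
                        engine_sig - engine_sig.mean(), mode="full")
    delay = corr.argmax() - (n - 1)       # samples the analyzer trails by
    return np.roll(analyzer_sig, -delay), delay

def invert_first_order_lag(sig, tau, dt):
    """Recover u from y' = (u - y)/tau given the lagged measurement y."""
    dy = np.gradient(sig, dt)
    return sig + tau * dy

dt, tau = 0.1, 0.8                        # s; assumed analyzer lag
t = np.arange(0.0, 20.0, dt)
u = (t % 5 < 2.5).astype(float)           # stand-in opacity "truth"
y = np.roll(u, 15)                        # 1.5 s transport delay
aligned, d = remove_transport_delay(u, y)
print(f"estimated transport delay: {d * dt:.1f} s")
```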