986 results for empirical modeling


Relevance:

100.00%

Publisher:

Abstract:

Empirical modeling of exposure levels has been popular for identifying exposure determinants in occupational hygiene. The traditional data-driven methods used to choose a model on which to base inferences have typically not accounted for the uncertainty linked to the process of selecting the final model. Several new approaches propose making statistical inferences from a set of plausible models rather than from a single model regarded as 'best'. This paper introduces the multimodel averaging approach described in the monograph by Burnham and Anderson. In their approach, a set of plausible models is defined a priori by taking into account the sample size and previous knowledge of variables influencing exposure levels. The Akaike information criterion is then calculated to evaluate the relative support of the data for each model, expressed as an Akaike weight, interpreted as the probability that the model is the best approximating model given the model set. The model weights can then be used to rank models, quantify the evidence favoring one over another, perform multimodel prediction, estimate the relative influence of the potential predictors, and estimate multimodel-averaged effects of determinants. The whole approach is illustrated with the analysis of a data set of 1500 volatile organic compound exposure levels collected by the Institute for Work and Health (Lausanne, Switzerland) over 20 years, each concentration having been divided by the relevant Swiss occupational exposure limit and log-transformed before analysis. Multimodel inference represents a promising procedure for modeling exposure levels: it incorporates the notion that several models can be supported by the data, and it makes it possible to evaluate, to a certain extent, the model selection uncertainty that is seldom mentioned in current practice.
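
The weight calculation just described is easy to reproduce. The sketch below is a minimal illustration with hypothetical AIC values (not from the paper): each model's weight is exp(-Δi/2), normalized over the model set, where Δi is the difference between model i's AIC and the lowest AIC in the set.

```python
import math

def akaike_weights(aic_values):
    """Compute Akaike weights from a list of AIC values.

    Each weight is interpreted as the probability that the corresponding
    model is the best approximating model within the candidate set.
    """
    best = min(aic_values)
    # Delta-AIC: difference from the lowest AIC in the set
    deltas = [a - best for a in aic_values]
    # Relative likelihood of each model given the data
    rel_likelihoods = [math.exp(-d / 2.0) for d in deltas]
    total = sum(rel_likelihoods)
    return [r / total for r in rel_likelihoods]

# Hypothetical AIC values for a three-model set
weights = akaike_weights([100.0, 102.0, 110.0])
```

Models can then be ranked by weight, and the evidence ratio favoring one model over another is simply the ratio of their weights.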

Relevance:

70.00%

Publisher:

Abstract:

The accurate determination of the thermophysical properties of milk is very important for the design, simulation, optimization, and control of food processing operations such as evaporation, heat exchanging, and spray drying. Generally, polynomial methods are used to predict these properties, based on empirical correlations to experimental data. Artificial neural networks are better suited to processing noisy and extensive data. This article proposes the application of neural networks to the prediction of the specific heat, thermal conductivity, and density of milk for temperatures from 2.0 to 71.0 °C, water contents from 72.0 to 92.0% (w/w), and fat contents from 1.350 to 7.822% (w/w). The artificial neural networks predicted specific heat, thermal conductivity, and density better than polynomial modeling did, and they proved a reasonable alternative to empirical modeling of the thermophysical properties of foods.
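
As a rough illustration of the kind of network the article compares against polynomials, here is a minimal single-hidden-layer regression net trained by batch gradient descent. The data, target function, and architecture are all stand-in assumptions (not the authors'), with inputs drawn from the stated temperature, water, and fat ranges.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical inputs: (temperature °C, water % w/w, fat % w/w) in the stated ranges
X = rng.uniform([2.0, 72.0, 1.35], [71.0, 92.0, 7.822], size=(200, 3))
# Synthetic "specific heat" target: a smooth function standing in for measured data
y = 2.5 + 0.015 * X[:, 1] - 0.01 * X[:, 2] + 0.001 * X[:, 0]

# Normalize inputs so training is stable
Xn = (X - X.mean(axis=0)) / X.std(axis=0)

# One hidden layer with tanh activation, trained by batch gradient descent
W1 = rng.normal(0, 0.5, (3, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(2000):
    H = np.tanh(Xn @ W1 + b1)           # hidden activations
    pred = (H @ W2 + b2).ravel()        # network output
    err = pred - y
    # Backpropagation of the mean-squared-error gradient
    gW2 = H.T @ err[:, None] / len(y); gb2 = np.array([err.mean()])
    dH = (err[:, None] @ W2.T) * (1 - H**2)
    gW1 = Xn.T @ dH / len(y); gb1 = dH.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

mse = float(np.mean((pred - y) ** 2))   # training error after the final pass
```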

Relevance:

70.00%

Publisher:

Abstract:

Model-based calibration of steady-state engine operation is commonly performed with highly parameterized empirical models that are accurate but not very robust, particularly when predicting highly nonlinear responses such as diesel smoke emissions. To address this problem, and to boost the accuracy of more robust non-parametric methods to the same level, GT-Power was used to transform the empirical model input space into multiple input spaces that simplified the input-output relationship and improved the accuracy and robustness of smoke predictions made by three commonly used empirical modeling methods: multivariate regression, neural networks, and the k-nearest neighbor method. The availability of multiple input spaces allowed the development of two committee techniques: a 'Simple Committee' technique that used averaged predictions from a set of 10 input spaces pre-selected using the training data, and a 'Minimum Variance Committee' technique in which the input spaces for each prediction were chosen on the basis of disagreement between the three modeling methods. The latter technique equalized the performance of the three modeling methods. The successively increasing improvements resulting from the use of a single best transformed input space ('Best Combination' technique), the Simple Committee technique, and the Minimum Variance Committee technique were verified with hypothesis testing. The transformed input spaces were also shown to improve outlier detection and to improve k-nearest neighbor performance when predicting dynamic emissions with steady-state training data. An unexpected finding was that the benefits of input space transformation were unaffected by changes in the hardware or the calibration of the underlying GT-Power model.
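
A 'Simple Committee' of the kind described can be sketched as follows, with simple algebraic transformations standing in for the GT-Power-derived input spaces and k-nearest neighbors as the base method. Everything here (data, transforms, response) is a hypothetical stand-in, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)

X = rng.uniform(-1, 1, (100, 2))            # raw engine inputs (synthetic)
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2      # stand-in "smoke" response

# Hypothetical input-space transformations (in the paper these came from
# GT-Power physics; here they are simple algebraic stand-ins)
transforms = [
    lambda Z: Z,
    lambda Z: np.column_stack([Z[:, 0] + Z[:, 1], Z[:, 0] - Z[:, 1]]),
    lambda Z: np.column_stack([Z[:, 0] ** 2, Z[:, 1] ** 2]),
]

def knn_predict(Xtr, ytr, Xq, k=5):
    """k-nearest neighbor prediction: mean response of the k closest points."""
    d = np.linalg.norm(Xtr[None, :, :] - Xq[:, None, :], axis=2)
    idx = np.argsort(d, axis=1)[:, :k]
    return ytr[idx].mean(axis=1)

Xq = rng.uniform(-1, 1, (20, 2))            # query points
yq = np.sin(3 * Xq[:, 0]) + Xq[:, 1] ** 2   # true responses at the queries

# Committee prediction: average the k-NN predictions over all input spaces
preds = [knn_predict(t(X), y, t(Xq)) for t in transforms]
committee = np.mean(preds, axis=0)
```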

Relevance:

60.00%

Publisher:

Abstract:

This Master's thesis investigated real-time simulation of hydraulics and its potential as a product development tool. The work used hardware and software produced by dSPACE for real-time simulation. A component library consisting of the most typical semi-empirical models of hydraulic components was created in the Matlab/Simulink environment, and the hydraulic circuit models assembled from it could be compiled for the real-time environment. The goal of the work was to develop a method for speeding up and easing the design and product development of hydraulic-mechanical machine systems. The developed methods are based on a new control signal computed by real-time virtual hydraulics coupled to the real machine system; using this signal, the real hydraulics can reproduce the effects of the virtual hydraulics on the real system. Changes can thus be made to the virtual hydraulics and their effects observed in the behaviour of the real system.
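
A semi-empirical hydraulic component model of the kind such a component library typically contains is the turbulent orifice equation, Q = Cd·A·sqrt(2·|Δp|/ρ)·sign(Δp). The sketch below is illustrative only; the coefficients and dimensions are assumptions, not values from the thesis.

```python
import math

def orifice_flow(dp, cd=0.6, area=1e-5, rho=870.0):
    """Volumetric flow [m^3/s] through an orifice for pressure drop dp [Pa].

    cd   -- empirical discharge coefficient (dimensionless, assumed)
    area -- orifice cross-section [m^2] (assumed)
    rho  -- oil density [kg/m^3] (assumed)
    """
    # copysign makes the flow reverse direction with the pressure drop
    return math.copysign(cd * area * math.sqrt(2.0 * abs(dp) / rho), dp)

q = orifice_flow(5e6)   # flow at a 5 MPa pressure drop
```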

Relevance:

60.00%

Publisher:

Abstract:

Filtration is a widely used unit operation in chemical engineering. The huge variation in the properties of materials to be filtered makes the study of filtration a challenging task. One of the objectives of this thesis was to show that conventional filtration theories are difficult to use when the system to be modelled contains all of the stages and features that are present in a complete solid/liquid separation process. Furthermore, most of the filtration theories require experimental work to be performed in order to obtain critical parameters required by the theoretical models. Creating a good overall understanding of how the variables affect the final product in filtration is somewhat impossible on a purely theoretical basis. The complexity of solid/liquid separation processes requires experimental work, and when tests are needed, it is advisable to use experimental design techniques so that the goals can be achieved. The statistical design of experiments provides the necessary tools for recognising the effects of variables. It also helps to perform experimental work more economically. Design of experiments is a prerequisite for creating empirical models that can describe how the measured response is related to changes in the values of the variables. A software package was developed that provides a filtration practitioner with experimental designs and calculates the parameters for linear regression models, along with the graphical representation of the responses. The developed software consists of two modules, LTDoE and LTRead. The LTDoE module is used to create experimental designs for different filter types. The filter types considered in the software are the automatic vertical pressure filter, double-sided vertical pressure filter, horizontal membrane filter press, vacuum belt filter, and ceramic capillary action disc filter.
It is also possible to create experimental designs for those cases where the variables are totally user defined, say for a customized filtration cycle or a different piece of equipment. The LTRead module is used to read the experimental data gathered from the experiments, to analyse the data, and to create models for each of the measured responses. Introducing the structure of the software in more detail and showing some of the practical applications forms the main part of this thesis. This approach to the study of cake filtration processes, as presented in this thesis, has been shown to have good practical value when making filtration tests.
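
Conceptually, the workflow amounts to generating a design, running the tests, and fitting a linear regression model to each measured response. Since the software itself is not public, the sketch below illustrates only the idea, using a two-level full factorial design and hypothetical filtration variables and response values.

```python
import numpy as np
from itertools import product

# Hypothetical factor names for a filtration test (not from the thesis)
factors = ["pressure", "slurry_conc", "filtration_time"]
# Two-level full factorial design in coded units: 2^3 = 8 runs
design = np.array(list(product([-1, 1], repeat=len(factors))))

# Synthetic measured response (e.g. cake moisture) standing in for test data
rng = np.random.default_rng(2)
y = (30.0 - 2.0 * design[:, 0] + 1.5 * design[:, 1] - 0.5 * design[:, 2]
     + rng.normal(0, 0.1, len(design)))

# Least-squares fit of the linear model y = b0 + b1*x1 + b2*x2 + b3*x3
Xmat = np.column_stack([np.ones(len(design)), design])
coef, *_ = np.linalg.lstsq(Xmat, y, rcond=None)
```

The fitted coefficients directly quantify each factor's effect on the response in coded units, which is exactly the kind of summary such a tool reports alongside graphical output.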

Relevance:

60.00%

Publisher:

Abstract:

Virtual modeling means simulating a machine in a way that takes into account its mechanics, actuators, and control system. The goal of this work was to create a virtual model of a harvester head. The virtual model included the most important mechanical parts of the harvester head, the hydraulic circuit of the length-measuring device, and its control. The virtual model also took into account the contacts between the harvester head and the tree. In addition, the work examined how virtual modeling could be implemented as part of the company's product development process. Verification measurements were performed on the hydraulic circuit of the length-measuring device, and the hydraulic components of the virtual model were parameterized. The results obtained from the measurements were compared with those obtained from the virtual model. The work also presents a proposal for how virtual modeling should be exploited as part of the company's product development process. The results achieved in the different areas of the virtual model show that using virtual modeling during the product development process makes it possible to examine the functions of the harvester head before a prototype is built and tested. In addition, parameterizing the hydraulic circuit makes it possible to study the effect of the parameters on the system as a whole.

Relevance:

60.00%

Publisher:

Abstract:

This study describes a combined empirical/modeling approach to assess the possible impact of climate variability on rice production in the Philippines. We collated climate data of the last two decades (1985-2002) as well as yield statistics of six provinces of the Philippines, selected along a North-South gradient. Data from the climate information system of NASA were used as input parameters of the model ORYZA2000 to determine potential yields and, in the next step, the yield gaps, defined as the difference between potential and actual yields. Both simulated and actual yields of irrigated rice varied strongly between years. However, no climate-driven trends were apparent, and the variability in actual yields showed no correlation with climatic parameters. The observed variation in simulated yields was attributable to seasonal variations in climate (dry/wet season) and to climatic differences between provinces and agro-ecological zones. The actual yield variation between provinces was not related to differences in the climatic yield potential but rather to soil and management factors. The resulting yield gap was largest in remote and infrastructurally disfavored provinces (low external input use) with a high production potential (high solar radiation and day-night temperature differences). In turn, the yield gap was lowest in central provinces with good market access but with a relatively low climatic yield potential. We conclude that neither long-term trends nor the variability of the climate can explain current rice yield trends and that agro-ecological, seasonal, and management effects are overriding any possible climatic variations. On the other hand, the lack of a climate-driven trend in the present situation may be superseded by ongoing climate change in the future.
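
The yield gap defined here is simply potential minus actual yield; a minimal illustration with hypothetical per-province values (t/ha, not the study's data):

```python
# Hypothetical potential yields (as ORYZA2000 would simulate) and
# actual yields (as provincial statistics would report), in t/ha
potential = {"ProvinceA": 9.5, "ProvinceB": 8.0}
actual = {"ProvinceA": 4.0, "ProvinceB": 6.5}

# Yield gap = potential - actual, per province
yield_gap = {p: potential[p] - actual[p] for p in potential}
# A remote, low-input province with high potential (ProvinceA here)
# shows the larger gap, mirroring the pattern the study reports
```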

Relevance:

60.00%

Publisher:

Abstract:

This paper reviews the main theoretical approaches to human resources in science and technology and the empirical modeling of academic and scientific careers using Curriculum Vitae (CV) records as the main source of information. It also presents the results of several studies carried out in Colombia based on the theory of knowledge capital. These studies have established a line of research on evaluating the behavior of human resources, the transition toward scientific communities, and the study of researchers' academic careers. They also show that the information contained in the ScienTI Platform (Grup-Lac and Cv-Lac) makes it possible to establish concretely the country's scientific and technological capabilities. Keywords: human resources, academic and scientific careers, discrete regression and qualitative choice models. JEL classification: C25, O15.
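
The keywords point to discrete regression and qualitative-choice models; a minimal example of that model class is a logit fitted by gradient ascent on synthetic CV-style data. The variables, effect sizes, and outcome below are hypothetical stand-ins, not the ScienTI data.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
pubs = rng.poisson(5, n)                 # publication count (hypothetical)
phd = rng.integers(0, 2, n)              # holds a doctorate, 0/1 (hypothetical)
X = np.column_stack([np.ones(n), pubs, phd])

# Synthetic "ground truth": log-odds of belonging to a scientific community
true_beta = np.array([-2.0, 0.3, 1.0])
p = 1.0 / (1.0 + np.exp(-X @ true_beta))
member = rng.binomial(1, p)              # observed 0/1 outcome

# Fit the logit by gradient ascent on the log-likelihood
beta = np.zeros(3)
for _ in range(20000):
    mu = 1.0 / (1.0 + np.exp(-X @ beta))
    beta += 0.01 * X.T @ (member - mu) / n
```

The fitted coefficients recover the qualitative pattern: more publications and holding a doctorate both raise the odds of community membership.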

Relevance:

60.00%

Publisher:

Abstract:

This paper reviews the main theoretical approaches to human resources in science and technology and the empirical modeling of academic and scientific careers using CVs as the main source of information. It also presents the results of several studies carried out in Colombia based on the theory of knowledge capital. These studies have established a line of research on evaluating the behavior of human resources, the transition toward scientific communities, and the study of researchers' academic careers. They also show that the information contained in the ScienTI Platform (Grup-Lac and Cv-Lac) makes it possible to establish concretely the country's scientific and technological capabilities.

Relevance:

60.00%

Publisher:

Abstract:

Second-order polynomial models have been used extensively to approximate the relationship between a response variable and several continuous factors. However, sometimes polynomial models do not adequately describe the important features of the response surface. This article describes the use of fractional polynomial models. It is shown how the models can be fitted, an appropriate model selected, and inference conducted. Polynomial and fractional polynomial models are fitted to two published datasets, illustrating that the fractional polynomial can sometimes give as good a fit to the data as the polynomial model while exhibiting much more plausible behavior between the design points. © 2005 American Statistical Association and the International Biometric Society.
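
A first-degree fractional polynomial can be fitted by trying each power in the conventional candidate set {-2, -1, -0.5, 0, 0.5, 1, 2, 3} (with p = 0 meaning log x) and keeping the best least-squares fit. The sketch below uses synthetic data with an underlying 1/x relationship; the data and candidate set are illustrative assumptions drawn from common fractional-polynomial practice, not the article's examples.

```python
import numpy as np

def fp_transform(x, p):
    """Fractional-polynomial transform: x^p, with p = 0 meaning log(x)."""
    return np.log(x) if p == 0 else x ** p

def fit_fp1(x, y, powers=(-2, -1, -0.5, 0, 0.5, 1, 2, 3)):
    """Fit y = b0 + b1 * x^p for each candidate power; keep the best RSS."""
    best = None
    for p in powers:
        Xmat = np.column_stack([np.ones_like(x), fp_transform(x, p)])
        coef, rss, *_ = np.linalg.lstsq(Xmat, y, rcond=None)
        rss = float(rss[0]) if len(rss) else 0.0
        if best is None or rss < best[0]:
            best = (rss, p, coef)
    return best  # (residual sum of squares, chosen power, coefficients)

# Synthetic data with an underlying 1/x relationship (x must be positive)
rng = np.random.default_rng(4)
x = rng.uniform(0.5, 5.0, 60)
y = 2.0 + 3.0 / x + rng.normal(0, 0.05, 60)
rss, p, coef = fit_fp1(x, y)
```

Because the true curve is exactly of the form b0 + b1·x^(-1), the procedure selects p = -1, whereas a quadratic in x would oscillate between the design points.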

Relevance:

60.00%

Publisher:

Abstract:

Novel brominated amorphous hydrogenated carbon (a-C:H:Br) films were produced by the plasma polymerization of acetylene-bromoform mixtures. The main parameter of interest was the degree of bromination, which depends on the partial pressure of bromoform in the plasma feed, expressed as a percentage of the total pressure, R_B. When bromoform is present in the feed, deposition rates of up to about 110 nm min(-1) may be obtained. The structure and composition of the films were characterized by Transmission Infrared Reflection Absorption Spectroscopy (IRRAS) and X-ray Photoelectron Spectroscopy (XPS). The latter revealed that films with atomic ratios Br:C of up to 0.58 may be produced. Surface contact angles, measured using goniometry, could be increased from ~63° (for an unbrominated film) to ~90° for R_B of 60 to 80%. Film surface roughness, measured using a profilometer, does not depend strongly on R_B. The optical properties (the refractive index, n; the absorption coefficient, alpha(E), where E is the photon energy; and the optical gap, E_g) were determined from film thicknesses and data obtained by Transmission Ultraviolet-Visible-Near-Infrared Spectroscopy (UVS). Control of n was possible via selection of R_B. The measured optical gap increases with increasing atomic ratio of Br to C in the film, and semi-empirical modeling accounts for this tendency. A typical hardness of the brominated films, determined via nano-indentation, was ~0.5 GPa. © 2013 Elsevier B.V. All rights reserved.
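
The abstract does not say how E_g was extracted from alpha(E); one common construction for amorphous films is the Tauc plot, in which (alpha·E)^(1/2) is linear in E above the gap and extrapolates to zero at E_g. The sketch below assumes that method, with hypothetical numbers rather than the paper's measurements.

```python
import numpy as np

# Hypothetical Tauc-plot extraction of the optical gap (the Tauc
# construction is an assumption here; the abstract does not name it)
rng = np.random.default_rng(5)
Eg_true = 2.2                            # assumed optical gap, eV
E = np.linspace(2.4, 3.6, 25)            # photon energies above the gap, eV
B = 600.0                                # Tauc slope parameter (assumed)
alpha = (B * (E - Eg_true)) ** 2 / E     # absorption coefficient consistent with Tauc

# Tauc ordinate (alpha * E)^(1/2) is linear in E for an amorphous material;
# small synthetic noise stands in for measurement scatter
y = np.sqrt(alpha * E) + rng.normal(0.0, 1.0, E.size)

slope, intercept = np.polyfit(E, y, 1)   # fit the linear Tauc region
Eg_est = -intercept / slope              # extrapolate to (alpha*E)^(1/2) = 0
```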

Relevance:

60.00%

Publisher:

Abstract:

This is the first part of a study investigating a model-based transient calibration process for diesel engines. The motivation is to populate hundreds of calibratable parameters in a methodical and optimal manner by using model-based optimization in conjunction with the manual process so that, relative to the manual process used by itself, a significant improvement in transient emissions and fuel consumption and a sizable reduction in calibration time and test cell requirements are achieved. Empirical transient modelling and optimization have been addressed in the second part of this work, while the required data for model training and generalization are the focus of the current work. Transient and steady-state data from a turbocharged multicylinder diesel engine have been examined from a model training perspective. A single-cylinder engine with external air-handling has been used to expand the steady-state data to encompass the transient parameter space. Based on comparative model performance and differences in the non-parametric space, primarily driven by a large difference between exhaust and intake manifold pressures (engine ΔP) during transients, it has been recommended that transient emission models should be trained with transient training data. It has been shown that electronic control module (ECM) estimates of transient charge flow and the exhaust gas recirculation (EGR) fraction cannot be accurate at the high engine ΔP frequently encountered during transient operation, and that such estimates do not account for cylinder-to-cylinder variation. The effects of high engine ΔP must therefore be incorporated empirically by using transient data generated from a spectrum of transient calibrations. Specific recommendations on how to choose such calibrations, how many data to acquire, and how to specify transient segments for data acquisition have been made. Methods to process transient data to account for transport delays and sensor lags have been developed.
The processed data have then been visualized using statistical means to understand transient emission formation. Two modes of transient opacity formation have been observed and described. The first mode is driven by high engine ΔP and low fresh air flowrates, while the second mode is driven by high engine ΔP and high EGR flowrates. The EGR fraction is inaccurately estimated at both modes, while EGR distribution has been shown to be present but unaccounted for by the ECM. The two modes and associated phenomena are essential to understanding why transient emission models are calibration dependent and furthermore how to choose training data that will result in good model generalization.
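
One common way to account for transport delays and sensor lags of the kind mentioned above (assumed here; the paper's exact method is not given in the abstract) is to estimate the delay by cross-correlation against the commanded signal and shift the lagging measurement back into alignment:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 400
# Synthetic commanded signal with measurement noise
command = np.sin(np.linspace(0, 20, n)) + 0.05 * rng.normal(size=n)

true_delay = 12                               # samples of transport delay
measured = np.roll(command, true_delay)       # delayed "sensor" signal
measured[:true_delay] = command[0]            # pad the start-up transient

# Cross-correlate at each candidate lag; the best lag maximizes correlation
lags = np.arange(0, 40)
corr = [np.corrcoef(command[: n - L], measured[L:])[0, 1] for L in lags]
delay = int(lags[np.argmax(corr)])

aligned = measured[delay:]                    # lag-corrected signal
```

After this shift, the emission measurement lines up sample-for-sample with the engine state that produced it, which is a prerequisite for training transient models on such data.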