986 results for Estimated parameters
Abstract:
Transport of volatile hydrocarbons in soils is largely controlled by interactions of vapours with the liquid and solid phases. Sorption of gaseous or dissolved compounds on solids may be important. Since the contact time between a chemical and a specific sorption site can be rather short, kinetic or mass-transfer resistance effects may be relevant. An existing mathematical model describing advection and diffusion in the gas phase and diffusional transport from the gaseous phase into an intra-aggregate water phase is modified to include linear kinetic sorption at gas-solid and water-solid interfaces. The model accounts for kinetic mass transfer between all three phases in a soil. The solution of the Laplace-transformed equations is inverted numerically. We performed transient column experiments with 1,1,2-trichloroethane, trichloroethylene, and tetrachloroethylene using air-dry and water-saturated porous glass beads. The breakthrough curves were calculated based on independently estimated parameters. The model calculations agree well with the experimental data. The different transport behaviour of the three compounds in our system depends primarily on their Henry's constants.
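The abstract does not name the numerical inversion scheme; the following is a minimal sketch of one standard choice, the Gaver-Stehfest algorithm, checked against a transform with a known inverse (illustrative only, not the authors' code):

```python
import math

def stehfest_inverse(F, t, N=12):
    """Gaver-Stehfest numerical inversion of a Laplace transform F(s) at
    time t. N must be even; accuracy grows with N until round-off dominates."""
    ln2_t = math.log(2.0) / t
    total = 0.0
    for k in range(1, N + 1):
        v = 0.0  # Stehfest weight V_k
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            v += (j ** (N // 2) * math.factorial(2 * j)) / (
                math.factorial(N // 2 - j) * math.factorial(j)
                * math.factorial(j - 1) * math.factorial(k - j)
                * math.factorial(2 * j - k))
        total += (-1) ** (k + N // 2) * v * F(k * ln2_t)
    return ln2_t * total

# Sanity check on a known pair: L{exp(-a*t)} = 1/(s + a).
a = 0.5
print(stehfest_inverse(lambda s: 1.0 / (s + a), t=2.0))  # ~exp(-1) = 0.3679
```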
Abstract:
Pre-combined SLR-GNSS solutions are studied, and the impact of different types of datum definition on the estimated parameters is assessed. It is found that the origin is realized best by using only the SLR core network to define the geodetic datum, and that including the GNSS core sites degrades the origin. The orientation, however, requires a dense and continuous network; thus, the inclusion of the GNSS core network is essential.
Abstract:
When considering data from many trials, it is likely that some of them present a markedly different intervention effect or exert an undue influence on the summary results. We develop a forward search algorithm for identifying outlying and influential studies in meta-analysis models. The forward search algorithm starts by fitting the hypothesized model to a small subset of likely outlier-free studies and proceeds by adding, one by one, the study determined to be closest to the model fitted to the existing set. As each study is added, plots of estimated parameters and measures of fit are monitored, and outliers are identified by sharp changes in these forward plots. We apply the proposed outlier detection method to two real data sets: a meta-analysis of 26 studies that examines the effect of writing-to-learn interventions on academic achievement adjusting for three possible effect modifiers, and a meta-analysis of 70 studies that compares a fluoride toothpaste treatment to placebo for preventing dental caries in children. A simple simulated example is used to illustrate the steps of the proposed methodology, and a small-scale simulation study is conducted to evaluate the performance of the proposed method. Copyright © 2016 John Wiley & Sons, Ltd.
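A minimal sketch of the forward search loop for a common-effect meta-analysis; the paper's algorithm also handles effect modifiers and random effects, so the pooling and distance measures here are simplifying assumptions:

```python
import numpy as np

def forward_search(y, v, n_init=3):
    """Forward search over studies with effects y and variances v: start from
    the n_init studies closest to the overall pooled mean, then repeatedly add
    the study closest to the current fit, recording the pooled estimate so
    sharp changes in the trace flag outliers. A simplified common-effect
    sketch of the idea, not the authors' exact algorithm."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    pooled = np.sum(y / v) / np.sum(1.0 / v)
    order = list(np.argsort(np.abs(y - pooled)))
    inset, outset = order[:n_init], order[n_init:]
    trace = []
    while True:
        mu = np.sum(y[inset] / v[inset]) / np.sum(1.0 / v[inset])
        trace.append(mu)                    # monitor this plot for jumps
        if not outset:
            break
        nxt = min(outset, key=lambda i: abs(y[i] - mu) / np.sqrt(v[i]))
        inset.append(nxt)
        outset.remove(nxt)
    return np.array(trace), inset

trace, entry_order = forward_search([0.2, 0.1, 0.25, 0.15, 0.3, 2.5],
                                    [0.04] * 6)
print(trace)   # the final jump flags the sixth study as outlying/influential
```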
Abstract:
The evolution of porosity due to dissolution/precipitation processes of minerals and the associated change of transport parameters are of major interest for natural geological environments and engineered underground structures. We designed a reproducible and fast-to-conduct 2D experiment, which is flexible enough to investigate several process couplings implemented in the numerical code OpenGeoSys-GEM (OGS-GEM). We investigated advective-diffusive transport of solutes, the effect of liquid-phase density on advective transport, and kinetically controlled dissolution/precipitation reactions causing porosity changes. In addition, the system allowed us to investigate the influence of microscopic (pore-scale) processes on macroscopic (continuum-scale) transport. A Plexiglas tank of dimensions 10 × 10 cm was filled with a 1 cm thick reactive layer, consisting of a bimodal grain size distribution of celestite (SrSO4) crystals, sandwiched between two layers of sand. A barium chloride solution was injected into the tank, causing an asymmetric flow field to develop. As the barium chloride reached the celestite region, dissolution of celestite was initiated and barite precipitated. Due to the higher molar volume of barite, its precipitation caused a porosity decrease and thus also a decrease in the permeability of the porous medium. The change of flow in space and time was observed via injection of conservative tracers and analysis of effluents. In addition, an extensive post-mortem analysis of the reacted medium was conducted. We could successfully model the flow (with and without fluid density effects) and the transport of conservative tracers with a (continuum-scale) reactive transport model. The prediction of the reactive experiments initially failed; only the inclusion of information from the post-mortem analysis gave a satisfactory match for the case where the flow field changed due to dissolution/precipitation reactions. We therefore concentrated on refining the post-mortem analysis and investigating the dissolution/precipitation mechanisms at the pore scale. Our analytical techniques combined scanning electron microscopy (SEM) and synchrotron X-ray micro-diffraction/micro-fluorescence performed at the XAS beamline (Swiss Light Source). The newly formed phases include epitaxial growth of barite micro-crystals on large celestite crystals and a nano-crystalline barite phase (resulting from the dissolution of small celestite crystals) with residues of celestite crystals in the pore interstices. Classical nucleation theory, using well-established and estimated parameters describing barite precipitation, was applied to explain the mineralogical changes occurring in our system. Our pore-scale investigation showed the limits of the continuum-scale reactive transport model. Although kinetic effects were implemented by fixing two distinct rates for the dissolution of large and small celestite crystals, instantaneous precipitation of barite was assumed as soon as oversaturation occurred. Precipitation kinetics, passivation of large celestite crystals and the metastability of supersaturated solutions, i.e. the conditions under which nucleation cannot occur despite high supersaturation, were neglected. These results will be used to develop a model that describes precipitation and dissolution of crystals at the pore scale for various transport and chemical conditions.
Pore-scale modelling can be used to parameterize constitutive equations that introduce pore-scale corrections into macroscopic (continuum-scale) reactive transport models. A microscopic understanding of the system is fundamental for modelling from the pore to the continuum scale.
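As a rough illustration of the classical nucleation theory calculation mentioned above, a sketch of the nucleation rate for a spherical nucleus; the interfacial energy, molecular volume and kinetic prefactor are assumed, barite-like values, not the study's fitted parameters:

```python
import math

k_B = 1.380649e-23        # Boltzmann constant, J/K

def cnt_rate(S, gamma, v_m, T=298.15, A=1e30):
    """Classical nucleation theory rate J = A * exp(-dG*/kT) for a spherical
    nucleus: supersaturation ratio S, interfacial energy gamma (J/m^2),
    molecular volume v_m (m^3), kinetic prefactor A (assumed here)."""
    if S <= 1.0:
        return 0.0        # no nucleation below saturation
    dG_star = 16.0 * math.pi * gamma**3 * v_m**2 / (
        3.0 * (k_B * T * math.log(S)) ** 2)
    return A * math.exp(-dG_star / (k_B * T))

# Illustrative barite-like values: gamma ~ 0.1 J/m^2, v_m ~ 8.6e-29 m^3.
# Low S gives a vanishing rate -- the metastability the abstract mentions.
for S in (10, 100, 1000):
    print(S, cnt_rate(S, gamma=0.1, v_m=8.6e-29))
```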
Abstract:
Standardization is a common method for adjusting for confounding factors when comparing two or more exposure categories to assess excess risk. An arbitrary choice of standard population in standardization introduces selection bias due to the healthy worker effect, and small samples in specific groups pose problems both for estimating relative risk and for assessing statistical significance. As an alternative, statistical models have been proposed to overcome such limitations and obtain adjusted rates. In this dissertation, a multiplicative model is considered to address the issues related to standardized indices, namely the Standardized Mortality Ratio (SMR) and the Comparative Mortality Factor (CMF). The model provides an alternative to the conventional standardization technique. Maximum likelihood estimates of the model parameters are used to construct an index, similar to the SMR, for estimating the relative risk of the exposure groups under comparison. A parametric bootstrap resampling method is used to evaluate the goodness of fit of the model, the behavior of the estimated parameters, and the variability in relative risk on generated samples. The model provides an alternative to both the direct and indirect standardization methods.
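For orientation, a sketch of the SMR itself and of a parametric bootstrap of its sampling variability (Poisson resampling of stratum counts); the counts are invented and this is not the dissertation's multiplicative model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Observed and expected deaths by age stratum (illustrative numbers only).
observed = np.array([12, 30, 55])
expected = np.array([9.5, 26.0, 60.2])   # from reference-population rates

smr = observed.sum() / expected.sum()

# Parametric bootstrap: resample stratum counts as Poisson with mean
# smr * expected, then recompute the index to gauge its sampling spread.
boot = [rng.poisson(smr * expected).sum() / expected.sum()
        for _ in range(5000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"SMR = {smr:.3f}, 95% bootstrap interval ({lo:.3f}, {hi:.3f})")
```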
Abstract:
Improving energy efficiency is an unarguably urgent issue in developing economies, and an energy efficiency standard and labeling program is an ideal mechanism to achieve this target. However, there is concern over whether consumers will choose highly energy-efficient appliances, given the high prices that follow from their high costs. This paper estimates how consumers responded to the introduction of the energy efficiency standard and labeling program in China. To quantify consumers' valuations, we estimated their consumer surplus and the benefits of products based on the estimated parameters of a demand function. We found the following. First, consumers' valuation of the energy efficiency labels is not monotonically related to the label grades: the highest efficiency label (Label 1) is not valued more highly than Labels 2 and 3, and is sometimes valued below the least energy-efficient label (Label UI). This goes against the design of the policy intervention. Second, several governmental policies act in mixed directions: subsidies directed at the highest label grade expanded consumer welfare, as the program was designed to do, whereas the appliance replacement policy decreased welfare.
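The abstract does not state the demand specification; assuming a standard logit demand, consumer surplus per consumer follows the log-sum rule, as in this hypothetical sketch (all parameter values invented):

```python
import numpy as np

# Hypothetical logit demand: utility V_j = beta_q * quality_j - alpha * price_j.
alpha, beta_q = 0.8, 1.2                 # assumed estimated parameters
price = np.array([3.0, 3.6, 4.1])        # three label grades (illustrative)
quality = np.array([1.0, 1.3, 1.5])

V = beta_q * quality - alpha * price
# Expected consumer surplus per consumer under logit demand (log-sum rule),
# converted to money units by dividing by the price coefficient.
cs = np.log(np.exp(V).sum()) / alpha
print(f"consumer surplus per consumer: {cs:.3f}")
```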
Abstract:
Noise analysis is a well-established method for sensor performance surveillance. In particular, monitoring the response time of a sensor is an efficient way to anticipate failures and to have the opportunity to prevent them. In this work the response times of several sensors of Trillo NPP are estimated by means of noise analysis. The procedure consists of modeling each sensor with autoregressive methods and obtaining the sought parameter by analyzing the response of the model when a ramp is simulated as the input signal. Core-exit thermocouples and in-core self-powered neutron detectors are the main sensors analyzed, but other plant sensors are studied as well. Since several measurement campaigns have been carried out, it has also been possible to analyze the evolution of the estimated parameters over more than one fuel cycle. Some sensitivity studies on the sampling frequency of the signals and its influence on the response time are also included. Calculations and analysis have been done in the frame of a collaboration agreement between the Trillo NPP operator (CNAT) and the School of Mines of Madrid.
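A minimal sketch of the procedure's two steps, fitting an autoregressive model to sensor noise and reading the response time off the model's simulated ramp response, here for a synthetic first-order sensor (the plant analysis is necessarily more elaborate):

```python
import numpy as np

def fit_ar(x, p):
    """Least-squares AR(p) fit: x[t] = a1*x[t-1] + ... + ap*x[t-p] + e[t]."""
    X = np.column_stack([x[p - i - 1:-i - 1] for i in range(p)])
    return np.linalg.lstsq(X, x[p:], rcond=None)[0]

def ramp_response_time(a, dt, n=5000):
    """Feed a unit-slope ramp through the deterministic part of the fitted AR
    model (normalised to unit static gain) and return the asymptotic lag
    between input and output, taken as the response time."""
    g = 1.0 - a.sum()
    y = np.zeros(n)
    for t in range(len(a), n):
        y[t] = a @ y[t - 1:t - len(a) - 1:-1] + g * (t * dt)
    return (n - 1) * dt - y[-1]

# Synthetic first-order sensor, time constant tau = 2 s, sampled at 0.1 s.
dt, tau = 0.1, 2.0
rng = np.random.default_rng(0)
x = np.zeros(20_000)
for t in range(1, x.size):
    x[t] = np.exp(-dt / tau) * x[t - 1] + rng.standard_normal()
print(ramp_response_time(fit_ar(x, p=1), dt))   # ~ tau = 2 s
```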
Abstract:
We present data on the decay, after radiotherapy, of naive and memory human T lymphocytes with stable chromosome damage. These data are analyzed in conjunction with existing data on the decay of naive and memory T lymphocytes with unstable chromosome damage and older data on unsorted lymphocytes. The analyses yield in vivo estimates for some life-history parameters of human T lymphocytes. Best estimates of proliferation rates have naive lymphocytes dividing once every 3.5 years and memory lymphocytes dividing once every 22 weeks. It appears that memory lymphocytes can revert to the naive phenotype, but only, on average, after 3.5 years in the memory class. The lymphocytes with stable chromosome damage decay very slowly, yielding surprisingly low estimates of their death rate. The estimated parameters are used in a simple mathematical model of the population dynamics of undamaged naive and memory lymphocytes. We use this model to illustrate that it is possible for the unprimed subset of a constantly stimulated clone to stay small, even when there is a large population of specific primed cells reverting to the unprimed state.
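A sketch of such a two-compartment clone model using the abstract's proliferation and reversion estimates; the activation and death rates are assumed for illustration. It shows the naive (unprimed) subset staying small under constant stimulation despite reversion from a large memory pool:

```python
import numpy as np
from scipy.integrate import odeint

# Rates per year. Proliferation and reversion follow the abstract's best
# estimates; activation and death rates are assumed for illustration.
p_n, p_m = 1 / 3.5, 52 / 22   # one division per 3.5 years / per 22 weeks
rev = 1 / 3.5                 # memory -> naive reversion
act = 5.0                     # naive -> memory under constant stimulation (assumed)
d_n, d_m = 0.3, 2.4           # death rates (assumed)

def clone(x, t):
    n, m = x                  # undamaged naive and memory cells of one clone
    dn = (p_n - act - d_n) * n + rev * m
    dm = act * n + (p_m - rev - d_m) * m
    return [dn, dm]

t = np.linspace(0, 20, 200)
n, m = odeint(clone, [1e4, 1e6], t).T
print(f"final naive/memory ratio: {n[-1] / m[-1]:.3f}")  # stays small (~0.06)
```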
Abstract:
The estimated parameters of output distance functions frequently violate the monotonicity, quasi-convexity and convexity constraints implied by economic theory, leading to estimated elasticities and shadow prices that are incorrectly signed, and ultimately to perverse conclusions concerning the effects of input and output changes on productivity growth and relative efficiency levels. We show how a Bayesian approach can be used to impose these constraints on the parameters of a translog output distance function. Implementing the approach involves the use of a Gibbs sampler with data augmentation. A Metropolis-Hastings algorithm is also used within the Gibbs sampler to simulate observations from truncated pdfs. Our methods are developed for the case where panel data are available and technical inefficiency effects are assumed to be time-invariant. Two models, a fixed effects model and a random effects model, are developed and applied to panel data on 17 European railways. We observe significant changes in estimated elasticities and shadow price ratios when the regularity restrictions are imposed. (c) 2004 Elsevier B.V. All rights reserved.
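A toy sketch of the general device, a Metropolis-Hastings step inside a Gibbs sweep that samples a truncated (regularity-constrained) full conditional; the target here is a bivariate normal truncated to b >= 0, not the paper's translog model:

```python
import numpy as np

rng = np.random.default_rng(0)
rho = 0.6                                # correlation of the toy posterior
s2 = 1.0 - rho**2                        # conditional variance

def mh_step(b, mean, var, lower=0.0, step=0.8):
    """Random-walk Metropolis-Hastings step for a normal full conditional
    truncated to [lower, inf) -- the device used inside a Gibbs sweep when a
    regularity constraint makes direct sampling awkward."""
    prop = b + step * rng.standard_normal()
    if prop < lower:
        return b                         # zero density outside the constraint
    log_acc = -0.5 * ((prop - mean) ** 2 - (b - mean) ** 2) / var
    return prop if np.log(rng.uniform()) < log_acc else b

# Toy "posterior": bivariate normal in (a, b) truncated to b >= 0,
# standing in for a regularity-constrained distance-function parameter.
a, b = 0.0, 0.5
draws = np.empty((20_000, 2))
for it in range(draws.shape[0]):
    a = rng.normal(rho * b, np.sqrt(s2))     # a | b is unconstrained
    b = mh_step(b, mean=rho * a, var=s2)     # b | a is truncated at 0
    draws[it] = a, b
print(draws[1000:].mean(axis=0), (draws[:, 1] >= 0).all())
```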
Abstract:
The first part of this thesis deals with the interaction between a flood-detention basin and the underlying aquifer: the construction of a detention basin on the Baganza stream, upstream of the city of Parma, is currently at the design stage. The aim of the intervention is to reduce flood risk by temporarily storing, in an artificial reservoir, the most dangerous part of the flood volume, to be released later at discharges that the urban reach of the stream can easily convey. The aquifer was preliminarily investigated and monitored, allowing its lithostratigraphic characterization. The stratigraphy can be summarized as a sequence of gravelly-sandy layers interspersed with clay lenses of varying thickness and continuity, defining two distinct aquifers (one phreatic and one confined). The present study considers only the shallow aquifer, which was modelled numerically, by finite differences, with the software MODFLOW_2005. The objective of this work is to represent the aquifer system under current conditions (in the absence of any structure) and under design conditions. Calibration was carried out under steady-state conditions using the piezometric levels collected at the observation points during spring 2013. Hydraulic conductivity values were estimated by means of a Bayesian geostatistical approach. The code used for the estimation is bgaPEST, free software for solving highly parameterized inverse problems, developed on the basis of the PEST software protocols. The inverse methodology estimates the hydraulic conductivity field by combining observations of the system state (piezometric levels in this case) with a-priori information on the structure of the unknown parameters. The inverse procedure requires computing the sensitivity of each observation to each of the estimated parameters; this was evaluated efficiently using an adjoint-state formulation of the forward code, MODFLOW_2005_Adjoint. The results of the methodology are consistent with the alluvial nature of the investigated aquifer and with the information collected at the observation points. The calibrated model can therefore be used to support the design and management of the detention basin. The second part of this thesis deals with the analysis of the loads induced by preferential flow paths caused by piping phenomena within levee embankments. Such preferential paths can be due to the presence of burrows excavated by wild animals. This study was inspired by the collapse of the levee of the Secchia River (Modena), which occurred in January 2014 during a flood event in which the water level never reached the levee crest. The scientific commission, whose final report provides the data used for this study, attributed the collapse, with high probability, to the presence of animal burrows. In order to analyse the behaviour of the embankment both intact and as modified by a tunnel crossing the levee body, a 3D numerical model of the levee was built with the well-known software Femwater and Feflow.
The models describe seepage within the embankment, treating the soil in both its saturated and unsaturated portions and adopting the finite element technique. The burrow was represented by elements with high permeability and porosity, whose values were varied in order to assess their influence on flows and water contents. To evaluate whether the analysed situations lead to the onset of erosion, factor-of-safety values were computed. The factor of safety was evaluated in several ways, including the one recently proposed by Richards and Reddy (2014), which refers to the critical kinetic energy criterion. Finally, the model of Bonelli (2007) was used to calculate the erosion time and the time remaining before collapse of the embankment.
Abstract:
The modelling of mechanical structures using finite element analysis has become an indispensable stage in the design of new components and products. Once the theoretical design has been optimised, a prototype may be constructed and tested. What can the engineer do if the measured and theoretically predicted vibration characteristics of the structure are significantly different? This thesis considers the problem of changing the parameters of the finite element model to improve the correlation between a physical structure and its mathematical model. Two new methods are introduced to perform the systematic parameter updating. The first uses the measured modal model to derive the parameter values with minimum variance. The user must provide estimates of the variance of the theoretical parameter values and of the measured data. Previous authors using similar methods have assumed that the estimated parameters and measured modal properties are statistically independent; this will generally be the case during the first iteration but not subsequently. The second method updates the parameters directly from the frequency response functions. The order of the finite element model of the structure is reduced as a function of the unknown parameters, and a method related to a weighted equation-error algorithm is used to update the parameters. After each iteration the weighting changes, so that on convergence the output error is minimised. The suggested methods are extensively tested using simulated data. An H-frame is then used to demonstrate the algorithms on a physical structure.
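The first method belongs to the minimum-variance (weighted least-squares) family; a sketch of one generic update of that form, with invented sensitivities and variances, not the thesis's exact equations:

```python
import numpy as np

def min_variance_update(theta0, V_theta, z_meas, z_model, S, V_z):
    """One minimum-variance update of model parameters theta given measured
    modal data z_meas, model predictions z_model(theta0), sensitivities
    S = dz/dtheta, prior parameter covariance V_theta and measurement
    covariance V_z (a sketch of the standard weighted estimator)."""
    Vz_inv = np.linalg.inv(V_z)
    G = np.linalg.solve(S.T @ Vz_inv @ S + np.linalg.inv(V_theta),
                        S.T @ Vz_inv)
    return theta0 + G @ (z_meas - z_model)

# Illustrative: two stiffness parameters, three measured natural frequencies.
theta0 = np.array([1.0, 1.0])
S = np.array([[0.8, 0.1], [0.2, 0.9], [0.5, 0.5]])
z_model = S @ theta0                       # linear toy model
z_meas = np.array([1.05, 1.20, 1.08])
print(min_variance_update(theta0, np.eye(2) * 0.1, z_meas, z_model,
                          S, np.eye(3) * 0.01))
```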
Abstract:
The purpose was to advance research and clinical methodology for assessing psychopathology by testing the international generalizability of an 8-syndrome model derived from collateral ratings of adult behavioral, emotional, social, and thought problems. Collateral informants rated 8,582 18-59-year-old residents of 18 societies on the Adult Behavior Checklist (ABCL). Confirmatory factor analyses tested the fit of the 8-syndrome model to ratings from each society. The primary model fit index (Root Mean Square Error of Approximation) showed good model fit for all societies, while secondary indices (Tucker-Lewis Index, Comparative Fit Index) showed acceptable to good fit for 17 societies. Factor loadings were robust across societies and items. Of the 5,007 estimated parameters, 4 (0.08%) were outside the admissible parameter space, but 95% confidence intervals included the admissible space, indicating that the 4 deviant parameters could be due to sampling fluctuations. The findings are consistent with previous evidence for the generalizability of the 8-syndrome model in self-ratings from 29 societies, and support the 8-syndrome model for operationalizing phenotypes of adult psychopathology from multi-informant ratings in diverse societies. © 2014 Asociación Española de Psicología Conductual.
Abstract:
This study tested the multi-society generalizability of an eight-syndrome assessment model derived from factor analyses of American adults' self-ratings of 120 behavioral, emotional, and social problems. The Adult Self-Report (ASR; Achenbach and Rescorla 2003) was completed by 17,152 18-59-year-olds in 29 societies. Confirmatory factor analyses tested the fit of self-ratings in each sample to the eight-syndrome model. The primary model fit index (Root Mean Square Error of Approximation) showed good model fit for all samples, while secondary indices showed acceptable to good fit. Only 5 (0.06%) of the 8,598 estimated parameters were outside the admissible parameter space. Confidence intervals indicated that sampling fluctuations could account for the deviant parameters. Results thus supported the tested model in societies differing widely in social, political, and economic systems, languages, ethnicities, religions, and geographical regions. Although other items, societies, and analytic methods might yield different results, the findings indicate that adults in very diverse societies were willing and able to rate themselves on the same standardized set of 120 problem items. Moreover, their self-ratings fit an eight-syndrome model previously derived from self-ratings by American adults. The support for the statistically derived syndrome model is consistent with previous findings for parent, teacher, and self-ratings of 1½-18-year-olds in many societies. The ASR and its parallel collateral-report instrument, the Adult Behavior Checklist (ABCL), may offer mental health professionals practical tools for the multi-informant assessment of clinical constructs of adult psychopathology that appear to be meaningful across diverse societies. © 2014 Springer Science+Business Media New York.
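For reference, the primary fit index used in both studies can be computed from the model chi-square with the conventional formula, as in this sketch (the numbers are illustrative, not from the studies):

```python
import math

def rmsea(chi2, df, n):
    """Root Mean Square Error of Approximation from a CFA chi-square test:
    rmsea = sqrt(max(chi2 - df, 0) / (df * (n - 1))). Values <= .05 are
    conventionally read as good fit, <= .08 as acceptable."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Illustrative values only.
print(round(rmsea(chi2=3150.0, df=2900, n=1500), 4))   # ~0.0076, good fit
```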
Abstract:
Research on temporal-order perception uses temporal-order judgment (TOJ) tasks or synchrony judgment (SJ) tasks in their binary SJ2 or ternary SJ3 variants. In all cases, two stimuli are presented with some temporal delay, and observers judge their temporal order or simultaneity. Arbitrary psychometric functions are typically fitted to obtain performance measures such as sensitivity or the point of subjective simultaneity, but the parameters of these functions are uninterpretable. We describe routines in MATLAB and R that fit model-based functions whose parameters are interpretable in terms of the processes underlying temporal-order and simultaneity judgments and responses. These functions arise from an independent-channels model assuming arrival latencies with exponential distributions and a trichotomous decision space. Different routines fit data separately for SJ2, SJ3, and TOJ tasks, jointly for any two tasks, or jointly for all three tasks (for the common cases in which two or even all three tasks were used with the same stimuli and participants). Additional routines provide bootstrap p-values and confidence intervals for estimated parameters. A further routine obtains performance measures from the fitted functions. An R package for Windows and source code of the MATLAB and R routines are available as Supplementary Files.
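The routines fit this model analytically; a Monte Carlo sketch of the same independent-channels setup (exponential latencies, a trichotomous decision space with resolution window delta) makes the predicted SJ3 response probabilities concrete. Parameter names and values here are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def sj3_probabilities(soa, lam1, lam2, tau, delta, n=200_000):
    """Monte Carlo SJ3 predictions under an independent-channels model:
    exponential arrival latencies with rates lam1, lam2 (per ms), a
    processing-time shift tau, and a window delta within which the two
    arrivals are judged simultaneous."""
    a1 = rng.exponential(1.0 / lam1, n)              # stimulus 1 at time 0
    a2 = soa + tau + rng.exponential(1.0 / lam2, n)  # stimulus 2 at soa
    d = a2 - a1
    return ((d > delta).mean(),                      # "stimulus 1 first"
            (np.abs(d) <= delta).mean(),             # "simultaneous"
            (d < -delta).mean())                     # "stimulus 2 first"

for soa in (-100, 0, 100):                           # ms
    print(soa, sj3_probabilities(soa, lam1=1 / 40, lam2=1 / 40,
                                 tau=0.0, delta=50.0))
```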
Abstract:
An analysis is made of the population status of the violet crab Platyxanthus orbignyi (Milne Edwards and Lucas, 1843) off the coast of Lambayeque, Peru, for the period 2001-2010, by means of: 1) the Schaefer biomass dynamic model in its observation-error version; the environmental variable sea surface temperature anomaly (SSTA) for the San José area (Lambayeque) was then introduced into this model to obtain 2) the dynamic model with an environmental variable, both based on catch, effort and CPUE data. The maximum likelihood method was used in the fitting process, and the bootstrap was used to determine confidence intervals for the parameters. The population and fishery parameters estimated by the Schaefer biomass dynamic model (MDB) were K: 750,000 kg, r: 0.21 and q: 8.36 × 10^-6; for the dynamic model with an environmental variable (MDVA) the parameters were K: 765,000 kg, r: 0.23 and q: 8.02 × 10^-6. With the parameter values estimated by the MDB and the MDVA, the main biological reference points were calculated: MSY: 39,822 kg, BMSY: 375,000 kg, fMSY: 12,561 traps, FMSY: 0.11 and F0.1: 0.10 for the MDB; and MSY: 44,069 kg, BMSY: 382,500 kg, fMSY: 13,782 traps, FMSY: 0.12 and F0.1: 0.10 for the MDVA. The results indicate that the current status of the violet crab fishery off the Lambayeque coast is very close to the optimal level. Given that no direct assessment information is available for this resource to confirm or refute the MDB and MDVA results, and in view of the data quality, it is suggested that the fishery be managed adaptively around the F0.1 reference point, taking environmental conditions into account.
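A sketch of the observation-error Schaefer dynamics using the MDB parameter estimates reported above; the catch series is invented, and the closing line checks the reference points implied by r and K (MSY = rK/4, BMSY = K/2, FMSY = r/2):

```python
import numpy as np

def schaefer_biomass(K, r, q, catches, B0=None):
    """Observation-error Schaefer dynamics: B[t+1] = B[t] + r*B[t]*(1 - B[t]/K)
    - C[t], with predicted CPUE[t] = q*B[t]. Parameters follow the MDB
    estimates reported above; the catch series here is illustrative."""
    B = np.empty(len(catches) + 1)
    B[0] = K if B0 is None else B0
    for t, c in enumerate(catches):
        B[t + 1] = max(B[t] + r * B[t] * (1.0 - B[t] / K) - c, 1.0)
    return B, q * B[:-1]

K, r, q = 750_000.0, 0.21, 8.36e-6
catches = np.full(10, 39_000.0)          # near the reported MSY of 39,822 kg
B, cpue = schaefer_biomass(K, r, q, catches)
print(B[-1], cpue[-1])
# Reference points implied by the estimated parameters:
print("MSY =", r * K / 4, " BMSY =", K / 2, " FMSY =", r / 2)
```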