997 results for estimated parameters
Abstract:
Pós-graduação em Ciência e Tecnologia Animal - FEIS
Abstract:
This paper examines the investment decisions of 373 large Brazilian firms from 1997 to 2004 in the presence of financial constraints, using panel data. A Bayesian econometric model with ridge regression was used to address multicollinearity among the variables in the model. Prior distributions are assumed for the parameters, classifying the model as random or fixed effects. The parameters were estimated under a Bayesian approach, considering normal and Student t distributions for the errors, and assuming that the initial values of the lagged dependent variable are not fixed but generated by a random process. The recursive predictive density criterion was used for model comparisons. Twenty models were tested, and the results indicated that multicollinearity does influence the values of the estimated parameters. Controlling for capital intensity, financial constraints are found to be more important for capital-intensive firms, probably due to their lower profitability indexes, higher fixed costs and higher degree of property diversification.
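As a minimal illustration of the ridge device used here against multicollinearity (a sketch with simulated data, not the paper's model or firm data): the ridge estimator coincides with the posterior mean of a Bayesian regression under a zero-centered Gaussian prior on the coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data with two highly collinear regressors
# (all values here are illustrative, not from the paper).
n = 200
x1 = rng.normal(size=n)
x2 = x1 + 0.01 * rng.normal(size=n)        # near-duplicate column -> multicollinearity
X = np.column_stack([np.ones(n), x1, x2])
y = X @ np.array([1.0, 2.0, 2.0]) + rng.normal(scale=0.5, size=n)

def ridge_posterior_mean(X, y, lam):
    """Posterior mean of beta under a N(0, (sigma^2/lam) I) prior,
    i.e. the classical ridge estimator (X'X + lam*I)^(-1) X'y."""
    k = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(k), X.T @ y)

beta_ols = ridge_posterior_mean(X, y, 0.0)     # unstable under collinearity
beta_ridge = ridge_posterior_mean(X, y, 10.0)  # shrunk toward zero by the prior
```

The individual coefficients on the collinear columns are poorly identified under OLS, while their sum is well identified; the prior stabilizes the split between them, which is the multicollinearity effect the abstract refers to.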
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Further advances in magnetic hyperthermia might be limited by biological constraints, such as the need for sufficiently low frequencies and low field amplitudes to inhibit harmful eddy currents inside the patient's body. This motivates optimizing the heating efficiency of the nanoparticles, referred to as the specific absorption rate (SAR). Among the several properties currently under research, one of particular importance is the transition from the linear to the non-linear regime that takes place as the field amplitude is increased, an aspect where the magnetic anisotropy is expected to play a fundamental role. In this paper we investigate the heating properties of cobalt ferrite and maghemite nanoparticles under the influence of a 500 kHz sinusoidal magnetic field with varying amplitude, up to 134 Oe. The particles were characterized by TEM, XRD, FMR and VSM, from which the most relevant morphological, structural and magnetic properties were inferred. Both materials have similar size distributions and saturation magnetization, but strikingly different magnetic anisotropies. From magnetic hyperthermia experiments we found that, while at low fields maghemite is the best nanomaterial for hyperthermia applications, above a critical field, close to the transition from the linear to the non-linear regime, cobalt ferrite becomes more efficient. The results were also analyzed with respect to the energy conversion efficiency and compared with dynamic hysteresis simulations. Additional analysis with nickel, zinc and copper-ferrite nanoparticles of similar sizes confirmed the importance of the magnetic anisotropy and the damping factor. Further, the analysis of the characterization parameters suggested core-shell nanostructures, probably due to a surface passivation process during the nanoparticle synthesis.
Finally, we discussed the effect of particle-particle interactions and their consequences, in particular regarding discrepancies between estimated parameters and expected theoretical predictions. Copyright 2012 Author(s). This article is distributed under a Creative Commons Attribution 3.0 Unported License. [http://dx.doi.org/10.1063/1.4739533]
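The linear-regime field scaling discussed above can be sketched with the standard linear-response (Rosensweig-type) expression for the volumetric heating power, P = μ0·π·χ0·f·H²·ωτ/(1 + (ωτ)²). The susceptibility and relaxation time below are assumed illustrative values, not the fitted parameters of these samples; only the 500 kHz frequency and 134 Oe amplitude come from the text.

```python
import math

MU0 = 4 * math.pi * 1e-7        # vacuum permeability, T*m/A
OE_TO_AM = 1e3 / (4 * math.pi)  # 1 Oe = 79.577 A/m

def heating_power(H_am, f, chi0, tau):
    """Linear-response volumetric heating power (W/m^3):
    P = mu0 * pi * chi0 * f * H^2 * (2*pi*f*tau) / (1 + (2*pi*f*tau)^2)."""
    wt = 2 * math.pi * f * tau
    return MU0 * math.pi * chi0 * f * H_am**2 * wt / (1 + wt**2)

f = 500e3                # field frequency used in the experiments, Hz
H = 134 * OE_TO_AM       # maximum field amplitude, converted to A/m
chi0, tau = 5.0, 1e-8    # assumed equilibrium susceptibility and relaxation time

p1 = heating_power(H, f, chi0, tau)
p2 = heating_power(2 * H, f, chi0, tau)  # doubling H quadruples P in this regime
```

The quadratic dependence on H is the hallmark of the linear regime; the departure from it as the amplitude grows is precisely the transition the abstract studies.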
Abstract:
During my PhD, starting from the original formulations proposed by Bertrand et al. (2000) and Emolo & Zollo (2005), I developed inversion methods and applied them to different earthquakes. In particular, large efforts have been devoted to the study of model resolution and to the estimation of model parameter errors. To study the source kinematic characteristics of the Christchurch earthquake we performed a joint inversion of strong-motion, GPS and InSAR data using a non-linear inversion method. Considering the complexity highlighted by the superficial deformation data, we adopted a fault model consisting of two partially overlapping segments, with dimensions 15×11 and 7×7 km², having different faulting styles. This two-fault model allows a better reconstruction of the complex shape of the superficial deformation data. The total seismic moment resulting from the joint inversion is 3.0×10²⁵ dyne·cm (Mw = 6.2) with an average rupture velocity of 2.0 km/s. Errors associated with the kinematic model have been estimated at around 20-30%. The 2009 L'Aquila earthquake was followed by an intense aftershock sequence that lasted several months. In this study we applied an inversion method that uses apparent Source Time Functions (aSTFs) as data to a Mw 4.0 aftershock of the sequence. The estimation of the aSTFs was obtained using the deconvolution method proposed by Vallée et al. (2004). The inversion results show a heterogeneous slip distribution, characterized by two main slip patches located NW of the hypocenter, and a variable rupture velocity distribution (mean value of 2.5 km/s), showing a rupture front acceleration in between the two high-slip zones. Errors of about 20% characterize the final estimated parameters.
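The quoted magnitude can be checked against the seismic moment with the standard moment-magnitude relation Mw = (2/3)(log10 M0 − 9.1), with M0 in N·m (the IASPEI convention; the thesis may use a slightly different constant).

```python
import math

M0_dyne_cm = 3.0e25           # total seismic moment from the joint inversion
M0_Nm = M0_dyne_cm * 1e-7     # 1 dyne*cm = 1e-7 N*m

# Moment magnitude, IASPEI convention: ~6.25, consistent with the quoted Mw 6.2
mw = (2.0 / 3.0) * (math.log10(M0_Nm) - 9.1)
```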
Abstract:
Dealing with latent constructs (loaded by reflective and congeneric measures) compared cross-culturally means studying how these unobserved variables vary, and/or covary with each other, after controlling for potentially disturbing cultural forces. This leads to the so-called 'measurement invariance' issue, which refers to the extent to which data collected with the same multi-item measurement instrument (i.e., a self-reported questionnaire of items underlying common latent constructs) are comparable across different cultural environments. Indeed, it would be unthinkable to explore latent-variable heterogeneity (e.g., latent means; latent variances, i.e., deviations from the means; latent covariances, i.e., shared variation around the respective means; the magnitude of structural path coefficients in causal relations among latent variables) across different populations without controlling for cultural bias in the underlying measures. Furthermore, it would be unrealistic to attempt this correction without a framework able to take all these potential cultural biases across populations into account simultaneously, since the real world 'acts' simultaneously as well. As a consequence, a researcher may want to control for cultural forces by hypothesizing that they all act at the same time across the groups being compared, and then examine whether they inflate or suppress the new estimates obtained under hierarchically nested constraints on the originally estimated parameters. Multi-Sample Structural Equation Modeling-based Confirmatory Factor Analysis (MS-SEM-based CFA) remains a dominant and flexible statistical framework for working out this potential cultural bias in a simultaneous way.
With this dissertation I attempt to introduce new viewpoints on measurement invariance handled under the covariance-based SEM framework, by means of a consumer-behavior modeling application to functional food choices.
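The congeneric measurement structure underlying this discussion can be made concrete with a small simulation: for reflective indicators x = Λη + ε, the model-implied covariance is ΛΦΛ' + Θ, and it is the elements of Λ, Φ and Θ that a multi-group CFA constrains across populations. The loadings and error variances below are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical congeneric model: three items load on one latent
# construct with distinct loadings and distinct error variances.
lam = np.array([0.9, 0.7, 0.5])     # factor loadings (Lambda)
theta = np.array([0.3, 0.4, 0.5])   # unique error variances (diag of Theta)
phi = 1.0                           # latent variance (Phi)

n = 200_000
eta = rng.normal(scale=np.sqrt(phi), size=n)        # latent scores
eps = rng.normal(size=(n, 3)) * np.sqrt(theta)      # unique errors
items = np.outer(eta, lam) + eps                    # reflective indicators

sigma_model = phi * np.outer(lam, lam) + np.diag(theta)  # implied covariance
sigma_sample = np.cov(items, rowvar=False)               # matches for large n
```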
Abstract:
Transport of volatile hydrocarbons in soils is largely controlled by interactions of vapours with the liquid and solid phases. Sorption of gaseous or dissolved compounds on solids may be important. Since the contact time between a chemical and a specific sorption site can be rather short, kinetic or mass-transfer resistance effects may be relevant. An existing mathematical model describing advection and diffusion in the gas phase and diffusional transport from the gaseous phase into an intra-aggregate water phase is modified to include linear kinetic sorption at gas-solid and water-solid interfaces. The model accounts for kinetic mass transfer between all three phases in a soil. The solution of the Laplace-transformed equations is inverted numerically. We performed transient column experiments with 1,1,2-trichloroethane, trichloroethylene and tetrachloroethylene using air-dry solid and water-saturated porous glass beads. The breakthrough curves were calculated based on independently estimated parameters. The model calculations agree well with the experimental data. The different transport behaviour of the three compounds in our system primarily depends on their Henry's constants.
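The abstract states that the Laplace-domain solution is inverted numerically. One common choice for such inversions of smooth transforms (shown here generically, not necessarily the algorithm the authors used) is the Gaver-Stehfest scheme:

```python
from math import factorial, log

def stehfest_invert(F, t, N=12):
    """Invert a Laplace transform F(s) at time t > 0 with the
    Gaver-Stehfest algorithm (N must be even; N=12 works well in
    double precision for smooth, non-oscillatory originals)."""
    h = N // 2
    ln2 = log(2.0)
    total = 0.0
    for k in range(1, N + 1):
        # Stehfest weight V_k
        v = 0.0
        for j in range((k + 1) // 2, min(k, h) + 1):
            v += (j**h * factorial(2 * j)
                  / (factorial(h - j) * factorial(j) * factorial(j - 1)
                     * factorial(k - j) * factorial(2 * j - k)))
        v *= (-1) ** (h + k)
        total += v * F(k * ln2 / t)
    return total * ln2 / t

# Sanity check: the inverse of F(s) = 1/(s+1) is f(t) = exp(-t)
ft = stehfest_invert(lambda s: 1.0 / (s + 1.0), 1.0)
```

Breakthrough-curve models like the one above typically yield exactly this kind of smooth, monotone transform for which Stehfest inversion is well suited.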
Abstract:
Pre-combined SLR-GNSS solutions are studied, and the impact of different types of datum definition on the estimated parameters is assessed. It is found that the origin is realized best by using only the SLR core network to define the geodetic datum; the inclusion of the GNSS core sites degrades the origin. The orientation, however, requires a dense and continuous network, so the inclusion of the GNSS core network is essential.
Abstract:
When considering data from many trials, it is likely that some of them present a markedly different intervention effect or exert an undue influence on the summary results. We develop a forward search algorithm for identifying outlying and influential studies in meta-analysis models. The forward search algorithm starts by fitting the hypothesized model to a small subset of likely outlier-free studies and proceeds by adding, one by one, the studies determined to be closest to the model fitted to the existing set. As each study is added to the set, plots of the estimated parameters and measures of fit are monitored, and outliers are identified by sharp changes in these forward plots. We apply the proposed outlier detection method to two real data sets: a meta-analysis of 26 studies that examines the effect of writing-to-learn interventions on academic achievement adjusting for three possible effect modifiers, and a meta-analysis of 70 studies that compares a fluoride toothpaste treatment to placebo for preventing dental caries in children. A simple simulated example is used to illustrate the steps of the proposed methodology, and a small-scale simulation study is conducted to evaluate the performance of the proposed method. Copyright © 2016 John Wiley & Sons, Ltd.
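A stripped-down version of such a forward search, for a common-mean model on unweighted effect sizes (an illustrative sketch, not the paper's full meta-analysis model with moderators), makes the mechanics visible: the outlying study enters last, and its entry produces a sharp jump in the monitored estimate.

```python
import numpy as np

def forward_search(y, m0=3):
    """Minimal forward search for a common-mean model: start from the m0
    observations closest to the median, then repeatedly add the study
    closest to the mean of the current set, monitoring the estimate."""
    y = np.asarray(y, dtype=float)
    in_set = list(np.argsort(np.abs(y - np.median(y)))[:m0])
    means = [y[in_set].mean()]
    while len(in_set) < len(y):
        mu = y[in_set].mean()
        out = [i for i in range(len(y)) if i not in in_set]
        in_set.append(min(out, key=lambda i: abs(y[i] - mu)))
        means.append(y[in_set].mean())
    return in_set, means

effects = [0.30, 0.35, 0.28, 0.33, 0.31, 2.50]    # study 5 is an outlier
entry_order, mean_path = forward_search(effects)  # outlier enters last
```

In the real method the "closeness" criterion is a model-based residual and the monitored quantities include fit measures, but the entry order and the jump in the forward plot work the same way.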
Abstract:
The evolution of porosity due to dissolution/precipitation processes of minerals and the associated change of transport parameters are of major interest for natural geological environments and engineered underground structures. We designed a reproducible and fast-to-conduct 2D experiment, which is flexible enough to investigate several process couplings implemented in the numerical code OpenGeosys-GEM (OGS-GEM). We investigated advective-diffusive transport of solutes, the effect of liquid-phase density on advective transport, and kinetically controlled dissolution/precipitation reactions causing porosity changes. In addition, the system allowed us to investigate the influence of microscopic (pore scale) processes on macroscopic (continuum scale) transport. A Plexiglas tank of dimension 10 × 10 cm was filled with a 1 cm thick reactive layer consisting of a bimodal grain size distribution of celestite (SrSO4) crystals, sandwiched between two layers of sand. A barium chloride solution was injected into the tank, causing an asymmetric flow field to develop. As the barium chloride reached the celestite region, dissolution of celestite was initiated and barite precipitated. Due to the higher molar volume of barite, its precipitation caused a porosity decrease and thus also a decrease in the permeability of the porous medium. The change of flow in space and time was observed via injection of conservative tracers and analysis of effluents. In addition, an extensive post-mortem analysis of the reacted medium was conducted. We could successfully model the flow (with and without fluid density effects) and the transport of conservative tracers with a (continuum scale) reactive transport model. The prediction of the reactive experiments initially failed. Only the inclusion of information from the post-mortem analysis gave a satisfactory match for the case where the flow field changed due to dissolution/precipitation reactions.
We concentrated on the refinement of the post-mortem analysis and the investigation of the dissolution/precipitation mechanisms at the pore scale. Our analytical techniques combined scanning electron microscopy (SEM) and synchrotron X-ray micro-diffraction/micro-fluorescence performed at the XAS beamline (Swiss Light Source). The newly formed phases include epitaxial barite micro-crystals grown on large celestite crystals and a nano-crystalline barite phase (resulting from the dissolution of small celestite crystals) with residues of celestite crystals in the pore interstices. Classical nucleation theory, using well-established and estimated parameters describing barite precipitation, was applied to explain the mineralogical changes occurring in our system. Our pore scale investigation showed the limits of the continuum scale reactive transport model. Although kinetic effects were implemented by fixing two distinct rates for the dissolution of large and small celestite crystals, instantaneous precipitation of barite was assumed as soon as oversaturation occurred. Precipitation kinetics, passivation of large celestite crystals and metastability of supersaturated solutions, i.e. the conditions under which nucleation cannot occur despite high supersaturation, were neglected. These results will be used to develop a pore scale model that describes precipitation and dissolution of crystals at the pore scale for various transport and chemical conditions. Pore scale modelling can be used to parameterize constitutive equations to introduce pore-scale corrections into macroscopic (continuum) reactive transport models. Microscopic understanding of the system is fundamental for modelling from the pore to the continuum scale.
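The classical nucleation theory argument invoked above hinges on the supersaturation dependence of the nucleation barrier, ΔG* = 16πγ³v²/(3(kBT ln S)²). A sketch with assumed barite-like parameters (illustrative values, not the study's fitted ones) shows the barrier collapsing as supersaturation grows, which is why nucleation can be suppressed at moderate supersaturation yet abrupt at high supersaturation:

```python
import math

KB = 1.380649e-23   # Boltzmann constant, J/K

def cnt_barrier(gamma, v_mol, S, T=298.15):
    """Homogeneous nucleation barrier (J) from classical nucleation
    theory: dG* = 16*pi*gamma^3*v^2 / (3*(kB*T*ln S)^2)."""
    return 16 * math.pi * gamma**3 * v_mol**2 / (3 * (KB * T * math.log(S))**2)

# Illustrative barite-like inputs (assumed, not the paper's values):
gamma = 0.13        # mineral-water interfacial energy, J/m^2
v_mol = 8.6e-29     # molecular volume, m^3

barrier_lo_S = cnt_barrier(gamma, v_mol, S=10.0)    # moderate supersaturation
barrier_hi_S = cnt_barrier(gamma, v_mol, S=100.0)   # high supersaturation
```

Since the barrier scales as 1/(ln S)², raising S from 10 to 100 lowers it by exactly a factor of 4.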
Abstract:
Standardization is a common method for adjusting for confounding factors when comparing two or more exposure categories to assess excess risk. An arbitrary choice of standard population in standardization introduces selection bias due to the healthy worker effect. Small samples in specific groups also pose problems in estimating relative risk and assessing statistical significance. As an alternative, statistical models have been proposed to overcome such limitations and obtain adjusted rates. In this dissertation, a multiplicative model is considered to address the issues related to standardized indices, namely the Standardized Mortality Ratio (SMR) and the Comparative Mortality Factor (CMF). The model provides an alternative to the conventional standardization technique. Maximum likelihood estimates of the model parameters are used to construct an index similar to the SMR for estimating the relative risk of the exposure groups under comparison. A parametric bootstrap resampling method is used to evaluate the goodness of fit of the model, the behavior of the estimated parameters and the variability in relative risk on generated samples. The model provides an alternative to both the direct and indirect standardization methods.
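For reference, the indirect-standardization index the model is compared against is computed as observed over expected deaths, with the expectation built from stratum person-years and standard-population rates (all numbers below are made up for illustration):

```python
# Hypothetical cohort: age-stratified person-years of follow-up, with
# reference (standard population) death rates per person-year.
person_years = {"40-49": 1200.0, "50-59": 800.0, "60-69": 500.0}
ref_rates    = {"40-49": 0.002,  "50-59": 0.006, "60-69": 0.015}
observed_deaths = 22

# Expected deaths if the cohort experienced the reference rates
expected = sum(person_years[a] * ref_rates[a] for a in person_years)

smr = observed_deaths / expected   # SMR > 1 suggests excess risk
```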
Abstract:
Improving energy efficiency is an undeniably urgent issue in developing economies, and an energy efficiency standard and labeling program is an ideal mechanism to achieve this target. However, there is concern as to whether consumers will choose highly energy-efficient appliances, given their higher prices resulting from higher production costs. This paper estimates how consumers responded to the introduction of the energy efficiency standard and labeling program in China. To quantify consumers' evaluations, we estimated consumer surplus and the benefits of products based on the estimated parameters of the demand function. We found the following. First, consumers' evaluation of energy efficiency labels is not monotonically correlated with the label grades: the highest-efficiency label (Label 1) is not valued above Labels 2 and 3, and is sometimes valued below the least energy-efficient label (Label UI). This goes against the design of the policy intervention. Second, several governmental policies act in mixed directions: the subsidies for the highest label grades expand consumer welfare as the program was designed to do, whereas the policies promoting replacement with new appliances decreased welfare.
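A common way to turn estimated demand parameters into consumer surplus (shown here for a multinomial logit demand as a hypothetical sketch, not the paper's actual specification or estimates) is the log-sum formula E[CS] = ln(Σⱼ exp Vⱼ)/α, where α is the marginal utility of income:

```python
import math

def logit_consumer_surplus(utilities, alpha):
    """Expected consumer surplus per consumer in a multinomial logit
    demand model: E[CS] = ln(sum_j exp(V_j)) / alpha, up to a constant."""
    return math.log(sum(math.exp(v) for v in utilities)) / alpha

alpha = 0.5              # price coefficient (assumed)
v_base = [1.0, 0.8, 0.5] # mean utilities of three label grades (assumed)
cs_base = logit_consumer_surplus(v_base, alpha)

# A subsidy cutting the top label's price by 0.4 raises its mean utility
v_subsidy = [1.0 + alpha * 0.4, 0.8, 0.5]
cs_subsidy = logit_consumer_surplus(v_subsidy, alpha)
```

Under this formula any price subsidy weakly raises the log-sum, which is the mechanism behind the welfare-expanding effect of the label subsidies reported above.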
Abstract:
It is a known fact that noise analysis is a suitable method for sensor performance surveillance. In particular, monitoring the response time of a sensor is an efficient way to anticipate failures and to have the opportunity to prevent them. In this work the response times of several sensors of Trillo NPP are estimated by means of noise analysis. The procedure consists of modeling each sensor with autoregressive methods and obtaining the parameter of interest by analyzing the response of the model when a ramp is simulated as the input signal. Core-exit thermocouples and in-core self-powered neutron detectors are the main sensors analyzed, but other plant sensors are studied as well. Since several measurement campaigns have been carried out, it has also been possible to analyze the evolution of the estimated parameters over more than one fuel cycle. Some sensitivity studies on the sampling frequency of the signals and its influence on the response time are also included. Calculations and analysis have been done in the frame of a collaboration agreement between the Trillo NPP operator (CNAT) and the School of Mines of Madrid.
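The ramp-based step of the procedure can be illustrated with a toy first-order sensor standing in for the fitted autoregressive model (the time constant and sampling interval below are invented, not Trillo data): the simulated output trails a ramp input by a constant lag that converges to the sensor's response time.

```python
import math

# First-order sensor y' = (u - y)/tau, discretized as an AR(1) filter;
# the response time is read off as the asymptotic lag to a unit ramp.
tau = 0.5                 # true sensor time constant, s (assumed)
dt = 0.01                 # sampling interval, s (assumed)
a = math.exp(-dt / tau)   # discrete pole of the sensor model

y, lag = 0.0, 0.0
for k in range(1, 1001):          # simulate 10 s, long past the transient
    t = k * dt
    y = a * y + (1 - a) * t       # model driven by the ramp u(t) = t
    lag = t - y                   # output trails the ramp by ~tau
```

In the real procedure the AR coefficients come from plant noise records rather than a known model, but the response time is extracted from the simulated ramp response in exactly this way.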
Abstract:
We present data on the decay, after radiotherapy, of naive and memory human T lymphocytes with stable chromosome damage. These data are analyzed in conjunction with existing data on the decay of naive and memory T lymphocytes with unstable chromosome damage and older data on unsorted lymphocytes. The analyses yield in vivo estimates for some life-history parameters of human T lymphocytes. Best estimates of proliferation rates have naive lymphocytes dividing once every 3.5 years and memory lymphocytes dividing once every 22 weeks. It appears that memory lymphocytes can revert to the naive phenotype, but only, on average, after 3.5 years in the memory class. The lymphocytes with stable chromosome damage decay very slowly, yielding surprisingly low estimates of their death rate. The estimated parameters are used in a simple mathematical model of the population dynamics of undamaged naive and memory lymphocytes. We use this model to illustrate that it is possible for the unprimed subset of a constantly stimulated clone to stay small, even when there is a large population of specific primed cells reverting to the unprimed state.
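The closing claim can be reproduced with a two-compartment sketch of such a model (the priming rate is an assumed illustrative value; only the 3.5-year reversion time comes from the text): under constant stimulation the unprimed pool equilibrates at the small fraction revert/(revert + prime).

```python
# Two-compartment sketch: naive (unprimed) cells are primed to memory
# quickly under constant stimulation, while memory cells revert slowly.
dt = 0.001                 # time step, years
revert = 1.0 / 3.5         # memory -> naive reversion rate, per year (from text)
prime = 10.0               # naive -> memory priming rate, per year (assumed)

naive, memory = 1.0, 0.0   # start with an all-naive clone
for _ in range(20000):     # forward-Euler integration over 20 years
    d_naive = revert * memory - prime * naive
    d_memory = prime * naive - revert * memory
    naive += dt * d_naive
    memory += dt * d_memory

naive_frac = naive / (naive + memory)   # settles at revert/(revert + prime)
```

With priming much faster than reversion, the unprimed fraction stays below a few percent even though primed cells constantly revert, which is the behavior the model in the paper illustrates.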