916 results for two-step process
Abstract:
The paper investigates the role of real exchange rate misalignment in long-run growth for a set of ninety countries using time series data from 1980 to 2004. We first estimate a panel data model (using fixed and random effects) for the real exchange rate, with different model specifications, in order to produce estimates of the equilibrium real exchange rate, which are then used to construct measures of real exchange rate misalignment. We also provide an alternative set of estimates of real exchange rate misalignment using panel cointegration methods. The variables used in our real exchange rate models are: real per capita GDP, net foreign assets, terms of trade and government consumption. The results for the two-step System GMM panel growth models indicate that the coefficients for real exchange rate misalignment are positive for different model specifications and samples, which means that a more depreciated (appreciated) real exchange rate helps (harms) long-run growth. The estimated coefficients are higher for developing and emerging countries.
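A minimal sketch of the two-step structure described above, assuming a hypothetical panel data set panel.csv with illustrative column names (country, period, log_rer, log_gdp_pc, nfa, terms_of_trade, gov_cons, growth); none of these names come from the paper. Step one fits a fixed-effects real exchange rate equation and takes the residual as misalignment; step two enters that measure in a growth regression. Plain OLS is used only to show the mechanics; the paper itself relies on two-step System GMM.

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("panel.csv")  # hypothetical panel: country, period, RER and fundamentals

# Step 1: fixed-effects equation for the (log) real exchange rate on its fundamentals;
# country dummies act as the fixed effects.
rer_fit = smf.ols(
    "log_rer ~ log_gdp_pc + nfa + terms_of_trade + gov_cons + C(country)",
    data=df,
).fit()

# Equilibrium RER = fitted value; misalignment = actual minus equilibrium (the residual).
df["misalignment"] = rer_fit.resid

# Step 2: growth regression including the misalignment measure
# (the paper estimates this step by two-step System GMM; OLS is shown only for illustration).
growth_fit = smf.ols("growth ~ misalignment + log_gdp_pc + C(period)", data=df).fit()
print(growth_fit.summary())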
Abstract:
In this paper we show that the inclusion of unemployment-tenure interaction variates in Mincer wage equations is subject to serious pitfalls. These variates were designed to test whether or not the sensitivity of a worker's wage to the business cycle varies according to her tenure. We show that three canonical variates used in the literature - the minimum unemployment rate during a worker's time at the firm (min u), the unemployment rate at the start of her tenure (Su) and the current unemployment rate interacted with a new-hire dummy (δu) - can all be significant and "correctly" signed even when each worker in the firm receives the same wage, regardless of tenure (equal treatment). In matched data the problem can be resolved by the inclusion in the panel of firm-year interaction dummies. In unmatched data where this is not possible, we propose a solution for min u and Su based on Solon, Barsky and Parker's (1994) two-step method. This method is sub-optimal because it ignores a large amount of cross-tenure variation in average wages and is only valid when the scaled covariances of firm wages and firm employment are acyclical. Unfortunately δu cannot be identified in unmatched data because a differential wage response to unemployment of new hires and incumbents will appear under both equal treatment and unequal treatment.
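A minimal sketch of a Solon, Barsky and Parker style two-step estimator of the kind mentioned above, assuming hypothetical files wages.csv (an unmatched worker panel with log_wage, experience, education, tenure and an integer year column) and unemployment.csv (year, u_rate); all names are illustrative. Step one estimates year effects from the micro wage regression; step two regresses those year effects on the unemployment rate to test wage cyclicality.

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("wages.csv")  # hypothetical unmatched worker panel

# Step 1: micro-level Mincer regression with year dummies; the estimated year effects
# absorb the aggregate component of wages (the base year is the omitted category).
step1 = smf.ols("log_wage ~ experience + education + tenure + C(year)", data=df).fit()

year_fx = step1.params.filter(like="C(year)")
fx = pd.DataFrame({
    "year": [int(lbl.split("T.")[1].rstrip("]")) for lbl in year_fx.index],
    "year_effect": year_fx.values,
})

# Step 2: regress the estimated year effects on the aggregate unemployment rate.
macro = pd.read_csv("unemployment.csv")  # hypothetical: columns year, u_rate
step2 = smf.ols("year_effect ~ u_rate", data=fx.merge(macro, on="year")).fit()
print(step2.summary())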
Abstract:
This paper reports on one of the first empirical attempts to investigate small firm growth and survival, and their determinants, in the People's Republic of China. The work is based on fieldwork evidence gathered from a sample of 83 Chinese private firms (mainly SMEs) collected initially by face-to-face interviews, and subsequently by follow-up telephone interviews a year later. We extend the models of Gibrat (1931) and Jovanovic (1982), which traditionally focus on size and age alone (e.g. Brock and Evans, 1986), to a 'comprehensive' growth model with two types of additional explanatory variables: firm-specific (e.g. business planning); and environmental (e.g. choice of location). We estimate two econometric models: a 'basic' age-size-growth model; and a 'comprehensive' growth model, using Heckman's two-step regression procedure. Estimation is by log-linear regression on cross-section data, with corrections for sample selection bias and heteroskedasticity. Our results refute a pure Gibrat model (but support a more general variant) and support the learning model, as regards the consequences of size and age for growth; and our extension to a comprehensive model highlights the importance of location choice and customer orientation for the growth of Chinese private firms. In the latter model, growth is explained by variables such as planning, R&D orientation, market competition and elasticity of demand, as well as by control variables. Our work on small firm growth achieves two things. First, it upholds the validity of 'basic' size-age-growth models, and successfully applies them to the Chinese economy. Second, it extends the compass of such models to a 'comprehensive' growth model incorporating firm-specific and environmental variables.
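A minimal sketch of a Heckman two-step procedure of the kind described above, assuming a hypothetical cross-section firms.csv with a survival indicator, a growth measure and illustrative regressors (all column names are assumptions, not the authors' variables). Step one is a probit selection equation; step two is a log-linear growth regression that includes the inverse Mills ratio and uses heteroskedasticity-robust standard errors.

import pandas as pd
import statsmodels.api as sm
from scipy.stats import norm

df = pd.read_csv("firms.csv")  # hypothetical cross-section of firms

# Step 1: probit selection equation for firm survival.
X_sel = sm.add_constant(df[["log_size", "log_age", "planning", "urban_location"]])
probit = sm.Probit(df["survived"], X_sel).fit()
xb = probit.fittedvalues                      # linear index x'beta
df["mills"] = norm.pdf(xb) / norm.cdf(xb)     # inverse Mills ratio

# Step 2: log-linear growth regression on the surviving firms, with the inverse
# Mills ratio correcting for sample selection and robust (HC1) standard errors.
surv = df[df["survived"] == 1]
X_out = sm.add_constant(surv[["log_size", "log_age", "planning", "urban_location", "mills"]])
growth = sm.OLS(surv["log_growth"], X_out).fit(cov_type="HC1")
print(growth.summary())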
Abstract:
Biofilm formation is a multi-step process influenced by surface properties. We investigated early and mature biofilms of Staphylococcus aureus on 4 different biological calcium phosphate (CaP) bone grafts used for filling bone defects. Standardised cylinders of fresh and fresh-frozen human bone grafts were harvested from femoral heads; processed human and bovine bone grafts were obtained preformed. Biofilm formation was carried out in tryptic soy broth (TSB) using S. aureus (ATCC 29213) under static conditions. Biofilm density after 3 h (early biofilm) and 24 h (mature biofilm) was investigated by sonication and microcalorimetry. After 3 h, bacterial density was highest on fresh-frozen and fresh bone grafts. After 24 h, biofilm density was lowest on fresh bone grafts (p < 0.001) compared to the other 3 materials, which did not differ quantitatively (p > 0.05). The lowest increase in bacterial density was detected on fresh bone grafts (p < 0.001). In addition to normally shaped colonies, we found small colonies on the surface of the fresh and fresh-frozen samples by sonication. This was also apparent in microcalorimetric heat-flow curves. The four investigated CaP bone grafts showed minor structural differences in architecture but marked differences concerning serum coverage and the content of bone marrow, fibrous tissue and bone cells. These variations resulted in a decreased biofilm density on fresh and fresh-frozen bone grafts after 24 h, despite increased early biofilm formation, and might also be responsible for the variations in colony morphology (small colonies). Detection of small colony variants by microcalorimetry might be a new approach to improve the understanding of biofilm formation.
Abstract:
A two-step high-performance liquid chromatography method is described, using a CN column and an alpha 1-acid glycoprotein column, which allows the measurement of the enantiomers of the hydroxy metabolites of trimipramine in plasma of trimipramine-treated patients. Of the four patients analyzed, three showed approximately equimolar concentrations of the (D)- and (L)-enantiomers of the hydroxy metabolites (2-hydroxy-trimipramine and 2-hydroxy-desmethyltrimipramine), and one was found to have roughly twice as much of the (L)-form as of the (D)-form of 2-hydroxy-trimipramine and 2-hydroxy-desmethyltrimipramine. From the data available on the pharmacological effects of the enantiomers of trimipramine, it is postulated that this interindividual variability in its pharmacokinetics is another factor that could contribute to the interindividual variability in its pharmacodynamics.
Abstract:
ACuteTox is a project within the 6th European Framework Programme, one of whose goals was to develop, optimise and prevalidate a non-animal testing strategy for predicting human acute oral toxicity. In its last 6 months, a challenging exercise was conducted to assess the predictive capacity of the developed testing strategies and to identify the most promising ones. Thirty-two chemicals were tested blind in the battery of in vitro and in silico methods selected during the first phase of the project. This paper describes the classification approaches studied: single-step procedures and two-step tiered testing strategies. In summary, four in vitro testing strategies were proposed as best performing in terms of predictive capacity with respect to the European acute oral toxicity classification. In addition, a heuristic testing strategy is suggested that combines the prediction results gained from the neutral red uptake assay performed in 3T3 cells with information on neurotoxicity alerts identified by the primary rat brain aggregates test method. Octanol-water partition coefficients and in silico predictions of intestinal absorption and blood-brain barrier passage are also considered. This approach makes it possible to reduce the number of chemicals wrongly predicted as not classified (LD50 > 2000 mg/kg b.w.).
Abstract:
Simian rotavirus SA-11, experimentally seeded, was recovered from raw domestic sewage by a two-step concentration procedure using filtration through a positively charged microporous filter (Zeta Plus 60 S) followed by ultracentrifugation, effecting an 8,000-fold concentration. By this method, a mean recovery of 81% ± 7.5% of the SA-11 virus was achieved.
Abstract:
We show here a simplified reverse transcription-polymerase chain reaction (RT-PCR) for identification of dengue type 2 virus. Three dengue type 2 virus strains, isolated from Brazilian patients, and yellow fever vaccine 17DD, as a negative control, were used in this study. C6/36 cells were infected with the virus, and tissue culture fluids were collected after a 7-day infection period. The RT-PCR, a combination of RT and PCR carried out after a single addition of reagents in a single reaction vessel, was performed following digestion of the virus with 1% Nonidet P-40. The 50 µl assay reaction mixture included 50 pmol of a dengue type 2 specific primer pair amplifying a 210 base pair sequence of the envelope protein gene, 0.1 mM of the four deoxynucleoside triphosphates, 7.5 U of reverse transcriptase, and 1 U of thermostable Taq DNA polymerase. The reaction mixture was incubated for 15 min at 37°C for RT, followed by a variable number of cycles of two-step PCR amplification (92°C for 60 s, 53°C for 60 s) with slow temperature increments. The PCR products were subjected to 1.7% agarose gel electrophoresis and visualized under UV light after gel incubation in ethidium bromide solution. DNA bands were observed after 25 and 30 cycles of PCR. Virus amounts as low as 10^2.8 TCID50/ml were detected by RT-PCR. Specific DNA amplification was observed with the three dengue type 2 strains. This assay has advantages compared to other RT-PCRs: it avoids laborious extraction of virus RNA; the combination of RT and PCR reduces assay time, facilitates the procedure and reduces the risk of contamination; and the two-step PCR cycle produces clear DNA amplification, saves assay time and simplifies the technique.
Abstract:
In this study we analyze multinationality (domestic-based firms vs. multinationals) and foreignness (foreign vs. domestic firms) effects in the returns of R&D to productivity. We follow a two-step strategy. In the first step, we consistently estimate firms' productivity by GMM and numerically compute the sample distribution of the R&D returns. In the second step, we use stochastic dominance techniques to make inferences on the multinationality and foreignness effects. Results for a panel of UK manufacturing firms suggest that multinationality and foreignness effects operate in opposite ways: whilst the multinationality effect enhances R&D returns, the foreignness effect diminishes them.
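A minimal illustration of the second-step idea above (a stochastic dominance comparison of R&D return distributions), using synthetic numbers in place of the estimated returns; nothing here reproduces the paper's data or its specific dominance tests.

import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
# Synthetic stand-ins for the estimated firm-level R&D returns of the two groups.
returns_mne = rng.normal(0.08, 0.03, 500)
returns_dom = rng.normal(0.05, 0.03, 500)

# First-order stochastic dominance check on a common grid:
# group A dominates group B if F_A(x) <= F_B(x) for every x.
grid = np.linspace(
    min(returns_mne.min(), returns_dom.min()),
    max(returns_mne.max(), returns_dom.max()),
    200,
)
F_mne = np.searchsorted(np.sort(returns_mne), grid, side="right") / returns_mne.size
F_dom = np.searchsorted(np.sort(returns_dom), grid, side="right") / returns_dom.size
print("MNE returns first-order dominate:", bool(np.all(F_mne <= F_dom)))

# A two-sample Kolmogorov-Smirnov test as a simple check that the distributions differ.
print(ks_2samp(returns_mne, returns_dom))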
Abstract:
Given a sample from a fully specified parametric model, let Zn be a given finite-dimensional statistic - for example, an initial estimator or a set of sample moments. We propose to (re-)estimate the parameters of the model by maximizing the likelihood of Zn. We call this the maximum indirect likelihood (MIL) estimator. We also propose a computationally tractable Bayesian version of the estimator, which we refer to as a Bayesian indirect likelihood (BIL) estimator. In most cases, the density of the statistic will be of unknown form, and we develop simulated versions of the MIL and BIL estimators. We show that the indirect likelihood estimators are consistent and asymptotically normally distributed, with the same asymptotic variance as that of the corresponding efficient two-step GMM estimator based on the same statistic. However, our likelihood-based estimators, by taking into account the full finite-sample distribution of the statistic, are higher-order efficient relative to GMM-type estimators. Furthermore, in many cases they enjoy a bias reduction property similar to that of the indirect inference estimator. Monte Carlo results for a number of applications, including dynamic and nonlinear panel data models, a structural auction model and two DSGE models, show that the proposed estimators indeed have attractive finite sample properties.
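A minimal toy sketch of the simulated maximum indirect likelihood idea, for a deliberately simple assumed model (i.i.d. normal data with unknown mean, with the sample mean and variance as the statistic Zn); the kernel-density step and grid maximisation are illustrative choices, not the authors' implementation.

import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
n = 200
data = rng.normal(1.5, 1.0, n)                 # observed sample from the (toy) model
z_obs = np.array([data.mean(), data.var()])    # the finite-dimensional statistic Zn

def simulated_indirect_loglik(theta, n_sim=500):
    # Simulate the finite-sample distribution of the statistic under theta and
    # evaluate a kernel-smoothed density of that distribution at the observed Zn.
    sims = rng.normal(theta, 1.0, size=(n_sim, n))
    stats = np.column_stack([sims.mean(axis=1), sims.var(axis=1)])
    return float(np.log(gaussian_kde(stats.T)(z_obs)[0] + 1e-300))

# Simulated MIL estimate: maximise the indirect log-likelihood over a parameter grid.
grid = np.linspace(0.5, 2.5, 41)
theta_hat = grid[np.argmax([simulated_indirect_loglik(t) for t in grid])]
print("simulated MIL estimate of the mean:", theta_hat)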
Abstract:
Zero correlation between measurement error and model error has been assumed in existing panel data models dealing specifically with measurement error. We extend this literature and propose a simple model where one regressor is mismeasured, allowing the measurement error to correlate with the model error. Zero correlation between measurement error and model error is a special case of our model in which the correlated measurement error equals zero. We ask two research questions. First, can the correlated measurement error be identified in the context of panel data? Second, do classical instrumental variables in panel data need to be adjusted when the correlation between measurement error and model error cannot be ignored? Under some regularity conditions the answer to both questions is yes. We then propose a two-step estimation procedure corresponding to the two questions. The first step estimates the correlated measurement error from a reverse regression; the second step estimates the usual coefficients of interest using adjusted instruments.
Abstract:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending corresponding approaches beyond the local scale still represents a major challenge, yet is critically important for the development of reliable groundwater flow and contaminant transport models. To address this issue, I have developed a hydrogeophysical data integration technique based on a two-step Bayesian sequential simulation procedure that is specifically targeted towards larger-scale problems. The objective is to simulate the distribution of a target hydraulic parameter based on spatially exhaustive, but poorly resolved, measurements of a pertinent geophysical parameter and locally highly resolved, but spatially sparse, measurements of the considered geophysical and hydraulic parameters. To this end, my algorithm links the low- and high-resolution geophysical data via a downscaling procedure before relating the downscaled regional-scale geophysical data to the high-resolution hydraulic parameter field. I first illustrate the application of this novel data integration approach to a realistic synthetic database consisting of collocated high-resolution borehole measurements of the hydraulic and electrical conductivities and spatially exhaustive, low-resolution electrical conductivity estimates obtained from electrical resistivity tomography (ERT). The overall viability of this method is tested and verified by performing and comparing flow and transport simulations through the original and simulated hydraulic conductivity fields. The corresponding results indicate that the proposed data integration procedure does indeed allow for obtaining faithful estimates of the larger-scale hydraulic conductivity structure and reliable predictions of the transport characteristics over medium- to regional-scale distances. The approach is then applied to a corresponding field scenario consisting of collocated high-resolution measurements of the electrical conductivity, as measured using a cone penetrometer testing (CPT) system, and the hydraulic conductivity, as estimated from electromagnetic flowmeter and slug test measurements, in combination with spatially exhaustive low-resolution electrical conductivity estimates obtained from surface-based electrical resistivity tomography (ERT). The corresponding results indicate that the newly developed data integration approach is indeed capable of adequately capturing both the small-scale heterogeneity as well as the larger-scale trend of the prevailing hydraulic conductivity field.
The results also indicate that this novel data integration approach is remarkably flexible and robust and hence can be expected to be applicable to a wide range of geophysical and hydrological data at all scale ranges. In the second part of my thesis, I evaluate in detail the viability of sequential geostatistical resampling as a proposal mechanism for Markov Chain Monte Carlo (MCMC) methods applied to high-dimensional geophysical and hydrological inverse problems, in order to allow for a more accurate and realistic quantification of the uncertainty associated with the models thus inferred. Focusing on a series of pertinent crosshole georadar tomographic examples, I investigate two classes of geostatistical resampling strategies with regard to their ability to efficiently and accurately generate independent realizations from the Bayesian posterior distribution. The corresponding results indicate that, despite its popularity, sequential resampling is rather inefficient at drawing independent posterior samples for realistic synthetic case studies, notably for the practically common and important scenario of pronounced spatial correlation between model parameters. To address this issue, I have developed a new gradual-deformation-based perturbation approach, which is flexible with regard to the number of model parameters as well as the perturbation strength. Compared to sequential resampling, this newly proposed approach proves highly effective in decreasing the number of iterations required for drawing independent samples from the Bayesian posterior distribution.
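A minimal sketch of a gradual-deformation (pCN-type) perturbation inside a Metropolis sampler, assuming a standard-normal prior on the model parameters and a toy Gaussian likelihood; it only illustrates the kind of prior-preserving proposal discussed above and is not the thesis' actual algorithm.

import numpy as np

rng = np.random.default_rng(42)

def gradual_deformation(m_current, theta):
    # Combine the current model with an independent standard-normal realization;
    # for m_current ~ N(0, I) this proposal preserves the Gaussian prior.
    z = rng.standard_normal(m_current.shape)
    return np.cos(theta) * m_current + np.sin(theta) * z

def log_likelihood(m, data, sigma=0.1):
    # Toy likelihood: the "data" are noisy observations of the model itself.
    return -0.5 * np.sum((data - m) ** 2) / sigma**2

m = rng.standard_normal(100)                                  # initial model realization
data = 0.5 * np.ones(100) + 0.1 * rng.standard_normal(100)    # synthetic observations
logl = log_likelihood(m, data)
theta = 0.1                                                   # perturbation strength (small = local move)

for _ in range(5000):
    m_prop = gradual_deformation(m, theta)
    logl_prop = log_likelihood(m_prop, data)
    # Because the proposal preserves the prior, the acceptance ratio uses only the likelihood.
    if np.log(rng.uniform()) < logl_prop - logl:
        m, logl = m_prop, logl_prop

print("first five components of the current posterior sample:", m[:5])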
Abstract:
This Final Report is the culmination of a two-stage process aimed at fulfilling the request of the previous Minister for Health and Children (Mary Harney) in relation to the practice of symphysiotomy in Ireland. The first phase was an independent academic research report. The second phase involved consultation with relevant stakeholders to provide comment on the report.
Abstract:
For doping control, analyses of samples are generally achieved in two steps: a rapid screening and, in the case of a positive result, a confirmatory analysis. A two-step methodology based on ultra-high-pressure liquid chromatography coupled to quadrupole time-of-flight mass spectrometry (UHPLC-QTOF-MS) was developed to screen and confirm 103 doping agents from various classes (e.g., beta-blockers, stimulants, diuretics, and narcotics). The screening method was presented in a previous article as part I (i.e., Fast analysis of doping agents in urine by ultra-high-pressure liquid chromatography-quadrupole time-of-flight mass spectrometry. Part I: screening analysis). For the confirmatory method, basic, neutral and acidic compounds were extracted by a dedicated solid-phase extraction (SPE) in a 96-well plate format and detected by MS in tandem mode to obtain precursor and characteristic product ions. The mass accuracy and the elemental composition of precursor and product ions were used for compound identification. After validation, including matrix effect determination, the method was considered reliable for confirming suspect results without ambiguity according to the positivity criteria established by the World Anti-Doping Agency (WADA). Moreover, an isocratic method was developed to separate ephedrine from its isomer pseudoephedrine, and cathine from phenylpropanolamine, in a single run, which allowed their direct quantification in urine.
Abstract:
The multiscale finite volume (MsFV) method has been developed to efficiently solve large heterogeneous problems (elliptic or parabolic); it is usually employed for pressure equations and delivers conservative flux fields to be used in transport problems. The method essentially relies on the hypothesis that the (fine-scale) problem can be reasonably described by a set of local solutions coupled by a conservative global (coarse-scale) problem. In most cases, the boundary conditions assigned to the local problems are satisfactory and the approximate conservative fluxes provided by the method are accurate. In numerically challenging cases, however, a more accurate localization is required to obtain a good approximation of the fine-scale solution. In this paper we develop a procedure to iteratively improve the boundary conditions of the local problems. The algorithm relies on the data structure of the MsFV method and employs a Krylov-subspace projection method to obtain an unconditionally stable scheme and to accelerate convergence. Two variants are considered: in the first, only the MsFV operator is used; in the second, the MsFV operator is combined in a two-step method with an operator derived from the problem solved to construct the conservative flux field. The resulting iterative MsFV algorithms allow arbitrary reduction of the solution error without compromising the construction of a conservative flux field, which is guaranteed at any iteration. Since it converges to the exact solution, the method can be regarded as a linear solver. In this context, the schemes proposed here can be viewed as preconditioned versions of the Generalized Minimal Residual method (GMRES), with the very peculiar characteristic that the residual on the coarse grid is zero at every iteration (so that conservative fluxes can be obtained).
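A minimal sketch of the "preconditioned GMRES" reading of such iterative schemes, using SciPy on a small 1-D heterogeneous diffusion system; an incomplete-LU factorization stands in for the MsFV operator purely to illustrate the structure (the matrix, preconditioner and sizes are all illustrative, not the paper's scheme).

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Small 1-D heterogeneous diffusion (elliptic) test system with Dirichlet boundaries.
n = 200
k = 1.0 + 10.0 * np.random.default_rng(0).random(n + 1)   # heterogeneous coefficients
A = sp.diags([-k[1:-1], k[:-1] + k[1:], -k[1:-1]], [-1, 0, 1], format="csr")
b = np.ones(n)

# Stand-in preconditioner: incomplete LU wrapped as a LinearOperator.
# (In the paper this role is played by the MsFV operator; ILU is used here only
# to show the preconditioned-GMRES structure.)
ilu = spla.spilu(A.tocsc())
M = spla.LinearOperator(A.shape, matvec=ilu.solve)

x, info = spla.gmres(A, b, M=M)
print("GMRES exit flag:", info, "residual norm:", np.linalg.norm(A @ x - b))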