924 results for two-step carcinogenesis
Abstract:
This paper analyses constitutions equipped with electoral systems involving two-step procedures. First, one candidate is elected in every jurisdiction by the electors in that jurisdiction, according to some aggregation procedure. Second, another aggregation procedure collects the names of the jurisdictional winners in order to designate the final winner. It turns out that whenever individuals are allowed to change jurisdiction when casting their ballot, they are able to manipulate the result of the election except in very few cases. When a Paretian condition is imposed on every jurisdiction's voting rule, it is shown that, for any finite number of candidates, any two-step voting rule that is not manipulable by movements of the electors necessarily gives every voter the power to single-handedly overrule unanimity. A characterization of the set of these rules is then provided for the case of two candidates.
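To picture the kind of manipulation the paper rules out, here is a minimal sketch of a hypothetical plurality-within-plurality rule (not one of the paper's constructions) in which a single elector flips the outcome by casting the same ballot in a different jurisdiction:

```python
# Plurality in each jurisdiction, then plurality over the jurisdictional
# winners; ties are broken alphabetically. Candidates and ballots are invented.
from collections import Counter

def plurality(ballots):
    counts = Counter(ballots)
    return min(counts, key=lambda c: (-counts[c], c))

def two_step_winner(jurisdictions):
    return plurality([plurality(b) for b in jurisdictions.values()])

base = {"J1": ["A", "A", "B"], "J2": ["B", "B", "B", "A"], "J3": ["A", "B"]}
moved = {"J1": ["A", "A", "B"], "J2": ["B", "B", "A"], "J3": ["A", "B", "B"]}

print(two_step_winner(base))   # 'A': jurisdictional winners are A, B, A
print(two_step_winner(moved))  # 'B': one B-elector moved from J2 to J3
```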
Abstract:
Asbestos exposure can result in serious and frequently lethal diseases, including malignant mesothelioma. The host sensor for asbestos-induced inflammation is the NLRP3 inflammasome, and it is widely assumed that this complex is essential for asbestos-induced cancers. Here, we report that acute interleukin-1β production and recruitment of immune cells into the peritoneal cavity were significantly decreased in NLRP3-deficient mice after the administration of asbestos. However, NLRP3-deficient mice displayed an incidence of malignant mesothelioma and survival times similar to those of wild-type mice. Thus, early inflammatory reactions triggered by asbestos are NLRP3-dependent, but NLRP3 is not critical in the chronic development of asbestos-induced mesothelioma. Notably, in a papilloma model induced by two-stage carcinogenesis, NLRP3-deficient mice showed a resistant phenotype in two different strain backgrounds, suggesting a tumour-promoting role for NLRP3 in certain chemically induced cancer types.
Abstract:
This paper reports on one of the first empirical attempts to investigate small firm growth and survival, and their determinants, in the People's Republic of China. The work is based on fieldwork evidence gathered from a sample of 83 Chinese private firms (mainly SMEs), collected initially by face-to-face interviews and subsequently by follow-up telephone interviews a year later. We extend the models of Gibrat (1931) and Jovanovic (1982), which traditionally focus on size and age alone (e.g. Brock and Evans, 1986), to a 'comprehensive' growth model with two types of additional explanatory variables: firm-specific (e.g. business planning) and environmental (e.g. choice of location). We estimate two econometric models: a 'basic' age-size-growth model and a 'comprehensive' growth model, using Heckman's two-step regression procedure. Estimation is by log-linear regression on cross-section data, with corrections for sample selection bias and heteroskedasticity. Our results refute a pure Gibrat model (but support a more general variant) and support the learning model, as regards the consequences of size and age for growth; and our extension to a comprehensive model highlights the importance of location choice and customer orientation for the growth of Chinese private firms. In the latter model, growth is explained by variables such as planning, R&D orientation, market competition and elasticity of demand, as well as by control variables. Our work on small firm growth achieves two things. First, it upholds the validity of 'basic' size-age-growth models and successfully applies them to the Chinese economy. Second, it extends the compass of such models to a 'comprehensive' growth model incorporating firm-specific and environmental variables.
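For readers unfamiliar with the estimation strategy, here is a minimal sketch of Heckman's two-step selection correction on simulated data; the variable names and data-generating process are illustrative, not the authors':

```python
# Step 1: probit for survival on the full sample; step 2: OLS for growth on
# survivors, with the inverse Mills ratio as an added regressor.
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 1_000
size, age = rng.lognormal(1.0, 1.0, n), rng.integers(1, 20, n)
X = sm.add_constant(np.column_stack([np.log(size), np.log(age)]))

# Correlated errors induce the selection bias the procedure corrects.
u = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.5], [0.5, 1.0]], n)
survive = (X @ np.array([0.5, 0.3, 0.2]) + u[:, 0]) > 0
growth = X @ np.array([0.1, -0.05, -0.1]) + u[:, 1]

# Step 1: probit, then the inverse Mills ratio at the linear index.
probit = sm.Probit(survive.astype(int), X).fit(disp=0)
xb = X @ probit.params
mills = norm.pdf(xb) / norm.cdf(xb)

# Step 2: growth regression on surviving firms only, Mills ratio included.
ols = sm.OLS(growth[survive], np.column_stack([X, mills])[survive]).fit()
print(ols.params)  # the last coefficient picks up the selection correction
```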
Abstract:
The paper investigates the role of real exchange rate misalignment in long-run growth for a set of ninety countries, using time series data from 1980 to 2004. We first estimate a panel data model (using fixed and random effects) for the real exchange rate, with different model specifications, in order to produce estimates of the equilibrium real exchange rate, and this is then used to construct measures of real exchange rate misalignment. We also provide an alternative set of estimates of real exchange rate misalignment using panel cointegration methods. The variables used in our real exchange rate models are real per capita GDP, net foreign assets, terms of trade and government consumption. The results for the two-step System GMM panel growth models indicate that the coefficients for real exchange rate misalignment are positive across different model specifications and samples, which means that a more depreciated (appreciated) real exchange rate helps (harms) long-run growth. The estimated coefficients are higher for developing and emerging countries.
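A schematic version of the first step might look like the following; the file and column names are hypothetical, and the paper's actual specifications also include random-effects and panel-cointegration variants:

```python
# Estimate an equilibrium real exchange rate with country fixed effects, then
# define misalignment as the gap between the actual and fitted rate.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("rer_panel.csv")  # columns assumed: country, year, ln_rer,
                                   # ln_gdp_pc, nfa, ln_tot, gov_cons

fe = smf.ols("ln_rer ~ ln_gdp_pc + nfa + ln_tot + gov_cons + C(country)",
             data=df).fit()
df["misalign"] = df["ln_rer"] - fe.fittedvalues
# The sign read-off (over- vs undervaluation) depends on how the RER is
# defined; misalign then enters the second-step System GMM growth regression.
```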
Abstract:
In this paper we show that the inclusion of unemployment-tenure interaction variates in Mincer wage equations is subject to serious pitfalls. These variates were designed to test whether or not the sensitivity of a worker's wage to the business cycle varies with her tenure. We show that three canonical variates used in the literature - the minimum unemployment rate during a worker's time at the firm (min u), the unemployment rate at the start of her tenure (Su) and the current unemployment rate interacted with a new-hire dummy (δu) - can all be significant and "correctly" signed even when each worker in the firm receives the same wage, regardless of tenure (equal treatment). In matched data the problem can be resolved by including firm-year interaction dummies in the panel. In unmatched data, where this is not possible, we propose a solution for min u and Su based on Solon, Barsky and Parker's (1994) two-step method. This method is sub-optimal because it ignores a large amount of cross-tenure variation in average wages and is only valid when the scaled covariances of firm wages and firm employment are acyclical. Unfortunately, δu cannot be identified in unmatched data, because a differential wage response of new hires and incumbents to unemployment will appear under both equal treatment and unequal treatment.
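To make the three variates concrete, here is how they could be constructed from a worker-firm panel with pandas; the column names are illustrative, and the start of a tenure spell is proxied by the first year the worker is observed at the firm:

```python
import pandas as pd

df = pd.read_csv("panel.csv").sort_values(["worker", "firm", "year"])
g = df.groupby(["worker", "firm"])

df["min_u"] = g["unemp"].cummin()             # lowest u since the spell began
df["S_u"] = g["unemp"].transform("first")     # u at the start of the spell
df["new_hire"] = (g.cumcount() == 0).astype(int)
df["delta_u"] = df["new_hire"] * df["unemp"]  # current u x new-hire dummy
# In matched data, firm-year interaction dummies can be added, e.g. via
# pd.get_dummies(df["firm"].astype(str) + "_" + df["year"].astype(str)).
```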
Abstract:
A two-step high-performance liquid chromatography method is described, using a CN column and an alpha 1-acid glycoprotein column, which allows the measurement of the enantiomers of the hydroxy metabolites of trimipramine in plasma of trimipramine-treated patients. Of the four patients analyzed, three showed approximately equimolar concentrations of the (D)- and (L)-enantiomers of the hydroxy metabolites (2-hydroxy-trimipramine and 2-hydroxy-desmethyltrimipramine), and one was found to have roughly twice as much of the (L)-form as of the (D)-form of 2-hydroxy-trimipramine and 2-hydroxy-desmethyltrimipramine. From the data available on the pharmacological effects of the enantiomers of trimipramine, it is postulated that this interindividual variability in its pharmacokinetics is another factor that could contribute to the interindividual variability in its pharmacodynamics.
Abstract:
ACuteTox is a project within the 6th European Framework Programme, one of whose goals was to develop, optimise and prevalidate a non-animal testing strategy for predicting human acute oral toxicity. In its last six months, a challenging exercise was conducted to assess the predictive capacity of the testing strategies developed and to identify the most promising ones. Thirty-two chemicals were tested blind in the battery of in vitro and in silico methods selected during the first phase of the project. This paper describes the classification approaches studied: single-step procedures and two-step tiered testing strategies. In summary, four in vitro testing strategies were proposed as the best performing in terms of predictive capacity with respect to the European acute oral toxicity classification. In addition, a heuristic testing strategy is suggested that combines the prediction results from the neutral red uptake assay performed in 3T3 cells with information on neurotoxicity alerts identified by the primary rat brain aggregates test method. Octanol-water partition coefficients and in silico predictions of intestinal absorption and blood-brain barrier passage are also considered. This approach makes it possible to reduce the number of chemicals wrongly predicted as not classified (LD50 > 2000 mg/kg b.w.).
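The tiered logic can be pictured with a toy classifier; the cut-offs are the EU acute oral toxicity bands the project targeted, but the upstream IC50-to-LD50 prediction and the override rule are placeholders, not the project's fitted models:

```python
# Step 1 classifies from a cytotoxicity-based LD50 estimate (mg/kg b.w.);
# step 2 shifts the call when a neurotoxicity alert is raised.
EU_BANDS = [(25, "very toxic"), (200, "toxic"), (2000, "harmful")]

def classify(pred_ld50, neurotox_alert):
    for upper, label in EU_BANDS:
        if pred_ld50 <= upper:
            return label
    # Step 2: an alert from the rat brain aggregate method keeps a borderline
    # chemical from being wrongly left unclassified.
    return "harmful" if neurotox_alert else "not classified (>2000 mg/kg)"

print(classify(150, False))  # 'toxic'
print(classify(5000, True))  # alert overrides 'not classified'
```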
Abstract:
Simian rotavirus SA-11, experimentally seeded, was recovered from raw domestic sewage by a two-step concentration procedure using filtration through a positively charged microporous filter (Zeta Plus 60 S) followed by ultracentrifugation, effecting an 8,000-fold concentration. By this method, a mean recovery of 81% ± 7.5% of the SA-11 virus was achieved.
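The two reported figures relate through simple arithmetic, sketched below; the volumes and seeded amounts are hypothetical, and only the 8,000-fold factor and ~81% recovery come from the abstract:

```python
initial_volume_ml = 8_000.0   # raw sewage processed (hypothetical)
final_volume_ml = 1.0         # concentrate after ultracentrifugation
fold = initial_volume_ml / final_volume_ml       # 8,000-fold concentration

seeded_total = 1.0e6          # infectious units seeded (hypothetical)
recovered_total = 8.1e5       # units found in the concentrate
recovery_pct = 100.0 * recovered_total / seeded_total
print(f"{fold:.0f}-fold, {recovery_pct:.0f}% recovery")  # 8000-fold, 81%
```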
Abstract:
We show here a simplified reverse transcription-polymerase chain reaction (RT-PCR) for identification of dengue type 2 virus. Three dengue type 2 virus strains isolated from Brazilian patients, and yellow fever vaccine 17DD as a negative control, were used in this study. C6/36 cells were infected with the virus, and tissue culture fluids were collected after a 7-day infection period. The RT-PCR, a combination of RT and PCR performed after a single addition of reagents in a single reaction vessel, was carried out following digestion of the virus with 1% Nonidet P-40. The 50 µl reaction mixture included 50 pmol of a dengue type 2-specific primer pair amplifying a 210 base pair sequence of the envelope protein gene, 0.1 mM each of the four deoxynucleoside triphosphates, 7.5 U of reverse transcriptase, and 1 U of thermostable Taq DNA polymerase. The reaction mixture was incubated for 15 min at 37 °C for RT, followed by a variable number of cycles of two-step PCR amplification (92 °C for 60 s, 53 °C for 60 s) with slow temperature increments. The PCR products were subjected to 1.7% agarose gel electrophoresis and visualized under UV light after gel incubation in ethidium bromide solution. DNA bands were observed after 25 and 30 cycles of PCR. Virus amounts as low as 10^2.8 TCID50/ml were detected by RT-PCR. Specific DNA amplification was observed with all three dengue type 2 strains. This assay has advantages over other RT-PCRs: it avoids laborious extraction of virus RNA; the combination of RT and PCR reduces assay time, simplifies performance and reduces the risk of contamination; and the two-step PCR cycle produces clear DNA amplification, saves assay time and simplifies the technique.
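The temperature program and detection limit reported above, restated as a small script; the time estimate ignores ramping, which the protocol deliberately slows:

```python
RT_STEP = (37, 15 * 60)                # 37 degC for 15 min (reverse transcription)
TWO_STEP_CYCLE = [(92, 60), (53, 60)]  # degC, seconds: denature / anneal-extend
CYCLES = 30                            # bands were visible after 25 and 30 cycles

detection_limit = 10 ** 2.8            # ~631 TCID50/ml
total_min = RT_STEP[1] / 60 + CYCLES * sum(s for _, s in TWO_STEP_CYCLE) / 60
print(f"{detection_limit:.0f} TCID50/ml; ~{total_min:.0f} min excluding ramps")
```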
Abstract:
In this study we analyze multinationality (domestic-based firms vs. multinationals) and foreignness (foreign vs. domestic firms) effects in the returns of R&D to productivity. We follow a two-step strategy. In the first step, we consistently estimate firms' productivity by GMM and numerically compute the sample distribution of the R&D returns. In the second step, we use stochastic dominance techniques to make inferences on the multinationality and foreignness effects. Results for a panel of UK manufacturing firms suggest that multinationality and foreignness effects operate in opposite ways: whilst the multinationality effect enhances R&D returns, the foreignness effect diminishes them.
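The dominance comparison in the second step can be illustrated with empirical CDFs; the samples below are simulated stand-ins for the GMM-based return estimates, and the location shift makes dominance hold by construction:

```python
import numpy as np

def fosd(a, b, grid_size=200):
    """True if sample a first-order dominates sample b: F_a <= F_b on a grid."""
    grid = np.linspace(min(a.min(), b.min()), max(a.max(), b.max()), grid_size)
    F_a = np.searchsorted(np.sort(a), grid, side="right") / len(a)
    F_b = np.searchsorted(np.sort(b), grid, side="right") / len(b)
    return bool(np.all(F_a <= F_b + 1e-12))

rng = np.random.default_rng(1)
domestic = rng.normal(0.10, 0.05, 1_000)   # R&D returns, domestic firms
multinationals = domestic + 0.05           # shifted up: higher returns
print(fosd(multinationals, domestic))      # True
```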
Abstract:
Given a sample from a fully specified parametric model, let Zn be a given finite-dimensional statistic - for example, an initial estimator or a set of sample moments. We propose to (re-)estimate the parameters of the model by maximizing the likelihood of Zn. We call this the maximum indirect likelihood (MIL) estimator. We also propose a computationally tractable Bayesian version of the estimator, which we refer to as a Bayesian indirect likelihood (BIL) estimator. In most cases, the density of the statistic will be of unknown form, and we develop simulated versions of the MIL and BIL estimators. We show that the indirect likelihood estimators are consistent and asymptotically normally distributed, with the same asymptotic variance as the corresponding efficient two-step GMM estimator based on the same statistic. However, our likelihood-based estimators, by taking into account the full finite-sample distribution of the statistic, are higher-order efficient relative to GMM-type estimators. Furthermore, in many cases they enjoy a bias-reduction property similar to that of the indirect inference estimator. Monte Carlo results for a number of applications, including dynamic and nonlinear panel data models, a structural auction model and two DSGE models, show that the proposed estimators indeed have attractive finite-sample properties.
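A toy version of the simulated MIL idea, with a deliberately simple model and statistic (the paper's applications are far richer): the unknown density of Zn is approximated by a kernel density estimate over statistics computed from data simulated at each candidate parameter value.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)
n, S, theta_true = 200, 500, 1.5
x = rng.normal(theta_true, 1.0, n)
z_obs = np.array([x.mean(), x.var()])      # the observed statistic Z_n

def sim_log_lik(theta):
    sims = rng.normal(theta, 1.0, (S, n))  # S synthetic datasets at theta
    stats = np.column_stack([sims.mean(axis=1), sims.var(axis=1)])
    return gaussian_kde(stats.T).logpdf(z_obs)[0]

grid = np.linspace(0.5, 2.5, 41)
theta_mil = grid[np.argmax([sim_log_lik(t) for t in grid])]
print(theta_mil)  # close to theta_true = 1.5
```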
Abstract:
This study describes spermatogenesis in a majid crab (Maja brachydactyla) using electron microscopy and reports the origin of the different organelles present in the spermatozoa. Spermatogenesis in M. brachydactyla follows the general pattern observed in other brachyuran species, but with several peculiarities. Annulate lamellae are observed during the diplotene stage of the first spermatocytes and in early and mid-spermatids. Unlike previous observations, a Golgi complex has been found in mid-spermatids, where it is involved in the development of the acrosome. The Golgi complex produces two types of vesicles: light vesicles and electron-dense vesicles. The light vesicles merge in the cytoplasm, giving rise to the proacrosomal vesicle. The electron-dense vesicles are implicated in the formation of an electron-dense granule, which later merges with the proacrosomal vesicle. In the late spermatid, the endoplasmic reticulum and the Golgi complex degenerate and form the structures–organelles complex found in the spermatozoa. At the end of spermatogenesis, the material in the proacrosomal vesicle aggregates in a two-step process, forming the characteristic concentric three-layered structure of the spermatozoon acrosome. The newly formed spermatozoa from the testis show the typical brachyuran morphology.
Abstract:
Zero correlation between measurement error and model error has been assumed in existing panel data models dealing specifically with measurement error. We extend this literature and propose a simple model where one regressor is mismeasured, allowing the measurement error to correlate with the model error. Zero correlation between measurement error and model error is a special case of our model, in which the correlated measurement error equals zero. We ask two research questions. First, can the correlated measurement error be identified in the context of panel data? Second, do classical instrumental variables in panel data need to be adjusted when the correlation between measurement error and model error cannot be ignored? Under some regularity conditions the answer is yes to both questions. We then propose a two-step estimation procedure corresponding to the two questions: the first step estimates the correlated measurement error from a reverse regression, and the second step estimates the usual coefficients of interest using adjusted instruments.
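The premise is easy to verify by simulation; the sketch below illustrates why adjustment is needed and is not the authors' estimator. With a second, classical-style measurement as the instrument, both OLS and the unadjusted IV miss the true coefficient once the measurement errors correlate with the model error:

```python
import numpy as np

rng = np.random.default_rng(3)
n, beta = 200_000, 1.0
x_star = rng.normal(size=n)
e = rng.normal(size=n)
v = 0.5 * e + rng.normal(size=n)   # error in the regressor, correlated with e
w = 0.5 * e + rng.normal(size=n)   # error in a repeated measurement
x, x2 = x_star + v, x_star + w     # x2: the textbook 'classical' instrument
y = beta * x_star + e

b_ols = np.cov(x, y)[0, 1] / np.var(x)
b_iv = np.cov(x2, y)[0, 1] / np.cov(x2, x)[0, 1]
print(round(b_ols, 2), round(b_iv, 2))  # ~0.67 and ~1.2, not beta = 1.0
```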
Abstract:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending corresponding approaches beyond the local scale still represents a major challenge, yet is critically important for the development of reliable groundwater flow and contaminant transport models. To address this issue, I have developed a hydrogeophysical data integration technique based on a two-step Bayesian sequential simulation procedure that is specifically targeted towards larger-scale problems. The objective is to simulate the distribution of a target hydraulic parameter based on spatially exhaustive, but poorly resolved, measurements of a pertinent geophysical parameter and locally highly resolved, but spatially sparse, measurements of the considered geophysical and hydraulic parameters. To this end, my algorithm links the low- and high-resolution geophysical data via a downscaling procedure before relating the downscaled regional-scale geophysical data to the high-resolution hydraulic parameter field. I first illustrate the application of this novel data integration approach to a realistic synthetic database consisting of collocated high-resolution borehole measurements of the hydraulic and electrical conductivities and spatially exhaustive, low-resolution electrical conductivity estimates obtained from electrical resistivity tomography (ERT). The overall viability of this method is tested and verified by performing and comparing flow and transport simulations through the original and simulated hydraulic conductivity fields. The corresponding results indicate that the proposed data integration procedure does indeed allow for obtaining faithful estimates of the larger-scale hydraulic conductivity structure and reliable predictions of the transport characteristics over medium- to regional-scale distances. The approach is then applied to a corresponding field scenario consisting of collocated high-resolution measurements of the electrical conductivity, as measured using a cone penetrometer testing (CPT) system, and the hydraulic conductivity, as estimated from electromagnetic flowmeter and slug test measurements, in combination with spatially exhaustive low-resolution electrical conductivity estimates obtained from surface-based electrical resistivity tomography (ERT). The corresponding results indicate that the newly developed data integration approach is indeed capable of adequately capturing both the small-scale heterogeneity as well as the larger-scale trend of the prevailing hydraulic conductivity field.
The results also indicate that this novel data integration approach is remarkably flexible and robust and hence can be expected to be applicable to a wide range of geophysical and hydrological data at all scale ranges. In the second part of my thesis, I evaluate in detail the viability of sequential geostatistical resampling as a proposal mechanism for Markov Chain Monte Carlo (MCMC) methods applied to high-dimensional geophysical and hydrological inverse problems in order to allow for a more accurate and realistic quantification of the uncertainty associated with the thus inferred models. Focusing on a series of pertinent crosshole georadar tomographic examples, I investigated two classes of geostatistical resampling strategies with regard to their ability to efficiently and accurately generate independent realizations from the Bayesian posterior distribution. The corresponding results indicate that, despite its popularity, sequential resampling is rather inefficient at drawing independent posterior samples for realistic synthetic case studies, notably for the practically common and important scenario of pronounced spatial correlation between model parameters. To address this issue, I have developed a new gradual-deformation-based perturbation approach, which is flexible with regard to the number of model parameters as well as the perturbation strength. Compared to sequential resampling, this newly proposed approach was proven to be highly effective in decreasing the number of iterations required for drawing independent samples from the Bayesian posterior distribution.
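The gradual-deformation proposal can be written down compactly in its generic form; this is a textbook sketch under a standard-Gaussian prior, not the thesis implementation, which handles geostatistical priors and tuning:

```python
# For standard-Gaussian fields, m' = m*cos(t) + z*sin(t) with z ~ N(0, I)
# stays in the prior, so the Metropolis acceptance reduces to the likelihood
# ratio; t sets the perturbation strength.
import numpy as np

rng = np.random.default_rng(4)

def propose(m, t):
    z = rng.standard_normal(m.shape)        # independent draw from the prior
    return m * np.cos(t) + z * np.sin(t)    # again standard Gaussian

def mh_step(m, log_lik, t):
    m_new = propose(m, t)
    # Prior and proposal terms cancel for this move; accept on likelihood only.
    if np.log(rng.uniform()) < log_lik(m_new) - log_lik(m):
        return m_new
    return m

log_lik = lambda m: -np.sum((m - 0.5) ** 2) / (2 * 0.5**2)  # toy data term
m = rng.standard_normal((20, 20))           # e.g. a gridded slowness field
for _ in range(2_000):
    m = mh_step(m, log_lik, t=0.15)
print(m.mean())  # drifts toward the posterior mean (0.4 in this toy setup)
```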