892 results for estimating conditional probabilities
Abstract:
Phillips curves are often estimated without due attention being paid to the underlying time series properties of the data. In particular, the consequences of inflation having discrete breaks in mean have not been studied adequately. We show by means of simulations and a detailed empirical example based on United States data that not taking account of breaks may lead to biased, and therefore spurious, estimates of Phillips curves. We suggest a method to account for the breaks in mean inflation and obtain meaningful and unbiased estimates of the short- and long-run Phillips curves in the United States.
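As an illustration of the general idea only (not the authors' exact procedure), the sketch below demeans inflation and unemployment within assumed break regimes before estimating the short-run slope; all data, break dates, and coefficients are hypothetical.

```python
# Sketch of the regime-demeaning idea: remove the regime-specific mean from
# inflation before estimating the short-run Phillips-curve slope, so that
# discrete breaks in mean inflation do not bias the estimate.  All data,
# break dates, and coefficients below are hypothetical illustrations.
import numpy as np

rng = np.random.default_rng(0)
breaks = [0, 80, 140, 200]                 # assumed (known) regime boundaries
mean_unemp = [4.0, 6.0, 5.0]               # regime means of unemployment
mean_infl = [2.0, 6.0, 3.0]                # regime means of inflation (the breaks)
true_slope = -0.3                          # within-regime (short-run) slope

unemployment = np.empty(breaks[-1])
inflation = np.empty(breaks[-1])
for (s, e), mu_u, mu_p in zip(zip(breaks[:-1], breaks[1:]), mean_unemp, mean_infl):
    u = mu_u + rng.normal(0, 1, e - s)
    unemployment[s:e] = u
    inflation[s:e] = mu_p + true_slope * (u - mu_u) + rng.normal(0, 0.5, e - s)

def ols_slope(y, x):
    """OLS slope of y on x with an intercept."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

# Naive estimate: the breaks in mean inflation, correlated with regime-level
# unemployment, contaminate the slope.
naive = ols_slope(inflation, unemployment)

# Break-adjusted estimate: demean both series within each regime first.
infl_dm, unemp_dm = inflation.copy(), unemployment.copy()
for s, e in zip(breaks[:-1], breaks[1:]):
    infl_dm[s:e] -= infl_dm[s:e].mean()
    unemp_dm[s:e] -= unemp_dm[s:e].mean()
adjusted = ols_slope(infl_dm, unemp_dm)

print(f"slope ignoring breaks: {naive:+.3f}")
print(f"slope within regimes:  {adjusted:+.3f}  (true short-run slope {true_slope:+.3f})")
```

In this simulated example the naive regression mixes the cross-regime shifts into the slope, while demeaning within regimes recovers the short-run relationship, which is the kind of bias the abstract describes.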
Abstract:
This paper uses data on the world's copper mining industry to measure the impact of adopting the ISO 14001 environmental standard on efficiency. Anecdotal and case study literature suggests that firms are motivated to adopt this standard so as to achieve greater efficiency through changes in operating procedures and processes. Using plant-level panel data from 1992 to 2007 on most of the world's industrial copper mines, the study applies stochastic frontier methods to investigate the effects of ISO adoption. Across the variety of models used in this study, adoption is found either to improve efficiency or to have no impact on it; no evidence is found that ISO adoption decreases efficiency.
Abstract:
OBJECTIVE To establish the role of the transcription factor Pax4 in pancreatic islet expansion and survival in response to physiological stress and its impact on glucose metabolism, we generated transgenic mice conditionally and selectively overexpressing Pax4 or a diabetes-linked mutant variant (Pax4R129W) in β-cells. RESEARCH DESIGN AND METHODS Glucose homeostasis and β-cell death and proliferation were assessed in Pax4- or Pax4R129W-overexpressing transgenic animals challenged with or without streptozotocin. Isolated transgenic islets were also exposed to cytokines, and apoptosis was evaluated by DNA fragmentation or cytochrome c release. The expression profiles of proliferation and apoptotic genes and β-cell markers were studied by immunohistochemistry and quantitative RT-PCR. RESULTS Pax4 but not Pax4R129W protected animals against streptozotocin-induced hyperglycemia and isolated islets from cytokine-mediated β-cell apoptosis. Cytochrome c release was abrogated in Pax4 islets treated with cytokines. Interleukin-1β transcript levels were suppressed in Pax4 islets, whereas they were increased along with NOS2 in Pax4R129W islets. Bcl-2, Cdk4, and c-myc expression levels were increased in Pax4 islets while MafA, insulin, and GLUT2 transcript levels were suppressed in both animal models. Long-term Pax4 expression promoted proliferation of a Pdx1-positive cell subpopulation while impeding insulin secretion. Suppression of Pax4 rescued this defect with a concomitant increase in pancreatic insulin content. CONCLUSIONS Pax4 protects adult islets from stress-induced apoptosis by suppressing selective nuclear factor-κB target genes while increasing Bcl-2 levels. Furthermore, it promotes dedifferentiation and proliferation of β-cells through MafA repression, with a concomitant increase in Cdk4 and c-myc expression.
Abstract:
Climate change has been taking place at unprecedented rates over the past decades. These rapid alterations caused by human activities are leading to global warming of the planet. Warmer temperatures will have important effects on vegetation, and especially on tropical forests. Insects will also be affected by climate change. This study tested the hypothesis that higher temperatures lead to higher insect pressure on vegetation. Visual estimations of leaf damage were recorded and used to assess the extent of herbivory in nine 0.1 ha plots along an altitudinal gradient, and therefore a temperature gradient. These estimations were made at both the community level and the species level, on two target species. Leaf toughness tests were performed on samples of the target species from each plot. Results showed strong evidence of increasing insect damage with increasing temperature, and no significant effect of leaf toughness.
Abstract:
Breast cancer is one of the most common cancers, affecting one in eight women during their lives. Survival rates have increased steadily thanks to early diagnosis with mammography screening and more efficient treatment strategies. Post-operative radiation therapy is a standard of care in the management of breast cancer and has been shown to reduce efficiently both the local recurrence rate and breast cancer mortality. Radiation therapy is, however, associated with some late effects for long-term survivors. Radiation-induced secondary cancer is a relatively rare but severe late effect of radiation therapy. Currently, radiotherapy plans are essentially optimized to maximize tumor control and minimize late deterministic effects (tissue reactions) that are mainly associated with high doses (> 1 Gy). With improved cure rates and new radiation therapy technologies, it is also important to evaluate and minimize secondary cancer risks for different treatment techniques. This is a particularly challenging task due to the large uncertainties in the dose-response relationship.

In contrast with late deterministic effects, secondary cancers may be associated with much lower doses, and therefore out-of-field doses (also called peripheral doses), which are typically below 1 Gy, need to be determined accurately. Out-of-field doses result from patient scatter and head scatter from the treatment unit. These doses are particularly challenging to compute, and we characterized them by Monte Carlo (MC) calculation. A detailed MC model of the Siemens Primus linear accelerator has been thoroughly validated with measurements. We investigated the accuracy of such a model for retrospective dosimetry in epidemiological studies on secondary cancers. Considering that patients in such large studies could be treated on a variety of machines, we assessed the uncertainty in reconstructed peripheral dose due to the variability of peripheral dose among various linac geometries. For large open fields (> 10×10 cm²), the uncertainty would be less than 50%, but for small fields and wedged fields the uncertainty in reconstructed dose could rise to a factor of 10. It was concluded that such a model could be used only for conventional treatments using large open fields.

The MC model of the Siemens Primus linac was then used to compare out-of-field doses for different treatment techniques in a female whole-body CT-based phantom. Current techniques such as conformal wedge-based radiotherapy and hybrid IMRT were investigated and compared to older two-dimensional radiotherapy techniques. MC doses were also compared to those of a commercial Treatment Planning System (TPS). While the TPS is routinely used to determine the dose to the contralateral breast and the ipsilateral lung, which are mostly out of the treatment fields, we have shown that these doses may be highly inaccurate depending on the treatment technique investigated. MC shows that hybrid IMRT is dosimetrically similar to three-dimensional wedge-based radiotherapy within the field, but offers substantially reduced doses to out-of-field healthy organs.

Finally, many different approaches to risk estimation extracted from the literature were applied to the calculated MC dose distributions. Absolute risks varied substantially, as did the ratio of risk between two treatment techniques, reflecting the large uncertainties involved with current risk models.
Despite all these uncertainties, the hybrid IMRT technique investigated resulted in systematically lower cancer risks than any of the other treatment techniques. More epidemiological studies with accurate dosimetry are required in the future to construct robust risk models. In the meantime, any treatment strategy that reduces out-of-field doses to healthy organs should be investigated. Electron radiotherapy might offer interesting possibilities in this regard.

Résumé: Breast cancer affects one in eight women during her lifetime. Thanks to early detection and increasingly effective therapies, cure rates have risen over time. Post-operative radiotherapy plays an important role in the treatment of breast cancer by reducing the recurrence rate and mortality. Unfortunately, radiotherapy can also induce late toxicities in cured patients. In particular, radiation-induced secondary cancers are a rare but severe complication of radiotherapy. In clinical routine, radiotherapy plans are essentially optimized to achieve the highest possible local control while minimizing late tissue reactions, which are mainly associated with high doses (> 1 Gy). However, with the introduction of various new techniques and with increasing survival rates, it becomes imperative to evaluate and minimize secondary cancer risks for different treatment techniques. Such a risk assessment is an arduous task given the many uncertainties in the dose-risk relationship.

In contrast with tissue effects, secondary cancers can also be induced by low doses in organs located outside the irradiation fields. These organs receive peripheral doses, typically below 1 Gy, which result from patient scatter and accelerator scatter. These doses are difficult to calculate precisely, but Monte Carlo (MC) algorithms allow them to be estimated with good accuracy. A detailed MC model of the Siemens Primus accelerator was built and validated against measurements. The accuracy of this model was also assessed for dose reconstruction in epidemiology. Considering that patients included in large cohorts are treated on a variety of machines, the uncertainty in peripheral dose reconstruction was studied as a function of the variability of the peripheral dose across different accelerator types. For large fields (> 10×10 cm²), the uncertainty is below 50%, but for small fields and wedged fields the dose uncertainty can rise to a factor of 10. In conclusion, such a model can only be used for conventional treatments with large fields.

The MC model of the Primus accelerator was then used to determine the peripheral dose for different techniques in a whole-body phantom based on CT slices of a female patient. Current techniques using wedged fields, as well as hybrid IMRT, were studied and compared with older techniques. The doses calculated by MC were compared with those obtained from a commercial treatment planning system (TPS). While the TPS is routinely used to determine the dose to the contralateral breast and the ipsilateral lung, which lie mostly outside the beams, we have shown that these doses can be more or less accurate depending on the technique studied.

The MC calculations show that the IMRT technique is dosimetrically equivalent to the wedged-field technique within the treatment fields, but offers a substantial reduction of the dose to peripheral organs. Finally, different risk models were studied on the basis of the dose distributions calculated by MC. Absolute risks, and the ratio of risks between two treatment techniques, vary considerably, reflecting the large uncertainties associated with current risk models. Despite these uncertainties, the IMRT technique was shown to offer a systematic risk reduction compared with the other techniques. Pending additional epidemiological data on the dose-risk relationship, any technique that reduces peripheral doses to healthy organs deserves investigation. Electron radiotherapy offers interesting possibilities in this respect.
Abstract:
To compare, in the Swiss population, the results of several scores estimating the risk of developing type 2 diabetes. This was a single-center, cross-sectional study conducted between 2003 and 2006 in Lausanne, Switzerland. Overall, 3,251 women and 2,937 men, aged 35-75 years, were assessed, of whom 5,760 (93%) were free from diabetes and included in the current study. The risk of developing type 2 diabetes was assessed using seven different risk scores, including clinical data with or without biological data. Participants were considered eligible for primary prevention according to the thresholds provided for each score. The results were then extrapolated to the Swiss population of the same sex and age. The risk of developing type 2 diabetes increased with age in all scores. The prevalence of participants at high risk ranged between 1.6 and 24.9% in men and between 1.1 and 15.7% in women. Extrapolated to the Swiss population of similar age, the overall number of participants at risk, and thus susceptible to intervention, ranged between 46,708 and 636,841. In addition, scores that included the same clinical variables led to significantly different prevalences of participants at risk (4.2% [95% CI 3.4-5.0] vs. 12.8% [11.5-14.1] in men and 2.9% [2.4-3.6] vs. 6.0% [5.2-6.9] in women). CONCLUSIONS: The prevalence of participants at risk of developing type 2 diabetes varies considerably according to the scoring system used. To adequately prevent type 2 diabetes, risk-scoring systems must be validated for each population considered.
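The extrapolation step can be illustrated with a minimal sketch: apply a score's published threshold to the sample, compute the prevalence of high-risk participants with a confidence interval, and scale it to the national population of the same age range. The score, threshold, and population count below are placeholders, not values from the study.

```python
# Sketch of prevalence estimation and extrapolation to a national population.
# The risk score, threshold, and head count are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(1)
n = 5760
age = rng.uniform(35, 75, n)
bmi = rng.normal(26, 4, n)

# Hypothetical clinical risk score and eligibility threshold.
score = 0.1 * age + 0.5 * bmi + rng.normal(0, 2, n)
threshold = 25.0
high_risk = score >= threshold

prevalence = high_risk.mean()
se = np.sqrt(prevalence * (1 - prevalence) / n)
ci_low, ci_high = prevalence - 1.96 * se, prevalence + 1.96 * se

swiss_population_35_75 = 4_000_000          # placeholder head count
at_risk_national = prevalence * swiss_population_35_75

print(f"prevalence at high risk: {prevalence:.1%} (95% CI {ci_low:.1%}-{ci_high:.1%})")
print(f"extrapolated number at risk: {at_risk_national:,.0f}")
```

Because the extrapolated count is simply prevalence multiplied by population size, any disagreement between scoring systems at the prevalence stage propagates directly into the national estimates, which is the sensitivity the abstract reports.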
Abstract:
This article focuses on business risk management in the insurance industry. A methodology is proposed for estimating the profit loss caused by each customer in the portfolio due to policy cancellation. Using data from a European insurance company, customer behaviour over time is analyzed in order to estimate the probability of policy cancellation and the resulting potential profit loss. Customers may have up to two different lines of business contracts: motor insurance and other diverse insurance (such as home contents, life, or accident insurance). Implications for understanding customer cancellation behaviour as the core of business risk management are outlined.
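A minimal sketch of the underlying idea, assuming a logistic model for cancellation and invented covariates and profit figures (this is not the paper's methodology): the expected profit loss for each customer is the estimated cancellation probability multiplied by the profit that customer would otherwise generate.

```python
# Sketch: estimate each customer's cancellation probability from behavioural
# covariates, then weight the customer's profit by that probability to get an
# expected profit loss.  Features, coefficients, and profits are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 1_000

# Hypothetical customer features: tenure (years), claims last year, and
# whether the customer holds both motor and non-motor contracts.
tenure = rng.exponential(5, n)
claims = rng.poisson(0.4, n)
multi_line = rng.integers(0, 2, n)
X = np.column_stack([tenure, claims, multi_line])

# Simulated cancellation outcomes (longer tenure and multiple lines reduce
# cancellation; recent claims increase it).
logit = -0.5 - 0.15 * tenure + 0.6 * claims - 0.8 * multi_line
cancelled = rng.random(n) < 1 / (1 + np.exp(-logit))

model = LogisticRegression().fit(X, cancelled)
p_cancel = model.predict_proba(X)[:, 1]

annual_profit = rng.gamma(2.0, 150.0, n)    # hypothetical per-customer profit
expected_loss = p_cancel * annual_profit    # expected profit loss per customer

print(f"portfolio expected profit loss: {expected_loss.sum():,.0f}")
print(f"top decile of customers accounts for "
      f"{np.sort(expected_loss)[-n // 10:].sum() / expected_loss.sum():.0%} of it")
```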
Estimating the Prevalence of Problem Opiate and Problem Cocaine Use in Northern Ireland (April 2006)
Abstract:
AIMS/HYPOTHESIS: betaTC-tet (H2(k)) is a conditional insulinoma cell line derived from transgenic mice expressing a tetracycline-regulated oncogene. Transgenic expression of several proteins implicated in the apoptotic pathways increases the resistance of betaTC-tet cells in vitro. We tested in vivo the sensitivity of the cells to rejection and the protective effect of genetic alterations in NOD mice. METHODS: betaTC-tet cells and genetically engineered lines expressing Bcl-2 (CDM3D), a dominant-negative mutant of MyD88, or SOCS-1 were transplanted into diabetic female NOD mice or into male NOD mice with diabetes induced by high-dose streptozotocin. Survival of functional cell grafts in NOD-scid mice was also analyzed after transfer of splenocytes from diabetic NOD mice. Autoreactive T-cell hybridomas and splenocytes from diabetic NOD mice were stimulated with betaTC-tet cells. RESULTS: betaTC-tet cells and the genetically engineered cell lines were all similarly rejected in diabetic NOD mice and in NOD-scid mice after splenocyte transfer. In 3- to 6-week-old male NOD mice treated with high-dose streptozotocin, the cells survived temporarily, in contrast with C57BL/6 mice treated with high-dose streptozotocin (indefinite survival) and untreated 3- to 6-week-old male NOD mice (rejection). The protective effect of high-dose streptozotocin was lost in older male NOD mice. betaTC-tet cells did not stimulate autoreactive T-cell hybridomas, but induced IL-2 secretion by splenocytes from diabetic NOD mice. CONCLUSION/INTERPRETATION: The autoimmune process seems to play an important role in the destruction of betaTC-tet cells in NOD mice. Genetic manipulations aimed at increasing the resistance of beta cells were ineffective. Similar approaches should be tested in vivo as well as in vitro. High-dose streptozotocin influences immune rejection and should be used with caution.
Abstract:
In fear conditioning, an animal learns to associate an unconditioned stimulus (US), such as a shock, and a conditioned stimulus (CS), such as a tone, so that the presentation of the CS alone can trigger conditioned responses. Recent research on the lateral amygdala (LA) has shown that following cued fear conditioning, only a subset of highly excitable neurons is recruited into the memory trace. Their selective deletion after fear conditioning results in a selective erasure of the fearful memory. I hypothesize that the recruitment of highly excitable neurons depends on responsiveness to stimuli, intrinsic excitability, and local connectivity. In addition, I hypothesize that neurons recruited for an initial memory also participate in subsequent memories, and that changes in neuronal excitability affect secondary fear learning. To address these hypotheses, I will show that A) a rat can learn to associate two successive short-term fearful memories; B) neuronal populations in the LA are competitively recruited into the memory traces depending on individual neuronal advantages, as well as advantages granted by the local network. By performing two successive cued fear conditioning experiments, I found that rats were able to learn and extinguish the two successive short-term memories when tested 1 hour after learning for each memory. These rats were equipped with a system of stable extracellular recordings that I developed, which made it possible to monitor neuronal activity during fear learning. 233 individual putative pyramidal neurons could modulate their firing rate in response to the conditioned tone (conditioned neurons) and/or non-conditioned tones (generalizing neurons). Of these recorded putative pyramidal neurons, 86 (37%) were conditioned to one or both tones. More precisely, one population of neurons encoded a shared memory while another group of neurons likely encoded the memories' new features. Notably, in spite of a successful behavioral extinction, the firing rate of those conditioned neurons in response to the conditioned tone remained unchanged throughout memory testing. Furthermore, by analyzing the pre-conditioning characteristics of the conditioned neurons, I determined that it was possible to predict neuronal recruitment based on three factors: 1) initial sensitivity to auditory inputs, with tone-sensitive neurons being more easily recruited than tone-insensitive neurons; 2) baseline excitability levels, with more highly excitable neurons being more likely to become conditioned; and 3) the number of afferent connections received from local neurons, with neurons destined to become conditioned receiving more connections than non-conditioned neurons.

In fear conditioning, an animal learns to associate an unconditioned stimulus (US), such as an electric shock, with a conditioned stimulus (CS), such as a tone, so that presentation of the CS alone is enough to trigger conditioned responses. Recent research on the lateral amygdala (LA) has shown that, following fear conditioning, only a subset of more excitable neurons is recruited to form the memory trace. To learn to associate two tones with the same US, I hypothesize that neurons compete to be selected for recruitment into the memory trace. This recruitment would depend both on facilitated activation of individual neurons and on facilitated activation of local neuronal networks. In addition, I hypothesize that activation of these LA networks is, in itself, sufficient to induce a fearful memory. To address these hypotheses, I will show that A) through a short-term memory process, a rat can learn to associate two fearful memories learned in succession; B) neuronal populations in the LA are competitively recruited into the memory traces according to individual neuronal advantages, as well as advantages conferred by the local network. By performing two successive fear conditioning experiments, rats were able to learn, and then extinguish, the two fearful memories. The effectiveness of fear conditioning was measured 1 hour after learning for each memory. These rats were equipped with a system of stable extracellular recordings that I developed, which made it possible to follow neuronal activity during fear learning. 233 individual pyramidal neurons could modulate their firing rate in response to the conditioned tone (conditioned neurons) and/or the non-conditioned tone (generalizing neurons). Of the 233 recorded putative pyramidal neurons, 86 (37%) were conditioned to one or both tones. More precisely, one population of neurons jointly encoded a shared memory, while a different group of neurons encoded the new features of the new memories. Notably, despite successful behavioral extinction, the firing rate of these conditioned neurons in response to the conditioned tone remained unchanged throughout memory testing. Furthermore, by analyzing the pre-conditioning characteristics of the conditioned neurons, I determined that it was possible to predict neuronal recruitment from three factors: 1) initial sensitivity to auditory inputs, with tone-sensitive neurons being more easily recruited than neurons unresponsive to auditory stimuli; 2) neuronal excitability levels, with more excitable neurons being more likely to become conditioned; and 3) the number of connections received, since conditioned neurons receive more connections than non-conditioned neurons. Finally, we found that the US in fear conditioning could be satisfactorily replaced by bilateral injections of bicuculline, a γ-aminobutyric acid receptor antagonist.
Abstract:
A parts-based model is a parametrization of an object class using a collection of landmarks following the object structure. The matching of parts-based models is one of the problems where pairwise Conditional Random Fields have been successfully applied. The main reason for their effectiveness is tractable inference and learning due to the simplicity of the graphs involved, usually trees. However, these models do not consider possible patterns of statistics among sets of landmarks, and thus they suffer from using overly myopic information. To overcome this limitation, we propose a novel structure based on hierarchical Conditional Random Fields, which we explain in the first part of this thesis. We build a hierarchy of combinations of landmarks, where matching is performed taking into account the whole hierarchy. To preserve tractable inference we effectively sample the label set. We test our method on facial feature selection and human pose estimation on two challenging datasets: Buffy and MultiPIE. In the second part of this thesis, we present a novel approach to multiple kernel combination that relies on stacked classification. This method can be used to evaluate the landmarks of the parts-based model approach. Our method is based on combining the responses of a set of independent classifiers, one for each individual kernel. Unlike earlier approaches that linearly combine kernel responses, our approach uses them as inputs to another set of classifiers. We show that we outperform state-of-the-art methods on most of the standard benchmark datasets.
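A minimal sketch of stacked kernel combination under stated assumptions (synthetic data, SVM base classifiers, and a logistic-regression stacker; the thesis's exact classifiers and kernels may differ): one classifier is trained per kernel, and their responses, rather than a linear combination of the kernels, are fed to a second-level classifier.

```python
# Sketch of stacked kernel combination: train one classifier per kernel, then
# feed their (cross-validated) responses to a second-level classifier instead
# of combining the kernels linearly.  Data and classifier choices are
# illustrative assumptions, not the authors' exact setup.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_predict, train_test_split
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# One base classifier per kernel.
base = {
    "linear": SVC(kernel="linear"),
    "rbf": SVC(kernel="rbf", gamma="scale"),
    "poly": SVC(kernel="poly", degree=3),
}

# First level: per-kernel responses.  Cross-validated responses are used on
# the training set so the second-level classifier is not fit on over-fitted
# outputs of the base classifiers.
train_resp = np.column_stack([
    cross_val_predict(clf, X_tr, y_tr, cv=5, method="decision_function")
    for clf in base.values()
])
test_resp = np.column_stack([
    clf.fit(X_tr, y_tr).decision_function(X_te) for clf in base.values()
])

# Second level: a classifier over the stacked responses replaces a linear
# kernel combination.
stacker = LogisticRegression().fit(train_resp, y_tr)
print(f"stacked accuracy: {stacker.score(test_resp, y_te):.3f}")
```

The design point the abstract makes is visible in the last step: the second-level classifier can learn a non-linear, data-dependent weighting of the per-kernel responses, which a fixed linear combination of kernels cannot.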
Abstract:
The increase in mortality risk associated with long-term exposure to particulate air pollution is one of the most important, and best-characterised, effects of air pollution on health. This report presents estimates of the size of this effect on mortality in local authority areas in the UK, building upon the attributable fractions reported as an indicator in the public health outcomes framework for England. It discusses the concepts and assumptions underlying these calculations and gives information on how such estimates can be made. The estimates are expected to be useful to health and wellbeing boards when assessing local public health priorities, as well as to others working in the field of air quality and public health. The estimates of mortality burden are based on modelled annual average concentrations of fine particulate matter (PM2.5) in each local authority area originating from human activities. Local data on the adult population and adult mortality rates are also used. Central estimates of the fraction of mortality attributable to long-term exposure to current levels of anthropogenic (human-made) particulate air pollution range from around 2.5% in some local authorities in rural areas of Scotland and Northern Ireland, through 3 to 5% in Wales, to over 8% in some London boroughs. Because of uncertainty in the increase in mortality risk associated with ambient PM2.5, the actual burdens associated with these modelled concentrations could range from approximately one-sixth to about double these figures. Thus, current levels of particulate air pollution have a considerable impact on public health. Measures to reduce levels of particulate air pollution, or to reduce the exposure of the population to such pollution, are regarded as an important public health initiative.
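The attributable-fraction calculation behind such burden estimates can be sketched as follows; the relative-risk coefficient (a value of 1.06 per 10 µg/m³ of PM2.5 is commonly used in UK estimates) and the local-authority figures below are assumptions for illustration, not values from the report.

```python
# Sketch of the attributable-fraction calculation that underlies burden
# estimates of this kind.  The relative-risk coefficient and the local
# inputs are assumptions for illustration, not figures from the report.
import math

def attributable_fraction(pm25_anthropogenic, rr_per_10=1.06):
    """Fraction of adult mortality attributable to anthropogenic PM2.5."""
    rr = math.exp(math.log(rr_per_10) * pm25_anthropogenic / 10.0)
    return (rr - 1.0) / rr

# Hypothetical local-authority inputs.
pm25 = 9.5              # modelled anthropogenic PM2.5, µg/m³ (annual mean)
adult_deaths = 1_200    # deaths among adults in one year

af = attributable_fraction(pm25)
print(f"attributable fraction: {af:.1%}")
print(f"attributable deaths:   {af * adult_deaths:.0f}")
```

Scaling the relative-risk coefficient up or down, as the report's uncertainty statement suggests, moves the attributable fraction roughly proportionally, which is why the quoted burdens span from about one-sixth to about double the central figures.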
Abstract:
Life expectancy by socio-economic status is an important measure of health inequality. This article presents proposed changes in the methods used to estimate life expectancy by social class using the ONS Longitudinal Study.