34 results for Sub-registry. Empirical Bayesian estimator. General equation. Balancing adjustment factor
at Université de Lausanne, Switzerland
Abstract:
This study presents classification criteria for two-class Cannabis seedlings. As the cultivation of drug-type cannabis is forbidden in Switzerland, law enforcement authorities regularly ask laboratories to determine the chemotype of cannabis plants from seized material in order to ascertain whether a plantation is legal or not. In this study, the classification analysis is based on data obtained from the relative proportions of three major leaf compounds measured by gas chromatography interfaced with mass spectrometry (GC-MS). The aim is to discriminate between drug-type (illegal) and fiber-type (legal) cannabis at an early stage of growth. A Bayesian procedure is proposed: a Bayes factor is computed and classification is performed on the basis of the decision maker's specifications (i.e. prior probability distributions on cannabis type and consequences of classification measured by losses). Classification rates are computed with two statistical models and the results are compared. A sensitivity analysis is then performed to assess the robustness of the classification criteria.
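To make this kind of decision rule concrete, here is a minimal sketch (not the authors' implementation; the class-conditional densities, priors and losses are all hypothetical) of a two-class Bayes decision based on a Bayes factor and a loss specification:

```python
import numpy as np
from scipy.stats import multivariate_normal

# Hypothetical class-conditional models for the relative proportions of the
# three leaf compounds (drug type vs. fiber type); in practice these densities
# would be fitted to GC-MS reference data.
drug_model = multivariate_normal(mean=[0.70, 0.20, 0.10], cov=np.diag([0.01, 0.01, 0.01]))
fiber_model = multivariate_normal(mean=[0.20, 0.30, 0.50], cov=np.diag([0.01, 0.01, 0.01]))

def classify(x, prior_drug=0.5, loss_call_drug_when_fiber=1.0, loss_call_fiber_when_drug=10.0):
    """Return the class that minimises the posterior expected loss for observation x."""
    bayes_factor = drug_model.pdf(x) / fiber_model.pdf(x)
    posterior_odds = bayes_factor * prior_drug / (1.0 - prior_drug)
    # Decide 'drug type' when the posterior odds exceed the ratio of losses.
    threshold = loss_call_drug_when_fiber / loss_call_fiber_when_drug
    return "drug type" if posterior_odds > threshold else "fiber type"

print(classify([0.65, 0.22, 0.13]))   # -> "drug type" with these made-up numbers
```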
Abstract:
The interpretation of the Wechsler Intelligence Scale for Children-Fourth Edition (WISC-IV) is based on a 4-factor model, which is only partially compatible with the mainstream Cattell-Horn-Carroll (CHC) model of intelligence measurement. The structure of cognitive batteries is frequently analyzed via exploratory factor analysis and/or confirmatory factor analysis. With classical confirmatory factor analysis, almost all cross-loadings between latent variables and measures are fixed to zero in order to allow the model to be identified. However, inappropriate zero cross-loadings can contribute to poor model fit, distorted factors, and biased factor correlations; most importantly, they do not necessarily faithfully reflect theory. To deal with these methodological and theoretical limitations, we used a new statistical approach, Bayesian structural equation modeling (BSEM), with a sample of 249 French-speaking Swiss children (aged 8-12 years). With BSEM, zero-fixed cross-loadings between latent variables and measures are replaced by approximate zeros, based on informative, small-variance priors. Results indicated that a direct hierarchical CHC-based model with 5 factors plus a general intelligence factor represented the structure of the WISC-IV better than did the 4-factor structure and the higher-order models. Because the direct hierarchical CHC model was more adequate, it was concluded that the general factor should be considered a breadth factor rather than a superordinate factor. Because it was possible for us to estimate the influence of each of the latent variables on the 15 subtest scores, BSEM improved the understanding of the structure of intelligence tests and the clinical interpretation of the subtest scores.
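The BSEM device of "approximate zeros" can be summarised as follows (illustrative notation, not taken from the article): every subtest score loads on all latent factors, but non-target loadings receive informative small-variance priors centred on zero, for example

```latex
y_i = \nu_i + \sum_{k=1}^{K} \lambda_{ik}\,\eta_k + \varepsilon_i ,
\qquad
\lambda_{ik} \sim
\begin{cases}
\text{diffuse prior} & \text{if } y_i \text{ is a target indicator of } \eta_k ,\\
\mathcal{N}(0,\ 0.01) & \text{otherwise (approximate-zero cross-loading).}
\end{cases}
```

Cross-loadings are thus allowed to deviate slightly from zero rather than being fixed exactly to zero, which is what relaxes the identification constraints of classical confirmatory factor analysis; the variance 0.01 is a commonly used illustrative value, not necessarily the one used in the study.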
Abstract:
Preface: The starting point for this work, and eventually the subject of the whole thesis, was the question of how to estimate the parameters of affine stochastic volatility jump-diffusion models. These models are very important for contingent claim pricing. Their major advantage, the availability of analytical solutions for characteristic functions, made them the models of choice for many theoretical constructions and practical applications. At the same time, estimating the parameters of stochastic volatility jump-diffusion models is not a straightforward task. The problem comes from the variance process, which is not observable. There are several estimation methodologies that deal with the estimation of latent variables. One appeared particularly interesting: it proposes an estimator that, in contrast to the other methods, requires neither discretization nor simulation of the process, namely the Continuous Empirical Characteristic Function (ECF) estimator based on the unconditional characteristic function. However, the procedure was derived only for stochastic volatility models without jumps, and it therefore became the subject of my research. This thesis consists of three parts, each written as an independent and self-contained article. At the same time, the questions answered by the second and third parts of this work arise naturally from the issues investigated and the results obtained in the first one. The first chapter is the theoretical foundation of the thesis. It proposes an estimation procedure for stochastic volatility models with jumps in both the asset price and variance processes. The estimation procedure is based on the joint unconditional characteristic function of the stochastic process. The major analytical result of this part, as well as of the whole thesis, is the closed-form expression for the joint unconditional characteristic function of stochastic volatility jump-diffusion models. The empirical part of the chapter suggests that, besides stochastic volatility, jumps in both the mean and the volatility equation are relevant for modelling returns of the S&P 500 index, which was chosen as a general representative of the stock asset class. Hence, the next question is what jump process to use to model returns of the S&P 500. The decision about the jump process in the framework of affine jump-diffusion models boils down to defining the intensity of the compound Poisson process, a constant or some function of the state variables, and to choosing the distribution of the jump size. While the jump in the variance process is usually assumed to be exponential, there are at least three distributions of the jump size currently used for the asset log-prices: normal, exponential and double exponential. The second part of this thesis shows that normal jumps in the asset log-returns should be used if we are to model the S&P 500 index with a stochastic volatility jump-diffusion model. This is a surprising result: the exponential distribution has fatter tails, and for this reason either an exponential or a double exponential jump size was expected to provide the best fit of the stochastic volatility jump-diffusion models to the data. The idea of testing the efficiency of the Continuous ECF estimator on simulated data had already appeared when the first estimation results of the first chapter were obtained: in the absence of a benchmark or any other ground for comparison, there is no way to be sure that our parameter estimates and the true parameters of the models coincide.
The conclusion of the second chapter provides one more reason to run such a test. The third part of this thesis therefore concentrates on the estimation of the parameters of stochastic volatility jump-diffusion models on the basis of asset price time series simulated from various "true" parameter sets. The goal is to show that the Continuous ECF estimator based on the joint unconditional characteristic function is capable of recovering the true parameters, and the third chapter shows that our estimator indeed has this ability. Once it is clear that the Continuous ECF estimator based on the unconditional characteristic function works, the next question arises immediately: can the computational effort be reduced without affecting the efficiency of the estimator, or can the efficiency of the estimator be improved without dramatically increasing the computational burden? The efficiency of the Continuous ECF estimator depends on the number of dimensions of the joint unconditional characteristic function used in its construction. Theoretically, the more dimensions there are, the more efficient the estimation procedure. In practice, however, this relationship is not so straightforward because of the increasing computational difficulties. The second chapter, for example, in addition to the choice of the jump process, discusses the possibility of using the marginal, i.e. one-dimensional, unconditional characteristic function in the estimation instead of the joint, bi-dimensional, unconditional characteristic function. As a result, the preference for one or the other depends on the model to be estimated; the computational effort can thus be reduced in some cases without affecting the efficiency of the estimator. Improving the estimator's efficiency by increasing its dimensionality faces more difficulties. The third chapter of this thesis, in addition to what was discussed above, compares the performance of estimators with bi- and three-dimensional unconditional characteristic functions on simulated data. It shows that the theoretical efficiency of the Continuous ECF estimator based on the three-dimensional unconditional characteristic function is not attainable in practice, at least for the moment, because of the limitations of the computing power and optimization toolboxes available to the general public. The Continuous ECF estimator based on the joint, bi-dimensional, unconditional characteristic function thus has every reason to exist and to be used for the estimation of the parameters of stochastic volatility jump-diffusion models.
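For readers unfamiliar with the approach, a continuous ECF estimator of this kind is generically defined as the minimiser of a weighted distance between the empirical and model characteristic functions (the exact weight function and the argument used in the thesis may differ):

```latex
\hat{\theta} \;=\; \arg\min_{\theta}\; \int_{\mathbb{R}^{d}}
\bigl|\hat{\phi}_{n}(r) - \phi(r;\theta)\bigr|^{2}\, w(r)\, dr ,
\qquad
\hat{\phi}_{n}(r) \;=\; \frac{1}{n}\sum_{j=1}^{n} e^{\,i\, r^{\top} x_{j}} ,
```

where $\phi(r;\theta)$ is the unconditional characteristic function of the model, available in closed form for affine stochastic volatility jump-diffusion specifications, $x_j$ are the observed blocks of returns, $w(r)$ is a weight function, and $d$ equals 1, 2 or 3 for the marginal, bi-dimensional and three-dimensional variants discussed above.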
Abstract:
In occupational exposure assessment of airborne contaminants, exposure levels can be estimated through repeated measurements of the pollutant concentration in air, through expert judgment, or through exposure models that use information on the conditions of exposure as input. In this report, we propose an empirical hierarchical Bayesian model to unify these approaches. Prior to any measurement, the hygienist conducts an assessment to generate prior distributions of exposure determinants. Monte Carlo samples from these distributions feed two level-2 models: a physical, two-compartment model and a non-parametric, neural network model trained with existing exposure data. The outputs of these two models are weighted according to the expert's assessment of their relevance to yield predictive distributions of the long-term geometric mean and geometric standard deviation of the worker's exposure profile (level-1 model). Bayesian inferences are then drawn iteratively from subsequent measurements of worker exposure. Any traditional decision strategy based on a comparison with occupational exposure limits (e.g. mean exposure or exceedance strategies) can then be applied. Data on 82 workers exposed to 18 contaminants in 14 companies were used to validate the model with cross-validation techniques. A user-friendly program running the model is available upon request.
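The weighting-and-updating logic can be sketched as follows (a toy illustration with made-up numbers and a simplified one-parameter update; the actual level-1/level-2 models described above are more elaborate): prior predictive draws from two models are mixed with expert weights and then reweighted by the likelihood of a few measured exposures on the log scale.

```python
import numpy as np

rng = np.random.default_rng(0)
n_draws = 5000

# Level-2 prior predictive draws of log(geometric mean) from the two models
# (purely hypothetical outputs standing in for the real models).
physical_draws = rng.normal(np.log(0.5), 0.6, n_draws)   # two-compartment physical model
network_draws = rng.normal(np.log(0.8), 0.4, n_draws)    # neural network model

# Expert-assigned relevance weight for the physical model.
w_physical = 0.3
use_physical = rng.random(n_draws) < w_physical
prior_log_gm = np.where(use_physical, physical_draws, network_draws)

# Level-1 update from measured exposures (mg/m3), assumed lognormal with a
# known within-worker log-scale standard deviation (an assumption of this sketch).
measurements = np.array([0.9, 1.4, 0.7])
log_y = np.log(measurements)
sigma_within = 0.5

# Weight each prior draw by the likelihood of the measurements (importance weighting).
log_lik = np.array([-0.5 * np.sum((log_y - mu) ** 2) / sigma_within ** 2 for mu in prior_log_gm])
weights = np.exp(log_lik - log_lik.max())
weights /= weights.sum()

posterior_gm = np.exp(np.average(prior_log_gm, weights=weights))
print(f"Posterior estimate of the long-term geometric mean: {posterior_gm:.2f} mg/m3")
```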
Abstract:
Extensive gene flow between wheat (Triticum sp.) and several wild relatives of the genus Aegilops has recently been detected despite notoriously high levels of selfing in these species. Here, we assess and model the spread of wheat alleles into natural populations of the barbed goatgrass (Aegilops triuncialis), a wild wheat relative prevailing in the Mediterranean flora. Our sampling, based on an extensive survey of 31 Ae. triuncialis populations collected over a 60 km × 20 km area in southern Spain (Grazalema mountain chain, Andalusia; 458 specimens in total), is completed with 33 wheat cultivars representative of the European domesticated pool. All specimens were genotyped with amplified fragment length polymorphism (AFLP) markers with the aim of estimating wheat admixture levels in Ae. triuncialis populations. This survey first confirmed extensive hybridization and backcrossing of wheat into the wild species. We then used explicit modelling of populations and approximate Bayesian computation to estimate the selfing rate of Ae. triuncialis along with the magnitude, the tempo and the geographical distance over which wheat alleles introgress into Ae. triuncialis populations. These simulations confirmed that extensive introgression of wheat alleles (2.7 × 10⁻⁴ wheat immigrants for each Ae. triuncialis resident, at each generation) into Ae. triuncialis occurs despite a high selfing rate (F_IS ≈ 1 and selfing rate = 97%). These results are discussed in the light of the risks associated with the release of genetically modified wheat cultivars in Mediterranean agrosystems.
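The approximate Bayesian computation step works roughly as in the rejection sketch below (a toy illustration with a made-up simulator and summary statistic, not the population-genetic machinery used in the study): parameters are drawn from priors, data are simulated, and draws are kept when the simulated summary statistic falls close to the observed one.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_admixture(selfing_rate, migration_rate, n_generations=100):
    """Toy simulator: expected wheat-allele admixture after recurrent immigration,
    attenuated by selfing (purely illustrative dynamics)."""
    admixture = 0.0
    for _ in range(n_generations):
        outcrossing = 1.0 - selfing_rate
        admixture += migration_rate * outcrossing * (1.0 - admixture)
    return admixture

observed_admixture = 0.02    # hypothetical observed summary statistic
tolerance = 0.002

accepted = []
for _ in range(20_000):
    selfing = rng.uniform(0.80, 1.0)           # prior on the selfing rate
    migration = 10 ** rng.uniform(-6.0, -2.0)  # log-uniform prior on immigration rate
    if abs(simulate_admixture(selfing, migration) - observed_admixture) < tolerance:
        accepted.append((selfing, migration))

accepted = np.array(accepted)
print(f"{len(accepted)} draws accepted; posterior medians: "
      f"selfing = {np.median(accepted[:, 0]):.3f}, "
      f"migration = {np.median(accepted[:, 1]):.1e}")
```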
Abstract:
OBJECTIVES: The aim of the study was to statistically model the relative increase in the risk of cardiovascular disease (CVD) per year older in the Data Collection on Adverse Events of Anti-HIV Drugs (D:A:D) study and to compare this with the relative increase in CVD risk per year older in general population risk equations. METHODS: We analysed three endpoints: myocardial infarction (MI), coronary heart disease (CHD: MI or invasive coronary procedure) and CVD (CHD or stroke). We fitted a number of parametric age effects, adjusting for known risk factors and antiretroviral therapy (ART) use. The best-fitting age effect was determined using the Akaike information criterion. We compared the ageing effect from D:A:D with that from the general population risk equations: the Framingham Heart Study, CUORE and ASSIGN risk scores. RESULTS: A total of 24 323 men were included in the analyses. Crude MI, CHD and CVD event rates per 1000 person-years increased from 2.29, 3.11 and 3.65 in those aged 40-45 years to 6.53, 11.91 and 15.89 in those aged 60-65 years, respectively. The best-fitting models included inverse age for MI and age + age² for CHD and CVD. In D:A:D there was a slowly accelerating increase in the risk of CHD and CVD per year older, which appeared to be only modest yet was consistently raised compared with the risk in the general population. The relative risk of MI with age did not differ between D:A:D and the general population. CONCLUSIONS: We found only limited evidence of an accelerating increase in the risk of CVD with age in D:A:D compared with the general population. The absolute risk of CVD associated with HIV infection remains uncertain.
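The model-selection step, comparing parametric age effects by AIC, can be illustrated with a short sketch (hypothetical event counts and person-years, not D:A:D data; Poisson regression with a person-years offset is used as a stand-in for the study's actual survival models):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)

# Hypothetical event counts and person-years by single year of age (40-65).
age = np.arange(40, 66)
pyears = rng.uniform(800, 1200, size=age.size)
events = rng.poisson(0.003 * np.exp(0.07 * (age - 40)) * pyears)  # made-up underlying rates

def fit(design):
    """Poisson regression of event counts on an age design, with log person-years offset."""
    X = sm.add_constant(design)
    return sm.GLM(events, X, family=sm.families.Poisson(), offset=np.log(pyears)).fit()

# Candidate parametric age effects, compared by AIC as in the abstract.
candidates = {
    "age": pd.DataFrame({"age": age}),
    "age + age^2": pd.DataFrame({"age": age, "age2": age ** 2}),
    "inverse age": pd.DataFrame({"inv_age": 1.0 / age}),
}
for name, design in candidates.items():
    print(f"{name:12s} AIC = {fit(design).aic:.1f}")
```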
Abstract:
BACKGROUND: Life partnerships other than marriage are rarely studied in childhood cancer survivors (CCS). We aimed (1) to describe life partnership and marriage in CCS and compare them with life partnerships in siblings and the general population; and (2) to identify socio-demographic and cancer-related factors associated with life partnership and marriage. METHODS: As part of the Swiss Childhood Cancer Survivor Study (SCCSS), a questionnaire was sent to all CCS (aged 20-40 years) registered in the Swiss Childhood Cancer Registry (SCCR) who were aged <16 years at diagnosis and had survived ≥5 years. The proportion with a life partner or married was compared between CCS, siblings and participants in the Swiss Health Survey (SHS). Multivariable logistic regression was used to identify factors associated with life partnership or marriage. RESULTS: We included 1,096 CCS of the SCCSS, 500 siblings and 5,593 participants of the SHS. Fewer CCS (47%) than siblings (61%, P < 0.001) had life partners, and fewer CCS (16%) than SHS participants (26%, P < 0.001) were married. Older (OR = 1.14, P < 0.001) and female (OR = 1.85, P < 0.001) CCS were more likely to have life partners. CCS who had undergone radiotherapy or bone marrow transplantation (global P_treatment = 0.018) or who had a CNS diagnosis (global P_diagnosis < 0.001) were less likely to have life partners. CONCLUSION: CCS are less likely to have life partners than their peers. Most CCS with a life partner were not married. Future research should focus on the effect of these disparities on the quality of life of CCS.
Abstract:
This study, conducted with a representative sample of employed and unemployed adults living in Switzerland (N = 2002), focuses on work conditions (in terms of professional insecurity and job demands), career adaptability, and professional and general well-being. Analyses of covariance highlighted that both unemployed participants and employed participants with low job insecurity reported higher scores on career adaptability and several of its dimensions (notably control) than employed participants with high job insecurity. Moreover, structural equation modeling for employed participants showed that, independent of work conditions, adaptability resources were positively associated with both general and professional well-being. As expected, professional outcomes were strongly related to job strain and professional insecurity, emphasizing the central role of the work environment. Finally, career adaptability partially mediated the relationships of job strain and professional insecurity with well-being.
Abstract:
This dissertation focuses on the practice of regulatory governance through a study of the functioning of formally independent regulatory agencies (IRAs), with special attention to their de facto independence. The research goals are grounded in a "neo-positivist" (or "reconstructed positivist") position (Hawkesworth 1992; Radaelli 2000b; Sabatier 2000). This perspective starts from the ontological assumption that, even if subjective perceptions are constitutive elements of political phenomena, a real world exists beyond any social construction and can, however imperfectly, become the object of scientific inquiry. Epistemologically, it follows that hypothetico-deductive theories with explanatory aims can be tested by employing a proper methodology and set of analytical techniques. It is thus possible, to a certain extent, to make scientific inferences and draw general conclusions, according to a Bayesian conception of knowledge, in order to update prior scientific beliefs in the truth of the related hypotheses (Howson 1998), while acknowledging that the conditions of truth are at least partially subjective and historically determined (Foucault 1988; Kuhn 1970). At the same time, a sceptical position is adopted towards the supposed disjunction between facts and values and the possibility of discovering abstract universal laws in social science. It has been observed that the current version of capitalism corresponds to the golden age of regulation, and that since the 1980s no government activity in OECD countries has grown faster than regulatory functions (Jacobs 1999). In an apparent paradox, the ongoing dynamics of liberalisation, privatisation, decartelisation, internationalisation, and regional integration have hardly led to the crumbling of the state, but have instead promoted a wave of regulatory growth in the face of new risks and new opportunities (Vogel 1996). Accordingly, a new order of regulatory capitalism is rising, implying a new division of labour between state and society and entailing the expansion and intensification of regulation (Levi-Faur 2005). The previous order, relying on public ownership and public intervention and/or on sectoral self-regulation by private actors, is being replaced by a more formalised, expert-based, open, and independently regulated model of governance. Independent regulatory agencies (IRAs), that is, formally independent administrative agencies with regulatory powers that benefit from public authority delegated by political decision makers, represent the main institutional feature of regulatory governance (Gilardi 2008). IRAs constitute a relatively new technology of regulation in western Europe, at least for certain domains, but they are increasingly widespread across countries and sectors. For instance, independent regulators have been set up to regulate very diverse issues, such as general competition, banking and finance, telecommunications, civil aviation, railway services, food safety, the pharmaceutical industry, electricity, environmental protection, and personal data privacy. Two attributes of IRAs deserve special mention. On the one hand, they are formally separated from democratic institutions and elected politicians, thus raising normative and empirical concerns about their accountability and legitimacy.
On the other hand, hard questions about their role as political actors, and about their performance, remain unaddressed, even though IRAs often accumulate executive, (quasi-)legislative, and adjudicatory functions alongside their regulatory competencies.
Abstract:
Independent regulatory agencies are one of the main institutional features of the 'rising regulatory state' in Western Europe. Governments are increasingly willing to abandon their regulatory competencies and to delegate them to specialized institutions that are at least partially beyond their control. This article examines the empirical consistency of one particular explanation of this phenomenon, namely the credibility hypothesis, claiming that governments delegate powers so as to enhance the credibility of their policies. Three observable implications are derived from the general hypothesis, linking credibility and delegation to veto players, complexity and interdependence. An independence index is developed to measure agency independence, which is then used in a multivariate analysis where the impact of credibility concerns on delegation is tested. The analysis relies on an original data set comprising independence scores for thirty-three regulators. Results show that the credibility hypothesis can explain a good deal of the variation in delegation. The economic nature of regulation is a strong determinant of agency independence, but is mediated by national institutions in the form of veto players.
Abstract:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale for the purpose of improving predictions of groundwater flow and solute transport. However, extending corresponding approaches to the regional scale still represents one of the major challenges in the domain of hydrogeophysics. To address this problem, we have developed a regional-scale data integration methodology based on a two-step Bayesian sequential simulation approach. Our objective is to generate high-resolution stochastic realizations of the regional-scale hydraulic conductivity field in the common case where there exist spatially exhaustive but poorly resolved measurements of a related geophysical parameter, as well as highly resolved but spatially sparse collocated measurements of this geophysical parameter and the hydraulic conductivity. To integrate this multi-scale, multi-parameter database, we first link the low- and high-resolution geophysical data via a stochastic downscaling procedure. This is followed by relating the downscaled geophysical data to the high-resolution hydraulic conductivity distribution. After outlining the general methodology of the approach, we demonstrate its application to a realistic synthetic example where we consider as data high-resolution measurements of the hydraulic and electrical conductivities at a small number of borehole locations, as well as spatially exhaustive, low-resolution estimates of the electrical conductivity obtained from surface-based electrical resistivity tomography. The different stochastic realizations of the hydraulic conductivity field obtained using our procedure are validated by comparing their solute transport behaviour with that of the underlying 'true' hydraulic conductivity field. We find that, even in the presence of strong subsurface heterogeneity, our proposed procedure allows for the generation of faithful representations of the regional-scale hydraulic conductivity structure and reliable predictions of solute transport over long, regional-scale distances.
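Schematically (our notation, not the authors'), the two-step procedure factorises the target conditional distribution of the hydraulic conductivity field K given the low-resolution electrical resistivity tomography (ERT) image and the collocated borehole data:

```latex
p\bigl(K(\mathbf{x}) \mid \sigma_{\mathrm{ERT}},\, \sigma_{\mathrm{bh}},\, K_{\mathrm{bh}}\bigr)
= \int
\underbrace{p\bigl(K(\mathbf{x}) \mid \sigma_{\mathrm{high}}(\mathbf{x}),\, K_{\mathrm{bh}}\bigr)}_{\text{step 2: petrophysical link}}\;
\underbrace{p\bigl(\sigma_{\mathrm{high}}(\mathbf{x}) \mid \sigma_{\mathrm{ERT}},\, \sigma_{\mathrm{bh}}\bigr)}_{\text{step 1: stochastic downscaling}}\;
d\sigma_{\mathrm{high}}(\mathbf{x}) ,
```

where $\sigma_{\mathrm{ERT}}$ is the low-resolution electrical conductivity image and $\sigma_{\mathrm{bh}}$, $K_{\mathrm{bh}}$ are the collocated borehole electrical and hydraulic conductivities. Realizations are generated by first simulating a high-resolution electrical conductivity field conditioned on the ERT image and the borehole logs, and then simulating hydraulic conductivity conditioned on that downscaled field and the borehole hydraulic data.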
Abstract:
Background: Cardiovascular diseases (CVD), their well established risk factors (CVRF) and mental disorders are common and co-occur more frequently than would be expected by chance. However, the pathogenic mechanisms and course determinants of both CVD and mental disorders have only been partially identified. Methods/Design: Comprehensive follow-up of CVRF and CVD with a psychiatric exam in all subjects who participated in the baseline cross-sectional CoLaus study (2003-2006) (n=6'738), which also included a comprehensive genetic assessment. The somatic investigation will include a shortened questionnaire on CVRF, CV events and new CVD since baseline and measurements of the same clinical and biological variables as at baseline. In addition, pro-inflammatory markers, persistent pain and sleep patterns and disorders will be assessed. In the case of a new CV event, detailed information will be abstracted from medical records. Similarly, data on the cause of death will be collected from the Swiss National Death Registry. The comprehensive psychiatric investigation of the CoLaus/PsyCoLaus study will use contemporary epidemiological methods including semi-structured diagnostic interviews, experienced clinical interviewers, standardized diagnostic criteria including threshold according to DSM-IV and sub-threshold syndromes and supplementary information on risk and protective factors for disorders. In addition, screening for objective cognitive impairment will be performed in participants older than 65 years. Discussion: The combined CoLaus/PsyCoLaus sample provides a unique opportunity to obtain prospective data on the interplay between CVRF/CVD and mental disorders, overcoming limitations of previous research by bringing together a comprehensive investigation of both CVRF and mental disorders as well as a large number of biological variables and a genome-wide genetic assessment in participants recruited from the general population.
Abstract:
Background: The 'database search problem', that is, the strengthening of a case, in terms of probative value, against an individual who is found as a result of a database search, has been approached during the last two decades with substantial mathematical analyses, accompanied by lively debate and centrally opposing conclusions. This represents a challenging obstacle in teaching but also hinders a balanced and coherent discussion of the topic within the wider scientific and legal community. This paper revisits and tracks the associated mathematical analyses in terms of Bayesian networks. Their derivation and discussion for capturing probabilistic arguments that explain the database search problem are outlined in detail. The resulting Bayesian networks offer a distinct view on the main debated issues, along with further clarity. Methods: As a general framework for representing and analyzing formal arguments in probabilistic reasoning about uncertain target propositions (that is, whether or not a given individual is the source of a crime stain), this paper relies on graphical probability models, in particular Bayesian networks. This graphical probability modeling approach is used to capture, within a single model, a series of key variables, such as the number of individuals in a database, the size of the population of potential crime stain sources, and the rarity of the corresponding analytical characteristics in a relevant population. Results: This paper demonstrates the feasibility of deriving Bayesian network structures for analyzing, representing, and tracking the database search problem. The output of the proposed models can be shown to agree with existing but exclusively formulaic approaches. Conclusions: The proposed Bayesian networks allow one to capture and analyze the currently most well-supported but reputedly counter-intuitive and difficult solution to the database search problem in a way that goes beyond the traditional, purely formulaic expressions. The method's graphical environment, along with its computational and probabilistic architectures, represents a rich package that offers analysts and discussants additional modes of interaction, concise representation, and coherent communication.
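The debated result itself can be reproduced without the Bayesian network machinery; the toy calculation below (hypothetical numbers, flat prior, typing errors ignored) gives the posterior probability that the single matching database member is the source, which is the quantity the graphical models in the paper represent and dissect:

```python
def posterior_match_is_source(N, n, gamma):
    """Posterior probability that the unique matching database member is the
    source of the crime stain, given one match and n-1 exclusions in a database
    of size n, a population of N potential sources with a uniform 1/N prior,
    and random match probability gamma (typing errors ignored)."""
    lik_if_match_is_source = (1.0 - gamma) ** (n - 1)             # the n-1 others correctly excluded
    lik_if_outsider_is_source = gamma * (1.0 - gamma) ** (n - 1)  # adventitious match in the database
    numerator = (1.0 / N) * lik_if_match_is_source
    denominator = numerator + ((N - n) / N) * lik_if_outsider_is_source
    return numerator / denominator   # simplifies to 1 / (1 + (N - n) * gamma)

# Hypothetical numbers: 1,000,000 potential sources, database of 10,000 profiles,
# random match probability 1e-6.
print(posterior_match_is_source(1_000_000, 10_000, 1e-6))  # ~0.503, after the database search
print(posterior_match_is_source(1_000_000, 1, 1e-6))       # ~0.500, probable-cause-like comparison
```

With these numbers the posterior after the database search (about 0.503) is slightly higher than in the comparable single-suspect scenario (about 0.500), which is the counter-intuitive point at the centre of the debate.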
Abstract:
The empirical literature on the efficiency of measures for reducing persistent government deficits has mainly focused on the direct explanation of the deficit. By contrast, this paper aims at modeling government revenue and expenditure within a simultaneous framework and deriving the fiscal balance (surplus or deficit) equation as the difference between the two variables. This setting enables one not only to judge how relevant the explanatory variables are in explaining the fiscal balance but also to understand their impact on revenue and/or expenditure. Our empirical results, obtained using a panel data set on Swiss cantons for the period 1980-2002, confirm the relevance of the approach followed here by providing unambiguous evidence of a simultaneous relationship between revenue and expenditure. They also reveal strong dynamic components in revenue, expenditure, and fiscal balance. Among the significant determinants of public fiscal balance we find not only the usual business-cycle elements but also, and more importantly, institutional factors such as the number of administrative units and the ease with which people can resort to political (direct democracy) instruments, such as popular initiatives and referendums.
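Written out, the framework amounts to a simultaneous dynamic system of the following stylised form (our notation; the actual specification and estimator are described in the paper):

```latex
\begin{aligned}
R_{it} &= \alpha_{1} E_{it} + \boldsymbol{\beta}_{1}^{\top}\mathbf{x}_{it} + \gamma_{1} R_{i,t-1} + u_{it},\\
E_{it} &= \alpha_{2} R_{it} + \boldsymbol{\beta}_{2}^{\top}\mathbf{x}_{it} + \gamma_{2} E_{i,t-1} + v_{it},\\
B_{it} &\equiv R_{it} - E_{it},
\end{aligned}
```

where $R_{it}$, $E_{it}$ and $B_{it}$ are the revenue, expenditure and fiscal balance of canton $i$ in year $t$, and $\mathbf{x}_{it}$ collects business-cycle and institutional determinants; the fiscal balance equation is obtained as the difference between the two estimated equations.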