194 results for accident models
at Université de Lausanne, Switzerland
Abstract:
Traditionally, the common reserving methods used by non-life actuaries are based on the assumption that future claims will behave in the same way as they did in the past. There are two main sources of variability in the claims development process: the variability of the speed with which claims are settled, and the variability of claims severity between accident years. Large changes in these processes generate distortions in the estimation of the claims reserves. The main objective of this thesis is to provide an indicator which, first, identifies and quantifies these two influences and, second, determines which model is adequate for a specific situation. Two stochastic models were analysed and the predictive distributions of the future claims were obtained. The main advantage of stochastic models is that they provide measures of variability of the reserve estimates. The first model (PDM) combines a Dirichlet-Multinomial conjugate family with the Poisson distribution. The second model (NBDM) improves on the first by combining two conjugate families: Poisson-Gamma (for the distribution of the ultimate amounts) and Dirichlet-Multinomial (for the distribution of the incremental claims payments). The second model makes it possible to express the variability of the reporting speed and of the claims severity development as a function of two parameters of the above distributions: the shape parameter of the Gamma distribution and the Dirichlet parameter. Depending on the relation between them, we can decide on the adequacy of the claims reserve estimation method. The parameters were estimated by the method of moments and by maximum likelihood. The results were tested on simulated data and then on real data from three lines of business: Property/Casualty, General Liability, and Accident Insurance. These data exhibit different development patterns and specificities. The thesis shows that when the Dirichlet parameter is greater than the shape parameter of the Gamma, the model has positive correlation between past and future claims payments, which suggests the Chain-Ladder method as appropriate for claims reserve estimation. In terms of claims reserves, if the cumulated payments are high, the positive correlation implies high expectations for the future payments, resulting in high claims reserve estimates. Negative correlation appears when the Dirichlet parameter is lower than the shape parameter of the Gamma, meaning low expected future payments for the same high observed cumulated payments. This corresponds to the situation where claims are reported rapidly and few further claims are expected. The extreme case arises when all claims are reported at the same time, so that the expected future payments are either zero or equal to the aggregated amount of the ultimate paid claims. In this latter case, the Chain-Ladder method is not recommended.
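A minimal sketch of the hierarchy this abstract describes may help fix ideas; the notation (U_i, X_{i,j}, a, b, alpha) is assumed for illustration and is not taken from the thesis itself:

```latex
% Illustrative NBDM-style hierarchy, following the abstract's description.
\begin{align*}
U_i \mid \lambda_i &\sim \mathrm{Poisson}(\lambda_i), \qquad
  \lambda_i \sim \mathrm{Gamma}(a, b)
  && \text{(ultimate claims of accident year } i\text{)} \\
(X_{i,0}, \dots, X_{i,J}) \mid U_i,\, p &\sim \mathrm{Multinomial}(U_i, p), \qquad
  p \sim \mathrm{Dirichlet}(\alpha_0, \dots, \alpha_J)
  && \text{(incremental payments by development year)}
\end{align*}
% Per the abstract, the sign of the correlation between past and future
% increments depends on how the Dirichlet parameter compares with the Gamma
% shape parameter a: a larger Dirichlet parameter gives positive correlation
% (Chain-Ladder appropriate), a smaller one gives negative correlation.
```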
Abstract:
Radioactive soil-contamination mapping and risk assessment is a vital issue for decision makers. Traditional approaches for mapping the spatial concentration of radionuclides employ various regression-based models, which usually provide a single-value prediction realization accompanied (in some cases) by estimation error. Such approaches do not provide the capability for rigorous uncertainty quantification or probabilistic mapping. Machine learning is a recent and fast-developing approach based on learning patterns and information from data. Artificial neural networks for prediction mapping have been especially powerful in combination with spatial statistics. A data-driven approach provides the opportunity to integrate additional relevant information about spatial phenomena into a prediction model for more accurate spatial estimates and associated uncertainty. Machine-learning algorithms can also be used for a wider spectrum of problems than before: classification, probability density estimation, and so forth. Stochastic simulations are used to model spatial variability and uncertainty. Unlike regression models, they provide multiple realizations of a particular spatial pattern that allow uncertainty and risk quantification. This paper reviews the most recent methods of spatial data analysis, prediction, and risk mapping, based on machine learning and stochastic simulations in comparison with more traditional regression models. The radioactive fallout from the Chernobyl Nuclear Power Plant accident is used to illustrate the application of the models for prediction and classification problems. This fallout is a unique case study that provides the challenging task of analyzing huge amounts of data ('hard' direct measurements, as well as supplementary information and expert estimates) and solving particular decision-oriented problems.
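The contrast drawn above between a single-value regression map and multiple stochastic realizations can be illustrated with a short sketch. This uses a Gaussian process as a stand-in spatial model on synthetic data; it is not the paper's neural-network or simulation method, and all variable names are assumptions.

```python
# Single-value prediction map vs. multiple realizations for probabilistic risk mapping.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
coords = rng.uniform(0, 100, size=(200, 2))                       # sampling locations (km)
activity = np.sin(coords[:, 0] / 15) + 0.1 * rng.standard_normal(200)  # proxy measurements

gp = GaussianProcessRegressor(kernel=RBF(10.0) + WhiteKernel(0.01), normalize_y=True)
gp.fit(coords, activity)

grid = np.stack(np.meshgrid(np.linspace(0, 100, 50),
                            np.linspace(0, 100, 50)), -1).reshape(-1, 2)
mean, std = gp.predict(grid, return_std=True)          # single-value map + estimation error
realizations = gp.sample_y(grid, n_samples=100)        # multiple realizations of the field
prob_exceed = (realizations > 0.5).mean(axis=1)        # e.g. P(activity > threshold) per cell
```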
Abstract:
Abiotic factors are considered strong drivers of species distribution and assemblages. Yet these spatial patterns are also influenced by biotic interactions. Accounting for competitors or facilitators may improve both the fit and the predictive power of species distribution models (SDMs). We investigated the influence of a dominant species, Empetrum nigrum ssp. hermaphroditum, on the distribution of 34 subordinate species in the tundra of northern Norway. We related SDM parameters of those subordinate species to their functional traits and their co-occurrence patterns with E. hermaphroditum across three spatial scales. By combining both approaches, we sought to understand whether these species may be limited by competitive interactions and/or benefit from habitat conditions created by the dominant species. The model fit and predictive power increased for most species when the frequency of occurrence of E. hermaphroditum was included in the SDMs as a predictor. The largest increase was found for species that 1) co-occur most of the time with E. hermaphroditum, both at the large (i.e. 750 m) and small (i.e. 2 m) spatial scales, or co-occur with E. hermaphroditum at the large scale but not at the small scale, and 2) have particularly low or high leaf dry matter content (LDMC). Species that do not co-occur with E. hermaphroditum at the smallest scale are generally palatable herbaceous species with low LDMC, thus showing a weak ability to tolerate the resource depletion that is directly or indirectly induced by E. hermaphroditum. Species with high LDMC, showing a better ability to withstand resource depletion and grazing, are often found in the proximity of E. hermaphroditum. Our results are consistent with previous findings that both competition and facilitation structure plant distribution and assemblages in the Arctic tundra. The functional and co-occurrence approaches used were complementary and provided a deeper understanding of the observed patterns by refining the pool of potential direct and indirect ecological effects of E. hermaphroditum on the distribution of subordinate species. Our correlative study would benefit from being complemented by experimental approaches.
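A hedged sketch of the central comparison, fitting an SDM for one subordinate species with and without the frequency of the dominant species as an extra predictor, then comparing fit and predictive power. The file and column names (elevation, soil_moisture, empetrum_freq, presence) are hypothetical placeholders, not the study's data.

```python
import pandas as pd
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

df = pd.read_csv("tundra_plots.csv")                  # hypothetical plot-level data

X_abio = sm.add_constant(df[["elevation", "soil_moisture"]])
X_full = sm.add_constant(df[["elevation", "soil_moisture", "empetrum_freq"]])

m_abio = sm.GLM(df["presence"], X_abio, family=sm.families.Binomial()).fit()
m_full = sm.GLM(df["presence"], X_full, family=sm.families.Binomial()).fit()

print("AIC without / with dominant species:", m_abio.aic, m_full.aic)
print("AUC without / with dominant species:",
      roc_auc_score(df["presence"], m_abio.fittedvalues),
      roc_auc_score(df["presence"], m_full.fittedvalues))
```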
Abstract:
BACKGROUND: Even though a large proportion of physiotherapists worldwide work in the private sector, very little is known about the organizations within which they practice. Such knowledge is important to help understand contexts of practice and how they influence the quality of services and patient outcomes. The purpose of this study was to: 1) describe characteristics of organizations where physiotherapists practice in the private sector, and 2) explore the existence of a taxonomy of organizational models. METHODS: This was a cross-sectional quantitative survey of 236 randomly selected physiotherapists. Participants completed a purpose-designed questionnaire online or by telephone, covering organizational vision, resources, structures and practices. Organizational characteristics were analyzed descriptively, while organizational models were identified by multiple correspondence analysis. RESULTS: Most organizations were for-profit (93.2%), located in urban areas (91.5%), and within buildings containing multiple businesses/organizations (76.7%). The majority included multiple providers (89.8%) from diverse professions, mainly physiotherapy assistants (68.7%), massage therapists (67.3%) and osteopaths (50.2%). Four organizational models were identified: 1) solo practice, 2) middle-scale multiprovider, 3) large-scale multiprovider and 4) mixed. CONCLUSIONS: The results of this study provide a detailed description of the organizations where physiotherapists practice, and highlight the importance of human resources in differentiating organizational models. Further research examining the influence of these organizational characteristics and models on outcomes such as physiotherapists' professional practices and patient outcomes is needed.
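For readers unfamiliar with the technique, a minimal sketch of multiple correspondence analysis (MCA), the method used above to derive organizational models, implemented directly on a one-hot indicator matrix. The survey file and variable names are hypothetical.

```python
import numpy as np
import pandas as pd

df = pd.read_csv("organizations.csv")                   # hypothetical categorical survey answers
Z = pd.get_dummies(df[["profit_status", "location", "providers"]]).to_numpy(float)

P = Z / Z.sum()                                         # correspondence matrix
r, c = P.sum(axis=1), P.sum(axis=0)                     # row / column masses
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))      # standardized residuals
U, s, Vt = np.linalg.svd(S, full_matrices=False)

row_coords = (U * s) / np.sqrt(r)[:, None]              # respondent coordinates (principal axes)
inertia = s**2 / (s**2).sum()                           # share of inertia per axis
print("first two axes explain", inertia[:2].sum())
```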
Abstract:
Among the largest resources for biological sequence data is the large amount of expressed sequence tags (ESTs) available in public and proprietary databases. ESTs provide information on transcripts, but for technical reasons they often contain sequencing errors. Therefore, when analyzing EST sequences computationally, such errors must be taken into account. Earlier attempts to model error-prone coding regions have shown good performance in detecting and predicting them while correcting sequencing errors using codon usage frequencies. In the research presented here, we improve the detection of translation start and stop sites by integrating a more complex mRNA model with codon-usage-bias-based error correction into one hidden Markov model (HMM), thus generalizing this error correction approach to more complex HMMs. We show that our method maintains the performance in detecting coding sequences.
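To illustrate the HMM machinery the abstract refers to, here is a toy Viterbi decoder for a two-state (coding vs. non-coding) model. The states, transition/emission probabilities, and nucleotide alphabet are invented for the example; the paper's actual model is far richer (codon usage, start/stop sites, error states).

```python
import numpy as np

states = ["noncoding", "coding"]
log_start = np.log([0.5, 0.5])
log_trans = np.log([[0.95, 0.05],
                    [0.10, 0.90]])
# Emission probabilities over nucleotides; a real model would use codon usage.
log_emit = np.log([[0.25, 0.25, 0.25, 0.25],
                   [0.15, 0.35, 0.35, 0.15]])
alphabet = {"A": 0, "C": 1, "G": 2, "T": 3}

def viterbi(seq):
    obs = [alphabet[b] for b in seq]
    n, k = len(obs), len(states)
    dp = np.full((n, k), -np.inf)
    back = np.zeros((n, k), dtype=int)
    dp[0] = log_start + log_emit[:, obs[0]]
    for t in range(1, n):
        scores = dp[t - 1][:, None] + log_trans + log_emit[:, obs[t]]
        back[t] = scores.argmax(axis=0)        # best previous state for each current state
        dp[t] = scores.max(axis=0)
    path = [int(dp[-1].argmax())]
    for t in range(n - 1, 0, -1):              # backtrack the most probable state path
        path.append(back[t, path[-1]])
    return [states[s] for s in reversed(path)]

print(viterbi("ATGCGCGCTAAT"))
```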
Abstract:
A large fraction of genome variation between individuals is comprised of submicroscopic copy number variation of genomic DNA segments. We assessed the relative contribution of structural changes and gene dosage alterations on phenotypic outcomes with mouse models of Smith-Magenis and Potocki-Lupski syndromes. We phenotyped mice with 1n (Deletion/+), 2n (+/+), 3n (Duplication/+), and balanced 2n compound heterozygous (Deletion/Duplication) copies of the same region. Parallel to the observations made in humans, such variation in gene copy number was sufficient to generate phenotypic consequences: in a number of cases diametrically opposing phenotypes were associated with gain versus loss of gene content. Surprisingly, some neurobehavioral traits were not rescued by restoration of the normal gene copy number. Transcriptome profiling showed that a highly significant propensity of transcriptional changes maps to the engineered interval in the five assessed tissues. A statistically significant overrepresentation of the genes mapping to the entire length of the engineered chromosome was also found in the top-ranked differentially expressed genes in the mice containing rearranged chromosomes, regardless of the nature of the rearrangement, an observation robust across different cell lineages of the central nervous system. Our data indicate that a structural change at a given position of the human genome may affect not only locus and adjacent gene expression but also "genome regulation." Furthermore, structural change can cause the same perturbation in particular pathways regardless of gene dosage. Thus, the presence of a genomic structural change, as well as gene dosage imbalance, contributes to the ultimate phenotype.
Abstract:
Actualistic models of divergent and convergent margins are reviewed and applied to the history of the western Alps. The Tethyan rifting history and geometry are analyzed: the northern European margin is considered an upper plate whereas the southern Apulian margin is a lower plate; the Breche basin is regarded as the former break-away trough; the internal Brianconnais domain represents the northern rift shoulder, whilst the more external domains are regarded as the infill of a complex rim basin locally affected by important extension (Valaisan and Vocontian troughs). The Schistes lustres and ophiolites of the Tsate nappe are compared to an accretionary prism: the imbrication of this nappe's elements is regarded as a direct consequence of accretionary phenomena already active in the Early Cretaceous; the Gets/Simme complex could originate from a more internal part of the accretionary prism. Some eclogitic basements represent the former Apulian margin substratum (Sesia), while others (Mont-Rose) are interpreted as the former edge of the European margin. The history of the closing Tethyan domain is analyzed, and the remaining problems concerning the kinematics, the presence/absence of a volcanic arc and the Eoalpine metamorphism are discussed.
Abstract:
The algorithmic approach to data modelling has developed rapidly in recent years; in particular, methods based on data mining and machine learning have been used in a growing number of applications. These methods follow a data-driven methodology, aiming at providing the best possible generalization and predictive abilities instead of concentrating on the properties of the data model. One of the most successful groups of such methods is known as Support Vector algorithms. Following the fruitful developments in applying Support Vector algorithms to spatial data, this paper introduces a new extension of the traditional support vector regression (SVR) algorithm. This extension allows for the simultaneous modelling of environmental data at several spatial scales. The joint influence of environmental processes presenting different patterns at different scales is learned automatically from the data, providing the optimum mixture of short-scale and large-scale models. The method is adaptive to the spatial scale of the data. With this advantage, it can provide efficient means to model local anomalies that typically arise at an early phase of an environmental emergency. However, the proposed approach still requires some prior knowledge about the possible existence of such short-scale patterns, which is a possible limitation for its implementation in early warning systems. The purpose of this paper is to present the multi-scale SVR model and to illustrate its use with an application to the mapping of Cs137 activity, given the measurements taken in the region of Briansk following the Chernobyl accident.
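A hedged sketch of the multi-scale idea: a support vector regression whose custom kernel is a weighted sum of a large-scale and a short-scale RBF kernel. This illustrates the concept rather than reproducing the paper's formulation; the data, kernel widths, and mixture weight are invented.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics.pairwise import rbf_kernel

def two_scale_kernel(X, Y, gamma_large=0.01, gamma_short=1.0, w=0.7):
    """Weighted sum of a large-scale and a short-scale RBF kernel."""
    return w * rbf_kernel(X, Y, gamma=gamma_large) + (1 - w) * rbf_kernel(X, Y, gamma=gamma_short)

rng = np.random.default_rng(1)
coords = rng.uniform(0, 100, size=(300, 2))                       # monitoring locations
activity = (np.exp(-((coords - 50) ** 2).sum(axis=1) / 500)       # large-scale trend
            + 0.3 * np.sin(coords[:, 0]))                         # plus a short-scale component

model = SVR(kernel=two_scale_kernel, C=10.0, epsilon=0.01)
model.fit(coords, activity)
prediction = model.predict(coords)
```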
Abstract:
Difficult tracheal intubation assessment is an important research topic in anesthesia, as failed intubations are an important cause of mortality in anesthetic practice. The modified Mallampati score is widely used, alone or in conjunction with other criteria, to predict the difficulty of intubation. This work presents an automatic method to assess the modified Mallampati score from an image of a patient with the mouth wide open. For this purpose we propose an active appearance model (AAM) based method and use linear support vector machines (SVM) to select a subset of relevant features obtained using the AAM. This feature selection step proves to be essential, as it drastically improves the performance of the classification, which is obtained using an SVM with an RBF kernel and majority voting. We test our method on images of 100 patients undergoing elective surgery, achieve 97.9% accuracy in the leave-one-out cross-validation test, and provide a key element of an automatic difficult intubation assessment system.
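A sketch of the classification pipeline described above: linear-SVM-based selection of AAM features followed by an RBF-kernel SVM, evaluated with leave-one-out cross-validation. The AAM feature matrix is assumed to be precomputed, and the shapes and labels below are synthetic placeholders (the majority-voting step is omitted).

```python
import numpy as np
from sklearn.svm import LinearSVC, SVC
from sklearn.feature_selection import SelectFromModel
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
aam_features = rng.standard_normal((100, 60))         # 100 patients x 60 AAM parameters
difficult = rng.integers(0, 2, size=100)              # binary label (placeholder)

clf = make_pipeline(
    StandardScaler(),
    SelectFromModel(LinearSVC(C=0.1, penalty="l1", dual=False, max_iter=5000)),
    SVC(kernel="rbf", C=1.0, gamma="scale"),
)
scores = cross_val_score(clf, aam_features, difficult, cv=LeaveOneOut())
print("leave-one-out accuracy:", scores.mean())
```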
Abstract:
1. Model-based approaches have been used increasingly in conservation biology over recent years. Species presence data used for predictive species distribution modelling are abundant in natural history collections, whereas reliable absence data are sparse, most notably for vagrant species such as butterflies and snakes. As predictive methods such as generalized linear models (GLM) require absence data, various strategies have been proposed to select pseudo-absence data. However, only a few studies exist that compare different approaches to generating these pseudo-absence data. 2. Natural history collection data are usually available for long periods of time (decades or even centuries), thus allowing historical considerations. However, this historical dimension has rarely been assessed in studies of species distribution, although there is great potential for understanding current patterns, i.e. the past is the key to the present. 3. We used GLM to model the distributions of three 'target' butterfly species, Melitaea didyma, Coenonympha tullia and Maculinea teleius, in Switzerland. We developed and compared four strategies for defining pools of pseudo-absence data and applied them to natural history collection data from the last 10, 30 and 100 years. Pools included: (i) sites without target species records; (ii) sites where butterfly species other than the target species were present; (iii) sites without butterfly species but with habitat characteristics similar to those required by the target species; and (iv) a combination of the second and third strategies. Models were evaluated and compared by the total deviance explained, the maximized Kappa and the area under the curve (AUC). 4. Among the four strategies, model performance was best for strategy 3. Contrary to expectations, strategy 2 resulted in even lower model performance than models fitted with pseudo-absence data selected totally at random (strategy 1). 5. Independent of the strategy, model performance was enhanced when sites with historical species presence data were not considered as pseudo-absence data. Therefore, the combination of strategy 3 with species records from the last 100 years achieved the highest model performance. 6. Synthesis and applications. The protection of suitable habitat for species survival or reintroduction in rapidly changing landscapes is a high priority among conservationists. Model-based approaches offer planning authorities the possibility of delimiting priority areas for species detection or habitat protection. The performance of these models can be enhanced by fitting them with pseudo-absence data relying on large archives of natural history collection species presence data rather than using randomly sampled pseudo-absence data.
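In the spirit of the strategy comparison above, a hedged sketch of fitting a GLM distribution model from presence records plus one pool of pseudo-absences and evaluating it by AUC. The file and column names are hypothetical placeholders; the real study used natural history collection records and additional evaluation metrics (deviance explained, maximized Kappa).

```python
import pandas as pd
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

presences = pd.read_csv("melitaea_didyma_presences.csv")      # hypothetical presence sites
pseudo_abs = pd.read_csv("pseudo_absences_strategy3.csv")      # e.g. habitat-filtered sites

presences["y"], pseudo_abs["y"] = 1, 0
data = pd.concat([presences, pseudo_abs], ignore_index=True)

predictors = sm.add_constant(data[["temperature", "precipitation", "forest_cover"]])
glm = sm.GLM(data["y"], predictors, family=sm.families.Binomial()).fit()

print(glm.summary())
print("AUC:", roc_auc_score(data["y"], glm.fittedvalues))
```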