941 results for Intention-based models
Abstract:
The method of stochastic dynamic programming is widely used in behavioral ecology, but it has some shortcomings that stem from the use of a finite time horizon. The authors present an alternative approach based on renewal theory. The proposed method uses the cumulative energy reserve gained per unit of time as its criterion, which leads to stationary cycles in state space. This approach allows optimal foraging to be studied by analytic methods.
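As an illustration of the rate criterion typical of renewal-based foraging models (generic notation, not taken from the paper): if each stationary cycle yields a random net energy gain G and lasts a random time T, the optimal policy maximizes the long-run rate

```latex
\gamma^{*} \;=\; \max_{\pi}\; \frac{\mathbb{E}_{\pi}[G]}{\mathbb{E}_{\pi}[T]},
```

which, by the renewal-reward theorem, equals the almost-sure long-run average energy gain per unit time under the stationary policy.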
Abstract:
Several unit root tests in panel data have recently been proposed. The test developed by Harris and Tzavalis (1999 JoE) performs particularly well when the time dimension is moderate in relation to the cross-section dimension. However, in common with the traditional tests designed for the unidimensional case, it was found to perform poorly when there is a structural break in the time series under the alternative. Here we derive the asymptotic distribution of the test allowing for a shift in the mean, and assess the small sample performance. We apply this new test to show how the hypothesis of (perfect) hysteresis in Spanish unemployment is rejected in favour of the alternative of the natural unemployment rate, when the possibility of a change in the latter is considered.
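The kind of panel specification involved can be sketched as follows (illustrative notation, not the authors' exact formulation), with a single break at a known date T_b under the alternative:

```latex
y_{it} = \alpha_i + \delta_i D_t + \rho\, y_{i,t-1} + \varepsilon_{it},
\qquad D_t = \mathbf{1}\{t > T_b\},\qquad i = 1,\dots,N,\; t = 1,\dots,T,
```

with H0: \rho = 1 (unit root, i.e. hysteresis) against H1: \rho < 1, where the alternative allows each series' mean to shift at the break.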
Abstract:
In groundwater applications, Monte Carlo methods are employed to model the uncertainty on geological parameters. However, their brute-force application becomes computationally prohibitive for highly detailed geological descriptions, complex physical processes, and a large number of realizations. The Distance Kernel Method (DKM) overcomes this issue by clustering the realizations in a multidimensional space based on the flow responses obtained by means of an approximate (computationally cheaper) model; then, the uncertainty is estimated from the exact responses that are computed only for one representative realization per cluster (the medoid). Usually, DKM is employed to decrease the size of the sample of realizations that are considered to estimate the uncertainty. We propose to use the information from the approximate responses for uncertainty quantification. The subset of exact solutions provided by DKM is then employed to construct an error model and correct the potential bias of the approximate model. Two error models are devised that both employ the difference between approximate and exact medoid solutions, but differ in the way medoid errors are interpolated to correct the whole set of realizations. The Local Error Model rests upon the clustering defined by DKM and can be seen as a natural way to account for intra-cluster variability; the Global Error Model employs a linear interpolation of all medoid errors regardless of the cluster to which a given realization belongs. These error models are evaluated for an idealized pollution problem in which the uncertainty of the breakthrough curve needs to be estimated. For this numerical test case, we demonstrate that the error models improve the uncertainty quantification provided by the DKM algorithm and are effective in correcting the bias of the estimate computed solely from the approximate (multiscale finite volume, MsFV) results. The framework presented here is not specific to the methods considered and can be applied to other combinations of approximate models and techniques to select a subset of realizations.
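A minimal sketch of the local error model described above, assuming a hypothetical expensive solver exact_solver(i) and an array of approximate responses; plain k-means with nearest-to-centroid medoid selection stands in here for the paper's distance-kernel clustering:

```python
import numpy as np
from sklearn.cluster import KMeans

def dkm_local_error_correction(approx, exact_solver, n_clusters=5, seed=0):
    """approx: (n_realizations, n_times) approximate breakthrough curves.
    exact_solver(i) -> exact curve for realization i (expensive call).
    Returns bias-corrected curves for all realizations (local error model)."""
    km = KMeans(n_clusters=n_clusters, random_state=seed).fit(approx)
    corrected = approx.copy()
    for k in range(n_clusters):
        members = np.where(km.labels_ == k)[0]
        # medoid: the member closest to the cluster centroid in response space
        dists = np.linalg.norm(approx[members] - km.cluster_centers_[k], axis=1)
        medoid = members[np.argmin(dists)]
        error = exact_solver(medoid) - approx[medoid]   # medoid error
        corrected[members] += error                     # same shift for the whole cluster
    return corrected
```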
Abstract:
BACKGROUND: Qualitative frameworks, especially those based on the logical discrete formalism, are increasingly used to model regulatory and signalling networks. A major advantage of these frameworks is that they do not require precise quantitative data, and that they are well-suited for studies of large networks. While numerous groups have developed specific computational tools that provide original methods to analyse qualitative models, a standard format to exchange qualitative models has been missing. RESULTS: We present the Systems Biology Markup Language (SBML) Qualitative Models Package ("qual"), an extension of the SBML Level 3 standard designed for computer representation of qualitative models of biological networks. We demonstrate the interoperability of models via SBML qual through the analysis of a specific signalling network by three independent software tools. Furthermore, the collective effort to define the SBML qual format paved the way for the development of LogicalModel, an open-source model library, which will facilitate the adoption of the format as well as the collaborative development of algorithms to analyse qualitative models. CONCLUSIONS: SBML qual allows the exchange of qualitative models among a number of complementary software tools. SBML qual has the potential to promote collaborative work on the development of novel computational approaches, as well as on the specification and the analysis of comprehensive qualitative models of regulatory and signalling networks.
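As an illustration of the kind of qualitative (logical) model SBML qual is designed to exchange, a toy Boolean network with synchronous updates can be written as below; the node names and rules are invented, and this is plain Python rather than the SBML qual XML format:

```python
# Toy Boolean regulatory network with synchronous updates (illustrative only).
rules = {
    "A": lambda s: not s["C"],          # A is repressed by C
    "B": lambda s: s["A"],              # B is activated by A
    "C": lambda s: s["A"] and s["B"],   # C requires both A and B
}

def step(state):
    """Apply all update rules simultaneously (synchronous scheme)."""
    return {node: int(rule(state)) for node, rule in rules.items()}

state = {"A": 1, "B": 0, "C": 0}
for _ in range(4):
    state = step(state)
    print(state)
```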
Abstract:
Rock slope instabilities such as rock slides, rock avalanches or deep-seated gravitational slope deformations are widespread in Alpine valleys. These phenomena are at the same time a main factor controlling the erosion of mountain belts and a significant natural hazard that causes important losses to mountain communities. However, the potential geometrical and dynamic connections linking outcrop- and slope-scale instabilities are often unknown. A more detailed definition of these potential links is essential to improve the understanding of the destabilization processes and to obtain a more complete hazard characterization of rock instabilities at different spatial scales. In order to propose an integrated approach to the study of rock slope instabilities, three main themes were analysed in this PhD thesis: (1) the inventory and spatial distribution of rock slope deformations at regional scale and their influence on landscape evolution, (2) the influence of brittle and ductile tectonic structures on the development of rock slope instabilities, and (3) the characterization of the hazard posed by potential rock slope instabilities through the development of conceptual instability models. To build an integrated approach to these topics, several techniques were adopted. In particular, high-resolution digital elevation models proved to be fundamental tools employed during the different stages of rock slope instability assessment. Special attention was paid to the application of digital elevation models for detailed geometrical modelling of past and potential instabilities and for rock slope monitoring at different spatial scales. Detailed field analyses and numerical models were performed to complement and verify the remote sensing approach. In the first part of this thesis, large slope instabilities in the Rhone valley (Switzerland) were mapped in order to obtain a first overview of the tectonic and climatic factors influencing their distribution and characteristics. Our analyses demonstrate the key influence of neotectonic activity and of glacial conditioning on the spatial distribution of rock slope deformations. The volumes of the rock instabilities identified along the main Rhone valley were then used to propose a first estimate of the postglacial denudation and filling of the Rhone valley associated with large gravitational movements. In the second part of the thesis, detailed structural analyses of the Frank Slide and the Sierre rock avalanche were performed to characterize the influence of brittle and ductile tectonic structures on the geometry and failure mechanisms of large instabilities. Our observations indicate that the geometric characteristics and the variation of rock mass quality associated with ductile tectonic structures, which are often ignored in landslide studies, are important factors that can drastically influence the extent and failure mechanism of rock slope instabilities. In the last part of the thesis, the failure mechanisms and the hazard associated with five potential instabilities were analysed in detail. These case studies clearly highlighted the importance of combining different analysis and monitoring techniques to obtain reliable hazard scenarios. This information, together with the development of a conceptual instability model, provides the primary input for an integrated risk management of rock slope instabilities.
- Slope movements such as rock falls, rock slides, or slower phenomena such as deep-seated gravitational slope deformations are common in mountainous regions. Slope movements are both one of the main factors controlling the progressive destruction of orogenic belts and a concrete natural hazard that can cause significant damage. However, gravitational phenomena are rarely analysed as a whole, and the geometrical and mechanical relationships linking slope-scale instabilities to local instabilities remain poorly defined. A better characterization of these links could nevertheless represent a substantial contribution to the understanding of slope destabilization processes and improve the characterization of gravitational hazards at all spatial scales. In order to propose a more global approach to the problem of gravitational movements, this thesis follows three main research axes: (1) the inventory and analysis of the spatial distribution of large rock instabilities at the regional scale, (2) the analysis of brittle and ductile tectonic structures in relation to the failure mechanisms of large rock instabilities, and (3) the characterization of rock hazards through a multidisciplinary approach aimed at developing a conceptual model of the instability and a better appreciation of the danger. Different techniques were used to address the issues treated in this thesis. In particular, the digital terrain model proved to be an indispensable tool for most of the analyses carried out, from the identification of the instability to the monitoring of movements. Field analyses and numerical modelling then made it possible to complement the information derived from the digital terrain model. In the first part of this thesis, gravitational rock movements in the Rhone valley (Switzerland) were mapped in order to study their distribution as a function of regional geological and morphological variables. In particular, the analyses highlighted the influence of neotectonic activity and of glacial phases on the distribution of the zones with a high density of rock instabilities. The volumes of the rock instabilities identified along the main valley were then used to estimate the postglacial denudation rate and the filling of the Rhone valley linked to large gravitational movements. In the second part, the study of the structural setting of the Sierre (Switzerland) and Frank (Canada) rock avalanches allowed a better characterization of the passive influence of tectonic structures on the geometry of the instabilities. In particular, structures inherited from ductile tectonics, often ignored in the study of gravitational instabilities, were identified as very important structures that control the failure mechanisms of instabilities at different scales. In the last part of the thesis, five different rock instabilities were studied with a multidisciplinary approach aimed at better characterizing the hazard and at developing a three-dimensional conceptual model of these instabilities.
These analyses highlighted the need to combine different analysis and monitoring techniques for a more objective management of the risk associated with large rock instabilities.
Abstract:
Macroporosity is often used in the determination of soil compaction. Reduced macroporosity can lead to poor drainage, low root aeration and soil degradation. The aim of this study was to develop and test different models to estimate macro- and microporosity efficiently, using multiple regression. Ten soils were selected within a large range of textures: sand (Sa) 0.07-0.84; silt 0.03-0.24; clay 0.13-0.78 kg kg-1, and subjected to three compaction levels (three bulk densities, BD). Two models with similar accuracy were selected, with a mean error of about 0.02 m³ m-3 (2 %). The model y = a + b.BD + c.Sa, named model 2, was selected for its simplicity to estimate macro- (Ma), micro- (Mi) or total porosity (TP): Ma = 0.693 - 0.465 BD + 0.212 Sa; Mi = 0.337 + 0.120 BD - 0.294 Sa; TP = 1.030 - 0.345 BD - 0.082 Sa; porosity values are expressed in m³ m-3, BD in kg dm-3 and Sa in kg kg-1. The model was tested with 76 data sets from several other authors. An error of about 0.04 m³ m-3 (4 %) was observed. Simulations of variations in BD as a function of Sa are presented for Ma = 0 and Ma = 0.10 (10 %). The macroporosity equation was remodeled to obtain other compaction indexes: a) to simulate maximum bulk density (MBD) as a function of Sa (Equation 11), in agreement with literature data; b) to simulate relative bulk density (RBD) as a function of BD and Sa (Equation 13); c) another model to simulate RBD as a function of Ma and Sa (Equation 16), confirming the independence of this variable in relation to Sa for a fixed value of macroporosity and also supporting the hypothesis of Hakansson & Lipiec that RBD = 0.87 corresponds approximately to 10 % macroporosity (Ma = 0.10 m³ m-3).
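The selected model can be applied directly; the sketch below simply encodes the fitted equations quoted above (BD in kg dm-3, Sa in kg kg-1, porosities in m³ m-3) and checks that Ma + Mi reproduces TP:

```python
def porosity_model2(BD, Sa):
    """Model 2 of the abstract: porosity as a linear function of bulk density and sand."""
    Ma = 0.693 - 0.465 * BD + 0.212 * Sa   # macroporosity
    Mi = 0.337 + 0.120 * BD - 0.294 * Sa   # microporosity
    TP = 1.030 - 0.345 * BD - 0.082 * Sa   # total porosity (= Ma + Mi)
    return Ma, Mi, TP

Ma, Mi, TP = porosity_model2(BD=1.4, Sa=0.50)   # example input values
print(round(Ma, 3), round(Mi, 3), round(TP, 3), round(Ma + Mi, 3))
```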
Abstract:
1. Identifying areas suitable for recolonization by threatened species is essential to support efficient conservation policies. Habitat suitability models (HSM) predict species' potential distributions, but the quality of their predictions should be carefully assessed when the species-environment equilibrium assumption is violated. 2. We studied the Eurasian otter Lutra lutra, whose numbers are recovering in southern Italy. To produce widely applicable results, we chose standard HSM procedures and assessed the models' capacity to predict the suitability of a recolonization area. We used two fieldwork datasets: presence-only data, used in Ecological Niche Factor Analyses (ENFA), and presence-absence data, used in a Generalized Linear Model (GLM). In addition to cross-validation, we independently evaluated the models with data from a recolonization event, providing presences on a previously unoccupied river. 3. Three of the models successfully predicted the suitability of the recolonization area, but the GLM built with data collected before the recolonization disagreed with these predictions, missing the recolonized river's suitability and poorly describing the otter's niche. Our results highlight three points of relevance to modelling practice: (1) absences may prevent the models from correctly identifying areas suitable for a species' spread; (2) the selection of variables may introduce randomness into the predictions; and (3) the Area Under the Curve (AUC), a commonly used validation index, was not well suited to the evaluation of model quality, whereas the continuous Boyce Index (CBI), based on presence data only, better reflected the models' fit to the recolonization observations. 4. For species with unstable spatial distributions, presence-only models may work better than presence-absence methods in making reliable predictions of suitable areas for expansion. An iterative modelling process, using new occurrences from each step of the species' spread, may also help in progressively reducing errors. 5. Synthesis and applications. Conservation plans depend on reliable models of the species' suitable habitats. In non-equilibrium situations, such as for threatened or invasive species, models could be negatively affected by the inclusion of absence data when predicting areas of potential expansion. Presence-only methods will here provide a better basis for productive conservation management practices.
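For reference, a minimal fixed-bin variant of the Boyce index can be written as below (the published continuous index uses a moving window); suit_presence and suit_background are hypothetical arrays of suitability scores at presence points and over the study area:

```python
import numpy as np
from scipy.stats import spearmanr

def boyce_index(suit_presence, suit_background, n_bins=10):
    """Fixed-bin Boyce index: Spearman correlation between suitability class rank
    and the predicted-to-expected frequency ratio of presences in each class."""
    edges = np.linspace(suit_background.min(), suit_background.max(), n_bins + 1)
    pe_ratios, ranks = [], []
    for i in range(n_bins):
        lo, hi = edges[i], edges[i + 1]
        predicted = np.mean((suit_presence >= lo) & (suit_presence <= hi))    # presence freq.
        expected = np.mean((suit_background >= lo) & (suit_background <= hi)) # background freq.
        if expected > 0:
            pe_ratios.append(predicted / expected)
            ranks.append(i)
    cbi, _ = spearmanr(ranks, pe_ratios)
    return cbi
```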
Abstract:
Toxicokinetic modeling is a useful tool to describe or predict the behavior of a chemical agent in the human or animal organism. A general model based on four compartments was developed in a previous study in order to quantify the effect of human variability on a wide range of biological exposure indicators. The aim of this study was to adapt this existing general toxicokinetic model to three organic solvents, namely methyl ethyl ketone, 1-methoxy-2-propanol and 1,1,1-trichloroethane, and to take sex differences into account. In a previous human volunteer study we assessed the impact of sex on different biomarkers of exposure corresponding to the three organic solvents mentioned above. Results from that study suggested that not only physiological differences between men and women but also differences due to sex hormone levels could influence the toxicokinetics of the solvents. In fact, the use of hormonal contraceptives had an effect on the urinary levels of several biomarkers, suggesting that exogenous sex hormones could influence CYP2E1 enzyme activity. These experimental data were used to calibrate the toxicokinetic models developed in this study. Our results showed that it was possible to use an existing general toxicokinetic model for other compounds. In fact, most of the simulation results showed good agreement with the experimental data obtained for the studied solvents, with the percentage of model predictions lying within the 95% confidence interval varying from 44.4% to 90%. The results pointed out that, for the same exposure conditions, men and women can show important differences in urinary levels of biological indicators of exposure. Moreover, when running the models under simulated industrial working conditions, these differences can be even more pronounced. In conclusion, a general and simple toxicokinetic model, adapted for three well-known organic solvents, allowed us to show that metabolic parameters can have an important impact on the urinary levels of the corresponding biomarkers. These observations give evidence of an interindividual variability, an aspect that should be considered when setting occupational exposure limits.
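As a generic illustration only (not the authors' four-compartment model, and with invented parameter values), a compartmental toxicokinetic model with saturable CYP2E1-like metabolism can be written and solved as follows:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Generic two-compartment toxicokinetic sketch with Michaelis-Menten metabolism.
# All parameter values are illustrative, not fitted values from the study.
k_abs, k12, k21 = 1.2, 0.3, 0.2   # 1/h: uptake and inter-compartment transfer rates
Vmax, Km = 5.0, 2.0               # metabolic capacity (mg/h) and affinity (mg/L)
V_central = 40.0                  # L: volume of the central compartment

def rhs(t, y):
    dose_site, central, peripheral = y
    conc = central / V_central
    metab = Vmax * conc / (Km + conc)            # saturable CYP2E1-like clearance
    d_dose = -k_abs * dose_site
    d_central = k_abs * dose_site - k12 * central + k21 * peripheral - metab
    d_periph = k12 * central - k21 * peripheral
    return [d_dose, d_central, d_periph]

sol = solve_ivp(rhs, (0.0, 8.0), y0=[100.0, 0.0, 0.0])  # 100 mg at the uptake site, 8 h
print(sol.y[1, -1])  # amount remaining in the central compartment at t = 8 h
```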
Abstract:
With the advancement of high-throughput sequencing and the dramatic increase in available genetic data, statistical modeling has become an essential part of the field of molecular evolution. Statistical modeling has led to many interesting discoveries in the field, from the detection of highly conserved or diverse regions in a genome to the phylogenetic inference of species' evolutionary history. Among the different types of genome sequences, protein-coding regions are particularly interesting due to their impact on proteins. The building blocks of proteins, i.e. amino acids, are coded by triplets of nucleotides, known as codons. Accordingly, studying the evolution of codons leads to a fundamental understanding of how proteins function and evolve. Current codon models can be classified into three principal groups: mechanistic codon models, empirical codon models and hybrid ones. Mechanistic models attract particular attention due to the clarity of their underlying biological assumptions and parameters. However, they suffer from simplifying assumptions that are required to overcome the burden of computational complexity. The main assumptions applied in current mechanistic codon models are that (a) double and triple substitutions of nucleotides within codons are negligible, (b) there is no mutation variation among the nucleotides of a single codon, and (c) the HKY nucleotide model is sufficient to capture the essence of transition-transversion rates at the nucleotide level. In this thesis, I develop a framework of mechanistic codon models, named the KCM-based model family framework, based on holding or relaxing the assumptions mentioned above. Accordingly, eight different models are proposed from the eight combinations of holding or relaxing the assumptions, from the simplest one, which holds all the assumptions, to the most general one, which relaxes all of them. The models derived from the proposed framework allow me to investigate the biological plausibility of the three simplifying assumptions on real data sets, as well as to find the best model aligned with the underlying characteristics of each data set. -- With the advancement of high-throughput sequencing and the dramatic increase in available genetic data, statistical modeling has become an essential part of the field of molecular evolution. Statistical modeling has led to many interesting discoveries in the field, from the detection of highly conserved or diverse regions in a genome to the phylogenetic inference of species' evolutionary history. Among the different types of genome sequences, protein-coding regions are particularly interesting because of their impact on proteins. The building blocks of proteins, namely amino acids, are coded by triplets of nucleotides, called codons. Consequently, studying the evolution of codons leads to a fundamental understanding of how proteins function and evolve. Current codon models can be classified into three principal groups: mechanistic codon models, empirical codon models and hybrid ones. Mechanistic models attract particular attention because of the clarity of their underlying biological assumptions and parameters. However, they rely on simplifying assumptions introduced to overcome the burden of computational complexity.
The main assumptions made in current mechanistic codon models are: (a) double and triple nucleotide substitutions within codons are negligible, (b) there is no mutation variation among the nucleotides of a single codon, and (c) the HKY nucleotide model is sufficient to capture the essence of transition-transversion rates at the nucleotide level. In this thesis, I pursue two main objectives. The first objective is to develop a framework of mechanistic codon models, named the KCM-based model family framework, based on holding or relaxing the assumptions mentioned above. Accordingly, eight different models are proposed from the eight combinations of holding or relaxing the assumptions, from the simplest one, which holds all the assumptions, to the most general one, which relaxes all of them. The models derived from the proposed framework allow us to investigate the biological plausibility of the three simplifying assumptions on real data sets, as well as to find the best model aligned with the underlying characteristics of the data sets. Our experiments show that in none of the real data sets is holding the three mentioned assumptions realistic. This means that using simple models that hold these assumptions can be misleading and can result in inaccurate parameter estimates. The second objective is to develop a generalized mechanistic codon model that relaxes the three simplifying assumptions while remaining computationally efficient, using a matrix operation called the Kronecker product. Our experiments show that, on randomly chosen data sets, the proposed generalized mechanistic codon model outperforms the other codon models with respect to the AICc metric in about half of the data sets. Furthermore, I show through several experiments that the proposed general model is biologically plausible.
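A minimal sketch of how a Kronecker structure assembles a 64 x 64 codon generator from position-specific 4 x 4 nucleotide matrices; this simplest variant allows single-nucleotide substitutions only (the cross terms needed for double and triple substitutions are omitted), and the nucleotide matrices are arbitrary illustrative generators rather than fitted models:

```python
import numpy as np

def nucleotide_generator(rng):
    """Random 4x4 rate matrix with zero row sums (illustrative only)."""
    Q = rng.random((4, 4))
    np.fill_diagonal(Q, 0.0)
    np.fill_diagonal(Q, -Q.sum(axis=1))
    return Q

rng = np.random.default_rng(1)
Q1, Q2, Q3 = (nucleotide_generator(rng) for _ in range(3))
I = np.eye(4)

# Kronecker-sum structure: single substitutions at codon positions 1, 2 and 3.
Q_codon = (np.kron(np.kron(Q1, I), I)
           + np.kron(np.kron(I, Q2), I)
           + np.kron(np.kron(I, I), Q3))

print(Q_codon.shape)                        # (64, 64)
print(np.allclose(Q_codon.sum(axis=1), 0))  # rows of the codon generator still sum to zero
```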
Abstract:
Melanin is the most common pigment in animal integuments and is responsible for some of the most striking ornaments. A central tenet of sexual selection theory states that melanin-based traits can signal absolute individual quality in any environment only if their expression is condition-dependent. Significant costs imposed by an ornament would ensure that only the highest-quality individuals display the most exaggerated forms of the signal. Firm evidence that melanin-based traits can be condition-dependent is still rare in birds. In an experimental test of this central assumption, we report condition-dependent expression of a melanin-based trait in the Eurasian kestrel (Falco tinnunculus). We manipulated nestling body condition by reducing or increasing the number of nestlings soon after hatching. A few days before fledging, we measured the width of the sub-terminal black bands on the tail feathers. Compared to nestlings from enlarged broods, individuals raised in reduced broods were in better condition and thereby developed larger sub-terminal bands. Furthermore, in two years, first-born nestlings also developed larger sub-terminal bands than their younger siblings, which were in poorer condition. This demonstrates that the expression of melanin-based traits can be condition-dependent.
Abstract:
Because of the increase in workplace automation and the diversification of industrial processes, workplaces have become more and more complex. The classical approaches used to address workplace hazard concerns, such as checklists or sequence models, are, therefore, of limited use in such complex systems. Moreover, because of the multifaceted nature of workplaces, the use of single-oriented methods, such as AEA (man oriented), FMEA (system oriented), or HAZOP (process oriented), is not satisfactory. The use of a dynamic modeling approach that allows multiple-oriented analyses may constitute an alternative to overcome this limitation. The qualitative modeling aspects of the MORM (man-machine occupational risk modeling) model are discussed in this article. The model, realized on an object-oriented Petri net tool (CO-OPN), has been developed to simulate and analyze industrial processes from an occupational health and safety (OH&S) perspective. The industrial process is modeled as a set of interconnected subnets (state spaces), which describe its constitutive machines. Process-related factors are introduced, in an explicit way, through machine interconnections and flow properties. Man-machine interactions are modeled as triggering events for the state spaces of the machines, and the CREAM cognitive behavior model is used to establish the relevant triggering events. In the CO-OPN formalism, the model is expressed as a set of interconnected CO-OPN objects defined over data types expressing the measure attached to the flow of entities transiting through the machines. Constraints on the measures assigned to these entities are used to determine the state changes in each machine. Interconnecting machines implies the composition of such flows and consequently the interconnection of the measure constraints. This is reflected by the construction of constraint enrichment hierarchies, which can be used for simulation and analysis optimization in a clear mathematical framework. The use of Petri nets to perform multiple-oriented analysis opens perspectives in the field of industrial risk management. It may significantly reduce the duration of the assessment process. Above all, it opens perspectives in the field of risk comparisons and integrated risk management. Moreover, because of the generic nature of the model and tool used, the same concepts and patterns may be used to model a wide range of systems and application fields.
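Not the CO-OPN formalism itself, but a minimal place/transition Petri net sketch illustrating how machine states and triggering events can be simulated; the place and transition names are invented:

```python
# Minimal place/transition Petri net (illustrative only, not the CO-OPN formalism).
marking = {"part_waiting": 1, "machine_idle": 1, "machine_busy": 0, "part_done": 0}

transitions = {
    "start_op":  ({"part_waiting": 1, "machine_idle": 1}, {"machine_busy": 1}),
    "finish_op": ({"machine_busy": 1},                    {"machine_idle": 1, "part_done": 1}),
}

def enabled(name):
    pre, _ = transitions[name]
    return all(marking[p] >= n for p, n in pre.items())

def fire(name):
    pre, post = transitions[name]
    for p, n in pre.items():
        marking[p] -= n   # consume input tokens
    for p, n in post.items():
        marking[p] += n   # produce output tokens

for t in ["start_op", "finish_op"]:   # triggering events fired in sequence
    if enabled(t):
        fire(t)
print(marking)  # {'part_waiting': 0, 'machine_idle': 1, 'machine_busy': 0, 'part_done': 1}
```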
Abstract:
In this paper, we develop a data-driven methodology to characterize the likelihood of orographic precipitation enhancement using sequences of weather radar images and a digital elevation model (DEM). Geographical locations with topographic characteristics favorable to repeatable and persistent orographic precipitation enhancement, such as stationary cells, upslope rainfall enhancement, and repeated convective initiation, are detected by analyzing the spatial distribution of a set of precipitation cells extracted from radar imagery. Topographic features such as terrain convexity and gradients computed from the DEM at multiple spatial scales, as well as velocity fields estimated from sequences of weather radar images, are used as explanatory factors to describe the occurrence of localized precipitation enhancement. The latter is represented as a binary process by defining a threshold on the number of cell occurrences at particular locations. Both two-class and one-class support vector machine classifiers are tested to separate the presumed orographic cells from the non-orographic ones in the space of contributing topographic and flow features. Site-based validation is carried out to estimate realistic generalization skills of the obtained spatial prediction models. Owing to the high class separability, the decision function of the classifiers can be interpreted as a likelihood or susceptibility of orographic precipitation enhancement. The developed approach can serve as a basis for refining radar-based quantitative precipitation estimates and short-term forecasts, or for generating stochastic precipitation ensembles conditioned on the local topography.
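A minimal sketch of the two classification setups mentioned above, assuming a hypothetical feature matrix X (terrain convexity, multi-scale gradients, flow components) and binary labels y marking presumed orographic cells:

```python
import numpy as np
from sklearn.svm import SVC, OneClassSVM
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))                     # placeholder topographic/flow features
y = (X[:, 0] + 0.5 * X[:, 3] > 0.8).astype(int)   # placeholder orographic labels

# Two-class SVM: presumed orographic vs. non-orographic cells.
two_class = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, y)

# One-class SVM: trained on the presumed orographic cells only.
one_class = make_pipeline(StandardScaler(), OneClassSVM(kernel="rbf", nu=0.1)).fit(X[y == 1])

# Decision-function values can be read as a susceptibility score at new locations.
print(two_class.decision_function(X[:3]))
print(one_class.decision_function(X[:3]))
```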
Abstract:
Traditionally, the common reserving methods used by non-life actuaries are based on the assumption that future claims will behave in the same way as they did in the past. There are two main sources of variability in the claims development process: the variability in the speed with which claims are settled and the variability in the severity of claims from different accident years. Large changes in these processes generate distortions in the estimation of the claims reserves. The main objective of this thesis is to provide an indicator that, first, identifies and quantifies these two influences and, second, determines which model is adequate for a specific situation. Two stochastic models were analysed and the predictive distributions of the future claims were obtained. The main advantage of stochastic models is that they provide measures of the variability of the reserve estimates. The first model (PDM) combines the conjugate Dirichlet-Multinomial family with the Poisson distribution. The second model (NBDM) improves on the first by combining two conjugate families, Poisson-Gamma (for the distribution of the ultimate amounts) and Dirichlet-Multinomial (for the distribution of the incremental claims payments). The second model makes it possible to express the variability in the speed of the reporting process and in the development of claims severity as a function of two parameters of the above-mentioned distributions: the shape parameter of the Gamma distribution and the Dirichlet parameter. Depending on the relation between them, we can decide on the adequacy of the claims reserve estimation method. The parameters were estimated by the method of moments and by maximum likelihood. The results were tested using simulated data and then real data originating from three lines of business: Property/Casualty, General Liability, and Accident Insurance. These data include different developments and specificities. The outcome of the thesis shows that when the Dirichlet parameter is greater than the shape parameter of the Gamma, the resulting model has a positive correlation between past and future claims payments, which suggests the Chain-Ladder method is appropriate for claims reserve estimation. In terms of claims reserves, if the cumulated payments are high, the positive correlation implies high expected future payments and therefore high claims reserve estimates. A negative correlation appears when the Dirichlet parameter is lower than the shape parameter of the Gamma, meaning low expected future payments for the same high observed cumulated payments. This corresponds to the situation in which claims are reported rapidly and few claims remain expected subsequently. The extreme case appears when all claims are reported at the same time, leading to expected future payments of zero or equal to the aggregated amount of the ultimate paid claims. For this latter case, the Chain-Ladder method is not recommended.
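For reference, the Chain-Ladder method discussed above can be sketched in a few lines; the cumulative run-off triangle is invented for illustration (rows are accident years, columns development years, NaN means not yet observed):

```python
import numpy as np

# Illustrative cumulative run-off triangle (accident years x development years).
tri = np.array([
    [1000., 1800., 2100., 2200.],
    [1100., 2000., 2300., np.nan],
    [ 900., 1700., np.nan, np.nan],
    [1200., np.nan, np.nan, np.nan],
])

n = tri.shape[1]
factors = []
for j in range(n - 1):
    mask = ~np.isnan(tri[:, j + 1])
    factors.append(tri[mask, j + 1].sum() / tri[mask, j].sum())  # volume-weighted link ratio

proj = tri.copy()
for j in range(n - 1):
    todo = np.isnan(proj[:, j + 1])
    proj[todo, j + 1] = proj[todo, j] * factors[j]   # roll forward the missing cells

latest = np.array([row[~np.isnan(row)][-1] for row in tri])  # latest observed diagonal
reserves = proj[:, -1] - latest                              # reserve per accident year
print(np.round(factors, 3), np.round(reserves, 1), round(reserves.sum(), 1))
```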
Abstract:
Sensory neuronopathies (SNNs) encompass paraneoplastic, infectious, dysimmune, toxic, inherited, and idiopathic disorders. Recently described diagnostic criteria allow SNN to be differentiated from other forms of sensory neuropathy, but there is no validated strategy based on routine clinical investigations for the etiological diagnosis of SNN. In a multicenter study, the clinical, biological, and electrophysiological characteristics of 148 patients with SNN were analyzed. Multiple correspondence analysis and logistic regression were used to identify patterns differentiating between forms of SNNs with different etiologies. Models were constructed using a study population of 88 patients and checked using a test population of 60 cases. Four patterns were identified. Pattern A, with an acute or subacute onset in the four limbs or arms, early pain, and frequently affecting males over 60 years of age, identified mainly paraneoplastic, toxic, and infectious SNN. Pattern B identified patients with progressive SNN and was divided into patterns C and D, the former corresponding to patients with inherited or slowly progressive idiopathic SNN with severe ataxia and electrophysiological abnormalities and the latter to patients with idiopathic, dysimmune, and sometimes paraneoplastic SNN with a more rapid course than in pattern C. The diagnostic strategy based on these patterns correctly identified 84/88 and 58/60 patients in the study and test populations, respectively.
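A minimal sketch of the classification step described above (logistic regression separating etiological patterns), with hypothetical feature names and placeholder data rather than the study variables:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical binary clinical/electrophysiological features and pattern labels (A-D).
rng = np.random.default_rng(42)
X = pd.DataFrame({
    "acute_onset":   rng.integers(0, 2, 148),
    "early_pain":    rng.integers(0, 2, 148),
    "age_over_60":   rng.integers(0, 2, 148),
    "severe_ataxia": rng.integers(0, 2, 148),
})
y = rng.choice(list("ABCD"), size=148)

# Split into a study population and a test population, mirroring the 88/60 design.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=60, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)   # multinomial by default
print(clf.score(X_test, y_test))  # fraction of test patients assigned the correct pattern
```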