936 results for Natural Catastrophe, Property Insurance, Loss Distribution, Truncated Data, Ruin Probability
Abstract:
Despite the progress of modern medicine and advances in knowledge, pain relief remains a poorly studied and poorly understood field. Children diagnosed with cerebral palsy who are unable to communicate verbally are among the vulnerable populations at risk of not being understood. It is now established that these children can experience pain from a variety of sources. The aim of this exploratory qualitative study is to identify the ethical issues encountered by care providers in a residential care facility when assessing and relieving pain in this population of children. Information was gathered through semi-structured interviews with families, caregivers and care providers. The data were then compared with what is found in the literature. According to the parents and caregivers, the regular staff of the residential and respite care facility as a whole show a greater understanding of their child's needs than the providers they have encountered in acute care settings. Care providers assess observed behaviours on a subjective basis, which leads to inconsistent management. They also state that the main difficulty of working with these children is the uncertainty of having correctly interpreted a behaviour and of having taken the right action. In conclusion, despite the available research and the possibility of using validated tools, clinical practice does not meet the standard of practice to which these children are entitled in all the settings where they receive care.
Abstract:
We introduce a new class of bivariate distributions of Marshall-Olkin type, the bivariate Erlang distribution. Its Laplace transform, moments and conditional densities are derived, and potential applications in life insurance and finance are considered. Maximum likelihood estimators of the parameters are computed via the Expectation-Maximization algorithm. The research project is then devoted to the study of multivariate risk processes, which can be useful for ruin problems of insurance companies with dependent classes of business. We apply results from the theory of piecewise deterministic Markov processes to obtain the exponential martingales needed to establish computable upper bounds on the ruin probability, whose exact expressions are intractable.
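For context, a minimal sketch of the kind of bound referred to: in the classical univariate risk model, an exponential (Lundberg) martingale yields an exponential upper bound on the ruin probability, and the multivariate bounds described above generalize this idea. Assuming the usual Cramér-Lundberg setting (an assumption made here only for illustration), with initial surplus u and adjustment coefficient R, the classical inequality reads:

```latex
% Classical (univariate) Lundberg bound -- a simplified analogue of the
% computable upper bounds mentioned above, stated under the standard
% Cramer-Lundberg assumptions (assumed here for illustration only).
\[
  \psi(u) \;=\; \Pr\!\left(\inf_{t \ge 0} U_t < 0 \,\middle|\, U_0 = u\right)
  \;\le\; e^{-R u},
  \qquad\text{where } R > 0 \text{ solves } \lambda + c\,r \;=\; \lambda\, M_X(r),
\]
% with claim arrival rate \lambda, premium rate c, and claim-size moment
% generating function M_X(r) = E[e^{rX}]; the suitably compensated process
% e^{-r U_t} is the exponential martingale underlying the bound.
```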
Abstract:
In machine learning, the field concerned with using data to learn solutions to the problems we want to delegate to machines, the Artificial Neural Network (ANN) model is a valuable tool. It was invented nearly sixty years ago, and yet it is still the subject of active research today. Recently, through deep learning, it has advanced the state of the art in many application areas such as computer vision, speech processing and natural language processing. The ever-growing amount of available data and improvements in computer hardware have made it easier to train high-capacity models such as deep ANNs. However, difficulties inherent to training such models, such as local minima, still have a significant impact. Deep learning therefore seeks solutions that regularize or ease the optimization; unsupervised pre-training and the "Dropout" technique are two examples. The first two works presented in this thesis follow this line of research. The first studies the problem of vanishing/exploding gradients in deep architectures. It shows that simple choices, such as the activation function or the initialization of the network weights, have a large influence, and we propose normalized initialization to ease learning. The second focuses on the choice of activation function and presents the rectifier, or rectified linear unit. This study was the first to emphasize piecewise-linear activation functions for deep neural networks in supervised learning; today, this type of activation function is an essential component of deep neural networks. The last two works presented focus on applications of ANNs to natural language processing. The first addresses domain adaptation for sentiment analysis using Denoising Auto-Encoders, and remains the state of the art to this day. The second deals with learning from multi-relational data with an energy-based model that can be used for the task of word sense disambiguation.
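To make the two ideas from this abstract concrete, here is a minimal NumPy sketch of the normalized ("Glorot") initialization and the rectifier activation; the layer sizes and names are illustrative, not taken from the thesis.

```python
import numpy as np

def normalized_init(n_in, n_out, rng=np.random.default_rng(0)):
    """Normalized ("Glorot") initialization: weights drawn uniformly on
    [-sqrt(6/(n_in+n_out)), +sqrt(6/(n_in+n_out))], which keeps activation
    and gradient variances roughly constant across layers."""
    limit = np.sqrt(6.0 / (n_in + n_out))
    return rng.uniform(-limit, limit, size=(n_in, n_out))

def relu(x):
    """Rectifier (rectified linear unit): max(0, x), applied element-wise."""
    return np.maximum(0.0, x)

# Forward pass through a small illustrative network (sizes are arbitrary).
x = np.random.default_rng(1).standard_normal((32, 100))   # batch of inputs
W1, W2 = normalized_init(100, 50), normalized_init(50, 10)
hidden = relu(x @ W1)
output = hidden @ W2
print(hidden.shape, output.shape)   # (32, 50) (32, 10)
```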
Abstract:
The thesis is entitled Studies on Thermal Structure in the Seas Around India. An attempt is made in this study to document the observed variability of thermal structure, on both seasonal and short-term scales, in the eastern Arabian Sea and southwestern Bay of Bengal, using spatial and time series data sets drawn from a reasonably strong data base. The present study has certain limitations. The mean temperatures are based on an uneven distribution of data in space and time. Some of the areas, although having full annual coverage, do not have adequate data for some months, and some portions of the study area have data gaps. The consistency and coherence of the internal wave characteristics could not be examined owing to the non-availability of adequate data sets, and the influence of generating mechanisms other than winds and tides on the observed internal wave fields could not be ascertained for lack of data. A comprehensive and intensive data collection programme can, however, overcome these limitations. The deployment of moored buoys with arrays of sensors at different depths at some important locations for about 5 to 10 years would provide intensive and extensive data sets. Such a data base would make it possible to address the short-term and seasonal variability of the thermal field and to understand in detail the individual and collective influences of the various physical and dynamical mechanisms responsible for such variability.
Abstract:
The study of variable stars is an important topic in modern astrophysics. With the advent of powerful telescopes and high-resolution CCDs, variable star data are accumulating on the order of petabytes. This huge amount of data requires automated methods as well as human experts. This thesis is devoted to the analysis of variable star astronomical time series data and hence belongs to the interdisciplinary field of Astrostatistics. For an observer on Earth, stars whose apparent brightness changes over time are called variable stars. The brightness may vary in a regular (periodic), quasi-periodic (semi-periodic) or irregular (aperiodic) manner, and the variations are caused by various mechanisms. In some cases the variation is due to internal thermo-nuclear processes; such stars are generally known as intrinsic variables. In other cases it is due to external processes, such as eclipses or rotation; these are known as extrinsic variables. Intrinsic variables can be further grouped into pulsating variables, eruptive variables and flare stars. Extrinsic variables are grouped into eclipsing binary stars and chromospheric stars. Pulsating variables can again be classified into Cepheid, RR Lyrae, RV Tauri, Delta Scuti, Mira, etc. The eruptive or cataclysmic variables are novae, supernovae, etc., which occur rarely and are not periodic phenomena. Most of the other variations are periodic in nature. Variable stars can be observed in many ways, such as photometry, spectrophotometry and spectroscopy. A sequence of photometric observations of a variable star produces time series data containing time, magnitude and error. The plot of a variable star's apparent magnitude against time is known as the light curve. If the time series data are folded on a period, the plot of apparent magnitude against phase is known as the phased light curve. The unique shape of the phased light curve is characteristic of each type of variable star, and one way to identify and classify a variable star is for an expert to inspect the phased light curve visually. For the last several years, automated algorithms have been used to classify groups of variable stars with the help of computers. Research on variable stars can be divided into stages such as observation, data reduction, data analysis, modeling and classification. Modeling of variable stars helps to determine their short-term and long-term behaviour, to construct theoretical models (e.g., the Wilson-Devinney model for eclipsing binaries) and to derive stellar properties such as mass, radius, luminosity, temperature, internal and external structure, chemical composition and evolution. Classification requires the determination of basic parameters such as period, amplitude and phase, as well as some other derived parameters. Of these, period is the most important, since wrong periods lead to sparse light curves and misleading information. Time series analysis applies mathematical and statistical tests to data in order to quantify the variation, understand the nature of the time-varying phenomena, gain physical understanding of the system and predict its future behaviour. Astronomical time series usually suffer from unevenly spaced time instants, varying error conditions and the possibility of large gaps. For ground-based observations this is due to the daily cycle of daylight and to weather conditions, while observations from space may suffer from the impact of cosmic ray particles.
Many large-scale astronomical surveys such as MACHO, OGLE, EROS, ROTSE, PLANET, Hipparcos, MISAO, NSVS, ASAS, Pan-STARRS, Kepler, ESA, Gaia, LSST and CRTS provide variable star time series data, even though their primary intention is not variable star observation. The Center for Astrostatistics at Pennsylvania State University was established to help the astronomical community with statistical tools for harvesting and analysing archival data, and most of these surveys release their data to the public for further analysis. There exist many period search algorithms for astronomical time series analysis, which can be classified into parametric methods (which assume some underlying distribution for the data) and non-parametric methods (which do not assume any statistical model, such as a Gaussian). Many of the parametric methods are based on variations of the discrete Fourier transform, such as the Generalised Lomb-Scargle periodogram (GLSP) by Zechmeister (2009) and the Significant Spectrum (SigSpec) method by Reegen (2007). Non-parametric methods include Phase Dispersion Minimisation (PDM) by Stellingwerf (1978) and the cubic spline method by Akerlof (1994). Even though most of these methods can be automated, none of them fully recovers the true periods. Wrong period detection can have several causes, such as power leakage to other frequencies, which is due to the finite total interval, the finite sampling interval and the finite amount of data. Another problem is aliasing, which is due to the influence of regular sampling. Spurious periods also appear because of long gaps, and power flow to harmonic frequencies is an inherent problem of Fourier methods. Hence obtaining the exact period of a variable star from its time series data is still a difficult problem for huge databases subjected to automation. As Matthew Templeton (AAVSO) states, "Variable star data analysis is not always straightforward; large-scale, automated analysis design is non-trivial". Derekas et al. (2007) and Deb et al. (2010) state that "The processing of huge amount of data in these databases is quite challenging, even when looking at seemingly small issues such as period determination and classification". It would benefit the variable star astronomical community if basic parameters such as period, amplitude and phase were obtained more accurately when huge time series databases are subjected to automation. In the present thesis work, the theories of four popular period search methods are studied, their strengths and weaknesses are evaluated by applying them to two survey databases, and finally a modified form of the cubic spline method is introduced to confirm the exact period of a variable star. For the classification of newly discovered variable stars and their entry into the "General Catalogue of Variable Stars" or other databases such as the "Variable Star Index", the characteristics of the variability have to be quantified in terms of variable star parameters.
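As a concrete illustration of the non-parametric approach mentioned above (a simplified dispersion-minimisation sketch in the spirit of PDM, not Stellingwerf's exact statistic or the thesis's modified cubic spline method), the following NumPy example folds an unevenly sampled light curve on trial periods and keeps the period that minimises the ratio of within-phase-bin variance to total variance; all names and parameter values are illustrative.

```python
import numpy as np

def phase_fold(t, period, t0=0.0):
    """Return phases in [0, 1) for observation times t folded on `period`."""
    return ((t - t0) / period) % 1.0

def dispersion_statistic(t, mag, period, n_bins=10):
    """Ratio of mean within-bin variance to total variance of the folded
    light curve; small values indicate a coherent phased light curve."""
    phase = phase_fold(t, period)
    bins = np.minimum((phase * n_bins).astype(int), n_bins - 1)
    total_var = mag.var()
    within = [mag[bins == b].var() for b in range(n_bins) if (bins == b).sum() > 1]
    return np.mean(within) / total_var if within and total_var > 0 else np.inf

def best_period(t, mag, trial_periods, n_bins=10):
    """Grid search over trial periods, returning the one with minimum dispersion."""
    stats = [dispersion_statistic(t, mag, p, n_bins) for p in trial_periods]
    return trial_periods[int(np.argmin(stats))]

# Synthetic, unevenly sampled sinusoidal light curve with a 2.5-day period.
rng = np.random.default_rng(42)
t = np.sort(rng.uniform(0, 100, 300))               # irregular time sampling
mag = 12.0 + 0.3 * np.sin(2 * np.pi * t / 2.5) + 0.02 * rng.standard_normal(300)
trials = np.linspace(0.5, 10.0, 5000)
print(best_period(t, mag, trials))                   # expected: close to 2.5
```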
Abstract:
There are two principal chemical concepts that are important for studying the natural environment. The first is thermodynamics, which describes whether a system is at equilibrium or can spontaneously change through chemical reactions. The second is how fast chemical reactions take place once they start (kinetics, or the rate of chemical change). In this work we examine a natural system in which both thermodynamic and kinetic factors are important in determining the abundance of NH4+, NO2− and NO3− in surface waters. Samples were collected in the Arno Basin (Tuscany, Italy), a system in which natural and anthropogenic effects both contribute to strongly modify the chemical composition of the water. Thermodynamic modelling based on the reduction-oxidation reactions involved in the sequence NH4+ → NO2− → NO3− under equilibrium conditions has made it possible to determine the Eh redox potential values that characterise the state of each sample and, consequently, of the fluid environment from which it was drawn. Just as pH expresses the concentration of H+ in solution, redox potential expresses the tendency of an environment to receive or supply electrons. In this context, oxic environments, such as river systems, are said to have a high redox potential because O2 is available as an electron acceptor. The principles of thermodynamics and chemical kinetics yield a model that often does not completely describe the reality of natural systems. Chemical reactions may indeed fail to reach equilibrium because the products escape from the site of the reaction or because the reactions involved in the transformation are very slow, so that non-equilibrium conditions persist for long periods. Moreover, reaction rates can be sensitive to poorly understood catalytic or surface effects, while variables such as concentration (a large number of chemical species can coexist and interact concurrently), temperature and pressure can have large gradients in natural systems. Taking this into account, data from 91 water samples have been modelled using statistical methodologies for compositional data. The application of log-contrast analysis has yielded statistical parameters that can be correlated with the calculated Eh values. In this way, natural conditions in which chemical equilibrium is hypothesised, as well as underlying fast reactions, are compared with those described by a stochastic approach.
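For reference, the kind of equilibrium relation underlying such redox modelling is the Nernst equation, stated here only in its general textbook form; the specific half-reactions and activity corrections applied to the Arno Basin samples are not reproduced.

```latex
% General Nernst equation relating the redox potential Eh of a half-reaction
% Ox + n e^- -> Red to the standard potential E^0 and the activities of the
% oxidised and reduced species (R: gas constant, T: temperature, F: Faraday).
\[
  E_h \;=\; E^{0} \;-\; \frac{R\,T}{n F}\,
            \ln \frac{a_{\mathrm{Red}}}{a_{\mathrm{Ox}}}
      \;=\; E^{0} \;+\; \frac{R\,T}{n F}\,
            \ln \frac{a_{\mathrm{Ox}}}{a_{\mathrm{Red}}}.
\]
```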
Abstract:
The retention of peatland carbon (C) and the ability to continue to draw down and store C from the atmosphere is not only important for the UK terrestrial carbon inventory, but also for a range of ecosystem services, the landscape value and the ecology and hydrology of ~15% of the land area of the UK. Here we review the current state of knowledge on the C balance of UK peatlands using several studies which highlight not only the importance of making good flux measurements, but also the spatial and temporal variability of different flux terms that characterise a landscape affected by a range of natural and anthropogenic processes and threats. Our data emphasise the importance of measuring (or accurately estimating) all components of the peatland C budget. We highlight the role of the aquatic pathway and suggest that fluxes are higher than previously thought. We also compare the contemporary C balance of several UK peatlands with historical rates of C accumulation measured using peat cores, thus providing a long-term context for present-day measurements and their natural year-on-year variability. Contemporary measurements from 2 sites suggest that current accumulation rates (–56 to –72 g C m–2 yr–1) are at the lower end of those seen over the last 150 yr in peat cores (–35 to –209 g C m–2 yr–1). Finally, we highlight significant current gaps in knowledge and identify where levels of uncertainty are high, as well as emphasise the research challenges that need to be addressed if we are to improve the measurement and prediction of change in the peatland C balance over future decades.
Abstract:
Purpose – The paper addresses the practical problems which emerge when attempting to apply longitudinal approaches to the assessment of property depreciation using valuation-based data. These problems relate to inconsistent valuation regimes and the difficulties in finding appropriate benchmarks. Design/methodology/approach – The paper adopts a case study of seven major office locations around Europe and attempts to determine ten-year rental value depreciation rates based on a longitudinal approach using IPD, CBRE and BNP Paribas datasets. Findings – The depreciation rates range from a 5 per cent PA depreciation rate in Frankfurt to a 2 per cent appreciation rate in Stockholm. The results are discussed in the context of the difficulties in applying this method with inconsistent data. Research limitations/implications – The paper has methodological implications for measuring property investment depreciation and provides an example of the problems in adopting theoretically sound approaches with inconsistent information. Practical implications – Valuations play an important role in performance measurement and cross border investment decision making and, therefore, knowledge of inconsistency of valuation practice aids decision making and informs any application of valuation-based data in the attainment of depreciation rates. Originality/value – The paper provides new insights into the use of property market valuation data in a cross-border context, insights that previously had been anecdotal and unproven in nature.
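As a sketch of what a longitudinal rental depreciation rate can look like (one common formulation, shown purely for illustration and not necessarily the exact measure applied to the IPD, CBRE and BNP Paribas data), the annual rate over a T-year period compares the rental growth of the ageing asset with that of a benchmark:

```latex
% Illustrative longitudinal depreciation measure: RV_a denotes the rental
% value of the ageing asset, RV_b the benchmark rental value, observed at
% the start (0) and end (T) of the study period.
\[
  d \;=\; 1 \;-\;
  \left(
    \frac{RV_{a,T}/RV_{a,0}}{\;RV_{b,T}/RV_{b,0}\;}
  \right)^{1/T},
\]
% so that d > 0 indicates depreciation (the asset's rental value grows more
% slowly than the benchmark) and d < 0 indicates appreciation, as reported
% for Stockholm above.
```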
Abstract:
Enteric bacteria with a demonstrable or potential ability to form attaching-effacing lesions, so-called attaching-effacing (AE) bacteria, have been found in the intestinal tracts of a wide variety of warm-blooded animal species, including man. In some host species, for example cattle, pigs, rabbits and human beings, attaching-effacing Escherichia coli (AEEC) have an established role as enteropathogens. In other host species, AE bacteria are of less certain significance. With continuing advances in the detection and typing of AE strains, the importance of these bacteria for many hosts is likely to become clearer. The pathogenic effects of AE bacteria result from adhesion to the intestinal mucosa by a variety of mechanisms, culminating in the formation of the characteristic intimate adhesion of the AE lesion. The ability to induce AE lesions is mediated by the co-ordinated expression of some 40 bacterial genes organized within a so-called pathogenicity island, known as the "Locus for Enterocyte Effacement". It is also believed that the production of bacterial toxins, principally Vero toxins, is a significant virulence factor for some AEEC strains. Recent areas of research into AE bacteria include: the use of Citrobacter rodentium to model human AEEC disease; quorum-sensing mechanisms used by AEEC to modulate virulence gene expression; and the potential role of adhesion in the persistent colonization of the intestine by AE bacteria. This review of AE bacteria covers their molecular biology, their occurrence in various animal species, and the diagnosis, pathology and clinical aspects of animal diseases with which they are associated. Reference is made to human pathogens where appropriate. The focus is mainly on natural colonization and disease, but complementary experimental data are also included. (C) 2004 Elsevier Ltd. All rights reserved.
Cross-layer design for MIMO systems over spatially correlated and keyhole Nakagami-m fading channels
Abstract:
Cross-layer design is a generic designation for a set of efficient adaptive transmission schemes, across multiple layers of the protocol stack, that are aimed at enhancing the spectral efficiency and increasing the transmission reliability of wireless communication systems. In this paper, one such cross-layer design scheme that combines physical layer adaptive modulation and coding (AMC) with link layer truncated automatic repeat request (T-ARQ) is proposed for multiple-input multiple-output (MIMO) systems employing orthogonal space-time block coding (OSTBC). The performance of the proposed cross-layer design is evaluated in terms of achievable average spectral efficiency (ASE), average packet loss rate (PLR) and outage probability, for which analytical expressions are derived, considering transmission over two types of MIMO fading channels, namely, spatially correlated Nakagami-m fading channels and keyhole Nakagami-m fading channels. Furthermore, the effects of the maximum number of ARQ retransmissions, numbers of transmit and receive antennas, Nakagami fading parameter and spatial correlation parameters, are studied and discussed based on numerical results and comparisons. Copyright © 2009 John Wiley & Sons, Ltd.
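As a simplified illustration of the channel model involved (a single-antenna Nakagami-m link rather than the paper's correlated/keyhole MIMO-OSTBC analysis), the following sketch uses the fact that the power gain of a Nakagami-m fading amplitude is Gamma-distributed and estimates an outage probability by Monte Carlo; all parameter values are arbitrary.

```python
import numpy as np

def nakagami_power_gain(m, omega, size, rng):
    """Squared Nakagami-m amplitude (channel power gain): Gamma(shape=m,
    scale=omega/m), so its mean equals omega."""
    return rng.gamma(shape=m, scale=omega / m, size=size)

def outage_probability(m, omega, snr_avg_db, snr_th_db, n_samples=1_000_000, seed=0):
    """Monte Carlo estimate of P(instantaneous SNR < threshold) for a
    single Nakagami-m branch with average SNR `snr_avg_db` (in dB)."""
    rng = np.random.default_rng(seed)
    gain = nakagami_power_gain(m, omega, n_samples, rng)
    snr_inst = 10 ** (snr_avg_db / 10) * gain / omega   # instantaneous SNR
    return np.mean(snr_inst < 10 ** (snr_th_db / 10))

# Heavier fading (m = 1 is Rayleigh) gives a larger outage probability
# than milder fading (m = 4) at the same average SNR.
for m in (1.0, 2.0, 4.0):
    print(m, outage_probability(m=m, omega=1.0, snr_avg_db=10.0, snr_th_db=5.0))
```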
Abstract:
In this paper, we formulate a flexible density function from the selection mechanism viewpoint (see, for example, Bayarri and DeGroot (1992) and Arellano-Valle et al. (2006)) which possesses nice biological and physical interpretations. The new density function contains as special cases many models that have been proposed recently in the literature. In constructing this model, we assume that the number of competing causes of the event of interest has a general discrete distribution characterized by its probability generating function. This function has an important role in the selection procedure as well as in computing the conditional personal cure rate. Finally, we illustrate how various models can be deduced as special cases of the proposed model. (C) 2011 Elsevier B.V. All rights reserved.
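To make the role of the probability generating function explicit, the standard competing-causes construction can be sketched as follows (shown for illustration, without reproducing the paper's exact notation): if the number of competing causes N has pgf A_N and each latent cause produces an event time with survival function S(t), then the population survival function and the cure fraction follow directly from A_N.

```latex
% Competing-causes construction: N latent causes with pgf A_N(s) = E[s^N];
% given N = k, the observed time is the minimum of k i.i.d. latent times
% with common survival function S(t); no cause (N = 0) means cure.
\[
  S_{\mathrm{pop}}(t) \;=\; \Pr(T > t)
  \;=\; \sum_{k \ge 0} \Pr(N = k)\, S(t)^{k}
  \;=\; A_N\!\bigl(S(t)\bigr),
  \qquad
  p_0 \;=\; \lim_{t \to \infty} S_{\mathrm{pop}}(t) \;=\; A_N(0) \;=\; \Pr(N = 0),
\]
% so the cure rate p_0 is the pgf evaluated at zero, and conditional
% (personal) cure rates follow from A_N by conditioning on survival to t.
```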
Abstract:
Linear mixed models were developed to handle clustered data and have been a topic of increasing interest in statistics for the past 50 years. Normality (or symmetry) of the random effects is a common assumption in linear mixed models, but it may sometimes be unrealistic, obscuring important features of among-subjects variation. In this article, we utilize skew-normal/independent distributions as a tool for robust modeling of linear mixed models under a Bayesian paradigm. The skew-normal/independent distributions form an attractive class of asymmetric heavy-tailed distributions that includes the skew-normal, skew-t, skew-slash and skew-contaminated normal distributions as special cases, providing an appealing robust alternative to the routine use of symmetric distributions in this type of model. The methods developed are illustrated using a real data set from the Framingham cholesterol study. (C) 2009 Elsevier B.V. All rights reserved.
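For concreteness, the building block of this class is the skew-normal density, shown here in its standard location-scale form; the skew-normal/independent family is then typically obtained by scaling a skew-normal variable with a positive mixing variable, a construction only alluded to here.

```latex
% Skew-normal density with location \mu, scale \sigma and skewness \lambda;
% \phi and \Phi are the standard normal pdf and cdf. Setting \lambda = 0
% recovers the normal distribution.
\[
  f(y \mid \mu, \sigma^2, \lambda)
  \;=\; \frac{2}{\sigma}\,
        \phi\!\left(\frac{y - \mu}{\sigma}\right)
        \Phi\!\left(\lambda\,\frac{y - \mu}{\sigma}\right),
  \qquad y \in \mathbb{R}.
\]
```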
Abstract:
The Birnbaum-Saunders regression model is becoming increasingly popular in lifetime analyses and reliability studies. In this model, the signed likelihood ratio statistic provides the basis for hypothesis testing and for the construction of confidence limits for a single parameter of interest. We focus on the small-sample case, where the standard normal distribution gives a poor approximation to the true distribution of the statistic. We derive three adjusted signed likelihood ratio statistics that lead to very accurate inference even for very small samples. Two empirical applications are presented. (C) 2010 Elsevier B.V. All rights reserved.
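For reference, the statistic in question is the signed square root of the likelihood ratio; the adjusted versions referred to modify it so that its null distribution is closer to standard normal in small samples. The generic Barndorff-Nielsen form is shown below purely as an illustration of the type of adjustment, not as the paper's specific derivation.

```latex
% Signed likelihood ratio statistic for a scalar parameter of interest \psi,
% with \hat\psi the MLE and \ell_p the profile log-likelihood.
\[
  r(\psi) \;=\; \operatorname{sign}(\hat{\psi} - \psi)\,
  \sqrt{\,2\,\bigl[\ell_p(\hat{\psi}) - \ell_p(\psi)\bigr]\,},
  \qquad
  r^{*}(\psi) \;=\; r(\psi) \;+\; \frac{1}{r(\psi)}\,
  \log\!\frac{q(\psi)}{r(\psi)},
\]
% where q(\psi) is a model-dependent correction term; r* is standard normal
% to higher order of accuracy than r.
```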
Abstract:
In this article, we deal with the issue of performing accurate small-sample inference in the Birnbaum-Saunders regression model, which can be useful for modeling lifetime or reliability data. We derive a Bartlett-type correction for the score test and numerically compare the corrected test with the usual score test and some other competitors.
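The Bartlett-type correction referred to is of the kind introduced by Cordeiro and Ferrari (1991), which rescales the score statistic by a quadratic polynomial in the statistic itself; the coefficients depend on cumulants of log-likelihood derivatives and are model-specific, so they are left unspecified in this sketch.

```latex
% Bartlett-type (Cordeiro-Ferrari) corrected score statistic: S_R is the
% usual score (Rao) statistic and a, b, c are model-dependent constants of
% order n^{-1}; S_R^* follows its reference chi-squared distribution more
% closely than S_R in small samples.
\[
  S_R^{*} \;=\; S_R\,\Bigl[\,1 - \bigl(c + b\,S_R + a\,S_R^{2}\bigr)\Bigr].
\]
```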
Abstract:
This paper considers the issue of modeling fractional data observed on [0,1), (0,1] or [0,1]. Mixed continuous-discrete distributions are proposed. The beta distribution is used to describe the continuous component of the model since its density can have quite different shapes depending on the values of the two parameters that index the distribution. Properties of the proposed distributions are examined. Also, estimation based on maximum likelihood and conditional moments is discussed. Finally, practical applications that employ real data are presented.
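One standard way to build such a mixed continuous-discrete distribution on [0,1] (shown here as an illustrative sketch, not necessarily in the paper's exact parameterisation) is to place point masses at the boundary values and a beta density on the interior:

```latex
% Zero-and-one-inflated beta construction: point masses \pi_0 at 0 and
% \pi_1 at 1 (either may be dropped for data on [0,1) or (0,1]), and a
% beta(p, q) density on the open interval (0,1).
\[
  \Pr(Y = 0) = \pi_0, \qquad \Pr(Y = 1) = \pi_1,
  \qquad
  f(y) \;=\; (1 - \pi_0 - \pi_1)\,
  \frac{\Gamma(p+q)}{\Gamma(p)\,\Gamma(q)}\, y^{\,p-1} (1 - y)^{\,q-1},
  \quad 0 < y < 1,
\]
% with \pi_0, \pi_1 \ge 0 and \pi_0 + \pi_1 < 1.
```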