957 results for Statistical approach
Abstract:
This paper presents an application of the Multi-Scale Integrated Analysis of Societal and Ecosystem Metabolism (MuSIASEM) approach to the estimation of quantities of Gross Value Added (GVA) referring to economic entities defined at different scales of study. The method first estimates benchmark values of the pace of GVA generation per hour of labour across economic sectors. These values are estimated as intensive variables (e.g. €/hour) by dividing the sectoral GVA of the country (expressed in € per year) by the hours of paid work in that same sector per year. This assessment is obtained using national statistics (top-down information referring to the national level). The approach then uses bottom-up information (the number of hours of paid work in the various economic sectors of an economic entity, e.g. a city or a province, operating within the country) to estimate the amount of GVA produced by that entity. This estimate is obtained by multiplying the number of hours of work in each sector of the economic entity by the benchmark value of GVA generation per hour of work in that particular sector (national average). The method is applied and tested on two different socio-economic systems: (i) Catalonia (considered level n) and Barcelona (considered level n-1); and (ii) the region of Lima (considered level n) and the Lima Metropolitan Area (considered level n-1). In both cases, the GVA per year of the local economic entity (Barcelona and the Lima Metropolitan Area) is estimated and the resulting value is compared with GVA data provided by statistical offices. The empirical analysis seems to validate the approach, even though the case of the Lima Metropolitan Area indicates a need for additional care when estimating GVA in primary sectors (agriculture and mining).
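The two-step estimate described above can be sketched with made-up numbers (the sectors, GVA figures and hours below are illustrative, not taken from the paper):

```python
# Minimal sketch of the top-down/bottom-up GVA estimate.
# All figures are hypothetical, not real statistics.

national = {
    # sector: (GVA in € per year, paid hours per year) at national level
    "agriculture": (2.0e9, 1.0e8),
    "industry":    (6.0e9, 2.0e8),
    "services":    (2.4e10, 6.0e8),
}

# Top-down step: benchmark pace of GVA generation (€/hour) per sector.
benchmark = {s: gva / hours for s, (gva, hours) in national.items()}

# Bottom-up step: hours of paid work observed in the local entity.
entity_hours = {"agriculture": 1.0e6, "industry": 5.0e6, "services": 4.0e7}

# Estimated GVA of the entity = sum over sectors of hours x benchmark.
entity_gva = sum(entity_hours[s] * benchmark[s] for s in entity_hours)
print(f"estimated entity GVA: {entity_gva:.3e} €/year")
```

The benchmark table, built once from national statistics, can then be reused for any entity for which sectoral hours of paid work are known.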
Abstract:
Familial searching consists of searching a National DNA Database (NDNAD) for a full profile left at a crime scene. In this paper we are interested in the circumstance where no full match is returned, but a partial match is found between a database member's profile and the crime stain. Because close relatives share more of their DNA than unrelated persons, this partial match may indicate that the crime stain was left by a close relative of the person with whom the partial match was found. This approach has successfully solved important crimes in the UK and the USA. In a previous paper, a model taking into account substructure and siblings was used to simulate a NDNAD. In this paper, we have used this model to test the usefulness of familial searching and to offer guidelines for the pre-assessment of cases based on the likelihood ratio. Siblings of "persons" present in the simulated Swiss NDNAD were created. These profiles (N=10,000) were used as traces and compared to the whole database (N=100,000). The statistical results obtained show that the technique has great potential, confirming the findings of previous studies. However, the effectiveness of the technique is only one part of the story. Familial searching has legal and ethical aspects that should not be ignored. In Switzerland, for example, there are no specific guidelines on the legality of familial searching. This article presents statistical results and also addresses the criminological and civil liberties aspects to be taken into account when weighing the risks and benefits of familial searching.
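As a loose illustration of why partial matches carry information about relatedness (a toy single-locus simulation with equifrequent alleles, not the authors' NDNAD model or likelihood-ratio computation):

```python
# Toy simulation: siblings share more alleles than unrelated individuals,
# which is the signal that familial searching exploits.
# Allele set and frequencies are hypothetical.
import random

random.seed(42)
ALLELES = ["A", "B", "C", "D", "E", "F"]

def genotype():
    # random unrelated individual: two independent alleles
    return [random.choice(ALLELES), random.choice(ALLELES)]

def child(mother, father):
    # Mendelian inheritance: one allele from each parent
    return [random.choice(mother), random.choice(father)]

def shared(g1, g2):
    # number of alleles in common (multiset intersection, 0..2)
    g2 = list(g2)
    n = 0
    for a in g1:
        if a in g2:
            g2.remove(a)
            n += 1
    return n

N = 20000
sib_share, unrel_share = 0, 0
for _ in range(N):
    m, f = genotype(), genotype()
    sib_share += shared(child(m, f), child(m, f))   # a sibling pair
    unrel_share += shared(genotype(), genotype())   # an unrelated pair

print(sib_share / N, unrel_share / N)
```

Averaged over many pairs, the sibling pairs share clearly more alleles than the unrelated pairs; real systems turn this difference into a likelihood ratio over full multi-locus profiles.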
Abstract:
This paper focuses on one of the methods for bandwidth allocation in an ATM network: the convolution approach. The convolution approach permits an accurate statistical study of the system load through accumulated calculations, since probabilistic results of the bandwidth allocation can be obtained. Nevertheless, the convolution approach has a high cost in terms of calculation and storage requirements. This makes real-time calculations difficult, so many authors do not consider this approach. With the aim of reducing the cost, we propose to use the multinomial distribution function: the enhanced convolution approach (ECA). This permits direct computation of the associated probabilities of the instantaneous bandwidth requirements and makes a simple deconvolution process possible. The ECA is used in connection acceptance control, and some results are presented.
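A minimal sketch of the plain convolution approach, with hypothetical on/off sources (for homogeneous sources the aggregate reduces to a binomial/multinomial form, which is the kind of shortcut the ECA exploits):

```python
# The distribution of aggregate bandwidth demand is the convolution of the
# per-connection demand distributions. Source parameters are illustrative.

def convolve(p, q):
    """Convolution of two distributions given as {bandwidth: probability}."""
    out = {}
    for b1, pr1 in p.items():
        for b2, pr2 in q.items():
            out[b1 + b2] = out.get(b1 + b2, 0.0) + pr1 * pr2
    return out

# Hypothetical on/off source: silent with prob 0.7, active at 2 Mb/s with 0.3.
onoff = {0: 0.7, 2: 0.3}

agg = {0: 1.0}
for _ in range(10):            # 10 multiplexed identical sources
    agg = convolve(agg, onoff)

# Probability that aggregate demand exceeds a 12 Mb/s link capacity:
p_overflow = sum(pr for b, pr in agg.items() if b > 12)
print(f"overflow probability: {p_overflow:.4f}")
```

Note the cost: the support of `agg` grows with every convolved connection, which is exactly the storage/computation burden the paper sets out to reduce.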
Abstract:
HIV virulence, i.e. the time of progression to AIDS, varies greatly among patients. As for other rapidly evolving pathogens of humans, it is difficult to know if this variance is controlled by the genotype of the host or that of the virus because the transmission chain is usually unknown. We apply the phylogenetic comparative approach (PCA) to estimate the heritability of a trait from one infection to the next, which indicates the control of the virus genotype over this trait. The idea is to use viral RNA sequences obtained from patients infected by HIV-1 subtype B to build a phylogeny, which approximately reflects the transmission chain. Heritability is measured statistically as the propensity for patients close in the phylogeny to exhibit similar infection trait values. The approach reveals that up to half of the variance in set-point viral load, a trait associated with virulence, can be heritable. Our estimate is significant and robust to noise in the phylogeny. We also check for the consistency of our approach by showing that a trait related to drug resistance is almost entirely heritable. Finally, we show the importance of taking into account the transmission chain when estimating correlations between infection traits. The fact that HIV virulence is, at least partially, heritable from one infection to the next has clinical and epidemiological implications. The difference between earlier studies and ours comes from the quality of our dataset and from the power of the PCA, which can be applied to large datasets and accounts for within-host evolution. The PCA opens new perspectives for approaches linking clinical data and evolutionary biology because it can be extended to study other traits or other infectious diseases.
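The notion of a trait being heritable from one infection to the next can be illustrated with a toy transmission-chain simulation (this is not the authors' phylogenetic comparative approach, which works on a phylogeny reconstructed from viral sequences; the heritability value and the noise model are assumptions):

```python
# Toy model: each new infection inherits a fraction h of the previous host's
# trait value (e.g. set-point viral load) plus independent noise, and we
# recover h by regressing successive infections on each other.
import random

random.seed(1)
h = 0.5            # assumed heritability along the chain
n = 20000

trait = [random.gauss(0, 1)]
for _ in range(n):
    # noise scaled so the trait stays at unit variance along the chain
    trait.append(h * trait[-1] + random.gauss(0, (1 - h**2) ** 0.5))

# Heritability estimated as the regression slope of infection i+1 on infection i.
xs, ys = trait[:-1], trait[1:]
mx = sum(xs) / len(xs)
my = sum(ys) / len(ys)
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
vx = sum((x - mx) ** 2 for x in xs) / len(xs)
est = cov / vx
print(f"estimated heritability: {est:.2f}")
```

In the real setting the transmission chain is unknown, which is why the phylogeny is used as a proxy for it.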
Abstract:
Computed tomography (CT) is an imaging technique in which interest has kept growing since its introduction in the early 1970s. In the clinical environment, this imaging system has become a gold-standard modality because of its high sensitivity in producing accurate diagnostic images. However, even though a direct benefit to patient healthcare is attributed to CT, the dramatic increase in the number of CT examinations performed has raised concerns about the potential negative effects of ionizing radiation on the population. To ensure that the risk-benefit balance remains in favor of the patient, it is important to balance image quality and dose in order to avoid unnecessary patient exposure. If this balance is important for adults, it should be an absolute priority for children undergoing CT examinations, especially for patients suffering from diseases requiring several follow-up examinations over their lifetime. Indeed, children and young adults are more sensitive to ionizing radiation and have a longer life expectancy than adults; for this population, the risk of developing a radiation-induced cancer, whose latency period can exceed 20 years, is significantly higher. Assuming that each examination is justified, it therefore becomes a priority to optimize CT acquisition protocols in order to minimize the dose delivered to the patient.
CT technology has been advancing at a rapid pace, and since 2009 new iterative image reconstruction techniques, called statistical iterative reconstructions, have been introduced in order to decrease patient exposure and improve image quality. The goal of the present work was to determine the potential of statistical iterative reconstructions to reduce dose as much as possible in examinations of children and young adults without compromising the image quality needed for diagnosis, and thereby to propose optimized protocols. The optimization step requires evaluating both the delivered dose and the image quality useful for diagnosis. While the dose is estimated using CT indices (CTDIvol and DLP), the particularity of this research was to use two radically different approaches to evaluate image quality. The first, the "physical" approach, computes physical metrics (SD, MTF, NPS, etc.) measured on phantoms under well-defined conditions. Although this approach is limited in that it does not take the radiologist's perception into account, it enables a quick and simple physical characterization of image properties. The second, the "clinical" approach, is based on the evaluation of anatomical structures (diagnostic criteria) present on patient images. Radiologists involved in the assessment step score the quality of these structures for diagnostic purposes using a simple rating scale. This approach is relatively complicated to implement and time-consuming; nevertheless, it is very close to radiologists' practice and can be considered the reference method.
Among the main results, this work showed that the statistical iterative algorithms studied in the clinic (ASIR and Veo) have a strong potential to reduce CT dose (by up to 90%). However, by their mechanisms, they modify the appearance of the image, introducing a change in texture that may affect the quality of the diagnosis. By comparing the results of the "clinical" and "physical" approaches, it was shown that this change in texture corresponds to a modification of the noise power spectrum, whose analysis makes it possible to anticipate or avoid a loss of diagnostic quality. This work also demonstrates that integrating these new reconstruction techniques in the clinic cannot be done simply on the basis of protocols designed for conventional reconstructions. The conclusions of this work and the image quality tools developed can guide future studies in the field of image quality, such as texture analysis or model observers dedicated to CT.
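The "physical" approach mentioned above can be illustrated by a standard noise power spectrum (NPS) estimate from noise-only regions of interest; the sketch below uses synthetic white noise and made-up acquisition parameters rather than real phantom data:

```python
# Sketch of a 2-D NPS estimate from noise-only ROIs via the FFT.
# Pixel size, ROI size and noise level are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
pixel = 0.5                     # pixel size in mm (assumed)
roi, n_roi = 64, 100            # ROI side length (pixels) and number of ROIs

nps = np.zeros((roi, roi))
for _ in range(n_roi):
    noise = rng.normal(0.0, 10.0, (roi, roi))   # stand-in for a noise ROI (HU)
    noise -= noise.mean()                       # remove the DC component
    nps += np.abs(np.fft.fft2(noise)) ** 2
nps *= pixel**2 / (roi**2 * n_roi)              # common NPS normalisation

# Consistency check (Parseval): integrating the NPS over spatial frequency
# recovers the pixel variance of the noise (here sigma^2 = 100 HU^2).
var_from_nps = nps.sum() / (roi * pixel) ** 2
print(round(var_from_nps, 1))
```

For white noise the NPS is flat; the texture changes introduced by iterative reconstructions show up as a shift of this spectrum toward lower frequencies, which is what the NPS analysis in the work above detects.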
Abstract:
When a new treatment is compared to an established one in a randomized clinical trial, it is standard practice to test statistically for non-inferiority rather than for superiority. When the endpoint is binary, one usually compares two treatments using either an odds-ratio or a difference of proportions. In this paper, we propose a mixed approach which uses both concepts. One first defines the non-inferiority margin using an odds-ratio and one ultimately proves non-inferiority statistically using a difference of proportions. The mixed approach is shown to be more powerful than the conventional odds-ratio approach when the efficacy of the established treatment is known (with good precision) and high (e.g. a success rate above 56%). The gain in power achieved may in turn lead to a substantial reduction in the sample size needed to prove non-inferiority. The mixed approach can be generalized to ordinal endpoints.
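A sketch of the mixed idea with illustrative numbers (the margin, success rates, sample sizes and the one-sided 5% critical value are assumptions, not the paper's worked example): the odds-ratio margin is translated into an absolute-difference margin at the known efficacy of the established treatment, and the test itself is run on the difference of proportions.

```python
# Mixed non-inferiority sketch: OR-defined margin, difference-based test.
from math import sqrt

p_ref = 0.80          # known success rate of the established treatment (assumed)
or_margin = 0.5       # non-inferiority margin expressed as an odds-ratio (assumed)

# Translate the odds-ratio margin into an absolute-difference margin.
odds_margin = or_margin * p_ref / (1 - p_ref)    # odds at the margin
p_margin = odds_margin / (1 + odds_margin)       # success rate at the margin
delta = p_ref - p_margin                         # difference-of-proportions margin

def noninferior(x_new, n_new, x_ref, n_ref, delta, z_crit=1.645):
    """One-sided z-test for non-inferiority on a difference of proportions."""
    p1, p0 = x_new / n_new, x_ref / n_ref
    se = sqrt(p1 * (1 - p1) / n_new + p0 * (1 - p0) / n_ref)
    return (p1 - p0 + delta) / se > z_crit

print(round(delta, 3), noninferior(150, 200, 160, 200, delta))
```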
Abstract:
Background: Recent advances in high-throughput technologies have produced a vast number of protein sequences, while the number of high-resolution structures has seen only a limited increase. This has spurred the development of many strategies to build protein structures from their sequences, generating a considerable number of alternative models. The selection of the model closest to the native conformation has thus become crucial for structure prediction. Several methods have been developed to score protein models by energies, by knowledge-based potentials, or by a combination of both. Results: Here, we present and demonstrate a theory to split knowledge-based potentials into biologically meaningful scoring terms and to combine them into new scores to predict near-native structures. Our strategy allows circumventing the problem of defining the reference state. In this approach we give the proof for a simple and linear application that can be further improved by optimizing the combination of Z-scores. Using the simplest composite score () we obtained predictions similar to state-of-the-art methods. Moreover, our approach has the advantage of identifying the most relevant terms involved in the stability of the protein structure. Finally, we also use the composite Z-scores to assess the conformation of models and to detect local errors. Conclusion: We have introduced a method to split knowledge-based potentials and to solve the problem of defining a reference state. The new scores detected near-native structures as accurately as state-of-the-art methods and successfully identified wrongly modeled regions of many near-native conformations.
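The Z-score combination step can be sketched as follows (the three scoring terms and their energy values are invented for illustration; in the paper the terms come from splitting knowledge-based potentials):

```python
# Each scoring term is standardised (Z-scored) across the set of alternative
# models, and the standardised terms are summed into a composite score.
# Lower composite score = more native-like in this toy setup.
from statistics import mean, stdev

models = {                      # model: raw energies of three split terms
    "model_a": (-120.0, -35.0, -60.0),
    "model_b": (-100.0, -30.0, -80.0),
    "model_c": (-140.0, -40.0, -55.0),
    "model_d": (-90.0,  -25.0, -50.0),
}

n_terms = 3
cols = list(zip(*models.values()))          # one column per scoring term
mu = [mean(c) for c in cols]
sd = [stdev(c) for c in cols]

composite = {
    name: sum((terms[i] - mu[i]) / sd[i] for i in range(n_terms))
    for name, terms in models.items()
}
best = min(composite, key=composite.get)    # candidate near-native model
print(best)
```

Because each term is standardised, no term dominates simply by its energy scale; optimizing weights on the Z-scores, as the paper suggests, is a natural refinement.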
Abstract:
In this article we present a hybrid approach to the automatic summarization of Spanish medical texts. There are many systems for automatic summarization based on statistics or on linguistics, but only a few combine both techniques. Our idea is that producing a good summary requires using the linguistic aspects of texts while also benefiting from the advantages of statistical techniques. We have integrated the Cortex (Vector Space Model) and Enertex (statistical physics) systems, coupled with the Yate term extractor, and the Disicosum system (linguistics). We compared these systems and then integrated them into a hybrid approach. Finally, we applied this hybrid system to a corpus of medical articles and evaluated its performance, obtaining good results.
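A toy sketch of the statistical/linguistic combination (this is not Cortex, Enertex or Disicosum; it merely illustrates mixing a frequency-based sentence score with a term-based cue, using an invented mini-document and term list):

```python
# Toy hybrid extractive summariser: statistical component (average term
# frequency) plus linguistic component (presence of extracted domain terms).
import re
from collections import Counter

doc = ("Aspirin reduces fever. The drug was tested on two hundred patients. "
       "Patients reported mild side effects. The weather was sunny that day.")
cues = {"aspirin", "drug", "patients"}     # hypothetical extracted terms

sentences = [s.strip() for s in doc.split(".") if s.strip()]
tf = Counter(re.findall(r"[a-z]+", doc.lower()))

def score(sent):
    toks = re.findall(r"[a-z]+", sent.lower())
    stat = sum(tf[t] for t in toks) / len(toks)   # statistical component
    ling = sum(t in cues for t in toks)           # linguistic component
    return stat + ling

summary = max(sentences, key=score)
print(summary)
```

In a real system the statistical side would be a full vector-space or energy-based score and the linguistic side would come from term extraction and discourse analysis, but the additive combination is the same shape.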
Abstract:
The statistical properties of inflation and, in particular, its degree of persistence and stability over time are a subject of intense debate, and no consensus has been achieved yet. The goal of this paper is to analyze this controversy using a general approach, with the aim of providing a plausible explanation for the existing contradictory results. We consider the inflation rates of 21 OECD countries, which are modelled as fractionally integrated (FI) processes. First, we show analytically that FI can appear in inflation rates after aggregating individual prices from firms that face different costs of adjusting their prices. Then, we provide robust empirical evidence supporting the FI hypothesis using both classical and Bayesian techniques. Next, we estimate impulse response functions and other scalar measures of persistence, achieving an accurate picture of this property and its variation across countries. It is shown that the application of some popular tools for measuring persistence, such as the sum of the AR coefficients, could lead to erroneous conclusions if fractional integration is present. Finally, we explore the existence of changes in inflation inertia using a novel approach. We conclude that the persistence of inflation is very high (although non-permanent) in most post-industrial countries and that it has remained basically unchanged over the last four decades.
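The contrast between fractional integration and AR-type persistence can be made concrete: the MA(infinity) weights of (1-L)^{-d} decay hyperbolically, roughly like k^(d-1), while AR(1) weights decay geometrically. A minimal sketch (the values of d and phi are arbitrary illustrative choices):

```python
# Compare the impulse-response weights of a fractionally integrated process
# (1-L)^{-d} with those of a stationary AR(1). Parameters are illustrative.

d = 0.4
K = 200

# Binomial-expansion recursion for the FI weights: w_0 = 1,
# w_k = w_{k-1} * (k - 1 + d) / k.
w = [1.0]
for k in range(1, K):
    w.append(w[-1] * (k - 1 + d) / k)

phi = 0.5                         # AR(1) coefficient for comparison
ar = [phi**k for k in range(K)]

# At long lags the FI weight dwarfs the geometrically decaying AR(1) weight,
# which is why AR-based persistence measures can mislead under FI.
print(w[100], ar[100], w[100] / ar[100])
```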
Abstract:
In this article we present the first empirical analysis of the associations between body size, activity, employment and wages for several European countries. The main advantage of the present work with respect to the previous literature is the comparability of the data and their large geographical coverage. According to our results, for Spanish women, being obese is associated with both a 9% lower wage and a 9% lower probability of being employed, while among Swedes and Danes, obesity is associated with a 12% lower probability of being employed and a 10% lower wage, respectively. In Belgium, obesity is associated with a 19% lower probability of being employed for men. These robust estimates are strongly informative and may be used as a simple statistical rule of thumb to decide in which countries lab and field experiments should be run.
Abstract:
The Aitchison vector space structure for the simplex is generalized to a Hilbert space structure A2(P) for distributions and likelihoods on arbitrary spaces. Central notions of statistics, such as information or likelihood, can be identified in the algebraic structure of A2(P) and related to their corresponding notions in compositional data analysis, such as the Aitchison distance or the centered log-ratio transform. In this way, very elaborate aspects of mathematical statistics can be understood easily in the light of a simple vector space structure and of compositional data analysis. For example, combinations of statistical information, such as Bayesian updating and the combination of likelihood and robust M-estimation functions, are simple additions/perturbations in A2(Pprior); weighting observations corresponds to a weighted addition of the corresponding evidence. Likelihood-based statistics for general exponential families turn out to have a particularly easy interpretation in terms of A2(P). Regular exponential families form finite-dimensional linear subspaces of A2(P), and they correspond to finite-dimensional subspaces formed by their posteriors in the dual information space A2(Pprior). The Aitchison norm can be identified with mean Fisher information. The closing constant itself is identified with a generalization of the cumulant function and shown to be the Kullback-Leibler directed information. Fisher information is the local geometry of the manifold induced by the A2(P) derivative of the Kullback-Leibler information, and the space A2(P) can therefore be seen as the tangential geometry of statistical inference at the distribution P. The discussion of A2(P)-valued random variables, such as estimation functions or likelihoods, gives a further interpretation of Fisher information as the expected squared norm of evidence and a scale-free understanding of unbiased reasoning.
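For reference, the finite-dimensional case that A2(P) generalizes is the standard Aitchison geometry of the simplex; its central formulas (the clr transform, the Aitchison inner product and the Aitchison distance) are, for compositions x, y in the simplex with geometric mean g(x):

```latex
\mathrm{clr}(x) = \Big(\ln\tfrac{x_1}{g(x)}, \dots, \ln\tfrac{x_D}{g(x)}\Big),
\qquad
\langle x, y \rangle_A = \sum_{i=1}^{D} \ln\tfrac{x_i}{g(x)} \, \ln\tfrac{y_i}{g(y)},
\qquad
d_A(x, y) = \big\lVert \mathrm{clr}(x) - \mathrm{clr}(y) \big\rVert_2 .
```

In the abstract above, clr and the Aitchison distance are replaced by their infinite-dimensional analogues acting on densities with respect to P.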
Abstract:
Foreign trade statistics are the main data source for the study of international trade. However, their accuracy has been under suspicion since Morgenstern published his famous work in 1963. Federico and Tena (1991) have revisited the question, arguing that such statistics can be useful at an adequate level of aggregation. But the geographical assignment problem remains unsolved. This article focuses on the spatial variable through an analysis of the reliability of international textile data for 1913. A geographical bias arises between export and import series, but its quantitative importance is such that it can be considered negligible on an international scale.
Abstract:
Bullying is the intentional, repeated or persistent hurting of one pupil by another (or several others), where the relationship involves an imbalance of power. Bullying is a type of aggressive behaviour, and the acts can be verbal, physical and/or psychological. The consequences for the victims can be serious: school failure, depressive symptoms, eating disorders, or suicidal ideation. Moreover, the perpetrators of bullying display more delinquent behaviour within and outside the school. Thus, preventive programmes targeting bullies could not only prevent victimisation, but also reduce delinquency in general. Very little data on bullying had been collected in Switzerland and, apart from some local or cantonal studies, no national research among teenagers existed in the field. This work intends to fill that gap in order to provide a sufficient understanding of the phenomenon and to suggest directions for defining appropriate prevention measures. In order to better understand the problem of bullying in Swiss secondary schools, two self-reported juvenile delinquency surveys were carried out. The first took place between 2003 and 2005 in the canton of Vaud among more than 4500 pupils; the second was administered in 2006 across Switzerland, with about 3600 youths taking part. The pupils answered the survey either in the classroom (paper questionnaire) or in the computer room (online questionnaire). The proportion of youths who reported having seriously bullied another pupil is about 7% in the canton of Vaud and 4% in the national sample. Statistical analyses first selected the variables most strongly related to bullying. The results show that youths with a low level of self-control and a positive attitude towards violence are more likely to bully others. The importance of environmental variables was also shown: the more a youth is supervised and monitored by adults, and the more the authorities (school, neighbourhood) play their role of social control by enforcing the rules and intervening in an impartial way, the less likely the youth is to bully. Moreover, multilevel analyses showed the existence of school effects on bullying. In particular, the rate of bullying in a given school increases when pupils of the same school diverge widely in their perception of the school climate. Another important finding concerns teachers' reactions when pupils fight: the influence of this variable on the bullying rate differs from one school to another.
Abstract:
Between 2007 and 2009, assaults by nightclub security agents on clients increased from 6% to 10% of the community violence situations encountered at the Violence Medical Unit (VMU) of the Lausanne University Hospital in Switzerland. Most victims were young men who had been drinking alcohol before the assault. About one quarter (25.7%) presented with one or several fractures, all of them in the head area. (For more details, refer to the previous article "When nightclub security agents assault clients", published in 2012(1).) Following this first study, we performed a second, qualitative study to provide more information about the context and to highlight victims' behaviours and experiences. Four themes emerged: how the assault began; the assault itself; third-party involvement; and the psychological state of the victims when they consulted the VMU. The findings of this second study complement the statistical results of the first by showing under what circumstances nightclub security agents respond with physical violence to situations they consider a threat to security. Furthermore, the study describes consequences for the victims that can be quite serious. Our findings support the need for nightclubs to improve the selection and training of security staff.