993 results for pacs: mathematical techniques
Abstract:
We shall call an n × p data matrix fully-compositional if the rows sum to a constant, and sub-compositional if the variables are a subset of a fully-compositional data set. Such data occur widely in archaeometry, where it is common to determine the chemical composition of ceramic, glass, metal or other artefacts using techniques such as neutron activation analysis (NAA), inductively coupled plasma spectroscopy (ICPS), X-ray fluorescence analysis (XRF), etc. Interest often centres on whether there are distinct chemical groups within the data and whether, for example, these can be associated with different origins or manufacturing technologies.
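As a minimal sketch of these two definitions (assuming a NumPy array whose rows are specimens and whose columns are chemical parts; the function names are ours, not the paper's), the fully-compositional property can be checked from the row sums, and a subcomposition is obtained by selecting parts and re-closing each row:

```python
import numpy as np

def is_fully_compositional(X, kappa=1.0, tol=1e-8):
    """True if every row of X sums to the constant kappa."""
    return np.allclose(X.sum(axis=1), kappa, atol=tol)

def subcomposition(X, cols, kappa=1.0):
    """Select a subset of parts and re-close each row to sum to kappa."""
    S = X[:, cols]
    return kappa * S / S.sum(axis=1, keepdims=True)

# Illustrative data: 3 specimens, 4 oxides, rows closed to 100 (percent).
X = np.array([[55.0, 20.0, 15.0, 10.0],
              [60.0, 18.0, 12.0, 10.0],
              [52.0, 25.0, 13.0, 10.0]])
print(is_fully_compositional(X, kappa=100.0))     # True
print(subcomposition(X, [0, 1, 2], kappa=100.0))  # 3-part subcomposition
```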
Abstract:
Developments in the statistical analysis of compositional data over the last two decades have made possible a much deeper exploration of the nature of variability, and the possible processes associated with compositional data sets from many disciplines. In this paper we concentrate on geochemical data sets. First we explain how hypotheses of compositional variability may be formulated within the natural sample space, the unit simplex, including useful hypotheses of subcompositional discrimination and specific perturbational change. Then we develop, through standard methodology such as generalised likelihood ratio tests, statistical tools to allow the systematic investigation of a complete lattice of such hypotheses. Some of these tests are simple adaptations of existing multivariate tests but others require special construction. We comment on the use of graphical methods in compositional data analysis and on the ordination of specimens. The recent development of the concept of compositional processes is then explained together with the necessary tools for a staying-in-the-simplex approach, namely compositional singular value decompositions. All these statistical techniques are illustrated for a substantial compositional data set, consisting of 209 major-oxide and rare-element compositions of metamorphosed limestones from the Northeast and Central Highlands of Scotland. Finally we point out a number of unresolved problems in the statistical analysis of compositional processes.
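The perturbation and power operations referred to here have a simple concrete form. A minimal sketch in plain NumPy (helper names are ours): perturbation is a component-wise product followed by closure, and the power transformation is a component-wise power followed by closure:

```python
import numpy as np

def close(x, kappa=1.0):
    """Re-close a positive vector so its parts sum to kappa."""
    x = np.asarray(x, dtype=float)
    return kappa * x / x.sum()

def perturb(x, p):
    """Perturbation: component-wise product followed by closure."""
    return close(np.asarray(x, dtype=float) * np.asarray(p, dtype=float))

def power(x, a):
    """Power transformation: component-wise power followed by closure."""
    return close(np.asarray(x, dtype=float) ** a)

x = close([0.2, 0.5, 0.3])
p = close([2.0, 1.0, 1.0])
print(perturb(x, p))   # shifts relative mass toward the first part
print(power(x, 0.5))   # moves the composition toward the barycentre
```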
Abstract:
R, from http://www.r-project.org/, is ‘GNU S’ – a language and environment for statistical computing and graphics. It is an environment within which many classical and modern statistical techniques have been implemented; many of these are supplied as packages. There are 8 standard packages and many more are available through the CRAN family of Internet sites, http://cran.r-project.org. We started to develop a library of functions in R to support the analysis of mixtures, and our goal is a MixeR package for compositional data analysis that provides support for: operations on compositions (perturbation and power multiplication, subcomposition with or without residuals, centering of the data, computing Aitchison, Euclidean and Bhattacharyya distances, compositional Kullback-Leibler divergence, etc.); graphical presentation of compositions in ternary diagrams and tetrahedrons with additional features (barycenter, geometric mean of the data set, percentile lines, marking and coloring of subsets of the data set and their geometric means, annotation of individual data in the set, etc.); dealing with zeros and missing values in compositional data sets, with R procedures for the simple and multiplicative replacement strategies; and the time series analysis of compositional data. We will present the current status of MixeR development and illustrate its use on selected data sets.
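As an illustration of the kind of operations listed (a generic Python/NumPy sketch, not MixeR's actual API; the function names are ours), centering a data set and computing the Aitchison distance between two compositions can be written as:

```python
import numpy as np

def clr(x):
    """Centred logratio transform of a positive composition."""
    lx = np.log(np.asarray(x, dtype=float))
    return lx - lx.mean()

def aitchison_distance(x, y):
    """Aitchison distance = Euclidean distance between clr coefficients."""
    return np.linalg.norm(clr(x) - clr(y))

def centre(X):
    """Centre a compositional data set: perturb each row by the inverse of
    the geometric-mean composition, then re-close rows to sum to 1."""
    X = np.asarray(X, dtype=float)
    g = np.exp(np.log(X).mean(axis=0))      # geometric mean, part by part
    Y = X / g
    return Y / Y.sum(axis=1, keepdims=True)

X = np.array([[0.1, 0.3, 0.6],
              [0.2, 0.2, 0.6],
              [0.3, 0.4, 0.3]])
print(aitchison_distance(X[0], X[1]))
print(centre(X))                            # centred data set
```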
Abstract:
In human Population Genetics, routine applications of principal component techniques are often required. Population biologists make widespread use of certain discrete classifications of human samples into haplotypes, the monophyletic units of phylogenetic trees constructed from several single nucleotide bimorphisms hierarchically ordered. Compositional frequencies of the haplotypes are recorded within the different samples. Principal component techniques are then required as a dimension-reducing strategy to bring the dimension of the problem to a manageable level, say two, to allow for graphical analysis. Population biologists at large are not aware of the special features of compositional data and normally make use of the crude covariance of compositional relative frequencies to construct principal components. In this short note we present our experience with using traditional linear principal components or compositional principal components based on logratios, with reference to a specific dataset.
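The contrast between the two approaches can be sketched in a few lines of plain NumPy (the haplotype frequencies below are invented for illustration): the "crude" version applies PCA directly to the relative frequencies, while the compositional version first maps each row to centred logratios:

```python
import numpy as np

def clr(X):
    """Row-wise centred logratio transform of compositional frequencies."""
    L = np.log(np.asarray(X, dtype=float))
    return L - L.mean(axis=1, keepdims=True)

def first_two_pcs(Z):
    """Scores on the first two principal components of a data matrix Z."""
    Zc = Z - Z.mean(axis=0)                 # column-centre
    U, s, Vt = np.linalg.svd(Zc, full_matrices=False)
    return Zc @ Vt[:2].T

# Hypothetical haplotype frequencies: 4 population samples, 5 haplotypes.
F = np.array([[0.40, 0.25, 0.20, 0.10, 0.05],
              [0.35, 0.30, 0.20, 0.10, 0.05],
              [0.10, 0.15, 0.25, 0.30, 0.20],
              [0.05, 0.20, 0.25, 0.30, 0.20]])

print(first_two_pcs(F))        # "crude" PCA on raw relative frequencies
print(first_two_pcs(clr(F)))   # compositional PCA on logratios
```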
Abstract:
The statistical analysis of literary style is the part of stylometry that compares measurable characteristics in a text that are rarely controlled by the author with those in other texts. When the goal is to settle authorship questions, these characteristics should relate to the author's style and not to the genre, epoch or editor, and they should be such that their variation between authors is larger than the variation within comparable texts from the same author. For an overview of the literature on stylometry and some of the techniques involved, see for example Mosteller and Wallace (1964, 82), Herdan (1964), Morton (1978), Holmes (1985), Oakes (1998) or Lebart, Salem and Berry (1998).

Tirant lo Blanc, a chivalry book, is the main work in Catalan literature and was hailed as “the best book of its kind in the world” by Cervantes in Don Quixote. Considered by writers like Vargas Llosa or Damaso Alonso to be the first modern novel in Europe, it has been translated several times into Spanish, Italian and French, with modern English translations by Rosenthal (1996) and La Fontaine (1993). The main body of the book was written between 1460 and 1465, but it was not printed until 1490. There is an intense and long-lasting debate around its authorship, sprouting from its first edition, where the introduction states that the whole book is the work of Martorell (1413?-1468), while at the end it is stated that the last one-fourth of the book is by Galba (?-1490), written after the death of Martorell. Some of the authors that support the theory of single authorship are Riquer (1990), Chiner (1993) and Badia (1993), while some of those supporting the double authorship are Riquer (1947), Coromines (1956) and Ferrando (1995). For an overview of this debate, see Riquer (1990). Neither of the two candidate authors left any text comparable to the one under study, and therefore discriminant analysis cannot be used to help classify chapters by author. By using sample texts encompassing about ten percent of the book, and looking at word length and at the use of 44 conjunctions, prepositions and articles, Ginebra and Cabos (1998) detect heterogeneities that might indicate the existence of two authors. By analyzing the diversity of the vocabulary, Riba and Ginebra (2000) estimate that stylistic boundary to be near chapter 383.

Following the lead of the extensive literature, this paper looks into word length, the use of the most frequent words and the use of vowels in each chapter of the book. Given that the features selected are categorical, this leads to three contingency tables with ordered rows and therefore to three sequences of multinomial observations. Section 2 explores these sequences graphically, observing a clear shift in their distribution. Section 3 describes the problem of estimating a sudden change-point in those sequences, and in the following sections we propose various ways to estimate change-points in multinomial sequences: the method in Section 4 involves fitting models for polytomous data; the one in Section 5 fits gamma models onto the sequence of chi-square distances between each row profile and the average profile; the one in Section 6 fits models onto the sequence of values taken by the first component of the correspondence analysis, as well as onto sequences of other summary measures like the average word length. In Section 7 we fit models onto the marginal binomial sequences to identify the features that distinguish the chapters before and after that boundary. Most methods rely heavily on the use of generalized linear models.
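A minimal sketch of the single change-point idea (plain Python/NumPy, not the paper's actual models; the per-chapter counts below are simulated): scan every candidate split of the multinomial sequence and keep the one that maximizes the combined log-likelihood of the two segments:

```python
import numpy as np

def multinomial_loglik(counts):
    """Log-likelihood (up to a split-independent constant) of i.i.d.
    multinomial rows under a common probability vector (pooled MLE)."""
    pooled = counts.sum(axis=0)
    p = pooled / pooled.sum()
    with np.errstate(divide="ignore", invalid="ignore"):
        ll = counts * np.log(p)
    return np.nansum(ll)

def change_point(counts):
    """Index k such that splitting the rows into [:k] and [k:] maximizes
    the summed segment log-likelihoods."""
    n = counts.shape[0]
    best_k, best_ll = None, -np.inf
    for k in range(1, n):
        ll = multinomial_loglik(counts[:k]) + multinomial_loglik(counts[k:])
        if ll > best_ll:
            best_k, best_ll = k, ll
    return best_k

# Simulated counts of 3 word categories over 8 "chapters", shift after row 4.
rng = np.random.default_rng(0)
before = rng.multinomial(200, [0.5, 0.3, 0.2], size=4)
after = rng.multinomial(200, [0.3, 0.3, 0.4], size=4)
print(change_point(np.vstack([before, after])))   # break after the 4th row
```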
Abstract:
This work provides a general description of the multi-sensor data fusion concept, along with a new classification of currently used sensor fusion techniques for unmanned underwater vehicles (UUV). Unlike previous proposals that focus the classification on the sensors involved in the fusion, we propose a synthetic approach that is focused on the techniques involved in the fusion and their applications in UUV navigation. We believe that our approach is better oriented towards the development of sensor fusion systems, since a sensor fusion architecture should first of all be focused on its goals and then on the fused sensors.
Abstract:
Obtaining an automatic 3D profile of objects is one of the most important issues in computer vision. With this information, a large number of applications become feasible: from visual inspection of industrial parts to 3D reconstruction of the environment for mobile robots. In order to obtain 3D data, range finders can be used. The coded structured light approach is one of the most widely used techniques to retrieve 3D information of an unknown surface. An overview of the existing techniques, as well as a new classification of patterns for structured light sensors, is presented. This kind of system belongs to the group of active triangulation methods, which are based on projecting a light pattern and imaging the illuminated scene from one or more points of view. Since the patterns are coded, correspondences between points of the image(s) and points of the projected pattern can be easily found. Once correspondences are found, a classical triangulation strategy between the camera(s) and the projector device leads to the reconstruction of the surface. Advantages and constraints of the different patterns are discussed.
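For stripe patterns, the triangulation step mentioned above reduces to intersecting the back-projected camera ray with the calibrated light plane of the decoded projector stripe. A minimal geometric sketch (plain NumPy; the camera and plane parameters below are invented for illustration):

```python
import numpy as np

def triangulate(ray_origin, ray_dir, plane_point, plane_normal):
    """Intersect a camera ray with the light plane of a decoded projector
    stripe (the classical active-triangulation step)."""
    ray_dir = ray_dir / np.linalg.norm(ray_dir)
    denom = np.dot(plane_normal, ray_dir)
    if abs(denom) < 1e-9:
        return None                      # ray parallel to the light plane
    t = np.dot(plane_normal, plane_point - ray_origin) / denom
    return ray_origin + t * ray_dir      # 3D point on the surface

# Hypothetical setup: camera at the origin, projector stripe plane x = 0.2.
cam_origin = np.zeros(3)
pixel_ray = np.array([0.1, 0.05, 1.0])       # back-projected pixel direction
stripe_point = np.array([0.2, 0.0, 0.0])     # a point on the light plane
stripe_normal = np.array([1.0, 0.0, 0.0])    # plane normal from calibration
print(triangulate(cam_origin, pixel_ray, stripe_point, stripe_normal))
```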
Abstract:
Few publications have compared ultrasound (US) to histology in diagnosing schistosomiasis-induced liver fibrosis (LF); none has used magnetic resonance (MR). The aim of this study was to evaluate schistosomal LF using these three methods. Fourteen patients with hepatosplenic schistosomiasis admitted to hospital for surgical treatment of variceal bleeding were investigated. They underwent upper digestive endoscopy, US, MR and wedge liver biopsy. The World Health Organization protocol for US in schistosomiasis was used. Hepatic fibrosis was classified as absent, slight, moderate or intense. Histology and MR confirmed Symmers' fibrosis in all cases. US failed to detect it in one patient. Moderate agreement was found comparing US to MR; poor agreement was found when US or MR were compared to histology. Re-classifying LF as only slight or intense created moderate agreement between imaging techniques and histology. Histomorphometry did not separate slight from intense LF. Two patients with advanced hepatosplenic schistosomiasis presented slight LF. Our data suggest that the presence of the characteristic periportal fibrosis, diagnosed by US, MR or histology, associated with a sign of portal hypertension, defines the severity of the disease. We conclude that imaging techniques are reliable to define the presence of LF but fail in grading its intensity.
Abstract:
The absolute necessity of obtaining 3D information of structured and unknown environments in autonomous navigation considerably reduces the set of sensors that can be used. Knowing, at each instant, the position of the mobile robot with respect to the scene is indispensable. Furthermore, this information must be obtained in the least computing time. Stereo vision is an attractive and widely used method, but it is rather limited for making fast 3D surface maps, due to the correspondence problem. The spatial and temporal correspondence among images can be alleviated using a method based on structured light. This relationship can be found directly by codifying the projected light; then each imaged region of the projected pattern carries the information needed to solve the correspondence problem. We present the most significant techniques, used in recent years, concerning the coded structured light method.
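One classical coding strategy in this literature is time-multiplexed Gray-code stripes, where each pixel's sequence of bright/dark observations identifies the projector column it sees. A minimal decoding sketch (plain NumPy; the thresholded bit images are assumed to be given, and the toy data are invented):

```python
import numpy as np

def gray_to_binary(g):
    """Convert Gray-code integers (array) to ordinary binary integers."""
    g = np.asarray(g).copy()
    mask = g >> 1
    while mask.any():
        g ^= mask
        mask >>= 1
    return g

def decode_stripes(bit_images):
    """Stack of thresholded images, most significant bit first, one per
    projected Gray-code pattern -> projector column index per pixel."""
    code = np.zeros(bit_images[0].shape, dtype=np.int64)
    for b in bit_images:
        code = (code << 1) | b.astype(np.int64)
    return gray_to_binary(code)

# Toy example: 2 bit planes on a 1x4 image encoding Gray codes 00,01,11,10.
msb = np.array([[0, 0, 1, 1]])
lsb = np.array([[0, 1, 1, 0]])
print(decode_stripes([msb, lsb]))   # -> [[0 1 2 3]] projector columns
```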
Abstract:
Low concentrations of elements in geochemical analyses have the peculiarity of being compositional data and, for a given level of significance, are likely to be beyond the capabilities of laboratories to distinguish between minute concentrations and complete absence, thus preventing laboratories from reporting extremely low concentrations of the analyte. Instead, what is reported is the detection limit, which is the minimum concentration that conclusively differentiates between presence and absence of the element. A spatially distributed exhaustive sample is employed in this study to generate unbiased sub-samples, which are further censored to observe the effect that different detection limits and sample sizes have on the inference of population distributions starting from geochemical analyses having specimens below detection limit (nondetects). The isometric logratio transformation is used to convert the compositional data in the simplex to samples in real space, thus allowing the practitioner to properly borrow from the large source of statistical techniques valid only in real space. The bootstrap method is used to numerically investigate the reliability of inferring several distributional parameters employing different forms of imputation for the censored data. The case study illustrates that, in general, best results are obtained when imputations are made using the distribution best fitting the readings above detection limit, and exposes the problems of other more widely used practices. When the sample is spatially correlated, it is necessary to combine the bootstrap with stochastic simulation.
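A minimal sketch of one simple imputation route plus the isometric logratio step (plain NumPy; the composition, the 0.01 detection limit and the 65%-of-DL replacement value are illustrative choices, not the study's): nondetects are replaced multiplicatively so each row stays closed, after which pivot-type ilr coordinates can be computed and bootstrapped in real space:

```python
import numpy as np

def multiplicative_replacement(x, detected, delta):
    """Replace nondetects by a small value delta and rescale the detected
    parts so the row is re-closed to sum to 1."""
    x = np.asarray(x, dtype=float)
    y = x.copy()
    y[~detected] = delta
    y[detected] = x[detected] * (1.0 - delta * (~detected).sum()) / x[detected].sum()
    return y

def ilr(x):
    """Isometric logratio coordinates (pivot-type orthonormal basis)."""
    x = np.asarray(x, dtype=float)
    D = x.size
    z = np.empty(D - 1)
    for i in range(D - 1):
        gm = np.exp(np.mean(np.log(x[i + 1:])))
        z[i] = np.sqrt((D - i - 1) / (D - i)) * np.log(x[i] / gm)
    return z

# Three detected parts plus one value below the 0.01 detection limit
# (reported only as <DL, entered here as 0).
x = np.array([0.60, 0.30, 0.095, 0.0])
detected = np.array([True, True, True, False])
y = multiplicative_replacement(x, detected, delta=0.0065)   # 0.65 * DL
print(y, y.sum())        # imputed row, still closed to 1
print(ilr(y))            # coordinates in real space, ready for bootstrapping
```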
Abstract:
OBJECTIVE. The main goal of this paper is to obtain a classification model based on feed-forward multilayer perceptrons in order to improve postpartum depression prediction during the 32 weeks after childbirth, with high sensitivity and specificity, and to develop a tool to be integrated in a decision support system for clinicians. MATERIALS AND METHODS. Multilayer perceptrons were trained on data from 1397 women who had just given birth, from seven Spanish general hospitals, including clinical, environmental and genetic variables. A prospective cohort study was conducted just after delivery, at 8 weeks and at 32 weeks after delivery. The models were evaluated with the geometric mean of accuracies using a hold-out strategy. RESULTS. Multilayer perceptrons showed good performance (high sensitivity and specificity) as predictive models for postpartum depression. CONCLUSIONS. The use of these models in a decision support system can be clinically evaluated in future work. The analysis of the models by pruning leads to a qualitative interpretation of the influence of each variable, which is of interest for clinical protocols.
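For reference, the evaluation measure named above, the geometric mean of accuracies, reduces in the binary case to the geometric mean of sensitivity and specificity. A minimal sketch (plain NumPy; the hold-out labels and predictions are invented for illustration, not the study's data):

```python
import numpy as np

def geometric_mean_accuracy(y_true, y_pred):
    """Geometric mean of sensitivity (recall on the positive class) and
    specificity (recall on the negative class) for a binary classifier."""
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    sensitivity = (y_pred & y_true).sum() / y_true.sum()
    specificity = (~y_pred & ~y_true).sum() / (~y_true).sum()
    return np.sqrt(sensitivity * specificity)

# Hypothetical hold-out labels (1 = depressed) and model predictions.
y_true = np.array([1, 0, 0, 1, 0, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1, 1, 0, 0, 0, 0])
print(geometric_mean_accuracy(y_true, y_pred))   # ~0.79
```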
Assessment of drug-induced hepatotoxicity in clinical practice: a challenge for gastroenterologists.
Abstract:
Currently, pharmaceutical preparations are serious contributors to liver disease, with hepatotoxicity ranking as the most frequent cause of acute liver failure and of post-commercialization regulatory decisions. The diagnosis of hepatotoxicity remains a difficult task because of the lack of reliable markers for use in general clinical practice. To incriminate any given drug in an episode of liver dysfunction is a step-by-step process that requires a high degree of suspicion, compatible chronology, awareness of the drug's hepatotoxic potential, the exclusion of alternative causes of liver damage and the ability to detect subtle data that favor a toxic etiology. This process is time-consuming and the final result is frequently inaccurate. Diagnostic algorithms may add consistency to the diagnostic process by translating the suspicion into a quantitative score. Such scales are useful since they provide a framework that emphasizes the features that merit attention in cases of suspected hepatic adverse reaction. Current efforts in collecting bona fide cases of drug-induced hepatotoxicity will make refinements of existing scales feasible. It is now relatively easy to accommodate relevant data within the scoring system and to delete low-impact items. Efforts should also be directed toward the development of an abridged instrument for use in evaluating suspected drug-induced hepatotoxicity at the very beginning of the diagnosis and treatment process, when clinical decisions need to be made. The instrument chosen should enable a confident diagnosis to be made on admission of the patient and treatment to be fine-tuned as further information is collected.
Abstract:
The objective of the current study was to compare two rapid methods, the BBL Mycobacteria Growth Indicator Tube (MGIT™) and Biotec FASTPlaque TB™ (FPTB) assays, with the conventional Löwenstein-Jensen (LJ) media assay to diagnose mycobacterial infections from paucibacillary clinical specimens. For evaluation of the clinical utility of the BBL MGIT™ and FPTB assays, respiratory tract specimens (n = 208), with scanty bacilli or from clinically evident, smear-negative cases, and non-respiratory tract specimens (n = 119) were analyzed and the performance of each assay was compared with LJ media. MGIT and FPTB demonstrated a greater sensitivity (95.92% and 87.68%), specificity (94.59% and 98.78%), positive predictive value (94.91% and 99.16%) and negative predictive value (96.56% and 90.92%), respectively, compared to LJ culture for both respiratory tract and non-respiratory tract specimens. However, the FPTB assay was unable to detect nontuberculous mycobacteria and a few Mycobacterium tuberculosis complex cases from paucibacillary clinical specimens. It is likely that the analytical sensitivity of FPTB is moderately low and may not be useful for the direct detection of tuberculosis in paucibacillary specimens. The current study concluded that MGIT was a dependable, highly efficient system for recovery of M. tuberculosis complex and nontuberculous mycobacteria from both respiratory and non-respiratory tract specimens in combination with LJ media.
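For reference, the four reported measures come directly from each assay's 2×2 table against a reference standard. A minimal sketch with invented counts (not the study's data):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 table comparing an
    assay's results against a reference standard."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Hypothetical counts for illustration only.
print(diagnostic_metrics(tp=94, fp=5, fn=4, tn=175))
```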
Abstract:
The recognition of pathogen-derived structures by C-type lectins and the chemotactic activity mediated by the CCL2/CCR2 axis are critical steps in determining the host immune response to fungi. The present study was designed to investigate whether the presence of single nucleotide polymorphisms (SNPs) within DC-SIGN, Dectin-1, Dectin-2, CCL2 and CCR2 genes influences the risk of developing Invasive Pulmonary Aspergillosis (IPA). Twenty-seven SNPs were selected using a hybrid functional/tagging approach and genotyped in 182 haematological patients, fifty-seven of them diagnosed with proven or probable IPA according to the 2008 EORTC/MSG criteria. Association analysis revealed that carriers of the Dectin-1(rs3901533 T/T) and Dectin-1(rs7309123 G/G) genotypes and DC-SIGN(rs4804800 G), DC-SIGN(rs11465384 T), DC-SIGN(7248637 A) and DC-SIGN(7252229 C) alleles had a significantly increased risk of IPA infection (OR = 5.59, 95% CI 1.37-22.77; OR = 4.91, 95% CI 1.52-15.89; OR = 2.75, 95% CI 1.27-5.95; OR = 2.70, 95% CI 1.24-5.90; OR = 2.39, 95% CI 1.09-5.22 and OR = 2.05, 95% CI 1.00-4.22, respectively). There was also a significantly increased frequency of galactomannan positivity among patients carrying the Dectin-1(rs3901533_T) allele and Dectin-1(rs7309123_G/G) genotype. In addition, healthy individuals with this latter genotype showed a significantly decreased level of Dectin-1 mRNA expression compared to C-allele carriers, suggesting a role of the Dectin-1(rs7309123) polymorphism in determining the levels of Dectin-1 and, consequently, the level of susceptibility to IPA infection. SNP-SNP interaction (epistasis) analysis revealed significant interaction models including SNPs in Dectin-1, Dectin-2, CCL2 and CCR2 genes, with synergistic genetic effects. Although these results need to be further validated in larger cohorts, they suggest that Dectin-1, DC-SIGN, Dectin-2, CCL2 and CCR2 genetic variants influence the risk of IPA infection and might be useful in developing a risk-adapted prophylaxis.
Abstract:
This population study, which evaluated two parasitological methods for the diagnosis of schistosomiasis mansoni, was performed in a low-transmission area in Pedra Preta, Montes Claros, Minas Gerais, Brazil. A total of 201 inhabitants of the rural area participated in this research. Four stool samples were obtained from all participants and analysed using the Kato-Katz method (18 slides) and a commercial test, the TF-Test®, which was performed quantitatively. The data were analysed to determine prevalence, the sensitivity of the diagnostic methods, the worm burden and the definition of the "gold standard", which was obtained by totalling the results of all samples examined using the Kato-Katz technique and the TF-Test®. The results showed that the prevalence obtained from the examination of one Kato-Katz slide (the methodology adopted by the Brazilian control programme) was 8% compared to 35.8% from the "gold standard", which was a 4.5-fold difference. This result indicates that the prevalence of schistosomiasis in so-called low-transmission areas is significantly underestimated.