925 results for data-types
Abstract:
When continuous data are coded to categorical variables, two types of coding are possible: crisp coding in the form of indicator, or dummy, variables with values either 0 or 1; or fuzzy coding where each observation is transformed to a set of "degrees of membership" between 0 and 1, using so-called membership functions. It is well known that the correspondence analysis of crisp coded data, namely multiple correspondence analysis, yields principal inertias (eigenvalues) that considerably underestimate the quality of the solution in a low-dimensional space. Since the crisp data only code the categories to which each individual case belongs, an alternative measure of fit is simply to count how well these categories are predicted by the solution. Another approach is to consider multiple correspondence analysis equivalently as the analysis of the Burt matrix (i.e., the matrix of all two-way cross-tabulations of the categorical variables), and then perform a joint correspondence analysis to fit just the off-diagonal tables of the Burt matrix - the measure of fit is then computed as the quality of explaining these tables only. The correspondence analysis of fuzzy coded data, called "fuzzy multiple correspondence analysis", suffers from the same problem, albeit attenuated. Again, one can count how many correct predictions are made of the categories which have the highest degree of membership. But here one can also defuzzify the results of the analysis to obtain estimated values of the original data, and then calculate a measure of fit in the familiar percentage form, thanks to the resultant orthogonal decomposition of variance. Furthermore, if one thinks of fuzzy multiple correspondence analysis as explaining the two-way associations between variables, a fuzzy Burt matrix can be computed and the same strategy as in the crisp case can be applied to analyse the off-diagonal part of this matrix.
In this paper these alternative measures of fit are defined and applied to a data set of continuous meteorological variables, which are coded crisply and fuzzily into three categories. Measuring the fit is further discussed when the data set consists of a mixture of discrete and continuous variables.
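A minimal sketch of the fuzzy coding described in this abstract, using piecewise-linear (triangular) membership functions over three categories; the hinge points and the `fuzzy_code` helper are illustrative, not taken from the paper:

```python
import numpy as np

def fuzzy_code(x, low, mid, high):
    """Fuzzy-code a continuous value into three categories using
    piecewise-linear (triangular) membership functions with hinge
    points low < mid < high; the memberships sum to 1."""
    x = float(np.clip(x, low, high))
    if x <= mid:
        m = (x - low) / (mid - low)
        return np.array([1.0 - m, m, 0.0])
    m = (x - mid) / (high - mid)
    return np.array([0.0, 1.0 - m, m])

# Crisp coding would assign 12.5 wholly to the middle category;
# fuzzy coding spreads it over the two nearest categories.
print(fuzzy_code(12.5, low=0.0, mid=10.0, high=30.0))  # memberships: 0, 0.875, 0.125
```

Defuzzification, as used in the paper's measure of fit, inverts this mapping to recover an estimate of the original continuous value.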
Abstract:
Data mining can be defined as the extraction of previously unknown and potentially useful information from large datasets. The main principle is to devise computer programs that run through databases and automatically seek deterministic patterns. It has been applied in many fields, e.g., remote sensing, biometry, and speech recognition, but has seldom been applied to forensic case data. The intrinsic difficulty related to the use of such data lies in its heterogeneity, which comes from the many different sources of information. The aim of this study is to highlight potential uses of pattern recognition that would provide relevant results from a criminal intelligence point of view. The role of data mining within a global crime analysis methodology is to detect all types of structures in a dataset. Once filtered and interpreted, those structures can point to previously unseen criminal activities. The interpretation of patterns for intelligence purposes is the final stage of the process. It allows the researcher to validate the whole methodology and to refine each step if necessary. An application to cutting agents found in illicit drug seizures was performed. A combinatorial approach was taken, using the presence and the absence of products. Methods from graph theory were used to extract patterns in data consisting of links between products and the place and date of seizure. A data mining process carried out using graphing techniques is called "graph mining". Patterns were detected that had to be interpreted and compared with preliminary knowledge to establish their relevance. The illicit drug profiling process is actually an intelligence process that uses preliminary illicit drug classes to classify new samples. Methods proposed in this study could be used a priori to compare structures from preliminary and post-detection patterns.
This new knowledge of a repeated structure may provide valuable complementary information to profiling and become a source of intelligence.
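The link-based pattern extraction described above can be illustrated with a toy co-occurrence graph; the seizure records, product names, and field names below are entirely hypothetical:

```python
from itertools import combinations
from collections import Counter

# Hypothetical seizure records: each lists the cutting agents
# detected, together with the place and date of seizure.
seizures = [
    {"place": "A", "date": "2008-01", "products": {"caffeine", "paracetamol"}},
    {"place": "B", "date": "2008-02", "products": {"caffeine", "paracetamol", "lidocaine"}},
    {"place": "A", "date": "2008-03", "products": {"lidocaine"}},
]

# Weighted co-occurrence graph: an edge links two products each time
# they are found together in the same seizure.
edges = Counter()
for s in seizures:
    for pair in combinations(sorted(s["products"]), 2):
        edges[pair] += 1

print(edges.most_common(1))  # [(('caffeine', 'paracetamol'), 2)]
```

Repeated heavy edges in such a graph are the "repeated structures" whose interpretation against prior knowledge is the intelligence step.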
Abstract:
Modern methods of compositional data analysis are not well known in biomedical research. Moreover, there appear to be few mathematical and statistical researchers working on compositional biomedical problems. Like the earth and environmental sciences, biomedicine has many problems in which the relevant scientific information is encoded in the relative abundance of key species or categories. I introduce three problems in cancer research in which analysis of compositions plays an important role. The problems involve 1) the classification of serum proteomic profiles for early detection of lung cancer, 2) inference of the relative amounts of different tissue types in a diagnostic tumor biopsy, and 3) the subcellular localization of the BRCA1 protein, and its role in breast cancer patient prognosis. For each of these problems I outline a partial solution. However, none of these problems is "solved". I attempt to identify areas in which additional statistical development is needed, with the hope of encouraging more compositional data analysts to become involved in biomedical research.
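For readers new to compositional data analysis, the centred log-ratio (clr) transform is one standard way to move a composition into unconstrained real space before applying ordinary statistics; this generic sketch is not taken from the paper itself:

```python
import numpy as np

def clr(x):
    """Centred log-ratio transform: maps a composition (positive
    parts carrying only relative information) into unconstrained
    real space by taking logs relative to the geometric mean."""
    x = np.asarray(x, dtype=float)
    g = np.exp(np.log(x).mean())   # geometric mean of the parts
    return np.log(x / g)

comp = [0.7, 0.2, 0.1]             # e.g. relative amounts of tissue types
z = clr(comp)
print(z, z.sum())                  # clr coordinates sum to (numerically) zero
```

Working in clr coordinates avoids the spurious correlations that arise when standard methods are applied directly to parts that sum to a constant.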
Abstract:
Groundwater management decisions must often be justified by quantitative aquifer models that account for the heterogeneity of hydraulic properties. Fractured aquifers are among the most heterogeneous and are very difficult to study. In these, connected fractures of millimetric aperture can act as hydraulic conductors and thus create highly localized flow. The general lack of information on the spatial distribution of fractures limits the possibility of building quantitative flow and transport models. The data that condition the models are generally spatially limited and noisy, and they represent only indirect measurements of physical properties. These data limitations can be partly overcome by combining different types of data, such as hydrological data and ground-penetrating radar (GPR) data. Borehole GPR is a promising tool for identifying individual fractures up to a few tens of metres into the formation. In this thesis, I develop approaches to combine GPR with hydrological data in order to improve the characterization of fractured aquifers. Intensive hydrological investigations had already been carried out in three adjacent boreholes in a crystalline aquifer in Brittany (France). Nevertheless, the dimensions of the fractures and the 3-D geometry of the conductive fractures remained poorly known. To improve the characterization of the fracture network, I first propose an advanced GPR processing workflow that enables imaging of individual fractures. The results show that the permeable fractures previously identified in the boreholes can be characterized geometrically away from the borehole, and that fractures that do not intersect the boreholes can also be identified. 
The results of a second study show that GPR data can track the transport of a saline tracer. In this way, the fractures belonging to the connected, conductive network that dominates local flow and transport are identified. This is the first time that saline tracer transport has been imaged over some ten metres in individual fractures. A third study confirms these results with repeated experiments and additional tracer tests in different parts of the local network. Furthermore, combining hydrological and GPR monitoring data provides evidence that temporal amplitude variations of GPR signals can inform us about relative changes in tracer concentration in the formation. Consequently, GPR and hydrological data are complementary. I then propose a stochastic inversion approach to generate discrete 3-D fracture models that are conditioned on all available data while honouring their uncertainties. Stochastic generation of GPR-conditioned models is able to reproduce the observed hydraulic connections and their contributions to flow. The ensemble of conditioned models provides quantitative estimates of the dimensions and spatial organization of the hydraulically important fractures. This thesis clearly shows that GPR imaging is a useful tool for characterizing fractures. Combining GPR measurements with hydrological data makes it possible to successfully condition the fracture network and to provide quantitative models. The presented approaches can be applied in other types of fractured rock formations where the rock is electrically resistive.
Abstract:
We have selected and dated three contrasting rock types representative of the magmatic activity within the Permian layered mafic complex of Mont Collon, Austroalpine Dent Blanche nappe, Western Alps. A pegmatitic gabbro associated with the main cumulus sequence yields a concordant U/Pb zircon age of 284.2 +/- 0.6 Ma, whereas a pegmatitic granite dike crosscutting the latter yields a concordant age of 282.9 +/- 0.6 Ma. A Fe-Ti-rich ultrabasic lamprophyre, crosscutting all other lithologies of the complex, yields a 40Ar/39Ar plateau age of 260.2 +/- 0.7 Ma on a kaersutite concentrate. All ages are interpreted as magmatic. Sub-contemporaneous felsic dikes within the Mont Collon complex are ascribed to anatectic back-veining from the country rock, related to the emplacement of the main gabbroic body in the continental crust, which is in accordance with new isotopic data. The lamprophyres have isotopic compositions typical of a depleted mantle, in contrast to those of the cumulate gabbros, which are close to values of the Bulk Silicate Earth. This indicates either contrasting sources for the two magma pulses - the subcontinental lithospheric mantle for the gabbros and the underlying asthenosphere for the lamprophyres - or a single depleted lithospheric source with variable degrees of crustal contamination of the gabbroic melts during their emplacement in the continental crust. The Mont Collon complex belongs to a series of Early Permian mafic massifs, which were emplaced within a short time span about 285-280 Ma ago, in a limited sector of the post-Variscan continental crust now corresponding to the Austroalpine/Southern Alpine domains and Corsica. This magmatic activity was controlled in space and time by crustal-scale transtensional shear zones.
Abstract:
Accurate characterization of the spatial distribution of hydrological properties in heterogeneous aquifers at a range of scales is a key prerequisite for reliable modeling of subsurface contaminant transport, and is essential for designing effective and cost-efficient groundwater management and remediation strategies. To this end, high-resolution geophysical methods have shown significant potential to bridge a critical gap in subsurface resolution and coverage between traditional hydrological measurement techniques such as borehole log/core analyses and tracer or pumping tests. An important and still largely unresolved issue, however, is how to best quantitatively integrate geophysical data into a characterization study in order to estimate the spatial distribution of one or more pertinent hydrological parameters, thus improving hydrological predictions. Recognizing the importance of this issue, the aim of the research presented in this thesis was to first develop a strategy for the assimilation of several types of hydrogeophysical data having varying degrees of resolution, subsurface coverage, and sensitivity to the hydrologic parameter of interest. In this regard a novel simulated annealing (SA)-based conditional simulation approach was developed and then tested in its ability to generate realizations of porosity given crosshole ground-penetrating radar (GPR) and neutron porosity log data. This was done successfully for both synthetic and field data sets. A subsequent issue that needed to be addressed involved assessing the potential benefits and implications of the resulting porosity realizations in terms of groundwater flow and contaminant transport. This was investigated synthetically assuming first that the relationship between porosity and hydraulic conductivity was well-defined. Then, the relationship was itself investigated in the context of a calibration procedure using hypothetical tracer test data.
Essentially, the relationship best predicting the observed tracer test measurements was determined given the geophysically derived porosity structure. Both of these investigations showed that the SA-based approach, in general, allows much more reliable hydrological predictions than other more elementary techniques considered. Further, the developed calibration procedure was seen to be very effective, even at the scale of tomographic resolution, for predictions of transport. This also held true at locations within the aquifer where only geophysical data were available. This is significant because the acquisition of hydrological tracer test measurements is clearly more complicated and expensive than the acquisition of geophysical measurements. Although the above methodologies were tested using porosity logs and GPR data, the findings are expected to remain valid for a large number of pertinent combinations of geophysical and borehole log data of comparable resolution and sensitivity to the hydrological target parameter. Moreover, the obtained results give us confidence that future developments in methodologies integrating geophysical and hydrological data can further improve the 3-D estimation of hydrological properties.
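The SA-based conditional simulation rests on the generic simulated annealing loop; a toy version, with an illustrative 1-D misfit standing in for the GPR and porosity-log constraints, might look like:

```python
import math
import random

def simulated_annealing(cost, perturb, x0, t0=1.0, cooling=0.995, n=2000, seed=0):
    """Generic simulated-annealing search: uphill moves are accepted
    with probability exp(-dE/T), which lets the search escape local
    minima; T decays geometrically. Returns the best state found."""
    random.seed(seed)
    x, e, t = x0, cost(x0), t0
    best_x, best_e = x, e
    for _ in range(n):
        xp = perturb(x)
        ep = cost(xp)
        if ep < e or random.random() < math.exp(-(ep - e) / t):
            x, e = xp, ep
            if e < best_e:
                best_x, best_e = x, e
        t *= cooling
    return best_x, best_e

# Toy 1-D misfit with a single minimum at x = 3 (illustrative only).
best_x, best_e = simulated_annealing(
    cost=lambda x: (x - 3.0) ** 2,
    perturb=lambda x: x + random.gauss(0.0, 0.5),
    x0=0.0,
)
print(best_x, best_e)
```

In the conditional simulation of the thesis, the state would be a porosity field and the cost a misfit against the GPR and neutron-log data rather than this toy quadratic.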
Abstract:
A catalogue is provided with the type material of four superfamilies of "Acalyptrate" (Conopoidea, Diopsoidea, Nerioidea and Tephritoidea) held in the collection of the Museu de Zoologia da Universidade de São Paulo (MZUSP), São Paulo, Brazil. Concerning the taxa dealt with herein, the Diptera collection of MZUSP held 77 holotypes, 4 "allotypes" and 194 paratypes. In this paper, information about data labels, preservation and missing structures of the type specimens is given.
Abstract:
Persons with Down syndrome (DS) uniquely have an increased frequency of leukemias but a decreased total frequency of solid tumors. The distribution and frequency of specific types of brain tumors have never been studied in DS. We evaluated the frequency of primary neural cell embryonal tumors and gliomas in a large international data set. The observed number of children with DS having a medulloblastoma, central nervous system primitive neuroectodermal tumor (CNS-PNET) or glial tumor was compared to the expected number. Data were collected from cancer registries or brain tumor registries in 13 countries of Europe, America, Asia and Oceania. The number of DS children with each category of tumor was treated as a Poisson variable with mean equal to 0.000884 times the total number of registrations in that category. Among 8,043 neural cell embryonal tumors (6,882 medulloblastomas and 1,161 CNS-PNETs), only one patient with medulloblastoma had DS, while 7.11 children in total and 6.08 with medulloblastoma were expected to have DS (p = 0.016 and 0.0066, respectively). Among 13,797 children with glioma, 10 had DS, whereas 12.2 were expected. Children with DS appear to be specifically protected against primary neural cell embryonal tumors of the CNS, whereas gliomas occur at the same frequency as in the general population. A similar protection against neuroblastoma, the principal extracranial neural cell embryonal tumor, has been observed in children with DS. Additional genetic material on the supernumerary chromosome 21 may protect against embryonal neural cell tumor development.
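The stated Poisson model can be checked directly: with a single observed DS case, the lower-tail probability P(X <= 1) under the two expected counts reproduces the reported p-values (a plain-Python sketch):

```python
from math import exp

def poisson_cdf(k, lam):
    """P(X <= k) for X ~ Poisson(lam), by direct summation of the pmf."""
    term = exp(-lam)
    total = term
    for i in range(1, k + 1):
        term *= lam / i
        total += term
    return total

# One observed DS case against the expected counts quoted above:
print(round(poisson_cdf(1, 6.08), 4))  # 0.0162, the reported 0.016
print(round(poisson_cdf(1, 7.11), 4))  # 0.0066
```

Note that the expected count 6.08 yields the p-value near 0.016 and 7.11 yields 0.0066.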
Abstract:
The efficient use of geothermal systems, the sequestration of CO2 to mitigate climate change, and the prevention of seawater intrusion in coastal aquifers are only some examples that demonstrate the need for novel technologies to monitor subsurface processes from the surface. A main challenge is to assure optimal performance of such technologies at different temporal and spatial scales. Plane-wave electromagnetic (EM) methods are sensitive to subsurface electrical conductivity and consequently to fluid conductivity, fracture connectivity, temperature, and rock mineralogy. These methods have governing equations that are the same over a large range of frequencies, thus allowing one to study in an analogous manner processes on scales ranging from a few meters close to the surface down to several hundreds of kilometers depth. Unfortunately, they suffer from a significant resolution loss with depth due to the diffusive nature of the electromagnetic fields. Therefore, estimations of subsurface models that use these methods should incorporate a priori information to better constrain the models, and provide appropriate measures of model uncertainty. During my thesis, I have developed approaches to improve the static and dynamic characterization of the subsurface with plane-wave EM methods.
In the first part of this thesis, I present a two-dimensional deterministic approach to perform time-lapse inversion of plane-wave EM data. The strategy is based on the incorporation of prior information into the inversion algorithm regarding the expected temporal changes in electrical conductivity. This is done by incorporating a flexible stochastic regularization and constraints regarding the expected ranges of the changes by using Lagrange multipliers. I use non-l2 norms to penalize the model update in order to obtain sharp transitions between regions that experience temporal changes and regions that do not. I also incorporate a time-lapse differencing strategy to remove systematic errors in the time-lapse inversion. This work presents improvements in the characterization of temporal changes with respect to the classical approach of performing separate inversions and computing differences between the models. In the second part of this thesis, I adopt a Bayesian framework and use Markov chain Monte Carlo (MCMC) simulations to quantify model parameter uncertainty in plane-wave EM inversion. For this purpose, I present a two-dimensional pixel-based probabilistic inversion strategy for separate and joint inversions of plane-wave EM and electrical resistivity tomography (ERT) data. I compare the uncertainties of the model parameters when considering different types of prior information on the model structure and different likelihood functions to describe the data errors. The results indicate that model regularization is necessary when dealing with a large number of model parameters because it helps to accelerate the convergence of the chains and leads to more realistic models. These constraints also lead to smaller uncertainty estimates, which imply posterior distributions that do not include the true underlying model in regions where the method has limited sensitivity. 
This situation can be improved by combining plane-wave EM methods with complementary geophysical methods such as ERT. In addition, I show that an appropriate regularization weight and the standard deviation of the data errors can be retrieved by the MCMC inversion. Finally, I evaluate the possibility of characterizing the three-dimensional distribution of an injected water plume by performing three-dimensional time-lapse MCMC inversion of plane-wave EM data. Since MCMC inversion involves a significant computational burden in high parameter dimensions, I propose a model reduction strategy where the coefficients of a Legendre moment decomposition of the injected water plume and its location are estimated. For this purpose, a base resistivity model is needed, which is obtained prior to the time-lapse experiment. A synthetic test shows that the methodology works well when the base resistivity model is correctly characterized. The methodology is also applied to an injection experiment performed in a geothermal system in Australia, and compared to a three-dimensional time-lapse inversion performed within a deterministic framework. The MCMC inversion better constrains the water plume due to the larger amount of prior information that is included in the algorithm. The conductivity changes needed to explain the time-lapse data are, however, much larger than what is physically possible based on present-day understanding. This issue may be related to the base resistivity model used, indicating that more effort should be devoted to obtaining high-quality base models prior to dynamic experiments. The studies described herein give clear evidence that plane-wave EM methods are useful to characterize and monitor the subsurface at a wide range of scales. The presented approaches contribute to an improved appraisal of the obtained models, both in terms of the incorporation of prior information in the algorithms and the posterior uncertainty quantification.
In addition, the developed strategies can be applied to other geophysical methods, and offer great flexibility to incorporate additional information when available.
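The MCMC machinery underlying the probabilistic inversions can be illustrated with a minimal random-walk Metropolis sampler on a toy one-parameter target; this is a sketch only, since the thesis problems involve far larger parameter spaces and expensive forward models:

```python
import math
import random

def metropolis(log_post, x0, step, n, seed=1):
    """Minimal random-walk Metropolis sampler: propose a Gaussian
    step, accept with probability min(1, exp(lpp - lp)), otherwise
    keep the current state. Returns the chain of samples."""
    random.seed(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n):
        xp = x + random.gauss(0.0, step)
        lpp = log_post(xp)
        if lpp >= lp or random.random() < math.exp(lpp - lp):
            x, lp = xp, lpp
        samples.append(x)
    return samples

# Standard-normal target: the chain mean should settle near 0
# even though the chain starts far away at x0 = 5.
chain = metropolis(lambda x: -0.5 * x * x, x0=5.0, step=1.0, n=20000)
print(sum(chain) / len(chain))
```

In the inversion setting, `log_post` would combine the data likelihood from the EM forward model with the prior (e.g. the regularization term), and the state would be the full pixel or Legendre-coefficient vector.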
Abstract:
INTRODUCTION: Diverse microarray and sequencing technologies have been widely used to characterise the molecular changes in malignant epithelial cells in breast cancers. Such gene expression studies to identify markers and targets in tumour cells are, however, compromised by the cellular heterogeneity of solid breast tumours and by the lack of appropriate counterparts representing normal breast epithelial cells. METHODS: Malignant neoplastic epithelial cells from primary breast cancers, and luminal and myoepithelial cells isolated from normal human breast tissue, were isolated by immunomagnetic separation methods. Pools of RNA from highly enriched preparations of these cell types were subjected to expression profiling using massively parallel signature sequencing (MPSS) and four different genome-wide microarray platforms. Functionally related transcripts of the differential tumour epithelial transcriptome were used for gene set enrichment analysis to identify enrichment of luminal and myoepithelial type genes. Clinical pathological validation of a small number of genes was performed on tissue microarrays. RESULTS: MPSS identified 6,553 differentially expressed genes between the pool of normal luminal cells and that of primary tumours substantially enriched for epithelial cells, of which 98% were represented and 60% were confirmed by microarray profiling. A further 4,149 transcripts showed significant expression level changes between these two samples that were detected only by microarray technology, resulting in a combined differential tumour epithelial transcriptome of 8,051 genes. Microarray gene signatures identified a comprehensive list of 907 and 955 transcripts whose expression differed between luminal epithelial cells and myoepithelial cells, respectively. Functional annotation and gene set enrichment analysis highlighted a group of genes related to skeletal development that were associated with the myoepithelial/basal cells and upregulated in the tumour sample.
One of the most highly overexpressed genes in this category, that encoding periostin, was analysed immunohistochemically on breast cancer tissue microarrays and its expression in neoplastic cells correlated with poor outcome in a cohort of poor prognosis estrogen receptor-positive tumours. CONCLUSION: Using highly enriched cell populations in combination with multiplatform gene expression profiling studies, a comprehensive analysis of molecular changes between the normal and malignant breast tissue was established. This study provides a basis for the identification of novel and potentially important targets for diagnosis, prognosis and therapy in breast cancer.
Abstract:
The research considers the problem of spatial data classification using machine learning algorithms: probabilistic neural networks (PNN) and support vector machines (SVM). A simple k-nearest neighbor algorithm is considered as a benchmark model. PNN is a neural network reformulation of well-known nonparametric principles of probability density modeling using kernel density estimators and Bayesian optimal or maximum a posteriori decision rules. PNN is well suited to problems where not only predictions but also quantification of accuracy and integration of prior information are necessary. An important property of PNNs is that they can easily be used in decision support systems dealing with problems of automatic classification. The support vector machine is an implementation of the principles of statistical learning theory for classification tasks. Recently, SVMs have been successfully applied to a variety of environmental topics: classification of soil types and hydro-geological units, optimization of monitoring networks, and susceptibility mapping of natural hazards. In the present paper, both simulated and real data case studies (low- and high-dimensional) are considered. The main attention is paid to the detection and learning of spatial patterns by the applied algorithms.
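A PNN in its simplest form is a Gaussian Parzen-window density estimate per class followed by the Bayes decision rule; a minimal sketch with hypothetical 2-D spatial data, assuming equal class priors:

```python
import numpy as np

def pnn_classify(x, train_X, train_y, sigma=1.0):
    """Minimal probabilistic neural network: one Gaussian
    Parzen-window density estimate per class, then the Bayes
    decision rule with equal class priors. Returns the label."""
    x = np.asarray(x, dtype=float)
    scores = {}
    for label in np.unique(train_y):
        pts = train_X[train_y == label]
        d2 = np.sum((pts - x) ** 2, axis=1)          # squared distances
        scores[label] = np.mean(np.exp(-d2 / (2.0 * sigma ** 2)))
    return max(scores, key=scores.get)

# Hypothetical 2-D spatial samples from two classes.
X = np.array([[0.0, 0.0], [0.2, 0.1], [3.0, 3.0], [2.9, 3.2]])
y = np.array([0, 0, 1, 1])
print(pnn_classify([0.1, 0.0], X, y))  # → 0
```

The per-class scores are (unnormalized) class-conditional densities, which is what allows a PNN to report a measure of confidence alongside each prediction.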
Abstract:
Traffic safety engineers are among the early adopters of Bayesian statistical tools for analyzing crash data. As in many other areas of application, empirical Bayes methods were their first choice, perhaps because they represent an intuitively appealing, yet relatively easy to implement alternative to purely classical approaches. With the enormous progress in numerical methods made in recent years and with the availability of free, easy to use software that permits implementing a fully Bayesian approach, however, there is now ample justification to progress towards fully Bayesian analyses of crash data. The fully Bayesian approach, in particular as implemented via multi-level hierarchical models, has many advantages over the empirical Bayes approach. In a full Bayesian analysis, prior information and all available data are seamlessly integrated into posterior distributions on which practitioners can base their inferences. All uncertainties are thus accounted for in the analyses and there is no need to pre-process data to obtain Safety Performance Functions and other such prior estimates of the effect of covariates on the outcome of interest. In this light, fully Bayesian methods may well be less costly to implement and may result in safety estimates with more realistic standard errors. In this manuscript, we present the full Bayesian approach to analyzing traffic safety data and focus on highlighting the differences between the empirical Bayes and the full Bayes approaches. We use an illustrative example to discuss a step-by-step Bayesian analysis of the data and to show some of the types of inferences that are possible within the full Bayesian framework.
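The contrast with the empirical Bayes approach can be made concrete: under one common negative-binomial parameterisation, the EB safety estimate is a weighted average of the Safety Performance Function prediction and the site's observed count (the dispersion value and counts below are purely illustrative):

```python
def empirical_bayes_estimate(mu_spf, observed, phi):
    """Empirical Bayes expected crash frequency: a weighted average
    of the Safety Performance Function prediction (mu_spf) and the
    site's observed crash count, under a negative-binomial model
    with dispersion parameter phi (one common parameterisation)."""
    w = 1.0 / (1.0 + mu_spf / phi)     # weight on the SPF prediction
    return w * mu_spf + (1.0 - w) * observed

# SPF predicts 6 crashes; the site recorded 10; phi = 2 gives w = 0.25.
print(empirical_bayes_estimate(6.0, 10.0, 2.0))  # → 9.0
```

The full Bayes approach replaces this two-stage shortcut with a single posterior distribution, so the uncertainty in the SPF and in phi propagates into the safety estimate rather than being fixed in a pre-processing step.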
Resumo:
This report describes the results of the research project investigating the use of advanced field data acquisition technologies for Iowa transportation agencies. The objectives of the research project were to (1) research and evaluate current data acquisition technologies for field data collection, manipulation, and reporting; (2) identify the current field data collection approach and the interest level in applying current technologies within Iowa transportation agencies; and (3) summarize findings, prioritize technology needs, and provide recommendations regarding suitable applications for future development. A steering committee consisting of state, city, and county transportation officials provided guidance during this project. Technologies considered in this study included (1) data storage (bar coding, radio frequency identification, touch buttons, magnetic stripes, and video logging); (2) data recognition (voice recognition and optical character recognition); (3) field referencing systems (global positioning systems [GPS] and geographic information systems [GIS]); (4) data transmission (radio frequency data communications and electronic data interchange); and (5) portable computers (pen-based computers). The literature review revealed that many of these technologies could have useful applications in the transportation industry. A survey was developed to examine current data collection methods and identify the interest in using advanced field data collection technologies. Surveys were sent out to county and city engineers and state representatives responsible for certain programs (e.g., maintenance management and construction management). Results showed that almost all field data are collected using manual approaches and are hand-carried to the office, where they are either entered into a computer or manually stored.
A lack of standardization was apparent in the type of software applications used by each agency--even the types of forms used to manually collect data differed by agency. Furthermore, interest in using advanced field data collection technologies depended upon the technology, program (e.g., pavement or sign management), and agency type (e.g., state, city, or county). The state and larger cities and counties seemed to be interested in using several of the technologies, whereas smaller agencies appeared to have very little interest in using advanced techniques to capture data. A more thorough analysis of the survey results is provided in the report. Recommendations are made to enhance the use of advanced field data acquisition technologies in Iowa transportation agencies: (1) Appoint a statewide task group to coordinate the effort to automate field data collection and reporting within the Iowa transportation agencies. Subgroups representing the cities, counties, and state should be formed, with oversight provided by the statewide task group. (2) Educate employees so that they become familiar with the various field data acquisition technologies.
Resumo:
Why mating types exist at all is subject to much debate. Existing hypotheses propose that mating types evolved to control organelle transmission during sexual reproduction, or to prevent inbreeding or same-clone mating. Here I review data from a diversity of taxa (including ciliates, algae, slime molds, ascomycetes, and basidiomycetes) to show that the structure and function of mating types run counter to the above hypotheses. I argue instead for a key role in triggering developmental switches. Genomes must fulfill a diversity of alternative programs along the sexual cycle. As a haploid gametophyte, an individual may grow vegetatively (through haploid mitoses) or initiate gametogenesis and mating. As a diploid sporophyte, similarly, it may grow vegetatively (through diploid mitoses) or initiate meiosis and sporulation. Only diploid sporophytes (and not haploid gametophytes) should switch on the meiotic program. Similarly, only haploid gametophytes (not sporophytes) should switch on gametogenesis and mating. And they should only do so when other gametophytes in the neighborhood are ready to do the same. As argued here, mating types have evolved primarily to switch on the right program at the right moment.