952 results for Data sets storage
Abstract:
In recent years, numerous studies have highlighted the toxic effects of organic micropollutants on the species of our lakes and rivers. However, most of these studies focused on the toxicity of individual substances, whereas organisms are exposed every day to thousands of substances in mixture, and the effects of these cocktails are far from negligible. This doctoral thesis therefore examined models for predicting the environmental risk such cocktails pose to the aquatic environment. The main objective was to assess the ecological risk of the mixtures of chemical substances measured in Lake Geneva, but also to take a critical look at the methodologies used, in order to propose adaptations for a better estimation of the risk. In the first part of this work, the risk of mixtures of pesticides and pharmaceuticals in the Rhône and in Lake Geneva was established using approaches envisioned, in particular, in European legislation. These are screening approaches, i.e. approaches allowing a general assessment of mixture risk. Such an approach makes it possible to highlight the most problematic substances, i.e. those contributing the most to the toxicity of the mixture; in our case, essentially four pesticides. The study also shows that all substances, even in trace amounts, contribute to the effect of the mixture. This finding has implications for environmental management: it implies that all sources of pollutants must be reduced, not only the most problematic ones. However, the proposed approach also presents an important conceptual bias, which makes its use questionable beyond screening and would require an adaptation of the safety factors employed. 
In a second part, the study focused on the use of mixture models in environmental risk calculation. Indeed, mixture models were developed and validated species by species, not for an assessment of the whole ecosystem. Their use should therefore proceed species by species, which is rarely done owing to the lack of available ecotoxicological data. The goal was therefore to compare, using randomly generated values, the risk calculated by the rigorous species-by-species method with the risk calculated in the classical way, where the models are applied to the whole community without accounting for inter-species variation. The results are similar in the majority of cases, which validates the traditionally used approach. However, this work identified certain cases where the classical application can lead to an under- or overestimation of the risk. Finally, the last part of this thesis examined the influence that cocktails of micropollutants may have had on in situ communities. To do so, a two-step approach was adopted. First, the toxicity of fourteen herbicides detected in Lake Geneva was determined. Over the period studied, from 2004 to 2009, this herbicide toxicity decreased from 4% of species affected to less than 1%. The question was then whether this decrease in toxicity had an impact on the development of certain species within the algal community. To that end, statistical analysis made it possible to isolate other factors that may influence the flora, such as water temperature or the presence of phosphates, and thus to determine which species turned out to have been influenced, positively or negatively, by the decrease of toxicity in the lake over time. 
Interestingly, some of these species had already shown similar behaviors in mesocosm studies. In conclusion, this work shows that robust models exist for predicting the risk of micropollutant mixtures to aquatic species, and that they can be used to explain the role of these substances in ecosystem functioning. However, these models naturally have limits and underlying assumptions that must be considered when they are applied. - For several years, the risks that organic micropollutants pose to the aquatic environment have greatly concerned scientists and society. Numerous studies have highlighted the toxic effects these chemical substances can have on the species of our lakes and rivers when exposed to acute or chronic concentrations. However, most of these studies focused on the toxicity of individual substances, i.e. substances considered separately. The same currently holds in European regulatory procedures for the environmental risk assessment of a substance. Yet organisms are exposed every day to thousands of substances in mixture, and the effects of these "cocktails" are not negligible. The ecological risk assessment of such mixtures must therefore be addressed in the most appropriate and reliable way possible. In the first part of this thesis, we examined the methods currently envisioned for integration into European legislation for assessing the risk of mixtures to the aquatic environment. 
These methods are based on the concentration addition model, using either the predicted no-effect concentrations (PNEC) of the substances or the effect concentrations (EC50) on certain species of one trophic level combined with assessment factors. We applied these methods to two specific cases, Lake Geneva and the Rhône in Switzerland, and discussed the outcomes of these applications. These first assessment tiers showed that the mixture risk for these case studies rapidly exceeds a critical threshold. This exceedance is generally due to two or three main substances. The proposed procedures therefore make it possible to identify the most problematic substances, for which management measures, such as reducing their input into the aquatic environment, should be envisioned. However, we also found that the risk level associated with these mixtures is not negligible even without taking these main substances into account: the accumulation of substances, even in trace amounts, reaches a critical threshold, which is more challenging in terms of risk management. In addition, we highlighted a lack of reliability in these procedures, which can lead to contradictory results in terms of risk; this is linked to the inconsistency of the assessment factors used in the different methods. In the second part of the thesis, we investigated the reliability of more advanced methods for predicting the effect of mixtures on the communities living in the aquatic system. These methods combine the concentration addition (CA) or response addition (RA) model with the species sensitivity distribution (SSD) curves of the substances. 
Indeed, mixture models were developed and validated for application species by species, not to several species aggregated simultaneously in SSD curves. We therefore proposed a more rigorous procedure for mixture risk assessment: first apply the CA or RA models to each species separately and, in a second step, combine the results to build the SSD of the mixture. Unfortunately, this method is not applicable in most cases, because it requires large data sets that are usually unavailable. Consequently, we compared, using randomly generated values, the risk calculated by this more rigorous method with the risk calculated traditionally, in order to characterize the robustness of the approach that applies mixture models directly to SSD curves. Our results showed that using CA directly on SSDs can lead to an underestimation of the mixture concentration affecting 5% or 50% of species, in particular when the substances have a large standard deviation in their species sensitivity distribution. Applying the RA model can lead to over- or underestimations, depending mainly on the slope of the dose-response curves of the individual species composing the SSDs. The underestimation with RA becomes potentially important when the ratio between the EC50 and the EC10 of the species' dose-response curves is smaller than 100. However, for most substances, given real-world ecotoxicity data, the mixture risk calculated by applying the models directly to SSDs remains consistent and would rather slightly overestimate the risk. These results thus validate the traditionally used approach. 
Nevertheless, this source of error must be kept in mind when assessing the risk of a mixture with this traditional method, in particular when the SSDs show a data distribution outside the limits determined in this study. Finally, in the last part of this thesis, we confronted mixture effect predictions with biological changes observed in the environment. In this study, we used data from the long-term monitoring of a large European lake, Lake Geneva, which offered the opportunity to assess to what extent the predicted toxicity of herbicide mixtures explained changes in the composition of the phytoplankton community, alongside other classical limnological parameters such as nutrients. To reach this goal, we determined the mixture toxicity, over several years, of 14 herbicides regularly detected in the lake, using the CA and RA models with species sensitivity distribution curves. A decreasing temporal gradient of toxicity was observed from 2004 to 2009. Redundancy analysis and partial redundancy analysis showed that this gradient explains a significant part of the variation in phytoplankton community composition, even after removing the effect of all other covariates. Moreover, some species revealed to have been influenced, positively or negatively, by the decrease of toxicity in the lake over time showed similar behaviors in mesocosm studies. It can be concluded that herbicide mixture toxicity is one of the key parameters explaining phytoplankton changes in Lake Geneva. In conclusion, various methods exist to predict the risk of micropollutant mixtures to aquatic species, and this risk can play a role in ecosystem functioning. 
However, these models naturally have limits and underlying assumptions that must be considered when applying them, before their results are used for environmental risk management. - For several years now, scientists as well as society at large have been concerned by the risk organic micropollutants may pose to aquatic environments. Indeed, several studies have shown the toxic effects these substances may induce on organisms living in our lakes and rivers, especially when exposed to acute or chronic concentrations. However, most studies focused on the toxicity of single compounds, i.e. compounds considered individually. The same goes for the current European regulations concerning the environmental risk assessment procedures for these substances. But aquatic organisms are typically exposed every day, simultaneously, to thousands of organic compounds, and the toxic effects resulting from these "cocktails" cannot be neglected. The ecological risk assessment of mixtures of such compounds therefore has to be addressed by scientists in the most reliable and appropriate way. In the first part of this thesis, the procedures currently envisioned for aquatic mixture risk assessment in European legislation are described. These methodologies are based on the concentration addition mixture model and the use of predicted no-effect concentrations (PNEC) or effect concentrations (EC50) with assessment factors. These principal approaches were applied to two specific case studies, Lake Geneva and the River Rhône in Switzerland, including a discussion of the outcomes of such applications. These first-tier assessments showed that the mixture risks for the studied cases rapidly exceeded the critical value. This exceedance is generally due to two or three main substances. 
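The first-tier screening described above can be sketched in a few lines: under concentration addition, the mixture risk quotient is the sum of each measured concentration divided by its PNEC, and a sum above 1 flags a concern. The substance names and numbers below are purely illustrative, not data from the thesis.

```python
def mixture_risk_quotient(concentrations, pnecs):
    """Sum of risk quotients C_i / PNEC_i (concentration addition)."""
    return sum(c / p for c, p in zip(concentrations, pnecs))

def main_contributors(names, concentrations, pnecs, top=3):
    """Rank substances by their contribution to the mixture quotient."""
    quotients = {n: c / p for n, c, p in zip(names, concentrations, pnecs)}
    return sorted(quotients, key=quotients.get, reverse=True)[:top]

names = ["pesticide A", "pesticide B", "pharmaceutical C"]
conc = [0.10, 0.02, 0.05]   # measured concentrations, ug/L (invented)
pnec = [0.05, 0.10, 1.00]   # predicted no-effect concentrations, ug/L (invented)

print(mixture_risk_quotient(conc, pnec))   # 2.25 -> above the critical value of 1
print(main_contributors(names, conc, pnec))
```

Such a ranking is what makes the screening useful for management: it singles out the few substances driving most of the quotient, while the residual sum shows the contribution of the many trace-level substances.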
The proposed procedures therefore allow the identification of the most problematic substances, for which management measures, such as a reduction of their entrance into the aquatic environment, should be envisioned. However, it was also shown that the risk levels associated with mixtures of compounds are not negligible, even without considering these main substances. Indeed, it is the sum of the substances that is problematic, which is more challenging in terms of risk management. Moreover, a lack of reliability in the procedures was highlighted, which can lead to contradictory results in terms of risk. This result is linked to the inconsistency in the assessment factors applied in the different methods. In the second part of the thesis, the reliability of the more advanced procedures to predict the mixture effect on communities in the aquatic system was investigated. These established methodologies combine the model of concentration addition (CA) or response addition (RA) with species sensitivity distribution curves (SSD). Indeed, the mixture effect predictions were shown to be consistent only when the mixture models are applied on a single species, and not on several species simultaneously aggregated to SSDs. Hence, a more stringent procedure for mixture risk assessment is proposed: first apply the CA or RA models to each species separately and, in a second step, combine the results to build an SSD for the mixture. Unfortunately, this methodology is not applicable in most cases, because it requires large data sets usually not available. Therefore, the differences between the two methodologies were studied with data sets created artificially to characterize the robustness of the traditional approach applying models on species sensitivity distributions. 
The results showed that the use of CA directly on SSDs might lead to underestimations of the mixture concentration affecting 5% or 50% of species, especially when substances present a large standard deviation in their species sensitivity distribution. The application of RA can lead to over- or underestimates, depending mainly on the slope of the dose-response curves of the individual species. The potential underestimation with RA becomes important when the ratio between the EC50 and the EC10 of the dose-response curves of the species composing the SSD is smaller than 100. However, considering common real cases of ecotoxicity data for substances, the mixture risk calculated by the methodology applying mixture models directly on SSDs remains consistent and would rather slightly overestimate the risk. These results can be used as a theoretical validation of the currently applied methodology. Nevertheless, when assessing the risk of mixtures, one has to keep in mind this source of error of the classical methodology, especially when SSDs present a distribution of the data outside the range determined in this study. Finally, in the last part of this thesis, we confronted the mixture effect predictions with biological changes observed in the environment. In this study, long-term monitoring of a great European lake, Lake Geneva, provided the opportunity to assess to what extent the predicted toxicity of herbicide mixtures explains the changes in the composition of the phytoplankton community, next to other classical limnology parameters such as nutrients. To reach this goal, the gradient of the mixture toxicity of 14 herbicides regularly detected in the lake was calculated, using concentration addition and response addition models. A decreasing temporal gradient of toxicity was observed from 2004 to 2009. 
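The two combination rules can be illustrated on synthetic SSDs. In this sketch, each substance's SSD is modelled as a log-normal distribution of EC50s; RA combines the per-substance potentially affected fractions (independent action), while CA sums toxic units and evaluates them against a common SSD, which is strictly valid only when the substances share the same SSD spread (one of the caveats the thesis analyzes). All numbers are invented for illustration.

```python
from math import erf, log10, sqrt

def paf(conc, mu_log10, sigma_log10):
    """Potentially affected fraction at `conc` under a log-normal SSD."""
    z = (log10(conc) - mu_log10) / sigma_log10
    return 0.5 * (1 + erf(z / sqrt(2)))

def mspaf_response_addition(pafs):
    """RA: independent joint action, 1 - product of unaffected fractions."""
    unaffected = 1.0
    for p in pafs:
        unaffected *= (1 - p)
    return 1 - unaffected

def mspaf_concentration_addition(concs, mus, sigma):
    """CA applied on the SSDs directly: sum toxic units relative to each
    substance's HC50, then evaluate on a unit SSD with the shared sigma."""
    hc50s = [10 ** m for m in mus]              # median hazardous concentrations
    toxic_units = sum(c / h for c, h in zip(concs, hc50s))
    return paf(toxic_units, 0.0, sigma)

concs = [0.5, 0.2]   # exposure concentrations (arbitrary units, invented)
mus = [1.0, 0.5]     # mean log10 EC50 per substance (invented)
sigma = 0.7          # common SSD standard deviation (assumed shared)

pafs = [paf(c, m, sigma) for c, m in zip(concs, mus)]
print(mspaf_response_addition(pafs))
print(mspaf_concentration_addition(concs, mus, sigma))
```

Comparing the two outputs over many randomly drawn parameter sets, as the thesis does, is what reveals where the shortcut of applying the models on aggregated SSDs over- or underestimates the species-by-species result.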
Redundancy analysis and partial redundancy analysis showed that this gradient explains a significant portion of the variation in phytoplankton community composition, even after removing the effect of all other co-variables. Moreover, some species that were revealed to be influenced, positively or negatively, by the decrease of toxicity in the lake over time showed similar behaviors in mesocosm studies. It could be concluded that herbicide mixture toxicity is one of the key parameters explaining phytoplankton changes in Lake Geneva. To conclude, different methods exist to predict the risk of mixtures in ecosystems, but their reliability varies depending on the underlying hypotheses. One should therefore carefully consider these hypotheses, as well as the limits of the approaches, before using the results for environmental risk management.
Abstract:
The Organization of the Thesis. The remainder of the thesis comprises five chapters and a conclusion. The next chapter formalizes the envisioned theory into a tractable model. Section 2.2 presents a formal description of the model economy: the individual heterogeneity, the individual objective, the UI setting, the population dynamics and the equilibrium. The welfare and efficiency criteria for qualifying various equilibrium outcomes are proposed in section 2.3. The fourth section shows how the model-generated information can be computed. Chapter 3 transposes the model from chapter 2 into conditions that enable its use in the analysis of individual labor market strategies and their implications for the labor market equilibrium. In section 3.2 the Swiss labor market data sets, stylized facts, and the UI system are presented. The third section outlines and motivates the parameterization method. In section 3.4 the model's replication ability is evaluated and some aspects of the parameter choice are discussed. Numerical solution issues can be found in the appendix. Chapter 4 examines the determinants of search-strategic behavior in the model economy and its implications for the labor market aggregates. In section 4.2, the unemployment duration distribution is examined and related to search strategies. Section 4.3 shows how the search-strategic behavior is influenced by the UI eligibility and section 4.4 how it is determined by individual heterogeneity. The composition effects generated by search strategies in labor market aggregates are examined in section 4.5. The last section evaluates the model's replication of empirical unemployment escape frequencies reported in Sheldon [67]. Chapter 5 applies the model economy to examine the effects on the labor market equilibrium of shocks to the labor market risk structure, to the deep underlying labor market structure and to the UI setting. 
Section 5.2 examines the effects of the labor market risk structure on the labor market equilibrium and the labor market strategic behavior. The effects of alterations in the labor market's deep economic structural parameters, i.e. individual preferences and production technology, are shown in Section 5.3. Finally, the impacts of the UI setting on the labor market are studied in Section 5.4. This section also evaluates the role of UI authority monitoring and the differences in the way changes in the replacement rate and the UI benefit duration affect the labor market. In chapter 6 the model economy is applied in counterfactual experiments to assess several aspects of the Swiss labor market movements in the nineties. Section 6.2 examines the two equilibria characterizing the Swiss labor market in the nineties: the "growth" equilibrium with a "moderate" UI regime and the "recession" equilibrium with a more "generous" UI. Section 6.3 evaluates the isolated effects of the structural shocks, while the isolated effects of the UI reforms are analyzed in section 6.4. Particular dimensions of the UI reforms, the duration, replacement rate and tax rate effects, are studied in section 6.5, while labor market equilibria without benefits are evaluated in section 6.6. In section 6.7 the structural and institutional interactions that may act as unemployment amplifiers are discussed in view of the obtained results. A welfare analysis based on individual welfare in different structural and UI settings is presented in the eighth section. Finally, the results are related to the more favorable unemployment trends after 1997. The conclusion evaluates the features embodied in the model economy with respect to the resulting model dynamics to derive lessons from the model design. The thesis ends by proposing guidelines for future improvements of the model and directions for further research.
Abstract:
DnaSP is a software package for the analysis of DNA polymorphism data. The present version introduces several new modules and features which, among other options, allow: (1) handling big data sets (~5 Mb per sequence); (2) conducting a large number of coalescent-based tests by Monte Carlo computer simulations; (3) extensive analyses of the genetic differentiation and gene flow among populations; (4) analysing the evolutionary pattern of preferred and unpreferred codons; (5) generating graphical outputs for an easy visualization of results. Availability: The software package, including complete documentation and examples, is freely available to academic users from: http://www.ub.es/dnasp
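As a rough illustration of the coalescent Monte Carlo machinery behind such tests (this is a generic textbook sketch, not DnaSP's implementation): for n sampled sequences, waiting times between coalescences are exponential with rate k(k-1)/2 while k lineages remain, and the expected number of segregating sites follows from the simulated total tree length.

```python
import random

def simulate_tree_length(n, rng):
    """Total branch length of one simulated Kingman coalescent tree
    (time measured in units of 2N generations)."""
    total, k = 0.0, n
    while k > 1:
        rate = k * (k - 1) / 2
        t = rng.expovariate(rate)   # waiting time until the next coalescence
        total += k * t              # k branches extend during this interval
        k -= 1
    return total

def expected_segregating_sites(n, theta, reps=10000, seed=1):
    """Monte Carlo estimate of E[S] = (theta / 2) * E[total tree length]."""
    rng = random.Random(seed)
    mean_len = sum(simulate_tree_length(n, rng) for _ in range(reps)) / reps
    return theta / 2 * mean_len

# Under the standard coalescent, E[S] = theta * sum(1/i for i in 1..n-1).
print(expected_segregating_sites(10, theta=5.0))
```

Repeating such simulations under the null model and comparing the simulated distribution of a statistic with its observed value is the general shape of the coalescent-based tests the abstract mentions.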
Abstract:
The present research project was designed to identify the typical Iowa material input values that are required by the Mechanistic-Empirical Pavement Design Guide (MEPDG) for the Level 3 concrete pavement design. It was also designed to investigate the existing equations that might be used to predict Iowa pavement concrete properties for the Level 2 pavement design. In this project, over 20,000 data records were collected from the Iowa Department of Transportation (DOT) and other sources. These data, most of which were concrete compressive strength, slump, air content, and unit weight data, were synthesized and their statistical parameters (such as the mean values and standard deviations) were analyzed. Based on the analyses, the typical input values of Iowa pavement concrete, such as 28-day compressive strength (f'c), splitting tensile strength (fsp), elastic modulus (Ec), and modulus of rupture (MOR), were evaluated. The study indicates that the 28-day MOR of Iowa concrete is 646 ± 51 psi, very close to the MEPDG default value (650 psi). The 28-day Ec of Iowa concrete (based only on two available data points from the Iowa Curling and Warping project) is 4.82 ± 0.28 x 10^6 psi, which is quite different from the MEPDG default value (3.93 x 10^6 psi); therefore, the researchers recommend re-evaluating after more Iowa test data become available. The drying shrinkage (εc) of a typical Iowa concrete (C-3WR-C20 mix) was tested at Concrete Technology Laboratory (CTL). The test results show that the ultimate shrinkage of the concrete is about 454 microstrain and the time for the concrete to reach 50% of ultimate shrinkage is at 32 days; both of these values are very close to the MEPDG default values. The comparison of the Iowa test data and the MEPDG default values, as well as the recommendations on the input values to be used in MEPDG for Iowa PCC pavement design, are summarized in Table 20 of this report. The available equations for predicting the above-mentioned concrete properties were also assembled. 
The validity of these equations for Iowa concrete materials was examined. Multiple-parameter nonlinear regression analyses, along with the artificial neural network (ANN) method, were employed to investigate the relationships among Iowa concrete material properties and to modify the existing equations so as to be suitable for Iowa concrete materials. However, due to a lack of the necessary data sets, the relationships between Iowa concrete properties were established based on the limited data from CP Tech Center's projects and ISU classes only. The researchers suggest that the resulting relationships be used by Iowa pavement design engineers as references only. The present study furthermore indicates that appropriately documenting concrete properties, including flexural strength, elastic modulus, and information on concrete mix design, is essential for updating the typical Iowa material input values and providing rational prediction equations for concrete pavement design in the future.
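For context, the generic ACI-type strength correlations that often serve as starting points for such property predictions can be written directly. These are the widely cited default forms (Ec = 57000·sqrt(f'c) and MOR = k·sqrt(f'c) with k commonly between 7.5 and 9, all in psi), not the Iowa-specific equations developed in the report.

```python
from math import sqrt

def elastic_modulus(fc):
    """ACI 318-style estimate for normal-weight concrete: Ec = 57000 * sqrt(f'c), psi."""
    return 57000 * sqrt(fc)

def modulus_of_rupture(fc, k=7.5):
    """MOR = k * sqrt(f'c), psi; k is commonly taken between 7.5 and 9."""
    return k * sqrt(fc)

fc = 5000  # 28-day compressive strength, psi (illustrative)
print(round(elastic_modulus(fc)))      # about 4.03 x 10^6 psi
print(round(modulus_of_rupture(fc)))   # about 530 psi
```

Regression studies like the one described essentially re-estimate the coefficients of such forms (or replace them with ANN mappings) against the local data set.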
Abstract:
We conduct a large-scale comparative study on linearly combining superparent-one-dependence estimators (SPODEs), a popular family of seminaive Bayesian classifiers. Altogether, 16 model selection and weighing schemes, 58 benchmark data sets, and various statistical tests are employed. This paper's main contributions are threefold. First, it formally presents each scheme's definition, rationale, and time complexity and hence can serve as a comprehensive reference for researchers interested in ensemble learning. Second, it offers bias-variance analysis for each scheme's classification error performance. Third, it identifies effective schemes that meet various needs in practice. This leads to accurate and fast classification algorithms which have an immediate and significant impact on real-world applications. Another important feature of our study is using a variety of statistical tests to evaluate multiple learning methods across multiple data sets.
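The linear-combination step itself is simple to sketch: each ensemble member yields class-probability estimates, and the ensemble output is their (possibly weighted) average, with model selection corresponding to zero weights. The stand-in "members" below are just lists of probabilities; real SPODEs each condition on a different superparent attribute.

```python
def combine(prob_lists, weights=None):
    """Weighted linear combination of per-model class-probability lists."""
    n = len(prob_lists)
    weights = weights or [1.0 / n] * n
    total = sum(weights)
    k = len(prob_lists[0])
    combined = [0.0] * k
    for w, probs in zip(weights, prob_lists):
        for c in range(k):
            combined[c] += (w / total) * probs[c]
    return combined

def predict(prob_lists, weights=None):
    """Class index with the highest combined probability."""
    probs = combine(prob_lists, weights)
    return max(range(len(probs)), key=probs.__getitem__)

# Three hypothetical members' estimates for classes [0, 1]:
members = [[0.6, 0.4], [0.3, 0.7], [0.55, 0.45]]
print(combine(members))   # uniform weights
print(predict(members))   # class 1
```

The schemes compared in the paper differ precisely in how the weights are chosen (uniform, accuracy-based, Bayesian, cross-validation-based, and so on) and in whether some members are dropped entirely.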
Abstract:
Purpose To investigate the differences in viscoelastic properties between normal and pathologic Achilles tendons (ATs) by using real-time shear-wave elastography (SWE). Materials and Methods The institutional review board approved this study, and written informed consent was obtained from 25 symptomatic patients and 80 volunteers. One hundred eighty ultrasonographic (US) and SWE studies of ATs without tendonopathy and 30 studies of the middle portion of the AT in patients with tendonopathy were assessed prospectively. Each study included data sets acquired at B-mode US (tendon morphology and cross-sectional area) and SWE (axial and sagittal mean velocity and relative anisotropic coefficient) for two passively mobilized ankle positions. The presence of AT tears at B-mode US and signal-void areas at SWE were noted. Results Significantly lower mean velocity was shown in tendons with tendonopathy than in normal tendons in the relaxed position at axial SWE (P < .001) and in the stretched position at sagittal (P < .001) and axial (P = .0026) SWE. 
Tendon softening was a sign of tendonopathy in relaxed ATs when the mean velocity was less than or equal to 4.06 m · sec(-1) at axial SWE (sensitivity, 54.2%; 95% confidence interval [CI]: 32.8, 74.4; specificity, 91.5%; 95% CI: 86.3, 95.1) and less than or equal to 5.70 m · sec(-1) at sagittal SWE (sensitivity, 41.7%; 95% CI: 22.1, 63.3; specificity, 81.8%; 95% CI: 75.3, 87.2), and in stretched ATs when the mean velocity was less than or equal to 4.86 m · sec(-1) at axial SWE (sensitivity, 66.7%; 95% CI: 44.7, 84.3; specificity, 75.6%; 95% CI: 68.5, 81.7) and less than or equal to 14.58 m · sec(-1) at sagittal SWE (sensitivity, 58.3%; 95% CI: 36.7, 77.9; specificity, 83.5%; 95% CI: 77.2, 88.7). Anisotropic results were not significantly different between normal and pathologic ATs. Six of six (100%) partial-thickness tears appeared as signal-void areas at SWE. Conclusion Whether the AT was relaxed or stretched, SWE helped to confirm and quantify pathologic tendon softening in patients with tendonopathy in the midportion of the AT and did not reveal modifications of viscoelastic anisotropy in the tendon. Tendon softening assessed by using SWE appeared to be highly specific, but sensitivity was relatively low. © RSNA, 2014.
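How a velocity cutoff translates into the reported sensitivity and specificity can be shown with a toy calculation: tendons at or below the cutoff are called "softened" (test positive), and the counts are tallied against the clinical diagnosis. The velocities and labels below are invented, not the study data.

```python
def sensitivity_specificity(velocities, labels, cutoff):
    """labels: True = tendonopathy present. Positive test = velocity <= cutoff."""
    tp = sum(1 for v, d in zip(velocities, labels) if d and v <= cutoff)
    fn = sum(1 for v, d in zip(velocities, labels) if d and v > cutoff)
    tn = sum(1 for v, d in zip(velocities, labels) if not d and v > cutoff)
    fp = sum(1 for v, d in zip(velocities, labels) if not d and v <= cutoff)
    return tp / (tp + fn), tn / (tn + fp)

v = [3.5, 4.2, 4.0, 5.2, 6.1, 3.9]           # mean SWE velocities, m/s (invented)
d = [True, True, False, False, False, True]  # tendonopathy present?
sens, spec = sensitivity_specificity(v, d, cutoff=4.06)
print(sens, spec)   # 2/3 and 2/3 for this toy data
```

Sweeping the cutoff over a range of values and recomputing these two numbers is how an operating point such as the 4.06 m/s threshold is chosen.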
Abstract:
BACKGROUND: Genotypes obtained with commercial SNP arrays have been extensively used in many large case-control or population-based cohorts for SNP-based genome-wide association studies for a multitude of traits. Yet, these genotypes capture only a small fraction of the variance of the studied traits. Genomic structural variants (GSV) such as Copy Number Variation (CNV) may account for part of the missing heritability, but their comprehensive detection requires either next-generation arrays or sequencing. Sophisticated algorithms that infer CNVs by combining the intensities from SNP-probes for the two alleles can already be used to extract a partial view of such GSV from existing data sets. RESULTS: Here we present several advances to facilitate the latter approach. First, we introduce a novel CNV detection method based on a Gaussian Mixture Model. Second, we propose a new algorithm, PCA merge, for combining copy-number profiles from many individuals into consensus regions. We applied both our new methods as well as existing ones to data from 5612 individuals from the CoLaus study who were genotyped on Affymetrix 500K arrays. We developed a number of procedures in order to evaluate the performance of the different methods. This includes comparison with previously published CNVs as well as using a replication sample of 239 individuals, genotyped with Illumina 550K arrays. We also established a new evaluation procedure that employs the fact that related individuals are expected to share their CNVs more frequently than randomly selected individuals. The ability to detect both rare and common CNVs provides a valuable resource that will facilitate association studies exploring potential phenotypic associations with CNVs. 
CONCLUSION: Our new methodologies for CNV detection and their evaluation will help in extracting additional information from the large amount of SNP-genotyping data on various cohorts and use this to explore structural variants and their impact on complex traits.
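The core of a GMM-based CNV caller can be sketched as a one-dimensional Gaussian mixture fitted by EM to probe intensity log-ratios, with components for deletion, normal copy number, and duplication. This is a generic toy version with a fixed, shared standard deviation, not the algorithm of the study.

```python
import math, random

def em_gmm(xs, means, sd=0.15, iters=50):
    """EM for a 1-D Gaussian mixture with a fixed, shared standard deviation."""
    k = len(means)
    means = list(means)
    weights = [1.0 / k] * k
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x in xs:
            ps = [w * math.exp(-((x - m) ** 2) / (2 * sd * sd))
                  for w, m in zip(weights, means)]
            s = sum(ps)
            resp.append([p / s for p in ps])
        # M-step: re-estimate component weights and means
        for j in range(k):
            nj = sum(r[j] for r in resp)
            weights[j] = nj / len(xs)
            means[j] = sum(r[j] * x for r, x in zip(resp, xs)) / nj
    return weights, means

def assign_state(x, weights, means, sd=0.15):
    """Most likely component (0 = deletion, 1 = normal, 2 = duplication)."""
    ps = [w * math.exp(-((x - m) ** 2) / (2 * sd * sd))
          for w, m in zip(weights, means)]
    return max(range(len(ps)), key=ps.__getitem__)

rng = random.Random(0)
data = ([rng.gauss(0.0, 0.15) for _ in range(200)] +   # normal copy number
        [rng.gauss(-0.6, 0.15) for _ in range(30)] +   # deletions
        [rng.gauss(0.4, 0.15) for _ in range(30)])     # duplications
w, m = em_gmm(data, means=[-0.6, 0.0, 0.4])
print(assign_state(-0.6, w, m))   # deletion component
print(assign_state(0.0, w, m))    # normal component
```

Real callers additionally use the B-allele frequency, smooth calls along the chromosome, and merge per-individual segments into consensus regions, which is what the proposed PCA-merge step addresses.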
Abstract:
In contradiction to sexual selection theory, several studies showed that although the expression of melanin-based ornaments is usually under strong genetic control and weakly sensitive to the environment and body condition, they can signal individual quality. Covariation between a melanin-based ornament and phenotypic quality may result from pleiotropic effects of genes involved in the production of melanin pigments. Two categories of genes responsible for variation in melanin production may be relevant, namely those that trigger melanin production (yes or no response) and those that determine the amount of pigments produced. To investigate which of these two hypotheses is the most likely, I reanalysed data collected from barn owls (Tyto alba). The underparts of this bird vary from immaculate to heavily marked with black spots of varying size. Published cross-fostering experiments have shown that the proportion of the plumage surface covered with black spots, a composite eumelanin trait referred to as "plumage spottiness", in females positively covaries with offspring humoral immunocompetence, and negatively with offspring parasite resistance (i.e. the ability to reduce fecundity of ectoparasites) and fluctuating asymmetry of wing feathers. However, it is unclear which component of plumage spottiness causes these relationships, namely genes responsible for variation in number of spots or in spot diameter. Number of spots reflects variation in the expression of genes triggering the switch from no eumelanin production to production, whereas spot diameter reflects variation in the expression of genes determining the amount of eumelanin produced per spot. In the present study, multiple regression analyses, performed on the same data sets, showed that humoral immunocompetence, parasite resistance and wing fluctuating asymmetry of cross-fostered offspring covary with spot diameter measured in their genetic mother, but not with number of spots. 
This suggests that genes responsible for variation in the quantity of eumelanin produced per spot are responsible for covariation between a melanin ornament and individual attributes. In contrast, genes responsible for variation in number of black spots may not play a significant role. Covariation between a eumelanin female trait and offspring quality may therefore be due to an indirect effect of melanin production.
Abstract:
PURPOSE: To compare 3 different flow targeted magnetization preparation strategies for coronary MR angiography (cMRA), which allow selective visualization of the vessel lumen. MATERIAL AND METHODS: The right coronary artery of 10 healthy subjects was investigated on a 1.5 Tesla MR system (Gyroscan ACS-NT, Philips Healthcare, Best, NL). A navigator-gated and ECG-triggered 3D radial steady-state free-precession (SSFP) cMRA sequence with 3 different magnetization preparation schemes was performed referred to as projection SSFP (selective labeling of the aorta, subtraction of 2 data sets), LoReIn SSFP (double-inversion preparation, selective labeling of the aorta, 1 data set), and inflow SSFP (inversion preparation, selective labeling of the coronary artery, 1 data set). Signal-to-noise ratio (SNR) of the coronary artery and aorta, contrast-to-noise ratio (CNR) between the coronary artery and epicardial fat, vessel length and vessel sharpness were analyzed. RESULTS: All cMRA sequences were successfully obtained in all subjects. Both projection SSFP and LoReIn SSFP allowed for selective visualization of the coronary arteries with excellent background suppression. Scan time was doubled in projection SSFP because of the need for subtraction of 2 data sets. In inflow SSFP, background suppression was limited to the tissue included in the inversion volume. Projection SSFP (SNR(coro): 25.6 +/- 12.1; SNR(ao): 26.1 +/- 16.8; CNR(coro-fat): 22.0 +/- 11.7) and inflow SSFP (SNR(coro): 27.9 +/- 5.4; SNR(ao): 37.4 +/- 9.2; CNR(coro-fat): 24.9 +/- 4.8) yielded significantly increased SNR and CNR compared with LoReIn SSFP (SNR(coro): 12.3 +/- 5.4; SNR(ao): 11.8 +/- 5.8; CNR(coro-fat): 9.8 +/- 5.5; P < 0.05 for both). Longest visible vessel length was found with projection SSFP (79.5 mm +/- 18.9; P < 0.05 vs. LoReIn) whereas vessel sharpness was best in inflow SSFP (68.2% +/- 4.5%; P < 0.05 vs. LoReIn). 
Consistently good image quality was achieved using inflow SSFP, likely because of its simple planning procedure and short scan time. CONCLUSION: Three flow targeted cMRA approaches are presented which provide selective visualization of the coronary vessel lumen and, in addition, blood flow information without the need for contrast agent administration. Inflow SSFP yielded the highest SNR, CNR and vessel sharpness and may prove useful as a fast and efficient approach for assessing proximal and mid-vessel coronary blood flow, while requiring less planning skill than projection SSFP or LoReIn SSFP.
Abstract:
BACKGROUND: Since the emergence of diffusion tensor imaging, considerable work has been done to better understand the properties of diffusion MRI tractography. However, the validation of the reconstructed fiber connections remains problematic in many respects. For example, it is difficult to assess whether a connection is the result of the diffusion coherence contrast itself or simply the result of other uncontrolled parameters such as noise, brain geometry, and algorithmic characteristics. METHODOLOGY/PRINCIPAL FINDINGS: In this work, we propose a method to estimate the respective contributions of diffusion coherence versus other effects to a tractography result by comparing data sets with and without diffusion coherence contrast. We use this methodology to assign a confidence level to every gray-matter-to-gray-matter connection and add this new information directly to the connectivity matrix. CONCLUSIONS/SIGNIFICANCE: Our results demonstrate that whereas we can have strong confidence in mid- and long-range connections obtained by a tractography experiment, it is difficult to distinguish short connections traced due to diffusion coherence contrast from those produced by chance by the other uncontrolled factors of the tractography methodology.
Abstract:
Selectome (http://selectome.unil.ch/) is a database of positive selection based on a branch-site likelihood test. This model estimates the rates of nonsynonymous (dN) and synonymous (dS) substitutions to evaluate variation in selective pressure (the dN/dS ratio) over branches and over sites. Since the original release of Selectome, we have benchmarked and implemented a thorough quality-control procedure on multiple sequence alignments, aiming to minimize false-positive results. We have also improved the computational efficiency of the branch-site test implementation, allowing larger data sets and more frequent updates. Release 6 of Selectome includes all gene trees from Ensembl for Primates and Glires, as well as a large set of vertebrate gene trees. A total of 6810 gene trees show some evidence of positive selection. Finally, the web interface has been improved to be more responsive and to facilitate searches and browsing.
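As an aside, the raw distinction underlying any dN/dS-style measure can be illustrated with a toy classifier. The sketch below is not Selectome's branch-site likelihood test (which models substitution rates probabilistically over branches and sites); it merely classifies single-nucleotide codon differences between two aligned coding sequences as synonymous or nonsynonymous using the standard genetic code:

```python
# Illustrative sketch only, NOT the branch-site test used by Selectome:
# classify single-site codon differences as synonymous vs nonsynonymous.

BASES = "TCAG"
# Standard genetic code, amino acids listed in TCAG codon order.
AMINO = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
codon_table = {a + b + c: AMINO[16 * i + 4 * j + k]
               for i, a in enumerate(BASES)
               for j, b in enumerate(BASES)
               for k, c in enumerate(BASES)}

def classify_differences(seq1, seq2):
    """Count synonymous and nonsynonymous differences between two aligned
    coding sequences, considering only codons differing at exactly one site."""
    syn = nonsyn = 0
    for i in range(0, len(seq1) - 2, 3):
        c1, c2 = seq1[i:i + 3], seq2[i:i + 3]
        if sum(a != b for a, b in zip(c1, c2)) != 1:
            continue  # identical codons, or multi-site differences (skipped here)
        if codon_table[c1] == codon_table[c2]:
            syn += 1      # same amino acid: synonymous
        else:
            nonsyn += 1   # amino acid change: nonsynonymous
    return syn, nonsyn

# TTT->TTC is F->F (synonymous); ATG->ACG is M->T (nonsynonymous)
print(classify_differences("TTTATG", "TTCACG"))  # (1, 1)
```

A proper dN/dS estimate additionally normalizes these counts by the numbers of synonymous and nonsynonymous sites and corrects for multiple hits, which is what likelihood-based codon models do.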
Abstract:
On the basis of two parallel data sets referring to phenological events in open scrubs and pastures at two sites (Balaguer and Vic), the authors present a comprehensive report of the phenology of these Mediterranean communities. Four main phenophases (vegetative growth, flowering, fruiting and resting) were recorded monthly in 11 communities over 15 months. The results allow comparisons to be drawn between localities and communities, at both the community and species level, and the effects of contemporary climatic data to be considered. This yielded useful information on general trends and on the particular responses of each community type to its corresponding constraints.
Abstract:
Effect size indices are indispensable for carrying out meta-analyses and can also serve as an alternative for making decisions about the effectiveness of a treatment in an individual applied study. Desirable features of procedures for quantifying the magnitude of an intervention effect include educational/clinical meaningfulness, ease of calculation, insensitivity to autocorrelation, and low false-alarm and miss rates. Three effect size indices related to visual analysis are compared according to the aforementioned criteria. The comparison is made by means of data sets with known parameters: degree of serial dependence, presence or absence of a general trend, and changes in level and/or slope. The percentage of nonoverlapping data (PND) showed the highest discrimination between data sets with and without an intervention effect. In cases where autocorrelation or trend is present, the percentage of data points exceeding the median (PEM) may be a better option for quantifying the effectiveness of a psychological treatment.
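The two overlap-based indices mentioned above are simple enough to sketch directly. The following is an illustrative implementation on hypothetical data (the abstract does not provide its simulated data sets): PND counts treatment-phase points more extreme than the most extreme baseline point, while PEM counts treatment-phase points beyond the baseline median, which makes it more robust to a single outlying baseline observation:

```python
# Illustrative sketch with hypothetical data: two overlap-based effect size
# indices for single-case (baseline vs treatment) designs.

def pnd(baseline, treatment, improvement="increase"):
    """Percentage of Nonoverlapping Data: share of treatment-phase points
    more extreme than the most extreme baseline point."""
    if improvement == "increase":
        threshold = max(baseline)
        hits = sum(1 for x in treatment if x > threshold)
    else:
        threshold = min(baseline)
        hits = sum(1 for x in treatment if x < threshold)
    return 100.0 * hits / len(treatment)

def pem(baseline, treatment, improvement="increase"):
    """Percentage of data points Exceeding the Median of the baseline."""
    b = sorted(baseline)
    n = len(b)
    median = b[n // 2] if n % 2 else (b[n // 2 - 1] + b[n // 2]) / 2
    if improvement == "increase":
        hits = sum(1 for x in treatment if x > median)
    else:
        hits = sum(1 for x in treatment if x < median)
    return 100.0 * hits / len(treatment)

baseline = [3, 4, 5, 4]
treatment = [6, 7, 5, 8, 9]
print(pnd(baseline, treatment))  # 80.0: 4 of 5 points exceed max(baseline) = 5
print(pem(baseline, treatment))  # 100.0: all 5 points exceed the baseline median of 4
```

The example shows why a single high baseline point (here, 5) caps PND but not PEM, which is one reason PEM can behave better under trend or autocorrelation.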
Abstract:
The quantification of gene expression at the single-cell level uncovers novel regulatory mechanisms obscured in measurements performed at the population level. Two methods, based on microscopy and flow cytometry, are presented to demonstrate how such data can be acquired. The expression of a fluorescent reporter induced upon activation of the high-osmolarity glycerol MAPK pathway in yeast is used as an example. The specific advantages of each method are highlighted. Flow cytometry measures a large number of cells (10,000) and provides a direct measure of the dynamics of protein expression independent of the slow maturation kinetics of the fluorescent protein. Imaging of living cells by microscopy is, by contrast, limited to measuring the matured form of the reporter in fewer cells. However, the data sets generated by this technique can be extremely rich thanks to the combination of multiple reporters and the spatial and temporal information obtained from individual cells. The combination of these two measurement methods can deliver new insights into the regulation of protein expression by signaling pathways.
Abstract:
Sickness absence (SA) is an important social, economic and public health issue. Identifying and understanding the determinants of variability in SA duration, whether biological, regulatory or health services-related, is essential for better management of SA. The conditional frailty model (CFM) is useful when repeated SA events occur within the same individual, as it allows simultaneous analysis of event dependence and of heterogeneity due to unknown, unmeasured, or unmeasurable factors. However, its use may encounter computational limitations when applied to very large data sets, as frequently occurs in the analysis of SA duration. To overcome the computational issue, we propose a Poisson-based conditional frailty model (CFPM) for repeated SA events that accounts for both event dependence and heterogeneity. To demonstrate the usefulness of the proposed model in the SA duration context, we used data from all non-work-related SA episodes that occurred in Catalonia (Spain) in 2007, initiated by a diagnosis of either neoplasms or mental and behavioral disorders. As expected, the CFPM results were very similar to those of the CFM for both diagnosis groups. The CPU time for the CFPM was substantially shorter than for the CFM. The CFPM is a suitable alternative to the CFM in survival analysis with recurrent events, especially with large databases.
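The computational gain of a Poisson formulation rests on a standard equivalence: the log-likelihood of an exponential (or, more generally, piecewise-exponential) survival model equals that of a Poisson model for the event indicator with log follow-up time as an offset, up to a constant that does not depend on the rate. The abstract does not give the authors' exact specification, so the sketch below (with simulated data) only demonstrates this generic equivalence, not the CFPM itself:

```python
# Illustrative sketch, not the authors' CFPM: the exponential-survival
# log-likelihood and the Poisson log-likelihood with a log-time offset
# differ only by a term independent of the rate parameter lam, so both
# are maximized at the same rate estimate.
import numpy as np

rng = np.random.default_rng(0)
t = rng.exponential(scale=2.0, size=50)          # simulated follow-up times
d = rng.integers(0, 2, size=50).astype(float)    # simulated event indicators (0/1)

def exp_loglik(lam):
    # Censored exponential survival: sum of d*log(lam) - lam*t
    return np.sum(d * np.log(lam) - lam * t)

def poisson_loglik(lam):
    # Poisson with mean mu = lam * t, i.e. log(t) enters as an offset;
    # log(d!) = 0 because d is 0 or 1, so it is omitted.
    mu = lam * t
    return np.sum(d * np.log(mu) - mu)

# The difference is sum(d*log(t)) for every lam, so both likelihoods
# have the same maximizer.
for lam in (0.1, 0.5, 1.0, 2.0):
    print(round(poisson_loglik(lam) - exp_loglik(lam), 6))
```

In practice, this reformulation lets recurrent-event survival models be fitted with fast, well-optimized Poisson GLM routines, with episode-level random effects playing the role of the frailty; that is presumably the kind of computational saving the abstract reports.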