991 results for Perturbation (Astronomy)
Abstract:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending corresponding approaches beyond the local scale still represents a major challenge, yet is critically important for the development of reliable groundwater flow and contaminant transport models. To address this issue, I have developed a hydrogeophysical data integration technique based on a two-step Bayesian sequential simulation procedure that is specifically targeted towards larger-scale problems. The objective is to simulate the distribution of a target hydraulic parameter based on spatially exhaustive, but poorly resolved, measurements of a pertinent geophysical parameter and locally highly resolved, but spatially sparse, measurements of the considered geophysical and hydraulic parameters. To this end, my algorithm links the low- and high-resolution geophysical data via a downscaling procedure before relating the downscaled regional-scale geophysical data to the high-resolution hydraulic parameter field. I first illustrate the application of this novel data integration approach to a realistic synthetic database consisting of collocated high-resolution borehole measurements of the hydraulic and electrical conductivities and spatially exhaustive, low-resolution electrical conductivity estimates obtained from electrical resistivity tomography (ERT). The overall viability of this method is tested and verified by performing and comparing flow and transport simulations through the original and simulated hydraulic conductivity fields. The corresponding results indicate that the proposed data integration procedure does indeed allow for obtaining faithful estimates of the larger-scale hydraulic conductivity structure and reliable predictions of the transport characteristics over medium- to regional-scale distances. The approach is then applied to a corresponding field scenario consisting of collocated high-resolution measurements of the electrical conductivity, as measured using a cone penetrometer testing (CPT) system, and the hydraulic conductivity, as estimated from electromagnetic flowmeter and slug test measurements, in combination with spatially exhaustive low-resolution electrical conductivity estimates obtained from surface-based electrical resistivity tomography (ERT). The corresponding results indicate that the newly developed data integration approach is indeed capable of adequately capturing both the small-scale heterogeneity as well as the larger-scale trend of the prevailing hydraulic conductivity field. The results also indicate that this novel data integration approach is remarkably flexible and robust and hence can be expected to be applicable to a wide range of geophysical and hydrological data at all scale ranges.
In the second part of my thesis, I evaluate in detail the viability of sequential geostatistical resampling as a proposal mechanism for Markov chain Monte Carlo (MCMC) methods applied to high-dimensional geophysical and hydrological inverse problems, in order to allow for a more accurate and realistic quantification of the uncertainty associated with the thus inferred models. Focusing on a series of pertinent crosshole georadar tomographic examples, I investigate two classes of geostatistical resampling strategies with regard to their ability to efficiently and accurately generate independent realizations from the Bayesian posterior distribution. The corresponding results indicate that, despite its popularity, sequential resampling is rather inefficient at drawing independent posterior samples for realistic synthetic case studies, notably for the practically common and important scenario of pronounced spatial correlation between model parameters. To address this issue, I have developed a new gradual-deformation-based perturbation approach, which is flexible with regard to the number of model parameters as well as the perturbation strength. Compared to sequential resampling, this newly proposed approach proves to be highly effective in decreasing the number of iterations required for drawing independent samples from the Bayesian posterior distribution.
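For readers unfamiliar with the gradual-deformation idea mentioned above, the sketch below illustrates the basic perturbation step under simplifying assumptions (an uncorrelated stand-in for the geostatistical prior and an arbitrary perturbation strength); it is not the author's implementation.

    import numpy as np

    def gradual_deformation_proposal(m_current, m_independent, theta):
        """Combine the current model with an independent prior realization.

        For zero-mean Gaussian fields sharing a covariance, the combination
        cos(theta)*m1 + sin(theta)*m2 has that same covariance for any theta,
        so the prior is preserved; a small theta gives a small perturbation.
        """
        return np.cos(theta) * m_current + np.sin(theta) * m_independent

    # Illustrative use with an uncorrelated stand-in prior:
    rng = np.random.default_rng(0)
    m_current = rng.standard_normal(1000)   # current MCMC state (hypothetical)
    m_new = rng.standard_normal(1000)       # independent draw from the same prior
    m_proposal = gradual_deformation_proposal(m_current, m_new, theta=0.1)
    # m_proposal would then be accepted or rejected with the usual Metropolis rule.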
Abstract:
Mepraia spinolai is a silvatic species of Triatominae that prefers microhabitats in or near rock piles. It is also able to maintain populations of similar or larger size near houses. The density of bugs in quarries near Santiago, Chile, differed among microhabitats and varied significantly within sites according to season. M. spinolai was not found in sites characterized by human perturbation of the quarries. Our results confirm M. spinolai as a silvatic triatomine whose importance as a vector of Chagas disease will depend on contact with humans; this could occur if the habitats where populations of this species are found become exploited for the building of urban areas.
Abstract:
Division of labour is one of the most prominent features of social insects. The efficient allocation of individuals to different tasks requires dynamic adjustment in response to environmental perturbations. Theoretical models suggest that colony-level flexibility in responding to external changes and internal perturbations may depend on within-colony genetic diversity, which is affected by the number of breeding individuals. However, these models have not considered the genetic architecture underlying the propensity of workers to perform the various tasks. Here, we investigated how within-colony genetic variability (stemming from variation in the number of matings by queens) and the number of genes encoding the stimulus threshold at which workers begin to perform a given task jointly influence the efficiency of task allocation. We used a numerical agent-based model in which workers had to perform either a regulatory task or a foraging task. One hundred generations of artificial selection in populations consisting of 500 colonies revealed that an increased number of matings always improved colony performance, whatever the number of loci encoding the thresholds of the regulatory and foraging tasks. However, the beneficial effect of additional matings was particularly important when the genetic architecture of queens comprised one or a few genes for the foraging-task threshold. By contrast, a higher number of genes encoding the foraging-task threshold reduced colony performance, with the detrimental effect being stronger when queens had mated with several males. Finally, the number of genes encoding the threshold for the regulatory task had only a minor effect on colony performance. Overall, our numerical experiments support the importance of mating frequency for the efficiency of division of labour and also reveal complex interactions between the number of matings and genetic architecture.
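As a point of reference, the following minimal fixed-threshold sketch illustrates the kind of agent-based task-allocation model described above, for a single task and with purely hypothetical parameter values; the actual study additionally models two tasks, the genetic architecture of the thresholds and 100 generations of selection.

    import numpy as np

    rng = np.random.default_rng(1)

    N_WORKERS = 100       # colony size (hypothetical)
    N_STEPS = 200         # simulated time steps (hypothetical)
    DEMAND = 1.0          # stimulus added each step (hypothetical)
    WORK_PER_BEE = 0.05   # stimulus removed per active worker (hypothetical)

    # Fixed response thresholds; their spread stands in for within-colony
    # genetic variability (more matings -> broader spread of thresholds).
    thresholds = rng.normal(loc=5.0, scale=2.0, size=N_WORKERS)

    stimulus = 0.0
    for _ in range(N_STEPS):
        stimulus += DEMAND
        active = thresholds < stimulus                         # workers that respond
        stimulus = max(0.0, stimulus - WORK_PER_BEE * active.sum())

    print(f"final stimulus: {stimulus:.2f}, active workers: {active.sum()}")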
Abstract:
To put constraints on the Mesozoic to recent growth of the Anti-Atlas system, we investigated the temperature-time history of rocks by applying extensive low-temperature thermochronological analysis to three Precambrian inliers along the coast and 250 km into the interior. Bedrocks yield old U-Th/He ages on zircon (248-193 Ma) and apatite (150-50 Ma), as well as fission-track ages of 173-121 Ma on apatite. These datasets are interpreted as recording passive-margin upward movements from central Atlantic rifting until the Early Cretaceous. A phase of sedimentary burial is evidenced for the Cretaceous-Eocene. The extent of this thin (1.5 km) basin is loosely constrained, but it may have reached the western regions of northern Africa. The effects of the existing thermal perturbation of lithospheric origin 100 km below the Atlas are limited, as the 120-60 degrees C isotherms are not much deflected. Large-scale uplift has possibly occurred in the western Anti-Atlas since c. 30 Ma and is associated with a mean denudation rate of 0.08 km Ma-1.
Abstract:
HIV-1 infects CD4+ T cells and completes its replication cycle in approximately 24 hours. We employed repeated measurements in a standardized cell system and rigorous mathematical modeling to characterize the emergence of the viral replication intermediates and their impact on the cellular transcriptional response with high temporal resolution. We observed 7,991 (73%) of the 10,958 expressed genes to be modulated in concordance with key steps of viral replication. Fifty-two percent of the overall variability in the host transcriptome was explained by linear regression on the viral life cycle. This profound perturbation of cellular physiology was investigated in the light of several regulatory mechanisms, including transcription factors, miRNAs, host-pathogen interaction, and proviral integration. Key features were validated in primary CD4+ T cells, and with viral constructs using alternative entry strategies. We propose a model of early massive cellular shutdown and progressive upregulation of the cellular machinery to complete the viral life cycle.
Abstract:
The quantity of interest for high-energy photon beam therapy recommended by most dosimetric protocols is the absorbed dose to water. Ionization chambers are therefore calibrated in terms of absorbed dose to water, which is the same quantity as that calculated by most treatment planning systems (TPS). However, when measurements are performed in a low-density medium, the presence of the ionization chamber generates a perturbation on the scale of the secondary-particle range. The measured quantity is therefore close to the absorbed dose to a volume of water equivalent to the chamber volume. This quantity is not equivalent to the dose calculated by a TPS, which is the absorbed dose to an infinitesimally small volume of water. This phenomenon can lead to an overestimation of the absorbed dose measured with an ionization chamber of up to 40% in extreme cases. In this paper, we propose a method to calculate correction factors based on Monte Carlo simulations. These correction factors are obtained as the ratio of the absorbed dose to water in a low-density medium, \bar{D}^{low}_{w,Q,V_1}, averaged over a scoring volume V_1 for a geometry in which V_1 is filled with the low-density medium, to the absorbed dose to water, \bar{D}^{low}_{w,Q,V_2}, averaged over a volume V_2 for a geometry in which V_2 is filled with water. In the Monte Carlo simulations, \bar{D}^{low}_{w,Q,V_2} is obtained by replacing the volume of the ionization chamber with an equivalent volume of water, in accordance with the definition of the absorbed dose to water. The method is validated in two different configurations, which allowed us to study the behaviour of this correction factor as a function of depth in the phantom, photon beam energy, phantom density and field size.
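Written out, the correction factor described above is simply the ratio of the two volume-averaged doses (the symbol k for the factor is our shorthand, not taken from the abstract):

    k = \frac{\bar{D}^{\mathrm{low}}_{w,Q,V_1}}{\bar{D}^{\mathrm{low}}_{w,Q,V_2}}

where V_1 is filled with the low-density medium and V_2 is filled with water.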
Abstract:
Self-organizing maps (Kohonen 1997) are a type of artificial neural network developed to explore patterns in high-dimensional multivariate data. The conventional version of the algorithm uses the Euclidean metric in the adaptation of the model vectors, thus rendering, in theory, the whole methodology incompatible with non-Euclidean geometries. In this contribution we explore the two main aspects of the problem: (1) whether the conventional approach using the Euclidean metric can yield valid results with compositional data; and (2) whether a modification of the conventional approach, replacing vectorial sum and scalar multiplication by the canonical operators in the simplex (i.e. perturbation and powering), can converge to an adequate solution. Preliminary tests showed that both methodologies can be used on compositional data. However, the modified version of the algorithm performs worse than the conventional version, in particular when the data are pathological. Moreover, the conventional approach converges faster to a solution when the data are "well-behaved".
Key words: Self-Organizing Map; Artificial Neural Networks; Compositional Data
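The two simplex operators mentioned above are easy to state explicitly. The sketch below is a generic illustration (not the authors' code) of perturbation and powering and of how they might replace the Euclidean SOM update of a model vector; the compositions and learning rate are arbitrary.

    import numpy as np

    def closure(x):
        """Rescale a positive vector so its parts sum to 1 (the closure operation C)."""
        x = np.asarray(x, dtype=float)
        return x / x.sum()

    def perturbation(x, y):
        """Aitchison perturbation x (+) y = C(x1*y1, ..., xD*yD): the simplex analogue of addition."""
        return closure(np.asarray(x) * np.asarray(y))

    def powering(alpha, x):
        """Aitchison powering alpha (.) x = C(x1^alpha, ..., xD^alpha): the analogue of scalar multiplication."""
        return closure(np.asarray(x, dtype=float) ** alpha)

    # SOM-style update of a model vector m towards a data point x, replacing
    # m + eta*(x - m) by its simplex counterpart m (+) eta (.) (x (-) m).
    m = closure([0.2, 0.3, 0.5])
    x = closure([0.1, 0.6, 0.3])
    eta = 0.25
    difference = perturbation(x, 1.0 / m)                   # x (-) m
    m_updated = perturbation(m, powering(eta, difference))  # m (+) eta (.) (x (-) m)
    print(m_updated)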
Abstract:
The formulation of perturbation theory in matrix notation is reviewed, and a simple application is presented: the solution of the problem of a particle subject to an attractive potential inside a one-dimensional quantum box.
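For context, the standard Rayleigh-Schrödinger first-order results that such a matrix formulation reproduces are the textbook expressions below; the delta-well form of the attractive potential is only an assumed example, since the abstract does not specify the potential.

    E_n^{(1)} = \langle n^{(0)}|\hat V|n^{(0)}\rangle, \qquad
    |n^{(1)}\rangle = \sum_{m\neq n} \frac{\langle m^{(0)}|\hat V|n^{(0)}\rangle}{E_n^{(0)}-E_m^{(0)}}\,|m^{(0)}\rangle

with unperturbed particle-in-a-box states \psi_n^{(0)}(x) = \sqrt{2/L}\,\sin(n\pi x/L) and energies E_n^{(0)} = n^2\pi^2\hbar^2/(2mL^2). For an assumed attractive well V(x) = -V_0\,\delta(x - L/2), the first-order shift is E_n^{(1)} = -\tfrac{2V_0}{L}\,\sin^2(n\pi/2).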
Abstract:
The use of the perturbation and power transformation operations permits the investigation of linear processes in the simplex as in a vector space. When the investigated geochemical processes can be constrained by a well-known starting point, the eigenvectors of the covariance matrix of a non-centred principal component analysis allow compositional changes to be modelled with respect to a reference point. The results obtained for the chemistry of water collected in the River Arno (central-northern Italy) open new perspectives for considering relative changes of the analysed variables and for hypothesizing the relative effect of the different acting physical-chemical processes, thus laying the basis for quantitative modelling.
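The reference-point idea above can be illustrated with a short generic sketch (not the authors' code): compositions are expressed as perturbation differences from a known reference in centred log-ratio (clr) coordinates, and a non-centred PCA is obtained from the SVD of that matrix without subtracting column means. All values are made up.

    import numpy as np

    def closure(x):
        x = np.asarray(x, dtype=float)
        return x / x.sum(axis=-1, keepdims=True)

    def clr(x):
        """Centred log-ratio transform: log(x) minus the row-wise mean of log(x)."""
        logx = np.log(np.asarray(x, dtype=float))
        return logx - logx.mean(axis=-1, keepdims=True)

    # Hypothetical water compositions (rows) and a known reference composition.
    X = closure(np.array([[0.20, 0.50, 0.30],
                          [0.25, 0.45, 0.30],
                          [0.30, 0.40, 0.30],
                          [0.35, 0.35, 0.30]]))
    reference = closure(np.array([0.20, 0.50, 0.30]))

    # Perturbation difference from the reference, x (-) ref, in clr coordinates.
    Z = clr(X) - clr(reference)

    # Non-centred PCA via SVD: the first right singular vector gives the dominant
    # direction of compositional change away from the reference point.
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    print("first compositional-change direction (clr):", Vt[0])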
Abstract:
R (http://www.r-project.org/) is 'GNU S', a language and environment for statistical computing and graphics, in which many classical and modern statistical techniques have been implemented; many more are supplied as packages. There are 8 standard packages, and many more are available through the CRAN family of Internet sites (http://cran.r-project.org). We started to develop a library of functions in R to support the analysis of mixtures, and our goal is a MixeR package for compositional data analysis that provides support for: operations on compositions (perturbation and power multiplication, subcomposition with or without residuals, centering of the data, computation of Aitchison's, Euclidean and Bhattacharyya distances, the compositional Kullback-Leibler divergence, etc.); graphical presentation of compositions in ternary diagrams and tetrahedrons with additional features (barycentre, geometric mean of the data set, percentile lines, marking and colouring of subsets of the data set and their geometric means, labelling of individual data points in the set, etc.); dealing with zeros and missing values in compositional data sets with R procedures for simple and multiplicative replacement strategies; and time-series analysis of compositional data. We will present the current status of MixeR development and illustrate its use on selected data sets.
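As a from-scratch illustration of one of the operations listed above (a generic Python sketch, not the MixeR API), the Aitchison distance between two compositions is simply the Euclidean distance between their centred log-ratio (clr) coordinates:

    import numpy as np

    def clr(x):
        """Centred log-ratio transform of a single composition."""
        logx = np.log(np.asarray(x, dtype=float))
        return logx - logx.mean()

    def aitchison_distance(x, y):
        """Aitchison distance: Euclidean distance between clr coordinates."""
        return float(np.linalg.norm(clr(x) - clr(y)))

    print(aitchison_distance([0.2, 0.3, 0.5], [0.1, 0.4, 0.5]))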
Abstract:
Primary immune thrombocytopenia (ITP) is an acquired autoimmune disorder characterized by reduced platelet survival and impaired platelet production. There is no simple clinical test to prove the autoimmune nature of the condition, so the diagnosis is almost always one of exclusion of other causes. Although the platelet count is often below 10 x 10^9/l at initial presentation, the bleeding tendency is surprisingly moderate in the majority of patients. Initial treatment still relies on corticosteroids, combined with intravenous immunoglobulins and platelet transfusions in complicated forms with significant haemorrhage. In children, the disease is often triggered by viral infections, and its course is benign and spontaneously remitting in the majority of cases. In adults, the disease is more often persistent or chronically relapsing, and the platelet count often remains high enough to prevent spontaneous bleeding. Only a small proportion of patients suffer from prolonged severe thrombocytopenia with recurrent bleeding and a risk of potentially fatal haemorrhage. It is probably this small group of patients that will benefit most from new therapeutic options such as thrombopoietin receptor agonists. In the light of these new possibilities, a group of Swiss haematologists convened to develop guidelines for the management of ITP adapted to the needs and practices of our country.
Abstract:
A joint distribution of two discrete random variables with finite support can be displayed as a two-way table of probabilities adding to one. Assume that this table has n rows and m columns and that all probabilities are non-null. Such a table can be seen as an element in the simplex of n·m parts. In this context, the marginals are identified as compositional amalgams and the conditionals (rows or columns) as subcompositions; simplicial perturbation appears as Bayes' theorem. The Euclidean elements of the Aitchison geometry of the simplex can also be translated into the table of probabilities: subspaces, orthogonal projections, distances. Two important questions are addressed: (a) given a table of probabilities, which is the nearest independent table to the initial one? and (b) which is the largest orthogonal projection of a row onto a column, or, equivalently, which is the information in a row explained by a column, thus explaining the interaction? To answer these questions, three orthogonal decompositions are presented: (1) by columns and a row-wise geometric marginal; (2) by rows and a column-wise geometric marginal; (3) by independent two-way tables and fully dependent tables representing row-column interaction. An important result is that the nearest independent table is the product of the two (row- and column-wise) geometric marginal tables. A corollary is that, in an independent table, the geometric marginals conform with the traditional (arithmetic) marginals. These decompositions can be compared with standard log-linear models.
Key words: balance, compositional data, simplex, Aitchison geometry, composition, orthonormal basis, arithmetic and geometric marginals, amalgam, dependence measure, contingency table
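The stated result about the nearest independent table can be illustrated numerically with a small sketch (the table values are made up): the geometric marginals are the closed geometric means over rows and columns, and their closed outer product gives the nearest independent table in the Aitchison geometry.

    import numpy as np

    def closure(x):
        x = np.asarray(x, dtype=float)
        return x / x.sum()

    # Hypothetical 2x3 table of joint probabilities (all entries non-null).
    P = closure(np.array([[0.10, 0.20, 0.05],
                          [0.15, 0.30, 0.20]]))

    # Geometric marginals: geometric means across columns (row marginal) and
    # across rows (column marginal), each closed to sum to 1.
    row_geo = closure(np.exp(np.log(P).mean(axis=1)))
    col_geo = closure(np.exp(np.log(P).mean(axis=0)))

    # Nearest independent table: the closed outer product of the geometric marginals.
    P_indep = closure(np.outer(row_geo, col_geo))
    print(P_indep)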
Abstract:
A continuous carbon isotope curve from Middle-Upper Jurassic pelagic carbonate rocks was acquired from two sections in the southern part of the Umbria-Marche Apennines in central Italy. At the Colle Bertone section (Terni) and the Terminilletto section (Rieti), the Upper Toarcian to Bajocian Calcari e Marne a Posidonia Formation and the Aalenian to Kimmeridgian Calcari e Marne a Posidonia and Calcari Diasprigni formations were sampled, respectively. Biostratigraphy in both sections is based on rich assemblages of calcareous nannofossils and radiolarians, as well as some ammonites found in the upper Toarcian-Bajocian interval. Both sections revealed a relative minimum of delta(13)C(PDB) close to +2 parts per thousand in the Aalenian and a maximum around 3.5 parts per thousand in the early Bajocian, associated with an increase in visible chert. In basinal sections in Umbria-Marche, this interval includes the very cherty base of the Calcari Diasprigni Formation (e.g. at Valdorbia) or the chert-rich uppermost portion of the Calcari a Posidonia (e.g. at Bosso). In the Terminilletto section, the Bajocian-early Bathonian interval shows a gradual decrease in delta(13)C(PDB) values and a low around 2.3 parts per thousand. This part of the section is characterised by more than 40 m of almost chert-free limestones and correlates with a recurrence of limestone-rich facies in basinal sections at Valdorbia. A double peak with values of delta(13)C(PDB) around +3 parts per thousand was observed in the Callovian and Oxfordian, constrained by well-preserved radiolarian faunas. The maxima lie in the Callovian and the middle Oxfordian, and the minimum between the two peaks should be near the Callovian/Oxfordian boundary. In the Terminilletto section, visible chert increases together with delta(13)C(PDB) values from the middle Bathonian and reaches peak values in the Callovian-Oxfordian. In basinal sections in Umbria-Marche, a sharp increase in visible chert is observed at this level within the Calcari Diasprigni. A drop of delta(13)C values towards +2 parts per thousand occurs in the Kimmeridgian and coincides with a decrease of visible chert in outcrop. The observed delta(13)C positive anomalies during the early Bajocian and the Callovian-Oxfordian may record changes in global climate towards warmer, more humid periods characterised by increased nutrient mobilisation and increased carbon burial. High biosiliceous (radiolarians, siliceous sponges) productivity and preservation appear to coincide with the delta(13)C positive anomalies, when the production of platform carbonates was subdued and ceased in many areas, with a drastic reduction of periplatform ooze input in many Tethyan basins. The carbon and silica cycles appear to be linked through global warming and increased continental weathering. Hydrothermal events related to extensive rifting and/or accelerated oceanic spreading may be the endogenic driving force that created a perturbation of the exogenic system (excess CO2 in the atmosphere and greenhouse conditions) reflected by the positive delta(13)C shifts and biosiliceous episodes.
Abstract:
A radial-maze test protocol for assessing spatial navigation in humans was developed. These radial-maze tests are based on the protocol used with the animal model of schizophrenia at the CNP (Centre de neuroscience psychiatrique) in Lausanne. Current CNP research has shown a deficit in the spatial orientation abilities of these animals [13]. Our methodology therefore consists in testing human subjects on maze tasks in order to study, in as comparable a way as possible, the deficits observed in the human pathology and in the rat model. This approach underlies a translational strategy that combines clinical and experimental research. The experimental work was carried out on two analogous devices: (a) a "finger radial maze", a set of small channels that can be explored with the finger, with eyes open or closed, in which each arm is lined with a different texture; and (b) a "touch-screen radial maze", two mazes comparing two types of local cue, different colours or black-and-white patterns. For both devices, a series of tests was designed to assess the memorization of the cues used, by temporarily removing them or by putting them in conflict. The first perturbation tests the importance of the local reference frame through a 90° rotation of the maze. Swapping the arms in a final trial induces a situation in which the information is either spatially correct but locally (texture) incorrect, or the reverse. These perturbations of the sensory information provided to the subject make it possible to observe the orientation systems and their relative weight in the construction of a reference frame during spatial navigation. The results of the finger radial maze show that participants perform noticeably better in the conditions providing visual information. Visual information was found to predominate over proprioceptive and tactile information. Thus, in the condition combining visuospatial, proprioceptive and tactile information, subjects base their spatial navigation more strongly on visual cues, whether local or spatial. In this condition, a significant difference in strategy between men and women emerged: men rely mainly on spatial cues, whereas women prefer local cues. When tactile and proprioceptive information is available but vision is absent, participants use the spatial and local reference systems in a complementary fashion, without one predominating. When only proprioceptive information is available, subjects use a spatial (global) reference system. The touch-screen radial maze indicates that the reference system differs according to the local cue used: colours, being strong local cues, favour a local reference system, whereas black-and-white patterns are evidently very complex and difficult to memorize and push subjects towards a spatial reference strategy.
Abstract:
The international Functional Annotation Of the Mammalian Genomes 4 (FANTOM4) research collaboration set out to better understand the transcriptional network that regulates macrophage differentiation and to uncover novel components of the transcriptome employing a series of high-throughput experiments. The primary and unique technique is cap analysis of gene expression (CAGE), sequencing mRNA 5'-ends with a second-generation sequencer to quantify promoter activities even in the absence of gene annotation. Additional genome-wide experiments complement the setup including short RNA sequencing, microarray gene expression profiling on large-scale perturbation experiments and ChIP-chip for epigenetic marks and transcription factors. All the experiments are performed in a differentiation time course of the THP-1 human leukemic cell line. Furthermore, we performed a large-scale mammalian two-hybrid (M2H) assay between transcription factors and monitored their expression profile across human and mouse tissues with qRT-PCR to address combinatorial effects of regulation by transcription factors. These interdependent data have been analyzed individually and in combination with each other and are published in related but distinct papers. We provide all data together with systematic annotation in an integrated view as resource for the scientific community (http://fantom.gsc.riken.jp/4/). Additionally, we assembled a rich set of derived analysis results including published predicted and validated regulatory interactions. Here we introduce the resource and its update after the initial release.