Abstract:
High performance fiber reinforced concrete (HPFRC) is rapidly developing into a modern structural material with unique rheological and mechanical characteristics. Although several methodologies have been applied to achieve self-compacting requirements, doubts remain regarding the most convenient strategy for developing a HPFRC. In the present study, an innovative mix design method is proposed for the development of high-performance concrete reinforced with a relatively high dosage of steel fibers. The material properties of the developed concrete are assessed, and the structural behavior of the concrete is characterized under compressive, flexural and shear loading. This study clarifies the significant contribution of fibers to the shear resistance of concrete elements. The paper further discusses a FEM-based simulation aiming to address the possibility of calibrating the constitutive model parameters related to fracture modes I and II.
Abstract:
Many extensions of the Standard Model posit the existence of heavy particles with long lifetimes. This article presents the results of a search for events containing at least one long-lived particle that decays at a significant distance from its production point into two leptons or into five or more charged particles. This analysis uses a data sample of proton-proton collisions at √s = 8 TeV corresponding to an integrated luminosity of 20.3 fb⁻¹ collected in 2012 by the ATLAS detector operating at the Large Hadron Collider. No events are observed in any of the signal regions, and limits are set on model parameters within supersymmetric scenarios involving R-parity violation, split supersymmetry, and gauge mediation. In some of the search channels, the trigger and search strategy are based only on the decay products of individual long-lived particles, irrespective of the rest of the event. In these cases, the provided limits can easily be reinterpreted in different scenarios.
Abstract:
The geodynamic forces acting in the Earth's interior manifest themselves in a variety of ways. Volcanoes are amongst the most impressive examples in this respect, but as with an iceberg, they represent only the tip of a more extensive system hidden underground. This system consists of a source region where melt forms and accumulates, feeder connections through which magma is transported towards the surface, and different reservoirs where it is stored before it eventually erupts to form a volcano. A magma is a mixture of melt and crystals. The latter can be extracted from the source region or form anywhere along the path to their final place of crystallization, and they retain information about the overall plumbing system. The host rocks of an intrusion, in contrast, provide information at the emplacement level: they record the effects of the thermal and mechanical forces imposed by the magma. For a better understanding of the system, both parts - magmatic and metamorphic petrology - have to be integrated. I demonstrate in this thesis that information from both is complementary. It is an iterative process, using constraints from one field to better constrain the other. Reading the history of the host rocks is not always straightforward. This is shown in chapter two, where a model for the formation of clustered garnets observed in the contact aureole is proposed. Fragments of garnets older than the intrusive rocks are overgrown by garnet crystallizing during the reheating caused by emplacement of the adjacent pluton. The formation of the clusters is therefore not a single event, as generally assumed, but the result of a two-stage process, namely the alteration of the old grains followed by the overgrowth and amalgamation of new garnet rims. This makes an important difference when applying petrological methods such as thermobarometry, geochronology or grain size distributions. The thermal conditions in the aureole are a strong function of the emplacement style of the pluton; it is therefore necessary to understand the pluton before drawing conclusions about its aureole. A study investigating the intrusive rocks by means of field, geochemical, geochronological and structural methods is presented in chapter three. It provides important information about the assembly of the intrusion, as well as new insights into the nature of large, homogeneous plutons and the structure of the plumbing system in general. The incremental emplacement of the Western Adamello tonalite is documented, and the existence of an intermediate reservoir beneath homogeneous plutons is proposed. In chapter four it is demonstrated that information extracted from the host rock provides further constraints on the emplacement process of the intrusion. Temperatures obtained by combining field observations with phase petrology modeling are used together with thermal models to constrain the magmatic activity of the immediately adjacent intrusion. Instead of using the thermal models to check the petrological result, the inverse is done: the model parameters are varied until a match with the aureole temperatures is obtained (a toy numerical illustration follows this abstract). It is shown that only a few parameter combinations give a match, and that temperature estimates from the aureole can thus constrain the frequency of magmatic activity in ancient magmatic systems. In the fifth chapter, the anisotropy of magnetic susceptibility of intrusive rocks is compared to 3D tomography.
The obtained signal is a function of the shape and distribution of ferromagnetic grains, and is often used to infer flow directions of magma. It turns out that the signal is dominated by the shape of the magnetic crystals and, where they form tight clusters, also by their distribution. This is in good agreement with predictions made in the theoretical and experimental literature. In the sixth chapter, arguments for partial melting of host rock carbonates are presented. While at first very surprising, this is to be expected when considering the prior results from the intrusive study and experiments from the literature. Partial melting is documented by compelling microstructural, geochemical and structural data. The necessary conditions are far from extreme, and this process might be more frequent than previously thought. The carbonate melt is highly mobile and can move along grain boundaries, infiltrating other rocks and ultimately altering the existing mineral assemblage. Finally, a mineralogical curiosity is presented in chapter seven: the mineral assemblage magnesite and calcite in apparent equilibrium. It is well known that these two carbonates are not stable together in the system CaO-MgO-FeO-CO2; indeed, magnesite and calcite should react to dolomite during metamorphism. The explanation presented for this "forbidden" assemblage is that a calcite melt infiltrated the magnesite-bearing rock along grain boundaries and caused the peculiar microstructure. This is supported by isotopic disequilibrium between calcite and magnesite. A further implication of partially molten carbonates is that the host rock drastically loses its strength, so that its physical properties may become comparable to those of the intrusive rocks. This contrasting behavior of the host rock may ease the emplacement of the intrusion. The circle thus closes, and the iterative process of better constraining the emplacement can start again.
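The inverse exercise described for chapter four, rerunning forward thermal models until they reproduce the aureole temperatures, can be illustrated with a toy calculation. The Python sketch below uses entirely hypothetical values for diffusivity, temperatures, geometry and the target temperature (the thesis's actual models are more elaborate); it scans the magma recharge interval of a 1-D conductive model and reports the peak temperature reached at a fixed point in the aureole:

```python
import numpy as np

# Toy 1-D conductive model: a pluton recharged at regular intervals heats
# its aureole; we scan the recharge interval until the peak temperature at
# a monitoring point matches a petrological estimate. All values assumed.

kappa = 1e-6              # thermal diffusivity, m^2/s (typical crustal value)
L, nx = 10_000.0, 400     # model length (m) and number of cells
dx = L / nx
dt = 0.4 * dx**2 / kappa  # explicit stability limit (factor < 0.5)

T_host, T_magma = 300.0, 900.0          # deg C (assumed)
intrusion = slice(0, nx // 10)          # leftmost 1 km represents the pluton
monitor = int((L / 10 + 500.0) / dx)    # point 500 m into the aureole

def peak_aureole_T(recharge_interval_yr, n_pulses=20):
    """Peak temperature at the monitoring point for a given recharge interval."""
    year = 3.15e7
    T = np.full(nx, T_host)
    t, t_next, pulses, peak = 0.0, 0.0, 0, T_host
    t_end = n_pulses * recharge_interval_yr * year
    while t < t_end:
        if t >= t_next and pulses < n_pulses:   # instantaneous sill injection
            T[intrusion] = T_magma
            t_next += recharge_interval_yr * year
            pulses += 1
        T[1:-1] += kappa * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
        T[0], T[-1] = T[1], T_host              # insulated left / fixed far field
        peak = max(peak, T[monitor])
        t += dt
    return peak

target = 600.0   # peak aureole temperature from thermobarometry (assumed)
for interval in (50_000, 20_000, 10_000, 5_000, 2_000):
    print(f"{interval} yr recharge: peak {peak_aureole_T(interval):.0f} deg C "
          f"(target {target:.0f})")
```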
Abstract:
Piped water is used to remove hydration heat from concrete blocks during construction. In this paper we develop an approximate model for this process. The problem reduces to solving a one-dimensional heat equation in the concrete, coupled with a first-order differential equation for the water temperature. Numerical results are presented and the effect of varying the model parameters is shown. An analytical solution is also provided for a steady-state, constant heat generation model. This helps highlight the dependence on certain parameters and can therefore aid the design of cooling systems.
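As a rough illustration of the coupling described above (not the paper's actual formulation), the following sketch solves a 1-D heat equation with decaying internal heat generation, linked through a boundary condition to a first-order equation for the water temperature; all parameter values are assumptions:

```python
import numpy as np

# Minimal sketch: 1-D heat equation in concrete with internal hydration
# heat, coupled at x=0 to a first-order ODE for the cooling-water
# temperature via a Robin boundary condition. All values illustrative.

alpha = 8e-7          # concrete thermal diffusivity, m^2/s
k = 2.0               # conductivity, W/(m K)
q0 = 150.0            # peak hydration heat rate, W/m^3 (assumed)
tau_q = 2 * 86400.0   # decay time of heat generation, s (assumed)
h = 50.0              # pipe-wall heat transfer coefficient, W/(m^2 K)
a_w = 1e-3            # water heating rate constant, 1/s (lumps flow, area)

Lx, nx = 1.0, 100
dx = Lx / nx
dt = 0.4 * dx**2 / alpha        # explicit stability limit
rho_c = k / alpha               # volumetric heat capacity, J/(m^3 K)

T = np.full(nx + 1, 20.0)       # concrete temperature, deg C
Tw = 15.0                       # water temperature, deg C
t = 0.0
while t < 7 * 86400.0:          # one week of cooling
    q = q0 * np.exp(-t / tau_q)                 # decaying heat generation
    T[1:-1] += dt * (alpha * (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
                     + q / rho_c)
    # Robin condition at the pipe (x=0): k dT/dx = h (T - Tw)
    r = h * dx / k
    T[0] = (T[1] + r * Tw) / (1 + r)
    T[-1] = T[-2]                               # insulated far side
    Tw += dt * a_w * (T[0] - Tw)                # first-order ODE for water
    t += dt

print(f"peak concrete T: {T.max():.1f} deg C, water T: {Tw:.1f} deg C")
```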
Abstract:
Objectives: Several population pharmacokinetic (PPK) and pharmacokinetic-pharmacodynamic (PK-PD) analyses have been performed with the anticancer drug imatinib. Inspired by the approach of meta-analysis, we aimed to compare and combine results from published studies in a useful way, in particular to improve the clinical interpretation of imatinib concentration measurements in the scope of therapeutic drug monitoring (TDM). Methods: Original PPK analyses and PK-PD studies (PK surrogate: trough concentration Cmin; PD outcomes: optimal early response and specific adverse events) were searched systematically on MEDLINE. From each identified PPK model, a predicted concentration distribution under standard dosage was derived through 1000 simulations (NONMEM), after standardizing model parameters to common covariates. A "reference range" was calculated from pooled simulated concentrations in a semi-quantitative approach (without specific weighting) over the whole dosing interval. Meta-regression summarized relationships between Cmin and optimal/suboptimal early treatment response. Results: Nine PPK models and 6 relevant PK-PD reports in CML patients were identified. Model-based predicted median Cmin ranged from 555 to 1388 ng/ml (grand median: 870 ng/ml; inter-quartile range: 520-1390 ng/ml). Across PK-PD studies, the probability of achieving optimal early response was predicted to increase from 60% to 85% as Cmin rose from 520 to 1390 ng/ml (odds ratio for a doubling of Cmin: 2.7). Reporting of specific adverse events was too heterogeneous to perform a regression analysis; the overall frequencies of anemia, rash and fluid retention nevertheless increased consistently with Cmin, although less steeply than response probability. Conclusions: Predicted drug exposure may differ substantially between PPK analyses; in this review, heterogeneity was mainly attributed to 2 "outlying" models. The established reference range appears to cover the range where both good efficacy and acceptable tolerance are expected for most patients. TDM-guided dose adjustment therefore appears justified for imatinib in CML patients; its usefulness now remains to be prospectively validated in a randomized trial.
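The pooling and meta-regression steps can be sketched numerically. In the toy example below, the per-model medians and variabilities are invented placeholders (the actual distributions came from 1000 NONMEM simulations per published model); only the odds ratio of 2.7 per doubling of Cmin and the 60-85% response range are taken from the abstract:

```python
import numpy as np

# Illustrative pooling: draw trough concentrations (Cmin) from each model's
# predicted distribution (approximated as lognormal with invented medians
# and CVs), pool them, and evaluate a logistic exposure-response relation
# with odds ratio 2.7 per doubling of Cmin (from the abstract).

rng = np.random.default_rng(1)

# (median ng/ml, coefficient of variation) for hypothetical PPK models
models = [(700, 0.45), (900, 0.50), (1100, 0.40), (600, 0.55)]
pooled = np.concatenate([
    m * np.exp(rng.normal(0, np.sqrt(np.log(1 + cv**2)), 1000))
    for m, cv in models
])

lo, med, hi = np.percentile(pooled, [25, 50, 75])
print(f"pooled median {med:.0f} ng/ml, IQR {lo:.0f}-{hi:.0f} ng/ml")

# Logistic meta-regression: slope fixed by OR = 2.7 per doubling; intercept
# chosen so P(optimal response) = 60% at the lower quartile, mirroring the
# reported 60 -> 85% increase across the inter-quartile range.
b = np.log(2.7)                           # per log2(Cmin)
a = np.log(0.60 / 0.40) - b * np.log2(lo)
for c in (lo, med, hi):
    p = 1 / (1 + np.exp(-(a + b * np.log2(c))))
    print(f"Cmin {c:.0f} ng/ml -> P(optimal early response) = {p:.2f}")
```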
Abstract:
Objectives: Gentamicin is among the most commonly prescribed antibiotics in newborns, but large interindividual variability in exposure levels exists. Based on a population pharmacokinetic analysis of a cohort of unselected neonates, we aimed to validate current dosing recommendations from a recent reference guideline (Neofax®). Methods: From 3039 concentrations collected in 994 preterm (median gestational age 32.3 weeks, range 24.2-36.5) and 455 term newborns treated at the University Hospital of Lausanne between 2006 and 2011, a population pharmacokinetic analysis was performed with NONMEM®. Model-based simulations were used to assess the ability of dosing regimens to bring concentrations into the targets: trough ≤ 1 mg/L and peak ≈ 8 mg/L. Results: A two-compartment model best characterized gentamicin pharmacokinetics. Model parameters are presented in the table. Body weight, gestational age and postnatal age positively influence clearance, which decreases under dopamine administration; body weight and gestational age influence the distribution volume. Model-based simulations confirm that preterm infants need doses above 4 mg/kg and extended dosing intervals, up to 48 hours for very preterm newborns, whereas most term newborns would achieve adequate exposure under 4 mg/kg q24h. More than 90% of neonates would achieve trough concentrations below 2 mg/L and peaks above 6 mg/L following the most recent guidelines. Conclusions: Simulated gentamicin exposure shows good agreement with recent dosing recommendations for target concentration achievement.
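A minimal sketch of the simulation step, assuming a generic two-compartment bolus model with invented parameter values (the study's actual population estimates are in its table, which is not reproduced here):

```python
import numpy as np
from scipy.linalg import expm

# Two-compartment IV model dosed q24h or q48h, checking trough and peak
# targets. Parameters below are illustrative assumptions, not the study's
# population estimates; the bolus is an approximation of a short infusion.

CL, V1, Q, V2 = 0.05, 0.45, 0.03, 0.35    # L/h/kg and L/kg (assumed)
A = np.array([[-(CL + Q) / V1, Q / V2],
              [Q / V1,         -Q / V2]]) # linear ODE for amounts per kg

def simulate(dose_mg_per_kg, interval_h, n_doses=6, t_peak=1.0):
    x = np.zeros(2)
    step = expm(A * t_peak)                  # propagate to the 1 h "peak" sample
    rest = expm(A * (interval_h - t_peak))   # then to the end of the interval
    for _ in range(n_doses):
        x = x + np.array([dose_mg_per_kg, 0.0])   # bolus into central cpt
        peak = (step @ x)[0] / V1                 # mg/L at 1 h post dose
        x = rest @ (step @ x)
    trough = x[0] / V1                            # mg/L just before next dose
    return peak, trough

for interval in (24, 48):
    pk, tr = simulate(4.0, interval)
    print(f"4 mg/kg q{interval}h: peak {pk:.1f} mg/L, trough {tr:.2f} mg/L")
```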
Abstract:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending the corresponding approaches beyond the local scale still represents a major challenge, yet is critically important for the development of reliable groundwater flow and contaminant transport models. To address this issue, I have developed a hydrogeophysical data integration technique based on a two-step Bayesian sequential simulation procedure that is specifically targeted towards larger-scale problems. The objective is to simulate the distribution of a target hydraulic parameter based on spatially exhaustive, but poorly resolved, measurements of a pertinent geophysical parameter and locally highly resolved, but spatially sparse, measurements of the considered geophysical and hydraulic parameters. To this end, my algorithm links the low- and high-resolution geophysical data via a downscaling procedure before relating the downscaled regional-scale geophysical data to the high-resolution hydraulic parameter field. I first illustrate the application of this novel data integration approach to a realistic synthetic database consisting of collocated high-resolution borehole measurements of the hydraulic and electrical conductivities and spatially exhaustive, low-resolution electrical conductivity estimates obtained from electrical resistivity tomography (ERT). The overall viability of this method is tested and verified by performing and comparing flow and transport simulations through the original and simulated hydraulic conductivity fields. The corresponding results indicate that the proposed data integration procedure does indeed allow for obtaining faithful estimates of the larger-scale hydraulic conductivity structure and reliable predictions of the transport characteristics over medium- to regional-scale distances. The approach is then applied to a corresponding field scenario consisting of collocated high-resolution measurements of the electrical conductivity, as measured using a cone penetrometer testing (CPT) system, and the hydraulic conductivity, as estimated from electromagnetic flowmeter and slug test measurements, in combination with spatially exhaustive low-resolution electrical conductivity estimates obtained from surface-based electrical resistivity tomography (ERT). The corresponding results indicate that the newly developed data integration approach is indeed capable of adequately capturing both the small-scale heterogeneity as well as the larger-scale trend of the prevailing hydraulic conductivity field.
The results also indicate that this novel data integration approach is remarkably flexible and robust and hence can be expected to be applicable to a wide range of geophysical and hydrological data at all scale ranges. In the second part of my thesis, I evaluate in detail the viability of sequential geostatistical resampling as a proposal mechanism for Markov chain Monte Carlo (MCMC) methods applied to high-dimensional geophysical and hydrological inverse problems, in order to allow for a more accurate and realistic quantification of the uncertainty associated with the inferred models. Focusing on a series of pertinent crosshole georadar tomographic examples, I investigate two classes of geostatistical resampling strategies with regard to their ability to efficiently and accurately generate independent realizations from the Bayesian posterior distribution. The corresponding results indicate that, despite its popularity, sequential resampling is rather inefficient at drawing independent posterior samples for realistic synthetic case studies, notably for the practically common and important scenario of pronounced spatial correlation between model parameters. To address this issue, I have developed a new gradual-deformation-based perturbation approach, which is flexible with regard to the number of model parameters as well as the perturbation strength. Compared to sequential resampling, this newly proposed approach proves highly effective in decreasing the number of iterations required to draw independent samples from the Bayesian posterior distribution.
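The gradual-deformation proposal can be sketched compactly. A new Gaussian field is formed as a rotation between the current field and an independent prior draw, m' = m cos(θ) + u sin(θ), which preserves the Gaussian prior exactly, so the Metropolis acceptance ratio reduces to a likelihood ratio; θ tunes the perturbation strength. The toy 1-D example below is my own construction, not the thesis code:

```python
import numpy as np

# Gradual-deformation MCMC over a 1-D Gaussian random field: proposals
# rotate between the current field m and an independent prior draw u,
#   m' = m cos(theta) + u sin(theta),
# leaving the Gaussian prior invariant; theta sets the step size.

rng = np.random.default_rng(0)
n = 200
x = np.arange(n)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / 20.0)  # exponential covariance
Lc = np.linalg.cholesky(C + 1e-10 * np.eye(n))

def prior_draw():
    return Lc @ rng.standard_normal(n)

def log_like(m, data, idx, sigma=0.1):
    return -0.5 * np.sum((m[idx] - data) ** 2) / sigma**2

m_true = prior_draw()
idx = np.arange(0, n, 10)                # sparse "tomographic" observations
data = m_true[idx] + 0.1 * rng.standard_normal(idx.size)

theta = 0.15                             # perturbation strength (tunable)
m = prior_draw()
ll = log_like(m, data, idx)
accepted = 0
for _ in range(5000):
    u = prior_draw()
    m_prop = m * np.cos(theta) + u * np.sin(theta)
    ll_prop = log_like(m_prop, data, idx)
    if np.log(rng.random()) < ll_prop - ll:  # prior preserved, so the ratio
        m, ll = m_prop, ll_prop              # reduces to the likelihood
        accepted += 1
print("acceptance rate:", accepted / 5000)
```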
Abstract:
The MIGCLIM R package is a function library for the open source R software that enables the implementation of species-specific dispersal constraints into projections of species distribution models under environmental change and/or landscape fragmentation scenarios. The model is based on a cellular automaton and the basic modeling unit is a cell that is inhabited or not. Model parameters include dispersal distance and kernel, long distance dispersal, barriers to dispersal, propagule production potential and habitat invasibility. The MIGCLIM R package has been designed to be highly flexible in the parameter values it accepts, and to offer good compatibility with existing species distribution modeling software. Possible applications include the projection of future species distributions under environmental change conditions and modeling the spread of invasive species.
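MIGCLIM itself is an R library; the short Python sketch below merely illustrates the cellular-automaton logic described above (occupied cells colonize suitable cells within a distance-dependent dispersal kernel), with invented grid, kernel and suitability values rather than the package's actual API:

```python
import numpy as np

# Toy cellular automaton for species spread: each occupied cell scatters
# propagules to habitat-suitable cells within `max_dist` cells, with a
# colonization probability that decays with distance. Values illustrative.

rng = np.random.default_rng(3)
ny, nx = 40, 40
suitable = rng.random((ny, nx)) < 0.6    # binary habitat suitability map
occupied = np.zeros((ny, nx), bool)
occupied[20, 20] = True                  # initial population

max_dist = 3                             # dispersal distance, in cells
kernel_p = np.array([0.4, 0.15, 0.05])   # colonization prob. per distance

def step(occ):
    new = occ.copy()
    for y, x in zip(*np.nonzero(occ)):
        for dy in range(-max_dist, max_dist + 1):
            for dx in range(-max_dist, max_dist + 1):
                d = max(abs(dy), abs(dx))          # Chebyshev distance
                if d == 0 or d > max_dist:
                    continue
                ty, tx = y + dy, x + dx
                if (0 <= ty < ny and 0 <= tx < nx and suitable[ty, tx]
                        and rng.random() < kernel_p[d - 1]):
                    new[ty, tx] = True
    return new

for generation in range(10):             # a changing-environment scenario
    occupied = step(occupied)            # would also update `suitable` here
print("cells colonized:", int(occupied.sum()))
```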
Abstract:
One hypothesis for the origin of alkaline lavas erupted on oceanic islands and in intracontinental settings is that they represent the melts of amphibole-rich veins in the lithosphere (or melts of their dehydrated equivalents if metasomatized lithosphere is recycled into the convecting mantle). Amphibole-rich veins are interpreted as cumulates produced by crystallization of low-degree melts of the underlying asthenosphere as they ascend through the lithosphere. We present the results of trace-element modelling of the formation and melting of veins formed in this way, with the goal of testing this hypothesis and of predicting how variability in the formation and subsequent melting of such cumulates (and of adjacent cryptically and modally metasomatized lithospheric peridotite) would be manifested in magmas generated by such a process. Because the high-pressure phase equilibria of hydrous near-solidus melts of garnet lherzolite are poorly constrained, and given the likely high variability of the hypothesized accumulation and remelting processes, we used Monte Carlo techniques to estimate how uncertainties in the model parameters (e.g. the compositions of the asthenospheric sources, their trace-element contents, and their degree of melting; the modal proportions of crystallizing phases, including accessory phases, as the asthenospheric partial melts ascend and crystallize in the lithosphere; the amount of metasomatism of the peridotitic country rock; the degree of melting of the cumulates and the amount of melt derived from the metasomatized country rock) propagate through the process and manifest themselves as variability in the trace-element contents and radiogenic isotopic ratios of model vein compositions and erupted alkaline magma compositions. We then compare the results of the models with amphibole observed in lithospheric veins and with oceanic and continental alkaline magmas. While the trace-element patterns of the near-solidus peridotite melts, the initial anhydrous cumulate assemblage (clinopyroxene +/- garnet +/- olivine +/- orthopyroxene), and the modelled coexisting liquids do not match the patterns observed in alkaline lavas, our calculations show that with further crystallization and the appearance of amphibole (and accessory minerals such as rutile, ilmenite, apatite, etc.) the calculated cumulate assemblages have trace-element patterns that closely match those observed in the veins and lavas. These calculated hydrous cumulate assemblages are highly enriched in incompatible trace elements and share many similarities with the trace-element patterns of alkaline basalts observed in oceanic or continental settings, such as positive Nb/La, negative Ce/Pb, and similar slopes of the rare earth elements. By varying the proportions of trapped liquid, and thus simulating the cryptic and modal metasomatism observed in peridotite that surrounds these veins, we can model the variations in Ba/Nb, Ce/Pb, and Nb/U ratios that are observed in alkaline basalts. If the isotopic compositions of the initial low-degree peridotite melts are similar to the range observed in mid-ocean ridge basalt, our model calculations produce cumulates that would have isotopic compositions similar to those observed in most alkaline ocean island basalt (OIB) and continental magmas after ∼0.15 Gyr. However, producing alkaline basalts with HIMU isotopic compositions requires much longer residence times (i.e. 1-2 Gyr), consistent with subduction and recycling of metasomatized lithosphere through the mantle, or an alternative explanation such as a heterogeneous asthenosphere. These modelling results support the interpretation proposed by various researchers that amphibole-bearing veins represent cumulates formed during the differentiation of a volatile-bearing, low-degree peridotite melt, and that these cumulates are significant components of the sources of alkaline OIB and continental magmas. The results of the forward models provide the potential for detailed tests of this class of hypotheses for the origin of alkaline magmas worldwide and for interpreting major and minor aspects of the geochemical variability of these magmas.
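The Monte Carlo logic can be illustrated with the standard batch-melting relation C_liq = C_0 / (D + F(1 - D)): uncertain source contents, bulk partition coefficients and melt fractions are drawn repeatedly and propagated into a distribution of model melt compositions. The sketch below uses placeholder inputs, not the study's calibrated values:

```python
import numpy as np

# Monte Carlo propagation through the batch-melting equation
#   C_liq = C0 / (D + F*(1 - D)).
# Source contents and bulk D values are placeholder guesses; the melt
# fraction F is shared between elements within each random draw.

rng = np.random.default_rng(7)
n = 10_000

F = rng.uniform(0.005, 0.03, n)          # low-degree melt fraction, per draw
c0 = {"Nb": 0.66, "La": 0.65}            # source contents, ppm (guesses)
bulkD = {"Nb": 0.003, "La": 0.01}        # bulk partition coeffs (guesses)

melts = {}
for el in c0:
    C0 = c0[el] * np.exp(rng.normal(0, 0.2, n))    # uncertain source content
    D = bulkD[el] * np.exp(rng.normal(0, 0.5, n))  # uncertain bulk D
    melts[el] = C0 / (D + F * (1 - D))             # batch melting

nb_la = melts["Nb"] / melts["La"]
lo, hi = np.percentile(nb_la, [5, 95])
print(f"Nb/La of model melts: median {np.median(nb_la):.2f}, "
      f"5-95% range {lo:.2f}-{hi:.2f}")
```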
Abstract:
Humoral factors play an important role in the control of exercise hyperpnea. The role of neuromechanical ventilatory factors, however, is still being investigated. We tested the hypothesis that the afferents of the thoracopulmonary system, and consequently of the neuromechanical ventilatory loop, influence the kinetics of oxygen consumption (VO2), carbon dioxide output (VCO2), and ventilation (VE) during moderate-intensity exercise. We did this by comparing the ventilatory time constants (tau) of exercise with and without an inspiratory load. Fourteen healthy, trained men (age 22.6 ± 3.2 yr) performed a continuous incremental cycle exercise test to determine maximal oxygen uptake (VO2max = 55.2 ± 5.8 ml·min⁻¹·kg⁻¹). On another day, after unloaded warm-up, they performed randomized constant-load tests at 40% of their VO2max for 8 min, one with and the other without an inspiratory threshold load of 15 cmH2O. Ventilatory variables were obtained breath by breath. Phase 2 ventilatory kinetics (VO2, VCO2, and VE) could be described in all cases by a monoexponential function. The bootstrap method revealed small coefficients of variation for the model parameters, indicating an accurate determination of all parameters. Paired Student's t-tests showed that the addition of the inspiratory resistance significantly increased tau during phase 2 for VO2 (43.1 ± 8.6 vs. 60.9 ± 14.1 s; P < 0.001), VCO2 (60.3 ± 17.6 vs. 84.5 ± 18.1 s; P < 0.001) and VE (59.4 ± 16.1 vs. 85.9 ± 17.1 s; P < 0.001). The average rise in tau was 41.3% for VO2, 40.1% for VCO2, and 44.6% for VE. These changes in tau indicate that neuromechanical ventilatory factors play a role in the ventilatory response to moderate exercise.
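A sketch of the phase-2 analysis, assuming synthetic breath-by-breath data: a monoexponential VO2(t) = baseline + A(1 - e^(-(t-TD)/tau)) is fitted and the precision of tau is assessed with a residual bootstrap, mirroring the coefficient-of-variation check mentioned above:

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit a phase-2 monoexponential to (synthetic) breath-by-breath VO2 and
# estimate the precision of tau by bootstrapping the residuals. The data
# below are simulated stand-ins, not measurements from the study.

def phase2(t, baseline, amplitude, delay, tau):
    y = np.full_like(t, baseline, dtype=float)
    on = t > delay
    y[on] += amplitude * (1 - np.exp(-(t[on] - delay) / tau))
    return y

rng = np.random.default_rng(5)
t = np.arange(0, 480, 3.0)                 # roughly one breath every 3 s
true = (800.0, 1200.0, 20.0, 43.0)         # ml/min, ml/min, s, s (assumed)
vo2 = phase2(t, *true) + rng.normal(0, 60.0, t.size)

popt, _ = curve_fit(phase2, t, vo2, p0=(700, 1000, 15, 30))

# Residual bootstrap: refit on resampled residuals, report the CV of tau
resid = vo2 - phase2(t, *popt)
taus = []
for _ in range(200):
    y_b = phase2(t, *popt) + rng.choice(resid, resid.size, replace=True)
    taus.append(curve_fit(phase2, t, y_b, p0=popt)[0][3])
taus = np.array(taus)
print(f"tau = {popt[3]:.1f} s, bootstrap CV = "
      f"{100 * taus.std() / taus.mean():.1f}%")
```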
Abstract:
This paper discusses predictive motion control of a MiRoSoT robot. The dynamic model of the robot is deduced by taking into account the whole process - robot, vision, control and transmission systems. Based on the obtained dynamic model, an integrated predictive control algorithm is proposed to position the robot precisely while avoiding either stationary or moving obstacles. This objective is achieved automatically by introducing distance constraints into the open-loop optimization of the control inputs. Simulation results demonstrate the feasibility of such a control strategy for the deduced dynamic model.
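The open-loop optimization with distance constraints can be sketched generically. The following toy controller (point-mass kinematics standing in for the identified robot/vision/transmission dynamics, all values invented) minimizes a tracking cost over a horizon subject to a minimum-clearance constraint around an obstacle:

```python
import numpy as np
from scipy.optimize import minimize

# Toy receding-horizon step: optimize a control sequence over H steps,
# with an inequality constraint keeping the trajectory outside a safety
# radius around an obstacle. Point-mass model and values are illustrative.

dt, H = 0.1, 15                       # time step and horizon length
goal = np.array([2.0, 0.0])
obstacle, r_safe = np.array([1.0, 0.0]), 0.3

def rollout(u_flat, x0):
    u = u_flat.reshape(H, 2)
    x, traj = x0.copy(), []
    for k in range(H):
        x = x + dt * u[k]             # kinematic model: x' = u
        traj.append(x.copy())
    return np.array(traj)

def cost(u_flat, x0):
    traj = rollout(u_flat, x0)
    return np.sum((traj - goal) ** 2) + 1e-2 * np.sum(u_flat ** 2)

def min_clearance(u_flat, x0):        # >= 0 when the whole path is safe
    traj = rollout(u_flat, x0)
    return np.min(np.linalg.norm(traj - obstacle, axis=1)) - r_safe

x0 = np.array([0.0, 0.05])
res = minimize(cost, np.zeros(2 * H), args=(x0,),
               constraints=[{"type": "ineq", "fun": min_clearance,
                             "args": (x0,)}])
traj = rollout(res.x, x0)
print("final position:", traj[-1].round(2),
      "min distance to obstacle:",
      round(min_clearance(res.x, x0) + r_safe, 2))
```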
Abstract:
Over the past decade, significant interest has been expressed in relating the spatial statistics of surface-based reflection ground-penetrating radar (GPR) data to those of the imaged subsurface volume. A primary motivation for this work is that changes in the radar wave velocity, which largely control the character of the observed data, are expected to be related to corresponding changes in subsurface water content. Although previous work has indeed indicated that the spatial statistics of GPR images are linked to those of the water content distribution of the probed region, a viable method for quantitatively analyzing the GPR data and solving the corresponding inverse problem has not yet been presented. Here we address this issue by first deriving a relationship between the 2-D autocorrelation of a water content distribution and that of the corresponding GPR reflection image. We then show how a Bayesian inversion strategy based on Markov chain Monte Carlo sampling can be used to estimate the posterior distribution of subsurface correlation model parameters that are consistent with the GPR data. Our results indicate that if the underlying assumptions are valid and we possess adequate prior knowledge regarding the water content distribution, in particular its vertical variability, this methodology allows not only for the reliable recovery of lateral correlation model parameters but also for estimates of parameter uncertainties. In the case where prior knowledge regarding the vertical variability of water content is not available, the results show that the methodology still reliably recovers the aspect ratio of the heterogeneity.
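A compact illustration of the sampling strategy, under strong simplifying assumptions of my own (a 2-D exponential correlation model fitted directly to a noisy autocorrelation, rather than to GPR reflection data): Metropolis sampling recovers the lateral and vertical correlation lengths and hence the aspect ratio:

```python
import numpy as np

# Metropolis sampling of the parameters of a 2-D exponential correlation
# model, rho(hx, hz) = exp(-sqrt((hx/ax)^2 + (hz/az)^2)), given a noisy
# "measured" autocorrelation. Synthetic data and priors are illustrative.

rng = np.random.default_rng(11)
hx, hz = np.meshgrid(np.arange(0, 10.0, 0.5), np.arange(0, 5.0, 0.25))

def corr(ax, az):
    return np.exp(-np.sqrt((hx / ax) ** 2 + (hz / az) ** 2))

ax_true, az_true, sigma = 4.0, 0.8, 0.02
data = corr(ax_true, az_true) + rng.normal(0, sigma, hx.shape)

def log_post(ax, az):
    if not (0.1 < ax < 20 and 0.1 < az < 20):   # uniform prior bounds
        return -np.inf
    return -0.5 * np.sum((corr(ax, az) - data) ** 2) / sigma**2

theta = np.array([2.0, 2.0])
lp = log_post(*theta)
chain = []
for _ in range(20_000):
    prop = theta + rng.normal(0, 0.05, 2)       # random-walk proposal
    lp_prop = log_post(*prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta.copy())
chain = np.array(chain[5000:])                  # discard burn-in
print("posterior mean ax, az:", chain.mean(axis=0).round(2))
print("aspect ratio ax/az:", (chain[:, 0] / chain[:, 1]).mean().round(2))
```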
Abstract:
A four-compartment model of the cardiovascular system is developed. To allow for easy interpretation and to minimise the number of parameters, an effort was made to keep the model as simple as possible. A sensitivity analysis is first carried out to determine which model parameters are the most important for characterising the blood pressure signal. A four-stage process is then described which accurately determines all parameter values. This process is applied to data from three patients, and good agreement is shown in all cases.
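The sensitivity-analysis step can be sketched on a deliberately simpler surrogate (a two-element Windkessel rather than the paper's four compartments): each parameter is perturbed by 1% and the induced change in the pressure signal is measured:

```python
import numpy as np

# Finite-difference sensitivity of a pressure signal to model parameters,
# on a two-element Windkessel surrogate: C dP/dt = Q(t) - P/R, with a
# pulsatile inflow Q(t). All parameter values are illustrative.

def pressure(R, C, t_end=10.0, dt=1e-3):
    n = int(t_end / dt)
    t = np.arange(n) * dt
    Q = np.maximum(np.sin(2 * np.pi * t / 0.8), 0.0) * 400.0  # ml/s, assumed
    P = np.empty(n)
    P[0] = 80.0
    for i in range(n - 1):
        P[i + 1] = P[i] + dt * (Q[i] - P[i] / R) / C
    return P[int(8.0 / dt):]          # keep the last beats (steady cycle)

params = {"R": 1.0, "C": 1.5}         # mmHg s/ml, ml/mmHg (illustrative)
base = pressure(**params)
for name in params:
    bumped = dict(params)
    bumped[name] *= 1.01              # +1% perturbation
    dP = pressure(**bumped) - base
    # normalized sensitivity of the whole signal to this parameter
    s = np.linalg.norm(dP) / np.linalg.norm(base) / 0.01
    print(f"sensitivity to {name}: {s:.2f}")
```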
Abstract:
Discrete data arise in various research fields, typically when the observations are count data. I propose a robust and efficient parametric procedure for the estimation of discrete distributions. The estimation is done in two phases. First, a very robust, but possibly inefficient, estimate of the model parameters is computed and used to identify outliers. Then the outliers are either removed from the sample or given low weights, and a weighted maximum likelihood estimate (WML) is computed. The weights are determined via an adaptive process such that, if the data follow the model, asymptotically no observation is downweighted. I prove that the final estimator inherits the breakdown point of the initial one, and that its influence function at the model is the same as the influence function of the maximum likelihood estimator, which strongly suggests that it is asymptotically fully efficient. The initial estimator is a minimum disparity estimator (MDE). MDEs can be shown to have full asymptotic efficiency, and some MDEs have very high breakdown points and very low bias under contamination. Several initial estimators are considered, and the performance of the WML based on each of them is studied. It turns out that in a great variety of situations the WML substantially improves on the initial estimator, both in terms of finite-sample mean square error and in terms of bias under contamination. Moreover, the performance of the WML remains rather stable under a change of the MDE, even if the MDEs have very different behaviors. Two examples of application of the WML to real data are considered. In both of them the necessity for a robust estimator is clear: the maximum likelihood estimator is badly corrupted by the presence of a few outliers. This procedure is particularly natural in the discrete distribution setting, but could be extended to the continuous case, for which a possible procedure is sketched.
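A two-phase sketch of the procedure for a Poisson model, with thresholds and data of my own choosing: a minimum Hellinger disparity fit supplies the robust initial estimate, wildly improbable observations are downweighted, and a weighted maximum likelihood estimate is computed:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import poisson

# Phase 1: robust initial fit by minimum Hellinger disparity between the
# empirical and model pmf. Phase 2: downweight improbable observations and
# compute a weighted MLE. Data and thresholds are illustrative choices.

rng = np.random.default_rng(2)
x = np.concatenate([rng.poisson(3.0, 190), np.full(10, 25)])  # 5% outliers

values, counts = np.unique(x, return_counts=True)
emp = counts / x.size

def neg_hellinger_affinity(lam):      # minimizing this minimizes disparity
    return -np.sum(np.sqrt(emp * poisson.pmf(values, lam)))

lam0 = minimize_scalar(neg_hellinger_affinity,
                       bounds=(0.1, 30), method="bounded").x

# Adaptive weights: observations far out in the tail of the initial fit
# get weight ~0; typical observations keep full weight 1.
ratio = poisson.pmf(x, lam0) / poisson.pmf(np.round(lam0), lam0)
w = np.clip(ratio / 0.01, 0, 1)       # full weight unless pmf ratio < 1%

lam_wml = np.sum(w * x) / np.sum(w)   # weighted MLE of the Poisson mean
print(f"plain MLE {x.mean():.2f}, initial MDE {lam0:.2f}, "
      f"WML {lam_wml:.2f}")
```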
Abstract:
Recently, there has been increased interest in the neural mechanisms underlying perceptual decision making. However, the effect of neuronal adaptation in this context has not yet been studied. We begin our study by investigating how adaptation can bias perceptual decisions. We considered behavioral data from an experiment on high-level adaptation-related aftereffects in a perceptual decision task with ambiguous stimuli in humans. To understand the driving force behind the perceptual decision process, a biologically inspired cortical network model was used. Two theoretical scenarios arose for explaining the perceptual switch from the category of the adaptor stimulus to the opposite, nonadapted one: noise-driven transitions, due to the probabilistic spike times of neurons, and adaptation-driven transitions, due to afterhyperpolarization currents. With increasing levels of neural adaptation, the system shifts from a noise-driven to an adaptation-driven modus. The behavioral results show that the underlying model is not just a bistable model, as is usual in the decision-making modeling literature, but that neuronal adaptation is high and the working point of the model therefore lies in the oscillatory regime. Using the same model parameters, we studied the effect of neural adaptation in a perceptual decision-making task where the same ambiguous stimulus was presented with and without a preceding adaptor stimulus. We find that, for different levels of sensory evidence favoring one of the two interpretations of the ambiguous stimulus, higher levels of neural adaptation lead to quicker decisions, contributing to a speed-accuracy trade-off.
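The shift from noise-driven to adaptation-driven switching can be illustrated with a reduced two-population rate model with mutual inhibition and an afterhyperpolarization-like adaptation variable (an illustration of the regimes discussed above, not the authors' network): with weak adaptation the winning population stays dominant and switches are rare, while strong adaptation drives oscillatory alternation:

```python
import numpy as np

# Two mutually inhibiting populations with an adaptation current and noise.
# With weak adaptation the winner stays dominant (switches, if any, are
# noise-driven); strong adaptation makes the system oscillate between the
# two percepts. All parameters are illustrative.

rng = np.random.default_rng(4)

def switches(g_adapt, t_end=60.0, dt=1e-3, noise=0.02):
    r = np.array([0.6, 0.4])      # firing rates of the two populations
    a = np.zeros(2)               # adaptation variables
    dominant, count = 0, 0
    for _ in range(int(t_end / dt)):
        # drive - cross-inhibition - adaptation, through a sigmoid
        inp = 1.0 - 2.0 * r[::-1] - g_adapt * a
        r += dt / 0.01 * (-r + 1 / (1 + np.exp(-8 * (inp - 0.5))))
        r += noise * np.sqrt(dt) * rng.standard_normal(2)
        a += dt / 2.0 * (-a + r)  # slow adaptation (tau = 2 s)
        now = int(r[1] > r[0])
        if now != dominant:       # count perceptual switches
            dominant, count = now, count + 1
    return count

for g in (0.0, 0.5, 1.5):
    print(f"adaptation strength {g}: {switches(g)} perceptual switches")
```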