39 results for incremental computation


Relevance: 20.00%

Abstract:

Hypoxia increases the ventilatory response to exercise, which leads to hyperventilation-induced hypocapnia and a subsequent reduction in cerebral blood flow (CBF). We studied the effects of adding CO2 to a hypoxic inspired gas on CBF during heavy exercise in an altitude-naïve population. We hypothesized that augmented inspired CO2 and hypoxia would exert synergistic effects on increasing CBF during exercise, which would improve exercise capacity compared to hypocapnic hypoxia. We also examined the responsiveness of CO2 and O2 chemoreception in the regulation of ventilation (V̇E) during incremental exercise. We measured middle cerebral artery velocity (MCAv; an index of CBF), V̇E, end-tidal PCO2, the respiratory compensation threshold (RC) and the ventilatory response to exercise (V̇E slope) in ten healthy men during incremental cycling to exhaustion in normoxia and hypoxia (FIO2 = 0.10), with and without augmentation of the fraction of inspired CO2 (FICO2). During exercise in normoxia, augmenting FICO2 elevated MCAv throughout exercise and lowered both the RC onset and the V̇E slope below RC (P<0.05). In hypoxia, MCAv and the V̇E slope below RC during exercise were elevated, while the onset of RC occurred at a lower exercise intensity (P<0.05). Augmenting FICO2 in hypoxia increased V̇E at RC (P<0.05), but no difference was observed in RC onset, MCAv, or the V̇E slope below RC (P>0.05). The V̇E slope above RC was unchanged with either hypoxia or augmented FICO2 (P>0.05). We found that augmenting FICO2 increased CBF during sub-maximal exercise in normoxia, but not in hypoxia, indicating that the 'normal' cerebrovascular response to hypercapnia is blunted during exercise in hypoxia, possibly due to an exhaustion of cerebral vasodilatory reserve. This finding may explain the lack of improvement in exercise capacity in hypoxia with augmented CO2. Our data further indicate that, during exercise below RC, chemoreception is responsive, while above RC the ventilatory response to CO2 is blunted.
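
The V̇E slope below and above RC is typically obtained by fitting two linear segments to ventilation-versus-work-rate data. The abstract does not describe the authors' exact fitting procedure, so the following is only a minimal sketch of one common approach, an exhaustive-search two-segment least-squares fit; the function name ve_slopes and all data are invented for illustration.

    import numpy as np

    def ve_slopes(work, ve):
        # Fit two linear segments to VE vs work rate; the breakpoint with the
        # lowest total squared error is taken as the respiratory compensation
        # threshold (RC). Returns (RC work rate, slope below, slope above).
        work, ve = np.asarray(work, float), np.asarray(ve, float)
        best = None
        for i in range(2, len(work) - 2):              # candidate breakpoints
            lo = np.polyfit(work[:i], ve[:i], 1)       # slope/intercept below
            hi = np.polyfit(work[i:], ve[i:], 1)       # slope/intercept above
            sse = (np.sum((np.polyval(lo, work[:i]) - ve[:i]) ** 2)
                   + np.sum((np.polyval(hi, work[i:]) - ve[i:]) ** 2))
            if best is None or sse < best[0]:
                best = (sse, work[i], lo[0], hi[0])
        return best[1:]

    # Synthetic incremental test: slope 0.3 below RC (200 W), 0.6 above.
    work = np.arange(50, 351, 25)
    ve = np.where(work < 200, 0.3 * work, 60 + 0.6 * (work - 200))
    print(ve_slopes(work, ve))   # -> approximately (200.0, 0.3, 0.6)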

Relevance: 20.00%

Abstract:

Drug metabolism can produce metabolites with physicochemical and pharmacological properties that differ substantially from those of the parent drug, and consequently has important implications for both drug safety and efficacy. To reduce the risk of costly clinical-stage attrition due to the metabolic characteristics of drug candidates, there is a need for efficient and reliable ways to predict drug metabolism in vitro, in silico and in vivo. In this Perspective, we provide an overview of the state of the art of experimental and computational approaches for investigating drug metabolism. We highlight the scope and limitations of these methods, and indicate strategies to harvest the synergies that result from combining measurement and prediction of drug metabolism.

Relevance: 20.00%

Abstract:

The perceived low levels of genetic diversity, poor interspecific competitive and defensive ability, and loss of dispersal capacities of insular lineages have driven the view that oceanic islands are evolutionary dead ends. Focusing on the Atlantic bryophyte flora distributed across the archipelagos of the Azores, Madeira, the Canary Islands, Western Europe, and northwestern Africa, we used an integrative approach combining species distribution modeling and population genetic analyses based on approximate Bayesian computation to determine whether this view applies to organisms with inherently high dispersal capacities. Genetic diversity was found to be higher in island than in continental populations, contributing to mounting evidence that, contrary to theoretical expectations, island populations are not necessarily genetically depauperate. Patterns of genetic variation among island and continental populations consistently fitted those simulated under a scenario of de novo foundation of continental populations from insular ancestors better than those expected if islands represented a sink or a refugium of continental biodiversity. We suggest that the northeastern Atlantic archipelagos have played a key role as a stepping stone for transoceanic migrants. Our results challenge the traditional notion that oceanic islands are the end of the colonization road and illustrate the significant role of oceanic islands as reservoirs of novel biodiversity for the assembly of continental floras.
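
Approximate Bayesian computation, used above, sidesteps intractable likelihoods by simulating data under each candidate scenario and keeping the draws whose simulated summary statistics fall close to the observed ones. Below is a minimal rejection-sampling sketch of scenario choice; the simulator, the summary statistic, and every number are invented for illustration and bear no relation to the paper's actual coalescent analysis.

    import numpy as np

    rng = np.random.default_rng(0)
    obs = 0.42                       # observed summary statistic (invented)

    def simulate(scenario):
        # Toy stand-in for a coalescent simulator: each scenario implies a
        # different distribution of the summary statistic (values invented).
        mu = 0.45 if scenario == "island_source" else 0.30
        return rng.normal(mu, 0.05)

    eps = 0.02                       # acceptance tolerance
    counts = {"island_source": 0, "continent_source": 0}
    for _ in range(100_000):
        s = "island_source" if rng.random() < 0.5 else "continent_source"
        if abs(simulate(s) - obs) < eps:     # accept draws close to the data
            counts[s] += 1
    total = sum(counts.values())
    print({s: round(c / total, 3) for s, c in counts.items()})
    # -> island_source receives roughly 0.9 of the posterior mass in this toy setup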

Relevance: 20.00%

Abstract:

Hope is believed to be beneficial for vocational pursuits, but the question of how and why hope is related to pivotal career development variables remains largely unaddressed. In a series of three studies, we investigated the relationship between hope and career exploration. Study 1 examined at-risk adolescents (N = 228) in Switzerland and showed that hope explains variance in career exploration beyond the significant effects of generalized self-efficacy beliefs and perceived social support. Study 2 found the same result, using a measure of state hope, among a group of first-year students (N = 223) at a Swiss university. Study 3 applied a one-year cross-lagged design with a diverse group of students (N = 266) at a German university to investigate the mutual effects of dispositional hope and career exploration over time. Although the two variables were found to be related both within and across time points, we could not confirm lagged effects in either direction. The results suggest that hope is significantly correlated with career exploration because both are related to personality and social-contextual variables.

Relevance: 20.00%

Abstract:

Integrating single nucleotide polymorphism (SNP) p-values from genome-wide association studies (GWAS) across genes and pathways is a strategy to improve statistical power and gain biological insight. Here, we present Pascal (Pathway scoring algorithm), a powerful tool for computing gene and pathway scores from SNP-phenotype association summary statistics. For gene score computation, we implemented analytic and efficient numerical solutions to calculate test statistics. We examined in particular the sum and the maximum of chi-squared statistics, which measure the average and the strongest association signals per gene, respectively. For pathway scoring, we use a modified Fisher method, which not only offers a significant power improvement over more traditional enrichment strategies, but also eliminates the problem of arbitrary threshold selection inherent in any binary-membership-based pathway enrichment approach. We demonstrate the marked increase in power by analyzing summary statistics from dozens of large meta-studies for various traits. Our extensive testing indicates that our method not only excels in rigorous type I error control, but also results in more biologically meaningful discoveries.
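
To make the two gene scores concrete: each SNP p-value converts to a 1-df chi-squared statistic, and the gene score is the tail probability of either their sum or their maximum. Pascal's actual contribution is computing these tail probabilities while accounting for linkage disequilibrium between SNPs; the sketch below ignores LD and assumes independent SNPs, so it illustrates the statistics only, not Pascal's algorithm. The function name gene_scores and the example p-values are invented.

    import numpy as np
    from scipy import stats

    def gene_scores(snp_pvalues):
        # Combine per-SNP p-values into gene-level p-values, assuming
        # independence (Pascal itself corrects for LD between SNPs).
        p = np.asarray(snp_pvalues, float)
        chi = stats.chi2.isf(p, df=1)                  # p-value -> 1-df chi-squared
        p_sum = stats.chi2.sf(chi.sum(), df=p.size)    # sum score: average signal
        p_max = 1.0 - (1.0 - p.min()) ** p.size        # max score: strongest signal
        return p_sum, p_max

    print(gene_scores([0.2, 1e-5, 0.6, 0.03]))
    # -> roughly (3e-05, 4e-05); both scores are driven by the strongest SNP here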

Relevance: 10.00%

Abstract:

White micas in carbonate-rich tectonites and a few other rock types of large thrusts in the Swiss Helvetic fold-and-thrust belt have been analyzed by Ar-40/Ar-39 and Rb/Sr techniques to better constrain the timing of Alpine deformation in this region. Incremental Ar-40/Ar-39 heating experiments on 25 weakly metamorphosed (anchizone to low greenschist) samples yield plateau and staircase spectra. We interpret most of the staircase release spectra to result from variable mixtures of syntectonic (neoformed) and detrital micas. The range in dates obtained within individual spectra depends primarily on the duration of mica nucleation and growth, and on the relative proportions of neoformed and detrital mica. Rb/Sr analyses of 12 samples yield dates of ca. 10-39 Ma (excluding one anomalously young sample). These dates are slightly younger than the Ar-40/Ar-39 total gas dates obtained for the same samples. The Rb/Sr dates were calculated using initial Sr-87/Sr-86 ratios obtained from the carbonate-dominated host rocks, which are higher than normal Mesozoic carbonate values due to exchange with fluids of higher Sr-87/Sr-86 ratios (and lower O-18/O-16 ratios). Model dates calculated using Sr-87/Sr-86 values typical of Mesozoic marine carbonates more closely approximate the Ar-40/Ar-39 total gas dates for most of the samples. The similarities of the Rb/Sr and Ar-40/Ar-39 total gas dates are consistent with limited amounts of detrital mica in the samples. The δ18O values range from 24 to 15‰ (VSMOW) for 2-6 μm micas and from 27 to 16‰ for the carbonate host rocks. The carbonate values are significantly lower than their protolith values due to localized fluid-rock interaction and fluid flow along most thrust surfaces. Although most calcite-mica pairs are not in oxygen isotope equilibrium at temperatures of ca. 200-400 °C, their isotopic fractionations are indicative of either 1) partial exchange between the minerals and a common external fluid, or 2) growth or isotopic exchange of the mica with the carbonate after the carbonate had isotopically exchanged with an external fluid. The geological significance of these results is not easily or uniquely determined, which exemplifies the difficulties inherent in dating very fine-grained micas of highly deformed tectonites in low-grade metamorphic terranes. Two generalizations can be made regarding the dates obtained from the Helvetic thrusts: 1) samples from the two highest thrusts (Mt. Gond and Sublage) have all of their Ar-40/Ar-39 steps above 20 Ma, and 2) most samples from the deepest Helvetic thrusts have steps (often accounting for more than 80% of the Ar-39 release) between 15 and 25 Ma. These dates are consistent with the order of thrusting in the foreland-imbricating system and with increasing proportions of neoformed to detrital mica in the more metamorphosed hinterland and deeply buried portions of the nappe pile. Individual thrusts accommodated the majority of their displacement during their initial incorporation into the foreland-imbricating system, and some thrusts remained active or were reactivated until 15 Ma.
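
A "total gas date", as compared with the Rb/Sr dates above, is conventionally the mean of the individual step ages of an incremental-heating experiment, weighted by the fraction of Ar-39 released in each step. A minimal sketch with invented step data:

    import numpy as np

    def total_gas_date(step_ages_ma, ar39_fractions):
        # Weighted mean of step ages, weighted by Ar-39 released per step.
        return float(np.average(step_ages_ma, weights=ar39_fractions))

    steps = [28.0, 24.5, 22.0, 19.8, 18.5]   # apparent age of each step (Ma), invented
    frac  = [0.05, 0.15, 0.30, 0.35, 0.15]   # fraction of total Ar-39 released per step
    print(total_gas_date(steps, frac))       # -> 21.38 Ma, a mixed apparent age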

Relevance: 10.00%

Abstract:

BACKGROUND: Lipid-lowering therapy is costly but effective at reducing coronary heart disease (CHD) risk. OBJECTIVE: To assess the cost-effectiveness and public health impact of Adult Treatment Panel III (ATP III) guidelines and compare with a range of risk- and age-based alternative strategies. DESIGN: The CHD Policy Model, a Markov-type cost-effectiveness model. DATA SOURCES: National surveys (1999 to 2004), vital statistics (2000), the Framingham Heart Study (1948 to 2000), other published data, and a direct survey of statin costs (2008). TARGET POPULATION: U.S. population age 35 to 85 years. TIME HORIZON: 2010 to 2040. PERSPECTIVE: Health care system. INTERVENTION: Lowering of low-density lipoprotein cholesterol with HMG-CoA reductase inhibitors (statins). OUTCOME MEASURE: Incremental cost-effectiveness. RESULTS OF BASE-CASE ANALYSIS: Full adherence to ATP III primary prevention guidelines would require starting (9.7 million) or intensifying (1.4 million) statin therapy for 11.1 million adults and would prevent 20,000 myocardial infarctions and 10,000 CHD deaths per year at an annual net cost of $3.6 billion ($42,000/QALY) if low-intensity statins cost $2.11 per pill. The ATP III guidelines would be preferred over alternative strategies if society is willing to pay $50,000/QALY and statins cost $1.54 to $2.21 per pill. At higher statin costs, ATP III is not cost-effective; at lower costs, more liberal statin-prescribing strategies would be preferred; and at costs less than $0.10 per pill, treating all persons with low-density lipoprotein cholesterol levels greater than 3.4 mmol/L (>130 mg/dL) would yield net cost savings. RESULTS OF SENSITIVITY ANALYSIS: Results are sensitive to the assumptions that LDL cholesterol becomes less important as a risk factor with increasing age and that little disutility results from taking a pill every day. LIMITATION: Randomized trial evidence for statin effectiveness is not available for all subgroups. CONCLUSION: The ATP III guidelines are relatively cost-effective and would have a large public health impact if implemented fully in the United States. Alternate strategies may be preferred, however, depending on the cost of statins and how much society is willing to pay for better health outcomes. FUNDING: Flight Attendants' Medical Research Institute and the Swanson Family Fund. The Framingham Heart Study and Framingham Offspring Study are conducted and supported by the National Heart, Lung, and Blood Institute.
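
The CHD Policy Model cited above is a Markov-type state-transition model: a cohort is distributed across health states and advanced cycle by cycle through a transition matrix while discounted costs and QALYs accumulate. The toy three-state sketch below shows the mechanics only; every probability, cost, and utility is invented and unrelated to the actual model.

    import numpy as np

    # Toy 3-state Markov cohort model (Well, CHD, Dead); annual cycles.
    # All transition probabilities, costs and utilities are invented.
    P = np.array([[0.96, 0.03, 0.01],    # from Well
                  [0.00, 0.90, 0.10],    # from CHD
                  [0.00, 0.00, 1.00]])   # Dead is absorbing
    cost    = np.array([500.0, 6000.0, 0.0])   # annual cost per state ($)
    utility = np.array([1.00, 0.75, 0.0])      # QALY weight per state

    state = np.array([1.0, 0.0, 0.0])    # cohort starts in Well
    disc, total_cost, total_qaly = 0.03, 0.0, 0.0
    for year in range(30):
        d = 1.0 / (1.0 + disc) ** year   # discount factor for this cycle
        total_cost += d * state @ cost
        total_qaly += d * state @ utility
        state = state @ P                # advance the cohort one cycle
    print(round(total_cost), round(total_qaly, 2))

Running such a model once per strategy and dividing the cost difference by the QALY difference yields incremental cost-effectiveness ratios of the kind quoted in the abstract.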

Relevance: 10.00%

Abstract:

INTRODUCTION: Hip fractures are responsible for excessive mortality, decreasing the 5-year survival rate by about 20%. From an economic perspective, they represent a major source of expense, with direct costs in hospitalization, rehabilitation, and institutionalization. The incidence rate sharply increases after the age of 70, but it can be reduced in women aged 70-80 years by therapeutic interventions. Recent analyses suggest that the most efficient strategy is to implement such interventions in women at the age of 70 years. As several guidelines recommend bone mineral density (BMD) screening of postmenopausal women with clinical risk factors, our objective was to assess the cost-effectiveness of two screening strategies applied to elderly women aged 70 years and older. METHODS: A cost-effectiveness analysis was performed using decision-tree analysis and a Markov model. Two alternative strategies, one measuring BMD of all women, and one measuring BMD only of those having at least one risk factor, were compared with the reference strategy "no screening". Cost-effectiveness ratios were measured as cost per year gained without hip fracture. Most probabilities were based on data observed in the EPIDOS, SEMOF and OFELY cohorts. RESULTS: In this model, which is mostly based on observed data, the strategy "screen all" was more cost-effective than "screen women at risk". For one woman screened at the age of 70 and followed for 10 years, the incremental (additional) cost-effectiveness ratios of these two strategies compared with the reference were 4,235 euros and 8,290 euros, respectively. CONCLUSION: The results of this model, under the assumptions described in the paper, suggest that in women aged 70-80 years, screening all women with dual-energy X-ray absorptiometry (DXA) would be more effective than no screening or screening only women with at least one risk factor. Cost-effectiveness studies based on decision-analysis trees may be useful tools for helping decision makers, and further models based on different assumptions should be performed to improve the level of evidence on cost-effectiveness ratios of the usual screening strategies for osteoporosis.

Relevance: 10.00%

Abstract:

Although extended secondary prophylaxis with low-molecular-weight heparin was recently shown to be more effective than warfarin for cancer-related venous thromboembolism, its cost-effectiveness compared to traditional prophylaxis with warfarin is uncertain. We built a decision analytic model to evaluate the clinical and economic outcomes of a 6-month course of low-molecular-weight heparin or warfarin therapy in 65-year-old patients with cancer-related venous thromboembolism. We used probability estimates and utilities reported in the literature and published cost data. Using a US societal perspective, we compared strategies based on quality-adjusted life-years (QALYs) and lifetime costs. The incremental cost-effectiveness ratio of low-molecular-weight heparin compared with warfarin was 149,865 dollars/QALY. Low-molecular-weight heparin yielded a quality-adjusted life expectancy of 1.097 QALYs at a cost of 15,329 dollars. Overall, 46% (7,108 dollars) of the total costs associated with low-molecular-weight heparin were attributable to pharmacy costs. Although the low-molecular-weight heparin strategy achieved a higher incremental quality-adjusted life expectancy than the warfarin strategy (difference of 0.051 QALYs), this clinical benefit was offset by a substantial cost increment of 7,609 dollars. Cost-effectiveness results were sensitive to variation of the early mortality risks associated with low-molecular-weight heparin and warfarin and to the pharmacy costs for low-molecular-weight heparin. Based on the best available evidence, secondary prophylaxis with low-molecular-weight heparin is more effective than warfarin for cancer-related venous thromboembolism. However, because of the substantial pharmacy costs of extended low-molecular-weight heparin prophylaxis in the US, this treatment is relatively expensive compared with warfarin.
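
The incremental cost-effectiveness ratio is simply the cost difference between the two strategies divided by the QALY difference. A quick check with the rounded figures quoted above (a 7,609-dollar cost increment and a 0.051-QALY gain) reproduces the reported ratio to within rounding of the inputs:

    # ICER = (cost_LMWH - cost_warfarin) / (QALY_LMWH - QALY_warfarin)
    delta_cost = 7609          # incremental cost in dollars (from the abstract)
    delta_qaly = 0.051         # incremental QALYs (from the abstract)
    print(f"{delta_cost / delta_qaly:,.0f} dollars/QALY")
    # -> 149,196 dollars/QALY; the reported 149,865 reflects unrounded inputs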

Relevance: 10.00%

Abstract:

Therapeutic drug monitoring (TDM) aims to optimize treatments by individualizing dosage regimens based on the measurement of blood concentrations. Dosage individualization to maintain concentrations within a target range requires pharmacokinetic and clinical capabilities. Bayesian calculation currently represents the gold standard TDM approach, but it requires computational assistance. In recent decades, computer programs have been developed to assist clinicians in this task. The aim of this survey was to assess and compare computer tools designed to support TDM clinical activities. The literature and the Internet were searched to identify software. All programs were tested on personal computers. Each program was scored against a standardized grid covering pharmacokinetic relevance, user friendliness, computing aspects, interfacing and storage. A weighting factor was applied to each criterion of the grid to account for its relative importance. To assess the robustness of the software, six representative clinical vignettes were processed through each of them. Altogether, 12 software tools were identified, tested and ranked, representing a comprehensive review of the available software. The number of drugs handled by the software varies widely (from two to 180), and eight programs offer users the possibility of adding new drug models based on population pharmacokinetic analyses. Bayesian computation to predict dosage adaptation from blood concentrations (a posteriori adjustment) is performed by ten tools, while nine are also able to propose a priori dosage regimens based only on individual patient covariates such as age, sex and bodyweight. Among those applying Bayesian calculation, MM-USC*PACK© uses the non-parametric approach. The top two programs emerging from this benchmark were MwPharm© and TCIWorks. Most other programs evaluated had good potential while being less sophisticated or less user friendly. Programs vary in complexity and might not fit all healthcare settings. Each software tool must therefore be regarded with respect to the individual needs of hospitals or clinicians. Programs should be easy and fast to use for routine activities, including for non-experienced users. Computer-assisted TDM is attracting growing interest and should further improve, especially in terms of information system interfacing, user friendliness, data storage capability and report generation.
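
Bayesian dosage adaptation, described above as the gold standard, combines a population pharmacokinetic prior with the patient's measured concentrations to estimate individual parameters, then solves for the dose that reaches the target. The sketch below is a deliberately minimal version: a one-compartment steady-state infusion model with a log-normal prior on clearance and a grid-search MAP estimate. All parameter values are invented, and it reflects no particular program's algorithm.

    import numpy as np

    # Minimal Bayesian (MAP) individualization sketch; values are invented.
    dose_rate = 50.0            # mg/h continuous infusion
    c_obs, sigma = 8.0, 1.5     # measured steady-state conc. (mg/L), assay SD
    cl_pop, omega = 5.0, 0.3    # population clearance (L/h), log-scale SD

    cl_grid = np.linspace(1.0, 15.0, 2000)
    c_pred = dose_rate / cl_grid                      # Css = infusion rate / CL
    # Log-normal prior density on CL and Gaussian residual error (constants omitted):
    log_prior = -0.5 * (np.log(cl_grid / cl_pop) / omega) ** 2 - np.log(cl_grid)
    log_lik = -0.5 * ((c_obs - c_pred) / sigma) ** 2
    cl_map = cl_grid[np.argmax(log_prior + log_lik)]  # posterior mode

    target = 10.0                                     # desired Css (mg/L)
    print(f"CL_MAP = {cl_map:.2f} L/h; new rate = {cl_map * target:.1f} mg/h")
    # -> roughly CL_MAP = 5.8 L/h, new rate = 58 mg/h: the prior pulls the
    #    naive estimate (50/8 = 6.25 L/h) toward the population value.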

Relevance: 10.00%

Abstract:

The determination of characteristic cardiac parameters, such as displacement, stress and strain distributions, is essential for an understanding of the mechanics of the heart. The calculation of these parameters has until recently been limited by the use of idealised mathematical representations of biventricular geometries and by the application of simple material laws. On the basis of 20 short-axis heart slices, and considering both linear and nonlinear material behaviour, we have developed an FE model with about 100,000 degrees of freedom. Marching Cubes and Phong's incremental shading technique were used to visualise the three-dimensional geometry. In a quasi-static FE analysis, continuous distributions of regional stress and strain corresponding to the end-systolic state were calculated. Substantial regional variation of the von Mises stress and the total strain energy was observed at all levels of the heart model. The results of the linear elastic model and of the model with a nonlinear material description (Mooney-Rivlin) were compared. While the stress distributions and peak stress values were found to be comparable, the displacement vectors obtained with the nonlinear model were generally larger than in the linear elastic case, indicating the need to include nonlinear effects.
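
The von Mises stress reported per region is the standard scalar equivalent of the full stress tensor, computed from its deviatoric part. A minimal sketch of the formula (not tied to the authors' FE code):

    import numpy as np

    def von_mises(sigma):
        # von Mises equivalent stress from a 3x3 Cauchy stress tensor:
        # sqrt(3/2 * s:s), where s is the deviatoric part of sigma.
        s = sigma - np.trace(sigma) / 3.0 * np.eye(3)
        return np.sqrt(1.5 * np.sum(s * s))

    # Uniaxial stress of 10 kPa gives a von Mises stress of 10 kPa.
    print(von_mises(np.diag([10.0, 0.0, 0.0])))   # -> 10.0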

Relevance: 10.00%

Abstract:

The geodynamic forces acting in the Earth's interior manifest themselves in a variety of ways. Volcanoes are amongst the most impressive examples in this respect, but as with an iceberg, they represent only the tip of a more extensive system hidden underground. This system consists of a source region where melt forms and accumulates, feeder connections through which magma is transported towards the surface, and different reservoirs where it is stored before it eventually erupts to form a volcano. A magma represents a mixture of melt and crystals. The latter can be extracted from the source region, or form anywhere along the path towards their final crystallization place, and they retain information about the overall plumbing system. The host rocks of an intrusion, in contrast, provide information at the emplacement level: they record the effects of the thermal and mechanical forces imposed by the magma. For a better understanding of the system, both parts - magmatic and metamorphic petrology - have to be integrated. I demonstrate in this thesis that information from the two is complementary; it is an iterative process, using constraints from one field to better constrain the other. Reading the history of the host rocks is not always straightforward. This is shown in chapter two, where a model for the formation of the clustered garnets observed in the contact aureole is proposed. Fragments of garnets older than the intrusive rocks are overgrown by garnet crystallizing in response to the reheating caused by the emplacement of the adjacent pluton. The formation of the clusters is therefore not a single event, as generally assumed, but the result of a two-stage process, namely the alteration of the old grains followed by the overgrowth and amalgamation of new garnet rims. This makes an important difference when applying petrological methods such as thermobarometry, geochronology or grain size distributions. The thermal conditions in the aureole are a strong function of the emplacement style of the pluton; it is therefore necessary to understand the pluton before drawing conclusions about its aureole. A study investigating the intrusive rocks by means of field, geochemical, geochronological and structural methods is presented in chapter three. It provides important information about the assembly of the intrusion, but also new insights into the nature of large, homogeneous plutons and the structure of the plumbing system in general. The incremental nature of the emplacement of the Western Adamello tonalite is documented, and the existence of an intermediate reservoir beneath homogeneous plutons is proposed. In chapter four it is demonstrated that information extracted from the host rock provides further constraints on the emplacement process of the intrusion. The temperatures obtained by combining field observations with phase petrology modeling are used together with thermal models to constrain the magmatic activity in the immediately adjacent intrusion. Instead of using the thermal models to check the petrological results, the inverse is done: the model parameters were varied until a match with the aureole temperatures was obtained. It is shown that only a few combinations give a positive match, and that temperature estimates from the aureole can thus constrain the frequency of magmatic pulses in ancient magmatic systems. In the fifth chapter, the anisotropy of magnetic susceptibility of intrusive rocks is compared to 3D tomography.
The obtained signal is a function of the shape and distribution of ferromagnetic grains, and is often used to infer flow directions of magma. It turns out that the signal is dominated by the shape of the magnetic crystals and, where they form tight clusters, also by their distribution. This is in good agreement with the predictions made in the theoretical and experimental literature. In the sixth chapter, arguments for partial melting of host rock carbonates are presented. While at first very surprising, this is to be expected in view of the prior results from the intrusive study and experiments from the literature. Partial melting is documented by compelling microstructures and by geochemical and structural data. The necessary conditions are far from extreme, and this process might be more frequent than previously thought. The carbonate melt is highly mobile and can move along grain boundaries, infiltrating other rocks and ultimately altering the existing mineral assemblage. Finally, a mineralogical curiosity is presented in chapter seven: the mineral assemblage magnesite and calcite in apparent equilibrium. It is well known that these two carbonates are not stable together in the system CaO-MgO-FeO-CO2; indeed, magnesite and calcite should react to dolomite during metamorphism. The explanation presented for this "forbidden" assemblage is that a calcite melt infiltrated the magnesite-bearing rock along grain boundaries and caused the peculiar microstructure. This is supported by isotopic disequilibrium between calcite and magnesite. A further implication of partially molten carbonates is that the host rock drastically loses its strength, so that its physical properties may become comparable to those of the intrusive rocks. This contrasting behavior of the host rock may ease the emplacement of the intrusion. We see that the circle closes, and the iterative process of better constraining the emplacement could start again.

Relevance: 10.00%

Abstract:

Sound localization relies on the analysis of interaural time and intensity differences, as well as attenuation patterns by the outer ear. We investigated the relative contributions of interaural time and intensity difference cues to sound localization by testing 60 subjects: 25 with focal left hemispheric brain damage, 25 with focal right hemispheric brain damage, and 10 healthy controls. Group and single-case behavioural analyses, as well as anatomo-clinical correlations, confirmed that deficits were more frequent and much more severe after right than left hemispheric lesions and for the processing of interaural time than intensity difference cues. For spatial processing based on interaural time difference cues, different error types were evident in the individual data. Deficits in discriminating between neighbouring positions occurred in both hemispaces after focal right hemispheric brain damage, but were restricted to the contralesional hemispace after focal left hemispheric brain damage. Alloacusis (perceptual shifts across the midline) occurred only after focal right hemispheric brain damage and was associated with minor or severe deficits in position discrimination. During spatial processing based on interaural intensity cues, deficits were less severe in the right hemispheric brain damage than in the left hemispheric brain damage group, and no alloacusis occurred. These results, matched to anatomical data, suggest the existence of a binaural sound localization system predominantly based on interaural time difference cues and primarily supported by the right hemisphere. More generally, our data suggest that two distinct mechanisms contribute to: (i) the precise computation of spatial coordinates allowing spatial comparison within the contralateral hemispace for the left hemisphere and the whole space for the right hemisphere; and (ii) the building up of global auditory spatial representations in right temporo-parietal cortices.
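
Interaural time differences of the kind discussed here are classically estimated by cross-correlating the two ear signals and taking the lag of the correlation peak. A minimal sketch, with a synthetic noise burst standing in for a real binaural recording (function name and all values invented):

    import numpy as np

    def estimate_itd(left, right, fs):
        # Lag of the cross-correlation peak; positive means the sound
        # reached the left ear first (the right-ear signal lags).
        corr = np.correlate(right, left, mode="full")
        lag = np.argmax(corr) - (len(left) - 1)
        return lag / fs

    fs = 44_100
    rng = np.random.default_rng(1)
    burst = rng.standard_normal(256)
    left = np.concatenate([burst, np.zeros(20)])
    right = np.concatenate([np.zeros(20), burst])   # right ear lags by 20 samples
    print(estimate_itd(left, right, fs))            # -> ~0.00045 s (20 samples)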

Relevance: 10.00%

Abstract:

PURPOSE: This study investigated the maximal cardiometabolic response while running on a lower-body positive pressure treadmill (antigravity treadmill (AG)), which reduces body weight (BW) and impact. The AG is used in the rehabilitation of injuries but could have potential for high-speed running if the workload can be maximally elevated. METHODS: Fourteen trained runners (nine male; age 27 ± 5 yr; 10-km personal best, 38.1 ± 1.1 min) completed a treadmill incremental test (CON) to measure maximal aerobic capacity and heart rate (V̇O2max and HRmax). They then completed four identical tests (48 h apart, in randomized order) on the AG at BW of 100%, 95%, 90%, and 85% (AG100 to AG85). Stride length and rate were measured at peak velocities (Vpeak). RESULTS: V̇O2max (mL·kg⁻¹·min⁻¹) was similar across all conditions (men: CON = 66.6 (3.0), AG100 = 65.6 (3.8), AG95 = 65.0 (5.4), AG90 = 65.6 (4.5), and AG85 = 65.0 (4.8); women: CON = 63.0 (4.6), AG100 = 61.4 (4.3), AG95 = 60.7 (4.8), AG90 = 61.4 (3.3), and AG85 = 62.8 (3.9)). Similar results were found for HRmax, except for AG85 in men and AG100 and AG90 in women, which were lower than CON. Vpeak (km·h⁻¹) in men was 19.7 (0.9) in CON, which was lower than in every other condition: AG100 = 21.0 (1.9) (P < 0.05), AG95 = 21.4 (1.8) (P < 0.01), AG90 = 22.3 (2.1) (P < 0.01), and AG85 = 22.6 (1.6) (P < 0.001). In women, Vpeak (km·h⁻¹) was similar between CON (17.8 (1.1)) and AG100 (19.3 (1.0)) but higher at AG95 = 19.5 (0.4) (P < 0.05), AG90 = 19.5 (0.8) (P < 0.05), and AG85 = 21.2 (0.9) (P < 0.01). CONCLUSIONS: The AG can be used at maximal exercise intensities at BW of 85% to 95%, reaching faster running speeds than normally feasible. The AG could be used for overspeed running programs at the highest metabolic response levels.

Relevance: 10.00%

Abstract:

In his timely article, Cherniss offers his vision for the future of "Emotional Intelligence" (EI). However, his goal of clarifying the concept by distinguishing definitions from models, and his support for "Emotional and Social Competence" (ESC) models, will, in our opinion, not make the field advance. To be upfront, we agree that emotions are important for effective decision-making, leadership, performance and the like; however, at this time, EI and ESC have not yet demonstrated incremental validity over and above IQ and personality tests in meta-analyses (Harms & Credé, 2009; Van Rooy & Viswesvaran, 2004). If there is a future for EI, we see it in the ability model of Mayer, Salovey and associates (e.g., Mayer, Caruso, & Salovey, 2000), which detractors and supporters agree holds the most promise (Antonakis, Ashkanasy, & Dasborough, 2009; Zeidner, Roberts, & Matthews, 2008). With its use of quasi-objective scoring measures, the ability model grounds EI in existing frameworks of intelligence, thus differentiating itself from ESC models and their self-rated trait inventories. In fact, we do not see the value of ESC models: they overlap too much with current personality models to offer anything new for science and practice (Zeidner et al., 2008). In this commentary we raise three concerns with Cherniss's suggestions for ESC models: (1) there are important conceptual problems in both the definition of ESC and the distinction of ESC from EI; (2) Cherniss's interpretation of neuroscience findings as supporting the constructs of EI and ESC is outdated; and (3) his interpretation of the famous marshmallow experiment as indicating the existence of ESCs is flawed. Building on the promise of ability models, we conclude by providing suggestions to improve research in EI.