989 results for standard on auditing
Abstract:
This paper presents a general equilibrium model in which nominal government debt pays an inflation risk premium. The model predicts that the inflation risk premium will be higher in economies that are exposed to unanticipated inflation through nominal asset holdings. In particular, the inflation risk premium is higher when government debt is primarily nominal, when steady-state inflation is low, and when cash and nominal debt account for a large fraction of consumers' retirement portfolios. These channels do not appear to have been highlighted in previous models or tested empirically. Numerical results suggest that the inflation risk premium is comparable in magnitude to that in standard representative agent models. These findings have implications for the management of government debt, since the inflation risk premium makes it more costly for governments to borrow using nominal rather than indexed debt. Simulations of an extended model with Epstein-Zin preferences suggest that increasing the share of indexed debt would enable governments to permanently lower taxes by an amount that is quantitatively non-trivial.
Abstract:
This paper studies the wasteful effect of bureaucracy on the economy by addressing the link between rent-seeking behavior of government bureaucrats and the public sector wage bill, which is taken to represent the rent component. In particular, public officials are modeled as individuals competing for a larger share of those public funds. The rent-seeking extraction technology in the government administration is modeled as in Murphy et al. (1991) and incorporated in an otherwise standard Real-Business-Cycle (RBC) framework with a public sector. The model is calibrated to German data for the period 1970-2007. The main findings are: (i) due to the existence of a significant public sector wage premium and high public sector employment, a substantial amount of working time is spent rent-seeking, which in turn leads to significant losses in terms of output; (ii) the measures of rent-seeking cost obtained from the model for the major EU countries are highly correlated with indices of bureaucratic inefficiency; (iii) under the optimal fiscal policy regime, steady-state rent-seeking is smaller relative to the exogenous policy case, as the government chooses a higher public wage premium but sets much lower public employment, thus achieving a decrease in rent-seeking.
Abstract:
Matrix effects, which represent an important issue in liquid chromatography coupled to mass spectrometry or tandem mass spectrometry detection, should be closely assessed during method development. For quantitative analysis, the use of a stable isotope-labelled internal standard with physico-chemical properties and ionization behaviour similar to those of the analyte is recommended. In this paper, an example of the choice of a co-eluting deuterated internal standard to compensate for short-term and long-term matrix effects in chiral (R,S)-methadone plasma quantification is reported. The method was fully validated over a concentration range of 5-800 ng/mL for each methadone enantiomer, with satisfactory relative bias (-1.0 to 1.0%), repeatability (0.9-4.9%) and intermediate precision (1.4-12.0%). From the results obtained during validation, a control chart process was established over 52 series of routine analyses, using both the intermediate precision standard deviation and the FDA acceptance criteria. The results of routine quality control samples generally fell within the +/-15% variability around the target value, and mostly within the two-standard-deviation interval, illustrating the long-term stability of the method. The intermediate precision variability estimated during method validation was found to be consistent with the routine use of the method. During this period, 257 trough-concentration and 54 peak-concentration plasma samples from patients undergoing (R,S)-methadone treatment were successfully analysed for routine therapeutic drug monitoring.
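For readers wanting to see the mechanics of such a control-chart check, a minimal sketch follows; the function name and the numerical values are assumptions for illustration, not data from the study.

```python
# Sketch of the control-chart check described above (hypothetical values only):
# a routine QC result is compared against the FDA +/-15% acceptance window around
# the target and against the 2-standard-deviation interval from intermediate precision.

def check_qc(measured, target, sd_intermediate):
    """Return (within_fda_15pct, within_2sd) for one quality control sample."""
    within_fda_15pct = abs(measured - target) <= 0.15 * target
    within_2sd = abs(measured - target) <= 2.0 * sd_intermediate
    return within_fda_15pct, within_2sd

# Illustrative numbers, not data from the study:
print(check_qc(measured=205.0, target=200.0, sd_intermediate=6.0))  # (True, True)
```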
Abstract:
Synthesis report: Introduction: The Internet is an important source of information on mental health. Bipolar disorder is commonly associated with disability, comorbidities, poor insight and poor treatment compliance. The burden of the illness, through its depressive and manic episodes, may lead people (whether or not a diagnosis of bipolar disorder has already been made), as well as their families, to search for information on the Internet. It is therefore important that websites on the subject contain high-quality, evidence-based information. Objective: to evaluate the quality of information available on the Internet about bipolar disorder and to identify quality indicators. Method: two keywords, "bipolar disorder" and "manic depressive illness", were entered into the most frequently used Internet search engines. Websites were evaluated with a standard form designed to rate sites on the basis of authorship (private, university, company, ...), presentation, interactivity, readability and content quality. The "Health On the Net" (HON) quality label and the DISCERN tool were used to verify their effectiveness as quality indicators. Results: of the 80 sites identified, 34 were included. Based on the outcome measures, the content quality of the sites was found to be good. The content quality of websites dealing with bipolar disorder was significantly explained by readability, accountability and interactivity, as well as by a global score. Conclusions: overall, the content of the websites on bipolar disorder examined in this study is of good quality.
Abstract:
The aim of this work is to evaluate the capabilities and limitations of chemometric methods and other mathematical treatments applied to spectroscopic data, and more specifically to paint samples. The uniqueness of the spectroscopic data comes from the fact that they are multivariate (a few thousand variables) and highly correlated. Statistical methods are used to study and discriminate samples. A collection of 34 red paint samples was measured by infrared and Raman spectroscopy. Data pretreatment and variable selection demonstrated that the use of Standard Normal Variate (SNV), together with removal of the noisy variables by selecting the regions from 650 to 1830 cm−1 and 2730-3600 cm−1, provided the optimal results for infrared analysis. Principal component analysis (PCA) and hierarchical cluster analysis (HCA) were then used as exploratory techniques to provide evidence of structure in the data, to cluster samples, and to detect outliers. With the FTIR spectra, the principal components (PCs) correspond to binder types and the presence/absence of calcium carbonate; 83% of the total variance is explained by the first four PCs. As for the Raman spectra, we observe six different clusters corresponding to the different pigment compositions when plotting the first two PCs, which account for 37% and 20% of the total variance, respectively. In conclusion, the use of chemometrics for the forensic analysis of paints provides a valuable tool for objective decision-making, a reduction of possible classification errors, and better efficiency, yielding robust results with time-saving data treatments.
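A minimal sketch of the kind of pipeline described above (SNV pretreatment, selection of the 650-1830 and 2730-3600 cm−1 regions, then PCA) is given below; it is illustrative scaffolding under assumed array names and shapes, not the authors' code.

```python
# Illustrative sketch (not the authors' code) of the pipeline described above:
# Standard Normal Variate (SNV) pretreatment, selection of the 650-1830 and
# 2730-3600 cm^-1 regions, then PCA. Array names, shapes and values are assumed.

import numpy as np
from sklearn.decomposition import PCA

def snv(spectra):
    """Row-wise SNV: centre each spectrum and scale it by its own standard deviation."""
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std

spectra = np.random.rand(34, 3600)          # placeholder for the 34 paint IR spectra
wavenumbers = np.linspace(400, 4000, 3600)  # matching wavenumber axis in cm^-1

keep = ((wavenumbers >= 650) & (wavenumbers <= 1830)) | \
       ((wavenumbers >= 2730) & (wavenumbers <= 3600))
X = snv(spectra)[:, keep]

pca = PCA(n_components=4)
scores = pca.fit_transform(X)
print(pca.explained_variance_ratio_)        # variance explained by the first four PCs
```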
Abstract:
Background and Aims: Recently, single nucleotide polymorphisms (SNPs) in IL28B were shown to correlate with response to pegylated interferon-alfa (IFN) and ribavirin therapy of chronic HCV infection. However, the cause of the SNPs' effect on therapy response and its applicability to direct anti-viral (DAV) treatment are not clear. Here, we analyze early HCV kinetics as a function of IL28B SNPs to determine their specific effect on viral dynamics. Methods: IL28B SNPs rs8099917, rs12979860 and rs12980275 were genotyped in 252 chronically HCV-infected, treatment-naïve Caucasian patients (67% HCV genotype 1, 28% genotype 2-3) receiving peginterferon alfa-2a (180 µg/qw) plus ribavirin (1000-1200 mg/qd) in the DITTO study. HCV-RNA was measured (LD = 50 IU/ml) frequently during the first 28 days. Results: RVR was achieved in 33% of genotype 1 patients with genotype CC at rs12979860 versus 12-16% for genotypes TT and CT (P < 0.03). A significant (P < 0.001) difference in viral decline was observed already at day 1 (see figure: HCV kinetics as a function of IL28B SNP). The first-phase decline was significantly (P < 0.001) larger in patients with genotype CC (2.0 log) than for TT and CT genotypes (0.6 and 0.8), indicating IFN anti-viral effectiveness in blocking virion production of 99% versus 75-84%. There was no significant association between second-phase slope and rs12979860 genotype in patients with a first-phase decline larger than 1 log. The same trend (not shown) was observed for HCV genotype 2-3 patients, whose different SNP genotype distribution may indicate differential selection pressure as a function of HCV genotype. Similar results were observed for SNPs rs8099917 and rs12980275, with strong linkage disequilibrium among the 3 loci allowing the composite haplotype best associated with IFN effectiveness to be defined. Conclusions: IFN effectiveness in blocking virion production/release is strongly affected by IL28B SNPs, but other viral dynamic properties, such as the infected cell loss rate, are not. Thus, IFN-based therapy, as standard-of-care or in combination with DAV, should consider IL28B SNPs for prediction and personalized treatment, while response to pure DAV treatment may be less affected by IL28B SNPs. Additional analyses are ongoing to pinpoint the SNP effect on IFN anti-viral effectiveness.
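As a worked reading of the first-phase numbers, under the standard biphasic viral-kinetics interpretation (an assumption of this note, though consistent with the values quoted above), a first-phase decline of \(\Delta\) log10 IU/ml corresponds to an effectiveness of IFN in blocking virion production of approximately
\[
\varepsilon \approx 1 - 10^{-\Delta},
\]
so \(\Delta = 2.0\) gives \(\varepsilon \approx 0.99\) (99%), while \(\Delta = 0.6\) to \(0.8\) gives \(\varepsilon \approx 0.75\) to \(0.84\) (75-84%).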
Abstract:
Game theory describes and analyzes strategic interaction. A distinction is usually drawn between static games, which are strategic situations in which the players choose only once and simultaneously, and dynamic games, which are strategic situations involving sequential choices. In addition, dynamic games can be further classified according to perfect and imperfect information. Indeed, a dynamic game is said to exhibit perfect information whenever, at any point of the game, every player has full informational access to all choices that have been made so far. In the case of imperfect information, however, some players are not fully informed about some choices. Game-theoretic analysis proceeds in two steps. Firstly, games are modelled by so-called form structures which extract and formalize the significant parts of the underlying strategic interaction. The basic and most commonly used models of games are the normal form, which rather sparsely describes a game merely in terms of the players' strategy sets and utilities, and the extensive form, which models a game in a more detailed way as a tree. In fact, it is standard to formalize static games with the normal form and dynamic games with the extensive form. Secondly, solution concepts are developed to solve models of games in the sense of identifying the choices that should be taken by rational players. Indeed, the ultimate objective of the classical approach to game theory, which is of normative character, is the development of a solution concept that is capable of identifying a unique choice for every player in an arbitrary game. However, given the large variety of games, it is not at all certain whether it is possible to devise a solution concept with such universal capability. Alternatively, interactive epistemology provides an epistemic approach to game theory of descriptive character. This rather recent discipline analyzes the relation between knowledge, belief and choice of game-playing agents in an epistemic framework. The description of the players' choices in a given game relative to various epistemic assumptions constitutes the fundamental problem addressed by an epistemic approach to game theory. In a general sense, the objective of interactive epistemology consists in characterizing existing game-theoretic solution concepts in terms of epistemic assumptions as well as in proposing novel solution concepts by studying the game-theoretic implications of refined or new epistemic hypotheses. Intuitively, an epistemic model of a game can be interpreted as representing the reasoning of the players. Indeed, before making a decision in a game, the players reason about the game and their respective opponents, given their knowledge and beliefs. Precisely these epistemic mental states on which players base their decisions are explicitly expressible in an epistemic framework. In this PhD thesis, we consider an epistemic approach to game theory from a foundational point of view. In Chapter 1, basic game-theoretic notions as well as Aumann's epistemic framework for games are expounded and illustrated. Also, Aumann's sufficient conditions for backward induction are presented and his conceptual views discussed. In Chapter 2, Aumann's interactive epistemology is conceptually analyzed. In Chapter 3, which is based on joint work with Conrad Heilmann, a three-stage account for dynamic games is introduced and a type-based epistemic model is extended with a notion of agent connectedness. Then, sufficient conditions for backward induction are derived. In Chapter 4, which is based on joint work with Jérémie Cabessa, a topological approach to interactive epistemology is initiated. In particular, the epistemic-topological operator limit knowledge is defined and some implications for games are considered. In Chapter 5, which is based on joint work with Jérémie Cabessa and Andrés Perea, Aumann's impossibility theorem on agreeing to disagree is revisited and weakened in the sense that possible contexts are provided in which agents can indeed agree to disagree.
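As a purely generic illustration of the backward-induction procedure referred to above (not the epistemic characterization developed in the thesis), the following short sketch solves a finite perfect-information game tree under an assumed node encoding.

```python
# Generic sketch of backward induction on a finite perfect-information game tree
# (illustrative only; not the epistemic characterization developed in the thesis).
# A node is either a terminal payoff dict {player: payoff} or a pair (player, moves).

def backward_induction(node):
    """Return (payoff dict reached under optimal play, optimal action here or None)."""
    if isinstance(node, dict):            # terminal node
        return node, None
    player, moves = node
    best_action, best_payoffs = None, None
    for action, subtree in moves.items():
        payoffs, _ = backward_induction(subtree)
        if best_payoffs is None or payoffs[player] > best_payoffs[player]:
            best_action, best_payoffs = action, payoffs
    return best_payoffs, best_action

# Two-stage example: player 1 moves first, then player 2.
game = (1, {"L": (2, {"l": {1: 2, 2: 1}, "r": {1: 0, 2: 0}}),
            "R": (2, {"l": {1: 3, 2: 0}, "r": {1: 1, 2: 2}})})
print(backward_induction(game))           # ({1: 2, 2: 1}, 'L')
```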
Abstract:
Social scientists often estimate models from correlational data, where the independent variable has not been exogenously manipulated; they also make implicit or explicit causal claims based on these models. When can these claims be made? We answer this question by first discussing design and estimation conditions under which model estimates can be interpreted, using the randomized experiment as the gold standard. We show how endogeneity (which includes omitted variables, omitted selection, simultaneity, common methods bias, and measurement error) renders estimates causally uninterpretable. Second, we present methods that allow researchers to test causal claims in situations where randomization is not possible or when causal interpretation is confounded, including fixed-effects panel, sample selection, instrumental variable, regression discontinuity, and difference-in-differences models. Third, we take stock of the methodological rigor with which causal claims are being made in a social sciences discipline by reviewing a representative sample of 110 articles on leadership published in the previous 10 years in top-tier journals. Our key finding is that researchers fail to address at least 66% and up to 90% of design and estimation conditions that make causal claims invalid. We conclude by offering 10 suggestions on how to improve non-experimental research.
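As an illustration of one of the designs listed above, here is a minimal difference-in-differences sketch using a two-way fixed-effects regression; the toy data frame and variable names are assumptions for demonstration only.

```python
# Minimal difference-in-differences sketch via a two-way fixed-effects regression,
# illustrating one of the designs listed above. The toy data and variable names are
# assumptions for demonstration only.

import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "unit":    [1, 1, 2, 2, 3, 3, 4, 4],
    "period":  [0, 1, 0, 1, 0, 1, 0, 1],
    "treated": [1, 1, 1, 1, 0, 0, 0, 0],   # units 1-2 belong to the treated group
    "y":       [2.0, 4.5, 1.8, 4.0, 2.1, 2.6, 1.9, 2.3],
})
df["treated_post"] = df["treated"] * df["period"]

did = smf.ols("y ~ treated_post + C(unit) + C(period)", data=df).fit()
print(did.params["treated_post"])          # the difference-in-differences estimate
```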
Abstract:
OBJECTIVE: To reach a consensus on the clinical use of ambulatory blood pressure monitoring (ABPM). METHODS: A task force on the clinical use of ABPM wrote this overview in preparation for the Seventh International Consensus Conference (23-25 September 1999, Leuven, Belgium). This article was amended to account for opinions aired at the conference and to reflect the common ground reached in the discussions. POINTS OF CONSENSUS: The Riva Rocci/Korotkoff technique, although it is prone to error, is easy and cheap to perform and remains worldwide the standard procedure for measuring blood pressure. ABPM should be performed only with properly validated devices as an accessory to conventional measurement of blood pressure. Ambulatory recording of blood pressure requires considerable investment in equipment and training and its use for screening purposes cannot be recommended. ABPM is most useful for identifying patients with white-coat hypertension (WCH), also known as isolated clinic hypertension, which is arbitrarily defined as a clinic blood pressure of more than 140 mmHg systolic or 90 mmHg diastolic in a patient with daytime ambulatory blood pressure below 135 mmHg systolic and 85 mmHg diastolic. Some experts consider a daytime blood pressure below 130 mmHg systolic and 80 mmHg diastolic optimal. Whether WCH predisposes subjects to sustained hypertension remains debated. However, outcome is better correlated to the ambulatory blood pressure than it is to the conventional blood pressure. Antihypertensive drugs lower the clinic blood pressure in patients with WCH but not the ambulatory blood pressure, and also do not improve prognosis. Nevertheless, WCH should not be left unattended. If no previous cardiovascular complications are present, treatment could be limited to follow-up and hygienic measures, which should also account for risk factors other than hypertension. ABPM is superior to conventional measurement of blood pressure not only for selecting patients for antihypertensive drug treatment but also for assessing the effects both of non-pharmacological and of pharmacological therapy. The ambulatory blood pressure should be reduced by treatment to below the thresholds applied for diagnosing sustained hypertension. ABPM makes the diagnosis and treatment of nocturnal hypertension possible and is especially indicated for patients with borderline hypertension, the elderly, pregnant women, patients with treatment-resistant hypertension and patients with symptoms suggestive of hypotension. In centres with sufficient financial resources, ABPM could become part of the routine assessment of patients with clinic hypertension. For patients with WCH, it should be repeated at annual or 6-monthly intervals. Variation of blood pressure throughout the day can be monitored only by ABPM, but several advantages of the latter technique can also be obtained by self-measurement of blood pressure, a less expensive method that is probably better suited to primary practice and use in developing countries. CONCLUSIONS: ABPM or equivalent methods for tracing the white-coat effect should become part of the routine diagnostic and therapeutic procedures applied to treated and untreated patients with elevated clinic blood pressures. Results of long-term outcome trials should better establish the advantage of further integrating ABPM as an accessory to conventional sphygmomanometry into the routine care of hypertensive patients and should provide more definite information on the long-term cost-effectiveness. 
Because such trials are not likely to be funded by the pharmaceutical industry, governments and health insurance companies should take responsibility in this regard.
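A minimal sketch of the white-coat hypertension rule quoted above follows; the thresholds are those stated in the consensus text, while the function and example readings are illustrative.

```python
# Sketch of the white-coat hypertension (WCH) rule quoted above: clinic blood pressure
# above 140/90 mmHg with daytime ambulatory blood pressure below 135/85 mmHg.
# Function name and example readings are illustrative.

def is_white_coat_hypertension(clinic_sys, clinic_dia, day_amb_sys, day_amb_dia):
    clinic_elevated = clinic_sys > 140 or clinic_dia > 90
    ambulatory_normal = day_amb_sys < 135 and day_amb_dia < 85
    return clinic_elevated and ambulatory_normal

print(is_white_coat_hypertension(152, 94, 128, 80))  # True: WCH pattern
print(is_white_coat_hypertension(152, 94, 142, 88))  # False: sustained hypertension
```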
Application of standard and refined heat balance integral methods to one-dimensional Stefan problems
Abstract:
The work in this paper concerns the study of conventional and refined heat balance integral methods for a number of phase change problems. These include standard test problems, both with one and two phase changes, which have exact solutions to enable us to test the accuracy of the approximate solutions. We also consider situations where no analytical solution is available and compare these to numerical solutions. It is popular to use a quadratic profile as an approximation of the temperature, but we show that a cubic profile, seldom considered in the literature, is far more accurate in most circumstances. In addition, the refined integral method can give greater improvement still and we develop a variation on this method which turns out to be optimal in some cases. We assess which integral method is better for various problems, showing that it is largely dependent on the specified boundary conditions.
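For orientation, a brief sketch of the heat balance integral method in its usual non-dimensional one-phase form (the notation is assumed here; the paper's own formulation and refinements may differ). With temperature \(\theta(x,t)\) and melt front \(\delta(t)\),
\[
\frac{\partial \theta}{\partial t}=\frac{\partial^{2}\theta}{\partial x^{2}},\quad 0<x<\delta(t),\qquad \theta(0,t)=1,\quad \theta(\delta,t)=0,\quad \beta\,\frac{d\delta}{dt}=-\left.\frac{\partial\theta}{\partial x}\right|_{x=\delta}.
\]
Integrating the heat equation over \(0\le x\le\delta(t)\) and applying Leibniz's rule (with \(\theta(\delta,t)=0\)) gives the heat balance integral
\[
\frac{d}{dt}\int_{0}^{\delta(t)}\theta\,dx=\left.\frac{\partial\theta}{\partial x}\right|_{x=\delta}-\left.\frac{\partial\theta}{\partial x}\right|_{x=0}.
\]
An approximating profile such as \(\theta \approx a(t)\,(1-x/\delta)+b(t)\,(1-x/\delta)^{n}\), with \(n=2\) (quadratic) or \(n=3\) (cubic), is then substituted; the boundary and Stefan conditions fix \(a\) and \(b\), and the integral relation reduces to an ordinary differential equation for \(\delta(t)\).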
Abstract:
Manual dexterity, a prerogative of primates, is under the control of the corticospinal (CS) tract. Because 90-95% of CS axons decussate, it is assumed that this control is exerted essentially on the contralateral hand. Consistent with this, unilateral lesion of the hand representation in the motor cortex is followed by a complete loss of dexterity of the contralesional hand. During the months following the lesion, spontaneous recovery of manual dexterity takes place to a highly variable extent across subjects, although it remains largely incomplete. In the present study, we tested the hypothesis that, after a significant postlesion period, manual performance in the ipsilesional hand is correlated with the extent of functional recovery in the contralesional hand. To this end, ten adult macaque monkeys were subjected to permanent unilateral motor cortex lesion. The monkeys' manual performance was assessed for each hand during several months postlesion, using our standard behavioral test (modified Brinkman board task), which provides a quantitative measure of reach and grasp ability. The ipsilesional hand's performance was found to be significantly enhanced over the long term (100-300 days postlesion) in six of ten monkeys, these being the six that exhibited the best, though incomplete, recovery of the contralesional hand. There was a statistically significant correlation (r = 0.932; P < 0.001) between performance in the ipsilesional hand after a significant postlesion period and the extent of recovery of the contralesional hand. This observation is interpreted in terms of different possible mechanisms of recovery, dependent on the recruitment of motor areas in the lesioned and/or intact hemispheres.
Abstract:
Introduction: In my thesis I argue that economic policy is all about economics and politics. Consequently, analysing and understanding economic policy ideally has at least two parts. The economics part is centered on the expected impact of a specific policy on the real economy, in terms of both efficiency and equity. The insights of this part indicate the direction in which the fine-tuning of economic policies should go. However, the fine-tuning of economic policies will most likely be subject to political constraints. That is why, in the politics part, a much better understanding can be gained by taking into account how the incentives of politicians and special interest groups, as well as the role played by different institutional features, affect the formation of economic policies. The first part and chapter of my thesis concentrates on the efficiency-related impact of economic policies: how does corporate income taxation in general, and corporate income tax progressivity in particular, affect the creation of new firms? Reduced progressivity and flat-rate taxes are in vogue. By 2009, 22 countries were operating flat-rate income tax systems, as were 7 US states and 14 Swiss cantons (for corporate income only). Tax reform proposals in the spirit of the "flat tax" model typically aim to reduce three parameters: the average tax burden, the progressivity of the tax schedule, and the complexity of the tax code. In joint work, Marius Brülhart and I explore the implications of changes in these three parameters for entrepreneurial activity, measured by counts of firm births in a panel of Swiss municipalities. Our results show that lower average tax rates and reduced complexity of the tax code promote firm births. Controlling for these effects, reduced progressivity inhibits firm births. Our reading of these results is that tax progressivity has an insurance effect that facilitates entrepreneurial risk taking. The positive effects of lower tax levels and reduced complexity are estimated to be significantly stronger than the negative effect of reduced progressivity. To the extent that firm births reflect desirable entrepreneurial dynamism, it is not the flattening of tax schedules that is key to successful tax reforms, but the lowering of average tax burdens and the simplification of tax codes. Flatness per se is of secondary importance and even appears to be detrimental to firm births. The second part of my thesis, which corresponds to the second and third chapters, concentrates on how economic policies are formed. By the nature of the analysis, these two chapters draw on a broader literature than the first chapter. Both economists and political scientists have done extensive research on how economic policies are formed. In doing so, researchers in both disciplines have recognised the importance of special interest groups trying to influence policy-making through various channels. In general, economists base their analysis on a formal and microeconomically founded approach, while abstracting from institutional details. In contrast, political scientists' frameworks are generally richer in terms of institutional features but lack the theoretical rigour of economists' approaches. I start from the economist's point of view. However, I try to borrow as much as possible from the findings of political science to gain a better understanding of how economic policies are formed in reality.
In the second chapter, I take a theoretical approach and focus on the institutional policy framework to explore how interactions between different political institutions affect the outcome of trade policy in the presence of lobbying by special interest groups. Standard political economy theory treats the government as a single institutional actor which sets tariffs by trading off social welfare against contributions from special interest groups seeking industry-specific protection from imports. However, these models lack important (institutional) features of reality. That is why, in my model, I split the government into a legislative and an executive branch, both of which can be lobbied by special interest groups. Furthermore, the legislature has the option to delegate its trade policy authority to the executive. I allow the executive to compensate the legislature in exchange for delegation. Despite ample anecdotal evidence, bargaining over delegation of trade policy authority has not yet been formally modelled in the literature. I show that delegation has an impact on policy formation in that it leads to lower equilibrium tariffs compared to a standard model without delegation. I also show that delegation will only take place if the lobby is not strong enough to prevent it. Furthermore, the option to delegate increases the bargaining power of the legislature at the expense of the lobbies. Therefore, the findings of this model can shed light on why the U.S. Congress often practices delegation to the executive. In the final chapter of my thesis, my coauthor, Antonio Fidalgo, and I take a narrower approach and focus on policy-making at the level of individual politicians, exploring how connections to private firms and networks within parliament affect individual politicians' decision-making. Theories in the spirit of the model of the second chapter show how campaign contributions from lobbies to politicians can influence economic policies. There exists an abundant empirical literature that analyses ties between firms and politicians based on campaign contributions. However, the evidence on the impact of campaign contributions is mixed, at best. In our paper, we analyse an alternative channel of influence in the form of personal connections between politicians and firms through board membership. We identify a direct effect of board membership on individual politicians' voting behaviour and an indirect leverage effect when politicians with board connections influence non-connected peers. We assess the importance of these two effects using a vote in the Swiss parliament on a government bailout of the national airline, Swissair, in 2001, which serves as a natural experiment. We find that both the direct effect of connections to firms and the indirect leverage effect had a strong and positive impact on the probability that a politician supported the government bailout.
Abstract:
Indirect calorimetry based on respiratory exchange measurement has been successfully used from the beginning of the century to obtain an estimate of heat production (energy expenditure) in human subjects and animals. The errors inherent in this classical technique can stem from various sources: 1) the model of calculation and its assumptions, 2) the calorimetric factors used, 3) technical factors and 4) human factors. The physiological and biochemical factors influencing the interpretation of calorimetric data include a change in the size of the bicarbonate and urea pools and the accumulation or loss (via breath, urine or sweat) of intermediary metabolites (gluconeogenesis, ketogenesis). More recently, respiratory gas exchange data have been used to estimate substrate utilization rates in various physiological and metabolic situations (fasting, post-prandial state, etc.). It should be recalled that indirect calorimetry provides an index of overall substrate disappearance rates. This is incorrectly assumed to be equivalent to substrate "oxidation" rates. Unfortunately, there is no adequate gold standard to validate whole body substrate "oxidation" rates, and this contrasts with the "validation" of heat production by indirect calorimetry through the use of direct calorimetry under strict thermal equilibrium conditions. Tracer techniques using stable (or radioactive) isotopes represent an independent way of assessing substrate utilization rates. When carbohydrate metabolism is measured with both techniques, indirect calorimetry generally provides glucose "oxidation" rates consistent with isotopic tracers, but only when certain metabolic processes (such as gluconeogenesis and lipogenesis) are minimal and/or when the respiratory quotients are not at the extremes of the physiological range. However, it is believed that the tracer techniques underestimate true glucose "oxidation" rates because they fail to account for glycogenolysis in the tissues storing glucose, since this glucose escapes the systemic circulation. A major advantage of isotopic techniques is that they are able to estimate (given certain assumptions) various metabolic processes (such as gluconeogenesis) in a noninvasive way. Furthermore, when a fourth substrate (such as ethanol) is administered in addition to the 3 macronutrients, isotopic quantification of substrate "oxidation" allows one to eliminate the inherent assumptions made by indirect calorimetry. In conclusion, isotopic tracer techniques and indirect calorimetry should be considered complementary, in particular since the tracer techniques require the measurement of carbon dioxide production obtained by indirect calorimetry. However, it should be kept in mind that the assessment of substrate oxidation by indirect calorimetry may involve large errors, in particular over short periods of time. By indirect calorimetry, energy expenditure (heat production) is calculated with substantially less error than substrate oxidation rates.
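As an illustration of the classical calculation scheme discussed above, a short sketch follows using the widely cited abbreviated Weir equation and Frayn-type stoichiometric equations; these particular formulas and the numerical example are assumptions of this sketch, not taken from the paper.

```python
# Sketch of the classical calculation scheme discussed above, using the widely cited
# abbreviated Weir equation and Frayn-type stoichiometric equations. These specific
# formulas and the numerical example are assumptions of this sketch, not taken from
# the paper. vo2 and vco2 are in litres/min (STPD); n is urinary nitrogen in g/min.

def energy_expenditure_kcal_per_min(vo2, vco2, n=0.0):
    """Abbreviated Weir equation for heat production."""
    return 3.941 * vo2 + 1.106 * vco2 - 2.17 * n

def substrate_disappearance_g_per_min(vo2, vco2, n):
    """Frayn-type estimates of net carbohydrate and fat 'oxidation' rates."""
    cho = 4.55 * vco2 - 3.21 * vo2 - 2.87 * n
    fat = 1.67 * vo2 - 1.67 * vco2 - 1.92 * n
    return cho, fat

# Resting example with illustrative values only:
print(energy_expenditure_kcal_per_min(0.25, 0.20, 0.008))
print(substrate_disappearance_g_per_min(0.25, 0.20, 0.008))
```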
Abstract:
This paper examines the proper use of dimensions and curve-fitting practices, elaborating on Georgescu-Roegen's economic methodology in relation to the three main concerns of his epistemological orientation. Section 2 introduces two critical issues concerning dimensions and curve-fitting practices in economics in view of Georgescu-Roegen's economic methodology. Section 3 deals with the logarithmic function (ln z) and shows that z must be a dimensionless pure number, otherwise the expression is nonsensical. Several unfortunate examples of this analytical error are presented, including macroeconomic data analyses conducted by a representative figure in this field. Section 4 deals with the standard Cobb-Douglas function. It is shown that no operational meaning can be obtained for capital or labor within the Cobb-Douglas function. Section 4 also deals with economists' "curve-fitting fetishism". Section 5 concludes the paper with several epistemological issues concerning dimensions and curve-fitting practices in economics.
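As a compact restatement (sketch) of the dimensional point in Sections 3 and 4: writing a dimensional quantity as \(z=\hat z\,[u]\), with numerical value \(\hat z\) and unit \([u]\),
\[
\ln z=\ln\bigl(\hat z\,[u]\bigr)=\ln \hat z+\ln[u],
\]
and \(\ln[u]\) is undefined, so \(\ln z\) is meaningful only when \(z\) is a pure number (for instance a ratio \(z/z_{0}\)); equivalently, the series \(\ln z=\sum_{k\ge 1}(-1)^{k+1}(z-1)^{k}/k\) would add terms of different dimensions. Likewise, in the Cobb-Douglas form \(Y=AK^{\alpha}L^{\beta}\), the factor \(A\) must carry the residual dimension \([Y]\,[K]^{-\alpha}[L]^{-\beta}\), which changes with every fitted pair \((\alpha,\beta)\).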
Abstract:
Stable isotope and Ar-40/Ar-39 measurements were made on samples associated with a major tectonic discontinuity in the Helvetic Alps, the basal thrust of the Diablerets nappe (external zone of the Alpine Belt), in order to determine both the importance of fluids in this thrust zone and the timing of thrusting. A systematic decrease in the delta(18)O values (up to 6 parts per thousand) of calcite, quartz, and white mica exists within a 10- to 70-m-wide zone over a distance of 37 km along the thrust, and the decrease becomes more pronounced toward the root of the nappe. A similar decrease in the delta(13)C values of calcite is observed only in the deepest sections (up to 3 parts per thousand). The delta D-SMOW (SMOW = standard mean ocean water) values of white mica are -54 parts per thousand +/- 8 parts per thousand (n = 22) and are independent of the distance from the thrust. These variations are interpreted to reflect syntectonic solution reprecipitation during fluid passage along the thrust. The calculated delta(18)O and delta D values (versus SMOW) for the fluid in equilibrium with the analyzed minerals are 12 parts per thousand to 16 parts per thousand and -30 parts per thousand to +5 parts per thousand, respectively, for assumed temperatures of 250 to 450 degrees C. The isotopic and structural data are consistent with fluids derived from the deep-seated roots of the Helvetic nappes, where large volumes of Mesozoic sediments were metamorphosed to the amphibolite facies. It is suggested that connate and metamorphic waters, overpressured by rapid tectonic burial in a subductive system, escaped by upward infiltration along moderately dipping pathways until they reached the main shear zone at the base of the moving pile, where they were channeled toward the surface. This model also explains the mechanism by which large amounts of fluid were removed from the Mesozoic sediments during Alpine metamorphism. White mica Ar-40/Ar-39 ages vary from 27 Ma far from the Diablerets thrust to 15 Ma along the thrust. An older component is observed in micas far from the thrust, interpreted as a detrital signature, and indicates that regional metamorphic temperatures were less than about 350 degrees C. The plateau and near-plateau ages nearest the thrust are consistent with either neocrystallization of white mica or argon loss by recrystallization during thrusting, which may have been enhanced in the zones of highest fluid flow. The 15 Ma Ar-40/Ar-39 plateau age measured on white mica sampled exactly on the thrust surface dates the end of both fluid flow and tectonic transport.
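For reference, the conventional delta notation behind the values quoted above (the standard definition, not specific to this study) is
\[
\delta=\left(\frac{R_{\mathrm{sample}}}{R_{\mathrm{standard}}}-1\right)\times 1000\ \text{(in parts per thousand)},
\]
where \(R\) is the relevant isotope ratio (\({}^{18}\mathrm{O}/{}^{16}\mathrm{O}\), \({}^{13}\mathrm{C}/{}^{12}\mathrm{C}\) or \(\mathrm{D}/\mathrm{H}\)); for the oxygen and hydrogen values above, the reference standard is SMOW.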