918 results for Optimal Sampling Time
Abstract:
The efficient use of geothermal systems, the sequestration of CO2 to mitigate climate change, and the prevention of seawater intrusion in coastal aquifers are only some examples that demonstrate the need for novel technologies to monitor subsurface processes from the surface. A main challenge is to ensure optimal performance of such technologies at different temporal and spatial scales. Plane-wave electromagnetic (EM) methods are sensitive to subsurface electrical conductivity and consequently to fluid conductivity, fracture connectivity, temperature, and rock mineralogy.
These methods are governed by the same equations over a large range of frequencies, thus allowing processes to be studied in an analogous manner on scales ranging from a few meters below the surface down to several hundred kilometers in depth. Unfortunately, they suffer from a significant resolution loss with depth due to the diffusive nature of the electromagnetic fields. Therefore, subsurface models estimated with these methods should incorporate a priori information to better constrain the models and provide appropriate measures of model uncertainty. During my thesis, I have developed approaches to improve the static and dynamic characterization of the subsurface with plane-wave EM methods. In the first part of this thesis, I present a two-dimensional deterministic approach to perform time-lapse inversion of plane-wave EM data. The strategy is based on the incorporation of prior information into the inversion algorithm regarding the expected temporal changes in electrical conductivity. This is done by incorporating a flexible stochastic regularization and by imposing constraints on the expected ranges of the changes through Lagrange multipliers. I use non-l2 norms to penalize the model update in order to obtain sharp transitions between regions that experience temporal changes and regions that do not. I also incorporate a time-lapse differencing strategy to remove systematic errors in the time-lapse inversion. This work demonstrates improvements in the characterization of temporal changes with respect to the classical approach of performing separate inversions and computing differences between the models. In the second part of this thesis, I adopt a Bayesian framework and use Markov chain Monte Carlo (MCMC) simulations to quantify model parameter uncertainty in plane-wave EM inversion. For this purpose, I present a two-dimensional pixel-based probabilistic inversion strategy for separate and joint inversions of plane-wave EM and electrical resistivity tomography (ERT) data. I compare the uncertainties of the model parameters when considering different types of prior information on the model structure and different likelihood functions to describe the data errors. The results indicate that model regularization is necessary when dealing with a large number of model parameters because it helps to accelerate the convergence of the chains and leads to more realistic models. However, these constraints also lead to smaller uncertainty estimates, which imply posterior distributions that do not include the true underlying model in regions where the method has limited sensitivity. This situation can be improved by combining plane-wave EM methods with complementary geophysical methods such as ERT. In addition, I show that an appropriate regularization weight and the standard deviation of the data errors can be retrieved by the MCMC inversion. Finally, I evaluate the possibility of characterizing the three-dimensional distribution of an injected water plume by performing three-dimensional time-lapse MCMC inversion of plane-wave EM data. Since MCMC inversion involves a significant computational burden in high-dimensional parameter spaces, I propose a model reduction strategy in which the coefficients of a Legendre moment decomposition of the injected water plume and its location are estimated. For this purpose, a base resistivity model is needed, which is obtained prior to the time-lapse experiment. A synthetic test shows that the methodology works well when the base resistivity model is correctly characterized.
The methodology is also applied to a saline and acid injection experiment performed in a geothermal system in Australia, and compared to a three-dimensional time-lapse inversion performed within a deterministic framework. The MCMC inversion better constrains the water plume due to the larger amount of prior information that is included in the algorithm. However, the conductivity changes needed to explain the time-lapse data are much larger than what is physically possible based on our present-day understanding. This issue may be related to the limited quality of the base resistivity model used, indicating that more effort should be devoted to obtaining high-quality base models prior to dynamic experiments. The studies described herein give clear evidence that plane-wave EM methods are useful to characterize and monitor the subsurface at a wide range of scales. The presented approaches contribute to an improved appraisal of the obtained models, both in terms of the incorporation of prior information in the algorithms and in terms of posterior uncertainty quantification. In addition, the developed strategies can be applied to other geophysical methods, and offer great flexibility to incorporate additional information when available.
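The abstract notes that both the regularization weight and the data-error standard deviation can be retrieved within the MCMC inversion. As a purely illustrative sketch of that idea (a toy linear forward model and generic names, not the algorithm used in the thesis), the Metropolis sampler below jointly samples the model parameters and the unknown noise level:

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(m):
    """Toy linear 'forward model' standing in for the plane-wave EM solver (assumption)."""
    G = np.vander(np.linspace(0.0, 1.0, 20), len(m), increasing=True)
    return G @ m

# Synthetic data generated with an unknown noise level
m_true = np.array([1.0, -2.0, 0.5])
d_obs = forward(m_true) + rng.normal(0.0, 0.05, 20)

def log_posterior(m, log_sigma):
    """Gaussian likelihood with unknown sigma plus a crude smoothness prior on m."""
    sigma = np.exp(log_sigma)
    r = d_obs - forward(m)
    log_like = -0.5 * np.sum((r / sigma) ** 2) - r.size * log_sigma
    log_prior = -0.5 * np.sum(np.diff(m) ** 2)  # stands in for model regularization
    return log_like + log_prior

# Metropolis sampling over the model parameters AND the error standard deviation
m, log_sigma = np.zeros(3), np.log(0.5)
current = log_posterior(m, log_sigma)
samples = []
for _ in range(20000):
    m_prop = m + rng.normal(0.0, 0.05, 3)
    ls_prop = log_sigma + rng.normal(0.0, 0.05)
    proposed = log_posterior(m_prop, ls_prop)
    if np.log(rng.uniform()) < proposed - current:
        m, log_sigma, current = m_prop, ls_prop, proposed
    samples.append(np.append(m, np.exp(log_sigma)))

samples = np.array(samples[5000:])            # discard burn-in
print("posterior mean of the noise level:", samples[:, -1].mean())
```

Treating the noise level as an extra sampled parameter is one standard way such hierarchical quantities can be inferred alongside the model itself.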
Abstract:
Time-lapse geophysical measurements are widely used to monitor the movement of water and solutes through the subsurface. Yet commonly used deterministic least-squares inversions typically suffer from relatively poor mass recovery, spread overestimation, and a limited ability to appropriately estimate nonlinear model uncertainty. We describe herein a novel inversion methodology designed to reconstruct the three-dimensional distribution of a tracer anomaly from geophysical data and provide consistent uncertainty estimates using Markov chain Monte Carlo simulation. Posterior sampling is made tractable by using a lower-dimensional model space related both to the Legendre moments of the plume and to predefined morphological constraints. Benchmark results using cross-hole ground-penetrating radar travel-time measurements during two synthetic water tracer application experiments involving increasingly complex plume geometries show that the proposed method not only conserves mass but also provides better estimates of plume morphology and posterior model uncertainty than deterministic inversion results.
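As a rough, self-contained illustration of the Legendre-moment model reduction mentioned above (in 2D rather than 3D, with a toy Gaussian plume and made-up names, not the authors' implementation), the sketch below computes low-order Legendre moments of a gridded plume and reconstructs an approximation from them:

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_moments(plume, order):
    """Low-order Legendre moments of a field sampled on a regular grid over [-1, 1]^2."""
    ny, nx = plume.shape
    x, y = np.linspace(-1, 1, nx), np.linspace(-1, 1, ny)
    dx, dy = x[1] - x[0], y[1] - y[0]
    moments = np.zeros((order + 1, order + 1))
    for p in range(order + 1):
        Pp = legendre.legval(y, np.eye(order + 1)[p])      # P_p evaluated along y
        for q in range(order + 1):
            Pq = legendre.legval(x, np.eye(order + 1)[q])  # P_q evaluated along x
            norm = (2 * p + 1) * (2 * q + 1) / 4.0
            moments[p, q] = norm * np.sum(plume * np.outer(Pp, Pq)) * dx * dy
    return moments

def reconstruct(moments, shape):
    """Approximate the plume from its Legendre moments."""
    order = moments.shape[0] - 1
    ny, nx = shape
    x, y = np.linspace(-1, 1, nx), np.linspace(-1, 1, ny)
    field = np.zeros(shape)
    for p in range(order + 1):
        Pp = legendre.legval(y, np.eye(order + 1)[p])
        for q in range(order + 1):
            Pq = legendre.legval(x, np.eye(order + 1)[q])
            field += moments[p, q] * np.outer(Pp, Pq)
    return field

# Toy Gaussian "plume" and its reconstruction from 6x6 moments
yy, xx = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64), indexing="ij")
plume = np.exp(-((xx - 0.2) ** 2 + (yy + 0.1) ** 2) / 0.05)
approx = reconstruct(legendre_moments(plume, 5), plume.shape)
print("max reconstruction error:", np.abs(plume - approx).max())
```

The point of such a parameterization is that a handful of moment coefficients, rather than every grid cell, become the unknowns that the sampler has to explore.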
Abstract:
The method of stochastic dynamic programming is widely used in behavioral ecology, but it has some shortcomings because of its reliance on a finite time horizon. The authors present an alternative approach based on the methods of renewal theory. The suggested method uses the cumulative energy reserve gained per unit of time as the optimization criterion, which leads to stationary cycles in the state space. This approach makes it possible to study optimal foraging by analytic methods.
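Read this way (interpreting the "theory of restoration" as renewal theory), the criterion is the long-run rate of energy gain given by the renewal-reward theorem rather than a terminal reward over a fixed horizon. A minimal sketch of the stated criterion, under that assumption:

```latex
% Sketch of a renewal-reward rate criterion (assumed formulation, for illustration):
% each behavioral cycle i yields net energy E_i over duration T_i (i.i.d. cycles),
% so the cumulative reserve per unit time converges to a ratio of expectations.
\gamma \;=\; \lim_{t \to \infty} \frac{R(t)}{t}
       \;=\; \frac{\mathbb{E}[E_{\mathrm{cycle}}]}{\mathbb{E}[T_{\mathrm{cycle}}]},
\qquad \text{and the optimal policy maximizes } \gamma,
\text{ yielding stationary cycles in the state space.}
```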
Abstract:
Executive Summary: The unifying theme of this thesis is the pursuit of satisfactory ways to quantify the risk-reward trade-off in financial economics. First in the context of a general asset pricing model, then across models, and finally across country borders. The guiding principle in that pursuit was to seek innovative solutions by combining ideas from different fields in economics and broad scientific research. For example, in the first part of this thesis we sought a fruitful application of strong existence results in utility theory to topics in asset pricing. In the second part we apply an idea from the field of fuzzy set theory to the optimal portfolio selection problem, while the third part of this thesis is, to the best of our knowledge, the first empirical application of some general results in asset pricing in incomplete markets to the important topic of measuring financial integration. While the first two parts of this thesis effectively combine well-known ways to quantify the risk-reward trade-offs, the third one can be viewed as an empirical verification of the usefulness of the so-called "good deal bounds" theory in designing risk-sensitive pricing bounds. Chapter 1 develops a discrete-time asset pricing model, based on a novel ordinally equivalent representation of recursive utility. To the best of our knowledge, we are the first to use a member of a novel class of recursive utility generators to construct a representative agent model to address some long-standing issues in asset pricing. Applying strong representation results allows us to show that the model features countercyclical risk premia, for both consumption and financial risk, together with a low and procyclical risk-free rate. As the recursive utility used nests as a special case the well-known time-state separable utility, all results nest the corresponding ones from the standard model and thus shed light on its well-known shortcomings. The empirical investigation to support these theoretical results, however, showed that as long as one resorts to econometric methods based on approximating conditional moments with unconditional ones, it is not possible to distinguish the model we propose from the standard one. Chapter 2 is a joint work with Sergei Sontchik. There we provide theoretical and empirical motivation for the aggregation of performance measures. The main idea is that, just as it makes sense to apply several performance measures ex-post, it also makes sense to base optimal portfolio selection on ex-ante maximization of as many performance measures as desired. We thus offer a concrete algorithm for optimal portfolio selection via ex-ante optimization over different horizons of several risk-return trade-offs simultaneously. An empirical application of that algorithm, using seven popular performance measures, suggests that realized returns feature better distributional characteristics relative to those of realized returns from portfolio strategies optimal with respect to single performance measures. When comparing the distributions of realized returns, we used two partial risk-reward orderings: first- and second-order stochastic dominance.
We first used the Kolmogorov-Smirnov test to determine if the two distributions are indeed different, which, combined with a visual inspection, allowed us to demonstrate that the way we propose to aggregate performance measures leads to portfolio realized returns that first-order stochastically dominate the ones that result from optimization only with respect to, for example, the Treynor ratio or Jensen's alpha. We checked for second-order stochastic dominance via pointwise comparison of the so-called absolute Lorenz curve, or the sequence of expected shortfalls for a range of quantiles. Since the plot of the absolute Lorenz curve for the aggregated performance measures was above the one corresponding to each individual measure, we were tempted to conclude that the algorithm we propose leads to a portfolio return distribution that second-order stochastically dominates those obtained from virtually all individual performance measures considered. Chapter 3 proposes a measure of financial integration, based on recent advances in asset pricing in incomplete markets. Given a base market (a set of traded assets) and an index of another market, we propose to measure financial integration through time by the size of the spread between the pricing bounds of the market index, relative to the base market. The bigger the spread around country index A, viewed from market B, the less integrated markets A and B are. We investigate the presence of structural breaks in the size of the spread for EMU member country indices before and after the introduction of the Euro. We find evidence that both the level and the volatility of our financial integration measure increased after the introduction of the Euro. That counterintuitive result suggests the presence of an inherent weakness in the attempt to measure financial integration independently of economic fundamentals. Nevertheless, the results about the bounds on the risk-free rate appear plausible from the viewpoint of existing economic theory about the impact of integration on interest rates.
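The two dominance checks described above reduce to a few lines of code. The sketch below is illustrative only (synthetic return series and generic variable names): it applies the two-sample Kolmogorov-Smirnov test, then compares empirical quantile functions for first-order dominance and absolute Lorenz curves (cumulative expected shortfall profiles) for second-order dominance:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
# Placeholder return series standing in for the aggregated-measure and single-measure portfolios
r_agg = rng.normal(0.008, 0.04, 1000)
r_single = rng.normal(0.005, 0.05, 1000)

# 1) Are the two distributions different at all?
print("KS test:", ks_2samp(r_agg, r_single))

# 2) First-order stochastic dominance: quantile function of A above that of B everywhere
q = np.linspace(0.01, 0.99, 99)
fosd = np.all(np.quantile(r_agg, q) >= np.quantile(r_single, q))

# 3) Second-order SD via the absolute Lorenz curve: cumulative sum of sorted returns
def absolute_lorenz(r):
    return np.cumsum(np.sort(r)) / len(r)   # expected shortfall profile across quantiles

sosd = np.all(absolute_lorenz(r_agg) >= absolute_lorenz(r_single))
print("first-order SD:", fosd, " second-order SD:", sosd)
```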
Abstract:
BACKGROUND AND OBJECTIVES: The SBP values to be achieved by antihypertensive therapy in order to maximize the reduction of cardiovascular outcomes are unknown; neither is it clear whether, in patients with a previous cardiovascular event, the optimal values are lower than in low-to-moderate-risk hypertensive patients, or whether a more cautious blood pressure (BP) reduction should be pursued. Because of the uncertainty over whether the 'lower the better' or the 'J-curve' hypothesis is correct, the European Society of Hypertension and the Chinese Hypertension League have promoted a randomized trial comparing antihypertensive treatment strategies aiming at three different SBP targets in hypertensive patients with a recent stroke or transient ischaemic attack. As the optimal low-density lipoprotein cholesterol (LDL-C) level is also unknown in these patients, LDL-C lowering has been included in the design. PROTOCOL DESIGN: The European Society of Hypertension-Chinese Hypertension League Stroke in Hypertension Optimal Treatment trial is a prospective, multinational, randomized trial with a 3 × 2 factorial design comparing three different SBP targets (1, <145-135; 2, <135-125; 3, <125 mmHg) and two different LDL-C targets (target A, 2.8-1.8; target B, <1.8 mmol/l). The trial is to be conducted on 7500 patients aged at least 65 years (2500 in Europe, 5000 in China) with hypertension and a stroke or transient ischaemic attack 1-6 months before randomization. Antihypertensive and statin treatments will be initiated or modified using suitable registered agents chosen by the investigators, in order to maintain patients within the randomized SBP and LDL-C windows. All patients will be followed up every 3 months for BP and every 6 months for LDL-C. Ambulatory BP will be measured yearly. OUTCOMES: The primary outcome is time to stroke (fatal and non-fatal). Important secondary outcomes are: time to first major cardiovascular event; cognitive decline (Montreal Cognitive Assessment); and dementia. All major outcomes will be adjudicated by committees blind to randomized allocation. A Data and Safety Monitoring Board has open access to data and can recommend trial interruption for safety. SAMPLE SIZE CALCULATION: It has been calculated that 925 patients would reach the primary outcome after a mean 4-year follow-up, and this should provide at least 80% power to detect a 25% stroke difference between SBP targets and a 20% difference between LDL-C targets.
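For orientation only, the number of primary-outcome events needed to detect a given relative risk reduction in a single two-arm comparison can be approximated with the standard Schoenfeld formula. The sketch below is a generic textbook calculation under assumed inputs, not the trial's actual sample-size computation (which involves three SBP arms and a factorial design):

```python
from math import log
from scipy.stats import norm

def schoenfeld_events(hazard_ratio, alpha=0.05, power=0.80, alloc=0.5):
    """Events required to detect a given hazard ratio in a two-arm time-to-event comparison."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return z ** 2 / (alloc * (1 - alloc) * log(hazard_ratio) ** 2)

# Illustrative only: a 25% relative risk reduction treated as HR = 0.75
print(round(schoenfeld_events(0.75)))   # about 380 events for a single pairwise contrast
```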
Abstract:
Understanding the spatial behavior of soil physical properties under a no-tillage system (NT) is required for the adoption and maintenance of a sustainable soil management system. The aims of this study were to quantify soil bulk density (BD), porosity in the soil macropore domain (PORp) and in the soil matrix domain (PORm), air capacity in the soil matrix (ACm), field capacity (FC), and soil water storage capacity (FC/TP) in the row (R), interrow (IR), and intermediate position between R and IR (designated IP) in the 0.0-0.10 and 0.10-0.20 m soil layers under NT; and to verify whether these soil properties show systematic variation related to the sampling positions in the rows and interrows of corn. Soil sampling was carried out in a transect perpendicular to the corn rows, in which 40 sampling points were selected at each position (R, IR, IP) and in each soil layer, obtaining undisturbed samples to determine the aforementioned soil physical properties. The influence of sampling position on the systematic variation of soil physical properties was evaluated by spectral analysis. In the 0.0-0.1 m layer, tilling the crop rows at the time of planting led to differences in BD, PORp, ACm, FC and FC/TP only in the R position. In the R position, the FC/TP ratio was considered close to ideal (0.66), indicating good water and air availability at this sampling position. The R position also showed BD values lower than the critical bulk density that restricts root growth, suggesting good soil physical conditions for seed germination and plant establishment. Spectral analysis indicated that there was systematic variation in the soil physical properties evaluated in the 0.0-0.1 m layer, except for PORm. These results indicated that the soil physical properties evaluated in the 0.0-0.1 m layer were associated with soil position in the rows and interrows of corn. Thus, proper assessment of soil physical properties under NT must take into consideration the sampling positions and the previous location of crop rows and interrows.
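The spectral analysis used to test for systematic (row/interrow) variation along the transect can be sketched as a periodogram of a soil property measured at evenly spaced points. Everything below (sample spacing, row spacing, noise level) is an assumed illustrative setup, not the study's data:

```python
import numpy as np
from scipy.signal import periodogram

rng = np.random.default_rng(2)
spacing_m = 0.3                      # assumed distance between sampling points (m)
n = 120
x = np.arange(n) * spacing_m

# Synthetic bulk-density transect: random variation plus a periodic row/interrow component
row_period_m = 0.9                   # assumed corn row spacing
bd = 1.35 + 0.05 * np.sin(2 * np.pi * x / row_period_m) + rng.normal(0, 0.03, n)

freqs, power = periodogram(bd - bd.mean(), fs=1.0 / spacing_m)
peak = freqs[np.argmax(power)]
print(f"dominant spatial frequency: {peak:.2f} cycles/m "
      f"(~{1.0 / peak:.2f} m period)")   # a peak near the row spacing indicates systematic variation
```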
Abstract:
BACKGROUND: Guidelines for the management of anaemia in patients with chronic kidney disease (CKD) recommend a minimal haemoglobin (Hb) target of 11 g/dL. Recent surveys indicate that this requirement is not met in many patients in Europe. In most studies, Hb is only assessed over a short-term period. The aim of this study was to examine the control of anaemia over a continuous long-term period in Switzerland. METHODS: A prospective multi-centre observational study was conducted in dialysed patients treated with recombinant human epoetin (EPO) beta, over a one-year follow-up period, with monthly assessments of anaemia parameters. RESULTS: Three hundred and fifty patients from 27 centres, representing 14% of the dialysis population in Switzerland, were included. Mean Hb was 11.9 ± 1.0 g/dL and remained stable over time. Eighty-five percent of the patients achieved a mean Hb ≥ 11 g/dL. The mean EPO dose was 155 ± 118 IU/kg/week, delivered mostly by the subcutaneous route (64-71%). Mean serum ferritin and transferrin saturation were 435 ± 253 µg/L and 30 ± 11%, respectively. At month 12, adequate iron stores were found in 72.5% of patients, whereas absolute and functional iron deficiencies were observed in only 5.1% and 17.8%, respectively. Multivariate analysis showed that diabetes unexpectedly influenced Hb towards higher levels (12.1 ± 0.9 g/dL; p = 0.02). One-year survival was significantly higher in patients with Hb ≥ 11 g/dL than in those with Hb <11 g/dL (19.7% vs 7.3%, p = 0.006). CONCLUSION: In comparison with European reference studies, this survey shows remarkable and continuous control of anaemia in Swiss dialysis centres. These results were achieved through moderately high EPO doses, mostly given subcutaneously, and careful management of iron therapy.
Abstract:
Background: Optimal valganciclovir (VGC) dosage and duration for cytomegalovirus (CMV) prophylaxis in kidney transplant recipients remain controversial. This study aimed, first, to determine ganciclovir (GCV) blood levels and the efficacy/safety observed under low-dose oral VGC in kidney transplant recipients and, second, to quantify the variability of GCV blood levels and its potential clinical impact. Methods: In this prospective study, each patient at risk for CMV undergoing kidney transplantation received low-dose VGC (450 mg qd) prophylaxis for 3 months, unless GFR was below 40 mL/min, in which case the dose was adapted to 450 mg every other day. GCV levels at trough (Ctrough) and at peak (C3h) were measured monthly, and CMV viremia was assessed during and after prophylaxis using real-time quantitative polymerase chain reaction. Adverse effects were recorded at each GCV sampling. Patients were followed up to one year after transplantation. Results: 38 kidney recipients (19 D+/R+, 11 D+/R-, 8 D-/R+) received 3-month VGC prophylaxis. Most patients (mean GFR of 59 mL/min) received 450 mg qd, but the dose was reduced to 450 mg every other day in 6 patients with a mean GFR of 22 mL/min. Average GCV C3h and Ctrough (regressed at 24 h or 48 h) were 3.9 mg/L (CV 33%, range 1.3-8.2) and 0.4 mg/L (CV 111%, range 0.1-3.3). Population pharmacokinetic analysis showed fair dispersion of the parameters, mainly influenced by renal function. Despite this variability, patients remained aviremic during VGC prophylaxis. Neutropenia and thrombocytopenia (grade 2-4) were reported in 4% and 3% of patients, respectively. During follow-up, asymptomatic CMV viremia was reported in 25% of patients. One year after transplantation, 12% of patients (all D+/R-) had developed CMV disease, which was treated with a therapeutic 6-week course of oral VGC. Conclusion: Average GCV blood levels after oral administration of low-dose VGC in kidney transplant recipients were comparable to those previously reported with oral GCV prophylaxis, efficacious and well tolerated. Thus, a 3-month course of low-dose VGC is appropriate for the renal function of most kidney transplant recipients.
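The "Ctrough (regressed at 24 h or 48 h)" values refer to concentrations standardized to the end of the dosing interval. A minimal way to picture such an extrapolation is mono-exponential (first-order) elimination, as sketched below; the half-life used is an assumed illustrative value, not a parameter estimated in this study:

```python
import math

def trough_from_c3h(c3h, tau_h, half_life_h=6.0):
    """Extrapolate a 3-h post-dose concentration to the end of the dosing interval,
    assuming first-order elimination. The 6-h half-life is an illustrative assumption,
    not a value reported in the study."""
    ke = math.log(2) / half_life_h
    return c3h * math.exp(-ke * (tau_h - 3.0))

# Example with an arbitrary 3-h concentration of 4.0 mg/L and once-daily dosing (24-h interval)
print(round(trough_from_c3h(4.0, 24.0), 2))
```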
Abstract:
Hematocrit (Hct) is one of the most critical issues associated with the bioanalytical methods used for dried blood spot (DBS) sample analysis. Because Hct determines the viscosity of blood, it may affect the spreading of blood onto the filter paper. Hence, accurate quantitative data can only be obtained if the extracted portion of the filter paper contains a fixed blood volume. We describe for the first time a microfluidic-based sampling procedure that enables accurate blood volume collection on commercially available DBS cards. The system allows the collection of a controlled volume of blood (e.g., 5 or 10 μL) within several seconds. Reproducibility of the sampling volume was examined in vivo on capillary blood by quantifying caffeine and paraxanthine on 5 different extracted DBS spots at two different time points, and in vitro with a test compound, Mavoglurant, on 10 different spots at two Hct levels. Entire spots were extracted. In addition, the accuracy and precision (n = 3) data for the Mavoglurant quantitation in blood with Hct levels between 26% and 62% were evaluated. The interspot precision data were below 9.0%, which was equivalent to that of a volume manually spotted with a pipet. No Hct effect was observed in the quantitative results obtained for Hct levels from 26% to 62%. These data indicate that our microfluidic-based sampling procedure is accurate and precise and that the analysis of Mavoglurant is not affected by the Hct values. This provides a simple procedure for DBS sampling with a fixed volume of capillary blood, which could eliminate the recurrent Hct issue linked to DBS sample analysis.
Abstract:
OBJECTIVE: Accuracy studies of Patient Safety Indicators (PSIs) are critical but limited by the large samples required due to low occurrence of most events. We tested a sampling design based on test results (verification-biased sampling [VBS]) that minimizes the number of subjects to be verified. METHODS: We considered 3 real PSIs, whose rates were calculated using 3 years of discharge data from a university hospital and a hypothetical screen of very rare events. Sample size estimates, based on the expected sensitivity and precision, were compared across 4 study designs: random and VBS, with and without constraints on the size of the population to be screened. RESULTS: Over sensitivities ranging from 0.3 to 0.7 and PSI prevalence levels ranging from 0.02 to 0.2, the optimal VBS strategy makes it possible to reduce sample size by up to 60% in comparison with simple random sampling. For PSI prevalence levels below 1%, the minimal sample size required was still over 5000. CONCLUSIONS: Verification-biased sampling permits substantial savings in the required sample size for PSI validation studies. However, sample sizes still need to be very large for many of the rarer PSIs.
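For comparison with the verification-biased design discussed above, the number of records to screen so that sensitivity can be estimated with a given precision under simple random sampling follows the standard (Buderer-type) formula. The sketch below is that generic calculation with illustrative inputs, not the authors' optimization:

```python
import math
from scipy.stats import norm

def n_random_for_sensitivity(sens, precision, prevalence, alpha=0.05):
    """Records to screen so that the sensitivity CI half-width is <= `precision`
    when true events occur with the given prevalence (simple random sampling)."""
    z = norm.ppf(1 - alpha / 2)
    n_cases = z ** 2 * sens * (1 - sens) / precision ** 2   # true events needed
    return math.ceil(n_cases / prevalence)                  # records to screen to obtain them

# Illustrative: sensitivity 0.5 estimated to +/-0.10 for a PSI with 2% prevalence
print(n_random_for_sensitivity(0.5, 0.10, 0.02))   # roughly 4800 records
```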
Abstract:
Background: Two or three DNA primes have been used in previous smaller clinical trials, but the number required for optimal priming of viral vectors has never been assessed in adequately powered clinical trials. The EV03/ANRS Vac20 phase I/II trial investigated this issue using the DNA prime/poxvirus NYVAC boost combination, both expressing a common HIV-1 clade C immunogen consisting of Env and a Gag-Pol-Nef polypeptide. Methods: 147 healthy volunteers were randomly allocated through 8 European centres to either 3xDNA plus 1xNYVAC (weeks 0, 4, 8 plus 24; n=74) or 2xDNA plus 2xNYVAC (weeks 0, 4 plus 20, 24; n=73), stratified by geographical region and sex. T cell responses were quantified using the interferon-gamma Elispot assay and 8 peptide pools; samples from weeks 0, 26 and 28 (time points for the primary immunogenicity endpoint), 48 and 72 were considered for this analysis. Results: 140 of 147 participants were evaluable at weeks 26 and/or 28. 64/70 (91%) in the 3xDNA arm compared to 56/70 (80%) in the 2xDNA arm developed a T cell response (P=0.053). 26 (37%) participants in the 3xDNA arm developed a broader T cell response (Env plus at least one of the Gag, Pol, Nef peptide pools) versus 15 (22%) in the 2xDNA arm (P=0.047). At week 26, the overall magnitude of responses was also higher in the 3xDNA arm than in the 2xDNA arm (similar at week 28), with a median of 545 versus 328 SFUs/10^6 cells at week 26 (P<0.001). Preliminary overall evaluation showed that participants still showed T-cell responses at weeks 48 (78%, n=67) and 72 (70%, n=66). Conclusion: This large clinical trial demonstrates that optimal priming of poxvirus-based vaccine regimens requires 3 DNA primes, and further confirms that the DNA/NYVAC prime-boost vaccine combination is highly immunogenic and induces durable T-cell responses.
Abstract:
OBJECTIVES: We have sought to develop an automated methodology for the continuous updating of optimal cerebral perfusion pressure (CPPopt) for patients after severe traumatic head injury, using continuous monitoring of cerebrovascular pressure reactivity. We then validated the CPPopt algorithm by determining the association between outcome and the deviation of actual CPP from CPPopt. DESIGN: Retrospective analysis of prospectively collected data. SETTING: Neurosciences critical care unit of a university hospital. PATIENTS: A total of 327 traumatic head-injury patients admitted between 2003 and 2009 with continuous monitoring of arterial blood pressure and intracranial pressure. MEASUREMENTS AND MAIN RESULTS: Arterial blood pressure, intracranial pressure, and CPP were continuously recorded, and the pressure reactivity index was calculated online. Outcome was assessed at 6 months. An automated curve-fitting method was applied to determine the CPP at the minimum value of the pressure reactivity index (CPPopt). A time trend of CPPopt was created using a moving 4-hr window, updated every minute. Identification of CPPopt was, on average, feasible during 55% of the whole recording period. Patient outcome correlated with the continuously updated difference between median CPP and CPPopt (chi-square=45, p<.001; outcome dichotomized into fatal and nonfatal). Mortality was associated with relative "hypoperfusion" (CPP<CPPopt), severe disability with "hyperperfusion" (CPP>CPPopt), and favorable outcome was associated with smaller deviations of CPP from the individualized CPPopt. While deviations from global target CPP values of 60 mm Hg and 70 mm Hg were also related to outcome, these relationships were less robust. CONCLUSIONS: Real-time CPPopt could be identified during the recording time of the majority of the patients. Patients with a median CPP close to CPPopt were more likely to have a favorable outcome than those in whom median CPP was widely different from CPPopt. Deviations from the individualized CPPopt were more predictive of outcome than deviations from a common target CPP. CPP management to optimize cerebrovascular pressure reactivity should be the subject of a future clinical trial in severe traumatic head-injury patients.
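A minimal version of the automated CPPopt identification described above (bin CPP, average the pressure reactivity index per bin, fit a U-shaped curve within a moving window, and take its minimum) might look like the sketch below. The bin width, window length, CPP range, and parabolic fit are illustrative assumptions, not the published algorithm:

```python
import numpy as np

def cpp_opt(cpp, prx, bin_width=5.0, min_samples=10):
    """Fit a parabola to bin-averaged PRx vs CPP and return the CPP at its minimum."""
    edges = np.arange(40, 121, bin_width)
    centers, means = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (cpp >= lo) & (cpp < hi)
        if mask.sum() >= min_samples:
            centers.append((lo + hi) / 2.0)
            means.append(prx[mask].mean())
    if len(centers) < 3:
        return np.nan                       # not enough CPP coverage in this window
    a, b, c = np.polyfit(centers, means, 2)
    if a <= 0:                              # curve is not U-shaped; no optimum found
        return np.nan
    return float(np.clip(-b / (2 * a), centers[0], centers[-1]))

def cpp_opt_trend(cpp, prx, window=240):
    """Moving 4-hour window (240 one-minute samples assumed), updated every minute."""
    return np.array([cpp_opt(cpp[i - window:i], prx[i - window:i])
                     for i in range(window, len(cpp))])

# Toy demonstration with a synthetic U-shaped PRx-CPP relationship centered near 75 mmHg
rng = np.random.default_rng(3)
cpp = rng.normal(75, 10, 24 * 60)
prx = 0.001 * (cpp - 75) ** 2 - 0.2 + rng.normal(0, 0.1, cpp.size)
print("mean CPPopt estimate:", np.nanmean(cpp_opt_trend(cpp, prx)))
```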
Abstract:
Identifying the geographic distribution of populations is a basic, yet crucial step in many fundamental and applied ecological projects, as it provides key information on which many subsequent analyses depend. However, this task is often costly and time consuming, especially where rare species are concerned and where most sampling designs generally prove inefficient. At the same time, rare species are those for which distribution data are most needed for their conservation to be effective. To enhance fieldwork sampling, model-based sampling (MBS) uses predictions from species distribution models: when looking for a species in areas of high habitat suitability, the chances of finding it should be higher. We thoroughly tested the efficiency of MBS by conducting an extensive survey in the Swiss Alps, assessing the detection rate of three rare and five common plant species. For each species, habitat suitability maps were produced following an ensemble modeling framework combining two spatial resolutions and two modeling techniques. We tested the efficiency of MBS and the accuracy of our models by sampling 240 sites in the field (30 sites × 8 species). Across all species, the MBS approach proved to be effective. In particular, the MBS design strictly led to the discovery of six sites of presence of one rare plant, increasing the chances of finding this species from 0 to 50%. For common species, MBS doubled the new-population discovery rate compared to random sampling. Habitat suitability maps resulting from the combination of the four individual models predicted the species' distributions well, and more accurately than the individual models. In conclusion, using MBS for fieldwork could efficiently help increase our knowledge of rare species distributions. More generally, we recommend using habitat suitability models to support conservation plans.
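The core of the MBS idea, ranking candidate sites by modeled habitat suitability before sending field crews out, can be sketched in a few lines. The example below uses a synthetic suitability surface and a hypothetical rare species, and simply contrasts MBS with random site selection; every name and number is illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
n_cells = 5000

# Synthetic habitat suitability (ensemble prediction) and true occupancy of a rare species
suitability = rng.beta(2, 5, n_cells)
p_presence = 0.02 * suitability / suitability.mean()       # presence is rarer where suitability is low
occupied = rng.uniform(size=n_cells) < p_presence

budget = 30                                                 # sites a field crew can visit

mbs_sites = np.argsort(suitability)[-budget:]               # model-based sampling: top-ranked cells
random_sites = rng.choice(n_cells, budget, replace=False)   # baseline: simple random sampling

print("detections with MBS:  ", occupied[mbs_sites].sum())
print("detections at random: ", occupied[random_sites].sum())
```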
Abstract:
We develop a theory of news coverage in environments of information abundance. News consumers are time-constrained and browse through news items that are available across competing outlets, choosing which ones to read or skip. Media firms are aware of consumers' preferences and constraints, and decide on rankings of news items that maximize their profits. We find that, even when readers and outlets are rational and unbiased and when markets are competitive, readers may read more than they would like to, and the stories they read may be significantly different from the ones they prefer. Next, we derive implications on diverse aspects of new and traditional media. These include a rationale for tabloid news, a theory of optimal advertisement placement in newscasts, and a justification for readers' migration to online media platforms in order to circumvent inefficient rankings found in traditional media. We then analyze methods for restoring reader-efficient standards and discuss the political economy implications of the theory.
Abstract:
Because of the significant alteration caused by long decomposition times, the gases in human bodies buried for more than a year have not previously been investigated. For the first time, the results of gas analyses sampled from bodies recently exhumed after 30 years are presented. Adipocere formation prevented excessive alteration of the bodies, and gaseous areas could be identified. Sampling was performed in those specific areas with airtight syringes, assisted by multi-detector computed tomography (MDCT). The large amount of methane (CH4), coupled with small amounts of hydrogen (H2) and carbon dioxide (CO2), the usual gaseous alteration indicators, confirmed that methanogenesis is the operating mechanism over long periods of alteration. H2 and CO2 produced during the first stages of the alteration process were consumed through anaerobic oxidation by methanogenic bacteria, generating CH4.