275 results for Value integration
Abstract:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending corresponding approaches beyond the local scale still represents a major challenge, yet is critically important for the development of reliable groundwater flow and contaminant transport models. To address this issue, I have developed a hydrogeophysical data integration technique based on a two-step Bayesian sequential simulation procedure that is specifically targeted towards larger-scale problems. The objective is to simulate the distribution of a target hydraulic parameter based on spatially exhaustive, but poorly resolved, measurements of a pertinent geophysical parameter and locally highly resolved, but spatially sparse, measurements of the considered geophysical and hydraulic parameters. To this end, my algorithm links the low- and high-resolution geophysical data via a downscaling procedure before relating the downscaled regional-scale geophysical data to the high-resolution hydraulic parameter field. I first illustrate the application of this novel data integration approach to a realistic synthetic database consisting of collocated high-resolution borehole measurements of the hydraulic and electrical conductivities and spatially exhaustive, low-resolution electrical conductivity estimates obtained from electrical resistivity tomography (ERT). The overall viability of this method is tested and verified by performing and comparing flow and transport simulations through the original and simulated hydraulic conductivity fields.
The corresponding results indicate that the proposed data integration procedure does indeed allow for obtaining faithful estimates of the larger-scale hydraulic conductivity structure and reliable predictions of the transport characteristics over medium- to regional-scale distances. The approach is then applied to a corresponding field scenario consisting of collocated high-resolution measurements of the electrical conductivity, as measured using a cone penetrometer testing (CPT) system, and the hydraulic conductivity, as estimated from electromagnetic flowmeter and slug test measurements, in combination with spatially exhaustive low-resolution electrical conductivity estimates obtained from surface-based electrical resistivity tomography (ERT). The corresponding results indicate that the newly developed data integration approach is indeed capable of adequately capturing both the small-scale heterogeneity as well as the larger-scale trend of the prevailing hydraulic conductivity field. The results also indicate that this novel data integration approach is remarkably flexible and robust and hence can be expected to be applicable to a wide range of geophysical and hydrological data at all scale ranges. In the second part of my thesis, I evaluate in detail the viability of sequential geostatistical resampling as a proposal mechanism for Markov Chain Monte Carlo (MCMC) methods applied to high-dimensional geophysical and hydrological inverse problems in order to allow for a more accurate and realistic quantification of the uncertainty associated with the thus inferred models. Focusing on a series of pertinent crosshole georadar tomographic examples, I investigate two classes of geostatistical resampling strategies with regard to their ability to efficiently and accurately generate independent realizations from the Bayesian posterior distribution.
The corresponding results indicate that, despite its popularity, sequential resampling is rather inefficient at drawing independent posterior samples for realistic synthetic case studies, notably for the practically common and important scenario of pronounced spatial correlation between model parameters. To address this issue, I have developed a new gradual-deformation-based perturbation approach, which is flexible with regard to the number of model parameters as well as the perturbation strength. Compared to sequential resampling, this newly proposed approach was proven to be highly effective in decreasing the number of iterations required for drawing independent samples from the Bayesian posterior distribution.
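The core idea behind a gradual-deformation proposal is to blend the current Gaussian realization with an independent new one, so that the prior statistics are preserved while a single parameter controls the perturbation strength. The following is a generic sketch of that idea, not the thesis implementation; all values are hypothetical:

```python
import numpy as np

def gradual_deformation_proposal(z_current, rng, theta):
    """Propose a new standard-normal field by blending the current
    realization with an independent draw. theta in (0, pi/2] controls
    the perturbation strength (small theta -> small MCMC step)."""
    z_new = rng.standard_normal(z_current.shape)
    # Since cos^2(theta) + sin^2(theta) = 1, the proposal preserves
    # the N(0, I) prior for any theta.
    return z_current * np.cos(theta) + z_new * np.sin(theta)

rng = np.random.default_rng(0)
z = rng.standard_normal(10_000)

z_small = gradual_deformation_proposal(z, rng, theta=0.1)    # gentle move
z_big = gradual_deformation_proposal(z, rng, theta=np.pi/2)  # fully independent draw

# The marginal distribution is preserved for any theta.
print(round(float(z_small.std()), 1))  # → 1.0
```

The correlation between the current and proposed fields is cos(theta), so small theta yields high acceptance rates at the cost of slower exploration, which is exactly the trade-off the perturbation-strength flexibility addresses.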
Abstract:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending the corresponding approaches to the scale of a field site represents a major, and as-of-yet largely unresolved, challenge. To address this problem, we have developed a downscaling procedure based on a non-linear Bayesian sequential simulation approach. The main objective of this algorithm is to estimate the value of the sparsely sampled hydraulic conductivity at non-sampled locations based on its relation to the electrical conductivity logged at collocated wells and to surface resistivity measurements, which are available throughout the studied site. The in situ relationship between the hydraulic and electrical conductivities is described through a non-parametric multivariate kernel density function. Then a stochastic integration of low-resolution, large-scale electrical resistivity tomography (ERT) data in combination with high-resolution, local-scale downhole measurements of the hydraulic and electrical conductivities is applied. The overall viability of this downscaling approach is tested and validated by comparing flow and transport simulations through the original and the downscaled hydraulic conductivity fields. Our results indicate that the proposed procedure allows us to obtain remarkably faithful estimates of the regional-scale hydraulic conductivity structure and correspondingly reliable predictions of the transport characteristics over relatively long distances.
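The kernel-density step described above can be sketched as follows: a bivariate kernel density estimate of the collocated (electrical, hydraulic) conductivity logs yields a conditional distribution of hydraulic conductivity given any electrical conductivity value, from which the simulation can draw. All data below are synthetic stand-ins, and `sample_log_k_given_sigma` is an illustrative helper, not the authors' code:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)

# Synthetic collocated borehole samples (hypothetical values):
# log10 electrical conductivity (sigma) and a correlated
# log10 hydraulic conductivity (K).
log_sigma = rng.normal(-2.0, 0.5, 500)
log_k = 1.5 * log_sigma - 3.0 + rng.normal(0, 0.3, 500)

# Non-parametric bivariate kernel density of the joint distribution.
kde = gaussian_kde(np.vstack([log_sigma, log_k]))

def sample_log_k_given_sigma(ls, n=1, grid=np.linspace(-10.0, 0.0, 400)):
    """Draw log10(K) from p(K | sigma) by evaluating the joint KDE
    along a fixed-sigma slice and normalizing it to a pmf."""
    pdf = kde(np.vstack([np.full_like(grid, ls), grid]))
    pdf /= pdf.sum()
    return rng.choice(grid, size=n, p=pdf)

draws = sample_log_k_given_sigma(-2.0, n=2000)
print(round(float(draws.mean()), 1))  # near 1.5*(-2.0) - 3.0 = -6.0
```

In the actual procedure, each non-sampled location would be visited sequentially, with the draw further conditioned on previously simulated neighbors; the sketch shows only the density-based link between the two conductivities.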
Abstract:
Objective: Cardiac troponin-I (cTnI) is a well-recognized early postoperative marker of myocardial damage in adults and children after heart surgery. The present study was undertaken to evaluate whether the integrated value (area under the curve, AUC) of postoperative cTnI predicts long-term outcome better than the maximum postoperative cTnI value after surgery for congenital heart defects (CHD). Methods: Retrospective cohort study. 279 patients (mean age 4.6 years; range 0-17 years; 185 males) who underwent congenital heart defect repair on cardiopulmonary bypass were retrieved from our database, including postoperative cTnI values. The maximal postoperative cTnI value, the postoperative cTnI AUC at 48 h and the total postoperative cTnI AUC were calculated and then correlated with duration of intubation, duration of ICU stay and mortality. Results: The mean duration of mechanical ventilation was 5.1+/-7.2 days and the mean duration of ICU stay was 11.0+/-13.3 days; 11 patients (3.9%) died in the postoperative period. Comparing the survivor and deceased groups, there was a significant difference in the mean values for max cTnI (16.7+/-21.8 vs 59.2+/-41.4 mcg/l, p<0.0001), 48h AUC cTnI (82.0+/-110.7 vs 268.8+/-497.7 mcg/l, p<0.0001) and total AUC cTnI (623.8+/-1216.7 vs 2564+/-2826.0, p<0.0001). Linear regression analyses for duration of mechanical ventilation and duration of ICU stay demonstrated a better correlation for 48h AUC cTnI (ventilation time r=0.82, p<0.0001; ICU stay r=0.74, p<0.0001) than for total AUC cTnI (ventilation time r=0.65, p<0.0001; ICU stay r=0.60, p<0.0001) and max cTnI (ventilation time r=0.64, p<0.0001; ICU stay r=0.60, p<0.0001). Conclusion: Cardiac troponin-I is a specific and sensitive marker of myocardial injury after congenital heart surgery and may predict early in-hospital outcomes. Integrating the postoperative cTnI values by calculating the AUC improves the prediction of early in-hospital outcomes.
It likely reflects not only the initial surgical procedure but also the occurrence of hypoxic-ischemic phenomena in the postoperative period.
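The AUC integration described above amounts to a trapezoidal-rule integral of the cTnI concentration over the sampled time points. A minimal sketch, with hypothetical sampling times and values:

```python
import numpy as np

# Hypothetical postoperative cTnI samples: hours after surgery and
# the measured concentration in mcg/L at each blood draw.
t_hours = np.array([0, 6, 12, 24, 48])
ctni = np.array([5.0, 20.0, 15.0, 8.0, 3.0])

max_ctni = float(ctni.max())  # the single peak value: 20.0 mcg/L

# Trapezoidal rule over 0-48 h: sum of interval-average concentration
# times interval length (units: mcg/L * h).
auc_48h = float(((ctni[1:] + ctni[:-1]) / 2 * np.diff(t_hours)).sum())

print(max_ctni, auc_48h)  # → 20.0 450.0
```

Unlike the single maximum, the AUC is sensitive to how long troponin stays elevated, which is presumably why it captures postoperative hypoxic-ischemic events that a one-point peak misses.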
Abstract:
OBJECTIVE: Spirituality and religiousness have been shown to be highly prevalent among patients with schizophrenia. However, clinicians are rarely aware of the importance of religion and understand little of the value or difficulties it presents to treatment. This study aimed to assess the role of religion as a mediating variable in the process of coping with psychotic illness. METHOD: Semistructured interviews about religious coping were conducted with a sample of 115 outpatients with psychotic illness. RESULTS: For some patients, religion instilled hope, purpose, and meaning in their lives (71%), whereas for others, it induced spiritual despair (14%). Patients also reported that religion lessened (54%) or increased (10%) psychotic and general symptoms. Religion was also reported to increase social integration (28%) or social isolation (3%). It may reduce (33%) or increase (10%) the risk of suicide attempts, reduce (14%) or increase (3%) substance use, and foster adherence to (16%) or be in opposition to (15%) psychiatric treatment. CONCLUSIONS: Our results highlight the clinical significance of religion in the care of patients with schizophrenia. Religion is neither a strictly personal matter nor a strictly cultural one. Spirituality should be integrated into the psychosocial dimension of care. Our results suggest that the complexity of the relationship between religion and illness requires a highly sensitive approach to each unique story.
Abstract:
Rb-82 cardiac PET has been used to non-invasively assess myocardial blood flow (MBF) and myocardial flow reserve (MFR). The impact of MBF and MFR for predicting major adverse cardiovascular events (MACE) has not been investigated in a prospective study, which was our aim. MATERIAL AND METHODS: In total, 280 patients (65±10 y, 36% women) with known or suspected CAD were prospectively enrolled. They all underwent both a rest and an adenosine stress Rb-82 cardiac PET/CT. Dynamic acquisitions were processed with the FlowQuant 2.1.3 software, analyzed semi-quantitatively (SSS, SDS) and quantitatively (MBF, MFR), and reported using the 17-segment AHA model. Patients were stratified based on SDS, stress MBF and MFR and allocated into tertiles. For each group, annualized event rates were computed by dividing the number of annualized MACE (cardiac death, myocardial infarction, revascularisation or hospitalisation for a cardiac-related event) by the sum of individual follow-up periods in years. Outcomes were analysed for each group using Kaplan-Meier event-free survival curves and compared using the log-rank test. Multivariate analysis was performed in a stepwise fashion using Cox proportional hazards regression models (p<0.05 for model inclusion). RESULTS: Over a median follow-up of 256 days (range 168-440 d), 44 MACE were observed. Ischemia (SDS≥2) was observed in 95 patients, who had a higher annualized MACE rate than those without (55% vs. 9.8%, p<0.0001). The group in the lowest MFR tertile (MFR<1.76) had a higher MACE rate than the two highest tertiles (51% vs. 9% and 14%, p<0.0001). Similarly, the group in the lowest stress MBF tertile (MBF<1.78 mL/min/g) had the highest annualized MACE rate (41% vs. 26% and 6%, p=0.0002). On multivariate analysis, the addition of MFR or stress MBF to SDS significantly increased the global χ2 (from 56 to 60, p=0.04; and from 56 to 63, p=0.01). The best prognostic power was obtained in a model combining SDS (p<0.001) and stress MBF (p=0.01). Interestingly, the integration of stress MBF enhanced risk stratification even in the absence of ischemia. CONCLUSIONS: Quantification of MBF or MFR in Rb-82 cardiac PET/CT provides independent and incremental prognostic information over semi-quantitative assessment with SDS and is of value for risk stratification.
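The tertile stratification and annualized event rate defined above (events divided by summed follow-up years per group) can be sketched as follows; all patient data here are simulated placeholders, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical per-patient data: myocardial flow reserve (MFR),
# follow-up duration in years, and whether a MACE occurred.
# For illustration, events are assigned only at low flow reserve.
mfr = rng.uniform(1.0, 3.5, 90)
followup_y = rng.uniform(0.5, 1.2, 90)
event = mfr < 1.5

# Tertile stratification on MFR via the 33rd/67th percentiles,
# then the annualized rate per group: events / person-years.
edges = np.quantile(mfr, [1 / 3, 2 / 3])
group = np.digitize(mfr, edges)  # 0 = lowest tertile, 2 = highest
for g in range(3):
    m = group == g
    rate = event[m].sum() / followup_y[m].sum()
    print(g, round(float(rate), 2))  # events per patient-year
```

Dividing by person-years rather than patient counts is what makes the rates comparable across groups with unequal follow-up, which is why the abstract reports annualized rates instead of raw event fractions.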
Abstract:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending the corresponding approaches to the regional scale represents a major, and as-of-yet largely unresolved, challenge. To address this problem, we have developed a downscaling procedure based on a non-linear Bayesian sequential simulation approach. The basic objective of this algorithm is to estimate the value of the sparsely sampled hydraulic conductivity at non-sampled locations based on its relation to the electrical conductivity, which is available throughout the model space. The in situ relationship between the hydraulic and electrical conductivities is described through a non-parametric multivariate kernel density function. This method is then applied to the stochastic integration of low-resolution, regional-scale electrical resistivity tomography (ERT) data in combination with high-resolution, local-scale downhole measurements of the hydraulic and electrical conductivities. Finally, the overall viability of this downscaling approach is tested and verified by performing and comparing flow and transport simulations through the original and the downscaled hydraulic conductivity fields. Our results indicate that the proposed procedure does indeed allow for obtaining remarkably faithful estimates of the regional-scale hydraulic conductivity structure and correspondingly reliable predictions of the transport characteristics over relatively long distances.
Abstract:
Determining the appropriate level of integration is crucial to realizing value from acquisitions. Most prior research assumes that higher integration implies the removal of autonomy from target managers, which in turn undermines the functioning of the target firm if it entails unfamiliar elements for the acquirer. Using a survey of 86 acquisitions to obtain the richness of detail necessary to distinguish integration from autonomy, the authors argue and find that integration and autonomy are not the opposite ends of a single continuum. Certain conditions (e.g., when complementarity rather than similarity is the primary source of synergy) lead to high levels of both integration and autonomy. In addition, similarity negatively moderates the relationship between complementarity and autonomy when the target offers both synergy sources. In contrast, similarity does not moderate the link between complementarity and integration. The authors' findings advance scholarly understanding about the drivers of implementation strategy and in particular the different implementation strategies acquiring managers deploy when they attempt to leverage complementarities, similarities, or both.
Abstract:
Hypoxia, a condition of insufficient oxygen availability to support metabolism, occurs when the vascular supply is interrupted, as in stroke. The identification of hypoxic but viable tissue in stroke, as opposed to irreversible lesions (necrosis), has relevant implications for the treatment of ischemic stroke. Traditionally, imaging by positron emission tomography (PET) using 15O-based radiotracers allowed the measurement of perfusion and oxygen extraction in stroke, providing important insights into its pathophysiology. However, these multitracer evaluations are of limited applicability in clinical settings. More recently, specific tracers have been developed that accumulate with an inverse relationship to oxygen concentration and thus allow the hypoxic tissue to be visualized non-invasively. These belong to two main groups: the nitroimidazoles, among which 18F-fluoromisonidazole (18F-FMISO) is the most widely used, and the copper-based tracers, represented mainly by Cu-ATSM. While these tracers were first developed and tested to image hypoxia in tumors, they have also shown promising results in stroke models and in preliminary clinical studies of patients with cardiovascular disorders, allowing the detection of hypoxic tissue and the prediction of the extent of subsequent ischemia and of clinical outcome. These tracers therefore have the potential to select an appropriate subgroup of patients who could benefit from hypoxia-directed treatment and to provide prognostically relevant imaging. The molecular imaging of hypoxia has made important progress over the last decade and has the potential to be integrated into the diagnostic and therapeutic workup of patients with ischemic stroke.
Abstract:
Molecular monitoring of BCR/ABL transcripts by real-time quantitative reverse transcription PCR (qRT-PCR) is an essential technique for the clinical management of patients with BCR/ABL-positive CML and ALL. Though quantitative BCR/ABL assays are performed in hundreds of laboratories worldwide, results among these laboratories cannot be reliably compared due to heterogeneity in test methods, data analysis, reporting, and the lack of quantitative standards. Recent efforts towards standardization have been limited in scope. Aliquots of RNA were sent to clinical test centers worldwide in order to evaluate methods and reporting for e1a2, b2a2, and b3a2 transcript levels using their own qRT-PCR assays. Total RNA was isolated from tissue culture cells that expressed each of the different BCR/ABL transcripts. Serial log dilutions were prepared, ranging from 10^0 to 10^-5, in RNA isolated from HL60 cells. Laboratories performed 5 independent qRT-PCR reactions for each sample type at each dilution. In addition, 15 qRT-PCR reactions of the 10^-3 b3a2 RNA dilution were run to assess reproducibility within and between laboratories. Participants were asked to run the samples following their standard protocols and to report cycle threshold (Ct), quantitative values for BCR/ABL and housekeeping genes, and ratios of BCR/ABL to housekeeping genes for each sample RNA. Thirty-seven (n=37) participants submitted qRT-PCR results for analysis (36, 37, and 34 labs generated data for b2a2, b3a2, and e1a2, respectively). The limit of detection for this study was defined as the lowest dilution at which a Ct value could be detected for all 5 replicates. For b2a2, 15, 16, 4, and 1 lab(s) showed a limit of detection at the 10^-5, 10^-4, 10^-3, and 10^-2 dilutions, respectively. For b3a2, 20, 13, and 4 labs showed a limit of detection at the 10^-5, 10^-4, and 10^-3 dilutions, respectively. For e1a2, 10, 21, 2, and 1 lab(s) showed a limit of detection at the 10^-5, 10^-4, 10^-3, and 10^-2 dilutions, respectively.
Log %BCR/ABL ratio values provided a method for comparing results between the different laboratories for each BCR/ABL dilution series. Linear regression analysis revealed concordance among the majority of participant data over the 10^-1 to 10^-4 dilutions. The overall slope values showed comparable results among the majority of b2a2 (mean=0.939; median=0.9627; range 0.399-1.1872), b3a2 (mean=0.925; median=0.922; range 0.625-1.140), and e1a2 (mean=0.897; median=0.909; range 0.5174-1.138) laboratory results (Fig. 1-3). Thirty-four (n=34) of the 37 laboratories reported Ct values for all 15 replicates, and only those with a complete data set were included in the inter-lab calculations. Eleven laboratories either did not report their copy number data or used other reporting units such as nanograms or cell numbers; therefore, only 26 laboratories were included in the overall analysis of copy numbers. The median copy number was 348.4, with a range from 15.6 to 547,000 copies (approximately a 4.5 log difference); the median intra-lab %CV was 19.2%, with a range from 4.2% to 82.6%. While our international performance evaluation using serially diluted RNA samples has reinforced the fact that heterogeneity exists among clinical laboratories, it has also demonstrated that performance within a laboratory is overall very consistent. Accordingly, the availability of defined BCR/ABL RNAs may facilitate the validation of all phases of quantitative BCR/ABL analysis and may be extremely useful as a tool for monitoring assay performance. Ongoing analyses of these materials, along with the development of additional control materials, may solidify consensus around their application in routine laboratory testing and possible integration into worldwide efforts to standardize quantitative BCR/ABL testing.
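The limit-of-detection rule (lowest dilution at which all 5 replicates yield a Ct) and the intra-lab %CV used above can be expressed compactly. The Ct and copy-number values below are hypothetical, for illustration only:

```python
import numpy as np

# Hypothetical replicate Ct values per log dilution for one lab;
# None marks a replicate with no detectable amplification.
ct_by_dilution = {
    0:  [18.1, 18.0, 18.2, 18.1, 18.0],
    -1: [21.5, 21.4, 21.6, 21.5, 21.4],
    -2: [24.9, 25.0, 24.8, 24.9, 25.1],
    -3: [28.3, 28.2, 28.4, 28.3, 28.2],
    -4: [31.7, 31.8, 31.6, 31.9, 31.7],
    -5: [35.0, None, 35.2, 35.1, None],
}

# Limit of detection: the lowest dilution at which all 5 replicates
# produced a Ct (here 10^-4, since two 10^-5 replicates failed).
detected = [d for d, cts in ct_by_dilution.items()
            if all(c is not None for c in cts)]
lod = min(detected)

# Intra-lab %CV of replicate copy numbers: 100 * sample std / mean.
copies = np.array([350.0, 310.0, 402.0, 365.0, 330.0])
cv_pct = 100 * copies.std(ddof=1) / copies.mean()

print(lod)                      # → -4  (i.e. the 10^-4 dilution)
print(round(float(cv_pct), 1))  # → 10.0
```

Because %CV normalizes the spread by the mean, it stays comparable across labs whose absolute copy numbers differ by orders of magnitude, which matters given the 4.5-log range reported above.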
Abstract:
PURPOSE: To determine the value of applying finger trap distraction during direct MR arthrography of the wrist to assess intrinsic ligament and triangular fibrocartilage complex (TFCC) tears. MATERIALS AND METHODS: Twenty consecutive patients were prospectively investigated by three-compartment wrist MR arthrography. Imaging was performed with 3-T scanners using a three-dimensional isotropic (0.4 mm) T1-weighted gradient-recalled echo sequence, with and without finger trap distraction (4 kg). In a blind and independent fashion, two musculoskeletal radiologists measured the width of the scapholunate (SL), lunotriquetral (LT) and ulna-TFC (UTFC) joint spaces. They evaluated the amount of contrast medium within these spaces using a four-point scale, and assessed SL, LT and TFCC tears, as well as the disruption of Gilula's carpal arcs. RESULTS: With finger trap distraction, both readers found a significant increase in width of the SL space (mean Δ = +0.1 mm, p ≤ 0.040), and noticed more contrast medium therein (p ≤ 0.035). In contrast, the differences in width of the LT (mean Δ = +0.1 mm, p ≥ 0.057) and UTFC (mean Δ = 0 mm, p ≥ 0.728) spaces, as well as the amount of contrast material within these spaces, were not statistically significant (p = 0.607 and ≥ 0.157, respectively). Both readers detected more SL (Δ = +1, p = 0.157) and LT (Δ = +2, p = 0.223) tears, although statistical significance was not reached, and Gilula's carpal arcs were more frequently disrupted during finger trap distraction (Δ = +5, p = 0.025). CONCLUSION: The application of finger trap distraction during direct wrist MR arthrography may enhance both detection and characterisation of SL and LT ligament tears by widening the SL space and increasing the amount of contrast within the SL and LT joint spaces.
Abstract:
Knowledge of the spatial distribution of hydraulic conductivity (K) within an aquifer is critical for reliable predictions of solute transport and the development of effective groundwater management and/or remediation strategies. While core analyses and hydraulic logging can provide highly detailed information, such information is inherently localized around boreholes that tend to be sparsely distributed throughout the aquifer volume. Conversely, larger-scale hydraulic experiments like pumping and tracer tests provide relatively low-resolution estimates of K in the investigated subsurface region. As a result, traditional hydrogeological measurement techniques leave a gap in terms of spatial resolution and coverage, and on their own they are often inadequate for characterizing heterogeneous aquifers. Geophysical methods have the potential to bridge this gap. The recent increased interest in the application of geophysical methods to hydrogeological problems is clearly evidenced by the formation and rapid growth of the domain of hydrogeophysics over the past decade (e.g., Rubin and Hubbard, 2005).