44 results for P2P and networked data management
at Université de Lausanne, Switzerland
Abstract:
Quantitative information from magnetic resonance imaging (MRI) may substantiate clinical findings and provide additional insight into the mechanism of clinical interventions in therapeutic stroke trials. The PERFORM study is exploring the efficacy of terutroban versus aspirin for secondary prevention in patients with a history of ischemic stroke. We report on the design of an exploratory longitudinal MRI follow-up study that was performed in a subgroup of the PERFORM trial. An international multi-centre longitudinal follow-up MRI study was designed for different MR systems, employing safety and efficacy readouts: new T2 lesions, new DWI lesions, whole brain volume change, hippocampal volume change, changes in tissue microstructure as depicted by mean diffusivity and fractional anisotropy, vessel patency on MR angiography, and the presence and development of new microbleeds. A total of 1,056 patients (men and women ≥ 55 years) were included. The data analysis included 3D reformation, image registration of different contrasts, tissue segmentation, and automated lesion detection. This large international multi-centre study demonstrates how new MRI readouts can be used to provide key information on the evolution of cerebral tissue lesions and of changes within the macrovasculature after atherothrombotic stroke in a large sample of patients.
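To make the registration step of such a pipeline concrete, here is a minimal sketch of aligning a follow-up T2-weighted volume to its baseline for new-lesion comparison, assuming the SimpleITK library; the file names are placeholders, and this is an illustrative stand-in rather than the study's actual analysis code:

```python
import SimpleITK as sitk

# Placeholder paths: baseline and follow-up T2-weighted volumes.
fixed = sitk.ReadImage("baseline_t2.nii.gz", sitk.sitkFloat32)
moving = sitk.ReadImage("followup_t2.nii.gz", sitk.sitkFloat32)

# Rigid registration with a mutual-information metric, which also
# works across different contrasts (e.g. T2 to DWI).
reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetOptimizerAsRegularStepGradientDescent(
    learningRate=1.0, minStep=1e-4, numberOfIterations=200)
reg.SetInitialTransform(
    sitk.CenteredTransformInitializer(fixed, moving, sitk.Euler3DTransform()),
    inPlace=False)
reg.SetInterpolator(sitk.sitkLinear)

transform = reg.Execute(fixed, moving)

# Resample the follow-up scan into baseline space; voxelwise comparison
# can then flag candidate new T2 lesions.
aligned = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
sitk.WriteImage(aligned, "followup_t2_aligned.nii.gz")
```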
Abstract:
It is essential for organizations to compress detailed sets of information into more comprehensible sets, thereby establishing both sharp data compression and good decision-making. In chapter 1, I review and structure the literature on information aggregation in management accounting research. I outline the cost-benefit trade-off that management accountants need to consider when they decide on the optimal levels of information aggregation. Beyond the fundamental information content perspective, organizations also have to account for cognitive and behavioral perspectives. I elaborate on these aspects, differentiating between research in cost accounting, budgeting and planning, and performance measurement. In chapter 2, I focus on a specific bias that arises when probabilistic information is aggregated. In budgeting and planning, for example, organizations need to estimate mean costs and durations of projects, as the mean is the only measure of central tendency that is linear. Unlike the mean, measures such as the mode or median cannot simply be added up. Given the specific shape of cost and duration distributions, estimating mode or median values will result in underestimations of total project costs and durations. In two experiments, I find that participants tend to estimate mode values rather than mean values, resulting in large distortions of estimates for total project costs and durations.
I also provide a strategy that partly mitigates this bias. In the third chapter, I conduct an experimental study to compare two approaches to time estimation for cost accounting, i.e., traditional activity-based costing (ABC) and time-driven ABC (TD-ABC). Contrary to claims made by proponents of TD-ABC, I find that TD-ABC is not necessarily suitable for capacity computations. However, I also provide evidence that TD-ABC seems better suited for cost allocations than traditional ABC.
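Chapter 2's additivity argument is easy to check numerically: for right-skewed cost distributions such as the lognormal, the mode of each task lies well below its mean, and only means add up linearly across tasks. A minimal sketch (the lognormal parameters and task count are illustrative assumptions, not values from the experiments):

```python
import numpy as np

rng = np.random.default_rng(0)

# Ten project tasks with right-skewed (lognormal) cost distributions.
mu, sigma = 2.0, 0.8          # illustrative parameters, same for each task
n_tasks = 10

# Analytical moments of a lognormal(mu, sigma):
mean_per_task = np.exp(mu + sigma**2 / 2)   # means are linear: they add up
mode_per_task = np.exp(mu - sigma**2)       # the mode sits below the mean

# Monte Carlo estimate of the expected total project cost.
samples = rng.lognormal(mu, sigma, size=(100_000, n_tasks))
expected_total = samples.sum(axis=1).mean()

print(f"sum of per-task means: {n_tasks * mean_per_task:10.1f}")
print(f"simulated E[total]:    {expected_total:10.1f}")  # ~equal: E is linear
print(f"sum of per-task modes: {n_tasks * mode_per_task:10.1f}")  # underestimate
```

Summing the per-task modes lands far below the simulated expected total, which is exactly the bias the experiments document when participants report "most likely" rather than mean values.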
Abstract:
Using data from the Public Health Service, we studied the demographic and clinical characteristics of 1,782 patients enrolled in methadone maintenance treatment (MMT) during 2001 in the Swiss canton of Vaud, comparing our findings with the results of a previous study covering 1976 to 1986. In 2001, most patients (76.9%) were treated in general practice. Mortality is low in this MMT population (1%/year). While patient age and sex profiles were similar to those found in the earlier study, we did observe a substantial increase in the number of patients and in the number of practitioners treating MMT patients, probably reflecting low-threshold governmental policies and the creation of specialized centers. In conclusion, easier access to MMT increases the number of patients treated, but new concerns about the quality of management emerge: concomitant benzodiazepine prescription; low rates of screening for hepatitis B, hepatitis C, and HIV; and social and psychiatric concerns.
Abstract:
The present review will briefly summarize the interplay between coagulation and inflammation, highlighting possible effects of direct inhibition of factor Xa and thrombin beyond anticoagulation. Additionally, the rationale for the use of the new direct oral anticoagulants (DOACs) for indications such as cancer-associated venous thromboembolism (CAT), mechanical heart valves, thrombotic anti-phospholipid syndrome (APS), and heparin-induced thrombocytopenia (HIT) will be explored. Published data on patients with cancer or mechanical heart valves treated with DOACs will be discussed, as well as planned studies in APS and HIT. Although published evidence is at present insufficient for recommending DOACs in the above-mentioned indications, there are good arguments in favor of clinical trials investigating their efficacy in these contexts. Direct inhibition of factor Xa or thrombin may also reveal interesting effects beyond anticoagulation.
Abstract:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending corresponding approaches beyond the local scale still represents a major challenge, yet is critically important for the development of reliable groundwater flow and contaminant transport models. To address this issue, I have developed a hydrogeophysical data integration technique based on a two-step Bayesian sequential simulation procedure that is specifically targeted towards larger-scale problems. The objective is to simulate the distribution of a target hydraulic parameter based on spatially exhaustive, but poorly resolved, measurements of a pertinent geophysical parameter and locally highly resolved, but spatially sparse, measurements of the considered geophysical and hydraulic parameters. To this end, my algorithm links the low- and high-resolution geophysical data via a downscaling procedure before relating the downscaled regional-scale geophysical data to the high-resolution hydraulic parameter field. I first illustrate the application of this novel data integration approach to a realistic synthetic database consisting of collocated high-resolution borehole measurements of the hydraulic and electrical conductivities and spatially exhaustive, low-resolution electrical conductivity estimates obtained from electrical resistivity tomography (ERT). The overall viability of this method is tested and verified by performing and comparing flow and transport simulations through the original and simulated hydraulic conductivity fields. The corresponding results indicate that the proposed data integration procedure does indeed allow for obtaining faithful estimates of the larger-scale hydraulic conductivity structure and reliable predictions of the transport characteristics over medium- to regional-scale distances. The approach is then applied to a corresponding field scenario consisting of collocated high-resolution measurements of the electrical conductivity, as measured using a cone penetrometer testing (CPT) system, and the hydraulic conductivity, as estimated from electromagnetic flowmeter and slug test measurements, in combination with spatially exhaustive low-resolution electrical conductivity estimates obtained from surface-based electrical resistivity tomography (ERT). The corresponding results indicate that the newly developed data integration approach is indeed capable of adequately capturing both the small-scale heterogeneity as well as the larger-scale trend of the prevailing hydraulic conductivity field.
The results also indicate that this novel data integration approach is remarkably flexible and robust and hence can be expected to be applicable to a wide range of geophysical and hydrological data at all scale ranges. In the second part of my thesis, I evaluate in detail the viability of sequential geostatistical resampling as a proposal mechanism for Markov Chain Monte Carlo (MCMC) methods applied to high-dimensional geophysical and hydrological inverse problems, in order to allow for a more accurate and realistic quantification of the uncertainty associated with the thus inferred models. Focusing on a series of pertinent crosshole georadar tomographic examples, I investigate two classes of geostatistical resampling strategies with regard to their ability to efficiently and accurately generate independent realizations from the Bayesian posterior distribution. The corresponding results indicate that, despite its popularity, sequential resampling is rather inefficient at drawing independent posterior samples for realistic synthetic case studies, notably for the practically common and important scenario of pronounced spatial correlation between model parameters. To address this issue, I have developed a new gradual-deformation-based perturbation approach, which is flexible with regard to the number of model parameters as well as the perturbation strength. Compared to sequential resampling, this newly proposed approach proves highly effective in decreasing the number of iterations required for drawing independent samples from the Bayesian posterior distribution.
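The gradual-deformation idea can be sketched compactly: two independent Gaussian realizations are combined as m(θ) = m₁·cos θ + m₂·sin θ, which has the same Gaussian mean and covariance for any θ, so a small θ yields a correlated proposal of tunable strength. A minimal sketch under assumed, illustrative field statistics (not the thesis code):

```python
import numpy as np

rng = np.random.default_rng(1)

def gaussian_field(n, corr_len, rng):
    """Draw a 1-D stationary Gaussian field with exponential covariance."""
    x = np.arange(n)
    cov = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(n))
    return L @ rng.standard_normal(n)

n, corr_len = 100, 10.0
m_current = gaussian_field(n, corr_len, rng)

# Gradual-deformation proposal: rotate towards an independent realization.
# theta controls the perturbation strength (theta = pi/2 -> independent draw).
theta = 0.15
m_indep = gaussian_field(n, corr_len, rng)
m_proposal = m_current * np.cos(theta) + m_indep * np.sin(theta)

# Since cos^2 + sin^2 = 1, the prior mean and covariance are preserved
# exactly, keeping the proposal valid within a Metropolis sampler.
print(np.std(m_current), np.std(m_proposal))  # comparable spread
```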
Abstract:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending the corresponding approaches to the scale of a field site represents a major, and as-of-yet largely unresolved, challenge. To address this problem, we have developed a downscaling procedure based on a non-linear Bayesian sequential simulation approach. The main objective of this algorithm is to estimate the value of the sparsely sampled hydraulic conductivity at non-sampled locations based on its relation to the electrical conductivity logged at collocated wells and to surface resistivity measurements, which are available throughout the studied site. The in situ relationship between the hydraulic and electrical conductivities is described through a non-parametric multivariate kernel density function. Then, a stochastic integration of low-resolution, large-scale electrical resistivity tomography (ERT) data in combination with high-resolution, local-scale downhole measurements of the hydraulic and electrical conductivities is applied. The overall viability of this downscaling approach is tested and validated by comparing flow and transport simulations through the original and the upscaled hydraulic conductivity fields. Our results indicate that the proposed procedure allows us to obtain remarkably faithful estimates of the regional-scale hydraulic conductivity structure and correspondingly reliable predictions of the transport characteristics over relatively long distances.
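The kernel-density step described above, describing the in situ relation between electrical and hydraulic conductivity non-parametrically and then sampling from it, can be illustrated with a toy example. A minimal sketch using scipy's Gaussian KDE on synthetic collocated log-conductivity pairs; the data-generating relation below is an illustrative assumption, not the site data:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)

# Synthetic collocated borehole data: log10 electrical (sigma) and
# hydraulic (K) conductivities with a noisy, non-linear relation.
log_sigma = rng.normal(-1.5, 0.4, size=300)
log_K = (-4.0 + 1.2 * log_sigma + 0.3 * np.sin(3 * log_sigma)
         + rng.normal(0.0, 0.2, size=300))

# Non-parametric joint density f(log_sigma, log_K).
joint = gaussian_kde(np.vstack([log_sigma, log_K]))

def sample_K_given_sigma(ls, n=1, grid=np.linspace(-8.0, -2.0, 400)):
    """Sample log_K from the conditional f(log_K | log_sigma = ls)."""
    pdf = joint(np.vstack([np.full_like(grid, ls), grid]))
    pdf /= pdf.sum()
    return rng.choice(grid, size=n, p=pdf)

# Conditional draws at a location where only sigma is known (e.g. from ERT).
print(sample_K_given_sigma(-1.5, n=5))
```

Because the conditional is sampled rather than averaged, repeated draws propagate the scatter in the petrophysical relation into the simulated hydraulic conductivity field instead of smoothing it away.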
Abstract:
OBJECTIVE: The optimal coronary MR angiography sequence has yet to be determined. We sought to quantitatively and qualitatively compare four coronary MR angiography sequences. SUBJECTS AND METHODS: Free-breathing coronary MR angiography was performed in 12 patients using four imaging sequences (turbo field-echo, fast spin-echo, balanced fast field-echo, and spiral turbo field-echo). Quantitative comparisons, including signal-to-noise ratio, contrast-to-noise ratio, vessel diameter, and vessel sharpness, were performed using a semiautomated analysis tool. Accuracy for detection of hemodynamically significant disease (> 50%) was assessed in comparison with radiographic coronary angiography. RESULTS: Signal-to-noise and contrast-to-noise ratios were markedly increased using the spiral (25.7 ± 5.7 and 15.2 ± 3.9) and balanced fast field-echo (23.5 ± 11.7 and 14.4 ± 8.1) sequences compared with the turbo field-echo (12.5 ± 2.7 and 8.3 ± 2.6) sequence (p < 0.05). Vessel diameter was smaller with the spiral sequence (2.6 ± 0.5 mm) than with the other techniques (turbo field-echo, 3.0 ± 0.5 mm, p = 0.6; balanced fast field-echo, 3.1 ± 0.5 mm, p < 0.01; fast spin-echo, 3.1 ± 0.5 mm, p < 0.01). Vessel sharpness was highest with the balanced fast field-echo sequence (61.6% ± 8.5%, compared with turbo field-echo, 44.0% ± 6.6%; spiral, 44.7% ± 6.5%; fast spin-echo, 18.4% ± 6.7%; p < 0.001). The overall accuracies of the sequences were similar (range, 74% for turbo field-echo to 79% for spiral). Scanning time was longest for the fast spin-echo sequence (10.5 ± 0.6 min) and shortest for the spiral acquisitions (5.2 ± 0.3 min). CONCLUSION: Advantages in signal-to-noise and contrast-to-noise ratios, vessel sharpness, and the qualitative results appear to favor spiral and balanced fast field-echo coronary MR angiography sequences, although subjective accuracy for the detection of coronary artery disease was similar across sequences.
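The signal-to-noise and contrast-to-noise figures above follow from simple region-of-interest arithmetic: SNR is the mean vessel signal over the standard deviation of the background noise, and CNR is the vessel-versus-myocardium signal difference over the same noise estimate. A minimal sketch with made-up ROI values (the numbers are placeholders, not study data):

```python
import numpy as np

# Hypothetical ROI pixel values extracted from one coronary MRA image.
vessel_roi = np.array([310.0, 325.0, 298.0, 340.0, 315.0])
myocardium_roi = np.array([180.0, 172.0, 195.0, 188.0, 176.0])
background_roi = np.array([12.0, -8.0, 5.0, -11.0, 9.0, 2.0])  # signal-free air

noise_sd = background_roi.std(ddof=1)

snr = vessel_roi.mean() / noise_sd                          # signal-to-noise
cnr = (vessel_roi.mean() - myocardium_roi.mean()) / noise_sd  # contrast-to-noise

print(f"SNR = {snr:.1f}, CNR = {cnr:.1f}")
```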
Abstract:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale for the purpose of improving predictions of groundwater flow and solute transport. However, extending corresponding approaches to the regional scale still represents one of the major challenges in the domain of hydrogeophysics. To address this problem, we have developed a regional-scale data integration methodology based on a two-step Bayesian sequential simulation approach. Our objective is to generate high-resolution stochastic realizations of the regional-scale hydraulic conductivity field in the common case where there exist spatially exhaustive but poorly resolved measurements of a related geophysical parameter, as well as highly resolved but spatially sparse collocated measurements of this geophysical parameter and the hydraulic conductivity. To integrate this multi-scale, multi-parameter database, we first link the low- and high-resolution geophysical data via a stochastic downscaling procedure. This is followed by relating the downscaled geophysical data to the high-resolution hydraulic conductivity distribution. After outlining the general methodology of the approach, we demonstrate its application to a realistic synthetic example where we consider as data high-resolution measurements of the hydraulic and electrical conductivities at a small number of borehole locations, as well as spatially exhaustive, low-resolution estimates of the electrical conductivity obtained from surface-based electrical resistivity tomography. The different stochastic realizations of the hydraulic conductivity field obtained using our procedure are validated by comparing their solute transport behaviour with that of the underlying "true" hydraulic conductivity field. We find that, even in the presence of strong subsurface heterogeneity, our proposed procedure allows for the generation of faithful representations of the regional-scale hydraulic conductivity structure and reliable predictions of solute transport over long, regional-scale distances.
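The two-step structure, first stochastically downscaling the coarse geophysical field and then mapping the downscaled values to hydraulic conductivity via the collocated fine-scale data, can be sketched end-to-end on a toy 2-D grid. Everything below (grid sizes, the white-noise residual model, the linear log-log mapping) is an illustrative assumption standing in for the Bayesian sequential simulation used in the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

# Inputs: a coarse ERT-like log-conductivity grid (spatially exhaustive,
# poorly resolved) and sparse collocated fine-scale (log_sigma, log_K) pairs.
coarse = rng.normal(-1.5, 0.3, size=(8, 8))
pairs_sigma = rng.normal(-1.5, 0.4, size=50)
pairs_K = -4.0 + 1.1 * pairs_sigma + rng.normal(0.0, 0.15, size=50)

# Step 1: stochastic downscaling: expand the coarse field to a fine grid
# and add an unresolved-scale residual (here simple white noise).
factor = 8
fine_sigma = np.kron(coarse, np.ones((factor, factor)))   # 64 x 64 grid
fine_sigma += rng.normal(0.0, 0.1, size=fine_sigma.shape)  # residual

# Step 2: relate the downscaled sigma to K using the collocated data
# (here a least-squares log-log fit plus sampled residual scatter).
slope, intercept = np.polyfit(pairs_sigma, pairs_K, 1)
resid_sd = np.std(pairs_K - (slope * pairs_sigma + intercept), ddof=2)
fine_K = (slope * fine_sigma + intercept
          + rng.normal(0.0, resid_sd, size=fine_sigma.shape))

print(fine_K.shape, fine_K.mean())  # one stochastic realization of log K
```

Re-running the sketch with different seeds yields an ensemble of realizations, which is what the validation against the "true" field's transport behaviour operates on.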
Abstract:
The purpose of this study was to evaluate the factor structure and the reliability of the French versions of the Identity Style Inventory (ISI-3) and the Utrecht-Management of Identity Commitments Scale (U-MICS) in a sample of college students (N = 457, 18 to 25 years old). Confirmatory factor analyses supported the hypothesized three-factor solution of the ISI-3 identity styles (i.e. informational, normative, and diffuse-avoidant styles), the one-factor solution of the ISI-3 identity commitment, and the three-factor structure of the U-MICS (i.e. commitment, in-depth exploration, and reconsideration of commitment). Additionally, theoretically consistent and meaningful associations among the ISI-3, U-MICS, and Ego Identity Process Questionnaire (EIPQ) confirmed convergent validity. Overall, the results of the present study indicate that the French versions of the ISI-3 and U-MICS are useful instruments for assessing identity styles and processes, and provide additional support to the cross-cultural validity of these tools.
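A confirmatory three-factor solution like the one tested above can be expressed as a short model specification. A minimal sketch using the semopy package on simulated item data; the item names, loadings, and three-items-per-factor layout are illustrative assumptions (the actual inventories have more items), not the study's materials:

```python
import numpy as np
import pandas as pd
from semopy import Model

rng = np.random.default_rng(4)

# Simulate 457 respondents with a genuine three-factor structure:
# each latent factor drives three observed items plus noise.
factors = rng.normal(size=(457, 3))
items = np.repeat(factors, 3, axis=1) * 0.8 + rng.normal(scale=0.6,
                                                         size=(457, 9))
cols = [f"{f}{i}" for f in ("info", "norm", "diff") for i in (1, 2, 3)]
data = pd.DataFrame(items, columns=cols)

# CFA specification mirroring the hypothesized ISI-3 style factors.
desc = """
informational =~ info1 + info2 + info3
normative =~ norm1 + norm2 + norm3
diffuse_avoidant =~ diff1 + diff2 + diff3
"""
model = Model(desc)
model.fit(data)
print(model.inspect())  # loadings, variances, factor covariances
```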
Abstract:
Neuropathic pain syndromes are characterized by intense and long-lasting pain that is resistant to usual analgesics. Patients are therefore at high risk of decreased quality of life and impaired well-being. Using a case report, we will consider in this article the diagnosis and treatment of neuropathic pain as well as its impact on quality of life, including psychological consequences such as depression and anxiety. We will present simple and reliable scales that can help the general practitioner evaluate the neuropathic component of the pain syndrome and its related psychiatric co-morbidities. This comprehensive approach to pain management should facilitate communication with the patient and help the practitioner select the most appropriate therapeutic strategy, notably the prescription of antidepressants, the efficacy of which we will discuss at the end of the article.