100 results for Robustness


Relevance:

10.00%

Publisher:

Abstract:

Daily rhythmicity in the locomotor activity of laboratory animals has been studied in great detail for many decades, but the daily pattern of locomotor activity has not received as much attention in humans. We collected waist-worn accelerometer data from more than 2000 individuals from five countries differing in socioeconomic development and conducted a detailed analysis of human locomotor activity. Body mass index (BMI) was computed from height and weight. Individual activity records lasting 7 days were subjected to cosinor analysis to determine the parameters of the daily activity rhythm: mesor (mean level), amplitude (half the range of excursion), acrophase (time of the peak) and robustness (rhythm strength). The activity records of all individual participants exhibited statistically significant 24-h rhythmicity, with activity increasing noticeably a few hours after sunrise and dropping off around the time of sunset, with a peak at 1:42 pm on average. The acrophase of the daily rhythm was comparable in men and women in each country but varied by as much as 3 h from country to country. Quantification of the socioeconomic stages of the five countries yielded suggestive evidence that more developed countries have more obese residents, who are less active, and who are active later in the day than residents from less developed countries. These results provide a detailed characterization of the daily activity pattern of individual human beings and reveal similarities and differences among people from five countries differing in socioeconomic development.
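The cosinor parameters named above (mesor, amplitude, acrophase, and rhythm robustness) can be recovered from an ordinary least-squares fit of a 24-h cosine. The Python sketch below assumes hourly activity counts and uses the variance explained by the fit as a robustness proxy; the synthetic data and function names are illustrative, not the authors' pipeline.

```python
# Minimal cosinor sketch (assumptions: hourly activity counts over 7 days;
# R^2 of the cosine fit is used as a "robustness" proxy).
import numpy as np

def cosinor_24h(t_hours, activity):
    """Fit activity = mesor + A*cos(omega*t - theta) with a 24-h period."""
    omega = 2.0 * np.pi / 24.0
    X = np.column_stack([np.ones_like(t_hours),
                         np.cos(omega * t_hours),
                         np.sin(omega * t_hours)])
    beta, *_ = np.linalg.lstsq(X, activity, rcond=None)
    mesor, b1, b2 = beta
    amplitude = np.hypot(b1, b2)                    # half the range of the fitted cosine
    acrophase = (np.arctan2(b2, b1) / omega) % 24   # clock time of the fitted peak (h)
    fitted = X @ beta
    robustness = 1.0 - np.sum((activity - fitted) ** 2) / np.sum((activity - activity.mean()) ** 2)
    return mesor, amplitude, acrophase, robustness

# Example with synthetic data peaking in mid-afternoon
t = np.arange(0, 24 * 7, 1.0)
y = 300 + 150 * np.cos(2 * np.pi * (t - 13.7) / 24) + np.random.normal(0, 30, t.size)
print(cosinor_24h(t, y))
```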

Relevance:

10.00%

Publisher:

Abstract:

Our consumption of groundwater, in particular as drinking water or for irrigation, has increased considerably over the years. Many problems then arise, ranging from the prospection of new resources to the remediation of polluted aquifers. Independently of the hydrogeological problem considered, the main challenge remains the characterization of the subsurface properties. A stochastic approach is then required to represent this uncertainty by considering multiple geological scenarios and generating a large number of geostatistical realizations. The main limitation of these approaches is the computational cost of simulating complex flow processes for each of these realizations. In the first part of the thesis, this problem is investigated in the context of uncertainty propagation, where an ensemble of realizations is identified as representative of the subsurface properties. To propagate this uncertainty to the quantity of interest while limiting the computational cost, current methods rely on approximate flow models. This allows the identification of a subset of realizations that represents the variability of the initial ensemble. The complex flow model is then evaluated only for this subset, and inference is made on the basis of these complex responses. Our objective is to improve the performance of this approach by using all the available information. To this end, the subset of approximate and exact responses is used to build an error model, which is then used to correct the remaining approximate responses and predict the response of the complex model. This method maximizes the use of the available information without any perceptible increase in computation time, making the uncertainty propagation more accurate and more robust. The strategy explored in the first chapter consists in learning, from a subset of realizations, the relationship between the approximate and the complex flow models. In the second part of the thesis, this methodology is formalized mathematically by introducing a regression model between functional responses. As this problem is ill-posed, its dimensionality must be reduced. In this respect, the novelty of the work lies in the use of functional principal component analysis (FPCA), which not only performs the dimensionality reduction while maximizing the retained information, but also allows a diagnostic of the quality of the error model in this functional space. The proposed methodology is applied to a pollution problem involving a non-aqueous phase liquid, and the results show that the error model yields a strong reduction of the computation time while correctly estimating the uncertainty. In addition, for each approximate response, a prediction of the complex response is provided by the error model. The concept of a functional error model is therefore relevant for uncertainty propagation, but also for Bayesian inference problems. Markov chain Monte Carlo (MCMC) methods are the algorithms most commonly used to generate geostatistical realizations consistent with the observations.
However, these methods suffer from a very low acceptance rate for high-dimensional problems, resulting in a large number of wasted flow simulations. A two-step approach, the "two-stage MCMC", was introduced to avoid unnecessary simulations of the complex model through a preliminary evaluation of the proposal. In the third part of the thesis, the approximate flow model coupled with an error model serves as the preliminary evaluation for the two-stage MCMC. We demonstrate an increase in the acceptance rate by a factor of 1.5 to 3 compared with a classical MCMC implementation. One question remains open: how to choose the size of the training set and how to identify the realizations that optimize the construction of the error model. This requires an iterative strategy so that, with each new flow simulation, the error model is improved by incorporating the new information. This is developed in the fourth part of the thesis, where the methodology is applied to a saline intrusion problem in a coastal aquifer. -- Our consumption of groundwater, in particular as drinking water and for irrigation, has considerably increased over the years and groundwater is becoming an increasingly scarce and endangered resource. Nowadays, we are facing many problems ranging from water prospection to sustainable management and remediation of polluted aquifers. Independently of the hydrogeological problem, the main challenge remains dealing with the incomplete knowledge of the underground properties. Stochastic approaches have been developed to represent this uncertainty by considering multiple geological scenarios and generating a large number of realizations. The main limitation of this approach is the computational cost associated with performing complex flow simulations in each realization. In the first part of the thesis, we explore this issue in the context of uncertainty propagation, where an ensemble of geostatistical realizations is identified as representative of the subsurface uncertainty. To propagate this lack of knowledge to the quantity of interest (e.g., the concentration of pollutant in extracted water), it is necessary to evaluate the flow response of each realization. Due to computational constraints, state-of-the-art methods make use of approximate flow simulations to identify a subset of realizations that represents the variability of the ensemble. The complex and computationally heavy flow model is then run for this subset, based on which inference is made. Our objective is to increase the performance of this approach by using all of the available information and not solely the subset of exact responses. Two error models are proposed to correct the approximate responses following a machine learning approach. For the subset identified by a classical approach (here the distance kernel method), both the approximate and the exact responses are known. This information is used to construct an error model and correct the ensemble of approximate responses to predict the "expected" responses of the exact model. The proposed methodology makes use of all the available information without perceptible additional computational costs and leads to an increase in accuracy and robustness of the uncertainty propagation. The strategy explored in the first chapter consists in learning from a subset of realizations the relationship between proxy and exact curves.
In the second part of this thesis, the strategy is formalized in a rigorous mathematical framework by defining a regression model between functions. As this problem is ill-posed, it is necessary to reduce its dimensionality. The novelty of the work comes from the use of functional principal component analysis (FPCA), which not only performs the dimensionality reduction while maximizing the retained information, but also allows a diagnostic of the quality of the error model in the functional space. The proposed methodology is applied to a pollution problem by a non-aqueous phase liquid. The error model allows a strong reduction of the computational cost while providing a good estimate of the uncertainty. The individual correction of the proxy response by the error model leads to an excellent prediction of the exact response, opening the door to many applications. The concept of a functional error model is useful not only in the context of uncertainty propagation, but also, and maybe even more so, to perform Bayesian inference. Markov chain Monte Carlo (MCMC) algorithms are the most common choice to ensure that the generated realizations are sampled in accordance with the observations. However, this approach suffers from a low acceptance rate in high-dimensional problems, resulting in a large number of wasted flow simulations. This led to the introduction of two-stage MCMC, where the computational cost is decreased by avoiding unnecessary simulations of the exact flow model thanks to a preliminary evaluation of the proposal. In the third part of the thesis, a proxy is coupled to an error model to provide an approximate response for the two-stage MCMC set-up. We demonstrate an increase in acceptance rate by a factor of three with respect to one-stage MCMC results. An open question remains: how to choose the size of the learning set and identify the realizations that optimize the construction of the error model. This requires devising an iterative strategy to construct the error model, such that, as new flow simulations are performed, the error model is iteratively improved by incorporating the new information. This is discussed in the fourth part of the thesis, in which we apply this methodology to a problem of saline intrusion in a coastal aquifer.
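A minimal sketch of the functional error-model idea described above: proxy and exact response curves from a small training subset are projected onto principal components, a linear map is fitted between the two sets of scores, and that map corrects the remaining proxy responses. Plain PCA on discretized curves stands in for FPCA, and the linear regressor, array shapes, and toy data are assumptions for illustration, not the thesis implementation.

```python
# Sketch of a functional error model (assumptions: responses are curves on a
# common time grid; ordinary PCA on discretized curves stands in for FPCA; a
# linear map between scores stands in for the regression model).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

def fit_error_model(proxy_train, exact_train, n_components=5):
    """Learn a map from proxy curves to exact curves on a training subset."""
    pca_proxy = PCA(n_components).fit(proxy_train)
    pca_exact = PCA(n_components).fit(exact_train)
    reg = LinearRegression().fit(pca_proxy.transform(proxy_train),
                                 pca_exact.transform(exact_train))
    return pca_proxy, pca_exact, reg

def correct(proxy_curves, model):
    """Predict the 'expected' exact curves for uncorrected proxy responses."""
    pca_proxy, pca_exact, reg = model
    scores = reg.predict(pca_proxy.transform(proxy_curves))
    return pca_exact.inverse_transform(scores)

# Usage: proxy_all has shape (n_realizations, n_times); the exact model was
# run only for the realizations in train_idx.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)
proxy_all = np.array([np.exp(-t / s) for s in rng.uniform(0.1, 0.5, 100)])
exact_all = proxy_all ** 1.1 + 0.02 * rng.standard_normal(proxy_all.shape)
train_idx = rng.choice(100, 20, replace=False)
model = fit_error_model(proxy_all[train_idx], exact_all[train_idx])
corrected = correct(proxy_all, model)   # cheap surrogate for the exact responses
```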

Relevance:

10.00%

Publisher:

Abstract:

BACKGROUND: The self-navigation technique has recently emerged for free-breathing cardiovascular magnetic resonance (CMR) and is expected to deliver high-quality data with a high success rate. The purpose of this study was to test the hypothesis that self-navigated 3D-CMR enables the reliable assessment of cardiovascular anatomy in patients with congenital heart disease (CHD) and to define factors that affect image quality. METHODS: CHD patients aged ≥2 years who were referred for CMR for initial assessment or for a follow-up study were included and underwent free-breathing self-navigated 3D CMR at 1.5T. Performance criteria were: correct description of cardiac segmental anatomy, overall image quality, coronary artery visibility, and reproducibility of great-vessel diameter measurements. Factors associated with insufficient image quality were identified using multivariate logistic regression. RESULTS: Self-navigated CMR was performed in 105 patients (55% male, 23 ± 12 y). Correct segmental description was achieved in 93% and 96% of cases for observers 1 and 2, respectively. Diagnostic quality was obtained in 90% of examinations and increased to 94% in contrast-enhanced examinations. The left anterior descending, circumflex, and right coronary arteries were visualized in 93%, 87%, and 98% of cases, respectively. Younger age, higher heart rate, lower ejection fraction, and lack of contrast medium were independently associated with reduced image quality. However, a similar rate of diagnostic image quality was obtained in children and adults. CONCLUSION: In patients with CHD, self-navigated free-breathing CMR provides high-resolution 3D visualization of the heart and great vessels with excellent robustness.
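A sketch of the multivariate logistic regression step, assuming one row per examination with the four predictors named above and a binary "insufficient image quality" outcome; the toy data and column names are illustrative, not the study dataset.

```python
# Sketch of the multivariate logistic regression (toy data; odds ratios > 1
# indicate higher odds of insufficient image quality).
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.DataFrame({
    "age_years":         [8, 14, 35, 22, 5, 41, 17, 29, 11, 50],
    "heart_rate_bpm":    [95, 80, 62, 70, 110, 58, 85, 66, 90, 60],
    "ejection_fraction": [55, 60, 48, 62, 45, 65, 52, 58, 50, 63],
    "contrast_given":    [0, 1, 1, 0, 0, 1, 0, 1, 1, 1],
    "insufficient_iq":   [1, 0, 0, 0, 1, 0, 1, 0, 1, 0],
})

X = df[["age_years", "heart_rate_bpm", "ejection_fraction", "contrast_given"]]
y = df["insufficient_iq"]

clf = LogisticRegression(max_iter=1000).fit(X, y)   # L2-regularized fit
odds_ratios = dict(zip(X.columns, np.exp(clf.coef_[0])))
print(odds_ratios)
```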

Relevance:

10.00%

Publisher:

Abstract:

PURPOSE: Because desmoid tumors exhibit an unpredictable clinical course, translational research is crucial to identify predictive factors of progression in addition to the clinical parameters. The main issue is to detect patients who are at a higher risk of progression. The aim of this work was to identify molecular markers that can predict progression-free survival (PFS). EXPERIMENTAL DESIGN: Gene-expression screening was conducted on 115 available independent untreated primary desmoid tumors using cDNA microarrays. We established a prognostic gene-expression signature composed of 36 genes. To test robustness, we randomly generated 1,000 36-gene signatures, compared their association with outcome to that of our defined 36-gene molecular signature, and calculated the positive predictive value (PPV) and negative predictive value (NPV). RESULTS: Multivariate analysis showed that our molecular signature had a significant impact on PFS, whereas no clinical factor had any prognostic value. Among the 1,000 random signatures generated, 56.7% were significant and none was more significant than our 36-gene molecular signature. PPV and NPV were high (75.58% and 81.82%, respectively). Finally, the top two genes downregulated in non-recurrent cases were FECH and STOML2, and the top gene upregulated in non-recurrent cases was TRIP6. CONCLUSIONS: By analyzing expression profiles, we have identified a gene-expression signature that is able to predict PFS. This tool may be useful for prospective clinical studies. Clin Cancer Res; 21(18); 4194-200. ©2015 AACR.
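A sketch of the random-signature robustness check described above, assuming an expression matrix and a binary progression label; the AUC of a mean-expression risk score stands in for the survival-based association statistic used in the study, and the data are synthetic.

```python
# Compare the observed 36-gene signature against 1,000 random 36-gene
# signatures (synthetic data; AUC of a mean-expression score is an
# illustrative stand-in for the study's association statistic).
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n_tumors, n_genes, k = 115, 5000, 36
expr = rng.standard_normal((n_tumors, n_genes))      # expression matrix
progressed = rng.integers(0, 2, n_tumors)            # binary progression label
signature = rng.choice(n_genes, k, replace=False)    # stands in for the 36-gene list

def association(gene_idx):
    score = expr[:, gene_idx].mean(axis=1)            # simple risk score
    return roc_auc_score(progressed, score)

observed = association(signature)
random_assocs = np.array([association(rng.choice(n_genes, k, replace=False))
                          for _ in range(1000)])
# Fraction of random 36-gene signatures at least as strongly associated
p_like = np.mean(random_assocs >= observed)
print(observed, p_like)
```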

Relevance:

10.00%

Publisher:

Abstract:

Lexical diversity measures are notoriously sensitive to variations of sample size and recent approaches to this issue typically involve the computation of the average variety of lexical units in random subsamples of fixed size. This methodology has been further extended to measures of inflectional diversity such as the average number of wordforms per lexeme, also known as the mean size of paradigm (MSP) index. In this contribution we argue that, while random sampling can indeed be used to increase the robustness of inflectional diversity measures, using a fixed subsample size is only justified under the hypothesis that the corpora that we compare have the same degree of lexematic diversity. In the more general case where they may have differing degrees of lexematic diversity, a more sophisticated strategy can and should be adopted. A novel approach to the measurement of inflectional diversity is proposed, aiming to cope not only with variations of sample size, but also with variations of lexematic diversity. The robustness of this new method is empirically assessed and the results show that while there is still room for improvement, the proposed methodology considerably attenuates the impact of lexematic diversity discrepancies on the measurement of inflectional diversity.
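A minimal sketch of the fixed-size subsampling baseline discussed above, assuming a corpus represented as (lexeme, wordform) tokens and defining MSP as distinct wordforms divided by distinct lexemes in each subsample; the proposed lexematic-diversity-aware variant is not reproduced here.

```python
# Mean size of paradigm (MSP) averaged over random subsamples of fixed size
# (illustrative corpus representation and function names).
import random

def msp(sample):
    """Distinct wordforms per distinct lexeme in one subsample."""
    lexemes = {lemma for lemma, _ in sample}
    wordforms = set(sample)
    return len(wordforms) / len(lexemes)

def mean_msp(corpus, subsample_size, n_draws=1000, seed=0):
    """Average MSP over random subsamples of a fixed number of tokens."""
    rng = random.Random(seed)
    return sum(msp(rng.sample(corpus, subsample_size))
               for _ in range(n_draws)) / n_draws

# Toy corpus: tokens annotated as (lexeme, wordform)
corpus = [("go", "went"), ("go", "goes"), ("go", "gone"),
          ("cat", "cat"), ("cat", "cats"), ("run", "ran"),
          ("run", "running"), ("be", "is"), ("be", "was"), ("be", "were")]
print(mean_msp(corpus, subsample_size=6))
```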

Relevance:

10.00%

Publisher:

Abstract:

BACKGROUND: Coronary artery disease (CAD) continues to be one of the top public health burdens. Perfusion cardiovascular magnetic resonance (CMR) is generally accepted for detecting CAD, while data on its cost-effectiveness are scarce. Therefore, the goal of the study was to compare the costs of a CMR-guided strategy vs two invasive strategies in a large CMR registry. METHODS: In 3'647 patients with suspected CAD from the EuroCMR registry (59 centers/18 countries), costs were calculated for diagnostic examinations (CMR, X-ray coronary angiography (CXA) with/without FFR), revascularizations, and complications during a 1-year follow-up. Patients with ischemia-positive CMR underwent invasive CXA and revascularization at the discretion of the treating physician (=CMR + CXA-strategy). In the hypothetical invasive arm, costs were calculated for an initial CXA and an FFR in vessels with ≥50 % stenoses (=CXA + FFR-strategy), and the same proportions of revascularizations and complications were applied as in the CMR + CXA-strategy. In the CXA-only strategy, costs included those for CXA and for revascularization of all ≥50 % stenoses. To calculate the proportion of patients with ≥50 % stenoses, the stenosis-FFR relationship from the literature was used. Costs of the three strategies were determined from a third-party payer perspective in 4 healthcare systems. RESULTS: Revascularizations were performed in 6.2 %, 4.5 %, and 12.9 % of all patients, patients with atypical chest pain (n = 1'786), and patients with typical angina (n = 582), respectively; complications (=all-cause death and non-fatal infarction) occurred in 1.3 %, 1.1 %, and 1.5 %, respectively. The CMR + CXA-strategy reduced costs by 14 %, 34 %, 27 %, and 24 % in the German, UK, Swiss, and US contexts, respectively, when compared to the CXA + FFR-strategy; and by 59 %, 52 %, 61 % and 71 %, respectively, versus the CXA-only strategy. In patients with typical angina, cost savings by CMR + CXA vs CXA + FFR were minimal in the German system (2.3 %), intermediate in the US and Swiss systems (11.6 % and 12.8 %, respectively), and substantial in the UK system (18.9 %). Sensitivity analyses confirmed the robustness of the results. CONCLUSIONS: A CMR + CXA-strategy for patients with suspected CAD provides a substantial cost reduction compared to a hypothetical CXA + FFR-strategy in patients with low to intermediate disease prevalence. However, in the subgroup of patients with typical angina, cost savings were only minimal to moderate.
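A sketch of the per-patient cost comparison across the three strategies; all unit costs, test-positivity rates, and the stenosis-to-FFR-positive fraction below are illustrative placeholders, not the registry's tariffs or results.

```python
# Expected per-patient cost under the three diagnostic strategies
# (placeholder unit costs and probabilities for illustration only).
COST = {"cmr": 600.0, "cxa": 1500.0, "ffr": 700.0, "revasc": 9000.0}

p_cmr_positive = 0.20     # fraction sent on to CXA after an ischemia-positive CMR
p_stenosis_ge50 = 0.30    # fraction with >=50% stenosis at CXA
p_ffr_positive = 0.35     # fraction of >=50% stenoses that are FFR-positive

def cmr_cxa_strategy():
    # Everyone gets CMR; only ischemia-positive patients proceed to CXA and,
    # if needed, revascularization.
    return COST["cmr"] + p_cmr_positive * (COST["cxa"] + p_ffr_positive * COST["revasc"])

def cxa_ffr_strategy():
    # Everyone gets CXA; FFR in vessels with >=50% stenosis; revascularize FFR-positive.
    return COST["cxa"] + p_stenosis_ge50 * (COST["ffr"] + p_ffr_positive * COST["revasc"])

def cxa_only_strategy():
    # Everyone gets CXA; revascularize all >=50% stenoses.
    return COST["cxa"] + p_stenosis_ge50 * COST["revasc"]

for name, fn in [("CMR+CXA", cmr_cxa_strategy),
                 ("CXA+FFR", cxa_ffr_strategy),
                 ("CXA-only", cxa_only_strategy)]:
    print(f"{name}: {fn():.0f} per patient")
```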

Relevance:

10.00%

Publisher:

Abstract:

Dixon techniques are among the methods used to suppress the fat signal in MRI. They present many advantages compared with other fat suppression techniques, including (1) the robustness of fat signal suppression, (2) the possibility of combining these techniques with all types of sequences (gradient echo, spin echo) and different weightings (T1-, T2-, proton density-, intermediate-weighted sequences), and (3) the availability of images both with and without fat suppression from a single acquisition. These advantages have opened many applications in musculoskeletal imaging. We first review the technical aspects of Dixon techniques, including their advantages and disadvantages. We then illustrate their applications for the imaging of different body parts, as well as for tumors, neuromuscular disorders, and metallic hardware.
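The core of the two-point Dixon method is the pixel-wise combination of in-phase (water + fat) and opposed-phase (water − fat) images. The sketch below assumes ideal magnitude images without B0 inhomogeneity; real reconstructions add phase correction, which is omitted here.

```python
# Minimal two-point Dixon sketch (ideal in-phase and opposed-phase images;
# no B0 inhomogeneity or phase-error handling).
import numpy as np

def two_point_dixon(in_phase, opposed_phase):
    """Separate water and fat from in-phase (W+F) and opposed-phase (W-F) images."""
    water = 0.5 * (in_phase + opposed_phase)
    fat = 0.5 * (in_phase - opposed_phase)
    return water, fat

# Toy 2x2 "images": one water-dominant and one fat-containing pixel per row
ip = np.array([[100.0, 80.0], [60.0, 90.0]])   # W + F
op = np.array([[ 90.0, 20.0], [40.0, 70.0]])   # W - F
water, fat = two_point_dixon(ip, op)
print(water, fat, sep="\n")
```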

Relevance:

10.00%

Publisher:

Abstract:

BACKGROUND: Most available pharmacotherapies for alcohol-dependent patients target abstinence; however, reduced alcohol consumption may be a more realistic goal. Using randomized clinical trial (RCT) data, a previous microsimulation model evaluated the clinical relevance of reduced consumption in terms of avoided alcohol-attributable events. Using real-life observational data, the current analysis aimed to adapt the model and confirm previous findings about the clinical relevance of reduced alcohol consumption. METHODS: Based on the prospective observational CONTROL study, which evaluated daily alcohol consumption among alcohol-dependent patients, the model predicted the probability of drinking any alcohol on a given day. Predicted daily alcohol consumption was simulated in a hypothetical sample of 200,000 patients observed over a year. Individual total alcohol consumption (TAC) and number of heavy drinking days (HDD) were derived. Using published risk equations, the probabilities of alcohol-attributable adverse health events (e.g., hospitalizations or death) corresponding to the simulated consumption were computed and aggregated for categories of patients defined by HDDs and TAC (expressed per 100,000 patient-years). Sensitivity analyses tested model robustness. RESULTS: Shifting from >220 HDDs per year to 120-140 HDDs, and from 36,000-39,000 g TAC per year (120-130 g/day) to 15,000-18,000 g TAC per year (50-60 g/day), substantially reduced the incidence of events (14,588 and 6,148 events avoided per 100,000 patient-years, respectively). Results were robust to sensitivity analyses. CONCLUSIONS: This study corroborates the previous microsimulation modeling approach and, using real-life data, confirms the RCT-based finding that reduced alcohol consumption is a relevant objective for consideration in alcohol dependence management to improve public health.
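A sketch of the daily-consumption microsimulation logic: simulate a drinking probability per patient-day, aggregate TAC and HDD over a year, and feed them to a risk equation. The drinking probabilities, grams-per-day distribution, and risk equation below are illustrative placeholders, not the CONTROL-based model or the published risk equations.

```python
# Toy microsimulation of daily alcohol consumption over one year
# (all distributions and the risk equation are placeholders).
import numpy as np

rng = np.random.default_rng(1)
N_PATIENTS, N_DAYS = 10_000, 365
HEAVY_DAY_G = 60.0                    # threshold for a heavy drinking day (illustrative)

p_drink = rng.beta(2, 2, N_PATIENTS)                           # per-patient daily drinking probability
drinks_today = rng.random((N_PATIENTS, N_DAYS)) < p_drink[:, None]
grams = drinks_today * rng.gamma(4.0, 25.0, (N_PATIENTS, N_DAYS))  # g alcohol per day

tac = grams.sum(axis=1)                                        # total alcohol consumption, g/year
hdd = (grams >= HEAVY_DAY_G).sum(axis=1)                       # heavy drinking days per year

def p_event(tac_g, hdd_days):
    """Placeholder annual event risk increasing with TAC and HDD."""
    return 1.0 - np.exp(-(tac_g / 100_000.0 + hdd_days / 2_000.0))

events_per_100k = 100_000 * p_event(tac, hdd).mean()
print(round(events_per_100k))
```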

Relevance:

10.00%

Publisher:

Abstract:

NLRC5, a member of the NOD-like receptor (NLR) protein family, has recently been characterized as the master transcriptional regulator of MHCI molecules in lymphocytes, in which it is highly expressed. However, its role in activated dendritic cells (DCs), which are instrumental in initiating T cell responses, remained elusive. We show in this study that, following stimulation of DCs with inflammatory stimuli, not only did NLRC5 levels increase, but so did its importance in directing MHCI transcription. Despite markedly reduced mRNA and intracellular H2-K levels, we unexpectedly observed nearly normal H2-K surface display in Nlrc5(-/-) DCs. Importantly, this discrepancy between a strong intracellular and a mild surface defect in H2-K levels was also observed in DCs with H2-K transcription defects independent of Nlrc5. Hence, alongside demonstrating the importance of NLRC5 in MHCI transcription in activated DCs, we uncover a general mechanism counteracting low MHCI surface expression. In agreement with the decreased amount of newly synthesized MHCI, Nlrc5(-/-) DCs exhibited a defective capacity to display endogenous Ags. However, neither T cell priming by endogenous Ags nor cross-priming ability was substantially affected in activated Nlrc5(-/-) DCs. Altogether, these data show that Nlrc5 deficiency, despite significantly affecting MHCI transcription and Ag display, is not sufficient to hinder T cell activation, underlining the robustness of the T cell priming process by activated DCs.

Relevance:

10.00%

Publisher:

Abstract:

High-resolution mass spectrometry (HRMS) has traditionally been associated with qualitative and research analysis, and QQQ-MS with quantitative and routine analysis. This view is now challenged and, for this reason, we have evaluated the quantitative LC-MS performance of a new high-resolution mass spectrometer, a Q-orbitrap-MS, and compared the results with those obtained on a recent triple-quadrupole MS (QQQ-MS). High-resolution full-scan (HR-FS) and MS/MS acquisitions have been tested with real plasma extracts or pure standards. Limits of detection, dynamic range, mass accuracy, and false positive or false negative detections have been determined or investigated with protease inhibitors, tyrosine kinase inhibitors, steroids, and metanephrines. Our quantitative results show that today's available HRMS instruments are reliable and sensitive quantitative tools, comparable in quantitative performance to QQQ-MS. Taking into account their versatility, user-friendliness, and robustness, we believe that HRMS should increasingly be seen as key instruments in quantitative LC-MS analyses. In this scenario, most targeted LC-HRMS analyses should be performed by HR-FS, recording virtually "all" ions. In addition to absolute quantifications, HR-FS will allow the relative quantification of hundreds of metabolites in plasma, revealing an individual's metabolome and exposome. This phenotyping of known metabolites should promote HRMS in the clinical environment. A few other LC-HRMS analyses should be performed in single-ion-monitoring or MS/MS mode when increased sensitivity and/or detection selectivity is necessary.