998 results for Diagnostic Algorithms
Abstract:
Our consumption of groundwater, in particular as drinking water and for irrigation, has considerably increased over the years, and groundwater is becoming an increasingly scarce and endangered resource. Nowadays, we face many problems ranging from water prospection to sustainable management and remediation of polluted aquifers. Independently of the hydrogeological problem, the main challenge remains dealing with the incomplete knowledge of the subsurface properties. Stochastic approaches have been developed to represent this uncertainty by considering multiple geological scenarios and generating a large number of geostatistical realizations. The main limitation of these approaches is the computational cost of performing complex flow simulations for each realization. In the first part of the thesis, we explore this issue in the context of uncertainty propagation, where an ensemble of geostatistical realizations is identified as representative of the subsurface uncertainty. To propagate this lack of knowledge to the quantity of interest (e.g., the concentration of pollutant in extracted water), it is necessary to evaluate the flow response of each realization. Due to computational constraints, state-of-the-art methods rely on approximate flow simulations to identify a subset of realizations that represents the variability of the ensemble. The complex and computationally heavy flow model is then run only for this subset, and inference is based on these exact responses. Our objective is to improve the performance of this approach by using all of the available information, not solely the subset of exact responses. Two error models are proposed to correct the approximate responses following a machine learning approach. For the subset identified by a classical approach (here the distance kernel method), both the approximate and the exact responses are known. This information is used to construct an error model, which then corrects the ensemble of approximate responses and predicts the "expected" responses of the exact model. The proposed methodology makes use of all the available information without perceptible additional computational cost and leads to more accurate and robust uncertainty propagation.
The strategy explored in the first chapter consists in learning, from a subset of realizations, the relationship between the proxy and exact responses. In the second part of the thesis, this strategy is formalized in a rigorous mathematical framework by defining a regression model between functions. As this problem is ill-posed, it is necessary to reduce its dimensionality. The novelty of the work comes from the use of functional principal component analysis (FPCA), which not only performs the dimensionality reduction while maximizing the retained information, but also allows a diagnostic of the quality of the error model in the functional space. The proposed methodology is applied to a pollution problem involving a non-aqueous phase liquid. The error model allows a strong reduction of the computational cost while providing a good estimate of the uncertainty. In addition, the individual correction of each proxy response by the error model yields an excellent prediction of the exact response, opening the door to many applications.
The concept of a functional error model is useful not only for uncertainty propagation but also, and perhaps even more so, for Bayesian inference. Markov chain Monte Carlo (MCMC) algorithms are the most common choice to ensure that the generated realizations are sampled in accordance with the observations. However, this approach suffers from low acceptance rates in high-dimensional problems, resulting in a large number of wasted flow simulations. This led to the introduction of two-stage MCMC, in which the computational cost is decreased by avoiding unnecessary simulations of the exact flow model thanks to a preliminary evaluation of the proposal. In the third part of the thesis, a proxy coupled to an error model provides the approximate response for the two-stage MCMC set-up. We demonstrate an increase in acceptance rate by a factor of 1.5 to 3 with respect to one-stage MCMC. An open question remains: how to choose the size of the learning set and how to identify the realizations that optimize the construction of the error model. This requires an iterative strategy, such that, as new flow simulations are performed, the error model is improved by incorporating the new information. This is discussed in the fourth part of the thesis, in which the methodology is applied to a problem of saline intrusion in a coastal aquifer.
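The core idea of the functional error model can be pictured as a regression between the principal-component scores of proxy and exact response curves. Below is a minimal, hypothetical sketch in Python (not the thesis code): it assumes numpy and scikit-learn are available, uses ordinary PCA on discretized curves as a stand-in for FPCA, and the names `proxy_curves`, `exact_curves`, and the synthetic data are placeholders for illustration only.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

# Hypothetical data: each row is a time-discretized flow response curve.
rng = np.random.default_rng(0)
n_realizations, n_times = 200, 100
t = np.linspace(0.0, 1.0, n_times)
true = np.sin(2 * np.pi * np.outer(rng.uniform(0.5, 1.5, n_realizations), t))
exact_curves = true + 0.05 * rng.standard_normal((n_realizations, n_times))
proxy_curves = 0.8 * true + 0.1          # cheap, biased approximation

# Learning set: the small subset for which the exact model was also run.
train = rng.choice(n_realizations, size=30, replace=False)

# Dimensionality reduction of the functional responses (discretized PCA as FPCA stand-in).
pca_proxy = PCA(n_components=5).fit(proxy_curves[train])
pca_exact = PCA(n_components=5).fit(exact_curves[train])

# Error model: regression from proxy scores to exact scores, learned on the subset.
reg = LinearRegression().fit(
    pca_proxy.transform(proxy_curves[train]),
    pca_exact.transform(exact_curves[train]),
)

# Correct all proxy responses to predict the "expected" exact responses.
predicted_exact = pca_exact.inverse_transform(
    reg.predict(pca_proxy.transform(proxy_curves))
)
print(predicted_exact.shape)  # (200, 100)
```

In this sketch the expensive model is run only for the 30 training realizations, while corrected predictions are obtained for all 200, which is the mechanism behind the reported computational savings.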
Abstract:
PURPOSE: Signal detection in 3D medical images depends on many factors, such as foveal and peripheral vision, the type of signal, background complexity, and the speed at which the frames are displayed. In this paper, the authors focus on the speed with which radiologists and naïve observers search through medical images. Prior to the study, the radiologists were asked to estimate the speed at which they scrolled through CT sets; they gave a subjective estimate of 5 frames per second (fps). The aim of this paper is to measure and analyze the speed with which humans scroll through image stacks, presenting a method to visually display observer behavior during the search as well as to measure the accuracy of the decisions. This information will be useful in the development of model observers, mathematical algorithms that can be used to evaluate diagnostic imaging systems. METHODS: The authors performed a series of 3D 4-alternative forced-choice lung nodule detection tasks on volumetric stacks of chest CT images iteratively reconstructed with a lung algorithm. The search strategy of three radiologists and three naïve observers was assessed with an eye-tracker in order to establish where their gaze was fixed during the experiment and to verify that a correct answer was not due only to chance. In a first set of experiments, the observers were restricted to reading the images at three fixed scrolling speeds and were allowed to see each alternative once. In the second set of experiments, the subjects were allowed to scroll through the image stacks at will, with no time or gaze limits. In both the fixed-speed and free-scrolling conditions, the four image stacks were displayed simultaneously. All trials were shown at two different image contrasts. RESULTS: The authors determined a histogram of scrolling speeds in frames per second. The scrolling speed of the naïve observers and the radiologists at the moment the signal was detected was 25-30 fps. For the chosen task, observer performance was affected neither by the image contrast nor by the experience of the observer. However, the naïve observers exhibited a different scrolling pattern than the radiologists, with a tendency toward a higher number of direction changes and a higher number of slices viewed. CONCLUSIONS: The authors have determined a distribution of speeds for volumetric detection tasks. The speed at detection was higher than that subjectively estimated by the radiologists before the experiment. The measured speed information will be useful in the development of 3D model observers, especially anthropomorphic model observers, which try to mimic human behavior.
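As a purely illustrative sketch of how such a scrolling-speed distribution can be derived (not the authors' analysis code), the Python snippet below converts hypothetical logged slice-change timestamps into instantaneous speeds in frames per second and bins them into a histogram:

```python
import numpy as np

# Hypothetical log: timestamps (in seconds) at which the displayed CT slice changed.
timestamps = np.array([0.00, 0.04, 0.08, 0.11, 0.15, 0.20, 0.24, 0.27, 0.31])

# Instantaneous scrolling speed between consecutive slice changes (frames per second).
speeds_fps = 1.0 / np.diff(timestamps)

# Histogram of scrolling speeds, e.g. 5-fps-wide bins from 0 to 50 fps.
counts, bin_edges = np.histogram(speeds_fps, bins=np.arange(0, 55, 5))
for lo, hi, c in zip(bin_edges[:-1], bin_edges[1:], counts):
    print(f"{lo:2.0f}-{hi:2.0f} fps: {c}")
```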
Abstract:
INTRODUCTION: The performance of ultrasound (US) in the diagnosis of acute gouty (monosodium urate, MSU) arthritis and calcium pyrophosphate (CPP) arthritis is not yet well defined. Most studies evaluated US as a basis for diagnosing crystal arthritis in already diagnosed cases of gout, and few prospective studies have been performed. METHODS: One hundred and nine consecutive patients presenting with acute arthritis of suspected microcrystalline origin were prospectively included. All underwent US of the symptomatic joint(s) and of the knees, ankles, and first metatarsophalangeal (MTP) joints by a rheumatologist blinded to the clinical history; 92 also had standard X-rays. Crystal identification was the gold standard. RESULTS: By microscopic analysis, 51 patients had MSU crystals, 28 had CPP crystals, and 9 had both; no crystals were detected in 21, and one patient had septic arthritis. Based on US signs in the symptomatic joint, the sensitivity of US was low for both gout and CPP arthritis (60% for both). In gout, the presence of US signs in the symptomatic joint was highly predictive of the diagnosis (PPV = 92%). When the US diagnosis was based on an examination of multiple joints, the sensitivity for both gout and CPP rose significantly, but the specificity and the PPV decreased. In the absence of US signs in all the joints studied, CPP arthritis was unlikely (NPV = 87%), particularly in patients with no previous attack (NPV = 94%). X-ray of the symptomatic joints was confirmed to be not useful in diagnosing gout and was as sensitive and specific as US in CPP arthritis. CONCLUSIONS: Arthrocentesis remains the key investigation for the diagnosis of acute microcrystalline arthritis. Although US can help in the diagnostic process, its diagnostic performance is only moderate. US should not be limited to the symptomatic joint: examination of multiple joints gives better diagnostic sensitivity at the cost of lower specificity.
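For reference, the reported sensitivity, specificity, PPV and NPV all follow directly from the 2x2 table of ultrasound result versus crystal identification (the gold standard). The short Python sketch below shows those standard formulas with illustrative counts that are not the study data:

```python
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Standard test-performance measures from a 2x2 contingency table."""
    return {
        "sensitivity": tp / (tp + fn),   # positives correctly detected by the test
        "specificity": tn / (tn + fp),   # negatives correctly ruled out by the test
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# Illustrative counts only (not the numbers reported in the study).
print(diagnostic_metrics(tp=36, fp=3, fn=24, tn=37))
```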
Abstract:
Hepatitis C virus (HCV) infection represents one of the most frequent causes of chronic hepatitis, liver cirrhosis and hepatocellular carcinoma worldwide. Great progress has been made in the 25 years since the discovery of HCV, notably in the understanding of the molecular virology, pathogenesis and natural course of the infection, as well as in the prevention, diagnosis and treatment of hepatitis C. Here, we review these recent advances and discuss them in the light of new challenges.
Abstract:
Evaluation of image quality (IQ) in computed tomography (CT) is important to ensure that diagnostic questions are correctly answered whilst keeping the radiation dose to the patient as low as reasonably possible. The assessment of individual aspects of IQ is already a key component of the routine quality control of medical x-ray devices. These values, together with standard dose indicators, can be combined into 'figures of merit' (FOM) that characterise the dose efficiency of CT scanners operating in certain modes. The demand for clinically relevant IQ characterisation has naturally increased with the development of CT technology (detector efficiency, image reconstruction and processing), resulting in the adaptation and evolution of assessment methods. The purpose of this review is to present the spectrum of methods that have been used to characterise image quality in CT, from objective measurements of physical parameters to clinical task-based approaches (i.e. the model observer (MO) approach), including the pure human observer approach. When combined with a dose indicator, a generalised dose efficiency index can be explored within a framework of system and patient dose optimisation. We focus on the IQ methodologies required not only for standard reconstruction but also for iterative reconstruction algorithms. Within this framework, the previously used FOM are presented together with a proposal to update them so that they remain relevant and keep pace with technological progress. The MO approach, which objectively assesses IQ for clinically relevant tasks, represents the most promising method in terms of reproducing radiologist sensitivity performance and is therefore of most relevance in the clinical environment.
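One common way to fold a task-based image-quality measure and a dose indicator into a single figure of merit is a dose-efficiency ratio such as detectability squared per unit dose. The snippet below is only a generic sketch of that idea, under the assumption that the detectability index d' and CTDIvol are the chosen metrics; the specific FOM definitions discussed in the review may differ:

```python
def dose_efficiency_fom(detectability_index: float, ctdi_vol_mGy: float) -> float:
    """Generic dose-efficiency figure of merit: d'^2 per unit dose (here CTDIvol in mGy)."""
    return detectability_index ** 2 / ctdi_vol_mGy

# Example: two protocols with equal detectability but different dose levels.
print(dose_efficiency_fom(detectability_index=3.0, ctdi_vol_mGy=10.0))  # 0.9
print(dose_efficiency_fom(detectability_index=3.0, ctdi_vol_mGy=5.0))   # 1.8
```

Because d' enters squared, halving the dose at constant detectability doubles this index, which is why such ratios are convenient for comparing scanners or reconstruction algorithms at matched task performance.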
Abstract:
Introduction: Evidence suggests that citrullinated fibrin(ogen) may be a potential in vivo target of anti-citrullinated protein/peptide antibodies (ACPA) in rheumatoid arthritis (RA). We compared the diagnostic yield of three enzyme-linked immunosorbent assays (ELISA) using chimeric fibrin/filaggrin citrullinated synthetic peptides (CFFCP1, CFFCP2, CFFCP3) with that of a commercial CCP2-based test in RA, and analyzed their prognostic value in early RA. Methods: Samples from 307 blood donors and from patients with RA (322), psoriatic arthritis (133), systemic lupus erythematosus (119), and hepatitis C infection (84) were assayed using the CFFCP- and CCP2-based tests. Autoantibodies were also analyzed at baseline and during a 2-year follow-up in 98 early RA patients to determine their prognostic value. Results: With cutoffs giving 98% specificity for RA versus blood donors, the sensitivity was 72.1% for CFFCP1, 78.0% for CFFCP2, 71.4% for CFFCP3, and 73.9% for CCP2, with positive predictive values greater than 97% in all cases. CFFCP sensitivity in RA increased to 80.4% without loss of specificity when positivity was defined as any positive anti-CFFCP status. The specificity of the three CFFCP tests versus the other rheumatic populations was high (>90%) and similar to that of the CCP2 test. In early RA, CFFCP1 best identified patients with a poor radiographic outcome. Radiographic progression was faster in the small subgroup of CCP2-negative, CFFCP1-positive patients than in those negative for both autoantibodies. CFFCP antibodies decreased after 1 year, but without any correlation with changes in disease activity. Conclusions: CFFCP-based assays are highly sensitive and specific for RA. Early RA patients with anti-CFFCP1 antibodies, including CCP2-negative patients, show greater radiographic progression.
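A "cutoff giving 98% specificity" is obtained by choosing, on the control (blood donor) distribution, the threshold exceeded by only 2% of controls, and then reading off the sensitivity in the RA group at that threshold. The sketch below illustrates this with simulated antibody titres, not the study data:

```python
import numpy as np

rng = np.random.default_rng(1)
controls = rng.lognormal(mean=1.0, sigma=0.5, size=307)   # simulated blood-donor titres
patients = rng.lognormal(mean=2.0, sigma=0.8, size=322)   # simulated RA titres

# Threshold exceeded by only 2% of controls -> 98% specificity by construction.
cutoff = np.quantile(controls, 0.98)

sensitivity = np.mean(patients > cutoff)
specificity = np.mean(controls <= cutoff)
ppv = (patients > cutoff).sum() / ((patients > cutoff).sum() + (controls > cutoff).sum())
print(f"cutoff={cutoff:.2f}  sensitivity={sensitivity:.1%}  "
      f"specificity={specificity:.1%}  ppv={ppv:.1%}")
```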
Abstract:
Eosinophilic oesophagitis (EoE) was first described a little over 20 years ago. EoE has been defined by a panel of international experts as a "chronic, immune/antigen-mediated, oesophageal disease, characterized clinically by symptoms related to oesophageal dysfunction and histologically by eosinophil-predominant inflammation". A count of ≥15 eosinophils per high-power field has been defined as the histologic diagnostic cutoff. Other conditions associated with oesophageal eosinophilia, such as gastro-oesophageal reflux disease (GERD), PPI-responsive oesophageal eosinophilia, or Crohn's disease, should be excluded before EoE can be diagnosed. This review highlights the latest insights regarding the diagnosis and differential diagnosis of EoE.
Abstract:
Our inability to adequately treat many patients with refractory epilepsy caused by focal cortical dysplasia (FCD), together with surgical inaccessibility and surgical failures, constitutes a significant clinical drawback. Targeting the physiologic features of epileptogenesis in FCD and colocalizing function have enhanced the completeness of surgical resection, the main determinant of outcome. Electroencephalography (EEG)-functional magnetic resonance imaging (fMRI) and magnetoencephalography are helpful in guiding electrode implantation and surgical treatment, and high-frequency oscillations help define the extent of the epileptogenic dysplasia. Ultra-high-field MRI has a role in understanding the laminar organization of the cortex, and fluorodeoxyglucose positron emission tomography (FDG-PET) is highly sensitive for detecting FCD in MRI-negative cases. Multimodal imaging is clinically valuable, either by improving the rate of postoperative seizure freedom or by reducing postoperative deficits, although there is no level 1 evidence that it improves outcomes. Proof of a specific effect of antiepileptic drugs (AEDs) in FCD is lacking. Pathogenic mutations recently described in mammalian target of rapamycin (mTOR) genes in FCD have yielded important insights into novel treatment options with mTOR inhibitors, which might represent an example of personalized treatment of epilepsy based on the known mechanisms of disease. The ketogenic diet (KD) has been demonstrated to be particularly effective in children with epilepsy caused by structural abnormalities, especially FCD. It attenuates epigenetic chromatin modifications, a master regulator of gene expression and functional adaptation of the cell, thereby modifying disease progression; this could imply a lasting benefit of dietary manipulation. Neurostimulation techniques have produced variable clinical outcomes in FCD. In widespread dysplasias, vagus nerve stimulation (VNS) has achieved responder rates above 50%; however, the efficacy of noninvasive cranial nerve stimulation modalities such as transcutaneous VNS (tVNS) and noninvasive VNS (nVNS) requires further study. Although a review of current strategies underscores the serious shortcomings of treatment-resistant cases, initial evidence from novel approaches suggests that future success is possible.
Abstract:
BACKGROUND: Previous observations found a high prevalence of obstructive sleep apnea (OSA) in the hemodialysis population, but the best diagnostic approach remains undefined. We assessed OSA prevalence and the performance of available screening tools in order to propose a specific diagnostic algorithm. METHODS: 104 patients from 6 Swiss hemodialysis centers underwent polygraphy and completed 3 OSA screening scores: STOP-BANG, the Berlin Questionnaire, and the Adjusted Neck Circumference. OSA predictors were identified in a derivation population and used to develop the diagnostic algorithm, which was then validated in an independent population. RESULTS: We found an OSA prevalence of 56% (AHI ≥ 15/h), which was largely underdiagnosed. The screening scores showed poor performance for OSA screening (ROC areas 0.538 [SE 0.093] to 0.655 [SE 0.083]). Age, neck circumference, and time on renal replacement therapy were the best predictors of OSA and were used to develop a screening algorithm with higher discriminatory performance than the classical screening tools (ROC area 0.831 [SE 0.066]). CONCLUSIONS: Our study confirms the high OSA prevalence and highlights the low diagnosis rate of this treatable cardiovascular risk factor in the hemodialysis population. Given the poor performance of existing OSA screening tools, we propose and validate a specific algorithm to identify hemodialysis patients at risk for OSA for whom further sleep investigations should be considered.
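A screening algorithm of this kind is typically a simple predictive model built from the identified predictors (age, neck circumference, time on renal replacement therapy) on the derivation set and then evaluated by the area under the ROC curve on the validation set. The sketch below is a generic illustration with simulated data and an assumed logistic-regression form, not the published algorithm:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n = 104
# Simulated predictors: age (years), neck circumference (cm), time on RRT (months).
X = np.column_stack([
    rng.normal(65, 12, n),
    rng.normal(40, 4, n),
    rng.exponential(48, n),
])
# Simulated outcome: OSA defined as AHI >= 15/h, generated from an assumed risk model.
logit = 0.05 * (X[:, 0] - 65) + 0.3 * (X[:, 1] - 40) + 0.01 * (X[:, 2] - 48) - 0.2
y = rng.random(n) < 1 / (1 + np.exp(-logit))

# Split into derivation and independent validation populations.
derivation, validation = np.arange(0, 70), np.arange(70, n)
model = LogisticRegression(max_iter=1000).fit(X[derivation], y[derivation])
risk = model.predict_proba(X[validation])[:, 1]
print("ROC area on validation set:", roc_auc_score(y[validation], risk))
```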
Abstract:
Communication is an essential element of good medical practice, in pathology as in other disciplines. In contrast to technical or diagnostic skills, communication skills are not easy to define, teach, or assess, and formal rules hardly exist. In this paper, which has a rather personal character and cannot be taken as a set of guidelines, important aspects of communication in pathology are explored. These include what should be communicated to the pathologist on the pathology request form, communication between pathologists during internal (interpathologist) consultation, communication around frozen section diagnoses, the modalities of communicating a final diagnosis, with whom and how critical and unexpected findings should be communicated, (in)adequate routes of communication for pathology diagnoses, who will (or might) receive pathology reports, and what should be communicated, and how, in case of an error or a technical problem. An earlier, more formal description of the responsibilities of a pathologist as communicator and as collaborator in a medical team is added in separate tables. The intention of the paper is to stimulate reflection and discussion rather than to formulate strict rules.
Abstract:
The growth of five variables of the ischiopubic area was analyzed in bone material ranging from birth to old age. The main purpose was to evaluate their significance and capacity for age and sex determination during and after growth. The material consisted of 327 specimens from four documented Western European collections. Growth curves were calculated by polynomial regression for two classical variables of the ischiopubic area (pubis length and the ischiopubic index) and three new variables of the pubic acetabular area (the horizontal and vertical diameters of the pubic acetabular area and the pubic acetabular index). None of the curves showed linear growth, with the exception of the ischiopubic index and the male vertical diameter of the pubic acetabular area. Pubis length has the most complicated growth, expressed by a fifth-degree polynomial. All the variables are useful for adult sex determination, except the pubic acetabular index. The ischiopubic index, the vertical diameter of the pubic acetabular area, and the pubic acetabular index seem to be good variables for subadult sex determination. For age estimation, the best variables, in both archaeological and forensic remains, are the absolute measurements (pubis length and the vertical and horizontal diameters of the pubis). However, pubis length is the best variable for age estimation because it can be applied up to 25 years of age.
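Growth curves of this kind are obtained by fitting a polynomial of a chosen degree to the measured variable as a function of age. The short Python illustration below uses made-up measurements, not the collection data, to show the fifth-degree fit reported for pubis length:

```python
import numpy as np

# Made-up example: age (years) and pubis length (mm) for a handful of specimens.
age = np.array([0.5, 2, 5, 8, 12, 15, 18, 21, 25, 30, 40, 60])
pubis_length = np.array([18, 25, 34, 42, 55, 63, 70, 74, 78, 79, 80, 80])

# Fifth-degree polynomial growth curve, as used for pubis length in the study.
coeffs = np.polyfit(age, pubis_length, deg=5)
curve = np.poly1d(coeffs)

# Predicted pubis length at age 10 from the fitted curve.
print(round(float(curve(10.0)), 1))
```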