419 results for Diagnostic Algorithms


Relevance: 20.00%

Abstract:

BACKGROUND/AIMS: Primary hypoaldosteronism is a rare inborn disorder caused by an aldosterone synthase defect, with life-threatening symptoms in newborns and infants. Diagnosis is often difficult because the plasma aldosterone concentration (PAC) can remain within the normal range, leading to misinterpretation and delayed initiation of life-saving therapy. We aimed to test the suitability of the PAC/plasma renin concentration (PRC) ratio as a tool for the diagnosis of primary hypoaldosteronism in newborns and infants. METHODS: Data from 9 patients aged 15 days to 12 months at the time of diagnosis were collected. The diagnosis of primary hypoaldosteronism was based on clinical and laboratory findings over a period of 12 years in 3 different centers in Switzerland. To enable a valid comparison, the PAC and PRC values were correlated to reference methods. RESULTS: In 6 patients, the PAC/PRC ratio could be determined and was consistently decreased, below 1 (pmol/l)/(mU/l). In 2 patients, renin was reported as plasma renin activity (PRA); their PAC/PRA ratios were also clearly decreased. The diagnosis was subsequently genetically confirmed in 8 patients. CONCLUSION: A PAC/PRC ratio <1 (pmol/l)/(mU/l) and a PAC/PRA ratio <28 (pmol/l)/(ng/ml × h) are reliable tools to identify primary hypoaldosteronism in newborns and infants and help to diagnose this life-threatening disease faster. © 2015 S. Karger AG, Basel.
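
The two cut-offs above reduce to a simple ratio check. A minimal sketch of that arithmetic (illustrative only, not clinical software; the function and parameter names are ours):

    def hypoaldosteronism_ratio_flag(pac_pmol_l, prc_mu_l=None, pra_ng_ml_h=None):
        """Apply the thresholds reported in the abstract:
        PAC/PRC < 1 (pmol/l)/(mU/l) or PAC/PRA < 28 (pmol/l)/(ng/ml x h).
        Returns True when the ratio suggests primary hypoaldosteronism."""
        if prc_mu_l is not None:      # renin reported as plasma renin concentration
            return pac_pmol_l / prc_mu_l < 1.0
        if pra_ng_ml_h is not None:   # renin reported as plasma renin activity
            return pac_pmol_l / pra_ng_ml_h < 28.0
        raise ValueError("Provide either PRC (mU/l) or PRA (ng/ml x h)")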

Relevance: 20.00%

Abstract:

Pneumonia causes significant mortality and morbidity in adults. In a simplified view, two types of pneumonia are described: community-acquired and nosocomial pneumonia, with pneumococcus and Haemophilus influenzae as the main causes of the former and Pseudomonas and various Enterobacteriaceae as the main causes of the latter. The reality is more complex, however, since aspiration pneumonia, pneumonia in immunocompromised patients, and pneumonia in ventilated patients are also distinguished. Culture is particularly important in nosocomial pneumonia because it establishes the antibiotic susceptibility of the infectious agent and allows therapy to be adjusted. In immunocompromised patients the differential diagnosis is much wider, and searching for certain viruses, filamentous fungi and Pneumocystis with new molecular assays can be very informative.

Relevance: 20.00%

Abstract:

BACKGROUND: Lung clearance index (LCI), a marker of ventilation inhomogeneity, is elevated early in children with cystic fibrosis (CF). In infants with CF, however, LCI values are found to be normal even though structural lung abnormalities are often detectable. We hypothesized that this discrepancy is due to inadequate algorithms in the available software package. AIM: Our aim was to challenge the validity of these software algorithms. METHODS: We compared multiple breath washout (MBW) results obtained with the current software algorithms (automatic mode) to results obtained with refined algorithms (manual mode) in 17 asymptomatic infants with CF and 24 matched healthy term-born infants. The main difference between the two analysis methods lies in the calculation of the molar mass differences that the system uses to define the completion of the measurement. RESULTS: In infants with CF, the refined manual mode revealed clearly elevated LCI above 9 in 8 out of 35 measurements (23%), all of which showed LCI values below 8.3 with the automatic mode (paired t-test comparing the means, P < 0.001). Healthy infants showed normal LCI values with both analysis methods (n = 47, paired t-test, P = 0.79). The most relevant cause of falsely normal LCI values in infants with CF using the automatic mode was premature recognition of the end-of-test during the washout. CONCLUSION: We recommend the manual mode for the analysis of MBW outcomes in infants in order to obtain more accurate results. This will allow appropriate use of infant lung function results for clinical and scientific purposes. Pediatr Pulmonol. 2015; 50:970-977. © 2015 Wiley Periodicals, Inc.
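
As a rough illustration of why premature end-of-test recognition matters, here is a minimal sketch of an LCI calculation from per-breath washout data. It assumes the common convention that the washout ends once the end-tidal tracer concentration has stayed below 1/40 of its starting value for three consecutive breaths; this convention and the data format are our assumptions, not the vendor's algorithm:

    def lung_clearance_index(breaths, frc_litres, start_conc):
        """breaths: list of (expired_volume_litres, end_tidal_tracer_conc) per breath.
        Returns the number of lung turnovers (cumulative expired volume / FRC)
        at the end-of-test, or None if the criterion is never reached."""
        target = start_conc / 40.0
        cumulative_expired = 0.0
        consecutive_below = 0
        for expired_volume, end_tidal_conc in breaths:
            cumulative_expired += expired_volume
            consecutive_below = consecutive_below + 1 if end_tidal_conc < target else 0
            if consecutive_below == 3:
                return cumulative_expired / frc_litres
        return None

If the end-of-test criterion fires too early, the cumulative expired volume, and hence the LCI, is underestimated, which is the failure mode the study describes for the automatic mode.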

Relevance: 20.00%

Abstract:

Background. Previous observations found a high prevalence of obstructive sleep apnea (OSA) in the hemodialysis population, but the best diagnostic approach remains undefined. We assessed OSA prevalence and the performance of available screening tools in order to propose a specific diagnostic algorithm. Methods. A total of 104 patients from 6 Swiss hemodialysis centers underwent polygraphy and completed 3 OSA screening scores: STOP-BANG, the Berlin Questionnaire, and the Adjusted Neck Circumference. Predictors of OSA were identified in a derivation population and used to develop the diagnostic algorithm, which was then validated in an independent population. Results. We found an OSA prevalence of 56% (AHI ≥ 15/h), which was largely underdiagnosed. The screening scores performed poorly (ROC areas 0.538 [SE 0.093] to 0.655 [SE 0.083]). Age, neck circumference, and time on renal replacement therapy were the best predictors of OSA and were used to develop a screening algorithm with higher discriminatory performance than the classical screening tools (ROC area 0.831 [0.066]). Conclusions. Our study confirms the high OSA prevalence and highlights the low diagnosis rate of this treatable cardiovascular risk factor in the hemodialysis population. Given the poor performance of existing OSA screening tools, we propose and validate a specific algorithm to identify hemodialysis patients at risk for OSA in whom further sleep investigations should be considered.
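
The abstract names the predictors (age, neck circumference, time on renal replacement therapy) but not the published model or coefficients. Purely as a hypothetical sketch of the kind of screening rule such predictors could feed, here is a logistic-style score with placeholder coefficients that are not the study's algorithm:

    import math

    def osa_risk_probability(age_years, neck_circumference_cm, years_on_rrt,
                             coeffs=(-10.0, 0.05, 0.20, 0.10)):
        """Hypothetical logistic screening score; coefficients are placeholders,
        NOT the validated algorithm from the study."""
        b0, b_age, b_neck, b_rrt = coeffs
        logit = (b0 + b_age * age_years
                 + b_neck * neck_circumference_cm
                 + b_rrt * years_on_rrt)
        return 1.0 / (1.0 + math.exp(-logit))  # estimated probability of AHI >= 15/h

A rule of this form would flag patients above some probability threshold for polygraphy, which mirrors the triage role the authors propose for their algorithm.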

Relevance: 20.00%

Abstract:

Our consumption of groundwater, in particular as drinking water and for irrigation, has considerably increased over the years, and groundwater is becoming an increasingly scarce and endangered resource. Nowadays we face many problems, ranging from the prospection of new resources to sustainable management and the remediation of polluted aquifers. Independently of the hydrogeological problem considered, the main challenge remains dealing with the incomplete knowledge of the subsurface properties. Stochastic approaches have been developed to represent this uncertainty by considering multiple geological scenarios and generating a large number of geostatistical realizations. The main limitation of these approaches is the computational cost of performing complex flow simulations for each realization.

In the first part of the thesis, this issue is explored in the context of uncertainty propagation, where an ensemble of geostatistical realizations is identified as representative of the subsurface uncertainty. To propagate this lack of knowledge to the quantity of interest (e.g., the concentration of pollutant in extracted water), it is necessary to evaluate the flow response of each realization. Due to computational constraints, state-of-the-art methods use approximate flow simulations to identify a subset of realizations that represents the variability of the ensemble. The complex and computationally heavy flow model is then run only for this subset, on the basis of which inference is made. Our objective is to increase the performance of this approach by using all of the available information and not solely the subset of exact responses. Two error models are proposed to correct the approximate responses following a machine learning approach: for the subset identified by a classical approach (here the distance kernel method), both the approximate and the exact responses are known, and this information is used to construct an error model that corrects the ensemble of approximate responses and predicts the expected responses of the exact model. The proposed methodology uses all the available information without perceptible additional computational cost and leads to a more accurate and robust uncertainty propagation.

The strategy explored in the first chapter consists in learning, from a subset of realizations, the relationship between proxy and exact curves. In the second part of the thesis, this strategy is formalized in a rigorous mathematical framework by defining a regression model between functions. As this problem is ill-posed, its dimensionality must be reduced. The novelty of the work comes from the use of functional principal component analysis (FPCA), which not only performs the dimensionality reduction while maximizing the retained information, but also allows a diagnostic of the quality of the error model in the functional space. The proposed methodology is applied to a pollution problem involving a non-aqueous phase liquid. The error model allows a strong reduction of the computational cost while providing a good estimate of the uncertainty, and the individual correction of each proxy response leads to an excellent prediction of the exact response, opening the door to many applications.

The concept of a functional error model is useful not only for uncertainty propagation but also, and perhaps even more so, for Bayesian inference. Markov chain Monte Carlo (MCMC) algorithms are the most common choice to ensure that the generated realizations are sampled in accordance with the observations. However, this approach suffers from a low acceptance rate in high-dimensional problems, resulting in a large number of wasted flow simulations. This led to the introduction of two-stage MCMC, in which the computational cost is decreased by avoiding unnecessary simulations of the exact flow model thanks to a preliminary evaluation of each proposal. In the third part of the thesis, a proxy coupled to an error model provides the approximate response for the two-stage MCMC set-up. We demonstrate an increase in the acceptance rate by a factor of 1.5 to 3 with respect to one-stage MCMC.

An open question remains: how to choose the size of the learning set and how to identify the realizations that optimize the construction of the error model. This requires an iterative strategy so that, as new flow simulations are performed, the error model is improved by incorporating the new information. This is discussed in the fourth part of the thesis, in which the methodology is applied to a saline intrusion problem in a coastal aquifer.
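
A minimal sketch of the core idea of the functional error model, under our own simplifying assumptions (ordinary PCA on discretized curves standing in for FPCA, a linear map between score spaces, and arbitrary array shapes); it is not the thesis code:

    # Learn, from a small training subset where both are available, a map from
    # cheap proxy response curves to exact flow-model response curves, then use
    # it to correct the remaining proxy responses of the ensemble.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression

    def fit_functional_error_model(proxy_train, exact_train, n_components=5):
        """proxy_train, exact_train: arrays of shape (n_train, n_time_steps)."""
        pca_proxy = PCA(n_components=n_components).fit(proxy_train)
        pca_exact = PCA(n_components=n_components).fit(exact_train)
        reg = LinearRegression().fit(pca_proxy.transform(proxy_train),
                                     pca_exact.transform(exact_train))
        return pca_proxy, pca_exact, reg

    def correct_proxy_responses(proxy_curves, pca_proxy, pca_exact, reg):
        """Predict the expected exact responses for an ensemble of proxy curves."""
        scores = reg.predict(pca_proxy.transform(proxy_curves))
        return pca_exact.inverse_transform(scores)

In a two-stage MCMC setting, the corrected response would serve as the cheap first-stage evaluation of each proposal, and only proposals that pass this screen would trigger an exact flow simulation.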

Relevance: 20.00%

Abstract:

PURPOSE: Signal detection on 3D medical images depends on many factors, such as foveal and peripheral vision, the type of signal, background complexity, and the speed at which the frames are displayed. In this paper, the authors focus on the speed with which radiologists and naïve observers search through medical images. Prior to the study, the authors asked the radiologists to estimate the speed at which they scrolled through CT sets; they gave a subjective estimate of 5 frames per second (fps). The aim of this paper is to measure and analyze the speed with which humans scroll through image stacks, presenting a method to visually display the observers' behavior during the search as well as measuring the accuracy of their decisions. This information will be useful in the development of model observers, mathematical algorithms that can be used to evaluate diagnostic imaging systems. METHODS: The authors performed a series of 3D 4-alternative forced-choice lung nodule detection tasks on volumetric stacks of chest CT images iteratively reconstructed with a lung algorithm. The strategy used by three radiologists and three naïve observers was assessed with an eye-tracker in order to establish where their gaze was fixed during the experiment and to verify that correct answers were not due only to chance. In a first set of experiments, the observers were restricted to reading the images at three fixed scrolling speeds and were allowed to see each alternative once. In a second set of experiments, the subjects were allowed to scroll through the image stacks at will, with no time or gaze limits. In both the fixed-speed and free-scrolling conditions, the four image stacks were displayed simultaneously, and all trials were shown at two different image contrasts. RESULTS: The authors were able to determine a histogram of scrolling speeds in frames per second. The scrolling speed of both the naïve observers and the radiologists at the moment the signal was detected was 25-30 fps. For the task chosen, performance was not affected by the contrast or the experience of the observer. However, the naïve observers exhibited a different scrolling pattern than the radiologists, with a tendency toward more direction changes and more slices viewed. CONCLUSIONS: The authors have determined a distribution of speeds for volumetric detection tasks. The speed at detection was higher than that subjectively estimated by the radiologists before the experiment. The measured speed information will be useful in the development of 3D model observers, especially anthropomorphic model observers that try to mimic human behavior.
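
A small sketch of how a scrolling-speed histogram of this kind could be computed from logged display events; the log format (timestamp, slice index pairs) and the bin choice are our assumptions, not the authors' analysis code:

    import numpy as np

    def scrolling_speed_fps(timestamps_s, slice_indices):
        """Instantaneous scrolling speed (frames per second) between
        consecutive slice-display events."""
        dt = np.diff(np.asarray(timestamps_s, dtype=float))
        dslice = np.abs(np.diff(np.asarray(slice_indices, dtype=float)))
        valid = dt > 0
        return dslice[valid] / dt[valid]

    # Example: four display events logged while scrolling at ~25 fps
    speeds = scrolling_speed_fps([0.00, 0.04, 0.08, 0.12], [10, 11, 12, 13])
    hist, bin_edges = np.histogram(speeds, bins=np.arange(0, 45, 5))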

Relevance: 20.00%

Abstract:

INTRODUCTION: The performance of ultrasound (US) in the diagnosis of acute gouty (monosodium urate, MSU) arthritis and calcium pyrophosphate (CPP) arthritis is not yet well defined. Most studies evaluated US as the basis for diagnosing crystal arthritis in already diagnosed cases of gout, and few prospective studies have been performed. METHODS: One hundred nine consecutive patients presenting with acute arthritis suspected to be microcrystalline were prospectively included. All underwent US of the symptomatic joint(s) and of the knees, ankles and 1st metatarsophalangeal (MTP) joints by a rheumatologist blinded to the clinical history; 92 also had standard X-rays. Crystal identification was the gold standard. RESULTS: By microscopic analysis, 51 patients had MSU crystals, 28 had CPP crystals and 9 had both; no crystals were detected in 21, and one patient had septic arthritis. Based on US signs in the symptomatic joint, the sensitivity of US was low for both gout and CPP arthritis (60% for both). In gout, the presence of US signs in the symptomatic joint was highly predictive of the diagnosis (PPV = 92%). When US diagnosis was based on an examination of multiple joints, the sensitivity for both gout and CPP rose significantly but the specificity and the PPV decreased. In the absence of US signs in all the joints studied, CPP arthritis was unlikely (NPV = 87%), particularly in patients with no previous attack (NPV = 94%). X-ray of the symptomatic joints was confirmed to be unhelpful in diagnosing gout and was as sensitive and specific as US in CPP arthritis. CONCLUSIONS: Arthrocentesis remains the key investigation for the diagnosis of acute microcrystalline arthritis. Although US can help in the diagnostic process, its diagnostic performance is only moderate. US should not be limited to the symptomatic joint: examination of multiple joints gives better diagnostic sensitivity but lower specificity.
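
For reference, the accuracy figures quoted above (sensitivity, specificity, PPV, NPV) all derive from a 2x2 table against the crystal gold standard; a minimal sketch of that arithmetic, with placeholder counts rather than the study's data:

    def diagnostic_accuracy(tp, fp, fn, tn):
        """Standard 2x2-table metrics against a reference (gold) standard."""
        return {
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp),   # positive predictive value
            "npv": tn / (tn + fn),   # negative predictive value
        }

    # Example with made-up counts (not the study's data)
    metrics = diagnostic_accuracy(tp=30, fp=5, fn=20, tn=45)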

Relevance: 20.00%

Abstract:

Hepatitis C virus (HCV) infection is one of the most frequent causes of chronic hepatitis, liver cirrhosis and hepatocellular carcinoma worldwide. Great progress has been made in the 25 years since the discovery of HCV, notably in the understanding of its molecular virology, pathogenesis and natural course, as well as in the prevention, diagnosis and treatment of hepatitis C. Here, we summarize these recent advances and discuss them in the light of new challenges.

Relevance: 20.00%

Abstract:

Evaluation of image quality (IQ) in computed tomography (CT) is important to ensure that diagnostic questions are correctly answered while keeping the radiation dose to the patient as low as reasonably possible. The assessment of individual aspects of IQ is already a key component of routine quality control of medical x-ray devices. These measurements, together with standard dose indicators, can be combined into figures of merit (FOM) that characterise the dose efficiency of CT scanners operating in certain modes. The demand for clinically relevant IQ characterisation has naturally increased with the development of CT technology (detector efficiency, image reconstruction and processing), resulting in the adaptation and evolution of assessment methods. The purpose of this review is to present the spectrum of methods that have been used to characterise image quality in CT, from objective measurements of physical parameters to clinically task-based approaches (i.e. the model observer (MO) approach), including the pure human observer approach. When combined with a dose indicator, a generalised dose-efficiency index can be explored within a framework of system and patient dose optimisation. We focus on the IQ methodologies required not only for standard reconstruction but also for iterative reconstruction algorithms. With this in mind, the previously used FOM are presented together with a proposal to update them so that they remain relevant and up to date with technological progress. The MO, which objectively assesses IQ for clinically relevant tasks, represents the most promising method in terms of reflecting radiologist sensitivity performance and is therefore of most relevance in the clinical environment.
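
One common convention for such a dose-efficiency figure of merit is a task-based detectability index squared per unit dose indicator; a hedged sketch under that assumption (the review's own FOM definitions may differ):

    def dose_efficiency_fom(detectability_index, dose_indicator_mgy):
        """Generic dose-efficiency figure of merit: a task-based detectability
        index (e.g. a model-observer d') squared, divided by a dose indicator
        such as CTDIvol. Illustrative convention only."""
        if dose_indicator_mgy <= 0:
            raise ValueError("dose indicator must be positive")
        return detectability_index ** 2 / dose_indicator_mgy

    # Example: the same task-based detectability achieved at lower dose
    # (e.g. with iterative reconstruction) yields a higher figure of merit.
    fom_standard  = dose_efficiency_fom(2.1, 12.0)
    fom_iterative = dose_efficiency_fom(2.1, 8.0)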