52 results for Accuracy and precision
Abstract:
The origin of obesity, which has reached epidemic proportions, is complex and may be linked to differences in lifestyle and physical activity behaviour. Measuring individuals' physical activity behaviour towards their environment, and the distribution of physical activity in terms of type, volume, duration, frequency, intensity, and energy expenditure, is of great importance. At present, there is a lack of methods for the accurate and objective assessment of physical activity and of individuals' physical activity behaviour. In order to complement research relating physical activity to obesity and related diseases, the first aim of this thesis was to develop a model for the objective identification of physical activity types and the estimation of energy expenditure in real-life conditions, based on a combination of 2 accelerometers and 1 GPS device. The model takes into account that a given activity can be performed in many different ways in real-life conditions. Daily activities could be classified into 8 categories, from sedentary to active physical activity, with 1-min accuracy, and physical activity patterns determined. Energy expenditure could be predicted with an error below 10%. Furthermore, individuals' physical activity behaviour is an expression of individual choices and of their interaction with the neighbourhood environment.
In a second study, we hypothesized that, in an environment characterized by inclines, obese individuals are tempted to avoid steep positive slopes and to decrease walking speed during spontaneous outdoor physical activity, as well as during prescribed, structured bouts of exercise. Finally, we characterized, by means of the developed model, the physical activity behaviour of obese individuals in a hilly urban environment. Quantifying how one tackles a hilly environment, or avoids slopes, in everyday displacements should also be considered when prescribing extra walking in free-living conditions in order to increase physical activity.
Abstract:
OBJECTIVES: To prospectively and preliminarily evaluate the accuracy and reliability of a specific ad hoc reduction-compression forceps in intraoral open reduction of transverse and displaced mandibular angle fractures. STUDY DESIGN: We analyzed the clinical and radiologic data of 7 patients with 7 single transverse and displaced angle fractures. An intraoral approach was performed in all of the patients without using perioperative intermaxillary fixation. A single Arbeitsgemeinschaft Osteosynthese (AO) unilock reconstruction plate was fixed to each stable fragment with 3 locking screws (2.0 mm in 5 patients and 2.4 mm in 2 patients) at the basilar border of the mandible, according to AO/American Society of Internal Fixation (ASIF) principles. Follow-up was at 1, 3, 6, and 12 months, and we noted the status of healing and complications, if any. RESULTS: All of the patients had satisfactory fracture reduction as well as a successful treatment outcome without complications. CONCLUSION: This preliminary study demonstrated that intraoral reduction of transverse and displaced angle fractures using a specific ad hoc reduction-compression forceps results in a high rate of success.
Abstract:
The aim of this research was to evaluate how fingerprint analysts would incorporate information from newly developed tools into their decision-making processes. Specifically, we assessed effects using the following: (1) a quality tool to aid in the assessment of the clarity of the friction ridge details, (2) a statistical tool to provide likelihood ratios representing the strength of the corresponding features between compared fingerprints, and (3) consensus information from a group of trained fingerprint experts. The measured variables for the effect on examiner performance were the accuracy and reproducibility of the conclusions against the ground truth (including the impact on error rates) and the analyst accuracy and variation for feature selection and comparison. The results showed that participants using the consensus information from other fingerprint experts demonstrated more consistency and accuracy in minutiae selection. They also demonstrated higher accuracy, sensitivity, and specificity in the decisions reported. The quality tool also affected minutiae selection (which, in turn, had limited influence on the reported decisions); the statistical tool did not appear to influence the reported decisions.
Abstract:
In this paper, we propose a novel unsupervised approach to learning domain-specific ontologies from large open-domain text collections. The method is based on the joint exploitation of Semantic Domains and Super Sense Tagging for Information Retrieval tasks. Our approach is able to retrieve domain-specific terms and concepts while associating them with a set of high-level ontological types, named supersenses, providing flat ontologies characterized by very high accuracy and pertinence to the domain.
Abstract:
The research considers the problem of spatial data classification using machine learning algorithms: probabilistic neural networks (PNN) and support vector machines (SVM). As a benchmark model, a simple k-nearest neighbor algorithm is considered. PNN is a neural-network reformulation of well-known nonparametric principles of probability density modeling using kernel density estimators and Bayesian optimal or maximum a posteriori decision rules. PNN is well suited to problems where not only predictions but also quantification of accuracy and integration of prior information are necessary. An important property of PNNs is that they can easily be used in decision support systems dealing with problems of automatic classification. The support vector machine is an implementation of the principles of statistical learning theory for classification tasks. Recently, SVMs have been successfully applied to different environmental topics: classification of soil types and hydro-geological units, optimization of monitoring networks, and susceptibility mapping of natural hazards. In the present paper, both simulated and real data case studies (low and high dimensional) are considered. The main attention is paid to the detection and learning of spatial patterns by the applied algorithms.
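As a rough illustration of the PNN principle described above (a kernel density estimate per class combined with a Bayesian decision rule), the following sketch implements a minimal Parzen-window classifier. It is not the implementation used in the study; the Gaussian kernel, the single bandwidth `sigma`, and the toy data are assumptions made purely for illustration.

```python
import numpy as np

def pnn_predict(X_train, y_train, X_test, sigma=1.0, priors=None):
    """Minimal probabilistic neural network (Parzen-window) classifier.

    Each class-conditional density is estimated with an isotropic Gaussian
    kernel of bandwidth `sigma`; prediction follows the maximum a posteriori rule.
    """
    classes = np.unique(y_train)
    if priors is None:
        priors = {c: np.mean(y_train == c) for c in classes}
    preds = []
    for x in X_test:
        scores = []
        for c in classes:
            Xc = X_train[y_train == c]
            d2 = np.sum((Xc - x) ** 2, axis=1)              # squared distances
            density = np.mean(np.exp(-d2 / (2.0 * sigma ** 2)))  # kernel density estimate
            scores.append(priors[c] * density)               # posterior up to a constant
        preds.append(classes[int(np.argmax(scores))])
    return np.array(preds)

# Toy usage on synthetic 2-D "spatial" data
rng = np.random.default_rng(0)
X_train = np.vstack([rng.normal([0, 0], 0.5, size=(50, 2)),
                     rng.normal([2, 2], 0.5, size=(50, 2))])
y_train = np.array([0] * 50 + [1] * 50)
print(pnn_predict(X_train, y_train, np.array([[0.1, 0.2], [1.9, 2.1]]), sigma=0.5))
```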
Abstract:
Short-TE MRS has been proposed recently as a method for the in vivo detection and quantification of γ-aminobutyric acid (GABA) in the human brain at 3 T. In this study, we investigated the accuracy and reproducibility of short-TE MRS measurements of GABA at 3 T using both simulations and experiments. LCModel analysis was performed on a large number of simulated spectra with known metabolite input concentrations. Simulated spectra were generated using a range of spectral linewidths and signal-to-noise ratios to investigate the effect of varying experimental conditions, and analyses were performed using two different baseline models to investigate the effect of an inaccurate baseline model on GABA quantification. The results of these analyses indicated that, under experimental conditions corresponding to those typically observed in the occipital cortex, GABA concentration estimates are reproducible (mean reproducibility error, <20%), even when an incorrect baseline model is used. However, simulations indicate that the accuracy of GABA concentration estimates depends strongly on the experimental conditions (linewidth and signal-to-noise ratio). In addition to simulations, in vivo GABA measurements were performed using both spectral editing and short-TE MRS in the occipital cortex of 14 healthy volunteers. Short-TE MRS measurements of GABA exhibited a significant positive correlation with edited GABA measurements (R = 0.58, p < 0.05), suggesting that short-TE measurements of GABA correspond well with measurements made using spectral editing techniques. Finally, within-session reproducibility was assessed in the same 14 subjects using four consecutive short-TE GABA measurements in the occipital cortex. Across all subjects, the average coefficient of variation of these four GABA measurements was 8.7 ± 4.9%. This study demonstrates that, under some experimental conditions, short-TE MRS can be employed for the reproducible detection of GABA at 3 T, but that the technique should be used with caution, as the results are dependent on the experimental conditions. Copyright © 2013 John Wiley & Sons, Ltd.
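For readers unfamiliar with the reproducibility metric quoted above, the within-session coefficient of variation is simply the standard deviation of the repeated estimates divided by their mean. A minimal sketch with hypothetical GABA values (not data from the study):

```python
import numpy as np

# Hypothetical repeated GABA concentration estimates from one session (arbitrary units)
gaba = np.array([1.32, 1.41, 1.28, 1.36])

cv_percent = 100.0 * gaba.std(ddof=1) / gaba.mean()  # coefficient of variation
print(f"within-session CV = {cv_percent:.1f}%")
```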
Abstract:
The multiscale finite-volume (MSFV) method is designed to reduce the computational cost of elliptic and parabolic problems with highly heterogeneous anisotropic coefficients. The reduction is achieved by splitting the original global problem into a set of local problems (with approximate local boundary conditions) coupled by a coarse global problem. It has been shown recently that the numerical errors in MSFV results can be reduced systematically with an iterative procedure that provides a conservative velocity field after any iteration step. The iterative MSFV (i-MSFV) method can be obtained with an improved (smoothed) multiscale solution to enhance the localization conditions, with a Krylov subspace method [e.g., the generalized-minimal-residual (GMRES) algorithm] preconditioned by the MSFV system, or with a combination of both. In a multiphase-flow system, a balance between accuracy and computational efficiency should be achieved by finding a minimum number of i-MSFV iterations (on pressure), which is necessary to achieve the desired accuracy in the saturation solution. In this work, we extend the i-MSFV method to sequential implicit simulation of time-dependent problems. To control the error of the coupled saturation/pressure system, we analyze the transport error caused by an approximate velocity field. We then propose an error-control strategy on the basis of the residual of the pressure equation. At the beginning of simulation, the pressure solution is iterated until a specified accuracy is achieved. To minimize the number of iterations in a multiphase-flow problem, the solution at the previous timestep is used to improve the localization assumption at the current timestep. Additional iterations are used only when the residual becomes larger than a specified threshold value. Numerical results show that only a few iterations on average are necessary to improve the MSFV results significantly, even for very challenging problems. Therefore, the proposed adaptive strategy yields efficient and accurate simulation of multiphase flow in heterogeneous porous media.
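The residual-based error-control strategy can be pictured with a generic iterative solver: iterate on the pressure system only while the residual exceeds a tolerance, and warm-start each timestep from the previous pressure solution so that extra iterations are triggered only when needed. The sketch below uses a plain Jacobi sweep and a toy diffusion-like matrix purely as stand-ins for the i-MSFV machinery; it is an assumption-laden illustration, not the authors' algorithm.

```python
import numpy as np

def solve_pressure(A, b, p0, tol=1e-6, max_iter=50):
    """Iterate on A p = b only while the relative residual exceeds `tol`.
    A Jacobi sweep stands in for the i-MSFV smoothing/preconditioning step."""
    p = p0.copy()
    d = np.diag(A)
    for _ in range(max_iter):
        r = b - A @ p
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break          # desired accuracy reached: no further iterations needed
        p = p + r / d      # Jacobi update
    return p

# Toy sequential loop: the previous pressure is reused as the initial guess,
# so later timesteps typically need only a few (or zero) extra iterations.
n = 20
A = 4.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # diagonally dominant model matrix
p = np.zeros(n)
for step in range(5):
    b = np.ones(n) * (1.0 + 0.1 * step)                  # slowly varying right-hand side
    p = solve_pressure(A, b, p0=p)
    print(f"step {step}: relative residual = {np.linalg.norm(b - A @ p) / np.linalg.norm(b):.1e}")
```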
Abstract:
The increasing availability and precision of digital elevation models (DEMs) help in the assessment of landslide-prone areas where few data are available. The approach is performed in 6 main steps: DEM creation; identification of geomorphologic features; determination of the main sets of discontinuities; mapping of the most likely dangerous structures; preliminary rock-fall assessment; and estimation of the volumes of large instabilities. The method is applied to two case studies on the Oppstadhornet mountain (730 m a.s.l.): (1) a 10 million m3 slow-moving rockslide and (2) an area prone to potential high-energy rock falls. The orientations of the foliation and of the major discontinuities were determined directly from the DEM. These results are in very good agreement with field measurements. The spatial arrangement of discontinuities and foliation with respect to the topography revealed hazardous structures. Maps of the potential occurrence of these hazardous structures show highly probable sliding areas at the foot of the main landslide and potential rock falls in the eastern part of the mountain.
Abstract:
The temporal dynamics of species diversity are shaped by variations in the rates of speciation and extinction, and there is a long history of inferring these rates using first and last appearances of taxa in the fossil record. Understanding diversity dynamics critically depends on unbiased estimates of the unobserved times of speciation and extinction for all lineages, but the inference of these parameters is challenging due to the complex nature of the available data. Here, we present a new probabilistic framework to jointly estimate species-specific times of speciation and extinction and the rates of the underlying birth-death process based on the fossil record. The rates are allowed to vary through time independently of each other, and the probability of preservation and sampling is explicitly incorporated in the model to estimate the true lifespan of each lineage. We implement a Bayesian algorithm to assess the presence of rate shifts by exploring alternative diversification models. Tests on a range of simulated data sets reveal the accuracy and robustness of our approach against violations of the underlying assumptions and various degrees of data incompleteness. Finally, we demonstrate the application of our method with the diversification of the mammal family Rhinocerotidae and reveal a complex history of repeated and independent temporal shifts of both speciation and extinction rates, leading to the expansion and subsequent decline of the group. The estimated parameters of the birth-death process implemented here are directly comparable with those obtained from dated molecular phylogenies. Thus, our model represents a step towards integrating phylogenetic and fossil information to infer macroevolutionary processes.
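To make the role of preservation concrete, a common simplification (not necessarily the authors' exact formulation) models the fossil occurrences of a lineage as a homogeneous Poisson process with preservation rate q over its true lifespan s = t_s - t_e, where t_s and t_e are the unobserved times of speciation and extinction. The number of occurrences k and the probability that the lineage is sampled at all are then:

```latex
P(k \mid s, q) = \frac{(q s)^{k}}{k!}\, e^{-q s},
\qquad
P(\text{sampled at least once} \mid s, q) = 1 - e^{-q s}.
```

Modelling q explicitly is what allows true lifespans, and hence speciation and extinction rates, to be inferred rather than read directly from first and last appearances.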
Abstract:
Evidence is accumulating that biodiversity is facing the effects of global change. The most influential drivers of change in ecosystems are land-use change, alien species invasions and climate change impacts. Accurate projections of species' responses to these changes are needed to propose mitigation measures to slow down the ongoing erosion of biodiversity. Niche-based models (NBM) currently represent one of the only tools for such projections. However, their application in the context of global changes relies on restrictive assumptions, calling for cautious interpretation. In this thesis I aim to assess the effectiveness and shortcomings of niche-based models for the study of global change impacts on biodiversity through the investigation of specific, unsolved limitations and the suggestion of new approaches. Two studies investigating threats to rare and endangered plants are presented. I review the ecogeographic characteristics of 118 endangered plants with high conservation priority in Switzerland. The prevalence of rarity types among plant species is analyzed in relation to IUCN extinction risks. The review underlines the importance of regional vs. global conservation and shows that a global assessment of rarity might be misleading for some species because it can fail to account for different degrees of rarity at a variety of spatial scales. The second study tests a modeling framework including iterative steps of modeling and field surveys to improve the sampling of rare species. The approach is illustrated with a rare alpine plant, Eryngium alpinum, and shows promise for complementing conservation practices and reducing sampling costs. Two studies illustrate the impacts of climate change on African taxa. The first assesses the sensitivity of 277 mammals at the African scale to climate change by 2050 in terms of species richness and turnover. It shows that a substantial number of species could be critically endangered in the future. National parks situated in xeric ecosystems are not expected to meet their mandate of protecting current species diversity in the future. The second study models the distribution in 2050 of 975 endemic plant species in southern Africa. The study proposes the inclusion of new methodological insights improving the accuracy and ecological realism of predictions in global change studies. It also investigates the possibility of estimating a priori the sensitivity of a species to climate change from the geographical distribution and ecological properties of the species. Three studies illustrate the application of NBM to the study of biological invasions. The first investigates the northwards expansion of Lactuca serriola L. in Europe during the last 250 years in relation to climate change. In the last two decades, the species could not track climate change owing to non-climatic influences.
A second study analyses the potential invasion extent of spotted knapweed (Centaurea maculosa), a European weed first introduced into North America in the 1890s. The study provides one of the first pieces of empirical evidence that an invasive species can occupy climatically distinct niche space following its introduction into a new area. Models calibrated on the native range fail to predict the current full extent of the invasion, but correctly predict areas of introduction. An alternative approach, involving the calibration of models with pooled data from both ranges, is proposed to improve predictions of the extent of invasion over models based solely on the native range. I finally present a review of the dynamic nature of ecological niches in space and time. It synthesizes recent theoretical developments on niche conservatism and proposes solutions to improve confidence in NBM predictions of the impacts of climate change and species invasions on species distributions.
Abstract:
Purpose: To assess the global cardiovascular (CV) risk of an individual, several scores have been developed. However, their accuracy and comparability need to be evaluated in populations other than those from which they were derived. The aim of this study was to compare the predictive accuracy of 4 CV risk scores using data from a large population-based cohort. Methods: Prospective cohort study including 4980 participants (2698 women; mean age ± SD: 52.7 ± 10.8 years) in Lausanne, Switzerland, followed for an average of 5.5 years (range 0.2-8.5). Two end points were assessed: 1) coronary heart disease (CHD), and 2) CV diseases (CVD). Four risk scores were compared: original and recalibrated Framingham coronary heart disease scores (1998 and 2001); original PROCAM score (2002) and its recalibrated version for Switzerland (IAS-AGLA); and the Reynolds risk score. Discrimination was assessed using Harrell's C statistic, model fit using Akaike's information criterion (AIC), and calibration using a pseudo Hosmer-Lemeshow test. The sensitivity, specificity and corresponding 95% confidence intervals were assessed for each risk score using the highest risk category (>=20% at 10 years) as the "positive" test. Results: The recalibrated and original 1998 and original 2001 Framingham scores show better discrimination (>0.720) and model fit (low AIC) for CHD and CVD. All 4 scores are correctly calibrated (chi-square <20). The recalibrated Framingham 1998 score has the best sensitivities, 37.8% and 40.4%, for CHD and CVD, respectively. All scores present specificities >90%. The Framingham 1998, PROCAM and IAS-AGLA scores include the greatest proportion of subjects (>200) in the high-risk category, whereas the recalibrated Framingham 2001 and Reynolds scores include <=44 subjects. Conclusion: In this cohort, we see variations in accuracy between risk scores, with the original Framingham 2001 score demonstrating the best compromise between accuracy and a limited selection of subjects in the highest risk category. We advocate that national guidelines, based on independently validated data, take into account CV risk scores calibrated for their respective countries.
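As a pocket illustration of how sensitivity and specificity are obtained when the highest risk category (>=20% at 10 years) is used as the "positive" test, the sketch below uses hypothetical counts (chosen only to be roughly consistent with the 37.8% sensitivity reported above, not the actual study data):

```python
def sens_spec(tp, fn, fp, tn):
    """Sensitivity and specificity with 'high risk' treated as a positive test."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical cross-classification of observed events vs. high-risk category
se, sp = sens_spec(tp=42, fn=69, fp=180, tn=4689)
print(f"sensitivity = {se:.1%}, specificity = {sp:.1%}")
```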
Abstract:
Ligament balance is an important and subjective task performed during the total knee arthroplasty (TKA) procedure. For this reason, it is desirable to develop instruments to quantitatively assess soft-tissue balance, since excessive imbalance can accelerate prosthesis wear and lead to early surgical revision. The instrumented distractor proposed in this study can assist surgeons in performing ligament balance by measuring the distraction gap and the applied load. The device also allows determination of ligament stiffness, which can contribute to a better understanding of the intrinsic mechanical behavior of the knee joint. Instrumentation of the device involved the use of Hall sensors for measuring the distractor displacement and strain gauges to transduce the force. The sensors were calibrated and tested to demonstrate their suitability for surgical use. Results show that the distraction gap can be measured reliably with 0.1 mm accuracy and that the distractive loads can be assessed with an accuracy in the range of 4 N. These characteristics are consistent with those proposed in this work for a device that could assist in performing ligament balance while still permitting surgeons to evaluate based on their own experience. Preliminary results from in vitro tests were in accordance with expected stiffness values for the medial collateral ligament (MCL) and lateral collateral ligament (LCL).
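Since the distractor reports paired gap/load values, ligament stiffness can be approximated as the slope of the load-displacement curve. A minimal sketch with hypothetical readings (not measurements from the study):

```python
import numpy as np

# Hypothetical distractor readings: distraction gap (mm) and applied load (N)
gap_mm = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5])
load_n = np.array([0.0, 12.0, 25.0, 36.0, 50.0, 61.0])

# Stiffness approximated as the slope of a linear fit to the load-displacement data
stiffness, intercept = np.polyfit(gap_mm, load_n, 1)
print(f"estimated stiffness ~ {stiffness:.1f} N/mm")
```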
Abstract:
U-Pb dating of zircons by laser ablation inductively coupled plasma mass spectrometry (LA-ICPMS) is a widely used analytical technique in Earth Sciences. For U-Pb ages below 1 billion years (1 Ga), Pb-206/U-238 dates are usually used, showing the least bias from external parameters such as the presence of initial lead and its isotopic composition in the analysed mineral. Precision and accuracy of the Pb/U ratio are thus of the highest importance in LA-ICPMS geochronology. We consider the evaluation of the statistical distribution of the sweep intensities based on goodness-of-fit tests in order to find a model probability distribution fitting the data and to apply an appropriate formulation for the standard deviation. We then discuss three main methods to calculate the Pb/U intensity ratio and its uncertainty in LA-ICPMS: (1) the ratio-of-the-mean-intensities method, (2) the mean-of-the-intensity-ratios method and (3) the intercept method. These methods apply different functions to the same raw intensity vs. time data to calculate the mean Pb/U intensity ratio. Thus, the calculated intensity ratio and its uncertainty depend on the method applied. We demonstrate that the accuracy and, conditionally, the precision of the ratio-of-the-mean-intensities method are invariant to the intensity fluctuations and averaging related to the dwell time selection and off-line data transformation (averaging of several sweeps); we present a statistical approach to calculating the uncertainty of this method for transient signals. We also show that the accuracy of methods (2) and (3) is influenced by the intensity fluctuations and averaging, and that the extent of this influence can amount to tens of percentage points; we show that the uncertainty of these methods also depends on how the signal is averaged. Each of the above methods imposes requirements on the instrumentation. The ratio-of-the-mean-intensities method is sufficiently accurate provided the laser-induced fractionation between the beginning and the end of the signal is kept low and linear. We show, based on a comprehensive series of analyses with different ablation pit sizes, energy densities and repetition rates for a 193 nm ns-ablation system, that such fractionation behaviour requires using a low ablation speed (low energy density and low repetition rate). Overall, we conclude that the ratio-of-the-mean-intensities method combined with low sampling rates is the most mathematically accurate among the existing data treatment methods for U-Pb zircon dating by sensitive sector-field ICPMS.
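The difference between methods (1) and (2) is easy to reproduce numerically: on the same transient signal, the ratio of the mean intensities and the mean of the sweep-by-sweep ratios generally disagree, and only the latter changes when sweeps are averaged into blocks. The sketch below uses a synthetic, arbitrarily scaled Pb-206/U-238 signal and illustrates only that arithmetic, not the full uncertainty treatment discussed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sweeps = 300
t = np.linspace(0.0, 1.0, n_sweeps)

# Synthetic transient signal: both isotopes decay along the ablation with a
# true Pb/U ratio of 0.1 plus counting noise (arbitrary units, toy values).
u238 = 1e6 * np.exp(-0.5 * t) + rng.normal(0.0, 2e3, n_sweeps)
pb206 = 0.1 * 1e6 * np.exp(-0.5 * t) + rng.normal(0.0, 2e2, n_sweeps)

ratio_of_means = pb206.mean() / u238.mean()   # method (1)
mean_of_ratios = (pb206 / u238).mean()        # method (2)

# Off-line averaging of sweeps into blocks leaves method (1) unchanged
# but shifts method (2).
pb_blocks = pb206.reshape(10, -1).mean(axis=1)
u_blocks = u238.reshape(10, -1).mean(axis=1)
mean_of_ratios_blocked = (pb_blocks / u_blocks).mean()

print(ratio_of_means, mean_of_ratios, mean_of_ratios_blocked)
```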
Abstract:
ABSTRACT: BACKGROUND: Decision curve analysis has been introduced as a method to evaluate prediction models in terms of their clinical consequences if used for a binary classification of subjects into a group who should and a group who should not be treated. The key concept for this type of evaluation is the "net benefit", a concept borrowed from utility theory. METHODS: We recall the foundations of decision curve analysis and discuss some new aspects. First, we stress the formal distinction between the net benefit for the treated and for the untreated and define the concept of the "overall net benefit". Next, we revisit the important distinction between the concept of accuracy, as typically assessed using the Youden index and a receiver operating characteristic (ROC) analysis, and the concept of utility of a prediction model, as assessed using decision curve analysis. Finally, we provide an explicit implementation of decision curve analysis to be applied in the context of case-control studies. RESULTS: We show that the overall net benefit, which combines the net benefit for the treated and the untreated, is a natural alternative to the benefit achieved by a model, being invariant with respect to the coding of the outcome and conveying a more comprehensive picture of the situation. Further, within the framework of decision curve analysis, we illustrate the important difference between the accuracy and the utility of a model, demonstrating how poor an accurate model may be in terms of its net benefit. Finally, we show that the application of decision curve analysis to case-control studies, where an accurate estimate of the true prevalence of a disease cannot be obtained from the data, is achieved with a few modifications to the original calculation procedure. CONCLUSIONS: We present several interrelated extensions to decision curve analysis that will both facilitate its interpretation and broaden its potential area of application.
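For reference, the standard net benefit at a threshold probability p_t, as introduced by Vickers and Elkin for decision curve analysis, and its analogue for the untreated can be written as follows (the exact weighting used to combine them into the "overall net benefit" is defined in the article itself):

```latex
\mathrm{NB}_{\mathrm{treated}}(p_t) = \frac{TP(p_t)}{n} - \frac{FP(p_t)}{n}\,\frac{p_t}{1-p_t},
\qquad
\mathrm{NB}_{\mathrm{untreated}}(p_t) = \frac{TN(p_t)}{n} - \frac{FN(p_t)}{n}\,\frac{1-p_t}{p_t},
```

where TP, FP, TN and FN are counted after classifying as "treat" every subject whose predicted risk exceeds p_t, and n is the sample size.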
Abstract:
Although prosthetic joint infection (PJI) is a rare event after arthroplasty, it represents a significant complication that is associated with high morbidity, need for complex treatment, and substantial healthcare costs. An accurate and rapid diagnosis of PJI is crucial for treatment success. Current diagnostic methods in PJI are insufficient with 10-30% false-negative cultures. Consequently, there is a need for research and development into new methods aimed at improving diagnostic accuracy and speed of detection. In this article, we review available conventional diagnostic methods for the diagnosis of PJI (laboratory markers, histopathology, synovial fluid and periprosthetic tissue cultures), new diagnostic methods (sonication of implants, specific and multiplex PCR, mass spectrometry) and innovative techniques under development (new laboratory markers, microcalorimetry, electrical method, reverse transcription [RT]-PCR, fluorescence in situ hybridization [FISH], biofilm microscopy, microarray identification, and serological tests). The results of highly sensitive diagnostic techniques with unknown specificity should be interpreted with caution. The organism identified by a new method may represent a real pathogen that was unrecognized by conventional diagnostic methods or contamination during specimen sampling, transportation, or processing. For accurate interpretation, additional studies are needed, which would evaluate the long-term outcome (usually >2 years) with or without antimicrobial treatment. It is expected that new rapid, accurate, and fully automatic diagnostic tests will be developed soon.