832 results for accuracy analysis
Abstract:
OBJECTIVES: To examine the accuracy of the World Health Organization immunological criteria for virological failure of antiretroviral treatment. METHODS: Analysis of 10 treatment programmes in Africa and South America that monitor both CD4 cell counts and HIV-1 viral load. Adult patients with at least two CD4 counts and viral load measurements between months 6 and 18 after starting a non-nucleoside reverse transcriptase inhibitor-based regimen were included. WHO immunological criteria include CD4 counts persistently <100 cells/μl, a fall below the baseline CD4 count, or a fall of >50% from the peak value. Virological failure was defined as two measurements ≥10,000 copies/ml (higher threshold) or ≥500 copies/ml (lower threshold). Measures of accuracy with exact binomial 95% confidence intervals (CI) were calculated. RESULTS: A total of 2009 patients were included. During 1856 person-years of follow-up, 63 patients met the immunological criteria, and 35 patients (higher threshold) and 95 patients (lower threshold) met the virological criteria. Sensitivity (95% CI) was 17.1% (6.6-33.6%) for the higher and 12.6% (6.7-21.0%) for the lower threshold. Corresponding results for specificity were 97.1% (96.3-97.8%) and 97.3% (96.5-98.0%), for positive predictive value 9.5% (3.6-19.6%) and 19.0% (10.2-30.9%), and for negative predictive value 98.5% (97.9-99.0%) and 95.7% (94.7-96.6%). CONCLUSIONS: The positive predictive value of the WHO immunological criteria for virological failure of antiretroviral treatment in resource-limited settings is poor, but the negative predictive value is high. Immunological criteria are more appropriate for ruling out than for ruling in virological failure in resource-limited settings.
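For readers who want to reproduce these accuracy measures, the sketch below computes sensitivity, specificity, PPV and NPV with exact (Clopper-Pearson) binomial 95% CIs in Python. The 2x2 counts are reconstructed from the reported higher-threshold percentages (e.g. 6 of 35 failures meeting the criteria gives the 17.1% sensitivity) and are illustrative assumptions, not the study's data.

```python
# A minimal sketch (not the authors' code) of the reported accuracy measures,
# using exact (Clopper-Pearson) binomial 95% CIs via scipy.stats.beta.
from scipy.stats import beta

def exact_ci(k, n, level=0.95):
    """Clopper-Pearson exact binomial confidence interval for k successes in n trials."""
    a = (1 - level) / 2
    lo = beta.ppf(a, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - a, k + 1, n - k) if k < n else 1.0
    return lo, hi

def accuracy_measures(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV, each with an exact 95% CI."""
    return {
        "sensitivity": (tp / (tp + fn), exact_ci(tp, tp + fn)),
        "specificity": (tn / (tn + fp), exact_ci(tn, tn + fp)),
        "ppv":         (tp / (tp + fp), exact_ci(tp, tp + fp)),
        "npv":         (tn / (tn + fn), exact_ci(tn, tn + fn)),
    }

# Illustrative counts consistent with the higher-threshold results above.
for name, (est, (lo, hi)) in accuracy_measures(tp=6, fp=57, fn=29, tn=1917).items():
    print(f"{name}: {est:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```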
Abstract:
Introduction: The aim of this systematic review was to analyze the dental literature regarding accuracy and clinical application of computer-guided, template-based implant dentistry. Materials and methods: An electronic literature search complemented by manual searching was performed to gather data on accuracy and on surgical, biological and prosthetic complications in connection with computer-guided implant treatment. For the assessment of accuracy, meta-regression analysis was performed. Complication rates are summarized descriptively. Results: Of 3120 titles retrieved by the literature search, eight articles met the inclusion criteria regarding accuracy and 10 regarding clinical performance. Meta-regression analysis revealed a mean deviation at the entry point of 1.07 mm (95% CI: 0.76-1.22 mm) and at the apex of 1.63 mm (95% CI: 1.26-2 mm). No significant differences between the studies were found regarding the method of template production or template support and stabilization. Early surgical complications occurred in 9.1%, early prosthetic complications in 18.8% and late prosthetic complications in 12% of the cases. Implant survival rates of 91-100% after an observation time of 12-60 months are reported in six clinical studies with 537 implants, mainly restored immediately after flapless implantation procedures. Conclusion: Computer-guided, template-based implant placement showed high implant survival rates, ranging from 91% to 100%. However, a considerable number of technique-related perioperative complications were observed. Preclinical and clinical studies indicated a reasonable mean accuracy with relatively high maximum deviations. Future research should be directed at increasing the number of clinical studies with longer observation periods and at improving the systems in terms of perioperative handling, accuracy and prosthetic complications.
Abstract:
We improved, evaluated, and used Sanger sequencing for quantification of single nucleotide polymorphism (SNP) variants in transcripts and gDNA samples. The improved assay yielded highly reproducible relative allele frequencies (e.g., 50.0 ± 1.4% for a heterozygous gDNA sample and 46.9 ± 3.7% for a missense mutation-bearing transcript) with a lower detection limit of 3-9%. It provided excellent accuracy and linear correlation between expected and observed relative allele frequencies. This sequencing assay, which can also be used for the quantification of copy number variations (CNVs), methylation, mosaicism, and DNA pools, enabled us to analyze transcripts of the FBN1 gene in fibroblasts and blood samples of patients with suspected Marfan syndrome not only qualitatively but also quantitatively. We report a total of 18 novel and 19 known FBN1 sequence variants leading to a premature termination codon (PTC), 26 of which we analyzed by quantitative sequencing at both gDNA and cDNA levels. The relative amounts of PTC-containing FBN1 transcripts in fresh and PAXgene-stabilized blood samples were significantly higher (33.0 ± 3.9% to 80.0 ± 7.2%) than those detected in affected fibroblasts with inhibition of nonsense-mediated mRNA decay (NMD) (11.0 ± 2.1% to 25.0 ± 1.8%), whereas in fibroblasts without NMD inhibition no mutant alleles could be detected. These results provide evidence for incomplete NMD in leukocytes and are of particular importance for RNA-based analyses not only of FBN1 but also of other genes.
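As a rough illustration of the quantification idea, a relative allele frequency can be expressed as the mutant peak signal over the sum of both allele signals at the variant position. The function and peak values below are assumptions for illustration, not the published assay.

```python
# A minimal sketch (an assumption, not the published assay) of expressing a
# relative allele frequency from Sanger trace peak signals at a variant position.
def relative_allele_frequency(peak_mutant: float, peak_wildtype: float) -> float:
    """Relative frequency of the mutant allele from two peak areas/heights."""
    return peak_mutant / (peak_mutant + peak_wildtype)

# Roughly balanced peaks in a heterozygous gDNA sample:
freq = relative_allele_frequency(peak_mutant=4820.0, peak_wildtype=4975.0)
print(f"mutant allele: {freq:.1%}")   # ~49.2%, cf. the reported 50.0 ± 1.4%

# A variant signal at ~2.5% sits below the assay's empirical 3-9% detection
# limit and cannot be distinguished reliably from baseline noise.
assert relative_allele_frequency(250.0, 9750.0) < 0.03
```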
Abstract:
High-density spatial and temporal sampling of EEG data enhances the quality of results of electrophysiological experiments. Because EEG sources typically produce widespread electric fields (see Chapter 3) and operate at frequencies well below the sampling rate, increasing the number of electrodes and time samples will not necessarily increase the number of observed processes, but will mainly increase the accuracy of the representation of these processes. This is particularly the case when inverse solutions are computed. As a consequence, increasing the sampling in space and time increases the redundancy of the data (in space, because electrodes are correlated due to volume conduction, and in time, because neighboring time points are correlated), while the degrees of freedom of the data change only little. This has to be taken into account when statistical inferences are to be made from the data. However, in many ERP studies the intrinsic correlation structure of the data has been disregarded. Often, some electrodes or groups of electrodes are selected a priori as the analysis entity and considered as repeated (within-subject) measures that are analyzed using standard univariate statistics. The increased spatial resolution obtained with more electrodes is thus poorly represented by the resulting statistics. In addition, the assumptions made (e.g. in terms of what constitutes a repeated measure) are not supported by what we know about the properties of EEG data. From the point of view of physics (see Chapter 3), the natural "atomic" analysis entity of EEG and ERP data is the scalp electric field.
Abstract:
CONTEXT: Magnetic resonance imaging (MRI) combined with magnetic resonance spectroscopy imaging (MRSI) has emerged as a promising test in the diagnosis of prostate cancer and has shown encouraging results. OBJECTIVE: The aim of this systematic review is to meta-analyse the diagnostic accuracy of combined MRI/MRSI in prostate cancer and to explore the risk profiles with the highest benefit. EVIDENCE ACQUISITION: The authors searched the MEDLINE and EMBASE databases and the Cochrane Library, screened reference lists and contacted experts. There were no language restrictions. The last search was performed in August 2008. EVIDENCE SYNTHESIS: We identified 31 test-accuracy studies (1765 patients); 16 studies (17 populations) with a total of 581 patients were suitable for meta-analysis. Nine combined MRI/MRSI studies (10 populations) examining men with pathologically confirmed prostate cancer (297 patients; 1518 specimens) had a pooled sensitivity and specificity at the prostate-subpart level of 68% (95% CI, 56-78%) and 85% (95% CI, 78-90%), respectively. Compared with patients at high risk for clinically relevant cancer (six studies), sensitivity in low-risk patients (four studies) was lower (58% [46-69%] vs 74% [58-85%]; p>0.05) but specificity was higher (91% [86-94%] vs 78% [70-84%]; p<0.01). Seven studies examining patients with suspected prostate cancer at combined MRI/MRSI (284 patients) had an overall pooled sensitivity and specificity at the patient level of 82% (59-94%) and 88% (80-95%). In the low-risk group (five studies) these values were 75% (39-93%) and 91% (77-97%), respectively. CONCLUSIONS: A limited number of small studies suggest that MRI combined with MRSI could be a rule-in test for low-risk patients. This finding needs confirmation in larger studies, and cost-effectiveness needs to be established.
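The abstract reports pooled sensitivities and specificities across studies; a minimal fixed-effect pooling on the logit scale is sketched below as one plausible approach. The review's actual meta-analytic model may differ, and the per-study counts here are hypothetical.

```python
# A minimal fixed-effect pooling sketch on the logit scale (an assumption for
# illustration; the review's actual meta-analytic model may differ).
import math

def pool_proportions(events, totals):
    """Inverse-variance pooled proportion on the logit scale with 95% CI."""
    weights, logits = [], []
    for k, n in zip(events, totals):
        p = (k + 0.5) / (n + 1.0)                     # continuity correction
        var = 1.0 / (k + 0.5) + 1.0 / (n - k + 0.5)   # variance of the logit
        logits.append(math.log(p / (1 - p)))
        weights.append(1.0 / var)
    pooled = sum(w * l for w, l in zip(weights, logits)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    expit = lambda x: 1.0 / (1.0 + math.exp(-x))      # back-transform
    return expit(pooled), expit(pooled - 1.96 * se), expit(pooled + 1.96 * se)

# Hypothetical per-study true-positive counts and diseased-specimen totals:
sens, lo, hi = pool_proportions([60, 112, 85], [90, 160, 125])
print(f"pooled sensitivity: {sens:.0%} (95% CI {lo:.0%}-{hi:.0%})")
```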
Abstract:
Clinical studies indicate that exaggerated postprandial lipemia is linked to the progression of atherosclerosis, a leading cause of cardiovascular disease (CVD). CVD is a multi-factorial disease with complex etiology, and according to the literature postprandial triglycerides (TG) can be used as an independent CVD risk factor. The aim of the current study is to construct an Artificial Neural Network (ANN)-based system for the identification of the most important gene-gene and/or gene-environment interactions that contribute to fast or slow postprandial metabolism of TG in blood, and consequently to investigate the causality of the postprandial TG response. The design and development of the system are based on a dataset of 213 subjects who underwent a protocol of two fatty meals. For each subject, a total of 30 input variables corresponding to genetic variations, sex, age and fasting levels of clinical measurements were known. These variables provide the input to the system, which is based on the combined use of a Parameter Decreasing Method (PDM) and an ANN. The system was able to identify the ten most informative variables and achieved a mean accuracy of 85.21%.
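Read this way, the Parameter Decreasing Method amounts to backward elimination around an ANN: repeatedly retrain without each remaining variable and drop the one whose removal costs the least accuracy, until ten remain. A minimal Python sketch under that reading follows; the synthetic data and all hyperparameters are assumptions, not the study's configuration.

```python
# A sketch of the Parameter Decreasing Method (PDM) read as backward
# elimination around an ANN; synthetic data and hyperparameters are assumptions.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(213, 30))     # 213 subjects, 30 input variables
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=213) > 0).astype(int)

def ann_accuracy(features):
    """Mean 5-fold cross-validated accuracy of a small ANN on the given inputs."""
    model = MLPClassifier(hidden_layer_sizes=(10,), max_iter=500, random_state=0)
    return cross_val_score(model, X[:, features], y, cv=5).mean()

kept = list(range(X.shape[1]))
while len(kept) > 10:              # keep the 10 most informative variables
    # accuracy of the ANN after removing each candidate variable in turn
    scores = {v: ann_accuracy([k for k in kept if k != v]) for v in kept}
    kept.remove(max(scores, key=scores.get))   # dropping this one costs least
print("most informative variables:", sorted(kept))
print("mean CV accuracy:", round(ann_accuracy(kept), 4))
```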
Abstract:
Sequential studies of osteopenic bone disease in small animals require the availability of non-invasive, accurate and precise methods to assess bone mineral content (BMC) and bone mineral density (BMD). Dual-energy X-ray absorptiometry (DXA), which is currently used in humans for this purpose, can also be applied to small animals by means of adapted software. The precision and accuracy of DXA were evaluated in 10 rats weighing 50-265 g. The rats were anesthetized with a mixture of ketamine-xylazine administered intraperitoneally. Each rat was scanned six times consecutively in the antero-posterior incidence after repositioning, using the rat whole-body software for determination of whole-body BMC and BMD (Hologic QDR 1000, software version 5.52). Scan duration was 10-20 min depending on rat size. After the last measurement, rats were sacrificed and soft tissues were removed by dermestid beetles. Skeletons were then scanned in vitro (ultra-high-resolution software, version 4.47). Bones were subsequently ashed and dissolved in hydrochloric acid, and total body calcium was directly assayed by atomic absorption spectrophotometry (TBCa[chem]). Total body calcium was also calculated from the DXA whole-body in vivo measurement (TBCa[DXA]) and from the ultra-high-resolution measurement (TBCa[UH]) under the assumption that calcium accounts for 40.5% of the BMC expressed as hydroxyapatite. The precision error for whole-body BMC and BMD (mean ± S.D.) was 1.3% and 1.5%, respectively. Simple regression analysis between TBCa[DXA] or TBCa[UH] and TBCa[chem] revealed tight correlations (r = 0.991 and 0.996, respectively), with slopes and intercepts significantly different from 1 and 0, respectively. (ABSTRACT TRUNCATED AT 250 WORDS)
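Two derived quantities in this abstract lend themselves to a short worked example: total body calcium as 40.5% of the DXA BMC (expressed as hydroxyapatite), and the precision error as a coefficient of variation over the six repeat scans. The BMC values below are hypothetical.

```python
# A worked sketch of the two derived quantities: TBCa from DXA BMC (calcium
# assumed to be 40.5% of BMC expressed as hydroxyapatite) and the precision
# error as a CV over six repeat scans. The BMC values are hypothetical.
import statistics

def tbca_from_bmc(bmc_g: float) -> float:
    """Total body calcium (g) from DXA whole-body BMC expressed as hydroxyapatite."""
    return 0.405 * bmc_g

repeat_bmc = [5.02, 5.10, 4.96, 5.05, 5.08, 4.99]   # six consecutive scans, in g
cv = statistics.stdev(repeat_bmc) / statistics.mean(repeat_bmc) * 100
print(f"TBCa[DXA] = {tbca_from_bmc(statistics.mean(repeat_bmc)):.2f} g")
print(f"precision error (CV) = {cv:.1f}%")          # cf. the reported 1.3% for BMC
```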
Abstract:
This study analyzes the accuracy of the target prices forecast in analysts' reports. We compute a measure of target price forecast accuracy that evaluates the ability of analysts to forecast the ex-ante (unknown) 12-month stock price exactly. Furthermore, we determine factors that explain this accuracy. Target price accuracy is negatively related to analyst-specific optimism and stock-specific risk (measured by volatility and the price-to-book ratio). However, target price accuracy is positively related to the level of detail of each report, company size and the reputation of the investment bank. Potential conflicts of interest between an analyst and a covered company do not bias forecast accuracy.
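One common way to operationalize target price accuracy is the absolute forecast error relative to the realized 12-month price. The sketch below uses this measure as an assumption for illustration, since the paper's exact metric is not given in the abstract.

```python
# A minimal sketch of one common target price accuracy measure (absolute
# forecast error relative to the realized 12-month price); an assumption for
# illustration, as the paper's exact metric may differ.
def target_price_error(target_price: float, realized_price_12m: float) -> float:
    """Absolute deviation of the forecast from the realized 12-month price."""
    return abs(target_price - realized_price_12m) / realized_price_12m

# Example: an analyst forecasts 55.0; the stock trades at 48.2 a year later.
print(f"forecast error: {target_price_error(55.0, 48.2):.1%}")   # 14.1%
```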
Abstract:
Ontologies and Methods for Interoperability of Engineering Analysis Models (EAMs) in an e-Design Environment. September 2007. Neelima Kanuri; B.S., Birla Institute of Technology and Science, Pilani, India; M.S., University of Massachusetts Amherst. Directed by Professor Ian Grosse. Interoperability is the ability of two or more systems to exchange and reuse information efficiently. This thesis presents new techniques for interoperating engineering tools using ontologies as the basis for representing, visualizing, reasoning about, and securely exchanging abstract engineering knowledge between software systems. The specific engineering domain that is the primary focus of this report is the modeling knowledge associated with the development of engineering analysis models (EAMs). This abstract modeling knowledge has been used to support integration of analysis and optimization tools in iSIGHT-FD, a commercial engineering environment. ANSYS, a commercial FEA tool, has been wrapped as an analysis service available inside iSIGHT-FD. An engineering analysis modeling (EAM) ontology has been developed and instantiated to form a knowledge base for representing analysis modeling knowledge. The instances of the knowledge base are the analysis models of real-world applications. To illustrate how abstract modeling knowledge can be exploited for useful purposes, a cantilever I-beam design optimization problem has been used as a test-bed proof-of-concept application. Two distinct finite element models of the I-beam are available to analyze a given beam design: a beam-element finite element model with potentially lower accuracy but significantly reduced computational costs, and a high-fidelity, high-cost shell-element finite element model. The goal is to obtain an optimized I-beam design at minimum computational expense. An intelligent knowledge-base tool was developed and implemented in FiPER. This tool reasons about the modeling knowledge to intelligently shift between the beam and shell element models during an optimization process, selecting the best analysis model for a given optimization design state. In addition to improved interoperability and design optimization, methods are developed and presented that demonstrate the ability to operate on ontological knowledge bases to perform important engineering tasks. One such method is an automatic technical report generation method, which converts the modeling knowledge associated with an analysis model into a flat technical report. The second is a secure knowledge sharing method, which allocates permissions to portions of knowledge to control knowledge access and sharing. Together, these methods enable recipient-specific, fine-grained controlled knowledge viewing and sharing in an engineering workflow integration environment such as iSIGHT-FD, and they help reduce the large-scale inefficiencies in current product design and development cycles caused by poor knowledge sharing and reuse between people and software engineering tools. This work is a significant advance in both the understanding and the application of knowledge integration in a distributed engineering design framework.
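Where the abstract describes the KB tool switching between a cheap beam-element model and a costly shell-element model, the following is a hypothetical Python sketch of such a cost/accuracy selection rule. All names, error estimates, costs and thresholds are assumptions for illustration, not the thesis's actual FiPER/KB logic.

```python
# A hypothetical sketch of a rule an ontology-driven tool could use to switch
# between a cheap, coarse model and a costly, accurate one; all values are
# assumptions, not the thesis's actual knowledge-base logic.
from dataclasses import dataclass

@dataclass
class AnalysisModel:
    name: str
    expected_error: float   # assumed relative error of the idealization
    cost: float             # assumed relative CPU cost per evaluation

BEAM  = AnalysisModel("beam-element FEM",  expected_error=0.10, cost=1.0)
SHELL = AnalysisModel("shell-element FEM", expected_error=0.02, cost=50.0)

def select_model(required_accuracy: float) -> AnalysisModel:
    """Pick the cheapest model whose expected error meets the requirement."""
    feasible = [m for m in (BEAM, SHELL) if m.expected_error <= required_accuracy]
    return min(feasible, key=lambda m: m.cost)

# Early design exploration tolerates coarse answers; near-converged states do not.
print(select_model(required_accuracy=0.15).name)   # beam-element FEM
print(select_model(required_accuracy=0.05).name)   # shell-element FEM
```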
Abstract:
INTRODUCTION Data concerning outcome after management of acetabular fractures by anterior approaches, with a focus on age and on fractures associated with roof impaction, central dislocation and/or quadrilateral plate displacement, are rare. METHODS Between October 2005 and April 2009, a series of 59 patients (mean age 57 years, range 13-91) with fractures involving the anterior column was treated using the modified Stoppa approach alone or, for reduction of displaced iliac wing or low anterior column fractures, in combination with the 1st window of the ilioinguinal approach or the modified Smith-Petersen approach, respectively. Surgical data, accuracy of reduction, clinical and radiographic outcome at mid-term and the need for endoprosthetic replacement in the postoperative course (defined as failure) were assessed; uni- and multivariate regression analyses were performed to identify independent predictive factors (e.g. age, nonanatomical reduction, acetabular roof impaction, central dislocation, quadrilateral plate displacement) for failure. Outcome was assessed for all patients in general and according to age in particular; patients were subdivided into two groups by age (group "<60 yrs", group "≥60 yrs"). RESULTS Forty-three of 59 patients (mean age 54 years, range 13-89) were available for evaluation. Of these, anatomic reduction was achieved in 72% of cases. Nonanatomical reduction was identified as the only multivariate predictor of subsequent total hip replacement (adjusted hazard ratio 23.5; p<0.01). A statistically significantly higher rate of nonanatomical reduction was observed in the presence of acetabular roof impaction (p=0.01). In 16% of all patients, total hip replacement was performed, and in 69% of patients with preserved hips the clinical results were excellent or good at a mean follow-up of 35±10 months (range: 24-55). No statistically significant differences were observed between the two groups. CONCLUSION Nonanatomical reconstruction of the articular surfaces puts joint-preserving management of acetabular fractures through an isolated or combined modified Stoppa approach at risk of failure, resulting in total joint replacement at mid-term. In the elderly, joint-preserving surgery is worth considering, as promising clinical and radiographic results can be obtained at mid-term.
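The adjusted hazard ratio quoted above comes from multivariate survival regression; a minimal sketch of such an analysis with a Cox proportional hazards model (using the lifelines package and a hypothetical toy data set, not the authors' code or data) is shown below.

```python
# A minimal sketch (an assumption, not the authors' analysis code) of
# estimating an adjusted hazard ratio for failure (conversion to total hip
# replacement) with a Cox proportional hazards model on toy data.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "months_followed": [24, 30, 55, 28, 40, 36, 26, 48],
    "thr_failure":     [1, 0, 0, 1, 0, 1, 1, 0],   # 1 = total hip replacement
    "nonanatomical":   [1, 0, 1, 1, 0, 0, 1, 0],   # reduction quality
    "age_ge_60":       [0, 1, 1, 1, 0, 1, 0, 0],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="months_followed", event_col="thr_failure")
print(cph.hazard_ratios_)   # adjusted HR per covariate; cf. the reported HR 23.5
```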
Abstract:
HYPOTHESIS A previously developed image-guided robot system can safely drill a tunnel from the lateral mastoid surface, through the facial recess, to the middle ear, as a viable alternative to conventional mastoidectomy for cochlear electrode insertion. BACKGROUND Direct cochlear access (DCA) provides a minimally invasive tunnel from the lateral surface of the mastoid through the facial recess to the middle ear for cochlear electrode insertion. A safe and effective tunnel drilled through the narrow facial recess requires a highly accurate image-guided surgical system. Previous attempts have relied on patient-specific templates and robotic systems to guide drilling tools. In this study, we report on improvements made to an image-guided surgical robot system developed specifically for this purpose and the resulting accuracy achieved in vitro. MATERIALS AND METHODS The proposed image-guided robotic DCA procedure was carried out bilaterally on 4 whole-head cadaver specimens. Specimens were implanted with titanium fiducial markers and imaged with cone-beam CT. A preoperative plan was created using a custom software package wherein relevant anatomical structures of the facial recess were segmented, and a drill trajectory targeting the round window was defined. Patient-to-image registration was performed with the custom robot system to reference the preoperative plan, and the DCA tunnel was drilled in 3 stages with progressively longer drill bits. The position of the drilled tunnel was defined as a line fitted to a point cloud of the segmented tunnel using principal component analysis (PCA function in MATLAB). The accuracy of the DCA was then assessed by coregistering preoperative and postoperative image data and measuring the deviation of the drilled tunnel from the plan. The final step of electrode insertion was also performed through the DCA tunnel after manual removal of the promontory through the external auditory canal. RESULTS Drilling error was defined as the lateral deviation of the tool in the plane perpendicular to the drill axis (excluding depth error). Errors of 0.08 ± 0.05 mm and 0.15 ± 0.08 mm were measured on the lateral mastoid surface and at the target on the round window, respectively (n = 8). Full electrode insertion was possible in 7 cases. In 1 case, the electrode was partially inserted, with 1 contact pair external to the cochlea. CONCLUSION The purpose-built robot system was able to perform a safe and reliable DCA for cochlear implantation. The workflow implemented in this study mimics the envisioned clinical procedure, showing the feasibility of future clinical implementation.
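The tunnel-axis estimation described in the methods translates directly into a few lines of linear algebra: fit a line to the segmented tunnel's point cloud via PCA, then measure the lateral deviation perpendicular to that axis. Below is a numpy sketch with a synthetic point cloud (the study itself used MATLAB's PCA function; the target coordinates here are hypothetical).

```python
# A minimal numpy sketch of the tunnel-axis estimation: fit a line to the
# segmented tunnel's voxel point cloud via PCA, then measure the lateral
# deviation of a point in the plane perpendicular to the axis. Synthetic data.
import numpy as np

def fit_line_pca(points):
    """Return (centroid, unit direction) of the best-fit line through points."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[0]          # first principal component = tunnel axis

def lateral_deviation(point, centroid, direction):
    """Distance from point to the line, i.e. error perpendicular to the drill axis."""
    d = point - centroid
    return np.linalg.norm(d - np.dot(d, direction) * direction)

rng = np.random.default_rng(1)
axis = np.array([0.2, 0.1, 0.97]); axis /= np.linalg.norm(axis)
cloud = rng.uniform(0, 30, (500, 1)) * axis + rng.normal(scale=0.4, size=(500, 3))

c, u = fit_line_pca(cloud)
planned_target = np.array([6.1, 3.05, 29.2])   # hypothetical round-window target
print(f"lateral deviation at target: {lateral_deviation(planned_target, c, u):.2f} mm")
```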
Abstract:
The COSMIC-2 mission is a follow-on to the Constellation Observing System for Meteorology, Ionosphere, and Climate (COSMIC), with an upgraded payload for improved radio occultation (RO) applications. The objective of this paper is to develop a near-real-time (NRT) orbit determination system, called the NRT National Chiao Tung University (NCTU) system, to support COSMIC-2 in atmospheric applications and to verify the orbit product of COSMIC. The system is capable of automatic determination of the NRT GPS clocks and the LEO orbit and clock. To assess the NRT (NCTU) system, we use eight days of COSMIC data (March 24-31, 2011), which contain a total of 331 GPS observation sessions and 12 393 RO observable files. Parallel scheduling of the independent GPS and LEO estimations, with automatic time matching, improves computational efficiency by 64% compared with sequential scheduling. Orbit difference analyses suggest a 10-cm accuracy for the COSMIC orbits from the NRT (NCTU) system, consistent with the NRT University Corporation for Atmospheric Research (UCAR) system. The mean velocity accuracy of the NRT orbits of COSMIC is 0.168 mm/s, corresponding to an error of about 0.051 μrad in the bending angle. The rms differences in the NRT COSMIC clocks and in the GPS clocks between the NRT (NCTU) and the postprocessing products are 3.742 and 1.427 ns, respectively. The GPS clocks determined from a partial ground GPS network [NRT (NCTU)] and a full one [NRT (UCAR)] yield mean rms frequency stabilities of 6.1E-12 and 2.7E-12, respectively, corresponding to range fluctuations of 5.5 and 2.4 cm and bending angle errors of 3.75 and 1.66 μrad.
Abstract:
The combination of scaled analogue experiments, material mechanics, X-ray computed tomography (XRCT) and digital volume correlation (DVC) techniques is a powerful new tool not only to examine the three-dimensional structure and kinematic evolution of complex deformation structures in scaled analogue experiments, but also to fully quantify their spatial strain distribution and complete strain history. Digital image correlation (DIC) is an important advance in quantitative physical modelling and helps in understanding non-linear deformation processes. Optical, non-intrusive DIC techniques enable the quantification of localised and distributed deformation in analogue experiments based either on images taken through transparent sidewalls (2D DIC) or on surface views (3D DIC). XRCT analysis permits the non-destructive visualisation of the internal structure and kinematic evolution of scaled analogue experiments simulating the tectonic evolution of complex geological structures. The combination of XRCT sectional image data of analogue experiments with 2D DIC allows quantification only of the 2D displacement and strain components in the section direction, which completely omits the potential of CT experiments for full 3D strain analysis of complex, non-cylindrical deformation structures. In this study, we apply DVC techniques to XRCT scan data of "solid" analogue experiments to fully quantify the internal displacement and strain in three dimensions over time. Our first results indicate that the application of DVC techniques to XRCT volume data can successfully quantify the 3D spatial and temporal strain patterns inside analogue experiments. We demonstrate the potential of combining DVC techniques and XRCT volume imaging for 3D strain analysis of a contractional experiment simulating the development of a non-cylindrical pop-up structure. Furthermore, we discuss various options for the optimisation of granular materials, pattern generation, and data acquisition for increased resolution and accuracy of the strain results. Three-dimensional strain analysis of analogue models is of particular interest for geological and seismic interpretations of complex, non-cylindrical geological structures. The volume strain data enable analysis of the large-scale and small-scale strain history of geological structures.
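At its core, DVC finds, for each subvolume of the reference XRCT volume, the displacement that best matches it in the deformed volume. The sketch below shows a deliberately simplified integer-voxel version using normalized cross-correlation on synthetic volumes; production DVC adds subvoxel interpolation and strain-tensor computation on top of such displacement fields.

```python
# A highly simplified DVC sketch: estimate the integer-voxel displacement of
# one subvolume between two XRCT volumes by maximizing normalized
# cross-correlation over a small search window. Synthetic volumes.
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally shaped arrays."""
    a = a - a.mean(); b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def dvc_displacement(vol0, vol1, corner, size=16, search=4):
    """Best integer (dz, dy, dx) shift of the subvolume at `corner` in vol0."""
    z, y, x = corner
    ref = vol0[z:z+size, y:y+size, x:x+size]
    best, best_shift = -2.0, (0, 0, 0)
    for dz in range(-search, search + 1):
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                cand = vol1[z+dz:z+dz+size, y+dy:y+dy+size, x+dx:x+dx+size]
                score = ncc(ref, cand)
                if score > best:
                    best, best_shift = score, (dz, dy, dx)
    return best_shift, best

rng = np.random.default_rng(2)
vol0 = rng.normal(size=(64, 64, 64))
vol1 = np.roll(vol0, shift=(2, -1, 3), axis=(0, 1, 2))    # known rigid shift
print(dvc_displacement(vol0, vol1, corner=(24, 24, 24)))  # -> ((2, -1, 3), ~1.0)
```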
Abstract:
Oxygenated polycyclic aromatic hydrocarbons (oxy-PAHs) and nitrogen-heterocyclic polycyclic aromatic compounds (N-PACs) are toxic, highly leachable and often abundant at sites that are also contaminated with PAHs. However, owing to a lack of regulations and of standardized methods for their analysis, they are seldom included in monitoring and risk-assessment programs. This intercomparison study constitutes an important step in the harmonization of the analytical methods currently used, and may also be considered a first step towards the certification of reference materials for these compounds. The results showed that the participants were able to determine oxy-PAHs with accuracy similar to that for PAHs, with average determined mass fractions agreeing well with the known levels in a spiked soil and with acceptable inter- and intra-laboratory precision for all soils analyzed. For the N-PACs the results were less satisfactory and need to be improved by using analytical methods optimized more specifically for these compounds.
Abstract:
Firn microstructure is accurately characterized using images obtained from scanning electron microscopy (SEM). Visibly etched grain boundaries within the images are used to create a skeleton outline of the microstructure. A pixel-counting utility is applied to the outline to determine grain area. Firn grain sizes calculated using the technique described here are compared to those calculated using the techniques of Gow (1969) and Gay and Weiss (1999) on samples of the same material, and are found to be substantially smaller. The differences in grain size between the techniques are attributed to sampling deficiencies in the earlier methods (e.g. the inclusion of pore filler in the grain area). The new technique offers the advantages of greater accuracy and the ability to determine individual components of the microstructure (grain and pore), which have important applications in ice-core analyses. The new method is validated by calculating activation energies of grain-boundary diffusion using values predicted from the ratio of grain-size measurements between the new and existing techniques. The resulting activation energy falls within the range of values previously reported for firn/ice.
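The pixel-counting step reduces to labeling connected grain regions in the binarized skeleton image and scaling pixel counts by the pixel area. A short sketch with scikit-image and a toy binary image follows; the pixel size is a hypothetical value, not the paper's SEM calibration.

```python
# A minimal sketch of the pixel-counting step: label grains in a binarized
# skeleton image and convert pixel counts to physical areas. Toy image and
# hypothetical pixel size.
import numpy as np
from skimage import measure

PIXEL_SIZE_UM = 2.5                        # hypothetical SEM scale (um/pixel)

grains = np.zeros((100, 100), dtype=bool)  # toy binary image with two "grains"
grains[10:40, 10:50] = True
grains[60:90, 55:95] = True

labels = measure.label(grains)             # connected-component labeling
for region in measure.regionprops(labels):
    area_um2 = region.area * PIXEL_SIZE_UM ** 2
    print(f"grain {region.label}: {region.area} px = {area_um2:.0f} um^2")
```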