846 results for robust speaker verification
Abstract:
BACKGROUND: Periodontitis is the major cause of tooth loss in adults and is linked to systemic illnesses, such as cardiovascular disease and stroke. The development of rapid point-of-care (POC) chairside diagnostics has the potential for the early detection of periodontal infection and progression to identify incipient disease and reduce health care costs. However, validation of effective diagnostics requires the identification and verification of biomarkers correlated with disease progression. This clinical study sought to determine the ability of putative host- and microbially derived biomarkers to identify periodontal disease status from whole saliva and plaque biofilm. METHODS: One hundred human subjects were recruited in equal numbers into a healthy/gingivitis group or a periodontitis group. Whole saliva was collected from all subjects and analyzed using antibody arrays to measure the levels of multiple proinflammatory cytokines and bone resorptive/turnover markers. RESULTS: Salivary biomarker data were correlated with comprehensive clinical, radiographic, and microbial plaque biofilm levels measured by quantitative polymerase chain reaction (qPCR) to generate models for periodontal disease identification. Significantly elevated levels of matrix metalloproteinase (MMP)-8 and -9 were found in subjects with advanced periodontitis, with Random Forest importance scores of 7.1 and 5.1, respectively. The generation of receiver operating characteristic curves demonstrated that permutations of salivary biomarkers and pathogen biofilm values augmented the prediction of disease category. Multiple combinations of salivary biomarkers (especially MMP-8 and -9 and osteoprotegerin) combined with red-complex anaerobic periodontal pathogens (such as Porphyromonas gingivalis or Treponema denticola) provided highly accurate predictions of periodontal disease category. Elevated salivary MMP-8 and T. denticola biofilm levels displayed robust combinatorial characteristics in predicting periodontal disease severity (area under the curve = 0.88; odds ratio = 24.6; 95% confidence interval: 5.2 to 116.5). CONCLUSIONS: Using qPCR and sensitive immunoassays, we identified host- and bacterially derived biomarkers correlated with periodontal disease. This approach offers significant potential for the discovery of biomarker signatures useful in the development of rapid POC chairside diagnostics for oral and systemic diseases. Studies are ongoing to apply this approach to the longitudinal prediction of disease activity.
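The summary statistics this abstract reports, area under the ROC curve and the odds ratio, can be sketched in a few lines of numpy. The data below are hypothetical illustrative values, not the study's measurements, and the function names are mine:

```python
import numpy as np

def auc_mann_whitney(pos, neg):
    # AUC = P(score of a diseased subject > score of a healthy one);
    # computed as the normalized Mann-Whitney U statistic (ties count half)
    pos, neg = np.asarray(pos, float), np.asarray(neg, float)
    diff = pos[:, None] - neg[None, :]
    return (np.sum(diff > 0) + 0.5 * np.sum(diff == 0)) / (pos.size * neg.size)

def odds_ratio(tp, fp, fn, tn):
    # odds ratio of a 2x2 table (disease status vs. biomarker test +/-)
    return (tp * tn) / (fp * fn)

# Hypothetical salivary MMP-8 levels for illustration only
periodontitis = [120, 95, 180, 150, 88]
healthy = [40, 55, 30, 70, 45]
print(auc_mann_whitney(periodontitis, healthy))  # 1.0: groups fully separated
print(odds_ratio(20, 5, 5, 20))                  # 16.0
```

A combinatorial biomarker score (e.g., MMP-8 plus a pathogen biofilm level) would simply be another score vector fed to the same AUC computation.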
Abstract:
OBJECTIVE: To compare changes in P-wave amplitude of the intra-atrial electrocardiogram (ECG) with the corresponding transesophageal echocardiography (TEE)-controlled position to verify the exact localization of a central venous catheter (CVC) tip. DESIGN: A prospective study. SETTING: University, single-institutional setting. PARTICIPANTS: Two hundred patients undergoing elective cardiac surgery. INTERVENTIONS: CVC placement via the right internal jugular vein with ECG control using the guidewire technique and TEE control in 4 different phases: phase 1: CVC placement with normalized P wave and measurement of the distance from the crista terminalis to the CVC tip; phase 2: TEE-controlled placement of the CVC tip parallel to the superior vena cava (SVC) and measurement of P-wave amplitude; phase 3: influence of head positioning on CVC migration; and phase 4: evaluation of CVC positioning postoperatively using a chest x-ray. MEASUREMENTS AND MAIN RESULTS: With a normalized P wave, the CVC tip could be visualized on TEE in only 67 patients. In 198 patients with the CVC parallel to the SVC wall as controlled by TEE (phase 2), an elevated P wave was observed. Different head movements led to no significant migration of the CVC (phase 3). On a postoperative chest x-ray, the CVC position was correct in 87.6% of patients (phase 4). CONCLUSION: The study suggests that the CVC tip is located parallel to the SVC and 1.5 cm above the crista terminalis if the P wave starts to decrease during withdrawal of the catheter. The authors recommend that ECG control as described in this study be routinely used for placement of central venous catheters via the right internal jugular vein.
Abstract:
In this article, the authors evaluate a merit function for 2D/3D registration called stochastic rank correlation (SRC). SRC is characterized by the fact that differences in image intensity do not influence the registration result; it therefore combines the numerical advantages of cross correlation (CC)-type merit functions with the flexibility of mutual-information-type merit functions. The basic idea is that registration is achieved on a random subset of the image, which allows for an efficient computation of Spearman's rank correlation coefficient. This measure is, by nature, invariant to monotonic intensity transforms in the images under comparison, which renders it an ideal solution for intramodal images acquired at different energy levels as encountered in intrafractional kV imaging in image-guided radiotherapy. Initial evaluation was undertaken using a 2D/3D registration reference image dataset of a cadaver spine. Even with no radiometric calibration, SRC shows a significant improvement in robustness and stability compared to CC. Pattern intensity, another merit function that was evaluated for comparison, gave rather poor results due to its limited convergence range. The time required for SRC with 5% image content compares well to the other merit functions; increasing the image content does not significantly influence the algorithm accuracy. The authors conclude that SRC is a promising measure for 2D/3D registration in IGRT and image-guided therapy in general.
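The core idea of SRC, Spearman's rank correlation evaluated on a random pixel subset, which makes the merit function invariant to monotonic intensity transforms, can be sketched as follows. This is a minimal numpy illustration of the principle, not the authors' implementation; the function names and the exponential "energy response" transform are mine:

```python
import numpy as np

def spearman(a, b):
    # Spearman's rho via ranks (no tie handling; fine for continuous intensities)
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    ra -= ra.mean(); rb -= rb.mean()
    return float(ra @ rb / np.sqrt((ra @ ra) * (rb @ rb)))

def src_merit(drr, xray, frac=0.05, seed=0):
    # rank correlation on a random subset (e.g., 5%) of the pixels
    rng = np.random.default_rng(seed)
    n = drr.size
    idx = rng.choice(n, size=max(2, int(frac * n)), replace=False)
    return spearman(drr.ravel()[idx], xray.ravel()[idx])

rng = np.random.default_rng(1)
img = rng.random((64, 64))
# a monotonic intensity transform, standing in for a different beam energy
transformed = np.exp(2.0 * img) + 3.0
print(src_merit(img, transformed))  # ~1.0: ranks are unchanged by the mapping
```

Because only ranks enter the measure, no radiometric calibration between the DRR and the kV image is needed, which is exactly the property the abstract highlights.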
Abstract:
Two methods for registering laser-scans of human heads and transforming them to a new semantically consistent topology defined by a user-provided template mesh are described. Both algorithms are stated within the Iterative Closest Point (ICP) framework. The first method is based on finding landmark correspondences by iteratively registering the vicinity of a landmark with a re-weighted error function. Thin-plate spline interpolation is then used to deform the template mesh, and finally the scan is resampled in the topology of the deformed template. The second algorithm employs a morphable shape model, which can be computed from a database of laser-scans using the first algorithm. It directly optimizes the pose and shape of the morphable model. The use of the algorithm with PCA mixture models, where the shape is split into regions each described by an individual subspace, is also addressed. Mixture models require either blending or regularization strategies, both of which are described in detail. For both algorithms, strategies for filling in missing geometry in incomplete laser-scans are described. While an interpolation-based approach can be used to fill in small or smooth regions, the model-driven algorithm is capable of fitting a plausible complete head mesh to arbitrarily small geometry, which is known as "shape completion". The importance of regularization in the case of extreme shape completion is shown.
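The ICP framework both methods build on alternates two steps: find closest-point correspondences, then solve for the best rigid alignment in closed form. A minimal 2-D point-to-point sketch (numpy only; a simplification, without the re-weighting or morphable-model terms the abstract describes):

```python
import numpy as np

def kabsch(P, Q):
    # least-squares rigid transform (R, t) mapping point set P onto Q
    cp, cq = P.mean(0), Q.mean(0)
    U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
    D = np.eye(P.shape[1])
    D[-1, -1] = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

def icp(src, dst, iters=30):
    # Iterative Closest Point: alternate correspondence and alignment steps
    cur = src.copy()
    for _ in range(iters):
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        R, t = kabsch(cur, dst[d2.argmin(1)])  # brute-force closest matches
        cur = cur @ R.T + t
    return cur

rng = np.random.default_rng(0)
src = rng.random((40, 2))                       # toy "template" landmarks
th = 0.02                                        # small rigid perturbation
R_true = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
dst = src @ R_true.T + np.array([0.01, 0.005])   # toy "scan"
aligned = icp(src, dst)
```

The first method in the abstract replaces the uniform error with a re-weighted one focused on a landmark's vicinity; the second replaces the rigid transform update with an optimization over morphable-model pose and shape coefficients.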
Abstract:
Source verification and pooling of feeder cattle into larger lots resulted in higher selling prices compared with more typical sales at a southern Iowa auction market. After accounting for higher prices due to larger lot sizes, cattle that received a specified management program and were source verified as to origin received additional price premiums. The data do not distinguish between the value of the specific management program and the value of the source verification process. However, cow-calf producers participating in the program took home more money.
Abstract:
Mobile multimedia ad hoc services run on dynamic topologies due to node mobility, node failures, and wireless channel impairments. A robust routing service must adapt to topology changes with the aim of recovering or maintaining the video quality level and reducing the impact on the user's experience. In such scenarios, beacon-less Opportunistic Routing (OR) increases robustness by supporting routing decisions in a completely distributed manner based on protocol-specific characteristics. However, existing beacon-less OR approaches do not efficiently combine multiple metrics for forwarder selection, which causes a higher packet loss rate and consequently reduces the video quality level. In this paper, we assess the robustness and reliability under node failures of our recently developed OR protocol, the cross-layer Link quality and Geographical-aware OR protocol (LinGO). Simulation results show that LinGO achieves multimedia dissemination with QoE support and robustness in scenarios with dynamic topologies.
Abstract:
Purpose: To investigate the dosimetric properties of an electronic portal imaging device (EPID) for electron beam detection and to evaluate its potential for quality assurance (QA) of modulated electron radiotherapy (MERT). Methods: A commercially available EPID was used to detect electron beams shaped by a photon multileaf collimator (MLC) at a source-surface distance of 70 cm. Fundamental dosimetric properties such as reproducibility, dose linearity, field size response, energy response, and saturation were investigated for electron beams. A new method to acquire the flood field for the EPID calibration was tested. For validation purposes, profiles of open fields and various MLC fields (square and irregular) were measured with a diode in water and compared to the EPID measurements. Finally, in order to use the EPID for QA of MERT delivery, a method was developed to reconstruct EPID two-dimensional (2D) dose distributions at a water-equivalent depth of 1.5 cm. Comparisons were performed with film measurements for static and dynamic monoenergy fields as well as for multienergy fields composed of several segments of different electron energies. Results: The advantageous EPID dosimetric properties already known for photons, such as reproducibility and linearity with dose and dose rate, were found to be identical for electron detection. The flood-field calibration method proved to be effective, and the EPID was able to accurately reproduce the dose measured in water at 1.0 cm depth for 6 MeV, 1.3 cm for 9 MeV, and 1.5 cm for 12, 15, and 18 MeV. The deviations between the output factors measured with the EPID and in water at these depths were within ±1.2% for all energies, with a mean deviation of 0.1%. The average gamma pass rate (criteria: 1.5%, 1.5 mm) for profile comparisons between the EPID and measurements in water was better than 99% for all the energies considered in this study.
When comparing the reconstructed EPID 2D dose distributions at 1.5 cm depth to film measurements, the gamma pass rate (criteria: 2%, 2 mm) was better than 97% for all the tested cases. Conclusions: This study demonstrates the high potential of the EPID for electron dosimetry, and in particular, confirms the possibility to use it as an efficient verification tool for MERT delivery.
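The gamma pass rates quoted here (e.g., 2%/2 mm) come from the standard gamma-index evaluation. A minimal 1-D global-gamma sketch in numpy, for illustration only and not the authors' implementation, the profile data below are synthetic:

```python
import numpy as np

def gamma_pass_rate(ref, meas, coords, dose_tol=0.02, dist_tol=2.0):
    # global gamma index: dose difference normalized to dose_tol * max(ref),
    # distance-to-agreement normalized to dist_tol (same units as coords, mm)
    dmax = ref.max()
    dd = (ref[:, None] - meas[None, :]) / (dose_tol * dmax)
    dr = (coords[:, None] - coords[None, :]) / dist_tol
    gamma = np.sqrt(dd ** 2 + dr ** 2).min(axis=1)  # best match per ref point
    return float((gamma <= 1.0).mean())

x = np.linspace(-30, 30, 121)        # positions in mm
ref = np.exp(-(x / 15.0) ** 2)       # idealized beam profile (film)
meas = ref * 1.01                    # reconstructed profile, 1% hot everywhere
print(gamma_pass_rate(ref, meas, x))  # 1.0: within 2%/2 mm at every point
```

A point passes if some nearby measured point agrees within the combined dose/distance tolerance; the pass rate is the fraction of reference points that do.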
Abstract:
We propose a method that robustly combines color and feature buffers to denoise Monte Carlo renderings. On one hand, feature buffers, such as per-pixel normals, textures, or depth, are effective in determining denoising filters because features are highly correlated with rendered images. Filters based solely on features, however, are prone to blurring image details that are not well represented by the features. On the other hand, color buffers represent all details, but they may be less effective for determining filters because they are contaminated by the very noise that is supposed to be removed. We propose to obtain filters using a combination of color and feature buffers in an NL-means and cross-bilateral filtering framework. We determine a robust weighting of colors and features using a SURE-based error estimate. We show significant improvements in subjective and quantitative errors compared to the previous state of the art. We also demonstrate adaptive sampling and space-time filtering for animations.
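The cross-bilateral half of such a framework can be sketched as follows: range weights are taken from a (nearly) noise-free feature buffer, so edges present in the feature survive while color noise is averaged away. This is a minimal single-channel numpy illustration under my own naming; the SURE-based blending of color and feature weights is omitted:

```python
import numpy as np

def cross_bilateral(color, feature, sigma_s=2.0, sigma_f=0.1, radius=3):
    # denoise `color` using spatial weights plus range weights computed
    # from the `feature` buffer (e.g., an albedo or normal channel)
    h, w = color.shape
    out = np.zeros_like(color)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))
    cpad = np.pad(color, radius, mode='reflect')
    fpad = np.pad(feature, radius, mode='reflect')
    for y in range(h):
        for x in range(w):
            cwin = cpad[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            fwin = fpad[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            wgt = spatial * np.exp(-(fwin - feature[y, x]) ** 2
                                   / (2 * sigma_f ** 2))
            out[y, x] = (wgt * cwin).sum() / wgt.sum()
    return out

rng = np.random.default_rng(0)
clean = np.zeros((16, 16)); clean[:, 8:] = 1.0   # step edge
feature = clean.copy()                            # noise-free feature buffer
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
out = cross_bilateral(noisy, feature, sigma_f=0.05, radius=2)
```

With a small `sigma_f`, pixels across the feature edge get near-zero weight, so the edge is preserved even though the color buffer is noisy, which is precisely why feature-blind filters blur details the features do represent.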
Abstract:
In this paper, we propose a fully automatic, robust approach for segmenting the proximal femur in conventional X-ray images. Our method is based on hierarchical landmark detection by random forest regression, where the detection results of 22 global landmarks are used for spatial normalization and the detection results of 59 local landmarks serve as the image cue for instantiation of a statistical shape model of the proximal femur. To detect landmarks at both levels, we use multi-resolution HoG (Histogram of Oriented Gradients) features, which achieve better accuracy and robustness. The efficacy of the present method is demonstrated by experiments conducted on 150 clinical X-ray images. It was found that the present method achieved an average point-to-curve error of 2.0 mm and was robust to low image contrast, noise, and occlusions caused by implants.
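The HoG descriptor the landmark detectors regress on can be sketched minimally: per-cell histograms of gradient orientation weighted by gradient magnitude. This numpy sketch omits block normalization and the multi-resolution pyramid, and its names are mine, not the paper's:

```python
import numpy as np

def hog_descriptor(img, cell=8, bins=9):
    # minimal HoG: one orientation histogram (0-180 deg) per cell,
    # weighted by gradient magnitude, each cell L2-normalized
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0
    h, w = img.shape
    feats = []
    for y in range(0, h - cell + 1, cell):
        for x in range(0, w - cell + 1, cell):
            a = ang[y:y + cell, x:x + cell].ravel()
            m = mag[y:y + cell, x:x + cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0, 180), weights=m)
            feats.append(hist / (np.linalg.norm(hist) + 1e-9))
    return np.concatenate(feats)

rng = np.random.default_rng(0)
patch = rng.random((16, 16))      # toy image patch around a candidate landmark
f = hog_descriptor(patch)          # 2x2 cells x 9 bins = 36-D descriptor
```

In the hierarchical setup described above, descriptors like this, extracted at several resolutions around a candidate position, would be the input to the random forest regressors that vote for landmark locations.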
Abstract:
Images of an object under different illumination are known to provide strong cues about the object surface. A mathematical formalization of how to recover the normal map of such a surface leads to the so-called uncalibrated photometric stereo problem. In the simplest instance, this problem can be reduced to the task of identifying only three parameters: the so-called generalized bas-relief (GBR) ambiguity. The challenge is to find additional general assumptions about the object that identify these parameters uniquely. Current approaches are not consistent, i.e., they provide different solutions when run multiple times on the same data. To address this limitation, we propose exploiting local diffuse reflectance (LDR) maxima, i.e., points in the scene where the normal vector is parallel to the illumination direction (see Fig. 1). We demonstrate several noteworthy properties of these maxima: a closed-form solution, computational efficiency, and GBR consistency. An LDR maximum yields a simple closed-form solution corresponding to a semi-circle in the GBR parameter space (see Fig. 2); because as few as two diffuse maxima in different images identify a unique solution, the identification of the GBR parameters can be achieved very efficiently; finally, the algorithm is consistent, as it always returns the same solution given the same data. Our algorithm is also remarkably robust: it can obtain an accurate estimate of the GBR parameters even with extremely high levels of outliers in the detected maxima (up to 80% of the observations). The method is validated on real data and achieves state-of-the-art results.