916 results for Topology-based methods


Relevance: 90.00%

Abstract:

This study aimed to assess the performance of the International Caries Detection and Assessment System (ICDAS), radiographic examination, and fluorescence-based methods for detecting occlusal caries in primary teeth. One occlusal site on each of 79 primary molars was assessed twice by two examiners using ICDAS, bitewing radiography (BW), DIAGNOdent 2095 (LF), DIAGNOdent 2190 (LFpen), and the VistaProof fluorescence camera (FC). The teeth were histologically prepared and assessed for caries extent. Optimal cutoff limits were calculated for LF, LFpen, and FC. At the D1 threshold (enamel and dentin lesions), ICDAS and FC presented higher sensitivity values (0.75 and 0.73, respectively), while BW showed higher specificity (1.00). At the D2 threshold (inner enamel and dentin lesions), ICDAS presented higher sensitivity (0.83) and statistically significantly lower specificity (0.70). At the D3 threshold (dentin lesions), LFpen and FC showed higher sensitivity (1.00 and 0.91, respectively), while higher specificity was presented by FC (0.95), ICDAS (0.94), BW (0.94), and LF (0.92). The area under the receiver operating characteristic (ROC) curve (Az) varied from 0.780 (BW) to 0.941 (LF). Spearman correlation coefficients with histology were 0.72 (ICDAS), 0.64 (BW), 0.71 (LF), 0.65 (LFpen), and 0.74 (FC). Inter- and intraexaminer intraclass correlation values varied from 0.772 to 0.963, and unweighted kappa values ranged from 0.462 to 0.750. In conclusion, ICDAS and FC exhibited better accuracy in detecting enamel and dentin caries lesions, whereas ICDAS, LF, LFpen, and FC were more appropriate for detecting dentin lesions on occlusal surfaces in primary teeth, with no statistically significant difference among them. All methods presented good to excellent reproducibility.
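As an illustration of the accuracy figures reported above, the short Python sketch below computes sensitivity, specificity, and a Spearman correlation with histology for a fluorescence reading at a fixed cutoff. All readings, histology scores, and the cutoff value are hypothetical; this is not the study's analysis pipeline.

```python
# Minimal sketch (not the study's actual analysis): sensitivity, specificity and
# Spearman correlation with histology for a fluorescence device at a chosen cutoff.
# All data below are hypothetical.
import numpy as np
from scipy.stats import spearmanr

lf_scores = np.array([4, 11, 25, 7, 40, 3, 55, 18])   # hypothetical LF readings
histology = np.array([0, 0, 1, 0, 2, 0, 3, 1])         # 0 = sound .. 3 = dentin lesion
d3_positive = histology >= 3                            # D3 threshold: dentin lesions only
cutoff = 30                                             # hypothetical optimal cutoff

positive = lf_scores >= cutoff
tp = np.sum(positive & d3_positive)
tn = np.sum(~positive & ~d3_positive)
fp = np.sum(positive & ~d3_positive)
fn = np.sum(~positive & d3_positive)

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
rho, _ = spearmanr(lf_scores, histology)
print(f"sens={sensitivity:.2f} spec={specificity:.2f} rho={rho:.2f}")
```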

Relevance: 90.00%

Abstract:

Responses of many real-world problems can only be evaluated perturbed by noise. In order to make efficient optimization of these problems possible, intelligent optimization strategies that successfully cope with noisy evaluations are required. In this article, a comprehensive review of existing kriging-based methods for the optimization of noisy functions is provided. In summary, ten methods for choosing the sequential samples are described using a unified formalism. They are compared on analytical benchmark problems, whereby the usual assumption of homoscedastic Gaussian noise made in the underlying models is met. Different problem configurations (noise level, maximum number of observations, initial number of observations) and setups (covariance functions, budget, initial sample size) are considered. It is found that the choices of the initial sample size and the covariance function are not critical. The choice of the method, however, can result in significant differences in performance. In particular, the three most intuitive criteria are found to be poor alternatives. Although no criterion is found consistently more efficient than the others, two specialized methods appear more robust on average.
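For readers unfamiliar with kriging-based sequential sampling, the sketch below runs one such loop on a noisy one-dimensional toy function under the homoscedastic Gaussian noise assumption mentioned above. The lower-confidence-bound infill criterion is used purely for illustration and is not necessarily one of the ten reviewed methods; the objective function and budget are made up.

```python
# Minimal sketch of one kriging-based infill loop on a noisy 1-D toy function,
# assuming homoscedastic Gaussian noise. The lower confidence bound is only an
# illustrative infill criterion, not one of the reviewed methods specifically.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
f = lambda x: np.sin(3 * x) + 0.3 * rng.standard_normal(x.shape)  # noisy objective

X = rng.uniform(0, 2, size=(6, 1))            # initial design
y = f(X).ravel()
grid = np.linspace(0, 2, 200).reshape(-1, 1)

for _ in range(20):                            # sequential sampling budget
    gp = GaussianProcessRegressor(RBF() + WhiteKernel(), normalize_y=True).fit(X, y)
    mu, sd = gp.predict(grid, return_std=True)
    x_next = grid[np.argmin(mu - 2.0 * sd)]    # lower-confidence-bound infill point
    X = np.vstack([X, x_next])
    y = np.append(y, f(x_next.reshape(1, 1)))

print("best predicted minimizer:", grid[np.argmin(gp.predict(grid))].item())
```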

Relevance: 90.00%

Abstract:

PURPOSE The aim of this work is to derive a theoretical framework for quantitative noise and temporal fidelity analysis of time-resolved k-space-based parallel imaging methods. THEORY An analytical formalism of noise distribution is derived, extending the existing g-factor formulation for non-time-resolved generalized autocalibrating partially parallel acquisition (GRAPPA) to time-resolved k-space-based methods. The noise analysis considers temporal noise correlations and is further accompanied by a temporal filtering analysis. METHODS All methods are derived and presented for k-t-GRAPPA and PEAK-GRAPPA. A sliding window reconstruction and non-time-resolved GRAPPA are taken as references. Statistical validation is based on series of pseudo-replica images. The analysis is demonstrated on a short-axis cardiac CINE dataset. RESULTS The superior signal-to-noise performance of time-resolved over non-time-resolved parallel imaging methods at the expense of temporal frequency filtering is analytically confirmed. Further, different temporal frequency filter characteristics of k-t-GRAPPA, PEAK-GRAPPA, and sliding window are revealed. CONCLUSION The proposed analysis of noise behavior and temporal fidelity establishes a theoretical basis for a quantitative evaluation of time-resolved reconstruction methods. Therefore, the presented theory allows for comparison between time-resolved parallel imaging methods and also non-time-resolved methods. Magn Reson Med, 2014. © 2014 Wiley Periodicals, Inc.
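The statistical validation mentioned above relies on pseudo-replica images. A minimal sketch of that idea is given below: synthetic complex Gaussian noise is repeatedly added to the measured k-space, each replica is reconstructed, and the pixel-wise standard deviation across replicas gives a noise map. The `reconstruct` argument is a placeholder for any time-resolved k-space reconstruction (e.g. k-t-GRAPPA); no actual reconstruction code is included.

```python
# Minimal sketch of the pseudo-replica approach used for statistical validation.
# `reconstruct` is a placeholder for any k-space reconstruction, not provided here.
import numpy as np

def pseudo_replica_noise_map(kspace, reconstruct, noise_std, n_replicas=128, seed=0):
    """kspace: complex array (coils, ky, kx); returns a per-pixel noise estimate."""
    rng = np.random.default_rng(seed)
    images = []
    for _ in range(n_replicas):
        noise = noise_std * (rng.standard_normal(kspace.shape)
                             + 1j * rng.standard_normal(kspace.shape)) / np.sqrt(2)
        images.append(reconstruct(kspace + noise))   # one image per noise replica
    images = np.stack(images)                        # (n_replicas, y, x)
    return images.std(axis=0)                        # pixel-wise noise standard deviation
```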

Relevance: 90.00%

Abstract:

BACKGROUND: HIV surveillance requires monitoring of new HIV diagnoses and differentiation of incident and older infections. In 2008, Switzerland implemented a system for monitoring incident HIV infections based on the results of a line immunoassay (Inno-Lia) mandatorily conducted for HIV confirmation and type differentiation (HIV-1, HIV-2) of all newly diagnosed patients. Based on this system, we assessed the proportion of incident HIV infection among newly diagnosed cases in Switzerland during 2008-2013. METHODS AND RESULTS: Inno-Lia antibody reaction patterns recorded in anonymous HIV notifications to the federal health authority were classified by 10 published algorithms into incident (up to 12 months) or older infections. Utilizing these data, annual incident infection estimates were obtained in two ways: (i) based on the diagnostic performance of the algorithms, utilizing the relationship 'incident = true incident + false incident'; (ii) based on the window periods of the algorithms, utilizing the relationship 'Prevalence = Incidence × Duration'. From 2008-2013, 3,851 HIV notifications were received. Adult HIV-1 infections amounted to 3,809 cases, and 3,636 of them (95.5%) contained Inno-Lia data. Incident infection totals calculated were similar for the performance- and window-based methods, amounting on average to 1,755 (95% confidence interval, 1,588-1,923) and 1,790 cases (95% CI, 1,679-1,900), respectively. More than half of these were among men who have sex with men. Both methods showed a continuous decline in annual incident infections from 2008 to 2013, totaling -59.5% and -50.2%, respectively. The decline in incident infections continued even in 2012, when a 15% increase in HIV notifications was observed. This increase was entirely due to older infections. Overall declines from 2008 to 2013 were of similar extent among the major transmission groups. CONCLUSIONS: Inno-Lia-based incident HIV-1 infection surveillance proved useful and reliable. It represents a free, additional public health benefit of the use of this relatively costly test for HIV confirmation and type differentiation.
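A minimal sketch of the two estimation routes is given below, with entirely hypothetical inputs. The performance-based route is written as a standard misclassification (Rogan-Gladen-style) correction for 'incident = true incident + false incident', and the window-based route rearranges 'Prevalence = Incidence × Duration'; neither is claimed to be the paper's exact estimator.

```python
# Minimal sketch of the two estimators, with hypothetical inputs.
def true_incident(observed_incident, n_total, sensitivity, specificity):
    """Performance-based route: solve observed = Se*T + (1 - Sp)*(N - T) for T
    (a Rogan-Gladen-style misclassification correction)."""
    return (observed_incident - (1.0 - specificity) * n_total) / (sensitivity + specificity - 1.0)

def annual_incident_window(n_recent, mean_window_months):
    """Window-based route: 'Prevalence = Incidence x Duration', rearranged to give
    annual incident infections from the count of recency-classified cases."""
    return n_recent / (mean_window_months / 12.0)

# hypothetical year: 600 notifications, 320 classified 'incident' by one algorithm
print(true_incident(320, 600, sensitivity=0.90, specificity=0.95))
print(annual_incident_window(320, mean_window_months=12.0))
```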

Relevance: 90.00%

Abstract:

Academic and industrial research in the late 1990s brought about an exponential explosion of DNA sequence data. Automated expert systems are being created to help biologists extract patterns, trends, and links from this ever-deepening ocean of information. Two such systems, aimed at retrieving and subsequently utilizing phylogenetically relevant information, were developed in this dissertation, the major objective of which was to automate the often difficult and confusing phylogenetic reconstruction process. Popular phylogenetic reconstruction methods, such as distance-based methods, attempt to find an optimal tree topology (one that reflects the relationships among related sequences and their evolutionary history) by searching through the topology space. Various compromises between fast (but incomplete) and exhaustive (but computationally prohibitive) search heuristics have been suggested. An intelligent compromise algorithm that relies on a flexible "beam" search principle from the Artificial Intelligence domain and uses pre-computed local topology reliability information to adjust the beam search space continuously is described in the second chapter of this dissertation. However, sometimes even a (virtually) complete distance-based method is inferior to the significantly more elaborate (and computationally expensive) maximum likelihood (ML) method. In fact, depending on the nature of the sequence data in question, either method might prove to be superior. Therefore, it is difficult (even for an expert) to tell a priori which phylogenetic reconstruction method—distance-based, ML, or maybe maximum parsimony (MP)—should be chosen for any particular data set. A number of factors, often hidden, influence the performance of a method. For example, it is generally understood that for a phylogenetically "difficult" data set, more sophisticated methods (e.g., ML) tend to be more effective and thus should be chosen. However, it is the interplay of many factors that one needs to consider in order to avoid choosing an inferior method (potentially a costly mistake, both in terms of computational expense and in terms of reconstruction accuracy). Chapter III of this dissertation details a phylogenetic reconstruction expert system that selects a suitable method automatically. It uses a classifier (a Decision Tree-inducing algorithm) to map a new data set to the proper phylogenetic reconstruction method.
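The "beam" search principle mentioned for the second chapter can be summarized by the generic skeleton below. Unlike the adaptive algorithm described in the dissertation, this sketch uses a fixed beam width, and `extend_topology` and `score` are placeholders for the actual tree-extension and distance-based scoring routines.

```python
# Generic beam-search skeleton in the spirit of the flexible "beam" strategy
# described above; `extend_topology` and `score` stand in for the actual
# tree-building and distance-based scoring routines, which are not shown here.
import heapq

def beam_search_topologies(initial_topologies, extend_topology, score, beam_width, n_steps):
    """Keep only the `beam_width` best partial topologies at every extension step."""
    beam = list(initial_topologies)
    for _ in range(n_steps):
        candidates = [t2 for t in beam for t2 in extend_topology(t)]
        if not candidates:
            break
        # lower score = better tree (e.g. a least-squares fit to the distance matrix)
        beam = heapq.nsmallest(beam_width, candidates, key=score)
    return min(beam, key=score)
```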

Relevance: 90.00%

Abstract:

Based on an order-theoretic approach, we derive sufficient conditions for the existence, characterization, and computation of Markovian equilibrium decision processes and stationary Markov equilibrium on minimal state spaces for a large class of stochastic overlapping generations models. In contrast to all previous work, we consider reduced-form stochastic production technologies that allow for a broad set of equilibrium distortions such as public policy distortions, social security, monetary equilibrium, and production nonconvexities. Our order-based methods are constructive, and we provide monotone iterative algorithms for computing extremal stationary Markov equilibrium decision processes and equilibrium invariant distributions, while avoiding many of the problems associated with the existence of indeterminacies that have been well documented in previous work. We provide important results on the existence of Markov equilibria for the case where capital income is not increasing in the aggregate stock. Finally, we conclude with examples common in macroeconomics, such as models with fiat money and social security. We also show how some of our results extend to settings with unbounded state spaces.
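A minimal sketch of a monotone iterative scheme of the kind the authors describe is given below: an order-preserving operator is applied repeatedly, starting from an extremal candidate policy, producing a monotone sequence that converges to an extremal fixed point. The operator used here is a toy savings-rule update, not the paper's equilibrium operator.

```python
# Minimal sketch of a monotone iterative algorithm: iterate an order-preserving
# operator T from an extremal starting policy until a fixed point is reached.
# The operator below is a toy damped savings-rule update, not the paper's.
import numpy as np

def iterate_extremal_policy(T, k_grid, policy0, tol=1e-8, max_iter=10_000):
    """Iterate policy_{n+1} = T(policy_n, k_grid) until the sup-norm change < tol."""
    policy = policy0
    for _ in range(max_iter):
        new_policy = T(policy, k_grid)
        if np.max(np.abs(new_policy - policy)) < tol:
            return new_policy
        policy = new_policy
    return policy

# toy monotone operator: damped update toward a fixed savings share of output
T = lambda h, k: 0.5 * h + 0.5 * (0.3 * k ** 0.36)
k_grid = np.linspace(0.05, 5.0, 200)
extremal_policy = iterate_extremal_policy(T, k_grid, policy0=np.zeros_like(k_grid))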

Relevance: 90.00%

Abstract:

Stress can affect a person's psychological and physical health and cause a variety of conditions including depression, immune system changes, and hypertension (Alzheimer's Association, 2010; Aschbacher et al., 2009; Fredman et al., 2010; Long et al., 2004; Mills et al., 2009; von Känel et al., 2008). The severity and consequences of these conditions can vary based on the duration, amount, and sources of stress experienced by the individual (Black & Hyer, 2010; Coen et al., 1997; Conde-Sala et al., 2010; Pinquart & Sörensen, 2007). Caregivers of people with dementia have an elevated risk of stress and its related health problems because they experience more negative interactions with, and provide more emotional support for, their care recipients than other caregivers. This paper uses the systematic program planning process of Intervention Mapping to organize evidence from the literature, qualitative research, and theory to develop recommendations for a theory- and evidence-based intervention to improve outcomes for caregivers of people with dementia. A needs assessment was conducted to identify specific influences on dementia caregiver stress, and a logic model of dementia caregiver stress was developed using the PRECEDE model. Necessary behavioral and environmental outcomes are identified for dementia caregiver stress reduction, and performance objectives for each were combined with selected determinants to produce change objectives. Planning matrices were then designed to inform effective theory-based methods and practical applications for recommended intervention delivery. Recommendations for program components, their scope and sequence, the completed program materials, and the program protocols are delineated, along with ways to ensure that the program is adopted and implemented after it is shown to be effective.

Relevance: 90.00%

Abstract:

The genomic era brought about by recent advances in next-generation sequencing technology makes genome-wide scans of natural selection a reality. Currently, almost all statistical tests and analytical methods for identifying genes under selection are performed on an individual-gene basis. Although these methods have the power to identify genes subject to strong selection, they have limited power in discovering genes targeted by moderate or weak selection forces, which are crucial for understanding the molecular mechanisms of complex phenotypes and diseases. The recent availability and rapid growth of gene network and protein-protein interaction databases accompanying the genomic era open avenues for enhancing the power of discovering genes under natural selection. The aim of this thesis is to explore and develop normal-mixture-model-based methods for leveraging gene network information to enhance the power of natural selection target gene discovery. The results show that the developed statistical method, which combines the posterior log odds of the standard normal mixture model and the Guilt-By-Association score of the gene network in a naïve Bayes framework, has the power to discover moderate/weak-selection genes that bridge the genes under strong selection, and it helps our understanding of the biology underlying complex diseases and related natural selection phenotypes.
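The combination rule described above can be sketched as follows: under the naive Bayes assumption, the posterior log odds from the two-component normal mixture and a Guilt-By-Association term computed from the gene's network neighbourhood simply add. All parameter values and the form of the GBA term below are illustrative assumptions, not the thesis's fitted model.

```python
# Minimal sketch of combining mixture-model evidence with network evidence under
# a naive Bayes assumption. Parameter values and the GBA term are illustrative.
import numpy as np
from scipy.stats import norm

def mixture_log_odds(z, pi1, mu1, sigma1, mu0=0.0, sigma0=1.0):
    """Posterior log odds that a gene's selection statistic z comes from the
    'under selection' component of a two-component normal mixture."""
    p1 = pi1 * norm.pdf(z, mu1, sigma1)
    p0 = (1 - pi1) * norm.pdf(z, mu0, sigma0)
    return np.log(p1 / p0)

def combined_log_odds(z, neighbor_flags, pi1=0.05, mu1=2.5, sigma1=1.0, gba_weight=0.8):
    """Naive Bayes: gene-level log odds plus a Guilt-By-Association term that grows
    with the number of interaction partners already flagged as under selection."""
    gba = gba_weight * np.sum(neighbor_flags)
    return mixture_log_odds(z, pi1, mu1, sigma1) + gba

print(combined_log_odds(z=1.8, neighbor_flags=[1, 1, 0]))
```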

Relevance: 90.00%

Abstract:

Two sets of mass spectrometry-based methods were developed specifically for the in vivo study of extracellular neuropeptide biochemistry. First, an integrated micro-concentration/desalting/matrix-addition device was constructed for matrix-assisted laser desorption ionization mass spectrometry (MALDI MS) to achieve attomole sensitivity for microdialysis samples. Second, capillary electrophoresis (CE) was incorporated into the above micro-liquid chromatography (LC) and MALDI MS system to provide two-dimensional separation and identification (i.e., electrophoretic mobility and molecular mass) for the analysis of complex mixtures. The latter technique includes two parts of instrumentation: (1) the coupling of a preconcentration LC column to the inlet of a CE capillary, and (2) the utilization of a matrix-precoated membrane target for continuous CE effluent deposition and for automatic MALDI MS analysis (imaging) of the CE track. Initial in vivo data reveal a carboxypeptidase A (CPA) activity in rat brain involved in extracellular neurotensin metabolism. Benzylsuccinic acid, a CPA inhibitor, inhibited formation of the neurotensin metabolite NT1-12 by 70%, while inhibitors of other major extracellular peptide-metabolizing enzymes increased NT1-12 formation. CPA activity had not been observed in previous in vitro experiments. Next, the validity of the methodology was demonstrated in the detection and structural elucidation of an endogenous neuropeptide, (L)VV-hemorphin-7, in rat brain upon ATP stimulation. Finally, the combined micro-LC/CE/MALDI MS was used in an in vivo metabolic study of peptide E, a mu-selective opioid peptide with 25 amino acid residues. Profiles of 88 metabolites were obtained, their identities being determined by their mass-to-charge ratios and electrophoretic mobilities. The results indicate that there are several primary in vivo cleavage sites for peptide E in the release of its enkephalin-containing fragments.

Relevance: 90.00%

Abstract:

Independent Component Analysis is a Blind Source Separation method that aims to find the pure source signals mixed together in unknown proportions in the observed signals under study. It does this by searching for factors that are mutually statistically independent. It can thus be classified among the latent-variable-based methods. As with other methods based on latent variables, a careful investigation has to be carried out to determine which factors are significant and which are not. Therefore, it is important to have a validation procedure for deciding on the optimal number of independent components to include in the final model. This can be complicated by the fact that two consecutive models may differ in the order and signs of similarly indexed ICs. As well, the structure of the extracted sources can change as a function of the number of factors calculated. Two methods for determining the optimal number of ICs are proposed in this article and applied to simulated and real datasets to demonstrate their performance.
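As one generic illustration of the validation problem (not the two procedures proposed in the article), the sketch below fits ICA models with an increasing number of components and checks, via absolute correlations, how well the sources of a k-component model are reproduced inside the (k+1)-component model, which copes with the sign and order ambiguity mentioned above.

```python
# Generic illustration of probing the number of ICs by checking how well the
# sources of consecutive models match each other (absolute correlations handle
# the sign/order ambiguity). Not the article's proposed procedures.
import numpy as np
from sklearn.decomposition import FastICA

def ic_reproducibility(X, k_max=8, seed=0):
    scores = {}
    prev = None
    for k in range(2, k_max + 1):
        S = FastICA(n_components=k, random_state=seed).fit_transform(X)
        if prev is not None:
            # best absolute correlation of each previous IC with any current IC
            corr = np.abs(np.corrcoef(prev.T, S.T)[: prev.shape[1], prev.shape[1]:])
            scores[k - 1] = corr.max(axis=1).min()   # worst-reproduced previous IC
        prev = S
    return scores   # a sharp drop in this score suggests too many components
```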

Relevance: 90.00%

Abstract:

Human identification from a skull is a critical process in legal and forensic medicine, especially when no other means are available. Traditional clay-based methods attempt to generate the human face in order to identify the corresponding person. However, these reconstructions lack objectivity and consistency, since they depend on the practitioner. Current computerized techniques are based on facial models, which introduce undesired facial features when the final reconstruction is built. This paper presents an objective 3D craniofacial reconstruction technique, implemented in a graphic application, without using any facial template. The only information required by the software tool is the 3D image of the target skull and three parameters: the age, gender, and Body Mass Index (BMI) of the individual. Complexity is minimized, since the application database consists only of the anthropological information provided by soft-tissue depth values at a set of points on the skull.

Relevance: 90.00%

Abstract:

This paper outlines the problems found in the parallelization of SPH (Smoothed Particle Hydrodynamics) algorithms using Graphics Processing Units. Results of several parallel GPU implementations are shown in terms of speed-up and scalability compared with the sequential CPU codes. The most problematic stage in GPU-SPH algorithms is the one responsible for locating neighboring particles and building the vectors where this information is stored, since these specific algorithms raise many difficulties for a data-level parallelization. Because neighbor location using linked lists does not expose enough data-level parallelism, two new approaches have been proposed to minimize bank conflicts in the writing and subsequent reading of the neighbor lists. The first strategy proposes an efficient coordination between CPU and GPU, using GPU algorithms for those stages that allow a straightforward parallelization and sequential CPU algorithms for those instructions that involve some kind of vector reduction. This coordination provides a relatively orderly reading of the neighbor lists in the interactions stage, achieving a speed-up factor of 47x in this stage. However, since the construction of the neighbor lists is quite expensive, the overall speed-up achieved is 41x. The second strategy seeks to maximize the use of the GPU in the neighbor location process by executing a specific vector sorting algorithm that allows some data-level parallelism. Although this strategy has succeeded in improving the speed-up in the neighbor location stage, the global speed-up in the interactions stage falls, due to inefficient reading of the neighbor vectors. Some changes to these strategies are proposed, aimed at maximizing the computational load of the GPU and using the GPU texture units, in order to reach the maximum speed-up for such codes. Different practical applications have been added to the mentioned GPU codes. First, the classical dam-break problem is studied. Second, the wave impact of the sloshing fluid contained in LNG vessel tanks is also simulated as a practical example of particle methods.
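For context, the sketch below is a plain CPU reference implementation of the cell-linked-list neighbor search that the GPU strategies above reorganize: particles are binned into cells of size h and neighbors are gathered from the 27 adjacent cells. It illustrates the data structure only, not the bank-conflict-aware GPU layouts discussed in the paper.

```python
# CPU reference sketch of a cell-linked-list neighbour search (smoothing length h).
# It illustrates the data structure only, not the GPU-optimised layouts.
import numpy as np
from collections import defaultdict
from itertools import product

def neighbour_lists(positions, h):
    """positions: (n, 3) array; returns, per particle, indices of particles within h."""
    cells = defaultdict(list)
    keys = np.floor(positions / h).astype(int)      # 3-D cell index of each particle
    for i, key in enumerate(map(tuple, keys)):
        cells[key].append(i)
    neighbours = []
    for i, key in enumerate(map(tuple, keys)):
        # candidates from the particle's own cell and its 26 surrounding cells
        cand = [j for off in product((-1, 0, 1), repeat=3)
                for j in cells.get(tuple(np.add(key, off)), [])]
        d = np.linalg.norm(positions[cand] - positions[i], axis=1)
        neighbours.append([j for j, dist in zip(cand, d) if dist <= h and j != i])
    return neighbours
```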

Relevance: 90.00%

Abstract:

Objective: This research focuses on the creation and validation of a solution to the inverse kinematics problem for a 6-degrees-of-freedom human upper limb. This system is intended to work within a real-time dysfunctional motion prediction system that allows anticipatory actuation in physical neurorehabilitation under the assisted-as-needed paradigm. For this purpose, a multilayer perceptron-based and an ANFIS-based solution to the inverse kinematics problem are evaluated. Materials and methods: Both the multilayer perceptron-based and the ANFIS-based inverse kinematics methods have been trained with three-dimensional Cartesian positions corresponding to the end-effector of healthy human upper limbs executing two different activities of daily life: "serving water from a jar" and "picking up a bottle". Validation of the proposed methodologies has been performed by a 10-fold cross-validation procedure. Results: Once trained, the systems are able to map 3D positions of the end-effector to the corresponding healthy biomechanical configurations. A high mean correlation coefficient and a low root mean squared error have been found for both the multilayer perceptron- and ANFIS-based methods. Conclusions: The obtained results indicate that both systems effectively solve the inverse kinematics problem, but, due to its low computational load (crucial in real-time applications) along with its high performance, a multilayer perceptron-based solution, consisting of 3 input neurons, 1 hidden layer with 3 neurons, and 6 output neurons, has been considered the most appropriate for the target application.
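A minimal sketch of the selected architecture (3 inputs, one hidden layer of 3 neurons, 6 outputs) used as an inverse-kinematics regressor, together with a 10-fold cross-validation, is shown below. The training data are random placeholders, not the recorded activities of daily living, and the hyperparameters are assumptions.

```python
# Minimal sketch of a 3-3-6 multilayer perceptron mapping end-effector positions
# to joint angles, validated with 10-fold cross-validation. Training data are
# random placeholders; hyperparameters are assumptions, not the study's values.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-0.5, 0.5, size=(500, 3))   # 3-D end-effector positions (m), placeholder
Y = rng.uniform(-1.0, 1.0, size=(500, 6))   # 6 joint angles (rad), placeholder

ik_net = MLPRegressor(hidden_layer_sizes=(3,), activation="tanh",
                      max_iter=5000, random_state=0)
# 10-fold cross-validation, as in the reported validation procedure
scores = cross_val_score(ik_net, X, Y, cv=10, scoring="neg_root_mean_squared_error")
print("mean RMSE:", -scores.mean())
ik_net.fit(X, Y)                             # final model mapping position -> joint angles
```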

Relevance: 90.00%

Abstract:

The use of seismic hysteretic dampers for passive control has been increasing rapidly in recent years, for both new and existing buildings. In order to utilize hysteretic dampers within a structural system, it is of paramount importance to have simplified design procedures based upon knowledge gained from theoretical studies and validated with experimental results. Non-linear Static Procedures (NSPs) are presented as an alternative to the force-based methods more common nowadays. The application of NSPs to conventional structures has been well established, yet there is a lack of experimental information on how NSPs apply to systems with hysteretic dampers. In this research, several shaking table tests were conducted on two single-bay, single-story 1:2-scale structures with and without hysteretic dampers. The maximum response of the structure with dampers in terms of lateral displacement and base shear obtained from the tests was compared with the predictions provided by three well-known NSPs: (1) the improved version of the Capacity Spectrum Method (CSM) from FEMA 440; (2) the improved version of the Displacement Coefficient Method (DCM) from FEMA 440; and (3) the N2 Method implemented in Eurocode 8. In general, the improved version of the DCM and the N2 Method are found to provide acceptable accuracy in prediction, but the CSM tends to underestimate the response.
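As a reminder of what the Displacement Coefficient Method computes, the sketch below estimates a target displacement by scaling the elastic spectral displacement with coefficients C0, C1, and C2. The coefficient expressions are the commonly quoted simplified FEMA 440 forms and the input values are hypothetical; this is not the test-specific calculation used in the study.

```python
# Minimal sketch of a Displacement Coefficient Method estimate (FEMA 440 style).
# Coefficient expressions are simplified common forms and should be checked
# against FEMA 440; all input values are hypothetical.
import math

def dcm_target_displacement(Sa, Te, R, C0=1.0, a=90.0, g=9.81):
    """Sa: spectral acceleration (in g), Te: effective period (s), R: strength ratio,
    C0: modal-to-roof factor, a: site-class constant."""
    Sd = Sa * g * Te ** 2 / (4.0 * math.pi ** 2)   # elastic spectral displacement (m)
    C1 = 1.0 + (R - 1.0) / (a * Te ** 2)            # inelastic displacement ratio
    C2 = 1.0 + ((R - 1.0) / Te) ** 2 / 800.0        # cyclic degradation effect
    return C0 * C1 * C2 * Sd

print(dcm_target_displacement(Sa=0.4, Te=0.35, R=2.0, C0=1.2))
```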

Relevance: 90.00%

Abstract:

In this paper, we apply a hierarchical strategy for tracking planar objects (or objects that can be assumed to be planar) that is based on direct methods, for vision-based applications on board UAVs. The use of this tracking strategy allows the tasks to be achieved at real-time frame rates and overcomes problems posed by the challenging conditions of the tasks, e.g. constant vibrations, fast 3D changes, or limited on-board capacity. The vast majority of approaches make use of feature-based methods to track objects. Nonetheless, in this paper we show that although some of these feature-based solutions are faster, direct methods can be more robust under fast 3D motions (fast changes in position), some changes in appearance, constant vibrations (without requiring any specific hardware or software for video stabilization), and situations in which part of the object to track is outside the field of view of the camera. The performance of the proposed tracking strategy on board UAVs is evaluated with images from real flight tests using manually generated ground-truth information, accurate position estimation using a Vicon system, and also with simulated data from a simulation environment. Results show that the hierarchical tracking strategy performs better than well-known feature-based algorithms and well-known configurations of direct methods, and that its performance is robust enough for vision-in-the-loop tasks, e.g. vision-based landing tasks.
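The sketch below shows one Gauss-Newton step of a direct (intensity-based) alignment, restricted to a pure 2-D translation for brevity. The hierarchical tracker described above estimates a full planar motion model and combines several such estimators, which is not reproduced here; the function and its simple warp are illustrative assumptions.

```python
# Minimal sketch of one Gauss-Newton step of direct (intensity-based) alignment
# for a pure 2-D translation; the tracker above uses a full planar motion model
# and a hierarchical organisation, which this sketch does not reproduce.
import numpy as np

def direct_translation_step(template, image, p, eps=1e-6):
    """One update of the translation p = np.array([tx, ty]) minimising the SSD
    between the template and the warped image (nearest-neighbour warp)."""
    h, w = template.shape
    ys, xs = np.mgrid[0:h, 0:w]
    xw = np.clip((xs + p[0]).round().astype(int), 0, image.shape[1] - 1)
    yw = np.clip((ys + p[1]).round().astype(int), 0, image.shape[0] - 1)
    warped = image[yw, xw]
    error = (template - warped).ravel()
    gy, gx = np.gradient(warped.astype(float))          # image gradients at warped pixels
    J = np.stack([gx.ravel(), gy.ravel()], axis=1)      # Jacobian w.r.t. (tx, ty)
    dp = np.linalg.solve(J.T @ J + eps * np.eye(2), J.T @ error)
    return p + dp
```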