964 results for graph-based regularization
Abstract:
Ultrasound segmentation is a challenging problem due to the inherent speckle and artifacts such as shadows, attenuation, and signal dropout. Existing methods need to include strong priors, such as shape priors or analytical intensity models, to succeed in the segmentation. However, such priors tend to limit these methods to a specific target or imaging setting, and they are not always applicable to pathological cases. This work introduces a semi-supervised segmentation framework for ultrasound imaging that alleviates the main limitation of fully automatic segmentation: it is applicable to any kind of target and imaging setting. Our methodology represents the ultrasound image as a graph of image patches and uses user-assisted initialization with labels, which act as soft priors. The segmentation problem is formulated as a continuous minimum cut problem and solved with an efficient optimization algorithm. We validate our segmentation framework on clinical ultrasound images (prostate, fetus, and tumors of the liver and eye). We obtain high similarity agreement with the ground truth provided by medical expert delineations in all applications (average Dice score of 94%), and the proposed algorithm compares favorably with the literature.
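The abstract gives no implementation details; as a rough illustration of the patch-graph idea, the sketch below builds a graph over image patches and solves a discrete s-t minimum cut with networkx. This is only a discrete analogue of the continuous min-cut formulated in the paper, and the patch size, similarity kernel, and seed placement are all assumptions.

    import numpy as np
    import networkx as nx

    def patch_graph_mincut(image, fg_seeds, bg_seeds, patch=4, sigma=0.1):
        """Illustrative discrete s-t min-cut over a grid of image patches.

        fg_seeds / bg_seeds: lists of (row, col) patch indices labeled by the user.
        """
        h, w = image.shape[0] // patch, image.shape[1] // patch
        # Mean intensity per patch as a crude patch feature.
        feats = image[:h * patch, :w * patch].reshape(h, patch, w, patch).mean(axis=(1, 3))

        G = nx.Graph()
        for i in range(h):
            for j in range(w):
                for di, dj in ((0, 1), (1, 0)):  # 4-connected patch lattice
                    ni, nj = i + di, j + dj
                    if ni < h and nj < w:
                        wgt = np.exp(-(feats[i, j] - feats[ni, nj]) ** 2 / sigma ** 2)
                        G.add_edge((i, j), (ni, nj), capacity=wgt)
        # User labels act as (near-)hard terminal links in this discrete sketch.
        for p in fg_seeds:
            G.add_edge('s', p, capacity=1e9)
        for p in bg_seeds:
            G.add_edge(p, 't', capacity=1e9)

        _, (fg, _) = nx.minimum_cut(G, 's', 't')
        return fg - {'s'}  # set of patches on the foreground side

    rng = np.random.default_rng(0)
    img = rng.random((32, 32))
    img[8:24, 8:24] += 1.0  # brighter square to segment
    print(len(patch_graph_mincut(img, fg_seeds=[(4, 4)], bg_seeds=[(0, 0)])))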
Abstract:
The paper presents a competence-based instructional design system and a way to personalize navigation through the course content. The navigation aid tool builds on the competence graph and the student model, which includes elements of uncertainty in the assessment of students. An individualized navigation graph is constructed for each student, suggesting the competences the student is most prepared to study. We use fuzzy set theory to deal with uncertainty. The marks of the assessment tests are transformed into linguistic terms and used to assign values to linguistic variables. For each competence, the level of difficulty and the level of knowing its prerequisites are calculated from the assessment marks. Using these linguistic variables and approximate reasoning (fuzzy IF-THEN rules), a crisp category is assigned to each competence indicating its level of recommendation.
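The abstract describes the inference pipeline only at a high level; the sketch below shows, under assumed membership functions and rules, how marks could pass through triangular fuzzy sets and max-min IF-THEN rules to a crisp recommendation category. Set shapes, rule contents, and category names are illustrative assumptions, not the paper's actual rule base.

    def tri(x, a, b, c):
        """Triangular membership function with peak at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def recommend(difficulty_mark, prereq_mark):
        # Fuzzify marks (0-100) into linguistic terms -- shapes are assumptions.
        diff = {'low': tri(difficulty_mark, -1, 0, 50),
                'high': tri(difficulty_mark, 50, 100, 101)}
        prereq = {'weak': tri(prereq_mark, -1, 0, 50),
                  'strong': tri(prereq_mark, 50, 100, 101)}
        # Illustrative fuzzy IF-THEN rules (max-min inference).
        strength = {
            'recommended': min(diff['low'], prereq['strong']),
            'neutral': max(min(diff['high'], prereq['strong']),
                           min(diff['low'], prereq['weak'])),
            'not_recommended': min(diff['high'], prereq['weak']),
        }
        # Crisp category = rule with the highest firing strength.
        return max(strength, key=strength.get)

    print(recommend(difficulty_mark=30, prereq_mark=80))  # -> 'recommended'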
Abstract:
Segmenting ultrasound images is a challenging problem where standard unsupervised segmentation methods such as the well-known Chan-Vese method fail. We propose in this paper an efficient segmentation method for this class of images. Our proposed algorithm is based on a semi-supervised approach (user labels) and the use of image patches as data features. We also consider the Pearson distance between patches, which has been shown to be robust w.r.t. the speckle noise present in ultrasound images. Our results on phantom and clinical data show a very high similarity agreement with the ground truth provided by a medical expert.
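For concreteness, a Pearson distance between two patches is usually written as one minus their Pearson correlation coefficient; the small sketch below uses that standard definition (whether the paper uses exactly this form is an assumption):

    import numpy as np

    def pearson_distance(p, q):
        """1 - Pearson correlation between two flattened image patches."""
        p, q = np.ravel(p).astype(float), np.ravel(q).astype(float)
        p -= p.mean()
        q -= q.mean()
        denom = np.linalg.norm(p) * np.linalg.norm(q)
        if denom == 0:  # constant patch: correlation undefined, treat as maximal distance
            return 1.0
        return 1.0 - float(p @ q) / denom

    patch_a = np.array([[1.0, 2.0], [3.0, 4.0]])
    print(pearson_distance(patch_a, 2.0 * patch_a + 1.0))  # ~0.0: perfectly correlated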
Abstract:
Recently, graph theory and complex networks have been widely used as a means to model the functionality of the brain. Among the different neuroimaging techniques available for constructing brain functional networks, electroencephalography (EEG), with its high temporal resolution, is a useful instrument for the analysis of functional interdependencies between different brain regions. Alzheimer's disease (AD) is a neurodegenerative disease which leads to substantial cognitive decline and, eventually, dementia in aged people. To achieve a deeper insight into the behavior of functional cerebral networks in AD, here we study their synchronizability in 17 newly diagnosed AD patients compared to 17 healthy control subjects in a no-task, eyes-closed condition. The cross-correlation of artifact-free EEGs was used to construct the brain functional networks. The extracted networks were then tested for their synchronization properties by calculating the eigenratio of the Laplacian matrix of the connection graph, i.e., the largest eigenvalue divided by the second smallest one. In AD patients, we found an increase in the eigenratio, i.e., a decrease in the synchronizability of brain networks across the delta, alpha, beta, and gamma EEG frequencies over a wide range of network costs. The finding indicates the destruction of functional brain networks in early AD.
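The eigenratio used here is a standard synchronizability index from the master stability function literature; a minimal sketch of its computation (the correlation threshold and toy data are assumptions):

    import numpy as np

    def laplacian_eigenratio(signals, threshold=0.5):
        """Synchronizability index lambda_N / lambda_2 of a correlation-derived graph."""
        corr = np.corrcoef(signals)                 # channels x channels cross-correlation
        adj = (np.abs(corr) > threshold).astype(float)
        np.fill_diagonal(adj, 0.0)
        lap = np.diag(adj.sum(axis=1)) - adj        # combinatorial Laplacian L = D - A
        eig = np.sort(np.linalg.eigvalsh(lap))
        if eig[1] <= 1e-12:                         # disconnected graph: ratio diverges
            return np.inf
        return eig[-1] / eig[1]                     # larger ratio = harder to synchronize

    rng = np.random.default_rng(1)
    eeg = rng.standard_normal((17, 1000))           # 17 channels of toy "EEG"
    eeg += 0.8 * rng.standard_normal((1, 1000))     # shared component induces correlations
    print(laplacian_eigenratio(eeg, threshold=0.3))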
Abstract:
Background: Network reconstructions at the cell level are a major development in systems biology. However, we are far from fully exploiting their potential. Often, the incremental complexity of the pursued systems overrides experimental capabilities, or increasingly sophisticated protocols are underutilized, merely refining the confidence levels of already established interactions. For metabolic networks, the currently employed confidence scoring system rates reactions discretely according to nested categories of experimental evidence or model-based likelihood. Results: Here, we propose a complementary network-based scoring system that exploits the statistical regularities of a metabolic network as a bipartite graph. As an illustration, we apply it to the metabolism of Escherichia coli. The model is adjusted to the observations to derive connection probabilities between individual metabolite-reaction pairs and, after validation, to assess the reliability of each reaction in probabilistic terms. This network-based scoring system uncovers very specific reactions that could be functionally or evolutionarily important, identifies prominent experimental targets, and enables further confirmation of modeling results. Conclusions: We foresee a wide range of potential applications at different sub-cellular or supra-cellular levels of biological interactions, given the natural bipartivity of many biological networks.
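As a hedged illustration of the bipartite representation the abstract relies on (the actual probabilistic model is not specified here), the sketch below builds a metabolite-reaction bipartite graph and scores each pair with a simple degree-based expected-connection value in the spirit of a configuration model; the formula and toy data are assumptions.

    import networkx as nx

    # Toy metabolite-reaction bipartite graph (edges = stoichiometric participation).
    edges = [('glc', 'R1'), ('atp', 'R1'), ('g6p', 'R1'),
             ('g6p', 'R2'), ('f6p', 'R2'),
             ('atp', 'R3'), ('f6p', 'R3'), ('fbp', 'R3')]
    B = nx.Graph(edges)
    metabolites = {m for m, _ in edges}
    reactions = {r for _, r in edges}
    n_edges = B.number_of_edges()

    # Configuration-model style score: expected number of m-r links given degrees.
    # (Interpretable as a probability only while degrees are small vs. n_edges.)
    for m in sorted(metabolites):
        for r in sorted(reactions):
            p = B.degree(m) * B.degree(r) / n_edges
            flag = '*' if B.has_edge(m, r) else ' '
            print(f'{flag} P({m}, {r}) ~ {p:.2f}')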
Abstract:
In this paper, we present an efficient numerical scheme for the recently introduced geodesic active fields (GAF) framework for geometric image registration. This framework considers the registration task as a weighted minimal surface problem. Hence, the data term and the regularization term are combined through multiplication in a single, parametrization-invariant and geometric cost functional. The multiplicative coupling provides an intrinsic, spatially varying, and data-dependent tuning of the regularization strength, and the parametrization invariance allows working with images of non-flat geometry, generally defined on any smoothly parametrizable manifold. The resulting energy-minimizing flow, however, has poor numerical properties. Here, we provide an efficient numerical scheme that uses a splitting approach: data and regularity terms are optimized over two distinct deformation fields that are constrained to be equal via an augmented Lagrangian approach. Our approach is more flexible than standard Gaussian regularization, since one can interpolate freely between isotropic Gaussian and anisotropic TV-like smoothing. We compare the geodesic active fields method with the popular Demons method and three more recent state-of-the-art algorithms: NL-optical flow, MRF image registration, and landmark-enhanced large displacement optical flow. This shows the advantages of the proposed FastGAF method: it compares favorably against Demons, both in terms of registration speed and quality, and, over the range of example applications, it consistently produces results not far from more dedicated state-of-the-art methods, illustrating the flexibility of the proposed framework.
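Schematically (a generic rendering of the coupling described above, not the paper's exact functional), the GAF-style energy replaces the usual additive combination of a data term D and a regularity term R with a multiplicative weighting of the deformation field's area element:

    E_additive[u] = \int_\Omega D(u) \, dx + \lambda \int_\Omega R(u) \, dx
    E_GAF[u]      = \int_\Omega w(D(u)) \sqrt{\det g(u)} \, dx

where \sqrt{\det g(u)} \, dx is the area element of the embedded deformation field (the minimal-surface regularizer) and the weight w grows with the data mismatch, so the regularization strength is modulated pointwise by the data.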
Abstract:
We present an alternative approach to the usual treatments of singular Lagrangians. It is based on a Hamiltonian regularization scheme inspired by the coisotropic embedding of presymplectic systems. A Lagrangian regularization of a singular Lagrangian is a regular Lagrangian defined on an extended velocity phase space that reproduces the original theory when restricted to the initial configuration space. A Lagrangian regularization does not always exist, but a family of singular Lagrangians is studied for which such a regularization can be described explicitly. These regularizations turn out to be essentially unique and provide an alternative setting for quantizing the corresponding physical systems. These ideas can be applied both in classical mechanics and in field theories. Several examples are discussed in detail. © 1995 American Institute of Physics.
Abstract:
Schizophrenia is postulated to be the prototypical dysconnection disorder, in which hallucinations are the core symptom. Due to high heterogeneity in methodology across studies and in the clinical phenotype, it remains unclear whether the structural brain dysconnection is global or focal, and whether clinical symptoms result from this dysconnection. In the present work, we attempt to clarify this issue by studying a population considered a homogeneous genetic subtype of schizophrenia, namely the 22q11.2 deletion syndrome (22q11.2DS). Cerebral MRIs were acquired for 46 patients and 48 age- and gender-matched controls (aged 6-26; mean age 15.20 ± 4.53 and 15.28 ± 4.35 years, respectively). Using the Connectome Mapper pipeline (connectomics.org), which combines structural and diffusion MRI, we created a whole-brain network for each individual. Graph theory was used to quantify the global and local properties of the brain network organization for each participant. A global degree loss of 6% was found in patients' networks, along with an increased characteristic path length. After identifying and comparing hubs, a significant loss of degree was found in 58% of the patients' hubs. Based on Allen's brain network model for hallucinations, we explored the association between local efficiency and symptom severity. Negative correlations were found in Broca's area (p < 0.004) and Wernicke's area (p < 0.023), and a positive correlation was found in the dorsolateral prefrontal cortex (DLPFC) (p < 0.014). In line with the dysconnection findings in schizophrenia, our results provide preliminary evidence for a targeted alteration in the organization of brain network hubs in individuals with a genetic risk for schizophrenia. The study of specific disorganization in language, speech, and thought regulation networks sharing similar network properties may help to understand their role in the hallucination mechanism.
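The graph measures named above are standard; a rough sketch of how they could be computed on a connectome-style graph with networkx (the toy graph and the top-10%-by-degree hub rule are assumptions):

    import networkx as nx

    # Toy stand-in for a structural brain network.
    G = nx.connected_watts_strogatz_graph(n=68, k=6, p=0.1, seed=0)

    # Characteristic path length: average shortest path over all node pairs.
    cpl = nx.average_shortest_path_length(G)

    # Hubs defined here (an assumption) as the top 10% of nodes by degree.
    degs = dict(G.degree())
    hubs = sorted(degs, key=degs.get, reverse=True)[:len(G) // 10]

    # Local efficiency of one node: efficiency of the subgraph of its neighbors.
    def local_efficiency(G, v):
        sub = G.subgraph(G[v])
        if sub.number_of_nodes() < 2:
            return 0.0
        return nx.global_efficiency(sub)

    print(f'CPL = {cpl:.2f}, hubs = {hubs}')
    print(f'local efficiency of node 0 = {local_efficiency(G, 0):.2f}')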
Abstract:
Schizophrenia is often considered a dysconnection syndrome in which abnormal interactions between large-scale functional brain networks result in cognitive and perceptual deficits. In this article we apply graph-theoretic measures to brain functional networks based on the resting EEGs of fourteen schizophrenic patients in comparison with those of fourteen matched control subjects. The networks were extracted from common-average-referenced EEG time series through partial and unpartial cross-correlation methods. Unpartial correlation detects functional connectivity based on direct and/or indirect links, while partial correlation allows one to ignore indirect links. We quantified the network properties with graph metrics, including small-worldness, vulnerability, modularity, assortativity, and synchronizability. The schizophrenic patients showed method-specific and frequency-specific changes, especially pronounced for the modularity, assortativity, and synchronizability measures. However, the differences between schizophrenia patients and normal controls in terms of graph-theory metrics were stronger for the unpartial correlation method.
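Partial correlation, which discounts indirect links, can be obtained from the inverse covariance (precision) matrix; a minimal sketch of this standard construction (the paper's exact estimator is not specified here):

    import numpy as np

    def partial_correlation(signals):
        """Partial correlation matrix from the precision matrix:
        rho_ij = -P_ij / sqrt(P_ii * P_jj), removing indirect links."""
        prec = np.linalg.inv(np.cov(signals))
        d = np.sqrt(np.diag(prec))
        pcorr = -prec / np.outer(d, d)
        np.fill_diagonal(pcorr, 1.0)
        return pcorr

    rng = np.random.default_rng(2)
    x = rng.standard_normal(2000)
    y = x + 0.3 * rng.standard_normal(2000)      # y driven by x
    z = x + 0.3 * rng.standard_normal(2000)      # z driven by x (y-z link is indirect)
    pc = partial_correlation(np.vstack([x, y, z]))
    print(f'partial corr(y, z) = {pc[1, 2]:.2f}')  # near 0 once x is controlled for
    print(f'plain corr(y, z)   = {np.corrcoef(y, z)[0, 1]:.2f}')  # clearly positive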
Abstract:
This paper describes the port interconnection of two subsystems: a power electronics subsystem (a back-to-back AC/AC converter (B2B), coupled to a phase of the power grid) and an electromechanical subsystem (a doubly-fed induction machine (DFIM)). The B2B is a variable structure system (VSS), due to the presence of control-actuated switches; however, from a modelling and simulation, as well as a control-design, point of view, it is sensible to consider modulated transformers (MTF in the bond graph language) instead of the pairs of complementary switches. The port-Hamiltonian models of both subsystems are presented and, using a power-preserving interconnection, the Hamiltonian description of the whole system is obtained; detailed bond graphs of all subsystems and the complete system are also provided. Using passivity-based controllers computed in the Hamiltonian formalism for both subsystems, the whole model is simulated; simulations are run to test the correctness and efficiency of the Hamiltonian network modelling approach used in this work.
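For reference, the standard input-state-output port-Hamiltonian template that such subsystem models typically instantiate (a generic form; the paper's specific matrices are not reproduced here) is

    \dot{x} = [J(x) - R(x)] \frac{\partial H}{\partial x}(x) + g(x) u,
    \qquad y = g(x)^\top \frac{\partial H}{\partial x}(x),

with J = -J^\top the power-conserving interconnection structure, R = R^\top \geq 0 the dissipation, H the total stored energy, and (u, y) the port variables; a power-preserving interconnection equates the port variables of the two subsystems so that no energy is created or lost at the coupling.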
Abstract:
A regularization method based on the non-extensive maximum entropy principle is devised. Special emphasis is given to the q = 1/2 case. We show that, when the residual principle is considered as a constraint, the q = 1/2 generalized distribution of Tsallis yields a regularized solution for ill-conditioned problems. The regularized distribution thus devised is endowed with a component which corresponds to the well-known regularized solution of Tikhonov (1977).
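The Tikhonov solution referred to at the end is the classical one; for a discrete ill-conditioned system Ax = b it reads (standard textbook form, given here for orientation)

    x_\lambda = \arg\min_x \|Ax - b\|^2 + \lambda \|x\|^2
              = (A^\top A + \lambda I)^{-1} A^\top b, \qquad \lambda > 0,

with the residual (discrepancy) principle choosing \lambda so that \|A x_\lambda - b\| matches the estimated noise level.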
Abstract:
Social interactions are a very important component in people's lives. Social network analysis has become a common technique used to model and quantify the properties of social interactions. In this paper, we propose an integrated framework to explore the characteristics of a social network extracted from multimodal dyadic interactions. For our study, we used a set of videos belonging to the New York Times' Blogging Heads opinion blog. The social network is represented as an oriented graph, whose directed links are determined by the Influence Model. The links' weights are a measure of the "influence" a person has over the other. The states of the Influence Model encode automatically extracted audio/visual features from our videos using state-of-the-art algorithms. Our results are reported in terms of accuracy of audio/visual data fusion for speaker segmentation and centrality measures used to characterize the extracted social network.
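As an illustration of the kind of centrality analysis described (the influence weights below are invented placeholders, not values from the paper), one can load the directed influence graph and compute weighted centralities:

    import networkx as nx

    # Hypothetical influence weights w(a -> b): how much a influences b.
    G = nx.DiGraph()
    G.add_weighted_edges_from([
        ('host', 'guest1', 0.6), ('guest1', 'host', 0.2),
        ('host', 'guest2', 0.5), ('guest2', 'guest1', 0.3),
    ])

    # Weighted in-degree: total influence received by each participant.
    influence_received = dict(G.in_degree(weight='weight'))
    # PageRank on the influence graph as one possible centrality measure.
    pagerank = nx.pagerank(G, weight='weight')

    for person in G:
        print(f'{person}: received={influence_received[person]:.1f}, '
              f'pagerank={pagerank[person]:.2f}')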
Abstract:
Learning of preference relations has recently received significant attention in the machine learning community. It is closely related to classification and regression analysis and can be reduced to these tasks. However, preference learning involves the prediction of an ordering of the data points rather than the prediction of a single numerical value, as in regression, or a class label, as in classification. Therefore, studying preference relations within a separate framework not only facilitates a better theoretical understanding of the problem, but also motivates the development of efficient algorithms for the task. Preference learning has many applications in domains such as information retrieval, bioinformatics, and natural language processing. For example, algorithms that learn to rank are frequently used in search engines for ordering documents retrieved by a query. Preference learning methods have also been applied to collaborative filtering problems for predicting individual customer choices from the vast amount of user-generated feedback. In this thesis we propose several algorithms for learning preference relations. These algorithms stem from the well-founded and robust class of regularized least-squares methods and have many attractive computational properties. In order to improve the performance of our methods, we introduce several non-linear kernel functions. Thus, the contribution of this thesis is twofold: kernel functions for structured data, used to take advantage of various non-vectorial data representations, and preference learning algorithms suitable for different tasks, namely efficient learning of preference relations, learning with large amounts of training data, and semi-supervised preference learning. The proposed kernel-based algorithms and kernels are applied to the parse ranking task in natural language processing, document ranking in information retrieval, and remote homology detection in bioinformatics. Training of kernel-based ranking algorithms can be infeasible when the size of the training set is large. This problem is addressed by proposing a preference learning algorithm whose computational complexity scales linearly with the number of training data points. We also introduce a sparse approximation of the algorithm that can be efficiently trained with large amounts of data. For situations where a small amount of labeled data but a large amount of unlabeled data is available, we propose a co-regularized preference learning algorithm. To conclude, the methods presented in this thesis address not only the problem of efficient training of the algorithms but also fast regularization parameter selection, multiple output prediction, and cross-validation. Furthermore, the proposed algorithms lead to notably better performance in many of the preference learning tasks considered.
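A minimal sketch of a regularized least-squares ranking objective in the spirit described (a linear model, a pairwise squared loss over preference pairs, and toy data are all assumptions; this is not the thesis' actual RankRLS-style implementation):

    import numpy as np

    def rls_rank(X, prefs, lam=1.0):
        """Linear scoring function fitted with a regularized least-squares
        pairwise loss: for each preferred pair (i, j), push score(i) - score(j)
        towards a margin of 1."""
        D = np.array([X[i] - X[j] for i, j in prefs])   # pair-difference rows
        n_feat = X.shape[1]
        # Closed-form solution of min_w ||D w - 1||^2 + lam ||w||^2
        w = np.linalg.solve(D.T @ D + lam * np.eye(n_feat),
                            D.T @ np.ones(len(prefs)))
        return w

    rng = np.random.default_rng(3)
    X = rng.standard_normal((6, 4))
    prefs = [(0, 1), (1, 2), (0, 3), (4, 5)]            # i preferred over j
    w = rls_rank(X, prefs, lam=0.1)
    scores = X @ w
    print('pairs ranked correctly:',
          sum(scores[i] > scores[j] for i, j in prefs), '/', len(prefs))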
Abstract:
In this study, dispersive liquid-liquid microextraction based on the solidification of floating organic droplets was used for the preconcentration and determination of thorium in water samples. In this method, acetone and 1-undecanol were used as the disperser and extraction solvents, respectively, and 1-(2-thenoyl)-3,3,3-trifluoroacetone (TTA) and Aliquat 336 were used as a chelating agent and an ion-pairing reagent, respectively, for the extraction of thorium. Inductively coupled plasma optical emission spectrometry was applied for the quantitation of the analyte after preconcentration. The effects of various factors, such as the extraction and disperser solvents, sample pH, concentration of TTA, and concentration of Aliquat 336, were investigated. Under the optimum conditions, the calibration graph was linear within the thorium content range of 1.0-250 µg L⁻¹ with a detection limit of 0.2 µg L⁻¹. The method was also successfully applied to the determination of thorium in different water samples.
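For orientation, a linear calibration graph like the one reported is typically fit by ordinary least squares, with the detection limit commonly estimated as 3 times the blank standard deviation over the slope; a generic sketch with invented numbers (not the study's data):

    import numpy as np

    # Hypothetical calibration standards (ug/L) and instrument responses.
    conc = np.array([1.0, 5.0, 25.0, 100.0, 250.0])
    signal = np.array([0.9, 4.8, 24.5, 101.2, 248.0])

    slope, intercept = np.polyfit(conc, signal, 1)      # least-squares line
    r = np.corrcoef(conc, signal)[0, 1]

    # Common detection-limit estimate: 3 * sd(blank) / slope.
    blank_sd = 0.06                                     # assumed blank noise
    lod = 3 * blank_sd / slope

    print(f'signal = {slope:.3f} * conc + {intercept:.3f}  (r = {r:.4f})')
    print(f'estimated LOD ~ {lod:.2f} ug/L')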