842 results for Graph Based Algorithms
Abstract:
Functional connectivity in the human brain can be represented as a network using electroencephalography (EEG) signals. These networks, whose node counts can vary from tens to hundreds, are characterized by neurobiologically meaningful graph theory metrics. This study investigates the degree to which various graph metrics depend on network size. To this end, EEGs from 32 normal subjects were recorded and functional networks of three different sizes were extracted. A state-space based method was used to calculate cross-correlation matrices between different brain regions. These correlation matrices were used to construct binary adjacency connectomes, which were assessed with regard to a number of graph metrics such as clustering coefficient, modularity, efficiency, economic efficiency, and assortativity. We showed that the estimates of these metrics differ significantly depending on the network size. Larger networks had higher efficiency, higher assortativity, and lower modularity compared with smaller networks of the same density. These findings indicate that network size should be considered in any comparison of networks across studies.
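As a rough illustration of this kind of analysis (not the authors' pipeline), the sketch below thresholds a stand-in correlation matrix into a binary adjacency matrix at fixed density and computes the metrics named above with NetworkX; the random data and the 10% edge density are arbitrary assumptions.

```python
# Illustrative sketch: graph metrics on a binary adjacency connectome.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
corr = np.abs(rng.standard_normal((64, 64)))   # stand-in cross-correlation matrix
corr = (corr + corr.T) / 2
np.fill_diagonal(corr, 0)

threshold = np.quantile(corr, 0.9)             # keep the strongest 10% of entries
adjacency = (corr >= threshold).astype(int)
G = nx.from_numpy_array(adjacency)

communities = nx.algorithms.community.greedy_modularity_communities(G)
metrics = {
    "clustering":    nx.average_clustering(G),
    "efficiency":    nx.global_efficiency(G),
    "assortativity": nx.degree_assortativity_coefficient(G),
    "modularity":    nx.algorithms.community.modularity(G, communities),
}
print(metrics)
```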
Abstract:
Defining an efficient training set is one of the most delicate phases for the success of remote sensing image classification routines. The complexity of the problem, the limited temporal and financial resources, and the high intraclass variance can make an algorithm fail if it is trained with a suboptimal dataset. Active learning aims at building efficient training sets by iteratively improving the model performance through sampling. A user-defined heuristic ranks the unlabeled pixels according to a function of the uncertainty of their class membership, and the user is then asked to provide labels for the most uncertain pixels. This paper reviews and tests the main families of active learning algorithms: committee-based, large-margin, and posterior-probability-based. For each family, the most recent advances in the remote sensing community are discussed and some heuristics are detailed and tested. Several challenging remote sensing scenarios are considered, including very high spatial resolution and hyperspectral image classification. Finally, guidelines for choosing a suitable architecture are provided for new and/or inexperienced users.
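A minimal sketch of one of the heuristic families reviewed above, posterior-probability (least-confidence) sampling; the classifier, dataset, and one-pixel-per-iteration querying are placeholder assumptions, not the paper's setup.

```python
# Least-confidence active learning loop on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=2000, n_features=10, n_classes=3,
                           n_informative=5, random_state=0)
labeled = list(range(20))                      # small initial training set
pool = [i for i in range(len(X)) if i not in labeled]

for _ in range(10):                            # active-learning iterations
    clf = RandomForestClassifier(random_state=0).fit(X[labeled], y[labeled])
    proba = clf.predict_proba(X[pool])
    uncertainty = 1.0 - proba.max(axis=1)      # least-confident posterior
    query = pool[int(np.argmax(uncertainty))]  # pixel the "user" must label
    labeled.append(query)                      # oracle label; here y is known
    pool.remove(query)
```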
Abstract:
Introduction: Recommendations for statin use for primary prevention of coronary heart disease (CHD) are based on estimation of the 10-year CHD risk. We compared the 10-year CHD risk assessments and the percentages of adults eligible for statin therapy using three scoring algorithms currently used in Switzerland. Methods: We studied 5683 women and men, aged 35-75, without overt cardiovascular disease (CVD), in a population-based study in Lausanne, Switzerland. We compared the 10-year CHD risk using three scoring schemes: the Framingham risk score (FRS) from the U.S. National Cholesterol Education Program's Adult Treatment Panel III (ATP III), the PROCAM scoring scheme from the International Atherosclerosis Society (IAS), and the European risk SCORE for low-risk countries, without and with extrapolation to age 60 as recommended by the European Society of Cardiology (ESC) guidelines. High risk was defined as a 10-year risk of fatal or non-fatal CHD >20% with FRS and PROCAM, and as a 10-year risk of fatal CVD >=5% with SCORE. We compared the proportions of high-risk participants and eligibility for statin use according to these three schemes. For each guideline, we estimated the impact of increasing statin use from current partial compliance to full compliance on potential CHD deaths averted over 10 years, using a success proportion of 27% for statins. Results: The proportion of participants (both genders) classified at high risk was 5.8% according to FRS and 3.0% according to PROCAM, whereas the European risk SCORE classified 12.5% at high risk (15.4% with extrapolation to age 60). For the primary prevention of CHD, 18.5% of participants were eligible for statin therapy using ATP III, 16.6% using IAS, and 10.3% using ESC (13.0% with extrapolation), because the ESC guidelines recommend statin therapy only in high-risk subjects. In comparison with IAS, agreement in identifying adults eligible for statins was good with ATP III but moderate with ESC. From a population perspective, full compliance with the ATP III guidelines would avert up to 17.9% of the 24,310 CHD deaths expected over 10 years in Switzerland, versus 17.3% with IAS and 10.8% with ESC (11.5% with extrapolation). Conclusion: Full compliance with guidelines for statin therapy would yield substantial health benefits, but the proportions of high-risk adults and adults eligible for statin use varied substantially depending on the scoring system and corresponding guidelines used for estimating CHD risk in Switzerland.
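A back-of-envelope reproduction of the abstract's population figures, illustrative only; the study's underlying model is more involved.

```python
# Deaths averted over 10 years under full compliance, per guideline.
chd_deaths_10y = 24310        # expected CHD deaths over 10 years in Switzerland
reduction = {"ATP III": 0.179, "IAS": 0.173,
             "ESC": 0.108, "ESC (extrapolated)": 0.115}
for guideline, frac in reduction.items():
    print(f"{guideline}: ~{frac * chd_deaths_10y:,.0f} deaths averted")
```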
Abstract:
An ab initio structure prediction approach adapted to the peptide-major histocompatibility complex (MHC) class I system is presented. Based on structure comparisons of a large set of peptide-MHC class I complexes, a molecular dynamics protocol is proposed that uses simulated annealing (SA) cycles to sample the conformational space of the peptide in its fixed MHC environment. A set of 14 peptide-human leukocyte antigen (HLA) A0201 and 27 peptide-non-HLA A0201 complexes for which X-ray structures are available is used to test the accuracy of the prediction method. For each complex, 1000 peptide conformers are obtained from the SA sampling. A graph theory clustering algorithm based on heavy atom root-mean-square deviation (RMSD) values is applied to the sampled conformers. The clusters are ranked using cluster size and mean effective or conformational free energies, with solvation free energies computed using the Generalized Born MV 2 (GB-MV2) and Poisson-Boltzmann (PB) continuum models. The final conformation is chosen as the center of the best-ranked cluster. With conformational free energies, the overall prediction success is 83% using a 1.00 Angstrom crystal RMSD criterion for main-chain atoms, and 76% using a 1.50 Angstrom RMSD criterion for heavy atoms. The prediction success is even higher for the set of 14 peptide-HLA A0201 complexes: 100% of the peptides have main-chain RMSD values <=1.00 Angstrom and 93% of the peptides have heavy atom RMSD values <=1.50 Angstrom. This structure prediction method can be applied to complexes of natural or modified antigenic peptides in their MHC environment with the aim of performing rational structure-based optimizations of tumor vaccines.
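A hedged sketch in the spirit of the abstract's clustering step, not the authors' implementation: conformers whose pairwise RMSD falls under a cutoff become adjacent in a graph, and connected components serve as clusters.

```python
# RMSD-threshold graph clustering of sampled conformers.
import numpy as np
import networkx as nx

def rmsd(a, b):
    """Heavy-atom RMSD between two (N, 3) coordinate arrays (no superposition)."""
    return float(np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1))))

def cluster_conformers(conformers, cutoff=1.5):
    """Return clusters (largest first) of conformers within `cutoff` RMSD."""
    G = nx.Graph()
    G.add_nodes_from(range(len(conformers)))
    for i in range(len(conformers)):
        for j in range(i + 1, len(conformers)):
            if rmsd(conformers[i], conformers[j]) <= cutoff:
                G.add_edge(i, j)
    # The center of the best-ranked cluster would then be picked by free energy.
    return sorted(nx.connected_components(G), key=len, reverse=True)
```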
Abstract:
Graph pebbling is a network model for studying whether or not a given supply of discrete pebbles can satisfy a given demand via pebbling moves. A pebbling move across an edge of a graph takes two pebbles from one endpoint and places one pebble at the other endpoint; the other pebble is lost in transit as a toll. It has been shown that deciding whether a supply can meet a demand on a graph is NP-complete. The pebbling number of a graph is the smallest t such that every supply of t pebbles can satisfy every demand of one pebble. Deciding whether the pebbling number is at most k is Π₂^P-complete. In this paper we develop a tool, called the Weight Function Lemma, for computing upper bounds and sometimes exact values for pebbling numbers with the assistance of linear optimization. With this tool we are able to calculate the pebbling numbers of much larger graphs than was possible with previous algorithms, and much more quickly as well. We also obtain results for many families of graphs, in many cases by hand, with much simpler and remarkably shorter proofs than previously existing arguments (certificates typically of size at most the number of vertices times the maximum degree), especially for highly symmetric graphs. Here we apply the Weight Function Lemma to several specific graphs, including the Petersen, Lemke, 4th weak Bruhat, Lemke squared, and two random graphs, as well as to a number of infinite families of graphs, such as trees, cycles, graph powers of cycles, cubes, and some generalized Petersen and Coxeter graphs. This partly answers a question of Pachter et al. by computing the pebbling exponent of cycles to within an asymptotically small range. It is conceivable that this method yields an approximation algorithm for graph pebbling.
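For readers unfamiliar with the model, a tiny brute-force illustration of pebbling solvability (not the Weight Function Lemma itself, which replaces this exhaustive search with linear optimization):

```python
# Exhaustive search over pebble distributions on a small graph.
import networkx as nx

def solvable(G, dist, target):
    """dist maps vertex -> pebble count; True if some sequence of
    pebbling moves places a pebble on target."""
    start = tuple(sorted(dist.items()))
    seen, frontier = {start}, [dict(dist)]
    while frontier:
        d = frontier.pop()
        if d.get(target, 0) >= 1:
            return True
        for u, v in G.edges():
            for a, b in ((u, v), (v, u)):
                if d.get(a, 0) >= 2:          # a move: take 2 from a, put 1 on b
                    nd = dict(d)
                    nd[a] -= 2
                    nd[b] = nd.get(b, 0) + 1
                    key = tuple(sorted(nd.items()))
                    if key not in seen:
                        seen.add(key)
                        frontier.append(nd)
    return False

# Four pebbles on one end of a 2-edge path can deliver one to the far end.
print(solvable(nx.path_graph(3), {0: 4}, 2))   # True
```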
Abstract:
Ultrasound segmentation is a challenging problem due to the inherent speckle and artifacts such as shadows, attenuation, and signal dropout. Existing methods need to include strong priors, such as shape priors or analytical intensity models, to succeed in the segmentation. However, such priors tend to limit these methods to a specific target or imaging settings, and they are not always applicable to pathological cases. This work introduces a semi-supervised segmentation framework for ultrasound imaging that alleviates this limitation of fully automatic segmentation; that is, it is applicable to any kind of target and imaging settings. Our methodology uses a graph of image patches to represent the ultrasound image and user-assisted initialization with labels, which act as soft priors. The segmentation problem is formulated as a continuous minimum cut problem and solved with an efficient optimization algorithm. We validate our segmentation framework on clinical ultrasound imaging (prostate, fetus, and tumors of the liver and eye). We obtain high similarity agreement with ground truth provided by medical expert delineations in all applications (94% Dice values on average), and the proposed algorithm compares favorably with the literature.
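A discrete max-flow/min-cut analogue of the segmentation step; the paper formulates a continuous minimum cut, so this NetworkX sketch only illustrates how user seeds enter the patch graph as soft priors.

```python
# Seeded min-cut segmentation on a graph of image patches.
import networkx as nx

def segment(patch_graph, fg_seeds, bg_seeds, seed_weight=1e6):
    """patch_graph: nodes are patches; edges carry a 'capacity' attribute
    encoding patch similarity. Seeds are user-labeled patches."""
    G = patch_graph.copy()
    G.add_node("S")
    G.add_node("T")
    for p in fg_seeds:                               # user labels act as soft
        G.add_edge("S", p, capacity=seed_weight)     # priors via heavy links
    for p in bg_seeds:
        G.add_edge(p, "T", capacity=seed_weight)
    _, (fg, bg) = nx.minimum_cut(G, "S", "T")
    return fg - {"S"}, bg - {"T"}
```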
Abstract:
In this paper, we define a new scheme for developing and evaluating protection strategies for building reliable GMPLS networks. It is based on what we have called the network protection degree (NPD). The NPD consists of an a priori evaluation, the failure sensibility degree (FSD), which provides the failure probability, and an a posteriori evaluation, the failure impact degree (FID), which determines the impact on the network in case of failure in terms of packet loss and recovery time. Having mathematically formulated these components, we present experimental results that demonstrate the benefits of using the NPD to enhance some current QoS routing algorithms in order to offer a certain degree of protection.
Abstract:
IP-based networks still do not offer the degree of reliability required by new multimedia services, and achieving such reliability will be crucial to the success or failure of the new Internet generation. Most existing schemes for QoS routing do not take into consideration parameters concerning the quality of protection, such as packet loss or restoration time. In this paper, we define a new paradigm for developing protection strategies for building reliable MPLS networks, based on what we have called the network protection degree (NPD). The NPD consists of an a priori evaluation, the failure sensibility degree (FSD), which provides the failure probability, and an a posteriori evaluation, the failure impact degree (FID), which determines the impact on the network in case of failure. Having mathematically formulated these components, we point out the most relevant ones. Experimental results demonstrate the benefits of using the NPD to enhance some current QoS routing algorithms to offer a certain degree of protection.
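A purely hypothetical toy rendering of the NPD idea shared by the two abstracts above; the actual FSD and FID formulations are given in the papers, and every weight and normalization below is invented.

```python
# Toy NPD: an a priori failure probability plus an a posteriori impact score.
def fsd(link_failure_probs):
    """A priori: probability that at least one link of the path fails."""
    p = 1.0
    for q in link_failure_probs:
        p *= (1.0 - q)
    return 1.0 - p

def fid(packet_loss, recovery_time_ms, w_loss=0.5, w_time=0.5,
        max_loss=1.0, max_time_ms=100.0):
    """A posteriori: weighted, normalized impact in case of failure."""
    return w_loss * packet_loss / max_loss + w_time * recovery_time_ms / max_time_ms

def npd(link_failure_probs, packet_loss, recovery_time_ms):
    return fsd(link_failure_probs), fid(packet_loss, recovery_time_ms)
```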
Abstract:
This paper presents a hybrid behavior-based scheme using reinforcement learning for high-level control of autonomous underwater vehicles (AUVs). The two main features of the presented approach are hybrid behavior coordination and semi-online neural Q-learning (SONQL). Hybrid behavior coordination takes advantage of the robustness and modularity of the competitive approach as well as the efficient trajectories of the cooperative approach. SONQL, a new continuous version of the Q-learning algorithm based on a multilayer neural network, is used to learn the behavior state/action mapping online. Experimental results show the feasibility of the presented approach for AUVs.
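For context, the tabular Q-learning update rule that SONQL approximates with a multilayer neural network for continuous states; the sizes and rates below are arbitrary.

```python
# Classic tabular Q-learning update (the rule SONQL generalizes).
import numpy as np

n_states, n_actions = 10, 4
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.99                # learning rate and discount factor

def q_update(s, a, reward, s_next):
    """One temporal-difference step toward reward + gamma * max_a' Q(s', a')."""
    Q[s, a] += alpha * (reward + gamma * Q[s_next].max() - Q[s, a])
```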
Abstract:
This paper presents an automatic vision-based system for UUV station keeping. The vehicle is equipped with a down-looking camera, which provides images of the sea floor. The station keeping system is based on a feature-based motion detection algorithm, which exploits standard correlation and explicit textural analysis to solve the correspondence problem. A visual map of the area surveyed by the vehicle is constructed to increase the flexibility of the system, allowing the vehicle to position itself when it has lost the reference image. The testing platform is the URIS underwater vehicle. Experimental results demonstrating the behavior of the system in a real environment are presented.
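An illustrative correlation-based correspondence search using OpenCV template matching; the paper's detector additionally applies explicit textural analysis, which is omitted here.

```python
# Locate a reference sea-floor patch inside the current camera frame.
import cv2

def find_feature(frame, template):
    """Return the best-match location and its normalized correlation score."""
    scores = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(scores)
    return max_loc, max_val      # top-left corner of the match, score in [-1, 1]
```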
Abstract:
This paper proposes a field application of a high-level reinforcement learning (RL) control system for solving the action selection problem of an autonomous robot in a cable tracking task. The learning system is characterized by using a direct policy search method for learning the internal state/action mapping. Policy-only algorithms may suffer from long convergence times when dealing with real robots. In order to speed up the process, the learning phase was carried out in a simulated environment and, in a second step, the policy was transferred and tested successfully on a real robot. Future work will continue the learning process online on the real robot while it performs the mentioned task. We demonstrate the feasibility of the approach with real experiments on the underwater robot ICTINEU AUV.
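A generic sketch of direct policy search (REINFORCE with a softmax policy), the family of method named above; feature dimensions and learning rates are placeholders, not the paper's configuration.

```python
# Monte Carlo policy-gradient (REINFORCE) update for a softmax policy.
import numpy as np

theta = np.zeros((4, 2))                    # feature dim x number of actions

def policy(s):
    """Softmax action probabilities for feature vector s."""
    z = s @ theta
    e = np.exp(z - z.max())
    return e / e.sum()

def reinforce_update(episode, lr=0.01, gamma=0.99):
    """episode: list of (features, action, reward); one policy-gradient step."""
    G = 0.0
    for s, a, r in reversed(episode):
        G = r + gamma * G                   # return from this step onward
        p = policy(s)
        grad = -np.outer(s, p)              # gradient of log pi(a|s) wrt theta
        grad[:, a] += s
        theta[...] += lr * G * grad         # ascend the policy gradient
```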
Abstract:
When unmanned underwater vehicles (UUVs) perform missions near the ocean floor, optical sensors can be used to improve local navigation. Video mosaics make it possible to process the images acquired by the vehicle efficiently and also to obtain position estimates. In this paper we discuss the role of lens distortions in this context, showing that degenerate mosaics have their origin not only in the selected motion model or in registration errors, but also in the cumulative effect of radial distortion residuals. Additionally, we present results on the accuracy of different feature-based approaches for self-correction of lens distortions that may guide the choice of appropriate techniques for correcting distortions.
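The standard radial distortion model on which residual analyses of this kind build; the coefficients are arbitrary examples, not values from the paper.

```python
# Radial lens distortion: x_d = c + (x_u - c) * (1 + k1*r^2 + k2*r^4).
import numpy as np

def distort(points, k1, k2=0.0, center=(0.0, 0.0)):
    """Apply radial distortion to an (N, 2) array of undistorted points."""
    c = np.asarray(center)
    d = points - c
    r2 = np.sum(d ** 2, axis=1, keepdims=True)   # squared radius per point
    return c + d * (1.0 + k1 * r2 + k2 * r2 ** 2)
```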
Abstract:
This paper presents a study of connection availability in GMPLS over optical transport networks (OTN), taking into account different network topologies. Two basic path protection schemes are considered and compared with the no-protection case. The selected topologies are heterogeneous in geographic coverage, network diameter, link lengths, and average node degree. Connection availability is also computed considering the reliability data of physical components and a well-known network availability model. Results show several correspondences between suitable path protection algorithms and network topology characteristics.
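Textbook series/parallel availability formulas of the kind such availability models rely on; the component availabilities below are invented.

```python
# Availability of an unprotected path vs. a path with dedicated protection.
def series(avails):
    """Unprotected path: every component along it must be available."""
    a = 1.0
    for x in avails:
        a *= x
    return a

def protected(working, backup):
    """Dedicated path protection: the connection fails only if both paths fail."""
    return 1.0 - (1.0 - working) * (1.0 - backup)

working = series([0.9995, 0.9998, 0.9995])   # e.g., two nodes and a link
backup  = series([0.9990, 0.9998, 0.9990])
print(protected(working, backup))
```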
Abstract:
This letter presents a comparison between three Fourier-based motion compensation (MoCo) algorithms for airborne synthetic aperture radar (SAR) systems. These algorithms circumvent the limitations of conventional MoCo, namely the assumption of a reference height and the beam-center approximation. All these approaches rely on the inherent time-frequency relation in SAR systems but exploit it differently, with consequent differences in accuracy and computational burden. After a brief overview of the three approaches, the performance of each algorithm is analyzed with respect to azimuthal topography accommodation, angle accommodation, and the maximum frequency of track deviations with which the algorithm can cope. An analysis of the computational complexity is also presented. Quantitative results are shown using real data acquired by the Experimental SAR system of the German Aerospace Center (DLR).
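A minimal sketch of the first-order idea underlying motion compensation, a range-dependent phase correction along azimuth; this is generic background, not any of the three compared algorithms.

```python
# First-order MoCo as a phase correction on raw azimuth samples.
import numpy as np

def moco_phase_correct(raw, delta_r, wavelength):
    """Multiply raw samples by exp(+j * 4*pi/lambda * delta_r(t)).

    raw:     complex SAR samples along azimuth, shape (n,)
    delta_r: measured track deviation projected onto line of sight, shape (n,)
    """
    return raw * np.exp(1j * 4.0 * np.pi * delta_r / wavelength)
```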
Abstract:
This project undertakes research both into finding predictors via clustering techniques and into reviewing free Data Mining software. The research is based on a case study, and in addition to the KDD free software used by the scientific community, a new free tool for pre-processing the data is presented. The predictors are intended for the e-learning domain, as the data from which they have to be inferred are student qualifications from different e-learning environments. Through our case study, not only are clustering algorithms tested but additional goals are also proposed.
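A hedged sketch of the clustering step, grouping students by qualification profiles with k-means; the feature layout and number of clusters are assumptions, not the study's setup.

```python
# Cluster students by their grade profiles with k-means.
import numpy as np
from sklearn.cluster import KMeans

grades = np.random.default_rng(0).uniform(0, 10, size=(200, 5))  # 200 students, 5 activities
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(grades)
# km.labels_ assigns each student to a cluster; the cluster centroids can then
# be inspected as candidate predictors of final performance.
```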