981 results for Computational algorithm
Abstract:
Proteomics has come a long way from the initial qualitative analysis of proteins present in a given sample at a given time ("cataloguing") to large-scale characterization of proteomes, their interactions and dynamic behavior. Originally enabled by breakthroughs in protein separation and visualization (by two-dimensional gels) and protein identification (by mass spectrometry), the discipline now encompasses a large body of protein and peptide separation, labeling, detection and sequencing tools supported by computational data processing. The decisive developments in mass spectrometry and the most recent instrumentation are briefly mentioned, accompanied by a short review of gel and chromatographic techniques for protein/peptide separation, depletion and enrichment. Special emphasis is placed on quantification techniques: gel-based and label-free techniques are briefly discussed, whereas stable-isotope coding and internal peptide standards are extensively reviewed. Another special chapter is dedicated to software and computing tools for proteomic data processing and validation. A short assessment of the status quo and recommendations for future developments round off this journey through quantitative proteomics.
Abstract:
In this paper we study the relevance of multiple kernel learning (MKL) for the automatic selection of time series inputs. Recently, MKL has gained great attention in the machine learning community due to its flexibility in modelling complex patterns and performing feature selection. In general, MKL constructs the kernel as a weighted linear combination of basis kernels, exploiting different sources of information. An efficient algorithm named SimpleMKL, which wraps a Support Vector Regression model to optimize the MKL weights, is used for the analysis. In this sense, MKL performs feature selection by discarding inputs/kernels with low or null weights. The proposed approach is tested with simulated linear and nonlinear time series (AutoRegressive, Henon and Lorenz series).
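To make the weighted kernel combination concrete, here is a minimal sketch (not the SimpleMKL optimizer itself): one RBF basis kernel per candidate lag of the series, combined with fixed illustrative weights and fed to a Support Vector Regression with a precomputed kernel. The lag construction, the weights and the toy series are assumptions; SimpleMKL would learn the weights by optimizing the SVR objective.

```python
# MKL-style kernel: a weighted linear combination of basis kernels
# (one per candidate time-series lag), used inside an SVR.
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 20, 500)) + 0.1 * rng.standard_normal(500)

# Lagged design matrix: column j holds the value at lag j+1.
n_lags = 5
X = np.column_stack([series[i:-(n_lags - i)] for i in range(n_lags)])
y = series[n_lags:]

# One RBF basis kernel per lag (per input feature).
kernels = [rbf_kernel(X[:, [j]], X[:, [j]]) for j in range(n_lags)]

# Weighted linear combination; a weight near zero effectively discards
# that lag (feature selection). Here the weights are fixed, not learned.
weights = np.array([0.5, 0.3, 0.2, 0.0, 0.0])
K = sum(w * Kj for w, Kj in zip(weights, kernels))

model = SVR(kernel="precomputed", C=10.0)
model.fit(K, y)
print("training R^2:", model.score(K, y))
```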
Abstract:
OBJECTIVES: The reconstruction of the right ventricular outflow tract (RVOT) with valved conduits remains a challenge. The reoperation rate at 5 years can be as high as 25% and depends on age, type of conduit, conduit diameter and principal heart malformation. The aim of this study is to provide a bench model with computational fluid dynamics to analyse the haemodynamics of the RVOT, pulmonary artery, its bifurcation, and left and right pulmonary arteries that in the future may serve as a tool for analysis and prediction of outcome following RVOT reconstruction. METHODS: Pressure, flow and diameter at the RVOT, pulmonary artery, bifurcation of the pulmonary artery, and left and right pulmonary arteries were measured in five normal pigs with a mean weight of 24.6 ± 0.89 kg. Data obtained were used for a 3D computational fluid-dynamics simulation of flow conditions, focusing on the pressure, flow and shear stress profile of the pulmonary trunk to the level of the left and right pulmonary arteries. RESULTS: Three inlet steady flow profiles were obtained at 0.2, 0.29 and 0.36 m/s that correspond to the flow rates of 1.5, 2.0 and 2.5 l/min at the RVOT. The flow velocity profile was constant at the RVOT down to the bifurcation and decreased at the left and right pulmonary arteries. In all three inlet velocity profiles, low shear stress and low-velocity areas were detected along the left wall of the pulmonary artery, at the pulmonary artery bifurcation and at the ostia of both pulmonary arteries. CONCLUSIONS: This real-time computational fluid model provides us with a realistic picture of fluid dynamics in the pulmonary tract area. Deep shear stress areas correspond to a turbulent flow profile that is a predictive factor for the development of vessel wall arteriosclerosis. We believe that this bench model may be a useful tool for further evaluation of RVOT pathology following surgical reconstructions.
Abstract:
The Generalized Assignment Problem consists in assigning a set of tasks to a set of agents with minimum cost. Each agent has a limited amount of a single resource, and each task must be assigned to one and only one agent, requiring a certain amount of the agent's resource. We present new metaheuristics for the generalized assignment problem based on hybrid approaches. One metaheuristic is a MAX-MIN Ant System (MMAS), an improved version of the Ant System recently proposed by Stutzle and Hoos for combinatorial optimization problems; it can be seen as an adaptive sampling algorithm that takes into consideration the experience gathered in earlier iterations of the algorithm. Moreover, this heuristic is combined with local search and tabu search heuristics to improve the search. A greedy randomized adaptive search heuristic (GRASP) is also proposed. Several neighborhoods are studied, including one based on ejection chains that produces good moves without increasing the computational effort. We present computational results on the comparative performance, followed by concluding remarks and ideas on future research in generalized assignment related problems.
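As an illustration of the GRASP component, a minimal sketch of a greedy randomized construction phase for the generalized assignment problem follows. The cost and resource data, the restricted-candidate-list rule and the parameter alpha are assumptions; a full GRASP would follow each construction with local search (and, as in the paper, tabu search), which is omitted here.

```python
# Greedy randomized construction for the Generalized Assignment Problem:
# each task is assigned to an agent drawn from a restricted candidate list
# of low-cost, capacity-feasible agents.
import random

def grasp_construct(cost, demand, capacity, alpha=0.3, seed=0):
    """cost[i][j], demand[i][j]: cost/resource of giving task j to agent i."""
    rng = random.Random(seed)
    n_agents, n_tasks = len(cost), len(cost[0])
    remaining = capacity[:]                 # leftover resource per agent
    assignment = [None] * n_tasks
    for j in rng.sample(range(n_tasks), n_tasks):       # random task order
        feasible = [i for i in range(n_agents) if demand[i][j] <= remaining[i]]
        if not feasible:
            return None                     # construction failed; retry
        best = min(cost[i][j] for i in feasible)
        worst = max(cost[i][j] for i in feasible)
        threshold = best + alpha * (worst - best)
        rcl = [i for i in feasible if cost[i][j] <= threshold]
        i = rng.choice(rcl)                 # randomized greedy choice
        assignment[j] = i
        remaining[i] -= demand[i][j]
    return assignment

# Illustrative instance: 2 agents, 4 tasks.
cost = [[4, 1, 3, 2], [2, 3, 1, 4]]
demand = [[3, 2, 2, 3], [2, 3, 2, 2]]
capacity = [6, 6]
print(grasp_construct(cost, demand, capacity))
```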
Abstract:
The generalization of simple correspondence analysis, for two categorical variables, to multiple correspondence analysis, where there may be three or more variables, is not straightforward, both from a mathematical and a computational point of view. In this paper we detail the exact computational steps involved in performing a multiple correspondence analysis, including the special aspects of adjusting the principal inertias to correct the percentages of inertia, supplementary points and subset analysis. Furthermore, we give the algorithm for joint correspondence analysis, where the cross-tabulations of all unique pairs of variables are analysed jointly. The code in the R language for every step of the computations is given, as well as the results of each computation.
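The paper supplies R code for each step; purely for orientation, a minimal Python sketch of the indicator-matrix route to the principal inertias and their adjustment is given below. The toy data and the use of Greenacre's adjustment formula are assumptions, and supplementary points, subset analysis and joint correspondence analysis are not shown.

```python
# Multiple correspondence analysis via the indicator (disjunctive) matrix.
import numpy as np
import pandas as pd

data = pd.DataFrame({"A": ["a1", "a2", "a1", "a2", "a1"],
                     "B": ["b1", "b1", "b2", "b2", "b1"],
                     "C": ["c1", "c2", "c2", "c1", "c1"]})
Q = data.shape[1]                              # number of categorical variables

Z = pd.get_dummies(data).to_numpy(float)       # indicator matrix
P = Z / Z.sum()                                # correspondence matrix
r, c = P.sum(axis=1), P.sum(axis=0)            # row and column masses
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))   # standardized residuals

U, sv, Vt = np.linalg.svd(S, full_matrices=False)
inertias = sv ** 2                             # principal inertias of the indicator matrix

# Adjust the inertias exceeding 1/Q (Greenacre's adjustment); the paper also
# explains how to turn these into corrected percentages of inertia.
adjusted = [((Q / (Q - 1)) * (lam - 1 / Q)) ** 2 for lam in inertias if lam > 1 / Q]
print("indicator-matrix inertias:", np.round(inertias, 4))
print("adjusted inertias:", np.round(adjusted, 4))
```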
Abstract:
In this paper we propose a Pyramidal Classification Algorithm which, together with an appropriate aggregation index, produces an indexed pseudo-hierarchy (in the strict sense) without inversions or crossings. The computer implementation of the algorithm makes it possible to carry out simulation tests by Monte Carlo methods in order to study the efficiency and sensitivity of the pyramidal methods of the Maximum, Minimum and UPGMA. The results shown in this paper may help to choose between the three classification methods proposed, in order to obtain the classification that best fits the original structure of the population, provided we have a priori information concerning this structure.
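The pyramidal algorithm itself is not part of standard libraries; as a point of reference only, the three aggregation indices it compares (Maximum, Minimum, UPGMA) correspond to complete, single and average linkage in ordinary, tree-structured hierarchical clustering, sketched below on assumed synthetic data.

```python
# Complete/single/average linkage as reference analogues of the Maximum,
# Minimum and UPGMA aggregation indices (ordinary hierarchy, not a pyramid).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.5, (20, 2)), rng.normal(3, 0.5, (20, 2))])
d = pdist(X)                                 # pairwise dissimilarities

for name, method in [("Maximum", "complete"),
                     ("Minimum", "single"),
                     ("UPGMA", "average")]:
    Z = linkage(d, method=method)
    labels = fcluster(Z, t=2, criterion="maxclust")
    print(name, "cluster sizes:", np.bincount(labels)[1:])
```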
Abstract:
We present a simple randomized procedure for the prediction of a binary sequence. The algorithm uses ideas from recent developments of the theory of the prediction of individual sequences. We show that if the sequence is a realization of a stationary and ergodic random process then the average number of mistakes converges, almost surely, to that of the optimum, given by the Bayes predictor.
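A minimal sketch in the same spirit (randomized prediction of a binary sequence by exponentially weighting a pool of reference predictors) is given below; the Markov experts of increasing order, the learning rate and the test sequence are assumptions and do not reproduce the authors' estimator.

```python
# Randomized forecaster: exponentially weight a few Markov "experts" and
# predict 1 with the weighted average probability.
import numpy as np

rng = np.random.default_rng(1)

def expert_prob(history, order):
    """Laplace-smoothed empirical probability of a 1 after the current context."""
    if len(history) <= order:
        return 0.5
    context = tuple(history[-order:]) if order else ()
    ones = total = 0
    for t in range(order, len(history)):
        if tuple(history[t - order:t]) == context:
            total += 1
            ones += history[t]
    return (ones + 1) / (total + 2)

orders, eta = [0, 1, 2], 2.0
weights = np.ones(len(orders))
sequence = [int(b) for b in np.tile([1, 1, 0], 200)]   # deterministic test sequence

mistakes, history = 0, []
for bit in sequence:
    probs = np.array([expert_prob(history, k) for k in orders])
    p1 = np.dot(weights, probs) / weights.sum()
    guess = int(rng.random() < p1)                 # randomized prediction
    mistakes += (guess != bit)
    weights *= np.exp(-eta * (probs - bit) ** 2)   # exponential weighting by squared loss
    weights /= weights.sum()                       # normalize to avoid underflow
    history.append(bit)

print("mistake rate:", mistakes / len(sequence))
```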
Abstract:
Behavioral and brain responses to identical stimuli can vary with experimental and task parameters, including the context of stimulus presentation or attention. More surprisingly, computational models suggest that noise-related random fluctuations in brain responses to stimuli would alone be sufficient to engender perceptual differences between physically identical stimuli. In two experiments combining psychophysics and EEG in healthy humans, we investigated brain mechanisms whereby identical stimuli are (erroneously) perceived as different (higher vs lower in pitch or longer vs shorter in duration) in the absence of any change in the experimental context. Even though, as expected, participants' percepts to identical stimuli varied randomly, a classification algorithm based on a mixture of Gaussians model (GMM) showed that there was sufficient information in single-trial EEG to reliably predict participants' judgments of the stimulus dimension. By contrasting electrical neuroimaging analyses of auditory evoked potentials (AEPs) to the identical stimuli as a function of participants' percepts, we identified the precise timing and neural correlates (strength vs topographic modulations) as well as intracranial sources of these erroneous perceptions. In both experiments, AEP differences first occurred ∼100 ms after stimulus onset and were the result of topographic modulations following from changes in the configuration of active brain networks. Source estimations localized the origin of variations in perceived pitch of identical stimuli within right temporal and left frontal areas and of variations in perceived duration within right temporoparietal areas. We discuss our results in terms of providing neurophysiologic evidence for the contribution of random fluctuations in brain activity to conscious perception.
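As a rough sketch of this kind of single-trial decoding, one can fit one Gaussian mixture model per reported percept and assign each held-out trial to the more likely percept; the synthetic "EEG features" and the diagonal two-component mixtures below are assumptions, not the study's data or classifier.

```python
# One GMM per percept; each held-out trial is assigned to the percept whose
# mixture gives it the higher log-likelihood.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# 200 trials x 10 features per percept, with a small mean shift between percepts.
X_hi = rng.normal(0.3, 1.0, (200, 10))
X_lo = rng.normal(-0.3, 1.0, (200, 10))
X = np.vstack([X_hi, X_lo])
y = np.array([1] * 200 + [0] * 200)          # 1 = "higher", 0 = "lower" percept

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

gmm_hi = GaussianMixture(n_components=2, covariance_type="diag",
                         random_state=0).fit(X_tr[y_tr == 1])
gmm_lo = GaussianMixture(n_components=2, covariance_type="diag",
                         random_state=0).fit(X_tr[y_tr == 0])

pred = (gmm_hi.score_samples(X_te) > gmm_lo.score_samples(X_te)).astype(int)
print("decoding accuracy:", (pred == y_te).mean())
```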
Abstract:
In this paper, an extension of the multi-scale finite-volume (MSFV) method is devised, which makes it possible to simulate flow and transport in reservoirs with complex well configurations. The new framework fits nicely into the data structure of the original MSFV method, and has the important property that large patches covering the whole well are not required. For each well, an additional degree of freedom is introduced. While the treatment of pressure-constrained wells is trivial (the well-bore reference pressure is explicitly specified), additional equations have to be solved to obtain the unknown well-bore pressure of rate-constrained wells. Numerical simulations of test cases with multiple complex wells demonstrate the ability of the new algorithm to capture the interference between the various wells and the reservoir accurately. (c) 2008 Elsevier Inc. All rights reserved.
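The core idea of one extra degree of freedom per rate-constrained well can be illustrated on a toy problem: a 1D incompressible pressure equation on a few cells with a well perforating two of them, where cell pressures and the unknown well-bore pressure are solved together. The geometry, transmissibilities and well indices below are assumptions, and no multi-scale splitting is performed.

```python
# Coupled solve of cell pressures and one extra well-bore-pressure unknown
# for a rate-constrained well perforating two cells of a 1D grid.
import numpy as np

n, T = 5, 1.0                    # cells and inter-cell transmissibility
perf = {2: 2.0, 3: 1.5}          # perforated cells -> well index WI
Q = 1.0                          # prescribed total injection rate

A = np.zeros((n + 1, n + 1))     # last unknown is the well-bore pressure p_w
b = np.zeros(n + 1)

A[0, 0] = 1.0                    # Dirichlet: p_0 = 0 (a fixed-pressure boundary)
for i in range(1, n):            # mass balance in the remaining cells
    for j in (i - 1, i + 1):
        if 0 <= j < n:
            A[i, i] -= T
            A[i, j] += T
    if i in perf:                # well source term WI * (p_w - p_i)
        A[i, i] -= perf[i]
        A[i, n] += perf[i]

for i, wi in perf.items():       # rate constraint: sum_i WI*(p_w - p_i) = Q
    A[n, n] += wi
    A[n, i] -= wi
b[n] = Q

x = np.linalg.solve(A, b)
print("cell pressures:", np.round(x[:n], 3), " well-bore pressure:", round(x[n], 3))
```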
Abstract:
This paper compares two well-known scan matching algorithms: the MbICP and the pIC. As a result of the study, the MSISpIC, a probabilistic scan matching algorithm for the localization of an Autonomous Underwater Vehicle (AUV), is proposed. The technique uses range scans gathered with a Mechanical Scanning Imaging Sonar (MSIS) and the robot displacement estimated through dead-reckoning with the help of a Doppler Velocity Log (DVL) and a Motion Reference Unit (MRU). The proposed method is an extension of the pIC algorithm. Its major contributions are: 1) using an EKF to estimate the local path traveled by the robot while grabbing the scan, as well as its uncertainty, and 2) proposing a method to group all the data grabbed along the path described by the robot into a unique scan with a convenient uncertainty model. The algorithm has been tested on an AUV guided along a 600 m path within a marina environment with satisfactory results.
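For orientation, a plain point-to-point ICP on synthetic 2D scans is sketched below to show what scan matching estimates (the rigid displacement between two range scans); it is neither MbICP nor pIC, which additionally weight correspondences by their probabilistic uncertainty models, and the corner-shaped scan and displacement are assumptions.

```python
# Point-to-point ICP: nearest-neighbour correspondences + SVD-based rigid fit.
import numpy as np
from scipy.spatial import cKDTree

def icp(ref, new, iters=30):
    R, t = np.eye(2), np.zeros(2)
    tree = cKDTree(ref)
    cur = new.copy()
    for _ in range(iters):
        _, idx = tree.query(cur)              # nearest-neighbour correspondences
        matched = ref[idx]
        mu_r, mu_c = matched.mean(0), cur.mean(0)
        H = (cur - mu_c).T @ (matched - mu_r) # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        Ri = Vt.T @ U.T
        if np.linalg.det(Ri) < 0:             # avoid reflections
            Vt[-1] *= -1
            Ri = Vt.T @ U.T
        ti = mu_r - Ri @ mu_c
        cur = cur @ Ri.T + ti
        R, t = Ri @ R, Ri @ t + ti            # accumulate the transform
    return R, t

# Synthetic corner-shaped scan, and the same scene seen after a small motion.
ref = np.vstack([np.column_stack([np.linspace(0, 4, 80), np.full(80, 2.0)]),
                 np.column_stack([np.full(80, 4.0), np.linspace(2, 6, 80)])])
theta = 0.1
Rtrue = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
new = (ref - [0.3, 0.1]) @ Rtrue              # displaced scan
R, t = icp(ref, new)
print("recovered rotation (rad):", round(float(np.arctan2(R[1, 0], R[0, 0])), 3))
print("recovered translation:", np.round(t, 3))
```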
Abstract:
Summary Background: We previously derived a clinical prognostic algorithm to identify patients with pulmonary embolism (PE) who are at low risk of short-term mortality and who could be safely discharged early or treated entirely in an outpatient setting. Objectives: To externally validate the clinical prognostic algorithm in an independent patient sample. Methods: We validated the algorithm in 983 consecutive patients prospectively diagnosed with PE at an emergency department of a university hospital. Patients with none of the algorithm's 10 prognostic variables (age ≥ 70 years, cancer, heart failure, chronic lung disease, chronic renal disease, cerebrovascular disease, pulse ≥ 110/min, systolic blood pressure < 100 mm Hg, oxygen saturation < 90%, and altered mental status) at baseline were defined as low-risk. We compared 30-day overall mortality among low-risk patients based on the algorithm between the validation and the original derivation sample. We also assessed the rate of PE-related and bleeding-related mortality among low-risk patients. Results: Overall, the algorithm classified 16.3% of patients with PE as low-risk. Mortality at 30 days was 1.9% among low-risk patients and did not differ between the validation and the original derivation sample. Among low-risk patients, only 0.6% died from definite or possible PE, and 0% died from bleeding. Conclusions: This study validates an easy-to-use, clinical prognostic algorithm for PE that accurately identifies patients with PE who are at low risk of short-term mortality. Low-risk patients based on our algorithm are potential candidates for less costly outpatient treatment.
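The low-risk rule can be transcribed directly from the abstract: a patient is classified as low-risk only if none of the 10 prognostic variables is present at baseline. In the sketch below the field names are assumptions and the thresholds are those stated above; it is illustrative only, not a clinical decision tool.

```python
# Low-risk classification: no prognostic variable present at baseline.
def is_low_risk(p):
    criteria = [
        p["age"] >= 70,
        p["cancer"],
        p["heart_failure"],
        p["chronic_lung_disease"],
        p["chronic_renal_disease"],
        p["cerebrovascular_disease"],
        p["pulse"] >= 110,                 # beats/min
        p["systolic_bp"] < 100,            # mm Hg
        p["oxygen_saturation"] < 90,       # percent
        p["altered_mental_status"],
    ]
    return not any(criteria)

patient = {"age": 56, "cancer": False, "heart_failure": False,
           "chronic_lung_disease": False, "chronic_renal_disease": False,
           "cerebrovascular_disease": False, "pulse": 92, "systolic_bp": 128,
           "oxygen_saturation": 96, "altered_mental_status": False}
print("low-risk:", is_low_risk(patient))   # True for this example patient
```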
Abstract:
The development and testing of an iterative reconstruction algorithm for emission tomography based on Bayesian statistical concepts are described. The algorithm uses the entropy of the generated image as a prior distribution, can be accelerated by the choice of an exponent, and converges uniformly to feasible images by the choice of one adjustable parameter. A feasible image has been defined as one that is consistent with the initial data (i.e. it is an image that, if truly a source of radiation in a patient, could have generated the initial data by the Poisson process that governs radioactive disintegration). The fundamental ideas of Bayesian reconstruction are discussed, along with the use of an entropy prior with an adjustable contrast parameter, the use of likelihood with data increment parameters as conditional probability, and the development of the new fast maximum a posteriori with entropy (FMAPE) algorithm by the successive substitution method. It is shown that in the maximum likelihood estimator (MLE) and FMAPE algorithms, the only correct choice of initial image for the iterative procedure in the absence of a priori knowledge about the image configuration is a uniform field.
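As a sketch of the MLE iteration mentioned above (the multiplicative MLEM update), started from a uniform image as the abstract recommends: the tiny system matrix and Poisson data are synthetic assumptions, and the entropy prior of the FMAPE algorithm is not included.

```python
# MLEM for emission tomography: multiplicative update from a uniform start.
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_det = 16, 32
A = rng.uniform(0, 1, (n_det, n_pix))        # toy system (projection) matrix
A /= A.sum(axis=0, keepdims=True)

true_img = np.zeros(n_pix)
true_img[5:9] = 50.0                         # a small "hot" region
counts = rng.poisson(A @ true_img)           # measured data (Poisson)

x = np.full(n_pix, counts.sum() / n_pix)     # uniform initial image
sens = A.sum(axis=0)                         # sensitivity image A^T 1
for _ in range(200):                         # MLEM multiplicative updates
    proj = A @ x
    x *= (A.T @ (counts / np.maximum(proj, 1e-12))) / sens
print("reconstruction:", np.round(x, 1))
```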