202 results for Array algorithm
Abstract:
Liquid chromatography-mass spectrometry (LC-MS) datasets can be compared or combined following chromatographic alignment. Here we describe a simple solution to the specific problem of aligning one LC-MS dataset and one LC-MS/MS dataset, acquired on separate instruments from an enzymatic digest of a protein mixture, using feature extraction and a genetic algorithm. First, the LC-MS dataset is searched within a few ppm of the calculated theoretical masses of peptides confidently identified by LC-MS/MS. A piecewise linear function is then fitted to these matched peptides using a genetic algorithm with a fitness function that is insensitive to incorrect matches but sufficiently flexible to adapt to the discrete shifts common when comparing LC datasets. We demonstrate the utility of this method by aligning ion trap LC-MS/MS data with accurate LC-MS data from an FTICR mass spectrometer and show how hybrid datasets can improve peptide and protein identification by combining the speed of the ion trap with the mass accuracy of the FTICR, similar to using a hybrid ion trap-FTICR instrument. We also show that the high resolving power of FTICR can improve precision and linear dynamic range in quantitative proteomics. The alignment software, msalign, is freely available as open source.
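As a concrete illustration of the alignment step, below is a minimal sketch in Python of fitting a piecewise linear retention-time mapping with a genetic algorithm whose count-based fitness simply ignores badly matched peptides. The knot positions, tolerance and synthetic matches are all hypothetical; this is not the msalign implementation.

```python
# Minimal sketch (not the msalign implementation): fit a piecewise linear
# retention-time map to matched peptides with a genetic algorithm whose
# fitness counts matches inside a tolerance window, so false matches are ignored.
import random

random.seed(0)
# Hypothetical matched peptides: (retention time in the LC-MS/MS run, in the LC-MS run)
matches = [(t, 0.95 * t + 2.0 + random.gauss(0, 0.1)) for t in range(5, 100, 5)]
matches += [(30.0, 80.0), (60.0, 10.0)]            # deliberately incorrect matches

KNOTS = [0.0, 25.0, 50.0, 75.0, 100.0]             # fixed breakpoints of the map

def warp(t, offsets):
    """Piecewise linear map: linearly interpolate the knot offsets at time t."""
    for (k0, o0), (k1, o1) in zip(zip(KNOTS, offsets), zip(KNOTS[1:], offsets[1:])):
        if k0 <= t <= k1:
            w = (t - k0) / (k1 - k0)
            return t + (1 - w) * o0 + w * o1
    return t + offsets[-1]

def fitness(offsets):
    """Number of matches landing within the tolerance; insensitive to outliers."""
    return sum(1 for t_a, t_b in matches if abs(warp(t_a, offsets) - t_b) < 1.5)

def evolve(pop_size=60, generations=200):
    pop = [[random.uniform(-5, 5) for _ in KNOTS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            child = [random.choice(pair) for pair in zip(a, b)]   # uniform crossover
            if random.random() < 0.5:                             # Gaussian mutation
                i = random.randrange(len(child))
                child[i] += random.gauss(0, 1.0)
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print("knot offsets:", [round(o, 2) for o in best], "| matched peptides:", fitness(best))
```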
Abstract:
The acute hippocampal brain slice preparation is an important in vitro screening tool for potential anticonvulsants. Application of 4-aminopyridine (4-AP) or removal of external Mg2+ ions induces epileptiform bursting in slices that is analogous to the electrical brain activity seen in status epilepticus states. We have developed these epileptiform models for use with multi-electrode arrays (MEAs), allowing recording across the hippocampal slice surface from 59 points. We present validation of this novel approach and analyses using two anticonvulsants, felbamate and phenobarbital, the effects of which have already been assessed in these models using conventional extracellular recordings. In addition to assessing drug effects on commonly described parameters (duration, amplitude and frequency), we describe novel methods using the MEA to assess burst propagation speeds and the underlying frequencies that contribute to the epileptiform activity seen. Contour plots are also used as a method of illustrating burst activity. Finally, we describe hitherto unreported properties of epileptiform bursting induced by 100 μM 4-AP or removal of external Mg2+ ions. Specifically, we observed decreases over time in burst amplitude and increases over time in burst frequency in the absence of additional pharmacological interventions. These MEA methods enhance the depth, quality and range of data that can be derived from the hippocampal slice preparation compared to conventional extracellular recordings. They may also uncover additional modes of action that contribute to anti-epileptiform drug effects. (C) 2009 Elsevier B.V. All rights reserved.
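One of the quantities mentioned above, burst propagation speed, can be illustrated with a small sketch. The electrode pitch, onset times and speed below are invented, and regressing onset latency against distance from the earliest electrode is only one plausible way to define propagation speed, not necessarily the method used in the paper.

```python
# Hedged sketch with made-up numbers: estimate burst propagation speed on an MEA
# by regressing burst-onset latency at each electrode against its distance from
# the electrode that detects the burst first.
import numpy as np

rng = np.random.default_rng(0)
pitch_um = 200.0                                      # hypothetical 8x8 grid pitch
coords = np.array([(x * pitch_um, y * pitch_um) for y in range(8) for x in range(8)])

true_speed_um_per_s = 50_000.0                        # 50 mm/s, illustrative only
onsets = np.linalg.norm(coords - coords[0], axis=1) / true_speed_um_per_s
onsets += rng.normal(0.0, 1e-3, onsets.size)          # onset-detection jitter (s)

first = coords[np.argmin(onsets)]                     # electrode where the burst starts
dist = np.linalg.norm(coords - first, axis=1)
lag = onsets - onsets.min()

slope, _ = np.polyfit(dist, lag, 1)                   # lag ~= dist / speed
print(f"estimated propagation speed: {1.0 / slope / 1000:.1f} mm/s")
```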
Abstract:
This paper represents the first step in ongoing work on designing an unsupervised method based on a genetic algorithm for intrusion detection. Its main role in a broader system is to flag unusual traffic and thereby provide the possibility of detecting unknown attacks. Most machine-learning techniques deployed for intrusion detection are supervised, as these techniques are generally more accurate, but this implies the need to label the data for training and testing, which is time-consuming and error-prone. Hence, our goal is to devise an anomaly detector that is unsupervised but at the same time robust and accurate. Genetic algorithms are robust and able to avoid getting stuck in local optima, unlike many other clustering techniques. The model is verified on the KDD99 benchmark dataset, generating a solution competitive with state-of-the-art solutions, which demonstrates the strong potential of the proposed method.
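To make the idea tangible, here is a small, purely illustrative sketch of genetic-algorithm clustering for anomaly detection: cluster centres are evolved over unlabelled feature vectors and records far from every centre are flagged. The feature values, number of centres and distance threshold are all invented, and the method evaluated on KDD99 will differ in its representation, operators and fitness.

```python
# Illustrative sketch (not the paper's algorithm): evolve cluster centres with a
# genetic algorithm over unlabelled records, then flag records that lie far from
# every centre as anomalous.
import random

random.seed(0)
# Hypothetical unlabelled 2-D traffic features: three dense "normal" groups plus
# a few isolated records standing in for unknown attacks.
normal = [(random.gauss(cx, 0.7), random.gauss(cy, 0.7))
          for cx, cy in [(0, 0), (6, 0), (0, 6)] for _ in range(100)]
attacks = [(10.0, 10.0), (-8.0, -8.0)]
data = normal + attacks

K = 3                                                   # number of cluster centres

def dist2(p, c):
    return (p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2

def fitness(centres):
    """Negative total squared distance to the nearest centre (to be maximised)."""
    return -sum(min(dist2(p, c) for c in centres) for p in data)

def evolve(pop_size=40, generations=60):
    pop = [random.sample(data, K) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        next_gen = pop[: pop_size // 2]                 # keep the fitter half
        while len(next_gen) < pop_size:
            a, b = random.sample(pop[: pop_size // 2], 2)
            child = [random.choice(pair) for pair in zip(a, b)]   # crossover
            if random.random() < 0.3:                             # mutate one centre
                i = random.randrange(K)
                child[i] = (child[i][0] + random.gauss(0, 0.5),
                            child[i][1] + random.gauss(0, 0.5))
            next_gen.append(child)
        pop = next_gen
    return max(pop, key=fitness)

centres = evolve()
threshold = 9.0          # squared-distance cut-off for flagging (assumed value)
flagged = [p for p in data if min(dist2(p, c) for c in centres) > threshold]
print("centres:", [(round(cx, 1), round(cy, 1)) for cx, cy in centres])
print("records flagged as anomalous:", len(flagged))
```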
Abstract:
This paper proposes a new iterative algorithm for OFDM joint data detection and phase noise (PHN) cancellation based on minimum mean square prediction error. We particularly highlight the problem of "overfitting", whereby the iterative approach may converge to a trivial solution. Although it is essential to this joint approach, the overfitting problem has received relatively little attention in existing algorithms. In this paper, specifically, we apply a hard decision procedure at every iterative step to overcome the overfitting. Moreover, compared with existing algorithms, a more accurate Padé approximation is used to represent the phase noise, and finally a more robust and compact fast process based on Givens rotation is proposed to reduce the complexity to a practical level. Numerical simulations are also given to verify the proposed algorithm.
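To make the role of the Padé approximation concrete: the multiplicative phase noise term $e^{j\theta(t)}$ is replaced by a low-order rational function of $\theta(t)$. The simplest, first-order member of the family is shown below; the algorithm above uses a higher-order (and hence more accurate) variant.

$$ e^{j\theta(t)} \;\approx\; \frac{1 + j\,\theta(t)/2}{1 - j\,\theta(t)/2}, \qquad |\theta(t)| \ll 1. $$

Unlike the first-order Taylor expansion $1 + j\theta(t)$, this rational form has unit modulus for any real $\theta(t)$, matching the true exponential.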
Abstract:
Using the classical Parzen window (PW) estimate as the target function, the sparse kernel density estimator is constructed in a forward constrained regression manner. The leave-one-out (LOO) test score is used for kernel selection. The jackknife parameter estimator, subject to a positivity constraint check, is used to estimate the single parameter added at each forward step. As such, the proposed approach is simple to implement and the associated computational cost is very low. An illustrative example is employed to demonstrate that the proposed approach is effective in constructing sparse kernel density estimators with accuracy comparable to that of the classical Parzen window estimate.
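A stripped-down sketch of the forward-selection idea is shown below: the classical PW estimate is the target, and kernel centres are added greedily so that a small mixture reproduces it. For brevity, the per-step jackknife weight estimate and the LOO kernel-selection score are replaced here by equal weights and a direct mean-squared-error criterion, so this is only an approximation of the approach described; the data and bandwidth are hypothetical.

```python
# Minimal sketch (simplified relative to the paper): greedily pick a small subset
# of kernel centres so that an equally weighted mixture approximates the full
# Parzen window (PW) estimate.
import numpy as np

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 0.5, 150), rng.normal(1, 1.0, 150)])  # sample
h = 0.3                                                                   # bandwidth

def gauss(u):
    return np.exp(-0.5 * u * u) / np.sqrt(2 * np.pi)

def parzen(grid, centres):
    """Average of Gaussian kernels placed on `centres`, evaluated on `grid`."""
    return gauss((grid[:, None] - centres[None, :]) / h).mean(axis=1) / h

grid = np.linspace(-5, 5, 200)
target = parzen(grid, x)                  # classical PW estimate = target function

selected = []
for _ in range(8):                        # grow a sparse 8-kernel model
    best_centre, best_err = None, np.inf
    for c in x:                           # forward step: try each candidate centre
        trial = parzen(grid, np.array(selected + [c]))
        err = np.mean((trial - target) ** 2)
        if err < best_err:
            best_centre, best_err = c, err
    selected.append(best_centre)

print("sparse model uses", len(selected), "of", len(x), "kernels; MSE:", best_err)
```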
Abstract:
In this paper, we present an on-line estimation algorithm for an uncertain time delay in a continuous system, based on observational input-output data subject to observational noise. The first-order Padé approximation is used to approximate the time delay. At each time step, the algorithm combines the well-known Kalman filter and the recursive instrumental variable least squares (RIVLS) algorithm in cascade form. The instrumental variable least squares algorithm is used in order to achieve consistency of the delay parameter estimate, since an error-in-variables model is involved. An illustrative example is utilized to demonstrate the efficacy of the proposed approach.
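The first-order Padé approximation referred to here is the standard rational substitute for a pure delay,

$$ e^{-s\tau} \;\approx\; \frac{1 - s\tau/2}{1 + s\tau/2}, $$

which turns the unknown delay $\tau$ into an ordinary parameter of a rational transfer function, so that recursive estimators such as the cascaded Kalman filter and RIVLS stages can be applied to it.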
Abstract:
In the United Kingdom, and in fact throughout Europe, the chosen standard for digital terrestrial television is the European Telecommunications Standards Institute (ETSI) EN 300 744 standard, also known as Digital Video Broadcasting - Terrestrial (DVB-T). The modulation method under this standard was chosen to be Orthogonal Frequency Division Multiplex (OFDM) because of its apparent inherent capability for withstanding the effects of multipath. Within the DVB-T standard, pilot tones were included that can be used for many applications such as channel impulse response estimation or local oscillator phase and frequency offset estimation. This paper demonstrates a technique for estimating the relative path attenuation of a single multipath signal that can be deployed as a simple firmware update for a commercial set-top box. This technique can be used to help eliminate the effects of multipath.
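As a rough illustration of the kind of processing involved (not necessarily the paper's exact technique), the sketch below estimates the channel at the pilot carriers and reads a single echo's relative attenuation off the ripple of |H(f)|, using the fact that a two-path channel 1 + a·e^{-j2πfτ} has magnitude between 1 - a and 1 + a. The pilot spacing, echo parameters and noise level are simplified, hypothetical values.

```python
# Hedged sketch: pilot-based channel estimation, then a relative path attenuation
# read off the ripple of the estimated channel magnitude.
import numpy as np

rng = np.random.default_rng(0)
n_carriers = 2048                                   # 2k DVB-T mode
pilots = np.arange(0, n_carriers, 12)               # simplified pilot spacing
a_true, delay = 0.4, 20                             # echo: amplitude 0.4, 20-sample delay

X = np.ones(n_carriers, dtype=complex)              # known pilot symbols (simplified)
H = 1 + a_true * np.exp(-2j * np.pi * np.arange(n_carriers) * delay / n_carriers)
Y = H * X + 0.01 * (rng.standard_normal(n_carriers)
                    + 1j * rng.standard_normal(n_carriers))

H_hat = Y[pilots] / X[pilots]                       # channel estimate at the pilots
mag = np.abs(H_hat)
a_est = (mag.max() - mag.min()) / (mag.max() + mag.min())
print(f"relative path attenuation: true {a_true}, estimated {a_est:.3f}")
```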
Abstract:
This paper describes a region-based algorithm for deriving a concise description of a first order optical flow field. The algorithm described achieves performance improvements over existing algorithms without compromising the accuracy of the flow field values calculated. These improvements are brought about by not computing the entire flow field between two consecutive images, but by considering only the flow vectors of a selected subset of the images. The algorithm is presented in the context of a project to balance a bipedal robot using visual information.
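For context, a first order optical flow field is usually taken to mean a field that is affine in image position within each region, so a region's motion is summarised by six parameters (a translation plus the four first-order spatial derivatives, which encode dilation, rotation and shear); this is why a concise description can be recovered from a selected subset of flow vectors rather than the full field:

$$ \begin{pmatrix} u(x,y) \\ v(x,y) \end{pmatrix} \;\approx\; \begin{pmatrix} u_0 \\ v_0 \end{pmatrix} + \begin{pmatrix} u_x & u_y \\ v_x & v_y \end{pmatrix} \begin{pmatrix} x - x_0 \\ y - y_0 \end{pmatrix}. $$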
Abstract:
Most haptic environments are based on single point interactions whereas in practice, object manipulation requires multiple contact points between the object, fingers, thumb and palm. The Friction Cone Algorithm was developed specifically to work well in a multi-finger haptic environment where object manipulation would occur. However, the Friction Cone Algorithm has two shortcomings when applied to polygon meshes: there is no means of transitioning polygon boundaries or feeling non-convex edges. In order to overcome these deficiencies, Face Directed Connection Graphs have been developed as well as a robust method for applying friction to non-convex edges. Both these extensions are described herein, as well as the implementation issues associated with them.
Abstract:
Previous work has presented the friction cone algorithm, a generalised method to resolve forces on a simulated haptic object when two or more fingers are in contact. Two extensions to the friction cone algorithm are presented: force shading and bump mapping. Force shading removes the discontinuities that are present when transitioning from one face to the next, whilst bump mapping provides a mechanism for rendering haptic textures on polygonal surfaces. Both these extensions can be combined whilst still maintaining the friction cone algorithm's intrinsic ability to simulate arbitrarily complex friction models.
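The force-shading extension can be sketched in a few lines: interpolate the vertex normals at the contact point (the haptic analogue of Phong shading) so the rendered force direction changes smoothly across a face instead of jumping at its boundary. The triangle, normals, stiffness and penetration depth below are hypothetical, and this is not necessarily the formulation used with the friction cone algorithm.

```python
# Minimal sketch of the force-shading idea: barycentric interpolation of vertex
# normals at the contact point, then a spring force along the shaded normal.
import numpy as np

def barycentric(p, a, b, c):
    """Barycentric coordinates of point p in triangle (a, b, c)."""
    v0, v1, v2 = b - a, c - a, p - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    return 1.0 - v - w, v, w

def shaded_force(p, verts, normals, penetration_depth, stiffness=500.0):
    """Spring force along the interpolated (shaded) normal at contact point p."""
    u, v, w = barycentric(p, *verts)
    n = u * normals[0] + v * normals[1] + w * normals[2]
    n /= np.linalg.norm(n)
    return stiffness * penetration_depth * n

verts = [np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
normals = [np.array([0.0, 0.0, 1.0]),          # hypothetical per-vertex normals from
           np.array([0.3, 0.0, 0.95]),         # the neighbouring geometry
           np.array([0.0, 0.3, 0.95])]
print(shaded_force(np.array([0.3, 0.3, 0.0]), verts, normals, penetration_depth=0.002))
```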
Abstract:
How can a bridge be built between autonomic computing approaches and parallel computing systems? The work reported in this paper is motivated towards bridging this gap by proposing swarm-array computing, a novel technique to achieve autonomy for distributed parallel computing systems. Among the three proposed approaches, the second, namely 'Intelligent Agents', is the focus of this paper. The task to be executed on parallel computing cores is considered as a swarm of autonomous agents. A task is carried to a computing core by carrier agents and can be seamlessly transferred between cores in the event of a predicted failure, thereby achieving the self-* objectives of autonomic computing. The feasibility of the proposed approach is validated on a multi-agent simulator.
Abstract:
The work reported in this paper proposes 'Intelligent Agents', a swarm-array computing approach focused on applying autonomic computing concepts to parallel computing systems and building reliable systems for space applications. Swarm-array computing is a novel computing approach inspired by swarm robotics, considered as a path to achieving autonomy in parallel computing systems. In the intelligent agent approach, a task to be executed on parallel computing cores is considered as a swarm of autonomous agents. A task is carried to a computing core by carrier agents and can be seamlessly transferred between cores in the event of a predicted failure, thereby achieving the self-* objectives of autonomic computing. The approach is validated on a multi-agent simulator.
Abstract:
How can a bridge be built between autonomic computing approaches and parallel computing systems? How can autonomic computing approaches be extended towards building reliable systems? How can existing technologies be merged to provide a solution for self-managing systems? The work reported in this paper aims to answer these questions by proposing Swarm-Array Computing, a novel technique inspired by swarm robotics and built on the foundations of the autonomic and parallel computing paradigms. Two approaches, based on intelligent cores and intelligent agents, are proposed to achieve autonomy in parallel computing systems. The feasibility of the proposed approaches is validated on a multi-agent simulator.
Abstract:
Space applications demand reliable systems, and autonomic computing defines such reliable systems as self-managing systems. The work reported in this paper combines agent-based and swarm robotic approaches, leading to swarm-array computing, a novel technique to achieve self-managing distributed parallel computing systems. Two swarm-array computing approaches, based on swarms of computational resources and swarms of tasks, are explored. An FPGA is considered as the computing system. The feasibility of the two proposed approaches, which bind the computing system and the task together, is simulated on the SeSAm multi-agent simulator.