814 results for data gathering algorithm
Restoration of images and 3D data to higher resolution by deconvolution with sparsity regularization
Abstract:
Image convolution is conventionally approximated by the LTI discrete model. It is well recognized that the higher the sampling rate, the better the approximation. However, images or 3D data are sometimes only available at a lower sampling rate due to physical constraints of the imaging system. In this paper, we model the under-sampled observation as the result of combining convolution and subsampling. Because the wavelet coefficients of piecewise smooth images tend to be sparse and well modelled by tree-like structures, we propose the L0 reweighted-L2 minimization (L0RL2) algorithm to solve this problem. The algorithm promotes model-based sparsity by minimizing the reweighted L2 norm, which approximates the L0 norm, and by enforcing a tree model over the weights. We test the algorithm on three examples: a simple ring, the cameraman image and a 3D microscope dataset, and show that good results can be obtained. © 2010 IEEE.
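A minimal sketch of the reweighted-L2 idea in one dimension (the sizes, Gaussian blur kernel, and plain iteratively reweighted ridge solve are illustrative assumptions; the paper's tree model over the weights is omitted): each iteration solves a weighted L2 problem whose weights penalize small coefficients heavily, so repeated reweighting approximates an L0 penalty under the combined convolve-then-subsample operator.

```python
import numpy as np

# Illustrative 1-D analogue of the observation model: y = S H x + noise,
# with H a convolution and S a subsampling operator (both assumptions).
rng = np.random.default_rng(0)
n, keep = 128, 2                                  # signal length, subsampling
x_true = np.zeros(n)
x_true[[20, 60, 100]] = [1.0, -0.7, 0.5]          # sparse ground truth
h = np.exp(-0.5 * (np.arange(-5, 6) / 2.0) ** 2)
h /= h.sum()                                      # Gaussian blur kernel

# Circulant convolution matrix followed by row subsampling.
H = np.array([np.roll(np.pad(h, (0, n - h.size)), k - 5) for k in range(n)])
A = H[::keep, :]
y = A @ x_true + 0.01 * rng.standard_normal(A.shape[0])

# Iteratively reweighted L2: weights 1/(x^2 + eps) mimic an L0 penalty.
lam, eps = 0.05, 1e-3
x = np.zeros(n)
for _ in range(30):
    w = 1.0 / (x ** 2 + eps)
    x = np.linalg.solve(A.T @ A + lam * np.diag(w), A.T @ y)

print("recovered support:", np.where(np.abs(x) > 0.1)[0])
```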
Abstract:
Reducing energy consumption is a major challenge for energy-intensive industries such as papermaking. A commercially viable energy-saving solution is to employ data-based optimization techniques to obtain a set of optimized operational settings that satisfy certain performance indices. The difficulties are threefold: 1) problems of this type are inherently multicriteria, in the sense that improving one performance index might compromise other important measures; 2) practical systems often exhibit unknown complex dynamics and several interconnections, which make the modeling task difficult; and 3) as the models are acquired from existing historical data, they are valid only locally, and extrapolation carries the risk of increased process variability. To overcome these difficulties, this paper presents a new decision support system for robust multiobjective optimization of interconnected processes. The plant is first divided into serially connected units to model the process, product quality, energy consumption, and corresponding uncertainty measures. A multiobjective gradient descent algorithm is then used to solve the problem in line with the user's preference information. Finally, the optimization results are visualized for analysis and decision making. In practice, if further iterations of the optimization algorithm are considered, the validity of the local models must be checked before proceeding. The method is implemented in a MATLAB-based interactive tool, DataExplorer, which supports a range of data analysis, modeling, and multiobjective optimization techniques. The proposed approach was tested in two U.K.-based commercial paper mills, where the aim was to reduce steam consumption and increase productivity while maintaining product quality by optimizing the vacuum pressures in the forming and press sections. The experimental results demonstrate the effectiveness of the method. © 2006 IEEE.
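A hedged sketch of the weighted-sum flavour of multiobjective gradient descent (the two objective functions, preference weights, and step size below are invented placeholders, not the paper's mill models): each iteration descends along a preference-weighted combination of the objectives' gradients.

```python
import numpy as np

# Toy stand-in objectives: f1 = "steam consumption", f2 = "quality loss".
# Both are hypothetical; real models would be identified from plant data.
def f1(u): return (u[0] - 1.0) ** 2 + 0.5 * u[1] ** 2
def f2(u): return (u[0] + 0.5) ** 2 + (u[1] - 1.0) ** 2

def grad(f, u, h=1e-6):
    # Central-difference gradient of a scalar objective.
    g = np.zeros_like(u)
    for i in range(u.size):
        d = np.zeros_like(u); d[i] = h
        g[i] = (f(u + d) - f(u - d)) / (2 * h)
    return g

prefs = np.array([0.7, 0.3])           # user's preference weights
u = np.zeros(2)                        # operational settings
for _ in range(200):
    step = prefs[0] * grad(f1, u) + prefs[1] * grad(f2, u)
    u -= 0.05 * step                   # descend along the weighted gradient

print("compromise setting:", u, "objectives:", f1(u), f2(u))
```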
Abstract:
Changepoint models are widely used to model the heterogeneity of sequential data. We present a novel sequential Monte Carlo (SMC) online expectation-maximization (EM) algorithm for estimating the static parameters of such models. The SMC online EM algorithm has a per-time-step cost that is linear in the number of particles, which is particularly important when the data form a long sequence of observations, since it drastically reduces the computational requirements for implementation. We present an asymptotic analysis of the stability of the SMC estimates used in the online EM algorithm and demonstrate the performance of this scheme on both simulated and real data originating from DNA analysis.
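A rough illustration of the SMC side only, under a piecewise-constant Gaussian changepoint model that is an assumption here (the online EM recursion for the static parameters is omitted): a bootstrap particle filter tracks the current segment level, with per-step cost linear in the number of particles.

```python
import numpy as np

rng = np.random.default_rng(1)
# Simulated piecewise-constant data with Gaussian noise (assumed model).
y = np.concatenate([rng.normal(0, 0.3, 100), rng.normal(2, 0.3, 100)])

N, p_cp, sigma = 500, 0.01, 0.3        # particles, changepoint prob., noise sd
means = rng.normal(0, 2, N)            # each particle: current segment mean
for t, obs in enumerate(y):
    jump = rng.random(N) < p_cp        # propagate: maybe start a new segment
    means = np.where(jump, rng.normal(0, 2, N), means)
    logw = -0.5 * ((obs - means) / sigma) ** 2   # Gaussian likelihood weights
    w = np.exp(logw - logw.max()); w /= w.sum()
    means = means[rng.choice(N, N, p=w)]         # multinomial resampling
    if t in (99, 100, 199):
        print(f"t={t}: posterior mean of segment level = {means.mean():.2f}")
```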
Abstract:
The measured time-history of the cylinder pressure is the principal diagnostic in the analysis of processes within the combustion chamber. This paper defines, implements and tests a pressure analysis algorithm for a Formula One racing engine in MATLAB. Evaluation of the software on real data is presented. The sensitivity of the model to the variability of burn parameter estimates is also discussed. Copyright © 1997 Society of Automotive Engineers, Inc.
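The paper's algorithm itself is not reproduced here; as a generic example of extracting burn parameters from a measured pressure trace, the sketch below applies the classical Rassweiler-Withrow method to a synthetic cycle (the volume curve, polytropic index, and combustion shape are all invented).

```python
import numpy as np

# Synthetic pressure/volume traces versus crank angle (purely illustrative).
theta = np.linspace(-60, 60, 241)                          # degrees about TDC
V = 5e-5 + 4.5e-4 * (1 - np.cos(np.radians(theta))) / 2    # toy volume curve
p = 2e5 * (V[0] / V) ** 1.3                                # motored baseline
p = p * (1 + 0.8 / (1 + np.exp(-(theta - 5) / 6)))         # combustion rise

# Rassweiler-Withrow: pressure rise not explained by polytropic
# compression/expansion is attributed to combustion.
n_poly = 1.3                                               # assumed index
dp_comb = p[1:] - p[:-1] * (V[:-1] / V[1:]) ** n_poly
mfb = np.cumsum(np.clip(dp_comb, 0, None))
mfb /= mfb[-1]                                             # mass fraction burned

for frac in (0.1, 0.5, 0.9):
    print(f"CA{int(frac * 100)} = {theta[1:][np.searchsorted(mfb, frac)]:.1f} deg")
```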
Abstract:
DNA microarrays provide a huge amount of data and therefore require dimensionality reduction methods to extract meaningful biological information. Independent Component Analysis (ICA) has been proposed by several authors as a promising approach. Unfortunately, experimental data are usually of poor quality because of noise, outliers and a lack of samples. Robustness to these hurdles is thus a key feature for an ICA algorithm. This paper identifies a robust contrast function and proposes a new ICA algorithm. © 2007 IEEE.
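The paper's specific robust contrast is not reproduced here; as a generic stand-in, this sketch runs a FastICA-style fixed-point iteration with the tanh contrast, a common choice for its tolerance to outliers (the mixing matrix and Laplacian sources are synthetic).

```python
import numpy as np

rng = np.random.default_rng(2)
# Two synthetic super-Gaussian sources, linearly mixed (illustrative only).
S = rng.laplace(size=(2, 2000))
X = np.array([[1.0, 0.6], [0.4, 1.0]]) @ S

# Whiten the observations.
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(X))
Z = E @ np.diag(d ** -0.5) @ E.T @ X

# FastICA fixed-point iteration with the (outlier-tolerant) tanh contrast.
W = rng.standard_normal((2, 2))
for _ in range(100):
    G = np.tanh(W @ Z)
    W_new = G @ Z.T / Z.shape[1] - np.diag((1 - G ** 2).mean(axis=1)) @ W
    U, _, Vt = np.linalg.svd(W_new)       # symmetric decorrelation
    W = U @ Vt

print("excess kurtosis of unmixed sources:", ((W @ Z) ** 4).mean(axis=1) - 3)
```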
Abstract:
Spatial normalisation is a key element of statistical parametric mapping and related techniques for analysing cohort statistics on voxel arrays and surfaces. The normalisation process involves aligning each individual specimen to a template using some sort of registration algorithm. Any misregistration will result in data being mapped onto the template at the wrong location. At best, this will introduce spatial imprecision into the subsequent statistical analysis. At worst, when the misregistration varies systematically with a covariate of interest, it may lead to false statistical inference. Since misregistration generally depends on the specimen's shape, we investigate here the effect of allowing for shape as a confound in the statistical analysis, with shape represented by the dominant modes of variation observed in the cohort. In a series of experiments on synthetic surface data, we demonstrate how allowing for shape can reveal true effects that were previously masked by systematic misregistration, and also guard against misinterpreting systematic misregistration as a true effect. We introduce some heuristics for disentangling misregistration effects from true effects, and demonstrate the approach's practical utility in a case study of the cortical bone distribution in 268 human femurs.
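A schematic of the confound idea on invented data (cohort size, shape scores, and the simulated misregistration are all assumptions): the dominant shape modes enter a per-vertex linear model as nuisance covariates, and the group effect is re-estimated with and without them.

```python
import numpy as np

rng = np.random.default_rng(3)
n_subj, n_vert = 80, 500
group = rng.integers(0, 2, n_subj).astype(float)   # covariate of interest
shape = rng.standard_normal((n_subj, 3))           # stand-in shape mode scores
shape[:, 0] += 0.8 * (group - 0.5)                 # misregistration varying
                                                   # systematically with group
# Synthetic surface data whose only structure comes from misregistration.
Y = np.outer(shape[:, 0], rng.standard_normal(n_vert))
Y += 0.1 * rng.standard_normal((n_subj, n_vert))

# Per-vertex GLM: intercept + group + dominant shape modes as confounds.
X = np.column_stack([np.ones(n_subj), group, shape])
beta = np.linalg.lstsq(X, Y, rcond=None)[0]
print("max |group effect|, shape confound in model:", np.abs(beta[1]).max())

X0 = np.column_stack([np.ones(n_subj), group])     # naive model, no confound
beta0 = np.linalg.lstsq(X0, Y, rcond=None)[0]
print("max |group effect|, no confound (spurious):", np.abs(beta0[1]).max())
```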
Abstract:
This work considers the problem of fitting data on a Lie group by a coset of a compact subgroup. This problem can be seen as an extension of the problem of fitting affine subspaces in R^n to data, which can be solved using principal component analysis. We show how, for biinvariant distances, the fitting problem can be reduced to a generalized mean calculation on a homogeneous space. For biinvariant Riemannian distances we provide an algorithm based on the Karcher mean gradient algorithm. We illustrate our approach with examples on SO(n). © 2010 Springer-Verlag Berlin Heidelberg.
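A minimal sketch of the Karcher mean gradient iteration on SO(3) (synthetic rotations; the paper's reduction of coset fitting to this mean computation is not reproduced): the estimate is repeatedly moved along the exponential map of the averaged log-displacements.

```python
import numpy as np
from scipy.linalg import expm, logm

rng = np.random.default_rng(4)

def rand_rotation(scale=0.3):
    A = rng.standard_normal((3, 3)) * scale
    return expm(A - A.T)                  # exp of a skew matrix lies in SO(3)

Rs = [rand_rotation() for _ in range(20)]     # synthetic sample on SO(3)

M = np.eye(3)
for _ in range(50):
    # Riemannian gradient step: average of log(M^T R) over the sample.
    V = sum(np.real(logm(M.T @ R)) for R in Rs) / len(Rs)
    M = M @ expm(V)                       # move along the exponential map
    if np.linalg.norm(V) < 1e-10:
        break

print("Karcher mean:\n", np.round(M, 3))
```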
Abstract:
This paper outlines necessary and sufficient conditions for network reconstruction of linear, time-invariant systems using data from either knock-out or over-expression experiments. These structural system perturbations, which are common in biological experiments, can be formulated as unknown system inputs, allowing the network topology and dynamics to be found. We assume that only partial state measurements are available and propose an algorithm that can reconstruct the network at the level of the measured states using either time-series or steady-state data. A simulated example illustrates how the algorithm successfully reconstructs a network from data. © 2013 EUCA.
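A toy steady-state illustration under assumptions stronger than the paper's (full state measurement and exactly one unknown-magnitude perturbation per node): stacking the steady states gives X = -A^{-1} D for an unknown diagonal D, so the rows of X^{-1} recover the rows of the network matrix A up to scale.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4
A = -np.eye(n) + 0.3 * rng.standard_normal((n, n))   # toy "true" network

# One perturbation experiment per node with unknown magnitude d_i:
# at steady state 0 = A x + d_i e_i, so the stacked states give X = -A^{-1} D.
D = np.diag(rng.uniform(0.5, 2.0, n))
X = -np.linalg.solve(A, D)                           # simulated measurements

B = np.linalg.inv(X)        # each row of -B is a row of A scaled by 1/d_i
row_norm = lambda M: M / np.abs(M).sum(axis=1, keepdims=True)
print("true rows (normalized):\n", np.round(row_norm(A), 3))
print("recovered (normalized):\n", np.round(row_norm(-B), 3))
```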
Abstract:
Flow measurement data at the district meter area (DMA) level has the potential for burst detection in water distribution systems. This work investigates fitting a polynomial function to historic flow measurements using a weighted least-squares method for automatic burst detection in U.K. water distribution networks. This approach, when used in conjunction with an expectation-maximization (EM) algorithm, can automatically select useful data from the historic flow measurements, which may contain both normal and abnormal operating conditions in the distribution network, e.g., water bursts. The model can thus estimate the normal water flow (nonburst condition), and hence the burst size can be calculated from the difference between the measured flow and the estimated flow. The distinguishing feature of this method is that the burst detection is fully unsupervised: burst events that have occurred in the historic data do not affect the procedure or bias the burst detection algorithm. Experimental validation has been carried out using a series of flushing events that simulate burst conditions, confirming that the simulated burst sizes can be estimated correctly. The method was also applied to eight DMAs with known real burst events, and the burst detections are shown to correspond to the water company's records of pipeline repair work. © 2014 American Society of Civil Engineers.
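A rough sketch of the polynomial fit with an EM-style soft reweighting (the synthetic diurnal profile, polynomial degree, and simple Gaussian responsibility are illustrative assumptions, not the paper's exact scheme): samples the polynomial explains poorly are down-weighted as likely burst data, and the burst size is read off the residual from the refit normal-flow curve.

```python
import numpy as np

rng = np.random.default_rng(6)
t = np.linspace(0, 1, 288)                        # one day of 5-min samples
flow = 10 + 3 * np.sin(2 * np.pi * t)             # synthetic diurnal demand
flow += 0.2 * rng.standard_normal(t.size)
flow[200:] += 2.5                                 # simulated burst

B = np.vander(t, 6)                               # degree-5 polynomial basis
w = np.ones(t.size)
for _ in range(20):
    sw = np.sqrt(w)
    # Weighted least-squares fit of the "normal" flow polynomial.
    coef = np.linalg.lstsq(B * sw[:, None], flow * sw, rcond=None)[0]
    r = flow - B @ coef
    s2 = np.average(r ** 2, weights=w)            # residual scale estimate
    w = np.exp(-0.5 * r ** 2 / s2)                # soft down-weighting of
    w /= w.max()                                  # poorly explained samples

burst = flow - B @ coef
print("estimated burst size:", burst[w < 0.1].mean())
```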
Abstract:
In this paper, we construct an iris recognition algorithm based on point covering of high-dimensional space and multi-weighted neurons, and propose a new method for iris recognition based on the point covering theory of high-dimensional space. In this method, irises are trained as "cognition" classes one by one, and adding a new class does not affect the recognition knowledge already learned for existing classes. The experiments show a rejection rate of 98.9%, a correct recognition rate of 95.71% and an error rate of 3.5%. The rejection rate for test samples from classes excluded from the training set is very high, which demonstrates that the proposed method for iris recognition is effective.
Abstract:
We used the plane wave expansion method and a rapid genetic algorithm (RGA) to design two-dimensional photonic crystals with a large absolute band gap. A filling-fraction control operator and a Fourier-transform data-storage mechanism were integrated into the genetic operators to obtain the desired photonic crystals effectively and efficiently. Starting from randomly generated photonic crystals, the proposed RGA evolved toward the best objectives and yielded a square-lattice photonic crystal with a band gap (defined as the gap to mid-gap ratio) as large as 13.25%. Furthermore, the evolutionary objective was modified to yield a satisfactory photonic crystal for application to slab systems.
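A skeletal genetic algorithm around a placeholder fitness (a real run would score each candidate with a plane-wave-expansion band-gap solver; the encoding, population size, and mutation rate here are invented):

```python
import numpy as np

rng = np.random.default_rng(7)

def fitness(genome):
    # Placeholder: a real run would call a plane-wave-expansion solver
    # and return the gap-to-mid-gap ratio of the encoded lattice.
    return -np.sum((genome - 0.3) ** 2)

pop = rng.random((40, 64))                     # candidate unit-cell encodings
for gen in range(200):
    scores = np.array([fitness(g) for g in pop])
    elite = pop[np.argsort(scores)[-20:]]      # keep the best half
    moms = elite[rng.integers(0, 20, 20)]
    dads = elite[rng.integers(0, 20, 20)]
    cut = rng.integers(1, 63)
    kids = np.hstack([moms[:, :cut], dads[:, cut:]])   # one-point crossover
    mut = rng.random(kids.shape) < 0.02                # 2% mutation rate
    kids[mut] = rng.random(mut.sum())
    pop = np.vstack([elite, kids])

print("best placeholder fitness:", max(fitness(g) for g in pop))
```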
Abstract:
Identifying protein-protein interactions is crucial for understanding cellular functions. Genomic data provide both opportunities and challenges in identifying these interactions. We uncover rules for predicting protein-protein interactions using a frequent pattern tree (FPT) approach modified to generate a minimum set of rules (mFPT), with rule attributes constructed from the interaction features of yeast genomic data. The mFPT prediction accuracy is benchmarked against other commonly used methods, such as Bayesian networks and logistic regression, under various statistical measures. Our study indicates that mFPT outperforms the other methods in predicting protein-protein interactions for the database used. Based on the generated rules, we predict a new protein-protein interaction complex whose biological function is related to pre-mRNA splicing, as well as new protein-protein interactions within existing complexes.
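Not the paper's mFPT procedure, but a toy of the same rule-mining flavour (the features, thresholds, and brute-force counting stand in for the FP-tree): feature patterns frequent among known interacting pairs become prediction rules for new pairs.

```python
from itertools import combinations
from collections import Counter

# Toy records: feature sets observed for known interacting protein pairs.
records = [
    {"colocalized", "coexpressed", "same_complex"},
    {"colocalized", "coexpressed"},
    {"coexpressed", "same_complex"},
    {"colocalized", "coexpressed", "same_complex"},
]
min_support = 3

# Brute-force frequent-itemset count (an FP-tree would avoid this full scan).
counts = Counter()
for rec in records:
    for k in (1, 2, 3):
        for combo in combinations(sorted(rec), k):
            counts[combo] += 1

rules = [set(c) for c, n in counts.items() if n >= min_support]
print("frequent patterns usable as rules:", rules)

# A candidate pair is predicted to interact if its features cover some rule.
candidate = {"colocalized", "coexpressed", "nuclear"}
print("predicted to interact:", any(r <= candidate for r in rules))
```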
Abstract:
A new algorithm based on a multiparameter neural network is proposed to retrieve wind speed (WS), sea surface temperature (SST), sea surface air temperature, and relative humidity (RH) simultaneously over the global oceans from Special Sensor Microwave Imager (SSM/I) observations. The retrieved geophysical parameters are used to estimate the surface latent heat flux and sensible heat flux using a bulk method over the global oceans. The neural network is trained and validated with matchups of SSM/I overpasses and National Data Buoy Center buoys under both clear and cloudy weather conditions. In addition, the data acquired by the 85.5-GHz channels of SSM/I are used as input variables of the neural network to improve its performance. The root-mean-square (rms) errors between the estimated WS, SST, sea surface air temperature, and RH from SSM/I observations and the buoy measurements are 1.48 m s^-1, 1.54°C, 1.47°C, and 7.85, respectively. The rms errors between the estimated latent and sensible heat fluxes from SSM/I observations and the Xisha Island (in the South China Sea) measurements are 3.21 and 30.54 W m^-2, whereas those between the SSM/I estimates and the buoy data are 4.9 and 37.85 W m^-2, respectively. Both of these errors (those for WS, SST, and sea surface air temperature, in particular) are smaller than those of previous retrieval algorithms for SSM/I observations over the global oceans. Unlike previous methods, the present algorithm is capable of producing near-real-time estimates of surface latent and sensible heat fluxes for the global oceans from SSM/I data.
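A generic stand-in for this style of multiparameter retrieval (synthetic brightness temperatures, an invented linear mapping, and a small scikit-learn MLP; the paper's network, channels, and buoy matchups are not reproduced): a single network regresses several geophysical parameters at once.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(8)
# Synthetic "brightness temperatures" for 7 SSM/I-like channels, and
# invented linear-ish relations to (WS, SST, air T, RH) plus noise.
Tb = 150 + 100 * rng.random((5000, 7))
M = rng.standard_normal((7, 4)) * 0.05
params = Tb @ M + 2 * rng.standard_normal((5000, 4))

# One network retrieves all four parameters simultaneously.
mlp = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=500, random_state=0)
mlp.fit(Tb[:4000], params[:4000])

rmse = np.sqrt(((mlp.predict(Tb[4000:]) - params[4000:]) ** 2).mean(axis=0))
print("per-parameter RMSE on held-out matchups:", np.round(rmse, 2))
```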
Abstract:
A new model is proposed to estimate significant wave heights from ERS-1/2 scatterometer data. The results show that the relationship between wave parameters and the radar backscattering cross section is similar to that between wind and the radar backscattering cross section. The relationship between significant wave height and the radar backscattering cross section is therefore established with a neural network algorithm: the root mean square error of the significant wave height retrieved from ERS-1/2 data is 0.51 m when the average wave period is <= 7 s, and 0.72 m when it is > 7 s.