Abstract:
We investigate the problem of timing recovery for two-dimensional magnetic recording (TDMR) channels. We develop a timing error model for the TDMR channel that accounts for phase and frequency offsets in the presence of noise. We propose a 2-D data-aided phase-locked loop (PLL) architecture for tracking variations in the position and movement of the read head in the down-track and cross-track directions, and analyze the convergence of the algorithm under non-separable timing errors. We further develop a 2-D interpolation-based timing recovery scheme that works in conjunction with the 2-D PLL. We quantify the performance of the proposed algorithms through simulations over a 2-D magnetic recording channel with timing errors.
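A minimal sketch may make the 2-D data-aided loop concrete. The error detector, the proportional/integral gains, and the signal model below are illustrative assumptions, not the paper's implementation.

```python
# Sketch (assumptions throughout): a second-order data-aided PLL tracking
# down-track (dt) and cross-track (ct) timing offsets sample by sample.
import numpy as np

def timing_error(readback, expected, slope_dt, slope_ct):
    """Data-aided error detector: project the residual onto the local
    signal slopes in the down-track and cross-track directions."""
    residual = readback - expected
    return residual * slope_dt, residual * slope_ct

def pll_track(readbacks, expecteds, slopes_dt, slopes_ct, kp=0.01, ki=0.001):
    tau_dt = tau_ct = 0.0          # timing-offset estimates
    acc_dt = acc_ct = 0.0          # integrator states (frequency offset)
    history = []
    for r, e, sdt, sct in zip(readbacks, expecteds, slopes_dt, slopes_ct):
        e_dt, e_ct = timing_error(r, e, sdt, sct)
        acc_dt += ki * e_dt
        acc_ct += ki * e_ct
        tau_dt += kp * e_dt + acc_dt   # proportional + integral path
        tau_ct += kp * e_ct + acc_ct
        history.append((tau_dt, tau_ct))
    return history
```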
Abstract:
Exposure to band-gap light of a thermally evaporated As40Sb15Se45 amorphous film of 800 nm thickness was found to be accompanied by optical changes. The as-prepared and illuminated thin films were studied by X-ray diffraction, Fourier-transform infrared spectroscopy, X-ray photoelectron spectroscopy (XPS), and Raman spectroscopy. The optical band gap was reduced due to photo-induced effects, along with an increase in disorder. These changes in optical properties are due to changes in the homopolar bond densities. The core-level peak shifts in the XPS spectra and the Raman shifts support the optical changes occurring in the film due to light exposure.
Abstract:
Breast cancer is one of the leading causes of cancer-related deaths in women, and early detection is crucial for reducing mortality rates. In this paper, we present a novel and fully automated approach based on tissue transition analysis for lesion detection in breast ultrasound images. Every candidate pixel is classified as belonging to the lesion boundary, the lesion interior, or normal tissue based on its descriptor value. The tissue transitions are modeled using a Markov chain to estimate the likelihood of a candidate lesion region. Experimental evaluation on a clinical dataset of 135 images shows that the proposed approach achieves high sensitivity (95%) with a modest three false positives per image. The approach achieves very similar results (94% sensitivity at 3 false positives per image) on a completely different clinical dataset of 159 images without retraining, highlighting its robustness.
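To make the tissue-transition idea concrete, here is a small illustrative sketch of scoring a label sequence under a first-order Markov chain. The states match those above, but the transition probabilities are hypothetical placeholders rather than the paper's trained values.

```python
# Sketch: log-likelihood of a tissue-label sequence along a scan path under
# a first-order Markov chain; P and pi0 are illustrative, not trained values.
import numpy as np

STATES = {"normal": 0, "boundary": 1, "interior": 2}

P = np.array([[0.90, 0.09, 0.01],    # P[i, j] = P(next = j | current = i)
              [0.10, 0.30, 0.60],
              [0.02, 0.08, 0.90]])
pi0 = np.array([0.80, 0.15, 0.05])   # initial-state distribution

def sequence_loglik(labels):
    idx = [STATES[l] for l in labels]
    ll = np.log(pi0[idx[0]])
    for a, b in zip(idx[:-1], idx[1:]):
        ll += np.log(P[a, b])
    return ll

# A path passing normal -> boundary -> interior scores as lesion-like:
print(sequence_loglik(["normal", "boundary", "interior", "interior"]))
```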
Abstract:
Clustering techniques that can handle incomplete data have become increasingly important due to varied applications in marketing research, medical diagnosis, and survey data analysis. Existing techniques cope with missing values either by data modification/imputation or by partial distance computation, both of which can be unreliable depending on the number of features available. In this paper, we propose a novel approach for clustering data with missing values that performs the task by Symmetric Non-negative Matrix Factorization (SNMF) of a complete pairwise similarity matrix computed from the given incomplete data. To accomplish this, we define a novel similarity measure based on the Average Overlap similarity metric, which can effectively handle missing values without modification of the data. Further, the similarity measure is more reliable than partial distances and inherently possesses the properties required to perform SNMF. Experimental evaluation on real-world datasets demonstrates that the proposed approach is efficient and scalable, and shows significantly better performance than existing techniques.
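As a rough sketch of the overall pipeline, the snippet below builds a pairwise similarity matrix from incomplete data using only co-observed features and clusters it with symmetric NMF via multiplicative updates. The similarity used here is a simple placeholder, not the Average Overlap measure, and the update rule is the standard SNMF heuristic.

```python
# Sketch: similarity from incomplete data, then SNMF clustering (S ~ H H^T).
import numpy as np

def pairwise_similarity(X):
    """X: (n, d) array with np.nan marking missing entries."""
    n = X.shape[0]
    S = np.eye(n)
    for i in range(n):
        for j in range(i + 1, n):
            mask = ~np.isnan(X[i]) & ~np.isnan(X[j])   # co-observed features
            if mask.any():
                d = np.linalg.norm(X[i, mask] - X[j, mask]) / mask.sum()
                S[i, j] = S[j, i] = np.exp(-d)
    return S

def symnmf(S, k, iters=200, eps=1e-9):
    """Symmetric NMF with multiplicative updates; argmax per row = cluster."""
    H = np.abs(np.random.default_rng(0).standard_normal((S.shape[0], k)))
    for _ in range(iters):
        H *= (S @ H) / (H @ (H.T @ H) + eps)
    return H.argmax(axis=1)
```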
Abstract:
For a multilayered specimen, the back-scattered signal in frequency-domain optical coherence tomography (FDOCT) is expressible as a sum of cosines, each corresponding to a change of refractive index in the specimen. Each of the cosines represents a peak in the reconstructed tomogram. We consider a truncated cosine-series representation of the signal, with the constraint that the coefficients in the basis expansion be sparse. An l(2) (sum of squared errors) data error is considered with an l(1) (sum of absolute values) constraint on the coefficients. The optimization problem is solved using Weiszfeld's iteratively reweighted least-squares (IRLS) algorithm. On real FDOCT data, improved results are obtained over the standard reconstruction technique, with lower levels of background measurement noise and artifacts owing to the strong l(1) penalty. Previous sparse tomogram reconstruction techniques in the literature proposed collecting sparse samples, necessitating a change in the data-capture process conventionally used in FDOCT. The IRLS-based method proposed in this paper does not suffer from this drawback.
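A compact sketch of the generic technique, under assumed notation: with a truncated cosine dictionary A, measurements y, and sparse coefficients x, IRLS solves the l(2)-l(1) problem by repeatedly reweighting the penalty. This illustrates the idea and is not the paper's exact implementation.

```python
# Sketch: IRLS for min_x ||y - A x||^2 + lam * ||x||_1, using the surrogate
# |x_i| ~= x_i^2 / |x_i^(k)| at each iteration.
import numpy as np

def irls_l1(A, y, lam=0.1, iters=50, eps=1e-6):
    x = np.linalg.lstsq(A, y, rcond=None)[0]        # least-squares initialization
    for _ in range(iters):
        W = np.diag(lam / (np.abs(x) + eps))        # reweighted l1 penalty
        x = np.linalg.solve(A.T @ A + W, A.T @ y)
    return x

# Example: a sparse cosine-series model of an A-scan (illustrative only)
k = np.arange(256)
A = np.cos(np.outer(np.linspace(0, np.pi, 512), k))   # truncated cosine basis
x_true = np.zeros(256); x_true[[20, 75]] = [1.0, 0.6]
y = A @ x_true + 0.01 * np.random.default_rng(1).standard_normal(512)
print(np.argsort(np.abs(irls_l1(A, y)))[-2:])          # recovers the two peaks
```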
Abstract:
We address the problem of separating a speech signal into its excitation and vocal-tract filter components, which falls within the framework of blind deconvolution. Typically, the excitation in the case of voiced speech is assumed to be sparse and the vocal-tract filter stable. We develop an alternating l(p)-l(2) projections algorithm (ALPA) to perform deconvolution taking these constraints into account. The algorithm is iterative and alternates between two solution spaces. The initialization is based on the standard linear prediction decomposition of a speech signal into an autoregressive filter and a prediction residue. In every iteration, a sparse excitation is estimated by optimizing an l(p)-norm-based cost, and the vocal-tract filter is derived as the solution to a standard least-squares minimization problem. We validate the algorithm on voiced segments of natural speech signals and show applications to epoch estimation. We also present comparisons with state-of-the-art techniques and show that ALPA gives a sparser, impulse-like excitation, where the impulses directly denote the epochs, or instants of significant excitation.
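The alternation can be sketched as follows under simplifying assumptions; a hard-threshold step stands in for the paper's IRLS-based l(p)-norm minimization, and the filter update is a plain least-squares fit.

```python
# Sketch: alternate between a sparsity-promoting excitation estimate and a
# least-squares re-estimate of the inverse (AR) filter, initialized by LPC.
import numpy as np
from scipy.linalg import solve_toeplitz, toeplitz

def lpc(x, order):
    """Autocorrelation-method linear prediction: A(z) = 1 - sum a_k z^-k."""
    r = np.correlate(x, x, "full")[len(x) - 1: len(x) + order]
    a = solve_toeplitz((r[:-1], r[:-1]), r[1:])
    return np.concatenate(([1.0], -a))

def alternating_deconv(x, order=10, iters=10, keep=0.1):
    a = lpc(x, order)                                  # initialization
    X = toeplitz(x, np.r_[x[0], np.zeros(order)])      # conv(x, a) == X @ a
    for _ in range(iters):
        r = X @ a                                      # prediction residual
        e = np.where(np.abs(r) >= np.quantile(np.abs(r), 1 - keep), r, 0.0)
        a, *_ = np.linalg.lstsq(X, e, rcond=None)      # filter update
        a /= a[0]                                      # keep the inverse filter monic
    return e, a                                        # sparse excitation, AR filter
```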
Abstract:
In big-data image/video analytics, we encounter the problem of learning an over-complete dictionary for sparse representation from a large training dataset that cannot be processed at once because of storage and computational constraints. To tackle dictionary learning in such scenarios, we propose an algorithm that exploits the inherent clustered structure of the training data and makes use of a divide-and-conquer approach. The fundamental idea behind the algorithm is to partition the training dataset into smaller clusters and learn local dictionaries for each cluster. Subsequently, the local dictionaries are merged to form a global dictionary. Merging is done by solving another dictionary learning problem on the atoms of the locally trained dictionaries. This algorithm is referred to as the split-and-merge algorithm. We show that the proposed algorithm is efficient in its memory usage and computational complexity, and performs on par with the standard learning strategy, which operates on the entire dataset at once. As an application, we consider the problem of image denoising. We present a comparative analysis of our algorithm with standard learning techniques that use the entire database at once, in terms of training and denoising performance. We observe that the split-and-merge algorithm results in a remarkable reduction in training time without significantly affecting the denoising performance.
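A rough sketch of the split-and-merge pipeline, using scikit-learn's KMeans and MiniBatchDictionaryLearning purely for illustration; the cluster counts and atom counts are arbitrary assumptions, not the paper's settings.

```python
# Sketch: cluster the training patches, learn a local dictionary per cluster,
# then merge by running dictionary learning on the pooled local atoms.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import MiniBatchDictionaryLearning

def split_and_merge(X, n_clusters=4, local_atoms=32, global_atoms=64):
    """X: (n_samples, n_features) training patches."""
    labels = KMeans(n_clusters=n_clusters, random_state=0).fit_predict(X)
    local_dicts = []
    for c in range(n_clusters):                          # split: local dictionaries
        dl = MiniBatchDictionaryLearning(n_components=local_atoms, random_state=0)
        local_dicts.append(dl.fit(X[labels == c]).components_)
    atoms = np.vstack(local_dicts)                       # pooled local atoms
    merger = MiniBatchDictionaryLearning(n_components=global_atoms, random_state=0)
    return merger.fit(atoms).components_                 # merged global dictionary
```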
Abstract:
Multiplicative noise degrades a signal far more severely than additive noise. In this paper, we address the problem of suppressing multiplicative noise in one-dimensional signals. To deal with signals corrupted by multiplicative noise, we propose a denoising algorithm based on minimization of an unbiased estimator (MURE) of the mean-square error (MSE). We derive an expression for an unbiased estimate of the MSE. The proposed denoising is carried out in the wavelet domain (soft thresholding) by considering the time-domain MURE. The parameters of the thresholding function are obtained by minimizing the unbiased estimator MURE. We show that the parameters obtained by minimizing MURE are very close to the optimal parameters obtained from the oracle MSE. Experiments show that the SNR improvement of the proposed denoising algorithm is competitive with a state-of-the-art method.
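Structurally, the method amounts to choosing the threshold of a wavelet-domain soft-thresholding rule by minimizing a risk estimate computed from the noisy data alone. The sketch below shows that structure with the MURE expression left as a placeholder function, since the estimator itself is derived in the paper.

```python
# Sketch: soft-threshold wavelet coefficients and select the threshold that
# minimizes a supplied unbiased risk estimate (placeholder for MURE).
import numpy as np
import pywt

def soft(c, t):
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def denoise_with_risk(y, risk_fn, wavelet="db4", level=4, thresholds=None):
    thresholds = np.linspace(0, 1, 50) if thresholds is None else thresholds
    coeffs = pywt.wavedec(y, wavelet, level=level)
    best_t, best_risk, best_x = None, np.inf, y
    for t in thresholds:
        den = [coeffs[0]] + [soft(c, t) for c in coeffs[1:]]
        x_hat = pywt.waverec(den, wavelet)[: len(y)]
        r = risk_fn(y, x_hat)          # unbiased MSE estimate from noisy data alone
        if r < best_risk:
            best_t, best_risk, best_x = t, r, x_hat
    return best_x, best_t
```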
Abstract:
Local polynomial approximation of data is an approach to signal denoising. Savitzky-Golay (SG) filters are finite-impulse-response kernels that are convolved with the data to yield a polynomial approximation for a chosen set of filter parameters. When the noise follows Gaussian statistics, minimization of the mean-squared error (MSE) between the noisy signal and its polynomial approximation is optimum in the maximum-likelihood (ML) sense, but the MSE criterion is not optimal for non-Gaussian noise conditions. In this paper, we robustify the SG filter for applications involving noise following a heavy-tailed distribution. The optimal filtering criterion is achieved by l(1)-norm minimization of the error through the iteratively reweighted least-squares (IRLS) technique. It is interesting to note that, at any stage of the iteration, we solve a weighted SG filter by minimizing an l(2) norm, but the process converges to the l(1)-minimized output. The results show consistent improvement over the standard SG filter performance.
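A minimal sketch of an IRLS-robustified local polynomial (SG-style) fit is given below; the window length, polynomial order, and iteration count are arbitrary assumptions.

```python
# Sketch: at each sample, replace the least-squares local polynomial fit by an
# IRLS loop whose weights steer the fit toward an l1 error criterion.
import numpy as np

def robust_sg(y, window=11, order=3, irls_iters=10, eps=1e-6):
    half = window // 2
    t = np.arange(-half, half + 1)
    V = np.vander(t, order + 1, increasing=True)      # local polynomial basis
    ypad = np.pad(y, half, mode="edge")
    out = np.empty(len(y), dtype=float)
    for i in range(len(y)):
        seg = ypad[i: i + window]
        w = np.ones(window)
        for _ in range(irls_iters):                   # weighted LS -> l1 fit
            c = np.linalg.lstsq(V * w[:, None], seg * w, rcond=None)[0]
            w = 1.0 / np.sqrt(np.abs(seg - V @ c) + eps)
        out[i] = c[0]                                 # fitted value at window center
    return out
```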
Abstract:
Multilevel inverters with hexagonal voltage space-vector structures have improved the performance of induction motor drives compared with two-level inverters. Further reduction in the torque ripple on the motor shaft is possible by using multilevel dodecagonal (12-sided polygon) voltage space-vector structures. The advantages of dodecagonal space-vector-based PWM techniques are the complete elimination of the fifth and seventh harmonics in the phase voltages over the full modulation range and the extension of the linear modulation range. This paper proposes an inverter circuit topology capable of generating multilevel dodecagonal voltage space vectors with symmetric triangles, by cascading two asymmetric three-level inverters with isolated H-bridges. This is made possible by proper selection of the DC-link voltages and of the resultant switching states for the inverters. A simple PWM timing calculation method is also proposed, and experimental results are presented to validate the proposed concept.
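For context, the dwell-time (volt-second balance) calculation for a 12-sided space-vector structure can be sketched generically as below; this is textbook space-vector timing, not the specific method proposed in the paper.

```python
# Sketch: locate the 30-degree sector containing the reference vector and
# solve the volt-second balance for the dwell times of the adjacent vertices.
import numpy as np

def dodecagonal_dwell_times(v_ref, theta_ref, v_mag, Ts):
    """v_ref, theta_ref: reference magnitude/angle; v_mag: vertex magnitude;
    Ts: sampling period. Returns (sector, T1, T2, T0)."""
    sector = int(theta_ref // (np.pi / 6)) % 12
    th1 = sector * np.pi / 6
    th2 = th1 + np.pi / 6
    # volt-second balance: v_ref*Ts*[cos,sin](theta_ref) = T1*V1 + T2*V2
    A = v_mag * np.array([[np.cos(th1), np.cos(th2)],
                          [np.sin(th1), np.sin(th2)]])
    b = v_ref * Ts * np.array([np.cos(theta_ref), np.sin(theta_ref)])
    T1, T2 = np.linalg.solve(A, b)
    return sector, T1, T2, Ts - T1 - T2      # remaining time for inner vectors
```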
Abstract:
This paper describes the development and evolution of research themes in the Design Theory and Methodology (DTM) conference. Essays containing reflections on the history of DTM, supported by an analysis of session titles and papers winning the "best paper award", describe the development of the research themes. A second set of essays describes the evolution of several key research themes. Two broad trends in research themes are evident, with a third one emerging. The topics of the papers in the first decade or so reflect an underlying aim to apply artificial intelligence toward developing systems that could "design". To do so required understanding how human designers behave, formalizing design processes so that they could be computed, and formalizing representations of design knowledge. The themes in the first DTM conference and the recollections of the DTM founders reflect this underlying aim. The second decade of DTM saw the emergence of product development as an underlying concern and included a growth in a systems view of design. More recently, there appears to be a trend toward design-led innovation, which entails both executing the design process more efficiently and understanding the characteristics of market-leading designs so as to produce engineered products and systems of exceptional levels of quality and customer satisfaction.
Abstract:
In this paper, we integrate two or more compliant mechanisms to obtain enhanced functionality for manipulating and mechanically characterizing grasped objects of varied size (cm to sub-mm), stiffness (10^5 to 10 N/m), and material (cement to biological cells). The concepts of the spring-lever (SL) model, stiffness maps, and non-dimensional kinetoelastostatic maps are used to design composite and multi-scale compliant mechanisms. Composite compliant mechanisms comprise two or more different mechanisms within a single elastic continuum, while multi-scale ones possess the additional feature of a substantial difference in the sizes of the mechanisms that are combined into one. We present three applications: (i) a composite compliant device to measure the failure load of cement samples; (ii) a composite multi-scale compliant gripper to measure the bulk stiffness of zebrafish embryos; and (iii) a compliant gripper combined with a negative-stiffness element to reduce the overall stiffness. Prototypes of all three devices were made and tested. The cement sample needed a breaking force of 22.5 N; the zebrafish embryo was found to have a bulk stiffness of about 10 N/m; and the stiffness of a compliant gripper was reduced by 99.8% to 0.2 N/m.
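As a quick check of the reported numbers, and assuming the negative-stiffness element acts in parallel with the gripper (so stiffnesses add):

```python
# Worked arithmetic (assumption: parallel combination, k_eff = k_gripper + k_neg).
k_gripper = 0.2 / (1 - 0.998)      # a 99.8 % reduction down to 0.2 N/m implies ~100 N/m
k_negative = 0.2 - k_gripper       # required negative stiffness, about -99.8 N/m
print(k_gripper, k_negative)       # 100.0  -99.8
```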
Abstract:
How do we assess the capability of a compliant mechanism of given topology and shape? The kinetoelastostatic maps proposed in this paper help answer this question. These maps are drawn in 2D using two non-dimensional quantities, one capturing the nonlinear static response and the other the geometry, material, and applied forces. Geometrically nonlinear finite element analysis is used to create the maps for compliant mechanisms consisting of slender beams. In addition to the topology and shape, the overall proportions and the proportions of the cross-sections of the beam segments are kept fixed for a map. The finite region of the map is parameterized using a non-dimensional quantity defined as the slenderness ratio. The shape and size of the map and the parameterized curves inside it indicate the complete kinetoelastostatic capability of the corresponding compliant mechanism of given topology, shape, and fixed proportions. Static responses considered in this paper include input/output displacement, geometric amplification, mechanical advantage, maximum stress, etc. The maps can be used to compare mechanisms, to choose a suitable mechanism for an application, or to re-design one as needed. The usefulness of the non-dimensional maps is demonstrated with applications of several kinds, non-dimensional portrayal of snap-through mechanisms being one example. The effect of the shape of the cross-section of the beam segments and the role of different segments in the mechanism, as well as the extension to 3D compliant mechanisms, the cases of multiple inputs and outputs, and moment loads, are also explained. The effects of disproportionate changes on the maps are also analyzed.
Abstract:
The Exact Cover problem takes a universe U of n elements, a family F of m subsets of U, and a positive integer k, and decides whether there exists a subfamily (set cover) F' of size at most k such that each element is covered by exactly one set. The Unique Cover problem takes the same input and decides whether there is a subfamily F' of F such that at least k of the elements F' covers are covered uniquely (by exactly one set). Both of these problems are known to be NP-complete. In the parameterized setting, when parameterized by k, Exact Cover is W[1]-hard. While Unique Cover is FPT under the same parameter, it is known not to admit a polynomial kernel under standard complexity-theoretic assumptions. In this paper, we investigate these two problems under the assumption that every set satisfies a given geometric property Pi. Specifically, we consider the universe to be a set of n points in a real space R^d, d being a positive integer. When d = 2, we consider the problem when Pi requires all sets to be unit squares or lines. When d > 2, we consider the problem where Pi requires all sets to be hyperplanes in R^d. These special versions of the problems are also known to be NP-complete. When parameterized by k, the Unique Cover problem has a polynomial-size kernel for all the above geometric versions. The Exact Cover problem turns out to be W[1]-hard for squares, but FPT for lines and hyperplanes. Further, we also consider the Unique Set Cover problem, which takes the same input and decides whether there is a set cover that covers at least k elements uniquely. To the best of our knowledge, this is a new problem, and we show that it is NP-complete (even for the case of lines). In fact, the problem turns out to be W[1]-hard in the abstract setting when parameterized by k. However, when we restrict ourselves to the lines and hyperplanes versions, we obtain FPT algorithms.
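A toy brute-force illustration of the Exact Cover and Unique Cover definitions may help; the instance below is hypothetical, and this is not a parameterized algorithm.

```python
# Sketch: brute-force checks of the problem definitions on a tiny instance.
from itertools import combinations

def exact_cover(universe, family, k):
    """Is there a subfamily of size at most k covering each element exactly once?"""
    for r in range(1, k + 1):
        for sub in combinations(family, r):
            if all(sum(u in s for s in sub) == 1 for u in universe):
                return True
    return False

def unique_cover(universe, family, k):
    """Is there a subfamily under which at least k elements are covered exactly once?"""
    for r in range(1, len(family) + 1):
        for sub in combinations(family, r):
            if sum(sum(u in s for s in sub) == 1 for u in universe) >= k:
                return True
    return False

U = {1, 2, 3, 4}
F = [{1, 2}, {3, 4}, {2, 3}, {4}]
print(exact_cover(U, F, 2))    # True: {1, 2} and {3, 4} cover every element once
print(unique_cover(U, F, 4))   # True: the same subfamily covers all 4 uniquely
```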
Abstract:
A ray-tracing-based path-length calculation is investigated for polarized light transport in a pixel space. Tomographic imaging using polarized light transport is promising for applications in optical projection tomography of small animals and of turbid media with low scattering. Polarized light transport through a medium can exhibit complex effects due to interactions such as optical rotation of linearly polarized light, birefringence, diattenuation, and interior refraction. Here we investigate the effects of refraction of polarized light in a non-scattering medium. This step is used to obtain an initial absorption estimate, which can be used as a prior in a Monte Carlo (MC) program that simulates the transport of polarized light through a scattering medium, to assist in faster convergence of the final estimate. The reflectances for p-polarized (parallel) and s-polarized (perpendicular) light are different, and hence there is a difference in the intensities that reach the detector. The algorithm computes the length of the ray in each pixel along the refracted path, and this is used to build the weight matrix. This weight matrix with corrected ray path lengths, together with the resultant intensity reaching the detector for each ray, is used in the algebraic reconstruction technique (ART). The proposed method is tested on numerical phantoms for various noise levels. The refraction errors due to regions of different refractive index are discussed, and the difference in intensities with polarization is considered. The improvement in reconstruction obtained with the applied correction is presented, achieved by tracking both the path and the intensity of each ray as it traverses the medium.
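The final reconstruction step can be sketched with a generic ART (Kaczmarz) update, assuming the weight matrix of refracted path lengths has already been computed; this illustrates ART in general rather than the paper's full pipeline.

```python
# Sketch: W[i, j] = path length of ray i in pixel j (assumed precomputed from
# the refracted ray paths); b = measured per-ray log-intensities.
import numpy as np

def art(W, b, iters=20, relax=0.5):
    x = np.zeros(W.shape[1])
    row_norms = (W ** 2).sum(axis=1)
    for _ in range(iters):
        for i in range(W.shape[0]):                 # one Kaczmarz sweep over rays
            if row_norms[i] > 0:
                x += relax * (b[i] - W[i] @ x) / row_norms[i] * W[i]
        np.clip(x, 0, None, out=x)                  # optional: non-negative absorption
    return x
```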