35 results for level set method
Abstract:
We present methods for fixed-lag smoothing using Sequential Importance Sampling (SIS) on a discrete non-linear, non-Gaussian state-space system with unknown parameters. Our particular application is in the field of digital communication systems. Each input data point is taken from a finite set of symbols. We represent the transmission medium as a fixed filter with a finite impulse response (FIR), hence a discrete state-space system is formed. Conventional Markov chain Monte Carlo (MCMC) techniques such as the Gibbs sampler are unsuitable for this task because they can only process data in batches. Since the data arrive sequentially, it is natural to process them in this way. In addition, many communication systems are interactive, so there is a maximum level of latency that can be tolerated before a symbol is decoded. We demonstrate this method by simulation and compare its performance to existing techniques.
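As a rough illustration of the sequential processing the abstract argues for, the sketch below runs a toy SIS particle filter over binary symbols passed through a known two-tap FIR channel. The channel taps, noise level, particle count, and the known-first-symbol assumption are all invented for illustration and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

SYMBOLS = np.array([-1.0, 1.0])  # finite symbol alphabet (illustrative)
H = np.array([1.0, 0.5])         # FIR channel taps, assumed known here
SIGMA = 0.3                      # observation noise standard deviation

def sis_step(particles, weights, y):
    """One sequential-importance-sampling step: extend each particle
    with a symbol drawn from the prior, then reweight by likelihood."""
    n = len(weights)
    new_syms = rng.choice(SYMBOLS, size=n)
    particles = np.column_stack([particles, new_syms])
    # predicted observation per particle: last two symbols through the FIR
    pred = particles[:, -1] * H[0] + particles[:, -2] * H[1]
    weights = weights * np.exp(-0.5 * ((y - pred) / SIGMA) ** 2)
    weights /= weights.sum()
    return particles, weights

# simulate a short burst of observations
true_syms = np.array([1.0, -1.0, 1.0, 1.0])
obs = np.convolve(true_syms, H)[: len(true_syms)] + SIGMA * rng.normal(size=4)

particles = np.tile(true_syms[0], (200, 1))  # first symbol assumed known
weights = np.full(200, 1 / 200)
for y in obs[1:]:
    particles, weights = sis_step(particles, weights, y)

# weighted vote over particles gives a smoothed symbol estimate
est = np.sign((particles * weights[:, None]).sum(axis=0))
```

A real receiver would additionally resample when the weights degenerate and emit each symbol after a fixed lag rather than at the end of the burst.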
Abstract:
We propose a computational method for the coupled simulation of a compressible flow interacting with a thin-shell structure undergoing large deformations. An Eulerian finite volume formulation is adopted for the fluid and a Lagrangian formulation based on subdivision finite elements is adopted for the shell response. The coupling between the fluid and the solid response is achieved via a novel approach based on level sets. The basic approach furnishes a general algorithm for coupling Lagrangian shell solvers with Cartesian grid based Eulerian fluid solvers. The efficiency and robustness of the proposed approach are demonstrated with an airbag deployment simulation. It bears emphasis that in the proposed approach the solid and the fluid components as well as their coupled interaction are considered in full detail and modeled with an equivalent level of fidelity without any oversimplifying assumptions or bias towards a particular physical aspect of the problem.
Abstract:
We present a novel method to perform an accurate registration of 3-D nonrigid bodies by using phase-shift properties of the dual-tree complex wavelet transform (DT-CWT). Since the phases of DT-CWT coefficients change approximately linearly with the amount of feature displacement in the spatial domain, motion can be estimated using the phase information from these coefficients. The motion estimation is performed iteratively: first by using coarser level complex coefficients to determine large motion components and then by employing finer level coefficients to refine the motion field. We use a parametric affine model to describe the motion, where the affine parameters are found locally by substituting into an optical flow model and by solving the resulting overdetermined set of equations. From the estimated affine parameters, the motion field between the sensed and the reference data sets can be generated, and the sensed data set can then be shifted and interpolated spatially to align with the reference data set. © 2011 IEEE.
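The final fitting step, solving an overdetermined linear system for local affine parameters, can be illustrated with ordinary least squares. The sketch below recovers 2-D affine parameters from point correspondences; note that the paper constrains the parameters with DT-CWT phase information rather than explicit correspondences, so this is only the generic solver, not the method itself.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares fit of dst ~ A @ src + t; returns the 6 parameters
    [a11, a12, tx, a21, a22, ty] of the overdetermined system."""
    n = src.shape[0]
    M = np.zeros((2 * n, 6))
    M[0::2, 0:2] = src   # x-equation rows
    M[0::2, 2] = 1.0
    M[1::2, 3:5] = src   # y-equation rows
    M[1::2, 5] = 1.0
    b = dst.reshape(-1)  # interleaved [x1, y1, x2, y2, ...]
    p, *_ = np.linalg.lstsq(M, b, rcond=None)
    return p

# synthetic example: four points under a known affine warp
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
A = np.array([[1.1, 0.1], [-0.05, 0.95]])
t = np.array([0.3, -0.2])
dst = src @ A.T + t
p = fit_affine(src, dst)  # recovers A and t exactly for noise-free data
```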
Abstract:
Level II reliability theory provides an approximate method whereby the reliability of a complex engineering structure which has multiple strength and loading variables may be estimated. This technique has been applied previously to both civil and offshore structures with considerable success. The aim of the present work is to assess the applicability of the method for aircraft structures, and to this end landing gear design is considered in detail. It is found that the technique yields useful information regarding the structural reliability and, further, enables the critical design parameters to be identified.
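Level II methods estimate reliability from the first two moments of the strength and load variables. A minimal sketch for a single normal strength R and a single normal load S follows; the numerical values are invented for illustration and are unrelated to the landing gear study.

```python
import math

def reliability_index(mu_R, sig_R, mu_S, sig_S):
    """Cornell reliability index for the safety margin M = R - S,
    with R and S independent and normally distributed."""
    return (mu_R - mu_S) / math.sqrt(sig_R**2 + sig_S**2)

def failure_probability(beta):
    """P(M < 0) = Phi(-beta) for a normal safety margin."""
    return 0.5 * math.erfc(beta / math.sqrt(2.0))

# illustrative moments (not from the paper)
beta = reliability_index(mu_R=60.0, sig_R=5.0, mu_S=40.0, sig_S=5.0)
pf = failure_probability(beta)   # beta ~ 2.83, pf ~ 2.3e-3
```

With multiple correlated or non-normal variables, Level II practice replaces this closed form with an iterative first-order reliability (FORM) search for the design point.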
Abstract:
This paper provides an overview of the rationale behind the significant interest in polymer-based on-board optical links together with a brief review of recently reported work addressing certain challenges in this field. Polymer-based optical links have garnered considerable research attention due to their important functional attributes and compelling cost-benefit advantages in on-board optoelectronic systems as they can be cost-effectively integrated on conventional printed circuit boards. To date, significant work on the polymer materials, their fabrication process and their integration on standard board substrates has enabled the demonstration of numerous high-speed on-board optical links. However, to be deployed in real-world systems, these optoelectronic printed circuit boards (OE PCBs) must also be cost-effective. Here, recent advances in the integration process focusing on simple direct end-fire coupling schemes and the use of low-cost FR4 PCB substrates are presented. The performance of two proof-of-principle 10 Gb/s systems based on this integration method is summarised, while work in realising more complex yet compact planar optical components is outlined. © 2011 IEEE.
Abstract:
In this paper we present a wafer-level three-dimensional simulation model of the Gate Commutated Thyristor (GCT) under inductive switching conditions. The simulations are validated by extensive experimental measurements. To the authors' knowledge such a complex simulation domain has not been used so far. This method allows the in-depth study of large area devices such as GCTs, Gate Turn Off Thyristors (GTOs) and Phase Control Thyristors (PCTs). The model captures complex phenomena such as current filamentation, including subsequent failure, allowing us to predict the Maximum Controllable turn-off Current (MCC) and the Safe Operating Area (SOA), which was previously impossible using 2D distributed models. © 2012 IEEE.
Abstract:
A novel technique is presented to facilitate the implementation of hierarchical b-splines and their interfacing with conventional finite element implementations. The discrete interpretation of the two-scale relation, as common in subdivision schemes, is used to establish algebraic relations between the basis functions and their coefficients on different levels of the hierarchical b-spline basis. The subdivision projection technique introduced allows us first to compute all element matrices and vectors using a fixed number of same-level basis functions. Their subsequent multiplication with subdivision matrices projects them, during the assembly stage, to the correct levels of the hierarchical b-spline basis. The proposed technique is applied to convergence studies of linear and geometrically nonlinear problems in one, two and three space dimensions. © 2012 Elsevier B.V.
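The two-scale relation the technique builds on can be checked numerically. For uniform cubic b-splines, a coarse basis function is exactly a combination of five half-scale copies with weights (1, 4, 6, 4, 1)/8; the sketch below verifies this identity on a sample grid (the grid and tolerance are incidental choices).

```python
import numpy as np

def cubic_bspline(x):
    """Centered uniform cubic b-spline with support [-2, 2]."""
    ax = np.abs(x)
    return np.where(ax <= 1, (4 - 6 * ax**2 + 3 * ax**3) / 6,
           np.where(ax <= 2, (2 - ax) ** 3 / 6, 0.0))

x = np.linspace(-2.5, 2.5, 401)
coarse = cubic_bspline(x)

# two-scale relation: B(x) = sum_k w_k * B(2x - k), k = -2..2
weights = np.array([1, 4, 6, 4, 1]) / 8.0
fine = sum(w * cubic_bspline(2 * x - k)
           for k, w in zip(range(-2, 3), weights))
# `coarse` and `fine` agree to floating-point precision
```

The subdivision matrices in the paper are exactly these refinement weights assembled per element, which is what lets same-level element matrices be projected onto coarser hierarchical levels.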
Abstract:
When searching for characteristic subpatterns in potentially noisy graph data, it appears self-evident that having multiple observations would be better than having just one. However, it turns out that the inconsistencies introduced when different graph instances have different edge sets pose a serious challenge. In this work we address this challenge for the problem of finding maximum weighted cliques. We introduce the concept of the most persistent soft-clique. This is a subset of vertices that 1) is almost fully or at least densely connected, 2) occurs in all or almost all graph instances, and 3) has the maximum weight. We present a measure of clique-ness that essentially counts the number of edges missing to make a subset of vertices into a clique. With this measure, we show that the problem of finding the most persistent soft-clique can be cast either as: a) a max-min two-person game optimization problem, or b) a min-min soft-margin optimization problem. Both formulations lead to the same solution when using a partial Lagrangian method to solve the optimization problems. Through experiments on synthetic data and on real social network data we show that the proposed method reliably finds soft cliques in graph data, even when the data are distorted by random noise or unreliable observations. Copyright 2012 by the author(s)/owner(s).
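A literal reading of the clique-ness measure, counting the edges a vertex subset is missing relative to a full clique, can be sketched as follows. The graph representation and function name are mine, chosen for illustration; the paper embeds this count in a continuous optimization rather than evaluating it combinatorially.

```python
import itertools

def missing_edges(edges, subset):
    """Number of absent edges among `subset`, given an undirected
    graph as a list of vertex pairs. Zero means `subset` is a clique."""
    edge_set = {frozenset(e) for e in edges}
    return sum(1 for pair in itertools.combinations(subset, 2)
               if frozenset(pair) not in edge_set)

# triangle 0-1-2 plus a pendant vertex 3 attached to vertex 0
edges = [(0, 1), (1, 2), (0, 2), (0, 3)]
# {0,1,2} is a clique (0 missing edges); adding vertex 3 costs
# the two absent edges (1,3) and (2,3)
```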
A Videogrammetric As-Built Data Collection Method for Digital Fabrication of Sheet Metal Roof Panels
Abstract:
A roofing contractor typically needs to acquire as-built dimensions of a roof structure several times over the course of its build to be able to digitally fabricate sheet metal roof panels. Obtaining these measurements using the existing roof surveying methods could be costly in terms of equipment, labor, and/or worker exposure to safety hazards. This paper presents a video-based surveying technology as an alternative method which is simple to use, automated, less expensive, and safe. When using this method, the contractor collects video streams with a calibrated stereo camera set. Unique visual characteristics of scenes from a roof structure are then used in the processing step to automatically extract as-built dimensions of roof planes. These dimensions are finally represented in an XML format to be loaded into sheet metal folding and cutting machines. The proposed method has been tested for a roofing project and the preliminary results indicate its capabilities.
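The XML hand-off might look like the sketch below. The element and attribute names (`RoofPanels`, `Plane`, `width_mm`, `height_mm`) are hypothetical, since the abstract does not specify the machine format.

```python
import xml.etree.ElementTree as ET

def planes_to_xml(planes):
    """Serialise (width, height) roof-plane dimensions, in millimetres,
    into a flat XML document. Schema is invented for illustration."""
    root = ET.Element("RoofPanels")
    for i, (w, h) in enumerate(planes):
        ET.SubElement(root, "Plane", id=str(i),
                      width_mm=f"{w:.1f}", height_mm=f"{h:.1f}")
    return ET.tostring(root, encoding="unicode")

xml_out = planes_to_xml([(2400.0, 1200.0), (3600.0, 1200.0)])
```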
Abstract:
Simulation of materials at the atomistic level is an important tool in studying microscopic structure and processes. The atomic interactions necessary for the simulation are correctly described by Quantum Mechanics. However, the computational resources required to solve the quantum mechanical equations limit the use of Quantum Mechanics to at most a few hundred atoms and only to a small fraction of the available configurational space. This thesis presents the results of my research on the development of a new interatomic potential generation scheme, which we refer to as Gaussian Approximation Potentials. In our framework, the quantum mechanical potential energy surface is interpolated between a set of predetermined values at different points in atomic configurational space by a non-linear, non-parametric regression method, the Gaussian Process. To perform the fitting, we represent the atomic environments by the bispectrum, which is invariant to permutations of the atoms in the neighbourhood and to global rotations. The result is a general scheme that allows one to generate interatomic potentials based on arbitrary quantum mechanical data. We built a series of Gaussian Approximation Potentials using data obtained from Density Functional Theory and tested the capabilities of the method. We showed that our models reproduce the quantum mechanical potential energy surface remarkably well for the group IV semiconductors, iron and gallium nitride. Our potentials, while maintaining quantum mechanical accuracy, are several orders of magnitude faster than Quantum Mechanical methods.
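The regression engine behind the scheme is standard Gaussian process interpolation. The sketch below fits a 1-D function with a squared-exponential kernel to show the mechanics; the bispectrum descriptors and DFT training data the thesis actually uses are well beyond a toy example, and the test function and hyperparameters here are arbitrary.

```python
import numpy as np

def sq_exp_kernel(a, b, length=1.0):
    """Squared-exponential covariance between two 1-D point sets."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gp_predict(x_train, y_train, x_test, noise=1e-6):
    """Posterior mean of a zero-mean GP conditioned on training data."""
    K = sq_exp_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    k_star = sq_exp_kernel(x_test, x_train)
    alpha = np.linalg.solve(K, y_train)
    return k_star @ alpha

# interpolate sin(x) from eight "predetermined values"
x = np.linspace(0.0, 2 * np.pi, 8)
y = np.sin(x)
y_pred = gp_predict(x, y, x)  # near-exact at the training points
```

Evaluating the posterior mean costs only a kernel evaluation per training point, which is the source of the speed-up over repeated quantum mechanical calculations.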
Abstract:
This paper presents an efficient algorithm for robust network reconstruction of Linear Time-Invariant (LTI) systems in the presence of noise, estimation errors and unmodelled nonlinearities. The method here builds on previous work [1] on robust reconstruction to provide a practical implementation with polynomial computational complexity. Following the same experimental protocol, the algorithm obtains a set of structurally-related candidate solutions spanning every level of sparsity. We prove the existence of a magnitude bound on the noise which, if satisfied, guarantees that one of these structures is the correct solution. A problem-specific model-selection procedure then selects a single solution from this set and provides a measure of confidence in that solution. Extensive simulations quantify the expected performance for different levels of noise and show that significantly more noise can be tolerated in comparison to the original method. © 2012 IEEE.
Abstract:
Looking for a target in a visual scene becomes more difficult as the number of stimuli increases. In a signal detection theory view, this is due to the cumulative effect of noise in the encoding of the distractors and, potentially on top of that, to an increase of the noise (i.e., a decrease of precision) per stimulus with set size, reflecting divided attention. It has long been argued that human visual search behavior can be accounted for by the first factor alone. While such an account seems to be adequate for search tasks in which all distractors have the same, known feature value (i.e., are maximally predictable), we recently found a clear effect of set size on encoding precision when distractors are drawn from a uniform distribution (i.e., when they are maximally unpredictable). Here we interpolate between these two extreme cases to examine which of the two conclusions holds more generally as distractor statistics are varied. In one experiment, we vary the level of distractor heterogeneity; in another we dissociate distractor homogeneity from predictability. In all conditions in both experiments, we found a strong decrease of precision with increasing set size, suggesting that precision being independent of set size is the exception rather than the rule.
Abstract:
An accurate description of sound propagation in a duct is important to obtain the sound power radiating from a source in both near and far fields. A technique has been developed and applied to decompose higher-order modes of sound emitted into a duct. Traditional experiments and theory based on two-sensor methods are limited to the plane-wave contribution to the sound field at low frequency. Due to the increase in independent measurements required, a computational method has been developed to simulate sensitivities of real measurements (e.g., noise) and optimize the set-up. An experimental rig has been constructed to decompose the first two modes using six independent measurements from surface, flush-mounted microphones. Experiments were initially performed using a loudspeaker as the source for validation. Subsequently, the sound emitted by a mixed-flow fan has been investigated and compared to measurements made in accordance with the internationally standardized in-duct fan measurement method. This method utilizes large anechoic terminations and a procedure involving averaging over measurements in space and time to account for the contribution from higher-order modes. The new method does not require either of these added complications and gives detail about the underlying modal content of the emitted sound.
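The decomposition step can be illustrated in miniature: with the mode shapes known at each microphone position, the complex modal amplitudes follow from a least-squares solve of the measurement equations. The mode shapes, microphone positions, and amplitudes below are illustrative stand-ins, not the rig's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(1)

n_mics, n_modes = 6, 2
theta = np.linspace(0.0, 2 * np.pi, n_mics, endpoint=False)
# circumferential mode shapes exp(i*m*theta) for orders m = 0, 1
shapes = np.exp(1j * np.outer(theta, np.arange(n_modes)))

# synthesise wall pressures from known amplitudes plus sensor noise
true_amps = np.array([1.0 + 0.5j, 0.3 - 0.2j])
pressures = shapes @ true_amps + 0.01 * rng.normal(size=n_mics)

# overdetermined solve: six measurements, two unknown amplitudes
amps, *_ = np.linalg.lstsq(shapes, pressures, rcond=None)
```

With six flush-mounted microphones and two modes the system is comfortably overdetermined, which is what lets the simulated sensitivity study trade off microphone placement against noise robustness.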