933 results for Well-Posed Problem


Relevance: 80.00%

Abstract:

The steady MHD mixed convection flow of a viscoelastic fluid in the vicinity of a two-dimensional stagnation point with a magnetic field has been investigated under the assumption that the fluid obeys the upper-convected Maxwell (UCM) model. Boundary layer theory is used to simplify the equations of motion, induced magnetic field and energy, which results in three coupled non-linear ordinary differential equations that are well-posed. These equations have been solved using a finite difference method. The results indicate a reduction in the surface velocity gradient, surface heat transfer and displacement thickness with an increase in the elasticity number. These trends are opposite to those reported in the literature for a second-grade fluid. The surface velocity gradient and heat transfer are enhanced by the magnetic and buoyancy parameters. The surface heat transfer increases with the Prandtl number, but the surface velocity gradient decreases.
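As a rough illustration of the kind of boundary-value problem such a similarity reduction produces, the sketch below solves the classical Hiemenz stagnation-point equations with a collocation solver. The viscoelastic, magnetic and buoyancy terms that are the paper's actual subject are omitted, and the parameter values are illustrative, not the paper's.

```python
# Minimal sketch: coupled nonlinear ODE boundary-value problem from a
# boundary-layer similarity reduction (classical Hiemenz stagnation flow
# plus an energy equation). Illustrative only; not the paper's UCM system.
import numpy as np
from scipy.integrate import solve_bvp

def rhs(eta, y):
    # y[0] = f, y[1] = f', y[2] = f''; y[3] = theta, y[4] = theta'
    f, fp, fpp, th, thp = y
    Pr = 0.7                         # illustrative Prandtl number
    return np.vstack([
        fp,
        fpp,
        fp**2 - f * fpp - 1.0,       # momentum: f''' + f f'' + 1 - f'^2 = 0
        thp,
        -Pr * f * thp,               # energy: theta'' + Pr f theta' = 0
    ])

def bc(y0, yinf):
    # wall: f(0) = 0, f'(0) = 0, theta(0) = 1; free stream: f' -> 1, theta -> 0
    return np.array([y0[0], y0[1], y0[3] - 1.0, yinf[1] - 1.0, yinf[3]])

eta = np.linspace(0, 10, 200)
y_init = np.zeros((5, eta.size))
y_init[1] = 1 - np.exp(-eta)         # crude initial guess
sol = solve_bvp(rhs, bc, eta, y_init)
print("wall velocity gradient f''(0) =", sol.sol(0)[2])
print("wall heat transfer -theta'(0) =", -sol.sol(0)[4])
```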

Relevance: 80.00%

Abstract:

The smooth DMS-FEM, recently proposed by the authors, is extended and applied to the geometrically nonlinear and ill-posed problem of a deformed and wrinkled/slack membrane. A key feature of this work is that the three-dimensional nonlinear elasticity equations corresponding to linear momentum balance, without any dimensional reduction and the associated approximations, directly serve as the membrane governing equations. Domain discretization is performed with triangular prism elements, and the higher-order (C1 or more) interelement continuity of the shape functions ensures that errors arising from possible jumps in the first derivatives of conventional C0 shape functions do not propagate when the ill-conditioned tangent stiffness matrices are iteratively inverted. The present scheme employs no regularization and exhibits little sensitivity to h-refinement. Although the numerically computed deformed membrane profiles do show some sensitivity to the initial imperfections (nonplanarity) in the membrane profile needed to initiate transverse deformations, the overall patterns of the wrinkles and the deformed shapes appear to be less so. Finally, the deformed profiles, computed through the DMS-FEM-based weak formulation, are compared with those obtained through an experiment on an ultrathin Kapton membrane, wherein wrinkles form because of the applied boundary displacement conditions. Comparisons with a reported experiment on a rectangular membrane are also provided. These exercises lend credence to the feasibility of the DMS-FEM-based numerical route to computing post-wrinkled membrane shapes. Copyright (c) 2012 John Wiley & Sons, Ltd.

Relevance: 80.00%

Abstract:

We have developed an efficient fully three-dimensional (3D) reconstruction algorithm for diffuse optical tomography (DOT). The 3D DOT, a severely ill-posed problem, is tackled through a pseudodynamic (PD) approach wherein an ordinary differential equation representing the evolution of the solution in pseudotime is integrated, bypassing an explicit inversion of the associated, ill-conditioned system matrix. One of the most computationally expensive parts of the iterative DOT algorithm, the re-evaluation of the Jacobian in each iteration, is avoided by using the adjoint-Broyden update formula to provide low-rank updates to the Jacobian. In addition, wherever feasible, we have made the algorithm more efficient by integrating along the quadratic path provided by the perturbation equation containing the Hessian. These algorithms are then validated through reconstructions using simulated and experimental data, with the PD results verified against those from the popular Gauss-Newton scheme. The major findings of this work are as follows: (i) the PD reconstructions are comparatively artifact-free, providing superior absorption coefficient maps in terms of quantitative accuracy and contrast recovery; (ii) the scaling of computation time with the dimension of the measurement set is much less steep with the Jacobian update formula in place than without it; and (iii) an increase in the data dimension, even though it renders the reconstruction problem less ill-conditioned and thus provides relatively artifact-free reconstructions, does not necessarily provide better contrast recovery. For the latter, one should also take care to distribute the measurement points uniformly, avoiding regions close to the source, so that the relative strength of the derivatives for measurements away from the source does not become insignificant. (c) 2012 Optical Society of America
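As a pointer to how such low-rank Jacobian updates work, here is a minimal sketch of the classical (good) Broyden rank-one update. The adjoint-Broyden variant actually used in the paper has a different, adjoint-based correction term, and the test function below is illustrative only.

```python
# Minimal sketch: rank-one Broyden update that refreshes a Jacobian estimate
# between iterations instead of recomputing it from scratch.
import numpy as np

def broyden_update(J, dx, df):
    """Good Broyden update: J_new = J + ((df - J dx) dx^T) / (dx^T dx)."""
    r = df - J @ dx
    return J + np.outer(r, dx) / (dx @ dx)

# Tiny usage example on f(x) = (x0^2 + x1, x0 * x1):
f = lambda x: np.array([x[0]**2 + x[1], x[0] * x[1]])
x0, x1 = np.array([1.0, 2.0]), np.array([1.1, 2.05])
J0 = np.array([[2.0, 1.0], [2.0, 1.0]])   # exact Jacobian at x0
J1 = broyden_update(J0, x1 - x0, f(x1) - f(x0))
print(J1)  # approximates the Jacobian near x1 without re-differentiating
```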

Relevance: 80.00%

Abstract:

Real-time image reconstruction is essential for improving the temporal resolution of fluorescence microscopy. A number of unavoidable processes, such as optical aberration, noise and scattering, degrade image quality, thereby making image reconstruction an ill-posed problem. Maximum likelihood is an attractive technique for data reconstruction, especially when the problem is ill-posed, but the iterative nature of the maximum-likelihood technique precludes real-time imaging. Here we propose and demonstrate a compute unified device architecture (CUDA) based fast computing engine for real-time 3D fluorescence imaging. A maximum performance boost of 210x is reported. The easy availability of powerful computing engines is a boon and may accelerate the realization of real-time 3D fluorescence imaging. Copyright 2012 Author(s). This article is distributed under a Creative Commons Attribution 3.0 Unported License. [http://dx.doi.org/10.1063/1.4754604]
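For reference, the sketch below shows one classic iterative maximum-likelihood deconvolution scheme of the kind the abstract refers to: Richardson-Lucy, the standard ML-EM update for Poisson noise in fluorescence imaging. It is a plain NumPy/CPU illustration and does not reproduce the authors' CUDA kernels.

```python
# Minimal sketch: Richardson-Lucy maximum-likelihood deconvolution.
# Each iteration multiplies the estimate by a back-projected ratio of the
# measured image to the current blurred estimate.
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=25, eps=1e-12):
    estimate = np.full_like(image, image.mean())   # flat starting guess
    psf_mirror = psf[::-1, ::-1]                   # adjoint of convolution
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / (blurred + eps)
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
```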

Relevance: 80.00%

Abstract:

The problem of finding a satisfying assignment that minimizes the number of variables set to 1 is NP-complete even for a satisfiable 2-SAT formula. We call this problem MIN ONES 2-SAT. It generalizes the well-studied problem of finding the smallest vertex cover of a graph, which can be modeled using a 2-SAT formula with no negative literals. The natural parameterized version of the problem asks for a satisfying assignment of weight at most k. In this paper, we present a polynomial-time reduction from MIN ONES 2-SAT to VERTEX COVER that does not increase the parameter and ensures that the number of vertices in the reduced instance equals the number of variables of the input formula. Consequently, we conclude that this problem also has a simple 2-approximation algorithm and a (2k - c log k)-variable kernel, subsuming (or, in the case of kernels, improving) the results known earlier. Further, the problem admits algorithms for the parameterized and optimization versions whose runtimes will always match the runtimes of the best-known algorithms for the corresponding versions of VERTEX COVER. Finally, we show that the optimum value of the LP relaxation of MIN ONES 2-SAT and that of the corresponding VERTEX COVER instance are the same. This implies that the recent results for VERTEX COVER parameterized above the optimum value of its LP relaxation carry over to MIN ONES 2-SAT parameterized above the optimum of its LP relaxation. (C) 2013 Elsevier B.V. All rights reserved.
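To make the approximation claim concrete, here is the textbook maximal-matching 2-approximation for VERTEX COVER. Composed with the paper's parameter-preserving reduction (not reproduced here), such an algorithm yields a 2-approximation for MIN ONES 2-SAT.

```python
# Minimal sketch: 2-approximation for VERTEX COVER via a maximal matching.
# Both endpoints of every matched edge are taken; any optimal cover must
# contain at least one endpoint per matched edge, giving the factor of 2.
def vertex_cover_2approx(edges):
    cover, matched = set(), set()
    for u, v in edges:
        if u not in matched and v not in matched:
            matched.update((u, v))
            cover.update((u, v))
    return cover

print(vertex_cover_2approx([(1, 2), (2, 3), (3, 4)]))  # e.g. {1, 2, 3, 4}
```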

Relevance: 80.00%

Abstract:

Identifying translations from comparable corpora is a well-known problem with several applications, e.g. dictionary creation in resource-scarce languages. Scarcity of high-quality corpora, especially in Indian languages, makes this problem hard; e.g. state-of-the-art techniques achieve a mean reciprocal rank (MRR) of 0.66 for English-Italian, but a mere 0.187 for Telugu-Kannada. Comparable corpora exist in many Indian languages paired with other "auxiliary" languages. We observe that translations have many topically related words in common in the auxiliary language. To model this, we define the notion of a translingual theme, a set of topically related words from auxiliary-language corpora, and present a probabilistic framework for translation induction. Extensive experiments on 35 comparable corpora using English and French as auxiliary languages show that this approach can yield dramatic improvements in performance (e.g. MRR improves by 124% to 0.419 for Telugu-Kannada). A user study on WikiTSu, a system for cross-lingual Wikipedia title suggestion that uses our approach, shows a 20% improvement in the quality of the titles suggested.
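For readers unfamiliar with the metric quoted above, this short sketch computes mean reciprocal rank; the ranks are illustrative, not from the paper's experiments.

```python
# Minimal sketch: mean reciprocal rank (MRR) over a set of queries.
def mean_reciprocal_rank(ranks):
    """ranks[i] = 1-based rank of the correct translation for query i,
    or None when the correct item was not retrieved at all."""
    return sum(1.0 / r for r in ranks if r is not None) / len(ranks)

print(mean_reciprocal_rank([1, 3, None, 2]))  # (1 + 1/3 + 0 + 1/2) / 4 ≈ 0.458
```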

Relevance: 80.00%

Abstract:

The calculation of the First Passage Time (and, moreover, of its probability density in time) has so far been generally viewed as an ill-posed problem in the domain of quantum mechanics. The reasons can be summarily seen in the fact that quantum probabilities in general do not satisfy the Kolmogorov sum rule: the probabilities for entering and not entering a given region of space-time along Feynman paths do not in general add up to unity, largely owing to the interference of alternative paths. In the present work, it is pointed out that a special case exists (within the quantum framework) in which, by design, there is one and only one available path (i.e., doorway) to mediate the (first) passage, with no alternative path to interfere. Further, it is identified that a popular family of quantum systems, namely the 1D tight-binding Hamiltonian systems, falls under this special category. For these model quantum systems, the first passage time distributions are obtained analytically by suitably applying a method originally devised for classical (stochastic) mechanics (by Schroedinger in 1915). This result is interesting especially given the fact that tight-binding models are extensively used in describing everyday phenomena in condensed matter physics.
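As a classical illustration of the renewal-equation method attributed to Schroedinger above, the sketch below extracts a first-passage-time distribution for a discrete-time random walk on a chain by deconvolving the renewal relation. The paper applies the analogous idea to quantum tight-binding amplitudes, which is not attempted here; the chain size, sites and horizon are illustrative.

```python
# Minimal sketch: first-passage distribution from the renewal relation
#   P(t) = sum_{t' <= t} F(t') Q(t - t'),
# where P(t) is the occupation probability of the target site starting from
# the source, Q(t) the same starting from the target, and F(t) the
# first-passage distribution being solved for.
import numpy as np

N, src, tgt, T = 21, 5, 15, 400
W = np.zeros((N, N))
for i in range(N - 1):                   # symmetric nearest-neighbour hopping
    W[i, i + 1] = W[i + 1, i] = 0.5
W[0, 0] = W[-1, -1] = 0.5                # reflecting ends conserve probability

def occupation(start):
    p = np.zeros(N)
    p[start] = 1.0
    out = np.empty(T)
    for t in range(T):
        out[t] = p[tgt]
        p = W @ p
    return out

P, Q = occupation(src), occupation(tgt)  # note Q[0] = 1
F = np.zeros(T)
for t in range(T):
    F[t] = P[t] - F[:t] @ Q[t:0:-1]      # deconvolve the renewal relation
print("passage probability up to T:", F.sum())
```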

Relevance: 80.00%

Abstract:

In this work, we address the issue of modeling squeeze film damping in nontrivial geometries that are not amenable to analytical solutions. The design and analysis of microelectromechanical systems (MEMS) resonators, especially those that use plate-like two-dimensional structures, require the structural dynamic response over the entire range of frequencies of interest. This response calculation typically involves the analysis of squeeze film effects and acoustic radiation losses. The acoustic analysis of vibrating plates is a very well understood problem that is routinely carried out using equivalent electrical circuits that employ lumped parameters (LP) for the acoustic impedance. Here, we present a method to use the same circuit with the same elements to account for the squeeze film effects as well, by establishing an equivalence between the parameters of the two domains through a rescaled equivalent relationship between the acoustic impedance and the squeeze film impedance. Our analysis is based on a simple observation: the squeeze film impedance, rescaled by a factor of jω, where ω is the frequency of oscillation, qualitatively mimics the acoustic impedance over a large frequency range. We present a method to curve-fit the numerically simulated stiffness and damping coefficients obtained using finite element analysis (FEA). A significant advantage of the proposed method is that it is applicable to any trivial or nontrivial geometry. It requires only a limited number of finite element method (FEM) runs within the frequency range of interest, reducing the computational cost, yet it models the behavior accurately over the entire range. We demonstrate the method using one trivial and one nontrivial geometry.
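In the spirit of the curve-fitting step described above, here is a minimal sketch that fits numerically simulated damping coefficients to a lumped-parameter frequency response. The single-pole model, the parameter names and the synthetic "FEA" data are all illustrative assumptions, not the paper's.

```python
# Minimal sketch: curve-fitting simulated damping coefficients to a
# lumped-parameter model over the frequency range of interest.
import numpy as np
from scipy.optimize import curve_fit

def damping_model(omega, c0, omega_c):
    """Illustrative low-pass-like roll-off of damping with frequency."""
    return c0 / (1.0 + (omega / omega_c) ** 2)

omega = np.logspace(3, 7, 20)                 # rad/s, sample of FEA runs
c_fea = damping_model(omega, 2e-6, 1e5)       # stand-in for FEA output
c_fea *= 1 + 0.02 * np.random.default_rng(0).standard_normal(omega.size)

(c0, omega_c), _ = curve_fit(damping_model, omega, c_fea, p0=[1e-6, 1e4])
print(f"fitted c0 = {c0:.3e}, cutoff = {omega_c:.3e} rad/s")
```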

Relevance: 80.00%

Abstract:

Jet noise reduction is an important goal within both commercial and military aviation. Although large-scale numerical simulations are now able to simultaneously compute turbulent jets and their radiated sound, low-cost, physically motivated models are needed to guide noise-reduction efforts. A particularly promising modeling approach centers on certain large-scale coherent structures, called wavepackets, that are observed in jets and their radiated sound. The typical approach to modeling wavepackets is to approximate them as linear modal solutions of the Euler or Navier-Stokes equations linearized about the long-time mean of the turbulent flow field. The near-field wavepackets obtained from these models show compelling agreement with those educed from experimental and simulation data for both subsonic and supersonic jets, but the acoustic radiation is severely under-predicted in the subsonic case. This thesis contributes to two aspects of these models. First, two new solution methods are developed that can be used to efficiently compute wavepackets and their acoustic radiation, reducing the computational cost of the model by more than an order of magnitude. The new techniques are spatial integration methods and constitute a well-posed, convergent alternative to the frequently used parabolized stability equations. Using concepts related to well-posed boundary conditions, the methods are formulated for general hyperbolic equations and thus have potential applications in many fields of physics and engineering. Second, the nonlinear and stochastic forcing of wavepackets is investigated with the goal of identifying and characterizing the missing dynamics responsible for the under-prediction of acoustic radiation by linear wavepacket models for subsonic jets. Specifically, we use ensembles of large-eddy-simulation flow and force data along with two data decomposition techniques to educe the actual nonlinear forcing experienced by wavepackets in a Mach 0.9 turbulent jet. Modes with high energy are extracted using proper orthogonal decomposition, while high-gain modes are identified using a novel technique called empirical resolvent-mode decomposition. In contrast to the flow and acoustic fields, the forcing field is characterized by a lack of energetic coherent structures. Furthermore, the structures that do exist are largely uncorrelated with the acoustic field. Instead, the forces that most efficiently excite an acoustic response appear to take the form of random turbulent fluctuations, implying that direct feedback from nonlinear interactions amongst wavepackets is not an essential noise source mechanism. This suggests that the essential ingredients of sound generation in high Reynolds number jets are contained within the linearized Navier-Stokes operator rather than in the nonlinear forcing terms, a conclusion that has important implications for jet noise modeling.
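As a pointer to the first of the two decomposition techniques mentioned above, the sketch below computes a proper orthogonal decomposition of a snapshot ensemble via the SVD. The random snapshot matrix stands in for LES data, and empirical resolvent-mode decomposition is not reproduced here.

```python
# Minimal sketch: proper orthogonal decomposition (POD) via the SVD of a
# mean-subtracted snapshot matrix (grid points x time snapshots).
import numpy as np

rng = np.random.default_rng(0)
snapshots = rng.standard_normal((2048, 64))   # stand-in for LES snapshots
snapshots -= snapshots.mean(axis=1, keepdims=True)

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = s**2 / np.sum(s**2)                  # modal energy fractions
print("leading POD mode captures {:.1%} of the energy".format(energy[0]))
# U[:, k] is the k-th POD mode; energy[k] its share of the fluctuation energy
```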

Relevance: 80.00%

Abstract:

This paper investigates the errors of the solutions, as well as the shadowing property, of a class of nonlinear differential equations that possess unique solutions on a certain interval for any admissible initial condition. The class of differential equations is assumed to be approximated by well-posed truncated Taylor series expansions up to a certain order, obtained about certain, in general nonperiodic, sampling points t_i ∈ [t_0, t_J], for i = 0, 1, ..., J, of the solution. Two examples are provided.
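To fix ideas, here is a minimal sketch of one truncated Taylor-series step of the kind analyzed above, for a scalar ODE x' = f(x) with the derivative of f supplied analytically; the test equation, the uniform step size and the second-order truncation are illustrative choices.

```python
# Minimal sketch: second-order truncated Taylor-series integration step.
# For x' = f(x): x'' = f'(x) f(x), so
#   x(t+h) ≈ x + h f(x) + (h^2 / 2) f'(x) f(x).
import numpy as np

def taylor2_step(x, h, f, fprime):
    return x + h * f(x) + 0.5 * h**2 * fprime(x) * f(x)

f, fprime = lambda x: -x, lambda x: -1.0   # test equation x' = -x
x, t, h = 1.0, 0.0, 0.1
for _ in range(10):            # nonuniform sampling would vary h per step
    x, t = taylor2_step(x, h, f, fprime), t + h
print(x, np.exp(-1.0))         # ≈ e^{-1}, the exact solution at t = 1
```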

Relevance: 80.00%

Abstract:

There is growing evidence that focal thinning of cortical bone in the proximal femur may predispose a hip to fracture. Detecting such defects in clinical CT is challenging, since cortices may be significantly thinner than the imaging system's point spread function. We recently proposed a model-fitting technique to measure sub-millimetre cortices, an ill-posed problem which was regularized by assuming a specific, fixed value for the cortical density. In this paper, we develop the work further by proposing and evaluating a more rigorous method for estimating the constant cortical density, and extend the paradigm to encompass the mapping of cortical mass (mineral mg/cm^2) in addition to thickness. Density, thickness and mass estimates are evaluated on sixteen cadaveric femurs, with high-resolution measurements from a micro-CT scanner providing the gold standard. The results demonstrate robust, accurate measurement of peak cortical density and cortical mass. Cortical thickness errors are confined to regions of thin cortex and are bounded by the extent to which the local density deviates from the peak, averaging 20% for 0.5 mm cortex.
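The sketch below conveys the flavour of such model-fitting: a cortical slab of assumed fixed density, blurred by a Gaussian point spread function, is fitted to a measured line profile so that thickness can be recovered even below the PSF width. The erf-based profile, parameter names and values are illustrative assumptions, not the paper's exact model.

```python
# Minimal sketch: fitting a Gaussian-blurred slab of fixed density to a CT
# line profile to estimate sub-PSF cortical thickness.
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

D_CORTICAL = 1.2  # assumed fixed peak cortical density (arbitrary units)

def profile(x, center, width, sigma, d_background):
    """Slab of density D_CORTICAL blurred by a Gaussian PSF of width sigma."""
    slab = 0.5 * (erf((x - center + width / 2) / (np.sqrt(2) * sigma))
                  - erf((x - center - width / 2) / (np.sqrt(2) * sigma)))
    return d_background + (D_CORTICAL - d_background) * slab

x = np.linspace(-5, 5, 200)                      # mm along the CT profile
truth = profile(x, 0.0, 0.5, 1.0, 0.1)           # 0.5 mm cortex, 1 mm PSF
noisy = truth + 0.01 * np.random.default_rng(1).standard_normal(x.size)

popt, _ = curve_fit(profile, x, noisy, p0=[0.2, 1.0, 0.8, 0.0])
print(f"estimated thickness: {popt[1]:.2f} mm")  # near 0.5 mm at low noise
```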

Relevance: 80.00%

Abstract:

Amplitude demodulation is an ill-posed problem, and so it is natural to treat it from a Bayesian viewpoint, inferring the most likely carrier and envelope under probabilistic constraints. One such treatment is Probabilistic Amplitude Demodulation (PAD), which, whilst computationally more intensive than traditional approaches, offers several advantages. Here we provide methods for estimating the uncertainty in the PAD-derived envelopes and carriers, and for learning free parameters such as the time-scale of the envelope. We show how the probabilistic approach can naturally handle noisy and missing data. Finally, we indicate how to extend the model to signals which contain multiple modulators and carriers.
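For contrast with PAD, the sketch below shows the kind of traditional, non-probabilistic baseline the abstract alludes to: Hilbert-transform demodulation into an envelope and carrier. PAD's probabilistic inference is not reproduced here, and the test signal is illustrative.

```python
# Minimal sketch: classical amplitude demodulation via the analytic signal.
import numpy as np
from scipy.signal import hilbert

fs = 8000.0
t = np.arange(0, 1.0, 1.0 / fs)
# A 440 Hz carrier modulated by a slow 3 Hz envelope:
signal = (1 + 0.5 * np.cos(2 * np.pi * 3 * t)) * np.sin(2 * np.pi * 440 * t)

analytic = hilbert(signal)
envelope = np.abs(analytic)                      # estimated modulator
carrier = signal / np.maximum(envelope, 1e-12)   # estimated carrier
print(envelope[:5])
```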

Relevance: 80.00%

Abstract:

Many testing methods are based on program paths. A well-known problem with them is that some paths are infeasible. To decide the feasibility of paths, we may solve a set of constraints. In this paper, we describe constraint-based tools that can be used for this purpose. They accept constraints expressed in a natural form, which may involve variables of different types such as integers, Booleans, reals and fixed-size arrays. The constraint solver is an extension of a Boolean satisfiability checker, and it makes use of a linear programming package. The solving algorithm is described, and examples are given to illustrate the use of the tools. For many paths in the testing literature, feasibility can be decided in a reasonable amount of time.
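To make the idea concrete, here is a minimal sketch of path-feasibility checking using the off-the-shelf Z3 solver in place of the paper's own SAT-plus-linear-programming tools; the path condition is an invented example, not one from the paper.

```python
# Minimal sketch: decide path feasibility by solving the conjunction of the
# path's branch conditions with an SMT solver.
from z3 import Int, Solver, sat

x, y = Int('x'), Int('y')
s = Solver()
s.add(x > 10, y == x - 20, y > 0)   # branch conditions along the path

if s.check() == sat:
    print("path is feasible, e.g.", s.model())   # a witness input
else:
    print("path is infeasible")
```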

Relevance: 80.00%

Abstract:

Multi-frame image super-resolution (SR) aims to utilize information from a set of low-resolution (LR) images to compose a high-resolution (HR) one. As it is desirable or essential in many real applications, recent years have witnessed growing interest in the problem of multi-frame SR reconstruction. This family of algorithms commonly utilizes a linear observation model to relate the recorded LR images to the unknown HR image estimate. Recently, regularization-based schemes have been demonstrated to be effective because SR reconstruction is an ill-posed problem. Working within this promising framework, this paper first proposes two new regularization terms, termed locally adaptive bilateral total variation and consistency of gradients, to keep the edges and flat regions implicitly described in the LR images sharp and smooth, respectively. The combination of the proposed regularization terms is superior to existing ones because it accounts for both edges and flat regions, whereas existing terms consider only edges. Thorough experimental results show the effectiveness of the new algorithm for SR reconstruction. (C) 2009 Elsevier B.V. All rights reserved.
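For orientation, the sketch below implements a subgradient step for the classical bilateral total variation (BTV) regularizer on which the paper's locally adaptive variant builds. The locally adaptive weighting and the consistency-of-gradients term are the paper's contributions and are not reproduced here, and all values are illustrative.

```python
# Minimal sketch: subgradient of the classical BTV regularizer
#   R(X) = sum_{l,m} alpha^(|l|+|m|) || X - shift_{l,m}(X) ||_1.
import numpy as np

def btv_gradient(X, p=2, alpha=0.7):
    g = np.zeros_like(X)
    for l in range(-p, p + 1):
        for m in range(-p, p + 1):
            if l == 0 and m == 0:
                continue
            diff = np.sign(X - np.roll(X, (l, m), axis=(0, 1)))
            # gradient contribution: (I - inverse shift) applied to sign(diff)
            g += alpha ** (abs(l) + abs(m)) * (
                diff - np.roll(diff, (-l, -m), axis=(0, 1)))
    return g

# One illustrative descent step on a random "HR estimate":
rng = np.random.default_rng(0)
X = rng.standard_normal((64, 64))
X -= 0.1 * btv_gradient(X)   # data-fidelity term omitted for brevity
```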