992 results for Energy minimization


Relevance:

60.00%

Publisher:

Abstract:

[EN] In recent years we have developed several methods for 3D reconstruction. We began with the problem of reconstructing a 3D scene from a stereoscopic pair of images, developing methods based on energy functionals that produce dense disparity maps while preserving discontinuities at image boundaries. We then moved to the problem of reconstructing a 3D scene from multiple views (more than two). The multiple-view reconstruction method builds on the stereoscopic one: for every pair of consecutive images we estimate a disparity map, and we then apply a robust method that searches for good correspondences throughout the image sequence. Recently we have proposed several methods for 3D surface regularization, a post-processing step needed to smooth the final surface, which may be affected by noise or mismatched correspondences. These regularization methods are interesting because they use information from the reconstruction process and not only from the 3D surface. We tackle all these problems from an energy minimization approach: we derive the Euler-Lagrange equation associated with the energy functional and solve the underlying partial differential equation (PDE) with a gradient descent method.
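As a minimal illustration of the last point, the sketch below runs gradient descent on the Euler-Lagrange equation of a toy quadratic regularization energy; the functionals used in the actual work are more elaborate and discontinuity-preserving, so the function name and parameters here are illustrative assumptions only.

```python
import numpy as np

def regularize_by_gradient_descent(f, lam=0.5, tau=0.1, n_iter=200):
    """Minimize the toy energy E(u) = sum (u - f)^2 + lam * |grad u|^2 by gradient
    descent on its Euler-Lagrange equation dE/du = 2(u - f) - 2*lam*laplacian(u)."""
    u = f.copy()
    for _ in range(n_iter):
        lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
               np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)
        u -= tau * (2.0 * (u - f) - 2.0 * lam * lap)
    return u

# Toy usage: smooth a noisy disparity-like map.
noisy_disparity = np.random.rand(64, 64)
smoothed = regularize_by_gradient_descent(noisy_disparity)
```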

Relevance:

60.00%

Publisher:

Abstract:

The objective of this dissertation is to develop and test a predictive model for the passive kinematics of human joints based on the energy minimization principle. To pursue this goal, the tibio-talar joint is chosen as a reference joint because of the reduced number of bones involved and its simplicity compared with other synovial joints such as the knee or the wrist. Starting from knowledge of the articular surface shapes, the spatial trajectory of passive motion is obtained as the envelope of joint configurations that maximize surface congruence. An increase in joint congruence corresponds to an improved capability of distributing an applied load, allowing the joint to attain greater strength with less material. Joint congruence maximization is thus a simple geometric way to capture the idea of joint energy minimization. The results obtained are validated against in vitro measured trajectories; this preliminary comparison provides strong support for the predictions of the theoretical model.
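A hedged sketch of the congruence-maximization idea: assuming the two articular surfaces are given as corresponding point clouds (the dissertation works with the measured surface shapes and its own congruence measure), one configuration along the passive-motion envelope can be found by minimizing a squared-gap surrogate over the rigid pose.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical point clouds standing in for the tibial and talar articular surfaces.
rng = np.random.default_rng(0)
tibia_pts = rng.normal(size=(200, 3))
talus_pts = tibia_pts + rng.normal(scale=0.05, size=(200, 3))  # nearly congruent surfaces

def incongruence(pose, fixed_pts, moving_pts):
    """Surrogate for (negative) congruence: mean squared gap between corresponding
    surface points after applying a rigid pose (3 rotations, 3 translations)."""
    rx, ry, rz, tx, ty, tz = pose
    Rx = np.array([[1, 0, 0], [0, np.cos(rx), -np.sin(rx)], [0, np.sin(rx), np.cos(rx)]])
    Ry = np.array([[np.cos(ry), 0, np.sin(ry)], [0, 1, 0], [-np.sin(ry), 0, np.cos(ry)]])
    Rz = np.array([[np.cos(rz), -np.sin(rz), 0], [np.sin(rz), np.cos(rz), 0], [0, 0, 1]])
    moved = moving_pts @ (Rz @ Ry @ Rx).T + np.array([tx, ty, tz])
    return np.mean(np.sum((moved - fixed_pts) ** 2, axis=1))

# One point of the passive-motion envelope: the pose of maximum congruence.
result = minimize(incongruence, x0=np.zeros(6), args=(tibia_pts, talus_pts))
print("optimal pose:", result.x)
```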

Relevance:

60.00%

Publisher:

Abstract:

Little is known so far about the secondary structure of LI-cadherin. There are no X-ray analyses and no NMR-spectroscopic studies. One can only surmise, on the basis of sequence homologies to the classical cadherins that have already been studied, that similar conditions prevail in the decisive interaction domain of LI-cadherin. In analogy to E-cadherin, it was assumed that LI-cadherin contains a "homophilic recognition region" that should adopt a typical beta-turn structure followed by beta-sheet regions. To investigate the influence of various saccharide antigens on turn formation, the first part of this work comprised the synthesis of several saccharide antigen building blocks, which were then used in the second part to assemble, by sequential solid-phase synthesis, the corresponding glycopeptide structures from this region of LI-cadherin. The synthesis of all antigen building blocks started from D-galactose, which was converted in four steps via the galactal and an azidonitration to the azidobromide. In a Koenigs-Knorr glycosylation, this was then transferred onto the side chain of a protected serine derivative. Reduction and protecting-group manipulations afforded the TN antigen building block. A TN-antigen derivative was the starting point for the syntheses of the other glycosyl-serine building blocks. The T antigen building block was prepared by Helferich glycosylation, and the STN antigen building block was obtained by a sialylation reaction and further protecting-group manipulations. Since the route via the T-antigen derivative constituted the main synthetic pathway to the further, more complex antigens, various protecting-group patterns were tested. Building on this, the complex (2->6)-ST antigen building block as well as the (2->3)-sialyl-T and glycophorin antigen building blocks were synthesized by various glycosylation reactions and protecting-group manipulations. In the next part of the thesis, the synthesized saccharide-antigen-serine conjugates were used in solid-phase glycopeptide syntheses. First, a tricosapeptide glycosylated with the TN antigen was prepared. By means of NMR-spectroscopic studies and subsequent energy minimization calculations, a three-dimensional structure could be determined. The peptide sequence of the turn-forming region was chosen for the subsequent syntheses. The sequence of the individual reaction steps for the solid-phase syntheses with the different saccharide antigen building blocks was similar. Overall, the solid-phase glycopeptide synthesis depended strongly on the steric demand of the saccharide building blocks. All glycopeptides synthesized in this way were characterized by NMR spectroscopy and their conformations examined by NOE experiments. From this determination of the spatial proton-proton contacts, a three-dimensional structure could be postulated for the glycopeptides by energy minimization calculations based on MM2 force fields. All of the synthesized glycopeptides adopt a turn-like conformation. The influence of the saccharide antigens differs and can be divided into three groups.
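For the energy minimization step, the thesis uses MM2 force-field calculations restrained by NOE-derived proton-proton contacts. As a rough, openly available stand-in (neither MM2 nor the NOE restraints are reproduced here), the following sketch minimizes the conformational energy of a hypothetical serine-containing fragment with RDKit's MMFF94 force field.

```python
from rdkit import Chem
from rdkit.Chem import AllChem

# Hypothetical Ac-Ser-NHMe-like fragment as a stand-in for the thesis glycopeptides.
mol = Chem.AddHs(Chem.MolFromSmiles("CC(=O)N[C@@H](CO)C(=O)NC"))
AllChem.EmbedMolecule(mol, randomSeed=42)          # generate a 3D starting conformation
AllChem.MMFFOptimizeMolecule(mol, maxIters=2000)   # force-field energy minimization

ff = AllChem.MMFFGetMoleculeForceField(mol, AllChem.MMFFGetMoleculeProperties(mol))
print("minimized MMFF94 energy (kcal/mol):", ff.CalcEnergy())
```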

Relevance:

60.00%

Publisher:

Abstract:

We present an automatic method to segment brain tissues from volumetric MRI brain tumor images. The method is based on non-rigid registration of an average atlas in combination with a biomechanically justified tumor growth model that simulates the soft-tissue deformations caused by the tumor mass effect. The tumor growth model, formulated as a mesh-free Markov Random Field energy minimization problem, establishes correspondence between the atlas and the patient image prior to the registration step. The method is non-parametric, simple, and fast compared to other approaches while maintaining similar accuracy. It has been evaluated qualitatively and quantitatively, with promising results, on eight datasets comprising simulated images and real patient data.
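As a generic illustration of Markov Random Field energy minimization (not the paper's mesh-free tumor-growth formulation), the sketch below minimizes a simple Potts-model labeling energy with Iterated Conditional Modes; the unary costs are random placeholders.

```python
import numpy as np

def icm_potts(unary, beta=1.0, n_iter=5):
    """Minimize E(l) = sum_p unary[p, l_p] + beta * sum_{p~q} [l_p != l_q]
    by Iterated Conditional Modes. `unary` has shape (H, W, L)."""
    H, W, L = unary.shape
    labels = unary.argmin(axis=2)
    for _ in range(n_iter):
        for y in range(H):
            for x in range(W):
                costs = unary[y, x].copy()
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < H and 0 <= nx < W:
                        costs += beta * (np.arange(L) != labels[ny, nx])
                labels[y, x] = costs.argmin()
    return labels

# Toy usage: three tissue classes on a small image with random unary costs.
unary = np.random.rand(32, 32, 3)
segmentation = icm_potts(unary)
```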

Relevance:

60.00%

Publisher:

Abstract:

PMR-15 polyimide is a polymer used as a matrix in composites. Composites with PMR-15 matrices are called advanced polymer matrix composites and are widely used in the aerospace and electronics industries because of their high-temperature resistance. Besides withstanding high temperatures, PMR-15 composites also display good thermal-oxidative stability, mechanical properties, processability, and low cost, which makes the material suitable for manufacturing aircraft structures. PMR-15 crosslinks via the reverse Diels-Alder (RDA) reaction, which is the basis of its distinctive thermal stability and its use temperature range of 280-300 °C. Despite these desirable properties, the material has a number of limitations that restrict its large-scale application. PMR-15 composites are known to be very vulnerable to inter- and intra-laminar micro-cracking. The major factor that hinders demand, however, is PMR-15's carcinogenic constituent, methylene dianiline (MDA), which is also a liver toxin. The need to provide a safe working environment during production adds to the cost of the material. In this study, Molecular Dynamics and Energy Minimization techniques are used to simulate a structure of PMR-15 at a density of 1.324 g/cc, in an attempt to model the polyimide computationally, reduce the amount of experimental testing, and hence mitigate both the health hazards and the cost involved in its production. Although this study does not validate any mechanical properties of the model, the model could be used in the future for validation of such properties and for further study of behaviors such as aging, micro-cracking, and creep.
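A minimal sketch of force-field energy minimization in the same spirit, assuming a toy Lennard-Jones cluster rather than the actual PMR-15 force field and density; all parameters are illustrative.

```python
import numpy as np

def lj_energy_and_forces(pos, eps=1.0, sigma=1.0):
    """Lennard-Jones energy and forces for a small cluster of atoms.
    Toy stand-in: a real PMR-15 model would use a full polymer force field."""
    n = len(pos)
    energy = 0.0
    forces = np.zeros_like(pos)
    for i in range(n):
        for j in range(i + 1, n):
            rij = pos[i] - pos[j]
            r = np.linalg.norm(rij)
            sr6 = (sigma / r) ** 6
            energy += 4.0 * eps * (sr6 ** 2 - sr6)
            fmag = 24.0 * eps * (2.0 * sr6 ** 2 - sr6) / r ** 2
            forces[i] += fmag * rij
            forces[j] -= fmag * rij
    return energy, forces

def steepest_descent(pos, step=1e-3, n_iter=500):
    """Energy minimization by steepest descent: move atoms along the forces."""
    for _ in range(n_iter):
        _, forces = lj_energy_and_forces(pos)
        pos = pos + step * forces
    return pos

# Eight atoms on a slightly perturbed cubic lattice as the starting structure.
start = np.array([[i, j, k] for i in range(2) for j in range(2) for k in range(2)], float) * 1.5
start += 0.05 * np.random.default_rng(1).normal(size=start.shape)
relaxed = steepest_descent(start)
print("final energy:", lj_energy_and_forces(relaxed)[0])
```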

Relevance:

60.00%

Publisher:

Abstract:

We present an algorithm for estimating dense image correspondences. Our versatile approach lends itself to various tasks typical of video post-processing, including image morphing, optical flow estimation, stereo rectification, disparity/depth reconstruction, and baseline adjustment. We incorporate recent advances in feature matching, energy minimization, stereo vision, and data clustering into our approach. At the core of our correspondence estimation we use Efficient Belief Propagation for energy minimization. While state-of-the-art algorithms only work on thumbnail-sized images, our novel feature downsampling scheme, in combination with a simple yet efficient data term compression, can cope with high-resolution data. The incorporation of SIFT (Scale-Invariant Feature Transform) features into the data term computation further resolves matching ambiguities, making long-range correspondence estimation possible. We detect occluded areas by evaluating the correspondence symmetry, and we apply Geodesic matting to automatically determine plausible values in these regions.
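One common form of the correspondence-symmetry test mentioned above is a left-right consistency check on the estimated disparities; the sketch below assumes per-pixel horizontal disparities and a tolerance, both of which are illustrative choices rather than the paper's exact criterion.

```python
import numpy as np

def occlusions_from_symmetry(disp_left, disp_right, tol=1.0):
    """Flag pixels whose left-to-right and right-to-left correspondences disagree
    by more than `tol` pixels. Disparities are horizontal, in pixels."""
    H, W = disp_left.shape
    xs = np.arange(W)[None, :].repeat(H, axis=0)
    ys = np.arange(H)[:, None].repeat(W, axis=1)
    # where each left-image pixel lands in the right image
    x_right = np.clip(np.round(xs - disp_left).astype(int), 0, W - 1)
    # symmetric correspondence: the right-image disparity there should match
    mismatch = np.abs(disp_left - disp_right[ys, x_right])
    return mismatch > tol  # True = likely occluded

# Toy usage with constant disparity maps (no occlusions expected).
left = np.full((4, 8), 2.0)
right = np.full((4, 8), 2.0)
occluded = occlusions_from_symmetry(left, right)
```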

Relevance:

60.00%

Publisher:

Abstract:

In this paper we study the problem of blind deconvolution. Our analysis is based on the algorithm of Chan and Wong [2], which popularized the use of sparse gradient priors via total variation. We use this algorithm because many methods in the literature are essentially adaptations of this framework. The algorithm is an iterative alternating energy minimization in which, at each step, either the sharp image or the blur function is reconstructed. Recent work by Levin et al. [14] showed that any algorithm that tries to minimize that same energy would fail, as the desired solution has a higher energy than the no-blur solution, where the sharp image is the blurry input and the blur is a Dirac delta. However, one can observe experimentally that Chan and Wong's algorithm converges to the desired solution even when initialized with the no-blur one. We provide both analysis and experiments to resolve this apparent paradox, and we find that both claims are right. The key to understanding how this is possible lies in the details of Chan and Wong's implementation and in how seemingly harmless choices have dramatic effects. Our analysis reveals that the delayed scaling (normalization) in the iterative step of the blur kernel is fundamental to the convergence of the algorithm. The result is a procedure that eludes the no-blur solution, despite it being a global minimum of the original energy. We introduce an adaptation of this algorithm and show that, in spite of its extreme simplicity, it is very robust and achieves performance comparable to the state of the art.
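A toy sketch of the alternating scheme under discussion, with a smoothed total-variation prior and, crucially, the kernel normalization applied only after each kernel update (the delayed scaling); kernel size, step sizes, and the prior weight are illustrative assumptions, not the settings of Chan and Wong's implementation.

```python
import numpy as np
from scipy.signal import fftconvolve

def tv_grad(u, eps=1e-3):
    """Gradient of the smoothed total variation sum sqrt(|grad u|^2 + eps^2)."""
    ux = np.roll(u, -1, 1) - u
    uy = np.roll(u, -1, 0) - u
    mag = np.sqrt(ux ** 2 + uy ** 2 + eps ** 2)
    px, py = ux / mag, uy / mag
    return -(px - np.roll(px, 1, 1)) - (py - np.roll(py, 1, 0))

def blind_deconv(y, ksize=9, lam=2e-3, n_outer=30, tau_u=0.2, tau_k=1e-6):
    """Alternating minimization of ||k*u - y||^2 + lam*TV(u) over image u and kernel k.
    The kernel is rescaled only *after* each gradient step (delayed normalization)."""
    c = ksize // 2
    H, W = y.shape
    u = y.copy()
    k = np.zeros((ksize, ksize)); k[c, c] = 1.0          # start from the no-blur kernel
    for _ in range(n_outer):
        # image step: gradient descent on data term + TV prior
        r = fftconvolve(u, k, mode="same") - y
        u = u - tau_u * (2.0 * fftconvolve(r, k[::-1, ::-1], mode="same") + lam * tv_grad(u))
        # kernel step: gradient of the data term w.r.t. each kernel tap
        r = fftconvolve(u, k, mode="same") - y
        up = np.pad(u, c)
        grad_k = np.zeros_like(k)
        for a in range(ksize):
            for b in range(ksize):
                grad_k[a, b] = 2.0 * np.sum(r * up[2 * c - a: 2 * c - a + H,
                                                   2 * c - b: 2 * c - b + W])
        k = k - tau_k * grad_k
        k = np.clip(k, 0.0, None)
        k = k / max(k.sum(), 1e-12)                       # delayed normalization
    return u, k

blurry = np.random.rand(64, 64)                           # stand-in for a blurry input
sharp_estimate, kernel_estimate = blind_deconv(blurry)
```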

Relevance:

60.00%

Publisher:

Abstract:

In this work we devise two novel algorithms for blind deconvolution based on a family of logarithmic image priors. In contrast to recent approaches, we consider a minimalistic formulation of the blind deconvolution problem with only two energy terms: a least-squares term for the data fidelity and an image prior based on a lower-bounded logarithm of the norm of the image gradients. We show that this energy formulation is sufficient to achieve the state of the art in blind deconvolution with a good margin over previous methods. Much of the performance is due to the chosen prior. On the one hand, this prior is very effective in favoring sparsity of the image gradients. On the other hand, it is non-convex, so solution schemes that can deal effectively with local minima of the energy become necessary. We devise two iterative minimization algorithms that solve convex problems at each iteration: one obtained via the primal-dual approach and one via majorization-minimization. While the former is computationally efficient, the latter achieves state-of-the-art performance on a public dataset.
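The sketch below shows one plausible form of such a lower-bounded logarithmic gradient prior and the weights of a majorization-minimization step (the exact parameterization in the paper may differ): since t -> log(eps + t) is concave, each outer iteration can minimize a reweighted convex surrogate.

```python
import numpy as np

def log_prior(u, eps=1e-2, floor=-4.0):
    """Lower-bounded logarithmic prior on image gradients:
    sum max(log(eps + |grad u|), floor). Illustrative form only."""
    gx = np.roll(u, -1, 1) - u
    gy = np.roll(u, -1, 0) - u
    mag = np.sqrt(gx ** 2 + gy ** 2)
    return np.sum(np.maximum(np.log(eps + mag), floor))

def mm_weights(u, eps=1e-2):
    """Majorization-minimization step: by concavity of t -> log(eps + t),
    log(eps + |g|) <= log(eps + |g0|) + (|g| - |g0|) / (eps + |g0|),
    so the next (convex) subproblem uses these reweighted-L1 weights."""
    gx = np.roll(u, -1, 1) - u
    gy = np.roll(u, -1, 0) - u
    mag = np.sqrt(gx ** 2 + gy ** 2)
    return 1.0 / (eps + mag)

u0 = np.random.rand(32, 32)
print("prior value:", log_prior(u0))
weights = mm_weights(u0)  # weights for the next convex (reweighted) subproblem
```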

Relevance:

60.00%

Publisher:

Abstract:

The focal point of this paper is to propose and analyze a P0 discontinuous Galerkin (DG) formulation for image denoising. The scheme is based on a total variation approach that has been applied successfully in previous papers on image processing. The main idea of the new scheme is to model the restoration process as a discrete energy minimization problem and to derive a corresponding DG variational formulation. Furthermore, we prove that the method admits a unique solution and that a natural maximum principle holds. In addition, a number of examples illustrate the effectiveness of the method.
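For orientation, a plain finite-difference stand-in for the underlying discrete energy (piecewise-constant unknowns, as in a P0 discretization) is sketched below: a smoothed ROF-type total variation energy minimized by gradient descent. It is not the paper's DG variational formulation, and the parameters are illustrative.

```python
import numpy as np

def tv_denoise(f, lam=0.2, tau=0.05, eps=1e-3, n_iter=300):
    """Minimize the discrete ROF-type energy
        E(u) = sum |grad u|_eps + (1 / (2 * lam)) * sum (u - f)^2
    by gradient descent on a smoothed total variation."""
    u = f.copy()
    for _ in range(n_iter):
        ux = np.roll(u, -1, 1) - u
        uy = np.roll(u, -1, 0) - u
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps ** 2)
        px, py = ux / mag, uy / mag
        div_p = (px - np.roll(px, 1, 1)) + (py - np.roll(py, 1, 0))
        u -= tau * (-div_p + (u - f) / lam)
    return u

noisy = np.clip(np.random.rand(64, 64) + 0.1 * np.random.randn(64, 64), 0.0, 1.0)
denoised = tv_denoise(noisy)
```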

Relevance:

60.00%

Publisher:

Abstract:

Ubiquitous sensor network deployments, such as those found in Smart City and Ambient Intelligence applications, impose constantly increasing computational demands in order to process data and offer services to users. The nature of these applications implies the use of data centers. Research has paid much attention to the energy consumption of the sensor nodes in WSN infrastructures; however, it is the supercomputing facilities that have the greater economic and environmental impact due to their very high power consumption, a problem that has so far been disregarded in the field of smart environment services. This paper proposes an energy-minimization workload assignment technique, based on heterogeneity and application awareness, that redistributes low-demand computational tasks from high-performance facilities to idle nodes with low and medium resources in the WSN infrastructure. These non-optimal allocation policies nevertheless reduce the energy consumed by the whole infrastructure and the total execution time.
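A toy, hypothetical sketch of the allocation idea: each task is placed where its estimated energy is lowest, so low-demand tasks migrate to idle WSN nodes while heavy ones stay in the data center. All task demands, node capacities, and per-unit energy figures are invented for illustration.

```python
# Greedy, application-aware workload redistribution (illustrative only).
tasks = [  # (name, cpu_demand)
    ("aggregate_readings", 1.0), ("detect_event", 0.5), ("train_model", 40.0),
]
wsn_nodes = [  # (name, spare_capacity, joules_per_cpu_unit)
    ("node_low", 1.0, 0.8), ("node_med", 4.0, 1.2),
]
DATACENTER_J_PER_CPU = 5.0  # includes cooling and power-supply overhead

assignment = {}
for name, demand in tasks:
    # candidate placements: any WSN node with enough spare capacity, plus the data center
    candidates = [(node, demand * jpc) for node, cap, jpc in wsn_nodes if cap >= demand]
    candidates.append(("datacenter", demand * DATACENTER_J_PER_CPU))
    best = min(candidates, key=lambda c: c[1])  # energy-minimizing placement
    assignment[name] = best[0]
    # reserve capacity on the chosen WSN node
    wsn_nodes = [(n, cap - demand if n == best[0] else cap, jpc) for n, cap, jpc in wsn_nodes]

print(assignment)  # low-demand tasks land on WSN nodes, the heavy one stays in the data center
```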

Relevance:

60.00%

Publisher:

Abstract:

Introduction: Diffusion-weighted imaging (DWI) techniques are able to measure, in vivo and non-invasively, the diffusivity of water molecules inside the human brain. DWI has been applied to cerebral ischemia, brain maturation, epilepsy, multiple sclerosis, etc. [1]. Nowadays these images are widely available. DWI allows the identification of brain tissues, so their accurate segmentation is a common initial step for the applications mentioned above.

Materials and Methods: We present a validation study on automated segmentation of DWI based on Gaussian mixture and hidden Markov random field models. This formulation is commonly solved with the iterated conditional modes algorithm, but some studies suggest [2] that graph-cut (GC) algorithms improve the results when the initialization is not close to the final solution. We implemented a segmentation tool integrating ITK with a GC algorithm [3], and validation software using fuzzy overlap measures [4].

Results: The segmentation accuracy of each tool is tested against a gold-standard segmentation obtained from a T1 MPRAGE magnetic resonance image of the same subject, registered to DWI space. The proposed software shows meaningful improvements from the GC energy minimization approach on DTI and DSI (Diffusion Spectrum Imaging) data.

Conclusions: Brain tissue segmentation on DWI is a fundamental step in many applications. The proposed software achieves improvements in accuracy and robustness, with high impact on the applications' final results.
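A minimal sketch of the first stage, assuming a single-channel volume and scikit-learn's Gaussian mixture (the study's pipeline uses ITK and a graph-cut optimizer, which are not reproduced here): the fitted mixture yields per-voxel unary costs that an MRF energy minimizer (ICM or graph cuts) would then regularize spatially.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_unary_costs(volume, n_tissues=3):
    """Fit a Gaussian mixture to voxel intensities and return per-voxel negative
    log-posteriors for each tissue class, to be used as MRF unary costs."""
    x = volume.reshape(-1, 1).astype(np.float64)
    gmm = GaussianMixture(n_components=n_tissues, random_state=0).fit(x)
    probs = gmm.predict_proba(x)
    costs = -np.log(probs + 1e-12)
    return costs.reshape(volume.shape + (n_tissues,))

volume = np.random.rand(16, 16, 8)   # stand-in for a DWI-derived scalar volume
unary = gmm_unary_costs(volume)
labels = unary.argmin(axis=-1)       # maximum-likelihood labeling before spatial regularization
```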

Relevance:

60.00%

Publisher:

Abstract:

The electronic structure and spectrum of several models of the binuclear metal site in soluble CuA domains of cytochrome-c oxidase have been calculated with an extended version of the complete neglect of differential overlap/spectroscopic method. The experimental spectra have two strong transitions of nearly equal intensity around 500 nm and a near-IR transition close to 800 nm. The model that best reproduces these features consists of a dimer of two blue (type 1) copper centers, in which each Cu atom replaces the missing imidazole on the other Cu atom. Thus, both Cu atoms have one cysteine sulfur atom and one imidazole nitrogen atom as ligands, and there are no bridging ligands but a direct Cu-Cu bond. According to the calculations, the two strong bands in the visible region originate from exciton coupling of the dipoles of the two copper monomers, and the near-IR band is a charge-transfer transition between the two Cu atoms. The known amino acid sequence has been used to construct a molecular model of the CuA site by means of a template and energy minimization. In this model, the two ligand cysteine residues are in one turn of an alpha-helix, whereas one ligand histidine is in a loop following this helix and the other is in a beta-strand.

Relevance:

60.00%

Publisher:

Abstract:

Paper presented at the VIII Simposium Nacional de Reconocimiento de Formas y Análisis de Imágenes (National Symposium on Pattern Recognition and Image Analysis), Bilbao, May 1999.

Relevance:

60.00%

Publisher:

Abstract:

Deformable template models are first applied to track the inner wall of coronary arteries in intravascular ultrasound sequences, mainly to assist angioplasty surgery. A circular template is used to initialize an elliptical deformable model that tracks wall deformation as a balloon placed at the tip of the catheter is inflated. We define a new energy function to drive the behavior of the template and test its robustness on both real and synthetic images. Finally, we introduce a framework for learning and recognizing spatio-temporal geometric constraints based on Principal Component Analysis (eigenconstraints).
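A hedged sketch of the template-fitting step, with an invented external energy (the paper defines its own): the elliptical template parameters are adjusted so the contour lies on strong edges of the frame, starting from a circular initialization.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.ndimage import map_coordinates, gaussian_gradient_magnitude

def ellipse_points(params, n=100):
    """Sample points on an ellipse parameterized as (cx, cy, a, b, theta)."""
    cx, cy, a, b, theta = params
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    x, y = a * np.cos(t), b * np.sin(t)
    xr = cx + x * np.cos(theta) - y * np.sin(theta)
    yr = cy + x * np.sin(theta) + y * np.cos(theta)
    return xr, yr

def template_energy(params, edge_map):
    """Toy external energy: the template should lie on strong edges, so we
    minimize the negative mean edge strength sampled along the ellipse."""
    xr, yr = ellipse_points(params)
    vals = map_coordinates(edge_map, [yr, xr], order=1, mode="nearest")
    return -float(vals.mean())

frame = np.random.rand(128, 128)                      # stand-in for an IVUS frame
edges = gaussian_gradient_magnitude(frame, sigma=2)   # edge-strength map
circle_init = np.array([64.0, 64.0, 30.0, 30.0, 0.0]) # circular template initialization
fit = minimize(template_energy, circle_init, args=(edges,), method="Nelder-Mead")
print("fitted ellipse parameters:", fit.x)
```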

Relevance:

60.00%

Publisher:

Abstract:

In this paper, we propose two Bayesian methods for detecting and grouping junctions. Our junction detection method evolves from the Kona approach and is based on a competitive greedy procedure inspired by the region competition method. Junction grouping is then accomplished by finding connecting paths between pairs of junctions; path searching is performed with a recently proposed Bayesian A* algorithm. Both methods are efficient and robust, and they are tested on synthetic and real images.
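An illustrative sketch of the path-searching stage, using a plain A* search on a per-pixel cost map between two junction positions; the paper's Bayesian A* uses edge-likelihood-based rewards and pruning not reproduced here.

```python
import heapq
import numpy as np

def astar_path(cost, start, goal):
    """Cheapest 4-connected path between two junction locations on a per-pixel
    cost map (low cost = likely edge), found with A* and a Manhattan heuristic."""
    H, W = cost.shape
    heuristic = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(heuristic(start), 0.0, start, None)]
    came_from, best_g = {}, {start: 0.0}
    while frontier:
        _, g, node, parent = heapq.heappop(frontier)
        if node in came_from:
            continue
        came_from[node] = parent
        if node == goal:
            break
        y, x = node
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < H and 0 <= nx < W:
                ng = g + cost[ny, nx]
                if ng < best_g.get((ny, nx), np.inf):
                    best_g[(ny, nx)] = ng
                    heapq.heappush(frontier, (ng + heuristic((ny, nx)), ng, (ny, nx), node))
    # walk back from the goal to recover the connecting path
    path, node = [], goal
    while node is not None:
        path.append(node)
        node = came_from.get(node)
    return path[::-1]

cost_map = np.ones((32, 32))            # stand-in for an edge-based cost map
print(astar_path(cost_map, (2, 2), (20, 25))[:5])
```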