897 results for Minimization Problem, Lattice Model


Relevance:

100.00%

Publisher:

Abstract:

The starting point of this article is the question "How to retrieve fingerprints of rhythm in written texts?" We address this problem in the case of Brazilian and European Portuguese. These two dialects of Modern Portuguese share the same lexicon, and most of the sentences they produce are superficially identical. Yet they are conjectured, on linguistic grounds, to implement different rhythms. We show that this linguistic question can be formulated as a problem of model selection in the class of variable length Markov chains. To carry out this approach, we compare texts from European and Brazilian Portuguese, encoded beforehand according to some basic rhythmic features of the sentences which can be retrieved automatically. This is an entirely new approach from the linguistic point of view. Our statistical contribution is the introduction of the smallest maximizer criterion, a constant-free procedure for model selection. As a by-product, this provides a solution to the problem of optimally choosing the penalty constant when using the BIC to select a variable length Markov chain. Besides proving the consistency of the smallest maximizer criterion as the sample size diverges, we also present a simulation study comparing our approach with both standard BIC selection and the Peres-Shields order estimation. Applied to the linguistic sample constituted for our case study, the smallest maximizer criterion assigns different context-tree models to the two dialects of Portuguese. The features of the selected models are compatible with current conjectures discussed in the linguistic literature.
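
The penalized likelihood underlying this selection procedure is standard, and a small sketch may help fix ideas. The following Python snippet (illustrative only, not the authors' implementation) scores a candidate context tree on a binary sequence with a BIC-style penalty carrying an explicit constant c; scanning c and recording which candidate wins at each value is the flavor of the champion-tree analysis behind the smallest maximizer criterion. The toy sequence and candidate trees are invented.

```python
import math
from collections import defaultdict

def penalized_loglik(sequence, contexts, c):
    """BIC-style score of a context tree: log-likelihood minus penalty.

    `contexts` is a set of strings forming a complete suffix tree, so
    every past has exactly one suffix among them (assumed, not checked).
    The penalty is c * (|alphabet| - 1) * |tree| * log(n), where c is the
    constant whose choice the smallest maximizer criterion addresses.
    """
    alphabet = sorted(set(sequence))
    counts = defaultdict(lambda: defaultdict(int))
    start = max(len(w) for w in contexts)
    for i in range(start, len(sequence)):
        # The unique context explaining position i is the suffix of its past.
        context = next(w for w in contexts if sequence[:i].endswith(w))
        counts[context][sequence[i]] += 1
    loglik = sum(
        n * math.log(n / sum(trans.values()))
        for trans in counts.values() for n in trans.values()
    )
    penalty = c * (len(alphabet) - 1) * len(contexts) * math.log(len(sequence))
    return loglik - penalty

seq = "01011010010110100101101001011010"  # toy rhythmic encoding
candidates = [{""}, {"0", "1"}, {"0", "01", "11"}]
for c in (0.1, 0.5, 2.0):
    best = max(candidates, key=lambda t: penalized_loglik(seq, t, c))
    print(c, sorted(best))
```

Larger penalty constants favor smaller trees; the criterion exploits how the winning ("champion") tree changes as c grows.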

Relevance:

100.00%

Publisher:

Abstract:

A deep theoretical analysis of the graph cut image segmentation framework presented in this paper simultaneously translates into important contributions in several directions. The most important practical contribution of this work is a full theoretical description, and implementation, of a novel powerful segmentation algorithm, GC_max. The output of GC_max coincides with a version of a segmentation algorithm known as Iterative Relative Fuzzy Connectedness, IRFC. However, GC_max is considerably faster than the classic IRFC algorithm, which we prove theoretically and show experimentally. Specifically, we prove that, in the worst-case scenario, the GC_max algorithm runs in linear time with respect to the variable M = |C| + |Z|, where |C| is the image scene size and |Z| is the size of the allowable range, Z, of the associated weight/affinity function. For most implementations, Z is identical to the set of allowable image intensity values, and its size can be treated as small with respect to |C|, meaning that O(M) = O(|C|). In such a situation, GC_max runs in linear time with respect to the image size |C|. We show that the output of GC_max constitutes a solution of a graph cut energy minimization problem, in which the energy is defined as the ℓ∞ norm ‖F_P‖∞ of the map F_P that associates, with every element e from the boundary of an object P, its weight w(e). This formulation brings IRFC algorithms into the realm of the graph cut energy minimizers, with energy functions ‖F_P‖_q for q ∈ [1, ∞]. Of these, the best known minimization problem is for the energy ‖F_P‖_1, which is solved by the classic min-cut/max-flow algorithm, often referred to as the Graph Cut algorithm. We notice that the minimization problem for ‖F_P‖_q, q ∈ [1, ∞), is identical to that for ‖F_P‖_1 when the original weight function w is replaced by w^q. Thus, any algorithm GC_sum solving the ‖F_P‖_1 minimization problem also solves the one for ‖F_P‖_q with q ∈ [1, ∞), so just two algorithms, GC_sum and GC_max, suffice to solve all ‖F_P‖_q minimization problems. We also show that, for any fixed weight assignment, the solutions of the ‖F_P‖_q minimization problems converge to a solution of the ‖F_P‖∞ minimization problem (the identity ‖F_P‖∞ = lim_{q→∞} ‖F_P‖_q alone is not enough to deduce this). An experimental comparison of the performance of the GC_max and GC_sum algorithms is included, concentrating on the algorithms' actual running times (as opposed to the provable worst-case bounds), as well as on the influence of the choice of seeds on the output.
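
The w ↦ w^q reduction stated above is easy to exercise directly. The sketch below (a toy, not the paper's GC_sum/GC_max implementations; the graph, weights, and seed names are invented) raises edge weights to the power q and hands the result to an off-the-shelf min-cut solver, so one ℓ1 solver covers every finite q.

```python
import networkx as nx

def lq_min_cut(edges, source, sink, q):
    """Solve the ||F_P||_q problem for finite q via an l1 min-cut on w**q.

    Cutting an edge set B costs sum(w(e)**q for e in B) = ||F_P||_q ** q,
    so a standard min-cut/max-flow solver on the powered weights
    minimizes ||F_P||_q itself.
    """
    G = nx.DiGraph()
    for u, v, w in edges:
        G.add_edge(u, v, capacity=w ** q)
        G.add_edge(v, u, capacity=w ** q)
    cut_value, (object_side, _) = nx.minimum_cut(G, source, sink)
    return cut_value ** (1.0 / q), object_side

# Toy 4-pixel image with object seed "s" and background seed "t".
edges = [("s", "a", 5), ("a", "b", 2), ("b", "t", 5),
         ("s", "c", 5), ("c", "d", 1), ("d", "t", 5)]
for q in (1, 2, 8):
    energy, obj = lq_min_cut(edges, "s", "t", q)
    print(q, round(energy, 3), sorted(obj - {"s"}))
```

In this toy run, the reported energy for q = 8 is already very close to the largest weight on the cut, illustrating the convergence toward the ℓ∞ problem.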

Relevance:

100.00%

Publisher:

Abstract:

The seminal work of Horn and Schunck [8] is the first variational method for optical flow estimation. It introduced a novel framework where the optical flow is computed as the solution of a minimization problem. From the assumption that pixel intensities do not change over time, the optical flow constraint equation is derived. This equation relates the optical flow to the derivatives of the image. There are infinitely many vector fields that satisfy the optical flow constraint, so the problem is ill-posed. To overcome this, Horn and Schunck introduced an additional regularity condition that restricts the possible solutions. Their method minimizes both the optical flow constraint and the magnitude of the variations of the flow field, producing smooth vector fields. One limitation of this method is that, typically, it can only estimate small motions. In the presence of large displacements, the method fails when the gradient of the image is not smooth enough. In this work, we describe an implementation of the original Horn and Schunck method and also introduce a multi-scale strategy in order to deal with larger displacements. For this multi-scale strategy, we create a pyramidal structure of downsampled images and replace the optical flow constraint equation with a nonlinear formulation. To tackle this nonlinear formulation, we linearize it and solve the method iteratively at each scale. In this sense, there are two common approaches: one that computes the motion increment during the iterations, and the one we follow, which computes the full flow during the iterations. The solutions are incrementally refined over the scales. This pyramidal structure is a standard tool in many optical flow methods.
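
For concreteness, here is a minimal single-scale sketch of the classic Horn and Schunck iteration (illustrative Python, not the implementation described above; the derivative filters, averaging kernel, and parameter values are one common textbook choice):

```python
import numpy as np
from scipy.ndimage import convolve

def horn_schunck(I1, I2, alpha=15.0, n_iter=100):
    """Minimal single-scale Horn-Schunck sketch.

    Minimizes the optical-flow-constraint term plus alpha^2 times the
    smoothness term via Jacobi-style fixed-point iterations.
    """
    I1 = I1.astype(np.float64); I2 = I2.astype(np.float64)
    # Spatial derivatives (central differences) and temporal difference.
    Ix = convolve(I1, np.array([[-0.5, 0.0, 0.5]]))
    Iy = convolve(I1, np.array([[-0.5], [0.0], [0.5]]))
    It = I2 - I1
    # Weighted neighborhood average used in the classic update formula.
    avg = np.array([[1, 2, 1], [2, 0, 2], [1, 2, 1]], dtype=np.float64) / 12.0
    u = np.zeros_like(I1); v = np.zeros_like(I1)
    for _ in range(n_iter):
        u_bar = convolve(u, avg); v_bar = convolve(v, avg)
        num = Ix * u_bar + Iy * v_bar + It
        den = alpha ** 2 + Ix ** 2 + Iy ** 2
        u = u_bar - Ix * num / den
        v = v_bar - Iy * num / den
    return u, v

I1 = np.random.rand(64, 64)
I2 = np.roll(I1, 1, axis=1)  # a one-pixel horizontal shift
u, v = horn_schunck(I1, I2)
```

The multi-scale strategy wraps this core: estimate the flow on the coarsest pyramid level, then upsample, rescale, and use it to initialize the next finer level.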

Relevance:

100.00%

Publisher:

Abstract:

In this paper we present a variational technique for the reconstruction of 3D cylindrical surfaces. Roughly speaking, by a cylindrical surface we mean a surface that can be parameterized using the projection on a cylinder in terms of two coordinates, representing respectively the displacement and the angle in a cylindrical coordinate system. The starting point for our method is a set of different views of a cylindrical surface, together with a precomputed disparity map estimation between pairs of images. The proposed variational technique is based on an energy minimization which balances, on the one hand, the regularity of the cylindrical function given by the distance of the surface points to the cylinder axis and, on the other hand, the distance between the projection of the surface points on the images and the expected location following the precomputed disparity map estimation between pairs of images. One interesting advantage of this approach is that we regularize the 3D surface by means of a two-dimensional minimization problem. We show some experimental results for large stereo sequences.
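
As a rough illustration of the two-dimensional minimization involved, the toy sketch below smooths a radius map r(angle, displacement) toward noisy per-point estimates. It is a deliberate simplification: the actual data term above compares image projections against the disparity maps, whereas here we assume radius estimates have already been triangulated from those disparities; all names and values are invented.

```python
import numpy as np

def smooth_radius_map(r_obs, lam=0.1, n_iter=500, step=0.2):
    """Toy version of the 2D regularization described above.

    r_obs[i, j] is assumed to be a noisy radius estimate (distance of a
    surface point to the cylinder axis) indexed by angle i and axial
    displacement j. Minimizes sum |grad r|^2 + lam * (r - r_obs)^2
    by explicit gradient descent.
    """
    r = r_obs.copy()
    for _ in range(n_iter):
        # 5-point Laplacian; the angle axis wraps around the cylinder.
        lap = (np.roll(r, 1, 0) + np.roll(r, -1, 0)
               + np.roll(r, 1, 1) + np.roll(r, -1, 1) - 4.0 * r)
        r += step * (lap - lam * (r - r_obs))
    return r

noisy = 10.0 + 0.3 * np.random.randn(90, 40)  # toy radius estimates
print(smooth_radius_map(noisy).std())         # variance shrinks
```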

Relevance:

100.00%

Publisher:

Abstract:

The electric dipole response of neutron-rich nickel isotopes has been investigated using the LAND setup at GSI in Darmstadt (Germany). Relativistic secondary beams of 56−57Ni and 67−72Ni at approximately 500 AMeV were generated by projectile fragmentation of stable ions on a 4 g/cm2 Be target and subsequent separation in the magnetic dipole fields of the FRagment Separator (FRS). After reaching the LAND setup in Cave C, the radioactive ions were excited electromagnetically in the electric field of a Pb target. The decay products were measured in inverse kinematics using various detectors. Neutron-rich 67−69Ni isotopes decay by the emission of neutrons, which are detected in the LAND detector. The present analysis concentrates on the (gamma,n) and (gamma,2n) channels in these nuclei, since the proton and three-neutron thresholds are unlikely to be reached considering the virtual photon spectrum for nickel ions at 500 AMeV. A measurement of the stable 58Ni isotope is used as a benchmark to check the present results against previously published data. The measured (gamma,n) and (gamma,np) channels are compared with an inclusive photoneutron measurement by Fultz and coworkers, and the two are consistent within the respective errors. The measured excitation energy distributions of 67−69Ni contain a large portion of the Giant Dipole Resonance (GDR) strength predicted by the Thomas-Reiche-Kuhn energy-weighted sum rule, as well as a significant amount of low-lying E1 strength that cannot be attributed to the GDR alone. The GDR distribution parameters (peak energies and widths) are calculated using well-established semi-empirical systematics. The GDR strength is extracted from a chi-square minimization of the model GDR against the measured data of the (gamma,2n) channel, thereby excluding any influence of possible low-lying strength. Subtracting the obtained GDR distribution from the total measured E1 strength yields the low-lying E1 strength distribution, which is attributed to the Pygmy Dipole Resonance (PDR). The peak energy, width, and strength of the PDR are extracted using a Gaussian function. The minimization of trial Gaussian distributions against the data does not converge towards a sharp minimum, so the results are presented as a chi-square distribution over all three Gaussian parameters. Various predictions of PDR distributions exist, as well as a recent measurement of the 68Ni pygmy dipole resonance obtained by virtual photon scattering, to which the present PDR distribution is also compared.
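
The parameter scan described for the Gaussian fit is simple to reproduce in outline. The sketch below (illustrative Python only; the arrays and grids are placeholders, not the measured 67−69Ni data) evaluates the chi-square surface over a grid of peak, width, and area values instead of reporting a single converged fit:

```python
import numpy as np

def chi2_surface(E, strength, err, peaks, widths, areas):
    """Chi-square over a (peak, width, area) grid of Gaussian PDR models.

    E, strength, err: measured low-lying E1 distribution and its errors
    (placeholder arrays here). Returning the full surface keeps shallow,
    non-sharp minima visible, as in the analysis above.
    """
    chi2 = np.empty((len(peaks), len(widths), len(areas)))
    for i, mu in enumerate(peaks):
        for j, sig in enumerate(widths):
            for k, a in enumerate(areas):
                model = (a / (sig * np.sqrt(2.0 * np.pi))
                         * np.exp(-0.5 * ((E - mu) / sig) ** 2))
                chi2[i, j, k] = np.sum(((strength - model) / err) ** 2)
    return chi2

E = np.linspace(6.0, 14.0, 40)  # toy energy grid in MeV
truth = 1.5 * np.exp(-0.5 * (E - 9.0) ** 2) / np.sqrt(2.0 * np.pi)
data = truth + np.random.default_rng(0).normal(0.0, 0.05, E.size)
surface = chi2_surface(E, data, np.full(E.size, 0.05),
                       np.linspace(8, 10, 21), np.linspace(0.5, 2, 16),
                       np.linspace(0.5, 3, 26))
print(np.unravel_index(surface.argmin(), surface.shape))
```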

Relevance:

100.00%

Publisher:

Abstract:

An important aspect of the QTL mapping problem is the treatment of missing genotype data. If complete genotype data were available, QTL mapping would reduce to the problem of model selection in linear regression. However, in the consideration of loci in the intervals between the available genetic markers, genotype data is inherently missing. Even at the typed genetic markers, genotype data is seldom complete, as a result of failures in the genotyping assays or for the sake of economy (for example, in the case of selective genotyping, where only individuals with extreme phenotypes are genotyped). We discuss the use of algorithms developed for hidden Markov models (HMMs) to deal with the missing genotype data problem.
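
As a concrete (toy) instance of the HMM machinery, the sketch below runs the forward-backward algorithm for a backcross, where each locus has two possible genotypes; missing observations, including untyped pseudomarkers between markers, simply contribute a flat emission term. This is an illustrative sketch, not the authors' implementation (the same computation underlies, e.g., calc.genoprob in the R/qtl package); the error rate and recombination fractions are placeholders.

```python
import numpy as np

def genotype_probs(obs, rec_fracs, error=0.001):
    """Forward-backward for a backcross with states (AA, AB).

    obs[k]: observed genotype at locus k, either 0 (AA), 1 (AB), or
    None (missing, e.g. an untyped pseudomarker or a failed assay).
    rec_fracs[k]: recombination fraction between loci k and k+1.
    Returns P(true genotype | all observed marker data) at each locus.
    """
    n = len(obs)
    def emit(k):
        if obs[k] is None:
            return np.ones(2)                 # missing: flat emission
        e = np.full(2, error); e[obs[k]] = 1.0 - error
        return e
    def trans(r):
        return np.array([[1 - r, r], [r, 1 - r]])
    fwd = np.zeros((n, 2)); bwd = np.zeros((n, 2))
    fwd[0] = 0.5 * emit(0)                    # equal prior in a backcross
    for k in range(1, n):
        fwd[k] = emit(k) * (trans(rec_fracs[k - 1]).T @ fwd[k - 1])
    bwd[-1] = 1.0
    for k in range(n - 2, -1, -1):
        bwd[k] = trans(rec_fracs[k]) @ (emit(k + 1) * bwd[k + 1])
    post = fwd * bwd
    return post / post.sum(axis=1, keepdims=True)

# Locus 1 is an untyped pseudomarker between two typed markers.
print(genotype_probs([0, None, 1], rec_fracs=[0.1, 0.1]))
```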

Relevance:

100.00%

Publisher:

Abstract:

In this work, we propose a distributed rate allocation algorithm that minimizes the average decoding delay for multimedia clients in inter-session network coding systems. We consider a scenario where the users are organized in a mesh network and each user requests the content of one of the available sources. We propose a novel distributed algorithm where network users determine the coding operations and the packet rates to be requested from the parent nodes, such that the decoding delay is minimized for all clients. A rate allocation problem is solved by every user, which seeks the rates that minimize the average decoding delay for its children and for itself. Since this optimization problem is a priori non-convex, we introduce the concept of equivalent packet flows, which makes it possible to estimate the expected number of packets that every user needs to collect for decoding. We then decompose our original rate allocation problem into a set of convex subproblems, which are eventually combined to obtain an effective approximate solution to the delay minimization problem. The results demonstrate that the proposed scheme eliminates bottlenecks and reduces the decoding delay experienced by users with limited bandwidth resources. We validate the performance of our distributed rate allocation algorithm in different video streaming scenarios using the NS-3 network simulator, and show that our system is able to benefit from inter-session network coding for the simultaneous delivery of video sessions in networks with path diversity.
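
The paper's decomposition is tied to its network-coding model, but the flavor of a single convex subproblem can be shown generically. In the toy sketch below (an invented stand-in, not the paper's formulation), K[i] plays the role of the expected packet count from the equivalent-packet-flow estimate, and a node splits its download capacity among its children to minimize a convex average-delay proxy:

```python
import numpy as np
from scipy.optimize import minimize

def allocate_rates(K, capacity):
    """Toy convex rate-allocation subproblem (illustrative only).

    K[i]: expected number of packets child i must collect to decode;
    r[i]: rate granted to child i. Minimizing the average delay proxy
    sum(K[i] / r[i]) subject to sum(r) <= capacity is convex, so each
    node can solve its own subproblem locally.
    """
    n = len(K)
    x0 = np.full(n, capacity / n)
    res = minimize(
        lambda r: np.sum(K / r),
        x0,
        bounds=[(1e-6, None)] * n,
        constraints=[{"type": "ineq", "fun": lambda r: capacity - r.sum()}],
    )
    return res.x

# The child needing 4x the packets gets 2x the rate (r_i ~ sqrt(K_i)).
print(allocate_rates(np.array([10.0, 40.0]), capacity=10.0))
```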

Relevance:

100.00%

Publisher:

Abstract:

Methods for tracking an object have generally fallen into two groups: tracking by detection and tracking through local optimization. The advantage of detection-based tracking is its ability to deal with target appearance and disappearance, but it does not naturally take advantage of target motion continuity during detection. The advantage of local optimization is efficiency and accuracy, but it requires additional algorithms to initialize tracking when the target is lost. To bridge these two approaches, we propose a framework that unifies detection and tracking as a time-series Bayesian estimation problem. The basis of our approach is to treat both detection and tracking as a sequential entropy minimization problem, where the goal is to determine the parameters describing a target in each frame. To do this, we integrate the Active Testing (AT) paradigm with Bayesian filtering, resulting in a framework capable of both detecting and tracking robustly in situations where the target object enters and leaves the field of view regularly. We demonstrate our approach on a retinal tool tracking problem and show through extensive experiments that our method provides an efficient and robust tracking solution.
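
The core of sequential entropy minimization is easy to state in miniature. In the toy sketch below (illustrative only, not the paper's Active Testing integration; the belief, the tests, and their likelihoods are invented), each step greedily picks the query whose answer is expected to shrink the posterior entropy the most:

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def most_informative_test(belief, tests):
    """Pick the test minimizing the expected posterior entropy.

    belief[i] = P(target parameter = i); tests[name][i] = P(test says
    "yes" | target = i). Greedy one-step entropy minimization, the
    basic move behind a sequential entropy minimization scheme.
    """
    best_name, best_h = None, np.inf
    for name, lik in tests.items():
        p_yes = float(np.clip(np.sum(belief * lik), 1e-9, 1 - 1e-9))
        post_yes = belief * lik / p_yes
        post_no = belief * (1.0 - lik) / (1.0 - p_yes)
        h = p_yes * entropy(post_yes) + (1.0 - p_yes) * entropy(post_no)
        if h < best_h:
            best_name, best_h = name, h
    return best_name

# Uniform belief over 8 candidate target positions; threshold tests ask
# "is the position below k?". The median split (k = 4) wins, as expected.
belief = np.full(8, 1.0 / 8.0)
tests = {k: (np.arange(8) < k).astype(float) for k in (2, 4, 6)}
print(most_informative_test(belief, tests))
```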

Relevance:

100.00%

Publisher:

Abstract:

A benchmark problem set consisting of four problem levels was developed for the simulation of Cr isotope fractionation in 1D and 2D domains. The benchmark is based on a recent field study where Cr(VI) reduction and the accompanying Cr isotope fractionation occur abiotically by an aqueous reaction with dissolved Fe2+ (Wanner et al., 2012, Appl. Geochem., 27, 644–662). The problem set includes simulation of the major processes affecting the Cr isotopic composition, such as the dissolution of various Cr(VI)-bearing minerals, fractionation during abiotic aqueous Cr(VI) reduction, and non-fractionating precipitation of Cr(III) as sparingly soluble Cr-hydroxide. Accuracy of the presented solutions was ensured by running the problems with four well-established reactive transport modeling codes: TOUGHREACT, MIN3P, CRUNCHFLOW, and FLOTRAN. Results were also compared with an analytical Rayleigh-type fractionation model. An additional check on the correctness of the results was obtained by comparing output from the problem levels simulating Cr isotope fractionation with the corresponding levels simulating only bulk concentrations. For all problem levels, model-to-model comparisons showed excellent agreement, suggesting that for the tested geochemical processes any of these codes is capable of accurately simulating the fate of individual Cr isotopes.
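
The analytical Rayleigh-type model used as a reference solution has a standard closed form. A minimal sketch (toy values, not the benchmark's parameters):

```python
import numpy as np

def rayleigh_delta(delta0, f, eps):
    """Rayleigh fractionation of the residual reactant pool.

    delta0: initial delta53Cr of Cr(VI) in permil; f: remaining Cr(VI)
    fraction; eps: enrichment factor in permil, eps = 1000 * (alpha - 1).
    """
    alpha = 1.0 + eps / 1000.0
    return (delta0 + 1000.0) * f ** (alpha - 1.0) - 1000.0

# Toy check: a negative eps drives the residual Cr(VI) isotopically
# heavier as reduction proceeds (f -> 0). Values are illustrative only.
f = np.linspace(1.0, 0.05, 5)
print(rayleigh_delta(0.0, f, -3.5))
```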

Relevance:

100.00%

Publisher:

Abstract:

Image segmentation can be posed as the problem of minimizing a discrete energy. This raises two questions: first, defining an energy whose minimum provides the desired segmentation and, second, once the energy is defined, finding its global minimum. The first part of this thesis addresses the second question and the second part, in a more applied context, the first. Minimization techniques based on graph cuts find the minimum of a discrete energy in polynomial time via min-cut/max-flow algorithms. Nevertheless, these techniques can only be applied to graph-representable energies. An important challenge is to study which energies are graph-representable and to construct graphs which represent them; this is equivalent to finding a gadget function with additional variables. The first part of this work studies properties of gadget functions which allow the number of additional variables to be bounded from above. Moreover, the graph-representable energies with four variables are characterized, and gadgets with two additional variables are defined for them. The second part addresses the segmentation of medical images, often the basis for diagnosis and the monitoring of therapies. Multi-atlas segmentation is a powerful automatic segmentation technique for medical images, with three important aspects to highlight: the type of registration between the atlases and the target image, the atlas selection, and the label fusion method. We formulate the label fusion method as a minimization problem and introduce two new graph-representable energies. The first is a second-order energy, used to segment the liver and background in abdominal computed tomography (CT) images. The second is a higher-order energy, used to segment the hippocampi and background in magnetic resonance images (MRI) of the brain.
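
For readers unfamiliar with graph representability, the classic representable case is worth seeing concretely. The sketch below (a textbook construction in illustrative Python, far simpler than the four-variable and higher-order gadgets studied in the thesis) encodes an Ising-type binary energy as an s-t graph and reads the minimizer off a min-cut:

```python
import networkx as nx

def minimize_ising_energy(unary, pairwise):
    """Min-cut minimization of a graph-representable binary energy.

    E(x) = sum_i u_i(x_i) + sum_(i,j) w_ij * [x_i != x_j], w_ij >= 0.
    unary: {i: (u_i(0), u_i(1))}; pairwise: {(i, j): w_ij}. Every cut
    separating "s" from "t" has capacity equal to the energy of the
    corresponding labeling, so the minimum cut is the minimizer.
    """
    G = nx.DiGraph()
    for i, (c0, c1) in unary.items():
        G.add_edge("s", i, capacity=c1)  # cut iff x_i = 1
        G.add_edge(i, "t", capacity=c0)  # cut iff x_i = 0
    for (i, j), w in pairwise.items():
        G.add_edge(i, j, capacity=w)     # cut iff x_i and x_j disagree
        G.add_edge(j, i, capacity=w)
    energy, (source_side, _) = nx.minimum_cut(G, "s", "t")
    labels = {i: 0 if i in source_side else 1 for i in unary}
    return energy, labels

# Toy: two pixels preferring different labels, coupled with weight 3.
print(minimize_ising_energy({1: (0.0, 5.0), 2: (4.0, 0.0)}, {(1, 2): 3.0}))
```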

Relevance:

100.00%

Publisher:

Abstract:

The present study explores a "hydrophobic" energy function for folding simulations of the protein lattice model. The contribution of each monomer to the conformational energy is the product of its "hydrophobicity" and the number of contacts it makes, i.e., E(h, c) = −Σ_{i=1}^{N} c_i h_i = −h·c is the negative scalar product of two vectors in N-dimensional Cartesian space: h = (h_1, …, h_N), which represents the monomer hydrophobicities and is sequence-dependent; and c = (c_1, …, c_N), which represents the number of contacts made by each monomer and is conformation-dependent. A simple theoretical analysis shows that restrictions are imposed concomitantly on both sequences and native structures if the stability criterion for protein-like behavior is to be satisfied. Given a conformation with contact vector c, the best sequence is a vector h along the direction upon which the projection of c − c̄ is maximal, where c̄ is the diagonal vector with components equal to the average number of contacts per monomer in the unfolded state. Best native conformations are suggested to be not maximally compact, as assumed in many studies, but the ones with the largest variance of contacts among their monomers, i.e., with monomers tending to occupy completely buried or completely exposed positions. This inside/outside segregation is reflected in an apolar/polar distribution on the corresponding sequence. Monte Carlo simulations in two dimensions corroborate this general scheme. Sequences targeted to conformations with large contact variances folded cooperatively, with the thermodynamics of a two-state transition. Sequences targeted to maximally compact conformations, which have lower contact variance, were found either to have a degenerate ground state or to fold with much lower cooperativity.
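
A tiny numeric sketch of the scalar-product energy and the best-sequence direction (illustrative Python; the contact counts are invented, and the unfolded-state average c̄ is approximated here by the sample mean of c, a simplification of the definition above):

```python
import numpy as np

def hydrophobic_energy(h, c):
    """E(h, c) = -sum_i c_i * h_i, the negative scalar product above."""
    return -np.dot(h, c)

def best_sequence_direction(c):
    """Best (unit-norm) sequence for a conformation with contacts c.

    Per the analysis above, the optimal h points along c - c_bar. A
    high-variance c, with monomers fully buried or fully exposed,
    admits a lower-energy (more stable) sequence.
    """
    d = c - c.mean()
    return d / np.linalg.norm(d)

# Toy 2D-lattice-style contact counts: buried monomers contact more.
c = np.array([3.0, 0.0, 2.0, 0.0, 3.0, 1.0])
h = best_sequence_direction(c)
print(hydrophobic_energy(h, c))
```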

Relevance:

100.00%

Publisher:

Abstract:

Protein aggregation is studied by following the simultaneous folding of two designed identical 20-letter amino acid chains within the framework of a lattice model and using Monte Carlo simulations. It is found that protein aggregation is determined by elementary structures (partially folded intermediates) controlled by local contacts among some of the most strongly interacting amino acids and formed at an early stage in the folding process.

Relevance:

100.00%

Publisher:

Abstract:

Planning a goal-directed sequence of behavior is a higher function of the human brain that relies on the integrity of prefrontal cortical areas. In the Tower of London test, a puzzle in which beads sliding on pegs must be moved to match a designated goal configuration, patients with lesioned prefrontal cortex show deficits in planning a goal-directed sequence of moves. We propose a neuronal network model of sequence planning that passes this test and, when lesioned, fails in a way that mimics prefrontal patients’ behavior. Our model comprises a descending planning system with hierarchically organized plan, operation, and gesture levels, and an ascending evaluative system that analyzes the problem and computes internal reward signals that index the correct/erroneous status of the plan. Multiple parallel pathways connecting the evaluative and planning systems amend the plan and adapt it to the current problem. The model illustrates how specialized hierarchically organized neuronal assemblies may collectively emulate central executive or supervisory functions of the human brain.

Relevance:

100.00%

Publisher:

Abstract:

The relationship between the optimization of the potential function and the foldability of theoretical protein models is studied based on investigations of a 27-mer cubic-lattice protein model and a more realistic lattice model for the protein crambin. In both the simple and the more complicated systems, optimization of the energy parameters achieves significant improvements in the statistical-mechanical characteristics of the systems and leads to foldable protein models in simulation experiments. The foldability of the protein models is characterized by their statistical-mechanical properties, e.g., by the density of states and by Monte Carlo folding simulations of the models. With optimized energy parameters, a high level of consistency exists among different interactions in the native structures of the protein models, as revealed by a correlation function between the optimized energy parameters and the native structure of the model proteins. The results of this work are relevant to the design of a general potential function for folding proteins by theoretical simulations.

Relevance:

100.00%

Publisher:

Abstract:

The electronic structure and spectrum of several models of the binuclear metal site in soluble CuA domains of cytochrome-c oxidase have been calculated using an extended version of the complete neglect of differential overlap/spectroscopic method. The experimental spectra have two strong transitions of nearly equal intensity around 500 nm and a near-IR transition close to 800 nm. The model that best reproduces these features consists of a dimer of two blue (type 1) copper centers, in which each Cu atom replaces the missing imidazole on the other Cu atom. Thus, both Cu atoms have one cysteine sulfur atom and one imidazole nitrogen atom as ligands, and there are no bridging ligands but a direct Cu-Cu bond. According to the calculations, the two strong bands in the visible region originate from exciton coupling of the dipoles of the two copper monomers, and the near-IR band is a charge-transfer transition between the two Cu atoms. The known amino acid sequence has been used to construct a molecular model of the CuA site using a template and energy minimization. In this model, the two ligand cysteine residues lie in one turn of an alpha-helix, whereas one ligand histidine is in a loop following this helix and the other is in a beta-strand.