171 results for LAPLACIAN


Abstract:

This work deals with some classes of linear second order partial differential operators with non-negative characteristic form and underlying non-Euclidean structures. These structures are determined by families of locally Lipschitz-continuous vector fields in R^N, generating metric spaces of Carnot-Carathéodory type. The Carnot-Carathéodory metric related to a family {X_j}_{j=1,...,m} is the control distance obtained by minimizing the time needed to go from one point to another along piecewise trajectories of the vector fields. We are mainly interested in the cases in which a Sobolev-type inequality holds with respect to the X-gradient, and/or the X-control distance is doubling with respect to the Lebesgue measure in R^N. This study is divided into three parts (each corresponding to a chapter), and the subject of each one is a class of operators that includes the class of the subsequent one. In the first chapter, after recalling "X-ellipticity" and related concepts introduced by Kogoj and Lanconelli in [KL00], we show a Maximum Principle for linear second order differential operators for which we only assume a Sobolev-type inequality together with a summability condition on the lower order terms. Adding some crucial hypotheses on the measure and on the vector fields (doubling property and Poincaré inequality), we are able to obtain some Liouville-type results. This chapter is based on the paper [GL03] by Gutiérrez and Lanconelli. In the second chapter we treat some ultraparabolic equations on Lie groups. In this case R^N is the support of a Lie group, and moreover we require that the vector fields be left invariant. After recalling some results of Cinti [Cin07] about this class of operators and the associated potential theory, we prove a scalar convexity property for the mean-value operators of L-subharmonic functions, where L is our differential operator. In the third chapter we prove a necessary and sufficient condition of regularity, for boundary points, for the Dirichlet problem on an open subset of R^N related to a sub-Laplacian. On a Carnot group we give the essential background for this type of operator, and introduce the notion of "quasi-boundedness". Then we show the strict relationship between this notion, the fundamental solution of the given operator, and the regularity of the boundary points.
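
For orientation, the control distance referred to above is usually written as follows (a standard sub-unit formulation, not quoted from the thesis):

```latex
d_{CC}(x,y)\;=\;\inf\Big\{\,T>0 \;:\; \exists\,\gamma:[0,T]\to\mathbb{R}^N,\ \gamma(0)=x,\ \gamma(T)=y,\
\dot\gamma(t)=\sum_{j=1}^{m}a_j(t)\,X_j(\gamma(t)),\ \sum_{j=1}^{m}a_j(t)^2\le 1\Big\}.
```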

Abstract:

This thesis gathers the work carried out by the author in the last three years of research and concerns the study and implementation of algorithms to coordinate and control a swarm of mobile robots moving in unknown environments. In particular, the author's attention is focused on two different approaches to two different problems. The first algorithm considered in this work deals with the possibility of decomposing a complex main task into many simple subtasks by exploiting the decentralized implementation of the so-called \emph{Null Space Behavioral} paradigm. This approach to the problem of merging different subtasks with assigned priority is slightly modified in order to handle critical situations that can be detected when robots are moving through an unknown environment. In fact, issues can occur when one or more robots get stuck in local minima: a smart strategy to avoid deadlock situations is provided by the author and the algorithm is validated by simulation analysis. The second problem deals with the use of concepts borrowed from \emph{graph theory} to control a group of differential wheel robots by exploiting the Laplacian solution of the consensus problem. Constraints on the swarm communication topology have been introduced via a range-and-bearing platform developed at the Distributed Intelligent Systems and Algorithms Laboratory (DISAL), EPFL (Lausanne, CH), where part of the author's work was carried out. The control algorithm is validated by simulation analysis and later demonstrated on a team of four robots engaged in a formation mission. To conclude, the capabilities of the algorithm based on the local solution of the consensus problem for differential wheel robots are demonstrated in an application scenario where nine robots are engaged in a hunting task.
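
The Laplacian-based consensus protocol mentioned above can be sketched in a few lines. This is a generic illustration only: the ring topology, step size and formation offsets below are made up, and it is not the thesis implementation.

```python
import numpy as np

# Adjacency matrix of a hypothetical 4-robot communication topology (a ring).
A = np.array([[0., 1., 0., 1.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [1., 0., 1., 0.]])
L = np.diag(A.sum(axis=1)) - A          # graph Laplacian L = D - A

x = np.array([[0., 0.], [4., 1.], [2., 5.], [6., 3.]])        # initial positions
offsets = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])  # desired formation

dt = 0.05
for _ in range(500):
    # Consensus dynamics on the offset-shifted states: xdot = -L (x - offsets).
    # The shifted states converge to a common value, so the robots assume the formation.
    x -= dt * L @ (x - offsets)

print(np.round(x - offsets, 3))  # all rows (approximately) equal at convergence
```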

Abstract:

The present thesis is concerned with certain aspects of differential and pseudodifferential operators on infinite dimensional spaces. We aim to generalize classical operator theoretical concepts of pseudodifferential operators on finite dimensional spaces to the infinite dimensional case. At first we summarize some facts about the canonical Gaussian measures on infinite dimensional Hilbert space riggings. Considering the naturally unitary group actions in $L^2(H_-,gamma)$ given by weighted shifts and multiplication with $e^{iSkp{t}{cdot}_0}$ we obtain an unitary equivalence $F$ between them. In this sense $F$ can be considered as an abstract Fourier transform. We show that $F$ coincides with the Fourier-Wiener transform. Using the Fourier-Wiener transform we define pseudodifferential operators in Weyl- and Kohn-Nirenberg form on our Hilbert space rigging. In the case of this Gaussian measure $gamma$ we discuss several possible Laplacians, at first the Ornstein-Uhlenbeck operator and then pseudo-differential operators with negative definite symbol. In the second case, these operators are generators of $L^2_gamma$-sub-Markovian semi-groups and $L^2_gamma$-Dirichlet-forms. In 1992 Gramsch, Ueberberg and Wagner described a construction of generalized Hörmander classes by commutator methods. Following this concept and the classical finite dimensional description of $Psi_{ro,delta}^0$ ($0leqdeltaleqroleq 1$, $delta< 1$) in the $C^*$-algebra $L(L^2)$ by Beals and Cordes we construct in both cases generalized Hörmander classes, which are $Psi^*$-algebras. These classes act on a scale of Sobolev spaces, generated by our Laplacian. In the case of the Ornstein-Uhlenbeck operator, we prove that a large class of continuous pseudodifferential operators considered by Albeverio and Dalecky in 1998 is contained in our generalized Hörmander class. Furthermore, in the case of a Laplacian with negative definite symbol, we develop a symbolic calculus for our operators. We show some Fredholm-criteria for them and prove that these Fredholm-operators are hypoelliptic. Moreover, in the finite dimensional case, using the Gaussian-measure instead of the Lebesgue-measure the index of these Fredholm operators is still given by Fedosov's formula. Considering an infinite dimensional Heisenberg group rigging we discuss the connection of some representations of the Heisenberg group to pseudo-differential operators on infinite dimensional spaces. We use this connections to calculate the spectrum of pseudodifferential operators and to construct generalized Hörmander classes given by smooth elements which are spectrally invariant in $L^2(H_-,gamma)$. Finally, given a topological space $X$ with Borel measure $mu$, a locally compact group $G$ and a representation $B$ of $G$ in the group of all homeomorphisms of $X$, we construct a Borel measure $mu_s$ on $X$ which is invariant under $B(G)$.
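
For orientation, in the finite dimensional Gaussian model $L^2(\mathbb{R}^n,\gamma)$ the Ornstein-Uhlenbeck operator mentioned above takes the standard form (not quoted from the thesis):

```latex
(Lu)(x)\;=\;\Delta u(x)\;-\;\langle x,\nabla u(x)\rangle,
\qquad
\int_{\mathbb{R}^n} (Lu)\,v\;d\gamma \;=\; -\int_{\mathbb{R}^n} \langle\nabla u,\nabla v\rangle\;d\gamma,
```

so that $-L$ is a non-negative self-adjoint operator on $L^2(\gamma)$, diagonalized by the Hermite polynomials.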

Abstract:

In the present dissertation we consider Feynman integrals in the framework of dimensional regularization. As all such integrals can be expressed in terms of scalar integrals, we focus on this latter kind of integral in its Feynman parametric representation and study its mathematical properties, applying in part graph theory, algebraic geometry and number theory. The three main topics are the graph theoretic properties of the Symanzik polynomials, the termination of the sector decomposition algorithm of Binoth and Heinrich, and the arithmetic nature of the Laurent coefficients of Feynman integrals.

The integrand of an arbitrary dimensionally regularised, scalar Feynman integral can be expressed in terms of the two well-known Symanzik polynomials. We give a detailed review of the graph theoretic properties of these polynomials. By the matrix-tree theorem, the first of these polynomials can be constructed from the determinant of a minor of the generic Laplacian matrix of a graph. By use of a generalization of this theorem, the all-minors matrix-tree theorem, we derive a new relation which furthermore relates the second Symanzik polynomial to the Laplacian matrix of a graph.

Starting from the Feynman parametric representation, the sector decomposition algorithm of Binoth and Heinrich serves for the numerical evaluation of the Laurent coefficients of an arbitrary Feynman integral in the Euclidean momentum region. This widely used algorithm contains an iterated step, consisting of an appropriate decomposition of the domain of integration and the deformation of the resulting pieces. This procedure leads to a disentanglement of the overlapping singularities of the integral. By giving a counter-example we exhibit the problem that this iterative step of the algorithm does not terminate in every possible case. We solve this problem by presenting an appropriate extension of the algorithm, which is guaranteed to terminate. This is achieved by mapping the iterative step to an abstract combinatorial problem, known as Hironaka's polyhedra game. We present a publicly available implementation of the improved algorithm. Furthermore we explain the relationship of the sector decomposition method with the resolution of singularities of a variety, given by a sequence of blow-ups, in algebraic geometry.

Motivated by the connection between Feynman integrals and topics of algebraic geometry, we consider the set of periods as defined by Kontsevich and Zagier. This special set of numbers contains the set of multiple zeta values and certain values of polylogarithms, which in turn are known to be present in results for Laurent coefficients of certain dimensionally regularized Feynman integrals. By use of the extended sector decomposition algorithm we prove a theorem which implies that the Laurent coefficients of an arbitrary Feynman integral are periods if the masses and kinematical invariants take values in the Euclidean momentum region. The statement is formulated for an even more general class of integrals, allowing for an arbitrary number of polynomials in the integrand.
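
For concreteness, the classical matrix-tree relation behind the first Symanzik polynomial reads as follows (standard form; the new relation for the second polynomial derived in the dissertation is not reproduced here). Writing $\mathcal{T}$ for the set of spanning trees of the graph,

```latex
\mathcal{U}(x)\;=\;\sum_{T\in\mathcal{T}}\ \prod_{e\notin T} x_e
\;=\;\Big(\prod_{e} x_e\Big)\,\det\widehat{L}(1/x),
```

where $\widehat{L}(1/x)$ is the weighted graph Laplacian with edge weights $1/x_e$ and one row and the corresponding column deleted.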

Abstract:

Let O^{2n} be a symplectic toric orbifold with a fixed T^n-action and with a toric Kähler metric g. In [10] we explored whether, when O is a manifold, the equivariant spectrum of the Laplace operator $\Delta_g$ on $C^\infty(O)$ determines O up to symplectomorphism. In the setting of toric orbifolds we significantly improve upon our previous results and show that a generic toric orbifold is determined by its equivariant spectrum, up to two possibilities. This involves developing the asymptotic expansion of the heat trace on an orbifold in the presence of an isometry. We also show that the equivariant spectrum determines whether the toric Kähler metric has constant scalar curvature. (C) 2012 Elsevier Inc. All rights reserved.
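
For context, on a closed n-dimensional Riemannian manifold the heat trace admits the standard small-time asymptotic expansion (not quoted from the paper)

```latex
\operatorname{Tr}\,e^{-t\Delta_g}\;\sim\;(4\pi t)^{-n/2}\sum_{k\ge 0} a_k\,t^{k},\qquad t\to 0^{+},
```

whose coefficients $a_k$ are spectral invariants; on an orbifold, and in the presence of an isometry, additional terms supported on the singular strata and fixed point sets appear, and it is these extra terms that the equivariant arguments above exploit.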

Abstract:

Let M^{2n} be a symplectic toric manifold with a fixed T^n-action and with a toric K\"ahler metric g. Abreu asked whether the spectrum of the Laplace operator $\Delta_g$ on $\mathcal{C}^\infty(M)$ determines the moment polytope of M, and hence by Delzant's theorem determines M up to symplectomorphism. We report on some progress made on an equivariant version of this conjecture. If the moment polygon of M^4 is generic and does not have too many pairs of parallel sides, the so-called equivariant spectrum of M and the spectrum of its associated real manifold M_R determine its polygon, up to translation and a small number of choices. For M of arbitrary even dimension and with integer cohomology class, the equivariant spectrum of the Laplacian acting on sections of a naturally associated line bundle determines the moment polytope of M.

Abstract:

In 1983, M. van den Berg made his Fundamental Gap Conjecture about the difference between the first two Dirichlet eigenvalues (the fundamental gap) of any convex domain in the Euclidean plane. Recently, progress has been made in the case where the domains are polygons and, in particular, triangles. We examine the conjecture for triangles in hyperbolic geometry, though we seek an upper bound for the fundamental gap rather than a lower bound.
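
For reference, with $\lambda_1 < \lambda_2$ the first two Dirichlet eigenvalues of a convex Euclidean domain $\Omega$ of diameter $D$, the conjecture (since proved by Andrews and Clutterbuck) asserts the lower bound

```latex
\Gamma(\Omega)\;=\;\lambda_2(\Omega)-\lambda_1(\Omega)\;\ge\;\frac{3\pi^2}{D^2}.
```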

Abstract:

The numerical solution of the incompressible Navier-Stokes equations offers an alternative to experimental analysis of fluid-structure interaction (FSI). If we are able to model such systems accurately with numerical solutions, we can save considerable time and effort and cut back on costs. These advantages are even more obvious when considering huge structures like bridges, high-rise buildings or wind turbine blades with diameters as large as 200 meters. The modeling of such processes, however, involves complex multiphysics problems along with complex geometries. This thesis focuses on a novel vorticity-velocity formulation called the Kinematic Laplacian Equation (KLE) to solve the incompressible Navier-Stokes equations for such FSI problems. This scheme allows for the implementation of robust adaptive ordinary differential equation (ODE) time integration schemes, allowing us to tackle each problem as a separate module. The current algorithm for the KLE uses an unstructured quadrilateral mesh, formed by dividing each triangle of an unstructured triangular mesh into three quadrilaterals for spatial discretization. This research deals with determining a suitable measure of mesh quality based on the physics of the problems being tackled. This is followed by exploring methods to improve the quality of the quadrilateral elements obtained from the triangles, thereby improving the overall mesh quality. A series of numerical experiments was designed and conducted for this purpose, and the results obtained were tested on different geometries with varying degrees of mesh density.
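
Quadrilateral mesh quality can be quantified in many ways; the following is a hedged sketch of one generic measure, the minimum scaled Jacobian over the four corners, not the physics-based measure developed in the thesis.

```python
import numpy as np

# Minimum scaled Jacobian of a quadrilateral: 1.0 for a square,
# near 0 for a badly distorted element, <= 0 for an inverted one.
def min_scaled_jacobian(quad):
    """quad: (4, 2) array of corner coordinates in counter-clockwise order."""
    q = np.asarray(quad, dtype=float)
    scores = []
    for i in range(4):
        e1 = q[(i + 1) % 4] - q[i]   # edge leaving corner i
        e2 = q[(i - 1) % 4] - q[i]   # edge arriving at corner i, reversed
        cross = e1[0] * e2[1] - e1[1] * e2[0]
        scores.append(cross / (np.linalg.norm(e1) * np.linalg.norm(e2)))
    return min(scores)

print(min_scaled_jacobian([(0, 0), (1, 0), (1, 1), (0, 1)]))        # 1.0
print(min_scaled_jacobian([(0, 0), (1, 0), (1, 1), (0.9, 0.9)]))    # 0.0: degenerate corner
```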

Abstract:

Among the classical operators of mathematical physics the Laplacian plays an important role due to the number of different situations that can be modelled by it. Because of this, a great effort has been made by mathematicians as well as engineers to master its properties, to the point that nearly everything has been said about them from a qualitative viewpoint. Quantitative results have also been obtained through the use of new numerical techniques supported by the computer. Finite element methods and boundary techniques have been successfully applied to engineering problems, as can be seen in the technical literature (for instance [1], [2], [3]). Boundary techniques are especially advantageous in those cases in which the main interest is concentrated on what is happening at the boundary. This situation is very usual in potential problems due to the properties of harmonic functions. In this paper we intend to show how a boundary condition different from the classical ones, but physically sound, is introduced without any violence into the discretization frame of the Boundary Integral Equation Method. The idea will be developed in the context of heat conduction in axisymmetric problems, but it is hoped that its extension to other situations is straightforward. After the presentation of the method, several examples will show its capabilities for modelling a physical problem.
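
For the record, the Boundary Integral Equation Method referred to above rests on the following standard identity for a harmonic function $u$ (not quoted from the paper):

```latex
c(\xi)\,u(\xi)\;+\;\int_{\Gamma} u(x)\,\frac{\partial u^{*}(\xi,x)}{\partial n_x}\,d\Gamma(x)
\;=\;\int_{\Gamma} \frac{\partial u(x)}{\partial n_x}\,u^{*}(\xi,x)\,d\Gamma(x),
```

where $u^{*}$ is the fundamental solution of the Laplacian ($u^{*}=-\frac{1}{2\pi}\ln r$ in two dimensions) and $c(\xi)=\frac{1}{2}$ at a smooth boundary point.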

Abstract:

In this chapter we will introduce the reader to the techniques of the Boundary Element Method applied to simple Laplacian problems. Most classical applications refer to electrostatic and magnetic fields, but the Laplacian operator also governs problems such as Saint-Venant torsion, irrotational flow, fluid flow through porous media and the added fluid mass in fluid-structure interaction problems. This short list, to which it would be possible to add many other physical problems governed by the same equation, is an indication of the importance of the numerical treatment of the Laplacian operator. Potential theory has pioneered the use of BEM since the papers of Jaswon and Hess. An interesting introduction to the topic is given by Cruse. In the last five years a renaissance of integral methods has been detected; this can be followed in the books by Jaswon and Symm and by Brebbia, or Brebbia and Walker. In this chapter we shall maintain an elementary level and follow a classical scheme in order to make the content accessible to the reader who has just started to study the technique. The whole emphasis has been put on the so-called "direct" method because it is the one which appears to offer more advantages. In this section we recall the classical concepts of potential theory and establish the basic equations of the method. Later on we discuss the discretization philosophy, the implementation of different kinds of elements, and the advantages of substructuring, which is unavoidable when dealing with heterogeneous materials.
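
As a minimal sketch of the discretization philosophy mentioned above: with constant elements, collocating the boundary integral identity at the element midpoints yields the linear system

```latex
\mathbf{H}\,\mathbf{u}\;=\;\mathbf{G}\,\mathbf{q},
```

where $\mathbf{u}$ and $\mathbf{q}$ collect the element values of the potential and its normal derivative; imposing the boundary conditions and moving the unknowns to the left-hand side produces a dense system $\mathbf{A}\mathbf{x}=\mathbf{b}$ that is solved directly.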

Abstract:

The project you are about to see is based on the technologies used for object detection and recognition, especially of leaves and chromosomes. To that end, this document contains the typical parts of a scientific paper, as that is what it is: an Abstract, an Introduction, sections covering the investigation area, future work, conclusions and the references used in its elaboration. The Abstract describes what we are going to find in this paper, namely the technologies employed in pattern detection and recognition for leaves and chromosomes and the work already done on cataloguing these objects.

In the Introduction, the meanings of detection and recognition are explained. This is necessary because many papers confuse these terms, especially those dealing with chromosomes. Detecting an object means gathering the parts of the image that are useful and eliminating the useless parts; in short, detection amounts to recognizing the object's borders. Recognition, in contrast, refers to the process by which the computer or machine says what kind of object it is handling.

Afterwards we present a compilation of the most used technologies in object detection in general. There are two main groups in this category: those based on image derivatives and those based on ASIFT points. The methods based on image derivatives have in common that the image is treated by convolving it with a previously created matrix. This is done to detect borders in the image, which are changes in the intensity of the pixels. Within these technologies we find two groups: gradient-based methods, which search for maxima and minima of pixel intensity, since they use only the first derivative; and Laplacian-based methods, which search for zeros of the second derivative of the intensity. Depending on the level of detail required in the final result we choose one option or the other: gradient-based methods require fewer operations, so the computer consumes fewer resources and less time, but the quality is worse; Laplacian-based methods need more time and resources, as they require more operations, but yield a much better quality result. After explaining the derivative-based methods, we review the different algorithms available for both groups.

The other big group of technologies for object recognition is based on ASIFT points, which rely on six image parameters and compare one image with another taking these parameters into consideration. The disadvantage of these methods, for our future purposes, is that they are only valid for a single specific object: if we want to recognize two different leaves, even of the same species, we will not be able to recognize them with this method. It is nevertheless important to mention these technologies, as we are discussing recognition methods in general. At the end of the chapter we present a comparison of the pros and cons of all the technologies employed, first separately and then all together, based on our purposes.

Recognition techniques, the subject of the next chapter, are not really vast: even though there are general steps for object recognition, every object to be recognized has its own method, since all objects differ. This is why no general method can be specified in that chapter.
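
The contrast drawn above between gradient-based and Laplacian-based border detection can be illustrated with the standard 3x3 kernels (a minimal sketch, not the project's code):

```python
import numpy as np
from scipy import ndimage

img = np.random.rand(64, 64)  # stand-in for a grayscale leaf image

sobel_x = np.array([[-1., 0., 1.],
                    [-2., 0., 2.],
                    [-1., 0., 1.]])
laplacian = np.array([[0.,  1., 0.],
                      [1., -4., 1.],
                      [0.,  1., 0.]])

gx = ndimage.convolve(img, sobel_x)      # horizontal intensity changes (first derivative)
gy = ndimage.convolve(img, sobel_x.T)    # vertical intensity changes
grad_mag = np.hypot(gx, gy)              # gradient magnitude: large at edges

lap = ndimage.convolve(img, laplacian)   # second derivative: zero crossings mark edges
edges = grad_mag > grad_mag.mean()       # crude thresholding, for illustration only
```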
We now move on to leaf detection techniques on computers, using the technique explained above based on image derivatives. The next step is to turn the leaf into several parameters. Depending on the document consulted, there are more or fewer parameters: some papers recommend dividing the leaf into 3 main features (shape, dent and vein), from which mathematical operations yield up to 16 secondary features; another proposal divides the leaf into 5 main features (diameter, physiological length, physiological width, area and perimeter) and from those extracts 12 secondary features. This second alternative is the most used, so it is the one taken as reference; an illustrative sketch of such derived descriptors follows below. Moving on to leaf recognition, we rely on a paper that provides source code which, after clicking on both leaf ends, automatically reports to which species the leaf we are trying to recognize belongs. To do so it only requires a database. In the tests reported in that document, the authors claim 90.312% accuracy over 320 total tests (32 plants in the database and 10 tests per species).

The next chapter deals with chromosome detection, where we must pass from the metaphase plate, in which the chromosomes are disorganized, to the karyotype plate, the usual view of the 23 chromosomes ordered by number. There are two types of techniques for this step: the skeletonization process and sweeping angles. Skeletonization consists of suppressing the inside pixels of the chromosome so as to keep only its silhouette; this method is really similar to those based on image derivatives, but the difference is that it does not detect the borders but the interior of the chromosome. The second technique consists of sweeping angles from the beginning of the chromosome and, taking into consideration that a single chromosome cannot bend by more than an angle X, detecting the various regions of the chromosome. Once the karyotype plate is defined, we continue with chromosome recognition. For this there is a technique based on the banding pattern (grey scale bands) that makes each chromosome unique: the program detects the longitudinal axis of the chromosome and reconstructs the band profiles, after which the computer is able to recognize the chromosome.

Concerning future work, we generally have two independent techniques that do not unite detection and recognition, so our main focus would be to prepare a program that gathers both. On the leaf side we have seen that detection and recognition are linked, as both share the option of dividing the leaf into 5 main features; the work to be done is to create an algorithm linking both methods, since in the leaf recognition program both leaf ends must be clicked, so it is not an automatic algorithm. On the chromosome side, we should create an algorithm that searches for the beginning of the chromosome and then starts to sweep angles, later passing the parameters to the program that searches for the band profiles. Finally, in the summary, we explain why this type of investigation is needed: with global warming, many species (animals and plants) are beginning to go extinct, which is why a big database gathering all possible species is needed. For recognizing an animal species, we only have to have the 23 chromosomes.
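
Since the abstract names the 5 primary leaf features but not the 12 derived ones, the sketch below shows hypothetical secondary descriptors of the same flavour (illustrative stand-ins only, not the cited paper's feature set):

```python
import math

# Dimensionless descriptors derived from the 5 primary leaf measurements
# named above; the names and formulas here are illustrative assumptions.
def secondary_features(diameter, phys_length, phys_width, area, perimeter):
    return {
        "aspect_ratio": phys_length / phys_width,
        "form_factor": 4.0 * math.pi * area / perimeter ** 2,  # 1.0 for a circle
        "rectangularity": area / (phys_length * phys_width),
        "perimeter_to_diameter": perimeter / diameter,
    }

print(secondary_features(10.0, 9.0, 4.0, 28.0, 24.0))
```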
While recognizing a plant there are several options, but the easiest way to input it into a computer is to scan a leaf of the plant.

Abstract:

Using the 3-D equations of linear elasticity and asymptotic expansion methods in terms of powers of the beam cross-section area as a small parameter, different beam theories can be obtained according to the last term kept in the expansion. If only the first two terms of the asymptotic expansion are used, the classical beam theories can be recovered without resort to any "a priori" additional hypotheses. Moreover, some small corrections and extensions of the classical beam theories can be found, and there also exists the possibility of using the asymptotic general beam theory as a basis for a straightforward derivation of the stiffness matrix and the equivalent nodal forces of the beam. In order to obtain the above results, a set of functions and constants depending only on the cross-section of the beam has to be computed as solutions of different 2-D Laplacian boundary value problems over the beam cross-section domain. In this paper two main numerical procedures to solve these boundary value problems are discussed, namely the Boundary Element Method (BEM) and the Finite Element Method (FEM). Results for some regular and geometrically simple cross-sections are presented and compared with those computed analytically. Extensions to other arbitrary cross-sections are illustrated.
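
A representative instance of the cross-sectional problems mentioned above is the Saint-Venant torsion problem for the Prandtl stress function (a standard example, not quoted from the paper):

```latex
\Delta\varphi \;=\; -2 \quad\text{in }\Omega,\qquad \varphi \;=\; 0 \quad\text{on }\partial\Omega,
```

where $\Omega$ is the cross-section domain; the torsional rigidity is then recovered from $J = 2\int_{\Omega}\varphi\,dA$, and the problem is equally amenable to BEM and FEM.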

Abstract:

A high resolution, second-order central difference method for incompressible flows is presented. The method is based on a recent second-order extension of the classic Lax–Friedrichs scheme introduced for hyperbolic conservation laws (Nessyahu H. & Tadmor E. (1990) J. Comput. Phys. 87, 408-463; Jiang G.-S. & Tadmor E. (1996) UCLA CAM Report 96-36, SIAM J. Sci. Comput., in press) and augmented by a new discrete Hodge projection. The projection is exact, yet the discrete Laplacian operator retains a compact stencil. The scheme is fast, easy to implement, and readily generalizable. Its performance was tested on the standard periodic double shear-layer problem; no spurious vorticity patterns appear when the flow is underresolved. A short discussion of numerical boundary conditions is also given, along with a numerical example.
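
For orientation, the continuous counterpart of the Hodge projection mentioned above is the operator

```latex
\mathbb{P}\,\mathbf{u}\;=\;\mathbf{u}\;-\;\nabla\,\Delta^{-1}(\nabla\!\cdot\!\mathbf{u}),
\qquad \nabla\!\cdot\!(\mathbb{P}\,\mathbf{u})\;=\;0,
```

and the discrete version replaces $\nabla$, $\nabla\cdot$ and $\Delta$ by central differences chosen so that the projection is exact while the resulting discrete Laplacian keeps a compact stencil.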