826 results for graph matching algorithms
Abstract:
Colombian trade liberalization is frequently analyzed through openness coefficients. This paper instead presents a complementary analysis based on algorithms from network theory used to characterize complex systems. This new approach reveals structures of the world trade network before and after the liberalization, as well as changes in Colombia's position within it.
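A minimal sketch of the kind of network-theoretic characterization the abstract describes, using the networkx library on a hypothetical toy trade network; the country codes, flows, and measures below are illustrative placeholders, not the paper's actual data or algorithms:

```python
import networkx as nx

# Toy world trade network: an edge (a, b, w) means country a exports a
# volume w to country b. Numbers are placeholders for illustration only.
trade_flows = [
    ("COL", "USA", 12.0), ("COL", "VEN", 3.5),
    ("USA", "COL", 9.0), ("USA", "DEU", 50.0),
    ("DEU", "USA", 48.0), ("VEN", "COL", 2.0),
]

G = nx.DiGraph()
G.add_weighted_edges_from(trade_flows)

# Two standard ways to characterize a country's position in the network:
# total export volume (weighted out-degree) and PageRank centrality.
strength = dict(G.out_degree(weight="weight"))
centrality = nx.pagerank(G, weight="weight")

print("COL export strength:", strength["COL"])
print("COL PageRank centrality:", round(centrality["COL"], 3))
```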
Abstract:
This paper explains the role of Colombian insurance companies within the pension system and seeks, through an understanding of the evolution of the macroeconomic environment and the regulatory framework, to identify the challenges they face. Three challenges are discussed: the profitability challenge, the challenge posed by relatively frequent regulatory changes, and the asset-liability matching ("calce") challenge. The paper focuses mainly on the profitability challenge and develops an efficient-frontier exercise using expected returns computed with the methodology of Damodaran (2012). The results of the exercise support the idea that expected returns will indeed be lower for any level of risk, and suggest that, in such a scenario, relaxing the restrictions imposed by the Régimen de Inversiones (investment regime) could ease insurers' concerns in this area. Alternatives are also suggested for the other two challenges: algorithmic trading for the challenge imposed by regulatory changes, and public-private partnerships to address the matching ("calce") challenge.
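A minimal mean-variance efficient-frontier sketch in the spirit of the exercise described above; the expected returns and covariance matrix are toy placeholders, not the Damodaran (2012)-based estimates used in the paper:

```python
import numpy as np

mu = np.array([0.04, 0.06, 0.08])            # toy expected returns per asset
cov = np.array([[0.010, 0.002, 0.001],
                [0.002, 0.020, 0.003],
                [0.001, 0.003, 0.040]])      # toy covariance matrix

def min_variance_weights(target_return):
    """Closed-form minimum-variance portfolio for a target return, with
    the budget constraint sum(w) = 1 (short sales allowed)."""
    inv = np.linalg.inv(cov)
    ones = np.ones(len(mu))
    a = ones @ inv @ ones
    b = ones @ inv @ mu
    c = mu @ inv @ mu
    lam = (c - b * target_return) / (a * c - b * b)
    gam = (a * target_return - b) / (a * c - b * b)
    return inv @ (lam * ones + gam * mu)

# Trace the frontier: for each target return, report the minimal risk.
for r in (0.05, 0.06, 0.07):
    w = min_variance_weights(r)
    sigma = np.sqrt(w @ cov @ w)
    print(f"target {r:.2%}: risk {sigma:.2%}, weights {np.round(w, 2)}")
```

Lower expected returns, as the paper anticipates, shift every point of this frontier downward, so each risk level buys less return.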
Abstract:
We consider two-sided many-to-many matching markets in which each worker may work for multiple firms and each firm may hire multiple workers. We study individual and group manipulations in centralized markets that employ (pairwise) stable mechanisms and that require participants to submit rank order lists of agents on the other side of the market. We are interested in simple preference manipulations that have been reported and studied in empirical and theoretical work: truncation strategies, which are the lists obtained by removing a tail of least preferred partners from a preference list, and the more general dropping strategies, which are the lists obtained by only removing partners from a preference list (i.e., no reshuffling). We study when truncation/dropping strategies are exhaustive for a group of agents on the same side of the market, i.e., when each match resulting from preference manipulations can be replicated or improved upon by some truncation/dropping strategies. We prove that for each stable mechanism, truncation strategies are exhaustive for each agent with quota 1 (Theorem 1). We show that this result extends neither to group manipulations (even when all quotas equal 1; Example 1), nor to individual manipulations when the agent's quota is larger than 1 (even when all other agents' quotas equal 1; Example 2). Finally, we prove that for each stable mechanism, dropping strategies are exhaustive for each group of agents on the same side of the market (Theorem 2), i.e., independently of the quotas.
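For intuition, a minimal one-to-one deferred acceptance sketch (all quotas equal 1) showing a truncation strategy in action; the names and preference lists are hypothetical, and the paper's results concern stable mechanisms in general, not this particular mechanism:

```python
def deferred_acceptance(proposer_prefs, receiver_prefs):
    """Proposer-side deferred acceptance; returns receiver -> proposer."""
    free = list(proposer_prefs)
    nxt = {p: 0 for p in proposer_prefs}   # next receiver index to propose to
    held = {}                              # receiver -> tentatively held proposer
    while free:
        p = free.pop(0)
        if nxt[p] >= len(proposer_prefs[p]):
            continue                       # p has exhausted his list
        r = proposer_prefs[p][nxt[p]]
        nxt[p] += 1
        rank = receiver_prefs[r]
        if p not in rank:                  # r truncated p away: reject
            free.append(p)
        elif r not in held:
            held[r] = p
        elif rank.index(p) < rank.index(held[r]):
            free.append(held[r])           # r trades up, bumps current holder
            held[r] = p
        else:
            free.append(p)
    return held

# Firm-proposing market: firms propose, workers receive.
firm_prefs = {"f1": ["w2", "w1"], "f2": ["w1", "w2"]}
worker_prefs = {"w1": ["f1", "f2"], "w2": ["f2", "f1"]}
print(deferred_acceptance(firm_prefs, worker_prefs))
# truthful: {'w2': 'f1', 'w1': 'f2'} -- each worker gets her 2nd choice

# w1 truncates her list to ["f1"], removing the tail ["f2"]:
worker_prefs_trunc = {"w1": ["f1"], "w2": ["f2", "f1"]}
print(deferred_acceptance(firm_prefs, worker_prefs_trunc))
# now {'w2': 'f2', 'w1': 'f1'} -- the truncation benefits w1 (and w2)
```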
Abstract:
Abstract taken from the journal
Abstract:
Abstract taken from the publication
Abstract:
Provides opportunities for children to practice classification and to use these skills with a range of everyday objects, developing their reasoning and logic. The activities encourage children to learn to classify items using their own criteria and to explore the use of other criteria, such as color and size, using their different senses.
Abstract:
Comprises ten activities designed to help teachers develop the data analysis and interpretation skills of science students. Each activity includes a question sheet and a data sheet, the latter available as Excel files so that students can use new technologies to create charts. The exercises are aimed at the upper level of key stage 2 of the English national curriculum and at key stage 3, i.e. primary and secondary school, and focus on the content of 'Sc2 Life and Living Processes'.
Abstract:
Abstract based on that of the publication
Abstract:
We present an algorithm for computing exact shortest paths, and consequently distances, from a generalized source (point, segment, polygonal chain or polygonal region) on a possibly non-convex polyhedral surface in which polygonal chain or polygon obstacles are allowed. We also present algorithms for computing discrete Voronoi diagrams of a set of generalized sites (points, segments, polygonal chains or polygons) on a polyhedral surface with obstacles. To obtain the discrete Voronoi diagrams, our algorithms, exploiting graphics hardware capabilities, compute the shortest path distances defined by the sites.
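The paper's discrete Voronoi computation exploits graphics hardware; as a rough CPU-side analogue, a multi-source Dijkstra on a graph discretization of the surface labels each vertex with its nearest site. The toy grid graph below stands in for a meshed surface and is not the paper's method:

```python
import heapq

def discrete_voronoi(adj, sites):
    """adj: vertex -> list of (neighbor, edge_length); sites: iterable of
    vertices. Returns vertex -> (distance to nearest site, nearest site)."""
    best = {s: (0.0, s) for s in sites}
    heap = [(0.0, s, s) for s in sites]
    heapq.heapify(heap)
    while heap:
        d, v, site = heapq.heappop(heap)
        if best.get(v, (float("inf"),))[0] < d:
            continue                      # stale queue entry, skip
        for u, w in adj[v]:
            nd = d + w
            if nd < best.get(u, (float("inf"),))[0]:
                best[u] = (nd, site)
                heapq.heappush(heap, (nd, u, site))
    return best

# Toy "surface": a 2x3 grid graph with unit edge lengths.
adj = {
    0: [(1, 1.0), (3, 1.0)], 1: [(0, 1.0), (2, 1.0), (4, 1.0)],
    2: [(1, 1.0), (5, 1.0)], 3: [(0, 1.0), (4, 1.0)],
    4: [(1, 1.0), (3, 1.0), (5, 1.0)], 5: [(2, 1.0), (4, 1.0)],
}
for v, (d, s) in sorted(discrete_voronoi(adj, [0, 5]).items()):
    print(f"vertex {v}: nearest site {s} at distance {d}")
```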
Abstract:
This paper deals with the relationship between the periodic orbits of continuous maps on graphs and the topological entropy of the map. We show that the topological entropy of a graph map can be approximated by the entropy of its periodic orbits.
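One common way to formalize such a statement (a sketch in standard notation, not necessarily the paper's exact theorem): a periodic orbit P of a graph map f induces a "connect-the-dots" Markov graph with transition matrix A_P, the orbit's entropy is the log of its spectral radius, and the map's entropy is approximated from below by these orbit entropies:

```latex
% h(P): entropy forced by the periodic orbit P, via the spectral radius
% of the transition matrix A_P of its induced Markov graph.
h(P) = \log \rho(A_P),
\qquad
h(f) = \sup \{\, h(P) : P \ \text{a periodic orbit of } f \,\}.
```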
Abstract:
The performance of a model-based diagnosis system can be affected by several sources of uncertainty, such as model errors, uncertainty in measurements, and disturbances. This uncertainty can be handled by means of interval models. The aim of this thesis is to propose a methodology for fault detection, isolation and identification based on interval models. The methodology includes algorithms that automatically obtain the symbolic expressions of the residual generators, enhancing the structural isolability of the faults, in order to design the fault detection tests. These algorithms are based on the structural model of the system. The fault detection, isolation, and identification stages are stated as constraint satisfaction problems over continuous domains and solved by means of interval-based consistency techniques. Qualitative fault isolation is enhanced by reasoning in which the signs of the symptoms are derived from analytical redundancy relations or bond graph models of the system. An initial, empirical analysis of the differences between interval-based and statistical techniques is also presented. The performance and efficiency of the contributions are illustrated through several application examples covering different levels of complexity.
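A minimal sketch of the core idea of interval-based fault detection as described above: a residual is consistent (no fault) when the measurement falls inside the interval predicted by the uncertain model. The model and parameter bounds below are hypothetical placeholders, not the thesis's actual system:

```python
def interval_residual_test(y_measured, y_lo, y_hi):
    """Return True if a fault is detected, i.e. the measurement is
    inconsistent with the interval [y_lo, y_hi] predicted by the
    interval model."""
    return not (y_lo <= y_measured <= y_hi)

# Toy first-order model x' = -a*x + b*u with uncertain a in [0.9, 1.1].
# The predicted output interval is obtained by evaluating the model at
# the interval endpoints (valid here because the output is monotone in a).
def predict_interval(x, u, dt, a_lo=0.9, a_hi=1.1, b=1.0):
    step = lambda a: x + dt * (-a * x + b * u)
    lo, hi = sorted((step(a_lo), step(a_hi)))
    return lo, hi

x, u, dt = 1.0, 0.5, 0.1
lo, hi = predict_interval(x, u, dt)       # -> [0.94, 0.96]
for y in (0.95, 1.30):
    print(f"y={y}: fault detected? {interval_residual_test(y, lo, hi)}")
```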
Abstract:
The human visual ability to perceive depth is something of a puzzle. We perceive three-dimensional spatial information quickly and efficiently by using the binocular stereopsis of our eyes and, more importantly, the knowledge of common objects that we acquire through experience. Modelling the behaviour of the brain remains out of reach today, which is why the huge problem of 3D perception and, further, interpretation is split into a sequence of easier problems. A great deal of research in robot vision aims to obtain 3D information about the surrounding scene. Most of it models human stereopsis by using two cameras as if they were two eyes. This method, known as stereo vision, has been widely studied in the past, is being studied at present, and will surely receive much attention in the future, which allows us to affirm that it is one of the most interesting topics in computer vision. The stereo vision principle is based on obtaining the three-dimensional position of an object point from the positions of its projections in both camera image planes. However, before 3D information can be inferred, the mathematical models of both cameras have to be known. This step, known as camera calibration, is described at length in the thesis. Perhaps the most important problem in stereo vision is determining the pair of homologous points in the two images, known as the correspondence problem; it is also one of the most difficult problems to solve and is currently investigated by many researchers. Epipolar geometry allows us to reduce the correspondence problem, and an approach to it is described in the thesis. Nevertheless, it does not solve the problem entirely, as many considerations have to be taken into account; for example, points may have no correspondence because of a surface occlusion, or simply because they project outside the other camera's field of view. The thesis focuses on structured light, which has been considered one of the techniques most frequently used to reduce the problems of stereo vision. Structured light is based on the relationship between a projected light pattern and an image sensor: the deformation between the pattern projected onto the scene and the one captured by the camera makes it possible to obtain three-dimensional information about the illuminated scene. This technique has been widely used in applications such as 3D object reconstruction, robot navigation, and quality control. Although projecting regular patterns solves the problem of points without a match, it does not solve the problem of multiple matches, which forces the use of computationally hard algorithms to search for the correct matches. In recent years, another structured light technique has grown in importance: coding the light projected onto the scene so that each match is unique. Each token of light is imaged by the camera, and the label must be read (the pattern decoded) in order to solve the correspondence problem. The advantages and disadvantages of stereo vision versus structured light are discussed, and a survey of coded structured light is given. The work carried out in the frame of this thesis has led to a new coded structured light pattern that solves the correspondence problem uniquely and robustly.
Uniquely, because each token of light is coded by a different word, which removes the problem of multiple matches. Robustly, because the pattern is coded using the position of each token of light with respect to both coordinate axes. Algorithms and experimental results are included in the thesis. The reader will find examples of the 3D measurement of static objects and of the more complicated measurement of moving objects; the technique can be used in both cases because the pattern is coded in a single projection shot, so it can serve in several robot vision applications. Our interest is focused on the mathematical study of the camera and pattern projector models, on how these models can be obtained by calibration, and on how they can be used to obtain three-dimensional information from two corresponding points. Furthermore, we have studied structured light and coded structured light, and we have presented a new coded structured light pattern. This thesis, however, starts from the assumption that the correspondence points can be well segmented from the captured image. Computer vision is a huge problem, and much work is being done at all levels of human vision modelling, from (a) image acquisition, through (b) image enhancement, filtering and processing, to (c) image segmentation, which involves thresholding, thinning, contour detection, texture and colour analysis, and so on. The interest of this thesis begins at the next step, usually known as depth perception or 3D measurement.
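As an illustration of the last point, a minimal linear (DLT) triangulation sketch recovering a 3D point from a correspondence between two calibrated views (camera/camera, or camera/projector as in coded structured light); the projection matrices below are toy pinhole models, not values calibrated from real data:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: each image point x = (u, v) and its
    3x4 projection matrix P contribute two rows of A; the 3D point is
    the null vector of A, found by least squares via SVD."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]                  # dehomogenize

# Two toy pinhole views: identical intrinsics, second view translated.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.2], [0], [0]])])

X_true = np.array([0.1, -0.05, 2.0, 1.0])    # homogeneous ground truth
project = lambda P, X: (P @ X)[:2] / (P @ X)[2]
print(triangulate(P1, P2, project(P1, X_true), project(P2, X_true)))
# -> approximately [0.1, -0.05, 2.0]
```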