938 results for "Clouds of points"
Abstract:
Communication is the process of transmitting data across a channel. Whenever data is transmitted across a channel, errors are likely to occur. Coding theory is a branch of science that deals with finding efficient ways to encode and decode data so that any likely errors can be detected and corrected. There are many methods of coding and decoding; one of them is algebraic geometric codes, which can be constructed from curves. Cryptography is the science of securely transmitting messages from a sender to a receiver. The objective is to encrypt the message in such a way that an eavesdropper would not be able to read it. A cryptosystem is a set of algorithms for encrypting and decrypting messages. Public-key cryptosystems such as RSA and DSS have traditionally been preferred for secure communication over a channel. However, elliptic curve cryptosystems have become a viable alternative, since they provide greater security with shorter keys than other existing cryptosystems. Elliptic curve cryptography is based on the group of points on an elliptic curve over a finite field. This thesis deals with algebraic geometric codes and their relation to cryptography using elliptic curves. Here Goppa codes are used, and the curves used are elliptic curves over a finite field. We relate algebraic geometric codes to cryptography by developing a cryptographic algorithm comprising the encryption and decryption of messages; fundamental properties of elliptic curve cryptography are used to generate the algorithm and thereby relate the two.
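The abstract stops at the level of the group of points; as a minimal sketch of the group law such an algorithm builds on (the curve, prime, and base point below are arbitrary illustrative choices, not the thesis's parameters), affine point addition and double-and-add scalar multiplication over a prime field can be written as:

```python
# Minimal sketch of the elliptic curve group law over a prime field F_p.
# The curve parameters below are illustrative, not taken from the thesis.
p, a, b = 97, 2, 3          # curve: y^2 = x^3 + a*x + b over F_p
O = None                    # point at infinity (group identity)

def ec_add(P, Q):
    """Add two points on the curve using the affine chord-tangent rule."""
    if P is O: return Q
    if Q is O: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return O                                           # P + (-P) = O
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p   # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p          # chord slope
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def ec_mul(k, P):
    """Scalar multiplication k*P by double-and-add."""
    R = O
    while k:
        if k & 1:
            R = ec_add(R, P)
        P = ec_add(P, P)
        k >>= 1
    return R

G = (3, 6)   # on the curve: 6^2 = 36 = 27 + 6 + 3 (mod 97)
assert (G[1] ** 2 - (G[0] ** 3 + a * G[0] + b)) % p == 0
print(ec_mul(5, G))   # 5*G, the kind of operation a key exchange relies on
```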
Abstract:
The computation of a piecewise smooth function that approximates a finite set of data points may be decomposed into two decoupled tasks: first, the computation of the locally smooth models, and hence the segmentation of the data into classes that consist of the sets of points best approximated by each model; and second, the computation of the normalized discriminant functions for each induced class. The approximating function may then be computed as the optimal estimator with respect to this measure field. We give an efficient procedure for effecting both computations, and for determining the optimal number of components.
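The two decoupled tasks can be illustrated with a hedged sketch that alternates segmentation and local model fitting; the choice of K linear models and the Lloyd-style update below are assumptions for illustration, not the paper's estimator or measure-field computation:

```python
import numpy as np

# Hedged sketch: fit K local linear models to scattered 1D data by
# alternating (1) segmentation into the classes best approximated by each
# model and (2) refitting each model on its class by least squares.
rng = np.random.default_rng(0)
x = rng.uniform(0, 2, 200)
y = np.where(x < 1, 2 * x, 3 - x) + 0.05 * rng.standard_normal(200)

K = 2
coef = rng.standard_normal((K, 2))            # slope, intercept per model
X = np.column_stack([x, np.ones_like(x)])

for _ in range(20):
    resid = (X @ coef.T - y[:, None]) ** 2    # residual of each point vs each model
    labels = resid.argmin(axis=1)             # task 1: segmentation
    for k in range(K):                        # task 2: refit local models
        if (labels == k).any():
            coef[k], *_ = np.linalg.lstsq(X[labels == k], y[labels == k], rcond=None)

print(coef)   # approximately slopes 2 and -1, intercepts 0 and 3
```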
Abstract:
Mosaics have been commonly used as visual maps for undersea exploration and navigation. The position and orientation of an underwater vehicle can be calculated by integrating the apparent motion of the images which form the mosaic. A feature-based mosaicking method is proposed in this paper. The creation of the mosaic is accomplished in four stages: feature selection and matching, detection of the points describing the dominant motion, homography computation, and mosaic construction. In this work we demonstrate that using color and texture as discriminative properties of the image can improve, to a large extent, the accuracy of the constructed mosaic. The system is able to provide 3D metric information concerning the vehicle motion, using knowledge of the intrinsic parameters of the camera while integrating the measurements of an ultrasonic sensor. The method has been tested experimentally with real images on the GARBI underwater vehicle.
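A hedged sketch of the homography-based stages (feature matching, RANSAC selection of the points describing the dominant motion, homography computation, and warping) using OpenCV follows; the filenames are hypothetical, and the paper's color/texture match filtering and ultrasonic integration are not reproduced here:

```python
import cv2
import numpy as np

# Hedged sketch of feature-based mosaicking between two frames.
img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)   # hypothetical filenames
img2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(2000)
k1, d1 = orb.detectAndCompute(img1, None)
k2, d2 = orb.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)

src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# RANSAC keeps only the correspondences consistent with the dominant motion.
H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)

h, w = img2.shape
mosaic = cv2.warpPerspective(img1, H, (2 * w, 2 * h))   # warp frame1 into frame2's plane
mosaic[:h, :w] = img2                                   # naive composition
cv2.imwrite("mosaic.png", mosaic)
```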
Abstract:
In this paper we present a novel structure from motion (SfM) approach able to infer 3D deformable models from uncalibrated stereo images. Using a stereo setup dramatically improves the 3D model estimation when the observed 3D shape is mostly deforming without undergoing strong rigid motion. Our approach first calibrates the stereo system automatically and then computes a single metric rigid structure for each frame. Afterwards, these 3D shapes are aligned to a reference view using a RANSAC method in order to compute the mean shape of the object and to select the subset of points on the object which have remained rigid throughout the sequence without deforming. The selected rigid points are then used to compute frame-wise shape registration and to extract the motion parameters robustly from frame to frame. Finally, all this information is used in a global optimization stage with bundle adjustment, which refines the frame-wise initial solution and also recovers the non-rigid 3D model. We show results on synthetic and real data that demonstrate the performance of the proposed method even when there is no rigid motion in the original sequence.
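The alignment of per-frame shapes to a reference view reduces to estimating a rigid transform between corresponding point sets; as an illustrative sketch (the paper wraps such an estimate in RANSAC to select the points that remained rigid), a least-squares rigid alignment via SVD looks like this:

```python
import numpy as np

def rigid_align(A, B):
    """Least-squares rotation R and translation t with R @ A + t ~ B.
    A, B are 3xN arrays of corresponding points (Kabsch/Procrustes)."""
    ca, cb = A.mean(axis=1, keepdims=True), B.mean(axis=1, keepdims=True)
    U, _, Vt = np.linalg.svd((B - cb) @ (A - ca).T)
    D = np.diag([1, 1, np.sign(np.linalg.det(U @ Vt))])  # avoid reflections
    R = U @ D @ Vt
    return R, cb - R @ ca

# In a RANSAC loop one would align minimal subsets, count the points whose
# residual ||R @ a + t - b|| stays small across all frames, and keep the
# largest consensus set as the rigid part of the deforming object.
```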
Abstract:
The theory of constraints, better known as TOC, is a tool for business diagnosis that identifies the causes and effects affecting a company. This thesis carries out a TOC diagnosis of the company BLACK PENGUIN S.A.S, in which the financial side is considered: the accounts are computed and examined in TOC terms, with indicators that help determine the shortcomings the company incurs in its production process and administrative management. Subsequently, the undesirable effects (UDEs) and the problem clouds found in the company are identified, in order to build the current reality tree and understand the present state of the organization. Several strategic analyses, such as the value chain and matrix analysis, serve as a reference point for developing the improvement in a TOC environment.
Abstract:
This is a prefeasibility study to determine whether the setup and operation of 40 Puntos Vive Digital centres in the city of Bogotá would yield a financial return above the company's opportunity rate, taking into account supporting studies on the environment, market, technical, administrative, legal, financial, and socio-economic aspects. To carry out this objective, the company had to evaluate the possibility of applying to the call led by the Ministry of Information and Communication Technologies (TIC), within the framework of the Plan Vive Digital and through the Compartel Program, which promotes the creation of 320 Puntos Vive Digital for 2012. For this reason, the company hired a formulator to structure the project and to support a decision-making process in the real scenario of participating in the call for the Puntos Vive Digital project of the Ministry of Information and Communication Technologies (TIC). On completing the study, it proved advisable for the company to participate in the Plan Vive Digital program, particularly in the Puntos Vive Digital project, after verifying that it generates wealth for the company, contributes to its 2012-2014 strategic plan, especially in expanding its portfolio of ICT services profitably, and, finally, that the company has the experience to execute this type of project owing to its long track record in setting up and operating points of mass internet access in the city.
Abstract:
This is a review of the publications on Inclusive Education that have appeared in recent years. A total of twenty works are discussed, mainly books, articles, and book chapters, covering a variety of viewpoints, perspectives, and contributions to this new education. Among the reviews we can find five types of approaches to educational inclusion: 1. those that conceptually seek to describe, contextualize, identify, and assess this model of education; 2. those offering an international perspective, comparing the state of inclusive education in different countries, most of them based on Anglo-Saxon countries; 3. those taking an organizational perspective, in which inclusive processes develop through the organization of schools and their structure; 4. analysis and evaluation of training and professional development proposals capable of supporting the development of Inclusive Education; 5. research in and on inclusive education, analyzing and assessing the suitability of this education or, where applicable, the mismatch of certain methodologies.
Abstract:
Creativitat i subversió en les reescriptures de Joan Sales (Creativity and Subversion in Joan Sales's Rewritings) focuses on the figure of the editor and novelist Joan Sales (1912-1983) and aims to resituate the author of Incerta glòria within the Catalan literary panorama by diversifying the points of view from which it becomes possible to study him. As a basis, the study uses the translation theories of the late twentieth century, developed by authors such as André Lefevere and Susan Bassnett, which place translation, editing, adaptation, literary criticism, and historiography within the realm of creative rewriting and grant a subversive power to all these activities. The study thus attempts to change the tendency that, historically, had led to a negative view of Sales's rewritings. Under the theoretical umbrella of rewriting, manipulations, changes, and interventions become a tool that contributes to the literary evolution of a culture.
Abstract:
The human visual ability to perceive depth looks like a puzzle. We perceive three-dimensional spatial information quickly and efficiently by using the binocular stereopsis of our eyes and, what is more important, the knowledge of the most common objects that we acquire through living. Nowadays, modelling the behaviour of our brain is fiction; that is why the huge problem of 3D perception and, further, interpretation is split into a sequence of easier problems. A lot of research in robot vision is aimed at obtaining 3D information about the surrounding scene. Most of this research is based on modelling human stereopsis by using two cameras as if they were two eyes. This method is known as stereo vision; it has been widely studied in the past, is being studied at present, and a lot of work will surely be done on it in the future. This fact allows us to affirm that it is one of the most interesting topics in computer vision. The stereo vision principle is based on obtaining the three-dimensional position of an object point from the positions of its projections in both camera image planes. However, before 3D information can be inferred, the mathematical models of both cameras have to be known. This step is known as camera calibration and is broadly described in the thesis. Perhaps the most important problem in stereo vision is the determination of pairs of homologous points in the two images, known as the correspondence problem; it is also one of the most difficult problems to solve and is currently investigated by many researchers. The epipolar geometry allows us to reduce the correspondence problem, and an approach to it is described in the thesis. Nevertheless, it does not solve the problem completely, as many other considerations have to be taken into account; for example, there may be points without correspondence due to surface occlusion or simply due to projection outside the camera's field of view. The interest of the thesis is focused on structured light, which has been considered one of the techniques most frequently used to reduce the problems related to stereo vision. Structured light is based on the relationship between a projected light pattern and its image as captured by a camera sensor. The deformations between the pattern projected onto the scene and the one captured by the camera permit three-dimensional information about the illuminated scene to be obtained. This technique has been widely used in applications such as 3D object reconstruction, robot navigation, quality control, and so on. Although the projection of regular patterns solves the problem of points without a match, it does not solve the problem of multiple matches, which forces the use of computationally hard algorithms to search for the correct matches. In recent years, another structured light technique has increased in importance. This technique is based on the codification of the light projected onto the scene so that it can be used as a tool to obtain a unique match: each token of light imaged by the camera carries a label, and we have to read that label (decode the pattern) in order to solve the correspondence problem. The advantages and disadvantages of stereo vision versus structured light are discussed, and a survey of coded structured light is given. The work carried out in the frame of this thesis has permitted the presentation of a new coded structured light pattern which solves the correspondence problem uniquely and robustly.
Unique, as each token of light is coded by a different word, which removes the problem of multiple matching. Robust, since the pattern has been coded using the position of each token of light with respect to both coordinate axes. Algorithms and experimental results are included in the thesis. The reader can see examples of the 3D measurement of static objects, and of the more complicated measurement of moving objects. The technique can be used in both cases, as the pattern is coded by a single projection shot; it can therefore be used in several applications of robot vision. Our interest is focused on the mathematical study of the camera and pattern projector models. We are also interested in how these models can be obtained by calibration, and how they can be used to obtain three-dimensional information from two corresponding points. Furthermore, we have studied structured light and coded structured light, and we have presented a new coded structured light pattern. However, in this thesis we started from the assumption that the correspondence points could be well segmented in the captured image. Computer vision constitutes a huge problem, and a lot of work is being done at all levels of human vision modelling, starting from a) image acquisition; b) further image enhancement, filtering and processing; c) image segmentation, which involves thresholding, thinning, contour detection, texture and colour analysis, and so on. The interest of this thesis starts at the next step, usually known as depth perception or 3D measurement.
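As a hedged sketch of the step the thesis ends with, recovering a 3D point from two corresponding image points once both camera models are known, linear (DLT) triangulation from two projection matrices can be written as follows; the intrinsics and poses below are illustrative, not calibration results from the thesis:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover the 3D point X such that
    x1 ~ P1 @ X and x2 ~ P2 @ X, given 3x4 projection matrices and
    pixel coordinates (u, v) in each image."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]           # dehomogenize

# Illustrative cameras: identical intrinsics, second camera shifted along x.
K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])

X_true = np.array([0.2, -0.1, 3.0, 1.0])
x1 = P1 @ X_true; x1 = x1[:2] / x1[2]
x2 = P2 @ X_true; x2 = x2[:2] / x2[2]
print(triangulate(P1, P2, x1, x2))   # ~ [0.2, -0.1, 3.0]
```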
Abstract:
The work developed in this thesis explores in depth, and contributes innovative solutions to, the correspondence problem in underwater images. In these environments, what really complicates the processing tasks is the lack of well-defined contours caused by blurred images, a fact mainly due to deficient lighting or to the lack of uniformity of artificial lighting systems. The objectives achieved in this thesis can be highlighted along two main directions. To improve the motion estimation algorithm, a new method was proposed that introduces texture parameters to reject false correspondences between pairs of images. A series of tests on real underwater images was carried out to select the most suitable strategies. In order to achieve real-time results, an innovative VLSI architecture is proposed for the implementation of some of the computationally expensive parts of the motion estimation algorithm.
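A hedged sketch of the idea of gating correspondences with a texture measure might look like the following; the variance and correlation thresholds are invented for illustration and are not the thesis's texture parameters:

```python
import numpy as np

# Hedged sketch: gate block-matching correspondences with a simple texture
# measure (local variance), in the spirit of rejecting false matches in
# low-contrast underwater imagery.
def match_ok(patch_a, patch_b, min_var=25.0, min_ncc=0.8):
    a, b = patch_a.astype(float), patch_b.astype(float)
    if a.var() < min_var or b.var() < min_var:
        return False                      # too little texture to trust the match
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return (a * b).mean() >= min_ncc      # normalized cross-correlation gate
```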
Abstract:
An algorithm is presented for the generation of molecular models of defective graphene fragments, containing a majority of 6-membered rings with a small number of 5- and 7-membered rings as defects. The structures are generated from an initial random array of points in 2D space, which is then subjected to Delaunay triangulation. The dual of the triangulation forms a Voronoi tessellation of polygons with a range of ring sizes. An iterative cycle of refinement, involving deletion and addition of points followed by further triangulation, is performed until the user-defined criteria for the number of defects are met. The array of points and connectivities is then converted to a molecular structure and subjected to geometry optimization using a standard molecular modeling package to generate the final atomic coordinates. Based on molecular mechanics with minimization, this automated method can generate structures which conform to user-supplied criteria and avoid the potential bias associated with the manual building of structures. One application of the algorithm is the generation of structures for evaluating the reactivity of different defect sites. Ab initio electronic structure calculations on a representative structure indicate preferential fluorination close to 5-ring defects.
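The geometric core of the algorithm (random points, Delaunay triangulation, and its Voronoi dual) can be sketched with SciPy; the refinement cycle and the conversion to atomic coordinates described in the abstract are not reproduced here:

```python
import numpy as np
from scipy.spatial import Voronoi

# Hedged sketch: random 2D points -> Voronoi tessellation (the dual of the
# Delaunay triangulation), then count the ring sizes of the closed cells.
rng = np.random.default_rng(1)
points = rng.uniform(0, 10, size=(200, 2))
vor = Voronoi(points)

ring_sizes = {}
for region_idx in vor.point_region:
    region = vor.regions[region_idx]
    if region and -1 not in region:          # keep closed (interior) cells only
        n = len(region)
        ring_sizes[n] = ring_sizes.get(n, 0) + 1

print(ring_sizes)   # mostly 5-, 6-, and 7-sided cells, as for defective graphene
```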
Abstract:
Purpose: Acquiring details of the kinetic parameters of enzymes is crucial to biochemical understanding, drug development, and clinical diagnosis in ocular diseases. The correct design of an experiment is critical to collecting data suitable for analysis, modelling, and deriving the correct information. As classical design methods are not targeted at the more complex kinetics now frequently studied, attention is needed to estimate the parameters of such models with low variance. Methods: We have developed Bayesian utility functions to minimise kinetic parameter variance, involving differentiation of model expressions and matrix inversion. These have been applied to the simple kinetics of the enzymes in the glyoxalase pathway (of importance in post-translational modification of proteins in cataract), and to the complex kinetics of lens aldehyde dehydrogenase (also of relevance to cataract). Results: Our successful application of Bayesian statistics has allowed us to identify a set of rules for designing optimum kinetic experiments iteratively. Most importantly, the distribution of points in the range is critical; it is not simply a matter of even spacing or constant multiples. At least 60 % of the points must lie below the KM (or KMs, if there is more than one dissociation constant) and 40 % above. This choice halves the variance found using a simple even spread across the range. With both the glyoxalase system and lens aldehyde dehydrogenase we have significantly improved the variance of the kinetic parameter estimates while reducing the number and cost of experiments. Conclusions: We have developed an optimal and iterative method for selecting features of the design, such as substrate range, number of measurements, and choice of intermediate points. Our novel approach minimises parameter error and costs, and maximises experimental efficiency. It is applicable to many areas of ocular drug design, including receptor-ligand binding and immunoglobulin binding, and should be an important tool in ocular drug discovery.
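As a hedged illustration of the 60/40 rule quoted above, the sketch below builds such a substrate design around KM and simulates Michaelis-Menten rates for it; the rate law, noise level, and parameter values are textbook assumptions, not the paper's Bayesian utility functions:

```python
import numpy as np

# Hedged sketch of the 60/40 design rule for Michaelis-Menten kinetics:
# v = Vmax * S / (KM + S). Parameter values below are illustrative.
KM, Vmax, n_points = 2.0, 10.0, 10

n_low = int(round(0.6 * n_points))                       # ~60 % of points below KM
S = np.concatenate([
    np.linspace(0.1 * KM, KM, n_low),                    # substrate points below KM
    np.linspace(1.5 * KM, 10 * KM, n_points - n_low),    # substrate points above KM
])

rng = np.random.default_rng(0)
v = Vmax * S / (KM + S) + rng.normal(0, 0.1, S.size)     # simulated noisy rates

print(np.column_stack([S, v]))   # design points and simulated measurements
```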
Abstract:
This paper describes a new method for reconstructing a 3D surface using a small number, e.g. 10, of 2D photographic images. The images are taken from different viewing directions by a perspective camera with full prior knowledge of the camera configurations. The reconstructed object's surface is represented by a set of triangular facets. We empirically demonstrate that if the viewing directions are uniformly distributed around the object's viewing sphere, then the reconstructed 3D points cluster closely on highly curved parts of the surface and are widely spread on smooth or flat parts. The advantage of this property is that the reconstructed points along a surface or a contour generator are not undersampled or underrepresented, because surfaces or contours should be sampled or represented more densely with points where their curvature is high. The more complex the contour's shape, the greater the number of points required, and this greater number of points is automatically generated by the proposed method. Given that the viewing directions are uniformly distributed, the number and distribution of the reconstructed points depend on the shape, or the curvature, of the surface, regardless of the size of the surface or of the object.
Abstract:
This paper describes a new method for reconstructing 3D surface points and a wireframe on the surface of a freeform object using a small number, e.g. 10, of 2D photographic images. The images are taken from different viewing directions by a perspective camera with full prior knowledge of the camera configurations. The reconstructed surface points are frontier points, and the wireframe is a network of contour generators. Both are reconstructed by pairing apparent contours in the 2D images. Unlike previous works, we empirically demonstrate that if the viewing directions are uniformly distributed around the object's viewing sphere, then the reconstructed 3D points automatically cluster closely on highly curved parts of the surface and are widely spread on smooth or flat parts. The advantage of this property is that the reconstructed points along a surface or a contour generator are not under-sampled or under-represented, because surfaces or contours should be sampled or represented more densely with points where their curvature is high. The more complex the contour's shape, the greater the number of points required, and this greater number of points is automatically generated by the proposed method. Given that the viewing directions are uniformly distributed, the number and distribution of the reconstructed points depend on the shape, or the curvature, of the surface, regardless of the size of the surface or of the object. The unique pattern of the reconstructed points and contours may be used in 3D object recognition and measurement without computationally intensive full surface reconstruction. Results are obtained from both computer-generated and real objects.
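Both of the preceding abstracts assume viewing directions uniformly distributed over the object's viewing sphere; one standard way to generate such directions (a sketch under that assumption, not necessarily the authors' sampling scheme) is the Fibonacci spiral on the sphere:

```python
import numpy as np

def fibonacci_sphere(n):
    """Return n approximately uniformly distributed unit vectors,
    usable as viewing directions around an object's viewing sphere."""
    i = np.arange(n)
    phi = np.pi * (3.0 - np.sqrt(5.0)) * i        # golden-angle increments
    z = 1.0 - 2.0 * (i + 0.5) / n                 # uniform spacing in z
    r = np.sqrt(1.0 - z * z)
    return np.column_stack([r * np.cos(phi), r * np.sin(phi), z])

dirs = fibonacci_sphere(10)       # e.g. the ~10 views used in the papers
print(np.round(dirs, 3))
```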
Abstract:
The conformational properties of the hybrid amphiphile formed by the conjugation of a hydrophobic peptide bearing four phenylalanine (Phe) residues with hydrophilic poly(ethylene glycol) have been investigated using quantum mechanical calculations and atomistic molecular dynamics simulations. The intrinsic conformational preferences of the peptide were examined using the building-up search procedure combined with B3LYP/6-31G(d) geometry optimizations, which led to the identification of 78, 78, and 92 minimum energy structures for the peptides containing one, two, and four Phe residues, respectively. These peptides tend to adopt regular organizations involving turn-like motifs that define ribbon- or helical-like arrangements. Furthermore, the calculations indicate that backbone···side-chain interactions involving the N-H of the amide groups and the π clouds of the aromatic rings play a crucial role in Phe-containing peptides. On the other hand, MD simulations of the complete amphiphile in aqueous solution showed that the polymer fragment rapidly unfolds, maximizing its contacts with the polar solvent, even though the hydrophobic peptide reduces the number of waters of hydration with respect to an individual polymer chain of equivalent molecular weight. In spite of the small effect of the peptide on the hydrodynamic properties of the polymer, we conclude that the two counterparts of the amphiphile tend to organize as independent modules.