907 results for Virtual 3D model
Abstract:
Handling security intrusions in large systems is a problem because current IDS-based approaches do not scale. This paper describes the RECLAMO project, in which an architecture for an Automated Intrusion Response System (AIRS) is being proposed. This system will infer the most appropriate response for a given attack, taking into account the attack type, context information, and the trust and reputation of the reporting IDSs. RECLAMO proposes a novel approach: diverting the attack to a specific honeynet that has been dynamically built based on the attack information. Among the components of the RECLAMO architecture, this paper focuses mainly on defining a trust and reputation management model, essential to recognize whether IDSs are exhibiting honest behavior and, therefore, whether their alerts should be accepted as true. Experimental results confirm that our model helps to encourage or discourage the launch of the automatic reaction process.
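A minimal sketch of how a reputation-weighted decision of this kind can be expressed, assuming a simple weighted-average trust score and an acceptance threshold (both hypothetical; the project's actual model is more elaborate):

    # Hypothetical illustration: decide whether to launch the automatic reaction
    # based on the reputation-weighted confidence of the reporting IDSs.
    def should_react(alerts, reputations, threshold=0.6):
        """alerts: {ids_id: confidence in [0, 1]}; reputations: {ids_id: trust in [0, 1]}."""
        weighted = [reputations[i] * c for i, c in alerts.items() if i in reputations]
        if not weighted:
            return False
        score = sum(weighted) / len(weighted)   # aggregate, reputation-weighted confidence
        return score >= threshold               # trigger the AIRS reaction only if trusted enough

    # Example: two trusted IDSs and one with poor reputation report the same attack.
    print(should_react({"ids1": 0.9, "ids2": 0.8, "ids3": 0.7},
                       {"ids1": 0.9, "ids2": 0.85, "ids3": 0.2}))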
Abstract:
The project arises from the need to develop improved teaching methodologies in the field of the mechanics of continuous media. The objective is to offer students a learning process through which they acquire the necessary theoretical knowledge, cognitive skills, and the responsibility and autonomy required for professional development in this area. Traditionally, the concepts of these subjects have been taught through lectures and laboratory practice; during these lessons the students' attitude is usually passive, and their effectiveness is therefore poor. The proposed methodology has already been successfully employed at universities such as the University of Bochum (Germany) and the University of South Australia, and it aims to improve the effectiveness of knowledge acquisition through the student's use of a virtual laboratory. This laboratory makes it possible to adapt the curricula and learning techniques to the European Higher Education Area and to improve current learning processes at the University School of Public Works Engineers (EUITOP) of the Technical University of Madrid (UPM), since there are no laboratories for this specialization. The virtual space is created using a software platform built on OpenSim, which manages 3D virtual worlds, and on LSL (Linden Scripting Language), which gives objects specific behaviors. The student or user can access this virtual world through an avatar (their character in the virtual world) and can carry out practical work within the space created for this purpose, at any time, needing only a computer with Internet access and a viewer. The virtual laboratory has three sections: the virtual meeting rooms, where the avatar can interact with peers, solve problems, and exchange the documentation available in the virtual library; the interactive game room, where the avatar has to solve a number of problems within a time limit; and the video room, where students can watch instructional videos and receive group lessons. Each interactive audiovisual element is accompanied by explanations that frame it within the area of knowledge and enable students to begin to acquire the vocabulary and practice of the profession for which they are being trained. Plane elasticity concepts are introduced through tension and compression tests on steel and concrete specimens. The behavior of reticulated and articulated structures is reinforced by interactive games, and the concepts of tension, compression, and local and global buckling will be reinforced by tests in which articulated structures are loaded to failure. Pure bending and simple and combined torsion will be studied by observing a flexible specimen. Earthquake-resistant design of buildings will be illustrated by a laboratory test video.
Abstract:
Purpose: A fully three-dimensional (3D), massively parallelizable list-mode ordered-subsets expectation-maximization (LM-OSEM) reconstruction algorithm has been developed for high-resolution PET cameras. System response probabilities are calculated online from a set of parameters derived from Monte Carlo simulations. The shape of the system response for a given line of response (LOR) has been shown to be asymmetrical around the LOR. This work focuses on the development of efficient region-search techniques to sample the system response probabilities, suitable for asymmetric kernel models, including elliptical Gaussian models that allow for high accuracy and high parallelization efficiency. The novel region-search scheme using variable kernel models is applied in the proposed PET reconstruction algorithm. Methods: A novel region-search technique has been used to sample the probability density function over a small dynamic subset of the field of view that constitutes the region of response (ROR). The ROR is identified around the LOR by searching for any voxel within a dynamically calculated contour. The contour condition is currently defined as a fixed threshold over the posterior probability, and arbitrary kernel models can be applied using a numerical approach. The processing of the LORs is distributed in batches among the available computing devices; individual LORs are then processed by different processing units. In this way, both multicore and multiple many-core processing units can be exploited efficiently. Tests have been conducted with probability models that take into account non-collinearity, positron range, and crystal penetration effects, which produce tubes of response with varying elliptical sections whose axes are a function of the crystal's thickness and the angle of incidence of the given LOR. The algorithm treats the probability model as a 3D scalar field defined within a reference system aligned with the ideal LOR. Results: The new technique provides superior image quality in terms of signal-to-noise ratio compared with the histogram-mode method based on precomputed system matrices available for a commercial small-animal scanner. Reconstruction times can be kept low with the use of multicore and many-core architectures, including multiple graphics processing units. Conclusions: A highly parallelizable LM reconstruction method has been proposed, based on Monte Carlo simulations and new parallelization techniques aimed at improving the reconstruction speed and the image signal-to-noise ratio of a given OSEM algorithm. The method has been validated using simulated and real phantoms. A special advantage of the new method is the possibility of dynamically defining the cut-off threshold over the calculated probabilities, thus allowing direct control over the trade-off between speed and quality during the reconstruction.
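For reference, a commonly used form of the list-mode OSEM update that such an algorithm parallelizes over events is sketched below (standard formulation, not specific to this paper); here a_{ij} is the system response probability that an emission in voxel j is detected along event i's LOR, s_j is the sensitivity of voxel j, and S_b is the current subset of list-mode events. The contribution described above lies in how the a_{ij} are sampled within the region of response, not in the update itself:

    \[ f_j^{(n,b+1)} = \frac{f_j^{(n,b)}}{s_j} \sum_{i \in S_b} \frac{a_{ij}}{\sum_k a_{ik}\, f_k^{(n,b)}} \]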
Abstract:
Recent developments in the area of multiscale modeling of fiber-reinforced polymers are presented. The overall strategy takes advantage of the separation of length scales between the different entities (ply, laminate, and component) found in composite structures. This allows us to carry out multiscale modeling by computing the properties of one entity (e.g., individual plies) at the relevant length scale, homogenizing the results into a constitutive model, and passing this information to the next length scale to determine the mechanical behavior of the larger entity (e.g., the laminate). As a result, high-fidelity numerical simulations of the mechanical behavior of composite coupons and small components are nowadays feasible starting from the matrix, fiber, and interface properties and their spatial distribution. Finally, the roadmap is outlined for extending the current strategy to include functional properties and processing in the simulation scheme.
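As a minimal example of the homogenization step at the lowest scale (a textbook rule-of-mixtures estimate, not the specific models reviewed above), the longitudinal ply stiffness and Poisson's ratio follow from the fiber volume fraction V_f and the fiber and matrix properties E_f, E_m, \nu_f, \nu_m:

    \[ E_1 = V_f E_f + (1 - V_f) E_m, \qquad \nu_{12} = V_f \nu_f + (1 - V_f) \nu_m \]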
Abstract:
Cloud computing, and more particularly private IaaS, is seen as a mature technology with a myriad of solutions to choose from. However, this disparity of solutions and products has instilled in potential adopters the fear of vendor and data lock-in. Several competing and incompatible interfaces and management styles have increased these fears even further. On top of this, cloud users might want to work with several solutions at the same time, an integration that is difficult to achieve in practice. In this Master's Thesis I propose a management architecture that tries to solve these problems; it provides a generalized control mechanism for several cloud infrastructures and an interface that can meet the requirements of the users. This management architecture is designed in a modular way and uses a generic information model. I have validated the approach through the implementation of the components needed for this architecture to support a sample private IaaS solution: OpenStack.
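A minimal sketch of the kind of modular driver abstraction such a generalized control mechanism relies on, assuming a hypothetical CloudDriver interface and an OpenStackDriver stub (the names are illustrative, not the thesis's actual API):

    from abc import ABC, abstractmethod

    class CloudDriver(ABC):
        """Generic control interface implemented by every supported IaaS back end."""
        @abstractmethod
        def start_instance(self, image_id: str, flavor: str) -> str: ...
        @abstractmethod
        def stop_instance(self, instance_id: str) -> None: ...

    class OpenStackDriver(CloudDriver):
        """Illustrative stub: a real driver would call the OpenStack APIs here."""
        def start_instance(self, image_id: str, flavor: str) -> str:
            print(f"booting {image_id} as {flavor} on OpenStack")
            return "instance-001"
        def stop_instance(self, instance_id: str) -> None:
            print(f"stopping {instance_id}")

    # The management layer only talks to the generic interface, so supporting a
    # new IaaS solution means adding a new driver, not changing the manager.
    driver: CloudDriver = OpenStackDriver()
    vm = driver.start_instance("ubuntu-22.04", "m1.small")
    driver.stop_instance(vm)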
Abstract:
Delamination reduces the strength of composites, mainly in compression. Several methods exist to overcome this problem, but they are either not feasible for large-scale production or too expensive. 3D composites are a promising solution.
Abstract:
In this paper we present an innovative technique to tackle the problem of automatic road sign detection and tracking using an on-board stereo camera. It involves a continuous 3D analysis of the road sign during the whole tracking process. First, a color- and appearance-based model is applied to generate road sign candidates in both stereo images. A sparse disparity map between the left and right images is then created for each candidate by using contour-based and SURF-based matching in the far and short range, respectively. Once the map has been computed, the correspondences are back-projected to generate a cloud of 3D points, and the best-fit plane is computed through RANSAC, ensuring robustness to outliers. Temporal consistency is enforced by means of a Kalman filter, which exploits the intrinsic smoothness of the 3D camera motion in traffic environments. Additionally, the estimation of the plane allows deformations due to perspective to be corrected, thus easing further sign classification.
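A minimal sketch of the RANSAC plane-fitting step on the back-projected 3D points, assuming the points are given as a NumPy array (illustrative only; the iteration count and distance threshold are arbitrary):

    import numpy as np

    def ransac_plane(points, iters=200, dist_thresh=0.02, rng=np.random.default_rng(0)):
        """Fit a plane (n, d) with n.p + d = 0 to an (N, 3) point cloud, robust to outliers."""
        best_inliers, best_plane = 0, None
        for _ in range(iters):
            sample = points[rng.choice(len(points), 3, replace=False)]
            n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
            norm = np.linalg.norm(n)
            if norm < 1e-9:                      # degenerate (collinear) sample, try again
                continue
            n /= norm
            d = -n @ sample[0]
            inliers = np.sum(np.abs(points @ n + d) < dist_thresh)
            if inliers > best_inliers:           # keep the plane supported by most points
                best_inliers, best_plane = inliers, (n, d)
        return best_plane, best_inliers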
Abstract:
In this paper, we present a depth-color scene modeling strategy for indoor 3D content generation. It combines the depth and visual information provided by a low-cost active depth camera to improve the accuracy of the acquired depth maps, taking into account the different dynamic nature of the scene elements. Accurate depth and color models of the scene background are iteratively built and used to detect moving elements in the scene. The acquired depth data are continuously processed with an innovative joint-bilateral filter that efficiently combines depth and visual information thanks to the analysis of an edge-uncertainty map and the detected foreground regions. The main advantages of the proposed approach are: removing spatial noise and temporal random fluctuations from the depth maps; refining depth data at object boundaries; and iteratively generating a robust depth and color background model together with an accurate moving-object silhouette.
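A simplified sketch of a joint (cross) bilateral filter, in which the range weight is taken from the color/intensity image while the depth map is smoothed; this is the generic formulation only, not the exact filter described above, which additionally exploits the edge-uncertainty map and foreground masks:

    import numpy as np

    def joint_bilateral(depth, gray, radius=3, sigma_s=2.0, sigma_r=10.0):
        """Smooth 'depth' using spatial closeness and similarity in the guidance image 'gray'."""
        h, w = depth.shape
        out = np.zeros((h, w), dtype=float)
        ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
        spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))      # spatial (domain) kernel
        pad_d = np.pad(depth.astype(float), radius, mode="edge")
        pad_g = np.pad(gray.astype(float), radius, mode="edge")
        for y in range(h):
            for x in range(w):
                win_d = pad_d[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
                win_g = pad_g[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
                rng = np.exp(-((win_g - gray[y, x])**2) / (2 * sigma_r**2))  # range kernel from color
                wgt = spatial * rng
                out[y, x] = np.sum(wgt * win_d) / np.sum(wgt)
        return out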
Abstract:
Markerless video-based human pose estimation algorithms face a high-dimensional problem that is frequently broken down into several lower-dimensional ones by estimating the pose of each limb separately. However, in order to do so they need to reliably locate the torso, for which they typically rely on time coherence and tracking algorithms. Losing track usually results in catastrophic failure of the process, requiring human intervention and thus precluding their usage in real-time applications. We propose a very fast rough pose estimation scheme based on global shape descriptors built on 3D Zernike moments. Using an articulated model that we configure in many poses, a large database of descriptor/pose pairs can be computed off-line. Thus, the only steps that must be carried out on-line are the extraction of the descriptors for each input volume and a search against the database to retrieve the most likely poses. While the result of this process is not a fine pose estimation, it can help more sophisticated algorithms regain track or make more educated guesses when creating new particles in particle-filter-based tracking schemes. We have achieved a performance of about ten frames per second on a single computer using a database of about one million entries.
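A minimal sketch of the on-line lookup step: given a descriptor extracted from the input volume, find the k most likely poses by nearest-neighbor search against the precomputed database (illustrative only; the Zernike descriptor computation is not shown and the database here is random and scaled down):

    import numpy as np

    # Precomputed off-line: one global shape descriptor per stored pose (random stand-in data).
    descriptors = np.random.rand(100_000, 64).astype(np.float32)
    poses = np.arange(len(descriptors))            # identifiers of the stored poses

    def most_likely_poses(query, k=5):
        """Return the ids of the k database poses whose descriptors are closest to 'query'."""
        dists = np.linalg.norm(descriptors - query, axis=1)
        nearest = np.argpartition(dists, k)[:k]    # k smallest distances, unordered
        return poses[nearest[np.argsort(dists[nearest])]]

    print(most_likely_poses(np.random.rand(64).astype(np.float32)))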
Abstract:
This work starts by presenting in detail the disorder known as unilateral spatial neglect (symptoms, types, causes, assessment, and treatment) to provide the background for the study's main objective: the analysis of the virtual reality solutions reported in the literature for the treatment of this disorder, including an extensive description of each previous study found on the topic. Next, three classical rehabilitation techniques were implemented in virtual reality as a proof of concept: optokinetic stimulation, eye patching, and prism adaptation; in addition, a 3D application was developed to assess the degree and type of neglect suffered by patients. Together, these constitute a first step towards an alternative, more personalized, and more effective approach to the treatment of the disorder. Finally, the conclusions analyze the main advantages and disadvantages encountered in the use of these technologies when applied to this disorder and suggest future work that may derive from this study.
Abstract:
Shopping agents are web-based applications that help consumers find appropriate products in the context of e-commerce. In this paper we argue for the utility of advanced model-based techniques, recently proposed in the fields of Artificial Intelligence and Knowledge Engineering, to increase the level of support provided by this type of application. We illustrate the approach with a virtual sales assistant that dynamically configures a product according to the needs and preferences of customers.
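A toy illustration of model-based product configuration: filtering a small catalogue model against explicit customer requirements (purely illustrative; the paper's assistant uses far richer knowledge models than this):

    # Toy catalogue: each product is described by a small attribute model.
    catalogue = [
        {"name": "laptop-A", "ram_gb": 8,  "weight_kg": 1.2, "price": 900},
        {"name": "laptop-B", "ram_gb": 16, "weight_kg": 2.1, "price": 1100},
        {"name": "laptop-C", "ram_gb": 16, "weight_kg": 1.4, "price": 1500},
    ]

    def configure(requirements):
        """Return the products satisfying every customer requirement (attribute -> predicate)."""
        return [p for p in catalogue
                if all(pred(p[attr]) for attr, pred in requirements.items())]

    # Customer needs at least 16 GB of RAM, under 1.5 kg, and a budget of 1600.
    print(configure({"ram_gb": lambda v: v >= 16,
                     "weight_kg": lambda v: v <= 1.5,
                     "price": lambda v: v <= 1600}))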
Abstract:
In recent years, the continuous incorporation of new technologies into the learning process has been an important factor in education (1). The Technical University of Madrid (UPM) promotes educational innovation processes and develops projects related to the improvement of educational quality. The experience that we present fits into the Educational Innovation Project (EIP) of the E.U. of Agricultural Engineering of Madrid. One of the main objectives of the EIP is to take advantage of the new opportunities offered by Learning and Knowledge Technologies in order to enrich the educational processes and teaching management (2).
Abstract:
The main objective of this Final Degree Project is the construction, assembly, and calibration of a self-replicating Prusa Mendel 3D printer able to work in polar coordinates, which opens the door to research on the quality, tolerances, and structural strength of the parts it produces compared with those manufactured by Cartesian printers. The document provides a step-by-step assembly guide and a list of all the components, printable and non-printable, that make up the 3D printer. It also analyzes and compares the options for the electronics and the extruder, as well as the possible errors and solutions that may be encountered while building one of these machines. Finally, a calibration guide for Skeinforge 41 is provided in order to obtain high-quality prints.
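As a minimal illustration of the coordinate change involved (not part of the project's firmware), a Cartesian print-head target can be mapped to the polar coordinates such a machine would drive:

    import math

    def cartesian_to_polar(x, y):
        """Map a Cartesian print-head target (x, y) to polar coordinates (r, theta in degrees)."""
        r = math.hypot(x, y)
        theta = math.degrees(math.atan2(y, x))
        return r, theta

    print(cartesian_to_polar(30.0, 40.0))   # -> (50.0, 53.13...)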
Abstract:
Unraveling how the brain works is one of the main challenges faced by current science. A field of study that has aroused great expectations and interest is the analysis of the cortical structure from a morphological point of view, with the aim of building a simulation of the brain at the molecular level. This is expected to deepen the study of many neurological and pathological diseases. This project addresses the study of the soma and of dendritic spines from the point of view of theoretical neuromorphology. In the state of the art, when the morphological characteristics of a three-dimensional neuron are analyzed, the soma is commonly ignored or, in the best case, replaced by a simple sphere. In fact, the concept of soma is abstract because there is no strict, robust definition of exactly where it ends and the dendrites begin. In this project, a mathematical definition of the soma is reached for the first time, determining what a soma is. With the aim of simulating somas, the attributes used in the state of the art are examined. These properties, generic in nature, do not specify a unique morphology, so a method is proposed that combines local and global morphological properties. Using these features, the cell body is categorized into different classes by means of a new subtype of dynamic Bayesian network adapted to space. From the results, the existence of different classes of somas is discussed and the differences between pyramidal somas from distinct brain layers are uncovered. Based on the mathematical model, virtual somas are simulated for the first time. Some spine morphologies have been attributed to certain cognitive behaviors, so it is of interest to determine the existing spine classes and relate them to functions of brain activity. The most widespread classification (Peters and Kaiserman-Abramof, 1970) presents an ambiguous and subjective definition that depends on the interpretation of each individual and is therefore debatable. This study is based on a set of descriptors extracted by means of a local topological analysis technique for 3D representations. On these data, the aim is to find the most suitable set of classes in which to group the spines, as well as to describe each group by unambiguous rules. From the results, the existence of a continuum of spines and the properties that characterize each spine subtype are discussed.
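A minimal sketch of the kind of unsupervised grouping applied to the spine descriptors, here using a plain Gaussian-mixture clustering from scikit-learn on random stand-in data as a generic illustration (the thesis's own clustering and rule-extraction procedure is not reproduced here):

    import numpy as np
    from sklearn.mixture import GaussianMixture

    # Stand-in data: one topological descriptor vector per dendritic spine.
    descriptors = np.random.rand(500, 12)

    # Fit a small range of cluster counts and keep the model with the lowest BIC,
    # a common way to discuss how many spine classes the data actually supports.
    models = [GaussianMixture(n_components=k, random_state=0).fit(descriptors)
              for k in range(2, 7)]
    best = min(models, key=lambda m: m.bic(descriptors))
    labels = best.predict(descriptors)
    print("selected number of classes:", best.n_components)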
Abstract:
This work covers the whole process carried out to redesign an automatic tutoring system that is integrated with virtual laboratories developed for students to carry out practical assignments within 3D virtual environments. The main goals of this redesign are to improve the performance of the automatic tutoring system, making it more efficient and therefore allowing more students to perform a practical session at the same time, and to allow the tutor to be integrated with other graphics engines at a relatively low cost. First, the main concepts handled in this work are introduced, together with some aspects of the work preceding this redesign, namely the previous version of the tutor bound to the OpenSim platform. Next, the functional requirements the new design must meet and the advantages it provides are detailed. The development of the work is then explained, showing how the old tutoring system has been restructured, how an object-oriented design has been applied, and the packages and classes that make it up. Finally, the conclusions drawn during the development of the work are presented, as well as the implications of the work shown here for future developments.
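A minimal sketch of the kind of engine-abstraction layer that makes such an integration cheap, assuming a hypothetical VirtualWorldAdapter interface with an OpenSim implementation stub (the names are illustrative, not the actual classes of the redesign):

    from abc import ABC, abstractmethod

    class VirtualWorldAdapter(ABC):
        """Everything the automatic tutor needs from a graphics engine goes through this interface."""
        @abstractmethod
        def send_message(self, student_id: str, text: str) -> None: ...
        @abstractmethod
        def get_student_action(self, student_id: str) -> str: ...

    class OpenSimAdapter(VirtualWorldAdapter):
        """Illustrative stub: a real adapter would talk to the OpenSim server here."""
        def send_message(self, student_id: str, text: str) -> None:
            print(f"[OpenSim] to {student_id}: {text}")
        def get_student_action(self, student_id: str) -> str:
            return "opened_valve_3"   # placeholder for the student's last action in the lab

    # The tutoring logic depends only on the interface, so plugging in another
    # graphics engine means writing a new adapter rather than touching the tutor.
    world: VirtualWorldAdapter = OpenSimAdapter()
    last = world.get_student_action("student-42")
    world.send_message("student-42", f"Hint: check step 2 (last action: {last})")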