88 results for "Redes ad hoc veiculares" (vehicular ad hoc networks; computer networks)


Relevance: 100.00%

Abstract:

This report describes the work of the teaching network for the monitoring and quality control of the second-year courses of the Grado en Ingeniería Informática taught at the Escuela Politécnica Superior of the Universidad de Alicante. In this edition, the network's work focused on studying the training needs and the contents taught in the courses. The result has been the creation of a dependency graph between second- and first-year courses (and among second-year courses themselves), a map of the training needs required to enter the second-year courses, and a map of the contents taught in them. In addition, an online calendar of assessments for the 2014-2015 academic year was produced.
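A course-dependency graph of the kind described can be sketched as a plain adjacency mapping. This is only an illustration: the course names below are hypothetical placeholders, not the actual subjects of the degree.

```python
# Minimal sketch of a course-dependency graph: each second-year course
# maps to the (hypothetical) courses whose contents it builds on.
dependencies = {
    "Databases II": ["Databases I", "Programming II"],
    "Operating Systems": ["Computer Architecture", "Programming II"],
    "Networks": ["Computer Architecture"],
}

def prerequisites(course, graph):
    """Collect the full set of direct and transitive prerequisites."""
    seen = set()
    stack = list(graph.get(course, []))
    while stack:
        c = stack.pop()
        if c not in seen:
            seen.add(c)
            stack.extend(graph.get(c, []))
    return seen
```

From such a mapping, the training-needs map for a second-year course is simply the transitive closure of its dependencies.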

Relevance: 100.00%

Abstract:

This report describes the project carried out to establish the mechanisms and procedures for the monitoring and quality control of the second-year courses of the Grado en Ingeniería Multimedia, academic year 2013/2014. Specifically, the control mechanisms focus on the planning of the teaching sessions and the assessment activities carried out in each of these courses.

Relevance: 100.00%

Abstract:

The semantic web is a new web paradigm for accessing, searching, sharing, and managing information through the combination of technologies and knowledge-management structures. The semantic-web concept provides tools for storing, exchanging, and querying this information through the development and inclusion of metadata and ontologies of the body of knowledge. The structure it gives the data allows it to be queried automatically by human users or computer systems, improving its interoperability. The development of the semantic web represents an evolution of web development in general towards a more intelligent web, or web 3.0. This paradigm can be exploited in teaching-learning processes to structure, store, and share contents through automatic query systems hosted on semantic webs covering the bodies of knowledge of the subjects. The computing discipline is especially well suited to this purpose because of its complexity and the great variety of terms it handles. Moreover, its continuously evolving development favours the introduction of automatic mechanisms for maintaining and updating new contents.
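The metadata and ontologies mentioned above are typically expressed as subject-predicate-object triples (RDF) and queried with SPARQL. As a minimal sketch of the idea, the automatic querying of such triples can be illustrated in pure Python; the subjects and predicates below are hypothetical examples, not an actual ontology.

```python
# Toy triple store sketching how semantic-web metadata can be queried
# automatically.  Real systems use RDF stores and SPARQL engines.
triples = [
    ("course:AI", "hasTopic", "search"),
    ("course:AI", "hasTopic", "machine learning"),
    ("course:Networks", "hasTopic", "routing"),
]

def query(triples, s=None, p=None, o=None):
    """Return the triples matching the pattern (None acts as a wildcard)."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]
```

The pattern-with-wildcards interface mirrors a basic SPARQL triple pattern such as `?course hasTopic "routing"`.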

Relevance: 100.00%

Abstract:

Many applications, including object reconstruction, robot guidance, and scene mapping, require the registration of multiple views of a scene to generate a complete geometric and appearance model of it. In real situations, the transformations between views are unknown and expert inference must be applied to estimate them. In the last few years, the emergence of low-cost depth-sensing cameras has strengthened research on this topic, motivating a plethora of new applications. Although these cameras have enough resolution and accuracy for many applications, some situations cannot be solved with general state-of-the-art registration methods due to the signal-to-noise ratio (SNR) and the resolution of the data provided. The problem of working with low-SNR data may, in general terms, appear in any 3D system, so novel solutions are needed in this respect. In this paper, we propose a method, μ-MAR, able to perform both coarse and fine registration of sets of 3D points provided by low-cost depth-sensing cameras (although it is not restricted to these sensors) into a common coordinate system. The method overcomes the noisy-data problem by using a model-based solution for multi-plane registration. Specifically, it iteratively registers 3D markers composed of multiple planes extracted from points of multiple views of the scene. As the markers and the object of interest are static in the scenario, the transformations obtained for the markers are applied to the object in order to reconstruct it. Experiments have been performed using synthetic and real data. The synthetic data allow a qualitative and quantitative evaluation by means of visual inspection and the Hausdorff distance, respectively. The real-data experiments show the performance of the proposal using data acquired by a Primesense Carmine RGB-D sensor. The method has been compared to several state-of-the-art methods. The results show the good performance of μ-MAR, which registers objects with high accuracy in the presence of noisy data and outperforms the existing methods.
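The core operation in any registration pipeline like this is estimating a rigid transformation between corresponding point sets. As an illustration only (this is not the μ-MAR algorithm itself), the closed-form 2D case with known correspondences can be sketched as:

```python
import math

def rigid_fit_2d(src, dst):
    """Least-squares rotation + translation mapping src points onto dst,
    assuming known correspondences.  Returns (angle, tx, ty)."""
    n = len(src)
    csx = sum(x for x, _ in src) / n; csy = sum(y for _, y in src) / n
    cdx = sum(x for x, _ in dst) / n; cdy = sum(y for _, y in dst) / n
    # Cross- and dot-sums of the centred points give the optimal angle.
    sxx = sum((x - csx) * (u - cdx) + (y - csy) * (v - cdy)
              for (x, y), (u, v) in zip(src, dst))
    sxy = sum((x - csx) * (v - cdy) - (y - csy) * (u - cdx)
              for (x, y), (u, v) in zip(src, dst))
    angle = math.atan2(sxy, sxx)
    c, s = math.cos(angle), math.sin(angle)
    tx = cdx - (c * csx - s * csy)
    ty = cdy - (s * csx + c * csy)
    return angle, tx, ty
```

In 3D, the analogous closed-form solution uses the SVD of the cross-covariance matrix; μ-MAR builds on top of such estimates by registering the multi-plane markers rather than raw noisy points.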

Relevance: 100.00%

Abstract:

In this work we present an exhaustive analysis of how tutorial action was carried out at the Escuela Politécnica Superior of the Universidad de Alicante during the 2014/2015 academic year. The markedly voluntary nature of tutorial action in our school, for both students and tutors, allows the work to be carried out more satisfactorily by both parties, since they have decided to follow the plan themselves rather than as an imposition. We also present our experiences in running the Taller de Gestión Eficaz del Tiempo (Effective Time Management Workshop), held in our school under the guidance of the expert Nuria Alberquilla, whose aims include learning and putting into practice techniques for managing time effectively, achieving a better balance between academic and personal life, and identifying the main external and internal factors that influence the results obtained and how to improve them.

Relevance: 100.00%

Abstract:

As in previous years, the Escuela Politécnica runs the Plan de Acción Tutorial (PAT), promoted by the Vicerrectorado de Estudios, Formación y Calidad and coordinated by the Instituto de Ciencias de la Educación. The plan is open to all tutors who wish to take part and to all students, who may voluntarily tick the option to participate when enrolling, as well as to those who, despite not ticking that option, eventually decide to follow the tutorial action plan. This voluntary participation and enrolment allows the work to be carried out more satisfactorily by both tutors and students, since they have decided to follow the plan themselves rather than as an imposition. In this summary we present our experiences in running the PAT at our school.

Relevance: 100.00%

Abstract:

The sustainability strategy in urban spaces arises from reflecting on how to achieve a more habitable city, and it is materialized in a series of sustainable transformations aimed at humanizing different environments so that they can be used and enjoyed by everyone without exception and regardless of their ability. Modern communication technologies open up new opportunities to analyze the efficiency of the use of urban spaces from several points of view: adequacy of facilities, usability, and social-integration capabilities. The research presented in this paper proposes a method to analyze movement accessibility in sustainable cities based on radio-frequency technologies and the ubiquitous computing possibilities of the new Internet of Things paradigm. The proposal can be deployed in both indoor and outdoor environments to check specific locations of a city. Finally, a case study in a controlled context has been simulated to validate the proposal as a pre-deployment step in urban environments.
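One simple accessibility indicator that radio-frequency readers enable is the transit time between two checkpoints. The sketch below is an assumption about how such event logs could be processed, not the paper's actual method; the event format `(tag_id, checkpoint, timestamp)` is hypothetical.

```python
# Hypothetical sketch: RFID/BLE readers at city checkpoints log
# (tag_id, checkpoint, timestamp) events; the transit time between two
# checkpoints gives a simple accessibility indicator for that route.
def transit_times(events, start, end):
    """Seconds each tag took from its first sighting at `start`
    to its first later sighting at `end`."""
    first = {}
    times = {}
    for tag, point, t in sorted(events, key=lambda e: e[2]):
        if point == start and tag not in first:
            first[tag] = t
        elif point == end and tag in first and tag not in times:
            times[tag] = t - first[tag]
    return times
```

Aggregating these per-tag times (e.g., comparing routes with and without accessibility adaptations) would support the kind of movement-accessibility analysis the paper targets.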

Relevance: 100.00%

Abstract:

The Iterative Closest Point (ICP) algorithm is commonly used in engineering applications to solve the rigid registration problem for partially overlapping point sets that are pre-aligned with a coarse estimate of their relative positions. This iterative algorithm is applied in many areas, such as medicine for the volumetric reconstruction of tomography data, robotics to reconstruct surfaces or scenes using range-sensor information, industrial systems for the quality control of manufactured objects, or even biology to study the structure and folding of proteins. One of the algorithm's main problems is its high computational complexity (quadratic in the number of points in the non-optimized original variant) in a context where high-density point sets, acquired by high-resolution scanners, are processed. Many variants have been proposed in the literature whose goal is to improve performance, either by reducing the number of points or the required iterations, or by reducing the complexity of the most expensive phase: the closest-neighbor search. Despite decreasing the complexity, some of these variants tend to have a negative impact on the final registration precision or the convergence domain, thus limiting the possible application scenarios. The goal of this work is to improve the algorithm's computational cost so that a wider range of computationally demanding problems from among the ones described above can be addressed. For that purpose, an experimental and mathematical convergence analysis and validation of point-to-point distance metrics has been performed, taking into account distances with a lower computational cost than the Euclidean one, which is the de facto standard for the algorithm's implementations in the literature. In that analysis, the behavior of the algorithm in diverse topological spaces, characterized by different metrics, has been studied to check the convergence, efficacy, and cost of the method in order to determine which one offers the best results. Given that the distance calculation represents a significant part of the whole set of computations performed by the algorithm, any reduction in that operation is expected to significantly and positively affect the overall performance of the method. As a result, a performance improvement has been achieved by applying these reduced-cost metrics, whose quality in terms of convergence and error has been analyzed and experimentally validated as comparable to the Euclidean distance using a heterogeneous set of objects, scenarios, and initial situations.
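The point made above about metric cost can be illustrated with the closest-point search at the core of ICP. This is only a sketch (not the thesis implementation): the Manhattan and Chebyshev metrics avoid the multiplications of the squared-Euclidean distance while often preserving the nearest neighbor.

```python
# Brute-force closest-point search under three point-to-point metrics.
def euclidean2(p, q):
    """Squared Euclidean distance (the square root is not needed
    for nearest-neighbor comparisons)."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def manhattan(p, q):
    """L1 distance: additions only, no multiplications."""
    return sum(abs(a - b) for a, b in zip(p, q))

def chebyshev(p, q):
    """L-infinity distance: a single max over coordinate differences."""
    return max(abs(a - b) for a, b in zip(p, q))

def closest(p, points, metric):
    """Nearest neighbor of p in `points` under the given metric."""
    return min(points, key=lambda q: metric(p, q))
```

Whether the cheaper metrics preserve convergence and final registration error is exactly what the work's experimental analysis validates.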

Relevance: 100.00%

Abstract:

Paper presented at the V Jornadas de Computación Empotrada, Valladolid, 17-19 September 2014.

Relevance: 100.00%

Abstract:

In this work, we propose the use of the neural gas (NG), a neural network that uses an unsupervised Competitive Hebbian Learning (CHL) rule, to develop a reverse-engineering process. This is a simple and accurate method to reconstruct objects from point clouds obtained from multiple overlapping views using low-cost sensors. In contrast to other methods that may need several stages, including downsampling, noise filtering, and many other tasks, the NG automatically obtains the 3D model of the scanned objects. To demonstrate the validity of our proposal, we tested our method with several models and performed a study of the neural-network parameterization, computing the quality of representation and comparing the results with other neural methods, such as the growing neural gas and Kohonen maps, or classical methods such as Voxel Grid. We also reconstructed models acquired by low-cost sensors that can be used in virtual and augmented reality environments for redesign or manipulation purposes. Since the NG algorithm has a high computational cost, we propose its acceleration. We have redesigned and implemented the NG learning algorithm to fit it onto Graphics Processing Units using CUDA. A speed-up of 180× is obtained compared to the sequential CPU version.
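The neural-gas adaptation rule is rank-based: for each input sample, all units are sorted by distance to it and pulled towards it with a strength that decays with rank. A minimal sketch of one such step, with hypothetical learning parameters (`eps`, `lam`), might look like:

```python
import math

def ng_step(units, x, eps=0.5, lam=1.0):
    """One neural-gas adaptation step: rank units by distance to the
    input x, then pull each towards x with a rank-decaying strength."""
    ranked = sorted(range(len(units)),
                    key=lambda i: sum((u - v) ** 2
                                      for u, v in zip(units[i], x)))
    for rank, i in enumerate(ranked):
        h = eps * math.exp(-rank / lam)   # neighborhood function
        units[i] = tuple(u + h * (v - u) for u, v in zip(units[i], x))
    return units
```

The per-sample sort over all units is what makes NG expensive on large point clouds, and it is this ranking plus the per-unit updates that the GPU implementation parallelizes.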

Relevance: 100.00%

Abstract:

The research developed in this work consists of proposing a set of techniques for the management of social networks and their integration into the educational process. The proposals made are based on assumptions that have been tested with simple examples in a real university-teaching scenario. The results show that social networks have a greater capacity to spread information than educational web platforms. Moreover, educational social networks develop in a context of freedom of expression intrinsically linked to Internet freedom. In that context, users can write opinions or comments that the staff of schools may not like. However, this feature can be exploited to enrich the educational process and improve the quality of its results. The network has covered existing needs and created new ones. Thus, the figure of the Community Manager is proposed as an agent in the educational context who monitors the network, channels opinions, and provides a rapid response to academic problems.

Relevance: 100.00%

Abstract:

The use of 3D data in mobile-robotics applications provides valuable information about the robot's environment. However, the huge amount of 3D information is usually difficult to manage because the robot's storage and computing capabilities are insufficient. Therefore, a data-compression method is necessary to store and process this information while preserving as much of it as possible. A few methods have been proposed to compress 3D information. Nevertheless, there is no consistent public benchmark for comparing the results (compression level, reconstruction error, etc.) obtained with different methods. In this paper, we propose a dataset composed of a set of 3D point clouds with different structure and texture variability to evaluate the results obtained from 3D data-compression methods. We also provide useful tools for comparing compression methods, using as a baseline the results obtained by existing relevant compression methods.
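A common baseline among the compression methods such a benchmark would compare is voxel-grid downsampling: points falling in the same voxel are replaced by their centroid. A minimal sketch (not the paper's baseline implementation):

```python
# Voxel-grid downsampling: bucket points into cubic cells of side
# `voxel` and replace each cell's points with their centroid.
def voxel_downsample(points, voxel):
    cells = {}
    for p in points:
        key = tuple(int(c // voxel) for c in p)
        cells.setdefault(key, []).append(p)
    return [tuple(sum(c) / len(ps) for c in zip(*ps))
            for ps in cells.values()]
```

The compression level is controlled by the voxel size, and the reconstruction error can then be measured as a point-to-point distance between the original and the downsampled cloud, which is exactly the kind of metric a shared benchmark standardizes.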

Relevance: 100.00%

Abstract:

Automated human-behaviour analysis has been, and still remains, a challenging problem. It has been dealt with from different points of view, from primitive actions to human-interaction recognition. This paper focuses on trajectory analysis, which allows a simple high-level understanding of complex human behaviour. We propose a novel representation of trajectory data, called the Activity Description Vector (ADV), based on the number of times a person is at a specific point of the scenario and the local movements performed there. The ADV is calculated for each cell of the spatially sampled scenario, providing a cue for different clustering methods. The ADV representation has been tested as the input of several classic classifiers and compared to other approaches using CAVIAR dataset sequences, obtaining high accuracy in recognising the behaviour of people in a shopping centre.
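The idea of counting occurrences and local movements per grid cell can be sketched as follows. This is an illustrative reading of the descriptor, not the paper's exact ADV definition: here each cell stores visit counts plus counts of moves in the four axis directions.

```python
# Sketch of an ADV-style per-cell descriptor: for each grid cell of the
# scenario, count visits and the local movement directions observed.
def adv(trajectory, cell=10):
    desc = {}
    for (x0, y0), (x1, y1) in zip(trajectory, trajectory[1:]):
        key = (x0 // cell, y0 // cell)
        v = desc.setdefault(key, [0, 0, 0, 0, 0])  # visits, N, S, E, W
        v[0] += 1
        if y1 > y0: v[1] += 1
        elif y1 < y0: v[2] += 1
        if x1 > x0: v[3] += 1
        elif x1 < x0: v[4] += 1
    return desc
```

Concatenating the per-cell vectors over the sampled grid yields a fixed-length feature vector that can feed the clustering methods and classic classifiers mentioned above.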

Relevance: 100.00%

Abstract:

Human-behaviour recognition has been, and still remains, a challenging problem that involves different areas of computational intelligence. The automated understanding of people's activities from video sequences is an open research topic in which the computer-vision and pattern-recognition areas have made great efforts. In this paper, the problem is studied from a prediction point of view. We propose a novel method able to detect behaviour early, using only a small portion of the input, in addition to its ability to predict behaviour from new inputs. Specifically, we propose a predictive method based on a simple representation of the trajectories of a person in the scene, which allows a high-level understanding of the global human behaviour. The representation of the trajectory is used as a descriptor of the activity of the individual. The descriptors are used as the cue for a classification stage for pattern-recognition purposes. Classifiers are trained using the trajectory representation of the complete sequence; however, partial sequences are processed to evaluate the early prediction capabilities for a given observation time of the scene. The experiments have been carried out using three different datasets of the CAVIAR database, taking into account the behaviour of an individual. Additionally, different classic classifiers have been used in the experimentation in order to evaluate the robustness of the proposal. The results confirm the high accuracy of the proposal in the early recognition of people's behaviours.
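The early-prediction setting described above can be sketched as: truncate the trajectory to the observed fraction, compute a descriptor, and classify it. Everything below is a simplification for illustration; the descriptor (net displacement), the nearest-prototype rule, and the labels are hypothetical stand-ins for the paper's trajectory representation and classifiers.

```python
# Sketch of early behaviour prediction from a partial trajectory.
def descriptor(traj):
    """Toy trajectory descriptor: net displacement start -> end."""
    (x0, y0), (x1, y1) = traj[0], traj[-1]
    return (x1 - x0, y1 - y0)

def predict_early(traj, prototypes, fraction=0.5):
    """Classify using only the first `fraction` of the trajectory,
    by nearest prototype in descriptor space."""
    n = max(2, int(len(traj) * fraction))
    d = descriptor(traj[:n])
    return min(prototypes,
               key=lambda label: sum((a - b) ** 2
                                     for a, b in zip(d, prototypes[label])))
```

Sweeping `fraction` over increasing observation times is how the early-recognition accuracy curves in such experiments are produced.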

Relevance: 100.00%

Abstract:

Since the beginning of 3D computer-vision problems, techniques to reduce the data to make it treatable while preserving the important aspects of the scene have been necessary. Currently, with the new low-cost RGB-D sensors, which provide a stream of color and 3D data at approximately 30 frames per second, this has become even more relevant. Many applications make use of these sensors and need a preprocessing step to downsample the data in order to either reduce the processing time or improve the data (e.g., reducing noise or enhancing the important features). In this paper, we present a comparison of different downsampling techniques based on different principles. Concretely, five different downsampling methods are included: a bilinear-based method, a normal-based one, a color-based one, a combination of the normal- and color-based samplings, and a growing neural gas (GNG)-based approach. For the comparison, two different models acquired with the Blensor software have been used. Moreover, to evaluate the effect of the downsampling in a real application, a 3D non-rigid registration is performed with the sampled data. From the experimentation we can conclude that, depending on the purpose of the application, some kernels of the sampling methods can drastically improve the results. Bilinear- and GNG-based methods provide homogeneous point clouds, but the color-based and normal-based ones provide datasets with a higher density of points in areas with specific features. In the non-rigid application, if a color-based sampled point cloud is used, it is possible to properly register two datasets in cases where intensity data are relevant in the model, outperforming the results obtained if only homogeneous sampling is used.
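The contrast between homogeneous and feature-driven sampling can be illustrated with a much-simplified color-based criterion. This is only a sketch of the principle (the paper's kernels are more elaborate): keep the points whose color deviates strongly from the cloud's mean color, so the sample densifies around color features.

```python
# Simplified color-based sampling: retain points whose color differs
# from the cloud's mean color by more than `threshold` (L1 in RGB).
def color_sample(points, threshold):
    """points: list of ((x, y, z), (r, g, b)) tuples."""
    n = len(points)
    mean = [sum(c[i] for _, c in points) / n for i in range(3)]
    return [xyz for xyz, c in points
            if sum(abs(c[i] - mean[i]) for i in range(3)) > threshold]
```

Unlike a voxel grid, which keeps point density homogeneous, this kind of criterion concentrates the surviving points where intensity data carry information, which is why it helps the non-rigid registration when color is relevant in the model.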