998 results for Computer networks - Software
Abstract:
As in previous academic years, the Escuela Politécnica is running the Tutorial Action Plan (Plan de Acción Tutorial, PAT), promoted by the Vice-Rectorate for Studies, Training and Quality and coordinated by the Instituto de Ciencias de la Educación. The plan is open to all tutors who wish to take part, and to all students, who can voluntarily tick the option to join the plan when enrolling, as well as to those who, despite not ticking that option at enrolment, later decide to follow the tutorial action plan. This voluntary participation and enrolment allows the work to be carried out more satisfactorily by both tutors and students, since they have chosen to follow the plan themselves rather than having it imposed on them. In this abstract we present our experiences in the development of the PAT at our centre.
Abstract:
The sustainability strategy for urban spaces arises from reflecting on how to achieve a more habitable city, and materializes in a series of sustainable transformations aimed at humanizing different environments so that they can be used and enjoyed by everyone, without exception and regardless of ability. Modern communication technologies offer new opportunities to analyze how efficiently urban spaces are used from several points of view: adequacy of facilities, usability, and capacity for social integration. The research presented in this paper proposes a method to analyze movement accessibility in sustainable cities based on radio frequency technologies and the ubiquitous computing possibilities of the new Internet of Things paradigm. The proposal can be deployed in both indoor and outdoor environments to check specific locations of a city. Finally, a case study in a controlled context has been simulated to validate the proposal as a pre-deployment step in urban environments.
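As a loose sketch of how such an accessibility analysis might be implemented, the snippet below flags city segments with abnormally slow traversal times from RFID-style checkpoint readings. The reading format, the threshold, and every name in it are illustrative assumptions, not the paper's actual design:

```python
from collections import defaultdict
from statistics import median

# Hypothetical checkpoint readings: (tag_id, checkpoint_id, timestamp_seconds).
readings = [
    ("user1", "A", 0.0), ("user1", "B", 35.0), ("user1", "C", 140.0),
    ("user2", "A", 10.0), ("user2", "B", 42.0), ("user2", "C", 155.0),
]

def segment_times(readings):
    """Group readings by tag and compute traversal times per checkpoint pair."""
    by_tag = defaultdict(list)
    for tag, cp, t in readings:
        by_tag[tag].append((t, cp))
    times = defaultdict(list)
    for events in by_tag.values():
        events.sort()  # chronological order per person
        for (t0, a), (t1, b) in zip(events, events[1:]):
            times[(a, b)].append(t1 - t0)
    return times

# Flag segments whose median traversal time exceeds an (assumed) threshold,
# hinting at an accessibility barrier between those two locations.
THRESHOLD_S = 60.0
for segment, ts in segment_times(readings).items():
    if median(ts) > THRESHOLD_S:
        print(f"possible accessibility issue on segment {segment}: median {median(ts):.0f}s")
```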
Abstract:
The Iterative Closest Point (ICP) algorithm is commonly used in engineering applications to solve the rigid registration problem for partially overlapping point sets that are pre-aligned with a coarse estimate of their relative positions. This iterative algorithm is applied in many areas, such as medicine for the volumetric reconstruction of tomography data, robotics for reconstructing surfaces or scenes from range sensor information, industrial systems for the quality control of manufactured objects, and even biology for studying the structure and folding of proteins. One of the algorithm's main problems is its high computational complexity (quadratic in the number of points for the non-optimized original variant) in a context where high-density point sets, acquired by high-resolution scanners, are processed. Many variants have been proposed in the literature that aim to improve performance, either by reducing the number of points or the required iterations, or by reducing the complexity of the most expensive phase: the closest-neighbor search. Despite decreasing the complexity, some of these variants tend to have a negative impact on the final registration precision or on the convergence domain, thus limiting the possible application scenarios. The goal of this work is to improve the algorithm's computational cost so that a wider range of the computationally demanding problems described above can be addressed. For that purpose, an experimental and mathematical convergence analysis and validation of point-to-point distance metrics has been performed, considering distances with lower computational cost than the Euclidean one, which is the de facto standard in implementations of the algorithm in the literature. In that analysis, the behaviour of the algorithm in diverse topological spaces, characterized by different metrics, has been studied to check the convergence, efficacy and cost of the method, in order to determine which metric offers the best results. Given that distance calculation represents a significant part of the computations performed by the algorithm, any reduction in the cost of that operation is expected to affect the overall performance of the method significantly and positively. As a result, a performance improvement has been achieved by applying these reduced-cost metrics, whose quality in terms of convergence and error has been analyzed and validated experimentally as comparable to the Euclidean distance, using a heterogeneous set of objects, scenarios and initial situations.
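To make the core idea concrete — swapping a cheaper point-to-point metric into the closest-neighbor search, the most expensive ICP phase — here is a minimal sketch of one ICP iteration. Using SciPy's Minkowski parameter (p=1 Manhattan, p=np.inf Chebyshev, p=2 Euclidean) as the stand-in for the reduced-cost metrics is an assumption of this sketch, not the thesis's actual implementation:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_iteration(src, dst, p=2):
    """One ICP step: match each source point to its closest destination
    point under a Minkowski metric, then solve the rigid alignment.

    p=1 (Manhattan) and p=np.inf (Chebyshev) are cheaper to evaluate than
    p=2 (Euclidean), the usual default in the literature.
    """
    tree = cKDTree(dst)
    _, idx = tree.query(src, k=1, p=p)        # closest-neighbor search
    matched = dst[idx]

    # Kabsch: optimal rotation/translation for the current correspondences.
    cs, cd = src.mean(axis=0), matched.mean(axis=0)
    H = (src - cs).T @ (matched - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return src @ R.T + t, R, t

# Usage: iterate until the residual stops improving.
rng = np.random.default_rng(0)
dst = rng.random((500, 3))
src = dst + 0.01                              # small initial misalignment
for _ in range(20):
    src, R, t = icp_iteration(src, dst, p=1)  # Manhattan-based matching
```

Note that only the matching phase uses the cheaper metric here; the alignment solve is the standard least-squares step, which is precisely where the cost saving of a reduced metric would concentrate.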
Abstract:
Paper presented at the V Jornadas de Computación Empotrada (5th Workshop on Embedded Computing), Valladolid, 17-19 September 2014.
Abstract:
In this work, we propose the use of the neural gas (NG), a neural network with an unsupervised Competitive Hebbian Learning (CHL) rule, to develop a reverse engineering process. It is a simple and accurate method to reconstruct objects from point clouds obtained from multiple overlapping views using low-cost sensors. In contrast to other methods that may need several stages, including downsampling, noise filtering and many other tasks, the NG automatically obtains the 3D model of the scanned objects. To demonstrate the validity of our proposal, we tested the method on several models and studied the neural network parameterization, computing the quality of representation and comparing results with other neural methods, such as the growing neural gas and Kohonen maps, and with classical methods such as Voxel Grid. We also reconstructed models acquired with low-cost sensors that can be used in virtual and augmented reality environments for redesign or manipulation purposes. Since the NG algorithm has a high computational cost, we propose its acceleration: we have redesigned and implemented the NG learning algorithm to fit Graphics Processing Units using CUDA, obtaining a speed-up of 180× over the sequential CPU version.
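A minimal sketch of the core neural gas adaptation step — the rule whose cost motivates the GPU acceleration. The fixed parameter values and the omission of the CHL edge-creation step are simplifications of this sketch, not the paper's implementation:

```python
import numpy as np

def neural_gas_step(weights, x, eps=0.1, lam=10.0):
    """One NG adaptation: rank every unit by distance to the input x,
    then pull each unit towards x with a strength that decays
    exponentially with its rank (the closest unit moves the most).
    The CHL edge creation between the two closest units is omitted."""
    dists = np.linalg.norm(weights - x, axis=1)
    ranks = np.argsort(np.argsort(dists))       # rank 0 = closest unit
    h = np.exp(-ranks / lam)[:, None]           # neighborhood function
    return weights + eps * h * (x - weights)

# Usage: feed point-cloud samples; the units converge to the surface.
rng = np.random.default_rng(1)
weights = rng.random((100, 3))                  # 100 units in 3D
cloud = rng.random((5000, 3))                   # hypothetical scanned points
for x in cloud:
    # In practice eps and lam are annealed over time; fixed here for brevity.
    weights = neural_gas_step(weights, x)
```

The full ranking of all units at every input is what makes NG expensive, and also what makes it a natural fit for a parallel CUDA implementation.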
Abstract:
The research developed in this work proposes a set of techniques for the management of social networks and their integration into the educational process. The proposals are based on assumptions that have been tested with simple examples in a real university teaching scenario. The results show that social networks have more capacity to spread information than educational web platforms. Moreover, educational social networks develop in a context of freedom of expression intrinsically linked to Internet freedom. In that context, users can write opinions or comments that the staff of schools may not like. However, this feature can be exploited to enrich the educational process and improve the quality of its outcomes. The network has covered needs and created new ones. Therefore, the figure of the Community Manager is proposed as an agent in the educational context who monitors the network, channels opinions, and provides a rapid response to academic problems.
Abstract:
The use of 3D data in mobile robotics applications provides valuable information about the robot's environment. However, the huge amount of 3D information is usually difficult to manage, because the robot's storage and computing capabilities are insufficient. Therefore, a data compression method is necessary to store and process this information while preserving as much of it as possible. A few methods have been proposed to compress 3D information; nevertheless, there is no consistent public benchmark for comparing the results (compression level, reconstruction distance error, etc.) obtained with different methods. In this paper, we propose a dataset composed of a set of 3D point clouds with different structure and texture variability to evaluate the results obtained from 3D data compression methods. We also provide useful tools for comparing compression methods, using as a baseline the results obtained by existing relevant compression methods.
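As a sketch of the kind of error measurement such a benchmark relies on, the snippet below computes a symmetric average nearest-neighbor distance (a Chamfer-style error) between the original and the decompressed cloud, with voxel-grid quantization standing in for a compressor. Both the metric and the compressor choice are assumptions, not necessarily the paper's exact tools:

```python
import numpy as np
from scipy.spatial import cKDTree

def reconstruction_error(original, reconstructed):
    """Symmetric average nearest-neighbor distance between two clouds.
    Lower is better; 0 means every point is exactly recovered."""
    d_fwd, _ = cKDTree(reconstructed).query(original, k=1)
    d_bwd, _ = cKDTree(original).query(reconstructed, k=1)
    return 0.5 * (d_fwd.mean() + d_bwd.mean())

# Usage with a hypothetical lossy compressor: quantize to a voxel grid.
rng = np.random.default_rng(2)
cloud = rng.random((10000, 3))
voxel = 0.05                                    # assumed grid resolution
decompressed = np.unique(np.round(cloud / voxel) * voxel, axis=0)
print(f"error: {reconstruction_error(cloud, decompressed):.4f}, "
      f"point ratio: {len(decompressed) / len(cloud):.2f}")
```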
Abstract:
Automated human behaviour analysis has been, and still remains, a challenging problem. It has been addressed from different points of view, from primitive actions to human interaction recognition. This paper focuses on trajectory analysis, which allows a simple high-level understanding of complex human behaviour. A novel representation of trajectory data is proposed, called the Activity Description Vector (ADV), based on the number of times a person occupies a specific point of the scenario and the local movements performed there. The ADV is calculated for each cell into which the scenario is spatially sampled, providing a cue for different clustering methods. The ADV representation has been tested as the input of several classic classifiers and compared to other approaches using CAVIAR dataset sequences, obtaining high accuracy in recognizing the behaviour of people in a shopping centre.
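A minimal sketch of how an ADV-style descriptor might be computed, reading the abstract as per-cell counts of presence plus local movement direction; the grid size, the five movement channels and all names below are assumptions based on that reading, not the paper's definition:

```python
import numpy as np

def activity_description_vector(traj, grid=(8, 8), bounds=(0.0, 1.0)):
    """Per-cell counts of local movement (stay, right, left, up, down)
    for a trajectory given as an (n, 2) array of (x, y) points."""
    lo, hi = bounds
    adv = np.zeros(grid + (5,))                # 5 movement channels per cell
    cells = np.clip(((traj - lo) / (hi - lo) * grid).astype(int),
                    0, np.array(grid) - 1)
    for c, n in zip(cells, cells[1:]):
        dx, dy = n - c
        if dx == 0 and dy == 0:
            ch = 0                             # stayed in the same cell
        elif abs(dx) >= abs(dy):
            ch = 1 if dx > 0 else 2            # dominant horizontal move
        else:
            ch = 3 if dy > 0 else 4            # dominant vertical move
        adv[tuple(c) + (ch,)] += 1
    return adv.ravel()                         # feature vector for a classifier

# Usage: a hypothetical diagonal walk across the scene.
traj = np.column_stack([np.linspace(0, 1, 50), np.linspace(0, 1, 50)])
vec = activity_description_vector(traj)
```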
Abstract:
Human behaviour recognition has been, and still remains, a challenging problem that involves different areas of computational intelligence. The automated understanding of people's activities from video sequences is an open research topic in which the computer vision and pattern recognition areas have made big efforts. In this paper, the problem is studied from a prediction point of view. We propose a novel method able to detect behaviour early using a small portion of the input, in addition to its capability to predict behaviour from new inputs. Specifically, we propose a predictive method based on a simple representation of the trajectories of a person in the scene, which allows a high-level understanding of global human behaviour. The trajectory representation is used as a descriptor of the activity of the individual, and the descriptors feed a classification stage for pattern recognition purposes. Classifiers are trained using the trajectory representation of the complete sequence; partial sequences, corresponding to a given observation time of the scene, are then processed to evaluate the early-prediction capabilities. Experiments have been carried out using the three different datasets of the CAVIAR database, taking into account the behaviour of an individual. Additionally, different classic classifiers have been used in the experimentation in order to evaluate the robustness of the proposal. Results confirm the high accuracy of the proposal in the early recognition of people's behaviours.
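A minimal sketch of the early-prediction evaluation protocol the abstract describes — train on complete sequences, test on prefixes of growing observation time. The simple resampled-coordinates descriptor and the choice of classifier are placeholders, not the paper's method:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def descriptor(traj, n=16):
    """Resample an (m, 2) trajectory to n points and flatten — a
    stand-in for the paper's trajectory representation."""
    t = np.linspace(0, len(traj) - 1, n)
    idx = np.arange(len(traj))
    return np.concatenate([np.interp(t, idx, traj[:, d]) for d in range(2)])

def early_prediction_accuracy(train_trajs, train_y, test_trajs, test_y,
                              ratios=(0.2, 0.4, 0.6, 1.0)):
    """Train on full sequences, then score prefixes of increasing length
    to measure how early the behaviour is recognized."""
    clf = KNeighborsClassifier(n_neighbors=3)
    clf.fit([descriptor(t) for t in train_trajs], train_y)
    scores = {}
    for r in ratios:
        prefixes = [t[: max(2, int(len(t) * r))] for t in test_trajs]
        scores[r] = clf.score([descriptor(t) for t in prefixes], test_y)
    return scores                              # accuracy per observation ratio
```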
Abstract:
In many classification problems, it is necessary to consider the specific location in an n-dimensional space from which features have been calculated. For example, considering the location of features extracted from specific areas of a two-dimensional space, such as an image, could improve the understanding of a scene for a video surveillance system. In the same way, the same features extracted from different locations could mean different actions for a 3D HCI system. In this paper, we present a self-organizing feature map able to preserve the topology of the locations of the n-dimensional space from which the feature vectors have been extracted. The main contribution is to implicitly preserve the topology of the original space, because considering the locations of the extracted features and their topology can ease the solution of certain problems. Specifically, the paper proposes the n-dimensional constrained self-organizing map preserving the input topology (nD-SOM-PINT). Features from adjacent areas of the n-dimensional space used to extract the feature vectors are explicitly mapped to adjacent areas of the nD-SOM-PINT, constraining the neural network structure and learning. As a case study, the neural network has been instantiated to represent and classify features, such as trajectories extracted from a sequence of images, at a high level of semantic understanding. Experiments have been thoroughly carried out using the CAVIAR datasets (Corridor, Frontal and Inria), taking into account the global behaviour of an individual, in order to validate the ability to preserve the topology of the two-dimensional space and obtain high-performance trajectory classification, in contrast to not considering the location of features. Moreover, a brief example has been included to validate the nD-SOM-PINT proposal in a domain other than individual trajectories. Results confirm the high accuracy of the nD-SOM-PINT, outperforming previous methods aimed at classifying the same datasets.
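One possible reading of the topology constraint is that each neuron is tied to a cell of a grid over the input location space, so that the best-matching unit is fixed by where the feature was extracted and map adjacency mirrors input adjacency. The sketch below implements that reading; it is an interpretation of the abstract, not the paper's formulation of the nD-SOM-PINT:

```python
import numpy as np

class ConstrainedSOM:
    """2D SOM where neuron (i, j) is tied to cell (i, j) of a grid over
    the input location space, so map adjacency mirrors input adjacency
    (a loose sketch of the nD-SOM-PINT idea)."""

    def __init__(self, grid=(8, 8), dim=4, seed=0):
        rng = np.random.default_rng(seed)
        self.grid = grid
        self.w = rng.random(grid + (dim,))     # one weight vector per cell

    def train_step(self, loc_cell, x, eps=0.1, sigma=1.0):
        """Update with feature x extracted at grid location loc_cell."""
        # Constrained BMU: the winner is the neuron of the cell the
        # feature was extracted from, not a free nearest-weight search.
        bi, bj = loc_cell
        ii, jj = np.indices(self.grid)
        d2 = (ii - bi) ** 2 + (jj - bj) ** 2   # map distance to the winner
        h = np.exp(-d2 / (2 * sigma ** 2))[..., None]
        self.w += eps * h * (x - self.w)       # classic SOM neighborhood update
```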
Abstract:
In this work, a modified version of the elastic bunch graph matching (EBGM) algorithm for face recognition is introduced. First, faces are detected using a fuzzy skin detector based on the RGB color space. Then, the fiducial points for the facial graph are extracted automatically by adjusting a grid of points to the result of an edge detector. After that, the positions of the nodes, their relations with their neighbors and their Gabor jets are calculated in order to obtain the feature vector defining each face. A self-organizing map (SOM) framework is then presented: the calculation of the winning neuron and the recognition process are performed using a similarity function that takes into account both the geometric and texture information of the facial graph. The set of experiments carried out for our SOM-EBGM method shows the accuracy of our proposal when compared with other state-of-the-art methods.
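A sketch of a similarity function combining the two cues the abstract mentions, Gabor-jet texture and node geometry; the magnitude-only jet similarity and the alpha weighting are assumptions, not necessarily the paper's exact function:

```python
import numpy as np

def jet_similarity(j1, j2):
    """Magnitude-based Gabor jet similarity: normalized dot product of
    the jet coefficient magnitudes (1.0 means identical texture)."""
    a1, a2 = np.abs(j1), np.abs(j2)
    return float(a1 @ a2 / (np.linalg.norm(a1) * np.linalg.norm(a2)))

def graph_similarity(nodes1, jets1, nodes2, jets2, alpha=0.5):
    """Combine texture (jet) similarity with geometric agreement of the
    node positions; alpha weights the two terms (an assumed value)."""
    tex = np.mean([jet_similarity(a, b) for a, b in zip(jets1, jets2)])
    geo = np.exp(-np.mean(np.linalg.norm(nodes1 - nodes2, axis=1)))
    return alpha * tex + (1 - alpha) * geo

# The winning neuron in the SOM would be the stored facial graph that
# maximizes graph_similarity against the probe face's graph.
```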
Abstract:
In the 2010-2011 academic year, the rollout of the Degree in Multimedia Engineering began; it is a qualification close to Computer Engineering, but focused on training professionals able to manage multimedia projects both in the entertainment field and in the management of content on information networks. The rollout has been progressive, with a new year of the degree starting each academic year, which is why this year, 2014-2015, is the first in which the degree has been fully in place from the start of the course. This led us to carry out a study of how the subjects in the different years are interconnected. The aim of the study was to identify the problems or knowledge gaps that students have in the 2nd year, on the one hand, and those they may encounter in the 3rd year, on the other, and to establish possible ways of solving these problems in order to improve students' learning performance. We have also monitored the assessment of students in the 2nd-year subjects to check its fit with the continuous assessment system promoted by the Bologna Plan.
Abstract:
This article describes the work carried out by the university teaching research network "Docencia semipresencial en el Máster en Ingeniería Informática" (Blended Learning in the Master's in Computer Engineering), which set out to work on the different subjects of the Master's in Computer Engineering at the Universidad de Alicante in order to give them a blended-learning character in a coordinated and integrated way. A working group was created within the master's academic committee, and close collaboration was fostered among those responsible for all the subjects of the Master's in Computer Engineering in using all the mechanisms needed to give the respective subjects a blended character. The support of the ICE has been very important in this respect, for example through the request for and delivery of a specific course on bLearning.
Abstract:
Through the numerous technological advances of recent years, along with the popularization of computing devices, society is moving towards an "always connected" paradigm. Computer networks are everywhere, and the advent of IPv6 paves the way for the explosion of the Internet of Things, a concept that enables data sharing between computing machines and everyday objects. One of the areas placed under the Internet of Things is vehicular networks. However, the information generated individually by a vehicle is small in volume and does not contribute to an improvement in traffic as long as it remains isolated. This proposal presents the Infostructure, a system intended to facilitate the effort and reduce the cost of developing context-aware applications with high-level semantics for the Internet of Things scenario, allowing data to be managed, stored and combined in order to generate broader context. To this end, we present a reference architecture, which aims to show the major components of the Infostructure. A prototype is then presented, which is used to validate that our work reaches the desired level of high-level semantic contextualization, along with a performance evaluation, which aims to assess the behavior of the subsystem responsible for managing contextual information over a large amount of data. A statistical analysis is then performed on the results obtained in the evaluation. Finally, we present the conclusions of the work, some open problems, such as the lack of assurance of the integrity of the sensory data reaching the Infostructure, and future work contemplating the implementation of other modules so that tests can be conducted in real environments.
Abstract:
Technological evolution in contemporary communication structures digital systems through networks of connected computers and the massive exploitation of technological devices. Digital data captured and distributed via applications installed on smartphones create a dynamic communication environment. Journalism and Communication try to adapt to the new informational ecosystem brought about by constant technological innovations, which enable the creation of new environments and systems for accessing socially relevant information. New tools are emerging for the production and distribution of journalistic content: data-driven products and intelligent interactions, algorithms used in several processes, hyperlocal platforms, and digital narrative and production systems. In this context, the objective of this research was to analyze and compare specific media and technology products: whether the new technologies add attributes to journalistic productions and narratives, what their impacts are on the practice of the profession, and whether the processes of producing socially relevant information change in relation to traditional, consolidated journalistic processes. It investigates whether the use of information contributed by users in real time improves the quality of the narratives emerging through mobile devices, and whether gamification alters the perceived credibility of journalism, so that the way of producing and generating information and knowledge for the audiences that demand content can be rethought.