999 results for computer technology


Relevance: 60.00%

Abstract:

The sustainability strategy for urban spaces arises from reflecting on how to achieve a more liveable city, and it materializes in a series of sustainable transformations aimed at humanizing different environments so that they can be used and enjoyed by everyone, without exception and regardless of ability. Modern communication technologies open up new opportunities to analyze the efficiency of urban space usage from several points of view: adequacy of facilities, usability, and capacity for social integration. The research presented in this paper proposes a method to analyze movement accessibility in sustainable cities based on radio frequency technologies and the ubiquitous computing possibilities of the new Internet of Things paradigm. The proposal can be deployed in both indoor and outdoor environments to assess specific locations of a city. Finally, a case study in a controlled context has been simulated to validate the proposal as a pre-deployment step for urban environments.
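
As a rough illustration of the kind of processing such a deployment enables (not the paper's actual system), the following Python sketch derives transit times between radio frequency checkpoints from timestamped detection events; the event format and all names are assumptions made for this example.

```python
# Hypothetical detection log: (tag_id, reader_id, timestamp in seconds).
from collections import defaultdict

events = [
    ("user1", "entrance", 0.0),
    ("user1", "ramp", 35.0),
    ("user1", "elevator", 90.0),
    ("user2", "entrance", 5.0),
    ("user2", "ramp", 130.0),   # an unusually long transit may flag a barrier
]

def transit_times(events):
    """Group detections per tag and measure time spent between checkpoints."""
    per_tag = defaultdict(list)
    for tag, reader, t in sorted(events, key=lambda e: e[2]):
        per_tag[tag].append((reader, t))
    times = defaultdict(list)
    for seq in per_tag.values():
        for (r1, t1), (r2, t2) in zip(seq, seq[1:]):
            times[(r1, r2)].append(t2 - t1)
    return times

for segment, ts in transit_times(events).items():
    print(segment, "mean transit:", sum(ts) / len(ts), "s")
```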

Relevance: 60.00%

Abstract:

The Iterative Closest Point (ICP) algorithm is commonly used in engineering applications to solve the rigid registration problem of partially overlapping point sets that are pre-aligned with a coarse estimate of their relative positions. This iterative algorithm is applied in many areas, such as medicine for the volumetric reconstruction of tomography data, robotics for reconstructing surfaces or scenes from range sensor information, industrial systems for quality control of manufactured objects, and even biology for studying the structure and folding of proteins. One of the algorithm's main problems is its high computational complexity (quadratic in the number of points for the non-optimized original variant) in a context where high-density point sets, acquired by high-resolution scanners, are processed. Many variants have been proposed in the literature with the goal of improving performance, either by reducing the number of points or the required iterations, or by lowering the complexity of the most expensive phase: the closest-neighbour search. Despite decreasing its complexity, some of these variants tend to have a negative impact on the final registration precision or the convergence domain, thus limiting the possible application scenarios. The goal of this work is to improve the algorithm's computational cost so that a wider range of the computationally demanding problems described above can be addressed. For that purpose, an experimental and mathematical convergence analysis and validation of point-to-point distance metrics has been performed, considering distances with a lower computational cost than the Euclidean one, which is the de facto standard in the algorithm's implementations in the literature. In that analysis, the behaviour of the algorithm in diverse topological spaces, characterized by different metrics, has been studied to check the convergence, efficacy and cost of the method and to determine which metric offers the best results. Given that the distance calculation represents a significant part of the computations performed by the algorithm, any reduction in the cost of that operation can be expected to significantly improve the overall performance of the method. As a result, a performance improvement has been achieved by applying these reduced-cost metrics, whose quality in terms of convergence and error has been analyzed and experimentally validated as comparable to the Euclidean distance on a heterogeneous set of objects, scenarios and initial situations.
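
To make the role of the distance metric concrete, here is a minimal 2-D ICP sketch in Python (not the authors' implementation) in which the metric of the closest-neighbour search can be swapped via the Minkowski parameter p, e.g. p=1 for the cheaper Manhattan distance instead of the Euclidean p=2:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, p=2, iters=20):
    """Rigidly align `source` to `target`; `p` selects the Minkowski metric
    used in the closest-neighbour search, the algorithm's costliest phase."""
    tree = cKDTree(target)
    R, t = np.eye(2), np.zeros(2)
    src = source.copy()
    for _ in range(iters):
        _, idx = tree.query(src, p=p)          # nearest neighbours under metric p
        matched = target[idx]
        # Closed-form rigid transform via SVD of the cross-covariance matrix.
        mu_s, mu_m = src.mean(0), matched.mean(0)
        U, _, Vt = np.linalg.svd((src - mu_s).T @ (matched - mu_m))
        Ri = Vt.T @ U.T
        if np.linalg.det(Ri) < 0:              # avoid reflections
            Vt[-1] *= -1
            Ri = Vt.T @ U.T
        ti = mu_m - Ri @ mu_s
        src = src @ Ri.T + ti
        R, t = Ri @ R, Ri @ t + ti             # accumulate the composite transform
    return R, t

# Toy usage: recover a small rotation using the Manhattan metric.
rng = np.random.default_rng(0)
tgt = rng.random((200, 2))
ang = 0.1
Rg = np.array([[np.cos(ang), -np.sin(ang)], [np.sin(ang), np.cos(ang)]])
R, t = icp(tgt @ Rg.T, tgt, p=1)
```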

Relevance: 60.00%

Abstract:

Communication presented at the V Jornadas de Computación Empotrada (5th Conference on Embedded Computing), Valladolid, 17-19 September 2014.

Relevance: 60.00%

Abstract:

In this work, we propose the use of the neural gas (NG), a neural network that uses an unsupervised Competitive Hebbian Learning (CHL) rule, to develop a reverse engineering process. This is a simple and accurate method to reconstruct objects from point clouds obtained from multiple overlapping views using low-cost sensors. In contrast to other methods that may require several stages, including downsampling, noise filtering and many other tasks, the NG automatically obtains the 3D model of the scanned objects. To demonstrate the validity of our proposal, we tested the method on several models, studied the neural network's parameterization by computing the quality of representation, and compared the results with other neural methods, such as Growing Neural Gas and Kohonen maps, and classical methods such as Voxel Grid. We also reconstructed models acquired by low-cost sensors that can be used in virtual and augmented reality environments for redesign or manipulation purposes. Since the NG algorithm has a high computational cost, we propose its acceleration. We have redesigned and implemented the NG learning algorithm to fit it onto Graphics Processing Units using CUDA. A speed-up of 180× over the sequential CPU version is obtained.
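
For readers unfamiliar with the method, the following is a minimal sketch of a generic Neural Gas adaptation loop in Python; it shows the rank-based, decaying update that makes the algorithm costly, but it is not the authors' optimized CUDA implementation.

```python
import numpy as np

def neural_gas(points, n_units=50, iters=5000, eps=(0.5, 0.01), lam=(10.0, 0.5)):
    """Fit `n_units` prototypes to a 3-D point cloud with rank-based updates."""
    rng = np.random.default_rng(0)
    W = points[rng.choice(len(points), n_units)]       # init prototypes from the cloud
    for i in range(iters):
        frac = i / iters
        e = eps[0] * (eps[1] / eps[0]) ** frac         # decaying learning rate
        l = lam[0] * (lam[1] / lam[0]) ** frac         # decaying neighbourhood range
        x = points[rng.integers(len(points))]          # random input sample
        # Rank every prototype by distance to x (the expensive step).
        ranks = np.argsort(np.argsort(np.linalg.norm(W - x, axis=1)))
        W += (e * np.exp(-ranks / l))[:, None] * (x - W)   # rank-weighted pull
    return W

# prototypes = neural_gas(cloud)  # `cloud`: (N, 3) array from a scanned object
```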

Relevance: 60.00%

Abstract:

The research developed in this work consists in proposing a set of techniques for the management of social networks and their integration into the educational process. The proposals are based on assumptions that have been tested with simple examples in a real university teaching scenario. The results show that social networks have a greater capacity to spread information than educational web platforms. Moreover, educational social networks develop in a context of freedom of expression intrinsically linked to Internet freedom. In that context, users can post opinions or comments that the staff of schools may not like. However, this feature can be exploited to enrich the educational process and improve the quality of its outcomes. The network has covered existing needs and created new ones. Therefore, the figure of the Community Manager is proposed as an agent in the educational context who monitors the network, channels opinions, and provides rapid responses to academic problems.

Relevance: 60.00%

Abstract:

The use of 3D data in mobile robotics applications provides valuable information about the robot's environment. However, the huge amount of 3D information is usually difficult to manage because the robot's storage and computing capabilities are insufficient. Therefore, a data compression method is necessary to store and process this information while preserving as much of it as possible. A few methods have been proposed to compress 3D information; nevertheless, there is no consistent public benchmark for comparing the results (compression level, reconstruction error, etc.) obtained with different methods. In this paper, we propose a dataset composed of a set of 3D point clouds with different structure and texture variability to evaluate the results obtained from 3D data compression methods. We also provide useful tools for comparing compression methods, using the results obtained by existing relevant compression methods as a baseline.
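
As an example of the kind of comparison tool such a benchmark could include, this sketch computes a symmetric nearest-neighbour reconstruction error between an original and a decompressed cloud; the metric and names are our own illustration, not necessarily those shipped with the dataset.

```python
import numpy as np
from scipy.spatial import cKDTree

def reconstruction_error(original, reconstructed):
    """Symmetric mean nearest-neighbour distance between two (N, 3) clouds."""
    d_fwd, _ = cKDTree(reconstructed).query(original)      # original -> reconstructed
    d_bwd, _ = cKDTree(original).query(reconstructed)      # reconstructed -> original
    return 0.5 * (d_fwd.mean() + d_bwd.mean())

def compression_ratio(raw_bytes, compressed_bytes):
    """Compression level as a size ratio."""
    return raw_bytes / compressed_bytes
```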

Relevance: 60.00%

Abstract:

Automated human behaviour analysis has been, and still remains, a challenging problem. It has been addressed from different points of view, from primitive actions to human interaction recognition. This paper focuses on trajectory analysis, which allows a simple, high-level understanding of complex human behaviour. We propose a novel representation of trajectory data, called the Activity Description Vector (ADV), based on the number of times a person occupies a specific point of the scenario and the local movements performed there. The scenario is spatially sampled into cells and an ADV is calculated for each cell, providing a cue for different clustering methods. The ADV representation has been tested as the input of several classic classifiers and compared to other approaches using CAVIAR dataset sequences, achieving high accuracy in recognizing the behaviour of people in a shopping centre.
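
The following Python sketch reflects our simplified reading of the ADV idea: the scene is sampled into a grid, and each cell accumulates a presence count plus counts of dominant local movement directions; the exact components of the published descriptor may differ.

```python
import numpy as np

def adv(trajectory, grid=(10, 10), extent=(100.0, 100.0)):
    """trajectory: (T, 2) array of positions; returns a flattened (gx, gy, 5)
    tensor counting [presence, up, down, left, right] events per cell."""
    desc = np.zeros(grid + (5,))
    cells = np.floor(trajectory / np.array(extent) * np.array(grid)).astype(int)
    cells = np.clip(cells, 0, np.array(grid) - 1)          # keep points in range
    for (cx, cy), (dx, dy) in zip(cells[:-1], np.diff(trajectory, axis=0)):
        desc[cx, cy, 0] += 1                               # occurrence count
        if abs(dx) >= abs(dy):
            desc[cx, cy, 3 if dx < 0 else 4] += 1          # dominant horizontal move
        else:
            desc[cx, cy, 1 if dy > 0 else 2] += 1          # dominant vertical move
    return desc.ravel()                                    # feature vector for a classifier
```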

Relevance: 60.00%

Abstract:

Software-based techniques offer several advantages for increasing the reliability of processor-based systems at very low cost, but they cause performance degradation and an increase in code size. To meet constraints on performance and memory, we propose SETA, a new control-flow software-only technique that uses assertions to detect errors affecting the program flow. SETA is an independent technique, but it was conceived to work together with previously proposed data-flow techniques that aim to reduce performance and memory overheads. Thus, SETA is combined with such data-flow techniques and submitted to a fault injection campaign. Simulation and neutron-induced SEE tests show high fault coverage with performance and memory overheads lower than the state of the art.
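
As a conceptual illustration only, the toy Python simulation below shows the general idea behind signature-based control-flow checking, the family of techniques SETA belongs to; the real technique instruments assembly-level basic blocks, and the update rule and signatures here are invented.

```python
# Compile-time signature expected at entry of each basic block (invented values).
EXPECTED = {"A": 0x1, "B": 0x3, "C": 0x7}

class ControlFlowError(Exception):
    pass

signature = 0x0                              # runtime signature "register"

def enter_block(name):
    """Update the runtime signature on block entry and assert it matches."""
    global signature
    signature = (signature << 1) | 1         # deterministic update rule
    if signature != EXPECTED[name]:          # assertion detects illegal flow
        raise ControlFlowError(f"illegal flow into block {name}")

enter_block("A")    # legal path A -> B -> C passes all assertions;
enter_block("B")    # a fault that skips B would reach C with signature 0x3
enter_block("C")    # instead of 0x7 and be detected
```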

Relevance: 60.00%

Abstract:

An abdominal aortic aneurysm is a disease related to a weakening of the aortic wall that can cause the aorta to rupture and lead to death. The detection of an unusual dilatation of a section of the aorta is indicative of this disease. However, it is difficult to diagnose because it requires image-based diagnosis using computed tomography or magnetic resonance. An automatic diagnosis system would make it possible to analyze abdominal magnetic resonance images and warn doctors if any anomaly is detected. We focus our research on magnetic resonance images because of the absence of ionizing radiation. Although there are proposals to identify this disease in magnetic resonance images, they need intervention from clinicians to be precise, and some of them are computationally expensive. In this paper, we develop a novel approach to analyze abdominal magnetic resonance images and detect the lumen and the aortic wall. The method combines different algorithms in two stages to improve detection and segmentation, so it can be applied to similar problems with other types of images or structures. In the first stage, we use a spatial fuzzy C-means algorithm with morphological image analysis to detect and segment the lumen; subsequently, in the second stage, we apply a graph cut algorithm to segment the aortic wall. The results obtained on the analyzed images are promising, with an average overlap of 79% between the automatic segmentation provided by our method and the aortic wall identified by a medical specialist. The main impact of the proposed method is that it works in a completely automatic way with a low computational cost, which is of great significance for any expert and intelligent system.
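
To clarify the first stage, here is a sketch of the core fuzzy C-means updates on pixel intensities. This is plain FCM; the spatial variant used in the paper additionally weights memberships by each pixel's neighbourhood, which is omitted here for brevity.

```python
import numpy as np

def fuzzy_cmeans(x, c=3, m=2.0, iters=50):
    """x: 1-D array of pixel intensities; returns memberships U (N, c)
    and centroids V (c,)."""
    rng = np.random.default_rng(0)
    U = rng.dirichlet(np.ones(c), size=len(x))     # random memberships, rows sum to 1
    for _ in range(iters):
        Um = U ** m
        V = (Um.T @ x) / Um.sum(axis=0)            # fuzzy-weighted centroids
        d = np.abs(x[:, None] - V[None, :]) + 1e-12
        w = d ** (-2.0 / (m - 1.0))                # standard FCM membership update
        U = w / w.sum(axis=1, keepdims=True)
    return U, V

# Toy usage: three intensity populations, e.g. lumen / wall / background.
x = np.concatenate([np.random.default_rng(1).normal(mu, 5, 200) for mu in (40, 120, 200)])
U, V = fuzzy_cmeans(x)
```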

Relevance: 60.00%

Abstract:

Background and objective: In this paper, we have tested the suitability of different artificial intelligence-based algorithms for decision support when classifying the risk of congenital heart surgery. Classifying those surgical risks provides enormous benefits, such as the a priori estimation of surgical outcomes depending on the type of disease, the type of repair, and other elements that influence the final result. This preventive estimation may help to avoid future complications, or even death. Methods: We have evaluated four machine learning algorithms: multilayer perceptron, self-organizing map, radial basis function networks and decision trees. The implemented architectures aim to classify among three types of surgical risk: low complexity, medium complexity and high complexity. Results: Accuracy ranges between 80% and 99%, with the multilayer perceptron offering the highest hit ratio. Conclusions: According to the results, it is feasible to develop a clinical decision support system using the evaluated algorithms. Such a system would help cardiology specialists, paediatricians and surgeons to forecast the level of risk associated with congenital heart disease surgery.
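
A hedged sketch of one of the four evaluated approaches, using scikit-learn's MLPClassifier on synthetic data; the feature set and network size are invented for illustration, since the paper's dataset is not published here.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((300, 8))            # placeholder clinical features (invented)
y = rng.integers(0, 3, size=300)    # 0 = low, 1 = medium, 2 = high complexity

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)                            # train on labelled surgeries
print("hit ratio:", clf.score(X_te, y_te))     # accuracy on held-out cases
```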

Relevance: 60.00%

Abstract:

Human tremor can be defined as a rapid and somewhat rhythmic movement of one or more parts of the body. In some people, this movement can be a symptom of a neurological disorder. From a mathematical point of view, human tremor can be defined as a weighted sum of different sinusoidal signals that cause oscillations of some parts of the body. This sinusoid repeats over time, but its amplitude and frequency change slowly. For this reason, amplitude and frequency are considered important factors in the classification of tremor and are therefore useful in its diagnosis. This article presents a tool to support the diagnosis of human tremor. The tool uses a low-cost hardware device (<$40) and accurately computes the main components of the sinusoid associated with the tremor. As case studies, its application to two real cases is presented to test the soundness of the developed algorithms. The cases involve patients who suffered tremors of different severity and who performed a series of tests with the device so that the system could compute the main components of the tremor. These measurements would in the future help experts to make more precise decisions, allowing them to focus on specific phases of a test or to carry out more specific tests to better evaluate the characteristics of each patient's tremor. From the experiments performed, we can state that not all tests are valid for the diagnosis of all patients; ultimately, the professional's experience will determine which test or set of tests is most appropriate for each patient.
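
As an illustration of how the main tremor component could be estimated from a sensor signal, the sketch below applies a generic FFT analysis; it is not the authors' exact algorithm, and the sampling setup is assumed.

```python
import numpy as np

def dominant_component(signal, fs):
    """Return (frequency in Hz, amplitude) of the strongest oscillation."""
    signal = signal - signal.mean()               # drop the DC offset
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    k = spectrum.argmax()                         # strongest spectral peak
    amplitude = 2.0 * spectrum[k] / len(signal)   # rescale FFT magnitude
    return freqs[k], amplitude

# Synthetic 5 Hz tremor sampled at 100 Hz:
fs = 100.0
t = np.arange(0, 10, 1 / fs)
sig = 0.8 * np.sin(2 * np.pi * 5.0 * t) \
      + 0.05 * np.random.default_rng(0).normal(size=t.size)
print(dominant_component(sig, fs))                # ~ (5.0, 0.8)
```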

Relevance: 60.00%

Abstract:

The technological platform for open university data (OpenData4U) enables a university to publish its open data and to provide access to it in a way that fosters reuse (through an open data portal and an API for developers), while also offering a transparency portal for easy access to the data in a form understandable by anyone. This is the platform used by the Universidad de Alicante in its open data and transparency project. The code is available at https://github.com/UAdatos

Relevance: 60.00%

Abstract:

The book "Ecosistema de Datos Abiertos de la Universidad de Alicante" ("Open Data Ecosystem of the University of Alicante") aims to be useful to universities interested in developing transparency and open data policies. It details the experience of the Universidad de Alicante in implementing its "Open Data Ecosystem", covering regulatory, procedural and technological aspects. The book links to the software "Plataforma Tecnológica de Datos Abiertos Universitarios (OpenData4U)", which seeks to facilitate an environment of technological collaboration between universities, thereby creating the embryo of a network of university open data technological ecosystems.

Relevance: 60.00%

Abstract:

Human behaviour recognition has been, and still remains, a challenging problem that involves different areas of computational intelligence. The automated understanding of people's activities from video sequences is an open research topic in which the computer vision and pattern recognition areas have made big efforts. In this paper, the problem is studied from a prediction point of view. We propose a novel method able to detect behaviour early using a small portion of the input, in addition to its ability to predict behaviour from new inputs. Specifically, we propose a predictive method based on a simple representation of a person's trajectory in the scene, which allows a high-level understanding of global human behaviour. The trajectory representation is used as a descriptor of the individual's activity, and the descriptors feed a classification stage for pattern recognition purposes. Classifiers are trained using the trajectory representation of the complete sequence, while partial sequences are processed to evaluate the early prediction capabilities given a specific observation time of the scene. The experiments have been carried out using three different datasets of the CAVIAR database, taking into account the behaviour of an individual. Additionally, different classic classifiers have been used in the experimentation in order to evaluate the robustness of the proposal. The results confirm the high accuracy of the proposal in the early recognition of people's behaviours.
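
The sketch below illustrates the early-prediction protocol as we read it: classifiers are trained on descriptors of complete trajectories and evaluated on truncated prefixes. The grid-histogram descriptor and the k-NN classifier are stand-ins for the paper's trajectory representation and classic classifiers.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def descriptor(traj):
    """Stand-in trajectory descriptor: histogram of visited cells over a
    10x10 sampling of an assumed 100x100 scene."""
    cells = np.clip((np.asarray(traj) / 10).astype(int), 0, 9)
    hist = np.zeros((10, 10))
    for cx, cy in cells:
        hist[cx, cy] += 1
    return hist.ravel()

def early_accuracy(train_trajs, y_train, test_trajs, y_test, fractions):
    """Train on complete trajectories, score on partially observed ones."""
    clf = KNeighborsClassifier(n_neighbors=3)
    clf.fit([descriptor(t) for t in train_trajs], y_train)
    return {f: clf.score([descriptor(t[: max(1, int(len(t) * f))])
                          for t in test_trajs], y_test)
            for f in fractions}

# accuracies = early_accuracy(tr, y_tr, te, y_te, fractions=[0.2, 0.4, 0.6, 1.0])
```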

Relevance: 60.00%

Abstract:

The SmartUA Dashboard is a software application that makes it easy to locate and visualize, at any time and from anywhere, all the information gathered from the various data sources and sensor networks generated by the Smart University project of the Universidad de Alicante; to represent it as maps and charts; to search and filter that information; and to show the university community in particular, and citizens in general, in an objective and intelligible way, the phenomena occurring on campus, interconnecting systems and people for a better use of resources, efficient management and continuous innovation.