891 results for Protocolos de redes de computadores (computer network protocols)


Relevance:

80.00%

Abstract:

The use of 3D data in mobile robotics applications provides valuable information about the robot's environment. However, the huge amount of 3D information is usually difficult to manage because the robot's storage and computing capabilities are insufficient. A data compression method is therefore necessary to store and process this information while preserving as much of it as possible. A few methods have been proposed to compress 3D information; nevertheless, there is no consistent public benchmark for comparing the results (compression level, reconstruction distance error, etc.) obtained with different methods. In this paper, we propose a dataset composed of 3D point clouds with different structure and texture variability to evaluate the results obtained from 3D data compression methods. We also provide useful tools for comparing compression methods, using the results obtained by existing relevant compression methods as a baseline.
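
As a rough illustration of the kind of comparison such a benchmark enables, the sketch below (not part of the paper; names and the synthetic cloud are illustrative) computes a compression ratio and a nearest-neighbour reconstruction error with NumPy and SciPy:

    import numpy as np
    from scipy.spatial import cKDTree

    def compression_ratio(raw_bytes, compressed_bytes):
        # Ratio of the original size to the compressed size (higher is better).
        return raw_bytes / compressed_bytes

    def mean_reconstruction_error(original, reconstructed):
        # Mean distance from each original point to its nearest reconstructed point.
        distances, _ = cKDTree(reconstructed).query(original)
        return float(distances.mean())

    # Synthetic stand-in for a benchmark scan and its decompressed counterpart.
    original = np.random.rand(10_000, 3)
    reconstructed = original[::4] + np.random.normal(scale=1e-3, size=(2_500, 3))
    print(compression_ratio(original.nbytes, reconstructed.nbytes))
    print(mean_reconstruction_error(original, reconstructed))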

Relevance:

80.00%

Abstract:

Automated human behaviour analysis has been, and still remains, a challenging problem. It has been approached from different points of view, from primitive actions to human interaction recognition. This paper focuses on trajectory analysis, which allows a simple, high-level understanding of complex human behaviour. We propose a novel representation of trajectory data, called the Activity Description Vector (ADV), based on how often a person occupies a specific point of the scenario and on the local movements performed there. The ADV is calculated for each cell of a spatially sampled grid over the scenario, providing a cue for different clustering methods. The ADV representation has been tested as the input to several classic classifiers and compared with other approaches on CAVIAR dataset sequences, obtaining high accuracy in recognizing the behaviour of people in a shopping centre.
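
A minimal sketch of the idea behind such a per-cell occupancy-and-movement descriptor (grid size, movement bins and thresholds are assumptions here, not the paper's parameters):

    import numpy as np

    def activity_description_vector(trajectory, scene_size, grid=(10, 10)):
        # Per-cell counts of presence and local movement direction
        # (stay, +x, -x, +y, -y) accumulated along a trajectory.
        gx, gy = grid
        w, h = scene_size
        adv = np.zeros((gx, gy, 5))

        def cell(p):
            return min(int(p[0] / w * gx), gx - 1), min(int(p[1] / h * gy), gy - 1)

        for prev, curr in zip(trajectory[:-1], trajectory[1:]):
            cx, cy = cell(prev)
            dx, dy = curr[0] - prev[0], curr[1] - prev[1]
            if abs(dx) < 1 and abs(dy) < 1:
                move = 0                      # stay
            elif abs(dx) >= abs(dy):
                move = 1 if dx > 0 else 2     # +x / -x
            else:
                move = 3 if dy > 0 else 4     # +y / -y
            adv[cx, cy, move] += 1
        return adv.flatten()                  # feature vector for clustering/classification

    traj = [(10, 10), (12, 11), (15, 11), (15, 12)]
    print(activity_description_vector(traj, scene_size=(100, 100)).shape)  # (500,)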

Relevance:

80.00%

Abstract:

Human behaviour recognition has been, and still remains, a challenging problem that involves different areas of computational intelligence. The automated understanding of people's activities from video sequences is an open research topic to which the computer vision and pattern recognition communities have devoted considerable effort. In this paper, the problem is studied from a prediction point of view. We propose a novel method able to detect behaviour early, using only a small portion of the input, in addition to predicting behaviour from new inputs. Specifically, we propose a predictive method based on a simple representation of a person's trajectory in the scene, which allows a high-level understanding of the global human behaviour. The trajectory representation is used as a descriptor of the individual's activity, and the descriptors feed a classification stage for pattern recognition. Classifiers are trained using the trajectory representation of the complete sequence, while partial sequences are processed to evaluate the early-prediction capabilities for a given observation time of the scene. The experiments have been carried out using the three datasets of the CAVIAR database, taking into account the behaviour of an individual. Additionally, several classic classifiers have been used in the experiments to evaluate the robustness of the proposal. Results confirm the high accuracy of the proposal in the early recognition of people's behaviours.
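
The following sketch illustrates the general evaluation protocol of training on complete sequences and testing on truncated prefixes; the occupancy-histogram descriptor and the k-NN classifier are stand-ins, not the paper's exact choices:

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    def descriptor(traj, grid=8, scene=100.0):
        # Simple occupancy histogram of a 2-D trajectory, used as a stand-in
        # for the trajectory representation described in the abstract.
        hist = np.zeros((grid, grid))
        for x, y in traj:
            i = min(int(x / scene * grid), grid - 1)
            j = min(int(y / scene * grid), grid - 1)
            hist[i, j] += 1
        return (hist / max(len(traj), 1)).flatten()

    def early_recognition_accuracy(train_trajs, train_labels, test_trajs, test_labels, ratio):
        # Train on complete sequences, test on the first `ratio` fraction of each test sequence.
        clf = KNeighborsClassifier(n_neighbors=3)
        clf.fit([descriptor(t) for t in train_trajs], train_labels)
        partial = [t[: max(2, int(len(t) * ratio))] for t in test_trajs]
        return clf.score([descriptor(t) for t in partial], test_labels)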

Relevance:

80.00%

Abstract:

Since the beginning of 3D computer vision, it has been necessary to reduce the data to make it tractable while preserving the important aspects of the scene. This has become even more relevant with the new low-cost RGB-D sensors, which provide a stream of color and 3D data at approximately 30 frames per second. Many applications use these sensors and need a preprocessing step to downsample the data, either to reduce the processing time or to improve the data (e.g., reducing noise or enhancing the important features). In this paper, we present a comparison of downsampling techniques based on different principles. Concretely, five downsampling methods are included: a bilinear-based method, a normal-based method, a color-based method, a combination of the normal- and color-based samplings, and a growing neural gas (GNG)-based approach. For the comparison, two different models acquired with the Blensor software have been used. Moreover, to evaluate the effect of downsampling in a real application, a 3D non-rigid registration is performed with the sampled data. From the experimentation we conclude that, depending on the purpose of the application, some sampling kernels can drastically improve the results. Bilinear- and GNG-based methods provide homogeneous point clouds, whereas color-based and normal-based methods provide datasets with a higher density of points in areas with specific features. In the non-rigid registration application, using a color-based sampled point cloud makes it possible to properly register two datasets when intensity data are relevant in the model, outperforming the results obtained with a purely homogeneous sampling.
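
A simplified contrast between homogeneous and feature-weighted sampling (illustrative only; the colour-variance weighting below is a stand-in for the paper's colour-based kernel, not its implementation):

    import numpy as np
    from scipy.spatial import cKDTree

    def uniform_downsample(points, keep_ratio=0.1, rng=None):
        # Homogeneous subsampling, in the spirit of the bilinear/GNG-style results.
        rng = rng or np.random.default_rng(0)
        idx = rng.choice(len(points), size=int(len(points) * keep_ratio), replace=False)
        return points[idx]

    def color_weighted_downsample(points, colors, keep_ratio=0.1, k=8, rng=None):
        # Keep more points where local colour variability is high, so feature-rich
        # areas end up denser than flat ones.
        rng = rng or np.random.default_rng(0)
        _, neighbours = cKDTree(points).query(points, k=k)
        local_var = colors[neighbours].var(axis=1).sum(axis=1)   # colour variance per point
        weights = local_var + 1e-9
        weights /= weights.sum()
        idx = rng.choice(len(points), size=int(len(points) * keep_ratio),
                         replace=False, p=weights)
        return points[idx]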

Relevance:

80.00%

Abstract:

In many classification problems, it is necessary to consider the specific location in an n-dimensional space from which features have been calculated. For example, considering the location of features extracted from specific areas of a two-dimensional space, such as an image, could improve a video surveillance system's understanding of a scene. In the same way, identical features extracted from different locations could mean different actions for a 3D HCI system. In this paper, we present a self-organizing feature map able to preserve the topology of the locations of the n-dimensional space from which the feature vectors have been extracted. The main contribution is implicitly preserving the topology of the original space, since considering the locations of the extracted features and their topology can ease the solution of certain problems. Specifically, the paper proposes the n-dimensional constrained self-organizing map preserving the input topology (nD-SOM-PINT). Features in adjacent areas of the n-dimensional space used to extract the feature vectors are explicitly mapped to adjacent areas of the nD-SOM-PINT, constraining the neural network structure and learning. As a case study, the neural network has been instantiated to represent and classify features, namely trajectories extracted from a sequence of images, at a high level of semantic understanding. Experiments have been thoroughly carried out using the CAVIAR datasets (Corridor, Frontal and Inria), taking into account the global behaviour of an individual, in order to validate that preserving the topology of the two-dimensional space yields high-performance trajectory classification compared with not considering the location of features. Moreover, a brief example has been included to validate the nD-SOM-PINT proposal in a domain other than individual trajectories. Results confirm the high accuracy of the nD-SOM-PINT, outperforming previous methods aimed at classifying the same datasets.
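
A toy sketch of the constraint idea, in which the SOM lattice mirrors a 2-D grid over the input space so that spatial adjacency is preserved by construction (grid size, learning rate and neighbourhood are illustrative; this is not the nD-SOM-PINT implementation):

    import numpy as np

    class LocationConstrainedSOM:
        # Each neuron is tied to one cell of a grid over the input space, so
        # features extracted from adjacent areas update adjacent neurons.
        def __init__(self, grid=(10, 10), feature_dim=8, lr=0.5, sigma=1.5, seed=0):
            rng = np.random.default_rng(seed)
            self.grid = grid
            self.weights = rng.random((grid[0], grid[1], feature_dim))
            self.lr, self.sigma = lr, sigma

        def train_step(self, cell, feature):
            # `cell` is the grid cell the feature was extracted from; only a
            # Gaussian neighbourhood around the matching neuron is updated.
            ci, cj = cell
            ii, jj = np.meshgrid(np.arange(self.grid[0]),
                                 np.arange(self.grid[1]), indexing="ij")
            dist2 = (ii - ci) ** 2 + (jj - cj) ** 2
            h = np.exp(-dist2 / (2 * self.sigma ** 2))[..., None]
            self.weights += self.lr * h * (feature - self.weights)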

Relevance:

80.00%

Abstract:

In this work, a modified version of the elastic bunch graph matching (EBGM) algorithm for face recognition is introduced. First, faces are detected using a fuzzy skin detector based on the RGB color space. Then, the fiducial points for the facial graph are extracted automatically by adjusting a grid of points to the output of an edge detector. After that, the position of the nodes, their relation to their neighbors and their Gabor jets are calculated in order to obtain the feature vector defining each face. A self-organizing map (SOM) framework is then presented, in which the winning neuron and the recognition process are computed using a similarity function that takes into account both the geometric and the texture information of the facial graph. The set of experiments carried out for our SOM-EBGM method shows the accuracy of our proposal when compared with other state-of-the-art methods.
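
A hedged sketch of a similarity function that blends Gabor-jet (texture) similarity with node-position (geometry) agreement, in the spirit of the description above; the weighting scheme and the magnitude-only jet comparison are assumptions:

    import numpy as np

    def jet_similarity(jet_a, jet_b):
        # Normalised dot product between two Gabor jets (magnitude-only variant).
        return float(np.dot(jet_a, jet_b) /
                     (np.linalg.norm(jet_a) * np.linalg.norm(jet_b) + 1e-12))

    def graph_similarity(nodes_a, jets_a, nodes_b, jets_b, alpha=0.5):
        # Blend of texture similarity (Gabor jets) and geometric similarity
        # (node displacements); `alpha` weights texture against geometry.
        texture = np.mean([jet_similarity(a, b) for a, b in zip(jets_a, jets_b)])
        displacement = np.linalg.norm(nodes_a - nodes_b, axis=1).mean()
        geometry = 1.0 / (1.0 + displacement)      # higher when the graphs align
        return alpha * texture + (1 - alpha) * geometry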

Relevance:

80.00%

Abstract:

In the 2010-2011 academic year, the introduction of the degree in Multimedia Engineering began. It is a qualification close to Computer Engineering, but focused on training professionals able to manage multimedia projects both in the entertainment field and in content management over information networks. The introduction has been progressive, with a new year of the degree starting each academic year, which is why 2014-2015 is the first year in which the degree has been fully in place from the start of the course. This led us to carry out a study of how the subjects in the different years are interconnected. The aim of this study has been to identify the problems or knowledge gaps that second-year students have, on the one hand, and those that may appear in the third year, on the other, as well as to establish possible ways of solving these problems in order to improve students' learning performance. The assessment of students in the second-year subjects has also been monitored to check its fit with the continuous-assessment system promoted by the Bologna Plan.

Relevance:

80.00%

Abstract:

This article describes the work carried out by the university teaching research network "Docencia semipresencial en el Máster en Ingeniería Informática" (blended learning in the Master's in Computer Engineering), which has sought to work on the different subjects of the Master's in Computer Engineering at the Universidad de Alicante in order to give them a blended-learning character in a coordinated and integrated way. A working group has been created within the master's academic committee, and close collaboration has been promoted among those responsible for all of the master's subjects when using the mechanisms needed to give the respective subjects this blended-learning character. The support received from the ICE has been very important in this respect, for example through the request for, and delivery of, a specific course on bLearning.

Relevance:

80.00%

Abstract:

Thanks to numerous technological advances in recent years, along with the popularization of computing devices, society is moving towards an "always connected" paradigm. Computer networks are everywhere, and the advent of IPv6 paves the way for the explosion of the Internet of Things, a concept that enables data sharing between computing machines and everyday objects. One of the areas covered by the Internet of Things is vehicular networks. However, the information generated by an individual vehicle is limited in volume and, taken in isolation, does not contribute to improving traffic. This proposal presents Infostructure, a system intended to ease the effort and reduce the cost of developing high-level, semantic, context-aware applications for the Internet of Things scenario by allowing data to be managed, stored and combined in order to generate broader context. To this end, we present a reference architecture that shows the major components of Infostructure. A prototype is then presented and used to validate that our work reaches the desired level of high-level semantic contextualization, together with a performance evaluation that assesses the behaviour of the subsystem responsible for managing contextual information over a large amount of data. A statistical analysis is then performed on the results obtained in the evaluation. Finally, we present the conclusions of the work, some open problems, such as the lack of guarantees regarding the integrity of the sensory data entering Infostructure, and future work that considers the implementation of other modules so that tests can be conducted in real environments.
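
As a minimal illustration of combining isolated per-vehicle data into broader context (the field names and the aggregation are hypothetical, not the Infostructure schema):

    from collections import defaultdict
    from statistics import mean

    def combine_vehicle_context(readings):
        # Aggregate isolated per-vehicle readings into a broader context:
        # vehicle count and average speed per road segment.
        by_segment = defaultdict(list)
        for r in readings:
            by_segment[r["segment"]].append(r["speed_kmh"])
        return {seg: {"vehicles": len(v), "avg_speed_kmh": mean(v)}
                for seg, v in by_segment.items()}

    readings = [
        {"vehicle": "a", "segment": "BR-101/km12", "speed_kmh": 40},
        {"vehicle": "b", "segment": "BR-101/km12", "speed_kmh": 20},
        {"vehicle": "c", "segment": "BR-101/km13", "speed_kmh": 80},
    ]
    print(combine_vehicle_context(readings))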

Relevance:

80.00%

Abstract:

The substantial increase in the number of applications offered over computer networks, as well as in the volume of traffic forwarded through the network, has made it difficult to assure an adequate service level to users. Offering Quality of Service (QoS) that honors the parameters specified in Service Level Agreements (SLAs) established between service providers and their clients is a traditional and extensive research area in computer networks. Several schemes for QoS provisioning have been proposed over the last three decades, but their scope has always been limited by factors such as the restricted development of network hardware and software, generally belonging to a single manufacturer. The advent of Software-Defined Networking (SDN), along with the maturation of its main materialization, the OpenFlow protocol, allowed the decoupling of network hardware and software through an architecture that provides a control plane and a data plane. This simplifies the computer networking scenario, allowing new abstractions to be applied to the hardware composing the data plane through new software executed in the control plane. This dissertation investigates the offer of QoS through the use and extension of the SDN architecture. Based on two new proposed modules, one to perform data plane monitoring, SDNMon, and a second, MP-ROUTING, developed to use multiple paths when forwarding the data of a flow, we demonstrate that some QoS metrics specified in SLAs, such as bandwidth, can be honored. Both modules were implemented and evaluated through a prototype. The evaluation results, covering several aspects of both proposed modules, are presented in this dissertation, showing the accuracy obtained by the monitoring module SDNMon and the QoS gains obtained from the multiple paths defined by MP-ROUTING when forwarding data flows through the SDN.
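
A small sketch of the kind of check such modules enable: deriving throughput from two port byte-counter samples and testing it against an SLA bandwidth floor (read_port_bytes is a hypothetical hook, not an SDNMon or OpenFlow API):

    def throughput_mbps(bytes_before, bytes_after, interval_s):
        # Throughput estimated from two byte-counter samples of a switch port.
        return (bytes_after - bytes_before) * 8 / interval_s / 1e6

    def violates_sla(measured_mbps, sla_min_mbps):
        # True when the measured bandwidth falls below the SLA floor, which could
        # trigger a redistribution of the flow over additional paths.
        return measured_mbps < sla_min_mbps

    # Hypothetical usage, assuming read_port_bytes() exposes the port counters:
    #   b0 = read_port_bytes(switch=1, port=2)   ...wait 5 s...   b1 = read_port_bytes(switch=1, port=2)
    #   if violates_sla(throughput_mbps(b0, b1, 5.0), sla_min_mbps=100): reroute()
    print(violates_sla(throughput_mbps(0, 75_000_000, 5.0), sla_min_mbps=200))  # 120 Mb/s < 200 Mb/s -> True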

Relevance:

80.00%

Abstract:

Current and future applications pose new requirements that the Internet architecture is not able to satisfy, such as mobility, multicast, multihoming and bandwidth guarantees. The Internet architecture has limitations that prevent all future requirements from being covered, and new architectures have been proposed that consider these requirements when a communication is established. ETArch (Entity Title Architecture) is a new, clean-slate Internet architecture able to use the application's requirements for each communication and flexible enough to work across several layers. Routing plays an important role on the Internet because it decides the best way to forward primitives through the network; in the Future Internet, all requirements depend on it. Routing is responsible for deciding the best path and, in the future, a better route might also consider aspects such as mobility or energy consumption. At the outset of ETArch, routing had not been defined. This work provides intra- and inter-domain routing algorithms for ETArch. We assume that the route should be defined completely before the data start to flow, to ensure that the requirements are met. On the Internet, routing has two distinct functions: (i) running specific algorithms to define the best route; and (ii) forwarding data primitives to the correct link. In the traditional Internet architecture, both functions are performed at every router each time a packet arrives. This work allows the complete route to be defined before the communication starts, as in telecommunication systems. We define the routing for ETArch, and experiments were performed to demonstrate the viability of control-plane routing: the initial setup before a communication takes longer, but afterwards only the forwarding of primitives is performed, saving processing time.
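
A compact sketch of computing the complete route in the control plane before any primitive is forwarded (plain Dijkstra over an illustrative topology; ETArch's actual algorithms and requirement-aware costs are not reproduced here):

    import heapq

    def shortest_path(graph, source, destination):
        # Compute the complete route before any data primitive is forwarded;
        # `graph` maps node -> {neighbour: cost}. The cost could encode
        # requirements such as energy or mobility penalties.
        queue = [(0, source, [source])]
        visited = set()
        while queue:
            cost, node, path = heapq.heappop(queue)
            if node == destination:
                return cost, path
            if node in visited:
                continue
            visited.add(node)
            for neighbour, weight in graph.get(node, {}).items():
                if neighbour not in visited:
                    heapq.heappush(queue, (cost + weight, neighbour, path + [neighbour]))
        return float("inf"), []

    topology = {"A": {"B": 1, "C": 4}, "B": {"C": 1, "D": 5}, "C": {"D": 1}, "D": {}}
    print(shortest_path(topology, "A", "D"))   # (3, ['A', 'B', 'C', 'D'])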

Relevance:

80.00%

Abstract:

Technological evolution in contemporary communication structures digital systems through connected computer networks and the massive use of technological devices. Digital data captured and distributed via applications installed on smartphones create a dynamic communicational environment. Journalism and Communication are trying to adapt to the new informational ecosystem driven by constant technological innovations, which make it possible to create new environments and systems for accessing socially relevant information. New tools are emerging for the production and distribution of journalistic content, along with data-driven products and intelligent interactions, algorithms used in several processes, hyperlocal platforms, and digital narrative and production systems. In this context, the aim of the research was to analyse and compare specific media and technology products: whether the new technologies add attributes to journalistic productions and narratives, what their impact is on the practice of the activity, and whether the processes for producing socially relevant information change with respect to traditional, consolidated journalistic processes. It investigates whether the use of information contributed by users in real time improves the quality of the narratives emerging through mobile devices, and whether gamification alters the perceived credibility of journalism, so that the way information and knowledge are produced and delivered to the audiences that demand content can be rethought.

Relevance:

80.00%

Abstract:

Future pervasive environments will take into consideration not only individual users' interests but also their social relationships. In this way, pervasive communities can lead the user to participate beyond traditional pervasive spaces, enabling cooperation among groups and taking into account not only individual interests but also the collective and social context. Social applications in the CSCW (Computer Supported Cooperative Work) field present new challenges and possibilities in terms of using social context information for adaptability in pervasive environments. In particular, this research describes the design and development of a context-aware framework for collaborative applications (CAFCA) that uses the user's social context information for proactive adaptations in pervasive environments. In order to validate the proposed framework, an evaluation was conducted with a group of users based on an enterprise scenario. The analysis made it possible to verify the impact of the framework in terms of functionality and efficiency under real-world conditions. The main contribution of this thesis was to provide a context-aware framework to support collaborative applications in pervasive environments. The research focused on providing an innovative socio-technical approach to exploit collaboration in pervasive communities. Finally, the main results reside in the social matching capabilities for session formation, communication and coordination of groupware for collaborative activities.
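
A minimal illustration of social matching for session formation, grouping users whose context shares enough interests (the threshold and data are illustrative, not the CAFCA implementation):

    def match_sessions(users, min_shared=2):
        # Group users into sessions when their interests overlap by at least
        # `min_shared` items; each session accumulates its members' interests.
        sessions = []
        for name, interests in users.items():
            for session in sessions:
                if len(session["interests"] & interests) >= min_shared:
                    session["members"].add(name)
                    session["interests"] |= interests
                    break
            else:
                sessions.append({"members": {name}, "interests": set(interests)})
        return sessions

    users = {
        "ana":   {"sdn", "iot", "routing"},
        "bob":   {"iot", "routing", "qos"},
        "carol": {"hci", "vision"},
    }
    print(match_sessions(users))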

Relevance:

80.00%

Abstract:

In this work, we perform a first approach to emotion recognition from single-channel EEG signals extracted in four (4) mother-child dyad experiments in developmental psychology. The single-channel EEG signals are analysed and processed using several window sizes, performing a statistical analysis over features in the time and frequency domains. Finally, a neural network obtained an average classification accuracy of 99% on two emotional states, happiness and sadness.
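
A hedged sketch of the pipeline described: windowed time- and frequency-domain features from a single EEG channel fed to a small neural network (the feature choice, window length, sampling rate and synthetic signals are assumptions, not the paper's setup):

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    def window_features(signal, fs, window_s=2.0):
        # Mean, variance, and alpha/beta band power per window of a single channel.
        step = int(window_s * fs)
        feats = []
        for start in range(0, len(signal) - step + 1, step):
            w = signal[start:start + step]
            spectrum = np.abs(np.fft.rfft(w)) ** 2
            freqs = np.fft.rfftfreq(len(w), d=1.0 / fs)
            alpha = spectrum[(freqs >= 8) & (freqs < 13)].sum()
            beta = spectrum[(freqs >= 13) & (freqs < 30)].sum()
            feats.append([w.mean(), w.var(), alpha, beta])
        return np.array(feats)

    # Synthetic signals standing in for the two emotional conditions.
    rng = np.random.default_rng(0)
    fs = 128
    X = np.vstack([window_features(rng.normal(size=fs * 60), fs),
                   window_features(rng.normal(scale=2.0, size=fs * 60), fs)])
    y = np.array([1] * (len(X) // 2) + [0] * (len(X) // 2))  # 1 = happiness, 0 = sadness
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)
    print(clf.score(X, y))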