21 results for personal data


Relevance:

40.00%

Publisher:

Abstract:

Contents: Center for Open Middleware; POSDATA project; User modeling; Some early results; @posdata service

Relevance:

40.00%

Publisher:

Abstract:

The Internet is evolving towards what is known as the Live Web. In this new stage of the Internet's evolution, a multitude of social data streams are put at the service of users. Thanks to these data sources, users have gone from browsing static web pages to interacting with applications that offer personalised content based on their preferences. Each user interacts daily with multiple applications that issue notifications and alerts; in this sense every user is a source of events, and users often feel overwhelmed, unable to process all of that on-demand information. To cope with this overload, a multitude of tools that automate the most common tasks have appeared, ranging from inbox managers and social network alert managers to complex CRMs or smart-home hubs. The downside is that, although they solve common problems, they cannot adapt to the needs of each user by offering a personalised solution. Task Automation Services (TAS) entered the scene from 2012 onwards to address this limitation. Given their similarity, these services are also regarded as a new, user-centred approach to mash-up technology. Users of these platforms can interconnect services, sensors and other Internet-connected devices, designing the automations that fit their needs. The proposal has been widely accepted by users, which has prompted a multitude of platforms offering TAS to enter the scene. As this is a new field of research, this thesis presents the main characteristics of TAS, describes their components, and identifies the fundamental dimensions that define them and allow their classification. This work coins the term Task Automation Service (TAS), gives a formal description of these services and their components (called channels), and provides a reference architecture. There is also a lack of tools for describing automation services and automation rules. In this regard, this thesis proposes a common model that takes shape in the EWE ontology (Evented WEb Ontology). This model makes it possible to compare and match channels and automations from different TASs, a considerable contribution to the portability of user automations between platforms. Likewise, given the semantic nature of the model, automations can include elements from external sources over which to reason, as is the case of Linked Open Data. Using this model, a dataset of channels and automations has been generated from data obtained from some of the TASs on the market. As a final step towards a common model for describing TAS, an algorithm has been developed to learn ontologies automatically from the data in the dataset. This fosters the discovery of new channels and reduces the maintenance cost of the model, which is updated semi-automatically.
In conclusion, the main contributions of this thesis are: i) describing the state of the art in task automation and coining the term Task Automation Service; ii) developing an ontology for modelling TAS components and automations; iii) populating a dataset of channel and automation data, used to develop an automatic ontology learning algorithm; and iv) designing an agent architecture to assist users in creating automations.

ABSTRACT

The new stage in the evolution of the Web (the Live Web or Evented Web) puts lots of social data streams at the service of users, who no longer browse static web pages but interact with applications that present them with contextual and relevant experiences. Given that each user is a potential source of events, a typical user often gets overwhelmed. To deal with that huge amount of data, multiple automation tools have emerged, ranging from simple social media managers or notification aggregators to complex CRMs or smart-home hubs and apps. As a downside, they cannot be tailored to the needs of every single user. As a natural response to this downside, Task Automation Services appeared on the Internet. They may be seen as a new model of mash-up technology for combining social streams, services and connected devices from an end-user perspective: end-users are empowered to connect those streams however they want, designing the automations they need. The platforms that appeared early on grew quickly, and as a consequence the number of platforms following this approach is growing fast. As this is a novel field, this thesis aims to shed light on it, presenting and exemplifying the main characteristics of Task Automation Services, describing their components, and identifying several dimensions along which to classify them. This thesis coins the term Task Automation Service (TAS) by providing a formal definition of these services and their components (called channels), as well as a TAS reference architecture. There is also a lack of tools for describing automation services and automation rules. In this regard, this thesis proposes a theoretical common model of TAS and formalizes it as the EWE ontology. This model makes it possible to compare channels and automations from different TASs, which has a high impact on interoperability, and enhances automations by providing a mechanism to reason over external sources such as Linked Open Data. Based on this model, a dataset of TAS components was built, harvesting data from the web sites of actual TASs. Going a step further towards this common model, an algorithm for categorizing those components was designed, enabling their discovery across different TASs. Thus, the main contributions of the thesis are: i) surveying the state of the art on task automation and coining the term Task Automation Service; ii) providing a semantic common model for describing TAS components and automations; iii) populating a categorized dataset of TAS components, used to learn ontologies of particular domains from the TAS perspective; and iv) designing an agent architecture for assisting users in setting up automations that is aware of their context and acts accordingly.
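Since EWE is an RDF ontology, a channel and an automation rule can be expressed as linked data and queried or reasoned over like any other semantic resource. The snippet below is a minimal, illustrative sketch in Python with rdflib; the namespace URI and the class and property names (ewe:Channel, ewe:Event, ewe:Action, ewe:Rule, ewe:triggeredBy, ewe:firesAction) are assumptions chosen for illustration, not the exact terms of the published ontology.

```python
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, RDFS

# Placeholder namespaces for the sketch; the real EWE ontology defines its own URIs.
EWE = Namespace("http://example.org/ewe#")
EX = Namespace("http://example.org/data#")

g = Graph()
g.bind("ewe", EWE)
g.bind("ex", EX)

# One channel exposing an event (a weather service), one exposing an action (a mail service).
g.add((EX.Weather, RDF.type, EWE.Channel))
g.add((EX.RainForecast, RDF.type, EWE.Event))
g.add((EX.RainForecast, RDFS.label, Literal("Rain is forecast for tomorrow")))

g.add((EX.Mail, RDF.type, EWE.Channel))
g.add((EX.SendEmail, RDF.type, EWE.Action))

# An automation rule wiring the event to the action:
# "if rain is forecast, send me an email".
g.add((EX.UmbrellaRule, RDF.type, EWE.Rule))
g.add((EX.UmbrellaRule, EWE.triggeredBy, EX.RainForecast))
g.add((EX.UmbrellaRule, EWE.firesAction, EX.SendEmail))

print(g.serialize(format="turtle"))
```

Because the rule is plain RDF, the same graph can be merged with external Linked Open Data sources and queried with SPARQL, which is the interoperability point the abstract emphasises.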

Relevance:

30.00%

Publisher:

Abstract:

The study of the effectiveness of cognitive rehabilitation processes, and the identification of cognitive profiles that define comparable populations, is a controversial area, but at the same time it is strongly needed in order to improve therapies. There is limited evidence about the efficacy of cognitive rehabilitation. Many of the trials conclude that, in spite of an apparently good clinical response, the differences are not statistically significant. The common feature in all these trials is heterogeneity among populations. In this situation, observational studies on very well controlled cohorts, together with innovative knowledge-extraction methods, could provide methodological insights for the design of more accurate comparative trials. Some correlation studies between neuropsychological tests and patients' capacities have been carried out [1], [2], as well as studies correlating tests with morphological changes in the brain [3]. The efficacy of the procedures depends on three main factors: the impairment profile, the scheduled tasks and the execution results. The relationship between them is what makes up cognitive rehabilitation as a discipline, but its structure is not properly defined. In this work we present a clustering method used in Neuro Personal Trainer (NPT) to group patients into cognitive profiles using data mining techniques. The system uses these clusters to personalize treatments: a patient's assigned cluster is used to select the tasks best suited to their specific needs, by comparing the results obtained in the past by patients with the same profile.
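The abstract does not name the clustering algorithm used in NPT. As a rough sketch of the general idea, assuming patients are represented by vectors of neuropsychological test scores, a standard k-means clustering can group them into profiles and assign a new patient to the closest one; the random data and the choice of k-means here are illustrative assumptions, not the system's actual method.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Illustrative data: each row is a patient, each column a neuropsychological test score.
# In a real setting these would come from the clinical assessment battery.
rng = np.random.default_rng(0)
scores = rng.normal(size=(200, 8))

# Normalise the tests so no single scale dominates, then cluster into cognitive profiles.
scaler = StandardScaler().fit(scores)
X = scaler.transform(scores)
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)

# A new patient is assigned to the closest profile; tasks that worked well in the past
# for patients in that cluster would then be proposed first.
new_patient = scaler.transform(rng.normal(size=(1, 8)))
profile = kmeans.predict(new_patient)[0]
print(f"Assigned cognitive profile: {profile}")
```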

Relevance:

30.00%

Publisher:

Abstract:

The Web has undergone a drastic transformation in recent years, mainly because of its popularisation and the enormous amount of information it holds. These factors have driven the shift from the so-called Web of Documents to the Semantic Web, where every piece of information is related to other information. The main advantages of linked data lie in its ease of reuse, its accessibility and its availability to be found by the user. This work aims to highlight the usefulness of linked data applied to the geographic domain and to show how it can be used today. To this end, spatial linked data from different sources has been exploited through external servers, or SPARQL endpoints. In addition, a private server capable of serving linked data stored on a personal computer has been used. The exploitation of linked data has been implemented in a web application written in JavaScript, which tries to completely shield the user from the application's internal data handling. The application also includes several modules and options capable of interacting with the queries sent to the servers, providing a more intuitive and pleasant environment for the user.

ABSTRACT:

In recent years the Web has undergone a drastic transformation because of its popularization and the huge amount of information it stores. Due to these factors it has evolved from the Web of Documents to the Semantic Web, where the data are linked. The main advantages of Linked Data lie in the ease of its reuse, its accessibility and its availability to be located by users. The aim of this research is to highlight the usefulness of geographic linked data and to show how it can be used today. To this end, spatial linked data coming from several sources has been managed through external servers, also called SPARQL endpoints. In addition, a private server able to provide linked data stored on a personal computer has been used. The use of linked data has been implemented in a JavaScript web application that tries to completely abstract the internal data handling away from the user. This application has some modules and options able to interact with the queries made to the servers, providing a more intuitive and friendly environment for users.
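The application itself was written in JavaScript; as a language-agnostic illustration of the core step, querying a public SPARQL endpoint for geographic linked data, the sketch below uses Python with SPARQLWrapper. The endpoint and the query (coordinates of Spanish cities in DBpedia) are assumed examples, not the exact sources used in the work.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Example endpoint: DBpedia. The actual work combined several endpoints plus a
# private server; this only shows the query pattern.
sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setReturnFormat(JSON)
sparql.setQuery("""
    PREFIX geo: <http://www.w3.org/2003/01/geo/wgs84_pos#>
    PREFIX dbo: <http://dbpedia.org/ontology/>
    SELECT ?city ?lat ?long WHERE {
        ?city a dbo:City ;
              dbo:country <http://dbpedia.org/resource/Spain> ;
              geo:lat ?lat ;
              geo:long ?long .
    } LIMIT 5
""")

results = sparql.query().convert()
for row in results["results"]["bindings"]:
    print(row["city"]["value"], row["lat"]["value"], row["long"]["value"])
```

In a browser application the same query would simply be sent to the endpoint over HTTP and the JSON bindings rendered on a map, which is essentially what the described JavaScript modules do on behalf of the user.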

Relevance:

30.00%

Publisher:

Abstract:

The deployment of the Ambient Intelligence (AmI) paradigm requires designing and integrating user-centered smart environments that assist people in their daily life activities. This research paper details the integration and validation of multiple heterogeneous sensors with hybrid reasoners that support decision making, in order to monitor personal and environmental data in a smart home in a privacy-preserving way. The results innovate on knowledge-based platforms, distributed sensors, connected objects, accessibility and authentication methods to promote independent living for elderly people. TALISMAN+, the AmI framework deployed, integrates four subsystems in the smart home: (i) a mobile biomedical telemonitoring platform to provide elderly patients with continuous disease management; (ii) an integration middleware that captures context from heterogeneous sensors in order to program the environment's reactions; (iii) a vision system for intelligent monitoring of daily activities in the home; and (iv) an ontology-based integrated reasoning platform to trigger local actions and manage private information in the smart home. The framework was integrated in two real running environments, the UPM Accessible Digital Home and the MetalTIC house, and successfully validated by five experts in home care, elderly people and personal autonomy.
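The paper describes an architecture rather than code, but the interaction it relies on, heterogeneous sensors publishing context data that a rule-based reasoner turns into local actions, can be sketched abstractly. The sketch below is a simplified, hypothetical illustration of that event-driven pattern; the names (SensorReading, Rule) and the thresholds are not taken from TALISMAN+.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class SensorReading:
    """A normalised context event produced by the integration middleware."""
    sensor_id: str
    kind: str          # e.g. "temperature", "presence", "heart_rate"
    value: float

@dataclass
class Rule:
    """A local reasoning rule: if the condition holds, fire the action."""
    condition: Callable[[SensorReading], bool]
    action: Callable[[SensorReading], None]

def dispatch(reading: SensorReading, rules: List[Rule]) -> None:
    """Forward a reading to every rule whose condition it satisfies."""
    for rule in rules:
        if rule.condition(reading):
            rule.action(reading)

# Hypothetical rules: alert the telemonitoring platform on a high heart rate,
# and switch the lights on when presence is detected.
rules = [
    Rule(lambda r: r.kind == "heart_rate" and r.value > 120,
         lambda r: print(f"ALERT: high heart rate reported by {r.sensor_id}")),
    Rule(lambda r: r.kind == "presence" and r.value > 0,
         lambda r: print("Turning lights on")),
]

dispatch(SensorReading("band-01", "heart_rate", 135.0), rules)
dispatch(SensorReading("hall-pir", "presence", 1.0), rules)
```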

Relevance:

30.00%

Publisher:

Abstract:

Secure access to patient data is becoming increasingly important as medical informatics grows in significance, both to assist with population health studies and to support patient-specific medicine during treatment. However, assembling the many different types of data emanating from the clinic is in itself difficult, and doing so across national borders compounds the problem. In this paper we present our solution: an easy-to-use distributed informatics platform embedding a state-of-the-art data warehouse that incorporates a secure pseudonymisation system protecting access to personal healthcare data. Using this system, a whole range of patient-derived data, from genomics to imaging to clinical records, can be assembled and linked, and then connected with analytics tools that help us to understand the data. Research performed in this environment will have immediate clinical impact for personalised patient healthcare.
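The paper does not detail its pseudonymisation scheme. One common building block for this kind of system is a keyed hash that maps a real patient identifier to a stable pseudonym, so records from different sources can still be linked without exposing the identifier. The sketch below illustrates only that general idea and is not the platform's actual mechanism; the identifier format and key are made up.

```python
import hmac
import hashlib

def pseudonymise(patient_id: str, secret_key: bytes) -> str:
    """Derive a stable pseudonym from a patient identifier using an HMAC.

    The same identifier always maps to the same pseudonym (so records can be
    linked), but without the secret key the mapping cannot be recomputed or
    reversed from the pseudonym alone.
    """
    digest = hmac.new(secret_key, patient_id.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()

# Illustrative use: in practice the key would live in a secured key store, never in code.
key = b"replace-with-a-long-random-secret"
print(pseudonymise("NHS-1234567", key))   # hypothetical identifier format
print(pseudonymise("NHS-1234567", key))   # same input -> same pseudonym, enabling linkage
```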