923 results for acquisition of data system
Abstract:
The Internet is evolving towards what is known as the Live Web. In this new stage of the Internet's evolution, a multitude of social data streams is put at the service of users. Thanks to these data sources, users have moved from browsing static web pages to interacting with applications that offer personalized content based on their preferences. Each user interacts daily with multiple applications that issue notifications and alerts; in this sense every user is a source of events, and users often feel overwhelmed, unable to process all that information on demand. To cope with this overload, a multitude of tools that automate the most common tasks have appeared, ranging from inbox managers and social media alert managers to complex CRMs or smart-home hubs. The downside is that, although they solve common problems, they cannot adapt to the needs of each individual user with a personalized solution. Task Automation Services (TAS) entered the scene from 2012 onwards to address this limitation. Given the resemblance, these services can also be regarded as a new, user-centred approach to mash-up technology. Users of these platforms can interconnect services, sensors and other Internet-connected devices, designing the automations that fit their needs. The proposal has been widely accepted by users, which has prompted a multitude of platforms offering TAS to enter the scene. As this is a new field of research, this thesis presents the main characteristics of TAS, describes their components, and identifies the fundamental dimensions that define them and allow their classification. This work coins the term Task Automation Service (TAS), giving a formal description of these services and their components (called channels), and provides a reference architecture. There is also a lack of tools for describing automation services and automation rules. In this regard, this thesis proposes a common model, realized as the EWE (Evented WEb) ontology. This model makes it possible to compare and align channels and automations from different TASs, a considerable contribution to the portability of user automations between platforms. Moreover, given the semantic nature of the model, automations can include elements from external sources to reason over, such as Linked Open Data. Using this model, a dataset of channels and automations has been generated from data collected from some of the TASs on the market. As a final step towards a common model for describing TAS, an algorithm has been developed to learn ontologies automatically from this dataset. This fosters the discovery of new channels and reduces the maintenance cost of the model, which is updated semi-automatically.
In conclusion, the main contributions of this thesis are: i) describing the state of the art in task automation and coining the term Task Automation Service; ii) developing an ontology for modelling TAS components and automations; iii) populating a dataset of channel and automation data, used to develop an automatic ontology-learning algorithm; and iv) designing an agent architecture to assist users in creating automations.
ABSTRACT
The new stage in the evolution of the Web (the Live Web or Evented Web) puts lots of social data streams at the service of users, who no longer browse static web pages but interact with applications that present them with contextual and relevant experiences. Given that each user is a potential source of events, a typical user often gets overwhelmed. To deal with that huge amount of data, multiple automation tools have emerged, ranging from simple social media managers or notification aggregators to complex CRMs or smart-home hubs. As a downside, they cannot be tailored to the needs of every single user. As a natural response to this limitation, Task Automation Services appeared on the Internet. They may be seen as a new model of mash-up technology for combining social streams, services and connected devices from an end-user perspective: end-users are empowered to connect those streams however they want, designing the automations they need. The early platforms were quickly and widely adopted, and as a consequence the number of platforms following this approach is growing fast. Being a novel field, this thesis aims to shed light on it, presenting and exemplifying the main characteristics of Task Automation Services, describing their components, and identifying several dimensions along which to classify them. This thesis coins the term Task Automation Service (TAS) by providing a formal definition of these services and their components (called channels), as well as a TAS reference architecture. There is also a lack of tools for describing automation services and automation rules. In this regard, this thesis proposes a theoretical common model of TAS and formalizes it as the EWE ontology. This model makes it possible to compare channels and automations from different TASs, which has a high impact on interoperability, and it enhances automations by providing a mechanism to reason over external sources such as Linked Open Data. Based on this model, a dataset of TAS components was built, harvesting data from the web sites of actual TASs. Going a step further towards this common model, an algorithm for categorizing these components was designed, enabling their discovery across different TASs. Thus, the main contributions of the thesis are: i) surveying the state of the art on task automation and coining the term Task Automation Service; ii) providing a semantic common model for describing TAS components and automations; iii) populating a categorized dataset of TAS components, used to learn ontologies of particular domains from the TAS perspective; and iv) designing an agent architecture for assisting users in setting up automations that is aware of their context and acts accordingly.
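To make the model concrete, below is a minimal sketch of how a channel and an automation rule could be described in RDF with Python's rdflib. The namespace URI and the class and property names (Channel, Event, Action, Rule, generatesEvent, providesAction, triggeredBy, firesAction) are illustrative assumptions made for this sketch, not necessarily the terms published in the EWE ontology.

from rdflib import Graph, Literal, Namespace, RDF, RDFS

EWE = Namespace("http://example.org/ewe/ns#")  # placeholder; the published EWE ontology defines the real namespace
EX = Namespace("http://example.org/tas/")

g = Graph()
g.bind("ewe", EWE)
g.bind("ex", EX)

# A channel that generates events (e.g. a weather service) ...
g.add((EX.WeatherChannel, RDF.type, EWE.Channel))
g.add((EX.RainForecast, RDF.type, EWE.Event))
g.add((EX.WeatherChannel, EWE.generatesEvent, EX.RainForecast))

# ... and a channel that provides actions (e.g. a mail service).
g.add((EX.MailChannel, RDF.type, EWE.Channel))
g.add((EX.SendEmail, RDF.type, EWE.Action))
g.add((EX.MailChannel, EWE.providesAction, EX.SendEmail))

# An automation rule wiring the two: "email me when rain is forecast".
g.add((EX.RainAlert, RDF.type, EWE.Rule))
g.add((EX.RainAlert, RDFS.label, Literal("Email me when rain is forecast")))
g.add((EX.RainAlert, EWE.triggeredBy, EX.RainForecast))
g.add((EX.RainAlert, EWE.firesAction, EX.SendEmail))

print(g.serialize(format="turtle"))

Describing rules this way is what enables the cross-TAS comparison mentioned above: two rules from different platforms that type the same events and actions become directly comparable as RDF graphs.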
Abstract:
Over the last few years, the Data Center market has increased exponentially and this tendency continues today. As a direct consequence of this trend, the industry is pushing the development and implementation of new technologies that would improve the energy efficiency of data centers. An adaptive dashboard would allow the user to monitor the most important parameters of a data center in real time. For that reason, monitoring companies work with IoT big data filtering tools and cloud computing systems to handle the amounts of data obtained from the sensors placed in a data center. Analyzing the market trends in this field, we can affirm that the study of predictive algorithms has become an essential area for competitive IT companies. Complex algorithms are used to forecast risk situations based on historical data and warn the user in case of danger. Considering that several different users, from IT experts and maintenance staff to accounting managers, will interact with this dashboard, it is vital to personalize it automatically. Following that line of thought, the dashboard should only show metrics relevant to the user, in different formats such as overlapped maps or representative graphs, among others. These maps will show all the information needed in a visual and easy-to-evaluate way. To sum up, this dashboard will allow the user to visualize and control a wide range of variables. Monitoring essential factors such as average temperature, gradients or hotspots, as well as energy and power consumption and savings by rack or building, would allow clients to understand how their equipment is behaving, helping them to optimize the energy consumption and efficiency of the racks. It would also help them to prevent possible damage to the equipment with predictive high-tech algorithms.
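As a hedged illustration of the forecasting idea described above (not any vendor's actual algorithm), the sketch below extrapolates the next rack-temperature reading from a simple linear trend and warns before a limit is crossed; the window size and the 35 C limit are made-up values.

from statistics import mean

def forecast_next(history, window=6):
    """Extrapolate one step ahead from the average slope of recent readings."""
    recent = history[-window:]
    slope = mean(b - a for a, b in zip(recent, recent[1:]))
    return recent[-1] + slope

def check_rack(history, limit_c=35.0):
    """Warn if the predicted next reading would cross the limit."""
    predicted = forecast_next(history)
    if predicted >= limit_c:
        return f"WARNING: predicted {predicted:.1f} C >= limit {limit_c} C"
    return f"OK: predicted {predicted:.1f} C"

# Hourly average temperatures (C) for one rack, trending upwards.
readings = [28.0, 28.4, 29.1, 30.2, 31.6, 33.1, 34.4]
print(check_rack(readings))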
Abstract:
In the MYL mutant of the Arc repressor dimer, sets of partially buried salt-bridge and hydrogen-bond interactions mediated by Arg-31, Glu-36, and Arg-40 in each subunit are replaced by hydrophobic interactions between Met-31, Tyr-36, and Leu-40. The MYL refolding/dimerization reaction differs from that of wild type in being 10- to 1250-fold faster, having an earlier transition state, and depending upon viscosity but not ionic strength. Formation of the wild-type salt bridges in a hydrophobic environment clearly imposes a kinetic barrier to folding, which can be lowered by high salt concentrations. The changes in the position of the transition state and viscosity dependence can be explained if denatured monomers interact to form a partially folded dimeric intermediate, which then continues folding to form the native dimer. The second step is postulated to be rate limiting for wild type. Replacing the salt bridge with hydrophobic interactions lowers this barrier for MYL. This makes the first kinetic barrier rate limiting for MYL refolding and creates a downhill free-energy landscape in which most molecules which reach the intermediate state continue to form native dimers.
Abstract:
N-Methyl-D-aspartate (NMDA) receptors play an important role in the development of retinal axon arbors in the mammalian lateral geniculate nucleus (LGN). We investigated whether blockade of NMDA receptors in vivo or in vitro affects the dendritic development of LGN neurons during the period that retinogeniculate axons segregate into on-center and off-center sublaminae. Osmotic minipumps containing either the NMDA receptor antagonist D-2-amino-5-phosphonovaleric acid (D-APV) or saline were implanted in ferret kits at postnatal day 14. After 1 week, LGN neurons were intracellularly injected with Lucifer yellow. Infusion of D-APV in vivo led to an increase in the number of branch points and in the density of dendritic spines compared with age-matched normal or saline-treated animals. To examine the time course of spine formation, crystals of 1,1'-dioctadecyl-3,3,3',3'-tetramethylindocarbocyanine perchlorate were placed in the LGN in brain slices from 14- to 18-day-old ferrets. Labeled LGN cell dendrites were imaged on-line in living slices by confocal microscopy, with slices maintained either in normal perfusion medium or with the addition of D-APV or NMDA to the medium. Addition of D-APV in vitro at doses specific for blocking NMDA receptors led to a > 6-fold net increase in spine density compared with control or NMDA-treated slices. Spines appeared within a few hours of NMDA receptor blockade, indicating a rapid local response by LGN cells in the absence of NMDA receptor activation. Thus, activity-dependent structural changes in postsynaptic cells act together with changes in presynaptic arbors to shape projection patterns and specific retinogeniculate connections.
Abstract:
The localization of sites of memory formation within the mammalian brain has proven to be a formidable task even for simple forms of learning and memory. Recent studies have demonstrated that reversibly inactivating a localized region of cerebellum, including the dorsal anterior interpositus nucleus, completely prevents acquisition of the conditioned eye-blink response with no effect upon subsequent learning without inactivation. This result indicates that the memory trace for this type of learning is located either (i) within this inactivated region of cerebellum or (ii) within some structure(s) efferent from the cerebellum to which output from the interpositus nucleus ultimately projects. To distinguish between these possibilities, two groups of rabbits were conditioned (by using two conditioning stimuli) while the output fibers of the interpositus (the superior cerebellar peduncle) were reversibly blocked with microinjections of the sodium channel blocker tetrodotoxin. Rabbits performed no conditioned responses during this inactivation training. However, training after inactivation revealed that the rabbits (trained with either conditioned stimulus) had fully learned the response during the previous inactivation training. Cerebellar output, therefore, does not appear to be essential for acquisition of the learned response. This result, coupled with the fact that inactivation of the appropriate region of cerebellum completely prevents learning, provides compelling evidence supporting the hypothesis that the essential memory trace for the classically conditioned eye-blink response is localized within the cerebellum.
Abstract:
The UCM Instrumentation Group (GUAIX) is currently developing Data Reduction Pipelines (DRP) for four instruments of the GTC: EMIR, FRIDA, MEGARA and MIRADAS. The purpose of the DRPs is to provide astronomers with scientific-quality data, removing instrumental biases, calibrating the images in physical units and providing an estimate of the associated uncertainties.
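A minimal sketch of the kind of steps such a pipeline chains together, under simplifying assumptions (a single frame, ideal calibration frames, Poisson-plus-read-noise statistics); the actual EMIR/FRIDA/MEGARA/MIRADAS pipelines are far more elaborate, and the gain and read-noise values below are illustrative.

import numpy as np

def reduce_frame(raw, master_bias, master_flat, gain=2.0, read_noise=4.0):
    """Bias-subtract, flat-field and attach a per-pixel uncertainty map.

    gain is in e-/ADU and read_noise in e-; both are illustrative values.
    """
    science = (raw - master_bias) / master_flat
    # Propagate shot noise and read noise (in ADU) through the flat division.
    electrons = np.clip(raw - master_bias, 0, None) * gain
    sigma_adu = np.sqrt(electrons + read_noise**2) / gain
    uncertainty = sigma_adu / master_flat
    return science, uncertainty

rng = np.random.default_rng(0)
raw = rng.normal(1000.0, 30.0, size=(64, 64))   # fake raw frame
bias = np.full((64, 64), 300.0)                  # fake master bias
flat = np.ones((64, 64))                         # fake master flat
img, err = reduce_frame(raw, bias, flat)
print(img.mean(), err.mean())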
Abstract:
Nowadays, data mining is based on low-level specifications of the employed techniques typically bounded to a specific analysis platform. Therefore, data mining lacks a modelling architecture that allows analysts to consider it as a truly software-engineering process. Here, we propose a model-driven approach based on (i) a conceptual modelling framework for data mining, and (ii) a set of model transformations to automatically generate both the data under analysis (via data-warehousing technology) and the analysis models for data mining (tailored to a specific platform). Thus, analysts can concentrate on the analysis problem via conceptual data-mining models instead of low-level programming tasks related to the underlying-platform technical details. These tasks are now entrusted to the model-transformations scaffolding.
Abstract:
Data mining is one of the most important analysis techniques for automatically extracting knowledge from large amounts of data. Nowadays, data mining is based on low-level specifications of the employed techniques, typically bound to a specific analysis platform. Therefore, data mining lacks a modelling architecture that allows analysts to consider it as a truly software-engineering process. Bearing this situation in mind, we propose a model-driven approach based on (i) a conceptual modelling framework for data mining, and (ii) a set of model transformations to automatically generate both the data under analysis (deployed via data-warehousing technology) and the analysis models for data mining (tailored to a specific platform). Thus, analysts can concentrate on understanding the analysis problem via conceptual data-mining models instead of wasting effort on low-level programming tasks related to the underlying platform's technical details. These time-consuming tasks are now entrusted to the model-transformation scaffolding. The feasibility of our approach is shown by means of a hypothetical data-mining scenario where a time-series analysis is required.
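As a toy sketch of the model-transformation idea (not the authors' actual framework), the snippet below turns a platform-independent conceptual mining model into two platform-specific artefacts: a SQL view feeding the analysis and an invocation string for a hypothetical mining engine. All names and target syntaxes are illustrative assumptions.

# Platform-independent conceptual mining model (a plain dict here).
conceptual_model = {
    "task": "time_series_analysis",
    "source_table": "sales_facts",
    "target_attribute": "monthly_revenue",
    "algorithm": "linear_regression",
}

def to_sql_view(model):
    """Generate the data-warehouse view that feeds the analysis."""
    return (f"CREATE VIEW mining_input AS "
            f"SELECT * FROM {model['source_table']};")

def to_platform_spec(model):
    """Generate a platform-specific invocation for the chosen algorithm."""
    algos = {"linear_regression": "functions.LinearRegression"}
    return (f"run {algos[model['algorithm']]} "
            f"-t mining_input -c {model['target_attribute']}")

print(to_sql_view(conceptual_model))
print(to_platform_spec(conceptual_model))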
Open business intelligence: on the importance of data quality awareness in user-friendly data mining
Abstract:
Citizens demand more and more data for making decisions in their daily life. Therefore, mechanisms that allow citizens to understand and analyze linked open data (LOD) in a user-friendly manner are highly desirable. To this aim, the concept of Open Business Intelligence (OpenBI) is introduced in this position paper. OpenBI enables non-expert users to (i) analyze and visualize LOD, thus generating actionable information by means of reporting, OLAP analysis, dashboards or data mining; and to (ii) share the newly acquired information as LOD to be reused by anyone. One of the most challenging issues of OpenBI is related to data mining, since non-experts (such as citizens) need guidance during the preprocessing and application of mining algorithms, due to the complexity of the mining process and the low quality of the data sources. This is even worse when dealing with LOD, not only because of the different kinds of links among data, but also because of its high dimensionality. As a consequence, in this position paper we advocate that data mining for OpenBI requires data quality-aware mechanisms for guiding non-expert users in obtaining and sharing the most reliable knowledge from the available LOD.
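A minimal sketch of what such quality-aware guidance could look like: before mining, score each attribute of a (hypothetical) LOD extract for completeness and flag the ones a non-expert should not rely on. The records and the 80% threshold are illustrative assumptions.

# Hypothetical records extracted from a LOD source; None marks missing values.
records = [
    {"city": "Alicante", "population": 337000, "budget": None},
    {"city": "Elche",    "population": 230000, "budget": 250.4},
    {"city": "Orihuela", "population": None,   "budget": None},
]

def completeness(records):
    """Fraction of non-missing values per attribute."""
    keys = records[0].keys()
    return {k: sum(r[k] is not None for r in records) / len(records)
            for k in keys}

for attr, score in completeness(records).items():
    flag = "ok" if score >= 0.8 else "low quality - needs review"
    print(f"{attr}: {score:.0%} complete ({flag})")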