855 results for Scenario Programming, Markup Language, End User Programming


Relevance: 100.00%

Publisher:

Abstract:

The Internet is evolving towards what is known as the Live Web. In this new stage of its evolution, a multitude of social data streams is put at the service of users. Thanks to these data sources, users have gone from browsing static web pages to interacting with applications that offer personalized content based on their preferences. Each user interacts daily with multiple applications that issue notifications and alerts; in this sense every user is a source of events, and users often feel overwhelmed, unable to process all of that information on demand. To cope with this overload, a variety of tools have appeared that automate the most common tasks, ranging from inbox managers and social-network alert managers to complex CRMs or smart-home hubs. The drawback is that, although they solve common problems, they cannot adapt to the needs of each user by offering a personalized solution. Task Automation Services (TAS) entered the scene from 2012 onwards to address this limitation. Given their similarity, these services are also regarded as a new, user-centered approach to mash-up technology. Users of these platforms can interconnect services, sensors and other Internet-connected devices, designing the automations that fit their needs. The proposal has been widely accepted by users, which has led a multitude of platforms offering TAS to enter the scene. As this is a new field of research, this thesis presents the main characteristics of TAS, describes their components, and identifies the fundamental dimensions that define them and allow their classification. This work coins the term Task Automation Service (TAS), giving a formal description of these services and their components (called channels), and provides a reference architecture. Likewise, there is a lack of tools for describing automation services and automation rules. In this respect, this thesis proposes a common model, realized as the EWE ontology (Evented WEb ontology). This model makes it possible to compare and align channels and automations from different TASs, a considerable contribution to the portability of user automations between platforms. Moreover, given the semantic nature of the model, automations can include elements from external sources over which to reason, such as Linked Open Data. Using this model, a dataset of channels and automations has been generated from data obtained from some of the TASs on the market. As a final step towards a common model for describing TAS, an algorithm has been developed to learn ontologies automatically from the data in the dataset. This favors the discovery of new channels and reduces the maintenance cost of the model, which is updated semi-automatically.

In conclusion, the main contributions of this thesis are: i) describing the state of the art in task automation and coining the term Task Automation Service; ii) developing an ontology for modeling TAS components and automations; iii) populating a dataset of channel and automation data, used to develop an automatic ontology-learning algorithm; and iv) designing an agent architecture to assist users in creating automations.

ABSTRACT

The new stage in the evolution of the Web (the Live Web or Evented Web) puts a wealth of social data streams at the service of users, who no longer browse static web pages but interact with applications that present them with contextual and relevant experiences. Given that each user is a potential source of events, a typical user often gets overwhelmed. To deal with that huge amount of data, multiple automation tools have emerged, ranging from simple social media managers or notification aggregators to complex CRMs or smart-home hubs and apps. As a downside, they cannot be tailored to the needs of every single user. As a natural response to this downside, Task Automation Services appeared on the Internet. They may be seen as a new model of mash-up technology for combining social streams, services and connected devices from an end-user perspective: end-users are empowered to connect those streams however they want, designing the automations they need. Such platforms multiplied quickly early on, and the number of platforms following this approach is still growing fast. Being a novel field, this thesis aims to shed light on it, presenting and exemplifying the main characteristics of Task Automation Services, describing their components, and identifying several dimensions along which to classify them. This thesis coins the term Task Automation Service (TAS) by providing a formal definition of these services and their components (called channels), as well as a TAS reference architecture. There is also a lack of tools for describing automation services and automation rules. In this regard, this thesis proposes a theoretical common model of TAS and formalizes it as the EWE ontology. This model makes it possible to compare channels and automations from different TASs, which has a high impact on interoperability, and enhances automations by providing a mechanism to reason over external sources such as Linked Open Data. Based on this model, a dataset of TAS components was built by harvesting data from the websites of actual TASs. Going a step further towards this common model, an algorithm for categorizing the components was designed, enabling their discovery across different TASs. Thus, the main contributions of the thesis are: i) surveying the state of the art on task automation and coining the term Task Automation Service; ii) providing a semantic common model for describing TAS components and automations; iii) populating a categorized dataset of TAS components, used to learn ontologies of particular domains from the TAS perspective; and iv) designing an agent architecture for assisting users in setting up automations that is aware of their context and acts accordingly.
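To make the channel-and-rule idea concrete, below is a minimal sketch (in Python with rdflib) of how an automation such as "when it starts raining, send me a notification" could be described as RDF in the spirit of the EWE ontology. The namespace IRI and the class/property names (ewe:Channel, ewe:Event, ewe:Action, ewe:Rule, ewe:triggeredBy, ewe:firesAction) are illustrative assumptions chosen for readability, not the published EWE vocabulary.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

# Hypothetical namespaces -- the published EWE ontology IRI may differ.
EWE = Namespace("http://example.org/ewe#")
EX = Namespace("http://example.org/tas/")

g = Graph()
g.bind("ewe", EWE)
g.bind("ex", EX)

# A weather channel exposing an event, and a messaging channel exposing an action.
g.add((EX.Weather, RDF.type, EWE.Channel))
g.add((EX.RainDetected, RDF.type, EWE.Event))
g.add((EX.RainDetected, RDFS.label, Literal("It starts raining")))
g.add((EX.Messenger, RDF.type, EWE.Channel))
g.add((EX.SendNotification, RDF.type, EWE.Action))

# A user-defined automation wiring the event to the action.
g.add((EX.RainAlert, RDF.type, EWE.Rule))
g.add((EX.RainAlert, EWE.triggeredBy, EX.RainDetected))
g.add((EX.RainAlert, EWE.firesAction, EX.SendNotification))

print(g.serialize(format="turtle"))
```

Once channels and rules from different Task Automation Services are expressed in one such graph, they can be compared, queried, and enriched with Linked Open Data, which is the portability argument made above.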

Relevance: 100.00%

Publisher:

Abstract:

Nowadays, Information Technology (IT) systems deliver a wide variety of services with increasingly specific requirements that must be met to guarantee their operation. To meet these guarantees, several standards and frameworks, also called recommendations or good-practice guides, have been published; they allow the service provider to verify that such requirements are met and to offer, both to the end customer and to the organisation itself, a quality service that generates value for all parties. By applying these recommendations and standards, companies and public administrations guarantee that the telematic services they offer meet quality standards, thus giving the end user a stable platform appropriate to the service provided. These standards and frameworks address both the IT service itself and the system that manages the IT service, so that, through proper operation of the management system, the service can be continually improved. We focus in particular on the most widely used standard, ISO 20000, and on the best-practice framework ITIL 2011, in order to give a clear view of the different processes, activities and functions that both define to generate value in the company through IT systems. To aid the understanding of both ITIL and ISO 20000, an example implementation has been developed, first of ITIL and then of ISO 20000, on a service that is already in operation, defining, for each of the five phases of the life cycle used by both the standard and the framework, the processes and functions required for its implementation and subsequent review.

ABSTRACT

Today, Information Technology (IT) service management supports a wide range of services with increasingly specific requirements that must be met to ensure their correct operation. To meet these requirements, different standards and frameworks, also called recommendations or good-practice guides, have been released. They allow the service provider to verify that such requirements are met and to offer, both to the end customer and to the organisation managing the system, a quality service that generates value for both parties. By implementing these recommendations and standards, companies and public authorities ensure that the telematic services they offer meet the quality standards they seek, giving the end user a stable and appropriate service platform. These standards and frameworks apply both to the IT service itself and to the management of that service, so that, through proper operation of the parts involved in the management process, a better service can be offered. In particular, we focus on the most widely used standard, ISO 20000, and on the best-practice framework ITIL 2011, in order to give a clear overview of the different processes, activities and functions that both define to create value in the company through IT service management. To help the understanding of both ITIL and ISO 20000, an example implementation of ITIL and then of ISO 20000 has been developed on a service that is already in operation, defining, for each of the five phases of the life cycle used by both the standard and the framework, the processes and functions necessary for its implementation and later review.

Relevance: 100.00%

Publisher:

Abstract:

ALFRED (the ALelle FREquency Database) is designed to store and disseminate frequencies of alleles at human polymorphic sites for multiple populations, primarily for the population genetics and molecular anthropology communities. Currently ALFRED has information on over 180 polymorphic sites for more than 70 populations. Since our initial release of the database we have focussed on increasing the quantity and quality of data, making reciprocal links between ALFRED and other related databases, and providing useful tools to make the data more comprehensible to the end user. ALFRED is accessible from the Kidd Lab home page (http://info.med.yale.edu/genetics/kkidd/) or from ALFRED directly (http://alfred.med.yale.edu/alfred/index.asp).

Relevance: 100.00%

Publisher:

Abstract:

The relationship between the development of mediated online literature searching and the recruitment of medical librarians to fill positions as online searchers was investigated. The history of database searching by medical librarians was outlined and a content analysis of thirty-five years of job advertisements in MLA News from 1961 through 1996 was summarized. Advertisements for online searchers were examined to test the hypothesis that the growth of mediated online searching was reflected in the recruitment of librarians to fill positions as mediated online searchers in medical libraries. The advent of end-user searching was also traced to determine how this trend affected the demand for mediated online searching and job availability of online searchers. Job advertisements were analyzed to determine what skills were in demand as end-user searching replaced mediated online searching as the norm in medical libraries. Finally, the trend away from mediated online searching to support of other library services was placed in the context of new roles for medical librarians.

Relevance: 100.00%

Publisher:

Abstract:

Mobile phone applications that collect personal data are increasingly present in the routine of the ordinary citizen. Associated with these applications are controversies about security risks and invasion of privacy, which can become obstacles to user acceptance of such systems. On the other hand, there is the so-called Privacy Paradox, whereby consumers voluntarily reveal more personal information even though they state that they recognise the risks. There is little consensus in academic research about the reasons for this paradox, or even about whether the phenomenon really exists. The aim of this research is to analyse how the collection of sensitive information influences the choice of mobile applications. The methodology is the study of applications available in mobile app stores using qualitative and quantitative techniques. The results indicate that the store's most popular products are the ones that collect the most personal data. However, a closer analysis shows that the most sought-after applications also belong to companies with a good reputation and offer more functionality, which requires greater access to the phone's private data. In the survey carried out next, it is observed that consumers reduce their use of applications when they consider that the product collects data excessively, but the strategy for protecting this information can vary. Among users of applications that collect data excessively, the main reason for sharing personal information is functionality. In addition, this research confirms that comparing the data requested by an application with the consumer's initial expectation is a complementary construct for assessing privacy concerns, rather than simply analysing the amount of information collected. The research process also showed that, depending on the method used for the analysis, it is possible to reach opposite conclusions about whether or not the paradox occurs, which may hint at the reasons for the lack of consensus on the subject.

Relevance: 100.00%

Publisher:

Abstract:

Subsidence related to multiple natural and human-induced processes affects an increasing number of areas worldwide. Although this phenomenon may involve surface deformation with 3D displacement components, negative vertical movement, either progressive or episodic, tends to dominate. Over the last decades, differential SAR interferometry (DInSAR) has become a very useful remote sensing tool for accurately measuring the spatial and temporal evolution of surface displacements over broad areas. This work discusses the main advantages and limitations of addressing active subsidence phenomena by means of DInSAR techniques from an end-user point of view. Special attention is paid to the spatial and temporal resolution, the precision of the measurements, and the usefulness of the data. The presented analysis focuses on the exploitation of DInSAR results for various ground subsidence phenomena (groundwater withdrawal, soil compaction, mining subsidence, evaporite dissolution subsidence, and volcanic deformation) with different displacement patterns in a selection of subsidence areas in Spain. Finally, a cost comparative study is performed for the different techniques applied.
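For readers unfamiliar with the technique, DInSAR measures a phase difference between radar acquisitions; under the usual simplifying assumption that orbital, topographic and atmospheric contributions have already been removed, the remaining differential phase maps to line-of-sight displacement as

\[ d_{\mathrm{LOS}} = \frac{\lambda}{4\pi}\,\Delta\varphi_{\mathrm{disp}} \]

where \(\lambda\) is the radar wavelength and \(\Delta\varphi_{\mathrm{disp}}\) is the displacement-related interferometric phase (sign conventions differ between processing chains). This is why the technique primarily constrains the line-of-sight component of the 3D displacement field mentioned above.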

Relevance: 100.00%

Publisher:

Abstract:

Rock mass classification systems are widely used tools for assessing the stability of rock slopes. Their calculation requires the prior quantification of several parameters during conventional fieldwork campaigns, such as the orientation of the discontinuity sets, the main properties of the existing discontinuities and the geo-mechanical characterization of the intact rock mass, which can be a time-consuming and often risky task. Conversely, the use of relatively new remote sensing data for modelling the rock mass surface by means of 3D point clouds is changing the current investigation strategies in different rock slope engineering applications. In this paper, the main practical issues affecting the application of Slope Mass Rating (SMR) for the characterization of rock slopes from 3D point clouds are reviewed, using three case studies from an end-user point of view. To this end, the SMR adjustment factors, calculated from different sources of information and processing workflows using different software packages, are compared with those calculated using conventional fieldwork data. In the presented analysis, special attention is paid to the differences between the SMR indexes derived from the 3D point cloud and conventional fieldwork approaches, the main factors that determine the quality of the data and some recognized practical issues. Finally, the reliability of Slope Mass Rating for the characterization of rock slopes is highlighted.
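For context, the Slope Mass Rating is commonly written, following Romana's classical formulation, as

\[ \mathrm{SMR} = \mathrm{RMR}_{b} + (F_{1} \cdot F_{2} \cdot F_{3}) + F_{4} \]

where \(\mathrm{RMR}_{b}\) is the basic Rock Mass Rating, \(F_{1}\), \(F_{2}\) and \(F_{3}\) are the adjustment factors governed by the relative orientation of the discontinuities and the slope (the quantities that can be extracted from 3D point clouds), and \(F_{4}\) accounts for the excavation method. The exact factor ratings should be taken from the cited methodology; the formula is reproduced here only to show where the point-cloud-derived orientations enter the index.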

Relevance: 100.00%

Publisher:

Abstract:

Online geographic information systems provide the means to extract a subset of desired spatial information from a larger remote repository. Data retrieved representing real-world geographic phenomena are then manipulated to suit the specific needs of an end-user. Often this extraction requires the derivation of representations of objects specific to a particular resolution or scale from a single original stored version. Currently, standard spatial data handling techniques cannot support the multi-resolution representation of such features in a database. In this paper a methodology to store and retrieve versions of spatial objects at different resolutions with respect to scale, using standard database primitives and SQL, is presented. The technique involves heavy fragmentation of spatial features, which allows dynamic simplification into scale-specific object representations customised to the display resolution of the end-user's device. Experimental results comparing the new approach to traditional R-Tree indexing and external object simplification reveal that the former performs notably better for mobile and WWW applications where client-side resources are limited and retrieved data loads are kept relatively small.
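As a rough illustration of the idea (not the schema or queries used in the paper), the sketch below fragments a feature into vertices tagged with a pre-computed detail level, so that a single SQL query dynamically assembles the scale-specific representation appropriate to the client's display resolution. Table, column and function names are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE vertex_fragment (
    feature_id   INTEGER,
    seq          INTEGER,   -- vertex order along the original geometry
    x REAL, y REAL,
    detail_level INTEGER    -- 0 = coarse outline, higher = finer detail
);
CREATE INDEX idx_frag ON vertex_fragment (feature_id, detail_level, seq);
""")

# One polyline fragmented into vertices with pre-computed detail levels.
conn.executemany(
    "INSERT INTO vertex_fragment VALUES (?, ?, ?, ?, ?)",
    [
        (1, 0, 0.0, 0.0, 0),
        (1, 1, 1.0, 0.2, 2),   # only needed on high-resolution displays
        (1, 2, 2.0, 0.1, 1),
        (1, 3, 3.0, 0.0, 0),
    ],
)

def fetch_for_display(feature_id: int, max_detail: int):
    """Dynamically assemble the version of the feature suited to the client's resolution."""
    cur = conn.execute(
        """SELECT x, y FROM vertex_fragment
           WHERE feature_id = ? AND detail_level <= ?
           ORDER BY seq""",
        (feature_id, max_detail),
    )
    return cur.fetchall()

print(fetch_for_display(1, max_detail=0))  # small mobile screen: coarse outline only
print(fetch_for_display(1, max_detail=2))  # desktop browser: full detail
```

Because the simplification happens in the WHERE clause, only the vertices actually needed are transferred to the client, which is the property that matters for the mobile and WWW scenarios compared against R-Tree indexing and external simplification.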

Relevance: 100.00%

Publisher:

Abstract:

It was hypothesized that employees' perceptions of an organizational culture strong in human relations values and open systems values would be associated with heightened levels of readiness for change which, in turn, would be predictive of change implementation success. Similarly, it was predicted that reshaping capabilities would lead to change implementation success, via its effects on employees' perceptions of readiness for change. Using a temporal research design, these propositions were tested for 67 employees working in a state government department who were about to undergo the implementation of a new end-user computing system in their workplace. Change implementation success was operationalized as user satisfaction and system usage. There was evidence to suggest that employees who perceived strong human relations values in their division at Time 1 reported higher levels of readiness for change at pre-implementation which, in turn, predicted system usage at Time 2. In addition, readiness for change mediated the relationship between reshaping capabilities and system usage. Analyses also revealed that pre-implementation levels of readiness for change exerted a positive main effect on employees' satisfaction with the system's accuracy, user friendliness, and formatting functions at post-implementation. These findings are discussed in terms of their theoretical contribution to the readiness for change literature, and in relation to the practical importance of developing positive change attitudes among employees if change initiatives are to be successful.

Relevance: 100.00%

Publisher:

Abstract:

The schema of an information system can significantly impact the ability of end users to efficiently and effectively retrieve the information they need. Obtaining quickly the appropriate data increases the likelihood that an organization will make good decisions and respond adeptly to challenges. This research presents and validates a methodology for evaluating, ex ante, the relative desirability of alternative instantiations of a model of data. In contrast to prior research, each instantiation is based on a different formal theory. This research theorizes that the instantiation that yields the lowest weighted average query complexity for a representative sample of information requests is the most desirable instantiation for end-user queries. The theory was validated by an experiment that compared end-user performance using an instantiation of a data structure based on the relational model of data with performance using the corresponding instantiation of the data structure based on the object-relational model of data. Complexity was measured using three different Halstead metrics: program length, difficulty, and effort. For a representative sample of queries, the average complexity using each instantiation was calculated. As theorized, end users querying the instantiation with the lower average complexity made fewer semantic errors, i.e., were more effective at composing queries. (c) 2005 Elsevier B.V. All rights reserved.
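For reference, the three Halstead metrics mentioned above are simple functions of operator and operand counts; the sketch below shows the standard definitions. What counts as an operator or operand in a query language is a methodological choice made in the study and is not reproduced here.

```python
import math

def halstead(n1: int, n2: int, N1: int, N2: int):
    """Standard Halstead metrics from distinct/total operator and operand counts.

    n1, n2: number of distinct operators / distinct operands
    N1, N2: total occurrences of operators / operands
    """
    vocabulary = n1 + n2
    length = N1 + N2                      # program length
    volume = length * math.log2(vocabulary)
    difficulty = (n1 / 2) * (N2 / n2)     # difficulty
    effort = difficulty * volume          # effort
    return {"length": length, "volume": volume,
            "difficulty": difficulty, "effort": effort}

# Toy example: a short query with 5 distinct operators used 7 times
# and 4 distinct operands used 6 times.
print(halstead(n1=5, n2=4, N1=7, N2=6))
```

Averaging such values over the representative sample of information requests yields the weighted average query complexity that the theory compares across instantiations.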

Relevance: 100.00%

Publisher:

Abstract:

A recent focus on intermediary compensation underscores the need to organize the many complex incentives used by channel practitioners. Employing a grounded theory methodology, a channel incentives classification scheme is induced from 170 unique channel incentives used in 59 high technology suppliers’ channel programs. The incentives are organized into 16 subcategories and 5 major categories: Credible Channel Policies, Market Development Support, Supplemental Contact, High-Powered Incentives, and End-User Encouragements. Each incentive subcategory is discussed as a means of controlling reseller behaviors. Also, the conditions that give rise to the implementation of incentives are investigated through four testable research propositions.

Relevance: 100.00%

Publisher:

Abstract:

This thesis provides an interoperable language for quantifying uncertainty using probability theory. A general introduction to interoperability and uncertainty is given, with particular emphasis on the geospatial domain. Existing interoperable standards used within the geospatial sciences are reviewed, including Geography Markup Language (GML), Observations and Measurements (O&M) and the Web Processing Service (WPS) specifications. The importance of uncertainty in geospatial data is identified and probability theory is examined as a mechanism for quantifying these uncertainties. The Uncertainty Markup Language (UncertML) is presented as a solution to the lack of an interoperable standard for quantifying uncertainty. UncertML is capable of describing uncertainty using statistics, probability distributions or a series of realisations. The capabilities of UncertML are demonstrated through a series of XML examples. This thesis then provides a series of example use cases where UncertML is integrated with existing standards in a variety of applications. The Sensor Observation Service - a service for querying and retrieving sensor-observed data - is extended to provide a standardised method for quantifying the inherent uncertainties in sensor observations. The INTAMAP project demonstrates how UncertML can be used to aid uncertainty propagation using a WPS by allowing UncertML as input and output data. The flexibility of UncertML is demonstrated with an extension to the GML geometry schemas to allow positional uncertainty to be quantified. Further applications and developments of UncertML are discussed.
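To give a flavour of what a quantified uncertainty looks like when exchanged between services, the sketch below emits a small XML fragment describing a Gaussian-distributed quantity. It is illustrative only: the namespace URI, element names and structure are assumptions for this example and are not copied from the UncertML schema.

```python
import xml.etree.ElementTree as ET

UN = "http://example.org/uncertml"   # placeholder namespace, not the real one
ET.register_namespace("un", UN)

def gaussian_xml(mean: float, variance: float) -> str:
    """Build an illustrative UncertML-style description of a Gaussian uncertainty."""
    dist = ET.Element(f"{{{UN}}}GaussianDistribution")
    ET.SubElement(dist, f"{{{UN}}}mean").text = str(mean)
    ET.SubElement(dist, f"{{{UN}}}variance").text = str(variance)
    return ET.tostring(dist, encoding="unicode")

# A sensor observation of 21.3 degrees C with an uncertainty variance of 0.25
print(gaussian_xml(21.3, 0.25))
```

A fragment of this kind, carried alongside an observed value in a Sensor Observation Service response or passed through a WPS, is what lets downstream processes propagate uncertainty rather than discard it.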

Relevance: 100.00%

Publisher:

Abstract:

Most object-based approaches to Geographical Information Systems (GIS) have concentrated on the representation of geometric properties of objects in terms of fixed geometry. In our road traffic marking application domain we have a requirement to represent the static locations of the road markings but also to enforce the associated regulations, which are typically geometric in nature. For example, a give-way line of a pedestrian crossing in the UK must be within 1100-3000 mm of the edge of the crossing pattern. In previous studies of the application of spatial rules (often called 'business logic') in GIS, emphasis has been placed on the representation of topological constraints and data integrity checks. There is very little GIS literature that describes models for geometric rules, although there are some examples in the Computer Aided Design (CAD) literature. This paper introduces some of the ideas from so-called variational CAD models to the GIS application domain, and extends these using a Geography Markup Language (GML) based representation. In our application we have an additional requirement: the geometric rules are often changed and vary from country to country, so they should be represented in a flexible manner. In this paper we describe an elegant solution to the representation of geometric rules, such as requiring lines to be offset from other objects. The method uses a feature-property model embraced in GML 3.1 and extends the possible relationships in feature collections to permit the application of parameterized geometric constraints to sub-features. We show the parametric rule model we have developed and discuss the advantage of using simple parametric expressions in the rule base. We discuss the possibilities and limitations of our approach and relate our data model to GML 3.1. © 2006 Springer-Verlag Berlin Heidelberg.
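The sketch below shows, in plain Python rather than GML, the kind of parameterized geometric constraint being described: the rule carries its own parameters (here the 1100-3000 mm give-way offset band cited above) and is evaluated against a measured geometry. The class and field names are invented for this illustration; the paper's actual representation is a GML 3.1 feature-property structure.

```python
from dataclasses import dataclass

@dataclass
class OffsetRule:
    """A parameterized constraint: one feature must lie within [min_mm, max_mm] of another."""
    subject: str      # e.g. "give_way_line"
    reference: str    # e.g. "crossing_pattern_edge"
    min_mm: float
    max_mm: float

    def check(self, measured_offset_mm: float) -> bool:
        return self.min_mm <= measured_offset_mm <= self.max_mm

# UK pedestrian-crossing regulation cited in the paper: 1100-3000 mm.
uk_give_way = OffsetRule("give_way_line", "crossing_pattern_edge", 1100.0, 3000.0)

print(uk_give_way.check(2400.0))  # True  -> marking complies
print(uk_give_way.check(950.0))   # False -> flagged for correction
```

Because the thresholds are data rather than code, the same rule can be re-parameterized per country, which is the flexibility requirement the GML-based model is designed to meet.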

Relevance: 100.00%

Publisher:

Abstract:

Relationships between organizations can be characterized by cooperation, conflict, and change. In this dissertation we study cooperation between organizations by investigating how norms in relationships can enhance innovativeness and subsequently impact relationship performance. We do so by incorporating both the beneficial aspects of long-term relationships and the "dark side" factors that may decrease innovativeness. This provides a balanced assessment of the factors increasing and decreasing the performance of relationships. Next, we study conflict between organizations by taking a network view on conflict, which helps explain why organizations react to conflict. We find stakeholders to have an effect on channel conflict responsiveness. Finally, we study change by means of an organization's ability to successfully add an Internet channel to its distribution system in order to sell its products or services directly to the end-user. We find that an Internet channel is best implemented by organizations that are flexible, and we identify several circumstances under which this flexibility is highest.

Relevance: 100.00%

Publisher:

Abstract:

The INTAMAP FP6 project has developed an interoperable framework for real-time automatic mapping of critical environmental variables by extending spatial statistical methods and employing open, web-based data exchange protocols and visualisation tools. This paper gives an overview of the underlying problem and of the project, and discusses which problems it has solved and which open problems seem most relevant to address next. The interpolation problem that INTAMAP solves is the generic problem of spatial interpolation of environmental variables without user interaction, based on measurements of, e.g., PM10, rainfall or gamma dose rate, at arbitrary locations or over a regular grid covering the area of interest. It deals with problems of varying spatial resolution of measurements, the interpolation of averages over larger areas, and with providing information on the interpolation error to the end-user. In addition, monitoring network optimisation is addressed in a non-automatic context.
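To give a concrete picture of the interpolation task INTAMAP automates, the sketch below interpolates scattered measurements onto a regular grid and returns a per-cell error estimate, using a scikit-learn Gaussian process purely as a generic stand-in for the project's geostatistical methods. It is not the INTAMAP implementation; the real system additionally handles variogram estimation, anisotropy, extreme-value cases and the web-service plumbing automatically.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Scattered measurements, e.g. gamma dose rate at monitoring stations (x, y, value).
rng = np.random.default_rng(0)
stations = rng.uniform(0, 100, size=(30, 2))
values = np.sin(stations[:, 0] / 20) + 0.05 * rng.standard_normal(30)

# Fit a smooth spatial model with a noise term (a kriging-like interpolator).
gp = GaussianProcessRegressor(kernel=RBF(length_scale=20.0) + WhiteKernel(1e-3),
                              normalize_y=True)
gp.fit(stations, values)

# Regular grid covering the area of interest.
gx, gy = np.meshgrid(np.linspace(0, 100, 50), np.linspace(0, 100, 50))
grid = np.column_stack([gx.ravel(), gy.ravel()])

# Prediction plus standard error: the 'interpolation error' reported to the end-user.
pred, std = gp.predict(grid, return_std=True)
print(pred.reshape(gx.shape).shape, std.max())
```

The paired prediction and standard deviation are exactly the two layers an end-user-facing client would render: the interpolated map and its interpolation error.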