953 results for Event-driven Framework


Relevance:

40.00%

Publisher:

Abstract:

Sequences of timestamped events are currently being generated across nearly every domain of data analytics, from e-commerce web logging to the electronic health records used by doctors and medical researchers. Every day, this data type is reviewed by humans who apply statistical tests, hoping to learn everything they can about how these processes work, why they break, and how they can be improved. To further uncover why these processes work the way they do, researchers often compare two groups, or cohorts, of event sequences to find the differences and similarities between outcomes and processes. With temporal event sequence data, this task is complex because of the variety of ways single events and sequences of events can differ between the two cohorts of records: the structure of the event sequences (e.g., event order, co-occurring events, or frequencies of events), the attributes of the events and records (e.g., gender of a patient), or metrics about the timestamps themselves (e.g., duration of an event). Running statistical tests to cover all these cases and determining which results are significant becomes cumbersome. Current visual analytics tools for comparing groups of event sequences emphasize either a purely statistical or a purely visual approach to comparison. Visual analytics tools leverage humans' ability to easily spot patterns and anomalies they were not expecting, but are limited by uncertainty in the findings. Statistical tools emphasize finding significant differences in the data, but often require researchers to have a concrete question and do not facilitate more general exploration of the data. Combining visual analytics tools with statistical methods leverages the benefits of both approaches for quicker and easier insight discovery. Integrating statistics into a visualization tool presents many challenges on the frontend (e.g., displaying the results of many different metrics concisely) and on the backend (e.g., scalability challenges in running various metrics on multi-dimensional data at once). I begin by exploring the problem of comparing cohorts of event sequences and understanding the questions that analysts commonly ask in this task. From there, I demonstrate that combining automated statistics with an interactive user interface amplifies the benefits of both types of tools, thereby enabling analysts to conduct quicker and easier data exploration, hypothesis generation, and insight discovery. The direct contributions of this dissertation are: (1) a taxonomy of metrics for comparing cohorts of temporal event sequences, (2) a statistical framework for exploratory data analysis with a method I refer to as high-volume hypothesis testing (HVHT), (3) a family of visualizations and guidelines for interaction techniques that are useful for understanding and parsing the results, and (4) a user study, five long-term case studies, and five short-term case studies that demonstrate the utility and impact of these methods in various domains: four in the medical domain, one in web log analysis, two in education, and one each in social networks, sports analytics, and security. My dissertation contributes an understanding of how cohorts of temporal event sequences are commonly compared and the difficulties associated with applying and parsing the results of these metrics. It also contributes a set of visualizations, algorithms, and design guidelines for balancing automated statistics with user-driven analysis to guide users to significant, distinguishing features between cohorts.
This work opens avenues for future research in comparing two or more groups of temporal event sequences, opening traditional machine learning and data mining techniques to user interaction, and extending the principles found in this dissertation to data types beyond temporal event sequences.
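
The HVHT idea above is, at its core, a multiple-comparison problem: many metrics, many tests, and a need to control how many reported differences are false positives. As a minimal, hypothetical sketch of that loop (the metric names, record format, Mann-Whitney test choice, and Benjamini-Hochberg correction are illustrative assumptions, not the dissertation's actual implementation):

```python
# Hypothetical sketch of high-volume hypothesis testing (HVHT):
# run one test per metric across two cohorts, then control the
# false discovery rate so analysts only review surviving results.
from scipy.stats import mannwhitneyu
from statsmodels.stats.multitest import multipletests

def hvht(cohort_a, cohort_b, metrics, alpha=0.05):
    """cohort_a, cohort_b: lists of records; metrics: name -> function
    mapping one record to a number (e.g., event count, duration)."""
    names, pvals = [], []
    for name, metric in metrics.items():
        xs = [metric(r) for r in cohort_a]
        ys = [metric(r) for r in cohort_b]
        _, p = mannwhitneyu(xs, ys, alternative="two-sided")
        names.append(name)
        pvals.append(p)
    # Benjamini-Hochberg keeps the expected fraction of false
    # positives among the reported metrics below alpha.
    reject, p_adj, _, _ = multipletests(pvals, alpha=alpha, method="fdr_bh")
    return [(n, p) for n, p, r in zip(names, p_adj, reject) if r]

# Assumed record shape: {"events": [(timestamp, label), ...]}.
metrics = {
    "sequence length": lambda r: len(r["events"]),
    "total duration": lambda r: r["events"][-1][0] - r["events"][0][0],
    "emergency visits": lambda r: sum(1 for _, e in r["events"] if e == "ER"),
}
```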

Relevance:

30.00%

Publisher:

Abstract:

The interplay between the biocolloidal characteristics (especially size and charge), pH, salt concentration and the thermal energy results in a unique collection of mesoscopic forces of importance to the molecular organization and function in biological systems. By means of Monte Carlo simulations and semi-quantitative analysis in terms of perturbation theory, we describe a general electrostatic mechanism that gives rise to attraction at low electrolyte concentrations. This charge regulation mechanism, due to titrating amino acid residues, is discussed in a purely electrostatic framework. The complexation data reported here for the interaction between a polyelectrolyte chain and the proteins albumin, goat and bovine alpha-lactalbumin, beta-lactoglobulin, insulin, k-casein, lysozyme and pectin methylesterase illustrate the importance of the charge regulation mechanism. Special attention is given to pH ≈ pI, where ion-dipole and charge regulation interactions could overcome the repulsive ion-ion interaction. By means of protein mutations, we confirm the importance of the charge regulation mechanism, and quantify when the complexation is dominated either by charge regulation or by the ion-dipole term.
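
To make the titration move behind this charge regulation mechanism concrete, here is a minimal constant-pH Metropolis sketch (the pKa value, the energy bookkeeping in units of kT, and the function names are illustrative assumptions, not the paper's simulation code):

```python
# Minimal constant-pH Metropolis sketch for charge regulation:
# titratable sites flip protonation state, so the net protein
# charge fluctuates with pH and with the local electrostatics.
import math, random

LN10 = math.log(10.0)

def trial_flip(protonated, pka, ph, d_elec_kt):
    """Metropolis acceptance for one titration move.
    d_elec_kt: change in electrostatic energy (units of kT) that the
    flip would cause, e.g. from interactions with a polyelectrolyte."""
    if protonated:            # attempt to release the proton
        d_ideal = LN10 * (pka - ph)
    else:                     # attempt to bind a proton
        d_ideal = LN10 * (ph - pka)
    d_total = d_elec_kt + d_ideal
    return d_total <= 0 or random.random() < math.exp(-d_total)

# Example: a lone aspartate-like site (pKa ~ 4.0) at pH 7 with no
# neighbours deprotonates essentially always, as expected.
accepted = sum(trial_flip(True, 4.0, 7.0, 0.0) for _ in range(10000))
print(accepted / 10000)   # prints 1.0: deprotonation always favourable here
```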

Relevance:

30.00%

Publisher:

Abstract:

Functional magnetic resonance imaging (FMRI) analysis methods can be quite generally divided into hypothesis-driven and data-driven approaches. The former are utilised in the majority of FMRI studies, where a specific haemodynamic response is modelled utilising knowledge of event timing during the scan and is tested against the data using a t-test or a correlation analysis. These approaches often lack the flexibility to account for variability in the haemodynamic response across subjects and brain regions, which is of specific interest in high temporal resolution event-related studies. Current data-driven approaches attempt to identify components of interest in the data, but currently do not utilise any physiological information for the discrimination of these components. Here we present a hypothesis-driven approach that is an extension of Friman's maximum correlation modelling method (NeuroImage 16, 454-464, 2002), specifically focused on discriminating the temporal characteristics of event-related haemodynamic activity. Test analyses, on both simulated and real event-related FMRI data, will be presented.
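
As a hedged illustration of the hypothesis-driven baseline described here (modelling a haemodynamic response from known event timing and testing it against the data), the following sketch builds an HRF-convolved regressor and runs a per-voxel correlation analysis; the double-gamma HRF shape and its parameters are common defaults assumed for illustration, not Friman's method itself:

```python
# Sketch of the hypothesis-driven baseline: convolve known event
# timings with a canonical haemodynamic response function (HRF)
# and test each voxel's time series against the resulting regressor.
import numpy as np
from scipy.stats import gamma, pearsonr

def canonical_hrf(tr, length=32.0):
    """Common double-gamma HRF (peak ~5 s, undershoot ~15 s);
    parameters are widespread defaults, assumed for illustration."""
    t = np.arange(0.0, length, tr)
    return gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0

def event_regressor(onsets_s, n_scans, tr):
    """Stick function at the event onsets, convolved with the HRF."""
    stick = np.zeros(n_scans)
    for onset in onsets_s:                 # onsets assumed within the scan
        stick[int(round(onset / tr))] = 1.0
    return np.convolve(stick, canonical_hrf(tr))[:n_scans]

def voxel_test(voxel_ts, regressor):
    """Correlation analysis for one voxel: effect size and p-value."""
    return pearsonr(voxel_ts, regressor)

reg = event_regressor([10.0, 40.0, 70.0], n_scans=100, tr=2.0)
noisy_voxel = reg + 0.5 * np.random.randn(100)   # synthetic active voxel
print(voxel_test(noisy_voxel, reg))
```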

Relevance:

30.00%

Publisher:

Abstract:

Master's degree in Electrical Engineering – Electrical Power Systems

Relevance:

30.00%

Publisher:

Abstract:

Modern factories are complex systems where advances in networking and information technologies are opening new ways towards higher efficiency. This move is being driven by market rules, with ever-increasing competition levels, in search of faster time-to-market, improved process yield, non-stop operations, flexible manufacturing and tighter supply-chain coupling. All these aims share a common requirement, i.e., a real-time flow of information, from the plant floor up to management, maintenance, suppliers and clients, to support accurate monitoring and control of the factory. This stresses the importance achieved by the communication infrastructure in modern manufacturing industry. This paper presents the authors' view of the current trends in modern factory communication systems. It addresses the problems of seamlessly integrating different information flows with diverse requirements, mainly in terms of timeliness. In this respect, the debate between event-triggered and time-triggered communication is revisited, as well as the joint support for both types of traffic. Finally, a view of where factory communication systems are moving to is also presented, showing the impact of open and widely available technologies.
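
The event-triggered versus time-triggered debate the paper revisits can be made concrete with a toy scheduler in which periodic control traffic owns fixed slots of a cyclic schedule while event traffic fills the free slots; the slot table and message names below are invented for illustration:

```python
# Toy contrast between the two traffic paradigms: time-triggered
# messages are sent in pre-assigned slots of a cyclic schedule,
# while event-triggered messages queue up as events occur and are
# served best-effort in the slots nobody owns (joint support).
from collections import deque

SLOT_TABLE = {0: "torque_setpoint", 1: "spindle_speed", 2: None, 3: None}
CYCLE_SLOTS = len(SLOT_TABLE)

event_queue = deque()   # event-triggered messages wait here

def on_event(msg):
    event_queue.append(msg)   # e.g., an alarm or an operator command

def slot_tick(slot_index):
    """Called once per communication slot by the network driver."""
    owner = SLOT_TABLE[slot_index % CYCLE_SLOTS]
    if owner is not None:
        send(owner)                     # guaranteed periodic traffic
    elif event_queue:
        send(event_queue.popleft())     # best-effort event traffic

def send(msg):
    print("TX:", msg)

for tick in range(8):                   # two elementary cycles
    if tick == 1:
        on_event("tool_breakage_alarm")
    slot_tick(tick)
```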

Relevance:

30.00%

Publisher:

Abstract:

Workflows have been successfully applied to express the decomposition of complex scientific applications. This has motivated many initiatives to develop scientific workflow tools. However, the existing tools still lack adequate support for important aspects, namely: decoupling the enactment engine from the workflow task specification, decentralizing the control of workflow activities, and allowing their tasks to run autonomously in distributed infrastructures, for instance on Clouds. Furthermore, many workflow tools only support the execution of Directed Acyclic Graphs (DAGs), without the concept of iteration, in which activities are executed over millions of iterations during long periods of time, and without support for dynamic workflow reconfiguration after a certain iteration. We present the AWARD (Autonomic Workflow Activities Reconfigurable and Dynamic) model of computation, based on the Process Networks model, where the workflow activities (AWAs) are autonomic processes with independent control that can run in parallel on distributed infrastructures, e.g., on Clouds. Each AWA executes a Task developed as a Java class that implements a generic interface, allowing end-users to code their applications without concern for low-level details. The data-driven coordination of AWA interactions is based on a shared tuple space that also enables support for dynamic workflow reconfiguration and monitoring of the execution of workflows. We describe how AWARD supports dynamic reconfiguration and discuss typical workflow reconfiguration scenarios. For evaluation, we describe experimental results of AWARD workflow executions in several application scenarios, mapped to a small dedicated cluster and the Amazon Elastic Compute Cloud (EC2).
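
As a rough illustration of this data-driven, tuple-space coordination (AWARD tasks are actually Java classes implementing a generic interface; this Python sketch with invented tags only mimics the take/compute/put cycle of an AWA):

```python
# Python sketch of the tuple-space coordination idea: each autonomic
# activity blocks on its input tuples, runs its task, and emits
# output tuples, so activities run as independent concurrent
# processes. Tags, tasks, and the queue-per-tag space are invented.
import queue
import threading

tuple_space = {tag: queue.Queue() for tag in ("raw", "scaled", "done")}

def awa(name, in_tag, out_tag, task):
    """One Autonomic Workflow Activity (AWA): take -> compute -> put."""
    def loop():
        while True:
            payload = tuple_space[in_tag].get()       # blocking 'take'
            tuple_space[out_tag].put(task(payload))   # data-driven 'put'
    threading.Thread(target=loop, daemon=True, name=name).start()

# A two-activity pipeline running concurrently.
awa("scale", "raw", "scaled", lambda x: 2 * x)
awa("report", "scaled", "done", lambda x: x)

tuple_space["raw"].put(21)                    # inject one input tuple
print("result:", tuple_space["done"].get())   # prints: result: 42
```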

Relevance:

30.00%

Publisher:

Abstract:

Workflows have been successfully applied to express the decomposition of complex scientific applications. However, the existing tools still lack adequate support for important aspects, namely: decoupling the enactment engine from the task specification, decentralizing the control of workflow activities so that their tasks can run in distributed infrastructures, and supporting dynamic workflow reconfiguration. We present the AWARD (Autonomic Workflow Activities Reconfigurable and Dynamic) model of computation, based on Process Networks, where the workflow activities (AWAs) are autonomic processes with independent control that can run in parallel on distributed infrastructures. Each AWA executes a task developed as a Java class with a generic interface, allowing end-users to code their applications without low-level details. The data-driven coordination of AWA interactions is based on a shared tuple space that also enables dynamic workflow reconfiguration. For evaluation, we describe experimental results of AWARD workflow executions in several application scenarios, mapped to the Amazon Elastic Compute Cloud (EC2).

Relevance:

30.00%

Publisher:

Abstract:

Currently, there is a growing need for software tailored to the customer, able to adapt quickly to the constant changes in the customer's business area. Each customer has concrete problems to solve, and often cannot afford to dedicate a large amount of resources to achieving the intended goals. To address these problems, several software development architectures and methodologies have emerged that enable the agile development of highly configurable applications, which can be customised by any of their users. This dynamism, brought to applications in the form of models that are customised by users and interpreted by a generic platform, creates greater challenges when it comes to testing, since there is a considerably larger number of variables than in an application with a traditional architecture. It is necessary, at all times, to guarantee the integrity of all models, as well as of the platform responsible for their interpretation, without requiring the constant development of applications to support tests over the different models. This thesis focuses on an application, the myMIS platform, which interprets management-oriented models written in a domain-specific language; it assesses the current state of the platform and defines a proposal of testing practices to apply in its development. The proposal resulting from this thesis showed that, despite the difficulties inherent to the application's architecture, developing tests in a generic way is possible, and the same test logic can be used to test several distinct models.
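
As a hypothetical sketch of the generic testing approach the thesis argues for, the same test logic below runs unchanged over several distinct models; load_model, the model files, and the invariants are assumptions for illustration, not the myMIS API:

```python
# Hypothetical sketch of the generic testing idea: one parametrised
# test runs the same invariants over every model the platform
# interprets, instead of writing one test application per model.
from types import SimpleNamespace

import pytest

MODEL_FILES = ["invoicing.model", "inventory.model", "payroll.model"]

def load_model(path):
    """Stub standing in for the platform's interpreter (assumed API);
    in a real setup this would hand the file to the interpreter."""
    field = SimpleNamespace(name="id")
    entity = SimpleNamespace(name=path.split(".")[0], fields=[field])
    return SimpleNamespace(entities=[entity])

@pytest.mark.parametrize("model_file", MODEL_FILES)
def test_model_integrity(model_file):
    model = load_model(model_file)
    # Generic invariants any interpreted model should satisfy:
    assert model.entities, "a model must declare at least one entity"
    for entity in model.entities:
        assert entity.name, "every entity must be named"
        assert entity.fields, "every entity must declare fields"
```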

Relevance:

30.00%

Publisher:

Abstract:

Dissertation presented at the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa in fulfilment of the requirements for the Master's degree in Electrical and Computer Engineering

Relevance:

30.00%

Publisher:

Abstract:

Dissertation presented at the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa in fulfilment of the requirements for the Master's degree in Electrical and Computer Engineering

Relevance:

30.00%

Publisher:

Abstract:

Research project submitted in partial fulfilment of the requirements for the Master's degree in Statistics and Information Management

Relevance:

30.00%

Publisher:

Abstract:

The reported productivity gains from using models and model transformations to develop entire systems, after almost a decade of experience applying model-driven approaches to system development, are by now undeniable benefits of this approach. However, the slowness of higher-level, rule-based model transformation languages hinders the applicability of this approach at industrial scale. Lower-level, efficient languages can be used instead, but then productivity and easy maintenance cease to exist. The abstraction penalty problem is not new; it also exists for high-level, object-oriented languages, yet everyone is using those now. Why, then, is not everyone using rule-based model transformation languages? In this thesis, we propose a framework, comprising a language and its respective environment, designed to tackle the most performance-critical operation of high-level model transformation languages: pattern matching. This framework shows that it is possible to mitigate the performance penalty while still using high-level model transformation languages.
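
A hedged illustration of why pattern matching dominates the cost of rule-based transformations, and of the classic mitigation of indexing model elements by type so each rule scans only plausible candidates (the model representation and the rule are invented for illustration, not this thesis's language):

```python
# Naive matching scans every element for every rule, which is where
# rule-based transformation engines spend most of their time; a type
# index restricts each rule to elements its pattern could match.
from collections import defaultdict

model = [
    {"type": "Class", "name": "Order", "abstract": False},
    {"type": "Class", "name": "Item", "abstract": True},
    {"type": "Attribute", "name": "total", "owner": "Order"},
]

def match_naive(model, element_type, predicate):
    # O(rules * elements): every rule walks the whole model.
    return [e for e in model if e["type"] == element_type and predicate(e)]

def build_type_index(model):
    # One pass builds the index; afterwards each rule only touches
    # elements of the type its pattern mentions.
    index = defaultdict(list)
    for e in model:
        index[e["type"]].append(e)
    return index

def match_indexed(index, element_type, predicate):
    return [e for e in index[element_type] if predicate(e)]

index = build_type_index(model)
# Pattern: concrete classes (a candidate source pattern for a rule).
print(match_indexed(index, "Class", lambda e: not e["abstract"]))
```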

Relevance:

30.00%

Publisher:

Abstract:

Cloud computing has been one of the most important topics in Information Technology, aiming to assure scalable and reliable on-demand services over the Internet. Expanding the application scope of cloud services would require cooperation between clouds from different providers that have heterogeneous functionalities. This collaboration between different cloud vendors can provide better Quality of Service (QoS) at a lower price. However, current cloud systems have been developed without concern for seamless cloud interconnection, and in fact they do not support the intercloud interoperability that would enable collaboration between cloud service providers. Hence, this PhD work is motivated to address the interoperability issue between cloud providers as a challenging research objective. This thesis proposes a new framework that supports inter-cloud interoperability in a heterogeneous computing-resource cloud environment, with the goal of dispatching the workload to the most effective clouds available at runtime. Analysing the different methodologies that have been applied to resolve various problem scenarios related to interoperability led us to exploit Model Driven Architecture (MDA) and Service Oriented Architecture (SOA) methods as appropriate approaches for our inter-cloud framework. Moreover, since distributing operations in a cloud-based environment is a nondeterministic polynomial time (NP-complete) problem, a Genetic Algorithm (GA) based job scheduler is proposed as part of the interoperability framework, offering workload migration with the best performance at the least cost. A new Agent-Based Simulation (ABS) approach is proposed to model the inter-cloud environment with three types of agents: Cloud Subscriber agents, Cloud Provider agents, and Job agents. The ABS model is used to evaluate the proposed framework.
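
As an illustrative sketch of a GA-based scheduler in this spirit (the clouds, jobs, fitness weights, and GA parameters below are all invented values, not the thesis's scheduler):

```python
# Hedged GA sketch: a chromosome assigns each job to a cloud, and
# fitness trades off makespan (performance) against monetary cost.
import random

CLOUDS = [  # (speed in work-units/s, price per work-unit) - invented
    {"speed": 4.0, "price": 0.09},
    {"speed": 2.0, "price": 0.03},
    {"speed": 1.0, "price": 0.01},
]
JOBS = [3.0, 8.0, 2.0, 5.0, 1.0, 7.0]   # work units per job - invented

def fitness(chrom, w_time=1.0, w_cost=10.0):
    load = [0.0] * len(CLOUDS)
    cost = 0.0
    for job, c in zip(JOBS, chrom):
        load[c] += job / CLOUDS[c]["speed"]
        cost += job * CLOUDS[c]["price"]
    return -(w_time * max(load) + w_cost * cost)   # higher is better

def evolve(pop_size=40, generations=200, mut=0.1):
    pop = [[random.randrange(len(CLOUDS)) for _ in JOBS]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]               # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(JOBS))
            child = a[:cut] + b[cut:]                # one-point crossover
            if random.random() < mut:                # point mutation
                child[random.randrange(len(JOBS))] = random.randrange(len(CLOUDS))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

print(evolve())   # e.g. [1, 2, 0, ...]: chosen cloud index per job
```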

Relevance:

30.00%

Publisher:

Abstract:

The year is 2015, and the startup and tech business ecosphere has never seen more activity. In New York City alone, the tech startup industry is on track to amass $8 billion in total funding – the highest in 7 years (CB Insights, 2015). According to the Kauffman Index of Entrepreneurship (2015), this figure represents just 20% of the total funding in the United States. Thanks to platforms that link entrepreneurs with investors, there are simply more funding opportunities than ever, and funding can be initiated in a variety of ways (angel investors, venture capital firms, crowdfunding). And yet, in spite of all this, according to Forbes Magazine (2015), nine of ten startups will fail. Because of the unpredictable nature of the modern tech industry, it is difficult to pinpoint exactly why 90% of startups fail – but the general consensus amongst top tech executives is that “startups make products that no one wants” (Fortune, 2014). In 2011, author Eric Ries wrote a book called The Lean Startup in an attempt to solve this all-too-familiar problem. It was in this book that he developed the framework for the Hypothesis-Driven Entrepreneurship Process, an iterative process that aims to prove a market before actually launching a product. Ries discusses concepts such as the Minimum Viable Product, the smallest set of activities necessary to disprove a hypothesis (or business model characteristic). Ries encourages acting quickly and often: if you are going to fail, then fail fast. In today’s fast-moving economy, an entrepreneur cannot afford to waste his own time, nor his customer’s time. The purpose of this thesis is to conduct an in-depth analysis of the Hypothesis-Driven Entrepreneurship Process, in order to test the market viability of a real-life startup idea, ShowMeAround. This analysis follows the scientific Lean Startup approach, with the purpose of developing a functional business model and business plan. The objective is to conclude with an investment-ready startup idea, backed by rigorous entrepreneurial study.

Relevance:

30.00%

Publisher:

Abstract:

This project is based on the theme of capacity-building in social organisations to improve their impact readiness, that is, the predictability with which they deliver their intended outcomes. All organisations with a social mission, non-profit or for-profit, are considered to fall within the social sector for the purpose of this work. The thesis will look at (i) what impact readiness is and what the considerations are for building impact readiness in social organisations, (ii) what the international benchmark is for measuring and building impact readiness, (iii) understanding the impact readiness of Portuguese social organisations and the current supply of capacity building for social impact in Portugal, and (iv) providing recommendations on the design of a framework for capacity building for impact readiness adapted to the Portuguese context. This work is of particular relevance to the Social Investment Laboratory, a sponsor of this project, in its policy work as part of the Portuguese Social Investment Taskforce (the “Taskforce”). This in turn will inform its contribution to the set-up of Portugal Inovação Social, a wholesale catalyst entity for social innovation and social investment in the country, launched in early 2015. Whilst the output of this work will be a set of recommendations for wider application to capacity-building programmes in Portugal, Portugal Inovação Social will also clearly have a role in coordinating the efforts of market players – foundations, corporations, the public sector and social organisations – in implementing these recommendations. In addition, the findings of this report could be relevant to other countries seeking to design capacity-building frameworks for their local markets, and to any impact-driven organisations with an interest in enhancing the delivery of impact within their work.