932 results for systems modeling


Relevance: 60.00%

Abstract:

While existing multi-biometric Dempster-Shafer theory fusion approaches have demonstrated promising performance, they do not model uncertainty appropriately, suggesting that further improvement can be achieved. This research seeks to develop a unified framework for multimodal biometric fusion that takes advantage of the uncertainty concept of Dempster-Shafer theory, improving the performance of multi-biometric authentication systems. Modeling uncertainty as a function of the uncertainty factors affecting the recognition performance of the biometric systems helps to address the uncertainty of the data and the confidence of the fusion outcome. A weighted combination of quality measures and classifier performance (Equal Error Rate) is proposed to encode the uncertainty concept and improve the fusion. We also found that quality measures contribute unequally to recognition performance; thus, selecting only significant factors and fusing them with a Dempster-Shafer approach to generate an overall quality score plays an important role in the success of uncertainty modeling. The proposed approach achieved competitive performance (approximately 1% EER) in comparison with other Dempster-Shafer based approaches and other conventional fusion approaches.
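The uncertainty concept referred to in this abstract can be illustrated with Dempster's rule of combination over a binary frame of discernment, where mass assigned to the full frame encodes uncertainty. A minimal sketch — the matcher names and mass values below are hypothetical, not the paper's model:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions with Dempster's rule.

    Masses are dicts mapping frozenset hypotheses to belief mass;
    mass on the full frame (e.g. {genuine, impostor}) encodes uncertainty.
    """
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb  # mass assigned to disjoint hypotheses
    k = 1.0 - conflict  # normalize by the non-conflicting mass
    return {h: m / k for h, m in combined.items()}

G, I = frozenset({'genuine'}), frozenset({'impostor'})
U = G | I  # full frame: "don't know"
face = {G: 0.7, I: 0.1, U: 0.2}    # hypothetical face-matcher masses
voice = {G: 0.6, I: 0.2, U: 0.2}   # hypothetical voice-matcher masses
fused = dempster_combine(face, voice)
```

Fusing two matchers that both lean toward "genuine" raises the fused belief in "genuine" above either matcher alone, while shrinking the residual uncertainty mass.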

Relevance: 60.00%

Abstract:

Today’s information systems log vast amounts of data. These collections of data (implicitly) describe events (e.g. placing an order or taking a blood test) and, hence, provide information on the actual execution of business processes. The analysis of such data provides an excellent starting point for business process improvement. This is the realm of process mining, an area which has provided a repertoire of many analysis techniques. Despite the impressive capabilities of existing process mining algorithms, dealing with the abundance of data recorded by contemporary systems and devices remains a challenge. Of particular importance is the capability to guide the meaningful interpretation of “oceans of data” by process analysts. To this end, insights from the field of visual analytics can be leveraged. This article proposes an approach where process states are reconstructed from event logs and visualised in succession, leading to an animated history of a process. This approach is customisable in how a process state, partially defined through a collection of activity instances, is visualised: one can select a map and specify a projection of events on this map based on the properties of the events. This paper describes a comprehensive implementation of the proposal. It was realised using the open-source process mining framework ProM. Moreover, this paper also reports on an evaluation of the approach conducted with Suncorp, one of Australia’s largest insurance companies.
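The reconstruction of process states from an event log can be sketched simply: an activity instance belongs to the state at time t if it has started but not yet completed, and replaying states in succession yields the animated history. A minimal illustration — the `Event` fields and the sample log are hypothetical, not ProM's data model:

```python
from dataclasses import dataclass

@dataclass
class Event:
    case: str       # process instance identifier
    activity: str   # e.g. "place order", "take blood test"
    time: int       # timestamp (e.g. epoch seconds)
    kind: str       # 'start' or 'complete'

def state_at(log, t):
    """Activity instances active at time t: started but not completed."""
    active = set()
    for e in sorted(log, key=lambda e: e.time):
        if e.time > t:
            break
        key = (e.case, e.activity)
        if e.kind == 'start':
            active.add(key)
        else:
            active.discard(key)
    return active

log = [Event('c1', 'order', 0, 'start'), Event('c1', 'order', 5, 'complete'),
       Event('c1', 'ship', 6, 'start'), Event('c2', 'order', 3, 'start')]
# rendering state_at(log, t) for successive t animates the process history
```

Each reconstructed state can then be projected onto a map based on event properties, as the approach describes.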

Relevance: 60.00%

Abstract:

PURPOSE: This paper describes dynamic agent composition, used to support the development of flexible and extensible large-scale agent-based models (ABMs). This approach was motivated by a need to extend and modify, with ease, an ABM with an underlying networked structure as more information becomes available. Flexibility was also sought so that simulations can be set up with ease, without the need to program. METHODS: The dynamic agent composition approach consists of having agents, whose implementation has been broken into atomic units, come together at runtime to form the complex system representation on which simulations are run. These components capture information at a fine level of detail and provide a vast range of combinations and options for a modeller to create ABMs. RESULTS: A description of dynamic agent composition is given in this paper, as well as details of its implementation within MODAM (MODular Agent-based Model), a software framework applied to the planning of the electricity distribution network. Illustrations of the implementation of dynamic agent composition are given for that domain throughout the paper. It is, however, expected that this approach will be beneficial to other problem domains, especially those with a networked structure, such as water or gas networks. CONCLUSIONS: Dynamic agent composition has many advantages over the way agent-based models are traditionally built, for users, for developers, and for agent-based modelling as a scientific approach. Developers can extend the model without the need to access or modify previously written code; they can develop groups of entities independently and add them to those already defined to extend the model. Users can mix-and-match already implemented components to form large-scale ABMs, allowing them to quickly set up simulations and easily compare scenarios without the need to program. Dynamic agent composition provides a natural simulation space over which ABMs of networked structures are represented, facilitating their implementation; verification and validation of models is also facilitated by quickly setting up alternative simulations.
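The runtime-composition idea can be sketched with a component registry: atomic units register themselves, and agents are composed from a configuration at runtime, so new components extend the model without modifying existing code. A hypothetical illustration — the component names and behaviours are invented, not MODAM's API:

```python
# Component registry: newly written components register themselves and
# become available for composition without touching existing code.
REGISTRY = {}

def component(name):
    def deco(cls):
        REGISTRY[name] = cls
        return cls
    return deco

@component('battery')
class Battery:
    def step(self, agent):  # hypothetical behaviour: drain charge
        agent['charge'] = agent.get('charge', 1.0) - 0.1

@component('solar_panel')
class SolarPanel:
    def step(self, agent):  # hypothetical behaviour: add charge
        agent['charge'] = agent.get('charge', 1.0) + 0.05

def compose_agent(config):
    """Build an agent at runtime from a list of component names."""
    return {'components': [REGISTRY[n]() for n in config]}

def simulate(agent, steps):
    for _ in range(steps):
        for c in agent['components']:
            c.step(agent)
    return agent

# a modeller mixes and matches components, no programming required
household = compose_agent(['battery', 'solar_panel'])
```

Adding, say, an electric-vehicle component later would only require registering a new class; the composition configuration changes, not the framework code.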

Relevance: 60.00%

Abstract:

Since their inception in 1962, Petri nets have been used in a wide variety of application domains. Although Petri nets are graphical and easy to understand, they have formal semantics and allow for analysis techniques ranging from model checking and structural analysis to process mining and performance analysis. Over time Petri nets emerged as a solid foundation for Business Process Management (BPM) research. The BPM discipline develops methods, techniques, and tools to support the design, enactment, management, and analysis of operational business processes. Mainstream business process modeling notations and workflow management systems are using token-based semantics borrowed from Petri nets. Moreover, state-of-the-art BPM analysis techniques are using Petri nets as an internal representation. Users of BPM methods and tools are often not aware of this. This paper aims to unveil the seminal role of Petri nets in BPM.
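The token-based semantics borrowed from Petri nets can be shown in a few lines: a transition is enabled when each of its input places holds enough tokens, and firing consumes tokens from the inputs and produces them on the outputs. A minimal sketch — the example net is hypothetical:

```python
def enabled(marking, transition):
    """A transition is enabled iff every input place holds enough tokens."""
    pre, _ = transition
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(marking, transition):
    """Firing consumes tokens from input places, produces on output places."""
    pre, post = transition
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m

# hypothetical workflow step: "check stock" consumes a token from place
# 'received' and produces one on 'checked' (token-game as in workflow nets)
t_check = ({'received': 1}, {'checked': 1})
m0 = {'received': 1}
m1 = fire(m0, t_check) if enabled(m0, t_check) else m0
```

This token game is exactly what mainstream process modeling notations and workflow engines reuse as their execution semantics.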

Relevance: 60.00%

Abstract:

GIS are becoming increasingly popular, mainly through the Internet. So-called Web GIS, however, when developed with traditional web technologies, exhibit the same weaknesses as those technologies, namely synchronicity and poor user interaction. The technologies used for Rich Internet Applications (RIA) are an alternative that solves these problems. This dissertation demonstrates the feasibility of using them for the development of Web GIS, offering a set of code and strategies for future development, based on a basic set of operations to be performed in a Web GIS. Additionally, UWE-R is proposed: an extension of an existing web engineering methodology for modeling RIA and Web GIS.

Relevance: 60.00%

Abstract:

We propose simple models to predict the performance degradation of disk requests due to storage device contention in consolidated virtualized environments. Model parameters can be deduced from measurements obtained inside Virtual Machines (VMs) from a system where a single VM accesses a remote storage server. The parameterized model can then be used to predict the effect of storage contention when multiple VMs are consolidated on the same server. We first propose a trace-driven approach that evaluates a queueing network with fair share scheduling using simulation. The model parameters consider Virtual Machine Monitor level disk access optimizations and rely on a calibration technique. We further present a measurement-based approach that allows a distinct characterization of read/write performance attributes. In particular, we define simple linear prediction models for I/O request mean response times, throughputs and read/write mixes, as well as a simulation model for predicting response time distributions. We found our models to be effective in predicting such quantities across a range of synthetic and emulated application workloads. 
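The linear prediction idea can be illustrated with an ordinary least-squares fit of mean I/O response time against the number of consolidated VMs, then extrapolating to a larger consolidation. The measurements below are invented for the sketch; they are not the paper's data or its exact model:

```python
# Hypothetical measurements: mean I/O response time (ms) observed inside
# a VM as the number of VMs sharing the storage server grows.
vms = [1, 2, 3, 4]
resp_ms = [2.1, 3.9, 6.2, 8.0]

def fit_line(xs, ys):
    """Ordinary least-squares fit y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

a, b = fit_line(vms, resp_ms)
predicted_5_vms = a * 5 + b  # predict contention at a 5-VM consolidation
```

The same fitting step could be repeated per attribute (throughput, read/write mix) once read and write behaviour are characterized separately, as the abstract describes.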

Relevance: 60.00%

Abstract:

Thesis (Master's)--University of Washington, 2016-03

Relevance: 60.00%

Abstract:

Horticultural science linked with basic studies in biology, chemistry, physics and engineering has laid the foundation for advances in applied knowledge which are at the heart of commercial, environmental and social horticulture. In few disciplines is science more rapidly translated into applicable technologies than in the huge range of man’s activities embraced within horticulture which are discussed in this Trilogy. This chapter surveys the origins of horticultural science developing as an integral part of the 16th century “Scientific Revolution”. It identifies early discoveries during the latter part of the 19th and early 20th centuries which rationalized the control of plant growth, flowering and fruiting and the media in which crops could be cultivated. The products of these discoveries formed the basis on which huge current industries of worldwide significance are founded in fruit, vegetable and ornamental production. More recent examples of the application of horticultural science are used in an explanation of how the integration of plant breeding, crop selection and astute marketing highlighted by the New Zealand industry have retained and expanded the viability of production which supplies huge volumes of fruit into the world’s markets. This is followed by an examination of science applied to tissue and cell culture as an example of technologies which have already produced massive industrial applications but hold the prospect for generating even greater advances in the future. Finally, examples are given of nascent scientific discoveries which hold the prospect for generating horticultural industries with considerable future impact. These include systems modeling and biology, nanotechnology, robotics, automation and electronics, genetics and plant breeding, and more efficient and effective use of resources and the employment of benign microbes. In conclusion there is an estimation of the value of horticultural science to society.

Relevance: 60.00%

Abstract:

The objective of this research is to model and analyze candidate hull configurations for a low-cost, modular, autonomous underwater robot. As the computational power and speed of microprocessors continue to progress, we are seeing a growth in the research, development, and utilization of underwater robots. The number of applications is broadening in the R&D and science communities, especially in the area of multiple, collaborative robots. These underwater collaborative robots represent an instantiation of a System of Systems (SoS). As each new researcher explores a unique application, control method, etc., a new underwater robot vehicle is designed, developed, and deployed. This sometimes leads to one-off designs that are costly. One limit to the wide-scale utilization of underwater robotics is the cost of development. Another limit is the ability to modify the configuration for new applications and evolving requirements. Consequently, we are exploring autonomous underwater vehicle (AUV) hull designs towards the goal of modularity, vehicle dexterity, and minimizing cost. In our analysis, we have employed 3D solid modeling tools and finite element methods. In this paper we present our initial results and discuss ongoing work.

Relevance: 60.00%

Abstract:

Although cluster environments have an enormous potential processing power, real applications that take advantage of this power remain an elusive goal. This is due, in part, to the lack of understanding about the characteristics of the applications best suited for these environments. This paper focuses on Master/Slave applications for large heterogeneous clusters. It defines application, cluster and execution models to derive an analytic expression for the execution time. It defines speedup and derives speedup bounds based on the inherent parallelism of the application and the aggregated computing power of the cluster. The paper derives an analytical expression for efficiency and uses it to define scalability of the algorithm-cluster combination based on the isoefficiency metric. Furthermore, the paper establishes necessary and sufficient conditions for an algorithm-cluster combination to be scalable which are easy to verify and use in practice. Finally, it covers the impact of network contention as the number of processors grows. (C) 2007 Elsevier B.V. All rights reserved.
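The flavour of this analysis can be sketched with a toy master/slave cost model: compute time divides among the slaves while per-task dispatch overhead at the master is serialized, so efficiency falls as processors are added with fixed work. The cost model below is a deliberate simplification for illustration, not the paper's expression:

```python
def master_slave_time(work, overhead_per_task, tasks, p):
    """Simplified execution time on p slaves: the parallel work is divided
    among slaves, but the master serializes per-task dispatch overhead."""
    return work / p + overhead_per_task * tasks

def speedup(work, overhead_per_task, tasks, p):
    # serial time is just the total work
    return work / master_slave_time(work, overhead_per_task, tasks, p)

def efficiency(work, overhead_per_task, tasks, p):
    return speedup(work, overhead_per_task, tasks, p) / p

# With fixed work, efficiency degrades as p grows; the isoefficiency
# question is how fast 'work' must grow with p to hold efficiency constant.
e4 = efficiency(1000.0, 0.5, 10, 4)
e16 = efficiency(1000.0, 0.5, 10, 16)
```

Under this toy model, quadrupling the processor count with unchanged work visibly lowers efficiency, which is the behaviour the isoefficiency metric quantifies.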

Relevance: 60.00%

Abstract:

Distributed Generators (DG) are generally modeled as PQ or PV buses in power flow studies. But in order to integrate DG units into distribution systems and control the reactive power injection, it is necessary to know the operation mode and the type of connection to the system. This paper presents a single-phase and a three-phase mathematical model to integrate DG in power flow calculations in distribution systems, especially suited for Smart Grid calculations. If the DG is in PV mode, each step of the power flow algorithm calculates the reactive power injection from the DG to the system to keep the bus voltage at a predefined level; if the DG is in PQ mode, the power injection is considered as a negative load. The method is tested on two well-known test systems, presenting single-phase results on an 85-bus system and three-phase results on the IEEE 34-bus test system. © 2011 IEEE.
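The two operating modes can be sketched per power-flow iteration: a PQ-mode DG injects fixed P and Q (a negative load), while a PV-mode DG adjusts its reactive injection toward a voltage setpoint within its reactive limits. The proportional update and all values below are hypothetical illustrations, not the paper's method:

```python
def dg_injection(mode, p_dg, v_bus, v_set, q, q_min, q_max, gain=10.0):
    """One iteration's DG bus injection (simplified sketch, per-unit values).

    PQ mode: fixed P and Q, handled as a negative load.
    PV mode: steer reactive injection toward the voltage setpoint,
    clamped to the DG's reactive capability limits.
    """
    if mode == 'PQ':
        return p_dg, q
    # PV mode: hypothetical proportional correction of Q toward v_set
    q_new = q + gain * (v_set - v_bus)
    return p_dg, min(max(q_new, q_min), q_max)

# undervoltage at the DG bus: the PV-mode unit raises Q up to its limit
p_inj, q_inj = dg_injection('PV', 0.5, v_bus=0.97, v_set=1.0,
                            q=0.0, q_min=-0.3, q_max=0.3)
```

In a full power flow, this update would run inside each iteration until the bus voltage converges to the setpoint or the DG hits its reactive limit, at which point a real algorithm would typically revert the bus to PQ treatment.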

Relevance: 60.00%

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance: 60.00%

Abstract:

Graduate Program in Information Science (Pós-graduação em Ciência da Informação) - FFC