878 results for Just-in-time systems


Relevance: 100.00%

Publisher:

Abstract:

We introduce self-interested evolutionary market agents, which act on behalf of service providers in a large decentralised system, to adaptively price their resources over time. Our agents competitively co-evolve in the live market, driving it towards the Bertrand equilibrium, the non-cooperative Nash equilibrium at which all sellers charge their reserve price and share the market equally. We demonstrate that this outcome results in even load-balancing between the service providers. Our contribution in this paper is twofold: the use of on-line competitive co-evolution of self-interested service providers to drive a decentralised market towards equilibrium, and a demonstration that load-balancing behaviour emerges under the assumptions we describe. Unlike previous studies on this topic, all our agents are entirely self-interested; no cooperation is assumed. This makes our problem non-trivial and more realistic.
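As a minimal illustration of the market dynamic this abstract describes (not the authors' actual agent design), the sketch below simulates self-interested sellers who repeatedly undercut the cheapest competitor until every price reaches the common reserve price, the Bertrand outcome; all parameter values are hypothetical.

```python
import random

# Hypothetical sketch: greedy price competition driving all sellers to
# their reserve price (the Bertrand outcome). Not the paper's
# evolutionary agents; parameters are illustrative.
RESERVE = 1.0   # common reserve price (marginal cost)
STEP = 0.05     # smallest price decrement

prices = [random.uniform(2.0, 5.0) for _ in range(5)]

for _ in range(200):
    seller = random.randrange(len(prices))
    cheapest = min(prices)
    # Undercut the current market leader if still profitable.
    target = max(RESERVE, cheapest - STEP)
    if target < prices[seller]:
        prices[seller] = target

print(prices)  # all prices converge to ~RESERVE
```

Once every seller quotes the reserve price, buyers are indifferent between providers and demand spreads evenly, mirroring the load-balancing behaviour the paper reports.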

Relevance: 100.00%

Publisher:

Abstract:

In this paper, an agent-based approach is proposed for anomaly monitoring in distributed systems such as computer networks or Grid systems. The approach envisages both on-line and off-line monitoring of users' activity. On-line monitoring is carried out in real time and is used to predict user actions. Off-line monitoring is done after the user has finished working and is based on the analysis of statistical information gathered during the user's session. In both cases, neural networks are used to predict user actions and to distinguish normal from anomalous user behavior.
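A minimal sketch of the on-line idea, assuming scikit-learn is available: a small neural network predicts the next user action from a window of previous actions and flags an observed action as anomalous when the model assigned it low probability. The action log, window size, and threshold are all illustrative.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical sketch: predict the next user action from a sliding
# window of previous actions; flag an anomaly when the observed
# action was considered unlikely by the model.
rng = np.random.default_rng(0)
actions = rng.integers(0, 5, size=1000)   # stand-in for an action log

WINDOW = 3
X = np.array([actions[i:i + WINDOW] for i in range(len(actions) - WINDOW)])
y = actions[WINDOW:]

model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500)
model.fit(X, y)

def is_anomalous(window, observed, threshold=0.05):
    """Flag the observed action if the model gave it low probability."""
    proba = model.predict_proba(np.asarray(window).reshape(1, -1))[0]
    return proba[list(model.classes_).index(observed)] < threshold

print(is_anomalous(actions[:WINDOW], actions[WINDOW]))
```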

Relevance: 100.00%

Publisher:

Abstract:

This work introduces a Gaussian variational mean-field approximation for inference in dynamical systems that can be modeled by ordinary stochastic differential equations. This new approach allows one to express the variational free energy as a functional of the marginal moments of the approximating Gaussian process. Restricting the moment equations to piecewise polynomial functions of time dramatically reduces the complexity of approximate inference for stochastic differential equation models, making it comparable to that of discrete-time hidden Markov models. The algorithm is demonstrated on state and parameter estimation for nonlinear problems with up to 1000-dimensional state vectors, and its results are compared empirically with various well-known inference methodologies.
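For concreteness, here are the marginal moment equations that typically appear in this family of Gaussian process approximations (a sketch in standard notation, not necessarily the authors' exact formulation): if the approximating process is a linear SDE with drift $-A(t)x + b(t)$ and diffusion covariance $\Sigma$, its marginal mean $m(t)$ and covariance $S(t)$ satisfy

```latex
\dot{m}(t) = -A(t)\,m(t) + b(t), \qquad
\dot{S}(t) = -A(t)\,S(t) - S(t)\,A(t)^{\top} + \Sigma .
```

Restricting $m$ and $S$ (or equivalently $A$ and $b$) to piecewise polynomials in $t$ then turns the variational optimization over paths into a finite-dimensional problem, which is the complexity reduction the abstract refers to.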

Relevance: 100.00%

Publisher:

Abstract:

Real-time systems are usually modelled with timed automata, and real-time requirements relating to the state durations of the system are often specifiable using Linear Duration Invariants, a decidable subclass of Duration Calculus formulas. Various algorithms have been developed to check timed automata or real-time automata against linear duration invariants, but each needs complicated preprocessing and exponential computation, and to the best of our knowledge none has been implemented. In this paper, we present an approximate model checking technique based on a genetic algorithm to check real-time automata against linear duration invariants in reasonable time. A genetic algorithm is a good optimization method when a problem needs massive computation, and it works particularly well in our case because the fitness function, which is derived from the linear duration invariant, is linear. ACM Computing Classification System (1998): D.2.4, C.3.
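A minimal sketch of the genetic-algorithm idea under stated assumptions: candidate runs of a toy automaton are encoded as vectors of state durations, and the fitness is the linear expression from a hypothetical duration invariant c1*d1 + c2*d2 <= BOUND, so the GA climbs directly towards a violation. The automaton, coefficients, and bounds are invented for illustration.

```python
import random

# Hypothetical sketch: search over state-duration vectors of a toy
# real-time automaton for a run violating a linear duration
# invariant  c1*d1 + c2*d2 <= BOUND.
COEFFS, BOUND = [2.0, 3.0], 40.0
LO, HI = [1.0, 0.0], [10.0, 8.0]   # per-state duration ranges

def fitness(d):
    # Linear in d, so the GA climbs directly towards a violation.
    return sum(c * x for c, x in zip(COEFFS, d))

pop = [[random.uniform(l, h) for l, h in zip(LO, HI)] for _ in range(30)]
for _ in range(100):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]
    children = []
    for _ in range(20):
        a, b = random.sample(parents, 2)
        child = [(x + y) / 2 for x, y in zip(a, b)]       # crossover
        i = random.randrange(len(child))                  # mutation
        child[i] = min(HI[i], max(LO[i], child[i] + random.gauss(0, 1)))
        children.append(child)
    pop = parents + children

best = max(pop, key=fitness)
print(best, fitness(best), fitness(best) > BOUND)  # violation found?
```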

Relevance: 100.00%

Publisher:

Abstract:

"In this paper we extend the earlier treatment of out-of-equilibrium mesoscopic fluctuations in glassy systems in several significant ways. First, via extensive simulations, we demonstrate that models of glassy behavior without quenched disorder display scalings of the probability of local two-time correlators that are qualitatively similar to that of models with short-ranged quenched interactions. The key ingredient for such scaling properties is shown to be the development of a criticallike dynamical correlation length, and not other microscopic details. This robust data collapse may be described in terms of a time-evolving "extreme value" distribution. We develop a theory to describe both the form and evolution of these distributions based on a effective sigma model approach."

Relevance: 100.00%

Publisher:

Abstract:

Over the past few years, logging has evolved from simple printf statements to more complex and widely used logging libraries. Today, logging information is used to support various development activities such as fixing bugs, analyzing the results of load tests, monitoring performance, and transferring knowledge. Recent research has examined how to improve logging practices by informing developers what to log and where to log. Furthermore, the strong dependence on logging has led to the development of logging libraries that have reduced the intricacies of logging, which has resulted in an abundance of log information. Two recent challenges have emerged as modern software systems start to treat logging as a core aspect of their software. In particular, 1) infrastructural challenges have emerged due to the plethora of logging libraries available today, and 2) processing challenges have emerged due to the large number of log processing tools that ingest logs and produce useful information from them. In this thesis, we explore these two challenges. We first explore the infrastructural challenges that arise due to the plethora of logging libraries available today. As systems evolve, their logging infrastructure has to evolve (commonly by migrating to new logging libraries). We explore logging library migrations within Apache Software Foundation (ASF) projects and find that close to 14% of the projects within the ASF migrate their logging libraries at least once. For processing challenges, we explore the different factors that can affect the likelihood of a logging statement changing in the future in four open source systems, namely ActiveMQ, Camel, Cloudstack and Liferay. Such changes are likely to negatively impact the log processing tools that must be updated to accommodate them. We find that 20%-45% of the logging statements within the four systems are changed at least once. We construct random forest classifiers and Cox models to determine the likelihood of both just-introduced and long-lived logging statements changing in the future. We find that file ownership, developer experience, log density and SLOC are important factors in determining the stability of logging statements.
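A minimal sketch of the classification setup described above, assuming scikit-learn is available; the features follow the factors named in the thesis (file ownership, developer experience, log density, SLOC), but the data here is synthetic and the feature encodings are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical sketch: predict whether a logging statement will
# change, from the factors the thesis names. Synthetic data;
# feature encodings are illustrative.
rng = np.random.default_rng(1)
n = 2000
X = np.column_stack([
    rng.random(n),              # file ownership (fraction by top author)
    rng.integers(1, 200, n),    # developer experience (prior commits)
    rng.random(n),              # log density (log lines / SLOC)
    rng.integers(20, 5000, n),  # SLOC of the containing file
])
y = rng.integers(0, 2, n)       # 1 = statement changed later

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(clf.score(X_te, y_te))
print(clf.feature_importances_)  # which factors matter most
```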

Relevance: 100.00%

Publisher:

Abstract:

The paper investigates how Information Systems (IS) has emerged as the product of inter-disciplinary discourses. The research aim in this study is to better understand diversity in IS research, and the extent to which the diversity of discourse expanded and contracted from 1995 to 2011. Methodologically, we apply a combined citation/co-citation analysis based on the eight Association for Information Systems basket journals and the 22 subject-field classification framework provided by the Association of Business Schools. Our findings suggest that IS is in a state of continuous interaction and competition with other disciplines. General Management lost its dominant position as a reference discipline in IS to a growing variety of other discourses, including Business Strategy, Marketing, and Ethics and Governance, among others. Over time, IS as a field moved from the periphery to a central position during its discursive formation. This supports the notion of IS as a fluid discipline dynamically embracing a diverse range of adjacent reference disciplines, while keeping a degree of continuing interaction with them. Understanding where IS currently stands allows us to better understand and propose fruitful avenues for its development in both academia and practice.
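A minimal sketch of the co-citation side of the methodology: two works are co-cited whenever some paper's reference list contains both, and counting those pairs yields the co-citation network. The reference lists below are invented placeholders.

```python
from collections import Counter
from itertools import combinations

# Hypothetical sketch of co-citation counting: two works are
# co-cited whenever one paper's reference list contains both.
papers = [
    ["GeneralMgmt", "Marketing", "IS"],
    ["IS", "Ethics", "GeneralMgmt"],
    ["IS", "Marketing"],
]

cocitations = Counter()
for refs in papers:
    for a, b in combinations(sorted(set(refs)), 2):
        cocitations[(a, b)] += 1

print(cocitations.most_common(3))
```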

Relevance: 100.00%

Publisher:

Abstract:

Real-time condition monitoring of machinery is becoming increasingly important in industrial environments. Detecting faults early enables companies to maintain equipment precisely when needed and thus to reduce downtime. Such systems are also desirable in materials handling, but their integration is often difficult owing to the confined installation space. In the following, we present an assembly that can capture the friction coefficient and lubricant behaviour of materials handling systems in real time.

Relevance: 100.00%

Publisher:

Abstract:

Logistics concepts in the freight forwarding and transport industry play an increasingly important role in keeping non-wage labour costs as low as possible. In freight forwarding in particular, an interplay arises between the timely delivery of consumer goods and the optimisation of warehousing costs, in which the modular integration of logistics, information technology systems and networked communication plays a very specific role. The link between industry and customer is thereby continually strengthened, increasingly developing into a technology transfer towards Industrie 4.0. The German economy currently stands at the transition point to the virtual revolution of the industrial age.

Relevance: 100.00%

Publisher:

Abstract:

A new procedure was developed in this study, based on a system equipped with a cellulose membrane and a tetraethylenepentamine hexaacetate chelator (MD-TEPHA), for in situ characterization of the lability of metal species in aquatic systems. To this end, the MD-TEPHA system was prepared by adding the TEPHA chelator to cellulose bags pre-purified with 1.0 mol L-1 HCl and NaOH solutions. After the MD-TEPHA system was sealed, it was examined in the laboratory to evaluate the influence of complexation time (0-24 h), pH (3.0, 4.0, 5.0, 6.0 and 7.0), metal ions (Cu, Cd, Fe, Mn and Ni) and concentration of organic matter (15, 30 and 60 mg L-1) on the relative lability of metal species to the TEPHA chelator. The results showed that Fe and Cu were complexed more slowly by the TEPHA chelator in the MD-TEPHA system than were Cd, Ni and Mn at all pH values used. It was also found that pH strongly influences the process of metal complexation by the MD-TEPHA system. At all pH levels, Cd, Mn and Ni showed greater complexation with the TEPHA chelator (recovery of about 75-95%) than did Cu and Fe. Time also affects the lability of metal species complexed by aquatic humic substances (AHS): while Cd, Ni and Mn showed faster kinetics, reaching equilibrium after about 100 min, Cu and Fe approached equilibrium only after 400 min. Increasing the AHS concentration decreases the lability of metal species by shifting the equilibrium towards AHS-metal complexes. Our results indicate that the system under study offers an interesting alternative that can be applied in situ to differentiate labile and inert metal species in aquatic systems.
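The kinetic behaviour reported above (fast equilibration for Cd, Ni and Mn; slow for Cu and Fe) is often summarized with a first-order uptake model R(t) = R_max(1 - e^(-kt)). The sketch below fits that model with SciPy; the recovery data points are invented for illustration, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical sketch: a first-order uptake model commonly used to
# summarize complexation kinetics like those described above.
def recovery(t, r_max, k):
    return r_max * (1.0 - np.exp(-k * t))

t = np.array([10, 30, 60, 100, 200, 400], dtype=float)  # minutes
r_cd = np.array([40, 70, 85, 92, 94, 95], dtype=float)  # fast (Cd-like)
r_cu = np.array([10, 22, 38, 52, 70, 80], dtype=float)  # slow (Cu-like)

for label, r in [("fast", r_cd), ("slow", r_cu)]:
    (r_max, k), _ = curve_fit(recovery, t, r, p0=(90.0, 0.01))
    print(label, f"r_max={r_max:.1f}%", f"k={k:.3f}/min")
```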

Relevance: 100.00%

Publisher:

Abstract:

Dynamically reconfigurable hardware is a promising technology that combines in the same device both the high performance and the flexibility that many recent applications demand. However, one of its main drawbacks is the reconfiguration overhead, which introduces significant delays in task execution, usually on the order of hundreds of milliseconds, as well as high energy consumption. One of the most powerful ways to tackle this problem is configuration reuse, since reusing a task does not involve any reconfiguration overhead. In this paper we propose a configuration replacement policy for reconfigurable systems that maximizes task reuse in highly dynamic environments. We have integrated this policy in an external task-graph execution manager that applies task prefetch by loading and executing tasks as soon as possible (ASAP). However, we have also modified this ASAP technique to make the replacements more flexible, by taking into account the mobility of the tasks and delaying some of the reconfigurations. In addition, this replacement policy is a hybrid design-time/run-time approach that performs the bulk of the computations at design time in order to save run-time computation. Our results illustrate that the proposed strategy outperforms other state-of-the-art replacement policies in terms of reuse rates and achieves near-optimal reconfiguration overhead reductions. In addition, by performing the bulk of the computations at design time, we reduce the execution time of the replacement technique by a factor of 10 with respect to an equivalent purely run-time approach.
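A minimal sketch of a reuse-maximizing replacement choice of the kind described, under the assumption that the task schedule is known at design time: evict the loaded configuration whose next use lies farthest in the future (a Belady-style rule). This is an illustration, not the paper's exact policy.

```python
# Hypothetical sketch: with the task schedule known at design time,
# evict the loaded configuration whose next use is farthest away.
def next_use(task, schedule, now):
    for i in range(now, len(schedule)):
        if schedule[i] == task:
            return i
    return float("inf")  # never used again: best eviction victim

def run(schedule, slots):
    loaded, reconfigs = [], 0
    for now, task in enumerate(schedule):
        if task in loaded:
            continue                   # reuse: no reconfiguration
        reconfigs += 1
        if len(loaded) < slots:
            loaded.append(task)
        else:
            victim = max(loaded, key=lambda t: next_use(t, schedule, now + 1))
            loaded[loaded.index(victim)] = task
    return reconfigs

print(run(["A", "B", "C", "A", "B", "D", "A"], slots=2))  # 5 reconfigurations
```

Because each victim choice depends only on the schedule, these decisions could be precomputed at design time, which is in the spirit of the hybrid design-time/run-time split the abstract mentions.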

Relevance: 100.00%

Publisher:

Abstract:

Power efficiency is one of the most important constraints in the design of embedded systems, since such systems are generally driven by batteries with a limited energy budget or a restricted power supply. In every embedded system there are one or more processor cores that run the software and interact with the other hardware components of the system. The power consumption of the processor core(s) has an important impact on the total power dissipated in the system. Hence, processor power optimization is crucial for satisfying the power consumption constraints and developing low-power embedded systems. A key aspect of research in processor power optimization and management is power estimation. Having a fast and accurate method for processor power estimation at design time helps the designer to explore a large space of design possibilities and to make optimal choices for developing a power-efficient processor. Likewise, understanding the power dissipation behaviour of a specific software application is key to choosing appropriate algorithms in order to write power-efficient software. Simulation-based methods for measuring processor power achieve very high accuracy, but are available only late in the design process and are often quite slow. Therefore, the need has arisen for faster, higher-level power prediction methods that allow the system designer to explore many alternatives for developing power-efficient hardware and software.

The aim of this thesis is to present fast, high-level power models for the prediction of processor power consumption. Power predictability in this work is achieved in two ways: first, by using a design method to develop power-predictable circuits; second, by analysing the power of the functions in the code that repeat during execution, and building the power model based on the average number of repetitions. In the first case, a design method called Asynchronous Charge Sharing Logic (ACSL) is used to implement the Arithmetic Logic Unit (ALU) of the 8051 microcontroller. ACSL circuits are power-predictable because their power consumption is independent of the input data. Based on this property, a fast prediction method is presented that estimates the power of the ALU by analysing the software program and extracting the number of ALU-related instructions. This method achieves less than 1% error in power estimation and more than 100 times speedup in comparison to conventional simulation-based methods. In the second case, an average-case processor energy model is developed for the insertion sort algorithm, based on the number of comparisons that take place during execution. The average number of comparisons is calculated using a high-level methodology called MOdular Quantitative Analysis (MOQA). The parameters of the energy model are measured for the LEON3 processor core, but the model is general and can be used for any processor. The model has been validated through power measurement experiments, and offers high accuracy and orders-of-magnitude speedup over the simulation-based method.
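A minimal sketch of the second, average-case approach: insertion sort performs about n(n-1)/4 comparisons on average, so the expected energy is affine in that count. The per-comparison energy and fixed overhead below are assumed figures, not the measured LEON3 parameters.

```python
# Hypothetical sketch of the average-case energy model: insertion
# sort performs about n*(n-1)/4 comparisons on average, so the
# expected energy is linear in that count. The per-comparison and
# overhead figures are illustrative, not measured LEON3 values.
E_CMP = 2.1e-9     # joules per comparison (assumed)
E_OVERHEAD = 5e-6  # fixed cost per sort call (assumed)

def avg_comparisons(n):
    return n * (n - 1) / 4.0

def expected_energy(n):
    return E_OVERHEAD + E_CMP * avg_comparisons(n)

for n in (10, 100, 1000):
    print(n, f"{expected_energy(n):.3e} J")
```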

Relevance: 100.00%

Publisher:

Abstract:

Data sources are often dispersed geographically in real-life applications. Finding a knowledge model may require joining all the data sources and running a machine learning algorithm on the joint set. We present an alternative based on a Multi-Agent System (MAS): an agent mines one data source in order to extract a local theory (knowledge model) and then merges it with the previous MAS theory using a knowledge fusion technique. This way, we obtain a global theory that summarizes the distributed knowledge without spending resources and time on joining the data sources. New experiments have been executed, including statistical significance analysis. The results show that, as a result of knowledge fusion, the accuracy of the initial theories is significantly improved, as is the accuracy of the monolithic solution.
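A minimal sketch of the distributed setup, assuming scikit-learn is available: each agent learns a local theory from its own partition of the data, and the "fusion" here is simple majority voting over the local models, a stand-in for the paper's knowledge fusion technique.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

# Hypothetical sketch: each agent learns a local theory from its own
# data source; fusion is majority voting, a stand-in for the paper's
# knowledge-fusion technique.
X, y = make_classification(n_samples=900, n_features=10, random_state=0)
partitions = np.array_split(np.arange(len(X)), 3)   # three data sources

agents = [DecisionTreeClassifier(random_state=0).fit(X[idx], y[idx])
          for idx in partitions]

def fused_predict(x):
    votes = [a.predict(x.reshape(1, -1))[0] for a in agents]
    return max(set(votes), key=votes.count)

preds = np.array([fused_predict(x) for x in X])
print((preds == y).mean())  # fused accuracy on the pooled data
```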

Relevance: 100.00%

Publisher:

Abstract:

The removal of barriers between countries is a consequence of globalization and of the free trade agreements signed in recent years. This implies significant growth in foreign trade, which is reflected in the increased complexity of companies' supply chains. Consequently, companies in Colombia need to seek alternatives for achieving high levels of productivity and competitiveness, since their environment has become increasingly complex and saturated with competition, not only national but also international. To maintain a favourable competitive position, companies must focus on the activities that add value to their business, which is why one of the alternatives being adopted today is the outsourcing of logistics functions to firms specialized in managing these services. Such firms are Logistics Service Providers (LSP), who act as agents external to the organization, managing, controlling and providing logistics activities on behalf of a contracting company. The activities performed may include all or part of the logistics activities, but at a minimum the management and execution of transport and warehousing must be included (Berglund, 2000). The purpose of this document is to analyse the role of Third-Party Logistics providers (3PL) as promoters of organizational performance in Colombian companies, in order to inform MSMEs (MIPYMES) about the benefits of working with LSPs as a means of improving the country's competitive position.