930 results for Memory Management (Computer science)
Abstract:
Wastewater management is a complex task. Many polluting substances are known, but many more remain unknown, and their individual or collective effects are difficult to predict. Identifying and assessing the environmental impacts resulting from the interaction between natural and social systems is a multi-criteria issue. Environmental managers need decision-support tools for their diagnoses in order to solve environmental problems. The contributions of this research work are twofold: first, to propose the use of an agent-based modelling approach to conceptualise and integrate all the elements that are directly or indirectly involved in wastewater management; second, to propose an argumentation-based framework that enables the agents to reason effectively. The thesis contains several real-world examples to show how a framework of arguing agents can accommodate different interests and different perspectives. Consequently, it can help to build a more informed and effective dialogue and thus better describe the interactions among the agents. This document first describes the context under study, scaling the global problem of river basin management down to the management of the urban wastewater system, specifically the industrial discharge scenario. The system is then analysed by describing interacting agents. Finally, several prototypes capable of reasoning and deliberating are described, based on non-monotonic logic and on a declarative language (answer set programming). It is important to note that this thesis links two disciplines: environmental engineering (specifically the area of wastewater management) and computer science (specifically the area of artificial intelligence), thereby contributing to the multidisciplinarity required to tackle the problem under study. Environmental engineering provides the domain knowledge, while computer science allows us to structure and specify that knowledge.
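To make the argumentation-based reasoning concrete, the following is a minimal sketch of a Dung-style argumentation framework in Python; it is not the thesis's answer-set-programming prototypes, and the arguments and attacks for the industrial discharge scenario are hypothetical.

```python
# Minimal sketch (not the thesis's ASP prototypes): a Dung-style argumentation
# framework whose grounded extension says which arguments survive when agents
# with different interests attack each other's claims. Names are hypothetical.

def grounded_extension(arguments, attacks):
    """Compute the grounded extension by iterating the characteristic function."""
    def attackers(a):
        return {x for (x, y) in attacks if y == a}

    extension = set()
    while True:
        # An argument is acceptable if every attacker is itself attacked
        # by some argument already in the extension.
        acceptable = {
            a for a in arguments
            if all(any((d, b) in attacks for d in extension) for b in attackers(a))
        }
        if acceptable == extension:
            return extension
        extension = acceptable

# Hypothetical industrial-discharge scenario.
arguments = {"discharge_ok", "toxic_load_too_high", "pretreatment_in_place"}
attacks = {
    ("toxic_load_too_high", "discharge_ok"),
    ("pretreatment_in_place", "toxic_load_too_high"),
}
print(grounded_extension(arguments, attacks))
# -> {'discharge_ok', 'pretreatment_in_place'}  (set order may vary)
```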
Abstract:
Competency management is a very important part of a well-functioning organisation. Unfortunately, competency descriptions are not uniformly specified or defined across national, sectoral or organisational borders, leading to an opaque competency-description market with a multitude of competency frameworks and competency benchmarks. An ontology is a formalised description of a domain, which enables automated reasoning engines to be built that, by exploiting the interrelations between entities, can make “intelligent” choices in different situations within the domain. By introducing formalised competency ontologies, automated tools such as skill gap analysis, training suggestion generation, job search and recruitment can be developed that compare and contrast different competency descriptions at the semantic level. The major problem with defining a common formalised ontology for competencies is that there are many different viewpoints on competencies and competency frameworks. Work within the TRACE project has focused on finding common trends across different competency frameworks in order to allow an intermediate competency description to be made, which other frameworks can reference. This research has shown that competencies can be divided into “knowledge”, “skills” and what we call “others”. An ontology has been created on this basis, with a simple structure of different “kinds” of “knowledges” and “skills”, using semantic interrelations to define the basic semantic structure of the ontology. A prototype tool for skill gap analysis has been developed. Personal profiles can be produced with the tool, and a skill gap analysis is performed against a desired competency profile using an ontologically based inference engine, which can list the closest fits and possible proficiency gaps.
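The skill gap analysis described above can be illustrated with a minimal sketch; the competency names, proficiency scale and broader/narrower links below are hypothetical and do not reproduce the TRACE ontology.

```python
# Minimal sketch, not the TRACE ontology: competencies with broader/narrower
# links and a naive skill gap analysis against a desired profile.
# All names and proficiency levels are hypothetical.

BROADER = {  # narrower competency -> broader competency
    "java_programming": "object_oriented_programming",
    "object_oriented_programming": "programming",
    "sql": "databases",
}

def subsumes(required, offered):
    """True if the offered competency is the required one or a narrower form of it."""
    while offered is not None:
        if offered == required:
            return True
        offered = BROADER.get(offered)
    return False

def skill_gap(desired, personal):
    """Return competencies missing from the personal profile, plus proficiency gaps
    where a competency is covered but at a lower level than desired."""
    missing, weaker = [], {}
    for competency, level in desired.items():
        matches = [p_level for p_comp, p_level in personal.items()
                   if subsumes(competency, p_comp)]
        if not matches:
            missing.append(competency)
        elif max(matches) < level:
            weaker[competency] = level - max(matches)
    return missing, weaker

desired  = {"programming": 3, "databases": 2, "project_management": 2}
personal = {"java_programming": 2, "sql": 3}
print(skill_gap(desired, personal))
# -> (['project_management'], {'programming': 1})
```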
Abstract:
At its most fundamental, cognition as displayed by biological agents (such as humans) may be said to consist of the manipulation and utilisation of memory. Recent discussions in the field of cognitive robotics have emphasised the role of embodiment and the necessity of a value or motivation for autonomous behaviour. This work proposes a computational architecture – the Memory-Based Cognitive (MBC) architecture – based upon these considerations for the autonomous development of control of a simple mobile robot. This novel architecture will permit the exploration of theoretical issues in cognitive robotics and animal cognition. Furthermore, the biological inspiration of the architecture is anticipated to result in a mobile robot controller which displays adaptive behaviour in unknown environments.
Abstract:
Planning a project with proper consideration of all necessary factors, and managing it to ensure its successful implementation, poses many challenges. The initial stage of planning a project for bidding is costly and time consuming, and cost and effort predictions at this stage are usually of poor accuracy. On the other hand, detailed information about previous projects may be buried in piles of archived documents, making it increasingly difficult to learn from previous experience. Project portfolios have been brought into this field with the aim of improving information sharing and management among different projects; however, the amount of information that can be shared is still limited to generic information. In this paper we report a recently developed software system, COBRA (Automated Project Information Sharing and Management System), which automatically generates a project plan with effort estimation of time and cost based on data collected from previously completed projects. To maximise data sharing and management among different projects, we propose a method using product-based planning from the PRINCE2 methodology. Keywords: project management, product based planning, best practice, PRINCE2
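As a rough illustration of effort estimation from previously completed projects, the sketch below averages recorded effort per product type in a product-based plan; it is not COBRA's actual algorithm, and all product names and figures are hypothetical.

```python
# Hedged sketch, not COBRA's estimation method: given effort data recorded for
# products delivered in past projects, estimate each product in a new
# product breakdown (PRINCE2-style) as the mean of past efforts of that type.
# Product types and figures are hypothetical.

from statistics import mean

past_products = [  # (product type, person-days, cost)
    ("requirements_spec", 12, 6000),
    ("requirements_spec", 15, 7500),
    ("database_design", 20, 11000),
    ("user_manual", 8, 3200),
]

def estimate_plan(product_breakdown):
    """Return per-product (effort, cost) estimates averaged from past projects."""
    plan = {}
    for product_type in product_breakdown:
        history = [(d, c) for t, d, c in past_products if t == product_type]
        if history:
            plan[product_type] = (mean(d for d, _ in history),
                                  mean(c for _, c in history))
        else:
            plan[product_type] = (None, None)  # no comparable past data
    return plan

print(estimate_plan(["requirements_spec", "database_design", "training_course"]))
# -> {'requirements_spec': (13.5, 6750), 'database_design': (20, 11000),
#     'training_course': (None, None)}
```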
Abstract:
This paper focuses on improving computer network management through the adoption of artificial intelligence techniques. A logical inference system has been devised to enable automated isolation, diagnosis, and even repair of network problems, thus enhancing the reliability, performance, and security of networks. We propose a distributed multi-agent architecture for network management, where a logical reasoner acts as an external managing entity capable of directing, coordinating, and stimulating actions in an active management architecture. Active network technology provides the lower layer, which makes it possible to deploy code implementing teleo-reactive agents distributed across the whole network. We adopt the Situation Calculus to define a network model and the Reactive Golog language to implement the logical reasoner. An active network management architecture is used by the reasoner to inject and execute operational tasks in the network. The integrated system combines the advantages of logical reasoning and network programmability, providing a powerful system capable of performing high-level management tasks in order to deal with network faults.
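The teleo-reactive behaviour mentioned above can be sketched as an ordered list of condition-action rules, where the first rule whose condition holds fires; the following Python fragment only illustrates that control scheme, it is not Reactive Golog, and the network conditions and repair actions are hypothetical.

```python
# Illustration of a teleo-reactive control loop (not Reactive Golog): an agent
# repeatedly fires the first rule whose condition holds in the current state.
# State fields, conditions and repair actions are hypothetical.

network_state = {"link_up": False, "interface_configured": False, "route_installed": False}

def repair_link(state):         state["link_up"] = True
def configure_interface(state): state["interface_configured"] = True
def install_route(state):       state["route_installed"] = True

# Teleo-reactive program: higher rules are closer to the goal.
RULES = [
    (lambda s: s["route_installed"], lambda s: None),   # goal reached, idle
    (lambda s: s["interface_configured"], install_route),
    (lambda s: s["link_up"], configure_interface),
    (lambda s: True, repair_link),                        # default action
]

def step(state):
    for condition, action in RULES:
        if condition(state):
            action(state)
            return

for _ in range(4):
    step(network_state)
print(network_state)
# -> {'link_up': True, 'interface_configured': True, 'route_installed': True}
```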
Abstract:
With the rapid development of information technology, learners demand effective personalised learning support, which imposes a new learning paradigm on learning content management. Standards, as well as best practice in industry and the research community, have emerged to address this paradigm shift. With respect to this trend, it is recognised that finding learning content that meets personal learning requirements remains challenging. This paper describes a model of e-learning services provision which integrates best practice in e-learning and Web services technology so that learning content management is capable of supporting applications of learning services.
Abstract:
Climate-G is a large-scale distributed testbed devoted to climate change research. It is an unfunded effort started in 2008 and involving a wide community in both Europe and the US. The testbed is an interdisciplinary effort involving partners from several institutions and joining expertise in the fields of climate change and computational science. Its main goal is to allow scientists to carry out geographical and cross-institutional data discovery, access, analysis, visualisation and sharing of climate data. It represents an attempt to address, in a real environment, challenging data and metadata management issues. This paper presents a complete overview of the Climate-G testbed, highlighting the most important results achieved since the beginning of the project.
Abstract:
Searching a dataset for elements that are similar to a given query element is a core problem in applications that manage complex data, and has been aided by metric access methods (MAMs). A growing number of applications require indices that can be built faster and repeatedly, while also providing faster responses to similarity queries. The increase in main memory capacity and its falling cost also motivate the use of memory-based MAMs. In this paper, we propose the Onion-tree, a new and robust dynamic memory-based MAM that slices the metric space into disjoint subspaces to provide quick indexing of complex data. It introduces three major characteristics: (i) a partitioning method that controls the number of disjoint subspaces generated at each node; (ii) a replacement technique that can change the leaf node pivots in insertion operations; and (iii) range and k-NN extended query algorithms that support the new partitioning method, including a new visit order of the subspaces in k-NN queries. Performance tests with both real-world and synthetic datasets showed that the Onion-tree is very compact. Comparisons of the Onion-tree with the MM-tree and a memory-based version of the Slim-tree showed that the Onion-tree was always the fastest to build the index. The experiments also showed that the Onion-tree significantly improved range and k-NN query processing performance and was the most efficient MAM, followed by the MM-tree, which in turn outperformed the Slim-tree in almost all the tests. (C) 2010 Elsevier B.V. All rights reserved.
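The idea of slicing the metric space into disjoint subspaces can be sketched with a single pivot-based node; the following is a simplified illustration, not the Onion-tree's actual node structure or algorithms, and the data, metric and ring width are hypothetical.

```python
# Simplified sketch of a memory-based metric access method, not the Onion-tree
# itself: one node uses a pivot to slice the space into disjoint rings, and a
# range query visits only rings that can intersect the query ball
# (triangle inequality). Data, metric and ring width are hypothetical.

import math

def euclidean(a, b):
    return math.dist(a, b)

class RingNode:
    def __init__(self, pivot, ring_width, metric=euclidean):
        self.pivot, self.width, self.metric = pivot, ring_width, metric
        self.rings = {}  # ring index -> list of elements

    def insert(self, element):
        ring = int(self.metric(element, self.pivot) // self.width)
        self.rings.setdefault(ring, []).append(element)

    def range_query(self, query, radius):
        d_q = self.metric(query, self.pivot)
        lo = int(max(0.0, d_q - radius) // self.width)
        hi = int((d_q + radius) // self.width)
        result = []
        for ring in range(lo, hi + 1):            # prune rings that cannot overlap
            for element in self.rings.get(ring, []):
                if self.metric(element, query) <= radius:
                    result.append(element)
        return result

node = RingNode(pivot=(0.0, 0.0), ring_width=1.0)
for point in [(0.5, 0.5), (2.0, 0.0), (3.0, 3.0), (0.9, 0.1)]:
    node.insert(point)
print(node.range_query(query=(1.0, 0.0), radius=1.0))
# -> [(0.5, 0.5), (0.9, 0.1), (2.0, 0.0)]
```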
Abstract:
An important feature of a database management system (DBMS) is its client/server architecture, in which managing shared memory between the clients and the server is always a tough issue. Similarity queries are especially sensitive to this kind of architecture, since their answer sizes vary widely. Usually, the answer to a similarity query is fully processed and sent in full to the user, who is often interested in just part of it, e.g. just a few elements closest to or farthest from the query reference. Compelling the DBMS to retrieve the full answer and then ignore most of it is, at the least, a waste of server processing power. Paging the answer is a technique that splits the answer into several pages delivered on client request. Despite the success of paging for traditional queries, little work has been done to support it in similarity queries. In this work, we present a technique that not only provides paging for similarity range and k-nearest neighbor queries, but also supports them in two variations: the forward similarity query and the backward similarity query, which return elements either increasingly farther from or increasingly closer to the query reference. The reported experiments show that, depending on the proportion of the interesting part relative to the full answer, both techniques allow queries to be answered much faster than in the non-paged way. (C) 2010 Elsevier Inc. All rights reserved.
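The paging idea can be sketched by keeping the answer ordered by distance and serving it one page at a time, forward or backward; the fragment below is only an illustration, not the paper's server-side algorithms, and the dataset, metric and page size are hypothetical.

```python
# Minimal sketch of paged similarity answers (not the paper's algorithms):
# the answer is ordered by distance to the query reference and served one page
# at a time, forward (increasingly farther) or backward (from the farthest).
# Dataset, metric and page size are hypothetical.

import math

def paged_similarity(dataset, query, page_size, direction="forward"):
    """Yield pages of (element, distance) pairs ordered by distance to the query."""
    ranked = sorted(((x, math.dist(x, query)) for x in dataset), key=lambda p: p[1])
    if direction == "backward":
        ranked = list(reversed(ranked))
    for start in range(0, len(ranked), page_size):
        yield ranked[start:start + page_size]

dataset = [(0, 0), (1, 1), (5, 5), (2, 2), (9, 9)]
pages = paged_similarity(dataset, query=(0, 0), page_size=2)
print(next(pages))   # first page: the two elements closest to the reference
# -> [((0, 0), 0.0), ((1, 1), 1.4142135623730951)]
```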
Abstract:
Security administrators face the challenge of designing, deploying and maintaining a variety of configuration files related to security systems, especially in large-scale networks. These files have heterogeneous syntaxes and follow differing semantic concepts. Nevertheless, they are interdependent, since security services have to cooperate and their configurations must be consistent with each other so that global security policies are completely and correctly enforced. To tackle this problem, our approach supports a comfortable definition of an abstract high-level security policy and provides an automated derivation of the desired configuration files. It is an extension of policy-based management and policy hierarchies, combining model-based management (MBM) with system modularization. MBM employs an object-oriented model of the managed system to obtain the details needed for automated policy refinement. The modularization into abstract subsystems (ASs) segments the system, and the model, into units which more closely encapsulate related system components and provide focused abstract views. As a result, scalability is achieved and even comprehensive IT systems can be modelled in a unified manner. The associated tool MoBaSeC (Model-Based-Service-Configuration) supports interactive graphical modelling, automated model analysis and policy refinement with the derivation of configuration files. We describe the MBM and AS approaches, outline the tool functions and exemplify their application and the results obtained. Copyright (C) 2010 John Wiley & Sons, Ltd.
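The refinement step can be sketched as combining an abstract policy with an object model of the managed system to emit concrete filter rules; the following is a hedged illustration, not MoBaSeC, and the hosts, services, ports and output syntax are hypothetical.

```python
# Hedged sketch of automated policy refinement, not MoBaSeC: an abstract policy
# ("role X may use service Y") plus a model of the managed system (which hosts
# run which services on which ports) yields concrete packet-filter lines.
# All names, addresses, ports and the rule syntax are hypothetical.

SYSTEM_MODEL = {
    "roles":    {"staff": ["10.0.1.0/24"], "guests": ["10.0.2.0/24"]},
    "services": {"mail": [("mailhost", "192.168.5.10", 25)],
                 "web":  [("webhost",  "192.168.5.20", 443)]},
}

ABSTRACT_POLICY = [
    ("staff",  "mail", "permit"),
    ("staff",  "web",  "permit"),
    ("guests", "web",  "permit"),
]

def refine(policy, model):
    """Derive low-level filter rules from the abstract policy and the system model."""
    rules = []
    for role, service, decision in policy:
        for src in model["roles"][role]:
            for _host, addr, port in model["services"][service]:
                rules.append(f"{decision} tcp from {src} to {addr} port {port}")
    return rules

for rule in refine(ABSTRACT_POLICY, SYSTEM_MODEL):
    print(rule)
# permit tcp from 10.0.1.0/24 to 192.168.5.10 port 25
# permit tcp from 10.0.1.0/24 to 192.168.5.20 port 443
# permit tcp from 10.0.2.0/24 to 192.168.5.20 port 443
```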
Abstract:
Policy hierarchies and automated policy refinement are powerful approaches to simplify administration of security services in complex network environments. A crucial issue for the practical use of these approaches is to ensure the validity of the policy hierarchy, i.e. since the policy sets for the lower levels are automatically derived from the abstract policies (defined by the modeller), we must be sure that the derived policies uphold the high-level ones. This paper builds upon previous work on Model-based Management, particularly on the Diagram of Abstract Subsystems approach, and goes further to propose a formal validation approach for the policy hierarchies yielded by the automated policy refinement process. We establish general validation conditions for a multi-layered policy model, i.e. necessary and sufficient conditions that a policy hierarchy must satisfy so that the lower-level policy sets are valid refinements of the higher-level policies according to the criteria of consistency and completeness. Relying upon the validation conditions and upon axioms about the model representativeness, two theorems are proved to ensure compliance between the resulting system behaviour and the abstract policies that are modelled.
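The two validation criteria can be illustrated executably, complementing the derivation sketch above; the checks below are not the paper's formal conditions or theorems, and the mapping from concrete rules back to abstract entities is hypothetical.

```python
# Executable illustration of the two criteria named in the abstract, not the
# paper's formal validation conditions: a refinement is "complete" if every
# abstract permission is implemented by some derived rule, and "consistent" if
# no derived rule grants something the abstract policy does not permit.
# The rule format and the role/service mapping are hypothetical.

ABSTRACT_PERMITS = {("staff", "mail"), ("staff", "web")}   # high-level policy
DERIVED_RULES = [                                          # lower-level policy set
    ("10.0.1.0/24", "192.168.5.10", 25),
    ("10.0.1.0/24", "192.168.5.20", 443),
]

# Model knowledge linking concrete addresses/ports back to abstract entities.
ROLE_OF = {"10.0.1.0/24": "staff"}
SERVICE_OF = {("192.168.5.10", 25): "mail", ("192.168.5.20", 443): "web"}

def abstract_view(rule):
    src, dst, port = rule
    return ROLE_OF.get(src), SERVICE_OF.get((dst, port))

def complete(abstract, derived):
    implemented = {abstract_view(r) for r in derived}
    return abstract <= implemented

def consistent(abstract, derived):
    return all(abstract_view(r) in abstract for r in derived)

print(complete(ABSTRACT_PERMITS, DERIVED_RULES),
      consistent(ABSTRACT_PERMITS, DERIVED_RULES))
# -> True True
```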
Abstract:
The InteGrade project is a multi-university effort to build a novel grid computing middleware based on the opportunistic use of resources belonging to user workstations. The InteGrade middleware currently enables the execution of sequential, bag-of-tasks, and parallel applications that follow the BSP or the MPI programming models. This article presents the lessons learned over the last five years of InteGrade development and describes the solutions achieved concerning support for robust application execution. The contributions cover the related fields of application scheduling, execution management, and fault tolerance. We present our solutions, describing their implementation principles and evaluating them through the analysis of several experimental results. (C) 2010 Elsevier Inc. All rights reserved.
Abstract:
Software Transactional Memory (STM) systems have poor performance under high-contention scenarios. Since many transactions compete for the same data, most of them are aborted, wasting processor runtime. Contention management policies are typically used to avoid this, but they are passive approaches, as they wait for an abort to happen before they can take action. More proactive approaches have emerged that try to predict when a transaction is likely to abort so that its execution can be delayed. Such techniques are limited, as they do not replace the doomed transaction with another or, when they do, they rely on the operating system for that, having little or no control over which transaction should run. In this paper we propose LUTS, a Lightweight User-Level Transaction Scheduler, which is based on an execution context record mechanism. Unlike other techniques, LUTS provides the means for selecting another transaction to run in parallel, thus improving system throughput. Moreover, it avoids most of the issues caused by pseudo-parallelism, as it only launches as many system-level threads as there are available processor cores. We discuss the design of LUTS and present three conflict-avoidance heuristics built around its scheduling capabilities. Experimental results, obtained with the STMBench7 and STAMP benchmark suites, show LUTS's efficiency when running high-contention applications and how the conflict-avoidance heuristics can improve STM performance even further. In fact, our transaction scheduling techniques are capable of improving program performance even in overloaded scenarios. © 2011 Springer-Verlag.
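The conflict-avoidance idea can be sketched as a user-level ready queue from which the scheduler picks a transaction whose write set does not overlap those currently running; the fragment below is only an illustration, not LUTS's actual mechanism or heuristics, and the transaction names and write sets are hypothetical.

```python
# Hedged sketch of a conflict-avoidance pick, not LUTS's scheduler: rather than
# launching a transaction likely to conflict with the ones currently running,
# the user-level scheduler chooses another ready transaction so a core is not
# wasted on a doomed execution. Names and write sets are hypothetical.

from collections import deque

class Transaction:
    def __init__(self, name, write_set):
        self.name, self.write_set = name, set(write_set)

def pick_next(ready_queue, running):
    """Pop the first ready transaction whose write set does not overlap any
    running transaction's write set; fall back to the queue head otherwise."""
    busy = set().union(*(t.write_set for t in running)) if running else set()
    for _ in range(len(ready_queue)):
        candidate = ready_queue.popleft()
        if candidate.write_set.isdisjoint(busy):
            return candidate
        ready_queue.append(candidate)        # would likely abort: defer it
    return ready_queue.popleft() if ready_queue else None

running = [Transaction("T1", {"account_a", "account_b"})]
ready = deque([Transaction("T2", {"account_a"}),      # conflicts with T1
               Transaction("T3", {"account_c"})])     # independent
print(pick_next(ready, running).name)
# -> T3
```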