892 results for Distributed computer systems


Relevance: 90.00%

Abstract:

In electronic commerce, systems development is based on two fundamental types of models: business models and process models. A business model is concerned with value exchanges among business partners, while a process model focuses on operational and procedural aspects of business communication. Thus, a business model defines the what of an e-commerce system, while a process model defines the how. Business process design can be facilitated and improved by a method for systematically moving from a business model to a process model. Such a method would provide support for traceability, evaluation of design alternatives, and a seamless transition from analysis to realization. This work proposes a unified framework that can be used as a basis to analyze, interpret, and understand the different concepts that arise at different stages of e-commerce system development. In this thesis, we illustrate how UN/CEFACT's recommended metamodels for business and process design can be analyzed, extended, and then integrated into final solutions based on the proposed unified framework. As an application of the framework, we also demonstrate how process-modeling tasks can be facilitated in e-commerce system design. The proposed methodology, called BP3 (for Business Process Patterns Perspective), uses a question-answer interface to capture the different business requirements from designers. It is based on pre-defined process patterns, and the final solution is generated by applying the captured business requirements, by means of a set of production rules, to complete the inter-process communication among these patterns.
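
A minimal Python sketch of the rule-driven idea behind BP3 (the pattern library, answer capture, and rule format below are invented for illustration; the actual methodology is far richer):

```python
# Hypothetical sketch: production rules select pre-defined process patterns
# from the designer's captured answers. Not the actual BP3 implementation.

# Pre-defined process patterns, keyed by the business situation they handle.
PATTERNS = {
    "prepaid":  ["ReceivePayment", "ShipGoods"],
    "postpaid": ["ShipGoods", "SendInvoice", "ReceivePayment"],
}

# Production rules: a condition on the captured answers -> pattern to use.
RULES = [
    (lambda a: a["payment_before_delivery"] == "yes", "prepaid"),
    (lambda a: a["payment_before_delivery"] == "no",  "postpaid"),
]

def design_process(answers):
    """Apply the first matching production rule to the designer's answers."""
    for condition, pattern in RULES:
        if condition(answers):
            return PATTERNS[pattern]
    raise ValueError("no pattern covers these requirements")

# The question-answer interface is reduced to a plain dict here.
print(design_process({"payment_before_delivery": "yes"}))
# -> ['ReceivePayment', 'ShipGoods']
```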

Relevance: 90.00%

Abstract:

Sustainable computer systems require some flexibility to adapt to unpredictable environmental changes. A solution lies in autonomous software agents, which can adapt autonomously to their environments. Though autonomy allows agents to decide which behavior to adopt, a disadvantage is a lack of control and, as a side effect, even untrustworthiness: we want to keep some control over such autonomous agents. How can autonomous agents be controlled while their autonomy is respected? One solution is to regulate agents' behavior by norms. The normative paradigm makes it possible to control autonomous agents while respecting their autonomy, limiting untrustworthiness and increasing system compliance. It can also facilitate the design of the system, for example by regulating the coordination among agents. However, an autonomous agent may follow norms or violate them under certain conditions. Under what conditions is a norm binding upon an agent? While autonomy is regarded as the driving force behind the normative paradigm, cognitive agents provide a basis for modeling the bindingness of norms. In order to cope with the complexity of modeling cognitive agents and normative bindingness, we adopt an intentional stance. Since agents are embedded in a dynamic environment, events may not occur at the same instant; accordingly, our cognitive model is extended to account for some temporal aspects. Special attention is given to the temporal peculiarities of the legal domain such as, among others, the time in force and the time in efficacy of provisions. Some types of normative modifications are also discussed in the framework. It is noteworthy that our temporal account of legal reasoning is integrated with our commonsense temporal account of cognition. As our intention is to build sustainable reasoning systems running in unpredictable environments, we adopt a declarative representation of knowledge. A declarative representation of norms makes it easier to update their system representation, thus facilitating system maintenance, and improves system transparency, thus easing system governance. Since agents are bounded and embedded in unpredictable environments, and since conflicts may appear among mental states and norms, agent reasoning has to be defeasible, i.e. new pieces of information can invalidate formerly derivable conclusions. In this dissertation, our model is formalized in a non-monotonic logic, namely a temporal modal defeasible logic, in order to account for the interactions between normative systems and software cognitive agents.
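
A minimal Python sketch of what "defeasible" means in practice: a conclusion derived by one rule can be overturned by a conflicting, stronger rule (the rule format and priorities are invented; the thesis develops a much richer temporal modal defeasible logic):

```python
# Illustrative defeasible reasoning: higher-ranked rules defeat conflicting
# lower-ranked ones. A leading "-" marks a negated conclusion.
RULES = [
    {"name": "r1", "if": {"norm_in_force"}, "then": "obliged_to_stop",  "rank": 1},
    {"name": "r2", "if": {"emergency"},     "then": "-obliged_to_stop", "rank": 2},
]

def derive(facts):
    """Fire every applicable rule; keep the strongest conclusion per literal."""
    conclusions = {}
    for rule in RULES:
        if rule["if"] <= facts:                      # all premises hold
            lit = rule["then"].lstrip("-")
            sign = not rule["then"].startswith("-")
            best = conclusions.get(lit)
            if best is None or rule["rank"] > best[1]:
                conclusions[lit] = (sign, rule["rank"])
    return {lit: sign for lit, (sign, _) in conclusions.items()}

print(derive({"norm_in_force"}))               # {'obliged_to_stop': True}
print(derive({"norm_in_force", "emergency"}))  # {'obliged_to_stop': False}
```

New information (the emergency) invalidates the formerly derivable obligation, which is exactly the non-monotonic behaviour described above.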

Relevance: 90.00%

Abstract:

Traditional software engineering approaches and metaphors fall short when applied to areas of growing relevance such as electronic commerce, enterprise resource planning, and mobile computing: such areas, in fact, generally call for open architectures that may evolve dynamically over time so as to accommodate new components and meet new requirements. This is probably one of the main reasons why the agent metaphor and the agent-oriented paradigm are gaining momentum in these areas. This thesis deals with the engineering of complex software systems in terms of the agent paradigm, which is based on the notions of agent and of systems of interacting agents as fundamental abstractions for designing, developing, and managing at runtime typically distributed software systems. However, engineers today often work with technologies that do not support the abstractions used in the design of the systems; for this reason, research on methodologies becomes a central point of scientific activity. Currently, most agent-oriented methodologies are supported by small teams of academic researchers; as a result, most of them are at an early stage and still confined to the context of mostly "academic" approaches to agent-oriented systems development. Moreover, such methodologies are not well documented, and are very often defined and presented by focusing only on specific aspects of the methodology.

The role played by meta-models becomes fundamental for comparing and evaluating methodologies. In fact, a meta-model specifies the concepts, rules, and relationships used to define methodologies. Although it is possible to describe a methodology without an explicit meta-model, formalising the underpinning ideas of the methodology in question is valuable when checking its consistency or planning extensions or modifications. A good meta-model must address all the different aspects of a methodology, i.e. the process to be followed, the work products to be generated, and those responsible for making all this happen. In turn, specifying the work products that must be developed implies defining the basic modelling building blocks from which they are built.

As a building block, the agent abstraction alone is not enough to fully model all the aspects related to multi-agent systems in a natural way. In particular, different perspectives exist on the role that the environment plays within agent systems; however, it is at least clear that all non-agent elements of a multi-agent system are typically considered part of the multi-agent system environment. The key role of the environment as a first-class abstraction in the engineering of multi-agent systems is today generally acknowledged in the multi-agent system community, so the environment should be explicitly accounted for in the engineering of multi-agent systems, working as a new design dimension for agent-oriented methodologies. At least two main ingredients shape the environment: environment abstractions (entities of the environment encapsulating some functions) and topology abstractions (entities of the environment that represent its logical or physical spatial structure). In addition, the engineering of non-trivial multi-agent systems requires principles and mechanisms for managing the complexity of the system representation. These principles lead to the adoption of a multi-layered description, which designers can use to provide different levels of abstraction over multi-agent systems. The research in these fields has led to the formulation of a new version of the SODA methodology, where environment abstractions and layering principles are exploited for engineering multi-agent systems.
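
A rough Python sketch of environment and topology abstractions as first-class, layered design entities (class and instance names are invented for illustration and do not reproduce the actual SODA notation):

```python
# Illustrative only: environment/topology abstractions and a layered view.

class EnvironmentAbstraction:
    """Environment entity encapsulating some function (e.g., a shared resource)."""
    def __init__(self, name, operations):
        self.name = name
        self.operations = operations    # what the abstraction offers to agents

class TopologyAbstraction:
    """Environment entity representing logical or physical spatial structure."""
    def __init__(self, name, neighbours=()):
        self.name = name
        self.neighbours = list(neighbours)

class Layer:
    """One level of a multi-layered system description."""
    def __init__(self, name):
        self.name = name
        self.agents, self.environment, self.topology = [], [], []

# A coarse "system" layer refined by a more detailed "rooms" layer.
system = Layer("system")
system.environment.append(EnvironmentAbstraction("Repository", ["read", "write"]))
rooms = Layer("rooms")
rooms.topology.append(TopologyAbstraction("RoomA", neighbours=["RoomB"]))
```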

Relevance: 90.00%

Abstract:

This thesis describes modelling tools and methods suited to complex systems, i.e. systems that are typically represented by a plurality of models. The basic idea is that all the models representing a system should be linked by well-defined model operations in order to build a structured repository of information: a hierarchy of models. The port-Hamiltonian framework is a good candidate for this kind of problem, as it supports the most important model operations natively. The thesis in particular addresses the problem of integrating distributed-parameter systems into a model hierarchy, and shows two possible mechanisms for doing so: a finite-element discretization in port-Hamiltonian form, and a structure-preserving model-order reduction for discretized models obtainable from commercial finite-element packages.
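
In a lumped port-Hamiltonian model, the state evolves as dx/dt = (J − R)∇H(x) + Bu, with a skew-symmetric interconnection matrix J, a positive semi-definite dissipation matrix R, and the energy function H; interconnecting models then amounts to composing their ports. A minimal numerical sketch for a mass-spring-damper (illustrative only, not taken from the thesis):

```python
# Mass-spring-damper in port-Hamiltonian form, integrated with explicit Euler.
# State x = [q, p] (displacement, momentum); H = p^2/(2m) + k*q^2/2.
import numpy as np

m, k, c = 1.0, 4.0, 0.2
J = np.array([[0.0, 1.0], [-1.0, 0.0]])   # interconnection (skew-symmetric)
R = np.array([[0.0, 0.0], [0.0, c]])      # dissipation (positive semi-definite)
B = np.array([0.0, 1.0])                  # input port: force on the mass

def grad_H(x):
    q, p = x
    return np.array([k * q, p / m])       # gradient of the Hamiltonian

def step(x, u, dt=1e-3):
    return x + dt * ((J - R) @ grad_H(x) + B * u)

x = np.array([1.0, 0.0])                  # released from rest at q = 1
for _ in range(5000):
    x = step(x, 0.0)                      # energy decays through R
print(x)
```

A structure-preserving discretization would replace the naive Euler step with a scheme that keeps the (J, R) form, which is precisely what makes the framework attractive for model hierarchies.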

Relevance: 90.00%

Abstract:

This dissertation analyses the middleware technologies CORBA (Common Object Request Broker Architecture), COM/DCOM (Component Object Model / Distributed Component Object Model), J2EE (Java 2 Enterprise Edition), and Web Services (including .NET) with respect to their suitability for tightly and loosely coupled distributed applications. In addition, primarily for CORBA, the dynamic CORBA components DII (Dynamic Invocation Interface) and IFR (Interface Repository) and the generic data types Any and DynAny (dynamic Any) are examined in detail. The goals are: (a) to reach concrete conclusions about these components and determine in which settings these generic approaches are justified; (b) to analyse the timing behaviour of the dynamic components when obtaining information about unknown objects; (c) to measure the timing behaviour of the dynamic components during communication; (d) to measure and analyse the timing behaviour of creating generic data types and inserting data into them; (e) to measure and analyse the timing behaviour of constructing, at runtime, data types that are unknown in advance, i.e. not described in IDL; (f) to identify the advantages and disadvantages of the dynamic components, define their fields of application, and compare them with other technologies such as COM/DCOM, J2EE, and Web Services with respect to their capabilities; (g) to draw conclusions regarding tight and loose coupling. CORBA is chosen as a standardized and complete distribution platform for investigating these questions. With respect to its dynamic behaviour, which at the time of this work had not been studied or had been studied only insufficiently, CORBA and Web Services point the way for (a) working with unknown objects, which may well have implications for the development of intelligent software agents; (b) the integration of legacy applications; and (c) the possibilities related to B2B (business-to-business). These questions also cover general issues concerning the marshalling/unmarshalling of data and the effort this requires, as well as general statements about the real-time capability of CORBA-based distributed applications. The results are then transferred, where admissible, to other technologies such as COM/DCOM, J2EE, and Web Services. The detailed comparisons of CORBA with DCOM, CORBA with J2EE, and CORBA with Web Services show the suitability of these technologies for loose and tight coupling. Furthermore, general concepts for the architecture and the optimization of communication are derived from the results obtained. These recommendations apply without restriction to all of the investigated technologies in the context of distributed processing.

Relevance: 90.00%

Abstract:

In computer systems, and specifically in multithreaded, parallel, and distributed systems, a deadlock is both a very subtle problem, because it is difficult to prevent during system coding, and a very dangerous one: a deadlocked system easily becomes completely stuck, with consequences ranging from simple annoyances to life-threatening circumstances, with the non-negligible scenario of economic losses in between. How, then, can this problem be avoided? Many possible solutions have been studied, proposed, and implemented. In this thesis we focus on the detection of deadlocks with a static program analysis technique, i.e. an analysis performed without actually executing the program. To begin, we briefly present the static Deadlock Analysis Model developed for coreABS−− in chapter 1, and then detail the class-based coreABS−− language in chapter 2. In chapter 3 we lay the foundation for further discussion by analyzing the differences between coreABS−− and ASP, an untyped object-based calculus, so as to show how the Deadlock Analysis could be extended to object-based languages in general. In this regard, we make some hypotheses explicit in chapter 4, first by presenting a possible, unproven type system for ASP, modeled after the Deadlock Analysis Model developed for coreABS−−. We then conclude our discussion by presenting a simpler hypothesis, which may make it possible to circumvent the difficulties that arise from the definition of the "ad-hoc" type system discussed in the preceding chapter.
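
Whatever the flavour of the analysis (static, as here, or dynamic), the final step is typically a circularity check on a dependency graph: if task or lock A waits on B and B, possibly transitively, waits on A, a deadlock is possible. A minimal Python sketch of that check (a generic illustration, not the coreABS−− analysis itself):

```python
def has_cycle(deps):
    """deps: {node: set of nodes it waits on}. DFS with three-colour marking."""
    WHITE, GREY, BLACK = 0, 1, 2
    colour = {n: WHITE for n in deps}

    def visit(n):
        colour[n] = GREY                  # on the current DFS path
        for m in deps.get(n, ()):
            if colour.get(m, WHITE) == GREY:
                return True               # back edge: circular wait
            if colour.get(m, WHITE) == WHITE and visit(m):
                return True
        colour[n] = BLACK                 # fully explored, no cycle through n
        return False

    return any(colour[n] == WHITE and visit(n) for n in deps)

print(has_cycle({"a": {"b"}, "b": {"c"}, "c": set()}))  # False: no deadlock
print(has_cycle({"a": {"b"}, "b": {"a"}}))              # True: possible deadlock
```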

Relevance: 90.00%

Abstract:

Web services are a collection of industry standards that enable the reusability of services and the interoperability of heterogeneous applications. The UMLS Knowledge Source (UMLSKS) Server provides remote access to the UMLSKS and related resources. We propose a Web Services architecture that encapsulates the UMLSKS-API and makes it available in distributed and heterogeneous environments. This is a first step towards intelligent and automatic discovery and invocation of UMLS services by computer systems in distributed environments such as the Web.
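
The wrapping idea can be illustrated in a few lines of Python: an existing programmatic API call is exposed over HTTP so heterogeneous clients can invoke it remotely (all names and the dummy lookup below are invented; this is not the actual UMLSKS-API):

```python
# Hypothetical sketch: expose an existing API function as a tiny web service.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse

def find_concept(term):
    """Stand-in for a real UMLSKS lookup; returns a dummy concept record."""
    return {"term": term, "cui": "C0000000"}

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        query = parse_qs(urlparse(self.path).query)
        result = find_concept(query.get("term", [""])[0])
        body = json.dumps(result).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

# HTTPServer(("localhost", 8080), Handler).serve_forever()
# GET http://localhost:8080/?term=aspirin -> {"term": "aspirin", "cui": "C0000000"}
```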

Relevance: 90.00%

Abstract:

The technological advances of recent decades have transformed the external resources of Vocational Counseling, Occupational Information, and the assessment of clients. Most computer systems follow a behaviorist-cognitive approach; however, the use of vocational counseling software is not exclusive to one conceptual approach. Computers are introduced in education from primary school onwards, and counselors and other educators are expected to use those systems. The attitude of counselors ranges from enthusiastic acceptance to complete refusal, and many counselors fear that computers will replace them. An underlying theory holds that counseling is based on the counselor-client interaction, and that a computer-client interaction cannot be considered vocational counseling. Counseling has five basic aims: prevention, assistance, education and development, service to diverse groups, and research. The most relevant trends in computer-based counseling are: computer-based tests and questionnaires, adaptive development, computerized information, vocational counseling systems, and research. The basic aims and the potential role of computers in achieving them are discussed. Today's vocational counselors can use computer technology to link the past of our profession to its promising future. In view of these premises we have developed two computer systems that assist the vocational counseling process: the "Professional Interests Questionnaire, Computer Version" and the "Computer-based System of Vocational Counseling".


Relevance: 90.00%

Abstract:

Runtime management of distributed information systems is a complex and costly activity. One of the main challenges to be addressed is obtaining a complete and up-to-date view of all the managed runtime resources. This article presents a monitoring architecture for heterogeneous and distributed information systems, composed of two elements: an information model and an agent infrastructure. The model copes with the complexity and variability of these systems and enables abstraction over non-relevant details. The infrastructure uses this information model to monitor and manage the modeled environment, performing and detecting changes at execution time. The agent infrastructure is further detailed, and its components and the relationships between them are explained. Moreover, the proposal is validated through a set of agents that instrument the JEE GlassFish application server, paying special attention to supporting distributed configuration scenarios.
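
A minimal Python sketch of the agent-plus-information-model idea (names are invented; the article's model and its GlassFish instrumentation are far more detailed): each agent periodically probes a managed resource, maps the snapshot onto the shared model, and reports changes so the runtime view stays up to date.

```python
# Illustrative monitoring agent over a shared information model.
import time

class InformationModel:
    """Last known state of each managed resource."""
    def __init__(self):
        self.state = {}

    def update(self, resource, snapshot):
        """Record a snapshot; return True if it differs from the stored one."""
        changed = self.state.get(resource) != snapshot
        self.state[resource] = snapshot
        return changed

def probe_app_server():
    """Stand-in for instrumenting a real application server."""
    return {"threads": 42, "heap_mb": 512}

model = InformationModel()
for _ in range(3):                        # one agent's monitoring loop
    if model.update("app-server-1", probe_app_server()):
        print("change detected:", model.state["app-server-1"])
    time.sleep(0.1)
```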

Relevance: 90.00%

Abstract:

With the advent of the cloud computing model, distributed caches have become the cornerstone for building scalable applications. Popular systems like Facebook [1] or Twitter use Memcached [5], a highly scalable distributed object cache, to speed up applications by avoiding database accesses. Distributed object caches assign objects to cache instances based on a hashing function, and objects are not moved from one cache instance to another unless more instances are added to the cache and objects are redistributed. This may lead to situations where some cache instances are overloaded, when some of the objects they store are frequently accessed, while other cache instances are less frequently used. In this paper we propose a multi-resource load-balancing algorithm for distributed cache systems. The algorithm aims at balancing both CPU and memory resources among cache instances by redistributing stored data. Considering the possible conflict between balancing multiple resources at the same time, we give CPU and memory resources weighted priorities based on the runtime load distributions: a scarcer resource is given a higher weight than a less scarce one when load balancing. The system imbalance degree is evaluated based on monitoring information and on the utility load of a node, a unit for resource consumption. Moreover, since continuous rebalancing of the system may affect the QoS of applications using the cache, our data selection policy ensures that each data migration minimizes the system imbalance degree, so that the total reconfiguration cost can be minimized. An extensive simulation is conducted to compare our policy with other policies; it shows a significant improvement in time efficiency and a decrease in reconfiguration cost.
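
A sketch of the weighting idea in Python (illustrative, not the paper's exact algorithm): each resource is weighted by how scarce it currently is, the imbalance degree is a weighted spread of per-node utilisations, and a candidate migration is kept only if it lowers that degree.

```python
# Weighted multi-resource imbalance for two cache nodes (illustration only).
import statistics

nodes = {
    "n1": {"cpu": 0.9, "mem": 0.4},   # per-node utilisation in [0, 1]
    "n2": {"cpu": 0.2, "mem": 0.5},
}

def weights(nodes):
    """Weight each resource by its mean utilisation: scarcer => heavier."""
    mean = {r: statistics.mean(n[r] for n in nodes.values()) for r in ("cpu", "mem")}
    total = sum(mean.values())
    return {r: mean[r] / total for r in mean}

def imbalance(nodes, w):
    """Weighted sum of per-resource deviations across nodes."""
    return sum(w[r] * statistics.pstdev(n[r] for n in nodes.values())
               for r in ("cpu", "mem"))

w = weights(nodes)
before = imbalance(nodes, w)
nodes["n1"]["cpu"] -= 0.3             # simulate migrating a CPU-hot object
nodes["n2"]["cpu"] += 0.3             # from n1 to n2
after = imbalance(nodes, w)
print(before, "->", after)            # keep the move only if after < before
```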

Relevance: 90.00%

Abstract:

The current trend in telecommunication networks points to a future based on the emerging concept of Smart Cities, whose objective is urban development based on a sustainability model that responds to the growing needs of cities. Within Smart Cities we can include the concept of the Smart Grid, which refers to efficient energy management and production systems that enable a sustainable energy system and accommodate renewable energy sources. Systems of this kind appear to users as a set of services to interact with, making the user not merely a customer but another agent in the energy environment. Furthermore, distributed software systems are increasingly common in a telecommunications infrastructure that is ever more extensive and capable. Within this technological field, service-oriented architectures have grown exponentially, above all in the business sector; systems based on these architectures can offer companies and users software built around the concept of a service. With the progress of current hardware, equipment is increasingly miniaturized without giving up the power found in larger systems. One example is the Raspberry Pi, a fully functional computer the size of a pack of cigarettes, at a very low cost.

This PFG (Proyecto Fin de Grado) aims to combine these three concepts, using the Raspberry Pi as a deployment element integrated into a service-oriented Smart Grid architecture. The work follows the proposal defined by the European R&D project e-GOTHAM, whose infrastructure was used to carry out the different tests described in this report. Although that architecture targets the creation of a Smart Grid, the experience reported here could fit other kinds of applications. As part of the study of current software solutions, the possibility of installing an Enterprise Service Bus on the Raspberry Pi was evaluated and the resulting installation was optimized. Once an operational installation was achieved, a controller for a physical device (sensor/actuator), called a Logical Device, was developed as a proof of the viability of using the Raspberry Pi as an element on which to install applications in Smart Grid or Smart Home environments. The success of this experimentation reinforces the idea of considering the Raspberry Pi an important element for the deployment of Smart City services, and even in other technological fields.

Relevance: 90.00%

Abstract:

The Bologna Declaration and the implementation of the European Higher Education Area are promoting the use of active learning methodologies. The aim of this study is to evaluate the effects of applying active learning methodologies on the achievement of generic competences as well as on academic performance. The study was carried out at the Universidad Politécnica de Madrid, where these methodologies were applied to the Operating Systems I subject of the degree in Technical Engineering in Computer Systems. The fundamental hypothesis tested was whether the implementation of active learning methodologies (cooperative learning and problem-based learning) favours the achievement of certain generic competences ('teamwork' and 'planning and time management') and whether it also improves the academic performance of our students. The original contribution of this work is the use of psychometric tests to measure the degree of students' acquired generic competences, instead of the usual opinion surveys. Results indicate that active learning methodologies improve academic performance compared to the traditional lecture/discussion method, according to the success rates obtained. These methods also seem to have an effect on the teamwork competence (the perception of the behaviour of the other members of the group), but not on the perception of each student's own behaviour. Active learning did not produce any significant change in the generic competence 'planning and time management'.

Relevance: 90.00%

Abstract:

A first-rate e-Health system saves lives, provides better patient care, allows complex but useful epidemiologic analyses, and saves money. However, there may also be concerns about the costs and complexities associated with e-Health implementation, and about the need to address the energy footprint of the highly demanding computing facilities involved. This paper proposes a novel, evolved computing paradigm that: (i) provides the required computing and sensing resources; (ii) allows population-wide diffusion; (iii) exploits the storage, communication, and computing services provided by the Cloud; (iv) tackles energy optimization as a first-class requirement, taking it into account during the whole development cycle. The novel computing concept and the multi-layer, top-down energy-optimization methodology obtain promising results in a realistic scenario for cardiovascular tracking and analysis, making Home Assisted Living a reality.