95 results for intelligent computing
Abstract:
Indoor location systems cannot rely on technologies such as GPS (Global Positioning System) to determine the position of a mobile terminal, because its signals are blocked by obstacles such as walls, ceilings and roofs. In such environments, alternative techniques, such as the use of wireless networks, should be considered. The location estimate is made by measuring and analysing one of the parameters of the wireless signal, usually the received power. One of the techniques used to estimate location with wireless networks is fingerprinting. This technique comprises two phases: in the first phase, data is collected from the scenario and stored in a database; the second phase consists in determining the location of the mobile node by comparing the data collected from the wireless transceiver with the data previously stored in the database. In this paper an approach to localisation using fingerprinting based on Fuzzy Logic and pattern searching is presented. The performance of the proposed approach is compared with that of classic methods, and it shows an improvement between 10.24% and 49.43%, depending on the mobile node and the Fuzzy Logic parameters.
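As a rough illustration of the two-phase fingerprinting idea (not the Fuzzy Logic and pattern-searching method proposed in the paper), the following Python sketch matches a measured received-power vector against a previously collected database using a simple nearest-neighbour comparison; the access point names, positions and RSSI values are hypothetical.

```python
import math

# Hypothetical offline-phase database: position -> mean RSSI (dBm) per access point.
fingerprint_db = {
    (0.0, 0.0): {"ap1": -45, "ap2": -70, "ap3": -62},
    (5.0, 0.0): {"ap1": -60, "ap2": -55, "ap3": -68},
    (0.0, 5.0): {"ap1": -52, "ap2": -74, "ap3": -50},
}

def estimate_position(measured, db):
    """Online phase: return the stored position whose fingerprint is closest
    (Euclidean distance in signal space) to the measured RSSI vector."""
    best_pos, best_dist = None, float("inf")
    for pos, reference in db.items():
        common = set(measured) & set(reference)
        if not common:
            continue
        dist = math.sqrt(sum((measured[ap] - reference[ap]) ** 2 for ap in common))
        if dist < best_dist:
            best_pos, best_dist = pos, dist
    return best_pos

print(estimate_position({"ap1": -58, "ap2": -57, "ap3": -66}, fingerprint_db))
```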
Abstract:
With advances in computer science and information technology, computing systems are becoming increasingly complex, with a growing number of heterogeneous components. They are thus becoming more difficult to monitor, manage and maintain, a process well known to be labor intensive and error prone. In addition, traditional approaches to system management struggle to keep up with rapidly changing environments. There is a need for automatic and efficient approaches to monitor and manage complex computing systems. In this paper, we propose an innovative framework for scheduling system management that combines the Autonomic Computing (AC) paradigm, Multi-Agent Systems (MAS) and Nature-Inspired Optimization Techniques (NIT). Additionally, we consider the resolution of realistic problems: the scheduling of a Cutting and Treatment Stainless Steel Sheet Line is evaluated. Results show that the proposed approach has advantages when compared with other scheduling systems.
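The abstract does not detail its nature-inspired optimiser, so the sketch below is only a hedged stand-in: a random-restart hill climber that assigns jobs to identical machines while minimising the makespan. All job durations and the machine count are invented.

```python
import random

# Hypothetical job processing times (minutes) and number of identical machines.
jobs = [12, 7, 19, 5, 9, 14, 3, 11]
machines = 3

def makespan(assignment):
    """Completion time of the busiest machine for a job -> machine assignment."""
    load = [0] * machines
    for job, m in zip(jobs, assignment):
        load[m] += job
    return max(load)

def hill_climb(iterations=2000, seed=0):
    """Repeatedly move one random job to a random machine, keeping improvements."""
    rng = random.Random(seed)
    best = [rng.randrange(machines) for _ in jobs]
    for _ in range(iterations):
        candidate = best[:]
        candidate[rng.randrange(len(jobs))] = rng.randrange(machines)
        if makespan(candidate) <= makespan(best):
            best = candidate
    return best, makespan(best)

assignment, span = hill_climb()
print("assignment:", assignment, "makespan:", span)
```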
Abstract:
Dynamically reconfigurable SRAM-based field-programmable gate arrays (FPGAs) enable the implementation of reconfigurable computing systems where several applications may be run simultaneously, sharing the available resources according to their own immediate functional requirements. To exclude malfunctioning due to faulty elements, the reliability of all FPGA resources must be guaranteed. Since resource allocation takes place asynchronously, an online structural test scheme is the only way of ensuring reliable system operation. On the other hand, this test scheme should not disturb the operation of the circuit; otherwise, availability would be compromised. System performance is also influenced by the efficiency of the management strategies, which must be able to dynamically allocate enough resources when requested by each application. As those resources are allocated and later released, many small free resource blocks are created and left unused due to performance and routing restrictions. To avoid wasting logic resources, the FPGA logic space must be defragmented regularly. This paper presents a non-intrusive active replication procedure that supports the proposed test methodology and the implementation of defragmentation strategies, assuring both the availability of resources and their perfect working condition, without disturbing system operation.
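The defragmentation idea can be pictured with a toy model of the logic space as a one-dimensional row of columns: allocated blocks are relocated towards one end so the free columns form a single contiguous region. This sketch ignores the active replication and online-test details of the actual procedure, and the block names and sizes are invented.

```python
# Toy model of the FPGA logic space: each block occupies a (start_column, width) slot.
allocated = {"app_A": (0, 3), "app_B": (7, 2), "app_C": (12, 4)}
TOTAL_COLUMNS = 20

def defragment(blocks):
    """Compact blocks towards column 0, leaving one contiguous free region."""
    compacted = {}
    next_free = 0
    for name, (start, width) in sorted(blocks.items(), key=lambda b: b[1][0]):
        compacted[name] = (next_free, width)  # relocated (via replication in practice)
        next_free += width
    return compacted, (next_free, TOTAL_COLUMNS - next_free)

new_layout, free_region = defragment(allocated)
print(new_layout, "free region (start, size):", free_region)
```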
Abstract:
For many, the act of teaching was, and still is, an “art”: the most effective teachers and great masters are those who have the ability and the art of conveying their messages and knowledge in a simple and appealing way, regardless of the field of study. Classroom-related information is increasingly digital, so it is important for teachers to master technologies for creating, organising and making content available. This sharing was initially made possible through Web pages and later through LMS (Learning Management System) platforms. Creating a website was a complicated task, both in terms of cost and of mastering Web technology, and it was sometimes necessary to hire professionals for this purpose. CMS (Content Management System) then emerged: Open Source technologies that enable content management. In this context, a study was carried out with the aim of assessing teachers' competences in sharing and managing digital content. This study allowed conclusions to be drawn about the potential and applicability of CMS in education. Its main objective focused on the potential for distributing and sharing Digital Educational Resources, organised from a pedagogical point of view, with students. The role of Cloud Computing in the collaborative sharing of documents was also analysed and studied. To support this research, a model course was designed, implemented in the three main CMS currently available, and the potential of each in this context was evaluated. Finally, the conclusions drawn from this study are presented.
Abstract:
Empowered by virtualisation technology, cloud infrastructures enable the construction of flexible and elastic computing environments, providing an opportunity for energy and resource cost optimisation while enhancing system availability and achieving high performance. A crucial requirement for effective consolidation is the ability to efficiently utilise system resources for high-availability computing and energy-efficiency optimisation to reduce operational costs and carbon footprints in the environment. Additionally, failures in highly networked computing systems can negatively impact system performance substantially, prohibiting the system from achieving its initial objectives. In this paper, we propose algorithms to dynamically construct and readjust virtual clusters to enable the execution of users’ jobs. Allied with an energy optimising mechanism to detect and mitigate energy inefficiencies, our decision-making algorithms leverage virtualisation tools to provide proactive fault-tolerance and energy-efficiency to virtual clusters. We conducted simulations by injecting random synthetic jobs and jobs using the latest version of the Google cloud tracelogs. The results indicate that our strategy improves the work per Joule ratio by approximately 12.9% and the working efficiency by almost 15.9% compared with other state-of-the-art algorithms.
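The consolidation principle behind such energy savings can be illustrated with a toy first-fit-decreasing placement that packs virtual machines onto as few hosts as possible so idle hosts can be powered down. This is only a sketch of the general idea, not the authors' scheduling or fault-tolerance algorithms, and the capacities and demands are invented.

```python
# Hypothetical host CPU capacity and VM CPU demands (arbitrary units).
HOST_CAPACITY = 16
vm_demands = [6, 2, 9, 4, 3, 7, 1, 5]

def first_fit_decreasing(demands, capacity):
    """Pack VMs onto hosts; fewer active hosts means less idle energy."""
    hosts = []       # remaining capacity of each active host
    placement = {}   # vm index -> host index
    for vm, demand in sorted(enumerate(demands), key=lambda x: -x[1]):
        for h, free in enumerate(hosts):
            if free >= demand:
                hosts[h] -= demand
                placement[vm] = h
                break
        else:
            hosts.append(capacity - demand)
            placement[vm] = len(hosts) - 1
    return placement, len(hosts)

placement, active_hosts = first_fit_decreasing(vm_demands, HOST_CAPACITY)
print(placement, "active hosts:", active_hosts)
```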
Abstract:
Extracting the semantic relatedness of terms is an important topic in several areas, including data mining, information retrieval and web recommendation. This paper presents an approach for computing the semantic relatedness of terms using the knowledge base of DBpedia — a community effort to extract structured information from Wikipedia. Several approaches to extract semantic relatedness from Wikipedia using bag-of-words vector models are already available in the literature. The research presented in this paper explores a novel approach using paths on an ontological graph extracted from DBpedia. It is based on an algorithm for finding and weighting a collection of paths connecting concept nodes. This algorithm was implemented in a tool called Shakti that extracts relevant ontological data for a given domain from DBpedia using its SPARQL endpoint. To validate the proposed approach, Shakti was used to recommend web pages on a Portuguese social site related to alternative music, and the results of that experiment are reported in this paper.
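The core idea, scoring relatedness from a collection of weighted paths between concept nodes, can be sketched as follows. The tiny graph, the maximum path length and the 1/2^length weighting are illustrative assumptions, not the algorithm actually implemented in Shakti.

```python
from collections import deque

# Hypothetical ontological graph extracted from DBpedia (undirected adjacency).
graph = {
    "Punk_rock": {"Rock_music", "The_Clash"},
    "Rock_music": {"Punk_rock", "Indie_rock"},
    "Indie_rock": {"Rock_music", "Alternative_music"},
    "Alternative_music": {"Indie_rock", "The_Clash"},
    "The_Clash": {"Punk_rock", "Alternative_music"},
}

def relatedness(source, target, max_length=4):
    """Sum a decaying weight (1 / 2**length) over all simple paths up to max_length."""
    score = 0.0
    queue = deque([(source, [source])])
    while queue:
        node, path = queue.popleft()
        if node == target and len(path) > 1:
            score += 1.0 / 2 ** (len(path) - 1)
            continue  # do not extend paths beyond the target
        if len(path) - 1 >= max_length:
            continue
        for neighbour in graph.get(node, ()):
            if neighbour not in path:
                queue.append((neighbour, path + [neighbour]))
    return score

print(relatedness("Punk_rock", "Alternative_music"))
```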
Abstract:
This paper reports the development of a B2B platform for the personalization of the publicity transmitted during program intervals. The platform as a whole must ensure that the intervals are filled with ads compatible with the profile, context and expressed interests of the viewers. The platform acts as an electronic marketplace for advertising agencies (content producer companies) and multimedia content providers (content distribution companies). The companies, once registered at the platform, are represented by agents who automatically negotiate the price of the interval timeslots according to the specified price range and adaptation behaviour. The candidate ads for a given viewer interval are selected through a matching mechanism between the ad, viewer and current context (program being watched) profiles. The overall architecture of the platform consists of a multiagent system organized into three layers: (i) interface agents that interact with companies; (ii) enterprise agents that model the companies; and (iii) delegate agents that negotiate a specific ad or interval. The negotiation follows a variant of the Iterated Contract Net Interaction Protocol (ICNIP) and is based on the prices offered by the advertising agencies to occupy the viewer's interval.
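A minimal sketch of the matching step only (not the platform's actual mechanism, and not the ICNIP negotiation): each candidate ad is scored against the viewer's interests and the profile of the programme currently being watched. The keyword profiles and the context weight are invented.

```python
def match_score(ad, viewer, context, context_weight=0.5):
    """Fraction of ad keywords present in the viewer profile, plus a weighted
    fraction present in the current programme (context) profile."""
    if not ad:
        return 0.0
    viewer_hits = len(ad & viewer) / len(ad)
    context_hits = len(ad & context) / len(ad)
    return viewer_hits + context_weight * context_hits

# Hypothetical profiles expressed as keyword sets.
ads = {
    "sports_drink": {"sports", "fitness", "outdoors"},
    "games_console": {"gaming", "technology"},
}
viewer_profile = {"technology", "gaming", "music"}
programme_profile = {"gaming", "e-sports"}

ranked = sorted(ads, key=lambda a: match_score(ads[a], viewer_profile, programme_profile),
                reverse=True)
print(ranked)  # best-matching candidate ads first
```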
Abstract:
Environmental management is a complex task. The amount and heterogeneity of the data needed for an environmental decision-making tool are overwhelming without adequate database systems and innovative methodologies. As far as data management, data interaction and data processing are concerned, we propose the use of a Geographical Information System (GIS), whilst for decision making we suggest a Multi-Agent System (MAS) architecture. With the adoption of a GIS we hope to provide a complementary coexistence between heterogeneous data sets, a correct data structure, good storage capacity and a user-friendly interface. By choosing a distributed architecture such as a Multi-Agent System, where each agent is a semi-autonomous Expert System with the necessary skills to cooperate with the others in order to solve a given task, we hope to ensure a dynamic problem decomposition and to achieve better performance compared with standard monolithic architectures. Finally, and in view of the partial, imprecise and ever-changing character of the information available for decision making, Belief Revision capabilities are added to the system. Our aim is to present and discuss an intelligent environmental management system capable of suggesting the most appropriate land-use actions based on the existing spatial and non-spatial constraints.
Abstract:
This article discusses the development of an Intelligent Distributed Environmental Decision Support System, built upon the association of a Multi-agent Belief Revision System with a Geographical Information System (GIS). The inherently multidisciplinary nature of the expertise involved in environmental management, the need to define clear policies that allow the synthesis of divergent perspectives and their systematic application, and the reduction of the costs and time that result from this integration are the main reasons motivating this project. This paper is organised in two parts: in the first part we present and discuss the developed Distributed Belief Revision Test-bed — DiBeRT; in the second part we analyse its application to the environmental decision support domain, with special emphasis on the interface with a GIS.
Abstract:
Application refactorings that imply schema evolution are common activities in programming practice. Although modern object-oriented databases provide transparent schema evolution mechanisms, such refactorings continue to be time-consuming tasks for programmers. In this paper we address this problem with a novel approach based on the aspect-oriented programming and orthogonal persistence paradigms, as well as on our meta-model. An overview of our framework is presented. This framework, a prototype based on that approach, provides applications with persistence and database evolution aspects. It also provides a new pointcut/advice language that enables the modularization of the instance adaptation crosscutting concern of classes that were subject to a schema evolution. We also present an application that relies on our framework. This application was developed without any concern regarding persistence and database evolution; nevertheless, its data is recovered in each execution, and objects in previous schema versions remain transparently available by means of our framework.
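To give a flavour of instance adaptation (here in plain Python rather than the pointcut/advice language described in the paper): objects persisted under an old schema version are upgraded on access through a registered adapter, keeping that concern out of the application code. The class name, fields and version numbers are hypothetical.

```python
# Registry of adapters: (class_name, from_version) -> function upgrading the stored dict.
ADAPTERS = {}

def instance_adapter(class_name, from_version):
    """Decorator registering an adapter for objects persisted under an old schema."""
    def register(func):
        ADAPTERS[(class_name, from_version)] = func
        return func
    return register

@instance_adapter("Customer", from_version=1)
def upgrade_customer_v1(data):
    # Schema v2 split 'name' into 'first_name' and 'last_name'.
    first, _, last = data.pop("name").partition(" ")
    data.update({"first_name": first, "last_name": last, "version": 2})
    return data

def load(class_name, stored):
    """Transparently adapt a stored instance to the current schema version on load."""
    while (class_name, stored.get("version")) in ADAPTERS:
        stored = ADAPTERS[(class_name, stored["version"])](stored)
    return stored

print(load("Customer", {"version": 1, "name": "Ada Lovelace"}))
```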
Abstract:
A distributed, agent-based intelligent system models and simulates a smart grid using physical players and computationally simulated agents. The proposed system can assess the impact of demand response programs.
Abstract:
The development of an intelligent wheelchair (IW) platform that may be easily adapted to any commercial electric powered wheelchair and aid any person with special mobility needs is the main objective of this project. To achieve this objective, three distinct control methods were implemented in the IW: manual, shared and automatic. Several algorithms were developed for each of these control methods. This paper presents three of the most significant of those algorithms, with emphasis on the shared control method. Experiments were performed by users suffering from cerebral palsy, using a realistic simulator, in order to validate the approach. The experiments revealed the importance of using shared (aided) controls for users with severe disabilities. The patients still felt they had complete control over the wheelchair movement when using shared control at a 50% level, and this control type was therefore very well accepted. It may thus be used in intelligent wheelchairs, since it corrects the direction in case of involuntary movements while still giving the user a sense of complete control over the IW movement.
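The 50% shared-control level mentioned above can be read as a simple blend of the user's joystick command with the wheelchair's autonomous correction. The sketch below is an illustrative weighted average, not the project's actual controllers, and the velocity values are made up.

```python
def shared_control(user_cmd, auto_cmd, user_weight=0.5):
    """Blend user and autonomous (linear, angular) velocity commands.
    user_weight=1.0 is pure manual control, 0.0 is fully automatic."""
    return tuple(user_weight * u + (1.0 - user_weight) * a
                 for u, a in zip(user_cmd, auto_cmd))

# Hypothetical case: an involuntary swerve to the right (user angular -0.8 rad/s)
# while the obstacle-avoidance layer suggests going straight.
user = (0.6, -0.8)   # (linear m/s, angular rad/s) from the joystick
auto = (0.6, 0.0)    # correction proposed by the navigation layer
print(shared_control(user, auto))  # -> (0.6, -0.4): the swerve is softened
```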
Abstract:
Technology is present in almost every aspect of people's daily lives. Take, for instance, the smartphone. This device is usually equipped with a GPS module, which may be used as an orientation system if it carries the right functionalities. The problem is that such applications may be complex to operate and may not be accessible to everyone. Therefore, the main goal here is to develop an orientation system that may help people with cognitive disabilities in their day-to-day journeys when their caregivers are absent. In addition, to keep paid helpers aware of the current location of the disabled person, a localization system is also considered. Knowing their current locations, caregivers may engage in other activities without neglecting their prime work, while at the same time people with cognitive disabilities become more independent.
Abstract:
Developing a client-server application for a mobile environment can bring many challenges because of the limitations of mobile devices. This paper therefore discusses the most reliable way to exchange information between a server and an Android mobile application, since it is important for users to have an application that works responsively and, preferably, without errors. In this discussion, two data transfer protocols (Socket and HTTP) and three data serialization formats (XML, JSON and Protocol Buffers) were tested using several metrics to evaluate which is the most practical and fastest to use.
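A rough idea of how such a comparison can be set up (the paper's actual test harness and metrics are not given): the sketch below times JSON and XML serialization of the same payload using only the Python standard library. Protocol Buffers is omitted because it requires a compiled schema, and the payload is invented.

```python
import json
import timeit
import xml.etree.ElementTree as ET

# Hypothetical payload exchanged between the server and the Android client.
records = [{"id": i, "name": f"item-{i}", "price": i * 0.5} for i in range(500)]

def to_json():
    return json.dumps(records)

def to_xml():
    root = ET.Element("records")
    for rec in records:
        item = ET.SubElement(root, "record")
        for key, value in rec.items():
            ET.SubElement(item, key).text = str(value)
    return ET.tostring(root)

for name, func in (("JSON", to_json), ("XML", to_xml)):
    seconds = timeit.timeit(func, number=200)
    print(f"{name}: {seconds:.3f}s for 200 serializations, "
          f"{len(func())} bytes per message")
```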
Abstract:
Nowadays, the remarkable growth of the mobile device market has led to the need for location-aware applications. However, a person's location is sometimes difficult to obtain, since most of these devices only have a GPS (Global Positioning System) chip to retrieve location. In order to overcome this limitation and to provide location everywhere (even where a structured environment does not exist), a wearable inertial navigation system is proposed, which is a convenient way to track people in situations where other localization systems fail. The system combines pedestrian dead reckoning with GPS, using widely available, low-cost and low-power hardware components. The system's innovation lies in the information fusion and the use of probabilistic methods to learn a person's gait behavior in order to correct, in real time, the drift errors given by the sensors.
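As a much-simplified illustration of the fusion idea (not the probabilistic gait-learning method of the paper): position is propagated by step length and heading from the inertial sensors, and each GPS fix pulls the estimate back with a small gain to bound the drift. The step data, the fixes and the gain are assumed values.

```python
import math

def dead_reckoning_with_gps(steps, gps_fixes, gain=0.3):
    """steps: list of (step_length_m, heading_rad); gps_fixes: dict step_index -> (x, y).
    Returns the trajectory estimated by a simple complementary filter."""
    x, y = 0.0, 0.0
    trajectory = [(x, y)]
    for i, (length, heading) in enumerate(steps):
        # Inertial update: advance one step along the current heading.
        x += length * math.cos(heading)
        y += length * math.sin(heading)
        # GPS correction: pull the estimate towards the fix to limit drift.
        if i in gps_fixes:
            gx, gy = gps_fixes[i]
            x += gain * (gx - x)
            y += gain * (gy - y)
        trajectory.append((x, y))
    return trajectory

steps = [(0.7, 0.0)] * 10                   # ten 0.7 m steps heading east
gps_fixes = {4: (3.2, 0.3), 9: (6.8, 0.1)}  # occasional (noisy) GPS positions
print(dead_reckoning_with_gps(steps, gps_fixes)[-1])
```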