949 results for Intelligent computing techniques
Abstract:
A primary goal of context-aware systems is delivering the right information at the right place and right time to users, in order to enable them to make effective decisions and improve their quality of life. There are three key requirements for achieving this goal: determining what information is relevant, personalizing it based on the users’ context (location, preferences, behavioral history, etc.), and delivering it to them in a timely manner without an explicit request from them. These requirements create a paradigm that we term “Proactive Context-aware Computing”. Most existing context-aware systems fulfill only a subset of these requirements. Many of these systems focus only on personalization of the requested information based on users’ current context. Moreover, they are often designed for specific domains. In addition, most existing systems are reactive: the users request some information and the system delivers it to them. These systems are not proactive, i.e., they cannot anticipate users’ intent and behavior and act proactively without an explicit request from them. In order to overcome these limitations, we need to conduct a deeper analysis and enhance our understanding of context-aware systems that are generic, universal, proactive, and applicable to a wide variety of domains. To support this dissertation, we explore several directions. The most significant sources of information about users today are clearly smartphones. A large amount of users’ context can be acquired through them, and they can be used as an effective means to deliver information to users. In addition, social media such as Facebook, Flickr, and Foursquare provide a rich and powerful platform to mine users’ interests, preferences, and behavioral history. We employ the ubiquity of smartphones and the wealth of information available from social media to address the challenge of building proactive context-aware systems. We have implemented and evaluated several approaches, including some as part of the Rover framework, to achieve the paradigm of Proactive Context-aware Computing. Rover is a context-aware research platform that has been evolving for the last six years. Since location is one of the most important contexts for users, we have developed ‘Locus’, an indoor localization, tracking, and navigation system for multi-story buildings. Other important dimensions of users’ context include the activities that they are engaged in. To this end, we have developed ‘SenseMe’, a system that leverages the smartphone and its multiple sensors to perform multidimensional context and activity recognition for users. As part of the ‘SenseMe’ project, we also conducted an exploratory study of privacy, trust, risks, and other concerns of users with smartphone-based personal sensing systems and applications. To determine what information would be relevant to users’ situations, we have developed ‘TellMe’, a system that employs a new, flexible, and scalable approach based on Natural Language Processing techniques to perform bootstrapped discovery and ranking of relevant information in context-aware systems. In order to personalize the relevant information, we have also developed an algorithm and system for mining a broad range of users’ preferences from their social network profiles and activities.
For recommending new information to users based on their past behavior and context history (such as visited locations, activities, and time), we have developed a recommender system and approach for performing multi-dimensional collaborative recommendations using tensor factorization. For timely delivery of personalized and relevant information, it is essential to anticipate and predict users’ behavior. To this end, we have developed a unified infrastructure within the Rover framework and implemented several novel approaches and algorithms that employ various contextual features and state-of-the-art machine learning techniques for building diverse behavioral models of users. Examples of generated models include classifying users’ semantic places and mobility states, predicting their availability for accepting calls on smartphones, and inferring their device charging behavior. Finally, to enable proactivity in context-aware systems, we have also developed a planning framework based on HTN (Hierarchical Task Network) planning. Together, these works provide a major push in the direction of proactive context-aware computing.
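To make the tensor-factorization idea concrete, the following is a minimal sketch of multi-dimensional collaborative recommendation via a CP decomposition of a user × item × context tensor, written with NumPy. The data, rank, and hyperparameters are illustrative assumptions, not the dissertation's actual Rover implementation.

```python
import numpy as np

# Hypothetical toy tensor: users x locations x time-slots, with observed visit counts.
# Generic CP-decomposition sketch, not the dissertation's implementation.
rng = np.random.default_rng(0)
n_users, n_items, n_ctx, rank = 20, 15, 7, 4
X = rng.poisson(1.0, size=(n_users, n_items, n_ctx)).astype(float)
mask = rng.random(X.shape) < 0.3          # only 30% of entries are observed

U = rng.normal(scale=0.1, size=(n_users, rank))
V = rng.normal(scale=0.1, size=(n_items, rank))
W = rng.normal(scale=0.1, size=(n_ctx, rank))

lr, reg = 0.01, 0.02
for epoch in range(200):
    # Reconstruction from the three factor matrices (CP model).
    X_hat = np.einsum('ur,ir,cr->uic', U, V, W)
    err = (X_hat - X) * mask              # gradient only on observed entries
    # Gradient steps for each factor matrix.
    U -= lr * (np.einsum('uic,ir,cr->ur', err, V, W) + reg * U)
    V -= lr * (np.einsum('uic,ur,cr->ir', err, U, W) + reg * V)
    W -= lr * (np.einsum('uic,ur,ir->cr', err, U, V) + reg * W)

# Recommend: score all (item, context) cells for a given user.
scores = np.einsum('r,ir,cr->ic', U[0], V, W)
print(scores.shape)
```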
Abstract:
The central motif of this work is prediction and optimization in the presence of multiple interacting intelligent agents. We use the phrase 'intelligent agents' to imply, in some sense, a 'bounded rationality', the exact meaning of which varies depending on the setting. Our agents may not be 'rational' in the classical game-theoretic sense, in that they don't always optimize a global objective. Rather, they rely on heuristics, as is natural for human agents or even software agents operating in the real world. Within this broad framework we study the problem of influence maximization in social networks, where the behavior of agents is myopic but complexity stems from the structure of the interaction networks. In this setting, we generalize two well-known models and give new algorithms and hardness results for our models. Then we move on to models where the agents reason strategically but are faced with considerable uncertainty. For such games, we give a new solution concept and analyze a real-world game using our techniques. Finally, the richest model we consider is that of Network Cournot Competition, which deals with strategic resource allocation in hypergraphs, where agents reason strategically and their interaction is specified indirectly via the players' utility functions. For this model, we give the first equilibrium computability results. In all of the above problems, we assume that the payoffs for the agents are known. However, for real-world games, obtaining the payoffs can be quite challenging. To this end, we also study the inverse problem of inferring payoffs given game history. We propose and evaluate a data-analytic framework and show that it is fast and performant.
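As a point of reference for the influence-maximization setting above, here is a minimal sketch of the standard greedy algorithm under the independent cascade model (the classical baseline, not the generalized models or algorithms of this thesis); the toy graph, propagation probability, and seed budget are illustrative.

```python
import random

def simulate_ic(graph, seeds, p=0.1, trials=200):
    """Estimate expected spread of a seed set by Monte Carlo simulation."""
    total = 0
    for _ in range(trials):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            nxt = []
            for u in frontier:
                for v in graph.get(u, []):
                    if v not in active and random.random() < p:
                        active.add(v)
                        nxt.append(v)
            frontier = nxt
        total += len(active)
    return total / trials

def greedy_seeds(graph, k, p=0.1):
    """Pick k seeds, each time adding the node with the largest marginal gain."""
    seeds = []
    for _ in range(k):
        best, best_gain = None, -1.0
        base = simulate_ic(graph, seeds, p)
        for v in graph:
            if v in seeds:
                continue
            gain = simulate_ic(graph, seeds + [v], p) - base
            if gain > best_gain:
                best, best_gain = v, gain
        seeds.append(best)
    return seeds

toy_graph = {0: [1, 2], 1: [2, 3], 2: [3], 3: [4], 4: []}
print(greedy_seeds(toy_graph, k=2))
```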
Abstract:
This paper introduces two ongoing research projects which seek to apply computer modelling techniques in order to simulate human behaviour within organisations. Previous research in other disciplines has suggested that complex social behaviours are governed by relatively simple rules which, when identified, can be used to accurately model such processes using computer technology. The broad objective of our research is to develop a similar capability within organisational psychology.
Abstract:
Computational intelligence support for decision making is becoming increasingly popular and essential among medical professionals. Also, with modern medical devices capable of communicating with ICT, the resulting models can readily be translated into software. Machine learning solutions for medicine range from the robust but opaque paradigms of support vector machines and neural networks to the similarly performant, yet more comprehensible, decision trees and rule-based models. How, then, can such different techniques be combined so that the professional obtains the whole spectrum of their respective advantages? The presented approaches have been conceived for various medical problems, while permanently bearing in mind the balance between good accuracy and understandable interpretation of the decision, in order to truly establish a trustworthy ‘artificial’ second opinion for the medical expert.
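One common way to pair an accurate but opaque learner with a comprehensible one, in the spirit of the balance described above, is to train the opaque model for accuracy and fit a shallow decision tree as an interpretable surrogate of its decisions. The sketch below is an illustrative assumption (using scikit-learn and a public dataset), not the specific hybridization used in the presented approaches.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier, export_text

# Fit an SVM, then train a shallow decision tree as a surrogate on its predictions.
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

svm = SVC(kernel='rbf', C=1.0).fit(X_tr, y_tr)
print("SVM accuracy:", svm.score(X_te, y_te))

surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_tr, svm.predict(X_tr))          # mimic the SVM's decisions
print("Surrogate fidelity:", surrogate.score(X_te, svm.predict(X_te)))
print(export_text(surrogate))                   # readable rules for the clinician
```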
Abstract:
In contemporary societies, higher education must shape individuals able to solve problems in a workable and simple manner; therefore, a multidisciplinary view of problems, with insights from disciplines such as psychology, mathematics, or computer science, becomes mandatory. Undeniably, the great challenge for teachers is to provide comprehensive training in General Chemistry with high standards of quality, aiming not only at the promotion of the student’s academic success, but also at the understanding of the competences/skills required for their future practice. Thus, this work focuses on the development of an intelligent system to assess the Quality-of-General-Chemistry-Learning, based on factors related to the subject, the teachers, and the students.
Abstract:
Presentations for the course Interfaces para Entornos Inteligentes (Interfaces for Intelligent Environments) of the Máster en Tecnologías de la Informática / Machine Learning and Data Mining.
Abstract:
In this report, we develop an intelligent adaptive neuro-fuzzy controller using adaptive neuro-fuzzy inference system (ANFIS) techniques. We begin with a standard proportional-derivative (PD) controller and use the PD controller data to train the ANFIS and develop a fuzzy controller. We then propose and validate a method to implement this control strategy on commercial off-the-shelf (COTS) hardware. An analysis is made of the choice of filters for attitude estimation. These choices are limited by the complexity of the filter and by the computing ability and memory constraints of the microcontroller. Simplified Kalman filters are found to estimate attitude well under these constraints. Using model-based design techniques, the models are implemented on an embedded system. This enables the deployment of fuzzy controllers on enthusiast-grade controllers. We evaluate the feasibility of the proposed control strategy in a model-in-the-loop simulation. We then propose a rapid prototyping strategy, allowing us to deploy these control algorithms on a system consisting of a combination of an ARM-based microcontroller and two Arduino-based controllers. We use the code generation capabilities within MATLAB/Simulink, in combination with multiple open-source projects, to deploy code to an ARM Cortex-M4-based controller board. We also evaluate this strategy on an ARM A8-based board and a much less powerful Arduino-based flight controller. We conclude by demonstrating the feasibility of fuzzy controllers on COTS hardware, pointing out the limitations of the current hardware, and making suggestions for hardware that we think would be better suited for memory-heavy controllers.
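As a minimal illustration of the simplified attitude estimation mentioned above, the sketch below implements a single-axis Kalman filter that fuses a gyro rate with an accelerometer-derived angle. The state model, noise covariances, and sample data are assumptions for illustration, not the report's tuned filter.

```python
import numpy as np

dt = 0.01                                  # 100 Hz sample rate (illustrative)
Q = np.diag([1e-4, 1e-6])                  # process noise (angle, gyro bias)
R = np.array([[1e-2]])                     # accelerometer measurement noise
A = np.array([[1.0, -dt], [0.0, 1.0]])     # state transition for [angle, bias]
B = np.array([[dt], [0.0]])                # gyro rate enters as a control input
H = np.array([[1.0, 0.0]])                 # we measure the angle only

x = np.zeros((2, 1))                       # state: [angle, gyro bias]
P = np.eye(2)

def kalman_step(x, P, gyro_rate, accel_angle):
    # Predict with the gyro, correct with the accelerometer angle.
    x = A @ x + B * gyro_rate
    P = A @ P @ A.T + Q
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x = x + K * (accel_angle - (H @ x)[0, 0])
    P = (np.eye(2) - K @ H) @ P
    return x, P

for gyro, accel in [(0.5, 0.004), (0.5, 0.010), (0.4, 0.013)]:   # fake samples
    x, P = kalman_step(x, P, gyro, accel)
print("estimated angle and bias:", x.ravel())
```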
Abstract:
This thesis presents a study of Grid data access patterns in distributed analysis in the CMS experiment at the LHC accelerator. This study ranges from a deep analysis of the historical patterns of access to the most relevant data types in CMS, to the exploitation of a supervised Machine Learning classification system to set up machinery able to predict future data access patterns - i.e., the so-called “popularity” of CMS datasets on the Grid - with a focus on specific data types. All the CMS workflows run on the Worldwide LHC Computing Grid (WLCG) computing centers (Tiers), and in particular the distributed analysis system sustains hundreds of users and applications submitted every day. These applications (or “jobs”) access different data types hosted on disk storage systems at a large set of WLCG Tiers. The detailed study of how this data is accessed, in terms of data types, hosting Tiers, and different time periods, provides valuable insight into storage occupancy over time and into the different access patterns, and ultimately allows suggested actions to be extracted from this information (e.g., targeted disk clean-up and/or data replication). In this sense, the application of Machine Learning techniques makes it possible to learn from past data and to gain predictive power over future CMS data access patterns. Chapter 1 provides an introduction to High Energy Physics at the LHC. Chapter 2 describes the CMS Computing Model, with special focus on the data management sector, also discussing the concept of dataset popularity. Chapter 3 describes the study of CMS data access patterns at different levels of depth. Chapter 4 offers a brief introduction to basic machine learning concepts, introduces their application in CMS, and discusses the results obtained with this approach in the context of this thesis.
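To illustrate the kind of supervised classification involved, here is a minimal sketch that trains a classifier to label datasets as “popular” or not from simple past-access features. The feature names, the synthetic data, and the scikit-learn model are illustrative assumptions, not the actual CMS popularity pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for per-dataset access features (not CMS data).
rng = np.random.default_rng(1)
n = 5000
accesses_last_week = rng.poisson(5, n)
accesses_last_month = accesses_last_week + rng.poisson(15, n)
n_replicas = rng.integers(1, 6, n)
dataset_age_days = rng.integers(1, 1000, n)
X = np.column_stack([accesses_last_week, accesses_last_month,
                     n_replicas, dataset_age_days])
# Label: will the dataset be accessed above a threshold next week?
y = (accesses_last_week + rng.poisson(3, n) > 8).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```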
Abstract:
Efficient and reliable techniques for power delivery and utilization are needed to account for the increased penetration of renewable energy sources in electric power systems. Such methods are also required for current and future demands of plug-in electric vehicles and high-power electronic loads. Distributed control and optimal power network architectures will lead to viable solutions to the energy management issue with a high level of reliability and security. This dissertation is aimed at developing and verifying new techniques for distributed control by deploying DC microgrids, involving distributed renewable generation and energy storage, throughout the operating AC power system. To achieve the findings of this dissertation, an energy system architecture was developed involving AC and DC networks, both with distributed generation and demands. The various components of the DC microgrid were designed and built, including DC-DC converters, voltage source inverters (VSIs), and AC-DC rectifiers featuring novel designs developed by the candidate. New control techniques were developed and implemented to maximize the operating range of the power conditioning units used for integrating renewable energy into the DC bus. The control and operation of the DC microgrids in the hybrid AC/DC system involve intelligent energy management. Real-time energy management algorithms were developed and experimentally verified. These algorithms are based on intelligent decision-making elements along with an optimization process, aimed at enhancing the overall performance of the power system and mitigating the effect of heavy non-linear loads with variable intensity and duration. The developed algorithms were also used for managing the charging/discharging process of plug-in electric vehicle emulators. The protection of the proposed hybrid AC/DC power system was studied. Fault analysis, a protection scheme and coordination, and ideas on how to retrofit currently available protection concepts and devices for AC systems in a DC network were presented. A study was also conducted on how changing the distribution architecture and distributing the storage assets across the various zones of the network affect the system’s dynamic security and stability. A practical shipboard power system was studied as an example of a hybrid AC/DC power system involving pulsed loads. The proposed hybrid AC/DC power system, along with most of the ideas, controls, and algorithms presented in this dissertation, was experimentally verified at the Smart Grid Testbed, Energy Systems Research Laboratory.
Abstract:
Two key solutions to reduce greenhouse gas emissions and increase overall energy efficiency are to maximize the utilization of renewable energy resources (RERs) to generate energy for load consumption and to shift to low- or zero-emission plug-in electric vehicles (PEVs) for transportation. The present U.S. aging and overburdened power grid infrastructure is under tremendous pressure to handle the issues involved in the penetration of RERs and PEVs. The future power grid should be designed for the effective utilization of distributed RERs and distributed generation, to intelligently respond to varying customer demand, including PEVs, with a high level of security, stability, and reliability. This dissertation develops and verifies such a hybrid AC-DC power system. The system operates in a distributed manner, incorporating multiple AC and DC components, and works in both grid-connected and islanding modes. The verification was performed on a laboratory-based hybrid AC-DC power system testbed used as a hardware/software platform. In this system, RER emulators, together with their maximum power point tracking technology and power electronics converters, were designed to test different energy harvesting algorithms. Energy storage devices, including lithium-ion batteries and ultra-capacitors, were used to optimize the performance of the hybrid power system. A lithium-ion battery smart energy management system with thermal and state-of-charge self-balancing was proposed to protect the energy storage system. A grid-connected DC PEV parking garage emulator with five lithium-ion batteries was also designed with smart charging functions that can emulate future vehicle-to-grid (V2G), vehicle-to-vehicle (V2V), and vehicle-to-house (V2H) services, including grid voltage and frequency regulation, spinning reserves, microgrid islanding detection, and energy resource support. The results show successful integration of the developed techniques for control and energy management of future hybrid AC-DC power systems with high penetration of RERs and PEVs.
Abstract:
Developers strive to create innovative Artificial Intelligence (AI) behaviour in their games as a key selling point. Machine Learning is an area of AI that looks at how applications and agents can be programmed to learn their own behaviour without the need to manually design and implement each aspect of it. Machine learning methods have been utilised infrequently within games and are usually trained to learn offline, before the game is released to the players. In order to investigate new ways AI could be applied innovatively to games, it is wise to explore how machine learning methods could be utilised in real time as the game is played, so as to allow AI agents to learn directly from the player or their environment. Two machine learning methods were implemented in a simple 2D fighter test game to allow the agents to fully showcase their learned behaviour as the game is played. The methods chosen were Q-Learning and an N-Gram-based system. It was found that N-Grams and Q-Learning could significantly benefit game developers, as they facilitate fast, realistic learning at run-time.
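For reference, the tabular Q-Learning loop named above is small enough to sketch directly; the states, actions, and reward below are placeholders for the fighter game rather than the study's actual implementation.

```python
import random
from collections import defaultdict

ACTIONS = ["punch", "kick", "block", "move_away"]
alpha, gamma, epsilon = 0.1, 0.9, 0.2
Q = defaultdict(lambda: {a: 0.0 for a in ACTIONS})

def choose_action(state):
    # Epsilon-greedy: explore occasionally, otherwise exploit the best-known action.
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(Q[state], key=Q[state].get)

def update(state, action, reward, next_state):
    # Standard Q-Learning update rule.
    best_next = max(Q[next_state].values())
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])

# Illustrative game step: the state could encode distance to the player and the
# player's last move; the reward is damage dealt minus damage taken this frame.
state = ("close", "player_punched")
a = choose_action(state)
update(state, a, reward=1.0, next_state=("close", "player_blocked"))
```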
Abstract:
Data sources are often dispersed geographically in real-life applications. Finding a knowledge model may require joining all the data sources and running a machine learning algorithm on the joint set. We present an alternative based on a Multi-Agent System (MAS): an agent mines one data source in order to extract a local theory (knowledge model) and then merges it with the previous MAS theory using a knowledge fusion technique. This way, we obtain a global theory that summarizes the distributed knowledge without spending resources and time on joining the data sources. New experiments have been executed, including statistical significance analysis. The results show that, as a result of knowledge fusion, the accuracy of the initial theories is significantly improved, as well as that of the monolithic solution.
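A minimal sketch of the distributed idea follows: each agent fits a local model on its own data partition, and the models are then combined, here by simple prediction-level voting rather than the paper's actual rule-level knowledge fusion technique. The data, partitioning, and models are illustrative.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

# Each "agent" mines one partition and ships its theory (model), not its data.
X, y = load_iris(return_X_y=True)
rng = np.random.default_rng(0)
idx = rng.permutation(len(X))
partitions = np.array_split(idx, 3)          # three geographically dispersed sources

local_models = []
for part in partitions:
    m = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X[part], y[part])
    local_models.append(m)

def fused_predict(x_row):
    votes = [m.predict(x_row.reshape(1, -1))[0] for m in local_models]
    return max(set(votes), key=votes.count)

acc = np.mean([fused_predict(x) == t for x, t in zip(X, y)])
print("fused accuracy on all data:", acc)
```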
Abstract:
A link between patterns of pelvic growth and human life history is supported by the finding that, cross-culturally, variation in the maturation rates of the female pelvis is correlated with variation in the ages of menarche and first reproduction; indeed, it is well known that the dimensions of the human pelvic bones depend on gender and vary with age. One feature in which humans appear to be unique is the prolonged growth of the pelvis after the age of sexual maturity. Both the total superoinferior length and the mediolateral breadth of the pelvis continue to grow markedly after puberty, and do not reach adult proportions until the late teen years. This continuation of growth is accomplished by the relatively late fusion of the separate centers of ossification that form the bones of the pelvis. Hence, in this work we focus on the development of an intelligent decision support system to predict an individual’s age based on pelvic dimension criteria. Some basic image processing techniques were applied in order to extract the relevant features from pelvic X-rays, with the computational framework built on top of a Logic Programming approach to Knowledge Representation and Reasoning that caters for the handling of incomplete, unknown, or even self-contradictory information, complemented with a Case-Based approach to computing.
Abstract:
Modern scientific discoveries are driven by an insatiable demand for computational resources. High-Performance Computing (HPC) systems aggregate computing power to deliver considerably higher performance than a typical desktop computer can provide, in order to solve large problems in science, engineering, or business. An HPC room in a datacenter is a complex controlled environment that hosts thousands of computing nodes consuming electrical power in the range of megawatts, which is completely transformed into heat. Although a datacenter contains sophisticated cooling systems, our studies provide quantitative evidence of thermal bottlenecks in real-life production workloads, showing the presence of significant spatial and temporal thermal and power heterogeneity. Therefore, minor thermal issues/anomalies can potentially start a chain of events that leads to an imbalance between the amount of heat generated by the computing nodes and the heat removed by the cooling system, giving rise to thermal hazards. Although thermal anomalies are rare events, timely anomaly detection/prediction is vital to avoid damage to IT and facility equipment and outages of the datacenter, with severe societal and business losses. For this reason, automated approaches to detect thermal anomalies in datacenters have considerable potential. This thesis analyzed and characterized the power and thermal behavior of a Tier-0 datacenter (CINECA) during production and under abnormal thermal conditions. Then, a Deep Learning (DL)-powered thermal hazard prediction framework is proposed. The proposed models are validated against real thermal hazard events reported for the studied HPC cluster while in production. To the best of my knowledge, this thesis is the first empirical study of thermal anomaly detection and prediction techniques on a real large-scale HPC system. For this thesis, I used a large-scale dataset comprising monitoring data from tens of thousands of sensors over around 24 months, collected at intervals of around 20 seconds.
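As one illustration of the kind of DL-based detection described above, here is a minimal autoencoder sketch: trained only on windows of standardized node-temperature readings from normal operation, it flags windows with high reconstruction error as potential anomalies. The architecture, training data, and scoring are assumptions for illustration, not the thesis framework.

```python
import torch
import torch.nn as nn

window = 32                                    # 32 consecutive temperature samples
model = nn.Sequential(nn.Linear(window, 8), nn.ReLU(), nn.Linear(8, window))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Fake standardized "normal" windows standing in for node-temperature telemetry.
normal = torch.randn(1024, window) * 0.5
for _ in range(300):                           # train on normal behaviour only
    opt.zero_grad()
    loss = loss_fn(model(normal), normal)
    loss.backward()
    opt.step()

def anomaly_score(batch):
    # Mean reconstruction error per window; high values suggest an anomaly.
    with torch.no_grad():
        return ((model(batch) - batch) ** 2).mean(dim=1)

spike = normal[:1].clone()
spike[0, 16:] += 5.0                           # simulated runaway temperature
print(anomaly_score(normal[:1]).item(), anomaly_score(spike).item())
```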
Abstract:
The modern industrial environment is populated by a myriad of intelligent devices that collaborate to accomplish the numerous business processes in place at production sites. The close collaboration between humans and work machines poses new and interesting challenges that industry must overcome in order to implement the new digital policies demanded by the industrial transition. The Industry 5.0 movement is a companion revolution to the previous Industry 4.0, and it relies on three characteristics that any industrial sector should have and pursue: human centrality, resilience, and sustainability. The fifth industrial revolution cannot be realized without building on the implementation of Industry 4.0-enabled platforms. The common feature found in the development of this kind of platform is the need to integrate the Information Technology (IT) and Operational Technology (OT) layers. Our thesis work focuses on the implementation of a platform addressing all the digitization features foreseen by the fourth industrial revolution, making the IT/OT convergence inside production plants an improvement and not a risk. Furthermore, we added modular features to our platform enabling the Industry 5.0 vision. We favored human centrality through mobile crowdsensing techniques, and reliability and sustainability through pluggable cloud computing services combined with data coming from the crowd. We achieved important and encouraging results in all the domains in which we conducted our experiments. Our IT/OT convergence-enabled platform exhibits the performance needed to satisfy the strict requirements of production sites. The multi-layer capability of the framework enables the exploitation of data not strictly coming from work machines, allowing a closer interaction between the company, its employees, and its customers.