878 results for Multi-agent computing
Abstract:
As the complexity of parallel applications increases, the performance limitations resulting from computational load imbalance become dominant. Mapping the problem space to the processors of a parallel machine in a manner that balances the workload of each processor will typically reduce the run-time. In many cases the computation time required for a given calculation cannot be predetermined, even at run-time, and so a static partition of the problem yields poor performance. For problems in which the computational load across the discretisation is dynamic and inhomogeneous, for example multi-physics problems involving fluid and solid mechanics with phase changes, the workload for a static subdomain will change over the course of a computation and cannot be estimated beforehand. For such applications the mapping of loads to processors must change dynamically at run-time in order to maintain reasonable efficiency. The issue of dynamic load balancing is examined in the context of PHYSICA, a three-dimensional unstructured-mesh multi-physics continuum mechanics computational modelling code.
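To make the idea concrete, here is a minimal, hypothetical sketch (not PHYSICA's actual scheme) of cost-driven dynamic re-mapping: each work unit carries a cost measured during the previous time step, and a greedy pass reassigns units so that no processor is left far heavier than the rest. The costs and the greedy heuristic are illustrative only.

```python
# Minimal sketch of cost-driven dynamic re-mapping (not PHYSICA's actual scheme):
# work units carry a cost measured in the previous time step, and a greedy
# re-partition assigns each unit to the currently lightest processor.
import heapq

def rebalance(unit_costs, num_procs):
    """Greedily assign work units to processors so per-processor load is even."""
    heap = [(0.0, p) for p in range(num_procs)]   # (current_load, processor_id)
    heapq.heapify(heap)
    mapping = {}
    # placing the most expensive units first improves the greedy balance
    for unit, cost in sorted(unit_costs.items(), key=lambda kv: -kv[1]):
        load, proc = heapq.heappop(heap)
        mapping[unit] = proc
        heapq.heappush(heap, (load + cost, proc))
    return mapping

# costs measured at run-time for 8 mesh sub-regions, distributed over 3 processors
costs = {0: 4.2, 1: 1.1, 2: 0.7, 3: 3.9, 4: 2.5, 5: 0.4, 6: 1.8, 7: 2.2}
print(rebalance(costs, 3))
```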
Abstract:
The central product of the DRAMA (Dynamic Re-Allocation of Meshes for parallel Finite Element Applications) project is a library comprising a variety of tools for dynamic re-partitioning of unstructured Finite Element (FE) applications. The input to the DRAMA library is the computational mesh, and corresponding costs, partitioned into sub-domains. The core library functions then perform a parallel computation of a mesh re-allocation that will re-balance the costs based on the DRAMA cost model. We discuss the basic features of this cost model, which allows a general approach to load identification, modelling and imbalance minimisation. Results from crash simulations are presented which show the necessity for multi-phase/multi-constraint partitioning components.
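As a small illustration of why multi-phase partitioning matters, the sketch below (not DRAMA's actual cost model; the per-subdomain figures are invented) computes a max/average imbalance ratio separately for two phases of a hypothetical crash-simulation step and for their sum: the total can look nearly balanced even when one phase is badly skewed.

```python
# Illustrative per-phase imbalance check; DRAMA's real cost model is richer.
def imbalance(costs):
    """max/average cost ratio across subdomains; 1.0 means perfect balance."""
    return max(costs) / (sum(costs) / len(costs))

# hypothetical per-subdomain costs for two phases of a crash-simulation step
contact_phase = [2.0, 0.5, 0.4, 0.6]   # contact search concentrated on one subdomain
element_phase = [5.1, 4.9, 5.0, 5.2]   # element computations already well balanced

print(imbalance(contact_phase))   # ~2.29 -> badly imbalanced phase
print(imbalance(element_phase))   # ~1.03 -> fine
# the total cost hides the problem: ~1.20, although the contact phase is skewed
print(imbalance([c + e for c, e in zip(contact_phase, element_phase)]))
```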
Abstract:
A primary goal of context-aware systems is delivering the right information at the right place and right time to users in order to enable them to make effective decisions and improve their quality of life. There are three key requirements for achieving this goal: determining what information is relevant, personalizing it based on the users’ context (location, preferences, behavioral history etc.), and delivering it to them in a timely manner without an explicit request from them. These requirements create a paradigm that we term “Proactive Context-aware Computing”. Most of the existing context-aware systems fulfill only a subset of these requirements. Many of these systems focus only on personalization of the requested information based on users’ current context. Moreover, they are often designed for specific domains. In addition, most of the existing systems are reactive - the users request some information and the system delivers it to them. These systems are not proactive, i.e. they cannot anticipate users’ intent and behavior and act proactively without an explicit request from them. In order to overcome these limitations, we need to conduct a deeper analysis and enhance our understanding of context-aware systems that are generic, universal, proactive and applicable to a wide variety of domains. To support this dissertation, we explore several directions. Clearly the most significant sources of information about users today are smartphones. A large amount of users’ context can be acquired through them and they can be used as an effective means to deliver information to users. In addition, social media such as Facebook, Flickr and Foursquare provide a rich and powerful platform to mine users’ interests, preferences and behavioral history. We employ the ubiquity of smartphones and the wealth of information available from social media to address the challenge of building proactive context-aware systems. We have implemented and evaluated a few approaches, including some as part of the Rover framework, to achieve the paradigm of Proactive Context-aware Computing. Rover is a context-aware research platform which has been evolving for the last 6 years. Since location is one of the most important contexts for users, we have developed ‘Locus’, an indoor localization, tracking and navigation system for multi-story buildings. Other important dimensions of users’ context include the activities that they are engaged in. To this end, we have developed ‘SenseMe’, a system that leverages the smartphone and its multiple sensors in order to perform multidimensional context and activity recognition for users. As part of the ‘SenseMe’ project, we also conducted an exploratory study of privacy, trust, risks and other concerns of users with smartphone-based personal sensing systems and applications. To determine what information would be relevant to users’ situations, we have developed ‘TellMe’ - a system that employs a new, flexible and scalable approach based on Natural Language Processing techniques to perform bootstrapped discovery and ranking of relevant information in context-aware systems. In order to personalize the relevant information, we have also developed an algorithm and system for mining a broad range of users’ preferences from their social network profiles and activities.
For recommending new information to the users based on their past behavior and context history (such as visited locations, activities and time), we have developed a recommender system and approach for performing multi-dimensional collaborative recommendations using tensor factorization. For timely delivery of personalized and relevant information, it is essential to anticipate and predict users’ behavior. To this end, we have developed a unified infrastructure, within the Rover framework, and implemented several novel approaches and algorithms that employ various contextual features and state-of-the-art machine learning techniques for building diverse behavioral models of users. Examples of generated models include classifying users’ semantic places and mobility states, predicting their availability for accepting calls on smartphones and inferring their device charging behavior. Finally, to enable proactivity in context-aware systems, we have also developed a planning framework based on Hierarchical Task Network (HTN) planning. Together, these works provide a major push in the direction of proactive context-aware computing.
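As a hedged illustration of the multi-dimensional collaborative recommendation idea mentioned above, the sketch below factorizes a (user, location, time-slot) rating tensor with a rank-R, CP-style model trained by stochastic gradient descent on observed entries. It is not the dissertation's algorithm; the dimensions, data and hyperparameters are made up.

```python
# CP-style factorization of a sparse (user, location, time-slot) rating tensor,
# trained with SGD on observed entries only. Illustrative sketch, not the
# dissertation's actual recommender.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_locs, n_times, rank = 20, 15, 4, 3
U = rng.normal(scale=0.1, size=(n_users, rank))   # user factors
V = rng.normal(scale=0.1, size=(n_locs, rank))    # location factors
W = rng.normal(scale=0.1, size=(n_times, rank))   # time-slot factors

# observed (user, location, time-slot, rating) tuples, e.g. from visit history
observed = [(0, 2, 1, 5.0), (0, 7, 3, 1.0), (3, 2, 1, 4.0), (5, 9, 0, 2.0)]

lr, reg = 0.05, 0.01
for epoch in range(200):
    for u, i, t, r in observed:
        u_f, v_f, w_f = U[u].copy(), V[i].copy(), W[t].copy()
        err = r - np.sum(u_f * v_f * w_f)           # prediction error on this entry
        U[u] += lr * (err * v_f * w_f - reg * u_f)  # SGD step with L2 regularisation
        V[i] += lr * (err * u_f * w_f - reg * v_f)
        W[t] += lr * (err * u_f * v_f - reg * w_f)

# recommend a location for user 0 in time slot 1 by scoring every location
scores = (U[0] * W[1]) @ V.T
print("top location for user 0 at time slot 1:", int(np.argmax(scores)))
```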
Abstract:
Choosing a single similarity threshold for cutting dendrograms is not sufficient for performing hierarchical clustering analysis of heterogeneous data sets. In addition, alternative automated or semi-automated methods that cut dendrograms at multiple levels make assumptions about the data at hand. In an attempt to help the user find patterns in the data and resolve ambiguities in cluster assignments, we developed MLCut: a tool that provides visual support for exploring dendrograms of heterogeneous data sets at different levels of detail. The interactive exploration of the dendrogram is coordinated with a representation of the original data, shown as parallel coordinates. The tool supports three analysis steps. Firstly, a single-height similarity threshold can be applied using a dynamic slider to identify the main clusters. Secondly, a distinctiveness threshold can be applied using a second dynamic slider to identify “weak-edges” that indicate heterogeneity within clusters. Thirdly, the user can drill down to further explore the dendrogram structure - always in relation to the original data - and cut the branches of the tree at multiple levels. Interactive drill-down is supported using mouse events such as hovering, pointing and clicking on elements of the dendrogram. Two prototypes of this tool have been developed in collaboration with a group of biologists for analysing their own data sets. We found that enabling the users to cut the tree at multiple levels, while viewing the effect in the original data, is a promising method for clustering which could lead to scientific discoveries.
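For readers unfamiliar with the first analysis step, the sketch below applies a single-height threshold cut to a dendrogram using SciPy's standard hierarchical clustering routines. MLCut's contributions - the distinctiveness threshold, the multi-level interactive cuts and the linked parallel-coordinates view - are not reproduced here, and the toy data are invented.

```python
# Single-height dendrogram cut with SciPy, as a baseline for the interactive,
# multi-level cutting that MLCut provides.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
# heterogeneous toy data: one tight cluster and one diffuse one
data = np.vstack([rng.normal(0, 0.2, (30, 5)), rng.normal(3, 1.5, (30, 5))])

Z = linkage(data, method="average")                 # build the dendrogram
labels = fcluster(Z, t=2.5, criterion="distance")   # cut at a single height
print("clusters found at threshold 2.5:", len(set(labels)))
```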
Abstract:
Accurate estimation of road pavement geometry and layer material properties through the use of proper nondestructive testing and sensor technologies is essential for evaluating a pavement’s structural condition and determining options for maintenance and rehabilitation. For these purposes, pavement deflection basins produced by the nondestructive Falling Weight Deflectometer (FWD) test are commonly used. The nondestructive FWD test drops weights on the pavement to simulate traffic loads and measures the resulting pavement deflection basins. Backcalculation of pavement geometry and layer properties using FWD deflections is a difficult inverse problem, and the solution with conventional mathematical methods is often challenging due to the ill-posed nature of the problem. In this dissertation, a hybrid algorithm was developed to seek robust and fast solutions to this inverse problem. The algorithm is based on soft computing techniques, mainly Artificial Neural Networks (ANNs) and Genetic Algorithms (GAs), as well as numerical analysis techniques to properly simulate the geomechanical system. The widely used layered pavement analysis program ILLI-PAVE was employed in the analyses of various flexible pavement types, including full-depth asphalt and conventional flexible pavements, built on either lime-stabilized soils or untreated subgrade. Nonlinear properties of the subgrade soil and the base course aggregate as transportation geomaterials were also considered. A computer program, the Soft Computing Based System Identifier or SOFTSYS, was developed. In SOFTSYS, ANNs were used as surrogate models to provide faster solutions than the nonlinear finite element program ILLI-PAVE. The deflections obtained from FWD tests in the field were matched with the predictions obtained from the numerical simulations to develop SOFTSYS models. The solution to the inverse problem for multi-layered pavements is computationally hard to achieve and is often not feasible due to field variability and the quality of the collected data. The primary difficulty in the analysis arises from the substantial increase in the degree of non-uniqueness of the mapping from the pavement layer parameters to the FWD deflections. The insensitivity of some layer properties lowered SOFTSYS model performance. Still, SOFTSYS models were shown to work effectively with the synthetic data obtained from ILLI-PAVE finite element solutions. In general, SOFTSYS solutions very closely matched the ILLI-PAVE mechanistic pavement analysis results. For SOFTSYS validation, field-collected FWD data were successfully used to predict pavement layer thicknesses and layer moduli of in-service flexible pavements. Some of the very promising SOFTSYS results indicated average absolute errors on the order of 2%, 7%, and 4% for the Hot Mix Asphalt (HMA) thickness estimation of full-depth asphalt pavements, full-depth pavements on lime-stabilized soils and conventional flexible pavements, respectively. The field validations of SOFTSYS also produced meaningful results. The thickness data obtained from Ground Penetrating Radar testing matched reasonably well with predictions from SOFTSYS models. The differences observed in the HMA and lime-stabilized soil layer thicknesses were attributed to deflection data variability from FWD tests.
The backcalculated asphalt concrete layer thicknesses matched better for full-depth asphalt pavements built on lime-stabilized soils than for conventional flexible pavements. Overall, SOFTSYS was capable of producing reliable thickness estimates despite the variability of field-constructed asphalt layer thicknesses.
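A hedged sketch of the backcalculation loop described above: a genetic algorithm searches layer parameters so that a fast surrogate of the forward model reproduces "measured" FWD deflections. A toy analytic function stands in for SOFTSYS's trained ANN surrogate of ILLI-PAVE, and the parameter ranges, sensor offsets and data are purely illustrative.

```python
# GA-based backcalculation against a surrogate forward model (toy stand-in for
# the ANN surrogate of ILLI-PAVE used by SOFTSYS). All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(2)

def surrogate_deflections(params):
    """Stand-in surrogate: maps (thickness, modulus) to 4 sensor deflections."""
    thickness, modulus = params
    sensors = np.array([0.0, 0.3, 0.6, 0.9])        # sensor offsets from the load
    return 100.0 / modulus * np.exp(-sensors / thickness)

measured = surrogate_deflections(np.array([0.25, 8.0]))   # pretend field data

lo, hi = np.array([0.1, 2.0]), np.array([0.6, 20.0])      # search bounds
pop = rng.uniform(lo, hi, size=(50, 2))
for gen in range(100):
    errors = np.array([np.sum((surrogate_deflections(p) - measured) ** 2) for p in pop])
    parents = pop[np.argsort(errors)[:10]]                # keep the 10 fittest
    # children: averaged random parent pairs plus Gaussian mutation
    idx = rng.integers(0, 10, size=(40, 2))
    children = parents[idx].mean(axis=1) + rng.normal(0, 0.05, size=(40, 2)) * (hi - lo)
    pop = np.clip(np.vstack([parents, children]), lo, hi)

best = pop[np.argmin([np.sum((surrogate_deflections(p) - measured) ** 2) for p in pop])]
print("recovered thickness, modulus (true values 0.25, 8.0):", best)
```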
Abstract:
This work describes preliminary results of a two-modality imaging system aimed at the early detection of breast cancer. The first technique is based on compounding conventional echographic images taken at regular angular intervals around the imaged breast. The other modality obtains tomographic images of propagation velocity using the same circular geometry. For this study, a low-cost prototype has been built. It is based on a pair of opposed 128-element, 3.2 MHz array transducers that are mechanically moved around tissue-mimicking phantoms. Compounding images over 360 degrees provides improved resolution, clutter reduction and artifact suppression, and reinforces the visualization of internal structures. However, refraction at the skin interface must be corrected for an accurate image compounding process. This is achieved by estimation of the interface geometry followed by computation of the internal ray paths. On the other hand, sound velocity tomographic images from time-of-flight projections have also been obtained. Two reconstruction methods, Filtered Back Projection (FBP) and 2D Ordered Subset Expectation Maximization (2D OSEM), were used as a first attempt towards tomographic reconstruction. These methods yield usable images in short computational times that can be considered as initial estimates in subsequent, more complex methods of ultrasound image reconstruction. These images may be effective to differentiate malignant and benign masses and are very promising for breast cancer screening. (C) 2015 The Authors. Published by Elsevier B.V.
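As a minimal illustration of the Filtered Back Projection step, the sketch below reconstructs a standard test phantom from its projections using scikit-image's radon/iradon. The actual system reconstructs sound velocity from time-of-flight projections acquired with the circular transducer geometry, which is not modelled here.

```python
# Filtered Back Projection on a toy phantom, as a generic illustration of the
# FBP reconstruction step mentioned above.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

phantom = rescale(shepp_logan_phantom(), 0.25)          # small 2-D test object
angles = np.linspace(0.0, 180.0, 90, endpoint=False)    # projection angles (deg)

sinogram = radon(phantom, theta=angles)                 # forward projections
reconstruction = iradon(sinogram, theta=angles, filter_name="ramp")

print("reconstruction error (RMS):",
      np.sqrt(np.mean((reconstruction - phantom) ** 2)))
```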
Abstract:
Background: The emergence of multidrug-resistant bacteria has become a major threat and thus calls for an urgent need to search for new effective and safe antibacterial agents. Objectives: This study aims to evaluate the anticancer and antibacterial activities of secondary metabolites from Penicillium sp., an endophytic fungus associated with leaves of Garcinia nobilis. Methods: The culture filtrate from the fermentation of Penicillium sp. was extracted and analyzed by liquid chromatography–mass spectrometry, and the major metabolites were isolated and identified by spectroscopic analyses and by comparison with published data. The antibacterial activity of the compounds was assessed by the broth microdilution method while the anticancer activity was determined by the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide assay. Results: The fractionation of the crude extract afforded penialidin A-C (1-3), citromycetin (4), p-hydroxyphenylglyoxalaldoxime (5) and brefelfin A (6). All of the compounds tested here showed antibacterial activity (MIC = 0.50 – 128 μg/mL) against the Gram-negative multidrug-resistant bacteria Vibrio cholerae (causative agent of the dreadful disease cholera) and Shigella flexneri (causative agent of shigellosis), as well as significant anticancer activity (LC50 = 0.88 – 9.21 μg/mL) against HeLa cells. Conclusion: The results obtained indicate that compounds 1-6 showed good antibacterial and anticancer activities with no toxicity to human red blood cells and normal Vero cells.
Abstract:
Intelligent agents offer a new and exciting way of understanding the world of work. We apply agent-based simulation to investigate a set of problems in a retail context. Specifically, we are working to understand the relationship between human resource management practices and retail productivity. Our multi-disciplinary research team draws upon expertise from work psychologists and computer scientists. Our research so far has led us to conduct case study work with a top ten UK retailer. Based on our case study experience and data we are developing a simulator that can be used to investigate the impact of management practices (e.g. training, empowerment, teamwork) on customer satisfaction and retail productivity.
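A deliberately simplified sketch of the kind of agent-based model described (not the simulator built with the retailer's data): customer agents queue for service, staff agents serve them, and a single management practice - a "training" level that shortens service time - feeds through to a customer-satisfaction measure. All parameters are invented.

```python
# Toy agent-based retail simulation: one customer arrives per time step, staff
# serve customers, and training shortens service time. Illustrative only.
import random

random.seed(3)

def simulate(n_staff, training_level, n_customers=200, patience=8):
    free_at = [0] * n_staff            # time step at which each staff member is free
    satisfied = 0
    for t in range(n_customers):       # one customer arrives per time step
        staff = min(range(n_staff), key=lambda s: free_at[s])
        wait = max(0, free_at[staff] - t)
        service = random.randint(2, 6) - training_level   # better training -> faster service
        free_at[staff] = max(free_at[staff], t) + max(1, service)
        if wait <= patience:
            satisfied += 1
    return satisfied / n_customers

print("satisfaction, untrained staff:", simulate(n_staff=3, training_level=0))
print("satisfaction, trained staff:  ", simulate(n_staff=3, training_level=2))
```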
Abstract:
Executing a cloud or aerosol physical properties retrieval algorithm on controlled synthetic data is an important step in retrieval algorithm development. Synthetic data can help answer questions about the sensitivity and performance of the algorithm or aid in determining how an existing retrieval algorithm may perform with a planned sensor. Synthetic data can also help in solving issues that may have surfaced in the retrieval results. Synthetic data become very important when other validation methods, such as field campaigns, are of limited scope. These tend to be of relatively short duration and often are costly. Ground stations have limited spatial coverage, while synthetic data can cover large spatial and temporal scales and a wide variety of conditions at a low cost. In this work I develop an advanced cloud and aerosol retrieval simulator for the MODIS instrument, known as the Multi-sensor Cloud and Aerosol Retrieval Simulator (MCARS). In close collaboration with the modeling community, I have seamlessly combined the GEOS-5 global climate model, the DISORT radiative transfer code widely used by the remote sensing community, and observations from the MODIS instrument to create the simulator. With the MCARS simulator it was then possible to solve the long-standing issue with the MODIS aerosol optical depth retrievals, which had a low bias for smoke aerosols: the MODIS aerosol retrieval did not account for the effects of humidity on smoke aerosols. The MCARS simulator also revealed an issue that had not been recognized previously, namely, that the value of fine mode fraction could create a linear dependence between retrieved aerosol optical depth and land surface reflectance. MCARS provided the ability to examine aerosol retrievals against “ground truth” for hundreds of thousands of simultaneous samples over an area covered by only three AERONET ground stations. Findings from MCARS are already being used to improve the performance of operational MODIS aerosol properties retrieval algorithms. The modeling community will use the MCARS data to create new parameterizations for aerosol properties as a function of properties of the atmospheric column and gain the ability to correct any assimilated retrieval data that may display similar dependencies in comparisons with ground measurements.
Abstract:
Image and video compression play a major role in the world today, allowing the storage and transmission of large multimedia content volumes. However, the processing of this information requires high computational resources, hence improving the computational performance of these compression algorithms is very important. The Multidimensional Multiscale Parser (MMP) is a pattern-matching-based compression algorithm for multimedia content, namely images, achieving high compression ratios while maintaining good image quality (Rodrigues et al. [2008]). However, in comparison with other existing algorithms, this algorithm takes considerable time to execute. Therefore, two parallel implementations for GPUs were proposed by Ribeiro [2016] and Silva [2015], in CUDA and OpenCL-GPU respectively. In this dissertation, to complement that work, we propose two parallel versions that run the MMP algorithm on the CPU: one resorting to OpenMP and another that converts the existing OpenCL-GPU version to OpenCL-CPU. The proposed solutions improve the computational performance of MMP by factors of 3 and 2.7, respectively. High Efficiency Video Coding (HEVC/H.265) is the most recent standard for image and video compression. Its impressive compression performance makes it a target for many adaptations, particularly for holoscopic (light field) image/video processing. Some of the proposed modifications to encode this new multimedia content are based on geometry-based disparity compensation (SS), developed by Conti et al. [2014], and a Geometric Transformations (GT) module, proposed by Monteiro et al. [2015]. These HEVC-based compression algorithms for holoscopic images implement a specific search for similar micro-images that is more efficient than the one performed by HEVC, but considerably slower to execute. In order to achieve better execution times, we chose the OpenCL API as the GPU-enabling language to increase the module's performance. With its most costly setting, we are able to reduce the GT module execution time from 6.9 days to less than 4 hours, effectively attaining a speedup of 45.
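The OpenMP and OpenCL ports themselves are C/C++ code; as a language-neutral illustration of the same idea - data-parallel matching of image blocks against a dictionary on the CPU - here is a hypothetical Python multiprocessing sketch. It is not MMP code; the block size, codebook and distortion measure are invented.

```python
# Data-parallel block matching on the CPU, as an analogue of the kind of
# parallelisation applied to MMP's pattern matching. Illustrative only.
import numpy as np
from multiprocessing import Pool

rng = np.random.default_rng(4)
DICTIONARY = rng.integers(0, 256, size=(512, 4, 4))     # toy codebook of 4x4 patterns

def best_match(block):
    """Index of the dictionary entry with the smallest squared error for one block."""
    errors = np.sum((DICTIONARY - block) ** 2, axis=(1, 2))
    return int(np.argmin(errors))

if __name__ == "__main__":
    image_blocks = [rng.integers(0, 256, size=(4, 4)) for _ in range(10_000)]
    with Pool() as pool:                                 # one worker per CPU core
        indices = pool.map(best_match, image_blocks, chunksize=256)
    print("first few chosen dictionary indices:", indices[:5])
```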
Abstract:
This paper explores the role of information and communication technologies in managing risk and early discharge of patients, and suggests innovative actions in the area of E-Health services. Treatments of chronic illnesses, or treatments of special needs such as cardiovascular diseases, are conducted in long-stay hospitals and, in some cases, in the homes of patients with follow-up from a primary care centre. The evolution of this model is following a clear trend: trying to reduce the time and the number of visits by patients to health centres and to shift tasks, as far as possible, toward outpatient care. The number of Early Discharge Patients (EDP) is also growing, thus permitting savings in the resources of the care centre. The adequacy of agent and mobile technologies is assessed in light of the particular requirements of health care applications. A software system architecture is outlined and discussed. The major contributions are: first, the conceptualization of multiple mobile and desktop devices as part of a single distributed computing system where software agents execute and interact from their remote locations. Second, the use of distributed decision making in multiagent systems as a means to integrate remote evidence and knowledge obtained from data that is being collected and/or processed by distributed devices. The system will be applied to patients with cardiovascular diseases or Chronic Obstructive Pulmonary Disease (COPD) as well as to ambulatory surgery patients. The proposed system will allow the patient's location and some information about his/her illness to be transmitted to the hospital or care centre.
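A simplified, hypothetical sketch of the distributed decision-making idea: each device agent turns its locally collected data into a piece of evidence, and a coordinating agent at the care centre fuses the evidence into an alert decision. The sensor names, thresholds and fusion rule are illustrative, not those of the proposed system.

```python
# Toy multi-agent evidence fusion for remote patient monitoring. Illustrative
# sketch only; thresholds and agents are invented.
def wearable_agent(heart_rate, spo2):
    """Evidence from a wearable device: 0 (normal) .. 1 (alarming)."""
    score = 0.0
    if heart_rate > 110: score += 0.5
    if spo2 < 92:        score += 0.5
    return score

def home_hub_agent(reported_symptoms):
    """Evidence from a patient questionnaire collected on a home device."""
    return min(1.0, 0.25 * len(reported_symptoms))

def care_centre_agent(evidence, alert_threshold=0.6):
    """Fuse remote evidence (simple average) and decide whether to raise an alert."""
    fused = sum(evidence) / len(evidence)
    return ("raise alert" if fused >= alert_threshold else "continue monitoring"), fused

evidence = [wearable_agent(heart_rate=118, spo2=90),
            home_hub_agent(["breathlessness", "chest pain"])]
print(care_centre_agent(evidence))
```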
Abstract:
Effective and efficient implementation of intelligent and/or recently emerged networked manufacturing systems requires enterprise-level integration. Networked manufacturing offers several advantages in the current competitive atmosphere by shortening the manufacturing cycle time and maintaining production flexibility, thereby achieving several feasible process plans. The first step in this direction is to integrate manufacturing functions such as process planning and scheduling for multiple jobs in a network-based manufacturing system. It is difficult to determine a proper plan that meets conflicting objectives simultaneously. This paper describes a mobile-agent-based negotiation approach to integrate manufacturing functions in a distributed manner; its fundamental framework and functions are presented. Moreover, an ontology has been constructed using the Protégé software, which possesses the flexibility to convert knowledge into Extensible Markup Language (XML) schemas of Web Ontology Language (OWL) documents. The generated XML schemas have been used to transfer information throughout the manufacturing network for the intelligent interoperable integration of product data models and manufacturing resources. To validate the feasibility of the proposed approach, an illustrative example covering varied production environments, including production demand fluctuations, is presented, and the performance and effectiveness of the proposed approach are compared with the evolutionary-algorithm-based Hybrid Dynamic-DNA (HD-DNA) algorithm. The results show that the proposed scheme is very effective and reasonably acceptable for the integration of manufacturing functions.
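A hedged sketch of the negotiation pattern underlying the mobile-agent-based integration: a job announces a task, machine agents bid with a cost derived from their queue and processing rate, and the lowest bid wins the assignment. The data and bid formula are invented for illustration; the paper's approach additionally exchanges OWL/XML product and resource models, which is not shown.

```python
# Contract-net style negotiation between a job and machine agents. Illustrative
# sketch; the bid formula and data are invented.
class MachineAgent:
    def __init__(self, name, rate, queue_time):
        self.name, self.rate, self.queue_time = name, rate, queue_time

    def bid(self, job):
        # simple cost: waiting time plus processing time for the job's workload
        return self.queue_time + job["workload"] / self.rate

def negotiate(job, machines):
    bids = {m.name: m.bid(job) for m in machines}
    winner = min(bids, key=bids.get)
    return winner, bids

machines = [MachineAgent("M1", rate=2.0, queue_time=5.0),
            MachineAgent("M2", rate=1.5, queue_time=1.0),
            MachineAgent("M3", rate=3.0, queue_time=8.0)]
job = {"id": "J7", "workload": 12.0}
print(negotiate(job, machines))   # M2 wins: 1.0 + 12/1.5 = 9.0
```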
Abstract:
Acute Coronary Syndrome (ACS) cuts across a broad and heterogeneous set of human beings and is regarded as a serious diagnosis and risk stratification problem. Although different tools, such as biomarkers, may be at one's disposal for the diagnosis and prognosis of ACS, they have to be previously evaluated and validated in different scenarios and patient cohorts. Besides ensuring that a diagnosis is correct, attention should also be directed to ensuring that therapies are applied correctly and safely. This work focuses on the development of a diagnosis decision support system in terms of its knowledge representation and reasoning mechanisms, given here as a formal framework based on Logic Programming, complemented with a problem-solving methodology anchored on Artificial Neural Networks. On the one hand, it caters for the evaluation of the risk predisposing to ACS and the respective Degree-of-Confidence that one has in such a happening. On the other hand, it may be seen as a major development of Multi-Value Logics for understanding such situations and one's behavior. Undeniably, the proposed model allows for an improvement of the diagnosis process, properly classifying the patients that presented the pathology (sensitivity ranging from 89.7% to 90.9%) as well as the absence of ACS (specificity ranging from 88.4% to 90.2%).
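For illustration only, the sketch below trains a small feed-forward network on synthetic patient features and reports the sensitivity and specificity measures quoted above. The actual system couples a Logic Programming knowledge base (with degrees of confidence) to the ANN, which is not reproduced here; the data are synthetic.

```python
# Small ANN classifier on synthetic data, evaluated with sensitivity and
# specificity. Illustrative sketch, not the paper's hybrid LP + ANN system.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(5)
n = 600
X = rng.normal(size=(n, 6))                      # synthetic risk-factor features
y = (X[:, 0] + 0.8 * X[:, 1] - 0.5 * X[:, 2] + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X_tr, y_tr)

tn, fp, fn, tp = confusion_matrix(y_te, model.predict(X_te)).ravel()
print(f"sensitivity = {tp / (tp + fn):.3f}, specificity = {tn / (tn + fp):.3f}")
```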
Abstract:
Several decision and control tasks in cyber-physical networks can be formulated as large-scale optimization problems with coupling constraints. In these "constraint-coupled" problems, each agent is associated with a local decision variable, subject to individual constraints. This thesis explores the use of primal decomposition techniques to develop tailored distributed algorithms for this challenging set-up over graphs. We first develop a distributed scheme for convex problems over random time-varying graphs with non-uniform edge probabilities. The approach is then extended to unknown cost functions estimated online. Subsequently, we consider Mixed-Integer Linear Programs (MILPs), which are of great interest in smart grid control and cooperative robotics. We propose a distributed methodological framework to compute a feasible solution to the original MILP, with guaranteed suboptimality bounds, and extend it to general nonconvex problems. Monte Carlo simulations highlight that the approach represents a substantial breakthrough with respect to the state of the art, thus representing a valuable solution for new toolboxes addressing large-scale MILPs. We then propose a distributed Benders decomposition algorithm for asynchronous unreliable networks. The framework has then been used as a starting point to develop distributed methodologies for a microgrid optimal control scenario. We develop an ad-hoc distributed strategy for a stochastic set-up with renewable energy sources, and show a case study with samples generated using Generative Adversarial Networks (GANs). We then introduce a software toolbox named ChoiRbot, based on the novel Robot Operating System 2, and show how it facilitates simulations and experiments in distributed multi-robot scenarios. Finally, we consider a Pickup-and-Delivery Vehicle Routing Problem for which we design a distributed method inspired by the approach for general MILPs, and show its efficacy through simulations and experiments in ChoiRbot with ground and aerial robots.
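A hedged sketch of the primal decomposition idea for a constraint-coupled problem: a master update splits a shared budget among agents, each agent solves its local subproblem for its own allocation, and the allocations are adjusted using the local constraint multipliers. The quadratic costs and the toy budget are illustrative; the thesis treats far more general set-ups (time-varying graphs, mixed-integer problems, asynchrony) not captured here.

```python
# Primal decomposition on a toy constraint-coupled problem:
#   min sum_i (x_i - a_i)^2   subject to   sum_i x_i <= budget,  x_i >= 0.
# The master allocates y_i (sum y_i = budget); each agent solves its subproblem
# for x_i <= y_i and reports the constraint multiplier, which drives the update.
import numpy as np

a = np.array([3.0, 1.0, 4.0])    # each agent's preferred local decision
budget = 6.0                     # coupling constraint on the sum of decisions
y = np.full(3, budget / 3)       # initial allocation of the budget to agents

for k in range(500):
    # local subproblems: min (x_i - a_i)^2  s.t.  0 <= x_i <= y_i  (closed form here)
    x = np.minimum(a, y)
    # multiplier of each local allocation constraint x_i <= y_i
    lam = 2.0 * np.maximum(0.0, a - y)
    # master subgradient step, projected onto the hyperplane sum(y) = budget
    y = y + 0.05 * (lam - lam.mean())

print("allocations:", np.round(x, 3))    # -> approx [2.333, 0.333, 3.333]
print("total used :", round(x.sum(), 3))
```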