887 results for Scalable Intelligence
Abstract:
Arguably, the world has become one large pervasive computing environment. Our planet is growing a digital skin of a wide array of sensors, hand-held computers, mobile phones, laptops, web services and publicly accessible web-cams. Often, these devices and services are deployed in groups, forming small communities of interacting devices. Service discovery protocols allow processes executing on each device to discover services offered by other devices within the community. These communities can be linked together to form a wide-area pervasive environment, allowing processes in one group to interact with services in another. However, the costs of communication and the protocols by which this communication is mediated in the wide-area differ from those of intra-group, or local-area, communication. Communication is an expensive operation for small, battery-powered devices, but it is less expensive for servers and workstations, which have a constant power supply and are connected to high-bandwidth networks. This paper introduces Superstring, a peer-to-peer service discovery protocol optimised for use in the wide-area. Its goals are to minimise computation and memory overhead in the face of large numbers of resources. It achieves this memory and computation scalability by distributing the storage cost of service descriptions and the computation cost of queries over multiple resolvers.
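The distribution idea in the final sentence can be sketched as a consistent assignment of each service description to one of several resolvers, for example by hashing its name. This is an illustrative sketch only: the abstract does not specify Superstring's actual partitioning scheme, and the resolver names are invented.

```python
import hashlib

def assign_resolver(service_name: str, resolvers: list) -> str:
    """Map a service description to one resolver by hashing its name,
    so storage and query load spread across the resolver set.
    (Assumed scheme for illustration, not Superstring's own.)"""
    digest = hashlib.sha256(service_name.encode()).digest()
    index = int.from_bytes(digest[:4], "big") % len(resolvers)
    return resolvers[index]

resolvers = ["resolver-a", "resolver-b", "resolver-c"]
# The same service name always maps to the same resolver, so any peer
# can locate the responsible resolver without central coordination.
print(assign_resolver("printer/floor-2", resolvers))
```

Because the mapping is deterministic, queries for a given service can be routed directly to the resolver that stores its description, which is what spreads the computation cost of queries.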
Abstract:
Location information is commonly used in context-aware applications and pervasive systems. These applications and systems may require knowledge of the location of users, devices and services. This paper presents a location management system able to gather, process and manage location information from a variety of physical and virtual location sensors. The system scales to the complexity of context-aware applications, to a variety of types and large numbers of location sensors and clients, and to the geographical size of the system. The proposed location management system provides conflict resolution of location information and mechanisms to ensure privacy.
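One plausible reading of conflict resolution over multiple location sensors is a rule that discards stale readings and then trusts the most accurate remaining one. The sketch below is an assumption for illustration, not the paper's algorithm; the sensor names, fields and thresholds are invented.

```python
from dataclasses import dataclass

@dataclass
class LocationReading:
    source: str
    position: tuple    # (x, y) in metres, in some shared frame
    accuracy: float    # error radius in metres; smaller is better
    timestamp: float   # seconds on a shared clock

def resolve(readings, max_age=30.0, now=100.0):
    """Hypothetical conflict-resolution rule: drop readings older than
    max_age seconds, then return the most accurate of the rest."""
    fresh = [r for r in readings if now - r.timestamp <= max_age]
    return min(fresh, key=lambda r: r.accuracy) if fresh else None

readings = [
    LocationReading("wifi",  (12.0, 3.0), 15.0, 95.0),
    LocationReading("badge", (10.5, 2.8),  2.0, 90.0),
    LocationReading("gps",   (11.0, 3.2),  8.0, 40.0),  # stale
]
print(resolve(readings).source)  # → badge
```

Here the GPS fix is rejected as stale and the badge reading wins on accuracy; a real system would also have to reconcile readings expressed in different coordinate frames.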
Abstract:
Yorick Wilks is a central figure in the fields of Natural Language Processing and Artificial Intelligence. His influence extends to many areas and includes contributions to Machine Translation, word sense disambiguation, dialogue modeling and Information Extraction. This book celebrates the work of Yorick Wilks in the form of a selection of his papers which are intended to reflect the range and depth of his work. The volume accompanies a Festschrift which celebrates his contribution to the fields of Computational Linguistics and Artificial Intelligence. The papers include early work carried out at Cambridge University, descriptions of groundbreaking work on Machine Translation and Preference Semantics as well as more recent work on belief modeling and computational semantics. The selected papers reflect Yorick's contribution to both practical and theoretical aspects of automatic language processing.
Abstract:
Background: The optimisation and scale-up of process conditions leading to high yields of recombinant proteins is an enduring bottleneck in the post-genomic sciences. Typical experiments rely on varying selected parameters through repeated rounds of trial-and-error optimisation. To rationalise this, several groups have recently adopted the 'design of experiments' (DoE) approach frequently used in industry. Studies have focused on parameters such as medium composition, nutrient feed rates and induction of expression in shake flasks or bioreactors, as well as oxygen transfer rates in micro-well plates. In this study we wanted to generate a predictive model that described small-scale screens and to test its scalability to bioreactors.
Results: Here we demonstrate how the use of a DoE approach in a multi-well mini-bioreactor permitted the rapid establishment of high yielding production phase conditions that could be transferred to a 7 L bioreactor. Using green fluorescent protein secreted from Pichia pastoris, we derived a predictive model of protein yield as a function of the three most commonly-varied process parameters: temperature, pH and the percentage of dissolved oxygen in the culture medium. Importantly, when yield was normalised to culture volume and density, the model was scalable from mL to L working volumes. By increasing pre-induction biomass accumulation, model-predicted yields were further improved. Yield improvement was most significant, however, on varying the fed-batch induction regime to minimise methanol accumulation so that the productivity of the culture increased throughout the whole induction period. These findings suggest the importance of matching the rate of protein production with the host metabolism.
Conclusion: We demonstrate how a rational, stepwise approach to recombinant protein production screens can reduce process development time.
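A response-surface model of the kind typically fitted in a DoE screen, with yield as a quadratic function of temperature, pH and dissolved oxygen, can be sketched as follows. The data are synthetic and the surface is invented; this illustrates only the modelling approach, not the paper's fitted model or coefficients.

```python
import numpy as np

# Synthetic DoE-style screen: 30 runs over plausible ranges of the
# three process parameters named in the abstract.
rng = np.random.default_rng(0)
T  = rng.uniform(24, 32, 30)    # temperature, °C
pH = rng.uniform(5.0, 7.0, 30)
DO = rng.uniform(10, 40, 30)    # % dissolved oxygen
# Invented "true" surface with an optimum at (28, 6.0, 25), plus noise.
y = (-(T - 28)**2 - 4*(pH - 6)**2 - 0.02*(DO - 25)**2
     + 100 + rng.normal(0, 1, 30))

# Fit a quadratic response surface by least squares.
X = np.column_stack([np.ones_like(T), T, pH, DO, T**2, pH**2, DO**2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

def predict(T, pH, DO):
    """Model-predicted yield at one operating point."""
    return coef @ [1, T, pH, DO, T**2, pH**2, DO**2]

print(round(float(predict(28, 6.0, 25)), 1))  # near the synthetic optimum
```

Once fitted, such a model lets the predicted optimum be carried from the multi-well screen to the larger vessel, which is the scalability claim the abstract tests.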
Abstract:
Machine breakdowns are one of the main sources of disruption and throughput fluctuation in highly automated production facilities. One element in reducing this disruption is ensuring that the maintenance team responds correctly to machine failures. It is, however, difficult to determine the current practice employed by the maintenance team, let alone suggest improvements to it. 'Knowledge based improvement' is a methodology that aims to address this issue, by (a) eliciting knowledge on current practice, (b) evaluating that practice and (c) looking for improvements. The methodology, based on visual interactive simulation and artificial intelligence methods, and its application to a Ford engine assembly facility are described. Copyright © 2002 Society of Automotive Engineers, Inc.
Abstract:
The performance of most operations systems is significantly affected by the interaction of human decision-makers. A methodology, based on the use of visual interactive simulation (VIS) and artificial intelligence (AI), is described that aims to identify and improve human decision-making in operations systems. The methodology, known as 'knowledge-based improvement' (KBI), elicits knowledge from a decision-maker via a VIS and then uses AI methods to represent decision-making. By linking the VIS and AI representation, it is possible to predict the performance of the operations system under different decision-making strategies and to search for improved strategies. The KBI methodology is applied to the decision-making surrounding unplanned maintenance operations at a Ford Motor Company engine assembly plant.
Abstract:
The purpose of this research is to propose a procurement system that works across disciplines and shares retrieved information with the relevant parties, so as to achieve better co-ordination between the supply and demand sides. This paper demonstrates how to analyze the data with an agent-based procurement system (APS) to re-engineer and improve the existing procurement process. The intelligent agents take responsibility for searching for potential suppliers, negotiating with the short-listed suppliers and evaluating the performance of suppliers against the selection criteria with a mathematical model. Manufacturing firms and trading companies spend more than half of their sales dollar on the purchase of raw materials and components. Efficient data collection with high accuracy is one of the key success factors in generating quality procurement, that is, purchasing the right material at the right quality from the right suppliers. In general, enterprises spend a significant amount of resources on data collection and storage, but too little on facilitating data analysis and sharing. To validate the feasibility of the approach, a case study on a manufacturing small and medium-sized enterprise (SME) has been conducted. APS supports the data and information analysis techniques needed to facilitate decision making, such that the agent can enhance negotiation and supplier evaluation efficiency by saving time and cost.
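The supplier evaluation the agents perform could, for instance, take the form of a weighted score over the selection criteria. The criteria, weights and ratings below are invented for illustration; the abstract does not specify the paper's actual mathematical model.

```python
# Hypothetical weighted-score supplier evaluation of the kind an APS
# agent might apply. All numbers are invented for illustration.
criteria_weights = {"price": 0.40, "quality": 0.35, "delivery": 0.25}

suppliers = {
    "S1": {"price": 0.8, "quality": 0.9, "delivery": 0.7},
    "S2": {"price": 0.9, "quality": 0.6, "delivery": 0.8},
}

def score(ratings):
    """Weighted sum of a supplier's normalised criterion ratings."""
    return sum(criteria_weights[c] * ratings[c] for c in criteria_weights)

# The agent short-lists the highest-scoring supplier.
best = max(suppliers, key=lambda s: score(suppliers[s]))
print(best, round(score(suppliers[best]), 3))
```

A real APS would feed negotiated prices and observed delivery performance back into these ratings, so the ranking adapts over successive procurement rounds.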
Abstract:
Reports some insights into knowledge management (KM) derived from UK one-day workshops with six businesses, three non-profits and one public sector organization. Lists the four questions posed to participants and discusses the themes which emerged, e.g. the need for a KM strategy to make raw information more useable, KM performance measurement etc. Stresses the need for commitment from a top-level champion and a wide range of employees to make this work and identifies three types of solutions for improving KM strategy: technological (e.g. databases and intranets), people (e.g. motivation, retention, training and networking) and processes (e.g. procedural instructions and balancing formal/informal knowledge sharing methods). Finds that accountants and senior managers do not generally see KM as very important but argues that management accountants are suitable knowledge champions who could develop explicit links between KM and organizational performance.
Abstract:
This paper examined the joint predictive effects of trait emotional intelligence (trait-EI), Extraversion, Conscientiousness, and Neuroticism on 2 facets of general well-being and job satisfaction. An employed community sample of 123 individuals from the Indian subcontinent participated in the study, and completed measures of the five-factor model of personality, trait-EI, job satisfaction, and the general well-being facets 'worn-out' and 'up-tight'. Trait-EI was related to but distinct from the 3 personality variables. Trait-EI demonstrated the strongest correlation with job satisfaction, but predicted general well-being no better than Neuroticism. In regression analyses, trait-EI predicted between 6% and 9% additional variance in the well-being criteria, beyond the 3 personality traits. It was concluded that trait-EI may be useful in examining dispositional influences on psychological well-being.
Abstract:
Theory suggests that people fear the unknown and that, no matter how experienced one is, feelings of anxiety and uncertainty, if not managed well, would affect how we view ourselves and how others view us. Hence, it is in human nature to engage in activities that help decipher behaviours that seem contrary to one's beliefs and hinder the smooth flow of work and daily activities. Building on these arguments, this research investigates the two types of support provided by multinational corporations (MNCs) and host country nationals (HCNs) to expatriates and their family members whilst on international assignments in Malaysia as antecedents to their adjustment and performance in the host country. To complement the support provided, cultural intelligence (CQ) is investigated to explain the influence of cultural elements in facilitating the adjustment and performance of the relocating families, especially in integrating socially into the host country. This research aims to investigate the influence of support and CQ on the adjustment and performance of expatriates in Malaysia. Path analyses are used to test the hypothesised relationships. The findings substantiate the pivotal roles that MNCs and HCNs play in helping the expatriates and their families acclimatise to the host country. This corroborates the norm of reciprocity, whereby assistance or support rendered, especially at times when it is crucially needed, is reciprocated with positive behaviour deemed of equal value. Additionally, CQ is significantly positive in enhancing adjustment to the host country, which highlights the vital role that cultural awareness and knowledge play in enhancing effective intercultural communication and better execution of contextual performance. The research highlights the interdependence of the expatriates' multiple stakeholders (i.e. MNCs, HCNs, family members) in supporting the expatriates whilst on assignments.
Finally, the findings reveal that the expatriate families do influence how the locals view the families and would be a great asset in initiating future communication between the expatriates and HCNs. The research contributes to the fields of intercultural adjustment and communication and also has key messages for policy makers.
Abstract:
The computer systems of today are characterised by data and program control that are distributed functionally and geographically across a network. A major issue of concern in this environment is the operating system activity of resource management for the different processors in the network. To ensure equity in load distribution and improved system performance, load balancing is often undertaken. The research conducted in this field so far has been primarily concerned with a small set of algorithms operating on tightly-coupled distributed systems. More recent studies have investigated the performance of such algorithms in loosely-coupled architectures, but using a small set of processors. This thesis describes a simulation model developed to study the behaviour and general performance characteristics of a range of dynamic load balancing algorithms. Further, the scalability of these algorithms is discussed and a range of regionalised load balancing algorithms is developed. In particular, we examine the impact of network diameter and delay on the performance of such algorithms across a range of system workloads. The results produced seem to suggest that the performance of simple dynamic policies is scalable, but that such policies lack the load stability of more complex global average algorithms.
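One member of the family of simple dynamic policies such a simulation might cover can be sketched as a least-loaded routing rule: each arriving task goes to the processor with the smallest current load. The processor count, workload model and policy details below are assumptions for illustration, not the thesis's simulation model.

```python
import random

def least_loaded(loads):
    """Index of the processor with the smallest accumulated load."""
    return min(range(len(loads)), key=lambda i: loads[i])

random.seed(1)
loads = [0.0] * 8                    # eight processors, initially idle
for _ in range(1000):
    task = random.expovariate(1.0)   # task service demand (mean 1.0)
    loads[least_loaded(loads)] += task

# A dynamic policy keeps the spread between the busiest and the
# idlest processor small relative to the mean load.
gap = max(loads) - min(loads)
print(f"max-min load gap: {gap:.2f}")
```

A wide-area version of this policy would have to pay a communication delay to learn remote loads, which is exactly the network-diameter effect the thesis examines; regionalised variants limit that cost by balancing only within a neighbourhood.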