17 results for adaptive ecosystem management
in Aston University Research Archive
Abstract:
Self-adaptation is emerging as an increasingly important capability for many applications, particularly those deployed in dynamically changing environments, such as ecosystem monitoring and disaster management. One key challenge posed by Dynamically Adaptive Systems (DASs) is the need to handle changes to the requirements and corresponding behavior of a DAS in response to varying environmental conditions. Berry et al. previously identified four levels of RE that should be performed for a DAS. In this paper, we propose the Levels of RE for Modeling that reify the original levels to describe RE modeling work done by DAS developers. Specifically, we identify four types of developers: the system developer, the adaptation scenario developer, the adaptation infrastructure developer, and the DAS research community. Each level corresponds to the work of a different type of developer to construct goal model(s) specifying their requirements. We then leverage the Levels of RE for Modeling to propose two complementary processes for performing RE for a DAS. We describe our experiences with applying this approach to GridStix, an adaptive flood warning system, deployed to monitor the River Ribble in Yorkshire, England.
Abstract:
Heterogeneous multi-core FPGAs contain different types of cores, which can improve efficiency when paired with an effective online task scheduler. However, it is not easy to match tasks to suitable cores when there are multiple objectives or dozens of cores, and inappropriate scheduling may cause hot spots that decrease the reliability of the chip. Given that, our research builds a simulation platform for evaluating a wide range of scheduling algorithms on a variety of architectures. On this platform, we provide an online scheduler that uses a multi-objective evolutionary algorithm (EA). Comparing the EA with current algorithms such as Predictive Dynamic Thermal Management (PDTM) and Adaptive Temperature Threshold Dynamic Thermal Management (ATDTM), we identify several drawbacks in previous work. First, current algorithms are overly dependent on manually set constant parameters. Second, those algorithms neglect optimization for heterogeneous architectures. Third, they use single-objective methods, or use a linear weighting method to convert a multi-objective optimization into a single-objective one. Unlike these algorithms, the EA is adaptive and does not require resetting parameters when workloads switch from one to another. EAs also improve performance on heterogeneous architectures, and an efficient Pareto front can be obtained with EAs when optimizing for multiple objectives.
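To illustrate the kind of scheduler this abstract describes, the following is a minimal sketch of Pareto-based evolutionary task-to-core assignment, assuming a two-objective formulation (makespan and a crude thermal proxy). The cost model, genetic operators, and all names are illustrative assumptions, not taken from the paper.

import random

def evaluate(assignment, task_cost, core_speed):
    """Return (makespan, thermal proxy) for tasks mapped to heterogeneous cores."""
    loads = [0.0] * len(core_speed)
    for task, core in enumerate(assignment):
        loads[core] += task_cost[task] / core_speed[core]  # slower core -> longer
    return max(loads), sum(l * l for l in loads)  # concentrated load -> hot spots

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def evolve(task_cost, core_speed, pop_size=40, gens=100):
    n_tasks, n_cores = len(task_cost), len(core_speed)
    pop = [[random.randrange(n_cores) for _ in range(n_tasks)] for _ in range(pop_size)]
    for _ in range(gens):
        children = []
        for _ in range(pop_size):
            p1, p2 = random.sample(pop, 2)
            cut = random.randrange(n_tasks)
            child = p1[:cut] + p2[cut:]                 # one-point crossover
            if random.random() < 0.2:                   # mutation: move one task
                child[random.randrange(n_tasks)] = random.randrange(n_cores)
            children.append(child)
        union = pop + children
        scored = [(evaluate(ind, task_cost, core_speed), ind) for ind in union]
        # keep the non-dominated individuals first (Pareto selection)
        front = [ind for f, ind in scored
                 if not any(dominates(g, f) for g, _ in scored)]
        pop = (front + [ind for _, ind in scored if ind not in front])[:pop_size]
    return front

# Example: 10 tasks on 3 heterogeneous cores with different speeds.
pareto = evolve(task_cost=[random.uniform(1, 5) for _ in range(10)],
                core_speed=[1.0, 1.5, 2.0])

Because selection keeps the whole non-dominated front rather than a single weighted-sum winner, no manually tuned weighting parameters are needed, which is the adaptivity the abstract contrasts with PDTM and ATDTM.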
Abstract:
Automatic ontology building is a vital issue in many fields where ontologies are currently built manually. This paper presents a user-centred methodology for ontology construction based on the use of Machine Learning and Natural Language Processing. In our approach, the user selects a corpus of texts and sketches a preliminary ontology (or selects an existing one) for a domain, with a preliminary vocabulary associated with the elements in the ontology (lexicalisations). Examples of sentences involving such lexicalisations (e.g. of the ISA relation) in the corpus are automatically retrieved by the system. Retrieved examples are validated by the user and used by an adaptive Information Extraction system to generate patterns that discover other lexicalisations of the same objects in the ontology, possibly identifying new concepts or relations. New instances are added to the existing ontology or used to tune it. This process is repeated until a satisfactory ontology is obtained. The methodology largely automates the ontology construction process, and the output is an ontology with an associated trained learner to be used for further ontology modifications.
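As a concrete illustration of this bootstrap loop, here is a minimal sketch in which a regex-based pattern inducer stands in for the paper's adaptive Information Extraction system. The corpus, the seed lexicalisation pair, and the induction rule are illustrative assumptions.

import re

def find_examples(corpus, term_a, term_b):
    """Retrieve sentences mentioning both lexicalisations (e.g. an ISA pair)."""
    return [s for s in corpus if term_a in s and term_b in s]

def induce_pattern(sentence, term_a, term_b):
    """Generalise one user-validated example into a reusable extraction pattern."""
    middle = re.escape(sentence.split(term_a)[1].split(term_b)[0])
    return re.compile(r"(\w[\w ]*?)" + middle + r"(\w[\w ]*)")

def apply_patterns(corpus, patterns):
    """Discover new candidate lexicalisation pairs for the user to validate."""
    hits = set()
    for s in corpus:
        for p in patterns:
            m = p.search(s)
            if m:
                hits.add((m.group(1).strip(), m.group(2).strip()))
    return hits

corpus = ["A trout is a kind of fish.", "An oak is a kind of tree."]
examples = find_examples(corpus, "trout", "fish")        # user validates these
patterns = [induce_pattern(s, "trout", "fish") for s in examples]
print(apply_patterns(corpus, patterns))  # also proposes ('An oak', 'tree')

Each iteration of the loop feeds the user-validated pairs back in as new seeds, which is how the ontology and its associated learner are tuned together.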
Abstract:
The two areas of theory upon which this research was based were 'strategy development process' (SDP) and 'complex adaptive systems' (CAS), the latter as part of complexity theory focused on human social organisations. The literature reviewed showed that there is a paucity of empirical work and theory in the overlap of the two areas, providing an opportunity for contributions to knowledge in each area of theory, and for practitioners. An inductive approach was adopted for this research, in an effort to discover new insights into the focus area of study. It was undertaken from within an interpretivist paradigm, and based on a novel conceptual framework. The organisationally intimate nature of the research topic, and the researcher's circumstances, required a research design that was both in-depth and long term. The result was a single, exploratory case study, which included use of data from 44 in-depth, semi-structured interviews with 36 people, involving all the top management team members and significant other staff members; observations, rumour and grapevine (ORG) data; and archive data, over a 5½-year period (2005-2010). Findings confirm the validity of the conceptual framework, and that complex adaptive systems theory has the potential to extend strategy development process theory. The research has shown how and why the strategy process developed in the case study organisation by providing deeper insights into the behaviour of the people, their backgrounds, and their interactions. Broad predictions of the 'latent strategy development' process and some elements of the strategy content are also possible. Based on this research, it is possible to extend the utility of the SDP model by including people's behavioural characteristics within the organisation, via complex adaptive systems theory. Further research is recommended to test the limits of application of the conceptual framework and to improve its efficacy with more organisations across a variety of sectors.
Abstract:
The behaviour of control functions in safety-critical software systems is typically bounded to prevent the occurrence of known system-level hazards. These bounds are typically derived through safety analyses and can be implemented through the use of necessary design features. However, the unpredictability of real-world problems can result in changes in the operating context that invalidate the behavioural bounds themselves, for example, unexpected hazardous operating contexts arising from failures or degradation. For highly complex problems it may be infeasible to determine, prior to deployment, the precise desired behavioural bounds of a function that addresses or minimises risk in hazardous operating cases. This paper presents an overview of the safety challenges associated with such problems and how they might be addressed. A self-management framework is proposed that performs on-line risk management. The features of the framework are shown in the context of employing intelligent adaptive controllers operating within complex and highly dynamic problem domains such as gas-turbine aero-engine control. Safety assurance arguments enabled by the framework, which are necessary for certification, are also outlined.
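A minimal sketch of the on-line risk management idea follows, assuming a hypothetical risk model and bound-tightening rule; neither is taken from the paper, which describes the framework at the level of safety argumentation.

def assess_risk(context):
    """Map observed operating context to a risk estimate in [0, 1] (assumed model)."""
    return min(1.0, 0.2 + 0.6 * context["degradation"] + 0.4 * context["disturbance"])

def manage(controller_output, context, nominal_bounds=(-10.0, 10.0)):
    """Clamp an adaptive controller's output to bounds tightened as risk rises."""
    risk = assess_risk(context)
    lo, hi = nominal_bounds
    scale = 1.0 - 0.8 * risk          # higher risk -> less control authority
    lo, hi = lo * scale, hi * scale
    return max(lo, min(hi, controller_output)), (lo, hi)

# High degradation -> the adaptive controller's authority is reduced on-line.
out, bounds = manage(9.5, {"degradation": 0.7, "disturbance": 0.3})
print(out, bounds)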
Abstract:
With the advent of distributed computer systems with a largely transparent user interface, new questions have arisen regarding the management of such an environment by an operating system. One fertile area of research is that of load balancing, which attempts to improve system performance by redistributing the workload submitted to the system by the users. Early work in this field concentrated on static placement of computational objects to improve performance, given prior knowledge of process behaviour. More recently this has evolved into studying dynamic load balancing with process migration, thus allowing the system to adapt to varying loads. In this thesis, we describe a simulated system which facilitates experimentation with various load balancing algorithms. The system runs under UNIX and provides functions for user processes to communicate through software ports; processes reside on simulated homogeneous processors, connected by a user-specified topology, and a mechanism is included to allow migration of a process from one processor to another. We present the results of a study of adaptive load balancing algorithms, conducted using the aforementioned simulated system, under varying conditions; these results show the relative merits of different approaches to the load balancing problem, and we analyse the trade-offs between them. Following from this study, we present further novel modifications to suggested algorithms, and show their effects on system performance.
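To make the load-balancing experimentation concrete, the following is a minimal sketch of a sender-initiated adaptive policy with process migration. The load metric, threshold, and simulation loop are illustrative assumptions, not the thesis's algorithms or its UNIX-based simulated system.

import random

class Processor:
    def __init__(self, pid):
        self.pid = pid
        self.queue = []          # resident processes (work units remaining)

    def load(self):
        return sum(self.queue)

def balance(processors, threshold=2.0):
    """Migrate one process from the most- to the least-loaded processor
    whenever their load ratio exceeds the threshold."""
    busiest = max(processors, key=Processor.load)
    idlest = min(processors, key=Processor.load)
    if busiest.queue and busiest.load() > threshold * max(idlest.load(), 1):
        idlest.queue.append(busiest.queue.pop())   # migrate one process

def step(processors):
    for p in processors:
        if p.queue:
            p.queue[0] -= 1                        # execute one time slice
            if p.queue[0] <= 0:
                p.queue.pop(0)

procs = [Processor(i) for i in range(4)]
procs[0].queue = [random.randint(3, 9) for _ in range(8)]  # skewed arrivals
for _ in range(30):
    balance(procs)
    step(procs)
print([p.load() for p in procs])

Varying the threshold and the migration rule is exactly the kind of trade-off study the thesis describes: aggressive migration smooths load faster but incurs more migration overhead.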
Abstract:
This thesis deals with the problem of Information Systems design for Corporate Management. It shows that the results of applying current approaches to Management Information Systems and Corporate Modelling fully justify a fresh look at the problem. The thesis develops an approach to design based on cybernetic principles and theories. It looks at management as an informational process and discusses the relevance of regulation theory to its practice. The work proceeds around the concept of change and its effects on the organization's stability and survival. The idea of looking at organizations as viable systems is discussed, and a design to enhance survival capacity is developed. Taking Ashby's theory of adaptation and developments on ultrastability as a theoretical framework, and considering the conditions for learning and foresight, it deduces that a design should include three basic components: a dynamic model of the organization-environment relationships; a method to spot significant changes in the values of the essential variables and in a certain set of parameters; and a controller able to conceive and change the other two elements and to make choices among alternative policies. Further consideration of the conditions for rapid adaptation in organisms composed of many parts, and of the law of Requisite Variety, determines that successful adaptive behaviour requires a certain functional organization. Beer's model of viable organizations is put in relation to Ashby's theory of adaptation and regulation. Using the ultrastable system as an abstract unit of analysis permits the development of a rigorous taxonomy of change: it starts by distinguishing between change within behaviour and change of behaviour, and completes the classification with organizational change. It relates these changes to the logical categories of learning, connecting the topic of Information System design with that of organizational learning.
Abstract:
Service-based systems that are dynamically composed at run time to provide complex, adaptive functionality are currently one of the main development paradigms in software engineering. However, the Quality of Service (QoS) delivered by these systems remains an important concern, and needs to be managed in an equally adaptive and predictable way. To address this need, we introduce a novel, tool-supported framework for the development of adaptive service-based systems called QoSMOS (QoS Management and Optimisation of Service-based systems). QoSMOS can be used to develop service-based systems that achieve their QoS requirements through dynamically adapting to changes in the system state, environment and workload. QoSMOS service-based systems translate high-level QoS requirements specified by their administrators into probabilistic temporal logic formulae, which are then formally and automatically analysed to identify and enforce optimal system configurations. The QoSMOS self-adaptation mechanism can handle reliability- and performance-related QoS requirements, and can be integrated into newly developed solutions or legacy systems. The effectiveness and scalability of the approach are validated using simulations and a set of experiments based on an implementation of an adaptive service-based system for remote medical assistance.
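To make the translation step concrete: under common PCTL conventions, a high-level requirement such as "a request for remote medical assistance is handled within 10 time steps with probability at least 0.98" could be rendered as the bounded-until formula below. This is an illustrative example of the notation only, not a formula taken from the paper, and the atomic proposition handled is assumed.

P_{\geq 0.98}\,\bigl[\, \mathrm{true} \;\; \mathcal{U}^{\leq 10} \;\; \mathrm{handled} \,\bigr]

A probabilistic model checker can then verify such a formula against a Markov model of each candidate system configuration, and the configuration satisfying the formulae is enforced, which is the adaptation cycle the abstract outlines.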
Abstract:
Class-based service differentiation is provided in DiffServ networks. However, this differentiation can be disrupted under dynamic traffic loads due to fixed weighted scheduling. An adaptive weighted scheduling scheme is proposed in this paper to achieve fair bandwidth allocation among different service classes. In this scheme, the number of active flows and the subscribed bandwidth are estimated from measurements of local queue metrics, and the scheduling weights of each service class are then adjusted to achieve per-flow fairness in the allocation of excess bandwidth. This adaptive scheme can be combined with any weighted scheduling algorithm. Simulation results show that, compared with fixed weighted scheduling, it effectively improves the fairness of excess bandwidth allocation.
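The following is a minimal sketch of the weight-adjustment idea: per-class weights are recomputed from locally estimated active-flow counts and subscribed bandwidths so that excess bandwidth is shared fairly per flow. The estimators, the update rule, and all constants are illustrative assumptions, not the paper's scheme.

EXCESS_BW = 40.0  # Mb/s of unsubscribed capacity (assumed)

def adapt_weights(classes):
    """classes: per-class dicts with the estimated number of active flows and
    subscribed bandwidth, both derived from local queue measurements."""
    # Guarantee subscribed bandwidth, then split the excess per active flow.
    total_flows = sum(c["active_flows"] for c in classes)
    weights = []
    for c in classes:
        fair_excess_share = c["active_flows"] / total_flows
        weights.append(c["subscribed_bw"] + fair_excess_share * EXCESS_BW)
    total = sum(weights)
    return [w / total for w in weights]          # normalised scheduler weights

classes = [
    {"active_flows": 10, "subscribed_bw": 30.0},  # e.g. an EF-like class
    {"active_flows": 40, "subscribed_bw": 30.0},  # e.g. an AF-like class
]
print(adapt_weights(classes))  # the class with more flows gets more of the excess

Because the output is just a normalised weight vector, it can feed any weighted scheduler (WRR, WFQ, etc.), which matches the abstract's claim of scheduler independence.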
Abstract:
Browsing constitutes an important part of the user's information-seeking process on the Web. In this paper, we present a browser plug-in called ESpotter, which recognizes entities of various types on Web pages and highlights them according to their types to assist user browsing. ESpotter uses a range of standard named entity recognition techniques. In addition, a key new feature of ESpotter is that it addresses the problem of multiple domains on the Web by adapting lexicons and patterns to these domains.
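A minimal sketch of the domain-adaptation idea follows: entity types are weighted by the domain of the page being browsed before lexicon matching. The lexicons, domain priors, and highlighting markup are illustrative assumptions, not ESpotter's actual data or interface.

import re
from urllib.parse import urlparse

LEXICONS = {
    "person":      {"Alan Turing", "Grace Hopper"},
    "institution": {"Aston University"},
}
# Per-domain reliability of each entity type (assumed, not from the paper).
DOMAIN_PRIORS = {"ac.uk": {"institution": 0.9, "person": 0.7}}

def highlight(url, text, threshold=0.5):
    domain = ".".join(urlparse(url).netloc.split(".")[-2:])
    priors = DOMAIN_PRIORS.get(domain, {})
    for etype, terms in LEXICONS.items():
        if priors.get(etype, 0.6) < threshold:   # skip types unreliable here
            continue
        for term in terms:
            text = re.sub(re.escape(term), f"[{etype.upper()}: {term}]", text)
    return text

print(highlight("http://www.aston.ac.uk/", "Aston University hosted Alan Turing."))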
Abstract:
In an ecosystem of management journals dominated by US heavyweights, the British Journal of Management (BJM) assumes a unique role. There is no other European business and management journal of the standing and stature of this journal that seeks both to be multidisciplinary and to appeal to a general readership of business and management scholars. This is a tribute to the efforts of previous editorial teams. At the same time, the journal faces the challenge of remaining relevant both to the immediate British Academy of Management community and to the wider scholarly one, while raising its own game in pioneering and leading debates, and in global esteem. © 2014 British Academy of Management.
Abstract:
Energy consumption has been a key concern of data gathering in wireless sensor networks. Previous research shows that modulation scaling is an efficient technique for reducing energy consumption. However, modulation scaling also affects packet delivery latency and packet loss, and may therefore have adverse effects on application quality. In this paper, we study the problem of modulation scaling and energy optimization. A mathematical model is proposed to analyze the impact of modulation scaling on overall energy consumption, end-to-end mean delivery latency and mean packet loss rate. A centralized optimal management mechanism is developed based on the model, which adaptively adjusts the modulation levels to minimize energy consumption while ensuring the QoS for data gathering. Experimental results show that the management mechanism saves significant energy in all the investigated scenarios. Some valuable results are also observed in the experiments. © 2004 IEEE.
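The trade-off the abstract describes can be sketched with a textbook-style model: transmit energy per bit grows roughly exponentially with bits per symbol, while latency per bit shrinks. The expressions and constants below are illustrative assumptions, not the paper's model.

def energy_per_bit(b):
    """Transmit energy grows roughly with 2^b; a per-symbol electronics cost
    is amortised over the b bits carried by each symbol."""
    C_TX, C_ELEC = 1e-9, 5e-8
    return C_TX * (2 ** b - 1) + C_ELEC / b

def latency_per_bit(b, symbol_rate=1e6):
    return 1.0 / (b * symbol_rate)     # more bits per symbol -> lower latency

def best_level(packet_bits=1024, deadline=2e-3, levels=(2, 3, 4, 5, 6)):
    """Pick the modulation level minimising energy subject to a latency bound."""
    feasible = [b for b in levels
                if packet_bits * latency_per_bit(b) <= deadline]
    return min(feasible, key=energy_per_bit)

print(best_level())  # lowest-energy level among those meeting the deadline

The paper's mechanism solves a network-wide version of this per-link choice, also accounting for packet loss, but the same tension between the energy and QoS terms drives it.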
Abstract:
This article builds theory at the intersection of ecological sustainability and strategic management literature—specifically, in relation to dynamic capabilities literature. By combining industrial organization economics–based, resource-based, and dynamic capability–based views, it is possible to develop a better understanding of the strategies that businesses may follow, depending on their managers' assumptions about ecological sustainability. To develop innovative strategies for ecological sustainability, the dynamic capabilities framework needs to be extended. In particular, the sensing–seizing–maintaining competitiveness framework should operate not only within the boundaries of a business ecosystem but also in relation to global biophysical ecosystems; in addition, two more dynamic capabilities should be added, namely, remapping and reaping. This framework can explicate core managerial beliefs about ecological sustainability. Finally, this approach offers opportunities for managers and academics to identify, categorize, and exploit business strategies for ecological sustainability.
Abstract:
Increased global uptake of entertainment gaming has the potential to lead to high expectations of engagement and interactivity from users of technology-enhanced learning environments. Blended approaches to implementing game-based learning as part of distance or technology-enhanced education have led to demonstrations of the benefits they might bring, allowing learners to interact with immersive technologies as part of a broader, structured learning experience. In this article, we explore how the integration of a serious game can be extended to a learning content management system (LCMS) to support a blended and holistic approach, described as an 'intuitive-guided' method. Through a case study within the EU-funded Adaptive Learning via Intuitive/Interactive, Collaborative and Emotional Systems (ALICE) project, a technical integration of a gaming engine with a proprietary LCMS is presented, building upon earlier work and demonstrating how this approach might be realized. In particular, we consider how this method can support an intuitive-guided approach to learning, whereby the learner is given the potential to explore a non-linear environment whilst scaffolding and blending provide guidance ensuring targeted learning objectives are met. Through an evaluation of the developed prototype with 32 students aged 14-16 across two Italian schools, a varied response from learners is observed, coupled with a positive reception from tutors. The study demonstrates that challenges remain in providing high-fidelity content in a classroom environment, particularly as an increasing gap in technology availability between leisure and school times emerges.
Abstract:
When visual sensor networks are composed of cameras that can adjust the zoom factor of their own lens, one must determine the optimal zoom levels for the cameras for a given task. This gives rise to an important trade-off between the overlap of the different cameras' fields of view, which provides redundancy, and image quality. In an object tracking task, having multiple cameras observe the same area allows for quicker recovery when a camera fails. In contrast, narrow zooms allow for a higher pixel count on regions of interest, leading to increased tracking confidence. In this paper we propose an approach for the self-organisation of redundancy in a distributed visual sensor network, based on decentralised multi-objective online learning that uses only local information to approximate the global state. We explore the impact of different zoom levels on these trade-offs when tasking omnidirectional cameras, which have a complete 360-degree view, with keeping track of a varying number of moving objects. We further show how employing decentralised reinforcement learning enables zoom configurations to be achieved dynamically at runtime according to an operator's preference for maximising either the proportion of objects tracked, the confidence associated with tracking, or redundancy in expectation of camera failure. We show that explicitly taking account of the level of overlap, even based only on local knowledge, improves resilience when cameras fail. Our results illustrate the trade-off between maintaining high confidence and object coverage, and maintaining redundancy in anticipation of future failure. Our approach provides a fully tunable, decentralised method for the self-organisation of redundancy in a changing environment, according to an operator's preferences.
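A minimal sketch of per-camera decentralised learning follows: each camera learns zoom values from local observations only, and an operator preference vector trades off tracking confidence against overlap (redundancy). The update rule, the reward shaping, and all constants are illustrative assumptions, not the paper's algorithm.

import random

ZOOMS = ["wide", "medium", "narrow"]

class Camera:
    def __init__(self, alpha=0.1, epsilon=0.1):
        self.q = {z: [0.0, 0.0] for z in ZOOMS}   # per-zoom [confidence, overlap]
        self.alpha, self.epsilon = alpha, epsilon

    def choose(self, preference):
        """preference = (w_confidence, w_overlap), set by the operator."""
        if random.random() < self.epsilon:
            return random.choice(ZOOMS)            # occasional exploration
        return max(ZOOMS, key=lambda z: sum(w * v
                   for w, v in zip(preference, self.q[z])))

    def update(self, zoom, confidence, overlap):
        """Learn from locally observed outcomes only (no global state)."""
        for i, r in enumerate((confidence, overlap)):
            self.q[zoom][i] += self.alpha * (r - self.q[zoom][i])

def local_rewards(zoom):
    # Assumed local observations: narrow zoom -> confident tracking,
    # wide zoom -> more overlap with neighbours (redundancy).
    return {"wide": (0.3, 0.9), "medium": (0.6, 0.6), "narrow": (0.9, 0.2)}[zoom]

cam, prefer_redundancy = Camera(), (0.2, 0.8)
for _ in range(500):
    z = cam.choose(prefer_redundancy)
    cam.update(z, *local_rewards(z))
cam.epsilon = 0.0                                  # act greedily once trained
print(cam.choose(prefer_redundancy))               # "wide" under this preference

Changing the preference vector at runtime reweights the learned estimates immediately, which illustrates how zoom configurations can be retuned dynamically toward confidence, coverage, or redundancy as the abstract describes.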