842 results for Complex Adaptive Systems
Abstract:
Adaptive Multiple-Input Multiple-Output (MIMO) systems achieve a much higher information rate than conventional fixed schemes due to their ability to adapt their configurations according to the wireless communications environment. However, current adaptive MIMO detection schemes exhibit either low performance (and hence low spectral efficiency) or huge computational
complexity. In particular, whilst deterministic Sphere Decoder (SD) detection schemes are well established for static MIMO systems, exhibiting deterministic parallel structure, low computational complexity and quasi-ML detection performance, there are no corresponding adaptive schemes. This paper solves
this problem, describing a hybrid tree-based adaptive modulation detection scheme. Fixed Complexity Sphere Decoding (FSD) and Real-Valued FSD (RFSD) are modified and combined into a hybrid scheme exploited at low and medium SNR to provide the highest possible information rate with quasi-ML Bit Error Rate (BER) performance, while Reduced Complexity RFSD, BChase and Decision Feedback Equalisation (DFE) schemes are exploited in the high SNR regions. This algorithm provides the facility to balance detection complexity against BER performance, while maintaining a compatible information rate, in dynamic, adaptive MIMO communications environments.
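A minimal sketch of the SNR-driven switching idea described above, written in Python purely for illustration; the threshold value, the exact scheme labels and the selection granularity are assumptions rather than details taken from the paper:

```python
# Illustrative sketch only: the SNR threshold and scheme labels are hypothetical
# placeholders, not values or names taken from the paper.

def select_detector(snr_db: float) -> str:
    """Pick a detection scheme for the current SNR region, mirroring the idea of
    switching between quasi-ML and reduced-complexity detectors."""
    LOW_TO_MEDIUM_SNR_LIMIT = 18.0  # hypothetical boundary of the low/medium SNR region (dB)
    if snr_db < LOW_TO_MEDIUM_SNR_LIMIT:
        # Low/medium SNR: hybrid FSD/RFSD for quasi-ML BER at the highest possible rate.
        return "hybrid FSD/RFSD"
    # High SNR: cheaper detectors are sufficient to hold the target BER.
    return "reduced-complexity RFSD / BChase / DFE"

if __name__ == "__main__":
    for snr in (5.0, 15.0, 25.0):
        print(snr, "dB ->", select_detector(snr))
```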
Abstract:
Fixed and wireless networks are increasingly converging towards common connectivity with IP-based core networks. Providing effective end-to-end resource and QoS management in such complex heterogeneous converged network scenarios requires unified, adaptive and scalable solutions to integrate and co-ordinate diverse QoS mechanisms of different access technologies with IP-based QoS. Policy-Based Network Management (PBNM) is one approach that could be employed to address this challenge. Hence, a policy-based framework for end-to-end QoS management in converged networks, CNQF (Converged Networks QoS Management Framework), has been proposed within our project. In this paper, the CNQF architecture, a Java implementation of its prototype and experimental validation of key elements are discussed. We then present a fuzzy-based CNQF resource management approach and study the performance of our implementation with real traffic flows on an experimental testbed. The results demonstrate the efficacy of our resource-adaptive approach for practical PBNM systems.
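As an illustration of what a fuzzy resource-adaptation rule can look like, the following Python sketch scales a per-class bandwidth allocation from measured link utilisation; the membership functions, rule base and scaling factors are invented for this example and are not CNQF's actual policies:

```python
# Minimal fuzzy-adaptation sketch, assuming a controller that rescales the bandwidth
# allocated to one traffic class from measured link utilisation (0.0 = idle, 1.0 = full).

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def adapt_allocation(utilisation: float, current_mbps: float) -> float:
    """Weighted (Sugeno-style) blend of two rules:
    IF utilisation is HIGH THEN shrink the allocation;
    IF utilisation is LOW  THEN grow it."""
    high = tri(utilisation, 0.4, 1.0, 1.6)   # degree to which the link is loaded
    low = tri(utilisation, -0.6, 0.0, 0.6)   # degree to which the link is idle
    shrink, grow = 0.8, 1.2                  # illustrative scaling consequents
    weight = (high + low) or 1.0
    return current_mbps * (high * shrink + low * grow) / weight

print(adapt_allocation(0.9, 10.0))  # heavily loaded link -> allocation reduced to 8.0 Mbps
```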
Abstract:
This paper describes the ParaPhrase project, a new 3-year targeted research project funded under EU Framework 7 Objective 3.4 (Computer Systems), starting in October 2011. ParaPhrase aims to follow a new approach to introducing parallelism using advanced refactoring techniques coupled with high-level parallel design patterns. The refactoring approach will use these design patterns to restructure programs defined as networks of software components into other forms that are more suited to parallel execution. The programmer will be aided by high-level cost information that will be integrated into the refactoring tools. The implementation of these patterns will then use a well-understood algorithmic skeleton approach to achieve good parallelism. A key ParaPhrase design goal is that parallel components should match heterogeneous architectures, defined, for example, in terms of CPU/GPU combinations. In order to achieve this, the ParaPhrase approach will map components at link time to the available hardware, and will then re-map them during program execution, taking account of multiple applications, changes in hardware resource availability, the desire to reduce communication costs, etc. In this way, we aim to develop a new approach to programming that will be able to produce software that can adapt to dynamic changes in the system environment. Moreover, by using a strong component basis for parallelism, we can achieve potentially significant gains in terms of reducing sharing at a high level of abstraction, and so in reducing or even eliminating the costs that are usually associated with cache management, locking, and synchronisation. © 2013 Springer-Verlag Berlin Heidelberg.
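For readers unfamiliar with algorithmic skeletons, the following sketch shows a task-farm pattern of the kind ParaPhrase targets; the worker function and data are placeholders, and Python is used here only for illustration, not because ParaPhrase itself targets it:

```python
# A minimal task-farm skeleton: apply an independent worker to every input in parallel.
from concurrent.futures import ProcessPoolExecutor

def worker(item: int) -> int:
    # Stand-in for a refactored, side-effect-free software component.
    return item * item

def farm(items, n_workers=4):
    """The 'farm' skeleton: map `worker` over `items` using a pool of workers."""
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        return list(pool.map(worker, items))

if __name__ == "__main__":
    print(farm(range(10)))
```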
Abstract:
This paper presents a preliminary study of developing a novel distributed adaptive real-time learning framework for wide area monitoring of power systems integrated with distributed generation using synchrophasor technology. The framework comprises distributed agents (synchrophasors) for autonomous local condition monitoring and fault detection, and a central unit for generating a global view for situation awareness and decision making. Key technologies that can be integrated into this hierarchical distributed learning scheme are discussed to enable real-time information extraction and knowledge discovery for decision making, without explicitly accumulating and storing all raw data at the central unit. Based on this, the configuration of a wide area monitoring system of power systems using synchrophasor technology, and the functionalities of the locally installed open-phasor-measurement-units (OpenPMUs) and a central unit, are presented. Initial results on anti-islanding protection using the proposed approach are given to illustrate its effectiveness.
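The hierarchical idea of local summarisation followed by central decision making might be sketched as follows; the summary statistics, the 0.5 Hz deviation threshold and the decision rule are illustrative assumptions and not the anti-islanding criteria used in the paper:

```python
# Sketch of the hierarchy only: each local agent reduces its raw PMU samples to a small
# summary, and the central unit reasons over summaries rather than raw data.
from statistics import mean

NOMINAL_HZ = 50.0  # assumed nominal system frequency

def local_summary(freq_samples):
    """Runs on a distributed agent (e.g. an OpenPMU): compress raw samples to features."""
    return {"mean_hz": mean(freq_samples),
            "max_dev_hz": max(abs(f - NOMINAL_HZ) for f in freq_samples)}

def central_decision(summaries):
    """Runs on the central unit: flag feeders whose frequency deviation looks abnormal."""
    return [pmu_id for pmu_id, s in summaries.items() if s["max_dev_hz"] > 0.5]

summaries = {"pmu_1": local_summary([50.01, 49.99, 50.02]),
             "pmu_2": local_summary([49.2, 48.9, 49.1])}
print("suspected islanded feeders:", central_decision(summaries))
```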
Abstract:
In this paper, we propose a system level design approach considering voltage over-scaling (VOS) that achieves error resiliency using unequal error protection of different computation elements, while incurring only minor quality degradation. Depending on user specifications and the severity of process variations/channel noise, the degree of VOS in each block of the system is adaptively tuned to ensure minimum system power while providing "just-the-right" amount of quality and robustness. This is achieved by taking into consideration block level interactions and ensuring that, under any change of operating conditions, only the "less-crucial" computations, which contribute less to block/system output quality, are affected. The proposed approach applies unequal error protection to various blocks of a system (logic and memory) and spans multiple layers of the design hierarchy (algorithm, architecture and circuit). The design methodology, when applied to a multimedia subsystem, shows large power benefits (up to 69% improvement in power consumption) at reasonable image quality while tolerating errors introduced due to VOS, process variations, and channel noise.
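A toy sketch of the unequal-protection idea, assuming each block carries a criticality score and its supply voltage is interpolated between an over-scaled and a nominal level; the block names, scores, voltage levels and quadratic power model are all illustrative:

```python
# Toy sketch: less-crucial computation blocks are allowed deeper voltage over-scaling
# than crucial ones. All numbers below are invented for illustration.

BLOCKS = {                    # block name -> criticality of its contribution to output quality
    "dct_high_freq": 0.2,     # small quality impact -> aggressive VOS
    "dct_low_freq":  0.8,     # large quality impact -> conservative VOS
    "entropy_coder": 1.0,     # control-critical    -> nominal voltage
}

def choose_vdd(criticality: float, vdd_nominal=1.0, vdd_min=0.7) -> float:
    """Interpolate the supply voltage between the over-scaled and nominal levels."""
    return vdd_min + criticality * (vdd_nominal - vdd_min)

for name, crit in BLOCKS.items():
    vdd = choose_vdd(crit)
    rel_power = vdd ** 2      # dynamic power scales roughly with Vdd squared
    print(f"{name}: Vdd={vdd:.2f} V, relative dynamic power={rel_power:.2f}")
```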
Abstract:
Correctly modelling and reasoning with uncertain information from heterogeneous sources in large-scale systems is critical when the reliability is unknown and we still want to derive adequate conclusions. To this end, context-dependent merging strategies have been proposed in the literature. In this paper we investigate how one such context-dependent merging strategy (originally defined for possibility theory), called largely partially maximal consistent subsets (LPMCS), can be adapted to Dempster-Shafer (DS) theory. We identify those measures for the degree of uncertainty and internal conflict that are available in DS theory and show how they can be used for guiding LPMCS merging. A simplified real-world power distribution scenario illustrates our framework. We also briefly discuss how our approach can be incorporated into a multi-agent programming language, thus leading to better plan selection and decision making.
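For concreteness, the sketch below shows the standard Dempster's rule of combination together with the conflict mass K, one of the uncertainty/conflict measures that can guide context-dependent merging; it does not implement the LPMCS grouping itself, and the mass functions are toy examples:

```python
# Standard Dempster's rule of combination; K (mass on the empty intersection) is a
# simple internal-conflict indicator between two sources.
from itertools import product

def combine(m1, m2):
    """Combine two mass functions (dict: frozenset -> mass); returns (m12, K)."""
    raw, K = {}, 0.0
    for (A, a), (B, b) in product(m1.items(), m2.items()):
        inter = A & B
        if inter:
            raw[inter] = raw.get(inter, 0.0) + a * b
        else:
            K += a * b                        # conflicting mass
    return {A: v / (1.0 - K) for A, v in raw.items()}, K

faulty, ok = frozenset({"faulty"}), frozenset({"ok"})
m1 = {faulty: 0.7, faulty | ok: 0.3}          # sensor 1: feeder probably faulty
m2 = {ok: 0.6, faulty | ok: 0.4}              # sensor 2: feeder probably ok
m12, conflict = combine(m1, m2)
print(m12, "conflict K =", round(conflict, 2))
```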
Adaptive backstepping droop controller design for multi-terminal high-voltage direct current systems
Abstract:
Wind power is one of the most developed renewable energy resources worldwide. To integrate offshore wind farms into onshore grids, high-voltage direct current (HVDC) transmission cables interfaced with voltage source converters (VSCs) are considered to be a better solution than conventional approaches. Proper DC voltage indicates successful power transfer. To connect more than one onshore grid, DC voltage droop control is one of the most popular methods to share the control burden between different terminals. However, the challenges are that small droop gains will cause voltage deviations, while higher droop gain settings will cause large oscillations. This study aims to enhance the performance of the traditional droop controller by considering the DC cable dynamics. Based on the backstepping control concept, DC cables are modelled as a series of capacitors and inductors. The final droop control law is deduced step by step, starting from the remote side. At each step the control error from the previous step is considered. Simulation results show that both the voltage deviations and oscillations can be effectively reduced using the proposed method. Further, power sharing between different terminals can be effectively simplified such that it correlates linearly with the droop gains, thus enabling simple yet accurate system operation and control.
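For reference, one common formulation of the conventional voltage-power droop characteristic that the paper takes as its baseline is sketched below; the proposed backstepping controller augments this with the DC cable dynamics, which this toy sketch does not model, and all gains and set-points are illustrative:

```python
# Conventional V-P droop baseline (one common formulation); gains and set-points are
# invented for illustration, not taken from the study.

def droop_power_order(v_dc: float, v_ref=400.0, p_set=200.0, k_droop=5.0) -> float:
    """Power order (MW) of one VSC terminal: deviate from the scheduled power in
    proportion to the DC-voltage error. A small k tolerates large voltage deviations
    before the power rebalances; a large k reacts aggressively to small errors."""
    return p_set + k_droop * (v_ref - v_dc)

for k in (2.0, 5.0, 20.0):
    print(f"k={k}: power order at 398 kV =", droop_power_order(398.0, k_droop=k), "MW")
```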
Abstract:
The nature and challenges of public sector leadership and management are examined in four case studies of project management in complex metropolitan environments. The cases were selected by the authors as representative of contextual factors affecting decision-making processes and project outcomes. Drawing on recent theoretical work on complex leadership approaches (Uhl-Bien et al 2007, Hazy 2008, Lichtenstein & Plowman 2009), the authors assess the leadership practices enacted and the circumstances that influence these practices. Leadership types theorized by Uhl-Bien et al (2007) are identified as operating at different levels and across networks, and the relevant contextual factors are outlined. The article concludes with a framework for leadership practice and management, identifying network facilitation and complexity-friendly tools as a practice within complex public sector systems.
Abstract:
Institutions involved in the provision of tertiary education across Europe are feeling the pinch. European universities, and other higher education (HE) institutions, must operate in a climate where the pressure of government spending cuts (Garben, 2012) is in stark juxtaposition to the EU’s strategy to drive forward and maintain growth in student numbers in the sector (Eurostat, 2015).
In order to remain competitive, universities and HE institutions are making ever-greater use of electronic assessment (E-Assessment) systems (Chatzigavriil et al., 2015; Ferrell, 2012). These systems are attractive primarily because they offer a cost-effective and scalable approach to assessment. In addition to scalability, they also offer reliability, consistency and impartiality; furthermore, from the perspective of a student they are popular because they can offer instant feedback (Walet, 2012).
There are disadvantages, though.
First, feedback is often returned to a student immediately on completion of their assessment. It is possible to disable the instant feedback option (this is often the case during an end-of-semester exam period, when assessment scores must be ratified before release); however, this option tends to be a global ‘all on’ or ‘all off’ configuration option which is controlled centrally, rather than being configurable on a per-assessment basis.
If a formative in-term assessment is to be taken by multiple groups of
students, each at different times, this restriction means that answers to each question will be disclosed to the first group of students undertaking the assessment. As soon as the answers are released “into the wild” the academic integrity of the assessment is lost for subsequent student groups.
Second, the style of feedback provided to a student for each question is often limited to a simple ‘correct’ or ‘incorrect’ indicator. While this type of feedback has its place, it often does not provide a student with enough insight to improve their understanding of a topic that they did not answer correctly.
Most E-Assessment systems boast a wide range of question types including Multiple Choice, Multiple Response, Free Text Entry/Text Matching and Numerical questions. The design of these types of questions is often quite restrictive and formulaic, which has a knock-on effect on the quality of feedback that can be provided in each case.
Multiple Choice Questions (MCQs) are most prevalent as they are the most prescriptive and therefore the most straightforward to mark consistently. They are also the most amenable question type, allowing easy provision of meaningful, relevant feedback for each possible outcome chosen.
Text matching questions tend to be more problematic due to their free text entry nature. Common misspellings or case-sensitivity errors can often be accounted for by the software, but such checks are by no means foolproof, as it is very difficult to predict in advance the range of possible variations on an answer that a manual marker of a paper-based equivalent of the same question would consider worthy of marks.
Numerical questions are similarly restricted. An answer can be checked for accuracy, or for whether it is within a certain range of the correct answer, but unless it is a special-purpose mathematical E-Assessment system, the system is unlikely to have the computational capability and so cannot, for example, account for the “method marks” which are commonly awarded in paper-based marking.
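As a concrete illustration of the two automatic checks discussed above, the following sketch performs case- and whitespace-tolerant text matching against a list of accepted spellings, and a numerical check within an absolute tolerance; the accepted answers and tolerance values are hypothetical, not taken from any particular E-Assessment product:

```python
# Minimal sketch of automatic marking for text-matching and numerical questions.

def check_text(response: str, accepted: list[str]) -> bool:
    """Accept the response if it matches any accepted spelling, ignoring case/whitespace."""
    norm = response.strip().lower()
    return any(norm == a.strip().lower() for a in accepted)

def check_numeric(response: float, correct: float, tolerance: float = 0.01) -> bool:
    """Accept the response if it lies within an absolute tolerance of the correct value."""
    return abs(response - correct) <= tolerance

print(check_text("  Fourier ", ["fourier", "fourier transform"]))  # True
print(check_numeric(3.15, 3.14159, tolerance=0.001))               # False
```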
From a pedagogical perspective, the importance of providing useful formative feedback to students, at a point in their learning when they can benefit from it and put it to use, cannot be overstated (Grieve et al., 2015; Ferrell, 2012).
In this work, we propose a number of software-based solutions that will overcome the limitations and inflexibilities of existing E-Assessment systems.
Design of a Virtual Reality Framework for Maintainability and Assemblability Tests of Complex Systems
Abstract:
This paper presents a unique environment whose features are able to satisfy requirements for both virtual maintenance and virtual manufacturing through the conception of an original virtual reality (VR) architecture. Virtual Reality for the Maintainability and Assemblability Tests (VR_MATE) encompasses VR hardware and software and a simulation manager which allows customisation of the architecture itself as well as interfacing with a wide range of devices employed in the simulations. Two case studies are presented to illustrate VR_MATE's unique ability to allow for both maintainability tests and assembly analysis, of an aircraft carriage and a railway coach cooling system respectively. The key impact of this research is the demonstration of the potential of VR techniques in industry and their multiple applications, despite the subjective character of the simulations. VR_MATE has been presented as a framework to support the strategic and operative objectives of companies to reduce product development time and costs whilst maintaining product quality, for applications which would be too expensive to simulate and evaluate in the real world.
Abstract:
Complexity and environmental uncertainty in public sector systems require leaders to balance the administrative practices necessary to be aligned and efficient in the management of routine challenges with the adaptive practices required to respond to complex and dynamic circumstances. Conventional notions of leadership in the field of public administration do not fully explain the role of leadership in enabling and balancing the entanglement of formal, top-down, administrative functions and informal, emergent, adaptive functions within public sector settings with different levels of complexity. Drawing on and extending existing complexity leadership constructs, this paper explores how change was enabled over the duration of three urban regeneration projects, representing high, medium and low levels of project complexity respectively. The data reveal six distinct yet interconnected functions of enabling leadership that were identified within the three urban regeneration projects. The paper contributes to our understanding of how leadership is enacted and poses questions for those engaged in leading in complex public sector settings.
Abstract:
This thesis describes a framework based on the multi-layer paradigm for analysing, modelling, designing and optimising communication systems. It explores a new perspective on the physical layer that arises from the relationships between information theory, estimation, probabilistic methods, communication theory and coding. This framework leads to design methods for the next generation of high-rate communication systems. In addition, the thesis explores several access-layer techniques, based on the relationship between delay and throughput, for the design of delay-tolerant wireless networks. Fundamental results on the interplay between information theory and estimation theory lead to proposals for an alternative paradigm for the analysis, design and optimisation of communication systems. Building on studies of the relationship between mutual information and MMSE, the approach described in the thesis overcomes, in a novel way, the difficulties inherent in optimising reliable information transmission rates in communication systems, and enables the exploration of optimal power allocation and optimal precoding structures for different channel models: wired, wireless and optical. The thesis also addresses the problem of delay, in an attempt to answer questions raised by the enormous demand for high throughput in communication systems. This is done by proposing new models for systems with network coding at layers above the physical layer. In particular, it addresses the use of network coding for time-varying, delay-sensitive channels. This is demonstrated through the proposal of a new model and adaptive scheme, whose algorithms were applied to wireless systems with complex fading, of which satellite communication systems are an example. The thesis further addresses the use of network coding in demanding handover scenarios. This is done by proposing new IEEE 802.11 WiFi MAC transmission models, which are compared with network coding and shown to enable seamless handover. In summary, through analysis and proposals supported by simulations, this thesis argues that the design of communication systems should consider transmission and coding strategies that are not only close to channel capacity but also delay tolerant, and that such strategies must be designed with the channel characteristics and the physical layer in mind.
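The mutual information-MMSE relationship referred to above is, in its classical scalar Gaussian-channel form (Guo, Shamai and Verdú, 2005), the identity below; the thesis builds on this family of results rather than on this specific statement:

```latex
% I-MMSE identity for the scalar real Gaussian channel Y = sqrt(snr) X + N, N ~ N(0,1):
\frac{\mathrm{d}}{\mathrm{d}\,\mathsf{snr}}\, I\!\left(X;\sqrt{\mathsf{snr}}\,X+N\right)
  = \tfrac{1}{2}\,\mathrm{mmse}(\mathsf{snr}),
\qquad
\mathrm{mmse}(\mathsf{snr})
  = \mathbb{E}\!\left[\left(X-\mathbb{E}\!\left[X \mid \sqrt{\mathsf{snr}}\,X+N\right]\right)^{2}\right].
```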