912 results for Distributed virtualization
Abstract:
This paper focuses on understanding distributed leadership and professional learning communities (PLCs). Through an Australian Government grant, the Teacher Education Done Differently (TEDD) project, data were analysed from 25 school executives about distributed leadership as a potential influence on educational change through forums such as PLCs. Findings are discussed in relation to: (1) understanding the nature of a PLC, (2) leadership within PLCs, (3) advancing PLCs, and (4) PLCs as forums for capacity building in the profession. A cyclic model for facilitating PLCs is presented, in which information such as issues and problems is brought to the collective, then discussed and analysed openly to provide further feedback. There are implications for leaders to up-skill staff in distributed leadership practices, and further research is required to determine which practices facilitate successful PLCs.
Abstract:
Distributed generators (DGs) are generators connected to a distribution network. The direction of the power flow and the short-circuit current in a network can change compared with a network without DGs, so the conventional protective relay scheme no longer meets the requirements. As the number and capacity of DGs in the distribution network increase, the problem of coordinating protective relays becomes more challenging. Given this background, the protective relay coordination problem in distribution systems is investigated, with directional overcurrent relays taken as an example, and formulated as a mixed integer nonlinear programming problem. A mathematical model describing this problem is first developed, and the well-developed differential evolution algorithm is then used to solve it. Finally, a sample system is used to demonstrate the feasibility and efficiency of the developed method.
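The formulation can be illustrated with a toy instance: a minimal differential evolution run that tunes the time dial settings of three hypothetical directional overcurrent relays. The IEC inverse-time characteristic is standard, but the fault-current multiples, coordination pairs, bounds and the 0.3 s coordination time interval below are illustrative assumptions, not the paper's test system.

```python
import random

# Simplified directional overcurrent relay coordination (hypothetical data).
# Decision variables: the time dial settings (TDS) of three relays.
# Operating time follows the IEC standard inverse characteristic:
#   t = TDS * 0.14 / (M**0.02 - 1), with M the fault-to-pickup current ratio.
M = [5.0, 6.0, 4.0]               # assumed fault-current multiples per relay
PAIRS = [(1, 0), (2, 1)]          # (backup, primary) coordination pairs
CTI = 0.3                         # coordination time interval in seconds

def op_time(tds, m):
    return tds * 0.14 / (m ** 0.02 - 1)

def cost(x):
    total = sum(op_time(tds, m) for tds, m in zip(x, M))
    penalty = 0.0
    for b, p in PAIRS:            # backup must trip at least CTI after primary
        gap = op_time(x[b], M[b]) - op_time(x[p], M[p]) - CTI
        if gap < 0:
            penalty += 1000.0 * -gap
    return total + penalty

def de(bounds, pop_size=30, gens=200, F=0.5, CR=0.9, seed=1):
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in M] for _ in range(pop_size)]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            trial = [min(hi, max(lo, a[k] + F * (b[k] - c[k])))
                     if rng.random() < CR else pop[i][k] for k in range(len(M))]
            if cost(trial) < cost(pop[i]):   # greedy one-to-one selection
                pop[i] = trial
    return min(pop, key=cost)

best = de((0.05, 1.0))
print([round(t, 3) for t in best], round(cost(best), 3))
```

The penalty term enforces the coordination constraint; in the paper's mixed integer formulation, pickup-current settings would be optimized alongside the time dials.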
Abstract:
Uncertainties such as the stochastic input/output power of a plug-in electric vehicle (arising from its stochastic charging and discharging schedule), the stochastic outputs of wind and photovoltaic generation units, volatile fuel prices and uncertain future load growth could together lead to risks in determining the optimal siting and sizing of distributed generators (DGs) in distribution systems. Given this background, a new method is presented under the chance constrained programming (CCP) framework to handle these uncertainties in the optimal siting and sizing problem of DGs. First, a mathematical model of CCP is developed with the minimization of the DGs' investment, operational and maintenance costs as well as the network loss cost as the objective, security limits as constraints, and the siting and sizing of DGs as optimization variables. Then, a Monte Carlo simulation embedded genetic algorithm approach is developed to solve the CCP model. Finally, the IEEE 37-node test feeder is employed to verify the feasibility and effectiveness of the developed model and method. This work is supported by an Australian Commonwealth Scientific and Industrial Research Organisation (CSIRO) project on Intelligent Grids under the Energy Transformed Flagship, and a project from Jiangxi Power Company.
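The chance-constraint handling can be sketched with assumed numbers throughout: Monte Carlo simulation estimates the probability that a candidate DG capacity covers the stochastic load, and a plain ascending search stands in for the paper's genetic algorithm. The grid limit, load distribution and confidence level are illustrative, not the IEEE 37-node data.

```python
import random

# Minimal sketch of chance-constrained DG sizing (hypothetical numbers).
# Decision: DG capacity s (MW). Objective: investment cost grows with s.
# Chance constraint: P(load - grid_import_limit <= s) >= 0.95, with the
# load stochastic. Feasibility is estimated by Monte Carlo simulation.
rng = random.Random(0)
GRID_LIMIT = 3.0        # MW the substation can supply (assumed)
CONFIDENCE = 0.95
SAMPLES = 5000

def load_sample():
    return rng.gauss(4.0, 0.5)   # assumed load distribution, MW

def feasible(s):
    ok = sum(load_sample() - GRID_LIMIT <= s for _ in range(SAMPLES))
    return ok / SAMPLES >= CONFIDENCE

def cheapest_size(candidates):
    # candidates sorted ascending; since cost grows with size, the first
    # candidate that passes the Monte Carlo check is optimal here
    for s in candidates:
        if feasible(s):
            return s
    return None

size = cheapest_size([round(0.1 * k, 1) for k in range(0, 40)])
print("chosen DG capacity:", size, "MW")
```

In the paper's method each genetic-algorithm individual encodes sites and sizes, and the same embedded Monte Carlo check decides whether the chance constraints hold.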
Abstract:
A distributed fuzzy system is a real-time fuzzy system in which the input, output and computation may be located on different networked computing nodes. The ability of a distributed software application, such as a distributed fuzzy system, to adapt to changes in the computing network at runtime can provide real-time performance improvement and fault-tolerance. This paper introduces an Adaptable Mobile Component Framework (AMCF) that provides a distributed dataflow-based platform with a fine-grained level of runtime reconfigurability. The execution location of small fragments (possibly as little as a few machine-code instructions) of an AMCF application can be moved between different computing nodes at runtime. A case study is included that demonstrates the applicability of the AMCF to a distributed fuzzy system scenario involving multiple physical agents (such as autonomous robots). Using the AMCF, fuzzy systems can now be developed such that they can be distributed automatically across multiple computing nodes and are adaptable to runtime changes in the networked computing environment. This provides the opportunity to improve the performance of fuzzy systems deployed in scenarios where the computing environment is resource-constrained and volatile, such as multiple autonomous robots, smart environments and sensor networks.
Abstract:
The ability to detect unusual events in surveillance footage as they happen is a highly desirable feature for a surveillance system. However, this problem remains challenging in crowded scenes due to occlusions and the clustering of people. In this paper, we propose using the Distributed Behavior Model (DBM), which has been widely used in computer graphics, for video event detection. Our approach does not rely on object tracking and is robust to camera movements. We use sparse coding for classification, and test our approach on various datasets. Our proposed approach outperforms a state-of-the-art method that uses the social force model and Latent Dirichlet Allocation.
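The classification step could look roughly like this sketch: each class keeps a small dictionary of unit-norm atoms, a test descriptor is greedily sparse-coded against each dictionary (matching pursuit here, as a stand-in for whatever solver the paper uses), and the class whose atoms reconstruct it with the smallest residual wins. All vectors below are toy data, not video features.

```python
import math

# Classification by sparse reconstruction error (hypothetical data).
def dot(u, v): return sum(a * b for a, b in zip(u, v))
def norm(u):   return math.sqrt(dot(u, u))

def residual(x, atoms, k=2):
    r = list(x)
    for _ in range(k):                       # k-sparse matching pursuit
        best = max(atoms, key=lambda a: abs(dot(r, a)))
        c = dot(r, best)
        r = [ri - c * ai for ri, ai in zip(r, best)]
    return norm(r)

def classify(x, dictionaries):
    # assign x to the class whose dictionary leaves the smallest residual
    return min(dictionaries, key=lambda cls: residual(x, dictionaries[cls]))

# toy "normal" vs "unusual" event descriptors (assumed unit-norm atoms)
dicts = {
    "normal":  [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]],
    "unusual": [[0.0, 0.0, 1.0]],
}
print(classify([0.9, 0.4, 0.1], dicts))   # → normal
print(classify([0.1, 0.0, 0.8], dicts))   # → unusual
```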
Abstract:
In this conceptual article, we extend earlier work on Open Innovation and Absorptive Capacity. We suggest that the literature on Absorptive Capacity does not place sufficient emphasis on distributed knowledge and learning or on the application of innovative knowledge. To accomplish physical transformations, organisations need specific Innovative Capacities that extend beyond knowledge management. Accessive Capacity is the ability to collect, sort and analyse knowledge from both internal and external sources. Adaptive Capacity is needed to ensure that new pieces of equipment are suitable for the organisation's own purposes even though they may have been originally developed for other uses. Integrative Capacity makes it possible for a new or modified piece of equipment to be fitted into an existing production process with a minimum of inessential and expensive adjustment elsewhere in the process. These Innovative Capacities are controlled and coordinated by Innovative Management Capacity, a higher-order dynamic capability.
Abstract:
The intensity pulsations of a cw 1030 nm Yb:Phosphate monolithic waveguide laser with distributed feedback are described. We show that the pulsations could result from the coupling of the two orthogonal polarization modes through the two-photon process of cooperative luminescence. The predictions of the presented theoretical model agree well with the observed behaviour.
Abstract:
The presence of a large number of single-phase distributed energy resources (DERs) can cause severe power quality problems in distribution networks. Because DERs can be installed at random locations, the generation in a particular phase may exceed the load demand in that phase, and the excess power in that phase will be fed back to the transmission network. To avoid this problem, the paper proposes the use of a distribution static compensator (DSTATCOM) connected at the first bus following a substation. When operated properly, the DSTATCOM can facilitate a set of balanced currents flowing from the substation, even when excess power is generated by DERs. The proposals are validated through extensive digital computer simulation studies using PSCAD and MATLAB.
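The balancing idea can be illustrated with symmetrical components (a simplified sketch, not the paper's DSTATCOM control scheme, and with made-up phase currents): the substation should carry only the positive-sequence load current, and the compensator injects the remainder, even when one phase exports power.

```python
import cmath, math

# Symmetrical-component view of current balancing (hypothetical currents).
# The source should supply only the positive-sequence current; a shunt
# compensator injects the rest, so the three source currents stay balanced
# even when excess single-phase DER generation reverses one phase current.
a = cmath.exp(2j * math.pi / 3)   # 120-degree rotation operator

def positive_sequence(ia, ib, ic):
    return (ia + a * ib + a * a * ic) / 3

def compensation_currents(ia, ib, ic):
    i1 = positive_sequence(ia, ib, ic)
    balanced = (i1, a * a * i1, a * i1)      # what the source should carry
    return tuple(il - ib_ for il, ib_ in zip((ia, ib, ic), balanced))

# phase A exports power (negative current) due to excess DER generation
iload = (-2 + 0j, 5 + 0j, 5 + 0j)
icomp = compensation_currents(*iload)
isrc = tuple(il - ic_ for il, ic_ in zip(iload, icomp))
print([abs(i) for i in isrc])   # the source now sees equal magnitudes
```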
Abstract:
Server consolidation using virtualization technology has become an important means of improving the energy efficiency of data centers, and virtual machine placement is the key step in server consolidation. In the past few years, many approaches to virtual machine placement have been proposed. However, existing approaches consider only the energy consumed by the physical machines in a data center and ignore the energy consumed by its communication network. This network energy consumption is not trivial, and should therefore be considered in virtual machine placement in order to make the data center more energy-efficient. In this paper, we propose a genetic algorithm for a new virtual machine placement problem that considers the energy consumption of both the servers and the communication network in the data center. Experimental results show that the genetic algorithm performs well on test problems of different kinds, and scales up well as the problem size increases.
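A toy version of such a genetic algorithm, with all loads, traffic volumes, power figures and topology assumed for illustration: a chromosome assigns each VM to a physical machine, and fitness combines a linear server power model with traffic cost weighted by the network distance between the hosting machines.

```python
import random

# Toy genetic algorithm for VM placement scoring both server and network
# energy (all parameters are assumed). A chromosome maps each VM to a PM.
rng = random.Random(7)
VM_LOAD = [0.3, 0.4, 0.2, 0.5, 0.3, 0.2]           # CPU demand per VM
PM_CAP, N_PM = 1.0, 4                              # PMs 0-1 and 2-3 share a rack
TRAFFIC = {(0, 1): 5.0, (2, 3): 4.0, (4, 5): 3.0}  # VM pairs exchanging data
P_IDLE, P_PEAK = 100.0, 200.0                      # per-PM power model

def dist(p, q):   # 0 = same PM, 1 = same rack, 2 = across the core switch
    return 0 if p == q else (1 if p // 2 == q // 2 else 2)

def energy(chrom):
    load = [0.0] * N_PM
    for vm, pm in enumerate(chrom):
        load[pm] += VM_LOAD[vm]
    server = sum(P_IDLE + (P_PEAK - P_IDLE) * l for l in load if l > 0)
    network = sum(t * dist(chrom[u], chrom[v]) for (u, v), t in TRAFFIC.items())
    penalty = sum(1000.0 * (l - PM_CAP) for l in load if l > PM_CAP)
    return server + network + penalty

def ga(pop_size=40, gens=150):
    pop = [[rng.randrange(N_PM) for _ in VM_LOAD] for _ in range(pop_size)]
    best = min(pop, key=energy)
    for _ in range(gens):
        nxt = []
        for _ in range(pop_size):
            p1 = min(rng.sample(pop, 3), key=energy)   # tournament selection
            p2 = min(rng.sample(pop, 3), key=energy)
            cut = rng.randrange(1, len(VM_LOAD))
            child = p1[:cut] + p2[cut:]                # one-point crossover
            if rng.random() < 0.2:                     # point mutation
                child[rng.randrange(len(child))] = rng.randrange(N_PM)
            nxt.append(child)
        pop = nxt
        best = min([best] + pop, key=energy)           # keep the best ever seen
    return best

placement = ga()
print(placement, round(energy(placement), 1))
```

The fitness function rewards both consolidating VMs onto fewer machines and co-locating heavily communicating pairs, which is exactly the trade-off the combined objective introduces.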
Abstract:
Implementing educational reform requires partnerships, and university-school collaborations in the form of investigative and experimental projects can aim to determine the practicalities of reform. However, some funded projects do not achieve their intended outcomes. In the context of a new reform initiative in education, namely science, technology, engineering and mathematics (STEM) education, this article explores the management of a government-funded project: in a university-school partnership for STEM education, how can leadership be distributed to achieve project outcomes? Participants included university personnel from different STEM areas, school teachers and school executives. Data collected included observations, interviews, resource materials, and video and photographic images. Findings indicated that leadership roles were distributed and self-activated by project partners according to their areas of expertise and proximal activeness to the project phases, that is: (1) establishing partnerships; (2) planning and collaboration; (3) project implementation; and (4) project evaluation and further initiatives. Leadership can be intentional and unintentional within project phases, and understanding how leadership can be distributed and self-activated more purposefully may aid in generating more expedient project outcomes.
Abstract:
Modern applications comprise multiple components, such as browser plug-ins, often of unknown provenance and quality. Statistics show that failures of such components account for a high percentage of software faults. Enabling isolation of such fine-grained components is therefore necessary to increase the robustness and resilience of security-critical and safety-critical computer systems. In this paper, we evaluate whether such fine-grained components can be sandboxed through the hardware virtualization support available in modern Intel and AMD processors. We compare the performance and functionality of this approach to two previous software-based approaches. The results demonstrate that hardware isolation minimizes the difficulties encountered with software-based approaches, while also reducing the size of the trusted computing base, thus increasing confidence in the solution's correctness. We also show that our relatively simple implementation has equivalent run-time performance, with overheads of less than 34%, does not require custom toolchains and provides enhanced functionality over software-only approaches, confirming that hardware virtualization technology is a viable mechanism for fine-grained component isolation.
Abstract:
The symbolic and improvisational nature of livecoding requires a shared networking framework to be flexible and extensible, while at the same time providing support for synchronisation, persistence and redundancy. Above all, the framework should be robust and available across a range of platforms. This paper proposes tuple space as a suitable framework for network communication in ensemble livecoding contexts. The role of tuple space as a concurrency framework and the associated timing aspects of the tuple space model are explored through Spaces, an implementation of tuple space for the Impromptu environment.
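A minimal in-process tuple space (a sketch of the Linda-style model this builds on, not the Spaces implementation itself) shows the three core operations and wildcard matching that make the model attractive for loosely coupled ensemble communication:

```python
import threading

# Minimal tuple space sketch. Participants coordinate by writing tuples
# with out(), reading copies with rd(), and removing them with in_();
# None in a template acts as a wildcard (Linda-style pattern matching).
class TupleSpace:
    def __init__(self):
        self._tuples = []
        self._added = threading.Condition()

    def out(self, tup):
        with self._added:
            self._tuples.append(tuple(tup))
            self._added.notify_all()

    def _match(self, template, tup):
        return len(template) == len(tup) and all(
            t is None or t == v for t, v in zip(template, tup))

    def rd(self, template, timeout=1.0):
        """Return a matching tuple without removing it; block until one appears."""
        with self._added:
            while True:
                for tup in self._tuples:
                    if self._match(template, tup):
                        return tup
                if not self._added.wait(timeout):
                    return None

    def in_(self, template, timeout=1.0):
        """Remove and return a matching tuple."""
        with self._added:
            while True:
                for i, tup in enumerate(self._tuples):
                    if self._match(template, tup):
                        return self._tuples.pop(i)
                if not self._added.wait(timeout):
                    return None

ts = TupleSpace()
ts.out(("beat", 1, 120))                         # e.g. a shared tempo announcement
print(ts.rd(("beat", None, None)))               # → ('beat', 1, 120)
print(ts.in_(("beat", 1, None)))                 # removes and returns it
print(ts.rd(("beat", None, None), timeout=0.2))  # → None once the space is empty
```

Because readers name only the shape of the data they want, performers can join and leave an ensemble without any peer-to-peer wiring, which is the decoupling property the paper exploits.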