977 results for Distributed virtual machines
Abstract:
The aim of this work is the implementation of a distributed genetic algorithm (the so-called island algorithm) to accelerate the search for the optimum in the solution space. A distributed genetic algorithm also has a smaller chance of becoming trapped in a local optimum. The concept relies on the mutual cooperation of clients, each running a separate genetic algorithm on a local machine. Java, a technology created for building networked applications, was chosen as the tool for implementing the distributed genetic algorithm. Java provides a remote method invocation mechanism, Java RMI, through which objects can be sent between the clients and the RMI server.
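As background, the island model exchanges a few "migrant" individuals between otherwise independent populations. The sketch below shows what such an exchange could look like over Java RMI; the `Island` interface, `Individual` class, and migration scheme are illustrative assumptions, not the paper's actual design.

```java
import java.io.Serializable;
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.util.List;

// Individuals must be Serializable so RMI can copy them between machines.
class Individual implements Serializable {
    double[] genes;
    double fitness;
}

// Remote interface each island (client) could expose so that migrants can be
// exchanged between the local populations via the RMI registry.
interface Island extends Remote {
    // Accept migrant individuals sent from another island.
    void receiveMigrants(List<Individual> migrants) throws RemoteException;

    // Report the best individual found so far, e.g. to a coordinating server.
    Individual best() throws RemoteException;
}
```

Each island would periodically call `receiveMigrants` on its neighbours with copies of its fittest individuals, which is how the cooperation described above reduces the risk of converging to a local optimum.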
Abstract:
This article takes stock of the current state of research on knowledge processes in virtual teams (VTs) and consolidates the extant research findings. Virtual teams, on the one hand, constitute important organisational entities that facilitate the integration of diverse and distributed knowledge resources. On the other hand, collaborating in a virtual environment creates particular challenges for knowledge processes. The article seeks to consolidate the diverse evidence on knowledge processes in VTs with a specific focus on identifying the factors that influence the effectiveness of these processes. The article draws on the four basic knowledge processes outlined by Alavi and Leidner (2001) (i.e. creation, transfer, storage/retrieval and application) to frame the investigation and discuss the extant research. The consolidation of the existing research findings allows us to recognise the gaps in the understanding of knowledge processes in VTs and to identify important avenues for future research.
Abstract:
This paper proposes a new thermography-based maximum power point tracking (MPPT) scheme to address photovoltaic (PV) partial shading faults. Solar power generation utilizes a large number of PV cells connected in series and in parallel in an array, physically distributed across a large field. When a PV module is faulted or partial shading occurs, the PV system exhibits a nonuniform distribution of generated electrical power and a nonuniform thermal profile, giving rise to multiple maximum power points (MPPs). If left untreated, this reduces the overall power generation, and severe faults may propagate, resulting in damage to the system. In this paper, a thermal camera is employed for fault detection, and a new MPPT scheme is developed to move the operating point to an optimized MPP. Extensive data mining is conducted on the images from the thermal camera in order to locate global MPPs. Based on this, a virtual MPPT is set out to find the global MPP; this can reduce MPPT time and can be used to calculate the MPP reference voltage. Finally, the proposed methodology is experimentally implemented and validated by tests on a 600-W PV array.
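The two-stage idea can be pictured as a coarse selection over thermally derived candidate operating points followed by a conventional local refinement. The sketch below is a minimal illustration under assumptions: candidate voltages are taken as given (the output of the thermal-image mining, which is not shown), `measurePower` is a hypothetical callback mapping an operating voltage to measured array power, and the refinement stage is plain perturb-and-observe rather than the paper's exact scheme.

```java
import java.util.List;
import java.util.function.DoubleUnaryOperator;

public class GlobalMpptSketch {

    // Stage 1: coarse search over candidate voltages derived from the thermal
    // images, returning the reference voltage near the global MPP.
    static double selectGlobalCandidate(List<Double> candidateVolts,
                                        DoubleUnaryOperator measurePower) {
        double bestV = candidateVolts.get(0);
        double bestP = Double.NEGATIVE_INFINITY;
        for (double v : candidateVolts) {
            double p = measurePower.applyAsDouble(v);
            if (p > bestP) { bestP = p; bestV = v; }
        }
        return bestV;
    }

    // Stage 2: perturb-and-observe refinement around the chosen candidate.
    static double perturbAndObserve(double v, double step, int iterations,
                                    DoubleUnaryOperator measurePower) {
        double p = measurePower.applyAsDouble(v);
        for (int i = 0; i < iterations; i++) {
            double vNext = v + step;
            double pNext = measurePower.applyAsDouble(vNext);
            if (pNext < p) step = -step; // power dropped: reverse direction
            v = vNext;
            p = pNext;
        }
        return v;
    }
}
```

Because stage 1 starts the tracker near the global peak, stage 2 only has to climb a single local hill, which is what shortens the MPPT time under partial shading.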
Abstract:
This research presents several components encompassing the scope of the objective of Data Partitioning and Replication Management in Distributed GIS Databases. Modern Geographic Information Systems (GIS) databases are often large and complicated, so data partitioning and replication management problems need to be addressed in the development of an efficient and scalable solution.

Part of the research is to study the patterns of geographical raster data processing and to propose algorithms to improve the availability of such data. These algorithms and approaches target the granularity of geographic data objects as well as data partitioning in geographic databases, to achieve high data availability and Quality of Service (QoS) in distributed data delivery and processing. To achieve this goal, a dynamic, real-time approach is proposed for mosaicking digital images of different temporal and spatial characteristics into tiles. This dynamic approach reuses digital images on demand and generates mosaicked tiles only for the required region, according to the user's requirements such as resolution, temporal range, and target bands, to reduce redundancy in storage and to utilize available computing and storage resources more efficiently.

Another part of the research pursued methods for efficiently acquiring GIS data from external heterogeneous databases and Web services, as well as end-user GIS data delivery enhancements, automation, and 3D virtual reality presentation.

Vast numbers of computing, network, and storage resources available on the Internet are idle or not fully utilized. The proposed "Crawling Distributed Operating System" (CDOS) approach employs such resources and creates benefits for the hosts that lend their CPU, network, and storage resources to be used in a GIS database context.

The results of this dissertation demonstrate effective ways to develop a highly scalable GIS database. The approach developed in this dissertation has resulted in the creation of the TerraFly GIS database, which is used by the US government, researchers, and the general public to facilitate Web access to remotely sensed imagery and GIS vector information.
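The on-demand mosaicking described above amounts to generating a tile the first time a particular combination of region, resolution, temporal range, and bands is requested, and reusing it afterwards. A minimal sketch follows; the `TileRequest` fields mirror the requirements named in the abstract, while `Tile` and `renderMosaic` are hypothetical placeholders rather than the dissertation's actual API.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class TileCacheSketch {
    // The request key mirrors the user requirements named in the abstract.
    record TileRequest(String regionId, int resolution,
                       long timeFrom, long timeTo, String bands) {}
    record Tile(byte[] pixels) {}

    private final Map<TileRequest, Tile> cache = new ConcurrentHashMap<>();

    // Generate a mosaicked tile only when first requested; reuse it afterwards,
    // avoiding pre-rendered mosaics for every possible combination.
    Tile get(TileRequest request) {
        return cache.computeIfAbsent(request, this::renderMosaic);
    }

    private Tile renderMosaic(TileRequest request) {
        // ... select source images overlapping the region and time range,
        // resample to the requested resolution and bands, then composite ...
        return new Tile(new byte[0]); // placeholder
    }
}
```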
Abstract:
Distributed computing frameworks belong to a class of programming models that allow developers to launch workloads on large clusters of machines. Due to the dramatic increase in the volume of data gathered by ubiquitous computing devices, data analytic workloads have become a common case among distributed computing applications, making Data Science an entire field of Computer Science. We argue that a data scientist's concern lies in three main components: a dataset, a sequence of operations they wish to apply to this dataset, and some constraints related to their work (performance, QoS, budget, etc.). However, without domain expertise it is extremely difficult to perform data science: one needs to select the right amount and type of resources, pick a framework, and configure it. Moreover, users often run their applications in shared environments, ruled by schedulers that expect them to specify their resource needs precisely. Owing to the distributed and concurrent nature of these frameworks, monitoring and profiling are hard, high-dimensional problems that prevent users from making the right configuration choices and determining the right amount of resources they need. Paradoxically, the system gathers a large amount of monitoring data at runtime, which remains unused.

In the ideal abstraction we envision for data scientists, the system is adaptive, able to exploit monitoring data to learn about workloads and to process user requests into a tailored execution context. In this work, we study different techniques that have been used to make steps toward such system awareness, and we explore a new way to do so by applying machine learning techniques to recommend a specific subset of system configurations for Apache Spark applications. Furthermore, we present an in-depth study of Apache Spark executor configuration, which highlights the complexity of choosing the best one for a given workload.
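The executor configuration space the study explores can be made concrete with a small example. The sketch below sets the main executor knobs through `SparkConf`; the property names are real Spark configuration keys, but the values are illustrative assumptions, not recommendations for any particular workload.

```java
import org.apache.spark.SparkConf;
import org.apache.spark.sql.SparkSession;

public class ExecutorConfigSketch {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setAppName("executor-config-sketch")
                // Executor count, cores per executor, and memory per executor
                // together define the configuration space discussed above.
                .set("spark.executor.instances", "4")
                .set("spark.executor.cores", "2")
                .set("spark.executor.memory", "4g")
                .set("spark.executor.memoryOverhead", "512m")
                // Fix the allocation so the chosen configuration is actually used.
                .set("spark.dynamicAllocation.enabled", "false");

        SparkSession spark = SparkSession.builder().config(conf).getOrCreate();
        // ... submit the data science workload here ...
        spark.stop();
    }
}
```

The difficulty highlighted above comes from the fact that the best point in this space (few large executors versus many small ones, for instance) depends on the workload, which is what motivates learning recommendations from monitoring data.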
Abstract:
Bioscience subjects require a significant amount of training in laboratory techniques to produce highly skilled science graduates. Many techniques currently used in diagnostic, research and industrial laboratories require expensive single-user equipment; examples include next generation sequencing, quantitative PCR, mass spectrometry and other analytical techniques. The cost of the machines and reagents, together with limited access, frequently precludes undergraduate students from using such cutting edge techniques. In addition to cost and availability, the time taken for analytical runs on equipment such as High Performance Liquid Chromatography (HPLC) does not necessarily fit within the limitations of timetabling. Understanding the theory underlying these techniques without the accompanying practical classes can be unexciting for students. One alternative to wet laboratory provision is to use virtual simulations of such practicals, which enable students to see the machines and interact with them to generate data. The Faculty of Science and Technology at the University of Westminster has provided all second and third year undergraduate students with iPads so that these students all have access to a mobile device to assist with learning. We have purchased licences from Labster to access a range of virtual laboratory simulations. These virtual laboratories are fully equipped and require student responses to multiple answer questions in order to progress through the experiment. In a pilot study of the feasibility of the Labster virtual laboratory simulations on the iPad devices, second year Biological Science students (n=36) worked through the Labster HPLC simulation on iPads. The virtual HPLC simulation enabled students to optimise the conditions for the separation of drugs. Answers to multiple choice questions were necessary to progress through the simulation; these focussed on the underlying principles of the HPLC technique. Following the virtual laboratory simulation, students went to a real HPLC in the analytical suite in order to separate aspirin, caffeine and paracetamol. In a survey, 100% of students (n=36) in this cohort agreed that the Labster virtual simulation had helped them to understand HPLC. In free text responses one student commented that "The terminology is very clear and I enjoyed using Labster very much". One member of staff commented that "there was a very good knowledge interaction with the virtual practical".
Abstract:
Graphics Processing Units (GPUs) are becoming popular accelerators in modern High-Performance Computing (HPC) clusters. Installing GPUs on every node of the cluster is not efficient, resulting in high costs and power consumption as well as underutilisation of the accelerators. The research reported in this paper is motivated by the use of a few physical GPUs, providing cluster nodes with on-demand access to remote GPUs for a financial risk application. We hypothesise that sharing GPUs between several nodes, referred to as multi-tenancy, reduces the execution time and the energy consumed by an application. Two data transfer modes between the CPU and the GPUs, namely concurrent and sequential, are explored. The key result from the experiments is that multi-tenancy with few physical GPUs using sequential data transfers lowers the execution time and the energy consumed, thereby improving the overall performance of the application.
Abstract:
Utilization of renewable energy sources and energy storage systems is increasing as new policies foster the energy industry. However, the growth of distributed generation hinders the reliability of power systems. To stabilize them, the virtual power plant (VPP) has emerged as a novel power grid management system. The role of the VPP is to bring together different distributed energy resources and energy storage systems. This paper defines the core technologies of the VPP, namely demand response and ancillary services, drawing on cases from Korea, America and Europe. It also suggests VPP application solutions for the V2G market to restructure the national power industry in Korea.
Abstract:
Cache-coherent non-uniform memory access (ccNUMA) architecture is a standard design pattern for contemporary multicore processors, and future generations of architectures are likely to be NUMA. NUMA architectures create new challenges for managed runtime systems. Memory-intensive applications use the system's distributed memory banks to allocate data, and the automatic memory manager collects garbage left in these memory banks. The garbage collector may need to access remote memory banks, which entails access latency overhead and potential bandwidth saturation of the interconnection between memory banks. This dissertation makes five significant contributions to garbage collection on NUMA systems, with a case study implementation using the Hotspot Java Virtual Machine. It empirically studies data locality for a Stop-The-World garbage collector when tracing connected objects in NUMA heaps. First, it identifies a locality richness which exists naturally in connected objects that contain a root object and its reachable set: 'rooted sub-graphs'. Second, this dissertation leverages the locality characteristic of rooted sub-graphs to develop a new NUMA-aware garbage collection mechanism: a garbage collector thread processes a local root and its reachable set, which is likely to have a large number of objects on the same NUMA node. Third, a garbage collector thread steals references from sibling threads that run on the same NUMA node to improve data locality. This research evaluates the new NUMA-aware garbage collector using seven benchmarks from the established real-world DaCapo benchmark suite; the evaluation also involves the widely used SPECjbb benchmark, a Neo4j graph database Java benchmark, and an artificial benchmark. The results of the NUMA-aware garbage collector on a multi-hop NUMA architecture show an average performance improvement of 15%, and this gain is shown to result from improved NUMA memory access in a ccNUMA system. Fourth, the existing Hotspot JVM adaptive policy for configuring the number of garbage collection threads is shown to be suboptimal for current NUMA machines: the policy uses outdated assumptions and generates a constant thread count, and the Hotspot JVM still uses it in the production version. This research shows that the optimal number of garbage collection threads is application-specific, and that configuring the optimal number of threads yields better collection throughput than the default policy. Fifth, this dissertation designs and implements a runtime technique that uses heuristics from dynamic collection behavior to calculate an optimal number of garbage collector threads for each collection cycle. The results show an average improvement of 21% in garbage collection performance for the DaCapo benchmarks.
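The second and third contributions can be pictured as NUMA-aware work stealing among tracing threads. The collector itself lives inside the JVM (in C++), so the Java sketch below is only an illustration of the stealing order, with hypothetical per-thread deques and a thread-to-node mapping: drain local work first, then steal from same-node siblings, and only then go to remote nodes.

```java
import java.util.concurrent.ConcurrentLinkedDeque;

// Illustrative sketch only: real GC tracing queues hold heap references and
// live inside the JVM, not in application-level Java code.
class NumaStealingSketch {
    final ConcurrentLinkedDeque<Object>[] queues; // one work deque per GC thread
    final int[] nodeOf;                           // NUMA node of each GC thread

    NumaStealingSketch(ConcurrentLinkedDeque<Object>[] queues, int[] nodeOf) {
        this.queues = queues;
        this.nodeOf = nodeOf;
    }

    // Next object for thread `self` to trace, preferring node-local work.
    Object next(int self) {
        Object ref = queues[self].pollFirst();        // 1. own queue first
        if (ref != null) return ref;
        for (int t = 0; t < queues.length; t++) {     // 2. same-node siblings
            if (t != self && nodeOf[t] == nodeOf[self]) {
                ref = queues[t].pollLast();
                if (ref != null) return ref;
            }
        }
        for (int t = 0; t < queues.length; t++) {     // 3. remote nodes last
            if (nodeOf[t] != nodeOf[self]) {
                ref = queues[t].pollLast();
                if (ref != null) return ref;
            }
        }
        return null; // no work anywhere: this thread can terminate
    }
}
```

Preferring same-node victims keeps most traced objects on local memory banks, which is the source of the reported reduction in remote accesses.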
Abstract:
Let us no longer devote ourselves to a library service that consists of TRANSACTIONS. Let us instead move toward TRANSFORMATIONS. True, we have earned respect by finding books, answering reference questions, and cataloguing, but machines are now doing that very efficiently. How do we make the change? One method is to think about the future, imagining scenarios for the libraries we will have ten years from now, in 2005.
Abstract:
Virtual Screening (VS) methods can considerably aid clinical research by predicting how ligands interact with drug targets. However, the accuracy of most VS methods is constrained by limitations in the scoring function that describes biomolecular interactions, and even today these uncertainties are not completely understood. In order to improve the accuracy of the scoring functions used in most VS methods, we propose a novel hybrid approach in which neural network (NNET) and support vector machine (SVM) models are trained on databases of known active compounds (drugs) and inactive compounds; this information is then exploited to improve VS predictions.
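One way to picture the hybrid idea is as machine-learned rescoring layered on top of a conventional scoring function. The sketch below is a minimal illustration under assumptions: the `Scorer` interface, the ligand feature vectors, and the equal-weight blend are placeholders, not the paper's actual NNET/SVM models or combination rule.

```java
import java.util.Comparator;
import java.util.List;

// A model trained on known actives and inactives, reduced to one method.
interface Scorer {
    double score(double[] ligandFeatures); // predicted activity, higher is better
}

class HybridRescorer {
    private final Scorer nnet;
    private final Scorer svm;

    HybridRescorer(Scorer nnet, Scorer svm) {
        this.nnet = nnet;
        this.svm = svm;
    }

    // Re-rank candidate ligands by a blended machine-learned score.
    List<double[]> rerank(List<double[]> ligands) {
        return ligands.stream()
                .sorted(Comparator.comparingDouble(
                        (double[] l) -> 0.5 * nnet.score(l) + 0.5 * svm.score(l))
                        .reversed())
                .toList();
    }
}
```

In a pipeline, the docking engine would propose poses as usual, and this rescorer would reorder them so that compounds resembling known actives rise to the top.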
Abstract:
Performance and scalability of model transformations are becoming prominent topics in Model-Driven Engineering. In previous works we introduced LinTra, a platform for executing model transformations in parallel. LinTra is based on the Linda coordination model and is intended to be used as a middleware to which high-level model transformation languages are compiled. In this paper we present the initial results of our analysis of the scalability of out-place model-to-model transformation executions in LinTra when the models and the processing elements are distributed over a set of machines.
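For readers unfamiliar with Linda, its essence is that workers never address each other directly; they coordinate only through a shared tuple space with operations such as `out` (write a tuple) and `in` (blocking take). The sketch below is a deliberately simplified illustration, not LinTra's actual API: real Linda matches tuples against templates, whereas this version reduces the space to a single queue.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Minimal Linda-style coordination: producers `out` work items (e.g. source
// model elements to transform) and distributed workers `in` them, with no
// direct communication between the machines involved.
class TupleSpace<T> {
    private final BlockingQueue<T> space = new LinkedBlockingQueue<>();

    void out(T tuple) {                  // publish a tuple into the shared space
        space.add(tuple);
    }

    T in() throws InterruptedException { // take a tuple, blocking until one exists
        return space.take();
    }
}
```

This decoupling is what makes it natural to distribute both the models and the processing elements over a set of machines: any worker on any machine can take the next available tuple.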
Abstract:
The modern industrial environment is populated by a myriad of intelligent devices that collaborate to accomplish the numerous business processes in place at production sites. The close collaboration between humans and work machines poses interesting new challenges that industry must overcome in order to implement the new digital policies demanded by the industrial transition. The Industry 5.0 movement is a companion revolution to the previous Industry 4.0, and it relies on three characteristics that any industrial sector should pursue: human centrality, resilience, and sustainability. The fifth industrial revolution cannot be realised without building on Industry 4.0-enabled platforms. The common feature in the development of this kind of platform is the need to integrate the information and operational technology layers. Our thesis work focuses on the implementation of a platform addressing all the digitization features foreseen by the fourth industrial revolution, making IT/OT convergence inside production plants an improvement rather than a risk. Furthermore, we added modular features to our platform that enable the Industry 5.0 vision: we support human centrality through mobile crowdsensing techniques, and resilience and sustainability through pluggable cloud computing services combined with data coming from the crowd. We achieved important and encouraging results in all the domains in which we conducted our experiments. Our IT/OT convergence-enabled platform exhibits the performance needed to satisfy the strict requirements of production sites. The multi-layer capability of the framework enables the exploitation of data not strictly coming from work machines, allowing tighter interaction between the company, its employees, and its customers.
Abstract:
With the aim of heading towards a more sustainable future, there has been a noticeable increase in the installation of Renewable Energy Sources (RES) in power systems in recent years. Besides the evident environmental benefits, RES pose several technological challenges for the scheduling, operation, and control of transmission and distribution networks. This has raised the necessity of developing smart grids relying on a suitable distributed measurement infrastructure, for instance based on Phasor Measurement Units (PMUs). Not only can such devices estimate a phasor, they can also provide the time information that is essential for real-time monitoring. This Thesis falls within this context by analyzing the uncertainty requirements of PMUs in distribution and transmission applications. Concerning the latter, the reliability of PMU measurements during severe power system events is examined, whereas for the former, typical configurations of distribution networks are studied to develop target uncertainties. The second part of the Thesis is dedicated to the application of PMUs in low-inertia power grids. The replacement of traditional synchronous machines with inertia-less RES is progressively reducing the overall system inertia, resulting in faster and more severe events. In this scenario, PMUs may play a vital role, even though no standard requirements or target uncertainties are yet available. This Thesis investigates PMU-based applications in depth, proposing a new inertia index that relies only on local measurements and evaluating their reliability in low-inertia scenarios. It also develops possible uncertainty intervals based on the electrical instrumentation currently used in power systems and assesses interoperability with other devices before and after contingency events.
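As standard background for PMU-based inertia estimation (a natural starting point, though not necessarily the index proposed in this Thesis), the swing equation links the rate of change of frequency (RoCoF), which a PMU measures locally, to the active power imbalance:

```latex
\frac{2H}{f_0}\,\frac{\mathrm{d}f}{\mathrm{d}t} = \Delta P
\qquad\Longrightarrow\qquad
H = \frac{f_0\,\Delta P}{2\,\mathrm{d}f/\mathrm{d}t}
```

Here $H$ is the inertia constant in seconds, $f_0$ the nominal frequency, $\mathrm{d}f/\mathrm{d}t$ the measured RoCoF, and $\Delta P$ the per-unit power imbalance. As inertia-less RES replace synchronous machines, $H$ falls, so the same $\Delta P$ produces a larger RoCoF; this is why events become faster and more severe, and why accurate, time-synchronized local measurements matter.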
Abstract:
The quantity of electric energy used by a home, a business, or an electrically powered device is measured by an electricity meter, also known as an electric meter, electrical meter, or energy meter. Electric providers use meters located at customers' premises for billing. They are usually calibrated in billing units, the most common being the kilowatt hour (kWh), and are typically read once each billing cycle. When energy savings are sought during specific times, some meters may monitor demand, the highest amount of electricity used during a given interval. Some meters also feature relays for load shedding in response to signals during periods of peak load. The amount of electrical energy consumed by users is measured by a watt-hour meter, also known as an energy meter. To bill the electricity used by loads such as lights, fans, and other appliances, utilities install these devices everywhere: in households, businesses, and organizations. The watt is the fundamental unit of power; a kilowatt is one thousand watts, and one kilowatt used for one hour is regarded as one unit (kWh) of energy. These meters compute the product of the instantaneous voltage and current readings to obtain instantaneous power, which is then accumulated over a period to give the energy used during that time. Depending on the supply of the home or commercial installation, these may be single-phase or three-phase meters. For small service measurements, such as domestic consumers, they can be connected directly between line and load; for larger loads, step-down current transformers must be installed to handle the higher currents.
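The arithmetic in the last paragraph is easy to make concrete: instantaneous power is the product of the sampled voltage and current, and energy is that power accumulated over time. A minimal sketch follows; the sampling interval and the 230 V load in the example are illustrative assumptions.

```java
import java.util.Arrays;

public class EnergyMeterSketch {
    // Energy in kWh from simultaneous voltage (V) and current (A) samples
    // taken every dtSeconds: sum of p(t) = v(t) * i(t) over time.
    static double energyKwh(double[] volts, double[] amps, double dtSeconds) {
        double joules = 0.0;
        for (int k = 0; k < volts.length; k++) {
            joules += volts[k] * amps[k] * dtSeconds;
        }
        return joules / 3_600_000.0; // 1 kWh = 3.6 million joules
    }

    public static void main(String[] args) {
        // A 1 kW load (230 V at ~4.35 A) running for one hour uses 1 kWh,
        // i.e. exactly one billing unit.
        double[] v = new double[3600];
        double[] i = new double[3600];
        Arrays.fill(v, 230.0);
        Arrays.fill(i, 1000.0 / 230.0);
        System.out.println(energyKwh(v, i, 1.0)); // prints ~1.0
    }
}
```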