829 results for Distributed operating systems (Computers)
Abstract:
The P-found protein folding and unfolding simulation repository is designed to allow scientists to perform data mining and other analyses across large, distributed simulation data sets. There are two storage components in P-found: a primary repository of simulation data that is used to populate the second component, and a data warehouse that contains important molecular properties. These properties may be used for data mining studies. Here we demonstrate how grid technologies can support multiple, distributed P-found installations. In particular, we look at two aspects: firstly, how grid data management technologies can be used to access the distributed data warehouses; and secondly, how the grid can be used to transfer analysis programs to the primary repositories — this is an important and challenging aspect of P-found, due to the large data volumes involved and the desire of scientists to maintain control of their own data. The grid technologies we are developing with the P-found system will allow new large data sets of protein folding simulations to be accessed and analysed in novel ways, with significant potential for enabling scientific discovery.
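To make the code-to-data pattern concrete, here is a minimal sketch of shipping an analysis program to a remote primary repository rather than pulling the raw data out. All names (AnalysisJob, SITES, submit, the endpoints) are hypothetical illustrations, not the P-found API; a real deployment would rely on grid middleware for job submission and transfer.

```python
# Toy "move the analysis to the data" dispatcher. Everything here is a
# hypothetical stand-in for grid middleware; only a small result summary
# would travel back, never the raw trajectories, so data owners keep
# control of their simulation data.
from dataclasses import dataclass

@dataclass
class AnalysisJob:
    script: str        # analysis program shipped to the repository
    dataset_id: str    # simulation data set held at the remote site

SITES = {
    "lisbon": "gsiftp://pfound.example.org/repo",    # hypothetical endpoints
    "leeds": "gsiftp://pfound.example.ac.uk/repo",
}

def submit(job: AnalysisJob, site: str) -> str:
    """Pretend to ship the analysis program to the primary repository."""
    endpoint = SITES[site]
    return f"submitted {job.script} for {job.dataset_id} to {endpoint}"

if __name__ == "__main__":
    job = AnalysisJob(script="rmsd_analysis.py", dataset_id="ubiquitin-unfold-01")
    print(submit(job, "lisbon"))
```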
Abstract:
Reduced flexibility of low-carbon generation could pose new challenges for future energy systems. Both demand response and distributed storage may have a role to play in supporting future system balancing. This paper reviews how these technically different but functionally similar approaches compare and compete with one another. Household survey data are used to test the effectiveness of price signals in delivering demand responses for appliances with a high degree of agency. The underlying unit of storage for different demand response options is discussed, with particular focus on the ability to enhance demand-side flexibility in the residential sector. We conclude that a broad range of options, with different modes of storage, may need to be considered if residential demand flexibility is to be maximised.
Abstract:
The Complex Adaptive Systems, Cognitive Agents and Distributed Energy (CASCADE) project is developing a framework based on Agent-Based Modelling (ABM). The CASCADE Framework can be used both to gain policy- and industry-relevant insights into the smart grid concept itself and to serve as a platform for designing and testing distributed ICT solutions for smart-grid-based business entities. ABM is used to capture the behaviours of different social, economic and technical actors, which may be defined at various levels of abstraction. It is applied to understanding their interactions and can be adapted to include learning processes and emergent patterns. CASCADE models 'prosumer' agents (i.e., producers and/or consumers of energy) and 'aggregator' agents (e.g., traders of energy in both wholesale and retail markets) at various scales, from large generators and Energy Service Companies down to individual people and devices. The CASCADE Framework is formed of three main subdivisions that link models of electricity supply and demand, the electricity market, and power flow. It can also model the variability of renewable energy generation caused by the weather, which is an important issue for grid balancing and the profitability of energy suppliers. The development of CASCADE has already yielded some interesting early findings, demonstrating that it is possible for a mediating agent (aggregator) to achieve stable demand flattening across groups of domestic households fitted with smart energy control and communication devices, where direct wholesale price signals had previously been found to produce characteristic complex system instability. In another example, it has demonstrated how large changes in supply mix can be caused even by small changes in demand profile. Ongoing and planned refinements to the Framework will support investigation of demand response at various scales, the integration of the power sector with the transport and heat sectors, novel technology adoption and diffusion work, the evolution of new smart grid business models, and complex power grid engineering and market interactions.
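The demand-flattening result lends itself to a small illustration. The sketch below is a toy agent-based model in the spirit of CASCADE: prosumer agents shift a fraction of load in response to an aggregator's signal. The agent classes, control rule and numbers are illustrative assumptions, not the project's actual model.

```python
# Toy ABM: an aggregator flattens aggregate demand across prosumer agents.
import random

class Prosumer:
    def __init__(self):
        # Hourly baseline demand profile in kW (assumed values).
        self.base = [random.uniform(0.5, 1.5) for _ in range(24)]

    def respond(self, signal):
        # Shed 20% in hours flagged high, shift 10% into hours flagged low.
        return [d * (0.8 if s > 0 else 1.1) for d, s in zip(self.base, signal)]

class Aggregator:
    def signal(self, total):
        mean = sum(total) / len(total)
        return [1 if t > mean else -1 for t in total]  # 1 = reduce, -1 = fill

agents = [Prosumer() for _ in range(100)]
total = [sum(a.base[h] for a in agents) for h in range(24)]
sig = Aggregator().signal(total)
flattened = [sum(a.respond(sig)[h] for a in agents) for h in range(24)]
print(f"peak-to-trough before: {max(total) - min(total):.1f} kW, "
      f"after: {max(flattened) - min(flattened):.1f} kW")
```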
Abstract:
A universal systems design process is specified, tested in a case study and evaluated. It links English narratives to numbers using a categorical language framework, with mathematical mappings taking the place of conjunctions and numbers. The framework is a ring of English narrative words between 1 (option) and 360 (capital); beyond 360 the ring cycles back to 1. English narratives are shown to correspond to the field of fractional numbers. The process can enable the development, presentation and communication of complex narrative policy information among communities of any scale, on a software implementation known as the "ecoputer". The information is more accessible and comprehensive than that in conventional decision support because: (1) it is expressed in narrative language; and (2) the narratives are expressed as compounds of words within the framework. Hence option generation is made more effective than in conventional decision support processes, including Multiple Criteria Decision Analysis, Life Cycle Assessment and Cost-Benefit Analysis. The case study is of a participatory workshop on UK bioenergy project objectives and criteria, at which attributes were elicited in environmental, economic and social systems. From the attributes, the framework was used to derive consequences at a range of levels of precision; these are compared with the project objectives and criteria as set out in the Case for Support. The design process is to be supported by a social information manipulation, storage and retrieval system for numeric and verbal narratives attached to the "ecoputer", which will have an integrated verbal and numeric operating system. A novel design source-code language will assist the development of narrative policy. The utility of the program, including in the transition to sustainable development and in applications at both community micro-scale and policy macro-scale, is discussed from public, stakeholder, corporate, governmental and regulatory perspectives.
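As a rough illustration of the cyclic structure, the toy sketch below places words on a 1-360 ring that wraps back to 1. Only the two anchor words (1 = option, 360 = capital) come from the abstract; the wrap rule and helper function are assumptions for illustration.

```python
# Toy model of the 1..360 word ring described above. Only the two anchors
# are given in the abstract; the modular wrap rule is an assumption.
RING = {1: "option", 360: "capital"}   # remaining positions undefined here

def ring_position(n: int) -> int:
    """Map any positive integer onto the 1..360 ring (361 cycles to 1)."""
    return (n - 1) % 360 + 1

assert ring_position(361) == 1
assert ring_position(720) == 360
print(RING.get(ring_position(361), "undefined"))   # -> "option"
```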
Abstract:
In this paper we provide a connection between the geometrical properties of the attractor of a chaotic dynamical system and the distribution of extreme values. We show that the extremes of so-called physical observables are distributed according to the classical generalised Pareto distribution, and we derive explicit expressions for the scaling and the shape parameter. In particular, we derive that the shape parameter does not depend on the chosen observable, but only on the partial dimensions of the invariant measure on the stable, unstable and neutral manifolds. The shape parameter is negative and close to zero when high-dimensional systems are considered. This result agrees with what was derived recently using the generalised extreme value approach. Combining the results obtained using such physical observables with the properties of the extremes of distance observables, it is possible to derive estimates of the partial dimensions of the attractor along the stable and the unstable directions of the flow. Moreover, by writing the shape parameter in terms of the moments of the extremes of the considered observable and by using linear response theory, we relate the sensitivity to perturbations of the shape parameter to the sensitivity of the moments, of the partial dimensions, and of the Kaplan-Yorke dimension of the attractor. Preliminary numerical investigations provide encouraging results on the applicability of the theory presented here. The results do not apply to all combinations of Axiom A systems and observables, but the breakdown seems to be related to very special geometrical configurations.
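For reference, the classical generalised Pareto distribution mentioned above has the standard form below; the abstract's central claim is that the shape parameter depends on the partial dimensions alone, written here generically as a function of (d_s, d_u, d_n), not on the chosen observable.

```latex
% Classical generalised Pareto distribution for exceedances y = X - u over
% a high threshold u; sigma > 0 is the scale and xi the shape parameter.
\[
  H_{\xi,\sigma}(y) = 1 - \left(1 + \frac{\xi y}{\sigma}\right)^{-1/\xi},
  \qquad \xi \neq 0,
  \qquad
  H_{0,\sigma}(y) = 1 - e^{-y/\sigma},
\]
% The abstract's result: for physical observables the shape parameter is a
% function only of the partial dimensions of the invariant measure, and is
% negative (close to zero in high dimension):
\[
  \xi = \xi(d_s, d_u, d_n) < 0 .
\]
```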
Abstract:
Mobile Network Optimization (MNO) technologies have advanced at a tremendous pace in recent years, and the Dynamic Network Optimization (DNO) concept emerged years ago, aiming to continuously optimize the network in response to variations in network traffic and conditions. Yet DNO development is still in its infancy, mainly hindered by the significant bottleneck of lengthy optimization runtime. This paper identifies parallelism in greedy MNO algorithms and presents an advanced distributed parallel solution. The solution is designed, implemented and applied to real-life projects, where it yields a significant, highly scalable and nearly linear speedup of up to 6.9 on distributed 8-core systems and 14.5 on 16-core systems. Meanwhile, the optimization outputs exhibit self-consistency and high precision compared to their sequential counterparts. This is a milestone in realizing DNO. Furthermore, the techniques may be applied to other applications based on greedy optimization algorithms.
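The kind of parallelism described, independent evaluation of candidate moves inside each greedy step, can be sketched as follows. The cost function and move set are placeholders, not the paper's actual MNO algorithm.

```python
# Parallel greedy step: candidate moves are scored independently in
# parallel; the greedy selection itself stays sequential.
from concurrent.futures import ProcessPoolExecutor

def evaluate(move):
    """Score one candidate change to the network plan (placeholder cost)."""
    antenna, tilt = move
    return ((antenna * 0.1 - tilt) ** 2, move)   # lower score is better

def greedy_step(moves, workers=8):
    with ProcessPoolExecutor(max_workers=workers) as pool:
        scored = list(pool.map(evaluate, moves))  # the expensive part
    return min(scored)                            # pick the best move

if __name__ == "__main__":
    candidates = [(a, t) for a in range(100) for t in range(10)]
    best_score, best_move = greedy_step(candidates)
    print(best_score, best_move)
```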
Abstract:
It has been years since the introduction of the Dynamic Network Optimization (DNO) concept, yet DNO development is still at an early stage, largely due to a lack of breakthroughs in minimizing the lengthy optimization runtime. Our previous work, a distributed parallel solution, achieved a significant speed gain. To cater for the increased optimization complexity brought by the uptake of smartphones and tablets, this paper examines the potential areas for further improvement and presents a novel asynchronous distributed parallel design that minimizes inter-process communication. The new approach is implemented and applied to real-life projects, whose results demonstrate a speedup of 7.5 on a 16-core distributed system, compared with 6.1 for our previous solution, with no degradation in the optimization outcome. This is a solid sprint towards the realization of DNO.
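The asynchronous idea can be contrasted with the synchronous sketch above: results are consumed as workers finish, with no per-round barrier, so inter-process communication and idle time shrink. Again the cost function is a placeholder, not the paper's design.

```python
# Asynchronous variant: no round barrier; the best-so-far move is updated
# as soon as each worker's result arrives.
from concurrent.futures import ProcessPoolExecutor, as_completed

def evaluate(move):
    return ((move * 0.37) % 1.0, move)   # placeholder score in [0, 1)

if __name__ == "__main__":
    best = (float("inf"), None)
    with ProcessPoolExecutor(max_workers=16) as pool:
        futures = [pool.submit(evaluate, m) for m in range(1000)]
        for fut in as_completed(futures):      # consume results as they land
            score, move = fut.result()
            if score < best[0]:
                best = (score, move)           # greedy choice updates early
    print(best)
```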
Abstract:
The evolution of commodity computing led to the possibility of efficient usage of interconnected machines to solve computationally intensive tasks, which were previously solvable only by using expensive supercomputers. This, however, required new methods for process scheduling and distribution that consider network latency, communication cost, heterogeneous environments and distributed computing constraints. An efficient distribution of processes over such environments requires an adequate scheduling strategy, as the cost of inefficient process allocation is unacceptably high; knowledge and prediction of application behavior are therefore essential for effective scheduling. In this paper, we review the evolution of scheduling approaches, focusing on distributed environments. We also evaluate the current approaches for process behavior extraction and prediction, aiming at selecting an adequate technique for online prediction of application execution. Based on this evaluation, we propose a novel model for application behavior prediction that considers the chaotic properties of such behavior and the automatic detection of critical execution points. The proposed model is applied and evaluated for process scheduling in cluster and grid computing environments. The obtained results demonstrate that prediction of process behavior is essential for efficient scheduling in large-scale and heterogeneous distributed environments, outperforming conventional scheduling policies by a factor of 10, and even more in some cases. Furthermore, the proposed approach proves to be efficient for online prediction due to its low computational cost and good precision.
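To show where behaviour prediction plugs into scheduling, here is a minimal sketch. The paper's model exploits chaotic properties and critical execution points; the exponential moving average below is a simple conventional stand-in, not the proposed model.

```python
# Prediction-driven placement: predict each process's next CPU burst and
# greedily assign it to the least-loaded node.
def predict_next_burst(history, alpha=0.5):
    """EWMA prediction of the next CPU burst from observed bursts."""
    estimate = history[0]
    for burst in history[1:]:
        estimate = alpha * burst + (1 - alpha) * estimate
    return estimate

def schedule(process_histories, node_loads):
    """Place each process on the currently least-loaded node."""
    placement = {}
    for pid, history in process_histories.items():
        node = min(node_loads, key=node_loads.get)
        node_loads[node] += predict_next_burst(history)
        placement[pid] = node
    return placement

loads = {"node0": 0.0, "node1": 0.0}
histories = {"p1": [8.0, 6.0, 7.0], "p2": [2.0, 3.0], "p3": [9.0, 9.5]}
print(schedule(histories, loads))
```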
Abstract:
Carbon nanotubes rank amongst the potential candidates for a new family of nanoscopic devices, in particular for sensing applications. While defects in carbon nanotubes act as binding sites for foreign species, the current level of control over the fabrication process does not allow one to choose where these binding sites will actually be positioned. In this work we present a theoretical framework for accurately calculating the electronic and transport properties of long disordered carbon nanotubes containing a large number of binding sites randomly distributed along the sample. The method combines the accuracy and functionality of ab initio density functional theory for determining the electronic structure with a recursive Green's functions method. We apply this methodology to the problem of nitrogen-rich carbon nanotubes, first considering different types of defects and then demonstrating how our simulations can help in the field of sensor design by allowing one to compute the transport properties of realistic nanotube devices containing a large number of randomly distributed binding sites.
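For orientation, these are the generic Green's-function transport expressions used in calculations of this kind; the recursive scheme builds the Green's function of a long tube section by section. They are standard NEGF formulas, not necessarily the authors' exact working equations.

```latex
% Retarded Green's function of the device region coupled to left/right
% leads through self-energies Sigma_L, Sigma_R (eta -> 0+):
\[
  G^{r}(E) = \left[(E + i\eta)\,I - H - \Sigma_{L}(E) - \Sigma_{R}(E)\right]^{-1}.
\]
% Landauer-type transmission (Caroli formula), with the broadening
% matrices Gamma obtained from the self-energies:
\[
  T(E) = \operatorname{Tr}\!\left[\Gamma_{L}(E)\, G^{r}(E)\, \Gamma_{R}(E)\, G^{r\dagger}(E)\right],
  \qquad
  \Gamma_{L/R} = i\left(\Sigma_{L/R} - \Sigma_{L/R}^{\dagger}\right).
\]
```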
Abstract:
This paper applies the concepts and methods of complex networks to the development of models and simulations of master-slave distributed real-time systems, introducing an upper bound on the allowable delivery time of the packets carrying computation results. Two representative interconnection models are taken into account: uniformly random and scale-free (Barabási-Albert), including the presence of background packet traffic. The results identify the uniformly random interconnection scheme as substantially more efficient than its scale-free counterpart. They also show that increased latency tolerance of the application provides no help under congestion.
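The two interconnection models are easy to reproduce; the sketch below generates both with networkx and prints degree statistics, which already hint at why scale-free hubs can become congestion bottlenecks while the uniformly random graph spreads load more evenly. The full master-slave packet simulation with delivery deadlines is beyond this sketch.

```python
# Compare the two interconnection topologies discussed above.
import networkx as nx

n = 1000
uniform = nx.gnm_random_graph(n, 3 * n, seed=1)       # uniformly random
scale_free = nx.barabasi_albert_graph(n, 3, seed=1)   # Barabási-Albert

for name, g in [("uniform", uniform), ("scale-free", scale_free)]:
    degrees = [d for _, d in g.degree()]
    # A large max degree marks hub nodes where packet traffic concentrates.
    print(f"{name:11s} max degree = {max(degrees):4d}, "
          f"mean = {sum(degrees) / n:.1f}")
```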
Abstract:
A Petri net is usually applied as an RFID modelling tool. This paper presents a different approach to Petri nets for RFID systems: the elementary Petri net inside an RFID distributed database, or PNRD. It is a first step towards improving the integration of RFID and control systems, based on a formal data structure that identifies and updates the product state during real-time process execution, allowing automatic discovery of unexpected events during tag data capture. The approach has two main features: RFID tags are used as a database of the expected object process and of the last product state; and Petri net analysis is applied to automatically update the last-product-state registry during reader data capture. In Petri net terms, RFID reader data capture can be viewed as a direct analysis of the locality of a specific transition within a specific workflow; accordingly, RFID readers are expected to store the Petri net control vector list related to each tag ID. This paper presents the cornerstones of PNRD and an implementation example in a software tool called DEMIS (Distributed Environment in Manufacturing Information Systems).
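The elementary firing rule that PNRD relies on can be sketched in a few lines: a transition is enabled when all its input places hold tokens, and a failed firing corresponds to an unexpected event detected at tag data capture. The two-step product workflow below is illustrative, not taken from the paper.

```python
# Elementary Petri net firing rule. In PNRD the marking would live on the
# RFID tag and be updated at each reader; here it is a plain dict.
marking = {"raw": 1, "machined": 0, "packed": 0}
transitions = {
    "machine": ({"raw"}, {"machined"}),   # (input places, output places)
    "pack": ({"machined"}, {"packed"}),
}

def fire(name):
    inputs, outputs = transitions[name]
    if all(marking[p] > 0 for p in inputs):   # transition enabled?
        for p in inputs:
            marking[p] -= 1
        for p in outputs:
            marking[p] += 1
        return True
    return False                              # unexpected event detected

print(fire("pack"))    # False: product not machined yet -> flagged
print(fire("machine"), fire("pack"), marking)
```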
Abstract:
The purpose of this thesis is to show how vulnerability testing can be used to identify and search for security flaws in computer networks. The goal is partly to give a general description of different types of vulnerability testing methods, and partly to present the method and results of an actual vulnerability test. A document containing the results of the vulnerability test, together with solutions to the high-risk vulnerabilities found, will be handed over to the organization. A further goal is to carry out and present this work in the form of a scholarly work. The problem was to show how to perform vulnerability tests and identify vulnerabilities in the organization's network and systems. Programs had to be run under controlled circumstances so that they did not burden the network, and the vulnerability tests were conducted sequentially, since data from the survey was needed to continue the scan. A survey of the network was carried out, and data such as operating system versions were collected in tables. A number of systems were selected from the tables and scanned with Nessus. The result was a table of the network and a table of the vulnerabilities found. The table of vulnerabilities has helped the organization to remediate these vulnerabilities by updating the affected computers. In addition, a wireless network using WEP encryption, which is insecure, was detected and decrypted.
Abstract:
Objective: For the evaluation of the energetic performance of combined renewable heating systems that supply space heat and domestic hot water for single-family houses, the dynamic behaviour, component interactions and control of the system play a crucial role and should be included in test methods. Methods: New dynamic whole-system test methods were developed based on "hardware-in-the-loop" concepts. Three similar approaches are described and their differences are discussed. The methods were applied to testing solar thermal systems in combination with fossil fuel boilers (heating oil and natural gas), biomass boilers, and/or heat pumps. Results: All three methods were able to show the performance of combined heating systems under transient operating conditions. The methods often detected unexpected behaviour of the tested system that cannot be detected by the steady-state performance tests usually applied to single components. Conclusion: Further work will be needed to harmonize the different test methods in order to reach comparable results between the different laboratories. Practice implications: A harmonized approach to whole-system tests may lead to new test standards and improve the accuracy of performance prediction, as well as reduce the need for field tests.
Abstract:
Dynamic system test methods for heating systems were developed and applied by the institutes SERC and SP from Sweden, INES from France and SPF from Switzerland already before the MacSheep project started. These test methods followed the same principle: a complete heating system, including heat generators, storage, control etc., is installed on the test rig; the test rig software and hardware simulates and emulates the heat load for space heating and domestic hot water of a single-family house, while the unit under test has to act autonomously to cover the heat demand during a representative test cycle. Within work package 2 of the MacSheep project these similar, but different, test methods were harmonized and improved. The work undertaken includes:
• Harmonization of the physical boundaries of the unit under test.
• Harmonization of the boundary conditions of climate and load.
• Definition of an approach to reach an identical space heat load in combination with autonomous control of the space heat distribution by the unit under test.
• Derivation and validation of new six-day and twelve-day test profiles for direct extrapolation of test results.
The new harmonized test method combines the advantages of the different methods that existed before the MacSheep project. The new method is a benchmark test: the load for space heating and domestic hot water preparation is identical for all tested systems, and the result is representative of the performance of the system over a whole year. Thus, no modelling and simulation of the tested system is needed in order to obtain the benchmark results for a yearly cycle, and the method is also applicable to products for which simulation models are not yet available. Some of the advantages of the new whole-system test method and performance rating, compared to the testing and energy rating of single components, are:
• Interactions between the different components of a heating system, e.g. storage, solar collector circuit, heat pump and control, are included and evaluated in the test.
• Dynamic effects are included and influence the result just as they influence the annual performance in the field.
• Heat losses influence the results in a more realistic way, since they are evaluated under "real installed" and representative part-load conditions rather than under single-component steady-state conditions.
The described method is also suited for the development process of new systems, where it replaces time-consuming and costly field testing, with the advantages of higher accuracy of the measured data (compared to the measurement equipment typically used in field tests) and identical, thus comparable, boundary conditions. The method can therefore be used for system optimization on the test bench under realistic operating conditions, i.e. under a relevant operating environment in the lab. This report describes the physical boundaries of the tested systems, as well as the test procedures and the requirements for both the unit under test and the test facility. The new six-day and twelve-day test profiles are also described, as are the validation results.
Abstract:
Emissions from residential combustion appliances vary significantly depending on the firing behaviour and combustion conditions, in addition to combustion technology and fuel quality. Although wood pellet combustion in residential heating boilers is efficient, the combustion conditions during the start-up and stop phases are not optimal and produce significantly higher emissions of carbon monoxide and hydrocarbons from incomplete combustion. The emissions from the start-up and stop phases of pellet boilers are not fully taken into account in the test methods for ecolabels, which focus primarily on emissions during operation at full load and part load. The objective of the thesis is to investigate the emission characteristics during realistic operation of residential wood pellet boilers, in order to identify when the major part of the annual emissions occurs.

Emissions from four residential wood pellet boilers were measured and characterized for three operating phases (start-up, steady and stop). Emissions from realistic operation of combined solar and wood pellet heating systems were continuously measured to investigate the influence of the start-up and stop phases on total annual emissions. The measured emission data from the pellet devices were used to build an emission model to predict the annual emission factors for the dynamic operation of the heating system, using the simulation software TRNSYS.

Start-up emissions are found to vary with ignition type, supply of air and fuel, and the time to complete the phase. Stop emissions are influenced by the fan operation characteristics and the cleaning routine. Under realistic operating conditions, the start-up and stop phases contribute 80-95% of the annual carbon monoxide (CO) emissions, 60-90% of the total hydrocarbons (TOC), 10-20% of the nitrogen oxides (NO), and 30-40% of the particle emissions. Annual emission factors from realistic operation of the tested residential heating system with a top-fed wood pellet boiler can be between 190 and 400 mg/MJ for CO, between 60 and 95 mg/MJ for NO, between 6 and 25 mg/MJ for TOC, between 30 and 116 mg/MJ for particulate matter, and between 2x10^13/MJ and 4x10^13/MJ for the number of particles. If the boiler has a cleaning sequence with compressed air, such as boiler B2, the annual CO emission factor can be up to 550 mg/MJ. Average CO, TOC and particle emissions under realistic annual conditions were greater than the limit values of two ecolabels. These results highlight the importance of the start-up and stop phases in annual emission factors (especially for CO and TOC). Since a large or dominating part of the annual emissions in real operation arises from the start-up and stop sequences, the test methods required by ecolabels should take these emissions into account; this would encourage boiler manufacturers to minimize annual emissions.

The annual emissions of a residential pellet heating system can be reduced by optimizing the number of start-ups of the pellet boiler. It is possible to eliminate up to 85% of the start-ups by optimizing the system design and its controller, for example by switching off the boiler pump after the boiler stops, using two temperature sensors for boiler ON/OFF control, optimizing the positions of the connections to the storage tank, increasing the mixing valve temperature in the boiler circuit, and decreasing the pump flow rate. With an 85% reduction in start-ups, the CO and TOC emission factors were reduced by 75%, while a 13% increase in NO and a 15% increase in particle emissions were observed.
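The arithmetic behind an annual emission factor of the kind reported can be illustrated as follows; all numbers are assumptions chosen only to reproduce the reported orders of magnitude, not measurements from the thesis.

```python
# Illustrative annual CO emission factor: phase emissions summed over a
# year's operating cycles, divided by the annual heat output. All values
# below are assumed for illustration.
starts_per_year = 3000          # boiler on/off cycles per year (assumed)
co_per_start_mg = 2000.0        # CO per start-up event (assumed)
co_per_stop_mg = 500.0          # CO per stop event (assumed)
co_steady_mg_per_mj = 10.0      # CO during steady combustion (assumed)
annual_heat_mj = 40_000.0       # delivered heat per year (assumed)

start_stop_mg = starts_per_year * (co_per_start_mg + co_per_stop_mg)
annual_co_mg = start_stop_mg + co_steady_mg_per_mj * annual_heat_mj
print(f"annual CO emission factor: {annual_co_mg / annual_heat_mj:.0f} mg/MJ")
print(f"share from start/stop: {100 * start_stop_mg / annual_co_mg:.0f}%")
```

With these assumed inputs the sketch gives roughly 198 mg/MJ with about 95% from start/stop phases, consistent in magnitude with the ranges reported above.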