905 results for distributed algorithms
Abstract:
Where users interact in a distributed virtual environment, the actions of each user must be observed by peers with sufficient consistency and within a limited delay so as not to be detrimental to the interaction. The consistency control issue may be split into three parts: update control; consistent enactment and evolution of events; and causal consistency. The delay in the presentation of events, termed latency, depends primarily on the network propagation delay and on the consistency control algorithms. The latency induced by the consistency control algorithm, in particular causal ordering, is proportional to the number of participants. This paper describes how the effect of network delays may be reduced and introduces a scalable solution that provides sufficient consistency control while minimising its effect on latency. The principles described have been developed at Reading over the past five years. Similar principles are now emerging in the simulation community through the HLA standard. This paper attempts to validate the suggested principles within the schema of distributed simulation and virtual environments and to compare and contrast them with those described in the HLA definition documents.
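A note for readers: the causal-ordering overhead this abstract refers to is commonly realized with vector clocks, where every update carries one counter per participant, which is exactly why latency and message size grow with the number of participants. A minimal sketch of that standard mechanism (not the paper's own solution; class and method names are illustrative):

```python
class CausalPeer:
    """Textbook causal-order delivery via vector clocks: each update carries
    a vector with one entry per participant."""

    def __init__(self, peer_id: int, n_peers: int):
        self.id = peer_id
        self.clock = [0] * n_peers   # one counter per participant
        self.pending = []            # updates waiting for causal predecessors

    def send(self, payload):
        self.clock[self.id] += 1
        return (self.id, list(self.clock), payload)

    def deliverable(self, sender, stamp):
        # Deliver only if this is the next update from `sender` and we have
        # already seen everything the sender had seen when it sent it.
        return (stamp[sender] == self.clock[sender] + 1 and
                all(stamp[k] <= self.clock[k]
                    for k in range(len(stamp)) if k != sender))

    def receive(self, msg):
        self.pending.append(msg)
        delivered, progress = [], True
        while progress:
            progress = False
            for m in list(self.pending):
                sender, stamp, payload = m
                if self.deliverable(sender, stamp):
                    self.clock[sender] = stamp[sender]
                    delivered.append(payload)
                    self.pending.remove(m)
                    progress = True
        return delivered
```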
Abstract:
Distributed energy and water balance models require time-series surfaces of the meteorological variables involved in hydrological processes. Most hydrological GIS-based models apply simple interpolation techniques to extrapolate the point-scale values registered at weather stations to the watershed scale. In mountainous areas, where the monitoring network covers the complex terrain heterogeneity ineffectively, simple geostatistical methods for spatial interpolation are not always representative enough, and algorithms that explicitly or implicitly account for the features creating strong local gradients in the meteorological variables must be applied. Originally developed as a meteorological pre-processing tool for a complete hydrological model (WiMMed), MeteoMap has become independent software. The individual interpolation algorithms used to approximate the spatial distribution of each meteorological variable were carefully selected taking into account both the specific variable being mapped and the common lack of input data in Mediterranean mountainous areas. They include corrections with height for both rainfall and temperature (Herrero et al., 2007) and topographic corrections for solar radiation (Aguilar et al., 2010). MeteoMap is GIS-based freeware, available upon registration. Input data include weather station records and topographic data, and the output consists of tables and maps of the meteorological variables at hourly, daily, predefined rainfall-event-duration, or annual scales. It offers its own pre- and post-processing tools, including video output, map printing, and the possibility of exporting the maps to images or ASCII ArcGIS formats. This study presents the user-friendly interface of the software and shows some case studies with applications to hydrological modeling.
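As a rough illustration of the height corrections the abstract mentions (in the spirit of a lapse-rate adjustment, not the exact formulation of Herrero et al., 2007), a temperature surface can be built by reducing station readings to a common datum, interpolating, and restoring terrain height; the constant and function below are assumptions:

```python
import numpy as np

LAPSE_RATE = -0.0065  # K per metre; illustrative constant, not WiMMed's calibrated value

def temperature_surface(stations, grid_xy, grid_z, power=2.0):
    """stations: list of (x, y, z, temperature). Returns temperatures at the
    grid points, applying a linear correction with height around an
    inverse-distance-weighted interpolation."""
    xy = np.array([(s[0], s[1]) for s in stations])
    z = np.array([s[2] for s in stations])
    t = np.array([s[3] for s in stations])
    t_datum = t - LAPSE_RATE * z               # reduce readings to a common datum
    d = np.linalg.norm(grid_xy[:, None, :] - xy[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-9) ** power     # inverse-distance weights
    t_interp = (w * t_datum).sum(axis=1) / w.sum(axis=1)
    return t_interp + LAPSE_RATE * grid_z      # restore local terrain height
```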
Abstract:
Distribution systems with distributed generation require new analysis methods, since networks are no longer passive. Two of the main problems in this new scenario are network reconfiguration and loss allocation. This work presents a graphic distribution-system simulator, developed with reconfiguration functions and a special focus on loss allocation, both considering the presence of distributed generation. The simulator uses a fast and robust power flow algorithm based on the current-summation backward-forward technique. The reconfiguration problem is solved through a heuristic methodology, and the loss allocation function, based on the Zbus method, is presented as an attached result for each obtained configuration. Results are presented and discussed, highlighting the ease of analysis the graphic simulator provides, which makes it an excellent tool for planning and operation engineers and very useful for training. © 2004 IEEE.
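The current-summation backward-forward sweep named in the abstract is a standard radial power flow; a compact sketch under the assumption of a radial feeder with nodes numbered so that parents precede children (the data layout is illustrative, not the simulator's):

```python
import numpy as np

def backward_forward_sweep(parent, z_line, s_load, v_root=1.0 + 0j,
                           tol=1e-6, max_iter=50):
    """Current-summation sweep for a radial feeder. parent[i] is the upstream
    node of i (parent[0] = -1 for the substation), z_line[i] the impedance of
    the branch feeding i, s_load[i] the complex power demand at i. Nodes are
    assumed numbered so that parent[i] < i."""
    n = len(parent)
    v = np.full(n, v_root, dtype=complex)
    for _ in range(max_iter):
        # Backward sweep: accumulate branch currents from the leaves to the root.
        i_branch = np.conj(np.asarray(s_load) / v)
        for node in range(n - 1, 0, -1):
            i_branch[parent[node]] += i_branch[node]
        # Forward sweep: update voltages from the root towards the leaves.
        v_new = v.copy()
        for node in range(1, n):
            v_new[node] = v_new[parent[node]] - z_line[node] * i_branch[node]
        if np.max(np.abs(v_new - v)) < tol:
            return v_new
        v = v_new
    return v
```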
Abstract:
Here, a multiobjective performance index for distribution systems with distributed generation, based on a steady-state analysis of the network, is proposed. This index quantifies the impact of distributed generation on total losses, voltage profile, and short-circuit currents, and is used as the objective function in an evolutionary algorithm that searches for the best points for connecting distributed generators. Moreover, a loss allocation technique, based on the Zbus method, is applied to the original configuration of the network to obtain a good-quality initial population. An IEEE medium-voltage distribution network is analysed, and results are presented and discussed.
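The paper's exact index is not given in the abstract, but multiobjective indices of this kind are typically weighted sums of normalized impact measures; a hedged sketch in which the weights and the normalization are assumptions:

```python
def dg_impact_index(losses, losses_base, v_dev, v_dev_base,
                    i_sc, i_sc_base, weights=(0.4, 0.3, 0.3)):
    """Illustrative composite index (weights and normalization are assumptions,
    not the paper's): lower is better for each DG placement candidate."""
    terms = (losses / losses_base,   # total losses relative to the no-DG case
             v_dev / v_dev_base,     # voltage-profile deviation
             i_sc / i_sc_base)       # short-circuit current increase
    return sum(w * t for w, t in zip(weights, terms))
```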
Abstract:
In large distributed systems, where shared resources are owned by distinct entities, there is a need to reflect resource ownership in resource allocation. An appropriate resource management system should guarantee that resource owners have access to a share of resources proportional to the share they provide. To achieve this, policies can be used for revoking access to resources currently used by other users. In this paper, a scheduling policy based on the concept of distributed ownership, called the Owner Share Enforcement Policy (OSEP), is introduced. OSEP's goal is to guarantee that owners do not have their jobs postponed for long periods of time. We evaluate the results achieved with the application of this policy using metrics that describe policy violation, loss of capacity, policy cost, and user satisfaction in environments with and without job checkpointing. We also evaluate and compare the OSEP policy with the Fair-Share policy, and from these results it is possible to capture the trade-offs among different ways of achieving fairness based on user satisfaction. © 2009 IEEE.
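A hedged sketch of the kind of ownership check OSEP implies (the revocation rule, names, and data layout are illustrative, not the paper's exact algorithm): foreign jobs are revoked, with optional checkpointing, until the owner can occupy the share it provides:

```python
def enforce_owner_share(owner_id, provided_share, queued_demand, nodes,
                        checkpoint=None):
    """Illustrative OSEP-style rule (not the paper's exact algorithm): free
    nodes held by other users until `owner_id` can occupy the lesser of the
    share it provides and its queued demand. `nodes` maps node -> user id of
    the running job (or None if idle)."""
    entitled = min(provided_share, queued_demand)
    held = sum(1 for u in nodes.values() if u == owner_id)
    freed = []
    for node, user in nodes.items():
        if held + len(freed) >= entitled:
            break
        if user is not None and user != owner_id:
            if checkpoint:
                checkpoint(node)   # save the evicted job's state for later resumption
            nodes[node] = None     # revoke: node now available to the owner
            freed.append(node)
    return freed

# Example: owner "A" provided 3 nodes but holds only 1; two foreign jobs are evicted.
cluster = {"n1": "A", "n2": "B", "n3": "B", "n4": "C", "n5": None}
print(enforce_owner_share("A", provided_share=3, queued_demand=5, nodes=cluster))
```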
Abstract:
The multi-relational data mining approach has emerged as an alternative for the analysis of structured data, such as relational databases. Unlike traditional algorithms, multi-relational proposals allow mining multiple tables directly, avoiding costly join operations. This paper presents a comparative study involving the traditional Patricia Mine algorithm and its proposed multi-relational counterpart, MR-Radix, in order to evaluate the performance of the two approaches to mining association rules in relational databases. The study presents two original contributions: the proposal of the multi-relational algorithm MR-Radix, which is efficient for relational databases both in execution time and in memory usage, and empirical evidence of the multi-relational approach's performance advantage over several tables, since it avoids costly join operations across multiple tables. © 2011 IEEE.
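The join-avoidance argument can be made concrete with a toy example: the support of a pattern spanning two tables can be counted with one pass over the child table and no materialized join (this illustrates the idea only; MR-Radix's actual Patricia-tree structures are different):

```python
from collections import defaultdict

customers = {1: "gold", 2: "silver", 3: "gold"}   # id -> segment
orders = [(1, "laptop"), (1, "mouse"), (2, "laptop"), (3, "laptop")]

# Support of the multi-relational pattern (segment=gold AND bought laptop),
# computed with one pass over `orders` and no join materialization.
bought = defaultdict(set)
for cust_id, item in orders:
    bought[cust_id].add(item)

support = sum(1 for cid, seg in customers.items()
              if seg == "gold" and "laptop" in bought[cid]) / len(customers)
print(support)   # 2 of 3 customers match -> 0.666...
```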
Abstract:
Traditionally, ancillary services are supplied by large conventional generators. However, with the huge penetration of distributed generators (DGs) resulting from the growing interest in satisfying energy requirements, and considering the benefits they can bring to the electrical system and to the environment, it appears reasonable to assume that ancillary services could also be provided by DGs in an economical and efficient way. In this paper, a settlement procedure for a reactive power market for DGs in distribution systems is proposed. Attention is directed to wind turbines connected to the network through permanent-magnet synchronous generators and doubly-fed induction generators. The generation uncertainty of this kind of DG is reduced by running a multi-objective optimization algorithm in multiple probabilistic scenarios through the Monte Carlo method and by representing the active power generated by the DGs through Markov models. The objectives to be minimized are the payments of the distribution system operator to the DGs for reactive power, the curtailment of transactions committed in a previously settled active power market, the losses in the lines of the network, and a voltage profile index. The proposed methodology was tested using a modified IEEE 37-bus distribution test system. © 1969-2012 IEEE.
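One named building block, representing DG active power with Markov models inside a Monte Carlo loop, can be sketched as follows (the states and transition matrix are invented for illustration, not the paper's fitted model):

```python
import numpy as np

# Illustrative 3-state Markov model of a wind DG's active power output
# (per-unit states and transition probabilities are made up, not the paper's).
states = np.array([0.0, 0.5, 1.0])        # p.u. output: calm, partial, rated
P = np.array([[0.7, 0.3, 0.0],
              [0.2, 0.6, 0.2],
              [0.0, 0.4, 0.6]])           # row-stochastic transition matrix

def sample_scenario(hours, rng, start=0):
    """Draw one Monte Carlo scenario of hourly DG output."""
    s, path = start, []
    for _ in range(hours):
        s = rng.choice(3, p=P[s])
        path.append(states[s])
    return np.array(path)

rng = np.random.default_rng(1)
scenarios = [sample_scenario(24, rng) for _ in range(1000)]  # feeds the optimization stage
```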
Abstract:
Centralized and distributed methods are two connection management schemes in wavelength-convertible optical networks. In earlier work, the centralized scheme is said to have a lower network blocking probability than the distributed one. Hence, much of the previous work on connection management has focused on comparing different algorithms within only the distributed scheme or only the centralized scheme. However, we believe that the network blocking probability of these two connection management schemes depends, to a great extent, on the network traffic patterns and reservation times. Our simulation results reveal that the performance improvement (in terms of blocking probability) of the centralized method over the distributed method is inversely proportional to the ratio of average connection interarrival time to reservation time. Once that ratio increases beyond a threshold, the two connection management schemes yield almost the same blocking probability under the same network load. In this paper, we review the working procedures of the distributed and centralized schemes, discuss the trade-off between them, compare the two methods under different network traffic patterns via simulation, and give our conclusions based on the simulation data.
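The driving quantity, the ratio of mean connection interarrival time to reservation time, is easy to probe with a small single-link simulation (holding time stands in for reservation time here, and signaling delays are not modeled, so the sketch abstracts away the centralized/distributed difference itself):

```python
import heapq, random

def blocking_probability(n_requests, mean_interarrival, mean_holding, wavelengths):
    """Poisson arrivals, exponential holding times, W wavelengths on one link."""
    t, busy, blocked = 0.0, [], 0      # busy: min-heap of release times
    for _ in range(n_requests):
        t += random.expovariate(1.0 / mean_interarrival)
        while busy and busy[0] <= t:
            heapq.heappop(busy)        # free wavelengths whose connections ended
        if len(busy) >= wavelengths:
            blocked += 1               # no free wavelength: request blocked
        else:
            heapq.heappush(busy, t + random.expovariate(1.0 / mean_holding))
    return blocked / n_requests

# Blocking falls as the interarrival/holding ratio grows, mirroring the trend
# the abstract reports for both management schemes.
for ratio in (0.25, 0.5, 1.0, 2.0):
    print(ratio, blocking_probability(100_000, ratio, 1.0, wavelengths=4))
```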
Abstract:
There are several variants of the widely used Fuzzy C-Means (FCM) algorithm that support clustering data distributed across different sites. Those methods have been studied under different names, such as collaborative and parallel fuzzy clustering. In this study, we augment two FCM-based clustering algorithms used to cluster distributed data by arriving at constructive ways of determining essential parameters of the algorithms (including the number of clusters) and by forming a set of systematically structured guidelines, such as selecting the specific algorithm depending on the nature of the data environment and the assumptions made about the number of clusters. A thorough complexity analysis, covering space, time, and communication aspects, is reported. A series of detailed numeric experiments is used to illustrate the main ideas discussed in the study.
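For reference, the centralized FCM iteration that the distributed and collaborative variants build upon (standard textbook formulation; the inter-site exchange of prototypes is not shown):

```python
import numpy as np

def fcm(X, c, m=2.0, tol=1e-5, max_iter=100, seed=0):
    """Standard Fuzzy C-Means: X is (n, d), c the number of clusters,
    m > 1 the fuzzifier. Returns (centers, membership matrix)."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))   # memberships sum to 1 per point
    for _ in range(max_iter):
        Um = U ** m
        V = Um.T @ X / Um.sum(axis=0)[:, None]   # weighted cluster prototypes
        d = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2)
        d = np.maximum(d, 1e-12)                 # guard against zero distances
        # u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
        U_new = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1))).sum(axis=2)
        if np.max(np.abs(U_new - U)) < tol:
            return V, U_new
        U = U_new
    return V, U
```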
Abstract:
Background: Warfarin-dosing pharmacogenetic algorithms have shown different performances across ethnicities, and their impact in admixed populations is not fully known. Aims: To evaluate the CYP2C9 and VKORC1 polymorphisms and warfarin-predicted metabolic phenotypes according to both self-declared ethnicity and genetic ancestry in a Brazilian general population plus Amerindian groups. Methods: Two hundred twenty-two Amerindians (Tupinikin and Guarani) were enrolled, together with 1038 individuals from the Brazilian general population who self-declared as White, Intermediate (Brown; Pardo in Portuguese), or Black. Samples from 274 Brazilian subjects from Sao Paulo were analyzed for genetic ancestry using an Affymetrix 6.0® genotyping platform. The CYP2C9*2 (rs1799853), CYP2C9*3 (rs1057910), and VKORC1 g.-1639G>A (rs9923231) polymorphisms were genotyped in all studied individuals. Results: The allelic frequency of the VKORC1 polymorphism was differently distributed according to self-declared ethnicity: White (50.5%), Intermediate (46.0%), Black (39.3%), Tupinikin (40.1%), and Guarani (37.3%) (p < 0.001). The frequency of intermediate plus poor metabolizers (IM + PM) was higher in White (28.3%) than in Intermediate (22.7%), Black (20.5%), Tupinikin (12.9%), and Guarani (5.3%) subjects (p < 0.001). For the samples with determined ancestry, subjects carrying the GG genotype for VKORC1 had higher African ancestry and lower European ancestry (0.14 ± 0.02 and 0.62 ± 0.02) than subjects carrying AA (0.05 ± 0.01 and 0.73 ± 0.03) (p = 0.009 and 0.03, respectively). Subjects classified as IM + PM had lower African ancestry (0.08 ± 0.01) than extensive metabolizers (0.12 ± 0.01) (p = 0.02). Conclusions: The CYP2C9 and VKORC1 polymorphisms are differently distributed according to self-declared ethnicity or genetic ancestry in the Brazilian general population plus Amerindians. This information is an initial step toward clinical pharmacogenetic implementation, and it could be very useful in strategic planning aiming at an individual therapeutic approach and the prediction of adverse drug effect profiles in an admixed population.
Abstract:
Failure detection is at the core of most fault tolerance strategies, but it often depends on reliable communication. We present new algorithms for failure detectors that are appropriate as components of a fault tolerance system deployable under adverse network conditions (such as loosely connected and administered computing grids). The approach packs redundancy into heartbeat messages, thereby improving on the robustness of the traditional protocols. Results from experimental tests conducted in a simulated environment with adverse network conditions show significant improvement over existing solutions.
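The redundancy idea admits a simple sketch: each heartbeat piggybacks the last K send timestamps, so one lost datagram no longer erases the evidence that the sender was alive (field names, K, and the clock-synchronization assumption are illustrative, not the paper's protocol):

```python
import time
from collections import deque

K = 4  # illustrative redundancy depth: each message repeats the last K send times

class RedundantHeartbeatSender:
    """Each heartbeat carries the previous K send timestamps, so a single lost
    datagram does not erase the evidence that the sender was alive."""
    def __init__(self):
        self.history = deque(maxlen=K)

    def next_message(self):
        self.history.append(time.monotonic())
        return {"send_times": list(self.history)}

class RedundantHeartbeatReceiver:
    """Suspects the sender once no (possibly replayed) send time is recent.
    Assumes loosely synchronized clocks for the sake of the sketch."""
    def __init__(self, timeout):
        self.timeout = timeout
        self.newest = None

    def on_message(self, msg):
        t = max(msg["send_times"])
        if self.newest is None or t > self.newest:
            self.newest = t

    def suspected(self, now):
        return self.newest is None or now - self.newest > self.timeout
```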
Abstract:
Large-scale wireless ad hoc networks of computers, sensors, PDAs, etc. (i.e., nodes) are revolutionizing connectivity and leading to a paradigm shift from centralized systems to highly distributed and dynamic environments. An example of ad hoc networks are sensor networks, which are usually composed of small units able to sense and transmit elementary data to a sink, where they are subsequently processed by an external machine. Recent improvements in the memory and computational power of sensors, together with the reduction of energy consumption, are rapidly changing the potential of such systems, moving the attention towards data-centric sensor networks. A plethora of routing and data management algorithms have been proposed for network path discovery, ranging from broadcasting/flooding-based approaches to those using global positioning systems (GPS). We studied WGrid, a novel decentralized infrastructure that organizes wireless devices in an ad hoc manner, where each node has one or more virtual coordinates through which both message routing and data management occur without reliance on either flooding/broadcasting operations or GPS. The resulting ad hoc network does not suffer from the dead-end problem, which happens in geographic routing when a node is unable to locate a neighbor closer to the destination than itself. WGrid allows multidimensional data management, since the nodes' virtual coordinates can act as a distributed database without needing any special implementation or reorganization. Any kind of data (both single- and multidimensional) can be distributed, stored, and managed. We show how a location service can be easily implemented so that any search reduces to a simple query, like for any other data type. WGrid was then extended by adopting a replication methodology; we called the resulting algorithm WRGrid. Just like WGrid, WRGrid acts as a distributed database without needing any special implementation or reorganization, and any kind of data can be distributed, stored, and managed. We evaluated the benefits of replication on data management and found, from experimental results, that it can halve the average number of hops in the network. The direct consequences are a significant improvement in energy consumption and workload balancing among sensors (the number of messages routed by each node). Finally, thanks to the replicas, whose number can be chosen arbitrarily, the resulting sensor network can withstand sensor disconnections and connections, due to sensor failures, without data loss. Another extension of WGrid is W*Grid, which strongly improves network recovery from link and/or device failures that may happen due to crashes, battery exhaustion, or temporary obstacles. W*Grid guarantees, by construction, at least two disjoint paths between each pair of nodes. This implies that recovery in W*Grid occurs without broadcast transmissions, guaranteeing robustness while drastically reducing energy consumption. An extensive number of simulations shows the efficiency, robustness, and traffic load of the resulting networks under several scenarios of device density and number of coordinates. Performance has been compared with existing algorithms in order to validate the results.
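A hedged sketch of the routing primitive such virtual-coordinate schemes rely on (WGrid's coordinates are actually tree-derived; plain coordinates and a generic distance stand in here): each node forwards to the neighbor closest to the destination, and it is the construction that guarantees a closer neighbor always exists, avoiding the dead end:

```python
def route(coords, neighbors, dist, src, dst_coord):
    """Greedy forwarding on virtual coordinates. `neighbors[n]` lists node ids,
    `coords[n]` is n's virtual coordinate, `dist` compares coordinates.
    In WGrid-style constructions a strictly closer neighbor always exists,
    so the loop cannot dead-end; that property is assumed here, not proven."""
    path, current = [src], src
    while coords[current] != dst_coord:
        nxt = min(neighbors[current], key=lambda n: dist(coords[n], dst_coord))
        if dist(coords[nxt], dst_coord) >= dist(coords[current], dst_coord):
            raise RuntimeError("dead end: construction invariant violated")
        path.append(nxt)
        current = nxt
    return path
```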
Abstract:
Beamforming entails the joint processing of multiple signals received or transmitted by an array of antennas. This thesis addresses the implementation of beamforming in two distinct systems, namely a distributed network of independent sensors and a broad-band multi-beam satellite network. With the rising popularity of wireless sensors, scientists are taking advantage of the flexibility of these devices, which come with very low implementation costs. Simplicity, however, is intertwined with scarce power resources, which must be carefully rationed to ensure successful measurement campaigns throughout the whole duration of the application. In this scenario, distributed beamforming is a cooperative communication technique that allows nodes in the network to emulate a virtual antenna array, seeking power gains on the order of the size of the network itself when required to deliver a common message signal to the receiver. To achieve a desired beamforming configuration, however, all nodes in the network must agree upon the same phase reference, which is challenging in a distributed set-up where all devices are independent. The first part of this thesis presents new algorithms for phase alignment, which prove to be more energy efficient than existing solutions. With the ever-growing demand for broad-band connectivity, satellite systems have great potential to guarantee service where terrestrial systems cannot penetrate. In order to satisfy the constantly increasing demand for throughput, satellites are equipped with multi-fed reflector antennas to resolve spatially separated signals. However, increasing the number of feeds on the payload burdens the link between the satellite and the gateway with an extensive amount of signaling, and possibly calls for much more expensive multiple-gateway infrastructures. This thesis focuses on an on-board non-adaptive signal processing scheme denoted Coarse Beamforming, whose objective is to reduce the communication load on the link between the ground station and the space segment.
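A classic baseline for the phase-alignment problem described here is one-bit-feedback synchronization: nodes randomly perturb their phases and keep the perturbation only when the receiver reports an improved combined signal. The sketch below shows that baseline, not the thesis's new algorithms:

```python
import numpy as np

def one_bit_feedback_alignment(n_nodes=20, iters=3000, step=0.15, seed=0):
    """Transmit phases converge so the coherent gain approaches n_nodes."""
    rng = np.random.default_rng(seed)
    channel = rng.uniform(0, 2 * np.pi, n_nodes)   # unknown channel phases
    phase = np.zeros(n_nodes)                      # each node's transmit phase
    best = np.abs(np.exp(1j * (phase + channel)).sum())
    for _ in range(iters):
        trial = phase + rng.uniform(-step, step, n_nodes)  # local random perturbations
        gain = np.abs(np.exp(1j * (trial + channel)).sum())
        if gain > best:              # receiver's one-bit feedback: "better"
            phase, best = trial, gain
    return best / n_nodes            # fraction of the ideal coherent gain

print(one_bit_feedback_alignment())  # approaches 1.0 as the phases align
```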