974 results for "distributed environment"
Abstract:
The demand response concept has been gaining importance, and the success of several recent implementations has made the benefits of this resource unquestionable. This happens in a power systems operation environment that also features an intensive use of distributed generation. However, more adequate approaches and models are needed in order to address the aggregation of small-size consumers and producers, while taking these resources' goals into account. The present paper focuses on the management of demand response programs and distributed generation resources by a Virtual Power Player that aims to minimize its operation costs, taking consumption shifting constraints into account. The impact of consumption shifting on the distributed generation schedule is also considered. The methodology is applied to three scenarios based on 218 consumers and 4 types of distributed generation, over a time frame of 96 periods.
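As a toy illustration of the consumption shifting this paper optimizes, the sketch below greedily moves load from the most expensive period to the cheapest one, subject to a shifting limit. All numbers and the greedy rule are invented for illustration; the paper's actual model is a cost-minimization over 218 consumers and 96 periods.

```python
# Toy sketch of consumption shifting (invented data, not the paper's model).
def shift_consumption(load, price, max_shift):
    """Move up to max_shift units of load from the most expensive
    period to the cheapest one."""
    load = list(load)
    hi = max(range(len(load)), key=lambda t: price[t])  # costliest period
    lo = min(range(len(load)), key=lambda t: price[t])  # cheapest period
    moved = min(max_shift, load[hi])
    load[hi] -= moved
    load[lo] += moved
    return load

load = [10, 10, 10, 10]            # consumption per period (kWh)
price = [0.10, 0.25, 0.08, 0.12]   # supply cost per kWh in each period
shifted = shift_consumption(load, price, max_shift=4)
cost = sum(l * p for l, p in zip(shifted, price))
```

A real formulation would express this as a constrained optimization (e.g. a linear program) rather than a single greedy move.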
Abstract:
Multi-agent approaches have been widely used to model complex systems of a distributed nature with a large number of interactions between the involved entities. Power systems are a reference case, mainly due to the increasing use of distributed energy sources, largely based on renewables, which have driven huge changes in the power sector. Dealing with such a large-scale integration of intermittent generation sources has led to the emergence of several new players, as well as to the development of new paradigms, such as the microgrid concept and the evolution of demand response programs, which promote the active participation of consumers. This paper presents a multi-agent based simulation platform that models a microgrid environment, considering several different types of simulated players. These players interact with real physical installations, creating a realistic simulation environment whose results can be observed directly in reality. A case study is presented in which players respond to a demand response event with an intelligent increase of consumption in order to absorb the wind generation surplus.
Abstract:
Harnessing the idle CPU cycles, storage space, and other resources of networked computers for collaborative work is the main focus of all major grid computing research projects. Most university computer labs are now equipped with powerful desktop PCs, and it is easy to observe that most of the time these machines lie idle, their computing power going to waste. However, complex problems and the analysis of very large amounts of data require substantial computational resources. For such problems, one may run the analysis algorithms on very powerful and expensive computers, which reduces the number of users who can afford such data analysis tasks. Instead of using single expensive machines, distributed computing systems offer the possibility of using a set of much less expensive machines to do the same task. The BOINC and Condor projects have been successfully used for real scientific research work around the world at low cost. The main goal of this work is to explore both distributed computing frameworks, Condor and BOINC, and to use their power to harness idle PC resources for academic researchers to use in their research work. In this thesis, data mining tasks were performed by implementing several machine learning algorithms in the distributed computing environment.
Abstract:
In recent years, vehicular cloud computing (VCC) has emerged as a new technology used in a wide range of multimedia-based healthcare applications. In VCC, vehicles act as intelligent machines that collect and transfer healthcare data to local or global sites for storage and computation, as vehicles have comparatively limited storage and computation power for handling multimedia files. However, due to dynamic changes in topology and the lack of centralized monitoring points, this information can be altered or misused. These security breaches can result in disastrous consequences such as loss of life or financial fraud. Therefore, to address these issues, a learning automata-assisted distributive intrusion detection system based on clustering is designed. Although the proposed scheme can be applied in a number of applications, we have taken a multimedia-based healthcare application to illustrate it. In the proposed scheme, learning automata (LA) are assumed to be stationed on the vehicles; they take clustering decisions intelligently and select one member of the group as a cluster-head. The cluster-heads then assist in efficient storage and dissemination of information through a cloud-based infrastructure. To secure the proposed scheme from malicious activities, a standard cryptographic technique is used, and the automaton learns from the environment and takes adaptive decisions to identify any malicious activity in the network. A reward or penalty is given by the stochastic environment in which the automaton performs its actions, and the automaton updates its action probability vector after receiving the reinforcement signal from the environment. The proposed scheme was evaluated using extensive simulations on ns-2 with SUMO.
The results obtained indicate that the proposed scheme yields a 10% improvement in the detection rate of malicious nodes compared with existing schemes.
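The reward/penalty update of an action probability vector described above can be sketched with the classical linear reward-penalty (L_RP) learning automaton scheme. The learning rates `a` and `b` below are illustrative assumptions, not values from the paper.

```python
# Sketch of a linear reward-penalty (L_RP) learning automaton update
# (learning rates a, b are assumed, not taken from the paper).
def la_update(p, i, reward, a=0.1, b=0.05):
    """Return the updated action probability vector after action i
    received a reward (reward=True) or a penalty (reward=False)."""
    r = len(p)
    q = [0.0] * r
    if reward:
        # Reward: shift probability mass toward the rewarded action i.
        for j in range(r):
            q[j] = (1 - a) * p[j]
        q[i] = p[i] + a * (1 - p[i])
    else:
        # Penalty: spread probability mass away from the penalized action i.
        for j in range(r):
            q[j] = b / (r - 1) + (1 - b) * p[j]
        q[i] = (1 - b) * p[i]
    return q

p = la_update([0.5, 0.5], 0, reward=True)   # action 0 was rewarded
```

Both branches preserve the property that the probabilities sum to one, which is what makes the vector a valid action distribution after every reinforcement signal.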
Abstract:
4th International Conference, SIMPAR 2014, Bergamo, Italy, October 20-23, 2014
Abstract:
The high penetration of distributed energy resources (DER) in distribution networks and the competitive environment of electricity markets impose the use of new approaches in several domains. Network cost allocation, traditionally used in transmission networks, should be adapted for use in distribution networks, considering the specifications of the connected resources. The main goal is to develop a fairer methodology that distributes the distribution network use costs among all players using the network in each period. In this paper, a model considering different types of costs (fixed, losses, and congestion costs) is proposed, comprising the use of a large set of DER, namely distributed generation (DG), demand response (DR) of the direct load control type, energy storage systems (ESS), and electric vehicles with the capability of discharging energy to the network, known as vehicle-to-grid (V2G). The proposed model includes three distinct phases of operation. The first phase consists of an economic dispatch based on an AC optimal power flow (AC-OPF); in the second phase, Kirschen's and Bialek's tracing algorithms are used and compared to evaluate the impact of each resource on the network. Finally, the MW-mile method is used in the third phase of the proposed model. A distribution network of 33 buses with a large penetration of DER is used to illustrate the application of the proposed model.
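The MW-mile stage of the third phase can be sketched as a proportional cost split. The player names, branch flows, and total cost below are invented inputs; in the actual model, each player's branch usage would come from the tracing algorithms of the second phase.

```python
# Minimal MW-mile sketch (invented data; real usages come from tracing).
def mw_mile_allocation(total_cost, flows):
    """flows[player] = list of (MW, miles) branch uses.
    Allocate total_cost in proportion to each player's sum(MW * miles)."""
    usage = {p: sum(mw * mi for mw, mi in lines) for p, lines in flows.items()}
    total = sum(usage.values())
    return {p: total_cost * u / total for p, u in usage.items()}

flows = {
    "DG1":   [(5.0, 2.0)],              # 10 MW-miles
    "Load1": [(3.0, 4.0), (1.0, 2.0)],  # 14 MW-miles
}
charges = mw_mile_allocation(240.0, flows)
```

The proportional split is what makes the method "fairer" in the sense the paper pursues: players pay according to how much of the network, in both magnitude and distance, they actually use.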
Abstract:
Ligand K-edge XAS of an [Fe3S4]0 model complex is reported. The pre-edge can be resolved into contributions from the μ2-sulfide, μ3-sulfide, and thiolate sulfur ligands. The average ligand-metal bond covalencies obtained from these pre-edges are further distributed between Fe3+ and Fe2.5+ components using DFT calculations. The bridging ligand covalency in the [Fe2S2]+ subsite of the [Fe3S4]0 cluster is found to be significantly lower than its value in a reduced [Fe2S2] cluster (38% vs 61%, respectively). This lowered bridging ligand covalency reduces the superexchange coupling parameter J relative to its value in a reduced [Fe2S2]+ site (-146 cm-1 vs -360 cm-1, respectively). This decrease in J, along with estimates of the double exchange parameter B and the vibronic coupling parameter λ2/k-, leads to an S = 2 delocalized ground state in the [Fe3S4]0 cluster. The S K-edge XAS of the active site of the protein ferredoxin II (Fd II) from D. gigas shows a decrease in covalency compared to the model complex in the same oxidation state, which correlates with the number of H-bonding interactions to specific sulfur ligands present in the active site. The changes in ligand-metal bond covalencies upon redox, compared with DFT calculations, indicate that the redox reaction involves a two-electron change (a one-electron ionization plus a spin change of a second electron) with significant electronic relaxation. The presence of the redox-inactive Fe3+ center is found to decrease the barrier of the redox process in the [Fe3S4] cluster due to its strong antiferromagnetic coupling with the redox-active Fe2S2 subsite.
Abstract:
Background: Indoor air quality (IAQ) is considered an important determinant of human health. The association between children's health and exposure to volatile organic compounds, particulate matter, house dust mites, molds, and bacteria in day care centers (DCCs) is not completely clear. The aim of this project was to study these effects. Methods/design: This study comprised two phases. Phase I included an evaluation of 45 DCCs (25 from Lisbon and 20 from Oporto, targeting 5161 children). In this phase, building characteristics and indoor CO2, air temperature, and relative humidity were assessed. A children's respiratory health questionnaire derived from the ISAAC (International Study on Asthma and Allergies in Children) was also distributed. Phase II encompassed two evaluations and included 20 DCCs selected from Phase I after a cluster analysis (11 from Lisbon and 9 from Oporto, targeting 2287 children). In this phase, data on ventilation, IAQ, thermal comfort parameters, respiratory and allergic health, airway inflammation biomarkers, respiratory virus infection patterns, and parental and child stress were collected. Results: In Phase I, building characteristics, occupant behavior, and ventilation surrogates were collected for all DCCs. The response rate of the questionnaire was 61.7% (3186 children). Phase II included 1221 children. Associations between DCC characteristics, IAQ, and health outcomes will be reported in order to support recommendations on IAQ and children's health. A building ventilation model will also be developed. Discussion: This paper outlines methods that might be implemented by other investigators conducting studies on the association between respiratory health and indoor air quality in DCCs.
Abstract:
Dissertation submitted for the degree of Doctor in Sustainable Chemistry
Abstract:
In order to investigate a possible method of biological control of schistosomiasis, we used the fish Geophagus brasiliensis (Quoy & Gaimard, 1824) which is widely distributed throughout Brazil, to interrupt the life cycle of the snail Biomphalaria tenagophila (Orbigny, 1835), an intermediate host of Schistosoma mansoni. In the laboratory, predation eliminated 97.6% of the smaller snails (3-8 mm shell diameter) and 9.2% of the larger ones (12-14 mm shell diameter). Very promising results were also obtained in a seminatural environment. Studies of this fish in natural snail habitats should be further encouraged.
Abstract:
Engineering of negotiation models allows the development of effective heuristics for business intelligence. Digital ecosystems demand open negotiation models, and defining effective heuristics in advance is not compliant with the requirement of openness. The new challenge is to develop business intelligence adaptively: the idea is to learn a business strategy as new negotiation models arise in the e-market arena. In this paper we present how recommendation technology may be deployed in an open negotiation environment where the interaction protocol models are not known in advance. The solution we propose is delivered as part of the ONE Platform, open source software that implements a fully distributed open environment for business negotiation.
Abstract:
This thesis proposes a set of adaptive broadcast solutions and an adaptive data replication solution to support the deployment of P2P applications. P2P applications are an emerging type of distributed application that runs on top of P2P networks; typical examples are video streaming and file sharing. While interesting because they are fully distributed, P2P applications suffer from several deployment problems due to the nature of the environment in which they operate. Indeed, defining an application on top of a P2P network often means defining an application where peers contribute resources in exchange for their ability to use the P2P application. For example, in a P2P file sharing application, while the user is downloading some file, the P2P application is in parallel serving that file to other users. Such peers could have limited hardware resources, e.g., CPU, bandwidth, and memory, or the end-user could decide a priori to limit the resources dedicated to the P2P application. In addition, a P2P network is typically embedded in an unreliable environment, where communication links and processes are subject to message losses and crashes, respectively. To support P2P applications, this thesis proposes a set of services that address some underlying constraints related to the nature of P2P networks. The proposed services include a set of adaptive broadcast solutions and an adaptive data replication solution that can be used as the basis of several P2P applications. Our data replication solution increases availability and reduces communication overhead. The broadcast solutions aim at providing a communication substrate encapsulating one of the key communication paradigms used by P2P applications: broadcast. Our broadcast solutions typically aim at offering reliability and scalability to some upper layer, be it an end-to-end P2P application or another system-level layer, such as a data replication layer.
Our contributions are organized in a protocol stack made of three layers. In each layer, we propose a set of adaptive protocols that address specific constraints imposed by the environment, and each protocol is evaluated through a set of simulations. The adaptiveness of our solutions relies on the fact that they take the constraints of the underlying system into account in a proactive manner. To model these constraints, we define an environment approximation algorithm that gives us an approximated view of the system or part of it. This approximated view includes the topology and the reliability of components, expressed in probabilistic terms. To adapt to the underlying system constraints, the proposed broadcast solutions route messages through tree overlays so as to maximize broadcast reliability. Here, the broadcast reliability is expressed as a function of the reliability of the selected paths and of the use of available resources. These resources are modeled in terms of message quotas reflecting the receiving and sending capacities of each node. To allow deployment in a large-scale system, we take into account the available memory at processes by limiting the view they must maintain of the system. Using this partial view, we propose three scalable broadcast algorithms, which are based on a propagation overlay that tends toward the global tree overlay and adapts to constraints of the underlying system. At a higher level, this thesis also proposes a data replication solution that is adaptive both in terms of replica placement and in terms of request routing. At the routing level, this solution takes the unreliability of the environment into account in order to maximize reliable delivery of requests. At the replica placement level, the dynamically changing origin and frequency of read/write requests are analyzed in order to define a set of replicas that minimizes communication cost.
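The idea that broadcast reliability is a function of the reliability of the selected paths can be sketched as follows. The path data are invented, and the thesis additionally weighs resource quotas, which this sketch omits.

```python
# Sketch: end-to-end path reliability under independent link failures
# (invented data; resource quotas from the thesis are omitted).
from math import prod

def path_reliability(link_reliabilities):
    # Delivery succeeds only if every link on the path succeeds.
    return prod(link_reliabilities)

def best_path(paths):
    # Prefer the path with the highest end-to-end delivery probability.
    return max(paths, key=path_reliability)

paths = [[0.9, 0.9], [0.99, 0.7], [0.8, 0.95, 0.99]]
chosen = best_path(paths)
```

Note that the two-link path with uniformly good links beats the path containing one excellent and one poor link: multiplying reliabilities penalizes any weak link on the route, which is why tree construction favors uniformly reliable edges.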
Abstract:
The integrity of the cornea, the most anterior part of the eye, is indispensable for vision. Forty-five million individuals worldwide are bilaterally blind and another 135 million have severely impaired vision in both eyes because of loss of corneal transparency; treatments range from local medications to corneal transplants, and more recently to stem cell therapy. The corneal epithelium is a squamous epithelium that is constantly renewing, with a vertical turnover of 7 to 14 days in many mammals. Identification of slow-cycling cells (label-retaining cells) in the limbus of the mouse has led to the notion that the limbus is the niche for the stem cells responsible for the long-term renewal of the cornea; hence, the corneal epithelium is supposedly renewed by cells generated at and migrating from the limbus, in marked opposition to other squamous epithelia in which each resident stem cell is in charge of a limited area of epithelium. Here we show that the corneal epithelium of the mouse can be serially transplanted, is self-maintained, and contains oligopotent stem cells with the capacity to generate goblet cells if provided with a conjunctival environment. Furthermore, the entire ocular surface of the pig, including the cornea, contains oligopotent stem cells (holoclones) with the capacity to generate individual colonies of corneal and conjunctival cells. Therefore, the limbus is not the only niche for corneal stem cells, and corneal renewal is not different from that of other squamous epithelia. We propose a model that unifies our observations with the literature and explains why the limbal region is enriched in stem cells.
Abstract:
Understanding the factors that drive geographic variation in life history is an important challenge in evolutionary ecology. Here, we analyze what predicts geographic variation in life-history traits of the common lizard, Zootoca vivipara, which has the globally largest distribution range of all terrestrial reptile species. Variation in body size was predicted by differences in the length of the activity season, while we found no effects of environmental temperature per se. Females experiencing a relatively short activity season mature at a larger size and remain larger on average than females in populations with relatively long activity seasons. Interpopulation variation in fecundity was largely explained by mean body size of females and by reproductive mode, with viviparous populations having larger clutch sizes than oviparous populations. Finally, the body size-fecundity relationship differs between viviparous and oviparous populations, with relatively lower reproductive investment for a given body size in oviparous populations. While the phylogenetic signal was weak overall, the patterns of variation showed spatial effects, perhaps reflecting genetic divergence or geographic variation in additional biotic and abiotic factors. Our findings emphasize that time constraints imposed by the environment, rather than ambient temperature, play a major role in shaping life histories in the common lizard. This might be attributed to the fact that lizards can attain their preferred body temperature via behavioral thermoregulation across different thermal environments. The length of the activity season, defining the maximum time available for lizards to maintain optimal performance, is thus the main environmental factor constraining growth rate and annual rates of mortality. Our results suggest that this factor may partly explain variation in the extent to which different taxa follow ecogeographic rules.
Abstract:
The past few decades have seen a considerable increase in the number of parallel and distributed systems. With the development of more complex applications, the need for more powerful systems has emerged, and various parallel and distributed environments have been designed and implemented. Each environment, including its hardware and software, has unique strengths and weaknesses. There is no single parallel environment that can be identified as the best for all applications with respect to hardware and software properties. The main goal of this thesis is to provide a novel way of performing data-parallel computation in parallel and distributed environments by exploiting the best characteristics of different aspects of parallel computing. For the purpose of this thesis, three aspects of parallel computing were identified and studied. First, three parallel environments (shared memory, distributed memory, and a network of workstations) are evaluated to quantify their suitability for different parallel applications. Due to the parallel and distributed nature of these environments, the networks connecting their processors were investigated with respect to performance characteristics. Second, scheduling algorithms are studied in order to make them more efficient and effective. A concept of application-specific information scheduling is introduced. The application-specific information is data about the workload extracted from an application, which is provided to a scheduling algorithm. Three scheduling algorithms are enhanced to utilize the application-specific information to further refine their scheduling properties. A more accurate description of the workload is especially important in cases where the workunits are heterogeneous and the parallel environment is heterogeneous and/or non-dedicated. The results obtained show that the additional information regarding the workload has a positive impact on the performance of applications.
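A minimal sketch of how application-specific workload information (here, estimated workunit costs) could feed a scheduler on heterogeneous processors. The greedy heuristic and the numbers are illustrative assumptions, not the thesis's three enhanced algorithms.

```python
# Sketch: scheduling with application-specific workload hints
# (greedy heuristic and data are assumptions, not the thesis's algorithms).
def schedule(workunits, speeds):
    """Assign workunits (estimated costs, the application-specific hint)
    to processors of given speeds, placing the largest units first on
    whichever processor would finish them earliest."""
    finish = [0.0] * len(speeds)            # projected finish time per processor
    assignment = [[] for _ in speeds]
    for w in sorted(workunits, reverse=True):
        p = min(range(len(speeds)), key=lambda i: finish[i] + w / speeds[i])
        finish[p] += w / speeds[p]
        assignment[p].append(w)
    return assignment

# One slow (1.0) and one twice-as-fast (2.0) processor.
plan = schedule([4, 3, 2, 1], [1.0, 2.0])
```

Without the per-workunit cost estimates, a scheduler could only assign units blindly (e.g. round-robin), which is the gap the application-specific information closes.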
Third, a programming paradigm for networks of symmetric multiprocessor (SMP) workstations is introduced. The MPIT programming paradigm combines the Message Passing Interface (MPI) with threads to provide a methodology for writing parallel applications that efficiently utilize the available resources and minimize overhead. MPIT allows communication and computation to overlap by deploying a dedicated thread for communication. Furthermore, the programming paradigm implements an application-specific scheduling algorithm that is executed by the communication thread; thus, scheduling does not affect the execution of the parallel application. Performance results show that MPIT achieves considerable improvements over conventional MPI applications.
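The dedicated-communication-thread idea can be sketched in a language-agnostic way: computation enqueues outgoing messages and continues, while a separate thread drains the queue. The queue-based outbox below is an assumed stand-in for MPIT's actual MPI-based mechanism.

```python
# Sketch of overlapping computation and communication via a dedicated
# thread (assumed mechanism; MPIT itself builds on MPI, not this queue).
import threading
import queue

def communication_thread(outbox, send):
    """Drain the outbox so the compute path never blocks on sends."""
    while True:
        msg = outbox.get()
        if msg is None:          # sentinel: shut down the thread
            break
        send(msg)

sent = []
outbox = queue.Queue()
t = threading.Thread(target=communication_thread, args=(outbox, sent.append))
t.start()

for i in range(3):               # "computation": enqueue results, keep going
    outbox.put(i * i)

outbox.put(None)                 # signal completion
t.join()
```

Because only the communication thread touches the transport, the compute path pays just the cost of a queue insertion, which is the overlap MPIT exploits.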