884 results for Hadoop distributed file system (HDFS)
Abstract:
Power system policies are broadly on track to escalate the use of renewable energy resources in electric power generation. Integration of dispersed generation into the utility network not only intensifies the benefits of renewable generation but also introduces further advantages such as power quality enhancement and freedom of power generation for consumers. However, the issues arising from the integration of distributed generators into the existing utility grid are as significant as its benefits, and they are aggravated as the number of grid-connected distributed generators increases. Power quality demands therefore become stricter to ensure a safe and proper advancement towards the emerging smart grid. In this regard, system protection is the area most affected as the grid-connected share of distributed generation in electricity production increases. Islanding detection, amongst all protection issues, is the most important concern for a power system with high penetration of distributed sources. Islanding occurs when a portion of the distribution network that includes one or more distributed generation units and local loads is disconnected from the rest of the grid; upon formation, the power island remains energized due to the presence of one or more distributed sources. This thesis introduces a new islanding detection technique based on an enhanced multi-layer scheme that shows superior performance over existing techniques. It provides improved solutions for the safety and protection of power systems and distributed sources capable of operating in grid-connected mode. The proposed active method offers a negligible non-detection zone and is applicable to micro-grids with a number of distributed generation sources without sacrificing the dynamic response of the system. In addition, the information obtained from the proposed scheme allows for a smooth transition to stand-alone operation if required.
The proposed technique paves the way towards a comprehensive protection solution for future power networks. The method is converter-resident, so any power conversion system operating on power-electronics converters can benefit from it. The theoretical analysis is presented, and extensive simulation results confirm the validity of the analytical work.
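As a point of reference for the detection problem described above, the simplest passive islanding checks monitor the local frequency at the converter terminals. The sketch below is a minimal, illustrative baseline of that kind; the threshold values and function names are assumptions, not the thesis's method, which is an active multi-layer scheme.

```python
# Minimal sketch of a passive islanding check: flag a possible island when
# frequency leaves a nominal window or its rate of change (ROCOF, Hz/s)
# exceeds a limit. All thresholds here are illustrative assumptions.

def is_islanding_suspected(freq_samples, dt, f_nom=60.0,
                           f_tol=0.5, rocof_limit=1.0):
    """freq_samples: measured frequency (Hz) at fixed sampling step dt (s)."""
    for i, f in enumerate(freq_samples):
        if abs(f - f_nom) > f_tol:
            return True                       # frequency left the window
        if i > 0:
            rocof = (f - freq_samples[i - 1]) / dt
            if abs(rocof) > rocof_limit:
                return True                   # frequency changing too fast
    return False

# A slow drift within limits is not flagged; a sudden jump is.
print(is_islanding_suspected([60.0, 60.01, 60.02], dt=0.1))  # False
print(is_islanding_suspected([60.0, 60.0, 61.0], dt=0.1))    # True
```

Passive schemes like this are what exhibit the non-detection zone the thesis's active method aims to eliminate.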
Abstract:
Data sources are often dispersed geographically in real-life applications. Finding a knowledge model may require joining all the data sources and running a machine learning algorithm on the joint set. We present an alternative based on a Multi-Agent System (MAS): an agent mines one data source in order to extract a local theory (knowledge model) and then merges it with the previous MAS theory using a knowledge fusion technique. This way, we obtain a global theory that summarizes the distributed knowledge without spending resources and time on joining data sources. New experiments have been executed, including statistical significance analysis. The results show that, as a result of knowledge fusion, the accuracy of the initial theories is significantly improved, as is the accuracy of the monolithic solution.
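The mine-and-merge loop can be illustrated with a deliberately simplified toy in which a "theory" is reduced to per-feature evidence counts. The paper fuses much richer symbolic knowledge models; all names and data here are invented for illustration.

```python
from collections import Counter

# Toy sketch of the agent-based fusion idea: each agent mines its local data
# source into a "theory" (here, just (feature, label) evidence counts), then
# merges it into the running global theory without joining the raw data sets.

def mine_local_theory(records):
    """A 'theory' here is a Counter of (feature_value, label) evidence."""
    theory = Counter()
    for features, label in records:
        for f in features:
            theory[(f, label)] += 1
    return theory

def fuse(global_theory, local_theory):
    """Knowledge fusion step: accumulate evidence from the new local theory."""
    return global_theory + local_theory

def predict(theory, features, labels=("yes", "no")):
    """Classify by total accumulated evidence across the given features."""
    return max(labels, key=lambda lb: sum(theory[(f, lb)] for f in features))

source_a = [(("rainy",), "no"), (("sunny",), "yes")]
source_b = [(("sunny",), "yes"), (("sunny",), "no"), (("rainy",), "no")]

global_theory = Counter()
for src in (source_a, source_b):           # one agent per data source
    global_theory = fuse(global_theory, mine_local_theory(src))

print(predict(global_theory, ("sunny",)))  # "yes" (2 vs 1 evidence)
```

The point of the sketch is the data flow: only theories travel between agents, never the raw records.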
Abstract:
Surgical interventions are usually performed in an operating room; however, the medical team's access to information during the intervention is limited. In conversations with medical staff, we observed that they attach great importance to improving direct, real-time access to information and communication through queries during the procedure, since current procedures are slow and there is little interaction with the systems in the operating room. These systems can be integrated on the Cloud, adding new functionalities to the existing systems in which medical records are processed. Therefore, such a communication system needs to be built upon information and interaction access specifically designed and developed to aid medical specialists. Copyright 2014 ACM.
Abstract:
Several decision and control tasks in cyber-physical networks can be formulated as large-scale optimization problems with coupling constraints. In these "constraint-coupled" problems, each agent is associated with a local decision variable, subject to individual constraints. This thesis explores the use of primal decomposition techniques to develop tailored distributed algorithms for this challenging set-up over graphs. We first develop a distributed scheme for convex problems over random time-varying graphs with non-uniform edge probabilities. The approach is then extended to unknown cost functions estimated online. Subsequently, we consider Mixed-Integer Linear Programs (MILPs), which are of great interest in smart grid control and cooperative robotics. We propose a distributed methodological framework to compute a feasible solution to the original MILP, with guaranteed suboptimality bounds, and extend it to general nonconvex problems. Monte Carlo simulations highlight that the approach represents a substantial advance over the state of the art, making it a valuable solution for new toolboxes addressing large-scale MILPs. We then propose a distributed Benders decomposition algorithm for asynchronous unreliable networks. This framework is then used as a starting point to develop distributed methodologies for a microgrid optimal control scenario. We develop an ad-hoc distributed strategy for a stochastic set-up with renewable energy sources, and show a case study with samples generated using Generative Adversarial Networks (GANs). We then introduce a software toolbox named ChoiRbot, based on the novel Robot Operating System 2, and show how it facilitates simulations and experiments in distributed multi-robot scenarios.
Finally, we consider a Pickup-and-Delivery Vehicle Routing Problem, for which we design a distributed method inspired by the approach for general MILPs, and show its efficacy through simulations and experiments in ChoiRbot with ground and aerial robots.
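The primal decomposition idea for constraint-coupled problems can be sketched on a two-agent toy: a master allocation splits the coupling constraint, each agent solves its local subproblem in parallel, and the allocations are updated from the local multipliers. The problem data and step size below are illustrative assumptions, not the thesis's algorithms (which handle graphs, time variation, and MILPs).

```python
# Sketch of primal decomposition for a constraint-coupled problem:
#   minimize (x1 - c1)^2 + (x2 - c2)^2  subject to  x1 + x2 <= b.
# A master allocation y_i (with y_1 + y_2 = b) splits the coupling constraint;
# each agent solves its local problem x_i <= y_i and returns a multiplier,
# and the allocations move toward equalizing the multipliers.

def solve_local(c, y):
    """Agent problem: min (x - c)^2 s.t. x <= y. Returns (x*, multiplier)."""
    if c <= y:
        return c, 0.0            # constraint inactive
    return y, 2.0 * (c - y)      # active: lambda = -d/dy of optimal cost

def primal_decomposition(c, b, steps=2000, alpha=0.01):
    n = len(c)
    y = [b / n] * n              # initial feasible allocation
    for _ in range(steps):
        lam = [solve_local(ci, yi)[1] for ci, yi in zip(c, y)]
        avg = sum(lam) / n
        # move resources toward agents with larger multipliers,
        # preserving sum(y) = b
        y = [yi + alpha * (li - avg) for yi, li in zip(y, lam)]
    return [solve_local(ci, yi)[0] for ci, yi in zip(c, y)]

x = primal_decomposition(c=[3.0, 1.0], b=2.0)
print([round(v, 2) for v in x])  # [2.0, 0.0], the constrained optimum
```

Note how the coupling constraint is always satisfied along the iterations, which is the key practical advantage of primal (as opposed to dual) decomposition.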
Abstract:
Airborne Particulate Matter (PM) can be removed from the atmosphere through wet and dry mechanisms, and can physically/chemically interact with materials and induce premature decay. The effect of dry deposition is a complex issue, especially for outdoor materials, because of the difficulty of collecting atmospheric deposits that are repeatable in terms of mass and homogeneously distributed over the entire investigated substrate. In this work, to overcome these problems by eliminating the variability induced by outdoor removal mechanisms (e.g. winds and rainfall), a new sampling system called 'Deposition Box' was used for PM sampling. Four surrogate materials (Cellulose Acetate, Regenerated Cellulose, Cellulose Nitrate and Aluminum) with different surface features were exposed at the urban-marine site of Rimini (Italy), in vertical and horizontal orientations. Homogeneous and reproducible PM deposits were obtained, and different analytical techniques (IC, AAS, TOC, VP-SEM-EDX, Vis-Spectrophotometry) were employed to characterize their mass, dimension and composition. The results made it possible to discriminate the mechanisms responsible for the dry deposition of atmospheric particles on surfaces of different nature and orientation, and to determine which chemical species, and in what amounts, tend to deposit preferentially on them. This work demonstrated that the "Deposition Box" is an affordable tool for studying dry deposition fluxes on materials, and the results obtained will be fundamental for extending this kind of exposure to actual building and heritage materials in order to investigate the contribution of PM to their decay.
Abstract:
With the increase in distributed generation, DC microgrids have become more and more common in the electrical network. Converters are necessary to connect devices in a microgrid, but they are also sources of disturbance due to their switching operation. In this thesis, measurements and simulations of the conducted emissions of a DC/DC buck converter, within the frequency range 2-150 kHz, are studied.
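For context, the ideal continuous-conduction buck converter relations commonly used when sizing such a converter are textbook formulas, not results from the thesis; a quick numeric sketch:

```python
# Ideal continuous-conduction buck converter relations (textbook formulas):
#   Vout = D * Vin, and inductor current ripple dI = Vout * (1 - D) / (L * fsw).
# The switching that produces this ripple is also what drives the conducted
# emissions studied in the thesis. Component values below are illustrative.

def buck_duty(v_in, v_out):
    """Ideal duty cycle of a buck converter."""
    return v_out / v_in

def inductor_ripple(v_in, v_out, L, f_sw):
    """Peak-to-peak inductor current ripple (A)."""
    d = buck_duty(v_in, v_out)
    return v_out * (1 - d) / (L * f_sw)

# e.g. 48 V -> 12 V with a 100 uH inductor switching at 100 kHz:
ripple = inductor_ripple(48.0, 12.0, L=100e-6, f_sw=100e3)
print(round(ripple, 3))  # D = 0.25 -> 0.9 A peak-to-peak
```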
Abstract:
Nowadays, reinforcement learning has proven to be very effective in machine learning across a variety of fields, such as games, speech recognition, and many others. We therefore decided to apply reinforcement learning to allocation problems, since they are a research area not yet studied with this technique, and because their formulation encompasses a broad set of sub-problems with similar characteristics, so that a solution for one of them extends to each of these sub-problems. In this project we built an application called Service Broker which, through reinforcement learning, learns how to distribute the execution of tasks over asynchronous, distributed workers. The analogy is that of a cloud data center, which owns internal resources (possibly distributed across the server farm), receives tasks from its clients, and executes them on those resources. The goal of the application, and hence of the data center, is to allocate these tasks so as to minimize the execution cost. Moreover, in order to test the reinforcement learning agents we developed, an environment (a simulator) was created, which allowed us to focus on developing the components the agents need, rather than also having to deal with the implementation aspects required in a real data center, such as communication with the various nodes and its latency. The results obtained confirmed the theory studied, achieving better performance than some of the classical methods for task allocation.
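A minimal sketch of the learning idea, assuming a stateless Q-learning broker with invented worker costs and hyperparameters; the real Service Broker and its simulator are far richer than this toy.

```python
import random

# Toy sketch of the reinforcement-learning broker: a stateless Q-learning
# agent learns which worker to route tasks to so as to minimize execution
# cost (reward = -cost). Worker costs and hyperparameters are invented.

def train_broker(worker_costs, episodes=3000, alpha=0.1, eps=0.2, seed=0):
    rng = random.Random(seed)
    q = [0.0] * len(worker_costs)      # one action value per worker
    for _ in range(episodes):
        # epsilon-greedy choice of a worker for the next task
        if rng.random() < eps:
            a = rng.randrange(len(q))
        else:
            a = max(range(len(q)), key=q.__getitem__)
        cost = worker_costs[a] * rng.uniform(0.9, 1.1)  # noisy execution cost
        q[a] += alpha * (-cost - q[a])                  # Q-update, no next state
    return q

q = train_broker([5.0, 2.0, 8.0])
print(q.index(max(q)))  # 1: the broker learns to prefer the cheapest worker
```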
Abstract:
In this thesis, a tube-based Distributed Economic Predictive Control (DEPC) scheme is presented for a group of dynamically coupled linear subsystems. These subsystems are components of a large-scale system, and control inputs are computed by optimizing a local economic objective. Each subsystem interacts with its neighbors by sending its future reference trajectory at each sampling time, and solves a local optimization problem in parallel, based on the received future reference trajectories of the other subsystems. To ensure recursive feasibility and a performance bound, each subsystem is constrained not to deviate too much from its communicated reference trajectory. The difference between the planned trajectory and the communicated one is interpreted as a disturbance at the local level. Then, to ensure the satisfaction of both state and input constraints, these constraints are tightened by explicitly considering the effect of the local disturbances. The proposed approach averages over all possible disturbances and handles the tightened state and input constraints, while satisfying the compatibility constraints that guarantee the actual trajectory lies within a certain bound in the neighborhood of the reference one. Each subsystem optimizes an arbitrary local economic objective function in parallel while considering a local terminal constraint to guarantee recursive feasibility. In this framework, economic performance guarantees for a tube-based distributed predictive control (DPC) scheme are developed rigorously. We show that the closed-loop nominal subsystem has a local robust average performance bound which is no worse than that of a local robust steady state. Since the robust algorithm is applied to the states of the real (disturbed) subsystems, this bound can be interpreted as an average performance result for the real closed-loop system. Finally, we present results on local and global performance, illustrated by a numerical example.
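The constraint-tightening step can be illustrated on a scalar toy system x+ = a*x + u + w with |w| <= w_max: the bound imposed on the nominal state shrinks along the horizon by the worst-case accumulated deviation of the tube. The numbers below are illustrative, not taken from the thesis.

```python
# Sketch of tube-based constraint tightening for x+ = a*x + u + w, |w| <= w_max:
# at horizon step k the nominal trajectory must satisfy the original bound
# minus sum_{j<k} |a|^j * w_max, the worst-case tube growth up to that step.

def tightened_bounds(x_max, a, w_max, horizon):
    bounds, acc = [], 0.0
    for k in range(horizon):
        bounds.append(round(x_max - acc, 6))  # bound on the nominal state
        acc += abs(a) ** k * w_max            # worst-case deviation next step
    return bounds

print(tightened_bounds(x_max=1.0, a=0.5, w_max=0.1, horizon=4))
# [1.0, 0.9, 0.85, 0.825]
```

With a stable error dynamics (|a| < 1), the accumulated term converges, so the tightening does not consume the whole constraint set even on long horizons.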
Abstract:
The General Data Protection Regulation (GDPR) has been designed to promote a view in favor of the interests of individuals instead of large corporations. However, there is a need for more dedicated technologies that can help companies comply with the GDPR while enabling people to exercise their rights. We argue that such a dedicated solution must address two main issues: the need for more transparency towards individuals regarding the management of their personal information, and their often hindered ability to access and make interoperable their personal data, so that exercising one's rights becomes straightforward. We aim to provide a system that pushes personal data management towards the individual's control, i.e., a Personal Information Management System (PIMS). By using distributed storage and decentralized computing networks to control online services, users' personal information can be shifted towards those directly concerned, i.e., the data subjects. The use of Distributed Ledger Technologies (DLTs) and Decentralized File Storage (DFS) as an implementation of decentralized systems is of paramount importance in this case. The structure of this dissertation follows an incremental approach, describing a set of decentralized systems and models that revolve around personal data and their subjects. Each chapter builds upon the previous one and discusses the technical implementation of a system and its relation to the corresponding regulations. We refer to the EU regulatory framework, including the GDPR, eIDAS, and the Data Governance Act, to derive the functional and non-functional drivers of our final system architecture. In our PIMS design, personal data is kept in a Personal Data Space (PDS), consisting of encrypted personal data referring to the subject stored in a DFS. On top of that, a network of authorization servers acts as a data intermediary, providing access to potential data recipients through smart contracts.
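The PDS flow can be sketched with a toy in which the DFS is a content-addressed dictionary, the authorization network is a plain lookup table standing in for smart-contract state, and the cipher is an insecure SHA-256 keystream stand-in. All of this is illustrative, not the dissertation's implementation.

```python
import hashlib
import secrets

# Toy sketch of the PIMS flow: personal data is encrypted, stored in a
# content-addressed store standing in for the DFS, and an authorization
# server releases the key only to approved recipients. NOT secure crypto.

def keystream_xor(data: bytes, key: bytes) -> bytes:
    """Toy stream cipher: XOR with a SHA-256-derived keystream (illustrative)."""
    out, counter = bytearray(), 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

dfs = {}                                    # content-addressed store: CID -> blob
def dfs_put(blob):
    cid = hashlib.sha256(blob).hexdigest()  # CID is the hash of the content
    dfs[cid] = blob
    return cid

key = secrets.token_bytes(32)
cid = dfs_put(keystream_xor(b"blood type: 0+", key))

grants = {("alice_clinic", cid): key}       # stand-in for contract state
def request_access(recipient, cid):
    """Authorization check: return plaintext only for granted recipients."""
    k = grants.get((recipient, cid))
    return keystream_xor(dfs[cid], k) if k else None

print(request_access("alice_clinic", cid))  # b'blood type: 0+'
print(request_access("tracker_inc", cid))   # None
```

The design point the toy captures: the store itself only ever holds ciphertext, so access control lives entirely in the authorization layer.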
Abstract:
With the aim of heading towards a more sustainable future, there has been a noticeable increase in the installation of Renewable Energy Sources (RES) in power systems in recent years. Besides the evident environmental benefits, RES pose several technological challenges in terms of scheduling, operation, and control of transmission and distribution power networks. This has raised the need to develop smart grids relying on a suitable distributed measurement infrastructure, for instance based on Phasor Measurement Units (PMUs). Not only are such devices able to estimate a phasor, but they can also provide the time information that is essential for real-time monitoring. This Thesis falls within this context by analyzing the uncertainty requirements of PMUs in distribution and transmission applications. Concerning the latter, the reliability of PMU measurements during severe power system events is examined, whereas for the former, typical configurations of distribution networks are studied for the development of target uncertainties. The second part of the Thesis is dedicated to the application of PMUs in low-inertia power grids. The replacement of traditional synchronous machines with inertia-less RES is progressively reducing the overall system inertia, resulting in faster and more severe events. In this scenario, PMUs may play a vital role, even though no standard requirements or target uncertainties are yet available. This Thesis investigates PMU-based applications in depth, proposing a new inertia index relying only on local measurements and evaluating their reliability in low-inertia scenarios. It also develops possible uncertainty intervals based on the electrical instrumentation currently used in power systems, and assesses the interoperability with other devices before and after contingency events.
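For intuition, the swing-equation relation underlying many PMU-based inertia estimates links the post-contingency rate of change of frequency (RoCoF) to the power imbalance. The numeric sketch below uses that textbook relation with invented values; the thesis's proposed local index is more elaborate.

```python
# Swing-equation sketch behind PMU-based inertia estimation: after a power
# imbalance dP, the initial frequency derivative satisfies
#   2 * H * S / f0 * df/dt = -dP,
# so H can be estimated from a locally measured RoCoF. Values are illustrative.

def inertia_estimate(delta_p_w, rocof_hz_s, s_rated_va, f_nom=50.0):
    """Inertia constant H in seconds from a measured RoCoF."""
    return abs(delta_p_w) * f_nom / (2.0 * s_rated_va * abs(rocof_hz_s))

# 100 MW generation loss, RoCoF of -0.25 Hz/s, 2 GVA system at 50 Hz:
print(inertia_estimate(100e6, -0.25, 2e9))  # 5.0 (seconds)
```

The same arithmetic also shows why low-inertia grids are harder: halving H doubles the RoCoF for the same imbalance, which is exactly the regime where PMU reporting accuracy matters most.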
Abstract:
The Internet of Vehicles (IoV) paradigm has emerged in recent times: with the support of technologies like the Internet of Things and V2X, Vehicular Users (VUs) can access different services through internet connectivity. With the support of 6G technology, the IoV paradigm will evolve further and converge into a fully connected and intelligent vehicular system. However, this brings new challenges for dynamic and resource-constrained vehicular systems, and advanced solutions are needed. This dissertation analyzes the demands of future 6G-enabled IoV systems and the corresponding challenges, and provides various solutions to address them. Vehicular services and application requests demand proper data-processing solutions supported by distributed computing environments such as Vehicular Edge Computing (VEC). When analyzing the performance of VEC systems, it is important to take the limited resources, coverage, and vehicular mobility into account. Recently, Non-Terrestrial Networks (NTN) have gained huge popularity for boosting the coverage and capacity of terrestrial wireless networks. Integrating such NTN facilities into the terrestrial VEC system can address the above-mentioned challenges. Additionally, such integrated Terrestrial and Non-Terrestrial Networks (T-NTN) can also provide advanced intelligent solutions with the support of the edge intelligence paradigm. In this dissertation, we propose an edge-computing-enabled joint T-NTN-based vehicular system architecture to serve VUs. Next, we analyze the performance of terrestrial VEC systems for VU data-processing problems and propose solutions to improve the performance in terms of latency and energy costs. We then extend the scenario to the joint T-NTN system and address the problem of distributed data processing through ML-based solutions. We also propose advanced distributed learning frameworks supported by a joint T-NTN framework with edge computing facilities. Finally, concluding remarks and several future directions are provided for the proposed solutions.
Abstract:
The Internet of Things (IoT) is a critical pillar in the digital transformation because it enables interaction with the physical world through remote sensing and actuation. Owing to the advancements in wireless technology, we now have the opportunity to use its features to the best of our abilities and improve on the current situation. Indeed, the Internet of Things market is expanding at an exponential rate, with devices such as alarms and detectors, smart metres, trackers, and wearables being used on a global scale for automotive and agriculture applications, environment monitoring, infrastructure surveillance and management, healthcare, energy and utilities, logistics, goods tracking, and so on. The Third Generation Partnership Project (3GPP) acknowledged the importance of IoT by introducing new features to support it; in particular, in Rel. 13, the 3GPP introduced dedicated IoT features to support Low Power Wide Area Networks (LPWAN). As these devices will be distributed in areas where terrestrial networks are not feasible or commercially viable, satellite networks will play a complementary role thanks to their ability to provide global connectivity via their large footprint size and short service deployment time. In this context, the goal of this thesis is to investigate the viability of integrating IoT technology with satellite communication (SatCom) systems, with a focus on the Random Access (RA) procedure. The RA procedure is the most critical one because it allows the UE to achieve uplink synchronisation, obtain a permanent ID, and obtain uplink transmission resources. Specifically, this thesis evaluates preamble detection in the SatCom environment.
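The correlation step at the heart of preamble detection can be sketched with a Zadoff-Chu sequence of the kind used in LTE/NR PRACH: the receiver correlates the received samples against the known preamble at every candidate delay and picks the peak. Sequence length, root, and the noiseless channel below are illustrative assumptions.

```python
import cmath

# Sketch of correlation-based RA preamble detection: correlate the received
# samples with the known Zadoff-Chu preamble at each delay; the peak of the
# correlation magnitude reveals the preamble's arrival time (and hence the
# timing advance). Delays and sizes are illustrative; no noise is modeled.

def zadoff_chu(u, n_zc):
    """Zadoff-Chu sequence of odd length n_zc with root u (gcd(u, n_zc) = 1)."""
    return [cmath.exp(-1j * cmath.pi * u * n * (n + 1) / n_zc)
            for n in range(n_zc)]

def detect_delay(rx, preamble):
    """Return the delay with the largest correlation magnitude."""
    L = len(preamble)
    best_d, best_m = 0, -1.0
    for d in range(len(rx) - L + 1):
        corr = sum(rx[d + n] * preamble[n].conjugate() for n in range(L))
        if abs(corr) > best_m:
            best_d, best_m = d, abs(corr)
    return best_d

zc = zadoff_chu(u=7, n_zc=139)
rx = [0j] * 25 + zc + [0j] * 10        # preamble arriving after 25 samples
print(detect_delay(rx, zc))            # 25
```

In the SatCom setting studied by the thesis, the interesting part is precisely how well this peak survives the much larger delays and Doppler shifts of a satellite channel.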
Abstract:
The idea of Grid Computing originated in the nineties and found concrete applications in contexts like the SETI@home project, where many volunteer computers cooperated within the Grid environment, performing distributed computations that analyzed radio signals in search of extraterrestrial life. The Grid was composed of traditional personal computers, but with the emergence of the first mobile devices like Personal Digital Assistants (PDAs), researchers started theorizing the inclusion of mobile devices in Grid Computing; although impressive theoretical work was done, the idea was discarded due to the (mainly technological) limitations of the mobile devices available at the time. Decades have passed, and mobile devices are now far more powerful and numerous than before, leaving a great amount of resources on mobile devices, such as smartphones and tablets, untapped. Here we propose a solution for performing distributed computations over a Grid Computing environment that utilizes both desktop and mobile devices, exploiting resources from day-to-day mobile users that would otherwise end up unused. The work starts with an introduction on what Grid Computing is, the evolution of mobile devices, the idea of integrating such devices into the Grid, and how to convince device owners to participate in the Grid. Then the tone becomes more technical, starting with an explanation of how Grid Computing actually works, followed by the technical challenges of integrating mobile devices into the Grid. Next, the model that constitutes the solution offered by this study is explained, followed by a chapter on the realization of a prototype that proves the feasibility of distributed computations over a Grid composed of both mobile and desktop devices. To conclude, future developments and ideas to improve this project are presented.
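The coordinator/worker pattern underlying such a Grid can be sketched with threads standing in for heterogeneous desktop and mobile nodes pulling independent work units from a shared queue. Names and the toy computation are invented for illustration.

```python
import queue
import threading

# Minimal sketch of the Grid pattern: a coordinator puts independent work
# units on a queue; nodes (threads standing in for desktop/mobile devices)
# pull and execute them until the queue is empty, returning partial results.

def run_grid(tasks, n_nodes=3):
    todo, results = queue.Queue(), []
    lock = threading.Lock()
    for t in tasks:
        todo.put(t)

    def node_loop():
        while True:
            try:
                t = todo.get_nowait()
            except queue.Empty:
                return                # no work left: the node retires
            r = t * t                 # the "computation" each node performs
            with lock:
                results.append(r)     # report the partial result

    nodes = [threading.Thread(target=node_loop) for _ in range(n_nodes)]
    for n in nodes:
        n.start()
    for n in nodes:
        n.join()
    return sorted(results)            # order is nondeterministic, so sort

print(run_grid([1, 2, 3, 4]))  # [1, 4, 9, 16]
```

The pull-based queue is what makes heterogeneity cheap: a fast desktop simply dequeues more tasks than a slow phone, with no scheduler logic.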
Abstract:
Bone marrow is organized in specialized microenvironments known as 'marrow niches'. These are important for the maintenance of stem cells and their hematopoietic progenitors, whose homeostasis also depends on other cell types present in the tissue. Extrinsic factors, such as infection and inflammatory states, may affect this system by causing cytokine dysregulation (an imbalance in cytokine production) and changes in cell proliferation and self-renewal rates, and may also induce changes in metabolism and the cell cycle. Known to be related to chronic inflammation, obesity is responsible for systemic changes that are best studied in the cardiovascular system. Little is known regarding the changes in the hematopoietic system induced by the chronic inflammatory state caused by obesity, or the cellular and molecular mechanisms involved. Understanding the biological behavior of hematopoietic stem cells under obesity-induced chronic inflammation could help elucidate the pathophysiological mechanisms involved in other inflammatory processes, such as neoplastic diseases and bone marrow failure syndromes.
Abstract:
To compare time and risk to biochemical recurrence (BR) after radical prostatectomy in two chronologically different groups of patients using the standard and the modified Gleason system (MGS). Cohort 1 comprised biopsies of 197 patients graded according to the standard Gleason system (SGS) in the period 1997/2004, and cohort 2 comprised 176 biopsies graded according to the modified system in the period 2005/2011. Time to BR was analyzed with the Kaplan-Meier product-limit analysis, and prediction of shorter time to recurrence was assessed using univariate and multivariate Cox proportional hazards models. Patients in cohort 2 reflected time-related changes: a striking increase in clinical stage T1c, systematic use of extended biopsies, and a lower percentage of total length of cancer in millimeters across all cores. The MGS used in cohort 2 yielded fewer biopsies with Gleason score ≤ 6 and more biopsies with the intermediate Gleason score 7. Kaplan-Meier curves for time to BR showed statistical significance for the MGS in cohort 2, but not for the SGS in cohort 1. Only the MGS predicted shorter time to BR on univariate analysis, and it was an independent predictor on multivariate analysis. The results support the 2005 International Society of Urological Pathology modified system as a refinement of Gleason grading that is valuable for contemporary clinical practice.
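The Kaplan-Meier product-limit estimator used in the study computes the survival function S(t) as a running product of (1 - d_i/n_i) over the observed event times. A from-scratch sketch on an invented toy data set (the study's actual cohorts are of course much larger):

```python
# Kaplan-Meier product-limit estimator: at each event time t_i with d_i events
# among n_i subjects still at risk, multiply S by (1 - d_i/n_i). Censored
# subjects leave the risk set without triggering an event. Toy data invented.

def kaplan_meier(times, events):
    """times: follow-up durations; events: 1 = recurrence, 0 = censored.
    Returns [(event_time, survival_probability)]."""
    pairs = sorted(zip(times, events))
    n_at_risk, s, curve = len(pairs), 1.0, []
    i = 0
    while i < len(pairs):
        t = pairs[i][0]
        d = sum(1 for tt, e in pairs if tt == t and e == 1)   # events at t
        removed = sum(1 for tt, _ in pairs if tt == t)        # leave risk set
        if d > 0:
            s *= 1.0 - d / n_at_risk
            curve.append((t, round(s, 4)))
        n_at_risk -= removed
        i += removed
    return curve

# Five patients: recurrences at 6, 13 and 21 months; two censored.
print(kaplan_meier([6, 13, 21, 21, 30], [1, 1, 0, 1, 0]))
# [(6, 0.8), (13, 0.6), (21, 0.4)]
```

The censored subject at 21 months follows the usual convention of remaining in the risk set for the event at the same time; only the step times and drop sizes, not the censoring marks, appear in the returned curve.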