976 results for distributed application


Relevance:

30.00%

Publisher:

Abstract:

Over the last decade, water utility companies have aimed to make water distribution networks more intelligent by incorporating IoT technologies, in order to improve quality of service, reduce water waste, and minimize maintenance costs. Current state-of-the-art solutions use expensive, power-hungry deployments that monitor and transmit water network states periodically in order to detect anomalous behaviors such as water leakage and bursts. However, more than 97% of water network assets are located far from power sources, often in geographically remote, underpopulated areas, which makes current approaches unsuitable for the next generation of more dynamic, adaptive water networks. Battery-driven wireless sensor/actuator solutions are theoretically the perfect choice to support next-generation water distribution. In this paper, we present an end-to-end water leak localization system that exploits edge processing and enables the use of battery-driven sensor nodes. Our system combines a lightweight edge anomaly detection algorithm based on compression rates with an efficient localization algorithm based on graph theory. Together, the edge anomaly detection and localization elements produce timely and accurate localization results and reduce communication by 99% compared to traditional periodic communication. We evaluated our schemes by deploying non-intrusive sensors measuring vibration data on a real-world water test rig on which controlled leakage and burst scenarios were implemented.
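The abstract gives no implementation details; as a rough illustration of the compression-rate idea (an anomaly changes how well a vibration window compresses), a minimal Python sketch could look like the following. The window contents, the zlib compressor, and the tolerance threshold are illustrative assumptions, not the paper's choices.

```python
import os, zlib

def compression_ratio(window: bytes) -> float:
    # Structured (normal) vibration windows compress well;
    # leaks/bursts add broadband content that compresses poorly.
    return len(zlib.compress(window)) / len(window)

def is_anomalous(window: bytes, baseline: float, tolerance: float = 0.15) -> bool:
    # Tolerance is an invented value, not the paper's.
    return compression_ratio(window) > baseline + tolerance

# Toy demo: a repetitive "normal" window vs. a noise-like "burst" window.
normal = b"\x00\x01\x02\x03" * 256
burst = os.urandom(1024)
baseline = compression_ratio(normal)
print(is_anomalous(normal, baseline))  # False
print(is_anomalous(burst, baseline))   # True (random data barely compresses)
```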

Relevance:

30.00%

Publisher:

Abstract:

Sustainability and responsible environmental behaviour constitute a vital premise for the development of humankind. In fact, over the last decades, the global energy scenario has been evolving towards a scheme in which Renewable Energy Sources (RES) such as photovoltaic, wind, biomass, and hydrogen play an increasingly relevant role. Furthermore, hydrogen is an energy carrier that constitutes a means of long-term energy storage. The integration of hydrogen with local RES contributes to distributed power generation and to the early introduction of the hydrogen economy. The intermittent nature of many RES, for instance solar and wind sources, imposes the development of a management and control strategy to overcome this drawback. This strategy is responsible for providing reliable, stable, and efficient operation of the system. To implement such a strategy, a monitoring system is required. The present paper aims to contribute to the experimental validation of LabVIEW as a valuable tool for developing monitoring platforms for RES-based facilities. To this end, a set of real systems successfully monitored with it is presented.

Relevance:

30.00%

Publisher:

Abstract:

Nowadays, reinforcement learning has proven to be very effective in machine learning across a variety of fields, such as games, speech recognition, and many others. We therefore decided to apply reinforcement learning to allocation problems, both because they are a research area not yet studied with this technique and because their formulation encompasses a broad set of sub-problems with similar characteristics, so that a solution for one of them extends to each of these sub-problems. In this project we built an application called Service Broker, which uses reinforcement learning to learn how to distribute the execution of tasks over asynchronous, distributed workers. The analogy is that of a cloud data center, which owns internal resources (possibly distributed across the server farm), receives tasks from its clients, and executes them on those resources. The goal of the application, and hence of the data center, is to allocate these tasks so as to minimize the execution cost. Moreover, in order to test the reinforcement learning agents we developed, we created an environment, a simulator, that allowed us to focus on developing the components the agents need, rather than also having to deal with the implementation aspects required in a real data center, such as communication with the various nodes and the associated latency. The results obtained confirmed the theory we studied, achieving better performance than some of the classical methods for task allocation.
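As a loose illustration of the approach, here is a minimal tabular Q-learning sketch for assigning tasks to heterogeneous workers. The cost model, worker speeds, and hyperparameters are invented for the example; this is not the thesis's actual Service Broker.

```python
import random

# Toy Q-learning broker: state = worker loads, action = pick a worker.
N_WORKERS, EPISODES, ALPHA, GAMMA, EPS = 3, 2000, 0.1, 0.9, 0.1
speed = [1.0, 2.0, 4.0]                       # heterogeneous worker speeds
Q = {}                                        # state -> action-value list

def step(loads, worker):
    loads = list(loads)
    loads[worker] += 1.0 / speed[worker]      # task duration on that worker
    cost = max(loads)                         # makespan-style execution cost
    return tuple(round(l, 2) for l in loads), -cost

for _ in range(EPISODES):
    loads = (0.0,) * N_WORKERS
    for _task in range(6):                    # an episode = 6 incoming tasks
        q = Q.setdefault(loads, [0.0] * N_WORKERS)
        a = random.randrange(N_WORKERS) if random.random() < EPS else q.index(max(q))
        nxt, reward = step(loads, a)
        nq = Q.setdefault(nxt, [0.0] * N_WORKERS)
        q[a] += ALPHA * (reward + GAMMA * max(nq) - q[a])  # Q-learning update
        loads = nxt

print(Q[(0.0,) * N_WORKERS])  # the fastest worker's action should dominate
```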

Relevance:

30.00%

Publisher:

Abstract:

The modern industrial environment is populated by a myriad of intelligent devices that collaborate to accomplish the numerous business processes in place at production sites. The close collaboration between humans and work machines poses new and interesting challenges that industry must overcome in order to implement the new digital policies demanded by the industrial transition. The Industry 5.0 movement is a companion revolution to the earlier Industry 4.0, and it relies on three characteristics that any industrial sector should have and pursue: human centrality, resilience, and sustainability. The fifth industrial revolution cannot be realized without building on the implementation of Industry 4.0-enabled platforms. The common feature in the development of this kind of platform is the need to integrate the information and operational technology layers. Our thesis work focuses on the implementation of a platform addressing all the digitization features foreseen by the fourth industrial revolution, making IT/OT convergence inside production plants an improvement rather than a risk. Furthermore, we added modular features to our platform that enable the Industry 5.0 vision. We support human centrality through mobile crowdsensing techniques, and resilience and sustainability through pluggable cloud computing services combined with data contributed by the crowd. We achieved important and encouraging results in all the domains in which we conducted our experiments. Our IT/OT convergence-enabled platform exhibits the performance needed to satisfy the strict requirements of production sites. The multi-layer capability of the framework enables the exploitation of data not strictly coming from work machines, allowing tighter interaction between the company, its employees, and its customers.

Relevance:

30.00%

Publisher:

Abstract:

Several decision and control tasks involve networks of cyber-physical systems that need to be coordinated and controlled according to a fully-distributed paradigm involving only local communications, without any central unit. This thesis focuses on distributed optimization and games over networks from a system-theoretical perspective. In the addressed frameworks, we consider agents that communicate only with neighbors and run distributed algorithms with optimization-oriented goals. The distinctive feature of this thesis is to interpret these algorithms as dynamical systems and, thus, to resort to powerful system-theoretical tools for both their analysis and design. We first address the so-called consensus optimization setup. In this context, we provide an original system-theoretical analysis of the well-known Gradient Tracking algorithm in the general case of nonconvex objective functions. Then, inspired by this method, we provide and study a series of extensions that improve performance and deal with more challenging settings, such as the derivative-free and online frameworks. Subsequently, we tackle the recently emerged framework of distributed aggregative optimization. For this setup, we develop and analyze novel schemes to handle (i) online instances of the problem, (ii) "personalized" optimization frameworks, and (iii) feedback optimization settings. Finally, we adopt a system-theoretical approach to address aggregative games over networks, in both the presence and absence of linear coupling constraints among the players' decision variables. In this context, we design and inspect novel fully-distributed algorithms, based on tracking mechanisms, that outperform state-of-the-art methods in finding the Nash equilibrium of the game.
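For readers unfamiliar with Gradient Tracking, a minimal numerical sketch of its two coupled updates follows. The quadratic local costs, the all-to-all averaging matrix, and the step size are illustrative assumptions; in practice a sparse doubly stochastic matrix matched to the actual communication graph would be used.

```python
import numpy as np

# Gradient Tracking over a network of N agents:
#   x_i^{k+1} = sum_j w_ij x_j^k - alpha * s_i^k
#   s_i^{k+1} = sum_j w_ij s_j^k + grad f_i(x_i^{k+1}) - grad f_i(x_i^k)
N, ALPHA, ITERS = 4, 0.1, 200
targets = np.array([1.0, 2.0, 3.0, 4.0])        # f_i(x) = 0.5 * (x - t_i)^2
grad = lambda x, i: x - targets[i]

W = np.full((N, N), 1.0 / N)                     # doubly stochastic weights
x = np.zeros(N)
s = np.array([grad(x[i], i) for i in range(N)])  # trackers start at local grads

for _ in range(ITERS):
    x_new = W @ x - ALPHA * s
    s = W @ s + np.array([grad(x_new[i], i) - grad(x[i], i) for i in range(N)])
    x = x_new

print(x)  # all agents converge to the global minimizer mean(targets) = 2.5
```

The tracker s maintains a running estimate of the average gradient across agents, which is what lets every agent converge to the global (not merely local) minimizer.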

Relevance:

30.00%

Publisher:

Abstract:

The advent of Bitcoin suggested a disintermediated economy in which Internet users can take part directly. The conceptual disruption brought about by this Internet of Money (IoM) mirrors the cross-industry impact of blockchain and distributed ledger technologies (DLTs). While related instances of non-centralisation thwart regulatory efforts to establish accountability, in the financial domain further challenges arise from the presence in the IoM of two seemingly opposing traits: anonymity and transparency. Indeed, DLTs are often described as architecturally transparent, but the perceived anonymity of cryptocurrency transfers fuels fears of illicit exploitation. This is a primary concern for the framework to prevent money laundering and the financing of terrorism and proliferation (AML/CFT/CPF), and a top priority both globally and at the EU level. Nevertheless, the anonymous and transparent features of the IoM are far from clear-cut, and the same is true of its levels of disintermediation and non-centralisation. Almost fifteen years after the first Bitcoin transaction, the IoM today comprises a diverse set of socio-technical ecosystems. Building on an analysis of their phenomenology, this dissertation shows that there is more to their traits of anonymity and transparency than may seem, and that these features range across a spectrum of combinations and degrees. In this context, trade-offs can be evaluated against techno-legal benchmarks, established through socio-technical assessments grounded in teleological interpretation. Against this backdrop, this work provides framework-level recommendations for the EU to respond to the twofold nature of the IoM legitimately and effectively. The methodology embraces the mutual interaction between regulation and technology, drafting regulation whose compliance can be eased by design. This approach mitigates the risk of overfitting in a fast-changing environment, while acknowledging specificities in compliance with the risk-based approach that sits at the core of the AML/CFT/CPF regime.

Relevance:

30.00%

Publisher:

Abstract:

The application of modern ICT technologies is radically changing many fields, pushing toward more open and dynamic value chains that foster the cooperation and integration of many connected partners, sensors, and devices. As a valuable example, the emerging Smart Tourism field derives from the application of ICT to tourism so as to create richer and more integrated experiences, making them more accessible and sustainable. From a technological viewpoint, a recurring challenge in these decentralized environments is the integration of heterogeneous services and data spanning multiple administrative domains, each possibly applying different security/privacy policies, device and process control mechanisms, service access and provisioning schemes, etc. The distribution and heterogeneity of those sources exacerbate the complexity of developing integrating solutions, with consequent high effort and costs for partners seeking them. Taking a step towards addressing these issues, we propose APERTO, a decentralized and distributed architecture that aims at facilitating the blending of data and services. At its core, APERTO relies on APERTO FaaS, a Serverless platform allowing fast prototyping of the business logic, lowering the barrier of entry and development costs for newcomers, (zero) fine-grained scaling of resources servicing end-users, and reduced management overhead. The APERTO FaaS infrastructure is based on asynchronous and transparent communications between the components of the architecture, allowing the development of optimized solutions that exploit the peculiarities of distributed and heterogeneous environments. In particular, APERTO addresses the provisioning of scalable and cost-efficient mechanisms targeting: (i) function composition, allowing the definition of complex workloads from simple, ready-to-use functions, enabling smarter management of complex tasks and improved multiplexing capabilities; (ii) the creation of end-to-end differentiated QoS slices, minimizing interference among applications/services running on a shared infrastructure; (iii) an abstraction providing uniform and optimized access to heterogeneous data sources; and (iv) a decentralized approach for the verification of access rights to resources.
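As a toy illustration of the function-composition mechanism in point (i), not APERTO's actual API, one can picture a registry of ready-to-use functions chained into a workflow. All function names and payloads below are invented.

```python
from functools import reduce
from typing import Any, Callable, Dict

# Hypothetical FaaS-style registry of simple, ready-to-use functions.
registry: Dict[str, Callable[[Any], Any]] = {
    "fetch_weather": lambda city: {"city": city, "temp_c": 21},
    "rank_attractions": lambda ctx: {**ctx, "top": ["museum", "old town"]},
    "render_itinerary": lambda ctx: f"{ctx['city']} ({ctx['temp_c']}°C): "
                                    + ", ".join(ctx["top"]),
}

def compose(*names: str) -> Callable[[Any], Any]:
    """Build a pipeline from registered functions, invoked left to right."""
    funcs = [registry[n] for n in names]
    return lambda payload: reduce(lambda acc, f: f(acc), funcs, payload)

itinerary = compose("fetch_weather", "rank_attractions", "render_itinerary")
print(itinerary("Bologna"))
```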

Relevance:

30.00%

Publisher:

Abstract:

Distributed argumentation technology is a computational approach that incorporates argumentation reasoning mechanisms within multi-agent systems. For the formal foundations of distributed argumentation technology, in this thesis we conduct a principle-based analysis of structured argumentation as well as abstract multi-agent and abstract bipolar argumentation. The results of this principle-based approach provide an overview of, and guidelines for, further applications of the theories. Moreover, in this thesis we explore distributed argumentation technology using distributed ledgers. We envision an Intelligent Human-input-based Blockchain Oracle (IHiBO), an artificial intelligence tool for storing argumentation reasoning. We propose a decentralized and secure architecture for conducting decision-making, addressing key concerns of trust, transparency, and immutability. We model fund management with agent argumentation in IHiBO and analyze its compliance with European fund management legal frameworks. We illustrate how bipolar argumentation balances pros and cons in legal reasoning in a legal divorce case, and how the strength of natural-language arguments can be represented in structured arguments. Finally, we discuss how distributed argumentation technology can be used to advance risk management, regulatory compliance of distributed ledgers for financial securities, and dialogue techniques.
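As a concrete reference point for the abstract-argumentation machinery mentioned above, here is a minimal sketch computing the grounded extension of a Dung-style framework by iterating its characteristic function. The example arguments and attack relation are invented.

```python
# Dung abstract argumentation: arguments + an attack relation.
arguments = {"a", "b", "c", "d"}
attacks = {("a", "b"), ("b", "c"), ("c", "d")}

def attackers(arg):
    return {x for (x, y) in attacks if y == arg}

def defended(s):
    # An argument is acceptable w.r.t. S if S attacks all of its attackers.
    return {a for a in arguments
            if all(attackers(b) & s for b in attackers(a))}

grounded = set()
while True:                      # least fixed point of the characteristic fn
    nxt = defended(grounded)
    if nxt == grounded:
        break
    grounded = nxt

print(sorted(grounded))  # ['a', 'c']: a is unattacked and defends c against b
```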

Relevance:

30.00%

Publisher:

Abstract:

With the aim of heading towards a more sustainable future, there has been a noticeable increase in the installation of Renewable Energy Sources (RES) in power systems in recent years. Besides the evident environmental benefits, RES pose several technological challenges in terms of scheduling, operation, and control of transmission and distribution power networks. This has raised the necessity of developing smart grids relying on a suitable distributed measurement infrastructure, for instance based on Phasor Measurement Units (PMUs). Not only are such devices able to estimate a phasor, but they can also provide time information, which is essential for real-time monitoring. This Thesis falls within this context by analyzing the uncertainty requirements of PMUs in distribution and transmission applications. Concerning the latter, the reliability of PMU measurements during severe power system events is examined, whereas for the former, typical configurations of distribution networks are studied for the development of target uncertainties. The second part of the Thesis is instead dedicated to the application of PMUs in low-inertia power grids. The replacement of traditional synchronous machines with inertia-less RES is progressively reducing the overall system inertia, resulting in faster and more severe events. In this scenario, PMUs may play a vital role, even though no standard requirements or target uncertainties are yet available. This Thesis investigates PMU-based applications in depth, proposing a new inertia index that relies only on local measurements and evaluating their reliability in low-inertia scenarios. It also develops possible uncertainty intervals based on the electrical instrumentation currently used in power systems and assesses interoperability with other devices before and after contingency events.
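The abstract does not spell out the proposed inertia index; a classic local-measurement estimate, which the sketch below illustrates with invented numbers, follows from the per-unit swing equation (2H/f0) * df/dt = ΔP. The thesis's actual index may differ.

```python
# Swing-equation-based inertia estimate from local PMU measurements:
#   (2H / f0) * df/dt = ΔP (per unit)  =>  H ≈ f0 * ΔP / (2 * RoCoF)
f0 = 50.0        # nominal frequency [Hz]
delta_p = 0.10   # per-unit power imbalance right after the event (assumed)
rocof = -0.25    # rate of change of frequency measured by the PMU [Hz/s]

H_est = f0 * delta_p / (2.0 * abs(rocof))
print(f"Estimated inertia constant H ≈ {H_est:.1f} s")  # 10.0 s
```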

Relevance:

30.00%

Publisher:

Abstract:

The Internet of Vehicles (IoV) paradigm has emerged in recent times: with the support of technologies like the Internet of Things and V2X, Vehicular Users (VUs) can access different services through internet connectivity. With the support of 6G technology, the IoV paradigm will evolve further and converge into a fully connected and intelligent vehicular system. However, this brings new challenges to dynamic and resource-constrained vehicular systems, and advanced solutions are in demand. This dissertation analyzes the demands of future 6G-enabled IoV systems and the corresponding challenges, and provides various solutions to address them. Vehicular services and application requests demand proper data processing solutions with the support of distributed computing environments such as Vehicular Edge Computing (VEC). When analyzing the performance of VEC systems, it is important to take the limited resources, coverage, and vehicular mobility into account. Recently, Non-Terrestrial Networks (NTN) have gained huge popularity for boosting the coverage and capacity of terrestrial wireless networks. Integrating such NTN facilities into the terrestrial VEC system can address the above-mentioned challenges. Additionally, such integrated Terrestrial and Non-Terrestrial Networks (T-NTN) can also provide advanced intelligent solutions with the support of the edge intelligence paradigm. In this dissertation, we propose an edge-computing-enabled joint T-NTN-based vehicular system architecture to serve VUs. Next, we analyze the performance of terrestrial VEC systems on VU data processing problems and propose solutions that improve performance in terms of latency and energy costs. We then extend the scenario to the joint T-NTN system and address the problem of distributed data processing through ML-based solutions. We also propose advanced distributed learning frameworks supported by a joint T-NTN framework with edge computing facilities. In the end, conclusive remarks and several future directions are provided for the proposed solutions.
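As a stylized picture of the latency/energy trade-off behind offloading decisions in such systems (not the dissertation's actual model), the sketch below compares a weighted latency-plus-energy cost for local execution versus offloading over terrestrial and non-terrestrial links. All parameters are invented.

```python
# Toy offloading decision for a vehicular task: run locally vs. offload
# to an edge node, minimizing a weighted latency + energy cost.
def local_cost(cycles, cpu_hz=1e9, power_w=2.0, w_t=0.5, w_e=0.5):
    t = cycles / cpu_hz                      # local execution time [s]
    return w_t * t + w_e * power_w * t       # time + local CPU energy

def offload_cost(bits, cycles, rate_bps, edge_hz=10e9, tx_power_w=0.5,
                 w_t=0.5, w_e=0.5):
    t_tx, t_exec = bits / rate_bps, cycles / edge_hz
    # VU pays transmission energy only; edge-side energy ignored here.
    return w_t * (t_tx + t_exec) + w_e * tx_power_w * t_tx

task_bits, task_cycles = 2e6, 5e8
for name, rate in [("terrestrial VEC", 50e6), ("NTN (narrowband)", 0.5e6)]:
    choice = ("offload" if offload_cost(task_bits, task_cycles, rate)
              < local_cost(task_cycles) else "local")
    print(name, "->", choice)   # fast link favors offloading, slow link local
```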

Relevance:

30.00%

Publisher:

Abstract:

The main objective of my thesis work is to exploit Kubeflow, the Google-native, open-source ML platform, and specifically Kubeflow Pipelines, to execute a scalable Federated Learning (FL) process in a simplified, 5G-like test architecture hosting a Kubernetes cluster. On top of it, the widely adopted FedAvg algorithm and its optimization FedProx are applied, leveraging the platform's abilities to ease the development and production cycle of this specific FL process. FL algorithms are more and more promising and adopted, both in cloud application development and in 5G communication enhancement, where data from the monitoring of the underlying telco infrastructure is used and training and data aggregation are executed at edge nodes to optimize the algorithm's global model (which could be used, for example, for resource provisioning to reach an agreed QoS for the underlying network slice). After a study of the available papers and scientific articles related to FL, and with the help of the CTTC, which suggested studying and using Kubeflow to run the algorithm, we found that this approach to deploying the whole FL cycle was not documented and could be interesting to investigate in more depth. This study may help prove the efficiency of the Kubeflow platform itself for the development of new FL algorithms that will support new applications, and in particular it tests the performance of the FedAvg algorithm in a simulated client-to-cloud communication, using the MNIST dataset as an FL benchmark.
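For orientation, the core FedAvg step the thesis builds on can be sketched in a few lines of plain Python, independent of Kubeflow: each client trains locally, and the server averages the resulting weights in proportion to local dataset size. The linear-regression model and synthetic data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_train(w, X, y, lr=0.1, epochs=5):
    # Plain gradient-descent steps on the client's local data.
    for _ in range(epochs):
        w = w - lr * X.T @ (X @ w - y) / len(y)
    return w

def fed_avg(updates, sizes):
    # Weighted average of client models, proportional to dataset sizes.
    total = sum(sizes)
    return sum(n / total * w for w, n in zip(updates, sizes))

# Four clients with synthetic data from the same ground-truth model.
true_w = np.array([1.0, -2.0, 0.5])
clients = []
for _ in range(4):
    X = rng.normal(size=(50, 3))
    clients.append((X, X @ true_w + rng.normal(0, 0.1, 50)))

w_global = np.zeros(3)
for _round in range(20):         # one FL round = local training + averaging
    updates = [local_train(w_global.copy(), X, y) for X, y in clients]
    w_global = fed_avg(updates, [len(y) for _, y in clients])

print(w_global.round(2))         # ≈ [ 1.  -2.   0.5]
```

FedProx differs mainly by adding a proximal term to the local objective that penalizes drift from the global model, which stabilizes training on heterogeneous clients.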

Relevance:

30.00%

Publisher:

Abstract:

In this thesis, the optimal operation of a neighborhood of smart households, in terms of minimizing the total energy cost, is analyzed. Each household may comprise several assets such as electric vehicles, controllable appliances, energy storage, and distributed generation. Bi-directional power flow is considered for each household. Apart from the distributed generation unit, technological options such as vehicle-to-home and vehicle-to-grid are available to cover self-consumption needs and to export excess energy to other households, respectively.
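A minimal sketch of the kind of cost minimization involved, assuming a single household with only a battery and time-varying prices (all sizes and prices invented, and far simpler than the thesis's multi-asset, multi-household model), can be posed as a small linear program:

```python
import numpy as np
from scipy.optimize import linprog

T = 4
price = np.array([0.10, 0.30, 0.30, 0.10])   # grid price per slot [€/kWh]
demand = np.array([1.0, 2.0, 2.0, 1.0])      # household demand [kWh]
cap = 3.0                                     # battery capacity [kWh]

# Variables per slot: g_t (grid import) and b_t (battery charge; < 0 = discharge).
# Energy balance: g_t - b_t = demand_t.  SoC bounds: 0 <= cumsum(b) <= cap.
c = np.concatenate([price, np.zeros(T)])              # pay only for imports
A_eq = np.hstack([np.eye(T), -np.eye(T)])
L = np.tril(np.ones((T, T)))                          # cumulative-sum matrix
A_ub = np.vstack([np.hstack([np.zeros((T, T)), L]),   # SoC <= cap
                  np.hstack([np.zeros((T, T)), -L])]) # SoC >= 0
b_ub = np.concatenate([np.full(T, cap), np.zeros(T)])
bounds = [(0, None)] * T + [(-cap, cap)] * T

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=demand, bounds=bounds)
print(res.x[:T].round(2), "cost €%.2f" % res.fun)  # pre-buys in cheap slots
```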

Relevance:

30.00%

Publisher:

Abstract:

The idea of Grid Computing originated in the nineties and found concrete applications in contexts like the SETI@home project, where many computers offered by volunteers cooperated within the Grid environment, performing distributed computations that analyzed radio signals in search of extraterrestrial life. The Grid was composed of traditional personal computers, but with the emergence of the first mobile devices, such as Personal Digital Assistants (PDAs), researchers started theorizing the inclusion of mobile devices in Grid Computing; although impressive theoretical work was done, the idea was discarded due to the (mainly technological) limitations of the mobile devices available at the time. Decades have passed, and mobile devices are now far more powerful and numerous than before, leaving a great amount of resources on mobile devices, such as smartphones and tablets, untapped. Here we propose a solution for performing distributed computations in a Grid Computing environment that utilizes both desktop and mobile devices, exploiting resources from day-to-day mobile users that would otherwise end up unused. The work starts with an introduction to what Grid Computing is, the evolution of mobile devices, the idea of integrating such devices into the Grid, and how to convince device owners to participate in the Grid. Then the tone becomes more technical, starting with an explanation of how Grid Computing actually works, followed by the technical challenges of integrating mobile devices into the Grid. Next, the model that constitutes the solution offered by this study is explained, followed by a chapter on the realization of a prototype that proves the feasibility of distributed computations over a Grid composed of both mobile and desktop devices. To conclude, future developments and ideas to improve this project are presented.
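As a toy model of such a mixed Grid (not the thesis's prototype), a shared task queue served by both "desktop" and "mobile" workers of different speeds can be sketched as follows; worker names and speeds are invented.

```python
import queue, threading, time

tasks = queue.Queue()
results, lock = [], threading.Lock()

def worker(name, speed):
    # Pull tasks until the queue is drained; faster nodes pull more often.
    while True:
        try:
            n = tasks.get_nowait()
        except queue.Empty:
            return
        time.sleep(0.01 / speed)            # simulate heterogeneous compute
        with lock:
            results.append((name, n, n * n))
        tasks.task_done()

for n in range(20):
    tasks.put(n)

threads = [threading.Thread(target=worker, args=("desktop", 4.0)),
           threading.Thread(target=worker, args=("mobile-1", 1.0)),
           threading.Thread(target=worker, args=("mobile-2", 1.0))]
for t in threads: t.start()
for t in threads: t.join()

done_by_desktop = sum(r[0] == "desktop" for r in results)
print(len(results), "tasks;", done_by_desktop, "by the desktop worker")
```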

Relevance:

20.00%

Publisher:

Abstract:

A rapid, sensitive and specific method for quantifying propylthiouracil in human plasma, using methylthiouracil as the internal standard (IS), is described. The analyte and the IS were extracted from plasma by liquid-liquid extraction using an organic solvent (ethyl acetate). The extracts were analyzed by high performance liquid chromatography coupled with electrospray tandem mass spectrometry (HPLC-MS/MS) in negative mode (ES-). Chromatography was performed using a Phenomenex Gemini C18 5 μm analytical column (4.6 mm × 150 mm i.d.) and a mobile phase consisting of methanol/water/acetonitrile (40/40/20, v/v/v) + 0.1% formic acid. For propylthiouracil and the IS, the optimized values of the declustering potential, collision energy, and collision exit potential were -60 V, -26 eV, and -5 V, respectively. The method had a chromatographic run time of 2.5 min and a linear calibration curve over the range 20-5000 ng/mL. The limit of quantification (LOQ) was 20 ng/mL. The stability tests indicated no significant degradation. This HPLC-MS/MS procedure was used to assess the bioequivalence of two propylthiouracil 100 mg tablet formulations in healthy volunteers of both sexes in fasted and fed states. The geometric means and 90% confidence intervals (CI) of the Test/Reference percent ratios were, without and with food, respectively: 109.28% (103.63-115.25%) and 115.60% (109.03-122.58%) for Cmax, and 103.31% (100.74-105.96%) and 103.40% (101.03-105.84%) for AUClast. This method offers advantages over those previously reported, in terms of both a simple liquid-liquid extraction without clean-up procedures and a faster run time (2.5 min). The LOQ of 20 ng/mL is well suited for pharmacokinetic studies. The assay performance results indicate that the method is precise and accurate enough for the routine determination of propylthiouracil in human plasma. The test formulation, with and without food, was bioequivalent to the reference formulation. Food administration increased the Tmax and decreased the bioavailability (Cmax and AUC).
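For context, the reported figures are geometric mean Test/Reference ratios with 90% confidence intervals computed on log-transformed data. A hedged sketch of that computation on synthetic values (not the study's data, and simplified relative to a full crossover ANOVA) is:

```python
import numpy as np
from scipy import stats

# Synthetic paired Cmax values for 24 subjects (illustrative only).
rng = np.random.default_rng(1)
ref = rng.lognormal(mean=7.0, sigma=0.25, size=24)            # reference
test = ref * rng.lognormal(np.log(1.09), 0.10, size=24)       # ~9% higher

diff = np.log(test) - np.log(ref)                 # within-subject log ratios
gmr = np.exp(diff.mean())                         # geometric mean ratio
t_crit = stats.t.ppf(0.95, df=len(diff) - 1)      # two-sided 90% CI
half = t_crit * diff.std(ddof=1) / np.sqrt(len(diff))
lo, hi = np.exp(diff.mean() - half), np.exp(diff.mean() + half)

print(f"GMR {gmr*100:.2f}% (90% CI {lo*100:.2f}-{hi*100:.2f}%)")
# Bioequivalence is typically concluded when the CI lies within 80-125%.
```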

Relevance:

20.00%

Publisher:

Abstract:

Although several treatments for tendon lesions have been proposed, successful tendon repair remains a great challenge for orthopedics, especially considering the high incidence of re-rupture of injured tendons. Our aim was to evaluate the pharmacological potential of Aloe vera on the content and arrangement of glycosaminoglycans (GAGs) during tendon healing, based on the effectiveness of A. vera on collagen organization previously observed by our group. In rats, a partial calcaneal tendon transection was performed with subsequent topical A. vera application at the injury site. The tendons were treated with A. vera ointment for 7 days and excised on the 7th, 14th, or 21st day post-surgery. Control rats received ointment without A. vera. A higher content of GAGs and a lower amount of dermatan sulfate were detected in the A. vera-treated group on the 14th day compared with the control. Also at 14 days post-surgery, a lower dichroic ratio in toluidine blue-stained sections was observed in A. vera-treated tendons compared with the control. No differences were observed in the chondroitin-6-sulfate and TGF-β1 levels between the groups, and a higher amount of non-collagenous proteins was detected in the A. vera-treated group on the 21st day compared with the control group. No differences were observed in the number of fibroblasts, inflammatory cells, and blood vessels between the groups. The application of A. vera during tendon healing modified the arrangement of GAGs and increased the content of GAGs and non-collagenous proteins.