886 results for tucson, cloud, tuple, java, sistemi distribuiti, cloudify
Abstract:
The adoption of home-automation solutions depends on enabling technologies that support communication among the many agents in such networks. The goal of this thesis is to design and implement a Java-based middleware for distributed sensors, called SensorNetwork, that allows a home-automation agent to perform sensing on the environment. The main features of the system are uniform access to heterogeneous distributed sensors, a high level of automation (node auto-configuration and auto-discovery), configuration at deployment time, modularity, and ease of use and of extension with new sensors. The resulting system is based on a component-container architecture that allows sensors to be used within sensor stations and supports remote access through a naming service defined ad hoc.
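To make the component-container idea concrete, here is a minimal Java sketch under assumed names (Sensor, SensorStation, and the sensor path are illustrative, not the actual SensorNetwork API): a container registers heterogeneous sensors behind one uniform interface and resolves them by name, mirroring the ad-hoc naming service described above.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Uniform view over heterogeneous sensors (hypothetical API, for illustration only).
    interface Sensor {
        String name();
        double read();          // latest sample, in the sensor's own unit
    }

    // Container hosting sensor components and resolving them by name,
    // mimicking a sensor station with an ad-hoc naming service.
    class SensorStation {
        private final Map<String, Sensor> registry = new ConcurrentHashMap<>();

        void register(Sensor s) { registry.put(s.name(), s); }

        Sensor lookup(String name) {
            Sensor s = registry.get(name);
            if (s == null) throw new IllegalArgumentException("unknown sensor: " + name);
            return s;
        }

        public static void main(String[] args) {
            SensorStation station = new SensorStation();
            station.register(new Sensor() {
                public String name() { return "livingroom/temperature"; }
                public double read() { return 21.5; }
            });
            System.out.println(station.lookup("livingroom/temperature").read());
        }
    }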
Abstract:
In this thesis, tool support is addressed for the combined disciplines of model-based testing and performance testing. Model-based testing (MBT) utilizes abstract behavioral models to automate test generation, thus decreasing the time and cost of test creation. MBT is a functional testing technique, focusing on output, behavior, and functionality. Performance testing, however, is non-functional and is concerned with responsiveness and stability under various load conditions. MBPeT (Model-Based Performance evaluation Tool) is one such tool: it utilizes probabilistic models, representing dynamic real-world user behavior patterns, to generate synthetic workload against a System Under Test (SUT) and in turn carry out performance analysis based on key performance indicators (KPIs). Developed at Åbo Akademi University, the MBPeT tool currently comprises a downloadable command-line based tool as well as a graphical user interface. The goal of this thesis project is twofold: 1) to extend the existing MBPeT tool by deploying it as a web-based application, thereby removing the requirement of local installation, and 2) to design a user interface for this web application which adds new user interaction paradigms to the existing feature set of the tool. All phases of the MBPeT process are realized via this single web deployment location, including probabilistic model creation, test configuration, test session execution against a SUT with real-time monitoring of user-configurable metrics, and final test report generation and display. This web application (MBPeT Dashboard) is implemented in the Java programming language on top of the Vaadin framework for rich internet application development. The Vaadin framework handles the complicated web communication processes and front-end technologies, freeing developers to implement the business logic as well as the user interface in pure Java. A number of experiments are run in a case-study environment to validate the functionality of the newly developed Dashboard application as well as the scalability of the implemented solution in handling multiple concurrent users. The results support a successful solution with regard to the functional and performance criteria defined, while improvements and optimizations are suggested to further increase both of these factors.
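To illustrate the "pure Java" model that Vaadin enables, here is a minimal UI sketch in the Vaadin 8 style (class name and captions are hypothetical and not taken from the actual MBPeT Dashboard; servlet and deployment wiring is omitted):

    import com.vaadin.server.VaadinRequest;
    import com.vaadin.ui.Button;
    import com.vaadin.ui.Notification;
    import com.vaadin.ui.UI;
    import com.vaadin.ui.VerticalLayout;

    // Both the layout and the behaviour are written in plain Java;
    // Vaadin renders the components and handles the client-server traffic.
    public class DashboardUI extends UI {
        @Override
        protected void init(VaadinRequest request) {
            Button start = new Button("Start test session",
                    e -> Notification.show("Session started"));
            setContent(new VerticalLayout(start));
        }
    }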
Abstract:
The application of modern ICT technologies is radically changing many fields, pushing toward more open and dynamic value chains and fostering the cooperation and integration of many connected partners, sensors, and devices. A notable example is the emerging Smart Tourism field, which derives from the application of ICT to tourism to create richer and more integrated experiences, making them more accessible and sustainable. From a technological viewpoint, a recurring challenge in these decentralized environments is the integration of heterogeneous services and data spanning multiple administrative domains, each possibly applying different security/privacy policies, device and process control mechanisms, service access, and provisioning schemes, etc. The distribution and heterogeneity of these sources exacerbate the complexity of developing integration solutions, with consequently high effort and costs for the partners that need them. Taking a step towards addressing these issues, we propose APERTO, a decentralized and distributed architecture that aims at facilitating the blending of data and services. At its core, APERTO relies on APERTO FaaS, a serverless platform that allows fast prototyping of the business logic, lowers the barrier to entry and the development costs for newcomers, offers fine-grained (down-to-zero) scaling of the resources serving end users, and reduces management overhead. The APERTO FaaS infrastructure is based on asynchronous and transparent communication between the components of the architecture, allowing the development of optimized solutions that exploit the peculiarities of distributed and heterogeneous environments. In particular, APERTO addresses the provisioning of scalable and cost-efficient mechanisms targeting: i) function composition, allowing the definition of complex workloads from simple, ready-to-use functions and enabling smarter management of complex tasks and improved multiplexing capabilities; ii) the creation of end-to-end differentiated QoS slices, minimizing interference among applications/services running on a shared infrastructure; iii) an abstraction providing uniform and optimized access to heterogeneous data sources; and iv) a decentralized approach for the verification of access rights to resources.
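As a rough illustration of point i), function composition, the following Java sketch (hypothetical names, not the APERTO FaaS API) shows how simple, ready-to-use functions can be chained into a larger workload that a platform could then schedule and scale independently:

    import java.util.function.Function;

    // Two simple, reusable "functions" (hypothetical examples).
    class Functions {
        static final Function<String, String> normalize = s -> s.trim().toLowerCase();
        static final Function<String, Integer> wordCount =
                s -> s.isEmpty() ? 0 : s.split("\\s+").length;

        public static void main(String[] args) {
            // Composition: a more complex workload built from ready-to-use pieces.
            Function<String, Integer> pipeline = normalize.andThen(wordCount);
            System.out.println(pipeline.apply("  Smart Tourism services  ")); // 3
        }
    }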
Abstract:
The recent trend of moving Cloud Computing capabilities to the Edge of the network is reshaping how applications and their middleware supports are designed, deployed, and operated. This new model envisions a continuum of virtual resources between the traditional cloud and the network edge, which is potentially more suitable to meet the heterogeneous Quality of Service (QoS) requirements of diverse application domains and next-generation applications. Several classes of advanced Internet of Things (IoT) scenarios, e.g., in the industrial manufacturing domain, are expected to serve a wide range of applications with heterogeneous QoS requirements and call for QoS management systems to guarantee/control performance indicators, even in the presence of real-world factors such as limited bandwidth and concurrent virtual resource utilization. The present dissertation proposes a comprehensive QoS-aware architecture that addresses the challenges of integrating cloud infrastructure with edge nodes in IoT applications. The architecture provides end-to-end QoS support by incorporating several components for managing physical and virtual resources. The proposed architecture features: i) a multilevel middleware for resolving the convergence between Operational Technology (OT) and Information Technology (IT), ii) an end-to-end QoS management approach compliant with the Time-Sensitive Networking (TSN) standard, iii) new approaches for virtualized network environments, such as running TSN-based applications under Ultra-low Latency (ULL) constraints in virtual and 5G environments, and iv) an accelerated and deterministic container overlay network architecture. Additionally, the QoS-aware architecture includes two novel middleware components: i) a middleware that transparently integrates multiple acceleration technologies in heterogeneous Edge contexts and ii) a QoS-aware middleware for Serverless platforms that leverages the coordination of various QoS mechanisms and a virtualized Function-as-a-Service (FaaS) invocation stack to manage end-to-end QoS metrics. Finally, all architecture components were tested and evaluated by leveraging realistic testbeds, demonstrating the efficacy of the proposed solutions.
Abstract:
The pervasive availability of connected devices in any industrial and societal sector is pushing for an evolution of the well-established cloud computing model. The emerging paradigm of the cloud continuum embraces this decentralization trend and envisions virtualized computing resources physically located between traditional datacenters and data sources. By totally or partially executing closer to the network edge, applications can react more quickly to events, thus enabling advanced forms of automation and intelligence. However, these applications also induce new data-intensive workloads with low-latency constraints that require the adoption of specialized resources, such as high-performance communication options (e.g., RDMA, DPDK, XDP). Unfortunately, cloud providers still struggle to integrate these options into their infrastructures. This risks undermining the principle of generality that underlies the cloud computing economy of scale, by forcing developers to tailor their code to low-level APIs, non-standard programming models, and static execution environments. This thesis proposes a novel system architecture to empower cloud platforms across the whole cloud continuum with Network Acceleration as a Service (NAaaS). To provide commodity yet efficient access to acceleration, this architecture defines a layer of agnostic high-performance I/O APIs, exposed to applications and clearly separated from the heterogeneous protocols, interfaces, and hardware devices that implement it. A novel system component embodies this decoupling by offering a set of agnostic OS features to applications: memory management for zero-copy transfers, asynchronous I/O processing, and efficient packet scheduling. This thesis also explores the design space of possible implementations of this architecture by proposing two reference middleware systems and by employing them to support interactive use cases in the cloud continuum: a serverless platform and an Industry 4.0 scenario. A detailed discussion and a thorough performance evaluation demonstrate that the proposed architecture is suitable for enabling the easy-to-use, flexible integration of modern network acceleration into next-generation cloud platforms.
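A minimal sketch, with hypothetical names, of what an acceleration-agnostic asynchronous I/O API of this kind could look like in Java; the actual APIs proposed in the thesis may well differ:

    import java.nio.ByteBuffer;
    import java.util.concurrent.CompletableFuture;

    // Hypothetical acceleration-agnostic I/O API: applications code against this
    // interface, while a backend (plain sockets, RDMA, DPDK, XDP, ...) implements it.
    interface AcceleratedChannel {
        CompletableFuture<Void> send(ByteBuffer payload);   // asynchronous, zero-copy capable
        CompletableFuture<ByteBuffer> receive();
    }

    // Trivial in-process backend used only to show the decoupling.
    class LoopbackChannel implements AcceleratedChannel {
        public CompletableFuture<Void> send(ByteBuffer payload) {
            return CompletableFuture.completedFuture(null);
        }
        public CompletableFuture<ByteBuffer> receive() {
            return CompletableFuture.completedFuture(ByteBuffer.allocate(0));
        }

        public static void main(String[] args) {
            AcceleratedChannel ch = new LoopbackChannel();
            ch.send(ByteBuffer.wrap("ping".getBytes())).join();
            System.out.println("received " + ch.receive().join().remaining() + " bytes");
        }
    }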
Abstract:
Recent technological advancements have played a key role in seamlessly integrating cloud, edge, and Internet of Things (IoT) technologies, giving rise to the Cloud-to-Thing Continuum paradigm. This cloud model connects many heterogeneous resources that generate a large amount of data and collaborate to deliver next-generation services. While it has the potential to reshape several application domains, the number of connected entities remarkably broadens the security attack surface. One of the main problems is the lack of security measures that adapt to the dynamic and evolving conditions of the Cloud-to-Thing Continuum. To address this challenge, this dissertation proposes novel adaptable security mechanisms. Adaptable security is the capability of security controls, systems, and protocols to dynamically adjust to changing conditions and scenarios. However, since the design and development of novel security mechanisms can be explored from different perspectives and levels, we concentrate on threat modeling and access control. The contributions of the thesis can be summarized as follows. First, we introduce a model-based methodology that secures the design of edge and cyber-physical systems. This solution identifies threats, security controls, and moving target defense techniques based on system features. Then, we focus on access control management. Since access control policies are subject to modification, we evaluate how they can be efficiently shared among distributed areas, highlighting the effectiveness of distributed ledger technologies. Furthermore, we propose a risk-based authorization middleware, which adjusts permissions based on real-time data, and a federated learning framework that enhances trustworthiness by weighting each client's contribution according to the quality of its partial model. Finally, since authorization revocation is another critical concern, we present an efficient revocation scheme for verifiable credentials in IoT networks, featuring decentralization and requiring minimal storage and computing capabilities. All the mechanisms have been evaluated under different conditions, demonstrating their adaptability to the Cloud-to-Thing Continuum landscape.
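As a toy illustration of the risk-based authorization idea (hypothetical names and thresholds, not the middleware proposed in the thesis), a permission decision can depend on a real-time risk score rather than on a static rule:

    // Hypothetical risk-based authorization check: the permission depends on a
    // real-time risk score instead of a fixed allow/deny rule.
    class RiskBasedAuthorizer {
        private final double threshold;

        RiskBasedAuthorizer(double threshold) { this.threshold = threshold; }

        // riskScore in [0,1], e.g. derived from device posture, location, time of day.
        boolean authorize(String subject, String resource, double riskScore) {
            boolean granted = riskScore < threshold;
            System.out.printf("%s -> %s: risk=%.2f, granted=%b%n",
                    subject, resource, riskScore, granted);
            return granted;
        }

        public static void main(String[] args) {
            RiskBasedAuthorizer authz = new RiskBasedAuthorizer(0.5);
            authz.authorize("sensor-42", "telemetry/write", 0.2);  // allowed
            authz.authorize("sensor-42", "telemetry/write", 0.8);  // denied
        }
    }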
Abstract:
From the very beginning we have been used to interacting with the environment around us, using the physical objects present around us to satisfy our needs; but what if there were more than this? What if we could have around us objects that are not, strictly speaking, physical bodies, yet behave consistently with the surrounding environment, so that no difference is perceived between them and a real object? This is what is today called Mixed Reality, a mixed reality made visible through dedicated devices, in which it is possible to interact at the same time with physical objects and with digital objects called holograms. A fundamental aspect of this class of systems is certainly collaboration. This thesis examines the landscape of state-of-the-art technologies that enable Collaborative Mixed Reality experiences and, above all, focuses on the design of an actual local-network architecture that makes a shared system possible. After applying various strategies, the results obtained from rigorous measurements are evaluated in order to scientifically determine the performance of the designed architecture and to draw conclusions, considering analogies and differences with respect to other possible solutions.
Abstract:
Data are an invaluable resource for every organization. On the one hand, this information must be managed through classic operational systems; on the other, it must be analyzed to obtain insights that can guide business decisions. One of the fundamental tools supporting business decisions is the data warehouse. This work is the result of an internship carried out with the company Injenia S.r.l. The internship focused on the optimization of a data warehouse that the company sells as an add-on module of a software product named Interacta. Over time, this data warehouse, Interacta Analytics, has shown significant architectural and performance issues. The architecture currently used to create and manage the data within Interacta Analytics follows a batch approach; therefore, the main goal of the study is to find alternative batch solutions that save both money and time, while also exploring the possibility of a transition to a streaming architecture. The tools used in this investigation also had to stay in line with the technologies used for Interacta, namely the services of the Google Cloud Platform. After a brief discussion of the theoretical background of this area, the work focuses on how the main software works and on the logical structure of the analytics module. Finally, the experimental work is presented: first an analysis of the main issues of the as-is system, then the formulation and evaluation of four batch and two streaming improvement hypotheses. As stated in the conclusions, these greatly improve the performance of the analytics system in terms of processing time, total cost, and architectural simplicity, in particular thanks to the serverless container and FaaS services of Google's cloud platform.
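For illustration, here is a minimal Cloud Functions-style Java HTTP function of the kind such a FaaS service runs (hypothetical names; it assumes the Google Functions Framework API and is not taken from Interacta Analytics):

    import com.google.cloud.functions.HttpFunction;
    import com.google.cloud.functions.HttpRequest;
    import com.google.cloud.functions.HttpResponse;

    // Hypothetical FaaS step: an HTTP-triggered function that could start or serve
    // one piece of the analytics pipeline on demand, scaling to zero when idle.
    public class RefreshAnalytics implements HttpFunction {
        @Override
        public void service(HttpRequest request, HttpResponse response) throws Exception {
            String table = request.getFirstQueryParameter("table").orElse("default");
            // ... here the real function would start the refresh of `table` ...
            response.getWriter().write("refresh scheduled for " + table + "\n");
        }
    }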
Abstract:
Modern society is now facing significant difficulties in attempting to preserve its architectural heritage. Numerous challenges consequently arise when it comes to documentation, preservation, and restoration. Fortunately, new perspectives on architectural heritage are emerging owing to the rapid development of digitalization. This, in turn, presents new challenges for architects, restorers, and specialists. It has also changed the way they approach the study of existing heritage, moving from conventional 2D drawings in response to the increasing requirement for 3D representations. Recently, Building Information Modelling for historic buildings (HBIM) has emerged as a trend for interconnecting geometrical and informational data. Currently, the latest 3D geomatics techniques, based on 3D laser scanning and enhanced photogrammetry, together with the continuous improvement in the BIM industry, allow for an enhanced 3D digital reconstruction of historical and existing buildings. This research study aimed to develop an integrated workflow for the 3D digital reconstruction of heritage buildings starting from a point cloud. The Pieve of San Michele in Acerboli’s Church in Santarcangelo Di Romagna (6th century) served as the test bed. The point cloud was used as the essential reference to model the BIM geometry using Autodesk Revit® 2022. To validate the accuracy of the model, the Deviation Analysis Method was employed using the CloudCompare software to determine the degree of deviation between the HBIM model and the point cloud. The findings showed a very promising average distance between the HBIM model and the point cloud. The approach followed in this study demonstrated the viability of producing precise BIM geometry from point clouds.
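As a simplified illustration of the deviation analysis idea (not CloudCompare's actual algorithm, which relies on cloud-to-mesh distances and spatial indexing), the mean cloud-to-model deviation can be approximated as the average nearest-neighbour distance between the two point sets:

    // Brute-force illustration of a cloud-to-model deviation check:
    // for each surveyed point, take the distance to the closest model point and average.
    class DeviationAnalysis {
        static double distance(double[] a, double[] b) {
            double dx = a[0] - b[0], dy = a[1] - b[1], dz = a[2] - b[2];
            return Math.sqrt(dx * dx + dy * dy + dz * dz);
        }

        static double meanDeviation(double[][] surveyed, double[][] model) {
            double sum = 0;
            for (double[] p : surveyed) {
                double best = Double.MAX_VALUE;
                for (double[] q : model) best = Math.min(best, distance(p, q));
                sum += best;
            }
            return sum / surveyed.length;
        }

        public static void main(String[] args) {
            double[][] cloud = {{0, 0, 0}, {1, 0, 0}};
            double[][] model = {{0, 0, 0.01}, {1, 0, 0.02}};
            System.out.printf("mean deviation = %.3f m%n", meanDeviation(cloud, model));
        }
    }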
Abstract:
This thesis work focuses on the development of an application solution for the integration of software systems based on technologies considered legacy. In particular, an integration solution was studied for the popular SAP ERP system on the OpenShift cloud platform. The solution is organized into several layers based on the architecture proposed by Gartner for the Digital Integration Hub, and was developed using industry-leading open-source technologies and advanced cloud technologies.
Abstract:
Trees from tropical montane cloud forest (TMCF) display very dynamic patterns of water use. They are capable of downward water transport toward the soil during leaf-wetting events, likely a consequence of foliar water uptake (FWU), as well as high rates of night-time transpiration (E_night) during drier nights. These two processes might represent important sources of water losses and gains to the plant, but little is known about the environmental factors controlling these water fluxes. We evaluated how contrasting atmospheric and soil water conditions control diurnal, nocturnal and seasonal dynamics of sap flow in Drimys brasiliensis (Miers), a common Neotropical cloud forest species. We monitored the seasonal variation of soil water content, micrometeorological conditions and sap flow of D. brasiliensis trees in the field during wet and dry seasons. We also conducted a greenhouse experiment exposing D. brasiliensis saplings under contrasting soil water conditions to deuterium-labelled fog water. We found that during the night D. brasiliensis possesses heightened stomatal sensitivity to soil drought and vapour pressure deficit, which reduces night-time water loss. Leaf-wetting events had a strong suppressive effect on tree transpiration (E). Foliar water uptake increased in magnitude with drier soil and during longer leaf-wetting events. The difference between diurnal and nocturnal stomatal behaviour in D. brasiliensis could be attributed to an optimization of carbon gain when leaves are dry, as well as minimization of nocturnal water loss. The leaf-wetting events, on the other hand, seem important to the D. brasiliensis water balance, especially during soil droughts, both by suppressing tree transpiration (E) and as a small additional water supply through FWU. Our results suggest that decreases in leaf-wetting events in TMCF might increase D. brasiliensis water loss and decrease its water gains, which could compromise its ecophysiological performance and survival during dry periods.
Abstract:
A compact frequency standard based on an expanding cold ¹³³Cs cloud is under development in our laboratory. In a first experiment, cold Cs atoms were prepared by a magneto-optical trap in a vapor cell, and a microwave antenna was used to transmit the radiation for the clock transition. The signal obtained from the fluorescence of the expanding cold-atom cloud is used to lock a microwave chain. In this way the overall system stability is evaluated. A theoretical model based on a two-level system interacting with the two microwave pulses enables interpretation of the observed features, especially the poor Ramsey fringe contrast. (C) 2008 Optical Society of America.
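For context, a textbook two-level Ramsey expression for ideal π/2 pulses (a standard result, not taken from the paper) gives the clock-transition probability

    P(δ) ≈ ½ [1 + C cos(δ·T)],

where δ is the microwave detuning from the clock transition, T the free-evolution time between the two pulses, and C ≤ 1 the fringe contrast; the poor contrast observed corresponds to C well below 1.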
Abstract:
Context. The analysis of ages and metallicities of star clusters in the Magellanic Clouds provides information for studies on the chemical evolution of the Clouds and other dwarf irregular galaxies. Aims. The aim is to derive ages and metallicities from integrated spectra of 14 star clusters in the Small Magellanic Cloud, including a few intermediate/old age star clusters. Methods. Making use of a full-spectrum fitting technique, we compared the integrated spectra of the sample clusters to three different sets of single stellar population models, using two fitting codes available in the literature. Results. We derive the ages and metallicities of 9 intermediate/old age clusters, some of them previously unstudied, and 5 young clusters. Conclusions. We point out the interest of the clusters newly identified as being of intermediate/old age: HW1, NGC 152, Lindsay 3, Lindsay 11, and Lindsay 113. We also confirm the old ages of NGC 361, NGC 419, Kron 3, and of the very well-known oldest SMC cluster, NGC 121.
Abstract:
The formation of clouds is an important process for the atmosphere, the hydrological cycle, and climate, but some aspects of it are not completely understood. In this work, we show that microorganisms might affect cloud formation without leaving the Earth's surface by releasing biological surfactants (or biosurfactants) into the environment, which make their way into atmospheric aerosols and could significantly enhance their activation into cloud droplets. In the first part of this work, the cloud-nucleating efficiency of standard biosurfactants was characterized and found to be better than that of any aerosol material studied so far, including inorganic salts. These results identify molecular structures that give organic compounds exceptional cloud-nucleating properties. In the second part, atmospheric aerosols were sampled at different locations: a temperate coastal site, a marine site, a temperate forest, and a tropical forest. Their surface tension was measured and found to be below 30 mN/m, the lowest reported for aerosols to our knowledge. This very low surface tension was attributed to the presence of biosurfactants, the only natural substances able to reach such low values. The presence of strong microbial surfactants in aerosols would be consistent with the organic fractions of exceptional cloud-nucleating efficiency recently found in aerosols, and with the correlations between algal blooms and cloud cover reported in the Southern Ocean. The results of this work also suggest that biosurfactants might be common in aerosols and thus of global relevance. If this is confirmed, a new role for microorganisms in the atmosphere and climate could be identified.
Abstract:
Atmospheric aerosol particles serving as cloud condensation nuclei (CCN) are key elements of the hydrological cycle and climate. We have measured and characterized CCN at water vapor supersaturations in the range of S = 0.10-0.82% in pristine tropical rainforest air during the AMAZE-08 campaign in central Amazonia. The effective hygroscopicity parameters describing the influence of chemical composition on the CCN activity of aerosol particles varied in the range of κ ≈ 0.1-0.4 (0.16 ± 0.06, arithmetic mean and standard deviation). The overall median value of κ ≈ 0.15 was a factor of two lower than the values typically observed for continental aerosols in other regions of the world. Aitken mode particles were less hygroscopic than accumulation mode particles (κ ≈ 0.1 at D ≈ 50 nm; κ ≈ 0.2 at D ≈ 200 nm), which is in agreement with earlier hygroscopicity tandem differential mobility analyzer (H-TDMA) studies. The CCN measurement results are consistent with aerosol mass spectrometry (AMS) data, showing that the organic mass fraction (f_org) was on average as high as ~90% in the Aitken mode (D ≤ 100 nm) and decreased with increasing particle diameter in the accumulation mode (~80% at D ≈ 200 nm). The κ values exhibited a negative linear correlation with f_org (R² = 0.81), and extrapolation yielded the following effective hygroscopicity parameters for organic and inorganic particle components: κ_org ≈ 0.1, which can be regarded as the effective hygroscopicity of biogenic secondary organic aerosol (SOA), and κ_inorg ≈ 0.6, which is characteristic for ammonium sulfate and related salts. Both the size dependence and the temporal variability of effective particle hygroscopicity could be parameterized as a function of AMS-based organic and inorganic mass fractions (κ_p = κ_org · f_org + κ_inorg · f_inorg). The CCN number concentrations predicted with κ_p were in fair agreement with the measurement results (~20% average deviation). The median CCN number concentrations at S = 0.10-0.82% ranged from N_CCN,0.10 ≈ 35 cm⁻³ to N_CCN,0.82 ≈ 160 cm⁻³, the median concentration of aerosol particles larger than 30 nm was N_CN,30 ≈ 200 cm⁻³, and the corresponding integral CCN efficiencies were in the range of N_CCN,0.10/N_CN,30 ≈ 0.1 to N_CCN,0.82/N_CN,30 ≈ 0.8. Although the number concentrations and hygroscopicity parameters were much lower in pristine rainforest air, the integral CCN efficiencies observed were similar to those in highly polluted megacity air. Moreover, model calculations of N_CCN,S assuming an approximate global average value of κ ≈ 0.3 for continental aerosols led to systematic overpredictions, but the average deviations exceeded ~50% only at low water vapor supersaturation (0.1%) and low particle number concentrations (≤ 100 cm⁻³). Model calculations assuming a constant aerosol size distribution led to higher average deviations at all investigated levels of supersaturation: ~60% for the campaign average distribution and ~1600% for a generic remote continental size distribution.
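As a quick consistency check using only the values quoted above (taking the Aitken-mode composition f_org ≈ 0.9 and f_inorg ≈ 0.1 as an illustrative case), the parameterization gives

    κ_p = κ_org · f_org + κ_inorg · f_inorg ≈ 0.1 × 0.9 + 0.6 × 0.1 = 0.15,

which matches the overall median value of κ ≈ 0.15 reported for the campaign.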
These findings confirm earlier studies suggesting that aerosol particle number and size are the major predictors for the variability of the CCN concentration in continental boundary layer air, followed by particle composition and hygroscopicity as relatively minor modulators. Depending on the required and applicable level of detail, the information and parameterizations presented in this paper should enable efficient description of the CCN properties of pristine tropical rainforest aerosols of Amazonia in detailed process models as well as in large-scale atmospheric and climate models.