792 results for cloud-based computing
Abstract:
This talk, which is based on our newest findings and experiences from research and industrial projects, addresses one of the most relevant challenges for the decade to come: how to integrate the Internet of Things with software, people, and processes, considering modern Cloud Computing and elasticity principles. Elasticity is seen as one of the main characteristics of Cloud Computing today. Is elasticity simply scalability on steroids? This talk addresses the main principles of elasticity, presents a fresh look at this problem, and examines how to integrate people, software services, and things into one composite system that can be modeled, programmed, and deployed on a large scale in an elastic way. This novel paradigm has major consequences for how we view, build, design, and deploy ultra-large-scale distributed systems.
Abstract:
Introduction: Diet-therapy software packages are currently a basic tool in the dietary treatment of patients, whether from a physiological and/or pathological standpoint. New technologies and research in this field have fostered the emergence of new dietetic-nutritional management applications that facilitate the management of the diet-therapy practice. Objectives: To comparatively study the main diet-therapy applications available on the market, in order to give professional dietetics and nutrition users criteria for selecting one of their principal tools. Results: From our point of view, dietopro.com is, together with some of the other diet-therapy applications analysed, one of the most complete for managing a nutrition clinic. Conclusion: Depending on the user's needs, there are different dietetic software packages to choose from. We conclude that the selection of one or another depends on the professional's needs.
Abstract:
The increasing need for computational power in areas such as weather simulation, genomics, and Internet applications has led to the sharing of geographically distributed and heterogeneous resources from commercial data centers and scientific institutions. Research in the areas of utility, grid, and cloud computing, together with improvements in network and hardware virtualization, has resulted in methods to locate and use resources to rapidly provision virtual environments in a flexible manner, while lowering costs for consumers and providers. However, there is still a lack of methodologies to enable efficient and seamless sharing of resources among institutions. In this work, we concentrate on the problem of executing parallel scientific applications across distributed resources belonging to separate organizations. Our approach can be divided into three main points. First, we define and implement an interoperable grid protocol to distribute job workloads among partners with different middleware and execution resources. Second, we research and implement different policies for virtual resource provisioning and job-to-resource allocation, taking advantage of their cooperation to improve execution cost and performance. Third, we explore the consequences of on-demand provisioning and allocation on the problem of site selection for the execution of parallel workloads, and propose new strategies to reduce job slowdown and overall cost.
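A minimal sketch of how a cost- and performance-aware job-to-resource allocation policy might look; the Site fields, weights, and prices are hypothetical, and the thesis's actual provisioning and allocation policies are not reproduced here.

```python
# Illustrative sketch (not the thesis's policy): greedy site selection that weighs
# estimated queue wait, runtime, and monetary cost for a parallel job.
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    free_cores: int
    cost_per_core_hour: float   # e.g. a cloud price; 0.0 for an owned cluster
    est_wait_h: float           # estimated queueing delay in hours
    rel_speed: float            # relative per-core speed vs. a reference CPU

def pick_site(sites, cores_needed, base_runtime_h, cost_weight=0.5):
    """Return the site minimizing a weighted sum of completion time and cost."""
    best, best_score = None, float("inf")
    for s in sites:
        if s.free_cores < cores_needed:
            continue  # cannot start the parallel job without enough cores
        runtime = base_runtime_h / s.rel_speed
        completion = s.est_wait_h + runtime
        cost = runtime * cores_needed * s.cost_per_core_hour
        score = (1 - cost_weight) * completion + cost_weight * cost
        if score < best_score:
            best, best_score = s, score
    return best

if __name__ == "__main__":
    sites = [Site("local-cluster", 64, 0.0, 2.0, 1.0),
             Site("cloud-provider", 256, 0.09, 0.1, 0.8)]
    print(pick_site(sites, cores_needed=32, base_runtime_h=4.0).name)
```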
Abstract:
Clouds are important in weather prediction, climate studies, and aviation safety. Important parameters include cloud height, type, and cover percentage. In this paper, recent improvements in the development of a low-cost cloud height measurement setup are described. It is based on stereo vision with consumer digital cameras. The positioning of the cameras is calibrated using the positions of stars in the night sky. An experimental uncertainty analysis of the calibration parameters is performed. Cloud height measurement results are presented and compared with LIDAR measurements.
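For illustration, the textbook rectified-stereo relation such a setup ultimately relies on (height from baseline, focal length, and disparity); the numbers are placeholders, and the paper's star-based calibration is not reproduced here.

```python
# Minimal sketch of the rectified-stereo relation for cloud-base height:
# height = focal_length * baseline / disparity. Values below are hypothetical.
def cloud_height_m(baseline_m, focal_px, disparity_px):
    """Height above an upward-looking, rectified camera pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Example: 50 m baseline, 2800 px focal length, 12 px disparity -> ~11.7 km
print(round(cloud_height_m(50.0, 2800.0, 12.0)))
```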
Abstract:
Embedding intelligence in extreme-edge devices allows distilling the raw data acquired from sensors into actionable information, directly on IoT end-nodes. This computing paradigm, in which end-nodes no longer depend entirely on the Cloud, offers undeniable benefits and drives a large research area (TinyML) aimed at deploying leading Machine Learning (ML) algorithms on microcontroller-class devices. To fit the limited memory storage capability of these tiny platforms, full-precision Deep Neural Networks (DNNs) are compressed into Quantized Neural Networks (QNNs) by representing their data down to byte and sub-byte formats in the integer domain. However, the current generation of microcontroller systems can barely cope with the computing requirements of QNNs. This thesis tackles the challenge from many perspectives, presenting solutions at both the software and hardware levels and exploiting parallelism, heterogeneity, and software programmability to guarantee high flexibility and high energy-performance proportionality. The first contribution, PULP-NN, is an optimized software computing library for QNN inference on parallel ultra-low-power (PULP) clusters of RISC-V processors, showing one order of magnitude improvement in performance and energy efficiency compared to current State-of-the-Art (SoA) STM32 microcontroller systems (MCUs) based on ARM Cortex-M cores. The second contribution is XpulpNN, a set of RISC-V domain-specific instruction set architecture (ISA) extensions for sub-byte integer arithmetic. The solution, including the ISA extensions and the micro-architecture supporting them, achieves energy efficiency comparable with dedicated DNN accelerators and surpasses the efficiency of SoA ARM Cortex-M based MCUs, such as the low-end STM32M4 and the high-end STM32H7 devices, by up to three orders of magnitude. To overcome the Von Neumann bottleneck while guaranteeing the highest flexibility, the final contribution integrates an Analog In-Memory Computing accelerator into the PULP cluster, creating a fully programmable heterogeneous fabric that demonstrates end-to-end inference of SoA MobileNetV2 models, showing two orders of magnitude performance improvement over current SoA analog/digital solutions.
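As a rough illustration of the arithmetic such QNN kernels perform (not PULP-NN's actual code), here is an int8 dot product with a 32-bit accumulator and a fixed-point requantization step; the scale and shift values are arbitrary.

```python
# Illustrative int8 QNN building block: dot product accumulated in 32 bits,
# then requantized back to int8 with a multiplier, a right shift, and clipping.
import numpy as np

def qnn_dot(x_q, w_q, mult, shift, zero_point=0):
    """int8 x int8 -> int32 accumulate -> requantize to int8."""
    acc = np.dot(x_q.astype(np.int32), w_q.astype(np.int32))   # 32-bit accumulator
    out = ((acc * mult) >> shift) + zero_point                  # fixed-point rescale
    return np.clip(out, -128, 127).astype(np.int8)

x = np.random.randint(-128, 128, 64, dtype=np.int8)
w = np.random.randint(-128, 128, 64, dtype=np.int8)
print(qnn_dot(x, w, mult=3, shift=10))
```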
Abstract:
Analog In-Memory Computing (AIMC) has been proposed in the context of beyond-Von Neumann architectures as a valid strategy to reduce the energy consumption and latency of internal data transfers and to improve compute efficiency. The aim of AIMC is to perform computations within the memory unit, typically leveraging the physical features of memory devices. Among resistive Non-Volatile Memories (NVMs), Phase-Change Memory (PCM) has become a promising technology due to its intrinsic capability to store multilevel data. Hence, PCM technology is currently being investigated to enhance the possibilities and applications of AIMC. This thesis aims at exploring the potential of new PCM-based architectures as in-memory computational accelerators. In a first step, a preliminary experimental characterization of PCM devices was carried out from an AIMC perspective. PCM cell non-idealities, such as time drift, noise, and non-linearity, were studied to develop a dedicated multilevel programming algorithm. Measurement-based simulations were then employed to evaluate the feasibility of PCM-based operations in the fields of Deep Neural Networks (DNNs) and Structural Health Monitoring (SHM). Moreover, a first test chip was designed and tested to evaluate the hardware implementation of Multiply-and-Accumulate (MAC) operations employing PCM cells. This prototype experimentally demonstrates the possibility of reaching 95% MAC accuracy with circuit-level compensation of cell time drift and non-linearity. Finally, empirical circuit behavior models were included in simulations to assess the use of this technology in specific DNN applications and to enhance the potential of this innovative computation approach.
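A hedged sketch of the analog MAC idea combined with a generic PCM conductance-drift law; the drift exponent and conductance ranges are typical literature values, not the thesis's measured parameters or compensation scheme.

```python
# Illustrative analog in-memory MAC with PCM conductance drift (generic model).
import numpy as np

def drifted_conductance(g0, t_s, t0_s=1.0, nu=0.05):
    """Empirical PCM drift law G(t) = G0 * (t/t0)^(-nu); nu ~ 0.05 is a typical value."""
    return g0 * (t_s / t0_s) ** (-nu)

def aimc_mac(v_in, g_matrix, t_s):
    """Crossbar output currents: I = G(t) @ V (ideal Ohm/Kirchhoff model)."""
    return drifted_conductance(g_matrix, t_s) @ v_in

rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 20e-6, size=(4, 8))   # programmed conductances, in siemens
v = rng.uniform(0.0, 0.2, size=8)           # read voltages
print(aimc_mac(v, G, t_s=3600.0))           # currents one hour after programming
```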
Abstract:
The recent trend of moving Cloud Computing capabilities to the Edge of the network is reshaping how applications and their supporting middleware are designed, deployed, and operated. This new model envisions a continuum of virtual resources between the traditional cloud and the network edge, which is potentially better suited to meet the heterogeneous Quality of Service (QoS) requirements of diverse application domains and next-generation applications. Several classes of advanced Internet of Things (IoT) applications, e.g., in the industrial manufacturing domain, span a wide range of use cases with heterogeneous QoS requirements and call for QoS management systems to guarantee/control performance indicators, even in the presence of real-world factors such as limited bandwidth and concurrent virtual resource utilization. The present dissertation proposes a comprehensive QoS-aware architecture that addresses the challenges of integrating cloud infrastructure with edge nodes in IoT applications. The architecture provides end-to-end QoS support by incorporating several components for managing physical and virtual resources. The proposed architecture features: i) a multilevel middleware for resolving the convergence between Operational Technology (OT) and Information Technology (IT), ii) an end-to-end QoS management approach compliant with the Time-Sensitive Networking (TSN) standard, iii) new approaches for virtualized network environments, such as running TSN-based applications under Ultra-low Latency (ULL) constraints in virtual and 5G environments, and iv) an accelerated and deterministic container overlay network architecture. Additionally, the QoS-aware architecture includes two novel middleware components: i) a middleware that transparently integrates multiple acceleration technologies in heterogeneous Edge contexts and ii) a QoS-aware middleware for Serverless platforms that coordinates various QoS mechanisms and the virtualized Function-as-a-Service (FaaS) invocation stack to manage end-to-end QoS metrics. Finally, all architecture components were tested and evaluated on realistic testbeds, demonstrating the efficacy of the proposed solutions.
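A hypothetical, much-simplified illustration of QoS-aware placement along the cloud-edge continuum (routing a request to the tier whose estimated latency fits a budget); the tier names, numbers, and the policy itself are assumptions, not components of the proposed architecture.

```python
# Hedged sketch: pick the first tier whose estimated end-to-end latency fits
# the caller's QoS budget; otherwise fall back to the fastest tier.
from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    network_rtt_ms: float
    proc_ms: float
    load: float  # 0..1 utilization, inflates processing time

def estimate_latency_ms(tier: Tier) -> float:
    return tier.network_rtt_ms + tier.proc_ms / max(1e-3, 1.0 - tier.load)

def place(tiers, budget_ms):
    """Prefer tiers in the given order (edge first)."""
    for t in tiers:
        if estimate_latency_ms(t) <= budget_ms:
            return t.name
    return min(tiers, key=estimate_latency_ms).name

tiers = [Tier("edge-node", 2.0, 8.0, 0.5), Tier("cloud-region", 35.0, 3.0, 0.2)]
print(place(tiers, budget_ms=20.0))   # -> "edge-node" (18 ms fits the 20 ms budget)
```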
Abstract:
Recent technological advancements have played a key role in seamlessly integrating cloud, edge, and Internet of Things (IoT) technologies, giving rise to the Cloud-to-Thing Continuum paradigm. This cloud model connects many heterogeneous resources that generate large amounts of data and collaborate to deliver next-generation services. While it has the potential to reshape several application domains, the number of connected entities considerably broadens the security attack surface. One of the main problems is the lack of security measures able to adapt to the dynamic and evolving conditions of the Cloud-to-Thing Continuum. To address this challenge, this dissertation proposes novel adaptable security mechanisms. Adaptable security is the capability of security controls, systems, and protocols to dynamically adjust to changing conditions and scenarios. However, since the design and development of novel security mechanisms can be explored from different perspectives and levels, we focus on threat modeling and access control. The contributions of the thesis can be summarized as follows. First, we introduce a model-based methodology that secures the design of edge and cyber-physical systems. This solution identifies threats, security controls, and moving target defense techniques based on system features. Then, we focus on access control management. Since access control policies are subject to modification, we evaluate how they can be efficiently shared among distributed areas, highlighting the effectiveness of distributed ledger technologies. Furthermore, we propose a risk-based authorization middleware that adjusts permissions based on real-time data, and a federated learning framework that enhances trustworthiness by weighting each client's contribution according to the quality of its partial model. Finally, since authorization revocation is another critical concern, we present an efficient revocation scheme for verifiable credentials in IoT networks, featuring decentralization and demanding minimal storage and computing capabilities. All the mechanisms have been evaluated under different conditions, proving their adaptability to the Cloud-to-Thing Continuum landscape.
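An illustrative risk-based authorization check in the spirit described above; the risk factors, weights, and thresholds are invented for the example and do not come from the thesis's middleware.

```python
# Sketch: permissions tighten as the contextual risk score grows (hypothetical factors).
def risk_score(context):
    """Combine weighted contextual risk factors into a 0..1 score."""
    weights = {"unknown_device": 0.4, "new_location": 0.3,
               "off_hours": 0.1, "failed_logins": 0.2}
    return min(1.0, sum(w for k, w in weights.items() if context.get(k)))

def authorize(action, context, deny_above=0.7, step_up_above=0.4):
    score = risk_score(context)
    if score >= deny_above:
        return "deny"
    if score >= step_up_above:
        return "require_mfa"     # step-up authentication before granting access
    return "permit"

print(authorize("read:telemetry", {"new_location": True, "off_hours": True}))
```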
Abstract:
The scientific success of the LHC experiments at CERN depends heavily on the availability of computing resources that efficiently store, process, and analyse the amount of data collected every year. This is ensured by the Worldwide LHC Computing Grid infrastructure, which connects computing centres distributed all over the world through high-performance networks. The LHC has an ambitious experimental programme for the coming years, which includes large investments and improvements both in the detector hardware and in the software and computing systems, in order to deal with the huge increase in the event rate expected from the High Luminosity LHC (HL-LHC) phase and, consequently, with the huge amount of data that will be produced. In recent years the role of Artificial Intelligence has become increasingly relevant in the High Energy Physics (HEP) world. Machine Learning (ML) and Deep Learning algorithms have been successfully used in many areas of HEP, such as online and offline reconstruction programs, detector simulation, object reconstruction and identification, and Monte Carlo generation, and they will surely be crucial in the HL-LHC phase. This thesis aims at contributing to a CMS R&D project concerning an ML "as a Service" solution for HEP needs (MLaaS4HEP). It consists of a data service able to perform an entire ML pipeline (reading data, processing data, training ML models, serving predictions) in a completely model-agnostic fashion, directly using ROOT files of arbitrary size from local or distributed data sources. This framework has been extended with new features in the data preprocessing phase, allowing more flexibility for the user. Since the MLaaS4HEP framework is experiment agnostic, the ATLAS Higgs Boson ML challenge has been chosen as the physics use case, with the aim of testing MLaaS4HEP and the contributions made in this work.
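A minimal, model-agnostic sketch of such a pipeline using the public uproot and scikit-learn APIs; the file, tree, and branch names are hypothetical, and this is not MLaaS4HEP's own code.

```python
# Sketch: read features from a ROOT TTree, train a classifier, serve predictions.
import uproot                                  # reads ROOT files without ROOT itself
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def read_root(path, tree="Events", branches=("feat1", "feat2"), label="label"):
    with uproot.open(path) as f:
        arrays = f[tree].arrays(list(branches) + [label], library="np")
    X = np.column_stack([arrays[b] for b in branches])
    return X, arrays[label]

def train_and_predict(train_path, test_path):
    X_tr, y_tr = read_root(train_path)
    X_te, _ = read_root(test_path)
    model = GradientBoostingClassifier().fit(X_tr, y_tr)
    return model.predict_proba(X_te)[:, 1]    # signal-like probability per event

# scores = train_and_predict("train.root", "test.root")
```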
Abstract:
Serverless is a cloud computing paradigm that is increasingly widespread nowadays; it is based on writing stateless functions, since the activities related to their maintenance and scalability are handled by the cloud service provider. The developer therefore only has to focus on building the product. This work opens with an analysis of cloud computing, introducing the main application models, moving on to cloud services with their various subcategories and related uses, and then arriving at serverless. We chose to focus on the Google platform and its suite, Google Cloud Platform. In particular, we discuss Google Cloud Functions, a recent serverless offering from the company that is under continuous development. Starting from the first releases, we analyse the development environment, the use cases, advantages, and disadvantages, then discuss portability, and show some examples of their use.
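For reference, a minimal HTTP-triggered Cloud Function in Python using the public functions-framework API; it is a generic example rather than one taken from this work.

```python
# A minimal HTTP-triggered Google Cloud Function in Python.
import functions_framework

@functions_framework.http
def hello_http(request):
    """Responds to an HTTP request; 'request' is a Flask Request object."""
    name = request.args.get("name", "world")
    return f"Hello, {name}!"

# Local test:  functions-framework --target=hello_http
# Deploy:      gcloud functions deploy hello_http --runtime=python311 --trigger-http
```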
Abstract:
A compact frequency standard based on an expanding cold ¹³³Cs cloud is under development in our laboratory. In a first experiment, cold Cs atoms were prepared by a magneto-optical trap in a vapor cell, and a microwave antenna was used to transmit the radiation for the clock transition. The signal obtained from the fluorescence of the expanding cold-atom cloud is used to lock a microwave chain, and in this way the overall system stability is evaluated. A theoretical model based on a two-level system interacting with the two microwave pulses enables interpretation of the observed features, especially the poor Ramsey fringe contrast. (C) 2008 Optical Society of America.
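For illustration, the textbook two-pulse Ramsey transition probability with an explicit contrast factor, one simple way to see how reduced contrast washes out the fringes; the detunings and Ramsey time below are placeholders, not the experiment's values.

```python
# Textbook two-pulse Ramsey model (ideal pi/2 pulses) with a contrast factor C.
import numpy as np

def ramsey_probability(detuning_hz, free_time_s, contrast=1.0):
    """P(detuning) = 0.5 * [1 + C * cos(2*pi*detuning*T)] for a free-evolution time T."""
    return 0.5 * (1.0 + contrast * np.cos(2 * np.pi * detuning_hz * free_time_s))

detunings = np.linspace(-200, 200, 5)           # Hz around the clock transition
print(ramsey_probability(detunings, free_time_s=10e-3, contrast=0.3))
```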
Abstract:
This paper presents a new statistical algorithm to estimate rainfall over the Amazon Basin region using the Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI). The algorithm relies on empirical relationships, derived for different raining-type systems, between coincident measurements of surface rainfall rate and 85-GHz polarization-corrected brightness temperature as observed by the precipitation radar (PR) and TMI on board the TRMM satellite. The scheme includes rain/no-rain area delineation (screening) and system-type classification routines for rain retrieval. The algorithm is validated against independent measurements of the TRMM-PR and S-band dual-polarization Doppler radar (S-Pol) surface rainfall data for two different periods. Moreover, the performance of this rainfall estimation technique is evaluated against well-known methods, namely the TRMM-2A12 [the Goddard profiling algorithm (GPROF)], the Goddard scattering algorithm (GSCAT), and the National Environmental Satellite, Data, and Information Service (NESDIS) algorithms. The proposed algorithm shows a normalized bias of approximately 23% for both PR and S-Pol ground truth datasets and a mean error of 0.244 mm h⁻¹ (PR) and -0.157 mm h⁻¹ (S-Pol). For rain volume estimates using PR as reference, a correlation coefficient of 0.939 and a normalized bias of 0.039 were found. With respect to rainfall distributions and rain area comparisons, the results showed that the proposed formulation is efficient and compatible with the physics and dynamics of the observed systems over the area of interest. The performance of the other algorithms showed that GSCAT presented low normalized bias for rain areas and rain volume [0.346 (PR) and 0.361 (S-Pol)], and GPROF showed a rainfall distribution similar to that of the PR and S-Pol, but bimodal. Last, the five algorithms were evaluated during the TRMM Large-Scale Biosphere-Atmosphere Experiment in Amazonia (LBA) 1999 field campaign to verify the precipitation characteristics observed during the easterly and westerly Amazon wind flow regimes. The proposed algorithm presented a cumulative rainfall distribution similar to the observations during the easterly regime, but underestimated during the westerly period for rainfall rates above 5 mm h⁻¹. NESDIS(1) overestimated for both wind regimes but presented the best westerly representation. NESDIS(2), GSCAT, and GPROF underestimated in both regimes, but GPROF was closer to the observations during the easterly flow.
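A small sketch of the kind of validation statistics quoted above, assuming one common definition of normalized bias (the paper's exact formulas may differ); the data below are synthetic.

```python
# Validation metrics for a retrieval versus a ground-truth reference.
import numpy as np

def normalized_bias(estimate, reference):
    return (estimate.sum() - reference.sum()) / reference.sum()

def mean_error(estimate, reference):
    return float(np.mean(estimate - reference))

def correlation(estimate, reference):
    return float(np.corrcoef(estimate, reference)[0, 1])

rng = np.random.default_rng(1)
ref = rng.gamma(2.0, 2.0, 500)            # synthetic "radar" rain rates, mm/h
est = ref * 1.2 + rng.normal(0, 1, 500)   # synthetic retrieval with a wet bias
print(normalized_bias(est, ref), mean_error(est, ref), correlation(est, ref))
```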
Abstract:
Context. HD 181231 is a B5IVe star, which has been observed with the CoRoT satellite over ~5 consecutive months and simultaneously from the ground in spectroscopy and spectropolarimetry. Aims. By analysing these data, we aim to detect and characterize as many pulsation frequencies as possible and to search for the presence of beating effects possibly at the origin of the Be phenomenon. Our results will also provide a basis for seismic modelling. Methods. The fundamental parameters of the star are determined from spectral fitting and from the study of the circumstellar emission. The CoRoT photometric data and ground-based spectroscopy are analysed using several Fourier techniques: CLEAN-NG, PASPER, and TISAFT, as well as a time-frequency technique. A search for a magnetic field is performed by applying the LSD technique to the spectropolarimetric data. Results. We find that HD 181231 is a B5IVe star seen at an inclination of ~45 degrees. No magnetic field is detected in its photosphere. We detect at least 10 independent significant frequencies of variation among the 54 detected frequencies, interpreted in terms of non-radial pulsation modes and rotation. Two longer-term variations are also detected: one at ~14 days resulting from a beating effect between the two main frequencies of short-term variation, the other at ~116 days due either to a beating of frequencies or to a zonal pulsation mode. Conclusions. Our analysis of the CoRoT light curve and ground-based spectroscopic data of HD 181231 has led to the determination of the fundamental and pulsational parameters of the star, including beating effects. This will allow precise seismic modelling of this star.
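A quick check of the beating interpretation: two close frequencies produce an amplitude modulation at their difference frequency; the frequencies below are placeholders, not the measured values for HD 181231.

```python
# Beat period of two close pulsation frequencies (in cycles per day).
def beat_period_days(f1_cpd, f2_cpd):
    """Beat period, in days, of two frequencies given in cycles per day."""
    return 1.0 / abs(f1_cpd - f2_cpd)

print(beat_period_days(1.24, 1.17))   # ~14.3 days for a 0.07 c/d separation
```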
Abstract:
Atmospheric aerosol particles serving as cloud condensation nuclei (CCN) are key elements of the hydrological cycle and climate. We have measured and characterized CCN at water vapor supersaturations in the range of S = 0.10-0.82% in pristine tropical rainforest air during the AMAZE-08 campaign in central Amazonia. The effective hygroscopicity parameters describing the influence of chemical composition on the CCN activity of aerosol particles varied in the range of κ ≈ 0.1-0.4 (0.16 ± 0.06, arithmetic mean and standard deviation). The overall median value of κ ≈ 0.15 was a factor of two lower than the values typically observed for continental aerosols in other regions of the world. Aitken mode particles were less hygroscopic than accumulation mode particles (κ ≈ 0.1 at D ≈ 50 nm; κ ≈ 0.2 at D ≈ 200 nm), which is in agreement with earlier hygroscopicity tandem differential mobility analyzer (H-TDMA) studies. The CCN measurement results are consistent with aerosol mass spectrometry (AMS) data, showing that the organic mass fraction (f_org) was on average as high as ~90% in the Aitken mode (D <= 100 nm) and decreased with increasing particle diameter in the accumulation mode (~80% at D ≈ 200 nm). The κ values exhibited a negative linear correlation with f_org (R² = 0.81), and extrapolation yielded the following effective hygroscopicity parameters for the organic and inorganic particle components: κ_org ≈ 0.1, which can be regarded as the effective hygroscopicity of biogenic secondary organic aerosol (SOA), and κ_inorg ≈ 0.6, which is characteristic of ammonium sulfate and related salts. Both the size dependence and the temporal variability of effective particle hygroscopicity could be parameterized as a function of AMS-based organic and inorganic mass fractions (κ_p = κ_org × f_org + κ_inorg × f_inorg). The CCN number concentrations predicted with κ_p were in fair agreement with the measurement results (~20% average deviation). The median CCN number concentrations at S = 0.10-0.82% ranged from N_CCN,0.10 ≈ 35 cm⁻³ to N_CCN,0.82 ≈ 160 cm⁻³, the median concentration of aerosol particles larger than 30 nm was N_CN,30 ≈ 200 cm⁻³, and the corresponding integral CCN efficiencies were in the range of N_CCN,0.10/N_CN,30 ≈ 0.1 to N_CCN,0.82/N_CN,30 ≈ 0.8. Although the number concentrations and hygroscopicity parameters were much lower in pristine rainforest air, the integral CCN efficiencies observed were similar to those in highly polluted megacity air. Moreover, model calculations of N_CCN,S assuming an approximate global average value of κ ≈ 0.3 for continental aerosols led to systematic overpredictions, but the average deviations exceeded ~50% only at low water vapor supersaturation (0.1%) and low particle number concentrations (<= 100 cm⁻³). Model calculations assuming a constant aerosol size distribution led to higher average deviations at all investigated levels of supersaturation: ~60% for the campaign-average distribution and ~1600% for a generic remote continental size distribution.
These findings confirm earlier studies suggesting that aerosol particle number and size are the major predictors for the variability of the CCN concentration in continental boundary layer air, followed by particle composition and hygroscopicity as relatively minor modulators. Depending on the required and applicable level of detail, the information and parameterizations presented in this paper should enable efficient description of the CCN properties of pristine tropical rainforest aerosols of Amazonia in detailed process models as well as in large-scale atmospheric and climate models.
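The mixing rule quoted above can be applied directly; the component values κ_org ≈ 0.1 and κ_inorg ≈ 0.6 are the extrapolated ones reported in the abstract, and the organic mass fractions are the Aitken/accumulation-mode values mentioned there.

```python
# kappa_p = kappa_org * f_org + kappa_inorg * f_inorg, with f_inorg = 1 - f_org.
def kappa_mix(f_org, kappa_org=0.1, kappa_inorg=0.6):
    """Effective hygroscopicity from the organic mass fraction."""
    return kappa_org * f_org + kappa_inorg * (1.0 - f_org)

print(kappa_mix(0.9))   # f_org = 90% -> kappa_p = 0.15 (the campaign median value)
print(kappa_mix(0.8))   # f_org = 80% -> kappa_p = 0.20 (accumulation-mode value)
```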
Abstract:
An analytical procedure based on microwave-assisted digestion with diluted acid and a double cloud point extraction is proposed for nickel determination in plant materials by flame atomic absorption spectrometry. Extraction in micellar medium was successfully applied for sample clean-up, aiming to remove organic species containing phosphorus that caused spectral interferences through structured background attributed to the formation of PO species in the flame. Cloud point extraction of the nickel complexes formed with 1,2-thiazolylazo-2-naphthol was explored for pre-concentration, with an enrichment factor estimated as 30, a detection limit of 5 µg L⁻¹ (99.7% confidence level), and a linear response up to 80 µg L⁻¹. The accuracy of the procedure was evaluated by nickel determinations in reference materials, and the results agreed with the certified values at the 95% confidence level.