820 results for sistema distribuito data-grid cloud computing CERN LHC Hazelcast Elasticsearch
Abstract:
This paper presents a study and experimental tests assessing the viability of using multiple wireless technologies in urban traffic light controllers in a Smart City environment. Communication drivers, different types of antennas, data acquisition methods, and data processing for monitoring the network are presented. The sensor and actuator modules are connected in a local area network through two distinct low-power wireless networks, using the 868 MHz and 2.4 GHz frequency bands. All data communications at 868 MHz go through a Moteino. Various tests are performed to assess the most advantageous features of each communication type. The experimental results show better range for the 868 MHz solution, whereas 2.4 GHz offers the advantages of network self-regeneration and a mesh topology. The pros and cons of both communication methods are presented.
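As an illustration of the data-acquisition side described above, the following is a minimal sketch assuming the Moteino acts as a USB-serial gateway that forwards each received 868 MHz packet as a comma-separated line; the port name, baud rate, and line format are assumptions for illustration, not details taken from the paper.

```python
# Read packets forwarded by a (hypothetical) Moteino serial gateway and log
# the per-node RSSI used to compare radio range between setups.
import serial  # pyserial

with serial.Serial("/dev/ttyUSB0", 115200, timeout=1) as gateway:
    while True:
        line = gateway.readline().decode("ascii", errors="replace").strip()
        if not line:
            continue  # read timed out, keep polling
        parts = line.split(",", 2)  # assumed format: node_id,rssi,payload
        if len(parts) != 3:
            continue  # skip malformed lines
        node_id, rssi, payload = parts
        print(f"node={node_id} rssi={rssi} dBm payload={payload}")
```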
Abstract:
The modern industrial environment is populated by a myriad of intelligent devices that collaborate to accomplish the numerous business processes in place at production sites. The close collaboration between humans and work machines poses interesting new challenges that industry must overcome to implement the digital policies demanded by the industrial transition. The Industry 5.0 movement is a companion revolution to the earlier Industry 4.0, and it relies on three characteristics that any industrial sector should pursue: human centrality, resilience, and sustainability. The fifth industrial revolution cannot be completed without building on Industry 4.0-enabled platforms. The common feature in the development of this kind of platform is the need to integrate the Information and Operational Technology layers. Our thesis work focuses on the implementation of a platform addressing all the digitization features foreseen by the fourth industrial revolution, making the IT/OT convergence inside production plants an improvement rather than a risk. Furthermore, we added modular features to our platform enabling the Industry 5.0 vision. We support human centrality through mobile crowdsensing techniques, and resilience and sustainability through pluggable cloud computing services combined with data coming from crowd support. We achieved important and encouraging results in all the domains in which we conducted our experiments. Our IT/OT convergence-enabled platform exhibits the performance needed to satisfy the strict requirements of production sites. The multi-layer capability of the framework enables the exploitation of data not strictly coming from work machines, allowing a closer interaction between the company, its employees, and customers.
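To make the crowd-plus-machine data flow concrete, below is a minimal sketch assuming an MQTT backbone where machine (OT) readings and crowdsensed (IT) reports land on a single broker; the broker host and topic layout are hypothetical, not the platform's actual interface.

```python
# Publish one OT reading and one crowdsensed reading to a shared broker.
import json
import time
import paho.mqtt.publish as publish

BROKER = "broker.plant.local"  # hypothetical broker address

# OT side: a temperature reading from a work machine on the line.
publish.single("plant/ot/press42/temperature",
               json.dumps({"celsius": 74.2, "ts": time.time()}),
               hostname=BROKER)

# Crowd side: a worker's smartphone reports ambient noise near the line,
# adding data that does not strictly come from work machines.
publish.single("plant/crowd/line3/noise",
               json.dumps({"dBA": 81, "ts": time.time()}),
               hostname=BROKER)
```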
Abstract:
The Standard Model (SM) of particle physics predicts the existence of a Higgs field responsible for the generation of particles' mass. However, some aspects of this theory remain unresolved, suggesting the presence of new physics Beyond the Standard Model (BSM), with new particles produced at an energy scale above the current experimental limits. The search for additional Higgs bosons is, in fact, predicted by theoretical extensions of the SM, including the Minimal Supersymmetric Standard Model (MSSM). In the MSSM, the Higgs sector consists of two Higgs doublets, resulting in five physical Higgs particles: two charged bosons $H^{\pm}$, two neutral scalars $h$ and $H$, and one pseudoscalar $A$. The work presented in this thesis is dedicated to the search for neutral non-Standard-Model Higgs bosons decaying to two muons in the model-independent MSSM scenario. Proton-proton collision data recorded by the CMS experiment at the CERN LHC at a center-of-mass energy of 13 TeV are used, corresponding to an integrated luminosity of $35.9\ \text{fb}^{-1}$. The search is sensitive to neutral Higgs bosons produced either via the gluon fusion process or in association with a $\text{b}\bar{\text{b}}$ quark pair. The extensive use of Machine and Deep Learning techniques is a fundamental element in the discrimination between simulated signal and background events. A new network structure called the parameterised Neural Network (pNN) has been implemented, replacing a whole set of single neural networks, each trained at a specific mass hypothesis, with a single neural network able to generalise well and interpolate across the entire mass range considered. The results of the pNN signal/background discrimination are used to set a model-independent 95\% confidence level expected upper limit on the production cross section times branching ratio for a generic $\phi$ boson decaying into a muon pair in the 130 to 1000 GeV range.
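As a sketch of the pNN idea, the mass hypothesis is appended to the event features so that one network replaces the per-mass ensemble; the layer sizes, mass scaling, and feature count below are illustrative assumptions, not the thesis configuration.

```python
# A parameterised Neural Network (pNN) sketch with Keras: the network sees
# (event features + mass hypothesis) and outputs a signal/background score.
import numpy as np
import tensorflow as tf

n_features = 10  # hypothetical number of kinematic input variables

inputs = tf.keras.Input(shape=(n_features + 1,))  # features + mass parameter
x = tf.keras.layers.Dense(64, activation="relu")(inputs)
x = tf.keras.layers.Dense(64, activation="relu")(x)
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy")

# At training time, signal events carry their true mass and background events
# are assigned masses sampled from the signal distribution, so the mass input
# is not discriminating on its own. At evaluation, fix the mass hypothesis:
def evaluate_at_mass(model, features, mass_gev):
    mass_col = np.full((len(features), 1), mass_gev / 1000.0)  # crude scaling
    return model.predict(np.hstack([features, mass_col]))
```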
Abstract:
The recent trend of moving Cloud Computing capabilities to the Edge of the network is reshaping how applications and their middleware supports are designed, deployed, and operated. This new model envisions a continuum of virtual resources between the traditional cloud and the network edge, potentially better suited to meet the heterogeneous Quality of Service (QoS) requirements of diverse application domains and next-generation applications. Several classes of advanced Internet of Things (IoT) applications, e.g., in the industrial manufacturing domain, exhibit heterogeneous QoS requirements and call for QoS management systems to guarantee and control performance indicators, even in the presence of real-world factors such as limited bandwidth and concurrent virtual resource utilization. This dissertation proposes a comprehensive QoS-aware architecture that addresses the challenges of integrating cloud infrastructure with edge nodes in IoT applications. The architecture provides end-to-end QoS support by incorporating several components for managing physical and virtual resources. The proposed architecture features: i) a multilevel middleware for resolving the convergence between Operational Technology (OT) and Information Technology (IT), ii) an end-to-end QoS management approach compliant with the Time-Sensitive Networking (TSN) standard, iii) new approaches for virtualized network environments, such as running TSN-based applications under Ultra-low Latency (ULL) constraints in virtual and 5G environments, and iv) an accelerated and deterministic container overlay network architecture. Additionally, the QoS-aware architecture includes two novel middleware components: i) one that transparently integrates multiple acceleration technologies in heterogeneous Edge contexts and ii) a QoS-aware middleware for Serverless platforms that coordinates various QoS mechanisms and the virtualized Function-as-a-Service (FaaS) invocation stack to manage end-to-end QoS metrics. Finally, all architecture components were tested and evaluated on realistic testbeds, demonstrating the efficacy of the proposed solutions.
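As one concrete illustration of coordinating QoS classes on the invocation path, below is a minimal sketch assuming two invocation classes with different priorities; the class names and the dispatch policy are illustrative assumptions, not the dissertation's actual middleware interface.

```python
# Dispatch FaaS invocations by QoS class: lower priority number is served
# first, and a counter preserves FIFO order within a class.
import heapq
import itertools

QOS_PRIORITY = {"ultra-low-latency": 0, "best-effort": 1}  # assumed classes
_order = itertools.count()
_queue = []

def submit(invocation, qos_class):
    heapq.heappush(_queue, (QOS_PRIORITY[qos_class], next(_order), invocation))

def dispatch():
    while _queue:
        _, _, invocation = heapq.heappop(_queue)
        invocation()  # hand off to the (virtualized) FaaS invocation stack

submit(lambda: print("control-loop step"), "ultra-low-latency")
submit(lambda: print("batch analytics"), "best-effort")
dispatch()  # the control-loop step runs before batch analytics
```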
Abstract:
The enhanced production of strange hadrons in heavy-ion collisions relative to that in minimum-bias pp collisions is historically considered one of the first signatures of the formation of a deconfined quark-gluon plasma. At the LHC, the ALICE experiment observed that the ratio of strange to non-strange hadron yields increases with the charged-particle multiplicity at midrapidity, starting from pp collisions and evolving smoothly across interaction systems and energies, ultimately reaching Pb-Pb collisions. The origin of this effect in small systems remains an open question. This thesis presents a comprehensive study of the production of the $K^{0}_{S}$, $\Lambda$ ($\bar{\Lambda}$) and $\Xi^{-}$ ($\bar{\Xi}^{+}$) strange hadrons in pp collisions at $\sqrt{s}$ = 13 TeV collected in LHC Run 2 with ALICE. A novel approach is exploited, introducing for the first time the concept of effective energy in the study of strangeness production in hadronic collisions at the LHC. In this work, the ALICE Zero Degree Calorimeters are used to measure the energy carried by forward-emitted baryons in pp collisions, which reduces the effective energy available for particle production with respect to the nominal centre-of-mass energy. The results presented in this thesis provide new insights into the interplay, for strangeness production, between the initial stages of the collision and the final hadronic state. Finally, the first Run 3 results on the production of $\Omega^{-}$ ($\bar{\Omega}^{+}$) multi-strange baryons are presented, measured in pp collisions at $\sqrt{s}$ = 13.6 TeV and 900 GeV, the highest and lowest collision energies reached so far at the LHC. This thesis also presents the development and validation of the ALICE Time-Of-Flight (TOF) data quality monitoring system for LHC Run 3. This work was fundamental to assessing the performance of the TOF detector during the commissioning phase, in the Long Shutdown 2, and during the data-taking period.
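Given the description above, one natural way to write the effective energy (a sketch; the thesis' exact convention may differ) is

$$ E_{\mathrm{eff}} = \sqrt{s} - E_{\mathrm{ZDC}}^{A} - E_{\mathrm{ZDC}}^{C}, $$

where $E_{\mathrm{ZDC}}^{A}$ and $E_{\mathrm{ZDC}}^{C}$ are the energies carried by forward-emitted baryons into the two Zero Degree Calorimeters on either side of the interaction point, so that larger leading-baryon energies leave less effective energy available for particle production.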
Abstract:
Industry 4.0 refers to the fourth industrial revolution, and at its base lie the digitalization and automation of the assembly line. The whole production process has improved and evolved thanks to advances in networking and AI, including machine learning, cloud computing, IoT, and other technologies that are finally being implemented in the industrial scenario. All these technologies share a need for faster, more secure, robust, and reliable communication. One of the many answers to these demands is the use of mobile communication technologies in the industrial environment, but which technology is better suited? The answer is not as simple as it seems. The fourth industrial revolution has a potential incomparable to the previous ones; every factory, enterprise, or company has different network demands, and even within each of these infrastructures the demands may vary by sector or by application. For example, in the healthcare industry there may be a need for increased bandwidth to analyse high-definition video, or for faster speeds so that analytics occur in real time; yet another application might require higher security and reliability to protect patients' data. Choosing the right technology for the right environment and application therefore involves many considerations, and those just stated are but a speck of dust in the overall picture. In this thesis, we investigate a comparison between two of the technologies available for the industrial environment: Wi-Fi 6 and 5G private networks, in the specific case of a steel factory.
Abstract:
With the launch of new technological applications such as the Internet of Things, Big Data, cloud computing, and mobile technologies, which are accelerating the pace of change enormously, behaviours, habits, and ways of living have changed completely in favour of a world of digital technologies that ease everyday operations. These advances are rapidly changing the way companies do business, with major repercussions throughout the corporate context and beyond. The advent of Digital Transformation has amplified these phenomena, and it could be defined as the trigger of all the changes we are experiencing. The speed and intensity of change has disruptive effects compared with the past, affecting numerous economic sectors and consumer habits. The goal of this work is to analyse digital transformation as applied to the case of the company Alfa, understanding its potential. In particular, we study the main implications of this innovation, the most important initiatives adopted regarding the new technologies implemented, and the benefits they bring in terms of strategy, business, and corporate culture.
Abstract:
Ballerina is an open-source, cloud-native language, designed with the intent of easing the development and integration burden associated with enterprise applications. The language simplifies the way a program communicates with the network, which usually happens through an API (Application Program Interface). Ballerina attempts to create an integrated system, bringing together the essential concepts, ideas, and tools for integrating a distributed system, and offering a concurrent, safe environment to support API development.
Abstract:
Information technologies are a fundamental pillar of organizations, sustaining the business through dedicated infrastructures; as the data center grows, challenges arise concerning scalability, fault tolerance, performance, resource allocation, access security, restoration of large amounts of information, and energy efficiency. Adopting technologies based on cloud computing applies a shared-resource model to consolidate the infrastructure and address the challenges described above. Virtualization technologies aim to reduce the infrastructure, raising new considerations at the level of local and data networks, security, and information backup and restore, owing to the dynamics of a virtualized environment. In data centers, this approach can represent a high level of consolidation, reducing physical servers, network ports, cabling, storage, space, energy, and cost, while assuring performance levels. This work defines a consolidation strategy for the data center under study that provides fault tolerance, fast provisioning of new services, scalability to more services, security in the Demilitarized Zone (DMZ) networks, and data backup and restore with reduced impact on resources, enabling high throughput and storage consolidation ratios. The proposed architecture implements this strategy with technologies optimized for cloud computing. A study was carried out based on the analysis of a data center with the VMware Capacity Planner application, which monitored the environment for eight months and recorded access metrics used to dimension the proposed architecture. The implementation of the cloud approach validates an 85% reduction in server infrastructure and evaluates communication latency, data transfer rates, service latencies, the impact of protocols on data transfer, virtualization overhead, migration of services across the physical infrastructure, backup and restore times, and security in the DMZ.
Abstract:
Scientific dissertation carried out to obtain the Master's degree in Computer Networks and Multimedia Engineering.
Abstract:
Project report carried out to obtain the Master's degree in Informatics and Computer Engineering.
Abstract:
In Portugal, higher-education institutions have e-learning platforms that add value to the teaching-learning process. However, these platforms are private in scope, exposing the institutions' timid openness in sharing their knowledge as well as their resources. The Cloud Computing paradigm emerges as a solution, for example, for creating a federation of clouds capable of accommodating heterogeneous solutions, guaranteeing interoperability between the platforms of the various teaching institutions, and promoting the goals proposed by the Bologna Process, namely regarding the sharing of information, platforms, and services and the promotion of joint projects. In this context, it is necessary to develop tools that allow decision-makers to weigh the benefits of this new paradigm. It is thus useful to quantify the expected return on the investment, in human and technological resources, required by the Cloud Computing model. This work contributes to the study of the evaluation of the return on investment (ROI) in ICT (Information and Communication Technologies) infrastructures and services, resulting from the analysis of different scenarios for the introduction of the Cloud Computing paradigm. To this end, an analysis methodology was proposed based on a questionnaire, distributed to several Portuguese higher-education institutions, containing a set of questions that made it possible to identify indicators, and respective metrics, to be used in building ROI estimation models.
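To make the estimation concrete, here is a minimal sketch of the basic ROI computation such indicators would feed, with hypothetical yearly figures; the cost categories and values are illustrative, not survey results.

```python
# Classic ROI: net gain over investment; 0.25 means a 25% return.
def roi(benefits: float, investment: float) -> float:
    return (benefits - investment) / investment

# Hypothetical annual figures for moving e-learning platforms to a cloud
# federation: avoided hardware/energy/staff costs vs. migration and fees.
benefits = 120_000.0   # EUR saved per year (illustrative)
investment = 90_000.0  # EUR invested per year (illustrative)
print(f"estimated ROI: {roi(benefits, investment):.0%}")  # -> 33%
```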
Abstract:
Cloud SLAs compensate customers with credits when average availability drops below certain levels. This is too inflexible, because consumers lose non-measurable amounts of performance and are only compensated later, in subsequent charging cycles. We propose to schedule virtual machines (VMs) driven by range-based non-linear reductions of utility, different for classes of users and across different ranges of resource allocations: partial utility. This customer-defined metric allows providers to transfer resources between VMs in meaningful and economically efficient ways. We define a comprehensive cost model incorporating the partial utility given by clients to a certain level of degradation when VMs are allocated in overcommitted environments (Public, Private, and Community Clouds). CloudSim was extended to support our scheduling model. Several simulation scenarios with synthetic and real workloads are presented, using datacenters of different dimensions regarding the number of servers and computational capacity. We show that partial utility-driven scheduling allows more VMs to be allocated. It brings benefits to providers regarding revenue and resource utilization, allowing more revenue per resource allocated and scaling well with the size of datacenters when compared with a utility-oblivious redistribution of resources. Regarding clients, their workloads' execution time is also improved by incorporating an SLA-based redistribution of their VMs' computational power.
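To illustrate the idea, here is a minimal sketch of a range-based partial utility function and the resulting provider revenue, assuming piecewise utility levels per user class; the class names, breakpoints, and values are illustrative, not the cost model's actual parameters.

```python
# Partial utility: how much value a client retains at a degraded allocation,
# defined per user class over ranges of the allocated fraction of a VM's
# requested capacity (range-based and non-linear).
PARTIAL_UTILITY = {
    "gold":   [(1.0, 1.0), (0.8, 0.6), (0.5, 0.2)],  # degrades steeply
    "bronze": [(1.0, 1.0), (0.8, 0.9), (0.5, 0.6)],  # tolerates degradation
}

def partial_utility(user_class: str, allocated_fraction: float) -> float:
    # Thresholds are sorted high to low; return the first range matched.
    for threshold, utility in PARTIAL_UTILITY[user_class]:
        if allocated_fraction >= threshold:
            return utility
    return 0.0  # below the lowest acceptable range

def revenue(base_price: float, user_class: str, allocated_fraction: float) -> float:
    # Charging in proportion to delivered utility makes it economically
    # efficient to shift capacity toward degradation-tolerant VMs when a
    # host is overcommitted.
    return base_price * partial_utility(user_class, allocated_fraction)

print(revenue(10.0, "gold", 0.8))    # 6.0
print(revenue(10.0, "bronze", 0.8))  # 9.0
```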
Abstract:
Portugal joined the effort to create the EPOS infrastructure in 2008, and it became immediately apparent that a national network of Earth Sciences infrastructures was required to participate in the initiative. At that time, FCT was promoting the creation of a national infrastructure called RNG - Rede Nacional de Geofísica (National Geophysics Network). A memorandum of understanding had been agreed upon, and it therefore seemed straightforward to use RNG (enlarged to include relevant participants that were not RNG members) as the Portuguese partner to EPOS-PP. However, at the time of signature of the EPOS-PP contract with the European Commission (November 2010), RNG had not yet gained formal identity, and IST (one of the participants) signed the grant agreement on behalf of the Portuguese consortium. During 2011 no progress was made towards the formal creation of RNG, and the composition of the network, based on proposals submitted to a call issued in 2002, had by then become obsolete. In February 2012, the EPOS national contact point was mandated by the representatives of the participating national infrastructures to request from FCT the recognition of a new consortium, C3G (Collaboratory for Geology, Geodesy and Geophysics), as the Portuguese partner to EPOS-PP. This request was supported by formal letters from the following institutions:
- LNEG, Laboratório Nacional de Energia e Geologia (National Geological Survey);
- IGP, Instituto Geográfico Português (National Geographic Institute);
- IDL, Instituto Dom Luiz - Laboratório Associado;
- CGE, Centro de Geofísica de Évora;
- FCTUC, Faculdade de Ciências e Tecnologia da Universidade de Coimbra;
- Instituto Superior de Engenharia de Lisboa;
- Instituto Superior Técnico;
- Universidade da Beira Interior.
While Instituto de Meteorologia (Meteorological Institute, in charge of the national seismographic network) actively supports the national participation in EPOS, a letter of support was not feasible in view of the organic changes underway at the time. C3G aims at the integration and coordination, at national level, of existing Earth Sciences infrastructures, namely:
- seismic and geodetic networks (IM, IST, IDL, CGE);
- rock physics laboratories (ISEL);
- geophysical laboratories dedicated to natural resources and environmental studies;
- geological and geophysical data repositories;
- facilities for data storage and computing resources.
The C3G, Collaboratory for Geology, Geodesy and Geophysics, will be coordinated by Universidade da Beira Interior, whose Department of Informatics will host the C3G infrastructure.
Abstract:
Corporate organizational structures are increasingly required to guarantee high standards of service quality, while at the same time ensuring the sustainability of those structures and the alignment of investments with business strategies. Their development forces the information and communication technologies area to rethink current strategies, seeking new models that are more agile and better able to fit these new requirements. In this context, digital identity platforms can be expected to play a decisive role in the development of these new models, as they are a unique instrument for implementing heterogeneous, interoperable platforms with high levels of security and guaranteed control over access to information. The work presented here aims to investigate and develop a digital identity platform and a test platform that allow the Politécnico do Porto to acquire an Information and Communication Technologies infrastructure that becomes a fundamental instrument for the continuous development, quality assurance, and sustainability of all services provided to its community.