800 results for cloud computing datacenter performance QoS


Relevance:

30.00%

Publisher:

Abstract:

Protective factors are neglected in risk assessment in adult psychiatric and criminal justice populations. This review investigated the predictive efficacy of selected tools that assess protective factors. Five databases were searched using comprehensive terms for records up to June 2014, resulting in 17 studies (n = 2,198). Results were combined in a multilevel meta-analysis using the R (R Core Team, R: A Language and Environment for Statistical Computing, Vienna, Austria: R Foundation for Statistical Computing, 2015) metafor package (Viechtbauer, Journal of Statistical Software, 2010, 36, 1). Prediction of outcomes was poor relative to a reference category of violent offending, with the exception of prediction of discharge from secure units. There were no significant differences between the predictive efficacy of risk scales, protective scales, and summary judgments. Protective factor assessment may be clinically useful, but more development is required. Claims that use of these tools is therapeutically beneficial require testing.

Relevance:

30.00%

Publisher:

Abstract:

The constant evolution of technology has made available computational tools that were mere expectations ten years ago. The growth of computing power applied to numerical models that simulate the atmosphere has broadened the study of atmospheric phenomena through the use of high-performance computing tools. This work proposed the development of algorithms based on SIMT architectures and the application of parallelization techniques with OpenACC to process numerical weather prediction data from the Weather Research and Forecasting (WRF) model. The proposal has a strong interdisciplinary character, seeking interaction between atmospheric modeling and scientific computing. The influence of the cloud microphysics computation on the temporal degradation of the model was tested. Because the input data for GPU execution was not large enough, the time required to transfer data from the CPU to the GPU exceeded the time of the computation on the CPU. Another determining factor was the addition of CUDA code within an MPI context, creating resource contention among processes and once again degrading execution time. The use of directives to apply high-performance computing within a CUDA framework appears very promising, but it must still be applied with great care in order to produce good results. A hybrid MPI + CUDA implementation was tested, but the results were inconclusive.
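To illustrate the transfer-overhead effect described above, the following is a minimal Python sketch (assuming CuPy and a CUDA-capable GPU are available; it is not the WRF/OpenACC code itself) that times a small CPU computation against the bare host-to-device copy of the same array. For small inputs, the copy alone can already exceed the CPU compute time, which is the situation reported for the microphysics kernel.

```python
import time
import numpy as np

try:
    import cupy as cp  # assumption: CuPy is installed and a CUDA GPU is present
except ImportError:
    cp = None

def cpu_kernel(x):
    # Stand-in for a small per-grid-point computation (not the WRF microphysics).
    return np.sqrt(x) * np.exp(-x)

def time_cpu(x, reps=10):
    t0 = time.perf_counter()
    for _ in range(reps):
        cpu_kernel(x)
    return (time.perf_counter() - t0) / reps

def time_h2d_copy(x, reps=10):
    # Host-to-device copy only, synchronized so the timing is meaningful.
    t0 = time.perf_counter()
    for _ in range(reps):
        cp.asarray(x)
        cp.cuda.Stream.null.synchronize()
    return (time.perf_counter() - t0) / reps

if __name__ == "__main__" and cp is not None:
    for n in (10_000, 1_000_000):
        x = np.random.rand(n).astype(np.float32)
        print(n, "CPU compute:", time_cpu(x), "H2D copy:", time_h2d_copy(x))
```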

Relevance:

30.00%

Publisher:

Abstract:

The high-performance computing community has traditionally focused solely on reducing execution time, though in recent years the optimization of energy consumption has become a major concern. Reducing energy usage without degrading performance requires the adoption of energy-efficient hardware platforms accompanied by the development of energy-aware algorithms and computational kernels. The solution of linear systems is a key operation in many scientific and engineering problems. Its relevance has motivated a significant amount of work, and consequently it is possible to find high-performance solvers for a wide variety of hardware platforms. In this work, we aim to develop a high-performance, energy-efficient linear system solver. In particular, we develop two solvers for a low-power CPU-GPU platform, the NVIDIA Jetson TK1. These solvers implement the Gauss-Huard algorithm, yielding efficient usage of the target hardware as well as efficient memory access. The experimental evaluation shows that the novel proposal delivers significant savings in both time and energy consumption when compared with the state-of-the-art solvers for this platform.
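For reference, here is a minimal NumPy sketch of the Gauss-Huard elimination applied to the augmented matrix [A|b]; it omits the column pivoting, blocking, and CPU-GPU kernel mapping that the Jetson TK1 solvers rely on, so it only illustrates the structure of the algorithm.

```python
import numpy as np

def gauss_huard(A, b):
    """Solve Ax = b with the Gauss-Huard variant of Gauss-Jordan elimination.

    Minimal dense sketch: no pivoting, no blocking, no GPU offloading.
    """
    n = A.shape[0]
    M = np.hstack([A.astype(float), b.reshape(-1, 1).astype(float)])
    for k in range(n):
        # Row update: eliminate entries to the left of the diagonal in row k.
        for i in range(k):
            M[k, i + 1:] -= M[k, i] * M[i, i + 1:]
        # Scale row k by its diagonal element.
        M[k, k + 1:] /= M[k, k]
        # Column update: annihilate column k above the diagonal.
        for i in range(k):
            M[i, k + 1:] -= M[i, k] * M[k, k + 1:]
    return M[:, -1]

if __name__ == "__main__":
    A = np.array([[2.0, 1.0], [4.0, 3.0]])
    b = np.array([5.0, 11.0])
    print(gauss_huard(A, b), np.linalg.solve(A, b))  # both ~ [2., 1.]
```

Unlike classical Gaussian elimination, the solution appears directly in the last column once the sweep finishes, so no backward substitution step is needed.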

Relevance:

30.00%

Publisher:

Abstract:

Nowadays, the latest generation of computers provides performance high enough to build computationally expensive computer vision applications for mobile robotics. Building a map of the environment is a common robotic task and an essential prerequisite for robots to move through those environments. Traditionally, mobile robots have combined several sensors based on different technologies: lasers, sonars, and contact sensors are typical in any mobile robotic architecture. Color cameras are nonetheless an important sensor, since we want robots to use the same information humans use to perceive and move through different environments. Color cameras are cheap and flexible, but a great deal of work is needed to give robots enough visual understanding of the scenes. Computer vision algorithms are computationally complex, yet robots now have access to different and powerful architectures that can be exploited for mobile robotics. The advent of low-cost RGB-D sensors such as the Microsoft Kinect, which provide 3D colored point clouds at high frame rates, has made computer vision even more relevant to the mobile robotics field. The combination of visual and 3D data allows systems to use both computer vision and 3D processing and therefore to be aware of more details of the surrounding environment.

The research described in this thesis was motivated by the need for scene mapping. Awareness of the surrounding environment is a key feature in many mobile robotics applications, from simple robotic navigation to complex surveillance. In addition, acquiring a 3D model of a scene is useful in many areas, such as video game scene modeling, where well-known places are reconstructed and added to game systems, or advertising, where, once the 3D model of a room is available, the system can add furniture using augmented reality techniques. In this thesis we perform an experimental study of state-of-the-art registration methods to find which one best fits our scene mapping purposes. Different methods are tested and analyzed on scene distributions with different visual and geometric appearance. In addition, this thesis proposes two methods for 3D data compression and representation of 3D maps. Our 3D representation proposal is based on the Growing Neural Gas (GNG) method. This Self-Organizing Map (SOM) has been successfully used for clustering, pattern recognition, and topology representation of various kinds of data. Until now, Self-Organizing Maps have been computed primarily offline, and their application to 3D data has mainly focused on noise-free models without considering time constraints. Self-organizing neural models are able to provide a good representation of the input space; in particular, GNG is a suitable model because of its flexibility, rapid adaptation, and excellent quality of representation. However, this type of learning is time consuming, especially for high-dimensional input data. Since real applications often operate under time constraints, it is necessary to adapt the learning process so that it completes within a predefined time. This thesis proposes a hardware implementation that leverages the computing power of modern GPUs, taking advantage of the paradigm known as General-Purpose Computing on Graphics Processing Units (GPGPU).

Our proposed geometric 3D compression method seeks to reduce the 3D information using plane detection as the basic structure to compress the data, because our target environments are man-made and therefore contain many points that belong to planar surfaces. The proposed method achieves good compression results in those man-made scenarios, and the detected and compressed planes can also be used in other applications, such as surface reconstruction or plane-based registration algorithms. Finally, we have also demonstrated the benefits of GPU technologies by obtaining a high-performance implementation of a common CAD/CAM technique called Virtual Digitizing.
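As an illustration of the plane-detection idea behind the compression method, here is a minimal RANSAC plane-fitting sketch in NumPy; it is a generic stand-in, not the thesis's implementation, and the thresholds and synthetic scene are arbitrary.

```python
import numpy as np

def plane_from_points(p0, p1, p2):
    # Plane through three points: unit normal n and offset d with n.x + d = 0.
    n = np.cross(p1 - p0, p2 - p0)
    norm = np.linalg.norm(n)
    if norm < 1e-12:
        return None
    n = n / norm
    return n, -np.dot(n, p0)

def ransac_plane(points, n_iters=200, threshold=0.01, seed=None):
    """Return (normal, d, inlier_mask) of the dominant plane in an Nx3 cloud."""
    rng = np.random.default_rng(seed)
    best_inliers, best_plane = None, None
    for _ in range(n_iters):
        idx = rng.choice(len(points), size=3, replace=False)
        plane = plane_from_points(*points[idx])
        if plane is None:
            continue
        n, d = plane
        inliers = np.abs(points @ n + d) < threshold
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (n, d)
    return best_plane[0], best_plane[1], best_inliers

if __name__ == "__main__":
    # Synthetic scene: a noisy floor plane plus scattered clutter.
    rng = np.random.default_rng(0)
    floor = np.column_stack([rng.uniform(-1, 1, 500),
                             rng.uniform(-1, 1, 500),
                             rng.normal(0, 0.002, 500)])
    clutter = rng.uniform(-1, 1, (100, 3))
    n, d, mask = ransac_plane(np.vstack([floor, clutter]), seed=1)
    print("normal:", np.round(n, 2), "inliers:", int(mask.sum()))
```

Storing the fitted plane parameters (plus a bounded 2D extent) instead of every inlier point is, roughly, where the compression gain in man-made scenes comes from.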

Relevance:

30.00%

Publisher:

Abstract:

Long-range non-covalent interactions play a key role in the chemistry of natural polyphenols. We have previously proposed a description of supramolecular polyphenol complexes by the B3P86 density functional coupled with corrections for dispersion. Here we couple the B3P86 functional with the D3 dispersion correction, systematically assessing the accuracy of the new B3P86-D3 model against the well-known S66, HB23, NCCE31, and S12L datasets for non-covalent interactions. Furthermore, the association energies of these complexes were carefully compared with those obtained by other dispersion-corrected functionals, such as B(3)LYP-D3, BP86-D3, or B3P86-NL. Finally, this set of models was also applied to a database composed of seven non-covalent polyphenol complexes of particular interest.

Relevance:

30.00%

Publisher:

Abstract:

The rapid growth of virtualized data centers and cloud hosting services is making the management of physical resources such as CPU, memory, and I/O bandwidth in data center servers increasingly important. Server management now involves dealing with multiple dissimilar applications with varying Service-Level Agreements (SLAs) and multiple resource dimensions. The multiplicity and diversity of resources and applications are rendering administrative tasks more complex and challenging. This thesis aimed to develop a framework and techniques that would help substantially reduce data center management complexity. We specifically addressed two crucial data center operations. First, we precisely estimated the capacity requirements of client virtual machines (VMs) when renting server space in a cloud environment. Second, we proposed a systematic process to efficiently allocate physical resources to hosted VMs in a data center. To realize these dual objectives, accurately capturing the effects of resource allocations on application performance is vital. The benefits of accurate application performance modeling are manifold. Cloud users can size their VMs appropriately and pay only for the resources they need; service providers can also offer a new charging model based on the VMs' performance instead of their configured sizes. As a result, clients will pay exactly for the performance they actually experience; administrators, in turn, will be able to maximize their total revenue by exploiting application performance models and SLAs. This thesis made the following contributions. First, we identified the resource control parameters crucial for distributing physical resources and characterizing contention for virtualized applications in a shared hosting environment. Second, we explored several modeling techniques and confirmed the suitability of two machine learning tools, Artificial Neural Networks and Support Vector Machines, for accurately modeling the performance of virtualized applications. Moreover, we suggested and evaluated modeling optimizations necessary to improve prediction accuracy when using these tools. Third, we presented an approach to optimal VM sizing that employs the performance models we created. Finally, we proposed a revenue-driven resource allocation algorithm that maximizes the SLA-generated revenue for a data center.
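As a toy illustration of the modeling step, the sketch below trains scikit-learn's SVR and MLPRegressor, stand-ins for the Support Vector Machine and Artificial Neural Network models, on synthetic data; the feature names and the performance function are invented for the example and are not the thesis's actual control parameters or workloads.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Synthetic training set: (cpu_cap, mem_share, io_weight) -> normalized throughput.
rng = np.random.default_rng(1)
X = rng.uniform(0.1, 1.0, size=(500, 3))
y = X[:, 0] ** 0.7 * X[:, 1] ** 0.2 * X[:, 2] ** 0.1 + rng.normal(0, 0.02, 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "SVM (SVR)": make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.01)),
    "ANN (MLP)": make_pipeline(StandardScaler(),
                               MLPRegressor(hidden_layer_sizes=(32, 32),
                                            max_iter=2000, random_state=0)),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, "R^2 on held-out allocations:", round(model.score(X_te, y_te), 3))
```

A model of this kind, once accurate, is what allows VM sizing (pick the smallest allocation meeting an SLA) and revenue-driven allocation (rank candidate allocations by predicted SLA revenue).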

Relevance:

30.00%

Publisher:

Abstract:

Thesis (Ph.D, Computing) -- Queen's University, 2016-09-30 09:55:51.506

Relevance:

30.00%

Publisher:

Abstract:

A High-Performance Computing (HPC) job dispatcher is a critical piece of software that assigns finite computing resources to submitted jobs. This resource assignment over time is known as the on-line job dispatching problem in HPC systems. Because the problem is on-line, solutions must be computed in real time, and the time they require cannot exceed a threshold beyond which normal system functioning would be affected. In addition, a job dispatcher must deal with considerable uncertainty: submission times, the number of requested resources, and job durations. Heuristic techniques have been broadly used in HPC systems, at the cost of achieving (sub-)optimal solutions in a short time. However, their scheduling and resource allocation components are separated, which yields decoupled decisions that may cause a performance loss. Optimization-based techniques are used less often for this problem, although they can significantly improve the performance of HPC systems at the expense of higher computation time. Nowadays, HPC systems are being used for modern applications, such as big data analytics and predictive model building, which in general consist of many short jobs. However, this information is unknown at dispatching time, and job dispatchers need to process large numbers of such jobs quickly while ensuring high Quality-of-Service (QoS) levels. Constraint Programming (CP) has been shown to be an effective approach for tackling job dispatching problems. However, state-of-the-art CP-based job dispatchers are unable to meet the challenges of on-line dispatching, such as generating dispatching decisions within a short period and integrating current and past information about the hosting system. For these reasons, we propose CP-based dispatchers that are better suited to HPC systems running modern applications: they generate on-line dispatching decisions within an appropriate time budget and are able to make effective use of job duration predictions to improve QoS levels, especially for workloads dominated by short jobs.
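To make the optimization-based view concrete, here is a minimal sketch of a cumulative-scheduling model for a single node written with OR-Tools CP-SAT (an assumed solver; the thesis does not prescribe it); the job durations, core demands, and objective are illustrative only, and a real dispatcher must additionally handle uncertainty, multiple nodes, and on-line arrivals.

```python
from ortools.sat.python import cp_model

# Toy instance: (predicted duration, requested cores); node capacity in cores.
jobs = [(3, 8), (1, 4), (2, 16), (1, 4), (5, 8)]
capacity = 32

model = cp_model.CpModel()
horizon = sum(d for d, _ in jobs)
starts, ends, intervals, demands = [], [], [], []
for j, (dur, cores) in enumerate(jobs):
    s = model.NewIntVar(0, horizon, f"start_{j}")
    e = model.NewIntVar(0, horizon, f"end_{j}")
    intervals.append(model.NewIntervalVar(s, dur, e, f"job_{j}"))
    starts.append(s)
    ends.append(e)
    demands.append(cores)

# Jobs running at the same time cannot exceed the node's core capacity.
model.AddCumulative(intervals, demands, capacity)
# Favor short jobs by minimizing the sum of completion times (a simple QoS proxy).
model.Minimize(sum(ends))

solver = cp_model.CpSolver()
solver.parameters.max_time_in_seconds = 1.0  # on-line use needs a strict time budget
if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    for j, s in enumerate(starts):
        print(f"job {j}: start at t={solver.Value(s)}")
```

The one-second solver budget mirrors the on-line constraint described above: whatever model is used, the dispatcher must return a (possibly sub-optimal) decision before the deadline expires.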

Relevance:

30.00%

Publisher:

Abstract:

The present thesis reports on the various research projects to which I contributed during my PhD, working with several research groups, and whose results have been communicated in a number of scientific publications. The main focus of my research activity was to learn, test, exploit, and extend the recently developed vdW-DFT (van der Waals corrected Density Functional Theory) methods for computing the structural, vibrational, and electronic properties of ordered molecular crystals from first principles. A secondary, and more recent, research activity has been the analysis, with microelectrostatic methods, of Molecular Dynamics (MD) simulations of disordered molecular systems. While only rather unreliable methods based on empirical models were practically usable until a few years ago, accurate calculations of the crystal energy are now possible, thanks to very fast modern computers and to the excellent performance of the best vdW-DFT methods. Accurate energies are particularly important for describing organic molecular solids, since they often exhibit several alternative crystal structures (polymorphs), with very different packing arrangements but very small energy differences. Standard DFT methods do not describe the long-range electron correlations that give rise to the vdW interactions. Although weak, these interactions are extremely sensitive to the packing arrangement, and neglecting them used to be a problem. The calculation of reliable crystal structures and vibrational frequencies has become possible only recently, thanks to the development of good representations of the vdW contribution to the energy (known as "vdW corrections").

Relevance:

30.00%

Publisher:

Abstract:

Technologies are currently being developed to make operating-system-level virtualization more efficient, including the Docker suite, which makes it possible to manage processes as if they were virtual machines. In addition, clustering mechanisms such as Kubernetes allow multiple machines to be connected, to communicate with one another, and to appear to external users as a single monolithic server. The combination of OS-level virtualization and clustering makes it possible to build servers as powerful as monolithic ones, but cheaper and better able to adapt to external demand. Given the enormous amount of data and computing power needed to handle communications and interactions between users and web services, many companies cannot afford to invest in a proprietary server and its maintenance, so they rent the necessary resources that constitute the so-called "cloud", i.e., the set of servers that providers make available to their customers. The migration of services from physical machines to the cloud has changed how services themselves are viewed: they are no longer seen as monolithic software but as microservices that interact with one another. The communication infrastructure that allows microservices to communicate is called a service mesh, and its separation of concerns recalls SDN technology. The behavior of the Istio service mesh software installed in a Kubernetes cluster was studied. Metrics on memory usage, CPU utilization, transmitted packets, errors, and latency were collected and compared with those obtained from a cluster without Istio. The study shows that, in a production-oriented cluster, the service mesh provided by Istio offers many tools for network control at the cost of a slightly higher demand for hardware resources.
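Below is a minimal sketch of the kind of metric collection such a comparison relies on, assuming a Prometheus endpoint as commonly deployed alongside Kubernetes and Istio; the endpoint URL and metric names are illustrative assumptions, not the actual measurement setup of the study.

```python
import requests

PROM_URL = "http://localhost:9090/api/v1/query"  # assumed Prometheus endpoint

# Illustrative PromQL queries: per-namespace CPU/memory and Istio request rate.
QUERIES = {
    "cpu_cores": 'sum(rate(container_cpu_usage_seconds_total{namespace="default"}[5m]))',
    "memory_bytes": 'sum(container_memory_working_set_bytes{namespace="default"})',
    "istio_req_rate": 'sum(rate(istio_requests_total[5m]))',
}

def instant_query(expr):
    """Run a PromQL instant query and return the scalar sum of the result vector."""
    resp = requests.get(PROM_URL, params={"query": expr}, timeout=5)
    resp.raise_for_status()
    result = resp.json()["data"]["result"]
    return sum(float(sample["value"][1]) for sample in result)

if __name__ == "__main__":
    for name, expr in QUERIES.items():
        try:
            print(name, instant_query(expr))
        except requests.RequestException as exc:
            print(name, "query failed:", exc)
```

Running the same queries against a cluster with and without the Istio sidecars injected is, in essence, how the resource overhead of the service mesh can be quantified.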

Relevance:

30.00%

Publisher:

Abstract:

Modern networks are undergoing a fast and drastic evolution, with software taking a more predominant role. Virtualization and cloud-like approaches are replacing physical network appliances, reducing the management burden on operators. Furthermore, networks now expose programmable interfaces for fast and dynamic control over traffic forwarding. This evolution is backed by standards organizations such as ETSI, 3GPP, and IETF. This thesis will describe the main trends in this evolution. Then, it will present solutions developed during the three years of the Ph.D. to exploit the capabilities these new technologies offer and to study their possible limitations, in order to push the state of the art further. Namely, it will deal with programmable network infrastructure, introducing the concept of Service Function Chaining (SFC) and presenting two possible solutions, one with OpenStack and OpenFlow and the other using Segment Routing and IPv6. Then, it will continue with network service provisioning, presenting concepts from Network Function Virtualization (NFV) and Multi-access Edge Computing (MEC). These concepts will be applied to network slicing for mission-critical communications and the Industrial IoT (IIoT). Finally, it will deal with network abstraction, with a focus on Intent-Based Networking (IBN). To summarize, the thesis includes solutions for data plane programming with evaluations on well-known platforms, performance metrics on virtual resource allocations, a novel practical application of network slicing to mission-critical communications, an architectural proposal and its implementation for edge technologies in Industrial IoT scenarios, and a formal definition of intent using a category theory approach.
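As a toy illustration of how a service function chain can be encoded with Segment Routing over IPv6 (SRv6), the following Scapy sketch builds a packet whose segment list steers traffic through two assumed service-function addresses; the addresses are invented and the sketch is unrelated to the thesis's actual implementations.

```python
from scapy.all import IPv6, IPv6ExtHdrSegmentRouting, UDP, Raw

# Illustrative segment IDs (SIDs): each identifies a service function instance.
sid_firewall = "fc00:1::1"
sid_dpi = "fc00:2::1"
dst_final = "2001:db8::10"   # the actual destination closes the chain

# Per RFC 8754 the segment list is stored in reverse order; segleft points at the
# next segment to visit, which must match the packet's current IPv6 destination.
pkt = (
    IPv6(src="2001:db8::1", dst=sid_firewall)
    / IPv6ExtHdrSegmentRouting(addresses=[dst_final, sid_dpi, sid_firewall], segleft=2)
    / UDP(dport=8080)
    / Raw(b"payload")
)
pkt.show()  # inspect the routing header; send(pkt) would emit it on an SRv6-aware network
```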

Relevance:

30.00%

Publisher:

Abstract:

This dissertation proposes an analysis of the governance of European scientific research, focusing on the emergence of the Open Science paradigm: a new way of doing science, oriented towards the openness of every phase of the scientific research process and able to take full advantage of digital ICTs. The emergence of this paradigm is relatively recent, but in the last few years it has become increasingly relevant. The European institutions have expressed a clear intention to embrace the Open Science paradigm (e.g., the European Open Science Cloud, EOSC, or the establishment of the Horizon Europe programme). This dissertation provides a conceptual framework for the multiple interventions of the European institutions in the field of Open Science, addressing the major legal challenges of its implementation. The study investigates the notion of Open Science, proposing a definition that takes into account all its dimensions in relation to the human and fundamental rights framework in which Open Science is grounded. The inquiry addresses the legal challenges related to the openness of research data, in light of the European Open Data framework and the impact of the GDPR on the context of Open Science. The last part of the study is devoted to the infrastructural dimension of the Open Science paradigm, exploring e-infrastructures. The focus is on a specific type of computational infrastructure: the High Performance Computing (HPC) facility. The adoption of HPC for research is analysed from the European perspective, investigating the EuroHPC project, and from the local perspective, through the case study of the HPC facility of the University of Luxembourg, the ULHPC. This dissertation intends to underline the relevance of a legal coordination approach, among all actors and phases of the process, in order to develop and implement the Open Science paradigm in adherence to the underlying human and fundamental rights.

Relevance:

30.00%

Publisher:

Abstract:

The deployment of ultra-dense networks is one of the most promising solutions for managing the co-channel interference that affects the latest wireless communication systems, especially in hotspots. To meet the requirements of the use cases and the immense amount of traffic generated in these scenarios, 5G ultra-dense networks are being deployed using various technologies, such as distributed antenna systems (DAS) and the cloud radio access network (C-RAN). Through these centralized densification schemes, virtualized baseband processing units coordinate the distributed access points and manage the available network resources. In particular, link adaptation techniques prove fundamental to overall system operation and performance enhancement. The core of this dissertation is the result of an analysis and comparison of dynamic and adaptive methods for modulation and coding scheme (MCS) selection applied to the latest mobile telecommunications standards. A novel algorithm based on proportional-integral-derivative (PID) controller principles and a block error rate (BLER) target has been proposed. Tests were conducted in a 4G and 5G system-level laboratory and, by means of a channel emulator, performance was evaluated for different channel models and target BLERs. Furthermore, due to the intrinsic sectorization of the end-user distribution in the investigated scenario, a preliminary analysis of the joint application of user grouping algorithms with multi-antenna and multi-user techniques was performed. In conclusion, the importance and impact of other fundamental physical layer operations, such as channel estimation and power control, on the overall end-to-end system behavior and performance were highlighted.
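For illustration, here is a minimal sketch of a PID-style outer loop that steers the MCS index toward a BLER target using per-transmission ACK/NACK feedback; the gains, MCS range, and toy channel model are assumptions made for the example and do not reproduce the algorithm evaluated in the dissertation.

```python
import random

class PidMcsController:
    """Adjust an MCS offset so the observed BLER tracks a target (e.g., 10%)."""

    def __init__(self, n_mcs=28, bler_target=0.1, kp=0.5, ki=0.05, kd=0.1):
        self.n_mcs = n_mcs
        self.bler_target = bler_target
        self.kp, self.ki, self.kd = kp, ki, kd
        self.offset = 0.0          # continuous correction mapped onto the MCS grid
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, nack):
        """nack: 1 if the last transport block failed, 0 if it was ACKed."""
        error = self.bler_target - nack        # positive -> loop can be more aggressive
        self.integral += error
        derivative = error - self.prev_error
        self.prev_error = error
        self.offset += self.kp * error + self.ki * self.integral + self.kd * derivative

    def mcs(self, inner_loop_estimate):
        """Combine the inner-loop (CQI-based) estimate with the outer-loop offset."""
        idx = round(inner_loop_estimate + self.offset)
        return max(0, min(self.n_mcs - 1, idx))

if __name__ == "__main__":
    random.seed(0)
    ctrl = PidMcsController()
    for tti in range(2000):
        mcs = ctrl.mcs(inner_loop_estimate=15)
        # Toy channel: higher MCS -> higher error probability (illustrative only).
        bler = min(0.95, 0.02 * max(0, mcs - 10))
        ctrl.update(nack=1 if random.random() < bler else 0)
    print("steady-state MCS:", ctrl.mcs(15))
```

Failures pull the offset down quickly while ACKs push it up slowly, so the controller settles around the MCS whose error rate matches the configured BLER target.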

Relevance:

30.00%

Publisher:

Abstract:

The accurate representation of the Earth Radiation Budget by General Circulation Models (GCMs) is a fundamental requirement for reliable historical and future climate simulations. In this study, we found reasonable agreement on a global scale between the integrated energy fluxes at the top of the atmosphere simulated by 34 state-of-the-art climate models and the observations provided by the Clouds and the Earth's Radiant Energy System (CERES) mission, but large regional biases were detected throughout the globe. Furthermore, we highlighted that good agreement between simulated and observed integrated Outgoing Longwave Radiation (OLR) fluxes may arise from the cancellation of opposite-in-sign systematic errors localized in different spectral ranges. To avoid this and to understand the causes of these biases, we compared the observed Earth emission spectra, measured by the Infrared Atmospheric Sounding Interferometer (IASI) in the period 2008-2016, with synthetic radiances computed from the atmospheric fields provided by the EC-Earth GCM. For this purpose, the fast σ-IASI radiative transfer model was used, after its validation and implementation in EC-Earth. From the comparison between observed and simulated spectral radiances, a positive temperature bias in the stratosphere and a negative temperature bias in the middle troposphere, as well as a dry bias in the upper-tropospheric water vapor concentration, were identified in the EC-Earth climate model. The analysis was performed in clear-sky conditions, but the feasibility of extending it to the presence of clouds, whose impact on radiation represents the greatest source of uncertainty in climate models, has also been demonstrated. Finally, the analysis of simulated and observed OLR trends indicated good agreement and provided detailed information on the spectral fingerprints of the evolution of the main climate variables.

Relevance:

30.00%

Publisher:

Abstract:

The application of modern ICT technologies is radically changing many fields, pushing toward more open and dynamic value chains and fostering the cooperation and integration of many connected partners, sensors, and devices. A notable example is the emerging Smart Tourism field, which derives from the application of ICT to tourism in order to create richer and more integrated experiences, making them more accessible and sustainable. From a technological viewpoint, a recurring challenge in these decentralized environments is the integration of heterogeneous services and data spanning multiple administrative domains, each possibly applying different security/privacy policies, device and process control mechanisms, service access and provisioning schemes, etc. The distribution and heterogeneity of those sources exacerbate the complexity of developing integration solutions, with consequently high effort and costs for the partners seeking them. Taking a step towards addressing these issues, we propose APERTO, a decentralized and distributed architecture that aims to facilitate the blending of data and services. At its core, APERTO relies on APERTO FaaS, a serverless platform allowing fast prototyping of the business logic, lowering the barrier to entry and the development costs for newcomers, providing fine-grained (down to zero) scaling of the resources serving end-users, and reducing management overhead. The APERTO FaaS infrastructure is based on asynchronous and transparent communications between the components of the architecture, allowing the development of optimized solutions that exploit the peculiarities of distributed and heterogeneous environments. In particular, APERTO addresses the provisioning of scalable and cost-efficient mechanisms targeting: i) function composition, allowing the definition of complex workloads from simple, ready-to-use functions, enabling smarter management of complex tasks and improved multiplexing capabilities; ii) the creation of end-to-end differentiated QoS slices minimizing interference among applications/services running on a shared infrastructure; iii) an abstraction providing uniform and optimized access to heterogeneous data sources; and iv) a decentralized approach for the verification of access rights to resources.
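As a toy illustration of the function-composition mechanism, the sketch below chains simple, ready-to-use asynchronous functions into one workload; the function names and the asyncio-based runner are illustrative and do not represent the APERTO FaaS API.

```python
import asyncio

# Simple, ready-to-use "functions" a platform might expose (illustrative names).
async def fetch_availability(request):
    await asyncio.sleep(0.01)          # stand-in for an asynchronous data-source call
    return {**request, "rooms": [101, 102]}

async def apply_pricing(request):
    await asyncio.sleep(0.01)
    return {**request, "prices": {room: 80.0 for room in request["rooms"]}}

async def render_offer(request):
    return f"Offer for {request['city']}: {request['prices']}"

def compose(*funcs):
    """Chain single-argument async functions into one composite workload."""
    async def workflow(payload):
        for func in funcs:
            payload = await func(payload)
        return payload
    return workflow

if __name__ == "__main__":
    booking_offer = compose(fetch_availability, apply_pricing, render_offer)
    print(asyncio.run(booking_offer({"city": "Bologna"})))
```

Each stage hands its result to the next through an asynchronous, transparent call, which is the composition pattern the architecture exposes so that complex workloads can be assembled from small reusable pieces.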