981 results for internet data centers


Relevance: 80.00%

Abstract:

This thesis comprises three ecological studies covering the 27 Brazilian state capitals: (1) the association between the availability of dentists and the number of dental procedures performed in public dental services; (2) the association between the availability of dentists and the proportion of restored teeth (relative to the total number of teeth affected by caries) in individuals aged 15 to 19; and (3) the association between the availability of dentists and the prevalence and severity of caries in individuals aged 15 to 19. The three investigations are presented as articles. Several secondary databases, freely available on the internet, were used. The first study identified an association between the number of Oral Health Teams (ESB) of the Family Health program, and of dentists in the SUS in general, and the number of dental procedures in the public service: the more ESB and dentists, the more dental procedures, both preventive and restorative. More dentists in the public dental service meant more preventive and collective procedures, but only a relatively small increase in restorations. The relatively small number of restorations performed by public-service dentists in Brazil is worrying, given the large number of teeth with untreated caries identified by the national oral health survey. The second study revealed that the number of dentists in the Brazilian capitals is very large, so there is sufficient installed capacity to meet all restorative treatment needs. However, the dental care index among 15-to-19-year-olds showed that fewer than half of the teeth affected by caries had received adequate care, i.e., had been restored. This study concluded that Brazilian society's large investment in dentistry, whether in the public or private sector, is not yielding the expected return, at least for 15-to-19-year-olds. The third study concluded that broad socioeconomic factors and water fluoridation were the main determinants of variation in caries prevalence and severity among 15-to-19-year-olds, and that the dentist's contribution was relatively small. Given the dentist's relatively small role in caries prevention, clinical effort should therefore emphasize treatments of greater complexity, aimed at restoring and rehabilitating damage relevant to function and well-being (Personal Health Services). Effective efforts to prevent dental caries occur mainly through population-level preventive strategies (Non-Personal Health Services), with a relatively small contribution from clinical work.

Relevance: 80.00%

Abstract:

Scheduling a set of jobs over a collection of machines to optimize a certain quality-of-service measure is one of the most important research topics in both computer science theory and practice. In this thesis, we design algorithms that optimize flow-time (or delay) of jobs for scheduling problems that arise in a wide range of applications. We consider the classical model of unrelated machine scheduling and resolve several long-standing open problems; we introduce new models that capture the novel algorithmic challenges of scheduling jobs in data centers or large clusters; we study the effect of selfish behavior in distributed and decentralized environments; and we design algorithms that strive to balance energy consumption and performance.
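As a concrete illustration of the flow-time objective, the minimal Python sketch below simulates SRPT (Shortest Remaining Processing Time), the classical policy that minimizes total flow-time on a single machine with preemption. This is only the simplest special case of the unrelated-machines setting the thesis studies, and the job data is hypothetical.

# A minimal sketch of the flow-time objective on a single machine.
# Flow-time of a job = completion time - release time. SRPT is the
# classical optimal preemptive policy for total flow-time on one machine.

def srpt_total_flow_time(jobs):
    """jobs: list of (release_time, processing_time) with integer times."""
    remaining = {i: p for i, (_, p) in enumerate(jobs)}
    completion = {}
    t = 0
    while remaining:
        # Jobs released by time t that still need work.
        ready = [i for i in remaining if jobs[i][0] <= t]
        if not ready:
            t = min(jobs[i][0] for i in remaining)  # idle until next release
            continue
        i = min(ready, key=lambda j: remaining[j])  # shortest remaining first
        remaining[i] -= 1
        t += 1
        if remaining[i] == 0:
            completion[i] = t
            del remaining[i]
    return sum(completion[i] - jobs[i][0] for i in completion)

# Example: three hypothetical jobs as (release, size); prints 12.
print(srpt_total_flow_time([(0, 5), (1, 2), (2, 1)]))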

The technically interesting aspect of our work is the surprising connections we establish between approximation and online algorithms, economics, game theory, and queuing theory. It is the interplay of ideas from these different areas that lies at the heart of most of the algorithms presented in this thesis.

The main contributions of the thesis fall into the following categories.

1. Classical Unrelated Machine Scheduling: We give the first polylogarithmic approximation algorithms for minimizing the average flow-time and for minimizing the maximum flow-time in the offline setting. In the online, non-clairvoyant setting, we design the first non-clairvoyant algorithm for minimizing the weighted flow-time in the resource augmentation model. Our work introduces the iterated rounding technique to offline flow-time optimization, and gives the first framework for analyzing non-clairvoyant algorithms on unrelated machines.

2. Polytope Scheduling Problem: To capture the multidimensional nature of the scheduling problems that arise in practice, we introduce the Polytope Scheduling Problem (PSP). PSP generalizes almost all classical scheduling models, and also captures hitherto unstudied scheduling problems such as routing multi-commodity flows, routing multicast (video-on-demand) trees, and multidimensional resource allocation. We design several competitive algorithms for PSP and its variants for the objectives of minimizing flow-time and completion time. Our work establishes many interesting connections between scheduling and market equilibrium concepts, between fairness and non-clairvoyant scheduling, and between the queueing-theoretic notion of stability and resource augmentation analysis.

3. Energy Efficient Scheduling: We give the first non-clairvoyant algorithm for minimizing total flow-time plus energy in the online, resource augmentation model, for the most general setting of unrelated machines.

4. Selfish Scheduling: We study the effect of selfish behavior in scheduling and routing problems. We define a fairness index for scheduling policies called bounded stretch, and show that for the objective of minimizing the average (weighted) completion time, policies with small stretch lead to equilibrium outcomes with a small price of anarchy. Our work gives the first linear/convex programming duality based framework for bounding the price of anarchy for general equilibrium concepts such as coarse correlated equilibria.

Relevance: 80.00%

Abstract:

Cloud services are exploding, and organizations are converging their data centers in order to take advantage of the predictability, continuity, and quality of service delivered by virtualization technologies. In parallel, energy-efficient and high-security networking is of increasing importance. Network operators and service and product providers require a new network solution to efficiently tackle the increasing demands of this changing network landscape. Software-defined networking has emerged as an efficient network technology capable of supporting the dynamic nature of future network functions and intelligent applications while lowering operating costs through simplified hardware, software, and management. This article raises the question of how to achieve a successful carrier-grade network with software-defined networking. Specific focus is placed on the challenges of network performance, scalability, security, and interoperability, together with proposed directions for potential solutions.

Relevance: 80.00%

Abstract:

The exponential growth in user and application data entails new means for providing fault tolerance and protection against data loss. High Performance Computing (HPC) storage systems, which are at the forefront of handling the data deluge, typically employ hardware RAID at the backend. However, such solutions are costly, do not ensure end-to-end data integrity, and can become a bottleneck during data reconstruction. In this paper, we design an innovative solution to achieve a flexible, fault-tolerant, and high-performance RAID-6 solution for a parallel file system (PFS). Our system utilizes low-cost, strategically placed GPUs, both on the client and server sides, to accelerate parity computation. In contrast to hardware-based approaches, we provide full control over the size, length, and location of a RAID array on a per-file basis, end-to-end data integrity checking, and parallelization of RAID array reconstruction. We have deployed our system in conjunction with the widely used Lustre PFS, and show that our approach is feasible and imposes acceptable overhead.
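To make the parity arithmetic concrete, here is a CPU-only Python sketch of the XOR-based P parity of RAID-6 and single-block reconstruction. RAID-6 additionally keeps a Q parity computed with Reed-Solomon coding over GF(2^8), and it is this kind of arithmetic that the paper offloads to GPUs; the stripe data below is hypothetical and the code is illustrative, not the paper's implementation.

# Illustrative RAID-6-style P parity: bytewise XOR across a stripe.

def p_parity(blocks):
    """XOR all equal-length data blocks into a P parity block."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    return bytes(parity)

def reconstruct(blocks, lost_index, parity):
    """Rebuild one lost data block from the survivors and P."""
    survivors = [b for i, b in enumerate(blocks) if i != lost_index]
    return p_parity(survivors + [parity])

data = [b"AAAA", b"BBBB", b"CCCC"]   # hypothetical stripe of data blocks
p = p_parity(data)
assert reconstruct(data, 1, p) == b"BBBB"   # recover the lost block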

Relevance: 80.00%

Abstract:

Recent advances in hardware development coupled with the rapid adoption and broad applicability of cloud computing have introduced widespread heterogeneity in data centers, significantly complicating the management of cloud applications and data center resources. This paper presents the CACTOS approach to cloud infrastructure automation and optimization, which addresses heterogeneity through a combination of in-depth analysis of application behavior with insights from commercial cloud providers. The aim of the approach is threefold: to model applications and data center resources, to simulate applications and resources for planning and operation, and to optimize application deployment and resource use in an autonomic manner. The approach is based on case studies from the areas of business analytics, enterprise applications, and scientific computing.

Relevance: 80.00%

Abstract:

Peak power consumption is the first-order design constraint of data centers. Although peak power consumption is rarely, if ever, observed, the entire data center facility must be provisioned for it, leading to inefficient use of its resources. The most prominent way of addressing this issue is to limit the power consumption of the data center IT facility to far below its theoretical peak value. Many approaches have been proposed to achieve this, based on the same small set of enforcement mechanisms, but there has been no corresponding work on systematically examining the advantages and disadvantages of each such mechanism. In the absence of such a study, it is unclear which mechanism is optimal for a given computing environment, which can lead to unnecessarily poor performance if an inappropriate scheme is used. This paper fills this gap by comparing, for the first time, five widely used power capping mechanisms under the same hardware/software setting. We also explore possible alternative power capping mechanisms beyond those previously proposed and evaluate them under the same setup. We systematically analyze the strengths and weaknesses of each mechanism in terms of energy efficiency, overhead, and predictability of behavior. We show how these mechanisms can be combined to implement an optimal power capping mechanism that reduces the slowdown, compared to the most widely used mechanism, by up to 88%. Our results provide interesting insights into the different trade-offs of power capping techniques, which will be useful for designing and implementing highly efficient power capping in the future.
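As a sketch of what one such enforcement mechanism can look like, the Python loop below implements a DVFS-style feedback controller that throttles CPU frequency whenever measured power exceeds the cap. The hooks read_power_watts and set_freq_mhz and the frequency steps are hypothetical placeholders, not a real platform API, and this is not a verbatim reproduction of any of the paper's five evaluated mechanisms.

import time

FREQ_STEPS_MHZ = [1200, 1600, 2000, 2400, 2800]  # assumed available P-states

def power_cap_loop(cap_watts, read_power_watts, set_freq_mhz, period_s=0.1):
    level = len(FREQ_STEPS_MHZ) - 1            # start at the highest frequency
    while True:
        power = read_power_watts()
        if power > cap_watts and level > 0:
            level -= 1                          # over the cap: throttle down
        elif power < 0.9 * cap_watts and level < len(FREQ_STEPS_MHZ) - 1:
            level += 1                          # headroom: throttle back up
        set_freq_mhz(FREQ_STEPS_MHZ[level])
        time.sleep(period_s)                    # control period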

Relevance: 80.00%

Abstract:

Interactive gallery installation that playfully re-contextualised online news feeds from CNN's website with a soundtrack of found music, in order to comment on an online environment where 'serious' news and trivial 'infotainment' often occupy the same space. ‘CNN Interactive just got more interactive’ aimed to investigate the balance between information and ‘info-tainment’ on the web. It demonstrated how the authority and presence of global news corporations online could be playfully subverted by enabling the audience to add a variety of emotively titled soundtracks to the monolithic CNN Interactive website. The project also explored how a work could exist dually as website and gallery installation. ‘CNN Interactive’ contributes to the taxonomy of new media art as a new form of contemporary art. One of the first gallery installations in the world to use live Internet data, it is also one of the first attempts in a new media art context to address how individuals respond to and comprehend the changed nature of the news as an immediate phenomenon relayed by networked communications systems. ‘CNN Interactive’ continues Craighead and Thomson’s research into how live digital networked information can be re-purposed as artistic material within gallery installation contexts, but with specific reference to international online news events rather than arbitrary data sources (see e-poltergeist, output 1). ‘CNN Interactive’ was commissioned by Tate Britain for the exhibition ‘Art and Money Online’, the first gallery exhibition at Tate Britain featuring work that utilised and explored new media as an artistic area, and the first work commissioned by the Tate to operate simultaneously as an online gallery artwork. Selected reviews and citations include ‘Digital Art’ by Christiane Paul (2003); ‘Internet Art: The Online Clash of Culture and Commerce’ by Julian Stallabrass (2002); and ‘Thomson & Craighead’ by Lisa Le Feuvre for Katalog, Journal of Photography and Video, Denmark. All work is developed jointly and equally between Craighead and her collaborator, Jon Thomson (Slade).

Relevance: 80.00%

Abstract:

Thesis (Master's)--University of Washington, 2015

Relevance: 80.00%

Abstract:

"Thèse en vue de l'obtention du grade de docteur en droit de l'Université Panthéon-Assas (Paris II) et de docteur en droit de la faculté de droit de l'Université de Montréal en droit privé"

Relevance: 80.00%

Abstract:

Training is a key strategy for skills development. Companies continue to invest in training and development, yet they rarely possess data to evaluate the results of this investment. Most companies use the Kirkpatrick/Phillips model to evaluate corporate training. However, the literature shows that companies have difficulty using this model. The main barriers are the difficulty of isolating learning as a factor influencing results, the absence of a useful evaluation system integrated with the Learning Management System (LMS), and the lack of standardized data for comparing different learning functions. In this thesis, we propose a model (Analysis, Modeling, Monitoring and Optimization - AM2O) for managing corporate training projects, based on Business Process Management (BPM). Such a scenario assumes that corporate training activities should be treated as business processes. Our model is inspired by this method (BPM), through the definition and monitoring of performance indicators to manage training projects in organizations. It is based on the analysis and modeling of training needs, to ensure alignment between training activities and the company's business objectives. It enables the monitoring of training projects as well as the calculation of the tangible and intangible benefits of training (at no additional cost). It also produces a classification of training projects according to company-specific criteria. Thus, with enough data, our approach can be used to optimize the return on training through a series of simulations using machine learning algorithms: logistic regression, neural networks, and co-training. Finally, we designed an information system, the Enterprise TRaining programs Evaluation and Optimization System - ETREOSys, for managing corporate training programs and supporting decision making. ETREOSys is a Web platform using cloud services and NoSQL databases. Through AM2O and ETREOSys we address the main problems in managing and evaluating corporate training, namely the difficulty of isolating the effects of training on company results and the lack of supporting information systems.
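As a hedged illustration of the kind of simulation AM2O enables once enough project data has accumulated, the Python sketch below fits a logistic regression that predicts whether a training project will meet its business objective; the features and data are entirely hypothetical placeholders for real KPI data.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical per-project features: [hours of training, LMS completion
# rate, pre/post assessment gain]; label: business objective met (1) or not (0).
X = np.array([[10, 0.9, 0.25], [4, 0.5, 0.05], [16, 0.8, 0.30],
              [6, 0.6, 0.10], [12, 0.95, 0.20], [3, 0.4, 0.02]])
y = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

# Simulate a candidate project and estimate its probability of success.
candidate = np.array([[8, 0.7, 0.15]])
print(model.predict_proba(candidate)[0, 1])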

Relevance: 80.00%

Abstract:

Based on existing knowledge of the development of the brickworks in the Nemocón area, and on the tools and disciplines applicable to the study of an internationalization plan, the following process was defined. First, given the need for a diagnostic analysis of the brick sector in Nemocón, Cundinamarca, it is essential to use tools from the strategy field. A matrix-based study of the sector's behavior, with its corresponding analysis, is required; and to give the sector analysis greater technical support, a scenario-based strategic planning study is needed, which will provide the research with various alternatives for selecting the system's key variables. For the study of both the national and international markets, the basic skills of the marketing field will be applied. Building on the sector analysis and the key variables obtained, the most important variables for developing a market matrix must be examined in depth. The purpose of this matrix is to apply a series of country filters, based on the previously specified variables, in order to identify a single market as the target of the internationalization plan. For the selected country, a market study and a financial feasibility study are carried out, from which the conclusions of the internationalization plan, its feasibility, and its recommendations are drawn.

Relevance: 80.00%

Abstract:

If a company or person wants to invest a lot of money, where, when, and how should the investment go? A multi-agent based Financial Investment Planner may give some reasonable answers to this question. Good advice rests mainly on adequate information, rich knowledge, and great skill in using that knowledge and information. To this end, the planner consists of four principal components: information gathering agents, which are responsible for gathering relevant information on the Internet; data mining agents, which are in charge of discovering knowledge from the retrieved information as well as other relevant databases; group decision making agents, which can effectively use available knowledge and appropriate information to make reasonable decisions (investment advice); and a graphical user interface that interacts with users. This paper focuses on the group decision making part. The design and implementation of an agent-based hybrid intelligent system, an agent-based soft computing society, are detailed.
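A schematic Python sketch of how the four components could be wired together follows; all class and method names are hypothetical simplifications for illustration, not the system's actual implementation.

# Hypothetical simplification of the planner's four-component architecture.

class InformationGatheringAgent:
    def gather(self, topic):
        # In the real system this would crawl Internet sources.
        return [f"raw data about {topic}"]

class DataMiningAgent:
    def mine(self, raw_items):
        # Discover knowledge from retrieved information and databases.
        return {"signals": raw_items}

class GroupDecisionAgent:
    def decide(self, knowledge):
        # Combine knowledge into investment advice; real agents negotiate.
        return f"advice based on {len(knowledge['signals'])} signal(s)"

def plan_investment(topic):
    raw = InformationGatheringAgent().gather(topic)
    knowledge = DataMiningAgent().mine(raw)
    return GroupDecisionAgent().decide(knowledge)

print(plan_investment("tech stocks"))   # the GUI would present this to users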

Relevance: 80.00%

Abstract:

Cloud is becoming a dominant computing platform. However, we have seen little work on how to protect cloud data centers. As a cloud usually hosts many different types of applications, the traditional packet-level firewall mechanism is not suitable for cloud platforms in the face of complex attacks. It is necessary to perform anomaly detection at the event level. Moreover, the objects to be protected are more diverse than with a traditional firewall. Motivated by this, we propose a general framework for a cloud firewall, which features an event-level detection chain with dynamic resource allocation. We establish a mathematical model for the proposed framework. Moreover, a linear resource investment function is proposed for economical dynamic resource allocation in cloud firewalls. Several conclusions are drawn for the reference of cloud service providers and designers.
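As a hedged illustration of how a linear resource investment function might drive provisioning along the detection chain, consider the Python sketch below; the coefficients and rates are hypothetical, not values from the paper.

# Linear investment: resources per stage grow linearly with its event rate.

def linear_investment(event_rate, base_units=1.0, units_per_event=0.002):
    """Resource units to provision for a stage given its event rate (ev/s)."""
    return base_units + units_per_event * event_rate

def provision_chain(stage_rates, unit_cost=0.05):
    """Allocate resources along the detection chain and total the cost."""
    allocation = [linear_investment(r) for r in stage_rates]
    return allocation, unit_cost * sum(allocation)

# Example: three-stage detection chain with decreasing event rates,
# since earlier stages filter traffic before passing it on.
alloc, cost = provision_chain([5000, 1200, 300])
print(alloc, cost)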

Relevance: 80.00%

Abstract:

Cloud computing is becoming popular as the next infrastructure of the computing platform. However, with data and business applications outsourced to a third party, how to protect cloud data centers from numerous attacks has become a critical concern. In this paper, we propose a clusterized framework for a cloud firewall, which features performance and cost evaluation. To provide a quantitative performance analysis of the cloud firewall, a novel M/Geo/1 analytical model is established. The model allows cloud defenders to extract key system measures, such as request response time, and to determine how many resources are needed to guarantee quality of service (QoS). Moreover, we give insight into the financial cost of the proposed cloud firewall. Finally, our analytical results are verified by simulation experiments.
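For intuition about the kind of quantity such a model yields, the Python sketch below treats the firewall as an M/G/1 queue with geometrically distributed service times (one reading of M/Geo/1) and applies the Pollaczek-Khinchine formula for the mean response time; the paper's analytical model is more detailed, and the parameters here are hypothetical.

# Mean response time for Poisson arrivals and Geometric(p) service slots,
# via the Pollaczek-Khinchine formula for M/G/1 queues.

def mean_response_time(arrival_rate, p):
    es = 1.0 / p                      # E[S] for Geometric(p) on {1, 2, ...}
    es2 = (2.0 - p) / (p * p)         # E[S^2] = Var[S] + E[S]^2
    rho = arrival_rate * es           # utilization; need rho < 1 for stability
    if rho >= 1.0:
        raise ValueError("unstable: offered load exceeds capacity")
    wq = arrival_rate * es2 / (2.0 * (1.0 - rho))   # mean queueing delay
    return wq + es                    # waiting time plus service time

# E.g., 0.4 requests per slot with mean service of 2 slots (p = 0.5): prints 8.0.
print(mean_response_time(0.4, 0.5))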

Relevance: 80.00%

Abstract:

At present, companies and standards organizations are enhancing Ethernet as the unified switch fabric for all of the TCP/IP traffic, storage traffic, and high performance computing traffic in data centers. Backward congestion notification (BCN) is the basic mechanism for the end-to-end congestion management enhancement of Ethernet. To fulfill the special requirements of the unified switch fabric, i.e., losslessness and low transmission delay, BCN should hold buffer occupancy tightly around a target point. Thus, the stability of the control loop and the buffer size are critical to BCN. Currently, the impacts of delay on the performance of BCN are unidentified. When the speed of Ethernet increases to 40 Gbps or 100 Gbps in the near future, the number of in-flight packets will become of the same order as the buffer size of the switch, and the impacts of delay will accordingly become significant. In this paper, we analyze BCN, paying special attention to delay. We model the BCN system with a set of segmented delay differential equations, and then deduce a sufficient condition for the uniformly asymptotic stability of BCN. Subsequently, the bounds of buffer occupancy are estimated, which provides direct guidelines for setting the buffer size. Finally, numerical analysis and experiments on the NetFPGA platform verify our theoretical analysis.
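For intuition about why delay matters, the toy Python simulation below models a BCN-like loop in which the source reacts to buffer occupancy only after a feedback delay, which is the effect the paper's delay differential equations capture analytically; increasing the delay relative to the gain makes occupancy oscillate more widely around the target point. All constants are hypothetical illustration values.

# Toy discrete-time queue with delayed rate feedback toward a target point.

def simulate_bcn(steps=200, target=100.0, delay=10, gain=0.01,
                 capacity=1.0, rate=1.2):
    queue, history = 50.0, []
    rates = [rate] * delay                       # feedback still in flight
    for _ in range(steps):
        queue = max(0.0, queue + rates[0] - capacity)   # arrivals minus drain
        history.append(queue)
        # Feedback computed on current occupancy takes effect 'delay' steps later.
        feedback = gain * (target - queue)
        rates = rates[1:] + [max(0.0, rates[-1] + feedback)]
    return history

occupancy = simulate_bcn()
print(min(occupancy), max(occupancy))   # larger delay -> wider oscillations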