939 results for abstract data type
Abstract:
A spatial object consists of data assigned to points in a space. Spatial objects, such as memory states and three-dimensional graphical scenes, are diverse and ubiquitous in computing. We develop a general theory of spatial objects by modelling abstract data types of spatial objects as topological algebras of functions. One useful algebra is that of continuous functions, with operations derived from operations on space and data, and equipped with the compact-open topology. Terms are used as abstract syntax for defining spatial objects, and conditional equational specifications are used for reasoning. We pose a completeness problem: Given a selection of operations on spatial objects, do the terms approximate all the spatial objects to arbitrary accuracy? We give some general methods for solving the problem and consider their application to spatial objects with real number attributes. © 2011 British Computer Society.
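To make the algebraic setting concrete, here is a minimal sketch (not from the paper; it assumes a one-dimensional space with real-number data, and all names are illustrative): a spatial object is a function from points to data, operations on spatial objects are derived pointwise from operations on data, and terms over a few primitives denote spatial objects.

    # A minimal sketch: spatial objects as functions from points to data,
    # with operations on objects derived pointwise from operations on data.
    from typing import Callable
    import operator

    Point = float                 # a one-dimensional space, for illustration
    Data = float                  # real-number attributes
    SpatialObject = Callable[[Point], Data]

    def lift(op: Callable[[Data, Data], Data],
             f: SpatialObject, g: SpatialObject) -> SpatialObject:
        """Derive an operation on spatial objects from an operation on data."""
        return lambda p: op(f(p), g(p))

    # Primitive spatial objects playing the role of the syntax's constants.
    coordinate: SpatialObject = lambda p: p
    one: SpatialObject = lambda p: 1.0

    # The term p |-> p * p + 1, built from primitives and lifted operations.
    term = lift(operator.add, lift(operator.mul, coordinate, coordinate), one)
    print(term(3.0))              # 10.0

The completeness question then asks whether such terms come arbitrarily close, in the compact-open topology, to every continuous spatial object.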
Abstract:
Visualistics, computer science, picture syntax, picture semantics, picture pragmatics, interactive pictures
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
E-learning represents an innovation in teaching arising from the development of new technologies. It is based on a set of educational resources, including, among others, multimedia or interactive content accessible through Internet or intranet networks. A whole spectrum of tools and services supports e-learning; some of them include self-assessment and automated correction of test-like exercises. However, this sort of exercise is very constrained by its nature: fixed contents and fixed correct answers limit the ways teachers may evaluate students. In this paper we propose a new engine for validating complex exercises in the area of Data Structures and Algorithms. Correct solutions to exercises do not rely only on how good the execution of the code is, or on whether the results are as expected; a set of criteria on algorithm complexity or on correct use of the data structures is also required. The engine presented in this work covers a wide set of exercises with these characteristics, allowing teachers to establish the set of requirements for a solution, and students to obtain a measure of the quality of their solution in the same terms that are later required in exams.
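As a rough illustration of what such an engine must check beyond plain output comparison, the following hypothetical sketch grades a submission on two axes: functional correctness on test cases, and a crude empirical bound on its time complexity (all names and thresholds are invented for the example, not taken from the paper):

    # A hypothetical grader: checks outputs against test cases and rejects
    # submissions whose running time grows much faster than ~n log n.
    import random
    import time

    def check_correctness(solution, test_cases):
        return all(solution(inp) == expected for inp, expected in test_cases)

    def within_linearithmic(solution, sizes=(1_000, 10_000, 100_000)):
        times = []
        for n in sizes:
            data = [random.random() for _ in range(n)]
            start = time.perf_counter()
            solution(data)
            times.append(time.perf_counter() - start)
        # Across each tenfold size increase, n log n grows ~11-13x while n^2
        # grows ~100x; the factor 30 is an illustrative cut-off between them.
        return all(t2 / t1 < 30 for t1, t2 in zip(times, times[1:]))

    student_sort = sorted   # a student's submission would be plugged in here
    cases = [([3, 1, 2], [1, 2, 3]), ([], [])]
    print(check_correctness(student_sort, cases),
          within_linearithmic(student_sort))

A real engine would of course sandbox the submitted code and use more robust criteria, but the split between correctness requirements and complexity requirements mirrors the one described above.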
Abstract:
Concurrent data types are concurrent implementations of classical data abstractions, specifically designed to exploit the high degree of parallelism available in modern multiprocessor and multi-core architectures. The correct manipulation of concurrent data types is essential for the overall correctness of the software systems built using them. A major difficulty in designing and verifying concurrent data types arises from the need to reason about any number of threads invoking the data type simultaneously, which requires considering parametrized systems. In this work we study the formal verification of temporal properties of parametrized concurrent systems, with a special focus on programs that manipulate concurrent data structures. The main difficulty in reasoning about concurrent parametrized systems comes from the combination of their inherently high concurrency and the manipulation of dynamic memory. This parametrized verification problem is very challenging, because it requires reasoning about complex concurrent data structures being accessed and modified by threads which simultaneously manipulate the heap using unstructured synchronization methods. In this work, we present a formal framework based on deductive methods which is capable of dealing with the verification of safety and liveness properties of concurrent parametrized systems that manipulate complex data structures. Our framework includes special proof rules and techniques adapted for parametrized systems, which work in collaboration with specialized decision procedures for complex data structures. A novel aspect of our framework is that it cleanly differentiates the analysis of the program control flow from the analysis of the data being manipulated. The program control flow is analyzed using deductive proof rules and verification techniques specifically designed for coping with parametrized systems. Starting from a concurrent program and a temporal specification, our techniques generate a finite collection of verification conditions whose validity entails the satisfaction of the temporal specification by any client system, regardless of the number of threads. The verification conditions correspond to the data manipulation. We study the design of specialized decision procedures to deal with these verification conditions fully automatically. We investigate decidable theories capable of describing rich properties of complex pointer-based data types such as stacks, queues, lists and skiplists. For each of these theories we present a decision procedure and its practical implementation on top of existing SMT solvers. These decision procedures are ultimately used for automatically verifying the verification conditions generated by our specialized parametrized verification techniques. Finally, we show how, using our framework, it is possible to prove not only safety but also liveness properties of concurrent versions of some mutual exclusion protocols and programs that manipulate concurrent data structures.
The approach we present in this work is very general, and can be applied to verify a wide range of similar concurrent data types.
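As a loose illustration of the final step, discharging a verification condition with an SMT solver, here is a toy example using the z3 Python bindings (the condition itself is invented for the example; the thesis's actual decision procedures target much richer theories of heaps, stacks, queues, lists and skiplists):

    # A toy verification condition checked with Z3 (pip install z3-solver):
    # if a push increments the stack size, the size strictly grows.
    from z3 import Implies, Int, Not, Solver, sat

    size, size_after = Int('size'), Int('size_after')
    vc = Implies(size_after == size + 1, size_after > size)

    s = Solver()
    s.add(Not(vc))        # a VC is valid iff its negation is unsatisfiable
    print("VC valid" if s.check() != sat else "VC invalid")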
Abstract:
Introduction: Type 2 diabetes (DM2) is a chronic disease that has been growing exponentially in developed countries, and even more so in developing countries such as Brazil. In addition, the pathology generates a significant cost to public healthcare systems. It is well known that poor control of diabetes has important consequences for the lives of individuals diagnosed with the disease, such as early loss of functionality and reduced quality of life. In this context, the Brazilian federal government established the Programa Hiperdia in 2002, a program that provides therapeutic education and multidisciplinary care in order to prevent and control the consequences of diabetes. Objective: The aim of this study is to evaluate how the presence and the diagnosis time of DM2 are associated with the functionality and quality of life of individuals assisted by the Programa Hiperdia. Methodology: We evaluated individuals aged 40 years or older living in Viçosa, Minas Gerais/Brazil, and divided them into different groups according to analytical perspectives 1 (the study of the presence of DM2) and 2 (the study of the diagnosis time of DM2). For perspective 1, two groups were compared: the DM2 group, consisting of individuals diagnosed with type 2 diabetes, and the control group (CTL), consisting of individuals without type 2 diabetes or any disease in its target organs. For perspective 2, people diagnosed with type 2 diabetes were divided into two groups: G1, individuals with a diagnosis time ≥ 1 year and ≤ 5 years; and G2, individuals with a diagnosis time ≥ 10 years. Prior to group assignment, we assessed the cognitive status of all participants with the Mini Mental State Exam (MMSE). Sociodemographic and clinical data (i.e. screening for depressive symptoms, excessive daytime sleepiness and anthropometry) were also evaluated, as well as the biochemical profile based on information from the local Hiperdia center. To study functionality, the Activities of Daily Living, Instrumental Activities of Daily Living and Life Style Questionnaire instruments were administered. Quality of life was assessed via the SF-36v2 Health Survey. Finally, variables such as knowledge about DM2 and disease management were also verified. Results: 198 subjects (CTL: 81; DM2: 117) aged ≥ 40 years were evaluated, of whom 55.5% were aged 60 years or older. The majority of subjects were women (62.6%). Cognitive status scores were similar across both analytical perspectives. For perspective 1 (DM2 vs. CTL), statistically significant differences between the groups were found for most of the variables studied, with poorer results in the DM2 group. Regarding perspective 2 (G1 vs. G2), our results did not show significant differences for diagnosis time in any of the variables studied. Conclusions: Our findings show that the presence of inadequately controlled DM2, as well as lack of knowledge about the disease among individuals assisted by the Hiperdia center, may represent important factors in the poor functionality and reduced quality of life observed relative to the control group. This suggests that the Program likely needs some adjustments to its implementation in order to make achieving its proposed objectives possible.
With respect to the diagnosis time for DM2 in our sample, the results indicate that it does not seem to be a factor in reduced functionality or quality of life.
Abstract:
Information systems are widespread and used by anyone with a computing device, as well as by corporations and governments. It is often the case that security leaks are introduced during the development of an application. The reasons for these security bugs are multiple, but among them one can easily identify that it is very hard to define and enforce relevant security policies in modern software. This is because modern applications often rely on container sharing and multi-tenancy where, for instance, data can be stored in the same physical space but is logically mapped into different security compartments or data structures. In turn, these security compartments, into which data is classified by security policies, can also be dynamic and depend on runtime data. In this thesis we introduce and develop the novel notion of dependent information flow types, and focus on the problem of ensuring data confidentiality in data-centric software. Dependent information flow types fit within the standard framework of dependent type theory but, unlike usual dependent types, crucially allow the security level of a type, rather than just the structural data type itself, to depend on runtime values. Our dependent function and dependent sum information flow types provide a direct, natural and elegant way to express and enforce fine-grained security policies on programs, namely programs that manipulate structured data types in which the security level of a structure field may depend on values dynamically stored in other fields. The main contribution of this work is an efficient analysis that allows programmers to verify, during the development phase, whether programs have information leaks, that is, whether programs protect the confidentiality of the information they manipulate. We also implemented a prototype typechecker, which can be found at http://ctp.di.fct.unl.pt/DIFTprototype/.
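To convey the idea of a security level that depends on a runtime value, here is a hypothetical sketch (in Python, so the policy is checked at runtime; dependent information flow types enforce such policies statically, and the record and policy below are invented for the example):

    # The confidentiality level of `salary` depends on the value of `role`:
    # interns' salaries are public, everyone else's are confidential.
    LEVELS = {"public": 0, "confidential": 1}

    def salary_level(role: str) -> int:
        return LEVELS["public"] if role == "intern" else LEVELS["confidential"]

    def read_salary(record: dict, observer_level: int) -> int:
        required = salary_level(record["role"])  # level depends on a field value
        if observer_level < required:
            raise PermissionError("information-flow violation")
        return record["salary"]

    alice = {"role": "manager", "salary": 90_000}
    print(read_salary(alice, LEVELS["confidential"]))  # allowed
    # read_salary(alice, LEVELS["public"])             # would raise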
Abstract:
This thesis presents an overview of the Open Data research area and establishes the research evidence base through a Systematic Mapping Study (SMS). A total of 621 publications published between 2005 and 2014 were identified, of which 243 were selected in the review process. The thesis highlights the implications of the proliferation of Open Data principles in the emerging era of accessibility, reusability and sustainability of data transparency. The findings of the mapping study are described in quantitative and qualitative terms based on organization affiliation, country, year of publication, research method, star rating and the units of analysis identified. Furthermore, the units of analysis were categorized by development lifecycle, linked open data, type of data, technical platforms, organizations, ontology and semantics, adoption and awareness, intermediaries, security and privacy, and supply of data, which are important components for providing quality open data applications and services. The results of the mapping study help organizations (such as academia, government and industry), researchers and software developers to understand the existing trends in open data, the latest research developments and the demand for future research. In addition, the proposed conceptual framework for Open Data research can be adopted and expanded to strengthen and improve current open data applications.
Abstract:
Knowledge discovery in databases is the non-trivial process of identifying valid, novel, potentially useful and ultimately understandable patterns in data. The term data mining refers to the process of performing exploratory analysis on the data and building models from it. To infer patterns from data, data mining involves approaches such as association rule mining, classification techniques and clustering techniques. Among the many data mining techniques, clustering plays a major role, since it helps to group related data for assessing properties and drawing conclusions. Most clustering algorithms act on a dataset with a uniform format, since the similarity or dissimilarity between data points is a significant factor in finding the clusters. If a dataset consists of mixed attributes, i.e. a combination of numerical and categorical variables, a preferred approach is to convert the different formats into a uniform format. This research study explores various techniques for converting mixed datasets to a numerical equivalent, so as to make them suitable for statistical and similar algorithms. The results of clustering mixed-category data after conversion to a numeric data type are demonstrated using a crime dataset. The thesis also proposes an extension to a well-known algorithm for handling mixed data types, to deal with datasets having only categorical data. The proposed conversion has been validated on a dataset concerning breast cancer. Moreover, another issue with the clustering process is the visualization of output. Different geometric techniques, such as scatter plots or projection plots, are available, but none of them displays the result as a projection of the whole database; rather, they support attribute-pairwise analysis.
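One common rendering of the conversion step (the thesis's own scheme may differ; this sketch uses scikit-learn and invented toy data) is to one-hot encode the categorical columns so the mixed dataset becomes purely numeric, and then cluster it:

    # Mixed data -> numeric via one-hot encoding, then k-means clustering.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.compose import ColumnTransformer
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import OneHotEncoder, StandardScaler

    # Toy mixed dataset: [age, city] per row.
    X = np.array([[25, "NY"], [30, "LA"], [27, "NY"], [52, "SF"]], dtype=object)

    to_numeric = ColumnTransformer([
        ("num", StandardScaler(), [0]),   # numerical column, standardized
        ("cat", OneHotEncoder(), [1]),    # categorical column -> 0/1 dummies
    ])
    model = make_pipeline(to_numeric,
                          KMeans(n_clusters=2, n_init=10, random_state=0))
    print(model.fit_predict(X))           # one cluster label per row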
Abstract:
Type 1 Diabetes Mellitus (DM1) is the most common endocrinopathy of childhood and adolescence and has a negative impact on quality of life (QoL). The EuroQol is an instrument that measures health status; it has been used in the great majority of multicenter diabetes studies worldwide and has proved to be an extremely useful and reliable tool. The aim of this study is to evaluate the QoL of patients with DM1 in Brazil, a country of continental proportions, through analysis of the EuroQol. To this end, a retrospective cross-sectional study was conducted in which questionnaires from patients with DM1, answered between December 2008 and December 2010, were analyzed from 28 research centers in 20 cities across the country's four regions (Southeast, North/Northeast, South and Center-West). Data on chronic micro- and macrovascular complications and lipid profile were also collected. The EuroQol assessment shows that the mean score attributed to general health status is clearly lower than that found in two other population studies of DM1 conducted in Europe (EQ-VAS for Germany, the Netherlands and Brazil were 82.1 ± 14, 81 ± 15 and 72 ± 22, respectively). The EuroQol shows that the North/Northeast region has a better general health status index than the Southeast region, and a lower frequency of self-reported anxiety-depression than the other regions of the country (North/Northeast = 1.53 ± 0.6, Southeast = 1.65 ± 0.7, South = 1.72 ± 0.7 and Center-West = 1.67 ± 0.7; p < 0.05). In addition, several known variables (age, duration of diabetes, physical activity, HbA1c, fasting glucose and presence of chronic complications) correlated with QoL (r = -0.1, p < 0.05; r = -0.1, p < 0.05; r = -0.1, p < 0.05; r = -0.2, p < 0.05; r = -0.1, p < 0.05 and r = -0.1, p < 0.05, respectively). This is the first study to evaluate the quality of life of patients with DM1 at a population level in the southern hemisphere. Our data indicate a worse quality of life for patients with DM1 in Brazil compared with data from European countries. Although a shorter duration of diabetes and a lower presence of microvascular complications were found in the North/Northeast region compared with other regions, our data suggest the existence of additional factors responsible for the better QoL and lower presence of anxiety/depression found in this region. Further studies are needed to identify these possible factors.
Abstract:
Deforestation is an evident process in the Amazon, resulting from the predatory anthropic exploitation of natural resources. Logging and agriculture/cattle ranching are the main activities that have driven the destruction of the forest in the Arc of Deforestation. Reforestation, however, has been the focus of public policies that the Government has developed through the Arco Verde Program. In Pará this project is being applied in 16 municipalities that make up the critical deforestation areas, owing to the anthropic pressures exerted there. In this context, agroforestry systems have been one of the alternatives for reforesting these areas. This work aimed to identify preferential areas for planting 15 forest species with potential for use in agroforestry systems. From the mapping of the occurrence of the selected forest species, and the cross-referencing of geographic data on climatic typology and water deficiency, 24 bioclimatic zones were identified in the Arco Verde of Pará. The results for planting the forest species in preferential areas were: J. copaia, T. serratifolia and B. excelsa have potential to be planted in 100% of the Arco Verde of Pará; C. pentandra, H. courbaril, S. morototoni and T. vulgaris are indicated for planting in 98% of the target area; C. odorata, C. goeldiana, D. odorata and S. macrophylla are indicated for 75% of the Arco Verde of Pará; and C. guianensis, S. parahyba var. amazonicum, B. guianensis and V. maxima for 60% of the studied area. In short, studies of the forest species indicated for the most extensive preferential areas need to be intensified.
Abstract:
Approximately one-third of US adults have metabolic syndrome, the clustering of cardiovascular risk factors that include hypertension, abdominal adiposity, elevated fasting glucose, low high-density lipoprotein (HDL)-cholesterol and elevated triglyceride levels. While the definition of metabolic syndrome continues to be much debated among leading health research organizations, the fact is that individuals with metabolic syndrome have an increased risk of developing cardiovascular disease and/or type 2 diabetes. A recent report by the Henry J. Kaiser Family Foundation found that the US spent $2.2 trillion (16.2% of the Gross Domestic Product) on healthcare in 2007 and cited that, among other factors, chronic diseases, including type 2 diabetes and cardiovascular disease, are large contributors to this growing national expenditure. Bearing a substantial portion of this cost are employers, the leading providers of health insurance. In light of this, many employers have begun implementing health promotion efforts to counteract these rising costs. However, evidence-based practices, uniform guidelines and policy do not exist for this setting in regard to the prevention of metabolic syndrome risk factors as defined by the National Cholesterol Education Program (NCEP) Adult Treatment Panel III (ATP III). Therefore, the aim of this review was to determine the effects of worksite-based behavior change programs on reducing the risk factors for metabolic syndrome in adults. OVID MEDLINE was searched using relevant search terms for peer-reviewed literature published since 1998, resulting in 23 articles that met the inclusion criteria for the review. The American Dietetic Association's Evidence Analysis Process was used to abstract data from selected articles, assess the quality of each study, compile the evidence, develop a summarized conclusion, and assign a grade based upon the strength of supporting evidence. The results revealed that participating in a worksite-based behavior change program may be associated with improvement in one or more metabolic syndrome risk factors. Programs that delivered a higher dose (>22 hours) over a shorter duration (<2 years), using two or more behavior-change strategies, were associated with more metabolic risk factors being positively impacted. A Conclusion Grade of III was obtained for the evidence, indicating that studies were of weak design or that results were inconclusive due to inadequate sample sizes, bias and lack of generalizability. These results provide some support for the continued use of worksite-based health promotion, and further research is needed to determine if multi-strategy, intense behavior change programs targeting multiple risk factors are able to sustain health improvements in the long term.
Abstract:
Data centers are easily found in every sector of the worldwide economy. They consist of tens of thousands of servers, serving millions of users globally, 24-7. In recent years, e-Science applications such as e-Health or Smart Cities have experienced significant development. The need to deal efficiently with the computational needs of next-generation applications, together with the increasing demand for higher resources in traditional applications, has facilitated the rapid proliferation and growth of data centers. A drawback of this capacity growth has been the rapid increase in the energy consumption of these facilities. In 2010, data center electricity represented 1.3% of all the electricity use in the world. In the year 2012 alone, global data center power demand grew 63% to 38GW. A further rise of 17% to 43GW was estimated in 2013. Moreover, data centers are responsible for more than 2% of total carbon dioxide emissions. This PhD Thesis addresses the energy challenge by proposing proactive and reactive thermal- and energy-aware optimization techniques that contribute to placing data centers on a more scalable curve. This work develops energy models and uses knowledge about the energy demand of the workload to be executed and the computational and cooling resources available at the data center to optimize energy consumption. Moreover, data centers are considered as a crucial element within their application framework, optimizing not only the energy consumption of the facility, but the global energy consumption of the application. The main contributors to the energy consumption in a data center are the computing power drawn by IT equipment and the cooling power needed to keep the servers within a temperature range that ensures safe operation.
Because of the cubic relation of fan power with fan speed, solutions based on over-provisioning cold air into the server usually lead to inefficiencies. On the other hand, higher chip temperatures lead to higher leakage power because of the exponential dependence of leakage on temperature. Moreover, workload characteristics as well as allocation policies have an important impact on the leakage-cooling tradeoffs. The first key contribution of this work is the development of power and temperature models that accurately describe the leakage-cooling tradeoffs at the server level, and the proposal of strategies to minimize server energy via joint cooling and workload management from a multivariate perspective. When scaling to the data center level, a similar behavior in terms of leakage-temperature tradeoffs can be observed. As room temperature rises, the efficiency of data room cooling units improves. However, as we increase room temperature, CPU temperature rises and so does leakage power. Moreover, the thermal dynamics of a data room exhibit unbalanced patterns due to both the workload allocation and the heterogeneity of the computing equipment. The second main contribution is the proposal of thermal- and heterogeneity-aware workload management techniques that jointly optimize the allocation of computation and cooling to servers. These strategies need to be backed up by flexible room-level models, able to work at runtime, that describe the system from a high-level perspective. Within the framework of next-generation applications, decisions taken at this scope can have a dramatic impact on the energy consumption of lower abstraction levels, i.e. the data center facility. It is important to consider the relationships between all the computational agents involved in the problem, so that they can cooperate to achieve the common goal of reducing energy in the overall system. The third main contribution is the energy optimization of the overall application by evaluating the energy costs of performing part of the processing in any of the different abstraction layers, from the node to the data center, via workload management and off-loading techniques. In summary, the work presented in this PhD Thesis makes contributions on leakage- and cooling-aware server modeling and optimization, data center thermal modeling and heterogeneity-aware data center resource allocation, and develops mechanisms for the energy optimization of next-generation applications from a multi-layer perspective.
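A rough numerical rendering of the server-level leakage-cooling tradeoff (all constants below are invented for illustration, not taken from the thesis): fan power grows cubically with speed, leakage grows exponentially with chip temperature, and more airflow lowers that temperature, so total power has an interior minimum rather than being minimized at either extreme.

    # Toy leakage-vs-cooling model: sweep fan speed and pick the speed that
    # minimizes fan power (cubic in speed) plus leakage (exponential in temp).
    import math

    def fan_power(s):        # watts; cubic in normalized fan speed s in (0, 1]
        return 60.0 * s ** 3

    def chip_temp(s):        # degrees C; more airflow -> cooler chip
        return 45.0 + 40.0 / (1.0 + 4.0 * s)

    def leakage_power(t):    # watts; exponential dependence on temperature
        return 8.0 * math.exp(0.04 * (t - 45.0))

    speeds = [i / 100 for i in range(10, 101)]
    best = min(speeds, key=lambda s: fan_power(s) + leakage_power(chip_temp(s)))
    total = fan_power(best) + leakage_power(chip_temp(best))
    print(f"optimal fan speed ~{best:.2f}, total power {total:.1f} W")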
Abstract:
This project presents the design and implementation of a system that detects anomalies at the entrances to controlled environments. To this end, state-of-the-art computer vision techniques are used, and visual and audible warnings are given by a hardware system that receives signals from the computer to which it is connected. One or more people committing an infringement while entering an establishment monitored with video systems are marked and photographed, and the images are stored in the corresponding folders. The system is collaborative: the cameras involved communicate with one another through data structures in order to exchange information. Furthermore, a wireless connection from a mobile device provides a global view of the environment from anywhere in the world. The application is developed in the MATLAB environment, which allows image-signal processing appropriate to this project. The user is also given a graphical interface for simple interaction, thereby avoiding changes to parameters in the program's internal structure when the environment or the type of data acquisition is to be varied. The chosen language eases execution on different operating systems, including Windows or iOS, providing flexibility.
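As a loose sketch of the detection step, here it is rendered in Python with OpenCV rather than MATLAB (the project itself is implemented in MATLAB; the camera index, threshold and file names are invented): background subtraction flags motion at a monitored entrance, and a snapshot of the offending frame is stored.

    # Background subtraction on an entrance camera; frames with enough
    # foreground motion are saved as evidence of a possible infraction.
    import cv2

    cap = cv2.VideoCapture(0)              # entrance camera
    subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)     # foreground = moving objects
        if cv2.countNonZero(mask) > 5_000: # illustrative threshold
            cv2.imwrite(f"anomaly_{frame_idx:06d}.png", frame)
        frame_idx += 1

    cap.release()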