836 results for RESOURCE ALLOCATION


Relevance:

60.00%

Publisher:

Abstract:

Various software packages for project management include a procedure for resource-constrained scheduling. In several packages, the user can influence this procedure by selecting a priority rule. However, the resource-allocation methods that are implemented in the procedures are proprietary information; therefore, the question of how the priority-rule selection impacts the performance of the procedures arises. We experimentally evaluate the resource-allocation methods of eight recent software packages using the 600 instances of the PSPLIB J120 test set. The results of our analysis indicate that applying the default rule tends to outperform a randomly selected rule, whereas applying two randomly selected rules tends to outperform the default rule. Applying a small set of more than two rules further improves the project durations considerably. However, a large number of rules must be applied to obtain the best possible project durations.
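The resource-allocation methods inside these packages are proprietary, but the general idea of priority-rule scheduling can be illustrated with a serial schedule-generation scheme: tasks are scheduled one at a time, in an order chosen by the rule, at the earliest time where precedence and resource capacity allow. The Python sketch below is a minimal illustration under that assumption; the task data, capacity and rule names are invented for the example and are not taken from the paper or the PSPLIB instances.

```python
from itertools import count

# task: (duration, resource demand, set of predecessors) -- invented toy data
TASKS = {
    "A": (3, 2, set()),
    "B": (4, 3, {"A"}),
    "C": (2, 2, {"A"}),
    "D": (5, 1, {"B", "C"}),
}
CAPACITY = 4  # single renewable resource

PRIORITY_RULES = {
    "SPT": lambda t: TASKS[t][0],           # shortest processing time first
    "LPT": lambda t: -TASKS[t][0],          # longest processing time first
    "GRD": lambda t: -TASKS[t][1],          # greatest resource demand first
}

def serial_sgs(rule):
    """Serial schedule-generation scheme: place tasks in rule order at the
    earliest start that respects precedence and resource capacity."""
    finish, usage, unscheduled = {}, {}, set(TASKS)
    while unscheduled:
        eligible = sorted((t for t in unscheduled if TASKS[t][2].issubset(finish)),
                          key=rule)
        task = eligible[0]
        dur, demand, preds = TASKS[task]
        earliest = max((finish[p] for p in preds), default=0)
        for s in count(earliest):  # push the start until capacity suffices
            if all(usage.get(x, 0) + demand <= CAPACITY for x in range(s, s + dur)):
                finish[task] = s + dur
                for x in range(s, s + dur):
                    usage[x] = usage.get(x, 0) + demand
                break
        unscheduled.remove(task)
    return max(finish.values())

# Trying several rules and keeping the shortest project duration mirrors the
# paper's observation that a small set of rules beats any single rule.
print(min((serial_sgs(rule), name) for name, rule in PRIORITY_RULES.items()))
```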

Relevance:

60.00%

Publisher:

Abstract:

Kenya has experienced a rapid expansion of its education system, partly due to high government expenditure on education. Despite this high level of expenditure, primary school enrolment had been declining since the early 1990s until 2003, when gross primary school enrolment increased to 104 percent after the introduction of free primary education. However, with an estimated net primary school enrolment rate of 77 percent, the country is far from achieving universal primary education. The worrying scenario is that the allocation of resources within the education sector seems to be ineffective, as the increasing expenditure on education goes to recurrent expenditure (to pay teachers' salaries). Kenya's Poverty Reduction Strategy Paper (PRSP) and the Economic Recovery Strategy for Wealth and Employment Creation (ERS) outline education targets of reaching universal primary education by 2015. The Government faces budget constraints, and the available resources therefore need to be allocated efficiently in order to realize the education targets. The paper uses the Budget Negotiation Framework (BNF), a tool that aims at achieving equity and efficiency in resource allocation, to analyze cost-effective ways of allocating resources in the primary education sector to achieve universal primary education and other education targets. Results from the analysis show that universal primary education by the year 2015 is a feasible target for Kenya. The results also show that with more cost-effective spending of education resources - more trained teachers, enhanced textbook supplies and subsidies targeting the poor - the country could realize higher enrolment rates than have been achieved with free primary education.

Relevance:

60.00%

Publisher:

Abstract:

Background. Cardiovascular disease (CVD) is of striking public health significance because of its high prevalence and mortality and the huge economic burden it imposes worldwide, especially in industrialized countries. Major risk factors for CVD have been the targets of population-wide prevention in the United States. Economic evaluations provide structured information on the efficiency of resource utilization, which can inform resource-allocation decisions. The main purpose of this review is to investigate the pattern of study design in economic evaluations of CVD interventions. Methods. Primary journal articles published during 2003-2008 were systematically retrieved via relevant keywords from Medline, the NHS Economic Evaluation Database (NHS EED) and EBSCO Academic Search Complete. Only full economic evaluations of narrowly defined CVD interventions were included in this review. The methodological data of interest were extracted from the eligible articles and reorganized in a Microsoft Access database. Chi-square tests in SPSS were used to analyze the associations between pairs of categorical variables. Results. One hundred and twenty eligible articles were reviewed after two steps of literature selection with explicit inclusion and exclusion criteria. Descriptive statistics are reported for the evaluated interventions, outcome measures, unit costing and cost reports. The chi-square test of the association between the prevention level of the intervention and the category of time horizon showed no statistical significance. The chi-square test showed that sponsor type was significantly associated with whether the new or the standard intervention was concluded to be more cost-effective. Conclusions. Tertiary prevention and medication interventions are the main interests of economic evaluators. The majority of the evaluations were conducted from either a provider's or a payer's perspective. Almost all evaluations adopted a gross-costing strategy for unit cost data rather than micro-costing. EQ-5D is the most commonly used instrument for subjective outcome measurement. More than half of the evaluations used decision-analytic modeling techniques. Published evaluations lack consistency in study-design standards in several respects. The prevention level of an intervention does not appear to influence whether evaluators design an evaluation over a lifetime horizon. Published evaluations sponsored by industry are more likely to conclude that the new intervention is more cost-effective than the standard intervention.
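As a concrete illustration of the kind of chi-square test of independence reported above (sponsor type versus which intervention was concluded to be more cost-effective), the short Python sketch below runs the test on an invented 2x2 contingency table; the counts are placeholders, not the review's data, and SciPy stands in for the SPSS procedure the authors used.

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 contingency table (counts are invented for illustration):
# rows = sponsor type, columns = which intervention was concluded more cost-effective.
table = [[34, 6],    # industry-sponsored:      [new, standard]
         [41, 39]]   # non-industry-sponsored:  [new, standard]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
# A small p-value indicates an association between sponsor type and conclusion.
```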

Relevance:

60.00%

Publisher:

Abstract:

Evaluation methods for assessing the performance of non-profit funders are lacking. The purpose of the research was to create a comprehensive framework that systematically assesses the goals and objectives of a funder, how these relate to the funder's allocation of resources, and the potential impact of the programs and services selected by the funder for resource allocation to address those goals and objectives. The Houston Affiliate of Susan G. Komen for the Cure, a local chapter of a national breast cancer awareness advocacy organization, was selected as the funding agency whose performance assessment would guide the creation of this framework. Evaluation approaches from the government sector were adapted and incorporated into the research to guide the methods used to answer the three research questions corresponding to the three phases of the study: (1) what are the funding goals and objectives of the Affiliate?; (2) what allocation scheme does the organization use to address these goals and objectives and select programs for funding?; and (3) to what extent do the programs funded by the Affiliate have potential long-term impact? In the first phase of the research, reviews of the Affiliate's mission-based documents and bylaws and interviews with organizational and community informants revealed a highly latent constellation of broad objectives that were not formalized into one guiding document, creating gaps in management and governance. In the second phase, reviews of grant applications from the 2008-2009 funding cycle and interviews with employees and volunteers familiar with the funding process revealed competing ideas about resource allocation in light of vague organizational documents describing funding goals and objectives. In the final phase, these findings translated into the Affiliate selecting programs with highly varying potential long-term impact with regard to goals and objectives relating to breast cancer education, screening, diagnostics, treatment, and support. The resulting performance assessment framework, consisting of three phases of research using organizational documents and key informant interviews, demonstrated the importance of clearly defined funding goals and objectives, of reference documents and committee participation within the funding process, and of regular reviews of the potential long-term impact of selected programs, all supported by the active participation and governance of a funder's Board of Directors.

Relevance:

60.00%

Publisher:

Abstract:

"Technology assessment is a comprehensive form of policy research that examines the short- and long-term social consequences of the application or use of technology" (US Congress 1967).^ This study explored a research methodology appropriate for technology assessment (TA) within the health industry. The case studied was utilization of external Small-Volume Infusion Pumps (SVIP) at a cancer treatment and research center. Primary and secondary data were collected in three project phases. In Phase I, hospital prescription records (N = 14,979) represented SVIP adoption and utilization for the years 1982-1984. The Candidate Adoption-Use (CA-U) diffusion paradigm developed for this study was germane. Compared to classic and unorthodox curves, CA-U more accurately simulated empiric experience. The hospital SVIP 1983-1984 trends denoted assurance in prescribing chemotherapy and concomitant balloon SVIP efficacy and efficiency. Abandonment of battery pumps was predicted while exponential demand for balloon SVIP was forecast for 1985-1987. In Phase II, patients using SVIP (N = 117) were prospectively surveyed from July to October 1984; the data represented a single episode of therapy. The questionnaire and indices, specifically designed to measure the impact of SVIP, evinced face validity. Compeer group data were from pre-SVIP case reviews rather than from an inpatient sample. Statistically significant results indicated that outpatients using SVIP interacted socially more than inpatients using the alternative technology. Additionally, the hospital's education program effectively taught clients to discriminate between self care and professional SVIP services. In these contexts, there was sufficient evidence that the alternative technology restricted patients activity whereas SVIP permitted patients to function more independently and in a social lifestyle, thus adding quality to life. In Phase III, diffusion forecast and patient survey findings were combined with direct observation of clinic services to profile some economic dimensions of SVIP. These three project phases provide a foundation for executing: (1) cost effectiveness analysis of external versus internal infusors, (2) institutional resource allocation, and (3) technology deployment to epidemiology-significant communities. The models and methods tested in this research of clinical technology assessment are innovative and do assess biotechnology. ^

Relevance:

60.00%

Publisher:

Abstract:

Corticosterone, the main stress hormone in birds, mediates resource allocation, allowing animals to adjust their physiology and behaviour to changes in the environment. Incubation is a time and energy-consuming phase of the avian reproductive cycle. It may be terminated prematurely, when the parents' energy stores are depleted or when environmental conditions are severe. In this study, the effects of experimentally elevated baseline corticosterone levels on the parental investment of incubating male Adelie penguins were investigated. Incubation duration and reproductive success of 60 penguins were recorded. The clutches of some birds were replaced by dummy eggs, which recorded egg temperatures and rotation rates, enabling a detailed investigation of incubation behaviour. Corticosterone levels of treated birds were 2.4-fold higher than those of controls 18 days post treatment. Exogenous corticosterone triggered nest desertion in 61% of the treated birds; consequently reducing reproductive success, indicating that corticosterone can reduce or disrupt parental investment. Regarding egg temperatures, hypothermic events became more frequent and more pronounced in treated birds, before these birds eventually abandoned their nest. The treatment also significantly decreased incubation temperatures by 1.3 °C and lengthened the incubation period by 2.1 days. However, the number of chicks at hatching was similar among successful nests, regardless of treatment. Weather conditions appeared to be particularly important in determining the extent to which corticosterone levels affected the behaviour of penguins, as treated penguins were more sensitive to severe weather conditions. This underlines the importance of considering the interactions of organisms with their environment in studies of animal behaviour and ecophysiology.

Relevance:

60.00%

Publisher:

Abstract:

We investigated carbon acquisition by the N2-fixing cyanobacterium Trichodesmium IMS101 in response to CO2 levels of 15.1, 37.5, and 101.3 Pa (equivalent to 150, 370, and 1000 ppm). In these acclimations, growth rates as well as cellular C and N contents were measured. In vivo activities of carbonic anhydrase (CA), photosynthetic O2 evolution, and CO2 and HCO3- fluxes were measured using membrane-inlet mass spectrometry and the 14C disequilibrium technique. While no differences in growth rates were observed, elevated CO2 levels caused higher C and N quotas and stimulated photosynthesis and N2 fixation. Minimal extracellular CA (eCA) activity was observed, indicating a minor role in carbon acquisition. Rates of CO2 uptake were small relative to total inorganic carbon (Ci) fixation, whereas HCO3- contributed more than 90% and varied only slightly over the light period and between CO2 treatments. The low eCA activity and preference for HCO3- were verified by the 14C disequilibrium technique. Regarding apparent affinities, half-saturation concentrations (K1/2) for photosynthetic O2 evolution and HCO3- uptake changed markedly over the day and with CO2 concentration. Leakage (CO2 efflux : Ci uptake) showed pronounced diurnal changes. Our findings do not support a direct CO2 effect on the carboxylation efficiency of ribulose-1,5-bisphosphate carboxylase/oxygenase (RubisCO) but point to a shift in resource allocation among photosynthesis, carbon acquisition, and N2 fixation under elevated CO2 levels. The observed increase in photosynthesis and N2 fixation could have biogeochemical implications, as it may stimulate productivity in N-limited oligotrophic regions and thus provide a negative feedback on rising atmospheric CO2 levels.
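The half-saturation concentrations (K1/2) mentioned above are typically obtained by fitting a Michaelis-Menten saturation curve, V = Vmax·S/(K1/2 + S), to uptake or O2-evolution rates measured over a range of substrate concentrations. The Python sketch below illustrates such a fit on synthetic data; the numbers are invented and are not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, v_max, k_half):
    # Saturation kinetics: rate approaches v_max, reaching half of it at s = k_half.
    return v_max * s / (k_half + s)

substrate = np.array([50, 100, 200, 400, 800, 1600, 3200])   # µmol L-1 HCO3- (synthetic)
uptake = np.array([2.1, 3.8, 6.0, 8.3, 9.9, 11.0, 11.4])     # arbitrary rate units (synthetic)

(v_max, k_half), _ = curve_fit(michaelis_menten, substrate, uptake, p0=(12, 300))
print(f"Vmax ≈ {v_max:.1f}, K1/2 ≈ {k_half:.0f} µmol L-1")
```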

Relevance:

60.00%

Publisher:

Abstract:

Technocracy often holds out the promise of rational, disinterested decision-making. Yet states look to technocracy not just for expert inputs and calculated outcomes but to embed the exercise of power in many agendas, policies and programs. Thus, technocracy operates as an appendage of politically constructed structures and configurations of power, and highly placed technocrats cannot be 'mere' backroom experts who supply disinterested rational-technical solutions in economic planning, resource allocation and social distribution, which are inherently political. This paper traces the trajectories of technocracy in conditions of rapid social transformation, severe economic restructuring, or political crises - when the technocratic was unavoidably political.

Relevance:

60.00%

Publisher:

Abstract:

The paper explores the effects of birth order and sibling sex composition on human capital investment in children in India using the Indian Human Development Survey (IHDS). Endogeneity of fertility is addressed by using instruments and controlling for household fixed effects. The family-size effect is also distinguished from the sibling sex-composition effect. Previous literature has often failed to take endogeneity into account and reports a negative birth-order effect for girls in India. Once the endogeneity of fertility is addressed, there is no evidence of a negative birth-order effect or sibling sex-composition effect for girls. Results show that boys are worse off in households with a higher proportion of boys, particularly when they have older brothers.
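The identification strategy described above, instrumenting fertility to address its endogeneity, can be illustrated with a bare-bones two-stage least squares estimate. The Python sketch below uses synthetic data and a single generic instrument; the variable names and coefficients are invented stand-ins, not the IHDS variables or the paper's estimates.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
z = rng.normal(size=n)                                   # instrument for fertility
u = rng.normal(size=n)                                   # unobserved household factor
fertility = 0.8 * z + u + rng.normal(size=n)             # endogenous regressor
education = 2.0 - 0.5 * fertility + u + rng.normal(size=n)  # outcome of interest

def ols(y, x):
    """OLS of y on a constant and a single regressor; returns (intercept, slope)."""
    X = np.column_stack([np.ones(len(y)), x])
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Stage 1: predict fertility from the instrument.
a, b = ols(fertility, z)
fert_hat = a + b * z
# Stage 2: regress the outcome on predicted fertility (should be close to -0.5).
print("2SLS fertility coefficient:", ols(education, fert_hat)[1])
```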

Relevance:

60.00%

Publisher:

Abstract:

Distributed real-time embedded systems are becoming increasingly important to society. More demands will be made on them and greater reliance will be placed on the delivery of their services. A relevant subset of them is high-integrity or hard real-time systems, where failure can cause loss of life, environmental harm, or significant financial loss. Additionally, the evolution of communication networks and paradigms, as well as the need for greater processing power and fault tolerance, has motivated the interconnection of electronic devices; many of these communications can transfer data at high speed. The concept of distributed systems emerged to describe systems whose different parts execute on several nodes that interact with each other via a communication network. Java's popularity, facilities and platform independence have made it an interesting language for the real-time and embedded community. This was the motivation for the development of the RTSJ (Real-Time Specification for Java), a language extension intended to allow the development of real-time systems. The use of Java in the development of high-integrity systems requires strict development and testing techniques. However, the RTSJ includes a number of language features that are forbidden in such systems. In the context of the HIJA project, the HRTJ (Hard Real-Time Java) profile was developed to define a robust subset of the language that is amenable to static analysis for high-integrity system certification. Currently, a specification under the Java Community Process (JSR-302) is being developed. Its purpose is to define the capabilities needed to create safety-critical applications with Java technology, called Safety Critical Java (SCJ). However, neither the RTSJ nor its profiles provide facilities to develop distributed real-time applications. This is an important issue, as most current and future systems will be distributed. The Distributed RTSJ (DRTSJ) Expert Group was created under the Java Community Process (JSR-50) in order to define appropriate abstractions to overcome this problem; currently there is no formal specification. The aim of this thesis is to develop a communication middleware suitable for the development of distributed hard real-time systems in Java, based on the integration of the RMI (Remote Method Invocation) model and the HRTJ profile. It has been designed and implemented with the main requirements in mind, such as predictability and reliability of the timing behavior and of the resource usage. The design starts with the definition of a computational model which identifies, among other things, the communication model, the most appropriate underlying network protocols, the analysis model, and a subset of Java for hard real-time systems. In the design, remote references are the basic means for building distributed applications; they are associated with all the non-functional parameters and resources needed to implement synchronous or asynchronous remote invocations with real-time attributes. The proposed middleware separates resource allocation from the execution itself by defining two phases and a specific threading mechanism that guarantees suitable timing behavior. It also includes mechanisms to monitor the functional and the timing behavior. It provides independence from the network protocol by defining a network interface and modules. The JRMP protocol was modified to include two phases, non-functional parameters, and message size optimizations.
Although serialization is one of the fundamental operations for ensuring proper data transmission, current implementations are not suitable for hard real-time systems and there are no alternatives. This thesis proposes a predictable serialization that introduces a new compiler to generate optimized code according to the computational model. The proposed solution has the advantage of allowing the communications to be scheduled and the memory usage to be adjusted at compilation time. In order to validate the design and the implementation, a demanding validation process was carried out with emphasis on the functional behavior, the memory usage, the processor usage (the end-to-end response time and the response time in each functional block) and the network usage (real consumption compared with the calculated consumption). The results obtained in an industrial application developed by Thales Avionics (a Flight Management System) and in exhaustive tests show that the design and the prototype are reliable for industrial applications with strict timing requirements.
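To make the two-phase idea more concrete, the sketch below illustrates, in plain Python rather than the thesis's Java middleware, how a remote reference can separate resource allocation (reserving a handler priority, a deadline budget and message buffers) from the invocation itself, which then runs only against pre-allocated resources and checks its deadline. All class and method names are hypothetical and are not the middleware's API.

```python
import time
from dataclasses import dataclass

@dataclass
class Reservation:
    handler_priority: int
    deadline_ms: int
    message_buffer: bytearray

class RemoteReference:
    def __init__(self, endpoint):
        self.endpoint = endpoint
        self.reservation = None

    def reserve(self, handler_priority, deadline_ms, max_message_bytes):
        """Phase 1: allocate everything needed before any call is made."""
        self.reservation = Reservation(handler_priority, deadline_ms,
                                       bytearray(max_message_bytes))

    def invoke(self, method, *args):
        """Phase 2: the call itself uses only pre-allocated resources."""
        if self.reservation is None:
            raise RuntimeError("invoke() before reserve(): no resources allocated")
        start = time.monotonic()
        result = method(*args)                    # stand-in for the network round trip
        elapsed_ms = (time.monotonic() - start) * 1000
        if elapsed_ms > self.reservation.deadline_ms:
            raise TimeoutError(f"deadline of {self.reservation.deadline_ms} ms missed")
        return result

ref = RemoteReference("node-1")
ref.reserve(handler_priority=10, deadline_ms=5, max_message_bytes=1024)
print(ref.invoke(lambda x, y: x + y, 2, 3))
```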

Relevance:

60.00%

Publisher:

Abstract:

A generic bio-inspired adaptive architecture for image compression, suitable for implementation in embedded systems, is presented. The architecture allows the system to be tuned during its calibration phase. An evolutionary algorithm is responsible for making the system evolve towards the required performance. A prototype has been implemented in a Xilinx Virtex-5 FPGA featuring an adaptive wavelet transform core directed at improving image compression for specific types of images. An Evolution Strategy has been chosen as the search algorithm and its typical genetic operators adapted to allow for a hardware-friendly implementation. HW/SW partitioning issues are also considered after a high-level description of the algorithm is profiled, which validates the proposed resource allocation in the device fabric. To check the robustness of the system and its adaptation capabilities, different types of images have been selected as validation patterns. A direct application of such a system is its deployment in an environment unknown at design time, letting the calibration phase adjust the system parameters so that it performs efficient image compression. This prototype implementation may also serve as an accelerator for the automatic design of evolved transform coefficients, which are later synthesized and implemented in a non-adaptive system in the final implementation device, whether a HW- or SW-based computing device. The architecture has been built in a modular way so that it can be easily extended to adapt other types of image processing cores. Details of this pluggable-component point of view are also given in the paper.
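The search loop of an Evolution Strategy of the kind used to evolve the transform can be sketched compactly. The Python example below runs a (1+λ) ES with Gaussian mutation over a small coefficient vector; the fitness function is a toy stand-in for the hardware's compression-quality measure and the target values are invented, so it only illustrates the mutation and selection mechanics, not the paper's implementation.

```python
import random

TARGET = [0.48, 0.84, 0.34, -0.13]          # hypothetical "good" coefficient set

def fitness(coeffs):
    # Toy objective: negative squared error against the hypothetical target.
    return -sum((c - t) ** 2 for c, t in zip(coeffs, TARGET))

def evolve(generations=200, lam=8, sigma=0.2):
    parent = [random.uniform(-1, 1) for _ in TARGET]
    for _ in range(generations):
        offspring = [[c + random.gauss(0, sigma) for c in parent] for _ in range(lam)]
        best = max(offspring, key=fitness)
        if fitness(best) > fitness(parent):   # plus-selection: keep the parent if better
            parent = best
    return parent

print([round(c, 2) for c in evolve()])
```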

Relevance:

60.00%

Publisher:

Abstract:

Conventional programming techniques are not well suited to solving many highly combinatorial industrial problems, such as scheduling, decision making, resource allocation or planning. Constraint Programming (CP), an emerging software technology, offers an original approach that allows complex problems to be solved efficiently and flexibly through the combined use of various constraint solvers and expert heuristics. Its applications are increasingly fielded in various industries.
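The declarative flavour of CP, stating variables, domains and constraints and letting a solver search, can be shown with a toy resource-allocation problem. The Python sketch below uses a minimal hand-written backtracking solver instead of an industrial CP engine, and the tasks, machines and constraints are invented for illustration.

```python
def solve(variables, domains, constraints, assignment=None):
    """Backtracking search: extend the assignment one variable at a time,
    pruning any partial assignment that violates a constraint."""
    assignment = assignment or {}
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        candidate = {**assignment, var: value}
        if all(c(candidate) for c in constraints):
            result = solve(variables, domains, constraints, candidate)
            if result:
                return result
    return None

# Three tasks must be assigned to two machines; task "c" needs machine 2 and
# tasks "a" and "b" cannot share a machine.
tasks = ["a", "b", "c"]
domains = {t: [1, 2] for t in tasks}
constraints = [
    lambda s: s.get("c", 2) == 2,
    lambda s: not ("a" in s and "b" in s) or s["a"] != s["b"],
]
print(solve(tasks, domains, constraints))   # e.g. {'a': 1, 'b': 2, 'c': 2}
```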

Relevance:

60.00%

Publisher:

Abstract:

Data centers are easily found in every sector of the worldwide economy. They consist of tens of thousands of servers, serving millions of users globally, 24-7. In recent years, e-Science applications such as e-Health or Smart Cities have experienced significant development. The need to deal efficiently with the computational needs of next-generation applications, together with the increasing demand for higher resources in traditional applications, has facilitated the rapid proliferation and growth of data centers. A drawback of this capacity growth has been the rapid increase in the energy consumption of these facilities. In 2010, data center electricity represented 1.3% of all the electricity use in the world. In 2012 alone, global data center power demand grew 63% to 38GW. A further rise of 17% to 43GW was estimated for 2013. Moreover, data centers are responsible for more than 2% of total carbon dioxide emissions. This PhD Thesis addresses the energy challenge by proposing proactive and reactive thermal- and energy-aware optimization techniques that contribute to placing data centers on a more scalable curve. This work develops energy models and uses the knowledge about the energy demand of the workload to be executed and the computational and cooling resources available at the data center to optimize energy consumption. Moreover, data centers are considered as a crucial element within their application framework, optimizing not only the energy consumption of the facility but the global energy consumption of the application. The main contributors to the energy consumption in a data center are the computing power drawn by IT equipment and the cooling power needed to keep the servers within a certain temperature range that ensures safe operation.
Because of the cubic relation of fan power with fan speed, solutions based on over-provisioning cold air into the server usually lead to inefficiencies. On the other hand, higher chip temperatures lead to higher leakage power because of the exponential dependence of leakage on temperature. Moreover, workload characteristics as well as allocation policies also have an important impact on the leakage-cooling tradeoffs. The first key contribution of this work is the development of power and temperature models that accurately describe the leakage-cooling tradeoffs at the server level, and the proposal of strategies to minimize server energy via joint cooling and workload management from a multivariate perspective. When scaling to the data center level, a similar behavior in terms of leakage-temperature tradeoffs can be observed. As room temperature rises, the efficiency of data room cooling units improves. However, as room temperature increases, CPU temperature rises and so does leakage power. Moreover, the thermal dynamics of a data room exhibit unbalanced patterns due to both the workload allocation and the heterogeneity of the computing equipment. The second main contribution is the proposal of thermal- and heterogeneity-aware workload management techniques that jointly optimize the allocation of computation and cooling to servers. These strategies need to be backed up by flexible room-level models, able to work at runtime, that describe the system from a high-level perspective. Within the framework of next-generation applications, decisions taken at this scope can have a dramatic impact on the energy consumption of lower abstraction levels, i.e. the data center facility. It is important to consider the relationships between all the computational agents involved in the problem, so that they can cooperate to achieve the common goal of reducing energy in the overall system. The third main contribution is the energy optimization of the overall application by evaluating the energy costs of performing part of the processing in any of the different abstraction layers, from the node to the data center, via workload management and off-loading techniques. In summary, the work presented in this PhD Thesis makes contributions on leakage- and cooling-aware server modeling and optimization, data center thermal modeling and heterogeneity-aware data center resource allocation, and develops mechanisms for the energy optimization of next-generation applications from a multi-layer perspective.
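The leakage-cooling tradeoff described above, cubic fan power against roughly exponential leakage, can be illustrated numerically. The Python sketch below uses made-up coefficients and a simplistic linear fan-speed-to-temperature model, so it only shows why neither maximum nor minimum cooling minimizes total power; it is not the thesis's fitted model.

```python
import math

def fan_power(speed):                       # speed normalized to [0.2, 1.0]
    return 60.0 * speed ** 3                # cubic fan law (watts, invented coefficient)

def cpu_temp(speed, dynamic_power=80.0, ambient=25.0):
    return ambient + dynamic_power * (1.2 - speed)   # hotter when the fan is slower

def leakage_power(temp):
    return 8.0 * math.exp(0.02 * (temp - 25.0))      # roughly exponential in temperature

speeds = [s / 100 for s in range(20, 101, 5)]
total = {s: fan_power(s) + leakage_power(cpu_temp(s)) for s in speeds}
best = min(total, key=total.get)
print(f"minimum total power {total[best]:.1f} W at fan speed {best:.2f}")
# Over-provisioning cold air (speed = 1.0) is not optimal here: the cubic fan term
# dominates, which is the inefficiency the abstract points out.
```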

Relevance:

60.00%

Publisher:

Abstract:

This paper proposes an economic instrument designed to assess the competitive nature of the sugar industry in Romania. The first part of the paper presents the theoretical background underlying the Herfindahl-Hirschman Index (HHI) and its calculation methodology. It then presents the results of a first application of this index to a total of 10 plants in the sugar industry and discusses the robustness of these results. We believe the HHI is a proactive tool that may prove useful to the competition authority in its pursuit of continuous monitoring of various industries in the economy and in the institution's internal decision-making on resource allocation (Peacock and Prisecaru, 2013). The starting point of our research is free competition in the European market, with competitors much stronger than the Romanian plants and producing at prices lower than the domestic ones. In our study we examine whether production has become concentrated around the strongest factories in Romania, a concentration accompanied by the collapse of those that could not withstand the market. We track market concentration and competition policy using the HHI, evaluating the impact on existing trade, the number and size of competitors, the protection of existing sales structures, the avoidance of disruptions in the competitive environment, etc.
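For reference, the HHI is simply the sum of the squared market shares (in percent) of the firms in the market. The Python sketch below computes it for ten plants with invented production figures; the threshold mentioned in the comment reflects the conventional reading of the index, not a result from the paper.

```python
def hhi(outputs):
    """Herfindahl-Hirschman Index from firm-level output: sum of squared
    percentage market shares, ranging from near 0 up to 10000 (monopoly)."""
    total = sum(outputs)
    shares = [100.0 * q / total for q in outputs]
    return sum(s ** 2 for s in shares)

plants = [820, 640, 500, 410, 300, 260, 210, 150, 120, 90]  # hypothetical output of 10 plants
print(f"HHI = {hhi(plants):.0f}")   # values above ~2500 are usually read as high concentration
```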

Relevance:

60.00%

Publisher:

Abstract:

Acknowledgments This study was funded by the Research Council of Norway (POLARPROG grant 216051; SFF-III grant 223257/F50) and the Svalbard Environmental Protection Fund (SMF grant 13/74). We thank Mathilde Le Moullec for helping with the fieldwork and the Norwegian Meteorological Institute for access to weather data.