915 results for Two-level scheduling and optimization
Abstract:
For understanding the major- and minor-groove hydration patterns of DNAs and RNAs, it is important to understand the local solvation of individual nucleobases at the molecular level. We have investigated the 2-aminopurine·H2O monohydrate by two-color resonant two-photon ionization and UV/UV hole-burning spectroscopies, which reveal two isomers, denoted A and B. The electronic spectral shift Δν of the S1 ← S0 transition relative to bare 9H-2-aminopurine (9H-2AP) is small for isomer A (-70 cm⁻¹), while that of isomer B is much larger (Δν = 889 cm⁻¹). B3LYP geometry optimizations with the TZVP basis set predict four cluster isomers, of which three are doubly H-bonded, with H2O acting as an acceptor to an N-H or -NH2 group and as a donor to either of the pyrimidine N sites. The "sugar-edge" isomer A is calculated to be the most stable form, with binding energy De = 56.4 kJ/mol. Isomers B and C are H-bonded between the -NH2 group and pyrimidine moieties and are 2.5 and 6.9 kJ/mol less stable, respectively. Time-dependent (TD) B3LYP/TZVP calculations predict the adiabatic energies of the lowest ¹ππ* states of A and B in excellent agreement with the observed 0⁰₀ bands; also, the relative intensities of the A and B origin bands agree well with the calculated S0-state relative energies. This allows unequivocal identification of the isomers. The R2PI spectra of 9H-2AP and of isomer A exhibit intense low-frequency out-of-plane overtone and combination bands, which is interpreted as a coupling of the optically excited ¹ππ* state to the lower-lying ¹nπ* dark state. In contrast, these overtone and combination bands are much weaker for isomer B, implying that the ¹ππ* state of B is planar and decoupled from the ¹nπ* state. These observations agree with the calculations, which predict the ¹nπ* above the ¹ππ* state for isomer B but below the ¹ππ* for both 9H-2AP and isomer A.
Abstract:
Among other auditory operations, the analysis of the different sound levels received at both ears is fundamental for the localization of a sound source. In animals, these so-called interaural level differences are coded by excitatory-inhibitory neurons, yielding asymmetric hemispheric activity patterns for acoustic stimuli with maximal interaural level differences. In human auditory cortex, the temporal blood oxygen level-dependent (BOLD) response to auditory inputs, as measured by functional magnetic resonance imaging (fMRI), consists of at least two independent components: an initial transient and a subsequent sustained signal, which, on a different time scale, are consistent with electrophysiological human and animal response patterns. However, their specific functional role remains unclear. Animal studies suggest that these temporal components are based on different neural networks and have specific roles in representing the external acoustic environment. Here we hypothesized that the transient and sustained response constituents are differentially involved in coding interaural level differences and therefore play different roles in spatial information processing. Healthy subjects underwent monaural and binaural acoustic stimulation, and BOLD responses were measured using high signal-to-noise-ratio fMRI. In the anatomically segmented Heschl's gyrus, the transient response was bilaterally balanced, independent of the side of stimulation, whereas the sustained response was contralateralized. This dissociation suggests a differential role for these two independent temporal response components, with an initial bilateral transient signal subserving rapid sound detection and a subsequent lateralized sustained signal subserving detailed sound characterization.
Abstract:
A range of societal issues has been caused by fossil fuel consumption in the transportation sector in the United States (U.S.), including health-related air pollution, climate change, dependence on imported oil, and other oil-related national security concerns. Biofuel production from lignocellulosic biomass such as wood, forest residues, and agricultural residues has the potential to replace a substantial portion of total fossil fuel consumption. This research focuses on locating biofuel facilities and designing the biofuel supply chain to minimize overall cost. For this purpose, an integrated methodology was proposed that combines GIS technology with simulation and optimization modeling. As a precursor to simulation and optimization modeling, the GIS-based methodology was used to preselect potential biofuel facility locations for biofuel production from forest biomass, and the resulting candidate sites served as inputs to the simulation and optimization models. Candidate locations were selected based on a set of evaluation criteria, including county boundaries, the railroad transportation network, the state/federal road transportation network, the dispersion of water bodies (rivers, lakes, etc.), the dispersion of cities and villages, population census data, biomass production, and no co-location with co-fired power plants. The simulation and optimization models were built around key supply activities, including biomass harvesting/forwarding, transportation, and storage. On-site storage was built to cover the spring breakup period, when road restrictions were in place and truck transportation on certain roads was limited. Both models were evaluated using multiple performance indicators, including cost (consisting of delivered feedstock cost and inventory holding cost), energy consumption, and GHG emissions. The impacts of energy consumption and GHG emissions were expressed in monetary terms to keep them consistent with cost. Compared with the optimization model, the simulation model provides a more dynamic view of a 20-year operation by considering the impacts of building inventory at the biorefinery to address the limited availability of biomass feedstock during the spring breakup period. The number of trucks required per day was estimated, and inventory levels were tracked year-round. Through the exchange of information across the different procedures (harvesting, transportation, and biomass feedstock processing), a smooth flow of biomass from harvesting areas to a biofuel facility was implemented. The optimization model was developed to address issues related to locating multiple biofuel facilities simultaneously. The size of each potential biofuel facility was bounded between 30 MGY and 50 MGY. The optimization model is a static, Mathematical Programming Language (MPL)-based application that allows for sensitivity analysis by changing inputs to evaluate different scenarios. It was found that annual biofuel demand and biomass availability impact the optimal biofuel facility locations and sizes.
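To illustrate the shape of the facility-location decision described above, the following Python sketch (illustrative only; the site names, costs, and demand figure are invented, not taken from the thesis, which uses an MPL-based model) enumerates candidate sites and plant sizes within the 30-50 MGY bounds and picks the cheapest combination that meets annual demand.

from itertools import product

# Hypothetical candidate sites pre-selected by the GIS screening step, with
# illustrative annualized capital cost (M$/yr) and delivered feedstock cost ($/gal).
candidates = {
    "SiteA": {"capital": 12.0, "feedstock_per_gal": 1.10},
    "SiteB": {"capital": 10.5, "feedstock_per_gal": 1.25},
    "SiteC": {"capital": 11.8, "feedstock_per_gal": 1.05},
}
DEMAND_MGY = 80                  # total annual biofuel demand (million gallons/yr), invented
SIZE_OPTIONS = [0, 30, 40, 50]   # 0 = not built; otherwise within the 30-50 MGY bounds

def total_cost(sizes):
    """Annual cost (M$) of a size assignment, or None if demand is not met."""
    if sum(sizes.values()) < DEMAND_MGY:
        return None
    cost = 0.0
    for site, mgy in sizes.items():
        if mgy > 0:
            cost += candidates[site]["capital"]
            cost += candidates[site]["feedstock_per_gal"] * mgy   # $/gal x Mgal = M$
    return cost

best = None
for combo in product(SIZE_OPTIONS, repeat=len(candidates)):
    sizes = dict(zip(candidates, combo))
    c = total_cost(sizes)
    if c is not None and (best is None or c < best[0]):
        best = (c, sizes)

print("cheapest feasible plan:", best)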
Abstract:
Seed production, seed dispersal, and seedling recruitment are integral to forest dynamics, especially in masting species. They are often studied separately, yet scarcely ever for species with ballistic dispersal, even though this mode of dispersal is common in legume trees of tropical African rain forests. Here, we studied two dominant main-canopy tree species, Microberlinia bisulcata and Tetraberlinia bifoliolata (Caesalpinioideae), in 25 ha of primary rain forest at Korup, Cameroon, during two successive masting events (2007/2010). In the vicinity of c. 100 and 130 trees of each species, 476/580 traps caught dispersed seeds, and beneath the tree crowns c. 57,000 pod valves per species were inspected to estimate tree-level fecundity. Seed production of trees increased non-linearly and asymptotically with increasing stem diameter. It was unequal within the two species' populations and differed strongly between years, fostering both spatial and temporal patchiness in seed rain. M. bisulcata trees could begin seeding at 42–44 cm diameter, a much larger size than T. bifoliolata (25 cm). Nevertheless, per capita lifetime reproductive capacity was c. five times greater in M. bisulcata than in T. bifoliolata, owing to the former's larger adult stature, lower mortality rate (despite a shorter lifetime), and smaller seed mass. The two species displayed strong differences in their dispersal capabilities. Inverse modelling (IM) revealed that dispersal of M. bisulcata was best described by a lognormal kernel. Most seeds landed 10–15 m from stems, with 1% of them travelling beyond 80 m (<100 m). Direct estimates of fecundity significantly improved the fitted models. The lognormal kernel also described well the seedling recruitment distribution of this species in 121 ground plots. By contrast, the lower intensity of masting and more limited dispersal of the heavier-seeded T. bifoliolata prevented reliable IM. For this species, seed density as a function of distance to traps suggested a maximum dispersal distance of 40–50 m, and a correspondingly more aggregated seedling recruitment pattern than for M. bisulcata. From this integrated field study, we conclude that the reproductive traits of M. bisulcata give it a considerable advantage over T. bifoliolata by better dispersing more seeds per capita to reach more suitable establishment sites, and, combined with other key traits, they explain its local dominance in the forest. Understanding the linkages between size at onset of maturity, individual fecundity, and dispersal capability can better inform the life-history strategies, and hence management, of co-occurring tree species in tropical forests.
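As a rough illustration of the inverse-modelling ingredient mentioned above, the Python sketch below evaluates one common parameterization of a two-dimensional lognormal dispersal kernel (an assumed form with made-up parameter values, not the fitted values from this study) to obtain the expected seed density at a given distance from a tree of known fecundity.

import math

def lognormal_kernel(r, mu, sigma):
    """Seed deposition probability per m^2 at distance r (m) from the source,
    for a 2-D lognormal kernel; integrates to 1 over the whole plane."""
    return math.exp(-(math.log(r / mu)) ** 2 / (2 * sigma ** 2)) / ((2 * math.pi) ** 1.5 * sigma * r ** 2)

def expected_seed_density(r, fecundity, mu, sigma):
    """Expected seeds per m^2 at distance r from a tree shedding `fecundity` seeds."""
    return fecundity * lognormal_kernel(r, mu, sigma)

# Illustrative parameters only (not the study's fitted values): 10,000 seeds, scale ~12 m.
for r in (5, 12, 40, 80):
    print(r, "m:", round(expected_seed_density(r, 10_000, mu=12.0, sigma=0.7), 3))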
Abstract:
Sphingosine-1-phosphate (S1P) is a key lipid regulator of a variety of cellular responses, including cell proliferation and survival, cell migration, and inflammatory reactions. Here, we investigated the effect of S1P receptor activation on immune cell adhesion to endothelial cells under inflammatory conditions. We show that S1P reduces both tumor necrosis factor (TNF)-α- and lipopolysaccharide (LPS)-stimulated adhesion of Jurkat and U937 cells to an endothelial monolayer. The reducing effect of S1P was reversed by the S1P1/3 antagonist VPC23019 but not by the S1P1 antagonist W146. Additionally, knockdown of S1P3, but not S1P1, by short hairpin RNA (shRNA) abolished the reducing effect of S1P, suggesting the involvement of S1P3. A suppression of immune cell adhesion was also seen with the immunomodulatory drug FTY720 and two novel butterfly derivatives, ST-968 and ST-1071. On the molecular level, S1P and all FTY720 derivatives reduced the mRNA expression of LPS- and TNF-α-induced adhesion molecules, including ICAM-1, VCAM-1, E-selectin, and CD44, which was reversed by the PI3K inhibitor LY294002 but not by the MEK inhibitor U0126. In summary, our data demonstrate a novel molecular mechanism by which S1P, FTY720, and the two novel butterfly derivatives act in an anti-inflammatory manner, namely by suppressing the gene transcription of various endothelial adhesion molecules, thereby preventing the adhesion of immune cells to endothelial cells and their subsequent extravasation.
Abstract:
The purpose of this study is to examine the stages of program realization of the interventions that the Bronx Health REACH program initiated at various levels to improve nutrition as a means of reducing racial and ethnic disparities in diabetes. This study was based on secondary analyses of qualitative data collected through the Bronx Health REACH Nutrition Project, a project conducted under the auspices of the Institute on Urban Family Health, with support from the Centers for Disease Control and Prevention (CDC). Local human subjects review and approval through the Institute on Urban Family Health was required and obtained in order to conduct the Bronx Health REACH Nutrition Project. The study drew from two theoretical models: Glanz and colleagues' nutrition environments model and Shediac-Rizkallah and Bone's sustainability model. The specific study objectives were two-fold: (1) to categorize each nutrition activity to a specific dimension (i.e., consumer, organizational, or community nutrition environment); and (2) to evaluate the stage at which the program has been realized (i.e., development, implementation, or sustainability). A case study approach was applied, and a constant comparative method was used to analyze the data. Triangulation of the data was also conducted. Qualitative data from this study revealed the following principal findings: (1) communities of color are disproportionately experiencing numerous individual and environmental factors contributing to the disparities in diabetes; (2) multi-level strategies that target the individual, organizational, and community nutrition environments can appropriately address these contributing factors; (3) the nutrition strategies varied greatly in their ability to meet the criteria for the three program stages; and (4) the nutrition strategies most likely to succeed (a) conveyed consistent and culturally relevant messages, (b) had continued involvement from program staff and partners, (c) were able to adapt over time or setting, (d) had a program champion and a training component, (e) were integrated into partnering organizations, and (f) were perceived to be successful by program staff and partners in their efforts to create individual, organizational, and community/policy change. As a result of the criteria-based assessment and qualitative findings, an ecological framework elaborating on Glanz and colleagues' model was developed. The qualitative findings and the resulting ecological framework developed from this study will help public health professionals and community leaders develop and implement sustainable multi-level nutrition strategies for addressing racial and ethnic disparities in diabetes.
Abstract:
Goal-level Independent and-parallelism (IAP) is exploited by scheduling for simultaneous execution two or more goals that will not interfere with each other at run time. This can be done safely even if such goals produce multiple answers. The most successful IAP implementations to date have used recomputation of answers and sequentially ordered backtracking. While recomputation simplifies the implementation in principle, it can be very inefficient if the granularity of the parallel goals is large enough and they produce several answers, whereas sequentially ordered backtracking limits parallelism. Moreover, despite the expected simplification, the implementation of the classic schemes has proved to involve complex engineering, with the consequent difficulty for system maintenance and expansion, and it still frequently runs into the well-known trapped-goal and garbage-slot problems. This work presents ideas for an alternative parallel backtracking model for IAP and a simulation study of it. The model features parallel out-of-order backtracking and relies on answer memoization to reuse and combine answers. Whenever a parallel goal backtracks, its siblings also perform backtracking, but only after storing the bindings generated by previous answers. These bindings are then reinstalled when answers are combined. In order not to penalize forward execution unnecessarily, non-speculative and-parallel goals that have not yet been executed take precedence over sibling goals that could be backtracked over. Using a simulator, we show that this approach can bring significant performance advantages over classical approaches.
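The answer-memoization idea can be illustrated outside Prolog. In the Python sketch below (an informal analogy, not the actual IAP implementation), each parallel goal stores every answer it has produced, so when a sibling backtracks, the stored answers are reused and recombined instead of being recomputed.

from itertools import product

class MemoizedGoal:
    """Wraps a generator of answers and memoizes everything it yields,
    so answers can be recombined later without recomputation."""
    def __init__(self, producer):
        self._producer = producer()
        self.answers = []                  # memo table of answers (bindings) seen so far

    def next_answer(self):
        ans = next(self._producer, None)   # None signals "no more answers"
        if ans is not None:
            self.answers.append(ans)
        return ans

def goal_p():                              # toy "parallel goal" p(X)
    for x in (1, 2, 3):
        yield {"X": x}

def goal_q():                              # toy "parallel goal" q(Y)
    for y in ("a", "b"):
        yield {"Y": y}

p, q = MemoizedGoal(goal_p), MemoizedGoal(goal_q)

# Forward execution: get a first answer from each goal.
p.next_answer(); q.next_answer()

# Backtracking over q: new answers of q are combined with the *stored* answers of p,
# instead of re-running p (the recomputation that classic IAP schemes perform).
while q.next_answer() is not None:
    pass
combined = [dict(**a, **b) for a, b in product(p.answers, q.answers)]
print(combined)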
Abstract:
Transport is the foundation of any economy: it boosts economic growth, creates wealth, and enhances trade, geographical accessibility, and the mobility of people. Transport is also a key ingredient of a high quality of life, making places accessible and bringing people together. The future prosperity of our world will depend on the ability of all of its regions to remain fully and competitively integrated in the world economy, and efficient transport is vital to making this happen. Operations research can help in efficiently planning the design and operation of transport systems. Planning and operational processes are fields rich in combinatorial optimization problems. These problems can be analyzed and solved through the application of mathematical models and optimization techniques, which may lead to an improvement in the performance of the transport system, as well as to a reduction in the time required to solve these problems. The latter aspect is important because it increases the flexibility of the system: the system can adapt more quickly to changes in the environment (e.g., weather conditions, crew illness, failures). These disturbing changes (called disruptions) often force the schedule to be adapted. The direct consequences are delays and cancellations, implying many schedule adjustments and huge costs. Consequently, robust schedules and recovery plans must be developed to cope with disruptions. This dissertation makes contributions to two different fields, rail and air applications, presenting robust planning and recovery methods. In the field of railway transport we develop several mathematical models that respond to the needs of RENFE (the major railway operator in Spain): 1. We study the rolling stock assignment problem; here we introduce some robust aspects in order to improve operations that are likely to fail. Once the rolling stock assignment is known, we propose a robust routing model that aims to identify the train units' sequences while minimizing the expected delays and the human resources needed to perform the sequences. 2. It is widely accepted that the sequential solving approach produces solutions that are not global optima. Therefore, we develop an integrated and robust model to determine the train schedule and rolling stock assignment together. We also propose an integrated model to study the rolling stock circulations, which are determined by the rolling stock assignment and the routing of the train units. 3. Although our aim is to develop robust plans, disruptions will still occur and recovery methods will be needed. Therefore, we propose a recovery method that recovers the train schedule and rolling stock assignment in an integrated fashion while considering passenger demand. In the field of air transport we develop several mathematical models that respond to the needs of IBERIA (the major airline in Spain): 1. We look at the airline scheduling problem and develop an integrated approach that optimizes schedule design, fleet assignment, and passenger use so as to reduce costs and create fewer incompatibilities between decisions. Robust itineraries are created to reduce the number of misconnected passengers. 2. Air transport operators continuously face competition from other air operators and from different modes of transport (e.g., high-speed rail). Consequently, airline profitability is critically influenced by the airline's ability to estimate passenger demand and construct profitable flight schedules. We consider multi-modal competition, including airline and rail, and develop a new approach that estimates the demand associated with a given schedule and generates airline schedules and fleet assignments using an integrated schedule design and fleet assignment optimization model that captures the impacts of schedule decisions on passenger demand.
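A toy version of the fleet-assignment decision, with invented legs, fleets, demands, and costs, can be written as a small exhaustive search. The real models in the dissertation are solved with mathematical programming, but the structure of the decision is the same: choose one fleet type per leg, respect fleet availability, and minimize operating cost plus the revenue lost to spilled passengers.

from itertools import product

legs = ["MAD-BCN-am", "MAD-BCN-pm", "MAD-SVQ-am"]      # hypothetical flight legs
fleets = {"A320": {"seats": 180, "count": 2, "cost": 14000},
          "A350": {"seats": 350, "count": 1, "cost": 26000}}
demand = {"MAD-BCN-am": 210, "MAD-BCN-pm": 150, "MAD-SVQ-am": 120}   # estimated passengers
FARE = 90                                              # illustrative revenue lost per spilled passenger

def plan_cost(assignment):
    """Operating cost plus revenue lost to spilled (unaccommodated) passengers."""
    used = {f: 0 for f in fleets}
    cost = 0.0
    for leg, f in assignment.items():
        used[f] += 1
        spill = max(0, demand[leg] - fleets[f]["seats"])
        cost += fleets[f]["cost"] + FARE * spill
    if any(used[f] > fleets[f]["count"] for f in fleets):
        return None                                    # violates fleet availability
    return cost

best_cost, best_plan = None, None
for combo in product(fleets, repeat=len(legs)):        # one fleet type per leg
    assignment = dict(zip(legs, combo))
    c = plan_cost(assignment)
    if c is not None and (best_cost is None or c < best_cost):
        best_cost, best_plan = c, assignment
print(best_cost, best_plan)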
Abstract:
Shrubs play an important role in water-limited agro-silvo-pastoral systems by providing shelter and forage for livestock, controlling erosion, maintaining biodiversity, diversifying the landscape and, above all, facilitating the regeneration of trees. Furthermore, the carbon sink capacity of shrubs could also help to mitigate the effects of climate change, since they constitute a high proportion of total plant biomass. The contribution of two common extensive native shrub species (Cistus ladanifer L. and Retama sphaerocarpa (L.) Boiss.) to the carbon pool of Iberian dehesas (Mediterranean agro-silvo-pastoral systems) is analyzed through biomass models developed at both the individual level (as a function of biovolume) and the community level (as a function of height and cover). The total amount of carbon stored in these shrubs, including above- and belowground biomass, ranges from 1.8 to 11.2 Mg C ha⁻¹ (mean 6.8 Mg C ha⁻¹) for communities of C. ladanifer and from 2.6 to 8.6 Mg C ha⁻¹ (mean 4.5 Mg C ha⁻¹) for R. sphaerocarpa. These quantities account for 20–30% of the total plant biomass in the system. The potential for carbon sequestration of these shrubs in the studied system ranges from 0.10 to 1.32 Mg C ha⁻¹ year⁻¹ and from 0.25 to 1.25 Mg C ha⁻¹ year⁻¹ for the C. ladanifer and R. sphaerocarpa communities, respectively.
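The community-level biomass models mentioned above relate shrub carbon to canopy cover and mean height. The Python sketch below shows the general shape of such an estimate with placeholder coefficients (the fitted coefficients of the study are not reproduced here) and a generic 0.5 carbon fraction.

def shrub_carbon_stock(cover_percent, mean_height_m, a=0.05, b=1.1, carbon_fraction=0.5):
    """Rough community-level shrub carbon estimate (Mg C per ha).

    cover_percent  -- shrub canopy cover of the plot (%)
    mean_height_m  -- mean shrub height (m)
    a, b           -- placeholder allometric coefficients, NOT the study's fitted values
    """
    phytovolume_index = cover_percent * mean_height_m        # crude cover-times-height proxy
    biomass_mg_ha = a * phytovolume_index ** b               # above- plus belowground biomass
    return carbon_fraction * biomass_mg_ha

# Example: a plot with 30% cover of 1.5 m tall shrubs.
print(round(shrub_carbon_stock(cover_percent=30, mean_height_m=1.5), 2), "Mg C/ha")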
Abstract:
Mode switches are used to partition a system's behavior into different modes in order to reduce the complexity of large embedded systems. Such systems operate in multiple modes, each corresponding to a specific application scenario; they are called multi-mode systems (MMS). A different piece of software is normally executed in each mode. At any given time the system is in one of the predefined modes and can be switched to another as a result of a certain condition. A mode-switch mechanism (or mode-change protocol) is used to shift the system from one mode to another at run-time. In this thesis we have used a hierarchical scheduling framework to implement a multi-mode system called the Multi-Mode Hierarchical Scheduling Framework (MMHSF). A two-level Hierarchical Scheduling Framework (HSF) has already been implemented in an open-source real-time operating system, FreeRTOS, to support temporal isolation among real-time components. The main contribution of this thesis is the extension of the HSF with a multi-mode feature, with an emphasis on making minimal changes to the underlying operating system (FreeRTOS) and its HSF implementation. Our implementation uses fixed-priority preemptive scheduling at both the local and global scheduling levels and idling periodic servers, and it now supports different system modes that can be switched at run-time. Each subsystem and task exhibits different timing attributes depending on the mode, and upon a Mode Change Request (MCR) the task-set and timing interfaces of the entire system (including subsystems and tasks) undergo a change. A mode-change protocol specifies precisely how the system mode will be changed. However, an application may need not only to change the mode but also to use a different mode-change protocol semantics. For example, a mode change from normal to shutdown can allow all tasks to be completed before the mode itself is changed, while a change from normal to emergency may require aborting all tasks instantly. In our work, both the system mode and the mode-change protocol can be changed at run-time, provided they have been defined beforehand by the developer. We have implemented three different mode-change protocols to switch from one mode to another: the suspend/resume protocol, the abort protocol, and the complete protocol. These protocols increase the flexibility of the system, allowing users to select how they want to switch to a new mode. The implementation of MMHSF has been tested and evaluated on an AVR-based 32-bit EVK1100 board with an AVR32UC3A0512 microcontroller. We have tested the behavior of each system mode and of each mode-change protocol, and the thesis also reports performance measurements for all mode-change protocols.
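To make the two-level scheduling and mode-switch machinery concrete, here is a deliberately simplified Python sketch (illustrative only; the actual MMHSF is built on FreeRTOS, and the mode table below is invented): a global fixed-priority scheduler picks the highest-priority server with remaining budget, each server locally picks its highest-priority ready task, and a mode-change request is handled according to one of the three protocols.

MODES = {
    "normal":    {"ServerA": {"budget": 5, "prio": 1}, "ServerB": {"budget": 3, "prio": 2}},
    "emergency": {"ServerA": {"budget": 8, "prio": 1}},     # ServerB does not exist in this mode
}

class Server:
    def __init__(self, name, budget, prio):
        self.name, self.budget, self.prio = name, budget, prio
        self.tasks = []                                  # (priority, task); lower number = higher priority

    def pick_task(self):
        return min(self.tasks)[1] if self.tasks else None    # local fixed-priority choice

def global_pick(servers):
    ready = [s for s in servers if s.budget > 0 and s.tasks]
    return min(ready, key=lambda s: s.prio) if ready else None   # global fixed-priority choice

def mode_change(servers, new_mode, protocol):
    """Apply a Mode Change Request under one of the three protocols."""
    for s in servers:
        if protocol == "abort":                          # drop all pending work immediately
            s.tasks.clear()
        elif protocol == "complete":                     # finish every pending task before switching
            while s.tasks:
                entry = min(s.tasks)
                s.tasks.remove(entry)
                print("completing", entry[1])
        # "suspend/resume": keep tasks; they continue under the new mode's budgets
    cfg = MODES[new_mode]
    kept = []
    for s in servers:
        if s.name in cfg:                                # re-parameterize servers present in the new mode
            s.budget, s.prio = cfg[s.name]["budget"], cfg[s.name]["prio"]
            kept.append(s)
    return kept

servers = [Server(n, **p) for n, p in MODES["normal"].items()]
servers[0].tasks = [(1, "ctrl_loop")]
servers[1].tasks = [(2, "logger")]
print("running:", global_pick(servers).pick_task())
servers = mode_change(servers, "emergency", protocol="abort")
print("servers after MCR:", [s.name for s in servers])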
Abstract:
Modern mobile devices have become increasingly powerful in functionality and entertainment as next-generation mobile computing and communication technologies have rapidly advanced. However, battery capacity has not experienced an equivalent increase. The user experience of modern mobile systems is therefore greatly affected by battery lifetime, which is an unstable factor that is hard to control. To address this problem, previous work proposed energy-centric power management (PM) schemes that provide a strong guarantee on battery lifetime by globally managing energy as a first-class resource in the system. As the processor scheduler plays a pivotal role in power management and application performance guarantees, this thesis explores the user experience optimization of energy-limited mobile systems from the perspective of energy-centric processor scheduling, i.e., scheduling in a context where energy is a first-class resource. The thesis first analyzes the general factors contributing to the mobile system user experience. It then determines the essential requirements on energy-centric processor scheduling for user experience optimization: proportional power sharing, time-constraint compliance, and, when necessary, a trade-off between the power share and time-constraint compliance. To meet these requirements, the classical fair queuing algorithm and its reference model are extended from the network and CPU bandwidth sharing domains to the energy sharing domain and, on that basis, the energy-based fair queuing (EFQ) algorithm is proposed for energy-centric processor scheduling. The EFQ algorithm is designed to provide proportional power shares to tasks by scheduling them based on their energy consumption and weights. The power share of each time-sensitive task is protected against changes in the scheduling environment to guarantee stable performance, and any instantaneous power share that is over-allocated to one time-sensitive task can be fairly re-allocated to the other tasks. In addition, to better support real-time and multimedia scheduling, a real-time-friendly mechanism is combined with the EFQ algorithm to give brief scheduling preference to the most urgent time-sensitive tasks. The properties of the EFQ algorithm are evaluated through high-level modelling and simulation. The simulation results indicate that the essential requirements of energy-centric processor scheduling can be achieved. The EFQ algorithm was later implemented in the Linux kernel. To assess the properties of the Linux-based EFQ scheduler, an experimental test-bench based on an embedded platform, a multithreaded test-bench program, and an open-source benchmark suite was developed. Through specifically designed experiments, this thesis first verifies the properties of EFQ in power-share management and real-time scheduling, and then explores the potential benefits of employing EFQ scheduling in the user experience optimization of energy-limited mobile systems. Experimental results on power-share management show that EFQ is more effective than the Linux CFS scheduler in managing power shares and that it can achieve a proportional sharing of the system power regardless of the device on which the energy is spent. Experimental results on real-time scheduling demonstrate that EFQ can achieve effective, flexible, and robust time-constraint compliance even as the number of tasks or the energy-estimation error increases. Finally, a comparative analysis of the experimental results on user experience optimization demonstrates that EFQ is more effective and flexible than traditional processor scheduling algorithms, such as the default Linux scheduler, and that it provides the possibility of optimizing and preserving the user experience of energy-limited mobile systems.
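The core idea of energy-based fair queuing can be sketched in a few lines of Python (a schematic reconstruction from the description above, not the thesis' Linux implementation): each task accumulates virtual energy equal to the energy it consumes divided by its weight, and the scheduler always runs the task with the smallest virtual energy, which drives long-run energy consumption toward the reserved proportional shares.

class EFQTask:
    def __init__(self, name, weight):
        self.name, self.weight = name, weight
        self.virtual_energy = 0.0           # consumed energy normalized by weight

def efq_pick(tasks):
    """Pick the runnable task with the smallest virtual energy (most 'behind' its share)."""
    return min(tasks, key=lambda t: t.virtual_energy)

def account(task, energy_joules):
    """Charge the energy measured or estimated for the last scheduling quantum."""
    task.virtual_energy += energy_joules / task.weight

# Two tasks with a 2:1 power-share reservation; per-quantum energy figures are made up.
video  = EFQTask("video_decoder", weight=2.0)
backup = EFQTask("background_sync", weight=1.0)
tasks = [video, backup]

for _ in range(6):
    t = efq_pick(tasks)
    account(t, energy_joules=1.0)           # pretend each quantum costs 1 J
    print("ran", t.name)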
Abstract:
Will renewable energy sources one day supply all of the world's energy needs? Some argue yes, while others say no. However, in some regions of the world, electricity production from renewable energy sources is already at a promising stage of development at which its generation costs compete with those of conventional electricity sources, i.e., grid parity. This achievement has been underpinned by increases in technology efficiency, reductions in production costs and, above all, years of policy interventions providing financial support. The diffusion of solar photovoltaic (PV) systems in Germany is an important frontrunner case in point. Germany is not only the top country worldwide in terms of installed PV capacity but also one of the pioneer countries where grid parity has recently been achieved. However, there might be a cloud on the horizon. The diffusion rate has started to decline in many regions. In addition, local solar firms, which are known to be important drivers of diffusion, have started to face difficulties in running their businesses. These developments raise some important questions: Is this a temporary decline in diffusion? Will adopters continue to install PV systems? What about the business models of the local solar firms? Based on the case of PV systems in Germany, through a multi-level analysis and two complementary literature reviews, this PhD dissertation extends the debate by providing a wealth of empirical detail in an area where contextual knowledge is limited. The first analysis takes the adopter perspective, exploring the "micro" level and the social process underlying the adoption of PV systems. The second takes a firm-level perspective, exploring the business models of firms and their driving role in the diffusion of PV systems. The third takes a regional perspective, exploring the "meso" level, i.e., the social process underlying the adoption of PV systems and its modeling techniques. The results have implications for both scholars and policymakers, not only regarding renewable energy innovations at grid parity but also, in an inductive manner, regarding policy-driven environmental innovations that achieve cost competitiveness.
Abstract:
Data centers are found in every sector of the worldwide economy. They consist of tens of thousands of servers, serving millions of users globally, 24 hours a day and 365 days a year. In recent years, e-Science applications such as e-Health or Smart Cities have experienced significant development. The need to deal efficiently with the computational needs of next-generation applications, together with the increasing demand for higher resources in traditional applications, has facilitated the rapid proliferation and growth of data centers. A drawback of this capacity growth has been the rapid and dramatic increase in the energy consumption of these facilities. In 2010, data center electricity represented 1.3% of all electricity use in the world. In 2012 alone, global data center power demand grew 63% to 38 GW, and a further rise of 17% to 43 GW was estimated for 2013. Moreover, data centers are responsible for more than 2% of total carbon dioxide emissions. This PhD thesis addresses the energy challenge by proposing proactive and reactive thermal- and energy-aware optimization techniques that contribute to placing data centers on a more scalable curve. This work develops energy models and uses knowledge about the energy demand of the workload to be executed and of the computational and cooling resources available at the data center to optimize energy consumption. Moreover, data centers are considered as a crucial element within their application framework, optimizing not only the energy consumption of the facility but the global energy consumption of the application. The main contributors to the energy consumption of a data center are the computing power drawn by IT equipment and the cooling power needed to keep the servers within the temperature range that ensures safe operation. Because of the cubic relation of fan power to fan speed, solutions based on over-provisioning cold air to the servers usually lead to inefficiencies. On the other hand, higher chip temperatures lead to higher leakage power because of the exponential dependence of leakage on temperature. Moreover, workload characteristics as well as allocation policies have an important impact on the leakage-cooling tradeoffs. The first key contribution of this work is the development of power and temperature models that accurately describe the leakage-cooling tradeoffs at the server level, and the proposal of strategies to minimize server energy via joint cooling and workload management from a multivariate perspective. When scaling to the data center level, a similar behavior in terms of leakage-temperature tradeoffs can be observed. As room temperature rises, the efficiency of the data room cooling units improves; however, CPU temperature rises as well, and so does leakage power. Moreover, the thermal dynamics of a data room exhibit unbalanced patterns due to both the workload allocation and the heterogeneity of the computing equipment. The second main contribution is the proposal of thermal- and heterogeneity-aware workload management techniques that jointly optimize the allocation of computation and cooling to servers. These strategies need to be backed by flexible room-level models, able to work at runtime, that describe the system from a high-level perspective. Within the framework of next-generation applications, decisions taken at the application level can have a dramatic impact on the energy consumption of lower abstraction levels, i.e., the data center facility. It is important to consider the relationships between all the computational agents involved in the problem, so that they can cooperate to achieve the common goal of reducing energy in the overall system. The third main contribution is the energy optimization of the overall application by evaluating the energy costs of performing part of the processing in any of the different abstraction layers, from the node to the data center, via workload management and off-loading techniques. In summary, the work presented in this PhD thesis makes contributions to leakage- and cooling-aware server modeling and optimization, data center thermal modeling and heterogeneity-aware data center resource allocation, and develops mechanisms for the energy optimization of next-generation applications from a multi-layer perspective.
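The leakage-cooling tradeoff can be illustrated with a toy model in Python (all coefficients are invented for illustration and are not the models fitted in the thesis): fan power grows with the cube of fan speed, CPU temperature falls as airflow increases, leakage grows exponentially with temperature, and the optimum is the fan speed that minimizes their sum.

import math

def cpu_temperature(util, fan_speed):
    """Toy thermal model: hotter with load, cooler with airflow (invented coefficients)."""
    return 40.0 + 50.0 * util / (0.5 + fan_speed)

def leakage_power(temp_c):
    """Leakage grows roughly exponentially with temperature (illustrative constants)."""
    return 10.0 * math.exp(0.02 * (temp_c - 40.0))

def fan_power(fan_speed):
    """Fan power scales with the cube of fan speed (normalized 0-1)."""
    return 60.0 * fan_speed ** 3

def total_power(util, fan_speed, dynamic_power=120.0):
    temp = cpu_temperature(util, fan_speed)
    return dynamic_power + leakage_power(temp) + fan_power(fan_speed)

# Sweep fan speeds for a server at 80% utilization and find the sweet spot.
best = min((total_power(util=0.8, fan_speed=s / 100), s / 100) for s in range(10, 101))
print("minimum total power %.1f W at fan speed %.2f" % best)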
Abstract:
In recent decades, full electric and hybrid electric vehicles have emerged as an alternative to conventional cars due to a range of factors, including environmental and economic aspects. These vehicles are the result of considerable efforts to seek ways of reducing the use of fossil fuel for vehicle propulsion. Sophisticated technologies such as hybrid and electric powertrains require careful study and optimization, and mathematical models play a key role at this point. Currently, many advanced mathematical analysis tools, as well as computer applications, have been built for vehicle simulation purposes. Given the great interest of hybrid and electric powertrains, along with the increasing importance of reliable computer-based models, the author decided to integrate both aspects in the research purpose of this work. Furthermore, this is one of the first final degree projects carried out at the ETSII (Higher Technical School of Industrial Engineers) that covers the study of hybrid and electric propulsion systems. The present project is based on MBS3D 2.0, specialized software for the dynamic simulation of multibody systems developed at the UPM Institute of Automobile Research (INSIA). Automobiles are a clear example of complex multibody systems, which are present in nearly every field of engineering. The work presented here benefits from the availability of the MBS3D software. This program has proven to be a very efficient tool, with a highly developed underlying mathematical formulation. On this basis, the focus of this project is the extension of MBS3D so that it can perform dynamic simulations of hybrid and electric vehicle models. This requires the joint simulation of the mechanical model of the vehicle together with the model of the hybrid or electric powertrain. These sub-models belong to completely different physical domains: the powertrain consists of energy storage systems, electrical machines, and power electronics, connected to purely mechanical components (wheels, suspension, transmission, clutch…). The challenge today is to create a global vehicle model that is valid for computer simulation. Therefore, the main goal of this project is to apply co-simulation methodologies to a comprehensive model of an electric vehicle in which sub-models from different areas of engineering are coupled. The electric vehicle (EV) model created consists of a separately excited DC electric motor, a Li-ion battery pack, a DC/DC chopper converter, and a multibody vehicle model. Co-simulation techniques allow car designers to simulate complex vehicle architectures and behaviors that are usually difficult to implement in a real environment due to safety and/or economic reasons. In addition, multi-domain computational models help to detect the effects of different driving patterns and parameters and to improve the models in a fast and effective way. Automotive designers can greatly benefit from a multidisciplinary approach to new hybrid and electric vehicles. In this case, the global electric vehicle model includes an electrical subsystem and a mechanical subsystem. The electrical subsystem consists of three basic components: the electric motor, the battery pack, and the power converter. A modular representation is used to build the dynamic model of the vehicle drivetrain: every component of the drivetrain (submodule) is modeled separately and has its own general dynamic model, with clearly defined inputs and outputs.
Then, all the submodules are assembled according to the drivetrain configuration and, in this way, the power flow across the components is completely determined. Dynamic models of electrical components are often based on equivalent circuits, where Kirchhoff's voltage and current laws are applied to derive the algebraic and differential equations. Here, a Randles circuit is used for the dynamic model of the battery, and the electric motor is modeled through the analysis of the equivalent circuit of a separately excited DC motor, in which the power converter is included. The mechanical subsystem is defined by the MBS3D equations. These equations consider the position, velocity, and acceleration of all the bodies comprising the vehicle multibody system. MBS3D 2.0 is entirely written in MATLAB, and the structure of the program has been thoroughly studied and understood by the author. The MBS3D software is adapted according to the requirements of the applied co-simulation method: some of the core functions, such as the integrator and graphics, are modified, and several auxiliary functions are added in order to compute the mathematical model of the electrical components. By coupling and co-simulating both subsystems, it is possible to evaluate the dynamic interaction among all the components of the drivetrain. A 'tight-coupling' method is used to co-simulate the sub-models. This approach integrates all subsystems simultaneously, and the results of the integration are exchanged by function call. This means that the integration is done jointly for the mechanical and the electrical subsystems, under a single integrator, and the speed of integration is therefore determined by the slower subsystem. Simulations are then used to show the performance of the developed EV model. However, this project focuses more on the validation of the computational and mathematical tool for electric and hybrid vehicle simulation. For this purpose, a detailed study and comparison of different integrators within the MATLAB environment is carried out. Consequently, the main efforts are directed towards the implementation of co-simulation techniques in the MBS3D software. In this regard, it is not intended to create an extremely precise EV model in terms of real vehicle performance, although an acceptable level of accuracy is achieved. The gap between the EV model and the real system is filled, in a way, by introducing the gas and brake pedal inputs, which reflect actual driver behavior. This input is included directly in the differential equations of the model and determines the amount of current provided to the electric motor. For a separately excited DC motor, the rotor current is proportional to the traction torque delivered to the car wheels. Therefore, as in real vehicle models, the propulsion torque in the mathematical model is controlled through accelerator and brake pedal commands. The designed transmission system also includes a reduction gear that adapts and transfers the torque coming from the motor drive. The main contribution of this project is, therefore, the implementation of a new calculation path for the wheel torques, based on the performance characteristics and outputs of the electric powertrain model. Originally, the wheel traction and braking torques were input to MBS3D through a vector computed directly by the user in a MATLAB script. Now, they are calculated as a function of the motor current which, in turn, depends on the current provided by the battery pack across the DC/DC chopper converter.
The motor and battery currents and voltages are the solutions of the electrical ODE (ordinary differential equation) system coupled to the multibody system. Simultaneously, the outputs of the MBS3D model are the position, velocity, and acceleration of the vehicle at all times. The motor shaft speed is computed from the output vehicle speed considering the wheel radius, the gear reduction ratio, and the transmission efficiency. This motor shaft speed, available from the MBS3D model, is then introduced into the differential equations corresponding to the electrical subsystem. In this way, MBS3D and the electrical powertrain model are interconnected and both subsystems exchange values, as expected with the tight-coupling approach. When programming mathematical models of complex systems, code optimization is a key step in the process. A way to improve the overall performance of the integration, making use of C/C++ as an alternative programming language, is described and implemented. Although this entails a higher implementation burden, it leads to important advantages regarding co-simulation speed and stability. In order to do this, it is necessary to integrate MATLAB with another integrated development environment (IDE) in which C/C++ code can be generated and executed. In this project, the C/C++ files are programmed in Microsoft Visual Studio, and the interface between both environments is created by building C/C++ MEX-file functions. These programs contain functions or subroutines that can be dynamically linked and executed from MATLAB. This process achieves reductions in simulation time of up to two orders of magnitude. The tests performed with different integrators also reveal the stiff character of the differential equations corresponding to the electrical subsystem and allow the co-simulation process to be improved. When varying the parameters of the integration and/or the initial conditions of the problem, the solutions of the system of equations show better dynamic response and stability depending on the integrator used. Several integrators, with fixed and variable step sizes, and for stiff and non-stiff problems, are applied to the coupled ODE system, and the results are analyzed, compared, and discussed. From all the above, the project can be divided into four main parts: 1. Creation of the equation-based electric vehicle model; 2. Programming, simulation, and adjustment of the electric vehicle model; 3. Application of co-simulation methodologies to MBS3D and the electric powertrain subsystem; and 4. Code optimization and study of different integrators. Additionally, in order to place the project in context, the first chapters include an introduction to basic vehicle dynamics, the current classification of hybrid and electric vehicles, and an explanation of the technologies involved, such as brake energy regeneration, electric and non-electric propulsion systems for EVs and HEVs (hybrid electric vehicles), and their control strategies. Later, the problem of dynamic modeling of hybrid and electric vehicles is discussed. The integrated development environment and the simulation tool are also briefly described. The core chapters include an explanation of the major co-simulation methodologies and how they have been programmed and applied to the electric powertrain model together with the multibody system dynamic model. Finally, the last chapters summarize the main results and conclusions of the project and propose further research topics.
In conclusion, co-simulation methodologies are applicable within the integrated development environments MATLAB and Visual Studio, and the simulation tool MBS3D 2.0, where equation-based models of multidisciplinary subsystems, consisting of mechanical and electrical components, are coupled and integrated in a very efficient way.
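The tight-coupling approach, in which the electrical and mechanical ODEs are advanced by a single integrator and exchange values at every step, can be reduced to a compact Python sketch. This is a deliberately simplified stand-in for the MATLAB/MBS3D implementation: the motor, battery, and vehicle parameters are illustrative, the battery is reduced to a constant voltage source, and a forward Euler integrator replaces the MATLAB solvers.

# State: [i_motor (A), v_vehicle (m/s)] -- one integrator advances both subsystems.
R, L, K = 0.5, 0.01, 0.8        # armature resistance (ohm), inductance (H), motor constant
M, RW, GEAR = 1200.0, 0.3, 7.0  # vehicle mass (kg), wheel radius (m), reduction ratio
CDRAG = 0.6                     # lumped drag coefficient (N per (m/s)^2), illustrative

def derivatives(state, v_batt):
    i, v = state
    omega_motor = (v / RW) * GEAR                   # motor speed from vehicle speed (mech -> elec)
    di_dt = (v_batt - R * i - K * omega_motor) / L  # electrical subsystem (DC motor armature)
    wheel_force = (K * i) * GEAR / RW               # traction from motor torque (elec -> mech)
    dv_dt = (wheel_force - CDRAG * v * v) / M       # mechanical subsystem (longitudinal dynamics)
    return di_dt, dv_dt

state = (0.0, 0.0)                # start at rest
dt, v_batt = 1e-4, 300.0          # step size (s) and commanded armature voltage (V)
for step in range(int(10.0 / dt)):                  # simulate 10 s with a single integrator
    di, dv = derivatives(state, v_batt)
    state = (state[0] + dt * di, state[1] + dt * dv)
print("current %.1f A, speed %.1f m/s after 10 s" % state)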
Abstract:
RNA polymerase I (Pol I) transcription in the yeast Saccharomyces cerevisiae is greatly stimulated in vivo and in vitro by the multiprotein complex upstream activation factor (UAF). UAF binds tightly to the upstream element of the rDNA promoter, such that once bound (in vitro), UAF does not readily exchange onto a competing template. Of the polypeptides previously identified in purified UAF, three are encoded by genes required for Pol I transcription in vivo: RRN5, RRN9, and RRN10. Two others, p30 and p18, have remained uncharacterized. We report here that the N-terminal amino acid sequence, gel electrophoretic mobility, and immunoreactivity of p18 show that it is histone H3. In addition, histone H4 was found in UAF, and myc-tagged histone H4 could be used to affinity-purify UAF. Histones H2A and H2B were not detectable in UAF. These results suggest that histones H3 and H4 probably account for the strong binding of UAF to DNA and may offer a means by which general nuclear regulatory signals could be transmitted to Pol I.