915 results for Two-level scheduling and optimization
Abstract:
Over the past few decades, we have enjoyed tremendous benefits from the revolutionary advancement of computing systems, driven mainly by remarkable semiconductor technology scaling and increasingly complex processor architectures. However, the exponential increase in transistor density has led directly to exponentially increased power consumption and dramatically elevated system temperatures, which not only adversely impact the system's cost, performance, and reliability, but also increase leakage and thus the overall power consumption. Today, power and thermal issues pose enormous challenges and threaten to slow the continued evolution of computer technology. Effective power/thermal-aware design techniques are urgently needed at all design abstraction levels, from the circuit level and the logic level to the architectural and system levels.

In this dissertation, we present our research efforts to employ real-time scheduling techniques to solve resource-constrained power/thermal-aware design-optimization problems. In our research, we developed a set of simple yet accurate system-level models to capture the processor's thermal dynamics as well as the interdependency of leakage power consumption, temperature, and supply voltage. Based on these models, we investigated the fundamental principles of power/thermal-aware scheduling and developed real-time scheduling techniques targeting a variety of design objectives, including peak temperature minimization, overall energy reduction, and performance maximization.

The novelty of this work is that we integrate cutting-edge research on power and thermal behavior at the circuit and architectural levels into a set of accurate yet simplified system-level models, and are able to conduct system-level analysis and design based on these models. The theoretical study in this work serves as a solid foundation to guide the development of power/thermal-aware scheduling algorithms for practical computing systems.
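The leakage-temperature interdependency described above is often captured, at the system level, by a lumped RC thermal model with a linearized leakage term. Below is a minimal sketch of such a model; all coefficients are purely illustrative and are not taken from the dissertation:

```python
def simulate_temperature(p_dyn, t_amb=45.0, r_th=0.8, c_th=340.0,
                         leak_a=2.0, leak_b=0.05, dt=0.1, steps=50000):
    """Forward-Euler simulation of a lumped RC thermal model with a
    temperature-dependent leakage term (linear approximation).
    p_dyn: dynamic power (W); r_th: thermal resistance (K/W);
    c_th: thermal capacitance (J/K). Illustrative values only."""
    t = t_amb
    for _ in range(steps):
        p_leak = leak_a + leak_b * t            # leakage grows with temperature
        p_total = p_dyn + p_leak
        d_t = (p_total - (t - t_amb) / r_th) / c_th
        t += d_t * dt
    return t
```

The steady state satisfies P_dyn + a + b·T = (T − T_amb)/R, i.e. T* = (P_dyn + a + T_amb/R)/(1/R − b), so the positive feedback of leakage on temperature shifts the equilibrium upward; the model is stable only while 1/R > b.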
Abstract:
The integration of mathematics and science in secondary schools in the 21st century continues to be an important topic of practice and research. The purpose of my research study, which builds on studies by Frykholm and Glasson (2005) and Berlin and White (2010), is to explore the potential constraints and benefits of integrating mathematics and science in Ontario secondary schools based on the perspectives of in-service and pre-service teachers with various math and/or science backgrounds. A mixed qualitative and quantitative research design with an exploratory approach was used. The qualitative data were collected from a sample of 12 in-service teachers with various math and/or science backgrounds recruited from two school boards in Eastern Ontario. The quantitative and some qualitative data were collected from a sample of 81 pre-service teachers from the Queen’s University Bachelor of Education (B.Ed.) program. Semi-structured interviews were conducted with the in-service teachers, while a survey and a focus group were conducted with the pre-service teachers. Once collected, the qualitative data were abductively analyzed. For the quantitative data, descriptive and inferential statistics (one-way ANOVAs and Pearson chi-square analyses) were calculated to examine the perspectives of teachers regardless of teaching background and to compare groups of teachers based on teaching background. The findings of this study suggest that in-service and pre-service teachers have a positive attitude towards the integration of math and science and view it as valuable to student learning and success. The pre-service teachers viewed the integration as easy and did not express concerns about it. The in-service teachers, on the other hand, highlighted concerns and challenges such as resources, scheduling, and time constraints.
My results illustrate when teachers perceive it is valuable to integrate math and science and which aspects of the classroom benefit most from the integration. Furthermore, the results highlight barriers to, and possible solutions for, improving the integration of math and science. In addition to the benefits and constraints of integration, my results illustrate why some teachers may opt out of integrating math and science, and the different strategies teachers have incorporated to integrate the two subjects in their classrooms.
Abstract:
Optimization of Carnobacterium divergens V41 growth and bacteriocin activity in a culture medium deprived of animal protein, as needed for food bioprotection, was performed using a statistical approach. In a screening experiment, twelve factors (pH, temperature, carbohydrates, NaCl, yeast extract, soy peptone, sodium acetate, ammonium citrate, magnesium sulphate, manganese sulphate, ascorbic acid and thiamine) were tested for their influence on maximal growth and bacteriocin activity using a two-level incomplete factorial design with 192 experiments performed in microtiter plate wells. Based on the results, a basic medium was developed and three variables (pH, temperature and carbohydrate concentration) were selected for a scale-up study in a bioreactor. A 2³ complete factorial design was performed, allowing the estimation of the linear effects of the factors and all first-order interactions. The best conditions for cell production were obtained at a temperature of 15°C and a carbohydrate concentration of 20 g/l whatever the pH (in the range 6.5-8), and the best conditions for bacteriocin activity were obtained at 15°C and pH 6.5 whatever the carbohydrate concentration (in the range 2-20 g/l). The predicted final count of C. divergens V41 and the bacteriocin activity under the optimized conditions (15°C, pH 6.5, 20 g/l carbohydrates) were 2.4 × 10¹⁰ CFU/ml and 819,200 AU/ml, respectively. C. divergens V41 cells cultivated under the optimized conditions were able to grow in cold-smoked salmon and totally inhibited the growth of Listeria monocytogenes (< 50 CFU g⁻¹) during five weeks of vacuum storage at 4 and 8°C.
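A two-level factorial design like the one described can be enumerated and analyzed in a few lines of code. The sketch below builds a 2³ full factorial in coded (−1/+1) units and estimates the main effects; the factor names echo the abstract, but the response function is synthetic and purely illustrative:

```python
from itertools import product

# Enumerate a 2^3 full factorial design in coded units (-1/+1).
factors = ["pH", "temperature", "carbohydrates"]
design = list(product([-1, 1], repeat=3))   # 8 runs

def response(run):
    # Hypothetical noise-free linear model with one interaction term,
    # standing in for measured growth/bacteriocin responses.
    ph, temp, carb = run
    return 10 + 0.1 * ph - 2.0 * temp + 1.5 * carb + 0.3 * temp * carb

ys = [response(run) for run in design]

def main_effect(i):
    # Average response at the +1 level minus average at the -1 level.
    hi = [y for run, y in zip(design, ys) if run[i] == 1]
    lo = [y for run, y in zip(design, ys) if run[i] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

effects = {name: main_effect(i) for i, name in enumerate(factors)}
```

Because the design is balanced, each main effect is simply twice the corresponding model coefficient, and the interaction term averages out of every main-effect estimate.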
Abstract:
In the framework of industrial problems, Constrained Optimization is known to have very good modeling capability and performance, and stands as one of the most powerful, explored, and exploited tools for addressing prescriptive tasks. The number of applications is huge, ranging from logistics to transportation, packing, production, telecommunications, scheduling, and much more. The main reason behind this success lies in the remarkable effort put in over the last decades by the OR community to develop realistic models and devise exact or approximate methods to solve the largest variety of constrained and combinatorial optimization problems, together with the spread of computational power and easily accessible OR software and resources. On the other hand, technological advancements have led to a wealth of data never seen before, and increasingly push towards methods able to extract useful knowledge from it; among data-driven methods, Machine Learning techniques appear to be among the most promising, thanks to their successes in domains like Image Recognition, Natural Language Processing, and game playing, as well as the amount of research involved. The purpose of the present research is to study how Machine Learning and Constrained Optimization can be used together to achieve systems able to leverage the strengths of both: this would open the way to exploiting decades of research on resolution techniques for COPs while constructing models able to adapt and learn from available data. In the first part of this work, we survey the existing techniques and classify them according to the type, method, or scope of the integration; subsequently, we introduce Moving Target, a novel and general algorithm devised to inject knowledge into learning models through constraints. In the last part of the thesis, two applications stemming from real-world projects carried out in collaboration with Optit are presented.
Abstract:
We introduce an analytical approximation scheme to diagonalize parabolically confined two-dimensional (2D) electron systems with both Rashba and Dresselhaus spin-orbit interactions. The starting point of our perturbative expansion is a zeroth-order Hamiltonian for an electron confined in a quantum wire with an effective spin-orbit-induced magnetic field along the wire, obtained by properly rotating the usual spin-orbit Hamiltonian. We find that the spin-orbit-related transverse coupling terms can be recast into two parts, W and V, which couple crossing and noncrossing adjacent transverse modes, respectively. Interestingly, the zeroth-order Hamiltonian together with W can be solved exactly, as it maps onto the Jaynes-Cummings model of quantum optics. We treat the V coupling by performing a Schrieffer-Wolff transformation. This allows us to obtain an effective Hamiltonian to third order in the coupling strength k_R l of V, which can be straightforwardly diagonalized via an additional unitary transformation. We also apply our approach to other types of effective parabolic confinement, e.g., 2D electrons in a perpendicular magnetic field. To demonstrate the usefulness of our approximate eigensolutions, we obtain analytical expressions for the nth Landau-level g factors g_n in the presence of both Rashba and Dresselhaus couplings. For small values of the bulk g factors, we find that spin-orbit effects cancel out entirely for particular values of the spin-orbit couplings. By solving simple transcendental equations we also obtain the band minima of a Rashba-coupled quantum wire as a function of an external magnetic field. These can be used to describe Shubnikov-de Haas oscillations. This procedure makes it easier to extract the strength of the spin-orbit interaction in these systems via proper fitting of the data.
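Band minima of the kind mentioned at the end follow from a transcendental equation dE/dk = 0, which is straightforward to solve numerically. The sketch below does this by bisection for an illustrative dimensionless lower-branch dispersion E(k) = k² − √(α²k² + b²), a generic Rashba-plus-Zeeman form chosen for illustration, not necessarily the paper's exact dispersion:

```python
def band_min_k(alpha, b, lo=1e-6, hi=10.0, tol=1e-10):
    """Locate the off-center minimum of the illustrative lower-branch
    dispersion E(k) = k^2 - sqrt(alpha^2 k^2 + b^2) (dimensionless units:
    alpha ~ spin-orbit strength, b ~ Zeeman energy) by bisecting dE/dk = 0.
    Returns 0.0 when the band minimum sits at k = 0 (large Zeeman)."""
    def d_e(k):
        return 2 * k - alpha**2 * k / (alpha**2 * k**2 + b**2) ** 0.5
    if d_e(lo) >= 0:       # no sign change: minimum is at k = 0
        return 0.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if d_e(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For this model the off-center minimum exists only while b < α²/2, and it sits at α²k² + b² = α⁴/4, which the bisection recovers; tracking it as a function of b mimics extracting the spin-orbit strength from field-dependent data.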
Abstract:
A simplex-lattice statistical design was employed to study an optimization method for the preservative system in an ophthalmic suspension of dexamethasone and polymyxin B. The assay matrix generated 17 formulas differentiated by the preservatives and EDTA (disodium ethylenediaminetetraacetate), the independent variables being: X1 = chlorhexidine digluconate (0.010% w/v); X2 = phenylethanol (0.500% w/v); X3 = EDTA (0.100% w/v). The dependent variable was the D-value obtained from the microbial challenge of the formulas, calculated by modeling the microbial killing process with an exponential function. The analysis of the dependent variable, performed using the Design Expert/W software, yielded cubic equations with terms derived from a stepwise adjustment method for the challenge microorganisms: Pseudomonas aeruginosa, Burkholderia cepacia, Staphylococcus aureus, Candida albicans and Aspergillus niger. Besides the mathematical expressions, response surfaces and contour graphics were obtained for each assay. The contour graphs were overlaid in order to identify a region containing the most adequate formulas (graphic strategy), with representatives: X1 = 0.10 (0.001% w/v); X2 = 0.80 (0.400% w/v); X3 = 0.10 (0.010% w/v). Additionally, in order to minimize the response (D-value), a numerical strategy based on the desirability function was used, which resulted in the following combination of independent variables: X1 = 0.25 (0.0025% w/v); X2 = 0.75 (0.375% w/v); X3 = 0. The formulas derived from the two strategies (graphic and numerical) were submitted to microbial challenge, and the experimental D-value obtained was compared to the theoretical D-value calculated from the cubic equation. The two D-values were similar in all assays except the one related to Staphylococcus aureus.
This microorganism, as well as Pseudomonas aeruginosa, was intensely susceptible to the formulas independently of the preservative and EDTA concentrations. The formulas derived from both the graphic and numerical strategies met the criteria recommended by the official method. It was concluded that the proposed model allowed the optimization of the formulas with respect to their preservation.
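The D-value used as the response here is the time required for a one-log₁₀ reduction in viable count under an exponential kill model, N(t) = N₀ · 10^(−t/D). A small sketch of how D can be estimated from challenge-test counts by log-linear regression; the data points below are synthetic:

```python
import math

def estimate_d_value(times, counts):
    """Estimate the D-value (time for a 1-log10 reduction in viable count)
    by least-squares regression of log10(count) against time; the D-value
    is the negative reciprocal of the fitted slope."""
    ys = [math.log10(c) for c in counts]
    n = len(times)
    mx = sum(times) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(times, ys))
             / sum((x - mx) ** 2 for x in times))
    return -1.0 / slope

# Synthetic challenge data generated with a true D-value of 3.0 hours.
times = [0.0, 2.0, 4.0, 6.0]
counts = [1e6 * 10 ** (-t / 3.0) for t in times]
d_value = estimate_d_value(times, counts)
```

With real challenge data the counts carry noise, so the regression recovers D only approximately; here the synthetic data are exact and the fit returns 3.0.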
Abstract:
We show that an arbitrary system described by two dipole moments exhibits coherent superpositions of internal states that can be completely decoupled from the dissipative interactions (responsible for decoherence) and from an external driving laser field. These superpositions, known as dark or trapping states, can be completely stable or can coherently interact with the remaining states. We examine the master equation describing the dissipative evolution of the system, identify conditions for population trapping, and classify processes that can transfer population to these undriven and nondecaying states. It is shown that coherent transfers are possible only if the two systems are nonidentical, that is, if the transitions have different frequencies and/or decay rates. In particular, we find that the trapping conditions can involve both coherent and dissipative interactions and that, depending on the energy-level structure of the system, the population can be trapped in a linear superposition of two or more bare states, in a dressed state corresponding to an eigenstate of the system plus external fields, or, in some cases, in one of the excited states of the system. A comprehensive analysis is presented of the different processes responsible for population trapping, and we illustrate these ideas with three examples of two coupled systems: single V- and Lambda-type three-level atoms and two nonidentical two-level atoms, which are known to exhibit dark states. We show that the effect of population trapping does not necessarily require decoupling of the antisymmetric superposition from the dissipative interactions. We also find that the vacuum-induced coherent coupling between the systems could be easily observed in Lambda-type atoms. Our analysis of population trapping in two nonidentical atoms shows that the atoms can be driven into a maximally entangled state which is completely decoupled from the dissipative interaction.
Abstract:
High levels of heritable resistance to phosphine in Rhyzopertha dominica have recently been detected in Australia, and in an effort to isolate the genes responsible for resistance we have used random amplified DNA fingerprinting (RAF) to produce a genetic linkage map of R. dominica. The map consists of 94 dominant DNA markers with an average distance between markers of 4.6 cM, and defines nine linkage groups with a total recombination distance of 390.1 cM. We have identified two loci that are responsible for high-level resistance. One provides approximately 50× resistance to phosphine while the other provides 12.5× resistance, and in combination the two genes act synergistically to provide a resistance level 250× greater than that of fully susceptible beetles. The haploid genome size has been determined to be 4.76 × 10⁸ bp, giving an average physical distance of 1.2 Mbp per map unit. No recombination has been observed between either of the two resistance loci and their adjacent DNA markers in a population of 44 fully resistant F5 individuals, which indicates that the genes are likely to reside within 0.91 cM (1.1 Mbp) of those markers.
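The genetic-to-physical distance conversion quoted above (1.2 Mbp per map unit; 0.91 cM ≈ 1.1 Mbp) is simple arithmetic, assuming recombination is distributed uniformly across the genome, which is the simplification the averaged figure implies:

```python
def cm_to_mbp(cm, genome_bp=4.76e8, total_cm=390.1):
    """Convert genetic distance (cM) to physical distance (Mbp), assuming a
    uniform genome-wide recombination rate (a simplifying assumption):
    bp per map unit = haploid genome size / total map length."""
    return cm * (genome_bp / total_cm) / 1e6
```

One map unit then corresponds to about 1.22 Mbp, and the 0.91 cM interval around the resistance loci to about 1.1 Mbp, matching the values in the abstract.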
Abstract:
Mutations in the E1α subunit of the pyruvate dehydrogenase multienzyme complex may result in congenital lactic acidosis, but little is known about the consequences of these mutations at the enzymatic level. Here we characterize two mutants (F205L and T231A) of human pyruvate dehydrogenase in vitro, using enzyme expressed in Escherichia coli. Wild-type and mutant proteins were purified successfully and their kinetic parameters were measured. F205L shows impaired binding of the thiamin diphosphate cofactor, which may explain why patients carrying this mutation respond to high-dose vitamin B1 therapy. T231A has very low activity and a greatly elevated K_m for pyruvate, and this combination of effects would be expected to result in severe lactic acidosis. The results lead to a better understanding of the consequences of these mutations for the functional and structural properties of the enzyme, which may lead to improved therapies for patients carrying them.
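A greatly elevated K_m translates directly into reduced flux at physiological substrate levels, as the Michaelis-Menten equation v = V_max·[S]/(K_m + [S]) makes clear. A quick sketch with hypothetical kinetic constants, not the paper's measured values:

```python
def mm_rate(s, vmax, km):
    """Michaelis-Menten rate: v = Vmax * [S] / (Km + [S])."""
    return vmax * s / (km + s)

# Hypothetical comparison: wild-type vs. a Km-elevated mutant at the
# same substrate concentration (arbitrary units, illustrative only).
s = 1.0
v_wt = mm_rate(s, vmax=1.0, km=0.5)
v_mut = mm_rate(s, vmax=1.0, km=10.0)   # same Vmax, 20x higher Km
```

With [S] near the wild-type K_m, the mutant in this toy comparison runs at well under a fifth of the wild-type rate, illustrating how a K_m shift alone can starve a pathway even before any loss of V_max is considered.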
Abstract:
This paper addresses the problem of short-term hydro scheduling, particularly for head-dependent reservoirs in a competitive environment. We propose a new nonlinear optimization method that treats hydroelectric power generation as a function of both water discharge and head. Head-dependency is considered in short-term hydro scheduling in order to obtain more realistic and feasible results. The proposed method has been applied successfully to a case study based on one of the main Portuguese cascaded hydro systems, providing a higher profit at negligible additional computation time in comparison with a linear optimization method that ignores head-dependency.
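Head-dependency enters through the basic hydro power relation P = η·ρ·g·Q·H: at the same discharge, a lower head yields proportionally less power, which a fixed-head linear model misses. A minimal sketch with an illustrative efficiency (the paper's actual production function is not reproduced here):

```python
G = 9.81        # gravitational acceleration, m/s^2
RHO = 1000.0    # water density, kg/m^3

def hydro_power_mw(discharge, head, efficiency=0.9):
    """Hydro power output in MW: P = eta * rho * g * Q * H.
    discharge Q in m^3/s, head H in m; the efficiency is illustrative."""
    return efficiency * RHO * G * discharge * head / 1e6
```

Because reservoir head falls as water is discharged, the profit-maximizing schedule trades off releasing water now against preserving head for later periods, which is exactly the nonlinearity the proposed method captures.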
Abstract:
The use of distributed energy resources based on naturally intermittent power sources, such as wind generation, in power systems requires the development of new, adequate operation management and control methodologies. A short-term Energy Resource Management (ERM) methodology performed in two phases is proposed in this paper. The first phase addresses day-ahead ERM scheduling and the second deals with five-minute-ahead ERM scheduling. ERM scheduling is a complex optimization problem due to the large number of variables and constraints. In this paper the main goal is to minimize operation costs from the point of view of a virtual power player that manages the network and the existing resources. The optimization problem is solved by a deterministic mixed-integer non-linear programming approach. A case study considering a distribution network with 33 buses, 66 distributed generation units, 32 loads with demand-response contracts, 7 storage units, and 1000 electric vehicles has been implemented in a simulator developed within the scope of this work, in order to validate the proposed short-term ERM methodology under dynamic power system behavior.
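The cost-minimization objective can be illustrated, in a drastically simplified form, by a merit-order dispatch: the paper solves a full mixed-integer non-linear program, but the toy sketch below (with made-up resource names, capacities, and prices) conveys the idea of serving demand from the cheapest available resources first:

```python
def dispatch(demand, resources):
    """Greedy merit-order dispatch: serve demand from cheapest resources first.
    resources: list of (name, capacity_mw, price_per_mwh) tuples.
    Returns (schedule, total_cost, unserved_demand). A toy stand-in for
    the paper's MINLP; it ignores network and inter-period constraints."""
    schedule, cost, remaining = {}, 0.0, demand
    for name, capacity, price in sorted(resources, key=lambda r: r[2]):
        use = min(capacity, remaining)
        if use > 0:
            schedule[name] = use
            cost += use * price
            remaining -= use
    return schedule, cost, remaining

# Hypothetical resource mix loosely echoing the case study's categories.
resources = [("wind", 30.0, 5.0), ("storage", 10.0, 40.0),
             ("dg_unit", 50.0, 60.0), ("ev_discharge", 5.0, 80.0)]
schedule, cost, unserved = dispatch(60.0, resources)
```

The real problem is harder precisely because network constraints, storage dynamics, and demand-response contracts couple the periods and resources, which is why a MINLP formulation is needed rather than this greedy rule.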
Abstract:
Metaheuristic performance is highly dependent on the respective parameters, which need to be tuned. Parameter tuning may allow greater flexibility and robustness but requires careful initialization. The process of defining which parameter settings should be used is not obvious: the values depend mainly on the problem, the instance to be solved, the search time available for solving the problem, and the required solution quality. This paper presents a learning module for the autonomous parameterization of metaheuristics, integrated in a Multi-Agent System for the resolution of dynamic scheduling problems. The proposed learning module is inspired by the Autonomic Computing concept of Self-Optimization, which holds that systems must continuously and proactively improve their performance. The learning implementation uses Case-based Reasoning, which solves new cases by drawing on previous similar ones, under the assumption that similar cases have similar solutions. After a literature review of the topics involved, both the AutoDynAgents system and the Self-Optimization module are described. Finally, a computational study is presented in which the proposed module is evaluated, the results obtained are compared with previous ones, conclusions are drawn, and future work is outlined. It is expected that this proposal can be a significant contribution to the self-parameterization of metaheuristics and to the resolution of scheduling problems in dynamic environments.
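The Case-based Reasoning step, retrieving the most similar past problem and reusing its parameters, can be sketched in a few lines. The feature encoding and parameter values below are illustrative, not AutoDynAgents' actual representation:

```python
def retrieve(case_base, query):
    """Retrieve the past case most similar to the query by Euclidean
    distance over (assumed normalized) problem features, so that its
    metaheuristic parameters can be reused for the new instance."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(case_base, key=lambda case: dist(case["features"], query))

# Hypothetical case base: features could encode e.g. instance size and
# time pressure; parameters are illustrative GA settings.
case_base = [
    {"features": (0.2, 0.9), "params": {"pop_size": 50, "mutation": 0.05}},
    {"features": (0.8, 0.1), "params": {"pop_size": 200, "mutation": 0.20}},
]
best = retrieve(case_base, (0.75, 0.2))
```

A full CBR cycle would then revise the reused parameters after the run and retain the outcome as a new case, which is how the module keeps improving its own parameterization over time.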
Abstract:
Scheduling is a critical function that is present throughout many industries and applications. A great need exists for developing scheduling approaches that can be applied to a number of different scheduling problems with significant impact on the performance of business organizations. A challenge is emerging in the design of scheduling support systems for manufacturing environments where dynamic adaptation and optimization become increasingly important. In this paper, we describe a self-optimizing mechanism for a scheduling system based on Nature-Inspired Optimization Techniques (NIT).
Abstract:
Scheduling is a critical function that is present throughout many industries and applications. A great need exists for developing scheduling approaches that can be applied to a number of different scheduling problems with significant impact on the performance of business organizations. A challenge is emerging in the design of scheduling support systems for manufacturing environments where dynamic adaptation and optimization become increasingly important. In this scenario, self-optimization arises as the ability of an agent to monitor its state and performance and proactively tune itself in response to environmental stimuli.
Abstract:
A construction project is a group of discernible tasks or activities that are conducted in a coordinated effort to accomplish one or more objectives. Construction projects require varying levels of cost, time and other resources. To plan and schedule a construction project, activities must be defined sufficiently. The level of detail determines the number of activities contained within the project plan and schedule. Finding feasible schedules that use scarce resources efficiently is therefore a challenging task within project management. In this context, the well-known Resource-Constrained Project Scheduling Problem (RCPSP) has been studied over the last decades. In the RCPSP, the activities of a project have to be scheduled such that the makespan of the project is minimized, observing both the technological precedence constraints and the limited availability of the renewable resources required to accomplish the activities. Once started, an activity may not be interrupted. This problem has been extended to a more realistic model, the multi-mode resource-constrained project scheduling problem (MRCPSP), in which each activity can be performed in one of several modes. Each mode of an activity represents an alternative way of combining different levels of resource requirements with a related duration. Each renewable resource, such as manpower and machines, has a limited availability for the entire project. This paper presents a hybrid genetic algorithm for the multi-mode resource-constrained project scheduling problem, in which multiple execution modes are available for each of the activities of the project. The objective function is the minimization of the construction project completion time. To solve the problem, a two-level genetic algorithm is applied, which makes use of two separate levels and extends the parameterized schedule generation scheme. The quality of the schedules is evaluated, and detailed comparative computational results for the MRCPSP are presented, revealing that this approach is a competitive algorithm.
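The schedule generation scheme that such a genetic algorithm parameterizes can be illustrated, in single-mode form, by a serial SGS: activities are taken in chromosome (priority) order and placed at the earliest precedence- and resource-feasible start time. Below is a toy single-resource sketch with a made-up four-activity instance; the full MRCPSP decoder would additionally choose a mode per activity:

```python
def serial_sgs(durations, demands, precedences, capacity, priority):
    """Serial schedule generation scheme for a toy single-resource RCPSP.
    durations[j], demands[j]: duration and resource demand of activity j;
    precedences: {activity: [predecessors]}; priority: precedence-feasible
    activity order (the part a GA chromosome would supply).
    Returns (start_times, makespan)."""
    n = len(durations)
    start = {}
    horizon = sum(durations) + max(durations)
    usage = [0] * (horizon + 1)          # resource usage per time slot
    for j in priority:
        # earliest start respecting all predecessors' finish times
        est = max((start[p] + durations[p]
                   for p in precedences.get(j, [])), default=0)
        t = est
        # shift right until the whole duration fits under the capacity
        while not all(usage[t + u] + demands[j] <= capacity
                      for u in range(durations[j])):
            t += 1
        for u in range(durations[j]):
            usage[t + u] += demands[j]
        start[j] = t
    makespan = max(start[j] + durations[j] for j in range(n))
    return start, makespan

# Made-up instance: 4 activities, one renewable resource of capacity 3.
durations = [3, 2, 2, 1]
demands = [2, 2, 1, 2]
precedences = {2: [0], 3: [1, 2]}
start, makespan = serial_sgs(durations, demands, precedences,
                             capacity=3, priority=[0, 1, 2, 3])
```

In the two-level GA described above, one level would evolve such priority orders (plus mode assignments) while the SGS decodes each chromosome into a feasible schedule whose makespan serves as the fitness.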