949 results for Hydrothermal generation scheduling


Relevance:

30.00%

Publisher:

Abstract:

Rare-earth carbonates, specifically hydroxide carbonates or oxide carbonate hydrates, can be prepared on common glass by a hydrothermal process involving thiourea. Examples presented in this paper include LaOHCO3, CeOHCO3 and Eu2O(CO3)2·H2O structures formed on glass from solutions of thiourea and the relevant rare-earth reactants. The crystal structure and habit on the substrates depended on the preparative conditions; the influence of reactant concentrations and temperature on the crystal morphologies is illustrated. Second harmonic generation was found to occur in the crystals.

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a new multi-depot combined vehicle and crew scheduling algorithm and uses it, in conjunction with a heuristic vehicle routing algorithm, to solve the intra-city mail distribution problem faced by Australia Post. First we describe the Australia Post mail distribution problem and outline the heuristic vehicle routing algorithm used to find vehicle routes. We then present a new multi-depot combined vehicle and crew scheduling algorithm based on set covering with column generation. The paper concludes with a computational investigation on real-life data for Australia Post distribution networks that examines the effect of different types of vehicle routing solutions on the vehicle and crew scheduling solution, compares the levels of integration possible with the new algorithm, and compares sequential versus simultaneous vehicle and crew scheduling.
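As a toy illustration of the set-covering core of such crew scheduling models, the sketch below enumerates a small pool of candidate crew duties and selects a minimum-cost subset covering every vehicle task. The duty pool, costs, and data are invented for the example, and the paper's algorithm prices new duties dynamically via column generation rather than enumerating them up front as done here.

```python
# Toy set-covering crew scheduling model (illustrative only).
# The real algorithm generates duty columns on the fly (column generation);
# here a small duty pool is enumerated up front for brevity.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, value

tasks = range(5)                      # vehicle tasks to be crewed
duties = {                            # candidate duty -> tasks it covers
    "d1": {0, 1}, "d2": {1, 2, 3}, "d3": {0, 3},
    "d4": {2, 4}, "d5": {3, 4}, "d6": {0, 2, 4},
}
cost = {"d1": 3, "d2": 5, "d3": 4, "d4": 4, "d5": 3, "d6": 6}

prob = LpProblem("crew_set_covering", LpMinimize)
x = {d: LpVariable(f"x_{d}", cat="Binary") for d in duties}
prob += lpSum(cost[d] * x[d] for d in duties)          # total duty cost
for t in tasks:                                        # every task covered
    prob += lpSum(x[d] for d in duties if t in duties[d]) >= 1, f"cover_{t}"

prob.solve()
print("selected duties:", [d for d in duties if x[d].value() == 1],
      "cost:", value(prob.objective))
```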

Relevance:

30.00%

Publisher:

Abstract:

Next-generation integrated wireless local area network (WLAN) and 3G cellular networks aim to combine the roaming ability of a cellular network with the high data rates of a WLAN. To implement an integrated network successfully, many issues must be carefully addressed, including network architecture design, resource management, quality of service (QoS), call admission control (CAC) and mobility management.

This dissertation focuses on QoS provisioning, CAC, and network architecture design for the integration of WLANs and cellular networks. First, a new scheduling algorithm and a call admission control mechanism for IEEE 802.11 WLANs are presented to support multimedia services with QoS provisioning. The proposed scheduling algorithm exploits idle system time to reduce the average packet loss of real-time (RT) services. The admission control mechanism provides long-term transmission quality for both RT and non-real-time (NRT) services by bounding the packet loss ratio for RT services and guaranteeing throughput for NRT services.

A joint CAC scheme is proposed to efficiently balance traffic load in the integrated environment. A channel searching and replacement (CSR) algorithm is developed to relieve traffic congestion in the cellular network by using idle channels in the WLAN. CSR is optimized to minimize the system cost in terms of blocking probability in the interworking environment. Specifically, it is proved that there exists an optimal admission probability for passive handoffs that minimizes the total system cost, and a method for finding this probability is designed based on linear-programming techniques.

Finally, a new integration architecture, Hybrid Coupling with Radio Access System (HCRAS), is proposed to lower the average cost of intersystem communication (IC) and the vertical handoff latency. An analytical model is presented to evaluate the performance of HCRAS in terms of an intersystem communication cost function and a handoff cost function. Based on this model, an algorithm is designed to determine the optimal route for each intersystem communication, and a fast handoff algorithm is developed to reduce vertical handoff latency.
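As a rough illustration of this style of QoS-aware admission control (not the dissertation's actual mechanism), the check below admits a new real-time flow only if the projected RT packet-loss ratio and the capacity left for NRT traffic both stay within their targets. All thresholds and the load model are invented for the example.

```python
# Toy QoS admission check (illustrative; thresholds and the load model
# are invented, not the dissertation's actual mechanism).
def admit_rt_flow(rt_load, nrt_load, new_rt_load, capacity=1.0,
                  rt_loss_bound=0.01, nrt_min_fraction=0.5):
    """Admit a new real-time (RT) flow only if QoS targets still hold."""
    total_rt = rt_load + new_rt_load
    # Crude loss proxy: RT traffic offered beyond capacity is lost.
    projected_loss = max(0.0, total_rt - capacity) / max(total_rt, 1e-9)
    # NRT services keep whatever capacity the RT traffic leaves over;
    # require them to retain at least a fraction of their offered load.
    leftover = max(0.0, capacity - total_rt)
    return (projected_loss <= rt_loss_bound
            and leftover >= nrt_min_fraction * nrt_load)

print(admit_rt_flow(rt_load=0.5, nrt_load=0.3, new_rt_load=0.2))  # True
print(admit_rt_flow(rt_load=0.7, nrt_load=0.3, new_rt_load=0.2))  # False
```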

Relevance:

30.00%

Publisher:

Abstract:

This research is motivated by a practical application observed at a printed circuit board (PCB) manufacturing facility. After assembly, the PCBs (or jobs) are tested in environmental stress screening (ESS) chambers (or batch processing machines) to detect early failures. Several PCBs can be tested simultaneously as long as the total size of all the PCBs in the batch does not exceed the chamber capacity. PCBs from different production lines arrive dynamically to a queue in front of a set of identical ESS chambers, where they are grouped into batches for testing. Each line delivers PCBs that vary in size and require different testing (or processing) times. Once a batch is formed, its processing time is the longest processing time among the PCBs in the batch, and its ready time is given by the PCB arriving last to the batch. ESS chambers are expensive and constitute a bottleneck; consequently, the makespan has to be minimized.

A mixed-integer formulation is proposed for the problem under study and compared to a recently published formulation. The proposed formulation is better in terms of the number of decision variables, linear constraints and run time. A procedure to compute a lower bound is proposed. For sparse problems (i.e., when job ready times are widely dispersed), the lower bounds are close to optimum.

The problem under study is NP-hard. Consequently, five heuristics, two metaheuristics (simulated annealing (SA) and a greedy randomized adaptive search procedure (GRASP)), and a decomposition approach (column generation) are proposed, especially to solve problem instances that require prohibitively long run times when a commercial solver is used. An extensive experimental study was conducted to evaluate the different solution approaches based on solution quality and run time.

The decomposition approach improved the lower bounds (the linear relaxation solution) of the mixed-integer formulation. At least one of the proposed heuristics outperforms the Modified Delay heuristic from the literature. For sparse problems, almost all the heuristics report a solution close to optimum. GRASP outperforms SA at a higher computational cost. The proposed approaches are viable to implement, as their run times are very short.
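To make the batching idea concrete, here is a minimal greedy heuristic in the spirit of, but not identical to, the heuristics studied: jobs are taken in ready-time order, packed first-fit into capacity-feasible batches, and each batch is assigned to the earliest-available chamber. A batch starts once its last job has arrived and runs for its longest processing time. All data are invented.

```python
# Minimal greedy batching heuristic for parallel batch-processing machines
# (illustrative sketch, not one of the paper's five heuristics).
import heapq

def greedy_batch_makespan(jobs, capacity, n_machines):
    """jobs: list of (ready_time, size, proc_time). Returns makespan."""
    jobs = sorted(jobs)                       # take jobs in ready-time order
    batches, current, used = [], [], 0
    for r, s, p in jobs:
        if used + s > capacity:               # close batch when it is full
            batches.append(current)
            current, used = [], 0
        current.append((r, s, p))
        used += s
    if current:
        batches.append(current)

    machines = [0.0] * n_machines             # next-free time per chamber
    heapq.heapify(machines)
    makespan = 0.0
    for batch in batches:
        ready = max(r for r, _, _ in batch)   # last-arriving job sets start
        proc = max(p for _, _, p in batch)    # longest job sets duration
        free = heapq.heappop(machines)
        finish = max(free, ready) + proc
        heapq.heappush(machines, finish)
        makespan = max(makespan, finish)
    return makespan

jobs = [(0, 2, 5), (1, 3, 4), (2, 4, 6), (3, 2, 3), (5, 5, 7)]
print(greedy_batch_makespan(jobs, capacity=6, n_machines=2))  # -> 13.0
```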

Relevance:

30.00%

Publisher:

Abstract:

Buffered crossbar switches have recently attracted considerable attention as the next generation of high-speed interconnects. They are a special type of crossbar switch with an exclusive buffer at each crosspoint of the crossbar, and they demonstrate unique advantages over traditional unbuffered crossbar switches, such as high throughput, low latency, and asynchronous packet scheduling. However, since crosspoint buffers are expensive on-chip memories, it is desirable that each crosspoint have only a small buffer. This dissertation proposes a series of practical algorithms and techniques for efficient packet scheduling in buffered crossbar switches. To reduce the hardware cost of such switches and make them scalable, we considered partially buffered crossbars, whose crosspoint buffers can be of arbitrarily small size. Firstly, we introduced a hybrid scheme called the Packet-mode Asynchronous Scheduling Algorithm (PASA) to schedule best-effort traffic. PASA combines the features of both distributed and centralized scheduling algorithms and can directly handle variable-length packets without Segmentation And Reassembly (SAR). We showed by theoretical analysis that it achieves 100% throughput for any admissible traffic in a crossbar with a speedup of two. Moreover, outputs in PASA are highly likely to avoid the more time-consuming centralized scheduling process and thus make fast scheduling decisions. Secondly, we proposed the Fair Asynchronous Segment Scheduling (FASS) algorithm to handle guaranteed-performance traffic with explicit flow rates. FASS reduces the crosspoint buffer size by dividing packets into shorter segments before transmission. It also provides tight constant performance guarantees by emulating the ideal Generalized Processor Sharing (GPS) model. Furthermore, FASS requires no speedup for the crossbar, lowering the hardware cost and improving the switch capacity. Thirdly, we presented a bandwidth allocation scheme called Queue Length Proportional (QLP) to apply FASS to best-effort traffic. QLP dynamically obtains a feasible bandwidth allocation matrix based on queue length information, and thus helps the crossbar switch to be more work-conserving. The feasibility and stability of QLP were proved for both uniform and non-uniform traffic distributions. Hence, based on the bandwidth allocation of QLP, FASS can also achieve 100% throughput for best-effort traffic in a crossbar without speedup.
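Below is a sketch of one simple way to turn queue lengths into a feasible (doubly substochastic) rate matrix, in the spirit of the Queue Length Proportional idea. The normalization rule used here is an assumption for illustration, not necessarily the dissertation's exact scheme.

```python
# Queue-length-proportional bandwidth allocation (illustrative sketch).
# Scaling each entry by the larger of its row and column sums guarantees
# every input and output is loaded at most 1.0 (a feasible allocation).
def qlp_allocate(q):
    n = len(q)
    row = [sum(q[i]) for i in range(n)]                       # input loads
    col = [sum(q[i][j] for i in range(n)) for j in range(n)]  # output loads
    return [[q[i][j] / max(row[i], col[j], 1)                 # avoid /0
             for j in range(n)] for i in range(n)]

q = [[4, 0, 2],    # q[i][j]: queue length from input i to output j
     [0, 3, 3],
     [1, 1, 0]]
for r in qlp_allocate(q):
    print([round(v, 3) for v in r])   # every row/column sums to <= 1.0
```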

Relevance:

30.00%

Publisher:

Abstract:

The main goal of this work is to determine the true cost incurred by the Republic of Ireland and Northern Ireland in meeting their EU renewable electricity targets. The primary all-island policy goal is that 40% of electricity will come from renewable sources in 2020, from which wind generation on the Irish electricity system is expected to be in the region of 32-37% of total generation. This raises issues resulting from wind energy being a non-synchronous, unpredictable and variable energy source, used on a scale never before seen on a single synchronous system. If traditional operational practices are not changed, the efficient running of the electricity system will be directly affected by these issues in the coming years. Using models of the electricity system for the all-island grid of Ireland, the effects of the high wind energy penetration expected in 2020 are examined. These models were developed in PLEXOS, a unit commitment and economic dispatch tool that allows the electricity system to be represented in detail, down to individual generator level. The models replicate the true running of the electricity system through day-ahead scheduling and semi-relaxed use of these schedules, reflecting the Transmission System Operator's real-time dispatch decisions; they also carefully consider other non-wind priority-dispatch generation technologies that affect the overall system. Three main issues associated with wind energy integration were selected for detailed examination, to determine the sensitivity of assumptions presented in other studies: wind energy's non-synchronous nature, its variability and spatial correlation, and its unpredictability. These lead to an examination of effects in three corresponding areas: the system operation constraints required for system security; different onshore-to-offshore ratios of installed wind energy; and the accuracy of wind energy forecasting. Assumptions in these three areas are shown to have a large effect on the results in terms of total generation costs, wind curtailment and the mix of generator technologies dispatched. In particular, accounting for these issues results in predicted wind curtailment much larger than previously reported, which would strongly affect wind energy companies in what is already a very low-margin industry. Results from this work show that relaxing system operation constraints is crucial to the economic running of the electricity system, with large reductions in wind curtailment and system generation costs. There are clear benefits to installing a proportion of Ireland's wind capacity offshore, which would reduce the variability of wind generation on the system and therefore reduce wind curtailment. With envisaged improvements in day-ahead wind forecasting from 8% to 4% mean absolute error, there are potential reductions in wind curtailment, system costs and open-cycle gas turbine usage.

This work illustrates the consequences of assumptions about system operation constraints, onshore/offshore installed wind capacities and wind forecasting accuracy, to better inform the true costs of running Ireland's changing electricity system as it continues to decarbonise. Using Ireland as a case study, it also illustrates effects that will become ever more prevalent in other synchronous systems as they pursue increasing renewable energy generation.
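As a stylized example of how a single system operation constraint drives curtailment, the snippet below dispatches wind against demand under a cap on instantaneous non-synchronous penetration (an SNSP-style limit of the kind applied on the Irish system). The 65% cap and the demand and wind numbers are illustrative only; the study's PLEXOS unit commitment models capture far more detail.

```python
# Stylized hourly wind dispatch under a non-synchronous penetration cap
# (illustrative; the study's PLEXOS models handle many more constraints).
def dispatch(demand, wind_avail, snsp_cap=0.65):
    """Return (wind_used, curtailed) for one hour."""
    wind_limit = snsp_cap * demand          # binding system constraint
    wind_used = min(wind_avail, wind_limit, demand)
    return wind_used, wind_avail - wind_used

hours = [(4000, 1500), (3500, 2800), (3000, 2600)]  # (MW demand, MW wind)
for demand, wind in hours:
    used, curtailed = dispatch(demand, wind)
    print(f"demand {demand} MW: wind used {used:.0f}, curtailed {curtailed:.0f}")
```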

Relevance:

30.00%

Publisher:

Abstract:

The hydrothermal liquefaction (HTL) of algal biomass is a promising route to viable second-generation biofuels. In this investigation HTL was assessed for the valorisation of algae used in the remediation of acid mine drainage (AMD). Initially the HTL process was evaluated using Arthrospira platensis (Spirulina) with added metal sulphates to simulate metal remediation. Optimised conditions were then used to process a natural algal community (predominantly Chlamydomonas sp.) cultivated under two scenarios: high uptake and low uptake of metals from AMD. High metal concentrations appear to catalyse the conversion to bio-oil and do not significantly affect the heteroatom content or higher heating value of the bio-oil produced. The associated metals were found to partition almost exclusively into the solid residue, which is favourable for potential metal recovery. High metal loadings also caused partitioning of phosphates from the aqueous phase to the solid phase, potentially compromising attempts to recycle process water as a growth supplement. HTL was therefore found to be a suitable method for processing algae used in AMD remediation, producing a crude oil suitable for upgrading into hydrocarbon fuels, aqueous and gas streams suitable for supplementing algal growth, and a solid residue containing most of the contaminant metals, where they are readily amenable to recovery and/or disposal.

Relevance:

30.00%

Publisher:

Abstract:

This paper compares two linear programming (LP) models for shift scheduling in services where homogeneously skilled employees are available at limited times. Although both models are based on set-covering approaches, one explicitly matches employees to shifts, while the other imposes this matching implicitly. Each model is used in three forms: with complete meal-break placement flexibility, with very limited meal-break placement flexibility, and without meal breaks. Each form provides initial schedules to a completion/improvement heuristic, i.e., a construction/improvement heuristic operating on a starting schedule. On 80 test problems varying widely in scheduling flexibility, employee staffing requirements, and employee availability characteristics, all six LP-based procedures generated lower-cost schedules than a comparison from-scratch construction/improvement heuristic. This heuristic, which perpetually maintains an explicit matching of employees to shifts, consists of three phases which add, drop, and modify shifts. In terms of schedule cost, schedule generation time, and model size, the procedures based on the implicit model performed better, as a group, than those based on the explicit model. The LP model with complete break placement flexibility that implicitly matches employees to shifts generated schedules costing 6.7% less than those developed by the from-scratch heuristic.
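The sketch below gives a toy version of the explicit set-covering form: binary variables assign employees to shifts they are available for, and staffing requirements in each period must be covered. The shifts, availabilities, and costs are invented, and the paper's implicit model would aggregate employees rather than match them individually as done here.

```python
# Toy explicit shift-scheduling model: employees matched to shifts
# (illustrative; the paper's implicit model avoids this matching).
from pulp import LpProblem, LpMinimize, LpVariable, lpSum

periods = range(4)
shifts = {"early": {0, 1}, "mid": {1, 2}, "late": {2, 3}}   # periods covered
require = {0: 1, 1: 2, 2: 2, 3: 1}                          # staff needed
employees = {"A": {"early", "mid"}, "B": {"mid", "late"},   # availability
             "C": {"early", "late"}}

prob = LpProblem("shift_scheduling", LpMinimize)
x = {(e, s): LpVariable(f"x_{e}_{s}", cat="Binary")
     for e in employees for s in employees[e]}
prob += lpSum(x.values())                                    # minimize staffing
for t in periods:                                            # cover demand
    prob += lpSum(x[e, s] for (e, s) in x if t in shifts[s]) >= require[t]
for e in employees:                                          # one shift each
    prob += lpSum(x[e, s] for s in employees[e]) <= 1

prob.solve()
print([(e, s) for (e, s) in x if x[e, s].value() == 1])
```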

Relevance:

30.00%

Publisher:

Abstract:

Scheduling problems are generally NP-hard combinatorial problems, and much research has been done to solve them heuristically. However, most previous approaches are problem-specific, and research into the development of a general scheduling algorithm is still in its infancy. Mimicking the natural evolutionary process of the survival of the fittest, Genetic Algorithms (GAs) have attracted much attention for solving difficult scheduling problems in recent years. Some obstacles exist when using GAs: there is no canonical mechanism to deal with constraints, which are commonly met in most real-world scheduling problems, and small changes to a solution are difficult. To overcome both difficulties, indirect approaches have been presented (in [1] and [2]) for nurse scheduling and driver scheduling, where GAs search a mapped solution space and separate decoding routines then build solutions to the original problem.

In our previous indirect GAs, learning is implicit and restricted to the efficient adjustment of weights for a set of rules that are used to construct schedules. The major limitation of those approaches is that they learn in a non-human way: like most existing construction algorithms, once the best weight combination is found, the rules used in the construction process are fixed at each iteration. However, a long sequence of moves is normally needed to construct a schedule, and using fixed rules at each move is thus unreasonable and not coherent with human learning processes. When a human scheduler is working, he normally builds a schedule step by step following a set of rules. After much practice, the scheduler gradually masters the knowledge of which solution parts go well with others; he can identify good parts and is aware of the solution quality even before the scheduling process is complete, and thus has the ability to finish a schedule by using flexible, rather than fixed, rules.

In this research we intend to design more human-like scheduling algorithms, using ideas derived from Bayesian Optimization Algorithms (BOA) and Learning Classifier Systems (LCS) to implement explicit learning from past solutions. BOA can be applied to learn to identify good partial solutions and to complete them by building a Bayesian network of the joint distribution of solutions [3]. A Bayesian network is a directed acyclic graph with each node corresponding to one variable; here each variable corresponds to an individual rule by which a schedule is constructed step by step. The conditional probabilities are computed according to an initial set of promising solutions. Subsequently, each new instance for each node is generated using the corresponding conditional probabilities, until values for all nodes have been generated. Another set of rule strings is generated in this way, some of which replace previous strings based on fitness selection. If stopping conditions are not met, the Bayesian network is updated again using the current set of good rule strings. The algorithm thereby tries to explicitly identify and mix promising building blocks. It should be noted that for most scheduling problems the structure of the network model is known and all the variables are fully observed; in this case, the goal of learning is to find the rule values that maximize the likelihood of the training data, so learning can amount to 'counting' in the case of multinomial distributions.

In the LCS approach, each rule has a strength showing its current usefulness in the system, and this strength is constantly assessed [4]. To implement sophisticated learning based on previous solutions, an improved LCS-based algorithm is designed, consisting of three steps. The initialization step assigns each rule at each stage a constant initial strength; rules are then selected using the roulette-wheel strategy. The reinforcement step strengthens the rules used in the previous solution, keeping the strengths of unused rules unchanged. The selection step selects fitter rules for the next generation. It is envisaged that the LCS part of the algorithm will be used as a hill climber for the BOA algorithm. This is exciting and ambitious research, which might provide the stepping-stone for a new class of scheduling algorithms. Data sets from nurse scheduling and mall problems will be used as test-beds. It is envisaged that once the concept has been proven successful, it will be implemented into general scheduling algorithms. It is also hoped that this research will give some preliminary answers about how to include human-like learning in scheduling algorithms, and may therefore be of interest to researchers and practitioners in the areas of scheduling and evolutionary computation.

References: 1. Aickelin, U. and Dowsland, K. (2003) 'Indirect Genetic Algorithm for a Nurse Scheduling Problem', Computers & Operations Research (in press). 2. Li, J. and Kwan, R.S.K. (2003) 'Fuzzy Genetic Algorithm for Driver Scheduling', European Journal of Operational Research 147(2): 334-344. 3. Pelikan, M., Goldberg, D. and Cantu-Paz, E. (1999) 'BOA: The Bayesian Optimization Algorithm', IlliGAL Report No. 99003, University of Illinois. 4. Wilson, S. (1994) 'ZCS: A Zeroth-level Classifier System', Evolutionary Computation 2(1): 1-18.
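A minimal sketch of the three-step LCS strength-update loop described above, with roulette-wheel rule selection and reinforcement of the rules used in the best solution so far; the fitness function and all constants are placeholders invented for the example.

```python
# Minimal sketch of the LCS-style loop described above: constant initial
# strengths, roulette-wheel rule selection, reinforcement of used rules.
# The fitness function and all constants are placeholders.
import random

def build_schedule(rule_sequence):
    """Placeholder: score a schedule built by applying rules in order."""
    return -sum((r - 1) ** 2 for r in rule_sequence)   # toy fitness

N_RULES, N_STAGES, REWARD = 4, 6, 0.5
# One strength per (stage, rule), all initialized to the same constant.
strength = [[1.0] * N_RULES for _ in range(N_STAGES)]

best_seq, best_fit = None, float("-inf")
for generation in range(50):
    # Roulette-wheel selection: pick one rule per construction stage.
    seq = [random.choices(range(N_RULES), weights=strength[s])[0]
           for s in range(N_STAGES)]
    fit = build_schedule(seq)
    if fit > best_fit:
        best_seq, best_fit = seq, fit
    # Reinforce the rules used in the best solution; others unchanged.
    for s, r in enumerate(best_seq):
        strength[s][r] += REWARD

print("best rule sequence:", best_seq, "fitness:", best_fit)
```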

Relevance:

30.00%

Publisher:

Abstract:

Deep-sea hydrothermal-vent habitats are typically linear, discontinuous, and short-lived. Some of the vent fauna, such as the endemic polychaete family Alvinellidae, are thought to lack a planktotrophic larval stage and therefore not to broadcast-release their offspring. The genetic evidence points to exchanges on a scale that seems to contradict this type of reproductive pattern. However, the rift valley may topographically rectify the bottom currents, thereby facilitating the dispersal of propagules between active vent sites separated in some cases by tens of kilometers or more along the ridge axis. A propagule flux model based on a matrix of intersite distances, long-term current-meter data, and information on the biology and ecology of Alvinellidae was developed to test this hypothesis. Calculations of the number of migrants exchanged between two populations per generation (Nm) allowed comparisons with estimates obtained from genetic studies. Nm displays a logarithmic decrease with increasing dispersal duration and reaches the critical value of 1 after 8 d when the propagule flux model is run under standard conditions. At most, propagule traveling time cannot reasonably exceed 15-30 d according to the model, whereas reported distances between sites would require longer-lasting dispersal abilities. Two nonexclusive explanations are proposed. First, some aspects of the biology of Alvinellidae have been overlooked and long-distance dispersal does occur. Second, such dispersal never occurs in Alvinellidae, but the spatial-temporal dynamics of vent sites over geological timescales allows short-range dispersal processes to maintain gene flow.
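To illustrate the kind of calculation such a flux model performs, here is a deliberately crude caricature (the paper's actual model structure is not reproduced here): travel time is derived from intersite distance and current speed, larval survival decays exponentially, and site pairs whose migrant estimate falls below the Nm = 1 threshold are flagged. The release size, mortality rate, and current speed are invented.

```python
# Caricature of a propagule-flux calculation (the paper's model is more
# detailed): travel time from distance and current speed, exponential
# larval survival, and a migrants-per-generation (Nm) estimate.
import math

def migrants_per_generation(distance_km, current_km_per_day,
                            released=1e6, daily_mortality=0.9):
    travel_days = distance_km / current_km_per_day
    survival = math.exp(-daily_mortality * travel_days)
    return released * survival, travel_days

for d in (10, 50, 100, 200):                     # intersite distances (km)
    nm, days = migrants_per_generation(d, current_km_per_day=10)
    ok = "exchange plausible" if nm >= 1 else "below Nm = 1 threshold"
    print(f"{d:4d} km: {days:5.1f} d travel, Nm ~ {nm:10.2f} ({ok})")
```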

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a stochastic mixed-integer linear programming approach for solving the self-scheduling problem of a price-taker thermal and wind power producer taking part in a pool-based electricity market. Uncertainty in electricity prices and wind power is considered through a set of scenarios. Thermal units are modeled by variable costs, start-up costs and technical operating constraints, such as ramp-up/down limits and minimum up/down times. An efficient mixed-integer linear program is presented to develop offering strategies for the coordinated production of thermal and wind energy, aiming to maximize the expected profit. A case study with data from the Iberian Electricity Market is presented and the results are discussed to show the effectiveness of the proposed approach.
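Below is a stripped-down, single-period version of the scenario idea; the paper's model adds commitment variables, start-up costs and ramp constraints, none of which appear here. The thermal output is a here-and-now decision, wind and price vary by scenario, and with a linear cost the expected-profit-maximizing schedule reduces to a threshold rule on the expected price. All numbers are invented.

```python
# Single-period, price-taker toy of scenario-based self-scheduling
# (the paper's MILP adds unit commitment, start-up costs, ramps, etc.).
scenarios = [                       # (probability, price EUR/MWh, wind MWh)
    (0.3, 70.0, 120.0),
    (0.5, 45.0, 200.0),
    (0.2, 20.0, 260.0),
]
P_MAX, MARGINAL_COST = 300.0, 40.0  # thermal unit data (invented)

# With a linear cost and a fixed here-and-now output, expected profit is
# linear in the thermal schedule, so the optimum is a threshold rule.
expected_price = sum(pr * price for pr, price, _ in scenarios)
thermal = P_MAX if expected_price > MARGINAL_COST else 0.0

expected_profit = sum(
    pr * (price * (thermal + wind) - MARGINAL_COST * thermal)
    for pr, price, wind in scenarios
)
print(f"E[price] = {expected_price:.2f}, schedule thermal at {thermal:.0f} MWh")
print(f"expected profit: {expected_profit:.2f} EUR")
```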

Relevance:

20.00%

Publisher:

Abstract:

Protocols for the generation of dendritic cells (DCs) using serum-supplemented culture media can lead to reactions caused by animal proteins and to disease transmission. Several types of serum-free media (SFM), produced under good manufacturing practice (GMP), have recently been used and seem to be a viable option. The aim of this study was to evaluate the differentiation, maturation, and function of DCs from acute myeloid leukemia (AML) patients, generated in SFM and in medium supplemented with autologous serum (AS). DCs were analyzed for phenotype, viability, and functionality. The results showed that viable DCs could be generated under all the conditions tested. In patients, the X-VIVO 15 medium was more efficient than the other media tested in generating DCs that produce IL-12p70 (p=0.05). Moreover, the presence of AS led to a significant increase in IL-10 production by DCs as compared with the CellGro (p=0.05) and X-VIVO 15 (p=0.05) media, in both patients and donors. We conclude that SFM is efficient for the production of DCs for immunotherapy in AML patients; however, the use of AS appears to interfere with the functional capacity of the generated DCs.

Relevance:

20.00%

Publisher:

Abstract:

We report the observation of multiple harmonic generation in electric dipole spin resonance in an InAs nanowire double quantum dot. The harmonics display a remarkable detuning dependence: near the interdot charge transition as many as eight harmonics are observed, while at large detunings we only observe the fundamental spin resonance condition. The detuning dependence indicates that the observed harmonics may be due to Landau-Zener transition dynamics at anticrossings in the energy level spectrum.