889 results for Production lot-scheduling models


Relevance:

30.00%

Publisher:

Abstract:

Based on detailed reconstructions of global distribution patterns, both paleoproductivity and the benthic δ13C record of CO2 dissolved in the deep ocean differed strongly between the Last Glacial Maximum and the Holocene. With the onset of Termination I about 15,000 years ago, the new (export) production of low- and mid-latitude upwelling cells began to decline by more than 2-4 Gt carbon/year. This reduction is regarded as a main factor leading both to the simultaneous rise in atmospheric CO2 recorded in ice cores and, with a delay of more than 1,000 years, to a large-scale gradual CO2 depletion of the deep ocean by about 650 Gt C. This estimate is based on an average increase in benthic δ13C of 0.4-0.5 per mil. The decrease in new production also matches a clear 13C depletion of organic matter, possibly recording the end of extreme nutrient utilization in upwelling cells. As shown by Sarnthein et al. [1987], the productivity reversal appears to have been triggered by a rapid weakening of the meridional trades, which in turn was linked, via a shrinking extent of sea ice, to a massive increase in high-latitude insolation, i.e., to orbital forcing as the primary cause.

Relevance:

30.00%

Publisher:

Abstract:

Net ecosystem calcification rates (NEC) and net photosynthesis (NP) were determined from CO2 seawater parameters on the barrier coral reef of Kaneohe Bay, Oahu, Hawaii. Autosamplers were deployed to collect samples on the barrier reef every 2 hours for six 48-hour deployments, two each in June 2008, August 2009, and January/February 2010. NEC on the Kaneohe Bay barrier reef increased throughout the day and decreased at night. Net calcification continued at low rates at night except for six time periods when net dissolution was measured. The barrier reef was generally net photosynthetic (positive NP) during the day and net respiring (negative NP) at night. NP controlled the diel cycles of the partial pressure of CO2 (pCO2) and aragonite saturation state, resulting in high daytime aragonite saturation state levels when calcification rates were at their peak. However, the NEC and NP diel cycles can become decoupled for short periods of time (several hours) without affecting calcification rates. On a net daily basis, net ecosystem production (NEP) of the barrier reef was sometimes net photosynthetic and sometimes net respiring, ranging from -378 to 80 mmol m-2 d-1 when calculated using simple box models. Daily NEC of the barrier reef was positive (net calcification) for all deployments and ranged from 174 to 331 mmol CaCO3 m-2 d-1. Daily NEC was strongly negatively correlated with average daily pCO2 (R2 = 0.76), which ranged from 431 to 622 µatm. Daily NEC of the Kaneohe Bay barrier reef is similar to or higher than daily NEC measured on other coral reefs even though aragonite saturation state levels (mean Ωarag = 2.85) are some of the lowest measured in coral reef ecosystems. It appears that while calcification rate and Ωarag are correlated within a single coral reef ecosystem, this relationship does not necessarily hold between different coral reef systems. It can be expected that ocean acidification will not affect coral reefs uniformly and that some may be more sensitive to increasing pCO2 levels than others.
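The negative NEC-pCO2 relationship reported above can be sketched with an ordinary least-squares fit. The deployment averages below are invented for illustration, not the study's measurements:

```python
# Hedged sketch: least-squares fit of daily net ecosystem calcification
# (NEC) against mean daily pCO2, mirroring the negative correlation
# reported for the Kaneohe Bay barrier reef. Data points are hypothetical.

def ols(x, y):
    """Return slope, intercept, and R^2 of a simple linear regression."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return slope, intercept, 1 - ss_res / ss_tot

# Hypothetical deployment averages: pCO2 (uatm) vs NEC (mmol CaCO3 m-2 d-1)
pco2 = [431, 470, 520, 560, 600, 622]
nec = [331, 300, 260, 230, 200, 174]

slope, intercept, r2 = ols(pco2, nec)
print(round(slope, 3), round(r2, 2))
```

A negative slope with a high R^2 reproduces the qualitative pattern of the study; the actual coefficients depend on the real deployment data.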

Relevance:

30.00%

Publisher:

Abstract:

In recent years the term Collaborative Economy has become popular without, so far, being defined unambiguously. The label covers experiences as diverse as time banks, urban gardens, startups and large digital platforms. The proliferation of such initiatives can be related to multiple factors, including technological development, the economic recession and other overlapping crises (environmental, of care, of values, of the political), and a certain shift in social values. Between 2014 and 2015, two investigations were carried out in Andalusia almost in parallel and with a similar methodology. The first aimed to identify Collaborative Economy practices in the university environment. The second identified entrepreneurship experiences at the regional level. In light of the results obtained, the following question arises about the very nature of the Collaborative Economy: are we facing postcapitalist practices that open the way to a fairer, more egalitarian society, or rather a response of capital that, once again, seeks to privately extract the value that is generated socially? Starting from the analysis of the set of initiatives identified in Andalusia, this article focuses on those based on free software and digital production, concluding that, thanks to the incorporation of certain aspects of the hacker ethic and the logics of open knowledge, they can be situated within a scenario that fosters the global commons against the prevailing logics of netarchical capitalism.

Relevance:

30.00%

Publisher:

Abstract:

SILVA, Flávio César Bezerra da; COSTA, Francisca Marta de Lima; ANDRADE, Hamilton Leandro Pinto de; FREIRE, Lúcia de Fátima; MACIEL, Patrícia Suerda de Oliveira; ENDERS, Bertha Cruz; MENEZES, Rejane Maria Paiva de. Paradigms that guide the models of attention to the health in Brazil: an analytic essay. Revista de Enfermagem UFPE On Line, Recife, v. 3, n. 4, p. 460-65, out./dez. 2009. Available at: <http://www.ufpe.br/revistaenfermagem/index.php/revista/search/results>.

Relevance:

30.00%

Publisher:

Abstract:

This paper compares two linear programming (LP) models for shift scheduling in services where homogeneously-skilled employees are available at limited times. Although both models are based on set covering approaches, one explicitly matches employees to shifts, while the other imposes this matching implicitly. Each model is used in three forms—one with complete meal break placement flexibility, another with very limited flexibility, and a third without meal breaks—to provide initial schedules to a completion/improvement heuristic. The term completion/improvement heuristic describes a construction/improvement heuristic operating on a starting schedule. On 80 test problems varying widely in scheduling flexibility, employee staffing requirements, and employee availability characteristics, all six LP-based procedures generated lower cost schedules than a comparison from-scratch construction/improvement heuristic. This heuristic, which perpetually maintains an explicit matching of employees to shifts, consists of three phases which add, drop, and modify shifts. In terms of schedule cost, schedule generation time, and model size, the procedures based on the implicit model performed better, as a group, than those based on the explicit model. The LP model with complete break placement flexibility and implicit matching of employees to shifts generated schedules costing 6.7% less than those developed by the from-scratch heuristic.
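The set covering idea underlying both LP models can be shown on a deliberately tiny instance: choose how many employees work each candidate shift so that per-period staffing requirements are met at minimum cost. The shifts, requirements, and costs below are invented, and a brute-force search stands in for the paper's LP solvers:

```python
# Hedged sketch of set-covering shift scheduling. All data are hypothetical;
# an exhaustive search replaces the LP machinery for this toy size.
from itertools import product

# Four planning periods; each shift covers a contiguous block of periods.
shifts = {
    "early": [1, 1, 0, 0],
    "mid":   [0, 1, 1, 0],
    "late":  [0, 0, 1, 1],
}
cost = {"early": 100, "mid": 100, "late": 100}
required = [2, 3, 3, 1]  # employees needed in each period

def best_schedule(max_per_shift=4):
    """Cheapest feasible assignment of employee counts to shifts."""
    best = None
    for counts in product(range(max_per_shift + 1), repeat=len(shifts)):
        staffing = [
            sum(c * cover[p] for c, cover in zip(counts, shifts.values()))
            for p in range(len(required))
        ]
        if all(s >= r for s, r in zip(staffing, required)):
            total = sum(c * cost[name] for c, name in zip(counts, shifts))
            if best is None or total < best[0]:
                best = (total, dict(zip(shifts, counts)))
    return best

print(best_schedule())
```

The implicit-matching model in the paper works with exactly this kind of aggregate count per shift; the explicit model would additionally track which named employee takes which shift.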

Relevance:

30.00%

Publisher:

Abstract:

An extensive literature exists on the problems of daily (shift) and weekly (tour) labor scheduling. In representing requirements for employees in these problems, researchers have used formulations based either on the model of Dantzig (1954) or on the model of Keith (1979). We show that both formulations have weaknesses in environments where management knows, or can attempt to identify, how different levels of customer service affect profits. These weaknesses result in lower-than-necessary profits. This paper presents a New Formulation of the daily and weekly Labor Scheduling Problems (NFLSP) designed to overcome the limitations of earlier models. NFLSP incorporates information on how changing the number of employees working in each planning period affects profits. NFLSP uses this information during the development of the schedule to identify the number of employees who, ideally, should be working in each period. In an extensive simulation of 1,152 service environments, NFLSP outperformed the formulations of Dantzig (1954) and Keith (1979) at a significance level of 0.001. Assuming year-round operations and an hourly wage, including benefits, of $6.00, NFLSP's schedules were $96,046 (2.2%) and $24,648 (0.6%) more profitable, on average, than schedules developed using the formulations of Dantzig (1954) and Keith (1979), respectively. Although the average percentage gain over Keith's model was fairly small, it could be much larger in some real cases with different parameters. In 73 and 100 percent of the cases we simulated, NFLSP yielded a higher profit than the models of Keith (1979) and Dantzig (1954), respectively.
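The paper's central idea, letting profit rather than fixed requirements determine staffing, can be sketched in a few lines. The revenue curve and staffing range below are invented placeholders; only the $6.00 wage comes from the abstract:

```python
# Hedged sketch of profit-driven staffing: for each planning period, choose
# the number of employees that maximizes profit given a revenue curve with
# diminishing returns. Revenue figures are hypothetical.

WAGE = 6.00  # hourly wage including benefits, as in the paper's simulation

# revenue[p][n] = hypothetical revenue in period p with n employees working
revenue = [
    [0, 20, 35, 44, 48],   # period 1
    [0, 15, 24, 29, 31],   # period 2
]

def ideal_staffing(revenue, wage):
    """For each period, pick the staff level with the highest profit."""
    plan = []
    for rev in revenue:
        profits = [r - n * wage for n, r in enumerate(rev)]
        plan.append(max(range(len(rev)), key=lambda n: profits[n]))
    return plan

print(ideal_staffing(revenue, WAGE))
```

A fixed-requirements formulation would take the staffing levels as given; here they emerge from the comparison of marginal revenue against the wage.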

Relevance:

30.00%

Publisher:

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-08

Relevance:

30.00%

Publisher:

Abstract:

My thesis focuses on health policies designed to encourage the supply of health services. Access to health services is a major problem undermining the health systems of most industrialized countries. In Quebec, the median wait between a referral from a general practitioner and an appointment with a specialist was 7.3 weeks in 2012, up from 2.9 weeks in 1993, despite the increase in the number of physicians over the same period. For policy makers observing growing wait times for health care, it is important to understand the structure of physicians' labor supply and how it affects the supply of health services. In this context, I consider two main policies. First, I estimate how physicians respond to monetary incentives and use the estimated parameters to examine how compensation policies can be used to shape the short-run supply of health services. Second, I examine how physicians' productivity is affected by their experience, through learning-by-doing, and use the estimated parameters to find the number of inexperienced physicians that must be recruited to replace an experienced physician who retires, in order to keep the supply of health services constant. My thesis develops and applies economic and statistical methods to measure physicians' response to monetary incentives and to estimate their productivity profile (measuring the variation in physicians' productivity over their careers), using panel data on Quebec physicians drawn from both surveys and administrative records. The data contain information on each physician's labor supply, the different types of services provided, and their prices.
These data cover a period during which the Quebec government changed the relative prices of health services. I develop and estimate a structural model of labor supply that allows physicians to be multitasking. In my model, physicians choose their hours of work and the allocation of those hours across the different services they provide, with service prices set by the government. The model yields an income equation that depends on hours worked and on a price index representing the marginal return to hours worked when those hours are allocated optimally across services. The price index depends on the prices of the services provided and on the parameters of the service production technology, which determine how physicians respond to changes in relative prices. I apply the model to panel data on the remuneration of Quebec physicians merged with time-use data on the same physicians. I use the model to examine two dimensions of the supply of health services. First, I analyze the use of monetary incentives to lead physicians to modify their production of different services. Although earlier studies have often compared physician behavior across compensation systems, relatively little is known about how physicians respond to changes in the prices of health services. Current debates in Canadian health policy circles have focused on the importance of income effects in determining physicians' response to increases in the prices of health services.
My work contributes to this debate by identifying and estimating the substitution and income effects resulting from changes in the relative prices of health services. Second, I analyze how experience affects physicians' productivity. This has important implications for the recruitment of physicians to meet the growing demand of an aging population, particularly when the most experienced (most productive) physicians retire. In the first essay, I estimate the income function conditional on hours worked, using instrumental variables to control for the possible endogeneity of hours. As instruments I use indicator variables for physicians' age, the marginal tax rate, the stock-market return, and the square and cube of that return. I show that this yields a lower bound on the own-price elasticity, making it possible to test whether physicians respond to monetary incentives. The results show that the lower bounds on the price elasticities of service supply are significantly positive, suggesting that physicians respond to incentives. A change in relative prices leads physicians to allocate more hours to the service whose price has increased. In the second essay, I estimate the full model, unconditional on hours worked, analyzing variation in physicians' hours, the volume of services provided, and physicians' income, using the simulated method of moments estimator. The results show that the own-price substitution elasticities are large and significantly positive, reflecting a tendency of physicians to increase the volume of the service whose price has risen the most.
The cross-price substitution elasticities are also large but negative. Moreover, there is an income effect associated with fee increases. I use the estimated parameters of the structural model to simulate a general 32% increase in service prices. The results show that physicians would reduce their total hours worked (mean elasticity of -0.02) as well as their clinical hours (mean elasticity of -0.07). They would also reduce the volume of services provided (mean elasticity of -0.05). Third, I exploit the natural link between the income of a fee-for-service physician and his productivity to establish physicians' productivity profile. To do so, I modify the specification of the model to account for the relationship between a physician's productivity and his experience. I estimate the income equation using unbalanced panel data, correcting for the non-random nature of missing observations with a selection model. The results suggest that the productivity profile is an increasing and concave function of experience. Moreover, this profile is robust to using effective experience (the quantity of services produced) as a control variable and to relaxing the parametric assumptions. In addition, one more year of experience increases a physician's production of services by 1,003 Canadian dollars. I use the estimated parameters of the model to compute the replacement ratio: the number of inexperienced physicians needed to replace one experienced physician. This replacement ratio is 1.2.
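The replacement-ratio calculation at the end can be sketched with a generic concave productivity profile. The log form, its scale, and the experience levels below are illustrative assumptions, not the thesis's estimated parameters, so the resulting ratio will not match the thesis's 1.2:

```python
# Hedged sketch of a replacement-ratio calculation from a concave
# (increasing, flattening) productivity profile in experience. The
# functional form and coefficients are invented for illustration.
import math

def productivity(experience_years, scale=100.0):
    """Concave productivity profile: rises with experience, then flattens."""
    return scale * math.log(1.0 + experience_years)

def replacement_ratio(experienced=30, inexperienced=5):
    """Inexperienced physicians needed to match one experienced physician."""
    return productivity(experienced) / productivity(inexperienced)

print(round(replacement_ratio(), 2))
```

The concavity is what keeps the ratio modest: because productivity gains taper off, an experienced physician is more productive than a new recruit, but not many times more.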

Relevance:

30.00%

Publisher:

Abstract:

Economic losses resulting from disease development can be reduced by accurate and early detection of plant pathogens. Early detection can provide the grower with useful information on optimal crop rotation patterns, varietal selections, appropriate control measures, harvest date and post-harvest handling. Classical methods for the isolation of pathogens are commonly used only after disease symptoms appear. This frequently results in a delay in the application of control measures at potentially important periods in crop production. This paper describes the application of both antibody- and DNA-based systems to monitor infection risk of air- and soil-borne fungal pathogens, and the use of this information with mathematical models describing the risk of disease associated with environmental parameters.
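Risk models of the kind mentioned here often take a logistic form in environmental variables. The sketch below uses temperature and leaf-wetness duration with invented coefficients; it illustrates the general shape of such models, not any published parameterization:

```python
# Hedged sketch of an infection-risk model driven by environmental
# parameters: a logistic function of temperature and leaf wetness.
# All coefficients are hypothetical placeholders.
import math

def infection_risk(temp_c, wetness_hours, b0=-6.0, b_temp=0.2, b_wet=0.3):
    """Probability of infection given temperature (C) and leaf wetness (h)."""
    z = b0 + b_temp * temp_c + b_wet * wetness_hours
    return 1.0 / (1.0 + math.exp(-z))

# Cool, dry conditions vs warm, wet conditions
low = infection_risk(10, 2)
high = infection_risk(22, 12)
print(round(low, 3), round(high, 3))
```

Combined with pathogen-detection data, a threshold on such a risk score is what would trigger a control measure before symptoms become visible.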

Relevance:

30.00%

Publisher:

Abstract:

This paper describes two new techniques designed to enhance the performance of fire field modelling software. The two techniques are "group solvers" and automated dynamic control of the solution process, both of which are currently under development within the SMARTFIRE Computational Fluid Dynamics environment. The "group solver" is a derivation of common solver techniques used to obtain numerical solutions to the algebraic equations associated with fire field modelling. The purpose of "group solvers" is to reduce the computational overheads associated with traditional numerical solvers typically used in fire field modelling applications. In an example discussed in this paper, the group solver is shown to provide a 37% saving in computational time compared with a traditional solver. The second technique is the automated dynamic control of the solution process, which is achieved through the use of artificial intelligence techniques. This is designed to improve the convergence capabilities of the software while further decreasing the computational overheads. The technique automatically controls solver relaxation using an integrated production rule engine with a blackboard to monitor and implement the required control changes during solution processing. Initial results for a two-dimensional fire simulation are presented that demonstrate the potential for considerable savings in simulation run-times when compared with control sets from various sources. Furthermore, the results demonstrate the potential for enhanced solution reliability due to obtaining acceptable convergence within each time step, unlike some of the comparison simulations.
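The flavor of automated relaxation control can be conveyed with a toy iterative solve whose relaxation factor is nudged up while the residual shrinks and cut back when it grows. This is a crude stand-in for SMARTFIRE's rule-engine-and-blackboard control, applied here to an invented diagonally dominant 3x3 system:

```python
# Hedged sketch of dynamic solver control: relaxed Gauss-Seidel iteration
# with a residual-driven adjustment of the relaxation factor. The system
# and control constants are hypothetical.

A = [[4.0, 1.0, 0.0],
     [1.0, 4.0, 1.0],
     [0.0, 1.0, 4.0]]
b = [6.0, 12.0, 14.0]

def residual(x):
    """Max-norm residual of A x - b."""
    return max(abs(sum(A[i][j] * x[j] for j in range(3)) - b[i]) for i in range(3))

def adaptive_solve(tol=1e-8, max_iter=500):
    x = [0.0, 0.0, 0.0]
    relax, prev = 1.0, residual(x)
    for it in range(max_iter):
        for i in range(3):
            sigma = sum(A[i][j] * x[j] for j in range(3) if j != i)
            gs = (b[i] - sigma) / A[i][i]   # plain Gauss-Seidel value
            x[i] += relax * (gs - x[i])     # relaxed update
        res = residual(x)
        if res < tol:
            return x, it + 1
        # dynamic control: speed up while converging, back off if diverging
        relax = min(1.5, relax * 1.05) if res < prev else max(0.5, relax * 0.7)
        prev = res
    return x, max_iter

x, iters = adaptive_solve()
print([round(v, 6) for v in x], iters)
```

The production-rule engine in the paper plays the role of the one-line control policy here, but with a far richer rule base monitoring convergence behavior across many solved variables.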

Relevance:

30.00%

Publisher:

Abstract:

Current practices in agricultural management involve the application of rules and techniques to ensure high quality and environmentally friendly production. Based on their experience, agricultural technicians and farmers make critical decisions affecting crop growth while considering several interwoven agricultural, technological, environmental, legal and economic factors. In this context, decision support systems, and the knowledge models that support them, enable the incorporation of valuable experience into software systems, helping agricultural technicians make rapid and effective decisions for efficient crop growth. Pest control is an important issue in agricultural management due to the crop yield reductions caused by pests, and it involves expert knowledge. This paper presents a formalisation of the pest control problem and the workflow followed by agricultural technicians and farmers in integrated pest management, the crop production strategy that combines different practices for growing healthy crops whilst minimising pesticide use. A generic decision schema for estimating the infestation risk of a given pest on a given crop is defined; it acts as a metamodel for the maintenance and extension of the knowledge embedded in a pest management decision support system, which is also presented. This software tool has been implemented by integrating a rule-based tool into a web-based architecture. Evaluation from validity and usability perspectives concluded that both agricultural technicians and farmers considered it a useful tool in pest control, particularly for training new technicians and inexperienced farmers.
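A decision schema of the kind described can be sketched as an ordered rule base mapping observations to a risk level. The rules, thresholds, and observation fields below are invented to illustrate the schema, not taken from the paper's knowledge base:

```python
# Hedged sketch of a rule-based infestation-risk estimator. The rule base
# and its thresholds are hypothetical examples of the generic schema.

def infestation_risk(obs):
    """Apply ordered rules; the first matching rule sets the risk level."""
    rules = [
        # (condition on observations, risk level)
        (lambda o: o["pest_detected"] and o["crop_stage"] == "fruiting", "high"),
        (lambda o: o["pest_detected"], "medium"),
        (lambda o: o["temp_c"] > 25 and o["humidity_pct"] > 80, "medium"),
        (lambda o: True, "low"),  # default rule when nothing else fires
    ]
    for condition, level in rules:
        if condition(obs):
            return level

obs = {"pest_detected": False, "crop_stage": "fruiting",
       "temp_c": 28, "humidity_pct": 85}
print(infestation_risk(obs))
```

Keeping the rules in a declarative list is what makes the schema easy to maintain and extend, which is the role the metamodel plays in the paper's decision support system.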

Relevance:

30.00%

Publisher:

Abstract:

Environmental impacts of wind energy facilities increasingly cause concern, a central issue being bats and birds killed by rotor blades. Two approaches have been employed to assess collision rates: carcass searches and surveys of animals prone to collisions. Carcass searches can provide an estimate of the actual number of animals being killed, but they offer little information on the relation between collision rates and, for example, weather parameters, because the time of death is not precisely known. In contrast, a density index of animals exposed to collision is sufficient to analyse the parameters influencing the collision rate. However, quantification of the collision rate from animal density indices (e.g. acoustic bat activity or bird migration traffic rates) remains difficult. We combine carcass search data with animal density indices in a mixture model to investigate collision rates. In a simulation study we show that the collision rates estimated by our model were at least as precise as conventional estimates based solely on carcass search data. Furthermore, if certain conditions are met, the model can be used to predict the collision rate from density indices alone, without data from carcass searches. This can reduce the time and effort required to estimate collision rates. We applied the model to bat carcass search data obtained at 30 wind turbines in 15 wind facilities in Germany. We used acoustic bat activity and wind speed as predictors for the collision rate. The model estimates correlated well with conventional estimators. Our model can be used to predict the average collision rate. It enables an analysis of the effect of parameters such as rotor diameter or turbine type on the collision rate. The model can also be used in turbine-specific curtailment algorithms that predict the collision rate and reduce this rate with a minimal loss of energy production.
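The core idea of linking carcass counts to a density index can be sketched with the simplest possible version: carcass counts at a turbine are treated as Poisson with mean proportional to the activity index, so the proportionality constant can be estimated by maximum likelihood (a simple count ratio) and then used to predict collisions from activity alone. The data are invented, and the paper's actual model additionally handles carcass persistence, searcher efficiency, and covariates such as wind speed:

```python
# Hedged sketch: Poisson collision counts with rate alpha * activity.
# The MLE of alpha is total carcasses over total activity. Data invented.

carcasses = [2, 0, 5, 1, 3]                 # carcasses found at five turbines
activity = [40.0, 10.0, 90.0, 25.0, 60.0]   # acoustic bat-activity index

# MLE for a Poisson rate proportional to activity
alpha = sum(carcasses) / sum(activity)

def predicted_collisions(activity_index):
    """Predict the collision rate at a turbine from its activity index alone."""
    return alpha * activity_index

print(round(alpha, 4), round(predicted_collisions(50.0), 2))
```

Once alpha is calibrated, new turbines can be assessed from acoustic monitoring alone, which is the labor-saving property the paper highlights.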


Relevance:

30.00%

Publisher:

Abstract:

Scheduling problems are generally NP-hard combinatorial problems, and a lot of research has been done to solve these problems heuristically. However, most of the previous approaches are problem-specific, and research into the development of a general scheduling algorithm is still in its infancy. Mimicking the natural evolutionary process of the survival of the fittest, Genetic Algorithms (GAs) have attracted much attention in solving difficult scheduling problems in recent years. Some obstacles exist when using GAs: there is no canonical mechanism to deal with constraints, which are commonly met in most real-world scheduling problems, and small changes to a solution are difficult. To overcome both difficulties, indirect approaches have been presented (in [1] and [2]) for nurse scheduling and driver scheduling, where GAs are used by mapping the solution space, and separate decoding routines then build solutions to the original problem. In our previous indirect GAs, learning is implicit and is restricted to the efficient adjustment of weights for a set of rules that are used to construct schedules. The major limitation of those approaches is that they learn in a non-human way: like most existing construction algorithms, once the best weight combination is found, the rules used in the construction process are fixed at each iteration. However, normally a long sequence of moves is needed to construct a schedule, and using fixed rules at each move is thus unreasonable and inconsistent with human learning processes. When a human scheduler is working, he normally builds a schedule step by step following a set of rules. After much practice, the scheduler gradually masters the knowledge of which solution parts go well with others. He can identify good parts and is aware of the solution quality even if the scheduling process is not completed yet, thus having the ability to finish a schedule by using flexible, rather than fixed, rules.
In this research we intend to design more human-like scheduling algorithms, by using ideas derived from Bayesian Optimization Algorithms (BOA) and Learning Classifier Systems (LCS) to implement explicit learning from past solutions. BOA can be applied to learn to identify good partial solutions and to complete them by building a Bayesian network of the joint distribution of solutions [3]. A Bayesian network is a directed acyclic graph with each node corresponding to one variable, and each variable corresponding to an individual rule by which a schedule will be constructed step by step. The conditional probabilities are computed according to an initial set of promising solutions. Subsequently, each new instance for each node is generated by using the corresponding conditional probabilities, until values for all nodes have been generated. Another set of rule strings will be generated in this way, some of which will replace previous strings based on fitness selection. If stopping conditions are not met, the Bayesian network is updated again using the current set of good rule strings. The algorithm thereby tries to explicitly identify and mix promising building blocks. It should be noted that for most scheduling problems the structure of the network model is known and all the variables are fully observed. In this case, the goal of learning is to find the rule values that maximize the likelihood of the training data. Thus learning can amount to 'counting' in the case of multinomial distributions. In the LCS approach, each rule has a strength showing its current usefulness in the system, and this strength is constantly assessed [4]. To implement sophisticated learning based on previous solutions, an improved LCS-based algorithm is designed, which consists of the following three steps. The initialization step is to assign each rule at each stage a constant initial strength. Then rules are selected by using the Roulette Wheel strategy.
The next step is to reinforce the strengths of the rules used in the previous solution, keeping the strength of unused rules unchanged. The selection step is to select fitter rules for the next generation. It is envisaged that the LCS part of the algorithm will be used as a hill climber to the BOA algorithm. This is exciting and ambitious research, which might provide the stepping-stone for a new class of scheduling algorithms. Data sets from nurse scheduling and mall problems will be used as test-beds. It is envisaged that once the concept has been proven successful, it will be implemented into general scheduling algorithms. It is also hoped that this research will give some preliminary answers about how to include human-like learning into scheduling algorithms and may therefore be of interest to researchers and practitioners in areas of scheduling and evolutionary computation. References 1. Aickelin, U. and Dowsland, K. (2003) 'Indirect Genetic Algorithm for a Nurse Scheduling Problem', Computers & Operations Research (in print). 2. Li, J. and Kwan, R.S.K. (2003), 'Fuzzy Genetic Algorithm for Driver Scheduling', European Journal of Operational Research 147(2): 334-344. 3. Pelikan, M., Goldberg, D. and Cantu-Paz, E. (1999) 'BOA: The Bayesian Optimization Algorithm', IlliGAL Report No. 99003, University of Illinois. 4. Wilson, S. (1994) 'ZCS: A Zeroth-level Classifier System', Evolutionary Computation 2(1), pp. 1-18.
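The "learning as counting" step described above can be sketched directly: with a fixed network structure and fully observed rule strings, the probability of each rule at each construction step is estimated by counting its frequency in a set of promising strings, and new rule strings are sampled from those probabilities. The rule set and promising strings below are invented toy data, and the sketch omits the conditional (parent-dependent) structure a full BOA would learn:

```python
# Hedged sketch of multinomial 'counting' over promising rule strings,
# followed by sampling of new strings. All data are hypothetical.
import random

RULES = ["A", "B", "C"]  # candidate construction rules at every step

promising = [["A", "B", "B"], ["A", "C", "B"], ["A", "B", "C"], ["B", "B", "B"]]

def learn_probabilities(strings):
    """Per-step multinomial: P(rule at step t) from counts in good strings."""
    n_steps = len(strings[0])
    probs = []
    for t in range(n_steps):
        counts = {r: sum(s[t] == r for s in strings) for r in RULES}
        total = sum(counts.values())
        probs.append({r: c / total for r, c in counts.items()})
    return probs

def sample_string(probs, rng):
    """Sample a new rule string, one rule per construction step."""
    return [rng.choices(RULES, weights=[p[r] for r in RULES])[0] for p in probs]

probs = learn_probabilities(promising)
rng = random.Random(0)
print(probs[0]["A"], sample_string(probs, rng))
```

In the full algorithm, sampled strings are decoded into schedules, evaluated for fitness, and the surviving strings update the counts again on the next generation.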

Relevance:

30.00%

Publisher:

Abstract:

Most Australian banana production occurs on the north-eastern tropical coast between latitudes 15-18°S, a region that can experience summer cyclone activity. Damage from severe tropical cyclones has a serious impact on banana-based livelihoods. The most significant impacts include the immediate loss of production and income for several months, the region-wide synchronization of cropping and the expense of rehabilitating affected plantations. Severe tropical cyclones have directly affected the main production region twice in recent years: Tropical Cyclone (TC) Larry (Category 4) in March 2006 and TC Yasi (Category 5) in February 2011. Based on TC Larry experiences, pre- and post-cyclone farm practices were developed to reduce these impacts in future cyclonic events. The main pre-cyclone farm practice focused on maintaining production units and an earlier return to fruit production by partially or completely removing the plant canopy to reduce wind resistance. Post-cyclone farm practices focused on managing the industry-wide crop synchronization, using crop timing techniques to achieve a staggered return to cropping by scheduling production to provide a continuous fruit supply. With TC Yasi in 2011, some banana producers implemented these practices, allowing them to examine their effectiveness in reducing cyclonic impacts. Additional research and development activities were conducted to refine our understanding of their effectiveness and improve their application for future cyclonic events. Based on these activities and farm-based observations, practice-based management strategies can be suggested to help reduce the impact of severe tropical cyclones in the future. Canopy removal maintained banana plants as productive units, and provided earlier but smaller bunches, generating earlier-than-expected income. Queensland producers expressed willingness to adopt canopy removal for future cyclone threats where appropriate, despite its labor-intensiveness. Mechanization would allow larger-scale adoption. Implementing a staggered cropping program successfully achieved a consistent, continuous fruit supply after a cyclone impact. Both techniques should be applicable to other cyclone-prone regions.
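The staggered-cropping idea can be sketched as a simple scheduling computation: spread the crop-timing dates of a plantation's blocks evenly over a recovery window so that harvests do not all arrive at once. The block count and timing window below are invented for illustration:

```python
# Hedged sketch of a staggered return to cropping: assign each block a
# crop-timing week, evenly spaced over a recovery window. Data invented.

def stagger_schedule(n_blocks, first_week, window_weeks):
    """Assign each block a crop-timing week, spread evenly over the window."""
    step = window_weeks / max(n_blocks - 1, 1)
    return [round(first_week + i * step) for i in range(n_blocks)]

# Six blocks returned to production over a 20-week window starting week 10
weeks = stagger_schedule(6, 10, 20)
print(weeks)
```

In practice, the spacing would be driven by bunch-maturation times and market demand rather than an even split, but the even spread conveys the de-synchronization goal.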