889 results for Production lot-scheduling models
Abstract:
Leafy greens are an essential part of a healthy diet. Because of their health benefits, production and consumption of leafy greens have increased considerably in the U.S. over the last few decades. However, leafy greens have also been associated with a large number of foodborne disease outbreaks in recent years. The overall goal of this dissertation was to use current knowledge of predictive models and available data to understand the growth, survival, and death of enteric pathogens in leafy greens at the pre- and post-harvest levels. Temperature plays a major role in the growth and death of bacteria in foods. A growth-death model was developed for Salmonella and Listeria monocytogenes in leafy greens under the varying temperature conditions typically encountered in the supply chain. The developed growth-death models were validated using experimental dynamic time-temperature profiles available in the literature. Furthermore, these growth-death models for Salmonella and Listeria monocytogenes, together with a similar model for E. coli O157:H7, were used to predict the growth of these pathogens in leafy greens during transportation without temperature control. Refrigeration of leafy greens increases their shelf life and mitigates bacterial growth, but storage at lower temperatures also increases storage cost. Nonlinear programming was used to optimize the storage temperature of leafy greens in the supply chain, minimizing storage cost while maintaining the desired levels of sensory quality and microbial safety. Most outbreaks associated with consumption of leafy greens contaminated with E. coli O157:H7 in the U.S. have occurred during July-November. A dynamic system model consisting of subsystems and inputs (soil, irrigation, cattle, wildlife, and rainfall), simulating a farm in a major leafy-greens-producing area of California, was developed. The model simulation incorporated the events of planting, irrigation, harvesting, ground preparation for the new crop, contamination of soil and plants, and survival of E. coli O157:H7. The predictions of this system model are in agreement with the observed seasonality of outbreaks. Overall, this dissertation applied growth, survival, and death models of enteric pathogens in leafy greens during production and throughout the supply chain.
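As a rough illustration of how such temperature-dependent growth-death predictions can be combined with a dynamic time-temperature profile, the sketch below integrates a Ratkowsky-type growth rate above a minimum growth temperature and a slow first-order die-off below it; all parameter values and the example profile are hypothetical placeholders, not the dissertation's fitted models.

```python
# Illustrative growth-death integration over a supply-chain temperature profile.
# Every numeric parameter below is an assumed placeholder.
import math

B = 0.02          # Ratkowsky slope                         -- assumed
T_MIN = 5.0       # minimum growth temperature, degC        -- assumed
DEATH_RATE = 0.01 # log10 reduction per hour below T_MIN    -- assumed
N_MAX = 9.0       # maximum population density, log10 CFU/g -- assumed

def rate(temp_c, n):
    """Change in log10 CFU/g per hour at a given temperature."""
    if temp_c > T_MIN:
        mu = (B * (temp_c - T_MIN)) ** 2        # specific growth rate
        return mu * (1.0 - n / N_MAX)           # logistic damping near N_MAX
    return -DEATH_RATE                          # slow die-off in the cold

def simulate(profile, n0=2.0, dt=0.1):
    """Integrate growth/death over a list of (hours, temperature) segments."""
    n = n0
    for hours, temp_c in profile:
        for _ in range(int(hours / dt)):
            n = max(0.0, n + rate(temp_c, n) * dt)
    return n

# Example profile: 12 h refrigerated transport, 4 h dock at 20 C, 48 h retail at 7 C.
profile = [(12, 4.0), (4, 20.0), (48, 7.0)]
print(f"Final population: {simulate(profile):.2f} log10 CFU/g")
```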
Abstract:
Plantings of mixed native species (termed 'environmental plantings') are increasingly being established for carbon sequestration whilst providing additional environmental benefits such as biodiversity and water quality. In Australia, they are currently one of the most common forms of reforestation. Investment in establishing and maintaining such plantings relies on having a cost-effective modelling approach to provide unbiased estimates of biomass production and carbon sequestration rates. In Australia, the Full Carbon Accounting Model (FullCAM) is used for both national greenhouse gas accounting and project-scale sequestration activities. Prior to the work presented here, the FullCAM tree growth curve was not calibrated specifically for environmental plantings and generally under-estimated their biomass. Here we collected and analysed above-ground biomass data from 605 mixed-species environmental plantings and tested the effects of several planting characteristics on growth rates. Plantings were then categorised based on significant differences in growth rates. Growth of plantings differed between temperate and tropical regions. Tropical plantings were relatively uniform in terms of planting methods, and their growth was largely related to stand age, consistent with the un-calibrated growth curve. However, in temperate regions, where plantings were more variable, the key factors influencing growth were planting width, stand density and species mix (proportion of individuals that were trees). These categories provided the basis for FullCAM calibration. Although the overall model efficiency was only 39-46%, there was nonetheless no significant bias when the model was applied to the various planting categories. Thus, modelled estimates of biomass accumulation will be reliable on average, but estimates at any particular location will be uncertain, with either under- or over-prediction possible. When compared with the un-calibrated yield curves, predictions using the new calibrations show that early growth is likely to be more rapid and total above-ground biomass may be higher for many plantings at maturity. This study has considerably improved understanding of the patterns of growth in different types of environmental plantings and of modelling biomass accumulation in young (<25 years old) plantings. However, significant challenges remain in understanding longer-term stand dynamics, particularly temporal changes in stand density and species composition.
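The calibration task described above amounts to fitting a yield curve to biomass-age observations within each planting category and then checking model efficiency and bias. The sketch below fits a generic saturating growth curve to invented plot data by grid search; FullCAM's actual tree yield formula and the study's data are not reproduced here.

```python
# Fit a saturating biomass-age curve to hypothetical plot data and report
# sum of squared errors and Nash-Sutcliffe model efficiency.
import math

# (stand age in years, above-ground biomass in t/ha) -- hypothetical plots
data = [(3, 8.0), (5, 18.0), (8, 34.0), (12, 52.0), (20, 71.0), (25, 78.0)]

def predict(age, m_max, k):
    return m_max * (1.0 - math.exp(-k * age))

def sse(m_max, k):
    return sum((obs - predict(age, m_max, k)) ** 2 for age, obs in data)

# Coarse grid search in place of a proper non-linear least-squares routine.
m_max, k = min(
    ((m, x / 100) for m in range(50, 151, 5) for x in range(1, 31)),
    key=lambda p: sse(*p),
)
print(f"Fitted: M_max = {m_max} t/ha, k = {k:.2f} /yr, SSE = {sse(m_max, k):.1f}")

# Model efficiency, the statistic quoted in the abstract.
mean_obs = sum(o for _, o in data) / len(data)
ss_tot = sum((o - mean_obs) ** 2 for _, o in data)
print(f"Model efficiency: {1 - sse(m_max, k) / ss_tot:.2f}")
```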
Abstract:
The predictive capabilities of computational fire models have improved in recent years such that models have become an integral part of many research efforts. Models improve the understanding of the fire risk of materials and may decrease the number of expensive experiments required to assess the fire hazard of a specific material or designed space. A critical component of a predictive fire model is the pyrolysis sub-model that provides a mathematical representation of the rate of gaseous fuel production from condensed phase fuels given a heat flux incident to the material surface. The modern, comprehensive pyrolysis sub-models that are common today require the definition of many model parameters to accurately represent the physical description of materials that are ubiquitous in the built environment. Coupled with the increase in the number of parameters required to accurately represent the pyrolysis of materials is the increasing prevalence in the built environment of engineered composite materials that have never been measured or modeled. The motivation behind this project is to develop a systematic, generalized methodology to determine the requisite parameters to generate pyrolysis models with predictive capabilities for layered composite materials that are common in industrial and commercial applications. This methodology has been applied to four common composites in this work that exhibit a range of material structures and component materials. The methodology utilizes a multi-scale experimental approach in which each test is designed to isolate and determine a specific subset of the parameters required to define a material in the model. Data collected in simultaneous thermogravimetry and differential scanning calorimetry experiments were analyzed to determine the reaction kinetics, thermodynamic properties, and energetics of decomposition for each component of the composite. Data collected in microscale combustion calorimetry experiments were analyzed to determine the heats of complete combustion of the volatiles produced in each reaction. Inverse analyses were conducted on sample temperature data collected in bench-scale tests to determine the thermal transport parameters of each component through degradation. Simulations of quasi-one-dimensional bench-scale gasification tests generated from the resultant models using the ThermaKin modeling environment were compared to experimental data to independently validate the models.
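As a minimal illustration of the reaction-kinetics component that such pyrolysis sub-models require, the sketch below integrates a single first-order Arrhenius reaction over a constant-heating-rate (TGA-style) temperature ramp; the kinetic parameters are assumed values, not those measured for the composites studied in this work.

```python
# Single first-order Arrhenius reaction under a constant heating rate,
# the simplest form of the kinetics extracted from thermogravimetric data.
import math

A = 1.0e12        # pre-exponential factor, 1/s -- assumed
E = 1.6e5         # activation energy, J/mol    -- assumed
R = 8.314         # gas constant, J/(mol K)
BETA = 10.0 / 60  # heating rate: 10 K/min expressed in K/s

def simulate_tga(t0_k=300.0, t_end_k=900.0, dt=0.5):
    """Return (temperature, residual mass fraction) pairs over the ramp."""
    alpha, temp, history = 0.0, t0_k, []
    while temp < t_end_k:
        k = A * math.exp(-E / (R * temp))
        alpha = min(1.0, alpha + k * (1.0 - alpha) * dt)   # first-order model
        temp += BETA * dt
        history.append((temp, 1.0 - alpha))
    return history

curve = simulate_tga()
# Temperature of the peak mass-loss rate, a feature commonly compared with data.
peak = max(range(1, len(curve)), key=lambda i: curve[i - 1][1] - curve[i][1])
print(f"Peak mass-loss rate near {curve[peak][0]:.0f} K")
```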
Abstract:
Previous studies of greenhouse gas emissions (GHGE) from beef production systems in northern Australia have been based on models of ‘steady-state’ herd structures that do not take into account the considerable inter-annual variation in liveweight gain, reproduction and mortality rates that occurs due to seasonal conditions. Nor do they consider the implications of flexible stocking strategies designed to adapt these production systems to the highly variable climate. The aim of the present study was to quantify the variation in total GHGE (t CO2e) and GHGE intensity (t CO2e/t liveweight sold) for the beef industry in northern Australia when variability in these factors was considered. A combined GRASP–Enterprise modelling platform was used to simulate a breeding–finishing beef cattle property in the Burdekin River region of northern Queensland, using historical climate data from 1982–2011. GHGE was calculated using the method of the Australian National Greenhouse Gas Inventory. Five stocking-rate strategies were simulated: fixed stocking at moderate and high rates, and three flexible strategies in which the stocking rate was adjusted annually by up to 5%, 10% or 20%, according to the pasture available at the end of the growing season. Variation in total annual GHGE was lowest in the ‘fixed moderate’ (~9.5 ha/adult equivalent (AE)) stocking strategy, ranging from 3799 to 4471 t CO2e, and highest in the ‘fixed high’ strategy (~5.9 ha/AE), which ranged from 3771 to 7636 t CO2e. The ‘fixed moderate’ strategy had the least variation in GHGE intensity (15.7–19.4 t CO2e/t liveweight sold), while the ‘flexible 20’ strategy (up to 20% annual change in AE) had the largest range (10.5–40.8 t CO2e/t liveweight sold). Across the five stocking strategies, the ‘fixed moderate’ strategy had the highest simulated perennial grass percentage and pasture growth, the highest average rate of liveweight gain (121 kg/steer), the highest average branding percentage (74%) and the lowest average breeding-cow mortality rate (3.9%), resulting in the lowest average GHGE intensity (16.9 t CO2e/t liveweight sold). The ‘fixed high’ stocking-rate strategy (~5.9 ha/AE) performed the poorest in each of these measures, while the three flexible stocking strategies were intermediate. The ‘fixed moderate’ stocking strategy also yielded the highest average gross margin per AE carried and per hectare. These results highlight the importance of considering the influence of climate variability on stocking-rate management strategies and herd performance when estimating GHGE. The results also support a body of previous work recommending the adoption of moderate stocking strategies to enhance the profitability and ecological stability of beef production systems in northern Australia.
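A toy sketch of the two quantities at the core of this comparison, annual GHGE and GHGE intensity under a flexible stocking rule, is shown below; it does not reproduce the GRASP–Enterprise platform or the national inventory method, and every coefficient in it is invented for illustration only.

```python
# Toy annual loop: adjust herd size by at most `max_change` toward what the
# simulated pasture can carry, then compute emissions intensity.
import random

EMISSIONS_PER_AE = 2.2    # t CO2e per adult equivalent per year -- assumed
LIVEWEIGHT_PER_AE = 0.13  # t liveweight sold per AE per year    -- assumed

def run(years=30, herd=1000.0, max_change=0.20, seed=1):
    random.seed(seed)
    intensities = []
    for _ in range(years):
        pasture_index = random.uniform(0.5, 1.5)          # seasonal variability
        change = max(-max_change, min(max_change, pasture_index - 1.0))
        herd *= 1 + change                                 # flexible stocking rule
        emissions = herd * EMISSIONS_PER_AE
        sold = herd * LIVEWEIGHT_PER_AE * pasture_index    # poor years sell less
        intensities.append(emissions / sold)
    return min(intensities), max(intensities)

for flex in (0.0, 0.05, 0.10, 0.20):
    lo, hi = run(max_change=flex)
    print(f"flex {flex:.0%}: GHGE intensity range {lo:.1f}-{hi:.1f} t CO2e/t LW")
```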
Abstract:
People, animals and the environment can be exposed to multiple chemicals at once from a variety of sources, but current risk assessment is usually carried out one chemical substance at a time. In human health risk assessment, ingestion of food is considered a major route of exposure to many contaminants, namely mycotoxins, a wide group of fungal secondary metabolites known to potentially cause toxic and carcinogenic outcomes. Mycotoxins are commonly found in a variety of foods, including those intended for consumption by infants and young children, and have been found in processed cereal-based foods available in the Portuguese market. The use of mathematical models, including probabilistic approaches using Monte Carlo simulations, is a prominent issue in human health risk assessment in general and in mycotoxin exposure assessment in particular. The present study aims to characterize, for the first time, the risk associated with the exposure of Portuguese children to single and multiple mycotoxins present in processed cereal-based foods (CBF). Food consumption data for Portuguese children (0-3 years old, n=103) were collected using a 3-day food diary. Contamination data concerned the quantification of 12 mycotoxins (aflatoxins, ochratoxin A, fumonisins and trichothecenes) in 20 CBF samples marketed in 2014 and 2015 in Lisbon; samples were analyzed by HPLC-FLD, LC-MS/MS and GC-MS. Daily exposure of children to mycotoxins was estimated using deterministic and probabilistic approaches. Different strategies were used to treat the left-censored data. For aflatoxins, as carcinogenic compounds, the margin of exposure (MoE) was calculated as the ratio of the BMDL (benchmark dose lower confidence limit) to the aflatoxin exposure; the magnitude of the MoE gives an indication of the risk level. For the remaining mycotoxins, the estimated exposure was compared to the reference dose values (tolerable daily intake, TDI) in order to calculate hazard quotients (HQ, the ratio between exposure and a reference dose). For the cumulative risk assessment of multiple mycotoxins, the concentration addition (CA) concept was used: the combined margin of exposure (MoET) and the hazard index (HI) were calculated for aflatoxins and for the remaining mycotoxins, respectively. Of the analyzed CBF samples, 71% were contaminated with mycotoxins (at values below the legal limits), and approximately 56% of the studied children consumed CBF at least once during the 3 days. Preliminary results showed that children's exposure to single mycotoxins present in CBF was below the TDI. Aflatoxin MoE and MoET values (around 10000 or more) indicated a reduced potential risk from exposure through consumption of CBF. HQ and HI values for the remaining mycotoxins were below 1. Children are a population group particularly vulnerable to food contaminants, and the present results point to an urgent need to establish legal limits and control strategies regarding the presence of multiple mycotoxins in children's foods in order to protect their health. The development of packaging materials with antifungal properties is a possible solution to control the growth of moulds and consequently reduce mycotoxin production, contributing to the quality and safety of foods intended for children.
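The probabilistic exposure assessment described can be sketched as a Monte Carlo loop over consumption, body weight and contamination, from which hazard quotients and margins of exposure are derived. The example below uses placeholder distributions and reference values, not the study's consumption or contamination data.

```python
# Monte Carlo exposure sketch: exposure = intake * concentration / body weight,
# then HQ = exposure / TDI and MoE = BMDL / exposure. All inputs are placeholders.
import random

N = 100_000
TDI_NG_KG_BW_DAY = 1000.0   # TDI for a non-carcinogenic mycotoxin -- placeholder
BMDL_NG_KG_BW_DAY = 170.0   # aflatoxin benchmark dose lower limit -- placeholder

random.seed(42)
hq_exceed, moes = 0, []
for _ in range(N):
    intake_g = random.triangular(0, 80, 30)              # CBF eaten per day, g
    body_weight = random.uniform(8, 15)                  # kg, child 0-3 years
    conc_ng_g = random.lognormvariate(0.0, 1.0) * 0.5    # contamination, ng/g
    exposure = intake_g * conc_ng_g / body_weight        # ng/kg bw/day
    if exposure / TDI_NG_KG_BW_DAY > 1:
        hq_exceed += 1
    if exposure > 0:
        moes.append(BMDL_NG_KG_BW_DAY / exposure)

moes.sort()
print(f"Fraction of iterations with HQ > 1: {hq_exceed / N:.4f}")
print(f"Median MoE: {moes[len(moes) // 2]:.0f} (values of 10000 or more suggest low concern)")
```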
Abstract:
There is increasing interest in evaluating the environmental effects on crop architectural traits and yield improvement. However, crop models describing the dynamic changes in canopy structure with environmental conditions and the complex interactions between canopy structure, light interception, and dry mass production are only gradually emerging. Using tomato (Solanum lycopersicum L.) as a model crop, a dynamic functional-structural plant model (FSPM) was constructed, parameterized, and evaluated to analyse the effects of temperature on architectural traits, which strongly influence canopy light interception and shoot dry mass. The FSPM predicted the organ growth, organ size, and shoot dry mass over time with high accuracy (>85%). Analyses of this FSPM showed that, in comparison with the reference canopy, shoot dry mass may be affected by leaf angle by as much as 20%, leaf curvature by up to 7%, the leaf length: width ratio by up to 5%, internode length by up to 9%, and curvature ratios and leaf arrangement by up to 6%. Tomato canopies at low temperature had higher canopy density and were more clumped due to higher leaf area and shorter internodes. Interestingly, dry mass production and light interception of the clumped canopy were more sensitive to changes in architectural traits. The complex interactions between architectural traits, canopy light interception, dry mass production, and environmental conditions can be studied by the dynamic FSPM, which may serve as a tool for designing a canopy structure which is 'ideal' in a given environment.
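A full FSPM resolves individual organs, but the underlying link between canopy structure, light interception and dry mass can be sketched with a simple Beer-Lambert canopy in which the extinction coefficient stands in for architectural traits such as leaf angle; the coefficients below are illustrative, not the tomato model's parameters.

```python
# Beer-Lambert light interception converted to dry mass via an assumed
# radiation use efficiency; lower k roughly corresponds to more erect leaves.
import math

RUE = 3.0          # g dry mass per MJ intercepted PAR -- assumed
PAR_PER_DAY = 8.0  # incident PAR, MJ/m2/day           -- assumed

def daily_dry_mass(lai, k):
    """Dry mass gain (g/m2/day) for leaf area index `lai` and extinction coefficient `k`."""
    intercepted = PAR_PER_DAY * (1.0 - math.exp(-k * lai))
    return RUE * intercepted

for k in (0.5, 0.7, 0.9):
    print(f"k = {k}: {daily_dry_mass(lai=3.0, k=k):.1f} g/m2/day")
```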
Abstract:
Our research has shown that schedules can be built mimicking a human scheduler by using a set of rules that involve domain knowledge. This chapter presents a Bayesian Optimization Algorithm (BOA) for the nurse scheduling problem that chooses such suitable scheduling rules from a set for each nurse’s assignment. Based on the idea of using probabilistic models, the BOA builds a Bayesian network for the set of promising solutions and samples these networks to generate new candidate solutions. Computational results from 52 real data instances demonstrate the success of this approach. It is also suggested that the learning mechanism in the proposed algorithm may be suitable for other scheduling problems.
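A much-simplified sketch of the estimation-of-distribution idea is given below: the rule choice for each assignment step is sampled from a probability model that is re-estimated from the better solutions. For brevity it uses an independent distribution per step rather than the BOA's full Bayesian network, and the fitness function is a toy stand-in for real nurse-schedule evaluation.

```python
# Univariate estimation-of-distribution sketch over scheduling-rule strings.
import random

N_STEPS, N_RULES, POP, GENS = 20, 4, 60, 30
TARGET = [random.randrange(N_RULES) for _ in range(N_STEPS)]  # hidden "good" rules

def fitness(rules):
    # Toy objective: number of steps that used the (unknown) preferred rule.
    return sum(r == t for r, t in zip(rules, TARGET))

probs = [[1.0 / N_RULES] * N_RULES for _ in range(N_STEPS)]
for _ in range(GENS):
    pop = [[random.choices(range(N_RULES), probs[s])[0] for s in range(N_STEPS)]
           for _ in range(POP)]
    pop.sort(key=fitness, reverse=True)
    elite = pop[: POP // 3]
    # Re-estimate the model by counting rule usage in the promising solutions.
    for s in range(N_STEPS):
        counts = [1] * N_RULES                      # Laplace smoothing
        for sol in elite:
            counts[sol[s]] += 1
        total = sum(counts)
        probs[s] = [c / total for c in counts]

print("Best rule-string fitness:", fitness(max(pop, key=fitness)), "of", N_STEPS)
```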
Abstract:
This paper addresses a lot-sizing and scheduling problem that minimizes inventory and backlog costs on m parallel machines with sequence-dependent set-up times over t periods. Problem solutions are represented as product subsets, ordered and/or unordered, for each machine m at each period t. The optimal lot sizes are determined by applying a linear program. A genetic algorithm searches either over ordered or over unordered subsets (the latter are implicitly ordered using a fast ATSP-type heuristic) to identify an overall optimal solution. Initial computational results are presented, comparing the speed and solution quality of the ordered and unordered genetic algorithm approaches.
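The hybrid search can be pictured as follows: the genetic algorithm manipulates the assignment of product subsets to machines and periods, while a separate routine prices each assignment. In the sketch below that pricing step is a simple surrogate cost rather than the paper's linear program and ATSP-type sequencing heuristic, so it only illustrates the encoding and the GA loop.

```python
# Chromosome = subset of products per (machine, period); GA with mutation only,
# evaluated by a surrogate cost in place of the LP + sequencing heuristic.
import random

PRODUCTS, MACHINES, PERIODS = 6, 2, 3
DEMAND = {(p, t): random.randint(5, 20) for p in range(PRODUCTS) for t in range(PERIODS)}

def random_solution():
    # solution[m][t] = set of products assigned to machine m in period t
    sol = [[set() for _ in range(PERIODS)] for _ in range(MACHINES)]
    for p in range(PRODUCTS):
        for t in range(PERIODS):
            sol[random.randrange(MACHINES)][t].add(p)
    return sol

def cost(sol):
    # Surrogate: a set-up cost per assigned product plus a load-imbalance penalty.
    setup = sum(len(sol[m][t]) for m in range(MACHINES) for t in range(PERIODS))
    loads = [sum(DEMAND[p, t] for t in range(PERIODS) for p in sol[m][t])
             for m in range(MACHINES)]
    return setup + (max(loads) - min(loads))

def mutate(sol):
    # Move one product to the other machine in a random period.
    m_from, t = random.randrange(MACHINES), random.randrange(PERIODS)
    if sol[m_from][t]:
        p = random.choice(sorted(sol[m_from][t]))
        sol[m_from][t].discard(p)
        sol[(m_from + 1) % MACHINES][t].add(p)

pop = [random_solution() for _ in range(30)]
for _ in range(200):
    pop.sort(key=cost)
    child = [[set(s) for s in machine] for machine in pop[0]]  # copy an elite parent
    mutate(child)
    pop[-1] = child                                            # replace the worst
print("Best surrogate cost:", cost(min(pop, key=cost)))
```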
Abstract:
Clearing woodlands is practised world-wide to increase crop and livestock production, but can result in unintended consequences including woody regrowth and land degradation. The pasture response of 2 eucalypt woodlands in the central Queensland rangelands to killing trees with herbicides, in the presence or absence of grazing and regular spring burning, was recorded over 7 or 8 years to determine the long-term sustainability of these common practices. Herbage mass and species composition plus tree dynamics were monitored in 2 replicated experiments at each site. For 8 years following herbicide application, killing Eucalyptus populnea F. Muell. (poplar box) trees resulted in a doubling of native pasture herbage mass from that of the pre-existing woodland, with a tree basal area of 8.7 m2 ha-1. Conversely, over 7 years with a similar range of seasons, killing E. melanophloia F. Muell. (silver-leaved ironbark) trees of a similar tree basal area had little impact on herbage mass grown or on pasture composition for the first 4 years before production then increased. Few consistent changes in pasture composition were recorded after killing the trees, although there was an increase in the desirable grasses Dichanthium sericeum (R. Br.) A. Camus (Queensland bluegrass) and Themeda triandra Forssk. (kangaroo grass) when grazed conservatively. Excluding grazing allowed more palatable species of the major grasses to enhance their prominence, but seasonal conditions still had a major influence on their production in particular years. Pasture crown basal area was significantly higher where trees had been killed, especially in the poplar box woodland. Removing tree competition did not have a major effect on pasture composition that was independent of other management impositions or seasons, and it did not result in a rapid increase in herbage mass in both eucalypt communities. The slow pasture response to tree removal at one site indicates that regional models and economic projections relating to tree clearing require community-specific inputs.
Abstract:
Scheduling problems are generally NP-hard combinatorial problems, and a lot of research has been done to solve these problems heuristically. However, most of the previous approaches are problem-specific, and research into the development of a general scheduling algorithm is still in its infancy. Mimicking the natural evolutionary process of the survival of the fittest, Genetic Algorithms (GAs) have attracted much attention in solving difficult scheduling problems in recent years. Some obstacles exist when using GAs: there is no canonical mechanism to deal with constraints, which are commonly met in most real-world scheduling problems, and small changes to a solution are difficult. To overcome both difficulties, indirect approaches have been presented (in [1] and [2]) for nurse scheduling and driver scheduling, where GAs are used by mapping the solution space, and separate decoding routines then build solutions to the original problem. In our previous indirect GAs, learning is implicit and is restricted to the efficient adjustment of weights for a set of rules that are used to construct schedules. The major limitation of those approaches is that they learn in a non-human way: like most existing construction algorithms, once the best weight combination is found, the rules used in the construction process are fixed at each iteration. However, a long sequence of moves is normally needed to construct a schedule, so using fixed rules at each move is unreasonable and not coherent with human learning processes. When a human scheduler is working, he normally builds a schedule step by step following a set of rules. After much practice, the scheduler gradually masters the knowledge of which solution parts go well with others. He can identify good parts and is aware of the solution quality even if the scheduling process is not yet complete, and thus has the ability to finish a schedule by using flexible, rather than fixed, rules. In this research we intend to design more human-like scheduling algorithms, by using ideas derived from Bayesian Optimization Algorithms (BOA) and Learning Classifier Systems (LCS) to implement explicit learning from past solutions. BOA can be applied to learn to identify good partial solutions and to complete them by building a Bayesian network of the joint distribution of solutions [3]. A Bayesian network is a directed acyclic graph with each node corresponding to one variable; here each variable corresponds to an individual rule by which a schedule is constructed step by step. The conditional probabilities are computed from an initial set of promising solutions. Subsequently, a new instance for each node is generated using the corresponding conditional probabilities, until values for all nodes have been generated. Another set of rule strings is generated in this way, some of which will replace previous strings based on fitness selection. If the stopping conditions are not met, the Bayesian network is updated again using the current set of good rule strings. The algorithm thereby tries to explicitly identify and mix promising building blocks. It should be noted that for most scheduling problems the structure of the network model is known and all the variables are fully observed. In this case, the goal of learning is to find the rule values that maximize the likelihood of the training data, so learning amounts to 'counting' in the case of multinomial distributions.
In the LCS approach, each rule has a strength indicating its current usefulness in the system, and this strength is constantly reassessed [4]. To implement sophisticated learning based on previous solutions, an improved LCS-based algorithm is designed, consisting of three steps. The initialization step assigns each rule at each stage a constant initial strength. Rules are then selected using the roulette-wheel strategy, and the strengths of the rules used in the previous solution are reinforced while the strengths of unused rules remain unchanged. The selection step selects fitter rules for the next generation. It is envisaged that the LCS part of the algorithm will be used as a hill climber for the BOA. This is exciting and ambitious research, which might provide the stepping-stone for a new class of scheduling algorithms. Data sets from nurse scheduling and mall problems will be used as test-beds. It is envisaged that once the concept has been proven successful, it will be implemented into general scheduling algorithms. It is also hoped that this research will give some preliminary answers about how to include human-like learning in scheduling algorithms, and may therefore be of interest to researchers and practitioners in the areas of scheduling and evolutionary computation.
References
1. Aickelin, U. and Dowsland, K. (2003) 'Indirect Genetic Algorithm for a Nurse Scheduling Problem', Computers & Operations Research (in press).
2. Li, J. and Kwan, R.S.K. (2003) 'Fuzzy Genetic Algorithm for Driver Scheduling', European Journal of Operational Research 147(2): 334-344.
3. Pelikan, M., Goldberg, D. and Cantu-Paz, E. (1999) 'BOA: The Bayesian Optimization Algorithm', IlliGAL Report No. 99003, University of Illinois.
4. Wilson, S. (1994) 'ZCS: A Zeroth-level Classifier System', Evolutionary Computation 2(1): 1-18.
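A minimal sketch of the LCS-style component described in this abstract, assuming a toy evaluation function in place of a real nurse- or mall-scheduling objective: each construction step keeps a strength for every rule, rules are picked by roulette-wheel selection, and only the rules used in a solution are reinforced, in proportion to its quality.

```python
# Strength-based rule reinforcement with roulette-wheel selection.
import random

N_STEPS, N_RULES = 10, 4
REWARD, ITERATIONS = 0.2, 500
strengths = [[1.0] * N_RULES for _ in range(N_STEPS)]
good_rules = [random.randrange(N_RULES) for _ in range(N_STEPS)]  # hidden target

def roulette(weights):
    return random.choices(range(len(weights)), weights)[0]

def evaluate(choices):
    # Toy quality: fraction of steps that picked the hidden "good" rule.
    return sum(c == g for c, g in zip(choices, good_rules)) / N_STEPS

for _ in range(ITERATIONS):
    choices = [roulette(strengths[s]) for s in range(N_STEPS)]
    quality = evaluate(choices)
    # Reinforce only the rules actually used; unused rules keep their strength.
    for s, rule in enumerate(choices):
        strengths[s][rule] += REWARD * quality

learned = [max(range(N_RULES), key=lambda r: strengths[s][r]) for s in range(N_STEPS)]
print("Steps where the strongest rule matches the hidden good rule:",
      sum(l == g for l, g in zip(learned, good_rules)), "of", N_STEPS)
```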
Abstract:
Mass Customization (MC) is not a mature business strategy, and hence it is not clear whether a single operational model, or a small group of models, is dominant. Companies tend to approach MC from either a mass production or a customization origin, and this in itself gives reason to believe that several operational models will be observable. This paper reviews actual and theoretical fulfilment systems that enterprises could apply when offering a pre-engineered catalogue of customizable products and options. The issues considered are:
- how product flows are structured in relation to processes, inventories and decoupling point(s);
- characteristics of the order fulfilment (OF) process that inhibit or facilitate fulfilment;
- the logic of how products are allocated to customers;
- customer factors that influence OF process design and operation.
Diversity in order fulfilment structures is expected and is found in the literature. The review has identified four structural forms that have been used in a Catalogue MC context:
- fulfilment from stock;
- fulfilment from a single fixed decoupling point;
- fulfilment from one of several fixed decoupling points;
- fulfilment from several locations, with floating decoupling points.
From the review it is apparent that producers are being imaginative in coping with the demands of high variety, high volume, customization and short lead times. These demands have encouraged the relationship between product, process and customer to be re-examined. Not only has this strengthened interest in commonality and postponement, but, as reported in the paper, it has led to the re-engineering of the order fulfilment process to create models with multiple fixed decoupling points and the floating decoupling point system.
Abstract:
We study the production and signatures of doubly charged Higgs bosons (DCHBs) in the process γγ → H⁻⁻H⁺⁺ at the e⁻e⁺ International Linear Collider and the CERN Linear Collider, where the intermediate photons are described by the Weizsäcker-Williams and laser-backscattering distributions.
Abstract:
Reconfigurable platforms are a promising technology that offers an interesting trade-off between flexibility and performance, which many recent embedded system applications demand, especially in fields such as multimedia processing. These applications typically involve multiple ad-hoc tasks for hardware acceleration, which are usually represented using formalisms such as Data Flow Diagrams (DFDs), Data Flow Graphs (DFGs), Control and Data Flow Graphs (CDFGs) or Petri Nets. However, none of these models is able to capture at the same time the pipeline behavior between tasks (which can therefore coexist in order to minimize the application execution time), their communication patterns, and their data dependencies. This paper shows that knowledge of all this information can be effectively exploited to reduce the resource requirements and improve the timing performance of modern reconfigurable systems, where a set of hardware accelerators is used to support the computation. For this purpose, this paper proposes a novel task representation model, named Temporal Constrained Data Flow Diagram (TCDFD), which includes all this information. The paper also presents a mapping-scheduling algorithm that takes advantage of the new TCDFD model and aims at minimizing the dynamic reconfiguration overhead while meeting the communication requirements among the tasks. Experimental results show that the presented approach achieves resource savings of up to 75% and reconfiguration overhead reductions of up to 89% with respect to other state-of-the-art techniques for reconfigurable platforms.
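To make the modelling gap concrete, the sketch below shows the kind of per-task information a TCDFD-style representation has to carry beyond a plain data-flow graph (execution and reconfiguration times, data dependencies, communication volumes, and which producer-consumer pairs may be pipelined), together with a naive makespan comparison. The field names, numbers and overlap rule are illustrative assumptions, not the paper's actual model or algorithm.

```python
# Illustrative task representation with temporal (pipelining) information and
# a toy comparison of serial vs. partially overlapped schedules.
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    exec_time: float                                   # execution time on its accelerator (ms)
    reconfig_time: float                               # time to load its accelerator (ms)
    deps: list = field(default_factory=list)           # producer task names
    comm_kb: dict = field(default_factory=dict)        # data exchanged with each producer
    pipelined_with: set = field(default_factory=set)   # producers it may overlap with

tasks = {
    "decode": Task("decode", 4.0, 1.5),
    "filter": Task("filter", 6.0, 2.0, deps=["decode"],
                   comm_kb={"decode": 256}, pipelined_with={"decode"}),
    "encode": Task("encode", 5.0, 1.8, deps=["filter"], comm_kb={"filter": 128}),
}

def serial_makespan(order):
    """Reconfigure then execute each task, with no overlap at all."""
    return sum(tasks[n].reconfig_time + tasks[n].exec_time for n in order)

def pipelined_makespan(order):
    """Overlap half of a task's execution with its consumer where the model allows it."""
    total = 0.0
    for i, name in enumerate(order):
        t = tasks[name]
        nxt = tasks[order[i + 1]] if i + 1 < len(order) else None
        overlap = 0.5 * t.exec_time if nxt and name in nxt.pipelined_with else 0.0
        total += t.reconfig_time + t.exec_time - overlap
    return total

order = ["decode", "filter", "encode"]
print(f"serial: {serial_makespan(order):.1f} ms, pipelined: {pipelined_makespan(order):.1f} ms")
```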