917 results for Multiple-use forestry
Abstract:
Although people frequently pursue multiple goals simultaneously, these goals often conflict with each other. For instance, consumers may have both a healthy eating goal and a goal to have an enjoyable eating experience. In this dissertation, I focus on two sources of enjoyment in eating experiences that may conflict with healthy eating: consuming tasty food (Essay 1) and affiliating with indulging dining companions (Essay 2). In both essays, I examine solutions and strategies that decrease the conflict between healthy eating and these aspects of enjoyment in the eating experience, thereby enabling consumers to resolve such goal conflicts.
Essay 1 focuses on the well-established conflict between having healthy food and having tasty food and introduces a novel product offering (“vice-virtue bundles”) that can help consumers simultaneously address both health and taste goals. Through several experiments, I demonstrate that consumers often choose vice-virtue bundles with small proportions (¼) of vice and that they view such bundles as healthier than but equally tasty as bundles with larger vice proportions, indicating that “healthier” does not always have to equal “less tasty.”
Essay 2 focuses on a conflict between healthy eating and affiliation with indulging dining companions. The first set of experiments provides evidence of this conflict and examines why it arises (Studies 1 to 3). Based on this conflict’s origins, the second set of experiments tests strategies that consumers can use to decrease the conflict between healthy eating and affiliation with an indulging dining companion (Studies 4 and 5), such that they can make healthy food choices while still being liked by an indulging dining companion. Thus, Essay 2 broadens the existing picture of goals that conflict with the healthy eating goal and, together with Essay 1, identifies solutions to such goal conflicts.
Abstract:
The start of the twenty-first century has seen a further increase in the industrialization of Earth’s resources, as society aims to meet the needs of a growing population while still protecting our environmental and natural resources. The advent of the industrial bioeconomy – which encompasses the production of renewable biological resources and their conversion into food, feed, and bio-based products – is seen as an important step in the transition towards sustainable development and away from fossil fuels. One sector of the industrial bioeconomy that is rapidly expanding is the use of bio-based feedstocks in electricity production as an alternative to coal, especially in the European Union.
As bioeconomy policies and objectives increasingly appear on political agendas, there is a growing need to quantify the impacts of transitioning from fossil fuel-based feedstocks to renewable biological feedstocks. Specifically, a systems analysis of the potential risks of expanding the industrial bioeconomy is needed, given that the flows within it are inextricably linked. Furthermore, greater analysis is needed of the consequences of shifting from fossil fuels to renewable feedstocks, in part through the use of life cycle assessment modeling to analyze impacts along the entire value chain.
To assess the emerging nature of the industrial bioeconomy, three objectives are addressed: (1) quantify the global industrial bioeconomy, linking the use of primary resources with the ultimate end product; (2) quantify the impacts of the expanding wood pellet energy export market of the Southeastern United States; (3) conduct a comparative life cycle assessment, incorporating the use of dynamic life cycle assessment, of replacing coal-fired electricity generation in the United Kingdom with wood pellets that are produced in the Southeastern United States.
To quantify the emergent industrial bioeconomy, an empirical analysis was undertaken. Existing databases from multiple domestic and international agencies were aggregated and analyzed in Microsoft Excel to produce a harmonized dataset of the bioeconomy. First-person interviews, existing academic literature, and industry reports were then utilized to delineate the various intermediate and end-use flows within the bioeconomy. The results indicate that within a decade, the industrial use of agriculture has risen by ten percent, given increases in the production of bioenergy and bioproducts. The underlying resources supporting the emergent bioeconomy (i.e., land, water, and fertilizer use) were also quantified and included in the database.
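As an illustration of the kind of harmonization step described above (not the author's actual Excel workflow), the following sketch combines two hypothetical agency exports into a single dataset with pandas; the file names and column layouts are assumptions.

```python
# Illustrative sketch (not the author's Excel workflow): harmonizing bioeconomy
# production data from two hypothetical agency exports into one dataset.
import pandas as pd

# Hypothetical file names and column layouts; real agency exports will differ.
usda = pd.read_csv("usda_biomass_production.csv")   # columns: year, feedstock, tonnes
fao = pd.read_csv("fao_bioenergy_feedstocks.csv")   # columns: Year, Commodity, Production_t

# Map each source onto a common schema before combining.
usda_h = usda.rename(columns={"feedstock": "commodity", "tonnes": "production_t"})
fao_h = fao.rename(columns={"Year": "year", "Commodity": "commodity",
                            "Production_t": "production_t"})

harmonized = (
    pd.concat([usda_h.assign(source="USDA"), fao_h.assign(source="FAO")])
      .groupby(["year", "commodity"], as_index=False)["production_t"].sum()
)
print(harmonized.head())
```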
Following the quantification of the existing bioeconomy, an in-depth analysis of the bioenergy sector was conducted. Specifically, the focus was on quantifying the impacts of the emergent wood pellet export sector that has rapidly developed in recent years in the Southeastern United States. A cradle-to-gate life cycle assessment was conducted in order to quantify supply chain impacts from two wood pellet production scenarios: roundwood and sawmill residues. For each of the nine impact categories assessed, wood pellet production from sawmill residues resulted in values that were 10-31% higher.
The analysis of the wood pellet sector was then expanded to include the full life cycle (i.e., cradle-to-grave). In doing so, the combustion of biogenic carbon and the subsequent timing of emissions were assessed by incorporating dynamic life cycle assessment modeling. Assuming immediate carbon neutrality of the biomass, the results indicated an 86% reduction in global warming potential when utilizing wood pellets as compared to coal for electricity production in the United Kingdom. When incorporating the timing of emissions, wood pellets equated to a 75% or 96% reduction in carbon dioxide emissions, depending upon whether the forestry feedstock was considered to be harvested or planted in year one, respectively.
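To make the timing-of-emissions idea concrete, here is a minimal sketch of a dynamic accounting loop: each year's CO2 pulse is propagated through an atmospheric impulse-response function and summed over a fixed horizon, rather than being weighted with a single static GWP factor. The impulse-response parameterization and the emission profiles below are placeholders, not the thesis's model or data.

```python
# Minimal sketch of the dynamic-LCA idea: each year's CO2 pulse is propagated
# through an atmospheric decay (impulse response) function and summed over a
# fixed analytical horizon, instead of applying a single static GWP factor.
import numpy as np

HORIZON = 100  # years


def co2_remaining(t):
    """Placeholder CO2 impulse response: fraction of a pulse still airborne after t years."""
    return 0.217 + 0.259 * np.exp(-t / 172.9) + 0.338 * np.exp(-t / 18.51) + 0.186 * np.exp(-t / 1.186)


def cumulative_burden(pulses):
    """Sum the airborne fraction of every annual pulse over the horizon."""
    total = 0.0
    for year, amount in enumerate(pulses):
        years_airborne = np.arange(HORIZON - year)
        total += amount * co2_remaining(years_airborne).sum()
    return total


# Hypothetical annual emission pulses (kg CO2 per MWh), for illustration only.
coal = [950.0] + [0.0] * (HORIZON - 1)        # one-off combustion pulse
pellets = [130.0] + [-1.3] * (HORIZON - 1)    # pulse partly re-absorbed by regrowth

print("relative burden:", cumulative_burden(pellets) / cumulative_burden(coal))
```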
Finally, a policy analysis of renewable energy in the United States was conducted. Existing coal-fired power plants in the Southeastern United States were assessed in terms of incorporating the co-firing of wood pellets. Co-firing wood pellets with coal in existing Southeastern United States power stations would result in a nine percent reduction in global warming potential.
Abstract:
Protected areas are the leading forest conservation policy for species and ecosystem-services goals, and they may feature in climate policy if countries with tropical forest rely on familiar tools. For Brazil's Legal Amazon, we estimate the average impact of protection upon deforestation and show how protected areas' forest impacts vary significantly with development pressure. We use matching, i.e., comparisons that are apples-to-apples in observed land characteristics, to address the fact that protected areas (PAs) tend to be located on lands facing less pressure. Correcting for that location bias lowers our estimates of PAs' forest impacts by roughly half. Further, it reveals significant variation in PA impacts along development-related dimensions: for example, the PAs closer to roads and the PAs closer to cities have higher impact. Planners have multiple conservation and development goals and are constrained by cost, yet conservation planning should still reflect what our results imply about the future impacts of PAs.
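A hedged sketch of the matching step, for illustration only: each protected parcel is paired with the most similar unprotected parcel on observed land characteristics, and deforestation rates are then compared. The data file and column names are hypothetical.

```python
# Hedged sketch of the matching idea: pair each protected parcel with the
# unprotected parcel most similar on observed land characteristics, then
# compare deforestation rates. Column names and data are hypothetical.
import pandas as pd
from sklearn.neighbors import NearestNeighbors

parcels = pd.read_csv("amazon_parcels.csv")  # hypothetical file
covariates = ["slope", "dist_to_road_km", "dist_to_city_km", "soil_quality"]

protected = parcels[parcels["protected"] == 1]
unprotected = parcels[parcels["protected"] == 0]

# Standardize covariates so no single characteristic dominates the distance.
means, stds = parcels[covariates].mean(), parcels[covariates].std()
z_prot = (protected[covariates] - means) / stds
z_unprot = (unprotected[covariates] - means) / stds

nn = NearestNeighbors(n_neighbors=1).fit(z_unprot.values)
_, idx = nn.kneighbors(z_prot.values)
matched = unprotected.iloc[idx.ravel()]

# Avoided deforestation relative to apples-to-apples comparison parcels.
impact = matched["deforested"].mean() - protected["deforested"].mean()
print(f"estimated avoided deforestation per parcel: {impact:.3f}")
```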
Abstract:
Surveys can collect important data that inform policy decisions and drive social science research. Large government surveys collect information from the U.S. population on a wide range of topics, including demographics, education, employment, and lifestyle. Analysis of survey data presents unique challenges. In particular, one needs to account for missing data, for complex sampling designs, and for measurement error. Conceptually, a survey organization could devote substantial resources to obtaining high-quality responses from a simple random sample, resulting in survey data that are easy to analyze. However, this scenario often is not realistic. To address these practical issues, survey organizations can leverage the information available from other sources of data. For example, in longitudinal studies that suffer from attrition, they can use the information from refreshment samples to correct for potential attrition bias. They can use information from known marginal distributions or survey design to improve inferences. They can use information from gold standard sources to correct for measurement error.
This thesis presents novel approaches to combining information from multiple sources that address the three problems described above.
The first method addresses nonignorable unit nonresponse and attrition in a panel survey with a refreshment sample. Panel surveys typically suffer from attrition, which can lead to biased inference when basing analysis only on cases that complete all waves of the panel. Unfortunately, the panel data alone cannot inform the extent of the bias due to attrition, so analysts must make strong and untestable assumptions about the missing data mechanism. Many panel studies also include refreshment samples, which are data collected from a random sample of new individuals during some later wave of the panel. Refreshment samples offer information that can be utilized to correct for biases induced by nonignorable attrition while reducing reliance on strong assumptions about the attrition process. To date, these bias correction methods have not dealt with two key practical issues in panel studies: unit nonresponse in the initial wave of the panel and in the refreshment sample itself. As we illustrate, nonignorable unit nonresponse can significantly compromise the analyst's ability to use the refreshment samples for attrition bias correction. Thus, it is crucial for analysts to assess how sensitive their inferences, corrected for panel attrition, are to different assumptions about the nature of the unit nonresponse. We present an approach that facilitates such sensitivity analyses, both for suspected nonignorable unit nonresponse in the initial wave and in the refreshment sample. We illustrate the approach using simulation studies and an analysis of data from the 2007-2008 Associated Press/Yahoo News election panel study.
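The following small simulation (not the thesis's model) illustrates why refreshment samples matter: when attrition depends on the unobserved wave-2 outcome, complete-case estimates are biased, while a fresh random sample drawn at wave 2 recovers the wave-2 margin that anchors the correction.

```python
# Small simulation illustrating the role of a refreshment sample: attrition
# that depends on the wave-2 outcome biases complete-case estimates, while a
# fresh random sample at wave 2 recovers the wave-2 margin.
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

y1 = rng.normal(0, 1, n)                      # wave-1 outcome
y2 = 0.6 * y1 + rng.normal(0, 1, n)           # wave-2 outcome (true mean is 0)
# Nonignorable attrition: probability of staying depends on the unobserved y2.
stay = rng.random(n) < 1 / (1 + np.exp(-(0.3 - 1.0 * y2)))

complete_case_mean = y2[stay].mean()          # biased estimate of E[y2]
refresh_y1 = rng.normal(0, 1, 5_000)          # refreshment sample at wave 2
refresh_y2 = 0.6 * refresh_y1 + rng.normal(0, 1, 5_000)
refreshment_mean = refresh_y2.mean()          # unbiased estimate of E[y2]

print(f"complete cases: {complete_case_mean:.3f}, refreshment sample: {refreshment_mean:.3f}")
```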
The second method incorporates informative prior beliefs about marginal probabilities into Bayesian latent class models for categorical data. The basic idea is to append synthetic observations to the original data such that (i) the empirical distributions of the desired margins match those of the prior beliefs, and (ii) the values of the remaining variables are left missing. The degree of prior uncertainty is controlled by the number of augmented records. Posterior inferences can be obtained via typical MCMC algorithms for latent class models, tailored to deal efficiently with the missing values in the concatenated data. We illustrate the approach using a variety of simulations based on data from the American Community Survey, including an example of how augmented records can be used to fit latent class models to data from stratified samples.
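A minimal sketch of the augmented-records construction, with hypothetical variables: the synthetic rows carry the prior proportions for one margin and leave all other variables missing, and their count sets the prior weight.

```python
# Minimal sketch of the augmented-records idea: synthetic rows carry the prior
# belief about one margin (a hypothetical education variable) and leave every
# other variable missing; the count of synthetic rows sets the prior weight.
import numpy as np
import pandas as pd

original = pd.DataFrame({
    "education": np.random.choice(["HS", "BA", "Grad"], size=1000),
    "employment": np.random.choice(["employed", "not_employed"], size=1000),
})

prior_margin = {"HS": 0.55, "BA": 0.30, "Grad": 0.15}  # assumed prior beliefs
n_aug = 500                                            # prior "sample size"

augmented = pd.DataFrame({
    "education": np.repeat(list(prior_margin),
                           [int(p * n_aug) for p in prior_margin.values()]),
    "employment": np.nan,                              # left missing by design
})

concatenated = pd.concat([original, augmented], ignore_index=True)
print(concatenated["education"].value_counts(normalize=True))
# An MCMC sampler for the latent class model would then treat the NaNs as
# missing values to be imputed at each iteration.
```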
The third method leverages the information from a gold standard survey to model reporting error. Survey data are subject to reporting error when respondents misunderstand the question or accidentally select the wrong response. Sometimes survey respondents knowingly select the wrong response, for example, by reporting a higher level of education than they actually have attained. We present an approach that allows an analyst to model reporting error by incorporating information from a gold standard survey. The analyst can specify various reporting error models and assess how sensitive their conclusions are to different assumptions about the reporting error process. We illustrate the approach using simulations based on data from the 1993 National Survey of College Graduates. We use the method to impute error-corrected educational attainments in the 2010 American Community Survey using the 2010 National Survey of College Graduates as the gold standard survey.
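As a toy illustration of the general idea (not the thesis's Bayesian model), one can estimate a misclassification matrix from a gold-standard file in which both the reported and true values are observed, and then solve for the true distribution implied by the reported counts in the main survey; all numbers below are hypothetical.

```python
# Toy sketch: estimate P(reported | true) from a gold-standard file where both
# values are observed, then solve for the true education distribution implied
# by the reported distribution in the main survey. Numbers are hypothetical.
import numpy as np

levels = ["HS", "BA", "Grad"]

# Hypothetical gold-standard counts: rows = true level, cols = reported level.
gold_counts = np.array([
    [900,  80,  20],
    [ 30, 850,  70],
    [  5,  60, 935],
], dtype=float)
p_report_given_true = gold_counts / gold_counts.sum(axis=1, keepdims=True)

# Hypothetical reported distribution in the main survey.
reported = np.array([0.42, 0.38, 0.20])

# reported = P(reported|true)^T @ true  =>  solve the linear system for `true`.
true_dist = np.linalg.solve(p_report_given_true.T, reported)
print(dict(zip(levels, np.round(true_dist, 3))))
```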
Abstract:
This paper reports the results of on-body experimental tests of a set of four planar differential antennas, obtained through design variations of radiating elements with the same shape and characterized by their potential to cover wide and narrow bands. All the antenna designs have been implemented on low-cost FR4 substrate and characterized experimentally through on-body measurements. The results show the impact of proximity to the human body on antenna performance, as well as the opportunities for covering wide and narrow bands in future ad hoc designs implemented on wearable substrates and targeting on-body and off-body communication and sensing applications.
Abstract:
Exergames are digital games with a physical exertion component. Exergames can help motivate fitness in people not inclined toward exercise. However, players of exergames sometimes over-exert, risking adverse health effects. These players must be told to slow down, but doing so may distract them from gameplay and diminish their desire to keep exercising. In this thesis we apply the concept of nudges—indirect suggestions that gently push people toward a desired behaviour—to keeping exergame players from over-exerting. We describe the effective use of nudges through a set of four design principles: natural integration, comprehension, progression, and multiple channels. We describe two exergames modified to use nudges to persuade players to slow down, and describe the studies evaluating the use of nudges in these games. PlaneGame shows that nudges can be as effective as an explicit textual display to control player over-exertion. Gekku Race demonstrates that nudges are not necessarily effective when players have a strong incentive to over-exert. However, Gekku Race also shows that, even in high-energy games, the power of nudges can be maintained by adding negative consequences to the nudges. We use the term "shove" to describe a nudge using negative consequences to increase its pressure. We were concerned that making players slow down would damage their immersion—the feeling of being engaged with a game. However, testing showed no loss of immersion through the use of nudges to reduce exertion. Players reported that the nudges and shoves motivated them to slow down when they were over-exerting, and fit naturally into the games.
Bullying Involvement and Adolescent Substance Use: A Study of Multilevel Risk and Protective Factors
Abstract:
Bullying, frequent drunkenness, and frequent cannabis use are significant health-risk behaviours among youth. While many studies have demonstrated that bullying involvement may initiate a developmental pathway to both types of frequent substance use, there is a limited understanding of the connection between these behaviours. The presence of risk and protective factors within youths’ relationships and within their neighbourhoods may alter the associations between bullying involvement and both types of frequent substance use. A systemic approach is needed to assess the complex, social environments in which youth are embedded. The current thesis consists of two studies that examined the associations between bullying and both types of frequent substance use within the context of youths’ social environments. In Study 1, multilevel modeling was used to examine the associations between bullying and frequent substance use within the context of individual and neighbourhood risk factors. Our results indicated that the risk factors associated with both frequent drunkenness and frequent cannabis use exist at both levels, with neighbourhoods altering the association of individual risk factors. Moreover, bullying was a unique risk factor associated with both types of frequent substance use, whereas indirect associations were observed for victimization. Study 2 used a similar methodology to examine the association between bullying and both types of frequent substance use within the context of individual and neighbourhood protective factors. Once again, our results indicated that the protective factors associated with both types of frequent substance use exist at multiple levels, and that neighbourhoods altered the association of individual protective factors. Additionally, positive relationship characteristics moderated the link between bullying and both types of frequent substance use. Together, these findings clarify the nature of the bullying-substance use link and emphasize the need to study adolescent development in context.
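For readers unfamiliar with the modeling setup, here is a hedged sketch of a two-level analysis in the spirit of Study 1. The variable names are hypothetical, and a linear mixed model serves as a stand-in for the multilevel (logistic) models that a binary substance-use outcome would normally require.

```python
# Hedged sketch of a two-level analysis: random intercepts per neighbourhood
# and a cross-level interaction between an individual risk factor (bullying)
# and a neighbourhood-level factor. Variable names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

youth = pd.read_csv("youth_survey.csv")  # hypothetical file
# Columns assumed: frequent_drunkenness (0/1), bullying (0/1), family_support,
# neighbourhood_disadvantage, neighbourhood_id.

model = smf.mixedlm(
    "frequent_drunkenness ~ bullying * neighbourhood_disadvantage + family_support",
    data=youth,
    groups=youth["neighbourhood_id"],   # random intercept per neighbourhood
)
result = model.fit()
print(result.summary())
# The bullying:neighbourhood_disadvantage coefficient captures how the
# neighbourhood-level factor alters the individual-level bullying association.
```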
Abstract:
DH-JH rearrangements of the Ig heavy-chain gene (IGH) occur early during B-cell development. Consequently, they are detected in precursor-B-cell acute lymphoblastic leukemias both at diagnosis and relapse. Incomplete DJH rearrangements have also been occasionally reported in mature B-cell lymphoproliferative disorders, but their frequency and immunobiological characteristics have not been studied in detail. We have investigated the frequency and characteristics of incomplete DJH as well as complete VDJH rearrangements in a series of 84 untreated multiple myeloma (MM) patients. The overall detection rate of clonality by amplifying VDJH and DJH rearrangements using family-specific primers was 94%. Interestingly, we found a high frequency (60%) of DJH rearrangements in this group. As expected from an immunological point of view, the vast majority of DJH rearrangements (88%) were unmutated. To the best of our knowledge, this is the first systematic study describing the incidence of incomplete DJH rearrangements in a series of unselected MM patients. These results strongly support the use of DJH rearrangements as PCR targets for clonality studies and, particularly, for quantification of minimal residual disease by real-time quantitative PCR using consensus JH probes in MM patients. The finding of hypermutation in a small proportion of incomplete DJH rearrangements (six out of 50) may have important biological implications for the process of somatic hypermutation. Moreover, our data offer new insight into the regulatory development model of IGH rearrangements.
Abstract:
The hypervariable regions of immunoglobulin heavy-chain (IgH) rearrangements provide a specific tumor marker in multiple myeloma (MM). Recently, real-time PCR assays have been developed in order to quantify the number of tumor cells after treatment. However, these strategies are hampered by the presence of somatic hypermutation (SH) in VDJH rearrangements from MM patients, which causes mismatches between primers and/or probes and the target, leading to inaccurate quantification of tumor cells. Our group has recently described a 60% incidence of incomplete DJH rearrangements in MM patients, with no or very low rates of SH. In this study, we compare the efficiency of a real-time PCR approach for the analysis of both complete and incomplete IgH rearrangements in eight MM patients using only three JH consensus probes. We were able to design an allele-specific oligonucleotide for both the complete and incomplete rearrangement in all patients. DJH rearrangements fulfilled the criteria of effectiveness for real-time PCR in all samples (i.e., no unspecific amplification, detection of fewer than 10 tumor cells within a background of 10^5 polyclonal cells, and correlation coefficients of standard curves higher than 0.98). By contrast, only three out of eight VDJH rearrangements fulfilled these criteria. Further analyses showed that the remaining five VDJH rearrangements carried three or more somatic mutations in the probe and primer sites, leading to a dramatic decrease in the melting temperature. These results support the use of incomplete DJH rearrangements instead of complete, somatically mutated VDJH rearrangements for the investigation of minimal residual disease in multiple myeloma.
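A small numerical sketch of the quantification step behind such real-time PCR assays: fit a standard curve of Ct against the logarithm of the tumor-cell input from a dilution series, check the correlation coefficient, and invert the curve for a follow-up sample. The readings below are illustrative, not taken from the study.

```python
# Sketch of real-time PCR quantification: fit a standard curve of Ct versus
# log10(tumor-cell input) from a dilution series, check the correlation
# coefficient, and invert the curve for a follow-up sample. Numbers are
# illustrative only.
import numpy as np

log_cells = np.array([5, 4, 3, 2, 1], dtype=float)     # 10^5 ... 10^1 cells
ct_values = np.array([21.0, 24.4, 27.8, 31.3, 34.7])   # hypothetical Ct readings

slope, intercept = np.polyfit(log_cells, ct_values, 1)
r = np.corrcoef(log_cells, ct_values)[0, 1]
print(f"slope={slope:.2f}, intercept={intercept:.2f}, r^2={r**2:.3f}")  # want r^2 > 0.98

# Quantify a follow-up sample from its measured Ct.
ct_followup = 30.1
estimated_cells = 10 ** ((ct_followup - intercept) / slope)
print(f"estimated tumor cells: {estimated_cells:.0f}")
```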
Abstract:
In the present paper, we report on the use of the heteroduplex PCR technique to detect clonally rearranged VDJ segments of the heavy-chain immunoglobulin gene (VDJH) in the apheresis products of patients with multiple myeloma (MM) undergoing autologous peripheral blood stem cell (APBSC) transplantation. Twenty-three out of 31 MM patients undergoing APBSC transplantation in whom clonally rearranged VDJH segments had been detected at diagnosis were included in the study. Samples of the apheresis products were PCR amplified using JH and VH (FRIII and FRII) consensus primers, analyzed with the heteroduplex technique, and compared with the samples obtained at diagnosis. Positive results (presence of clonally rearranged VDJH segments in at least one apheresis) were obtained in 52% of cases. The presence of positive results in the apheresis products was not related to any pretransplant characteristic except response status at transplant. Thus, while no patient with positive apheresis products was in complete remission (CR; negative immunofixation) before the transplant, five cases (46%) with negative apheresis were already in CR at transplant (P = 0.01). The remaining six cases with heteroduplex PCR-negative apheresis were in partial remission before transplant. Patients with clonally free products were more likely to obtain CR following transplant (64% vs 17%, P = 0.02) and had a longer progression-free survival (40 months in patients transplanted with polyclonal products vs 20 months with monoclonal ones, P = 0.03). These results were consistent for overall survival, which was better in patients with negative apheresis than in those with positive apheresis (83% vs 36% at 5 years from diagnosis, P = 0.01). These findings indicate that the presence of clonally rearranged VDJH segments in the apheresis products is related to response and outcome in transplanted MM patients.
Abstract:
BACKGROUND AND OBJECTIVE: Molecular analysis by PCR of monoclonally rearranged immunoglobulin (Ig) genes can be used for diagnosis in B-cell lymphoproliferative disorders (LPD), as well as for monitoring minimal residual disease (MRD) after treatment. This technique carries the risk of false-positive results due to the "background" amplification of similar rearrangements derived from polyclonal B cells. This problem can be resolved in advance by additional analyses that discriminate between polyclonal and monoclonal PCR products, such as heteroduplex analysis. A second problem is that PCR frequently fails to amplify the junction regions, mainly because of somatic mutations frequently present in mature (post-follicular) B-cell lymphoproliferations. The use of additional targets (e.g. Ig light-chain genes) can avoid this problem. DESIGN AND METHODS: We studied the specificity of heteroduplex PCR analysis of several Ig junction regions to detect monoclonal products in samples from 84 MM patients and 24 patients with polyclonal B-cell disorders. RESULTS: Using two distinct VH consensus primers (FR3 and FR2) in combination with one JH primer, 79% of the MM cases displayed monoclonal products. The percentage of positive cases was increased by amplification of the Vlambda-Jlambda junction regions or kappa-deleting element (Kde) rearrangements, using two or five pairs of consensus primers, respectively. After including these targets in the heteroduplex PCR analysis, 93% of MM cases displayed monoclonal products. None of the polyclonal samples analyzed yielded monoclonal products. Dilution experiments showed that monoclonal rearrangements could be detected with a sensitivity of at least 10^-2 in a background with >30% polyclonal B cells, the sensitivity increasing up to 10^-3 when the polyclonal background was
Abstract:
Land-use change is a major source of greenhouse gas emissions. Converting ecosystems with permanent natural vegetation into cropland whose soil is temporarily bare (e.g., after tillage before sowing) frequently increases greenhouse gas emissions and reduces carbon sequestration. Worldwide, cropping is expanding in both smallholder and agro-industrial systems, often into neighbouring semi-arid to sub-humid rangeland ecosystems. This thesis examines trends in land-use change in the Borana rangelands of southern Ethiopia. Population growth, land privatization and the associated fencing, changes in land-use policy, and increasing climate variability are driving rapid change in pastoral systems traditionally based on livestock keeping. Based on a literature review of case studies in East African rangelands, a schematic model of the relationships between land use, greenhouse gas emissions, and carbon sequestration was developed. Satellite data and household survey data were used to analyse the type and extent of land-use and vegetation change at five study sites (Darito in Yabelo district; Soda, Samaro, Haralo, and Did Mega, all in Dire district) between 1985 and 2011. In Darito, cropland expanded by 12%, mostly at the expense of bushland. At the other sites cropland area remained relatively constant, but grassland vegetation increased by 16-28% while bushland decreased by 23-31%. Only at Haralo did bare, vegetation-free land also increase, by 13%. Factors driving cropland expansion were examined in more detail at the Darito site. GPS data and cropping-history data from 108 fields on 54 farms were overlaid in a geographic information system (GIS) with thematic soil, rainfall, and slope maps and a digital elevation model. Multiple linear regression identified slope and elevation as significant explanatory variables for the expansion of cropping into lower-lying areas, whereas soil type, distance to the seasonal river, and rainfall were not significant. The low coefficient of determination (R² = 0.154) indicates that further explanatory variables, not captured here, influence the direction of the spatial expansion of cropland. Scatter plots of field size and years of cultivation against elevation show that, since 2000, cropping has expanded into areas below 1620 m a.s.l. and field sizes have increased (>3 ha). Analysis of the phenological development of crops over the year, combined with rainfall data and normalized difference vegetation index (NDVI) time series, was used to identify periods of particularly high (green-up before harvest) or low (after tillage) plant biomass on cropland, so that cropland and its expansion could be distinguished from other vegetation types by remote sensing. Based on the NDVI spectral profiles, cropland could be distinguished well from forest but less well from grassland and bushland. The coarse resolution (250 m) of the Moderate Resolution Imaging Spectroradiometer (MODIS) NDVI data led to a mixed-pixel effect, i.e., the area of a single pixel often contained different vegetation types in varying proportions, which hampered their discrimination.
Developing a real-time monitoring system for cropland expansion would require higher-resolution NDVI data (e.g., multispectral bands from the Hyperion EO-1 sensor) to achieve better small-scale differentiation between cropland and natural rangeland vegetation. The development and use of such methods as decision-support tools for land- and resource-use planning could help reconcile the production and development goals of Borana land users with national efforts to mitigate climate change by increasing carbon sequestration in rangelands.
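A hedged sketch of the multiple linear regression step described above, assuming a hypothetical export of the GIS overlay; the thesis's actual dataset and variable coding will differ.

```python
# Hedged sketch of the multiple linear regression step, with hypothetical
# column names standing in for the GIS-derived explanatory variables.
import pandas as pd
import statsmodels.formula.api as smf

fields = pd.read_csv("darito_fields.csv")  # hypothetical export of the GIS overlay
# Columns assumed: cropland_expansion, slope_deg, elevation_m, soil_type,
# dist_river_km, rainfall_mm.

model = smf.ols(
    "cropland_expansion ~ slope_deg + elevation_m + C(soil_type) + dist_river_km + rainfall_mm",
    data=fields,
).fit()
print(model.summary())                          # coefficients and p-values
print("R-squared:", round(model.rsquared, 3))   # the thesis reports R^2 = 0.154
```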
Abstract:
When wood is transported from the forest to the mills, many unforeseen events can occur that disrupt the planned trips (for example, because of weather conditions, forest fires, the arrival of new loads, etc.). When such events become known only during a trip, the truck making that trip must be diverted to an alternative route. Without information about such a route, the driver is likely to choose an alternative that is unnecessarily long or, worse, one that is itself closed because of an unforeseen event. It is therefore essential to provide drivers with real-time information, in particular suggested alternative routes when a planned road turns out to be impassable. The recourse options available when the unexpected happens depend on the characteristics of the supply chain under study, such as the presence of self-loading trucks and the transportation management policy. We present three articles addressing different application contexts, together with models and solution methods adapted to each context. In the first article, truck drivers have the entire weekly plan for the current week. In this context, every effort must be made to minimize changes to the initial plan. Although the truck fleet is homogeneous, drivers are ranked by priority: the highest-priority drivers receive the largest workloads, and minimizing changes to their plans is also a priority. Since the consequences of unforeseen events on the transportation plan are essentially cancellations and/or delays of certain trips, the proposed approach first handles the cancellation or delay of a single trip and is then generalized to handle more complex events. In this approach, we try to reschedule the affected trips within the same week so that a loader is free when the truck arrives at both the forest site and the mill; in this way, the trips of the other trucks are not modified. This approach provides dispatchers with alternative plans within a few seconds. Better solutions could be obtained if the dispatcher were allowed to make more changes to the initial plan. In the second article, we consider a context in which drivers are given only one trip at a time: the dispatcher waits until a driver finishes a trip before revealing the next one. This context is more flexible and offers more recourse options when unforeseen events occur. In addition, the weekly problem can be split into daily problems, since demand is daily and the mills are open for limited periods during the day. We use a mathematical programming model based on a time-space network to react to disruptions. Although disruptions can affect the initial transportation plan in different ways, a key feature of the proposed model is that it remains valid for handling all unforeseen events, whatever their nature: their impact is captured in the time-space network and in the input parameters rather than in the model itself. The model is solved for the current day each time an unforeseen event is revealed.
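A minimal sketch of the time-space-network idea: nodes are (location, time) pairs, arcs are feasible moves with travel-time costs, a closed road simply loses its arcs, and the alternative route is the new shortest path. The toy network below is an illustration, not the article's model.

```python
# Toy time-space network: nodes are (location, hour) pairs, arcs are feasible
# moves with travel-time costs, and a road closure removes its arcs before the
# route is re-solved as a shortest path.
import networkx as nx

G = nx.DiGraph()
G.add_edge(("forest", 6), ("junction", 8), weight=2)
G.add_edge(("junction", 8), ("mill", 10), weight=2)   # planned road
G.add_edge(("junction", 8), ("detour", 9), weight=1)  # alternative road
G.add_edge(("detour", 9), ("mill", 12), weight=3)
# Collect every arrival time at the mill into one sink node.
G.add_edge(("mill", 10), "mill_arrival", weight=0)
G.add_edge(("mill", 12), "mill_arrival", weight=0)

plan = nx.shortest_path(G, ("forest", 6), "mill_arrival", weight="weight")

# An unforeseen event closes the planned road: drop its arc and re-solve.
G.remove_edge(("junction", 8), ("mill", 10))
recourse = nx.shortest_path(G, ("forest", 6), "mill_arrival", weight="weight")
print("initial plan:", plan)
print("recourse route:", recourse)
```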
In the last article, the truck fleet is heterogeneous and includes trucks with on-board loaders. The route configuration of these trucks differs from that of regular trucks, because they do not need to be synchronized with the loaders. We use a mathematical model in which the columns can be easily and naturally interpreted as truck routes, and we solve this model with column generation. First, we relax the integrality of the decision variables and consider only a subset of the feasible routes; routes with the potential to improve the current solution are added to the model iteratively. A time-space network is used both to represent the impacts of unforeseen events and to generate these routes. The resulting solution is generally fractional, and a branch-and-price algorithm is used to find integer solutions. Several disruption scenarios were developed to test the proposed approach on case studies from the Canadian forest industry, and numerical results are presented for all three contexts.
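A compact sketch of the column-generation loop, for illustration only: solve a restricted master LP over a few known routes, recover dual prices, and add any candidate route with negative reduced cost. The candidate pool is a toy stand-in for the time-space-network route generator, and this is not the article's branch-and-price implementation.

```python
# Compact column-generation sketch: a restricted master LP over a few routes,
# dual prices recovered from the (non-degenerate) optimal basis, and candidate
# routes added whenever their reduced cost is negative.
import numpy as np
from scipy.optimize import linprog

demand = np.array([2.0, 1.0])               # truckloads required at two mills
# Each route: (cost, loads delivered to [mill 1, mill 2])
routes = [(4.0, [1, 0]), (5.0, [0, 1])]     # restricted set of columns
candidates = [(6.0, [1, 1])]                # routes not yet in the master

while True:
    costs = np.array([c for c, _ in routes])
    A_eq = np.array([a for _, a in routes], dtype=float).T
    res = linprog(costs, A_eq=A_eq, b_eq=demand, method="highs")

    # Dual prices y solve B^T y = c_B for the optimal basis (toy data is non-degenerate).
    basic = res.x > 1e-9
    y = np.linalg.solve(A_eq[:, basic].T, costs[basic])

    # Pricing step: look for a candidate route with negative reduced cost.
    entering = [(c, a) for c, a in candidates if c - y @ np.array(a) < -1e-9]
    if not entering:
        break
    routes.append(entering[0])
    candidates.remove(entering[0])

print("route usage:", np.round(res.x, 2), "total LP cost:", res.fun)
```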
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08