919 results for numerical models


Relevance: 30.00%

Abstract:

The ultimate problem considered in this thesis is modeling a high-dimensional joint distribution over a set of discrete variables. For this purpose, we consider classes of context-specific graphical models and the main emphasis is on learning the structure of such models from data. Traditional graphical models compactly represent a joint distribution through a factorization justified by statements of conditional independence which are encoded by a graph structure. Context-specific independence is a natural generalization of conditional independence that only holds in a certain context, specified by the conditioning variables. We introduce context-specific generalizations of both Bayesian networks and Markov networks by including statements of context-specific independence which can be encoded as a part of the model structures. For the purpose of learning context-specific model structures from data, we derive score functions, based on results from Bayesian statistics, by which the plausibility of a structure is assessed. To identify high-scoring structures, we construct stochastic and deterministic search algorithms designed to exploit the structural decomposition of our score functions. Numerical experiments on synthetic and real-world data show that the increased flexibility of context-specific structures can more accurately emulate the dependence structure among the variables and thereby improve the predictive accuracy of the models.
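As a rough illustration of the score-based structure learning described above, the sketch below computes a Dirichlet-multinomial (K2-style) marginal-likelihood score for a discrete variable given a candidate parent set and greedily adds parents while the score improves. It is a generic decomposable-score search, not the thesis's context-specific score; the data array, candidate set and hyperparameter alpha are hypothetical.

```python
import numpy as np
from itertools import product
from scipy.special import gammaln

def family_score(data, child, parents, alpha=1.0):
    """Dirichlet-multinomial (K2-style, alpha per cell) log marginal likelihood
    of one discrete child variable given a candidate parent set."""
    r = int(data[:, child].max()) + 1                        # child cardinality
    parent_cards = [int(data[:, p].max()) + 1 for p in parents]
    score = 0.0
    for config in product(*(range(c) for c in parent_cards)):
        mask = np.ones(len(data), dtype=bool)
        for p, v in zip(parents, config):
            mask &= data[:, p] == v
        counts = np.bincount(data[mask, child], minlength=r)
        n = counts.sum()
        score += (gammaln(r * alpha) - gammaln(r * alpha + n)
                  + np.sum(gammaln(alpha + counts) - gammaln(alpha)))
    return score

def greedy_parents(data, child, candidates):
    """Greedily add parents while the decomposable score improves."""
    parents, best = [], family_score(data, child, [])
    improved = True
    while improved:
        improved = False
        for c in set(candidates) - set(parents):
            s = family_score(data, child, parents + [c])
            if s > best:
                best, parents, improved = s, parents + [c], True
    return parents, best

# Hypothetical binary data: X2 is the OR of X0 and X1.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 3))
X[:, 2] = X[:, 0] | X[:, 1]
print(greedy_parents(X, child=2, candidates=[0, 1]))
```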

Relevance: 30.00%

Abstract:

We tested the prediction that, if hoverflies are Batesian mimics, this may extend to behavioral mimicry such that their numerical abundance at each hour of the day (the daily activity pattern) is related to the numbers of their hymenopteran models. After accounting for site, season, microclimatic responses and general hoverfly abundance at three sites in north-west England, the residual numbers of mimics were significantly positively correlated with the numbers of their models in 9 cases out of 17, while 16 out of 17 relationships were positive, itself a highly significant non-random pattern. Several eristaline flies showed significant relationships with honeybees even though some of them mimic wasps or bumblebees, perhaps reflecting an ancestral resemblance to honeybees. There was no evidence that good and poor mimics differed in their daily activity pattern relationships with models. However, the common mimics showed significant activity-pattern relationships with their models, but the rarer mimics did not. We conclude that many hoverflies show behavioral mimicry of their hymenopteran models.
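A minimal sketch of the kind of residual analysis described above, using hypothetical hourly counts: regress mimic counts on a covariate standing in for general hoverfly abundance (the real study also controlled for site, season and microclimate), then correlate the residuals with the counts of hymenopteran models.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
hours = np.arange(6, 23)                              # hourly observation slots
general_abundance = rng.poisson(50, size=hours.size)  # hypothetical covariate
model_counts = rng.poisson(20, size=hours.size)       # hymenopteran models
# Hypothetical mimic counts: track general abundance plus the models' pattern.
mimic_counts = 0.4 * general_abundance + 0.8 * model_counts + rng.normal(0, 3, hours.size)

# Step 1: remove the effect of general abundance by linear regression.
slope, intercept, *_ = stats.linregress(general_abundance, mimic_counts)
residuals = mimic_counts - (intercept + slope * general_abundance)

# Step 2: test whether residual mimic numbers track model numbers.
r, p = stats.pearsonr(residuals, model_counts)
print(f"residual correlation r={r:.2f}, p={p:.3f}")
```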

Relevance: 30.00%

Abstract:

As the formative agents of cloud droplets, aerosols play an undeniably important role in the development of clouds and precipitation. Few meteorological models have been developed or adapted to simulate aerosols and their contribution to cloud and precipitation processes. The Weather Research and Forecasting model (WRF) has recently been coupled with an atmospheric chemistry suite, jointly referred to as WRF-Chem, allowing atmospheric chemistry and meteorology to influence each other's evolution within a mesoscale modeling framework. Provided that the model physics are robust, this framework allows the feedbacks between aerosol chemistry, cloud physics, and dynamics to be investigated. This study focuses on the effects of aerosols on meteorology, specifically the interaction of aerosol chemical species with microphysical processes represented within the WRF-Chem framework. Aerosols are represented by eight size bins using the Model for Simulating Aerosol Interactions and Chemistry (MOSAIC) sectional parameterization, which is linked to the Purdue Lin bulk microphysics scheme. The aim of this study is to examine the sensitivity of deep convective precipitation modeled by the 2D WRF-Chem to varying aerosol number concentration and aerosol type. A systematic study has been performed of the effects of aerosols on parameters such as total precipitation, updraft/downdraft speed, distribution of hydrometeor species, and organizational features, within idealized maritime and continental thermodynamic environments. Initial results were obtained using WRFv3.0.1, and a second series of tests was run using WRFv3.2 after several changes to the activation, autoconversion, and Lin et al. microphysics schemes added by the WRF community, as well as the implementation of prescribed vertical levels by the author. The results of the WRFv3.2 runs contrasted starkly with the WRFv3.0.1 runs. The WRFv3.0.1 runs produced a propagating system resembling a developing squall line, whereas the WRFv3.2 runs did not. The response of total precipitation, updraft/downdraft speeds, and system organization to increasing aerosol concentrations was opposite between runs with different versions of WRF. Results of the WRFv3.2 runs, however, were in better agreement, in the timing and magnitude of vertical velocity and hydrometeor content, with a WRFv3.0.1 run using single-moment Lin et al. microphysics than with the WRFv3.0.1 runs with chemistry. One result consistent throughout all simulations was an inhibition of warm-rain processes due to enhanced aerosol concentrations, which resulted in a delay of precipitation onset ranging from 2-3 minutes in WRFv3.2 runs to as much as 15 minutes in WRFv3.0.1 runs. This result was not observed in a previous study by Ntelekos et al. (2009) using the WRF-Chem, perhaps due to their use of coarser horizontal and vertical resolution. The changes to microphysical processes such as activation and autoconversion from WRFv3.0.1 to WRFv3.2, along with changes in the packing of vertical levels, had more impact than the varying aerosol concentrations, even though the range of aerosol concentrations tested was greater than that observed in field studies. In order to take full advantage of the aerosol input now offered by the chemistry module in WRF, the author recommends that a fully double-moment microphysics scheme be linked, rather than the limited double-moment Lin et al. scheme that currently exists. With this modification, the WRF-Chem will be a powerful tool for studying aerosol-cloud interactions and will allow comparison of results with other studies using more modern and complex microphysical parameterizations.
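The precipitation-onset delay reported above is straightforward to diagnose in post-processing; the sketch below defines onset as the first time the domain-mean rain rate exceeds a threshold and compares two runs. The time series are hypothetical stand-ins for model output, not the study's data.

```python
import numpy as np

def onset_time(times_min, rain_rate_mm_h, threshold=0.1):
    """Return the first time (minutes) the domain-mean rain rate exceeds
    a threshold, or None if it never does."""
    idx = np.argmax(rain_rate_mm_h > threshold)
    if rain_rate_mm_h[idx] <= threshold:
        return None
    return times_min[idx]

# Hypothetical 1-minute domain-mean rain rate for a clean and a polluted run.
t = np.arange(0, 180, 1.0)
clean_run    = np.clip(0.05 * (t - 45), 0, None)   # rain starts around 45 min
polluted_run = np.clip(0.05 * (t - 60), 0, None)   # warm rain suppressed, starts later

delay = onset_time(t, polluted_run) - onset_time(t, clean_run)
print(f"precipitation onset delayed by {delay:.0f} minutes")
```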

Relevance: 30.00%

Abstract:

SQL Injection Attack (SQLIA) remains a technique used by computer network intruders to pilfer an organisation's confidential data. An intruder re-crafts a web form's input and the query strings used in web requests with the malicious intent of compromising the security of the organisation's confidential data stored in the back-end database. The database is the most valuable data source, and thus intruders are unrelenting in constantly evolving new techniques to bypass the signature-based solutions currently provided by Web Application Firewalls (WAFs) to mitigate SQLIA. There is therefore a need for an automated, scalable methodology for pre-processing SQLIA features fit for a supervised learning model. However, obtaining a ready-made, scalable dataset whose items are feature-engineered as numerical attributes for training Artificial Neural Network (ANN) and Machine Learning (ML) models is a known issue in applying artificial intelligence to effectively address ever-evolving, novel SQLIA signatures. The proposed approach applies a numerical-attribute encoding ontology to encode features (both legitimate web requests and SQLIA) as numerical data items, so as to extract a scalable dataset for input to a supervised learning model, moving towards an ML SQLIA detection and prevention model. In the numerical-attribute encoding of features, the proposed model explores a hybrid of static and dynamic pattern matching by implementing a Non-Deterministic Finite Automaton (NFA). This is combined with a proxy and a SQL parser Application Programming Interface (API) to intercept and parse web requests in transit to the back-end database. In developing a solution to address SQLIA, this model allows web requests processed at the proxy and deemed to contain an injected query string to be excluded from reaching the target back-end database. This paper evaluates the performance metrics of a dataset obtained by the numerical encoding of the features ontology in Microsoft Azure Machine Learning (MAML) Studio using a Two-Class Support Vector Machine (TCSVM) binary classifier. This methodology then forms the subject of the empirical evaluation.
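As a rough, much-simplified illustration of the pipeline outlined above (numerical encoding of web-request features followed by a two-class SVM), the sketch below hand-crafts a few numerical attributes from raw query strings and trains scikit-learn's SVC. The features, example requests and labels are hypothetical and do not reproduce the paper's encoding ontology or its NFA-based pattern matching.

```python
import re
import numpy as np
from sklearn.svm import SVC

SQL_TOKENS = re.compile(r"\b(union|select|or|and|drop|insert)\b|('|\"|--|#)", re.I)

def encode(request: str) -> list:
    """Encode one web-request parameter string as numerical attributes."""
    return [
        len(request),                                # total length
        len(SQL_TOKENS.findall(request)),            # count of SQL-ish tokens
        request.count("'") + request.count('"'),     # quote characters
        sum(c.isdigit() for c in request),           # digits
        int(bool(re.search(r"(=|<|>)", request))),   # comparison operators
    ]

# Hypothetical labelled requests: 0 = legitimate, 1 = SQL injection attempt.
requests = ["id=42", "name=alice", "page=3&sort=asc",
            "id=1' OR '1'='1", "q=x' UNION SELECT password FROM users--",
            "id=5; DROP TABLE accounts--"]
labels = [0, 0, 0, 1, 1, 1]

X = np.array([encode(r) for r in requests], dtype=float)
y = np.array(labels)

clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
probe = "user=bob' OR 1=1--"
print("flag as SQLIA:", bool(clf.predict([encode(probe)])[0]))
```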

Relevance: 30.00%

Abstract:

This thesis focuses on experimental and numerical studies of the hydrodynamic interaction between two vessels in close proximity in waves. In the model tests, two identical box-like models with rounded corners were used. Regular waves with the same wave steepness and different wave frequencies were generated. Six-degree-of-freedom body motions and wave elevations between the bodies were measured in a head sea condition. Three initial gap widths were examined. In the numerical computations, a panel-free-method-based seakeeping program, MAPS0, and a panel-method-based program, WAMIT, were used to predict body motions and wave elevations. The computed body motions and wave elevations were compared with the experimental data.

Relevance: 30.00%

Abstract:

This dissertation proposes statistical methods to formulate, estimate and apply complex transportation models. Two main problems are part of the analyses conducted and presented in this dissertation. The first method solves an econometric problem and is concerned with the joint estimation of models that contain both discrete and continuous decision variables. The use of ordered models along with a regression is proposed, and their effectiveness is evaluated with respect to unordered models. Procedures to calculate and optimize the log-likelihood functions of both discrete-continuous approaches are derived, and the difficulties associated with the estimation of unordered models are explained. Numerical approximation methods based on the Genz algorithm are implemented in order to solve the multidimensional integral associated with the unordered modeling structure. The problems deriving from the lack of smoothness of the probit model around the maximum of the log-likelihood function, which makes the optimization and the calculation of standard deviations very difficult, are carefully analyzed. A methodology to perform out-of-sample validation in the context of a joint model is proposed. Comprehensive numerical experiments have been conducted on both simulated and real data. In particular, the discrete-continuous models are estimated and applied to vehicle ownership and use models on data extracted from the 2009 National Household Travel Survey. The second part of this work offers a comprehensive statistical analysis of the free-flow speed distribution; the method is applied to data collected on a sample of roads in Italy. A linear mixed model that includes speed quantiles in its predictors is estimated. Results show that there is no road effect in the analysis of free-flow speeds, which is particularly important for model transferability. A very general framework to predict random effects with few observations and incomplete access to model covariates is formulated and applied to predict the distribution of free-flow speed quantiles. The speed distribution of most road sections is successfully predicted; jack-knife estimates are calculated and used to explain why some sections are poorly predicted. Overall, this work contributes to the literature in transportation modeling by proposing econometric model formulations for discrete-continuous variables, more efficient methods for the calculation of multivariate normal probabilities, and random-effects models for free-flow speed estimation that take into account the survey design. All methods are rigorously validated on both real and simulated data.
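The multidimensional normal integrals mentioned above are commonly evaluated with Genz's algorithm; SciPy's multivariate normal cdf is based on Genz's method. A minimal sketch with an arbitrary trivariate covariance and cut-off values:

```python
import numpy as np
from scipy.stats import multivariate_normal

# Hypothetical 3-dimensional latent-utility covariance (unit variances).
mean = np.zeros(3)
cov = np.array([[1.0, 0.5, 0.2],
                [0.5, 1.0, 0.3],
                [0.2, 0.3, 1.0]])

# P(X1 <= 0.5, X2 <= 1.0, X3 <= 0.0): the cdf evaluates the multivariate
# normal rectangle probability with a Genz-type algorithm.
upper = np.array([0.5, 1.0, 0.0])
prob = multivariate_normal(mean=mean, cov=cov).cdf(upper)
print(f"P = {prob:.4f}")
```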

Relevance: 30.00%

Abstract:

Reliability and dependability modeling can be employed during many stages of analysis of a computing system to gain insights into its critical behaviors. To provide useful results, realistic models of systems are often necessarily large and complex. Numerical analysis of these models presents a formidable challenge because the sizes of their state-space descriptions grow exponentially in proportion to the sizes of the models. On the other hand, simulation of the models requires analysis of many trajectories in order to compute statistically correct solutions. This dissertation presents a novel framework for performing both numerical analysis and simulation. The new numerical approach computes bounds on the solutions of transient measures in large continuous-time Markov chains (CTMCs). It extends existing path-based and uniformization-based methods by identifying sets of paths that are equivalent with respect to a reward measure and related to one another via a simple structural relationship. This relationship makes it possible for the approach to explore multiple paths at the same time, thus significantly increasing the number of paths that can be explored in a given amount of time. Furthermore, the use of a structured representation for the state space and the direct computation of the desired reward measure (without ever storing the solution vector) allow it to analyze very large models using a very small amount of storage. Often, path-based techniques must compute many paths to obtain tight bounds. In addition to presenting the basic path-based approach, we also present algorithms for computing more paths and tighter bounds quickly. One resulting approach is based on the concept of path composition whereby precomputed subpaths are composed to compute the whole paths efficiently. Another approach is based on selecting important paths (among a set of many paths) for evaluation. Many path-based techniques suffer from having to evaluate many (unimportant) paths. Evaluating the important ones helps to compute tight bounds efficiently and quickly.
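For context, the classical uniformization method that the dissertation's path-based approach extends can be sketched in a few lines; the example below computes the transient state distribution of a small, hypothetical three-state availability CTMC (it does not implement the path-composition bounds themselves).

```python
import numpy as np
from scipy.stats import poisson

def uniformization(Q, pi0, t, tol=1e-10):
    """Transient distribution pi(t) of a CTMC with generator Q via uniformization."""
    Lam = max(-np.diag(Q)) * 1.05           # uniformization rate >= max exit rate
    P = np.eye(Q.shape[0]) + Q / Lam        # embedded DTMC transition matrix
    K = int(poisson.isf(tol, Lam * t)) + 1  # truncation point of the Poisson sum
    pi, term = np.zeros_like(pi0, dtype=float), pi0.astype(float)
    for k in range(K + 1):
        pi += poisson.pmf(k, Lam * t) * term
        term = term @ P                     # pi0 P^k for the next term
    return pi

# Hypothetical 3-state availability model: 0 = up, 1 = degraded, 2 = failed.
Q = np.array([[-0.2, 0.15, 0.05],
              [ 1.0, -1.3,  0.3],
              [ 0.0,  2.0, -2.0]])
pi0 = np.array([1.0, 0.0, 0.0])
pi_t = uniformization(Q, pi0, t=10.0)
print("P(up at t=10) =", pi_t[0])
```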

Relevance: 30.00%

Abstract:

Solving linear systems is an important problem for scientific computing. Exploiting parallelism is essential for solving complex systems, and this traditionally involves writing parallel algorithms on top of a library such as MPI. The SPIKE family of algorithms is one well-known example of a parallel solver for linear systems. The Hierarchically Tiled Array data type extends traditional data-parallel array operations with explicit tiling and allows programmers to directly manipulate tiles. The tiles of the HTA data type map naturally to the block nature of many numeric computations, including the SPIKE family of algorithms. The higher level of abstraction of the HTA enables the same program to be portable across different platforms. Current implementations target both shared-memory and distributed-memory models. In this thesis we present a proof-of-concept for portable linear solvers. We implement two algorithms from the SPIKE family using the HTA library. We show that our implementations of SPIKE exploit the abstractions provided by the HTA to produce a compact, clean code that can run on both shared-memory and distributed-memory models without modification. We discuss how we map the algorithms to HTA programs as well as examine their performance. We compare the performance of our HTA codes to comparable codes written in MPI as well as current state-of-the-art linear algebra routines.
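To make the SPIKE idea concrete, here is a minimal two-partition SPIKE solve for a tridiagonal system written with plain NumPy tiles: factor A = D S, solve the block-diagonal part independently per partition, solve a tiny reduced system on the interface unknowns, then retrieve the full solution. The thesis implementations use the HTA library, banded kernels and more partitions; this is only an illustrative sketch.

```python
import numpy as np

def spike_2partition(A, f):
    """Solve A x = f for a tridiagonal A using a two-partition SPIKE scheme."""
    n = A.shape[0]; m = n // 2
    A1, A2 = A[:m, :m], A[m:, m:]           # diagonal blocks (the 'tiles')
    b = A[m - 1, m]                          # coupling above the partition line
    c = A[m, m - 1]                          # coupling below the partition line

    # Spikes: v = A1^{-1}(b e_last), w = A2^{-1}(c e_first)
    v = np.linalg.solve(A1, b * np.eye(m)[:, m - 1])
    w = np.linalg.solve(A2, c * np.eye(n - m)[:, 0])

    # Solve the block-diagonal system D g = f (independent per partition).
    g1 = np.linalg.solve(A1, f[:m])
    g2 = np.linalg.solve(A2, f[m:])

    # Reduced 2x2 system for the two interface unknowns x1[-1], x2[0].
    R = np.array([[1.0, v[-1]], [w[0], 1.0]])
    x1_last, x2_first = np.linalg.solve(R, [g1[-1], g2[0]])

    # Retrieve the full solution from the spikes.
    x1 = g1 - v * x2_first
    x2 = g2 - w * x1_last
    return np.concatenate([x1, x2])

# Hypothetical diagonally dominant tridiagonal test system.
n = 8
A = (np.diag(4.0 * np.ones(n)) + np.diag(-np.ones(n - 1), 1)
     + np.diag(-np.ones(n - 1), -1))
x_true = np.arange(1.0, n + 1)
print(np.allclose(spike_2partition(A, A @ x_true), x_true))
```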

Relevance: 30.00%

Abstract:

Background: The use of artificial endoprostheses has become a routine procedure for knee and hip joints, while ankle arthritis has traditionally been treated by means of arthrodesis. Due to its advantages, the implantation of endoprostheses is constantly increasing. While finite element analyses (FEA) of strain-adaptive bone remodelling have been carried out for the hip joint in previous studies, to our knowledge there are no investigations that have considered remodelling processes of the ankle joint. In order to evaluate and optimise new-generation implants of the ankle joint, as well as to gain additional knowledge regarding the biomechanics, strain-adaptive bone remodelling has been calculated separately for the tibia and the talus after providing them with an implant. Methods: FE models of the bone-implant assembly for both the tibia and the talus have been developed. Bone characteristics such as the density distribution have been applied corresponding to CT scans. A force of 5,200 N, which corresponds to the compression force during normal walking of a person with a weight of 100 kg according to Stauffer et al., has been used in the simulation. The bone adaptation law previously developed by our research team has been used for the calculation of the remodelling processes. Results: A total bone mass loss of 2% in the tibia and 13% in the talus was calculated. The greater decline of density in the talus is due to its smaller size compared to the relatively large implant dimensions, causing remodelling processes in the whole bone tissue. In the tibia, bone remodelling processes are only calculated in areas adjacent to the implant. Thus, a smaller bone mass loss than in the talus can be expected. There is high agreement between the simulation results in the distal tibia and the literature. Conclusions: In this study, strain-adaptive bone remodelling processes are simulated using the FE method. The results contribute to a better understanding of the biomechanical behaviour of the ankle joint and hence are useful for the optimisation of the implant geometry in the future.
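A strain-adaptive remodelling law of the general kind used in such simulations can be sketched as a density update driven by a strain-energy stimulus with a "lazy zone". This is a generic Huiskes-type rule with illustrative numbers, not the research team's own adaptation law, and a full simulation would re-run the FE analysis after every density update rather than keep the strain energies fixed.

```python
import numpy as np

def remodel(rho, strain_energy, k_ref=0.004, width=0.5, B=1.0, dt=1.0, steps=200):
    """Iterate a simple strain-adaptive law: density increases where the
    strain-energy stimulus per unit mass exceeds a reference window
    ('lazy zone') and decreases where it falls below it."""
    rho = rho.copy()
    for _ in range(steps):
        stimulus = strain_energy / rho                # stimulus per unit mass
        drho = np.zeros_like(rho)
        high = stimulus > (1 + width) * k_ref
        low = stimulus < (1 - width) * k_ref
        drho[high] = B * (stimulus[high] - (1 + width) * k_ref)
        drho[low] = B * (stimulus[low] - (1 - width) * k_ref)
        rho = np.clip(rho + dt * drho, 0.01, 1.74)    # density bounds, g/cm^3
        # In a real FE remodelling loop, strain_energy would be recomputed here.
    return rho

# Hypothetical element densities and strain-energy densities near an implant:
# stress shielding lowers the stimulus in some elements, causing resorption.
rho0 = np.full(5, 1.0)
U = np.array([0.001, 0.002, 0.004, 0.006, 0.008])
print(remodel(rho0, U))
```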

Relevance: 30.00%

Abstract:

Magnetic fields are ubiquitous in galaxy cluster atmospheres and have a variety of astrophysical and cosmological consequences. Magnetic fields can contribute to the pressure support of clusters, affect thermal conduction, and modify the evolution of bubbles driven by active galactic nuclei. However, we currently do not fully understand the origin and evolution of these fields throughout cosmic time. Furthermore, we do not have a general understanding of the relationship between magnetic field strength and topology and other cluster properties, such as mass and X-ray luminosity. We can now begin to answer some of these questions using large-scale cosmological magnetohydrodynamic (MHD) simulations of the formation of galaxy clusters, including the seeding and growth of magnetic fields. Using large-scale cosmological simulations with the FLASH code combined with a simplified model of the acceleration of the cosmic rays responsible for the generation of radio halos, we find that the galaxy cluster frequency distribution and the expected number counts of radio halos from upcoming low-frequency surveys are strongly dependent on the strength of magnetic fields. Thus, a more complete understanding of the origin and evolution of magnetic fields is necessary to understand and constrain models of diffuse synchrotron emission from clusters. One favored model for generating magnetic fields is the amplification of weak seed fields in active galactic nuclei (AGN) accretion disks and their subsequent injection into cluster atmospheres via AGN-driven jets and bubbles. However, current large-scale cosmological simulations cannot directly include the physical processes associated with the accretion and feedback processes of AGN or the seeding and merging of the associated supermassive black holes (SMBHs). Thus, we must include these effects as subgrid models. In order to carefully study the growth of magnetic fields in clusters via AGN-driven outflows, we present a systematic study of SMBH and AGN subgrid models. Using dark-matter-only cosmological simulations, we find that many important quantities, such as the relationship between SMBH mass and galactic bulge velocity dispersion and the merger rate of black holes, are highly sensitive to the subgrid model assumptions for SMBHs. In addition, using MHD calculations of an isolated cluster, we find that magnetic field strengths, extent, topology, and relationship to other gas quantities such as temperature and density are also highly dependent on the chosen model of accretion and feedback. We use these systematic studies of SMBHs and AGN to inform and constrain our choice of subgrid models, and we use those results to outline a fully cosmological MHD simulation to study the injection and growth of magnetic fields in clusters of galaxies. This simulation will be the first to study the birth and evolution of magnetic fields using a fully closed accretion-feedback cycle, with as few assumptions as possible and a clearer understanding of the effects of the various parameter choices.

Relevance: 30.00%

Abstract:

In this study, a numerical simulation of the Caspian Sea circulation was performed using the COHERENS three-dimensional numerical model and field data. The COHERENS three-dimensional model and FVCOM were run under the effect of the wind-driven force, and the simulation results obtained were compared. The simulation covered the Caspian Sea with a horizontal grid size of approximately 5 km and 30 sigma levels. The numerical simulation results indicate that the wind-driven forces and the temperature gradient are the most important driving factors of the Caspian circulation pattern. One of the effects of the wind-driven currents was the upwelling phenomenon that formed along the eastern shores of the Caspian Sea in summer. The simulation results also indicate that this phenomenon occurred at depths of less than 40 meters, and the vertical velocity in July and August was 10 meters and 7 meters respectively. During the upwelling period, temperatures on the east coast were about 5°C lower than on the west coast. In autumn and winter, warm waters moved from the south-east coast to the north and cold waters moved from the west coast of the central Caspian toward the south. In the subsurface and deep layers, these movements were much more structured and strengthened the anti-clockwise circulation in the area, especially in the central Caspian. The results of the two models, COHERENS and FVCOM, run under the wind-driven force show close agreement, and the wind-driven circulation pattern of the two models is almost identical.

Relevance: 30.00%

Abstract:

This thesis deals with a mathematical description of the flow in a polymeric pipe and in a specific peristaltic pump. The study involves fluid-structure interaction analysis in the presence of complex turbulent flows, treated in an arbitrary Lagrangian-Eulerian (ALE) framework. The flow simulations are performed in COMSOL 4.4, as a 2D axisymmetric model, and in ABAQUS 6.14.1, as a 3D model with symmetric boundary conditions. In COMSOL, the fluid and structure problems are coupled by a monolithic algorithm, while the ABAQUS code links the ABAQUS CFD and ABAQUS Standard solvers with a single-block iterative partitioned algorithm. For the turbulent features of the flow, the fluid model in both codes is described by the RNG k-ε model. The structural model is described, depending on the pipe material, by elastic models or hyperelastic neo-Hookean models with Rayleigh damping properties. In order to describe the pulsatile fluid flow after the pumping process, the available data are often insufficient for the fluid problem: engineering measurements normally provide only the average pressure or velocity at a cross-section. This problem has been addressed since the 1950s by the work of McDonald and Womersley, who applied Fourier analysis to the average pressure at a fixed cross-section, while nowadays sophisticated techniques including finite elements and finite volumes exist to study the flow. Finally, we set up peristaltic pipe simulations in the ABAQUS code, using the same models previously tested for the fluid and the structure.
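The Fourier treatment of pulsatile flow mentioned above can be sketched quickly: decompose a cross-section-averaged pressure waveform into harmonics with an FFT and compute the Womersley number for each harmonic. The waveform, pipe radius and fluid properties below are hypothetical.

```python
import numpy as np

# Hypothetical pressure waveform sampled over one pumping period.
T = 1.0                                   # period [s]
N = 256
t = np.linspace(0.0, T, N, endpoint=False)
p = 100.0 + 20.0 * np.sin(2 * np.pi * t / T) + 5.0 * np.sin(4 * np.pi * t / T + 0.3)

# Fourier decomposition of the average pressure at a fixed cross-section.
coeffs = np.fft.rfft(p) / N
freqs = np.fft.rfftfreq(N, d=T / N)
amplitudes = 2 * np.abs(coeffs[1:6])      # first few harmonic amplitudes

# Womersley number alpha = R * sqrt(omega * rho / mu) for each harmonic.
R, rho, mu = 0.01, 1000.0, 1.0e-3         # pipe radius [m], water-like fluid
alpha = R * np.sqrt(2 * np.pi * freqs[1:6] * rho / mu)

for k, (a, al) in enumerate(zip(amplitudes, alpha), start=1):
    print(f"harmonic {k}: amplitude {a:.2f}, Womersley number {al:.1f}")
```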

Relevance: 30.00%

Abstract:

The aim of this thesis is to test the ability of correlative models, such as the Alpert correlations published in 1972 and re-examined in 2011, the investigation of Heskestad and Delichatsios in 1978, and the correlations produced by Cooper in 1982, to define both the dynamic and thermal characteristics of a fire-induced ceiling-jet flow. The flow occurs when the fire plume impinges on the ceiling and develops radially from the fire axis. Both temperature and velocity predictions are decisive for sprinkler positioning, fire alarm positions, heat and smoke detector positions and activation times, and back-layering predictions. These correlative models are compared with the numerical simulation software CFAST for temperature and velocity near the ceiling. The results are also compared with a Computational Fluid Dynamics (CFD) analysis using ANSYS FLUENT.
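The Alpert (1972) ceiling-jet correlations evaluated here are compact enough to code directly; the commonly quoted SI form below gives the maximum excess temperature and velocity as functions of heat release rate Q (kW), ceiling height H (m) and radial distance r (m). The example fire parameters are arbitrary.

```python
def alpert_ceiling_jet(Q_kW, H_m, r_m, T_amb=20.0):
    """Maximum ceiling-jet temperature [C] and velocity [m/s] from the
    commonly quoted SI form of Alpert's 1972 correlations."""
    r_over_H = r_m / H_m
    # Maximum excess temperature
    if r_over_H <= 0.18:                   # plume impingement (turning) region
        dT = 16.9 * Q_kW ** (2 / 3) / H_m ** (5 / 3)
    else:                                  # ceiling-jet region
        dT = 5.38 * (Q_kW / r_m) ** (2 / 3) / H_m
    # Maximum velocity
    if r_over_H <= 0.15:
        u = 0.96 * (Q_kW / H_m) ** (1 / 3)
    else:
        u = 0.195 * Q_kW ** (1 / 3) * H_m ** 0.5 / r_m ** (5 / 6)
    return T_amb + dT, u

# Example: a 1 MW fire under a 5 m ceiling, detector 3 m from the fire axis.
T, u = alpert_ceiling_jet(Q_kW=1000.0, H_m=5.0, r_m=3.0)
print(f"ceiling-jet temperature ~{T:.0f} C, velocity ~{u:.1f} m/s")
```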

Relevance: 30.00%

Abstract:

Experiments with ultracold atoms in optical lattices have become a versatile testing ground to study diverse quantum many-body Hamiltonians. A single-band Bose-Hubbard (BH) Hamiltonian was first proposed to describe these systems in 1998 and its associated quantum phase transition was subsequently observed in 2002. Over the years, there has been rapid progress in experimental realizations of more complex lattice geometries, leading to more exotic BH Hamiltonians with contributions from excited bands and modified tunneling and interaction energies. There have also been interesting theoretical insights and experimental studies on "unconventional" Bose-Einstein condensates in optical lattices and predictions of rich orbital physics in higher bands. In this thesis, I present our results on several multi-band BH models and emergent quantum phenomena. In particular, I study optical lattices with two local minima per unit cell and show that the low-energy states of a multi-band BH Hamiltonian with only pairwise interactions are equivalent to an effective single-band Hamiltonian with strong three-body interactions. I also propose a second method to create three-body interactions in ultracold gases of bosonic atoms in an optical lattice. In this case, this is achieved by a careful cancellation of two contributions in the pairwise interaction between the atoms, one proportional to the zero-energy scattering length and a second proportional to the effective range. I subsequently study the physics of Bose-Einstein condensation in the second band of a double-well 2D lattice and show that the collision-aided decay rate of the condensate to the ground band is smaller than the tunneling rate between neighboring unit cells. Finally, I propose a numerical method using the discrete variable representation for constructing real-valued Wannier functions localized in a unit cell for optical lattices. The developed numerical method is general and can be applied to a wide array of optical lattice geometries in one, two or three dimensions.
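One way to see where a single-band BH tunneling parameter comes from: diagonalize a 1D sinusoidal lattice V(x) = V0 sin^2(kL x) in a plane-wave basis and read J off the lowest-band width (4J in the tight-binding limit). Energies are in recoil units and the lattice depth is an arbitrary example; this generic estimate is not the thesis's DVR-based Wannier construction.

```python
import numpy as np

def lowest_band(V0_Er, q_over_kL, n_pw=15):
    """Lowest Bloch-band energy (recoil units) of V(x) = V0 sin^2(kL x)
    at quasimomentum q, from a plane-wave (momentum-space) diagonalization."""
    n = np.arange(-n_pw, n_pw + 1)
    # sin^2 = 1/2 - (e^{2i kL x} + e^{-2i kL x})/4 couples momenta differing by 2 kL.
    H = np.diag((q_over_kL + 2 * n) ** 2 + V0_Er / 2.0)
    off = -V0_Er / 4.0 * np.ones(2 * n_pw)
    H += np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(H)[0]

V0 = 8.0                                   # lattice depth in recoil energies
qs = np.linspace(-1.0, 1.0, 101)           # quasimomentum in units of kL
band = np.array([lowest_band(V0, q) for q in qs])
J = (band.max() - band.min()) / 4.0        # tight-binding bandwidth = 4 J
print(f"V0 = {V0} Er  ->  J ~ {J:.4f} Er")
```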

Relevance: 30.00%

Abstract:

When wood is transported from the forest to the mills, many unforeseen events can occur that disrupt the planned routes (for example, because of weather conditions, forest fires, the arrival of new loads, etc.). When such events only become known during a trip, the truck carrying out that trip must be diverted to an alternative route. Without information about such a route, the truck driver is likely to choose an alternative route that is unnecessarily long or, worse, one that is itself "closed" because of an unforeseen event. It is therefore essential to provide drivers with real-time information, in particular suggestions of alternative routes when a planned road turns out to be impassable. The recourse options in case of unforeseen events depend on the characteristics of the supply chain under study, such as the presence of self-loading trucks and the transport management policy. We present three articles dealing with different application contexts, along with models and solution methods adapted to each context. In the first article, truck drivers have the entire weekly plan for the current week. In this context, every effort must be made to minimize the changes made to the initial plan. Although the truck fleet is homogeneous, the drivers are ranked by priority; those with the highest priority receive the largest workloads, and minimizing the changes to their plans is also a priority. Since the consequences of unforeseen events on the transport plan are essentially cancellations and/or delays of certain trips, the proposed approach first handles the cancellation or delay of a single trip and is then generalized to handle more complex events. In this approach, we try to reschedule the affected trips within the same week so that a loader is free when the truck arrives both at the forest site and at the mill. In this way, the trips of the other trucks are not modified. This approach provides dispatchers with alternative plans within a few seconds. Better solutions could be obtained if the dispatcher were allowed to make more changes to the initial plan. In the second article, we consider a context where only one trip at a time is communicated to the drivers: the dispatcher waits until a driver finishes his trip before revealing the next one. This context is more flexible and offers more recourse options in case of unforeseen events. Moreover, the weekly problem can be divided into daily problems, since demand is daily and the mills are open only for limited periods during the day. We use a mathematical programming model based on a space-time network to react to disruptions. Although these disruptions can affect the initial transport plan in different ways, a key feature of the proposed model is that it remains valid for handling all unforeseen events, whatever their nature. Indeed, the impact of these events is captured in the space-time network and in the input parameters rather than in the model itself. The model is solved for the current day each time an unforeseen event is revealed.

In the last article, the truck fleet is heterogeneous and includes trucks with on-board loaders. The route configuration of these trucks differs from that of regular trucks, since they do not need to be synchronized with the loaders. We use a mathematical model in which the columns can be easily and naturally interpreted as truck routes, and we solve this model using column generation. First, we relax the integrality of the decision variables and consider only a subset of the feasible routes. Routes with the potential to improve the current solution are added to the model iteratively. A space-time network is used both to represent the impacts of unforeseen events and to generate these routes. The resulting solution is generally fractional, and a branch-and-price algorithm is used to find integer solutions. Several disruption scenarios were developed to test the proposed approach on case studies from the Canadian forest industry, and numerical results are presented for all three contexts.
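In its simplest form, the recourse situation of the first article (a planned road becomes impassable mid-trip) reduces to recomputing a shortest path on the road network after removing the blocked arc. The sketch below does this with networkx on a small hypothetical network; the models in the thesis additionally handle loader synchronization, space-time networks and column generation.

```python
import networkx as nx

# Hypothetical road network: nodes are junctions, weights are travel times (min).
G = nx.DiGraph()
G.add_weighted_edges_from([
    ("forest", "A", 30), ("A", "B", 25), ("B", "mill", 20),   # planned route
    ("A", "C", 40), ("C", "mill", 35),                        # detour option
    ("forest", "C", 80),
])

planned = nx.shortest_path(G, "forest", "mill", weight="weight")
print("planned route:", planned)

# An unforeseen event (e.g. a forest fire) closes arc A -> B while the truck
# is already at A: remove the arc and reroute from the truck's current node.
G.remove_edge("A", "B")
detour = nx.shortest_path(G, "A", "mill", weight="weight")
print("suggested detour from A:", detour)
```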