Abstract:
A theory was developed to allow the separate determination of the effects of the interparticle friction and interlocking of particles on the shearing resistance and deformational behavior of granular materials. The derived parameter, angle of solid friction, is independent of the type of shear test, stress history, porosity and the level of confining pressure, and depends solely upon the nature of the particle surface. The theory was tested against published data concerning the performance of plane strain, triaxial compression and extension tests on cohesionless soils. The theory also was applied to isotropically consolidated undrained triaxial tests on three crushed limestones prepared by the authors using vibratory compaction. The authors concluded that (1) the theory allowed the determination of solid friction between particles, which was found to depend solely on the nature of the particle surface, (2) the separation of frictional and volume change components of shear strength of granular materials qualitatively corroborated the postulated mechanism of deformation (sliding and rolling of groups of particles over other similar groups with resulting dilatancy of the specimen), (3) the influence of void ratio, gradation, confining pressure, stress history and type of shear test on shear strength is reflected in values of the omega parameter, and (4) calculation of the coefficient of solid friction allows the establishment of the lower limit of the shear strength of a granular material.
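The separation of a frictional component from a dilatancy component can be illustrated with a Taylor-type energy correction. This is a common textbook decomposition, used here only as an analogue of the abstract's theory, and the numbers are illustrative:

```python
import math

def solid_friction_angle(phi_peak_deg, dilation_rate):
    """Taylor-type energy correction: the measured peak friction angle is
    split into a solid-friction part and a dilatancy part,
    tan(phi_peak) = tan(phi_solid) + dilation_rate.
    (Illustrative analogue of the paper's separation, not its exact theory.)"""
    tan_solid = math.tan(math.radians(phi_peak_deg)) - dilation_rate
    return math.degrees(math.atan(tan_solid))

# A dense sand peaking at 38 deg with a dilation rate of 0.2 would have a
# solid-friction angle of roughly 30 deg under this decomposition.
phi_solid = solid_friction_angle(38.0, 0.2)
```

Under this view, the dilatancy term captures the strength contributed by particles riding over one another, so removing it leaves a lower-bound strength governed by surface friction alone, consistent with conclusion (4) above.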
Finite element modeling of straightening of thin-walled seamless tubes of austenitic stainless steel
Abstract:
During this thesis work, a coupled thermo-mechanical finite element model (FEM) was built to simulate hot rolling in the blooming mill at Sandvik Materials Technology (SMT) in Sandviken. The blooming mill is the first in a long line of processes that continuously cast or ingot cast ingots are subjected to before becoming finished products. The aim of this thesis work was twofold. The first was to create a parameterized finite element (FE) model of the blooming mill. The commercial FE software package MSC Marc/Mentat was used to create this model, and the programming language Python was used to parameterize it. Second, two different pass schedules (A and B) were studied and compared using the model. The two pass series were evaluated with focus on their ability to heal centreline porosity, i.e. to close voids in the centre of the ingot. This evaluation was made by studying the hydrostatic stress (σm), the von Mises stress (σeq) and the plastic strain (εp) in the centre of the ingot. From these parameters, the stress triaxiality (Tx) and the hydrostatic integration parameter (Gm) were calculated for each pass in both series, using two different transportation times (30 and 150 s) from the furnace. The relation between Gm and an analytical parameter (Δ) was also studied. This parameter is the ratio between the mean height of the ingot and the contact length between the rolls and the ingot, and is useful as a rule of thumb for judging the homogeneity or penetration of strain for a specific pass. The pass series designed with fewer passes (B), many with greater reduction, was shown to achieve better void closure theoretically. It was also shown that a temperature gradient, which is the result of a longer holding time between the furnace and the blooming mill, leads to improved void closure.
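The two void-closure indicators named above follow directly from the FE output at the ingot centre: Tx is a pointwise ratio and Gm integrates it over plastic strain. A minimal sketch, assuming per-pass histories of σm, σeq and εp sampled from the model (names and numbers are illustrative):

```python
import numpy as np

def triaxiality_and_gm(sigma_m, sigma_eq, eps_p):
    """Stress triaxiality Tx = sigma_m / sigma_eq at each sampled instant,
    and the hydrostatic integration parameter Gm = integral of Tx over the
    plastic strain (trapezoidal rule)."""
    sigma_m = np.asarray(sigma_m, dtype=float)
    sigma_eq = np.asarray(sigma_eq, dtype=float)
    eps_p = np.asarray(eps_p, dtype=float)
    tx = sigma_m / sigma_eq
    gm = float(np.sum(0.5 * (tx[1:] + tx[:-1]) * np.diff(eps_p)))
    return tx, gm

# Toy pass history: compressive (negative) hydrostatic stress at the centre
# drives Gm negative, which favours void closure.
tx, gm = triaxiality_and_gm([-60.0, -80.0, -70.0],
                            [100.0, 120.0, 110.0],
                            [0.0, 0.1, 0.2])
```

Comparing Gm per pass across schedules A and B is then a matter of repeating this calculation for each pass history.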
Abstract:
Currently available valve substitutes still have several shortcomings. The limited availability of allografts, the coagulation risks associated with mechanical valves, and the limited durability of animal-tissue bioprostheses are all problems that tissue engineering has the potential to overcome. With the self-assembly method, the only scaffold for the cells is their own extracellular matrix, allowing the fabrication of a tissue entirely free of exogenous material. This project was preceded by those of doctoral students Catherine Tremblay and Véronique Laterreur, who respectively developed a method for fabricating moulded valves by self-assembly and a new version of a bioreactor. During this master's project, the new bioreactor was adapted for sterile use with living tissues, and the moulded-valve fabrication method was modified and then tested through the production of 4 prototypes. These prototypes did not achieve satisfactory performance in the bioreactor, motivating the design of a new method. Rather than attempting to replicate the native shape of heart valves, recent studies have suggested a tubular geometry. This would allow simplified fabrication, rapid implantation, and a minimal footprint for percutaneous procedures. This minimalist approach fits well with the self-assembly method, which has already been used to produce small-diameter vessels. A total of 11 tubes were produced by rolling self-assembled fibroblast sheets, then transferred onto mandrels of smaller diameter, allowing them to contract freely. Characterization of two control tubes showed that this pre-contraction phase was beneficial for the tissue properties, in addition to preventing contraction in the bioreactor.
The final prototypes could withstand physiological pulmonary flow. This new method shows that the self-assembly process has the potential to be used to fabricate tubular heart valves.
Abstract:
Following inspections in 2013 of all police forces, Her Majesty's Inspectorate of Constabulary found that one-third of forces could not provide data on repeat victims of domestic abuse (DA) and concluded that in general there were ambiguities around the term 'repeat victim' and that there was a need for consistent and comparable statistics on DA. Using an analysis of police-recorded DA data from two forces, an argument is made for including both offences and non-crime incidents when identifying repeat victims of DA. Furthermore, for statistical purposes the counting period for repeat victimizations should be taken as a rolling 12 months from first recorded victimization. Examples are given of summary statistics that can be derived from these data down to Community Safety Partnership level. To reinforce the need to include both offences and incidents in analyses, repeat victim chronologies from police-recorded data are also used to briefly examine cases of escalation to homicide as an example of how they can offer new insights and greater scope for evaluating risk and effectiveness of interventions.
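One reading of the proposed counting rule, a rolling 12-month window opened by the first recorded victimization, can be sketched as follows. The record structure is an assumption for illustration; in practice `dates` would mix offences and non-crime incidents, as the abstract argues:

```python
from datetime import date

def count_repeats(dates, window_days=365):
    """Count repeat victimizations under a rolling 12-month rule: the first
    recorded victimization opens a window; any further record inside it is
    a repeat; the first record after the window expires opens a new one.
    (One interpretation of the rule described in the abstract.)"""
    repeats, window_start = 0, None
    for d in sorted(dates):
        if window_start is None or (d - window_start).days > window_days:
            window_start = d          # first victimization opens a new window
        else:
            repeats += 1              # in-window records count as repeats
    return repeats

# Only the 2013-06-01 record falls within 12 months of a window-opening record.
events = [date(2013, 1, 5), date(2013, 6, 1), date(2014, 3, 1), date(2015, 9, 9)]
repeats = count_repeats(events)
```

Anchoring the window at the first victimization, rather than at calendar years, is what makes counts comparable across forces.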
Abstract:
The dynamics, shape, deformation, and orientation of red blood cells in microcirculation affect the rheology, flow resistance and transport properties of whole blood. This creates important links between the cellular and continuum scales. Furthermore, the dynamics of RBCs subject to different flow conditions and vessel geometries is relevant for both fundamental research and biomedical applications (e.g. drug delivery). In this thesis, the behaviour of RBCs is investigated for different flow conditions via computer simulations. We use a combination of two mesoscopic particle-based simulation techniques, dissipative particle dynamics and smoothed dissipative particle dynamics. We focus on the microcapillary scale of several μm. At this scale, blood cannot be treated as a continuum but has to be studied at the cellular level. The connection between cellular motion and overall blood rheology is investigated. Red blood cells are modelled as viscoelastic objects interacting hydrodynamically with a viscous fluid environment. The properties of the membrane, such as resistance against bending or shearing, are set to correspond to experimental values. Furthermore, thermal fluctuations are considered via random forces. Analyses corresponding to light scattering measurements are performed in order to compare to experiments and to suggest for which situations this method is suitable. Static light scattering by red blood cells characterises their shape and allows comparison to objects such as spheres or cylinders, whose scattering signals have analytical solutions, in contrast to those of red blood cells. Dynamic light scattering by red blood cells is studied concerning its suitability to detect and analyse motion, deformation and membrane fluctuations. Dynamic light scattering analysis is performed for both diffusing and flowing cells. We find that scattering signals depend on various cell properties, thus allowing different cells to be distinguished.
The scattering of diffusing cells allows conclusions to be drawn about their bending rigidity via the effective diffusion coefficient. The scattering of flowing cells allows conclusions to be drawn about the shear rate via the scattering amplitude correlation. In flow, an RBC shows different shapes and dynamic states, depending on conditions such as confinement, physiological/pathological state and cell age. Here, two essential flow conditions are studied: simple shear flow and tube flow. Simple shear flow, as a basic flow condition, is part of any more complex flow. The velocity profile is linear and the shear stress is homogeneous. In simple shear flow, we find a sequence of different cell shapes as the shear rate increases: rolling cells with cup shapes, then trilobe shapes and quadrulobe shapes. This agrees with recent experiments. Furthermore, the impact of the initial orientation on the dynamics is studied. To study crowding and collective effects, systems with higher haematocrit are set up. Tube flow is an idealised model for the flow through cylindric microvessels. Without a cell, a parabolic flow profile prevails. A single red blood cell is placed into the tube and subjected to a Poiseuille profile. In tube flow, we find different cell shapes and dynamics depending on confinement, shear rate and cell properties. For strong confinements and high shear rates, we find parachute-like shapes. Although not perfectly symmetric, they are adjusted to the flow profile and maintain a stationary shape and orientation. For weak confinements and low shear rates, we find tumbling slippers that rotate and moderately change their shape. For weak confinements and high shear rates, we find tank-treading slippers that oscillate in a limited range of inclination angles and strongly change their shape. For the lowest shear rates, we find cells performing a snaking motion.
Due to cell properties and the resulting deformations, all of these shapes differ from previously described ones, such as steady tank-treading or symmetric parachutes. We introduce phase diagrams to identify flow regimes for the different shapes and dynamics. When cell properties change, the regime borders in the phase diagrams shift. In both flow types, both the viscosity contrast and the choice of stress-free shape are important. In in vitro experiments, the solvent viscosity has often been higher than the cytosol viscosity, leading to a different pattern of dynamics, such as steady tank-treading. The stress-free state of an RBC, which is the state at zero shear stress, is still controversial, and computer simulations enable direct comparisons of possible candidates in equivalent flow conditions.
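The link between dynamic light scattering and the effective diffusion coefficient mentioned above follows the standard DLS relation for simple diffusion, where the field correlation decays as g1 = exp(−q²Dt). A minimal sketch under that assumption (the thesis analyses richer, shape-dependent signals; this is only the baseline relation):

```python
import numpy as np

def effective_diffusion_from_dls(q, t, g1):
    """Fit an effective diffusion coefficient D from a DLS field correlation,
    assuming the simple-diffusion form g1(t) = exp(-q^2 * D * t).
    Linearizing, ln g1 = -q^2 D t, so the slope of ln g1 vs t gives -q^2 D."""
    slope = np.polyfit(np.asarray(t), np.log(np.asarray(g1)), 1)[0]
    return -slope / q**2

# Synthetic decay with D = 2.0 (arbitrary units) at scattering vector q = 1.5.
t = np.linspace(0.01, 1.0, 50)
D = effective_diffusion_from_dls(1.5, t, np.exp(-1.5**2 * 2.0 * t))
```

Deviations of a measured correlation from this single-exponential form are precisely what carries the extra information about membrane fluctuations and bending rigidity.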
Abstract:
Quantifying the solid material transported (sediment transport) along a watercourse is extremely important in many areas of river engineering. In mountain rivers, sediment transport occurs mostly as bedload, through sliding, rolling and saltation of the sediments. Over the years, several formulas have been developed to estimate bedload transport; however, due to the complexity of sediment transport processes, as well as their spatial and temporal variability, prediction of transport rates has not been achieved through theoretical investigation alone. To gain a better understanding of bedload transport processes in mountain rivers, it is necessary to monitor them as precisely as possible. With advances in electronics, new technological methods have been developed to address the problem of quantifying sediment transport, replacing the current traditional methods, which are based on collecting field samples for later correlation. The main objective of this dissertation was the development of a device capable of continuously estimating/monitoring bedload transport in mountain rivers using low-cost technology. This device features a piezoelectric sensor that measures the vibration caused by sediment impacts on a metal plate. The energy of the signal resulting from the impacts is converted into weight. The measurement methodology consisted of laboratory tests, with particular emphasis on the influence of flow rate variation, as well as sediment shape, on the intensity of the acquired signal.
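The signal-energy-to-weight conversion described above can be sketched in a few lines. The discrete energy formula is standard signal processing; the linear calibration is an assumption standing in for whatever relation the laboratory tests establish:

```python
import numpy as np

def impact_signal_energy(signal, fs):
    """Energy of the piezo signal: E = sum(x^2) / fs, a discrete
    approximation of the integral of the squared voltage over time,
    where fs is the sampling frequency in Hz."""
    x = np.asarray(signal, dtype=float)
    return float(np.sum(x ** 2) / fs)

def energy_to_weight(energy, k):
    """Convert signal energy to transported sediment weight via a
    calibration coefficient k fitted from flume runs. (A linear
    energy-weight relation is an illustrative assumption; the thesis's
    tests would determine the actual relation.)"""
    return k * energy

# Toy two-sample signal at 1 Hz: energy 5.0, weight 10.0 with k = 2.
E = impact_signal_energy([1.0, 2.0], fs=1.0)
W = energy_to_weight(E, k=2.0)
```

Since the abstract notes that flow rate and sediment shape both affect signal intensity, a practical calibration would need separate coefficients (or correction terms) for those conditions.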
Abstract:
As a result of technological advances, a steel was obtained that overturns the paradigm that high ductility and high mechanical strength cannot be combined. Thus TWIP (twinning-induced plasticity) steel was developed over the last decade, having twinning as its main deformation mechanism. The main objective of the present work was to characterize TWIP980 steel in three different respects: chemistry, mechanics and microstructure. The first, chemical characterization, aimed to identify the designation of the TWIP steel under study. With only the rolling direction (RD) and the supplier of the sheets (POSCO) known, the goal was to determine its designation. This was done by comparing the tensile curves obtained for the material under study with the tensile curves of various TWIP steels from POSCO. Having narrowed the candidates to two possible TWIP steels, an analysis of the chemical composition by EDS (energy-dispersive X-ray spectroscopy) led to the conclusion that the steel under study was TWIP980. In the mechanical characterization, tensile tests were used to study properties such as the elastic modulus, yield stress, ductility, anisotropy, hardening coefficient and Poisson's ratio. These properties were studied for three strain-path changes and four pre-strains: strain-path changes at 0°, 45° and 90° to RD, for engineering strains of 0%, 10%, 20% and 30%. Finally, the microstructural analysis aimed to obtain values for the grain and twin sizes as well as their crystallographic orientations. The dislocation and twin densities for each of the 4 pre-strains were also studied. These parameters were obtained by optical microscopy, scanning electron microscopy (SEM) and transmission electron microscopy (TEM).
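The anisotropy measured in those tensile tests is typically expressed through the Lankford coefficient, computed from the in-plane strains at each orientation to RD. A minimal sketch assuming plastic incompressibility (a standard sheet-metal relation, not necessarily the thesis's exact procedure):

```python
def lankford_r(eps_width, eps_length):
    """Lankford anisotropy coefficient r = eps_width / eps_thickness.
    The thickness strain is inferred from plastic incompressibility:
    eps_width + eps_length + eps_thickness = 0."""
    eps_thickness = -(eps_width + eps_length)
    return eps_width / eps_thickness

# An isotropic sheet thins equally in width and thickness, giving r = 1.
r = lankford_r(-0.05, 0.10)
```

Evaluating r at 0°, 45° and 90° to RD, as in the test matrix above, characterises the planar anisotropy of the rolled sheet.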
Abstract:
The hierarchical forest planning process currently in place on public lands risks failing at two levels. At the upper level, the current process does not provide sufficient evidence of the sustainability of the current harvest level. At a lower level, the current process does not support realizing the full value-creation potential of the forest resource, sometimes needlessly constraining short-term harvest planning. These failures are attributable to certain assumptions implicit in the allowable-cut optimization model, which may explain why this problem is not well documented in the literature. We use agency theory to model the hierarchical forest planning process on public lands. We develop an iterative two-stage simulation framework to estimate the long-term effect of the interaction between the state and the fibre consumer, allowing us to establish certain conditions that can lead to stockouts. We then propose an improved formulation of the allowable-cut optimization model. The classic formulation of the model (i.e., maximization of sustained fibre yield) does not consider that the industrial fibre consumer seeks to maximize profit; instead it assumes total consumption of the fibre supply in every period, regardless of its value-creation potential. We extend the classic formulation to allow anticipation of the fibre consumer's behaviour, thereby increasing the probability that the fibre supply will be entirely consumed and restoring the validity of the total-consumption assumption implicit in the optimization model.
We model the principal-agent relationship between the government and industry using a bilevel formulation of the optimization model, where the upper level represents the allowable-cut determination process (the government's responsibility) and the lower level represents the fibre consumption process (industry's responsibility). We show that the bilevel formulation can mitigate the risk of stockouts, thereby improving the credibility of the hierarchical forest planning process. Together, the bilevel allowable-cut optimization model and the methodology we developed to solve it to optimality represent an alternative to the methods currently in use. Our bilevel model and the iterative simulation framework are a step forward in value-driven forest planning technology. Explicitly integrating industrial objectives and constraints into the forest planning process, starting at allowable-cut determination, should foster greater collaboration between government and industry, making it possible to exploit the full value-creation potential of the forest resource.
Abstract:
Master's degree in Auditing
Abstract:
In order to power our planet for the next century, clean energy technologies need to be developed and deployed. Photovoltaic solar cells, which convert sunlight into electricity, are a clear option; however, they currently supply only 0.1% of US electricity due to the relatively high cost per watt of generation. Thus, our goal is to create more power from a photovoltaic device while simultaneously reducing its price. To accomplish this goal, we are creating new high efficiency anti-reflection coatings that allow more of the incident sunlight to be converted to electricity, using simple and inexpensive coating techniques that enable reduced manufacturing costs. Traditional anti-reflection coatings (consisting of thin layers of non-absorbing materials) rely on the destructive interference of the reflected light, causing more light to enter the device and subsequently get absorbed. While these coatings are used on nearly all commercial cells, they are wavelength dependent and are deposited using expensive processes that require elevated temperatures, which increase production cost and can be detrimental to some temperature-sensitive solar cell materials. We are developing two new classes of anti-reflection coatings (ARCs) based on textured dielectric materials: (i) a transparent, flexible paper technology that relies on optical scattering and reduced refractive index contrast between the air and semiconductor and (ii) silicon dioxide (SiO2) nanosphere arrays that rely on collective optical resonances. Both techniques improve solar cell absorption and ultimately yield high efficiency, low cost devices. For the transparent paper-based ARCs, we have recently shown that they improve solar cell efficiencies for all angles of incident illumination, reducing the need for costly tracking of the sun's position. For a GaAs solar cell, we achieved a 24% improvement in the power conversion efficiency using this simple coating.
Because the transparent paper is made from an earth abundant material (wood pulp) using an easy, inexpensive and scalable process, this type of ARC is an excellent candidate for future solar technologies. The coatings based on arrays of dielectric nanospheres also show excellent potential for inexpensive, high efficiency solar cells. The fabrication process is based on a Meyer rod rolling technique, which can be performed at room temperature and applied to mass production, yielding a scalable and inexpensive manufacturing process. The deposited monolayer of SiO2 nanospheres, having a diameter of 500 nm on a bare Si wafer, leads to a significant increase in light absorption and a higher expected current density based on initial simulations, on the order of 15-20%. When applied to a Si solar cell containing a traditional anti-reflection coating (Si3N4 thin-film), an additional increase in the spectral current density is observed, 5% beyond what a typical commercial device would achieve. Due to the coupling between the spheres originating from Whispering Gallery Modes (WGMs) inside each nanosphere, the incident light is strongly coupled into the high-index absorbing material, leading to increased light absorption. Furthermore, the SiO2 nanospheres scatter and diffract light in such a way that both the optical and electrical properties of the device have little dependence on incident angle, eliminating the need for solar tracking. Because the layer can be made with an easy, inexpensive, and scalable process, this anti-reflection coating is also an excellent candidate for replacing conventional technologies relying on complicated and expensive processes.
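The destructive-interference principle behind the traditional thin-film coatings mentioned above has a simple textbook design rule: for a single layer, reflections cancel at the design wavelength when the film index is the geometric mean of the surrounding indices and the thickness is a quarter wavelength in the film. A sketch of that rule (a standard optics result, not the textured coatings this abstract develops):

```python
import math

def quarter_wave_arc(wavelength_nm, n_substrate, n_ambient=1.0):
    """Single-layer anti-reflection coating by destructive interference:
    ideal film index n = sqrt(n_ambient * n_substrate), and thickness
    t = wavelength / (4 n) so the two reflections are half a wave apart."""
    n_film = math.sqrt(n_ambient * n_substrate)
    thickness = wavelength_nm / (4.0 * n_film)
    return n_film, thickness

# Silicon (n ~ 3.9) at 550 nm: ideal film index ~1.97, thickness ~70 nm.
n_f, t = quarter_wave_arc(550.0, 3.9)
```

The wavelength dependence is visible in the formula itself: the cancellation holds exactly only at the design wavelength, which is the limitation the scattering- and resonance-based coatings aim to avoid.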
Abstract:
We report sedimentological evidence for a tsunami from a coastal lake at Innaarsuit, Disko Bugt (west Greenland), which was most likely generated by a rolling iceberg. The tsunami invaded the lake c. 6000 years ago, during a period of time when relative sea level (RSL) was falling quickly because of isostatic rebound. We use the background rate of RSL fall, together with an age model for the sediment sequence, to infer a minimum wave run-up during the event of c. 3.3 m. The stratigraphic signature of the event bears similarities to that described from studies of the early-Holocene Storegga slide tsunami in Norwegian coastal basins. Conditions conducive to iceberg tsunamis include a supply of icebergs, deep water close to the shore, a depositional setting protected from storm or landslide tsunamis, and a coastal configuration that has the potential to amplify the height of tsunami waves as water depths shallow and the waves approach and impact the coast. Future warming of polar regions will lead to increased calving and iceberg production, at a time when human use of polar coasts will also grow. We predict, therefore, that iceberg-generated tsunamis will become a growing hazard in polar coastal waters, especially in areas adjacent to large, fast-flowing, marine-terminating ice streams that are close to human populations or infrastructure.
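The run-up reconstruction described — combining the lake's elevation with the background rate of RSL fall at the event's age — reduces to simple arithmetic: the minimum run-up is the height of the lake sill above sea level *at the time of the event*. A hedged sketch with made-up inputs (these are not the paper's data):

```python
def minimum_runup(sill_elevation_m, event_age_kyr, rsl_fall_m_per_kyr):
    """Minimum wave run-up = height of the lake sill above contemporaneous
    sea level. Because RSL has fallen since the event (isostatic rebound),
    sea level at the event stood higher than today by fall_rate * age.
    (Illustrative reconstruction; the inputs are not the paper's data.)"""
    rsl_above_present = rsl_fall_m_per_kyr * event_age_kyr
    return sill_elevation_m - rsl_above_present

# Hypothetical: a sill 15 m above present sea level, an event 6 kyr ago,
# and RSL falling at 2 m/kyr imply the wave ran up at least 3 m.
runup = minimum_runup(15.0, 6.0, 2.0)
```

This is why the age model matters: the inferred run-up scales directly with how much RSL fall is assigned to the interval since the event.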
Abstract:
The U.S. railroad companies spend billions of dollars every year on railroad track maintenance in order to ensure safety and operational efficiency of their railroad networks. Besides maintenance costs, other costs such as train accident costs, train and shipment delay costs and rolling stock maintenance costs are also closely related to track maintenance activities. Optimizing the track maintenance process on the extensive railroad networks is a very complex problem with major cost implications. Currently, the decision making process for track maintenance planning is largely manual and primarily relies on the knowledge and judgment of experts. There is considerable potential to improve the process by using operations research techniques to develop solutions to the optimization problems on track maintenance. In this dissertation study, we propose a range of mathematical models and solution algorithms for three network-level scheduling problems on track maintenance: track inspection scheduling problem (TISP), production team scheduling problem (PTSP) and job-to-project clustering problem (JTPCP). TISP involves a set of inspection teams which travel over the railroad network to identify track defects. It is a large-scale routing and scheduling problem where thousands of tasks are to be scheduled subject to many difficult side constraints such as periodicity constraints and discrete working time constraints. A vehicle routing problem formulation was proposed for TISP, and a customized heuristic algorithm was developed to solve the model. The algorithm iteratively applies a constructive heuristic and a local search algorithm in an incremental scheduling horizon framework. The proposed model and algorithm have been adopted by a Class I railroad in its decision making process. Real-world case studies show the proposed approach outperforms the manual approach in short-term scheduling and can be used to conduct long-term what-if analyses to yield managerial insights. 
PTSP schedules capital track maintenance projects, which are the largest track maintenance activities and account for the majority of railroad capital spending. A time-space network model was proposed to formulate PTSP. More than ten types of side constraints were considered in the model, including very complex constraints such as mutual exclusion constraints and consecution constraints. A multiple neighborhood search algorithm, including a decomposition and restriction search and a block-interchange search, was developed to solve the model. Various performance enhancement techniques, such as data reduction, augmented cost function and subproblem prioritization, were developed to improve the algorithm. The proposed approach has been adopted by a Class I railroad for two years. Our numerical results show the model solutions are able to satisfy all hard constraints and most soft constraints. Compared with the existing manual procedure, the proposed approach is able to bring significant cost savings and operational efficiency improvement. JTPCP is an intermediate problem between TISP and PTSP. It focuses on clustering thousands of capital track maintenance jobs (based on the defects identified in track inspection) into projects so that the projects can be scheduled in PTSP. A vehicle routing problem based model and a multiple-step heuristic algorithm were developed to solve this problem. Various side constraints such as mutual exclusion constraints and rounding constraints were considered. The proposed approach has been applied in practice and has shown good performance in both solution quality and efficiency.
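The flavour of the constructive step in these heuristics — placing tasks one at a time while honouring periodicity and capacity side constraints — can be shown with a toy greedy scheduler. This is purely illustrative: the dissertation's models are full vehicle-routing formulations with travel, discrete working-time windows, and many more side constraints:

```python
def schedule_inspections(tasks, horizon, capacity_per_day):
    """Toy constructive heuristic: for each day, greedily schedule any task
    whose periodicity (minimum gap since its last execution) is satisfied,
    subject to a daily capacity limit. tasks = [(name, period_in_days), ...]
    (Illustrative sketch only, not the dissertation's TISP algorithm.)"""
    load = [0] * horizon          # how many tasks already placed per day
    last_run = {}                 # day each task last executed
    plan = []
    for day in range(horizon):
        for name, period in tasks:
            due = last_run.get(name, -period) + period
            if day >= due and load[day] < capacity_per_day:
                plan.append((day, name))
                load[day] += 1
                last_run[name] = day
    return plan

# Task A every 2 days, task B every 3 days, one inspection slot per day.
plan = schedule_inspections([("A", 2), ("B", 3)], horizon=6, capacity_per_day=1)
```

Embedding such a constructive pass inside a local-search loop over an incrementally extended horizon mirrors, at a very small scale, the iterative structure the abstract describes.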
Abstract:
Anatomical, physical, mechanical and chemical characterization of wood provides important information for its better use. However, for a wood species to become a good option for the flooring market, additional tests are needed that simulate its actual in-service conditions. These tests simulate trampling by high-heeled shoes with small pressure areas, the dragging and dropping of objects, the abrasion resistance of the surface, and the friction offered to people walking across it. A major difficulty in selecting new woods for flooring is the absence of physical-mechanical reference values. The present work aimed to characterize the woods of Eucalyptus cloeziana F. Muell, Eucalyptus microcorys F. Muell and Corymbia maculata Hook for basic density, shrinkage, rolling-load application, static and dynamic friction, indentation caused by loads applied over small areas, falling steel-ball impact, and abrasion resistance. Based on the results obtained and on comparisons with literature values, it was observed that the studied woods can be used for flooring.
Abstract:
Energy Conservation Measure (ECM) project selection is made difficult given real-world constraints, limited resources to implement savings retrofits, various suppliers in the market and project financing alternatives. Many of these energy-efficient retrofit projects should be viewed as a series of investments with annual returns for these traditionally risk-averse agencies. Given a list of ECMs available, federal, state and local agencies must determine how to implement projects at lowest cost. The most common methods of implementation planning are suboptimal relative to cost. Federal, state and local agencies can obtain greater returns on their energy conservation investment over traditional methods, regardless of the implementing organization. This dissertation outlines several approaches to improve the traditional energy conservation models. Any public buildings in regions with similar energy conservation goals in the United States or internationally can also benefit greatly from this research. Additionally, many private owners of buildings are under mandates to conserve energy; e.g., Local Law 85 of the New York City Energy Conservation Code requires any building, public or private, to meet the most current energy code for any alteration or renovation. Thus, both public and private stakeholders can benefit from this research. The research in this dissertation advances and presents models that decision-makers can use to optimize the selection of ECM projects with respect to the total cost of implementation. A practical application of a two-level mathematical program with equilibrium constraints (MPEC) improves the current best practice for agencies concerned with making the most cost-effective selection leveraging energy services companies or utilities. The two-level model maximizes savings to the agency and profit to the energy services companies (Chapter 2).
An additional model presented leverages a single congressional appropriation to implement ECM projects (Chapter 3). Returns from implemented ECM projects are used to fund additional ECM projects. In these cases, fluctuations in energy costs and uncertainty in the estimated savings severely influence ECM project selection and the amount of the appropriation requested. A proposed risk-aversion method imposes a minimum on the number of projects completed in each stage. A comparative method using Conditional Value at Risk is analyzed. Time consistency is also addressed in this chapter. This work demonstrates how a risk-based, stochastic, multi-stage model with binary decision variables at each stage provides a much more accurate estimate for planning than the agency's traditional approach and deterministic models. Finally, in Chapter 4, a rolling-horizon model allows for subadditivity and superadditivity of the energy savings to simulate interactive effects between ECM projects. The approach makes use of inequalities (McCormick, 1976) to re-express constraints that involve the product of binary variables with an exact linearization (related to the convex hull of those constraints). This model additionally shows the benefits of learning between stages while remaining consistent with the single congressional appropriations framework.
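The exact linearization referenced (McCormick, 1976) replaces a product of binary variables z = x·y with the linear inequalities z ≤ x, z ≤ y, z ≥ x + y − 1, z ≥ 0. A quick brute-force check that these inequalities pin z to exactly x·y for binary x and y:

```python
from itertools import product

def mccormick_feasible_z(x, y):
    """Integer z in {0, 1} satisfying the McCormick inequalities for a
    product of binaries: z <= x, z <= y, z >= x + y - 1 (z >= 0 is
    implied by restricting z to {0, 1})."""
    return [z for z in (0, 1) if z <= x and z <= y and z >= x + y - 1]

# For every binary pair the feasible set is the singleton {x * y},
# i.e. the linearization is exact (it describes the convex hull).
results = {(x, y): mccormick_feasible_z(x, y) for x, y in product((0, 1), repeat=2)}
```

Exactness is what lets an interactive-savings term like z·(extra savings) enter a mixed-integer model without any approximation error.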