912 results for cutting stock problem with setups
Abstract:
Firms worldwide are taking major initiatives to reduce the carbon footprint of their supply chains in response to growing governmental and consumer pressure. In real life, these supply chains face stochastic and non-stationary demand, but most studies of the inventory lot-sizing problem with emission concerns consider deterministic demand. In this paper, we study the inventory lot-sizing problem under non-stationary stochastic demand with emission and cycle service level constraints, considering a carbon cap-and-trade regulatory mechanism. Using a mixed integer linear programming model, this paper investigates the effects of emission parameters and product- and system-related features on supply chain performance through extensive computational experiments designed to cover general business settings rather than a specific scenario. Results show that the cycle service level and the demand coefficient of variation have significant impacts on total cost and total emissions irrespective of the level of demand variability, while the impact of the product's demand pattern is significant only at lower levels of demand variability. Results also show that an increasing carbon price reduces total cost, total emissions and total inventory, and that the scope for emission reduction through a higher carbon price is greater at higher levels of cycle service level and demand coefficient of variation. The analysis of the results helps supply chain managers make the right decisions in different demand and service level situations.
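To make the modelling setting concrete, here is a minimal deterministic lot-sizing sketch with a cap-and-trade emission balance, written in PuLP. It is a simplified stand-in for the paper's stochastic MILP; the single-item structure and every parameter value are illustrative assumptions, not values from the study.

```python
# Minimal single-item lot-sizing sketch under carbon cap-and-trade.
# Deterministic stand-in for the paper's stochastic MILP; all data are
# illustrative assumptions, not values from the study.
import pulp

T = range(4)                                   # planning periods
demand = [80, 100, 60, 120]                    # assumed demand per period
setup_cost, hold_cost = 150.0, 2.0
setup_em, unit_em, hold_em = 20.0, 0.5, 0.1    # emissions per setup / unit / held unit
carbon_cap, carbon_price = 250.0, 1.5          # cap-and-trade parameters (assumed)
M = sum(demand)                                # big-M linking production to setups

m = pulp.LpProblem("lot_sizing_cap_and_trade", pulp.LpMinimize)
q = pulp.LpVariable.dicts("q", T, lowBound=0)          # production quantity
y = pulp.LpVariable.dicts("y", T, cat="Binary")        # setup indicator
inv = pulp.LpVariable.dicts("inv", T, lowBound=0)      # end-of-period inventory
e_buy = pulp.LpVariable("e_buy", lowBound=0)           # allowances bought
e_sell = pulp.LpVariable("e_sell", lowBound=0)         # allowances sold

m += (pulp.lpSum(setup_cost * y[t] + hold_cost * inv[t] for t in T)
      + carbon_price * (e_buy - e_sell))               # trade at the carbon price

for t in T:
    prev = inv[t - 1] if t > 0 else 0
    m += prev + q[t] - demand[t] == inv[t]             # inventory balance
    m += q[t] <= M * y[t]                              # produce only after a setup

# total emissions = cap + net allowances bought (the cap-and-trade balance)
m += (pulp.lpSum(setup_em * y[t] + unit_em * q[t] + hold_em * inv[t] for t in T)
      == carbon_cap + e_buy - e_sell)

m.solve(pulp.PULP_CBC_CMD(msg=0))
print([q[t].value() for t in T], pulp.value(m.objective))
```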
Abstract:
Most research on stock prices is based on the present value model or the more general consumption-based model. When applied to real economic data, neither can account for both the stock price level and its volatility. The three essays here attempt both to build a more realistic model and to check whether there is still room for bubbles in explaining fluctuations in stock prices. In the second chapter, several innovations are simultaneously incorporated into the traditional present value model in order to produce more accurate model-based fundamental prices. These innovations comprise replacing the narrower, more commonly used traditional dividends with broad dividends; a nonlinear artificial neural network (ANN) forecasting procedure for these broad dividends instead of the more common linear forecasting models; and a stochastic discount rate in place of a constant discount rate. Empirical results show that this model predicts fundamental prices better than alternative models using a linear forecasting process, narrow dividends, or a constant discount factor. Nonetheless, actual prices remain largely detached from fundamental prices, and the bubble-like deviations are found to coincide with business cycles. The third chapter examines possible cointegration of stock prices with fundamentals and non-fundamentals. The output gap is introduced to form the non-fundamental part of stock prices. I use a trivariate vector autoregression (TVAR) model and a single-equation model to run cointegration tests between these three variables. Neither cointegration test shows strong evidence of explosive behavior in the DJIA and S&P 500 data. I then apply a sup augmented Dickey-Fuller test to check for the existence of periodically collapsing bubbles in stock prices; such bubbles are found in the S&P data during the late 1990s. Employing the econometric tests from the third chapter, I continue in the fourth chapter to examine whether bubbles exist in the stock prices of conventional economic sectors on the New York Stock Exchange. The ‘old economy’ as a whole is not found to have bubbles, but periodically collapsing bubbles are found in the Materials and Telecommunication Services sectors and the Real Estate industry group.
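The sup augmented Dickey-Fuller (SADF) statistic behind the bubble tests can be sketched as the supremum of right-tailed ADF statistics over forward-expanding windows (after Phillips, Wu and Yu). A minimal illustration with statsmodels follows; the window length, lag order and random-walk example are assumptions, and the critical values, which must be obtained by simulation, are omitted.

```python
# Minimal sup-ADF (SADF) sketch: a right-tailed bubble test computed as the
# supremum of ADF statistics over forward-expanding windows. Critical values
# must be simulated under the null and are omitted here.
import numpy as np
from statsmodels.tsa.stattools import adfuller

def sadf(series, min_window=40, lags=1):
    stats = []
    for end in range(min_window, len(series) + 1):
        window = series[:end]
        # first element of the adfuller return tuple is the test statistic
        stats.append(adfuller(window, maxlag=lags, autolag=None)[0])
    return max(stats)

rng = np.random.default_rng(0)
random_walk = np.cumsum(rng.normal(size=300))   # null case: unit root, no bubble
print("SADF statistic:", sadf(random_walk))
```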
Abstract:
Uncertainty quantification (UQ) is both an old and a new concept. The current novelty lies in the interaction and synthesis of mathematical models, computer experiments, statistics, field/real experiments, and probability theory, with particular emphasis on large-scale simulations by computer models. The challenges come not only from the complexity of the scientific questions, but also from the size of the information. The focus of this thesis is to provide statistical models that scale to the massive data produced in computer experiments and real experiments, through fast and robust statistical inference.
Chapter 2 provides a practical approach for simultaneously emulating/approximating a massive number of functions, with an application to hazard quantification for the Soufrière Hills volcano on the island of Montserrat. Chapter 3 addresses another massive-data problem, in which the number of observations of a function is large; an exact algorithm that is linear in time is developed for the interpolation of methylation levels. Chapters 4 and 5 both concern robust inference for the models. Chapter 4 proposes a new robustness criterion for parameter estimation, and several inference methods are shown to satisfy it. Chapter 5 develops a new prior that satisfies further criteria and is therefore proposed for use in practice.
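As one concrete picture of what "emulating" an expensive simulator means, here is a generic Gaussian-process emulator sketch in numpy: condition on a handful of model runs and predict elsewhere. The kernel, its fixed hyperparameters and the toy "simulator" are assumptions for illustration; the thesis's scalable scheme for massive numbers of functions is not reproduced here.

```python
# Generic Gaussian-process emulator sketch: condition a GP on a few simulator
# runs and predict at new inputs. Squared-exponential kernel with fixed,
# assumed hyperparameters; not the thesis's massive-scale emulation scheme.
import numpy as np

def sq_exp_kernel(a, b, length=0.3, var=1.0):
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

def gp_predict(x_train, y_train, x_new, noise=1e-8):
    k_tt = sq_exp_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    k_nt = sq_exp_kernel(x_new, x_train)
    mean = k_nt @ np.linalg.solve(k_tt, y_train)          # posterior mean
    cov = sq_exp_kernel(x_new, x_new) - k_nt @ np.linalg.solve(k_tt, k_nt.T)
    return mean, np.sqrt(np.clip(np.diag(cov), 0, None))  # mean and pointwise sd

x = np.linspace(0, 1, 8)          # design points: the few expensive runs
y = np.sin(2 * np.pi * x)         # stand-in for the simulator output
xs = np.linspace(0, 1, 50)        # prediction grid
mu, sd = gp_predict(x, y, xs)
print(mu[:5], sd[:5])
```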
A New Method for Modeling Free Surface Flows and Fluid-structure Interaction with Ocean Applications
Abstract:
The computational modeling of ocean waves and ocean-faring devices poses numerous challenges. Among these is the need to stably and accurately represent both the fluid-fluid interface between water and air and the fluid-structure interfaces arising between solid devices and one or more fluids. As techniques are developed to stably and accurately balance the interactions between fluid and structural solvers at these boundaries, a similarly pressing challenge is the development of algorithms that are massively scalable and capable of performing large-scale three-dimensional simulations on reasonable time scales. This dissertation introduces two separate methods for approaching this problem: the first focuses on the development of sophisticated fluid-fluid interface representations, and the second focuses primarily on scalability and extensibility to higher-order methods.
We begin by introducing the narrow-band gradient-augmented level set method (GALSM) for incompressible multiphase Navier-Stokes flow. This is the first use of the high-order GALSM for a fluid flow application, and its reliability and accuracy in modeling ocean environments are tested extensively. The method demonstrates numerous advantages over the traditional level set method, among them improved conservation of fluid volume and the representation of subgrid structures.
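For readers unfamiliar with the underlying idea: in a level set method the interface is carried implicitly as the zero contour of a function advected with the flow. The sketch below shows plain first-order upwind advection of a 1D level set function; it illustrates the traditional method only, since the gradient-augmented variant additionally transports the gradient of the level set function for subgrid accuracy. All grid and velocity values are assumptions.

```python
# Bare-bones level set advection: the interface is the zero crossing of phi,
# transported by velocity u via first-order upwinding. Plain level set idea
# only; the gradient-augmented variant also evolves grad(phi).
import numpy as np

n, L = 200, 1.0
dx = L / n
x = np.linspace(0.0, L, n)
phi = x - 0.3                    # signed-distance-like: interface at x = 0.3
u = 0.5                          # constant advection speed (assumed)
dt = 0.5 * dx / abs(u)           # CFL-limited time step

for _ in range(100):             # advance to t = 100 * dt = 0.5
    if u > 0:                    # difference upwind of the flow direction
        dphi = (phi - np.roll(phi, 1)) / dx
    else:
        dphi = (np.roll(phi, -1) - phi) / dx
    phi -= dt * u * dphi         # phi_t + u * phi_x = 0

interface = x[np.argmin(np.abs(phi))]
print(f"interface near x = {interface:.3f}")   # expected ~ 0.3 + u * t = 0.55
```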
Next, we present a finite-volume algorithm for solving the incompressible Euler equations in two and three dimensions in the presence of a flow-driven free surface and a dynamic rigid body. In this development, the chief concerns are efficiency, scalability, and extensibility (to higher-order and truly conservative methods). These priorities informed a number of important choices: The air phase is substituted by a pressure boundary condition in order to greatly reduce the size of the computational domain, a cut-cell finite-volume approach is chosen in order to minimize fluid volume loss and open the door to higher-order methods, and adaptive mesh refinement (AMR) is employed to focus computational effort and make large-scale 3D simulations possible. This algorithm is shown to produce robust and accurate results that are well-suited for the study of ocean waves and the development of wave energy conversion (WEC) devices.
Abstract:
Many problems in transportation, telecommunications and logistics can be modeled as network design problems. The classical problem consists of routing a flow (data, people, products, etc.) over a network under a number of constraints so as to satisfy demand while minimizing costs. In this thesis, we study the single-commodity fixed-charge capacitated network design problem, which we transform into an equivalent multicommodity problem so as to improve the lower bound obtained from the continuous relaxation of the model. The method we present for solving this problem is an exact branch-and-price-and-cut method with a stopping condition, in which we exploit column generation, cut generation and the branch-and-bound algorithm, which are among the most widely used techniques in integer linear programming. We test our method on two groups of instances of different sizes (large and very large) and compare it with the results given by CPLEX, one of the best solvers for mathematical optimization problems, as well as with a branch-and-cut method. Our method proves promising and can give good results, particularly for very large instances.
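Column generation, the core technique named above, alternates between a restricted master problem and a pricing subproblem that searches for a column with negative reduced cost. A minimal sketch in PuLP follows, using the classic cutting-stock problem as the textbook setting for this loop; the data are assumptions, and the thesis applies the idea to network design rather than cutting stock.

```python
# Minimal column-generation loop, shown on the classic cutting-stock problem.
# Restricted master (LP) chooses how often to use each cutting pattern; the
# pricing knapsack generates a new pattern while one with reduced cost < 0
# exists. Illustrative data only.
import pulp

roll_width = 10
sizes = [3, 4, 5]                 # piece lengths (assumed)
demand = [30, 20, 15]             # pieces required (assumed)
patterns = [[roll_width // w if i == j else 0 for j, w in enumerate(sizes)]
            for i, _ in enumerate(sizes)]        # start with one-size patterns

while True:
    # restricted master: minimise rolls used, meet demand (LP relaxation)
    master = pulp.LpProblem("master", pulp.LpMinimize)
    use = [pulp.LpVariable(f"x{p}", lowBound=0) for p in range(len(patterns))]
    master += pulp.lpSum(use)
    rows = []
    for i in range(len(sizes)):
        c = pulp.lpSum(patterns[p][i] * use[p]
                       for p in range(len(patterns))) >= demand[i]
        master += c
        rows.append(c)
    master.solve(pulp.PULP_CBC_CMD(msg=0))
    duals = [c.pi for c in rows]                 # dual prices of demand rows

    # pricing: knapsack seeking the most valuable feasible pattern
    price = pulp.LpProblem("pricing", pulp.LpMaximize)
    a = [pulp.LpVariable(f"a{i}", lowBound=0, cat="Integer")
         for i in range(len(sizes))]
    price += pulp.lpSum(duals[i] * a[i] for i in range(len(sizes)))
    price += pulp.lpSum(sizes[i] * a[i] for i in range(len(sizes))) <= roll_width
    price.solve(pulp.PULP_CBC_CMD(msg=0))
    if pulp.value(price.objective) <= 1 + 1e-6:  # no column with reduced cost < 0
        break
    patterns.append([int(v.value()) for v in a])

print(len(patterns), "patterns, LP bound =", pulp.value(master.objective))
```

In a full branch-and-price-and-cut method, as studied in the thesis, this loop runs at every node of the branch-and-bound tree, with cutting planes added to tighten the relaxation.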
Abstract:
Modeling cryolite, which is used in aluminum production, involves several challenges, notably the presence of discontinuities in the solution and the inclusion of the density difference between the solid and liquid phases. To overcome these challenges, several novel elements were developed in this thesis. First, the phase-change problem, commonly called the Stefan problem, was solved in two dimensions using the extended finite element method. A formulation using a specially developed stable Lagrange multiplier and an enriched interpolation was used to impose the melting temperature at the interface. The interface velocity is determined by the jump in the heat flux across the interface and was computed using the Lagrange multiplier solution. Second, convective effects were included by solving the Stokes equations in the liquid phase, also with the extended finite element method. Third, the density change between the solid and liquid phases, generally neglected in the literature, was taken into account by adding a non-zero velocity boundary condition at the solid-liquid interface so as to respect mass conservation in the system. Analytical and numerical problems were solved to validate the various components of the model and the coupled system of equations. The solutions of the numerical problems were compared with those obtained with Comsol's moving-mesh algorithm. These comparisons show that the extended finite element model correctly reproduces the phase-change problem with variable densities.
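For context, the interface conditions enforced above take the following standard form in the classical two-phase Stefan problem ($T_m$ the melting temperature, $v_n$ the normal interface speed, $L$ the latent heat, subscripts $s$/$l$ for solid and liquid):

```latex
% Melting-temperature condition on the moving interface Gamma(t)
% (imposed in the thesis via a stable Lagrange multiplier):
T = T_m \quad \text{on } \Gamma(t)

% Stefan condition: the normal interface velocity is driven by the jump in
% conductive heat flux across the interface:
\rho L \, v_n \;=\; k_s \,\nabla T_s \cdot \mathbf{n} \;-\; k_l \,\nabla T_l \cdot \mathbf{n}
```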
Abstract:
Quenched and tempered high-speed steels obtained by powder metallurgy are commonly used in automotive components, such as the valve seats of combustion engines. Machining these components requires tools with high wear resistance and appropriate cutting edge geometry. This work investigates the influence of the edge preparation of polycrystalline cubic boron nitride (PCBN) tools on wear behavior in the orthogonal longitudinal turning of quenched and tempered M2 high-speed steels obtained by powder metallurgy. PCBN tools with high and low CBN content were used. Two cutting edge geometries, both with a honed radius, were tested: with a ground land (S shape) and without one (E shape). The cutting speed was varied from 100 to 220 m/min on a rigid CNC lathe. The results showed that the high-CBN, E-shaped tool had the longest life at a cutting speed of 100 m/min. High-CBN tools with a ground land and honed edge radius (S-shaped) showed edge damage and shorter tool life. Low-CBN, S-shaped tools showed similar results, but with inferior performance compared with high-CBN tools for both forms of edge preparation.
Abstract:
The U.S. railroad companies spend billions of dollars every year on railroad track maintenance to ensure the safety and operational efficiency of their networks. Besides maintenance costs, other costs such as train accident costs, train and shipment delay costs, and rolling stock maintenance costs are also closely related to track maintenance activities. Optimizing the track maintenance process on extensive railroad networks is a very complex problem with major cost implications. Currently, the decision-making process for track maintenance planning is largely manual and relies primarily on the knowledge and judgment of experts. There is considerable potential to improve the process by using operations research techniques to solve the optimization problems in track maintenance. In this dissertation, we propose a range of mathematical models and solution algorithms for three network-level scheduling problems in track maintenance: the track inspection scheduling problem (TISP), the production team scheduling problem (PTSP) and the job-to-project clustering problem (JTPCP).

TISP involves a set of inspection teams that travel over the railroad network to identify track defects. It is a large-scale routing and scheduling problem in which thousands of tasks must be scheduled subject to many difficult side constraints, such as periodicity constraints and discrete working time constraints. A vehicle routing problem formulation was proposed for TISP, and a customized heuristic algorithm was developed to solve the model. The algorithm iteratively applies a constructive heuristic and a local search algorithm in an incremental scheduling horizon framework; a generic sketch of this pattern is given after this abstract. The proposed model and algorithm have been adopted by a Class I railroad in its decision-making process. Real-world case studies show that the proposed approach outperforms the manual approach in short-term scheduling and can be used to conduct long-term what-if analyses that yield managerial insights.

PTSP schedules capital track maintenance projects, which are the largest track maintenance activities and account for the majority of railroad capital spending. A time-space network model was proposed to formulate PTSP. More than ten types of side constraints were considered in the model, including very complex constraints such as mutual exclusion constraints and consecution constraints. A multiple neighborhood search algorithm, including a decomposition and restriction search and a block-interchange search, was developed to solve the model. Various performance enhancement techniques, such as data reduction, an augmented cost function and subproblem prioritization, were developed to improve the algorithm. The proposed approach has been adopted by a Class I railroad for two years. Our numerical results show that the model solutions satisfy all hard constraints and most soft constraints. Compared with the existing manual procedure, the proposed approach brings significant cost savings and operational efficiency improvements.

JTPCP is an intermediate problem between TISP and PTSP. It focuses on clustering thousands of capital track maintenance jobs (based on the defects identified in track inspection) into projects so that the projects can be scheduled in PTSP. A vehicle routing problem based model and a multiple-step heuristic algorithm were developed to solve this problem, considering side constraints such as mutual exclusion constraints and rounding constraints. The proposed approach has been applied in practice and has shown good performance in both solution quality and efficiency.
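The constructive-heuristic-plus-local-search pattern described for TISP can be sketched generically: extend the horizon in increments, build a feasible schedule, then improve it with a swap neighbourhood. The skeleton below is purely illustrative; the task data, objective and neighbourhood are assumptions, not the railroad's production algorithm.

```python
# Generic skeleton of the heuristic pattern described for TISP: extend the
# scheduling horizon incrementally, construct a schedule greedily, improve
# it by local search. Entirely illustrative structure and data.
import random

def construct(tasks, horizon):
    # greedy stand-in: take tasks due within the horizon, earliest due first
    return [t for t in sorted(tasks, key=lambda t: t["due"]) if t["due"] <= horizon]

def cost(schedule):
    # stand-in objective: travel between consecutive task locations
    return sum(abs(a["loc"] - b["loc"]) for a, b in zip(schedule, schedule[1:]))

def local_search(schedule, iters=500, seed=0):
    rng = random.Random(seed)
    best = schedule[:]
    for _ in range(iters):
        i, j = rng.sample(range(len(best)), 2)   # swap neighbourhood
        cand = best[:]
        cand[i], cand[j] = cand[j], cand[i]
        if cost(cand) < cost(best):              # keep improving moves only
            best = cand
    return best

locs = random.Random(1).sample(range(100), 20)
tasks = [{"due": d + 1, "loc": locs[d]} for d in range(20)]
for horizon in (5, 10, 15, 20):                  # incrementally extended horizon
    schedule = local_search(construct(tasks, horizon))
    print(f"horizon {horizon}: {len(schedule)} tasks, travel cost {cost(schedule)}")
```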
“Enjoy your baby” Internet-based CBT for mothers with babies: a feasibility randomised control trial
Abstract:
Background: Postnatal depression is a global health problem with lasting effects on the family. Government policy is focussed on early intervention and increasing access to psychological therapies. There is a growing evidence base for the use of computerised CBT packages, and this study investigated the feasibility of a CBT-based self-help internet intervention for new mothers. Objective: To assess the ability to recruit mothers, deliver an internet course, obtain follow-up data and evaluate what mothers think of the course. Design: A feasibility randomised controlled design was used to compare a waiting-list control group (delayed access = DA) with the Enjoy Your Baby course (immediate access = IA). Measures were administered at baseline and at 8-week follow-up. Methods: Adverts were placed in the Metro freesheet, on charity web pages and on social media; posters were put up in the community; and leaflets were handed out at mother and baby groups. Participants had to be 18 years old or over with a child less than 18 months old. The IA arm was given access to the course straight away. After 8 weeks, all participants were asked to recomplete the original measures, and those in the IA arm also gave feedback on the course. Participants in the DA arm were given access after recompleting the questionnaires. Due to a lack of follow-up data, a small discussion group was conducted. Intervention: The course contains 4 core modules, including helping mothers understand why they feel the way they do and helping them build closeness with their babies. Additional modules, worksheets and homework tasks were available. The DA group were given a list of additional support resources and services, and encouraged to seek additional help if required. All participants received weekly automated emails for 12 weeks as they worked through the course. It was not possible to deliver individualised support. Results: Despite the use of a number of recruitment strategies, recruitment was lower and slower than anticipated, and attrition was high. 41 women, primarily recruited via the internet, were randomised (IA n=21, DA n=20). No significant differences were observed between participants in either arm at baseline, and no statistically significant differences were identified when the demographics and baseline measures of participants who logged on to the course were compared with those who did not, or when participants who completed follow-up measures were compared with those who did not. Pre- and post-intervention scores on the EPDS approached statistical significance (P=.059, r=.444), favouring the intervention arm. The discussion group suggested strengths of the course and recommended areas for improvement, including making the course more mobile-friendly. Conclusion: Internet interventions show promise; however, it is difficult to recruit mothers, engagement is low and attrition is high. A number of recommendations are made, and a further pilot or an internal pilot of a larger substantive study should be conducted to confirm recruitment and retention. Trial ID: ISRCTN90927910.
Abstract:
In this paper we present a new type of fractional operator, the Caputo–Katugampola derivative. The Caputo and the Caputo–Hadamard fractional derivatives are special cases of this new operator. An existence and uniqueness theorem for a fractional Cauchy type problem, with dependence on the Caputo–Katugampola derivative, is proven. A decomposition formula for the Caputo–Katugampola derivative is obtained. This formula allows us to provide a simple numerical procedure to solve the fractional differential equation.
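For reference, the operator as it is usually defined (for $0 < \alpha < 1$, $\rho > 0$, left endpoint $a$) and the limits recovering its two special cases are sketched below; the decomposition formula itself is the paper's result and is not reproduced here.

```latex
% Caputo–Katugampola fractional derivative (usual definition,
% 0 < alpha < 1, rho > 0, left endpoint a):
{}^{C}\!D_{a^+}^{\alpha,\rho} x(t)
   = \frac{\rho^{\alpha}}{\Gamma(1-\alpha)}
     \int_{a}^{t} \frac{\tau^{\rho-1}}{\left(t^{\rho}-\tau^{\rho}\right)^{\alpha}}
     \left(\tau^{1-\rho} \frac{d}{d\tau}\right) x(\tau)\, d\tau

% rho -> 1 recovers the Caputo derivative (kernel (t - tau)^{-alpha});
% rho -> 0+ recovers the Caputo–Hadamard derivative
% (kernel (log(t/tau))^{-alpha}).
```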
Abstract:
Background: Non-alcoholic steatohepatitis (NASH) is a chronic liver disease that generally has a benign course but is capable of progressing to end-stage liver disease. It is a growing public health problem with no approved therapy and is projected to be the leading cause of liver transplantation in the United States by 2020. Obesity, non-insulin-dependent diabetes mellitus and hyperlipidaemia are the most common associations of the disease. The global prevalence of NASH is 10-24% in the general population but increases to 25-75% in obese diabetic individuals. Objective: There is an urgent need for efficient therapeutic options as there is still no approved medication. The aim of this study was to detect changes in biochemical parameters, including insulin resistance, cytokines, blood lipid profile and liver enzymes, following weight loss in patients with non-alcoholic steatohepatitis. Materials and methods: One hundred obese patients with NASH, aged 35-50 years with a body mass index (BMI) of 30 to 35 kg/m2, were included in the study and divided into two groups: the first group (A) received moderate aerobic exercise training in addition to a diet regimen, while the second group (B) received no treatment intervention. Results: The mean values of leptin, TNF-α, IL-6, IL-8, alanine aminotransferase (ALT), aspartate aminotransferase (AST), the Homeostasis Model Assessment of Insulin Resistance index (HOMA-IR), total cholesterol (TC), low-density lipoprotein cholesterol (LDL-c), triglycerides (TG) and BMI decreased significantly in group (A), while the mean values of adiponectin and high-density lipoprotein cholesterol (HDL-c) increased significantly; there were no significant changes in group (B). There was also a significant difference between the two groups at the end of the study. Conclusion: Weight loss modulates insulin resistance, adiponectin, leptin, inflammatory cytokine levels and markers of hepatic function in patients with non-alcoholic steatohepatitis.
Abstract:
Effective supplier evaluation and purchasing processes are of vital importance to business organizations, making the supplier selection problem a fundamental key to their success. We consider a complex supplier selection problem with multiple products, in which minimum package quantities, minimum order values related to delivery costs, and discounted pricing schemes are taken into account. Our main contribution is to present a mixed integer linear programming (MILP) model for this supplier selection problem. The model is used to solve several examples, including three real case studies from an electronic equipment assembly company.
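Two of the modelling ingredients listed above can be sketched in a toy PuLP model: minimum package quantities enforced by ordering in integer multiples of a pack size, and a minimum order value coupled to a fixed delivery cost per supplier used. All data are assumptions, and the paper's full multi-product model with discounted pricing schemes is not reproduced.

```python
# Toy supplier-selection sketch with two of the abstract's ingredients:
# minimum package quantities (order in integer multiples of a pack size)
# and a fixed delivery cost plus minimum order value per supplier used.
# Illustrative data only.
import pulp

suppliers = ["s1", "s2"]
price = {"s1": 4.0, "s2": 3.6}        # unit prices (assumed)
pack = {"s1": 10, "s2": 25}           # minimum package quantity
deliv = {"s1": 30.0, "s2": 50.0}      # fixed delivery cost if supplier used
min_val = {"s1": 40.0, "s2": 150.0}   # minimum order value when ordering
demand = 120
M = 1000                              # big-M on packs per supplier

m = pulp.LpProblem("supplier_selection", pulp.LpMinimize)
packs = pulp.LpVariable.dicts("packs", suppliers, lowBound=0, cat="Integer")
used = pulp.LpVariable.dicts("used", suppliers, cat="Binary")

qty = {s: pack[s] * packs[s] for s in suppliers}      # ordered quantity
m += pulp.lpSum(price[s] * qty[s] + deliv[s] * used[s] for s in suppliers)
m += pulp.lpSum(qty[s] for s in suppliers) >= demand
for s in suppliers:
    m += packs[s] <= M * used[s]                      # link ordering to usage
    m += price[s] * qty[s] >= min_val[s] * used[s]    # minimum order value

m.solve(pulp.PULP_CBC_CMD(msg=0))
print({s: qty[s].value() for s in suppliers}, pulp.value(m.objective))
```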