864 results for Production Inventory Model with Switching Time
Abstract:
ATM, SDH, and satellite links were used in the last century as the contribution networks of broadcasters. Over the last decade, however, the attractive price of IP networks has been changing this infrastructure. IP networks are now widely used, but under certain circumstances their characteristics do not offer the level of performance required to carry high-quality video. Data transmission is always subject to line errors. In streaming, correction is attempted at the destination, whereas in file transfer the information is retransmitted until a reliable copy of the file is obtained. In the latter case, reception time is penalized by the low priority this type of traffic usually has on the networks. While in streaming the image quality is adapted to the line speed, so that line errors result in a decrease of quality at the destination, in file copying the difference between coding speed and line speed, together with transmission errors, is reflected in an increase of transmission time. The way news or audiovisual programs are transferred from a remote office to the production centre depends on the time window and the type of line available; in many cases it must be done in real time (streaming), with the resulting image degradation. The main purpose of this work is workflow optimization and image-quality maximization. To that end, a transmission model for multimedia files adapted to JPEG2000 is described, combining the advantages of file transfer with those of streaming while setting aside the disadvantages of each. The method is based on two patents and consists of the reliable transfer of the headers and of the data considered vital for reproduction; the rest of the data is sent by streaming, allowing recovery operations and error concealment. Using this model, image quality is maximized within the available time window.
In this paper, we first give a brief overview of broadcasters' requirements and the solutions available over IP networks. We then focus on a different solution for video file transfer. Taking the example of a broadcast centre with mobile units (unidirectional video link) and regional headends (bidirectional link), we present a video file transfer method that satisfies the broadcasters' requirements.
Abstract:
Several authors have analysed how the probability density function of solar radiation changes with different time resolutions. Others have studied the significance of these changes for calculations of produced energy. We have applied different transformations to four Spanish databases in order to clarify the interrelationship between radiation models and produced-energy estimations. Our contribution is straightforward: the complexity of the solar radiation model needed for yearly energy calculations is very low. Twelve values of the monthly mean solar radiation are enough to estimate energy with errors below 3%. Time resolutions finer than hourly samples do not significantly improve the energy estimations.
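As a rough illustration of the claim that twelve monthly means suffice, a yearly energy estimate can be formed directly from them; the irradiation values, array area, and efficiency below are hypothetical, not drawn from the Spanish databases:

```python
# Minimal sketch (hypothetical values): yearly PV energy estimated from
# twelve monthly means of daily solar irradiation.

DAYS = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]

# Monthly mean daily global irradiation on the array plane, kWh/m^2/day
# (illustrative numbers for a mid-latitude site, not measured data).
monthly_mean = [2.1, 2.9, 4.0, 5.1, 6.0, 6.6, 6.9, 6.2, 4.9, 3.4, 2.3, 1.8]

AREA_M2 = 10.0        # array area (assumption)
EFFICIENCY = 0.15     # overall system efficiency (assumption)

def yearly_energy_kwh(means, days=DAYS, area=AREA_M2, eff=EFFICIENCY):
    """Sum monthly irradiation * days * area * efficiency over the year."""
    return sum(m * d for m, d in zip(means, days)) * area * eff

print(round(yearly_energy_kwh(monthly_mean), 1))  # 2385.9
```

Any finer time resolution would only refine the twelve means themselves, which is the sense in which the abstract says it adds little for yearly totals.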
Abstract:
In today's manufacturing scenario, rising energy prices, increasing ecological awareness, and changing consumer behaviors are driving decision makers to prioritize green manufacturing. The Internet of Things (IoT) paradigm promises to increase the visibility and awareness of energy consumption, thanks to smart sensors and smart meters at the machine and production line level.
Consequently, real-time energy consumption data from the manufacturing processes can be easily collected and then analyzed to improve energy-aware decision-making. This thesis aims to investigate how to utilize the adoption of the Internet of Things at the shop floor level to increase energy awareness and the energy efficiency of discrete production processes. In order to achieve the main research goal, the research is divided into four sub-objectives and is accomplished in four main phases (i.e., studies). In the first study, relying on a comprehensive literature review and on experts' insights, the thesis defines energy-efficient production management practices that are enhanced and enabled by IoT technology. The first study also explains the benefits that can be obtained by adopting such management practices. Furthermore, it presents a framework to support the integration of the gathered energy data into a company's information technology tools and platforms, with the ultimate goal of highlighting how operational and tactical decision-making processes could leverage such data to improve energy efficiency. Considering that energy prices vary over the course of a day, along with the availability of detailed machine-status energy data, the second study proposes a mathematical model to minimize energy consumption costs in single-machine production scheduling. The model makes decisions at the machine level to determine the launch times for job processing, the idle times, when the machine must be shut down, and the "turning on" and "turning off" times. It thus enables the operations manager to implement the least expensive production schedule for a production shift.
In the third study, the research provides a methodology to help managers implement the IoT at the production system level; it includes an analysis of current energy management and production systems at the factory, and recommends procedures for implementing the IoT to collect and analyze energy data. The methodology has been validated in a pilot study, where energy KPIs have been used to evaluate energy efficiency. In the fourth study, the goal is to introduce a way to achieve multi-level awareness of the energy consumed during production processes. The proposed method enables discrete-manufacturing factories to specify the energy consumption, CO2 emissions, and cost of the energy consumed at the operation, product, and order levels, while considering energy sources and fluctuations in energy prices. The results show that energy-efficient production management practices and decisions can be enhanced and enabled by the IoT. With the outcomes of the thesis, energy managers can approach IoT adoption in a benefit-driven way, by addressing the energy management practices that are closest to the factory's maturity level, targets, production type, etc. The thesis also shows that significant reductions in energy costs can be achieved by avoiding the high-price periods of the day. Furthermore, the thesis identifies the level at which energy consumption is monitored (i.e., the machine level), the sampling interval, and the depth of energy data analysis as important factors in finding opportunities to improve energy efficiency. Finally, integrating real-time energy data with production data (when production processes and their data are highly standardized) is essential to enable factories to specify the amount and cost of energy consumed, as well as the CO2 emitted, while producing a product, providing valuable information to decision makers at the factory level as well as to consumers and regulators.
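A minimal sketch of the kind of decision the second study automates — choosing when to run a job given intraday energy prices. The prices and job data below are hypothetical, and this threshold-free search is only an illustration of the idea, not the thesis's mathematical model:

```python
# Illustrative sketch: with intraday energy prices known per hour, pick
# the cheapest start time for one job of fixed duration and power on a
# single machine. Prices (EUR/kWh) and job parameters are assumptions.

PRICES = [0.10, 0.08, 0.07, 0.07, 0.09, 0.14, 0.20, 0.22]

def cheapest_start(prices, duration_h, power_kw):
    """Return (start_hour, energy_cost) minimizing the job's energy cost."""
    best = min(range(len(prices) - duration_h + 1),
               key=lambda s: sum(prices[s:s + duration_h]))
    return best, round(power_kw * sum(prices[best:best + duration_h]), 2)

print(cheapest_start(PRICES, 3, 10.0))  # (1, 2.2)
```

The full model in the thesis additionally decides idle, shutdown, and restart times; the sketch shows only the price-driven placement of a single job.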
Abstract:
The question was addressed whether the risk of cancer of an individual in a heterogeneous population can be predicted on the basis of measurable biochemical and biological variables postulated to be associated with the process of chemical carcinogenesis. Using the skin tumor model with outbred male NMRI mice, the latency time for the appearance of a papilloma was used as an indicator of the individual cancer risk. Starting at 8 weeks of age, a group of 29 mice was treated twice weekly with 20 nmol of 7,12-dimethylbenz[a]anthracene (DMBA) applied to the back skin. The individual papilloma latency time ranged from 13.5 to 25 weeks of treatment. Two weeks after the appearance of the first papilloma in each mouse, an osmotic minipump delivering 5-bromo-2'-deoxyuridine was implanted s.c. and the mouse was killed 24 hr later. Levels of DMBA-DNA adducts, of 8-hydroxy-2'-deoxyguanosine, and various measures of the kinetics of cell division were determined in the epidermis of the treated skin area. The levels of 8-hydroxy-2'-deoxyguanosine and the fraction of cells in DNA replication (labeling index for the incorporation of 5-bromo-2'-deoxyuridine) were significantly higher in those mice that showed short latency times. On the other hand, the levels of DMBA-DNA adducts were lowest in animals with short latency times. The latter finding was rather unexpected but can be explained as a consequence of the inverse correlation seen for the labeling index: with each round of cell division, the adduct concentration is reduced to 50% because the new DNA strand is free of DMBA adducts until the next treatment. Under the conditions of this bioassay, therefore, oxygen radical-related genotoxicity and the rate of cell division, rather than levels of carcinogen-DNA adducts, were found to be of predictive value as indicators of an individual cancer risk.
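The dilution argument can be made concrete with a one-line halving model (a sketch of the reasoning, not the study's assay):

```python
# Sketch of the dilution argument above: each round of cell division
# halves the DNA-adduct concentration, because the newly synthesized
# strand carries no adducts until the next carcinogen treatment.

def adduct_fraction(divisions):
    """Fraction of the initial adduct level remaining after n divisions."""
    return 0.5 ** divisions

# A fast-dividing epidermis (short latency) dilutes adducts quickly:
print(adduct_fraction(3))  # 0.125
```

This is why a high labeling index goes together with low measured adduct levels in the short-latency animals.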
Abstract:
In this paper we examine the time T to reach a critical number K0 of infections during an outbreak in an epidemic model with infective and susceptible immigrants. The underlying process X, which was first introduced by Ridler-Rowe (1967), is related to recurrent diseases and appears to be analytically intractable. We present an approximating model inspired by the use of extreme values, and we derive formulae for the Laplace-Stieltjes transform of T and its moments, which are evaluated by an iterative procedure. Numerical examples are presented to illustrate the effects of the contact and removal rates on the expected value of T and the threshold K0, when the initial time instant corresponds to an invasion time. We also study the exact reproduction number Rexact,0 and the population transmission number Rp, which are random versions of the basic reproduction number R0.
Abstract:
Different non-Fourier models of heat conduction, which incorporate time lags in the heat flux and/or the temperature gradient, have been increasingly considered in recent years to model microscale heat transfer problems in engineering. Numerical schemes to obtain approximate solutions of constant-coefficient lagging models of heat conduction have already been proposed. In this work, an explicit finite difference scheme for a model with time-variable coefficients is developed, and its convergence and stability properties are studied. Numerical computations showing examples of applications of the scheme are presented.
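For orientation only, the explicit-stepping template that such schemes build on can be sketched for the classical 1D heat equation with a time-dependent coefficient; the lagging models above add delay terms not shown here, and the coefficient a(t) below is a hypothetical choice:

```python
import numpy as np

# Explicit finite difference update for u_t = a(t) u_xx on [0, 1] with
# u = 0 at both ends; the coefficient a(t) varies in time. This is the
# classical template, not the paper's lagging scheme.

L, T = 1.0, 0.1
nx, nt = 21, 400
dx, dt = L / (nx - 1), T / nt
x = np.linspace(0.0, L, nx)

def a(t):
    return 1.0 + 0.5 * t  # hypothetical time-varying diffusivity

u = np.sin(np.pi * x)     # initial condition
for n in range(nt):
    r = a(n * dt) * dt / dx**2
    assert r <= 0.5, "explicit scheme unstable: refine dt"
    # central second difference in space, forward step in time
    u[1:-1] = u[1:-1] + r * (u[2:] - 2.0 * u[1:-1] + u[:-2])

print(round(float(u.max()), 4))
```

With a variable coefficient, the stability ratio r must be re-checked at every step, which is the main practical difference from the constant-coefficient case.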
Abstract:
Here, the pelagic carbonate system and the δ13C signature of dissolved inorganic carbon (DIC) were investigated in a tidal basin of the southern North Sea, the Jade Bay, with respect to tidal cycles and along a transect towards the North Sea in winter time (January and November 2010). Physical parameters, major and trace elements, and nutrient concentrations were also considered. Primary production and pelagic organic matter respiration were negligible during winter time. Both the compositional variations along the transects and those during the tidal cycles indicate the mixing of North Sea water with fresh water. The combined spatial co-variations of different parameters indicate an input of fresh water enriched in DI12C, metabolites (e.g., ammonia), protons, and dissolved redox-sensitive elements (e.g., Mn2+). During the January campaign, discharge via the flood gates was limited because the hinterland drainage ditches were ice-covered, allowing tidal variations to be observed without significant mixing contributions from surface water discharges. Considering a binary mixing model with North Sea water and fresh water as end-members, the extrapolated fresh-water end-member for this campaign is estimated to contain about 3.8 mmol/kg DIC, with enhanced concentrations of NH4+, Mn2+, and protons compared to North Sea water. The fast temporal response of dissolved geochemical tracers to tidal variations in the Jade Bay indicates a continuous supply of a fresh-water component. The measured composition of fresh waters entering the Jade Bay via the flood gates (end of October 2010) did not match the values estimated by the binary mixing model. Therefore, the overall fresh-water component is likely a mixture of sources originating from the flood gates and (dominating in January) submarine groundwater discharge entering the Jade Bay.
This model is consistent with the results obtained during the November campaign, when a more important contribution from flood gates is expected and a more variable fresh water end-member is estimated. The co-variations of the concentrations and the stable carbon isotope composition of DIC are applied to evaluate possible superimposed sink-source-transformation processes in the coastal waters and a general co-variation scheme is suggested.
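The binary mixing calculation can be sketched as follows. The salinity and seawater DIC end-members below are illustrative assumptions; only the ~3.8 mmol/kg fresh-water DIC is taken from the abstract:

```python
# Two-end-member conservative mixing sketch, in the spirit of the binary
# model described above. End-member values are assumptions, except the
# extrapolated fresh-water DIC of ~3.8 mmol/kg quoted in the text.

SAL_SEA, SAL_FRESH = 32.0, 0.0      # salinity end-members (assumed)
DIC_SEA, DIC_FRESH = 2.2, 3.8       # DIC in mmol/kg (fresh value from text)

def fresh_fraction(salinity):
    """Fraction of fresh water implied by a measured salinity."""
    return (SAL_SEA - salinity) / (SAL_SEA - SAL_FRESH)

def predicted_dic(salinity):
    """DIC expected from pure binary mixing at the given salinity."""
    f = fresh_fraction(salinity)
    return f * DIC_FRESH + (1.0 - f) * DIC_SEA

print(round(fresh_fraction(28.8), 2))   # 0.1
print(round(predicted_dic(28.8), 2))    # 2.36
```

Deviations of measured DIC from this mixing line are what the study attributes to additional sink-source-transformation processes.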
Abstract:
This paper presents a metafrontier production function model for firms in different groups having different technologies. The metafrontier model enables the calculation of comparable technical efficiencies for firms operating under different technologies. The model also enables the technology gaps to be estimated for firms under different technologies relative to the potential technology available to the industry as a whole. The metafrontier model is applied in the analysis of panel data on garment firms in five different regions of Indonesia, assuming that the regional stochastic frontier production function models have technical inefficiency effects with the time-varying structure proposed by Battese and Coelli (1992).
Abstract:
Biodiesel production is a very promising area because biodiesel is an environmentally friendly alternative to diesel fuels derived from fossil sources. Nowadays, most industrial biodiesel production is performed by the transesterification of renewable biological feedstocks over homogeneous acid catalysts, which requires downstream neutralization and separation, leading to a series of technical and environmental problems. Heterogeneous catalysts can overcome these issues and serve as a better alternative for biodiesel production. Thus, a heuristic diffusion-reaction kinetic model has been established to simulate the transesterification of alkyl esters with methanol over a series of heterogeneous Cs-doped heteropolyacid catalysts. The novelty of this framework lies in the detailed modeling of surface reaction kinetics and its integration with particle-level transport phenomena, carried all the way through to process design and optimisation, which has been done for a biodiesel production process for the first time. This multi-disciplinary research, combining chemistry, chemical engineering, and process integration, offers better insights into catalyst design and process intensification for the industrial application of Cs-doped heteropolyacid catalysts in biodiesel production. A case study of the transesterification of tributyrin with methanol demonstrates the effectiveness of this methodology.
Abstract:
In oscillatory reaction-diffusion systems, time-delay feedback can destabilize uniform oscillations with respect to the formation of standing waves. Here, we investigate how the presence of additive Gaussian white noise can induce the appearance of standing waves. Combining analytical solutions of the model with spatio-temporal simulations, we find that noise can promote standing waves in regimes where the deterministic uniform oscillatory modes are stabilized. As the deterministic phase boundary is approached, the spatio-temporal correlations become stronger, such that even weak noise can induce standing waves in this parameter regime. With larger noise strengths, standing waves can be induced at finite distances from the (deterministic) phase boundary. The overall dynamics is governed by the interplay of the noisy forcing with the inherent reaction-diffusion dynamics.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
This dissertation focuses mainly on coordinated pricing and inventory management problems; the related background is provided in Chapter 1. Several periodic-review models are then discussed in Chapters 2, 3, 4, and 5, respectively. Chapter 2 analyzes a deterministic single-product model in which a price adjustment cost is incurred if the current selling price differs from that of the previous period. We develop exact algorithms for the problem under different conditions and find that the computational complexity varies significantly with the cost structure. Moreover, our numerical study indicates that dynamic pricing strategies may outperform static pricing strategies even when the price adjustment cost accounts for a significant portion of the total profit. Chapter 3 develops a single-product model in which the demand of a period depends not only on the current selling price but also on past prices through the so-called reference price. Strongly polynomial time algorithms are designed for the case with no fixed ordering cost, and a heuristic is proposed for the general case together with an error bound estimation. Moreover, we illustrate through numerical studies that incorporating the reference price effect into coordinated pricing and inventory models can have a significant impact on firms' profits. Chapter 4 discusses the stochastic version of the model in Chapter 3 when customers are loss averse. It extends the associated results in the literature and proves that the reference-price-dependent base-stock policy is optimal under certain conditions. Instead of dealing with specific problems, Chapter 5 establishes the preservation of supermodularity in a class of optimization problems.
This property and its extensions include several existing results in the literature as special cases, and they provide powerful tools, as we illustrate through applications to several operations problems: the stochastic two-product model with cross-price effects, the two-stage inventory control model, and the self-financing model.
Abstract:
This paper presents a new tuning methodology for the main controller of an internal model control structure for n×n stable multivariable processes with multiple time delays, based on the centralized inverted decoupling structure. Independently of the system size, very simple general expressions for the controller elements are obtained. The realizability conditions are provided and the specification of the closed-loop requirements is explained. A diagonal filter is added to the proposed control structure in order to improve disturbance rejection without modifying the nominal set-point response. The effectiveness of the method is illustrated through different simulation examples, in comparison with other works.
Abstract:
OBJECTIVES AND STUDY METHOD: There are two subjects in this thesis: “Lot production size for a parallel machine scheduling problem with auxiliary equipment” and “Bus holding for a simulated traffic network”. Although these two themes seem unrelated, the common thread is the optimization of complex systems. The first subject deals with a manufacturing setting in which sets of pieces form finished products; the aim is to maximize the profit of the finished products. Each piece may be processed in more than one mold, and molds must be mounted on machines with their corresponding installation setup times. The key point of our methodology is to solve the single-period lot-sizing decisions for the finished products together with the piece-mold and mold-machine assignments, relaxing the constraint that a single mold may not be used on two machines at the same time. The second subject addresses one of the most annoying problems in urban bus operations: bus bunching, which happens when two or more buses arrive at a stop nose to tail. Bus bunching reflects an unreliable service that affects transit operations by increasing passenger waiting times. This work proposes a linear mathematical programming model that establishes bus holding times at certain stops along a transit corridor to avoid bus bunching. Our approach needs real-time input, so we simulate a transit corridor and apply our mathematical model to the generated data. Thus, the inherent variability of a transit system is captured by the simulation, while the optimization model takes into account the key variables and constraints of the bus operation.
CONTRIBUTIONS AND CONCLUSIONS: For the “Lot production size for a parallel machine scheduling problem with auxiliary equipment”, the relaxation we propose is able to find solutions more efficiently; moreover, our experimental results show that in most solutions the molds are non-overlapping even when they are installed on several machines. We propose an exact integer linear programming formulation, a Relax&Fix heuristic, and a multistart greedy algorithm to solve this problem. Experimental results on instances based on real-world data show the efficiency of our approaches. The mathematical model and the algorithm for the lot production size problem presented in this research can be used by production planners to support manufacturing scheduling. For the “Bus holding for a simulated traffic network”, most of the literature considers quadratic models that minimize passenger waiting times, but these are harder to solve and therefore difficult to operate in real-time systems. Our methodology, in contrast, reduces passenger waiting times efficiently with a linear programming model, applying control intervals of just 5 minutes.
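The holding idea can be sketched as a simple threshold rule (an illustration only, not the thesis's linear programming formulation); the target headway and holding cap below are assumed values:

```python
# Hand-rolled sketch of bus holding: at a control stop, a bus running
# too close behind its leader is held just long enough to restore the
# target headway, up to a maximum holding time.

TARGET_HEADWAY = 600.0   # seconds between consecutive buses (assumption)
MAX_HOLD = 120.0         # cap on holding time at a stop (assumption)

def holding_time(headway_to_leader):
    """Seconds to hold a bus whose gap to its leader is too small."""
    deficit = TARGET_HEADWAY - headway_to_leader
    return max(0.0, min(deficit, MAX_HOLD))

print(holding_time(480.0))  # bunching forming: hold 120.0 s (capped)
print(holding_time(540.0))  # mild deficit: hold 60.0 s
print(holding_time(630.0))  # at or above target: hold 0.0 s
```

An LP formulation generalizes this by choosing all holding times along the corridor jointly, subject to such bounds, which is what allows it to run within the 5-minute control intervals.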