984 results for Lead times


Relevance:

100.00%

Publisher:

Abstract:

We present a generic study of inventory costs in a factory stockroom that supplies component parts to an assembly line. Specifically, we are concerned with the increase in component inventories due to uncertainty in supplier lead-times, and the fact that several different components must be present before assembly can begin. It is assumed that the suppliers of the various components are independent, that the suppliers' operations are in statistical equilibrium, and that the same amount of each type of component is demanded by the assembly line each time a new assembly cycle is scheduled to begin. We use, as a measure of inventory cost, the expected time for which an order of components must be held in the stockroom from the time it is delivered until the time it is consumed by the assembly line. Our work reveals the effects of supplier lead-time variability, the number of different types of components, and their desired service levels, on the inventory cost. In addition, under the assumptions that inventory holding costs and the cost of delaying assembly are linear in time, we study optimal ordering policies and present an interesting characterization that is independent of the supplier lead-time distributions.
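
The abstract gives no formulas, but its core quantity — the expected time a component order waits in the stockroom because assembly cannot begin until the slowest supplier delivers — can be illustrated with a small Monte Carlo sketch. The lead-time distributions and parameter values below are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)


def expected_holding_time(lead_time_samplers, n_sims=100_000):
    """Estimate the mean time each component order sits in the stockroom.

    Assembly can only start once *all* components have arrived, so
    component i waits max_j(L_j) - L_i after its own delivery.
    `lead_time_samplers` is a list of callables, each returning one
    random lead time (independent suppliers in statistical equilibrium).
    """
    k = len(lead_time_samplers)
    waits = np.empty((n_sims, k))
    for s in range(n_sims):
        lead_times = np.array([draw() for draw in lead_time_samplers])
        waits[s] = lead_times.max() - lead_times
    return waits.mean(axis=0)  # per-component expected holding time


# Illustrative example: three suppliers with equal mean but different variability.
samplers = [
    lambda: rng.gamma(shape=4.0, scale=1.0),    # mean 4, moderate variance
    lambda: rng.gamma(shape=16.0, scale=0.25),  # mean 4, low variance
    lambda: rng.exponential(scale=4.0),         # mean 4, high variance
]
print(expected_holding_time(samplers))
```

In this framing, the effect of supplier lead-time variability and of the number of distinct components shows up directly through the distribution of the maximum lead time.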

Relevance:

60.00%

Publisher:

Abstract:

Service mismatches involve the adaptation of structural and behavioural interfaces of services, which in practice incurs long lead times through manual coding effort. We propose a framework, complementary to conventional service adaptation, to extract comprehensive and semantically normalised service interfaces, useful for interoperability in large business networks and the Internet of Services. The framework supports introspection and analysis of large and overloaded operational signatures to derive focal artefacts, namely the underlying business objects of services. A more simplified and comprehensive service interface layer is created based on these, and rendered into semantically normalised interfaces, given an ontology accrued through the framework from service analysis history. This opens up the prospect of supporting capability comparisons across services, and run-time request backtracking and adjustment, as consumers discover new features of a service's operations through corresponding features of similar services. This paper provides a first exposition of the service interface synthesis framework, describing patterns having novel requirements for unilateral service adaptation, and algorithms for interface introspection and business object alignment. A prototype implementation and analysis of web services drawn from commercial logistic systems are used to validate the algorithms and identify open challenges and future research directions.
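
As a rough illustration of the kind of interface introspection described here, the sketch below groups operation names by the noun phrase they act on to propose candidate business objects. The operation names, the verb+object naming assumption, and the grouping heuristic are all illustrative; the paper's signature analysis and ontology-based alignment algorithms are far richer.

```python
import re
from collections import defaultdict

# Hypothetical operation names from an overloaded service signature.
operations = [
    "createShipmentOrder", "updateShipmentOrder", "cancelShipmentOrder",
    "getInvoice", "sendInvoice", "trackParcel", "updateParcelStatus",
]


def candidate_business_objects(op_names):
    """Group operations by the trailing noun phrase of a verb+object name."""
    groups = defaultdict(list)
    for name in op_names:
        # Split camelCase, drop the leading verb, keep the object part.
        parts = re.findall(r"[A-Z][a-z]+|^[a-z]+", name)
        obj = "".join(parts[1:]) or name
        groups[obj].append(name)
    return dict(groups)


for obj, ops in candidate_business_objects(operations).items():
    print(obj, "->", ops)
```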

Relevance:

60.00%

Publisher:

Abstract:

The Gascoyne-Murchison region of Western Australia experiences an arid to semi-arid climate with a highly variable temporal and spatial rainfall distribution. The region has around 39.2 million hectares available for pastoral lease and supports predominantly cattle and sheep grazing leases. In recent years a number of climate forecasting systems have become available, offering rainfall probabilities with different lead times and forecast periods; however, the extent to which these systems are capable of fulfilling the requirements of local pastoralists is still ambiguous. Issues range from ensuring forecasts are issued with sufficient lead time to enable key plans or decisions to be revoked or altered, to ensuring forecast language is simple and clear so as to avoid misunderstandings in interpretation. A climate research project sought to provide an objective method to determine which available forecasting systems had the greatest forecasting skill at times of the year relevant to local property management. To aid this climate research project, the study reported here was undertaken with the overall objective of exploring local pastoralists' climate information needs. We also explored how well they understand common climate forecast terms such as 'mean', 'median' and 'probability', and how they interpret and apply forecast information to decisions. Stratified, proportional random sampling was used to derive a representative sample based on rainfall-enterprise combinations. In order to provide more time for decision-making than existing operational forecasts, which are issued with zero lead time, pastoralists requested that forecasts be issued for May-July and January-March with lead times counting down from 4 to 0 months. We found forecasts of between 20 and 50 mm of break-of-season or follow-up rainfall were likely to influence decisions. Eighty percent of pastoralists demonstrated in a test question that they had a poor technical understanding of how to interpret the standard wording of a probabilistic median rainfall forecast. This is worthy of further research to investigate whether inappropriate management decisions are being made because the forecasts are being misunderstood. We found more than half the respondents regularly access and use weather and climate forecasts or outlook information from a range of sources, and almost three-quarters considered climate information or tools useful, with preferred methods for accessing this information being email, a faxback service, the internet and the Department of Agriculture Western Australia's Pastoral Memo. Despite differences in enterprise types and rainfall seasonality across the region, we found seasonal climate forecasting needs were relatively consistent. It became clear that providing basic training and working with pastoralists to help them understand regional climatic drivers, climate terminology and jargon, and the best ways to apply the forecasts to enhance decision-making are important to improve their use of information. Consideration could also be given to engaging a range of producers to write the climate forecasts themselves, in the language they use and understand, in consultation with the scientists who prepare the forecasts.
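
Because the abstract highlights how often the standard wording of a probabilistic median rainfall forecast is misread, a small worked example may help. All numbers below are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical May-July rainfall totals (mm) for 30 past seasons.
historical = rng.gamma(shape=3.0, scale=30.0, size=30)
median = np.median(historical)

# A forecast of "70% chance of exceeding the median" does NOT predict an
# amount of rain; it says that, given current climate drivers, about 7
# seasons out of 10 like this one would end up wetter than the long-term
# median, and about 3 out of 10 would end up drier.
p_exceed = 0.70
print(f"Climatological median: {median:.0f} mm")
print(f"Forecast: {p_exceed:.0%} chance the coming season exceeds {median:.0f} mm")
print(f"Equivalently: {1 - p_exceed:.0%} chance it is drier than the median")
```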

Relevance:

60.00%

Publisher:

Abstract:

In this paper, we use reinforcement learning (RL) as a tool to study price dynamics in an electronic retail market consisting of two competing sellers and price-sensitive, lead-time-sensitive customers. Sellers, offering identical products, compete on price to satisfy stochastically arriving demands (customers), and follow standard inventory control and replenishment policies to manage their inventories. In such a generalized setting, RL techniques have not previously been applied. We consider two representative cases: 1) the no-information case, where none of the sellers has any information about customer queue levels, inventory levels, or prices at the competitors; and 2) the partial-information case, where every seller has information about the customer queue levels and inventory levels of the competitors. Sellers employ automated pricing agents, or pricebots, which use RL-based pricing algorithms to reset prices at random intervals based on factors such as the number of back orders, inventory levels, and replenishment lead times, with the objective of maximizing discounted cumulative profit. In the no-information case, we show that a seller who uses Q-learning outperforms a seller who uses derivative following (DF). In the partial-information case, we model the problem as a Markovian game and use actor-critic based RL to learn dynamic prices. We believe our approach to solving these problems is a new and promising way of setting dynamic prices in multiseller environments with stochastic demands, price-sensitive customers, and inventory replenishments.
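
A minimal sketch of the no-information case is given below: a tabular Q-learning pricebot that picks one of a few discrete prices using only its own backorder and inventory levels as state. The price grid, state discretisation, reward shape, and learning parameters are illustrative assumptions, and the environment object is a placeholder; the paper's simulation (demand arrivals, replenishment lead times, the competing DF seller) is not reproduced here.

```python
import random
from collections import defaultdict

PRICES = [8.0, 9.0, 10.0, 11.0]          # discrete price grid (assumed)
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1   # learning rate, discount, exploration

Q = defaultdict(lambda: [0.0] * len(PRICES))


def choose_price(state):
    """Epsilon-greedy action selection over the price grid."""
    if random.random() < EPSILON:
        return random.randrange(len(PRICES))
    qvals = Q[state]
    return max(range(len(PRICES)), key=qvals.__getitem__)


def update(state, action, reward, next_state):
    """Standard one-step Q-learning update."""
    best_next = max(Q[next_state])
    Q[state][action] += ALPHA * (reward + GAMMA * best_next - Q[state][action])


def run_episode(env, steps=1000):
    """`env` is a placeholder for the seller's market simulation: it must
    expose reset() -> state and step(price) -> (next_state, profit), where a
    state could be e.g. (backorder bucket, inventory bucket)."""
    state = env.reset()
    for _ in range(steps):
        a = choose_price(state)
        next_state, profit = env.step(PRICES[a])
        update(state, a, profit, next_state)
        state = next_state
```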

Relevance:

60.00%

Publisher:

Abstract:

This study uses the European Centre for Medium-Range Weather Forecasts (ECMWF) model-generated high-resolution 10-day-long predictions for the Year of Tropical Convection (YOTC) 2008. Precipitation forecast skills of the model over the tropics are evaluated against the Tropical Rainfall Measuring Mission (TRMM) estimates. It has been shown that the model was able to capture the monthly to seasonal mean features of tropical convection reasonably well. Northward propagation of convective bands over the Bay of Bengal was also forecast realistically up to 5 days in advance, including the onset phase of the monsoon during the first half of June 2008. However, large errors exist in the daily datasets, especially for longer lead times over smaller domains. For shorter lead times (less than 4-5 days), forecast errors are much smaller over the oceans than over land. Moreover, the rate of increase of errors with lead time is rapid over the oceans and is confined to the regions where observed precipitation shows large day-to-day variability. It has been shown that this rapid growth of errors over the oceans is related to the spatial pattern of near-surface air temperature. This is probably due to the one-way air-sea interaction in the atmosphere-only model used for forecasting. While the prescribed surface temperature over the oceans remains realistic at shorter lead times, the pattern, and hence the gradient, of the surface temperature is not altered with changes in atmospheric parameters at longer lead times. It has also been shown that the ECMWF model had considerable difficulty in forecasting very low and very heavy precipitation intensities over South Asia. The model has too few grid points with zero precipitation and with heavy (>40 mm day⁻¹) precipitation. On the other hand, drizzle-like precipitation is too frequent in the model compared to the TRMM datasets. Further analysis shows that a major source of error in the ECMWF precipitation forecasts is the diurnal cycle over the South Asian monsoon region. The peak intensity of precipitation in the model forecasts over land (ocean) appears about 6 (9) h earlier than in the observations. Moreover, the amplitude of the diurnal cycle is much higher in the model forecasts than in the TRMM estimates. The phase error of the diurnal cycle increases with forecast lead time. The error in monthly mean 3-hourly precipitation forecasts is about 2-4 times the error in the daily mean datasets. Thus, effort should be devoted to improving the phase and amplitude of the model's forecast diurnal cycle of precipitation.
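
The kind of verification described here — comparing forecasts at each lead time against TRMM estimates — reduces to computing an error metric as a function of lead time. The sketch below assumes the forecasts and observations have already been regridded onto a common (start, lead, lat, lon) array; the array shapes, synthetic data, and choice of RMSE are illustrative, not the study's exact methodology.

```python
import numpy as np


def rmse_by_lead_time(forecasts, observations):
    """Root-mean-square error for each forecast lead time.

    forecasts:    array of shape (n_starts, n_lead_days, nlat, nlon)
    observations: array of the same shape, i.e. TRMM precipitation
                  matched to each forecast day.
    """
    err = forecasts - observations
    return np.sqrt(np.nanmean(err ** 2, axis=(0, 2, 3)))  # one value per lead


# Example with synthetic data: errors that grow with lead time.
rng = np.random.default_rng(2)
obs = rng.gamma(2.0, 3.0, size=(60, 10, 40, 80))
fc = obs + rng.normal(0, np.arange(1, 11)[None, :, None, None], size=obs.shape)
print(rmse_by_lead_time(fc, obs))  # RMSE increasing from day 1 to day 10
```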

Relevance:

60.00%

Publisher:

Abstract:

Perfect or even mediocre weather predictions over a long period are almost impossible because a small initial error ultimately grows into a significant one. Even though sensitivity to initial conditions limits predictability in chaotic systems, an ensemble of predictions from different possible initial conditions, together with a prediction algorithm capable of resolving the fine structure of the chaotic attractor, can reduce the prediction uncertainty to some extent. All of the traditional chaotic prediction methods in hydrology are based on single-optimum-initial-condition local models, which can model the sudden divergence of trajectories with different local functions. Conceptually, global models are ineffective in modeling the highly unstable structure of the chaotic attractor. This paper focuses on an ensemble prediction approach that reconstructs the phase space using different combinations of the chaotic parameters, i.e., embedding dimension and delay time, to quantify the uncertainty in initial conditions. The ensemble approach is implemented through a local-learning wavelet network model with a global feed-forward neural network structure for phase-space prediction of chaotic streamflow series. Uncertainties in future predictions are quantified by creating an ensemble of predictions with the wavelet network over a range of plausible embedding dimensions and delay times. The ensemble approach proved to be 50% more efficient than single prediction for both the local approximation and wavelet network approaches, and the wavelet network approach proved to be 30%-50% better than the local approximation approach. Compared to the traditional local approximation approach with a single initial condition, the total predictive uncertainty in the streamflow is reduced when modeled with ensemble wavelet networks for different lead times. The localization property of wavelets, through different dilation and translation parameters, helps capture most of the statistical properties of the observed data. The need to take into account all plausible initial conditions, and to bring together the characteristics of both local and global approaches to model the unstable yet ordered chaotic attractor of a hydrologic series, is clearly demonstrated.
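
A rough sketch of the ensemble idea is shown below: reconstruct the phase space of a streamflow series with several plausible (embedding dimension, delay) pairs and generate one prediction per reconstruction, so the spread of predictions quantifies uncertainty. The simple nearest-neighbour local predictor stands in for the paper's local-learning wavelet network, and the parameter ranges and synthetic series are illustrative assumptions.

```python
import numpy as np


def delay_embed(x, m, tau):
    """Takens delay embedding: row j is [x[j], x[j+tau], ..., x[j+(m-1)*tau]],
    so the last row is the most recent reconstructed state."""
    n = len(x) - (m - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(m)])


def local_predict(x, m, tau, k=5):
    """Predict the next value from the k nearest phase-space neighbours."""
    emb = delay_embed(x, m, tau)
    targets = x[(m - 1) * tau + 1 :]          # value following each state
    states, query = emb[: len(targets)], emb[-1]
    dists = np.linalg.norm(states - query, axis=1)
    return targets[np.argsort(dists)[:k]].mean()


def ensemble_predict(x, dims=(3, 4, 5, 6), taus=(1, 2, 3)):
    """One forecast per plausible (m, tau) pair."""
    return np.array([local_predict(x, m, tau) for m in dims for tau in taus])


# Example: ensemble of one-step-ahead forecasts for a synthetic series.
x = np.sin(0.3 * np.arange(500)) + 0.1 * np.random.default_rng(6).standard_normal(500)
preds = ensemble_predict(x)
print(preds.mean(), preds.std())   # ensemble mean and spread (uncertainty)
```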

Relevance:

60.00%

Publisher:

Abstract:

The initial objective of Part I was to determine the nature of upper mantle discontinuities, the average velocities through the mantle, and differences between mantle structure under continents and oceans by the use of P'dP', the seismic core phase P'P' (PKPPKP) that reflects at depth d in the mantle. In order to accomplish this, it was found necessary to also investigate core phases themselves and their inferences on core structure. P'dP' at both single stations and at the LASA array in Montana indicates that the following zones are candidates for discontinuities with varying degrees of confidence: 800-950 km, weak; 630-670 km, strongest; 500-600 km, strong but interpretation in doubt; 350-415 km, fair; 280-300 km, strong, varying in depth; 100-200 km, strong, varying in depth, may be the bottom of the low-velocity zone. It is estimated that a single station cannot easily discriminate between asymmetric P'P' and P'dP' for lead times of about 30 sec from the main P'P' phase, but the LASA array reduces this uncertainty range to less than 10 sec. The problems of scatter of P'P' main-phase times, mainly due to asymmetric P'P', incorrect identification of the branch, and lack of the proper velocity structure at the velocity point, are avoided and the analysis shows that one-way travel of P waves through oceanic mantle is delayed by 0.65 to 0.95 sec relative to United States mid-continental mantle.

A new P-wave velocity core model is constructed from observed times, dt/dΔ's, and relative amplitudes of P'; the observed times of SKS, SKKS, and PKiKP; and a new mantle-velocity determination by Jordan and Anderson. The new core model is smooth except for a discontinuity at the inner-core boundary determined to be at a radius of 1215 km. Short-period amplitude data do not require the inner core Q to be significantly lower than that of the outer core. Several lines of evidence show that most, if not all, of the arrivals preceding the DF branch of P' at distances shorter than 143° are due to scattering as proposed by Haddon and not due to spherically symmetric discontinuities just above the inner core as previously believed. Calculation of the travel-time distribution of scattered phases and comparison with published data show that the strongest scattering takes place at or near the core-mantle boundary close to the seismic station.

In Part II, the largest events in the San Fernando earthquake series, initiated by the main shock at 14 00 41.8 GMT on February 9, 1971, were chosen for analysis from the first three months of activity, 87 events in all. The initial rupture location coincides with the lower, northernmost edge of the main north-dipping thrust fault and the aftershock distribution. The best focal mechanism fit to the main shock P-wave first motions constrains the fault plane parameters to: strike, N 67° (± 6°) W; dip, 52° (± 3°) NE; rake, 72° (67°-95°) left lateral. Focal mechanisms of the aftershocks clearly outline a downstep of the western edge of the main thrust fault surface along a northeast-trending flexure. Faulting on this downstep is left-lateral strike-slip and dominates the strain release of the aftershock series, which indicates that the downstep limited the main event rupture on the west. The main thrust fault surface dips at about 35° to the northeast at shallow depths and probably steepens to 50° below a depth of 8 km. This steep dip at depth is a characteristic of other thrust faults in the Transverse Ranges and indicates the presence at depth of laterally-varying vertical forces that are probably due to buckling or overriding that causes some upward redirection of a dominant north-south horizontal compression. Two sets of events exhibit normal dip-slip motion with shallow hypocenters and correlate with areas of ground subsidence deduced from gravity data. Several lines of evidence indicate that a horizontal compressional stress in a north or north-northwest direction was added to the stresses in the aftershock area 12 days after the main shock. After this change, events were contained in bursts along the downstep and sequencing within the bursts provides evidence for an earthquake-triggering phenomenon that propagates with speeds of 5 to 15 km/day. Seismicity before the San Fernando series and the mapped structure of the area suggest that the downstep of the main fault surface is not a localized discontinuity but is part of a zone of weakness extending from Point Dume, near Malibu, to Palmdale on the San Andreas fault. This zone is interpreted as a decoupling boundary between crustal blocks that permits them to deform separately in the prevalent crustal-shortening mode of the Transverse Ranges region.

Relevance:

60.00%

Publisher:

Abstract:

18 p.

Relevance:

60.00%

Publisher:

Abstract:

Expansion of economic activities, urbanisation, increased resource use and population growth are continuously increasing the vulnerability of the coastal zone. This vulnerability is now further raised by the threat of climate change and accelerated sea level rise. The potentially severe impacts force policy-makers to also consider long-term planning for climate change and sea level rise. For reasons of efficiency and effectiveness this long-term planning should be integrated with existing short-term plans, thus creating an Integrated Coastal Zone Management programme. As a starting point for coastal zone management, the assessment of a country's or region's vulnerability to accelerated sea level rise is of utmost importance. The Intergovernmental Panel on Climate Change has developed a common methodology for this purpose. Studies carried out according to this Common Methodology have been compared and combined, from which general conclusions on local, regional and global vulnerability have been drawn, the latter in the form of a Global Vulnerability Assessment. In order to address the challenge of coping with climate change and accelerated sea level rise, it is essential to foresee the possible impacts, and to take precautionary action. Because of the long lead times needed for creating the required technical and institutional infrastructures, such action should be taken in the short term. Furthermore, it should be part of a broader coastal zone management and planning context. This will require a holistic view, shared by the different institutional levels that exist, along which different needs and interests should be balanced.

Relevance:

60.00%

Publisher:

Abstract:

In some supply chains, materials are ordered periodically according to local information. This paper investigates how to improve the performance of such a supply chain. Specifically, we consider a serial inventory system in which each stage implements a local reorder interval policy; i.e., each stage orders up to a local basestock level according to a fixed-interval schedule. A fixed cost is incurred for placing an order. Two improvement strategies are considered: (1) expanding the information flow by acquiring real-time demand information and (2) accelerating the material flow via flexible deliveries. The first strategy leads to a reorder interval policy with full information; the second strategy leads to a reorder point policy with local information. Both policies have been studied in the literature. Thus, to assess the benefit of these strategies, we analyze the local reorder interval policy. We develop a bottom-up recursion to evaluate the system cost and provide a method to obtain the optimal policy. A numerical study shows the following: Increasing the flexibility of deliveries lowers costs more than does expanding information flow; the fixed order costs and the system lead times are key drivers that determine the effectiveness of these improvement strategies. In addition, we find that using optimal batch sizes in the reorder point policy and demand rate to infer reorder intervals may lead to significant cost inefficiency. © 2010 INFORMS.
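
To make the local reorder interval policy concrete, the sketch below simulates a single stage that reviews its inventory position every T periods and orders up to a local base-stock level, paying a fixed cost per order; a serial system would chain several such stages, each seeing only its downstream neighbour's orders. The Poisson demand and all parameter values are illustrative assumptions, not the paper's test bed.

```python
import numpy as np

rng = np.random.default_rng(3)


def simulate_stage(T, base_stock, lead_time, horizon=10_000,
                   fixed_cost=50.0, hold=1.0, backlog=9.0, demand_rate=5.0):
    """Single-stage periodic order-up-to policy with a fixed order cost.

    Every T periods the stage raises its inventory position
    (on hand - backorders + on order) back to `base_stock`.
    Orders arrive `lead_time` periods later. Returns average cost/period.
    """
    inv = base_stock            # net inventory (on hand minus backorders)
    pipeline = []               # outstanding orders as (arrival_period, quantity)
    total_cost = 0.0
    for t in range(horizon):
        # receive deliveries due this period
        arrived = sum(q for (due, q) in pipeline if due == t)
        pipeline = [(due, q) for (due, q) in pipeline if due != t]
        inv += arrived
        # stochastic demand
        inv -= rng.poisson(demand_rate)
        # periodic review: order up to the local base-stock level
        if t % T == 0:
            position = inv + sum(q for (_, q) in pipeline)
            if position < base_stock:
                pipeline.append((t + lead_time, base_stock - position))
                total_cost += fixed_cost
        total_cost += hold * max(inv, 0) + backlog * max(-inv, 0)
    return total_cost / horizon


print(simulate_stage(T=4, base_stock=40, lead_time=3))
```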

Relevance:

60.00%

Publisher:

Abstract:

This paper analyzes a class of common-component allocation rules, termed no-holdback (NHB) rules, in continuous-review assemble-to-order (ATO) systems with positive lead times. The inventory of each component is replenished following an independent base-stock policy. In contrast to the usually assumed first-come-first-served (FCFS) component allocation rule in the literature, an NHB rule allocates a component to a product demand only if it will yield immediate fulfillment of that demand. We identify metrics as well as cost and product structures under which NHB rules outperform all other component allocation rules. For systems with certain product structures, we obtain key performance expressions and compare them to those under FCFS. For general product structures, we present performance bounds and approximations. Finally, we discuss the applicability of these results to more general ATO systems. © 2010 INFORMS.
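
The defining property of a no-holdback rule can be stated in a few lines of code: a demand is served only if every component it needs is on hand, so no unit is committed to a demand that would still have to wait. The data structures below (a demand's component requirements and on-hand levels as dicts) are illustrative assumptions.

```python
def nhb_allocate(on_hand, demand_bom):
    """Allocate components to a demand only if it can be filled immediately.

    on_hand:    dict component -> units available
    demand_bom: dict component -> units required by this product demand
    Returns True and decrements inventory if the demand is fully served;
    otherwise returns False and allocates nothing (the no-holdback rule).
    Under FCFS, by contrast, available units would be committed to the
    waiting demand even though it cannot yet be assembled.
    """
    if all(on_hand.get(c, 0) >= q for c, q in demand_bom.items()):
        for c, q in demand_bom.items():
            on_hand[c] -= q
        return True
    return False


# Example: product needs one unit each of components A and B.
stock = {"A": 1, "B": 0}
print(nhb_allocate(stock, {"A": 1, "B": 1}))  # False: B missing, A stays free
stock["B"] = 1
print(nhb_allocate(stock, {"A": 1, "B": 1}))  # True: both on hand, demand filled
```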

Relevance:

60.00%

Publisher:

Abstract:

Numerical modelling technology and software are now being used to underwrite the design of many microelectronic and microsystems components. The demands for greater capability of these analysis tools are increasing dramatically, as the user community is faced with the challenge of producing reliable products in ever shorter lead times. This leads to the requirement for analysis tools to represent the interactions amongst the distinct phenomena and physics at multiple length scales and timescales. Multi-physics and multi-scale technology is now becoming a reality with many code vendors. This chapter discusses the current status of modelling tools that assess the impact of nano-technology on the fabrication/packaging and testing of microsystems. The chapter is broken down into the following sections: Modelling Technologies; Modelling Application to Fabrication; Modelling Application to Assembly/Packaging; and Modelling Applied to Test and Metrology.

Relevance:

60.00%

Publisher:

Abstract:

Biodegradable polymers, such as PLA (Polylactide), come from renewable resources such as corn starch and, if disposed of correctly, degrade and become harmless to the ecosystem, making them attractive alternatives to petroleum-based polymers. PLA in particular is used in a variety of applications including medical devices, food packaging and waste disposal packaging. However, the industry faces challenges in melt processing of PLA due to its poor thermal stability, which is influenced by processing temperatures and shearing.
Identification and control of suitable processing conditions is extremely challenging, usually relying on trial and error, and is often sensitive to batch-to-batch variations. Off-line assessment in a lab environment can result in high scrap rates, long lead times, and lengthy and expensive process development. Scrap rates are typically in the region of 25-30% for medical-grade PLA costing between €2000 and €5000/kg.
Additives are used to enhance material properties, such as mechanical properties, and may also have a therapeutic role in the case of bioresorbable medical devices; for example, the release of calcium from orthopaedic implants such as fixation screws promotes healing. Additives can also reduce costs, as less of the polymer resin is required.
This study investigates the scope for monitoring, modelling and optimising processing conditions for twin screw extrusion of PLA and PLA with calcium carbonate to achieve desired material properties. A DAQ system has been constructed to gather data from a bespoke measurement die, comprising melt temperature; pressure drop along the length of the die; and UV-Vis spectral data, which is shown to correlate with filler dispersion. Trials were carried out under a range of processing conditions using a Design of Experiments approach, and samples were tested for mechanical properties, degradation rate and the release rate of calcium. Relationships between the recorded process data and the material characterisation results are explored.
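
The Design of Experiments step described above can be sketched as follows: build a small full-factorial set of extrusion conditions and fit a simple main-effects model relating them to a measured response. The factors, levels, and response below are placeholders; the study's actual factors, die measurements, and characterisation results are not reproduced.

```python
import itertools
import numpy as np

# Hypothetical two-level factors for twin screw extrusion of PLA.
factors = {
    "barrel_temp_C": (180, 200),
    "screw_speed_rpm": (100, 200),
    "caco3_wt_pct": (0, 10),
}

# Full factorial design: every combination of factor levels (8 runs here).
design = list(itertools.product(*factors.values()))

# Placeholder responses, e.g. tensile strength measured for each run.
rng = np.random.default_rng(4)
response = rng.normal(50, 5, size=len(design))

# Fit main effects by least squares: y ~ intercept + coded (-1/+1) factors.
X = np.array([[1.0] + [(-1.0 if lvl == lo else 1.0)
                       for lvl, (lo, _) in zip(run, factors.values())]
              for run in design])
coef, *_ = np.linalg.lstsq(X, response, rcond=None)
for name, effect in zip(["intercept"] + list(factors), coef):
    print(f"{name}: {effect:+.2f}")
```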

Relevance:

60.00%

Publisher:

Abstract:

This thesis, entitled Analysis of Some Stochastic Models in Inventories and Queues, is devoted to the study of some stochastic models in inventories and queues that are physically realisable, though complex. It contains a detailed analysis of the basic stochastic processes underlying these models. In this thesis, (s,S) inventory systems with non-identically distributed interarrival demand times and random lead times, state-dependent demands, varying ordering levels, and perishable commodities with exponential lifetimes have been studied. The queueing systems analysed include the Ek/Ga,b/1 queue with server vacations, service systems with single and batch services, queueing systems with phase-type arrival and service processes, and the finite-capacity M/G/1 queue in which the server goes on vacation after serving a random number of customers. The analogy between queueing systems and inventory systems could be exploited in solving certain models. In vacation models, one important result is the stochastic decomposition property of the system size or waiting time; one could think of extending this to the transient case. In inventory theory, the present study could be extended to multi-item, multi-echelon problems. The study of the perishable inventory problem when the commodities have a general lifetime distribution would also be quite an interesting problem.
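
As a minimal illustration of the kind of (s,S) inventory system studied in the thesis, the sketch below simulates a continuous-review (s,S) policy with random interarrival demand times and random lead times. The exponential distributions and parameter values are illustrative assumptions; the thesis treats far more general cases (non-identically distributed interarrivals, state-dependent demands, perishable stock).

```python
import heapq
import random

random.seed(5)


def simulate_sS(s=5, S=20, horizon=10_000.0,
                mean_interarrival=1.0, mean_lead_time=3.0):
    """Continuous-review (s,S) policy with unit demands.

    When the inventory position drops to s or below and no order is
    outstanding, order up to S; the order arrives after a random
    (exponential) lead time. Returns the time-average net inventory.
    """
    t, inv, on_order = 0.0, S, False
    area = 0.0
    events = [(random.expovariate(1.0 / mean_interarrival), "demand", 0)]
    while events:
        when, kind, qty = heapq.heappop(events)
        if when > horizon:
            break
        area += inv * (when - t)
        t = when
        if kind == "demand":
            inv -= 1
            heapq.heappush(events,
                           (t + random.expovariate(1.0 / mean_interarrival), "demand", 0))
            if inv <= s and not on_order:
                on_order = True
                heapq.heappush(events,
                               (t + random.expovariate(1.0 / mean_lead_time),
                                "delivery", S - inv))
        else:  # delivery of the outstanding order
            inv += qty
            on_order = False
    return area / t


print(simulate_sS())
```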