967 results for "Historical cost data"


Relevance: 100.00%

Abstract:

We present a vertically resolved zonal mean monthly mean global ozone data set spanning the period 1901 to 2007, called HISTOZ.1.0. It is based on a new approach that combines information from an ensemble of chemistry climate model (CCM) simulations with historical total column ozone information. The CCM simulations incorporate important external drivers of stratospheric chemistry and dynamics (in particular solar and volcanic effects, greenhouse gases and ozone depleting substances, sea surface temperatures, and the quasi-biennial oscillation). The historical total column ozone observations include ground-based measurements from the 1920s onward and satellite observations from 1970 to 1976. An off-line data assimilation approach is used to combine model simulations, observations, and information on the observation error. The period starting in 1979 was used for validation against existing ozone data sets, and therefore only ground-based measurements were assimilated. Results demonstrate considerable skill from the CCM simulations alone. Assimilating observations provides additional skill for total column ozone. With respect to the vertical ozone distribution, assimilating observations increases the correlation with a reference data set on average, but does not decrease the mean squared error. Analyses of HISTOZ.1.0 with respect to the effects of El Niño–Southern Oscillation (ENSO) and of the 11 yr solar cycle on stratospheric ozone from 1934 to 1979 qualitatively confirm previous studies that focussed on the post-1979 period. The ENSO signature exhibits a much clearer imprint of a change in strength of the Brewer–Dobson circulation compared to the post-1979 period. The imprint of the 11 yr solar cycle is slightly weaker in the earlier period. Furthermore, the total column ozone increase from the 1950s to around 1970 at northern mid-latitudes is briefly discussed. Indications of contributions from a tropospheric ozone increase, greenhouse gases, and changes in atmospheric circulation are found. Finally, the paper points to several possible future improvements of HISTOZ.1.0.
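The core of the off-line assimilation step described above can be illustrated with a minimal optimal-interpolation style update for a single value: the model background and the observation are blended with weights given by their error variances. This is a generic sketch, not the actual HISTOZ.1.0 implementation, and all numbers are illustrative.

```python
# Minimal sketch of an optimal-interpolation (Kalman-style) update that
# blends a model background value with an observation, weighting each by
# the inverse of its error variance. Illustrative only; not HISTOZ code.

def oi_update(background, obs, var_b, var_o):
    """Return the analysis value and its (reduced) error variance."""
    gain = var_b / (var_b + var_o)          # weight given to the observation
    analysis = background + gain * (obs - background)
    var_a = (1.0 - gain) * var_b            # analysis variance shrinks
    return analysis, var_a

# Hypothetical total column ozone values in Dobson units, equal error
# variances, so the analysis lands halfway between background and obs.
analysis, var_a = oi_update(background=300.0, obs=310.0, var_b=25.0, var_o=25.0)
print(analysis)  # 305.0
```

With equal error variances the gain is 0.5 and the analysis splits the difference; a more trusted observation (smaller `var_o`) would pull the analysis closer to it.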

Relevance: 100.00%

Abstract:

Objective To review critically the statistical methods used for health economic evaluations in randomised controlled trials where an estimate of cost is available for each patient in the study.

Relevance: 100.00%

Abstract:

National Highway Traffic Safety Administration, Office of Research and Development, Washington, D.C.

Relevance: 100.00%

Abstract:

On cover: Water quality management.

Relevance: 100.00%

Abstract:

Organisations are constantly seeking efficiency improvements for their business processes in terms of time and cost. Management accounting enables reporting of the detailed cost of operations for decision-making purposes, although significant effort is required to gather accurate operational data. Business process management is concerned with systematically documenting, managing, automating, and optimising processes. Process mining gives valuable insight into processes through analysis of events recorded by an IT system in the form of an event log, with an emphasis on efficient utilisation of time and resources; its primary focus, however, is not on cost implications. In this paper, we propose a framework to support management accounting decisions on cost control by automatically incorporating cost data with historical data from event logs for monitoring, predicting and reporting process-related costs. We also illustrate how accurate, relevant and timely management accounting style cost reports can be produced on demand by extending the open-source process mining framework ProM.
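The basic merge the framework performs — attaching cost data to event-log entries to report process-related costs per case — can be sketched in a few lines. The log entries and hourly rates below are hypothetical, and ProM itself (a Java framework) is not used here; this only illustrates the idea.

```python
# Sketch: merging hypothetical activity cost rates with event-log data to
# report per-case process cost. Illustrative data; not the ProM extension.

from collections import defaultdict

# Event log: (case_id, activity, duration_hours)
event_log = [
    ("case1", "register", 0.5),
    ("case1", "approve", 1.0),
    ("case2", "register", 0.5),
    ("case2", "reject", 0.25),
]

# Cost data: hourly rate per activity (hypothetical accounting input)
hourly_rate = {"register": 40.0, "approve": 80.0, "reject": 60.0}

def case_costs(log, rates):
    """Total cost per case: sum of duration * rate over its events."""
    costs = defaultdict(float)
    for case_id, activity, hours in log:
        costs[case_id] += hours * rates[activity]
    return dict(costs)

print(case_costs(event_log, hourly_rate))
# case1: 0.5*40 + 1.0*80 = 100.0; case2: 0.5*40 + 0.25*60 = 35.0
```

A real implementation would read the event log in a standard format such as XES and support richer cost schemas (fixed costs per activity, resource-specific rates), but the join of cost data onto log events is the essential step.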

Relevance: 100.00%

Abstract:

Organisations are constantly seeking efficiency gains for their business processes in terms of time and cost. Management accounting enables detailed cost reporting of business operations for decision making purposes, although significant effort is required to gather accurate operational data. Process mining, on the other hand, may provide valuable insight into processes through analysis of events recorded in logs by IT systems, but its primary focus is not on cost implications. In this paper, a framework is proposed which aims to exploit the strengths of both fields in order to better support management decisions on cost control. This is achieved by automatically merging cost data with historical data from event logs for the purposes of monitoring, predicting, and reporting process-related costs. The on-demand generation of accurate, relevant and timely cost reports, in a style akin to reports in the area of management accounting, will also be illustrated. This is achieved through extending the open-source process mining framework ProM.

Relevance: 100.00%

Abstract:

The stress release model, a stochastic version of the elastic-rebound theory, is applied to historical earthquake data from three strong earthquake-prone regions of China: the North China, Southwest China, and Taiwan seismic regions. The results show that seismicity along a plate boundary (Taiwan) is more active than in intraplate regions (North and Southwest China). The degree of predictability or regularity of seismic events in these regions, based on both the Akaike information criterion (AIC) and the fitted sensitivity parameters, follows the order Taiwan, Southwest China, and North China, which is further corroborated by numerical simulations. (c) 2004 Elsevier Ltd. All rights reserved.
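The AIC-based ranking mentioned above reduces to a simple computation: for each fitted model, AIC = 2k - 2 ln L, and lower values indicate a better trade-off between fit and complexity. The log-likelihoods and parameter counts below are made up purely to show the mechanics of such a comparison.

```python
# Sketch of ranking fitted regional models by the Akaike information
# criterion. The log-likelihood values are hypothetical, not the paper's.

def aic(log_likelihood, n_params):
    """AIC = 2k - 2 ln L; lower is better."""
    return 2 * n_params - 2 * log_likelihood

# (maximised log-likelihood, number of fitted parameters) per region
models = {
    "Taiwan": (-120.0, 3),
    "Southwest China": (-150.0, 3),
    "North China": (-170.0, 3),
}

ranked = sorted(models, key=lambda region: aic(*models[region]))
print(ranked)  # region with the lowest AIC (best trade-off) first
```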

Relevance: 100.00%

Abstract:

Data have been collected on fisheries catch and effort trends since the latter half of the 1800s. With current trends in declining stocks and stricter management regimes, data need to be collected and analyzed over shorter periods and at finer spatial resolution than in the past. New methods of electronic reporting may reduce the lag time in data collection and provide more accurate spatial resolution. In this study I evaluated the differences between fish dealer and vessel reporting systems for federal fisheries in the US New England and Mid-Atlantic areas. Using data on landing date, report date, gear used, port landed, number of hauls, number of fish sampled and species quotas from available catch and effort records, I compared electronically collected dealer and vessel data against paper-collected dealer and vessel data to determine whether electronically collected data are timelier and more accurate. To determine whether vessel or dealer electronic reporting is more useful for management, I determined differences in timeliness and accuracy between vessel and dealer electronic reports. I also compared the cost and efficiency of these new methods with less technology-intensive reporting methods, using available cost data and surveys of seafood dealers for cost information. Using this information I identified potentially unnecessary duplication of effort and identified applications in ecosystem-based fisheries management. This information can be used to guide the decisions of fisheries managers in the United States and other countries that are attempting to identify appropriate fisheries reporting methods for the management regimes under consideration.

Relevance: 100.00%

Abstract:

Industrial companies in developing countries are experiencing rapid growth, which requires having in place the best organizational processes to cope with market demand. Sales forecasting, as a tool aligned with the general strategy of the company, needs to be as accurate as possible in order to achieve sales targets: it makes the right information available for purchasing and for the planning and control of production, so that the demand generated can be met on time and in full. The present dissertation uses a single case study of the Brazilian subsidiary of an international explosives company, Maxam, which is experiencing high growth in sales and therefore faces the challenge of adapting its structure and processes to the rapid growth expected. Diverse sales forecasting techniques have been analyzed to compare the actual monthly sales forecast, based on the sales representatives' market knowledge, with forecasts based on the analysis of historical sales data. The dissertation's findings show how combining qualitative and quantitative forecasts, through a combined forecast that integrates the sales force's knowledge of client demand with time series analysis, improves the accuracy of the company's sales forecast.
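A combined forecast of the kind described can be as simple as a weighted blend of the sales force's judgmental estimate and a time-series forecast. The sketch below uses a 3-month moving average as the quantitative component and an equal weighting; the dissertation's actual techniques and weights may differ, and all figures are illustrative.

```python
# Sketch of a combined (qualitative + quantitative) sales forecast.
# Weights, history and the representative's estimate are illustrative.

def moving_average_forecast(history, window=3):
    """Quantitative component: mean of the last `window` observations."""
    return sum(history[-window:]) / window

def combined_forecast(qualitative, history, weight_qual=0.5):
    """Blend the sales-force estimate with the time-series forecast."""
    quantitative = moving_average_forecast(history)
    return weight_qual * qualitative + (1 - weight_qual) * quantitative

sales_history = [100.0, 110.0, 120.0, 130.0]   # past monthly sales
rep_estimate = 150.0                            # sales-force judgment
print(combined_forecast(rep_estimate, sales_history))
# 0.5*150 + 0.5*((110+120+130)/3) = 135.0
```

In practice the weights would be tuned on historical forecast errors rather than fixed at 0.5, but the blending step itself is this simple.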

Relevance: 90.00%

Abstract:

Modern machines are complex and often required to operate long hours to achieve production targets. The ability to detect symptoms of failure, and hence forecast the remaining useful life of a machine, is vital to preventing catastrophic failures and essential to reducing maintenance costs, operational downtime and safety hazards. Recent advances in condition monitoring technologies have given rise to a number of prognosis models that attempt to forecast machinery health based on either condition data or reliability data. In practice, failure condition trending data are seldom kept by industry, and data records that end with a suspension are sometimes treated as failure data. This paper presents a novel approach that incorporates both historical failure data and suspended condition trending data in the prognostic model. The proposed model consists of a feed-forward neural network (FFNN) whose training targets are asset survival probabilities estimated using a variation of the Kaplan-Meier estimator and a degradation-based failure probability density function (PDF) estimator. The output survival probabilities collectively form an estimated survival curve. The viability of the model was tested using a set of industry vibration data.
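The survival probabilities used as training targets come from a Kaplan-Meier style estimator, which handles exactly the mix of failure and suspended (censored) records described above: at each failure time, survival is multiplied by the fraction of at-risk units that did not fail, while suspensions only shrink the at-risk set. This is a minimal textbook Kaplan-Meier, not the paper's modified variant, and the data are illustrative.

```python
# Minimal Kaplan-Meier estimator over mixed failure/suspension records.
# Illustrative sketch; the paper uses a variation of this estimator.

def kaplan_meier(records):
    """records: list of (time, event), event=1 for failure, 0 for suspension.
    Returns [(time, survival_probability)] at each distinct failure time."""
    records = sorted(records)          # group records by ascending time
    at_risk = len(records)
    survival = 1.0
    curve = []
    i = 0
    while i < len(records):
        t = records[i][0]
        deaths = sum(1 for time, e in records if time == t and e == 1)
        n_at_t = sum(1 for time, _ in records if time == t)
        if deaths > 0:                 # survival only drops at failures
            survival *= (at_risk - deaths) / at_risk
            curve.append((t, survival))
        at_risk -= n_at_t              # suspensions leave the risk set too
        i += n_at_t
    return curve

# (time, event): failures at t=2, 5, 7; suspensions at t=3, 8
print(kaplan_meier([(2, 1), (3, 0), (5, 1), (7, 1), (8, 0)]))
```

In the paper's setup, probabilities like these (together with a degradation-based failure PDF estimate) form the target curve the FFNN is trained to reproduce from condition data.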

Relevance: 90.00%

Abstract:

Advances in data mining have provided techniques for automatically discovering underlying knowledge and extracting useful information from large volumes of data. Data mining offers tools for the quick discovery of relationships, patterns and knowledge in large, complex databases. Its application to manufacturing remains relatively limited, mainly because of the complexity of manufacturing data. The growing self-organizing map (GSOM) algorithm has proven to be efficient for analysing unsupervised DNA data; however, it produces unsatisfactory clusterings when used on some large manufacturing datasets. In this paper a data mining methodology is proposed using a GSOM tool developed with a modified GSOM algorithm. The proposed method is used to generate clusters of good and faulty products from a manufacturing dataset, and the clustering quality (CQ) measure proposed in the paper is used to evaluate the performance of the resulting cluster maps. The paper also proposes an automatic identification of variables to find the most probable causative factor(s) discriminating between good and faulty products by quickly examining the historical manufacturing data. The proposed method helps manufacturers smooth the production flow and improve product quality. Simulation results on small and large manufacturing datasets show the effectiveness of the proposed method.