21 results for industrial data
in CentAUR: Central Archive University of Reading - UK
Abstract:
This research responds to the need for diagnostic reference tools that explicitly link environmental uncertainty to performance within the supply chain. Uncertainty is a key factor influencing performance and an important measure of the operating environment. We develop and demonstrate a novel reference methodology based on data envelopment analysis (DEA) for examining the performance of value streams within the supply chain with specific reference to the level of environmental uncertainty they face. Using real industrial data, 20 product supply value streams within the European automotive industry sector are evaluated; two are found to be efficient. The peer reference groups for the underperforming value streams are identified and numerical improvement targets are derived. The paper demonstrates how DEA can be used to guide supply chain improvement efforts through role-model identification and target setting, in a way that recognises the multiple dimensions and outcomes of the supply chain process and the influence of its environmental conditions. In doing so, the work contextualises environmental uncertainty and incorporates it into a specific diagnostic reference tool.
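For readers unfamiliar with DEA, the sketch below solves the standard input-oriented CCR efficiency model with scipy's linear-programming solver. It is a minimal illustration only: the paper's uncertainty-adjusted formulation, its choice of inputs and outputs, and its real value-stream data are not reproduced, so the toy numbers here are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of unit o.
    X: (n, m) inputs and Y: (n, s) outputs for n decision-making units."""
    n, m = X.shape
    s = Y.shape[1]
    # Variables: output weights u (length s) then input weights v (length m).
    c = np.concatenate([-Y[o], np.zeros(m)])           # maximise u.y_o
    A_ub = np.hstack([Y, -X])                          # u.y_j - v.x_j <= 0
    A_eq = np.concatenate([np.zeros(s), X[o]])[None]   # v.x_o = 1
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n),
                  A_eq=A_eq, b_eq=[1.0], bounds=[(0, None)] * (s + m))
    return -res.fun                                    # efficiency in (0, 1]

# Toy data: 5 value streams, 2 inputs (e.g. cost, lead time), 1 equal output.
X = np.array([[4., 3.], [7., 3.], [8., 1.], [4., 2.], [2., 4.]])
Y = np.ones((5, 1))
print([round(ccr_efficiency(X, Y, o), 3) for o in range(5)])
```

Units scoring below 1 are inefficient; the efficient peers that dominate them play the role-model part described in the abstract.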
Abstract:
Knowledge-elicitation is a common technique used to produce rules about the operation of a plant from the knowledge that is available from human expertise. Similarly, data-mining is becoming a popular technique to extract rules from the data available from the operation of a plant. In the work reported here knowledge was required to enable the supervisory control of an aluminium hot strip mill by the determination of mill set-points. A method was developed to fuse knowledge-elicitation and data-mining to incorporate the best aspects of each technique, whilst avoiding known problems. Utilisation of the knowledge was through an expert system, which determined schedules of set-points and provided information to human operators. The results show that the method proposed in this paper was effective in producing rules for the on-line control of a complex industrial process. (C) 2005 Elsevier Ltd. All rights reserved.
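The abstract does not name the data-mining algorithm used, so as a hedged stand-in the sketch below induces human-readable rules from synthetic plant records with a shallow decision tree, one common way to extract rules of the kind an expert system can consume. The variable names, ranges and planted relation are invented for illustration.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
# Synthetic mill records: entry temperature and target gauge versus whether
# a set-point schedule produced in-specification strip (invented relation).
X = rng.uniform([950.0, 2.0], [1100.0, 6.0], size=(500, 2))
y = ((X[:, 0] > 1020.0) & (X[:, 1] < 4.5)).astype(int)

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
# export_text renders the tree as if/else rules an operator can review.
print(export_text(tree, feature_names=["entry_temp_C", "gauge_mm"]))
```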
Abstract:
We propose a geoadditive negative binomial model (Geo-NB-GAM) for regional count data that allows us to address simultaneously some important methodological issues, such as spatial clustering, nonlinearities, and overdispersion. This model is applied to the study of location determinants of inward greenfield investments that occurred during 2003–2007 in 249 European regions. After presenting the data set and showing the presence of overdispersion and spatial clustering, we review the theoretical framework that motivates the choice of the location determinants included in the empirical model, and we highlight some reasons why the relationship between some of the covariates and the dependent variable might be nonlinear. The subsequent section first describes the solutions proposed by previous literature to tackle spatial clustering, nonlinearities, and overdispersion, and then presents the Geo-NB-GAM. The empirical analysis shows the good performance of Geo-NB-GAM. Notably, the inclusion of a geoadditive component (a smooth spatial trend surface) permits us to control for spatial unobserved heterogeneity that induces spatial clustering. Allowing for nonlinearities reveals, in keeping with theoretical predictions, that the positive effect of agglomeration economies fades as the density of economic activities reaches some threshold value. However, no matter how dense the economic activity becomes, our results suggest that congestion costs never overcome positive agglomeration externalities.
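A minimal sketch of a negative binomial GAM in the spirit of Geo-NB-GAM is given below, using statsmodels. As a simplification, the paper's two-dimensional geoadditive trend surface is approximated by separate longitude and latitude splines, and all data are simulated, so this is not the authors' estimator.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.gam.api import GLMGam, BSplines

rng = np.random.default_rng(1)
n = 249  # one row per region, as in the paper
df = pd.DataFrame({
    "lon": rng.uniform(-10, 30, n),
    "lat": rng.uniform(35, 60, n),
    "density": rng.lognormal(0.0, 1.0, n),   # agglomeration proxy
})
mu = np.exp(0.5 + 0.4 * np.log(df["density"]) - 0.02 * df["lat"])
df["greenfield"] = rng.negative_binomial(2, 2.0 / (2.0 + mu))  # mean = mu

# Smooth terms: nonlinearity in density plus a crude spatial trend.
smoother = BSplines(df[["density", "lon", "lat"]],
                    df=[6, 6, 6], degree=[3, 3, 3])
res = GLMGam.from_formula("greenfield ~ 1", data=df, smoother=smoother,
                          family=sm.families.NegativeBinomial(alpha=1.0)).fit()
print(res.summary())
```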
Abstract:
We suggest that climate variability in Europe for the "pre-industrial" period 1500–1900 is fundamentally a consequence of internal fluctuations of the climate system. This is because a model simulation using fixed pre-industrial forcing is, in several important aspects, consistent with recent observational reconstructions at high temporal resolution. This includes extreme warm and cold seasonal events as well as different measures of the decadal to multi-decadal variance. Significant trends of 50-year duration can be seen in the model simulation. While the global temperature is highly correlated with ENSO (El Nino-Southern Oscillation), European seasonal temperature is only weakly correlated with the global temperature, broadly consistent with data from ERA-40 reanalyses. Seasonal temperature anomalies of the European land area are largely controlled by the position of the North Atlantic storm tracks. We believe the result is highly relevant for the interpretation of past observational records, as it suggests that the effect of external forcing is of secondary importance. That variations in solar irradiation could have been a credible cause of climate variations during the last centuries, as suggested in some previous studies, is presumably because the models used in those studies underestimated the internal variability of the climate. The general interpretation from this study is that the past climate is just one of many possible realisations: in many respects it is not reproducible in its time evolution with a general circulation model, but only in a statistical sense.
Abstract:
A regional overview of the water quality and ecology of the River Lee catchment is presented. Specifically, data describing the chemical, microbiological and macrobiological water quality and fisheries communities have been analysed, based on a division into river, sewage treatment works, fish-farm, lake and industrial samples. Nutrient enrichment and the highest concentrations of metals and micro-organics were found in the urbanised, lower reaches of the Lee and in the Lee Navigation. Average annual concentrations of metals were generally within environmental quality standards although, on many occasions, concentrations of cadmium, copper, lead, mercury and zinc were in excess of the standards. Various organic substances (used as herbicides, fungicides, insecticides, chlorination by-products and industrial solvents) were widely detected in the Lee system. Concentrations of ten micro-organic substances were observed in excess of their environmental quality standards, though not in terms of annual averages. Sewage treatment works were the principal point source input of nutrients, metals and micro-organic determinands to the catchment. Diffuse nitrogen sources contributed approximately 60% and 27% of the in-stream load in the upper and lower Lee respectively, whereas approximately 60% and 20% of the in-stream phosphorus load was derived from diffuse sources in the upper and lower Lee. For metals, the most significant source was urban runoff from North London. In reaches less affected by effluent discharges, diffuse runoff from urban and agricultural areas dominated trends. High microbiological content, observed in the River Lee particularly in urbanised reaches, was far in excess of the EC Bathing Water Directive standards. Water quality issues and degraded habitat in the lower reaches of the Lee have led to impoverished aquatic fauna but, within the mid-catchment reaches and upper agricultural tributaries, less nutrient enrichment and channel alteration has permitted more diverse aquatic fauna.
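The diffuse percentages quoted above follow from a simple mass balance: whatever part of the measured in-stream load is not accounted for by monitored point discharges is attributed to diffuse sources. The numbers below are invented purely to show the arithmetic.

```python
# Hypothetical annual nitrogen loads (tonnes N/yr), illustrative only.
in_stream_load = 1000.0        # measured at a downstream monitoring point
point_source_load = 400.0      # summed sewage-works discharge loads
diffuse_fraction = (in_stream_load - point_source_load) / in_stream_load
print(f"diffuse share: {diffuse_fraction:.0%}")   # -> 60%
```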
Abstract:
Consider the statement "this project should cost X and has risk of Y". Such statements are used daily in industry as the basis for making decisions. The work reported here is part of a study aimed at providing a rational and pragmatic basis for such statements. Of particular interest are predictions made in the requirements and early phases of projects. A preliminary model has been constructed using Bayesian Belief Networks and, in support of this, a programme to collect and study data during the execution of various software development projects commenced in May 2002. The data collection programme is undertaken under the constraints of a commercial industrial regime of multiple concurrent small- to medium-scale software development projects. Guided by pragmatism, the work is predicated on the use of data that can be collected readily by project managers, including expert judgements, effort, elapsed times and metrics collected within each project.
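The paper's actual network structure and probabilities are not given in the abstract, so the sketch below wires up a deliberately tiny Bayesian Belief Network with the pgmpy library; the node names, states and probability tables are all invented for illustration.

```python
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Two project factors influencing effort overrun (states 0 = low, 1 = high).
model = BayesianNetwork([("Complexity", "Overrun"), ("Experience", "Overrun")])
model.add_cpds(
    TabularCPD("Complexity", 2, [[0.7], [0.3]]),
    TabularCPD("Experience", 2, [[0.5], [0.5]]),
    TabularCPD("Overrun", 2,
               [[0.9, 0.6, 0.5, 0.2],     # P(Overrun=low | parents)
                [0.1, 0.4, 0.5, 0.8]],    # P(Overrun=high | parents)
               evidence=["Complexity", "Experience"], evidence_card=[2, 2]),
)
assert model.check_model()
# Expert judgement enters as evidence; inference updates the risk estimate.
print(VariableElimination(model).query(["Overrun"],
                                       evidence={"Complexity": 1}))
```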
Abstract:
Details about the parameters of kinetic systems are crucial for progress in both medical and industrial research, including drug development, clinical diagnosis and biotechnology applications. Such details must be collected through a series of kinetic experiments and investigations. The correct design of the experiment is essential to collecting data suitable for analysis, modelling and deriving the correct information. We have developed a systematic and iterative Bayesian method, and sets of rules, for the design of enzyme kinetic experiments. Our method selects the optimum design to collect data suitable for accurate modelling and analysis, and minimises the error in the estimated parameters. The rules select features of the design such as the substrate range and the number of measurements. We show here that this method can be directly applied to the study of other important kinetic systems, including drug transport, receptor binding, microbial culture and cell transport kinetics. It is possible to reduce the errors in the estimated parameters and, most importantly, to increase efficiency and cost-effectiveness by reducing the number of experiments and data points required. (C) 2003 Federation of European Biochemical Societies. Published by Elsevier B.V. All rights reserved.
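The claim that good design reduces parameter error can be made concrete with a locally D-optimal design calculation for the Michaelis-Menten rate law, a standard enzyme-kinetics model. This shows only the core idea behind such methods, not the authors' iterative Bayesian procedure, and the parameter guesses are invented.

```python
import numpy as np
from itertools import combinations

def fisher_det(S, Vmax, Km):
    """det(Fisher information) for v = Vmax*S/(Km+S), unit noise variance,
    one measurement at each substrate concentration in S."""
    S = np.asarray(S, dtype=float)
    J = np.column_stack([S / (Km + S),                  # dv/dVmax
                         -Vmax * S / (Km + S) ** 2])    # dv/dKm
    return np.linalg.det(J.T @ J)

Vmax, Km = 10.0, 2.0                  # current parameter guesses
grid = np.linspace(0.1, 20.0, 40)     # feasible substrate range
best = max(combinations(grid, 2), key=lambda d: fisher_det(d, Vmax, Km))
print("most informative pair of substrate levels:", best)
```

In an iterative scheme of the kind the abstract describes, the parameter guesses would be updated after each experiment and the design re-optimised.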
Abstract:
This paper addresses the need for accurate predictions of the fault inflow, i.e. the number of faults found in consecutive project weeks, in highly iterative processes. In such processes, in contrast to waterfall-like processes, fault repair and development of new features run almost in parallel. Given accurate predictions of fault inflow, managers could dynamically re-allocate resources between these different tasks in a more adequate way. Furthermore, managers could react with process improvements when the expected fault inflow is higher than desired. This study suggests software reliability growth models (SRGMs) for predicting fault inflow. Although these models were originally developed for traditional processes, their performance in highly iterative processes is investigated here. Additionally, a simple linear model is developed and compared to the SRGMs. The paper provides results from applying these models to fault data from three different industrial projects. One of the key findings of this study is that some SRGMs are applicable for predicting fault inflow in highly iterative processes. Moreover, the results show that the simple linear model represents a valid alternative to the SRGMs, as it provides reasonably accurate predictions and performs better in many cases.
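The abstract does not say which SRGMs were tried; as a hedged example, the sketch below fits one classic SRGM, the Goel-Okumoto model, to made-up cumulative fault counts and compares its one-week-ahead inflow prediction with the slope of a simple linear fit.

```python
import numpy as np
from scipy.optimize import curve_fit

weeks = np.arange(1, 13)
cum_faults = np.array([5, 12, 22, 30, 39, 45, 52, 56, 61, 64, 67, 69])  # invented

def goel_okumoto(t, a, b):
    """Mean cumulative faults a*(1 - exp(-b*t)) of the Goel-Okumoto SRGM."""
    return a * (1.0 - np.exp(-b * t))

(a, b), _ = curve_fit(goel_okumoto, weeks, cum_faults, p0=[100.0, 0.1])
slope, intercept = np.polyfit(weeks, cum_faults, 1)

print("SRGM week-13 inflow:", goel_okumoto(13, a, b) - goel_okumoto(12, a, b))
print("linear-model inflow:", slope)   # constant faults-per-week baseline
```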
Abstract:
The overall operation and internal complexity of a piece of production machinery can be depicted in terms of clusters of multidimensional points which describe the process states, the value in each point dimension representing a measured variable from the machinery. The paper describes a new cluster analysis technique for use with manufacturing processes, to illustrate how machine behaviour can be categorised and how regions of good and poor machine behaviour can be identified. The cluster algorithm presented is the novel mean-tracking algorithm, capable of locating N-dimensional clusters in a large data space in which a considerable amount of noise is present. Implementation of the algorithm on a real-world high-speed machinery application is described, with clusters being formed from machinery data to indicate machinery error regions and error-free regions. This analysis is seen to provide a promising step forward in the field of multivariable control of manufacturing systems.
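The mean-tracking algorithm itself is the paper's contribution and is not reproduced here. As the nearest widely available analogue, the sketch below clusters synthetic multidimensional machine states with scikit-learn's mean-shift, which likewise seeks cluster centres in noisy data without fixing the number of clusters in advance. All sensor values and regimes are invented.

```python
import numpy as np
from sklearn.cluster import MeanShift
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
# Synthetic process states: two operating regimes plus background noise,
# each point a vector of three measured variables.
good = rng.normal([0.2, 50.0, 1.0], 0.05, size=(200, 3))
poor = rng.normal([0.8, 70.0, 1.4], 0.05, size=(50, 3))
noise = rng.uniform([0.0, 40.0, 0.8], [1.0, 80.0, 1.6], size=(30, 3))
states = np.vstack([good, poor, noise])

# Standardise so no single sensor dominates the distance metric.
X = StandardScaler().fit_transform(states)
labels = MeanShift(bandwidth=0.5).fit_predict(X)
print("regions found:", len(np.unique(labels)))
```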
Abstract:
A statistical technique for fault analysis in industrial printing is reported. The method specifically deals with binary data, for which the results of the production process fall into two categories: rejected or accepted. The method is referred to as logistic regression, and is capable of predicting future fault occurrences through the analysis of current measurements from machine-part sensors. Individual analysis of each type of fault can determine which parts of the plant have a significant influence on the occurrence of such faults; it is also possible to infer which measurable process parameters have no significant influence on the generation of these faults. Information derived from the analysis can be helpful in the operator's interpretation of the current state of the plant. Appropriate actions may then be taken to prevent potential faults from occurring. The algorithm is being implemented as part of an applied self-learning expert system.
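A minimal version of this kind of analysis is easy to sketch: fit a logistic regression of accept/reject outcomes on sensor readings and read the coefficient p-values to see which parts plausibly matter. The sensors and the planted effect below are invented.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 400
roller_temp = rng.normal(60.0, 5.0, n)      # hypothetical sensor readings
ink_viscosity = rng.normal(1.2, 0.1, n)
web_tension = rng.normal(200.0, 20.0, n)
# Planted relation: only roller temperature drives rejects here.
p_reject = 1.0 / (1.0 + np.exp(30.0 - 0.5 * roller_temp))
reject = (rng.random(n) < p_reject).astype(int)

X = sm.add_constant(np.column_stack([roller_temp, ink_viscosity, web_tension]))
fit = sm.Logit(reject, X).fit(disp=0)
# Low p-values flag the influential parts; high ones suggest no influence.
print(fit.summary(xname=["const", "roller_temp", "ink_viscosity",
                         "web_tension"]))
```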
Abstract:
The transition to a low-carbon economy urgently demands better information on the drivers of energy consumption. UK government policy has prioritized energy efficiency in the built stock as a means of carbon reduction, but the sector is historically information poor, particularly the non-domestic building stock. This paper presents the results of a pilot study that investigated whether and how property and energy consumption data might be combined for non-domestic energy analysis. These data were combined in a ‘Non-Domestic Energy Efficiency Database’ to describe the location and physical attributes of each property and its energy consumption. The aim was to support the generation of a range of energy-efficiency statistics for the industrial, commercial and institutional sectors of the non-domestic building stock, and to provide robust evidence for national energy-efficiency and carbon-reduction policy development and monitoring. The work has brought together non-domestic energy data, property data and mapping in a ‘data framework’ for the first time. The results show what is possible when these data are integrated and the associated difficulties. A data framework offers the potential to inform energy-efficiency policy formation and to support its monitoring at a level of detail not previously possible.
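At its core, the data framework described here is a keyed join of property records to metered consumption; the toy pandas sketch below shows the shape of such an integration. The key and field names are hypothetical, not the database's actual schema.

```python
import pandas as pd

property_df = pd.DataFrame({
    "property_ref": [101, 102, 103],          # shared key (hypothetical)
    "floor_area_m2": [450, 1200, 300],
    "sector": ["commercial", "industrial", "institutional"],
})
energy_df = pd.DataFrame({
    "property_ref": [101, 102, 103],
    "annual_kwh": [54000, 310000, 21000],
})

framework = property_df.merge(energy_df, on="property_ref", how="inner")
framework["kwh_per_m2"] = framework["annual_kwh"] / framework["floor_area_m2"]
# Sector-level energy-intensity statistics of the kind the study targets.
print(framework.groupby("sector")["kwh_per_m2"].mean())
```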
Abstract:
An important feature of agribusiness promotion programs is their lagged impact on consumption. Efficient investment in advertising requires reliable estimates of these lagged responses and it is desirable from both applied and theoretical standpoints to have a flexible method for estimating them. This note derives an alternative Bayesian methodology for estimating lagged responses when investments occur intermittently within a time series. The method exploits a latent-variable extension of the natural-conjugate, normal-linear model, Gibbs sampling and data augmentation. It is applied to a monthly time series on Turkish pasta consumption (1993:5-1998:3) and three, nonconsecutive promotion campaigns (1996:3, 1997:3, 1997:10). The results suggest that responses were greatest to the second campaign, which allocated its entire budget to television media; that its impact peaked in the sixth month following expenditure; and that the rate of return (measured in metric tons additional consumption per thousand dollars expended) was around a factor of 20.
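The full latent-variable, data-augmentation treatment in the note is beyond a short sketch, but its natural-conjugate normal-linear Gibbs core can be illustrated: regress consumption on lagged promotion spend and draw coefficients and variance from their conditional posteriors. The series, lag profile and priors below are all invented.

```python
import numpy as np

rng = np.random.default_rng(4)
T, L = 60, 6                          # months, maximum lag
spend = np.zeros(T)
spend[[10, 25, 40]] = 1.0             # three intermittent campaigns
true_lag = np.array([0.2, 0.5, 0.8, 1.0, 0.6, 0.3])
X = np.column_stack([np.roll(spend, k) for k in range(L)])
X[:L] = 0.0                           # discard np.roll wrap-around
y = 5.0 + X @ true_lag + rng.normal(0.0, 0.3, T)
X = np.column_stack([np.ones(T), X])  # add intercept

k = X.shape[1]
prior_prec = np.eye(k) / 100.0        # weak normal prior on coefficients
sigma2, draws = 1.0, []
for it in range(3000):
    # beta | sigma2, y is normal under the natural-conjugate prior.
    V = np.linalg.inv(prior_prec + X.T @ X / sigma2)
    beta = rng.multivariate_normal(V @ (X.T @ y) / sigma2, V)
    # sigma2 | beta, y is inverse-gamma (weak IG(1, 1) prior).
    ssr = np.sum((y - X @ beta) ** 2)
    sigma2 = 1.0 / rng.gamma((T + 2) / 2.0, 2.0 / (ssr + 2.0))
    if it >= 1000:
        draws.append(beta[1:])        # keep lag coefficients after burn-in
print("posterior mean lag response:", np.mean(draws, axis=0).round(2))
```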
Abstract:
We present a summary of the principal physical and optical properties of aerosol particles measured using the FAAM BAE-146 instrumented aircraft during ADRIEX between 27 August and 6 September 2004, augmented by sunphotometer, lidar and satellite retrievals. Observations of anthropogenic aerosol, principally from industrial sources, were concentrated over the northern Adriatic Sea and over the Po Valley close to the aerosol sources. An additional flight was also carried out over the Black Sea to compare east and west European pollution. Measurements show the single-scattering albedo of dry aerosol particles to vary considerably between 0.89 and 0.97 at a wavelength of 0.55 μm, with a campaign mean within the polluted lower free troposphere of 0.92. Although aerosol concentrations varied significantly from day to day and during individual days, the shape of the aerosol size distribution was relatively consistent throughout the experiment, with no detectable difference between land and sea. There is evidence to suggest that the pollution aerosol within the marine boundary layer was younger than that in the elevated layer. Trends in the aerosol volume distribution are consistent with multiple-site AERONET radiometric observations. The aerosol optical depths derived from aircraft measurements show a consistent bias towards lower values than both the AERONET and lidar ground-based radiometric observations, differences which can be explained by local variations in the aerosol column loading and by some aircraft instrumental artefacts. Retrievals of the aerosol optical depth and of the fine-mode (<0.5 μm radius) fraction contribution to the optical depth using MODIS data from the Terra and Aqua satellites show a reasonable level of agreement with the AERONET and aircraft measurements.