886 results for Multivariate measurement model
Abstract:
Purpose: The purpose of this paper is to describe the problems encountered and the solutions developed when using benchmarking and key performance indicators (KPIs) to monitor a major UK social house building innovation (change) programme. The innovation programme sought improvements to both the quality of the house product and the procurement process. Design/methodology/approach: Benchmarking and KPIs were used to quantify performance, and in-depth case studies to identify underlying cause-and-effect relationships within the innovation programme. Findings: The inherent competition between consortium members; the complexity of the relationship between the consortium and its strategic partner; the lack of an authoritative management control structure; and the rapidly changing nature of the UK social housing market all proved problematic to the development of a reliable and robust monitoring system. These problems were overcome by the development of a multi-dimensional benchmarking model that balanced the needs and aspirations of the individual organisations with the broader objectives of the consortium. Research limitations/implications: Whilst the research methodology provides insight into the factors that affected the performance of a major innovation programme, its findings may not be representative of all projects. Practical implications: The lessons learnt should assist those developing benchmarking models for multi-client consortia. Originality/value: The work reported in this paper describes an inclusive approach to benchmarking in which a multiple-client group and their strategic partner sought to work together for shared gain. Very few papers have addressed this issue.
Abstract:
This work demonstrates the importance of an adequate method for sub-sampling model results when comparing them with in situ measurements. A test of model skill was performed by employing a point-to-point method to compare a multi-decadal hindcast against a sparse, unevenly distributed historic in situ dataset. The point-to-point method masked out all hindcast cells that did not have a corresponding in situ measurement, in order to match each in situ measurement against its most similar cell from the model. The application of the point-to-point method showed that the model was successful at reproducing the inter-annual variability of the in situ datasets; this success was not apparent when the measurements were aggregated to regional averages. Time series, data density and target diagrams were employed to illustrate the impact of switching from the regional average method to the point-to-point method. The comparison based on regional averages gave significantly different and sometimes contradictory results that could lead to erroneous conclusions about the model performance. Furthermore, the point-to-point technique is a more appropriate method for exploiting sparse, uneven in situ data while compensating for the variability of its sampling. We therefore recommend that researchers take into account the limitations of the in situ datasets and process the model to resemble the data as much as possible.
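The contrast between the two comparison strategies can be sketched in a few lines of Python. The data below are entirely synthetic (a sine-shaped "field" with a known +0.1 model bias and 15 sparse samples); the point is only to show the masking step: the point-to-point comparison keeps just the model cells that are co-located with a measurement.

```python
import numpy as np

# Hedged sketch with made-up data: mask out every model cell that has no
# matching in situ measurement, then compare cell by cell instead of
# comparing regional means.
rng = np.random.default_rng(0)
truth = np.sin(np.linspace(0.0, 2.0 * np.pi, 100))
model = truth + 0.1                                 # model with a known +0.1 bias
obs_idx = rng.choice(100, size=15, replace=False)   # sparse, uneven sampling
obs = truth[obs_idx] + rng.normal(0.0, 0.05, size=15)

# Regional-average comparison: mean of ALL model cells vs mean of the sparse obs.
regional_bias = model.mean() - obs.mean()

# Point-to-point comparison: only model cells co-located with a measurement.
p2p_bias = (model[obs_idx] - obs).mean()
```

The point-to-point estimate recovers the true +0.1 bias, whereas the regional average mixes the model bias with the sampling bias of the sparse observations and can land almost anywhere.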
Abstract:
We examined how marine plankton interaction networks, as inferred by multivariate autoregressive (MAR) analysis of time-series, differ based on data collected at a fixed sampling location (L4 station in the Western English Channel) and four similar time-series prepared by averaging Continuous Plankton Recorder (CPR) datapoints in the region surrounding the fixed station. None of the plankton community structures suggested by the MAR models generated from the CPR datasets were well correlated with the MAR model for L4, but of the four CPR models, the one most closely resembling the L4 model was that for the CPR region nearest to L4. We infer that observation error and spatial variation in plankton community dynamics influenced the model performance for the CPR datasets. A modified MAR framework in which observation error and spatial variation are explicitly incorporated could allow the analysis to better handle the diverse time-series data collected in marine environments.
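The core of a MAR analysis is the estimation of an interaction matrix from the time series. A minimal sketch on synthetic data (a hypothetical 2-taxon system, not the L4 or CPR data) fits a first-order model x_{t+1} = B x_t + e_t by ordinary least squares:

```python
import numpy as np

# Hedged sketch: simulate a MAR(1) process with a known interaction matrix,
# then recover that matrix by least squares on the lagged series.
rng = np.random.default_rng(1)
B_true = np.array([[0.8, -0.2],
                   [0.1,  0.7]])     # hypothetical 2-taxon interaction matrix
T = 500
x = np.zeros((T, 2))
for t in range(T - 1):
    x[t + 1] = B_true @ x[t] + rng.normal(0.0, 0.1, size=2)

# Stack x_t as predictors and x_{t+1} as responses; lstsq solves X @ W ≈ Y,
# so the interaction matrix estimate is W transposed.
B_hat = np.linalg.lstsq(x[:-1], x[1:], rcond=None)[0].T
```

With enough data and little observation error the estimate is close to the true matrix; adding observation error (as the abstract discusses for the CPR series) degrades exactly this estimation step.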
Abstract:
This paper presents a statistically based fault diagnosis scheme for application to internal combustion engines. The scheme relies on an identified model that describes the relationships between a set of recorded engine variables using principal component analysis (PCA). Since combustion cycles are complex in nature and produce nonlinear relationships between the recorded engine variables, the paper proposes the use of nonlinear PCA (NLPCA). The paper further justifies the use of NLPCA by comparing the model accuracy of the NLPCA model with that of a linear PCA model. A new nonlinear variable reconstruction algorithm and bivariate scatter plots are proposed for fault isolation, following the application of NLPCA. The proposed technique allows the diagnosis of different fault types under steady-state operating conditions. More precisely, nonlinear variable reconstruction can remove the fault signature from the recorded engine data, which allows the identification and isolation of the root cause of abnormal engine behaviour. The paper shows that this can lead to (i) an enhanced identification of potential root causes of abnormal events and (ii) the masking of faulty sensor readings. The effectiveness of the enhanced NLPCA-based monitoring scheme is illustrated by its application to a sensor fault and a process fault. The sensor fault relates to a drift in the fuel flow reading, whilst the process fault relates to a partial blockage of the intercooler. These faults are introduced to a Volkswagen TDI 1.9 litre diesel engine mounted on an experimental engine test bench facility.
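The variable-reconstruction idea can be illustrated with plain linear PCA (the paper's method is nonlinear; this is a simplified stand-in on made-up data): a suspect sensor is replaced by the value most consistent with the PCA model, and if the squared prediction error (SPE) collapses, that sensor is implicated as the root cause.

```python
import numpy as np

# Hedged sketch: three correlated "sensors" driven by one latent variable,
# plus a fresh sample carrying a +3 fault on sensor 0. All data are synthetic.
rng = np.random.default_rng(2)
latent = rng.normal(size=200)
X = np.column_stack([latent, 2.0 * latent, -latent]) + rng.normal(0.0, 0.05, (200, 3))
mu = X.mean(axis=0)
P = np.linalg.svd(X - mu, full_matrices=False)[2][:1].T   # 1 retained PC

x = np.array([1.0, 2.0, -1.0]) - mu    # a fresh, centred sample ...
x[0] += 3.0                            # ... with a +3 fault on sensor 0

def spe(v):
    r = v - P @ (P.T @ v)              # residual off the model subspace
    return float(r @ r)

# Reconstruct sensor 0: the SPE-minimising correction along axis 0.
C = np.eye(3) - P @ P.T
fault = C[0] @ x / C[0, 0]
x_rec = x.copy()
x_rec[0] -= fault
```

Here spe(x) is large, spe(x_rec) drops to the noise floor, and the estimated correction recovers the injected +3 drift, which is the sense in which reconstruction "removes the fault signature".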
Abstract:
The ultrasonic measurement and imaging of tissue elasticity is currently under wide investigation and development as a clinical tool for the assessment of a broad range of diseases, but little account in this field has yet been taken of the fact that soft tissue is porous and contains mobile fluid. The ability to squeeze fluid out of tissue may have implications for conventional elasticity imaging, and may present opportunities for new investigative tools. When a homogeneous, isotropic, fluid-saturated poroelastic material with a linearly elastic solid phase and incompressible solid and fluid constituents is subjected to stress, the behaviour of the induced internal strain field is influenced by three material constants: the Young's modulus (E_s) and Poisson's ratio (nu_s) of the solid matrix and the permeability (k) of the solid matrix to the pore fluid. New analytical expressions were derived and used to model the time-dependent behaviour of the strain field inside simulated homogeneous cylindrical samples of such a poroelastic material undergoing sustained unconfined compression. A model-based reconstruction technique was developed to produce images of parameters related to the poroelastic material constants (E_s, nu_s, k) from a comparison of the measured and predicted time-dependent spatially varying radial strain. Tests of the method using simulated noisy strain data showed that it is capable of producing three unique parametric images: an image of the Poisson's ratio of the solid matrix, an image of the axial strain (which was not time-dependent subsequent to the application of the compression) and an image representing the product of the aggregate modulus E_s(1 - nu_s)/((1 + nu_s)(1 - 2nu_s)) of the solid matrix and the permeability of the solid matrix to the pore fluid. The analytical expressions were further used to numerically validate a finite element model and to clarify previous work on poroelastography.
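The aggregate modulus quoted in the abstract is a standard combination of the elastic constants; written out as a small helper (the function name is ours, for illustration only):

```python
# Aggregate (confined compression) modulus of the solid matrix,
# H_A = E_s (1 - nu_s) / ((1 + nu_s)(1 - 2 nu_s)), as quoted in the abstract.
def aggregate_modulus(E_s: float, nu_s: float) -> float:
    """Return the aggregate modulus for Young's modulus E_s and Poisson's ratio nu_s."""
    return E_s * (1.0 - nu_s) / ((1.0 + nu_s) * (1.0 - 2.0 * nu_s))
```

For nu_s = 0 the aggregate modulus equals E_s, and it diverges as nu_s approaches 0.5, the incompressible limit; e.g. E_s = 10 kPa with nu_s = 0.25 gives 12 kPa.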
Abstract:
Fifty-two CFLP mice had an open femoral diaphyseal osteotomy held in compression by a four-pin external fixator. The movement of 34 of the mice in their cages was quantified before and after operation, until sacrifice at 4, 8, 16 or 24 days. Thirty-three specimens underwent histomorphometric analysis and 19 specimens underwent torsional stiffness measurement. The expected combination of intramembranous and endochondral bone formation was observed, and the model was shown to be reliable in that variation in the histological parameters of healing was small between animals at the same time point, compared to the variation between time-points. There was surprisingly large individual variation in the amount of animal movement about the cage, which correlated with both histomorphometric and mechanical measures of healing. Animals that moved more had larger external calluses containing more cartilage and demonstrated lower torsional stiffness at the same time point. Assuming that movement of the whole animal predicts, at least to some extent, movement at the fracture site, this correlation is what would be expected in a model that involves similar processes to those in human fracture healing. Models such as this, employed to determine the effect of experimental interventions, will yield more information if the natural variation in animal motion is measured and included in the analysis.
Abstract:
This paper points out a serious flaw in dynamic multivariate statistical process control (MSPC). The principal component analysis of a linear time series model that is employed to capture auto- and cross-correlation in recorded data may produce a considerable number of variables to be analysed. To give a dynamic representation of the data (based on variable correlation) and circumvent the production of a large time-series structure, a linear state space model is used here instead. The paper demonstrates that, by incorporating a state space model, the number of variables to be analysed dynamically can be considerably reduced compared to conventional dynamic MSPC techniques.
Abstract:
The work in this paper is of particular significance since it considers the problem of modelling cross- and auto-correlation in statistical process monitoring. The presence of both types of correlation can lead to fault insensitivity or false alarms, although in published literature to date, only autocorrelation has been broadly considered. The proposed method, which uses a Kalman innovation model, effectively removes both correlations. The paper (and Part 2 [2]) has emerged from work supported by EPSRC grant GR/S84354/01 and is of direct relevance to problems in several application areas including chemical, electrical, and mechanical process monitoring.
Abstract:
The contribution of electron-phonon scattering and grain boundary scattering to the mid-IR (λ = 3.392 μm) properties of Au has been assessed by examining both bulk, single crystal samples, Au(111) and Au(110), and thin film, polycrystalline Au samples at 300 K and 100 K by means of surface plasmon polariton excitation. The investigation constitutes a stringent test for the in-vacuo Otto-configuration prism coupler used to perform the measurements, illustrating its strengths and limitations. Analysis of the optical response is guided by a physically based interpretation of the Drude model. Relative to the reference case of single crystal Au at 100 K (ε = -568 + i17.5), raising the temperature to 300 K causes increased electron-phonon scattering that accounts for a reduction of ~40 nm in the electron mean free path. Comparison of a polycrystalline sample to the reference case determines a mean free path due to grain boundary scattering of ~17 nm, corresponding to about half the mean grain size as determined from atomic force microscopy and indicating a high reflectance coefficient for the Au grain boundaries. An analysis combining consideration of grain boundary scattering and the inclusion of a small percentage of voids in the polycrystalline film by means of an effective medium model indicates a value for the grain boundary reflection coefficient in the range 0.55-0.71. (C) 2005 Elsevier B.V. All rights reserved.
Abstract:
Cooperatives, as a type of firm, are considered by many scholars to be a remarkable alternative for overcoming the economic crisis that started in 2008. In addition, other scholars have pointed out the important role that these firms play in regional economic development. Nevertheless, when one examines the economic literature on cooperatives, it becomes apparent that these firms are mainly studied from the point of view of their own characteristics and their particular forms of participation and solidarity. Following a different analytical framework, this article proposes a theoretical model to explain the behaviour of cooperatives based on entrepreneurship theory, with the aim of increasing knowledge about these firms and, more specifically, their contribution to regional economic development.
Abstract:
Indoor wireless network based client localisation requires the use of a radio map to relate received signal strength to specific locations. However, signal strength measurements are time consuming, expensive and usually require unrestricted access to all parts of the building concerned. An obvious option for circumventing this difficulty is to estimate the radio map using a propagation model. This paper compares the effect of measured and simulated radio maps on the accuracy of two different methods of wireless network based localisation. The results presented indicate that, although the propagation model used underestimated the signal strength by up to 15 dB at certain locations, there was not a significant reduction in localisation performance. In general, the difference in performance between the simulated and measured radio maps was around a 30% increase in RMS error.
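Radio-map (fingerprinting) localisation can be sketched in a few lines; the map below is entirely made up (four locations, three access points), but the same nearest-neighbour matching works whether the fingerprints were measured or produced by a propagation model, which is why the two maps can be swapped and compared.

```python
import numpy as np

# Minimal nearest-neighbour fingerprinting sketch with a hypothetical radio map.
radio_map = {                       # (x, y) in metres -> RSS from 3 APs (dBm)
    (0, 0): (-40, -70, -80),
    (0, 5): (-55, -60, -75),
    (5, 0): (-60, -75, -55),
    (5, 5): (-70, -65, -50),
}

def locate(rss):
    """Return the map location whose fingerprint is nearest in RSS space."""
    return min(radio_map, key=lambda p: np.linalg.norm(np.subtract(radio_map[p], rss)))

print(locate((-58, -62, -73)))      # matches the fingerprint at (0, 5)
```

An RMS-error comparison between a measured and a simulated map then amounts to running `locate` on the same test measurements against each map and comparing the resulting position errors.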
Abstract:
The use of image processing techniques to assess the performance of airport landing lighting using images collected from an aircraft-mounted camera is documented. In order to assess the performance of the lighting, it is necessary to uniquely identify each luminaire within an image and then track the luminaires through the entire sequence and store the relevant information for each luminaire, that is, the total number of pixels that each luminaire covers and the total grey level of these pixels. This pixel grey level can then be used for performance assessment. The authors propose a robust model-based (MB) feature-matching technique by which the performance is assessed. The development of this matching technique is the key to the automated performance assessment of airport lighting. The MB matching technique utilises projective geometry in addition to an accurate template of the 3D model of a landing-lighting system. The template is projected onto the image data and an optimum match found using nonlinear least-squares optimisation. The MB matching software is compared with standard feature extraction and tracking techniques known within the community, these being the Kanade–Lucas–Tomasi (KLT) and scale-invariant feature transform (SIFT) techniques. The new MB matching technique compares favourably with the SIFT and KLT feature-tracking alternatives. As such, it provides a solid foundation to achieve the central aim of this research, which is to automatically assess the performance of airport lighting.
Abstract:
Extending the work presented in Prasad et al. (IEEE Proceedings on Control Theory and Applications, 147, 523-37, 2000), this paper reports a hierarchical nonlinear physical model-based control strategy to account for the problems arising due to complex dynamics of drum level and governor valve, and demonstrates its effectiveness in plant-wide disturbance handling. The strategy incorporates a two-level control structure consisting of lower-level conventional PI regulators and a higher-level nonlinear physical model predictive controller (NPMPC) used mainly for set-point manoeuvring. The lower-level PI loops help stabilise the unstable drum-boiler dynamics and allow faster governor valve action for power and grid-frequency regulation. The higher-level NPMPC provides an optimal load demand (or set-point) transition by effective handling of plant-wide interactions and system disturbances. The strategy has been tested in a simulation of a 200-MW oil-fired power plant at Ballylumford in Northern Ireland. A novel approach is devised to test the disturbance rejection capability in severe operating conditions. Low frequency disturbances were created by making random changes in radiation heat flow on the boiler side, while condenser vacuum was fluctuating in a random fashion on the turbine side. In order to simulate high-frequency disturbances, pulse-type load disturbances were made to strike at instants which are not an integral multiple of the NPMPC sampling period. Impressive results have been obtained during both types of system disturbances and extremely high rates of load change, right across the operating range. These results compared favourably with those from a conventional state-space generalized predictive control (GPC) method designed under similar conditions.
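A discrete PI regulator of the kind used in the lower level of such a hierarchy can be sketched in a few lines. Everything here is illustrative: the gains, the sample time, and the first-order plant x' = -x + u stand in for the drum-boiler loops, not for the Ballylumford plant model.

```python
# Hedged sketch of a lower-level discrete PI loop (hypothetical gains and plant).
def make_pi(kp, ki, dt):
    integral = [0.0]                      # integrator state kept in a closure
    def pi(setpoint, measurement):
        e = setpoint - measurement
        integral[0] += ki * e * dt        # accumulate the integral action
        return kp * e + integral[0]       # PI control signal
    return pi

dt = 0.01
controller = make_pi(kp=2.0, ki=1.0, dt=dt)
x = 0.0
for _ in range(2000):                     # 20 s of simulated time
    u = controller(1.0, x)                # regulate to set-point 1.0
    x += dt * (-x + u)                    # Euler step of the plant x' = -x + u
# x settles at the set-point with zero steady-state error
```

In the hierarchical scheme, the higher-level predictive controller does not replace such loops; it manoeuvres their set-points, which is why the stabilising PI layer can stay simple and fast.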
Abstract:
This paper proposes a novel image denoising technique based on the normal inverse Gaussian (NIG) density model, using an extended non-negative sparse coding (NNSC) algorithm that we proposed previously. This algorithm can converge to feature basis vectors that exhibit locality and orientation in the spatial and frequency domains. Here, we demonstrate that the NIG density provides a very good fit to the non-negative sparse data. In the denoising process, by exploiting a NIG-based maximum a posteriori (MAP) estimator of an image corrupted by additive Gaussian noise, the noise can be reduced successfully. This shrinkage technique, also referred to as the NNSC shrinkage technique, is self-adaptive to the statistical properties of image data. The denoising method is evaluated by values of the normalised signal-to-noise ratio (SNR). Experimental results show that the NNSC shrinkage approach is indeed efficient and effective in denoising. In addition, we compare the effectiveness of the NNSC shrinkage method with standard sparse coding shrinkage, wavelet-based shrinkage and the Wiener filter. The simulation results show that our method outperforms the three denoising approaches mentioned above.
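The general shape of MAP shrinkage can be illustrated with the simplest sparse prior. This sketch uses soft thresholding under a Laplacian prior, a deliberately simplified stand-in for the NIG-based estimator (the NIG MAP rule itself has a different, heavier-tailed form); the coefficient values are made up.

```python
import numpy as np

# Generic MAP shrinkage sketch (Laplacian prior => soft thresholding), shown
# as a stand-in for the NIG-based estimator: coefficients assumed sparse are
# pulled toward zero in proportion to the noise level, so small (noise-like)
# coefficients vanish while large (structure) coefficients survive.
def soft_shrink(coeffs, sigma_noise, prior_scale):
    threshold = sigma_noise ** 2 / prior_scale   # MAP threshold for this prior
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - threshold, 0.0)

noisy = np.array([0.05, -0.02, 3.0, -1.5, 0.01])
clean = soft_shrink(noisy, sigma_noise=0.3, prior_scale=0.9)
# clean ≈ [0.0, -0.0, 2.9, -1.4, 0.0]: noise removed, structure kept
```

The NIG-based rule plays the same role in the paper's pipeline, but with a shrinkage curve fitted to the statistics of the non-negative sparse coefficients rather than a fixed threshold.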