63 results for Error of measurement


Relevance: 90.00%

Abstract:

Chili powder is a globally traded commodity which has been found to be adulterated with Sudan dyes from 2003 onwards. In this study, chili powders were adulterated with varying quantities of Sudan I dye (0.1-5%) and spectra were generated using near infrared reflectance spectroscopy (NIRS) and Raman spectroscopy (on a spectrometer with a sample compartment modified as part of the study). Chemometrics were applied to the spectral data to produce quantitative and qualitative calibration models and prediction statistics. For the quantitative models, the coefficients of determination (R2) were found to be 0.891-0.994, depending on which spectral data (NIRS/Raman) were processed, the mathematical algorithm used and the data pre-processing applied. The corresponding values for the root mean square error of calibration (RMSEC) and root mean square error of prediction (RMSEP) were found to be 0.208-0.851% and 0.141-0.831%, respectively, again depending on the spectral data and the chemometric treatment applied. A comparison of the chemometric parameters indicates that the NIR spectroscopy based models are superior to those produced from the Raman spectral data. The limit of detection (LOD), based on analysis of 20 blank chili powders against each calibration model, was 0.25% and 0.88% for the NIR and Raman data, respectively. In addition, by adopting a qualitative approach with the spectral data and applying PCA or PLS-DA, it was possible to discriminate adulterated chili powders from non-adulterated chili powders.
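
As an illustration of how such a quantitative calibration might be built, the sketch below fits a partial least squares (PLS) regression model to spectra and reports RMSEC, RMSEP and R2. This is not the authors' workflow: the spectra, reference values, train/test split and number of latent variables are hypothetical placeholders.

```python
# Hypothetical sketch: PLS calibration of Sudan I content (%) from NIR spectra.
# The spectra X and reference values y below are simulated placeholders, so the
# printed statistics are meaningless; substitute real measured data.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 700))          # placeholder NIR spectra (samples x wavelengths)
y = rng.uniform(0.1, 5.0, size=60)      # placeholder Sudan I levels (0.1-5 %)

X_cal, X_val, y_cal, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

pls = PLSRegression(n_components=8)     # number of latent variables is an assumption
pls.fit(X_cal, y_cal)

rmsec = np.sqrt(mean_squared_error(y_cal, pls.predict(X_cal)))   # calibration error
rmsep = np.sqrt(mean_squared_error(y_val, pls.predict(X_val)))   # prediction error
r2 = pls.score(X_val, y_val)                                     # coefficient of determination
print(f"RMSEC={rmsec:.3f}%  RMSEP={rmsep:.3f}%  R2={r2:.3f}")
```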

Relevance: 90.00%

Abstract:

This paper presents a new method for online determination of the Thévenin equivalent parameters of a power system at a given node using the local PMU measurements at that node. The method takes into account the measurement errors and the changes on the system side. An analysis of the effects of changes on the system side is carried out on a simple two-bus system to gain insight into their effect on the estimated Thévenin equivalent parameters. The proposed method uses voltage and current magnitudes as well as active and reactive powers, thus avoiding the effect of phase angle drift of the PMU and the need to synchronize measurements taken at different instants to the same reference. Applying the method to the IEEE 30-bus test system has shown its ability to correctly determine the Thévenin equivalent even in the presence of measurement errors and/or system side changes.
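
As a rough illustration of the kind of estimation involved (not the paper's algorithm), the sketch below fits a Thévenin source magnitude and impedance (E, R, X) to a short window of local V, P, Q measurements by nonlinear least squares, taking the bus voltage as the phase reference in each snapshot; all measurement values are placeholders.

```python
# Hypothetical sketch (not the paper's method): fit a Thevenin equivalent
# (E, R, X) to a window of local measurements V, P, Q using nonlinear least
# squares, with the bus voltage taken as the phase reference in each snapshot.
import numpy as np
from scipy.optimize import least_squares

def residuals(theta, V, P, Q):
    E, R, X = theta
    # Predicted |E|^2 from E = V + (R + jX) * (P - jQ)/V, with V taken as real.
    re = V + (R * P + X * Q) / V
    im = (X * P - R * Q) / V
    return E**2 - (re**2 + im**2)

# Placeholder measurement window (per unit); real values would come from the PMU.
V = np.array([1.00, 0.99, 0.98, 0.985])
P = np.array([0.50, 0.55, 0.62, 0.58])
Q = np.array([0.20, 0.22, 0.26, 0.24])

sol = least_squares(residuals, x0=[1.0, 0.01, 0.1], args=(V, P, Q))
E_th, R_th, X_th = sol.x
print(f"E={E_th:.4f} pu, R={R_th:.4f} pu, X={X_th:.4f} pu")
```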

Relevance: 90.00%

Abstract:

Moving from combustion engine to electric vehicle (EV)-based transport is recognized as having a major role to play in reducing pollution, combating climate change and improving energy security. However, the introduction of EVs poses major challenges for power system operation. With increasing penetration of EVs, uncontrolled coincident charging may overload the grid and substantially increase peak power requirements. Developing smart grid technologies and appropriate charging strategies to support the roll-out of EVs is therefore a high priority. In this paper, we investigate the effectiveness of distributed additive increase and multiplicative decrease (AIMD) charging algorithms, as proposed by Stüdli et al. in 2012, at mitigating the impact of domestic charging of EVs on low-voltage distribution networks. In particular, a number of enhancements to the basic AIMD implementation are introduced to enable local power system infrastructure and voltage level constraints to be taken into account and to reduce peak power requirements. The enhanced AIMD EV charging strategies are evaluated using power system simulations for a typical low-voltage residential feeder network in Ireland. Results show that by using the proposed AIMD-based smart charging algorithms, 50% EV penetration can be accommodated, compared with only 10% under uncontrolled charging, without exceeding network infrastructure constraints.
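
For illustration, the sketch below implements only the basic AIMD idea (not the enhanced algorithms evaluated in the paper): each charger ramps its rate additively until the aggregate feeder load exceeds a limit, at which point every charger backs off multiplicatively. The limits and gains are arbitrary placeholders.

```python
# Minimal AIMD charging sketch (basic scheme only; the paper's voltage and
# infrastructure-constraint enhancements are not reproduced). All parameters
# below are illustrative placeholders.
import numpy as np

n_evs = 20
rate = np.zeros(n_evs)          # kW drawn by each EV charger
rate_max = 7.0                  # kW, single-charger limit
feeder_limit = 80.0             # kW available for EV charging on the feeder
alpha, beta = 0.5, 0.5          # additive increment (kW) and multiplicative back-off

for step in range(200):
    if rate.sum() > feeder_limit:                    # congestion signal to all chargers
        rate *= beta                                 # multiplicative decrease
    else:
        rate = np.minimum(rate + alpha, rate_max)    # additive increase

print(f"total load {rate.sum():.1f} kW, mean rate {rate.mean():.2f} kW")
```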

Relevance: 90.00%

Abstract:

In many countries formal or informal palliative care networks (PCNs) have evolved to better integrate community-based services for individuals with a life-limiting illness. We conducted a cross-sectional survey using a customized tool to determine the perceptions of the processes of palliative care delivery reflective of horizontal integration from the perspective of nurses, physicians and allied health professionals working in a PCN, as well as to assess the utility of this tool. The process elements examined were part of a conceptual framework for evaluating integration of a system of care and centred on interprofessional collaboration. We used the Index of Interdisciplinary Collaboration (IIC) as a basis of measurement. The 86 respondents (85% response rate) placed high value on working collaboratively and most reported being part of an interprofessional team. The survey tool showed utility in identifying strengths and gaps in integration across the network and in detecting variability in some factors according to respondent agency affiliation and profession. Specifically, support for interprofessional communication and evaluative activities were viewed as insufficient. Impediments to these aspects of horizontal integration may be reflective of workload constraints, differences in agency operations or an absence of key structural features.


Relevance: 90.00%

Abstract:

The use of handheld near infrared (NIR) instrumentation, as a tool for rapid analysis, has the potential to be used widely in the animal feed sector. A comparison was made between handheld NIR and benchtop instruments for the proximate analysis of poultry feed using off-the-shelf calibration models, including statistical analysis. Additionally, melamine-adulterated soya bean products were used to develop qualitative and quantitative calibration models from the NIRS spectral data, with excellent calibration models and prediction statistics obtained. For the quantitative approach, the coefficients of determination (R2) were found to be 0.94-0.99, with the corresponding values for the root mean square error of calibration and prediction found to be 0.081-0.215% and 0.095-0.288%, respectively. In addition, cross validation was used to further validate the models, with the root mean square error of cross validation found to be 0.101-0.212%. Furthermore, by adopting a qualitative approach with the spectral data and applying Principal Component Analysis, it was possible to discriminate between adulterated and pure samples.
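
A minimal sketch of the qualitative approach is shown below: spectra are projected onto the first two principal components to see whether adulterated and pure samples separate. The spectra here are simulated placeholders, not the study's data.

```python
# Hypothetical sketch of the qualitative approach: project NIR spectra onto the
# first two principal components and check whether melamine-adulterated and
# pure soya samples separate. Data below are simulated placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
pure = rng.normal(0.0, 1.0, size=(30, 500))
adulterated = rng.normal(0.3, 1.0, size=(30, 500))   # small simulated spectral shift
X = np.vstack([pure, adulterated])
labels = np.array(["pure"] * 30 + ["adulterated"] * 30)

scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
for cls in ("pure", "adulterated"):
    centroid = scores[labels == cls].mean(axis=0)
    print(cls, "PC1/PC2 centroid:", np.round(centroid, 2))
```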

Relevance: 90.00%

Abstract:

In this study, 39 sets of hard turning (HT) experimental trials were performed on a Mori-Seiki SL-25Y (4-axis) computer numerical controlled (CNC) lathe to study the effect of cutting parameters on the machined surface roughness. In all the trials, an AISI 4340 steel workpiece (hardened up to 69 HRC) was machined with a commercially available CBN insert (Warren Tooling Limited, UK) under dry conditions. The surface topography of the machined samples was examined using a white light interferometer, and the measurements were reconfirmed using a Form Talysurf. The machining outcomes were used as input to develop various regression models to predict the average machined surface roughness on this material. Three regression models (multiple regression, Random Forest and Quantile regression) were applied to the experimental outcomes. To the best of the authors' knowledge, this paper is the first to apply Random Forest or Quantile regression techniques to the machining domain. The performance of these models was compared to ascertain how feed, depth of cut and spindle speed affect surface roughness, and finally to obtain a mathematical equation correlating these variables. It was concluded that the Random Forest regression model is a superior choice to multiple regression models for the prediction of surface roughness during machining of AISI 4340 steel (69 HRC).
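
As an illustration of the modelling step (with synthetic placeholder data, not the paper's 39 trials), the sketch below compares a multiple linear regression against a Random Forest for predicting surface roughness from feed, depth of cut and spindle speed.

```python
# Illustrative sketch only (synthetic data, not the paper's trials): compare a
# multiple linear regression and a random forest for predicting average surface
# roughness Ra from feed, depth of cut and spindle speed.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
feed = rng.uniform(0.05, 0.25, 39)       # mm/rev
doc = rng.uniform(0.1, 0.5, 39)          # mm, depth of cut
speed = rng.uniform(1000, 3000, 39)      # rpm
Ra = 2.0 * feed + 0.5 * doc + 1e-4 * speed + rng.normal(0, 0.05, 39)  # synthetic response

X = np.column_stack([feed, doc, speed])
for name, model in [("multiple regression", LinearRegression()),
                    ("random forest", RandomForestRegressor(n_estimators=200, random_state=0))]:
    r2 = cross_val_score(model, X, Ra, cv=5, scoring="r2").mean()
    print(f"{name}: mean CV R2 = {r2:.3f}")
```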

Relevance: 90.00%

Abstract:

The paper presents a conceptual discussion of the characterisation and phenomenology of passive intermodulation (PIM) arising from localised and distributed nonlinearities in passive devices and antennas. The distinctive nature of PIM and its impact on signal distortion are examined in comparison with similar effects in power amplifiers. The main features of PIM generation are discussed and illustrated by the example of PIM due to electro-thermal nonlinearity. The issues of measurement, discrimination and modelling of PIM generated by nonlinearities in passive RF components and antennas are addressed.
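
A toy numerical illustration of the underlying mechanism (not a physical PIM model of any device) is given below: a weak memoryless cubic nonlinearity driven by two carriers produces third-order products at 2f1 - f2 and 2f2 - f1, the frequencies at which PIM typically falls into receive bands. All frequencies and coefficients are arbitrary placeholders.

```python
# Toy illustration only (no physical PIM model): a weak memoryless cubic
# nonlinearity driven by two carriers generates third-order intermodulation
# products at 2*f1 - f2 and 2*f2 - f1.
import numpy as np

fs = 1.0e6                      # sample rate, Hz
t = np.arange(0, 0.05, 1 / fs)
f1, f2 = 100e3, 110e3           # two carrier tones, Hz
x = np.cos(2 * np.pi * f1 * t) + np.cos(2 * np.pi * f2 * t)
y = x + 1e-3 * x**3             # weak cubic nonlinearity

spectrum = np.abs(np.fft.rfft(y))
freqs = np.fft.rfftfreq(len(y), 1 / fs)
for f_im in (2 * f1 - f2, 2 * f2 - f1):            # 90 kHz and 120 kHz
    idx = np.argmin(np.abs(freqs - f_im))
    print(f"IM3 product near {f_im/1e3:.0f} kHz, relative level "
          f"{20*np.log10(spectrum[idx]/spectrum.max()):.1f} dBc")
```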

Relevance: 90.00%

Abstract:

Thermal comfort is defined as "that condition of mind which expresses satisfaction with the thermal environment" [1] [2]. Field studies have been completed in order to establish the governing conditions for thermal comfort [3]. These studies showed that the internal climate of a room was the strongest factor in establishing thermal comfort. Direct manipulation of the internal climate is necessary to retain an acceptable level of thermal comfort. For Building Energy Management System (BEMS) strategies to be efficiently utilised, it is necessary to be able to predict the effect that activating a heating/cooling source (radiators, windows and doors) will have on the room. Numerical modelling of the domain can be challenging due to the need to capture temperature stratification and/or different heat sources (radiators, computers and human beings). Computational Fluid Dynamics (CFD) models are usually used for this purpose because they provide the level of detail required. Although they provide the necessary accuracy, these models tend to be highly computationally expensive, especially when transient behaviour needs to be analysed, and consequently they cannot be integrated into BEMS. This paper presents and describes the validation of a CFD-ROM method for real-time simulations of building thermal performance. The CFD-ROM method involves the automatic extraction and solution of reduced order models (ROMs) from validated CFD simulations. The test case used in this work is a room of the Environmental Research Institute (ERI) Building at University College Cork (UCC). The ROMs have been shown to be sufficiently accurate, with a total error of less than 1%, and successfully retain a satisfactory representation of the phenomena modelled. The number of zones in a ROM defines the size and complexity of that ROM, and ROMs with a higher number of zones have been observed to produce more accurate results. As each ROM has a time to solution of less than 20 seconds, they can be integrated into the BEMS of a building, which opens up the potential for real-time, physics-based building energy modelling.
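
By way of illustration only (a generic lumped-capacitance model, not the paper's CFD-extracted ROM), the sketch below time-steps a two-zone reduced thermal model in a fraction of a second, which is the property that makes ROMs attractive for BEMS integration; all parameter values are placeholders.

```python
# Generic illustration, not the paper's CFD-extracted ROM: a two-zone lumped
# thermal model (heat capacities coupled by conductances, with a radiator input)
# that can be time-stepped far faster than a CFD simulation.
import numpy as np
from scipy.integrate import solve_ivp

C = np.array([5.0e5, 5.0e5])         # J/K, zone heat capacities (placeholders)
U_zz = 50.0                          # W/K, inter-zone coupling
U_ext = np.array([20.0, 30.0])       # W/K, losses to outside
T_out = 5.0                          # deg C, outdoor temperature
Q_rad = np.array([1500.0, 0.0])      # W, radiator only in zone 1

def dTdt(t, T):
    exchange = U_zz * (T[::-1] - T)                      # zone-to-zone heat flow
    return (Q_rad + exchange + U_ext * (T_out - T)) / C

sol = solve_ivp(dTdt, (0, 3600), y0=[15.0, 15.0], max_step=60.0)
print("zone temperatures after 1 h:", np.round(sol.y[:, -1], 2), "deg C")
```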

Relevance: 90.00%

Abstract:

This paper presents an electrochemical instrumentation system capable of real-time in situ detection of heavy metals. A practical approach to introduce acidity compensation against changes in amplitude of the peak currents is also presented. The compensated amplitudes can then be used to predict the concentration level of heavy metals. The system uses differential pulse anodic stripping voltammetry, which is a precise and sensitive analytical method with excellent limits of detection. The instrument is capable of detecting lead, cadmium, zinc, nickel and copper with good sensitivity and precision. The system avoids expensive and time-consuming procedures and may be used in a variety of situations to help environmental assessment and control. 
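
The paper's compensation model is not reproduced here, but the general idea can be sketched as follows: correct the measured peak current for the sample's acidity with an empirically fitted factor, then convert the compensated amplitude to a concentration through a linear calibration. Every coefficient below is a hypothetical placeholder.

```python
# Hypothetical sketch only (the paper's compensation scheme is not reproduced):
# scale a stripping-voltammetry peak current back to a reference acidity with an
# empirical factor, then map the compensated amplitude to concentration via a
# linear calibration. All coefficients are illustrative placeholders.

def compensate(peak_current_uA, pH, pH_ref=4.5, k=0.08):
    """Empirical, linear-in-pH correction of the peak current to a reference acidity."""
    return peak_current_uA / (1.0 + k * (pH - pH_ref))

def to_concentration(i_comp_uA, slope=0.52, intercept=0.03):
    """Linear calibration: concentration (ug/L) from compensated peak current (uA)."""
    return (i_comp_uA - intercept) / slope

i_peak, sample_pH = 2.4, 5.2            # example measurement (placeholders)
i_comp = compensate(i_peak, sample_pH)
print(f"estimated heavy-metal concentration: {to_concentration(i_comp):.2f} ug/L")
```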

Relevance: 90.00%

Abstract:

Galactosemia, an inborn error of galactose metabolism, was first described in the 1900s by von Reuss. The subsequent 100 years have seen considerable progress in understanding the underlying genetics and biochemistry of this condition. Initial studies concentrated on increasing the understanding of the clinical manifestations of the disease. However, Leloir's discovery of the pathway of galactose catabolism in the 1940s and 1950s enabled other scientists, notably Kalckar, to link the disease to a specific enzymatic step in the pathway. Kalckar's work established that defects in galactose 1-phosphate uridylyltransferase (GALT) were responsible for the majority of cases of galactosemia. However, over the next three decades it became clear that there were two other forms of galactosemia: type II, resulting from deficiencies in galactokinase (GALK1), and type III, where the affected enzyme is UDP-galactose 4'-epimerase (GALE). From the 1970s, molecular biology approaches were applied to galactosemia. The chromosomal locations and DNA sequences of the three genes were determined, and these advances enabled modern biochemical studies. Structures of the proteins have been determined, and biochemical studies have shown that enzymatic impairment often results from misfolding and consequent protein instability. Cellular and model organism studies have demonstrated that reduced GALT or GALE activity results in increased oxidative stress. Thus, after a century of progress, it is possible to conceive of improved therapies, including drugs that manipulate the pathway to reduce potentially toxic intermediates, antioxidants to reduce the oxidative stress of cells, or the use of "pharmacological chaperones" to stabilise the affected proteins.

Relevance: 90.00%

Abstract:

Rapid in situ diagnosis of damage is a key issue in the preservation of stone-built cultural heritage. This is evident in the increasing number of congresses, workshops and publications dealing with this issue. With this increased activity has come, however, the realisation that for many culturally significant artefacts it is not possible either to remove samples for analysis or to affix surface markers for measurement. It is for this reason that there has been a growth of interest in non-destructive and minimally invasive techniques for characterising internal and external stone condition. With this interest has come the realisation that no single technique can adequately encompass the wide variety of parameters to be assessed or provide the range of information required to identify appropriate conservation. In this paper we describe a strategy to address these problems through the development of an integrated 'tool kit' of measurement and analytical techniques aimed specifically at linking object-specific research to appropriate intervention. The strategy is based initially upon the acquisition of accurate three-dimensional models of stone-built heritage at different scales using a combination of millimetre-accurate LiDAR and sub-millimetre-accurate Object Scanning that can be exported into a GIS or directly into CAD. These are currently used to overlay information on stone characteristics obtained through a combination of Ground Penetrating Radar, Surface Permeametry, Colorimetry and X-ray Fluorescence, but the possibility exists of adding to this array of techniques as appropriate. In addition to the integrated three-dimensional data array provided by superimposition upon Digital Terrain Models, there is the capability of accurate re-measurement to show patterns of surface loss and changes in material condition over time. Thus it is possible both to record and baseline condition and to identify areas that require either preventive maintenance or more significant pre-emptive intervention. In pursuit of these goals the authors are developing, through a UK Government-supported collaboration between university researchers and conservation architects, commercially viable protocols for damage diagnosis and condition monitoring, and eventually mechanisms for prioritizing repairs to stone-built heritage. The understanding is, however, that such strategies are not age-constrained and can ultimately be applied to structures of any age.

Relevance: 90.00%

Abstract:

Cascade control is one of the routinely used control strategies in industrial processes because it can dramatically improve the performance of single-loop control, reducing both the maximum deviation and the integral error of the disturbance response. Currently, many control performance assessment methods for cascade control loops are developed under the assumption that all disturbances follow a Gaussian distribution. In practice, however, several disturbance sources occur in the manipulated variable, or the upstream process exhibits nonlinear behaviour. In this paper, a general and effective index for the performance assessment of cascade control systems subject to disturbances of unknown distribution is proposed. As in minimum variance control (MVC) design, the output variances of the primary and secondary loops are decomposed into a cascade-invariant and a cascade-dependent term, but the ARMA model for the cascade control loop is estimated based on minimum entropy, instead of the minimum mean square error, to handle non-Gaussian disturbances. Unlike the MVC index, the proposed control performance index is based on information theory and the minimum entropy criterion. The index is informative and in agreement with the expected control knowledge. To demonstrate the wide applicability and effectiveness of the minimum entropy cascade control index, a simulation problem and a cascade control case from an oil refinery are studied. A comparison with MVC-based cascade control assessment is also included.
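
As a small illustration of the entropy side of the idea (a generic sketch, not the paper's ARMA-based decomposition), the code below estimates the differential entropy of a non-Gaussian control error sequence with a kernel density estimate; this is the quantity a minimum entropy benchmark would be compared against.

```python
# Generic illustration (not the paper's ARMA-based index): estimate the
# differential entropy of a non-Gaussian control error sequence with a kernel
# density estimate.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)
# Placeholder non-Gaussian "control error": Gaussian noise plus occasional spikes.
error = rng.normal(0, 1.0, 2000) + rng.choice([0.0, 4.0], size=2000, p=[0.95, 0.05])

kde = gaussian_kde(error)
grid = np.linspace(error.min() - 1, error.max() + 1, 2000)
pdf = kde(grid)
dx = grid[1] - grid[0]
entropy = -np.sum(pdf * np.log(pdf + 1e-12)) * dx   # differential entropy, in nats
print(f"estimated error entropy: {entropy:.3f} nats")
```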

Relevance: 90.00%

Abstract:

This paper investigates the identification and output tracking control of a class of Hammerstein systems through a wireless network within an integrated framework, where the statistical characteristics of the wireless network are modelled using the inverse Gaussian cumulative distribution function. In the proposed framework, a new networked identification algorithm is proposed to compensate for the influence of the wireless network delays so as to acquire a more precise Hammerstein system model. The identified model, together with a model-based approach, is then used to design an output tracking controller. Mean square stability conditions are given using linear matrix inequalities (LMIs), and the optimal controller gains can be obtained by solving the corresponding optimization problem expressed in terms of LMIs. Illustrative numerical simulation examples are given to demonstrate the effectiveness of the proposed method.
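
For context, the sketch below shows the Hammerstein structure itself (a static polynomial nonlinearity followed by linear dynamics) identified by ordinary over-parameterised least squares on synthetic data; the paper's wireless-delay compensation and LMI-based controller design are not reproduced.

```python
# Generic sketch of Hammerstein identification (static polynomial nonlinearity
# followed by first-order linear dynamics) using over-parameterised least squares.
# The paper's network-delay compensation and LMI tracking design are not shown.
import numpy as np

rng = np.random.default_rng(4)
N = 500
u = rng.uniform(-1, 1, N)
# Synthetic "true" system: v = u + 0.5*u^2 + 0.2*u^3,  y(k) = 0.8*y(k-1) + 0.6*v(k-1) + noise
v = u + 0.5 * u**2 + 0.2 * u**3
y = np.zeros(N)
for k in range(1, N):
    y[k] = 0.8 * y[k - 1] + 0.6 * v[k - 1] + 0.01 * rng.normal()

# Over-parameterised regressor: y(k) = a*y(k-1) + b1*u(k-1) + b2*u(k-1)^2 + b3*u(k-1)^3
Phi = np.column_stack([y[:-1], u[:-1], u[:-1]**2, u[:-1]**3])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
print("estimated [a, b*c1, b*c2, b*c3]:", np.round(theta, 3))
# Expected approximately [0.8, 0.6, 0.3, 0.12] for the synthetic system above.
```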

Relevance: 90.00%

Abstract:

This paper is concerned with the analysis of the stability of delayed recurrent neural networks. In contrast to the widely used Lyapunov–Krasovskii functional approach, a new method is developed within the integral quadratic constraints framework. To achieve this, several lemmas are first given to propose integral quadratic separators to characterize the original delayed neural network. With these, the network is then reformulated as a special form of feedback-interconnected system by choosing proper integral quadratic constraints. Finally, new stability criteria are established based on the proposed approach. Numerical examples are given to illustrate the effectiveness of the new approach.

Relevance: 90.00%

Abstract:

This paper investigates camera control for capturing bottle cap target images in the fault-detection system of an industrial production line. The main purpose is to identify the targeted bottle caps accurately in real time from the images. This is achieved by combining iterative learning control and Kalman filtering to reduce the effect of the various disturbances introduced into the detection system. A mathematical model, together with a physical simulation platform, is established based on the actual production requirements, and the convergence properties of the model are analyzed. It is shown that the proposed method enables accurate real-time control of the camera, and the gain range of the learning rule is also obtained. The numerical simulation and experimental results confirm that the proposed method can reduce the effect not only of repeatable disturbances but also of non-repeatable ones.
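
A minimal sketch of how the two ingredients can be combined is given below, under assumptions that are not the paper's: a P-type iterative learning control update is applied trial-by-trial to a simple first-order positioning model, with a scalar Kalman filter smoothing the measured tracking error before it enters the learning law. The plant, gains and noise levels are all placeholders.

```python
# Generic sketch (not the paper's model): P-type iterative learning control on a
# first-order positioning model, with a scalar Kalman filter smoothing the noisy
# tracking error before it is used in the learning update.
import numpy as np

T, trials = 100, 30
ref = np.sin(np.linspace(0, 2 * np.pi, T))        # desired camera trajectory
u = np.zeros(T)                                   # learned input, updated each trial
L_gain, a, b = 0.8, 0.9, 0.5                      # learning gain; plant y(k+1) = a*y(k) + b*u(k)
q, r = 1e-4, 1e-2                                 # Kalman process / measurement variances

for j in range(trials):
    # Run one trial of the plant and measure the tracking error with noise.
    y = np.zeros(T)
    for k in range(T - 1):
        y[k + 1] = a * y[k] + b * u[k]
    e_meas = ref - y + np.sqrt(r) * np.random.default_rng(j).normal(size=T)

    # Scalar Kalman filter along the trial (random-walk model of the error).
    e_hat, x, P = np.zeros(T), 0.0, 1.0
    for k in range(T):
        P += q                                    # predict
        K = P / (P + r)                           # Kalman gain
        x += K * (e_meas[k] - x)                  # update with the noisy measurement
        P *= (1 - K)
        e_hat[k] = x

    u[:-1] += L_gain * e_hat[1:]                  # P-type ILC update (time-shifted error)

print(f"final RMS tracking error: {np.sqrt(np.mean((ref - y) ** 2)):.4f}")
```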