63 results for Error of measurement
Abstract:
Recently, polymeric adsorbents have emerged as highly effective alternatives to activated carbons for pollutant removal from industrial effluents. Poly(methyl methacrylate) (PMMA), polymerized using the atom transfer radical polymerization (ATRP) technique, has been investigated for its feasibility in removing phenol from aqueous solution. Adsorption equilibrium and kinetic investigations were undertaken to evaluate the effect of contact time, initial concentration (10-90 mg/L), and temperature (25-55 °C). Phenol uptake was found to increase with increasing initial concentration and agitation time. The adsorption kinetics were found to follow the pseudo-second-order kinetic model. The intra-particle diffusion analysis indicated that film diffusion may be the rate-controlling step in the removal process. Experimental equilibrium data were fitted to five isotherm models, namely Langmuir, Freundlich, Dubinin-Radushkevich, Temkin and Redlich-Peterson, by non-linear least squares regression, and their goodness of fit was evaluated in terms of mean relative error (MRE) and standard error of estimate (SEE). The adsorption equilibrium data were best represented by the Freundlich and Redlich-Peterson isotherms. Thermodynamic parameters such as ΔG° and ΔH° indicated that the sorption process is exothermic and spontaneous in nature and that higher ambient temperature results in more favourable adsorption. © 2011 Elsevier B.V. All rights reserved.
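A minimal sketch of the kind of non-linear isotherm fitting and error metrics described in this abstract, using hypothetical equilibrium data. The Freundlich form qe = KF·Ce^(1/n) and the MRE/SEE definitions below are standard textbook expressions, not necessarily the exact ones used by the authors.

```python
# Minimal sketch: non-linear least-squares fit of the Freundlich isotherm
# and goodness-of-fit metrics (MRE, SEE). Data values are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def freundlich(Ce, KF, n):
    """Freundlich isotherm: qe = KF * Ce**(1/n)."""
    return KF * Ce**(1.0 / n)

# Hypothetical equilibrium data: Ce in mg/L, qe in mg/g
Ce = np.array([2.0, 8.0, 18.0, 35.0, 60.0, 85.0])
qe = np.array([1.1, 2.9, 4.8, 7.0, 9.4, 11.2])

params, _ = curve_fit(freundlich, Ce, qe, p0=[1.0, 2.0])
qe_pred = freundlich(Ce, *params)

# Mean relative error (%) and standard error of estimate
mre = 100.0 / len(qe) * np.sum(np.abs((qe - qe_pred) / qe))
see = np.sqrt(np.sum((qe - qe_pred) ** 2) / (len(qe) - len(params)))

print(f"KF = {params[0]:.3f}, 1/n = {1/params[1]:.3f}, MRE = {mre:.2f}%, SEE = {see:.3f}")
```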
Abstract:
The conventional radial basis function (RBF) network optimization methods, such as orthogonal least squares or the two-stage selection, can produce a sparse network with satisfactory generalization capability. However, the RBF width, as a nonlinear parameter in the network, is not easy to determine. In the aforementioned methods, the width is always pre-determined, either by trial and error or generated randomly. Furthermore, all hidden nodes share the same RBF width. This inevitably reduces the network performance, and more RBF centres may then be needed to meet a desired modelling specification. In this paper, we investigate a new two-stage construction algorithm for RBF networks. It utilizes the particle swarm optimization method to search for the optimal RBF centres and their associated widths. Although the new method needs more computation than conventional approaches, it can greatly reduce the model size and improve model generalization performance. The effectiveness of the proposed technique is confirmed by two numerical simulation examples.
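A generic illustration of the idea of optimizing RBF centres and per-node widths with particle swarm optimization, with the linear output weights solved by least squares. This is a minimal sketch on a synthetic 1-D problem, not the authors' two-stage construction algorithm; all parameter values are assumptions.

```python
# Minimal sketch: PSO over RBF centres and per-node widths; output weights
# are solved by linear least squares for each candidate. Synthetic 1-D data.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 200)
y = np.sin(2 * x) + 0.05 * rng.standard_normal(x.size)

M = 6  # number of RBF centres (assumed)

def design_matrix(params):
    centres, widths = params[:M], np.abs(params[M:]) + 1e-3
    return np.exp(-((x[:, None] - centres[None, :]) ** 2) / (2 * widths[None, :] ** 2))

def fitness(params):
    Phi = design_matrix(params)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return np.mean((Phi @ w - y) ** 2)

# PSO over the 2M-dimensional (centres, widths) vector
n_particles, dim, iters = 30, 2 * M, 200
pos = rng.uniform(-3, 3, (n_particles, dim))
pos[:, M:] = rng.uniform(0.1, 1.5, (n_particles, M))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    vals = np.array([fitness(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print(f"best training MSE: {pbest_val.min():.5f}")
```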
Abstract:
Background: Co-localisation is a widely used measurement in immunohistochemical analysis to determine whether fluorescently labelled biological entities, such as cells, proteins or molecules, share the same location. However, the measurement of co-localisation is challenging due to the complex nature of such fluorescent images, especially when multiple focal planes are captured. The current state-of-the-art co-localisation measurements of 3-dimensional (3D) image stacks are biased by noise and cross-overs from non-consecutive planes.
Method: In this study, we have developed Co-localisation Intensity Coefficients (CICs) and Co-localisation Binary Coefficients (CBCs), which use rich z-stack data from neighbouring focal planes to identify similarities between the image intensities of two, and potentially more, fluorescently labelled biological entities. These were developed using z-stack images from murine organotypic slice cultures of central nervous system tissue and two sets of pseudo-data. A large number of non-specific cross-over situations are excluded using this method. The proposed method is also shown to be robust in recognising co-localisations even when images are polluted with a range of noise types.
Results: The proposed CBCs and CICs produce robust co-localisation measurements that are easy to interpret, resilient to noise and capable of removing a large number of false positives, such as non-specific cross-overs. This method is significantly more accurate than existing measurements, as determined statistically using pseudo-datasets of known values. It provides an important and reliable tool for fluorescent 3D neurobiological studies and will benefit other biological studies that measure fluorescence co-localisation in 3D.
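For context, a minimal sketch of a conventional Manders-style overlap coefficient computed over a synthetic two-channel z-stack; this is the kind of standard baseline measure the CICs/CBCs are compared against, not the coefficients proposed in the study. The channel data and threshold below are assumptions.

```python
# Minimal sketch: a conventional Manders-style co-localisation coefficient
# computed over a synthetic two-channel z-stack (whole stack and per slice).
# This is a standard baseline measure, not the proposed CIC/CBC coefficients.
import numpy as np

rng = np.random.default_rng(1)
z, h, w = 9, 64, 64
red = rng.random((z, h, w))
green = 0.6 * red + 0.4 * rng.random((z, h, w))   # partially co-localised channel

def manders_m1(ch1, ch2, thresh=0.5):
    """Fraction of ch1 intensity found in voxels where ch2 exceeds a threshold."""
    mask = ch2 > thresh
    return ch1[mask].sum() / ch1.sum()

per_slice = [manders_m1(red[k], green[k]) for k in range(z)]
print("whole-stack M1:", round(manders_m1(red, green), 3))
print("per-slice M1 :", [round(v, 3) for v in per_slice])
```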
Abstract:
The mechanism by which a retrodirective Rotman lens operates is examined theoretically, and prediction is compared with measurement. By deriving the reflection matrix from the phase delay relationship between the beam ports and the array ports, we show that, if the phase delay difference between neighbouring ports is constrained in a particular way, the reflection matrix becomes an inverse diagonal (anti-diagonal) matrix and the Rotman lens functions as a Van Atta array, and hence can perform retrodirective reflection. Further, the primary factors governing the bandwidth and beam-pointing error of the lens are elaborated.
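As an illustration of the port mapping implied by an anti-diagonal ("inverse diagonal") reflection matrix, a hypothetical 4-port case is sketched below; the specific matrix and common phase term are illustrative assumptions, not taken from the paper.

```latex
% Illustrative 4-port case: an anti-diagonal reflection matrix maps port n to
% port N+1-n, the same pairing exploited by a Van Atta array, which is what
% makes the reflection retrodirective.
\[
\mathbf{S} =
\begin{pmatrix}
0 & 0 & 0 & e^{j\phi}\\
0 & 0 & e^{j\phi} & 0\\
0 & e^{j\phi} & 0 & 0\\
e^{j\phi} & 0 & 0 & 0
\end{pmatrix},
\qquad
b_n = e^{j\phi}\, a_{N+1-n}.
\]
```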
Abstract:
The concentration of organic acids in anaerobic digesters is one of the most critical parameters for monitoring and advanced control of anaerobic digestion processes; a reliable online measurement system is therefore essential. A novel approach to obtaining these measurements indirectly and online using UV/vis spectroscopic probes, in conjunction with powerful pattern recognition methods, is presented in this paper. A UV/vis spectroscopic probe from S::CAN is used in combination with a custom-built dilution system to monitor the absorption of fully fermented sludge over a spectral range from 200 to 750 nm. Advanced pattern recognition methods are then used to map the non-linear relationship between the measured absorption spectra and laboratory measurements of organic acid concentrations. Linear discriminant analysis, generalized discriminant analysis (GerDA), support vector machines (SVM), relevance vector machines, random forests and neural networks are investigated for this purpose and their performance compared. To validate the approach, online measurements were taken at a full-scale 1.3-MW industrial biogas plant. Results show that, whereas some of the methods considered do not yield satisfactory results, accurate prediction of organic acid concentration ranges can be obtained with both GerDA- and SVM-based classifiers, with classification rates in excess of 87% achieved on test data.
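A minimal sketch of one of the pattern-recognition approaches compared above: classifying absorption spectra into organic-acid concentration ranges with an SVM. The spectra, class labels and hyperparameters below are synthetic assumptions, not the plant data or tuned models from the study.

```python
# Minimal sketch: SVM classification of absorption spectra into
# concentration-range classes. All data are synthetic.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(2)
n_samples, n_wavelengths = 300, 110          # e.g. 200-750 nm in 5 nm steps
X = rng.random((n_samples, n_wavelengths))   # synthetic absorption spectra
y = rng.integers(0, 3, n_samples)            # three concentration-range classes

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X_train, y_train)
print(f"classification rate on test data: {clf.score(X_test, y_test):.2f}")
```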
Abstract:
Diverse parameters, including chaotropicity, can limit the function of cellular systems and thereby determine the extent of Earth's biosphere. Whereas parameters such as temperature, hydrophobicity, pressure, pH, Hofmeister effects, and water activity can be quantified via standard scales of measurement, the chao-/kosmotropic activities of environmentally ubiquitous substances have no widely accepted, universal scale. We developed an assay to determine and quantify chao-/kosmotropicity for 97 chemically diverse substances that can be universally applied to all solutes. This scale is numerically continuous for the solutes assayed (from +361 kJ kg⁻¹ mol⁻¹ for chaotropes to -659 kJ kg⁻¹ mol⁻¹ for kosmotropes), but there are key points that delineate (i) chaotropic from kosmotropic substances (i.e. chaotropes ≥ +4 kJ kg⁻¹ mol⁻¹; kosmotropes ≤ -4 kJ kg⁻¹ mol⁻¹); and (ii) chaotropic solutes that are readily water-soluble (log P < 1.9) from hydrophobic substances that exert their chaotropic activity, by proxy, from within the hydrophobic domains of macromolecular systems (log P > 1.9). Examples of chao-/kosmotropicity values are, for chaotropes: phenol +143, CaCl2 +92.2, MgCl2 +54.0, butanol +37.4, guanidine hydrochloride +31.9, urea +16.6, glycerol [>6.5 M] +6.34, ethanol +5.93, fructose +4.56; for kosmotropes: proline -5.76, sucrose -6.92, dimethylsulphoxide (DMSO) -9.72, mannitol -6.69, trehalose -10.6, NaCl -11.0, glycine -14.2, ammonium sulfate -66.9, polyethylene glycol (PEG) 1000 -126; and for relatively neutral solutes: methanol +3.12, ethylene glycol +1.66, glucose +1.19, glycerol [<5 M] +1.06, maltose -1.43 (all in kJ kg⁻¹ mol⁻¹). The data obtained correlate with solute interactions with, and structure-function changes in, enzymes and membranes. We discuss the implications for diverse fields including microbial ecology, biotechnology and astrobiology.
Abstract:
A study was undertaken to examine a range of sample preparation and near infrared reflectance spectroscopy (NIRS) methodologies, using undried samples, for predicting organic matter digestibility (OMD, g/kg) and ad libitum intake (g/kg W^0.75) of grass silages. A total of eight sample preparation/NIRS scanning methods were examined, involving three extents of silage comminution, two liquid extracts, and scanning via either an external probe (1100-2200 nm) or an internal cell (1100-2500 nm). The spectral data (log 1/R) for each of the eight methods were examined by three regression techniques, each with a range of data transformations. The 136 silages used in the study were obtained from farms across Northern Ireland over a two-year period and had in vivo OMD (sheep) and ad libitum intake (cattle) determined under uniform conditions. In the comparisons of the eight sample preparation/scanning methods, and the differing mathematical treatments of the spectral data, the sample population was divided into calibration (n = 91) and validation (n = 45) sets. The standard error of performance (SEP) on the validation set was used in comparisons of prediction accuracy. Across all eight sample preparation/scanning methods, the modified partial least squares (MPLS) technique generally minimized SEPs for both OMD and intake. The accuracy of prediction also increased with the degree of comminution of the forage and with scanning by internal cell rather than external probe. The system providing the lowest SEP used the MPLS regression technique on spectra from the finely milled material scanned through the internal cell. This resulted in SEP and R² (variance accounted for in the validation set) values of 24 g/kg OM and 0.88 for OMD, and 5.37 g/kg W^0.75 and 0.77 for intake, respectively. These data indicate that, with appropriate techniques, NIRS scanning of undried samples of grass silage can produce predictions of intake and digestibility with accuracies similar to those achieved previously using NIRS with dried samples. © 1998 Elsevier Science B.V.
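A minimal sketch of a PLS calibration of NIR spectra against a reference value, with the standard error of performance (SEP) computed on a held-out validation set split as above (91/45). Data are synthetic, and ordinary PLS is shown because the modified PLS (MPLS) variant used in the study is not available in scikit-learn; the bias-corrected SEP formula is the conventional definition, assumed here.

```python
# Minimal sketch: PLS calibration of synthetic NIR spectra and bias-corrected
# SEP on a validation set. Ordinary PLS stands in for MPLS.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(3)
X = rng.random((136, 700))               # log(1/R) spectra, e.g. 1100-2500 nm
y = 650 + 40 * X[:, :10].sum(axis=1) + 5 * rng.standard_normal(136)  # e.g. OMD, g/kg

X_cal, y_cal = X[:91], y[:91]            # calibration set
X_val, y_val = X[91:], y[91:]            # validation set

pls = PLSRegression(n_components=8).fit(X_cal, y_cal)
residuals = y_val - pls.predict(X_val).ravel()
bias = residuals.mean()
sep = np.sqrt(np.sum((residuals - bias) ** 2) / (len(residuals) - 1))
print(f"bias = {bias:.2f} g/kg, SEP = {sep:.2f} g/kg")
```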
Abstract:
Determining the trophic niche width of an animal population, and the relative degree to which a generalist population consists of dietary specialists, are long-standing problems of ecology. It has been proposed that the variance of stable isotope values in consumer tissues could be used to quantify the trophic niche width of consumer populations. However, this promising idea has not yet been rigorously tested. By conducting controlled laboratory experiments using model consumer populations (Daphnia sp., Crustacea) with controlled diets, we investigated the effect of individual- and population-level specialisation and generalism on consumer δ13C mean and variance values. While our experimental data follow general expectations, we extend current qualitative models to quantitative predictions of the dependence of isotopic variance on dietary correlation time, a measure of the typical time over which a consumer changes its diet. This quantitative approach allows us to pinpoint possible procedural pitfalls and critical sources of measurement uncertainty. Our results show that the stable isotope approach represents a powerful method for estimating trophic niche widths, especially when taking the quantitative concept of dietary correlation time into account. © 2012 The Authors.
Abstract:
A study combining high-resolution mass spectrometry (ultra-performance liquid chromatography-quadrupole time-of-flight mass spectrometry, UPLC-QTof-MS) and chemometrics for the analysis of post-mortem brain tissue from subjects with Alzheimer's disease (AD) (n = 15) and healthy age-matched controls (n = 15) was undertaken. The potential of this metabolomics approach for distinguishing AD cases is underlined by the correct prediction of disease status in 94-97% of cases. Predictive power was confirmed in a blind test set of 60 samples, reaching 100% diagnostic accuracy. The approach also indicated compounds significantly altered in concentration following the onset of human AD. Using orthogonal partial least-squares discriminant analysis (OPLS-DA), a multivariate model was created for both modes of acquisition, explaining the maximum amount of variation between sample groups (positive mode: R² = 97%, Q² = 93%, root mean squared error of validation (RMSEV) = 13%; negative mode: R² = 99%, Q² = 92%, RMSEV = 15%). In brain extracts, 1264 and 1457 ions of interest were detected in positive and negative acquisition modes, respectively. Incorporation of gender into the model increased predictive accuracy and decreased RMSEV values. High-resolution UPLC-QTof-MS has not previously been employed to biochemically profile post-mortem brain tissue, and the novel methods described and validated herein prove its potential for making new discoveries related to the etiology, pathophysiology, and treatment of degenerative brain disorders.
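A minimal sketch of the kind of cross-validated discriminant model and error metrics reported above, using ordinary PLS on a binary class label (a PLS-DA-style stand-in, since OPLS-DA itself is not available in scikit-learn). The data, group difference and number of components are synthetic assumptions; the Q² and RMSEV definitions below are the conventional cross-validation forms.

```python
# Minimal sketch: PLS-DA style model with cross-validated Q2, RMSEV and
# classification rate. Data are synthetic; this is not OPLS-DA.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(5)
X = rng.random((30, 1264))                 # feature ions per sample (synthetic)
y = np.array([0] * 15 + [1] * 15)          # control vs AD, coded 0/1
X[y == 1, :50] += 0.4                      # hypothetical group difference

pls = PLSRegression(n_components=2)
y_cv = cross_val_predict(pls, X, y.astype(float), cv=5).ravel()

press = np.sum((y - y_cv) ** 2)            # prediction error sum of squares
tss = np.sum((y - y.mean()) ** 2)
q2 = 1.0 - press / tss
rmsev = np.sqrt(np.mean((y - y_cv) ** 2))
accuracy = np.mean((y_cv > 0.5).astype(int) == y)
print(f"Q2 = {q2:.2f}, RMSEV = {rmsev:.2f}, CV classification rate = {accuracy:.2f}")
```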
Abstract:
Soya bean products are used widely in the animal feed industry as a protein-based feed ingredient and have been found to be adulterated with melamine, as highlighted in the Chinese scandal of 2008. Dehulled soya (GM and non-GM), soya hulls and toasted soya were contaminated with melamine and spectra were generated using Near Infrared Reflectance Spectroscopy (NIRS). By applying chemometrics to the spectral data, excellent calibration models and prediction statistics were obtained. The coefficients of determination (R²) were found to be 0.89-0.99, depending on the mathematical algorithm used, the data pre-processing applied and the sample type used. The corresponding values for the root mean square errors of calibration and prediction were found to be 0.081-0.276% and 0.134-0.368%, respectively, again depending on the chemometric treatment applied to the data and the sample type. In addition, adopting a qualitative approach with the spectral data and applying PCA, it was possible to discriminate between the four sample types and also, by generation of Cooman's plots, to distinguish between adulterated and non-adulterated samples.
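A minimal sketch of the qualitative PCA view of NIR spectra for separating adulterated from non-adulterated samples. The spectra and the "melamine band" region are synthetic assumptions, and the Cooman's plot itself (distances to two class models) is not reproduced here.

```python
# Minimal sketch: PCA scores of synthetic NIR spectra for clean vs
# melamine-spiked samples. Purely illustrative data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
clean = rng.normal(0.0, 1.0, (40, 500))          # spectra of unadulterated soya
spiked = rng.normal(0.0, 1.0, (40, 500))
spiked[:, 200:220] += 1.5                         # hypothetical melamine bands

X = np.vstack([clean, spiked])
labels = np.array(["clean"] * 40 + ["melamine"] * 40)

scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
for lab in ("clean", "melamine"):
    centroid = scores[labels == lab].mean(axis=0)
    print(lab, "centroid in PC space:", np.round(centroid, 2))
```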
Abstract:
Purpose: The authors sought to quantify neighboring and distant interpoint correlations of threshold values within the visual field in patients with glaucoma. Methods: Visual fields of patients with confirmed or suspected glaucoma were analyzed (n = 255). One eye per patient was included. Patients were examined using the 32 program of the Octopus 1-2-3. Linear regression analysis between each location and the rest of the points of the visual field was performed, and the correlation coefficient was calculated. The degree of correlation was categorized as high (r > 0.66), moderate (0.66 ≥ r > 0.33), or low (r ≤ 0.33). The standard error of threshold estimation was calculated. Results: Most locations of the visual field had high and moderate correlations with neighboring points and with distant locations corresponding to the same nerve fiber bundle. Locations of the visual field had low correlations with those of the opposite hemifield, with the exception of locations temporal to the blind spot. The standard error of threshold estimation increased from 0.6 to 0.9 dB with an r reduction of 0.1. Conclusion: Locations of the visual field have the highest interpoint correlation with neighboring points and with distant points in areas corresponding to the distribution of the retinal nerve fiber layer. The quantification of interpoint correlations may be useful in the design and interpretation of visual field tests in patients with glaucoma.
Abstract:
Modern internal combustion (IC) engines reject around two thirds of the energy provided by the fuel as low-grade waste heat. Capturing a portion of this waste heat energy and transforming it into a more useful form of energy could result in a significant reduction in fuel consumption. By using the low-grade heat, an organic Rankine cycle (ORC) can produce mechanical work from a pressurised organic fluid with the use of an expander.
Ideal gas assumptions are shown to produce significant errors in expander performance predictions when using an organic fluid. This paper details the mathematical modelling technique used to accurately model the thermodynamic processes for both ideal and non-ideal fluids within the reciprocating expander. A comparison between the two methods illustrates the extent of the errors when modelling a reciprocating piston expander. Use of the ideal gas assumptions is shown to produce an error of 55% in the prediction of the power produced by the expander when operating on refrigerant R134a.
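A minimal sketch of the kind of ideal-gas versus real-fluid discrepancy described above, comparing the density of R134a from the ideal gas law with the value from the CoolProp property library at an illustrative expander inlet state. The temperature and pressure are assumptions, and this compares densities only, not the full expander power calculation in the paper.

```python
# Minimal sketch: ideal-gas vs real-fluid density of R134a at an illustrative
# state, to show the scale of error the ideal-gas assumption can introduce.
from CoolProp.CoolProp import PropsSI

T = 360.0          # K, illustrative inlet temperature
P = 2.0e6          # Pa, illustrative inlet pressure
R_specific = 8.314 / 0.10203        # J/(kg K); R134a molar mass ~102.03 g/mol

rho_ideal = P / (R_specific * T)                  # ideal gas law
rho_real = PropsSI("D", "T", T, "P", P, "R134a")  # real-fluid equation of state

error = 100.0 * (rho_ideal - rho_real) / rho_real
print(f"ideal: {rho_ideal:.1f} kg/m3, real: {rho_real:.1f} kg/m3, error: {error:.1f}%")
```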
Abstract:
In this article, the multibody simulation software package MADYMO for analysing and optimizing occupant safety design was used to model crash tests for Normal Containment barriers in accordance with EN 1317. The verification process was carried out by simulating a TB31 and a TB32 crash test performed on vertical portable concrete barriers and by comparing the numerical results to those obtained experimentally. The same modelling approach was applied to both tests to evaluate the predictive capacity of the modelling at two different impact speeds. A sensitivity analysis of the vehicle stiffness was also carried out. The capacity to predict all of the principal EN 1317 criteria was assessed for the first time: the acceleration severity index, the theoretical head impact velocity, the barrier working width and the vehicle exit box. Results showed a maximum error of 6% for the acceleration severity index and 21% for the theoretical head impact velocity in the numerical simulations compared with the recorded data. The exit box position was predicted with a maximum error of 4°. For the working width, a large percentage difference was observed for test TB31 due to the small absolute value of the barrier deflection, but the results were well within the limit value from the standard for both tests. The sensitivity analysis showed the robustness of the modelling with respect to contact stiffness variations of ±20% and ±40%. This is the first multibody model of portable concrete barriers that can reproduce not only the acceleration severity index but all the test criteria of EN 1317, and it is therefore a valuable tool for new product development and for injury biomechanics research.
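For reference, the acceleration severity index referred to throughout is normally computed from 50 ms moving averages of the vehicle accelerations; the form below is the commonly quoted EN 1317 definition, stated here for orientation rather than taken from the article.

```latex
% Acceleration Severity Index as commonly defined in EN 1317: the overbars are
% 50 ms moving averages of the vehicle accelerations (in units of g) and the
% hatted terms are the limit accelerations for each axis.
\[
\mathrm{ASI}(t) = \sqrt{\left(\frac{\bar{a}_x}{\hat{a}_x}\right)^{2}
                      + \left(\frac{\bar{a}_y}{\hat{a}_y}\right)^{2}
                      + \left(\frac{\bar{a}_z}{\hat{a}_z}\right)^{2}},
\qquad
\hat{a}_x = 12g,\quad \hat{a}_y = 9g,\quad \hat{a}_z = 10g,
\]
% with the reported ASI being the maximum of ASI(t) over the impact.
```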
Abstract:
In this paper, a novel method for modelling a scaled vehicle–barrier crash test similar to the 20° angled barrier test specified in EN 1317 is reported. The intended application is for proof-of-concept evaluation of novel roadside barrier designs, and as a cost-effective precursor to full-scale testing or detailed computational modelling. The method is based on the combination of the conservation of energy law and the equation of motion of a spring-mass system representing the impact, and shows, for the first time, the feasibility of applying classical scaling theories to the evaluation of roadside barrier design. The scaling method is used to set the initial velocity of the vehicle in the scaled test and to provide scaling factors that convert the measured vehicle accelerations in the scaled test to predicted full-scale accelerations. These values can then be used to calculate the Acceleration Severity Index score of the barrier for a full-scale test. The theoretical validity of the method is demonstrated by comparison with numerical simulations of scaled and full-scale angled barrier impacts using multibody analysis implemented in the crash simulation software MADYMO. Results show a maximum error of 0.3% attributable to the scaling method.
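To illustrate the kind of similitude relations invoked above, the block below lists the standard replica-scaling factors for a geometric scale factor λ with identical materials and equal impact stresses. These are the textbook results, given as an assumed illustration; the paper derives its own factors from energy conservation and a spring-mass impact model.

```latex
% Standard replica-scaling relations for geometric scale factor \lambda
% (scaled length / full-scale length), identical materials, equal stresses;
% subscripts s and f denote scaled and full-scale quantities.
\[
\frac{v_s}{v_f} = 1, \qquad
\frac{t_s}{t_f} = \lambda, \qquad
\frac{a_s}{a_f} = \frac{1}{\lambda}, \qquad
\frac{m_s}{m_f} = \lambda^{3}, \qquad
\frac{F_s}{F_f} = \lambda^{2},
\]
% so accelerations measured in the scaled test would be multiplied by \lambda
% to predict full-scale values before computing the Acceleration Severity Index.
```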
Abstract:
From the early 1900s, some psychologists have attempted to establish their discipline as a quantitative science. In using quantitative methods to investigate their theories, they adopted their own special definition of measurement of attributes such as cognitive abilities, as though they were quantities of the type encountered in Newtonian science. Joel Michell has presented a carefully reasoned argument that psychological attributes lack additivity, and therefore cannot be quantities in the same way as the attributes of classical Newtonian physics. In the early decades of the 20th century, quantum theory superseded Newtonian mechanics as the best model of physical reality. This paper gives a brief, critical overview of the evolution of current measurement practices in psychology, and suggests the need for a transition from a Newtonian to a quantum theoretical paradigm for psychological measurement. Finally, a case study is presented that considers the implications of a quantum theoretical model for educational measurement. In particular, it is argued that, since the OECD’s Programme for International Student Assessment (PISA) is predicated on a Newtonian conception of measurement, this may constrain the extent to which it can make accurate comparisons of the achievements of different education systems.