93 results for "Error of measurement"
Abstract:
A battery energy storage system (BESS) is to be incorporated in a wind farm to achieve constant power dispatch. The design of the BESS is based on the forecasted wind speed, and the technique assumes that the distribution of the error between the forecasted and actual wind speeds is Gaussian. It is then shown that although the error between the predicted and actual wind powers can be evaluated, it is non-Gaussian. With the distribution of the error in the predicted wind power known, the capacity of the BESS can be determined in terms of the confidence level in meeting a specified constant power dispatch commitment. Furthermore, a short-term power dispatch strategy is developed which takes into account the state of charge (SOC) of the BESS. The proposed approach is useful in the planning of the wind farm-BESS scheme and in the operational planning of the wind power generating station.
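A minimal numerical sketch of the key observation in this abstract: a Gaussian forecast error in wind speed becomes a non-Gaussian (skewed) error in wind power once it passes through a cubic power curve. All constants below (forecast speed, error spread, power coefficient) are hypothetical illustrations, not values from the paper.

```python
import random

random.seed(42)

V_FORECAST = 10.0  # forecast wind speed, m/s (assumed)
SIGMA = 1.5        # std. dev. of Gaussian speed-forecast error, m/s (assumed)
K = 0.5            # lumped power-curve coefficient (assumed)

def power(v):
    """Simplified turbine power curve: cubic in wind speed."""
    return K * v ** 3

# Sample Gaussian speed errors and map them to power errors.
n = 100_000
power_errors = []
for _ in range(n):
    v_actual = V_FORECAST + random.gauss(0.0, SIGMA)
    power_errors.append(power(v_actual) - power(V_FORECAST))

# Skewness is zero for a Gaussian distribution; here it is clearly
# positive because the cubic transformation distorts the error.
mean = sum(power_errors) / n
var = sum((x - mean) ** 2 for x in power_errors) / n
skew = sum((x - mean) ** 3 for x in power_errors) / n / var ** 1.5
print(f"skewness of power error: {skew:.2f}")
```

The clearly non-zero skewness illustrates why the paper sizes the BESS from the power-error distribution rather than assuming it remains Gaussian.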
Abstract:
Background Multi-attribute utility instruments (MAUIs) are preference-based measures that comprise a health state classification system (HSCS) and a scoring algorithm that assigns a utility value to each health state in the HSCS. When developing a MAUI from a health-related quality of life (HRQOL) questionnaire, a HSCS must first be derived. This typically involves selecting a subset of domains and items, because HRQOL questionnaires usually have too many items to be amenable to the valuation task required to develop the scoring algorithm for a MAUI. Currently, exploratory factor analysis (EFA) followed by Rasch analysis is recommended for deriving a MAUI from a HRQOL measure. Aim To determine whether confirmatory factor analysis (CFA) is more appropriate and efficient than EFA for deriving a HSCS from the European Organisation for Research and Treatment of Cancer's core HRQOL questionnaire, the Quality of Life Questionnaire (QLQ-C30), given its well-established domain structure. Methods QLQ-C30 (Version 3) data were collected from 356 patients receiving palliative radiotherapy for recurrent/metastatic cancer (various primary sites). The dimensional structure of the QLQ-C30 was tested with EFA and CFA, the latter informed by the established QLQ-C30 structure and the views of both patients and clinicians on which items are most relevant. Dimensions determined by EFA or CFA were then subjected to Rasch analysis. Results CFA results generally supported the proposed QLQ-C30 structure (comparative fit index = 0.99, Tucker–Lewis index = 0.99, root mean square error of approximation = 0.04). EFA revealed fewer factors, and some items cross-loaded on multiple factors. Further assessment of dimensionality with Rasch analysis allowed better alignment of the EFA dimensions with those detected by CFA. Conclusion CFA was more appropriate and efficient than EFA in producing clinically interpretable results for the HSCS of a proposed new cancer-specific MAUI.
Our findings suggest that CFA should be recommended generally when deriving a preference-based measure from a HRQOL measure that has an established domain structure.
Abstract:
Background and Aims Research into craving is hampered by a lack of theoretical specification and a plethora of substance-specific measures. This study aimed to develop a generic measure of craving based on elaborated intrusion (EI) theory. Confirmatory factor analysis (CFA) examined whether a generic measure replicated the three-factor structure of the Alcohol Craving Experience (ACE) scale over different consummatory targets and time-frames. Design Twelve studies were pooled for CFA. Targets included alcohol, cigarettes, chocolate and food. Focal periods varied from the present moment to the previous week. Separate analyses were conducted for strength and frequency forms. Setting Nine studies included university students, with single studies drawn from an internet survey, a community sample of smokers and alcohol-dependent out-patients. Participants A heterogeneous sample of 1230 participants. Measurements Adaptations of the ACE questionnaire. Findings Both craving strength [comparative fit index (CFI) = 0.974; root mean square error of approximation (RMSEA) = 0.039, 95% confidence interval (CI) = 0.035–0.044] and frequency (CFI = 0.971, RMSEA = 0.049, 95% CI = 0.044–0.055) gave an acceptable three-factor solution across desired targets that mapped onto the structure of the original ACE (intensity, imagery, intrusiveness), after removing one item, re-allocating another and taking intercorrelated error terms into account. Similar structures were obtained across time-frames and targets. Preliminary validity data on the resulting 10-item Craving Experience Questionnaire (CEQ) for cigarettes and alcohol were strong. Conclusions The Craving Experience Questionnaire (CEQ) is a brief, conceptually grounded and psychometrically sound measure of desires. It demonstrates a consistent factor structure across a range of consummatory targets in both laboratory and clinical contexts.
Abstract:
Protein adsorption at solid-liquid interfaces is critical to many applications, including biomaterials, protein microarrays and lab-on-a-chip devices. Despite this general interest, and a large amount of research over the last half-century, protein adsorption cannot be predicted with engineering-level, design-oriented accuracy. Here we describe a Biomolecular Adsorption Database (BAD), freely available online, which archives published protein adsorption data. Piecewise linear regression with breakpoint, applied to the data in the BAD, suggests that the input variables to protein adsorption, i.e., protein concentration in solution; protein descriptors derived from primary structure (number of residues, global protein hydrophobicity, range of amino acid hydrophobicity, and isoelectric point); surface descriptors (contact angle); and fluid environment descriptors (pH, ionic strength), correlate well with the output variable, the protein concentration on the surface. Furthermore, neural network analysis revealed that the size of the BAD makes it sufficiently representative, with a neural network-based predictive error of 5% or less. Interestingly, a consistently better fit is obtained if the BAD is divided into two separate sub-sets representing protein adsorption on hydrophilic and hydrophobic surfaces, respectively. Based on these findings, selected entries from the BAD have been used to construct neural network-based estimation routines, which predict the amount of adsorbed protein, the thickness of the adsorbed layer and the surface tension of the protein-covered surface. While the BAD is of general interest, the prediction of the thickness and surface tension of the protein-covered layers is of particular relevance to the design of microfluidic devices.
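The "piecewise linear regression with breakpoint" mentioned in this abstract can be sketched as a breakpoint scan: fit a separate least-squares line on each side of every candidate breakpoint and keep the split with the lowest total squared error. The synthetic data and candidate grid below are assumptions for illustration, not the BAD data.

```python
import random

random.seed(0)

def fit_line(xs, ys):
    """Ordinary least squares y = a + b*x; returns (a, b, sse)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    a = my - b * mx
    sse = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    return a, b, sse

def piecewise_fit(xs, ys, candidates):
    """Scan candidate breakpoints; fit one line per side and keep
    the breakpoint minimizing the total squared error."""
    best = None
    for c in candidates:
        left = [(x, y) for x, y in zip(xs, ys) if x <= c]
        right = [(x, y) for x, y in zip(xs, ys) if x > c]
        if len(left) < 3 or len(right) < 3:
            continue  # need enough points on each side
        _, _, sse_l = fit_line([p[0] for p in left], [p[1] for p in left])
        _, _, sse_r = fit_line([p[0] for p in right], [p[1] for p in right])
        total = sse_l + sse_r
        if best is None or total < best[1]:
            best = (c, total)
    return best[0]

# Synthetic data: true breakpoint at x = 5 (slope changes from 2 to 0.5).
xs = [i * 0.1 for i in range(100)]
ys = [(2 * x if x < 5 else 10 + 0.5 * (x - 5)) + random.gauss(0, 0.1)
      for x in xs]
bp = piecewise_fit(xs, ys, candidates=[i * 0.5 for i in range(2, 19)])
print("estimated breakpoint:", bp)
```

The scan recovers the true regime change, mirroring how a breakpoint separates, for example, hydrophilic from hydrophobic adsorption behaviour.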
Abstract:
The trans-activator of transcription (TAT) peptide is regarded as the "gold standard" for cell-penetrating peptides, capable of passively traversing a mammalian membrane into the cytosolic space. This characteristic has been exploited through conjugation of TAT for applications such as drug delivery. However, the process by which TAT achieves membrane penetration remains ambiguous and unresolved. Mechanistic details of TAT peptide action are revealed herein using three complementary methods: quartz crystal microbalance with dissipation (QCM-D), scanning electrochemical microscopy (SECM) and atomic force microscopy (AFM). Combined, these three scales of measurement establish that membrane uptake of the TAT peptide occurs by trans-membrane insertion through a "worm-hole" pore that leads to ion permeability across the membrane layer. AFM data provided nanometre-scale visualisation of membrane puncturing by TAT in a mammalian-mimetic membrane bilayer. The TAT peptide does not show the same specificity towards a bacterial-mimetic membrane; QCM-D and SECM showed that the TAT peptide acts disruptively towards these membranes. This investigation supports the energy-independent uptake of the cationic TAT peptide and provides empirical data that clarify the mechanism by which the TAT peptide achieves its membrane activity. The novel combination of these three biophysical techniques provides valuable insight into the mechanism of TAT peptide translocation, which is essential for improving the cellular delivery of TAT-conjugated cargoes, including therapeutic agents that must target specific intracellular locations.
Abstract:
The mineral chloritoid, collected from the argillite at the bottom of the Yaopo Formation in western Beijing, was characterized by mid-infrared (MIR) and near-infrared (NIR) spectroscopy. The MIR spectra showed all fundamental vibrations, including those of the hydroxyl units and the basic aluminosilicate framework, as well as the influence of iron on the chloritoid structure. The NIR spectrum of the chloritoid showed combination (ν + δ)OH bands of the fundamental stretching (ν) and bending (δ) vibrations. Based on the chemical composition data and the analysis of the MIR and NIR spectra, the crystal structure of chloritoid from the western hills of Beijing, China, can be illustrated. The application of the technique across the entire infrared region is therefore expected to become more routine and more widely useful; both reproducibility of measurement and richness of qualitative information should be considered when selecting a spectroscopic method for unit-cell structural analysis.
Abstract:
Study design Retrospective validation study. Objectives To propose a method to evaluate, from a clinical standpoint, the ability of a finite-element model (FEM) of the trunk to simulate orthotic correction of spinal deformity and to apply it to validate a previously described FEM. Summary of background data Several FEMs of the scoliotic spine have been described in the literature. These models can prove useful in understanding the mechanisms of scoliosis progression and in optimizing its treatment, but their validation has often been lacking or incomplete. Methods Three-dimensional (3D) geometries of 10 patients before and during conservative treatment were reconstructed from biplanar radiographs. The effect of bracing was simulated by modeling displacements induced by the brace pads. Simulated clinical indices (Cobb angle, T1–T12 and T4–T12 kyphosis, L1–L5 lordosis, apical vertebral rotation, torsion, rib hump) and vertebral orientations and positions were compared to those measured in the patients' 3D geometries. Results Errors in clinical indices were of the same order of magnitude as the uncertainties due to 3D reconstruction; for instance, Cobb angle was simulated with a root mean square error of 5.7°, and rib hump error was 5.6°. Vertebral orientation was simulated with a root mean square error of 4.8° and vertebral position with an error of 2.5 mm. Conclusions The methodology proposed here allowed in-depth evaluation of subject-specific simulations, confirming that FEMs of the trunk have the potential to accurately simulate brace action. These promising results provide a basis for ongoing 3D model development, toward the design of more efficient orthoses.
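The root mean square errors quoted in the Results are the standard pointwise RMSE between simulated and measured indices. A minimal sketch (the Cobb angle values below are hypothetical, not the study's data):

```python
import math

def rmse(simulated, measured):
    """Root mean square error between paired simulated and measured values."""
    assert len(simulated) == len(measured)
    return math.sqrt(
        sum((s - m) ** 2 for s, m in zip(simulated, measured))
        / len(simulated)
    )

# Hypothetical Cobb angles (degrees), simulated vs. measured, for a few cases.
sim = [32.0, 25.5, 41.0, 18.0]
mea = [28.0, 30.0, 36.5, 20.0]
print(f"Cobb angle RMSE: {rmse(sim, mea):.1f} deg")
```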
Abstract:
Pattern recognition is a promising approach for the identification of structural damage using measured dynamic data. Much of the research on pattern recognition has employed artificial neural networks (ANNs) and genetic algorithms as systematic ways of matching pattern features. The selection of a damage-sensitive and noise-insensitive pattern feature is important for all structural damage identification methods. Accordingly, a neural network-based damage detection method using frequency response function (FRF) data is presented in this paper. This method can effectively account for uncertainties in the measured data from which training patterns are generated. The proposed method reduces the dimension of the initial FRF data, transforms it into new damage indices, and employs an ANN for damage localization and quantification using the recognized damage patterns. In civil engineering applications, the measurement of dynamic response under field conditions always contains noise components from environmental factors. To evaluate the performance of the proposed strategy with noise-polluted data, noise-contaminated measurements were also introduced to the algorithm. ANNs with optimal architecture give minimum training and testing errors and provide precise damage detection results. To maximize damage detection performance, the optimal ANN architecture is identified by selecting the number of hidden layers and the number of neurons per hidden layer by trial and error. In real testing, the number and location of measurement points used to obtain the structural response are critical for damage detection; therefore, optimal sensor placement to improve damage identification is also investigated. A finite element model of a two-storey framed structure is used to train the neural network.
The trained network performs accurately and gives low errors with both simulated and noise-contaminated data for single and multiple damage cases. As a result, the proposed method can be used for structural health monitoring and damage detection, particularly where the measurement data are very large. Furthermore, the results suggest that an optimal ANN architecture can detect damage occurrence with good accuracy and provide damage quantification with reasonable accuracy under varying levels of damage.
Abstract:
OBJECTIVES: There is controversy in the literature regarding the effect of inflammatory bowel disease (IBD) on resting energy expenditure (REE). In many cases this may have resulted from inappropriate adjustment of REE measurements to account for differences in body composition. This article considers how to appropriately adjust measurements of REE for differences in body composition between individuals with IBD. PATIENTS AND METHODS: Body composition, assessed via total body potassium to yield a measure of body cell mass (BCM), and REE were measured in 41 children with Crohn disease and ulcerative colitis at the Royal Children's Hospital, Brisbane, Australia. Log-log regression was used to determine the power to which BCM should be raised to appropriately adjust REE for differences in body composition between children. RESULTS: The appropriate power for adjusting by BCM was found to be 0.49, with a standard error of 0.10. CONCLUSIONS: Clearly, there is a need to adjust for differences in body composition, or at the very least body weight, in metabolic studies in children with IBD. We suggest that raising BCM to the power of 0.5 is both a numerically convenient and statistically valid way of achieving this aim. Where a measurement of BCM is not available, raising body weight to the power of 0.5 remains appropriate. The important issue of whether REE is changed in cases of IBD can then be appropriately addressed. © 2007 Lippincott Williams & Wilkins, Inc.
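The log-log regression step can be sketched as fitting log(REE) = log(a) + p·log(BCM) by ordinary least squares, where the slope p is the power reported (0.49 in the study). The synthetic BCM/REE data below, generated with a true exponent of 0.5 and an arbitrary coefficient, are assumptions for illustration, not the study's measurements.

```python
import math
import random

random.seed(7)

# Hypothetical data: REE proportional to BCM^0.5 with multiplicative noise.
# The exponent 0.5 and coefficient 60 are assumed for illustration.
bcm = [random.uniform(10, 40) for _ in range(200)]          # body cell mass, kg
ree = [60.0 * b ** 0.5 * math.exp(random.gauss(0, 0.05)) for b in bcm]

# Log-log regression: the slope of log(REE) on log(BCM) is the power
# to which BCM should be raised when adjusting REE.
lx = [math.log(b) for b in bcm]
ly = [math.log(r) for r in ree]
mx = sum(lx) / len(lx)
my = sum(ly) / len(ly)
p = (sum((x - mx) * (y - my) for x, y in zip(lx, ly))
     / sum((x - mx) ** 2 for x in lx))
print(f"estimated power: {p:.2f}")
```

The recovered slope sits near the true exponent, showing how a power such as 0.49 ± 0.10 falls out of an ordinary least-squares fit on the log scale.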
Abstract:
Estimating the economic burden of injuries is important for setting priorities, allocating scarce health resources and planning cost-effective prevention activities. As a metric of burden, costs account for multiple injury consequences—death, severity, disability, body region, nature of injury—in a single unit of measurement. In a 1989 landmark report to the US Congress, Rice et al. [1] estimated the lifetime costs of injuries in the USA in 1985. By 2000, the epidemiology and burden of injuries had changed enough that the US Congress mandated an update, resulting in a book on the incidence and economic burden of injury in the USA [2]. To make these findings more accessible to the larger community of scientists and practitioners, and to provide a template for conducting the same economic burden analyses in other countries and settings, a summary [3] was published in Injury Prevention. Corso et al. reported that, between 1985 and 2000, injury rates declined roughly 15%. The estimated lifetime cost of these injuries declined 20%, totalling US$406 billion, including US$80 billion in medical costs and US$326 billion in lost productivity. While incidence reflects problem size, the relative burden of injury is better expressed using costs.
Abstract:
Quantifying the stiffness properties of soft tissues is essential for the diagnosis of many cardiovascular diseases such as atherosclerosis. In these pathologies it is widely agreed that arterial wall stiffness is an indicator of vulnerability. The present paper focuses on the carotid artery and proposes a new inversion methodology for deriving the stiffness properties of the wall from cine-MRI (magnetic resonance imaging) data. We address this problem by setting up a cost function defined as the distance between the modeled pixel signals and the measured ones. Minimizing this cost function yields the unknown stiffness properties of both the arterial wall and the surrounding tissues. The sensitivity of the identified properties to various sources of uncertainty is studied. The method is validated on a rubber phantom: the elastic modulus identified using the developed methodology lies within a mean error of 9.6%. It is then applied to two young healthy subjects as a proof of practical feasibility, with identified values of 625 kPa and 587 kPa for one carotid artery of each subject.
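The inversion idea, minimizing a cost function between modeled and measured signals, can be sketched with a deliberately simplified forward model: a thin-walled tube whose radial wall displacement under pressure is u = p·r²/(E·h), and a grid search over candidate moduli. All geometry, pressures and the forward model itself are assumptions for illustration, not the paper's cine-MRI formulation.

```python
import random

random.seed(1)

R = 3.0e-3      # lumen radius, m (assumed)
H = 0.6e-3      # wall thickness, m (assumed)
E_TRUE = 600e3  # "true" wall modulus, Pa (assumed)

def displacement(pressure, modulus):
    """Simplified thin-wall forward model: radial displacement."""
    return pressure * R ** 2 / (modulus * H)

# Synthetic "measurements": forward model at E_TRUE plus 2% noise.
pressures = [p * 1000.0 for p in range(4, 14)]  # 4..13 kPa
measured = [displacement(p, E_TRUE) * (1 + random.gauss(0, 0.02))
            for p in pressures]

def cost(modulus):
    """Sum of squared distances between modeled and measured displacements."""
    return sum((displacement(p, modulus) - m) ** 2
               for p, m in zip(pressures, measured))

# Grid search over candidate moduli, 300..900 kPa in 5 kPa steps.
candidates = [e * 1e3 for e in range(300, 901, 5)]
e_best = min(candidates, key=cost)
print(f"identified modulus: {e_best / 1e3:.0f} kPa")
```

Minimizing the cost recovers a modulus close to the generating value, which is the same logic the paper applies with a far richer forward model and pixel-level signals.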
Abstract:
The shape of tracheal cartilage has been widely treated as symmetric in analytical and numerical models. However, according to both histological and in vivo medical images, tracheal cartilage has a highly asymmetric shape. Treating the cartilage as a symmetric structure induces bias in the calculated collapse behavior, as well as in compliance and muscular stress; however, this has rarely been discussed. In this paper, tracheal collapse is modeled with the asymmetric shape taken into account. For comparison, a symmetric shape, reconstructed from half of the cartilage, is also considered. Cross-sectional area, airway compliance and stress in the muscular membrane determined from the asymmetric and symmetric shapes are compared. The results indicate that the symmetric assumption introduces a small error, around 5%, in predicting cross-sectional area under loading conditions. The relative error in compliance exceeds 10%, and when the pressure is close to zero the error can exceed 50%. The symmetric model also differs significantly in predicting stress in the muscular membrane, either under- or over-estimating it. In conclusion, tracheal cartilage should not be treated as a symmetric structure. The results obtained in this study are helpful in evaluating the error induced by the geometric assumption.
Abstract:
We consider estimating the total load from frequent flow data but less frequent concentration data. There are numerous load estimation methods available, some of which are captured in various online tools. However, most estimators are subject to large statistical biases, and their associated uncertainties are often not reported. This makes interpretation difficult, and makes the estimation of trends or the determination of optimal sampling regimes impossible to assess. In this paper, we first propose two indices for measuring the extent of sampling bias, and then provide steps for obtaining reliable load estimates that minimize the biases and make use of informative predictive variables. The key step in this approach is the development of an appropriate predictive model for concentration. This is achieved using a generalized rating-curve approach with additional predictors that capture unique features in the flow data, such as the first flush, the location of the event on the hydrograph (e.g. rise or fall) and the discounted flow. The latter may be thought of as a measure of constituent exhaustion occurring during flood events. Adding this information can significantly improve the predictability of concentration, and ultimately the precision with which the pollutant load is estimated. We also provide a measure of the standard error of the load estimate which incorporates model, spatial and/or temporal errors. The method also has the capacity to incorporate measurement error incurred through the sampling of flow. We illustrate this approach for two rivers delivering to the Great Barrier Reef, Queensland, Australia. One data set, from the Burdekin River, consists of total suspended sediment (TSS), nitrogen oxide (NOx) and gauged flow for 1997; the other is from the Tully River, for the period July 2000 to June 2008.
For NOx in the Burdekin, the new estimates are very similar to the ratio estimates even when there is no relationship between concentration and flow. For the Tully data set, however, incorporating the additional predictive variables, namely the discounted flow and the flow phases (rising or receding), substantially improved the model fit, and thus the certainty with which the load is estimated.
Abstract:
There are numerous load estimation methods available, some of which are captured in various online tools. However, most estimators are subject to large statistical biases, and their associated uncertainties are often not reported. This makes interpretation difficult, and makes the estimation of trends or the determination of optimal sampling regimes impossible to assess. In this paper, we first propose two indices for measuring the extent of sampling bias, and then provide steps for obtaining reliable load estimates by minimizing the biases and making use of possible predictive variables. The load estimation procedure can be summarized in four steps:
- (i) output the flow rates at regular time intervals (e.g. 10 minutes) using a time series model that captures all the peak flows;
- (ii) output the predicted flow rates, as in (i), at the concentration sampling times if the corresponding flow rates were not collected;
- (iii) establish a predictive model for the concentration data that incorporates all available predictor variables, and output the predicted concentrations at the regular time intervals as in (i); and
- (iv) sum the products of the predicted flow and the predicted concentration over the regular time intervals to obtain an estimate of the load.
The key step in this approach is the development of an appropriate predictive model for concentration. This is achieved using a generalized regression (rating-curve) approach with additional predictors that capture unique features in the flow data, namely the first flush, the location of the event on the hydrograph (e.g. rise or fall) and the cumulative discounted flow. The latter may be thought of as a measure of constituent exhaustion occurring during flood events. The model also has the capacity to accommodate autocorrelation in model errors resulting from intensive sampling during floods.
Incorporating this additional information can significantly improve the predictability of concentration, and ultimately the precision with which the pollutant load is estimated. We also provide a measure of the standard error of the load estimate which incorporates model, spatial and/or temporal errors. The method also has the capacity to incorporate measurement error incurred through the sampling of flow. We illustrate this approach using concentrations of total suspended sediment (TSS) and nitrogen oxide (NOx) and gauged flow data from the Burdekin River, a catchment delivering to the Great Barrier Reef. The sampling biases for NOx concentrations range from 2 to 10 times, indicating severe bias. As expected, the traditional average and extrapolation methods produce much higher estimates than those obtained when sampling bias is taken into account.
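Step (iv) of the procedure above can be sketched as a discrete sum of predicted flow times predicted concentration over the regular time grid. The rating-curve coefficients and the synthetic hydrograph below are hypothetical, chosen only to make the arithmetic concrete.

```python
DT = 600.0  # regular time step, s (10 minutes)

def predict_concentration(q, a=0.05, b=0.8):
    """Hypothetical rating curve: concentration (kg/m^3) from flow (m^3/s).
    Coefficients a and b are assumed, not fitted to real data."""
    return a * q ** b

def total_load(flows, dt=DT):
    """Step (iv): load = sum over intervals of predicted flow x
    predicted concentration x interval length (kg)."""
    return sum(q * predict_concentration(q) * dt for q in flows)

# A small synthetic hydrograph: baseflow with one flood peak.
flows = [5, 5, 8, 20, 60, 45, 25, 12, 7, 5]  # m^3/s at 10-min intervals
print(f"estimated event load: {total_load(flows):.0f} kg")
```

In the paper's full method the concentration predictions would come from the generalized regression model with first-flush, hydrograph-phase and discounted-flow predictors rather than this bare power law.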
Abstract:
Background Previous studies (mostly questionnaire-based, in children) suggest that outdoor activity is protective against myopia. There are few studies in young adults investigating the impact of simply being outdoors versus performing physical activity. The aim was to study the relationship between the refractive error of young adults and their physical activity patterns. Methods Twenty-seven university students, aged 18 to 25 years, wore a pedometer (Omron HJ720ITE) for seven days during both the semester and holiday periods. They simultaneously recorded in a logbook the type of activity performed, its duration, the number of steps taken (from the pedometer) and their location (indoors/outdoors). Mean spherical refractive error was used to divide participants into three groups (emmetropes: +1.00 to -0.50 D; low myopes: -0.62 to -3.00 D; higher myopes: -3.12 D or greater myopia). Results There were no significant differences between the refractive groups during the semester or holiday periods; the average daily times spent outdoors, the duration of physical activity, the ratio of physical activity performed outdoors to indoors and the amount of near work performed were similar. The peak exercise intensity was similar across all groups: approximately 100 steps per minute, a brisk walk. Up to one-third of all physical activity was performed outdoors. There were some significant differences between activities performed during semester and holiday times. For example, low myopes spent significantly less time outside (49 ± 47 versus 74 ± 41 minutes, p = 0.005) and performed less physical activity (6,388 ± 1,747 versus 6,779 ± 2,746 steps per day; p = 0.03) during the holidays compared to during semester. Conclusions The fact that all groups had similar low exercise intensity but many were not myopic suggests that physical activity levels are not critical. There were differences in the activity patterns of low myopes during semester and holiday periods.
This study highlights the need for a larger longitudinal-based study with particular emphasis on how discretionary time is spent.