75 results for Classical measurement error model
Abstract:
This work presents an automated system for the measurement of form errors of mechanical components using an industrial robot. A three-probe error separation technique was employed to allow decoupling between the measured form error and errors introduced by the robotic system. A mathematical model of the measuring system was developed to provide inspection results by means of the solution of a system of linear equations. A new self-calibration procedure, which employs redundant data from several runs, minimizes the influence of the probes' zero-adjustment on the final result. Experimental tests applied to the measurement of straightness errors of mechanical components were carried out and demonstrated the effectiveness of the employed methodology. (C) 2007 Elsevier Ltd. All rights reserved.
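The abstract does not reproduce the model's equations; as a minimal sketch of the final inspection step, the probe readings can be assembled into a linear system A x = b (x stacking the unknown form-error and robot-error terms) and solved by elimination. The matrices below are illustrative placeholders, not the paper's actual model:

```python
def solve_linear_system(a, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    # Build the augmented matrix [A | b]
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        # Partial pivoting: bring the largest remaining entry to the diagonal
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    # Back substitution
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x
```

In the paper's setting, the rows of A would encode how each of the three probes mixes the part's form error with the robot's motion errors; any dense-solver routine of this shape would serve.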
Abstract:
This work extends a previously presented refined sandwich beam finite element (FE) model to vibration analysis, including dynamic piezoelectric actuation and sensing. The mechanical model is a refinement of the classical sandwich theory (CST), for which the core is modelled with a third-order shear deformation theory (TSDT). The FE model is developed considering, through the beam length, electrically: constant voltage for piezoelectric layers and quadratic third-order variable of the electric potential in the core, while mechanically: linear axial displacement, quadratic bending rotation of the core and cubic transverse displacement of the sandwich beam. Despite the refinement of mechanical and electric behaviours of the piezoelectric core, the model leads to the same number of degrees of freedom as the previous CST one due to a two-step static condensation of the internal dof (bending rotation and core electric potential third-order variable). The results obtained with the proposed FE model are compared to available numerical, analytical and experimental ones. Results confirm that the TSDT and the induced cubic electric potential yield an extra stiffness to the sandwich beam. (C) 2007 Elsevier Ltd. All rights reserved.
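Static condensation eliminates internal dof via K* = K_rr - K_ri K_ii^-1 K_ir (r = retained, i = internal). A scalar-block sketch of this identity, with illustrative numbers rather than the paper's element matrices:

```python
def static_condensation(k_rr, k_ri, k_ir, k_ii):
    """Condensed stiffness after eliminating one internal dof (scalar blocks):
    K* = K_rr - K_ri * K_ii^-1 * K_ir."""
    return k_rr - k_ri * k_ir / k_ii

# Eliminating the internal dof of [[10, 2], [2, 4]] leaves K* = 10 - 2*2/4 = 9,
# so a force f on the retained dof gives u_r = f / 9, exactly as in the full system.
```

The paper applies this twice (once for the bending rotation, once for the core electric potential variable), which is why the refined element keeps the dof count of the CST element.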
Abstract:
We describe a one-time signature scheme based on the hardness of the syndrome decoding problem, and prove it secure in the random oracle model. Our proposal can be instantiated on general linear error correcting codes, rather than restricted families like alternant codes for which a decoding trapdoor is known to exist. (C) 2010 Elsevier Inc. All rights reserved.
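The scheme itself is not reproduced in the abstract; the primitive it builds on is the syndrome map s = H e (mod 2) of an error pattern e against a parity-check matrix H. A minimal sketch (the Hamming(7,4) matrix below is a standalone illustration, not a code the scheme would actually use):

```python
def syndrome(h, e):
    """Syndrome s = H e (mod 2): one parity bit per row of the parity-check matrix."""
    return [sum(row[j] & e[j] for j in range(len(e))) % 2 for row in h]

# Parity-check matrix of the Hamming(7,4) code: column j is the binary form of j+1
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
```

Syndrome decoding asks for a low-weight e matching a given s; the scheme's security rests on that problem being hard for general (trapdoor-free) codes.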
Abstract:
Reconciliation can be divided into stages, each stage representing the performance of a mining operation, such as long-term estimation, short-term estimation, planning, mining and mineral processing. The gold industry includes another stage, the budget, when the company informs the financial market of its annual production forecast. The division of reconciliation into stages increases the reliability of the annual budget informed by the mining companies, while also detecting and correcting the critical steps responsible for the overall estimation error through the optimization of sampling protocols and equipment. This paper develops and validates a new reconciliation model for the gold industry, based on correct sampling practices and the subdivision of reconciliation into stages, aiming for better grade estimates and more efficient control of the mining industry's processes, from resource estimation to final production.
Resumo:
Real-time viscosity measurement remains a necessity for highly automated industry. To resolve this problem, many studies have been carried out using an ultrasonic shear wave reflectance method. This method is based on the determination of the complex reflection coefficient`s magnitude and phase at the solid-liquid interface. Although magnitude is a stable quantity and its measurement is relatively simple and precise, phase measurement is a difficult task because of strong temperature dependence. A simplified method that uses only the magnitude of the reflection coefficient and that is valid under the Newtonian regimen has been proposed by some authors, but the obtained viscosity values do not match conventional viscometry measurements. In this work, a mode conversion measurement cell was used to measure glycerin viscosity as a function of temperature (15 to 25 degrees C) and corn syrup-water mixtures as a function of concentration (70 to 100 wt% of corn syrup). Tests were carried out at 1 MHz. A novel signal processing technique that calculates the reflection coefficient magnitude in a frequency band, instead of a single frequency, was studied. The effects of the bandwidth on magnitude and viscosity were analyzed and the results were compared with the values predicted by the Newtonian liquid model. The frequency band technique improved the magnitude results. The obtained viscosity values came close to those measured by the rotational viscometer with percentage errors up to 14%, whereas errors up to 96% were found for the single frequency method.
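Under the Newtonian assumption, the magnitude-only method reduces to inverting a closed-form relation between |R| and the liquid's shear impedance Z = x(1 + j), x = sqrt(pi f rho eta). A sketch of that inversion, with illustrative material values rather than the paper's calibration data:

```python
import math

def reflection_magnitude(eta, rho_liq, z_solid, freq):
    """|R| at the solid-liquid interface for a Newtonian liquid of viscosity eta."""
    x = math.sqrt(math.pi * freq * rho_liq * eta)  # Re and Im of the liquid impedance
    return math.sqrt(((x - z_solid) ** 2 + x ** 2) /
                     ((x + z_solid) ** 2 + x ** 2))

def viscosity_from_magnitude(r_mag, rho_liq, z_solid, freq):
    """Invert |R| for eta: |R|^2 gives a quadratic in x; take the physical
    (smaller) root, evaluated in the numerically stable product form."""
    r2 = r_mag ** 2
    a = 2.0 * (1.0 - r2)
    b = -2.0 * z_solid * (1.0 + r2)
    c = z_solid ** 2 * (1.0 - r2)
    x_large = (-b + math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
    x = c / (a * x_large)  # smaller root, avoiding cancellation
    return x ** 2 / (math.pi * freq * rho_liq)
```

The band technique in the paper averages |R| over a frequency band before this inversion step, which stabilises the magnitude estimate; the round trip forward/inverse above is exact for a single frequency.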
Abstract:
This work presents the implementation of the ultrasonic shear reflectance method for viscosity measurement of Newtonian liquids using wave mode conversion from longitudinal to shear waves and vice versa. The method is based on the measurement of the complex reflection coefficient (magnitude and phase) at a solid-liquid interface. The implemented measurement cell is composed of an ultrasonic transducer, a water buffer, an aluminum prism, a PMMA buffer rod, and a sample chamber. Viscosity measurements were made in the range from 1 to 3.5 MHz for olive oil and for automotive oils (SAE 40, 90, and 250) at 15 and 22.5 degrees C, respectively. Moreover, olive oil and corn oil measurements were conducted in the range from 15 to 30 degrees C at 3.5 and 2.25 MHz, respectively. The ultrasonic measurements, in the case of the less viscous liquids, agree with the results provided by a rotational viscometer, showing Newtonian behavior. In the case of the more viscous liquids, a significant difference was obtained, showing a clear non-Newtonian behavior that cannot be described by the Kelvin-Voigt model.
Abstract:
Model predictive control (MPC) is usually implemented as a control strategy where the system outputs are controlled within specified zones, instead of fixed set points. One strategy to implement the zone control is by means of the selection of different weights for the output error in the control cost function. A disadvantage of this approach is that closed-loop stability cannot be guaranteed, as a different linear controller may be activated at each time step. A way to implement a stable zone control is by means of the use of an infinite horizon cost in which the set point is an additional variable of the control problem. In this case, the set point is restricted to remain inside the output zone and an appropriate output slack variable is included in the optimisation problem to assure the recursive feasibility of the control optimisation problem. Following this approach, a robust MPC is developed for the case of multi-model uncertainty of open-loop stable systems. The controller is devoted to maintain the outputs within their corresponding feasible zone, while reaching the desired optimal input target. Simulation of a process of the oil refining industry illustrates the performance of the proposed strategy.
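In the scalar case, the set-point-plus-slack mechanism reduces to a projection of the predicted output onto the zone; the minimal sketch below is illustrative only (the actual controller optimises the set point, slack and inputs jointly over an infinite horizon):

```python
def zone_setpoint(y_pred, zone_lo, zone_hi):
    """Project a predicted output onto the control zone.
    The set point is restricted to the zone; the slack absorbs any
    unavoidable violation, keeping the optimisation problem feasible."""
    sp = min(max(y_pred, zone_lo), zone_hi)
    slack = y_pred - sp  # zero whenever the prediction already lies in the zone
    return sp, slack
```

The slack being free (rather than the output constraint being hard) is what guarantees recursive feasibility even when disturbances push the output outside its zone.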
Abstract:
In this paper the continuous Verhulst dynamic model is used to synthesize a new distributed power control algorithm (DPCA) for use in direct sequence code division multiple access (DS-CDMA) systems. The Verhulst model was initially designed to describe the population growth of biological species under food and physical space restrictions. The discretization of the corresponding differential equation is accomplished via the Euler numeric integration (ENI) method. Analytical convergence conditions for the proposed DPCA are also established. Several properties of the proposed recursive algorithm, such as Euclidean distance from the optimum vector after convergence, convergence speed, normalized mean squared error (NSE), average power consumption per user, performance under dynamic channels, and implementation complexity aspects, are analyzed through simulations. The simulation results are compared with two other DPCAs: the classic algorithm derived by Foschini and Miljanic and the sigmoidal algorithm of Uykan and Koivo. Under estimation error conditions, the proposed DPCA exhibits smaller discrepancy from the optimum power vector solution and better convergence (under fixed and adaptive convergence factor) than the classic and sigmoidal DPCAs. (C) 2010 Elsevier GmbH. All rights reserved.
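A plausible sketch of the idea (not the paper's exact recursion): discretising a logistic-type equation with an Euler step, with each user's SINR in the role of the population, drives the powers toward the target SINR gamma*. The channel gains, noise and step size below are illustrative assumptions:

```python
def sinr(p, g, noise):
    """SINR of user i: g[i][i]*p[i] / (interference from other users + noise)."""
    n = len(p)
    return [g[i][i] * p[i] /
            (sum(g[i][j] * p[j] for j in range(n) if j != i) + noise)
            for i in range(n)]

def verhulst_step(p, g, noise, gamma_t, alpha=0.1):
    """One Euler step of the logistic-type update
    p_i <- p_i * (1 + alpha * (1 - gamma_i / gamma*)).
    Power grows while the SINR is below target and shrinks above it."""
    gammas = sinr(p, g, noise)
    return [pi * (1.0 + alpha * (1.0 - gi / gamma_t))
            for pi, gi in zip(p, gammas)]
```

The fixed point of this recursion is exactly the power vector where every user meets gamma*, matching the equilibrium of the continuous Verhulst equation; alpha plays the role of the convergence factor discussed in the abstract.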
Abstract:
Aims: We aimed to evaluate if the co-localisation of calcium and necrosis in intravascular ultrasound virtual histology (IVUS-VH) is due to artefact, and whether this effect can be mathematically estimated. Methods and results: We hypothesised that, in case calcium induces an artefactual coding of necrosis, any addition in calcium content would generate an artificial increment in the necrotic tissue. Stent struts were used to simulate the "added calcium". The change in the amount and in the spatial localisation of necrotic tissue was evaluated before and after stenting (n=17 coronary lesions) by means of specially developed imaging software. The area of "calcium" increased from a median of 0.04 mm(2) at baseline to 0.76 mm(2) after stenting (p<0.01). In parallel, the median necrotic content increased from 0.19 mm(2) to 0.59 mm(2) (p<0.01). The "added" calcium strongly predicted a proportional increase in necrosis-coded tissue in the areas surrounding the calcium-like spots (model R(2)=0.70; p<0.001). Conclusions: Artificial addition of calcium-like elements to the atherosclerotic plaque led to an increase in necrotic tissue in virtual histology that is probably artefactual. The overestimation of necrotic tissue by calcium strictly followed a linear pattern, indicating that it may be amenable to mathematical correction.
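The linear pattern the authors report is the kind of relationship an ordinary least-squares fit captures; a self-contained sketch (the numbers in the test are fabricated placeholders, not the study's measurements):

```python
def linear_fit(x, y):
    """Ordinary least squares for y = a + b*x; returns (a, b, r_squared)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx               # slope
    a = my - b * mx             # intercept
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return a, b, 1.0 - ss_res / ss_tot
```

Here x would be the added calcium-like area per lesion and y the increment in necrosis-coded tissue; an R(2) of 0.70, as in the study, would indicate that a linear correction could remove most of the artefact.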
Abstract:
The water diffusion attributable to concentration gradients is among the main mechanisms of water transport into the asphalt mixture. The transport of small molecules through polymeric materials is a very complex process, and no single model provides a complete explanation because of the small molecule's complex internal structure. The objective of this study was to experimentally determine the diffusion of water in different fine aggregate mixtures (FAM) using simple gravimetric sorption measurements. For the purposes of measuring the diffusivity of water, FAMs were regarded as a representative homogenous volume of the hot-mix asphalt (HMA). Fick's second law is generally used to model diffusion driven by concentration gradients in different materials. The concept of the dual mode diffusion was investigated for FAM cylindrical samples. Although FAM samples have three components (asphalt binder, aggregates, and air voids), the dual mode was an attempt to represent the diffusion process by only two stages that occur simultaneously: (1) the water molecules are completely mobile, and (2) the water molecules are partially mobile. The combination of three asphalt binders and two aggregates selected from the Strategic Highway Research Program's (SHRP) Materials Reference Library (MRL) were evaluated at room temperature [23.9 degrees C (75 degrees F)] and at 37.8 degrees C (100 degrees F). The results show that moisture uptake and diffusivity of water through FAM is dependent on the type of aggregate and asphalt binder. At room temperature, the rank order of diffusivity and moisture uptake for the three binders was the same regardless of the type of aggregate. However, this rank order changed at higher temperatures, suggesting that at elevated temperatures different binders may be undergoing a different level of change in the free volume. DOI: 10.1061/(ASCE)MT.1943-5533.0000190. (C) 2011 American Society of Civil Engineers.
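Fick's second law for sorption into a plane sheet has a classical series solution for the fractional mass uptake; a sketch below, with the dual-mode behaviour approximated as a weighted sum of a mobile (fast) and a partially mobile (slow) Fickian process. That two-term combination is our assumption for illustration, not the paper's exact formulation:

```python
import math

def fickian_uptake(d, t, half_thickness, terms=200):
    """Fractional mass uptake M_t / M_inf for a plane sheet of half-thickness l:
    M_t/M_inf = 1 - (8/pi^2) * sum_n exp(-(2n+1)^2 pi^2 D t / (4 l^2)) / (2n+1)^2."""
    if t <= 0:
        return 0.0
    s = 0.0
    for n in range(terms):
        k = 2 * n + 1
        s += math.exp(-k * k * math.pi ** 2 * d * t / (4.0 * half_thickness ** 2)) / (k * k)
    return 1.0 - (8.0 / math.pi ** 2) * s

def dual_mode_uptake(d_fast, d_slow, frac_fast, t, half_thickness):
    """Illustrative dual-mode uptake: weighted sum of two simultaneous Fickian stages."""
    return (frac_fast * fickian_uptake(d_fast, t, half_thickness) +
            (1.0 - frac_fast) * fickian_uptake(d_slow, t, half_thickness))
```

Fitting such a curve to the gravimetric sorption data (mass gain vs. time) is how a single diffusivity, or the fast/slow pair, would be extracted for each FAM.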
Abstract:
The application of airborne laser scanning (ALS) technologies in forest inventories has shown great potential to improve the efficiency of forest planning activities. Precise estimates, fast assessment and relatively low complexity can explain the good results in terms of efficiency. The evolution of GPS and inertial measurement technologies, as well as the observed lower assessment costs when these technologies are applied to large scale studies, can explain the increasing dissemination of ALS technologies. The observed good quality of results can be expressed by estimates of volumes and basal area with estimated error below the level of 8.4%, depending on the size of sampled area, the quantity of laser pulses per square meter and the number of control plots. This paper analyzes the potential of an ALS assessment to produce certain forest inventory statistics in plantations of cloned Eucalyptus spp with precision equal or superior to conventional methods. The statistics of interest in this case were: volume, basal area, mean height and dominant trees mean height. The ALS flight for data assessment covered two strips of approximately 2 by 20 km, in which clouds of points were sampled in circular plots with a radius of 13 m. Plots were sampled in different parts of the strips to cover different stand ages. From the clouds of points generated by the ALS assessment, the following statistics were calculated: overall height mean, standard error, five percentiles (the heights below which 10%, 30%, 50%, 70% and 90% of the ALS points above ground level in the cloud are found), and the density of points above ground level in each percentile. The ALS statistics were used in regression models to estimate mean diameter, mean height, mean height of dominant trees, basal area and volume. Conventional forest inventory sample plots provided real data.
For volume, an exploratory assessment involving different combinations of ALS statistics allowed for the definition of the most promising relationships and fitting tests based on well-known forest biometric models. The models based on ALS statistics that produced the best results involved: the 30% percentile to estimate mean diameter (R(2)=0.88 and MQE%=0.0004); the 10% and 90% percentiles to estimate mean height (R(2)=0.94 and MQE%=0.0003); the 90% percentile to estimate dominant height (R(2)=0.96 and MQE%=0.0003); the 10% percentile and mean height of ALS points to estimate basal area (R(2)=0.92 and MQE%=0.0016); and, to estimate volume, age and the 30% and 90% percentiles (R(2)=0.95 and MQE%=0.002). Among the tested forest biometric models, the best fits were provided by the modified Schumacher using age and the 90% percentile, modified Clutter using age, mean height of ALS points and the 70% percentile, and modified Buckman using age, mean height of ALS points and the 10% percentile.
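The height percentiles used as predictors can be computed directly from the cloud of above-ground returns; a minimal sketch using a linear-interpolation definition, which may differ from the percentile convention the authors actually used:

```python
def height_percentile(heights, pct):
    """Height below which pct% of the above-ground points lie
    (sorted order, linear interpolation between neighbouring points)."""
    hs = sorted(heights)
    if len(hs) == 1:
        return hs[0]
    pos = (pct / 100.0) * (len(hs) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(hs) - 1)
    return hs[lo] + (pos - lo) * (hs[hi] - hs[lo])
```

For each 13 m plot, the 10%, 30%, 50%, 70% and 90% values of this function over the return heights would supply the regressors fed into the Schumacher, Clutter and Buckman model variants.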
Abstract:
The DSSAT/CANEGRO model was parameterized and its predictions evaluated using data from five sugarcane (Saccharum spp.) experiments conducted in southern Brazil. The data used are from two of the most important Brazilian cultivars. Some parameters whose values were either directly measured or considered to be well known were not adjusted. Ten of the 20 parameters were optimized using a Generalized Likelihood Uncertainty Estimation (GLUE) algorithm using the leave-one-out cross-validation technique. Model predictions were evaluated using measured data of leaf area index (LAI), stalk and aerial dry mass, sucrose content, and soil water content, using bias, root mean squared error (RMSE), modeling efficiency (Eff), correlation coefficient, and agreement index. The Decision Support System for Agrotechnology Transfer (DSSAT)/CANEGRO model simulated the sugarcane crop in southern Brazil well, using the parameterization reported here. The soil water content predictions were better for rainfed (mean RMSE = 0.122 mm) than for irrigated treatment (mean RMSE = 0.214 mm). Predictions were best for aerial dry mass (Eff = 0.850), followed by stalk dry mass (Eff = 0.765) and then sucrose mass (Eff = 0.170). Number of green leaves showed the worst fit (Eff = -2.300). The cross-validation technique permits using multiple datasets that would have limited use if used independently because of the heterogeneity of measures and measurement strategies.
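The evaluation statistics named in the abstract are standard; below are sketches of bias, RMSE and modeling efficiency, assuming "Eff" denotes the Nash-Sutcliffe form (1 for a perfect model, below 0 worse than predicting the observed mean):

```python
def bias(obs, sim):
    """Mean signed difference between simulated and observed values."""
    return sum(s - o for o, s in zip(obs, sim)) / len(obs)

def rmse(obs, sim):
    """Root mean squared error of the simulation."""
    return (sum((s - o) ** 2 for o, s in zip(obs, sim)) / len(obs)) ** 0.5

def efficiency(obs, sim):
    """Nash-Sutcliffe modeling efficiency: 1 - SS_res / SS_obs."""
    mo = sum(obs) / len(obs)
    ss_res = sum((s - o) ** 2 for o, s in zip(obs, sim))
    ss_tot = sum((o - mo) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot
```

On this scale the reported Eff = -2.300 for green leaf number means the model did markedly worse than simply predicting the average observed value.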
Abstract:
Protein engineering is a powerful tool, which correlates protein structure with specific functions, both in applied biotechnology and in basic research. Here, we present a practical teaching course for engineering the green fluorescent protein (GFP) from Aequorea victoria by a random mutagenesis strategy using error-prone polymerase chain reaction. Screening of bacterial colonies transformed with random mutant libraries identified GFP variants with increased fluorescence yields. Mapping the three-dimensional structure of these mutants demonstrated how alterations in structural features such as the environment around the fluorophore and properties of the protein surface can influence functional properties such as the intensity of fluorescence and protein solubility.
Abstract:
Background Meta-analysis is increasingly being employed as a screening procedure in large-scale association studies to select promising variants for follow-up studies. However, standard methods for meta-analysis require the assumption of an underlying genetic model, which is typically unknown a priori. This drawback can introduce model misspecifications, causing power to be suboptimal, or the evaluation of multiple genetic models, which augments the number of false-positive associations, ultimately leading to waste of resources with fruitless replication studies. We used simulated meta-analyses of large genetic association studies to investigate naive strategies of genetic model specification to optimize screenings of genome-wide meta-analysis signals for further replication. Methods Different methods, meta-analytical models and strategies were compared in terms of power and type-I error. Simulations were carried out for a binary trait in a wide range of true genetic models, genome-wide thresholds, minor allele frequencies (MAFs), odds ratios and between-study heterogeneity (tau(2)). Results Among the investigated strategies, a simple Bonferroni-corrected approach that fits both multiplicative and recessive models was found to be optimal in most examined scenarios, reducing the likelihood of false discoveries and enhancing power in scenarios with small MAFs either in the presence or absence of heterogeneity. Nonetheless, this strategy is sensitive to tau(2) whenever the susceptibility allele is common (MAF >= 30%), resulting in an increased number of false-positive associations compared with an analysis that considers only the multiplicative model. Conclusion Invoking a simple Bonferroni adjustment and testing for both multiplicative and recessive models is fast and an optimal strategy in large meta-analysis-based screenings. However, care must be taken when examined variants are common, where specification of a multiplicative model alone may be preferable.
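The recommended screening rule is simple to state in code; a sketch, with the conventional genome-wide significance level used as an illustrative default threshold:

```python
def screen_variant(p_multiplicative, p_recessive, alpha=5e-8):
    """Bonferroni-corrected dual-model screen: two genetic models are tested
    per variant, so each p-value is compared against alpha / 2."""
    return min(p_multiplicative, p_recessive) < alpha / 2.0
```

Halving the threshold is what keeps the type-I error controlled despite testing two models, while the recessive test recovers the power that a multiplicative-only analysis loses for rare susceptibility alleles.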
Abstract:
Amyotrophic lateral sclerosis (ALS) is a progressive degenerative disorder affecting motoneurons, and the SOD1(G93A) transgenic mice are widely employed to study disease physiopathology and therapeutic strategies. Despite the cellular and biochemical evidence of an early motor system dysfunction, the conventional behavioral tests do not detect early motor impairments in the SOD1 mouse model. We evaluated early changes in motor behavior of ALS mice through analyses of tail elevation, footprint, automatic recording of motor activities by means of an infrared motion sensor activity system, and electrophysiological measurements in male and female wild-type (WT) and SOD1(G93A) mice from postnatal day (P) 20 up to endpoint. The classical evaluations of mortality, weight loss, tremor, rotarod, hanging wire and inclined plane were also employed. There was a late onset (after P90) of the impairments of classical parameters, and the outcome varied between genders of ALS mice: tremor, cumulative survival, weight loss and neurological score changes occurred about 10 days earlier in male than in female ALS mice, and also about 20 days earlier in ALS males regarding rotarod and hanging wire performances. While the diminution of hindpaw base occurred 10 days earlier in ALS males (P110) compared to females, the step length decreased 40 days earlier in ALS females (P60) than in ALS males. The automatic analysis of motor impairments showed substantial late changes (after P90) of motility and locomotion in the ALS females, but not in the ALS males. Surprisingly, the scores of tail elevation were already decreased in ALS males and females by P40, reaching minimal values at the endpoint. The electrophysiological analyses showed early changes of measures in the ALS mouse sciatic nerve, i.e., decreased values of amplitude (P40) and nerve conduction velocity (P20), and also an increased latency (P20), reaching the maximal level of impairment at the late disease phase.
The early changes were not accompanied by reductions of the neuronal protein markers neurofilament 200 and ChAT in the ventral part of the lumbar spinal cord of P20 and P60 ALS mice, as assessed by Western blot, despite remarkable decreases of those protein levels in P120 ALS mice. In conclusion, early changes of motor behavior and electrophysiological parameters in the ALS mouse model must be taken into account in the analyses of disease mechanisms and therapeutic effects. (C) 2011 Published by Elsevier B.V.