915 results for "measurement error model"
Abstract:
Terrestrial laser scanning (TLS) is one of the most promising surveying techniques for rock-slope characterization and monitoring. Landslide and rockfall movements can be detected by comparing sequential scans. One of the most pressing challenges in natural hazards is the combined temporal and spatial prediction of rockfall. An outdoor experiment was performed to ascertain whether the TLS instrumental error is small enough to enable detection of precursory displacements of millimetric magnitude. The experiment consisted of known displacements of three objects relative to a stable surface. Results show that millimetric changes cannot be detected by analysis of the unprocessed datasets. Displacement measurements are improved considerably by applying Nearest Neighbour (NN) averaging, which reduces the error (1σ) by up to a factor of 6. This technique was applied to displacements prior to the April 2007 rockfall event at Castellfollit de la Roca, Spain. The maximum precursory displacement measured was 45 mm, approximately 2.5 times the standard deviation of the model comparison, which hampered the distinction between actual displacement and instrumental error using conventional methodologies. Encouragingly, the precursory displacement was clearly detected by applying the NN averaging method. These results show that millimetric displacements prior to failure can be detected using TLS.
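The kind of nearest-neighbour averaging described in this abstract can be sketched as follows: each point's raw scan-to-scan difference is replaced by the mean difference over its k nearest neighbours, which attenuates instrumental noise roughly in proportion to the square root of k. This is an illustrative sketch under assumed inputs, not the authors' implementation; the array names and the choice of k are hypothetical.

    # Sketch: nearest-neighbour (NN) averaging of scan-to-scan differences.
    # Assumes `points` (N x 3 coordinates) and `diff` (N raw distances between
    # sequential scans) are already available; names are illustrative only.
    import numpy as np
    from scipy.spatial import cKDTree

    def nn_average(points, diff, k=25):
        """Replace each point's difference by the mean over its k nearest neighbours."""
        tree = cKDTree(points)
        _, idx = tree.query(points, k=k)   # idx: (N, k) neighbour indices
        return diff[idx].mean(axis=1)      # smoothed displacement field

    # Example with synthetic data: a 1 mm true displacement buried in 5 mm noise.
    rng = np.random.default_rng(0)
    pts = rng.uniform(0, 10, size=(5000, 3))
    noisy = 1.0 + rng.normal(0, 5.0, size=5000)
    smoothed = nn_average(pts, noisy)
    print(noisy.std(), smoothed.std())     # noise is reduced roughly by sqrt(k)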
Abstract:
Winter weather in Iowa is often unpredictable and can have an adverse impact on traffic flow. The Iowa Department of Transportation (Iowa DOT) attempts to lessen the impact of winter weather events on traffic speeds with various proactive maintenance operations. To assess the performance of these maintenance operations, it would be beneficial to develop a model for expected speed reduction based on weather variables and normal maintenance schedules. Such a model would allow the Iowa DOT to identify situations in which speed reductions were much greater or smaller than expected for a given set of storm conditions, and to make modifications to improve efficiency and effectiveness. The objective of this work was to predict speed changes relative to baseline speed under normal conditions, based on nominal maintenance schedules and winter weather covariates (snow type, temperature, and wind speed) measured by roadside weather stations. This allows an assessment of the impact of winter weather covariates on traffic speed changes and estimation of the effect of regular maintenance passes. The researchers chose events from Adair County, Iowa, and fit a linear model incorporating the covariates mentioned previously. A Bayesian analysis was conducted to estimate the parameters of this model. Specifically, the analysis produces a distribution for the parameter that represents the impact of maintenance on traffic speeds. The effect of maintenance is not treated as a constant but as a value about which the researchers have some uncertainty, and this distribution represents what they know about the effects of maintenance. Similar examination of the distributions for the effects of winter weather covariates is possible. Plots of observed and expected traffic speed changes allow a visual assessment of the model fit. Future work involves expanding this model to incorporate many events at multiple locations. This would allow assessment of the impact of winter weather maintenance across various situations and eventually identify locations and times in which maintenance could be improved.
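A Bayesian linear model of the kind described can be sketched with a conjugate Gaussian prior and a known noise variance, which gives the posterior distribution of each covariate effect in closed form. This is a minimal sketch; the covariate columns, prior settings, and data values below are hypothetical placeholders, not the Iowa DOT data or the authors' exact model.

    # Sketch of a Bayesian linear model for speed change, assuming a conjugate
    # Gaussian prior and known noise variance for simplicity.
    import numpy as np

    def bayes_linear_posterior(X, y, sigma2=4.0, tau2=100.0):
        """Posterior mean and covariance of coefficients for y = X b + noise."""
        n, p = X.shape
        prior_prec = np.eye(p) / tau2                 # vague Gaussian prior
        post_cov = np.linalg.inv(prior_prec + X.T @ X / sigma2)
        post_mean = post_cov @ (X.T @ y / sigma2)
        return post_mean, post_cov

    # Columns: intercept, snow indicator, temperature (C), wind speed (m/s),
    # maintenance passes per hour -- illustrative design matrix only.
    X = np.array([[1, 1, -5.0, 6.0, 0.5],
                  [1, 1, -2.0, 3.0, 1.0],
                  [1, 0,  1.0, 2.0, 0.0],
                  [1, 1, -8.0, 9.0, 1.5]])
    y = np.array([-25.0, -12.0, -2.0, -18.0])         # speed change (mph)
    mean, cov = bayes_linear_posterior(X, y)
    print(mean)                                       # posterior mean of each effect
    print(np.sqrt(np.diag(cov)))                      # posterior standard deviations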
Abstract:
The objective of this work was to adapt the CROPGRO model, which is part of the DSSAT system, for simulating cowpea (Vigna unguiculata) growth and development under the soil and climate conditions of the Baixo Parnaíba region, Piauí State, Brazil. In CROPGRO, only the input parameters that define crop species, cultivar, and ecotype were changed in order to characterize the cowpea crop. Soil and climate files were created for the considered site. Field experiments without water deficit were used to calibrate the model. In these experiments, dry matter (DM), leaf area index (LAI), yield components, and grain yield of cowpea (cv. BR 14 Mulato) were evaluated. The results showed a good fit for the DM and LAI estimates. The mean values of R² and mean absolute error (MAE) were, respectively, 0.95 and 264.9 kg ha-1 for DM, and 0.97 and 0.22 for LAI. The difference between observed and simulated values of plant phenology varied from 0 to 3 days. The model also performed well in simulating yield components, excluding 100-grain weight, for which the error ranged from 20.9% to 34.3%. Considering the mean crop yield over the two years, the model presented an error of 5.6%.
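The goodness-of-fit statistics reported above (R² and MAE) can be computed as sketched below, using one common definition of each; the observed and simulated values are placeholders, not data from the study.

    # Sketch of the evaluation statistics: coefficient of determination (R2)
    # and mean absolute error (MAE) between observed and simulated values.
    import numpy as np

    def r2(obs, sim):
        ss_res = np.sum((obs - sim) ** 2)
        ss_tot = np.sum((obs - obs.mean()) ** 2)
        return 1.0 - ss_res / ss_tot

    def mae(obs, sim):
        return np.mean(np.abs(obs - sim))

    obs = np.array([1200.0, 2500.0, 4100.0, 5600.0])   # e.g. dry matter, kg ha-1
    sim = np.array([1100.0, 2700.0, 3900.0, 5800.0])
    print(r2(obs, sim), mae(obs, sim))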
Abstract:
When individuals learn by trial and error, they perform randomly chosen actions and then reinforce those actions that led to a high payoff. However, individuals do not always have to physically perform an action in order to evaluate its consequences. Rather, they may be able to mentally simulate actions and their consequences without actually performing them. Such fictitious learners can select actions with high payoffs without long chains of trial-and-error learning. Here, we analyze the evolution of an n-dimensional cultural trait (or artifact) by learning, in a payoff landscape with a single optimum. We derive the stochastic learning dynamics of the distance to the optimum in trait space when choice between alternative artifacts follows the standard logit choice rule. We show that for both trial-and-error and fictitious learners, the learning dynamics stabilize at an approximate distance of √n/(2λe) away from the optimum, where λe is an effective learning performance parameter depending on the learning rule under scrutiny. Individual learners are thus unlikely to reach the optimum when traits are complex (n large), and so face a barrier to further improvement of the artifact. We show, however, that this barrier can be significantly reduced in a large population of learners performing payoff-biased social learning, in which case λe becomes proportional to population size. Overall, our results illustrate the effects of errors in learning, levels of cognition, and population size on the evolution of complex cultural traits.
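The standard logit choice rule referred to above can be sketched as a softmax over payoffs: the probability of adopting an alternative grows with its payoff, and a sharpness parameter lambda controls how error-prone the choice is. The payoff values and lambda below are illustrative, not the paper's model.

    # Sketch of the standard logit (softmax) choice rule between alternative artifacts.
    import numpy as np

    def logit_choice(payoffs, lam=2.0, rng=None):
        """Pick an alternative with probability proportional to exp(lam * payoff)."""
        rng = rng or np.random.default_rng()
        z = lam * (payoffs - payoffs.max())      # subtract max for numerical stability
        p = np.exp(z) / np.exp(z).sum()
        return rng.choice(len(payoffs), p=p)

    payoffs = np.array([0.8, 1.0, 0.4])          # current artifact vs. two variants
    counts = np.bincount([logit_choice(payoffs) for _ in range(10000)], minlength=3)
    print(counts / counts.sum())                 # higher-payoff variants are chosen more often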
Abstract:
In a previous study, moisture loss indices were developed based on field measurements from one CIR-foam and one CIR-emulsion construction site. To calibrate these moisture loss indices, additional CIR construction sites were monitored using embedded moisture and temperature sensors. In addition, to determine the optimum timing of an HMA overlay on the CIR layer, the potential of using the stiffness of the CIR layer measured by a geo-gauge, instead of the moisture measurement by a nuclear gauge, was explored. Based on monitoring of moisture and stiffness at seven CIR project sites, the following conclusions are derived: (1) in some cases the in-situ stiffness remained constant and, in other cases, despite some rainfall, the stiffness of the CIR layers steadily increased during the curing time; (2) the stiffness measured by the geo-gauge was affected by a significant amount of rainfall; (3) the moisture indices developed for CIR sites can be used for predicting the moisture level in a typical CIR project, with the initial moisture content and temperature being the most significant factors in predicting the future moisture content in the CIR layer; and (4) the stiffness of a CIR layer is an extremely useful tool for contractors to use when timing their HMA overlay. To determine the optimal timing of an HMA overlay, it is recommended that the moisture loss index be used in conjunction with the stiffness of the CIR layer.
Abstract:
The objective of this study was to adapt a nonlinear model (Wang and Engel - WE) for simulating the phenology of maize (Zea mays L.), and to evaluate this model and a linear one (thermal time) in predicting the developmental stages of a field-grown maize variety. A field experiment was conducted in Santa Maria, RS, Brazil, during the 2005/2006 and 2006/2007 growing seasons, with seven sowing dates each. Dates of emergence, silking, and physiological maturity of the maize variety BRS Missões were recorded in six replications for each sowing date. Data collected in the 2005/2006 growing season were used to estimate the coefficients of the two models, and data collected in the 2006/2007 growing season were used as an independent data set for model evaluation. The nonlinear WE model accurately predicted the dates of silking and physiological maturity, and had a lower root mean square error (RMSE) than the linear (thermal time) model. The overall RMSE for silking and physiological maturity was 2.7 and 4.8 days with the WE model, and 5.6 and 8.3 days with the thermal time model, respectively.
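The linear thermal-time model used as the benchmark above accumulates daily mean temperature above a base temperature until a required sum is reached. The sketch below illustrates that idea only; the base temperature and the thermal-time target are hypothetical values, not the coefficients calibrated in the study.

    # Minimal sketch of a linear thermal-time (growing degree day) phenology model.
    def days_to_stage(daily_mean_temps, required_gdd, t_base=8.0):
        """Return the day on which accumulated thermal time reaches the target."""
        gdd = 0.0
        for day, t in enumerate(daily_mean_temps, start=1):
            gdd += max(0.0, t - t_base)
            if gdd >= required_gdd:
                return day
        return None   # target not reached within the temperature record

    temps = [18.0, 20.0, 22.0, 25.0, 24.0, 21.0] * 20   # hypothetical daily means (C)
    print(days_to_stage(temps, required_gdd=800.0))     # predicted day of the stage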
Abstract:
Determination of brain glucose transport kinetics in vivo at steady state typically does not allow the apparent maximum transport rate (Tmax) to be distinguished from the cerebral consumption rate. Using a four-state conformational model of glucose transport, we show that simultaneous dynamic measurement of brain and plasma glucose concentrations provides enough information for independent and reliable determination of the two rates. In addition, although dynamic glucose homeostasis can be described with a reversible Michaelis-Menten model, which follows implicitly from the large iso-inhibition constant (Kii) relative to physiological brain glucose content, we found that the apparent affinity constant (Kt) was better determined with the four-state conformational model of glucose transport than with any of the other models tested. Furthermore, we confirmed the utility of the present method for determining glucose transport and consumption by analysing the modulation of both by anaesthesia conditions that modify cerebral activity. In particular, deep thiopental anaesthesia caused a significant reduction of both Tmax and the cerebral metabolic rate of glucose consumption. In conclusion, dynamic measurement of brain glucose in vivo as a function of plasma glucose allows robust determination of both glucose uptake and consumption kinetics.
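For orientation, the simpler reversible Michaelis-Menten description mentioned above can be sketched as a single differential equation balancing transport against a constant consumption rate; the four-state conformational model of the paper is more detailed, and the parameter values below are illustrative rather than fitted values from the study.

    # Sketch of a reversible Michaelis-Menten description of brain glucose dynamics:
    # dGb/dt = Tmax*(Gp - Gb)/(Kt + Gp + Gb) - CMRglc, integrated with Euler steps.
    import numpy as np

    def simulate_brain_glucose(g_plasma, tmax=0.9, kt=1.0, cmr_glc=0.5,
                               g0=1.2, dt=0.01):
        """Brain glucose time course for a given plasma glucose time course."""
        gb = np.empty(len(g_plasma))
        gb[0] = g0
        for i in range(1, len(g_plasma)):
            gp, g = g_plasma[i - 1], gb[i - 1]
            transport = tmax * (gp - g) / (kt + gp + g)
            gb[i] = max(0.0, g + dt * (transport - cmr_glc))
        return gb

    t = np.arange(0, 60, 0.01)                 # minutes
    gp = 5.0 + 5.0 * (t > 10)                  # step increase in plasma glucose
    print(simulate_brain_glucose(gp)[-1])      # brain glucose approaches a new steady state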
Abstract:
Computed tomography angiography (CTA) images are the standard for assessing peripheral artery disease (PAD). This paper presents a computer-aided detection (CAD) and computer-aided measurement (CAM) system for PAD. The CAD stage detects the arterial network using a 3D region growing method and a fast 3D morphology operation. The CAM stage aims to accurately measure the artery diameters from the detected vessel centerline, compensating for the partial volume effect using expectation maximization (EM) and a Markov random field (MRF). The system has been evaluated on phantom data and applied to fifteen (15) CTA datasets, where the stenosis detection accuracy was 88% and the measurement error was 8%.
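A minimal sketch of intensity-based 3D region growing, the kind of operation used in the detection stage, is shown below. It assumes a CT intensity volume and a seed voxel inside the contrast-enhanced lumen; the intensity window and 6-connectivity are illustrative choices, not the paper's parameters.

    # Sketch: 3D region growing (flood fill) over an intensity window.
    from collections import deque
    import numpy as np

    def region_grow_3d(volume, seed, lo=200, hi=800):
        """Flood-fill all 6-connected voxels whose intensity lies in [lo, hi]."""
        mask = np.zeros(volume.shape, dtype=bool)
        queue = deque([seed])
        offsets = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
        while queue:
            z, y, x = queue.popleft()
            if mask[z, y, x] or not (lo <= volume[z, y, x] <= hi):
                continue
            mask[z, y, x] = True
            for dz, dy, dx in offsets:
                nz, ny, nx = z + dz, y + dy, x + dx
                if (0 <= nz < volume.shape[0] and 0 <= ny < volume.shape[1]
                        and 0 <= nx < volume.shape[2] and not mask[nz, ny, nx]):
                    queue.append((nz, ny, nx))
        return mask

    vol = np.zeros((40, 40, 40), dtype=np.int16)
    vol[:, 18:22, 18:22] = 400                       # toy tubular "vessel"
    print(region_grow_3d(vol, seed=(20, 20, 20)).sum())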
Abstract:
The objective of this work was to determine the sensitivity of maize (Zea mays) genotypes to water deficit, using a simple agrometeorological crop yield model. Actual crop yield and agronomic data for 26 genotypes were obtained from the Maize National Assays carried out at ten locations in four Brazilian states from 1998 to 2006. Weather information for each experimental location and period was obtained from the closest weather station. The water deficit sensitivity index (Ky) was determined using the crop yield depletion model. Genotypes can be divided into two groups according to their resistance to water deficit. Normal-resistance genotypes had Ky ranging from 0.4 to 0.5 in the vegetative period, 1.4 to 1.5 in flowering, 0.3 to 0.6 in fruiting, and 0.1 to 0.3 in the maturation period, whereas the higher-resistance genotypes had lower values, respectively 0.2-0.4, 0.7-1.2, 0.2-0.4, and 0.1-0.2. The overall Ky for the total growing season was 2.15 for sensitive genotypes and 1.56 for the resistant ones. Model performance was acceptable for estimating actual crop yield, with average errors for each genotype ranging from -5.7% to +5.8% and an overall mean absolute error of 960 kg ha-1 (10%).
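The Ky index above comes from the FAO-style yield depletion relationship (1 - Ya/Yp) = Ky (1 - ETa/ETp), in which relative yield loss is proportional to the relative evapotranspiration deficit. The sketch below only illustrates that relationship; the potential yield and deficit values are hypothetical, although the two Ky values are those reported in the abstract.

    # Sketch of the yield depletion relationship behind the Ky index.
    def actual_yield(potential_yield, ky, eta, etp):
        """Actual yield Ya given potential yield Yp and the water-deficit index Ky."""
        relative_deficit = 1.0 - eta / etp
        return potential_yield * (1.0 - ky * relative_deficit)

    # A sensitive genotype (Ky = 2.15) vs. a resistant one (Ky = 1.56)
    # under a 20% seasonal evapotranspiration deficit.
    for ky in (2.15, 1.56):
        print(ky, actual_yield(potential_yield=9000.0, ky=ky, eta=0.8, etp=1.0))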
Abstract:
Cancer pain significantly affects the quality of life of cancer patients, and current treatments for this pain are limited. C-Jun N-terminal kinase (JNK) has been implicated in tumor growth and neuropathic pain sensitization. We investigated the role of JNK in cancer pain and tumor growth in a skin cancer pain model. Injection of luciferase-transfected B16-Fluc melanoma cells into a mouse hindpaw induced robust tumor growth, as indicated by increases in paw volume and fluorescence intensity. Pain hypersensitivity in this model developed rapidly (<5 days), reached a peak in 2 weeks, and was characterized by mechanical allodynia and heat hyperalgesia. Tumor growth was associated with JNK activation in the tumor mass, dorsal root ganglion (DRG), and spinal cord, and with a peripheral neuropathy, such as loss of nerve fibers in the hindpaw skin and induction of ATF-3 expression in DRG neurons. Repeated systemic injections of D-JNKI-1 (6 mg/kg, i.p.), a selective and cell-permeable peptide inhibitor of JNK, produced a cumulative inhibition of mechanical allodynia and heat hyperalgesia. A bolus spinal injection of D-JNKI-1 also inhibited mechanical allodynia. Further, JNK inhibition suppressed tumor growth in vivo and melanoma cell proliferation in vitro. In contrast, repeated injections of morphine (5 mg/kg), a commonly used analgesic for terminal cancer, produced analgesic tolerance after 1 day and did not inhibit tumor growth. Our data reveal a marked peripheral neuropathy in this skin cancer model and important roles of the JNK pathway in cancer pain development and tumor growth. JNK inhibitors such as D-JNKI-1 may be used to treat cancer pain.
Abstract:
Exposure to solar ultraviolet (UV) light is the main causative factor for skin cancer. UV exposure depends on environmental and individual factors. Individual exposure data remain scarce, and the development of alternative assessment methods is greatly needed. We developed a model simulating human exposure to solar UV. The model predicts the dose and distribution of UV exposure received on the basis of ground irradiation and morphological data. Standard 3D computer graphics techniques were adapted to develop a rendering engine that estimates the solar exposure of a virtual manikin depicted as a triangle mesh surface. The amount of solar energy received by each triangle was calculated, taking into account reflected, direct, and diffuse radiation, and shading from other body parts. Dosimetric measurements (n = 54) were conducted in field conditions using a foam manikin as a surrogate for an exposed individual, and the results were compared to the model predictions. The model predicted exposure to solar UV adequately: the symmetric mean absolute percentage error was 13%, and half of the predictions were within 17% of the measurements. This model provides a tool to assess outdoor occupational and recreational UV exposures, without necessitating time-consuming individual dosimetry, with numerous potential uses in skin cancer prevention and research.
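The symmetric mean absolute percentage error used above to compare predicted and measured doses can be computed as sketched below, using one common definition of SMAPE; the dose values are placeholders, not the study's measurements.

    # Sketch: symmetric mean absolute percentage error (SMAPE) between
    # measured and predicted UV doses.
    import numpy as np

    def smape(measured, predicted):
        measured, predicted = np.asarray(measured), np.asarray(predicted)
        return 100.0 * np.mean(2.0 * np.abs(predicted - measured)
                               / (np.abs(measured) + np.abs(predicted)))

    measured = [120.0, 80.0, 200.0, 150.0]     # e.g. J m-2 per body site (illustrative)
    predicted = [110.0, 95.0, 210.0, 140.0]
    print(smape(measured, predicted))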
Abstract:
Velocity-density tests conducted in the laboratory involved small 4-inch-diameter by 4.58-inch-long compacted soil cylinders made up of 3 differing soil types and prepared at varying degrees of density and moisture content, the latter being varied well beyond optimum moisture values. Seventeen specimens were tested, 9 with velocity determinations made along two elements of the cylinder, 180 degrees apart, and 8 along three elements, 120 degrees apart. Seismic energy was developed by blows of a small tack hammer on a 5/8-inch-diameter steel ball placed at the center of the top of the cylinder, with the detector placed successively at four points spaced 1/2 inch apart on the side of the specimen, involving wave travel paths varying from 3.36 inches to 4.66 inches in length. Time intervals were measured using a model 217 micro-seismic timer in both laboratory and field measurements. Forty blows of the hammer were required for each velocity determination, which amounted to 80 blows on each of the 9 laboratory specimens and 120 blows on each of the remaining 8 cylinders. Thirty-five field tests were made over the three selected soil types, all fine-grained, using a 2-foot seismic line with hammer-impact points at 6-inch intervals. The small tack hammer and 5/8-inch steel ball were again used to develop seismic wave energy. Generally, the densities obtained from the velocity measurements were lower than those measured in conventional field testing. The conclusions reached were that: (1) the method does not appear to be usable for measuring the density of essentially fine-grained soils when the moisture content greatly exceeds the optimum for compaction, and (2) because velocity gradually decreases upon aging, apparently owing to gradual absorption of pore water into the expandable interlayer region of the clay, the seismic test should be conducted immediately after soil compaction to obtain a meaningful velocity value.
Abstract:
In this paper, we propose a method for computing JPEG quantization matrices for a given mean square error (MSE) or PSNR. We then employ this method to compute JPEG standard progressive operation mode definition scripts using a quantization approach. It is therefore no longer necessary to use a trial-and-error procedure to obtain a desired PSNR and/or definition script, which reduces cost. First, we establish a relationship between a Laplacian source and its uniform quantization error. We apply this model to the coefficients obtained in the discrete cosine transform stage of the JPEG standard. An image may then be compressed using the JPEG standard under a global MSE (or PSNR) constraint and a set of local constraints determined by the JPEG standard and visual criteria. Second, we study the JPEG standard progressive operation mode from a quantization-based approach. A relationship between the measured image quality at a given stage of the coding process and a quantization matrix is found. Thus, the definition script construction problem can be reduced to a quantization problem. Simulations show that our method generates better quantization matrices than the classical method based on scaling the JPEG default quantization matrix. The PSNR estimate usually has an error smaller than 1 dB, and this error decreases for high PSNR values. Definition scripts may be generated that avoid an excessive number of stages and remove small stages that do not contribute a noticeable image quality improvement during the decoding process.
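As a simplified stand-in for the idea of working backwards from a quality target, the sketch below converts a target PSNR into a target MSE via the standard relation PSNR = 10 log10(255^2/MSE) and then uses the high-rate approximation that uniform quantization with step D contributes roughly D^2/12 to the MSE. The paper's method instead models the DCT coefficients as Laplacian sources, so this is only an illustrative approximation, not the proposed algorithm.

    # Sketch: from a target PSNR to a uniform quantization step size.
    import numpy as np

    def target_mse(psnr_db):
        """MSE implied by a target PSNR for 8-bit images."""
        return 255.0 ** 2 / (10.0 ** (psnr_db / 10.0))

    def uniform_step_for_psnr(psnr_db):
        """Step size whose D^2/12 distortion per coefficient meets the target MSE."""
        return float(np.sqrt(12.0 * target_mse(psnr_db)))

    for psnr in (30.0, 36.0, 42.0):
        print(psnr, round(target_mse(psnr), 2), round(uniform_step_for_psnr(psnr), 2))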
Abstract:
This paper presents a probabilistic approach to modelling the problem of power supply voltage fluctuations. Error probability calculations are shown for some 90-nm technology digital circuits. The analysis considered here gives the timing violation error probability as a new design quality factor, in contrast to conventional techniques that assume a fully correct circuit. The evaluation of the error bound can be useful for new design paradigms in which retry and self-recovering techniques are applied to the design of high-performance processors. The method described here allows the performance of these techniques to be evaluated by calculating the expected error probability in terms of power supply distribution quality.
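As a simple illustration of a timing violation error probability (not the paper's 90-nm model): if supply-voltage fluctuations make a critical path's delay approximately Gaussian, the error probability is the tail mass of that delay distribution beyond the clock period. The delay statistics below are hypothetical.

    # Illustrative sketch: timing violation probability for a Gaussian delay model.
    import math

    def timing_error_probability(mean_delay, sigma_delay, clock_period):
        """P(delay > clock period) when the path delay is Gaussian."""
        z = (clock_period - mean_delay) / sigma_delay
        return 0.5 * math.erfc(z / math.sqrt(2.0))

    # Nominal 0.80 ns path, 0.05 ns delay spread from supply noise, 1.0 ns clock.
    print(timing_error_probability(0.80, 0.05, 1.00))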