968 results for order estimation
Abstract:
Two methods were evaluated for scaling a set of semivariograms into a unified function for kriging estimation of field-measured properties. Scaling is performed using the sample variances and sills of the individual semivariograms as scale factors. Theoretical developments show that the kriging weights are independent of the scaling factor, which appears simply as a constant multiplying both sides of the kriging equations. The scaling techniques were applied to four sets of semivariograms representing spatial scales from 30 x 30 m to 600 x 900 km. The experimental semivariograms in each set successfully coalesced into a single curve when scaled by the variances and sills of the individual semivariograms. To evaluate the scaling techniques, kriged estimates derived from scaled semivariogram models were compared with those derived from unscaled models. Differences in kriged estimates on the order of 5% were found in the cases where the scaling technique failed to coalesce the individual semivariograms, which also means that the spatial variability of these properties differs. The proposed scaling techniques enhance interpretation of semivariograms when a variety of measurements are made at the same location. They also reduce computational time for kriging estimation, because kriging weights need to be calculated for only one variable; the weights remain unchanged for all other variables in the data set whose semivariograms are scaled.
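The weight-invariance result can be checked numerically: multiplying the semivariogram by a constant scales both sides of the ordinary-kriging system, so the solved weights do not change. A minimal sketch, assuming a spherical semivariogram model and 1-D sample locations (both illustrative choices, not from the study):

```python
import numpy as np

def spherical_gamma(h, sill, rng):
    """Spherical semivariogram model."""
    h = np.asarray(h, dtype=float)
    g = sill * (1.5 * h / rng - 0.5 * (h / rng) ** 3)
    return np.where(h < rng, g, sill)

def ok_weights(pts, target, sill, rng):
    """Ordinary-kriging weights (with a Lagrange multiplier row)."""
    n = len(pts)
    A = np.ones((n + 1, n + 1))
    A[n, n] = 0.0
    d = np.abs(pts[:, None] - pts[None, :])
    A[:n, :n] = spherical_gamma(d, sill, rng)
    b = np.ones(n + 1)
    b[:n] = spherical_gamma(np.abs(pts - target), sill, rng)
    return np.linalg.solve(A, b)[:n]

pts = np.array([0.0, 1.0, 3.0])
w1 = ok_weights(pts, 1.5, sill=1.0, rng=4.0)   # unscaled semivariogram
w2 = ok_weights(pts, 1.5, sill=10.0, rng=4.0)  # sill scaled by a factor 10
print(np.allclose(w1, w2))  # → True
```

Only the Lagrange multiplier absorbs the scale factor; the weights, and hence the kriged estimate, are identical.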
Abstract:
Monitoring performance is a crucial task for elite sports, during both training and competition. Velocity is the key performance parameter in swimming, but swimming performance evaluation remains immature owing to the complexity of measurement in water. The purpose of this study is to use a single inertial measurement unit (IMU) to estimate front crawl velocity. Thirty swimmers, each equipped with an IMU on the sacrum, performed four 25-m trials at different velocities in ascending order. A tethered speedometer was used as the velocity reference. Exploiting the biomechanical constraints of front crawl locomotion and a change-detection framework applied to the acceleration signal enabled a drift-free integration of forward acceleration from the IMU to estimate the swimmer's velocity. A difference of 0.6 ± 5.4 cm·s⁻¹ in mean cycle velocity and an RMS difference of 11.3 cm·s⁻¹ in instantaneous velocity estimation were observed between the IMU and the reference. The most important contribution of the study is a new practical tool for objective evaluation of swimming performance: a single body-worn IMU provides timely feedback for coaches and sport scientists without any complicated setup or restraint on the swimmer's natural technique.
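The drift problem the abstract alludes to can be sketched in a few lines: plain integration of forward acceleration accumulates sensor-bias drift, which can be removed cycle by cycle if the stroke is assumed quasi-periodic (velocity returning to its baseline at each cycle boundary). This is a simplified stand-in for the paper's biomechanical-constraint and change-detection framework; the sampling rate, bias and cycle boundaries below are hypothetical:

```python
import numpy as np

def drift_free_velocity(acc, fs, cycle_bounds, v0):
    """Integrate forward acceleration to velocity, removing the linear
    drift accumulated within each stroke cycle (assumes the velocity at
    every cycle boundary returns to v0 -- a quasi-periodicity assumption)."""
    dt = 1.0 / fs
    raw = v0 + np.cumsum(acc) * dt        # plain integration: drifts with bias
    v = raw.copy()
    for a, b in zip(cycle_bounds[:-1], cycle_bounds[1:]):
        seg = raw[a:b] - raw[a] + v0                  # re-anchor cycle start at v0
        seg -= np.linspace(0.0, seg[-1] - v0, b - a)  # remove in-cycle drift
        v[a:b] = seg
    return raw, v

# Synthetic stroke-like signal: periodic acceleration plus a sensor bias
fs, bias = 100, 0.05
t = np.arange(0, 4, 1 / fs)
acc = 0.5 * np.sin(2 * np.pi * t) + bias
bounds = [0, 100, 200, 300, 400]
raw, v = drift_free_velocity(acc, fs, bounds, v0=1.0)
```

The raw integral drifts by bias × duration, while the per-cycle correction keeps the estimate bounded around the baseline velocity.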
Abstract:
We have measured the changes in ultrasonic wave velocity induced by the application of uniaxial stresses in a Cu-Al-Ni single crystal. From these measurements, the complete set of third-order elastic constants has been obtained. Comparison of the results for Cu-Al-Ni with available data for other Cu-based alloys shows that all these alloys exhibit similar anharmonic behavior. By using the measured elastic constants in a Landau expansion for elastic phase transitions, we have been able to estimate the value of a combination of fourth-order elastic constants. The experiments have also shown that the application of a stress in the [001] direction reduces the material's resistance to a (110)[110] shear and thus favors the martensitic transition.
Abstract:
Field capacity (FC) is a parameter widely used in applied soil science. However, its in situ method of determination may be difficult to apply, generally because of the need for large supplies of water at the test sites. Ottoni Filho et al. (2014) proposed a standardized procedure for field determination of FC and showed that such in situ FC can be estimated by a linear pedotransfer function (PTF) based on the volumetric soil water content at a matric potential of -6 kPa [θ(6)] for the same soils used in the present study. The objective of this study was to use soil moisture data below a double-ring infiltrometer, measured 48 h after the end of the infiltration test, to develop PTFs for standard in situ FC. We found that such ring FC data were on average 0.03 m³ m⁻³ greater than the standard FC values. The linear PTF developed for the ring FC data based only on θ(6) was nearly as accurate as the equivalent PTF reported by Ottoni Filho et al. (2014), which was developed for the standard FC data. The root mean squared residues of FC determined from both PTFs were about 0.02 m³ m⁻³. The proposed method has the advantage of estimating the soil in situ FC using the water applied in the infiltration test.
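A pedotransfer function of this kind is a one-variable linear regression. The sketch below fits and evaluates one on made-up calibration data (the θ(6) and FC values are illustrative, not those of the study):

```python
import numpy as np

# Hypothetical calibration data: water content at -6 kPa vs. in situ FC (m3/m3)
theta6 = np.array([0.12, 0.18, 0.22, 0.28, 0.33, 0.40])
fc_obs = np.array([0.14, 0.19, 0.24, 0.29, 0.35, 0.41])

# Fit the linear pedotransfer function FC = a + b * theta6
b, a = np.polyfit(theta6, fc_obs, 1)

# Accuracy on the calibration sample (root mean squared residue)
fc_pred = a + b * theta6
rmse = np.sqrt(np.mean((fc_obs - fc_pred) ** 2))
```

With real data the RMSE would be evaluated on independent soils; here it simply illustrates the ~0.02 m³ m⁻³ accuracy metric the abstract reports.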
Abstract:
Numerous sources of evidence point to the fact that heterogeneity within the Earth's deep crystalline crust is complex and hence may be best described through stochastic rather than deterministic approaches. As seismic reflection imaging arguably offers the best means of sampling deep crustal rocks in situ, much interest has been expressed in using such data to characterize the stochastic nature of crustal heterogeneity. Previous work on this problem has shown that the spatial statistics of seismic reflection data are indeed related to those of the underlying heterogeneous seismic velocity distribution. As of yet, however, the nature of this relationship has remained elusive due to the fact that most of the work was either strictly empirical or based on incorrect methodological approaches. Here, we introduce a conceptual model, based on the assumption of weak scattering, that allows us to quantitatively link the second-order statistics of a 2-D seismic velocity distribution with those of the corresponding processed and depth-migrated seismic reflection image. We then perform a sensitivity study in order to investigate what information regarding the stochastic model parameters describing crustal velocity heterogeneity might potentially be recovered from the statistics of a seismic reflection image using this model. Finally, we present a Monte Carlo inversion strategy to estimate these parameters and we show examples of its application at two different source frequencies and using two different sets of prior information. Our results indicate that the inverse problem is inherently non-unique and that many different combinations of the vertical and lateral correlation lengths describing the velocity heterogeneity can yield seismic images with the same 2-D autocorrelation structure. 
The ratio of the vertical to the lateral correlation length for all of these combinations, however, remains roughly constant, which indicates that, without additional prior information, the aspect ratio is the only parameter describing the stochastic seismic velocity structure that can be reliably recovered.
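The idea that correlation lengths trade off while their ratio is stable can be explored with a toy stochastic model: generate an anisotropic Gaussian random field, compute its 2-D autocorrelation, and estimate the correlation lengths along the two axes. A numpy-only sketch (the Gaussian covariance, grid size and scales are illustrative choices, not parameters from the study):

```python
import numpy as np

rng = np.random.default_rng(7)
n, ax, az = 256, 8.0, 2.0          # grid size and anisotropic Gaussian scales

# Anisotropic Gaussian random field: shape white noise in the spectral domain
kx = 2 * np.pi * np.fft.fftfreq(n)[None, :]
kz = 2 * np.pi * np.fft.fftfreq(n)[:, None]
H = np.exp(-(kx ** 2 * ax ** 2 + kz ** 2 * az ** 2) / 2)
field = np.real(np.fft.ifft2(np.fft.fft2(rng.normal(size=(n, n))) * H))

# 2-D autocorrelation via the Wiener-Khinchin theorem
acf = np.real(np.fft.ifft2(np.abs(np.fft.fft2(field)) ** 2))
acf /= acf[0, 0]

def corr_length(profile):
    """First lag at which the normalized ACF falls below 1/e."""
    return np.argmax(profile < 1 / np.e)

# Aspect ratio of the heterogeneity, estimated from the image statistics
ratio = corr_length(acf[0, :n // 2]) / corr_length(acf[:n // 2, 0])
```

For this field the true ratio is ax/az = 4; the empirical estimate recovers it to within sampling fluctuations, mirroring the recoverability of the aspect ratio described above.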
Abstract:
In the first part of the study, nine estimators of the first-order autoregressive parameter are reviewed and a new estimator is proposed. The relationships and discrepancies between the estimators are discussed in order to achieve a clear differentiation. In the second part of the study, the precision of the autocorrelation estimates is studied. The performance of the ten lag-one autocorrelation estimators is compared in terms of Mean Square Error (combining bias and variance) using data series generated by Monte Carlo simulation. The results show that there is no single optimal estimator for all conditions, suggesting that the estimator ought to be chosen according to sample size and to the information available on the likely direction of the serial dependence. Additionally, the probability of labelling an actually existing autocorrelation as statistically significant is explored using Monte Carlo sampling. The power estimates obtained are quite similar among the tests associated with the different estimators. These estimates evidence the small probability of detecting autocorrelation in series with fewer than 20 measurement times.
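The kind of Monte Carlo comparison described above can be reproduced in miniature: simulate AR(1) series, apply the conventional lag-one estimator, and measure its bias and Mean Square Error. A sketch (the series length, ρ and replication count are illustrative; the study compares ten estimators, only one is shown here):

```python
import numpy as np

def r1(x):
    """Conventional lag-one autocorrelation estimator."""
    d = x - x.mean()
    return np.sum(d[:-1] * d[1:]) / np.sum(d * d)

def simulate_ar1(rho, n, rng):
    """Generate one AR(1) series with autoregressive parameter rho."""
    x = np.empty(n)
    x[0] = rng.normal()
    for t in range(1, n):
        x[t] = rho * x[t - 1] + rng.normal()
    return x

rng = np.random.default_rng(0)
rho, n, reps = 0.5, 20, 2000
est = np.array([r1(simulate_ar1(rho, n, rng)) for _ in range(reps)])
bias = est.mean() - rho            # negative: r1 underestimates rho
mse = np.mean((est - rho) ** 2)    # combines bias and variance
```

With n = 20 the downward bias is substantial, which is consistent with the abstract's point about short series.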
Abstract:
Lutetium zoning in garnet within eclogites from the Zermatt-Saas Fee zone, Western Alps, reveals sharp, exponentially decreasing central peaks. These can be used to constrain maximum Lu volume diffusion in garnet. A prograde garnet growth temperature interval of 450-600 °C has been estimated based on pseudosection calculations and garnet-clinopyroxene thermometry. The maximum pre-exponential diffusion coefficient which fits the measured central peak is on the order of D₀ = 5.7 × 10⁻⁶ m²/s, taking an estimated activation energy of 270 kJ/mol based on diffusion experiments for other rare earth elements in garnet. This corresponds to a maximum diffusion rate of D(600 °C) = 4.0 × 10⁻²² m²/s. The diffusion estimate for Lu can be used to estimate the minimum closure temperature, T-c, for Sm-Nd and Lu-Hf age data that have been obtained in eclogites of the Western Alps, postulating, based on a literature review, that D(Hf) < D(Nd) < D(Sm) ≤ D(Lu). T-c calculations using the Dodson equation yielded minimum closure temperatures of about 630 °C, assuming a rapid initial exhumation rate of 50 °C/m.y. and an average garnet crystal size of r = 1 mm. This suggests that Sm/Nd and Lu/Hf isochron age differences in eclogites from the Western Alps, where peak temperatures rarely exceeded 600 °C, must be interpreted in terms of prograde metamorphism.
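The Dodson closure-temperature calculation must be solved iteratively, since T_c appears on both sides of the equation. A sketch using the values quoted above (D₀ = 5.7 × 10⁻⁶ m²/s, E = 270 kJ/mol, r = 1 mm, cooling rate 50 °C/m.y.) and a geometry factor A = 55 for spherical geometry — the geometry factor is an assumption, not stated in the abstract:

```python
import math

R = 8.314                      # gas constant, J/(mol K)
E = 270e3                      # activation energy, J/mol
D0 = 5.7e-6                    # pre-exponential diffusion coefficient, m^2/s
a = 1e-3                       # effective garnet radius, m
A = 55.0                       # Dodson geometry factor for a sphere (assumed)
cooling = 50.0 / 3.156e13      # 50 deg C per m.y., converted to K/s

# Fixed-point iteration on Dodson's equation:
#   E / (R * Tc) = ln(A * tau * D0 / a**2),  with  tau = R * Tc**2 / (E * dT/dt)
Tc = 800.0                     # initial guess, kelvin
for _ in range(50):
    tau = R * Tc ** 2 / (E * cooling)
    Tc = E / (R * math.log(A * tau * D0 / a ** 2))

Tc_celsius = Tc - 273.15
```

The iteration converges in a few steps (the logarithm makes the fixed point strongly attracting) to roughly 900 K, close to the ~630 °C minimum closure temperature quoted above.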
Abstract:
PURPOSE: The aim of this study was to develop models based on kernel regression and probability estimation in order to predict and map IRC in Switzerland, taking into account architectural factors, spatial relationships between the measurements, and geological information. METHODS: We looked at about 240,000 IRC measurements carried out in about 150,000 houses. As predictor variables in the kernel estimation models we included building type, foundation type, year of construction, detector type, geographical coordinates, altitude, temperature and lithology. We developed predictive maps as well as a map of the local probability of exceeding 300 Bq/m³. Additionally, we developed a map of a confidence index in order to estimate the reliability of the probability map. RESULTS: Our models were able to explain 28% of the variation in the IRC data. All variables added information to the model. The model estimation revealed a bandwidth for each variable, making it possible to characterize the influence of each variable on the IRC estimate. Furthermore, we assessed the mapping characteristics of kernel estimation overall as well as by municipality. Overall, our model reproduces spatial IRC patterns obtained in earlier work. At the municipal level, we could show that our model accounts well for IRC trends within municipal boundaries. Finally, we found that different building characteristics result in different IRC maps: maps corresponding to detached houses with concrete foundations indicate systematically smaller IRC than maps corresponding to farms with earth foundations. CONCLUSIONS: IRC mapping based on kernel estimation is a powerful tool to predict and analyze IRC on a large scale as well as at a local level. This approach makes it possible to develop tailor-made maps for different architectural elements and measurement conditions while accounting for geological information and spatial relations between IRC measurements.
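Kernel estimation of the kind described reduces, in its simplest univariate form, to Nadaraya-Watson regression, where a bandwidth per variable controls how far each measurement's influence reaches. A one-predictor sketch on synthetic data (not radon measurements; the multivariate, mixed-type model of the study is beyond this illustration):

```python
import numpy as np

def kernel_regress(x_train, y_train, x_query, bandwidth):
    """Nadaraya-Watson kernel regression with a Gaussian kernel. The
    bandwidth plays the role described in the abstract: it quantifies
    how strongly each predictor value influences the estimate."""
    w = np.exp(-0.5 * ((x_query[:, None] - x_train[None, :]) / bandwidth) ** 2)
    return (w @ y_train) / w.sum(axis=1)

# Synthetic training data: noisy observations of a smooth trend
rng = np.random.default_rng(1)
x = np.linspace(0.0, 3.0, 200)
y = np.sin(x) + rng.normal(scale=0.1, size=x.size)

pred = kernel_regress(x, y, np.array([1.5]), bandwidth=0.2)
```

A larger bandwidth smooths more aggressively (lower variance, higher bias); model estimation in the study amounts to choosing such bandwidths for every predictor.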
Abstract:
The goal of this study was to investigate the impact of computing parameters and of the location of volumes of interest (VOI) on the calculation of the 3D noise power spectrum (NPS), in order to determine an optimal set of computing parameters and propose a robust method for evaluating the noise properties of imaging systems. Noise stationarity in noise volumes acquired with a water phantom on a 128-MDCT and a 320-MDCT scanner was analyzed in the spatial domain in order to define locally stationary VOIs. The influence of the computing parameters on the 3D NPS measurement (the sampling distances bx,y,z, the VOI lengths Lx,y,z, the number of VOIs NVOI and the structured noise) was investigated to minimize measurement errors. The effect of the VOI locations on the NPS was also investigated. Results showed that the noise (standard deviation) varies more in the r-direction (phantom radius) than in the z-direction. A 25 × 25 × 40 mm³ VOI associated with DFOV = 200 mm (Lx,y,z = 64, bx,y = 0.391 mm with a 512 × 512 matrix) and a first-order detrending method to reduce structured noise led to an accurate NPS estimation. NPS estimated from off-centered small VOIs showed a directional dependency, contrary to NPS obtained from large VOIs located at the center of the volume or from small VOIs located on a concentric circle. This shows that VOI size and location play a major role in the determination of the NPS when images are not stationary. This study emphasizes the need for consistent measurement methods to assess and compare image quality in CT.
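The ensemble 3-D NPS computation itself is compact: detrend each VOI, average the squared DFT magnitudes, and normalize by voxel size and VOI length. A sketch with mean subtraction standing in for the first-order detrending the study uses, verified on synthetic white noise via Parseval's relation:

```python
import numpy as np

def nps_3d(vois, voxel):
    """Ensemble 3-D noise power spectrum. `vois` is a list of equally
    sized noise-only volumes; `voxel` holds the sampling distances
    (bx, by, bz). Mean subtraction stands in here for the first-order
    detrending used against structured noise."""
    bx, by, bz = voxel
    nx, ny, nz = vois[0].shape
    acc = np.zeros((nx, ny, nz))
    for v in vois:
        acc += np.abs(np.fft.fftn(v - v.mean())) ** 2
    return acc * (bx * by * bz) / (len(vois) * nx * ny * nz)

# Synthetic check: unit-variance white noise VOIs, 0.5 mm isotropic voxels
rng = np.random.default_rng(2)
voxel = (0.5, 0.5, 0.5)
vois = [rng.normal(size=(16, 16, 16)) for _ in range(20)]
nps = nps_3d(vois, voxel)

# Integrating the NPS over frequency should recover the noise variance (1.0)
freq_bin = 1.0 / (16 * 0.5)
variance = nps.sum() * freq_bin ** 3
```

For white noise the integral of the NPS equals the voxel variance, which makes this normalization convention easy to validate before applying it to scanner data.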
Abstract:
PURPOSE: To use measurements from cycling power meters (Pmes) to evaluate the accuracy of commonly used models for estimating uphill cycling power (Pest). Experiments were designed to explore the influence of wind speed and steepness of climb on the accuracy of Pest. The authors hypothesized that the random error in Pest would be largely influenced by windy conditions, that the bias would be diminished on steeper climbs, and that windy conditions would induce a larger bias in Pest. METHODS: Sixteen well-trained cyclists performed 15 uphill-cycling trials (length 1.3-6.3 km, slope 4.4-10.7%) in random order. Trials included different riding positions in a group (lead or follow) and different wind speeds. Pmes was quantified using a power meter, and Pest was calculated with a methodology used by journalists reporting on the Tour de France. RESULTS: Overall, the difference between Pmes and Pest was -0.95% (95% CI: -10.4%, +8.5%) for all trials and 0.24% (-6.1%, +6.6%) in conditions without wind (<2 m/s). The relationship between percent slope and the error between Pest and Pmes was considered trivial. CONCLUSIONS: Aerodynamic drag (affected by wind velocity and orientation, frontal area, drafting, and speed) is the most confounding factor. The mean estimated values are close to the power outputs measured by power meters, but the random error is between ±6% and ±10%. Moreover, at the power outputs (>400 W) produced by professional riders, this error is likely to be higher. This observation calls into question the validity of releasing individual values without reporting the range of random errors.
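Estimation models of this kind decompose power into gravitational, rolling-resistance and aerodynamic terms. A sketch with typical assumed coefficients (Crr, CdA, air density and drivetrain efficiency are illustrative values, not those used in the study or by the journalists' methodology):

```python
import math

def estimate_power(mass, speed, slope, crr=0.004, cda=0.35,
                   rho=1.2, wind=0.0, drivetrain_eff=0.975):
    """Estimate uphill cycling power (W) from ground speed (m/s) and
    slope (rise/run). All coefficients are assumed typical values."""
    g = 9.81
    theta = math.atan(slope)
    p_gravity = mass * g * math.sin(theta) * speed          # climbing
    p_rolling = mass * g * math.cos(theta) * crr * speed    # rolling resistance
    p_aero = 0.5 * rho * cda * (speed + wind) ** 2 * speed  # aerodynamic drag
    return (p_gravity + p_rolling + p_aero) / drivetrain_eff

# Example: 75 kg rider-plus-bike at 6 m/s on an 8% climb, no wind
p_climb = estimate_power(75.0, 6.0, 0.08)
```

The wind term makes the abstract's point concrete: an unmeasured headwind enters the cubic aerodynamic term directly, which is why drag dominates the random error of Pest.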
Abstract:
This paper provides a systematic approach to the problem of non-data-aided symbol-timing estimation for linear modulations. The study is performed under the unconditional maximum likelihood framework, where the carrier-frequency error is included as a nuisance parameter in the mathematical derivation. The second-order moments of the received signal are found to be the sufficient statistics for the problem at hand, and they allow the provision of robust performance in the presence of carrier-frequency error uncertainty. We particularly focus on the exploitation of the cyclostationary property of linear modulations. This enables us to derive simple and closed-form symbol-timing estimators which are found to be based on the well-known square timing recovery method by Oerder and Meyr. Finally, we generalize the OM method to the case of linear modulations with offset formats. In this case, the square-law nonlinearity is found to provide not only the symbol timing but also the carrier-phase error.
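The square timing recovery idea can be demonstrated in a few lines: squaring the envelope of an oversampled linear modulation produces a spectral line at the symbol rate whose phase encodes the timing offset. A sketch using half-sine pulses so the example stays self-contained (practical receivers use root-raised-cosine pulses; the estimate is compared in relative terms to sidestep epoch conventions):

```python
import numpy as np

def om_timing(x, sps):
    """Oerder & Meyr square timing estimate: phase of the symbol-rate
    spectral line of |x|^2, expressed as a fraction of a symbol."""
    k = np.arange(len(x))
    line = np.sum(np.abs(x) ** 2 * np.exp(-2j * np.pi * k / sps))
    return -np.angle(line) / (2 * np.pi)

rng = np.random.default_rng(3)
sps, nsym = 8, 64
pulse = np.sin(np.pi * np.arange(sps) / sps)        # half-sine symbol pulse
symbols = rng.choice([-1.0, 1.0], size=nsym)        # sign is removed by squaring
signal = (symbols[:, None] * pulse[None, :]).ravel()

delay = 3                                           # timing offset, in samples
shifted = np.roll(signal, delay)
# Relative estimate between delayed and undelayed signals, wrapped to
# (-0.5, 0.5] symbol periods; should recover delay / sps = 3/8
diff = (om_timing(shifted, sps) - om_timing(signal, sps) + 0.5) % 1.0 - 0.5
```

Being built on the second-order moments of the signal, the estimate is insensitive to the symbol values and, as the paper emphasizes, to a residual carrier-frequency error.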
Abstract:
This correspondence addresses the problem of non-data-aided waveform estimation for digital communications. Based on the unconditional maximum likelihood criterion, the main contribution of this correspondence is the derivation of a closed-form solution to the waveform estimation problem in the low signal-to-noise ratio regime. The proposed estimation method is based on the second-order statistics of the received signal, and a clear link is established between maximum likelihood estimation and correlation matching techniques. Compression with the signal subspace is also proposed to improve robustness against the noise and to mitigate the impact of abnormal observations or outliers.
Abstract:
Gas-liquid mass transfer is an important issue in the design and operation of many chemical unit operations. Despite its importance, the evaluation of gas-liquid mass transfer is not straightforward due to the complex nature of the phenomena involved. In this thesis gas-liquid mass transfer was evaluated in three different gas-liquid reactors in the traditional way, by measuring the volumetric mass transfer coefficient (kLa). The studied reactors were a bubble column with a T-junction two-phase nozzle for gas dispersion, an industrial-scale bubble column reactor for the oxidation of tetrahydroanthrahydroquinone, and a concurrent downflow structured bed. The main drawback of this approach is that the obtained correlations give only the average volumetric mass transfer coefficient, which depends on average conditions. Moreover, the obtained correlations are valid only for the studied geometry and for the chemical system used in the measurements. In principle, a more fundamental approach is to estimate the interfacial area available for mass transfer from bubble size distributions obtained by solution of population balance equations. This approach has been used in this thesis by developing a population balance model for a bubble column together with phenomenological models for bubble breakage and coalescence. The parameters of the bubble breakage rate and coalescence rate models were estimated by comparing the measured and calculated bubble sizes. The coalescence models always have at least one experimental parameter, because bubble coalescence depends on liquid composition in a way which is difficult to evaluate using known physical properties. The coalescence properties of some model solutions were evaluated by measuring the time that a bubble rests at the free gas-liquid interface before coalescing (the so-called persistence time or rest time). The measured persistence times range from 10 ms up to 15 s depending on the solution.
The coalescence was never found to be instantaneous: the bubble oscillates up and down at the interface at least a couple of times before coalescence takes place. The measured persistence times were compared to coalescence times obtained by parameter fitting using measured bubble size distributions in a bubble column and a bubble column population balance model. For short persistence times, the persistence and coalescence times are in good agreement. For longer persistence times, however, the persistence times are at least an order of magnitude longer than the corresponding coalescence times from parameter fitting. This discrepancy may be attributed to uncertainties in the estimation of energy dissipation rates, collision rates and mechanisms, and bubble contact times.
Abstract:
The growth of five variables of the tibia (diaphyseal length, diaphyseal length plus distal epiphysis, condylo-malleolar length, sagittal diameter of the proximal epiphysis, maximum breadth of the distal epiphysis) was analysed using polynomial regression in order to evaluate their significance and capacity for age and sex determination during and after growth. Data were collected from 181 individuals (90♂ and 91♀) ranging from birth to 25 years of age and belonging to three documented collections from Western Europe. Results indicate that all five variables exhibit linear behaviour during growth, which can be expressed by a first-degree polynomial function. Significant sexual differences were observed from age 15 onward in the two epiphyseal measurements and in condylo-malleolar length, suggesting that these three variables could be useful for sex determination in individuals older than 15 years. Strong correlations were identified between the five tibial variables and age. These results indicate that any of the studied tibial measurements is likely to serve as a useful source for estimating sub-adult age in both archaeological and forensic samples.
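Since the measurements behave linearly during growth, age estimation amounts to fitting and inverting a first-degree polynomial. A sketch on made-up diaphyseal-length data (the values are illustrative, not from the three documented collections):

```python
import numpy as np

# Hypothetical calibration sample: age (years) vs diaphyseal length (mm)
age = np.array([0.5, 2.0, 5.0, 8.0, 11.0, 14.0])
length = np.array([76.0, 108.0, 173.0, 237.0, 301.0, 366.0])

# First-degree polynomial growth model: length = a + b * age
b, a = np.polyfit(age, length, 1)
r = np.corrcoef(age, length)[0, 1]   # strength of the age correlation

def estimate_age(obs_length):
    """Invert the linear model to estimate sub-adult age from bone length."""
    return (obs_length - a) / b
```

In practice the regression would be fitted per sex once dimorphism appears (from age 15 in the study), and prediction intervals would accompany the point estimate.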
Abstract:
Electroencephalographic (EEG) recordings are, most of the time, corrupted by spurious artifacts, which should be rejected or cleaned by the practitioner. As human scalp EEG screening is error-prone, automatic artifact detection is an issue of capital importance for ensuring objective and reliable results. In this paper we propose a new approach for discriminating muscular activity in human scalp quantitative EEG (QEEG), based on time-frequency shape analysis. The impact of muscular activity on the EEG can be evaluated with this methodology. We present an application of this scoring as a preprocessing step for EEG signal analysis, in order to evaluate the amount of muscular activity in two sets of EEG recordings: patients with early-stage Alzheimer's disease and age-matched control subjects.