945 results for Mean Squared Error
Abstract:
Reliable estimates of heavy-truck volumes are important in a number of transportation applications. Estimates of truck volumes are necessary for pavement design and pavement management. Truck volumes are important in traffic safety. The number of trucks on the road also influences roadway capacity and traffic operations. Additionally, heavy vehicles pollute at higher rates than passenger vehicles. Consequently, reliable estimates of heavy-truck vehicle miles traveled (VMT) are important in creating accurate inventories of on-road emissions. This research evaluated three different methods to calculate heavy-truck annual average daily traffic (AADT), which can subsequently be used to estimate VMT. Traffic data from continuous count stations provided by the Iowa DOT were used to estimate AADT for two different truck groups (single-unit and multi-unit) using the three methods. The first method developed monthly and daily expansion factors for each truck group. The second and third methods created general expansion factors for all vehicles. The accuracy of the three methods was compared using n-fold cross-validation, in which the data are split into n partitions and the data in each partition are used in turn to validate estimates developed from the remaining data. The prediction error, determined by averaging the squared error between the estimated AADT and the actual AADT, was used to compare the accuracy of the three methods. Overall, for both single- and multi-unit trucks, the prediction error was lowest for the method that developed expansion factors separately for the different truck groups. This indicates that expansion factors specific to heavy trucks yield better estimates of AADT, and subsequently VMT, than aggregate expansion factors combined with a percentage of trucks. Monthly, daily, and weekly traffic patterns were also evaluated.
Significant variation exists in the temporal and seasonal patterns of heavy trucks as compared to passenger vehicles. This suggests that the use of aggregate expansion factors fails to adequately describe truck travel patterns.
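The expansion-factor method described above can be sketched in a few lines; the factor values and counts here are invented for illustration and are not taken from the Iowa DOT data:

```python
import statistics

# Hypothetical expansion factors for one truck group (illustrative values):
# factor = AADT / mean daily volume for that month or day of week.
monthly_factor = {"Jan": 1.18, "Jul": 0.91}
dow_factor = {"Mon": 1.05, "Sun": 1.32}

def estimate_aadt(daily_count, month, dow):
    """Expand a single daily truck count to an AADT estimate."""
    return daily_count * monthly_factor[month] * dow_factor[dow]

# Cross-validation-style prediction error: average squared difference
# between estimated and actual AADT over held-out stations.
actual = [410, 395, 520]
estimated = [estimate_aadt(350, "Jan", "Mon"),
             estimate_aadt(430, "Jul", "Sun"),
             estimate_aadt(480, "Jan", "Mon")]
mse = statistics.fmean((e - a) ** 2 for e, a in zip(estimated, actual))
```

In the study's first method, separate factor tables like these are built per truck group; the second and third methods would share one aggregate table across all vehicles.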
Abstract:
The mutual information of independent parallel Gaussian-noise channels is maximized, under an average power constraint, by independent Gaussian inputs whose power is allocated according to the waterfilling policy. In practice, discrete signalling constellations with limited peak-to-average ratios (m-PSK, m-QAM, etc.) are used in lieu of the ideal Gaussian signals. This paper gives the power allocation policy that maximizes the mutual information over parallel channels with arbitrary input distributions. Such a policy admits a graphical interpretation, referred to as mercury/waterfilling, which generalizes the waterfilling solution while retaining some of its intuition. The relationship between the mutual information of Gaussian channels and the nonlinear minimum mean-square error proves key to solving the power allocation problem.
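For Gaussian inputs, the baseline that mercury/waterfilling generalizes, the classical waterfilling allocation can be computed by bisection on the water level. A minimal sketch (the noise levels and power budget are illustrative):

```python
def waterfill(noise, total_power, tol=1e-9):
    """Classical waterfilling: allocate p_i = max(0, mu - n_i) so that
    sum(p_i) == total_power, finding the water level mu by bisection."""
    lo, hi = min(noise), min(noise) + total_power  # mu lies in this bracket
    while hi - lo > tol:
        mu = (lo + hi) / 2
        used = sum(max(0.0, mu - n) for n in noise)
        if used > total_power:
            hi = mu   # water level too high
        else:
            lo = mu
    mu = (lo + hi) / 2
    return [max(0.0, mu - n) for n in noise]

# Three subchannels with noise levels 0.5, 1.0, 4.0 and a budget of 2:
# the noisiest channel receives no power.
powers = waterfill([0.5, 1.0, 4.0], total_power=2.0)
```

Mercury/waterfilling modifies this picture per subchannel through the input distribution's MMSE, but the one-dimensional search over a common "water level" carries over.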
Abstract:
Human arteries affected by atherosclerosis are characterized by altered wall viscoelastic properties. The possibility of noninvasively assessing arterial viscoelasticity in vivo would significantly contribute to the early diagnosis and prevention of this disease. This paper presents a noniterative technique to estimate the viscoelastic parameters of a vascular wall Zener model. The approach requires the simultaneous measurement of flow variations and wall displacements, which can be provided by suitable ultrasound Doppler instruments. Viscoelastic parameters are estimated by fitting the theoretical constitutive equations to the experimental measurements using an ARMA parameter approach. The accuracy and sensitivity of the proposed method are tested using reference data generated by numerical simulations of arterial pulsation in which the physiological conditions and the viscoelastic parameters of the model can be suitably varied. The estimated values quantitatively agree with the reference values, showing that the only parameter affected by changing the physiological conditions is viscosity, whose relative error is about 27% even when a poor signal-to-noise ratio is simulated. Finally, the feasibility of the method is illustrated through three measurements made at different flow regimes on a cylindrical vessel phantom, yielding a mean parameter estimation error of 25%.
Abstract:
We study the minimum mean square error (MMSE) and the multiuser efficiency η of large dynamic multiple-access communication systems in which optimal multiuser detection is performed at the receiver while the number and the identities of active users are allowed to change at each transmission time. The system dynamics are ruled by a Markov model describing the evolution of the channel occupancy, and a large-system analysis is performed as the number of observations grows large. Starting from the equivalent scalar channel and the fixed-point equation tying multiuser efficiency to MMSE, we extend it to the case of a dynamic channel, and derive lower and upper bounds for the MMSE (and, thus, for η as well) holding true in the limit of large signal-to-noise ratios and increasingly large observation time T.
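The scalar-channel MMSE at the heart of such fixed-point equations can be estimated by Monte Carlo. A sketch assuming a BPSK input (a choice made here purely for illustration; the paper's setting is more general):

```python
import math
import random

def mmse_bpsk(snr, trials=200_000, seed=1):
    """Monte Carlo estimate of the MMSE of x in y = sqrt(snr)*x + n,
    with x uniform on {-1, +1} and n ~ N(0, 1).  The conditional-mean
    (MMSE) estimator for this input is E[x|y] = tanh(sqrt(snr) * y)."""
    rng = random.Random(seed)
    err = 0.0
    for _ in range(trials):
        x = rng.choice((-1.0, 1.0))
        y = math.sqrt(snr) * x + rng.gauss(0.0, 1.0)
        err += (x - math.tanh(math.sqrt(snr) * y)) ** 2
    return err / trials

# MMSE decreases monotonically as the SNR grows.
low_snr_mmse, high_snr_mmse = mmse_bpsk(0.1), mmse_bpsk(10.0)
```

The bounds discussed in the abstract concern exactly this quantity's behavior in the large-SNR limit, where the binary-input MMSE vanishes much faster than the Gaussian-input MMSE.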
Abstract:
Purpose: To evaluate the outcomes of combined deep sclerectomy and trabeculectomy (penetrating deep sclerectomy) in pediatric glaucoma. Design: Retrospective, non-consecutive, non-comparative, interventional case series. Participants: Children suffering from pediatric glaucoma who underwent surgery between March 1997 and October 2006 were included in this study. Methods: A primary combined deep sclerectomy and trabeculectomy was performed in 35 eyes of 28 patients. Complete examinations were performed before surgery, postoperatively at 1 and 7 days, at 1, 2, 3, 4, 6, 9, and 12 months, and then every 6 months after surgery. Main Outcome Measures: Surgical outcome was assessed in terms of intraocular pressure (IOP) change, additional glaucoma medication, complication rate, need for surgical revision, as well as refractive errors, best-corrected visual acuity (BCVA), and corneal clarity and diameters. Results: The mean age before surgery was 3.6 ± 4.5 years, and the mean follow-up was 3.5 ± 2.9 years. The mean preoperative IOP was 31.9 ± 11.5 mmHg. At the end of follow-up, the mean IOP decreased by 58.3% (p<0.005), and of the 14 patients with available BCVA, 8 (57.1%) achieved 0.5 (20/40) or better, 3 (21.4%) 0.2 (20/100), and 2 (14.3%) 0.1 (20/200) in their better eye. The mean refractive error (spherical equivalent) at final follow-up visits was +0.83 ± 5.4. Six patients (43%) were affected by myopia. The complete and qualified success rates, based on a cumulative survival curve, after 9 years were 52.3% and 70.6%, respectively (p<0.05). Sight-threatening complications were more common (8.6%) in refractory glaucomas. Conclusions: Combined deep sclerectomy and trabeculectomy is a surgical technique developed to control IOP in congenital, secondary, and juvenile glaucomas. The intermediate results are satisfactory and promising. Previous classic glaucoma surgeries performed before this new technique had less favourable results. The number of sight-threatening complications is related to the severity of glaucoma and the number of previous surgeries.
Abstract:
This paper presents a comparative analysis of linear and mixed models for short-term forecasting of a real data series with a high percentage of missing data. The data are the series of significant wave heights registered at regular three-hour periods by a buoy placed in the Bay of Biscay. The series is interpolated with a linear predictor which minimizes the forecast mean square error. The linear models are seasonal ARIMA models, and the mixed models have a linear component and a nonlinear seasonal component. The nonlinear component is estimated by a nonparametric regression of data versus time. Short-term forecasts, no more than two days ahead, are of interest because they can be used by the port authorities to notify the fleet. Several models are fitted and compared by their forecasting behavior.
Abstract:
This work is part of a project studying the performance of model-based estimators in a small-area context. We have chosen a simple statistical application in which we estimate the growth rate of occupation for several regions of Spain. We compare three estimators: the direct one, based on straightforward results from the survey (which is unbiased), and a third one, which is based on a statistical model and minimizes the mean square error.
Abstract:
This paper investigates the comparative performance of five small-area estimators. We use Monte Carlo simulation in the context of both theoretical and empirical populations. In addition to the direct and indirect estimators, we consider the optimal composite estimator with population weights, and two composite estimators with estimated weights: one that assumes homogeneity of within-area variance and squared bias, and another that uses area-specific estimates of variance and squared bias. It is found that among the feasible estimators, the best choice is the one that uses area-specific estimates of variance and squared bias.
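A composite estimator blends an unbiased but high-variance direct estimate with a biased but stable indirect one. A minimal sketch of the MSE-optimal weight, simplified by ignoring the indirect estimator's own variance (all numbers hypothetical):

```python
def composite(direct, indirect, var_direct, sq_bias_indirect):
    """MSE-optimal composite of an unbiased direct estimate (variance v)
    and a biased indirect estimate (squared bias b2): minimizing
    w**2 * v + (1 - w)**2 * b2 over w gives w = b2 / (v + b2)."""
    w = sq_bias_indirect / (var_direct + sq_bias_indirect)
    return w * direct + (1.0 - w) * indirect

# Area-specific weights (the best-performing choice above) would plug in
# per-area estimates of v and b2 rather than pooled values.
est = composite(direct=12.0, indirect=10.0, var_direct=4.0, sq_bias_indirect=1.0)
```

With these numbers w = 0.2, so the composite leans on the indirect estimate; a noisier direct estimate or a more biased indirect one shifts the weight accordingly.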
Abstract:
We consider adaptive sequential lossy coding of bounded individual sequences when the performance is measured by the sequentially accumulated mean squared distortion. The encoder and the decoder are connected via a noiseless channel of capacity $R$, and both are assumed to have zero delay. No probabilistic assumptions are made on how the sequence to be encoded is generated. For any bounded sequence of length $n$, the distortion redundancy is defined as the normalized cumulative distortion of the sequential scheme minus the normalized cumulative distortion of the best scalar quantizer of rate $R$ which is matched to this particular sequence. We demonstrate the existence of a zero-delay sequential scheme which uses common randomization in the encoder and the decoder such that the normalized maximum distortion redundancy converges to zero at a rate $n^{-1/5}\log n$ as the length of the encoded sequence $n$ increases without bound.
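The reference class in the redundancy definition is the rate-$R$ scalar quantizer. A sketch of a uniform one on a bounded interval (the interval [0, 1] is an assumption for illustration) and of the normalized cumulative distortion it incurs:

```python
def quantize_uniform(x, rate, lo=0.0, hi=1.0):
    """Rate-R uniform scalar quantizer on [lo, hi]: 2**rate cells,
    reproduction point at each cell's midpoint."""
    levels = 2 ** rate
    step = (hi - lo) / levels
    cell = min(int((x - lo) / step), levels - 1)  # clamp x == hi into last cell
    return lo + (cell + 0.5) * step

def normalized_distortion(seq, rate):
    """Normalized cumulative squared distortion (1/n) * sum (x - Q(x))**2."""
    return sum((x - quantize_uniform(x, rate)) ** 2 for x in seq) / len(seq)

seq = [0.1, 0.52, 0.93]
d_rate1 = normalized_distortion(seq, rate=1)
d_rate3 = normalized_distortion(seq, rate=3)
```

The paper's competitor is the best rate-$R$ scalar quantizer matched to the sequence, which can only do better than this fixed uniform one; the sequential scheme must approach that benchmark with zero delay.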
Abstract:
PURPOSE: To evaluate the outcomes of combined deep sclerectomy and trabeculectomy (penetrating deep sclerectomy) in pediatric glaucoma. DESIGN: Retrospective, nonconsecutive, noncomparative, interventional case series. PARTICIPANTS: Children suffering from pediatric glaucoma who underwent surgery between March 1997 and October 2006 were included in this study. METHODS: A primary combined deep sclerectomy and trabeculectomy was performed in 35 eyes of 28 patients. Complete examinations were performed before surgery, postoperatively at 1 and 7 days, at 1, 2, 3, 4, 6, 9, and 12 months, and then every 6 months after surgery. MAIN OUTCOME MEASURES: Surgical outcome was assessed in terms of intraocular pressure (IOP) change, additional glaucoma medication, complication rate, need for surgical revision, as well as refractive errors, best-corrected visual acuity (BCVA), and corneal clarity and diameters. RESULTS: The mean age before surgery was 3.6+/-4.5 years, and the mean follow-up was 3.5+/-2.9 years. The mean preoperative IOP was 31.9+/-11.5 mmHg. At the end of follow-up, the mean IOP decreased by 58.3% (P<0.005), and from 14 patients with available BCVA 8 patients (57.1%) achieved 0.5 (20/40) or better, 3 (21.4%) 0.2 (20/100), and 2 (14.3%) 0.1 (20/200) in their better eye. The mean refractive error (spherical equivalent [SE]) at final follow-up visits was +0.83+/-5.4. Six patients (43%) were affected by myopia. The complete and qualified success rates, based on a cumulative survival curve, after 9 years were 52.3% and 70.6%, respectively (P<0.05). Sight-threatening complications were more common (8.6%) in refractory glaucomas. CONCLUSIONS: Combined deep sclerectomy and trabeculectomy is an operative technique developed to control IOP in congenital, secondary, and juvenile glaucomas. The intermediate results are satisfactory and promising. Previous classic glaucoma surgeries performed before this new technique had less favorable results. 
The number of sight-threatening complications is related to the severity of glaucoma and number of previous surgeries. FINANCIAL DISCLOSURE(S): The authors have no proprietary or commercial interest in any materials discussed in this article.
Abstract:
PURPOSE: To explore whether triaxial accelerometric measurements can be utilized to accurately assess speed and incline of running in free-living conditions. METHODS: Body accelerations during running were recorded at the lower back and at the heel by a portable data logger in 20 human subjects, 10 men and 10 women. After parameterizing body accelerations, two neural networks were designed to recognize each running pattern and calculate speed and incline. Each subject ran 18 times on outdoor roads at various speeds and inclines; 12 runs were used to calibrate the neural networks, whereas the 6 other runs were used to validate the model. RESULTS: A small difference between the estimated and the actual values was observed: the root mean square error (RMSE) was 0.12 m·s⁻¹ for speed and 0.014 radian (rad) (or 1.4% in absolute value) for incline. Multiple regression analysis allowed accurate prediction of speed (RMSE = 0.14 m·s⁻¹) but not of incline (RMSE = 0.026 rad, or 2.6% slope). CONCLUSION: Triaxial accelerometric measurements allow an accurate estimation of running speed and terrain incline (the latter with more uncertainty). This will permit the validation of the energetic results generated on the treadmill as applied to more physiological, unconstrained running conditions.
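The RMSE figure of merit used in those results is simply the square root of the mean squared difference between estimates and references; the readings below are invented for illustration:

```python
import math

def rmse(estimated, actual):
    """Root mean square error between paired estimates and reference values."""
    return math.sqrt(
        sum((e - a) ** 2 for e, a in zip(estimated, actual)) / len(actual)
    )

# Hypothetical speed readings in m/s (the study reports RMSE = 0.12 m/s
# for the neural-network speed estimate on validation runs).
speed_rmse = rmse([3.1, 4.0, 5.2], [3.0, 4.1, 5.0])
```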
Abstract:
Multiexponential decays may contain time-constants differing by several orders of magnitude. In such cases, uniform sampling results in very long records featuring a high degree of oversampling at the final part of the transient. Here, we analyze a nonlinear time scale transformation to reduce the total number of samples with minimum signal distortion, achieving an important reduction of the computational cost of subsequent analyses. We propose a time-varying filter whose length is optimized for minimum mean square error.
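A common nonlinear time-scale transformation for this problem is logarithmic resampling (an assumption here; the abstract does not specify the exact mapping): samples are dense where the fast time-constants live and sparse in the oversampled tail.

```python
import math

def log_resample_times(t_min, t_max, n_samples):
    """Sampling instants spaced uniformly in log(t): dense at early times,
    sparse at the end of the transient."""
    a, b = math.log(t_min), math.log(t_max)
    return [math.exp(a + k * (b - a) / (n_samples - 1)) for k in range(n_samples)]

# A decay with time-constants 1 ms and 1 s: 50 log-spaced samples cover
# both scales, where uniform sampling at the fast scale would need ~1e5
# points to reach t = 10 s.
times = log_resample_times(1e-4, 10.0, 50)
decay = [0.5 * math.exp(-t / 1e-3) + 0.5 * math.exp(-t / 1.0) for t in times]
```

The paper's time-varying filter would then limit the aliasing/noise introduced by this nonuniform resampling, with its length chosen for minimum mean square error.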
Abstract:
The estimation of unavailable soil variables from other, related, measured variables can be achieved through pedotransfer functions (PTFs), mainly saving time and reducing cost. Great differences among soils, however, can yield undesirable results when applying this method. This study discusses the application of PTFs developed by several authors using a variety of soils of different characteristics to evaluate the soil water contents of two Brazilian lowland soils. Comparisons are made between PTF-evaluated data and field-measured data, using statistical and geostatistical tools such as mean error, root mean square error, semivariograms, cross-validation, and regression coefficients. The eight PTFs tested to evaluate gravimetric soil water contents (Ug) at tensions of 33 kPa and 1,500 kPa presented a tendency to overestimate Ug at 33 kPa and underestimate Ug at 1,500 kPa. The PTFs were ranked according to their performance and also with respect to their potential to describe the structure of the spatial variability of the set of measured values. Although none of the PTFs changed the distribution pattern of the data, all resulted in means and variances statistically different from those observed for the measured values. The PTFs that presented the best predictive values of Ug at 33 kPa and 1,500 kPa were not the same as those that best reproduced the structure of the spatial variability of these variables.
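The mean-error/RMSE ranking used above can be sketched as follows; the two PTF prediction sets and the measured water contents are hypothetical:

```python
import math

def mean_error(pred, obs):
    """Signed mean error: positive indicates a tendency to overestimate."""
    return sum(p - o for p, o in zip(pred, obs)) / len(obs)

def rmse(pred, obs):
    """Root mean square error of predictions against observations."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

# Hypothetical Ug predictions (cm3/cm3-like fractions) from two PTFs
# against measured values at one tension.
measured = [0.24, 0.30, 0.27]
ptfs = {
    "A": [0.26, 0.33, 0.29],   # consistently overestimates
    "B": [0.23, 0.31, 0.26],
}
ranking = sorted(ptfs, key=lambda name: rmse(ptfs[name], measured))
```

Ranking by RMSE rewards overall accuracy, while the mean error exposes systematic over- or underestimation such as the Ug 33 kPa / 1,500 kPa biases reported above; the geostatistical ranking (semivariograms, cross-validation) is a separate criterion.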
Abstract:
In this paper we describe the results of a simulation study performed to elucidate the robustness of the Lindstrom and Bates (1990) approximation method under non-normality of the residuals in different situations. Concerning the fixed effects, the observed coverage probabilities and the true bias and mean square error values show that some aspects of this inferential approach are not completely reliable. When the true distribution of the residuals is asymmetrical, the true coverage is markedly lower than the nominal one. The best results are obtained for the skew-normal distribution, and not for the normal distribution. On the other hand, the results are partially reversed concerning the random effects. Soybean genotype data are used to illustrate the methods and to motivate the simulation scenarios.
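The coverage-probability comparison can be illustrated with a much simpler Monte Carlo experiment than the paper's: a normal-theory confidence interval for a mean (not the Lindstrom-Bates procedure itself), evaluated under symmetric and asymmetric residual distributions.

```python
import math
import random
import statistics

def coverage(sample_dist, true_mean, n=20, reps=5000, z=1.96, seed=7):
    """Fraction of nominal-95% normal-theory CIs for the mean that
    actually contain true_mean, estimated by Monte Carlo."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        xs = [sample_dist(rng) for _ in range(n)]
        m = statistics.fmean(xs)
        se = statistics.stdev(xs) / math.sqrt(n)
        if m - z * se <= true_mean <= m + z * se:
            hits += 1
    return hits / reps

# Symmetric (normal) vs asymmetric (exponential) residuals, both with mean 1:
# the asymmetric case undercovers the nominal 95% more severely.
cov_norm = coverage(lambda r: r.gauss(1.0, 1.0), true_mean=1.0)
cov_expo = coverage(lambda r: r.expovariate(1.0), true_mean=1.0)
```

This mirrors the paper's finding in miniature: true coverage under an asymmetric error distribution falls below the nominal level even when the interval formula is applied as if normality held.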
Abstract:
In this paper we analyse, using Monte Carlo simulation, the possible consequences of incorrect assumptions about the true structure of the random-effects covariance matrix and the true correlation pattern of the residuals on the performance of an estimation method for nonlinear mixed models. The procedure under study is the well-known linearization method due to Lindstrom and Bates (1990), implemented in the nlme library of S-Plus and R. Its performance is studied in terms of bias, mean square error (MSE), and true coverage of the associated asymptotic confidence intervals. Ignoring other criteria, such as the convenience of avoiding over-parameterised models, it seems worse to erroneously assume some structure than to assume no structure when the latter would be adequate.