956 results for estimation error


Relevance: 30.00%

Publisher:

Abstract:

After attending this presentation, attendees will: (1) understand how body height can be estimated from computed tomography data; and, (2) gain knowledge about the accuracy and limitations of estimated body height. The presentation will impact the forensic science community by providing knowledge and competence that will enable attendees to develop formulas for single bones to reconstruct body height using postmortem Computed Tomography (p-CT) data.

The estimation of Body Height (BH) is an important component of the identification of corpses and skeletal remains. Stature can be estimated with relative accuracy via the measurement of long bones, such as the femora. Compared to time-consuming maceration procedures, p-CT allows fast and simple measurements of bones. This study pursued four objectives concerning the accuracy of BH estimation via p-CT: (1) accuracy between measurements on native bone and p-CT-imaged bone (F1 according to Martin 1914); (2) intra-observer p-CT measurement precision; (3) accuracy between formula-based estimation of BH and conventional body length measurement during autopsy; and, (4) accuracy of the different estimation formulas available.1

In the first step, the accuracy of measurements in the CT compared to those obtained using an osteometric board was evaluated on the basis of eight defleshed femora. Then the femora of 83 female and 144 male corpses of a Swiss population for which p-CTs had been performed were measured at the Institute of Forensic Medicine in Bern. After two months, 20 individuals were measured again in order to assess the intra-observer error. The mean age of the men was 53±17 years and that of the women was 61±20 years. Additionally, the body length of the corpses was measured conventionally. The mean body length was 176.6±7.2cm for men and 163.6±7.8cm for women. The images, obtained using a six-slice CT, were reconstructed with a slice thickness of 1.25mm. Analysis and measurements of the CT images were performed on a multipurpose workstation. As a forensic standard procedure, stature was estimated by means of the regression equations of Penning & Riepert, developed on a Southern German population, and, for comparison, also those referenced by Trotter & Gleser “American White.”2,3 All statistical tests were performed with a statistical software package.

No significant differences were found between the CT and osteometric board measurements. The double p-CT measurement of 20 individuals resulted in an absolute intra-observer difference of 0.4±0.3mm. For both sexes, the correlation between the body length and the estimated BH using the F1 measurements was highly significant; the correlation coefficient was slightly higher for women. The differences in accuracy among the different formulas were small. While the errors of BH estimation were generally ±4.5–5.0cm, taking age into account increased accuracy by a few millimetres to about 1cm. BH estimations according to Penning & Riepert and Trotter & Gleser were slightly more accurate when age-at-death was taken into account.2,3 That way, stature estimations in the group of individuals older than 60 years were improved by about 2.4cm and 3.1cm.2,3 The error of estimation is therefore about a third of the common ±4.7cm error range. Femur measurements in p-CT allow very accurate BH estimations. Estimations according to Penning led to good results that come (barely) closer to the true value than the frequently used formulas by Trotter & Gleser “American White.”2,3 The formulas by Penning & Riepert are therefore also validated for this substantial recent Swiss population.
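The regression step described above can be sketched as a simple linear formula with an optional age correction; the coefficients below are illustrative placeholders, not the published Penning & Riepert or Trotter & Gleser values:

```python
def estimate_stature_cm(femur_length_cm, age_years=None,
                        slope=2.4, intercept=61.4, age_loss_per_year=0.06):
    """Estimate body height from maximum femur length (Martin F1).

    The linear form mirrors regression equations such as Penning & Riepert's;
    slope, intercept, and the age correction are hypothetical values chosen
    only to illustrate the shape of such formulas. The optional correction
    subtracts a small amount per year over 30, reflecting age-related
    stature loss.
    """
    stature = intercept + slope * femur_length_cm
    if age_years is not None and age_years > 30:
        stature -= age_loss_per_year * (age_years - 30)
    return stature
```

For a 45cm femur this hypothetical formula returns 169.4cm, and 167.0cm for a 70-year-old, illustrating how age-at-death shifts the estimate by a few centimetres.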

Relevance: 30.00%

Publisher:

Abstract:

Background: Diabetes mellitus is spreading throughout the world and diabetic individuals have been shown to often assess their food intake inaccurately; therefore, it is a matter of urgency to develop automated diet assessment tools. The recent availability of mobile phones with enhanced capabilities, together with advances in computer vision, has permitted the development of image analysis apps for the automated assessment of meals. GoCARB is a mobile phone-based system designed to support individuals with type 1 diabetes during daily carbohydrate estimation. In a typical scenario, the user places a reference card next to the dish and acquires two images using a mobile phone. A series of computer vision modules detect the plate and automatically segment and recognize the different food items, while their 3D shape is reconstructed. Finally, the carbohydrate content is calculated by combining the volume of each food item with the nutritional information provided by the USDA Nutrient Database for Standard Reference.

Objective: The main objective of this study is to assess the accuracy of the GoCARB prototype when used by individuals with type 1 diabetes and to compare it to their own performance in carbohydrate counting. In addition, the user experience and usability of the system are evaluated by questionnaires.

Methods: The study was conducted at the Bern University Hospital, “Inselspital” (Bern, Switzerland) and involved 19 adult volunteers with type 1 diabetes, each participating once. On each study day, a total of six meals of broad diversity were taken from the hospital’s restaurant and presented to the participants. The food items were weighed on a standard balance and the true amount of carbohydrate was calculated from the USDA nutrient database. Participants were asked to count the carbohydrate content of each meal independently and then by using GoCARB. At the end of each session, a questionnaire was completed to assess the user’s experience with GoCARB.

Results: The mean absolute error was 27.89 (SD 38.20) grams of carbohydrate for the participants’ estimations, whereas the corresponding value for the GoCARB system was 12.28 (SD 9.56) grams of carbohydrate, a significantly better performance (P=.001). In 75.4% (86/114) of the meals the GoCARB automatic segmentation was successful, and 85.1% (291/342) of individual food items were successfully recognized. Most participants found GoCARB easy to use.

Conclusions: This study indicates that the system is able to estimate, on average, the carbohydrate content of meals with higher accuracy than individuals with type 1 diabetes can. The participants thought the app was useful and easy to use. GoCARB seems to be a well-accepted supportive mHealth tool for the assessment of meals served on a plate.
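The final computation described above (volume of each recognized item combined with nutritional information) can be sketched as follows; the food names, densities, and carbohydrate values are invented placeholders, not USDA figures or GoCARB internals:

```python
# Density and nutrient values per 100 g are illustrative placeholders;
# in GoCARB the nutritional information comes from the USDA Nutrient
# Database for Standard Reference.
FOOD_DB = {
    # food: (density in g/cm^3, grams of carbohydrate per 100 g)
    "pasta":    (0.60, 25.0),
    "broccoli": (0.35,  7.0),
}

def carbs_from_volume(items):
    """Total carbohydrate for a meal.

    items: list of (food_name, volume_cm3) pairs, as produced by the
    segmentation/recognition and 3D reconstruction stages.
    """
    total = 0.0
    for food, volume_cm3 in items:
        density, carb_per_100g = FOOD_DB[food]
        grams = density * volume_cm3            # volume -> mass
        total += grams * carb_per_100g / 100.0  # mass -> carbohydrate
    return total
```

For example, 200 cm³ of "pasta" and 100 cm³ of "broccoli" under these placeholder values yield 32.45 g of carbohydrate.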

Relevance: 30.00%

Publisher:

Abstract:

This study proposed a novel statistical method that models the multiple outcomes and the missing data process jointly using item response theory. The method follows the "intent-to-treat" principle in clinical trials and accounts for the correlation between outcomes and the missing data process, and it may provide a good solution for studies of chronic mental disorders. The simulation study demonstrated that if the true model is the proposed model with moderate or strong correlation, ignoring the within-subject correlation may lead to overestimation of the treatment effect and inflate the type I error above the specified level. Even if the within-subject correlation is small, the performance of the proposed model is as good as that of the naïve response model. Thus, the proposed model is robust across different correlation settings when the data are generated by the proposed model.

Relevance: 30.00%

Publisher:

Abstract:

The isotopic composition of surface seawater is widely used to infer past changes in sea surface salinity using paired foraminiferal Mg/Ca and δ18O from marine sediments. At low latitudes, paleosalinity reconstructions using this method have largely been used to document changes in the hydrological cycle. This method usually assumes that the modern seawater δ18O (δ18Osw)/salinity relationship remained constant through time. Modelling studies have shown that such assumptions may not be valid, because large-scale atmospheric circulation patterns linked to global climate changes can alter the δ18Osw/salinity relationship locally. Such processes have not been evidenced by paleo-data so far, because there is presently no way to reconstruct past changes in the δ18Osw/salinity relationship. We have addressed this issue by applying a multi-proxy salinity reconstruction to a marine sediment core collected in the Gulf of Guinea. We measured hydrogen isotopes in C37:2 alkenones (δDa) to estimate changes in seawater δD. We find a smooth, long-term increase of ~10‰ in δDa between 10 and 3 kyr BP, followed by a rapid decrease of ~10‰ in δDa between 3 kyr BP and the core top, to values slightly lighter than during the early Holocene. These features are inconsistent with published salinity estimations based on δ18Osw and foraminiferal Ba/Ca, as well as with the nearby continental rainfall history derived from pollen analysis. We combined δDa and δ18Osw values to reconstruct a Holocene record of salinity and compared it to a Ba/Ca-derived salinity record from the same sedimentary sequence. This combined method provides salinity trends that are in better agreement with both the Ba/Ca-derived salinity and the regional precipitation changes inferred from pollen records.

Our results illustrate that changes in atmospheric circulation can trigger changes in precipitation isotopes in a counter-intuitive manner that ultimately impacts surface salinity estimates based on seawater isotopic values. Our data suggest that the trends in Holocene rainfall isotopic values at low latitudes may not uniquely result from changes in local precipitation associated with the amount effect.

Relevance: 30.00%

Publisher:

Abstract:

The long-term warmth of the Eocene (~56 to 34 million years ago) is commonly associated with elevated partial pressure of atmospheric carbon dioxide (pCO2). However, a direct relationship between the two has not been established for short-term climate perturbations. We reconstructed changes in both pCO2 and temperature over an episode of transient global warming called the Middle Eocene Climatic Optimum (MECO; ~40 million years ago). Organic molecular paleothermometry indicates a warming of southwest Pacific sea surface temperatures (SSTs) by 3° to 6°C. Reconstructions of pCO2 indicate a concomitant increase by a factor of 2 to 3. The marked consistency between SST and pCO2 trends during the MECO suggests that elevated pCO2 played a major role in global warming during the MECO.

Relevance: 30.00%

Publisher:

Abstract:

Synthetic Aperture Radar (SAR) images a target region's reflectivity function in the multi-dimensional spatial domain of range and cross-range. SAR synthesizes a large-aperture radar in order to achieve a finer azimuth resolution than the one provided by any on-board real antenna. Conventional SAR techniques assume a single reflection of the transmitted waveforms from targets. Nevertheless, today's new scenes force SAR systems to work in urban environments. Consequently, multiple-bounce returns are added to direct-scatter echoes. We refer to these as ghost images, since they obscure the true target image and lead to poor resolution. By analyzing the quadratic phase error (QPE), this paper demonstrates that the Earth's curvature influences the defocusing degree of multipath returns. In addition to the QPE, other parameters such as the integrated sidelobe ratio (ISLR), peak sidelobe ratio (PSLR), contrast (C) and entropy (E) provide us with the tools to identify direct-scatter echoes in images containing undesired returns coming from multipath.
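Two of the focus metrics named above, contrast and entropy, have standard definitions that can be sketched as follows; this is a generic illustration of the metrics, not the paper's implementation:

```python
import numpy as np

def contrast(img):
    """Ratio of intensity standard deviation to mean.

    Well-focused SAR responses tend to show higher contrast than
    defocused multipath ghosts."""
    a = np.abs(img).astype(float)
    return a.std() / a.mean()

def entropy(img):
    """Shannon entropy (bits) of the normalized intensity distribution.

    Lower entropy indicates energy concentrated in few pixels,
    i.e. a sharper, better-focused image."""
    p = np.abs(img).astype(float) ** 2
    p /= p.sum()
    p = p[p > 0]                       # avoid log(0)
    return -(p * np.log2(p)).sum()
```

A perfectly flat image has zero contrast and maximal entropy (log2 of the pixel count), the opposite extreme from a focused point target.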

Relevance: 30.00%

Publisher:

Abstract:

Mesh adaptation based on error estimation has become a key technique to improve the accuracy of computational-fluid-dynamics computations. The adjoint-based approach for error estimation is one of the most promising techniques for computational-fluid-dynamics applications. Nevertheless, the level of implementation of this technique in the aeronautical industrial environment is still low because it is a computationally expensive method. In the present investigation, a new mesh refinement method based on the estimation of the truncation error is presented in the context of finite-volume discretization. The estimation method uses auxiliary coarser meshes to estimate the local truncation error, which can be used for driving an adaptation algorithm. The method is demonstrated in the context of two-dimensional NACA0012 and three-dimensional ONERA M6 wing inviscid flows, and the results are compared against the adjoint-based approach and physical sensors based on features of the flow field.
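The idea of estimating the local truncation error with an auxiliary coarser mesh can be illustrated on a 1D model problem (u'' = f discretized with central differences); this is a minimal sketch of the principle, not the paper's finite-volume implementation:

```python
import numpy as np

def coarse_grid_truncation_error(u_fine, f, h):
    """Estimate the local truncation error of a second-order central
    difference discretization of u'' = f.

    The fine-grid solution is restricted to a coarser grid of spacing
    2h (here by simple injection, taking every other point) and the
    coarse-grid operator is applied to it. The residual against f on
    the coarse grid approximates the coarse-grid truncation error."""
    u_c = u_fine[::2]          # injection onto the coarse grid
    f_c = f[::2]
    H = 2.0 * h                # coarse spacing
    lap = (u_c[:-2] - 2.0 * u_c[1:-1] + u_c[2:]) / H**2
    return lap - f_c[1:-1]     # estimated truncation error at interior nodes
```

For u = sin(x) on [0, π] with 101 fine points, the estimate recovers the expected O(H²) behaviour, peaking near H²/12 ≈ 3.3e-4.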

Relevance: 30.00%

Publisher:

Abstract:

The understanding of embryogenesis in living systems requires reliable quantitative analysis of cell migration throughout all the stages of development. This is a major challenge for "in-toto" reconstruction based on different modalities of "in-vivo" imaging techniques: spatio-temporal resolution, image artifacts, and noise. Several methods for cell tracking are available, but expensive manual interaction (time and human resources) is always required to enforce coherence. Because of this limitation, it is necessary to restrict the experiments or to assume an uncontrolled error rate. Is it possible to obtain automated, reliable measurements of migration? Can we provide a seed for biologists to complete cell lineages efficiently? We propose a filtering technique that considers trajectories as spatio-temporal connected structures and prunes out those that might introduce noise and false positives by using multi-dimensional morphological operators.
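A crude version of pruning trajectories with multi-dimensional morphological operators can be sketched as an opening on a binary detection volume; rasterizing trajectories into a (t, y, x) grid and using a purely temporal structuring element are simplifying assumptions for illustration, not the authors' method:

```python
import numpy as np
from scipy.ndimage import binary_opening

def prune_short_tracks(volume, min_length=3):
    """Prune spurious detections from a spatio-temporal volume.

    volume: boolean array of shape (t, y, x), True where a cell was
    detected. A morphological opening with a purely temporal structuring
    element keeps only detections that persist for at least `min_length`
    consecutive frames, removing isolated false positives while leaving
    long connected trajectories intact."""
    selem = np.ones((min_length, 1, 1), dtype=bool)
    return binary_opening(volume, structure=selem)
```

A detection present for five consecutive frames survives the opening, while a single-frame false positive is removed.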

Relevance: 30.00%

Publisher:

Abstract:

This study was motivated by the need to improve the densification of Global Horizontal Irradiance (GHI) observations: increasing the number of surface weather stations that observe it, using sensors with a sub-hour periodicity, and examining methods of spatial GHI estimation (by interpolation) at that periodicity in other locations. The aim of the present research project is to analyze the goodness of 15-minute GHI spatial estimations for five methods in the territory of Spain (three geo-statistical interpolation methods, one deterministic method, and the HelioSat2 method, which is based on satellite images). The research concludes that, when the work area has adequate station density, the best method for estimating GHI every 15 min is Regression Kriging interpolation using GHI estimated from satellite images as one of the input variables. On the contrary, when station density is low, the best method is estimating GHI directly from satellite images. A comparison between the GHI observed by volunteer stations and the estimation model applied concludes that 67% of the volunteer stations analyzed present values within the margin of error (average of ±2 standard deviations).
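As a point of reference, a minimal deterministic interpolator (inverse distance weighting) can be sketched as follows; the study's best-performing method, Regression Kriging with satellite-derived GHI as a covariate, is substantially more involved than this sketch:

```python
import numpy as np

def idw(xy_obs, ghi_obs, xy_target, power=2.0):
    """Inverse-distance-weighted estimate of GHI at a target location.

    xy_obs: (n, 2) array of station coordinates.
    ghi_obs: (n,) array of 15-minute GHI observations at those stations.
    xy_target: (2,) coordinates of the location to estimate."""
    d = np.linalg.norm(xy_obs - xy_target, axis=1)
    if np.any(d == 0):
        return ghi_obs[np.argmin(d)]   # exact at an observing station
    w = 1.0 / d**power
    return float(np.sum(w * ghi_obs) / np.sum(w))
```

A target midway between two equidistant stations receives the average of their observations, and the estimate reproduces the observed value exactly at a station location.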

Relevance: 30.00%

Publisher:

Abstract:

In this paper three p-adaptation strategies based on the minimization of the truncation error are presented for high order discontinuous Galerkin methods. The truncation error is approximated by means of a τ-estimation procedure and enables the identification of mesh regions that require adaptation. Three adaptation strategies are developed and termed a posteriori, quasi-a priori and quasi-a priori corrected. All strategies require fine solutions, which are obtained by enriching the polynomial order, but while the former needs time-converged solutions, the last two rely on non-converged solutions, which leads to faster computations. In addition, the high order method permits the spatial decoupling of the estimated errors and enables anisotropic p-adaptation. These strategies are verified and compared in terms of accuracy and computational cost for the Euler and the compressible Navier-Stokes equations. It is shown that the two quasi-a priori methods achieve a significant reduction in computational cost when compared to a uniform polynomial enrichment. Namely, for a viscous boundary layer flow, we obtain a speedup of 6.6 and 7.6 for the quasi-a priori and quasi-a priori corrected approaches, respectively.
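The adaptation decision driven by the τ-estimates can be sketched as a simple per-element loop; the thresholds and order bounds below are illustrative choices, not the paper's:

```python
def adapt_orders(tau_est, p, tol, p_max=8):
    """Schematic p-adaptation step driven by truncation error estimates.

    tau_est: per-element truncation error estimates.
    p: current per-element polynomial orders.
    Elements whose estimate exceeds the tolerance are enriched by one
    order; elements whose estimate is far below it (here, a factor of 10,
    an illustrative margin) are coarsened."""
    new_p = []
    for tau, pe in zip(tau_est, p):
        if tau > tol and pe < p_max:
            new_p.append(pe + 1)          # refine: error too large
        elif tau < 0.1 * tol and pe > 1:
            new_p.append(pe - 1)          # coarsen: error comfortably small
        else:
            new_p.append(pe)              # keep current order
    return new_p
```

In an anisotropic variant, the same rule would be applied per coordinate direction using the spatially decoupled error estimates.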

Relevance: 30.00%

Publisher:

Abstract:

In this work a p-adaptation (modification of the polynomial order) strategy based on the minimization of the truncation error is developed for high order discontinuous Galerkin methods. The truncation error is approximated by means of a truncation error estimation procedure and enables the identification of mesh regions that require adaptation. Three truncation error estimation approaches are developed and termed a posteriori, quasi-a priori and quasi-a priori corrected. Fine solutions, which are obtained by enriching the polynomial order, are required to solve the numerical problem with adequate accuracy. Of the three truncation error estimation methods, the first needs time-converged solutions, while the last two rely on non-converged solutions, which leads to faster computations. Based on these truncation error estimation methods, algorithms for mesh adaptation were designed and tested. Firstly, an isotropic adaptation approach is presented, which leads to equally distributed polynomial orders in the different coordinate directions. This first implementation is improved by incorporating a method to extrapolate the truncation error, which results in a significant reduction of computational cost. Secondly, the employed high order method permits the spatial decoupling of the estimated errors and enables anisotropic p-adaptation. The incorporation of anisotropic features leads to meshes with different polynomial orders in the different coordinate directions, such that flow features related to the geometry are resolved in a better manner. These adaptations result in a significant reduction of degrees of freedom and computational cost, while the amount of improvement depends on the test case. Finally, this anisotropic approach is extended by using error extrapolation, which leads to an even higher reduction in computational cost. These strategies are verified and compared in terms of accuracy and computational cost for the Euler and the compressible Navier-Stokes equations.
The main result is that the two quasi-a priori methods achieve a significant reduction in computational cost when compared to a uniform polynomial enrichment. Namely, for a viscous boundary layer flow, we obtain a speedup of a factor of 6.6 and 7.6 for the quasi-a priori and quasi-a priori corrected approaches, respectively.
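The error extrapolation step mentioned above can be sketched by fitting an assumed exponential decay of the truncation error with polynomial order from two measured estimates; the decay model is an assumption made for illustration, not the thesis's exact procedure:

```python
import math

def extrapolate_tau(tau_p1, tau_p2, p1, p2, p_target):
    """Extrapolate a truncation error estimate to a higher order.

    Assumes exponential decay tau(p) ~ C * exp(-a * p), the behaviour
    expected for smooth solutions with high order (spectral/hp) methods.
    The constants C and a are fitted from two measured estimates at
    orders p1 and p2, avoiding the cost of computing the estimate at
    the target order directly."""
    a = math.log(tau_p1 / tau_p2) / (p2 - p1)   # fitted decay rate
    C = tau_p1 * math.exp(a * p1)               # fitted amplitude
    return C * math.exp(-a * p_target)
```

When the error really does decay as exp(-p), two samples at p = 2 and p = 3 recover the value at p = 5 exactly, which is what makes the extrapolation cheap relative to enriching the solution.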

Relevance: 30.00%

Publisher:

Abstract:

We propose a general procedure for solving incomplete data estimation problems. The procedure can be used to find the maximum likelihood estimate or to solve estimating equations in difficult cases such as estimation with the censored or truncated regression model, the nonlinear structural measurement error model, and the random effects model. The procedure is based on the general principle of stochastic approximation and the Markov chain Monte Carlo method. Applying the theory of adaptive algorithms, we derive conditions under which the proposed procedure converges. Simulation studies also indicate that the proposed procedure consistently converges to the maximum likelihood estimate for the structural measurement error logistic regression model.
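A toy instance of this kind of procedure, stochastic approximation with Monte Carlo imputation for the mean of a right-censored normal sample, might look as follows; the model, step-size schedule, and rejection sampler are illustrative simplifications, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

def sa_censored_mean(y, censored, c, n_iter=2000, sigma=1.0):
    """Stochastic-approximation estimate of a normal mean from
    right-censored data.

    y: observations, with censored entries recorded as the bound c.
    censored: boolean mask of censored entries.
    Each iteration imputes the censored values by sampling from
    N(theta, sigma^2) truncated above c (simple rejection sampling),
    then moves theta toward the completed-data mean with a decreasing
    Robbins-Monro step size gamma_k = 1/k."""
    theta = np.mean(y)
    for k in range(1, n_iter + 1):
        z = y.copy()
        for i in np.flatnonzero(censored):
            while True:                      # rejection sampling from the
                draw = rng.normal(theta, sigma)   # truncated normal
                if draw > c:
                    z[i] = draw
                    break
        gamma = 1.0 / k
        theta += gamma * (z.mean() - theta)
    return theta
```

With data simulated from N(0, 1) and censoring at 0.5, the estimate settles near the true mean even though the naive sample mean of the censored data is biased downward.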

Relevance: 30.00%

Publisher:

Abstract:

When many protein sequences are available for estimating the time of divergence between two species, it is customary to estimate the time for each protein separately and then use the average for all proteins as the final estimate. However, it can be shown that this estimate generally has an upward bias, and that an unbiased estimate is obtained by using distances based on concatenated sequences. We have shown that two concatenation-based distances, i.e., average gamma distance weighted with sequence length (d2) and multiprotein gamma distance (d3), generally give more satisfactory results than other concatenation-based distances. Using these two distance measures for 104 protein sequences, we estimated the time of divergence between mice and rats to be approximately 33 million years ago. Similarly, the time of divergence between humans and rodents was estimated to be approximately 96 million years ago. We also investigated the dependency of time estimates on statistical methods and various assumptions made by using sequence data from eubacteria, protists, plants, fungi, and animals. Our best estimates of the times of divergence between eubacteria and eukaryotes, between protists and other eukaryotes, and between plants, fungi, and animals were 3, 1.7, and 1.3 billion years ago, respectively. However, estimates of ancient divergence times are subject to a substantial amount of error caused by uncertainty of the molecular clock, horizontal gene transfer, errors in sequence alignments, etc.
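The length-weighted distance (the d2 measure) and the basic distance-to-time conversion can be sketched as follows; the substitution rate used in the usage example is a hypothetical value, not one estimated in the study:

```python
def weighted_distance(distances, lengths):
    """Average distance weighted by sequence length (the d2 measure).

    Weighting by length mimics concatenating the sequences, which avoids
    the upward bias incurred by averaging per-protein time estimates."""
    total_len = sum(lengths)
    return sum(d * L for d, L in zip(distances, lengths)) / total_len

def divergence_time(distance, rate):
    """T = d / (2r).

    The factor of two appears because substitutions accumulate
    independently along both lineages since their split; `rate` is in
    substitutions per site per year."""
    return distance / (2.0 * rate)
```

For example, distances of 0.1 and 0.3 over proteins of 100 and 300 sites give a weighted distance of 0.25, which under a hypothetical rate of 1.25e-9 substitutions per site per year corresponds to a divergence time of 100 million years.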

Relevance: 30.00%

Publisher:

Abstract:

This paper deals with the estimation of a time-invariant channel spectrum from its own nonuniform samples, assuming there is a bound on the channel’s delay spread. Except for this last assumption, this is the basic estimation problem in systems providing channel spectral samples. However, as shown in the paper, the delay spread bound leads us to view the spectrum as a band-limited signal, rather than the Fourier transform of a tapped delay line (TDL). Using this alternative model, a linear estimator is presented that approximately minimizes the expected root-mean-square (RMS) error for a deterministic channel. Its main advantage over the TDL is that it takes into account the spectrum’s smoothness (time width), thus providing a performance improvement. The proposed estimator is compared numerically with the maximum likelihood (ML) estimator based on a TDL model in pilot-assisted channel estimation (PACE) for OFDM.
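The TDL baseline that the paper compares against can be sketched as a least-squares fit of tap weights to the nonuniform spectral samples; the tap spacing, number of taps, and sample layout below are illustrative assumptions:

```python
import numpy as np

def tdl_ls_estimate(f, H_obs, n_taps, T):
    """Least-squares fit of a tapped-delay-line (TDL) model to nonuniform
    channel spectral samples: H(f) = sum_k h_k * exp(-j 2 pi f k T).

    This is the TDL approach the paper's band-limited estimator is
    compared against. `T` is the tap spacing, chosen so that
    n_taps * T covers the assumed delay spread bound.
    Returns the tap estimates and the reconstructed spectrum at f."""
    F = np.exp(-2j * np.pi * np.outer(f, np.arange(n_taps)) * T)
    h, *_ = np.linalg.lstsq(F, H_obs, rcond=None)
    return h, F @ h
```

In the noiseless case, with at least as many spectral samples as taps, the least-squares fit recovers the taps exactly; the proposed estimator differs by modelling the spectrum as a band-limited signal and exploiting its time width.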