Abstract:
Simulation is an important resource for researchers in diverse fields. However, many researchers have found flaws in the methodology of published simulation studies and have described the state of the simulation community as being in a crisis of credibility. This work describes the project of the Simulation Automation Framework for Experiments (SAFE), which addresses the issues that undermine credibility by automating the workflow in the execution of simulation studies. Automation reduces the number of opportunities for users to introduce error into the scientific process, thereby improving the credibility of the final results. Automation also eases the job of simulation users and allows them to focus on the design of models and the analysis of results rather than on the complexities of the workflow.
Abstract:
Recent research highlights the promise of remotely sensed aerosol optical depth (AOD) as a proxy for ground-level PM2.5. Particular interest lies in the information on spatial heterogeneity potentially provided by AOD, with important application to estimating and monitoring pollution exposure for public health purposes. Given the temporal and spatio-temporal correlations reported between AOD and PM2.5, it is tempting to interpret the spatial patterns in AOD as reflecting patterns in PM2.5. Here we find only limited spatial associations of AOD from three satellite retrievals with PM2.5 over the eastern U.S. at the daily and yearly levels in 2004. We then use statistical modeling to show that the patterns in monthly average AOD poorly reflect patterns in PM2.5 because of systematic, spatially correlated error in AOD as a proxy for PM2.5. Furthermore, when we include AOD as a predictor of monthly PM2.5 in a statistical prediction model that already accounts for land use, emission sources, meteorology, and regional variability, AOD provides little additional information to improve predictions of PM2.5. These results suggest caution in using spatial variation in AOD to stand in for spatial variation in ground-level PM2.5 in epidemiological analyses and indicate that when PM2.5 monitoring is available, careful statistical modeling outperforms the use of AOD.
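As a hedged illustration of the modeling comparison described above (not the study's actual model or data), the sketch below fits a simple regression for PM2.5 with and without AOD on synthetic data in which AOD is a noisy proxy; all variable names and noise levels are assumptions.

```python
# Minimal sketch, assuming synthetic data: does AOD add predictive skill for
# PM2.5 once other covariates are in the model? Not the study's actual model.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 2000
land_use = rng.normal(size=n)
emissions = rng.normal(size=n)
pm25 = 10 + 2 * land_use + 1.5 * emissions + rng.normal(0, 1, n)
aod = 0.05 * pm25 + rng.normal(0, 0.5, n)  # AOD as a noisy proxy for PM2.5

X_base = np.column_stack([land_use, emissions])
X_aod = np.column_stack([land_use, emissions, aod])
for name, X in (("without AOD", X_base), ("with AOD", X_aod)):
    r2 = cross_val_score(LinearRegression(), X, pm25, cv=10, scoring="r2").mean()
    print(f"{name}: cross-validated R^2 = {r2:.3f}")
```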
Abstract:
We describe a method for evaluating an ensemble of predictive models given a sample of observations comprising the model predictions and the outcome event measured with error. Our formulation allows us to simultaneously estimate measurement error parameters, the true outcome (the "gold standard"), and a relative weighting of the predictive scores. We describe conditions necessary to estimate the gold standard and for these estimates to be calibrated, and detail how our approach is related to, but distinct from, standard model combination techniques. We apply our approach to data from a study to evaluate a collection of BRCA1/BRCA2 gene mutation prediction scores. In this example, genotype is measured with error by one or more genetic assays. We estimate the true genotype for each individual in the dataset, the operating characteristics of the commonly used genotyping procedures, and a relative weighting of the scores. Finally, we compare the scores against the gold standard genotype and find that Mendelian scores are, on average, the more refined and better calibrated of those considered, and that the comparison is sensitive to measurement error in the gold standard.
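The sensitivity of score comparisons to an error-prone gold standard can be illustrated with a small simulation. The sketch below is not the authors' estimation method; it simply shows how assay misclassification (assumed sensitivity and specificity values) distorts the apparent calibration of a perfectly calibrated score.

```python
# Hedged simulation sketch: a perfectly calibrated risk score looks
# miscalibrated when evaluated against an error-prone "gold standard".
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
p = rng.uniform(0.01, 0.99, n)        # calibrated carrier probabilities
truth = rng.random(n) < p             # true genotype
sens, spec = 0.90, 0.90               # assumed assay characteristics
observed = np.where(truth, rng.random(n) < sens, rng.random(n) > spec)

bins = np.digitize(p, np.linspace(0, 1, 11))
for b in (3, 8):                      # two example risk deciles
    mask = bins == b
    print(f"bin {b}: predicted {p[mask].mean():.2f}, "
          f"true rate {truth[mask].mean():.2f}, "
          f"assay rate {observed[mask].mean():.2f}")
```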
Abstract:
This dissertation discusses structural-electrostatic modeling techniques, genetic algorithm based optimization, and control design for electrostatic micro devices. First, an alternative modeling technique, the interpolated force model, for electrostatic micro devices is discussed. The method provides improved computational efficiency relative to a benchmark model, as well as improved accuracy for irregular electrode configurations relative to a common approximate model, the parallel plate approximation model. For the configuration most similar to two parallel plates, expected to be the best case for the approximate model, both the parallel plate approximation model and the interpolated force model maintained less than 2.2% error in static deflection compared to the benchmark model. For the configuration expected to be the worst case for the parallel plate approximation model, the interpolated force model maintained less than 2.9% error in static deflection, while the parallel plate approximation model was incapable of handling the configuration. Second, genetic algorithm based optimization is shown to improve the design of an electrostatic micro sensor. The design space is enlarged from published design spaces to include the configuration of both sensing and actuation electrodes, material distribution, actuation voltage, and other geometric dimensions. For a small population, the design was improved by approximately a factor of 6 over 15 generations to a fitness value of 3.2 fF. For a larger population seeded with the best configurations of the previous optimization, the design was improved by another 7% in 5 generations to a fitness value of 3.0 fF. Third, a learning control algorithm is presented that reduces the closing time of a radio-frequency microelectromechanical systems (RF MEMS) switch by minimizing bounce while maintaining robustness to fabrication variability. Electrostatic actuation of the plate causes pull-in with high impact velocities, which are difficult to control due to parameter variations from part to part. A single degree-of-freedom model was used to design a learning control algorithm that shapes the actuation voltage based on the open/closed state of the switch. Experiments on 3 test switches show that after 5–10 iterations, the learning algorithm lands the switch with an impact velocity not exceeding 0.2 m/s, eliminating bounce.
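For context, the parallel plate approximation model mentioned above treats the electrodes as an ideal parallel-plate capacitor; a minimal sketch of that textbook force law (not the dissertation's interpolated force model) follows, with example numbers assumed.

```python
# Textbook parallel-plate electrostatic force, F = eps0 * A * V^2 / (2 g^2).
# A sketch of the approximate baseline model only; example values are assumed.
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def parallel_plate_force(voltage, area, gap):
    """Attractive force (N) between parallel plates at the given gap (m)."""
    return EPS0 * area * voltage**2 / (2.0 * gap**2)

# Example: 40 V across a 200 um x 200 um plate with a 2 um gap.
print(parallel_plate_force(40.0, 200e-6 * 200e-6, 2e-6))  # ~7.1e-5 N
```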
Abstract:
A significant cost for foundations is the design and installation of piles when they are required due to poor ground conditions. Not only is it important that piles be designed properly, but also that the installation equipment and total cost be evaluated. To assist in the evaluation of piles, a number of methods have been developed. In this research, three of these methods were investigated: those developed by the Federal Highway Administration, the US Army Corps of Engineers, and the American Petroleum Institute (API). The results from these methods were entered into the program GRLWEAP to assess pile drivability and to provide a standard basis for comparing the three methods. An additional element of this research was to develop Excel spreadsheets to implement these three methods. Currently, the Army Corps and API methods have no publicly available software and must be performed manually, which requires reading data off figures and tables and can introduce error into the prediction of pile capacities. Following development of the Excel spreadsheets, they were validated with both manual calculations and existing data sets to ensure that the output is correct. To evaluate the three pile capacity methods, data were used from four project sites in North America. The data included site geotechnical data along with field-determined pile capacities. To achieve a standard comparison of the data, the pile capacities and geotechnical data from the three methods were entered into GRLWEAP. The sites consisted of both cohesive and cohesionless soils: one site was primarily cohesive, one was primarily cohesionless, and the other two consisted of inter-bedded cohesive and cohesionless soils. Based on this limited set of data, the results indicated that the US Army Corps of Engineers method compared most closely with the field test data, followed by the API method to a lesser degree. The DRIVEN program compared favorably in cohesive soils but over-predicted in cohesionless material.
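To illustrate the kind of hand calculation such spreadsheets automate, the sketch below implements the alpha method for unit skin friction in clay as given in API RP 2A, from memory; treat it as a single-layer simplification with assumed inputs, not the research spreadsheets themselves.

```python
# Hedged sketch of the API RP 2A alpha method for clay side friction:
# alpha = 0.5 * psi^-0.5 (psi <= 1) or 0.5 * psi^-0.25 (psi > 1), alpha <= 1,
# where psi = su / sigma'_v at the depth in question.
def api_alpha(su, sigma_v_eff):
    psi = su / sigma_v_eff
    alpha = 0.5 * psi**-0.5 if psi <= 1.0 else 0.5 * psi**-0.25
    return min(alpha, 1.0)

def unit_skin_friction(su, sigma_v_eff):
    """Unit skin friction f = alpha * su (same units as su)."""
    return api_alpha(su, sigma_v_eff) * su

print(unit_skin_friction(su=50.0, sigma_v_eff=100.0))  # kPa, example layer
```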
Abstract:
Range estimation is the core of many positioning systems, such as radar and Wireless Local Positioning Systems (WLPS). Range is estimated by estimating Time-of-Arrival (TOA), which represents the signal propagation delay between a transmitter and a receiver; thus, error in TOA estimation degrades range estimation performance. In wireless environments, noise, multipath, and limited bandwidth reduce TOA estimation performance. TOA estimation algorithms designed for wireless environments aim to improve performance by mitigating the effect of closely spaced paths in practical (positive) signal-to-noise ratio (SNR) regions. Limited bandwidth prevents the discrimination of closely spaced paths, which reduces TOA estimation performance. TOA estimation methods are evaluated as a function of SNR, bandwidth, and the number of reflections in multipath wireless environments, as well as their complexity. In this research, a TOA estimation technique based on Blind Signal Separation (BSS) is proposed. This frequency domain method estimates TOA in wireless multipath environments for a given signal bandwidth. The structure of the proposed technique is presented, and its complexity and performance are theoretically evaluated. It is shown that the proposed method is not sensitive to SNR, the number of reflections, or bandwidth. In general, as bandwidth increases, TOA estimation performance improves. However, spectrum is the most valuable resource in wireless systems, and a large portion of spectrum to support high performance TOA estimation is usually not available. In addition, the radio frequency (RF) components of wideband systems suffer from high cost and complexity. Thus, a novel multiband positioning structure is proposed. The proposed technique uses the available (non-contiguous) bands to support high performance TOA estimation. This system incorporates the capabilities of cognitive radio (CR) systems to sense the available spectrum (also called white spaces) and to exploit those white spaces for high-performance localization. First, contiguous bands divided into several non-equal, narrow sub-bands with the same SNR are concatenated to attain an accuracy corresponding to the equivalent full band. Two radio architectures are proposed and investigated: the signal is transmitted over the available spectrum either simultaneously (parallel concatenation) or sequentially (serial concatenation). Low complexity radio designs that handle the concatenation process sequentially and in parallel are introduced. Different TOA estimation algorithms applicable to multiband scenarios are studied, and their performance is theoretically evaluated and compared to simulations. Next, the results are extended to non-contiguous, non-equal sub-bands with the same SNR, which are more realistic assumptions in practical systems. The performance and complexity of the proposed technique are investigated as well. This study's results show that positioning accuracy can be adapted by selecting the bandwidth, center frequency, and SNR level of each sub-band.
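Because range = c × TOA, timing error converts directly into range error, which is why the abstract treats TOA accuracy as the core problem. A back-of-envelope sketch (illustrative numbers only):

```python
# Range error implied by TOA error: delta_range = c * delta_toa.
C = 299_792_458.0  # speed of light, m/s

for toa_error_ns in (1, 10, 100):
    range_error_m = C * toa_error_ns * 1e-9
    print(f"{toa_error_ns:>4} ns TOA error -> {range_error_m:6.1f} m range error")
```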
Abstract:
The performance of memory-guided saccades with two different delays (3 and 30 s of memorization) was studied in seven healthy subjects. Double-pulse transcranial magnetic stimulation (dTMS) with an interstimulus interval of 100 ms was applied over the right dorsolateral prefrontal cortex (DLPFC) early (1 s after target presentation) and late (28 s after target presentation). For both delays, early stimulation significantly increased the percentage of error in amplitude (PEA) of contralateral memory-guided saccades compared to the control experiment without stimulation. dTMS applied late in the delay had no significant effect on PEA. Furthermore, we found a significantly smaller effect of early stimulation in the long-delay paradigm. These results suggest a time-dependent hierarchical organization of spatial working memory with a functional dominance of the DLPFC during early memorization, independent of the memorization delay. For a long memorization delay, however, working memory seems to have an additional, DLPFC-independent component.
Abstract:
The double-echo steady-state (DESS) sequence generates two signal echoes that are characterized by different contrast behavior. Based on these two contrasts, the underlying T2 can be calculated. For a flip angle of 90 degrees, the calculated T2 becomes independent of T1, but with very low signal-to-noise ratio (SNR). In the present study, the estimation of cartilage T2 based on DESS with a reduced flip angle was investigated, with the goal of optimizing SNR while simultaneously minimizing the error in T2. This approach was validated in phantoms and on volunteers. T2 estimations based on DESS at different flip angles were compared with standard multiecho spin-echo T2. Furthermore, DESS-T2 estimations were used in a volunteer and in an initial study on patients after cartilage repair of the knee. A flip angle of 33 degrees was the best compromise for the combination of DESS-T2 mapping and morphological imaging. For this flip angle, the Pearson correlation was 0.993 in the phantom study (approximately 20% relative difference between SE-T2 and DESS-T2) and varied between 0.429 and 0.514 in the volunteer study. Measurements in patients showed comparable results for both techniques with regard to zonal assessment. This DESS-T2 approach represents an opportunity to combine morphological and quantitative cartilage MRI in a rapid one-step examination.
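One commonly cited approximation relates the two DESS echoes to T2 in the high flip angle limit the abstract mentions, T2 ≈ -2(TR - TE) / ln(S2/S1); the sketch below uses that relation with invented values and is not the study's flip-angle-corrected estimation.

```python
# Hedged sketch of a common DESS T2 approximation (90-degree limit),
# T2 ~ -2 * (TR - TE) / ln(S2 / S1); illustrative values, times in ms.
import numpy as np

def dess_t2(s1, s2, tr, te):
    """Estimate T2 (ms) from the two DESS echo intensities S1 and S2."""
    return -2.0 * (tr - te) / np.log(s2 / s1)

print(dess_t2(s1=100.0, s2=40.0, tr=20.0, te=5.0))  # ~32.7 ms, cartilage-like
```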
Abstract:
Although the origins of European unification lay in the economic sphere, integration had a political character from the beginning. The Treaties of Rome already contained the beginnings of constitutionalization, and describing the treaties as a constitution became increasingly common among legal scholars from the 1960s onward, even though this was always contested. Regardless of the dispute over the concept of a constitution, the legal order formed by the treaties has, in substance, constitutional character: it contains provisions commonly associated with a state constitution. European integration has always been sustained by constitutional ideals, which is why the member states can also be described as a community of constitutional law. The constitutionalization process was significantly advanced by the convention method and the drafting of the Charter of Fundamental Rights, and it continued with the draft Constitutional Treaty for Europe. Since that draft embodies the typical contents of a constitution, it indeed deserves the designation. On its basis, a lean, clear, and comprehensible constitutional text should be created, one that carries the reform and integration of Europe forward and can serve as an instrument of identity formation.
Abstract:
The human face is a vital component of our identity, and many people undergo medical aesthetic procedures in order to achieve an ideal or desired look. However, communication between physician and patient is fundamental to understanding the patient's wishes and achieving the desired results. To date, most plastic surgeons rely on either "free hand" 2D drawings on picture printouts or computerized picture morphing. Alternatively, hardware-dependent solutions allow facial shapes to be created and planned in 3D, but they are usually expensive or complex to handle. To offer a simple and hardware-independent solution, we propose a web-based application that uses 3 standard 2D pictures to create a 3D representation of the patient's face, on which facial aesthetic procedures such as filling, skin clearing or rejuvenation, and rhinoplasty are planned in 3D. The proposed application couples a set of well-established methods in a novel manner to optimize 3D reconstructions for clinical use. Face reconstructions performed with the application were evaluated by two plastic surgeons and also compared to ground truth data. Results showed the application can provide accurate 3D face representations for clinical use (with an average error of 2 mm) in less than 5 min.
Abstract:
Objective: The PEM Flex Solo II (Naviscan, Inc., San Diego, CA) is currently the only commercially available positron emission mammography (PEM) scanner. This scanner does not apply corrections for count rate effects, attenuation, or scatter during image reconstruction, potentially affecting the quantitative accuracy of images. This work measures the overall quantitative accuracy of the PEM Flex system and determines the contributions of error due to count rate effects, attenuation, and scatter. Materials and Methods: Gelatin phantoms were designed to simulate breasts of different sizes (4–12 cm thick) with varying uniform background activity concentration (0.007–0.5 μCi/cc), cysts, and lesions (2:1, 5:1, and 10:1 lesion-to-background ratios). The overall error was calculated from ROI measurements in the phantoms with a clinically relevant background activity concentration (0.065 μCi/cc). The error due to count rate effects was determined by comparing the overall error at multiple background activity concentrations to the error at 0.007 μCi/cc. A point source and cold gelatin phantoms were used to assess the errors due to attenuation and scatter. The maximum pixel values in gelatin and in air were compared to determine the effect of attenuation. Scatter was evaluated by comparing the sum of all pixel values in gelatin and in air. Results: The overall error in the background was negative in phantoms of all thicknesses, with the exception of the 4-cm thick phantoms (0%±7%), and its magnitude increased with thickness (-34%±6% for the 12-cm phantoms). All lesions exhibited large negative error (-22% for the 2:1 lesions in the 4-cm phantom), whose magnitude increased with thickness and with lesion-to-background ratio (-85% for the 10:1 lesions in the 12-cm phantoms). The error due to count rate in phantoms with 0.065 μCi/cc background was negative (-23%±6% for 4-cm thickness) and decreased in magnitude with thickness (-7%±7% for 12 cm). Attenuation was a substantial source of negative error that grew with thickness (-51%±10% to -77%±4% in the 4- to 12-cm phantoms, respectively). Scatter contributed a relatively constant amount of positive error (+23%±11%) at all thicknesses. Conclusion: Applying corrections for count rate, attenuation, and scatter will be essential for the PEM Flex Solo II to produce quantitatively accurate images.
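The percent errors quoted in the results follow the usual definition against the known fill activity; a minimal sketch with one of the reported background values (variable names are assumptions):

```python
# Percent error of a measured ROI value against the known concentration.
def percent_error(measured, true):
    return 100.0 * (measured - true) / true

# Example: a background ROI reading 0.043 uCi/cc in a phantom filled to
# 0.065 uCi/cc gives about -34%, comparable to the 12-cm phantom result.
print(f"{percent_error(0.043, 0.065):+.0f}%")
```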
Abstract:
Primate immunodeficiency viruses, or lentiviruses (HIV-1, HIV-2, and SIV), and hepatitis delta virus (HDV) are RNA viruses characterized by rapid evolution. Infection by primate immunodeficiency viruses usually results in the development of acquired immunodeficiency syndrome (AIDS) in humans and AIDS-like illnesses in Asian macaques. Similarly, hepatitis delta virus infection causes hepatitis and liver cancer in humans. These viruses are heterogeneous within an infected patient and among individuals. Substitution rates in the virus genomes are high and vary among lineages and among sites. Methods of phylogenetic analysis were applied to study the evolution of primate lentiviruses and the hepatitis delta virus. The following results were obtained: (1) The substitution rate varies among sites of primate lentivirus genes according to the two-parameter gamma distribution, with the shape parameter α being close to 1. (2) Primate immunodeficiency viruses fall into species-specific lineages; therefore, viral transmissions across primate species are not as frequent as suggested by previous authors. (3) Primate lentiviruses have acquired or lost their pathogenicity several times in the course of evolution. (4) Evidence was provided for multiple infections of a North American patient by distinct HIV-1 strains of the B subtype. (5) Computer simulations indicate that the probability of committing an error in testing HIV transmission depends on the number of virus sequences and their length, the divergence times among sequences, and the model of nucleotide substitution. (6) For future investigations of HIV-1 transmissions, using longer virus sequences and avoiding the use of distant outgroups is recommended. (7) Hepatitis delta virus strains are usually related according to the geographic region of isolation. (8) The evolution of HDV is characterized by a rate of synonymous substitution that is lower than both the nonsynonymous substitution rate and the rate of evolution of the noncoding region. (9) There is a strong preference for G and C nucleotides at the third codon positions of the HDV coding region.
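Result (1) refers to the standard gamma model of among-site rate variation, in which per-site rates are drawn from a gamma distribution with mean fixed at 1; a short illustrative sketch with the shape parameter α = 1 reported above:

```python
# Gamma-distributed among-site rates with shape alpha and mean 1
# (scale = 1/alpha); alpha = 1 reduces to an exponential distribution.
import numpy as np

alpha = 1.0
rng = np.random.default_rng(0)
rates = rng.gamma(shape=alpha, scale=1.0 / alpha, size=10_000)
print(f"mean {rates.mean():.2f}, std {rates.std():.2f}")  # both near 1
```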
Abstract:
Environmental data sets of pollutant concentrations in air, water, and soil frequently include unquantified sample values reported only as being below the analytical method detection limit. These values, referred to as censored values, should be considered in the estimation of distribution parameters, as each represents some value of pollutant concentration between zero and the detection limit. Most of the currently accepted methods for estimating the population parameters of environmental data sets containing censored values rely upon the assumption of an underlying normal (or transformed normal) distribution. This assumption can result in unacceptable levels of error in parameter estimation due to the unbounded left tail of the normal distribution. With the beta distribution, which is bounded on the same range as a scaled distribution of concentrations, [0 ≤ x ≤ 1], parameter estimation errors resulting from improper distribution bounds are avoided. This work developed a method that uses the beta distribution to estimate population parameters from censored environmental data sets and evaluated its performance in comparison to currently accepted methods that rely upon an underlying normal (or transformed normal) distribution. Data sets were generated assuming typical values encountered in environmental pollutant evaluation for the mean, standard deviation, and number of variates. For each set of model values, data sets were generated assuming that the data were distributed normally, lognormally, or according to a beta distribution. For varying levels of censoring, two established methods of parameter estimation, regression on normal ordered statistics and regression on lognormal ordered statistics, were used to estimate the known mean and standard deviation of each data set. The method developed for this study, employing a beta distribution assumption, was also used to estimate parameters, and the relative accuracy of all three methods was compared. For data sets of all three distribution types, and for censoring levels up to 50%, the performance of the new method equaled, if not exceeded, that of the two established methods. Because of its robustness in parameter estimation regardless of distribution type or censoring level, the method employing the beta distribution should be considered for full development in estimating parameters for censored environmental data sets.
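A minimal sketch of censored maximum likelihood estimation under a beta assumption, in the spirit of the method described (not the study's implementation): censored observations contribute the CDF at the detection limit, detected ones the density. The data, detection limit, and optimizer choice are assumptions.

```python
# Hedged sketch: fit a beta distribution to left-censored data by maximum
# likelihood. Below-detection-limit values enter through the CDF.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import beta

def neg_log_lik(params, detected, n_censored, det_limit):
    a, b = np.exp(params)  # exponentiate to keep shape parameters positive
    return -(beta.logpdf(detected, a, b).sum()
             + n_censored * beta.logcdf(det_limit, a, b))

rng = np.random.default_rng(0)
x = rng.beta(2.0, 8.0, 500)          # concentrations scaled into [0, 1]
dl = 0.1                              # detection limit
detected, n_cens = x[x >= dl], int((x < dl).sum())

res = minimize(neg_log_lik, x0=[0.0, 0.0], args=(detected, n_cens, dl))
a_hat, b_hat = np.exp(res.x)
print(a_hat, b_hat, beta.mean(a_hat, b_hat))  # true mean is 2/(2+8) = 0.2
```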
Abstract:
Background: Individuals with type 1 diabetes (T1D) have to count the carbohydrates (CHOs) of their meal to estimate the prandial insulin dose needed to compensate for the meal's effect on blood glucose levels. CHO counting is very challenging but also crucial, since an error of 20 grams can substantially impair postprandial control. Method: The GoCARB system is a smartphone application designed to support T1D patients with CHO counting of nonpacked foods. In a typical scenario, the user places a reference card next to the dish and acquires 2 images with his/her smartphone. From these images, the plate is detected and the different food items on the plate are automatically segmented and recognized, while their 3D shape is reconstructed. Finally, the food volumes are calculated and the CHO content is estimated by combining the previous results and using the USDA nutritional database. Results: To evaluate the proposed system, a set of 24 multi-food dishes was used. For each dish, 3 pairs of images were taken, and for each pair, the system was applied 4 times. The mean absolute percentage error in CHO estimation was 10 ± 12%, which corresponds to a mean absolute error of 6 ± 8 CHO grams for normal-sized dishes. Conclusion: The laboratory experiments demonstrated the feasibility of the GoCARB prototype system, since the error was below the initial goal of 20 grams. However, further improvements and evaluation are needed prior to launching a system able to accommodate inter- and intracultural eating habits.
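The reported error metrics are the standard mean absolute percentage error and mean absolute error; a tiny sketch with invented CHO values shows the computation:

```python
# MAPE and MAE of CHO estimates against weighed ground truth (invented data).
import numpy as np

true_cho = np.array([45.0, 60.0, 30.0, 75.0])  # grams per dish (assumed)
est_cho = np.array([41.0, 66.0, 33.0, 70.0])   # system estimates (assumed)

mape = np.mean(np.abs(est_cho - true_cho) / true_cho) * 100
mae = np.mean(np.abs(est_cho - true_cho))
print(f"MAPE {mape:.1f}%, MAE {mae:.1f} g")
```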
Abstract:
We present successful 81Kr-Kr radiometric dating of ancient polar ice. Krypton was extracted from the air bubbles in four ~350-kg polar ice samples from Taylor Glacier in the McMurdo Dry Valleys, Antarctica, and dated using Atom Trap Trace Analysis (ATTA). The 81Kr radiometric ages agree with independent age estimates obtained from stratigraphic dating techniques, with a mean absolute age offset of 6 ± 2.5 ka. Our experimental methods and sampling strategy are validated by (i) 85Kr and 39Ar analyses that show the samples to be free of modern air contamination and (ii) air content measurements that show the ice did not experience gas loss. We estimate the error in the 81Kr ages due to past geomagnetic variability to be below 3 ka. We show that ice from the previous interglacial period (Marine Isotope Stage 5e, 130–115 ka before present) can be found in abundance near the surface of Taylor Glacier. Our study paves the way for reliable radiometric dating of ancient ice in blue ice areas and margin sites where large samples are available, greatly enhancing their scientific value as archives of old ice and meteorites. At present, ATTA 81Kr analysis requires a 40–80-kg ice sample; as sample requirements continue to decrease, 81Kr dating of ice cores is a future possibility.
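The radiometric age behind 81Kr dating follows the usual decay relation, t = (t_half / ln 2) · ln(R_atm / R_sample), with R the 81Kr/Kr isotope ratio; the sketch below uses the commonly cited ~229 ka half-life and an invented sample ratio.

```python
# Back-of-envelope 81Kr age from the sample's ratio relative to modern air.
import math

T_HALF_KA = 229.0  # commonly cited 81Kr half-life, thousands of years

def kr81_age_ka(ratio_sample_over_atm):
    """Age in ka for a given 81Kr/Kr ratio relative to the atmosphere."""
    return (T_HALF_KA / math.log(2)) * math.log(1.0 / ratio_sample_over_atm)

print(f"{kr81_age_ka(0.70):.0f} ka")  # ~118 ka for a sample at 70% of modern
```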