Abstract:
Comparing published NAVD 88 Helmert orthometric heights of First-Order bench marks against GPS-determined orthometric heights showed that GEOID03 and GEOID09 perform at their reported accuracy in Connecticut. GPS-determined orthometric heights were obtained by subtracting geoid undulations from ellipsoid heights derived from a network least-squares adjustment of GPS occupations in 2007 and 2008. A total of 73 markers were occupied, in these stability classes: 25 class A, 11 class B, 12 class C, 2 class D bench marks, and 23 temporary marks with transferred elevations. Adjusted ellipsoid heights were compared against OPUS as a check. We found that: the GPS-determined orthometric heights of stability class A markers and the transfers are statistically lower than their published values, though only marginally; stability class B, C and D markers are also statistically lower, in a manner consistent with subsidence or settling; GEOID09 does not exhibit a statistically significant residual trend across Connecticut; and GEOID09 outperformed GEOID03. A "correction surface" is not recommended, in spite of the geoid models being statistically different from the NAVD 88 heights, because the uncertainties involved dominate the discrepancies. Instead, it is recommended that the vertical control network be re-observed.
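For reference, the comparison described here rests on the standard relation between ellipsoid height, geoid undulation and orthometric height; a minimal statement of that relation and of the residual being tested, with symbols chosen for this summary rather than taken from the study:

```latex
H_{\text{GPS}} = h_{\text{ellipsoid}} - N_{\text{geoid}}, \qquad
\Delta H = H_{\text{NAVD 88}} - H_{\text{GPS}}
```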
Abstract:
High Angular Resolution Diffusion Imaging (HARDI) techniques, including Diffusion Spectrum Imaging (DSI), have been proposed to resolve crossing and other complex fiber architecture in human brain white matter. In these methods, the directional information of diffusion is inferred from the peaks in the orientation distribution function (ODF). Extensive histology studies on macaque brain, cat cerebellum, rat hippocampus and optic tracts, and bovine tongue are qualitatively in agreement with DSI-derived ODFs and tractography. However, only two studies in the literature have validated DSI results using physical phantoms, and neither was performed on a clinical MRI scanner. Also, the few studies that optimized DSI in a clinical setting did not involve a comparison against physical phantoms. Finally, there is a lack of consensus on the necessary pre- and post-processing steps in DSI, and ground-truth diffusion fiber phantoms are not yet standardized. Therefore, the aims of this dissertation were to design and construct novel diffusion phantoms, employ post-processing techniques in order to systematically validate and optimize DSI-derived fiber ODFs in crossing regions on a clinical 3T MR scanner, and develop user-friendly software for DSI data reconstruction and analysis. Phantoms with fixed crossing-fiber configurations (two fibers crossing at 90° and at 45°, and a third phantom with three fibers crossing at 60°) were constructed using novel hollow plastic capillaries and novel placeholders. T2-weighted MRI results on these phantoms demonstrated high SNR, homogeneous signal, and the absence of air bubbles. In addition to other DSI post-processing steps, a technique to deconvolve the response function of an individual peak from the overall ODF was implemented; this technique greatly improved the angular resolution of otherwise unresolvable peaks in a crossing-fiber ODF. The effects of DSI acquisition parameters and SNR on the resulting angular accuracy of DSI on the clinical scanner were studied and quantified using the developed phantoms. With high angular direction sampling and reasonable levels of SNR, quantification of the crossing region in the 90°, 45° and 60° phantoms resulted in successful detection of angular information with mean ± SD of 86.93°±2.65°, 44.61°±1.6° and 60.03°±2.21° respectively, while simultaneously enhancing the ODFs in regions containing single fibers. To demonstrate the applicability of these validated methodologies, improvements in ODFs and fiber tracking in known crossing-fiber regions of normal human subjects were shown, and an in-house MATLAB software package that streamlines DSI data reconstruction and post-processing, with an easy-to-use graphical user interface, was developed. In conclusion, the phantoms developed in this dissertation offer a means of providing ground truth for validation of reconstruction and tractography algorithms of various diffusion models (including DSI). Also, the deconvolution methodology, when applied as an additional DSI post-processing step, significantly improved the angular accuracy of the ODFs obtained from DSI, and should be applicable to ODFs obtained from other high angular resolution diffusion imaging techniques.
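As an illustration of the kind of quantification reported above, the sketch below computes the angle between two detected ODF peak orientations and summarizes it as mean ± SD over a crossing region. The peak-direction inputs are hypothetical placeholders, not the dissertation's data or code.

```python
import numpy as np

def crossing_angle(peak1, peak2):
    """Angle in degrees between two fiber peak orientations.

    Fiber orientations are sign-ambiguous (+v and -v describe the same fiber),
    so the angle is folded into the range [0, 90] degrees.
    """
    v1 = np.asarray(peak1, dtype=float)
    v2 = np.asarray(peak2, dtype=float)
    cosang = abs(np.dot(v1, v2)) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cosang, 0.0, 1.0)))

# Hypothetical per-voxel peak pairs from a crossing region (stand-ins for DSI output)
detected_peak_pairs = [
    ([1.00, 0.00, 0.0], [0.05, 1.00, 0.0]),
    ([1.00, 0.00, 0.0], [-0.03, 0.98, 0.0]),
    ([0.99, 0.02, 0.0], [0.04, 1.00, 0.0]),
]
angles = [crossing_angle(p1, p2) for p1, p2 in detected_peak_pairs]
print(f"crossing angle: {np.mean(angles):.2f} deg +/- {np.std(angles, ddof=1):.2f} deg")
```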
Abstract:
ACCURACY OF THE BRCAPRO RISK ASSESSMENT MODEL IN MALES PRESENTING TO MD ANDERSON FOR BRCA TESTING Publication No. _______ Carolyn A. Garby, B.S. Supervisory Professor: Banu Arun, M.D. Hereditary Breast and Ovarian Cancer (HBOC) syndrome is due to mutations in the BRCA1 and BRCA2 genes. Women with HBOC have high risks of developing breast and ovarian cancers. Males with HBOC are commonly overlooked because male breast cancer is rare and other male cancer risks, such as prostate and pancreatic cancers, are relatively low. BRCA genetic testing is indicated for men, as it is currently estimated that 4-40% of male breast cancers result from a BRCA1 or BRCA2 mutation (Ottini, 2010), and management recommendations can be made based on genetic test results. Risk assessment models are available to provide an individualized likelihood of carrying a BRCA mutation. Only one study to date has evaluated the accuracy of BRCAPro in males; it was based on a cohort of Italian males and utilized an older version of BRCAPro. The objective of this study was to determine whether BRCAPro5.1 is a valid risk assessment model for males who present to MD Anderson Cancer Center for BRCA genetic testing. BRCAPro has previously been validated for determining the probability of carrying a BRCA mutation; however, it has not been examined further specifically in males. The total cohort consisted of 152 males who had undergone BRCA genetic testing. The cohort was stratified by indication for genetic counseling. Indications included having a known familial BRCA mutation, having a personal diagnosis of a BRCA-related cancer, or having a family history suggestive of HBOC. Overall there were 22 (14.47%) BRCA1+ males and 25 (16.45%) BRCA2+ males. Receiver operating characteristic curves were constructed for the cohort overall, for each particular indication, and for each cancer subtype. Our findings revealed that the BRCAPro5.1 model had perfect discriminating ability at a threshold of 56.2 for males with breast cancer; however, only 2 (4.35%) of 46 were found to have BRCA2 mutations. This rate is significantly lower than the high approximation (40%) reported in previous literature. BRCAPro does perform well in certain situations for men. Future investigation of male breast cancer and of men at risk for BRCA mutations is necessary to provide more accurate risk assessment.
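A minimal sketch of the ROC analysis described above, using scikit-learn. The BRCAPro scores and mutation outcomes are hypothetical placeholders, and the threshold-selection rule shown (Youden index) is an assumption for illustration, not the method stated in the abstract.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Hypothetical inputs: BRCAPro5.1 carrier probabilities and observed test results (1 = mutation found)
brcapro_scores = np.array([0.02, 0.10, 0.35, 0.62, 0.80, 0.05, 0.55, 0.90])
brca_positive  = np.array([0,    0,    0,    1,    1,    0,    1,    1   ])

fpr, tpr, thresholds = roc_curve(brca_positive, brcapro_scores)
auc = roc_auc_score(brca_positive, brcapro_scores)
best = np.argmax(tpr - fpr)   # Youden index: maximizes sensitivity + specificity - 1
print(f"AUC = {auc:.3f}, best threshold = {thresholds[best]:.3f}")
```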
Abstract:
This work aimed to create a mailable, OSLD-based phantom with accuracy suitable for RPC audits of HDR brachytherapy sources at institutions participating in NCI-funded cooperative clinical trials. An 8 × 8 × 10 cm3 prototype with two slots capable of holding nanoDot Al2O3:C OSL dosimeters (Landauer, Glenwood, IL) was designed and built. The phantom has a single channel capable of accepting all 192Ir HDR brachytherapy sources in current clinical use in the United States. Irradiations were performed with an 192Ir HDR source to determine correction factors for linearity with dose, dose rate, and the combined effect of irradiation energy and phantom construction. The uncertainties introduced by source positioning in the phantom and by timer resolution limitations were also investigated. The linearity correction factor was found to be a function of dose (in cGy) that differed from the factor determined by the RPC for the same batch of dosimeters under 60Co irradiation. There was no significant dose rate effect. Separate energy-plus-block correction factors were determined for the two models of 192Ir sources currently in clinical use, and these vendor-specific correction factors differed by almost 2.6%. For Nucletron sources, this correction factor was 1.026±0.004 (99% confidence interval) and for Varian sources it was 1.000±0.007 (99% CI). Reasonable deviations in source positioning within the phantom and the limited resolution of the source timer had insignificant effects on the ability to measure dose. The overall measurement uncertainty of the system was estimated to be ±2.5% for both Nucletron and Varian source audits (95% CI). This uncertainty was sufficient to establish a ±5% acceptance criterion for source strength audits under a formal RPC audit program. Trial audits of eight participating institutions resulted in an average RPC-to-institution dose ratio of 1.000 with a standard deviation of 0.011.
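A rough sketch of how the reported correction factors might be applied to a nanoDot reading, assuming multiplicative corrections. The dose-dependent linearity factor is left as a placeholder because its functional form is not reproduced here, and the function name and dose values are illustrative only.

```python
def corrected_dose_cGy(raw_reading_cGy, source_vendor, linearity_corr=1.0):
    """Apply OSLD correction factors to a raw nanoDot reading (illustrative sketch).

    linearity_corr stands in for the dose-dependent linearity correction whose
    exact form is not given in the abstract.
    """
    # Vendor-specific energy + phantom-block correction factors quoted in the abstract
    energy_block = {"Nucletron": 1.026, "Varian": 1.000}[source_vendor]
    return raw_reading_cGy * linearity_corr * energy_block

# RPC-to-institution dose ratio used as the audit statistic (±5% acceptance criterion)
institution_stated_dose = 102.0               # hypothetical value, cGy
ratio = corrected_dose_cGy(100.0, "Nucletron") / institution_stated_dose
print(f"RPC/institution dose ratio: {ratio:.3f}")
```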
Abstract:
Objective. In 2009, the International Expert Committee recommended the use of the HbA1c test for the diagnosis of diabetes. Although it has been recommended for the diagnosis of diabetes, its precise test performance among Mexican Americans is uncertain. A strong "gold standard" would rely on repeated blood glucose measurements on different days, which is the recommended method for diagnosing diabetes in clinical practice. Our objective was to assess the test performance of HbA1c in detecting diabetes and pre-diabetes against repeated fasting blood glucose measurements for the Mexican American population living on the United States-Mexico border. Moreover, we wanted to identify specific and precise threshold values of HbA1c for diabetes mellitus (DM) and pre-diabetes in this high-risk population, which might assist in better diagnosis and better management of diabetes. Research design and methods. We used the CCHC dataset for our study. The Cameron County Hispanic Cohort (CCHC), now numbering 2,574, was established in 2004, drawn from randomly selected households on the basis of 2000 Census tract data. The CCHC study randomly selected a subset of people (aged 18-64 years) in CCHC cohort households to determine the influence of SES on diabetes and obesity. Among the participants in Cohort-2000, 67.15% are female; all are Hispanic. Individuals were defined as having diabetes mellitus (fasting plasma glucose [FPG] ≥ 126 mg/dL) or pre-diabetes (100 ≤ FPG < 126 mg/dL). HbA1c test performance was evaluated using receiver operating characteristic (ROC) curves. Moreover, change-point models were used to determine HbA1c thresholds compatible with FPG thresholds for diabetes and pre-diabetes. Results. When FPG was used to detect diabetes, the sensitivity and specificity of HbA1c ≥ 6.5% were 75% and 87%, respectively (area under the curve 0.895). Additionally, when FPG was used to detect pre-diabetes, the sensitivity and specificity of HbA1c ≥ 6.0% (ADA recommended threshold) were 18% and 90%, respectively. The sensitivity and specificity of HbA1c ≥ 5.7% (International Expert Committee recommended threshold) for detecting pre-diabetes were 31% and 78%, respectively. ROC analyses suggest HbA1c is a sound predictor of diabetes mellitus (area under the curve 0.895) but a poorer predictor of pre-diabetes (area under the curve 0.632). Conclusions. Our data support the current recommendations for use of HbA1c in the diagnosis of diabetes for the Mexican American population, as it has shown reasonable sensitivity, specificity and accuracy against repeated FPG measures. However, use of HbA1c may be premature for detecting pre-diabetes in this specific population because of its poor sensitivity relative to FPG. It may be that HbA1c more effectively identifies those cases at risk of developing diabetes. Following these pre-diabetic individuals over the longer term for the detection of incident diabetes may lead to more confirmatory results.
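A small sketch of the sensitivity/specificity calculation against an FPG-defined diagnosis (FPG ≥ 126 mg/dL for diabetes, HbA1c ≥ 6.5% as the test cut-off). The paired measurements shown are hypothetical, and the study's repeated-FPG criterion is simplified here to a single FPG value per person.

```python
import numpy as np

def sens_spec(hba1c, fpg, hba1c_cut=6.5, fpg_cut=126.0):
    """Sensitivity and specificity of an HbA1c cut-off against an FPG-defined diagnosis."""
    test_pos = hba1c >= hba1c_cut
    disease  = fpg >= fpg_cut              # diabetes by fasting plasma glucose
    sens = np.mean(test_pos[disease])      # true-positive rate
    spec = np.mean(~test_pos[~disease])    # true-negative rate
    return sens, spec

# Hypothetical paired measurements (HbA1c in %, FPG in mg/dL)
hba1c = np.array([5.4, 6.1, 6.8, 7.2, 5.9, 6.6])
fpg   = np.array([ 92, 118, 135, 160, 101, 124])
print(sens_spec(hba1c, fpg))
```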
Abstract:
Clinical Research Data Quality Literature Review and Pooled Analysis. We present a literature review and secondary analysis of data accuracy in clinical research and related secondary data uses. A total of 93 papers meeting our inclusion criteria were categorized according to the data processing methods. Quantitative data accuracy information was abstracted from the articles and pooled. Our analysis demonstrates that the accuracy associated with data processing methods varies widely, with error rates ranging from 2 to 5019 errors per 10,000 fields. Medical record abstraction was associated with the highest error rates (70-5019 errors per 10,000 fields). Data entered and processed at healthcare facilities had error rates comparable to data processed at central data processing centers. Error rates for data processed with single entry in the presence of on-screen checks were comparable to those for double-entered data. While data processing and cleaning methods may explain a significant amount of the variability in data accuracy, additional factors not resolvable here likely exist.

Defining Data Quality for Clinical Research: A Concept Analysis. Despite notable previous attempts by experts to define data quality, the concept remains ambiguous and subject to the vagaries of natural language. This current lack of clarity continues to hamper research related to data quality issues. We present a formal concept analysis of data quality, which builds on and synthesizes previously published work. We further posit that discipline-level specificity may be required to achieve the desired definitional clarity. To this end, we combine work from the clinical research domain with findings from the general data quality literature to produce a discipline-specific definition and operationalization of data quality in clinical research. While the results are helpful to clinical research, the methodology of concept analysis may be useful in other fields to clarify data quality attributes and to achieve operational definitions.

Medical Record Abstractors' Perceptions of Factors Impacting the Accuracy of Abstracted Data. Medical record abstraction (MRA) is known to be a significant source of data errors in secondary data uses. Factors impacting the accuracy of abstracted data are not reported consistently in the literature. Two Delphi processes were conducted with experienced medical record abstractors to assess abstractors' perceptions of these factors. The Delphi process identified 9 factors that were not found in the literature, and differed from the literature by 5 factors in the top 25%. The Delphi results refuted seven factors reported in the literature as impacting the quality of abstracted data. The results provide insight into, and indicate content validity of, a significant number of the factors reported in the literature. Further, the results indicate general consistency between the perceptions of clinical research medical record abstractors and those of registry and quality improvement abstractors.

Distributed Cognition Artifacts on Clinical Research Data Collection Forms. Medical record abstraction, a primary mode of data collection in secondary data use, is associated with high error rates. Distributed cognition in medical record abstraction has not been studied as a possible explanation for abstraction errors. We employed the theory of distributed representation and representational analysis to systematically evaluate cognitive demands in medical record abstraction and the extent of external cognitive support employed in a sample of clinical research data collection forms. We show that the cognitive load required for abstraction was high for 61% of the sampled data elements, and exceedingly so for 9%. Further, the data collection forms did not support external cognition for the most complex data elements. High working memory demands are a possible explanation for the association of data errors with data elements requiring abstractor interpretation, comparison, mapping or calculation. The representational analysis used here can be used to identify data elements with high cognitive demands.
Abstract:
Prenatal genetic counseling patients have the ability to choose from a myriad of screening and diagnostic testing options, each with intricacies and caveats regarding accuracy and timing. Decisions regarding such testing can be difficult and are often made on the same day that testing is performed. Therefore, it is reasonable to consider that the support people brought to an appointment may have a role in the decision-making process. We aimed to better define this potential role by examining the incoming knowledge and expectations of support people who attended prenatal genetic counseling appointments. Support people were asked to complete a survey at one of seven Houston area prenatal clinics. The survey included questions regarding demographics, relationship to patient, incoming knowledge of the appointment, expectations of decision-making and perceived levels of influence over the decisions that would be made during the counseling session. The majority (79.4%) of the 252 participants were spouses/partners. Overall, there was poor knowledge of the referral indications with only 33.5% of participants correctly identifying the patient’s indication. Participants had even poorer knowledge of testing options that would be offered during the session, as only 17.7% were able to correctly identify testing options that would be discussed during the genetic counseling session. Of participants, just 3.6% said that they did not want to be included in discussions about screening/testing options. Only a few participants thought that they had less influence over decisions related to the pregnancy than over non-pregnancy decisions. Participants who reported feeling like they had a higher level of influence were likely to attend more of the pregnancy-related appointments with the patient. Findings from this study have provided insight into the perspective of support persons and have identified gaps in knowledge that may exist between the patients and the people they choose to bring with them into the genetic counseling session. In addition, this study is a starting point to assess how much the support people think that they impact the decision-making process of prenatal genetic counseling patients versus how much the prenatal patients value the input of the support people.
New methods for quantification and analysis of quantitative real-time polymerase chain reaction data
Abstract:
Quantitative real-time polymerase chain reaction (qPCR) is a sensitive gene quantitation method that has been widely used in the biological and biomedical fields. The currently used methods for PCR data analysis, including the threshold cycle (CT) method and linear and non-linear model fitting methods, all require subtracting background fluorescence. However, the removal of background fluorescence is usually inaccurate and can therefore distort results. Here, we propose a new method, the taking-difference linear regression method, to overcome this limitation. Briefly, for each pair of consecutive PCR cycles, we subtracted the fluorescence in the earlier cycle from that in the later cycle, transforming the n-cycle raw data into n-1 cycle data. Linear regression was then applied to the natural logarithm of the transformed data. Finally, amplification efficiencies and initial DNA molecule numbers were calculated for each PCR run. To evaluate this new method, we compared it in terms of accuracy and precision with the original linear regression method using three background corrections: the mean of cycles 1-3, the mean of cycles 3-7, and the minimum. Three criteria, namely threshold identification, max R², and max slope, were employed to search for target data points. Considering that PCR data are time series data, we also applied linear mixed models. Collectively, when the threshold identification criterion was applied and when the linear mixed model was adopted, the taking-difference linear regression method was superior, as it gave an accurate estimation of the initial DNA amount and a reasonable estimation of PCR amplification efficiencies. When the criteria of max R² and max slope were used, the original linear regression method gave an accurate estimation of the initial DNA amount. Overall, the taking-difference linear regression method avoids the error in subtracting an unknown background and is thus theoretically more accurate and reliable. This method is easy to perform, and the taking-difference strategy can be extended to all current methods for qPCR data analysis.
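A minimal sketch of the taking-difference approach as described above, assuming the common exponential model F_n = background + F0·E^n for the raw fluorescence; under this model the cycle-to-cycle difference no longer contains the background term, and a line fitted to its logarithm yields the efficiency and initial amount. The simulated run and parameter values are illustrative, not the authors' data or implementation.

```python
import numpy as np

def taking_difference_fit(fluorescence):
    """Taking-difference linear regression for one qPCR run (sketch).

    Assumed model: F_n = background + F0 * E**n.  Then
    D_n = F_{n+1} - F_n = F0 * (E - 1) * E**n, so a line fitted to
    ln(D_n) versus n gives slope = ln(E) and intercept = ln(F0 * (E - 1)).
    """
    F = np.asarray(fluorescence, dtype=float)
    D = np.diff(F)                        # n-cycle data -> (n-1)-cycle differences
    cycles = np.arange(1, len(F))         # cycle index associated with each difference
    keep = D > 0                          # the logarithm requires positive differences
    slope, intercept = np.polyfit(cycles[keep], np.log(D[keep]), 1)
    E = np.exp(slope)                     # amplification efficiency (1 < E <= 2)
    F0 = np.exp(intercept) / (E - 1.0)    # estimated initial amount
    return E, F0

# Simulated run: background 100, F0 = 5, efficiency 1.9, small noise
rng = np.random.default_rng(0)
cyc = np.arange(1, 26)
raw = 100.0 + 5.0 * 1.9 ** cyc + rng.normal(0.0, 2.0, cyc.size)
print(taking_difference_fit(raw))
```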
Abstract:
My dissertation focuses on two aspects of RNA sequencing technology. The first is the methodology for modeling the overdispersion inherent in RNA-seq data for differential expression analysis; this aspect is addressed in three sections. The second aspect is the application of RNA-seq data to identify the CpG island methylator phenotype (CIMP) by integrating datasets of mRNA expression level and DNA methylation status.

Section 1: The cost of DNA sequencing has dropped dramatically in the past decade. Consequently, genomic research increasingly depends on sequencing technology. However, it remains unclear how sequencing capacity influences the accuracy of mRNA expression measurement. We observe that accuracy improves with increasing sequencing depth. To model the overdispersion, we use the beta-binomial distribution with a new parameter indicating the dependency between overdispersion and sequencing depth. Our modified beta-binomial model performs better than the binomial or the pure beta-binomial model, with a lower false discovery rate.

Section 2: Although a number of methods have been proposed to accurately analyze differential RNA expression at the gene level, modeling at the base-pair level is required. Here, we find that the overdispersion rate decreases as the sequencing depth increases at the base-pair level. We also propose four models and compare them with each other. As expected, our beta-binomial model with a dynamic overdispersion rate is shown to be superior.

Section 3: We investigate biases in RNA-seq by exploring the measurement of the external control, spike-in RNA. This study is based on two datasets with spike-in controls obtained from a recent study. We observe a previously undiscovered bias in the measurement of the spike-in transcripts that arises from the influence of the sample transcripts in RNA-seq, and we find that this influence is related to the local sequence of the random hexamer used in priming. We suggest a model of the inequality between samples to correct this type of bias.

Section 4: The expression of a gene can be turned off when its promoter is highly methylated. Several studies have reported a clear threshold effect in gene silencing mediated by DNA methylation. It is reasonable to assume the thresholds are specific to each gene. It is also intriguing to investigate genes that are largely controlled by DNA methylation; these are called "L-shaped" genes. We develop a method to determine the DNA methylation threshold and identify a new CIMP of BRCA.

In conclusion, we provide a detailed understanding of the relationship between the overdispersion rate and sequencing depth, we reveal a new bias in RNA-seq and provide a detailed understanding of its relationship with the local sequence, and we develop a powerful method to dichotomize methylation status, through which we identify a new CIMP of breast cancer with a distinct classification of molecular characteristics and clinical features.
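A brief sketch of a beta-binomial log-likelihood in which the overdispersion parameter shrinks with sequencing depth. The particular link function rho(n) = rho0 / (1 + gamma·n) and the example counts are assumptions for illustration, not the parameterization used in the dissertation.

```python
import numpy as np
from scipy.special import betaln, gammaln

def betabin_logpmf(k, n, p, rho):
    """Beta-binomial log-pmf parameterized by mean p and overdispersion rho (0 < rho < 1)."""
    a = p * (1.0 - rho) / rho
    b = (1.0 - p) * (1.0 - rho) / rho
    log_choose = gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)
    return log_choose + betaln(k + a, n - k + b) - betaln(a, b)

def neg_loglik(params, k, n):
    """Negative log-likelihood with depth-dependent overdispersion rho(n) = rho0/(1 + gamma*n).

    This link is only an illustrative choice, not the dissertation's model.
    """
    p, rho0, gamma = params
    rho = rho0 / (1.0 + gamma * n)
    return -np.sum(betabin_logpmf(k, n, p, rho))

# Hypothetical counts k out of total depth n for one feature across samples
k = np.array([12, 30, 55, 140])
n = np.array([100, 250, 500, 1200])
print(neg_loglik([0.11, 0.05, 0.001], k, n))
```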
Abstract:
During R/V Meteor cruise no. 30, 4 moorings with 17 current meters were placed on the continental slope off Sierra Leone at depths between 81 and 1058 meters. The observation period started on March 8, 1973, 16:55 GMT and lasted 19 days for moorings M30_068MOOR, M30_069MOOR and M30_070MOOR on the slope and 9 days for M30_067MOOR on the shelf. One current meter recorded at location M30_067MOOR for 22 days. Hydrographic data were collected at 32 stations by means of the "Kieler Multi-Meeressonde". Harmonic analysis is applied to the first 15 days of the time series to determine the M2 and S2 tides. By vertically averaging the Fourier coefficients, the field of motion is separated into its barotropic and its baroclinic component. The expected error generated by white Gaussian noise is estimated. To estimate the influence of the particular vertical distribution of the current meters, the barotropic M2 tide is calculated by omitting and interchanging time series of different moorings. It is shown that only the data of moorings M30_069MOOR, M30_070MOOR and M30_067MOOR can be used. The results for the barotropic M2 tide agree well with previous publications by other authors. On the slope at a depth of 1000 m there is a free barotropic wave under the influence of the Coriolis force propagating along the slope with an amplitude of 3.4 cm s**-1. On the shelf, the maximum current is substantially greater (5.8 cm s**-1) and the direction of propagation is perpendicular to the slope. As a separation into different baroclinic modes using vertical eigenmodes is not reasonable for the continental slope, an interpretation of the total baroclinic wave field is attempted by means of the method of characteristics. Assuming that the continental slope generates several linear waves, which superpose, baroclinic tidal ellipses are calculated. The scattering of the directions of the major axes at M30_069MOOR contrasts with M30_070MOOR, where they are bundled within an angle of 60°. This is presumably caused by the different character of the bottom topography in the vicinity of the two moorings. A detailed discussion of M30_069MOOR is forgone since the accuracy of the bathymetric chart is not sufficient to prove any relation between waves and topography. The bundling of the major axes at M30_070MOOR can be explained by the along-slope changes of the slope, which cause an energy transfer from the along-slope barotropic component to the downslope baroclinic component. The maximum amplitude is found at a depth of 245 m, where it is expected from the characteristics originating at the shelf edge. Because of the dominating barotropic tide, high coherence is found between most of the current meters. To show the influence of the baroclinic tidal waves, the effect of the mean current is considered. There are two periods of nearly opposite longshore mean current. For 128 hours during each of these periods, starting on March 11, 05:00, and March 21, 08:30, the coherences and energy spectra are calculated. The changes in the slope of the characteristics are found to be in agreement with the changes of energy and coherence. Because of the short periods of nearly constant mean current, some of the calculated differences of energy and coherence are not statistically significant. For the M2 tide, the ratios of vertically integrated total baroclinic energy to vertically integrated barotropic kinetic energy are calculated.
Taking into account both components (along and perpendicular to the slope), the obtained values are 0.75 and 0.98 on the slope and 0.38 on the shelf. If each component is considered separately, the ratios are 0.39 and 1.16 parallel to the slope and 5.1 and 15.85 for the component perpendicular to it. Taking the energy transfer from the along-slope component to the downslope component into account, a simple model yields an energy ratio of 2.6. Considering the limited applicability of the theory to real conditions, the obtained values are in agreement with the values calculated by Sandstroem.
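A compact sketch of the harmonic-analysis step described above: a least-squares fit of M2 and S2 constituents to one current component, followed by a simple depth average of the coefficients to split barotropic and baroclinic parts. The equal-weight average and the synthetic test series are simplifications; the study's handling of unevenly spaced instruments is not reproduced.

```python
import numpy as np

# Tidal angular frequencies (rad/hour); M2 period 12.4206 h, S2 period 12.0 h
OMEGA = {"M2": 2 * np.pi / 12.4206, "S2": 2 * np.pi / 12.0}

def harmonic_fit(t_hours, u):
    """Least-squares M2/S2 harmonic analysis of one current component.

    Returns (A, B) Fourier coefficients per constituent so that
    u(t) ~ mean + A*cos(w*t) + B*sin(w*t).
    """
    cols = [np.ones_like(t_hours)]
    for w in OMEGA.values():
        cols += [np.cos(w * t_hours), np.sin(w * t_hours)]
    G = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(G, u, rcond=None)
    return {name: (coef[1 + 2 * i], coef[2 + 2 * i]) for i, name in enumerate(OMEGA)}

def split_barotropic(coeffs_per_meter, constituent="M2"):
    """Barotropic part = depth average of per-instrument coefficients; baroclinic = residual."""
    ab = np.array([c[constituent] for c in coeffs_per_meter])   # shape (n_meters, 2)
    barotropic = ab.mean(axis=0)
    return barotropic, ab - barotropic

# Synthetic 15-day hourly record with a 3.4 cm/s M2 signal plus noise
t = np.arange(0.0, 15 * 24, 1.0)
u = 3.4 * np.cos(OMEGA["M2"] * t) + np.random.default_rng(1).normal(0.0, 0.5, t.size)
print(harmonic_fit(t, u)["M2"])
```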
Abstract:
The thermal diffusion enrichment apparatus in use in Amsterdam before 1967 has been rebuilt in the Groningen Radiocarbon Dating Laboratory. It has been shown to operate reliably and reproducibly. Reasonable agreement exists between the theoretical calculations and the experimental results. The 14C enrichment of a CO sample is deduced from the simultaneous mass-30 enrichment, which is measured with a mass spectrometer. The relation between the two enrichments follows from a series of calibration measurements. The overall accuracy of the enrichment is a few percent, equivalent to a few hundred years in age. The main problem in dating very old samples is their possible contamination with recent carbon. Generally, careful sample selection and rigorous pretreatment reduce sample contamination to an acceptable value. It has also been established that laboratory contamination, due to a memory effect in the combustion system and to impurities in the oxygen and nitrogen gas used for combustion, can be eliminated. A detailed analysis shows that the counter background in our set-up is almost exclusively caused by cosmic-ray muons. The measurement of 28 early glacial samples, mostly from north-west Europe, has yielded a consistent set of ages. These indicate the existence of three early glacial interstadials; using the Weichselian definitions: Amersfoort starting at 68 200 ± 1100, Brørup at 64 400 ± 800 and Odderade at 60 500 ± 600 years BP. This 14C chronology shows good agreement with the Camp Century chronology and the dated palaeo sea levels. The discrepancy between the age of the early part of the Last Glacial on the 14C time scale and on that adopted for the deep-sea δ18O record must probably be attributed to the use of a generalized δ18O curve and a wrong interpretation of this curve in terms of three Barbados terraces.
Abstract:
Emission inventories are databases that aim to describe the polluting activities that occur across a certain geographic domain. Depending on the spatial scale, the availability of information varies, as do the assumptions applied, which strongly influence an inventory's quality, accuracy and representativeness. This study compared and contrasted two emission inventories describing the Greater Madrid Region (GMR) under an air quality simulation approach. The chosen inventories were the National Emissions Inventory (NEI) and the Regional Emissions Inventory of the Greater Madrid Region (REI). Both were used to feed air quality simulations with the CMAQ modelling system, and the results were compared with observations from the air quality monitoring network in the modelled domain. Through the application of statistical tools, analysis of emissions at cell level and cell-expansion procedures, it was observed that the National Inventory showed better results for describing on-road traffic activities and agriculture (SNAP07 and SNAP10). The accurate description of activities, the good characterization of the vehicle fleet and the correct use of traffic emission factors were the main causes of such good correlation. On the other hand, the Regional Inventory showed better descriptions of non-industrial combustion (SNAP02) and industrial activities (SNAP03). It incorporated realistic emission factors and a reasonable fuel mix, and it drew upon local information sources to describe these activities, while NEI relied on surrogation and national datasets, which led to a poorer representation. Off-road transportation (SNAP08) was described similarly by both inventories, while the rest of the SNAP activities showed a marginal contribution to the overall emissions.
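The abstract does not list the specific statistics used, so the following is only an illustrative sketch of common model-evaluation metrics for paired modelled and observed concentrations; the station values are hypothetical.

```python
import numpy as np

def evaluation_stats(model, obs):
    """Common model-evaluation statistics for paired modelled/observed concentrations."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    bias = model - obs
    return {
        "MB":   bias.mean(),                    # mean bias
        "NMB":  bias.sum() / obs.sum(),         # normalized mean bias
        "RMSE": np.sqrt((bias ** 2).mean()),    # root-mean-square error
        "r":    np.corrcoef(model, obs)[0, 1],  # Pearson correlation
    }

# Hypothetical NO2 concentrations (ug/m3) at monitoring stations: CMAQ run vs. observations
print(evaluation_stats([41.0, 55.2, 30.1, 62.4], [38.5, 60.0, 28.7, 70.2]))
```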
Abstract:
This paper discusses the target localization problem in wireless visual sensor networks. Specifically, each node with a low-resolution camera extracts multiple feature points to represent the target at the sensor node level. A statistical method is presented that merges the position information from different sensor nodes to select the most correlated feature point pair at the base station. This method reduces the influence of target-extraction accuracy on target-localization accuracy in the universal coordinate system. Simulations show that, compared with a related approach, the proposed method achieves better target-localization accuracy and a better trade-off between camera node usage and localization accuracy.
Abstract:
The accuracy of Tomás López's historical cartography of the Canary Islands, included in the "Atlas Particular" of the Kingdoms of Spain, Portugal and Adjacent Islands, is analyzed. For this purpose, we propose a methodology based on Geographic Information Systems (GIS): a comparison of the population centres digitized from the historical cartography with their current locations. This study shows that the linear error value is small for the smaller islands: Lanzarote, El Hierro, La Palma and La Gomera. On the large islands of Tenerife, Fuerteventura and Gran Canaria, the error is smaller in central zones but increases towards the coast. This indicates that Tomás López began his cartography from the central zones of each island, accumulating errors due to a lack of geodetic references as he moved toward the coast.
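A minimal sketch of the linear (planimetric) error computation implied above, comparing digitized historical positions of population centres with present-day positions in a common projected coordinate system; the coordinates shown are hypothetical.

```python
import numpy as np

def linear_errors(historical_xy, current_xy):
    """Planimetric displacement of each digitized historical population centre
    from its present-day position (coordinates in a common projected CRS, metres)."""
    d = np.asarray(historical_xy, float) - np.asarray(current_xy, float)
    return np.hypot(d[:, 0], d[:, 1])

# Hypothetical UTM coordinates (metres) for three settlements
hist = [(217500.0, 3092800.0), (231040.0, 3101120.0), (240300.0, 3087550.0)]
curr = [(217960.0, 3093350.0), (230410.0, 3100600.0), (239880.0, 3088240.0)]
err = linear_errors(hist, curr)
print(err.round(1), "mean:", err.mean().round(1))
```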
Abstract:
Abstract interpreters rely on the existence of a fixpoint algorithm that calculates a least upper bound approximation of the semantics of the program. Usually, that algorithm is described in terms of the particular language under study and is therefore not directly applicable to programs written in a different source language. In this paper we introduce a generic, block-based, and uniform representation of the program control flow graph and a language-independent fixpoint algorithm that can be applied to a variety of languages and, in particular, Java. Two major characteristics of our approach are accuracy (obtained through a top-down, context-sensitive approach) and reasonable efficiency (achieved by means of memoization and dependency tracking techniques). We have also implemented the proposed framework and show some initial experimental results for standard benchmarks, which further support the feasibility of the solution adopted.
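A minimal sketch of a language-independent worklist fixpoint over a block-based control flow graph, with memoized block inputs and simple change-driven re-queuing. This forward, context-insensitive variant is only an illustration and does not capture the top-down, context-sensitive analysis described in the paper.

```python
def fixpoint(cfg, entry, transfer, join, bottom):
    """Generic worklist fixpoint over a block-based control flow graph (sketch).

    cfg:      dict mapping block id -> list of successor block ids
    transfer: abstract transfer function for one block (block, in-state) -> out-state
    join:     least upper bound of two abstract states
    bottom:   least element of the abstract domain
    The memo table of block in-states doubles as the analysis result; only
    blocks whose input changed are re-queued (a simple form of dependency tracking).
    """
    in_state = {b: bottom for b in cfg}          # memoized block inputs
    worklist = [entry]
    while worklist:
        block = worklist.pop()
        out = transfer(block, in_state[block])
        for succ in cfg[block]:
            merged = join(in_state[succ], out)
            if merged != in_state[succ]:         # input grew: successor must be revisited
                in_state[succ] = merged
                worklist.append(succ)
    return in_state

# Tiny example: which earlier blocks can reach each block (powerset domain, union as join)
cfg = {"entry": ["a"], "a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
result = fixpoint(cfg, "entry",
                  transfer=lambda blk, s: s | {blk},
                  join=lambda x, y: x | y,
                  bottom=frozenset())
print(sorted(result["d"]))
```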