898 results for "trajectory accuracy"


Relevance:

20.00%

Publisher:

Abstract:

In recent years, there has been enormous growth in location-aware devices such as GPS-embedded cell phones, mobile sensors and radio-frequency identification tags. The combination of sensing, processing and communication in a single device gives rise to a vast number of applications and makes mobile Wireless Sensor Network (mWSN) applications a reality. As computing, sensing and communication become more ubiquitous, trajectory privacy becomes a critical piece of information and an important factor for commercial success. While on the move, sensor nodes continuously transmit data streams of sensed values and spatiotemporal information, known as "trajectory information". If adversaries can intercept this information, they can monitor the trajectory path and capture the location of the source node. This research stems from the recognition that the wide applicability of mWSNs will remain elusive unless a trajectory privacy preservation mechanism is developed. The outcome seeks to lay a firm foundation for trajectory privacy preservation in mWSNs against both external and internal trajectory privacy attacks. First, to prevent external attacks, we investigated a context-based trajectory privacy-aware routing protocol to prevent eavesdropping. Traditional shortest-path routing algorithms give adversaries the possibility of locating the target node within a certain area. We designed a novel privacy-aware routing phase and utilized the trajectory dissimilarity between mobile nodes to mislead adversaries about the location where a message started its journey. Second, to detect internal attacks, we developed a software-based attestation solution for detecting compromised nodes. We created a dynamic attestation node chain among neighboring nodes to examine the memory checksum of suspicious nodes, improving the computation time for memory traversal over previous work. Finally, we revisited the trust issue in trajectory privacy preservation mechanism designs, using Bayesian game theory to model and analyze the behavior of cooperative, selfish and malicious nodes in trajectory privacy preservation activities.
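The software-based attestation step can be illustrated with a toy checksum routine (a hypothetical sketch in the spirit of SWATT-style attestation, not the dissertation's actual protocol): the verifier supplies a random seed, the node must compute a checksum over its memory in a seed-dependent traversal order, and any tampered byte changes the result.

```python
def attest(memory: bytes, seed: int) -> int:
    """Toy attestation: checksum every memory cell, visited in a
    seed-dependent order (assumes len(memory) is a power of two)."""
    n = len(memory)
    stride = (2 * seed + 1) % n          # odd stride -> full-cycle traversal
    checksum, addr = 0, seed % n
    for _ in range(n):
        addr = (addr + stride) % n
        # Rotate the running checksum left by 5 bits, mix in the byte.
        checksum = (((checksum << 5) | (checksum >> 27)) ^ memory[addr]) & 0xFFFFFFFF
    return checksum

clean = bytes(range(256)) * 16           # pretend 4 KiB node memory image
tampered = bytearray(clean)
tampered[100] ^= 0xFF                    # a compromised node's modification
assert attest(clean, seed=42) != attest(bytes(tampered), seed=42)
```

Because the node cannot predict the seed, it cannot precompute the checksum, and the full-coverage traversal guarantees any modified cell is visited.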

Abstract:

Whereas previous research has demonstrated that trait ratings of faces at encoding lead to enhanced recognition accuracy compared to feature ratings, this set of experiments examines whether ratings given after encoding, just prior to recognition, influence face recognition accuracy. In Experiment 1, subjects who made feature ratings just prior to recognition were significantly less accurate than subjects who made no ratings or trait ratings. In Experiment 2, ratings were manipulated at both encoding and retrieval. The retrieval effect was smaller and nonsignificant, but a combined probability analysis showed that it was significant when the results of both experiments were considered jointly. In a third experiment, exposure duration at retrieval, a potentially confounding factor in Experiments 1 and 2, had a nonsignificant effect on recognition accuracy, suggesting that it probably does not explain the earlier results. These experiments demonstrate that face recognition accuracy can be influenced by processing instructions at retrieval.
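The "combined probability analysis" used to pool the two experiments can be illustrated with Stouffer's Z method, one common way to combine p-values (the paper may have used a different variant, and the p-values below are invented for illustration):

```python
from math import sqrt
from statistics import NormalDist

def stouffer(p_values):
    """Combine one-sided p-values via Stouffer's Z method."""
    nd = NormalDist()
    z = sum(nd.inv_cdf(1 - p) for p in p_values)
    return 1 - nd.cdf(z / sqrt(len(p_values)))

# One significant and one nonsignificant result can still be
# jointly significant (illustrative p-values, not the paper's):
combined = stouffer([0.03, 0.12])
assert combined < 0.05
```

This mirrors the abstract's logic: a smaller, nonsignificant retrieval effect in Experiment 2 can still contribute to a significant combined result.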

Abstract:

Biological detectors, such as canines, are valuable tools for the rapid identification of illicit materials. However, the reliability, field accuracy, and capabilities of detection canines have recently come under increased scrutiny in the legal system. For example, the Supreme Court case State of Florida v. Harris discussed the need for continuous monitoring of canine abilities, thresholds, and search capabilities. As a result, the fallibility of canines for detection was brought to light, along with the need for further research into and understanding of canine detection. This study is two-fold: it seeks not only to create new canine training aids whose dissipation can be controlled, but also to investigate canine field accuracy toward objects with odors similar to illicit materials. The goal of this research was to improve upon current canine training aid mimics. Sol-gel polymer training aids, imprinted with the active odor of cocaine, were developed. This novel training aid improved upon the longevity of existing training aids while also providing a way to manipulate the polymer network to alter the dissipation rate of the imprinted active odors. Manipulating the polymer network could allow handlers to control the abundance of odor presented to their canines, familiarizing them with their canines' capabilities and thresholds and thereby strengthening the canines' standing in court. The field accuracy of detection canines was recently called into question during the Supreme Court case State of Florida v. Jardines, where it was argued that if cocaine's active odor, methyl benzoate, were produced by the popular landscaping flower, the snapdragon, canines would falsely alert to the flowers. Therefore, snapdragon flowers were grown and tested both in the laboratory and in the field to determine the odors they produce; the persistence of these odors once the flowers have been cut; and whether detection canines will alert to growing or cut flowers during a blind search scenario. Results revealed that although methyl benzoate is produced by snapdragon flowers, certified narcotics detection canines can distinguish cocaine's odor profile from that of the flowers and will not alert.

Abstract:

The purpose of this study was to determine which of two methods is more appropriate for teaching pitch discrimination to Grade 6 choral students in order to improve sight-singing note accuracy. The study consisted of three phases: pre-testing, instruction, and post-testing. During the four-week study, the experimental group received training using the Kodaly method while the control group received training using the traditional method. The pre- and post-tests were evaluated by three trained musicians. The data were analyzed using an independent t-test and a paired t-test with teaching method (experimental vs. control) as a factor. Quantitative results suggest that the experimental subjects, who received Kodaly instruction, showed significantly greater improvement in pitch accuracy at post-treatment than the control group. These results indicate that the Kodaly method was more effective in producing accurate pitch in sight-singing.
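The independent t-test mentioned above compares the two groups' post-test scores. A minimal sketch (using Welch's form of the statistic, which does not assume equal group variances; the scores below are hypothetical, not the study's data):

```python
from math import sqrt
from statistics import mean, variance

def welch_t(a, b):
    """Independent-samples t statistic (Welch's form)."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    return (mean(a) - mean(b)) / sqrt(va + vb)

# Hypothetical post-test pitch-accuracy scores (0-100 scale):
kodaly      = [82, 88, 79, 91, 85, 87]
traditional = [74, 78, 70, 81, 76, 72]
t = welch_t(kodaly, traditional)
assert t > 2  # a large positive t favours the Kodaly group
```

The t statistic would then be compared against the t distribution's critical value for the appropriate degrees of freedom to decide significance.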

Abstract:

Correct specification of the simple location quotients used in regionalizing the national direct requirements table is essential to the accuracy of regional input-output multipliers. The purpose of this research is to examine the relative accuracy of these multipliers when earnings, employment, number of establishments, or payroll data specify the simple location quotients. For each specification type, I derive a column of total output multipliers and a column of total income multipliers. These multipliers are based on the 1987 benchmark input-output accounts of the U.S. economy and 1988-1992 state of Florida data. Error sign tests and Standardized Mean Absolute Deviation (SMAD) statistics indicate that the output multiplier estimates overestimate the output multipliers published by the Department of Commerce's Bureau of Economic Analysis (BEA) for the state of Florida. In contrast, the income multiplier estimates underestimate the BEA's income multipliers. For a given multiplier type, Spearman rank correlation analysis shows that the multiplier estimates and the BEA multipliers have statistically different rank orderings of row elements. The above tests also find no significant differences, in either size or ranking distributions, among the vectors of multiplier estimates.
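The regionalization step can be sketched concretely: the simple location quotient for sector i is SLQ_i = (e_i/Σe) / (E_i/ΣE) (regional vs. national shares), regional coefficients are r_ij = a_ij · min(1, SLQ_i), and total output multipliers are the column sums of the Leontief inverse (I − R)⁻¹. A minimal two-sector example with made-up data (the study itself used the 1987 BEA benchmark accounts):

```python
def slq(regional, national):
    """Simple location quotients from sector-level activity data."""
    tot_r, tot_n = sum(regional), sum(national)
    return [(r / tot_r) / (n / tot_n) for r, n in zip(regional, national)]

# Made-up earnings by sector (region vs. nation):
e_region, e_nation = [40.0, 10.0], [300.0, 200.0]
q = slq(e_region, e_nation)

# National direct-requirements matrix A (a_ij = input of i per $ of j):
A = [[0.20, 0.10],
     [0.15, 0.25]]
# Regionalize each row: r_ij = a_ij * min(1, SLQ_i)
R = [[A[i][j] * min(1.0, q[i]) for j in range(2)] for i in range(2)]

# Output multipliers = column sums of (I - R)^-1,
# inverted via the closed form for a 2x2 matrix.
m = [[1 - R[0][0], -R[0][1]],
     [-R[1][0], 1 - R[1][1]]]
det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
inv = [[m[1][1] / det, -m[0][1] / det],
       [-m[1][0] / det, m[0][0] / det]]
multipliers = [inv[0][j] + inv[1][j] for j in range(2)]
assert all(mult > 1.0 for mult in multipliers)
```

Swapping earnings for employment, establishment counts, or payroll changes `q` and hence the regionalized matrix, which is exactly the specification sensitivity the study measures.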

Abstract:

This Bachelor's thesis provides readers with tools and scripts for the control of a 7DOF manipulator, backed by some theory from robotics and computer science in order to better contextualize the work. In practice, we survey the most common software and development environments used for the task: ROS, visual simulation with VREP and RVIZ, and a nearly stand-alone ROS extension called MoveIt!, a very complete programming interface for trajectory planning and obstacle avoidance. As the introductory chapter makes clear, the ability to detect collision objects through a camera sensor and re-plan to the desired end-effector pose is not enough. This work is embedded in a more complex system in which recognition of particular objects is needed. Using a ROS package and customized scripts, a detailed procedure is provided for distinguishing a particular object, retrieving its reference frame with respect to a known one, and then navigating to that target. Alongside the technical details, the aim is also to provide working scripts and a dedicated appendix (A) to consult when putting things together.
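Retrieving an object's reference frame "with respect to a known one" boils down to composing homogeneous transforms (in ROS this is what the tf machinery does). A plain-Python 2D sketch with invented poses, not the thesis's actual scripts:

```python
from math import cos, sin, pi

def make_tf(theta, tx, ty):
    """2D homogeneous transform: rotation by theta plus translation."""
    return [[cos(theta), -sin(theta), tx],
            [sin(theta),  cos(theta), ty],
            [0.0, 0.0, 1.0]]

def compose(a, b):
    """Matrix product a @ b of two 3x3 homogeneous transforms."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# Invented poses: camera in the robot base frame, object in the camera frame.
base_T_cam = make_tf(pi / 2, 0.5, 0.0)
cam_T_obj  = make_tf(0.0, 1.0, 0.2)

# Object pose in the base frame = composition of the two transforms;
# this pose can then be handed to the planner as a navigation target.
base_T_obj = compose(base_T_cam, cam_T_obj)
x, y = base_T_obj[0][2], base_T_obj[1][2]
```

The same chaining generalizes to 3D (4×4 matrices), where the resulting pose would be passed to MoveIt! as the end-effector goal.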

Abstract:

Funding/Support and role of the sponsor: None.


Abstract:

A method of accurately controlling the position of a mobile robot using an external Large Volume Metrology (LVM) instrument is presented in this paper. By utilizing an LVM instrument such as a laser tracker in mobile robot navigation, many of the most difficult problems in mobile robot navigation can be simplified or avoided. Using the real-time position information from the laser tracker, a very simple navigation algorithm, and a low-cost robot, 5 mm repeatability was achieved over a volume of 30 m radius. A surface digitization scan of a wind turbine blade section was also demonstrated, illustrating possible applications of the method in manufacturing processes. © Springer-Verlag Berlin Heidelberg 2010.
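The core idea, closing the control loop with the external tracker's measurement rather than odometry, can be sketched as a simple proportional controller (illustrative only; the paper's actual control law is not reproduced here, and the simulated "tracker measurement" is taken to be the true position):

```python
def navigate(start, target, gain=0.5, tol=0.005, max_steps=100):
    """Drive toward target using only externally measured positions.
    Returns the number of steps taken to get within tol (5 mm)."""
    x, y = start
    for step in range(max_steps):
        mx, my = x, y                        # laser-tracker measurement
        ex, ey = target[0] - mx, target[1] - my
        if (ex * ex + ey * ey) ** 0.5 < tol:
            return step
        x += gain * ex                       # proportional correction
        y += gain * ey
    return max_steps

steps = navigate(start=(0.0, 0.0), target=(10.0, 5.0))
```

Because every correction is computed from an absolute measurement, wheel slip and drift (the hard problems of dead-reckoning navigation) never accumulate.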

Abstract:

The auditory evoked N1m-P2m response complex presents a challenging case for MEG source modelling because symmetrical, phase-locked activity occurs in the hemispheres both contralateral and ipsilateral to stimulation. Beamformer methods in particular can be susceptible to localisation bias and spurious sources under these conditions. This study explored the accuracy and efficiency of event-related beamformer source models for auditory MEG data under typical experimental conditions: monaural and diotic stimulation, and whole-head beamformer analysis compared to a half-head analysis using only sensors from the hemisphere contralateral to stimulation. Event-related beamformer localisations were also compared with more traditional single-dipole models. At the group level, the event-related beamformer performed as well as the single-dipole models in terms of accuracy for both the N1m and the P2m, and in terms of efficiency (number of successful source models) for the N1m. The results yielded by the half-head analysis did not differ significantly from those produced by the traditional whole-head analysis. Any localisation bias caused by the presence of correlated sources is minimal in the context of the inter-individual variability in source localisations. In conclusion, event-related beamformers provide a useful alternative to equivalent-current dipole models for the localisation of auditory evoked responses.

Abstract:

X-ray computed tomography (CT) imaging constitutes one of the most widely used diagnostic tools in radiology today, with nearly 85 million CT examinations performed in the U.S. in 2011. CT imparts a relatively high radiation dose to the patient compared to other x-ray imaging modalities; as a result of this, coupled with its popularity, CT is currently the single largest source of medical radiation exposure to the U.S. population. For this reason, there is a critical need to optimize CT examinations such that the dose is minimized while the quality of the CT images is not degraded. This optimization can be difficult to achieve due to the relationship between dose and image quality: all else being held equal, reducing the dose degrades image quality and can impact the diagnostic value of the CT examination.

A recent push from the medical and scientific community towards using lower doses has spawned new dose reduction technologies such as automatic exposure control (i.e., tube current modulation) and iterative reconstruction algorithms. In theory, these technologies could allow for scanning at reduced doses while maintaining the image quality of the exam at an acceptable level. Therefore, there is a scientific need to establish the dose reduction potential of these new technologies in an objective and rigorous manner. Establishing these dose reduction potentials requires precise and clinically relevant metrics of CT image quality, as well as practical and efficient methodologies to measure such metrics on real CT systems. The currently established methodologies for assessing CT image quality are not appropriate to assess modern CT scanners that have implemented those aforementioned dose reduction technologies.

Thus the purpose of this doctoral project was to develop, assess, and implement new phantoms, image quality metrics, analysis techniques, and modeling tools that are appropriate for image quality assessment of modern clinical CT systems. The project developed image quality assessment methods in the context of three distinct paradigms, (a) uniform phantoms, (b) textured phantoms, and (c) clinical images.

The work in this dissertation used the “task-based” definition of image quality. That is, image quality was broadly defined as the effectiveness with which an image can be used for its intended task. Under this definition, any assessment of image quality requires three components: (1) a well-defined imaging task (e.g., detection of subtle lesions), (2) an “observer” to perform the task (e.g., a radiologist or a detection algorithm), and (3) a way to measure the observer’s performance in completing the task at hand (e.g., detection sensitivity/specificity).

First, this task-based image quality paradigm was implemented using a novel multi-sized phantom platform (with uniform background) developed specifically to assess modern CT systems (Mercury Phantom, v3.0, Duke University). A comprehensive evaluation was performed on a state-of-the-art CT system (SOMATOM Definition Force, Siemens Healthcare) in terms of noise, resolution, and detectability as a function of patient size, dose, tube energy (i.e., kVp), automatic exposure control, and reconstruction algorithm (i.e., Filtered Back-Projection – FBP vs. Advanced Modeled Iterative Reconstruction – ADMIRE). A mathematical observer model (i.e., computer detection algorithm) was implemented and used as the basis of image quality comparisons. It was found that image quality increased with increasing dose and decreasing phantom size. The CT system exhibited nonlinear noise and resolution properties, especially at very low doses, large phantom sizes, and for low-contrast objects. Objective image quality metrics generally increased with increasing dose and ADMIRE strength, and with decreasing phantom size. The ADMIRE algorithm could offer comparable image quality at reduced doses or improved image quality at the same dose (increase in detectability index by up to 163% depending on iterative strength). The use of automatic exposure control resulted in more consistent image quality with changing phantom size.

Based on those results, the dose reduction potential of ADMIRE was further assessed specifically for the task of detecting small (<=6 mm) low-contrast (<=20 HU) lesions. A new low-contrast detectability phantom (with uniform background) was designed and fabricated using a multi-material 3D printer. The phantom was imaged at multiple dose levels and images were reconstructed with FBP and ADMIRE. Human perception experiments were performed to measure the detection accuracy from FBP and ADMIRE images. It was found that ADMIRE had equivalent performance to FBP at 56% less dose.

Using the same image data as the previous study, a number of different mathematical observer models were implemented to assess which models would result in image quality metrics that best correlated with human detection performance. The models included naïve simple metrics of image quality such as contrast-to-noise ratio (CNR) and more sophisticated observer models such as the non-prewhitening matched filter observer model family and the channelized Hotelling observer model family. It was found that non-prewhitening matched filter observers and the channelized Hotelling observers both correlated strongly with human performance. Conversely, CNR was found to not correlate strongly with human performance, especially when comparing different reconstruction algorithms.
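The non-prewhitening (NPW) matched filter named above admits a compact sketch: the observer's decision variable is the inner product of each image with the known signal template, and detectability d' is the separation of that statistic between signal-present and signal-absent images. A toy simulation with white Gaussian noise (not the study's CT data):

```python
import random

def npw_dprime(signal, images_present, images_absent):
    """NPW matched filter: score = dot(template, image); d' is the
    standardized separation of scores between the two image classes."""
    def score(img):
        return sum(s * v for s, v in zip(signal, img))
    t1 = [score(img) for img in images_present]
    t0 = [score(img) for img in images_absent]
    m1, m0 = sum(t1) / len(t1), sum(t0) / len(t0)
    v1 = sum((t - m1) ** 2 for t in t1) / (len(t1) - 1)
    v0 = sum((t - m0) ** 2 for t in t0) / (len(t0) - 1)
    return (m1 - m0) / ((0.5 * (v1 + v0)) ** 0.5)

random.seed(0)
n_pix, contrast = 64, 0.8
signal = [contrast] * 8 + [0.0] * (n_pix - 8)   # small flat "lesion"
def noise():
    return [random.gauss(0, 1) for _ in range(n_pix)]
present = [[s + e for s, e in zip(signal, noise())] for _ in range(200)]
absent  = [noise() for _ in range(200)]
d = npw_dprime(signal, present, absent)
```

In white noise the expected d' is sqrt(sum(s**2)) ≈ 2.26 here; unlike CNR, this index accounts for the signal's spatial extent, which is one reason the observer models correlated with human performance while CNR did not.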

The uniform background phantoms used in the previous studies provided a good first-order approximation of image quality. However, given their simplicity and the complexity of iterative reconstruction algorithms, it is possible that such phantoms are not fully adequate to assess the clinical impact of iterative algorithms, because patient images obviously do not have smooth uniform backgrounds. To test this hypothesis, two textured phantoms (classified as gross texture and fine texture) and a uniform phantom of similar size were built and imaged on a SOMATOM Flash scanner (Siemens Healthcare). Images were reconstructed using FBP and Sinogram Affirmed Iterative Reconstruction (SAFIRE). Using an image subtraction technique, quantum noise was measured in all images of each phantom. It was found that in FBP, the noise was independent of the background (textured vs. uniform). However, for SAFIRE, noise increased by up to 44% in the textured phantoms compared to the uniform phantom. As a result, the noise reduction from SAFIRE was found to be up to 66% in the uniform phantom but as low as 29% in the textured phantoms. Based on this result, it was clear that further investigation was needed to understand the impact that background texture has on image quality when iterative reconstruction algorithms are used.
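The image-subtraction technique rests on a simple identity: subtracting two repeated scans cancels the fixed anatomy/texture, and the difference image has √2 times the per-image noise, so dividing the measured standard deviation by √2 recovers it. A toy simulation (invented numbers, not the study's measurements):

```python
import random
from math import sqrt
from statistics import pstdev

random.seed(1)
sigma_true = 12.0                                  # per-image noise (HU)
texture = [50.0 * (i % 7) for i in range(10_000)]  # fixed phantom texture
def scan():
    return [t + random.gauss(0, sigma_true) for t in texture]

scan1, scan2 = scan(), scan()
# A single image's std is dominated by the texture, not quantum noise...
naive_std = pstdev(scan1)
# ...but the texture cancels in the difference of two repeated scans:
diff = [a - b for a, b in zip(scan1, scan2)]
sigma_est = pstdev(diff) / sqrt(2)
```

Here `naive_std` is near 100 HU (the texture) while `sigma_est` recovers the true 12 HU noise level.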

To further investigate this phenomenon with more realistic textures, two anthropomorphic textured phantoms were designed to mimic lung vasculature and fatty soft tissue texture. The phantoms (along with a corresponding uniform phantom) were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Scans were repeated a total of 50 times in order to obtain ensemble statistics of the noise. A novel method of estimating the noise power spectrum (NPS) from irregularly shaped ROIs was developed. It was found that SAFIRE images had highly locally non-stationary noise patterns, with pixels near edges having higher noise than pixels in more uniform regions. Compared to FBP, SAFIRE images had 60% less noise on average in uniform regions; for edge pixels, noise was between 20% higher and 40% lower. The noise texture (i.e., NPS) was also highly dependent on the background texture for SAFIRE. Therefore, it was concluded that quantum noise properties in uniform phantoms are not representative of those in patients for iterative reconstruction algorithms, and that texture should be considered when assessing the image quality of iterative algorithms.

To move beyond assessing noise properties in textured phantoms and toward assessing detectability, a series of new phantoms was designed specifically to measure low-contrast detectability in the presence of background texture. The textures used were optimized with a genetic algorithm to match the texture in the liver regions of actual patient CT images. The so-called “Clustered Lumpy Background” texture synthesis framework was used to generate the modeled texture. Three textured phantoms and a corresponding uniform phantom were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Images were reconstructed with FBP and SAFIRE and analyzed using a multi-slice channelized Hotelling observer to measure detectability and the dose reduction potential of SAFIRE based on the uniform and textured phantoms. It was found that at the same dose, the improvement in detectability from SAFIRE (compared to FBP) was higher when measured in a uniform phantom than in the textured phantoms.

The final trajectory of this project aimed at developing methods to mathematically model lesions, as a means to help assess image quality directly from patient images. The mathematical modeling framework is first presented. The models describe a lesion’s morphology in terms of size, shape, contrast, and edge profile as an analytical equation. The models can be voxelized and inserted into patient images to create so-called “hybrid” images. These hybrid images can then be used to assess detectability or estimability with the advantage that the ground truth of the lesion morphology and location is known exactly. Based on this framework, a series of liver lesions, lung nodules, and kidney stones were modeled based on images of real lesions. The lesion models were virtually inserted into patient images to create a database of hybrid images to go along with the original database of real lesion images. ROI images from each database were assessed by radiologists in a blinded fashion to determine the realism of the hybrid images. It was found that the radiologists could not readily distinguish between real and virtual lesion images (area under the ROC curve was 0.55). This study provided evidence that the proposed mathematical lesion modeling framework could produce reasonably realistic lesion images.
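A lesion model of the kind described, with size, contrast, and edge profile expressed as an analytical equation, might look like the radially symmetric profile below (a hypothetical functional form for illustration; the dissertation's actual parameterization is not given in this abstract):

```python
def lesion_value(r, contrast=-15.0, radius=5.0, edge=8.0):
    """Radially symmetric lesion: full contrast at the center, with a
    smooth falloff of sharpness `edge` centered at r = radius (mm)."""
    return contrast / (1.0 + (r / radius) ** edge)

# Voxelize the analytical model onto a small 2D grid (1 mm spacing),
# the step that precedes insertion into a patient image.
size, spacing = 21, 1.0
center = size // 2
image = [[lesion_value(((i - center) ** 2 + (j - center) ** 2) ** 0.5
                       * spacing)
          for j in range(size)] for i in range(size)]
```

Because the ground-truth contrast, radius, and edge sharpness are known exactly for every inserted lesion, estimation error of any downstream measurement can be computed directly, which is the key advantage of the hybrid-image approach.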

Based on that result, two studies were conducted which demonstrated the utility of the lesion models. The first study used the modeling framework as a measurement tool to determine how dose and reconstruction algorithm affected the quantitative analysis of liver lesions, lung nodules, and renal stones in terms of their size, shape, attenuation, edge profile, and texture features. The same database of real lesion images used in the previous study was used for this study. That database contained images of the same patient at 2 dose levels (50% and 100%) along with 3 reconstruction algorithms from a GE 750HD CT system (GE Healthcare). The algorithms in question were FBP, Adaptive Statistical Iterative Reconstruction (ASiR), and Model-Based Iterative Reconstruction (MBIR). A total of 23 quantitative features were extracted from the lesions under each condition. It was found that both dose and reconstruction algorithm had a statistically significant effect on the feature measurements. In particular, radiation dose affected five, three, and four of the 23 features (related to lesion size, conspicuity, and pixel-value distribution) for liver lesions, lung nodules, and renal stones, respectively. MBIR significantly affected 9, 11, and 15 of the 23 features (including size, attenuation, and texture features) for liver lesions, lung nodules, and renal stones, respectively. Lesion texture was not significantly affected by radiation dose.

The second study demonstrating the utility of the lesion modeling framework focused on assessing the detectability of very low-contrast liver lesions in abdominal imaging. Specifically, detectability was assessed as a function of dose and reconstruction algorithm. As part of a parallel clinical trial, images from 21 patients were collected at 6 dose levels per patient on a SOMATOM Flash scanner. Subtle liver lesion models (contrast = -15 HU) were inserted into the raw projection data from the patient scans. The projections were then reconstructed with FBP and SAFIRE (strength 5), and lesion-less images were also reconstructed. Noise, contrast, CNR, and the detectability index of an observer model (non-prewhitening matched filter) were assessed. It was found that, compared to FBP, SAFIRE reduced noise by 52%, reduced contrast by 12%, increased CNR by 87%, and increased detectability index by 65%. Further, a 2AFC human perception experiment was performed to assess the dose reduction potential of SAFIRE, which was found to be 22% relative to the standard-of-care dose.

In conclusion, this dissertation provides to the scientific community a series of new methodologies, phantoms, analysis techniques, and modeling tools that can be used to rigorously assess image quality from modern CT systems. Specifically, methods to properly evaluate iterative reconstruction have been developed and are expected to aid in the safe clinical implementation of dose reduction technologies.

Abstract:

The focus on how one is behaving, feeling, and thinking provides a powerful source of self-knowledge. How is this self-knowledge utilized in the dynamic reconstruction of autobiographical memories? How, in turn, might autobiographical memories support identity and the self-system? I address these questions through a critical review of the literature on autobiographical memory and the self-system, with a special focus on the self-concept, self-knowledge, and identity. I then outline the methods and results of a prospective longitudinal study examining the effects of an identity change on memory for events related to that identity. Participant-rated memory characteristics, computer-generated ratings of narrative content and structure, and neutral-observer ratings of coherence were examined for changes over time related to an identity change, as well as for their ability to predict an identity change. The conclusions from this study are threefold: (1) when the rated centrality of an event decreases, the reported instances of retrieval, the phenomenology associated with retrieval, and the number of words used to describe the memory also decrease; (2) memory accuracy (here, estimating past behaviors) was not influenced by an identity change; and (3) remembering is not unidirectional – characteristics of identity-relevant memories and the life story predict and may help support persistence with an identity (here, an academic trajectory).

Abstract:

Purpose: To develop, evaluate and apply a novel high-resolution 3D remote dosimetry protocol for validation of MRI guided radiation therapy treatments (MRIdian® by ViewRay®). We demonstrate the first application of the protocol (including two small but required new correction terms) utilizing radiochromic 3D plastic PRESAGE® with optical-CT readout.

Methods: A detailed study of PRESAGE® dosimeters (2kg) was conducted to investigate the temporal and spatial stability of radiation-induced optical density change (ΔOD) over 8 days. Temporal stability was investigated on 3 dosimeters irradiated with four equally-spaced square 6MV fields delivering doses between 10cGy and 300cGy. Doses were imaged (read out) by optical-CT at multiple intervals. Spatial stability of the ΔOD response was investigated on 3 other dosimeters irradiated uniformly with 15MV extended-SSD fields at doses of 15cGy, 30cGy and 60cGy. Temporal and spatial (radial) changes were investigated using CERR and MATLAB’s Curve Fitting Toolbox. A protocol was developed to extrapolate ΔOD readings measured at t=48hr (the typical shipment time in remote dosimetry) back to time t=1hr.

Results: All dosimeters were observed to gradually darken with time (<5% per day). Consistent intra-batch sensitivity (0.0930±0.002 ΔOD/cm/Gy) and linearity (R2=0.9996) were observed at t=1hr. A small radial effect (<3%) was observed, attributed to curing thermodynamics during manufacture. The refined remote dosimetry protocol (including polynomial correction terms for the temporal and spatial effects, CT and CR) was then applied to independent dosimeters irradiated with MR-IGRT treatments. Excellent line-profile agreement and 3D gamma results for 3%/3mm with a 10% threshold were observed, with an average passing rate of 96.5% ± 3.43%.
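The 3%/3mm gamma criterion quoted above combines a dose-difference tolerance and a distance-to-agreement tolerance: each measured point passes if some nearby reference point yields a combined metric ≤ 1. A simplified 1D global-gamma sketch with made-up dose profiles (not the study's optical-CT analysis pipeline):

```python
def gamma_pass_rate(ref, meas, spacing=1.0, dta=3.0, dd=0.03):
    """1D global gamma passing rate (3%/3mm by default). ref and meas
    are dose profiles on the same grid; dd is relative to max(ref)."""
    d_max = max(ref)
    passed = 0
    for i, dm in enumerate(meas):
        g2 = []                                # squared gamma candidates
        for j, dr in enumerate(ref):
            dist = abs(i - j) * spacing
            if dist > 3 * dta:                 # limit the search window
                continue
            g2.append((dist / dta) ** 2 + ((dm - dr) / (dd * d_max)) ** 2)
        if min(g2) <= 1.0:                     # gamma <= 1 iff gamma^2 <= 1
            passed += 1
    return 100.0 * passed / len(meas)

ref  = [100.0] * 20 + [50.0] * 20
meas = [101.0] * 20 + [51.5] * 20              # small dose discrepancies
rate = gamma_pass_rate(ref, meas)
```

Real 3D gamma analysis interpolates between grid points and searches a spherical neighborhood, but the pass/fail logic is the same.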

Conclusion: A novel 3D remote dosimetry protocol is presented capable of validation of advanced radiation treatments (including MR-IGRT). The protocol uses 2kg radiochromic plastic dosimeters read-out by optical-CT within a week of treatment. The protocol requires small corrections for temporal and spatially-dependent behaviors observed between irradiation and readout.

Abstract:

Visual Inspection with Acetic Acid (VIA) and Visual Inspection with Lugol’s Iodine (VILI) are increasingly recommended in various cervical cancer screening protocols in low-resource settings. Although VIA is more widely used, VILI has been advocated as an easier and more specific screening test. However, VILI has not been well validated as a stand-alone screening test or compared against VIA, nor has it been validated for use in HIV-infected women. We carried out a randomized clinical trial to compare the diagnostic accuracy of VIA and VILI among HIV-infected women. Women attending the Family AIDS Care and Education Services (FACES) clinic in western Kenya were enrolled and randomized to undergo either VIA or VILI with colposcopy. Lesions suspicious for cervical intraepithelial neoplasia 2 or greater (CIN2+) were biopsied. Between October 2011 and June 2012, 654 women were randomized to undergo VIA or VILI. The test positivity rates were 26.2% for VIA and 30.6% for VILI (p = 0.22). The rate of detection of CIN2+ was 7.7% in the VIA arm and 11.5% in the VILI arm (p = 0.10). There was no significant difference in the diagnostic performance of VIA and VILI for the detection of CIN2+. Sensitivity and specificity were 84.0% and 78.6%, respectively, for VIA, and 84.2% and 76.4% for VILI. The positive and negative predictive values were 24.7% and 98.3% for VIA, and 31.7% and 97.4% for VILI. Among women with CD4+ count < 350, VILI had significantly decreased specificity (66.2%) compared to VIA in the same group (83.9%, p = 0.02) and compared to VILI performed among women with CD4+ count ≥ 350 (79.7%, p = 0.02). Overall, VIA and VILI had similar diagnostic accuracy and rates of CIN2+ detection among HIV-infected women.
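The accuracy figures reported all derive from a standard 2×2 screening table. The helper below computes them from raw counts; the counts shown are invented so that the outputs roughly reproduce the VIA figures quoted above (the trial's actual cell counts are not given in the abstract):

```python
def screening_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 screening table:
    tp/fp = test-positive with/without disease, fn/tn = test-negative."""
    return {
        "sensitivity": tp / (tp + fn),   # P(test+ | disease)
        "specificity": tn / (tn + fp),   # P(test- | no disease)
        "ppv": tp / (tp + fp),           # P(disease | test+)
        "npv": tn / (tn + fn),           # P(no disease | test-)
    }

# Invented counts for a hypothetical screening arm of ~320 women:
m = screening_metrics(tp=21, fp=64, fn=4, tn=235)
```

Note how a low PPV (~25%) can coexist with a high NPV (~98%) when disease prevalence is low: most positives are false alarms, yet a negative test is reassuring.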