823 results for Image-Intuitive Modes of Perception


Relevance: 100.00%

Abstract:

This article reviews the problems of mutual adaptation between humans and the computer environment. Features of image-intuitive and physical-mathematical modes of perception and thinking are investigated. The problems of choosing the means and methods of differentiated education in the computerized society are considered.

Relevance: 100.00%

Abstract:

Objective: The aim of this study was to evaluate the performance of observers in diagnosing proximal caries in digital images obtained from bitewing radiographs using two scanners and four digital cameras, saved as Joint Photographic Experts Group (JPEG) and Tagged Image File Format (TIFF) files, and to compare them with the original conventional radiographs. Method: In total, 56 extracted teeth were radiographed with Kodak Insight film (Eastman Kodak, Rochester, NY) in a Kaycor Yoshida X-ray device (Kaycor X-707; Yoshida Dental Manufacturing Co., Tokyo, Japan) operating at 70 kV and 7 mA with an exposure time of 0.40 s. The radiographs were digitized with CanonScan D646U (Canon USA Inc., Newport News, VA) and Genius ColorPage HR7X (KYE Systems Corp. America, Doral, FL) scanners, and with Canon Powershot G2 (Canon USA Inc.), Canon RebelXT (Canon USA Inc.), Nikon Coolpix 8700 (Nikon Inc., Melville, NY), and Nikon D70s (Nikon Inc.) digital cameras, in JPEG and TIFF formats. Three observers evaluated the images. The teeth were then examined under a polarized-light microscope to verify the presence and depth of carious lesions. Results: The probability of no diagnosis ranged from 1.34% (Insight film) to 52.83% (CanonScan/JPEG). The sensitivity ranged from 0.24 (Canon RebelXT/JPEG) to 0.53 (Insight film), the specificity ranged from 0.93 (Nikon Coolpix/JPEG, Canon Powershot/TIFF, Canon RebelXT/JPEG and TIFF) to 0.97 (CanonScan/TIFF and JPEG), and the accuracy ranged from 0.82 (Canon RebelXT/JPEG) to 0.91 (CanonScan/JPEG). Conclusion: The carious lesion diagnosis did not change with the file format (JPEG or TIFF) in which the images were saved for any of the equipment used. Only the CanonScan scanner did not perform adequately in radiograph digitization for caries diagnosis, and it is not recommended for this purpose. Dentomaxillofacial Radiology (2011) 40, 338-343. doi: 10.1259/dmfr/67185962
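The reported sensitivity, specificity, and accuracy all follow from a standard 2x2 contingency table of observer calls against the polarized-light ground truth. A minimal sketch, using illustrative counts rather than the study's actual data:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard diagnostic performance metrics from 2x2 contingency counts."""
    sensitivity = tp / (tp + fn)                # fraction of carious surfaces detected
    specificity = tn / (tn + fp)                # fraction of sound surfaces correctly cleared
    accuracy = (tp + tn) / (tp + fp + tn + fn)  # overall fraction of correct calls
    return sensitivity, specificity, accuracy

# Illustrative counts only (not taken from the study)
sens, spec, acc = diagnostic_metrics(tp=28, fp=4, tn=96, fn=25)
```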

Relevance: 100.00%

Abstract:

In 'Avalanche', an object is lowered, players staying in contact with it throughout. Normally the task is easily accomplished. However, with larger groups, counter-intuitive behaviours appear. The paper proposes a formal theory for the underlying causal mechanisms. The aim is not only to provide an explicit, testable hypothesis for the source of the observed modes of behaviour, but also to exemplify the contribution that formal theory building can make to understanding complex social phenomena. Mapping reveals the importance of geometry to the Avalanche game; each player has a pair of balancing loops, one involved in lowering the object, the other ensuring contact. With more players, sets of balancing loops interact, and these can allow dominance by reinforcing loops, causing the system to chase upwards towards an ever-increasing goal. However, a series of other effects concerning human physiology and behaviour (HPB) is posited as playing a role. The hypothesis is therefore rigorously tested using simulation. For simplicity, a 'One Degree of Freedom' case is examined, allowing all of the effects to be included whilst rendering the analysis more transparent. Formulation and experimentation with the model give insight into the behaviours. Multi-dimensional rate/level analysis indicates that there is only a narrow region in which the system is able to move downwards. Model runs reproduce the single 'desired' mode of behaviour and all three of the observed 'problematic' ones. Sensitivity analysis gives further insight into the system's modes and their causes. The behaviours are seen to arise only when the geometric effects apply (number of players greater than degrees of freedom of the object) in combination with a range of HPB effects. An analogy exists between the co-operative behaviour required here and various examples: conflicting strategic objectives in organizations, the Prisoners' Dilemma, and integrated bargaining situations.
Additionally, the game may be relatable in more direct algebraic terms to situations involving companies in which the resulting behaviours are mediated by market regulations. Finally, comment is offered on the inadequacy of some forms of theory building and the case is made for formal theory building involving the use of models, analysis and plausible explanations to create deep understanding of social phenomena.

Relevance: 100.00%

Abstract:

This paper describes an image compounding technique based on the use of different apodization functions, the evaluation of signal phases, and information from the interaction of different propagation modes of Lamb waves with defects, for enhanced damage detection, resolution, and contrast. A 16-element linear array is attached to a 1 mm thick isotropic aluminum plate with artificial defects. The array can excite the fundamental A0 and S0 modes at frequencies of 100 kHz and 360 kHz, respectively. For each mode, two synthetic aperture (SA) images, with uniform and Blackman apodization, and one Coherence Factor Map (CFM) image are obtained. The specific interaction between each propagation mode and the defects, and the characteristics of the acoustic radiation patterns due to the different apodization functions, result in images with different resolution and contrast. From the phase information, one of the SA images is selected at each pixel to compound the final image. The SA images are multiplied by the CFM image to improve contrast, and for the dispersive A0 mode a dispersion compensation technique is applied. There is a contrast improvement of 47.5 dB, reducing the dead zone and improving resolution and damage detection. © 2012 IEEE.
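The coherence factor behind a CFM is conventionally defined as the ratio of the coherent to the incoherent sum of the per-element delayed signals; the paper's exact formulation may differ, but a minimal per-pixel sketch of the standard definition is:

```python
import numpy as np

def coherence_factor(channel_signals):
    """Coherence factor for one image pixel from the per-element delayed
    signals. Values near 1 indicate a coherent reflector (true defect);
    values near 0 indicate incoherent clutter, so multiplying an SA image
    by the CFM suppresses clutter and improves contrast."""
    n = len(channel_signals)
    coherent = np.abs(np.sum(channel_signals)) ** 2
    incoherent = n * np.sum(np.abs(channel_signals) ** 2)
    return coherent / incoherent if incoherent > 0 else 0.0
```

Perfectly aligned channels give a factor of 1; fully out-of-phase channels give 0.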

Relevance: 100.00%

Abstract:

This thesis proposes that, despite many experimental studies of thinking and the development of models of thinking, such as Bruner's (1966) enactive, iconic and symbolic developmental modes, the imagery and inner verbal strategies used by children need further investigation to establish a coherent theoretical basis from which to create experimental curricula for the direct improvement of those strategies. Five hundred and twenty-three first, second and third year comprehensive school children were tested on 'recall' imagery, using a modified Betts Imagery Test, and on dual-coding processes (Paivio, 1971, p.179), by the P/W Visual/Verbal Questionnaire, measuring 'applied imagery' and inner verbalising. Three lines of investigation were pursued:

1. An investigation (a) of hypothetical representational strategy differences between boys and girls, and (b) of the extent to which strategies change with increasing age.

2. The second and third year children's use of representational processes was taken separately and compared with performance measures of perception, field independence, creativity, self-sufficiency and self-concept.

3. The second and third year children were categorised into four dual-coding strategy groups: (a) High Visual/High Verbal; (b) Low Visual/High Verbal; (c) High Visual/Low Verbal; (d) Low Visual/Low Verbal. These groups were compared on the same performance measures.

The main result indicates that a hierarchy of dual-coding strategy use can be identified that is significantly related (.01, Binomial Test) to success or failure on the performance measures: the High Visual/High Verbal group registered the highest scores, the Low Visual/High Verbal and High Visual/Low Verbal groups registered intermediate scores, and the Low Visual/Low Verbal group registered the lowest scores. Subsidiary results indicate that boys' use of visual strategies declines, and their use of verbal strategies increases, with age, while girls' recall imagery strategy increases with age. Educational implications of the main result are discussed, the establishment of experimental curricula is proposed, and further research is suggested.

Relevance: 100.00%

Abstract:

X-ray computed tomography (CT) imaging constitutes one of the most widely used diagnostic tools in radiology today, with nearly 85 million CT examinations performed in the U.S. in 2011. CT imparts a relatively high radiation dose to the patient compared to other x-ray imaging modalities; as a result of this, coupled with its popularity, CT is currently the single largest source of medical radiation exposure to the U.S. population. For this reason, there is a critical need to optimize CT examinations such that the dose is minimized while the quality of the CT images is not degraded. This optimization can be difficult to achieve due to the relationship between dose and image quality: all else being equal, reducing the dose degrades image quality and can impact the diagnostic value of the CT examination.

A recent push from the medical and scientific community towards using lower doses has spawned new dose reduction technologies such as automatic exposure control (i.e., tube current modulation) and iterative reconstruction algorithms. In theory, these technologies could allow for scanning at reduced doses while maintaining the image quality of the exam at an acceptable level. Therefore, there is a scientific need to establish the dose reduction potential of these new technologies in an objective and rigorous manner. Establishing these dose reduction potentials requires precise and clinically relevant metrics of CT image quality, as well as practical and efficient methodologies to measure such metrics on real CT systems. The currently established methodologies for assessing CT image quality are not appropriate to assess modern CT scanners that have implemented those aforementioned dose reduction technologies.

Thus the purpose of this doctoral project was to develop, assess, and implement new phantoms, image quality metrics, analysis techniques, and modeling tools that are appropriate for image quality assessment of modern clinical CT systems. The project developed image quality assessment methods in the context of three distinct paradigms, (a) uniform phantoms, (b) textured phantoms, and (c) clinical images.

The work in this dissertation used the “task-based” definition of image quality. That is, image quality was broadly defined as the effectiveness with which an image can be used for its intended task. Under this definition, any assessment of image quality requires three components: (1) a well-defined imaging task (e.g., detection of subtle lesions), (2) an “observer” to perform the task (e.g., a radiologist or a detection algorithm), and (3) a way to measure the observer’s performance in completing the task at hand (e.g., detection sensitivity/specificity).

First, this task-based image quality paradigm was implemented using a novel multi-sized phantom platform (with uniform background) developed specifically to assess modern CT systems (Mercury Phantom, v3.0, Duke University). A comprehensive evaluation was performed on a state-of-the-art CT system (SOMATOM Definition Force, Siemens Healthcare) in terms of noise, resolution, and detectability as a function of patient size, dose, tube energy (i.e., kVp), automatic exposure control, and reconstruction algorithm (i.e., Filtered Back-Projection [FBP] vs Advanced Modeled Iterative Reconstruction [ADMIRE]). A mathematical observer model (i.e., a computer detection algorithm) was implemented and used as the basis of image quality comparisons. It was found that image quality increased with increasing dose and decreasing phantom size. The CT system exhibited nonlinear noise and resolution properties, especially at very low doses, large phantom sizes, and for low-contrast objects. Objective image quality metrics generally increased with increasing dose and ADMIRE strength, and with decreasing phantom size. The ADMIRE algorithm could offer comparable image quality at reduced doses or improved image quality at the same dose (an increase in detectability index of up to 163%, depending on iterative strength). The use of automatic exposure control resulted in more consistent image quality with changing phantom size.

Based on those results, the dose reduction potential of ADMIRE was further assessed specifically for the task of detecting small (<=6 mm) low-contrast (<=20 HU) lesions. A new low-contrast detectability phantom (with uniform background) was designed and fabricated using a multi-material 3D printer. The phantom was imaged at multiple dose levels and images were reconstructed with FBP and ADMIRE. Human perception experiments were performed to measure the detection accuracy from FBP and ADMIRE images. It was found that ADMIRE had equivalent performance to FBP at 56% less dose.

Using the same image data as the previous study, a number of different mathematical observer models were implemented to assess which models would result in image quality metrics that best correlated with human detection performance. The models included naïve simple metrics of image quality such as contrast-to-noise ratio (CNR) and more sophisticated observer models such as the non-prewhitening matched filter observer model family and the channelized Hotelling observer model family. It was found that non-prewhitening matched filter observers and the channelized Hotelling observers both correlated strongly with human performance. Conversely, CNR was found to not correlate strongly with human performance, especially when comparing different reconstruction algorithms.
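By contrast with the observer models above, the naïve CNR metric that failed to track human performance reduces to a one-line ROI computation (the study's exact ROI definitions may differ):

```python
import numpy as np

def cnr(roi_lesion, roi_background):
    """Contrast-to-noise ratio: mean pixel-value difference between lesion
    and background ROIs, divided by the background noise (std). Note that
    this metric carries no information about noise texture or correlation,
    which is one reason it can mis-rank reconstruction algorithms."""
    contrast = np.mean(roi_lesion) - np.mean(roi_background)
    return contrast / np.std(roi_background)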

The uniform background phantoms used in the previous studies provided a good first-order approximation of image quality. However, due to their simplicity and due to the complexity of iterative reconstruction algorithms, it is possible that such phantoms are not fully adequate to assess the clinical impact of iterative algorithms, because patient images obviously do not have smooth uniform backgrounds. To test this hypothesis, two textured phantoms (classified as gross texture and fine texture) and a uniform phantom of similar size were built and imaged on a SOMATOM Flash scanner (Siemens Healthcare). Images were reconstructed using FBP and Sinogram Affirmed Iterative Reconstruction (SAFIRE). Using an image subtraction technique, quantum noise was measured in all images of each phantom. It was found that with FBP, the noise was independent of the background (textured vs uniform). However, with SAFIRE, noise increased by up to 44% in the textured phantoms compared to the uniform phantom. As a result, the noise reduction from SAFIRE was found to be up to 66% in the uniform phantom but as low as 29% in the textured phantoms. Based on this result, it is clear that further investigation was needed to understand the impact that background texture has on image quality when iterative reconstruction algorithms are used.

To further investigate this phenomenon with more realistic textures, two anthropomorphic textured phantoms were designed to mimic lung vasculature and fatty soft tissue texture. The phantoms (along with a corresponding uniform phantom) were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Scans were repeated a total of 50 times in order to get ensemble statistics of the noise. A novel method of estimating the noise power spectrum (NPS) from irregularly shaped ROIs was developed. It was found that SAFIRE images had highly locally non-stationary noise patterns, with pixels near edges having higher noise than pixels in more uniform regions. Compared to FBP, SAFIRE images had 60% less noise on average in uniform regions; for edge pixels, noise was between 20% higher and 40% lower. The noise texture (i.e., NPS) was also highly dependent on the background texture for SAFIRE. Therefore, it was concluded that quantum noise properties in uniform phantoms are not representative of those in patients for iterative reconstruction algorithms, and texture should be considered when assessing the image quality of iterative algorithms.
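The dissertation's irregular-ROI NPS method generalizes the standard square-ROI ensemble estimator, which can be sketched as follows (unit pixel size and the usual FFT normalization assumed; the paper's actual method is more involved):

```python
import numpy as np

def nps_2d(noise_rois, pixel_size=1.0):
    """Ensemble 2D noise power spectrum from square, signal-free noise ROIs
    (e.g., obtained by image subtraction of repeated scans).
    noise_rois: stack of shape (n_realizations, N, N).
    NPS(f) = (dx*dy / (Nx*Ny)) * <|DFT(roi - mean)|^2> over the ensemble."""
    n, ny, nx = noise_rois.shape
    # Remove each ROI's mean so the DC bin reflects noise, not offset
    dft = np.fft.fft2(noise_rois - noise_rois.mean(axis=(1, 2), keepdims=True))
    return (pixel_size ** 2 / (nx * ny)) * np.mean(np.abs(dft) ** 2, axis=0)
```

By Parseval's theorem, the NPS averaged over all frequency bins recovers the pixel noise variance, which is a useful sanity check on any implementation.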

To move beyond just assessing noise properties in textured phantoms towards assessing detectability, a series of new phantoms was designed specifically to measure low-contrast detectability in the presence of background texture. The textures used were optimized to match the texture in the liver regions of actual patient CT images using a genetic algorithm. The so-called “Clustered Lumpy Background” texture synthesis framework was used to generate the modeled texture. Three textured phantoms and a corresponding uniform phantom were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Images were reconstructed with FBP and SAFIRE and analyzed using a multi-slice channelized Hotelling observer to measure detectability and the dose reduction potential of SAFIRE based on the uniform and textured phantoms. It was found that at the same dose, the improvement in detectability from SAFIRE (compared to FBP) was higher when measured in a uniform phantom than in textured phantoms.

The final trajectory of this project aimed at developing methods to mathematically model lesions, as a means to help assess image quality directly from patient images. The mathematical modeling framework is first presented. The models describe a lesion’s morphology in terms of size, shape, contrast, and edge profile as an analytical equation. The models can be voxelized and inserted into patient images to create so-called “hybrid” images. These hybrid images can then be used to assess detectability or estimability with the advantage that the ground truth of the lesion morphology and location is known exactly. Based on this framework, a series of liver lesions, lung nodules, and kidney stones were modeled based on images of real lesions. The lesion models were virtually inserted into patient images to create a database of hybrid images to go along with the original database of real lesion images. ROI images from each database were assessed by radiologists in a blinded fashion to determine the realism of the hybrid images. It was found that the radiologists could not readily distinguish between real and virtual lesion images (area under the ROC curve was 0.55). This study provided evidence that the proposed mathematical lesion modeling framework could produce reasonably realistic lesion images.
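A minimal sketch of such an analytical lesion model, here a sphere whose contrast falls off with a sigmoid edge profile (a hypothetical parameterization for illustration; the dissertation's models describe size, shape, contrast, and edge profile more generally):

```python
import numpy as np

def lesion_model(shape, center, radius, contrast_hu, edge_width):
    """Voxelize an analytical lesion: spherical morphology, peak contrast
    `contrast_hu` (in HU), radius `radius` (voxels), and a sigmoid edge
    whose sharpness is set by `edge_width`. Hypothetical parameterization."""
    grid = np.indices(shape).astype(float)
    r = np.sqrt(sum((g - c) ** 2 for g, c in zip(grid, center)))
    return contrast_hu / (1.0 + np.exp((r - radius) / edge_width))

# Adding the voxelized model to a patient image yields a "hybrid" image
# with exactly known ground truth (patient_image is a placeholder name):
# hybrid = patient_image + lesion_model(patient_image.shape, (z, y, x), 3.0, -15.0, 0.5)
```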

Based on that result, two studies were conducted which demonstrated the utility of the lesion models. The first study used the modeling framework as a measurement tool to determine how dose and reconstruction algorithm affected the quantitative analysis of liver lesions, lung nodules, and renal stones in terms of their size, shape, attenuation, edge profile, and texture features. The same database of real lesion images used in the previous study was used for this study. That database contained images of the same patient at 2 dose levels (50% and 100%) along with 3 reconstruction algorithms from a GE 750HD CT system (GE Healthcare). The algorithms in question were FBP, Adaptive Statistical Iterative Reconstruction (ASiR), and Model-Based Iterative Reconstruction (MBIR). A total of 23 quantitative features were extracted from the lesions under each condition. It was found that both dose and reconstruction algorithm had a statistically significant effect on the feature measurements. In particular, radiation dose affected five, three, and four of the 23 features (related to lesion size, conspicuity, and pixel-value distribution) for liver lesions, lung nodules, and renal stones, respectively. MBIR significantly affected 9, 11, and 15 of the 23 features (including size, attenuation, and texture features) for liver lesions, lung nodules, and renal stones, respectively. Lesion texture was not significantly affected by radiation dose.

The second study demonstrating the utility of the lesion modeling framework focused on assessing the detectability of very low-contrast liver lesions in abdominal imaging. Specifically, detectability was assessed as a function of dose and reconstruction algorithm. As part of a parallel clinical trial, images from 21 patients were collected at 6 dose levels per patient on a SOMATOM Flash scanner. Subtle liver lesion models (contrast = -15 HU) were inserted into the raw projection data from the patient scans. The projections were then reconstructed with FBP and SAFIRE (strength 5). Lesion-less images were also reconstructed. Noise, contrast, CNR, and the detectability index of an observer model (non-prewhitening matched filter) were assessed. It was found that SAFIRE reduced noise by 52%, reduced contrast by 12%, increased CNR by 87%, and increased detectability index by 65% compared to FBP. Further, a 2AFC human perception experiment was performed to assess the dose reduction potential of SAFIRE, which was found to be 22% compared to the standard-of-care dose.

In conclusion, this dissertation provides to the scientific community a series of new methodologies, phantoms, analysis techniques, and modeling tools that can be used to rigorously assess image quality from modern CT systems. Specifically, methods to properly evaluate iterative reconstruction have been developed and are expected to aid in the safe clinical implementation of dose reduction technologies.

Relevance: 100.00%

Abstract:

Using series solutions and time-domain evolutions, we probe the eikonal limit of the gravitational and scalar-field quasinormal modes of large black holes and black branes in anti-de Sitter backgrounds. These results are particularly relevant for the AdS/CFT correspondence, since the eikonal regime is characterized by the existence of long-lived modes which (presumably) dominate the decay time scale of the perturbations. We confirm all the main qualitative features of these slowly damped modes as predicted by Festuccia and Liu [G. Festuccia and H. Liu, arXiv:0811.1033.] for the scalar-field (tensor-type gravitational) fluctuations. However, quantitatively we find dimensional-dependent correction factors. We also investigate the dependence of the quasinormal mode frequencies on the horizon radius of the black hole (brane) and the angular momentum (wave number) of vector- and scalar-type gravitational perturbations.

Relevance: 100.00%

Abstract:

The properties of the localized states of a two-component Bose-Einstein condensate confined in a nonlinear periodic potential (nonlinear optical lattice) are investigated. We discuss the existence of different types of solitons and study their stability by means of analytical and numerical approaches. The symmetry properties of the localized states with respect to nonlinear optical lattices are also investigated. We show that nonlinear optical lattices allow the existence of bright soliton modes with equal symmetry in both components and bright localized modes of mixed symmetry type, as well as dark-bright bound states and bright modes on periodic backgrounds. In spite of the quasi-one-dimensional nature of the problem, the fundamental symmetric localized modes undergo a delocalizing transition when the strength of the nonlinear optical lattice is varied. This transition is associated with the existence of an unstable solution, which exhibits a shrinking (decaying) behavior for slightly overcritical (undercritical) variations in the number of atoms.
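The setting described above can be summarized by quasi-one-dimensional coupled Gross-Pitaevskii equations with a spatially periodic nonlinearity coefficient (a standard generic form; the paper's exact normalization and lattice profile may differ):

```latex
i\,\partial_t \psi_j
  = -\tfrac{1}{2}\,\partial_x^2 \psi_j
    + \Big(\sum_{k=1}^{2} g_{jk}(x)\,|\psi_k|^2\Big)\psi_j,
\qquad
g_{jk}(x) = g^{(0)}_{jk} + g^{(1)}_{jk}\cos(2x),
\qquad j = 1,2,
```

where $\psi_{1,2}$ are the two condensate components and the periodic modulation of $g_{jk}(x)$ plays the role of the nonlinear optical lattice.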

Relevance: 100.00%

Abstract:

In this work we consider the evolution of a massive scalar field in cylindrically symmetric space-times. Quasinormal modes have been calculated for static and rotating cosmic cylinders. We found unstable modes in some cases. Rotating as well as static cosmic strings, i.e., without regular interior solutions, do not display quasinormal oscillation modes. We conclude that rotating cosmic cylinder space-times that present closed timelike curves are unstable against scalar perturbations.

Relevance: 100.00%

Abstract:

We study the massless scalar, Dirac, and electromagnetic fields propagating on a 4D brane embedded in a higher-dimensional Gauss-Bonnet space-time. We calculate, in the time domain, the fundamental quasinormal modes of a spherically symmetric black hole for such fields. Using the WKB approximation, we study quasinormal modes in the large multipole limit. We also observe a universal behavior, independent of the field and of the value of the Gauss-Bonnet parameter, at asymptotically late times.

Relevance: 100.00%

Abstract:

The AdS/CFT duality has established a mapping between quantities in the bulk AdS black-hole physics and observables in a boundary finite-temperature field theory. Such a relationship appears to be valid for an arbitrary number of spacetime dimensions, extrapolating the original formulations of Maldacena's correspondence. In the same sense, properties like the hydrodynamic behavior of AdS black-hole fluctuations have been proved to be universal. We investigate in this work the complete quasinormal spectra of gravitational perturbations of d-dimensional plane-symmetric AdS black holes (black branes). Holographically, the frequencies of the quasinormal modes correspond to the poles of two-point correlation functions of the field-theory stress-energy tensor. The important issue of the correct boundary condition to be imposed on the gauge-invariant perturbation fields at the AdS boundary is studied and elucidated in a fully d-dimensional context. We obtain the dispersion relations of the first few modes in the low-, intermediate- and high-wavenumber regimes. The sound-wave (shear-mode) behavior of the scalar (vector)-type low-frequency quasinormal modes is analytically and numerically confirmed. These results are found employing both a power series method and a direct numerical integration scheme.
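The low-wavenumber sound-wave and shear-mode behaviors referenced above take the standard leading-order hydrodynamic forms (generic normalization; the dimension-dependent coefficients are what such calculations determine):

```latex
\omega_{\text{shear}}(q) = -\,i\,\frac{\eta}{\varepsilon + p}\,q^{2} + \mathcal{O}(q^{4}),
\qquad
\omega_{\text{sound}}(q) = \pm\, c_s\, q \;-\; \frac{i}{2}\,\Gamma\, q^{2} + \mathcal{O}(q^{3}),
```

where $\eta$ is the shear viscosity, $\varepsilon + p$ the enthalpy density, $c_s$ the speed of sound, and $\Gamma$ the sound attenuation constant; for a conformal boundary theory these coefficients are fixed in terms of the temperature and the spacetime dimension.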

Relevance: 100.00%

Abstract:

Particle-image velocimetry (PIV) was used to visualize the flow within an optically transparent pediatric ventricular assist device (PVAD) under development in our laboratory. The device studied is a diaphragm-type pulsatile pump with an ejection volume of 30 ml per beating cycle, intended for temporary cardiac assistance as a bridge to transplantation or recovery in children. Of particular interest was the identification of flow patterns, including regions of stagnation and/or strong turbulence, that often promote thrombus formation and hemolysis, which can degrade the usefulness of such devices. For this purpose, phase-locked PIV measurements were performed in planes parallel to the diaphragm that drives the flow in the device. The test fluid was seeded with 10 μm polystyrene spheres, and the motion of these particles was used to determine the instantaneous flow velocity distribution in the illumination plane. These measurements revealed that flow velocities up to 1.0 m/s can occur within the PVAD. Phase-averaged velocity fields revealed the fixed vortices that drive the bulk flow within the device, though significant cycle-to-cycle variability was also quite apparent in the instantaneous velocity distributions, most notably during the filling phase. This cycle-to-cycle variability can generate strong turbulence that may contribute to greater hemolysis. Stagnation regions have also been observed between the input and output branches of the prototype, which can increase the likelihood of thrombus formation. [DOI: 10.1115/1.4001252]
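Phase averaging of phase-locked PIV data amounts to binning instantaneous vector fields by their phase in the pumping cycle and taking per-bin statistics; the per-bin RMS of the fluctuations is one way to quantify the cycle-to-cycle variability noted above. A minimal sketch (array shapes are an assumption, not the authors' actual pipeline):

```python
import numpy as np

def phase_average(velocity_fields, phase_labels, n_phases):
    """Phase-averaged velocity fields from phase-locked PIV snapshots.
    velocity_fields: (n_snapshots, H, W, 2) instantaneous 2D vector fields.
    phase_labels: (n_snapshots,) integer phase bin of each snapshot.
    Returns the per-phase mean fields and the per-phase RMS of the
    cycle-to-cycle fluctuations about that mean."""
    means = np.stack([velocity_fields[phase_labels == p].mean(axis=0)
                      for p in range(n_phases)])
    rms = np.stack([velocity_fields[phase_labels == p].std(axis=0)
                    for p in range(n_phases)])
    return means, rms
```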

Relevance: 100.00%

Abstract:

When wandering around a city such as São Paulo, we are surrounded by letters, numbers and symbols. These elements form part of an environment full of signs in many shapes and sizes that compete for our attention. Our perception of these elements contributes towards our spatial guidance and sense of place. The idea of 'reading' the city, or urban environment, was introduced by Kevin Lynch, for whom reading the urban structure follows on from recognizing or identifying its numerous visual elements, not necessarily verbal ones. Beginning with a brief bibliographic review of perception theories, this article combines concepts from environmental psychology with concerns raised by the fields of information design and epigraphy studies, setting out the basis of a methodological proposal for the study of typography and lettering in the urban environment.