267 results for Image Interpretation
Abstract:
The ubiquity of multimodality in hypermedia environments is undeniable. Bezemer and Kress (2008) have argued that writing has been displaced by image as the central mode for representation. Given the current technical affordances of digital technology and user-friendly interfaces that enable the ease of multimodal design, the conspicuous absence of images in certain domains of cyberspace deserves critical analysis. In this presentation, I examine the politics of discourses implicit within hypertextual spaces, drawing textual examples from a higher education website. I critically examine the role of writing and other modes of production used in what Fairclough (1993) refers to as discourses of marketisation in higher education, tracing four pervasive discourses of teaching and learning in the current economy: i) materialisation, ii) personalisation, iii) technologisation, and iv) commodification (Fairclough, 1999). Each of these arguments is supported by the critical analysis of multimodal texts. The first is a podcast highlighting the new architectonic features of a university learning space. The second is a podcast and transcript of a university Open Day interview with prospective students. The third is a time-lapse video showing the construction of a new science and engineering precinct. These three multimodal texts contrast with a final web-based text that exhibits a predominance of writing and the powerful absence, or silencing, of the image. I connect the weightiness of words and the function of monomodality in the commodification of discourses, its resistance to the multimodal affordances of web-based technologies, and the way this is used to establish particular sets of subject positions and ideologies which readers are constrained to occupy.
Applying principles of critical language study from theorists including Fairclough, Kress, Lemke, and others whose semiotic analysis of texts focuses on the connections between language, power, and ideology, I demonstrate how the denial of image and the privileging of written words in the multimodality of cyberspace is an ideological effect that accentuates the dominance of the institution.
Abstract:
A strongly progressive surveying and mapping industry depends on a shared understanding of the industry as it exists, some shared vision or imagination of what the industry might become, and some shared action plan capable of bringing about a realisation of that vision. The emphasis on sharing implies a need for consensus reached through widespread discussion and mutual understanding. Unless this occurs, concerted action is unlikely. A more likely outcome is that industry representatives will negate each other's efforts in their separate bids for progress. The process of bringing about consensual viewpoints is essentially one of establishing an industry identity. Establishing the industry's identity and purpose is a prerequisite for rational development of the industry's education and training, its promotion and marketing, and operational research that can deal with industry potential and efficiency. This paper interprets evolutionary developments occurring within Queensland's surveying and mapping industry within a framework that sets out logical requirements for a viable industry.
Abstract:
The most common software analysis tools available for measuring fluorescence images are designed for two-dimensional (2D) data; they rely on manual settings for inclusion and exclusion of data points, and on computer-aided pattern recognition to support the interpretation and findings of the analysis. It has become increasingly important to be able to measure fluorescence images constructed from three-dimensional (3D) datasets in order to capture the complexity of cellular dynamics and understand the basis of cellular plasticity within biological systems. Sophisticated microscopy instruments have permitted the visualization of 3D fluorescence images through the acquisition of multispectral fluorescence images and powerful analytical software that reconstructs the images from confocal stacks to provide a 3D representation of the collected 2D images. Advanced design-based stereology methods have progressed from the approximations and assumptions of the original model-based stereology (1), even in complex tissue sections (2). Despite these scientific advances in microscopy, a need remains for an automated analytic method that fully exploits the intrinsic 3D data to allow for the analysis and quantification of complex changes in cell morphology, protein localization and receptor trafficking. Current tools available to quantify fluorescence images include MetaMorph (Molecular Devices, Sunnyvale, CA) and ImageJ (NIH), which provide manual analysis. Imaris (Andor Technology, Belfast, Northern Ireland) software provides the feature MeasurementPro, which allows the manual creation of measurement points that can be placed in a volume image or drawn on a series of 2D slices to create a 3D object. This method is useful for single-click point measurements to measure a line distance between two objects or to create a polygon that encloses a region of interest, but it is difficult to apply to complex cellular network structures.
Filament Tracer (Andor) allows automatic detection of 3D neuronal filament-like structures; however, this module has been developed to measure defined structures such as neurons, which are composed of dendrites, axons and spines (a tree-like structure). This module has been ingeniously utilized to make morphological measurements of non-neuronal cells (3); however, the output data describe an extended cellular network using software that depends on a defined cell shape rather than an amorphous-shaped cellular model. To overcome the issue of analyzing amorphous-shaped cells and to make the software more suitable for biological applications, Imaris developed Imaris Cell. This was a scientific project with the Eidgenössische Technische Hochschule, developed to calculate the relationship between cells and organelles. While the software enables the detection of biological constraints, by forcing one nucleus per cell and using cell membranes to segment cells, it cannot be used to analyze fluorescence data that are not continuous, because it ideally builds a cell surface without void spaces. To our knowledge, no user-modifiable automated approach has yet been developed that provides morphometric information from 3D fluorescence images and captures the spatial information of cells of undefined shape (Figure 1). We have developed an analytical platform using the Imaris core software module and Imaris XT interfaced to MATLAB (MathWorks, Inc.). These tools allow the 3D measurement of cells without a pre-defined shape and with inconsistent fluorescence network components. Furthermore, this method will allow researchers who have extensive expertise in biological systems, but little familiarity with computer applications, to perform quantification of morphological changes in cell dynamics.
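The general technique the platform automates, extracting morphometric measurements from amorphous, discontinuous structures in 3D data, can be illustrated independently of any vendor API. The sketch below is hypothetical and is not the authors' Imaris XT/MATLAB pipeline: it labels 6-connected voxel components above an intensity threshold in a 3D array and reports each component's volume in voxels.

```python
import numpy as np
from collections import deque

def label_components(volume, threshold):
    """Label 6-connected components of voxels above `threshold` in a 3D
    array; returns a label volume and a list of component voxel counts."""
    mask = volume > threshold
    labels = np.zeros(volume.shape, dtype=int)
    sizes = []
    next_label = 0
    for idx in zip(*np.nonzero(mask)):
        if labels[idx]:
            continue  # voxel already belongs to a labelled component
        next_label += 1
        labels[idx] = next_label
        queue = deque([idx])
        count = 0
        while queue:  # breadth-first flood fill over face neighbours
            z, y, x = queue.popleft()
            count += 1
            for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                               (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                n = (z + dz, y + dy, x + dx)
                if all(0 <= c < s for c, s in zip(n, volume.shape)) \
                        and mask[n] and not labels[n]:
                    labels[n] = next_label
                    queue.append(n)
        sizes.append(count)
    return labels, sizes
```

Because no cell shape is assumed, disconnected or irregular fluorescence signal is simply reported as separate components, which is the property the abstract highlights for amorphous cellular networks.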
Abstract:
Purpose: Arbitrary numbers of corneal confocal microscopy images have been used for analysis of corneal subbasal nerve parameters under the implicit assumption that these are a representative sample of the central corneal nerve plexus. The purpose of this study is to present a technique for quantifying the number of random central corneal images required to achieve an acceptable level of accuracy in the measurement of corneal nerve fiber length and branch density. Methods: Every possible combination of 2 to 16 images (where 16 was deemed the true mean) of the central corneal subbasal nerve plexus, not overlapping by more than 20%, was assessed for nerve fiber length and branch density in 20 subjects with type 2 diabetes and varying degrees of functional nerve deficit. Mean ratios were calculated to allow comparisons between and within subjects. Results: In assessing nerve branch density, eight randomly chosen images not overlapping by more than 20% produced an average that was within 30% of the true mean 95% of the time. A similar sampling strategy of five images produced an average within 13% of the true mean 80% of the time for corneal nerve fiber length. Conclusions: The "sample combination analysis" presented here can be used to determine the sample size required for a desired level of accuracy of quantification of corneal subbasal nerve parameters. This technique may have applications in other biological sampling studies.
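The core of a sample combination analysis of this kind can be sketched in a few lines. The measurement values below are hypothetical, and the abstract does not give the exact statistic, so this is an illustrative reconstruction: enumerate every n-image combination, compare each combination's mean against the full-set "true" mean, and report the fraction of combinations falling within a fractional tolerance.

```python
from itertools import combinations

def within_tolerance_rate(values, n, tol):
    """Fraction of all n-item combinations of `values` whose mean lies
    within +/- tol (as a fraction) of the full set's 'true' mean."""
    true_mean = sum(values) / len(values)
    combos = list(combinations(values, n))
    hits = sum(1 for c in combos
               if abs(sum(c) / n - true_mean) <= tol * true_mean)
    return hits / len(combos)

# Hypothetical per-image nerve fiber length measurements for one subject
measurements = [18.2, 16.9, 17.5, 19.1, 15.8, 18.7, 17.0, 16.4,
                18.9, 17.8, 16.1, 19.4, 17.3, 18.0, 16.6, 17.9]

for n in (2, 5, 8):
    rate = within_tolerance_rate(measurements, n, tol=0.13)
    print(f"{n} images: {rate:.0%} of samples within 13% of true mean")
```

Repeating this for each sample size n yields a curve from which the smallest n meeting a desired accuracy/confidence target can be read off. (The study's additional constraint that images overlap by no more than 20% is omitted here.)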
Abstract:
In 2010, the State Library of Queensland (SLQ) donated its out-of-copyright Queensland images to Wikimedia Commons. One direct effect of publishing the collections on Wikimedia Commons is that general audiences can participate and help the library process the images in the collection. This paper discusses a project that explored user participation in the categorisation of the State Library of Queensland digital image collections. The outcomes of this project can be used to gain a better understanding of user participation, leading to improved access to library digital collections. Two data collection techniques were used: document analysis and interviews. Document analysis was performed on the Wikimedia Commons monthly reports, while interviews were the main data collection technique in this research. The data collected from document analysis helped the researchers devise appropriate questions for the interviews. The interviews were undertaken with participants divided into two groups: SLQ staff members and Wikimedians (users who participate in Wikimedia). The two sets of data collected from participants were analysed independently and compared. This method was useful for understanding the differences between the experiences of categorisation from both the librarians' and the users' perspectives. This paper discusses the preliminary findings that emerged from each participant group. This research provides preliminary information about the extent of user participation in the categorisation of SLQ collections in Wikimedia Commons, which SLQ and other interested libraries can use when categorising and describing their digital content to improve user access to their collections in the future.
Rotorcraft collision avoidance using spherical image-based visual servoing and single point features
Abstract:
This paper presents a reactive collision avoidance method for small unmanned rotorcraft using spherical image-based visual servoing. Only a single point feature is used to guide the aircraft in a safe spiral-like trajectory around the target, whilst a spherical camera model ensures the target always remains visible. A decision strategy to stop the avoidance control is derived based on the properties of spiral-like motion, and the effect of accurate range measurements on the control scheme is discussed. We show that using a poor range estimate does not significantly degrade collision avoidance performance, thus relaxing the need for accurate range measurements. We present simulated and experimental results using a small quadrotor to validate the approach.
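A key property exploited here is that a spherical camera model projects a point feature onto the unit view sphere, so a bearing to the target exists for every relative direction, unlike a planar perspective image with a limited field of view. A minimal sketch of that projection (function and argument names are illustrative, not from the paper):

```python
import numpy as np

def spherical_feature(target, cam_pos):
    """Bearing to a point feature on the unit view sphere. Unlike a
    planar perspective projection, this is defined for every relative
    direction, so the feature cannot leave the field of view."""
    v = np.asarray(target, dtype=float) - np.asarray(cam_pos, dtype=float)
    norm = np.linalg.norm(v)
    if norm == 0.0:
        raise ValueError("camera is at the target point")
    return v / norm
```

A servo law regulating this bearing vector therefore never loses its feature during an avoidance manoeuvre, which is what allows the spiral-like trajectory around the target.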
Abstract:
Echocardiography is the most common form of non-invasive cardiac imaging and is fundamental to patient management. However, due to its methodology, it is also operator dependent. There are well-defined pathways in training and ongoing accreditation to achieve and maintain competency. To satisfy these requirements, significant time has to be dedicated to scanning patients, often in a time-pressured clinical environment. Alternative, computer-based training methods are being considered to augment echocardiographic training. Numerous advances in technology have resulted in the development of interactive programmes and simulators to teach trainees the skills to perform particular procedures, including transthoracic and transoesophageal echocardiography (TOE). 82 sonographers and TOE proceduralists utilised an echocardiographic simulator and assessed its utility using defined criteria. 40 trainee sonographers assessed the simulator and were taught how to obtain an apical 2-chamber (A2C) view and image the superior vena cava (SVC). 100% and 88% found the simulator useful in obtaining the SVC and A2C views, respectively. All users found it easy to use, and the majority found it helped with image acquisition and interpretation. 42 attendees of a TOE training day utilising the simulator assessed it, with 100% finding it easy to use and the augmented reality graphics beneficial for image acquisition. 90% felt that it was realistic. This study revealed that both trainee sonographers and TOE proceduralists found the simulation process realistic, that it helped with image acquisition, and that it improved assessment of spatial relationships. Echocardiographic simulators may play an important role in the future training of echocardiographic skills.
Abstract:
This chapter reports a qualitative, discourse analytic study of literate practices in a small religious community in a northern Australian city. It documents how this community constructs religious reading and writing and their affiliated ideologies and theologies, and how readers/hearers/learners are positioned vis-à-vis the authority of sacred text.
Abstract:
Typical flow fields in a stormwater gross pollutant trap (GPT) with blocked retaining screens were experimentally captured and visualised. Particle image velocimetry (PIV) software was used to capture the flow field data by tracking neutrally buoyant particles with a high-speed camera. A technique was developed to apply the Image Based Flow Visualization (IBFV) algorithm to the experimental raw dataset generated by the PIV software. The dataset consisted of scattered 2D point velocity vectors, and the IBFV visualisation facilitated flow feature characterisation within the GPT. The flow features played a pivotal role in understanding gross pollutant capture and retention within the GPT. It was found that the IBFV animations revealed otherwise unnoticed flow features and experimental artefacts. For example, a circular tracer marker in the IBFV program visually highlighted streamlines to investigate specific areas and identify the flow features within the GPT.
Abstract:
Psychologists investigating dreams in non-Western cultures have generally not considered the meanings of dreams within the unique meaning-structure of the person in his or her societal context. The study was concerned with explicating the indigenous system of dream interpretation of the Xhosa-speaking people, as revealed by acknowledged dream experts, and with elaborating upon the life-world of the participants. Fifty dreams and their interpretations were collected from participants, who were traditional healers and their clients. A phenomenological methodology was adopted in explicating the data. Themes explicated included: the physiognomy of the dreamer's life-world as revealed by significant dreams, the interpretation of significant dreams as revealed through action, and human bodiliness as revealed in dream interpretations. The participants' approach to dreams is based not upon an explicit theory, but upon an immediate and pathic understanding of the dream phenomenon. This understanding is grounded in the interpreter's concrete understanding of the life-world, which includes the possibility of cosmic integration and continuity between personal and trans-personal realms of being.
Abstract:
Many state-of-the-art vision-based Simultaneous Localisation And Mapping (SLAM) and place recognition systems compute the salience of visual features in their environment. As computing salience can be problematic in radically changing environments, new low-resolution, feature-less systems such as SeqSLAM have been introduced, all of which consider the whole image. In this paper, we implement a supervised classifier system (UCS) to learn the salience of image regions for place recognition by feature-less systems. SeqSLAM only slightly benefits from the results of training on the challenging real-world Eynsham dataset, as it already appears to filter less useful regions of a panoramic image. However, when recognition is limited to specific image regions, performance improves by more than an order of magnitude by utilising the learnt image region saliency. We then investigate whether the region salience generated from the Eynsham dataset generalizes to another car-based dataset using a perspective camera. The results suggest the general applicability of an image region salience mask for optimizing route-based navigation applications.
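The core idea of weighting image regions by learnt salience in a feature-less, whole-image matcher can be sketched as follows. This is a hypothetical illustration, not the paper's UCS classifier or datasets, and SeqSLAM itself additionally matches over image sequences rather than single frames:

```python
import numpy as np

def masked_difference(img_a, img_b, salience):
    """Salience-weighted mean absolute difference between two
    low-resolution images; salient regions contribute more to the score."""
    diff = np.abs(img_a.astype(float) - img_b.astype(float))
    return float((diff * salience).sum() / salience.sum())

def best_match(query, database, salience):
    """Index of the database image most similar to the query under the
    salience-weighted difference (lower score = better match)."""
    scores = [masked_difference(query, ref, salience) for ref in database]
    return int(np.argmin(scores))
```

A learnt salience mask simply replaces the uniform weights, down-weighting regions (e.g. sky or road surface) that the classifier found unhelpful for distinguishing places.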
Abstract:
Since the first destination image studies were published in the early 1970s, the field has become one of the most popular in the tourism literature. While reviews of the destination image literature show no commonly agreed conceptualisation of the construct, researchers have predominantly used structured questionnaires for measurement. There has been criticism that the way some of these scales have been selected means a greater likelihood of attributes being irrelevant to participants. This opens up the risk of stimulating uninformed responses. The issue of uninformed response was first raised as a source of error 60 years ago. However, there has been little, if any, discussion in relation to destination image measurement, studies of which often require participants to provide opinion-driven rather than fact-based responses. This paper reports the trial of a 'don't know' (DK) non-response option for participants in two destination image questionnaires. It is suggested the use of a DK option provides participants with an alternative to i) skipping the question, ii) using the scale midpoint to denote neutrality, or iii) providing an uninformed response. High levels of DK usage by participants can then alert the marketer to the need to improve awareness of destination performance for potentially salient attributes.
Abstract:
The rank and census are two filters based on order statistics which have been applied to the image matching problem for stereo pairs. Advantages of these filters include their robustness to radiometric distortion and small amounts of random noise, and their amenability to hardware implementation. In this paper, a new matching algorithm is presented, which provides an overall framework for matching, and is used to compare the rank and census techniques with standard matching metrics. The algorithm was tested using both real stereo pairs and a synthetic pair with ground truth. The rank and census filters were shown to significantly improve performance in the case of radiometric distortion. In all cases, the results obtained were comparable to, if not better than, those obtained using standard matching metrics. Furthermore, the rank and census have the additional advantage that their computational overhead is lower than that of these metrics. For all techniques tested, the difference between the results obtained for the synthetic stereo pair and the ground truth was small.
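The two order-statistic filters can be sketched as follows; this is the generic textbook formulation rather than the paper's exact implementation. Both depend only on the *ordering* of intensities within a local window, which is what makes them invariant to monotonic radiometric distortion such as gain and offset changes between the two cameras:

```python
import numpy as np

def rank_transform(img, w=3):
    """Rank transform: replace each pixel by the number of pixels in a
    w x w window whose intensity is below the centre pixel."""
    r = w // 2
    out = np.zeros(img.shape, dtype=np.uint8)
    for y in range(r, img.shape[0] - r):
        for x in range(r, img.shape[1] - r):
            window = img[y - r:y + r + 1, x - r:x + r + 1]
            out[y, x] = np.count_nonzero(window < img[y, x])
    return out

def census_transform(img, w=3):
    """Census transform: replace each pixel by a bit string recording,
    for each neighbour in the window, whether it is darker than the
    centre pixel."""
    r = w // 2
    out = np.zeros(img.shape, dtype=np.uint32)
    for y in range(r, img.shape[0] - r):
        for x in range(r, img.shape[1] - r):
            bits = 0
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    if dy == 0 and dx == 0:
                        continue
                    bits = (bits << 1) | int(img[y + dy, x + dx] < img[y, x])
            out[y, x] = bits
    return out
```

Matching then proceeds by comparing rank-transformed images with a standard metric such as the sum of absolute differences, or census bit strings with the Hamming distance.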
Abstract:
Trajectory basis Non-Rigid Structure From Motion (NRSFM) currently faces two problems: the limit of reconstructability and the need to tune the basis size for different sequences. This paper provides a novel theoretical bound on 3D reconstruction error, arguing that the existing definition of reconstructability is fundamentally flawed in that it fails to consider system condition. This insight motivates a novel strategy whereby the trajectory's response to a set of high-pass filters is minimised. The new approach eliminates the need to tune the basis size and is more efficient for long sequences. Additionally, the truncated DCT basis is shown to have a dual interpretation as a high-pass filter. The success of trajectory filter reconstruction is demonstrated quantitatively on synthetic projections of real motion capture sequences and qualitatively on real image sequences.
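The dual interpretation of the truncated DCT basis can be illustrated numerically: projecting a 1D point trajectory onto the first K DCT basis vectors removes its high-frequency content, which is the behaviour a complementary high-pass penalty would suppress. A small sketch (the frame count, frequencies and K below are arbitrary illustrative choices, not values from the paper):

```python
import numpy as np

def dct_basis(F, K):
    """First K columns of the orthonormal DCT-II basis for F frames."""
    n = np.arange(F)[:, None]
    k = np.arange(K)[None, :]
    B = np.sqrt(2.0 / F) * np.cos(np.pi * (2 * n + 1) * k / (2 * F))
    B[:, 0] /= np.sqrt(2.0)  # DC column scaled for orthonormality
    return B

# A smooth trajectory survives projection onto a small truncated basis,
# while an added high-frequency component is filtered out: truncation
# acts as a low-pass filter on the trajectory.
F, K = 64, 8
t = np.linspace(0.0, 1.0, F)
smooth = np.sin(2 * np.pi * t)                      # low-frequency motion
noisy = smooth + 0.3 * np.sin(2 * np.pi * 25 * t)   # high-frequency part

B = dct_basis(F, K)
recon = B @ (B.T @ noisy)  # least-squares fit in the truncated basis
print(np.linalg.norm(recon - smooth))  # much smaller than the noise norm
```

Minimising a high-pass filter response directly, as the paper proposes, achieves the same smoothing effect without having to pick the truncation size K per sequence.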