840 results for Visual research methods: image, society and representation


Relevance:

100.00%

Publisher:

Abstract:

Book review

Relevance:

100.00%

Publisher:

Abstract:

One of the biggest challenges of integrating research in TESOL with research in digital literacies is that the research methodologies of these two traditions have developed out of different ontological and epistemological assumptions about what is being researched (the object of study), where the research is located (the research site), and who is being researched (the research participants).

Relevance:

100.00%

Publisher:

Abstract:

A single lecture for first-year students. Its objective is to show that there exists more than one approach to tackling a research question, and that not all disciplines approach things the same way!

Relevance:

100.00%

Publisher:

Abstract:

Framing plays an important role in public policy. Interest groups strategically highlight some aspects of a policy proposal while downplaying others in order to steer the policy debate in a favorable direction. Despite the importance of framing, we still know relatively little about the framing strategies of interest groups due to methodological difficulties that have prevented scholars from systematically studying interest group framing across a large number of interest groups and multiple policy debates. This article therefore provides an overview of three novel research methods that allow researchers to systematically measure interest group frames. More specifically, this article introduces a word-based quantitative text analysis technique, a manual, computer-assisted content analysis approach and face-to-face interviews designed to systematically identify interest group frames. The results generated by all three techniques are compared on the basis of a case study of interest group framing in an environmental policy debate in the European Union.
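The word-based quantitative text analysis described above can be illustrated with a small dictionary-count sketch. The frame lexicons, the example sentence and the share metric below are hypothetical stand-ins, not the article's actual operationalisation:

```python
import re
from collections import Counter

# Hypothetical frame lexicons; the article's real dictionaries are not given here.
FRAME_LEXICONS = {
    "economic": {"cost", "jobs", "growth", "competitiveness"},
    "environmental": {"emissions", "pollution", "climate", "biodiversity"},
}

def frame_shares(text):
    """Return each frame's share of lexicon-word hits in a position paper."""
    counts = Counter(re.findall(r"[a-z]+", text.lower()))
    hits = {f: sum(counts[w] for w in lex) for f, lex in FRAME_LEXICONS.items()}
    total = sum(hits.values()) or 1  # avoid division by zero on empty texts
    return {f: h / total for f, h in hits.items()}

shares = frame_shares(
    "Lower emissions will cut pollution, but jobs and growth matter too."
)
# → {'economic': 0.5, 'environmental': 0.5}
```

In practice the manual, computer-assisted content analysis and the face-to-face interviews would then be used to validate what such word counts suggest.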

Relevance:

100.00%

Publisher:

Abstract:

Understand representation and basic semiotic theory, i.e. signs, meaning and myth. Use visual analysis to decode an image.

Relevance:

100.00%

Publisher:

Abstract:

Personal memories composed of digital pictures are very popular at the moment. To retrieve these media items, annotation is required. In recent years, several approaches have been proposed to overcome the image annotation problem. This paper presents our proposals to address this problem. Automatic and semi-automatic learning methods for semantic concepts are presented. The automatic method estimates semantic concepts from visual content, context metadata and audio information. The semi-automatic method is based on results provided by a computer game. The paper describes both proposals and presents their evaluations.
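The automatic method combines evidence from visual content, context metadata and audio. One common way to combine per-modality concept scores is weighted late fusion; the weights and score format below are illustrative assumptions, not the paper's actual estimator:

```python
# Hypothetical late-fusion sketch; the paper's real combination rule is not given.
def fuse_concept_scores(visual, context, audio, weights=(0.5, 0.3, 0.2)):
    """Combine per-concept confidence scores from three modalities by a
    weighted average (late fusion). Each input maps concept -> score in [0, 1]."""
    concepts = set(visual) | set(context) | set(audio)
    wv, wc, wa = weights
    return {
        c: wv * visual.get(c, 0.0) + wc * context.get(c, 0.0) + wa * audio.get(c, 0.0)
        for c in concepts
    }

scores = fuse_concept_scores(
    {"beach": 0.8}, {"beach": 1.0, "city": 0.2}, {"beach": 0.5}
)
# beach ≈ 0.8, city ≈ 0.06
```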

Relevance:

100.00%

Publisher:

Abstract:

Background: Computed tomography (CT) is one of the most used modalities for diagnostics in paediatric populations, which is a concern as it also delivers a high patient dose. Research has focused on developing computer algorithms that provide better image quality at lower dose. The iterative reconstruction algorithm Sinogram-Affirmed Iterative Reconstruction (SAFIRE) was introduced as a new technique that reduces noise to increase image quality. Purpose: The aim of this study is to compare SAFIRE with the current gold standard, Filtered Back Projection (FBP), and assess whether SAFIRE alone permits a reduction in dose while maintaining image quality in paediatric head CT. Methods: Images of a paediatric head phantom were acquired on a SIEMENS SOMATOM PERSPECTIVE 128 using a modulated acquisition. In total, 54 images were reconstructed using FBP and 5 different strengths of SAFIRE. Objective image quality was determined by measuring SNR and CNR. Visual image quality was assessed by 17 observers with different levels of radiographic experience. Images were randomised and displayed using a two-alternative forced choice (2AFC) setup; observers scored the images by answering 5 questions on a Likert scale. Results: At different dose levels, SAFIRE significantly increased SNR (up to 54%) in the acquired images compared to FBP at 80 kVp (5.2-8.4), 110 kVp (8.2-12.3) and 130 kVp (8.8-13.1). Visual image quality was higher with increasing SAFIRE strength; the highest image quality was scored with SAFIRE level 3 and above. Conclusion: The SAFIRE algorithm is suitable for image noise reduction in paediatric head CT. Our data demonstrate that SAFIRE enhances SNR while reducing noise, with a possible dose reduction of 68%.
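The objective measures used in the study, SNR and CNR, are computed from region-of-interest (ROI) statistics. Definitions vary between studies; the conventions below (mean over standard deviation, and mean contrast over background noise) are one common choice, not necessarily the exact one used here:

```python
import numpy as np

def roi_snr(roi):
    """Signal-to-noise ratio of a uniform region of interest: mean / std."""
    roi = np.asarray(roi, dtype=float)
    return roi.mean() / roi.std()

def roi_cnr(roi_a, roi_b, background):
    """Contrast-to-noise ratio between two tissue ROIs, normalised by the
    standard deviation (noise) of a background ROI."""
    a, b, bg = (np.asarray(r, dtype=float) for r in (roi_a, roi_b, background))
    return abs(a.mean() - b.mean()) / bg.std()
```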

Relevance:

100.00%

Publisher:

Abstract:

This project evolved out of a search for ways to conduct research on “others” in a way that does not exploit, stigmatize or misrepresent their experience. This thesis is an ethnographic study in leisure research and youth work and an experiment in running a photovoice project. Photovoice is a participatory visual method that embodies the emancipatory ideal of empowering others through self-representation. The literature on photovoice lacks a comprehensive discussion of the complexity of power and representation. Postmodern theorists have proposed that participatory methods are not benign and that initiatives are acts of power in themselves that produce effects (Cooke & Kothari, 2001). A Foucauldian analysis of power is used to deconstruct the researcher's practice and reflect on why and how youth are “engaged”. This project seeks to embrace the principle of working “with” others while also working from a postmodern perspective that acknowledges power and representation as ongoing problems.

Relevance:

100.00%

Publisher:

Abstract:

With the “social turn” of language in the past decade within English studies, ethnographic and teacher research methods increasingly have acquired legitimacy as a means of studying student literacy. And with this legitimacy, graduate students specializing in literacy and composition studies increasingly are being encouraged to use ethnographic and teacher research methods to study student literacy within classrooms. Yet few of the narratives produced from these studies discuss the problems that frequently arise when participant observers enter the classroom. Recently, some researchers have begun to interrogate the extent to which ethnographic and teacher research methods are able to construct and disseminate knowledge in empowering ways (Anderson & Irvine, 1993; Bishop, 1993; Fine, 1994; Fleischer, 1994; McLaren, 1992). While ethnographic and teacher research methods have oftentimes been touted as being more democratic and nonhierarchical than quantitative methods (which oftentimes erase individuals' lived experiences with numbers and statistical formulas), researchers are just beginning to probe the ways that ethnographic and teacher research models can also be silencing, unreflective, and oppressive. Those who have begun to question the ethics of conducting, writing about, and disseminating knowledge in education have coined the term “critical” research, a rather vague and loose term that proposes a position of reflexivity and self-critique for all research methods, not just ethnography or teacher research. Drawing upon theories of feminist consciousness-raising, liberatory praxis, and community-action research, theories of critical research aim to involve researchers and participants in a highly participatory framework for constructing knowledge, an inquiry that seeks to question, disrupt, or intervene in the conditions under study for some socially transformative end.
While critical research methods are always contingent upon the context being studied, in general they are undergirded by principles of non-hierarchical relations, participatory collaboration, problem-posing, dialogic inquiry, and multiple and multi-voiced interpretations. In distinguishing between critical and traditional ethnographic processes, for instance, Peter McLaren says that critical ethnography asks questions such as “[u]nder what conditions and to what ends do we, as educational researchers, enter into relations of cooperation, mutuality, and reciprocity with those who we research?” (p. 78) and “what social effects do you want your evaluations and understandings to have?” (p. 83). In the same vein, Michelle Fine suggests that critical researchers must move beyond notions of the etic/emic dichotomy of researcher positionality in order to “probe how we are in relation with the contexts we study and with our informants, understanding that we are all multiple in those relations” (p. 72). Researchers in composition and literacy studies who endorse critical research methods, then, aim to enact some sort of positive transformative change in keeping with the needs and interests of the participants with whom they work.

Relevance:

100.00%

Publisher:

Abstract:

Presentation
Purpose: To relate structural change to functional change in age-related macular degeneration (AMD) in a cross-sectional population using fundus imaging and visual field status. Methods: 10-degree standard and SWAP visual fields and other standard functional clinical measures were acquired in 44 eyes of 27 patients at various stages of AMD, together with fundus photographs. Retro-mode SLO images were captured in a subset of 29 eyes of 19 of the patients. Drusen area, measured by automated drusen segmentation software (Smith et al. 2005), was correlated with visual field data. Visual field defect position was compared to the position of the imaged drusen and deposits using custom software. Results: The effect of AMD stage on drusen area within the 6000 µm was significant (one-way ANOVA: F = 17.231, p < 0.001); however, the trend was not strong across all stages. There were significant linear relationships between visual field parameters and drusen area. The mean deviation (MD) declined by 3.00 dB and 3.92 dB for each log % drusen area for standard perimetry and SWAP, respectively. The visual field parameters of focal loss displayed the strongest correlations with drusen area. The number of pattern deviation (PD) defects increased by 9.30 and 9.68 defects per log % drusen area for standard perimetry and SWAP, respectively. Weaker correlations were found between drusen area and visual acuity, contrast sensitivity, colour vision and reading speed. 72.6% of standard PD defects and 65.2% of SWAP PD defects coincided with retinal signs of AMD on fundus photography. 67.5% of standard PD defects and 69.7% of SWAP PD defects coincided with deposits on retro-mode images. Conclusions: Perimetry exhibited a stronger relationship with drusen area than other measures of visual function. The structure-function relationship between visual field parameters and drusen area was linear.
Overall, the indices of focal loss had a stronger correlation with drusen area in SWAP than in standard perimetry. Visual field defects had a high coincidence proportion with retinal manifestations of AMD. Reference: Smith R.T. et al. (2005) Arch Ophthalmol 123:200-206.
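The structure-function relationship reported here is a linear fit of mean deviation against log drusen area. A minimal sketch with synthetic numbers (chosen to mimic the reported slope, not study data) shows the kind of regression involved:

```python
import numpy as np

# Synthetic illustration of the reported linear structure-function model:
# mean deviation (MD, in dB) declining with log10(% drusen area).
# These values are made up; they are not the study's measurements.
log_drusen_area = np.array([0.0, 0.5, 1.0, 1.5])
md = np.array([-1.0, -2.5, -4.0, -5.5])

slope, intercept = np.polyfit(log_drusen_area, md, 1)
# slope ≈ -3.0 dB per log % drusen area, the order of magnitude reported
# for standard perimetry
```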

Relevance:

100.00%

Publisher:

Abstract:

This research pursued the conceptualization, implementation, and verification of a system that enhances digital information displayed on an LCD panel for users with visual refractive errors. The target user groups for this system are individuals who have moderate to severe visual aberrations for which conventional means of compensation, such as glasses or contact lenses, do not improve their vision. This research is based on a priori knowledge of the user's visual aberration, as measured by a wavefront analyzer. With this information it is possible to generate images that, when displayed to this user, will counteract his/her visual aberration. The method described in this dissertation advances the development of techniques for providing such compensation by integrating spatial information in the image as a means to eliminate some of the shortcomings inherent in using display devices such as monitors or LCD panels. Additionally, physiological considerations are discussed and integrated into the method for providing said compensation. In order to provide a realistic sense of the performance of the methods described, they were tested by mathematical simulation in software, as well as by using a single-lens high-resolution CCD camera that models an aberrated eye, and finally with human subjects having various forms of visual aberrations. Experiments were conducted on these systems and the data collected from these experiments were evaluated using statistical analysis. The experimental results revealed that the pre-compensation method resulted in a statistically significant improvement in vision for all of the systems. Although significant, the improvement was not as large as expected for the human subject tests. Further analysis suggests that even under the controlled conditions employed for testing with human subjects, the characterization of the eye may be changing. This would require real-time monitoring of relevant variables (e.g. pupil diameter) and continuous adjustment in the pre-compensation process to yield maximum viewing enhancement.
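The core idea of pre-compensation is to apply an approximate inverse of the eye's optical blur, derived from the measured wavefront, to the image before display. A minimal Wiener-style inverse-filtering sketch is shown below; the dissertation's actual method additionally integrates spatial information and physiological constraints, so treat this as illustrative only:

```python
import numpy as np

def precompensate(image, psf, nsr=0.01):
    """Pre-distort an image so that subsequent blurring by `psf` (the eye's
    point-spread function, centred in its array) approximately cancels out.
    `nsr` is a noise-to-signal regulariser that tames near-zero frequencies."""
    # Transfer function of the eye's blur, with the PSF centre moved to origin.
    H = np.fft.fft2(np.fft.ifftshift(psf), s=image.shape)
    # Regularised (Wiener-style) inverse filter: H* / (|H|^2 + nsr).
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)
    out = np.real(np.fft.ifft2(np.fft.fft2(image) * W))
    return np.clip(out, 0.0, 1.0)  # keep values in the displayable range
```

With a delta-function PSF (no aberration) the filter reduces to a mild uniform attenuation, which is a useful sanity check.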


Relevance:

100.00%

Publisher:

Abstract:

With the rise of smart phones, lifelogging devices (e.g. Google Glass) and the popularity of image sharing websites (e.g. Flickr), users are capturing and sharing every aspect of their life online, producing a wealth of visual content. Of these uploaded images, the majority are poorly annotated or exist in complete semantic isolation, making the process of building retrieval systems difficult as one must first understand the meaning of an image in order to retrieve it. To alleviate this problem, many image sharing websites offer manual annotation tools which allow the user to “tag” their photos; however, these techniques are laborious and as a result have been poorly adopted; Sigurbjörnsson and van Zwol (2008) showed that 64% of images uploaded to Flickr are annotated with < 4 tags. Due to this, an entire body of research has focused on the automatic annotation of images (Hanbury, 2008; Smeulders et al., 2000; Zhang et al., 2012a), where one attempts to bridge the semantic gap between an image’s appearance and meaning, e.g. the objects present. Despite two decades of research the semantic gap still largely exists and as a result automatic annotation models often offer unsatisfactory performance for industrial implementation. Further, these techniques can only annotate what they see, thus ignoring the “bigger picture” surrounding an image (e.g. its location, the event, the people present etc.). Much work has therefore focused on building photo tag recommendation (PTR) methods which aid the user in the annotation process by suggesting tags related to those already present. These works have mainly focused on computing relationships between tags based on historical images, e.g. that NY and timessquare co-exist in many images and are therefore highly correlated. However, tags are inherently noisy, sparse and ill-defined, often resulting in poor PTR accuracy, e.g. does NY refer to New York or New Year?
This thesis proposes the exploitation of an image’s context which, unlike textual evidence, is always present, in order to alleviate this ambiguity in the tag recommendation process. Specifically, we exploit the “what, who, where, when and how” of the image capture process in order to complement textual evidence in various photo tag recommendation and retrieval scenarios. In part II, we combine text, content-based (e.g. # of faces present) and contextual (e.g. day-of-the-week taken) signals for tag recommendation purposes, achieving up to a 75% improvement to precision@5 in comparison to a text-only TF-IDF baseline. We then consider external knowledge sources (i.e. Wikipedia & Twitter) as an alternative to (slower moving) Flickr on which to build recommendation models, showing that similar accuracy can be achieved on these faster moving, yet entirely textual, datasets. In part II, we also highlight the merits of diversifying tag recommendation lists before discussing at length various problems with existing automatic image annotation and photo tag recommendation evaluation collections. In part III, we propose three new image retrieval scenarios, namely “visual event summarisation”, “image popularity prediction” and “lifelog summarisation”. In the first scenario, we attempt to produce a ranking of relevant and diverse images for various news events by (i) removing irrelevant images such as memes and visual duplicates, before (ii) semantically clustering images based on the tweets in which they were originally posted. Using this approach, we were able to achieve over 50% precision for images in the top 5 ranks. In the second retrieval scenario, we show that by combining contextual and content-based features from images, we are able to predict whether an image will become “popular” (or not) with 74% accuracy, using an SVM classifier.
Finally, in chapter 9 we employ blur detection and perceptual-hash clustering in order to remove noisy images from lifelogs, before combining visual and geo-temporal signals in order to capture a user’s “key moments” within their day. We believe that the results of this thesis represent an important step towards building effective image retrieval models where sufficient textual content is lacking (i.e. a cold start).
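The co-occurrence-based tag recommendation the thesis builds on (e.g. NY and timessquare co-existing in many images) can be sketched as follows; the toy corpus and raw-count scoring are illustrative, not the thesis's actual models:

```python
from collections import Counter
from itertools import combinations

# Toy historical corpus; real systems mine millions of tagged Flickr photos.
HISTORY = [
    {"ny", "timessquare", "night"},
    {"ny", "timessquare"},
    {"ny", "brooklyn"},
    {"newyear", "fireworks"},
]

def cooccurrence(history):
    """Count how often each unordered tag pair appears on the same photo."""
    pairs = Counter()
    for tags in history:
        for a, b in combinations(sorted(tags), 2):
            pairs[(a, b)] += 1
    return pairs

def recommend(seed, history, k=3):
    """Rank candidate tags by raw co-occurrence with the user's seed tag."""
    scores = Counter()
    for (a, b), n in cooccurrence(history).items():
        if a == seed:
            scores[b] += n
        elif b == seed:
            scores[a] += n
    return [t for t, _ in scores.most_common(k)]

top = recommend("ny", HISTORY)  # "timessquare" ranks first
```

The thesis's point is that such purely textual correlations stay ambiguous (NY vs. New Year) until contextual signals are folded in.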

Relevance:

100.00%

Publisher:

Abstract:

Nowadays, the latest generation of computers provides enough performance to build computationally expensive computer vision applications for mobile robotics. Building a map of the environment is a common robotic task and an essential prerequisite for allowing robots to move through their environments. Traditionally, mobile robots have used a combination of several sensors based on different technologies. Lasers, sonars and contact sensors have typically been used in mobile robotic architectures; however, color cameras are an important sensor because we want robots to sense and move through environments using the same information as humans. Color cameras are cheap and flexible, but a lot of work needs to be done to give robots enough visual understanding of scenes. Computer vision algorithms are computationally complex, but nowadays robots have access to diverse and powerful architectures that can be used for mobile robotics purposes. The advent of low-cost RGB-D sensors like the Microsoft Kinect, which provide 3D colored point clouds at high frame rates, has made computer vision even more relevant in the mobile robotics field. The combination of visual and 3D data allows systems to use both computer vision and 3D processing and therefore to be aware of more details of the surrounding environment. The research described in this thesis was motivated by the need for scene mapping. Being aware of the surrounding environment is a key feature in many mobile robotics applications, from simple robotic navigation to complex surveillance applications. In addition, the acquisition of a 3D model of a scene is useful in many areas, such as video game scene modeling, where well-known places are reconstructed and added to game systems, or advertising, where once the 3D model of a room is available the system can add furniture pieces using augmented reality techniques.
In this thesis we perform an experimental study of state-of-the-art registration methods to find which one best fits our scene mapping purposes. Different methods are tested and analyzed on different scene distributions of visual and geometric appearance. In addition, this thesis proposes two methods for 3D data compression and representation of 3D maps. Our 3D representation proposal is based on the Growing Neural Gas (GNG) method. This self-organizing map (SOM) has been successfully used for clustering, pattern recognition and topology representation of various kinds of data. Until now, self-organizing maps have been primarily computed offline, and their application to 3D data has mainly focused on noise-free models without considering time constraints. Self-organising neural models have the ability to provide a good representation of the input space. In particular, the Growing Neural Gas (GNG) is a suitable model because of its flexibility, rapid adaptation and excellent quality of representation. However, this type of learning is time consuming, especially for high-dimensional input data. Since real applications often work under time constraints, it is necessary to adapt the learning process in order to complete it in a predefined time. This thesis proposes a hardware implementation leveraging the computing power of modern GPUs, which takes advantage of a new paradigm coined as General-Purpose Computing on Graphics Processing Units (GPGPU). Our proposed geometrical 3D compression method seeks to reduce the 3D information using plane detection as the basic structure to compress the data. This is because our target environments are man-made, and therefore many points belong to planar surfaces. Our proposed method is able to achieve good compression results in those man-made scenarios. The detected and compressed planes can also be used in other applications, such as surface reconstruction or plane-based registration algorithms.
Finally, we have also demonstrated the benefits of GPU technologies by producing a high-performance implementation of a common CAD/CAM technique called Virtual Digitizing.
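The plane-based compression relies on detecting dominant planes in man-made scenes: storing a few plane parameters (plus inlier extents) in place of thousands of raw points is what yields the compression. A minimal RANSAC plane-fitting sketch, not the thesis's exact algorithm, could look like this:

```python
import random
import numpy as np

def ransac_plane(points, iters=200, tol=0.02, seed=0):
    """Fit a dominant plane n·p + d = 0 to a 3D point cloud with RANSAC.
    Returns (normal, d, inlier_mask); the four plane parameters can then
    stand in for every inlier point when compressing planar scenes."""
    rng = random.Random(seed)
    pts = np.asarray(points, dtype=float)
    plane = (np.array([0.0, 0.0, 1.0]), 0.0)
    best_mask = np.zeros(len(pts), dtype=bool)
    for _ in range(iters):
        a, b, c = pts[rng.sample(range(len(pts)), 3)]
        n = np.cross(b - a, c - a)
        norm = np.linalg.norm(n)
        if norm < 1e-9:  # degenerate (collinear) sample, try again
            continue
        n /= norm
        d = -n.dot(a)
        mask = np.abs(pts @ n + d) < tol  # point-to-plane distance test
        if mask.sum() > best_mask.sum():
            plane, best_mask = (n, d), mask
    return plane[0], plane[1], best_mask
```

Iterating this (remove inliers, refit) peels off the large planar surfaces of a man-made scene one by one.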