865 results for Representations.
Abstract:
Spatial representations, metaphors and imaginaries (cyberspace, web pages) have been the mainstay of internet research for a long time. Instead of repeating these themes, this paper seeks to answer the question of how we might understand the concept of time in relation to internet research. After a brief excursus on the general history of the concept, this paper proposes three different approaches to the conceptualisation of internet time. The common thread underlying all the approaches is the notion of time as an assemblage of elements such as technical artefacts, social relations and metaphors. By drawing out time in this way, the paper addresses the challenge of thinking of internet time as coexistence, a clash of fluxes, metaphors, lived experiences and assemblages. In other words, this paper proposes a way to articulate internet time as a multiplicity.
Abstract:
Rats are superior to the most advanced robots when it comes to creating and exploiting spatial representations. A wild rat can have a foraging range of hundreds of meters, possibly kilometers, and yet the rodent can unerringly return to its home after each foraging mission, and return to profitable foraging locations at a later date (Davis, et al., 1948). The rat runs through undergrowth and pipes with few distal landmarks, along paths where the visual, textural, and olfactory appearance constantly change (Hardy and Taylor, 1980; Recht, 1988). Despite these challenges the rat builds, maintains, and exploits internal representations of large areas of the real world throughout its two to three year lifetime. While algorithms exist that allow robots to build maps, the questions of how to maintain those maps and how to handle change in appearance over time remain open. The robotic approach to map building has been dominated by algorithms that optimise the geometry of the map based on measurements of distances to features. These distance measurements are taken with range-measuring devices such as laser range finders or ultrasound sensors, and in some cases from estimates of depth derived from visual information. The features are incorporated into the map based on previous readings of other features in view and estimates of self-motion. The algorithms explicitly model the uncertainty in measurements of range and self-motion, and use probability theory to find optimal solutions for the geometric configuration of the map features (Dissanayake, et al., 2001; Thrun and Leonard, 2008). Some of the results from the application of these algorithms have been impressive, ranging from three-dimensional maps of large urban structures (Thrun and Montemerlo, 2006) to natural environments (Montemerlo, et al., 2003).
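For readers unfamiliar with the probabilistic machinery this abstract alludes to, the sketch below shows one extended Kalman filter update of a single 2D map feature from a range–bearing measurement. It is a minimal illustration of the general technique, not the cited authors' implementation; all names and noise parameters are invented.

```python
import numpy as np

def update_landmark(lm, P, robot_xy, z, R):
    """One EKF update of a 2D landmark estimate from a range-bearing
    measurement taken at a known robot position.

    lm       -- current landmark estimate [x, y]
    P        -- 2x2 landmark covariance
    robot_xy -- known robot position [rx, ry]
    z        -- measurement [range, bearing], bearing in the world frame
    R        -- 2x2 measurement noise covariance
    """
    dx, dy = lm[0] - robot_xy[0], lm[1] - robot_xy[1]
    q = dx * dx + dy * dy
    r = np.sqrt(q)

    # Predicted measurement and its Jacobian w.r.t. the landmark position.
    z_hat = np.array([r, np.arctan2(dy, dx)])
    H = np.array([[dx / r, dy / r],
                  [-dy / q, dx / q]])

    # Innovation; wrap the bearing difference into [-pi, pi].
    v = z - z_hat
    v[1] = (v[1] + np.pi) % (2 * np.pi) - np.pi

    # Kalman gain, posterior mean and covariance.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return lm + K @ v, (np.eye(2) - K @ H) @ P
```

Repeating this update over many features and measurements is what lets the map's geometric configuration converge despite noisy range and self-motion estimates.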
Abstract:
In this paper an existing method for indoor Simultaneous Localisation and Mapping (SLAM) is extended to operate in large outdoor environments using an omnidirectional camera as its principal external sensor. The method, RatSLAM, is based upon computational models of the area in the rat brain that maintains the rodent’s sense of its position in the world. The system uses the visual appearance of different locations to build hybrid spatial-topological maps of places it has experienced, which facilitate relocalisation and path planning. A large dataset was acquired from a dynamic campus environment and used to verify the system’s ability to construct representations of the world and simultaneously use these representations to maintain localisation.
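As an illustration of the appearance-based relocalisation the abstract describes, the toy sketch below matches the current view against a store of visual templates; the intensity-profile representation and the threshold are assumptions for illustration, not the published RatSLAM pipeline.

```python
import numpy as np

def match_view(view, templates, threshold=0.1):
    """Compare a 1D intensity profile of the current camera view against
    stored templates; return the index of the best match, or add the view
    as a new template when nothing is similar enough.

    view      -- normalised 1D intensity profile of the current image
    templates -- list of previously stored profiles (same length as view)
    threshold -- maximum mean absolute difference to accept a match
    """
    best_i, best_d = None, np.inf
    for i, t in enumerate(templates):
        d = np.mean(np.abs(view - t))  # average absolute difference
        if d < best_d:
            best_i, best_d = i, d
    if best_d < threshold:
        return best_i          # relocalised to a familiar place
    templates.append(view)     # novel appearance: store a new template
    return len(templates) - 1
```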
Abstract:
The Lingodroids are a pair of mobile robots that evolve a language for places and for the relationships between places (based on distance and direction). Each robot in these studies has its own understanding of the layout of the world, based on its unique experiences and exploration of the environment. Despite having different internal representations of the world, the robots are able to develop a common lexicon for places, and then use simple sentences to explain and understand relationships between places, even places that they could not physically experience, such as areas behind closed doors. By learning the language, the robots are able to develop representations for places that are inaccessible to them and, later, when the doors are opened, to use those representations to perform goal-directed behavior.
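A toy naming game in the spirit of this description is sketched below; the word-coining and adoption rules are simplifying assumptions, not the Lingodroids' actual protocol.

```python
import random
import string

def coin_word():
    # Invent a random word for a newly encountered place.
    return "".join(random.choice(string.ascii_lowercase) for _ in range(4))

class Agent:
    def __init__(self):
        self.lexicon = {}  # discretised place cell -> word

    def name_place(self, place):
        # Speaker: use the known word, or coin one for a novel place.
        if place not in self.lexicon:
            self.lexicon[place] = coin_word()
        return self.lexicon[place]

    def hear(self, place, word):
        # Hearer: adopt the speaker's word for that place.
        self.lexicon.setdefault(place, word)

# One "where-are-we" game at a location both robots occupy:
a, b = Agent(), Agent()
place = (3, 7)  # a shared map cell
b.hear(place, a.name_place(place))
assert a.lexicon[place] == b.lexicon[place]  # common lexicon entry
```

Repeated games of this kind, played at many locations, are what allow two agents with different internal maps to converge on a shared vocabulary.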
Abstract:
Australian screen classics are seminal for a range of reasons: a particular title’s popularity and impact upon popular culture, its cultural and textual meaning, or what the film tells us about the social, political and cultural climate from which it emerged. Wolf Creek (Greg McLean, 2005) is undoubtedly an Australian screen classic. The film was an impressive low-budget breakout success, which played a big part in the renaissance of contemporary Australian genre cinema by opening doors for genre filmmakers targeting international markets in ways not seen in Australia since the 1980s. Wolf Creek has become the quintessential Australian horror movie. It has captured collective national fears and anxieties about the Australian outback: the isolation, the repressive desolation, the idea that the landscape itself is your enemy. It challenges traditional representations of Australian masculinity and the “ocker larrikin”, presenting a negative image of the rural ocker figure that dominated Australian screens in the 1970s and, to a lesser extent, the 1980s.
Abstract:
In this video, a couple sits on a couch slowly breaking up. A typical shot/reverse-shot filmic structure is offset as the sound and image go out of synch. At times they drift so far out of synch that the couple mouth each other’s words. This work engages with the signifying processes of romantic narratives. It emphasizes disruption and discontinuity as fundamental and generative operations in making meaning. Extending post-structural and deconstructionist ideas, this work emphasizes the constructed nature of representations of heterosexual relationships. It draws attention to the gaps, slippages and fragments that pervade signifying acts.
Abstract:
From location-aware computing to mining the social web, representations of context have promised to make better software applications. The opportunities and challenges of context-aware computing from representational, situated and interactional perspectives have been well documented, but arguments from the perspective of design are somewhat disparate. This paper draws on both theoretical perspectives and a design framing, using the problem of designing a social mobile agile ridesharing system, in order to reflect upon and call for broader design approaches for context-aware computing and human–computer interaction research in general.
Abstract:
For more than a decade, research in the field of context-aware computing has aimed to find ways to exploit situational information that can be detected by mobile computing and sensor technologies. The goal is to provide people with new and improved applications, enhanced functionality and a better user experience (Dey, 2001). Early applications focused on representing or computing on physical parameters, such as showing your location and the location of people or things around you. Such applications might show where the next bus is, which of your friends are in the vicinity, and so on. With the advent of social networking and microblogging sites such as Facebook and Twitter, recommender systems and the like, context-aware computing is moving towards mining the social web in order to provide better representations and understanding of context, including social context. In this paper we begin by recapping different theoretical framings of context. We then discuss the problem of context-aware computing from a design perspective.
Abstract:
The ability to decode graphics is an increasingly important component of mathematics assessment and curricula. This study examined 50 students aged 9 to 10 years (23 male, 27 female) as they solved items from six distinct graphical languages (e.g., maps) that are commonly used to convey mathematical information. The results of the study revealed: 1) factors which contribute to success or hinder performance on tasks with various graphical representations; and 2) how the literacy and graphical demands of tasks influence the mathematical sense making of students. The outcomes of this study highlight the changing nature of assessment in school mathematics and identify the function and influence of graphics in the design of assessment tasks.
Abstract:
This article examines the figure of the ‘Cashed-up Bogan’ or ‘Cub’ in Australian media from 2006 to 2009. It explains that ‘Bogan’, like ‘Chav’ in Britain, is a widely used negative descriptor for the white working-class poor. In contrast, ‘Cubs’ have economic capital. This capital, and the Cub’s emergence, is linked to Australia’s resource boom of recent decades, when the need for skilled labour allowed a highly demarcated segment of the working class to earn relatively high incomes in the mining sector and to participate in consumption. We argue that access to economic capital has given the Cub the mobility to enter the everyday spaces of the middle class, but that this has caused disruption and anxiety to middle-class hegemony. As a result, the middle class has redrawn and reinforced class-infused symbolic and cultural boundaries whereby, despite their wealth, pernicious media representations mark Cubs as ‘other’ to middle-class deservingness, taste and morality.
Abstract:
Purpose: Investigations of foveal aberrations assume circular pupils. However, the pupil becomes increasingly elliptical with increasing visual field eccentricity. We address this and other issues concerning the specification of peripheral aberrations. Methods: One approach uses an elliptical pupil similar to the actual pupil shape, stretched along its minor axis to become a circle so that Zernike circular aberration polynomials may be used. Another approach uses a circular pupil whose diameter matches either the larger or smaller dimension of the elliptical pupil. Pictorial presentation of aberrations, the influence of wavelength on aberrations, sign differences between aberrations for fellow eyes, and referencing position to either the visual field or the retina are considered. Results: Examples show differences between the two approaches. Each has its advantages and disadvantages, but there are ways to compensate for most disadvantages. Two representations of the data are pupil aberration maps at each position in the visual field, and maps showing the variation in individual aberration coefficients across the field. Conclusions: Based on simplicity of use, adequacy of approximation, possible departures of off-axis pupils from ellipticity, and ease of understanding by clinicians, the circular pupil approach is preferable to the stretched elliptical approach for studies involving field angles up to 30 deg.
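The "stretched elliptical pupil" approach maps the off-axis pupil back to a unit circle so that standard Zernike circle polynomials apply. A minimal sketch follows, assuming the apparent pupil's minor axis is foreshortened by cos θ at field angle θ; the function and variable names are illustrative, not from the paper.

```python
import numpy as np

def ellipse_to_unit_circle(x, y, semi_major, field_angle_deg):
    """Map coordinates on an off-axis elliptical pupil to the unit circle.

    The apparent pupil is modelled as an ellipse whose minor axis equals
    the major axis foreshortened by cos(theta); stretching y by
    1/cos(theta) restores a circle, which is then scaled to unit radius
    so Zernike circle polynomials can be evaluated on (x_c, y_c).
    """
    cos_t = np.cos(np.radians(field_angle_deg))
    x_c = x / semi_major               # major axis -> unit radius
    y_c = y / (semi_major * cos_t)     # undo the cos(theta) foreshortening
    return x_c, y_c

# A point on the pupil margin along the minor axis maps to radius 1:
x_c, y_c = ellipse_to_unit_circle(0.0, 3.0 * np.cos(np.radians(30)), 3.0, 30)
print(x_c, y_c)  # -> 0.0, 1.0
```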
Abstract:
Standards are a key issue for economic development and for the performance of organizations. Since their definition and control are sources of power, it is important to understand the concept and to examine the representations it authorizes, which give standards their direction and legitimacy. We underline the difficulties classical microeconomics faces in establishing a theory of standardisation compatible with its fundamental axioms. We propose to approach the problem from the opposite direction: to question the theoretical base by reformulating assumptions about the autonomy of actors' choices. The theory of conventions offers both a theoretical framework and tools for understanding the systemic dimension and dynamic structure of standards, seen as a special case of conventions. This work thus aims to provide a sound basis for, and promote greater awareness in, the development of global project management standards, while underlining that social construction is not a matter of copyright but of open minds, collective cognitive processes and freedom for the common wealth.
Abstract:
In the past fifteen years, increasing attention has been given to the role of Vocational Education and Training (VET) in attracting large numbers of international students and its contribution to the economic development of Australia. This trend has given rise to many challenges in vocational education, especially with regard to providing quality education that ensures international students’ stay in Australia is a satisfactory experience. Teachers are key stakeholders in international education and share responsibility for ensuring international students gain quality learning experiences and positive outcomes. However, the challenges and needs of these teachers are generally not well understood. This paper therefore examines the professional, personal, ethical and educational dilemmas faced by teachers of international students. It reports on a Masters research project designed to investigate the dilemmas that teachers of international students face in VET in Australia, particularly in Brisbane. The study uses a qualitative approach within the interpretive constructivist paradigm to gain real-life insights through responsive interviewing and inductive data analysis. Data collection is complete and analysis is in progress. Responsive interviews with VET teachers of different academic and national backgrounds, ages and industry experience have identified particular understandings, ideologies and representations of what it means to be a teacher in today's multicultural VET environment, provoking both resistance and new pedagogical understanding of teacher dilemmas and their work environment through the eyes of teachers of international students. The paper considers the challenges for practitioners within the VET system while reflecting on the theme of the 2011 AVETRA conference, “Research in VET: Janus – Reflecting Back, Projecting Forward”, focusing particularly on “Rethinking pedagogies and pathways in VET work through the voice of VET workers”.
Abstract:
As computers approach the physical limits of information storable in memory, new methods will be needed to further improve information storage and retrieval. We propose a quantum-inspired, vector-based approach, which offers a contextually dependent mapping from subsymbolic to symbolic representations of information. If implemented computationally, this approach would provide an exceptionally high density of information storage without the traditionally required physical increase in storage capacity. The approach is inspired by the structure of human memory and incorporates elements of Gärdenfors’ Conceptual Space approach and Humphreys et al.’s matrix model of memory.
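In the spirit of Humphreys et al.'s matrix model, context–item associations can be superimposed as outer products in a single matrix and retrieved by cueing with a context vector. The sketch below is a minimal illustration only; the vector dimensionality, the example vocabulary and the cosine clean-up step are assumptions, not the proposal in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 256

# Random high-dimensional vectors standing in for contexts and items.
contexts = {name: rng.standard_normal(dim) for name in ("kitchen", "office")}
items = {name: rng.standard_normal(dim) for name in ("kettle", "stapler")}

# Study: superimpose the outer products of each context-item pair.
M = np.outer(contexts["kitchen"], items["kettle"]) \
  + np.outer(contexts["office"], items["stapler"])

# Retrieval: cue with a context vector, then clean up the noisy echo by
# cosine similarity against the known item vectors.
echo = contexts["kitchen"] @ M
best = max(items, key=lambda k: np.dot(echo, items[k]) /
           (np.linalg.norm(echo) * np.linalg.norm(items[k])))
print(best)  # -> "kettle" (with high probability at dim=256)
```

Because random high-dimensional vectors are nearly orthogonal, many pairs can share one matrix, which is the sense in which storage density grows without a matching increase in physical capacity.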
Abstract:
The design of pre-contoured fracture fixation implants (plates and nails) that correctly fit the anatomy of a patient relies on 3D models of long bones with accurate geometric representation. 3D data is usually available from computed tomography (CT) scans of human cadavers, which generally represent the over-60 age group. Thus, although half of the seriously injured population is aged 30 or below, virtually no data exist from these younger age groups to inform the design of implants that optimally fit them; relevant bone data from these age groups is therefore required. The current gold standard for acquiring such data, CT, involves ionising radiation and cannot be used to scan healthy human volunteers. Magnetic resonance imaging (MRI) has been shown to be a potential alternative in previous studies of small bones (tarsal bones) and parts of long bones. However, in order to use MRI effectively for 3D reconstruction of human long bones, further validation using whole long bones and appropriate reference standards is required. Accurate reconstruction of 3D models from CT or MRI data sets requires an accurate image segmentation method. Currently available sophisticated segmentation methods involve complex programming and mathematics that researchers are typically not trained to perform, so an accurate but relatively simple segmentation method is required for CT and MRI data. Furthermore, some limitations of 1.5T MRI, such as very long scanning times and poor contrast in articular regions, can potentially be reduced by higher-field 3T imaging; however, the signal to noise ratio (SNR) gain at the bone–soft tissue interface has not been quantified in the literature. Finally, because MRI scanning of long bones involves very long scanning times, the acquired images are prone to motion artefacts caused by random movements of the subject's limbs. One such artefact is the step artefact, believed to result from the volunteer moving during a scan, and it must be corrected before the models can be used for implant design.
The first aim of this study was therefore to investigate two segmentation methods, intensity thresholding and Canny edge detection, as accurate but simple methods for segmenting MRI and CT data. The second aim was to investigate the usability of MRI as a radiation-free imaging alternative to CT for reconstructing 3D models of long bones. The third aim was to use 3T MRI to address the poor articular contrast and long scanning times of current MRI. The fourth and final aim was to minimise the step artefact using 3D modelling techniques.
The segmentation methods were investigated using CT scans of five ovine femora. Single-level thresholding used a visually selected threshold level to segment the complete femur. For multilevel thresholding, multiple threshold levels calculated from the threshold selection method were applied to the proximal, diaphyseal and distal regions of the femur. Canny edge detection was applied by delineating the outer and inner contours of 2D images and combining them to generate the 3D model. Models generated by these methods were compared to a reference standard generated from mechanical contact scans of the denuded bone. The second aim was achieved using CT and MRI scans of five ovine femora, segmented using the multilevel threshold method. A surface geometric comparison was conducted between CT-based, MRI-based and reference models. To quantitatively compare 1.5T and 3T MRI, the right lower limbs of five healthy volunteers were scanned using scanners from the same manufacturer, and the images obtained under identical protocols were compared by means of the SNR and contrast to noise ratio (CNR) of muscle, bone marrow and bone. To correct the step artefact in the final 3D models, the step was simulated in five ovine femora scanned with a 3T MRI scanner and corrected using an alignment method based on the iterative closest point (ICP) algorithm.
The study demonstrated that the multilevel threshold approach, in combination with the threshold selection method, can generate 3D models of long bones with an average deviation of 0.18 mm; the corresponding figure for the single threshold method was 0.24 mm, a statistically significant difference. In comparison, the Canny edge detection method produced an average deviation of 0.20 mm. MRI-based models exhibited an average deviation of 0.23 mm compared with 0.18 mm for CT-based models; this difference was not statistically significant. 3T MRI improved the contrast at the bone–muscle interfaces of most anatomical regions of femora and tibiae, potentially reducing the inaccuracies conferred by the poor contrast of articular regions. Using the robust ICP algorithm to align the 3D surfaces, the step artefact caused by the volunteer moving the leg was corrected, with errors of 0.32 ± 0.02 mm relative to the reference standard. The study concludes that magnetic resonance imaging, together with simple multilevel thresholding segmentation, can produce 3D models of long bones with accurate geometric representations, and is therefore a potential alternative to the current gold standard, CT imaging.
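A schematic sketch of the region-wise multilevel thresholding described above follows; the region boundaries and threshold values are placeholders for illustration, not the study's calibrated values.

```python
import numpy as np

def segment_multilevel(volume, boundaries, thresholds):
    """Binary bone segmentation with region-specific intensity thresholds.

    volume     -- 3D image array ordered proximal -> distal along axis 0
    boundaries -- slice indices splitting the bone into regions,
                  e.g. (120, 380) for proximal/diaphyseal/distal
    thresholds -- one intensity threshold per region
    """
    mask = np.zeros(volume.shape, dtype=bool)
    starts = (0,) + tuple(boundaries)
    ends = tuple(boundaries) + (volume.shape[0],)
    for (s, e), t in zip(zip(starts, ends), thresholds):
        mask[s:e] = volume[s:e] >= t   # bone voxels exceed the threshold
    return mask

# Hypothetical femur volume: brighter cortical bone in the diaphysis
# warrants a higher threshold there than at the cancellous ends.
vol = np.random.default_rng(1).normal(100, 30, size=(500, 64, 64))
mask = segment_multilevel(vol, boundaries=(120, 380),
                          thresholds=(130, 160, 130))
print(mask.mean())  # fraction of voxels classified as bone
```

Choosing a separate threshold per anatomical region is what lets a simple intensity rule cope with the differing bone-tissue contrast of the proximal, diaphyseal and distal parts of the scan.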