554 results for Audio-visual product
Abstract:
A key issue in the field of inclusive design is the ability to provide designers with an understanding of people's range of capabilities. Since it is not feasible to assess product interactions with a large sample, this paper assesses a range of proxy measures of design-relevant capabilities. It describes a study that was conducted to identify which measures provide the best prediction of people's abilities to use a range of products. A detailed investigation with 100 respondents aged 50-80 years was undertaken to examine how they manage typical household products. Predictor variables included self-report and performance measures across a variety of capabilities (vision, hearing, dexterity and cognitive function), component activities used in product interactions (e.g. using a remote control, touch screen) and psychological characteristics (e.g. self-efficacy, confidence with using electronic devices). Results showed, as expected, a higher prevalence of visual, hearing, dexterity, cognitive and product interaction difficulties in the 65-80 age group. Regression analyses showed that, in addition to age, performance measures of vision (acuity, contrast sensitivity) and hearing (hearing threshold) and self-report and performance measures of component activities are strong predictors of successful product interactions. These findings will guide the choice of measures to be used in a subsequent national survey of design-relevant capabilities, which will lead to the creation of a capability database. This will be converted into a tool for designers to understand the implications of their design decisions, so that they can design products in a more inclusive way.
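The regression analysis described above can be illustrated with a small sketch. The data below are entirely synthetic and the variable names (acuity, hearing threshold) are assumptions standing in for the study's capability measures; this shows only the general form of predicting an interaction-success score from age plus performance measures via ordinary least squares.

```python
import numpy as np

# Hypothetical illustration of the kind of regression reported in the study:
# predicting a product-interaction score from age plus performance measures
# (visual acuity, hearing threshold). All data are synthetic, not the
# authors' dataset.
rng = np.random.default_rng(0)
n = 100
age = rng.uniform(50, 80, n)
acuity = rng.normal(1.0, 0.2, n) - 0.01 * (age - 50)   # declines with age
hearing = rng.normal(20, 5, n) + 0.5 * (age - 50)      # threshold rises with age
# Synthetic outcome: interaction success driven by the predictors plus noise.
success = 5.0 + 2.0 * acuity - 0.05 * hearing - 0.02 * age + rng.normal(0, 0.3, n)

# Ordinary least squares via a design matrix with an intercept column.
X = np.column_stack([np.ones(n), age, acuity, hearing])
coef, *_ = np.linalg.lstsq(X, success, rcond=None)
pred = X @ coef
r2 = 1 - np.sum((success - pred) ** 2) / np.sum((success - success.mean()) ** 2)
print(f"R^2 = {r2:.2f}")
```

With predictors that genuinely drive the outcome, as in the study's findings, the fitted model recovers most of the variance.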
Abstract:
The rapid growth of visual information on the Web has led to immense interest in multimedia information retrieval (MIR). While advances in MIR systems have achieved some success in specific domains, particularly with content-based approaches, general Web users still struggle to find the images they want. Despite progress in content-based object recognition and concept extraction, the major problem in current Web image searching remains the querying process. Since most online users express their needs only in semantic terms or objects, systems that rely on visual features (e.g., color or texture) to search images create a semantic gap which hinders general users from fully expressing their needs. In addition, query-by-example (QBE) retrieval imposes extra obstacles for exploratory search because users may not always have a representative image at hand or in mind when starting a search (i.e., the page zero problem). As a result, the majority of current online image search engines (e.g., Google, Yahoo, and Flickr) still primarily use textual queries. The problem with query-based retrieval systems is that they capture users' information needs only as formal queries; the implicit and abstract parts of users' information needs are inevitably overlooked. Hence, users often struggle to formulate queries that best represent their needs, and some compromises have to be made. Studies of Web search logs suggest that multimedia searches are more difficult than textual Web searches, and that image searching is the most difficult of all, compared with video or audio searches. Online users therefore need to put in more effort when searching multimedia content, especially for images. Most interactions in Web image searching occur during query reformulation. While log analysis provides intriguing views of how the majority of users search, their search needs and motivations are ultimately neglected.
User studies on image searching have attempted to understand users' search contexts in terms of users' background (e.g., knowledge, profession, motivation for search and task types) and the search outcomes (e.g., use of retrieved images, search performance). However, these studies have typically focused on particular domains with a selective group of professional users. General users' Web image searching contexts and behaviors remain little understood, although they represent the majority of online image searching activity today. We argue that only by understanding Web image users' contexts can current Web search engines further improve their usefulness and provide more efficient searches. In order to understand users' search contexts, a user study was conducted based on university students' Web image searching in News, Travel, and commercial Product domains. The three search domains were deliberately chosen to reflect image users' interests in people, time, event, location, and objects. We investigated participants' Web image searching behavior, with a focus on query reformulation and search strategies. Participants' search contexts, such as their search background, motivation for search, and search outcomes, were gathered by questionnaires. The searching activity was recorded along with participants' think-aloud data for analyzing significant search patterns. The relationships between participants' search contexts and corresponding search strategies were discovered using a Grounded Theory approach. Our key findings include the following aspects:
- Effects of users' interactive intents on query reformulation patterns and search strategies
- Effects of task domain on task specificity and task difficulty, as well as on some specific searching behaviors
- Effects of searching experience on result-expansion strategies
A contextual image searching model was constructed based on these findings.
The model helped us understand Web image searching from the user's perspective, and introduced a context-aware searching paradigm for current retrieval systems. A query recommendation tool was also developed to demonstrate how users' query reformulation contexts can potentially contribute to more efficient searching.
Abstract:
Background: Standard operating procedures state that police officers should not drive while interacting with their mobile data terminal (MDT), which provides in-vehicle information essential to police work. Such interactions do, however, occur in practice and represent a potential source of driver distraction. The MDT comprises visual output with manual input via touch screen and keyboard. This study investigated the potential for alternative input and output methods to mitigate driver distraction, with specific focus on eye movements. Method: Nineteen experienced drivers of police vehicles (one female) from the NSW Police Force completed four simulated urban drives. Three drives included a concurrent secondary task: an imitation licence-plate search using an emulated MDT. Three different interface methods were examined: Visual-Manual, Visual-Voice, and Audio-Voice ("Visual" and "Audio" = output modality; "Manual" and "Voice" = input modality). During each drive, eye movements were recorded using FaceLAB™ (Seeing Machines Ltd, Canberra, ACT), and gaze direction and glances on the MDT were assessed. Results: The Visual-Manual and Visual-Voice interfaces resulted in significantly more glances towards the MDT than Audio-Voice or Baseline. For longer-duration glances (>2 s and 1-2 s), the Visual-Manual interface resulted in significantly more fixations than Baseline or Audio-Voice. Short-duration glances (<1 s) were significantly more frequent for both Visual-Voice and Visual-Manual compared with Baseline and Audio-Voice. There were no significant differences between Baseline and Audio-Voice. Conclusion: An Audio-Voice interface has the greatest potential to decrease visual distraction for police drivers. However, it is acknowledged that audio output may have limitations for information presentation compared with visual output.
The Visual-Voice interface offers an environment where the capacity to present information is sustained, whilst distraction to the driver is reduced (compared to Visual-Manual) by enabling adaptation of fixation behaviour.
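The glance analysis above bins fixations on the MDT into three duration bands. A minimal sketch of that binning step, with invented glance durations (the study's actual data are not reproduced here):

```python
import numpy as np

# Count glance durations (seconds) toward the MDT into the three bands
# reported in the study: <1 s, 1-2 s, and >2 s. The durations are invented.
durations = np.array([0.4, 0.8, 1.5, 2.6, 0.3, 1.1, 3.2, 0.9])

bins = {
    "<1s": int(np.sum(durations < 1.0)),
    "1-2s": int(np.sum((durations >= 1.0) & (durations <= 2.0))),
    ">2s": int(np.sum(durations > 2.0)),
}
print(bins)  # {'<1s': 4, '1-2s': 2, '>2s': 2}
```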
Abstract:
Bioacoustic data can provide an important base for environmental monitoring. To explore the large volume of field recordings being collected, this paper presents an automated similarity-search algorithm. A user specifies a region of an audio recording defined by frequency and time bounds; the content of that region is used to construct a query. In the retrieval process, the algorithm automatically scans through recordings to search for similar regions. In detail, we present a feature extraction approach based on the visual content of vocalisations – in this case, ridges – and develop a generic regional representation of vocalisations for indexing. Our feature extraction method works best for bird vocalisations showing ridge characteristics. The regional representation method allows the content of an arbitrary region of a continuous recording to be described in a compressed format.
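The region-query idea can be sketched as follows. As a stand-in for the paper's ridge-based features, this toy version slides a user-selected spectrogram region over a recording's spectrogram and scores each position by normalized cross-correlation; the spectrogram values are synthetic.

```python
import numpy as np

# Toy region-based similarity search over a spectrogram. This uses plain
# normalized cross-correlation as a stand-in for the paper's ridge features;
# the spectrogram is random synthetic data.
rng = np.random.default_rng(1)
spec = rng.random((64, 500))          # freq bins x time frames
query = spec[10:20, 100:120].copy()   # user-selected region (freq/time bounds)

def best_match(spec, query):
    """Slide the query over the spectrogram; return the top-scoring offset."""
    qh, qw = query.shape
    q = (query - query.mean()) / query.std()
    best, best_pos = -np.inf, None
    for i in range(spec.shape[0] - qh + 1):
        for j in range(spec.shape[1] - qw + 1):
            w = spec[i:i + qh, j:j + qw]
            s = (w - w.mean()) / (w.std() + 1e-12)
            score = float(np.mean(q * s))   # Pearson correlation of the regions
            if score > best:
                best, best_pos = score, (i, j)
    return best_pos, best

pos, score = best_match(spec, query)
print(pos)  # the query's own location scores highest: (10, 100)
```

A real implementation would index compressed regional features rather than scan raw spectrograms, as the abstract describes.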
Abstract:
Objectives: To investigate the relationship between two assessments used to quantify delayed onset muscle soreness (DOMS): the visual analog scale (VAS) and the pressure pain threshold (PPT). Methods: Thirty-one healthy young men (25.8 ± 5.5 years) performed 10 sets of six maximal eccentric contractions of the elbow flexors with their non-dominant arm. Before and one to four days after the exercise, muscle pain perceived upon palpation of the biceps brachii at three sites (5, 9 and 13 cm above the elbow crease) was assessed by VAS with a 100 mm line (0 = no pain, 100 = extremely painful), and PPT at the same sites was determined by an algometer. Changes in VAS and PPT over time were compared amongst the three sites by a two-way repeated measures analysis of variance, and the relationship between VAS and PPT was analyzed using a Pearson product-moment correlation. Results: The VAS increased one to four days after exercise and peaked two days post-exercise, while the PPT decreased most at one day post-exercise and remained below baseline for four days following exercise (p < 0.05). No significant difference among the three sites was found for VAS (p = 0.62) or PPT (p = 0.45). The magnitude of change in VAS did not significantly correlate with that of PPT (r = −0.20, p = 0.28). Conclusion: These results suggest that the level of muscle pain is not region-specific, at least among the three sites investigated in the study, and that VAS and PPT provide different information about DOMS, indicating that they represent different aspects of pain.
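The correlation analysis in this abstract is a standard Pearson product-moment computation across participants. A minimal sketch with invented values (the study itself reported r = −0.20, p = 0.28, i.e. no significant association):

```python
import numpy as np

# Pearson's r between the peak change in VAS and the peak change in PPT
# across 31 participants. The values below are invented for illustration.
rng = np.random.default_rng(2)
delta_vas = rng.normal(45, 15, 31)    # mm increase on the 100 mm VAS line
delta_ppt = rng.normal(-80, 30, 31)   # decrease in pressure pain threshold

r = np.corrcoef(delta_vas, delta_ppt)[0, 1]
print(f"r = {r:.2f}")
```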
Abstract:
The visual characteristics of urban environments have been changing dramatically with the growth of cities around the world. Protecting and enhancing landscape character in urban environments has been one of the challenges for policy makers in addressing sustainable urban growth. Visual openness and enclosure are important attributes in the perception of visual space; they affect human interaction with physical space and can often be modified by new developments. Measuring visual openness in urban areas results in a more accurate, reliable, and systematic approach to managing and controlling visual qualities in growing cities. Recent advances in geographic information systems (GIS) and survey techniques make it feasible to measure and quantify this attribute with a high degree of realism and precision, yet previous studies in this field have not taken full advantage of these improvements. This paper proposes a method to measure visual openness and enclosure in a changing urban landscape on the Gold Coast, Australia, using this improved GIS functionality. With this method, visual openness is calculated and described for all publicly accessible areas in the selected study area, and a final map is produced showing the areas with the highest visual openness and visibility to natural landscape resources. The output of this research can be used by planners and decision-makers to manage and control views in complex urban landscapes. Depending on the availability of GIS data, the method can also be applied to other regions, including non-urban landscapes, to help planners and policy-makers manage views and visual qualities.
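The core of a visual-openness measure of this kind can be sketched simply: from a viewpoint, cast sight lines in many directions and report the fraction that remain unobstructed within a fixed radius. The grid, viewpoint and parameters below are illustrative assumptions, not the paper's GIS method or Gold Coast data.

```python
import math

# Toy visual-openness measure: fraction of rays from a viewpoint that reach
# a fixed radius without hitting an obstacle ('#') on a 2D grid. Grid and
# parameters are invented for illustration.
grid = [
    "..........",
    "..##......",
    "..##......",
    "..........",
    "......#...",
    "..........",
]

def openness(grid, x, y, radius=4.0, n_rays=72):
    h, w = len(grid), len(grid[0])
    clear = 0
    for k in range(n_rays):
        a = 2 * math.pi * k / n_rays
        blocked = False
        d, step = 0.25, 0.25
        while d <= radius:
            cx = int(round(x + d * math.cos(a)))
            cy = int(round(y + d * math.sin(a)))
            if 0 <= cy < h and 0 <= cx < w and grid[cy][cx] == "#":
                blocked = True
                break
            d += step
        if not blocked:
            clear += 1
    return clear / n_rays

score = openness(grid, x=7, y=1)
print(f"openness = {score:.2f}")
```

A GIS viewshed works on the same principle but over 3D terrain and building models rather than a flat obstacle grid.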
Abstract:
Acoustic recordings play an increasingly important role in monitoring terrestrial environments. However, due to rapid advances in technology, ecologists are accumulating more audio than they can listen to. Our approach to this big-data challenge is to visualize the content of long-duration audio recordings by calculating acoustic indices. These are statistics which describe the temporal-spectral distribution of acoustic energy and reflect content of ecological interest. We combine spectral indices to produce false-color spectrogram images. These not only reveal acoustic content but also facilitate navigation. An additional analytic challenge is to find appropriate descriptors to summarize the content of 24-hour recordings, so that it becomes possible to monitor long-term changes in the acoustic environment at a single location and to compare the acoustic environments of different locations. We describe a 24-hour ‘acoustic-fingerprint’ which shows some preliminary promise.
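The false-color technique described above maps acoustic indices to color channels. A minimal sketch, assuming three synthetic index matrices stand in for the real spectral indices (the paper's actual indices, such as acoustic complexity or entropy, differ): each index is normalized to [0, 1] and the three are stacked as the R, G and B channels of one image.

```python
import numpy as np

# Combine three spectral acoustic indices into one false-color spectrogram.
# The index values are synthetic stand-ins for real indices.
rng = np.random.default_rng(3)
freq_bins, minutes = 256, 1440   # one 24-hour recording at 1-minute resolution
index_a = rng.random((freq_bins, minutes))
index_b = rng.random((freq_bins, minutes))
index_c = rng.random((freq_bins, minutes))

def normalize(x):
    # Clip extremes before scaling so outliers don't wash out the image.
    lo, hi = np.percentile(x, [2, 98])
    return np.clip((x - lo) / (hi - lo + 1e-12), 0.0, 1.0)

# Each pixel's color now reflects which kind of acoustic energy dominates
# that time/frequency cell.
rgb = np.dstack([normalize(index_a), normalize(index_b), normalize(index_c)])
print(rgb.shape)  # (256, 1440, 3), ready to save as a false-color image
```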
Abstract:
Acoustic recordings play an increasingly important role in monitoring terrestrial and aquatic environments. However, rapid advances in technology make it possible to accumulate thousands of hours of recordings, more than ecologists can ever listen to. Our approach to this big-data challenge is to visualize the content of long-duration audio recordings on multiple scales, from minutes and hours to days and years. The visualization should facilitate navigation and yield ecologically meaningful information prior to listening to the audio. To construct images, we calculate acoustic indices, statistics that describe the distribution of acoustic energy and reflect content of ecological interest. We combine various indices to produce false-color spectrogram images that reveal acoustic content and facilitate navigation. The technical challenge we investigate in this work is how to navigate recordings that are days or even months in duration. We introduce a method of zooming through multiple temporal scales, analogous to Google Maps. However, the "landscape" to be navigated is not geographical, and therefore not intrinsically visual, but rather a graphical representation of the underlying audio. We describe solutions to navigating spectrograms that range over three orders of magnitude of temporal scale. We make three sets of observations:
1. At least ten intermediate scale steps are required to zoom over three orders of magnitude of temporal scale.
2. Three different visual representations are required to cover the range of temporal scales.
3. We present a solution to the problem of maintaining visual continuity when stepping between different visual representations.
Finally, we demonstrate the utility of the approach with four case studies.
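The first observation has a simple arithmetic consequence worth making explicit: zooming over three orders of magnitude in ten steps implies a per-step zoom factor of at most 1000^(1/10) ≈ 2, i.e. roughly a doubling of temporal scale per step. The units below (minutes per screen) are an assumption for illustration.

```python
import math

# Per-step zoom factor for covering three orders of magnitude of temporal
# scale in ten steps, plus the resulting ladder of scales.
orders = 3     # e.g. from 1 minute per screen up to ~1000 minutes per screen
steps = 10
factor = (10 ** orders) ** (1 / steps)
print(f"per-step zoom factor = {factor:.3f}")   # about 2x per step

scales = [round(factor ** k, 1) for k in range(steps + 1)]
print(scales)
```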
Abstract:
Bactrocera tryoni (Froggatt) is Australia's major horticultural insect pest, yet monitoring females remains logistically difficult. We trialled the 'Ladd trap' as a potential female surveillance or monitoring tool. This trap design is used to trap and monitor fruit flies in countries other than Australia (e.g. the USA). The Ladd trap consists of a flat yellow panel (a traditional 'sticky trap') with a three-dimensional red sphere (a fruit mimic) attached in the middle. We confirmed, in field-cage trials, that the combination of yellow panel and red sphere was more attractive to B. tryoni than either component in isolation. In a second set of field-cage trials, we showed that it was the red-yellow contrast, rather than the three-dimensional effect, that was responsible for the trap's effectiveness, with B. tryoni equally attracted to a Ladd trap as to a two-dimensional yellow panel with a circular red centre. The sex ratio of catches was approximately even in the field-cage trials. In field trials, we tested the traditional red-sphere Ladd trap against traps for which the sphere was painted blue, black or yellow. The colour of the sphere did not significantly influence trap efficiency in these trials, despite the fact that the yellow-panel/yellow-sphere combination presented no colour contrast to the flies. In six weeks of field trials, over 1500 flies were caught, almost exactly two-thirds of them females. Overall, flies were more likely to be caught on the yellow panel than on the sphere; but, for the commercial Ladd trap, proportionally more females were caught on the red sphere versus the yellow panel than would be predicted based on the relative surface area of each component, a result also seen in the field-cage trials. We determined that no modification of the trap was more effective than the commercially available Ladd trap, and so consider that product suitable for more extensive field testing as a B. tryoni research and monitoring tool.
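The surface-area expectation used in that comparison is straightforward: if flies landed at random, the share caught on the sphere would match the sphere's share of the trap's total catching area. The dimensions and observed share below are invented, not the commercial Ladd trap's actual measurements.

```python
import math

# Expected share of catches on the sphere under random landing, based on
# relative surface area. All dimensions and the observed share are invented.
panel_w, panel_h = 0.22, 0.28   # m, flat yellow panel (both faces sticky)
sphere_r = 0.04                 # m, red sphere

panel_area = 2 * panel_w * panel_h
sphere_area = 4 * math.pi * sphere_r ** 2
expected_sphere_share = sphere_area / (sphere_area + panel_area)

observed_sphere_share = 0.35    # hypothetical share of females on the sphere
print(f"expected {expected_sphere_share:.2f} vs observed {observed_sphere_share:.2f}")
```

An observed share well above the area-based expectation is the pattern the abstract reports for females on the red sphere.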
Abstract:
The behavior of the hydroxyl units of synthetic goethite and its dehydroxylated product hematite was characterized using a combination of Fourier transform infrared (FTIR) spectroscopy and X-ray diffraction (XRD) during thermal transformation over a temperature range of 180-270 degrees C. Hematite was detected by XRD at temperatures above 200 degrees C, while goethite was not observed above 230 degrees C. Five intense OH vibrations at 3212-3194, 1687-1674, 1643-1640, 888-884 and 800-798 cm(-1), and a H2O vibration at 3450-3445 cm(-1), were observed for goethite. The intensity of the hydroxyl stretching and bending vibrations decreased with the extent of dehydroxylation of goethite. Infrared absorption bands clearly show the phase transformation between goethite and hematite: in particular, the migration of excess hydroxyl units from goethite to hematite. Two bands at 536-533 and 454-452 cm(-1) are the low-wavenumber vibrations of Fe-O in the hematite structure. Band component analysis of the FTIR spectra supports the conclusion that the hydroxyl units mainly affect the a plane in goethite and the equivalent c plane in hematite.
Abstract:
This paper presents a preliminary study on the dielectric properties and curing of three different types of epoxy resin mixed at various stoichiometric ratios of hardener, flydust and aluminium powder under microwave energy. In this work, the curing of thin layers of epoxy resin using microwave radiation was investigated as an alternative technique that could be implemented to develop a new rapid product development process. It was observed that the curing time and temperature were a function of the percentage of hardener and fillers present in the epoxy resins. Initially, the dielectric properties of the epoxy resins with hardener were measured and directly correlated to the curing process in order to understand the properties of the cured specimens. Tensile tests were conducted on the three types of epoxy resin with hardener and fillers. By modifying the dielectric properties of the mixtures, a significant decrease in curing time was observed. To study the microstructural changes of the cured specimens, the morphology of the fracture surface was examined using scanning electron microscopy.
Abstract:
This paper investigates the effectiveness of virtual product placement as a marketing tool by examining the relationship between brand recall and recognition and virtual product placement. It also aims to address a gap in the existing academic literature by focusing on the impact of product placement on recall and recognition of new brands. The growing importance of product placement is discussed and a review of previous research on product placement and virtual product placement is provided. The research methodology used to study the recall and recognition effects of virtual product placement is described, and key findings are presented. Finally, implications are discussed and recommendations for future research are provided.