135 results for Collecting
Abstract:
This paper discusses methodological developments in phenomenography that make it well suited to the study of teaching and learning to use information in educational environments. Phenomenography is typically used to analyze interview data to determine different ways of experiencing a phenomenon. There is an established tradition of phenomenographic research in the study of information literacy (e.g., Bruce, 1997, 2008; Lupton, 2008; Webber, Boon, & Johnston, 2005). Drawing on the large body of evidence compiled over two decades of research, phenomenographers developed variation theory, which explains what a learner can feasibly learn from a classroom lesson based on how the phenomenon being studied is presented (Marton, Runesson, & Tsui, 2004). Variation theory's ability to establish the critical conditions necessary for learning to occur has led to the use of phenomenographic methods to study classroom interactions by collecting and analyzing naturalistic data through observation, as well as interviews concerning teachers' intentions and students' different experiences of classroom lessons. Describing the methodological developments of phenomenography in relation to understanding the classroom experience, this paper discusses the potential benefits and challenges of using such methods to research the experiences of teaching and learning to use information in discipline-focused classrooms. The application of phenomenographic methodology for this purpose is exemplified with an ongoing study that explores how students learned to use information in an undergraduate language and gender course (Maybee, Bruce, Lupton, & Rebmann, in press). This paper suggests that by providing a nuanced understanding of what students are intended to learn about using information, and relating that to what transpires in the classroom and how students experience these lessons, phenomenography and variation theory offer a viable framework for further understanding and improving how students are taught, and learn, to use information.
Abstract:
This work was composed in relation to the author's research into the popularity of themes of ephemerality and affect in recent global art. This focus correlated with Chicks on Speed's ongoing inquiries into issues of collections and collecting in the art world, articulated as 'the art dump' by the group. The work was subsequently performed as a contribution to a performance with the international multidisciplinary group Chicks on Speed as part of their residency during MONA FOMA in Tasmania.
Abstract:
A food supply that delivers energy-dense products with high levels of salt, saturated fats and trans fats, in large portion sizes, is a major cause of non-communicable diseases (NCDs). The highly processed foods produced by large food corporations are primary drivers of increases in consumption of these adverse nutrients. The objective of this paper is to present an approach to monitoring food composition that can both document the extent of the problem and underpin novel actions to address it. The monitoring approach seeks to systematically collect information on high-level contextual factors influencing food composition and assess the energy density, salt, saturated fat, trans fats and portion sizes of highly processed foods for sale in retail outlets (with a focus on supermarkets and quick-service restaurants). Regular surveys of food composition are proposed across geographies and over time using a pragmatic, standardized methodology. Surveys have already been undertaken in several high- and middle-income countries, and the trends have been valuable in informing policy approaches. The purpose of collecting data is not to exhaustively document the composition of all foods in the food supply in each country, but rather to provide information to support governments, industry and communities to develop and enact strategies to curb food-related NCDs.
Abstract:
Food labelling on food packaging has the potential to have both positive and negative effects on diets. Monitoring different aspects of food labelling would help to identify priority policy options to help people make healthier food choices. A taxonomy of the elements of health-related food labelling is proposed. A systematic review of studies that assessed the nature and extent of health-related food labelling has been conducted to identify approaches to monitoring food labelling. A step-wise approach has been developed for independently assessing the nature and extent of health-related food labelling in different countries and over time. Procedures for sampling the food supply, and collecting and analysing data are proposed, as well as quantifiable measurement indicators and benchmarks for health-related food labelling.
Abstract:
This paper describes a generic, integrated gas-sensing system combining a solar-powered remote Unmanned Aerial Vehicle (UAV) and a Wireless Sensor Network (WSN). The system measures CH4 and CO2 concentrations using metal oxide (MOX) and non-dispersive infrared sensors, employs a new solar cell encapsulation method to power the UAV, and includes a data management platform to store, analyse and share the information with operators and external users. The system was successfully field tested at ground level and low altitudes, collecting, storing and transmitting data in real time to a central node for analysis and 3D mapping. It can be used in a wide range of outdoor applications, especially in agriculture, bushfire monitoring and mining studies, opening the way to ubiquitous, low-cost environmental monitoring. A video of the bench and flight tests can be seen at the following link: https://www.youtube.com/watch?v=Bwas7stYIxQ.
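To make the data path concrete, here is a minimal, hypothetical Python sketch of the kind of telemetry record and collection loop such a UAV gas-sensing system might run. The sensor-reading and transmission functions are placeholders for the hardware and radio interfaces of the paper's actual system; all names and values are illustrative, not taken from the paper.

```python
# Hypothetical sketch only: the record layout, function names and values
# below are illustrative, not the paper's actual implementation.
import json
import random
import time
from dataclasses import asdict, dataclass

@dataclass
class GasSample:
    timestamp: float   # Unix time of the reading
    lat: float         # GPS latitude of the platform
    lon: float         # GPS longitude
    alt_m: float       # altitude in metres
    ch4_ppm: float     # methane concentration (MOX sensor)
    co2_ppm: float     # carbon dioxide concentration (NDIR sensor)

def read_sensors() -> tuple[float, float]:
    """Placeholder for the hardware reads; returns simulated concentrations."""
    return random.gauss(2.0, 0.2), random.gauss(415.0, 10.0)

def transmit(sample: GasSample) -> None:
    """Placeholder for the radio/network link to the central node."""
    print(json.dumps(asdict(sample)))

def collection_loop(n_samples: int = 5, period_s: float = 1.0) -> None:
    """Read the gas sensors periodically and send each sample onward."""
    for _ in range(n_samples):
        ch4, co2 = read_sensors()
        transmit(GasSample(time.time(), -27.47, 153.02, 30.0, ch4, co2))
        time.sleep(period_s)

if __name__ == "__main__":
    collection_loop()
```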
Abstract:
Purpose – Traumatic events can cause post-traumatic stress disorder due to the severity of often unexpected events. The purpose of this paper is to reveal how conversations around lived experiences of traumatic events, such as the Christchurch earthquake in February 2011, can work as a strategy for people to come to terms with their experiences collaboratively. By encouraging young children to recall and tell their earthquake stories with their early childhood teachers, they can begin to respond, renew, and recover (Brown, 2012), preventing or minimising the development of further stress.
Design/methodology/approach – The study involved collecting data from the participating children, who took turns wearing a wireless microphone while their interactions with each other and with teachers were video recorded over one week in November 2011. A total of eight hours and 21 minutes of footage was collected; four minutes and 19 seconds of that footage are presented and analysed in this paper. The footage was watched repeatedly and transcribed using conversation analysis methods (Sacks, 1995).
Findings – Through analysing the detailed turn-taking utterances between teachers and children, the orderliness of the co-production of remembering is revealed, demonstrating that each member orients to being in agreement about what actually happened. These episodes of storytelling between the teachers and children demonstrate how the teachers encourage the children to tell of their experiences by actively engaging in conversations with them about the earthquake.
Originality/value – The conversation analysis approach used in this research was found to be useful in investigating aspects of disasters that the participants themselves remember as important and real. This approach offers a unique insight into how the earthquake event was experienced and reflected on by young children and their teachers, and so can inform future policy and provision in post-disaster situations.
Abstract:
This article uses the Lavender Library, Archives, and Cultural Exchange of Sacramento, Incorporated, a small queer community archives in Northern California, as a case study for expanding our knowledge of community archives and issues of archival practice. It explores why creating a separate community archives was necessary, the role of community members in founding and maintaining the archives, the development of its collections, and the ongoing challenges community archives face. The article also considers the implications community archives have for professional practice, particularly in the areas of collecting, description, and collaboration.
Abstract:
Big Data presents many challenges related to volume, whether one is interested in studying past datasets or, even more problematically, attempting to work with live streams of data. The most obvious challenge, in a 'noisy' environment such as contemporary social media, is to collect the pertinent information, be that information for a specific study, tweets which can inform emergency services or other responders to an ongoing crisis, or an advantage for those involved in prediction markets. Often, such a process is iterative, with keywords and hashtags changing over time, and both collection and analytic methodologies need to be continually adapted in response to this changing information. While many of the datasets collected and analyzed are preformed, that is, built around a particular keyword, hashtag, or set of authors, they still contain a large volume of information, much of which is unnecessary for the current purpose and/or potentially useful for future projects. Accordingly, this panel considers methods for separating and combining data to optimize big data research and report findings to stakeholders.

The first paper considers possible coding mechanisms for incoming tweets during a crisis, taking a large stream of incoming tweets and selecting those that need to be immediately placed in front of responders for manual filtering and possible action. The paper suggests two solutions: content analysis and user profiling. In the former, aspects of the tweet are assigned a score to assess its likely relationship to the topic at hand and the urgency of the information, while the latter attempts to identify users who either serve as amplifiers of information or are known as authoritative sources. Through these techniques, the information contained in a large dataset can be filtered down to match the expected capacity of emergency responders, and knowledge of the core keywords or hashtags relating to the current event is constantly refined for future data collection.

The second paper is also concerned with identifying significant tweets, in this case tweets relevant to a particular prediction market: tennis betting. As increasing numbers of professional athletes create Twitter accounts to communicate with their fans, information is being shared regarding injuries, form and emotions that has the potential to impact future results. As has already been demonstrated with leading US sports, such information is extremely valuable. Tennis, like American Football (NFL) and Baseball (MLB), has paid subscription services which manually filter incoming news sources, including tweets, for information valuable to gamblers, gambling operators, and fantasy sports players. However, while such services remain niche operations, much of the value of the information is lost by the time it reaches one of them. The paper thus considers how information could be filtered from Twitter user lists and hashtag or keyword monitoring, assessing the value of the source, the information, and the prediction markets to which it may relate.

The third paper examines methods for collecting Twitter data and following changes in an ongoing, dynamic social movement, such as the Occupy Wall Street movement. It involves the development of technical infrastructure to collect the tweets and make them available for exploration and analysis. A strategy to respond to changes in the social movement is also required, or the resulting tweets will only reflect the discussions and strategies the movement used at the time the keyword list was created; in a way, keyword creation is part strategy and part art. This paper describes strategies for the creation of a social media archive, specifically tweets related to the Occupy Wall Street movement, and methods for continuing to adapt data collection strategies as the movement's presence on Twitter changes over time. It also discusses the opportunities and methods for extracting smaller slices of data from an archive of social media data to support a multitude of research projects in multiple fields of study.

The common theme among these papers is that of constructing a dataset, filtering it for a specific purpose, and then using the resulting information to aid future data collection. The intention is that, through the papers presented and subsequent discussion, the panel will inform the wider research community not only about the objectives and limitations of data collection, live analytics, and filtering, but also about current and in-development methodologies that could be adopted by those working with such datasets, and how such approaches could be customized depending on the project stakeholders.
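As an illustration of the content-analysis scoring described for the first paper, the following Python sketch assigns each incoming tweet a score from topical relevance, urgency terms and author authority, then keeps only the top-scoring tweets up to responder capacity. The keyword lists, weights and account names are invented for illustration and are not drawn from the panel papers.

```python
# Illustrative scoring only: keyword lists, weights and account names are
# invented here, not taken from the panel papers.
import re

TOPIC_TERMS = {"earthquake", "flood", "evacuation", "damage"}
URGENCY_TERMS = {"trapped", "help", "urgent", "injured"}
AUTHORITATIVE_USERS = {"emergency_agency", "city_council"}

def tokens(text: str) -> set[str]:
    """Lower-case word tokens with punctuation stripped."""
    return set(re.findall(r"[a-z]+", text.lower()))

def score_tweet(text: str, author: str) -> float:
    """Combine topical relevance, urgency and author authority into one score."""
    words = tokens(text)
    relevance = len(words & TOPIC_TERMS)
    urgency = 2.0 * len(words & URGENCY_TERMS)   # urgency weighted higher
    authority = 3.0 if author in AUTHORITATIVE_USERS else 0.0
    return relevance + urgency + authority

def filter_for_responders(tweets, capacity):
    """Keep only the `capacity` highest-scoring (text, author) pairs."""
    ranked = sorted(tweets, key=lambda t: score_tweet(t[0], t[1]), reverse=True)
    return ranked[:capacity]

if __name__ == "__main__":
    stream = [
        ("Major earthquake damage downtown, people trapped", "resident1"),
        ("Nice weather today", "resident2"),
        ("Evacuation routes now open", "emergency_agency"),
    ]
    for text, author in filter_for_responders(stream, capacity=2):
        print(f"{score_tweet(text, author):4.1f}  @{author}: {text}")
```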
Abstract:
A large number of methods have been published that aim to evaluate various components of multi-view geometry systems. Most have focused on the feature extraction, description and matching stages (the visual front end), since geometry computation can be evaluated through simulation. Many datasets are constrained to small-scale or planar scenes that do not challenge new algorithms, or require special equipment. This paper presents a method for automatically generating geometry ground truth and challenging test cases from high spatio-temporal resolution video. The objective of the system is to enable data collection at any physical scale, in any location, and in various parts of the electromagnetic spectrum. The data generation process consists of collecting high resolution video, computing an accurate sparse 3D reconstruction, video frame culling and downsampling, and test case selection. The evaluation process consists of applying a test two-view geometry method to every test case and comparing the results to the ground truth. This system facilitates the evaluation of the whole geometry computation process, or any part thereof, against data compatible with a realistic application. A collection of example datasets and evaluations is included to demonstrate the range of applications of the proposed system.
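The evaluation step, applying a candidate two-view geometry method to each test case and comparing against ground truth, might look like the following Python sketch. The angular-error metric and pass threshold are common choices assumed here, not specifics from the paper, and the method under test is a stand-in.

```python
# Assumed metric and threshold: rotation angular error with a 2-degree
# pass criterion; the estimator here is a stand-in, not the paper's system.
import numpy as np

def rotation_error_deg(R_gt: np.ndarray, R_est: np.ndarray) -> float:
    """Angle, in degrees, of the relative rotation between the two matrices."""
    cos_theta = (np.trace(R_gt.T @ R_est) - 1.0) / 2.0
    return float(np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0))))

def evaluate(test_cases, method, threshold_deg: float = 2.0) -> float:
    """Fraction of test cases where the estimated rotation passes the threshold."""
    passed = 0
    for frame_a, frame_b, R_gt in test_cases:
        R_est = method(frame_a, frame_b)   # candidate two-view geometry method
        if rotation_error_deg(R_gt, R_est) < threshold_deg:
            passed += 1
    return passed / len(test_cases)

if __name__ == "__main__":
    I = np.eye(3)
    dummy_method = lambda a, b: I        # stand-in estimator for demonstration
    cases = [(None, None, I)] * 3        # image frames omitted in this sketch
    print(f"pass rate: {evaluate(cases, dummy_method):.2f}")
```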
Abstract:
Since the inception of the first Joint Registry in Sweden in 1979, many countries including Finland, Norway, Denmark, Australia, New Zealand, Canada, Scotland, England and Wales now have more than 10 years' experience and data, and are collecting data on more than 90% of the procedures performed nationally. There are also Joint Registries in Romania, Slovakia, Slovenia, Croatia, Hungary, France, Germany, Switzerland, the Czech Republic, Italy, Austria and Portugal, and work is ongoing to develop a Joint Registry in the US...
Abstract:
This paper describes the experimental evaluation of a novel Autonomous Surface Vehicle capable of navigating complex inland water reservoirs and measuring a range of water quality properties and greenhouse gas emissions. The 16-foot solar-powered catamaran is capable of collecting water column profiles whilst in motion. It is also directly integrated with a reservoir-scale floating sensor network to allow remote mission uploads, data download and adaptive sampling strategies. This paper describes the onboard vehicle navigation and control algorithms as well as obstacle avoidance strategies. Experimental results demonstrate its ability to maintain track and avoid obstacles on a variety of large-scale missions and under differing weather conditions, as well as its ability to continuously collect various water quality parameters, complementing traditional manual monitoring campaigns.
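As a rough illustration of reactive obstacle avoidance of the kind such a vehicle might layer over waypoint tracking, the following Python sketch blends a goal-seeking vector with repulsive vectors from nearby obstacles (a simple potential-field scheme). This is an assumption-laden sketch, not the paper's actual navigation or control algorithm.

```python
# Assumption-laden sketch: a simple potential-field heading rule, not the
# vehicle's actual navigation or obstacle avoidance algorithm.
import math

def desired_heading(pos, waypoint, obstacles, safe_radius=10.0):
    """Heading (radians) toward the waypoint, deflected away from close obstacles."""
    gx, gy = waypoint[0] - pos[0], waypoint[1] - pos[1]
    norm = math.hypot(gx, gy) or 1.0
    vx, vy = gx / norm, gy / norm            # unit attraction toward the goal
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy    # vector pointing away from obstacle
        dist = math.hypot(dx, dy)
        if 0 < dist < safe_radius:
            w = (safe_radius - dist) / safe_radius   # stronger when closer
            vx += w * dx / dist
            vy += w * dy / dist
    return math.atan2(vy, vx)

if __name__ == "__main__":
    h = desired_heading(pos=(0.0, 0.0), waypoint=(100.0, 0.0),
                        obstacles=[(5.0, 1.0)])
    print(f"commanded heading: {math.degrees(h):.1f} deg")
```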
Abstract:
Environmental monitoring is becoming critical as human activity and climate change place greater pressures on biodiversity, leading to an increasing need for data to make informed decisions. Acoustic sensors can help collect data across large areas for extended periods, making them attractive for environmental monitoring. However, managing and analysing large volumes of environmental acoustic data is a major challenge that consequently hinders effective utilization of the big datasets collected. This paper presents an overview of our current techniques for collecting, storing and analysing large volumes of acoustic data efficiently, accurately, and cost-effectively.
Abstract:
Video-stimulated recall interviewing is a research technique in which subjects view a video sequence of their behaviour and are then invited to reflect on their decision-making processes during the videoed event. Despite its popularity, this technique raises methodological issues for researchers, particularly novice researchers in education. The paper reports that while stimulated recall is a valuable technique for investigating decision-making processes in relation to specific events, it does not lend itself to universal application in research. This paper recounts one study in educational research where the stimulated recall interview was used successfully as a data collection tool, with an adapted version of the SRI procedure.
Abstract:
Numerous research studies have evaluated whether distance learning is a viable alternative to traditional learning methods. These studies have generally made use of cross-sectional surveys for collecting data, comparing distance to traditional learners with the intent of validating the former as a viable educational tool. Inherent fundamental differences between traditional and distance learning pedagogies, however, reduce the reliability of these comparative studies and constrain the validity of analyses resulting from this analytical approach. This article presents the results of a research project undertaken to analyze the expectations and experiences of distance learners in their degree programs. Students were given surveys designed to examine factors expected to affect their overall value assessment of their distance learning program. Multivariate statistical analyses were used to analyze the correlations among variables of interest to support hypothesized relationships among them. Focusing on distance learners overcomes some of the limitations of assessments that compare off- and on-campus student experiences. Evaluation and modeling of distance learner responses on perceived value for money of the distance education they received indicate that the two most important influences are course communication requirements, which had a negative effect, and course logistical simplicity, which had a positive effect. Combined, these two factors accounted for approximately 47% of the variability in the perceived value for money of the sampled students' educational programs. A detailed focus on comparing the expectations and outcomes of distance learners complements the existing literature, which is dominated by comparative studies of distance and non-distance learners.
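For readers unfamiliar with this kind of analysis, the following Python sketch shows how perceived value for money could be regressed on the two factors to obtain coefficients and an R² (explained-variance) figure. The data is synthetic, generated so the signs match the reported effects; the approximately 47% figure in the abstract comes from the study's own sample, not from this illustration.

```python
# Synthetic data only: coefficients are chosen so the signs match the
# reported effects; the abstract's ~47% comes from the study's own sample.
import numpy as np

rng = np.random.default_rng(0)
n = 200
communication = rng.normal(size=n)   # course communication requirements
simplicity = rng.normal(size=n)      # course logistical simplicity
value = -0.6 * communication + 0.7 * simplicity + rng.normal(scale=1.0, size=n)

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(n), communication, simplicity])
coef, *_ = np.linalg.lstsq(X, value, rcond=None)
pred = X @ coef
r2 = 1.0 - np.sum((value - pred) ** 2) / np.sum((value - value.mean()) ** 2)
print(f"intercept={coef[0]:.2f}, b_comm={coef[1]:.2f}, "
      f"b_simp={coef[2]:.2f}, R^2={r2:.2f}")
```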
Abstract:
Within Human-Computer Interaction (HCI) and Computer Supported Cooperative Work (CSCW) research, the notion of technologically-mediated awareness is often used to allow relevant people to maintain a mental model of each other's activities, behaviors and status information so that they can organize and coordinate work or other joint activities. Initial conceptions of awareness focused largely on improving productivity and efficiency within work environments. With new social, cultural and commercial needs and the emergence of novel computing technologies, the focus of technologically-mediated awareness has extended from work environments to people's everyday interactions, and its scope has broadened from conveying work-related activities to emotions, love, social status and a range of other aspects. This trend in conceptualizing HCI design is termed experience-focused HCI. In my PhD dissertation, Designing for Awareness, I report on how we, as HCI researchers, can design awareness systems from an experience-focused HCI perspective, following the trend of conveying awareness beyond task-based, instrumental and productive needs. Within the overall aim of designing for awareness, my research advocates ethnomethodologically-informed approaches for conceptualizing and designing for awareness. In this sense, awareness is not a predefined phenomenon but something that is situated and particular to a given environment. I have used this approach in two design cases of developing interactive systems that support awareness beyond task-based aspects in work environments. In both cases, I followed a complete design cycle: collecting an in-situ understanding of an environment, developing implications for a new technology, implementing a prototype technology, and studying the use of the technology in its natural settings.