891 results for Collecting


Relevance: 10.00%

Abstract:

This article uses the Lavender Library, Archives, and Cultural Exchange of Sacramento, Incorporated, a small queer community archives in Northern California, as a case study for expanding our knowledge of community archives and issues of archival practice. It explores why creating a separate community archives was necessary, the role of community members in founding and maintaining the archives, the development of its collections, and the ongoing challenges community archives face. The article also considers the implications community archives have for professional practice, particularly in the areas of collecting, description, and collaboration.

Relevance: 10.00%

Abstract:

Big Data presents many challenges related to volume, whether one is interested in studying past datasets or, even more problematically, attempting to work with live streams of data. The most obvious challenge, in a 'noisy' environment such as contemporary social media, is to collect the pertinent information, whether that is information for a specific study, tweets that can inform emergency services or other responders to an ongoing crisis, or data that can give an advantage to those involved in prediction markets. Often, such a process is iterative, with keywords and hashtags changing with the passage of time, and both collection and analytic methodologies need to be continually adapted to respond to this changing information. While many of the datasets collected and analyzed are preformed, that is, built around a particular keyword, hashtag, or set of authors, they still contain a large volume of information, much of which is unnecessary for the current purpose and/or potentially useful for future projects. Accordingly, this panel considers methods for separating and combining data to optimize big data research and report findings to stakeholders.

The first paper considers possible coding mechanisms for incoming tweets during a crisis, taking a large stream of incoming tweets and selecting which of those need to be immediately placed in front of responders, for manual filtering and possible action. The paper suggests two solutions for this: content analysis and user profiling. In the former case, aspects of the tweet are assigned a score to assess its likely relationship to the topic at hand and the urgency of the information, whilst the latter attempts to identify those users who are either serving as amplifiers of information or are known as an authoritative source. Through these techniques, the information contained in a large dataset can be filtered down to match the expected capacity of emergency responders, and knowledge of the core keywords or hashtags relating to the current event is constantly refined for future data collection.

The second paper is also concerned with identifying significant tweets, but in this case tweets relevant to a particular prediction market: tennis betting. As increasing numbers of professional sportsmen and women create Twitter accounts to communicate with their fans, information is being shared regarding injuries, form and emotions which has the potential to impact on future results. As has already been demonstrated with leading US sports, such information is extremely valuable. Tennis, as with American Football (NFL) and Baseball (MLB), has paid subscription services which manually filter incoming news sources, including tweets, for information valuable to gamblers, gambling operators, and fantasy sports players. However, whilst such services are still niche operations, much of the value of the information is lost by the time it reaches one of these services. The paper thus considers how information could be filtered from Twitter user lists and hashtag or keyword monitoring, assessing the value of the source, the information, and the prediction markets to which it may relate.

The third paper examines methods for collecting Twitter data and following changes in an ongoing, dynamic social movement, such as the Occupy Wall Street movement. It involves the development of technical infrastructure to collect the tweets and make them available for exploration and analysis. A strategy to respond to changes in the social movement is also required, or the resulting tweets will only reflect the discussions and strategies the movement used at the time the keyword list was created; in a way, keyword creation is part strategy and part art. In this paper we describe strategies for the creation of a social media archive, specifically of tweets related to the Occupy Wall Street movement, and methods for continuing to adapt data collection strategies as the movement's presence on Twitter changes over time. We also discuss the opportunities and methods for extracting smaller slices of data from an archive of social media data to support a multitude of research projects in multiple fields of study.

The common theme amongst these papers is that of constructing a dataset, filtering it for a specific purpose, and then using the resulting information to aid future data collection. The intention is that, through the papers presented and subsequent discussion, the panel will inform the wider research community not only on the objectives and limitations of data collection, live analytics, and filtering, but also on current and in-development methodologies that could be adopted by those working with such datasets, and how such approaches could be customized depending on the project stakeholders.
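The content-analysis scoring described for the first paper can be sketched as follows; the keyword lists, weights, and the simple additive scoring rule are illustrative assumptions, not the panel's actual coding mechanism.

```python
# Hypothetical sketch: score each incoming tweet against topic keywords
# and an urgency lexicon, then surface only the top-scoring tweets to
# responders.  Keyword lists and weights are invented for illustration.

TOPIC_KEYWORDS = {"flood": 2.0, "evacuate": 3.0, "trapped": 3.0, "bridge": 1.0}
URGENCY_TERMS = {"help": 2.0, "urgent": 2.0, "now": 1.0}

def score_tweet(text):
    """Sum keyword and urgency weights over the tweet's tokens."""
    tokens = text.lower().split()
    topic = sum(TOPIC_KEYWORDS.get(t, 0.0) for t in tokens)
    urgency = sum(URGENCY_TERMS.get(t, 0.0) for t in tokens)
    # Require at least one topic match, so off-topic "urgent" chatter scores zero.
    return topic + urgency if topic > 0 else 0.0

def select_for_responders(tweets, capacity):
    """Return up to `capacity` tweets, highest score first, dropping zero scores."""
    scored = [(score_tweet(t), t) for t in tweets]
    scored = [(s, t) for s, t in scored if s > 0]
    scored.sort(key=lambda st: st[0], reverse=True)
    return [t for _, t in scored[:capacity]]
```

Capping output at a fixed `capacity` mirrors the panel's goal of matching the filtered stream to the expected capacity of emergency responders; the keywords that keep scoring highly could then feed back into the next round of collection.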

Relevance: 10.00%

Abstract:

A large number of methods have been published that aim to evaluate various components of multi-view geometry systems. Most of these have focused on the feature extraction, description and matching stages (the visual front end), since geometry computation can be evaluated through simulation. Many datasets are constrained to small-scale or planar scenes that are not challenging to new algorithms, or require special equipment. This paper presents a method for automatically generating geometry ground truth and challenging test cases from high spatio-temporal resolution video. The objective of the system is to enable data collection at any physical scale, in any location and in various parts of the electromagnetic spectrum. The data generation process consists of collecting high resolution video, computing an accurate sparse 3D reconstruction, video frame culling and downsampling, and test case selection. The evaluation process consists of applying a test 2-view geometry method to every test case and comparing the results to the ground truth. This system facilitates the evaluation of the whole geometry computation process, or any part thereof, against data compatible with a realistic application. A collection of example datasets and evaluations is included to demonstrate the range of applications of the proposed system.
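The evaluation step (comparing a test 2-view geometry method against ground truth) can be illustrated with a standard residual such as the Sampson distance; the metric choice and the toy translation-only geometry below are assumptions, not necessarily the paper's exact protocol.

```python
# Sketch of scoring an estimated fundamental matrix F against
# ground-truth correspondences using the Sampson distance.
import numpy as np

def sampson_error(F, x1, x2):
    """Mean Sampson distance for homogeneous correspondences x1 <-> x2 (3xN)."""
    Fx1 = F @ x1            # epipolar lines in image 2
    Ftx2 = F.T @ x2         # epipolar lines in image 1
    num = np.sum(x2 * Fx1, axis=0) ** 2            # (x2^T F x1)^2 per match
    den = Fx1[0]**2 + Fx1[1]**2 + Ftx2[0]**2 + Ftx2[1]**2
    return float(np.mean(num / den))
```

A test 2-view method would be run on each generated test case and its estimated geometry scored this way against the reconstruction-derived ground truth matches.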

Relevance: 10.00%

Abstract:

Since the inception of the first Joint Registry in Sweden in 1979, many countries, including Finland, Norway, Denmark, Australia, New Zealand, Canada, Scotland, England and Wales, now have more than 10 years' experience and data, and are collecting data on more than 90% of the procedures performed nationally. There are also Joint Registries in Romania, Slovakia, Slovenia, Croatia, Hungary, France, Germany, Switzerland, the Czech Republic, Italy, Austria and Portugal, and work is ongoing to develop a Joint Registry in the US...

Relevance: 10.00%

Abstract:

This paper describes the experimental evaluation of a novel Autonomous Surface Vehicle capable of navigating complex inland water reservoirs and measuring a range of water quality properties and greenhouse gas emissions. The 16 ft long solar-powered catamaran is capable of collecting water column profiles whilst in motion. It is also directly integrated with a reservoir-scale floating sensor network to allow remote mission uploads, data download and adaptive sampling strategies. This paper describes the onboard vehicle navigation and control algorithms as well as obstacle avoidance strategies. Experimental results are shown demonstrating the vehicle's ability to maintain track and avoid obstacles on a variety of large-scale missions and under differing weather conditions, as well as its ability to continuously collect various water quality parameters, complementing traditional manual monitoring campaigns.
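The track-keeping idea can be sketched as a heading command that blends the bearing of the current track leg with a cross-track correction; the gain and controller form below are hypothetical, not the vehicle's actual navigation algorithm.

```python
# Hypothetical track-keeping sketch: steer along the line between two
# waypoints, with a correction proportional to cross-track error.
import math

def desired_heading(pos, wp_prev, wp_next, k_xte=0.5):
    """Heading (radians, x-east/y-north) that follows the track and pulls back toward it."""
    track = math.atan2(wp_next[1] - wp_prev[1], wp_next[0] - wp_prev[0])
    dx, dy = pos[0] - wp_prev[0], pos[1] - wp_prev[1]
    # Signed perpendicular offset of the vehicle from the track line.
    xte = -math.sin(track) * dx + math.cos(track) * dy
    # Steer back toward the line; atan saturates the correction smoothly.
    return track - math.atan(k_xte * xte)
```

A vehicle sitting on the track line receives the track bearing unchanged, while one displaced to the side is commanded back toward the line, which is the behaviour behind "maintaining track" between waypoints.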

Relevance: 10.00%

Abstract:

Environmental monitoring is becoming critical as human activity and climate change place greater pressures on biodiversity, leading to an increasing need for data to make informed decisions. Acoustic sensors can help collect data across large areas for extended periods, making them attractive for environmental monitoring. However, managing and analysing large volumes of environmental acoustic data is a great challenge and consequently hinders the effective utilization of the large datasets collected. This paper presents an overview of our current techniques for collecting, storing and analysing large volumes of acoustic data efficiently, accurately, and cost-effectively.
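One common way to make large acoustic datasets tractable is to reduce each long recording to per-segment summary indices before any detailed analysis; the sketch below uses a simple RMS energy index as a stand-in for whatever richer acoustic indices the authors' pipeline computes.

```python
# Illustrative reduction step (not the authors' pipeline): summarize a
# long recording as one RMS energy value per fixed-length segment.
import numpy as np

def segment_energy_index(samples, rate, segment_seconds=60):
    """RMS energy of each complete `segment_seconds` block of audio samples."""
    seg_len = rate * segment_seconds
    n_segs = len(samples) // seg_len
    return [
        float(np.sqrt(np.mean(samples[i * seg_len:(i + 1) * seg_len] ** 2)))
        for i in range(n_segs)
    ]
```

Storing one value per minute instead of raw audio is what makes browsing and triaging months of recordings feasible; segments with unusual index values can then be pulled up for full-resolution analysis.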

Relevance: 10.00%

Abstract:

Video-stimulated recall interviewing is a research technique in which subjects view a video sequence of their behaviour and are then invited to reflect on their decision-making processes during the videoed event. Despite its popularity, this technique raises methodological issues for researchers, particularly novice researchers in education. The paper reports that while stimulated recall is a valuable technique for investigating decision-making processes in relation to specific events, it does not lend itself to use as a universal research technique. This paper recounts one study in educational research in which the stimulated recall interview, with an adapted version of the SRI procedure, was used successfully as a tool for collecting data.

Relevance: 10.00%

Abstract:

Numerous research studies have evaluated whether distance learning is a viable alternative to traditional learning methods. These studies have generally made use of cross-sectional surveys for collecting data, comparing distance to traditional learners with the intent of validating the former as a viable educational tool. Inherent fundamental differences between traditional and distance learning pedagogies, however, reduce the reliability of these comparative studies and constrain the validity of analyses resulting from this analytical approach. This article presents the results of a research project undertaken to analyze the expectations and experiences of distance learners in their degree programs. Students were given surveys designed to examine factors expected to affect their overall value assessment of their distance learning program. Multivariate statistical analyses were used to analyze the correlations among variables of interest to support hypothesized relationships among them. Focusing on distance learners overcomes some of the limitations of assessments that compare off- and on-campus student experiences. Evaluation and modeling of distance learner responses on the perceived value for money of the distance education they received indicate that the two most important influences are course communication requirements, which had a negative effect, and course logistical simplicity, which had a positive effect. Combined, these two factors accounted for approximately 47% of the variability in the perceived value for money of the sampled students' educational programs. A detailed focus on comparing the expectations and outcomes of distance learners complements the existing literature, which is dominated by comparative studies of distance and non-distance learners.
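The kind of multivariate modeling described (two survey factors jointly explaining a share of the variance in perceived value for money) can be illustrated with ordinary least squares; the data and function below are synthetic, and the study's actual variables, model, and coefficients are not reproduced here.

```python
# Synthetic OLS sketch: fit two predictors plus an intercept and report
# R^2, the share of outcome variance the predictors explain.
import numpy as np

def ols_r_squared(X, y):
    """Least-squares fit of y on X (with intercept); returns (coefficients, R^2)."""
    Xd = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    ss_res = float(resid @ resid)
    ss_tot = float(((y - y.mean()) ** 2).sum())
    return beta, 1.0 - ss_res / ss_tot
```

In the study's terms, a negative coefficient on the communication-requirements factor and a positive one on the logistical-simplicity factor, together explaining about 47% of the variance, would correspond to R² ≈ 0.47 in a fit of this shape.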

Relevance: 10.00%

Abstract:

Within Human-Computer Interaction (HCI) and Computer Supported Cooperative Work (CSCW) research, the notion of technologically-mediated awareness is often used for allowing relevant people to maintain a mental model of each other's activities, behaviors and status information so that they can organize and coordinate work or other joint activities. The initial conceptions of awareness focused largely on improving productivity and efficiency within work environments. With new social, cultural and commercial needs and the emergence of novel computing technologies, the focus of technologically-mediated awareness has extended from work environments to people's everyday interactions. Hence, the scope of awareness has extended from conveying work-related activities to people's emotions, love, social status and a broad range of other aspects. This trend in conceptualizing HCI design is termed experience-focused HCI. In my PhD dissertation, Designing for Awareness, I report on how we, as HCI researchers, can design awareness systems from an experience-focused HCI perspective that follow the trend of conveying awareness beyond task-based, instrumental and productive needs. Within the overall aim to design for awareness, my research advocates ethnomethodologically-informed approaches for conceptualizing and designing for awareness. In this sense, awareness is not a predefined phenomenon but something that is situated and particular to a given environment. I have used this approach in two design cases of developing interactive systems that support awareness beyond task-based aspects in work environments. In both cases, I followed a complete design cycle: collecting an in-situ understanding of an environment, developing implications for a new technology, implementing a prototype technology, and studying the use of the technology in its natural setting.

Relevance: 10.00%

Abstract:

All design classes followed a systematic design approach that, in an abstract way, can be characterized by figure 1. This approach is based on our design approach [1], which we labeled DUTCH (design for users and tasks, from concepts to handles). Consequently, each course starts with collecting, modeling, and analyzing an existing situation. The next step is the development of a vision of a future domain world where new technology and/or new representations have been implemented. This second step is the first tentative global design, which will be represented in scenarios or prototypes and can be assessed. This second design model is based on both the client's requirements and technological possibilities and challenges. In an iterative way, multiple instantiations of detail design may follow, each of which can be assessed and evaluated again...

Relevance: 10.00%

Abstract:

Introduction: This study investigated the sensitivity of calculated stereotactic radiotherapy and radiosurgery doses to the accuracy of the beam data used by the treatment planning system. Methods: Two sets of field output factors were acquired using fields smaller than approximately 1 cm², for inclusion in beam data used by the iPlan treatment planning system (Brainlab, Feldkirchen, Germany). One set of output factors was measured using an Exradin A16 ion chamber (Standard Imaging, Middleton, USA). Although this chamber has a relatively small collecting volume (0.007 cm³), measurements made in small fields using this chamber are subject to the effects of volume averaging, electronic disequilibrium and chamber perturbations. The second, more accurate, set of measurements was obtained by applying perturbation correction factors, calculated using Monte Carlo simulations according to a method recommended by Cranmer-Sargison et al. [1], to measurements made using a 60017 unshielded electron diode (PTW, Freiburg, Germany). A series of 12 sample patient treatments was used to investigate the effects of beam data accuracy on the resulting planned dose. These treatments, which involved 135 fields, were planned for delivery via static conformal arcs and 3DCRT techniques, to targets ranging from prostates (up to 8 cm across) to meningiomas (usually more than 2 cm across) to arteriovenous malformations, acoustic neuromas and brain metastases (often less than 2 cm across). Isocentre doses were calculated for all of these fields using iPlan, and the results of using the two different sets of beam data were evaluated. Results: While the isocentre doses for many fields are identical (difference = 0.0 %), there is a general trend for the doses calculated using the data obtained from corrected diode measurements to exceed the doses calculated using the less accurate Exradin ion chamber measurements (difference > 0.0 %). There are several alarming outliers (circled in Fig. 1) where doses differ by more than 3 %, in beams from sample treatments planned for volumes up to 2 cm across. Discussion and conclusions: These results demonstrate that treatment planning dose calculations for SRT/SRS treatments can be substantially affected when beam data for fields smaller than approximately 1 cm² are measured inaccurately, even when treatment volumes are up to 2 cm across.
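The correction step described in the Methods (applying Monte Carlo-derived perturbation factors to diode readings before forming output factors) can be sketched as follows; all numeric values here are invented for illustration, not measured data.

```python
# Illustrative sketch: a small-field output factor formed from detector
# readings after applying perturbation correction factors.

def corrected_output_factor(reading_field, reading_ref, k_field, k_ref=1.0):
    """Output factor = corrected small-field reading / corrected reference reading."""
    return (reading_field * k_field) / (reading_ref * k_ref)
```

The two beam-data sets compared in the Results differ only in whether such corrections were applied, so a field-by-field percent difference between output factors computed with and without `k_field` is the quantity the study's dose comparisons trace back to.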

Relevance: 10.00%

Abstract:

Purpose: The purpose of this review is to address important methodological issues related to conducting accelerometer-based assessments of physical activity in free-living individuals. Methods: We review the extant scientific literature for empirical information related to the following issues: product selection, number of accelerometers needed, placement of accelerometers, epoch length, and days of monitoring required to estimate habitual physical activity. We also discuss the various options related to distributing and collecting monitors and strategies to enhance compliance with the monitoring protocol. Results: No definitive evidence currently exists to indicate that one make and model of accelerometer is more valid and reliable than another. Selection of accelerometer therefore remains primarily an issue of practicality, technical support, and comparability with other studies. Studies employing multiple accelerometers to estimate energy expenditure report only marginal improvements in explanatory power. Accelerometers are best placed on the hip or the lower back. Although the issue of epoch length has not been studied in adults, the use of count cut points based on 1-min time intervals may be inappropriate in children and may result in underestimation of physical activity. Among adults, 3-5 d of monitoring is required to reliably estimate habitual physical activity. Among children and adolescents, the number of monitoring days required ranges from 4 to 9 d, making it difficult to draw a definitive conclusion for this population. Face-to-face distribution and collection of accelerometers is probably the best option in field-based research, but delivery and return by express carrier or registered mail is a viable option. Conclusion: Accelerometer-based activity assessment requires careful planning and the use of appropriate strategies to increase compliance.
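The epoch-based cut-point classification discussed above can be sketched as follows; the thresholds are placeholder values in the style of published adult 1-min cut points, included only to illustrate the mechanism, not to recommend specific values.

```python
# Sketch of per-epoch cut-point classification of accelerometer counts.
# Thresholds are placeholders, not validated cut points.

CUT_POINTS = [(0, "sedentary"), (100, "light"), (1952, "moderate"), (5725, "vigorous")]

def classify_epoch(counts_per_min):
    """Label a 1-min epoch by the highest cut point its count reaches."""
    label = CUT_POINTS[0][1]
    for threshold, name in CUT_POINTS:
        if counts_per_min >= threshold:
            label = name
    return label
```

The review's caution applies directly here: cut points defined on 1-min epochs can average away children's short activity bursts, so epoch length and thresholds have to be chosen together for the population being monitored.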

Relevance: 10.00%

Abstract:

Design process phases of development, evaluation and implementation were used to create a garment to simultaneously collect reliable data on the speech production and movement intensity of toddlers (18-36 months). A series of prototypes was developed and evaluated that housed accelerometer-based motion sensors and a digital transmitter with a microphone. The approved test garment was a top constructed from loop-faced fabric with interior pockets to house the devices. Extended side panels allowed for sizing. In total, 56 toddlers (28 male; 28 female; 16-36 months of age) participated in the study, providing pilot and baseline data. The test garment was effective in collecting data, as evaluated for accuracy and reliability using ANOVA for accelerometer data, transcription of video for type of movement, and number and length of utterances for speech production. The data collection garment has been implemented in various studies across disciplines.

Relevance: 10.00%

Abstract:

The changing and challenging conditions of the 21st century have been significantly impacting our economy, society, and built and natural environments. Today, the generation of knowledge (mostly in the form of technology and innovation) is seen as a panacea for adaptation to change and the management of challenges (Yigitcanlar, 2010a). Making spaces and places that concentrate on knowledge generation has thus become a priority for many nations (van Winden, 2010). Along with this movement, concepts like knowledge cities and knowledge precincts have been coined for places where the citizenry undertakes a deliberate and systematic initiative to found its development on the identification and sustainable balance of its shared value system, and bases its ability to create wealth on its capacity to generate and leverage its knowledge capabilities (Carrillo, 2006; Yigitcanlar, 2008a). In recent years, the term knowledge precinct (Hu & Chang, 2005), in its most contemporary interpretation, evolved into knowledge community precinct (KCP). A KCP is a mixed-use post-modern urban setting (e.g., flexible, decontextualized, enclaved, fragmented) that includes a critical mass of knowledge enterprises and advanced networked infrastructures, developed with the aim of collecting the benefits of blurring the boundaries between the living, shopping, recreation and working facilities of knowledge workers and their families. KCPs are the critical building blocks of knowledge cities, and thus building successful KCPs contributes significantly to the formation of prosperous knowledge cities. In the literature, this type of development (a place combining economic prosperity, environmental sustainability, a just socio-spatial order and good governance) is referred to as knowledge-based urban development (KBUD). This chapter aims to provide a conceptual understanding of KBUD and its contribution to the building of KCPs that support the formation of prosperous knowledge cities.

Relevance: 10.00%

Abstract:

This e-book is devoted to the use of spreadsheets in the service of education in a broad spectrum of disciplines: science, mathematics, engineering, business, and general education. The effort is aimed at collecting the works of prominent researchers and educators who make use of spreadsheets as a means to communicate concepts with high educational value. The e-book brings some of the most recent applications of spreadsheets in education and research to the fore. To offer the reader a broad overview of the diversity of applications, carefully chosen articles from engineering (power systems and control), mathematics (calculus, differential equations, and probability), science (physics and chemistry), and education are provided. Some of these applications make use of Visual Basic for Applications (VBA), a versatile computer language that further expands the functionality of spreadsheets. The material included in this e-book should inspire readers to devise their own applications and enhance their teaching and/or learning experience.