960 results for Information setting
Abstract:
Big Data presents many challenges related to volume, whether one is interested in studying past datasets or, even more problematically, attempting to work with live streams of data. The most obvious challenge, in a ‘noisy’ environment such as contemporary social media, is to collect the pertinent information; be that information for a specific study, tweets which can inform emergency services or other responders to an ongoing crisis, or information which gives an advantage to those involved in prediction markets. Often, such a process is iterative, with keywords and hashtags changing with the passage of time, and both collection and analytic methodologies need to be continually adapted to respond to this changing information. While many of the datasets collected and analyzed are pre-formed, that is, they are built around a particular keyword, hashtag, or set of authors, they still contain a large volume of information, much of which is unnecessary for the current purpose and/or potentially useful for future projects. Accordingly, this panel considers methods for separating and combining data to optimize big data research and report findings to stakeholders. The first paper considers possible coding mechanisms for incoming tweets during a crisis, taking a large stream of incoming tweets and selecting which of those need to be immediately placed in front of responders for manual filtering and possible action. The paper suggests two solutions for this: content analysis and user profiling. In the former case, aspects of the tweet are assigned a score to assess its likely relationship to the topic at hand and the urgency of the information, whilst the latter attempts to identify those users who are either serving as amplifiers of information or are known as authoritative sources.
Through these techniques, the information contained in a large dataset can be filtered down to match the expected capacity of emergency responders, while knowledge of the core keywords or hashtags relating to the current event is constantly refined for future data collection. The second paper is also concerned with identifying significant tweets, in this case tweets relevant to a particular prediction market: tennis betting. As increasing numbers of professional sportspeople create Twitter accounts to communicate with their fans, information is being shared regarding injuries, form, and emotions which has the potential to affect future results. As has already been demonstrated with leading US sports, such information is extremely valuable. Tennis, like American Football (NFL) and Baseball (MLB), has paid subscription services which manually filter incoming news sources, including tweets, for information valuable to gamblers, gambling operators, and fantasy sports players. However, whilst such services remain niche operations, much of the value of the information is lost by the time it reaches one of these services. The paper thus considers how information could be filtered from Twitter user lists and hashtag or keyword monitoring, assessing the value of the source, the information, and the prediction markets to which it may relate. The third paper examines methods for collecting Twitter data and following changes in an ongoing, dynamic social movement, such as the Occupy Wall Street movement. It involves the development of technical infrastructure to collect the tweets and make them available for exploration and analysis. A strategy to respond to changes in the social movement is also required, or the resulting tweets will only reflect the discussions and strategies the movement used at the time the keyword list was created; in this sense, keyword creation is part strategy and part art.
In this paper we describe strategies for the creation of a social media archive, specifically tweets related to the Occupy Wall Street movement, and methods for continuing to adapt data collection strategies as the movement’s presence on Twitter changes over time. We also discuss the opportunities and methods for extracting smaller slices of data from an archive of social media data to support a multitude of research projects in multiple fields of study. The common theme amongst these papers is that of constructing a dataset, filtering it for a specific purpose, and then using the resulting information to aid future data collection. The intention is that, through the papers presented and subsequent discussion, the panel will inform the wider research community not only on the objectives and limitations of data collection, live analytics, and filtering, but also on current and in-development methodologies that could be adopted by those working with such datasets, and how such approaches could be customized depending on the project stakeholders.
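The content-analysis triage idea in the first paper can be illustrated with a minimal sketch: each incoming tweet is scored for topical relevance and urgency, and only the top-scoring tweets, up to responder capacity, are surfaced for manual review. Everything here is an assumption for illustration; the keyword lists, weights, and capacity threshold are invented and are not the panel's actual scoring scheme.

```python
# Hypothetical triage sketch: score tweets for relevance and urgency,
# then surface only as many as responders can handle. All term lists
# and weights below are invented for illustration.

TOPIC_TERMS = {"flood": 2.0, "evacuate": 3.0, "trapped": 4.0, "help": 1.5}
URGENCY_TERMS = {"now", "urgent", "immediately", "emergency"}

def score_tweet(text: str) -> float:
    """Combine topical relevance with an urgency bonus."""
    words = text.lower().split()
    relevance = sum(TOPIC_TERMS.get(w, 0.0) for w in words)
    urgency = sum(1.0 for w in words if w in URGENCY_TERMS)
    return relevance + 2.0 * urgency

def triage(tweets, capacity: int):
    """Return the `capacity` highest-scoring tweets for manual review."""
    ranked = sorted(tweets, key=score_tweet, reverse=True)
    return ranked[:capacity]

tweets = [
    "lovely weather today",
    "family trapped on roof, please help now",
    "flood waters rising, evacuate the east side immediately",
]
print(triage(tweets, capacity=2))
```

In a real deployment the term weights themselves would be refined iteratively as the event's vocabulary shifts, which is the feedback loop the panel describes.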
Abstract:
Disagreement within the global science community about the certainty and causes of climate change has led the general public to question what to believe and who to trust on matters related to this issue. This paper reports on qualitative research undertaken with Australian residents from two rural areas to explore their perceptions of climate change and trust in information providers. While overall, residents tended to agree that climate change is a reality, perceptions varied in terms of its causes and how best to address it. Politicians, government, and the media were described as untrustworthy sources of information about climate change, with independent scientists being the most trusted. The vested interests of information providers appeared to be a key reason for their distrust. The findings highlight the importance of improved transparency and consultation with the public when communicating information about climate change and related policies.
Abstract:
This study explored individual, social, and built environmental attributes in and outside of the retirement village setting and associations with various active living outcomes including objectively measured physical activity, specific walking behaviors, and social participation. Residents in Perth, Australia (N = 323), were surveyed on environmental perceptions of the village and surrounding neighborhood, self-reported physical activity, and demographic characteristics and wore accelerometers. Managers (N = 32) were surveyed on village characteristics, and objective neighborhood measures were generated in a Geographic Information System (GIS). Results indicated that built- and social-environmental attributes within and outside of retirement villages were associated with active living among residents; however, salient attributes varied depending on the specific outcome considered. Findings suggest that locating villages close to destinations is important for walking and that locating them close to previous and familiar neighborhoods is important for social participation. Further understanding and consideration of retirement village designs that promote both walking and social participation are needed.
Abstract:
The overall aim of this research project was to provide a broader range of value propositions (beyond upfront traditional construction costs) that could transform both the demand side and supply side of the housing industry. The project involved gathering information about how building information is created, used, and communicated, and classifying building information, leading to the formation of an Information Flow Chart and Stakeholder Relationship Map. These were then tested via broad housing industry focus groups and surveys. The project revealed four key relationships that appear to operate in isolation from the wider housing sector and may have a significant impact on the sustainability outcomes and life cycle costs of dwellings. It also found that although a lot of information about individual dwellings does already exist, this information is not coordinated or inventoried in any systematic manner, and that national building information files or building passports would present value to a wide range of stakeholders.
Abstract:
Many mature term-based or pattern-based approaches have been used in the field of information filtering to generate users’ information needs from a collection of documents. A fundamental assumption of these approaches is that the documents in the collection are all about one topic. However, in reality users’ interests can be diverse and the documents in the collection often involve multiple topics. Topic modelling, such as Latent Dirichlet Allocation (LDA), was proposed to generate statistical models representing multiple topics in a collection of documents, and has been widely utilized in fields such as machine learning and information retrieval; however, its effectiveness in information filtering has not been well explored. Patterns are generally thought to be more discriminative than single terms for describing documents, but the enormous number of discovered patterns hinders their effective and efficient use in real applications. Selecting the most discriminative and representative patterns from the huge number discovered therefore becomes crucial. To address these limitations, this paper proposes a novel information filtering model, the Maximum matched Pattern-based Topic Model (MPBTM). The main distinctive features of the proposed model are: (1) user information needs are generated in terms of multiple topics; (2) each topic is represented by patterns; (3) patterns are generated from topic models and organized in terms of their statistical and taxonomic features; and (4) the most discriminative and representative patterns, called Maximum Matched Patterns, are used to estimate document relevance to the user’s information needs in order to filter out irrelevant documents. Extensive experiments were conducted to evaluate the effectiveness of the proposed model using the TREC data collection Reuters Corpus Volume 1. The results show that the proposed model significantly outperforms both state-of-the-art term-based models and pattern-based models.
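As a rough illustration of the "maximum matched pattern" idea (not the authors' actual MPBTM algorithm), one can represent each topic by weighted term patterns and score a document by the best-weighted pattern fully contained in it. All topic names, patterns, and weights below are invented for illustration.

```python
# Illustrative sketch only (not the paper's MPBTM): topics hold weighted
# term patterns; a document's relevance is the weight of the best pattern
# that fully matches it. Patterns and weights are invented.

topics = {
    "sport": [({"match", "score"}, 0.6), ({"match", "score", "injury"}, 0.9)],
    "finance": [({"stock", "price"}, 0.7)],
}

def relevance(doc_terms: set, topics: dict) -> float:
    """Score a document by its maximum matched pattern across all topics."""
    best = 0.0
    for patterns in topics.values():
        for pattern, weight in patterns:
            if pattern <= doc_terms:  # pattern fully contained in document
                best = max(best, weight)
    return best

doc = {"match", "score", "injury", "tennis"}
print(relevance(doc, topics))  # the longer, more specific pattern wins: 0.9
```

The point of the sketch is the selection principle: among all patterns that match, only the most specific (highest-weighted) one determines the relevance estimate, rather than summing over every matching pattern.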
Abstract:
The integration of separate, yet complementary, cortical pathways appears to play a role in visual perception and action when intercepting objects. The ventral system is responsible for object recognition and identification, while the dorsal system facilitates continuous regulation of action. This dual-system model implies that empirically manipulating different visual information sources during performance of an interceptive action might lead to the emergence of distinct gaze and movement pattern profiles. To test this idea, we recorded hand kinematics and eye movements of participants as they attempted to catch balls projected from a novel apparatus that synchronised or de-synchronised accompanying video images of a throwing action and ball trajectory. Results revealed that ball catching performance was less successful when patterns of hand movements and gaze behaviours were constrained by the absence of advanced perceptual information from the thrower's actions. Under these task constraints, participants began tracking the ball later, followed less of its trajectory, and adapted their actions by initiating movements later and moving the hand faster. There were no performance differences when the throwing action image and ball speed were synchronised or de-synchronised, since hand movements were closely linked to information from the ball trajectory. Results are interpreted relative to the two-visual-system hypothesis, demonstrating that accurate interception requires integration of advanced visual information from the kinematics of the throwing action and from the ball flight trajectory.
Abstract:
This study aimed to determine if systematic variation of the diagnostic terminology embedded within written discharge information (i.e., concussion or mild traumatic brain injury, mTBI) would produce different expected symptoms and illness perceptions. We hypothesized that compared to concussion advice, mTBI advice would be associated with worse outcomes. Sixty-two volunteers with no history of brain injury or neurological disease were randomly allocated to one of two conditions in which they read a mTBI vignette followed by information that varied only by use of the embedded terms concussion (n = 28) or mTBI (n = 34). Both groups reported illness perceptions (timeline and consequences subscale of the Illness Perception Questionnaire-Revised) and expected Postconcussion Syndrome (PCS) symptoms 6 months post-injury (Neurobehavioral Symptom Inventory, NSI). Statistically significant group differences due to terminology were found on selected NSI scores (i.e., total, cognitive, and sensory symptom cluster scores; concussion > mTBI), but there was no effect of terminology on illness perception. When embedded in discharge advice, diagnostic terminology affects some but not all expected outcomes. Given that such expectations are a known contributor to poor mTBI outcome, clinicians should consider the potential impact of varied terminology on their patients.
Abstract:
Object classification is plagued by the issue of session variation. Session variation describes any variation that makes one instance of an object look different from another, for instance due to pose or illumination variation. Recent work in the challenging task of face verification has shown that session variability modelling provides a mechanism to overcome some of these limitations. However, for computer vision purposes, it has only been applied in the limited setting of face verification. In this paper we propose a local region-based intersession variability (ISV) modelling approach, termed Local ISV, so that local session variations can be modelled, and apply it to challenging real-world data. We demonstrate the efficacy of this technique on a challenging real-world fish image database which includes images taken underwater, providing significant real-world session variations. The Local ISV approach provides a relative performance improvement of, on average, 23% on the challenging MOBIO, Multi-PIE and SCface face databases. It also provides a relative performance improvement of 35% on our challenging fish image dataset.
Abstract:
Genomic sequences are fundamentally text documents, admitting various representations according to need and tokenization. Gene expression depends crucially on binding of enzymes to the DNA sequence at small, poorly conserved binding sites, limiting the utility of standard pattern search. However, one may exploit the regular syntactic structure of the enzyme's component proteins and the corresponding binding sites, framing the problem as one of detecting grammatically correct genomic phrases. In this paper we propose new kernels based on weighted tree structures, traversing the paths within them to capture the features which underpin the task. Experimentally, we find that these kernels provide performance comparable with state-of-the-art approaches for this problem, while offering significant computational advantages over earlier methods. The methods proposed may be applied to a broad range of sequence or tree-structured data in molecular biology and other domains.
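A minimal sketch of a path-based tree kernel in this spirit (not the paper's exact weighted-tree kernel): each tree is summarised by its weighted root-to-node label paths, and the kernel is the inner product of the resulting path counts. The tree encoding and the depth-decay weighting are assumptions made for illustration.

```python
# Illustrative path-based tree kernel (assumed encoding and weights, not
# the paper's formulation). A tree is (label, [children]); every
# root-to-node label path contributes a weight that decays with depth.

from collections import Counter

def paths(tree, prefix=(), decay=0.5, depth=0):
    """Yield (label_path, weight) for every root-to-node path in the tree."""
    label, children = tree
    path = prefix + (label,)
    yield path, decay ** depth
    for child in children:
        yield from paths(child, path, decay, depth + 1)

def path_kernel(t1, t2) -> float:
    """Inner product of the two trees' weighted path counts."""
    c1, c2 = Counter(), Counter()
    for p, w in paths(t1):
        c1[p] += w
    for p, w in paths(t2):
        c2[p] += w
    return sum(c1[p] * c2[p] for p in c1.keys() & c2.keys())

t1 = ("S", [("NP", []), ("VP", [("V", [])])])
t2 = ("S", [("NP", []), ("VP", [])])
print(path_kernel(t1, t2))
```

Because only shared paths contribute, the kernel can be computed from sorted path lists of the two trees, which is the kind of computational advantage path-traversal kernels offer over all-subtree comparisons.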
Abstract:
In the evolving knowledge societies of today, some people are overloaded with information while others are starved for it. Everywhere, people are yearning to freely express themselves, to actively participate in governance processes and cultural exchanges. Universally, there is a deep thirst to understand the complex world around us. Media and Information Literacy (MIL) is a basis for enhancing access to information and knowledge, freedom of expression, and quality education. It describes the skills and attitudes that are needed to value the functions of media and other information providers, including those on the Internet, in societies, and to find, evaluate, and produce information and media content; in other words, it covers the competencies that are vital for people to be effectively engaged in all aspects of development.
Abstract:
Our study investigates the quality of firms’ continuous disclosure compliance during mandatory continuous disclosure reform, and whether compliance quality is affected by corporate governance, using the New Zealand market as the setting. We use a novel coding of different categories of disclosures (non-routine, non-procedural, and internal), which represents the extent of proprietary insider information inherent in disclosures, to evaluate firms’ compliance quality. Our findings provide evidence that firms’ compliance quality improved after the reform, and that this improvement is inconsistently impacted by corporate governance. Our findings provide important implications for regulators in their quest for a superior disclosure regime.
Abstract:
This article content-analyzes music in tourism TV commercials from 95 regions and countries to identify their general acoustic characteristics. The objective is to offer a general guideline for the postproduction of tourism TV commercials. It is found that tourism TV commercials tend to be produced at a faster tempo, with beats per minute close to 120, which is rarely found in general TV commercials. To compensate for the faster tempo (increased aural information load), fewer scenes (longer duration per scene) were edited into the footage. Production recommendations and future research are presented.
Abstract:
Introduction: This paper reports on university students' experiences of learning information literacy.
Method: Phenomenography was selected as the research approach as it describes the experience from the perspective of the study participants, which in this case is a mixture of undergraduate and postgraduate students studying education at an Australian university. Semi-structured, one-on-one interviews were conducted with fifteen students.
Analysis: The interview transcripts were iteratively reviewed for similarities and differences in students' experiences of learning information literacy. Categories were constructed from an analysis of the distinct features of the experiences that students reported. The categories were grouped into a hierarchical structure that represents students' increasingly sophisticated experiences of learning information literacy.
Results: The study reveals that students experience learning information literacy in six ways: learning to find information; learning a process to use information; learning to use information to create a product; learning to use information to build a personal knowledge base; learning to use information to advance disciplinary knowledge; and learning to use information to grow as a person and to contribute to others.
Conclusions: Understanding the complexity of the concept of information literacy, and the collective and diverse range of ways students experience learning information literacy, enables academics and librarians to draw on the range of experiences reported by students to design academic curricula and information literacy education that targets more powerful ways of learning to find and use information.
Abstract:
Introduction: In a connected world youth are participating in digital content creating communities. This paper introduces a description of teens' information practices in digital content creating and sharing communities.
Method: The research design was a constructivist grounded theory methodology. Seventeen interviews with eleven teens were collected and observation of their digital communities occurred over a two-year period.
Analysis: The data were analysed iteratively to describe teens' interactions with information through open and then focused coding. Emergent categories were shared with participants to confirm conceptual categories. Focused coding provided connections between conceptual categories resulting in the theory, which was also shared with participants for feedback.
Results: The paper posits a substantive theory of teens' information practices as they create and share content. It highlights that teens engage in the information actions of accessing, evaluating, and using information. They experienced information in five ways: participation, information, collaboration, process, and artefact. The intersection of enacting information actions and experiences of information resulted in five information practices: learning community, negotiating aesthetic, negotiating control, negotiating capacity, and representing knowledge.
Conclusion: This study contributes to our understanding of youth information actions, experiences, and practices. Further research into these communities might indicate what information practices are foundational to digital communities.
Abstract:
Term-based approaches can extract many features from text documents, but most include noise. Many popular text-mining strategies have been adapted to reduce noisy information in extracted features; however, text-mining techniques still suffer from the low-frequency problem. The key issue is how to discover relevance features in text documents that fulfil user information needs. To address this issue, we propose a new method to extract specific features from user relevance feedback. The proposed approach includes two stages. The first stage extracts topics (or patterns) from text documents to focus on interesting topics. In the second stage, topics are deployed to lower-level terms to address the low-frequency problem and find specific terms. The specific terms are determined based on their appearances in relevance feedback and their distribution in topics or high-level patterns. We test our proposed method with extensive experiments on the Reuters Corpus Volume 1 dataset and TREC topics. Results show that our proposed approach significantly outperforms the state-of-the-art models.
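The two-stage idea can be sketched as follows. This is a hedged illustration, not the authors' exact method: stage 1 here simply keeps terms frequent across the relevant-feedback documents as a coarse "topic", and stage 2 deploys that topic to individual terms, weighting each by how concentrated it is in relevant feedback versus the whole collection. The sample documents and thresholds are invented.

```python
# Hedged sketch of a two-stage specific-term extraction (assumed logic,
# not the paper's algorithm). Sample docs and thresholds are invented.

from collections import Counter

def stage1_topics(relevant_docs, min_df=2):
    """Stage 1: keep terms appearing in at least `min_df` relevant docs."""
    df = Counter()
    for doc in relevant_docs:
        df.update(set(doc))
    return {t for t, n in df.items() if n >= min_df}

def stage2_specific_terms(topic_terms, relevant_docs, all_docs, top_k=3):
    """Stage 2: rank topic terms by relevant-vs-collection frequency ratio."""
    rel = Counter(t for doc in relevant_docs for t in doc if t in topic_terms)
    tot = Counter(t for doc in all_docs for t in doc if t in topic_terms)
    weights = {t: rel[t] / tot[t] for t in rel}
    return sorted(weights, key=weights.get, reverse=True)[:top_k]

relevant = [["grand", "slam", "tennis"], ["tennis", "grand", "final"]]
all_docs = relevant + [["grand", "opening", "sale"]]

topic = stage1_topics(relevant)
print(stage2_specific_terms(topic, relevant, all_docs))
```

The ratio in stage 2 is what pushes a topic down to specific terms: "tennis" occurs only in relevant feedback and ranks first, while "grand" also appears in an irrelevant document and is down-weighted.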