775 results for Schwarz Information Criterion
Abstract:
The need for native Information Systems (IS) theories has been discussed by several prominent scholars. Contributing to this ongoing discussion, this research moves towards theorizing IS success as a native theory for the discipline. Despite being one of the most cited scholarly works to date, the IS success model of DeLone and McLean (1992) has been criticized by some for its limited theoretical focus. Following theory development frameworks, this study improves the theoretical standing of IS success by minimizing interaction and inconsistency. The empirical investigation of theorizing IS success includes 1,396 respondents, gathered through six surveys and a case study. The respondents represent 70 organizations, multiple Information Systems, and both private and public sector organizations.
Abstract:
Control Theory has provided a useful theoretical foundation for Information Systems development outsourcing (ISD-outsourcing), allowing examination of the co-ordination between the client and the vendor. Recent research identified two control mechanisms: structural (the structure of the control mode) and process (the process through which the control mode is enacted). Yet Control Theory research to date does not describe the ways in which the two control mechanisms can be combined to ensure project success. Grounded in case study data from eight ISD-outsourcing projects, we derive three 'control configurations': i) aligned, ii) negotiated, and iii) self-managed, which describe the combinative patterns of structural and process control mechanisms within and across control modes.
Abstract:
This study explored the creation, dissemination and exchange of electronic word of mouth, in the form of product reviews and ratings of digital technology products. Based on 43 in-depth interviews and 500 responses to an online survey, it reveals a new communication model describing consumers' info-active and info-passive information search styles. The study delivers an in-depth understanding of consumers' attitudes towards current advertising tools and user-generated content, and points to new marketing techniques emerging in the online environment.
Abstract:
Most recommender systems use collaborative filtering, content-based filtering, or a hybrid approach to recommend items to new users. Collaborative filtering recommends items to new users based on their similar neighbours, while content-based filtering tries to recommend items that are similar to new users' profiles. The fundamental issues include how to profile new users and how to deal with over-specialization in content-based recommender systems. The terms used to describe items can be organized as a concept hierarchy; we therefore aim to describe user profiles, or information needs, using concept vectors. This paper presents a new method to acquire user information needs, which allows new users to describe their preferences on a concept hierarchy rather than by rating items. It also develops a new ranking function to recommend items to new users based on their information needs. The proposed approach is evaluated on Amazon book datasets. The experimental results demonstrate that the proposed approach can substantially improve the effectiveness of recommender systems.
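The abstract does not give the paper's actual ranking function; as a purely illustrative sketch (the data, weights, and function names below are hypothetical), matching a new user's concept vector against item concept vectors with cosine similarity might look like:

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse concept vectors (dicts of concept -> weight)."""
    dot = sum(w * v.get(c, 0.0) for c, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def recommend(user_concepts, items, k=2):
    """Rank items by the similarity of their concept vectors to the user's
    stated information needs (concept preferences), highest first."""
    scored = [(cosine(user_concepts, vec), name) for name, vec in items.items()]
    scored.sort(reverse=True)
    return [name for _, name in scored[:k]]

# A new user describes preferences over concepts rather than rating items.
user = {"machine-learning": 1.0, "python": 0.5}
items = {
    "Hands-On ML": {"machine-learning": 0.9, "python": 0.8},
    "Cookbook": {"python": 1.0, "cooking": 0.2},
    "Art History": {"history": 1.0},
}
print(recommend(user, items, k=2))  # → ['Hands-On ML', 'Cookbook']
```

A real concept hierarchy would additionally let ancestor concepts contribute partial weight to matching, which a flat dictionary like this does not capture.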
Abstract:
Big Data presents many challenges related to volume, whether one is interested in studying past datasets or, even more problematically, attempting to work with live streams of data. The most obvious challenge, in a 'noisy' environment such as contemporary social media, is to collect the pertinent information: information for a specific study, tweets which can inform emergency services or other responders to an ongoing crisis, or information that gives an advantage to those involved in prediction markets. Often such a process is iterative, with keywords and hashtags changing with the passage of time, and both collection and analytic methodologies need to be continually adapted to respond to this changing information. While many of the datasets collected and analyzed are preformed, that is, built around a particular keyword, hashtag, or set of authors, they still contain a large volume of information, much of which is unnecessary for the current purpose and/or potentially useful for future projects. Accordingly, this panel considers methods for separating and combining data to optimize big data research and report findings to stakeholders. The first paper considers possible coding mechanisms for incoming tweets during a crisis, taking a large stream of incoming tweets and selecting which of those need to be immediately placed in front of responders for manual filtering and possible action. The paper suggests two solutions for this: content analysis and user profiling. In the former case, aspects of the tweet are assigned a score to assess its likely relationship to the topic at hand and the urgency of the information, whilst the latter attempts to identify those users who are either serving as amplifiers of information or are known as authoritative sources. Through these techniques, the information contained in a large dataset can be filtered down to match the expected capacity of emergency responders, and knowledge of the core keywords or hashtags relating to the current event is constantly refined for future data collection. The second paper is also concerned with identifying significant tweets, but in this case tweets relevant to a particular prediction market: tennis betting. As increasing numbers of professional sportsmen and sportswomen create Twitter accounts to communicate with their fans, information is being shared regarding injuries, form, and emotions which has the potential to impact future results. As has already been demonstrated with leading US sports, such information is extremely valuable. Tennis, as with American Football (NFL) and Baseball (MLB), has paid subscription services which manually filter incoming news sources, including tweets, for information valuable to gamblers, gambling operators, and fantasy sports players. However, whilst such services remain niche operations, much of the value of the information is lost by the time it reaches one of these services. The paper thus considers how information could be filtered from Twitter user lists and hashtag or keyword monitoring, assessing the value of the source, the information, and the prediction markets to which it may relate. The third paper examines methods for collecting Twitter data and following changes in an ongoing, dynamic social movement, such as the Occupy Wall Street movement. It involves the development of technical infrastructure to collect the tweets and make them available for exploration and analysis. A strategy to respond to changes in the social movement is also required, or the resulting tweets will only reflect the discussions and strategies the movement used at the time the keyword list was created; in a way, keyword creation is part strategy and part art. In this paper we describe strategies for the creation of a social media archive, specifically tweets related to the Occupy Wall Street movement, and methods for continuing to adapt data collection strategies as the movement's presence on Twitter changes over time. We also discuss the opportunities and methods to extract smaller slices of data from an archive of social media data to support a multitude of research projects in multiple fields of study. The common theme amongst these papers is that of constructing a dataset, filtering it for a specific purpose, and then using the resulting information to aid future data collection. The intention is that, through the papers presented and subsequent discussion, the panel will inform the wider research community not only on the objectives and limitations of data collection, live analytics, and filtering, but also on current and in-development methodologies that could be adopted by those working with such datasets, and how such approaches could be customized depending on the project stakeholders.
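As an illustration of the kind of content-analysis triage the first paper describes (the scoring weights, keywords, and accounts below are invented for the example, not the panel's actual scheme), tweets can be scored for topic relevance, urgency, and author authority, and only high-scoring ones passed on for human review:

```python
def triage(tweets, topic_terms, urgent_terms, authorities, threshold=2):
    """Score each (author, text) tweet: +1 per topic keyword,
    +2 per urgency keyword, +2 if the author is a known authority.
    Keep tweets at or above the threshold, highest score first."""
    keep = []
    for author, text in tweets:
        words = set(text.lower().split())
        score = len(words & topic_terms)          # topical relevance
        score += 2 * len(words & urgent_terms)    # urgency signals
        if author in authorities:                 # authoritative source
            score += 2
        if score >= threshold:
            keep.append((score, author, text))
    return sorted(keep, reverse=True)

tweets = [
    ("@resident", "flood water rising fast help needed"),
    ("@ses_official", "flood warning issued for the river"),
    ("@foodblog", "great pancakes this morning"),
]
topic = {"flood", "river", "water"}
urgent = {"help", "trapped", "emergency"}
authorities = {"@ses_official"}
print(triage(tweets, topic, urgent, authorities))
```

In practice the keyword lists themselves would be refined iteratively as the event unfolds, which is exactly the feedback loop the panel discusses.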
Abstract:
Disagreement within the global science community about the certainty and causes of climate change has led the general public to question what to believe and who to trust on matters related to this issue. This paper reports on qualitative research undertaken with Australian residents from two rural areas to explore their perceptions of climate change and trust in information providers. While overall, residents tended to agree that climate change is a reality, perceptions varied in terms of its causes and how best to address it. Politicians, government, and the media were described as untrustworthy sources of information about climate change, with independent scientists being the most trusted. The vested interests of information providers appeared to be a key reason for their distrust. The findings highlight the importance of improved transparency and consultation with the public when communicating information about climate change and related policies.
Abstract:
The overall aim of this research project was to provide a broader range of value propositions (beyond upfront traditional construction costs) that could transform both the demand and supply sides of the housing industry. The project involved gathering information about how building information is created, used, and communicated, and classifying building information, leading to the formation of an Information Flow Chart and a Stakeholder Relationship Map. These were then tested via broad housing industry focus groups and surveys. The project revealed four key relationships that appear to operate in isolation from the wider housing sector and may have a significant impact on the sustainability outcomes and life cycle costs of dwellings over their life cycle. It also found that although a lot of information about individual dwellings already exists, this information is not coordinated or inventoried in any systematic manner, and that national building information files or building passports would present value to a wide range of stakeholders.
Abstract:
Many mature term-based or pattern-based approaches have been used in the field of information filtering to generate users' information needs from a collection of documents. A fundamental assumption of these approaches is that the documents in the collection are all about one topic. In reality, however, users' interests can be diverse and the documents in the collection often involve multiple topics. Topic modelling, such as Latent Dirichlet Allocation (LDA), was proposed to generate statistical models representing multiple topics in a collection of documents, and has been widely utilized in fields such as machine learning and information retrieval, but its effectiveness in information filtering has not been well explored. Patterns are generally thought to be more discriminative than single terms for describing documents. However, the enormous number of discovered patterns hinders their effective and efficient use in real applications; selecting the most discriminative and representative patterns from this huge set therefore becomes crucial. To deal with these limitations, this paper proposes a novel information filtering model, the Maximum matched Pattern-based Topic Model (MPBTM). The main distinctive features of the proposed model include: (1) user information needs are generated in terms of multiple topics; (2) each topic is represented by patterns; (3) patterns are generated from topic models and organized in terms of their statistical and taxonomic features; and (4) the most discriminative and representative patterns, called Maximum Matched Patterns, are used to estimate document relevance to the user's information needs in order to filter out irrelevant documents. Extensive experiments are conducted to evaluate the effectiveness of the proposed model using the TREC data collection Reuters Corpus Volume 1. The results show that the proposed model significantly outperforms both state-of-the-art term-based models and pattern-based models.
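The MPBTM's actual scoring combines topic models with statistical and taxonomic pattern features; as a much-simplified, hypothetical sketch (the patterns, weights, and function names below are invented for illustration), document relevance can be estimated from the largest pattern of each topic that the document fully matches, weighted by the topic's share of the user's interests:

```python
def matched_patterns(doc_terms, patterns):
    """Return the patterns (term sets) fully contained in the document."""
    return [p for p in patterns if p <= doc_terms]

def relevance(doc_terms, topic_patterns, topic_weights):
    """Score a document: for each topic, take the size of its largest
    (maximum) matched pattern, weighted by the topic's share of the
    user's information needs. Unmatched topics contribute nothing."""
    score = 0.0
    for topic, patterns in topic_patterns.items():
        matched = matched_patterns(doc_terms, patterns)
        if matched:
            score += topic_weights[topic] * max(len(p) for p in matched)
    return score

# Hypothetical per-topic patterns mined from a training collection.
topic_patterns = {
    "sports": [{"match", "score"}, {"team", "win", "league"}],
    "finance": [{"stock", "price"}],
}
weights = {"sports": 0.7, "finance": 0.3}  # user's topic distribution
doc = {"team", "win", "league", "match", "crowd"}
print(relevance(doc, topic_patterns, weights))
```

Documents whose score falls below a threshold would then be filtered out as irrelevant to the user's information needs.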
Abstract:
The integration of separate, yet complementary, cortical pathways appears to play a role in visual perception and action when intercepting objects. The ventral system is responsible for object recognition and identification, while the dorsal system facilitates continuous regulation of action. This dual-system model implies that empirically manipulating different visual information sources during performance of an interceptive action might lead to the emergence of distinct gaze and movement pattern profiles. To test this idea, we recorded hand kinematics and eye movements of participants as they attempted to catch balls projected from a novel apparatus that synchronised or de-synchronised accompanying video images of a throwing action and ball trajectory. Results revealed that ball catching performance was less successful when patterns of hand movements and gaze behaviours were constrained by the absence of advance perceptual information from the thrower's actions. Under these task constraints, participants began tracking the ball later, followed less of its trajectory, and adapted their actions by initiating movements later and moving the hand faster. There were no performance differences when the throwing action image and ball speed were synchronised or de-synchronised, since hand movements were closely linked to information from the ball trajectory. Results are interpreted relative to the two-visual-system hypothesis, demonstrating that accurate interception requires integration of advance visual information from the kinematics of the throwing action and from the ball flight trajectory.
Abstract:
This study aimed to determine whether systematic variation of the diagnostic terminology embedded within written discharge information (i.e., concussion or mild traumatic brain injury, mTBI) would produce different expected symptoms and illness perceptions. We hypothesized that, compared to concussion advice, mTBI advice would be associated with worse outcomes. Sixty-two volunteers with no history of brain injury or neurological disease were randomly allocated to one of two conditions in which they read an mTBI vignette followed by information that varied only in its use of the embedded terms concussion (n = 28) or mTBI (n = 34). Both groups reported illness perceptions (timeline and consequences subscales of the Illness Perception Questionnaire-Revised) and expected Postconcussion Syndrome (PCS) symptoms 6 months post-injury (Neurobehavioral Symptom Inventory, NSI). Statistically significant group differences due to terminology were found on selected NSI scores (i.e., total, cognitive, and sensory symptom cluster scores; concussion > mTBI), but there was no effect of terminology on illness perception. When embedded in discharge advice, diagnostic terminology affects some, but not all, expected outcomes. Given that such expectations are a known contributor to poor mTBI outcome, clinicians should consider the potential impact of varied terminology on their patients.
Abstract:
Genomic sequences are fundamentally text documents, admitting various representations according to need and tokenization. Gene expression depends crucially on the binding of enzymes to the DNA sequence at small, poorly conserved binding sites, limiting the utility of standard pattern search. However, one may exploit the regular syntactic structure of the enzyme's component proteins and the corresponding binding sites, framing the problem as one of detecting grammatically correct genomic phrases. In this paper we propose new kernels based on weighted tree structures, traversing the paths within them to capture the features which underpin the task. Experimentally, we find that these kernels provide performance comparable with state-of-the-art approaches for this problem, while offering significant computational advantages over earlier methods. The methods proposed may be applied to a broad range of sequence or tree-structured data in molecular biology and other domains.
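The paper's kernels are not specified in the abstract; as a hypothetical illustration of a path-based tree kernel (not the authors' construction), one can count the root-to-leaf label paths shared by two trees. This is a valid kernel, since it is an inner product of binary path-indicator vectors:

```python
def root_to_leaf_paths(tree, prefix=()):
    """Enumerate root-to-leaf label paths of a nested-dict tree,
    where each node maps a label to its (possibly empty) child dict."""
    paths = []
    for label, children in tree.items():
        path = prefix + (label,)
        if children:
            paths.extend(root_to_leaf_paths(children, path))
        else:
            paths.append(path)  # leaf: record the full path
    return paths

def path_kernel(t1, t2):
    """Kernel value: number of root-to-leaf paths common to both trees."""
    return len(set(root_to_leaf_paths(t1)) & set(root_to_leaf_paths(t2)))

# Two toy syntax trees sharing two of their three root-to-leaf paths.
t1 = {"S": {"NP": {"det": {}, "noun": {}}, "VP": {"verb": {}}}}
t2 = {"S": {"NP": {"det": {}, "adj": {}}, "VP": {"verb": {}}}}
print(path_kernel(t1, t2))  # → 2
```

A weighted variant, closer in spirit to the paper, would sum per-path weights over the shared paths rather than counting them.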
Abstract:
In the evolving knowledge societies of today, some people are overloaded with information while others are starved of it. Everywhere, people are yearning to freely express themselves and to actively participate in governance processes and cultural exchanges. Universally, there is a deep thirst to understand the complex world around us. Media and Information Literacy (MIL) is a basis for enhancing access to information and knowledge, freedom of expression, and quality education. It describes the skills and attitudes that are needed to value the functions of media and other information providers, including those on the Internet, in societies, and to find, evaluate, and produce information and media content; in other words, it covers the competencies that are vital for people to be effectively engaged in all aspects of development.
Abstract:
This article presents a content analysis of music in tourism TV commercials from 95 regions and countries to identify their general acoustic characteristics. The objective is to offer a general guideline for the postproduction of tourism TV commercials. It is found that tourism TV commercials tend to be produced at a faster tempo, with beats per minute close to 120, which is rarely found in general TV commercials. To compensate for the faster tempo (an increased aural information load), fewer scenes (with a longer duration per scene) were edited into the footage. Production recommendations and directions for future research are presented.
Abstract:
Introduction This paper reports on university students' experiences of learning information literacy. Method Phenomenography was selected as the research approach as it describes the experience from the perspective of the study participants, which in this case is a mixture of undergraduate and postgraduate students studying education at an Australian university. Semi-structured, one-on-one interviews were conducted with fifteen students. Analysis The interview transcripts were iteratively reviewed for similarities and differences in students' experiences of learning information literacy. Categories were constructed from an analysis of the distinct features of the experiences that students reported. The categories were grouped into a hierarchical structure that represents students' increasingly sophisticated experiences of learning information literacy. Results The study reveals that students experience learning information literacy in six ways: learning to find information; learning a process to use information; learning to use information to create a product; learning to use information to build a personal knowledge base; learning to use information to advance disciplinary knowledge; and learning to use information to grow as a person and to contribute to others. Conclusions Understanding the complexity of the concept of information literacy, and the collective and diverse range of ways students experience learning information literacy, enables academics and librarians to draw on the range of experiences reported by students to design academic curricula and information literacy education that targets more powerful ways of learning to find and use information.
Abstract:
Introduction In a connected world, youth are participating in digital content creating communities. This paper introduces a description of teens' information practices in digital content creating and sharing communities. Method The research design was a constructivist grounded theory methodology. Seventeen interviews with eleven teens were conducted, and their digital communities were observed over a two-year period. Analysis The data were analysed iteratively to describe teens' interactions with information through open and then focused coding. Emergent categories were shared with participants to confirm conceptual categories. Focused coding provided connections between conceptual categories, resulting in the theory, which was also shared with participants for feedback. Results The paper posits a substantive theory of teens' information practices as they create and share content. It highlights that teens engage in the information actions of accessing, evaluating, and using information. They experienced information in five ways: participation, information, collaboration, process, and artefact. The intersection of enacting information actions and experiences of information resulted in five information practices: learning community, negotiating aesthetic, negotiating control, negotiating capacity, and representing knowledge. Conclusion This study contributes to our understanding of youth information actions, experiences, and practices. Further research into these communities might indicate what information practices are foundational to digital communities.