954 results for information criterion
Abstract:
With the implementation of the Personally Controlled eHealth Records system (PCEHR) in Australia, shared Electronic Health Records (EHRs) are now a reality. However, the characteristic implicit in the PCEHR that puts the consumer (i.e. the patient) in control of managing his or her health information prevents healthcare professionals (HCPs) from using it as a one-stop shop for information in point-of-care decision making, as they cannot trust that a complete record of the consumer's health history is available to them through it. As a result, while it marks a major milestone in Australia's eHealth journey, the PCEHR does not reap the full benefits that such a shared EHR system can offer.
Abstract:
This research was a qualitative study that explored the experience of health information literacy. It used a research approach that emphasised identifying and describing variation in experience to investigate people's experience of using information to learn about health, and what they experienced as information for learning about health. The study's findings identified seven categories representing qualitatively different ways in which people experienced health information literacy, and provided new knowledge about people's engagement with health information for learning in everyday life. The study contributes to consumer health information research and is significant to the disciplines of health and information science.
Abstract:
Purpose - Contemporary offshore Information System Development (ISD) outsourcing is becoming ever more complex. Outsourcing partners have begun ‘re-outsourcing’ components of their projects to other outsourcing companies to minimize cost and gain efficiencies. This paper aims to explore intra-organizational Information Asymmetry in re-outsourced offshore ISD outsourcing projects. Design/methodology/approach - An online survey was conducted to obtain an overall view of Information Asymmetry between Principal and Agents (as per Agency theory). Findings - Statistical analysis showed significant differences between the Principal and Agent on the clarity of requirements, common domain knowledge and communication effectiveness constructs, implying an unbalanced relationship between the parties. Moreover, our results showed that these three are significant measurement constructs of Information Asymmetry. Research limitations/implications - Our study considered only three factors, common domain knowledge, clarity of requirements and communication effectiveness, as measurement constructs of Information Asymmetry. Researchers are therefore encouraged to test the proposed constructs further to increase their precision. Practical implications - Our analysis indicates significant differences in all three measurement constructs, implying difficulties in ensuring that the Agent performs according to the requirements of the Principal. Using Agency theory as the theoretical lens, this study sheds light on contract governance methods that minimize Information Asymmetry between the multiple partners within ISD outsourcing organizations. Originality/value - To the best of our knowledge, no study has yet investigated intra-organizational Information Asymmetry in re-outsourced offshore ISD outsourcing projects.
Abstract:
In 2012, Queensland University of Technology (QUT) committed to the massive project of revitalizing its Bachelor of Science (ST01) degree. Like most universities in Australia, QUT has begun work to align all courses by 2015 with the requirements of the updated Australian Qualifications Framework (AQF), which is regulated by the Tertiary Education Quality and Standards Agency (TEQSA). From the very start of the redesigned degree program, students approach scientific study with an exciting mix of theory and highly topical real-world examples through their chosen “grand challenge.” These challenges, Fukushima and nuclear energy for example, are the lenses used to explore science and lead to 21st century learning outcomes for students. For the teaching and learning support staff, our grand challenge is to expose all science students to multidisciplinary content with a strong emphasis on embedding information literacies into the curriculum. With ST01, QUT is taking the initiative to rethink not only content but how units are delivered, and even how we work together across the faculty, the library, and learning and teaching support. This was the desired outcome, but as we move from design to implementation, has this goal been achieved? A main component of the new degree is to ensure scaffolding of information literacy skills throughout the entirety of the three-year course. However, with the strong focus on problem-based learning and group-work skills, many issues arise for both students and lecturers. A move away from a traditional lecture style is necessary but impacts on academics’ workload and comfort levels. Therefore, academics, in collaboration with librarians and other learning support staff, must draw on each other’s expertise to work together to ensure pedagogy, assessments and targeted classroom activities are mapped within and between units.
This partnership can counteract the tendency of isolated, unsupported academics to concentrate on day-to-day teaching at the expense of consistency between units and big-picture objectives. Support staff may have a more holistic view of a course or degree than the coordinators of individual units, making communication and truly collaborative planning even more critical. In addition, due to staffing and time pressures, the design and delivery of new curricula is generally done quickly, with no opportunity for the designers to stop and reflect on the experience and outcomes. It is vital we take this unique opportunity to closely examine what QUT has and has not achieved, so as to recommend a better way forward. This presentation will discuss these important issues and stumbling blocks in order to provide a set of best-practice guidelines for QUT and other institutions. The aim is to help improve collaboration within the university, as well as to maximize students’ ability to put information literacy skills into action. As our students embark on their own grand challenges, we must challenge ourselves to honestly assess our own work.
Abstract:
Information experience has emerged as a new and dynamic field of information research in recent years. This chapter will discuss and explore information experience in two distinct ways: (a) as a research object and (b) as a research domain. Two recent studies provide the context for this exploration. The first study investigated the information experiences of people using social media (e.g., Facebook, Twitter, YouTube) during natural disasters. Data were gathered through in-depth semi-structured interviews with 25 participants from two areas affected by natural disasters (Brisbane and Townsville). The second study investigated the qualitatively different ways in which people experienced information literacy during a natural disaster. Using phenomenography, data were collected via semi-structured interviews with seven participants. These studies represent two related yet different investigations. Taken together, the studies provide a means to critically debate and reflect upon our evolving understandings of information experience, both as a research object and as a research domain. This chapter presents our preliminary reflections and concludes that further research is needed to develop and strengthen our conceptualisation of this emerging area.
Abstract:
This chapter presents the preliminary results of a phenomenographic study aimed at exploring people’s experience of information literacy during the 2011 flood in Brisbane, Queensland. Phenomenography is a qualitative, interpretive and descriptive approach to research that explores the different ways in which people experience various phenomena and situations in the world around them. In this study, semi-structured interviews with seven adult residents of Brisbane suggested six categories that depicted different ways people experienced information literacy during this natural disaster. Access to timely, accurate and credible information during a natural disaster can save lives, safeguard property, and reduce fear and anxiety; however, very little is currently known about citizens’ information literacy during times of natural disaster. Understanding how people use information to learn during times of crisis is a new terrain for community information literacy research, and one that warrants further attention from the information research community and the emergency management sector.
Abstract:
This chapter presents the preliminary findings of a qualitative study exploring people’s information experiences during the 2012 Queensland State election in Australia. Six residents of South East Queensland who were eligible to vote in the state election participated in a semi-structured interview. The interviews revealed five themes that depict participants’ information experience during the election: information sources, information flow, personal politics, party politics and sense making. Together these themes represent what is experienced as information, how information is experienced, as well as contextual aspects that were unique to voting in an election. The study outlined here is one in an emerging area of enquiry that has explored information experience as a research object. This study has revealed that people’s information experiences are rich, complex and dynamic, and that information experience as a construct of scholarly inquiry provides deep insights into the ways in which people relate to their information worlds. More studies exploring information experience within different contexts are needed to help develop our theoretical understanding of this important and emerging construct.
Abstract:
This thesis is an investigation of the media's representation of children and ICT. The study draws on moral panic theory and Queensland newspaper media, to identify the impact of newspaper reporting on the public's perceptions of young people and ICT.
Abstract:
This thesis considers how an information privacy system can and should develop in Libya. Currently, no information privacy system exists in Libya to protect individuals when their data is processed. This research reviews the main features of privacy law in several key jurisdictions in light of Libya's social, cultural, and economic context. The thesis identifies the basic principles that a Libyan privacy law must consider, including issues of scope, exceptions, principles, remedies, penalties, and the establishment of a legitimate data protection authority. This thesis concludes that Libya should adopt a strong information privacy law framework and highlights some of the considerations that will be relevant for the Libyan legislature.
Abstract:
Topic modelling approaches such as Latent Dirichlet Allocation (LDA) generate statistical models that represent the multiple topics in a collection of documents, and have been widely used in machine learning, information retrieval and related fields. However, their effectiveness in information filtering remains largely unexplored. Patterns are generally considered more representative than single terms for representing documents. In this paper, a novel information filtering model, the Pattern-based Topic Model (PBTM), is proposed to represent text documents using not only topic distributions at a general level but also semantic pattern representations at a detailed, specific level, both of which contribute to accurate document representation and document relevance ranking. Extensive experiments are conducted to evaluate the effectiveness of PBTM on the TREC Reuters Corpus Volume 1 data collection. The results show that the proposed model achieves outstanding performance.
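The general-level half of this idea, ranking documents against a user's information need by comparing topic distributions, can be sketched in a few lines. The topic-word weights below are hand-picked stand-ins for distributions an LDA model would learn, and the pattern-mining component of PBTM is not reproduced; this is an illustrative sketch, not the paper's model.

```python
# Minimal sketch: topic-distribution relevance ranking for filtering.
# TOPICS is a hypothetical topic->word->weight table standing in for
# distributions learned by LDA; PBTM's pattern layer is omitted.
from math import sqrt

TOPICS = {
    "finance": {"market": 0.4, "shares": 0.3, "economy": 0.3},
    "sport":   {"team": 0.4, "league": 0.3, "goal": 0.3},
}

def topic_mixture(text):
    """Infer a document's topic distribution by summing word weights."""
    scores = {t: sum(w.get(word, 0.0) for word in text.lower().split())
              for t, w in TOPICS.items()}
    total = sum(scores.values()) or 1.0
    return {t: s / total for t, s in scores.items()}

def cosine(a, b):
    """Cosine similarity between two topic distributions."""
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

profile = topic_mixture("shares market economy")  # the user's information need
docs = ["market shares rally", "league goal team win", "economy and market news"]
ranking = sorted(docs, key=lambda d: cosine(topic_mixture(d), profile), reverse=True)
```

Documents whose topic mixture matches the profile float to the top of `ranking`, while the off-topic sports document falls to the bottom.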
Abstract:
The need for native Information Systems (IS) theories has been discussed by several prominent scholars. Contributing to this discussion, this research moves towards theorizing IS success as a native theory for the discipline. Despite being one of the most cited scholarly works to date, the IS success model of DeLone and McLean (1992) has been criticized by some for its lack of a focused theoretical approach. Following theory development frameworks, this study improves the theoretical standing of IS success by minimizing interaction and inconsistency. The empirical investigation of theorizing IS success includes 1,396 respondents, gathered through six surveys and a case study. The respondents represent 70 organizations, multiple Information Systems, and both private and public sector organizations.
Abstract:
Control Theory has provided a useful theoretical foundation for Information Systems development outsourcing (ISD-outsourcing) to examine the co-ordination between the client and the vendor. Recent research identified two control mechanisms: structural (the structure of the control mode) and process (the process through which the control mode is enacted). Yet, Control Theory research to date does not describe the ways in which the two control mechanisms can be combined to ensure project success. Grounded in case study data from eight ISD-outsourcing projects, we derive three ‘control configurations’: (i) aligned, (ii) negotiated, and (iii) self-managed, which describe the combinative patterns of structural and process control mechanisms within and across control modes.
Abstract:
This study explored the creation, dissemination and exchange of electronic word of mouth, in the form of product reviews and ratings of digital technology products. Based on 43 in-depth interviews and 500 responses to an online survey, it reveals a new communication model describing consumers' info-active and info-passive information search styles. The study delivers an in-depth understanding of consumers' attitudes towards current advertising tools and user-generated content, and points to new marketing techniques emerging in the online environment.
Abstract:
Most recommender systems use collaborative filtering, content-based filtering or a hybrid approach to recommend items to new users. Collaborative filtering recommends items to new users based on their similar neighbours, while content-based filtering tries to recommend items that are similar to new users' profiles. The fundamental issues are how to profile new users, and how to deal with over-specialization in content-based recommender systems. The terms used to describe items can be organized into a concept hierarchy; we therefore aim to describe user profiles, or information needs, using concept vectors. This paper presents a new method to acquire user information needs, which allows new users to describe their preferences over a concept hierarchy rather than by rating items. It also develops a new ranking function to recommend items to new users based on their information needs. The proposed approach is evaluated on Amazon book datasets. The experimental results demonstrate that the proposed approach can significantly improve the effectiveness of recommender systems.
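The cold-start idea described above, letting a new user state preferred concepts in a hierarchy instead of rating items, can be sketched as follows. The hierarchy, the 0.5 decay per level, and the weighted-overlap scoring rule are illustrative assumptions, not the paper's actual ranking function.

```python
# Hedged sketch: cold-start ranking from a concept hierarchy.
# A new user states concepts they like; items are scored by weighted
# overlap between concept vectors expanded up the hierarchy.

# child -> parent relations in a tiny, hypothetical book-subject hierarchy
PARENT = {
    "machine learning": "computer science",
    "databases": "computer science",
    "computer science": "science",
    "astronomy": "science",
}

def ancestors(concept):
    """Yield a concept and each of its ancestors, nearest first."""
    while concept is not None:
        yield concept
        concept = PARENT.get(concept)

def expand(concepts, decay=0.5):
    """Build a concept vector: each concept and its ancestors get a
    weight that decays by `decay` per level up the hierarchy."""
    weights = {}
    for c in concepts:
        for level, a in enumerate(ancestors(c)):
            weights[a] = max(weights.get(a, 0.0), decay ** level)
    return weights

def score(item_concepts, user_concepts):
    """Weighted overlap between the expanded concept vectors."""
    iw, uw = expand(item_concepts), expand(user_concepts)
    return sum(iw[c] * uw[c] for c in iw.keys() & uw.keys())

items = {
    "Pattern Recognition": {"machine learning"},
    "SQL in a Nutshell": {"databases"},
    "Cosmos": {"astronomy"},
}
user_needs = {"machine learning"}  # stated preference; no ratings required

ranking = sorted(items, key=lambda i: score(items[i], user_needs), reverse=True)
```

Because ancestor concepts still carry (discounted) weight, a database book outranks an astronomy book for a machine-learning reader: they share the nearer ancestor "computer science". This is one way such a ranking can mitigate the over-specialization the abstract mentions, since related-but-different items still score above unrelated ones.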
Abstract:
Big Data presents many challenges related to volume, whether one is interested in studying past datasets or, even more problematically, attempting to work with live streams of data. The most obvious challenge, in a ‘noisy’ environment such as contemporary social media, is to collect the pertinent information, be it information for a specific study, tweets that can inform emergency services or other responders to an ongoing crisis, or information that gives an advantage to those involved in prediction markets. Often, such a process is iterative, with keywords and hashtags changing over time, and both collection and analytic methodologies need to be continually adapted to respond to this changing information. While many of the datasets collected and analyzed are pre-formed, that is, they are built around a particular keyword, hashtag, or set of authors, they still contain a large volume of information, much of which is unnecessary for the current purpose and/or potentially useful for future projects. Accordingly, this panel considers methods for separating and combining data to optimize big data research and report findings to stakeholders. The first paper considers possible coding mechanisms for incoming tweets during a crisis, taking a large stream of incoming tweets and selecting which of those need to be placed immediately in front of responders for manual filtering and possible action. The paper suggests two solutions: content analysis and user profiling. In the former, aspects of the tweet are assigned a score to assess its likely relationship to the topic at hand and the urgency of the information, while the latter attempts to identify users who either serve as amplifiers of information or are known as authoritative sources.
Through these techniques, the information contained in a large dataset can be filtered down to match the expected capacity of emergency responders, while knowledge of the core keywords or hashtags relating to the current event is constantly refined for future data collection. The second paper is also concerned with identifying significant tweets, in this case tweets relevant to a particular prediction market: tennis betting. As increasing numbers of professional sportsmen and sportswomen create Twitter accounts to communicate with their fans, information is being shared regarding injuries, form and emotions that has the potential to impact future results. As has already been demonstrated with leading US sports, such information is extremely valuable. Tennis, like American Football (NFL) and Baseball (MLB), has paid subscription services which manually filter incoming news sources, including tweets, for information valuable to gamblers, gambling operators, and fantasy sports players. However, while such services are still niche operations, much of the value of the information is lost by the time it reaches one of these services. The paper thus considers how information could be filtered from Twitter user lists and hashtag or keyword monitoring, assessing the value of the source, the information, and the prediction markets to which it may relate. The third paper examines methods for collecting Twitter data and following changes in an ongoing, dynamic social movement, such as the Occupy Wall Street movement. It involves the development of technical infrastructure to collect tweets and make them available for exploration and analysis. A strategy to respond to changes in the social movement is also required, or the resulting tweets will only reflect the discussions and strategies the movement used at the time the keyword list was created; in a way, keyword creation is part strategy and part art.
In this paper we describe strategies for the creation of a social media archive, specifically of tweets related to the Occupy Wall Street movement, and methods for continuing to adapt data collection strategies as the movement’s presence on Twitter changes over time. We also discuss the opportunities for, and methods of, extracting smaller slices of data from an archive of social media data to support a multitude of research projects in multiple fields of study. The common theme among these papers is that of constructing a dataset, filtering it for a specific purpose, and then using the resulting information to aid future data collection. The intention is that, through the papers presented and subsequent discussion, the panel will inform the wider research community not only about the objectives and limitations of data collection, live analytics, and filtering, but also about current and in-development methodologies that could be adopted by those working with such datasets, and how such approaches could be customized depending on the project stakeholders.
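The first paper's triage idea, scoring each tweet by topical relevance, urgency, and author authority and keeping only as many as responders can handle, can be sketched roughly as below. All keyword lists, account names, and weights are invented for illustration; they are not the panel's actual coding scheme.

```python
# Illustrative sketch of crisis-tweet triage: combine a content score
# (topic and urgency keywords) with a user score (known authoritative
# accounts), then keep the top tweets up to responder capacity.
import re

TOPIC_TERMS = {"flood", "evacuate", "road", "water"}     # hypothetical lists
URGENT_TERMS = {"trapped", "help", "urgent", "now"}
AUTHORITIES = {"qps_media", "bom_qld"}                   # hypothetical accounts

def tweet_score(text, author):
    """Score one tweet: topical terms count once, urgent terms double,
    and a known authoritative author adds a fixed boost."""
    words = set(re.findall(r"[a-z_]+", text.lower()))
    score = 1.0 * len(words & TOPIC_TERMS)
    score += 2.0 * len(words & URGENT_TERMS)
    if author.lower() in AUTHORITIES:
        score += 3.0
    return score

def triage(tweets, capacity):
    """Keep only the top-`capacity` tweets for human responders."""
    return sorted(tweets, key=lambda t: tweet_score(*t), reverse=True)[:capacity]

stream = [
    ("Lovely sunny day at the beach", "casual_user"),
    ("Flood water rising fast, family trapped, help now", "resident42"),
    ("Road closures update for flood areas", "qps_media"),
]
urgent = triage(stream, capacity=2)
```

In a real deployment the keyword lists would themselves be refined over time, as the abstract notes, with newly observed hashtags and terms fed back into `TOPIC_TERMS` for future collection.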