799 results for Information Seeking.
Abstract:
This paper explores inquiry skills in the Australian Curriculum in relation to inquiry learning pedagogy. Inquiry skills in the Australian Curriculum are represented as questioning skills (i.e. posing and evaluating questions and hypotheses), information literacy (i.e. seeking, evaluating, selecting and using information), ICT literacy (i.e. fluency with computer hardware and software) and discipline-specific skills (i.e. data gathering, mathematical measurement, data analysis and presentation of data). This paper provides an explanation of inquiry learning pedagogy that complements the Australian Curriculum inquiry skills.
Abstract:
This thesis considers how an information privacy system can and should develop in Libya. Currently, no information privacy system exists in Libya to protect individuals when their data is processed. This research reviews the main features of privacy law in several key jurisdictions in light of Libya's social, cultural, and economic context. The thesis identifies the basic principles that a Libyan privacy law must consider, including issues of scope, exceptions, principles, remedies, penalties, and the establishment of a legitimate data protection authority. This thesis concludes that Libya should adopt a strong information privacy law framework and highlights some of the considerations that will be relevant for the Libyan legislature.
Abstract:
Background: People often modify oral solid dosage forms when they experience difficulty swallowing them. Modifying dosage forms may cause adverse effects for both the patient and the person undertaking the modification. Pharmacists are often the first point of contact for people in the general community seeking advice regarding medications. Nurses are at the forefront of administering medications to patients and are likely to be most directly affected by a patient's swallowing ability, while general practitioners (GPs) are expected to consider swallowing abilities when prescribing medications. Objective: To compare the perspectives and experiences of GPs, pharmacists, and nurses regarding medication dosage form modification and their knowledge of medication modification. Method: Questionnaires tailored to each profession were posted to 630 GPs, and links to an online version were distributed to 2,090 pharmacists and 505 nurses. Results: When compared to pharmacists and GPs, nurses perceived that a greater proportion of the general community modified solid dosage forms. Pharmacists and GPs were most likely to consider allergies and medical history when deciding whether to prescribe or dispense a medicine, while nurses' priorities were allergies and swallowing problems when administering medications. While nurses were more likely to ask their patients about their ability to swallow medications, most health professionals reported that patients "rarely" or "never" volunteered information about swallowing difficulties. The majority of health professionals would advise a patient to crush or split non-coated, non-sustained-release tablets, and would consult colleagues or reference sources for sustained-release or coated tablets. Health professionals appeared to rely heavily upon the suffix attached to medication names (which suggests modified-release properties) to identify potential problems associated with modifying medications. Conclusion: The different professional roles and responsibilities of GPs, pharmacists, and nurses are associated with different perspectives of, and experiences with, people modifying medications in the general community, and with different knowledge about the consequences of medication modification.
Abstract:
Respite care is a cornerstone service for the home management of people with dementia. It is used by carers to mitigate the stress related to the demands of caring by allowing time for them to rest and do things for themselves, thus maintaining the caring relationship at home and perhaps forestalling long-term placement in a residential aged care facility. Despite numerous anecdotal reports in support of respite care, its uptake by carers of people with dementia remains relatively low. The aim of this paper was to examine the factors that influence the use of respite by carers of people with dementia by reviewing quantitative and qualitative research, predominantly from the years 1990 to 2012. Seventy-six international studies of different types of respite care were included in this review and their methods were critically appraised. The key topics identified related to information access, the barriers to carers recognising the need for and seeking respite, satisfaction with respite services including the outcomes for carers and people with dementia, the characteristics of an effective respite service, and the role of health workers in providing appropriate respite care. Finally, limitations of considering the literature as a whole were highlighted and recommendations made for future research.
Abstract:
Topic modelling, such as Latent Dirichlet Allocation (LDA), was proposed to generate statistical models that represent multiple topics in a collection of documents, and it has been widely used in fields such as machine learning and information retrieval. However, its effectiveness in information filtering is largely unknown. Patterns are generally considered more representative than single terms for representing documents. In this paper, a novel information filtering model, the Pattern-based Topic Model (PBTM), is proposed to represent text documents not only by topic distributions at a general level but also by semantic pattern representations at a detailed, specific level, both of which contribute to accurate document representation and document relevance ranking. Extensive experiments are conducted to evaluate the effectiveness of PBTM using the TREC data collection Reuters Corpus Volume 1. The results show that the proposed model achieves outstanding performance.
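For illustration, the following is a minimal sketch of the kind of general-level topic distribution that LDA produces for documents, using the gensim library on a toy corpus; it is not an implementation of the proposed PBTM model, and the corpus, parameter values, and variable names are assumptions made for the example.

```python
# Minimal LDA sketch with gensim (illustrative only; not the PBTM model).
# The toy corpus and parameters below are hypothetical.
from gensim.corpora import Dictionary
from gensim.models import LdaModel

docs = [
    ["information", "filtering", "topic", "model", "document"],
    ["pattern", "mining", "document", "representation", "relevance"],
    ["topic", "distribution", "pattern", "ranking", "relevance"],
]

dictionary = Dictionary(docs)                        # term -> integer id mapping
corpus = [dictionary.doc2bow(doc) for doc in docs]   # bag-of-words vectors

lda = LdaModel(corpus=corpus, id2word=dictionary,
               num_topics=2, passes=20, random_state=0)

# General-level representation: the topic distribution of the first document
print(lda.get_document_topics(corpus[0]))

# Top terms per topic
for topic_id, terms in lda.show_topics(num_topics=2, num_words=5, formatted=False):
    print(topic_id, [term for term, _ in terms])
```

In a pattern-enhanced approach such as the one the abstract describes, these topic distributions would be complemented by term patterns mined within each topic, but that step is specific to the paper and is not sketched here.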
Abstract:
The need for native Information Systems (IS) theories has been discussed by several prominent scholars. Contributing to their conjectural discussion, this research moves towards theorizing IS success as a native theory for the discipline. Despite being one of the most cited scholarly works to date, the IS success model of DeLone and McLean (1992) has been criticized by some for lacking focus on the theoretical approach. Following theory development frameworks, this study improves the theoretical standing of IS success by minimizing interaction and inconsistency. The empirical investigation of theorizing IS success includes 1,396 respondents, gathered through six surveys and a case study. The respondents represent 70 organizations, multiple Information Systems, and both private and public sector organizations.
Abstract:
Control Theory has provided a useful theoretical foundation for Information Systems development outsourcing (ISD-outsourcing), examining the co-ordination between the client and the vendor. Recent research identified two control mechanisms: structural (the structure of the control mode) and process (the process through which the control mode is enacted). Yet, Control Theory research to date does not describe the ways in which the two control mechanisms can be combined to ensure project success. Grounded in case study data from eight ISD-outsourcing projects, we derive three 'control configurations': (i) aligned, (ii) negotiated, and (iii) self-managed, which describe the combinative patterns of structural and process control mechanisms within and across control modes.
Abstract:
This study explored the creation, dissemination and exchange of electronic word of mouth, in the form of product reviews and ratings of digital technology products. Based on 43 in-depth interviews and 500 responses to an online survey, it reveals a new communication model describing consumers' info-active and info-passive information search styles. The study delivers an in-depth understanding of consumers' attitudes towards current advertising tools and user-generated content, and points to new marketing techniques emerging in the online environment.
Abstract:
The studies presented in this review explore three distinct areas with potential for inhibiting HIV infection in women. Based on emerging information from the physiology, endocrinology and immunology of the female reproductive tract (FRT), we propose unique 'works in progress' for protecting women from HIV. Various aspects of FRT immunity are suppressed by estradiol during the menstrual cycle, making women more susceptible to HIV infection. By engineering commensal Lactobacillus to secrete the anti-HIV molecule Elafin as estradiol levels increase, women could be protected from HIV infection. Selective estrogen response modifiers enhance barrier integrity and the secretion of protective anti-HIV molecules. Finally, understanding the interactions and regulation of FRT endogenous antimicrobials, proteases, antiproteases, etc., all of which are under hormonal control, will open new avenues for therapeutic manipulation of the FRT mucosal microenvironment. By seeking new alternatives for preventing HIV infection in women, we may finally disrupt the HIV pandemic.
Abstract:
Most recommender systems use collaborative filtering, content-based filtering, or a hybrid approach to recommend items to new users. Collaborative filtering recommends items to new users based on their similar neighbours, while content-based filtering tries to recommend items that are similar to new users' profiles. The fundamental issues include how to profile new users and how to deal with over-specialization in content-based recommender systems. Indeed, the terms used to describe items can be organised into a concept hierarchy. Therefore, we aim to describe user profiles, or information needs, using concept vectors. This paper presents a new method to acquire user information needs, which allows new users to describe their preferences on a concept hierarchy rather than by rating items. It also develops a new ranking function to recommend items to new users based on their information needs. The proposed approach is evaluated on Amazon book datasets. The experimental results demonstrate that the proposed approach can significantly improve the effectiveness of recommender systems.
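As an illustrative sketch of content-based ranking with concept vectors (not the ranking function developed in the paper), a new user's concept preferences can be compared with item concept vectors using cosine similarity; the concepts, weights, and item names below are hypothetical.

```python
# Illustrative only: ranking items by cosine similarity between a new user's
# concept-preference vector and item concept vectors. Not the paper's method;
# concepts, weights, and items are hypothetical.
import math

def cosine(u, v):
    """Cosine similarity between two concept-weight dictionaries."""
    dot = sum(u[c] * v.get(c, 0.0) for c in u)
    norm_u = math.sqrt(sum(w * w for w in u.values()))
    norm_v = math.sqrt(sum(w * w for w in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

# A new user states preferences over concepts instead of rating items
user_profile = {"machine_learning": 0.9, "information_retrieval": 0.7, "databases": 0.1}

items = {
    "Book A": {"machine_learning": 0.8, "databases": 0.3},
    "Book B": {"information_retrieval": 0.9, "machine_learning": 0.4},
    "Book C": {"databases": 0.9},
}

ranked = sorted(items, key=lambda name: cosine(user_profile, items[name]), reverse=True)
print(ranked)  # items whose concepts best match the user's stated preferences come first
```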
Abstract:
Big Data presents many challenges related to volume, whether one is interested in studying past datasets or, even more problematically, attempting to work with live streams of data. The most obvious challenge, in a ‘noisy’ environment such as contemporary social media, is to collect the pertinent information, be that information for a specific study, tweets which can inform emergency services or other responders to an ongoing crisis, or data which can give an advantage to those involved in prediction markets. Often, such a process is iterative, with keywords and hashtags changing with the passage of time, and both collection and analytic methodologies need to be continually adapted to respond to this changing information. While many of the data sets collected and analyzed are preformed, that is, they are built around a particular keyword, hashtag, or set of authors, they still contain a large volume of information, much of which is unnecessary for the current purpose and/or potentially useful for future projects. Accordingly, this panel considers methods for separating and combining data to optimize big data research and report findings to stakeholders.

The first paper considers possible coding mechanisms for incoming tweets during a crisis, taking a large stream of incoming tweets and selecting which of those need to be immediately placed in front of responders for manual filtering and possible action. The paper suggests two solutions for this: content analysis and user profiling. In the former case, aspects of the tweet are assigned a score to assess its likely relationship to the topic at hand and the urgency of the information, whilst the latter attempts to identify those users who are either serving as amplifiers of information or are known as an authoritative source. Through these techniques, the information contained in a large dataset could be filtered down to match the expected capacity of emergency responders, and knowledge of the core keywords or hashtags relating to the current event is constantly refined for future data collection.

The second paper is also concerned with identifying significant tweets, but in this case tweets relevant to a particular prediction market: tennis betting. As increasing numbers of professional sportsmen and sportswomen create Twitter accounts to communicate with their fans, information is being shared regarding injuries, form, and emotions, which has the potential to affect future results. As has already been demonstrated with leading US sports, such information is extremely valuable. Tennis, like American Football (NFL) and Baseball (MLB), has paid subscription services which manually filter incoming news sources, including tweets, for information valuable to gamblers, gambling operators, and fantasy sports players. However, whilst such services are still niche operations, much of the value of information is lost by the time it reaches one of these services. The paper thus considers how information could be filtered from Twitter user lists and hashtag or keyword monitoring, assessing the value of the source, the information, and the prediction markets to which it may relate.

The third paper examines methods for collecting Twitter data and following changes in an ongoing, dynamic social movement, such as the Occupy Wall Street movement. It involves the development of technical infrastructure to collect the tweets and make them available for exploration and analysis. A strategy to respond to changes in the social movement is also required, or the resulting tweets will only reflect the discussions and strategies the movement used at the time the keyword list was created; in this sense, keyword creation is part strategy and part art. In this paper we describe strategies for the creation of a social media archive, specifically of tweets related to the Occupy Wall Street movement, and methods for continuing to adapt data collection strategies as the movement's presence on Twitter changes over time. We also discuss the opportunities and methods for extracting smaller slices of data from an archive of social media data to support a multitude of research projects in multiple fields of study.

The common theme amongst these papers is that of constructing a data set, filtering it for a specific purpose, and then using the resulting information to aid in future data collection. The intention is that, through the papers presented and the subsequent discussion, the panel will inform the wider research community not only about the objectives and limitations of data collection, live analytics, and filtering, but also about current and in-development methodologies that could be adopted by those working with such datasets, and about how such approaches could be customized depending on the project stakeholders.
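As an illustrative aside, the content-analysis approach described in the first paper, scoring incoming tweets for topical relevance and urgency before passing them to responders, might be sketched roughly as follows; the keyword lists, weights, threshold, and capacity figure are assumptions for the example, not the panel's actual method.

```python
# Illustrative only: keyword-based relevance and urgency scoring of tweets,
# in the spirit of the content-analysis approach described above.
# Keyword lists, weights, threshold, and capacity are hypothetical assumptions.

TOPIC_KEYWORDS = {"flood": 2.0, "evacuation": 2.0, "roadclosed": 1.5, "shelter": 1.0}
URGENCY_KEYWORDS = {"help": 2.0, "trapped": 3.0, "urgent": 2.0, "rising": 1.0}

def score_tweet(text: str) -> float:
    """Combined topical-relevance and urgency score for a single tweet."""
    tokens = text.lower().replace("#", "").replace(",", " ").split()
    relevance = sum(TOPIC_KEYWORDS.get(t, 0.0) for t in tokens)
    urgency = sum(URGENCY_KEYWORDS.get(t, 0.0) for t in tokens)
    return relevance + urgency

def filter_for_responders(tweets, capacity=10, threshold=2.0):
    """Keep only the highest-scoring tweets, up to the responders' capacity."""
    scored = [(score_tweet(t), t) for t in tweets]
    scored = [pair for pair in scored if pair[0] >= threshold]
    scored.sort(reverse=True)
    return [tweet for _, tweet in scored[:capacity]]

stream = [
    "Trapped on the roof, water rising fast, please help #flood",
    "Lovely sunset over the river tonight",
    "#RoadClosed near the bridge, evacuation route diverted",
]
print(filter_for_responders(stream))
```

A user-profiling component of the kind the first paper also mentions (weighting tweets from amplifiers or known authoritative sources) would add a per-author factor to this score, but that detail is not specified in the abstract and is therefore not sketched here.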
Abstract:
The policy objectives of the continuous disclosure regime augmented by the misleading or deceptive conduct provisions in the Corporations Act are to enhance the integrity and efficiency of Australian capital markets by ensuring equality of opportunity for all investors through public access to accurate and material company information to enable them to make well-informed investment decisions. This article argues that there were failures by the regulators in the performance of their roles to protect the interests of investors in Forrest v ASIC; FMG v ASIC (2012) 247 CLR 486: ASX failed to enforce timely compliance with the continuous disclosure regime and ensure that the market was properly informed by seeking immediate clarification from FMG as to the agreed fixed price and/or seeking production of a copy of the CREC agreement; and ASIC failed to succeed in the High Court because of the way it pleaded its case. The article also examines the reasoning of the High Court in Forrest v ASIC and whether it might have changed previous understandings of the Campomar test for determining whether representations directed to the public generally are misleading.
Abstract:
Victim/survivors of human trafficking involving partner migration employ diverse help-seeking strategies, both formal and informal, to exit their exploitative situations. Drawing on primary research conducted by Lyneham and Richards (forthcoming), the authors highlight the importance of educating the community and professionals from a wide range of sectors—including health, mental health, child protection, social welfare, social work, domestic violence, migration, legal and law enforcement services—about human trafficking and the help-seeking strategies of victims/survivors in order to support them to leave exploitative situations. Enhancing Australia’s knowledge of victim/survivors’ help-seeking strategies will better inform government and community responses to this crime, improve detection and identification of human trafficking matters and subsequent referral to appropriate victim services.
Abstract:
Disagreement within the global science community about the certainty and causes of climate change has led the general public to question what to believe and who to trust on matters related to this issue. This paper reports on qualitative research undertaken with Australian residents from two rural areas to explore their perceptions of climate change and trust in information providers. While overall, residents tended to agree that climate change is a reality, perceptions varied in terms of its causes and how best to address it. Politicians, government, and the media were described as untrustworthy sources of information about climate change, with independent scientists being the most trusted. The vested interests of information providers appeared to be a key reason for their distrust. The findings highlight the importance of improved transparency and consultation with the public when communicating information about climate change and related policies.
Abstract:
Process improvement has become a top business priority, and more and more project requests are raised in organizations seeking approval and resources for process-related projects. Realistically, the total of the requested funds exceeds the allocated budget, the number of projects is higher than the available bandwidth, only some of these (very often only a few) can be supported, and most never see the light of day. Relevant resources are scarce, and correct decisions must be made to ensure that the projects of greatest value are implemented. How can decision makers make the right decisions on the following: Which project(s) are to be approved, and when should work on them commence? Which projects are most aligned with corporate strategy? How can the project's value to the business be calculated and explained? How can these decisions be made in a fair, justifiable manner that brings the best results to the company and its stakeholders? This chapter describes a business value scoring (BVS) model that was built, tested, and implemented by a leading financial institution in Australia to address these very questions. The chapter discusses the background and motivations for such an initiative and describes the tool in detail. All components and underlying concepts are explained, together with details on its application. This tool has been successfully implemented in the case organization. The chapter provides practical guidelines for organizations that wish to adopt this approach.
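Purely as an illustration (the abstract does not disclose the actual scoring formula), a weighted-criteria model of the general kind described can rank project requests by a weighted sum of criterion ratings; the criteria, weights, ratings, and project names below are hypothetical.

```python
# Illustrative only: a generic weighted-criteria project scoring model of the kind
# the chapter describes. Criteria, weights, ratings, and project names are
# hypothetical assumptions, not the case organization's actual BVS model.

WEIGHTS = {
    "strategic_alignment": 0.40,
    "expected_benefit": 0.35,
    "risk_reduction": 0.15,
    "implementation_effort": -0.10,   # higher effort lowers the score
}

def business_value_score(ratings):
    """Weighted sum of criterion ratings (each rated 1-5)."""
    return sum(WEIGHTS[criterion] * ratings[criterion] for criterion in WEIGHTS)

project_requests = {
    "Automate loan approvals": {"strategic_alignment": 5, "expected_benefit": 4,
                                "risk_reduction": 3, "implementation_effort": 4},
    "Redesign intranet search": {"strategic_alignment": 2, "expected_benefit": 2,
                                 "risk_reduction": 1, "implementation_effort": 2},
}

ranked = sorted(project_requests,
                key=lambda name: business_value_score(project_requests[name]),
                reverse=True)
for name in ranked:
    print(f"{name}: {business_value_score(project_requests[name]):.2f}")
```

In practice, such a score would be only one input to portfolio decisions alongside budget and resourcing constraints, as the chapter's questions about funding and bandwidth suggest.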