837 results for INFORMATION DISSEMINATION


Abstract:

Two hundred million people are displaced annually due to natural disasters, with a further one billion living in inadequate conditions in urban areas. Architects have a responsibility to respond to this statistic as the effects of natural and social disasters become more visibly catastrophic when paired with population rise. The research discussed in this paper initially questions and considers how digital tools can be employed to enhance rebuilding processes while still achieving sensitive, culturally appropriate and accepted built solutions. Secondly, the paper reflects on the impact ‘real-world’ projects have on architectural education. Research aspirations encouraged an atypical ‘research by design’ methodology involving a focused case study in the recently devastated village of Keigold, Ranongga, Solomon Islands. Through this qualitative approach, specific place data and the accounts of those affected were documented through naturalistic and archival methods of observation and participation. Findings reveal a number of unanticipated results that would otherwise have gone undetected had field research not been undertaken within the design and rebuilding process, reflecting the importance of place-specific research in the design process. Ultimately, the study proves that it is critical for issues of disaster to be addressed on a local rather than a global scale; decisions cannot be speculative, or solved at a distance, but require intensive collaborative work with communities to achieve optimum solutions. Architectural education and design studios would continue to benefit from focused community engagement and field research within the design process.

Abstract:

In 2012, Queensland University of Technology (QUT) committed to the massive project of revitalizing its Bachelor of Science (ST01) degree. Like most universities in Australia, QUT has begun work to align all courses by 2015 to the requirements of the updated Australian Qualifications Framework (AQF), which is regulated by the Tertiary Education Quality and Standards Agency (TEQSA). From the very start of the redesigned degree program, students approach scientific study with an exciting mix of theory and highly topical real-world examples through their chosen “grand challenge.” These challenges, Fukushima and nuclear energy for example, are the lenses used to explore science and lead to 21st century learning outcomes for students. For the teaching and learning support staff, our grand challenge is to expose all science students to multidisciplinary content with a strong emphasis on embedding information literacies into the curriculum. With ST01, QUT is taking the initiative to rethink not only content but how units are delivered and even how we work together between the faculty, the library, and learning and teaching support. This was the desired outcome, but as we move from design to implementation, has this goal been achieved? A main component of the new degree is to ensure scaffolding of information literacy skills throughout the entirety of the three-year course. However, with the strong focus on problem-based learning and group work skills, many issues arise both for students and lecturers. A move away from a traditional lecture style is necessary but impacts on academics’ workload and comfort levels. Therefore, academics in collaboration with librarians and other learning support staff must draw on each other’s expertise to work together to ensure pedagogy, assessments and targeted classroom activities are mapped within and between units. This partnership can counteract the tendency of isolated, unsupported academics to concentrate on day-to-day teaching at the expense of consistency between units and big-picture objectives. Support staff may have a more holistic view of a course or degree than coordinators of individual units, making communication and truly collaborative planning even more critical. As well, due to staffing time pressures, design and delivery of new curriculum is generally done quickly, with no option for the designers to stop and reflect on the experience and outcomes. It is vital we take this unique opportunity to closely examine what QUT has and hasn’t achieved to be able to recommend a better way forward. This presentation will discuss these important issues and stumbling blocks to provide a set of best practice guidelines for QUT and other institutions. The aim is to help improve collaboration within the university, as well as to maximize students’ ability to put information literacy skills into action. As our students embark on their own grand challenges, we must challenge ourselves to honestly assess our own work.

Abstract:

Information experience has emerged as a new and dynamic field of information research in recent years. This chapter will discuss and explore information experience in two distinct ways: (a) as a research object; and (b) as a research domain. Two recent studies provide the context for this exploration. The first study investigated the information experiences of people using social media (e.g., Facebook, Twitter, YouTube) during natural disasters. Data were gathered through in-depth semi-structured interviews with 25 participants from two areas affected by natural disasters (Brisbane and Townsville). The second study investigated the qualitatively different ways in which people experienced information literacy during a natural disaster. Using phenomenography, data were collected via semi-structured interviews with 7 participants. These studies represent two related yet different investigations. Taken together, the studies provide a means to critically debate and reflect upon our evolving understandings of information experience, both as a research object and as a research domain. This chapter presents our preliminary reflections and concludes that further research is needed to develop and strengthen our conceptualisation of this emerging area.

Abstract:

This chapter presents the preliminary results of a phenomenographic study aimed at exploring people’s experience of information literacy during the 2011 flood in Brisbane, Queensland. Phenomenography is a qualitative, interpretive and descriptive approach to research that explores the different ways in which people experience various phenomena and situations in the world around them. In this study, semi-structured interviews with seven adult residents of Brisbane suggested six categories that depicted different ways people experienced information literacy during this natural disaster. Access to timely, accurate and credible information during a natural disaster can save lives, safeguard property, and reduce fear and anxiety; however, very little is currently known about citizens’ information literacy during times of natural disaster. Understanding how people use information to learn during times of crisis is a new terrain for community information literacy research, and one that warrants further attention by the information research community and the emergency management sector.

Abstract:

This chapter presents the preliminary findings of a qualitative study exploring people’s information experiences during the 2012 Queensland State election in Australia. Six residents of South East Queensland who were eligible to vote in the state election participated in a semi-structured interview. The interviews revealed five themes that depict participants’ information experience during the election: information sources, information flow, personal politics, party politics and sense making. Together these themes represent what is experienced as information, how information is experienced, as well as contextual aspects that were unique to voting in an election. The study outlined here is one in an emerging area of enquiry that has explored information experience as a research object. This study has revealed that people’s information experiences are rich, complex and dynamic, and that information experience as a construct of scholarly inquiry provides deep insights into the ways in which people relate to their information worlds. More studies exploring information experience within different contexts are needed to help develop our theoretical understanding of this important and emerging construct.

Abstract:

This thesis is an investigation of the media's representation of children and ICT. The study draws on moral panic theory and Queensland newspaper media, to identify the impact of newspaper reporting on the public's perceptions of young people and ICT.

Abstract:

This thesis considers how an information privacy system can and should develop in Libya. Currently, no information privacy system exists in Libya to protect individuals when their data is processed. This research reviews the main features of privacy law in several key jurisdictions in light of Libya's social, cultural, and economic context. The thesis identifies the basic principles that a Libyan privacy law must consider, including issues of scope, exceptions, principles, remedies, penalties, and the establishment of a legitimate data protection authority. This thesis concludes that Libya should adopt a strong information privacy law framework and highlights some of the considerations that will be relevant for the Libyan legislature.

Abstract:

Forming peer alliances to share and build knowledge is an important aspect of community arts practice, and these co-creation processes are increasingly being mediated by the internet. This paper offers guidance for practitioners who are interested in better utilising the internet to connect, share, and make new knowledge. It argues that new approaches are required to foster the organising activities that underpin online co-creation, building from the premise that people have become increasingly networked as individuals rather than in groups (Rainie and Wellman 2012: 6), and that these new ways of connecting enable new modes of peer-to-peer production and exchange. This position advocates that practitioners move beyond situating the internet as a platform for dissemination and a tool for co-creating media, to embrace its knowledge collaboration potential. Drawing on a design experiment I developed to promote online knowledge co-creation, this paper suggests three development phases – developing connections, developing ideas, and developing agility – to ground six methods. They are: switching and routing, engaging in small trades of ideas with networked individuals; organising, co-ordinating networked individuals and their data; beta-release, offering ‘beta’ artifacts as knowledge trades; beta-testing, trialing and modifying other people’s ‘beta’ ideas; adapting, responding to technological disruption; and reconfiguring, embracing opportunities offered by technological disruption. These approaches position knowledge co-creation as another capability of the community artist, along with co-creating art and media.

Abstract:

As support grows for greater access to information and data held by governments, so does awareness of the need for appropriate policy, technical and legal frameworks to achieve the desired economic and societal outcomes. Since the late 2000s numerous international organizations, inter-governmental bodies and governments have issued open government data policies, which set out key principles underpinning access to, and the release and reuse of data. These policies reiterate the value of government data and establish the default position that it should be openly accessible to the public under transparent and non-discriminatory conditions, which are conducive to innovative reuse of the data. A key principle stated in open government data policies is that legal rights in government information must be exercised in a manner that is consistent with and supports the open accessibility and reusability of the data. In particular, where government information and data is protected by copyright, access should be provided under licensing terms which clearly permit its reuse and dissemination. This principle has been further developed in the policies issued by Australian Governments into a specific requirement that Government agencies are to apply the Creative Commons Attribution licence (CC BY) as the default licensing position when releasing government information and data. A wide-ranging survey of the practices of Australian Government agencies in managing their information and data, commissioned by the Office of the Australian Information Commissioner in 2012, provides valuable insights into progress towards the achievement of open government policy objectives and the adoption of open licensing practices. The survey results indicate that Australian Government agencies are embracing open access and a proactive disclosure culture and that open licensing under Creative Commons licences is increasingly prevalent. However, the finding that ‘[t]he default position of open access licensing is not clearly or robustly stated, nor properly reflected in the practice of Government agencies’ points to the need to further develop the policy framework and the principles governing information access and reuse, and to provide practical guidance tools on open licensing if the broadest range of government information and data is to be made available for innovative reuse.

Abstract:

Topic modelling, such as Latent Dirichlet Allocation (LDA), was proposed to generate statistical models that represent multiple topics in a collection of documents, and it has been widely utilized in fields such as machine learning and information retrieval. However, its effectiveness in information filtering remains largely unexplored. Patterns are generally considered more representative than single terms for representing documents. In this paper, a novel information filtering model, the Pattern-based Topic Model (PBTM), is proposed to represent text documents not only by their topic distributions at a general level but also by semantic pattern representations at a detailed, specific level, both of which contribute to accurate document representation and document relevance ranking. Extensive experiments are conducted to evaluate the effectiveness of PBTM using the TREC data collection Reuters Corpus Volume 1. The results show that the proposed model achieves outstanding performance.
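The abstract gives no implementation details of PBTM, which combines topic distributions with pattern-based representations. As a point of reference only, the minimal Python sketch below shows just the topic-model side of such a filtering setup: documents are mapped to LDA topic distributions and incoming documents are ranked against a user profile built from documents previously judged relevant. The toy corpus, topic count and cosine-similarity ranking are illustrative assumptions, not the authors' method.

# Minimal sketch (not the authors' PBTM implementation): derive per-document
# topic distributions with LDA and rank incoming documents against a user
# profile built from documents the user marked relevant. Corpus, topic count
# and the cosine-similarity ranking are illustrative assumptions.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

training_docs = [
    "flood warning issued for brisbane river catchment",
    "volunteers coordinate sandbag distribution in the city",
    "new smartphone released with improved camera",
]
relevant_idx = [0, 1]          # documents the user judged relevant (assumed)

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(training_docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)          # rows: P(topic | document)

# User profile = mean topic distribution of the relevant documents.
profile = doc_topics[relevant_idx].mean(axis=0)

def topic_relevance(new_doc: str) -> float:
    """Cosine similarity between a new document's topic mix and the profile."""
    vec = vectorizer.transform([new_doc])
    topics = lda.transform(vec)[0]
    return float(np.dot(topics, profile) /
                 (np.linalg.norm(topics) * np.linalg.norm(profile)))

print(topic_relevance("river levels rising, evacuation centres open"))

In a pattern-enhanced variant such as the one the abstract describes, this topic-level score would be combined with a score from frequent-pattern matching at the specific level; that combination is not shown here.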

Abstract:

The need for native Information Systems (IS) theories has been discussed by several prominent scholars. Contributing to their conjectural discussion, this research moves towards theorizing IS success as a native theory for the discipline. Despite being one of the most cited scholarly works to date, the IS success model of DeLone and McLean (1992) has been criticized by some for its lack of theoretical focus. Following theory development frameworks, this study improves the theoretical standing of IS success by minimizing interaction and inconsistency. The empirical investigation of theorizing IS success includes 1,396 respondents, gathered through six surveys and a case study. The respondents represent 70 organizations, multiple Information Systems, and both private and public sector organizations.

Abstract:

Control Theory has provided a useful theoretical foundation for examining the co-ordination between the client and the vendor in Information Systems development outsourcing (ISD-outsourcing). Recent research identified two control mechanisms: structural (the structure of the control mode) and process (the process through which the control mode is enacted). Yet, Control Theory research to date does not describe the ways in which the two control mechanisms can be combined to ensure project success. Grounded in case study data from eight ISD-outsourcing projects, we derive three ‘control configurations’: (i) aligned, (ii) negotiated, and (iii) self-managed, which describe the combinative patterns of structural and process control mechanisms within and across control modes.

Abstract:

The rapid development of the World Wide Web has created massive amounts of information, leading to the information overload problem. Under these circumstances, personalization techniques have been developed to help users find content that meets their personalized interests or needs within this massively increasing volume of information. User profiling techniques play the core role in this research. Traditionally, most user profiling techniques create user representations in a static way. However, in real-world applications user interests may change over time. In this research we develop algorithms for mining user interests by integrating time-decay mechanisms into topic-based user interest profiling. Time-forgetting functions are integrated into the calculation of topic interest measurements at a detailed level. The experimental study shows that accounting for the temporal effects of user interests by integrating time-forgetting mechanisms yields better recommendation performance.
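The abstract does not specify which forgetting function is used; exponential decay is a common choice for time-decayed interest profiling. The sketch below is a minimal illustration under that assumption, not the authors' algorithm: each logged interaction contributes to a topic's interest score with a weight that decays with the interaction's age. The half-life, topic labels and interaction log are hypothetical.

# Minimal sketch: exponential time decay applied to topic-level interest
# scores. The decay constant, topic labels and interaction log are
# illustrative assumptions.
import math
import time
from collections import defaultdict

HALF_LIFE_DAYS = 30.0                      # assumed half-life of an interest
LAMBDA = math.log(2) / (HALF_LIFE_DAYS * 86400)

def decayed_topic_interests(interactions, now=None):
    """interactions: iterable of (timestamp_seconds, topic, weight).
    Returns topic -> interest score, where older interactions count less:
    score += weight * exp(-LAMBDA * age)."""
    now = time.time() if now is None else now
    scores = defaultdict(float)
    for ts, topic, weight in interactions:
        age = max(0.0, now - ts)
        scores[topic] += weight * math.exp(-LAMBDA * age)
    return dict(scores)

# Example: a recent light 'sport' interaction outweighs an older,
# heavier 'politics' interaction once decay is applied.
now = time.time()
log = [
    (now - 90 * 86400, "politics", 3.0),   # three months ago
    (now - 1 * 86400, "sport", 1.0),       # yesterday
]
print(decayed_topic_interests(log, now=now))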

Abstract:

Big Data presents many challenges related to volume, whether one is interested in studying past datasets or, even more problematically, attempting to work with live streams of data. The most obvious challenge, in a ‘noisy’ environment such as contemporary social media, is to collect the pertinent information; be that information for a specific study, tweets which can inform emergency services or other responders to an ongoing crisis, or give an advantage to those involved in prediction markets. Often, such a process is iterative, with keywords and hashtags changing with the passage of time, and both collection and analytic methodologies need to be continually adapted to respond to this changing information. While many of the data sets collected and analyzed are pre-formed, that is, they are built around a particular keyword, hashtag, or set of authors, they still contain a large volume of information, much of which is unnecessary for the current purpose and/or potentially useful for future projects. Accordingly, this panel considers methods for separating and combining data to optimize big data research and report findings to stakeholders.

The first paper considers possible coding mechanisms for incoming tweets during a crisis, taking a large stream of incoming tweets and selecting which of those need to be immediately placed in front of responders, for manual filtering and possible action. The paper suggests two solutions for this: content analysis and user profiling. In the former case, aspects of the tweet are assigned a score to assess its likely relationship to the topic at hand and the urgency of the information, whilst the latter attempts to identify those users who are either serving as amplifiers of information or are known as an authoritative source. Through these techniques, the information contained in a large dataset could be filtered down to match the expected capacity of emergency responders, and knowledge as to the core keywords or hashtags relating to the current event is constantly refined for future data collection.

The second paper is also concerned with identifying significant tweets, but in this case tweets relevant to a particular prediction market: tennis betting. As increasing numbers of professional sports men and women create Twitter accounts to communicate with their fans, information is being shared regarding injuries, form and emotions which has the potential to impact on future results. As has already been demonstrated with leading US sports, such information is extremely valuable. Tennis, as with American Football (NFL) and Baseball (MLB), has paid subscription services which manually filter incoming news sources, including tweets, for information valuable to gamblers, gambling operators, and fantasy sports players. However, whilst such services are still niche operations, much of the value of information is lost by the time it reaches one of these services. The paper thus considers how information could be filtered from Twitter user lists and hashtag or keyword monitoring, assessing the value of the source, the information, and the prediction markets to which it may relate.

The third paper examines methods for collecting Twitter data and following changes in an ongoing, dynamic social movement, such as the Occupy Wall Street movement. It involves the development of technical infrastructure to collect and make the tweets available for exploration and analysis. A strategy to respond to changes in the social movement is also required, or the resulting tweets will only reflect the discussions and strategies the movement used at the time the keyword list was created; in a way, keyword creation is part strategy and part art. In this paper we describe strategies for the creation of a social media archive, specifically tweets related to the Occupy Wall Street movement, and methods for continuing to adapt data collection strategies as the movement’s presence on Twitter changes over time. We also discuss the opportunities and methods to extract smaller slices of data from an archive of social media data to support a multitude of research projects in multiple fields of study.

The common theme amongst these papers is that of constructing a data set, filtering it for a specific purpose, and then using the resulting information to aid in future data collection. The intention is that, through the papers presented and subsequent discussion, the panel will inform the wider research community not only on the objectives and limitations of data collection, live analytics, and filtering, but also on current and in-development methodologies that could be adopted by those working with such datasets, and how such approaches could be customized depending on the project stakeholders.
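The first paper's approach of scoring tweet content and urgency and boosting known authoritative accounts is described only at a high level. The Python sketch below is a minimal illustration of that idea, not the panel's system; the keyword lists, weights, trusted accounts and routing threshold are all hypothetical, and the tokenization is deliberately naive.

# Minimal sketch: score incoming tweets for crisis relevance using
# (a) content keywords, (b) urgency terms, and (c) a boost for known
# authoritative accounts. All keyword lists, weights and the threshold
# are hypothetical assumptions for illustration.
from dataclasses import dataclass

TOPIC_TERMS = {"flood": 2.0, "evacuate": 2.0, "sandbag": 1.5, "river": 1.0}
URGENCY_TERMS = {"now": 1.0, "urgent": 2.0, "help": 1.5, "trapped": 3.0}
AUTHORITATIVE_USERS = {"qldpolice", "bomqld"}      # assumed trusted accounts
THRESHOLD = 3.0                 # route to responders at or above this score

@dataclass
class Tweet:
    user: str
    text: str

def crisis_score(tweet: Tweet) -> float:
    words = tweet.text.lower().split()             # naive tokenization
    score = sum(TOPIC_TERMS.get(w, 0.0) for w in words)
    score += sum(URGENCY_TERMS.get(w, 0.0) for w in words)
    if tweet.user.lower() in AUTHORITATIVE_USERS:
        score *= 1.5                               # amplify known sources
    return score

def filter_for_responders(tweets):
    """Keep only tweets scoring at or above the capacity-matched threshold."""
    return [t for t in tweets if crisis_score(t) >= THRESHOLD]

stream = [
    Tweet("resident42", "flood water rising, trapped on roof, need help now"),
    Tweet("bomqld", "flood warning issued, evacuate low-lying areas"),
    Tweet("foodie", "great coffee this morning"),
]
for t in filter_for_responders(stream):
    print(round(crisis_score(t), 1), t.user, t.text)

In practice the keyword and user lists would themselves be refined iteratively as the event unfolds, matching the panel's point that collection strategies must adapt over time.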

Abstract:

OBJECTIVE: The study investigates the knowledge, intentions, and driving behavior of persons prescribed medications that display a warning about driving. It also examines their confidence that they can self-assess possible impairment, as is required by the Australian labeling system.

METHOD: We surveyed 358 outpatients in an Australian public hospital pharmacy, representing a well-advised group taking a range of medications including those displaying a warning label about driving. A brief telephone follow-up survey was conducted with a subgroup of the participants.

RESULTS: The sample had a median age of 53.2 years and was 53 percent male. Nearly three quarters (73.2%) had taken a potentially impairing class of medication and more than half (56.1%) had taken more than one such medication in the past 12 months. Knowledge of the potentially impairing effects of medication was relatively high for most items; however, participants underestimated the possibility of increased impairment from exceeding the prescribed dose and when commencing treatment. Participants' responses to the safety implications of taking drugs with the highest level of warning varied. Around two thirds (62.8%) indicated that they would consult a health practitioner for advice and around half would modify their driving in some way. However, one fifth (20.9%) would drive when the traffic was thought to be less heavy and over a third (37.7%) would modify their medication regime so that they could drive. The findings from the follow-up survey of a subsample taking target drugs at the time of the first interview were also of concern. Only just over half (51%) recalled seeing the warning label on their medications and, of this group, three quarters (78%) reported following the warning label advice. These findings indicate that there remains a large proportion of people who either did not notice or did not consider the warning when deciding whether to drive. There was a very high level of confidence in this group that they could determine whether they were personally affected by the medication, which may be a problem from a safety perspective.

CONCLUSION: This study involved persons who should have had a very high level of knowledge and awareness of medication warning labeling. Even in this group there was a lack of informed response to potential impairment. A review of the Australian warning system and wider dissemination of information on medication treatment effects would be useful. Clarifying the importance of potential risk in the general community context is recommended for consideration and further research.