231 results for collections access
Abstract:
The advantages of bundling e-journals together into publisher collections include increased access to information for the subscribing institution’s clients, cost-effective purchasing and streamlined workflows. Whilst cataloguing a consortial e-journal collection has its advantages, there are also various pitfalls, and the author outlines efforts by the CAUL (Council of Australian University Libraries) Consortium libraries, working in conjunction with major publishers, to further streamline this process. Despite the advantages that publisher collections provide, pressures to unbundle existing packages continue to build, fuelled by an ever-increasing selection of available electronic resources; decreases in, and competing demands upon, library budgets; the impact of currency fluctuations; and poor usage for an alarmingly high proportion of collection titles. Consortial perspectives on bundling and unbundling titles are discussed, including options for managing the addition of new titles to a bundle and why customising consortial collections does not currently work. Unbundling analyses carried out at Queensland University of Technology between 2006 and 2008, prior to the renewal of several major publisher collections, are presented as further case studies illustrating why the “big deal” continues to persist.
Abstract:
Access All was a performance produced following a three-month mentorship in web-based performance that I was commissioned to conduct for the performance company Igneous. This live, triple-site performance event for three performers in three remote venues was specifically designed for presentation at Access Grid Nodes – conference rooms located around the globe equipped with high-end, open source computer teleconferencing technology that allowed multiple nodes to cross-connect with each other. Whilst each room was set up somewhat differently, they all deployed the same basic infrastructure of multiple projectors, cameras and sound, as well as a reconfigurable floorspace. At that time these relatively formal setups imposed a clear series of limitations in terms of software capabilities and basic infrastructure, and so there was much interest in understanding how far their capabilities might be pushed.

Numerous performance experiments were undertaken between three Access Grid nodes at QUT Brisbane, VISLAB Sydney and the Manchester Supercomputing Centre, England, culminating in the public performance staged simultaneously between the sites, with local audiences at each venue and others online. Access All was devised in collaboration with interdisciplinary performance company Bonemap, Kelli Dipple (Interarts curator, Tate Modern, London) and Mike Stubbs, British curator and Director of FACT (Liverpool).

This period of research and development was instigated and shaped by a public lecture I had earlier delivered in Sydney for the ‘Global Access Grid Network, Super Computing Global Conference’, entitled 'Performance Practice across Electronic Networks'. The findings of this work went on to inform numerous future networked and performative works produced from 2002 onwards.
Abstract:
Search engines have forever changed the way people access and discover knowledge, allowing information about almost any subject to be quickly and easily retrieved within seconds. As more and more material becomes available electronically, the influence of search engines on our lives will continue to grow. This presents the problem of how to find what information is contained in each search engine, what bias a search engine may have, and how to select the best search engine for a particular information need. This research introduces a new method, search engine content analysis, to solve the above problem. Search engine content analysis is a new development of the traditional information retrieval field of collection selection, which deals with general information repositories. Current research in collection selection relies on full access to the collection or on estimates of the sizes of the collections. Collection descriptions are also often represented as term occurrence statistics. An automatic ontology learning method is developed for search engine content analysis, which trains an ontology with world knowledge of hundreds of different subjects in a multilevel taxonomy. This ontology is then mined to find important classification rules, and these rules are used to perform an extensive analysis of the content of the largest general-purpose Internet search engines in use today. Instead of representing collections as a set of terms, as commonly occurs in collection selection, they are represented as a set of subjects, leading to a more robust representation of information and a decrease in synonymy. The ontology-based method was compared with ReDDE (the Relevant Document Distribution Estimation method for resource selection), the current state-of-the-art collection selection method, which relies on collection size estimation, using the standard R-value metric, with encouraging results.
The method was also used to analyse the content of the most popular search engines in use today, including Google and Yahoo. In addition, several specialist search engines, such as PubMed and that of the U.S. Department of Agriculture, were analysed. In conclusion, this research shows that the ontology-based method mitigates the need for collection size estimation.
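As a rough illustration of the subject-based representation described in this abstract, the sketch below maps sampled documents onto a toy taxonomy and ranks collections by the weight they give to a query's subjects. The taxonomy, function names and scoring are illustrative assumptions only, not the thesis's actual ontology or algorithm.

```python
from collections import Counter

# Toy term-to-subject taxonomy (illustrative assumption, not the
# thesis's multilevel ontology of hundreds of subjects).
TAXONOMY = {
    "gene": "biology", "protein": "biology", "cell": "biology",
    "tensor": "mathematics", "matrix": "mathematics",
    "crop": "agriculture", "soil": "agriculture",
}

def subject_profile(sampled_docs):
    """Represent a collection as a distribution over subjects
    rather than raw term-occurrence statistics."""
    counts = Counter()
    for doc in sampled_docs:
        for term in doc.lower().split():
            subject = TAXONOMY.get(term)
            if subject:
                counts[subject] += 1
    total = sum(counts.values()) or 1
    return {s: c / total for s, c in counts.items()}

def rank_collections(query, profiles):
    """Score each collection by the weight its profile gives to
    the query's subjects, then rank best-first."""
    query_subjects = {TAXONOMY[t] for t in query.lower().split()
                      if t in TAXONOMY}
    scores = {name: sum(profile.get(s, 0.0) for s in query_subjects)
              for name, profile in profiles.items()}
    return sorted(scores, key=scores.get, reverse=True)

profiles = {
    "pubmed_like": subject_profile(["gene protein cell", "protein cell"]),
    "agri_like": subject_profile(["crop soil", "soil crop crop"]),
}
print(rank_collections("protein interactions", profiles))
# → ['pubmed_like', 'agri_like']
```

Because both collections are reduced to subjects, a query term never sampled from a collection (e.g. "interactions") still routes to the right repository via its recognised subject, which is the robustness-to-synonymy point the abstract makes.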
Abstract:
Wynne and Schaffer (2003) have highlighted both the strong growth of gambling activity in recent years, and the revenue streams this has generated for governments and communities. Gambling activities and the revenues derived from them have, unsurprisingly, therefore also been seen as a way in which to increase economic development in deprived areas (Jinkner-Lloyd, 1996). Consequently, according to Brown et al. (2003), gambling is now a large taxation revenue earner for many western governments, at both federal and state levels, worldwide (for example, the UK, the USA and Australia). In size and importance, the Australian gambling industry in particular has grown significantly over the last three decades, experiencing a fourfold increase in real gambling turnover. There are, however, also concerns expressed about gambling, and Electronic Gaming in particular, as illustrated in economic, social and ethical terms in Oddo (1997). There are also spatial aspects to understanding these issues. Marshall’s (1998) study, for example, highlights that benefits from gambling are more likely to accrue at the macro as opposed to the local level, because of centralised tax gathering and spending of tax revenues, whilst localities may suffer from displacement of activities with higher multipliers than the institutions with EGMs that replace them. This also highlights a regional context of costs, where benefits accrue to the centre but costs accrue to the regions and localities, as resources simultaneously leave those communities through both the gambling activities themselves (in the form of revenue for the EGM owners) and the government (through taxes).
Abstract:
The full economic, cultural and environmental value of information produced or funded by the public sector can be realised through enabling greater access to and reuse of the information. To do this effectively it is necessary to describe and implement a policy framework that supports greater access and reuse among a distributed, online network of information suppliers and users. The objective of this study was to identify materials dealing with policies, principles and practices relating to information access and reuse in Australia and in other key jurisdictions internationally. Open Access Policies, Practices and Licensing: A review of the literature in Australia and selected jurisdictions sets out the findings of an extensive review of published materials dealing with policies, practices and legal issues relating to information access and reuse, with a particular focus on materials generated, held or funded by public sector bodies. The report was produced as part of the work program of the project “Enabling Real-Time Information Access in Both Urban and Regional Areas”, established within the Cooperative Research Centre for Spatial Information (CRCSI).
Abstract:
Information and communications technologies globally are moving towards Service Oriented Architectures and Web Services. The healthcare environment is rapidly moving to the use of Service Oriented Architecture/Web Services systems interconnected via the global open Internet. Such moves present major challenges where these structures are not based on highly trusted operating systems. This paper argues the need for a radical rethink of access control in the contemporary healthcare environment, in light of modern information system structures, legislative and regulatory requirements, and security operation demands in Health Information Systems. The paper proposes Open and Trusted Health Information Systems (OTHIS) as a viable solution, including override capability, for the provision of appropriate levels of secure access control for the protection of sensitive health data.
Abstract:
In Orissa state, India, the DakNet system supports asynchronous Internet communication between an urban hub and rural nodes. DakNet is noteworthy in many respects, not least in how the system leverages existing transport infrastructure. Wi-Fi transceivers mounted on local buses send and receive user data from roadside kiosks, for later transfer to and from the Internet via wireless protocols. This store-and-forward system allows DakNet to offer asynchronous communication capacity to rural users at low cost. The original ambition of the DakNet system was to provide email and SMS facilities to rural communities. Our 2008 study of the communicative ecology surrounding the DakNet system revealed that this ambition has since evolved – in response to market demand – to the point that e-shopping (rather than email) has become the primary driver behind the DakNet offering.
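The store-and-forward mechanism the abstract describes can be sketched as a simple data-mule simulation: a bus picks up queued messages when it passes a kiosk and drops them off when it reaches the hub. The class names and message format here are illustrative assumptions, not DakNet's actual protocol.

```python
from collections import deque

class Node:
    """A roadside kiosk or urban hub holding queued messages."""
    def __init__(self, name):
        self.name = name
        self.outbox = deque()  # (destination_name, payload) pairs
        self.inbox = []

class BusMule:
    """Physically carries data between nodes, exchanging queued
    messages whenever it comes within Wi-Fi range of a node."""
    def __init__(self):
        self.carried = []

    def visit(self, node):
        # Drop off anything addressed to this node.
        remaining = []
        for dest, payload in self.carried:
            if dest == node.name:
                node.inbox.append(payload)
            else:
                remaining.append((dest, payload))
        self.carried = remaining
        # Pick up the node's outgoing messages for later delivery.
        while node.outbox:
            self.carried.append(node.outbox.popleft())

hub = Node("hub")
kiosk = Node("kiosk")
kiosk.outbox.append(("hub", "order: 2kg seed"))

bus = BusMule()
bus.visit(kiosk)  # bus passes the kiosk, collects the order
bus.visit(hub)    # later reaches the urban hub, delivers it
print(hub.inbox)  # → ['order: 2kg seed']
```

The asynchrony is the key design point: neither endpoint needs a live connection, only an eventual visit from the bus, which is what keeps the per-user cost low.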
Abstract:
Given the size and state of the Internet today, a good-quality approach to organizing this mass of information is of great importance. Clustering web pages into groups of similar documents is one such approach, but it relies heavily on good feature extraction and document representation, as well as a good clustering approach and algorithm. Because the changing nature of the Internet yields a dynamic dataset, an incremental approach is preferred. In this work we propose an enhanced incremental clustering algorithm to better organize, in an incremental fashion, the information available on the Internet. Experiments show that the enhanced algorithm outperforms the original histogram-based algorithm by up to 7.5%.
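A minimal sketch of threshold-based incremental clustering over word histograms, in the spirit of the approach described above: each arriving page joins its most similar cluster or starts a new one. The cosine similarity measure and threshold value are illustrative assumptions, not the paper's exact algorithm.

```python
from collections import Counter
import math

def cosine(a, b):
    """Cosine similarity between two word histograms."""
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class IncrementalClusterer:
    """Assigns each arriving document to the most similar cluster,
    or starts a new cluster when similarity is below a threshold."""
    def __init__(self, threshold=0.3):
        self.threshold = threshold
        self.clusters = []  # list of (histogram, [documents])

    def add(self, doc):
        hist = Counter(doc.lower().split())
        best, best_sim = None, 0.0
        for cluster in self.clusters:
            sim = cosine(hist, cluster[0])
            if sim > best_sim:
                best, best_sim = cluster, sim
        if best and best_sim >= self.threshold:
            best[0].update(hist)   # merge histograms incrementally
            best[1].append(doc)
        else:
            self.clusters.append((hist, [doc]))

c = IncrementalClusterer()
for page in ["python web clustering", "clustering web pages",
             "soccer world cup", "world cup final"]:
    c.add(page)
print(len(c.clusters))  # → 2
```

Because no pass over previously seen pages is ever needed, the cost of adding a document depends only on the number of clusters, which is what makes the approach suitable for a dynamic dataset.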
Abstract:
This chapter considers how open content licences of copyright-protected materials – specifically, Creative Commons (CC) licences – can be used by governments as a simple and effective mechanism to enable reuse of their public sector information (PSI), particularly where materials are made available in digital form online or distributed on disk.