Abstract:
In recent years the Australian government has dedicated considerable project funds to establishing public Internet access points in rural and regional communities. Drawing on data from a major Australian study of the social and economic impact of new technologies on rural areas, this paper explores some of the difficulties rural communities have faced in setting up public access points and sustaining them beyond their project funding. Of particular concern is the way that economic sustainability has been positioned as a measure of the success of such ventures. Government funding has been allocated on the basis of these rural public access points becoming economically self-sustaining. This is problematic on a number of counts. It is therefore argued that these public access points should be reconceptualised as essential community infrastructure, like schools and libraries, rather than as potential economic enterprises. Author Keywords: Internet; Public access; Sustainability; Digital divide; Rural Australia
Abstract:
The advantages of bundling e-journals together into publisher collections include increased access to information for the subscribing institution’s clients, purchasing cost-effectiveness and streamlined workflows. Whilst cataloguing a consortial e-journal collection has its advantages, there are also various pitfalls, and the author outlines efforts by the CAUL (Council of Australian University Libraries) Consortium libraries to further streamline this process, working in conjunction with major publishers. Despite the advantages that publisher collections provide, pressure to unbundle existing packages continues to build, fuelled by an ever-increasing selection of available electronic resources; decreases in, and competing demands upon, library budgets; the impact of currency fluctuations; and poor usage of an alarmingly high proportion of collection titles. Consortial perspectives on bundling and unbundling titles are discussed, including options for managing the addition of new titles to the bundle and why customising consortial collections currently does not work. Unbundling analyses carried out at Queensland University of Technology between 2006 and 2008, prior to the renewal of several major publisher collections, are presented as further case studies illustrating why the “big deal” continues to persist.
Abstract:
Access All was a performance produced following a three-month mentorship in web-based performance that I was commissioned to conduct for the performance company Igneous. This live, triple-site performance event for three performers in three remote venues was specifically designed for presentation at Access Grid Nodes: conference rooms located around the globe, equipped with high-end, open-source computer teleconferencing technology that allowed multiple nodes to cross-connect with each other. Whilst each room was set up somewhat differently, they all deployed the same basic infrastructure of multiple projectors, cameras and sound, as well as a reconfigurable floorspace. At that time these relatively formal setups imposed a clear series of limitations in terms of software capabilities and basic infrastructure, so there was much interest in understanding how far their capabilities might be pushed.

Numerous performance experiments were undertaken between three Access Grid nodes at QUT Brisbane, VISLAB Sydney and the Manchester Supercomputing Centre, England, culminating in the public performance staged simultaneously between the sites, with local audiences at each venue and others online. Access All was devised in collaboration with the interdisciplinary performance company Bonemap, Kelli Dipple (Interarts curator, Tate Modern, London) and Mike Stubbs, British curator and Director of FACT (Liverpool).

This period of research and development was instigated and shaped by a public lecture I had earlier delivered in Sydney for the ‘Global Access Grid Network, Super Computing Global Conference’, entitled ‘Performance Practice across Electronic Networks’. The findings of this work went on to inform numerous networked and performative works produced from 2002 onwards.
Abstract:
Search engines have forever changed the way people access and discover knowledge, allowing information about almost any subject to be quickly and easily retrieved within seconds. As increasingly more material becomes available electronically, the influence of search engines on our lives will continue to grow. This presents the problem of how to determine what information is contained in each search engine, what bias a search engine may have, and how to select the best search engine for a particular information need. This research introduces a new method, search engine content analysis, to solve this problem. Search engine content analysis is a new development of the traditional information retrieval field of collection selection, which deals with general information repositories. Current research in collection selection relies on full access to the collection or on estimates of the collection's size, and collection descriptions are often represented as term occurrence statistics. An automatic ontology learning method is developed for search engine content analysis, which trains an ontology with world knowledge of hundreds of different subjects in a multilevel taxonomy. This ontology is then mined to find important classification rules, and these rules are used to perform an extensive analysis of the content of the largest general-purpose Internet search engines in use today. Instead of representing collections as a set of terms, as is common in collection selection, they are represented as a set of subjects, leading to a more robust representation of information and a decrease in synonymy. The ontology-based method was compared with ReDDE (Relevant Document Distribution Estimation, the current state-of-the-art collection selection method, which relies on collection size estimation) using the standard R-value metric, with encouraging results.
The method was also used to analyse the content of the most popular search engines in use today, including Google and Yahoo. In addition, several specialist search engines, such as PubMed and that of the U.S. Department of Agriculture, were analysed. In conclusion, this research shows that the ontology-based method mitigates the need for collection size estimation.
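The core idea described in this abstract, representing each collection (search engine) as a distribution over taxonomy subjects rather than raw terms, and ranking collections for a query by the subject mass they hold, can be sketched minimally as follows. This is an illustrative sketch only: the collection names, subject counts and scoring function are hypothetical stand-ins, not the thesis's actual ontology, classification rules or ReDDE comparison.

```python
from collections import Counter

# Hypothetical subject assignments sampled from two collections. In the
# described method these counts would come from classifying sampled
# documents against a multilevel subject taxonomy; here they are invented.
collection_subjects = {
    "engine_a": Counter({"medicine": 50, "biology": 30, "agriculture": 5}),
    "engine_b": Counter({"agriculture": 60, "biology": 20, "medicine": 10}),
}

def subject_profile(counts: Counter) -> dict:
    """Normalise raw subject counts into a probability distribution."""
    total = sum(counts.values())
    return {subject: n / total for subject, n in counts.items()}

def rank_collections(query_subjects: list) -> list:
    """Score each collection by the probability mass its subject profile
    assigns to the query's subjects, then rank from best to worst match."""
    scores = {}
    for name, counts in collection_subjects.items():
        profile = subject_profile(counts)
        scores[name] = sum(profile.get(s, 0.0) for s in query_subjects)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(rank_collections(["medicine"]))
```

Because scores are computed from the collections' own subject profiles, no estimate of collection size is needed to rank them, which is the property the abstract highlights over size-based methods such as ReDDE.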
Abstract:
Wynne and Schaffer (2003) have highlighted both the strong growth of gambling activity in recent years and the revenue streams this has generated for governments and communities. Gambling activities, and the revenues derived from them, have therefore unsurprisingly also been seen as a way to increase economic development in deprived areas (Jinkner-Lloyd, 1996). Consequently, according to Brown et al. (2003), gambling is now a large taxation revenue earner for many western governments, at both federal and state levels, worldwide (for example the UK, USA and Australia). In size and importance, the Australian gambling industry in particular has grown significantly over the last three decades, experiencing a fourfold increase in real gambling turnover. There are, however, also concerns expressed about gambling, and electronic gaming in particular, as illustrated in economic, social and ethical terms in Oddo (1997). There are also spatial aspects to understanding these issues. Marshall’s (1998) study, for example, highlights that benefits from gambling are more likely to accrue at the macro rather than the local level, because of centralised tax gathering and spending of tax revenues, whilst localities may suffer from the displacement of activities with higher multipliers than the institutions with EGMs that replace them. This also highlights a regional context of costs: benefits accrue to the centre while costs accrue to the regions and localities, as resources simultaneously leave those communities through both the gambling activities themselves (as revenue for the EGM owners) and government taxation.