931 results for User-centric API Framework
Abstract:
The Australian National Data Service (ANDS) was established in 2008 and aims to: influence national policy in the area of data management in the Australian research community; inform best practice for the curation of data; and transform the disparate collections of research data around Australia into a cohesive collection of research resources. One high-profile ANDS activity is to populate Research Data Australia, a set of web pages describing data collections produced by or relevant to Australian researchers. It is designed to promote the visibility of research data collections in search engines, in order to encourage their re-use. As part of activities associated with ANDS, an increasing number of Australian universities are choosing to implement VIVO, not as a platform to profile information about researchers, but as a 'metadata store' platform to profile information about institutional research data sets, both locally and as part of a national data commons. To date, the University of Melbourne, Griffith University, the Queensland University of Technology, and the University of Western Australia have all chosen to implement VIVO, with interest from other universities growing.
Abstract:
Griffith University is developing a digital repository system using HarvestRoad Hive software to better meet the needs of academics and students using institutional learning and teaching, course readings, and institutional intellectual capital systems. Issues with current operations and systems are discussed in terms of user behaviour. New repository systems are being designed in such a way that they address current service and user behaviour issues by closely aligning systems with user needs. By developing attractive online services, Griffith is working to change current user behaviour to achieve strategic priorities in the sharing and reuse of learning objects, improved selection and use of digitised course readings, the development of ePrint and eScience services, and the management of a research portfolio service.
Abstract:
Seven endemic governance problems are shown to be currently present in governments around the globe, at any level of government (for example, municipal or federal). These problems have their roots traced back through more than two thousand years of political, specifically 'democratic', history. The evidence shows that accountability, transparency, corruption, representation, campaigning methods, constitutionalism and long-term goals were problematic for the ancient Athenians as well as for modern international democratisation efforts encompassing every major global region. Why then, given the extended time period humans have had to deal with these problems, are they still present? At least part of the answer is that philosophers, academics, NGOs and MNOs have only approached these endemic problems in a piecemeal manner, with a skewed perspective on democracy. Their works have also been subject to the ebbs and flows of human history, which essentially started and stopped periods of thinking. In order to approach the investigation of endemic problems in relation to democracy (as the overall quest of this thesis was to generate prescriptive results for the improvement of democratic government), it was necessary to delineate what exactly is being written about when using the term 'democracy'. It is common knowledge that democracy has no one specific definition or practice, even though scholars and philosophers have been attempting to create a definition for generations. What is currently evident is that scholars are not approaching democracy in an overly simplified manner (that is, government for the people, by the people) but, rather, are seeking the commonalities that democracies share; in other words, those items which are common to all things democratic. Following that specific line of investigation, the major practiced and theoretical versions of democracy were thematically analysed.
After that, their themes were collapsed into larger categories, at which point the larger categories were comparatively analysed against the practiced and theoretical versions of democracy. Four democratic 'particles' (selecting officials, law, equality and communication) were seen to be present in all practiced and theoretical democratic styles. The democratic particles, fused with a unique investigative perspective and in-depth political study, created a solid conceptualisation of democracy. As such, it is argued that democracy is an ever-present element of any state government, 'democratic' or not, and the particles are the bodies which comprise the democratic element. Frequency- and proximity-based analyses showed that democratic particles are related to endemic problems in international democratisation discourse. The linkages between democratic particles and endemic problems were also evident during the thematic analysis as well as the historical review. This ultimately led to the viewpoint that if endemic problems are mitigated, this may improve democratic particles, which might in turn strengthen the element of democracy in the governing apparatus of any state. Such strengthening may actively minimise or wholly displace inefficient forms of government, leading to a government specifically tailored to the population it orders. Once the theoretical and empirical goals were attained, this thesis provided some prescriptive measures which government, civil society, academics, professionals and/or active citizens can use to mitigate endemic problems (in any country and at any level of government) so as to improve the human condition via better democratic government.
Abstract:
We describe research into the identification of anomalous events and event patterns as manifested in computer system logs. Prototype software has been developed that identifies anomalous events based on usage patterns or user profiles, and alerts administrators when such events are identified. To reduce the number of false positive alerts, we have investigated the use of different user profile training techniques and introduce the use of abstractions to group together applications which are related. Our results suggest that the number of false alerts generated is significantly reduced when a growing time window is used for user profile training and when abstraction into groups of applications is used.
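The profile-training and abstraction ideas above can be sketched in a few lines; the application groups, event format, and threshold here are illustrative assumptions, not the prototype's actual configuration:

```python
from collections import Counter

# Hypothetical abstraction map: concrete applications -> related groups.
# Grouping at this level is what reduces false positives for users who
# switch between similar tools.
ABSTRACTION = {
    "vim": "editor", "emacs": "editor",
    "gcc": "compiler", "javac": "compiler",
    "nc": "network", "nmap": "network",
}

def train_profile(events):
    """Build per-user profiles from (user, application) log events,
    counting usage at the application-group level."""
    profiles = {}
    for user, app in events:
        group = ABSTRACTION.get(app, app)
        profiles.setdefault(user, Counter())[group] += 1
    return profiles

def is_anomalous(profiles, user, app, min_count=1):
    """Flag an event whose application group falls below the user's
    observed usage threshold."""
    group = ABSTRACTION.get(app, app)
    return profiles.get(user, Counter())[group] < min_count

# Growing time window: retrain on all events observed so far, so the
# profile accumulates rather than forgetting earlier behaviour.
history = [("alice", "vim"), ("alice", "gcc"), ("alice", "emacs")]
profiles = train_profile(history)
```

Under this sketch, a first-ever network tool launch by `alice` would be flagged, while a new compiler would not, because her profile already contains the `compiler` group.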
Abstract:
With regard to the long-standing problem of the semantic gap between low-level image features and high-level human knowledge, the image retrieval community has recently shifted its emphasis from low-level feature analysis to high-level image semantics extraction. User studies reveal that users tend to seek information using high-level semantics. Therefore, image semantics extraction is of great importance to content-based image retrieval because it allows users to freely express what images they want. Semantic content annotation is the basis for semantic content retrieval. The aim of image annotation is to automatically obtain keywords that can be used to represent the content of images. The major research challenges in image semantic annotation are: what is the basic unit of semantic representation? How can the semantic unit be linked to high-level image knowledge? How can contextual information be stored and utilized for image annotation? In this thesis, Semantic Web technology (i.e. ontology) is introduced to the image semantic annotation problem. The Semantic Web, the next-generation web, aims at making the content of whatever type of media understandable not only to humans but also to machines. Due to the large amounts of multimedia data prevalent on the Web, researchers and industries are beginning to pay more attention to the Multimedia Semantic Web. Semantic Web technology provides a new opportunity for multimedia-based applications, but research in this area is still in its infancy. Whether ontology can be used to improve image annotation, and how best to use ontology in semantic representation and extraction, is still a worthwhile investigation. This thesis deals with the problem of image semantic annotation using ontology and machine learning techniques in four phases, as below. 1) Salient object extraction.
A salient object serves as the basic unit in image semantic extraction as it captures the common visual property of the objects. Image segmentation is often used as the first step for detecting salient objects, but most segmentation algorithms fail to generate meaningful regions due to over-segmentation and under-segmentation. We develop a new salient object detection algorithm by combining multiple homogeneity criteria in a region merging framework. 2) Ontology construction. Since real-world objects tend to exist in a context within their environment, contextual information has been increasingly used for improving object recognition. In the ontology construction phase, visual-contextual ontologies are built from a large set of fully segmented and annotated images. The ontologies are composed of several types of concepts (i.e. mid-level and high-level concepts) and domain contextual knowledge. The visual-contextual ontologies stand as a user-friendly interface between low-level features and high-level concepts. 3) Image object annotation. In this phase, each object is labelled with a mid-level concept from the ontologies. First, a set of candidate labels is obtained by training Support Vector Machines with features extracted from salient objects. After that, contextual knowledge contained in the ontologies is used to obtain the final labels by removing ambiguous concepts. 4) Scene semantic annotation. The scene semantic extraction phase obtains the scene type by using both mid-level concepts and domain contextual knowledge in the ontologies. Domain contextual knowledge is used to create a scene configuration that describes which objects co-exist with which scene type more frequently. The scene configuration is represented in a probabilistic graph model, and probabilistic inference is employed to calculate the scene type given an annotated image.
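The multi-criterion region merging of phase 1 can be sketched as follows; the particular homogeneity criteria (mean colour and a texture statistic), the region representation, and the tolerances are illustrative assumptions, not the thesis's actual algorithm:

```python
# Hypothetical region-merging step: merge two adjacent regions only
# when several homogeneity criteria all agree, to reduce the over- and
# under-segmentation of single-criterion methods.

def colour_similar(a, b, tol=15.0):
    return abs(a["mean_colour"] - b["mean_colour"]) <= tol

def texture_similar(a, b, tol=0.2):
    return abs(a["texture"] - b["texture"]) <= tol

CRITERIA = [colour_similar, texture_similar]

def merge(a, b):
    """Combine two regions, area-weighting their statistics."""
    total = a["area"] + b["area"]
    return {
        "area": total,
        "mean_colour": (a["mean_colour"] * a["area"]
                        + b["mean_colour"] * b["area"]) / total,
        "texture": (a["texture"] * a["area"]
                    + b["texture"] * b["area"]) / total,
    }

def try_merge(a, b):
    """Merge only if every homogeneity criterion agrees; otherwise
    keep the regions separate."""
    if all(criterion(a, b) for criterion in CRITERIA):
        return merge(a, b)
    return None
```

Requiring all criteria to agree is what makes the combined test stricter than any single criterion, which is one plausible way to curb under-segmentation.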
To evaluate the proposed methods, a series of experiments has been conducted on a large set of fully annotated outdoor scene images. These include a subset of the Corel database, a subset of the LabelMe dataset, the evaluation dataset of localized semantics in images, the spatial context evaluation dataset, and the segmented and annotated IAPR TC-12 benchmark.
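The phase-4 idea of choosing a scene type from object co-occurrence can be sketched with a simple count-based score; the object concepts, scene types, and counts below are invented for illustration, and the scoring is a stand-in for the thesis's probabilistic graph inference:

```python
from collections import Counter

# Hypothetical contextual knowledge: co-occurrence counts of
# (object concept, scene type), as might be learned from a set of
# fully annotated images.
COOCCURRENCE = Counter({
    ("sky", "beach"): 40, ("sand", "beach"): 35, ("water", "beach"): 30,
    ("sky", "mountain"): 25, ("rock", "mountain"): 30, ("snow", "mountain"): 20,
})
SCENES = ["beach", "mountain"]

def scene_type(object_labels):
    """Pick the scene type whose contextual configuration best
    explains the mid-level object labels of an annotated image."""
    scores = {
        scene: sum(COOCCURRENCE[(obj, scene)] for obj in object_labels)
        for scene in SCENES
    }
    return max(scores, key=scores.get)
```

Normalising these scores into probabilities and adding pairwise object dependencies would move this sketch closer to a genuine probabilistic graphical model.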
Abstract:
This paper joins growing interest in the concept of practice, and uses it to reconceptualise international student engagement with the demands of study at an Australian university. Practice foregrounds institutional structures and student agency and brings together psychologically- and socially-oriented perspectives on international student learning approaches. Utilising discourse theory, practice is defined as habitual and individual instances of socially-contextualised configurations of elements such as actions and interactions, roles and relations, identities, objects, values, and language. In the university context, academic practice highlights the institutionally-sanctioned ways of knowing, doing and being that constitute academic tasks. The concept is applied here to six international students’ ‘readings’ of and strategic responses to academic work in a Master of Education course. It is argued that academic practice provides a comprehensive framework for explaining the interface between university academic requirements and international student learning, and the crucial role that teaching has in facilitating the experience.
Abstract:
In 2009, the Commonwealth Government of Australia published the first national learning framework for use with children aged birth to five years. The framework marks a departure from tradition in that it emphasizes intentional teaching, learning as well as child development, a particular type of play-based learning, outcomes, and equity. This article analyzes aspects of the document that depart from well established approaches to early childhood education in Australia and identifies challenges for educators who are required to use the document. It concludes that ongoing and supportive professional learning opportunities must accompany the introduction and enactment of the document.
Abstract:
Project procurement is a 'great' environment for ethical issues, with its low-price state of mind and competition. It presents many opportunities that could contribute to illegal activities or unethical behavior, especially in the construction industry. In 2006 alone, 17.3% of 417 Malaysian government contract projects were considered 'sick' due to poor performance by the contractors. Therefore, it is important to govern project procurement, especially the plan procurement stage, to ensure the accountability and transparency of the decisions made in awarding the right contract to the best contractor. This is where a project governance framework (PGF) is really needed in project procurement planning. Project governance is a subset of corporate governance focusing on the areas of corporate governance related to project activities, including: portfolio direction, project sponsorship, project and program management efficiency, and disclosure and reporting. This paper highlights the importance of implementing a project governance framework (PGF) to ensure that decision makers are answerable and accountable to the stakeholders, and that decision making is transparent, so as to avoid any ethical issues arising. A comprehensive preliminary literature review is carried out to discover the importance of executing a PGF in project procurement in the Malaysian public sector. By understanding the importance of a PGF, it is hoped that this will send a signal to other developing countries to implement a similar method for ensuring the transparency of decision making in project procurement planning in their countries.
Abstract:
There is a lack of research identifying the role of the public-sector client in relation to ethical practice in plan procurement. This paper discusses a conceptual framework for ethical decision making in project procurement, focusing on public-sector clients within the Malaysian construction industry. A framework is proposed to ensure that effective ethical decision-making strategies are deployed and that plan procurement is carried out through a transparent process that public-sector clients are able to adopt. The conceptual framework incorporates various factors that contribute to ethical decision making at the early stage of procurement: it consists of the procurement system, individual factors, project characteristics, and organizational culture as internal factors, and the professional code of conduct and government policies as external factors. This framework rationalizes the relationships between systems, psychology and organizational theory to form an innovative understanding of ethical decision making in plan procurement. It is expected that this proposed framework will be useful as a foundation for identifying the factors that contribute to ethical decision making in the planning stage of the procurement process.
Abstract:
Purpose: In the global knowledge economy, investment in knowledge-intensive industries and information and communication technology (ICT) infrastructures is seen as a significant factor in improving the overall socio-economic fabric of cities. Consequently, knowledge-based urban development (KBUD) has become a new paradigm in urban planning and development for increasing the welfare and competitiveness of cities and regions. The paper discusses the critical connections between KBUD strategies and knowledge-intensive industries and ICT infrastructures. In particular, it investigates the application of the knowledge-based urban development concept by discussing one of South East Asia's large-scale manifestations of KBUD: Malaysia's Multimedia Super Corridor. ----- ----- Design/methodology/approach: The paper provides a review of the KBUD concept and develops a knowledge-based urban development assessment framework to provide a clearer understanding of the development and evolution of KBUD manifestations. Subsequently, the paper investigates the implementation of the KBUD concept within the Malaysian context, and particularly the Multimedia Super Corridor (MSC). ----- ----- Originality/value: The paper, with its KBUD assessment framework, scrutinises Malaysia's experience, providing an overview of the MSC project and a discussion of the case findings. The development and evolution of the MSC is viewed with regard to KBUD policy implementation, infrastructural implications, and the agencies involved in the development and management of the MSC. ----- ----- Practical implications: The emergence of the knowledge economy, together with the issues of globalisation and rapid urbanisation, has created an urgent need for urban planners to explore new ways of strategising planning and development that encompass the needs and requirements of the knowledge economy and society.
In light of the literature and MSC case findings, the paper provides generic recommendations, on the orchestration of knowledge-based urban development, for other cities and regions seeking to transform to the knowledge economy.
Abstract:
Special collections, because of the issues associated with conservation and use, a feature they share with archives, tend to be the most digitized areas in libraries. The Nineteenth Century Schoolbooks collection comprises rarely held nineteenth-century schoolbooks painstakingly collected over a lifetime of work by Prof. John A. Nietz and donated to the Hillman Library at the University of Pittsburgh in 1958; numbering 9,000 volumes at the time of donation, the collection has since grown to 15,000. About 140 of these texts are completely digitized and showcased on a publicly accessible website through the University of Pittsburgh's Library, along with a searchable bibliography of the entire collection, which has expanded awareness of the collection and its user base beyond the academic community. The URL for the website is http://digital.library.pitt.edu/nietz/. The collection is a rich resource for researchers studying the intellectual, educational, and textbook publishing history of the United States. In this study, we examined several existing records collected by the Digital Research Library at the University of Pittsburgh in order to determine the identity and searching behaviors of the users of this collection. The records examined include: 1) the results of a three-month user survey; 2) user access statistics, including search queries, for a period of one year beginning a year after the digitized collection became publicly available in 2001; and 3) e-mail input received by the website over the four years from 2000 to 2004. The results of the study demonstrate the differences in online retrieval strategies used by academic researchers and historians, archivists, avocationists, and the general public, and the importance of facilitating the discovery of digitized special collections through the use of electronic finding aids and an interactive interface with detailed metadata.
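Analysing retrieval strategies from a log of search queries, as the study above does, can be sketched by categorising each query and tabulating the results; the category names and patterns below are invented for illustration and are not the study's actual coding scheme:

```python
import re
from collections import Counter

# Hypothetical query categories, loosely suggestive of different user
# groups' retrieval strategies (author searches, subject searches,
# date-restricted searches).
PATTERNS = [
    ("author", re.compile(r"\b(by|author)\b", re.IGNORECASE)),
    ("subject", re.compile(r"\b(arithmetic|grammar|geography|reader)\b",
                           re.IGNORECASE)),
    ("date", re.compile(r"\b1[89]\d\d\b")),
]

def categorize(query):
    """Assign a query to the first matching category, else 'other'."""
    for name, pattern in PATTERNS:
        if pattern.search(query):
            return name
    return "other"

def summarize(queries):
    """Tabulate retrieval-strategy categories over a query log."""
    return Counter(categorize(q) for q in queries)
```

A real analysis would code queries against a richer scheme (and likely by hand for ambiguous cases), but a tabulation of this shape is what supports comparisons between user groups.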
Abstract:
This chapter sets out the debates about the changing role of audiences in relation to user-created content as they appear in New Media and Cultural Studies. The discussion moves beyond the simple dichotomies between active producers and passive audiences, and draws on empirical evidence, in order to examine those practices that are most ordinary and widespread. Building on the knowledge of television’s role in facilitating public life, and the everyday, affective practices through which it is experienced and used, I focus on the way in which YouTube operates as a site of community, creativity and cultural citizenship; and as an archive of popular cultural memory.
Abstract:
One of the major challenges in the design of social technologies is the evaluation of their qualities of use and how they are appropriated over time. While the field of HCI abounds in short-term exploratory design and studies of use, relatively little attention has focused on the continuous development of prototypes longitudinally and studies of their emergent use. We ground the exploration and analysis of use in the everyday world, embracing contingency and open-ended use, through the use of a continuously-available exploratory prototype. Through examining use longitudinally, clearer insight can be gained of realistic, non-novelty usage and appropriation into everyday use. This paper sketches out a framework for design that puts a premium on immediate use and evolving the design in response to use and user feedback. While such design practices with continuously developing systems are common in the design of social technologies, they are little documented. We describe our approach and reflect upon its key characteristics, based on our experiences from two case studies. We also present five major patterns of long-term usage which we found useful for design.
Abstract:
Process models are used by information professionals to convey semantics about the business operations in a real world domain intended to be supported by an information system. The understandability of these models is vital to them being used for information systems development. In this paper, we examine two factors that we predict will influence the understanding of a business process that novice developers obtain from a corresponding process model: the content presentation form chosen to articulate the business domain, and the user characteristics of the novice developers working with the model. Our experimental study provides evidence that novice developers obtain similar levels of understanding when confronted with an unfamiliar or a familiar process model. However, previous modeling experience, the use of English as a second language, and previous work experience in BPM are important influencing factors of model understanding. Our findings suggest that education and research in process modeling should increase the focus on human factors and how they relate to content and content presentation formats for different modeling tasks. We discuss implications for practice and research.
Abstract:
Effective strategies for the design of efficient and environmentally sensitive buildings require a close collaboration between architects and engineers in the design of the building shell and environmental control systems at the outset of projects. However, it is often not practical for engineers to be involved early on in the design process. It is therefore essential that architects be able to perform preliminary energy analyses to evaluate their proposed designs prior to the major building characteristics becoming fixed. Subsequently, a need exists for a simplified energy design tool for architects. This paper discusses the limitations of existing analysis software in supporting early design explorations and proposes a framework for the development of a tool that provides decision support by permitting architects to quickly assess the performance of design alternatives.