90 results for Metadata schema
Abstract:
This grounded theory study examined the practices of twenty-one Australian early childhood teachers who work with children experiencing parental separation and divorce. Findings showed that teachers constructed personalised support for these children. Teachers’ pedagogical decision-making processes had five phases: constructing their knowledge, applying their knowledge, applying decision-making schema, taking action, and monitoring and evaluating action. This study contributes new understandings about teachers’ work with young children experiencing parental separation and divorce, and extends existing theoretical frameworks related to the provision of support. It adds to scholarship by applying grounded theory methodology in a new context. Recommendations are made for school policies and procedures within and across schools and school systems.
Abstract:
Queensland University of Technology (QUT) Library offers a range of resources and services to researchers as part of its research support portfolio. This poster will present key features of two of the data management services offered by research support staff at QUT Library. The first service is QUT Research Data Finder (RDF), a product of the Australian National Data Service (ANDS) funded Metadata Stores project. RDF is a data registry (metadata repository) that aims to publicise datasets that are research outputs arising from completed QUT research projects. The second is a software and code registry, currently under development with the sole purpose of improving discovery of source code and software as QUT research outputs. RESEARCH DATA FINDER As an integrated metadata repository, Research Data Finder aligns with institutional sources of truth, such as QUT’s research administration system, ResearchMaster, as well as QUT’s Academic Profiles system, to provide high-quality data descriptions that increase awareness of, and access to, shareable research data. The repository and its workflows are designed to foster better data management practices, enhance opportunities for collaboration and research, promote cross-disciplinary research and maximise the impact of existing research datasets. SOFTWARE AND CODE REGISTRY The QUT Library software and code registry project stems from concerns among researchers regarding development activities, storage, accessibility, discoverability and impact, sharing, copyright and IP ownership of software and code. As a result, the Library is developing a registry for code and software research outputs, which will use the existing Research Data Finder architecture. The underpinning software for both registries is VIVO, open source software developed by Cornell University.
The registry will use the Research Data Finder service instance of VIVO and will include a searchable interface, links to code/software locations and metadata feeds to Research Data Australia. Key benefits of the project include: improving the discoverability and reuse of QUT researchers’ code and software amongst QUT and the QUT research community; increasing the profile of QUT research outputs on a national level by providing a metadata feed to Research Data Australia; and improving the metrics for access and reuse of code and software in the repository.
Abstract:
Tags, or personal metadata for annotating web resources, have been widely adopted in Web 2.0 sites. However, as tags are freely chosen by users, the vocabularies are diverse, ambiguous and sometimes meaningful only to individuals. Tag recommenders may assist users during the tagging process: their objective is to suggest relevant tags to use, as well as to help consolidate the vocabulary in the system. In this paper we discuss our approach for providing personalised tag recommendation by making use of an existing domain ontology generated from a folksonomy. Specifically, we evaluated the approach in the sparse situation. The evaluation shows that the proposed ontology-based method improved the accuracy of tag recommendation in this situation.
Abstract:
Tag recommendation is a specific recommendation task: recommending metadata (tags) for a web resource (item) during the user annotation process. In this context, the sparsity problem refers to situations where tags need to be produced for items with few annotations, or for users who tag few items. Most state-of-the-art approaches to tag recommendation are rarely evaluated in, or perform poorly under, this situation. This paper presents a combined method for mitigating the sparsity problem in tag recommendation, mainly by expanding and ranking candidate tags based on similar items’ tags and an existing tag ontology. We evaluated the approach on two public social bookmarking datasets. The experimental results show better accuracy for recommendation in sparsity situations than several state-of-the-art methods.
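The expand-and-rank idea described in this abstract can be sketched roughly as follows. This is an illustrative toy, not the paper's method: the item tag profiles, the Jaccard item similarity, the ontology table and the 0.5 discount for ontology-expanded tags are all assumptions.

```python
# Toy sketch: expand candidate tags from similar items' tags, then expand
# further via a tag ontology, and rank candidates by accumulated similarity.

def jaccard(a, b):
    """Set-overlap similarity between two tag sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend_tags(target_tags, item_tags, ontology, top_k=3):
    """Score candidate tags from items similar to the target item.

    Tags of similar items are weighted by item similarity; tags reached
    through the ontology get a (hypothetical) 0.5 discount.
    """
    scores = {}
    for tags in item_tags.values():
        sim = jaccard(target_tags, tags)
        if sim == 0:
            continue
        for t in tags - target_tags:
            scores[t] = scores.get(t, 0.0) + sim              # from similar items
            for rel in ontology.get(t, ()):                   # ontology expansion
                if rel not in target_tags:
                    scores[rel] = scores.get(rel, 0.0) + 0.5 * sim
    ranked = sorted(scores.items(), key=lambda kv: -kv[1])
    return [t for t, _ in ranked[:top_k]]

# Hypothetical bookmarking data: a sparsely annotated target with one tag.
items = {
    "i1": {"python", "web", "flask"},
    "i2": {"python", "scraping"},
    "i3": {"cooking"},
}
ontology = {"flask": ["wsgi"], "scraping": ["html"]}

print(recommend_tags({"python"}, items, ontology))
```

Even with a single seed tag, the ontology lets the recommender surface related vocabulary ("wsgi", "html") that never co-occurred with the target item, which is the point of the expansion step under sparsity.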
Abstract:
Democratic governments raise taxes and charges and spend the revenue on delivering peace, order and good government. The delivery process begins with a legislature, which can provide a framework of legally enforceable rules enacted according to the government’s constitution. These rules confer rights and obligations that allow particular people to carry on particular functions at particular places and times. Metadata standards as applied to public records contain information about the functioning of government as distinct from the non-government sector of society. Metadata standards apply to database construction: data entry, storage, maintenance, interrogation and retrieval depend on a controlled vocabulary needed to enable accurate retrieval of suitably catalogued records in a global information environment. Queensland’s socioeconomic progress now depends in part on technical efficiency in database construction to address queries about who does what, where and when; under what legally enforceable authority; and how the evidence of those facts is recorded. The Survey and Mapping Infrastructure Act 2003 (Qld) addresses technical aspects of the ‘where’ questions – typically the officially recognised name of a place and a description of its boundaries. The current 10-year review of the Survey and Mapping Regulation 2004 provides a valuable opportunity to consider whether the Regulation makes sense in the context of a number of later laws concerned with the management of Public Sector Information (PSI), as well as policies for ICT hardware and software procurement. Removing ambiguities about how official place names are to be regarded on a whole-of-government basis can achieve some short-term goals. Longer-term goals depend on a more holistic approach to information management – and current aspirations for more open government and community engagement are unlikely to be realised without such a longer-term vision.
Abstract:
Certain autistic children whose linguistic ability is virtually nonexistent can draw natural scenes from memory with astonishing accuracy. In particular their drawings display convincing perspective. In contrast, normal children of the same preschool age group and even untrained adults draw primitive schematics or symbols of objects which they can verbally identify. These are usually conceptual outlines devoid of detail. It is argued that the difference between autistic child artists and normal individuals is that autistic artists make no assumptions about what is to be seen in their environment. They have not formed mental representations of what is significant and consequently perceive all details as equally important. Equivalently, they do not impose visual or linguistic schema -- a process necessary for rapid conceptualisation in a dynamic existence, especially when the information presented to the eye is incomplete.
Abstract:
The use of online tools to support teaching and learning is now commonplace within educational institutions, with many of these institutions mandating or strongly encouraging a blended learning approach to teaching and learning. Consequently, these institutions generally adopt a learning management system (LMS), with a fixed set of collaborative tools, in the belief that effective teaching and learning approaches will be used to allow students to build knowledge. While some studies into the use of LMSs still identify continued didactic approaches to teaching and learning, the focus of this paper is on the ability of collaborative tools, such as discussion forums, to build knowledge. In the context of science education, argumentation is touted as playing an important role in this process of knowledge building. However, there is limited research into argumentation in other domains using online discussion and a blended learning approach. This paper describes a study, using design research, which adapts a framework for argumentation that can be applied to other domains. In particular, it focuses on an adapted social argumentation schema to identify argument in a discussion forum of N=16 participants in a secondary high school.
Abstract:
We conducted on-road and simulator studies to explore the mechanisms underpinning driver-rider crashes. In Study 1, the verbal protocols of 40 drivers and riders were assessed at intersections as part of a 15 km on-road route in Melbourne. Network analysis of the verbal transcripts highlighted key differences in the situation awareness of drivers and riders at intersections. In a further study using a driving simulator, we examined the influence on car drivers of acute exposure to motorcyclists. In a 15 min simulated drive, 40 drivers saw either no motorcycles or a high number of motorcycles in the surrounding traffic. In a subsequent 45-60 min drive, drivers were asked to detect motorcycles in traffic. The proportion of motorcycles was manipulated so that there was either a high (120) or low (6) number of motorcycles during the drive. Drivers exposed to a high number of motorcycles were significantly faster at detecting them. Fundamentally, the incompatible situation awareness of drivers and riders at intersections underpins these conflicts. Study 2 suggests a possible countermeasure here, although more research on schema and exposure training to support safer interactions is needed.
Abstract:
The purpose of this study is to elaborate shared schema change theory in the context of the radical restructuring and commercialization of an Australian public infrastructure organization. Commercialization of the case organization imposed high individual and collective cognitive processing and emotional demands as organizational members sought to develop new shared schema. Existing schema change research suggests that radical restructuring renders pre-existing shared schema irrelevant and triggers new schema development through experiential learning (Balogun and Johnson, 2004). Focus groups and semi-structured interviews were conducted at four points over a three-year period. The analysis revealed that shared schema change occurred in three broad phases: (1) radical restructuring and aftermath; (2) new CEO and new change process schema; and (3) large-group meeting and schema change. Key findings include: (1) radical structural change does not necessarily trigger new shared schema development as indicated in prior research; (2) leadership matters, particularly in framing new means-ends schema; (3) how change leader interventions are sequenced has an important influence on shared schema change; and (4) the creation of facilitated social processes has an important influence on shared schema change.
Abstract:
The continuous growth of XML data poses a great concern in the area of XML data management. The need to process large amounts of XML data complicates many applications, such as information retrieval and data integration. One way of simplifying this problem is to break the massive amount of data into smaller groups by applying clustering techniques. However, XML clustering is an intricate task that may involve processing both the structure and the content of XML data in order to identify similar XML data. This research presents four clustering methods: two utilizing the structure of XML documents, and two utilizing both the structure and the content. The two structural clustering methods have different data models: one is based on a path model and the other on a tree model. These methods employ rigid similarity measures which aim to identify corresponding elements between documents with different or similar underlying structures. The two clustering methods that utilize both structural and content information vary in how the structure and content similarities are combined. One calculates document similarity using a linear weighting combination of structure and content similarities; the content similarity in this method is based on a semantic kernel. The other calculates the distance between documents by a non-linear combination of the structure and content of XML documents using a semantic kernel. Empirical analysis shows that the structure-only clustering method based on the tree model is more scalable than the one based on the path model, as the tree similarity measure does not need to visit the parents of an element many times. Experimental results also show that the clustering methods perform better with the inclusion of content information on most test document collections.
To further the research, the structural clustering method based on the tree model is extended and employed in XML transformation. The results from the experiments show that the proposed transformation process is faster than the traditional transformation system, which translates and converts the source XML documents sequentially. Also, the schema matching process of XML transformation produces a better matching result in a shorter time.
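The linear weighting combination of structure and content similarity mentioned in this abstract can be sketched as follows. This is an illustrative toy under stated assumptions: the path-set structure model, the bag-of-words content similarity, the alpha value and the sample documents are all placeholders (the thesis itself uses semantic kernels for content, which are not reproduced here).

```python
# Toy sketch: combined XML document similarity as
#   sim = alpha * structural_similarity + (1 - alpha) * content_similarity

def path_set(node, prefix=""):
    """Structure representation: the set of root-to-element paths."""
    path = prefix + "/" + node["tag"]
    paths = {path}
    for child in node.get("children", []):
        paths |= path_set(child, path)
    return paths

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def combined_similarity(d1, d2, alpha=0.5):
    """Linear weighting combination of structure and content similarity."""
    structural = jaccard(path_set(d1["tree"]), path_set(d2["tree"]))
    content = jaccard(set(d1["text"].split()), set(d2["text"].split()))
    return alpha * structural + (1 - alpha) * content

# Hypothetical documents: trees as nested dicts, content as plain text.
a = {"tree": {"tag": "article", "children": [{"tag": "title"}, {"tag": "body"}]},
     "text": "xml clustering methods"}
b = {"tree": {"tag": "article", "children": [{"tag": "title"}]},
     "text": "xml clustering survey"}

print(round(combined_similarity(a, b), 3))  # → 0.583
```

With alpha close to 1 the measure behaves like the structure-only methods; with alpha close to 0 it reduces to pure content similarity, which is the trade-off the linear combination exposes.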
Abstract:
This chapter focuses on ‘intergenerational collaborative drawing’, a particular process of drawing whereby adults and children draw at the same time on a blank paper space. Such drawings can be produced for a range of purposes, and based on different curriculum or stimulus subjects. Children of all ages, and with a range of physical and intellectual abilities, are able to draw with parents, carers and teachers. Intergenerational collaborative drawing is a highly potent method for drawing in early childhood contexts because it brings adults and children together in the process of thinking and theorizing in order to create visual imagery, and this exposes to adults and children, in deep ways, the ideas and concepts being learned about. For adults, this exposure to a child’s thinking is a far more effective assessment tool than being presented with a finished drawing they know little about. This chapter examines drawings to explore wider issues of learning independence, and how, in drawing, preferred schema in the form of hand-out worksheets, the suggestive drawings provided by adults, and visual material seen in everyday life all serve to co-opt a young child into making particular schematic choices. I suggest that intergenerational collaborative drawing therefore works as a small act of resistance to that co-opting, in that it helps adults and children to collectively challenge popular creativity and learning discourses.
Abstract:
The analysis of content and metadata has long been the subject of most Twitter studies; however, such research tells only part of the story of the development of Twitter as a platform. In this work, we introduce a methodology for determining the growth patterns of individual users of the platform, a technique we refer to as follower accession, and through a number of case studies consider the factors which lead to follower growth and the identification of non-authentic followers. Finally, we consider what such an approach tells us about the history of the platform itself, and the way in which changes to the new-user signup process have impacted users.
Abstract:
Background: Cancer monitoring and prevention rely on the timely notification of cancer cases. However, abstracting and classifying cancer from the free text of pathology reports and other relevant documents, such as death certificates, are complex and time-consuming activities. Aims: In this paper, approaches for the automatic detection of notifiable cancer cases as the cause of death from free-text death certificates supplied to Cancer Registries are investigated. Method: A number of machine learning classifiers were studied. Features were extracted using natural language processing techniques and the Medtex toolkit. The features encompassed stemmed words, bi-grams, and concepts from the SNOMED CT medical terminology. The baseline was a keyword spotter using keywords extracted from the long descriptions of ICD-10 cancer-related codes. Results: Death certificates with notifiable cancer listed as the cause of death can be effectively identified with the methods studied in this paper. A Support Vector Machine (SVM) classifier achieved the best performance, with an overall F-measure of 0.9866 when evaluated on a set of 5,000 free-text death certificates using the token stem feature set. The SNOMED CT concept plus token stem feature set reached the lowest variance (0.0032) and false negative rate (0.0297) while achieving an F-measure of 0.9864. The SVM classifier accounts for the first 18 of the top 40 evaluated runs, and was the most robust classifier, with a variance of 0.001141, half that of the other classifiers. Conclusion: The selection of features had the greatest influence on classifier performance, although the type of classifier employed also affects performance. In contrast, the feature weighting schema had a negligible effect on performance. Specifically, stemmed tokens, with or without SNOMED CT concepts, form the most effective feature set when combined with an SVM classifier.
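The keyword-spotter baseline this abstract describes can be sketched in a few lines: flag a death certificate as a notifiable cancer case if any stemmed token hits a cancer keyword list. The keyword list and the crude suffix-stripping stemmer below are illustrative assumptions, not the ICD-10-derived keyword list or the Medtex pipeline the paper actually uses.

```python
# Toy sketch of a keyword-spotter baseline over stemmed tokens.
import re

# Hypothetical keyword stems; the paper extracts its list from the long
# descriptions of ICD-10 cancer-related codes.
CANCER_STEMS = {"carcinoma", "neoplasm", "melanoma", "leukaemia", "lymphoma"}

def stem(token):
    """Very crude suffix stripping, standing in for a real stemmer."""
    for suffix in ("s", "es"):
        if token.endswith(suffix) and len(token) > len(suffix) + 3:
            return token[: -len(suffix)]
    return token

def is_notifiable(certificate_text):
    """True if any stemmed token of the free text matches a cancer keyword."""
    tokens = re.findall(r"[a-z]+", certificate_text.lower())
    return any(stem(t) in CANCER_STEMS for t in tokens)

print(is_notifiable("Metastatic carcinomas of the lung"))  # → True
print(is_notifiable("Acute myocardial infarction"))        # → False
```

The paper's finding is that an SVM over the same stemmed-token features substantially outperforms this kind of baseline, since the learned classifier weighs token evidence rather than matching any single keyword.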
Abstract:
It is well established that there are inherent difficulties involved in communicating across cultural boundaries. When these difficulties are encountered within the justice system, the innocent can be convicted and witnesses undermined. A large amount of research has been undertaken regarding the implications of miscommunication within the courtroom, but far less has been carried out on language and interactions between police and Indigenous Australians. It is necessary that officers of the law be made aware of linguistic issues to ensure they conduct their investigations in a fair, effective and therefore ethical manner. This paper draws on Cultural Schema Theory to illustrate how this could be achieved. The justice system is reliant upon the skills and knowledge of the police; therefore, this paper highlights the need for research to focus on the linguistic and non-verbal differences between Australian Aboriginal English and Australian Standard English in order to develop techniques to facilitate effective communication.