50 results for schema
Abstract:
Extracting and aggregating the relevant event records relating to an identified security incident from the multitude of heterogeneous logs in an enterprise network is a difficult challenge. Presenting the information in a meaningful way is an additional challenge. This paper looks at solutions to this problem by first identifying three main transforms: log collection, correlation, and visual transformation. Having identified that the CEE project will address the first transform, this paper focuses on the second, while the third is left for future work. To aggregate event records by correlation, we demonstrate the use of two correlation methods, simple and composite. These make use of a defined mapping schema and confidence values to dynamically query the normalised dataset and to constrain result events to within a time window. Doing so improves the quality of results, which is required for the iterative re-querying process being undertaken. Final results of the process are output as nodes and edges suitable for presentation as a network graph.
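To make the correlation step concrete, the following is a minimal sketch of simple correlation under assumptions not taken from the paper: events are normalised tuples of (timestamp, host, user, event_type), a fixed confidence value is attached to each edge, and matches are constrained to a time window before being emitted as nodes and edges for a network graph. All field names and thresholds here are illustrative.

from datetime import datetime, timedelta

# Hypothetical normalised event records: (timestamp, host, user, event_type).
events = [
    (datetime(2024, 1, 1, 10, 0, 5), "ws-12", "alice", "login_failure"),
    (datetime(2024, 1, 1, 10, 0, 40), "ws-12", "alice", "login_success"),
    (datetime(2024, 1, 1, 11, 30, 0), "srv-3", "bob", "file_access"),
]

def simple_correlate(seed, records, window=timedelta(minutes=5), confidence=0.8):
    """Link the seed event to records that share a host or user and fall
    inside the time window; return (nodes, edges) for a network graph."""
    nodes, edges = [seed], []
    for ev in records:
        if ev is seed:
            continue
        if abs((ev[0] - seed[0]).total_seconds()) > window.total_seconds():
            continue  # outside the time window, discard
        if ev[1] == seed[1] or ev[2] == seed[2]:  # shared host or shared user
            nodes.append(ev)
            edges.append((seed, ev, confidence))
    return nodes, edges

nodes, edges = simple_correlate(events[0], events)
print(len(nodes), len(edges))  # 2 nodes, 1 edge: the two ws-12/alice events correlate

A composite method would chain such queries, re-querying from each newly linked event; this sketch shows only a single pass.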
Abstract:
Video presented as part of the Smart Services CRC Participants conferences. This video is a demonstration of a 3D visualisation of a running workflow in YAWL connected by a custom service to Second Life. The avatar, Clik, is controlled by the workflow tool YAWL as it traverses the workflow schema, illustrating the process of film preproduction and shooting. The video was captured while the workflow tool was running: no human controlled the avatar during the video. It is all scripted from an external source on the Internet. See www.bpmve.org for more on this work.
Abstract:
This grounded theory study examined the practices of twenty-one Australian early childhood teachers who work with children experiencing parental separation and divorce. Findings showed that teachers constructed personalised support for these children. Teachers’ pedagogical decision-making processes had five phases: constructing their knowledge, applying their knowledge, applying decision-making schema, taking action, and monitoring and evaluating action. This study contributes new understandings about teachers’ work with young children experiencing parental separation and divorce, and extends existing theoretical frameworks related to the provision of support. It adds to scholarship by applying grounded theory methodology in a new context. Recommendations are made for school policies and procedures within and across schools and school systems.
Abstract:
Certain autistic children whose linguistic ability is virtually nonexistent can draw natural scenes from memory with astonishing accuracy. In particular their drawings display convincing perspective. In contrast, normal children of the same preschool age group and even untrained adults draw primitive schematics or symbols of objects which they can verbally identify. These are usually conceptual outlines devoid of detail. It is argued that the difference between autistic child artists and normal individuals is that autistic artists make no assumptions about what is to be seen in their environment. They have not formed mental representations of what is significant and consequently perceive all details as equally important. Equivalently, they do not impose visual or linguistic schema -- a process necessary for rapid conceptualisation in a dynamic existence, especially when the information presented to the eye is incomplete.
Abstract:
The use of online tools to support teaching and learning is now commonplace within educational institutions, with many of these institutions mandating or strongly encouraging the use of a blended learning approach to teaching and learning. Consequently, these institutions generally adopt a learning management system (LMS), with a fixed set of collaborative tools, in the belief that effective teaching and learning approaches will be used to allow students to build knowledge. While some studies into the use of LMSs still identify continued didactic approaches to teaching and learning, the focus of this paper is on the ability of collaborative tools, such as discussion forums, to build knowledge. In the context of science education, argumentation is touted as playing an important role in this process of knowledge building. However, there is limited research into argumentation in other domains using online discussion and a blended learning approach. This paper describes a study, using design research, which adapts a framework for argumentation that can be applied to other domains. In particular, it focuses on an adapted social argumentation schema to identify argument in a discussion forum of N=16 participants in a secondary high school.
Abstract:
We conducted on-road and simulator studies to explore the mechanisms underpinning driver-rider crashes. In Study 1, the verbal protocols of 40 drivers and riders were assessed at intersections as part of a 15 km on-road route in Melbourne. Network analysis of the verbal transcripts highlighted key differences in the situation awareness of drivers and riders at intersections. In a further study using a driving simulator, we examined the influence of acute exposure to motorcyclists on car drivers. In a 15 min simulated drive, 40 drivers saw either no motorcycles or a high number of motorcycles in the surrounding traffic. In a subsequent 45-60 min drive, drivers were asked to detect motorcycles in traffic. The proportion of motorcycles was manipulated so that there was either a high (120) or low (6) number of motorcycles during the drive. Drivers exposed to a high number of motorcycles were significantly faster at detecting motorcycles. Fundamentally, the incompatible situation awareness of drivers and riders at intersections underpins these conflicts. Study 2 offers some suggestion for a countermeasure, although more research on schema and exposure training to support safer interactions is needed.
Abstract:
The purpose of this study is to elaborate shared schema change theory in the context of the radical restructuring-commercialization of an Australian public infrastructure organization. Commercialization of the case organization imposed high individual and collective cognitive processing and emotional demands as organizational members sought to develop new shared schema. Existing schema change research suggests that radical restructuring renders pre-existing shared schema irrelevant and triggers new schema development through experiential learning (Balogun and Johnson, 2004). Focus groups and semi-structured interviews were conducted at four points over a three-year period. The analysis revealed that shared schema change occurred in three broad phases: (1) radical restructuring and aftermath; (2) new CEO and new change process schema; and (3) large-group meeting and schema change. Key findings include: (1) radical structural change does not necessarily trigger new shared schema development as indicated in prior research; (2) leadership matters, particularly in framing new means-ends schema; (3) how change leader interventions are sequenced has an important influence on shared schema change; and (4) the creation of facilitated social processes has an important influence on shared schema change.
Abstract:
The continuous growth of XML data poses a great concern in the area of XML data management. The need for processing large amounts of XML data brings complications to many applications, such as information retrieval, data integration and many others. One way of simplifying this problem is to break the massive amount of data into smaller groups by application of clustering techniques. However, XML clustering is an intricate task that may involve the processing of both the structure and the content of XML data in order to identify similar XML data. This research presents four clustering methods: two methods utilizing the structure of XML documents and the other two utilizing both the structure and the content. The two structural clustering methods have different data models: one is based on a path model and the other on a tree model. These methods employ rigid similarity measures which aim to identify corresponding elements between documents with different or similar underlying structure. The two clustering methods that utilize both the structural and content information vary in terms of how the structure and content similarities are combined. One clustering method calculates the document similarity by using a linear weighting combination strategy of structure and content similarities. The content similarity in this clustering method is based on a semantic kernel. The other method calculates the distance between documents by a non-linear combination of the structure and content of XML documents using a semantic kernel. Empirical analysis shows that the structure-only clustering method based on the tree model is more scalable than the structure-only clustering method based on the path model, as the tree similarity measure does not need to visit the parents of an element many times. Experimental results also show that the clustering methods perform better with the inclusion of the content information on most test document collections. To further the research, the structural clustering method based on the tree model is extended and employed in XML transformation. The results from the experiments show that the proposed transformation process is faster than the traditional transformation system that translates and converts the source XML documents sequentially. Also, the schema matching process of XML transformation produces a better matching result in a shorter time.
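As an illustration of the structure-plus-content idea, the sketch below combines a structural similarity and a content similarity with a linear weighting. The Jaccard-over-paths and cosine-over-terms measures are simplified stand-ins, not the tree similarity measure or the semantic kernel used in this research, and the weighting parameter alpha is assumed.

def path_jaccard(paths_a, paths_b):
    """Stand-in structural similarity: Jaccard overlap of root-to-leaf paths."""
    a, b = set(paths_a), set(paths_b)
    return len(a & b) / len(a | b) if a | b else 0.0

def term_cosine(terms_a, terms_b):
    """Stand-in content similarity: cosine over binary term vectors."""
    a, b = set(terms_a), set(terms_b)
    if not a or not b:
        return 0.0
    return len(a & b) / (len(a) ** 0.5 * len(b) ** 0.5)

def combined_similarity(doc_a, doc_b, alpha=0.5):
    """Linear weighting of structure and content similarity, echoing the
    first structure-plus-content method described above."""
    return (alpha * path_jaccard(doc_a["paths"], doc_b["paths"])
            + (1 - alpha) * term_cosine(doc_a["terms"], doc_b["terms"]))

doc1 = {"paths": ["/book/title", "/book/author"], "terms": ["xml", "clustering"]}
doc2 = {"paths": ["/book/title", "/book/year"], "terms": ["xml", "kernel"]}
print(round(combined_similarity(doc1, doc2), 3))  # 0.417 for these toy documents

The second structure-plus-content method in the thesis combines the two signals non-linearly inside a kernel; the sketch only demonstrates the linear weighting variant.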
Abstract:
This chapter focuses on ‘intergenerational collaborative drawing’, a particular process of drawing whereby adults and children draw at the same time on a blank paper space. Such drawings can be produced for a range of purposes, and based on different curriculum or stimulus subjects. Children of all ages, and with a range of physical and intellectual abilities, are able to draw with parents, carers and teachers. Intergenerational collaborative drawing is a highly potent method for drawing in early childhood contexts because it brings adults and children together in the process of thinking and theorizing in order to create visual imagery, and this exposes to adults and children, in deep ways, the ideas and concepts being learned about. For adults, this exposure to a child’s thinking is a far more effective assessment tool than being presented with a finished drawing they know little about. This chapter focuses on drawings to examine wider issues of learning independence and how, in drawing, preferred schema in the form of hand-out worksheets, the suggestive drawings provided by adults, and visual material seen in everyday life all serve to co-opt a young child into making particular schematic choices. I suggest that intergenerational collaborative drawing therefore works as a small act of resistance to that co-opting, in that it helps adults and children to collectively challenge popular creativity and learning discourses.
Abstract:
Background: Cancer monitoring and prevention relies on the timely notification of cancer cases. However, the abstraction and classification of cancer from the free text of pathology reports and other relevant documents, such as death certificates, are complex and time-consuming activities. Aims: In this paper, approaches for the automatic detection of notifiable cancer cases as the cause of death from free-text death certificates supplied to Cancer Registries are investigated. Method: A number of machine learning classifiers were studied. Features were extracted using natural language techniques and the Medtex toolkit. Features included stemmed words, bi-grams, and concepts from the SNOMED CT medical terminology. The baseline consisted of a keyword spotter using keywords extracted from the long descriptions of ICD-10 cancer-related codes. Results: Death certificates with notifiable cancer listed as the cause of death can be effectively identified with the methods studied in this paper. A Support Vector Machine (SVM) classifier achieved the best performance, with an overall F-measure of 0.9866 when evaluated on a set of 5,000 free-text death certificates using the token stem feature set. The SNOMED CT concept plus token stem feature set reached the lowest variance (0.0032) and false negative rate (0.0297) while achieving an F-measure of 0.9864. The SVM classifier accounts for the first 18 of the top 40 evaluated runs and was the most robust classifier, with a variance of 0.001141, half that of the other classifiers. Conclusion: The selection of features had the greatest influence on classifier performance, although the type of classifier employed also affects performance. In contrast, the feature weighting schema had a negligible effect on performance. Specifically, stemmed tokens, with or without SNOMED CT concepts, form the most effective features when combined with an SVM classifier.
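For illustration only, the sketch below wires a stemmed-token feature pipeline into a linear SVM using scikit-learn and NLTK as stand-ins for the Medtex-based feature extraction described above. The certificates and labels are invented, and the SNOMED CT concept features are not reproduced.

from nltk.stem import PorterStemmer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

stemmer = PorterStemmer()

def stem_tokens(text):
    # Token stem features: whitespace tokenisation followed by Porter stemming.
    return [stemmer.stem(tok) for tok in text.lower().split()]

# Tiny invented examples; 1 = notifiable cancer listed as the cause of death.
certificates = ["metastatic carcinoma of the lung", "acute myocardial infarction"]
labels = [1, 0]

model = make_pipeline(
    TfidfVectorizer(tokenizer=stem_tokens, token_pattern=None),
    LinearSVC(),
)
model.fit(certificates, labels)
print(model.predict(["secondary malignant neoplasm of liver"]))

In practice the classifier would be trained on thousands of certificates and evaluated with precision, recall and F-measure, as reported in the abstract.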
Abstract:
It is well established that there are inherent difficulties involved in communicating across cultural boundaries. When these difficulties are encountered within the justice system the innocent can be convicted and witnesses undermined. A large amount of research has been undertaken regarding the implications of miscommunication within the courtroom but far less has been carried out on language and interactions between police and Indigenous Australians. It is necessary that officers of the law be made aware of linguistic issues to ensure they conduct their investigations in a fair, effective and therefore ethical manner. This paper draws on Cultural Schema Theory to illustrate how this could be achieved. The justice system is reliant upon the skills and knowledge of the police, therefore, this paper highlights the need for research to focus on the linguistic and non‐verbal differences between Australian Aboriginal English and Australian Standard English in order to develop techniques to facilitate effective communication.
Abstract:
Background: Designing novel proteins with site-directed recombination has enormous prospects. By locating effective recombination sites for swapping sequence parts, the probability that hybrid sequences have the desired properties is increased dramatically. The prohibitive requirements for applying current tools led us to investigate machine learning to assist in finding useful recombination sites from amino acid sequence alone. Results: We present STAR, the Site Targeted Amino acid Recombination predictor, which produces a score indicating the structural disruption caused by recombination for each position in an amino acid sequence. Example predictions, contrasted with those of alternative tools, illustrate STAR's utility in determining useful recombination sites. Overall, the correlation coefficient between the output of the experimentally validated protein design algorithm SCHEMA and the prediction of STAR is very high (0.89). Conclusion: STAR allows the user to explore useful recombination sites in amino acid sequences with unknown structure and unknown evolutionary origin. The predictor service is available from http://pprowler.itee.uq.edu.au/star.
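As a worked illustration of the agreement check reported above, the snippet below computes a Pearson correlation between a structure-based score and a sequence-based prediction. The per-position numbers are invented, not actual SCHEMA or STAR outputs.

from scipy.stats import pearsonr

# Invented per-position disruption scores, purely to illustrate the comparison.
schema_scores = [0.10, 0.40, 0.35, 0.90, 0.70, 0.20]   # structure-based reference
star_scores   = [0.15, 0.38, 0.30, 0.85, 0.65, 0.25]   # sequence-based prediction

r, _ = pearsonr(schema_scores, star_scores)
print(f"correlation coefficient: {r:.2f}")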
Abstract:
We present a machine learning model that predicts a structural disruption score from a protein's primary structure. SCHEMA was introduced by Frances Arnold and colleagues as a method for determining putative recombination sites of a protein on the basis of the full (PDB) description of its structure. The present method provides an alternative to SCHEMA that is able to determine the same score from sequence data only. Circumventing the need for resolving the full structure enables the exploration of yet unresolved and even hypothetical sequences for protein design efforts. Deriving the SCHEMA score from a primary structure is achieved using a two-step approach: first predicting a secondary structure from the sequence, and then predicting the SCHEMA score from the predicted secondary structure. The correlation coefficient for the prediction is 0.88 and indicates the feasibility of replacing SCHEMA with little loss of precision.
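A schematic sketch of the two-step approach is given below. Both predictors are hypothetical placeholders standing in for the trained models, and the residue-to-structure mapping is invented; only the shape of the pipeline (sequence to secondary structure to per-position score) follows the description above.

def predict_secondary_structure(sequence: str) -> str:
    """Placeholder step 1: map each residue to helix (H), strand (E) or coil (C)."""
    return "".join("H" if aa in "AELM" else "E" if aa in "VIFY" else "C"
                   for aa in sequence)

def predict_schema_score(secondary_structure: str) -> list:
    """Placeholder step 2: assign a higher disruption score inside regular
    secondary structure elements than in coil regions."""
    return [0.8 if s in "HE" else 0.2 for s in secondary_structure]

sequence = "MAVILKEGC"
scores = predict_schema_score(predict_secondary_structure(sequence))
print(scores)  # one disruption score per residue position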
Abstract:
Heterogeneous health data is a critical issue when managing health information for quality decision-making processes. In this paper we examine the efficient aggregation of lifestyle information through a data warehousing architecture lens. We present a proof of concept for a clinical data warehouse architecture that enables evidence-based decision-making processes by integrating and organising disparate data silos in support of healthcare services improvement paradigms.
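As a small, assumption-laden sketch of the integration idea, the snippet below conforms two invented lifestyle data silos to a shared patient key and aggregates them into a single fact table with pandas. None of the column names, values or tools come from the paper.

import pandas as pd

# Two hypothetical data silos holding lifestyle-related records.
gp_visits = pd.DataFrame({
    "patient_id": [1, 2, 1],
    "smoker": [True, False, True],
    "visit_date": pd.to_datetime(["2023-01-10", "2023-02-01", "2023-06-15"]),
})
gym_checkins = pd.DataFrame({
    "patient_id": [1, 2, 2],
    "checkin_date": pd.to_datetime(["2023-01-12", "2023-01-20", "2023-03-05"]),
})

# Conform both silos to the shared patient key, then aggregate into one fact table.
facts = (gp_visits.groupby("patient_id")["smoker"].last().to_frame()
         .join(gym_checkins.groupby("patient_id").size().rename("gym_visits")))
print(facts)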