Abstract:
Petri nets are often used to model and analyze workflows. Many workflow languages have been mapped onto Petri nets in order to provide formal semantics or to verify correctness properties. Typically, so-called workflow nets are used to model and analyze workflows, and variants of the classical soundness property serve as the correctness notion. Since many workflow languages have cancellation features, a mapping to workflow nets is not always possible. Therefore, it is interesting to consider workflow nets with reset arcs. Unfortunately, soundness is undecidable for workflow nets with reset arcs. In this paper, we provide a proof and insights into the theoretical limits of workflow verification.
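The distinguishing feature of reset arcs is that firing a transition empties the connected places entirely, regardless of how many tokens they hold. A minimal sketch of this firing rule is below; the marking representation and names are illustrative, not taken from the paper.

```python
def fire(marking, consume, produce, reset=()):
    """Fire a Petri net transition on a marking (dict place -> token count).

    consume: tokens required and removed per place
    produce: tokens added per place
    reset:   places emptied regardless of their token count (reset arcs)
    """
    if any(marking.get(p, 0) < n for p, n in consume.items()):
        return None  # transition not enabled
    m = dict(marking)
    for p, n in consume.items():
        m[p] -= n
    for p in reset:
        m[p] = 0  # a reset arc removes ALL tokens, breaking monotonicity
    for p, n in produce.items():
        m[p] = m.get(p, 0) + n
    return m

# The reset arc empties place "b" no matter how many tokens it holds.
m = fire({"a": 1, "b": 3}, consume={"a": 1}, produce={"c": 1}, reset=("b",))
```

It is exactly this non-monotone behaviour (a larger marking does not stay larger after firing) that the undecidability argument exploits.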
Abstract:
The progress of technology has led to the increased adoption of energy monitors among household energy consumers. While the monitors available on the market deliver real-time energy usage feedback to the consumer, the format of this data is usually unengaging and mundane. Moreover, it fails to address consumers who have different motivations and needs for saving and comparing energy. This paper presents a study that seeks to provide initial indications for motivation-specific design of energy-related feedback. We focus on comparative feedback supported by a community of energy consumers. In particular, we examine eco-visualisations, temporal self-comparison, norm comparison, one-on-one comparison and ranking, where the last three allow us to explore the potential of socialising energy-related feedback. These feedback types were integrated in EnergyWiz – a mobile application that enables users to compare with their past performance, neighbours, contacts from social networking sites and other EnergyWiz users. The application was evaluated in personal, semi-structured interviews, which provided initial insights on how to design motivation-related comparative feedback.
Abstract:
QUT Library and the High Performance Computing and Research Support (HPC) Team have been collaborating on developing and delivering a range of research support services, including those designed to assist researchers to manage their data. QUT’s Management of Research Data policy has been available since 2010 and is complemented by the Data Management Guidelines and Checklist. QUT has partnered with the Australian National Data Service (ANDS) on a number of projects including Seeding the Commons, Metadata Hub (with Griffith University) and the Data Capture program. The HPC Team has also been developing the QUT Research Data Repository based on the Arcitecta Mediaflux system and has run several pilots with faculties. Library and HPC staff have been trained in the principles of research data management and are providing a range of research data management seminars and workshops for researchers and HDR students.
Abstract:
In this paper, we highlight key concepts from dynamical systems theory and complexity sciences to exemplify constraints on talent development in a sample of elite cricketers. Eleven international fast bowlers who cumulatively had taken more than 2,400 test wickets in over 600 international test matches were interviewed using an in-depth, open-ended, and semi-structured approach. Qualitative data were analysed to identify key components in fast bowling expertise development. Results revealed that, contrary to traditional perspectives, the athletes progressed through unique, nonlinear developmental trajectories; this nonlinearity appears to be a commonality in the experts' developmental pathways. During development, individual experts encountered unique constraints on the acquisition of expertise in cricket fast bowling, resulting in unique performance adaptations. Specifically, data illustrated experts' ability to continually adapt behaviours under multifaceted ecological constraints.
Abstract:
Road crashes cost global and Australian society a significant proportion of GDP, affecting productivity and causing significant suffering for communities and individuals. This paper presents a case study that generates data mining models that contribute to the understanding of road crashes by allowing examination of the role of skid resistance (F60) and other road attributes in road crashes. Predictive data mining algorithms, primarily regression trees, were used to produce road segment crash count models from the road and traffic attributes of crash scenarios. The rules derived from the regression trees provide evidence of the significance of road attributes in contributing to crashes, with a focus on the evaluation of skid resistance.
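The core step of regression-tree learning is choosing a split point on an attribute that minimises the variance of the target in the two resulting groups. The sketch below illustrates this for a single attribute (skid resistance, F60) against crash counts; the data values and function names are invented for illustration, not drawn from the paper's dataset.

```python
# Sketch: pick the split threshold on one road attribute that minimises
# the weighted variance of crash counts in the two child segments.

def variance(ys):
    mean = sum(ys) / len(ys)
    return sum((y - mean) ** 2 for y in ys) / len(ys)

def best_split(xs, ys):
    """Return the threshold on xs giving the lowest weighted child variance."""
    best_t, best_score = None, float("inf")
    for t in sorted(set(xs))[:-1]:  # each observed value is a candidate cut
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        score = (len(left) * variance(left) + len(right) * variance(right)) / len(ys)
        if score < best_score:
            best_t, best_score = t, score
    return best_t

# Invented example: lower skid resistance paired with higher crash counts.
f60 = [0.30, 0.35, 0.40, 0.55, 0.60, 0.65]
crashes = [9, 8, 7, 2, 1, 2]
t = best_split(f60, crashes)  # splits between the high- and low-crash groups
```

A full regression tree applies this split recursively; the thresholds found along a path from root to leaf become the "rules" the abstract refers to.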
Abstract:
The study investigated the effect on learning of four different instructional formats used to teach assembly procedures. Cognitive load and spatial information processing theories were used to generate the instructional material. The first group received a physical model to study, the second an isometric drawing, the third an isometric drawing plus a model and the fourth an orthographic drawing. Forty secondary school students were presented with the four different instructional formats and subsequently tested on an assembly task. The findings indicated that there may be evidence to argue that the model format, which only required encoding of an already constructed three-dimensional representation, caused less extraneous cognitive load compared to the isometric and the orthographic formats. No significant difference was found between the model and the isometric-plus-model formats on all measures because 80% of the students in the isometric-plus-model format chose to use the model format only. The model format also did not differ significantly from other groups in total time taken to complete the assembly, in number of correctly assembled pieces and in time spent on studying the tasks. However, the model group had significantly more correctly completed models and required fewer extra looks than the other groups.
Abstract:
In today’s electronic world vast amounts of knowledge are stored within many datasets and databases. Often the default format of this data means that the knowledge within is not immediately accessible, but rather has to be mined and extracted. This requires automated tools, and they need to be effective and efficient. Association rule mining is one approach to obtaining knowledge stored within datasets and databases; it yields frequent patterns and association rules between the items/attributes of a dataset with varying levels of strength. However, this is also association rule mining’s downside: the number of rules that can be found is usually very large. In order to effectively use the association rules (and the knowledge within), the number of rules needs to be kept manageable, so it is necessary to have a method to reduce the number of association rules. However, we do not want to lose knowledge through this process. Thus the idea of non-redundant association rule mining was born. A second issue with association rule mining is determining which rules are interesting. The standard approach has been to use support and confidence, but these measures have their limitations. Approaches that use information about the dataset’s structure to measure association rules are limited, but could yield useful association rules if tapped. Finally, while it is important to be able to obtain interesting association rules from a dataset in a manageable size, it is equally important to be able to apply them in a practical way, where the knowledge they contain can be taken advantage of. Association rules show items/attributes that appear together frequently. Recommendation systems also look at patterns and items/attributes that occur together frequently in order to make a recommendation to a person. It should therefore be possible to bring the two together. In this thesis we look at these three issues and propose approaches to help.
For discovering non-redundant rules we propose enhanced approaches to rule mining in multi-level datasets that allow hierarchically redundant association rules to be identified and removed, without information loss. For discovering interesting association rules based on the dataset’s structure, we propose three measures for use in multi-level datasets. Lastly, we propose and demonstrate an approach that allows association rules to be practically and effectively used in a recommender system, while at the same time improving the recommender system’s performance. This becomes especially evident when looking at the user cold-start problem for a recommender system. In fact, our proposal helps to solve this serious problem facing recommender systems.
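The standard interestingness measures the abstract mentions — support and confidence of a rule X → Y — can be sketched in a few lines over a toy transaction set. The transactions and item names below are invented for illustration.

```python
# Support of an itemset: fraction of transactions containing it.
# Confidence of X -> Y: support(X ∪ Y) / support(X).

def support(itemset, transactions):
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent, transactions):
    return support(antecedent | consequent, transactions) / support(antecedent, transactions)

transactions = [
    {"bread", "milk"},
    {"bread", "butter"},
    {"bread", "milk", "butter"},
    {"milk"},
]

# Rule {bread} -> {milk}: the items co-occur in 2 of 4 transactions,
# and milk appears in 2 of the 3 transactions containing bread.
s = support({"bread", "milk"}, transactions)       # 0.5
c = confidence({"bread"}, {"milk"}, transactions)  # 2/3
```

The combinatorial explosion the thesis addresses follows directly from this definition: every frequent itemset can generate many rules, most of them redundant specialisations of more general ones.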
Abstract:
This is the first outdoor test of small-scale dye-sensitized solar cells (DSCs) powering a standalone nanosensor node. A solar cell test station (SCTS) has been developed using standard DSCs to power a gas nanosensor, a radio transmitter, and the control electronics (CE) for battery charging. The station is remotely monitored through a wired (Ethernet cable) or wireless connection (radio transmitter) in order to evaluate in real time the performance of the solar cells powering a nanosensor and a transmitter under different weather conditions. We analyze trends of energy conversion efficiency after 60 days of operation. The 408 cm² active-surface module produces enough energy to power a gas nanosensor and a radio transmitter during the day and part of the night. Also, by using a variable programmable load we keep the system working at the maximum power point (MPP), quantifying the total energy generated and stored in a battery. Although this technology is at an early stage of development, these experiments provide useful data for future outdoor applications such as nanosensor network nodes.
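Keeping a module at its maximum power point with a variable load is commonly done with a perturb-and-observe loop: nudge the operating voltage, and keep moving in whichever direction increases measured power. The abstract does not state which tracking algorithm the station uses, so the sketch below is a generic illustration with an invented power curve and step size.

```python
# Perturb-and-observe MPP tracking sketch (generic; not the paper's method).

def p_and_o(power_at, v, step=0.1, iterations=200):
    """Perturb the operating voltage; reverse direction whenever the
    measured power drops. Settles into oscillation around the MPP."""
    p = power_at(v)
    for _ in range(iterations):
        v_new = v + step
        p_new = power_at(v_new)
        if p_new < p:      # power dropped: reverse perturbation direction
            step = -step
        v, p = v_new, p_new
    return v

# Toy power-voltage curve with its maximum at 17 V (invented numbers).
curve = lambda v: -(v - 17.0) ** 2 + 100.0
v_mpp = p_and_o(curve, v=12.0)  # converges to an oscillation near 17 V
```

The oscillation amplitude is set by the step size, which is the usual trade-off in this scheme: smaller steps track more precisely but respond more slowly to changing irradiance.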
Abstract:
As a model for knowledge description and formalization, ontologies are widely used to represent user profiles in personalized web information gathering. However, when representing user profiles, many models have utilized knowledge from only a global knowledge base or a user's local information. In this paper, a personalized ontology model is proposed for knowledge representation and reasoning over user profiles. This model learns ontological user profiles from both a world knowledge base and user local instance repositories. The ontology model is evaluated by comparing it against benchmark models in web information gathering. The results show that this ontology model is successful.
Abstract:
Workflow nets, a particular class of Petri nets, have become one of the standard ways to model and analyze workflows. Typically, they are used as an abstraction of the workflow that is used to check the so-called soundness property. This property guarantees the absence of livelocks, deadlocks, and other anomalies that can be detected without domain knowledge. Several authors have proposed alternative notions of soundness and have suggested using more expressive languages, e.g., models with cancellations or priorities. This paper provides an overview of the different notions of soundness and investigates these in the presence of different extensions of workflow nets. We will show that the eight soundness notions described in the literature are decidable for workflow nets. However, most extensions will make all of these notions undecidable. These new results show the theoretical limits of workflow verification. Moreover, we discuss some of the analysis approaches described in the literature.
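One ingredient of classical soundness — the "option to complete" — can be checked for a bounded workflow net by exhaustive state-space exploration: from the initial marking, every reachable marking must still be able to reach the final marking. The sketch below demonstrates this one condition on a tiny invented net (it omits the other soundness conditions, such as proper completion and absence of dead transitions).

```python
from collections import deque

def reachable(marking, transitions):
    """All markings reachable from `marking` by BFS. A marking is a
    frozenset of (place, tokens) pairs; a transition is (consume, produce)
    with each side a dict place -> tokens."""
    seen, queue = {marking}, deque([marking])
    while queue:
        m = dict(queue.popleft())
        for consume, produce in transitions:
            if all(m.get(p, 0) >= n for p, n in consume.items()):
                d = dict(m)
                for p, n in consume.items():
                    d[p] -= n
                for p, n in produce.items():
                    d[p] = d.get(p, 0) + n
                nxt = frozenset((p, n) for p, n in d.items() if n)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
    return seen

def option_to_complete(initial, final, transitions):
    """Every marking reachable from `initial` can still reach `final`."""
    return all(final in reachable(m, transitions) for m in reachable(initial, transitions))

# A trivially sound sequential workflow net: i -> t1 -> p -> t2 -> o.
ts = [({"i": 1}, {"p": 1}), ({"p": 1}, {"o": 1})]
sound = option_to_complete(frozenset({("i", 1)}), frozenset({("o", 1)}), ts)  # True
```

This brute-force check only terminates because the state space here is finite; the paper's decidability and undecidability results concern exactly when such analysis remains possible as the modeling language grows more expressive.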