906 results for Interpreting graphs
Abstract:
Aim: In this paper we discuss the use of the Precede-Proceed model when investigating health promotion options for breast cancer survivors. Background: Adherence to recommended health behaviors can optimize well-being after cancer treatment. Guided by the Precede-Proceed approach, we studied the behaviors of breast cancer survivors in our health service area. Data sources: The interview data from the cohort of breast cancer survivors are used in this paper to illustrate the use of Precede-Proceed in this nursing research context. Interview data were collected from June to December 2009. We also searched Medline, CINAHL, PsychInfo and PsychExtra up to 2010 for relevant literature in English to interrogate the data from other theoretical perspectives. Discussion: The Precede-Proceed model is theoretically complex. The deductive analytic process guided by the model usefully explained some of the health behaviors of cancer survivors, although it could not explicate many other findings. A complementary inductive approach to the analysis, with subsequent interpretation through Uncertainty in Illness Theory and other psychosocial perspectives, provided a comprehensive account of the qualitative data and resulted in contextually relevant recommendations for nursing practice. Implications for nursing: Nursing researchers using Precede-Proceed should maintain theoretical flexibility when interpreting qualitative data. Perspectives not embedded in the model might need to be considered to ensure that the data are analyzed in a contextually relevant way. Conclusion: Precede-Proceed provides a robust framework for nursing researchers investigating health promotion in cancer survivors; however, theoretical lenses in addition to those embedded in the model can enhance data interpretation.
Abstract:
Continuous user authentication with keystroke dynamics uses character sequences as features. Since users can type characters in any order, it is imperative to find character sequences (n-graphs) that are representative of user typing behavior. Contemporary feature selection approaches do not guarantee selecting frequently typed features, which may result in a less accurate statistical representation of the user. Furthermore, the selected features do not inherently reflect user typing behavior. We propose four statistics-based feature selection techniques that mitigate the limitations of existing approaches. The first technique selects the most frequently occurring features. The other three consider different user typing behaviors by selecting: n-graphs that are typed quickly; n-graphs that are typed with consistent time; and n-graphs that have large time variance among users. We use Gunetti's keystroke dataset and the k-means clustering algorithm for our experiments. The results show that, among the proposed techniques, the most-frequent feature selection technique can effectively find user-representative features. We further substantiate our results by comparing the most-frequent feature selection technique with three existing approaches (popular Italian words, common n-graphs, and least frequent n-graphs). We find that it performs better than the existing approaches after selecting a certain number of most-frequent n-graphs.
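The most-frequent selection technique described above can be sketched in a few lines; this is an illustrative Python sketch only (the function name and example text are my own, not from the paper):

```python
from collections import Counter

def select_most_frequent_ngraphs(keystrokes, n=2, k=5):
    """Select the k most frequently occurring n-graphs from a keystroke stream.

    `keystrokes` is the string of typed characters; an n-graph is a sequence
    of n consecutive characters.
    """
    ngraphs = [keystrokes[i:i + n] for i in range(len(keystrokes) - n + 1)]
    counts = Counter(ngraphs)
    return [g for g, _ in counts.most_common(k)]

typed = "the theory of the thing"
print(select_most_frequent_ngraphs(typed, n=2, k=1))  # ['th'] (4 occurrences)
```

Selecting by raw frequency guarantees that the chosen n-graphs recur often enough to yield stable timing statistics, which is the property the contemporary approaches criticized above do not guarantee.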
Abstract:
The management of models over time in many domains requires different constraints to apply to some parts of the model as it evolves. Using EMF and its meta-language Ecore, the development of model management code and tools usually relies on the metamodel having some constraints, such as attribute and reference cardinalities and changeability, set in the least constrained way that any model user will require. Stronger versions of these constraints can then be enforced in code, or by attaching additional constraint expressions, and their evaluation engines, to the generated model code. We propose a mechanism that allows for variations to the constraining meta-attributes of metamodels, to allow enforcement of different constraints at different lifecycle stages of a model. We then discuss the implementation choices within EMF to support the validation of a state-specific metamodel on model graphs when changing states, as well as the enforcement of state-specific constraints when executing model change operations.
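The core idea, varying constraint strength (here, a lower-bound cardinality and changeability) with the model's lifecycle state, validating against the target state on transition, and checking on each change operation, can be sketched generically. This is a hypothetical Python sketch of the mechanism, not the paper's EMF/Ecore implementation; all names are invented:

```python
# Per-lifecycle-state constraint sets: in "draft" an assignee may be absent
# and changed freely; once "released", one assignee is required and frozen.
STATE_CONSTRAINTS = {
    "draft":    {"assignee_min": 0, "assignee_changeable": True},
    "released": {"assignee_min": 1, "assignee_changeable": False},
}

class Task:
    def __init__(self):
        self.state = "draft"
        self.assignee = None

    def set_assignee(self, who):
        # Enforcement of state-specific constraints on a change operation.
        if not STATE_CONSTRAINTS[self.state]["assignee_changeable"]:
            raise ValueError(f"assignee is frozen in state {self.state!r}")
        self.assignee = who

    def transition(self, new_state):
        # Validate the model against the target state's constraints.
        rules = STATE_CONSTRAINTS[new_state]
        count = 0 if self.assignee is None else 1
        if count < rules["assignee_min"]:
            raise ValueError(f"cannot enter {new_state!r}: assignee required")
        self.state = new_state

t = Task()
t.set_assignee("alice")
t.transition("released")   # valid: cardinality constraint satisfied
```

In EMF terms, the dictionary above plays the role of state-specific values for meta-attributes such as a feature's lower bound and changeability.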
Abstract:
The research objectives of this thesis were to contribute to Bayesian statistical methodology by contributing to risk assessment statistical methodology, and to spatial and spatio-temporal methodology, by modelling error structures using complex hierarchical models. Specifically, I hoped to consider two applied areas, and use these applications as a springboard for developing new statistical methods as well as undertaking analyses which might give answers to particular applied questions. Thus, this thesis considers a series of models, firstly in the context of risk assessments for recycled water, and secondly in the context of water usage by crops. The research objective was to model error structures using hierarchical models in two problems, namely risk assessment analyses for wastewater, and secondly, in a four-dimensional dataset, assessing differences between cropping systems over time and over three spatial dimensions. The aim was to use the simplicity and insight afforded by Bayesian networks to develop appropriate models for risk scenarios, and again to use Bayesian hierarchical models to explore the necessarily complex modelling of four-dimensional agricultural data. The specific objectives of the research were to develop a method for the calculation of credible intervals for the point estimates of Bayesian networks; to develop a model structure to incorporate all the experimental uncertainty associated with various constants, thereby allowing the calculation of more credible credible intervals for a risk assessment; to model a single day's data from the agricultural dataset which satisfactorily captured the complexities of the data; to build a model for several days' data, in order to consider how the full data might be modelled; and finally to build a model for the full four-dimensional dataset and to consider the time-varying nature of the contrast of interest, having satisfactorily accounted for possible spatial and temporal autocorrelations.
This work forms five papers, two of which have been published, with two submitted, and the final paper still in draft. The first two objectives were met by recasting the risk assessments as directed acyclic graphs (DAGs). In the first case, we elicited uncertainty for the conditional probabilities needed by the Bayesian net, incorporated these into a corresponding DAG, and used Markov chain Monte Carlo (MCMC) to find credible intervals for all the scenarios and outcomes of interest. In the second case, we incorporated the experimental data underlying the risk assessment constants into the DAG, and also treated some of that data as needing to be modelled as an 'errors-in-variables' problem [Fuller, 1987]. This illustrated a simple method for the incorporation of experimental error into risk assessments. In considering one day of the three-dimensional agricultural data, it became clear that geostatistical models or conditional autoregressive (CAR) models over the three dimensions were not the best way to approach the data. Instead, CAR models were used with neighbours only in the same depth layer. This gave flexibility to the model, allowing both the spatially structured and non-structured variances to differ at all depths. We call this model the CAR layered model. Given the experimental design, the fixed part of the model could have been modelled as a set of means by treatment and by depth, but doing so allows little insight into how the treatment effects vary with depth. Hence, a number of essentially non-parametric approaches were taken to see the effects of depth on treatment, with the model of choice incorporating an errors-in-variables approach for depth in addition to a non-parametric smooth. The statistical contribution here was the introduction of the CAR layered model; the applied contribution was the analysis of moisture over depth and the estimation of the contrast of interest together with its credible intervals.
These models were fitted using WinBUGS [Lunn et al., 2000]. The work in the fifth paper deals with the fact that with large datasets, the use of WinBUGS becomes more problematic because of its highly correlated term-by-term updating. In this work, we introduce a Gibbs sampler with block updating for the CAR layered model. The Gibbs sampler was implemented by Chris Strickland using pyMCMC [Strickland, 2010]. This framework is then used to consider five days' data, and we show that moisture in the soil for all the various treatments reaches levels particular to each treatment at a depth of 200 cm and thereafter stays constant, albeit with increasing variances with depth. In an analysis across three spatial dimensions and across time, there are many interactions of time and the spatial dimensions to be considered. Hence, we chose to use a daily model and to repeat the analysis at all time points, effectively creating an interaction model of time by the daily model. Such an approach allows great flexibility. However, this approach does not allow insight into the way in which the parameter of interest varies over time. Hence, a two-stage approach was also used, with estimates from the first stage being analysed as a set of time series. We see this spatio-temporal interaction model as being a useful approach to data measured across three spatial dimensions and time, since it does not assume additivity of the random spatial or temporal effects.
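The defining feature of the CAR layered model described above is that a cell's neighbourhood is restricted to adjacent cells within the same depth layer, so structured and unstructured variances can differ by depth. The construction of such a within-layer neighbour structure on a regular grid can be sketched as follows (an illustrative Python sketch; the thesis's models were fitted in WinBUGS and pyMCMC, and the function name here is my own):

```python
def layered_neighbours(nx, ny, nz):
    """Build a neighbour list for an nx x ny x nz grid where two cells are
    neighbours only if they are adjacent within the SAME depth layer (z).
    Cells are indexed in x-fastest order.
    """
    def idx(x, y, z):
        return (z * ny + y) * nx + x

    neigh = {idx(x, y, z): [] for z in range(nz)
             for y in range(ny) for x in range(nx)}
    for z in range(nz):
        for y in range(ny):
            for x in range(nx):
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    u, v = x + dx, y + dy
                    # Note: no (dz != 0) offsets, so layers never connect.
                    if 0 <= u < nx and 0 <= v < ny:
                        neigh[idx(x, y, z)].append(idx(u, v, z))
    return neigh

nb = layered_neighbours(2, 2, 2)
# Cell (0,0,0) has in-layer neighbours (1,0,0) and (0,1,0), never (0,0,1).
```

Feeding this adjacency into a CAR prior then yields one spatial field per layer, which is what allows the variances to vary freely with depth.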
Abstract:
Defence organisations perform information security evaluations to confirm that electronic communications devices are safe to use in security-critical situations. Such evaluations include tracing all possible dataflow paths through the device, but this process is tedious and error-prone, so automated reachability analysis tools are needed to make security evaluations faster and more accurate. Previous research has produced a tool, SIFA, for dataflow analysis of basic digital circuitry, but it cannot analyse dataflow through microprocessors embedded within the circuit since this depends on the software they run. We have developed a static analysis tool that produces SIFA-compatible dataflow graphs from embedded microcontroller programs written in C. In this paper we present a case study which shows how this new capability supports combined hardware and software dataflow analyses of a security-critical communications device.
Abstract:
Data flow analysis techniques can be used to help assess threats to data confidentiality and integrity in security-critical program code. However, a fundamental weakness of static analysis techniques is that they overestimate the ways in which data may propagate at run time. Discounting large numbers of these false-positive data flow paths wastes an information security evaluator's time and effort. Here we show how to automatically eliminate some false-positive data flow paths by precisely modelling how classified data is blocked by certain expressions in embedded C code. We present a library of detailed data flow models of individual expression elements and an algorithm for introducing these components into conventional data flow graphs. The resulting models can be used to accurately trace byte-level or even bit-level data flow through expressions that are normally treated as atomic. This allows us to identify expressions that safely downgrade their classified inputs and thereby eliminate false-positive data flow paths from the security evaluation process. To validate the approach we have implemented and tested it in an existing data flow analysis toolkit.
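The bit-level idea above, that some C expressions provably block classified bits rather than propagate them, can be illustrated with a toy taint model where a value's taint is itself a bitmask over its bits. This is a hypothetical sketch, not the toolkit's actual model:

```python
def taint_and(taint_x, mask):
    """Bitwise AND with a constant mask clears taint on masked-out bits:
    output bit i can depend on input bit i only where the mask bit is 1.
    """
    return taint_x & mask

def taint_shift_right(taint_x, k):
    """x >> k moves each input bit's taint down by k positions."""
    return taint_x >> k

# A fully classified byte: all 8 bits tainted.
secret_taint = 0xFF
# `secret & 0x0F` exposes only the low nibble:
low = taint_and(secret_taint, 0x0F)   # 0x0F
# `(secret & 0x0F) >> 4` exposes nothing: a provably safe downgrade.
none = taint_shift_right(low, 4)      # 0x00
```

An atomic (byte-level) analysis would flag both expressions as carrying classified data; tracking taint per bit shows the second path is a false positive and can be eliminated.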
Abstract:
Usability is a multi-dimensional characteristic of a computer system. This paper focuses on usability as a measurement of interaction between the user and the system. The research employs a task-oriented approach to evaluate the usability of a meta search engine. This engine encourages and accepts queries of unlimited size expressed in natural language. A variety of conventional metrics developed by academic and industrial research, including ISO standards, are applied to the information retrieval process consisting of sequential tasks. Tasks range from formulating (long) queries to interpreting and retaining search results. Results of the evaluation and analysis of the operation log indicate that obtaining advanced search engine results can be accomplished simultaneously with enhancing the usability of the interactive process. In conclusion, we discuss implications for interactive information retrieval system design and directions for future usability research. © 2008 Academy Publisher.
Abstract:
This chapter examines the changing landscape of literacy in the early years and considers how the diverse spaces and places in which early literacy learning is promoted and takes place can be conceptualised and researched. We argue that early literacy research needs to extend beyond a language focus to become attentive to the embodied, material dimensions of learning environments. The discussion is organised in terms of three kinds of spaces within which children encounter opportunities to participate in communication and representational practices. These are domestic spaces, commercial spaces and spaces of formal education. Theories of spatiality and material semiotics provide the conceptual tools for interpreting research studies located in these spaces. Implications for educators are considered.
Abstract:
With the growing number of XML documents on the Web it becomes essential to effectively organise these XML documents in order to retrieve useful information from them. A possible solution is to apply clustering to the XML documents to discover knowledge that promotes effective data management, information retrieval and query processing. However, many issues arise in discovering knowledge from these types of semi-structured documents due to their heterogeneity and structural irregularity. Most of the existing research on clustering techniques focuses on only one feature of the XML documents, this being either their structure or their content, due to scalability and complexity problems. The knowledge gained in the form of clusters based on the structure or the content alone is not suitable for real-life datasets. It therefore becomes essential to include both the structure and content of XML documents in order to improve the accuracy and meaning of the clustering solution. However, the inclusion of both these kinds of information in the clustering process results in a huge overhead for the underlying clustering algorithm because of the high dimensionality of the data. The overall objective of this thesis is to address these issues by: (1) proposing methods to utilise frequent pattern mining techniques to reduce the dimension; (2) developing models to effectively combine the structure and content of XML documents; and (3) utilising the proposed models in clustering. This research first determines the structural similarity in the form of frequent subtrees and then uses these frequent subtrees to represent the constrained content of the XML documents in order to determine the content similarity. A clustering framework with two types of models, implicit and explicit, is developed. The implicit model uses a Vector Space Model (VSM) to combine the structure and the content information.
The explicit model uses a higher order model, namely a 3rd-order Tensor Space Model (TSM), to explicitly combine the structure and the content information. This thesis also proposes a novel incremental technique to decompose large-sized tensor models and utilise the decomposed solution for clustering the XML documents. The proposed framework and its components were extensively evaluated on several real-life datasets exhibiting extreme characteristics to understand the usefulness of the proposed framework in real-life situations. Additionally, this research evaluates the outcome of the clustering process on the collection selection problem in information retrieval on the Wikipedia dataset. The experimental results demonstrate that the proposed frequent pattern mining and clustering methods outperform the related state-of-the-art approaches. In particular, the proposed framework of utilising frequent structures for constraining the content shows an improvement in accuracy over content-only and structure-only clustering results. The scalability evaluation experiments conducted on large-scale datasets clearly show the strengths of the proposed methods over state-of-the-art methods. In particular, this thesis work contributes to effectively combining the structure and the content of XML documents for clustering, in order to improve the accuracy of the clustering solution. In addition, it also contributes by addressing the research gaps in frequent pattern mining to generate efficient and concise frequent subtrees with various node relationships that could be used in clustering.
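The implicit (VSM) model described above combines a document's structural features (e.g. frequent-subtree indicators) and content term weights in a single vector, over which a similarity measure drives the clustering. A minimal sketch of that combination, with an invented weighting parameter and cosine similarity (the thesis's actual weighting scheme may differ):

```python
import math

def combined_vector(struct_feats, content_feats, alpha=0.5):
    """Concatenate structure features with content term weights into one
    VSM vector, weighting the two sources by alpha (hypothetical scheme).
    """
    return [alpha * v for v in struct_feats] + \
           [(1 - alpha) * v for v in content_feats]

def cosine(a, b):
    """Cosine similarity between two vectors (0.0 if either is all-zero)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

doc1 = combined_vector([1, 0, 1], [2.0, 0.0])
doc2 = combined_vector([1, 0, 1], [2.0, 0.0])
print(round(cosine(doc1, doc2), 3))  # identical documents → 1.0
```

The explicit model instead keeps structure and content as separate modes of a 3rd-order tensor, avoiding the need to flatten them into one vector at the cost of a more expensive decomposition.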
Abstract:
The Link the Wiki track at INEX 2008 offered two tasks, file-to-file link discovery and anchor-to-BEP link discovery. The former used 6,600 topics and the latter used 50. Manual assessment of the anchor-to-BEP runs was performed using a tool developed for the purpose. Runs were evaluated using standard precision and recall measures such as MAP and precision/recall graphs. Ten groups participated, and the approaches they took are discussed. Final evaluation results for all runs are presented.
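MAP, the headline measure mentioned above, is the mean over topics of each run's average precision. A minimal Python sketch of the standard computation (illustrative only; the track used its own evaluation tooling):

```python
def average_precision(ranked, relevant):
    """Average precision of one ranked result list against a relevant set:
    the mean of precision-at-rank over the ranks where relevant items occur.
    """
    hits, score = 0, 0.0
    for rank, doc in enumerate(ranked, start=1):
        if doc in relevant:
            hits += 1
            score += hits / rank
    return score / len(relevant) if relevant else 0.0

def mean_average_precision(runs):
    """MAP over (ranked_list, relevant_set) pairs, one pair per topic."""
    return sum(average_precision(r, rel) for r, rel in runs) / len(runs)

# One topic: relevant docs {A, C} retrieved at ranks 1 and 3.
print(average_precision(["A", "B", "C"], {"A", "C"}))  # (1/1 + 2/3) / 2
```

Unlike a single precision/recall point, MAP rewards placing relevant links early in the ranking, which matters when an assessor only inspects the top of each run.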
Abstract:
Occupant injury comprises the largest proportion of child road crash trauma in most highly motorised countries. In Australia, road crashes are the primary cause of death for children aged 1-14 years and are among the top three causes of serious injury to this age group. For this reason, considerable research attention has been focused on understanding the contributing factors and the most effective ways of improving children's safety as car passengers. Australia has been particularly active in this area, with well-regarded work being conducted on levels of use of dedicated child restraints, restraint crash performance in laboratory conditions, examination of real-world restraint crash performance (case review), and studies of psychosocial factors influencing perceptions about restraints and their use (Brown & Bilston, 2006; Brown, McCaskill, Henderson & Bilston, 2006; Edwards, Anderson & Hutchinson, 2006; Lennon, 2005, 2007). New legislation for the restraint of children as vehicle passengers was enacted in Queensland in March 2010. This new legislation recognises the importance of dedicated restraint use for children up to at least age 7 years and the protective benefits of rear seating position in the event of a crash. As part of improving children's safety and addressing key priority areas, the Queensland Injury Prevention Council (QIPC) and Department of Transport and Main Roads (TMR) commissioned the Centre for Accident Research and Road Safety, Queensland (CARRS-Q) to evaluate the impact of the new legislation. Although at the time of commencing the research the legislation had only been in force for 14 months, it was deemed critical to review its effectiveness in guiding parental choices and compliance in order to inform the design and focus of further supporting initiatives and interventions.
Specifically, the research sought clear evidence of exactly what impact, if any, the legislation has had on compliance levels and what difficulties (if any) parents/carers experience in relation to interpreting as well as complying with the requirements of the new law. Knowledge about these barriers or difficulties will allow any future changes or improvements to the legislation to address such barriers and thus improve its effectiveness. Moreover, better information about how the legislation has affected parents will provide a basis to plan non-legislative comprehensive multi-strategy interventions such as community, educational or behavioural interventions with parents/carers and other stakeholder groups. In addition, it will allow identification of the most effective aspects of the legislation and those areas in need of extra attention to improve effectiveness/compliance and thus better protect children travelling in cars and improve their health and safety. This report presents the findings from the four components of the research: the literature review; observational study; intercept interviews and focus group with parents; and the interviews with key stakeholders.
Abstract:
The purpose of this paper is to identify and empirically examine the key features, purposes, uses, and benefits of performance dashboards. We find that only about a quarter of the sales managers surveyed in Finland used a dashboard, which was lower than previously reported. Dashboards were used for four distinct purposes: (i) monitoring, (ii) problem solving, (iii) rationalizing, and (iv) communication and consistency. There was a high correlation between the different uses of dashboards and user productivity, indicating that dashboards were perceived as effective tools in performance management, not just for monitoring one's own performance but for other purposes including communication. The quality of the data in dashboards did not seem to be a concern (except for completeness), but it was a critical driver of dashboard use. This is the first empirical study on performance dashboards in terms of adoption rates, key features, and benefits. The study highlights the research potential and benefits of dashboards, which could be valuable for future researchers and practitioners.
Abstract:
This paper outlines how the Ortelia project's 3D virtual reality models have the capacity to assist our understanding of sites of cultural heritage. The VR investigation of such spaces can be a valuable tool in 'real world' empirical research in theatre and spatiality. Through a demonstration of two of Ortelia's VR models (an art gallery and a theatre), we suggest how we might consider interpreting cultural space and sites as contributing significantly to cultural capital. We also introduce the potential for human interaction in such venues through motion capture, and discuss how this could be used to assess the ways humans interact in such contexts.
Abstract:
The Graphics-Decoding Proficiency (G-DP) instrument was developed as a screening test for the purpose of measuring students’ (aged 8-11 years) capacity to solve graphics-based mathematics tasks. These tasks include number lines, column graphs, maps and pie charts. The instrument was developed within a theoretical framework which highlights the various types of information graphics commonly presented to students in large-scale national and international assessments. The instrument provides researchers, classroom teachers and test designers with an assessment tool which measures students’ graphics decoding proficiency across and within five broad categories of information graphics. The instrument has implications for a number of stakeholders in an era where graphics have become an increasingly important way of representing information.
Abstract:
Very little has been written on charitable laws in Fiji to date. Most of the organisations in Fiji seek incorporation under the pre-independence legislation dealing with charities, the Charitable Trusts Act (Cap 67). This Act is the basis of this paper, and its key provisions are discussed here. Recently, serious questions have been raised about the status of charitable bodies following the de-registration of one of the registered charities (the Citizens' Constitutional Forum (CCF)) for political activity. This paper also provides an insight into the CCF 'saga', which goes to the 'heart' of the Act, and examines the serious questions that arise in interpreting the Act's provisions. In the concluding part, various issues of reform in the charity sphere are also proposed.