117 results for Interpreting graphs


Relevance: 10.00%

Abstract:

It is commonly accepted that contemporary schoolchildren live in a world that is intensely visual and commercially motivated, where what is imagined and what is experienced intermingle. Because of this, contemporary education should encourage children to make reference to, and connection with, their ‘out-of-school’ lives. The core critical underpinnings of curriculum-based arts appreciation and theory hinge on educators and students taking a historical look at the ways artists have engaged with, and commented upon, their contemporary societies. My article uses this premise to argue for the need to persist in pushing for critique of/through the visual, delivered as an active process via the arts classroom rather than as visual literacy, here regarded as a more passive process for interpreting and understanding visual material. The article asserts that visual arts lessons are best placed to provide students fully with such critique because they help students to develop a ‘critical eye’, an interpretive lens often used by artists to view, analyse and independently navigate and respond to contemporary society.

Relevance: 10.00%

Abstract:

Porous yttria-stabilized zirconia (YSZ) has been regarded as a potential candidate for bone substitutes owing to its high mechanical strength. However, porous YSZ bodies are biologically inert to bone tissue, so bioactive coatings must be introduced onto the walls of the porous structures to enhance bioactivity. In this study, porous zirconia scaffolds were prepared by infiltrating acrylonitrile butadiene styrene (ABS) scaffolds with a 3 mol% yttria-stabilized zirconia slurry. After sintering, sol-gel dip coating was used to deposit a coating layer of mesoporous bioglass (MBG). The uncoated porous zirconia had high porosities of 60.1% to 63.8%, and most macropores were interconnected, with pore sizes of 0.5-0.8 mm. The porous zirconia had compressive strengths of 9.07-9.90 MPa, and the average coating thickness was about 7 μm. There was no significant change in compressive strength for the porous zirconia with the mesoporous bioglass coating. A bone marrow stromal cell (BMSC) proliferation test showed that both uncoated and coated zirconia scaffolds have good biocompatibility. Scanning electron microscope (SEM) micrographs and compositional analysis demonstrated that after testing in simulated body fluid (SBF) for 7 days, apatite formation occurred on the coating surface. Thus, porous zirconia-based ceramics were modified with a bioactive mesoporous bioglass coating for potential biomedical applications.

Relevance: 10.00%

Abstract:

Business practices vary from one company to another, and they often need to change as business environments change. To satisfy different business practices, enterprise systems need to be customized; to keep up with ongoing changes in business practice, they need to be adapted. Because of their rigidity and complexity, the customization and adaptation of enterprise systems often take excessive time, with potential failures and budget shortfalls. Moreover, enterprise systems often hold business back because they cannot be rapidly adapted to support changes in business practice. Extensive literature has addressed this issue by identifying success or failure factors, implementation approaches, and project management strategies. Those efforts aimed to learn lessons from post-implementation experiences to help future projects. This research looks at the issue from a different angle: it delivers a systematic method for developing flexible enterprise systems which can be easily tailored to different business practices or rapidly adapted when business practices change. First, this research examines the role of system models in the context of enterprise system development, and the relationship of system models to software programs in the contexts of computer aided software engineering (CASE), model driven architecture (MDA) and workflow management systems (WfMS). Then, by applying the analogical reasoning method, this research initiates a concept of model driven enterprise systems. Their novelty is that system models are extracted from software programs and kept independent of them. In the paradigm of model driven enterprise systems, system models act as instructors that guide and control the behavior of software programs, and software programs function by interpreting the instructions in system models.
This mechanism creates the opportunity to tailor such a system by changing its system models. To make this possible, system models should be represented in a language which can be easily understood by human beings and effectively interpreted by computers. In this research, various semantic representations are investigated to support model driven enterprise systems. The significance of this research is 1) the transplantation of the successful structure for flexibility in modern machines and WfMS to enterprise systems; and 2) the advancement of MDA by extending the role of system models from guiding system development to controlling system behaviors. This research contributes to the area of enterprise systems from three perspectives: 1) a new paradigm of enterprise systems, in which enterprise systems consist of two essential elements, system models and software programs, which are loosely coupled and can exist independently; 2) semantic representations, which can effectively represent business entities, entity relationships, business logic and information processing logic in a semantic manner, and which are the key enabling techniques of model driven enterprise systems; and 3) a brand new role for system models: traditionally their role is to guide developers in writing system source code, whereas this research promotes them to controlling the behaviors of enterprise systems.
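The model-as-instructor idea above can be sketched in a few lines. The following is a hypothetical illustration (the names `APPROVAL_MODEL` and `run`, and the rule itself, are invented, not from the research): changing only the declarative system model re-tailors the system's behavior while the interpreting program stays unchanged.

```python
# Hypothetical sketch (names invented, not from the research): a tiny engine
# whose behavior is controlled entirely by a declarative system model, so the
# system is tailored by editing the model, not the program code.

APPROVAL_MODEL = {
    "steps": ["submit", "review", "approve"],
    # business rule attached to the "review" step of this hypothetical model
    "rules": {"review": lambda order: order["amount"] < 10_000},
}

def run(model, order):
    """Interpret the model: execute each step, applying any attached rule."""
    for step in model["steps"]:
        rule = model["rules"].get(step)
        if rule is not None and not rule(order):
            return f"rejected at {step}"
    return "completed"

print(run(APPROVAL_MODEL, {"amount": 5_000}))
print(run(APPROVAL_MODEL, {"amount": 50_000}))
```

Swapping in a different model, say by adding a step or changing the rule's threshold, changes system behavior with no change to the interpreter code, which is the flexibility the paradigm aims for.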

Relevance: 10.00%

Abstract:

This case study explored how a group of primary school teachers in Papua New Guinea (PNG) understood Outcomes-based Education (OBE). OBE measures students' learning against specific outcomes. These outcomes are derived from a country's vision of the kind of citizen that the education system should produce. While countries such as Australia, South Africa, New Zealand and the United States have abandoned OBE, others such as PNG have adopted it in various ways. How teachers understand OBE in PNG is important because such understandings are likely to influence how they implement the OBE curriculum. There has been no research to date which has investigated PNG primary school teachers' understandings of and experiences with OBE. This study used a single exploratory case study design to investigate how twenty primary school teachers from the National Capital District (NCD) in PNG understood OBE. The study, underpinned by an interpretivist paradigm, explored the research question: How do primary school teachers understand outcomes-based education in PNG? The data comprised surveys, in-depth interviews and documents, and were analysed thematically and using explanation-building techniques. The findings revealed that OBE is viewed by teachers as a way to equip them with additional strategies for planning and programming, teaching and learning, and assessment. Teachers also described how OBE enabled both students and teachers to become more engaged and develop positive attitudes towards teaching and learning. There was also a perception that OBE enhanced students' future life skills through increased local community support. While some teachers commented on how the OBE reforms provided them with increased professional development opportunities, the greatest impediment to implementing OBE was perceived to be a lack of sufficient teaching and learning resources. The process of planning and programming classroom activities was also regarded as onerous.
Some teachers indicated that they had been required to implement OBE without adequate in-service training support. The social constructivist theory of knowledge which underpins OBE's student-centred pedagogy can cause tensions within PNG's cultural contexts of teaching and learning, and teachers need to be aware of these tensions when conducting peer or group learning under OBE in PNG. By exploring how these PNG primary teachers understood OBE, the study highlighted how teachers engaged with OBE concepts when interpreting syllabus documents and how they applied these concepts to the curriculum. Identifying differences in teacher understanding of OBE provides guidance both for the design of materials to support the implementation of OBE and for the design of in-service training. Thus, the outcomes of this study will inform educators about the implementation of OBE in PNG and provide much-needed insight into how a mandated curriculum and pedagogical reform impacts teachers' practices in PNG.

Relevance: 10.00%

Abstract:

Dashboards are expected to improve decision making by amplifying cognition and capitalizing on human perceptual capabilities. Hence, interest in dashboards has increased recently, which is also evident from the proliferation of dashboard solution providers in the market. Despite dashboards' popularity, little is known about the extent of their effectiveness, i.e. what types of dashboards work best for different users or tasks. In this paper, we conduct a comprehensive multidisciplinary literature review with the aim of identifying the critical issues organizations might need to consider when implementing dashboards. Dashboards are likely to succeed and solve the problems of presentation format and information load when certain visualization principles and features are present (e.g. a high data-ink ratio and drill-down features). We recommend that dashboards come with some level of flexibility, i.e. allow users to switch between alternative presentation formats. Some theory-driven guidance through pop-ups and warnings can also help users to select an appropriate presentation format. Given the dearth of research on dashboards, we conclude the paper with a research agenda that could guide future studies in this area.

Relevance: 10.00%

Abstract:

This paper provides an overview of contemporary information literacy research and practice. While the content is highly selective, the intention has been to highlight international and Australian developments which have achieved significant recognition, which are representative of similar trends in other places, or which are unique in some way. There are three main foci in the paper: firstly, an exploration of ways of interpreting the idea of information literacy; secondly, a synthesis of various efforts to seek new directions in educational, community and workplace contexts, beginning with the major initiatives being undertaken in the United States; and thirdly, an introduction to some recent research, concluding with a summary of my own investigation into different ways of experiencing information literacy.

Relevance: 10.00%

Abstract:

Aim: In this paper we discuss the use of the Precede-Proceed model when investigating health promotion options for breast cancer survivors. Background: Adherence to recommended health behaviors can optimize well-being after cancer treatment. Guided by the Precede-Proceed approach, we studied the behaviors of breast cancer survivors in our health service area. Data sources: The interview data from the cohort of breast cancer survivors are used in this paper to illustrate the use of Precede-Proceed in this nursing research context. Interview data were collected from June to December 2009. We also searched Medline, CINAHL, PsychInfo and PsychExtra up to 2010 for relevant literature in English to interrogate the data from other theoretical perspectives. Discussion: The Precede-Proceed model is theoretically complex. The deductive analytic process guided by the model usefully explained some of the health behaviors of cancer survivors, although it could not explicate many other findings. A complementary inductive approach to the analysis, and subsequent interpretation by way of Uncertainty in Illness Theory and other psychosocial perspectives, provided a comprehensive account of the qualitative data and resulted in contextually relevant recommendations for nursing practice. Implications for nursing: Nursing researchers using Precede-Proceed should maintain theoretical flexibility when interpreting qualitative data. Perspectives not embedded in the model might need to be considered to ensure that the data are analyzed in a contextually relevant way. Conclusion: Precede-Proceed provides a robust framework for nursing researchers investigating health promotion in cancer survivors; however, theoretical lenses additional to those embedded in the model can enhance data interpretation.

Relevance: 10.00%

Abstract:

Continuous user authentication with keystroke dynamics uses character sequences as features. Since users can type characters in any order, it is imperative to find character sequences (n-graphs) that are representative of user typing behavior. Contemporary feature selection approaches do not guarantee selecting frequently typed features, which may cause less accurate statistical user representation; furthermore, the selected features do not inherently reflect user typing behavior. We propose four statistics-based feature selection techniques that mitigate the limitations of existing approaches. The first technique selects the most frequently occurring features. The other three consider different user typing behaviors by selecting: n-graphs that are typed quickly; n-graphs that are typed with consistent time; and n-graphs that have large time variance among users. We use Gunetti's keystroke dataset and the k-means clustering algorithm for our experiments. The results show that, among the proposed techniques, the most-frequent feature selection technique can effectively find user-representative features. We further substantiate our results by comparing the most-frequent feature selection technique with three existing approaches (popular Italian words, common n-graphs, and least frequent n-graphs). We find that it performs better than the existing approaches after selecting a certain number of most-frequent n-graphs.
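As a rough illustration of the most-frequent technique (a sketch with an invented typing log, not Gunetti's dataset), n-graph frequencies can be counted directly and the top-k kept as features:

```python
# Sketch of most-frequent n-graph selection; the typing log is invented.
from collections import Counter

def most_frequent_ngraphs(typed_text, n=2, k=5):
    """Keep the k most frequently occurring n-graphs as features."""
    counts = Counter(typed_text[i:i + n] for i in range(len(typed_text) - n + 1))
    return [g for g, _ in counts.most_common(k)]

log = "the then there the other then"   # invented typing log
print(most_frequent_ngraphs(log, n=2, k=3))
```

The other three proposed techniques would replace the frequency count with a timing statistic per n-graph (mean latency, latency variance within a user, or latency variance across users) and rank on that instead.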

Relevance: 10.00%

Abstract:

The management of models over time in many domains requires different constraints to apply to some parts of the model as it evolves. Using EMF and its meta-language Ecore, the development of model management code and tools usually relies on the metamodel having some constraints, such as attribute and reference cardinalities and changeability, set in the least constrained way that any model user will require. Stronger versions of these constraints can then be enforced in code, or by attaching additional constraint expressions, and their evaluation engines, to the generated model code. We propose a mechanism that allows variations to the constraining meta-attributes of metamodels, to allow enforcement of different constraints at different lifecycle stages of a model. We then discuss the implementation choices within EMF to support the validation of a state-specific metamodel on model graphs when changing states, as well as the enforcement of state-specific constraints when executing model change operations.
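The idea of stage-dependent constraint strength can be sketched outside EMF. The following fragment is a hypothetical, language-agnostic analogue (the paper itself works with Ecore meta-attributes and generated Java code; the names here are invented), in which the same cardinality-style constraint takes a stricter value at a later lifecycle stage:

```python
# Hypothetical analogue of stage-specific metamodel constraints: the same
# lower-bound cardinality takes different values at different lifecycle stages.
STAGE_CONSTRAINTS = {
    "draft":     {"min_authors": 0},   # least constrained: editing in progress
    "published": {"min_authors": 1},   # stricter cardinality once finalized
}

def validate(model, stage):
    """Check the model against the constraint variant for its current stage."""
    rules = STAGE_CONSTRAINTS[stage]
    return len(model["authors"]) >= rules["min_authors"]

doc = {"authors": []}
assert validate(doc, "draft")          # valid while drafting
assert not validate(doc, "published")  # fails the published-stage constraint
```

Changing a model's state then amounts to re-validating it against the metamodel variant for the new state, which is the transition check the paper discusses.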

Relevance: 10.00%

Abstract:

The research objectives of this thesis were to contribute to Bayesian statistical methodology, in particular to risk assessment methodology and to spatial and spatio-temporal methodology, by modelling error structures using complex hierarchical models. Specifically, I hoped to consider two applied areas and use these applications as a springboard for developing new statistical methods, as well as undertaking analyses which might answer particular applied questions. Thus, this thesis considers a series of models, firstly in the context of risk assessments for recycled water, and secondly in the context of water usage by crops. The research objective was to model error structures using hierarchical models in two problems: risk assessment analyses for wastewater and, in a four-dimensional dataset, assessing differences between cropping systems over time and over three spatial dimensions. The aim was to use the simplicity and insight afforded by Bayesian networks to develop appropriate models for risk scenarios, and to use Bayesian hierarchical models to explore the necessarily complex modelling of four-dimensional agricultural data. The specific objectives of the research were: to develop a method for calculating credible intervals for the point estimates of Bayesian networks; to develop a model structure incorporating all the experimental uncertainty associated with various constants, thereby allowing the calculation of more credible credible intervals for a risk assessment; to model a single day's data from the agricultural dataset in a way that satisfactorily captured the complexities of the data; to build a model for several days' data, in order to consider how the full data might be modelled; and finally to build a model for the full four-dimensional dataset and to consider the time-varying nature of the contrast of interest, having satisfactorily accounted for possible spatial and temporal autocorrelations.
This work forms five papers, two of which have been published, two submitted, and the final paper still in draft. The first two objectives were met by recasting the risk assessments as directed acyclic graphs (DAGs). In the first case, we elicited uncertainty for the conditional probabilities needed by the Bayesian net, incorporated these into a corresponding DAG, and used Markov chain Monte Carlo (MCMC) to find credible intervals for all the scenarios and outcomes of interest. In the second case, we incorporated the experimental data underlying the risk assessment constants into the DAG, and also treated some of that data as an 'errors-in-variables' problem [Fuller, 1987]. This illustrated a simple method for incorporating experimental error into risk assessments. In considering one day of the three-dimensional agricultural data, it became clear that geostatistical models or conditional autoregressive (CAR) models over the three dimensions were not the best way to approach the data. Instead, CAR models were used with neighbours only in the same depth layer. This gave flexibility to the model, allowing both the spatially structured and non-structured variances to differ at all depths. We call this model the CAR layered model. Given the experimental design, the fixed part of the model could have been modelled as a set of means by treatment and by depth, but doing so allows little insight into how the treatment effects vary with depth. Hence, a number of essentially non-parametric approaches were taken to see the effects of depth on treatment, with the model of choice incorporating an errors-in-variables approach for depth in addition to a non-parametric smooth. The statistical contribution here was the introduction of the CAR layered model; the applied contribution was the analysis of moisture over depth and the estimation of the contrast of interest together with its credible intervals.
These models were fitted using WinBUGS [Lunn et al., 2000]. The work in the fifth paper deals with the fact that with large datasets the use of WinBUGS becomes more problematic because of its highly correlated term-by-term updating. In this work, we introduce a Gibbs sampler with block updating for the CAR layered model. The Gibbs sampler was implemented by Chris Strickland using pyMCMC [Strickland, 2010]. This framework is then used to consider five days' data, and we show that moisture in the soil for all the various treatments reaches levels particular to each treatment at a depth of 200 cm and thereafter stays constant, albeit with variances that increase with depth. In an analysis across three spatial dimensions and across time, there are many interactions of time and the spatial dimensions to be considered. Hence, we chose to use a daily model and to repeat the analysis at all time points, effectively creating an interaction model of time by the daily model. Such an approach allows great flexibility. However, it does not allow insight into the way in which the parameter of interest varies over time. Hence, a two-stage approach was also used, with estimates from the first stage being analysed as a set of time series. We see this spatio-temporal interaction model as a useful approach to data measured across three spatial dimensions and time, since it does not assume additivity of the random spatial or temporal effects.
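As a minimal illustration of one recurring step in this thesis, summarising MCMC draws of a quantity of interest by a posterior mean and a 95% credible interval, the percentile method can be sketched as follows (the draws here are simulated stand-ins, not output from WinBUGS or pyMCMC):

```python
# Sketch: percentile-based 95% credible interval from MCMC-style draws.
import random
import statistics

random.seed(1)
# stand-in posterior draws; in the thesis these would come from WinBUGS/pyMCMC
draws = sorted(random.gauss(2.0, 0.5) for _ in range(10_000))

lo = draws[int(0.025 * len(draws))]   # 2.5th percentile
hi = draws[int(0.975 * len(draws))]   # 97.5th percentile
print(f"posterior mean {statistics.mean(draws):.2f}, 95% CrI ({lo:.2f}, {hi:.2f})")
```

The same summary applies whether the draws describe a Bayesian-network point estimate or a treatment contrast from the CAR layered model; only the sampler producing the draws differs.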

Relevance: 10.00%

Abstract:

Defence organisations perform information security evaluations to confirm that electronic communications devices are safe to use in security-critical situations. Such evaluations include tracing all possible dataflow paths through the device, but this process is tedious and error-prone, so automated reachability analysis tools are needed to make security evaluations faster and more accurate. Previous research has produced a tool, SIFA, for dataflow analysis of basic digital circuitry, but it cannot analyse dataflow through microprocessors embedded within the circuit, since this depends on the software they run. We have developed a static analysis tool that produces SIFA-compatible dataflow graphs from embedded microcontroller programs written in C. In this paper we present a case study which shows how this new capability supports combined hardware and software dataflow analyses of a security-critical communications device.
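The kind of dataflow graph such a tool produces can be sketched in miniature. The following is a hypothetical toy (not the actual tool or SIFA's format) that extracts a dataflow graph from straight-line assignments and answers reachability queries over it:

```python
# Toy sketch: build a dataflow graph from assignments, then check reachability.
import re

def dataflow_edges(program):
    """Map each assigned variable to the variables its value derives from."""
    edges = {}
    for line in program.strip().splitlines():
        target, expr = line.split("=")
        edges[target.strip()] = set(re.findall(r"[a-z_]\w*", expr))
    return edges

def reaches(edges, src, dst, seen=frozenset()):
    """True if data can flow from src into dst through the assignments."""
    if dst == src:
        return True
    return any(reaches(edges, src, s, seen | {dst})
               for s in edges.get(dst, ()) if s not in seen)

prog = """
tmp = key
out = tmp
led = status
"""
assert reaches(dataflow_edges(prog), "key", "out")      # key can reach out
assert not reaches(dataflow_edges(prog), "key", "led")  # led independent of key
```

A real tool must additionally handle control flow, pointers and peripheral registers, which is where analysing C for embedded microcontrollers becomes hard.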

Relevance: 10.00%

Abstract:

Data flow analysis techniques can be used to help assess threats to data confidentiality and integrity in security-critical program code. However, a fundamental weakness of static analysis techniques is that they overestimate the ways in which data may propagate at run time. Discounting large numbers of these false-positive data flow paths wastes an information security evaluator's time and effort. Here we show how to automatically eliminate some false-positive data flow paths by precisely modelling how classified data is blocked by certain expressions in embedded C code. We present a library of detailed data flow models of individual expression elements and an algorithm for introducing these components into conventional data flow graphs. The resulting models can be used to accurately trace byte-level or even bit-level data flow through expressions that are normally treated as atomic. This allows us to identify expressions that safely downgrade their classified inputs and thereby eliminate false-positive data flow paths from the security evaluation process. To validate the approach we have implemented and tested it in an existing data flow analysis toolkit.
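As a toy illustration of the bit-level blocking idea (an invented example, not the paper's model library): masking a classified byte with a constant determines exactly which bit positions can propagate through the expression, so an expression such as `x & 0x0F` downgrades the high nibble of its input.

```python
# Toy model of bit-level blocking by a C masking expression `value & mask`.
def propagated_bits(mask):
    """Bit positions of a byte that can survive `value & mask`."""
    return {bit for bit in range(8) if mask & (1 << bit)}

# masking with 0x0F blocks the high nibble of a classified byte
assert propagated_bits(0x0F) == {0, 1, 2, 3}
# a constant-zero mask blocks everything: a total downgrade of the input
assert propagated_bits(0x00) == set()
```

An analysis that treats `x & 0x0F` as atomic must assume all eight bits of `x` flow onward; the bit-level model shows only four can, which is how false-positive paths get pruned.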

Relevance: 10.00%

Abstract:

Usability is a multi-dimensional characteristic of a computer system. This paper focuses on usability as a measurement of interaction between the user and the system. The research employs a task-oriented approach to evaluate the usability of a meta search engine, which encourages and accepts queries of unlimited size expressed in natural language. A variety of conventional metrics developed by academic and industrial research, including ISO standards, are applied to the information retrieval process, which consists of sequential tasks ranging from formulating (long) queries to interpreting and retaining search results. Results of the evaluation and analysis of the operation log indicate that obtaining advanced search engine results can be accomplished simultaneously with enhancing the usability of the interactive process. In conclusion, we discuss implications for interactive information retrieval system design and directions for future usability research. © 2008 Academy Publisher.
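Two of the conventional metrics alluded to above, effectiveness and efficiency in the spirit of ISO 9241-11, can be computed directly from task outcomes; the figures in this sketch are invented, not the paper's operation-log data:

```python
# Sketch of two conventional usability metrics; session figures are invented.
def effectiveness(completed_tasks, attempted_tasks):
    """Share of tasks completed successfully."""
    return completed_tasks / attempted_tasks

def efficiency(completed_tasks, total_time_minutes):
    """Tasks completed per minute of interaction."""
    return completed_tasks / total_time_minutes

# invented session figures: 18 of 20 tasks completed in 45 minutes
print(f"effectiveness: {effectiveness(18, 20):.0%}")
print(f"efficiency: {efficiency(18, 45):.2f} tasks/min")
```

In a task-oriented evaluation, such per-task figures would be computed from the operation log for each stage of the retrieval process, from query formulation through to result interpretation.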

Relevance: 10.00%

Abstract:

This chapter examines the changing landscape of literacy in the early years and considers how the diverse spaces and places in which early literacy learning is promoted and takes place can be conceptualised and researched. We argue that early literacy research needs to extend beyond a language focus to become attentive to the embodied, material dimensions of learning environments. The discussion is organised in terms of three kinds of spaces within which children encounter opportunities to participate in communication and representational practices. These are domestic spaces, commercial spaces and spaces of formal education. Theories of spatiality and material semiotics provide the conceptual tools for interpreting research studies located in these spaces. Implications for educators are considered.

Relevance: 10.00%

Abstract:

With the growing number of XML documents on the Web it becomes essential to organise these XML documents effectively in order to retrieve useful information from them. A possible solution is to apply clustering to the XML documents to discover knowledge that promotes effective data management, information retrieval and query processing. However, many issues arise in discovering knowledge from these types of semi-structured documents due to their heterogeneity and structural irregularity. Most of the existing research on clustering techniques focuses on only one feature of the XML documents, either their structure or their content, due to scalability and complexity problems. The knowledge gained in the form of clusters based on structure alone or content alone is not suitable for real-life datasets. It therefore becomes essential to include both the structure and the content of XML documents in order to improve the accuracy and meaning of the clustering solution. However, the inclusion of both kinds of information in the clustering process results in a huge overhead for the underlying clustering algorithm because of the high dimensionality of the data. The overall objective of this thesis is to address these issues by: (1) proposing methods that utilise frequent pattern mining techniques to reduce the dimensionality; (2) developing models to effectively combine the structure and content of XML documents; and (3) utilising the proposed models in clustering. This research first determines structural similarity in the form of frequent subtrees and then uses these frequent subtrees to represent the constrained content of the XML documents in order to determine content similarity. A clustering framework with two types of models, implicit and explicit, is developed. The implicit model uses a Vector Space Model (VSM) to combine the structure and the content information.
The explicit model uses a higher-order model, namely a third-order Tensor Space Model (TSM), to explicitly combine the structure and the content information. This thesis also proposes a novel incremental technique to decompose large-sized tensor models and to utilise the decomposed solution for clustering the XML documents. The proposed framework and its components were extensively evaluated on several real-life datasets exhibiting extreme characteristics to understand the usefulness of the proposed framework in real-life situations. Additionally, this research evaluates the outcome of the clustering process on the collection selection problem in information retrieval on the Wikipedia dataset. The experimental results demonstrate that the proposed frequent pattern mining and clustering methods outperform the related state-of-the-art approaches. In particular, the proposed framework of utilising frequent structures to constrain the content shows an improvement in accuracy over content-only and structure-only clustering results. The scalability experiments conducted on large-scale datasets clearly show the strengths of the proposed methods over state-of-the-art methods. In particular, this thesis contributes to effectively combining the structure and the content of XML documents for clustering, in order to improve the accuracy of the clustering solution. In addition, it addresses research gaps in frequent pattern mining by generating efficient and concise frequent subtrees with various node relationships that can be used in clustering.
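The implicit (VSM) combination of structure and content can be sketched roughly as follows; the frequent subtrees, vocabulary and helper names here are invented for illustration, not taken from the thesis:

```python
# Rough sketch of the implicit model: one vector per XML document that
# concatenates frequent-subtree indicators (structure) with term counts
# (content); these vectors would then be fed to a clustering algorithm.
from collections import Counter

# assumed to have been mined in an earlier frequent-pattern step (invented)
FREQUENT_SUBTREES = ["article/title", "article/author"]
VOCABULARY = ["xml", "cluster", "wiki"]

def to_vector(doc_paths, doc_terms):
    """Concatenate structure indicators with content term counts."""
    structure = [1 if p in doc_paths else 0 for p in FREQUENT_SUBTREES]
    counts = Counter(doc_terms)
    content = [counts[t] for t in VOCABULARY]
    return structure + content

print(to_vector({"article/title"}, ["xml", "xml", "cluster"]))
```

Restricting the content counts to terms occurring under the frequent subtrees, as the thesis proposes, is what keeps the combined representation from exploding in dimensionality.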