469 results for Model information


Relevance:

30.00%

Publisher:

Abstract:

In images with low contrast-to-noise ratio (CNR), the information gain from the observed pixel values can be insufficient to distinguish foreground objects. A Bayesian approach to this problem is to incorporate prior information about the objects into a statistical model. A method for representing spatial prior information as an external field in a hidden Potts model is introduced. This prior distribution over the latent pixel labels is a mixture of Gaussian fields, centred on the positions of the objects at a previous point in time. It is particularly applicable in longitudinal imaging studies, where the manual segmentation of one image can be used as a prior for automatic segmentation of subsequent images. The method is demonstrated by application to cone-beam computed tomography (CT), an imaging modality that exhibits distortions in pixel values due to X-ray scatter. The external field prior results in a substantial improvement in segmentation accuracy, reducing the mean pixel misclassification rate for an electron density phantom from 87% to 6%. The method is also applied to radiotherapy patient data, demonstrating how to derive the external field prior in a clinical context.
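Schematically (the notation below is assumed for illustration, not quoted from the paper), the external field enters the hidden Potts prior over the latent pixel labels z as an additional singleton term alongside the usual neighbourhood interaction:

```latex
% Hidden Potts prior with external field alpha; beta is the inverse temperature,
% i ~ l ranges over neighbouring pixel pairs, delta is the Kronecker delta.
p(z \mid \alpha, \beta) \propto \exp\Big\{ \sum_{i} \alpha_i(z_i)
    + \beta \sum_{i \sim \ell} \delta(z_i, z_\ell) \Big\}
```

Here alpha_i(k) encodes the prior weight for label k at pixel i, obtained from the mixture of Gaussian fields centred on the objects' positions at the previous time point.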

Relevance:

30.00%

Publisher:

Abstract:

Narrative text is a useful way of identifying injury circumstances from routine emergency department data collections. Automatically classifying narratives with machine learning techniques is a promising approach that can reduce the tedious manual classification process. Existing work focuses on Naive Bayes, which does not always offer the best performance. This paper proposes Matrix Factorization approaches, along with a learning enhancement process, for this task. The results are compared with the performance of various other classification approaches. The impact of parameter settings on the classification results for a medical text dataset is discussed. With the right choice of dimension k, the Non-negative Matrix Factorization-based method achieves a 10-fold cross-validation accuracy of 0.93.
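As a rough illustration of the general approach (a minimal sketch with toy data; the corpus, labels and parameter values are placeholders, not the paper's setup), narratives can be projected onto k latent NMF factors and classified in that factor space:

```python
# Sketch of NMF-based narrative classification; illustrative toy data only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

narratives = [
    "fell from ladder while cleaning gutters",
    "slipped on wet floor and fell",
    "fell down stairs carrying boxes",
    "tripped over cable and fell",
    "burned hand on hot stove",
    "scalded by boiling water",
    "burn from hot oil while cooking",
    "touched hot iron and burned finger",
]
labels = ["fall"] * 4 + ["burn"] * 4

# Project TF-IDF vectors onto k latent factors, then classify in factor space.
k = 4  # the "right dimension k" is chosen empirically in the paper
clf = make_pipeline(
    TfidfVectorizer(),
    NMF(n_components=k, random_state=0, max_iter=500),
    LogisticRegression(max_iter=1000),
)
clf.fit(narratives, labels)
print(clf.predict(["fell off a chair", "burned arm on radiator"]))
```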

Relevance:

30.00%

Publisher:

Abstract:

School curriculum change processes have traditionally been managed internally. However, in Queensland, Australia, in response to the current high-stakes accountability regime, more and more principals are outsourcing this work to external change agents (ECAs). In 2009, one of the authors (a university lecturer and ECA) developed a curriculum change model, the Controlled Rapid Approach to Curriculum Change (CRACC), specifically outlining the involvement of an ECA in the initiation phase of a school's curriculum change process. The purpose of this paper is to extend the CRACC model by unpacking the implementation phase, drawing on data from a pilot study of a single school. Interview responses revealed that during the implementation phase, teachers wanted to be kept informed of the wider educational context; use data to constantly track students; relate pedagogical practices to testing practices; share information between departments and professional levels; and take ownership of whole-school performance. It is suggested that the findings would be transferable to other school settings and to internal leadership of curriculum change. The paper also strikes a chord of concern: do teachers operating in such an accountability regime live their professional lives within this corporate and globalised ideology, whether they want to or not?

Relevance:

30.00%

Publisher:

Abstract:

Automatic Vehicle Identification (AVI) systems are increasingly used as a new source of travel information. Because these systems long relied on expensive new technologies, only a few detectors were scattered across a network, making travel-time and average-speed estimation their main objectives. As prices dropped, the opportunity to build dense AVI networks arose, as in Brisbane, where more than 250 Bluetooth detectors are now installed. As a consequence, this technology represents an effective means of acquiring accurate time-dependent Origin-Destination information. To obtain reliable estimates, however, a number of issues need to be addressed. Some of these problems stem from the structure of a network of isolated detectors itself, while others are inherent to Bluetooth technology (overlapping detection areas, missing detections, ...). The aim of this paper is threefold. First, after presenting the level of detail that can be reached with a network of isolated detectors, we show how we modelled Brisbane's network, keeping only the information valuable for the retrieval of trip information. Second, we give a comprehensive overview of the issues inherent to Bluetooth technology and propose a methodology for cleansing, correcting and aggregating Bluetooth data in order to retrieve the itineraries of individual Bluetooth-equipped vehicles. Last, through a comparison with results from the Brisbane Transport Strategic Model, we highlight the opportunities and the limits of Bluetooth detector networks. We postulate that the methods introduced in this paper are the first crucial steps towards computing accurate Origin-Destination matrices in urban road networks.
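As a minimal sketch of one cleansing step such a methodology implies (field names and the 30-second gap are illustrative assumptions, not the paper's parameters), repeated sightings of a device inside a detector's overlapping zone can be collapsed into single passages:

```python
# Sketch: collapse repeated sightings of a Bluetooth device at the same
# detector into single passages through its detection zone.
from collections import defaultdict

def collapse_detections(records, max_gap=30.0):
    """records: iterable of (device_id, detector_id, timestamp) tuples,
    timestamps in seconds. Returns (device, detector, first_seen, last_seen)
    passages, one per dwell inside a detection zone."""
    by_key = defaultdict(list)
    for dev, det, t in records:
        by_key[(dev, det)].append(t)
    passages = []
    for (dev, det), times in by_key.items():
        times.sort()
        start = prev = times[0]
        for t in times[1:]:
            if t - prev > max_gap:  # gap long enough to count as a new passage
                passages.append((dev, det, start, prev))
                start = t
            prev = t
        passages.append((dev, det, start, prev))
    return sorted(passages, key=lambda p: p[2])

# A device's itinerary is then its time-ordered sequence of detector passages.
```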

Relevance:

30.00%

Publisher:

Abstract:

While organizations strive to leverage the vast information generated daily from social media platforms, and decision makers are keen to identify and exploit its value, the quality of this information remains uncertain. Past research on information quality criteria and evaluation issues in social media is largely disparate, incomparable and lacking any common theoretical basis. To address this gap, this study adapts existing guidelines and exemplars of construct conceptualization in information systems research to deductively define information quality and related criteria in the social media context. Building on a notion of information derived from semiotic theory, this paper suggests a general conceptualization of information quality in the social media context that can be used in future research to develop more context-specific conceptual models.

Relevance:

30.00%

Publisher:

Abstract:

Many organizations realize that increasing amounts of data (“Big Data”) need to be dealt with intelligently in order to compete with other organizations in terms of efficiency, speed and services. The goal is not to collect as much data as possible, but to turn event data into valuable insights that can be used to improve business processes. However, data-oriented analysis approaches fail to relate event data to process models. At the same time, large organizations are generating piles of process models that are disconnected from the real processes and information systems. In this chapter we propose to manage large collections of process models and event data in an integrated manner. Observed and modeled behavior need to be continuously compared and aligned. This results in a “liquid” business process model collection, i.e. a collection of process models that is in sync with the actual organizational behavior. The collection should self-adapt to evolving organizational behavior and incorporate relevant execution data (e.g. process performance and resource utilization) extracted from the logs, thereby allowing insightful reports to be produced from factual organizational data.
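As a toy sketch of the comparison step (the directly-follows representation and the diagnostics are simplifying assumptions; real process mining tools align logs against much richer models), observed behaviour can be diffed against modelled behaviour to flag where the collection has drifted out of sync:

```python
# Toy sketch of keeping a model "liquid": compare the directly-follows
# relations in recent event data against those allowed by the model.
def directly_follows(traces):
    """traces: list of activity sequences, e.g. [["a", "b", "c"], ...]."""
    return {(t[i], t[i + 1]) for t in traces for i in range(len(t) - 1)}

model_df = {("register", "check"), ("check", "decide"), ("decide", "notify")}
observed = [
    ["register", "check", "decide", "notify"],
    ["register", "check", "escalate", "decide", "notify"],  # behaviour drift
]

observed_df = directly_follows(observed)
unmodelled = observed_df - model_df  # candidates for updating the model
unused = model_df - observed_df      # possibly obsolete model parts
fitness = 1 - len(unmodelled) / len(observed_df)
print(f"fitness={fitness:.2f}, unmodelled={unmodelled}, unused={unused}")
```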

Relevance:

30.00%

Publisher:

Abstract:

Advances in neural network language models have demonstrated that these models can effectively learn representations of word meaning. In this paper, we explore a variation of neural language models that can learn on concepts taken from structured ontologies and extracted from free text, rather than directly from free-text terms. This model is employed for the task of measuring semantic similarity between medical concepts, a task that is central to a number of techniques in medical informatics and information retrieval. The model is built with two medical corpora (journal abstracts and patient records) and empirically validated on two ground-truth datasets of human-judged concept pairs assessed by medical professionals. Empirically, our approach correlates closely with expert human assessors (≈ 0.9) and outperforms a number of state-of-the-art benchmarks for medical semantic similarity. The demonstrated superiority of this model in providing an effective semantic similarity measure is promising in that it may translate into effectiveness gains for techniques in medical information retrieval and medical informatics (e.g., query expansion and literature-based discovery).
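A minimal sketch of the general idea (the corpus, concept IDs and hyperparameters below are placeholders, not the paper's setup): train a skip-gram model on documents rewritten as sequences of ontology concept IDs, then compare concepts by vector similarity:

```python
# Sketch: embed ontology concepts (e.g. UMLS-style CUIs substituted for the
# raw text) with word2vec and score pairs by cosine similarity.
from gensim.models import Word2Vec

# Each "sentence" is a document rewritten as its extracted concept IDs.
concept_corpus = [
    ["C0011849", "C0020538", "C0027051"],  # illustrative concept sequences
    ["C0011849", "C0035078", "C0020538"],
    ["C0027051", "C0020538", "C0011849"],
]
model = Word2Vec(concept_corpus, vector_size=50, window=5,
                 min_count=1, sg=1, epochs=50)
print(model.wv.similarity("C0011849", "C0020538"))  # similarity score in [-1, 1]
```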

Relevance:

30.00%

Publisher:

Abstract:

Purpose: The purpose of this research is to explore the idea of the participatory library in higher education settings. This research aims to address the question: what is a participatory university library? Design/methodology/approach: A grounded theory approach was adopted. In-depth individual interviews were conducted with two diverse groups of participants: ten library staff members and six library users. Data collection and analysis were carried out simultaneously and complied with Straussian grounded theory principles and techniques. Findings: Three core categories representing the participatory library were found: "community", "empowerment" and "experience". Each category was thoroughly delineated via sub-categories, properties and dimensions that together create a foundation for the participatory library. A participatory library model was also developed, together with an explanation of its building blocks, providing a deeper understanding of the participatory library phenomenon. Research limitations: The research focuses on a specific library system, i.e., academic libraries, so the results may not be readily applicable to public, special and school library contexts. Originality/value: This is the first empirical study to develop a participatory library model. It provides librarians, library managers, researchers, library students and the library community with a holistic picture of the contemporary library.

Relevance:

30.00%

Publisher:

Abstract:

This thesis targets a challenging issue: enhancing users' experience of massive and overloaded web information. The novel pattern-based topic model proposed in this thesis can generate high-quality multi-topic user interest models by incorporating statistical topic modelling and pattern mining. We have successfully applied the pattern-based topic model to the fields of both information filtering and information retrieval. The success of the proposed model in finding the information most relevant to users comes mainly from its precise semantic representation of documents and its accurate classification of topics at both the document level and the collection level.
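A minimal sketch of the statistical topic modelling ingredient only (toy corpus and settings; the thesis's pattern-mining enrichment is summarised in the closing comments, not implemented here):

```python
# Sketch: fit LDA topics over a toy corpus; the pattern-based model builds on
# this kind of topic structure rather than on plain word lists.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "stock market trading shares investors",
    "market prices shares trading volume",
    "football match goal team players",
    "team players coach football season",
]
vec = CountVectorizer()
X = vec.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

terms = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-4:][::-1]]
    print(f"topic {k}: {top}")
# A pattern-based variant would then mine frequent term patterns within the
# documents assigned to each topic and use those patterns, weighted by topic
# distributions, as the user interest representation.
```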

Relevance:

30.00%

Publisher:

Abstract:

This paper introduces a modified Kano approach to analysing and classifying the quality attributes that drive student satisfaction in tertiary education. The approach provides several benefits over the traditional Kano approach. Firstly, it uses the educational institution's existing student evaluations of subjects, rather than purpose-built surveys, as the data source. Secondly, since the data source includes qualitative comments and feedback, it has the exploratory capability to identify emerging and unique attributes. Finally, since the quality attributes identified can be tied directly to students' detailed feedback, the approach enables practitioners to easily translate the results into concrete action plans. In this paper, the approach is applied to analysing 26 subjects in the information systems school of an Australian university. The approach has enabled the school to uncover new quality attributes and paves the way for other institutions to use their student evaluations to continually understand and address students' changing needs.
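As a hypothetical illustration of how comment-derived evidence could be mapped onto Kano categories (the rule set and threshold are assumptions for exposition, not the paper's coding scheme):

```python
# Hypothetical sketch: tally how often an attribute is praised when present vs
# complained about when absent, then map the pattern to a Kano category.
def kano_category(praise_when_present, complaints_when_absent, threshold=10):
    praised = praise_when_present >= threshold
    missed = complaints_when_absent >= threshold
    if praised and missed:
        return "one-dimensional"  # more is better, less is worse
    if praised:
        return "attractive"       # delights when present, not expected
    if missed:
        return "must-be"          # taken for granted, hurts when absent
    return "indifferent"

print(kano_category(praise_when_present=25, complaints_when_absent=3))   # attractive
print(kano_category(praise_when_present=2, complaints_when_absent=40))   # must-be
```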

Relevance:

30.00%

Publisher:

Abstract:

Purpose: Performance heterogeneity between collaborative infrastructure projects is typically examined by considering procurement systems and their governance mechanisms at static points in time. The literature neglects to consider the impact of dynamic learning capability, which is thought to reconfigure governance mechanisms over time in response to evolving market conditions. This conceptual paper proposes a new model to show how continuous joint learning of participant organisations improves project performance. Design/methodology/approach: There are two stages of conceptual development. In the first stage, the management literature is analysed to explain the Standard Model of dynamic learning capability, which emphasises three learning phases for organisations. This Standard Model is extended to derive a novel Circular Model of dynamic learning capability that shows a new feedback loop between performance and learning. In the second stage, the construction management literature is consulted, adding project lifecycle, stakeholder diversity and three organisational levels to the analysis, to arrive at the Collaborative Model of dynamic learning capability. Findings: The Collaborative Model should enable construction organisations to successfully adapt and perform under changing market conditions. The complexity of learning cycles results in capabilities that are imperfectly imitable between organisations, explaining performance heterogeneity on projects. Originality/value: The Collaborative Model provides a theoretically substantiated description of project performance, driven by the evolution of procurement systems and governance mechanisms. The Model's empirical value will be tested in future research.

Relevance:

30.00%

Publisher:

Abstract:

The potential benefits of shared eHealth records systems are promising for the future of improved healthcare. However, the uptake of such systems is hindered by concerns over the security and privacy of patient information. The use of Information Accountability and so-called Accountable-eHealth (AeH) systems has been proposed to balance the privacy concerns of patients with the information needs of healthcare professionals. However, a number of challenges remain before AeH systems can become a reality. Among these is the need to protect the information stored in the usage policies and provenance logs that AeH systems use to define appropriate use of information and to hold users accountable for their actions. In this paper, we discuss the privacy and security issues surrounding these accountability mechanisms, define valid access to the information they contain, discuss solutions to protect them, and model and verify an implementation of the access requirements as part of an Information Accountability Framework.
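A hypothetical sketch of the accountability mechanism (the data model and hash chaining are illustrative assumptions, not the paper's design): every access attempt is checked against a usage policy and appended, allowed or not, to a provenance log that can be audited afterwards:

```python
# Hypothetical sketch: policy-checked access with a tamper-evident provenance log.
import hashlib, json, time

policy = {"patient": "p1", "allowed_roles": {"gp", "specialist"},
          "purposes": {"treatment"}}
log = []  # provenance log; each entry chains to the previous entry's hash

def access(record_id, user, role, purpose):
    allowed = role in policy["allowed_roles"] and purpose in policy["purposes"]
    prev = log[-1]["hash"] if log else ""
    entry = {"record": record_id, "user": user, "role": role,
             "purpose": purpose, "allowed": allowed,
             "time": time.time(), "prev": prev}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    log.append(entry)  # denied attempts are logged too, for accountability
    return allowed

access("ehr-42", "dr_smith", "gp", "treatment")   # permitted, logged
access("ehr-42", "clerk_1", "admin", "billing")   # denied, still logged
```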

Relevance:

30.00%

Publisher:

Abstract:

This project examined the role that written specifications play in the building procurement process and the relationship that specifications should have with respect to the use of BIM within the construction industry. A three-part approach was developed to integrate specifications, product libraries and BIM. Typically handled by different disciplines within project teams, these provide the basis for a holistic approach to the development of building descriptions through the design process and into construction.

Relevance:

30.00%

Publisher:

Abstract:

The Source Monitoring Framework is a promising model of constructive memory, yet it fails because it is connectionist and does not allow content tagging. The Dual-Process Signal Detection Model is an improvement because it reduces mnemic qualia to a single memory signal (or degree of belief), but it still commits itself to non-discrete representation. By supposing that 'tagging' means the assignment of propositional attitudes to aggregates of mnemic characteristics, informed inductively, a discrete model becomes plausible. A Bayesian model of source monitoring accounts for the continuous variation of inputs and the assignment of prior probabilities to memory content. A modified version of the High-Threshold Dual-Process model is recommended for further source monitoring research.
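Schematically (the notation here is assumed, not taken from the thesis), the Bayesian source attribution amounts to a posterior over candidate sources S given a memory's observed characteristics c:

```latex
% Posterior over candidate sources S for a memory with characteristics c:
P(S \mid c) = \frac{P(c \mid S)\, P(S)}{\sum_{S'} P(c \mid S')\, P(S')}
```

On this reading, a source is attributed discretely when the posterior degree of belief crosses a high threshold, which is what makes a High-Threshold Dual-Process variant compatible with continuously varying inputs.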