822 results for information engine
Abstract:
In this paper, we propose an unsupervised segmentation approach, named "n-gram mutual information", or NGMI, which is used to segment Chinese documents into n-character words or phrases, using language statistics drawn from the Chinese Wikipedia corpus. The approach alleviates the tremendous effort required to prepare and maintain manually segmented Chinese text for training purposes, and to maintain ever-expanding lexicons. Previously, mutual information was used to achieve automated segmentation into 2-character words. The NGMI approach extends this to handle longer n-character words. Experiments with heterogeneous documents from the Chinese Wikipedia collection show good results.
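The classical mutual-information criterion the abstract refers to (the 2-character predecessor of NGMI, not the authors' n-gram extension) can be sketched as follows. This is a minimal illustration: the function names, the threshold of 0, and the toy corpus are all assumptions for demonstration, and the real system trains on Chinese Wikipedia statistics.

```python
import math
from collections import Counter

def train_counts(corpus):
    """Collect character unigram and adjacent-bigram counts from a corpus."""
    unigrams, bigrams = Counter(), Counter()
    for sentence in corpus:
        unigrams.update(sentence)
        bigrams.update(zip(sentence, sentence[1:]))
    return unigrams, bigrams

def pmi(a, b, unigrams, bigrams):
    """Pointwise mutual information of the adjacent character pair (a, b)."""
    if bigrams[(a, b)] == 0 or unigrams[a] == 0 or unigrams[b] == 0:
        return float("-inf")
    n_uni, n_bi = sum(unigrams.values()), sum(bigrams.values())
    p_ab = bigrams[(a, b)] / n_bi
    return math.log(p_ab / ((unigrams[a] / n_uni) * (unigrams[b] / n_uni)))

def segment(text, unigrams, bigrams, threshold=0.0):
    """Place a word boundary wherever adjacent-pair PMI drops below threshold."""
    words, current = [], text[0]
    for a, b in zip(text, text[1:]):
        if pmi(a, b, unigrams, bigrams) >= threshold:
            current += b          # the pair coheres: extend the current word
        else:
            words.append(current)  # low association: start a new word
            current = b
    words.append(current)
    return words
```

On a toy corpus where "ab" and "xy" always co-occur, `segment("abxy", ...)` splits the string into `["ab", "xy"]`, since the unseen pair "bx" has PMI of negative infinity.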
Abstract:
The Queensland Injury Surveillance Unit (QISU) has been collecting and analysing injury data in Queensland since 1988. QISU data is collected from participating emergency departments (EDs) in urban, rural and remote areas of Queensland. Using this data, QISU produces several injury bulletins per year on selected topics, providing a picture of Queensland injury, and setting this in the context of relevant local, national and international research and policy. These bulletins are used by numerous government and non-government groups to inform injury prevention and practice throughout the state. QISU bulletins are also used by local and state media to inform the general public of injury risk and prevention strategies. In addition to producing the bulletins, QISU regularly responds to requests for information from a variety of sources. These requests often require additional analysis of QISU data to tailor the response to the needs of the end user. This edition of the bulletin reviews 5 years of information requests to QISU.
Abstract:
Particle emissions, volatility, and the concentration of reactive oxygen species (ROS) were investigated for a pre-Euro I compression ignition engine to study the potential health impacts of employing ethanol fumigation technology. Engine testing was performed in two separate experimental campaigns with most testing performed at intermediate speed with four different load settings and various ethanol substitutions. A scanning mobility particle sizer (SMPS) was used to determine particle size distributions, a volatilization tandem differential mobility analyzer (V-TDMA) was used to explore particle volatility, and a new profluorescent nitroxide probe, BPEAnit, was used to investigate the potential toxicity of particles. The greatest particulate mass reduction was achieved with ethanol fumigation at full load, which contributed to the formation of a nucleation mode. Ethanol fumigation increased the volatility of particles by coating the particles with organic material or by making extra organic material available as an external mixture. In addition, the particle-related ROS concentrations increased with ethanol fumigation and were associated with the formation of a nucleation mode. The smaller particles, the increased volatility, and the increase in potential particle toxicity with ethanol fumigation may provide a substantial barrier for the uptake of fumigation technology using ethanol as a supplementary fuel.
Abstract:
Since the industrial revolution, our world has experienced rapid and unplanned industrialization and urbanization. As a result, we have had to cope with serious environmental challenges. In this context, an explanation of how smart urban ecosystems can emerge gains crucial importance. Capacity building and community involvement have always been key issues in achieving sustainable development and enhancing urban ecosystems. Considering these, this paper looks at new approaches to increase public awareness of environmental decision making. The paper discusses the role of Information and Communication Technologies (ICT), particularly Web-based Geographic Information Systems (Web-based GIS), as spatial decision support systems to aid public participatory environmental decision making. The paper also explores the potential and constraints of these Web-based tools for collaborative decision making.
Abstract:
1. Ecological data sets often use clustered measurements or use repeated sampling in a longitudinal design. Choosing the correct covariance structure is an important step in the analysis of such data, as the covariance describes the degree of similarity among the repeated observations. 2. Three methods for choosing the covariance are: the Akaike information criterion (AIC), the quasi-information criterion (QIC), and the deviance information criterion (DIC). We compared the methods using a simulation study and using a data set that explored effects of forest fragmentation on avian species richness over 15 years. 3. The overall success was 80.6% for the AIC, 29.4% for the QIC and 81.6% for the DIC. For the forest fragmentation study the AIC and DIC selected the unstructured covariance, whereas the QIC selected the simpler autoregressive covariance. Graphical diagnostics suggested that the unstructured covariance was probably correct. 4. We recommend using DIC for selecting the correct covariance structure.
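The model-selection logic behind the criteria compared in this abstract can be illustrated with the AIC, which trades goodness of fit against parameter count (AIC = 2k - 2 ln L, lower is better). This is a generic sketch, not the authors' simulation: the log-likelihoods and parameter counts below are hypothetical numbers invented purely to show the selection step.

```python
def aic(log_likelihood, n_params):
    """Akaike information criterion: 2k - 2*ln(L). Lower is better."""
    return 2 * n_params - 2 * log_likelihood

# Hypothetical fits of one model under three candidate covariance
# structures; the unstructured covariance fits best but pays a large
# parameter penalty.
candidates = {
    "independence":   {"loglik": -512.4, "k": 5},
    "autoregressive": {"loglik": -498.1, "k": 6},
    "unstructured":   {"loglik": -490.0, "k": 20},
}

scores = {name: aic(f["loglik"], f["k"]) for name, f in candidates.items()}
best = min(scores, key=scores.get)  # covariance structure with lowest AIC
```

With these invented numbers the autoregressive structure wins: its small extra cost in fit is outweighed by the 14 additional parameters the unstructured covariance requires.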
Abstract:
Intuitively, any `bag of words' approach in IR should benefit from taking term dependencies into account. Unfortunately, for years the results of exploiting such dependencies have been mixed or inconclusive. To improve the situation, this paper shows how the natural language properties of the target documents can be used to transform and enrich the term dependencies into more useful statistics. This is done in three steps. First, the term co-occurrence statistics of queries and documents are each represented by a Markov chain. The paper proves that such a chain is ergodic, and therefore its asymptotic behavior is unique, stationary, and independent of the initial state. Next, the stationary distribution is taken to model queries and documents, rather than their initial distributions. Finally, ranking is achieved following the customary language modeling paradigm. The main contribution of this paper is to argue why the asymptotic behavior of the document model is a better representation than just the document's initial distribution. A secondary contribution is to investigate the practical application of this representation as queries become increasingly verbose. In the experiments (based on Lemur's search engine substrate) the default query model was replaced by the stable distribution of the query. Modeling the query this way already resulted in significant improvements over a standard language model baseline. The results were on a par with or better than more sophisticated algorithms that use fine-tuned parameters or extensive training. Moreover, the more verbose the query, the more effective the approach seems to become.
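The stationary distribution that replaces the initial query/document distribution in this approach can be computed by power iteration on the co-occurrence transition matrix. The sketch below is illustrative only (a 3-term toy chain, not the paper's Lemur-based system); by the ergodicity result the abstract cites, iteration converges to the same distribution from any starting point.

```python
def stationary_distribution(P, tol=1e-12, max_iter=10000):
    """Power-iterate a row-stochastic transition matrix P (list of rows)
    of an ergodic Markov chain until the distribution stops changing."""
    n = len(P)
    pi = [1.0 / n] * n  # uniform start; ergodicity makes the start irrelevant
    for _ in range(max_iter):
        nxt = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
        if sum(abs(a - b) for a, b in zip(nxt, pi)) < tol:
            return nxt
        pi = nxt
    return pi

# Toy 3-term co-occurrence chain; each row sums to 1.
P = [
    [0.5, 0.3, 0.2],
    [0.2, 0.6, 0.2],
    [0.3, 0.3, 0.4],
]
pi = stationary_distribution(P)  # fixed point: pi @ P == pi
```

The resulting `pi` would then serve as the term distribution for language-model ranking, in place of the raw term frequencies.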
Abstract:
While spoken term detection (STD) systems based on word indices provide good accuracy, there are several practical applications where it is infeasible or too costly to employ an LVCSR engine. An STD system is presented, which is designed to incorporate a fast phonetic decoding front-end and be robust to decoding errors whilst still allowing for rapid search speeds. This goal is achieved through mono-phone open-loop decoding coupled with fast hierarchical phone lattice search. Results demonstrate that an STD system that is designed with the constraint of a fast and simple phonetic decoding front-end requires a compromise to be made between search speed and search accuracy.
Abstract:
Recommender systems are an effective tool for dealing with the information overload problem. Like explicit ratings and implicit behaviours such as purchases, click streams, and browsing history, tagging carries important information about a user's personal interests and preferences, which can be used to recommend personalized items. This paper explores how to utilize tagging information for personalized recommendation. Based on the distinctive three-dimensional relationships among users, tags and items, a new user profiling and similarity measure method is proposed. The experiments suggest that the proposed approach outperforms traditional collaborative filtering recommender systems that use only rating data.
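A common baseline for the kind of tag-based user profiling this abstract describes is to represent each user as a tag-frequency vector and compare users by cosine similarity. The sketch below is a generic illustration of that idea, not the paper's proposed three-dimensional method; the user names and tags are made up.

```python
import math
from collections import Counter

def cosine(p, q):
    """Cosine similarity between two tag-count profiles (Counters)."""
    shared = set(p) & set(q)
    dot = sum(p[t] * q[t] for t in shared)
    norm = (math.sqrt(sum(v * v for v in p.values()))
            * math.sqrt(sum(v * v for v in q.values())))
    return dot / norm if norm else 0.0

# Hypothetical user profiles built from their tagging activity.
alice = Counter({"jazz": 3, "vinyl": 1})
bob   = Counter({"jazz": 2, "vinyl": 2})
carol = Counter({"metal": 4})
```

Here `alice` is more similar to `bob` than to `carol`, so items tagged by `bob` would be ranked higher in `alice`'s recommendations; the paper's contribution is a richer profile that also exploits the user-tag-item structure.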
Abstract:
This chapter considers how open content licences of copyright-protected materials – specifically, Creative Commons (CC) licences – can be used by governments as a simple and effective mechanism to enable reuse of their PSI, particularly where materials are made available in digital form online or distributed on disk.
Abstract:
Information system (IS) success may be the most debated and important dependent variable in the IS field. The purpose of the present study is to address IS success by empirically assessing and comparing DeLone and McLean's (1992) and Gable et al.'s (2008) models of IS success in the context of Australian universities. The two models share some commonalities and have several important distinctions. Both integrate and interrelate multiple dimensions of IS success. Hence, it is useful to compare the models to determine which is superior, as it is not clear how IS researchers should respond to this controversy.
Abstract:
One of the major challenges facing a present day game development company is the removal of bugs from such complex virtual environments. This work presents an approach for measuring the correctness of synthetic scenes generated by a rendering system of a 3D application, such as a computer game. Our approach builds a database of labelled point clouds representing the spatiotemporal colour distribution for the objects present in a sequence of bug-free frames. This is done by converting the position that the pixels take over time into the 3D equivalent points with associated colours. Once the space of labelled points is built, each new image produced from the same game by any rendering system can be analysed by measuring its visual inconsistency in terms of distance from the database. Objects within the scene can be relocated (manually or by the application engine); yet the algorithm is able to perform the image analysis in terms of the 3D structure and colour distribution of samples on the surface of the object. We applied our framework to the publicly available game RacingGame developed for Microsoft(R) Xna(R). Preliminary results show how this approach can be used to detect a variety of visual artifacts generated by the rendering system in a professional quality game engine.
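The visual-inconsistency measure sketched in this abstract, distance of rendered samples from a database of reference points, can be illustrated minimally as a mean nearest-neighbour distance. This is a simplified position-only sketch: the actual system stores labelled points with colour and uses a proper spatial index rather than the brute-force search below, and all names are illustrative.

```python
import math

def nearest_distance(point, cloud):
    """Euclidean distance from one rendered sample to its nearest
    reference point (brute force; a k-d tree would be used in practice)."""
    return min(math.dist(point, ref) for ref in cloud)

def inconsistency(samples, cloud):
    """Mean nearest-neighbour distance of a frame's samples to the
    bug-free reference cloud: 0.0 for a frame that matches exactly,
    larger values flag potential rendering artifacts."""
    return sum(nearest_distance(s, cloud) for s in samples) / len(samples)
```

A frame re-rendered correctly scores 0.0 against its own reference cloud, while a sample displaced by a rendering bug contributes its displacement to the score, which is what lets the framework flag visual artifacts automatically.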
Abstract:
To assess the effects of information interventions which orient patients and their carers/family to a cancer care facility and the services available in the facility.
Abstract:
The two longitudinal case studies that make up this dissertation sought to explain and predict the relationship between usability and clinician acceptance of a health information system. The overall aim of the research was to determine what role usability plays in the acceptance or rejection of systems used by clinicians in a healthcare context. The focus was on the end users (the clinicians) rather than the views of the system designers, the managers responsible for implementation, or the clients of the clinicians. A mixed methods approach was adopted, drawing on both qualitative and quantitative research methods. The study followed the implementation of a community health information system from its early beginnings to established practice. Users were drawn from different health service departments with distinctly different organisational cultures and attitudes to information and communication technology used in this context. The study provided evidence that a usability analysis in this context is not necessarily valid when users have prior reservations about acceptance. The initial training and post-implementation support were investigated, together with the nature of the clinicians themselves, to determine factors that may influence their attitudes. The research identified that acceptance of a system is not necessarily a measure of its quality, capability and usability; rather, it is influenced by the user's attitude, which is shaped by outside factors and by the nature and quality of training. The need to recognise the limitations of current methodologies for analysing usability and acceptance was explored to lay the foundations for further research.
Abstract:
Objective: To systematically review the published evidence of the impact of health information technology (HIT) on the quality of medical and health care, specifically clinicians' adherence to evidence-based guidelines and the corresponding impact on patient clinical outcomes. To be as inclusive as possible, the research examined literature discussing the use of health information technologies and systems in both medical care, such as clinical and surgical care, and other health care, such as allied health and preventive services.
Design: Systematic review.
Data sources: Relevant literature was systematically searched for English-language studies indexed in MEDLINE and CINAHL (1998 to 2008), the Cochrane Library, PubMed, the Database of Abstracts of Reviews of Effectiveness (DARE), Google Scholar and other relevant electronic databases. A search for eligible studies (matching the inclusion criteria) was also performed by searching relevant conference proceedings available through the internet and electronic databases, as well as by using reference lists identified from cited papers.
Selection criteria: Studies were included in the review if they examined the impact of Electronic Health Records (EHR), Computerised Provider Order Entry (CPOE), or Decision Support Systems (DSS), and if the primary outcomes of the studies focused on the level of compliance with evidence-based guidelines among clinicians. Measures could be either changes in clinical processes resulting from a change in providers' behaviour or specific patient outcomes that demonstrated the effectiveness of a particular treatment given by providers.
Methods: Studies were reviewed and summarised in tabular and text form. Due to heterogeneity between studies, meta-analysis was not performed.
Results: Of the 17 studies that assessed the impact of health information technology on health care practitioners' performance, 14 revealed a positive improvement in compliance with evidence-based guidelines. The primary domain of improvement was evident in preventive care and drug-ordering studies. Results from the studies that included an assessment of patient outcomes, however, were insufficient to detect either clinically or statistically important improvements, as only a small proportion of these studies found benefits: 3 studies showed positive improvement, while 5 revealed either no change or adverse outcomes.
Conclusion: Although the number of included studies was relatively small for reaching a conclusive statement about the effectiveness of health information technologies and systems on clinical care, the results were consistent with other systematic reviews previously undertaken. Wide-scale use of HIT was shown in this review to increase clinicians' adherence to guidelines. It therefore presents ongoing opportunities for health care organisations, policy makers and stakeholders to maximise the uptake of research evidence into practice.