6 results for Machine-readable Library Cataloguing

at Deakin Research Online - Australia


Relevance:

100.00%

Publisher:

Abstract:

This paper describes advances in automated health service selection and composition in the Ambient Assisted Living (AAL) domain. We apply a Service Value Network (SVN) approach to automatically match medical practice recommendations to health services based on sensor readings in a home care context. Medical practice recommendations are extracted from National Health and Medical Research Council (NHMRC) guidelines. Service networks are derived from Medicare Benefits Schedule (MBS) listings. Service provider rules are further formalised using Semantics of Business Vocabulary and Business Rules (SBVR), which allows business participants to identify and define machine-readable rules. We demonstrate our work by applying an SVN composition process to patient profiles in the context of Type 2 Diabetes Management.
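The matching step the abstract describes — triggering guideline recommendations from sensor readings and mapping them to schedule listings — can be sketched in miniature. This is an illustrative stand-in only: the rule, reading, and service entries below are invented, and the MBS item numbers are placeholders, not real listings.

```python
# Hypothetical sketch: match a guideline-style rule, triggered by a sensor
# reading, to service listings by category. All data here is invented.

def match_services(reading, rules, services):
    """Return the services whose category is recommended by any triggered rule."""
    triggered = [r["recommends"] for r in rules if r["condition"](reading)]
    return [s for s in services if s["category"] in triggered]

# A machine-readable stand-in for one NHMRC-style recommendation.
rules = [{"condition": lambda r: r["blood_glucose_mmol_L"] > 11.0,
          "recommends": "GP consultation"}]

# Invented MBS-style listings (item numbers are placeholders, not real).
services = [{"item": "XXXX", "category": "GP consultation"},
            {"item": "YYYY", "category": "Podiatry"}]

print(match_services({"blood_glucose_mmol_L": 12.4}, rules, services))
# -> [{'item': 'XXXX', 'category': 'GP consultation'}]
```

A high blood-glucose reading triggers the rule, so only the matching service category is returned; a normal reading would return an empty list.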

Relevance:

100.00%

Publisher:

Abstract:

Combining the Semantic Web and the Ubiquitous Web, Web 3.0 is for things. The Semantic Web enables human knowledge to be machine-readable, and the Ubiquitous Web allows Web services to serve any thing, forming a bridge between the virtual world and the real world. By using context, Web services can become smarter; that is, aware of the target things' or applications' physical environments or situations, and able to respond proactively and intelligently. Existing methods for implementing context-aware Web services on Web 2.0 mainly enumerate different implementations corresponding to different attribute values of the context, in order to improve the Quality of Service (QoS). However, things in the physical world are extremely diverse, which poses two new problems for Web services: it is difficult to unify the context of things, and it is difficult to implement a flexible smart Web service for things. This article proposes a novel smart Web service based on the context of things, implemented using a REpresentational State Transfer for Things (Thing-REST) style, to tackle these two problems. In a smart Web service, the user's description (semantic context) and sensor reports (sensing context) are two channels for acquiring the context of things, which is then employed by ontology services to make the context of things machine-readable. With the guidance of domain knowledge services, event detection services can analyze things' needs particularly well through the context of things. We then propose a Thing-REST style to manage the context of things and user context, and to mash up Web services through three structures (i.e., chain, select, and merge) to implement smart Web services. A smart plant-watering service application demonstrates the effectiveness of our method. © 2012 ACM.
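The three mashup structures the abstract names — chain, select, and merge — can be sketched as small composition combinators. This is a minimal sketch of the idea, not the paper's implementation: the combinator signatures and the toy plant-watering services are assumptions made for illustration.

```python
def chain(*services):
    """Chain: pipe each service's output into the next (sequential composition)."""
    def composed(context):
        for service in services:
            context = service(context)
        return context
    return composed

def select(predicate, if_true, if_false):
    """Select: route the context to one of two services based on a condition."""
    def composed(context):
        return if_true(context) if predicate(context) else if_false(context)
    return composed

def merge(combine, *services):
    """Merge: run several services on the same context and combine their outputs."""
    def composed(context):
        return combine([service(context) for service in services])
    return composed

# Toy plant-watering services (invented for illustration).
read_soil = lambda ctx: {**ctx, "soil_moisture": 0.2}
read_weather = lambda ctx: {**ctx, "rain_expected": False}
sensing = merge(lambda outs: {k: v for d in outs for k, v in d.items()},
                read_soil, read_weather)
decide = select(
    lambda ctx: ctx["soil_moisture"] < 0.3 and not ctx["rain_expected"],
    lambda ctx: {**ctx, "action": "water"},
    lambda ctx: {**ctx, "action": "wait"},
)
watering = chain(sensing, decide)
print(watering({"plant": "ficus"})["action"])  # -> water
```

Here merge gathers the sensing context from two sources, select applies the decision rule, and chain wires the two stages together — mirroring the loose coupling the Thing-REST style aims for.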

Relevance:

100.00%

Publisher:

Abstract:

In this chapter, we introduce an interesting type of Web service for "things". Existing Web services are applications across the Web that perform functions mainly to satisfy users' social needs, from simple requests to complicated business processes. Throughout history, humans have accumulated a great deal of knowledge about diverse things in the physical world. However, human knowledge about the world has not been fully used on the current Web, which focuses on social communication; the prospect of interacting with things other than people on the future Web is very exciting. The purpose of Web services for "things" is to provide a tunnel for people to interact with things in the physical world from anywhere through the Internet. Extending the service targets from people to anything challenges the existing techniques of Web services in three respects: first, a unified interface should be provided for people to describe the needs of things; second, basic components should be designed for a Web service for things; finally, the implementation of a Web service for things should be optimized when mashing up multiple sub-services. We tackle these challenges and make the best use of human knowledge as follows. We first define a context of things as a unified interface. The user's description (semantic context) and sensors (sensing context) are two channels for acquiring the context of things. Then, we define three basic modules for a Web service for things: ontology Web services to unify the context of things, machine-readable domain knowledge Web services, and event report Web services (such as weather report services and sensor event report services). Meanwhile, we develop a Thing-REST framework with optimal mashup structures to loosely couple the three basic modules. We employ a smart plant-watering service application to demonstrate all the techniques we have developed.

Relevance:

30.00%

Publisher:

Abstract:

This paper draws on the findings from, and the methods and approach used in, the provision of a database of Australian PhD thesis records for the period 1987 to 2006, coded by Research Fields, Courses and Disciplines (RFCD) fields of study. The project was funded by the Research Excellence Branch of the Australian Research Council. Importantly, the project was not merely the creation of yet another database: it constitutes a valuable research resource in its own right. It provides an alternative source of data about research training, with a focus on research output and research capacity building rather than on input, as enrolment data is. The database is significant because it can be used to track knowledge production in Australia over a twenty-year period and contains approximately 54,000 bibliographic records. The database of Australian PhDs has been constructed from bibliographic records downloaded from Libraries Australia. Recommendations for practice relate to university libraries, doctoral candidates, and the coded database. We suggest that libraries be more consistent with cataloguing procedures, including the thesis 'publication' date, and that they be more timely in uploading their thesis records to Libraries Australia or, alternatively, Australian Research Online. We also suggest that PhD candidates code their own theses using the new ANZSRC scheme (which replaced the RFCD classification in 2008), and use clear and communicative thesis titles and abstracts. With regard to the coded database, we suggest it become a requirement for universities to provide the ANZSRC coding of submitted theses.

Relevance:

30.00%

Publisher:

Abstract:

For years, we have relied on population surveys to keep track of regional public health statistics, including the prevalence of non-communicable diseases. Because of the cost and limitations of such surveys, we often do not have up-to-date data on the health outcomes of a region. In this paper, we examined the feasibility of inferring regional health outcomes from socio-demographic data that are widely available and regularly updated through national censuses and community surveys. Using data for 50 American states (excluding Washington DC) from 2007 to 2012, we constructed a machine-learning model to predict the prevalence of six non-communicable disease (NCD) outcomes (four NCDs and two major clinical risk factors), based on population socio-demographic characteristics from the American Community Survey. We found that regional prevalence estimates for non-communicable diseases can be reasonably predicted. The predictions were highly correlated with the observed data, both in the states included in the derivation model (median correlation 0.88) and in those excluded from the development for use as a completely separate validation sample (median correlation 0.85), demonstrating that the model had sufficient external validity to make good predictions, based on demographics alone, for areas not included in the model development. This highlights both the utility of this sophisticated approach to model development and the vital importance of simple socio-demographic characteristics as both indicators and determinants of chronic disease.
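The evaluation idea — fit on some states, predict prevalence for held-out states, and report the correlation between predicted and observed values — can be shown in a minimal, self-contained form. This is a deliberately simplified sketch with invented toy data and a single-predictor model, not the study's actual model or data.

```python
# Hypothetical sketch: one socio-demographic predictor per "state",
# ordinary least squares fit, and Pearson correlation on held-out states.
from statistics import mean

def fit_ols(xs, ys):
    """Ordinary least squares for a single predictor."""
    mx, my = mean(xs), mean(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def pearson(a, b):
    """Pearson correlation between two equal-length sequences."""
    ma, mb = mean(a), mean(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

# Invented toy data: a feature per state and its NCD prevalence (%).
train_x, train_y = [1.0, 2.0, 3.0, 4.0], [10.1, 11.9, 14.2, 15.8]
test_x, test_y = [1.5, 2.5, 3.5], [11.0, 13.2, 15.0]  # held-out "states"

slope, intercept = fit_ols(train_x, train_y)
preds = [slope * x + intercept for x in test_x]
print(round(pearson(preds, test_y), 3))  # high correlation on held-out states
```

The held-out correlation plays the role of the external validation the abstract reports: states never seen during fitting are used only to check how well demographics alone track observed prevalence.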

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: Atheoretical large-scale data mining techniques using machine learning algorithms have promise in the analysis of large epidemiological datasets. This study illustrates the use of a hybrid methodology for variable selection that took account of missing data and complex survey design to identify key biomarkers associated with depression from a large epidemiological study.

METHODS: The study used a three-step methodology amalgamating multiple imputation, a machine learning boosted regression algorithm and logistic regression, to identify key biomarkers associated with depression in the National Health and Nutrition Examination Study (2009-2010). Depression was measured using the Patient Health Questionnaire-9 and 67 biomarkers were analysed. Covariates in this study included gender, age, race, smoking, food security, Poverty Income Ratio, Body Mass Index, physical activity, alcohol use, medical conditions and medications. The final imputed weighted multiple logistic regression model included possible confounders and moderators.

RESULTS: After the creation of 20 imputation data sets from multiple chained regression sequences, machine learning boosted regression initially identified 21 biomarkers associated with depression. Using traditional logistic regression methods, including controlling for possible confounders and moderators, a final set of three biomarkers was selected. The final three biomarkers from the novel hybrid variable selection methodology were red cell distribution width (OR 1.15; 95% CI 1.01, 1.30), serum glucose (OR 1.01; 95% CI 1.00, 1.01) and total bilirubin (OR 0.12; 95% CI 0.05, 0.28). Significant interactions were found between total bilirubin and the Mexican American/Hispanic group (p = 0.016), and between total bilirubin and current smokers (p < 0.001).

CONCLUSION: The systematic use of a hybrid methodology for variable selection, fusing data mining techniques using a machine learning algorithm with traditional statistical modelling, accounted for missing data and complex survey sampling methodology and was demonstrated to be a useful tool for detecting three biomarkers associated with depression for future hypothesis generation: red cell distribution width, serum glucose and total bilirubin.
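The shape of the hybrid pipeline — impute missing data several times, screen variables by importance across the imputations, then keep a short list for a final model — can be sketched with deliberately simple stand-ins. The study used chained regression imputation, a boosted regression algorithm, and survey-weighted logistic regression; the sketch below substitutes crude random imputation and a correlation-based importance score purely for illustration, on invented toy data.

```python
# Hypothetical sketch of a multiple-imputation -> screening pipeline.
# The imputation and scoring methods here are simplistic stand-ins, not
# the study's actual techniques.
import random
from statistics import mean

def impute(rows, m=5, seed=0):
    """Step 1: make m completed copies, filling None with random draws
    from the observed values of the same column (a crude stand-in for
    multiple chained regression imputation)."""
    rng = random.Random(seed)
    cols = list(zip(*rows))
    observed = [[v for v in col if v is not None] for col in cols]
    return [[[v if v is not None else rng.choice(observed[j])
              for j, v in enumerate(row)] for row in rows]
            for _ in range(m)]

def screen(datasets, labels, keep=2):
    """Step 2: rank variables by |correlation with the outcome|, averaged
    across imputations (a stand-in for boosted-regression importance),
    and keep the top few for a final model."""
    def score(j):
        vals = []
        for data in datasets:
            col = [row[j] for row in data]
            mc, ml = mean(col), mean(labels)
            num = sum((x - mc) * (y - ml) for x, y in zip(col, labels))
            den = (sum((x - mc) ** 2 for x in col)
                   * sum((y - ml) ** 2 for y in labels)) ** 0.5
            vals.append(abs(num / den) if den else 0.0)
        return mean(vals)
    n_vars = len(datasets[0][0])
    return sorted(range(n_vars), key=score, reverse=True)[:keep]

# Toy data: column 0 tracks the outcome; columns 1-2 are noise.
rows = [[1.0, 5.0, 2.0], [2.0, None, 9.0], [3.0, 4.0, 1.0],
        [4.0, 6.0, 7.0], [None, 5.0, 3.0], [6.0, 4.0, 8.0]]
labels = [0, 0, 0, 1, 1, 1]
selected = screen(impute(rows), labels)
print(selected[0])  # -> 0 (the informative column survives screening)
```

Step 3 — fitting a conventional regression on only the surviving variables, with confounders and moderators — would follow on the reduced variable set, as in the abstract.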