875 results for OWL web ontology language
Abstract:
In cloud computing, resource allocation and scheduling of multiple composite web services is an important and challenging problem. This is especially so in a hybrid cloud where there may be some low-cost resources available from private clouds and some high-cost resources from public clouds. Meeting this challenge involves two classical computational problems: one is assigning resources to each of the tasks in the composite web services; the other is scheduling the allocated resources when each resource may be used by multiple tasks at different points of time. In addition, Quality-of-Service (QoS) issues, such as execution time and running costs, must be considered in the resource allocation and scheduling problem. Here we present a Cooperative Coevolutionary Genetic Algorithm (CCGA) to solve the deadline-constrained resource allocation and scheduling problem for multiple composite web services. Experimental results show that our CCGA is both efficient and scalable.
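The cooperative coevolutionary idea described above can be sketched as two subpopulations (resource allocations and task orders) that are evaluated only in combination with the best collaborator from the other population. This is a minimal toy sketch, not the paper's algorithm: the task durations, resource speeds and costs, deadline, and penalty value are all invented for illustration.

```python
import random

random.seed(0)

# Hypothetical toy instance: each task runs on one resource; resources differ
# in cost and speed. Deadline-constrained objective: minimise cost, penalising
# schedules whose makespan exceeds the deadline.
TASK_TIMES = [4, 3, 5, 2]      # base durations of tasks
RES_SPEED = [1.0, 2.0]         # the public-cloud resource is faster...
RES_COST = [1.0, 3.0]          # ...but more expensive per time unit
DEADLINE = 9.0

def evaluate(alloc, order):
    """Cost plus a large penalty if the deadline is missed
    (tasks run sequentially on their allocated resource)."""
    finish = [0.0] * len(RES_SPEED)
    cost = 0.0
    for t in order:
        r = alloc[t]
        dur = TASK_TIMES[t] / RES_SPEED[r]
        finish[r] += dur
        cost += dur * RES_COST[r]
    return cost + (1000.0 if max(finish) > DEADLINE else 0.0)

def ccga(generations=50, pop=20):
    """Two cooperating subpopulations: allocations and task orders.
    Each individual is scored with the best collaborator from the
    other population; the worst half is replaced by mutated elites."""
    n = len(TASK_TIMES)
    allocs = [[random.randrange(len(RES_SPEED)) for _ in range(n)] for _ in range(pop)]
    orders = [random.sample(range(n), n) for _ in range(pop)]
    best_a, best_o = allocs[0], orders[0]
    for _ in range(generations):
        allocs.sort(key=lambda a: evaluate(a, best_o))
        orders.sort(key=lambda o: evaluate(best_a, o))
        best_a, best_o = allocs[0], orders[0]
        half = pop // 2
        for i in range(half, pop):
            a = allocs[i - half][:]
            a[random.randrange(n)] = random.randrange(len(RES_SPEED))  # reassign a task
            allocs[i] = a
            o = orders[i - half][:]
            j, k = random.sample(range(n), 2)
            o[j], o[k] = o[k], o[j]                                   # swap two tasks
            orders[i] = o
    return best_a, best_o, evaluate(best_a, best_o)
```

Decomposing the problem this way is what lets a CCGA scale: each subpopulation searches a much smaller space than a monolithic chromosome would.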
Abstract:
In this paper, we propose a search-based approach to joining two tables in the absence of clean join attributes. Non-structured documents from the web are used to express the correlations between a given query and a reference list. A major challenge in implementing this approach is how to efficiently determine the number of times, and the locations at which, each clean reference from the reference list is approximately mentioned in the retrieved documents. We formalize this as the Approximate Membership Localization (AML) problem and propose an efficient partial pruning algorithm to solve it. A study using real-world data sets demonstrates the effectiveness of our search-based approach, and the efficiency of our AML algorithm.
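The AML problem asks where, and how often, each clean reference is approximately mentioned in a retrieved document. A naive quadratic baseline (not the paper's partial pruning algorithm) can be sketched with sliding-window Jaccard similarity; the threshold and example strings are assumptions:

```python
def approx_mentions(reference, document, threshold=0.6):
    """Return the token offsets of document windows whose token-set
    Jaccard similarity with the reference meets the threshold."""
    ref = reference.lower().split()
    doc = document.lower().split()
    n = len(ref)
    hits = []
    for i in range(len(doc) - n + 1):
        window = set(doc[i:i + n])
        inter = len(set(ref) & window)
        union = len(set(ref) | window)
        if union and inter / union >= threshold:
            hits.append(i)
    return hits
```

A pruning algorithm like the paper's would avoid scoring most windows at all, e.g. by skipping regions that share no token with the reference.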
Abstract:
Although many incidents of fake online consumer reviews have been reported, very few studies have examined the trustworthiness of online consumer reviews. One reason is the lack of an effective computational method to separate untruthful reviews (i.e., spam) from legitimate ones (i.e., ham), given that prominent spam features are often missing in online reviews. The main contribution of our research is the development of a novel review spam detection method underpinned by an unsupervised inferential language modeling framework. Another contribution of this work is the development of a high-order concept association mining method which provides the essential term association knowledge to bootstrap the performance of untruthful review detection. Our experimental results confirm that the proposed inferential language model, equipped with high-order concept association knowledge, is effective in untruthful review detection when compared with other baseline methods.
Abstract:
In response to concerns about the quality of English Language Learning (ELL) education at tertiary level, the Chinese Ministry of Education (CMoE) launched the College English Reform Program (CERP) in 2004. By means of a press release (CMoE, 2005) and a guideline document titled College English Curriculum Requirements (CECR) (CMoE, 2007), the CERP proposed two major changes to the College English assessment policy: (1) the shift to optional status for the compulsory external test, the College English Test Band 4 (CET4); and (2) the incorporation of formative assessment into the existing summative assessment framework. This study investigated the interactions between the College English assessment policy change, its theoretical underpinnings, and the assessment practices within two Chinese universities (one Key University and one Non-Key University). It adopted a sociocultural theoretical perspective to examine the implementation process as experienced by local actors at institutional and classroom levels. Systematic data analysis using a constant comparative method (Merriam, 1998) revealed that contextual factors and implementation issues did not lead to significant differences between the two cases. A lack of training in assessment, together with sociocultural factors such as the traditional emphasis on the product of learning and hierarchical teacher/student relationships, was decisive in, and responsible for, the limited effect of the reform.
Abstract:
Concerns raised in educational reports about school science in terms of students' outcomes and attitudes, as well as science teaching practices, prompted investigation into science learning and teaching practices at the foundational level of school science. Without science content and process knowledge, understanding issues of modern society and active participation in decision-making is difficult. This study contended that a focus on the development of the language of science could enable learners to engage more effectively in learning science and enhance their interest and attitudes towards science. Furthermore, it argued that explicit teaching practices where science language is modelled and scaffolded would facilitate the learning of science by young children at the beginning of their formal schooling. This study aimed to investigate science language development at the foundational level of school science learning in the preparatory-school with students aged five and six years. It focussed on the language of science and science teaching practices in early childhood. In particular, the study focussed on the capacity for young students to engage with and understand science language. Previous research suggests that students have difficulty with the language of science most likely because of the complexities and ambiguities of science language. Furthermore, literature indicates that tensions transpire between traditional science teaching practices and accepted early childhood teaching practices. This contention prompted investigation into means and models of pedagogy for learning foundational science language, knowledge and processes in early childhood. This study was positioned within qualitative assumptions of research and reported via descriptive case study. It was located in a preparatory-school classroom with the class teacher, teacher-aide, and nineteen students aged four and five years who participated with the researcher in the study. 
Basil Bernstein's pedagogical theory coupled with Halliday's Systemic Functional Linguistics (SFL) framed an examination of science pedagogical practices for early childhood science learning. Students' science learning outcomes were gauged by focussing a Hallidayan lens on their oral and reflective language during 12 science-focussed episodes of teaching. Data were collected throughout the 12 episodes. Data included video and audio-taped science activities, student artefacts, journal and anecdotal records, semi-structured interviews and photographs. Data were analysed according to Bernstein's visible and invisible pedagogies and performance and competence models. Additionally, Halliday's SFL provided the resource to examine teacher and student language to determine teacher/student interpersonal relationships as well as specialised science and everyday language used in teacher and student science talk. This analysis established the socio-linguistic characteristics that promoted science competencies in young children. An analysis of the data identified those teaching practices that facilitate young children's acquisition of science meanings. Positive indications for modelling science language and science text types to young children have emerged. Teaching within the studied setting diverged from perceived notions of common early childhood practices, and the benefits of dynamically shifting pedagogies were validated. Significantly, young students demonstrated use of particular specialised components of school-science language in terms of science language features and vocabulary. As well, their use of language demonstrated the students' knowledge of science concepts, processes and text types. The young students made sense of science phenomena through their incorporation of a variety of science language and text-types in explanations during both teacher-directed and independent situations. 
The study informs early childhood science practices as well as practices for foundational school science teaching and learning. It has exposed implications for science education policy, curriculum and practices. It supports other findings in relation to the capabilities of young students. The study contributes to Systemic Functional Linguistic theory through the development of a specific resource to determine the technicality of teacher language used in teaching young students. Furthermore, the study contributes to methodology practices relating to Bernsteinian theoretical perspectives and has demonstrated new ways of depicting and reporting teaching practices. It provides an analytical tool which couples Bernsteinian and Hallidayan theoretical perspectives. Ultimately, it defines directions for further research in terms of foundational science language learning, ongoing learning of the language of science and learning science, science teaching and learning practices, specifically in foundational school science, and relationships between home and school science language experiences.
Abstract:
Manually constructing domain-specific sentiment lexicons is extremely time consuming, and may not even be feasible for domains where linguistic expertise is not available. Research on the automatic construction of domain-specific sentiment lexicons has therefore become a hot topic in recent years. The main contribution of this paper is the illustration of a novel semi-supervised learning method which exploits both term-to-term and document-to-term relations hidden in a corpus for the construction of domain-specific sentiment lexicons. More specifically, the proposed two-pass pseudo-labeling method combines shallow linguistic parsing and corpus-based statistical learning to make domain-specific sentiment extraction scalable with respect to the sheer volume of opinionated documents archived on the Internet these days. Another novelty of the proposed method is that it can utilize the readily available user-contributed labels of opinionated documents (e.g., the user ratings of product reviews) to bootstrap the performance of sentiment lexicon construction. Our experiments show that the proposed method can generate high-quality domain-specific sentiment lexicons, as directly assessed by human experts. Moreover, the system-generated domain-specific sentiment lexicons can improve polarity prediction tasks at the document level by 2.18% when compared to other well-known baseline methods. Our research opens the door to the development of practical and scalable methods for domain-specific sentiment analysis.
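The two-pass pseudo-labeling idea can be sketched as follows: seed document labels from user ratings, score terms by smoothed log-odds of occurring in positive versus negative documents, then pseudo-label the documents with the interim lexicon and re-estimate. This is a minimal stand-in for the paper's method; the shallow-parsing step is omitted and the log-odds scoring is an assumption.

```python
from collections import Counter
import math

def build_lexicon(docs, ratings, passes=2):
    """docs: list of token lists; ratings: +1/-1 seed labels derived from
    user-contributed ratings. Each pass scores terms by smoothed log-odds
    of appearing in positive vs negative documents, then pseudo-labels
    every document with the interim lexicon for the next pass."""
    labels = list(ratings)
    lexicon = {}
    for _ in range(passes):
        pos, neg = Counter(), Counter()
        for doc, y in zip(docs, labels):
            (pos if y > 0 else neg).update(set(doc))   # presence counts
        vocab = set(pos) | set(neg)
        lexicon = {w: math.log((pos[w] + 1) / (neg[w] + 1)) for w in vocab}
        labels = [1 if sum(lexicon.get(w, 0.0) for w in doc) >= 0 else -1
                  for doc in docs]
    return lexicon
```

Terms that occur equally often under both labels (e.g. product-feature words) score near zero and drop out of the polarity signal, which is the behaviour a domain-specific lexicon needs.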
Abstract:
Wikipedia has become the most popular online source of encyclopedic information. The English Wikipedia collection, as well as some other language collections, is extensively linked. However, as a multilingual collection, Wikipedia is only very weakly linked: there are few cross-language links or cross-dialect links (see, for example, Chinese dialects). In order to link the multilingual Wikipedia as a single collection, automated cross-language link discovery systems are needed: systems that identify anchor-texts in one language and targets in another. The evaluation of link discovery approaches within the English version of Wikipedia has been examined in the INEX Link-the-Wiki track since 2007, whilst both CLEF and NTCIR have emphasized the investigation and evaluation of cross-language information retrieval. In this position paper we propose a new virtual evaluation track: Cross Language Link Discovery (CLLD). The track will initially examine cross-language linking of Wikipedia articles. This virtual track will not be tied to any one forum; instead we hope it can be connected to each of (at least) CLEF, NTCIR, and INEX, as it will cover ground currently studied by each. The aim is to establish a virtual evaluation environment supporting continuous assessment and evaluation, and a forum for the exchange of research ideas. It will be free from the difficulties of scheduling and synchronizing groups of collaborating researchers, and will alleviate the necessity to travel across the globe in order to share knowledge. We aim to electronically publish peer-reviewed publications arising from CLLD in a similar fashion: online, with open access, and without fixed submission deadlines.
Abstract:
Web service technology is increasingly being used to build various e-Applications, in domains such as e-Business and e-Science. Characteristic benefits of web service technology are its inter-operability, decoupling and just-in-time integration. Using web service technology, an e-Application can be implemented by web service composition: composing existing individual web services in accordance with the business process of the application. This means the application is provided to customers in the form of a value-added composite web service. An important and challenging issue of web service composition is how to meet Quality-of-Service (QoS) requirements. This includes customer-focused elements such as response time, price, throughput and reliability, as well as how to best provide QoS results for the composites. This in turn best fulfils customers' expectations and achieves their satisfaction. Fulfilling these QoS requirements, or addressing the QoS-aware web service composition problem, is the focus of this project. From a computational point of view, QoS-aware web service composition can be transformed into diverse optimisation problems. These problems are characterised as complex, large-scale, highly constrained and multi-objective problems. We therefore use genetic algorithms (GAs) to address QoS-based service composition problems. More precisely, this study addresses three important subproblems of QoS-aware web service composition: QoS-based web service selection for a composite web service accommodating constraints on inter-service dependence and conflict, QoS-based resource allocation and scheduling for multiple composite services on hybrid clouds, and performance-driven composite service partitioning for decentralised execution. Based on operations research theory, we model the three problems as a constrained optimisation problem, a resource allocation and scheduling problem, and a graph partitioning problem, respectively. 
Then, we present novel GAs to address these problems. We also conduct experiments to evaluate the performance of the new GAs. Finally, verification experiments are performed to show the correctness of the GAs. The major outcomes from the first problem are three novel GAs: a penalty-based GA, a min-conflict hill-climbing repairing GA, and a hybrid GA. These GAs adopt different constraint handling strategies to handle constraints on inter-service dependence and conflict, an important factor that has been largely ignored by existing algorithms and whose neglect might lead to the generation of infeasible composite services. Experimental results demonstrate the effectiveness of our GAs for handling the QoS-based web service selection problem with constraints on inter-service dependence and conflict, as well as their better scalability than the existing integer programming-based method for large-scale web service selection problems. The major outcomes from the second problem are two GAs: a random-key GA and a cooperative coevolutionary GA (CCGA). Experiments demonstrate the good scalability of the two algorithms. In particular, the CCGA scales well as the number of composite services involved in a problem increases, while no other algorithm demonstrates this ability. The findings from the third problem result in a novel GA for composite service partitioning for decentralised execution. Compared with existing heuristic algorithms, the new GA is more suitable for large-scale composite web service partitioning problems. In addition, the GA outperforms existing heuristic algorithms, generating a better deployment topology for a composite web service for decentralised execution. These effective and scalable GAs can be integrated into QoS-based management tools to facilitate the delivery of feasible, reliable and high-quality composite web services.
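The penalty-based constraint handling mentioned above can be sketched on a toy QoS-based selection instance: pick one candidate service per task, where some candidate pairs conflict. The candidate costs, the single conflict constraint, and the penalty weight are all invented for illustration; the paper's actual GA operators are not reproduced.

```python
import random

random.seed(1)

# Hypothetical instance: three tasks, two candidate services each.
# The conflict forbids combining candidate 0 of task 0 with
# candidate 1 of task 1 (an inter-service "conflict" constraint).
COSTS = [[3, 5], [2, 6], [4, 1]]
CONFLICTS = [((0, 0), (1, 1))]

def fitness(sel):
    """Total cost plus a penalty for each violated conflict constraint."""
    penalty = sum(100 for (t1, c1), (t2, c2) in CONFLICTS
                  if sel[t1] == c1 and sel[t2] == c2)
    return sum(COSTS[t][c] for t, c in enumerate(sel)) + penalty

def penalty_ga(generations=30, pop=12):
    """Elitist GA: keep the cheaper half, refill with point-mutated copies.
    Infeasible selections survive but are heavily penalised."""
    n = len(COSTS)
    population = [[random.randrange(len(COSTS[t])) for t in range(n)]
                  for _ in range(pop)]
    for _ in range(generations):
        population.sort(key=fitness)
        parents = population[:pop // 2]
        children = []
        for p in parents:
            child = p[:]
            t = random.randrange(n)
            child[t] = random.randrange(len(COSTS[t]))  # point mutation
            children.append(child)
        population = parents + children
    return min(population, key=fitness)
```

A repairing strategy (like the min-conflict hill-climbing GA named in the abstract) would instead mutate violating genes until the selection is feasible, rather than penalising it.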
Abstract:
In the present paper, we introduce BioPatML.NET, an application library for the Microsoft Windows .NET framework [2] that implements the BioPatML pattern definition language and sequence search engine. BioPatML.NET is integrated with the Microsoft Biology Foundation (MBF) application library [3], unifying the parsers and annotation services supported or emerging through MBF with the language, search framework and pattern repository of BioPatML. End users who wish to exploit the BioPatML.NET engine and repository without engaging the services of a programmer may do so via the freely accessible web-based BioPatML Editor, which we describe below.
Abstract:
Language Modeling (LM) has been successfully applied to Information Retrieval (IR). However, most existing LM approaches rely only on term occurrences in documents, queries and document collections. In traditional unigram-based models, terms (or words) are usually considered to be independent. In some recent studies, dependence models have been proposed to incorporate term relationships into LM, so that links can be created between words in the same sentence, and term relationships (e.g. synonymy) can be used to expand the document model. In this study, we further extend this family of dependence models in the following two ways: (1) term relationships are used to expand the query model instead of the document model, so that the query expansion process can be naturally implemented; (2) we exploit more sophisticated inferential relationships extracted with Information Flow (IF). Information flow relationships are not simply pairwise term relationships as those used in previous studies, but hold between a set of terms and another term. They allow for context-dependent query expansion. Our experiments conducted on TREC collections show that we can obtain large and significant improvements with our approach. This study shows that LM is an appropriate framework in which to implement effective query expansion.
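The query-model expansion described above can be sketched as interpolating the maximum-likelihood query model with a model induced from term relationships, then scoring documents by query likelihood. The single-term relationship table below is a hypothetical stand-in for information-flow relationships (which, per the abstract, hold between a set of terms and a term), and the weights are assumptions.

```python
from collections import Counter
import math

# Hypothetical relationship table: source term -> {related term: strength}.
RELATED = {"car": {"automobile": 0.5}, "fast": {"quick": 0.4}}

def expand_query(query_terms, weight=0.3):
    """Interpolate the maximum-likelihood query model (weight 1 - weight)
    with a model induced from the related-term table (weight weight)."""
    base = Counter(query_terms)
    total = sum(base.values())
    model = {t: (1 - weight) * c / total for t, c in base.items()}
    for t in query_terms:
        for rel, strength in RELATED.get(t, {}).items():
            model[rel] = model.get(rel, 0.0) + weight * strength / total
    return model

def score(doc_terms, query_model, mu=1.0):
    """Query-likelihood score of a document under the expanded query
    model, with additive smoothing of the document language model."""
    counts = Counter(doc_terms)
    n = len(doc_terms)
    vocab = len(set(doc_terms)) + len(query_model)
    return sum(w * math.log((counts[t] + mu) / (n + mu * vocab))
               for t, w in query_model.items())
```

Because expansion happens in the query model, documents mentioning only related terms (e.g. "automobile" for the query "car") can still score above unrelated documents.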
Abstract:
This paper describes a senior multimodal task developed by Shauna O’Connor and the English staff at Brigidine College, after consultation in the form of media workshops with Anita Jetnikoff. Gunther Kress (2006) suggested recently that, due to the affordances of media platforms such as Web 2.0, “we need to be doing new things with texts”. The parent text for the Year 11 unit Finding a Voice was the memoir Mao’s Last Dancer. The summative assessment task morphed over time from an ‘identity portrait’ into ‘a multimodal, first person narrative’.
Abstract:
Beryl & Gael discuss the ‘new’ metalanguage for knowledge about language presented in the Australian Curriculum: English (ACARA, 2010). Their discussion connects to practice by recounting how one teacher scaffolds her students through detailed understandings of noun and adjective groups in reading activities. The stimulus text is the novel ‘A Wrinkle in Time’ (L’Engle, 1962, reproduced 2007) and the purpose is to build students’ understandings so they can work towards ‘expressing and developing ideas’ in written text (ACARA, 2010).
Abstract:
With the growth of the Web, e-commerce activities are also becoming popular. Product recommendation is an effective way of marketing a product to potential customers. Based on a user’s previous searches, most recommendation methods employ two-dimensional models to find relevant items, which are then recommended to the user. Further, too many irrelevant recommendations worsen the information overload problem for a user. This happens because such models, based on vectors and matrices, are unable to find the latent relationships that exist between users and searches. Identifying user behaviour is a complex process, and usually involves comparing the searches a user has made. In most cases, traditional vector- and matrix-based methods are used to find the prominent features searched for by a user. In this research we employ tensors to find the relevant features searched for by users. These relevant features are then used for making recommendations. Evaluation on real datasets shows the effectiveness of such recommendations over vector- and matrix-based methods.
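As a rough illustration of why a three-way (user, query, item) representation helps, the sketch below stores interactions as a sparse tensor and scores unseen items for a user via the query slices of other users. This counting scheme is only a stand-in for the latent associations a real tensor factorisation would recover; the data and the similarity rule are assumptions.

```python
from collections import defaultdict

def build_tensor(triples):
    """Three-way interaction tensor stored sparsely:
    tensor[user][query][item] = interaction count."""
    t = defaultdict(lambda: defaultdict(lambda: defaultdict(int)))
    for u, q, i in triples:
        t[u][q][i] += 1
    return t

def recommend(tensor, user, top_k=2):
    """Score items the user has not seen by summing, over shared queries,
    the interaction counts of other users — exploiting the query mode
    that a flat user-item matrix would collapse away."""
    my_queries = set(tensor[user])
    seen = {i for q in tensor[user] for i in tensor[user][q]}
    scores = defaultdict(int)
    for other, slices in tensor.items():
        if other == user:
            continue
        for q in my_queries & set(slices):
            for item, c in slices[q].items():
                if item not in seen:
                    scores[item] += c
    return sorted(scores, key=scores.get, reverse=True)[:top_k]
```

Collapsing the query mode into a user-item matrix would lose exactly the information used here: which searches connected two users in the first place.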
Abstract:
Data processing for information extraction is of growing importance for Web databases. Due to the sheer size and volume of databases, retrieving the information relevant to users' needs has become a cumbersome process. Information seekers face information overload: too many result sets are returned for their queries. Moreover, too few or no results are returned if a very specific query is asked. This paper proposes a ranking algorithm that gives higher preference to a user's current search and also utilizes profile information in order to obtain relevant results for the user's query.
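One simple way to realise "higher preference to the current search, plus profile information" is a weighted overlap score. The weighting scheme, the alpha value, and the example data below are assumptions for illustration, not the paper's algorithm.

```python
def rank(results, query_terms, profile_terms, alpha=0.7):
    """Rank result records by a weighted mix of overlap with the current
    query (weight alpha) and overlap with the user's stored profile
    (weight 1 - alpha), so the current search dominates and the
    profile acts as a tie-breaker."""
    q, p = set(query_terms), set(profile_terms)

    def score(doc):
        terms = set(doc.lower().split())
        qs = len(terms & q) / len(q) if q else 0.0
        ps = len(terms & p) / len(p) if p else 0.0
        return alpha * qs + (1 - alpha) * ps

    return sorted(results, key=score, reverse=True)
```

With alpha near 1 the profile only reorders results the current query cannot distinguish, which addresses the overload problem without hijacking a specific query.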