16 results for World Wide Web
in Aston University Research Archive
Abstract:
The main argument of this paper is that Natural Language Processing (NLP) does, and will continue to, underlie the Semantic Web (SW), including its initial construction from unstructured sources like the World Wide Web (WWW), whether its advocates realise this or not. Chiefly, we argue, such NLP activity is the only way up to a defensible notion of meaning at conceptual levels (in the original SW diagram) based on lower level empirical computations over usage. Our aim is definitely not to claim logic-bad, NLP-good in any simple-minded way, but to argue that the SW will be a fascinating interaction of these two methodologies, again like the WWW (which has been basically a field for statistical NLP research) but with deeper content. Only NLP technologies (and chiefly information extraction) will be able to provide the requisite RDF knowledge stores for the SW from existing unstructured text databases in the WWW, and in the vast quantities needed. There is no alternative at this point, since a wholly or mostly hand-crafted SW is also unthinkable, as is a SW built from scratch and without reference to the WWW. We also assume that, whatever the limitations on current SW representational power we have drawn attention to here, the SW will continue to grow in a distributed manner so as to serve the needs of scientists, even if it is not perfect. The WWW has already shown how an imperfect artefact can become indispensable.
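To make the information-extraction claim concrete, here is a minimal sketch (not the authors' system) of how an extracted relation might be written into an RDF store; the sentence, namespace, and triple below are hypothetical, and rdflib is used purely for illustration.

```python
# A minimal sketch of an information-extraction step populating an RDF store:
# a relation extracted from free text is recorded as a subject-predicate-object
# triple. The namespace and extraction output are hypothetical.
from rdflib import Graph, Namespace, Literal

EX = Namespace("http://example.org/")          # hypothetical namespace

def triple_from_extraction(subject, relation, obj):
    """Map an IE tuple such as ('Aston_University', 'locatedIn', 'Birmingham')
    onto RDF terms."""
    return (EX[subject], EX[relation], Literal(obj))

g = Graph()
g.add(triple_from_extraction("Aston_University", "locatedIn", "Birmingham"))
print(g.serialize(format="turtle"))
```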
Abstract:
This thesis explores how the World Wide Web can be used to support English language teachers doing further studies at a distance. The future of education worldwide is moving towards a requirement that we, as teacher educators, use the latest web technology not as a gambit, but as a viable tool to improve learning. By examining the literature on knowledge, teacher education and web training, a model of teacher knowledge development is developed, along with statements of advice for web developers based upon the model. Next, the applicability and viability of both the model and the statements of advice are examined by developing a teacher support site (http://www.philseflsupport.com) according to these principles. The data collected from one focus group of users from sixteen different countries, all studying on the same distance Masters programme, is then analysed in depth. The outcomes from the research are threefold: A functioning website that is averaging around 15,000 hits a month provides a professional contribution. An expanded model of teacher knowledge development that is based upon five theoretical principles that reflect the ever-expanding cyclical nature of teacher learning provides an academic contribution. A series of six statements of advice for developers of teacher support sites. These statements are grounded in the theoretical principles behind the model of teacher knowledge development and incorporate nine keys to effective web facilitation. Taken together, they provide a forward-looking contribution to the praxis of web-supported teacher education, and thus to the potential dissemination of the research presented here. The research has succeeded in reducing the proliferation of terminology in teacher knowledge into a succinct model of teacher knowledge development. The model may now be used to further our understanding of how teachers learn and develop as other research builds upon the individual study here. NB: Appendix 4 is available only for consultation at Aston University Library with prior arrangement.
Abstract:
Web document cluster analysis plays an important role in information retrieval by organizing large amounts of documents into a small number of meaningful clusters. Traditional web document clustering is based on the Vector Space Model (VSM), which takes into account only two-level (document and term) knowledge granularity but ignores the bridging paragraph granularity. However, this two-level granularity may lead to unsatisfactory clustering results with “false correlation”. In order to deal with the problem, a Hierarchical Representation Model with Multi-granularity (HRMM), which consists of a five-layer representation of data and a two-phase clustering process, is proposed based on granular computing and article structure theory. To deal with the zero-valued similarity problem resulting from the sparse term-paragraph matrix, an ontology-based strategy and a tolerance-rough-set-based strategy are introduced into HRMM. By using granular computing, structural knowledge hidden in documents can be captured more efficiently and effectively in HRMM, and thus web document clusters of higher quality can be generated. Extensive experiments show that HRMM, HRMM with the tolerance-rough-set strategy, and HRMM with ontology all significantly outperform VSM and a representative non-VSM-based algorithm, WFP, in terms of F-Score.
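For reference, the two-level VSM baseline that HRMM is measured against can be sketched in a few lines: TF-IDF document vectors clustered with k-means. This is only an illustration of that baseline on toy data, not the HRMM pipeline described above.

```python
# Sketch of the (document, term) VSM baseline: TF-IDF vectors clustered with
# k-means. The toy documents and cluster count are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = [
    "granular computing for web document clustering",
    "tolerance rough sets handle sparse term matrices",
    "contact lens fitting and anterior eye assessment",
]
X = TfidfVectorizer().fit_transform(docs)      # document-term matrix (VSM)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)
```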
Abstract:
The World Wide Web provides plentiful content for Web-based learning, but its hyperlink-based architecture connects Web resources for free browsing rather than for effective learning. To support effective learning, an e-learning system should be able to discover and make use of the semantic communities and the emerging semantic relations in a dynamic complex network of learning resources. Previous graph-based community discovery approaches are limited in their ability to discover semantic communities. This paper first suggests the Semantic Link Network (SLN), a loosely coupled semantic data model that can semantically link resources and derive implicit semantic links according to a set of relational reasoning rules. By studying the intrinsic relationship between semantic communities and the semantic space of the SLN, approaches to discovering reasoning-constraint, rule-constraint, and classification-constraint semantic communities are proposed. Further, the approaches, principles, and strategies for discovering emerging semantics in dynamic SLNs are studied. The basic laws of semantic link network motion are revealed for the first time. An e-learning environment incorporating the proposed approaches, principles, and strategies to support effective discovery and learning is suggested.
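The rule-based derivation of implicit links can be illustrated with a toy composition rule: if r1 links A to B and r2 links B to C, a rule (r1, r2) → r3 adds an implicit r3 link from A to C. The link types and rule below are invented for illustration and are not the paper's actual rule set.

```python
# Toy sketch of deriving implicit semantic links by relational reasoning rules.
links = {("tutorial", "prerequisiteOf", "exercise"),
         ("exercise", "partOf", "course")}
rules = {("prerequisiteOf", "partOf"): "relatedTo"}   # hypothetical rule

def derive(links, rules):
    """Repeatedly compose links until no new implicit links appear."""
    derived = set(links)
    changed = True
    while changed:
        changed = False
        for (a, r1, b) in list(derived):
            for (b2, r2, c) in list(derived):
                if b == b2 and (r1, r2) in rules:
                    new = (a, rules[(r1, r2)], c)
                    if new not in derived:
                        derived.add(new)
                        changed = True
    return derived - links

print(derive(links, rules))   # -> {('tutorial', 'relatedTo', 'course')}
```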
Abstract:
The World Wide Web is opening up access to documents and data for scholars. However, it has not yet impacted on one of the primary activities in research: assessing new findings in the light of current knowledge and debating them with colleagues. The ClaiMaker system uses a directed graph model with similarities to hypertext, in which new ideas are published as nodes that other contributors can build on or challenge in a variety of ways by linking to them. Nodes and links have semantic structure to facilitate the provision of specialist services for interrogating and visualizing the emerging network. By way of example, this paper is grounded in a ClaiMaker model to illustrate how new claims can be described in this structured way.
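A bare-bones sketch of the kind of typed claim graph described above, with ideas as nodes and contributions as typed links such as "challenges"; the claims, link types, and helper function are hypothetical, not ClaiMaker's actual schema.

```python
# Minimal typed claim graph: ideas as nodes, contributions as typed links.
claims = {1: "NLP underlies Semantic Web construction",
          2: "Hand-crafted ontologies scale adequately"}
links = [(2, "challenges", 1)]                 # node 2 challenges node 1

def links_into(node_id):
    """Return all (source, link_type) pairs pointing at a claim."""
    return [(src, t) for (src, t, dst) in links if dst == node_id]

print(links_into(1))                           # -> [(2, 'challenges')]
```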
Abstract:
Educational institutions are under pressure to provide high quality education to large numbers of students very efficiently. The efficiency target combined with the large numbers generally militates against providing students with a great deal of personal or small group tutorial contact with academic staff. As a result of this, students often develop their learning criteria as a group activity, being guided by comparisons one with another rather than the formal assessments made of their submitted work. IT systems and the World Wide Web are increasingly employed to amplify the resources of academic departments, although their emphasis tends to be on course administration rather than learning support. The ready availability of information on the World Wide Web and the ease with which it may be incorporated into essays can lead students to develop a limited view of learning as the process of finding, editing and linking information. This paper examines a module design strategy for tackling these issues, based on developments in modules where practical knowledge is a significant element of the learning objectives. Attempts to make effective use of IT support in these modules will be reviewed as a contribution to the development of an IT for learning strategy currently being undertaken in the author’s Institution.
Abstract:
The American Academy of Optometry (AAO) held its annual meeting in San Diego in December 2005 and the BCLA and CLAE were well represented there. The BCLA does have a reasonable number of non-UK based members and hopefully in the future will attract more. This will certainly be beneficial to the society as a whole and may draw more delegates to the BCLA annual conference. To increase awareness of the BCLA at the AAO, a special evening seminar was arranged where BCLA president Dr. James Wolffsohn gave his presidential address. Dr. Wolffsohn has given the presidential address in the UK, Ireland, Hong Kong and Japan – making it the most travelled presidential address for the BCLA to date. Aside from the BCLA activity at the AAO there were numerous lectures of interest to all, truly a “something for everyone” meeting. All the sessions were multi-track (often up to 10 things occurring at the same time) and the biggest dilemma was often deciding what to attend and, more importantly, what you would miss! Nearly 200 new AAO Fellows from many countries were inducted at the Gala Dinner, including 3 new fellows from the UK (this year they all just happened to be from Aston University!). It is certainly one of the highlights of the AAO to see fellows from different schools of training from around the world fulfilling the same criteria and being duly rewarded for their commitment to the profession. BCLA members will be aware that 2006 sees the introduction of the new fellowship scheme of the BCLA, and by the time you read this the first set of fellowship examinations will have taken place. For more details of the FBCLA scheme see the BCLA web site http://www.bcla.org.uk. Since many of CLAE's editorial panel were at the AAO, an informal meeting and dinner was arranged for them where ideas were exchanged about the future of the journal. It is envisaged that the panel will meet twice a year – the next meeting will be at the BCLA conference. The biggest excitement by far was the fact that CLAE is now Medline/PubMed indexed. You may ask why this is significant to CLAE. PubMed is the free web-based service from the US National Library of Medicine. It holds over 15 million biomedical citations and abstracts from the Medline database. Medline is the largest component of PubMed and covers over 4800 journals published in more than 70 countries. The impact of this is that CLAE is starting to attract more submissions, as researchers and authors need not worry that their work will be hidden from other colleagues in the field; rather, the work is available to view on the World Wide Web. CLAE is one of a very small number of contact lens journals that is indexed this way. Amongst the other CL journals listed you will note that the International Contact Lens Clinic has now merged with CLAE and the journal CLAO has been renamed Eye and Contact Lenses – making the list of indexed CL journals even smaller than it appears. The on-line submission and reviewing system introduced in 2005 has also made it easier for authors to submit their work and easier for reviewers to check the content. This ease of use has led to quicker times from submission to publication. Looking back at the articles published in CLAE in 2005 reveals some interesting facts. The majority of the material still tends to be from UK groups related to the field of Optometry, although we hope that in the future we will attract more work from non-UK groups and also from non-Optometric areas such as refractive surgery or anterior eye pathology.
Interestingly, in 2005 the most downloaded article from CLAE was “Wavefront technology: Past, present and future” by Professor W. Neil Charman, who was also the recipient of the Charles F. Prentice award at the AAO – one of the highest honours that the AAO can bestow. Professor Charman was also the keynote speaker at the BCLA's first Pioneer's Day meeting in 2004. In 2006, readers of CLAE will notice more changes: firstly, we are moving to 5 issues per year. It is hoped that in the future, depending on increased submissions, a move to 6 issues may be feasible. Secondly, CLAE will aim to have one article per issue that carries CL CET points. You will see in this issue there is an article from Professor Mark Wilcox (who was a keynote speaker at the BCLA conference in 2005). In future, articles that carry CET points will be either reviews from BCLA conference keynote speakers, contributions from members of the editorial panel, or material from other invited persons that will be of interest to the readership of CLAE. Finally, in 2006, you will notice a change to the Editorial Panel: some of the distinguished panel felt that it was a good time to step down and new members have been invited to join the remaining panel. The panel represents some of the most eminent names in the fields of contact lenses and/or anterior eye, with varying backgrounds and interests, from many of the prominent institutions around the world. One of the tasks that the Editorial Panel undertakes is to seek out possible submissions to the journal, either from conferences they attend (posters and papers that they see and hear) or from their own research teams. However, on behalf of CLAE I would like to extend that invitation to seek original articles to all readers – if you hear a talk and think it could make a suitable publication for CLAE, please ask the presenters to submit the work via the on-line submission system. If you found the work interesting then the chances are so will others. CLAE invites submissions that are original research, full length articles, short case reports, full review articles, technical reports and letters to the editor. The on-line submission web page is http://www.ees.elsevier.com/clae/.
Abstract:
With its implications for vaccine discovery, the accurate prediction of T cell epitopes is one of the key aspirations of computational vaccinology. We have developed a robust multivariate statistical method, based on partial least squares, for the quantitative prediction of peptide binding to major histocompatibility complexes (MHC), the principal checkpoint on the antigen presentation pathway. As a service to the immunobiology community, we have made a Perl implementation of the method available via a World Wide Web server. We call this server MHCPred. Access to the server is freely available from the URL: http://www.jenner.ac.uk/MHCPred. We have exemplified our method with a model for peptides binding to the common human MHC molecule HLA-B*3501.
Abstract:
Accurate T-cell epitope prediction is a principal objective of computational vaccinology. As a service to the immunology and vaccinology communities at large, we have implemented, as a server on the World Wide Web, a partial least squares-based multivariate statistical approach to the quantitative prediction of peptide binding to major histocompatibility complexes (MHC), the key checkpoint on the antigen presentation pathway within adaptive, cellular immunity. MHCPred implements robust statistical models for both Class I alleles (HLA-A*0101, HLA-A*0201, HLA-A*0202, HLA-A*0203, HLA-A*0206, HLA-A*0301, HLA-A*1101, HLA-A*3301, HLA-A*6801, HLA-A*6802 and HLA-B*3501) and Class II alleles (HLA-DRB*0401, HLA-DRB*0401 and HLA-DRB*0701).
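MHCPred itself is a Perl implementation exposed through the web server described above; purely to illustrate the underlying idea, the following sketch fits a partial least squares model to position-wise one-hot encoded 9-mer peptides against a binding measure such as pIC50. The peptides, affinities, and encoding choices are invented for illustration and are not the MHCPred models.

```python
# Illustrative PLS model for peptide binding: 9-mer peptides are position-wise
# one-hot encoded and regressed against a binding measure (e.g. pIC50).
import numpy as np
from sklearn.cross_decomposition import PLSRegression

AA = "ACDEFGHIKLMNPQRSTVWY"

def encode(peptide):
    """Position-wise one-hot encoding of a 9-mer peptide."""
    v = np.zeros(len(peptide) * len(AA))
    for i, aa in enumerate(peptide):
        v[i * len(AA) + AA.index(aa)] = 1.0
    return v

peptides = ["LPFDKTTVM", "FAFKDLFVV", "YLLPRRGPR", "KLWESPQEI"]   # invented
pIC50 = np.array([7.2, 6.1, 5.4, 6.8])                            # invented
X = np.vstack([encode(p) for p in peptides])

pls = PLSRegression(n_components=2).fit(X, pIC50)
print(pls.predict(encode("LPFDKTTVM").reshape(1, -1)))
```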
Abstract:
Social media influence analysis, sometimes also called authority detection, aims to rank users based on their influence scores in social media. Existing approaches of social influence analysis usually focus on how to develop effective algorithms to quantize users’ influence scores. They rarely consider a person’s expertise levels, which are arguably important to influence measures. In this paper, we propose a computational approach to measuring the correlation between expertise and social media influence, and we take a new perspective to understand social media influence by incorporating expertise into influence analysis. We carefully constructed a large dataset of 13,684 Chinese celebrities from Sina Weibo (literally “Sina microblogging”). We found that there is a strong correlation between expertise levels and social media influence scores. Our analysis gave a good explanation of the phenomenon of “top across-domain influencers”. In addition, different expertise levels showed distinct influence variation patterns, e.g.: (1) high-expertise celebrities have stronger influence on the “audience” in their expertise domains; (2) expertise seems to be more important than relevance and participation for social media influence; (3) the audiences of top expertise celebrities are more likely to forward tweets from high-expertise celebrities on topics outside the expertise domains.
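The central measurement, a correlation between expertise level and influence score, can be reproduced in miniature as follows; the numbers are made up and stand in for the Sina Weibo dataset.

```python
# Rank correlation between expertise level and influence score on toy data.
from scipy.stats import spearmanr

expertise = [1, 2, 2, 3, 4, 5, 5]                    # hypothetical levels
influence = [0.2, 0.3, 0.25, 0.5, 0.6, 0.8, 0.75]    # hypothetical scores
rho, p = spearmanr(expertise, influence)
print(f"Spearman rho={rho:.2f}, p={p:.3f}")
```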
Abstract:
Microposts are small fragments of social media content that have been published using a lightweight paradigm (e.g. Tweets, Facebook likes, foursquare check-ins). Microposts have been used for a variety of applications (e.g., sentiment analysis, opinion mining, trend analysis), by gleaning useful information, often using third-party concept extraction tools. There has been very large uptake of such tools in the last few years, along with the creation and adoption of new methods for concept extraction. However, the evaluation of such efforts has been largely consigned to document corpora (e.g. news articles), questioning the suitability of concept extraction tools and methods for Micropost data. This report describes the Making Sense of Microposts Workshop (#MSM2013) Concept Extraction Challenge, hosted in conjunction with the 2013 World Wide Web conference (WWW'13). The Challenge dataset comprised a manually annotated training corpus of Microposts and an unlabelled test corpus. Participants were set the task of engineering a concept extraction system for a defined set of concepts. Out of a total of 22 complete submissions 13 were accepted for presentation at the workshop; the submissions covered methods ranging from sequence mining algorithms for attribute extraction to part-of-speech tagging for Micropost cleaning and rule-based and discriminative models for token classification. In this report we describe the evaluation process and explain the performance of different approaches in different contexts.
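Challenge systems were compared on precision, recall, and F-score over extracted concepts; a simple set-based scorer in that spirit (on invented annotations, not the official #MSM2013 evaluation script) looks like this.

```python
# Set-based precision/recall/F1 over (span, type) concept annotations.
def prf(gold, predicted):
    tp = len(gold & predicted)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

gold = {("London", "LOC"), ("Obama", "PER")}      # invented gold annotations
pred = {("London", "LOC"), ("WWW'13", "MISC")}    # invented system output
print(prf(gold, pred))                            # -> (0.5, 0.5, 0.5)
```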
Abstract:
The value of online Question Answering (Q&A) communities is driven by the question-answering behaviour of their members. Finding the questions that members are willing to answer is therefore vital to the efficient operation of such communities. In this paper, we aim to identify the parameters that correlate with such behaviours. We train different models and construct effective predictions using various user, question and thread feature sets. We show that answering behaviour can be predicted with a high level of success.
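The prediction task described above, deciding whether a member will answer a given question from user, question, and thread features, reduces to binary classification; the sketch below uses logistic regression on invented feature values as a stand-in for the models trained in the paper.

```python
# Binary classification of answering behaviour from toy user/question/thread
# features; column meanings and values are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# columns: user's past answers, question length, existing answers in thread
X = np.array([[120, 40, 0], [3, 300, 5], [50, 80, 1], [0, 500, 9]])
y = np.array([1, 0, 1, 0])                     # 1 = user answered
clf = LogisticRegression().fit(X, y)
print(clf.predict_proba([[30, 100, 2]])[0, 1])  # probability of answering
```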
Abstract:
We describe the Joint Effort-Topic (JET) model and the Author Joint Effort-Topic (aJET) model that estimate the effort required for users to contribute on different topics. We propose to learn word-level effort taking into account term preference over time and use it to set the priors of our models. Since there is no gold standard which can be easily built, we evaluate them by measuring their abilities to validate expected behaviours such as correlations between user contributions and the associated effort.