906 results for World Wide Web -- Design
Abstract:
Title from cover.
Abstract:
Mode of access: Internet.
Abstract:
The main argument of this paper is that Natural Language Processing (NLP) does, and will continue to, underlie the Semantic Web (SW), including its initial construction from unstructured sources such as the World Wide Web (WWW), whether its advocates realise this or not. Chiefly, we argue, such NLP activity is the only way up to a defensible notion of meaning at conceptual levels (in the original SW diagram) based on lower-level empirical computations over usage. Our aim is definitely not to claim 'logic bad, NLP good' in any simple-minded way, but to argue that the SW will be a fascinating interaction of these two methodologies, again like the WWW (which has been basically a field for statistical NLP research) but with deeper content. Only NLP technologies (and chiefly information extraction) will be able to provide the requisite RDF knowledge stores for the SW from existing unstructured text databases in the WWW, and in the vast quantities needed. There is no alternative at this point, since a wholly or mostly hand-crafted SW is also unthinkable, as is a SW built from scratch and without reference to the WWW. We also assume that, whatever the limitations on current SW representational power we have drawn attention to here, the SW will continue to grow in a distributed manner so as to serve the needs of scientists, even if it is not perfect. The WWW has already shown how an imperfect artefact can become indispensable.
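As a rough illustration of the kind of pipeline argued for here (a toy sketch, not the system described in the paper), the following Python fragment applies a single hand-written extraction pattern to unstructured sentences and emits RDF statements in N-Triples form; the pattern, the example.org namespace, and the corpus are invented for the example.

```python
import re

# Toy "X is a Y" pattern standing in for a real information-extraction system.
# Only the rdf:type predicate URI is standard vocabulary; the example.org
# namespace and the corpus below are invented for the illustration.
EX = "http://example.org/"
RDF_TYPE = "<http://www.w3.org/1999/02/22-rdf-syntax-ns#type>"
IS_A = re.compile(r"^(?P<subj>[A-Z][\w\- ]+?) is an? (?P<obj>[\w\- ]+?)\.$")

def to_uri(label: str) -> str:
    """Turn a free-text label into a crude URI reference."""
    return f"<{EX}{label.strip().replace(' ', '_')}>"

def extract_triples(sentences):
    """Yield one N-Triples line per sentence matching the toy pattern."""
    for sentence in sentences:
        match = IS_A.match(sentence.strip())
        if match:
            yield f"{to_uri(match['subj'])} {RDF_TYPE} {to_uri(match['obj'])} ."

if __name__ == "__main__":
    corpus = [
        "Tim Berners-Lee is a computer scientist.",
        "The weather was pleasant.",   # yields no triple
        "RDF is a data model.",
    ]
    for triple in extract_triples(corpus):
        print(triple)
```

A real information-extraction system would of course use learned extractors rather than one regular expression, but the shape of the output, triples ready to be loaded into an RDF store, is the point of the sketch.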
Abstract:
This thesis explores how the World Wide Web can be used to support English language teachers doing further studies at a distance. The future of education worldwide is moving towards a requirement that we, as teacher educators, use the latest web technology not as a gambit, but as a viable tool to improve learning. By examining the literature on knowledge, teacher education and web training, a model of teacher knowledge development is developed, along with statements of advice for web developers based upon the model. Next, the applicability and viability of both the model and the statements of advice are examined by developing a teacher support site (http://www.philseflsupport.com) according to these principles. The data collected from one focus group of users from sixteen different countries, all studying on the same distance Masters programme, is then analysed in depth. The outcomes from the research are threefold. A functioning website that is averaging around 15,000 hits a month provides a professional contribution. An expanded model of teacher knowledge development, based upon five theoretical principles that reflect the ever-expanding cyclical nature of teacher learning, provides an academic contribution. Finally, a series of six statements of advice for developers of teacher support sites is offered. These statements are grounded in the theoretical principles behind the model of teacher knowledge development and incorporate nine keys to effective web facilitation. Taken together, they provide a forward-looking contribution to the praxis of web-supported teacher education, and thus to the potential dissemination of the research presented here. The research has succeeded in reducing the proliferation of terminology in teacher knowledge into a succinct model of teacher knowledge development. The model may now be used to further our understanding of how teachers learn and develop as other research builds upon the individual study here. NB: Appendix 4 is only available for consultation at Aston University Library with prior arrangement.
Abstract:
Web document cluster analysis plays an important role in information retrieval by organizing large amounts of documents into a small number of meaningful clusters. Traditional web document clustering is based on the Vector Space Model (VSM), which takes into account only two levels of knowledge granularity (document and term) but ignores the bridging paragraph granularity. This two-level granularity may lead to unsatisfactory clustering results with “false correlation”. To deal with this problem, a Hierarchical Representation Model with Multi-granularity (HRMM), which consists of a five-layer representation of data and a two-phase clustering process, is proposed based on granular computing and article structure theory. To address the zero-valued similarity problem resulting from the sparse term-paragraph matrix, an ontology-based strategy and a tolerance-rough-set-based strategy are introduced into HRMM. By using granular computing, structural knowledge hidden in documents can be captured more efficiently and effectively in HRMM, and thus web document clusters of higher quality can be generated. Extensive experiments show that HRMM, HRMM with the tolerance-rough-set strategy, and HRMM with ontology all significantly outperform VSM and a representative non-VSM-based algorithm, WFP, in terms of F-Score.
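The zero-valued similarity problem mentioned above can be seen in a small numerical example (a sketch for illustration only, not the HRMM algorithm): at document level two term vectors may look correlated, while at the bridging paragraph level some paragraph pairs share no terms at all, so their cosine similarity collapses to zero. The toy vocabulary and counts below are invented for the example.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity; defined as 0 when either vector is all zeros."""
    norm = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / norm) if norm else 0.0

# Toy vocabulary: [web, cluster, granule, ontology]
# Document A: paragraph A1 about "web cluster", A2 about "ontology".
# Document B: paragraph B1 about "web",         B2 about "granule ontology".
A1 = np.array([2, 1, 0, 0]); A2 = np.array([0, 0, 0, 3])
B1 = np.array([1, 0, 0, 0]); B2 = np.array([0, 0, 2, 1])

doc_A, doc_B = A1 + A2, B1 + B2   # document-level (VSM) term vectors

print("document-level similarity :", round(cosine(doc_A, doc_B), 3))
print("paragraph A2 vs B1        :", round(cosine(A2, B1), 3))  # zero-valued
print("paragraph A1 vs B1        :", round(cosine(A1, B1), 3))
```

The document-level score suggests the two documents are moderately related, even though one of the underlying paragraph pairs is entirely disjoint; this is the gap that the paragraph granularity, ontology, and tolerance-rough-set strategies are meant to bridge.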
Abstract:
Educational institutions are under pressure to provide high-quality education to large numbers of students very efficiently. The efficiency target combined with the large numbers generally militates against providing students with a great deal of personal or small-group tutorial contact with academic staff. As a result, students often develop their learning criteria as a group activity, being guided by comparisons with one another rather than by the formal assessments made of their submitted work. IT systems and the World Wide Web are increasingly employed to amplify the resources of academic departments, although the emphasis tends to be on course administration rather than learning support. The ready availability of information on the World Wide Web, and the ease with which it may be incorporated into essays, can lead students to develop a limited view of learning as the process of finding, editing and linking information. This paper examines a module design strategy for tackling these issues, based on developments in modules where practical knowledge is a significant element of the learning objectives. Attempts to make effective use of IT support in these modules will be reviewed as a contribution to the development of an IT-for-learning strategy currently being undertaken in the author's Institution.
Abstract:
* The work is partly supported by RFFI grant 08-07-00062-a
Abstract:
Ivan Shotlekov, Asen Rahnev - This paper presents a set of criteria for evaluating the quality of student web design projects. The criteria are suitable for the development, self-assessment, peer assessment, and evaluation of websites designed by students. The rating scale was piloted during the course "English in Information Technology", taught to first-year informatics students at the Faculty of Mathematics and Informatics of Plovdiv University "Paisii Hilendarski". While working on a multidisciplinary web design project, it helps students acquire not only the technical skills involved in building high-quality websites, but also some of the process skills they will need in real-world practice.
Abstract:
An implementation of Sem-ODB, a database management system based on the Semantic Binary Model, is presented. A metaschema of the Sem-ODB database as well as the top-level architecture of the database engine is defined. A new benchmarking technique is proposed which allows databases built on different database models to compete fairly. This technique is applied to show that Sem-ODB has excellent efficiency compared to a relational database on a certain class of database applications. A new semantic benchmark is designed which allows evaluation of the performance of the features characteristic of semantic database applications. An application used in the benchmark represents a class of problems requiring databases with sparse data, complex inheritances and many-to-many relations. Such databases can be naturally accommodated by the semantic model. A fixed predefined implementation is not enforced, allowing the database designer to choose the most efficient structures available in the DBMS tested. The results of the benchmark are analyzed. A new high-level querying model for semantic databases is defined. It is proven adequate to serve as an efficient native semantic database interface, and has several advantages over the existing interfaces. It is optimizable and parallelizable, and it supports the definition of semantic user views and the interoperability of semantic databases with other data sources such as the World Wide Web, relational, and object-oriented databases. The query is structured as a semantic database schema graph with interlinking conditionals. The query result is a mini-database, accessible in the same way as the original database. The paradigm supports and utilizes the rich semantics and inherent ergonomics of semantic databases. Finally, the analysis and high-level design of a system is presented that exploits the superiority of the Semantic Database Model over other data models in expressive power and ease of use, allowing uniform access to heterogeneous data sources such as semantic databases, relational databases, web sites, ASCII files, and others via a common query interface. The Sem-ODB engine is used to control all the data sources combined under a unified semantic schema. A particular application of the system, providing an ODBC interface to the WWW as a data source, is discussed.
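The querying model is described above only at a high level, so the following is merely a sketch of the general idea, not Sem-ODB itself: facts are held as binary relations, a query is a small graph pattern with an interlinking condition, and its answer is a mini-database that can be queried in the same way. All relation and object names are invented for the example.

```python
# Toy fact base of binary relations (subject, relation, value).
facts = {
    ("student1", "name", "Ada"),
    ("student1", "enrolled_in", "course1"),
    ("student2", "name", "Bob"),
    ("student2", "enrolled_in", "course2"),
    ("course1", "title", "Databases"),
    ("course1", "credits", 6),
    ("course2", "title", "Networks"),
    ("course2", "credits", 4),
}

def select(db, relation, predicate):
    """Objects whose `relation` value satisfies `predicate`."""
    return {s for (s, r, v) in db if r == relation and predicate(v)}

def mini_database(db, objects):
    """All facts mentioning the selected objects: the query's result database."""
    return {(s, r, v) for (s, r, v) in db if s in objects or v in objects}

# Query graph: student --enrolled_in--> course, with the condition credits >= 5.
heavy_courses = select(facts, "credits", lambda c: c >= 5)
students = {s for (s, r, v) in facts if r == "enrolled_in" and v in heavy_courses}

result = mini_database(facts, students | heavy_courses)
for fact in sorted(result, key=str):
    print(fact)
```

The result set here contains only the facts about student1 and course1 and can itself be passed back into select and mini_database, which is the sense in which the query result behaves like a database rather than a flat table of values.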
Abstract:
The authors report the generally poor results attained when the NAACP assessed the diversity management performance of 16 major hotel companies. Then, as an alternative means of assessing the same hotel companies’ commitment to diversity, they report the results of an analysis of the world-wide web pages the companies use to represent themselves in the electronic marketplace. Analysis of the web sites found virtually no evidence of corporate concern for diversity.
Abstract:
This work presents the theoretical and conceptual foundations underlying the construction of a graphic communication object about the contemporary art market, and describes the development of its realisation as well as its final product. It concerns the creation of a cultural management tool with cartographic characteristics: the Cartografia do Sistema de Arte na Contemporaneidade (CA) is a theoretical and visual communication object about the art market whose aim is to communicate the complexity of this art system in a synthetic, synergistic, fast, intuitive and all-encompassing way. The method used for its development involved bibliographic research on theoretical foundations in the areas of management, philosophy, design and communication, among others. It also involved an analysis of the collected elements, the modes of operation and the establishment of connections within the contemporary art market, as well as preliminary studies and guidelines for the construction of the communication object. Finally, it included following the project through to completion in partnership with a design professional. As a product, the CA was developed: a printed object that graphically represents the structure and functioning of the system. During the execution of the project, the possibility was identified of building a complementary resource detailing the content of the CA and making it available, in electronic format on the World Wide Web, to the public interested in the contemporary art market system; the map directs its readers to this resource. It was concluded that this is a field open to new studies, innovations and ventures, and we continue to believe, especially now that it has acquired material form, that the CA can be useful to students, to the public, and to arts and cultural management professionals who wish to understand how the contemporary art market works.
Abstract:
This study is guided by the hypothesis that the same educational objective, implemented as either cooperative or collaborative learning in university teaching, does not affect students' perceptions of the learning model. It analyses the reflections of two groups of engineering students that shared the same educational goals, implemented through two different active learning strategies: simulation as a cooperative learning strategy and Problem-Based Learning as a collaborative one. The different number of participants per group (eighty-five and sixty-five, respectively), as well as the use of two active learning strategies, whether collaborative or cooperative, did not produce differences in the results from a qualitative perspective.
Abstract:
Provenance is a record that describes the people, institutions, entities, and activities involved in producing, influencing, or delivering a piece of data or a thing in the world. Some ten years after beginning research on the topic of provenance, I co-chaired the provenance working group at the World Wide Web Consortium. The working group published the PROV standard for provenance in 2013. In this talk, I will present some use cases for provenance, the PROV standard, and some flagship examples of adoption. I will then move on to our current research aiming to exploit provenance, in the context of the Sociam, SmartSociety, and ORCHID projects. In doing so, I will present techniques to deal with large-scale provenance, to build predictive models based on provenance, and to analyse provenance.
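PROV describes provenance in terms of entities, activities, and agents linked by relations such as used, wasGeneratedBy, and wasAssociatedWith. As a rough illustration of that vocabulary (a sketch using plain Python records rather than a PROV library; the ex: identifiers are invented for the example), a small provenance record might be assembled as follows.

```python
from dataclasses import dataclass, field

@dataclass
class ProvRecord:
    """A minimal stand-in for a PROV document: three node sets plus relations."""
    entities: set = field(default_factory=set)
    activities: set = field(default_factory=set)
    agents: set = field(default_factory=set)
    relations: list = field(default_factory=list)

    def used(self, activity, entity):
        self.activities.add(activity)
        self.entities.add(entity)
        self.relations.append(("used", activity, entity))

    def was_generated_by(self, entity, activity):
        self.entities.add(entity)
        self.activities.add(activity)
        self.relations.append(("wasGeneratedBy", entity, activity))

    def was_associated_with(self, activity, agent):
        self.activities.add(activity)
        self.agents.add(agent)
        self.relations.append(("wasAssociatedWith", activity, agent))

# A chart (entity) generated by a plotting activity that used a dataset,
# with the activity attributed to an analyst (agent).
record = ProvRecord()
record.used("ex:plotting", "ex:dataset")
record.was_generated_by("ex:chart", "ex:plotting")
record.was_associated_with("ex:plotting", "ex:analyst")

for relation in record.relations:
    print(relation)
```

In practice one would serialise such a record with a PROV library into PROV-N, PROV-XML, or RDF, but the three node types and the relation edges above are the core of what the standard expresses.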