153 results for hypertext


Relevance:

10.00%

Publisher:

Abstract:

The issue of workplace bullying has received considerable attention in recent times, both in the academic literature and in the print and electronic media. The stereotypical bullying scenario can be described as the "bully boss" model, where those in more senior positions bully the staff they supervise. By way of contrast, this paper presents the findings of a three-year exemplarian action research study into the lesser-known phenomenon of workplace mobbing. Consistent with grounded theory methods, the findings are discussed in the context of emergent propositions concerning the broader social, cultural, and organisational factors that can perpetuate workplace mobbing in the public sector.

Relevance:

10.00%

Publisher:

Abstract:

This article, published in ON LINE Opinion on 26 October 2006, discusses the broad-ranging amendments to the Copyright Act, introduced into parliament on 19 October 2006, which (in part) implement obligations under the Australia-US Free Trade Agreement (AUSFTA). It covers issues relating to the criminalisation of copyright infringement, user rights and liabilities, and Technological Protection Measures (TPMs).

Relevance:

10.00%

Publisher:

Abstract:

Nowadays people rely heavily on the Internet for information and knowledge. Wikipedia is an online multilingual encyclopaedia containing a very large number of detailed articles covering most written languages, and it is often considered a treasury of human knowledge. It includes extensive hypertext links between documents of the same language for easy navigation. However, pages in different languages are rarely cross-linked, except for the directly equivalent pages on the same subject. This poses serious difficulties for users seeking information or knowledge from sources in different languages, or where no equivalent page exists in one language or another.

In this thesis, a new information retrieval task, cross-lingual link discovery (CLLD), is proposed to tackle the lack of cross-lingual anchored links in a knowledge base such as Wikipedia. In contrast to traditional information retrieval tasks, cross-lingual link discovery algorithms actively recommend a set of meaningful anchors in a source document and establish links to documents in another language. In other words, cross-lingual link discovery is a way of automatically finding hypertext links between documents in different languages, which is particularly helpful for knowledge discovery across language domains. This study focuses specifically on Chinese/English link discovery (C/ELD), a special case of the cross-lingual link discovery task that involves natural language processing (NLP), cross-lingual information retrieval (CLIR), and cross-lingual link discovery itself. To assess the effectiveness of CLLD, a standard evaluation framework is also proposed; it includes topics, document collections, a gold-standard dataset, evaluation metrics, and toolkits for run pooling, link assessment, and system evaluation. With this framework, the performance of CLLD approaches and systems can be quantified.

This thesis contributes to research on natural language processing and cross-lingual information retrieval in CLLD as follows: 1) a simple but effective new Chinese segmentation method, n-gram mutual information, is presented for determining the boundaries of Chinese text; 2) a voting mechanism for named entity translation is demonstrated to achieve high-precision English/Chinese machine translation; 3) a link mining approach that mines the existing link structure for anchor probabilities achieves encouraging results in suggesting cross-lingual Chinese/English links in Wikipedia, and was examined in the experiments on better automatic generation of cross-lingual links carried out as part of the study. The major overall contribution of this thesis is a standard evaluation framework for cross-lingual link discovery research, which helps in benchmarking the performance of CLLD systems and in identifying good CLLD approaches. The evaluation methods and framework described in this thesis were used to quantify system performance in the NTCIR-9 Crosslink task, the first information retrieval track of its kind.
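
The abstract names n-gram mutual information as the segmentation method but gives no details; the sketch below is a minimal bigram version under our own assumptions (add-one smoothing, a tunable cut threshold), not the thesis's actual implementation:

```python
import math
from collections import Counter

def train_counts(corpus, n=2):
    """Collect character unigram and n-gram counts from raw Chinese text."""
    unigrams, ngrams = Counter(), Counter()
    for line in corpus:
        chars = line.strip()
        unigrams.update(chars)
        ngrams.update(chars[i:i + n] for i in range(len(chars) - n + 1))
    return unigrams, ngrams

def pmi(pair, unigrams, ngrams):
    """Pointwise mutual information of two adjacent characters (add-one smoothed)."""
    n_uni, n_ng = sum(unigrams.values()), sum(ngrams.values())
    p_xy = (ngrams.get(pair, 0) + 1) / (n_ng + len(ngrams) + 1)
    p_x = (unigrams.get(pair[0], 0) + 1) / (n_uni + len(unigrams) + 1)
    p_y = (unigrams.get(pair[1], 0) + 1) / (n_uni + len(unigrams) + 1)
    return math.log(p_xy / (p_x * p_y))

def segment(text, unigrams, ngrams, threshold=0.0):
    """Cut the text wherever adjacent characters are only weakly associated."""
    words, current = [], text[0]
    for a, b in zip(text, text[1:]):
        if pmi(a + b, unigrams, ngrams) >= threshold:
            current += b            # strong association: keep in the same word
        else:
            words.append(current)   # weak association: start a new word
            current = b
    words.append(current)
    return words
```

The threshold is a tuning parameter; in practice it would be chosen on held-out segmented text.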

Relevance:

10.00%

Publisher:

Abstract:

We report on an accurate numerical scheme for the evolution of an inviscid bubble in radial Hele-Shaw flow, where the nonlinear boundary effects of surface tension and kinetic undercooling are included on the bubble-fluid interface. As well as demonstrating the onset of the Saffman-Taylor instability for growing bubbles, the numerical method is used to show the effect of the boundary conditions on the separation (pinch-off) of a contracting bubble into multiple bubbles, and the existence of multiple possible asymptotic bubble shapes in the extinction limit. The numerical scheme also allows for the accurate computation of bubbles which pinch off very close to the theoretical extinction time, raising the possibility of computing solutions for the evolution of bubbles with non-generic extinction behaviour.
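
The abstract does not reproduce the governing equations; the following is a standard formulation of the model it describes, with signs and scalings that follow a common convention and may differ from the paper's:

```latex
% One-phase Hele-Shaw model for an inviscid bubble bounded by \partial\Omega(t),
% with surface tension \sigma and kinetic undercooling coefficient c.
\begin{align*}
  \nabla^2 p &= 0 && \text{in the viscous fluid region } \Omega(t),\\
  p &= \sigma\kappa + c\,v_n && \text{on the interface } \partial\Omega(t),\\
  v_n &= -\frac{\partial p}{\partial n} && \text{on } \partial\Omega(t),
\end{align*}
```

where p is the fluid pressure, κ the interface curvature, and v_n the normal velocity; setting σ = c = 0 recovers the classical problem in which growing bubbles develop the Saffman-Taylor instability.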

Relevance:

10.00%

Publisher:

Abstract:

Cross-Lingual Link Discovery (CLLD) is a new problem in Information Retrieval. The aim is to automatically identify meaningful and relevant hypertext links between documents in different languages. This is particularly helpful in knowledge discovery when a multi-lingual knowledge base is sparse in one language or another, or when topical coverage differs across languages; such is the case with Wikipedia. Techniques for identifying new and topically relevant cross-lingual links are a current topic of interest at NTCIR, where the CrossLink task has been running since NTCIR-9 in 2011. This paper presents the evaluation framework for benchmarking cross-lingual link discovery algorithms in the context of NTCIR-9. The framework includes topics, document collections, assessments, metrics, and a toolkit for pooling, assessment, and evaluation. The assessments are divided into two separate sets: manual assessments performed by human assessors, and automatic assessments based on links extracted from Wikipedia itself. Using this framework we show that manual assessment is more robust than automatic assessment in the context of cross-lingual link discovery.
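
The paper's metrics are not spelled out in the abstract; as an illustration of the kind of link evaluation such a framework performs, here is a minimal sketch of precision-at-N and average precision over a gold-standard link set (the data layout is our assumption):

```python
def precision_at_n(ranked_links, gold_links, n):
    """Fraction of the top-n recommended links found in the assessment set.

    ranked_links: list of (anchor, target_doc) pairs, best first
    gold_links:   set of (anchor, target_doc) pairs judged relevant
    """
    hits = sum(1 for link in ranked_links[:n] if link in gold_links)
    return hits / n

def average_precision(ranked_links, gold_links):
    """Mean of the precision values at each rank where a relevant link appears."""
    hits, precisions = 0, []
    for rank, link in enumerate(ranked_links, start=1):
        if link in gold_links:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / max(len(gold_links), 1)
```

Averaging the second quantity over all topics would give MAP, a common summary score in retrieval evaluations of this kind.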

Relevance:

10.00%

Publisher:

Abstract:

Many websites let customers rate items and then use those ratings to generate item reputations, which other users can later draw on for decision making. The aggregated value of the ratings for an item represents that item's reputation, and the accuracy of reputation scores matters because they are used to rank items. Most aggregation methods do not consider the frequency of distinct ratings, nor have they been tested for accuracy over datasets of differing sparsity. In this work we propose a new aggregation method that can be described as a weighted average, where the weights are generated using the normal distribution. The evaluation results show that the proposed method outperforms state-of-the-art methods over datasets of different sparsity.
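
The abstract does not give the weighting formula; one plausible reading, sketched below under our own assumptions, weights the sorted ratings by a normal density over their rank, so mid-ranked ratings count most and extreme scores are damped:

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Density of the normal distribution at x."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def reputation(ratings, sigma=1.0):
    """Weighted average of sorted ratings, weights drawn from a normal pdf.

    This is an illustrative reconstruction, not the paper's exact method.
    """
    xs = sorted(ratings)
    n = len(xs)
    # map rank i onto [-1, 1] and weight it by the normal density at that point
    weights = [normal_pdf(-1 + 2 * i / (n - 1), 0.0, sigma) if n > 1 else 1.0
               for i in range(n)]
    return sum(w * x for w, x in zip(weights, xs)) / sum(weights)

print(round(reputation([1, 4, 4, 5, 5, 5]), 2))  # pulled toward the middle ratings
```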

Relevance:

10.00%

Publisher:

Abstract:

"We live in times in which unlearning has become as important as learning. Dan Pink has called these times the Conceptual Age,i to distinguish them from the Knowledge/Information Age in which many of us were born and educated. Before the current Conceptual Age, the core business of learning was the routine accessing of information to solve routine problems, so there was real value in retaining and reusing the templates taught to us at schools and universities. What is different about the Conceptual Age is that it is characterised by new cultural forms and modes of consumption that require us to unlearn our Knowledge/Information Age habits to live well in our less predictable social world. The ‘correct’ way to write, for example, is no longer ‘correct’ if communicating by hypertext rather than by essay or letter. And who would bother with an essay or a letter or indeed a pen these days? Whether or not we agree that the Conceptual Age, amounts to the first real generation gap since rock and roll, as Ken Robinson claims,ii it certainly makes unique demands of educators, just as it makes unique demands of the systems, strategies and sustainability of organisations. Foremost among these demands, according to innovation analyst Charlie Leadbeater,iii is to unlearn the idea that we are becoming a more knowledgeable society with each new generation. If knowing means being intimately familiar with the knowledge embedded in the technologies we use in our daily lives, then, Leadbeater says, we have never been more ignorant.iv He reminds us that our great grandparents had an intimate knowledge of the technologies around them, and had no problem with getting the butter churn to work or preventing the lamp from smoking. Few of us would know what to do if our mobile phones stopped functioning, just as few of us know what is ‘underneath’ or ‘behind’ the keys of our laptops. Nor, indeed, do many of us want to know. But this means that we are all very quickly reduced to the quill and the lamp if we lose our power sources or if our machines cease to function. This makes us much more vulnerable – as well as much more ignorant in relative terms – than our predecessors."

Relevance:

10.00%

Publisher:

Abstract:

With the emergence of the Internet, the global connectivity of computers has become a reality. The Internet has progressed to provide many user-friendly tools, such as Gopher, WAIS, and the WWW, for information publishing and access. The WWW, which integrates all the other access tools, also provides a very convenient means of publishing and accessing multimedia and hypertext-linked documents stored on computers spread across the world. With the emergence of WWW technology, most information activities are becoming Web-centric. Once information is published on the Web, a user can access it from any part of the world, with a Web browser such as Netscape or Internet Explorer serving as a common user interface for accessing information and databases. This largely relieves users from learning the search syntax of individual information systems. Libraries are taking advantage of these developments to provide access to their resources on the Web. CDS/ISIS is a very popular bibliographic information management software package used in India. In this tutorial we present the details of integrating CDS/ISIS with the WWW. A number of tools are now available for making CDS/ISIS databases accessible on the Internet/Web, among them 1) the WAIS_ISIS server, 2) the WWWISIS server, and 3) the IQUERY server. We explain in detail the steps involved in providing Web access to an existing CDS/ISIS database using WWWISIS, freely available software developed, maintained, and distributed by BIREME, the Latin American & Caribbean Centre on Health Sciences Information. WWWISIS acts as a server for CDS/ISIS databases in a WWW client/server environment, supporting search, formatting, and data entry operations over CDS/ISIS databases, and is available for various operating systems. We have tested it on Windows 95, Windows NT, and Red Hat Linux release 5.2 (Apollo), kernel 2.0.36, on an i686. The testing was carried out using the IISc main library's OPAC, containing more than 80,000 records, and Current Contents issues (bibliographic data) containing more than 25,000 records. WWWISIS is fully compatible with the CDS/ISIS 3.07 file structure. However, on a system running Unix or one of its variants this compatibility is not guaranteed, so it is safest to recreate the master and inverted files under Unix using the utilities provided by BIREME.
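
The tutorial's exact WWWISIS invocation is not reproduced in this abstract; the sketch below only illustrates the general CGI pattern it describes (a form query comes in, the WWWISIS binary searches the CDS/ISIS database and returns formatted HTML). The executable path and flag names are placeholders, not the real WWWISIS options:

```python
#!/usr/bin/env python
# Hypothetical sketch of a CGI gateway in front of a CDS/ISIS database.
# All WWWISIS parameters shown here are illustrative placeholders.
import cgi
import subprocess

form = cgi.FieldStorage()
query = form.getvalue("query", "")       # search expression from the HTML form

print("Content-Type: text/html\n")
result = subprocess.run(
    ["/usr/local/bin/wwwisis",            # placeholder path to the WWWISIS binary
     "db=/data/isis/opac",                # placeholder: CDS/ISIS master file
     "bool=" + query,                     # placeholder: boolean search expression
     "pft=@html.pft"],                    # placeholder: display format file
    capture_output=True, text=True)
print(result.stdout)                      # WWWISIS emits the formatted hit list
```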

Relevance:

10.00%

Publisher:

Abstract:

Intimate diaries are a type of text in the confessional domain. They present personal narratives with characteristics specific to the genre, such as dating, marks of subjectivity, informal writing, and colloquial language. For many years they were written in notebooks and kept under lock and key by their authors so that no one else could read them. Around the 1980s, teenagers' day planners appeared. Taking advantage of the industrially pre-defined format, these planners were filled in day by day, like a diary, but with the novelty of added semiotic elements such as photographs, candy wrappers, and magazine clippings. They also introduced a participating reader: the texts were shared with friends, and notes and comments were written on the planners' pages. With the advent of the Internet, the diary and the planner merged into the blog, which exploits the resources of the virtual medium to make the genre interactive, hypertextual, and multimedia, intensifying reading and writing among the young producers of blogs. In parallel, writing has become a major communication tool in the virtual environment, acquiring peculiar characteristics driven by the speed of communication and the economy of typing. Drawing on Bakhtin's theory of speech genres and Marcuschi's concept of digital genres, this research aims to identify and list categories relevant to the diary and blog genres, analysing and comparing them in order to map a possible path from teenagers' diaries to their blogs, discussing the public-private contrast in intimate writing as well as its main linguistic marks, and weighing the advantages and disadvantages of its use as an auxiliary tool in teaching reading and writing in Portuguese. The research was motivated by the claim that digital writing can harm the development of young people's textual production; this was not confirmed, since the syntactic structure of the language is preserved and variation occurs only at the lexical level, without interfering with communication. The results point to the use of blogs in education as a complement to teaching materials and as an incentive to reading, writing, argument building, and critical positioning, bringing school closer to students' everyday lives.

Relevance:

10.00%

Publisher:

Abstract:

The main aim of this thesis is to examine the identity construction of high-born mixed-race women in the novels Wide Sargasso Sea (1966) by Jean Rhys, True Women (1993) by Janice Windle, and Rosaura: a enjeitada (1883) by Bernardo Guimarães, considering three distinct factors: multiculturalism and interracial relations in the nineteenth century; the protagonists' attempts to pass as white before local elites; and the repressed identification of slave-holding mixed-race women with the less affluent classes. The points of convergence and divergence among the works are examined, since the Brazilian author treats identity as a hereditary and national trait, whereas the other authors interpret it as a subjective cultural construct. Broadly, the research shows how these authors resist the scientism that views the mixed-race subject as degenerate and metabolically and ontologically unbalanced, seeking instead to defend a different image. Since Wide Sargasso Sea and True Women are rereadings of nineteenth-century works, the study also considers intertextual relations along two axes: first, the relation between hypertext and hypotext; and second, the possible relation between Guimarães and the works reread by Rhys and Windle.

Relevance:

10.00%

Publisher:

Abstract:

Recent technological advances have raised the level of qualification required of researchers in epidemiology, and the strategic role of education cannot be ignored. Nevertheless, in its latest master plan (2005-2009), the Brazilian Association of Postgraduate Studies in Collective Health (ABRASCO) notes that little value is placed on the production of didactic-pedagogical material, and that there is no policy for the development and use of free software in the teaching of epidemiology. It is therefore timely to invest in a relational perspective, along the lines proposed by constructivism, since this theory has been recognised as the most suitable for developing computer-based teaching materials. In this sense, it is both opportune and fruitful to run interactive courses and, within them, to develop related teaching material. Regarding the policy of developing and using free software in the teaching of epidemiology, particularly in applied statistics, R has emerged as software of growing interest: not only does it avoid the penalties that can follow from using unlicensed commercial software, but open access to its code and programming makes it an excellent tool for producing teaching material in the form of hyperdocuments, an important foundation for the desired teacher-student interaction in the classroom. The main objective is to develop teaching material in R for courses on biostatistics applied to epidemiological analysis. Because certain statistical functions are not implemented in R, the programming of additional functions was also included. The courses used in developing this material were based on the disciplines "An Introduction to the R Platform for Statistical Data Modelling" and "Measurement Instruments in Epidemiology I: Classical Measurement Theory (Analysis)", attached to the Department of Epidemiology, Institute of Social Medicine (IMS), State University of Rio de Janeiro (UERJ). The theoretical-pedagogical basis was defined according to constructivist principles, in which individuals are active, critical agents of their own knowledge, building meaning from their own experience. From this constructivist standpoint, a problem-based teaching methodology was adopted, covering problems drawn from real situations and systematised in writing. The computational methods were based on the New Information and Communication Technologies (NICT), which pursue more flexible curricula adapted to students' differing learning profiles. The NICT were implemented through hypertext, a structure of texts interconnected by nodes or links, forming a network of related information. While designing the material, changes were made to the basic interface of R's help system to ensure interactivity between student and material. The instructional text itself is composed of blocks that encourage discussion and the exchange of information between teacher and students.

Relevance:

10.00%

Publisher:

Abstract:

The general aim of this research is to defend the act of playing as a reading process, understood as the construction of meaning(s), in which the player participates as co-author. To that end, we propose a description of the video game as a hypermodal discourse genre. Theories from game studies were combined with theories from language and literature studies in order to give theoretical coherence to the objectives pursued here. The corpus consists of a transcription of a complete playthrough of the game Heavy Rain and recordings of filmic and playable scenes from the same game. In this research we defend play as a highly enriching reading process that demands an active stance from players, who must deploy diverse skills in order to construct meaning from the game's semiotic multiplicity.

Relevance:

10.00%

Publisher:

Abstract:

This paper extends a temporal-logic-based multimedia script description model from the description of linear, sequential spatio-temporal relations to the hypertext description of non-linear spatio-temporal relations, and proposes a new hypertext model. Within this model, hypertext nodes, links, and the stepwise refinement of hypertext structure can all be described in a single unified framework. A hypertext markup language designed with the model has been implemented, and an interactive hypertext authoring environment has been developed on top of that language.
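
As a rough illustration of describing nodes, links, and stepwise refinement within one framework, here is a minimal sketch in Python; the class and method names are ours, not the paper's:

```python
class Node:
    """A hypertext node: either atomic content or a composite awaiting refinement."""
    def __init__(self, name, content=None):
        self.name = name
        self.content = content      # None marks a node not yet refined
        self.children = []          # sub-nodes introduced by refinement
        self.links = []             # outgoing labelled links to other nodes

    def link_to(self, target, label):
        self.links.append((label, target))

    def refine(self, *subnodes):
        """Stepwise refinement: give an abstract node finer internal structure."""
        self.children.extend(subnodes)
        return self

# A document is refined top-down inside the same framework:
doc = Node("tutorial")
intro, body = Node("intro", "Welcome"), Node("body")
doc.refine(intro, body)
body.refine(Node("sec1", "..."), Node("sec2", "..."))
intro.link_to(body, "next")
```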

Relevance:

10.00%

Publisher:

Abstract:

This report describes our attempt to add animation as another data type to be used on the World Wide Web. Our current network infrastructure, the Internet, is incapable of carrying the video and audio streams needed for web-based presentations. In contrast, object-oriented animation proves to be efficient in terms of network resource requirements. We defined an animation model to support drawing-based and frame-based animation, and extended the HyperText Markup Language to include this animation model. BU-NCSA Mosanim, a modified version of NCSA Mosaic for X (v2.5), is available to demonstrate the concept and potential of animation in presentations and interactive game playing over the web.
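
The animation model is only named in the abstract; the following is a minimal sketch of what a two-style object model could look like, with a hypothetical `surface` drawing interface (`blit`, `line`, ...) assumed:

```python
import time

class Animation:
    """Base type for the two animation styles named in the abstract."""
    def play(self, surface):
        raise NotImplementedError

class FrameAnimation(Animation):
    """Frame-based: a pre-rendered image sequence shown at a fixed rate."""
    def __init__(self, frames, fps=12):
        self.frames, self.delay = frames, 1.0 / fps
    def play(self, surface):
        for frame in self.frames:
            surface.blit(frame)     # assumed surface method
            time.sleep(self.delay)

class DrawingAnimation(Animation):
    """Drawing-based: compact drawing commands replayed on the client, which
    is why it needs far less bandwidth than shipping video frames."""
    def __init__(self, commands):
        self.commands = commands    # e.g. [("line", 0, 0, 50, 50), ...]
    def play(self, surface):
        for cmd, *args in self.commands:
            getattr(surface, cmd)(*args)
```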

Relevance:

10.00%

Publisher:

Abstract:

The exploding demand for services like the World Wide Web reflects the potential presented by globally distributed information systems. The number of WWW servers world-wide has doubled every 3 to 5 months since 1993, outstripping even the growth of the Internet. At each of these self-managed sites, the Common Gateway Interface (CGI) and the Hypertext Transfer Protocol (HTTP) already constitute a rudimentary basis for contributing local resources to remote collaborations. However, the Web has serious deficiencies that make it unsuited for use as a true medium for metacomputing, the process of bringing hardware, software, and expertise from many geographically dispersed sources to bear on large-scale problems. These deficiencies are, paradoxically, the direct result of the very simple design principles that enabled its exponential growth. There are many symptoms of the problems exhibited by the Web: disk and network resources are consumed extravagantly; information search and discovery are difficult; protocols are aimed at data movement rather than task migration, and ignore the potential for distributing computation. All of these, however, can be seen as aspects of a single problem: as a distributed system for metacomputing, the Web offers unpredictable performance and unreliable results. The goal of our project is to use the Web as a medium (within either the global Internet or an enterprise intranet) for metacomputing in a reliable way and with performance guarantees. We attack this problem at four levels:

(1) Resource Management Services: Globally distributed computing allows novel approaches to the old problems of performance guarantees and reliability. Our first set of ideas involves setting up a family of real-time resource management models, organized by the Web Computing Framework, with a standard Resource Management Interface (RMI), a Resource Registry, a Task Registry, and resource management protocols that allow resource needs and availability information to be collected and disseminated, so that a family of algorithms with varying computational precision and accuracy of representation can be chosen to meet realtime and reliability constraints.

(2) Middleware Services: Complementary to techniques for allocating and scheduling available resources to serve application needs under realtime and reliability constraints, the second set of ideas aims at reducing communication latency, traffic congestion, server workload, and the like. We develop customizable middleware services that exploit application characteristics in traffic analysis to drive new server/browser design strategies (e.g., exploiting the self-similarity of Web traffic), derive document access patterns via multiserver cooperation, and use them in speculative prefetching, document caching, and aggressive replication to reduce server load and bandwidth requirements.

(3) Communication Infrastructure: To achieve any guarantee of quality of service or performance, one must reach the network layer, which provides the basic guarantees of bandwidth, latency, and reliability. The third area is therefore a set of new techniques in network service and protocol design.

(4) Object-Oriented Web Computing Framework: A useful resource management system must deal with job priority, fault-tolerance, quality of service, complex resources such as ATM channels, probabilistic models, and so on, and models must be tailored to represent the best tradeoff for a particular setting. This requires a family of models, organized within an object-oriented framework, because no one-size-fits-all approach is appropriate. This presents a software engineering challenge requiring the integration of solutions at all levels: algorithms, models, protocols, and profiling and monitoring tools. The framework captures the abstract class interfaces of the collection of cooperating components, but allows the concretization of each component to be driven by the requirements of a specific approach and environment.
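
As a rough illustration of level (4), here is a minimal sketch of an abstract resource-management interface with one concretization; all names are hypothetical, not the project's actual API:

```python
from abc import ABC, abstractmethod

class ResourceManager(ABC):
    """Abstract Resource Management Interface (RMI): each concrete model
    trades computational precision against realtime/reliability constraints."""
    @abstractmethod
    def estimate(self, task):
        """Predict completion time for a task under current availability."""
    @abstractmethod
    def allocate(self, task):
        """Reserve resources for a task; return True on success."""

class ResourceRegistry:
    """Tracks resource availability advertised by participating sites."""
    def __init__(self):
        self._resources = {}
    def advertise(self, name, capacity):
        self._resources[name] = capacity
    def available(self, name):
        return self._resources.get(name, 0)

class GreedyManager(ResourceManager):
    """One concretization: fast, coarse estimates suited to soft-realtime tasks."""
    def __init__(self, registry):
        self.registry = registry
    def estimate(self, task):
        return task["cycles"] / max(self.registry.available(task["host"]), 1)
    def allocate(self, task):
        return self.registry.available(task["host"]) >= task["cycles"]
```

Swapping in a different ResourceManager subclass, say one with probabilistic estimates for hard-realtime work, changes the precision/cost tradeoff without touching the rest of the framework, which is the point of the abstract-interface design the text describes.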