994 results for Machine-readable Library Cataloguing
Abstract:
New user input systems have emerged from developing technologies and specialised user demands. Conventional keyboard and mouse devices still dominate in input speed, but other input mechanisms are needed in special application scenarios. Touch-screen and stylus input methods have been widely adopted by PDAs and smartphones, and reduced keypads are necessary for mobile phones. A new design trend explores the design space of applications requiring single-handed, even eyes-free, input on small mobile devices. This calls for as few keys on the input device as possible to keep it feasible to operate, but representing many characters with fewer keys can make the input ambiguous. Accelerometers embedded in mobile devices provide opportunities to combine device movements with keys to disambiguate the input signal, and recent research has explored this design space for text input. In this dissertation an accelerometer-assisted single-key positioning input system is developed. It uses the tilt directions of the input device as input signals and maps their sequences to output characters and functions. A generic positioning model is developed to guide the design of positioning input systems. A calculator prototype and a text input prototype for the 4+1 (5-position) and the 8+1 (9-position) positioning input systems are implemented using accelerometer readings on a smartphone. Users operate with one physical key and receive audible feedback. Controlled experiments are conducted to evaluate the feasibility, learnability, and design space of the accelerometer-assisted single-key positioning input system. This research can provide inspiration and innovative references for researchers and practitioners in positioning user input design, applications of accelerometer readings, and the development of standard machine-readable sign languages.
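A minimal sketch of the core mapping idea, assuming hypothetical tilt thresholds and an illustrative character table (neither is taken from the dissertation): accelerometer readings are classified into one of the 4+1 tilt positions, and a sequence of positions is looked up to produce a character.

```python
# Hypothetical tilt classifier for the 4+1 (5-position) layout:
# neutral, up, down, left, right. Thresholds are illustrative only.
TILT_THRESHOLD = 0.35  # g-units; assumed value, not from the dissertation

def classify_tilt(ax, ay):
    """Map raw accelerometer x/y readings to one of 5 positions."""
    if abs(ax) < TILT_THRESHOLD and abs(ay) < TILT_THRESHOLD:
        return "neutral"
    if abs(ax) >= abs(ay):
        return "right" if ax > 0 else "left"
    return "up" if ay > 0 else "down"

# Illustrative two-position code table; the dissertation's actual
# character mapping may differ.
CODE_TABLE = {
    ("up", "neutral"): "a",
    ("up", "left"): "b",
    ("up", "right"): "c",
    ("down", "neutral"): "d",
}

def decode(positions):
    """Translate a sequence of tilt positions, each confirmed by the
    single physical key press, into an output character."""
    return CODE_TABLE.get(tuple(positions), "?")

print(decode([classify_tilt(0.0, 0.8), classify_tilt(0.1, 0.1)]))  # -> 'a'
```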
Abstract:
Thanks to current developments in online teaching (video platforms, MOOCs) on the one hand, and a huge selection together with easy production and distribution on the other, instructional videos have become very popular for knowledge transfer. Nevertheless, videos carry a decisive disadvantage that lies in the nature of the data format: searching for specific facts within a video, and the semantic processing needed to automatically link it with further specific content, involve considerable effort. This hampers the selection of teaching segments oriented towards learning success and their arrangement for steering the learning process. While watching a video, learners may be forced to sit through facts they already know, or can skip them only by tedious manual seeking; the same problem arises when deliberately revisiting video passages. As a solution to this problem, a web application is presented that enables the semantic enrichment of videos into adaptive learning content: by integrating self-test exercises with defined follow-up actions, video sections can be automatically skipped or repeated and external content linked, based on the learner's current knowledge. The presented approach thus builds on an extension of Crowder's behaviourist learning theory of branched teaching programmes, which provides sequences of learning units adapted to the learner's progress. At the same time, regularly interposed self-test exercises foster the learner's motivation and attention according to the rules of Skinner's programmed instruction and reinforcement theory. By explicitly marking up related sections within videos, the information they contain can additionally be made machine-readable, creating further possibilities for finding and linking learning content.
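A minimal sketch of the branching idea, with hypothetical segment and quiz structures (names and fields are illustrative, not taken from the described web application): each video segment carries an optional self-test, and the answer decides whether the player skips ahead, repeats the segment, or follows a link to external content.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SelfTest:
    question: str
    correct_answer: str
    on_correct: str    # e.g. "skip_next": the learner already knows this part
    on_incorrect: str  # e.g. "repeat" or "link:<url>" to external content

@dataclass
class Segment:
    segment_id: str
    start: float  # seconds
    end: float
    test: Optional[SelfTest] = None

def next_action(segment: Segment, answer: str) -> str:
    """Crowder-style branching: choose the follow-up action from the
    learner's answer to the segment's self-test."""
    if segment.test is None:
        return "continue"
    if answer == segment.test.correct_answer:
        return segment.test.on_correct
    return segment.test.on_incorrect

intro = Segment("intro", 0.0, 90.0,
                SelfTest("What does MOOC stand for?",
                         "massive open online course",
                         on_correct="skip_next",
                         on_incorrect="repeat"))

print(next_action(intro, "massive open online course"))  # -> skip_next
```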
Abstract:
Linking the physical world to the Internet, also known as the Internet of Things, has increased the information and services available in everyday life and in the enterprise world. In enterprise IT, an increasing amount of communication takes place between IT backend systems and small IoT devices, for example sensor networks or RFID readers. This introduces challenges in terms of complexity and integration. We are working on the integration of IoT devices into enterprise IT by leveraging SOA techniques and Semantic Web technologies. We present an SOA-based integration platform for connecting WSNs with large enterprise business processes. To ensure interoperability, our platform is based on Linked Services: thoroughly described, machine-readable, machine-reasonable service descriptions.
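As an illustration of what such a machine-readable service description could look like, here is a sketch using Python's rdflib; the vocabulary namespace and properties are placeholders, not the actual Linked Services vocabulary used by the platform.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, RDFS

# Placeholder vocabulary; the platform's actual Linked Services
# vocabulary may differ.
SVC = Namespace("http://example.org/linked-services#")

g = Graph()
g.bind("svc", SVC)

# Describe a WSN-backed service as RDF so backend systems can
# discover and reason about it.
sensor_service = URIRef("http://example.org/services/temperature-wsn")
g.add((sensor_service, RDF.type, SVC.Service))
g.add((sensor_service, RDFS.label, Literal("WSN temperature reading service")))
g.add((sensor_service, SVC.hasOutput, SVC.TemperatureObservation))
g.add((sensor_service, SVC.exposedBy, URIRef("http://example.org/gateways/plant-1")))

print(g.serialize(format="turtle"))
```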
Abstract:
In 2008, the 50th anniversary of the IGY (International Geophysical Year), WDC-MARE presents with this CD publication 3632 data sets in Open Access, part of the most important results from 73 cruises of the research vessel METEOR between 1964 and 1985. The archive is a coherently organized collection of published and unpublished data sets produced by scientists of all marine research disciplines who participated in METEOR expeditions, measured environmental parameters during cruises, and investigated sample material after the cruises in the labs of the participating institutions. In most cases, the data were gathered from the Meteor Forschungsergebnisse, published by the Deutsche Forschungsgemeinschaft (DFG). A second important data source is time series and radiosonde ascents from more than 20 years of ship weather observations, provided by the Deutscher Wetterdienst, Hamburg. The final inclusion of all data in the PANGAEA information system ensures secure archiving, future updates, and widespread distribution in electronic, machine-readable form with long-term access via the Internet. To produce this publication, all data sets with their metadata were extracted from PANGAEA and organized in a directory structure on a CD together with a search capability.
Abstract:
Nowadays, a significant quantity of linguistic data is available on the Web. However, linguistic resources are often published in proprietary formats; as such, it can be difficult for them to interface with one another, and they end up confined in “data silos”. The creation of web standards for publishing data on the Web and of projects to create Linked Data has led to interest in resources that can be published using Web principles. One of the most important aspects of “Lexical Linked Data” is the sharing of lexica and machine-readable dictionaries. It is for this reason that the lemon format has been proposed, which we briefly describe. We then consider two resources that seem ideal candidates for the Linked Data cloud, namely WordNet 3.0 and Wiktionary, a large document-based dictionary. We discuss the challenges of converting both resources to lemon, and in particular, for Wiktionary, the challenge of processing the mark-up and handling inconsistencies and underspecification in the source material. Finally, we turn to the task of creating links between the two resources and present a novel algorithm for linking lexica as lexical Linked Data.
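For orientation, a minimal lemon lexical entry can be expressed as RDF. The sketch below uses rdflib with the public lemon namespace; the entry itself (a noun "cat" linked to a sense) and the WordNet URI scheme are purely illustrative.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

LEMON = Namespace("http://lemon-model.net/lemon#")
EX = Namespace("http://example.org/lexicon/")  # illustrative lexicon namespace

g = Graph()
g.bind("lemon", LEMON)

# A lexical entry with a canonical written form...
entry = EX.cat
g.add((entry, RDF.type, LEMON.LexicalEntry))
form = EX["cat#form"]
g.add((entry, LEMON.canonicalForm, form))
g.add((form, LEMON.writtenRep, Literal("cat", lang="en")))

# ...and a sense pointing at an external concept. The WordNet-style
# reference URI here is an assumption, not the project's actual URIs.
sense = EX["cat#sense1"]
g.add((entry, LEMON.sense, sense))
g.add((sense, LEMON.reference,
       URIRef("http://example.org/wordnet/synset-cat-noun-1")))

print(g.serialize(format="turtle"))
```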
Abstract:
Linked Data is not always published with a license. Sometimes a wrong license type is used, such as a license meant for software, or the license is not expressed in a standard, machine-readable manner. Yet Linked Data resources may be subject to intellectual property and database laws, may contain personal data subject to privacy restrictions, or may even contain important trade secrets. A proper declaration of which rights are held, waived or licensed is a must for the lawful use of Linked Data at its different granularity levels, from the single RDF statement to a dataset or a mapping. After comparing current practice with the actual needs, six research questions are posed.
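One standard, machine-readable way to declare rights at the dataset level is a dcterms:license triple attached to a VoID dataset description. A minimal sketch with rdflib follows; the dataset URI and the choice of license are illustrative.

```python
from rdflib import Graph, Namespace, URIRef
from rdflib.namespace import RDF, DCTERMS

VOID = Namespace("http://rdfs.org/ns/void#")

g = Graph()
g.bind("void", VOID)
g.bind("dcterms", DCTERMS)

dataset = URIRef("http://example.org/dataset/my-linked-data")  # illustrative
g.add((dataset, RDF.type, VOID.Dataset))
# Declare the license with a data-appropriate license rather than a
# software one; CC BY 4.0 is used here only as an example.
g.add((dataset, DCTERMS.license,
       URIRef("https://creativecommons.org/licenses/by/4.0/")))

print(g.serialize(format="turtle"))
```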
Abstract:
The application of Linked Data technology to the publication of linguistic data promises to facilitate the interoperability of these data and has led to the emergence of the so-called Linguistic Linked Data Cloud (LLD), in which linguistic data is published following the Linked Data principles. Three essential issues need to be addressed for such data to be easily exploitable by language technologies: i) appropriate machine-readable licensing information is needed for each dataset, ii) minimum quality standards for Linguistic Linked Data need to be defined, and iii) appropriate vocabularies for publishing Linguistic Linked Data resources are needed. We propose the notion of Licensed Linguistic Linked Data (3LD), in which different licensing models may co-exist, from totally open to more restrictive licenses, through to completely closed datasets.
Abstract:
This study presents a detailed contrastive description of the textual functioning of connectives in English and Arabic. Particular emphasis is placed on the organisational force of connectives and their role in sustaining cohesion. The description is intended as a contribution to a better understanding of the variations in the dominant tendencies for text organisation in each language. The findings are expected to be utilised for pedagogical purposes, particularly in improving EFL teaching of writing at the undergraduate level. The study is based on an empirical investigation of the phenomenon of connectivity and, for optimal efficiency, employs computer-aided procedures, particularly those adopted in corpus linguistics. One important methodological requirement was the establishment of two comparable and statistically adequate corpora, together with the design of software and the use of existing packages to carry out the basic analysis. Each corpus comprises ca 250,000 words of newspaper material, sampled in accordance with a specific set of criteria and assembled in machine-readable form prior to the computer-assisted analysis. A suite of programmes was written in SPITBOL to accomplish a variety of analytical tasks, and in particular to perform a battery of measurements intended to quantify the textual functioning of connectives in each corpus. Concordances and some word lists were produced using OCP. The results of this research confirm the existence of fundamental differences in text organisation between Arabic and English. This manifests itself in the way the textual operations of grouping and sequencing are performed, and in the intensity of the textual role of connectives in imposing linearity and continuity and in maintaining overall stability. Furthermore, computation of connective functionality and range of operationality has identified fundamental differences in the way favourable choices for text organisation are made and implemented.
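The SPITBOL and OCP tooling is historical; as a hint of what the quantitative measurements involve, here is a minimal modern sketch in Python (with an invented, much-reduced connective list) that counts connective frequencies and emits a rough KWIC-style concordance line for each hit.

```python
import re
from collections import Counter

# Illustrative subset only; the study's actual inventory of English and
# Arabic connectives is far larger.
CONNECTIVES = {"however", "therefore", "moreover", "thus", "but"}

def connective_stats(text, context=30):
    """Count connectives and build simple keyword-in-context lines."""
    counts = Counter()
    concordance = []
    for match in re.finditer(r"[A-Za-z']+", text):
        word = match.group().lower()
        if word in CONNECTIVES:
            counts[word] += 1
            left = text[max(0, match.start() - context):match.start()]
            right = text[match.end():match.end() + context]
            concordance.append(f"...{left}[{word}]{right}...")
    return counts, concordance

sample = "The plan failed. However, the team persisted; thus a new plan emerged."
counts, lines = connective_stats(sample)
print(counts)    # Counter({'however': 1, 'thus': 1})
print(lines[0])
```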
Abstract:
Autonomic systems are required to adapt continually to changing environments and user goals. This process involves the real-time update of the system's knowledge base, which should therefore be stored in a machine-readable format and automatically checked for consistency. OWL ontologies meet both requirements, as they represent collections of knowledge expressed in first-order logic and feature embedded reasoners. To take advantage of these OWL ontology characteristics, this PhD project will devise a framework comprising a theoretical foundation, tools and methods for developing knowledge-centric autonomic systems. Within this framework, the knowledge storage and maintenance roles will be fulfilled by a specialised class of OWL ontologies.
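A small sketch of the intended usage pattern, using the owlready2 Python library (the ontology content is invented for illustration, and running the reasoner additionally requires a Java runtime, since owlready2 ships the HermiT reasoner):

```python
from owlready2 import Thing, get_ontology, sync_reasoner

onto = get_ontology("http://example.org/autonomic.owl")  # illustrative IRI

with onto:
    class Component(Thing): pass
    class FaultyComponent(Component): pass

# The autonomic system updates its knowledge base at run time...
c = FaultyComponent("sensor_42")

# ...and checks it for consistency with an embedded OWL reasoner.
with onto:
    sync_reasoner()

print(list(onto.individuals()))  # -> [autonomic.sensor_42]
```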
Abstract:
UncertWeb is a European research project running from 2010 to 2013 that will realize the uncertainty-enabled model web. The assumption is that data services, in order to be useful, need to provide information about the accuracy or uncertainty of the data in a machine-readable form. Models taking these data as input should understand this and propagate errors through model computations, and quantify and communicate the errors or uncertainties generated by the models' approximations. The project will develop technology to realize this and provide demonstration case studies.
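The propagation idea can be sketched with a simple Monte Carlo scheme (the model and error magnitudes are invented for illustration, not UncertWeb components): input uncertainty is represented as a distribution, pushed through the model, and summarised for the output.

```python
import random
import statistics

def model(x):
    """Toy linear model standing in for a model-web component."""
    return 2.5 * x + 1.0

def propagate(mean, stddev, n=10000):
    """Monte Carlo error propagation: sample the uncertain input,
    run the model, and report the output's mean and spread."""
    outputs = [model(random.gauss(mean, stddev)) for _ in range(n)]
    return statistics.mean(outputs), statistics.stdev(outputs)

# A measurement of 10.0 with standard uncertainty 0.5 (illustrative values).
out_mean, out_std = propagate(10.0, 0.5)
print(f"output ~ {out_mean:.2f} +/- {out_std:.2f}")  # approx. 26.00 +/- 1.25
```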
Abstract:
Master's Dissertation in Language Sciences, Faculdade de Ciências Humanas e Sociais, Universidade do Algarve, 2016
Abstract:
This thesis focuses on the history of the inflexional subjunctive and its functional substitutes in Late Middle English. To explore why and how the inflexional subjunctive declined in the history of the English language, I analysed 2653 examples of three adverbial clauses introduced by if (1882 examples), though (305 examples) and lest (466 examples). Using a corpus-based approach, this thesis argues that linguistic change in subjunctive constructions did not happen suddenly but rather gradually, that the way it changed was varied, and that different constructions changed at different speeds in different environments. It is well known that the inflexional subjunctive declined in the history of English, mainly because of inflexional loss. Strangely, however, this topic has been comparatively neglected in the scholarly literature, especially with regard to the Middle English period, probably due to the limitations of data and because the study of this development requires very cumbersome textual research. This thesis has derived and analysed its data from three large corpora in the public domain: the Middle English Grammar Corpus (MEG-C for short), the Innsbruck Computer Archive of Machine-Readable English Texts (ICAMET for short), and some selected texts from The Corpus of Middle English Prose and Verse, part of the Middle English Compendium that also includes the Middle English Dictionary. The data were analysed from three perspectives: 1) clausal type, 2) dialect, and 3) textual genre. The basic methodology was to analyse the examples one by one, with special attention paid to the peculiarities of each text. In addition, this thesis draws on some complementary, indeed overlapping, linguistic theories for further discussion: 1) Biber's multi-dimensional theory, 2) Ogura and Wang's (1994) S-curve or 'diffusion' theory, 3) Kretzschmar's (2009) linguistics of speech, and 4) Halliday's (1987) notion of language as a dynamic open system. To summarise the outcomes of this thesis: 1) On variation between clausal types, it was shown that the distributional tendencies of verb types (subjunctive, indicative, modal) differ between the three adverbial clauses under consideration. 2) On variation between dialects, it was shown that the northern area, i.e. the so-called Great Scandinavian Belt, displays an especially high ratio of the inflexional subjunctive construction compared to the other areas. This thesis suggests that this result was caused by the influence of Norse, relating the finding to Samuels's (1989) argument that the present-tense -es ending in the northern dialect was introduced under Scandinavian influence. 3) On variation between genres, the genres labelled Science, Documents and Religion display a relatively high ratio of the inflexional subjunctive, while Letter, Romance and History show a relatively low ratio. These results are explained by Biber's multi-dimensional theory, which shows that the inflexional subjunctive can be related to the factors 'informational', 'non-narrative', 'persuasive' and 'abstract'. 4) Lastly, on the inflexional subjunctive in Late Middle English, this thesis concludes that the change did not happen suddenly but gradually, and that the way language changes varies.
Thus the inflexional subjunctive did not disappear suddenly from England; there was a time lag among clausal types, dialects and genres, which can be related to Ogura and Wang's S-curve ('diffusion') theory and Kretzschmar's view of the 'linguistic continuum'. This thesis has shown that the issues surrounding the inflexional subjunctive are quite complex, so that research in this area requires not only textual analysis but also theoretical analysis, considering both intra- and extra-linguistic factors.
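Ogura and Wang's S-curve can be made concrete with a logistic function; the sketch below fits such a curve to the declining share of subjunctive forms using scipy. The time points and proportions are invented for illustration, not the thesis's data.

```python
import numpy as np
from scipy.optimize import curve_fit

def declining_logistic(t, k, t0):
    """Logistic S-curve for a form losing ground: share falls from 1 to 0,
    with steepness k and midpoint t0."""
    return 1.0 / (1.0 + np.exp(k * (t - t0)))

# Invented data: proportion of subjunctive (vs. indicative/modal) per period.
periods = np.array([1350, 1400, 1450, 1500, 1550])
share = np.array([0.85, 0.70, 0.45, 0.25, 0.12])

(k, t0), _ = curve_fit(declining_logistic, periods, share, p0=[0.02, 1450])
print(f"steepness k = {k:.3f}, midpoint ~ {t0:.0f}")
```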
Abstract:
Objective: to determine the association between teacher type (specialist vs. non-specialist in physical education) and students' physical activity level, the lesson content-context, and teacher behaviour. Method: a descriptive cross-sectional study in a district school in Bogotá. 57 physical education lessons and two teachers (one with an academic background in physical education) were assessed using the System for Observing Fitness Instruction Time (SOFIT). The observed variables were analysed with descriptive statistics, expressed in minutes and as proportions of the lesson. To establish the association between student gender and teacher type, t-tests for independent samples and the Mann-Whitney U test were used. Results: the average lesson lasted 82.7 minutes, 69% of the scheduled time; students spent most of the time standing, 29% (25 minutes); the predominant lesson content was of the general type, 21% (25 minutes); and the teachers spent on average 36% (29 minutes) of the lesson observing. Students spent 53% (44 minutes) in moderate-to-vigorous physical activity (MVPA). Boys were more active than girls (53.94% vs. 50.83%). A positive association was observed between gender and almost all of the students' physical activity levels (p<0.05). A statistically significant difference (p<0.05) was identified for the sitting and standing categories of the physical activity levels variable, both in minutes and as a proportion of lesson time, and for the walking category expressed as lesson time. For the content-context variable, an association was determined for the knowledge category, both as a proportion and in minutes, and for the general content category in the results expressed as a proportion of the lesson. Finally, the teacher behaviour variable, expressed both in minutes and as a proportion of the lesson, was statistically significant in all its categories except the promoting category. Conclusions: there is an important difference in how the two types of teachers conduct their lessons and in the physical activity levels in which they involve students. Physical education at school should be taught by professionals trained in the area, who have the skills and abilities needed to deliver quality physical education.
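The group comparisons described above correspond to standard two-sample tests; a minimal sketch with scipy follows (the observation values are invented, not the study's data).

```python
from scipy.stats import mannwhitneyu, ttest_ind

# Invented minutes of moderate-to-vigorous activity per lesson, by teacher type.
specialist = [46, 48, 43, 50, 45, 47]
non_specialist = [40, 38, 44, 39, 41, 42]

t_stat, t_p = ttest_ind(specialist, non_specialist)     # independent-samples t
u_stat, u_p = mannwhitneyu(specialist, non_specialist)  # Mann-Whitney U

print(f"t-test: t = {t_stat:.2f}, p = {t_p:.4f}")
print(f"Mann-Whitney: U = {u_stat:.1f}, p = {u_p:.4f}")
```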
Abstract:
The dissertation addresses the still unsolved challenges of source-based digital 3D reconstruction, visualisation and documentation in the domains of archaeology, art and architectural history. The emerging BIM methodology and the IFC exchange data format are changing the way of collaborating, visualising and documenting in the planning, construction and facility management process. The introduction and development of the Semantic Web (Web 3.0), spreading the idea of structured, formalised and linked data, offers semantically enriched human- and machine-readable data. In contrast to civil engineering and cultural heritage, academic object-oriented disciplines such as archaeology, art and architectural history act as outside spectators. Since the 1990s it has been argued that a 3D model is not likely to be considered a scientific reconstruction unless it is grounded in accurate documentation and visualisation; however, these standards are still missing and the validation of the outcomes is not fulfilled. Meanwhile, the digital research data remain ephemeral and continue to fill the growing digital cemeteries. This study therefore focuses on the evaluation of source-based digital 3D reconstructions and, especially, on uncertainty assessment in the case of hypothetical reconstructions of destroyed or never-built artefacts according to scientific principles, making the models shareable and reusable by a potentially wide audience. The work initially focuses on terminology and on the definition of a workflow, especially with regard to the classification and visualisation of uncertainty. The workflow is then applied to specific cases of 3D models uploaded to the DFG repository of the AI Mainz. In this way, the available methods of documenting, visualising and communicating uncertainty are analysed. In the end, this process leads to a validation or correction of the workflow and the initial assumptions, but also (in dealing with different hypotheses) to a better definition of the levels of uncertainty.
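One common way to document reconstruction uncertainty, and the kind of mapping such a classification workflow needs, is an ordinal uncertainty scale rendered as a colour per model element. A minimal sketch follows; the levels, notes and colours are invented for illustration, not a published standard.

```python
# Illustrative ordinal uncertainty scale for reconstructed elements.
UNCERTAINTY_SCALE = {
    1: ("directly documented by sources", "#1a9850"),      # green
    2: ("inferred from comparable artefacts", "#fee08b"),  # yellow
    3: ("conjectural / hypothetical", "#d73027"),          # red
}

def colour_elements(elements):
    """Attach a documentation note and a colour to each model element
    according to its assessed uncertainty level."""
    styled = {}
    for name, level in elements.items():
        note, colour = UNCERTAINTY_SCALE[level]
        styled[name] = {"level": level, "note": note, "colour": colour}
    return styled

# Hypothetical elements of a partially destroyed building model.
model = {"west_facade": 1, "roof_truss": 2, "lost_tower": 3}
for name, style in colour_elements(model).items():
    print(name, style["colour"], "-", style["note"])
```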