990 results for 0804 Data Format


Relevance: 80.00%

Abstract:

Introduction: The interchange of authority records requires establishing and using metadata standards such as the MARC 21 Format for Authority Data, a format used by several cataloging agencies, and the Metadata Authority Description Schema (MADS), which has received little attention and is not yet widespread among agencies. Purpose: To present an introductory study of the Metadata Authority Description Schema (MADS). Methodology: Descriptive and exploratory bibliographic research. Results: The paper addresses the context in which MADS was created, its goals and structure, and key issues related to the conversion of records from MARC 21 to MADS. Conclusions: The study concludes that, despite its limitations, MADS can be used to create simple authority records in the Web environment and beyond the library context.
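To make the record structure concrete, here is a minimal sketch (Python, standard library only) that builds a simple MADS-style authority record with one established name and one variant. The element names follow the published MADS 2.x schema, but the record content is illustrative and has not been validated against the schema.

```python
# Minimal sketch of a simple MADS-style authority record. Element names follow
# the MADS 2.x schema; the name values are illustrative, not a real record.
import xml.etree.ElementTree as ET

MADS_NS = "http://www.loc.gov/mads/v2"  # MADS 2.x namespace
ET.register_namespace("mads", MADS_NS)

def make_authority_record(established: str, variant: str) -> bytes:
    """Build a bare-bones MADS record with one authority and one variant name."""
    mads = ET.Element(f"{{{MADS_NS}}}mads")
    authority = ET.SubElement(mads, f"{{{MADS_NS}}}authority")
    name = ET.SubElement(authority, f"{{{MADS_NS}}}name", type="personal")
    ET.SubElement(name, f"{{{MADS_NS}}}namePart").text = established

    var = ET.SubElement(mads, f"{{{MADS_NS}}}variant")
    var_name = ET.SubElement(var, f"{{{MADS_NS}}}name", type="personal")
    ET.SubElement(var_name, f"{{{MADS_NS}}}namePart").text = variant

    return ET.tostring(mads, encoding="utf-8", xml_declaration=True)

print(make_authority_record("Assis, Machado de", "Machado de Assis").decode())
```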

Relevance: 80.00%

Abstract:

Thanks to current developments in online teaching (video platforms, MOOCs) on the one hand, and to a huge selection as well as simple production and distribution on the other, instructional videos enjoy great popularity as a means of conveying knowledge. Nevertheless, videos come with a decisive disadvantage that lies in the nature of the data format: searching a video for specific content, and the semantic preparation needed to link it automatically to further specific content, involve considerable effort. This hampers the learning-outcome-oriented selection of teaching segments and their arrangement into sequences tuned to the learning process. While watching a video, learners may have to sit through material they already know, or can skip it only by tedious manual seeking; the same problem arises when specific video sections are to be repeated. As a solution, a web application is presented that enables the semantic preparation of videos into adaptive teaching content: by integrating self-test exercises with defined follow-up actions, video sections can be skipped or repeated automatically on the basis of the learner's current knowledge, and external content can be linked. The approach presented thus builds on an extension of the behaviourist learning theory of Crowder's branched teaching programmes, in which sequences of learning units are adapted to the learner's progress. At the same time, regularly interspersed self-test exercises foster the learner's motivation and attention according to the rules of Skinner's programmed instruction and reinforcement theory. By explicitly marking up related sections within videos, the information they contain can additionally be made machine-readable, opening up further possibilities for finding and linking learning content.
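As a rough illustration of the branching idea, the sketch below (Python) models video segments with attached self-test questions and follow-up actions; all segment names, questions, and jump targets are hypothetical.

```python
# Minimal sketch of Crowder-style branching for video segments: after each
# segment a self-test decides whether to continue in order, repeat, or skip.
# Segment names, questions, and jump targets are hypothetical.
from dataclasses import dataclass

@dataclass
class Segment:
    name: str
    start: float                 # seconds into the video
    end: float
    quiz: str                    # self-test question shown after the segment
    answer: str
    on_pass: str | None = None   # jump target if answered correctly
    on_fail: str | None = None   # jump target (e.g. repeat) if answered wrongly

segments = {
    "intro": Segment("intro", 0, 90, "What is a codec?", "encoder/decoder",
                     on_pass="advanced"),                # known material: skip ahead
    "basics": Segment("basics", 90, 300, "...", "...",
                      on_fail="basics"),                 # not understood: repeat
    "advanced": Segment("advanced", 300, 600, "...", "..."),
}

def next_segment(current: Segment, user_answer: str) -> str | None:
    """Pick the next segment from the self-test outcome; None = continue in order."""
    correct = user_answer.strip().lower() == current.answer.lower()
    return current.on_pass if correct else current.on_fail
```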

Relevance: 80.00%

Abstract:

In this work we address a scenario where 3D content is transmitted to a mobile terminal with 3D display capabilities. We consider the use of the 2D-plus-depth format to represent the 3D content and focus on the generation of synthetic views in the terminal. We evaluate different types of smoothing filters that are applied to depth maps with the aim of reducing the disoccluded regions. The evaluation takes into account the reduction of holes in the synthetic view as well as the geometrical distortion caused by the smoothing operation. The selected filter has been integrated into a module implemented for the VideoLAN Client (VLC) software in order to render 3D content from the 2D-plus-depth data format.
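As a sketch of the smoothing step, the following Python fragment blurs a depth map with a Gaussian filter before a toy horizontal warp; the filter type, sigma, and the warp itself are illustrative stand-ins, not the filters evaluated in the paper.

```python
# Minimal sketch of depth-map smoothing before depth-image-based rendering:
# blurring softens sharp depth discontinuities, which shrinks disoccluded
# holes in the synthesized view at the cost of some geometric distortion.
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_depth(depth: np.ndarray, sigma: float = 5.0) -> np.ndarray:
    """Smooth a depth map (float array); sigma is illustrative."""
    return gaussian_filter(depth, sigma=sigma)

def warp_shift(image: np.ndarray, depth: np.ndarray, baseline: float) -> np.ndarray:
    """Toy horizontal warp: shift each pixel by a disparity inversely
    proportional to depth. Real DIBR also handles occlusion ordering
    and hole filling."""
    h, w = depth.shape
    out = np.zeros_like(image)
    disparity = (baseline / np.maximum(depth, 1e-6)).astype(int)
    for y in range(h):
        for x in range(w):
            xs = x + disparity[y, x]
            if 0 <= xs < w:
                out[y, xs] = image[y, x]
    return out
```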

Relevance: 80.00%

Abstract:

The Baltic Sea is a seasonally ice-covered, marginal sea in central northern Europe. It is an essential waterway connecting highly industrialised countries. Because ship traffic is intermittently hindered by sea ice, the local weather services have been monitoring sea ice conditions for decades. In the present study we revisit a historical monitoring data set, covering the winters 1960/1961 to 1978/1979. This data set, dubbed Data Bank for Baltic Sea Ice and Sea Surface Temperatures (BASIS) ice, is based on hand-drawn maps that were collected and then digitised in 1981 in a joint project of the Finnish Institute of Marine Research (today the Finnish Meteorological Institute (FMI)) and the Swedish Meteorological and Hydrological Institute (SMHI). BASIS ice was designed for storage on punch cards, and all ice information is encoded in five digits, which makes the data hard to access. Here we present a post-processed product based on the original five-digit code. Specifically, we convert the codes to standard ice quantities (including information on ice types), which we distribute in the current and free Network Common Data Format (NetCDF). Our post-processed data set will help to assess numerical ice models and provides easy-to-access, unique historical reference material for sea ice in the Baltic Sea. In addition, we provide statistics showcasing the data quality. The website http://www.baltic-ocean.org hosts the post-processed data and the conversion code.
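A post-processing step of the kind described might look like the following Python sketch, which decodes a five-digit code into named quantities and writes one of them to NetCDF. The digit layout used here is hypothetical; the actual BASIS code table is documented with the data set.

```python
# Minimal sketch: decode a five-digit punch-card ice code into named fields
# and write one field to NetCDF. The digit layout below is hypothetical.
import numpy as np
from netCDF4 import Dataset

def decode(code: int) -> dict:
    """Split a five-digit code into hypothetical fields."""
    digits = [int(d) for d in f"{code:05d}"]
    return {
        "concentration": digits[0] / 9.0,  # fraction of cover, say
        "thickness_class": digits[1],
        "ice_type": digits[2],
        # remaining digits omitted in this sketch
    }

codes = np.array([[30211, 50412], [0, 91815]])
conc = np.vectorize(lambda c: decode(c)["concentration"])(codes)

with Dataset("basis_ice_sketch.nc", "w") as nc:
    nc.createDimension("y", codes.shape[0])
    nc.createDimension("x", codes.shape[1])
    var = nc.createVariable("ice_concentration", "f4", ("y", "x"))
    var.units = "1"
    var[:] = conc
```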

Relevance: 80.00%

Abstract:

Background: The development of therapeutic interventions to prevent progressive valve damage is more likely to limit the progression of structural damage in the aortic valve with normal function (aortic sclerosis [ASC]) than in clinically apparent aortic stenosis. Currently, the ability to track the progression of ASC is compromised by the subjective and qualitative evaluation of sclerosis severity. Methods: We sought to determine whether the intensity of ultrasonic backscatter could be used to quantify sclerosis severity in 26 patients with ASC and 23 healthy young adults. Images of the aortic valve were obtained in the parasternal long-axis view and saved in raw data format. Six square 11 x 11 pixel regions of interest were placed on the anterior and posterior leaflets, and calibrated backscatter values were obtained by subtracting the backscatter of regions of interest in the blood pool from the averaged backscatter values obtained from the leaflets. Results: Mean ultrasonic backscatter values for sclerotic valves exceeded those for normal valve tissue (16.3 +/- 4.4 dB vs 9.8 +/- 3.3 dB, P < .0001). Backscatter values were greater still (22.0 +/- 3.5 dB) in a group of 6 patients with aortic stenosis. Within the sclerosis group, the magnitude of backscatter was directly correlated (P < .05) with a subjective sclerosis score and with the transvalvular pressure gradient. Mean reproducibility was 2.4 +/- 1.8 dB (SD) between observers and 2.3 +/- 1.7 dB (SD) between examinations. Conclusion: Measurement of backscatter from the valve leaflets of patients with ASC may be a feasible means of following the progression and treatment response of aortic sclerosis.
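The calibration step lends itself to a compact sketch: average the backscatter over the leaflet regions of interest and subtract the blood-pool reference. The Python below assumes a raw backscatter image already expressed in dB, and all ROI coordinates are hypothetical.

```python
# Minimal sketch of the calibrated-backscatter measurement: average the raw
# backscatter (dB) over 11x11-pixel leaflet ROIs and subtract the mean of a
# blood-pool ROI. ROI positions are hypothetical.
import numpy as np

def roi_mean(img: np.ndarray, y: int, x: int, size: int = 11) -> float:
    """Mean intensity of a size x size ROI with top-left corner (y, x)."""
    return float(img[y:y + size, x:x + size].mean())

def calibrated_backscatter(img_db: np.ndarray,
                           leaflet_rois: list[tuple[int, int]],
                           blood_roi: tuple[int, int]) -> float:
    """Leaflet backscatter calibrated against the blood pool, in dB."""
    leaflet = np.mean([roi_mean(img_db, y, x) for y, x in leaflet_rois])
    blood = roi_mean(img_db, *blood_roi)
    return float(leaflet - blood)

# Example: six leaflet ROIs (three per leaflet), one blood-pool reference.
img = np.random.default_rng(0).normal(20, 5, (400, 600))
print(calibrated_backscatter(img, [(100, 200), (100, 220), (100, 240),
                                   (140, 200), (140, 220), (140, 240)],
                             blood_roi=(250, 300)))
```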

Relevance: 80.00%

Abstract:

Firstly, we numerically model a practical 20 Gb/s undersea configuration employing the Return-to-Zero Differential Phase Shift Keying data format. The modelling is carried out using the Split-Step Fourier Method to solve the Generalised Nonlinear Schrödinger Equation. We optimise the dispersion map and per-channel launch power of these channels and investigate how the choice of pre-/post-compensation can influence performance. After obtaining these optimal configurations, we investigate the Bit Error Rate estimation of these systems and find that estimation based on Gaussian statistics of the electrical current is appropriate for systems of this type, indicating quasi-linear behaviour. The narrower pulses introduced by quasi-linear transmission decrease the tolerance to chromatic dispersion and intra-channel nonlinearity. We used tools from mathematical statistics to study the behaviour of these channels in order to develop new methods to estimate the Bit Error Rate. In the final section, we consider the estimation of the Eye Closure Penalty, a popular measure of signal distortion. Using a numerical example and assuming the symmetry of eye closure, we show that the Eye Closure Penalty can be estimated simply using Gaussian statistics. We also find that the statistics of the logical ones dominate the statistics of signal distortion in Return-to-Zero On-Off Keying configurations.
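The propagation engine referred to is the standard symmetric split-step Fourier scheme; a minimal Python sketch for the scalar NLSE (attenuation and higher-order terms omitted, parameter values illustrative) is shown below.

```python
# Minimal sketch of the symmetric split-step Fourier method for the scalar
# NLSE dA/dz = -i(beta2/2) d^2A/dT^2 + i gamma |A|^2 A (loss and higher-order
# terms omitted). Signs follow the usual fibre-optics convention; parameter
# values (beta2 in s^2/m, gamma in 1/(W m)) are illustrative.
import numpy as np

def ssfm(A0, dt, dz, n_steps, beta2=-21.7e-27, gamma=1.3e-3):
    """Propagate the field envelope A0 (complex array over time) n_steps * dz."""
    w = 2 * np.pi * np.fft.fftfreq(A0.size, d=dt)         # angular frequencies
    half_disp = np.exp(1j * (beta2 / 2) * w**2 * dz / 2)  # half dispersion step
    A = A0.astype(complex)
    for _ in range(n_steps):
        A = np.fft.ifft(half_disp * np.fft.fft(A))        # D/2
        A *= np.exp(1j * gamma * np.abs(A)**2 * dz)       # full nonlinear step
        A = np.fft.ifft(half_disp * np.fft.fft(A))        # D/2
    return A

# Example: a 10 ps Gaussian pulse on a 1 ns window, propagated 100 km.
t = np.linspace(-0.5e-9, 0.5e-9, 4096)
pulse = np.sqrt(1e-3) * np.exp(-t**2 / (2 * (10e-12) ** 2))
out = ssfm(pulse, dt=t[1] - t[0], dz=1e3, n_steps=100)
```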

Relevance: 80.00%

Abstract:

This document describes the core components for creating customizable game analytics and dashboards: their present status, links to their full designs and downloadable versions, and how to configure them and take advantage of the analytics visualizations and the underlying architecture of the platform. All the dashboard components work with data collected using the xAPI data format that the RAGE project has developed in collaboration with ADL Co-Lab.
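For readers unfamiliar with the format, a minimal xAPI statement is just a JSON object with actor, verb, and object fields. The sketch below uses a real ADL verb IRI, but the actor and activity shown are illustrative, not the RAGE project's actual vocabulary.

```python
# Minimal sketch of an xAPI statement of the kind such dashboards consume.
# The actor/verb/object structure is standard xAPI; the activity is made up.
import json

statement = {
    "actor": {
        "name": "Player 42",
        "mbox": "mailto:player42@example.org",
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "id": "http://example.org/games/demo/level-1",
        "definition": {"name": {"en-US": "Level 1"}},
    },
    "result": {"score": {"scaled": 0.85}, "success": True},
    "timestamp": "2016-05-01T12:00:00Z",
}

print(json.dumps(statement, indent=2))
```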

Relevance: 80.00%

Abstract:

Chapter 6 concerns 'Designing and developing digital and blended learning solutions'. Despite its title, however, it is not aimed at developing L&D professionals to be technologists (just as Chapter 3 is not aimed at developing them to be accounting and finance experts); it is about developing L&D professionals to be technology savvy. In doing so, I adopt a culinary analogy in presenting this chapter, where the most important factors in creating a dish (e.g. blended learning) are the ingredients and the flavour each of them brings. The chapter first explores the typical technologies and technology products available for learning and development, i.e. the ingredients. I then introduce the data Format, Interactivity/Immersion, Timing, Content (creation and curation), Connectivity and Administration (FITCCA) framework, which helps L&D professionals look beyond the labels of technologies to identify what a technology offers, its functions and features, analogous to the 'flavours' of the ingredients. The next section discusses some multimedia principles that are important for L&D professionals to consider in designing and developing digital learning solutions. Finally, whilst there are innumerable permutations of blended learning, the closing section focuses on the typical emphases in blended learning and how technology may support such blends.

Relevance: 80.00%

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-08

Relevance: 80.00%

Abstract:

Data replication is a mechanism to synchronize and integrate data between databases distributed over a computer network. It is an important tool in several situations, such as the creation of backup systems, load balancing between nodes, distribution of information across locations, and the integration of heterogeneous systems. Replication also reduces network traffic, because data remains available locally, even in the event of a temporary network failure. This thesis is based on the work carried out to develop a generic application for database replication, made available as open source software. The application allows for data integration between various systems, with particular focus on the integration of heterogeneous data, the fragmentation of data, replication in cascade, data format changes between replicas, and master/slave and multi-master synchronization.
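As an illustration of the simplest synchronization mode mentioned (master/slave), here is a minimal Python sketch that copies new rows between two SQLite databases. The table layout is hypothetical, and a real tool must also handle conflicts, deletions, schema mapping, and heterogeneous back-ends.

```python
# Minimal sketch of one-way (master -> slave) row replication between two
# SQLite databases. Assumes the master already has the table; the table and
# column names are hypothetical.
import sqlite3

def replicate(master_path: str, slave_path: str, table: str = "items") -> int:
    """Copy rows newer than the slave's high-water mark; return rows copied."""
    src = sqlite3.connect(master_path)
    dst = sqlite3.connect(slave_path)
    dst.execute(f"CREATE TABLE IF NOT EXISTS {table} (id INTEGER PRIMARY KEY, "
                "payload TEXT, updated_at INTEGER)")
    (mark,) = dst.execute(
        f"SELECT COALESCE(MAX(updated_at), 0) FROM {table}").fetchone()
    rows = src.execute(f"SELECT id, payload, updated_at FROM {table} "
                       "WHERE updated_at > ?", (mark,)).fetchall()
    dst.executemany(f"INSERT OR REPLACE INTO {table} VALUES (?, ?, ?)", rows)
    dst.commit()
    return len(rows)
```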

Relevance: 80.00%

Abstract:

The first topic analyzed in the thesis is Neural Architecture Search (NAS). I focus on two different tools that I developed: one to optimize the architecture of Temporal Convolutional Networks (TCNs), a recently emerged convolutional model for time-series processing, and one to optimize the data precision of tensors inside CNNs. The first NAS explicitly targets the most characteristic architectural parameters of TCNs, namely dilation, receptive field, and the number of features in each layer; it is the first NAS to explicitly target these networks. The second NAS instead focuses on finding the most efficient data format for a target CNN, at the granularity of the layer filter. Applying these two NASes in sequence allows an "application designer" to minimize the structure of the neural network employed, reducing the number of operations or the memory usage of the network. The second topic described is the optimization of neural network deployment on edge devices. Exploiting the scarce resources of edge platforms is critical for efficient NN execution on MCUs. To this end, I introduce DORY (Deployment Oriented to memoRY), an automatic tool to deploy CNNs on low-cost MCUs. In different steps, DORY can automatically manage the different levels of memory inside the MCU, offload the computation workload (i.e., the different layers of a neural network) to dedicated hardware accelerators, and automatically generate ANSI C code that orchestrates off- and on-chip transfers with the computation phases. On top of this, I introduce two optimized computation libraries that DORY can exploit to deploy TCNs and Transformers efficiently on the edge. I conclude the thesis with two applications to bio-signal analysis: heart rate tracking and sEMG-based gesture recognition.
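For context, the sketch below (PyTorch) shows the kind of TCN building block whose hyper-parameters such a NAS would search over: channel count, kernel size, and dilation, which together set the receptive field. The specific values are illustrative, not the thesis's search space.

```python
# Minimal sketch of a causal dilated TCN block; dilation controls how far
# back in time each layer looks, hence the receptive field.
import torch
import torch.nn as nn

class CausalDilatedBlock(nn.Module):
    def __init__(self, ch_in: int, ch_out: int, kernel: int, dilation: int):
        super().__init__()
        self.pad = (kernel - 1) * dilation      # left-pad to keep causality
        self.conv = nn.Conv1d(ch_in, ch_out, kernel, dilation=dilation)
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = nn.functional.pad(x, (self.pad, 0))  # pad only the past side
        return self.act(self.conv(x))

# A tiny TCN: stacking blocks with dilations 1, 2, 4 gives a receptive field
# of 1 + (k-1)*(1+2+4) samples for kernel size k.
tcn = nn.Sequential(
    CausalDilatedBlock(1, 16, kernel=3, dilation=1),
    CausalDilatedBlock(16, 16, kernel=3, dilation=2),
    CausalDilatedBlock(16, 16, kernel=3, dilation=4),
)
out = tcn(torch.randn(8, 1, 128))   # (batch, channels, time)
```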

Relevance: 80.00%

Abstract:

The dissertation addresses the still unsolved challenges of source-based digital 3D reconstruction, visualisation and documentation in the domains of archaeology and art and architecture history. The emerging BIM methodology and the IFC data exchange format are changing the way collaboration, visualisation and documentation happen in the planning, construction and facility management process. The introduction and development of the Semantic Web (Web 3.0), spreading the idea of structured, formalised and linked data, offers semantically enriched human- and machine-readable data. In contrast to civil engineering and cultural heritage, academic object-oriented disciplines such as archaeology and art and architecture history are acting as outside spectators. Since the 1990s, it has been argued that a 3D model is not likely to be considered a scientific reconstruction unless it is grounded on accurate documentation and visualisation; however, these standards are still missing and validation of the outcomes is not fulfilled. Meanwhile, the digital research data remain ephemeral and continue to fill the growing digital cemeteries. This study therefore focuses on the evaluation of source-based digital 3D reconstructions and, especially, on uncertainty assessment in the case of hypothetical reconstructions of destroyed or never-built artefacts according to scientific principles, making the models shareable and reusable by a potentially wide audience. The work initially focuses on terminology and on the definition of a workflow, especially as related to the classification and visualisation of uncertainty. The workflow is then applied to specific cases of 3D models uploaded to the DFG repository of the AI Mainz, and the available methods of documenting, visualising and communicating uncertainty are analysed. In the end, this process leads to a validation or correction of the workflow and the initial assumptions, and also (when dealing with different hypotheses) to a better definition of the levels of uncertainty.

Relevance: 40.00%

Abstract:

The HUPO Proteomics Standards Initiative has developed several standardized data formats to facilitate data sharing in mass spectrometry (MS)-based proteomics. These allow researchers to report their complete results in a unified way. However, at present, there is no format to describe the final qualitative and quantitative results of proteomics and metabolomics experiments in a simple tabular form. Many downstream analysis use cases are only concerned with the final results of an experiment and require an easily accessible format, compatible with tools such as Microsoft Excel or R. We developed the mzTab file format for MS-based proteomics and metabolomics results to meet this need. mzTab is intended as a lightweight supplement to the existing standard XML-based file formats (mzML, mzIdentML, mzQuantML), providing a comprehensive summary, similar in concept to the supplemental material of a scientific publication. mzTab files can contain protein, peptide, and small molecule identifications together with experimental metadata and basic quantitative information. The format is not intended to store the complete experimental evidence but provides mechanisms to report results at different levels of detail, ranging from a simple summary of the final results to a representation of the results including the experimental design. This format is ideally suited to make MS-based proteomics and metabolomics results available to a wider biological community outside the field of MS. Several software tools for proteomics and metabolomics have already adopted mzTab as an output format. The comprehensive mzTab specification document and extensive additional documentation can be found online.
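To give a feel for the format, the sketch below writes a few mzTab-style lines as plain TSV. The MTD/PRH/PRT section prefixes follow the mzTab specification, but the metadata and the single protein row shown are illustrative and far from a complete, valid file.

```python
# Minimal sketch of a few mzTab-style lines written as tab-separated text.
# Section prefixes (MTD = metadata, PRH = protein header, PRT = protein row)
# follow the mzTab specification; the content is illustrative only.
import csv, io

rows = [
    ["MTD", "mzTab-version", "1.0.0"],
    ["MTD", "mzTab-mode", "Summary"],
    ["MTD", "mzTab-type", "Identification"],
    ["PRH", "accession", "description", "taxid"],
    ["PRT", "P02768", "Serum albumin", "9606"],
]

buf = io.StringIO()
csv.writer(buf, delimiter="\t", lineterminator="\n").writerows(rows)
print(buf.getvalue())
```

Because the result is plain TSV, the PRT rows can be pulled straight into R or Excel, which is exactly the accessibility the format is designed for.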