912 results for digital tools
Abstract:
BrainMaps.org is an interactive high-resolution digital brain atlas and virtual microscope that is based on over 20 million megapixels of scanned images of serial sections of both primate and non-primate brains and that is integrated with a high-speed database for querying and retrieving data about brain structure and function over the internet. Complete brain datasets for various species, including Homo sapiens, Macaca mulatta, Chlorocebus aethiops, Felis catus, Mus musculus, Rattus norvegicus, and Tyto alba, are accessible online. The methods and tools we describe are useful for both research and teaching, and can be replicated by labs seeking to increase accessibility and sharing of neuroanatomical data. These tools offer the possibility of visualizing and exploring completely digitized sections of brains at a sub-neuronal level, and can facilitate large-scale connectional tracing, histochemical and stereological analyses.
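Virtual microscopes of this kind typically serve multi-gigapixel sections as a tile pyramid, so a viewer only fetches the tiles covering the current viewport at the current zoom level. The sketch below illustrates the underlying tile geometry only; the tile size and zoom convention are assumptions for illustration, not the BrainMaps.org API.

```python
# Minimal sketch of tile-pyramid math behind a virtual-microscope viewer.
# Assumptions (not the BrainMaps.org API): 256-px square tiles, level 0 is the
# full-resolution scan, and each higher level halves both axes.

TILE = 256  # assumed tile edge length in pixels

def tiles_for_viewport(x, y, width, height, level):
    """Return (col, row) indices of the tiles covering a viewport.

    x, y, width, height are in full-resolution pixel coordinates; `level`
    downsamples them by 2**level before mapping onto the tile grid.
    """
    scale = 2 ** level
    x0 = x // scale // TILE
    y0 = y // scale // TILE
    x1 = (x + width - 1) // scale // TILE
    y1 = (y + height - 1) // scale // TILE
    return [(col, row) for row in range(y0, y1 + 1)
                       for col in range(x0, x1 + 1)]

# Example: a 1024x768 on-screen window over a huge section, viewed at level 4.
print(tiles_for_viewport(120_000, 80_000, 1024 * 16, 768 * 16, level=4))
```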
Abstract:
New tools for editing digital images, music and films have opened up new possibilities for wider circles of society to engage in 'artistic' activities of varying quality. User-generated content has produced a plethora of new forms of artistic expression. One type of user-generated content is the mashup. Mashups are compositions that combine existing works, which are often protected by copyright, and transform them into new original creations. The European legislative framework has not yet reacted to the copyright problems provoked by mashups. Neither under the US fair use doctrine, nor under the strict corset of limitations and exceptions in Art 5(2)-(3) of the Copyright Directive (2001/29/EC), have mashups found room to develop in a safe legal environment. The contribution analyzes the current European legal framework and identifies its insufficiencies with regard to enabling a legal mashup culture. The comparison with the US fair use approach, in particular the parody defense, draws on a recent CJEU judgment as a comparative example. Finally, an attempt is made to suggest solutions for the European legislator, based on the policy proposals of the EU Commission's "Digital Agenda" and more recent policy documents (e.g. "On Content in the Digital Market", "Licenses for Europe"). In this context, a distinction is made between non-commercial mashup artists and the emerging commercial mashup scene.
Abstract:
Background: Monitoring alcohol use is important in numerous situations. Direct ethanol metabolites, such as ethyl glucuronide (EtG), have been shown to be useful tools for detecting alcohol use and documenting abstinence. For very frequent or continuous monitoring of abstinence, however, they lack practicability. Therefore, devices measuring ethanol itself might be of interest. This pilot study aims at elucidating the usability and accuracy of the cellular photo digital breathalyzer (CPDB) compared to self-reports in a naturalistic setting. Method: 12 social drinkers were included. Subjects used a CPDB 4 times daily, kept diaries of alcohol use, and submitted urine for EtG testing over a period of 5 weeks. Results: In total, the 12 subjects reported 84 drinking episodes. 1,609 breath tests were performed and 55 urine samples were collected for EtG testing. Of the 84 drinking episodes, the CPDB detected 98.8%. The compliance rate for breath testing was 96%. Of the 55 EtG tests submitted, 1 (1.8%) was positive. Conclusions: The data suggest that the CPDB device holds promise in detecting high, moderate, and low alcohol intake. It seems to have advantages compared to biomarkers and other monitoring devices. The participants' preference for the CPDB might explain the high compliance. Further studies including comparisons with biomarkers and transdermal devices are needed.
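As a quick sanity check of the figures reported above, the rates can be reproduced from the raw counts, assuming the 4-tests-per-day protocol ran for the full 5 weeks (35 days), which the abstract implies but does not state explicitly:

```python
# Reproducing the reported rates from the counts given in the abstract.
# Assumption: all 12 subjects were expected to test 4 times daily for 35 days.

subjects, tests_per_day, days = 12, 4, 35
expected_tests = subjects * tests_per_day * days        # 1680
performed_tests = 1609

episodes_reported = 84
episodes_detected = 83                                   # 98.8% of 84, rounded

print(f"compliance: {performed_tests / expected_tests:.1%}")       # ~95.8%, reported as 96%
print(f"detection:  {episodes_detected / episodes_reported:.1%}")  # 98.8%
```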
Abstract:
Following the recent UNESCO Convention on the Protection and Promotion of the Diversity of Cultural Expressions, the first wave of scholarly work has focused on clarifying the interface between the Convention and the WTO Agreements. Building upon these analyses, the present article takes, however, a different stance. It seeks a new, rather pragmatic definition of the relationship between trade and culture and argues that such a re-definition is particularly needed in the digital networked environment, which has modified the ways markets for cultural content function and the ways in which cultural content is created, distributed and accessed. The article first explores the significance of the UNESCO Convention (or the lack thereof) and subsequently outlines a variety of ways in which the WTO framework can be improved in a 'neutral', not necessarily culturally motivated, manner to become more conducive to the pursuit of cultural diversity, taking into account the changed reality of digital media. The article also looks at other facets of the profoundly fragmented culture-related regulatory framework and underscores the critical importance of intellectual property rights and of other domains that appear at first sight peripheral to the trade and culture discussion, such as access to infrastructure, interoperability or net neutrality. It is argued that a number of feasible solutions exist beyond the politically charged confrontation of trade versus culture, and that the new digital media landscape may require a readjustment of the priorities and the tools for achieving the widely accepted objective of cultural diversity.
Abstract:
In the face of increasing globalisation, there is a pressing need for innovative trans-disciplinary analyses of the value of traditional cultural expressions (TCE) that also suggest appropriate protection mechanisms for them. The book to which this preface belongs combines approaches from history, philosophy, anthropology, sociology and law, and charts previously untravelled paths for developing new policy tools and legal designs that go beyond conventional copyright models. It also reflects upon the specific features of the digital environment, which, despite enhancing the risks of misappropriation of traditional knowledge and creativity, may equally offer opportunities for revitalising indigenous peoples' values and for ensuring the sustainability of TCE.
Abstract:
Propensity score (PS) techniques are useful if the number of potential confounding pretreatment variables is large and the number of analysed outcome events is rather small, so that conventional multivariable adjustment is hardly feasible. Only pretreatment characteristics should be chosen to derive the PS, and only if they are probably associated with the outcome. A careful visual inspection of the PS distributions will help to identify areas of no or minimal overlap, which suggests residual confounding, and trimming of the data according to the distribution of the PS will help to minimise residual confounding. Standardised differences in pretreatment characteristics provide a useful check of the success of the PS technique employed. As with conventional multivariable adjustment, PS techniques cannot account for confounding variables that are not or are only imperfectly measured, and no PS technique is a substitute for an adequately designed randomised trial.
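A minimal sketch of the workflow described above, assuming a simple logistic-regression PS model on synthetic data rather than any particular study: estimate the PS from pretreatment covariates only, trim to the region of PS overlap, and compute standardised differences as a balance check.

```python
# Sketch: propensity-score estimation, trimming to the overlap region, and
# standardised differences of pretreatment covariates. Synthetic data only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, p = 2000, 5
X = rng.normal(size=(n, p))                              # pretreatment covariates
treated = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))    # treatment depends on X

# 1. Estimate the propensity score from pretreatment covariates only.
ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]

# 2. Trim to the region where the treated and control PS distributions overlap.
lo = max(ps[treated == 1].min(), ps[treated == 0].min())
hi = min(ps[treated == 1].max(), ps[treated == 0].max())
keep = (ps >= lo) & (ps <= hi)

# 3. Standardised difference of each covariate in the trimmed sample
#    (|d| < 0.1 is a common rule of thumb for acceptable balance).
def std_diff(x, t):
    m1, m0 = x[t == 1].mean(), x[t == 0].mean()
    s = np.sqrt((x[t == 1].var(ddof=1) + x[t == 0].var(ddof=1)) / 2)
    return (m1 - m0) / s

for j in range(p):
    print(f"covariate {j}: d = {std_diff(X[keep, j], treated[keep]):+.3f}")
```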
Abstract:
The paper showcases the field- and lab-documentation system developed for Kinneret Regional Project, an international archaeological expedition to the Northwestern shore of the Sea of Galilee (Israel) under the auspices of the University of Bern, the University of Helsinki, Leiden University and Wofford College. The core of the data management system is a fully relational, server-based database framework, which also includes time-based and static GIS services, stratigraphic analysis tools and fully indexed document/digital image archives. Data collection in the field is based on mobile, hand-held devices equipped with a custom-tailored stand-alone application. Comprehensive three-dimensional documentation of all finds and findings is achieved by means of total stations and/or high-precision GPS devices. All archaeological information retrieved in the field – including tachymetric data – is synched with the core system on the fly and thus immediately available for further processing in the field lab (within the local network) or for post-excavation analysis at remote institutions (via the WWW). Besides a short demonstration of the main functionalities, the paper also presents some of the key technologies used and illustrates usability aspects of the system’s individual components.
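A toy sketch of the kind of relational finds registry with a field-to-lab sync step described above; the real Kinneret system is a full server-based framework with GIS and stratigraphy modules, so the single table, column names and values here are illustrative assumptions only.

```python
# Sketch: a toy relational registry for georeferenced finds with a sync flag,
# standing in for the server-based database plus hand-held client described above.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE finds (
        find_id   INTEGER PRIMARY KEY,
        locus     TEXT NOT NULL,          -- stratigraphic unit
        category  TEXT NOT NULL,          -- e.g. pottery, bone, coin
        x REAL, y REAL, z REAL,           -- total-station / GPS coordinates
        recorded  TEXT NOT NULL,          -- ISO timestamp from the hand-held device
        synced    INTEGER DEFAULT 0       -- 0 until pushed to the core system
    )
""")

# A record captured in the field on the mobile client (illustrative values)...
conn.execute(
    "INSERT INTO finds (locus, category, x, y, z, recorded) VALUES (?,?,?,?,?,?)",
    ("L1234", "pottery", 248_812.31, 752_109.88, -208.45, "2013-07-14T09:32:00"),
)

# ...and the sync step that makes it available for post-excavation analysis.
conn.execute("UPDATE finds SET synced = 1 WHERE synced = 0")
for row in conn.execute("SELECT find_id, locus, category, z, synced FROM finds"):
    print(row)
```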
Abstract:
This thesis covers a broad part of the field of computational photography, including video stabilization and image warping techniques, introductions to light field photography, and the conversion of monocular images and videos into stereoscopic 3D content. We present a user-assisted technique for stereoscopic 3D conversion from 2D images. Our approach exploits the geometric structure of perspective images, including vanishing points. We allow a user to indicate lines, planes, and vanishing points in the input image, and directly employ these as guides of an image warp that produces a stereo image pair. Our method is most suitable for scenes with large-scale structures such as buildings and is able to skip the step of constructing a depth map. Further, we propose a method to acquire 3D light fields using a hand-held camera, and describe several computational photography applications facilitated by our approach. As input we take an image sequence from a camera translating along an approximately linear path with limited camera rotations. Users can acquire such data easily in a few seconds by moving a hand-held camera. We convert the input images into a regularly sampled 3D light field by resampling and aligning them in the spatio-temporal domain. We also present a novel technique for high-quality disparity estimation from light fields. Finally, we show applications including digital refocusing and synthetic aperture blur, foreground removal, selective colorization, and others.
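Digital refocusing from such a 3D light field is commonly performed by a shift-and-add over the aligned views: each view is shifted in proportion to its camera position and the results are averaged. The sketch below is a generic illustration of that idea under stated assumptions, not the thesis' actual pipeline.

```python
# Sketch: shift-and-add refocusing over a 3D light field acquired along a
# roughly linear camera path. Generic illustration, not the thesis pipeline.
import numpy as np
from scipy.ndimage import shift

def refocus(views, positions, disparity_per_unit):
    """Average the views after shifting each one horizontally.

    views: list of HxW (or HxWx3) arrays, already resampled/aligned
    positions: 1-D camera positions along the path (same order as views)
    disparity_per_unit: pixels of shift per unit of camera travel; this value
        selects the depth plane that ends up in focus.
    """
    acc = np.zeros_like(views[0], dtype=float)
    for img, s in zip(views, positions):
        dx = disparity_per_unit * s
        shift_vec = (0, dx) if img.ndim == 2 else (0, dx, 0)
        acc += shift(img.astype(float), shift_vec, order=1, mode="nearest")
    return acc / len(views)

# Usage with synthetic data: 9 views along a unit-spaced, centred path.
views = [np.random.rand(64, 64) for _ in range(9)]
positions = np.arange(9) - 4
focused = refocus(views, positions, disparity_per_unit=1.5)
```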
Abstract:
A three-level satellite-to-ground monitoring scheme for conservation easement monitoring has been implemented in which high-resolution imagery serves as an intermediate step for inspecting high-priority sites. A digital vertical aerial camera system was developed to fulfill the need for an economical source of imagery for this intermediate step. A method for attaching the camera system to small aircraft was designed, and the camera system was calibrated and tested. To ensure that the images obtained were of suitable quality for use in Level 2 inspections, rectified imagery was required to provide positional accuracy of 5 meters or less to be comparable to current commercially available high-resolution satellite imagery. Focal length calibration was performed to determine the infinity focal length at two lens settings (24mm and 35mm) with a precision of 0.1mm. Known focal length is required for creation of navigation points representing locations to be photographed (waypoints). Photographing an object of known size at distances on a test range allowed estimates of focal lengths of 25.1mm and 35.4mm for the 24mm and 35mm lens settings, respectively. Constants required for distortion removal procedures were obtained using analytical plumb-line calibration procedures for both lens settings, with mild distortion at the 24mm setting and virtually no distortion found at the 35mm setting. The system was designed to operate in a series of stages: mission planning, mission execution, and post-mission processing. During mission planning, waypoints were created using custom tools in geographic information system (GIS) software. During mission execution, the camera is connected to a laptop computer with a global positioning system (GPS) receiver attached. Customized mobile GIS software accepts position information from the GPS receiver, provides information for navigation, and automatically triggers the camera upon reaching the desired location. Post-mission processing (rectification) of imagery for removal of lens distortion effects, correction of imagery for horizontal displacement due to terrain variations (relief displacement), and relating the images to ground coordinates were performed with no more than a second-order polynomial warping function. Accuracy testing was performed to verify the positional accuracy capabilities of the system in an ideal-case scenario as well as a real-world case. Using many well-distributed and highly accurate control points on flat terrain, the rectified images yielded median positional accuracy of 0.3 meters. Imagery captured over commercial forestland with varying terrain in eastern Maine, rectified to digital orthophoto quadrangles, yielded median positional accuracies of 2.3 meters, with accuracies of 3.1 meters or better in 75 percent of measurements made. These accuracies were well within performance requirements. The images from the digital camera system are of high quality, displaying significant detail at common flying heights. At common flying heights the ground resolution of the camera system ranges between 0.07 meters and 0.67 meters per pixel, satisfying the requirement that imagery be of comparable resolution to current high-resolution satellite imagery. Due to the high resolution of the imagery, the positional accuracy attainable, and the convenience with which it is operated, the digital aerial camera system developed is a potentially cost-effective solution for use in the intermediate step of a satellite-to-ground conservation easement monitoring scheme.
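The ground resolution range quoted above follows from the standard pinhole relation GSD = flying height × pixel pitch / focal length. The sketch below applies it using the calibrated focal lengths from the abstract; the sensor pixel pitch and flying heights are not given there, so the values used are assumptions for illustration only.

```python
# Sketch: ground sample distance (GSD) from flying height, focal length and
# pixel pitch. Pixel pitch and heights below are assumed values; the abstract
# reports resolutions of roughly 0.07-0.67 m/pixel at common flying heights.

def gsd_m(flying_height_m, focal_length_mm, pixel_pitch_um):
    """GSD in metres per pixel for a nadir-pointing camera over flat terrain."""
    return flying_height_m * (pixel_pitch_um * 1e-6) / (focal_length_mm * 1e-3)

pixel_pitch_um = 7.0                      # assumed sensor pixel pitch
for height_m in (300, 1000, 2400):        # assumed flying heights above ground
    for f_mm in (25.1, 35.4):             # calibrated focal lengths from the abstract
        res = gsd_m(height_m, f_mm, pixel_pitch_um)
        print(f"height {height_m} m, focal length {f_mm} mm -> {res:.2f} m/pixel")
```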
Abstract:
From 2005 to 2007, the University of Connecticut Libraries Copyright Project Team engaged in a wide range of activities to fulfill its charge and to raise awareness of copyright issues in the library and across the university. This article highlights some of the primary activities and tools used by the team to involve stakeholders, to provide educational opportunities, and to stay current on copyright issues in higher education. Among other activities, the team developed a new copyright web site for use by library staff and the broader university community.
Abstract:
Purpose – The purpose of this paper is to describe the tools and strategies that were employed by C/W MARS to successfully develop and implement the Digital Treasures digital repository. Design/methodology/approach – This paper outlines the planning and subsequent technical issues that arise when implementing a digitization project on the scale of a large, multi-type, automated library network. Workflow solutions addressed include synchronous online metadata record submissions from multiple library sources and the delivery of collection-level use statistics to participating library administrators. The importance of standards-based descriptive metadata and the role of project collaboration are also discussed. Findings – From the time of its initial planning, the Digital Treasures repository was fully implemented in six months. Discernible, statistically quantified online discovery and access of the digital objects greatly assisted libraries that were unsure whether the staffing costs of joining the repository would be outweighed by the benefits. Originality/value – This case study may serve as an example of initial planning, workflow and final implementation strategies for new repositories in both the general and library consortium environment. Keywords – Digital repositories, Library networks, Data management. Paper type – Case study
Abstract:
This article presents the progress of a Master's thesis in Information Technology Applied to Education at the Facultad de Informática of the UNLP, whose topic is "Digital accessibility for users with visual impairments and its relation to virtual learning environments". It addresses digital accessibility from the selected theoretical framework and outlines the axes of analysis within the framework of using technologies as tools that support cognition. The thesis proposal and its first results are presented. A first comparison is carried out, discussing the advantages and disadvantages of digital spaces when accessed through screen readers, which makes it possible to establish lines of future work.
Abstract:
This paper explains the methodology followed to teach the subject 'Digital control of power converters'. This subject belongs to the research master's programme in 'Industrial Electronics' at the Universidad Politécnica de Madrid. The subject consists of several theoretical lessons plus the development of an actual digital controller. For that purpose, an ad hoc dc-dc converter board has been designed and built. The use of this board together with some software tools seems to be a very powerful way for the students to carry the concepts from the design stage to the real world.
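As a hedged illustration of what developing a digital control for a dc-dc converter involves, a discrete PI voltage loop run against a crude averaged converter model might look like the sketch below. It is a generic example only; the gains, sample time and plant constants are assumptions, not the course hardware or code.

```python
# Sketch: discrete PI voltage-loop controller for a dc-dc converter, simulated
# against a first-order averaged plant model. Illustrative values throughout.

def simulate(v_ref=5.0, v_in=12.0, kp=0.05, ki=40.0, ts=20e-6, steps=2000):
    v_out, integ = 0.0, 0.0
    tau = 1e-3                                 # assumed output-filter time constant
    for _ in range(steps):
        err = v_ref - v_out
        integ += err * ts                      # integrator state
        duty = min(max(kp * err + ki * integ, 0.0), 1.0)   # duty cycle, saturated 0..1
        # first-order approximation of the averaged converter dynamics
        v_out += ts / tau * (duty * v_in - v_out)
    return v_out

print(f"steady-state output ~ {simulate():.3f} V")   # should settle near v_ref
```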
Abstract:
This project studies and analyses digital signal processing techniques applied to accelerometers. A DSP-based prototyping board is used to carry out the various tests. The project focuses mainly on digital filtering of signals from a specific accelerometer, the 1201F, whose main field of application is automotive. After studying the processing theory and the characteristics of the filters, we designed an application driven above all by the environment in which an application of this kind would be deployed. Throughout the design, the different phases are explained: computer-based design (Matlab), implementation of the filters on the DSP (C), tests on the DSP without the accelerometer, accelerometer calibration, and final tests with the accelerometer. The tools used are the Analog Devices 21-161N evaluation kit (equipped with the Visual DSP 4.5++ development environment), the 1201F accelerometer, the Spektra CS-18-LF accelerometer calibration system, and the software packages MATLAB 7.5 and CoolEditPRO 2.0. Only second-order IIR filters are implemented, of all types (Butterworth, Chebyshev I and II, and elliptic). We implement narrow-band, band-pass and band-stop filters of several kinds, within the full scale allowed by the accelerometer. Once all the tests, both simulated and physical, are complete, the filters showing the best performance are selected and analysed to draw conclusions. Since a suitable environment is available, the filters are combined with one another in several ways to obtain higher-order filters (parallel structure). In this way, starting from band-pass filters, we can obtain other configurations that give us greater flexibility. The goal of this project is not only to obtain good filtering results, but also to exploit the facilities of the environment and the available tools in order to produce the most efficient design possible.
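A minimal sketch of the kind of low-order IIR band-pass design and parallel combination described above, using scipy rather than the project's Matlab/Visual DSP toolchain; the sample rate and band edges are illustrative assumptions, not the project's actual values.

```python
# Sketch: Butterworth IIR band-pass filters and a parallel combination of two
# bands. Sample rate and band edges are assumptions for illustration only.
import numpy as np
from scipy.signal import butter, sosfilt

fs = 4000.0                                    # assumed sampling rate, Hz
sos_a = butter(2, [80, 120],  btype="bandpass", fs=fs, output="sos")
sos_b = butter(2, [300, 360], btype="bandpass", fs=fs, output="sos")

t = np.arange(0, 1.0, 1 / fs)
x = (np.sin(2 * np.pi * 100 * t)               # tone inside band A
     + np.sin(2 * np.pi * 330 * t)             # tone inside band B
     + np.sin(2 * np.pi * 900 * t))            # tone outside both bands

# Parallel structure: run the same signal through both band-passes and sum the
# outputs, giving a higher-order composite response from two simple sections.
y = sosfilt(sos_a, x) + sosfilt(sos_b, x)
print("input RMS:", np.sqrt(np.mean(x**2)).round(3),
      "output RMS:", np.sqrt(np.mean(y**2)).round(3))
```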