898 results for Digital Object Identifier (DOI)
Abstract:
Background: One of the main challenges for biomedical research lies in the computer-assisted integrative study of large and increasingly complex combinations of data in order to understand molecular mechanisms. The preservation of the materials and methods of such computational experiments with clear annotations is essential for understanding an experiment, and this is increasingly recognized in the bioinformatics community. Our assumption is that offering means of digital, structured aggregation and annotation of the objects of an experiment will provide the necessary metadata for a scientist to understand and recreate the results of an experiment. To support this, we explored a model for the semantic description of a workflow-centric Research Object (RO), where an RO is defined as a resource that aggregates other resources, e.g., datasets, software, spreadsheets, text, etc. We applied this model to a case study in which we analysed human metabolite variation by workflows. Results: We present the application of the workflow-centric RO model to our bioinformatics case study. Three workflows were produced following recently defined Best Practices for workflow design. By modelling the experiment as an RO, we were able to automatically query the experiment and answer questions such as “which particular data was input to a particular workflow to test a particular hypothesis?” and “which particular conclusions were drawn from a particular workflow?”. Conclusions: Applying a workflow-centric RO model to aggregate and annotate the resources used in a bioinformatics experiment allowed us to retrieve the conclusions of the experiment in the context of the driving hypothesis, the executed workflows and their input data. The RO model is an extendable reference model that can be used by other systems as well.
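As an illustration of the kind of query this enables, here is a minimal sketch in Python using rdflib, assuming the RO's aggregation and annotations are serialized as RDF; the file name, namespace and property names are illustrative placeholders, not the published RO vocabularies.

```python
from rdflib import Graph

# Load the RO's RDF annotations (file name and vocabulary are assumed
# placeholders, not the actual RO model's manifest or ontology terms).
g = Graph()
g.parse("research_object_manifest.ttl", format="turtle")

# "Which particular data was input to a particular workflow?"
query = """
PREFIX ex: <http://example.org/ro-vocab#>
SELECT ?data ?workflow WHERE {
    ?workflow a ex:Workflow ;
              ex:hasInput ?data .
}
"""
for data, workflow in g.query(query):
    print(f"{data} was input to {workflow}")
```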
Abstract:
The world's largest fossil oyster reef, formed by the giant oyster Crassostrea gryphoides and located in Stetten (north of Vienna, Austria), has been studied by Harzhauser et al. (2015, 2016) and Djuricic et al. (2016). Digital documentation of this unique geological site is provided by terrestrial laser scanning (TLS) at the millimeter scale. Obtaining meaningful results is not merely a matter of data acquisition with a suitable device; it requires proper planning, data management, and postprocessing. Terrestrial laser scanning technology has a high potential for providing the precise 3D mapping that serves as the basis for automatic object detection in different scenarios; however, it faces challenges in the presence of large amounts of data and the irregular geometry of an oyster reef. We provide a detailed description of the techniques and strategy used for data collection and processing in Djuricic et al. (2016). Laser scanning made it possible to measure surface points on an estimated 46,840 shells. The oyster specimens are up to 60 cm long, and their surfaces are modeled with a high accuracy of 1 mm. In addition to the laser scanning measurements, more than 300 photographs were captured, and an orthophoto mosaic was generated with a ground sampling distance (GSD) of 0.5 mm. This high-resolution 3D information and the photographic texture serve as the basis for ongoing and future geological and paleontological analyses. Moreover, they provide unprecedented documentation for conservation issues at a unique natural heritage site.
Abstract:
The dissertation addresses the still unsolved challenges of source-based digital 3D reconstruction, visualisation and documentation in the domains of archaeology, art history and architectural history. The emerging BIM methodology and the IFC data exchange format are changing the way collaboration, visualisation and documentation are carried out in the planning, construction and facility management process. The introduction and development of the Semantic Web (Web 3.0), spreading the idea of structured, formalised and linked data, offers semantically enriched human- and machine-readable data. In contrast to civil engineering and cultural heritage, academic object-oriented disciplines such as archaeology, art history and architectural history act as outside spectators. Since the 1990s, it has been argued that a 3D model is not likely to be considered a scientific reconstruction unless it is grounded in accurate documentation and visualisation. However, these standards are still missing, and the validation of the outcomes is not ensured. Meanwhile, the digital research data remain ephemeral and continue to fill the growing digital cemeteries. This study therefore focuses on the evaluation of source-based digital 3D reconstructions and, especially, on uncertainty assessment in the case of hypothetical reconstructions of destroyed or never-built artefacts according to scientific principles, making the models shareable and reusable by a potentially wide audience. The work initially focuses on terminology and on the definition of a workflow related especially to the classification and visualisation of uncertainty. The workflow is then applied to specific cases of 3D models uploaded to the DFG repository of the AI Mainz. In this way, the available methods of documenting, visualising and communicating uncertainty are analysed. In the end, this process leads to a validation or correction of the workflow and the initial assumptions, and also (where different hypotheses are involved) to a better definition of the levels of uncertainty.
Abstract:
Remotely sensed imagery has been widely used for land use/cover classification thanks to periodic data acquisition and the widespread use of digital image processing systems offering a wide range of classification algorithms. The aim of this work was to evaluate some of the most commonly used supervised and unsupervised classification algorithms under different landscape patterns found in Rondônia, including (1) areas of mid-size farms, (2) fish-bone settlements and (3) a gradient of forest and Cerrado (Brazilian savannah). Comparison with a reference map based on the kappa statistic resulted in good to superior indicators (best results: K-means, κ = 0.68, 0.77 and 0.64; MaxVer, κ = 0.71, 0.89 and 0.70, respectively, for the three areas mentioned). The results show that choosing a specific algorithm requires taking into account both its capacity to discriminate among the various spectral signatures under different landscape patterns and a cost/benefit analysis of the steps the operator must perform to produce a land cover/use map. It is suggested that a more systematic assessment of the implementation options for a specific project is needed before beginning a land use/cover mapping job.
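For reference, the kappa statistic used in such map comparisons is computed from a confusion matrix as sketched below; the matrix values are illustrative, not the study's data.

```python
import numpy as np

# Confusion matrix: rows = reference map classes, cols = classified classes.
# Values are illustrative only.
cm = np.array([[120, 10,  5],
               [  8, 95, 12],
               [  4,  9, 87]])

n = cm.sum()
po = np.trace(cm) / n                                  # observed agreement
pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2    # chance agreement
kappa = (po - pe) / (1 - pe)
print(f"kappa = {kappa:.2f}")
```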
Abstract:
We evaluated the performance of a novel procedure for segmenting mammograms and detecting clustered microcalcifications in two types of image sets obtained from the digitization of mammograms using either a laser scanner or a conventional "optical" scanner. Specific regions of the digital mammograms, in which clustered microcalcifications appeared or not, were identified and selected. A remarkable increase in image intensity was noticed in the images from the optical scanner compared with the original mammograms. A procedure based on a polynomial correction was developed to compensate for the differences between the characteristic curves of the scanners and those of the films. The processing scheme was applied to both sets, before and after the polynomial correction. The results clearly indicated the influence of mammogram digitization on the performance of processing schemes intended to detect microcalcifications. Without the polynomial intensity correction, the image processing techniques showed better sensitivity in detecting microcalcifications in the images from the laser scanner. However, when the polynomial correction was applied to the images from the optical scanner, no differences in performance were observed between the two types of images. (C) 2008 SPIE and IS&T [DOI: 10.1117/1.3013544]
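A minimal sketch of a polynomial correction of this kind, assuming characteristic-curve samples (e.g., from a step wedge) are available for both devices; the curve values and polynomial degree below are illustrative assumptions, not the paper's calibration data.

```python
import numpy as np

# Assumed characteristic-curve samples: gray level vs. film optical density
# for each digitizer (made-up values for illustration).
od = np.linspace(0.2, 3.0, 15)              # optical densities of a step wedge
gray_optical = 255 * np.exp(-0.9 * od)      # optical scanner response (assumed)
gray_reference = 255 * np.exp(-1.1 * od)    # reference film/laser response (assumed)

# Fit a polynomial mapping optical-scanner gray levels to reference levels.
coeffs = np.polyfit(gray_optical, gray_reference, deg=3)

def correct(image):
    """Apply the fitted polynomial correction pixel-wise."""
    return np.clip(np.polyval(coeffs, image), 0, 255)
```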
Abstract:
There is little empirical data about the impact of digital inclusion on cognition among older adults. This paper aimed at investigating the effects of a digital inclusion program on the cognitive performance of older individuals who participated in a computer learning workshop named "Idosos On-Line" (Elderly Online). Forty-two aged individuals participated in the research study: 22 completed the computer training workshop and 20 constituted the control group. All subjects answered a sociodemographic questionnaire and completed the revised Addenbrooke's Cognitive Examination (ACE-R), which examines five cognitive domains: orientation and attention, memory, verbal fluency, language, and visuo-spatial skills. The experimental group's cognitive performance improved significantly after the program, particularly in the language and memory domains, compared to the control group. These findings suggest that the acquisition of new knowledge and the use of a new tool that makes it possible to access the Internet may bring gains to cognition. (C) 2010 Elsevier Ireland Ltd. All rights reserved.
Abstract:
A new, simple method to design linear-phase finite impulse response (FIR) digital filters, based on the steepest-descent optimization method, is presented in this paper. Starting from the specification of the desired frequency response and a maximum approximation error, a nearly optimum digital filter is obtained. Tests have shown that this method is an alternative to traditional ones such as frequency sampling and Parks-McClellan, mainly when something other than a brick-wall frequency response is required as the desired response. (C) 2011 Elsevier Inc. All rights reserved.
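A minimal sketch of the general idea: steepest descent on the squared error between a type-I linear-phase FIR amplitude response and a desired response. The grid, desired response, step size and iteration count are illustrative choices, not the authors' exact formulation.

```python
import numpy as np

M = 16                                          # half-order; filter length 2*M + 1
grid = np.linspace(0, np.pi, 256)               # frequency grid
desired = (grid <= 0.4 * np.pi).astype(float)   # example: ideal lowpass

# Type-I amplitude response: A(w) = a[0] + sum_k a[k] * cos(k*w)
C = np.cos(np.outer(grid, np.arange(M + 1)))    # cosine basis, shape (256, M+1)
a = np.zeros(M + 1)

mu = 0.05                                       # step size
for _ in range(5000):
    err = C @ a - desired                       # approximation error on the grid
    a -= mu * (C.T @ err) / len(grid)           # steepest-descent update

# Recover the symmetric impulse response h[n], n = 0 .. 2*M
h = np.concatenate([a[:0:-1] / 2, [a[0]], a[1:] / 2])
```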
Abstract:
This research presents a method for frequency estimation in power systems using an adaptive filter based on the least mean squares (LMS) algorithm. To analyze a power system, the three-phase voltages were converted into a complex signal by applying the alpha-beta transform, and the result was used in an adaptive filtering algorithm. Although the use of the complex LMS algorithm is described in the literature, this paper deals with some practical aspects of its implementation. To reduce computing time, a coefficient generator was implemented. For validation of the algorithm, a power system was simulated using the ATP software. Many different situations were simulated for the performance analysis of the proposed methodology. The results were compared to a commercial relay for validation, showing the advantages of the new method. (C) 2009 Elsevier Ltd. All rights reserved.
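One well-known form of this approach is sketched below: the alpha-beta (Clarke) transform maps balanced three-phase voltages to a complex phasor, and a one-tap complex LMS predictor adapts toward exp(j*2*pi*f0/fs), from which the frequency is read off. The sampling rate, step size and test frequency are illustrative assumptions, not the paper's settings.

```python
import numpy as np

fs = 1000.0                       # sampling rate (Hz), assumed
f0 = 60.2                         # "unknown" system frequency for the test
t = np.arange(0, 0.5, 1 / fs)

# Balanced three-phase voltages
va = np.cos(2 * np.pi * f0 * t)
vb = np.cos(2 * np.pi * f0 * t - 2 * np.pi / 3)
vc = np.cos(2 * np.pi * f0 * t + 2 * np.pi / 3)

# alpha-beta transform -> complex signal v = v_alpha + j*v_beta
v_alpha = (2 * va - vb - vc) / 3
v_beta = (vb - vc) / np.sqrt(3)
v = v_alpha + 1j * v_beta

# One-tap complex LMS predictor: w converges to exp(j*2*pi*f0/fs)
mu = 0.01
w = 1.0 + 0j
for n in range(1, len(v)):
    e = v[n] - w * v[n - 1]           # prediction error
    w += mu * np.conj(v[n - 1]) * e   # complex LMS update

f_est = np.angle(w) * fs / (2 * np.pi)
print(f"estimated frequency: {f_est:.3f} Hz")
```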
Abstract:
A way of coupling digital image correlation (to measure displacement fields) and the boundary element method (to compute displacements and tractions along a crack surface) is presented herein. It allows for the identification of Young's modulus and of the fracture parameters associated with a cohesive model. The procedure is illustrated by analyzing the latter for ordinary concrete in a three-point bend test on a notched beam. In view of measurement uncertainties, the results are deemed trustworthy because numerous measurement points are accessible and used as entries to the identification procedure. (C) 2010 Elsevier Ltd. All rights reserved.
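As a toy sketch of the identification idea only: for a linear elastic model, displacements computed (e.g., by BEM) for a unit Young's modulus scale as 1/E, so E can be identified by least squares against measured displacements. The synthetic "measurements" below stand in for DIC data and are not the paper's results.

```python
import numpy as np

rng = np.random.default_rng(0)

# Model displacements computed for E = 1 (stand-in for a BEM solution).
u_unit = rng.uniform(0.5, 2.0, 200)

# Synthetic "DIC measurements": true E of 35 GPa plus measurement noise.
E_true = 35e9
u_meas = u_unit / E_true + rng.normal(0.0, 1e-12, 200)

# Least-squares estimate of 1/E from u_meas ~ u_unit / E.
inv_E = np.dot(u_unit, u_meas) / np.dot(u_unit, u_unit)
print(f"identified E = {1 / inv_E / 1e9:.1f} GPa")
```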
Abstract:
A Learning Object (LO) is any digital resource that can be reused to support learning with specific functions and objectives. LO specifications are commonly offered in the SCORM model without considering group activities. This deficiency is overcome by the solution presented in this paper, which specifies LOs for e-learning group activities based on the SCORM model. The solution allows the creation of dynamic objects that include content and software resources for collaborative learning processes. This results in a generalization of the LO definition and a contribution to e-learning specifications.
Abstract:
This paper reports on the design of digital control for wind turbines and its relation to the quality of power fed into the Brazilian grid upon connecting to it a 192 MW wind farm equipped with doubly fed induction generators. PWM converters are deployed as vector-controlled, regulated-current voltage sources for the generator rotors, providing independent control of both active and reactive power. Both speed control and active power control strategies are analyzed in the search for maximum efficiency in converting wind kinetic energy into electric power and for enhanced quality of the delivered power. (C) 2009 Elsevier B.V. All rights reserved.
Abstract:
The purpose of this article is to study the application of holographic interferometry techniques to the structural analysis of the submarine environment. These techniques are widely used today, with applications in many areas; nevertheless, their application in submarine environments presents some challenges. The application of two techniques, electronic speckle pattern interferometry (ESPI) and digital holography, is presented, along with a comparison of the advantages and disadvantages of each. A brief study is made of the influence of water properties and of the optical effects due to suspended particles, as well as of possible solutions to minimize these problems. (C) 2009 Elsevier Ltd. All rights reserved.
Abstract:
Green tapes of Li2O-ZrO2-SiO2-Al2O3 (LZSA) parent glass were produced by aqueous tape casting as the starting material for the laminated object manufacturing (LOM) process. The rheological behavior of the powder suspensions in aqueous media, as well as the mechanical properties of the cast tapes, was evaluated. According to zeta potential measurements, the LZSA glass powder particles showed acidic surface characteristics and an isoelectric point (IEP) of around 4 in aqueous media. The critical volume fraction of solids was about 72 wt% (27 vol%), which hindered the processability of more concentrated slurries. The glass particles also showed an anisometric profile, which contributed to increased interactions between particles during flow; therefore, the suspensions could not be processed at high solids loadings. The aqueous glass suspensions were also characterized by shear thickening after the addition of dispersants. Three slurry compositions were formulated, suitable green tapes were cast, and the tapes were successfully laminated by LOM into a gear wheel geometry. A higher tensile strength of the green tapes corresponded to a higher tensile strength of the laminates. Thermal treatment was then applied to the laminates: pyrolysis at 525 °C, sintering at 700 °C for 1 h, and crystallization at 850 °C for 30 min. A 20% volumetric shrinkage was observed, but no surface flaws or inhomogeneous areas were detected. The sintered part maintained its curved edges and internal profile after heat treatment.
Abstract:
Recently, the development of industrial processes brought about the emergence of technologically complex systems. This development generated the need for research on mathematical techniques capable of dealing with project complexity and validation. Fuzzy models have received particular attention in the area of nonlinear system identification and analysis due to their capacity to approximate nonlinear behavior and deal with uncertainty. A fuzzy rule-based model suitable for the approximation of many systems and functions is the Takagi-Sugeno (TS) fuzzy model. TS fuzzy models are nonlinear systems described by a set of if-then rules that give local linear representations of an underlying system; such models can approximate a wide class of nonlinear systems. In this paper, the performance of a TS fuzzy inference system for the calibration of electronic compass devices is analyzed. The contribution of the evaluated TS fuzzy inference system is to reduce the error obtained in data acquisition from a digital electronic compass. For reliable operation of the TS fuzzy inference system, adequate error measurements must be taken, and the error noise must be filtered before the TS fuzzy inference system is applied. The proposed method demonstrated an effectiveness of 57% in reducing the total error in the tests considered. (C) 2011 Elsevier Ltd. All rights reserved.
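For orientation, a minimal sketch of Takagi-Sugeno inference for a scalar input such as a raw compass heading is shown below; the rule centers, membership widths and local linear consequents are illustrative, not the paper's identified model.

```python
import numpy as np

centers = np.array([0.0, 90.0, 180.0, 270.0])   # rule premise centers (degrees)
sigma = 45.0                                     # Gaussian membership width

# Local linear consequents y_i = a_i * x + b_i, one per rule (assumed values).
a = np.array([1.00, 0.98, 1.02, 0.99])
b = np.array([0.0, 1.5, -2.0, 0.8])

def ts_inference(x):
    """TS output: firing-strength-weighted average of the local linear models."""
    w = np.exp(-0.5 * ((x - centers) / sigma) ** 2)  # rule firing strengths
    return np.sum(w * (a * x + b)) / np.sum(w)

print(ts_inference(120.0))   # corrected heading estimate for a raw reading
```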
Abstract:
Over-the-air download is an important feature of terrestrial digital television systems. It provides a cheaper option for DTV receiver manufacturers to deliver bug fixes and quality improvements to their products, allowing a shorter time to market. This paper proposes a mechanism for over-the-air download software updates in the Brazilian system. The mechanism was specified considering the Brazilian DTV over-the-air download specifications, but was extended with efficiency, reliability and user transparency as requirements for software updates. A proof of concept was implemented on a Linux-based set-top box. The mechanism is divided into five main functional parts: download scheduling, packet download, packet authentication, installation and error robustness. Analyses were conducted on the implementation considering the following criteria: download robustness and maximum download rate.
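An illustrative sketch of the packet-authentication step, assuming a simple digest check before installation; the digest algorithm and interfaces are assumptions, not the Brazilian DTV specification.

```python
import hashlib

def authenticate_packet(payload: bytes, expected_digest: str) -> bool:
    """Accept a packet only if its SHA-256 digest matches the expected one."""
    return hashlib.sha256(payload).hexdigest() == expected_digest

def install_update(packets, digests):
    """Install only when every packet authenticates; otherwise keep current firmware."""
    if all(authenticate_packet(p, d) for p, d in zip(packets, digests)):
        return "install"
    return "abort"   # error-robustness path
```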