898 results for Digital Object Identifier (DOI)
Abstract:
One of the biggest challenges of integrating research in TESOL with research in digital literacies is that the research methodologies of these two traditions have developed out of different ontological and epistemological assumptions about what is being researched (the object of study), where the research is located (the research site), and who is being researched (the research participants).
Abstract:
The identification of ground control on photographs or images is usually carried out by a human operator, who uses natural interpretation skills. In Digital Photogrammetry, which uses digital image processing techniques, the extraction of ground control can be automated through an approach based on relational matching and a heuristic that uses the analytical relation between straight features in object space and their homologues in image space. A built-in self-diagnosis is also used in this method. It is based on the implementation of the data-snooping statistical test in the spatial resection process using Iterated Extended Kalman Filtering (IEKF). The aim of this paper is to present the basic principles of the proposed approach and results based on real data.
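The self-diagnosis step lends itself to a compact illustration. Below is a minimal sketch of a data-snooping test applied to Kalman filter innovations, assuming a diagonal innovation covariance and a critical value of 3.29 (a two-sided normal test at alpha = 0.001); the function name, threshold, and toy numbers are illustrative, not the authors' implementation.

```python
# Minimal sketch of data-snooping blunder detection on Kalman filter
# innovations. The state model, threshold, and numbers are illustrative
# assumptions, not the paper's implementation.
import numpy as np

def data_snooping(innovation, S, critical=3.29):
    """Flag observations whose normalized innovation exceeds the critical
    value (3.29 corresponds to alpha = 0.001, two-sided normal test).

    innovation : residual vector v = z - h(x_predicted)
    S          : innovation covariance matrix
    """
    sigma = np.sqrt(np.diag(S))      # standard deviation of each innovation
    w = np.abs(innovation) / sigma   # normalized test statistic w_i
    return w > critical              # True where a blunder is suspected

# Example: the third observation carries a gross error.
v = np.array([0.01, -0.02, 0.35])
S = np.diag([0.01**2, 0.01**2, 0.01**2])
print(data_snooping(v, S))  # -> [False False  True]
```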
Abstract:
Physical parameters of different types of lenses were measured through digital speckle pattern interferometry (DSPI) using a multimode diode laser as the light source. When such lasers emit two or more longitudinal modes simultaneously, the speckle image of an object appears covered with contour fringes. By performing quantitative fringe evaluation, the radii of curvature as well as the refractive indexes of the lenses were determined. The quantitative fringe evaluation was carried out through the four- and eight-stepping techniques, and the branch-cut method was employed for phase unwrapping. With all these parameters, the focal length was calculated. This whole-field multi-wavelength method enables the characterization of spherical and aspherical lenses, both positive and negative. (C) 2007 Elsevier B.V. All rights reserved.
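The four-stepping evaluation mentioned above reduces to a closed-form expression for the wrapped phase. The sketch below assumes four frames with pi/2 phase shifts and synthetic intensity data; the array shapes and modulation values are illustrative.

```python
# Minimal sketch of four-step phase-stepping: four interferograms with
# pi/2 phase shifts yield the wrapped fringe phase. Synthetic data only.
import numpy as np

def four_step_phase(I1, I2, I3, I4):
    """Wrapped phase from four frames shifted by 0, pi/2, pi, 3*pi/2."""
    return np.arctan2(I4 - I2, I1 - I3)

# Synthetic check: recover a known phase map (kept inside (-pi, pi)
# to avoid wrapping at the endpoints).
phi = np.linspace(-3.0, 3.0, 256)[None, :] * np.ones((256, 1))
frames = [1.0 + 0.8 * np.cos(phi + k * np.pi / 2) for k in range(4)]
print(np.allclose(four_step_phase(*frames), phi))  # -> True
```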
Abstract:
Purpose: To determine palpebral dimensions and development in Brazilian children using digital images. Methods: An observational study was performed measuring eyelid angles, palpebral fissure area and interpupillary distance in 220 children aged 4 to 72 months. Digital images were obtained with a Sony Lithium movie camera (Sony DCR-TRV110, Brazil) in frontal view from awake children in the primary ocular position; the object of observation was located at pupil height. The images were saved to tape, transferred to a Macintosh G4 (Apple Computer Inc., USA) computer and processed using NIH 1.58 software (NTIS, 5285 Port Royal Rd., Springfield, VA 22161, USA). Data were submitted to statistical analysis. Results: All parameters studied increased with age. The outer palpebral angle was greater than the inner, and the palpebral fissure and angles showed greater changes between 4 and 5 months of age and at around 24 to 36 months. Conclusion: There are significant variations in palpebral dimensions in children under 72 months old, especially around 24 to 36 months. Copyright © 2006 Informa Healthcare.
Abstract:
Digital image processing has been applied in several areas, especially where tools are needed for feature extraction and for obtaining patterns from the images under study. In an initial stage, segmentation is used to separate the image into parts that represent an object of interest, which may then be used in a specific study. Several methods attempt to perform this task, but it is difficult to find one that adapts easily to different types of images, which are often very complex or specific. To address this problem, this project presents an adaptable segmentation method that can be applied to different types of images, providing better segmentation. The proposed method is based on a model of automatic multilevel thresholding and uses techniques of grouped histogram quantization, analysis of the histogram slope percentage, and calculation of maximum entropy to define the thresholds. The technique was applied to segment cell nuclei and potential tissue rejection in myocardial images from cardiac transplant biopsies. The results are significant in comparison with those provided by one of the best-known segmentation methods available in the literature. © 2010 IEEE.
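To make the entropy criterion concrete, here is a minimal single-threshold sketch in the spirit of maximum-entropy (Kapur-style) selection; the paper's multilevel scheme with grouped histogram quantization and slope analysis is more elaborate, and the histogram handling below is an illustrative assumption.

```python
# Minimal sketch of maximum-entropy threshold selection: pick the
# threshold that maximizes the summed entropies of the two classes.
import numpy as np

def max_entropy_threshold(image, bins=256):
    hist, _ = np.histogram(image, bins=bins, range=(0, bins))
    p = hist / hist.sum()                      # gray-level probabilities
    best_t, best_h = 0, -np.inf
    for t in range(1, bins - 1):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        p0, p1 = p[:t] / w0, p[t:] / w1        # class-conditional distributions
        h0 = -np.sum(p0[p0 > 0] * np.log(p0[p0 > 0]))
        h1 = -np.sum(p1[p1 > 0] * np.log(p1[p1 > 0]))
        if h0 + h1 > best_h:                   # maximize total entropy
            best_t, best_h = t, h0 + h1
    return best_t

# Example: bimodal synthetic data (dark background, bright "nuclei").
rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(60, 10, 5000), rng.normal(180, 15, 1000)])
img = np.clip(img, 0, 255)
print(max_entropy_threshold(img))  # threshold lands between the two modes
```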
Abstract:
Based on a literature review, electronic systems design largely employs a top-down methodology, which is vital for success in the synthesis and implementation of electronic systems. In this context, this paper presents a new computational tool, named BD2XML, to support electronic systems design. From a mixed-signal system block diagram, it generates object code in the XML markup language. XML is interesting because it offers great flexibility and readability. BD2XML was developed under the object-oriented paradigm. The AD7528 converter modeled in MATLAB/Simulink was used as a case study; MATLAB/Simulink was chosen as a target due to its wide adoption in academia and industry. This case study demonstrates the functionality of BD2XML and prompts a reflection on the design challenges. An automatic tool for electronic systems design therefore reduces design time and costs.
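As an illustration of the kind of output such a tool can produce, the sketch below emits an XML description from a small block-diagram model using Python's standard library; the element and attribute names are assumptions, since the abstract does not show the actual BD2XML schema.

```python
# Minimal sketch of emitting XML from a block-diagram model, in the
# spirit of BD2XML. Element/attribute names are illustrative assumptions.
import xml.etree.ElementTree as ET

blocks = [
    {"id": "dac1", "type": "AD7528", "params": {"bits": "8"}},
    {"id": "scope1", "type": "Scope", "params": {}},
]
connections = [("dac1", "scope1")]

root = ET.Element("system", name="mixed_signal_demo")
for b in blocks:
    el = ET.SubElement(root, "block", id=b["id"], type=b["type"])
    for k, v in b["params"].items():
        ET.SubElement(el, "param", name=k, value=v)
for src, dst in connections:
    ET.SubElement(root, "connection", source=src, target=dst)

print(ET.tostring(root, encoding="unicode"))
```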
Abstract:
Topographical surfaces can be represented with a good degree of accuracy by means of maps. However, these are not always the best tools for understanding more complex reliefs. In this sense, the main contribution of this work is to specify and implement the architecture of an open-source software system capable of representing TIN (Triangular Irregular Network) based digital terrain models. The system implementation follows the object-oriented and generic programming paradigms, enabling the integration of various open-source tools such as GDAL, OGR, OpenGL, OpenSceneGraph and Qt. Furthermore, the representation core of the system can work with multiple topological data structures, from which all the connectivity relations between the entities of a planar triangulation (vertices, edges and faces) can be extracted in constant time, which greatly aids the implementation of real-time applications. This is an important capability, for example, when using laser survey data (Lidar, ALS, TLS), allowing for the generation of triangular mesh models on the order of millions of points.
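A corner-table style structure gives a feel for how such constant-time connectivity queries work. The layout below is an illustrative assumption (the system supports several interchangeable topological data structures); it encodes two triangles sharing an edge.

```python
# Minimal sketch of a corner-table structure for a planar triangulation:
# O(1) queries between vertices, edges, and faces. Illustrative layout.
import numpy as np

# Two triangles sharing edge (1, 2): faces are triples of vertex indices.
faces = np.array([[0, 1, 2], [2, 1, 3]])

# opposite[c] is the corner facing corner c across the shared edge
# (-1 = border). Corner c belongs to face c // 3 and vertex faces.flat[c].
opposite = np.full(faces.size, -1)
opposite[0], opposite[5] = 5, 0   # corner at v0 faces corner at v3

def face_of(corner):            # O(1): face containing a corner
    return corner // 3

def vertex_of(corner):          # O(1): vertex at a corner
    return faces.flat[corner]

def adjacent_face(corner):      # O(1): neighbor across the opposite edge
    o = opposite[corner]
    return -1 if o < 0 else face_of(o)

print(adjacent_face(0))  # -> 1 (triangle across edge (1, 2))
```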
Abstract:
Information Science has as its object of study the general properties of information and the analysis of its construction, communication and use processes. Organic information, one of the types of information, is that recorded in archives, which can be split into two distinct groups based on their users: current and permanent archives, used by administrators and by historians/citizens, respectively. After defining the information behavior of each group, the article directs the discussion to the mediation of information in permanent archives. The interaction between users and information professionals through reference services aimed at meeting user needs is presented. In addition, the standards of archival description and research instruments as tools for the reference service are discussed. Moreover, the article argues for the importance of information technologies and the new possibilities they offer for the promotion of organic information in permanent archives, especially concerning the information architecture of websites and the conversion of archival description standards into DTDs.
Abstract:
In this letter, a semiautomatic method for road extraction in object space is proposed that combines a stereoscopic pair of low-resolution aerial images with a digital terrain model (DTM) structured as a triangulated irregular network (TIN). First, we formulate an objective function in the object space to allow the modeling of roads in 3-D. In this model, the TIN-based DTM allows the search for the optimal polyline to be restricted along a narrow band that is overlaid upon it. Finally, the optimal polyline for each road is obtained by optimizing the objective function using the dynamic programming optimization algorithm. A few seed points need to be supplied by an operator. To evaluate the performance of the proposed method, a set of experiments was designed using two stereoscopic pairs of low-resolution aerial images and a TIN-based DTM with an average resolution of 1 m. The experimental results showed that the proposed method worked properly, even when faced with anomalies along roads, such as obstructions caused by shadows and trees.
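The dynamic-programming step can be sketched compactly: each road vertex chooses among a few candidate positions across the narrow band, and the cheapest polyline is assembled stage by stage. The cost terms below (a per-candidate image cost plus a squared-displacement smoothness term) are illustrative stand-ins for the paper's objective function.

```python
# Minimal sketch of polyline optimization by dynamic programming.
# The cost function is an illustrative assumption.
import numpy as np

def optimal_polyline(candidates, unary_cost, smooth_weight=1.0):
    """candidates: (n_vertices, n_positions, 2) candidate coordinates.
    unary_cost:  (n_vertices, n_positions) image-based cost per candidate."""
    n, m, _ = candidates.shape
    cost = unary_cost[0].copy()
    back = np.zeros((n, m), dtype=int)
    for i in range(1, n):
        # pairwise smoothness: squared distance between consecutive choices
        d = ((candidates[i][None, :, :] - candidates[i - 1][:, None, :]) ** 2).sum(-1)
        total = cost[:, None] + smooth_weight * d        # (m_prev, m_cur)
        back[i] = total.argmin(axis=0)
        cost = total.min(axis=0) + unary_cost[i]
    # backtrack the optimal candidate index for each vertex
    path = [int(cost.argmin())]
    for i in range(n - 1, 0, -1):
        path.append(int(back[i][path[-1]]))
    return path[::-1]

# Tiny example: 3 vertices, 2 candidate positions each.
cands = np.array([[[0, 0], [0, 2]], [[1, 0], [1, 2]], [[2, 0], [2, 2]]], float)
unary = np.array([[0.0, 1.0], [0.0, 1.0], [0.0, 1.0]])
print(optimal_polyline(cands, unary))  # -> [0, 0, 0]
```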
Abstract:
We present and describe a catalog of galaxy photometric redshifts (photo-z) for the Sloan Digital Sky Survey (SDSS) Co-add Data. We use the artificial neural network (ANN) technique to calculate the photo-z and the nearest-neighbor error method to estimate photo-z errors for ~13 million objects classified as galaxies in the co-add with r < 24.5. The photo-z and photo-z error estimators are trained and validated on a sample of ~83,000 galaxies that have SDSS photometry and spectroscopic redshifts measured by the SDSS Data Release 7 (DR7), the Canadian Network for Observational Cosmology Field Galaxy Survey, the Deep Extragalactic Evolutionary Probe Data Release 3, the VIsible imaging Multi-Object Spectrograph-Very Large Telescope Deep Survey, and the WiggleZ Dark Energy Survey. For the best ANN methods we have tried, we find that 68% of the galaxies in the validation set have a photo-z error smaller than σ68 = 0.031. After presenting our results and quality tests, we provide a short guide for users accessing the public data.
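A minimal sketch of a nearest-neighbor error estimator of this kind follows: each galaxy's photo-z error is taken as the 68th percentile of |z_phot - z_spec| among its nearest neighbors in magnitude space. The value of k and the plain Euclidean metric are illustrative assumptions.

```python
# Minimal sketch of nearest-neighbor photo-z error estimation.
# k and the distance metric are illustrative assumptions.
import numpy as np

def nn_photoz_error(mags_train, resid_train, mags_query, k=100):
    """mags_*: (n, n_bands) magnitudes; resid_train: |z_phot - z_spec|."""
    errors = np.empty(len(mags_query))
    for i, m in enumerate(mags_query):
        d2 = ((mags_train - m) ** 2).sum(axis=1)      # squared distances
        nn = np.argpartition(d2, k)[:k]               # k nearest neighbors
        errors[i] = np.percentile(resid_train[nn], 68)
    return errors

# Synthetic example: residuals grow toward fainter magnitudes.
rng = np.random.default_rng(1)
mags = rng.uniform(18, 24.5, size=(5000, 5))
resid = np.abs(rng.normal(0, 0.01 + 0.01 * (mags[:, 2] - 18)))
print(nn_photoz_error(mags, resid, mags[:3]))
```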
Abstract:
Graphene, a monolayer of carbon atoms arranged in a honeycomb lattice, has only recently been isolated from graphite. This material shows very attractive physical properties, such as superior carrier mobility, current-carrying capability and thermal conductivity. In consideration of this, graphene has been the object of extensive investigation as a promising candidate for nanometer-scale devices in electronic applications. In this work, graphene nanoribbons (GNRs), narrow strips of graphene in which a band gap is induced by the quantum confinement of carriers in the transverse direction, have been studied. As experimental GNR-FETs are still far from ideal, mainly due to their large width and edge roughness, an accurate description of the physical phenomena occurring in these devices is required to obtain valuable predictions about the performance of these novel structures. A code has been developed for this purpose and used to investigate the performance of 1 to 15-nm-wide GNR-FETs. Given the importance of an accurate description of quantum effects in the operation of graphene devices, a full-quantum transport model has been adopted: the electron dynamics is described by a tight-binding (TB) Hamiltonian model, and transport is solved within the formalism of the non-equilibrium Green's functions (NEGF). Both ballistic and dissipative transport are considered, with electron-phonon interaction included in the self-consistent Born approximation. In consideration of their different energy band gaps, narrow GNRs are expected to be suitable for logic applications, while wider ones could be promising candidates as channel material for radio-frequency applications.
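The NEGF machinery can be illustrated on a toy system. The sketch below computes the ballistic transmission of a uniform 1D tight-binding chain with analytic lead self-energies; the paper treats 2-D GNR lattices with phonon self-energies, so everything here (chain geometry, hopping value) is an illustrative assumption. A perfect chain should transmit exactly one mode inside the band.

```python
# Minimal sketch of ballistic NEGF transport for a uniform 1D chain.
# Toy system, not the paper's 2-D GNR model.
import numpy as np

t = -2.7   # nearest-neighbor hopping (eV), a graphene-like value
N = 20     # number of device sites

def transmission(E):
    """T(E) = Tr[Gamma_L G Gamma_R G^dagger] for a uniform 1D chain."""
    H = np.diag(np.full(N - 1, t), 1) + np.diag(np.full(N - 1, t), -1)
    # Retarded self-energy of a semi-infinite 1D lead (analytic form),
    # valid inside the band |E| < 2|t|.
    sigma = (E - 1j * np.sqrt(4 * t**2 - E**2 + 0j)) / 2
    Sigma_L = np.zeros((N, N), complex); Sigma_L[0, 0] = sigma
    Sigma_R = np.zeros((N, N), complex); Sigma_R[-1, -1] = sigma
    G = np.linalg.inv(E * np.eye(N) - H - Sigma_L - Sigma_R)
    Gamma_L = 1j * (Sigma_L - Sigma_L.conj().T)
    Gamma_R = 1j * (Sigma_R - Sigma_R.conj().T)
    return np.trace(Gamma_L @ G @ Gamma_R @ G.conj().T).real

print(round(transmission(0.5), 6))  # -> 1.0 for a perfect chain in band
```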
Abstract:
The large, bunodont postcanine teeth in living sea otters (Enhydra lutris) have been likened to those of certain fossil hominins, particularly the 'robust' australopiths (genus Paranthropus). We examine this evolutionary convergence by conducting fracture experiments on extracted molar teeth of sea otters and modern humans (Homo sapiens) to determine how load-bearing capacity relates to tooth morphology and enamel material properties. In situ optical microscopy and X-ray imaging during simulated occlusal loading reveal the nature of the fracture patterns. Explicit fracture relations are used to analyze the data and to extrapolate the results from humans to earlier hominins. It is shown that the molar teeth of sea otters have considerably thinner enamel than those of humans, making sea otter molars more susceptible to certain kinds of fractures. At the same time, the base diameter of sea otter first molars is larger, diminishing the fracture susceptibility in a compensatory manner. We also conduct nanoindentation tests to map out elastic modulus and hardness of sea otter and human molars through a section thickness, and microindentation tests to measure toughness. We find that while sea otter enamel is just as stiff elastically as human enamel, it is a little softer and tougher. The role of these material factors in the capacity of dentition to resist fracture and deformation is considered. From such comparisons, we argue that early hominin species like Paranthropus most likely consumed hard food objects with substantially higher biting forces than those exerted by modern humans.
Abstract:
New digital artifacts are emerging in data-intensive science. For example, scientific workflows are executable descriptions of scientific procedures that define the sequence of computational steps in an automated data analysis, supporting reproducible research and the sharing and replication of best-practice and know-how through reuse. Workflows are specified at design time and interpreted through their execution in a variety of situations, environments, and domains. Hence it is essential to preserve both their static and dynamic aspects, along with the research context in which they are used. To achieve this, we propose the use of multidimensional digital objects (Research Objects) that aggregate the resources used and/or produced in scientific investigations, including workflow models, provenance of their executions, and links to the relevant associated resources, along with the provision of technological support for their preservation and efficient retrieval and reuse. In this direction, we specified a software architecture for the design and implementation of a Research Object preservation system, and realized this architecture with a set of services and clients, drawing together practices in digital libraries, preservation systems, workflow management, social networking and Semantic Web technologies. In this paper, we describe the backbone system of this realization, a digital library system built on top of dLibra.
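A Research Object aggregation can be pictured as a small typed manifest. The JSON layout and property names below are illustrative assumptions, loosely echoing OAI-ORE/RO vocabularies rather than the exact schema of the dLibra-based system.

```python
# Minimal sketch of a Research Object manifest aggregating a workflow,
# its provenance trace, and annotations. Layout and property names are
# illustrative assumptions, not the system's actual schema.
import json

research_object = {
    "@id": "ro/example-investigation/",
    "aggregates": [
        {"@id": "workflow.t2flow", "type": "Workflow"},
        {"@id": "runs/2013-01-15.prov.ttl", "type": "ProvenanceTrace"},
        {"@id": "data/input.csv", "type": "Dataset"},
    ],
    "annotations": [
        {"about": "workflow.t2flow",
         "content": "annotations/workflow-description.ttl"},
    ],
}

print(json.dumps(research_object, indent=2))
```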