981 results for rich text format
Abstract:
Mode of access: Internet.
Abstract:
Addressing the topic of communication media and their uses, this dissertation discusses the possible transformations in reading practices brought about by the emergence of new textual media based on digital technology. The study keeps its analytical scope on leisure reading practices established around printed books and electronic books (e-books) consumed on e-readers. Among other issues, it discusses the linearity or fragmentation of reading, habits of highlighting and annotating, reading locations and postures, and preferences among media. The data analysed were obtained through interviews with 16 readers divided into two groups, one of e-reader users and the other of print readers. The results were examined comparatively and show similarities and differences in the reading practices of the two groups, which can be associated with the new technology and the e-reading device as much as with reading intentions and motives, the format of the text, and the particularities of each reader.
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Pós-graduação em Educação Matemática - IGCE
Abstract:
A new version of the TomoRebuild data reduction software package is presented, for the reconstruction of scanning transmission ion microscopy tomography (STIMT) and particle induced X-ray emission tomography (PIXET) images. First, we review the state of the art of reconstruction codes available for ion beam microtomography. The code proposed here brings several advantages. It is a portable, multi-platform code, designed in C++ with well-separated classes for easier use and evolution. Data reduction is separated into distinct steps and the intermediate results may be checked if necessary. Although no additional graphic library or numerical tool is required to run the program from the command line, a user-friendly interface was designed in Java as an ImageJ plugin. All experimental and reconstruction parameters may be entered either through this plugin or directly in text format files. A simple standard format is proposed for the input of experimental data. Optional graphic applications using the ROOT interface may be used separately to display and fit energy spectra. Regarding the reconstruction process, the filtered backprojection (FBP) algorithm, already present in the previous version of the code, was optimized so that it is about 10 times faster. In addition, the Maximum Likelihood Expectation Maximization (MLEM) algorithm and its accelerated version, Ordered Subsets Expectation Maximization (OSEM), were implemented. A detailed user guide in English is available. A reconstruction example of experimental data from a biological sample is given. It shows the capability of the code to reduce noise in the sinograms and to deal with incomplete data, which puts a new perspective on tomography using a low number of projections or a limited angular range.
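As a rough illustration of the MLEM reconstruction step mentioned in this abstract, the short NumPy sketch below runs plain MLEM iterations on a toy system matrix; the matrix, sinogram and iteration count are invented for the example and are not taken from TomoRebuild, whose actual implementation is in C++.

import numpy as np

def mlem(A, y, n_iter=50, eps=1e-12):
    # A: (n_measurements, n_voxels) system matrix; y: measured sinogram.
    x = np.ones(A.shape[1])           # start from a uniform image
    sens = A.T @ np.ones(A.shape[0])  # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x                            # forward projection
        ratio = y / np.maximum(proj, eps)       # measured / estimated
        x *= (A.T @ ratio) / np.maximum(sens, eps)
    return x

# Tiny synthetic test: a 2-voxel "image" seen through 3 ray sums.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
x_true = np.array([2.0, 3.0])
y = A @ x_true
print(mlem(A, y, n_iter=200))   # converges towards [2. 3.]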
Abstract:
Visual recognition is a fundamental research topic in computer vision. This dissertation explores the datasets, features, learning methods, and models used for visual recognition. In order to train visual models and evaluate different recognition algorithms, this dissertation develops an approach to collect object image datasets from web pages by analysing the text around each image together with image appearance. This method exploits established online knowledge resources (Wikipedia pages for text; Flickr and Caltech data sets for images). These resources provide rich text and object appearance information. This dissertation describes results on two datasets. The first is Berg’s collection of 10 animal categories; on this dataset, we significantly outperform previous approaches. On an additional set of 5 categories, experimental results show the effectiveness of the method. Images are represented as features for visual recognition. This dissertation introduces a text-based image feature and demonstrates that it consistently improves performance on hard object classification problems. The feature is built using an auxiliary dataset of images annotated with tags, downloaded from the Internet. Image tags are noisy. The method obtains the text features of an unannotated image from the tags of its k-nearest neighbors in this auxiliary collection. A visual classifier presented with an object viewed under novel circumstances (say, a new viewing direction) must rely on its visual training examples, whereas this text feature may not change much, because the auxiliary dataset likely contains a similar picture. While the tags associated with images are noisy, they are more stable when appearance changes. The performance of this feature is tested on the PASCAL VOC 2006 and 2007 datasets. This feature performs well; it consistently improves the performance of visual object classifiers, and is particularly effective when the training dataset is small. As more training data are collected, computational cost becomes a bottleneck, especially when training sophisticated classifiers such as kernelized SVMs. This dissertation proposes a fast training algorithm called the Stochastic Intersection Kernel Machine (SIKMA). The proposed training method will be useful for many vision problems, as it can produce a kernel classifier that is more accurate than a linear classifier, and it can be trained on tens of thousands of examples in two minutes. It processes training examples one by one in sequence, so memory cost is no longer a bottleneck when processing large-scale datasets. This dissertation applies this approach to train classifiers of Flickr groups using many group training examples. The resulting Flickr group prediction scores can be used to measure the similarity between two images. Experimental results on the Corel dataset and a PASCAL VOC dataset show that the learned Flickr features perform better on image matching, retrieval, and classification than conventional visual features. Visual models are usually trained to best separate positive and negative training examples. However, when recognizing a large number of object categories, there may not be enough training examples for most objects, due to the intrinsic long-tailed distribution of objects in the real world. This dissertation proposes an approach that uses comparative object similarity.
The key insight is that, given a set of object categories that are similar and a set of categories that are dissimilar, a good object model should respond more strongly to examples from similar categories than to examples from dissimilar categories. This dissertation develops a regularized kernel machine algorithm that uses this category-dependent similarity regularization. Experiments on hundreds of categories show that our method yields significant improvements for categories with few or even no positive examples.
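As a rough sketch of the k-nearest-neighbour tag transfer described in this abstract, the Python example below builds a text feature for an unannotated image by averaging the tag vectors of its visual neighbours in an auxiliary tagged collection; the random descriptors, the tag matrix, and the choice of k are placeholders, not the dissertation's data or settings.

import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
aux_visual = rng.random((1000, 128))                       # visual descriptors of tagged images
aux_tags = (rng.random((1000, 50)) > 0.95).astype(float)   # binary image-tag matrix

knn = NearestNeighbors(n_neighbors=10).fit(aux_visual)

def text_feature(visual_descriptor):
    # Average the tag vectors of the k nearest visual neighbours.
    _, idx = knn.kneighbors(visual_descriptor.reshape(1, -1))
    return aux_tags[idx[0]].mean(axis=0)

query = rng.random(128)            # descriptor of an unannotated test image
txt = text_feature(query)          # a (50,) soft tag histogram that could be
print(txt.shape)                   # concatenated with the visual feature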
Abstract:
A compiled set of in situ data is important to evaluate the quality of ocean-colour satellite-data records. Here we describe the data compiled for the validation of the ocean-colour products from the ESA Ocean Colour Climate Change Initiative (OC-CCI). The data were acquired from several sources (MOBY, BOUSSOLE, AERONET-OC, SeaBASS, NOMAD, MERMAID, AMT, ICES, HOT, GeP&CO), span 1997 to 2012, and have a global distribution. Observations of the following variables were compiled: spectral remote-sensing reflectances, concentrations of chlorophyll a, spectral inherent optical properties and spectral diffuse attenuation coefficients. The data came from multi-project archives acquired via open internet services or from individual projects acquired directly from data providers. Methodologies were implemented for homogenisation, quality control and merging of all data. No changes were made to the original data, other than averaging of observations that were close in time and space, elimination of some points after quality control, and conversion to a standard format. The final result is a merged table designed for validation of satellite-derived ocean-colour products and available in text format. Metadata of each in situ measurement (original source, cruise or experiment, principal investigator) were preserved throughout the work and made available in the final table. Using all the data in a validation exercise increases the number of matchups and enhances the representativeness of different marine regimes. By making the metadata available, it is also possible to analyse each set of data separately. The compiled data are available at doi: 10.1594/PANGAEA.854832 (Valente et al., 2015).
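A minimal pandas sketch of the "average observations that are close in time and space" step described above; the file name, the column names and the 1-hour / 0.1-degree bins are assumptions for illustration, not the OC-CCI processing settings.

import pandas as pd

obs = pd.read_csv("insitu_observations.csv",   # hypothetical input file with
                  parse_dates=["time"])        # columns: time, lat, lon, chl_a,
                                               # source, cruise, pi

obs["time_bin"] = obs["time"].dt.floor("1h")
obs["lat_bin"] = (obs["lat"] / 0.1).round() * 0.1
obs["lon_bin"] = (obs["lon"] / 0.1).round() * 0.1

merged = (obs.groupby(["time_bin", "lat_bin", "lon_bin"])
             .agg(chl_a=("chl_a", "mean"),     # average the measurements
                  source=("source", "first"),  # keep the provenance metadata
                  cruise=("cruise", "first"),
                  pi=("pi", "first"))
             .reset_index())

merged.to_csv("merged_validation_table.txt", sep="\t", index=False)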
Abstract:
This dataset provides an inventory of thermo-erosional landforms and streams in three lowland areas underlain by ice-rich permafrost of the Yedoma-type Ice Complex at the Siberian Laptev Sea coast. It consists of two shapefiles per study region: one for the digitized thermo-erosional landforms and streams and one for the study area extent. Thermo-erosional landforms were manually digitized from topographic maps and satellite data as line features and subsequently analyzed in a Geographic Information System (GIS) using ArcGIS 10.0. The mapping included in particular thermo-erosional gullies and valleys as well as streams and rivers, since the development of all of these features potentially involved thermo-erosional processes. For the Cape Mamontov Klyk site, data from Grosse et al. [2006], which had been digitized from 1:100000 topographic map sheets, were clipped to the Ice Complex extent of Cape Mamontov Klyk, which excludes the hill range in the southwest with outcropping bedrock and rocky slope debris, coastal barrens, and a large sandy floodplain area in the southeast. The mapped features (streams, intermittent streams) were then visually compared with panchromatic Landsat-7 ETM+ satellite data (4 August 2000, 15 m spatial resolution) and panchromatic Hexagon data (14 July 1975, 10 m spatial resolution). Smaller valleys and gullies not captured in the maps were subsequently digitized from the satellite data. The criterion for mapping linear features as thermo-erosional valleys and gullies was their clear incision into the surface with visible slopes. Thermo-erosional features of the Lena Delta site were mapped on the basis of a Landsat-7 ETM+ image mosaic (2000 and 2001, 30 m ground resolution) [Schneider et al., 2009] and a Hexagon satellite image mosaic (1975, 10 m ground resolution) [G. Grosse, unpublished data] of the Lena River Delta within the extent of the Lena Delta Ice Complex [Morgenstern et al., 2011]. For the Buor Khaya Peninsula, data from Arcos [2012], which had been digitized based on RapidEye satellite data (8 August 2010, 6.5 m ground resolution), were supplemented with smaller thermo-erosional features digitized from the same RapidEye scene. The spatial resolution, acquisition date, time of day, and viewing geometry of the satellite data used may have influenced the identification of thermo-erosional landforms in the images. For Cape Mamontov Klyk and the Lena Delta, thermo-erosional features were digitized using both Hexagon and Landsat data; Hexagon provided higher resolution and Landsat provided the modern extent of features. An allowance of up to a few decameters was made for the lateral expansion of features between the Hexagon and Landsat acquisitions (between 1975 and 2000).
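A minimal geopandas sketch of how such a per-region pair of shapefiles could be summarised (total feature length and drainage density); the file names and the projected CRS are assumptions, not part of the dataset description.

import geopandas as gpd

features = gpd.read_file("thermo_erosion_lines.shp")   # digitized line features
extent = gpd.read_file("study_area_extent.shp")        # study area polygon

# Reproject to a metric CRS so lengths and areas come out in metres.
features = features.to_crs(epsg=32652)                 # UTM zone 52N (assumed)
extent = extent.to_crs(epsg=32652)

clipped = gpd.clip(features, extent)                   # keep features inside the extent
total_length_km = clipped.geometry.length.sum() / 1e3
area_km2 = extent.geometry.area.sum() / 1e6

print(f"total feature length: {total_length_km:.1f} km")
print(f"drainage density: {total_length_km / area_km2:.3f} km per km^2")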
Abstract:
This document describes a large set of benchmark problem instances for the Rich Vehicle Routing Problem. All files are supplied as a single compressed (zipped) archive containing the instances in XML format, an object-oriented model supplied in XSD format, documentation, and an XML parser written in Java to ease use.
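Since the instance schema is defined by the XSD shipped in the archive (not reproduced here), the Python sketch below only illustrates the general pattern of reading one XML instance with the standard library; every element and attribute name in it is a placeholder, not the benchmark's actual schema.

import xml.etree.ElementTree as ET

tree = ET.parse("instance_001.xml")          # hypothetical file name
root = tree.getroot()

nodes = []
for node in root.iter("node"):               # placeholder element name
    nodes.append({
        "id": node.get("id"),                            # placeholder attribute
        "x": float(node.findtext("cx", default="0")),    # placeholder children
        "y": float(node.findtext("cy", default="0")),
    })

print(f"parsed {len(nodes)} nodes from <{root.tag}>")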
Abstract:
We explore the impact of a latitudinal shift in the westerly wind belt over the Southern Ocean on the Atlantic meridional overturning circulation (AMOC) and on the carbon cycle for Last Glacial Maximum background conditions using a state-of-the-art ocean general circulation model. We find that a southward (northward) shift in the westerly winds leads to an intensification (weakening) of the AMOC of no more than 10%. This response of the ocean physics to shifting winds agrees with other studies starting from a preindustrial background climate, but the responsible processes are different. In our setup, changes in the AMOC appear to be pulled by upwelling in the south rather than pushed by downwelling in the north, opposite to what previous studies with a different background climate suggest. The net effects of the changes in ocean circulation lead to a rise in atmospheric pCO2 of less than 10 μatm for both northward and southward shifts in the winds. For northward shifted winds the zone of upwelling of carbon- and nutrient-rich waters in the Southern Ocean is expanded, leading to more CO2 outgassing to the atmosphere but also to an enhanced biological pump in the subpolar region. For southward shifted winds the upwelling region contracts around Antarctica, leading to less nutrient export northward and thus a weakening of the biological pump. These model results do not support the idea that shifts in the westerly wind belt play a dominant role in coupling the atmospheric CO2 rise and Antarctic temperature during deglaciation, as suggested by the ice core data.
Abstract:
It is a big challenge to acquire correct user profiles for personalized text classification, since users may be unsure when describing their interests. Traditional approaches to user profiling adopt machine learning (ML) to automatically discover classification knowledge from explicit user feedback describing personal interests. However, in many cases the accuracy of ML-based methods cannot be significantly improved because of the term independence assumption and the uncertainties associated with them. This paper presents a novel relevance feedback approach for personalized text classification. It applies data mining to discover knowledge from relevant and non-relevant text and constrains the specific knowledge with reasoning rules to eliminate some of the conflicting information. We also developed a Dempster-Shafer (DS) approach to utilise the specific knowledge to build high-quality data models for classification. Experimental results on Reuters Corpus Volume 1 and TREC topics show that the proposed technique achieves encouraging performance compared with state-of-the-art relevance feedback models.
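As a small illustration of the Dempster-Shafer machinery referred to above, the Python sketch below applies Dempster's rule of combination to two hypothetical mass functions over the frame {relevant, non-relevant}; it is not the paper's full DS classification model, and the numbers are made up.

from itertools import product

REL, NON = frozenset({"rel"}), frozenset({"non"})
THETA = REL | NON                                    # total ignorance

def combine(m1, m2):
    # Dempster's rule: conjunctive combination normalised by the conflict mass.
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb                      # mass assigned to the empty set
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

m_terms = {REL: 0.6, NON: 0.1, THETA: 0.3}           # evidence from term patterns
m_rules = {REL: 0.5, THETA: 0.5}                     # evidence from reasoning rules
print(combine(m_terms, m_rules))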
Abstract:
Our everyday environment is full of text, but this rich source of information remains largely inaccessible to mobile robots. In this paper we describe an active text spotting system that uses a small number of wide-angle views to locate putative text in the environment and then foveates and zooms onto that text in order to improve the reliability of text recognition. We present extensive experimental results obtained with a pan/tilt/zoom camera and a ROS-based mobile robot operating in an indoor environment.
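As a rough sketch of the foveation step described above, the Python example below converts the pixel centre of a detected text region in a wide-angle view into pan/tilt angles and a zoom factor for a pinhole-style PTZ camera; the image size, focal length and detection box are assumed values, not those of the system in the paper.

import math

IMG_W, IMG_H = 1920, 1080       # wide-angle image size in pixels (assumed)
FOCAL_PX = 1000.0               # focal length in pixels (assumed)

def foveate(box):
    # box = (x, y, w, h) of a putative text region in the wide-angle view.
    x, y, w, h = box
    cx, cy = x + w / 2.0, y + h / 2.0
    pan = math.degrees(math.atan2(cx - IMG_W / 2.0, FOCAL_PX))   # + = pan right
    tilt = math.degrees(math.atan2(cy - IMG_H / 2.0, FOCAL_PX))  # + = tilt down
    zoom = min(IMG_W / w, IMG_H / h)    # magnification to roughly fill the view
    return pan, tilt, zoom

print(foveate((1500, 300, 120, 40)))    # (pan, tilt, zoom) for an off-centre region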