938 results for "Content-based image retrieval"


Relevance: 40.00%

Publisher:

Abstract:

This layer is a georeferenced raster image of the historic paper map entitled: Map of the State of Rhode Island and Providence Plantations. It was compiled and published by J.C. Thompson in 1887. Scale 1:95,000. Source map and image missing bottom panels, including part of title; description based partly on published bibliography. The image inside the map neatline is georeferenced to the surface of the earth and fit to the Rhode Island State Plane Coordinate System (Feet) (FIPS 3800). All map collar and inset information is also available as part of the raster image, including any inset maps, profiles, statistical tables, directories, text, illustrations, or other information associated with the principal map. This map shows features such as roads, railroads, drainage, public buildings, schools, churches, cemeteries, industry locations (e.g. mills, factories, mines, etc.), selected private residences, town and county boundaries and more. Relief shown by hachures. Includes population statistics from the census of 1875 and 1885. This layer is part of a selection of digitally scanned and georeferenced historic maps of New England from the Harvard Map Collection. These maps typically portray both natural and manmade features. The selection represents a range of regions, originators, ground condition dates, scales, and map purposes.
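
As a hedged illustration of what "fit to the Rhode Island State Plane Coordinate System (Feet) (FIPS 3800)" involves in practice, the sketch below loads a scanned GeoTIFF and warps it to EPSG:3438 (NAD83 / Rhode Island, US feet), which is assumed here to be the modern equivalent of that zone; the filename and the EPSG code are illustrative assumptions, not part of the catalog record.

```python
# Minimal sketch: warp a georeferenced map scan to an assumed Rhode Island
# State Plane (feet) definition. Filename and EPSG code are hypothetical.
import rasterio
from rasterio.warp import calculate_default_transform, reproject, Resampling

dst_crs = "EPSG:3438"  # assumed: NAD83 / Rhode Island (ftUS), FIPS zone 3800

with rasterio.open("thompson_1887_rhode_island.tif") as src:
    print("stored CRS:", src.crs)
    transform, width, height = calculate_default_transform(
        src.crs, dst_crs, src.width, src.height, *src.bounds)
    profile = src.profile.copy()
    profile.update(crs=dst_crs, transform=transform, width=width, height=height)
    with rasterio.open("thompson_1887_ri_stateplane.tif", "w", **profile) as dst:
        for band in range(1, src.count + 1):
            reproject(
                source=rasterio.band(src, band),
                destination=rasterio.band(dst, band),
                src_transform=src.transform,
                src_crs=src.crs,
                dst_transform=transform,
                dst_crs=dst_crs,
                resampling=Resampling.nearest)
```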

Relevance: 40.00%

Publisher:

Abstract:

This layer is a georeferenced raster image of the historic paper map entitled: Contour map of the Caribbean Sea 1885, prepared from data furnished by the U.S. Hydrographic Office, based on the deep-sea soundings of the U.S.C.S.Str. Blake and the U.S.F.Str. Albatross. It was published by the Museum of Comparative Zoology, 1894. Scale [ca. 1:7,300,000]. Covers the Caribbean Sea. The image inside the map neatline is georeferenced to the surface of the earth and fit to a non-standard 'World Polyconic' projection with the central meridian at 75 degrees west. All map collar and inset information is also available as part of the raster image, including any inset maps, profiles, statistical tables, directories, text, illustrations, index maps, legends, or other information associated with the principal map. This map shows features such as drainage, islands, shoreline features, and more. Relief shown by hachures. Depths shown by isolines and soundings. This layer is part of a selection of digitally scanned and georeferenced historic maps from the Harvard Map Collection and the Harvard University Library as part of the Open Collections Program at Harvard University project: Organizing Our World: Sponsored Exploration and Scientific Discovery in the Modern Age. Maps selected for the project correspond to various expeditions and represent a range of regions, originators, ground condition dates, scales, and purposes.
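
The "non-standard 'World Polyconic' projection with the central meridian at 75 degrees west" can be approximated with a custom PROJ definition. The sketch below is an assumption-laden illustration (modern WGS84 datum, generic +proj=poly), not the layer's actual registration parameters.

```python
# Minimal sketch: define an approximate 'World Polyconic' CRS centred on 75°W
# and project a sample point. Datum and sample coordinates are assumptions.
from pyproj import CRS, Transformer

world_polyconic = CRS.from_proj4("+proj=poly +lon_0=-75 +datum=WGS84 +units=m +no_defs")
to_polyconic = Transformer.from_crs("EPSG:4326", world_polyconic, always_xy=True)

# Roughly the centre of the Caribbean Sea (hypothetical sample point).
x, y = to_polyconic.transform(-75.0, 15.0)
print(x, y)
```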

Relevance: 40.00%

Publisher:

Abstract:

This layer is a georeferenced raster image of the historic paper map entitled: Seattle Harbor : Puget Sound Washington territory, issued May 1870 C.P. Patterson, superintendant; verification J.E. Hilgard, assistant in charge of the office; triangulation by J. S. Lawson assistant in 1874 based upon the primary triangulation by George Davidson, assistant in 1855-6; topography and hydrography by J.S. Lawson, assistant in 1874 & 5; resurvey of city of Seattle and water front by assist. J.J. Gilbert in 1886; additions by asst. Pratt in 1889; verifications of hydrology by Lieut. Comdr. W. H. Brownson U.S.N. inspector of hydrography. It was published by United States Coast and Geodetic Survey in July 1889. Scale 1:20,000. The image inside the map neatline is georeferenced to the surface of the earth and fit to the Washington State Plane North Coordinate System HARN NAD83 (in Feet) (Fipszone 4601). All map collar and inset information is also available as part of the raster image, including any inset maps, profiles, statistical tables, directories, text, illustrations, index maps, legends, or other information associated with the principal map. This map shows coastal features such as lighthouses, rocks, channels, points, coves, islands, bottom soil types, flats, wharves, and more. Includes also selected land features such as roads, railroads, drainage, land cover, selected buildings, towns, and more. Relief shown by contours and spot heights; depths by soundings. Includes notes, tables, and list of authorities. This layer is part of a selection of digitally scanned and georeferenced historic maps from The Harvard Map Collection as part of the Imaging the Urban Environment project. Maps selected for this project represent major urban areas and cities of the world, at various time periods. These maps typically portray both natural and manmade features at a large scale. The selection represents a range of regions, originators, ground condition dates, scales, and purposes.
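
For orientation, the sketch below converts a hypothetical point near the Seattle waterfront into the cited State Plane zone; EPSG:2926 is assumed here to correspond to NAD83(HARN) / Washington North (US feet), FIPS zone 4601.

```python
# Minimal sketch: longitude/latitude to assumed Washington North HARN feet.
from pyproj import Transformer

to_state_plane = Transformer.from_crs("EPSG:4326", "EPSG:2926", always_xy=True)
easting_ft, northing_ft = to_state_plane.transform(-122.34, 47.60)  # lon, lat (approx. Seattle)
print(round(easting_ft), round(northing_ft))
```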

Relevance: 40.00%

Publisher:

Abstract:

Thesis (M. S.)--University of Illinois at Urbana-Champaign.

Relevance: 40.00%

Publisher:

Abstract:

Provocative advertising is characterized by a deliberate attempt to gain attention through shock. This research investigates the reactions of individuals to a provocative appeal for a cause as opposed to a provocative advertisement for a standard consumer product, using mild erotica as the element of provocative imagery. An experiment using 391 adult subjects was conducted, and two analyses were performed. The first examined the effect of stimulus type (mildly erotic/nonerotic) by product category (cause appeal/consumer product) on attitude to the ad. The second examined the effect of stimulus type (mildly erotic/nonerotic) by cause (AIDS [acquired immunodeficiency syndrome]/SIDS [sudden infant death syndrome]) on corporate image. Both analyses also included gender as a third independent variable. The results suggest that people prefer mildly erotic ads generally, that an organization using mild erotica in appeals for a cause will be viewed more favorably where the erotica is congruent with the cause, and that women may be more responsive to mild erotica in cause appeals than are men.

Relevance: 40.00%

Publisher:

Abstract:

Information and content integration are believed to be a possible solution to the problem of information overload on the Internet. This article gives an overview of a simple solution for integrating information and content on the Web. Previous approaches to content extraction and integration are discussed, followed by the introduction of a novel, XML-processing-based technique for addressing these problems. The article includes lessons learned from handling changing webpage layouts, incompatibility with HTML standards, and the multiplicity of returned results. The method, which applies relative XPath queries over the DOM tree, proves more robust than previous approaches to Web information integration. Furthermore, the prototype implementation demonstrates a simplicity that enables non-professional users to easily adopt this approach in their day-to-day information management routines.
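
A minimal sketch of the idea (not the article's implementation): anchor on a stable container element and issue relative XPath queries over the parsed DOM, so that layout changes outside the anchored subtree do not break extraction. The HTML snippet and element names below are hypothetical.

```python
# Sketch: content extraction with relative XPath queries over a DOM tree.
from lxml import html

page = """<html><body>
  <div id="results">
    <div class="item"><h3>First title</h3><p>Snippet one</p></div>
    <div class="item"><h3>Second title</h3><p>Snippet two</p></div>
  </div>
</body></html>"""

tree = html.fromstring(page)
# Anchor on a stable container, then query relative to each result node;
# relative queries tolerate changes elsewhere in the page layout.
for item in tree.xpath('//div[@id="results"]//div[@class="item"]'):
    title = item.xpath('./h3/text()')     # relative to the current item
    snippet = item.xpath('./p/text()')
    print(title[0], "-", snippet[0])
```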

Relevance: 40.00%

Publisher:

Abstract:

Domain-specific information retrieval is increasingly in demand. Not only domain experts, but also average non-expert users are interested in searching domain-specific (e.g., medical and health) information in online resources. A typical problem for average users, however, is that the search results are usually a mixture of documents with different levels of readability. Non-expert users may want to see documents with higher readability at the top of the list, so the search results need to be re-ranked in descending order of readability. It is often impractical for domain experts to manually label the readability of documents in large databases, so computational models of readability need to be investigated. Traditional readability formulas, however, are designed for general-purpose text and are insufficient for the technical materials involved in domain-specific information retrieval, while more advanced approaches such as textual coherence models are computationally too expensive for re-ranking a large number of retrieved documents. In this paper, we propose an effective and computationally tractable concept-based model of text readability. In addition to the textual genres of a document, our model also takes into account domain-specific knowledge, i.e., how the domain-specific concepts contained in the document affect its readability. Three readability formulas are proposed and applied to health and medical information retrieval. Experimental results show that our proposed readability formulas yield remarkable improvements in correlation with users' readability ratings over four traditional readability measures.
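
As a rough illustration only (the paper's concept-based formulas are not reproduced here), the sketch below computes one traditional surface-level score, Flesch Reading Ease, alongside a toy concept-density term built from a hypothetical set of domain concepts; a concept-based model would combine signals of this kind.

```python
# Sketch: a traditional readability formula plus a toy domain-concept density.
import re

def count_syllables(word):
    # Crude heuristic: count vowel groups (at least one syllable per word).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / sentences) - 84.6 * (syllables / len(words))

def concept_density(text, domain_concepts):
    # Hypothetical: fraction of words that match a domain vocabulary
    # (e.g. terms drawn from a medical thesaurus).
    words = [w.lower() for w in re.findall(r"[A-Za-z]+", text)]
    return sum(w in domain_concepts for w in words) / max(1, len(words))

text = "Myocardial infarction is treated with thrombolysis."
concepts = {"myocardial", "infarction", "thrombolysis"}
print(flesch_reading_ease(text), concept_density(text, concepts))
```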

Relevance: 40.00%

Publisher:

Abstract:

Current image database metadata schemas require users to adopt a specific text-based vocabulary. Text-based metadata is good for searching but not for browsing. Existing image-based search facilities, on the other hand, are highly specialised and so suffer from similar problems. Wexelblat's semantic dimensional spatial visualisation schemas go some way towards addressing this problem by making both searching and browsing more accessible to the user within a single interface. But the question of how, and what, initial metadata to enter into a database remains. Different people see different things in an image and will organise a collection in equally diverse ways. However, we can find some similarity across groups of users regardless of their reasoning. For example, a search on Amazon.com also returns other products, based on an averaging of how users navigate the database. In this paper, we report on applying this concept to a set of images, which we have visualised using both traditional methods and the Amazon.com method. We report the findings of this comparative investigation in a case study setting involving a group of randomly selected participants. We conclude with the recommendation that, in combination, the traditional and averaging methods would enhance current database visualisation, searching, and browsing facilities.
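
A minimal sketch of the Amazon.com-style averaging the abstract refers to, assuming navigation logs recorded as per-session lists of viewed image IDs; the session data and identifiers below are hypothetical.

```python
# Sketch: "people who viewed this image also viewed..." from co-occurrence counts.
from collections import Counter, defaultdict
from itertools import combinations

sessions = [                      # hypothetical browsing sessions
    ["img_01", "img_07", "img_12"],
    ["img_07", "img_12", "img_33"],
    ["img_01", "img_12"],
]

co_views = defaultdict(Counter)
for session in sessions:
    for a, b in combinations(set(session), 2):
        co_views[a][b] += 1
        co_views[b][a] += 1

def also_viewed(image_id, k=3):
    """Images most often viewed in the same session as image_id."""
    return [img for img, _ in co_views[image_id].most_common(k)]

print(also_viewed("img_12"))
```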

Relevance: 40.00%

Publisher:

Abstract:

This paper discusses a document discovery tool based on formal concept analysis. The program allows users to navigate email using a visual lattice metaphor rather than a tree. It implements a virtual file structure over email in which files and entire directories can appear in multiple positions. The content and shape of the lattice formed by the conceptual ontology can assist in email discovery. The system described provides more flexibility in retrieving stored emails than is normally available in email clients. The paper discusses how conceptual ontologies can leverage traditional document retrieval systems.
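
A small sketch of the underlying formal concept analysis step, using a hypothetical email-by-keyword context: each formal concept pairs a set of emails (extent) with the attributes they all share (intent), and these pairs are the nodes of the navigation lattice.

```python
# Sketch: enumerate the formal concepts of a tiny email/keyword context.
from itertools import combinations

context = {                       # hypothetical emails and their keywords
    "mail1": {"project", "budget"},
    "mail2": {"project", "meeting"},
    "mail3": {"budget"},
}
all_attrs = set().union(*context.values())

def extent(attrs):
    """Emails carrying every attribute in attrs."""
    return {o for o, a in context.items() if attrs <= a}

def intent(objs):
    """Attributes shared by every email in objs."""
    return set.intersection(*(context[o] for o in objs)) if objs else set(all_attrs)

concepts = set()
for r in range(len(all_attrs) + 1):
    for combo in combinations(sorted(all_attrs), r):
        ext = extent(set(combo))
        concepts.add((frozenset(ext), frozenset(intent(ext))))

for ext, shared in sorted(concepts, key=lambda c: len(c[0])):
    print(sorted(ext), "<->", sorted(shared))
```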

Relevance: 40.00%

Publisher:

Abstract:

Obtaining wind vectors over the ocean is important for weather forecasting and ocean modelling. Several satellite systems used operationally by meteorological agencies utilise scatterometers to infer wind vectors over the oceans. In this paper we present the results of using novel neural network based techniques to estimate wind vectors from such data. The problem is partitioned into estimating wind speed and wind direction. Wind speed is modelled using a multi-layer perceptron (MLP) and a sum of squares error function. Wind direction is a periodic variable and a multi-valued function for a given set of inputs; a conventional MLP fails at this task, and so we model the full periodic probability density of direction conditioned on the satellite derived inputs using a Mixture Density Network (MDN) with periodic kernel functions. A committee of the resulting MDNs is shown to improve the results.
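
A hedged sketch of the density model only: wind direction is represented by a mixture of periodic (von Mises) kernels, with hand-fixed parameters standing in for the outputs a trained Mixture Density Network would produce from the scatterometer-derived inputs.

```python
# Sketch: conditional density of wind direction as a mixture of von Mises kernels.
import numpy as np
from scipy.special import i0  # modified Bessel function of the first kind, order 0

def von_mises_pdf(theta, mu, kappa):
    return np.exp(kappa * np.cos(theta - mu)) / (2 * np.pi * i0(kappa))

def mixture_density(theta, weights, mus, kappas):
    """p(direction | x) as a weighted sum of periodic kernels."""
    return sum(w * von_mises_pdf(theta, m, k)
               for w, m, k in zip(weights, mus, kappas))

# Two components roughly 180 degrees apart, mimicking the direction
# ambiguity typical of scatterometer data (illustrative values only).
weights = [0.6, 0.4]
mus = [np.deg2rad(45.0), np.deg2rad(225.0)]
kappas = [4.0, 4.0]

directions = np.linspace(-np.pi, np.pi, 8)
print(mixture_density(directions, weights, mus, kappas))
```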
