929 results for Discovery and monitoring services


Relevance:

90.00%

Publisher:

Abstract:

The Tara Oceans Expedition (2009-2013) sampled the world oceans on board a 36 m long schooner, collecting environmental data and organisms from viruses to planktonic metazoans for later analyses using modern sequencing and state-of-the-art imaging technologies. Tara Oceans data are particularly suited to studying the genetic, morphological and functional diversity of plankton. The present data set provides continuous measurements made with a WETLabs Eco-FL sensor mounted on the flow-through system between June 4th, 2011 and March 30th, 2012. Data were recorded approximately every 10 s. Two issues affected the data: 1. periods when 0.2 µm filtered water was used as a blank, and 2. periods when fluorescence was affected by non-photochemical quenching (NPQ; chlorophyll fluorescence is reduced when cells are exposed to light, e.g. Falkowski and Raven, 1997). Median data and their standard deviation were binned into 5-minute bins, with periods of light/dark indicated by an added variable (so that NPQ-affected data can be excluded if the user so chooses). Data were first calibrated using HPLC data collected on the Tara (there were 36 data points within 30 min of each other); fewer were available when there was no evident NPQ, and the resulting scale factor was 0.0106 mg Chl m-3/count. To increase the calibration match-ups we used the AC-S data, which provided a robust estimate of chlorophyll (e.g. Boss et al., 2013). The scale factor computed over a much larger range of values than HPLC was 0.0088 mg Chl m-3/count (compared to 0.0079 mg Chl m-3/count based on the manufacturer's calibration). In the archived data the fluorometer data are merged with the TSG data; raw data are provided, as well as the manufacturer's calibration constants, the blank computed from filtered measurements, and chlorophyll calibrated using the AC-S. For a full description of the processing of the Eco-FL see Taillandier, 2015.
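As an aside for users of this data set, the calibration described above reduces to a linear conversion from raw counts to chlorophyll concentration. The sketch below assumes the standard WETLabs form chl = scale × (counts − blank); all numeric inputs other than the two published scale factors are hypothetical.

```python
# Minimal sketch (not the authors' processing code) of the calibration
# described above, assuming the standard WETLabs form:
#   chl [mg Chl m-3] = scale_factor * (raw_counts - blank_counts)
import numpy as np

SCALE_FACTOR = 0.0088  # mg Chl m-3 per count (AC-S based; 0.0079 per manufacturer)

def counts_to_chl(raw_counts, blank_counts, scale=SCALE_FACTOR):
    """Convert Eco-FL raw counts to chlorophyll concentration (mg Chl m-3)."""
    return scale * (np.asarray(raw_counts, dtype=float) - blank_counts)

# Hypothetical ~10 s samples within one 5-minute bin, and a hypothetical
# blank derived from the 0.2 um filtered-water measurements:
raw = np.array([312.0, 318.0, 305.0, 321.0])
chl = counts_to_chl(raw, blank_counts=50.0)
print(np.median(chl), chl.std())  # binned median and its spread
```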

Relevance:

90.00%

Publisher:

Abstract:

The Tara Oceans Expedition (2009-2013) sampled the world oceans on board a 36 m long schooner, collecting environmental data and organisms from viruses to planktonic metazoans for later analyses using modern sequencing and state-of-the-art imaging technologies. Tara Oceans data are particularly suited to studying the genetic, morphological and functional diversity of plankton. The present data set provides continuous measurements made with an FRRF instrument operating in flow-through mode during the 2009-2012 part of the expedition. It operates by exciting chlorophyll fluorescence using a series of short flashes of controlled energy and time intervals (Kolber et al., 1998). The fluorescence transients produced by this excitation signal were analysed in real time to provide estimates of the abundance of photosynthetic pigments, the photosynthetic yield (Fv/Fm), the functional absorption cross section (a proxy for the efficiency of photosynthetic energy acquisition), the kinetics of photosynthetic electron transport between Photosystem II and Photosystem I, and the size of the PQ pool. These parameters were measured at excitation wavelengths of 445 nm, 470 nm, 505 nm, and 535 nm, allowing the presence and photosynthetic performance of different phytoplankton taxa to be assessed based on the spectral composition of their light-harvesting pigments. The FRRF-derived photosynthetic characteristics were used to calculate the initial slope, the half saturation, and the maximum level of the photosynthesis vs. irradiance relationship. FRRF data were acquired continuously, at 1-minute intervals.
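The three quantities named at the end of the abstract (initial slope, half saturation, maximum) parameterize a P-I curve. The abstract does not state which model was fitted; the sketch below uses the common exponential form of Webb et al. (1974) purely for illustration, with hypothetical parameter values.

```python
# Illustrative P-I curve from the three FRRF-derived parameters above,
# using the assumed exponential model P(E) = Pmax * (1 - exp(-alpha*E/Pmax))
# (Webb et al., 1974); Ek = Pmax / alpha is the light-saturation
# (half-saturation) parameter.
import math

def p_vs_e(irradiance, alpha, p_max):
    """Photosynthetic rate at irradiance E (units follow alpha and p_max)."""
    return p_max * (1.0 - math.exp(-alpha * irradiance / p_max))

alpha, p_max = 0.05, 2.0      # hypothetical initial slope and maximum
e_k = p_max / alpha           # saturation irradiance
print(e_k, p_vs_e(e_k, alpha, p_max))  # P(Ek) ~ 0.63 * Pmax for this model
```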

Relevance:

90.00%

Publisher:

Abstract:

The Tara Oceans Expedition (2009-2013) sampled the world oceans on board a 36 m long schooner, collecting environmental data and organisms from viruses to planktonic metazoans for later analyses using modern sequencing and state-of-the-art imaging technologies. Tara Oceans data are particularly suited to studying the genetic, morphological and functional diversity of plankton. The present data set provides continuous measurements made with an Aquatic Laser Fluorescence Analyzer (ALFA) (Chekalyuk et al., 2014), connected in-line to the Tara flow-through system during 2013. The ALFA instrument provides dual-wavelength excitation (405 and 514 nm) of laser-stimulated emission (LSE) for spectral and temporal analysis. It offers in vivo fluorescence assessments of phytoplankton pigments, biomass, photosynthetic yield (Fv/Fm), phycobiliprotein (PBP)-containing phytoplankton groups, and chromophoric dissolved organic matter (CDOM) (Chekalyuk and Hafez, 2008; 2013a). Spectral deconvolution (SDC) is used to assess the overlapping spectral bands of aquatic fluorescence constituents and water Raman scattering (R). The Fv/Fm measurements are spectrally corrected for the non-chlorophyll fluorescence background produced by CDOM and other constituents (Chekalyuk and Hafez, 2008). The sensor was cleaned weekly following the manufacturer's recommended protocol.
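Spectral deconvolution as described above amounts to expressing an observed emission spectrum as a linear mix of known component bands and solving for the amplitudes. The sketch below is a generic illustration with made-up Gaussian bands, not the ALFA processing code.

```python
# Generic spectral deconvolution sketch: model the observed spectrum as a
# non-negative linear combination of component bands (chlorophyll, PBP,
# CDOM here; band centers and widths are made up) and recover amplitudes.
import numpy as np
from scipy.optimize import nnls

wavelengths = np.linspace(450, 750, 301)

def band(center, width):
    return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

components = np.column_stack([
    band(685, 12),   # chlorophyll-a fluorescence
    band(573, 15),   # phycoerythrin (PBP) band
    band(508, 40),   # broad CDOM emission
])

true_amps = np.array([1.0, 0.4, 0.7])
rng = np.random.default_rng(0)
observed = components @ true_amps + 0.01 * rng.standard_normal(wavelengths.size)

amps, _ = nnls(components, observed)  # non-negative least squares
print(amps)  # ~ [1.0, 0.4, 0.7]
```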

Relevance:

90.00%

Publisher:

Abstract:

This PhD thesis contributes to the problem of resource and service discovery in the context of the composable web. In the current web, mashup technologies allow developers to reuse services and contents to build new web applications. However, developers face a problem of information overload when searching for appropriate services or resources to combine. To help overcome this problem, a framework is defined for the discovery of services and resources. In this framework, discovery is performed at three levels: content, service and agent. The content level involves the information available in web resources. The web follows the Representational State Transfer (REST) architectural style, in which resources are returned as representations from servers to clients. These representations usually employ the HyperText Markup Language (HTML), which, along with Cascading Style Sheets (CSS), describes the markup employed to render representations in a web browser. Although the use of Semantic Web standards such as the Resource Description Framework (RDF) makes this architecture suitable for automatic processes to use the information present in web resources, these standards are too often not employed, so automation must rely on processing HTML. This process, often referred to as Screen Scraping in the literature, constitutes content discovery in the proposed framework. At this level, discovery rules indicate how the different pieces of data in resources' representations are mapped onto semantic entities. By processing discovery rules on web resources, semantically described contents can be obtained from them. The service level involves the operations that can be performed on the web. The current web allows users to perform different tasks such as search, blogging, e-commerce, or social networking. To describe the possible services in RESTful architectures, a high-level feature-oriented service methodology is proposed at this level. This lightweight description framework allows service discovery rules to be defined for identifying operations in interactions with REST resources. Discovery is thus performed by applying discovery rules to contents discovered in REST interactions, in a novel process called service probing. Service discovery can also be performed by modelling services as contents, i.e., by retrieving Application Programming Interface (API) documentation and API listings in service registries such as ProgrammableWeb. For this, a unified model for composable components in Mashup-Driven Development (MDD) has been defined after the analysis of service repositories from the web. The agent level involves the orchestration of the discovery of services and contents. At this level, agent rules allow behaviours to be specified for crawling and executing services, resulting in the fulfilment of a high-level goal. Agent rules are plans that introspect the data and services discovered from the web, together with the knowledge present in service and content discovery rules, to anticipate the contents and services to be found on specific web resources. By defining plans, an agent can be configured to target specific resources. The discovery framework has been evaluated in different scenarios, each one covering different levels of the framework. The Contenidos a la Carta project deals with the mashing-up of news from electronic newspapers, and the framework was used for the discovery and extraction of news items from the web.
Similarly, the Resulta and VulneraNET projects cover the discovery of ideas and of security knowledge on the web, respectively. The service level is covered in the OMELETTE project, where mashup components such as services and widgets are discovered in component repositories on the web. The agent level is applied to the crawling of services and news in these scenarios, highlighting how the semantic description of rules and extracted data can support complex behaviours and orchestrations of tasks on the web. The main contributions of the thesis are the unified discovery framework, which allows agents to be configured to perform automated tasks; a scraping ontology defined for the construction of mappings for scraping web resources; a novel first-order logic rule induction algorithm for the automated construction and maintenance of these mappings from the visual information in web resources; and a common unified model for the discovery of services, which allows service descriptions to be shared. Future work comprises the further extension of service probing, resource ranking, the extension of the scraping ontology, extensions of the agent model, and the construction of a base of discovery rules.
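To make the content-discovery idea concrete, the sketch below applies a rule set to an HTML representation and emits semantic triples. The rule format (plain regular expressions) and the vocabulary are illustrative stand-ins, not the thesis's actual scraping ontology.

```python
# Illustrative content discovery: each rule maps a fragment of an HTML
# representation onto a semantic entity (a subject-predicate-object
# triple). Regexes stand in for the thesis's actual rule language.
import re

html = ('<div class="news"><h1 class="title">Sample headline</h1>'
        '<span class="author">A. Writer</span></div>')

rules = {
    "dc:title":   r'<h1 class="title">(.*?)</h1>',
    "dc:creator": r'<span class="author">(.*?)</span>',
}

def discover(resource_uri, representation, rules):
    """Apply discovery rules to a representation, yielding triples."""
    for predicate, pattern in rules.items():
        for value in re.findall(pattern, representation):
            yield (resource_uri, predicate, value)

for triple in discover("http://example.org/news/1", html, rules):
    print(triple)
```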

Relevance:

90.00%

Publisher:

Abstract:

One of the challenges facing the current web is the efficient use of all the available information. The Web 2.0 phenomenon has favored the creation of contents by average users, and thus the amount of information that can be found for diverse topics has grown exponentially in recent years. Initiatives such as linked data are helping to build the Semantic Web, in which a set of standards are proposed for the exchange of data among heterogeneous systems. However, these standards are sometimes not used, and there are still plenty of websites that require naive techniques to discover their contents and services. This paper proposes an integrated framework for content and service discovery and extraction. The framework is divided into several layers where the discovery of contents and services is performed in a Representational State Transfer (REST) system such as the web. It employs several web mining techniques as well as feature-oriented modeling for the discovery of cross-cutting features in web resources. The framework is used in a scenario of electronic newspapers. An intelligent agent crawls the web for related news, using services and visiting links automatically according to its goal. This scenario illustrates how discovery is performed at different levels and how the use of semantics helps implement an agent that performs high-level tasks.
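The agent described in this scenario can be pictured as a goal-driven crawl loop. The sketch below stubs out fetching, extraction, and link selection as caller-supplied functions; it is a schematic reading of the paper, not its implementation.

```python
# Schematic goal-driven agent loop: pop a resource from the frontier,
# extract facts with discovery rules, and enqueue the links the agent
# chooses to follow toward its goal. fetch/extract/follow are stubs
# supplied by the caller.
from collections import deque

def crawl(seed_uri, fetch, extract, follow, max_visits=100):
    frontier, seen, facts = deque([seed_uri]), set(), []
    while frontier and len(seen) < max_visits:
        uri = frontier.popleft()
        if uri in seen:
            continue
        seen.add(uri)
        representation = fetch(uri)
        facts.extend(extract(uri, representation))
        frontier.extend(l for l in follow(uri, representation) if l not in seen)
    return facts
```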

Relevance:

90.00%

Publisher:

Abstract:

A Service Business Framework consists of a number of interrelated components that support the management of business services across their whole lifecycle, from their creation, publication, discovery and comparison, to their monetization (possibly including revenue settlement and sharing). In this regard, the FIWARE Business Framework aims at allowing FIWARE users to enhance their solutions with search, discovery, comparison, monetization, and revenue settlement and sharing features. To achieve this objective, the FIWARE Business Framework provides the open specification and APIs of a comprehensive set of components (called Generic Enablers in FIWARE terminology), along with a reference implementation of these APIs, that can be easily integrated with existing systems in order to create value-added applications. At the beginning of this Master's Thesis, the FIWARE Business Framework was not mature enough to cover the requirements of its users, since it provided overly general models and left some key functionality to be implemented by those users. To deal with these issues, the main objective of this Master's Thesis has been to enhance and evolve the FIWARE Business Framework to meet the demands of its users. To achieve this objective, the FIWARE Business Framework has been evaluated using the feedback provided by FIWARE users, mainly SMEs and start-ups actually using the framework in their solutions, in order to determine a list of requirements and to design a roadmap for the evolution and improvement of the existing framework over the following 6 months. Then, the different issues detected have been tackled one by one, in each case providing a solution able to cover users' requirements. Finally, the results of the project have been evaluated by integrating the evolved FIWARE Business Framework with an existing system in charge of the management of energy consumption data, building what has been called the Energy Consumption Data Market. This has also allowed demonstrating the usefulness of the proposed business framework to evolve CKAN, a renowned open data platform, into an actual, fully-fledged data market.

Relevance:

90.00%

Publisher:

Abstract:

Alternative agriculture, which expands the uses of plants well beyond food and fiber, is beginning to change plant biology. Two plant-based biotechnologies were recently developed that take advantage of the ability of plant roots to absorb or secrete various substances. They are (i) phytoextraction, the use of plants to remove pollutants from the environment, and (ii) rhizosecretion, a subset of molecular farming, designed to produce and secrete valuable natural products and recombinant proteins from roots. Here we discuss recent advances in these technologies and assess their potential in soil remediation, drug discovery, and molecular farming.

Relevance:

90.00%

Publisher:

Abstract:

The search for novel leads is a critical step in the drug discovery process. Computational approaches to identify new lead molecules have focused on discovering complete ligands by evaluating the binding affinity of a large number of candidates, a task of considerable complexity. A new computational method is introduced in this work based on the premise that the primary molecular recognition event in the protein binding site may be accomplished by small core fragments that serve as molecular anchors, providing a structurally stable platform that can be subsequently tailored into complete ligands. To fulfill its role, we show that an effective molecular anchor must meet both the thermodynamic requirement of relative energetic stability of a single binding mode and its consistent kinetic accessibility, which may be measured by the structural consensus of multiple docking simulations. From a large number of candidates, this technique is able to identify known core fragments responsible for primary recognition by the FK506 binding protein (FKBP-12), along with a diverse repertoire of novel molecular cores. By contrast, absolute energetic criteria for selecting molecular anchors are found to be promiscuous. A relationship between a minimum frustration principle of binding energy landscapes and receptor-specific molecular anchors in their role as "recognition nuclei" is established, thereby unraveling a mechanism of lead discovery and providing a practical route to receptor-biased computational combinatorial chemistry.
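The consensus criterion above can be approximated numerically: repeat the docking run, then measure how often the resulting poses agree within an RMSD cutoff. The sketch below uses a 2 Å cutoff and synthetic poses; both are illustrative assumptions, not the paper's exact measure.

```python
# Hedged sketch of "structural consensus" across repeated docking runs:
# the fraction of pose pairs agreeing within an RMSD cutoff. The cutoff
# and the synthetic poses are illustrative assumptions.
import numpy as np

def rmsd(a, b):
    """RMSD between two N x 3 coordinate arrays in the same receptor frame."""
    return np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1)))

def consensus_fraction(poses, cutoff=2.0):
    n = len(poses)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    agree = sum(rmsd(poses[i], poses[j]) <= cutoff for i, j in pairs)
    return agree / len(pairs) if pairs else 0.0

rng = np.random.default_rng(1)
anchor = rng.standard_normal((10, 3)) * 3.0            # hypothetical pose
poses = [anchor + 0.3 * rng.standard_normal((10, 3)) for _ in range(5)]
print(consensus_fraction(poses))                        # near 1.0: one stable mode
```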

Relevance:

90.00%

Publisher:

Abstract:

Previously, we reported on the discovery and characterization of a mammalian chromatin-associated protein, CHD1 (chromo-ATPase/helicase-DNA-binding domain), with features that led us to suspect that it might have an important role in the modification of chromatin structure. We now report on the characterization of the Drosophila melanogaster CHD1 homologue (dCHD1) and its localization on polytene chromosomes. A set of overlapping cDNAs encodes an 1883-aa open reading frame that is 50% identical and 68% similar to the mouse CHD1 sequence, including conservation of the three signature domains for which the protein was named. When the chromo and ATPase/helicase domain sequences in various CHD1 homologues were compared with the corresponding sequences in other proteins, certain distinctive features of the CHD1 chromo and ATPase/helicase domains were revealed. The dCHD1 gene was mapped to position 23C-24A on chromosome 2L. Western blot analyses with antibodies raised against a dCHD1 fusion protein specifically recognized an approximately 210-kDa protein in nuclear extracts from Drosophila embryos and cultured cells. Most interestingly, these antibodies revealed that dCHD1 localizes to sites of extended chromatin (interbands) and regions associated with high transcriptional activity (puffs) on polytene chromosomes from salivary glands of third instar larvae. These observations strongly support the idea that CHD1 functions to alter chromatin structure in a way that facilitates gene expression.

Relevance:

90.00%

Publisher:

Abstract:

Very large combinatorial libraries of small molecules on solid supports can now be synthesized and each library element can be identified after synthesis by using chemical tags. These tag-encoded libraries are potentially useful in drug discovery, and, to test this utility directly, we have targeted carbonic anhydrase (carbonate dehydratase; carbonate hydro-lyase, EC 4.2.1.1) as a model. Two libraries consisting of a total of 7870 members were synthesized, and structure-activity relationships based on the structures predicted by the tags were derived. Subsequently, an active representative of each library was resynthesized (2-[N-(4-sulfamoylbenzoyl)-4'-aminocyclohexanespiro]-4-oxo-7-hydroxy-2,3-dihydrobenzopyran and [N-(4-sulfamoylbenzoyl)-L-leucyl]piperidine-3-carboxylic acid) and these compounds were shown to have nanomolar dissociation constants (15 and 4 nM, respectively). In addition, a focused sublibrary of 217 sulfamoylbenzamides was synthesized and revealed a clear, testable structure-activity relationship describing isozyme-selective carbonic anhydrase inhibitors.
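As a side calculation (not from the paper), the reported dissociation constants translate into standard binding free energies via ΔG = RT ln Kd:

```python
# Converting the reported Kd values to binding free energies,
# dG = R * T * ln(Kd), with Kd relative to a 1 M standard state.
import math

R, T = 1.987e-3, 298.15  # kcal mol-1 K-1, kelvin

for kd in (15e-9, 4e-9):  # the two Kd values reported above
    dg = R * T * math.log(kd)
    print(f"Kd = {kd:.0e} M -> dG = {dg:.1f} kcal/mol")  # ~ -10.7 and -11.5
```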

Relevance:

90.00%

Publisher:

Abstract:

Copied orders and narrative entries of a military expedition to Schenectady and the Oneida station.

Relevance:

90.00%

Publisher:

Abstract:

This layer is a georeferenced raster image of the historic paper map entitled: Chart exhibiting the discoveries of the second American-Grinnell-Expedition in search of Sir John Franklin : unrevised from the original material and projected on the spot by E.K. Kane. It was published by Lith of J. Bien in [1855]. Scale [ca. 1:400,000]. Covers the Nares Strait region, Greenland and Canada. The image inside the map neatline is georeferenced to the surface of the earth and fit to the 'NAD 1983 CSRS UTM Zone 19 North' projection. All map collar and inset information is also available as part of the raster image, including any inset maps, profiles, statistical tables, directories, text, illustrations, index maps, legends, or other information associated with the principal map. This map shows coastal features such as drainage, islands, capes, bays, tides, lines of ice, camps, and more. Relief shown by hachures. This layer is part of a selection of digitally scanned and georeferenced historic maps from the Harvard Map Collection and the Harvard University Library as part of the Open Collections Program at Harvard University project: Organizing Our World: Sponsored Exploration and Scientific Discovery in the Modern Age. Maps selected for the project correspond to various expeditions and represent a range of regions, originators, ground condition dates, scales, and purposes.

Relevance:

90.00%

Publisher:

Abstract:

This layer is a georeferenced raster image of the historic paper map entitled: Map of the route explored by Captns. Speke & Grant from Zanzibar to Egypt : showing the outfall of the Nile from the Victoria Nyanza (Lake) and the various Negro territories discovered by them. It was published by Edward Stanford in 1863. Scale [ca. 1:5,800,000]. Covers portions of north and eastern Africa including parts of Sudan, Eritrea, Ethiopia, Uganda, Kenya, Rwanda, Burundi, and Tanzania. The image inside the map neatline is georeferenced to the surface of the earth and projected to the 'World Mercator' projection. All map collar and inset information is also available as part of the raster image, including any inset maps, profiles, statistical tables, directories, text, illustrations, index maps, legends, or other information associated with the principal map. This map shows features such as drainage, expedition routes of John Speke and James Grant, cities and other human settlements, tribe and territorial boundaries, and more. Relief is shown by hachures. Includes location map and text. This layer is part of a selection of digitally scanned and georeferenced historic maps from the Harvard Map Collection and the Harvard University Library as part of the Open Collections Program at Harvard University project: Organizing Our World: Sponsored Exploration and Scientific Discovery in the Modern Age. Maps selected for the project correspond to various expeditions and represent a range of regions, originators, ground condition dates, scales, and purposes.

Relevance:

90.00%

Publisher:

Abstract:

This layer is a georeferenced raster image of the historic paper map entitled: Behring's Sea and Arctic Ocean : from surveys of the U.S. North Pacific Surveying Expedition in 1855, Commander John Rodgers U.S.N. commanding and from Russian and English authorities, J.C.P. de Kraft, commodore U.S.N. Hydrographer to the Bureau of Navigation ; compiled by E.R. Knorr ; drawn by Louis Waldecker. Corr. & additions to Jan. 1882. It was published by U.S. Navy, Hydrographic Office in 1882. Scale [ca. 1:4,400,000]. Covers the Bering Sea and Arctic Ocean region. The image inside the map neatline is georeferenced to the surface of the earth and fit to a non-standard 'Mercator' projection with the central meridian at 180 degrees west. All map collar and inset information is also available as part of the raster image, including any inset maps, profiles, statistical tables, directories, text, illustrations, index maps, legends, or other information associated with the principal map. Note: The central meridian of this map is not the same as the Prime Meridian and may wrap the International Date Line or overlap itself when displayed in GIS software. This map shows features such as drainage, cities and other human settlements, territorial boundaries, expedition routes, shoreline features, bays, harbors, islands, rocks, and more. Relief shown by hachures and spot heights. Depths shown by soundings. Includes drawing of Wrangel Island "as seen from Bark Nile of New London ... ; 15 to 18 miles distant". This layer is part of a selection of digitally scanned and georeferenced historic maps from the Harvard Map Collection and the Harvard University Library as part of the Open Collections Program at Harvard University project: Organizing Our World: Sponsored Exploration and Scientific Discovery in the Modern Age. Maps selected for the project correspond to various expeditions and represent a range of regions, originators, ground condition dates, scales, and purposes.
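For GIS users, the non-standard central meridian noted above can be reproduced with a custom PROJ definition. The sketch below uses pyproj, and the PROJ string is an assumption about the layer's CRS, not its authoritative definition.

```python
# Hedged sketch: a Mercator CRS with its central meridian at 180 degrees,
# so the Bering Sea region stays continuous instead of splitting at the
# International Date Line. The PROJ string below is an assumption, not
# the layer's authoritative CRS definition.
from pyproj import CRS, Transformer

crs_180 = CRS.from_proj4("+proj=merc +lon_0=180 +datum=WGS84 +units=m")
to_map = Transformer.from_crs("EPSG:4326", crs_180, always_xy=True)

# Points on either side of the date line project to nearby x values:
print(to_map.transform(179.5, 65.0))    # (lon, lat) -> (x, y)
print(to_map.transform(-179.5, 65.0))
```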