959 results for ArcGIS API for JavaScript


Relevance:

10.00%

Publisher:

Abstract:

In this research work we searched for open-source libraries that support graph drawing and visualisation and can run in a browser. These libraries were then evaluated to find out which one is best suited for this task. The result was that d3.js is the library with the greatest functionality, flexibility and customisability. Afterwards we developed an open-source software tool that incorporates d3.js and is written in JavaScript, so that it runs in the browser.
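
As an illustration of the kind of browser-based graph drawing the abstract describes, the sketch below renders a small node-link diagram with d3.js (v4 or later assumed); the SVG element id and the node and link data are hypothetical, not taken from the tool itself.

```javascript
// Minimal sketch, assuming an empty <svg id="graph" width="400" height="300">
// in the page and d3 v4+ loaded. Data and ids are hypothetical.
const nodes = [{ id: "a" }, { id: "b" }, { id: "c" }];
const links = [{ source: "a", target: "b" }, { source: "b", target: "c" }];

const svg = d3.select("#graph");

// Draw edges and vertices as plain SVG shapes.
const link = svg.selectAll("line").data(links).enter().append("line")
  .attr("stroke", "#999");
const node = svg.selectAll("circle").data(nodes).enter().append("circle")
  .attr("r", 8).attr("fill", "steelblue");

// A force simulation lays the graph out automatically.
d3.forceSimulation(nodes)
  .force("link", d3.forceLink(links).id(d => d.id).distance(80))
  .force("charge", d3.forceManyBody().strength(-200))
  .force("center", d3.forceCenter(200, 150))
  .on("tick", () => {
    link.attr("x1", d => d.source.x).attr("y1", d => d.source.y)
        .attr("x2", d => d.target.x).attr("y2", d => d.target.y);
    node.attr("cx", d => d.x).attr("cy", d => d.y);
  });
```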

Relevance:

10.00%

Publisher:

Abstract:

The objective of this work was the development of a mobile application written in HTML5, called Audioguía turística del Real Monasterio de San Lorenzo de El Escorial (a tourist audio guide for the Royal Monastery of San Lorenzo de El Escorial). Technologies such as HTML5, JavaScript, jQuery Mobile, CSS, ThemeRoller and PhoneGap were used.
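
As a rough sketch of how such an HTML5 audio guide might wire playback into a jQuery Mobile page (the page id, button id and audio file name are hypothetical; jQuery Mobile is assumed to be loaded):

```javascript
// Minimal sketch: play/pause one audio stop when its button is tapped.
// "audio/stop1.mp3" and the ids are hypothetical placeholders.
$(document).on("pagecreate", "#stop1", function () {
  var track = new Audio("audio/stop1.mp3"); // HTML5 audio, no plugin needed

  $("#play-stop1").on("click", function () {
    if (track.paused) {
      track.play();
      $(this).text("Pause");
    } else {
      track.pause();
      $(this).text("Play");
    }
  });
});
```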

Relevance:

10.00%

Publisher:

Abstract:

The project aims to encourage learning outside the classroom, so that it can take place anywhere and at any time and the student can gain further knowledge of a given compulsory subject. PracticaClic is an educational system aimed at primary-school children, specifically in the subject of mathematics.

Relevance:

10.00%

Publisher:

Abstract:

WebGraphEd is an open-source software tool for graph visualization and manipulation. It is especially designed for the web platform and runs in a web browser. The web application has been written in JavaScript and later compacted, which makes it very lightweight software. No additional software is needed; the only requirement is an HTML5-compliant browser. WebGraphEd works with scalable vector graphics (SVG), which makes it possible to create lossless graph drawings.
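
To illustrate the SVG-based drawing approach mentioned above, the following sketch creates a two-node graph directly with the browser's SVG DOM API (the element id and coordinates are hypothetical; this is not WebGraphEd's actual code):

```javascript
// Minimal sketch: draw two vertices and one edge as SVG elements.
// Assumes an empty <svg id="canvas" width="300" height="200"> in the page.
const SVG_NS = "http://www.w3.org/2000/svg";
const svg = document.getElementById("canvas");

function addCircle(cx, cy) {
  const c = document.createElementNS(SVG_NS, "circle");
  c.setAttribute("cx", cx);
  c.setAttribute("cy", cy);
  c.setAttribute("r", 10);
  c.setAttribute("fill", "steelblue");
  svg.appendChild(c);
  return c;
}

function addEdge(x1, y1, x2, y2) {
  const l = document.createElementNS(SVG_NS, "line");
  l.setAttribute("x1", x1);
  l.setAttribute("y1", y1);
  l.setAttribute("x2", x2);
  l.setAttribute("y2", y2);
  l.setAttribute("stroke", "#333");
  svg.appendChild(l);
  return l;
}

// Edge first so the vertices are painted on top of it.
addEdge(60, 60, 220, 140);
addCircle(60, 60);
addCircle(220, 140);
```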

Relevance:

10.00%

Publisher:

Abstract:

Creation of two prototypes, one for Android and the other for Unity, laying the groundwork for the production of a side-scrolling action video game (beat 'em up) with platform/puzzle elements called "Ouroboros". Android is a Linux-based operating system, designed primarily for touch-screen smartphones and tablets. Specifically, the SDK (Software Development Kit) will be used within the Eclipse programming environment with the Java language, together with the basics of a framework called LibGDX. Unity, on the other hand, is a multi-platform game engine with an integrated development environment, of which we will use the JavaScript version. The aim is to explore both platforms in order to find out which of the two approaches is the most suitable for the final production of a game.

Relevance:

10.00%

Publisher:

Abstract:

The ontology that has been designed covers the basic concepts of Twitter, the relationships between them and the constraints that must be respected. The ontology was designed with the Protégé editor and is available in OWL format. An application has been developed to populate the ontology with the tweets obtained from a Twitter search. Twitter is accessed via the API it offers for third-party applications. The result of running the application is an RDF/XML file with the triples corresponding to the instances of the objects in the ontology.
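
As a loose illustration of the kind of output described (not the project's actual code), the sketch below serializes a single tweet into a tiny RDF/XML fragment using plain string templates; the "ex:" vocabulary, resource URIs and property names are hypothetical:

```javascript
// Minimal sketch: turn one tweet object into an RDF/XML description.
// The "ex:" vocabulary is a hypothetical stand-in, not the project's ontology.
// Real code would also escape XML entities in the tweet text.
function tweetToRdfXml(tweet) {
  return [
    '<?xml version="1.0" encoding="UTF-8"?>',
    '<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"',
    '         xmlns:ex="http://example.org/twitter#">',
    `  <rdf:Description rdf:about="http://example.org/tweet/${tweet.id}">`,
    '    <rdf:type rdf:resource="http://example.org/twitter#Tweet"/>',
    `    <ex:text>${tweet.text}</ex:text>`,
    `    <ex:postedBy rdf:resource="http://example.org/user/${tweet.user}"/>`,
    '  </rdf:Description>',
    '</rdf:RDF>'
  ].join("\n");
}

console.log(tweetToRdfXml({ id: "42", user: "alice", text: "Hello, ontology!" }));
```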

Relevance:

10.00%

Publisher:

Abstract:

Demand for olive oil has been growing worldwide, especially in Brazil, increasing the need to expand the planted area. However, in the traditionally producing regions there is no more room, so new planting areas must be sought around the world. In this work, marginal areas for planting in Brazil were identified using ecological niche modelling, based on occurrence points collected in the producing regions of Europe, the United States, Australia and Latin America (Chile, Argentina and Uruguay). A total of 346 georeferenced occurrence points were used, together with layers of seasonal (spring, summer, autumn and winter) climate data for the world (mean minimum air temperature and mean maximum air temperature, in °C, and total rainfall, in mm) downloaded from the WorldClim website. The data were organized and formatted in geographic information systems using ArcGIS 10. Finally, maps of the probability of occurrence of olive trees in the world were produced. A map of South America alone was also produced, showing the zones with the greatest potential for olive occurrence.
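
The study itself used desktop ArcGIS 10, but a suitability map like the one described could also be published as an image service and shown in a browser. The sketch below uses the ArcGIS API for JavaScript 4.x for that purpose; the service URL and div id are hypothetical, not from the study.

```javascript
// Minimal sketch, ArcGIS API for JavaScript 4.x assumed to be loaded.
// The imagery service URL is a hypothetical placeholder.
require([
  "esri/Map",
  "esri/views/MapView",
  "esri/layers/ImageryLayer"
], function (Map, MapView, ImageryLayer) {

  // Hypothetical raster of olive occurrence probability.
  const suitability = new ImageryLayer({
    url: "https://example.org/arcgis/rest/services/olive_suitability/ImageServer",
    opacity: 0.7
  });

  const map = new Map({
    basemap: "topo-vector",
    layers: [suitability]
  });

  new MapView({
    container: "viewDiv",      // <div id="viewDiv"> in the page
    map: map,
    center: [-53, -25],        // roughly southern Brazil
    zoom: 4
  });
});
```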

Relevance:

10.00%

Publisher:

Abstract:

Despite the variety of available Web services registries specifically aimed at the Life Sciences, their scope is usually restricted to a limited set of well-defined types of services. While dedicated registries are generally tied to a particular format, general-purpose ones adhere more closely to standards and usually rely on the Web Service Definition Language (WSDL). Although WSDL is flexible enough to support common types of Web services, its lack of semantic expressiveness led to various initiatives to describe Web services via ontology languages. Nevertheless, WSDL 2.0 descriptions gained a standard representation based on the Web Ontology Language (OWL). BioSWR is a novel Web services registry that provides standard Resource Description Framework (RDF) based Web services descriptions along with the traditional WSDL-based ones. The registry provides a Web-based interface for Web service registration, querying and annotation, and is also accessible programmatically via a Representational State Transfer (REST) API or using the SPARQL Protocol and RDF Query Language. The BioSWR server is located at http://inb.bsc.es/BioSWR/ and its code is available at https://sourceforge.net/projects/bioswr/ under the LGPL license.
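
As a sketch of what programmatic access to such a registry could look like, the snippet below sends a SPARQL query over HTTP; the endpoint path and the query itself are hypothetical, not BioSWR's documented interface.

```javascript
// Minimal sketch: send a SPARQL query over HTTP and log the JSON bindings.
// Endpoint URL and query are hypothetical placeholders.
const endpoint = "https://example.org/BioSWR/sparql";
const query = `
  SELECT ?service ?label WHERE {
    ?service a <http://www.w3.org/ns/wsdl-rdf#Service> ;
             <http://www.w3.org/2000/01/rdf-schema#label> ?label .
  } LIMIT 10`;

fetch(endpoint + "?query=" + encodeURIComponent(query), {
  headers: { Accept: "application/sparql-results+json" }
})
  .then(response => response.json())
  .then(data => {
    data.results.bindings.forEach(row =>
      console.log(row.service.value, row.label.value));
  })
  .catch(err => console.error("SPARQL request failed:", err));
```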

Relevance:

10.00%

Publisher:

Abstract:

This thesis evaluates methods for obtaining high performance in applications running on the mobile Java platform. Based on the evaluated methods, an optimization was made to a Java extension API running on top of the Symbian operating system. The API provides location-based services for mobile Java applications. As part of this thesis, the JNI implementation in Symbian OS was also benchmarked. A benchmarking tool was implemented in the analysis phase in order to build an extensive set of performance tests. Based on the benchmark results, it was noted that the landmarks implementation of the API performed very slowly with large amounts of data. The existing implementation proved very inconvenient to optimize because the original implementers had not taken performance and design issues into consideration. A completely new architecture was implemented for the API in order to provide scalable landmark initialization and data extraction using lazy initialization. Runtime memory consumption was also an important part of the optimization. Measurements taken after the optimization show that the improvement was very effective: most of the common API use cases performed extremely well compared with the old implementation. Performance is an important quality attribute of any piece of software, especially on embedded mobile devices. Typically, projects get into trouble with performance because there are no clear performance targets and no knowledge of how to achieve them. Well-known guidelines and performance models help to achieve good overall performance in Java applications and programming interfaces.
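
The optimization rests on lazy initialization; the sketch below shows the general pattern in JavaScript (the landmark store and loader are hypothetical and unrelated to the actual Symbian/Java implementation):

```javascript
// Minimal sketch of lazy initialization: the expensive landmark data is
// only parsed the first time it is requested, then cached for reuse.
// loadLandmarkFile() is a hypothetical, expensive parsing step.
function loadLandmarkFile() {
  console.log("parsing landmark database...");   // done at most once
  return [{ name: "Helsinki", lat: 60.17, lon: 24.94 }];
}

function createLandmarkStore() {
  let landmarks = null;                          // not initialized yet

  return {
    getLandmarks() {
      if (landmarks === null) {                  // first access pays the cost
        landmarks = loadLandmarkFile();
      }
      return landmarks;                          // later accesses are cheap
    }
  };
}

const store = createLandmarkStore();
store.getLandmarks();   // triggers the expensive load
store.getLandmarks();   // served from the cached array
```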

Relevance:

10.00%

Publisher:

Abstract:

Current-day web search engines (e.g., Google) do not crawl and index a significant portion of the Web, and hence web users who rely on search engines alone are unable to discover and access a large amount of information in the non-indexable part of the Web. Specifically, dynamic pages generated from parameters that a user provides via web search forms (or search interfaces) are not indexed by search engines and cannot be found in search results. Such search interfaces give web users online access to myriads of databases on the Web. In order to obtain information from a web database of interest, a user issues a query by specifying query terms in a search form and receives the query results: a set of dynamic pages that embed the required information from the database. At the same time, issuing a query via an arbitrary search interface is an extremely complex task for any kind of automatic agent, including web crawlers, which, at least up to the present day, do not even attempt to pass through web forms on a large scale. In this thesis, our primary object of study is the huge portion of the Web (hereafter referred to as the deep Web) hidden behind web search interfaces. We concentrate on three classes of problems around the deep Web: characterization of the deep Web, finding and classifying deep web resources, and querying web databases.

Characterizing the deep Web: Though the term deep Web was coined in 2000, which is a long time ago for any web-related concept or technology, we still do not know many important characteristics of the deep Web. Another matter of concern is that the surveys of the deep Web that exist so far are predominantly based on studies of deep web sites in English. One can then expect that the findings from these surveys may be biased, especially owing to the steady increase in non-English web content. In this way, surveying national segments of the deep Web is of interest not only to national communities but to the whole web community as well. In this thesis, we propose two new methods for estimating the main parameters of the deep Web. We use the suggested methods to estimate the scale of one specific national segment of the Web and report our findings. We also build and make publicly available a dataset describing more than 200 web databases from the national segment of the Web.

Finding deep web resources: The deep Web has been growing at a very fast pace. It has been estimated that there are hundreds of thousands of deep web sites. Due to the huge volume of information in the deep Web, there has been significant interest in approaches that allow users and computer applications to leverage this information. Most approaches assume that the search interfaces to the web databases of interest have already been discovered and are known to query systems. However, such assumptions do not hold, mostly because of the large scale of the deep Web: for any given domain of interest there are too many web databases with relevant content. Thus, the ability to locate search interfaces to web databases becomes a key requirement for any application accessing the deep Web. In this thesis, we describe the architecture of the I-Crawler, a system for finding and classifying search interfaces. Specifically, the I-Crawler is intentionally designed to be used in deep Web characterization studies and for constructing directories of deep web resources. Unlike almost all other approaches to the deep Web so far, the I-Crawler is able to recognize and analyze JavaScript-rich and non-HTML searchable forms.

Querying web databases: Retrieving information by filling out web search forms is a typical task for a web user, all the more so because the interfaces of conventional search engines are also web forms. At present, a user needs to provide input values to search interfaces manually and then extract the required data from the result pages. Filling out forms manually is cumbersome and not feasible for complex queries, yet such queries are essential for many web searches, especially in the area of e-commerce. The automation of querying and retrieving data behind search interfaces is therefore desirable and essential for tasks such as building domain-independent deep web crawlers and automated web agents, searching for domain-specific information (vertical search engines), and extracting and integrating information from various deep web resources. We present a data model for representing search interfaces and discuss techniques for extracting field labels, client-side scripts and structured data from HTML pages. We also describe a representation of result pages and discuss how to extract and store the results of form queries. Besides, we present a user-friendly and expressive form query language that allows one to retrieve information behind search interfaces and extract useful data from the result pages based on specified conditions. We implement a prototype system for querying web databases and describe its architecture and component design.
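
As a toy illustration of the kind of form analysis such a crawler performs (this is not the I-Crawler's code, and the heuristics are deliberately simplistic):

```javascript
// Minimal sketch: scan a loaded HTML document for forms that look like
// search interfaces and list their text fields with associated labels.
// A real crawler would need far richer heuristics and JavaScript handling.
function findSearchForms(doc) {
  return Array.from(doc.forms)
    .map(form => {
      const textFields = Array.from(
        form.querySelectorAll('input[type="text"], input[type="search"], input:not([type])')
      );
      const fields = textFields.map(input => {
        // Prefer an explicit <label for="...">, fall back to the name attribute.
        const label = input.id ? doc.querySelector(`label[for="${input.id}"]`) : null;
        return {
          name: input.name || "(unnamed)",
          label: label ? label.textContent.trim() : null
        };
      });
      return { action: form.action, fields };
    })
    .filter(form => form.fields.length > 0);   // keep only forms with text inputs
}

console.log(findSearchForms(document));
```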

Relevance:

10.00%

Publisher:

Abstract:

Personalized medicine is a challenging research area in paediatric treatments. Elaborating new paediatric formulations when no commercial forms are available is common practice in pharmacy laboratories; among these, oral liquid formulations are the most common. However, due to the lack of specialized equipment, studies to assure the efficacy and safety of the final medicine frequently cannot be carried out. The purpose of this work was therefore the development, characterization and stability evaluation of two oral formulations of sildenafil for the treatment of neonatal persistent pulmonary hypertension. After the establishment of a standard operating procedure (SOP) and elaboration of the formulations, the physicochemical stability parameters (appearance, pH, particle size, rheological behaviour and drug content) were evaluated at three different temperatures over 90 days. Likewise, a prediction of long-term stability, as well as microbiological stability testing, was performed. The formulations resulted in a suspension and a solution, both slightly coloured and exhibiting a fruity odour. Formulation I (the suspension) exhibited the best physicochemical properties, including Newtonian behaviour and a uniformity of API content above 90%, assuring an exact dosing process.

Relevance:

10.00%

Publisher:

Abstract:

The GIS visualization system of the Servei Meteorològic de Catalunya (SMC) displays the SMC's meteorological information through a GIS viewer. The tool will allow SMC technicians to consult meteorological products such as numerical models, Meteosat products, radar, lightning strikes, etc. The system consists of a MapServer server, using MapCache as the caching system, and a web GIS viewer implemented in JavaScript using the Dojo Toolkit and OpenLayers libraries.
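
A minimal sketch of how such a viewer might add a cached WMS layer with OpenLayers (version 2 API assumed, since the project pairs OpenLayers with Dojo; the endpoint, layer name and div id are hypothetical, not the SMC's actual services):

```javascript
// Minimal sketch, OpenLayers 2 style. The WMS URL, layer name and the
// <div id="map"> container are hypothetical placeholders.
var map = new OpenLayers.Map("map");

// A background layer plus one meteorological WMS layer served by MapServer
// (with MapCache caching tiles in front of it).
var base = new OpenLayers.Layer.OSM("Base map");
var radar = new OpenLayers.Layer.WMS(
  "Radar",
  "https://example.org/mapcache/wms",          // hypothetical endpoint
  { layers: "radar_composite", transparent: true },
  { isBaseLayer: false, opacity: 0.6 }
);

map.addLayers([base, radar]);
map.zoomToMaxExtent();
```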

Relevance:

10.00%

Publisher:

Abstract:

[Summary]
2. Roles of quality control in the pharmaceutical and biopharmaceutical industries
   2.1. Pharmaceutical industry
   2.2. Biopharmaceutical industry
   2.3. Policy and regulatory
      2.3.1. The US Food and Drug Administration (FDA)
      2.3.2. The European Medicines Agency (EMEA)
      2.3.3. The Japanese Ministry of Health, Labour and Welfare (MHLW)
      2.3.4. The Swiss Agency for Therapeutic Products (Swissmedic)
      2.3.5. The International Conference on Harmonization (ICH)
3. Types of testing
   3.1. Microbiological purity tests
   3.2. Physicochemical tests
   3.3. Critical-to-quality steps
      3.3.1. API starting materials and excipients
      3.3.2. Intermediates
      3.3.3. APIs (drug substances) and final drug product
      3.3.4. Primary and secondary packaging materials for drug products
4. Manufacturing cost and quality control
   4.1.1. Pharmaceutical manufacturing cost breakdown
   4.1.2. Biopharmaceutical manufacturing cost breakdown
   4.2. Batch failure / rejection / rework / recalls
5. Future trends in the quality control of pharmaceuticals and biopharmaceuticals
   5.1. Rapid and real-time testing
      5.1.1. Physicochemical testing
      5.1.2. Rapid microbiology methods

Relevance:

10.00%

Publisher:

Abstract:

Study and design of an app based on geolocating items and sharing them in an application designed for multi-platform mobile devices, following the model of social networks. With this project we want to become familiar with development on mobile environments and with the use of solid standards that allow a multi-platform application. Another objective of this project is the development of an API that allows the application's users to communicate with the cloud. The application will likewise need to communicate efficiently with its own cloud to obtain data, such as the points of interest (POIs), and with an external cloud to obtain the maps, in this case using the OpenStreetMap APIs. To achieve these goals the project is divided into phases: the first pursues a mobile tool that can display on screen the location of the mobile phone together with the points of interest within its radius of action; in a second phase an API specific to the application will be developed, allowing points of interest to be queried, published, and so on; finally, the mobile application will be integrated with the cloud API in order to display the information on mobile devices.
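
As a sketch of the first phase described above (device position plus nearby POIs), using the browser geolocation API and a hypothetical cloud endpoint rather than the project's actual API:

```javascript
// Minimal sketch: read the device position, then ask a (hypothetical)
// cloud API for the points of interest within a given radius.
function loadNearbyPois(radiusMeters) {
  navigator.geolocation.getCurrentPosition(function (position) {
    const { latitude, longitude } = position.coords;

    const url = "https://example.org/api/pois" +
      "?lat=" + latitude + "&lon=" + longitude + "&radius=" + radiusMeters;

    fetch(url)
      .then(response => response.json())
      .then(pois => {
        // In the real app these would be drawn on an OpenStreetMap-based map.
        pois.forEach(poi => console.log(poi.name, poi.lat, poi.lon));
      })
      .catch(err => console.error("POI request failed:", err));
  }, function (err) {
    console.error("Geolocation unavailable:", err.message);
  });
}

loadNearbyPois(500);   // POIs within 500 m of the current position
```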

Relevance:

10.00%

Publisher:

Abstract:

This final-year project of the technical engineering degree in management informatics is intended as a first approach to the world of semantic web analysis. It consists, on the one hand, of the creation of an ontology to store information coming from the LinkedIn website, so that it can later be analysed and the data filtered in a practical way, avoiding an excess of useless information. On the other hand, the work includes the development of an application to obtain the information from the LinkedIn website automatically, and a method for importing it into the created ontology.