975 results for API (Application Programming Interface)
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa in fulfilment of the requirements for the degree of Master in Informatics Engineering.
Abstract:
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.
Abstract:
Crisis-affected communities and international aid organizations are becoming increasingly digital as a consequence of the popularity of geotechnology. The humanitarian sector has changed in profound ways by adopting new technical approaches to obtain information from areas that are difficult to access geographically or politically. Since 2011, Turkey has been hosting a growing number of Syrian refugees along its southeastern region. The Turkish policy of hosting them in camps, and the obstacles local authorities created for international aid groups' field expeditions, led such organizations to investigate and adopt other approaches to obtain the information they needed: they intensified their use of remote sensing. However, the majority of studies have used very high-resolution satellite imagery (VHRSI). The study area is extensive and the temporal resolution of VHRSI is low, so relying on these sensors alone for the whole area is infeasible. This research investigates the potential of mid-resolution imagery (here, only Landsat) to obtain information from a region in crisis (here, southeastern Turkey) through a new web-based platform called Google Earth Engine (GEE). It also aims to verify the current reliability of GEE, since its Application Programming Interface (API) is still in beta. The findings show that the basic functions are trustworthy. Results indicate that Landsat can clearly detect spectral change only for the first settlement; the ongoing modifications vary for each case. Overall, Landsat showed strong limitations, but it deserves further investigation and may be used, with restrictions, in support of VHRSI.
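The abstract includes no code; as an illustrative sketch of the kind of GEE query involved, using the Earth Engine Python API (the collection ID, dates, coordinates, and cloud threshold below are assumptions, not values from the study):

```python
# Minimal sketch of a GEE change query over southeastern Turkey.
# Requires the earthengine-api package and an authenticated account.
import ee

ee.Initialize()

# Area of interest: a rough bounding box near a hypothetical camp site.
aoi = ee.Geometry.Rectangle([37.0, 36.5, 37.5, 37.0])

# Landsat 8 surface reflectance, filtered by place, date, and cloud cover.
collection = (
    ee.ImageCollection("LANDSAT/LC08/C02/T1_L2")
    .filterBounds(aoi)
    .filterDate("2013-04-01", "2016-12-31")
    .filter(ee.Filter.lt("CLOUD_COVER", 10))
)

# A simple change signal: the difference between median composites
# of two periods (before/after a settlement is established).
before = collection.filterDate("2013-04-01", "2014-04-01").median()
after = collection.filterDate("2015-04-01", "2016-04-01").median()
change = after.subtract(before).clip(aoi)

print("Images in collection:", collection.size().getInfo())
```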
Abstract:
Integrated master's dissertation in Biomedical Engineering (specialization in Medical Informatics).
Abstract:
The Microbe browser is a web server providing comparative microbial genomics data. It offers comprehensive, integrated data from GenBank, RefSeq, UniProt, InterPro, Gene Ontology and the Orthologs Matrix Project (OMA) database, displayed along with gene predictions from five software packages. The Microbe browser is updated daily from the source databases and includes all completely sequenced bacterial and archaeal genomes. The data are displayed in an easy-to-use, interactive website based on Ensembl software. The Microbe browser is available at http://microbe.vital-it.ch/. Programmatic access is available through the OMA application programming interface (API) at http://microbe.vital-it.ch/api.
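As a hedged sketch of programmatic access to such an API (the endpoint path, query parameter, and JSON response format below are assumptions, not the documented OMA API):

```python
# Minimal sketch of calling the Microbe browser's OMA API over HTTP.
import requests

BASE_URL = "http://microbe.vital-it.ch/api"

# Hypothetical query: fetch orthologs for a given gene identifier.
response = requests.get(f"{BASE_URL}/orthologs", params={"gene": "b0002"})
response.raise_for_status()

# Assumes the service returns a JSON list of ortholog records.
for ortholog in response.json():
    print(ortholog)
```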
Abstract:
Symbian OS is an operating system for advanced mobile phones. Devices running Symbian come in many variants: some are operated with a keypad, others with a stylus. Screen dimensions and other characteristics vary considerably, from smartphones to communicators. As a consequence, the user interface parts of the reference designs of the different Symbian device families are quite different from the application developer's point of view; for example, not all user interface components are available in every device family. Traditionally, application user interfaces have been written separately for each device family, which lengthens development time. This thesis presents the Symbian user interface architecture, the concept of portability, and techniques for designing and implementing applications that achieve better portability in the Symbian environment. As part of the work, the user interface of a testing tool called AppTest is designed and implemented so that the application is easily portable to different device families.
Abstract:
The use of domain-specific languages (DSLs) has been proposed as an approach to cost-effectively develop families of software systems in a restricted application domain. Domain-specific languages, in combination with the accumulated knowledge and experience of previous implementations, can in turn be used to generate new applications with unique sets of requirements. For this reason, DSLs are considered an important approach to software reuse. However, the toolset supporting a particular domain-specific language is also domain-specific and is by definition not reusable. Creating and maintaining a DSL therefore requires additional resources, which can even exceed the savings associated with using it. As a solution, different tool frameworks have been proposed to simplify and reduce the cost of developing DSLs. Developers of tool support for DSLs need to instantiate, customize, or configure the framework for a particular DSL, and there are different approaches to this. One approach is to use an application programming interface (API) and extend the basic framework using an imperative programming language; an example of a tool based on this approach is Eclipse GEF. Another approach is to configure the framework using declarative languages that are independent of the underlying framework implementation. We believe this second approach can bring important benefits, as it shifts the focus to specifying what the tool should be like instead of writing a program specifying how the tool achieves this functionality. In this thesis we explore this second approach. We use graph transformation as the basic approach to customize a domain-specific modeling (DSM) tool framework. The contributions of this thesis include a comparison of different approaches for defining, representing, and interchanging software modeling languages and models, and a tool architecture for an open domain-specific modeling framework that efficiently integrates several model transformation components and visual editors. We also present several specific algorithms and tool components for the DSM framework, including an approach to graph queries based on region operators and the star operator, and an approach for reconciling models and diagrams after executing model transformation programs. We exemplify our approach with two case studies, MICAS and EFCO, in which we show how our experimental modeling tool framework has been used to define tool environments for domain-specific languages.
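As an invented contrast between the two customization approaches (all names below are made up for the example, not from the thesis): a generic editor driven by a declarative language description, instead of per-language imperative extension code:

```python
# Illustrative only: a hypothetical DSM editor configured declaratively.
# The language description is plain data; no framework subclassing needed.
LANGUAGE = {
    "name": "TrafficNet",
    "node_types": ["Road", "Junction", "Signal"],
    "edge_types": [
        {"name": "connects", "from": "Road", "to": "Junction"},
        {"name": "controls", "from": "Signal", "to": "Junction"},
    ],
}

class GenericEditor:
    """A framework-independent editor driven purely by a language description."""

    def __init__(self, language: dict):
        self.language = language

    def can_connect(self, edge: str, source: str, target: str) -> bool:
        # The editor interprets the declarative rules at run time;
        # no imperative extension code is written per language.
        return any(
            e["name"] == edge and e["from"] == source and e["to"] == target
            for e in self.language["edge_types"]
        )

editor = GenericEditor(LANGUAGE)
print(editor.can_connect("controls", "Signal", "Junction"))  # True
```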
Abstract:
The forthcoming media revolution, replacing paper documents with digital media in construction engineering, requires new tools to be developed. The basis of this bachelor's thesis was to explore the preliminary possibilities of exporting imagery from Building Information Modelling (BIM) software to a mobile phone on a construction site. This was done by producing a Web Service that uses the design software's Application Programming Interface to interact with a structures model in order to produce the requested imagery. While mobile phones were found lacking as client devices, because of limited processing power and small displays, the implementation showed that the Tekla Structures API can be used to automatically produce various types of imagery, and that Web Services can be used to transfer this data to the client. Before further development, the needs of the contractor, the benefits for the site manager and inspector, and the full potential of the BIM software need to be mapped out with surveys.
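As a hedged sketch of the mobile-client side of such a service (the endpoint URL, parameters, and response format are assumptions invented for this example; they are not the Tekla Structures API, which the service wraps on the server side):

```python
# Minimal sketch: a client requesting one rendered image from the
# hypothetical Web Service that fronts the structures model.
import requests

SERVICE_URL = "http://example.local/structures-model/imagery"  # hypothetical

response = requests.get(
    SERVICE_URL,
    params={"model": "bridge-01", "view": "elevation", "width": 320, "height": 240},
    timeout=30,
)
response.raise_for_status()

# Save the returned snapshot for display on the phone.
with open("snapshot.png", "wb") as f:
    f.write(response.content)
```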
Abstract:
Java Card technology allows the development and execution of small applications embedded in smart cards. A Java Card application is composed of an external card client and of an application in the card that implements the services available to the client by means of an Application Programming Interface (API). Usually, these applications manipulate and store important information, such as cash and confidential data of their owners. Thus, it is necessary to develop smart card applications rigorously to improve their quality and trustworthiness. The use of formal methods in the development of these applications is a way to reach these quality requirements. The B method is one of the many formal methods for system specification. Development in B starts with the functional specification of the system, continues with the application of some optional refinements to the specification, and, from the last level of refinement, it is possible to generate code in some programming language. The B formalism has good tool support, and its application to Java Card is adequate since the specification and development of APIs is one of the major applications of B. The BSmart method proposed here aims to promote the rigorous development of Java Card applications up to the generation of their code, based on the refinement of a formal specification described in the B notation. This development is supported by the BSmart tool, which is composed of programs that automate each stage of the method, and by a library of B modules and Java Card classes that model primitive types, essential Java Card API classes, and reusable data structures.
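The B notation itself is not shown in the abstract; as a loose Python analogue (not B, and not from BSmart), here is the kind of precondition-guarded operation a B machine for a card applet would specify, with an invented Purse example:

```python
# Loose analogue of a B machine operation: an explicit invariant and
# precondition that every refinement and the generated code must respect.
class Purse:
    def __init__(self, balance: int = 0):
        assert balance >= 0  # INVARIANT: balance is never negative
        self.balance = balance

    def debit(self, amount: int) -> None:
        # PRE amount > 0 & amount <= balance  (B-style precondition)
        assert 0 < amount <= self.balance, "precondition violated"
        self.balance -= amount  # THEN balance := balance - amount

purse = Purse(100)
purse.debit(30)
print(purse.balance)  # 70
```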
Abstract:
This article presents considerations on the viability of reusing existing web-based e-Learning systems in an Interactive Digital TV environment, according to the Digital TV standard adopted in Brazil. Considering the popularity of the Moodle system in academic and corporate settings, it was chosen as the foundation for a survey of its properties, aiming to specify an Application Programming Interface (API) for convergence to t-Learning characteristics. This demands effort in interface design, because computer and TV concepts are totally different. This work presents studies concerning user interface design during two stages: surveying and detailing the functionalities of an e-Learning system, and adapting them for Interactive TV with regard to usability and Information Architecture concepts.
Abstract:
The main objective of this final degree project (TFG) was the creation of a distributed video management system using IP video-surveillance cameras. The proposal arose from the idea of offering simultaneous access, both online and offline, to the video sequences generated by a network of IP cameras in a given environment. The result is an extensible software infrastructure that offers the user a set of functionalities for network cameras while abstracting away internal details. The work comprises three clearly differentiated elements: integration of IP cameras, video storage, and the creation of the distributed video system. The goal of IP camera integration is to let a computer communicate with a network camera and obtain the image stream it transmits. This communication is established via HTTP (Hypertext Transfer Protocol), thanks to the programming interface (API) these devices provide. The second element, video storage, saves the IP camera's images into video files, enabling later playback. Finally, the distributed video system allows the simultaneous playback of multiple videos recorded by the IP camera network; videos recorded by other devices are also supported. The software developed has the potential to become a widely used free tool for IP cameras on UNIX systems, as well as the basis for future projects related to these devices.
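As a minimal sketch of the HTTP-based camera integration described above (the snapshot path follows a common vendor pattern, but the address, path, and credentials are assumptions; real cameras differ by vendor):

```python
# Grab one still frame from an IP camera over its HTTP API.
import requests

CAMERA_URL = "http://192.168.1.50/jpg/image.jpg"  # hypothetical camera address

response = requests.get(CAMERA_URL, auth=("user", "password"), timeout=5)
response.raise_for_status()

# Persist the frame; a recorder would loop and mux frames into video files.
with open("frame.jpg", "wb") as f:
    f.write(response.content)
print("Saved one frame from the camera.")
```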
Abstract:
This PhD thesis contributes to the problem of resource and service discovery in the context of the composable web. In the current web, mashup technologies allow developers to reuse services and contents to build new web applications. However, developers face a problem of information flood when searching for appropriate services or resources to combine. To help overcome this problem, a framework is defined for the discovery of services and resources, with three levels at which discovery is performed: the content, service, and agent levels. The content level involves the information available in web resources. The web follows the Representational State Transfer (REST) architectural style, in which resources are returned as representations from servers to clients. These representations usually employ the HyperText Markup Language (HTML), which, along with Cascading Style Sheets (CSS), describes the markup employed to render representations in a web browser. Although the use of Semantic Web standards such as the Resource Description Framework (RDF) makes this architecture suitable for automatic processes to use the information present in web resources, these standards are too often not employed, so automation must rely on processing HTML. This process, often referred to as Screen Scraping in the literature, is the content discovery of the proposed framework. At this level, discovery rules indicate how the different pieces of data in resources' representations are mapped onto semantic entities. By processing discovery rules on web resources, semantically described contents can be obtained from them. The service level involves the operations that can be performed on the web. The current web allows users to perform different tasks such as search, blogging, e-commerce, or social networking. To describe the possible services in RESTful architectures, a high-level, feature-oriented service methodology is proposed at this level. This lightweight description framework allows defining service discovery rules to identify operations in interactions with REST resources. Discovery is thus performed by applying discovery rules to contents discovered in REST interactions, in a novel process called service probing. Service discovery can also be performed by modelling services as contents, i.e., by retrieving Application Programming Interface (API) documentation and API listings in service registries such as ProgrammableWeb. For this, a unified model for composable components in Mashup-Driven Development (MDD) has been defined after the analysis of service repositories from the web. The agent level involves the orchestration of the discovery of services and contents. At this level, agent rules allow specifying behaviours for crawling and executing services, which results in the fulfilment of a high-level goal. Agent rules are plans that allow introspecting the discovered data and services from the web, and the knowledge present in service and content discovery rules, to anticipate the contents and services to be found on specific resources from the web. By defining plans, an agent can be configured to target specific resources. The discovery framework has been evaluated on different scenarios, each one covering different levels of the framework. The Contenidos a la Carta project deals with the mashing-up of news from electronic newspapers, and the framework was used there for the discovery and extraction of pieces of news from the web.
Similarly, the Resulta and VulneraNET projects cover the discovery of ideas and of security knowledge in the web, respectively. The service level is covered in the OMELETTE project, where mashup components such as services and widgets are discovered from component repositories on the web. The agent level is applied to the crawling of services and news in these scenarios, highlighting how the semantic description of rules and extracted data can provide complex behaviours and orchestrations of tasks in the web. The main contributions of the thesis are the unified framework for discovery, which allows configuring agents to perform automated tasks. Also, a scraping ontology has been defined for the construction of mappings for scraping web resources. A novel first-order logic rule induction algorithm is defined for the automated construction and maintenance of these mappings out of the visual information in web resources. Additionally, a common unified model for the discovery of services is defined, which allows sharing service descriptions. Future work comprises the further extension of service probing, resource ranking, the extension of the scraping ontology, extensions of the agent model, and constructing a base of discovery rules.
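As an invented illustration of what a content discovery rule at the content level might look like (the rule format, selectors, and URL below are assumptions, not the thesis's actual notation):

```python
# Apply a discovery rule (CSS selectors mapped to semantic fields) to a page.
# Requires the requests and beautifulsoup4 packages.
import requests
from bs4 import BeautifulSoup

# A "discovery rule": which fragments of the representation map to
# which semantic properties of a news item.
NEWS_RULE = {
    "entity": "NewsItem",
    "fields": {
        "headline": "article h1",
        "summary": "article p.lead",
    },
}

def discover(url: str, rule: dict) -> dict:
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    item = {"type": rule["entity"]}
    for field, selector in rule["fields"].items():
        node = soup.select_one(selector)
        item[field] = node.get_text(strip=True) if node else None
    return item

print(discover("http://example.com/news/1", NEWS_RULE))
```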
Abstract:
With the appearance of processors equipped with multiple cores, parallel programming, a concept that was by no means new and had been known for decades, received a new impulse, since it was believed it could overcome the technological ceiling that had been limiting performance for years. This impulse has been maintained to this day, driven by the need for ever more powerful systems and by decreasing manufacturing costs. The trend has led to the emergence of new software and languages with components aimed specifically at the field of parallel programming. This is the case of the Go language, developed by Google and released in 2009. Go is based on concurrency models that make it well suited to developments of a parallel nature. However, parallel programming is a complex and heterogeneous field, and developers are reluctant to use new tools, preferring those they already know and are familiar with. A good example is implementations of well-known languages that are oriented to parallel programming and follow the guidelines of a widely recognized and accepted standard. This is the case of the OpenMP standard, a flexible, portable, and scalable application programming interface (API) oriented to multiprocess parallel programming on multi-core architectures. The standard currently has implementations in C, C++, and Fortran. This project was born as an attempt to combine the two: an emerging language with interesting possibilities in the field of parallel programming, and a reputable and widespread standard with which programmers are familiar. The main objective is to develop a set of system libraries (covering compiler directives or pragmas, runtime libraries, and environment variables), supported by Go's own characteristics and concurrency models, that add functionality from the OpenMP standard. The idea is to add features that allow programming in Go using the syntax OpenMP provides for other languages such as Fortran and C/C++ (specifically, similar to the latter), thereby giving Go users tools to program parallel processing structures simply and transparently, in the same way they would using C/C++.
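The project itself targets Go; purely to illustrate the worksharing semantics that OpenMP's "parallel for" provides, here is a sketch that approximates it with a Python thread pool (an analogy, not the project's implementation):

```python
# Split loop iterations across a fixed pool of workers, roughly like:
#   #pragma omp parallel for num_threads(4)
# For CPU-bound work a process pool would be the closer Python analogue,
# since threads share one interpreter lock.
from concurrent.futures import ThreadPoolExecutor

def body(i: int) -> int:
    return i * i  # the loop body OpenMP would mark with a pragma

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(body, range(16)))

print(results)
```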
Abstract:
A nuclear waste stream is the complete flow of waste material from origin to treatment facility to final disposal. The objective of this study was to design and develop a Geographic Information Systems (GIS) module, using the Google Maps Application Programming Interface (API), for better visualization of nuclear waste streams, identifying and displaying various nuclear waste stream parameters. A proper display of parameters would enable managers at Department of Energy waste sites to visualize information for proper planning of waste transport. The study also developed an algorithm using quadratic Bézier curves to make the map more understandable and usable. Microsoft Visual Studio 2012 and Microsoft SQL Server 2012 were used for the implementation of the project. The study has shown that the combination of several technologies can successfully provide dynamic mapping functionality. Future work should explore various Google Maps API functionalities to further enhance the visualization of nuclear waste streams.
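The quadratic Bézier curve mentioned above can be stated compactly; as an illustrative Python sketch (the control-point choice for bowing a route arc is an assumption, not the study's algorithm):

```python
# Sample the quadratic Bezier B(t) = (1-t)^2*p0 + 2(1-t)t*p1 + t^2*p2
# for t in [0, 1], to draw a smooth arc between two map points.
def quadratic_bezier(p0, p1, p2, steps=20):
    points = []
    for i in range(steps + 1):
        t = i / steps
        x = (1 - t) ** 2 * p0[0] + 2 * (1 - t) * t * p1[0] + t ** 2 * p2[0]
        y = (1 - t) ** 2 * p0[1] + 2 * (1 - t) * t * p1[1] + t ** 2 * p2[1]
        points.append((x, y))
    return points

# Origin and destination of a waste shipment, with a raised control point
# so the arc bows away from the straight line between the endpoints.
origin, destination = (0.0, 0.0), (10.0, 0.0)
control = (5.0, 4.0)
print(quadratic_bezier(origin, destination, control, steps=4))
```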
Abstract:
The use of information and communication technologies in the medical field in Portugal has grown considerably over the last decades. This can be seen at several levels, such as the growing, large-scale implementation of information systems in hospitals and health centres, the development of applications to support the analysis of the main complementary diagnostic exams, electronic prescriptions, and electronic patient records, to name just a few examples. Demand for health professionals of various specialities has increased considerably in recent years. On the one hand, the population is ageing as average life expectancy rises. On the other hand, citizens are increasingly concerned with their own health and well-being, leading them to seek care more often, and from more specialities, than in the past. The exodus of the population from the interior of the country to the large coastal centres, combined with restrictive budget policies in the health sector, has accentuated the differences in delivering health care to the whole population of the country equitably and effectively. Shrinking hospital budgets and the pressure on hospitals to meet defined productivity targets at ever lower costs have also contributed to this. One of the contributions of information technologies to mitigating the distance between patients and health professionals is the implementation of remote consultation solutions, with video and voice, through telemedicine applications. There have been significant advances in teleconsultation and telemedicine, and success stories can be found in the use of these means to facilitate generalized access of the whole population to medical care. The applications used, however, are generally proprietary, require the installation of specific (often proprietary) software, and sometimes carry costs for the entities providing the service. For example, using a Skype connection for a teleconsultation requires the application to be installed on both computers (doctor's and patient's). This dissertation presents a telemedicine solution based on the Web Real-Time Communication (WebRTC) Application Programming Interface (API), which allows voice and image to be sent between two browsers using Web communication protocols. In addition to video and voice, two functionalities that are particularly interesting in a teleconsultation were integrated into the application: bidirectional file transfer (for example, a PDF file with the results of the patient's latest tests) and drawing on a "whiteboard", allowing the patient or the doctor to freely illustrate some aspect of the consultation at hand. The application uses only open-source software components and requires nothing more than a Web browser supporting WebRTC communication, such as Google Chrome or Firefox, on both computers. The aim is to facilitate access to telemedicine services by avoiding the installation and configuration of specific software, and to reduce costs through open-source solutions under the General Public License (GPL), free of charge. Some acceptance tests of the solution were carried out in a hospital environment. In general terms, the goal was to validate the operation of the WebRTC API, assess the acceptance of the implemented functionalities, and identify technical obstacles to deployment in the network of a hospital or health centre. Although some communication problems were identified, resulting mostly from the network configurations where the computers were installed, the overall results are quite promising, giving good prospects for deployment in a production environment.
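The dissertation's client runs in the browser; as a hedged Python-side sketch of the same WebRTC concepts (a peer connection plus a data channel like the one used for file transfer), using the aiortc library, with the signalling exchange between doctor and patient omitted:

```python
# Minimal WebRTC sketch with aiortc: create a peer connection, open a
# data channel for file chunks, and produce the SDP offer that a
# signalling server would forward to the other browser/peer.
import asyncio
from aiortc import RTCPeerConnection

async def main() -> None:
    pc = RTCPeerConnection()

    # A data channel like the one used for bidirectional file transfer.
    channel = pc.createDataChannel("files")

    @channel.on("open")
    def on_open() -> None:
        channel.send(b"first chunk of a shared PDF")  # illustrative payload

    offer = await pc.createOffer()
    await pc.setLocalDescription(offer)
    print(pc.localDescription.sdp[:80], "...")

    await pc.close()

asyncio.run(main())
```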