963 results for Android, Application Programming Interface, Fansubbing, Android Services, App Developing


Relevance:

100.00%

Publisher:

Abstract:

The CampusSource workshop took place from 10 to 12 October 2006 at the Westfälische Wilhelms-Universität (WWU) in Münster. The central topics of the event were the development of an engine for linking e-learning applications with systems from HIS GmbH, and the creation of teaching and learning content with reuse as the goal. The second chapter compiles presentations from the event in Adobe Flash format; viewing them requires Adobe Flash Player version 6 or later.

Relevance:

100.00%

Publisher:

Abstract:

This PhD thesis contributes to the problem of resource and service discovery in the context of the composable web. In the current web, mashup technologies allow developers to reuse services and contents to build new web applications. However, developers face a flood of information when searching for appropriate services or resources to combine. To help overcome this problem, a framework is defined for the discovery of services and resources, with discovery performed at three levels: content, service and agent.

The content level involves the information available in web resources. The web follows the Representational State Transfer (REST) architectural style, in which resources are returned as representations from servers to clients. These representations usually employ the HyperText Markup Language (HTML), which, along with Cascading Style Sheets (CSS), describes the markup employed to render representations in a web browser. Although the use of Semantic Web standards such as the Resource Description Framework (RDF) makes this architecture suitable for automatic processes to use the information present in web resources, these standards are too often not employed, so automation must rely on processing HTML. This process, often referred to as screen scraping in the literature, is content discovery in the proposed framework. At this level, discovery rules indicate how the different pieces of data in resources' representations are mapped onto semantic entities; by processing discovery rules on web resources, semantically described contents can be obtained from them.

The service level involves the operations that can be performed on the web, where users carry out tasks such as search, blogging, e-commerce or social networking. To describe the possible services in RESTful architectures, a high-level, feature-oriented service methodology is proposed at this level. This lightweight description framework allows service discovery rules to be defined that identify operations in interactions with REST resources. Discovery is thus performed by applying discovery rules to contents discovered in REST interactions, in a novel process called service probing. Service discovery can also be performed by modelling services as contents, i.e., by retrieving Application Programming Interface (API) documentation and API listings in service registries such as ProgrammableWeb. For this, a unified model for composable components in Mashup-Driven Development (MDD) has been defined after analysing service repositories from the web.

The agent level involves the orchestration of the discovery of services and contents. At this level, agent rules specify behaviours for crawling and executing services, resulting in the fulfilment of a high-level goal. Agent rules are plans that introspect the discovered data and services from the web, together with the knowledge present in service and content discovery rules, to anticipate the contents and services to be found on specific resources; by defining plans, an agent can be configured to target specific resources.

The discovery framework has been evaluated on different scenarios, each covering different levels of the framework. The Contenidos a la Carta project deals with mashing up news from electronic newspapers, and the framework was used for the discovery and extraction of pieces of news from the web.
Similarly, the Resulta and VulneraNET projects cover the discovery of ideas and of security knowledge on the web, respectively. The service level is covered in the OMELETTE project, where mashup components such as services and widgets are discovered from component repositories on the web. The agent level is applied to the crawling of services and news in these scenarios, highlighting how the semantic description of rules and extracted data can provide complex behaviours and orchestrations of tasks on the web.

The main contributions of the thesis are the unified discovery framework, which allows agents to be configured to perform automated tasks; a scraping ontology for the construction of mappings for scraping web resources; a novel first-order-logic rule induction algorithm for the automated construction and maintenance of these mappings from the visual information in web resources; and a common unified model for the discovery of services, which allows service descriptions to be shared. Future work comprises the further extension of service probing, resource ranking, extensions of the scraping ontology and the agent model, and constructing a base of discovery rules.
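A minimal sketch of the rule-driven content discovery described above, under assumptions: the thesis's scraping ontology and rule language are not reproduced in this abstract, so the selector-to-property mapping, vocabulary URI and example HTML below are illustrative only.

```python
# Illustrative only: a "discovery rule" mapping CSS selectors onto
# semantic properties of a hypothetical vocabulary.
from bs4 import BeautifulSoup  # pip install beautifulsoup4

NEWS_RULE = {
    'entity': 'http://example.org/ns#NewsItem',  # hypothetical vocabulary
    'properties': {
        'headline': 'article h1',
        'summary': 'article p.lead',
    },
}

def discover(html, rule):
    """Apply a content discovery rule to a resource representation."""
    soup = BeautifulSoup(html, 'html.parser')
    item = {'@type': rule['entity']}
    for prop, selector in rule['properties'].items():
        node = soup.select_one(selector)
        if node is not None:
            item[prop] = node.get_text(strip=True)
    return item

html = ('<article><h1>Budget approved</h1>'
        '<p class="lead">The council voted...</p></article>')
print(discover(html, NEWS_RULE))
# {'@type': 'http://example.org/ns#NewsItem',
#  'headline': 'Budget approved', 'summary': 'The council voted...'}
```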

Relevance:

100.00%

Publisher:

Abstract:

The technological context we live in is a reality, and the trend is for this to remain so in the future, ever more strongly. One example is the representation of places and entities on digital maps on the web. In Crocker's (2014) view, this trend is even more pronounced for mobile applications, as the many location-based applications show. In the sport and sport-management sector it was not always easy to develop applications using this kind of spatial representation: the technology was not easy and the know-how was not adequately qualified. However, vendors of geospatial technology have simplified the development of web applications in this area through application programming interfaces (APIs). As Svennerberg (2010) notes, these APIs act as the interface between a service provided by a company, such as Google Maps (2013), and a web or mobile application that uses those services. It was with this aim that we developed a web application, using the established methodologies in this field, such as the Zachman framework (2009) as originally adapted by Whitten and Bentley (2005), in which one of the modules is precisely the representation of sports venues using the Google Maps services. In addition, the whole application follows a Model-View-Controller (MVC) approach. To represent the sports facilities on a map, we created a MySQL database holding the longitude and latitude of each facility. The map itself was created in JavaScript, specifying its type (road map, satellite or street view) and options (zoom level, alignment, interface controls and positioning, among many others). The next step was passing the data to the frontend of the web application, by integrating PHP with external JavaScript code libraries created specifically for this purpose (such as MarkerManager). These features make it possible to georeference sports facilities of every type and kind in a municipality, region or country. The project also produced the know-how, background and critical mass to develop new features; their use on mobile devices is one of the possibilities already under development.
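As a rough illustration of the data path described above (the thesis uses PHP for this step, so this Python sketch is an analogue only; the connection parameters and the `facilities` table are hypothetical):

```python
# Analogue of the PHP step in the thesis: read each facility's
# coordinates from MySQL and serialize them for the JavaScript map.
# Credentials, database and table names are hypothetical.
import json
import mysql.connector  # pip install mysql-connector-python

def facilities_as_json():
    conn = mysql.connector.connect(host='localhost', user='sport',
                                   password='secret', database='sportgis')
    cur = conn.cursor(dictionary=True)
    cur.execute('SELECT name, latitude, longitude FROM facilities')
    markers = [{'title': row['name'],
                'position': {'lat': float(row['latitude']),
                             'lng': float(row['longitude'])}}
               for row in cur.fetchall()]
    conn.close()
    return json.dumps(markers)

# The frontend parses this JSON and creates one google.maps.Marker per
# entry (via MarkerManager when there are many markers).
```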

Relevance:

100.00%

Publisher:

Abstract:

This work is part of the broader topic "Support for Parallel and Distributed Computing in Java", also the subject of Daniel Barciela's thesis in the Informatics Engineering master's programme at the Instituto Superior de Engenharia do Porto. Its main objective is the definition/creation of the interface with the programmer, and it also covers how nodes communicate and cooperate to execute given tasks so as to reach a single global goal. Within this dissertation, a preliminary study was carried out on the theoretical models of parallel computing, and languages and frameworks that support this kind of computing were analysed. The main objective of this study was to examine how these models and languages let the programmer express parallel processing when developing applications. The dissertation resulted in the framework named Distributed Parallel Framework for Java (DPF4j), whose main goal is to give programmers support for developing parallel and distributed applications. The framework was developed in the Java language. This dissertation covers the programming interface and all communication between cooperating nodes of the DPF4j framework. Finally, the tests performed showed that DPF4j, although still a prototype, already outperforms other frameworks and languages with the same objectives.
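DPF4j's actual Java interface is not given in the abstract; purely as a conceptual analogue of the idea it describes — a programmer expresses independent tasks, and cooperating workers execute them toward one global result — here is a sketch using Python's standard executor API:

```python
# Conceptual analogue only (not DPF4j's API): independent tasks are
# submitted to a pool of workers, which cooperate to produce one result.
from concurrent.futures import ProcessPoolExecutor

def partial_sum(bounds):
    lo, hi = bounds
    return sum(range(lo, hi))

if __name__ == '__main__':
    chunks = [(i, i + 250_000) for i in range(0, 1_000_000, 250_000)]
    with ProcessPoolExecutor() as pool:  # worker processes play the "nodes"
        total = sum(pool.map(partial_sum, chunks))
    print(total == sum(range(1_000_000)))  # True
```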

Relevance:

100.00%

Publisher:

Abstract:

Constrained and unconstrained nonlinear optimization problems often appear in many engineering areas. In some of these cases derivative-based optimization methods cannot be used, because the objective function is unknown, too complex or non-smooth; direct search methods may then be the most suitable optimization methods. An Application Programming Interface (API) including some of these methods was implemented using Java technology. The API can be accessed by applications running on the computer where it is installed, or remotely through a LAN or the Internet using web services. From the engineering point of view, the information needed from the API is the solution to the provided problem. From the point of view of researchers in optimization methods, however, the solution alone is not enough: additional information about the iterative process is also useful, such as the number of iterations, the value of the solution at each iteration and the stopping criterion met. This paper presents the features added to the API to give users access to this iterative-process data.
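The paper's Java API is not reproduced here; as a hedged sketch of the kind of per-iteration reporting it describes, the following coordinate search (one simple direct search method) records each iterate, its objective value and the stopping reason:

```python
# Sketch of a direct search method that logs iterative-process data;
# illustrative of the idea, not the paper's actual API.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class IterationRecord:
    iteration: int
    point: List[float]
    value: float

@dataclass
class Result:
    solution: List[float]
    value: float
    iterations: int
    stop_reason: str
    history: List[IterationRecord] = field(default_factory=list)

def coordinate_search(f: Callable[[List[float]], float], x0: List[float],
                      step: float = 0.5, tol: float = 1e-6,
                      max_iter: int = 200) -> Result:
    """Derivative-free coordinate search that logs every iteration."""
    x, fx, history = list(x0), f(list(x0)), []
    for k in range(max_iter):
        improved = False
        for i in range(len(x)):
            for s in (step, -step):      # poll each coordinate direction
                trial = x[:]
                trial[i] += s
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        history.append(IterationRecord(k, x[:], fx))
        if not improved:
            step /= 2.0                  # poll failed: shrink the step
            if step < tol:
                return Result(x, fx, k + 1, 'step size below tolerance',
                              history)
    return Result(x, fx, max_iter, 'iteration limit reached', history)

res = coordinate_search(lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2,
                        [0.0, 0.0])
print(res.solution, res.value, res.stop_reason, len(res.history))
```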

Relevance:

100.00%

Publisher:

Abstract:

In nonlinear optimization, penalty and barrier methods are normally used to solve constrained problems. There are several penalty/barrier methods, and they are used in many areas, from engineering to economics, through biology, chemistry and physics, among others. In these areas, optimization problems often arise in which the functions involved (objective and constraints) are non-smooth and/or their derivatives are unknown. In this work some penalty/barrier functions are tested and compared, using derivative-free methods, namely direct search methods, in the inner minimization. This work is part of a bigger project involving the development of an Application Programming Interface that implements several optimization methods, to be used in applications that need to solve constrained and/or unconstrained nonlinear optimization problems. Besides its use in applied mathematics research, it is also intended for engineering software packages.
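As a concrete illustration (standard textbook forms, not the specific functions tested in the paper), a quadratic penalty and a logarithmic barrier for the problem of minimizing f(x) subject to g_i(x) <= 0 are:

```latex
% Standard forms, for illustration; the paper's tested functions may differ.
\Phi_\mu(x) = f(x) + \mu \sum_i \bigl(\max\{0,\, g_i(x)\}\bigr)^2,
\qquad
B_\mu(x) = f(x) - \mu \sum_i \ln\bigl(-g_i(x)\bigr).
```

Each unconstrained subproblem, minimizing Φ_μ or B_μ, is then solved by a derivative-free direct search method while μ is driven up (penalty) or down (barrier).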

Relevance:

100.00%

Publisher:

Abstract:

Dissertation presented at the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa to obtain the degree of Master in Informatics Engineering (Engenharia Informática).

Relevance:

100.00%

Publisher:

Abstract:

Dissertation to obtain the degree of Master in Informatics Engineering.

Relevance:

100.00%

Publisher:

Abstract:

Crisis-affected communities and international aid organizations are becoming increasingly digital as a consequence of the popularity of geotechnology. The humanitarian sector has changed in profound ways by adopting new technical approaches to obtain information from areas that are difficult to access geographically or politically. Since 2011, Turkey has been hosting a growing number of Syrian refugees along its southeastern region. The Turkish policy of hosting them in camps, and the obstacles local authorities put in the way of international aid groups' field expeditions, led such organizations to investigate and adopt other approaches to obtaining the information they need; in particular, they intensified their use of remote sensing. However, most studies have used very high-resolution satellite imagery (VHRSI). The study area is extensive and the temporal resolution of VHRSI is low, so relying on these sensors alone for the whole area is infeasible. This research investigates the potential of mid-resolution imagery (here, only Landsat) to obtain information from a region in crisis (here, southeastern Turkey) through a new web-based platform called Google Earth Engine (GEE). It also assesses the current reliability of GEE, since its Application Programming Interface (API) is still in beta. The findings show that the basic functions are trustworthy. The results indicate that Landsat can clearly detect spectral change only for the initial establishment of a settlement; subsequent modifications vary from case to case. Overall, Landsat showed strong limitations, but deserves further investigation and may be used, with restrictions, in support of VHRSI.
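A minimal sketch of this kind of probing with today's Earth Engine Python API (the study predates the current API; the collection ID, date windows, cloud threshold and camp coordinates below are illustrative assumptions, not values from the study):

```python
# Compare Landsat composites before/after a hypothetical camp appears.
import ee

ee.Initialize()  # requires prior `earthengine authenticate`

# Hypothetical camp site in southeastern Turkey (lon, lat), ~2 km radius.
site = ee.Geometry.Point([37.0, 36.8]).buffer(2000)

def composite(start, end):
    """Median Landsat 8 surface-reflectance composite, lightly cloud-filtered."""
    return (ee.ImageCollection('LANDSAT/LC08/C02/T1_L2')
            .filterBounds(site)
            .filterDate(start, end)
            .filter(ee.Filter.lt('CLOUD_COVER', 20))
            .median())

before = composite('2013-04-01', '2013-06-30')
after = composite('2014-04-01', '2014-06-30')

# An NDVI drop is a coarse proxy for new settlement on open land.
ndvi = lambda img: img.normalizedDifference(['SR_B5', 'SR_B4'])
change = ndvi(after).subtract(ndvi(before))

print(change.reduceRegion(reducer=ee.Reducer.mean(),
                          geometry=site, scale=30).getInfo())
```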

Relevance:

100.00%

Publisher:

Abstract:

Integrated master's dissertation in Civil Engineering.

Relevance:

100.00%

Publisher:

Abstract:

Integrated master's dissertation in Biomedical Engineering (specialization in Medical Informatics).

Relevance:

100.00%

Publisher:

Abstract:

The Microbe browser is a web server providing comparative microbial genomics data. It offers comprehensive, integrated data from GenBank, RefSeq, UniProt, InterPro, Gene Ontology and the Orthologous Matrix (OMA) database, displayed along with gene predictions from five software packages. The Microbe browser is updated daily from the source databases and includes all completely sequenced bacterial and archaeal genomes. The data are displayed in an easy-to-use, interactive website based on Ensembl software. The Microbe browser is available at http://microbe.vital-it.ch/. Programmatic access is available through the OMA application programming interface (API) at http://microbe.vital-it.ch/api.
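Programmatic access might look like the following (hedged: only the base URL comes from the abstract; the endpoint path and query parameter below are hypothetical placeholders, so consult http://microbe.vital-it.ch/api for the actual interface):

```python
# Hypothetical client sketch for the OMA API; endpoint and parameter
# names are assumptions, only the base URL is from the abstract.
import requests

BASE = 'http://microbe.vital-it.ch/api'

def fetch(path, **params):
    resp = requests.get(f'{BASE}/{path}', params=params, timeout=30)
    resp.raise_for_status()
    return resp.json()

# Hypothetical call: look up orthologs for a gene identifier.
print(fetch('orthologs', gene='some_gene_id'))
```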

Relevance:

100.00%

Publisher:

Abstract:

The use of domain-specific languages (DSLs) has been proposed as an approach to cost-effectively develop families of software systems in a restricted application domain. Domain-specific languages, in combination with the accumulated knowledge and experience of previous implementations, can in turn be used to generate new applications with unique sets of requirements. For this reason, DSLs are considered an important approach to software reuse. However, the toolset supporting a particular domain-specific language is also domain-specific and is, by definition, not reusable. Therefore, creating and maintaining a DSL requires additional resources that could even exceed the savings associated with using one. As a solution, different tool frameworks have been proposed to simplify and reduce the cost of developing DSLs. Developers of tool support for DSLs need to instantiate, customize or configure the framework for a particular DSL, and there are different approaches for this. One approach is to use an application programming interface (API) and extend the basic framework using an imperative programming language; an example of a tool based on this approach is Eclipse GEF. Another approach is to configure the framework using declarative languages that are independent of the underlying framework implementation. We believe this second approach can bring important benefits, as it shifts the focus to specifying what the tool should be like instead of writing a program specifying how the tool achieves this functionality. In this thesis we explore this second approach, using graph transformation as the basic mechanism to customize a domain-specific modeling (DSM) tool framework. The contributions of this thesis include a comparison of different approaches for defining, representing and interchanging software modeling languages and models, and a tool architecture for an open domain-specific modeling framework that efficiently integrates several model transformation components and visual editors. We also present several specific algorithms and tool components for the DSM framework, including an approach to graph queries based on region operators and the star operator, and an approach for reconciling models and diagrams after executing model transformation programs. We exemplify our approach with two case studies, MICAS and EFCO, in which we show how our experimental modeling tool framework has been used to define tool environments for domain-specific languages.
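The region and star operators themselves are not defined in the abstract; on the common reading of a star operator as the reflexive-transitive closure of an edge relation, a minimal sketch is:

```python
# Illustrative reading only: "star" as reflexive-transitive closure
# over a model's edge relation (all nodes reachable in zero or more steps).
def star(edges, start):
    """All nodes reachable from `start` via zero or more edges."""
    seen, frontier = {start}, [start]
    while frontier:
        node = frontier.pop()
        for src, dst in edges:
            if src == node and dst not in seen:
                seen.add(dst)
                frontier.append(dst)
    return seen

model = {('engine', 'cylinder'), ('cylinder', 'valve'), ('engine', 'pump')}
print(star(model, 'engine'))  # {'engine', 'cylinder', 'valve', 'pump'}
```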

Relevance:

100.00%

Publisher:

Abstract:

Internet of Things (IoT) technologies are developing rapidly, and several standards for interconnection protocols and platforms therefore exist. This heterogeneity of protocols and platforms has become a critical challenge for IoT system developers. To mitigate it, a few alliances and organizations have taken the initiative to build frameworks that help integrate application silos. Some of these frameworks focus only on a specific domain, such as home automation. However, the resource constraints of a large proportion of connected devices make it difficult to build an interoperable system using such frameworks. A general-purpose, lightweight interoperability framework that can be used across a range of devices is therefore required. To tackle this heterogeneity, this work introduces an embedded, distributed and lightweight service bus, the Lightweight IoT Service bus Architecture (LISA), which fits inside the network stack of a small real-time operating system for constrained nodes. LISA provides a uniform application programming interface for an IoT system on a range of devices with varying resource constraints. It hides platform and protocol variations underneath it, thus facilitating interoperability in IoT implementations. LISA is inspired by the Network on Terminal Architecture, a service-centric open architecture by Nokia Research Center. Unlike many other interoperability frameworks, LISA is designed specifically for resource-constrained nodes, and it provides the essential features of a service bus for easy service-oriented architecture implementation. The presented architecture utilizes an intermediate computing layer, a Fog layer, between the small nodes and the cloud, thereby facilitating the federation of constrained nodes into subnetworks. As a result of the modular and distributed design, the part of LISA running in the Fog layer handles the heavy lifting to assist the lightweight portion of LISA inside the resource-constrained nodes. Furthermore, LISA introduces a new networking paradigm, Node Centric Networking, to route messages across protocol boundaries and facilitate interoperability. This thesis presents a concept implementation of the architecture and lays a foundation for future extension towards a comprehensive interoperability framework for IoT.
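LISA's concrete interface is not shown in the abstract; as a hypothetical sketch of the service-bus pattern it describes — providers register named services once, and callers invoke them through one uniform interface while routing and transport stay hidden:

```python
# Hypothetical service-bus pattern, not LISA's actual API.
class ServiceBus:
    def __init__(self):
        self._services = {}

    def register(self, name, handler):
        """A node advertises a named service on the bus."""
        self._services[name] = handler

    def invoke(self, name, payload):
        """A caller uses the uniform interface; routing stays hidden."""
        if name not in self._services:
            raise LookupError(f'no provider for service {name!r}')
        return self._services[name](payload)

bus = ServiceBus()
bus.register('temperature.read', lambda _req: {'celsius': 21.5})
print(bus.invoke('temperature.read', {}))  # {'celsius': 21.5}
```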

Relevance:

100.00%

Publisher:

Abstract:

The use of the Internet as a teaching tool has become increasingly frequent. The recent popularization of the Internet has enabled the development of web-based teaching-and-learning environments. The main resources explored for educational purposes are hypertext and hypermedia, which give instructors who intend to use the WWW a wide range of elements. This work is part of the development of the AdaptWeb environment (an adaptive web-based teaching and learning environment), which aims at building a distance-education environment. The environment's architecture comprises four modules, among them the data storage module, which stores all data produced in the authoring phase using XML (Extensible Markup Language). In the authoring stage, all data about the course to be made available are entered and stored temporarily in an in-memory matrix representation. The input of the data storage module is this matrix representation, which then serves as the basis for generating the XML files used in the environment's remaining stages. To validate the XML files, DTDs (Document Type Definitions) were developed, and an XML document analyser was implemented, using the DOM (Document Object Model) API (Application Programming Interface), to perform the syntactic validation of these documents. To convert the in-memory matrix representation, an algorithm was specified and implemented that operates in conformance with the specified DTDs and with the syntax of the XML language.
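A minimal sketch of the validation step described above, using lxml as a stand-in validating parser (AdaptWeb's own analyser and DTDs are not shown in the abstract; the file names are illustrative):

```python
# Syntactic check (well-formedness) plus DTD validation, as a sketch
# of the step the abstract describes; file names are hypothetical.
from lxml import etree  # pip install lxml

def validate(xml_path, dtd_path):
    try:
        doc = etree.parse(xml_path)       # raises on XML syntax errors
    except etree.XMLSyntaxError as err:
        print(f'{xml_path}: not well-formed: {err}')
        return False
    dtd = etree.DTD(open(dtd_path))       # structural rules from the DTD
    if not dtd.validate(doc):
        print(f'{xml_path}: invalid: {dtd.error_log.filter_from_errors()}')
        return False
    return True

print(validate('course.xml', 'course.dtd'))
```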