995 results for Interface de programas aplicativos (Software)
Abstract:
This project consists of implementing Free Software at Child Development Center (Centro de Desarrollo Infantil, CENDI) No. 4, an early-childhood education center for children from 0 to 6 years old that runs old computers with limited resources. The aim is to install programs suited to the children's ages and to let the teaching staff make good use of the computers, all at the lowest possible cost, since the center's budget is very limited.
Abstract:
This work describes the design principles and the essential components of a hypothetical computer program intended to support the self-help process, which can also be used as a tool for personal development and motivation. The author first reviews existing methods, from the success of twentieth-century self-help books to the expansion of interactivity driven by the development of information technologies. This review shows how poorly the new technologies have taken hold as popular self-help instruments, and argues for the creation and use of flexible, general-purpose software as a means of psychological support.
Abstract:
BACKGROUND. Bioinformatics is commonly presented as a broad assortment of available web resources. Although a diversity of services is positive in general, the proliferation of tools, their dispersion and their heterogeneity complicate the integrated exploitation of this data-processing capacity. RESULTS. To facilitate the construction of software clients and make integrated use of this variety of tools, we present a modular programmatic application interface (MAPI) that provides the functionality needed for a uniform representation of Web Service metadata descriptors, including the management of those descriptors and the invocation protocols of the services they represent. This document describes the main functionality of the framework and how it can be used to facilitate the deployment of new software under a unified structure of bioinformatics Web Services. A notable feature of MAPI is the modular organization of the functionality into different modules associated with specific tasks. This means that only the modules needed by a client have to be installed, and that the module functionality can be extended without rewriting the software client. CONCLUSIONS. The potential utility and versatility of the software library have been demonstrated by the implementation of several currently available clients that cover different aspects of integrated data processing, ranging from service discovery to service invocation, with advanced features such as workflow composition and asynchronous calls to multiple types of Web Services, including those registered in repositories (e.g. GRID-based, SOAP, BioMOBY, R-bioconductor, and others).
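As a rough illustration of the modular-client idea described in this abstract (only the modules a client needs are installed, and functionality can be extended without rewriting the client), a minimal sketch follows. It is written in Python for brevity, and all names (ModularClient, DiscoveryModule, InvocationModule) are hypothetical; they are not part of MAPI's actual API.

class DiscoveryModule:
    """Resolves a service name to a (pretend) metadata descriptor."""
    def handle(self, request):
        return {"service": request["service"],
                "endpoint": "https://example.org/" + request["service"]}

class InvocationModule:
    """Invokes a service described by a descriptor produced by discovery."""
    def handle(self, request):
        return "called %s with %s" % (request["endpoint"], request.get("params", {}))

class ModularClient:
    """Holds only the modules this particular client chose to install."""
    def __init__(self):
        self._modules = {}

    def register(self, name, module):
        self._modules[name] = module   # adding a module never touches existing client code

    def run(self, name, request):
        return self._modules[name].handle(request)

client = ModularClient()
client.register("discovery", DiscoveryModule())
client.register("invocation", InvocationModule())
descriptor = client.run("discovery", {"service": "blast"})
print(client.run("invocation", dict(descriptor, params={"seq": "ACGT"})))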
Abstract:
Introduction: To guarantee the success of a virtual library it is essential that all users can access all the library's resources regardless of their location. Achieving this goal in the Andalusian Public Health System has been particularly difficult because the system is made up of 10 research centers and 95,000 health-care professionals. Aims: Since the BV-SSPA started three years ago, one of its major aims has been to provide remote access to all its resources in this complex scenario, as well as to facilitate access to the virtual library for both professionals and citizens. IP-based access was guaranteed because health-care professionals could reach everything from their workplaces through the intranet, but access was restricted when they were elsewhere. The BV-SSPA solved this problem by installing a federated authentication and authorization system called PAPI and a PAPI rewriting proxy. After three years the BV-SSPA has met a new challenge: adapting its federated access system to Metalib and SFX; specifically, the access management module PDS had to be connected with the existing PAPI system. This new challenge came along with the introduction of a new metasearch tool and link resolver. Material and Methods: Initially there were three independent systems: a Metalib and SFX PDS module; a federated authentication and authorization system, PAPI; and a PAPI rewriting proxy. The chosen solution relied on reusing the existing software. To achieve this, a PHP connector between these applications was developed and several modules in the PDS configuration were modified. In addition, simplified access to Metalib was provided by using Xerxes and integrating it into a Drupal website. Results: Thanks to this connector, all BV-SSPA users can access the new metasearch tool remotely without changing the way they used to log in and without having to remember a new username and password. Furthermore, thanks to Xerxes, Metalib can be used from a simple interface without leaving the BV-SSPA website for its native interface.
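The abstract mentions a PHP connector that maps the existing PAPI authentication onto the PDS access-management module. The sketch below only illustrates the general shape of such a bridge, translating attributes asserted at login into the user record a link resolver expects; it is written in Python rather than PHP, and every attribute and field name is hypothetical, not taken from the BV-SSPA implementation.

def papi_to_pds(papi_attributes):
    """Map attributes released at federated login to the fields the
    link resolver's access-management module expects."""
    required = ("uid", "affiliation")
    missing = [k for k in required if k not in papi_attributes]
    if missing:
        raise ValueError("assertion is missing attributes: %s" % missing)
    return {
        "id": papi_attributes["uid"],
        "group": papi_attributes["affiliation"],   # drives resource entitlements
        "display_name": papi_attributes.get("cn", papi_attributes["uid"]),
    }

print(papi_to_pds({"uid": "jdoe", "affiliation": "staff", "cn": "J. Doe"}))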
Abstract:
This project arises from the need to create an application that meets the management and control needs of retail businesses in the consumer electronics sector. Although these establishments are highly specialized and give excellent customer service, they usually have technical shortcomings when it comes to managing and controlling their business. Many of them lack software adapted to their needs, and some have none at all because of the high cost involved. The software has two objectives: on the one hand, to meet the needs of this sector, and on the other, to keep the cost of acquiring and deploying it affordable. To achieve the latter, free software will be used.
Abstract:
The complexity of operationalizing the method for dimensioning nursing staff, given the many variables involved in identifying the workload, the professionals' effective working time and the Technical Safety Index (IST), made it necessary to develop a software application called Computerized Dimensioning of Nursing Professionals (Dimensionamento Informatizado de Profissionais de Enfermagem, DIPE). This descriptive exploratory study aimed to evaluate the technical quality and functional performance of DIPE. Eighteen evaluators took part in the study: ten nurse lecturers or nurse managers of hospital health units and eight specialists in health informatics. The software evaluation was based on the NBR ISO/IEC 9126-1 standard, considering the characteristics of functionality, reliability, usability, efficiency and maintainability. The evaluation yielded positive results, with the evaluators agreeing on all the characteristics assessed. The suggestions reported will be important for proposing improvements and further refining DIPE.
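To make the operational difficulty concrete, the following is a deliberately simplified staffing estimate in Python. It is not the DIPE method itself; the patient categories, care hours, effective working time and IST value are invented for illustration only.

def estimate_staff(patients_by_category, care_hours, effective_daily_hours, ist):
    """Estimated number of professionals for one 24-hour period."""
    daily_workload = sum(n * care_hours[cat] for cat, n in patients_by_category.items())
    return daily_workload * (1 + ist) / effective_daily_hours   # IST covers absences

demand = {"minimal": 10, "intermediate": 6, "intensive": 2}       # patients per category
hours = {"minimal": 4.0, "intermediate": 6.0, "intensive": 18.0}  # care hours per patient per day
print(round(estimate_staff(demand, hours, effective_daily_hours=6.0, ist=0.15), 1))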
Abstract:
Background: The analysis and usage of biological data is hindered by the spread of information across multiple repositories and the difficulties posed by different nomenclature systems and storage formats. In particular, there is an important need for data unification in the study and use of protein-protein interactions. Without good integration strategies, it is difficult to analyze the whole set of available data and its properties. Results: We introduce BIANA (Biologic Interactions and Network Analysis), a tool for biological information integration and network management. BIANA is a Python framework designed to achieve two major goals: i) the integration of multiple sources of biological information, including biological entities and their relationships, and ii) the management of biological information as a network where entities are nodes and relationships are edges. Moreover, BIANA uses properties of proteins and genes to infer latent biomolecular relationships by transferring edges to entities sharing similar properties. BIANA is also provided as a plugin for Cytoscape, which allows users to visualize and interactively manage the data. A web interface to BIANA providing basic functionalities is also available. The software can be downloaded under the GNU GPL license from http://sbi.imim.es/web/BIANA.php. Conclusions: BIANA's approach to data unification solves many of the nomenclature issues common to systems dealing with biological data. BIANA can easily be extended to handle new specific data repositories and new specific data types. The unification protocol allows BIANA to be a flexible tool suitable for different user requirements: non-expert users can use a suggested unification protocol while expert users can define their own specific unification rules.
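A toy sketch of the unification-and-network idea described above: records from different repositories that share an identifier are merged into one node, and reported interactions become edges between the merged nodes. This only illustrates the concept; it is not BIANA's actual Python API, and the source names and records are for illustration only.

records = [
    {"source": "db_A", "id": "P68871", "name": "HBB_HUMAN"},
    {"source": "db_B", "id": "P68871", "name": "hemoglobin beta"},
    {"source": "db_A", "id": "P69905", "name": "HBA_HUMAN"},
]
interactions = [("P68871", "P69905")]       # reported between repository-specific entries

# Unify: one node per shared identifier, keeping every source-specific name.
nodes = {}
for rec in records:
    nodes.setdefault(rec["id"], set()).add(rec["source"] + ":" + rec["name"])

# Edges now connect unified nodes rather than repository-specific entries.
edges = [(a, b) for a, b in interactions if a in nodes and b in nodes]

print(nodes)
print(edges)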
Abstract:
SUMMARY: We present a tool designed for the visualization of large-scale genetic and genomic data, exemplified by results from genome-wide association studies. This software provides an integrated framework to facilitate the interpretation of SNP association studies in their genomic context. Gene annotations can be retrieved from Ensembl, linkage disequilibrium data downloaded from HapMap and custom data imported in BED or WIG format. AssociationViewer integrates functionalities that enable the aggregation or intersection of data tracks. It implements an efficient cache system and allows the display of several very large-scale genomic datasets. AVAILABILITY: The Java code for AssociationViewer is distributed under the GNU General Public Licence and has been tested on Microsoft Windows XP, Mac OS X and GNU/Linux operating systems. It is available from the SourceForge repository, which also includes Java Web Start, documentation and example data files.
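The BED format mentioned above is a plain-text track of genomic intervals: chromosome, 0-based start, end, then optional fields such as a name and score. AssociationViewer itself is a Java tool; the short Python sketch below only illustrates what reading such a track involves, with made-up example lines.

def read_bed(lines):
    for line in lines:
        if not line.strip() or line.startswith(("track", "browser", "#")):
            continue                                   # skip headers and comments
        fields = line.rstrip("\n").split("\t")
        chrom, start, end = fields[0], int(fields[1]), int(fields[2])
        name = fields[3] if len(fields) > 3 else None  # optional feature name
        yield {"chrom": chrom, "start": start, "end": end, "name": name}

example = ["track name=gwas_hits", "chr1\t1000\t1500\trs12345", "chr2\t2000\t2600"]
for interval in read_bed(example):
    print(interval)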
Abstract:
The choice of a library management system is often affected by a series of social, economic and political conditions that make the chosen option less than ideal for the library's needs, characteristics and functions. Free software is becoming one of the most frequently chosen solutions because of the freedoms it gives to copy, modify and distribute the software, the freedom from licence fees, and the possibilities of interoperating with other applications. This trend toward free software in libraries is also reflected in library and information science programs, where different courses cover automation systems, repository management software and even the GNU/Linux operating system, among others. This combination of the centers' needs and the trend toward free software is what a group of lecturers from the Facultat de Biblioteconomia i Documentació (Universitat de Barcelona) and students, members of the Grup de Treball sobre Programari Lliure per als Professionals de la Informació (Cobdc), wanted to bring to the professional community by creating a virtual laboratory for the use of free software applicable to libraries.
Abstract:
The consolidation test requires the use of a consolidometer. Until now, this equipment was not produced or sold in Brazil. The non-automated models available for import, despite being proposed as low-cost, are still rudimentary and require continuous calibration of the pressure levels while the test is being run. The need for a technician's exclusive attention and intervention throughout the test, together with the poor data collection in these models, are the main factors that have prevented this test from becoming established in Brazilian soil science. As an alternative to these problems, the objectives of this work were to develop and automate a consolidometer based on a Programmable Logic Controller (PLC) with a human-machine interface (HMI). The equipment consists of a metal cabinet housing sets of pneumatic devices, digital electronics, and force and position actuators. The operation of each device, alone or in combination, is managed by software written in the ladder programming language which, running on a PLC with a built-in HMI, makes it possible to store instructions and implement functions. The interface between the PC and the consolidometer is handled by the CA-Linker software, v 1.0, designed specifically for the equipment. The use of the PLC with a built-in HMI allowed the consolidometer to be developed and automated. The performance and efficiency of the set of devices (pneumatic, digital-electronic, and force and pressure actuators) were confirmed by the excellent results for deformation and pressure as a function of time and, above all, by the behavior of the compression curve generated in the compression tests.
Abstract:
Machine Learning for geospatial data: algorithms, software tools and case studies. The thesis is devoted to the analysis, modeling and visualisation of spatial environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence mainly concerned with the development of techniques and algorithms that allow computers to learn from data. In this thesis, machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning? In short, most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modeling tools. They can find solutions for classification, regression and probability density modeling problems in high-dimensional geo-feature spaces, composed of geographical space and additional relevant spatially referenced features. They are well suited to be implemented as predictive engines in decision support systems, for the purposes of environmental data mining including pattern recognition, modeling and prediction as well as automatic data mapping. Their efficiency is competitive with geostatistical models in low-dimensional geographical spaces, but they are indispensable in high-dimensional geo-feature spaces. The most important and popular machine learning algorithms and models of interest for geo- and environmental sciences are presented in detail, from the theoretical description of the concepts to the software implementation. The main algorithms and models considered are the multi-layer perceptron (a workhorse of machine learning), general regression neural networks, probabilistic neural networks, self-organising (Kohonen) maps, Gaussian mixture models, radial basis function networks and mixture density networks. This set of models covers machine learning tasks such as classification, regression and density estimation. Exploratory data analysis (EDA) is the initial and very important part of data analysis.
In this thesis the concepts of exploratory spatial data analysis (ESDA) are considered using both the traditional geostatistical approach, experimental variography, and machine learning. Experimental variography is a basic tool for the geostatistical analysis of anisotropic spatial correlations which helps to detect the presence of spatial patterns, at least those described by two-point statistics. A machine learning approach for ESDA is presented by applying the k-nearest neighbours (k-NN) method, which is simple and has very good interpretation and visualisation properties. An important part of the thesis deals with a current hot topic, the automatic mapping of geospatial data. The general regression neural network (GRNN) is proposed as an efficient model for this task. The performance of the GRNN model is demonstrated on Spatial Interpolation Comparison (SIC) 2004 data, where it significantly outperformed all other approaches, especially under emergency conditions. The thesis consists of four chapters: theory, applications, software tools and how-to-do-it examples. An important part of the work is a collection of software tools, Machine Learning Office, developed over the last 15 years and used both in many teaching courses, including international workshops in China, France, Italy, Ireland and Switzerland, and in fundamental and applied research projects. The case studies considered cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil and water pollution by radionuclides and heavy metals, classification of soil types and hydro-geological units, decision-oriented mapping with uncertainties, and natural hazard (landslides, avalanches) assessment and susceptibility mapping. Complementary tools for exploratory data analysis and visualisation were developed as well, with care taken to provide a user-friendly, easy-to-use interface.
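The GRNN highlighted above for automatic mapping is, at its core, Nadaraya-Watson kernel regression: the prediction at a point is a Gaussian-weighted average of the training values, with a single kernel width to tune. A compact Python sketch under that reading follows; the coordinates, values and kernel width are illustrative, not data from the thesis.

import numpy as np

def grnn_predict(x_train, y_train, x_query, sigma):
    """Gaussian-kernel weighted average of training values at x_query."""
    d2 = np.sum((x_train - x_query) ** 2, axis=1)   # squared distances to training points
    w = np.exp(-d2 / (2.0 * sigma ** 2))            # kernel weights
    return float(np.dot(w, y_train) / np.sum(w))

coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])   # sample locations
values = np.array([1.0, 2.0, 2.0, 3.0])                               # measured field values
print(grnn_predict(coords, values, np.array([0.5, 0.5]), sigma=0.5))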
Abstract:
This document sums up a project aimed at building a new web interface to the Apertium machine translation platform, including pre-editing and post-editing environments. It contains a description of the work accomplished on this project, as well as an overview of possible evolutions.
Abstract:
This master's thesis studies automated testing and how to make user interface testing easier on the Symbian operating system. The thesis introduces Symbian and the challenges encountered in Symbian application development. It also covers testing strategies and techniques as well as automated testing. Finally, a tool is presented that makes it easier to create test cases for functional and system testing. Graphical user interfaces pose unique challenges for software testing. They are often built from complex components and are continuously redesigned during software development. Capture-and-replay tools are commonly used for testing graphical user interfaces. Designing and implementing test cases for user interface testing requires considerable effort. Since graphical user interfaces make up a large part of the code, a great deal of resources could be saved by making test case creation easier. The project implemented in the practical part aims at this by making the creation of test scripts visual: the scripting language of the tests themselves does not need to be understood, and the tests are also easier to grasp.
Abstract:
The configuration of a 3G radio network is managed by adjusting parameters stored in the radio network database. In the management software, thousands of radio network parameters appear as user interface components, which are continuously added, changed and removed over the software's life cycle according to customer needs. For the software developer, the process of adding parameters is laborious and mechanical. The goal of this master's thesis was to develop a code generator that automatically produces all the code created in that implementation process from the specifications that are already available today. The generator developed in this work speeds up the programmer's work by eliminating one time-consuming, mechanical work phase. As a result, the code base becomes more uniform and the company saves on software development costs, as the programmer's skill can be focused on more demanding tasks.
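A hypothetical sketch of the kind of generation step the thesis describes: given a parameter specification that already exists (name and allowed range), emit the repetitive code a developer would otherwise write by hand. The specification fields and the generated snippet are purely illustrative; the abstract does not describe the actual generator or its inputs in this detail.

SPEC = [
    {"name": "txPowerMax", "min": 0, "max": 50},
    {"name": "cellRange", "min": 1, "max": 100},
]

TEMPLATE = '''def set_{name}(db, value):
    """Auto-generated setter for radio-network parameter '{name}'."""
    if not ({min} <= value <= {max}):
        raise ValueError("{name} must be between {min} and {max}")
    db["{name}"] = value
'''

generated = "\n".join(TEMPLATE.format(**param) for param in SPEC)
print(generated)              # in a real pipeline this would be written to source files

namespace = {}
exec(generated, namespace)    # demo only: load the generated setters
db = {}
namespace["set_txPowerMax"](db, 43)
print(db)                     # {'txPowerMax': 43}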
Abstract:
Today's business environment has become increasingly unpredictable and fast-changing because of global competition. This new environment requires companies to organize their control differently, e.g. through logistic process thinking. Logistic process thinking in software engineering applies the principles of the production process to immaterial products. Processes must be optimized so that every phase adds value for the customer and lead times can be cut shorter to meet the new customer requirements. The purpose of this thesis is to examine and optimize the testing processes of software engineering, concentrating on module testing, functional testing and their interface. The concept of logistic process thinking is introduced through the production process, the value-added model and process management. Testing theory from the literature is also presented, concentrating on module testing and functional testing. The testing processes of the Case Company are presented together with the project models in which they are implemented. The real-life practices in module testing, functional testing and their interface are examined through interviews. These practices are analyzed against the processes and the testing theory, and from this analysis ideas for optimizing the testing process are introduced. The project world of the Case Company is also introduced together with two example testing projects in different life-cycle phases. The examples give a view of how much of the project effort is put into different types of testing.