9 results for tecnologie web rest restful database

in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland


Relevance: 100.00%
Publisher:
Abstract:

With the growth of new technologies, the use of online tools has become part of everyday life. The impact on researchers is even greater, as the data obtained from various experiments needs to be analyzed, and knowledge of programming has become mandatory even for pure biologists. Hence, VTT developed a new tool, R Executables (REX), a web application designed to provide a graphical interface for biological data analysis functions such as image analysis, gene expression data analysis, plotting, and disease and control studies, which employs R functions to produce the results. REX offers biologists an interactive application in which they can directly enter values and run the required analysis with a single click. The program processes the given data in the background and returns results rapidly. Due to the growth of data and the load on the server, the interface has developed problems concerning time consumption, a poor GUI, data storage issues, security, a minimal interactive user experience, and crashes with large amounts of data. This thesis describes the methods by which these problems were resolved to make REX a better application for the future. The old REX was developed using Python and Django; the new implementation uses Vaadin, a Java framework for developing web applications whose programming model is essentially Java extended with rich UI components. Vaadin provides better security, better speed, and a good, interactive interface. In this thesis, a subset of REX functionality, including IST bulk plotting and image segmentation, was selected and implemented using Vaadin. I programmed 662 lines of code, with Vaadin as the front-end handler while the R language was used for back-end data retrieval, computation and plotting. The application is structured so that further functionality can be migrated with ease from the old REX. Future development is focused on including high-throughput screening functions along with gene expression database handling.
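As a rough illustration of the front-end/back-end split described in the abstract, the following minimal sketch (not the thesis code; the class name, field label and R script name are invented, and the Vaadin 8 component API is assumed) shows a Vaadin UI that collects user input and delegates the computation to an R script through Rscript.

import com.vaadin.server.VaadinRequest;
import com.vaadin.ui.*;

import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class RexSketchUI extends UI {
    @Override
    protected void init(VaadinRequest request) {
        TextField genes = new TextField("Gene identifiers (comma-separated)");
        Label output = new Label();
        Button run = new Button("Run analysis", click -> {
            try {
                // Hand the user's input to R and show whatever the script prints.
                output.setValue(runRScript("plot_ist.R", genes.getValue()));
            } catch (IOException | InterruptedException e) {
                output.setValue("R execution failed: " + e.getMessage());
            }
        });
        setContent(new VerticalLayout(genes, run, output));
    }

    // Runs an R script in a background process and returns its standard output.
    private String runRScript(String script, String arg)
            throws IOException, InterruptedException {
        Process p = new ProcessBuilder("Rscript", script, arg)
                .redirectErrorStream(true)
                .start();
        String result = new String(p.getInputStream().readAllBytes(),
                StandardCharsets.UTF_8);
        p.waitFor();
        return result;
    }
}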

Relevance: 100.00%
Publisher:
Abstract:

This master's thesis describes the implementation of management software for an electronic operations management system intended for use by enterprise networks. Each part of the operations management system is described separately, and a corresponding module has been implemented to meet the requirements of current standards and specifications. The standards and specifications covered in this work are ISO 9001:2000 (quality management standard), ISO 14001 (environmental standard) and OHSAS 18001 (occupational health and safety specification). The management software can maintain the basic elements of the operations management system: process descriptions, documents, reports and metrics. The application is implemented with servlet technology for a web environment. SQL is used as the database solution, which integrates well with Java. The user interface is a web browser, which eases deployment in companies because no separate installations are needed on users' machines. The application is intended to be installed on the company intranet.
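A minimal sketch of the servlet-plus-SQL pattern the abstract describes, not the thesis implementation: the JDBC URL, credentials and the process_description table with its columns are invented for illustration.

import java.io.IOException;
import java.sql.*;
import javax.servlet.ServletException;
import javax.servlet.http.*;

// Serves a listing of the process descriptions stored in the database to the browser.
public class ProcessDescriptionServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        resp.setContentType("text/html;charset=UTF-8");
        try (Connection con = DriverManager.getConnection(
                     "jdbc:mysql://intranet-db/oms", "oms", "secret");
             PreparedStatement ps = con.prepareStatement(
                     "SELECT name, revision FROM process_description");
             ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                resp.getWriter().println(
                        rs.getString("name") + " (rev " + rs.getInt("revision") + ")<br/>");
            }
        } catch (SQLException e) {
            throw new ServletException(e);
        }
    }
}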

Relevance: 50.00%
Publisher:
Abstract:

A web service is a software system that provides a machine-processable interface to other machines over the network using different Internet protocols. Web services are increasingly used in industry to automate tasks and to offer services to a wider audience. The REST architectural style aims at producing scalable and extensible web services using technologies that play well with the existing tools and infrastructure of the web. It provides a uniform set of operations that can be used to invoke a CRUD interface (create, retrieve, update and delete) of a web service. The stateless behavior of the service interface requires that every request to a resource is independent of the previous ones, which facilitates scalability. Automated systems, e.g., hotel reservation systems, provide advanced scenarios for stateful services in which a certain sequence of requests must be followed in order to fulfill the service goals. Designing and developing such services for advanced scenarios under REST constraints requires rigorous approaches capable of creating web services that can be trusted for their behavior. Systems that can be trusted for their behavior can be termed dependable systems. This thesis presents an integrated design, analysis and validation approach that helps the service developer to create dependable and stateful REST web services. The main contribution of this thesis is a novel model-driven methodology to design behavioral REST web service interfaces and their compositions. The behavioral interfaces provide information on what methods can be invoked on a service and on the pre- and post-conditions of these methods. The methodology uses the Unified Modeling Language (UML) as the modeling language, which has a wide user base and mature tools that are continuously evolving. We use UML class diagrams and UML state machine diagrams with additional design constraints to provide resource and behavioral models, respectively, for designing REST web service interfaces. These service design models serve as a specification document, and the information presented in them has manifold applications. The design models also contain information about the time and domain requirements of the service, which helps in requirement traceability, an important part of our approach. Requirement traceability helps in capturing faults in the design models and in other elements of the software development environment by tracing the unfulfilled requirements of the service back and forth. Information about service actors is also included in the design models; it is required for authenticating service requests by authorized actors, since not all types of users have access to all the resources. In addition, by following our design approach, the service developer can ensure that the designed web service interfaces are REST compliant. The second contribution of this thesis is the consistency analysis of the behavioral REST interfaces. To overcome inconsistency problems and design errors in our service models, we use semantic technologies. The REST interfaces are represented in the web ontology language OWL 2, so they can be part of the semantic web. These interfaces are checked with OWL 2 reasoners for unsatisfiable concepts, which would result in implementations that fail. This work is fully automated thanks to the implemented translation tool and the existing OWL 2 reasoners.
The third contribution of this thesis is the verification and validation of REST web services. We use model checking techniques with the UPPAAL model checker for this purpose. Timed automata are generated from the UML-based service design models with our transformation tool and are verified for basic characteristics such as deadlock freedom, liveness, reachability and safety. The implementation of a web service is tested using a black-box testing approach. Test cases are generated from the UPPAAL timed automata, and using the online testing tool UPPAAL TRON the service implementation is validated at runtime against its specification. Requirement traceability is also addressed in our validation approach, with which we can see which service goals are met and trace the unfulfilled service goals back to faults in the design models. A final contribution of the thesis is the implementation of behavioral REST interfaces and service monitors from the service design models. The partial code generation tool creates code skeletons of REST web services with method pre- and post-conditions. The preconditions constrain the user to invoke the stateful REST service under the right conditions, and the postconditions constrain the service developer to implement the right functionality. The details of the methods can be inserted manually by the developer as required. We do not target complete automation because we focus only on the interface aspects of the web service. The applicability of the approach is demonstrated with a pedagogical example of a hotel room booking service and a relatively complex worked example of a holiday booking service taken from an industrial context. The former presents a simple explanation of the approach, and the latter shows how stateful and timed web services offering complex scenarios and involving other web services can be constructed using our approach.
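To make the idea of a behavioral interface skeleton concrete, here is a hand-written approximation (not output of the thesis tool) of what such a skeleton could look like in Java with JAX-RS: the precondition restricts when the stateful transition may be invoked, and the postcondition states what the developer-supplied body must guarantee. The resource path, states and in-memory store are illustrative.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javax.ws.rs.PUT;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.core.Response;

@Path("/bookings")
public class BookingResource {

    // Hypothetical in-memory state; a real service would use persistent storage.
    private static final Map<String, Booking> STORE = new ConcurrentHashMap<>();

    @PUT
    @Path("/{id}/payment")
    public Response pay(@PathParam("id") String id) {
        Booking booking = STORE.get(id);
        if (booking == null || booking.state != State.CONFIRMED) {   // precondition
            return Response.status(Response.Status.CONFLICT).build();
        }
        booking.state = State.PAID;        // method body to be filled in by the developer
        assert booking.state == State.PAID : "postcondition violated";
        return Response.ok().build();
    }

    enum State { TENTATIVE, CONFIRMED, PAID }

    static class Booking {
        State state = State.TENTATIVE;
    }
}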

Relevance: 30.00%
Publisher:
Abstract:

Today, fast availability and good manageability of information are key business issues, and for this reason existing information systems are being integrated. Integration imposes many kinds of requirements, so the choice of a suitable integration method and technology must be made carefully. An integration implementation should aim at so-called loose coupling, which makes it possible to achieve independence of time, place and platform. The assumptions between the integrated parties are then reduced to a minimum, which improves the manageability and fault tolerance of the integration. This master's thesis focuses on studying the characteristics, advantages and disadvantages of the integration methods and technologies currently used in industry. In addition, the work examines Web services technology and implements an asynchronous data-copying application with it. Web services is a still-evolving service-oriented technology that aims to overcome many of the problems that troubled earlier technologies. One of its main goals is to create a loose coupling between the integrated parties and to enable operation in a heterogeneous environment. However, the technology still suffers from a lack of standards, for example in security matters, and from the development of overlapping standards by different vendors. For the technology to become widespread, these problems must be solved.
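The thesis implements the data copy with Web service messaging; the simplified sketch below uses a plain HTTP POST to keep it short, and the endpoint URL and payload format are placeholders. It only illustrates the asynchronous, loosely coupled style of invocation: the sender does not block waiting for the receiver.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;

public class AsyncCopyClient {
    private final HttpClient client = HttpClient.newHttpClient();

    // Sends each record to the integration endpoint without blocking the caller.
    public void copy(List<String> records) {
        for (String record : records) {
            HttpRequest request = HttpRequest.newBuilder(
                            URI.create("http://integration-host/copyService"))
                    .header("Content-Type", "text/xml")
                    .POST(HttpRequest.BodyPublishers.ofString(record))
                    .build();
            client.sendAsync(request, HttpResponse.BodyHandlers.ofString())
                  .thenAccept(resp ->
                          System.out.println("copied, status " + resp.statusCode()));
        }
    }
}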

Relevance: 30.00%
Publisher:
Abstract:

Web portals offer unique tools for creating different kinds of content, a variety of navigation paths, personal pages and security services. A portal is a complex system containing many cooperating components, usually implemented with off-the-shelf software. This study deals with a portal implementation based on IBM/Tivoli products. The integration of portal components is critical to the overall system architecture and may require additional software development. The primary goal of the study is to develop a custom component for two portal subsystems: subscriber management and security services. Tivoli Personalized Services Manager (TPSM) and Tivoli SecureWay Policy Director (PD) are examined in the study. The integration involves synchronizing data between the TPSM database and the PD User Registry. The integration software was designed and implemented on the basis of the existing subsystems.
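A schematic sketch only of the synchronization direction described above (TPSM database to PD User Registry): the PD administration API is represented here by a hypothetical UserRegistry interface, and the TPSM table and column names are invented.

import java.sql.*;

public class SubscriberSync {

    // Stand-in for the PD client; the real registry API is not shown here.
    interface UserRegistry {
        boolean exists(String userId);
        void createUser(String userId, String commonName);
    }

    // Copies subscribers from the TPSM database into the registry if they are missing.
    public void synchronize(Connection tpsmDb, UserRegistry pdRegistry) throws SQLException {
        try (Statement st = tpsmDb.createStatement();
             ResultSet rs = st.executeQuery("SELECT user_id, full_name FROM subscriber")) {
            while (rs.next()) {
                String userId = rs.getString("user_id");
                if (!pdRegistry.exists(userId)) {
                    pdRegistry.createUser(userId, rs.getString("full_name"));
                }
            }
        }
    }
}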

Relevance: 30.00%
Publisher:
Abstract:

Current-day web search engines (e.g., Google) do not crawl and index a significant portion of the Web, and hence web users who rely on search engines alone are unable to discover and access a large amount of information in the non-indexable part of the Web. Specifically, dynamic pages generated from parameters provided by a user via web search forms (or search interfaces) are not indexed by search engines and cannot be found in search results. Such search interfaces give web users online access to myriad databases on the Web. To obtain information from a web database of interest, a user issues a query by specifying query terms in a search form and receives the query results: a set of dynamic pages that embed the required information from the database. At the same time, issuing a query via an arbitrary search interface is an extremely complex task for any kind of automatic agent, including web crawlers, which, at least up to the present day, do not even attempt to pass through web forms on a large scale. In this thesis, our primary object of study is the huge portion of the Web (hereafter referred to as the deep Web) hidden behind web search interfaces. We concentrate on three classes of problems around the deep Web: characterization of the deep Web, finding and classifying deep web resources, and querying web databases. Characterizing the deep Web: Though the term deep Web was coined in 2000, which is a long time ago for any web-related concept or technology, we still do not know many important characteristics of the deep Web. Another matter of concern is that the surveys of the deep Web conducted so far are predominantly based on the study of deep web sites in English. One can therefore expect that the findings of these surveys may be biased, especially owing to a steady increase in non-English web content. Surveying national segments of the deep Web is thus of interest not only to national communities but to the whole web community as well. In this thesis, we propose two new methods for estimating the main parameters of the deep Web. We use the suggested methods to estimate the scale of one specific national segment of the Web and report our findings. We also build and make publicly available a dataset describing more than 200 web databases from this national segment of the Web. Finding deep web resources: The deep Web has been growing at a very fast pace. It has been estimated that there are hundreds of thousands of deep web sites. Due to the huge volume of information in the deep Web, there has been significant interest in approaches that allow users and computer applications to leverage this information. Most approaches assume that the search interfaces to the web databases of interest have already been discovered and are known to the query systems. However, such assumptions do not hold, mostly because of the large scale of the deep Web: for any given domain of interest there are too many web databases with relevant content. Thus, the ability to locate search interfaces to web databases becomes a key requirement for any application accessing the deep Web. In this thesis, we describe the architecture of the I-Crawler, a system for finding and classifying search interfaces. Specifically, the I-Crawler is intentionally designed to be used in deep Web characterization studies and for constructing directories of deep web resources.
Unlike almost all other approaches to the deep Web so far, the I-Crawler is able to recognize and analyze JavaScript-rich and non-HTML searchable forms. Querying web databases: Retrieving information by filling out web search forms is a typical task for a web user, all the more so because the interfaces of conventional search engines are also web forms. At present, a user needs to manually provide input values to search interfaces and then extract the required data from the result pages. Manually filling out forms is cumbersome and not feasible for complex queries, yet such queries are essential for many web searches, especially in the area of e-commerce. Automating the querying and retrieving of data behind search interfaces is therefore desirable and essential for tasks such as building domain-independent deep web crawlers and automated web agents, searching for domain-specific information (vertical search engines), and extracting and integrating information from various deep web resources. We present a data model for representing search interfaces and discuss techniques for extracting field labels, client-side scripts and structured data from HTML pages. We also describe a representation of result pages and discuss how to extract and store the results of form queries. Besides, we present a user-friendly and expressive form query language that allows one to retrieve information behind search interfaces and extract useful data from the result pages based on specified conditions. We implement a prototype system for querying web databases and describe its architecture and component design.
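A toy illustration (using the jsoup library, not the thesis prototype or its query language) of the two steps the abstract automates: locating a search form on a page and submitting a query through it, then pulling data out of the dynamically generated result page. The URL, the form field name and the result selector are placeholders.

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

public class FormQueryExample {
    public static void main(String[] args) throws Exception {
        // Fetch the page and locate its search interface.
        Document page = Jsoup.connect("http://example.org/search.html").get();
        Element form = page.selectFirst("form");
        String action = form.absUrl("action");          // where the query is submitted

        // Fill in the form field and submit the query.
        Document results = Jsoup.connect(action)
                .data("query", "rest web services")
                .post();

        // Extract data embedded in the result page.
        results.select("table.results tr td:first-child")
               .forEach(cell -> System.out.println(cell.text()));
    }
}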

Relevance: 30.00%
Publisher:
Abstract:

This work investigated good practices and established conventions for building a web application with a long service life. It was found that, over the application's life cycle, most of the costs come from maintenance. Since the goal was to build an application for long-term use, the share of maintenance costs had to be made as small as possible. Defects detected as early as possible in the software development process reduce repair costs substantially compared with detecting them in the finished product. Therefore, the web application built in this work invested in the early phases of the process: requirements specification and design. Good software development practices have a substantial effect on the maintainability and clarity of a web application. By using an existing application framework and adding functionality with ready-made software components, an application built according to good practices is achieved. The web application implemented in this work was built using an application framework and a component architecture, and the implementation turned out to be clear. The application was divided into logical units that handle the views, the database, and the binding of data between them. Each unit works independently, which helps in maintaining and testing the application.
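A compact sketch, not taken from the thesis, of the layering described above: a data-access unit, a view unit and a thin binding between them, each testable on its own. The interface and class names and the customer domain are illustrative.

import java.util.List;

interface CustomerDao {                         // database unit
    List<String> findCustomerNames();
}

interface CustomerView {                        // view unit
    void showCustomers(List<String> names);
}

class CustomerPresenter {                       // binds data from the database to the view
    private final CustomerDao dao;
    private final CustomerView view;

    CustomerPresenter(CustomerDao dao, CustomerView view) {
        this.dao = dao;
        this.view = view;
    }

    void refresh() {
        view.showCustomers(dao.findCustomerNames());
    }
}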

Relevance: 30.00%
Publisher:
Abstract:

The research focus of this thesis is to explore options for building systems for business critical web applications. Business criticality here includes requirements for data protection and system availability. The focus is on open source software, and the goals are to identify robust technologies and engineering practices for implementing such systems. The research methods include experiments with sample systems built around chosen software packages that represent certain technologies. The main research focused on finding a good method for database data replication, a key functionality for high-availability, database-driven web applications. The research also included gathering engineering best practices from books written by administrators of high-traffic web applications. The experiment with database replication showed that the block-level synchronous replication offered by the DRBD replication software provides considerably more robust data protection and high-availability functionality than the leading open source database product MySQL and its built-in asynchronous replication. For master-master database setups, block-level replication is the more recommended way to build high availability into the system. Based on the thesis research, building high-availability web applications is possible using a combination of open source software and engineering best practices for data protection, availability planning and scaling.
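A small JDBC sketch illustrating one practical consequence of the asynchronous MySQL replication discussed above: the replica can lag behind the master, so recently acknowledged writes are not yet protected on the second node and the lag has to be monitored. The connection details are placeholders; the statement and column name follow classic MySQL replication.

import java.sql.*;

public class ReplicationLagCheck {
    public static void main(String[] args) throws SQLException {
        try (Connection replica = DriverManager.getConnection(
                     "jdbc:mysql://replica-host/mysql", "monitor", "secret");
             Statement st = replica.createStatement();
             ResultSet rs = st.executeQuery("SHOW SLAVE STATUS")) {
            if (rs.next()) {
                // How far the replica currently trails the master, in seconds.
                long lag = rs.getLong("Seconds_Behind_Master");
                System.out.println("Replica is " + lag + " seconds behind the master");
            } else {
                System.out.println("Replication is not configured on this node");
            }
        }
    }
}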

Relevance: 30.00%
Publisher:
Abstract:

Poster at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014