28 results for SIB Semantic Information Broker OSGI Semantic Web
in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
A web service is a software system that provides a machine-processable interface to other machines over a network using different Internet protocols. Web services are increasingly used in industry to automate different tasks and offer services to a wider audience. The REST architectural style aims at producing scalable and extensible web services using technologies that play well with the existing tools and infrastructure of the web. It provides a uniform set of operations that can be used to invoke the CRUD interface (create, retrieve, update and delete) of a web service. The stateless behavior of the service interface requires that every request to a resource is independent of the previous ones, which facilitates scalability. Automated systems, e.g., hotel reservation systems, provide advanced scenarios for stateful services that require a certain sequence of requests to be followed in order to fulfill the service goals. Designing and developing such services for advanced scenarios under REST constraints requires rigorous approaches capable of creating web services that can be trusted for their behavior. Systems that can be trusted for their behavior can be termed dependable systems. This thesis presents an integrated design, analysis and validation approach that helps the service developer create dependable and stateful REST web services. The main contribution of this thesis is a novel model-driven methodology to design behavioral REST web service interfaces and their compositions. The behavioral interfaces provide information on what methods can be invoked on a service and on the pre- and post-conditions of these methods. The methodology uses the Unified Modeling Language (UML) as the modeling language; UML has a wide user base and mature tools that are continuously evolving. We use the UML class diagram and the UML state machine diagram, with additional design constraints, to provide the resource and behavioral models, respectively, for designing REST web service interfaces. These service design models serve as a specification document, and the information presented in them has manifold applications. The service design models also contain information about the time and domain requirements of the service, which supports requirement traceability, an important part of our approach. Requirement traceability helps in capturing faults in the design models and in other elements of the software development environment by tracing unfulfilled service requirements back and forth. Information about service actors is also included in the design models; it is required for authenticating service requests by authorized actors, since not all types of users have access to all resources. In addition, by following our design approach, the service developer can ensure that the designed web service interfaces are REST compliant. The second contribution of this thesis is consistency analysis of the behavioral REST interfaces. To overcome inconsistency problems and design errors in our service models, we use semantic technologies. The REST interfaces are represented in the Web Ontology Language (OWL 2), so they can be part of the Semantic Web. These interfaces are checked with OWL 2 reasoners for unsatisfiable concepts, which would result in implementations that fail. This work is fully automated thanks to the implemented translation tool and the existing OWL 2 reasoners.
The third contribution of this thesis is the verification and validation of REST web services. We use model checking techniques with the UPPAAL model checker for this purpose. Timed automata are generated from the UML-based service design models with our transformation tool and are verified for basic properties such as deadlock freedom, liveness, reachability and safety. The implementation of a web service is tested using a black-box testing approach. Test cases are generated from the UPPAAL timed automata, and using the online testing tool UPPAAL TRON, the service implementation is validated at runtime against its specifications. Requirement traceability is also addressed in our validation approach: we can see which service goals are met and trace the unfulfilled service goals back to faults in the design models. A final contribution of the thesis is the implementation of behavioral REST interfaces and service monitors from the service design models. The partial code generation tool creates code skeletons of REST web services with method pre- and post-conditions. The preconditions of the methods constrain the user to invoke the stateful REST service under the right conditions, and the postconditions constrain the service developer to implement the right functionality. The details of the methods can be inserted manually by the developer as required. We do not target complete automation because we focus only on the interface aspects of the web service. The applicability of the approach is demonstrated with a pedagogical example of a hotel room booking service and a relatively complex worked example of a holiday booking service taken from an industrial context. The former example presents a simple explanation of the approach, and the latter worked example shows how stateful and timed web services offering complex scenarios and involving other web services can be constructed using our approach.
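As an illustration of the last step, the following minimal sketch (Python, with assumed resource names and states rather than the thesis tool's actual output) shows what a generated skeleton with method pre- and post-conditions could look like for a stateful booking resource: a precondition gates each method on the current resource state, and a postcondition asserts the state the developer's implementation must reach.

    # A minimal sketch (assumed names and states, not the thesis tool's actual
    # output) of a generated skeleton for a stateful booking resource:
    # preconditions gate each method on the current resource state, and
    # postconditions assert the state the developer's implementation must reach.
    class BookingResource:
        """Hypothetical /booking/{id} resource with states INITIAL, CONFIRMED, CANCELLED."""

        def __init__(self):
            self.state = "INITIAL"

        def put_confirm(self, payload: dict) -> dict:
            # Precondition: a booking can only be confirmed from the INITIAL state.
            assert self.state == "INITIAL", "412 Precondition Failed"
            # --- the developer fills in the actual business logic here ---
            self.state = "CONFIRMED"
            # Postcondition: the method must leave the resource in the CONFIRMED state.
            assert self.state == "CONFIRMED"
            return {"status": self.state}

        def delete(self) -> dict:
            # Precondition: only a confirmed booking can be cancelled.
            assert self.state == "CONFIRMED", "412 Precondition Failed"
            self.state = "CANCELLED"
            assert self.state == "CANCELLED"
            return {"status": self.state}

    if __name__ == "__main__":
        booking = BookingResource()
        print(booking.put_confirm({}))  # {'status': 'CONFIRMED'}
        print(booking.delete())         # {'status': 'CANCELLED'}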
Abstract:
Context: Web services have been gaining popularity due to the success of service-oriented architecture and cloud computing. Web services offer a tremendous opportunity for service developers to publish their services and applications beyond the boundaries of their organization or company. However, to fully exploit these opportunities it is necessary to find efficient discovery mechanisms. Web service discovery has therefore attracted considerable attention in Semantic Web research, yet there have been no literature surveys that systematically map the existing research results, so the overall impact of these research efforts and the level of maturity of their results are still unclear. This thesis aims at providing an overview of the current state of research into Web service discovery mechanisms using a systematic mapping study. The work is based on papers published from 2004 to 2013 and attempts to elaborate various aspects of the analyzed literature, including classifying it in terms of the architectures, frameworks and methods used for Web service discovery. Objective: The objective of this work is to summarize the current knowledge available on Web service discovery mechanisms and to systematically identify and analyze the currently published research in order to identify the different approaches presented. Method: A systematic mapping study has been employed to assess the various Web service discovery approaches presented in the literature. Systematic mapping studies are useful for categorizing and summarizing the level of maturity of a research area. Results: The results indicate that there are numerous approaches that are consistently being researched and published in this field. In terms of where this research is published, conferences are the major publishing arena: 48% of the selected papers were published at conferences, illustrating the level of maturity of the research topic. Additionally, the 52 selected papers are categorized into two broad segments, namely functional and non-functional approaches, taking into consideration architectural aspects and information retrieval approaches, semantic matching, syntactic matching, behavior-based matching, as well as QoS and other constraints.
Abstract:
Web services form a central part of the Semantic Web. They provide modern and efficient tools for distributed computing and create the foundation for service-oriented architectures. Networked, automated business requires continuous activity from all parties. In addition, the system supporting it must be flexible and must support versatile functionality. These goals can be achieved by composing web services. The composition process consists of a set of tasks such as service modeling, service composition, service execution and verification. In this work, a simple business process was implemented. For the implementation, alternative standards and implementation techniques were examined. Aspects related to optimizing the execution were also taken into account.
Abstract:
The electronic prescription system in its current form originated during 2001 from an initiative of the Ministry of Social Affairs and Health and through the work of Kansaneläkelaitos (the Social Insurance Institution of Finland). The national prescription server in use enables patients' rights to, among other things, free choice of pharmacy and the direct reimbursement procedure for medicines. The electronic prescription is intended to be integrated with existing patient record systems, so the technologies brought by the Semantic Web, such as Web Services and XML-based languages, play an important role in prescription management. This thesis aims to present how an electronic prescription system operates and the existing techniques that make it possible. New value-adding techniques are also introduced, such as the RDF(S) description languages, which enable, among other things, queries over prescriptions and thereby new health services for both patients and physicians. Information security is of course also essential, since sensitive data moves between organizations in prescriptions. Security is promoted by means such as electronic signatures and smart card based authentication to the system. For the electronic prescription to work as intended in the future and to enable value-added services, effort must be put not only into its content but also into describing that content. Precise definition of the content and its relationships helps, among other things, in formulating prescription queries, which is an essential part of maintaining patient safety. The use of description languages and ontologies can also help in determining medication in the jungle of numerous and constantly increasing drugs, when medicines can be searched from databases at the time the prescription is written.
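As an illustration of the kind of prescription query that RDF(S) descriptions make possible, the following minimal sketch uses the rdflib Python library with an invented example vocabulary (the namespace, property names and identifiers are hypothetical, not part of any actual prescription schema) to record a few facts about a prescription and retrieve all prescriptions of one patient with SPARQL:

    # A sketch using the rdflib Python library with an invented example vocabulary;
    # the namespace, property names and identifiers are hypothetical, not part of
    # any actual prescription schema.
    from rdflib import Graph, Literal, Namespace, RDF

    EX = Namespace("http://example.org/prescription#")

    g = Graph()
    rx = EX["rx-001"]
    g.add((rx, RDF.type, EX.Prescription))
    g.add((rx, EX.drug, Literal("amoxicillin")))
    g.add((rx, EX.patient, Literal("patient-123")))

    # Find all prescriptions and drugs for one patient.
    results = g.query("""
        PREFIX ex: <http://example.org/prescription#>
        SELECT ?rx ?drug WHERE {
            ?rx a ex:Prescription ;
                ex:drug ?drug ;
                ex:patient "patient-123" .
        }
    """)
    for row in results:
        print(row.rx, row.drug)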
Abstract:
The increasing usage of Web Services has been a result of efforts to automate Web Service discovery and interoperability. Semantic Web Service descriptions create a basis for automatic Web Service information management tasks such as discovery and interoperability. Discussion of the opportunities enabled by service descriptions has arisen in recent years. The end user has so far been considered only as a consumer of services, and information sharing has occurred from a single service provider to the public in service distribution. Social networking has changed the nature of services. The end user can no longer be seen only as a service consumer, because given a semantically rich environment and the right tools, the end user will in the future be a producer of services. This study investigates ways to empower end users to create service descriptions on a mobile device. Special focus is given to the changed role of the end user in service creation. In addition, Web Services technologies are presented and different Semantic Web Service description approaches are compared. The main focus of the study is to investigate tools and techniques that enable service description creation and semantic information management on a mobile device.
Abstract:
A growing concern for organisations is how they should deal with increasing amounts of collected data. With fierce competition and smaller margins, organisations that are able to fully realize the potential in the data they collect can gain an advantage over their competitors. It is almost impossible to avoid imprecision when processing large amounts of data. Still, many of the available information systems are not capable of handling imprecise data, even though doing so can offer various advantages. Expert knowledge stored as linguistic expressions is a good example of imprecise but valuable data, i.e. data that is hard to pin down to a definitive value. There is an obvious concern among organisations about how this problem should be handled; finding new methods for processing and storing imprecise data is therefore a key issue. Additionally, it is equally important to show that tacit knowledge and imprecise data can be used with success, which encourages organisations to analyse their imprecise data. The objective of the research conducted was therefore to explore how fuzzy ontologies could facilitate the exploitation and mobilisation of tacit knowledge and imprecise data in organisational and operational decision making processes. The thesis introduces both practical and theoretical advances on how fuzzy logic, ontologies (fuzzy ontologies) and OWA operators can be utilized for different decision making problems. It is demonstrated how a fuzzy ontology can model tacit knowledge that was collected from wine connoisseurs. The approach can be generalised and applied to other practically important problems as well, such as intrusion detection. Additionally, a fuzzy ontology is applied in a novel consensus model for group decision making. By combining the fuzzy ontology with Semantic Web affiliated techniques, novel applications have been designed. These applications show how the mobilisation of knowledge can successfully utilize imprecise data as well. An important part of decision making processes is undeniably aggregation, which in combination with a fuzzy ontology provides a promising basis for demonstrating the benefits that one can obtain from handling imprecise data. The new aggregation operators defined in the thesis often provide new possibilities for handling imprecision and expert opinions. This is demonstrated through both theoretical examples and practical implementations. This thesis shows the benefits of utilizing all the available data one possesses, including imprecise data. By combining the concept of fuzzy ontology with the Semantic Web movement, it aspires to show the corporate world and industry the benefits of embracing fuzzy ontologies and imprecision.
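The Ordered Weighted Averaging (OWA) aggregation mentioned above has a simple core definition: the weights are applied to the arguments after sorting them, so the choice of weight vector moves the operator between min-like, max-like and mean-like behaviour. A minimal sketch with invented expert scores (not the thesis's own operators, which extend this basic form):

    def owa(values, weights):
        """Ordered Weighted Averaging: the weights are applied to the values
        sorted in descending order, so one weight vector can express optimistic
        (max-like), pessimistic (min-like) or averaging attitudes."""
        assert len(values) == len(weights) and abs(sum(weights) - 1.0) < 1e-9
        ordered = sorted(values, reverse=True)
        return sum(w * v for w, v in zip(weights, ordered))

    # Hypothetical expert scores for one wine on a 0..1 scale.
    scores = [0.9, 0.6, 0.4]
    print(owa(scores, [1.0, 0.0, 0.0]))      # 0.9  -> behaves like max
    print(owa(scores, [0.0, 0.0, 1.0]))      # 0.4  -> behaves like min
    print(owa(scores, [1/3, 1/3, 1/3]))      # ~0.63 -> arithmetic mean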
Abstract:
Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
This work studied the parsing of a file conforming to the IFC (Industry Foundation Classes) data model, further processing of the data, and data transfer between applications. The alternatives for implementing the data transfer programmatically were examined, as well as the direction in which data transfer is heading in the future. In the applied part, parsing and interpretation of an IFC-standard ISO 10303 file (Part 21) into XML format was implemented. The application parses and interprets an IFC file produced with CAD software using the C# programming language and stores the data in an XML database to be read by cost estimation software.
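The parsing step can be illustrated with a simplified sketch; the thesis implementation is in C#, so the Python below and its output structure are only an assumed approximation of the idea: an ISO 10303-21 instance line such as #12=IFCWALL(...); is matched with a regular expression and emitted as an XML element:

    # A simplified sketch (the thesis implementation is in C#; the output
    # structure here is invented): match ISO 10303-21 instance lines and emit
    # them as XML elements.
    import re
    import xml.etree.ElementTree as ET

    LINE = re.compile(r"#(\d+)\s*=\s*([A-Z0-9_]+)\s*\((.*)\)\s*;")

    def step_to_xml(step_text: str) -> ET.Element:
        root = ET.Element("ifc")
        for match in LINE.finditer(step_text):
            entity_id, entity_type, args = match.groups()
            element = ET.SubElement(root, entity_type, id=entity_id)
            element.text = args.strip()
        return root

    sample = "#12=IFCWALL('2O2Fr$t4X7Zf8NOew3FL9r',#5,'Wall-001',$,$);"
    print(ET.tostring(step_to_xml(sample), encoding="unicode"))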
Abstract:
Current-day web search engines (e.g., Google) do not crawl and index a significant portion of the Web and, hence, web users relying on search engines alone are unable to discover and access a large amount of information from the non-indexable part of the Web. Specifically, dynamic pages generated based on parameters provided by a user via web search forms (or search interfaces) are not indexed by search engines and cannot be found in search results. Such search interfaces provide web users with online access to myriads of databases on the Web. In order to obtain some information from a web database of interest, a user issues his/her query by specifying query terms in a search form and receives the query results, a set of dynamic pages that embed the required information from a database. At the same time, issuing a query via an arbitrary search interface is an extremely complex task for any kind of automatic agent, including web crawlers, which, at least up to the present day, do not even attempt to pass through web forms on a large scale. In this thesis, our primary and key object of study is a huge portion of the Web (hereafter referred to as the deep Web) hidden behind web search interfaces. We concentrate on three classes of problems around the deep Web: characterization of the deep Web, finding and classifying deep web resources, and querying web databases. Characterizing the deep Web: Though the term deep Web was coined in 2000, which is sufficiently long ago for any web-related concept/technology, we still do not know many important characteristics of the deep Web. Another matter of concern is that the surveys of the deep Web existing so far are predominantly based on the study of deep web sites in English. One can then expect that findings from these surveys may be biased, especially owing to a steady increase in non-English web content. In this way, surveying national segments of the deep Web is of interest not only to national communities but to the whole web community as well. In this thesis, we propose two new methods for estimating the main parameters of the deep Web. We use the suggested methods to estimate the scale of one specific national segment of the Web and report our findings. We also build and make publicly available a dataset describing more than 200 web databases from the national segment of the Web. Finding deep web resources: The deep Web has been growing at a very fast pace. It has been estimated that there are hundreds of thousands of deep web sites. Due to the huge volume of information in the deep Web, there has been significant interest in approaches that allow users and computer applications to leverage this information. Most approaches assume that search interfaces to the web databases of interest are already discovered and known to query systems. However, such assumptions do not hold true, mostly because of the large scale of the deep Web: indeed, for any given domain of interest there are too many web databases with relevant content. Thus, the ability to locate search interfaces to web databases becomes a key requirement for any application accessing the deep Web. In this thesis, we describe the architecture of the I-Crawler, a system for finding and classifying search interfaces. Specifically, the I-Crawler is intentionally designed to be used in deep Web characterization studies and for constructing directories of deep web resources.
Unlike almost all other approaches to the deep Web so far, the I-Crawler is able to recognize and analyze JavaScript-rich and non-HTML searchable forms. Querying web databases: Retrieving information by filling out web search forms is a typical task for a web user. This is all the more so as the interfaces of conventional search engines are also web forms. At present, a user needs to manually provide input values to search interfaces and then extract the required data from the result pages. Manually filling out forms is cumbersome and not feasible for complex queries, but such queries are essential for many web searches, especially in the area of e-commerce. In this way, the automation of querying and retrieving data behind search interfaces is desirable and essential for tasks such as building domain-independent deep web crawlers and automated web agents, searching for domain-specific information (vertical search engines), and extracting and integrating information from various deep web resources. We present a data model for representing search interfaces and discuss techniques for extracting field labels, client-side scripts and structured data from HTML pages. We also describe a representation of result pages and discuss how to extract and store the results of form queries. Besides, we present a user-friendly and expressive form query language that allows one to retrieve information behind search interfaces and extract useful data from the result pages based on specified conditions. We implement a prototype system for querying web databases and describe its architecture and component design.
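The general idea of programmatically querying a web database behind a search form can be sketched as follows; the URL, form field names and the result-extraction rule are hypothetical placeholders and are far simpler than the form query language and result-page model described above:

    # A sketch of programmatically querying a web database behind a search form;
    # the URL, form field names and the result-extraction rule (collect text from
    # <td> cells) are hypothetical placeholders.
    import requests
    from html.parser import HTMLParser

    FORM_URL = "https://example.org/hotels/search"

    class CellExtractor(HTMLParser):
        def __init__(self):
            super().__init__()
            self.in_cell = False
            self.cells = []

        def handle_starttag(self, tag, attrs):
            if tag == "td":
                self.in_cell = True

        def handle_endtag(self, tag):
            if tag == "td":
                self.in_cell = False

        def handle_data(self, data):
            if self.in_cell and data.strip():
                self.cells.append(data.strip())

    def query_form(city: str, max_price: int) -> list:
        # Fill the (hypothetical) form fields and fetch the dynamic result page.
        response = requests.get(FORM_URL, params={"city": city, "maxPrice": max_price})
        parser = CellExtractor()
        parser.feed(response.text)
        return parser.cells

    # print(query_form("Helsinki", 120))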
Abstract:
This study presents a review of theories of the so-called post-industrial society, and proposes that the concept of post-industrial society can be used to understand the recent developments of the World Wide Web, often described as Web 2.0 or social Web. The study combines theories ranging from post-war management science and cultural studies to software development, and tries to build a holistic view of the development of the post-industrial society, and especially the Internet. The discourse on the emergence of a post-industrial society after the World Wars has addressed the ways in which the growing importance of information, and innovations in digital communications technology, are changing our society. It is furthermore deeply connected with the discourse on the postmodern society, which emphasizes cultural fragmentation, intertextuality, and pluralism. The Internet age is characterized by increasing masses of information that are managed through various technologies. While 1990s Internet technologies often used the network as a traditional broadcasting channel with added interactivity, Web 2.0 technologies are specifically designed to utilize the network model by facilitating communication between various services and devices, and analyzing the relationships between users and objects in order to produce intelligent insight. The wide adoption of the Internet, and recently of Internet-enabled mobile devices, is furthermore continuously producing new ways of communicating, consuming, and producing. Applications of the social Web, such as social media or social networking services, are permanently changing our traditional social, cultural, and economic practices. The study first presents an overview of the post-industrial society, the Internet, and the concept of Web 2.0. Then the concept of social Web is described with an analysis of the term social media, the brief histories of the interactive Web and social networking services, and a description of the concept "long tail", used to represent the masses of information available in the Web that do not receive mainstream attention. Finally, methods for retrieving and filtering information, modeling social and cultural relationships, and communicating with customers, are presented.
Abstract:
Presentation by Jakob Voß at the Library Network Days (Kirjastoverkkopäivät), October 22, 2014, in Helsinki.
Abstract:
Today, biodiversity is endangered by the intensive farming methods currently imposed on food producers by intermediate actors (e.g., retailers). The lack of a direct communication technology between the food producer and the consumer creates a dependency on the intermediate actors for both producers and consumers. A tool allowing producers to directly and efficiently market produce that meets customer demands could greatly reduce the dependency enforced by intermediate actors. To this end, in this thesis, we propose, develop, implement and validate a Real Time Context Sharing (RCOS) system. RCOS takes advantage of the widely used publish/subscribe paradigm to exchange messages directly between producers and consumers, according to their interests and context. Current systems follow either a topic-based model or a content-based model. With RCOS, we introduce a context-awareness approach into the matching process of the publish/subscribe paradigm. Finally, as a proof of concept, we extend the Apache ActiveMQ Artemis software and create a client prototype. We evaluate our proof of concept for larger-scale deployment.
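A minimal sketch of the context-aware matching idea follows; the class names, interest strings and the distance-based predicate are illustrative inventions rather than the RCOS design itself, but they show how a broker can deliver a publication only when both the subscriber's interest and a context predicate are satisfied:

    # The class names, interest strings and the distance-based predicate below are
    # illustrative inventions, not the RCOS design itself.
    import math
    from dataclasses import dataclass, field
    from typing import Callable

    @dataclass
    class Subscription:
        interest: str                          # e.g. a produce type
        context: dict                          # e.g. the consumer's location
        predicate: Callable[[dict, dict], bool]
        deliver: Callable[[str], None]

    @dataclass
    class ContextBroker:
        subscriptions: list = field(default_factory=list)

        def subscribe(self, sub: Subscription) -> None:
            self.subscriptions.append(sub)

        def publish(self, interest: str, producer_context: dict, message: str) -> None:
            # Deliver only when both the interest and the context predicate match.
            for sub in self.subscriptions:
                if sub.interest == interest and sub.predicate(producer_context, sub.context):
                    sub.deliver(message)

    def within_50km(producer_ctx: dict, consumer_ctx: dict) -> bool:
        dx = producer_ctx["x"] - consumer_ctx["x"]
        dy = producer_ctx["y"] - consumer_ctx["y"]
        return math.hypot(dx, dy) <= 50.0

    broker = ContextBroker()
    broker.subscribe(Subscription("organic tomatoes", {"x": 0, "y": 0}, within_50km, print))
    broker.publish("organic tomatoes", {"x": 10, "y": 20}, "20 kg available on Friday")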
Abstract:
The goal of this work was to study the suitability of a software suite designed by an ERP system reseller as a solution for technical wholesale. The work reviews the software suite, which includes, in addition to the ERP system, several other applications. Together the applications form a complete solution especially for wholesale. The work also reviews the business environment of technical wholesale. At the same time, the aim is to identify the strengths and weaknesses of the software solution for the wholesale business. An interview study is used to verify that the identified features fit technical wholesale. The study finds that the offered solution is more than adequate for a technical wholesaler. Based on the interview study, however, far from all of the additional functionality is needed; the ERP system alone is sufficient as such. The only identified weakness of the system relates to the handling of shelf locations in warehouse management, but based on the interview this does not hinder a company engaged in technical wholesale. An important feature in addition to the ERP system is the use of the internet as an information channel. The flow of information between the information systems and the web pages is considered an important piece of functionality.
Abstract:
The topic of this thesis is the simulation of a combination of several control and data assimilation methods, meant to be used for controlling the quality of paper in a paper machine. Paper making is a very complex process and the information obtained from the web is sparse. A paper web scanner can only measure a zig-zag path on the web. An assimilation method is needed to produce estimates of the Machine Direction (MD) and Cross Direction (CD) profiles of the web. Quality control is based on these measurements. There is an increasing need for intelligent methods to assist in data assimilation. The target of this thesis is to study how such intelligent assimilation methods affect paper web quality. This work is based on a paper web simulator, which has been developed in the TEKES-funded MASI NoTes project. The simulator is a valuable tool for comparing different assimilation methods. The thesis contains a comparison of four different assimilation methods: a first-order Bayesian model estimator, a higher-order Bayesian estimator based on an ARMA model, a Fourier transform based Kalman filter estimator and a simple block estimator. The last one can be considered close to current operational methods. Of these methods, the Bayesian, ARMA and Kalman estimators all seem to have advantages over the commercial one. The Kalman and ARMA estimators seem to be the best in overall performance.
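As a toy illustration of the assimilation step (not one of the thesis estimators, whose models and noise parameters are far richer), a one-dimensional Kalman filter can blend sparse scanner samples into a running estimate of the machine-direction mean:

    # A one-dimensional Kalman filter blending sparse scanner samples into a
    # running estimate of the machine-direction mean; the noise parameters and
    # sample values are invented for illustration.
    def kalman_1d(measurements, process_var=0.01, meas_var=0.25, x0=80.0, p0=1.0):
        x, p = x0, p0
        estimates = []
        for z in measurements:
            # Predict: the state is assumed to drift slowly, so only uncertainty grows.
            p = p + process_var
            # Update: blend the prediction with the new scanner sample.
            k = p / (p + meas_var)          # Kalman gain
            x = x + k * (z - x)
            p = (1.0 - k) * p
            estimates.append(x)
        return estimates

    scanner_samples = [80.4, 79.8, 80.9, 80.1, 79.6, 80.3]  # e.g. basis weight in g/m^2
    print(kalman_1d(scanner_samples))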
Abstract:
Human activity recognition in everyday environments is a critical but challenging task in Ambient Intelligence applications to achieve proper Ambient Assisted Living, and key challenges still remain to be dealt with to realize robust methods. One of the major limitations of Ambient Intelligence systems today is the lack of semantic models of the activities in the environment, so that the system can recognize the specific activity being performed by the user(s) and act accordingly. In this context, this thesis addresses the general problem of knowledge representation in Smart Spaces. The main objective is to develop knowledge-based models, equipped with semantics, to learn, infer and monitor human behaviours in Smart Spaces. Moreover, it is easy to recognize that some aspects of this problem have a high degree of uncertainty, and therefore the developed models must be equipped with mechanisms to manage this type of information. A fuzzy ontology and a semantic hybrid system are presented to allow modelling and recognition of a set of complex real-life scenarios where vagueness and uncertainty are inherent to the human nature of the users that perform them. The handling of uncertain, incomplete and vague data (i.e., missing sensor readings and activity execution variations, since human behaviour is non-deterministic) is approached for the first time through a fuzzy ontology validated in real-time settings within a hybrid data-driven and knowledge-based architecture. The semantics of activities, sub-activities and real-time object interaction are taken into consideration. The proposed framework consists of two main modules: the low-level sub-activity recognizer and the high-level activity recognizer. The first module detects sub-activities (i.e., actions or basic activities) and takes its input data directly from a depth sensor (Kinect). The main contribution of this thesis tackles the second component of the hybrid system, which lies on top of the previous one at a higher level of abstraction; it acquires its input data from the first module's output and executes ontological inference to provide users, activities and their influence on the environment with semantics. This component is thus knowledge-based, and a fuzzy ontology was designed to model the high-level activities. Since activity recognition requires context-awareness and the ability to discriminate among activities in different environments, the semantic framework allows for modelling common-sense knowledge in the form of a rule-based system that supports expressions close to natural language in the form of fuzzy linguistic labels. The advantages of the framework have been evaluated with a challenging new public dataset, CAD-120, achieving an accuracy of 90.1% and 91.1% for low- and high-level activities, respectively. This entails an improvement over both entirely data-driven approaches and purely ontology-based approaches. As an added value, so that the system is sufficiently simple and flexible to be managed by non-expert users, and thus facilitates the transfer of research to industry, a development framework composed of a programming toolbox, a hybrid crisp and fuzzy architecture, and graphical models to represent and configure human behaviour in Smart Spaces was developed in order to provide the framework with more usability in the final application.
As a result, human behaviour recognition can help assist people with special needs, for example in healthcare, independent elderly living, remote rehabilitation monitoring, industrial process guideline control, and many other cases. This thesis shows use cases in these areas.
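The following minimal sketch hints at how fuzzy linguistic labels can enter such rules; the labels, membership breakpoints and the rule itself are invented for illustration and are not taken from the thesis's CAD-120 models:

    # The labels, membership breakpoints and the rule itself are invented for
    # illustration and are not taken from the thesis's CAD-120 models.
    def triangular(x, a, b, c):
        """Membership of x in a triangular fuzzy set with support [a, c] and peak b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def degree_making_cereal(reach_duration_s, pour_duration_s):
        # Rule: IF reaching is short AND pouring is long THEN the activity is "making cereal".
        reaching_short = triangular(reach_duration_s, 0.0, 1.0, 3.0)
        pouring_long = triangular(pour_duration_s, 2.0, 5.0, 8.0)
        return min(reaching_short, pouring_long)   # fuzzy AND as minimum

    print(degree_making_cereal(1.2, 4.5))  # degree of fulfilment in [0, 1]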