25 results for TSDEAI Semantic-Web Twitter Semantic-Search WordNet LSA

in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland


Relevance: 40.00%

Abstract:

Web services form a central part of the Semantic Web. They provide modern and efficient tools for distributed computing and lay the foundation for service-oriented architectures. Networked, automated business requires continuous activity from all parties, and the supporting system must be flexible and support versatile functionality. These goals can be achieved by composing web services. The composition process consists of a set of tasks such as service modelling, service composition, service execution and verification. In this work, a simple business process was implemented. For the implementation, alternative standards and implementation techniques were examined, and aspects related to optimizing the execution were also taken into account.
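To make the idea of composing web services into a business process concrete, the following minimal Python sketch chains two hypothetical services sequentially; the endpoint URLs, parameters and JSON fields are invented for illustration and are not taken from the thesis, which examines standard composition languages and execution engines rather than hand-written glue code like this.

```python
import requests

# Hypothetical endpoints -- invented for illustration, not from the thesis.
INVENTORY_URL = "https://example.com/inventory/check"
ORDER_URL = "https://example.com/orders/create"

def place_order(item_id: str, quantity: int) -> dict:
    """A tiny composed business process: check availability with one service,
    then create the order with another service only if the item is in stock."""
    stock = requests.get(INVENTORY_URL, params={"item": item_id}).json()
    if stock.get("available", 0) < quantity:
        return {"status": "rejected", "reason": "out of stock"}
    order = requests.post(ORDER_URL, json={"item": item_id, "qty": quantity})
    order.raise_for_status()
    return {"status": "accepted", "order": order.json()}

if __name__ == "__main__":
    print(place_order("ABC-123", 2))
```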

Relevance: 40.00%

Abstract:

Current-day web search engines (e.g., Google) do not crawl and index a significant portion of the Web and, hence, web users relying on search engines alone are unable to discover and access a large amount of information in the non-indexable part of the Web. Specifically, dynamic pages generated from parameters provided by a user via web search forms (or search interfaces) are not indexed by search engines and cannot be found in searchers' results. Such search interfaces provide web users with online access to myriads of databases on the Web. To obtain information from a web database of interest, a user issues a query by specifying query terms in a search form and receives the query results: a set of dynamic pages that embed the required information from the database. At the same time, issuing a query via an arbitrary search interface is an extremely complex task for any kind of automatic agent, including web crawlers, which, at least up to the present day, do not even attempt to pass through web forms on a large scale. In this thesis, our primary object of study is the huge portion of the Web (hereafter referred to as the deep Web) hidden behind web search interfaces. We concentrate on three classes of problems around the deep Web: characterizing the deep Web, finding and classifying deep web resources, and querying web databases. Characterizing the deep Web: Though the term deep Web was coined in 2000, a long time ago for any web-related concept or technology, we still do not know many important characteristics of the deep Web. Another concern is that the surveys of the deep Web carried out so far are predominantly based on studies of deep web sites in English. One can therefore expect that findings from these surveys may be biased, especially owing to the steady increase in non-English web content. For this reason, surveying national segments of the deep Web is of interest not only to national communities but to the whole web community as well. In this thesis, we propose two new methods for estimating the main parameters of the deep Web. We use the suggested methods to estimate the scale of one specific national segment of the Web and report our findings. We also build and make publicly available a dataset describing more than 200 web databases from that national segment of the Web. Finding deep web resources: The deep Web has been growing at a very fast pace; it has been estimated that there are hundreds of thousands of deep web sites. Due to the huge volume of information in the deep Web, there has been significant interest in approaches that allow users and computer applications to leverage this information. Most approaches assume that the search interfaces to web databases of interest have already been discovered and are known to the query systems. However, such assumptions do not hold, mostly because of the large scale of the deep Web: for any given domain of interest there are too many web databases with relevant content. Thus, the ability to locate search interfaces to web databases becomes a key requirement for any application accessing the deep Web. In this thesis, we describe the architecture of the I-Crawler, a system for finding and classifying search interfaces. Specifically, the I-Crawler is intentionally designed to be used in deep Web characterization studies and for constructing directories of deep web resources.
Unlike almost all other approaches to the deep Web so far, the I-Crawler is able to recognize and analyze JavaScript-rich and non-HTML searchable forms. Querying web databases: Retrieving information by filling out web search forms is a typical task for a web user, all the more so since the interfaces of conventional search engines are also web forms. At present, a user needs to manually provide input values to search interfaces and then extract the required data from the result pages. Manually filling out forms is cumbersome and infeasible for complex queries, yet such queries are essential for many web searches, especially in the area of e-commerce. Automating the querying and retrieval of data behind search interfaces is therefore desirable and essential for tasks such as building domain-independent deep web crawlers and automated web agents, searching for domain-specific information (vertical search engines), and extracting and integrating information from various deep web resources. We present a data model for representing search interfaces and discuss techniques for extracting field labels, client-side scripts and structured data from HTML pages. We also describe a representation of result pages and discuss how to extract and store the results of form queries. In addition, we present a user-friendly and expressive form query language that allows one to retrieve information behind search interfaces and extract useful data from the result pages based on specified conditions. We implement a prototype system for querying web databases and describe its architecture and component design.
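As a rough illustration of what querying a web database behind a search form involves, the sketch below fills out a form programmatically and scrapes the dynamically generated result page; the URL, form field names and CSS selector are invented, and this is a generic example rather than the form query language developed in the thesis.

```python
import requests
from bs4 import BeautifulSoup

# Hypothetical search interface -- URL, field names and result markup are
# invented for illustration; a real deep-web crawler must first discover and
# model the form (labels, fields, submit method) before it can query it.
SEARCH_URL = "https://example.com/books/search"

def query_web_database(term: str) -> list[str]:
    """Submit a query through a web search form and extract the result titles
    from the dynamic result page."""
    response = requests.get(SEARCH_URL, params={"q": term, "lang": "en"})
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")
    return [link.get_text(strip=True) for link in soup.select("li.result > a.title")]

if __name__ == "__main__":
    for title in query_web_database("semantic web"):
        print(title)
```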

Relevance: 40.00%

Abstract:

"Helmiä sioille", pärlor för svin, säger man på finska om någonting bra och fint som tas emot av en mottagare som inte vill eller har ingen förmåga att förstå, uppskatta eller utnyttja hela den potential som finns hos det mottagna föremålet, är ointresserad av den eller gillar den inte. För sådana relativt stabila flerordiga uttryck, som är lagrade i språkbrukarnas minnen och som demonstrerar olika slags oregelbundna drag i sin struktur använder man inom lingvistiken bl.a. termerna "idiom" eller "fraseologiska enheter". Som en oregelbundenhet kan man t.ex. beskriva det faktum att betydelsen hos uttrycket inte är densamma som man skulle komma till ifall man betraktade det som en vanlig regelbunden fras. En annan oregelbundenhet, som idiomforskare har observerat, ligger i den begränsade förmågan att varieras i form och betydelse, som många idiom har jämfört med regelbundna fraser. Därför talas det ofta om "grundform" och "grundbetydelse" hos idiom och variationen avses som avvikelse från dessa. Men när man tittar på ett stort antal förekomstexempel av idiom i språkbruk, märker man att många av dem tillåter variation, t.o.m. i sådan utsträckning att gränserna mellan en variant och en "grundform" suddas ut, och istället för ett idiom råkar vi plötsligt på en "familj" av flera besläktade uttryck. Allt detta väcker frågan om hur dessa uttryck egentligen ska vara representerade i språket. I avhandlingen utförs en kritisk granskning av olika tidigare tillvägagångssätt att beskriva fraseologiska enheter i syfte att klargöra vilka svårigheter deras struktur och variation erbjuder för den lingvistiska teorin. Samtidigt presenteras ett alternativt sätt att beskriva dessa uttryck. En systematisk och formell modell som utvecklas i denna avhandling integrerar en beskrivning av idiom på många olika språkliga nivåer och skildrar deras variation i form av ett nätverk och som ett resultat av samspel mellan idiomets struktur och kontexter där det förekommer, samt av interaktion med andra fasta uttryck. Modellen bygger på en fördjupande, språkbrukbaserad analys av det finska idiomet "X HEITTÄÄ HELMIÄ SIOILLE" (X kastar pärlor för svin).

Relevance: 40.00%

Abstract:

A web service is a software system that provides a machine-processable interface to other machines over the network using different Internet protocols. Web services are increasingly used in industry to automate different tasks and to offer services to a wider audience. The REST architectural style aims at producing scalable and extensible web services using technologies that play well with the existing tools and infrastructure of the Web. It provides a uniform set of operations that can be used to invoke a CRUD interface (create, retrieve, update and delete) of a web service. The stateless behavior of the service interface requires that every request to a resource is independent of the previous ones, which facilitates scalability. Automated systems, e.g., hotel reservation systems, provide advanced scenarios for stateful services that require a certain sequence of requests to be followed in order to fulfill the service goals. Designing and developing such services for advanced scenarios under REST constraints requires rigorous approaches capable of creating web services that can be trusted for their behavior. Systems that can be trusted for their behavior can be termed dependable systems. This thesis presents an integrated design, analysis and validation approach that helps the service developer create dependable, stateful REST web services. The main contribution of this thesis is a novel model-driven methodology to design behavioral REST web service interfaces and their compositions. The behavioral interfaces provide information on what methods can be invoked on a service and on the pre- and postconditions of these methods. The methodology uses the Unified Modeling Language (UML) as the modeling language, which has a wide user base and mature, continuously evolving tools. We use the UML class diagram and the UML state machine diagram, with additional design constraints, to provide resource and behavioral models, respectively, for designing REST web service interfaces. These service design models serve as a specification document, and the information presented in them has manifold applications. The service design models also contain information about the time and domain requirements of the service, which supports requirement traceability, an important part of our approach. Requirement traceability helps in capturing faults in the design models and in other elements of the software development environment by tracing unfulfilled service requirements back and forth. Information about service actors is also included in the design models; it is needed to authenticate service requests by authorized actors, since not all types of users have access to all resources. In addition, by following our design approach, the service developer can ensure that the designed web service interfaces are REST compliant. The second contribution of this thesis is consistency analysis of the behavioral REST interfaces. To overcome inconsistency problems and design errors in our service models, we use semantic technologies. The REST interfaces are represented in the Web Ontology Language, OWL 2, and can thus be part of the Semantic Web. These interfaces are checked with OWL 2 reasoners for unsatisfiable concepts, which would result in implementations that fail. This work is fully automated thanks to the implemented translation tool and the existing OWL 2 reasoners.
The third contribution of this thesis is the verification and validation of REST web services. We use model checking techniques with the UPPAAL model checker for this purpose. Timed automata are generated from the UML-based service design models with our transformation tool and are verified for basic characteristics such as deadlock freedom, liveness, reachability and safety. The implementation of a web service is tested using a black-box testing approach: test cases are generated from the UPPAAL timed automata, and with the online testing tool UPPAAL TRON the service implementation is validated at runtime against its specifications. Requirement traceability is also addressed in our validation approach, with which we can see which service goals are met and trace unfulfilled service goals back to faults in the design models. A final contribution of the thesis is the implementation of behavioral REST interfaces and service monitors from the service design models. The partial code generation tool creates code skeletons of REST web services with method pre- and postconditions. The preconditions of the methods constrain the user to invoke the stateful REST service under the right conditions, and the postconditions constrain the service developer to implement the right functionality. The details of the methods can be inserted manually by the developer as required. We do not target complete automation because we focus only on the interface aspects of the web service. The applicability of the approach is demonstrated with a pedagogical example of a hotel room booking service and a relatively complex worked example of a holiday booking service taken from an industrial context. The former example gives a simple explanation of the approach, and the latter worked example shows how stateful and timed web services offering complex scenarios and involving other web services can be constructed using our approach.
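The generated skeletons are not shown in the abstract, but the following hand-written Flask sketch suggests what a stateful REST resource with method pre- and postconditions might look like; the resource, routes and conditions are invented for illustration and are not the output of the thesis's code generation tool.

```python
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

# Toy in-memory state for a single hotel room -- invented for illustration.
room = {"state": "free", "booked_by": None}

@app.route("/room", methods=["GET"])
def get_room():
    return jsonify(room)

@app.route("/room/booking", methods=["PUT"])
def book_room():
    # Precondition: the stateful resource must currently be free.
    if room["state"] != "free":
        abort(409, description="precondition failed: room is not free")
    payload = request.get_json(force=True)
    room["state"] = "booked"
    room["booked_by"] = payload.get("guest")
    # Postcondition: the resource must now be in the booked state.
    assert room["state"] == "booked"
    return jsonify(room)

if __name__ == "__main__":
    app.run()
```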

Relevance: 40.00%

Abstract:

Context: Web services have been gaining popularity due to the success of service-oriented architecture and cloud computing. They offer a tremendous opportunity for service developers to publish their services and applications beyond the boundaries of their organization or company. To fully exploit these opportunities, however, efficient discovery mechanisms are needed. Web service discovery has therefore attracted considerable attention in Semantic Web research, yet there have been no literature surveys that systematically map the published research, so the overall impact of these research efforts and the level of maturity of their results remain unclear. This thesis provides an overview of the current state of research into Web service discovery mechanisms using a systematic mapping study. The work is based on papers published from 2004 to 2013 and elaborates various aspects of the analyzed literature, including classifying the papers in terms of the architectures, frameworks and methods used for Web service discovery. Objective: The objective of this work is to summarize the current knowledge on Web service discovery mechanisms and to systematically identify and analyze the currently published research in order to identify the different approaches presented. Method: A systematic mapping study has been employed to assess the various Web service discovery approaches presented in the literature. Systematic mapping studies are useful for categorizing and summarizing the level of maturity of a research area. Results: The results indicate that numerous approaches are consistently being researched and published in this field. In terms of where this research is published, conferences are the main publishing arena: 48% of the selected papers were published at conferences, illustrating the level of maturity of the research topic. Additionally, the 52 selected papers are categorized into two broad segments, namely functional and non-functional approaches, taking into consideration architectural aspects and information retrieval approaches, semantic matching, syntactic matching, behavior-based matching, as well as QoS and other constraints.

Relevance: 40.00%

Abstract:

Human activity recognition in everyday environments is a critical but challenging task in Ambient Intelligence applications aiming at proper Ambient Assisted Living, and key challenges still remain to be addressed before robust methods are achieved. One of the major limitations of Ambient Intelligence systems today is the lack of semantic models of the activities in the environment that would let the system recognize the specific activity being performed by the user(s) and act accordingly. In this context, this thesis addresses the general problem of knowledge representation in Smart Spaces. The main objective is to develop knowledge-based models, equipped with semantics, to learn, infer and monitor human behaviours in Smart Spaces. Moreover, some aspects of this problem clearly involve a high degree of uncertainty, and therefore the developed models must be equipped with mechanisms to manage this type of information. A fuzzy ontology and a semantic hybrid system are presented to allow the modelling and recognition of a set of complex real-life scenarios where vagueness and uncertainty are inherent to the human nature of the users who perform them. The handling of uncertain, incomplete and vague data (i.e., missing sensor readings and variations in activity execution, since human behaviour is non-deterministic) is approached for the first time through a fuzzy ontology validated in real-time settings within a hybrid data-driven and knowledge-based architecture. The semantics of activities, sub-activities and real-time object interaction are taken into consideration. The proposed framework consists of two main modules: a low-level sub-activity recognizer and a high-level activity recognizer. The first module detects sub-activities (i.e., actions or basic activities) and takes its input data directly from a depth sensor (Kinect). The main contribution of this thesis tackles the second component of the hybrid system, which sits on top of the previous one at a higher level of abstraction, acquires its input data from the first module's output, and executes ontological inference to provide users, activities and their influence on the environment with semantics. This component is thus knowledge-based, and a fuzzy ontology was designed to model the high-level activities. Since activity recognition requires context awareness and the ability to discriminate among activities in different environments, the semantic framework allows common-sense knowledge to be modelled as a rule-based system that supports expressions close to natural language in the form of fuzzy linguistic labels. The advantages of the framework have been evaluated on a new and challenging public dataset, CAD-120, achieving an accuracy of 90.1% and 91.1% for low-level and high-level activities, respectively. This is an improvement over both entirely data-driven approaches and purely ontology-based approaches. As an added value, so that the system is sufficiently simple and flexible to be managed by non-expert users, and thus to facilitate the transfer of research to industry, a development framework was built, composed of a programming toolbox, a hybrid crisp and fuzzy architecture, and graphical models to represent and configure human behaviour in Smart Spaces, making the framework more usable in the final application. As a result, human behaviour recognition can help assist people with special needs, for example in healthcare, independent elderly living, remote rehabilitation monitoring, industrial process guideline control, and many other cases. This thesis shows use cases in these areas.
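The fuzzy ontology itself cannot be reconstructed from the abstract, but the toy Python sketch below illustrates the general idea of fuzzy linguistic labels feeding a rule-based activity recognizer; the labels, membership functions and rule are invented and are not those of the thesis.

```python
# Toy fuzzy linguistic labels and one Mamdani-style rule of the kind a hybrid
# recognizer might apply on top of sub-activity observations (e.g., from Kinect).
# Everything here is invented for illustration.

def near(distance_cm: float) -> float:
    """Degree to which an object is 'near' the user's hand."""
    if distance_cm <= 10:
        return 1.0
    if distance_cm >= 40:
        return 0.0
    return (40 - distance_cm) / 30

def long_lasting(seconds: float) -> float:
    """Degree to which a sub-activity has lasted a 'long' time."""
    return min(1.0, max(0.0, (seconds - 5) / 25))

def degree_drinking(cup_distance_cm: float, holding_seconds: float) -> float:
    """Rule: IF cup is near AND holding lasts long THEN activity is 'drinking'.
    Fuzzy AND is taken as the minimum of the membership degrees."""
    return min(near(cup_distance_cm), long_lasting(holding_seconds))

print(degree_drinking(cup_distance_cm=12.0, holding_seconds=20.0))
```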

Relevance: 40.00%

Abstract:

In this thesis, we propose to infer pixel-level labelling in video by utilising only object category information, exploiting the intrinsic structure of video data. Our motivation is the observation that image-level labels are much easier to acquire than pixel-level labels, and that it is natural to find a link between image-level recognition and pixel-level classification in video data, allowing learned recognition models to be transferred from one domain to the other. To this end, this thesis proposes two domain adaptation approaches that adapt a deep convolutional neural network (CNN) image recognition model trained on labelled image data to the target domain, exploiting both the semantic evidence learned by the CNN and the intrinsic structure of unlabelled video data. Our proposed approaches explicitly model and compensate for the domain shift from the source domain to the target domain, which in turn underpins a robust semantic object segmentation method for natural videos. We demonstrate the superior performance of our methods through extensive evaluations on challenging datasets, compared with state-of-the-art methods.
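One generic way to obtain pixel-level semantic evidence from an image-level recognition model is a class activation map; the PyTorch sketch below illustrates that idea on a single frame and is not the adaptation method proposed in the thesis.

```python
import torch
from torchvision.models import resnet18, ResNet18_Weights

# Generic illustration: turn an image-level classifier's prediction into a
# coarse spatial evidence map (class activation map) for one video frame.
# This is not the thesis's domain adaptation approach.
weights = ResNet18_Weights.DEFAULT
model = resnet18(weights=weights).eval()

features = {}
model.layer4.register_forward_hook(lambda module, inp, out: features.update(last=out))

frame = torch.rand(1, 3, 224, 224)            # stand-in for a video frame
with torch.no_grad():
    logits = model(frame)                      # image-level class scores
cls = int(logits.argmax(dim=1))                # most likely object category

fmap = features["last"][0]                     # [512, 7, 7] convolutional features
cam = torch.einsum("c,chw->hw", model.fc.weight[cls], fmap)
cam = torch.relu(cam)
cam = cam / (cam.max() + 1e-8)                 # normalize to [0, 1]
print("coarse evidence map for class", cls, tuple(cam.shape))
```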

Relevance: 30.00%

Abstract:

The electronic prescription system in its current form originated during 2001 from an initiative of the Ministry of Social Affairs and Health, implemented by the Social Insurance Institution of Finland. The national prescription server in use enables patients' rights to, among other things, free choice of pharmacy and the direct reimbursement procedure for medicines. The electronic prescription is intended to be integrated with existing patient record systems, so technologies of the Semantic Web, such as Web services and XML-based languages, play an important role in prescription management. This thesis presents the operation of the electronic prescription system and the existing technologies that make it possible. New value-adding technologies are also introduced, such as the RDF(S) description languages, which enable, among other things, queries over prescriptions and thus new health services for both patients and physicians. Information security is of course also essential, since prescriptions carry sensitive information between organizations. Security is promoted, among other things, with digital signatures and with smart-card-based authentication to the system. For the electronic prescription to work as intended in the future and to enable value-added services, attention must be paid not only to the content but also to describing it. Precise definition of the content and its relationships helps, among other things, in formulating prescription queries, which is an essential part of maintaining patient safety. The use of description languages and ontologies can also help in determining drug therapy amid the jungle of diverse and constantly multiplying medicines, since medicines can be searched from databases while the prescription is being written.
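As a small illustration of how RDF could make prescriptions queryable, the Python sketch below describes one prescription with rdflib and runs a SPARQL query over it; the vocabulary (the example namespace and property names) is invented and is not an official prescription schema.

```python
from rdflib import Graph, Literal, Namespace, RDF

# Invented example vocabulary -- not an official e-prescription schema.
EX = Namespace("http://example.org/eprescription/")

g = Graph()
rx = EX["prescription/12345"]
g.add((rx, RDF.type, EX.Prescription))
g.add((rx, EX.medication, Literal("Amoxicillin 500 mg")))
g.add((rx, EX.dosage, Literal("1 capsule three times a day")))
g.add((rx, EX.prescribedBy, EX["physician/98765"]))

# The kind of query a pharmacy or decision-support system might run.
results = g.query("""
    PREFIX ex: <http://example.org/eprescription/>
    SELECT ?rx ?med WHERE { ?rx a ex:Prescription ; ex:medication ?med . }
""")
for row in results:
    print(row.rx, row.med)
```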

Relevance: 30.00%

Abstract:

The thesis studied the parsing of a file conforming to the IFC (Industry Foundation Classes) data model, further processing of the data, and data transfer between applications. The available options for implementing the data transfer programmatically were examined, as well as the direction in which such data transfer is heading in the future. In the applied part, parsing and interpretation of an ISO 10303 file (Part 21) conforming to the IFC standard into XML format was implemented. In the application, an IFC file produced with CAD software is parsed and interpreted using the C# programming language, and the data is stored in an XML database to be read by a cost estimation application.
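The implementation itself is in C#, but the following small Python sketch conveys the same parsing idea: reading ISO 10303-21 ("Part 21") entity lines and emitting XML. The sample records and the naive attribute splitting are illustrative only; a real IFC parser also resolves #-references, nested lists and multi-line records.

```python
import re
import xml.etree.ElementTree as ET

# Invented sample of STEP Part 21 entity records, as found in an IFC file.
SAMPLE = """
#1=IFCPROJECT('2O2Fr$t4X7Zf8NOew3FNrX',#2,'Demo project',$,$,$,$,(#7),#8);
#9=IFCWALL('1hOSvn6df7F8_7GcBWlR72',#2,'Wall-001',$,$,#10,#11,$);
"""

ENTITY = re.compile(r"#(\d+)\s*=\s*(\w+)\s*\((.*)\);")

root = ET.Element("ifc")
for line in SAMPLE.strip().splitlines():
    match = ENTITY.match(line.strip())
    if not match:
        continue
    entity_id, entity_type, args = match.groups()
    element = ET.SubElement(root, "entity", id=entity_id, type=entity_type)
    # Naive split: good enough for flat records without commas inside lists.
    for index, raw in enumerate(args.split(",")):
        ET.SubElement(element, "attribute", index=str(index)).text = raw.strip()

print(ET.tostring(root, encoding="unicode"))
```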

Relevance: 30.00%

Abstract:

Information systems integration is nowadays an important part of companies' operations and of maintaining competitiveness. Service-oriented architecture and Web services are a new, flexible way to integrate information systems. One of the core components of Web services is UDDI, Universal Description, Discovery and Integration. UDDI acts as a service registry: it defines a way to publish, find and take into use Web services. Web services can be searched for in UDDI by various criteria, such as the location of the service, the company name and the line of business. UDDI is itself also a Web service, based on the XML description language and the SOAP protocol. This thesis takes a closer look at UDDI, including its technical details. In the view of both publishers and users, an essential shortcoming of UDDI has been the lack of security, which has considerably limited its use and adoption. The thesis therefore examines security-related issues and solutions in more detail, as well as the significance of UDDI for companies.
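For a feel of how a registry query looks on the wire, the Python sketch below posts a SOAP message shaped like a UDDI version 2 find_business inquiry to a hypothetical registry; the endpoint is invented and the envelope may need adjusting for a real registry.

```python
import requests

INQUIRY_URL = "https://uddi.example.com/inquire"   # hypothetical registry endpoint

# SOAP message following the general shape of a UDDI v2 find_business inquiry.
SOAP_REQUEST = """<?xml version="1.0" encoding="UTF-8"?>
<Envelope xmlns="http://schemas.xmlsoap.org/soap/envelope/">
  <Body>
    <find_business generic="2.0" xmlns="urn:uddi-org:api_v2">
      <name>Acme Logistics</name>
    </find_business>
  </Body>
</Envelope>"""

response = requests.post(
    INQUIRY_URL,
    data=SOAP_REQUEST.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8", "SOAPAction": '""'},
)
print(response.status_code)
print(response.text[:500])   # a businessList with matching businessInfo entries
```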

Relevance: 30.00%

Abstract:

The increasing usage of Web Services has been a result of efforts to automate Web Service discovery and interoperability. Semantic Web Service descriptions create the basis for automatic Web Service information management tasks such as discovery and interoperability. Discussion of the opportunities enabled by service descriptions has arisen in recent years. The end user has been considered only as a consumer of services, and information sharing has occurred from a single service provider to the public in service distribution. Social networking has changed the nature of services: the end user can no longer be seen only as a service consumer, because given a semantically rich environment and the right tools, the end user will in the future also be a producer of services. This study investigates ways to empower end users to create service descriptions on a mobile device. Special focus is given to the changed role of the end user in service creation. In addition, Web Services technologies are presented and different Semantic Web Service description approaches are compared. The main focus of the study is to investigate tools and techniques that enable service description creation and semantic information management on a mobile device.

Relevance: 30.00%

Abstract:

This study focuses on the creation of work commitment at the rhetorical level, that is to say, the rhetorical and linguistic means that are used to construct or elicit worker commitment. The commitment of the worker is one of the most important objectives of all business communication. There is a strong demand for commitment, identification, or adherence to work in various walks of life, although actual circumstances are often rather insecure and short-sighted. The analysis demonstrates that the actual object of commitment may vary from the work itself or the work organization to one's career or professional development. The ideal pattern of commitment appears comprehensive: it contains affective and rational as well as ideological dimensions. This thesis is a rhetorical discourse analysis, or a rhetorical analysis with discourse-analytic influences. Primarily it is a rhetorical analysis in which discourses are observed mainly as the tools of a rhetorician. The study also draws on various findings from the sociology of work and organizational studies. The research material consists of magazines from three companies and web pages from six different companies. The study explores recurring discourses in commitment rhetoric, mainly by identifying core concepts and recurrent patterns of argumentation; in this part of the analysis, a semantic and concept-analytic approach is also employed. Companies talk about ideas, values, feelings and attitudes, thus constructing a united and unanimous group and an ideal model of commitment. Probably the most important domain of commitment rhetoric is the construction of group and community. Collective identity is constructed through shared meanings, values and goals, and these rhetorical group constructs can be used and modified in various ways. Every now and then business communication also focuses on the individual, employing different speakers, positions and the discourses associated with them. Constructing and using these positions also paints a picture of the ideal worker and the ideal work orientation; for example, the so-called entrepreneurship model is frequently used here. Commitment talk and the rhetorical situation it constructs are full of tensions and contradictions; the presence of seemingly contradictory values, goals or identities is constant. This study demonstrates tensions such as self-fulfilment and individuality versus conformity, and constant change and development versus dependable establishment, and analyses how they are used, processed and dealt with. An important dimension of commitment rhetoric is the way companies define themselves with respect to current social issues, how they define themselves as responsible social actors, and how they, in this sense, seek to appear as attractive workplaces. This point of view gives rise to problematic questions as companies process the tensions between, for example, rhetoric and action, or ethical ideals and business conditions. For its part, commitment talk also defines the meaning of waged work in human life. A changing society, changing working life and changing business environments set new demands and standards for workers and for the content of work. From this point of view, this research contributes to the study of working life and takes part in the current public discussion concerning the meaning, role and future of waged work.