952 results for TSDEAI Semantic-Web Twitter Semantic-Search WordNet LSA
Abstract:
Background: Mesial temporal lobe epilepsy (MTLE) is the most common type of focal epilepsy in adults and can be cured by surgery. One of the main complications of this surgery, however, is a decline in language abilities, whose magnitude is related to the degree of language lateralization to the left hemisphere. Most fMRI paradigms used to determine language dominance in epileptic populations have used active language tasks. These paradigms are sometimes too complex and may result in patient underperformance; only a few studies have used purely passive tasks, such as listening to standard speech. Methods: In the present study we characterized language lateralization in patients with MTLE using a rapid and passive semantic language task. We used functional magnetic resonance imaging (fMRI) to study 23 patients [12 with left (LMTLE) and 11 with right mesial temporal lobe epilepsy (RMTLE)] and 19 healthy right-handed controls with a 6-minute semantic task in which subjects passively listened to groups of sentences (SEN) and pseudo-sentences (PSEN). A lateralization index (LI) was computed using a priori regions of interest in the temporal lobe. Results: The significant contrasts produced activations in both temporal lobes for all participants. 81.8% of RMTLE patients and 79% of healthy individuals showed bilateral language representation for this task, whereas 50% of LMTLE patients presented atypical right-hemispheric dominance in the LI. More importantly, the degree of right lateralization in LMTLE patients was correlated with the age of epilepsy onset. Conclusions: The simple, rapid, passive task described in this study, which does not depend on patient collaboration, produces robust activation of the temporal lobe in both patients and controls and can reveal a pattern of atypical language organization in LMTLE patients. Furthermore, the atypical right-lateralization pattern in LMTLE patients was associated with an earlier age at epilepsy onset, in line with the idea that early onset of epileptic activity is associated with larger neuroplastic changes.
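The abstract does not define the LI; the conventional formula, of which studies of this kind typically use some close variant, contrasts activation in the left and right regions of interest:

$$LI = \frac{A_L - A_R}{A_L + A_R}$$

where $A_L$ and $A_R$ are activation measures (e.g., counts of suprathreshold voxels) in the left and right temporal regions of interest; positive values indicate left-hemispheric dominance and negative values right-hemispheric dominance.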
Abstract:
Recognition of environmental sounds is believed to proceed through discrimination steps from broad to more narrow categories. Very little is known about the neural processes that underlie fine-grained discrimination within narrow categories, or about their plasticity in relation to newly acquired expertise. We investigated how the cortical representation of birdsongs is modulated by brief training to recognize individual species. During a 60-minute session, participants learned to recognize a set of birdsongs; they significantly improved their performance for trained (T) but not control (C) species, which were counterbalanced across participants. Auditory evoked potentials (AEPs) were recorded during pre- and post-training sessions. Pre- vs. post-training changes in AEPs differed significantly between T and C species i) at 206-232 ms post stimulus onset within a cluster in the anterior part of the left superior temporal gyrus; ii) at 246-291 ms in the left middle frontal gyrus; and iii) at 512-545 ms in the left middle temporal gyrus as well as bilaterally in the cingulate cortex. All effects were driven by weaker activity for T than C species. Thus, expertise in discriminating T species modulated early stages of semantic processing, during and immediately after the time window that sustains the discrimination between human and animal vocalizations. Moreover, the training-induced plasticity is reflected in the sharpening of a left-lateralized semantic network, including the anterior part of the temporal convexity and the frontal cortex. Training to identify birdsongs also influenced the processing of C species, however, but at a much later stage. Correct discrimination of untrained sounds seems to require an additional step resulting from lower-level feature analysis, such as apperception. We therefore suggest that access to objects within an auditory semantic category depends on the subject's level of expertise. More specifically, correct intra-categorical auditory discrimination for untrained items follows the temporal hierarchy and takes place at a late stage of semantic processing, whereas correct categorization of individually trained stimuli occurs earlier, during a period contemporaneous with the discrimination between human and animal vocalizations, and involves a parallel semantic pathway requiring expertise.
Abstract:
Current-day web search engines (e.g., Google) do not crawl and index a significant portion of the Web; hence, web users relying on search engines alone are unable to discover and access a large amount of information in the non-indexable part of the Web. Specifically, dynamic pages generated from parameters provided by a user via web search forms (or search interfaces) are not indexed by search engines and cannot be found in search results. Such search interfaces provide web users with online access to myriad databases on the Web. To obtain information from a web database of interest, a user issues a query by specifying query terms in a search form and receives the query results: a set of dynamic pages that embed the required information from the database. At the same time, issuing a query via an arbitrary search interface is an extremely complex task for any kind of automatic agent, including web crawlers, which, at least up to the present day, do not even attempt to pass through web forms on a large scale. In this thesis, our primary object of study is the huge portion of the Web (hereafter referred to as the deep Web) hidden behind web search interfaces. We concentrate on three classes of problems around the deep Web: characterizing the deep Web, finding and classifying deep web resources, and querying web databases. Characterizing the deep Web: Though the term deep Web was coined in 2000, long ago by the standards of web-related concepts and technologies, we still do not know many important characteristics of the deep Web. Another matter of concern is that existing surveys of the deep Web are predominantly based on studies of deep web sites in English. One can then expect that findings from these surveys may be biased, especially given the steady increase in non-English web content. Surveying national segments of the deep Web is therefore of interest not only to national communities but to the whole web community as well. In this thesis, we propose two new methods for estimating the main parameters of the deep Web. We use the suggested methods to estimate the scale of one specific national segment of the Web and report our findings. We also build and make publicly available a dataset describing more than 200 web databases from that national segment. Finding deep web resources: The deep Web has been growing at a very fast pace, with an estimated hundreds of thousands of deep web sites. Due to the huge volume of information in the deep Web, there has been significant interest in approaches that allow users and computer applications to leverage this information. Most approaches have assumed that search interfaces to the web databases of interest are already discovered and known to query systems; such assumptions do not hold, mostly because of the large scale of the deep Web; indeed, for any given domain of interest there are too many web databases with relevant content. Thus, the ability to locate search interfaces to web databases becomes a key requirement for any application accessing the deep Web. In this thesis, we describe the architecture of the I-Crawler, a system for finding and classifying search interfaces. The I-Crawler is intentionally designed to be used in deep web characterization studies and for constructing directories of deep web resources.
Unlike almost all existing approaches to the deep Web, the I-Crawler is able to recognize and analyze JavaScript-rich and non-HTML searchable forms. Querying web databases: Retrieving information by filling out web search forms is a typical task for a web user, all the more so as the interfaces of conventional search engines are themselves web forms. At present, a user needs to manually provide input values to search interfaces and then extract the required data from the result pages. Manually filling out forms is cumbersome and infeasible for complex queries, yet such queries are essential for many web searches, especially in e-commerce. Automating the querying and retrieval of data behind search interfaces is therefore desirable and essential for tasks such as building domain-independent deep web crawlers and automated web agents, searching for domain-specific information (vertical search engines), and extracting and integrating information from various deep web resources. We present a data model for representing search interfaces and discuss techniques for extracting field labels, client-side scripts and structured data from HTML pages. We also describe a representation of result pages and discuss how to extract and store the results of form queries. Finally, we present a user-friendly and expressive form query language that allows one to retrieve information behind search interfaces and extract useful data from the result pages based on specified conditions. We implement a prototype system for querying web databases and describe its architecture and component design.
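As a rough illustration of the kind of automation the thesis targets, the following Python sketch submits a query term to a single hypothetical search form and extracts structured records from the result page. The URL, parameter name and CSS selectors are invented for illustration; a real deep web query system must additionally handle form discovery, label extraction and client-side scripts, as discussed above.

```python
# Minimal sketch of automated form querying, the task the I-Crawler and
# form query language address. The URL, form field name, and result-page
# selectors below are hypothetical placeholders.
import requests
from bs4 import BeautifulSoup

SEARCH_URL = "http://example.com/search"   # hypothetical search interface

def query_web_database(term: str) -> list[dict]:
    # Submit the search form with the user's query term.
    response = requests.get(SEARCH_URL, params={"q": term}, timeout=10)
    response.raise_for_status()

    # Parse the dynamic result page and extract structured records.
    soup = BeautifulSoup(response.text, "html.parser")
    results = []
    for row in soup.select("div.result"):          # assumed result markup
        title = row.select_one("a.title")
        price = row.select_one("span.price")
        results.append({
            "title": title.get_text(strip=True) if title else None,
            "url": title["href"] if title else None,
            "price": price.get_text(strip=True) if price else None,
        })
    return results

if __name__ == "__main__":
    for record in query_web_database("digital camera"):
        print(record)
```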
Abstract:
Since April 2014, more than two million tweets relating to the commemorations of the Centenary of the Great War have been published, mainly in English and French. This paper focuses on one specific practice tied to the ways Twitter is used: link sharing. Looking in particular at links pointing to French-language web pages shared on 11 November 2015, the author built a corpus of roughly 1,000 web pages and analyzes their content. He examines in particular the emergence of the site « Mémoire des Hommes » as a site of memory (lieu de mémoire), and more specifically of its database of those who died for France (morts pour la France).
Abstract:
The role of grammatical class in lexical access and representation is still not well understood. Grammatical effects obtained in picture-word interference experiments have been argued to show the operation of grammatical constraints during lexicalization when syntactic integration is required by the task. Alternative views hold that the ostensibly grammatical effects actually derive from the coincidence of semantic and grammatical differences between lexical candidates. We present three picture-word interference experiments conducted in Spanish. In the first two, the semantic relatedness (related or unrelated) and the grammatical class (noun or verb) of the target and the distracter were manipulated in an infinitive-form action naming task in order to disentangle their contributions to verb lexical access. In the third experiment, a possible confound between grammatical class and semantic domain (objects or actions) was eliminated by using action nouns as distracters. A condition in which participants were asked to name the action pictures using an inflected form of the verb was also included, to explore whether the need for syntactic integration modulates the appearance of grammatical effects. Whereas action words (nouns or verbs), but not object nouns, produced longer reaction times irrespective of their grammatical class in the infinitive condition, only verbs slowed latencies in the inflected-form condition. Our results suggest that speech production relies on the exclusion of candidate responses that do not fulfil task-pertinent criteria, such as membership in the appropriate semantic domain or grammatical class. Taken together, these findings are explained by a response-exclusion account of speech output. This and alternative hypotheses are discussed.
Abstract:
In this paper we describe a browsing and searching personalization system for digital libraries based on the use of ontologies for describing the relationships between all the elements that take part in a digital library scenario of use. The main goal of this project is to help the users of a digital library improve their experience of use by means of two complementary strategies: first, by maintaining a complete history record of their browsing and searching activities, which is part of a navigational user profile that includes preferences and all aspects related to community involvement; and second, by reusing the knowledge extracted from the previous usage of other users with similar profiles. This can be accomplished by narrowing and focusing the search results and browsing options through a recommendation system that organizes such results in the most appropriate manner, using ontologies and concepts drawn from the semantic web field. The complete integration of the experience of using a digital library into the learning process is also pursued. Both the usage and the information organization can also be exploited to extract useful knowledge from the way users interact with a digital library, knowledge that can be used to improve several design aspects of the library, ranging from internal organization to human factors and user interfaces. Although this project is still at an early development stage, it is possible to identify all the desired functionalities and requirements that are necessary to fully integrate the use of a digital library into an e-learning environment.
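As a toy illustration of the profile-reuse strategy sketched above, the following snippet re-ranks search results by the overlap between the ontology concepts in a user's navigational profile and the concepts annotating each result. The profile structure and Jaccard weighting are assumptions for illustration, not the system's actual design.

```python
# Illustrative sketch: re-rank search results by overlap between the
# ontology concepts in a user's browsing/searching history and the
# concepts annotating each result. Profile structure and weighting are
# hypothetical, not the system described in the abstract.
def profile_similarity(user_concepts: set[str], item_concepts: set[str]) -> float:
    # Jaccard overlap between profile concepts and item annotations.
    if not user_concepts or not item_concepts:
        return 0.0
    return len(user_concepts & item_concepts) / len(user_concepts | item_concepts)

def rerank(results: list[dict], user_concepts: set[str]) -> list[dict]:
    # Order results so those closest to the user's profile come first.
    return sorted(results,
                  key=lambda r: profile_similarity(user_concepts, set(r["concepts"])),
                  reverse=True)

if __name__ == "__main__":
    profile = {"semantic-web", "ontology", "e-learning"}
    results = [
        {"title": "Intro to databases", "concepts": ["sql", "storage"]},
        {"title": "Ontology-based search", "concepts": ["ontology", "semantic-web"]},
    ]
    for r in rerank(results, profile):
        print(r["title"])
```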
Abstract:
"Helmiä sioille", pärlor för svin, säger man på finska om någonting bra och fint som tas emot av en mottagare som inte vill eller har ingen förmåga att förstå, uppskatta eller utnyttja hela den potential som finns hos det mottagna föremålet, är ointresserad av den eller gillar den inte. För sådana relativt stabila flerordiga uttryck, som är lagrade i språkbrukarnas minnen och som demonstrerar olika slags oregelbundna drag i sin struktur använder man inom lingvistiken bl.a. termerna "idiom" eller "fraseologiska enheter". Som en oregelbundenhet kan man t.ex. beskriva det faktum att betydelsen hos uttrycket inte är densamma som man skulle komma till ifall man betraktade det som en vanlig regelbunden fras. En annan oregelbundenhet, som idiomforskare har observerat, ligger i den begränsade förmågan att varieras i form och betydelse, som många idiom har jämfört med regelbundna fraser. Därför talas det ofta om "grundform" och "grundbetydelse" hos idiom och variationen avses som avvikelse från dessa. Men när man tittar på ett stort antal förekomstexempel av idiom i språkbruk, märker man att många av dem tillåter variation, t.o.m. i sådan utsträckning att gränserna mellan en variant och en "grundform" suddas ut, och istället för ett idiom råkar vi plötsligt på en "familj" av flera besläktade uttryck. Allt detta väcker frågan om hur dessa uttryck egentligen ska vara representerade i språket. I avhandlingen utförs en kritisk granskning av olika tidigare tillvägagångssätt att beskriva fraseologiska enheter i syfte att klargöra vilka svårigheter deras struktur och variation erbjuder för den lingvistiska teorin. Samtidigt presenteras ett alternativt sätt att beskriva dessa uttryck. En systematisk och formell modell som utvecklas i denna avhandling integrerar en beskrivning av idiom på många olika språkliga nivåer och skildrar deras variation i form av ett nätverk och som ett resultat av samspel mellan idiomets struktur och kontexter där det förekommer, samt av interaktion med andra fasta uttryck. Modellen bygger på en fördjupande, språkbrukbaserad analys av det finska idiomet "X HEITTÄÄ HELMIÄ SIOILLE" (X kastar pärlor för svin).
Abstract:
A web service is a software system that provides a machine-processable interface to other machines over the network using different Internet protocols. Web services are increasingly used in industry to automate tasks and offer services to a wider audience. The REST architectural style aims at producing scalable and extensible web services using technologies that play well with the existing tools and infrastructure of the web. It provides a uniform set of operations that can be used to invoke the CRUD interface (create, retrieve, update and delete) of a web service. The stateless behavior of the service interface requires every request to a resource to be independent of previous ones, which facilitates scalability. Automated systems, e.g., hotel reservation systems, provide advanced scenarios for stateful services that require a certain sequence of requests to be followed in order to fulfill the service goals. Designing and developing such services for advanced scenarios under REST constraints requires rigorous approaches capable of creating web services that can be trusted for their behavior; such systems can be termed dependable systems. This thesis presents an integrated design, analysis and validation approach that helps the service developer create dependable and stateful REST web services. The main contribution of this thesis is a novel model-driven methodology for designing behavioral REST web service interfaces and their compositions. The behavioral interfaces provide information on which methods can be invoked on a service and on the pre- and post-conditions of these methods. The methodology uses the Unified Modeling Language (UML) as the modeling language, which has a wide user base and mature, continuously evolving tools. We use UML class diagrams and UML state machine diagrams with additional design constraints to provide resource and behavioral models, respectively, for designing REST web service interfaces. These service design models serve as a specification document, and the information presented in them has manifold applications. The design models also capture the time and domain requirements of the service, which supports requirement traceability, an important part of our approach: unfulfilled requirements of the service can be traced back and forth to capture faults in the design models and in other elements of the software development environment. Information about service actors is also included in the design models; it is needed to authenticate service requests by authorized actors, since not all types of users have access to all resources. In addition, by following our design approach, the service developer can ensure that the designed web service interfaces are REST compliant. The second contribution of this thesis is consistency analysis of behavioral REST interfaces. To overcome inconsistency problems and design errors in our service models, we use semantic technologies: the REST interfaces are represented in the Web Ontology Language (OWL 2), so that they can be part of the semantic web, and are checked with OWL 2 reasoners for unsatisfiable concepts, which would lead to implementations that fail. This work is fully automated thanks to the implemented translation tool and the existing OWL 2 reasoners.
The third contribution of this thesis is the verification and validation of REST web services. We use model checking techniques with the UPPAAL model checker for this purpose. Timed automata are generated from the UML-based service design models with our transformation tool and verified for basic characteristics such as deadlock freedom, liveness, reachability and safety. The implementation of a web service is tested using a black-box testing approach: test cases are generated from the UPPAAL timed automata and, using the online testing tool UPPAAL TRON, the service implementation is validated at runtime against its specifications. Requirement traceability is also addressed in our validation approach, with which we can see which service goals are met and trace unfulfilled service goals back to faults in the design models. A final contribution of the thesis is an implementation of behavioral REST interfaces and service monitors from the service design models. The partial code generation tool creates code skeletons of REST web services with method pre- and post-conditions. Method preconditions constrain the user to invoke the stateful REST service under the right conditions, while postconditions constrain the service developer to implement the right functionality. The details of the methods can be inserted manually by the developer as required; we do not target complete automation because we focus only on the interface aspects of the web service. The applicability of the approach is demonstrated with a pedagogical example of a hotel room booking service and a relatively complex worked example of a holiday booking service taken from an industrial context. The former presents a simple explanation of the approach, and the latter shows how stateful and timed web services offering complex scenarios and involving other web services can be constructed using our approach.
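As an illustration of what such pre- and post-condition skeletons enforce, consider a minimal sketch of a hotel room booking operation (echoing the pedagogical example above), guarded by a precondition (the room must be free) and a postcondition (the room must end up booked by the requester). The class, method names and state model are hypothetical stand-ins, not output of the thesis's code generator.

```python
# Minimal sketch of a stateful REST-style operation guarded by pre- and
# post-conditions, in the spirit of the generated skeletons described
# above. Names and state model are hypothetical illustrations.
class RoomBookingResource:
    def __init__(self):
        # Resource state: room id -> booking holder (None = free).
        self.bookings: dict[str, str | None] = {"101": None, "102": None}

    def book(self, room_id: str, customer: str) -> dict:
        # Precondition: the room exists and is currently free.
        assert room_id in self.bookings, "unknown room"
        assert self.bookings[room_id] is None, "precondition violated: room not free"

        self.bookings[room_id] = customer  # state transition (PUT semantics)

        # Postcondition: the room is now booked by this customer.
        assert self.bookings[room_id] == customer, "postcondition violated"
        return {"room": room_id, "booked_by": customer}

if __name__ == "__main__":
    service = RoomBookingResource()
    print(service.book("101", "alice"))   # succeeds
    # service.book("101", "bob")          # would fail the precondition
```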
Abstract:
Context: Web services have been gaining popularity due to the success of service-oriented architecture and cloud computing. They offer service developers a tremendous opportunity to publish their services and applications beyond the boundaries of their organization or company. To fully exploit these opportunities, however, an efficient discovery mechanism is needed, and Web service discovery has therefore attracted considerable attention in Semantic Web research. Yet there have been no literature surveys that systematically map the existing research, so the overall impact of these research efforts and the level of maturity of their results remain unclear. This thesis provides an overview of the current state of research into Web service discovery mechanisms using a systematic mapping study. The work is based on papers published from 2004 to 2013 and elaborates various aspects of the analyzed literature, including classifying it in terms of the architectures, frameworks and methods used for Web service discovery. Objective: The objective of this work is to summarize the current knowledge available on Web service discovery mechanisms and to systematically identify and analyze the published research in order to identify the different approaches presented. Method: A systematic mapping study has been employed to assess the various Web service discovery approaches presented in the literature. Systematic mapping studies are useful for categorizing and summarizing the level of maturity of a research area. Results: The results indicate that numerous approaches are consistently being researched and published in this field. In terms of publication venue, conferences are the major contributing arena: 48% of the selected papers were published at conferences, illustrating the maturity level of the research topic. Additionally, the 52 selected papers are categorized into two broad segments, functional and non-functional approaches, taking into consideration architectural aspects and information-retrieval approaches, semantic matching, syntactic matching, behavior-based matching, as well as QoS and other constraints.
Abstract:
Human activity recognition in everyday environments is a critical but challenging task in Ambient Intelligence applications to achieve proper Ambient Assisted Living, and key challenges still remain to be dealt with to realize robust methods. One of the major limitations of today's Ambient Intelligence systems is the lack of semantic models of the activities in the environment that would let the system recognize the specific activity being performed by the user(s) and act accordingly. In this context, this thesis addresses the general problem of knowledge representation in Smart Spaces. The main objective is to develop knowledge-based models, equipped with semantics, to learn, infer and monitor human behaviours in Smart Spaces. Moreover, some aspects of this problem clearly involve a high degree of uncertainty, so the developed models must be equipped with mechanisms to manage this type of information. A fuzzy ontology and a semantic hybrid system are presented to allow modelling and recognition of a set of complex real-life scenarios where vagueness and uncertainty are inherent to the human nature of the users who perform them. The handling of uncertain, incomplete and vague data (i.e., missing sensor readings and activity execution variations, since human behaviour is non-deterministic) is approached for the first time through a fuzzy ontology validated in real-time settings within a hybrid data-driven and knowledge-based architecture. The semantics of activities, sub-activities and real-time object interaction are taken into consideration. The proposed framework consists of two main modules: the low-level sub-activity recognizer and the high-level activity recognizer. The first module detects sub-activities (i.e., actions or basic activities) taking input data directly from a depth sensor (Kinect). The main contribution of this thesis tackles the second component of the hybrid system, which sits on top of the first at a higher level of abstraction, takes the first module's output as input, and executes ontological inference to provide users, activities and their influence on the environment with semantics. This component is thus knowledge-based, and a fuzzy ontology was designed to model the high-level activities. Since activity recognition requires context-awareness and the ability to discriminate among activities in different environments, the semantic framework allows common-sense knowledge to be modelled as a rule-based system that supports expressions close to natural language in the form of fuzzy linguistic labels. The framework's advantages have been evaluated on a challenging new public dataset, CAD-120, achieving accuracies of 90.1% and 91.1% for low- and high-level activities respectively. This is an improvement over both entirely data-driven approaches and merely ontology-based approaches. As an added value, so that the system is sufficiently simple and flexible to be managed by non-expert users, and thus to facilitate the transfer of research to industry, a development framework was built, composed of a programming toolbox, a hybrid crisp-and-fuzzy architecture, and graphical models to represent and configure human behaviour in Smart Spaces, giving the framework more usability in the final application.
As a result, human behaviour recognition can help assist people with special needs, for example in healthcare, independent elderly living, remote rehabilitation monitoring, industrial process guideline control, and many other cases. This thesis shows use cases in these areas.
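To make the notion of fuzzy linguistic labels concrete, here is a minimal sketch of how such a label-based rule might be evaluated. The membership functions and the rule itself are illustrative assumptions, not the thesis's actual ontology or rule engine.

```python
# Minimal sketch of fuzzy linguistic labels feeding a natural-language-like
# rule, e.g. "IF the user is NEAR the microwave for a LONG time THEN the
# activity is 'making food'". Membership functions and the rule are
# hypothetical illustrations.
def near(distance_m: float) -> float:
    # Degree to which a distance counts as "near" (1.0 at 0 m, 0.0 beyond 2 m).
    return max(0.0, min(1.0, (2.0 - distance_m) / 2.0))

def long_duration(seconds: float) -> float:
    # Degree to which a dwell time counts as "long" (ramps up from 30 s to 120 s).
    return max(0.0, min(1.0, (seconds - 30.0) / 90.0))

def making_food_degree(distance_m: float, seconds: float) -> float:
    # Fuzzy AND (minimum t-norm) of the two antecedent labels.
    return min(near(distance_m), long_duration(seconds))

if __name__ == "__main__":
    # User 0.5 m from the microwave for 100 s -> high degree of "making food".
    print(f"making food: {making_food_degree(0.5, 100.0):.2f}")
```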
Abstract:
This study examines the efficiency of search engine advertising strategies employed by firms. The research setting is the online retailing industry, which is characterized by extensive use of Web technologies and high competition for market share and profitability. For Internet retailers, search engines increasingly serve as an information gateway for many decision-making tasks. In particular, search engine advertising (SEA) has opened a new marketing channel for retailers to attract new customers and improve their performance. In addition to natural (organic) search marketing strategies, search engine advertisers compete for top advertisement slots provided by search brokers such as Google and Yahoo! through keyword auctions, the rationale being that greater visibility on a search engine during a keyword search will capture customers' interest in a business and its product or service offerings. Search engines account for a large share of online activity today, and compared with the slow growth of traditional marketing channels, online search volumes continue to grow at a steady rate. According to the Search Engine Marketing Professional Organization, spending on search engine marketing by North American firms in 2008 was estimated at $13.5 billion. Despite the significant role SEA plays in Web retailing, scholarly research on the topic is limited. Prior studies of SEA have focused on search engine auction mechanism design; research on the business value of SEA has been limited by the lack of empirical data on search advertising practices. Recent advances in search and retail technologies have created data-rich environments that enable new research opportunities at the interface of marketing and information technology. This research uses extensive data from Web retailing and Google-based search advertising to evaluate Web retailers' use of resources, search advertising techniques, and other relevant factors that contribute to business performance across different metrics. The methods used include Data Envelopment Analysis (DEA), data mining, and multivariate statistics. This research contributes to the empirical literature by analyzing several Web retail firms in different industry sectors and product categories. One of the key findings is that the dynamics of sponsored search advertising differ between multi-channel and Web-only retailers: the key performance metrics for multi-channel retailers include measures such as online sales, conversion rate (CR), click-through rate (CTR), and impressions, while the key performance metrics for Web-only retailers focus on organic and sponsored ad ranks. These results provide a useful contribution to our organizational-level understanding of search engine advertising strategies for both multi-channel and Web-only retailers. They also contribute to current knowledge of technology-driven marketing strategies and give managers a better understanding of sponsored search advertising and its impact on various performance metrics in Web retailing.
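Of the methods listed, Data Envelopment Analysis is the most technical: it scores each decision-making unit (DMU, here a retailer) by solving a small linear program per unit. Below is a minimal sketch of the input-oriented CCR multiplier model using scipy; the input/output data are made-up toy values, not the study's dataset.

```python
# Minimal sketch of DEA (input-oriented CCR, multiplier form) with scipy.
# Each DMU's efficiency = max u.y0 subject to v.x0 = 1 and
# u.y_j - v.x_j <= 0 for all DMUs j, with u, v >= 0.
# The retailer data below are made-up toy values.
import numpy as np
from scipy.optimize import linprog

# rows = DMUs (retailers); columns = inputs (e.g., ad spend, impressions)
X = np.array([[20.0, 300.0], [30.0, 200.0], [40.0, 500.0]])
# columns = outputs (e.g., online sales, conversions)
Y = np.array([[50.0, 4.0], [60.0, 3.0], [70.0, 6.0]])

def ccr_efficiency(j0: int) -> float:
    n, m = X.shape          # n DMUs, m inputs
    s = Y.shape[1]          # s outputs
    # Decision vector: [u (s output weights), v (m input weights)].
    c = np.concatenate([-Y[j0], np.zeros(m)])          # maximize u.y0
    A_eq = np.concatenate([np.zeros(s), X[j0]])[None]  # v.x0 = 1
    A_ub = np.hstack([Y, -X])                          # u.y_j - v.x_j <= 0
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n), A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (s + m))
    return -res.fun  # efficiency score in (0, 1]

for j in range(len(X)):
    print(f"DMU {j}: efficiency = {ccr_efficiency(j):.3f}")
```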
Abstract:
This dissertation is a systematic study of the lexicon of Dene Sųłiné, an Athabaskan language of northwestern Canada. It presents the definitions and the syntactic and lexical co-occurrence patterns of more than 200 lexical units, lexemes and phrasemes, which represent an important part of the Dene Sųłiné vocabulary in seven domains: emotions, human character, the physical description of entities, the movement of living beings, the position of entities, atmospheric conditions, and topological formations, comparing them with the equivalent English vocabulary. The theoretical approach chosen is Meaning-Text Theory (MTT), a formal approach that emphasizes empirical semantic and lexicographic description. The research reveals important differences between the Dene Sųłiné lexicon and that of English at every level: in the correspondence between the conceptual representation, considered (quasi-)extralinguistic, and the semantic structure; in the lexicalization patterns of lexical units; and in the patterns of syntactic and lexical co-occurrence, which sometimes show interesting features specific to Dene Sųłiné.
Abstract:
In this thesis, we present the problems of business document exchange and propose a method to address them. We propose a methodology for adapting XML-based business standards to Semantic Web technologies by transforming documents defined in DTD or XML Schema into an ontological representation in OWL 2. We then propose an approach based on formal concept analysis to group ontology classes that share certain semantics, with the goal of improving the quality, readability and representation of the ontology. Finally, we propose ontology alignment to determine the semantic links between the heterogeneous business ontologies generated by the transformation process, helping companies communicate fruitfully.
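As a minimal sketch of the kind of XML Schema to OWL 2 transformation described, the following Python snippet (using rdflib) maps each named complexType of a toy schema to an owl:Class. The schema, namespace and single mapping rule are simplified assumptions; a full transformation must also handle elements, attributes and cardinality constraints.

```python
# Minimal sketch of a DTD/XSD -> OWL 2 transformation: each named
# complexType in an XML Schema becomes an OWL class. The schema snippet
# and ontology namespace below are hypothetical.
import xml.etree.ElementTree as ET
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF, RDFS

XSD_NS = "{http://www.w3.org/2001/XMLSchema}"
BIZ = Namespace("http://example.org/business#")   # hypothetical ontology IRI

xsd_source = """<?xml version="1.0"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:complexType name="Invoice"/>
  <xs:complexType name="PurchaseOrder"/>
</xs:schema>"""

g = Graph()
g.bind("owl", OWL)
g.bind("biz", BIZ)

root = ET.fromstring(xsd_source)
for ctype in root.iter(f"{XSD_NS}complexType"):
    name = ctype.get("name")
    if name:
        cls = BIZ[name]
        g.add((cls, RDF.type, OWL.Class))   # complexType -> owl:Class
        g.add((cls, RDFS.label, Literal(name)))

print(g.serialize(format="turtle"))
```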