971 results for Natural User Interfaces
Abstract:
Development of a system capable of processing natural-language queries entered by the user via the keyboard. The system can answer queries in Spanish related to an application domain represented by a relational database.
Abstract:
Protein-protein interactions encode the wiring diagram of cellular signaling pathways, and their deregulation underlies a variety of diseases, such as cancer. Inhibiting protein-protein interactions with peptide derivatives is a promising way to develop new biological and therapeutic tools. Here, we develop a general framework to computationally handle hundreds of non-natural amino acid sidechains and predict the effect of inserting them into peptides or proteins. We first generate all structural files (pdb and mol2), as well as parameters and topologies for standard molecular mechanics software (CHARMM and Gromacs). Accurate predictions of rotamer probabilities are provided by a novel combined knowledge- and physics-based strategy. Non-natural sidechains are useful for increasing peptide ligand binding affinity. Our results on non-natural mutants of a BCL9 peptide targeting beta-catenin show very good correlation between predicted and experimental binding free energies, indicating that such predictions can be used to design new inhibitors. Data generated in this work, as well as PyMOL and UCSF Chimera plug-ins for user-friendly visualization of non-natural sidechains, are all available at http://www.swisssidechain.ch. Our results enable researchers to rapidly and efficiently work with hundreds of non-natural sidechains.
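The reported agreement between predicted and experimental binding free energies is typically quantified with a correlation coefficient. A minimal sketch of that computation follows; the energy values are invented for illustration and are not data from the study:

```python
import math

# Hypothetical predicted vs experimental binding free energies (kcal/mol)
# for a handful of peptide mutants; illustrative values only.
predicted = [-8.1, -7.4, -9.0, -6.8, -8.6]
experimental = [-8.0, -7.1, -9.3, -6.5, -8.8]

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print(round(pearson_r(predicted, experimental), 3))  # 0.997
```

A coefficient near 1 on held-out mutants is the kind of evidence the abstract refers to when it says predictions "can be used to design new inhibitors".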
Abstract:
This document is a concise summary of a study of trail users on the Raccoon River Valley Trail commissioned by the Dallas County Conservation Board. It provides information associated with natural and cultural resources.
Abstract:
Current-day web search engines (e.g., Google) do not crawl and index a significant portion of the Web; hence, web users who rely on search engines alone cannot discover or access a large amount of information in the non-indexable part of the Web. Specifically, dynamic pages generated from parameters that a user provides via web search forms (or search interfaces) are not indexed by search engines and cannot be found in search results. Such search interfaces give web users online access to myriad databases on the Web. To obtain information from a web database of interest, a user issues a query by specifying query terms in a search form and receives the query results: a set of dynamic pages that embed the required information from the database. At the same time, issuing a query via an arbitrary search interface is an extremely complex task for any kind of automatic agent, including web crawlers, which, at least up to the present day, do not even attempt to pass through web forms on a large scale. In this thesis, our primary object of study is the huge portion of the Web (hereafter referred to as the deep Web) hidden behind web search interfaces. We concentrate on three classes of problems around the deep Web: characterizing the deep Web, finding and classifying deep web resources, and querying web databases. Characterizing the deep Web: Although the term deep Web was coined in 2000, long ago by the standards of web-related concepts and technologies, we still do not know many important characteristics of the deep Web. Another matter of concern is that existing surveys of the deep Web are predominantly based on studies of deep web sites in English. One can therefore expect the findings of these surveys to be biased, especially given the steady increase in non-English web content.
Surveying national segments of the deep Web is thus of interest not only to national communities but to the web community as a whole. In this thesis, we propose two new methods for estimating the main parameters of the deep Web. We use these methods to estimate the scale of one specific national segment of the Web and report our findings. We also build, and make publicly available, a dataset describing more than 200 web databases from that national segment. Finding deep web resources: The deep Web has been growing at a very fast pace; it has been estimated that there are hundreds of thousands of deep web sites. Because of the huge volume of information in the deep Web, there has been significant interest in approaches that allow users and computer applications to leverage this information. Most approaches have assumed that the search interfaces to the web databases of interest are already discovered and known to query systems. Such assumptions do not hold, however, largely because of the sheer scale of the deep Web: for any given domain of interest there are too many web databases with relevant content. The ability to locate search interfaces to web databases therefore becomes a key requirement for any application accessing the deep Web. In this thesis, we describe the architecture of the I-Crawler, a system for finding and classifying search interfaces. The I-Crawler is intentionally designed to be used in deep Web characterization studies and for constructing directories of deep web resources. Unlike almost all other approaches to the deep Web to date, the I-Crawler is able to recognize and analyze JavaScript-rich and non-HTML searchable forms. Querying web databases: Retrieving information by filling out web search forms is a typical task for a web user, all the more so because the interfaces of conventional search engines are themselves web forms.
At present, a user must manually provide input values to search interfaces and then extract the required data from the result pages. Manually filling out forms is cumbersome, and infeasible for complex queries, yet such queries are essential to many web searches, especially in e-commerce. Automating the querying and retrieval of data behind search interfaces is therefore desirable for tasks such as building domain-independent deep web crawlers and automated web agents, searching for domain-specific information (vertical search engines), and extracting and integrating information from various deep web resources. We present a data model for representing search interfaces and discuss techniques for extracting field labels, client-side scripts and structured data from HTML pages. We also describe a representation of result pages and discuss how to extract and store the results of form queries. In addition, we present a user-friendly and expressive form query language that allows one to retrieve information behind search interfaces and extract useful data from the result pages based on specified conditions. We implement a prototype system for querying web databases and describe its architecture and component design.
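As a rough illustration of the field-label extraction step (the thesis's actual data model and techniques are more elaborate; the form HTML below is a made-up example), `<label for=...>` elements can be paired with `<input>` names using only the Python standard library:

```python
from html.parser import HTMLParser

class FormLabelParser(HTMLParser):
    """Collect <label for=...> text and <input> names from a search form."""
    def __init__(self):
        super().__init__()
        self.labels = {}        # input id -> human-readable label text
        self.inputs = []        # input names in document order
        self._current_for = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "label":
            self._current_for = attrs.get("for")
        elif tag == "input" and attrs.get("name"):
            self.inputs.append(attrs["name"])

    def handle_data(self, data):
        if self._current_for and data.strip():
            self.labels[self._current_for] = data.strip()

    def handle_endtag(self, tag):
        if tag == "label":
            self._current_for = None

html = """
<form action="/search">
  <label for="q">Title keywords</label><input id="q" name="q">
  <label for="yr">Year</label><input id="yr" name="year">
</form>
"""
parser = FormLabelParser()
parser.feed(html)
print(parser.labels)  # {'q': 'Title keywords', 'yr': 'Year'}
print(parser.inputs)  # ['q', 'year']
```

A deep web crawler would feed such label/field pairs into its interface model before deciding which query terms to submit through the form.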
Abstract:
With the growth of new technologies, using online tools has become part of everyday life. This has a particular impact on researchers, as the data obtained from various experiments must be analyzed, and programming knowledge has become mandatory even for pure biologists. Hence, VTT developed a new tool, R Executables (REX), a web application that provides a graphical interface to biological data functions such as image analysis, gene expression data analysis, plotting, and disease/control studies, using R functions to produce results. REX lets biologists enter values directly and run the required analysis with a single click; the program processes the given data in the background and returns results rapidly. With the growth of data and load on the server, the interface developed problems with processing time, a poor GUI, data storage, security, a minimally interactive user experience, and crashes on large amounts of data. This thesis describes how these problems were resolved to make REX a better application for the future. The old REX was developed with Python and Django; the new version is built with Vaadin, a Java framework for developing web applications with rich components. Vaadin provides better security, better speed, and a good, interactive interface. In this thesis, a subset of REX functionality, including IST bulk plotting and image segmentation, was selected and reimplemented in Vaadin. I wrote 662 lines of code, with Vaadin as the front-end handler and the R language for back-end data retrieval, computation and plotting. The application is designed so that further functionality can be migrated easily from the old REX.
Future development will focus on adding high-throughput screening functions and gene expression database handling.
Abstract:
This paper presents the results of an empirical study on the use of mobile videoconferencing according to the user's context, in order to propose guidelines for the design of mobile video communication interfaces. Through a rich exchange of information, this type of communication can foster a strong sense of presence, but current interfaces lack the flexibility that would allow users to be creative and have richer exchanges during a videoconference. We conducted a study with sixteen participants across three activities, observing their conversations, reactions and behaviours. Two focus groups also served to identify the habits participants had developed through regular use of videoconferencing. The results suggest an important difference between use of the front and rear cameras of the mobile device, and the need to provide tools that offer more control over the conversational exchange. The study proposes several design guidelines for mobile video communication interfaces concerning the construction of the user's mobile context.
Abstract:
Isora fibre-reinforced natural rubber (NR) composites were cured at 80, 100, 120 and 150°C using a low-temperature curing accelerator system. Composites were also prepared using a conventional accelerator system and cured at 150°C. The swelling behavior of these composites at varying fibre loadings was studied in toluene and hexane. The results show that solvent uptake and the change in volume fraction of rubber due to swelling were lower for the low-temperature-cured vulcanizates, an indication of better fibre/rubber adhesion. The uptake of the aromatic solvent was higher than that of the aliphatic solvent for all composites. As the fibre content increased, solvent uptake decreased, owing to the superior solvent resistance of the fibre and good fibre-rubber interactions. The bonding agent improved the swelling resistance of the composites through strong interfacial adhesion. Owing to the improved adhesion between fibre and rubber, the ratio of the change in volume fraction of rubber due to swelling to the volume fraction of rubber in the dry sample (Vr) was found to decrease in the presence of the bonding agent. At a fixed fibre loading, the alkali-treated fibre composite showed a lower percentage swelling than the untreated one for both systems, indicating superior rubber-fibre interactions.
Abstract:
The goal of this work was to develop a query processing system using software agents. The Open Agent Architecture framework is used for system development. The system supports queries in both Hindi and Malayalam, two prominent regional languages of India. Natural language processing techniques are used to extract meaning from the plain query, and information from the database is returned to the user in their native language. The system architecture is designed in a structured way so that it can be adapted to other regional languages of India. The system can be used effectively in application areas such as e-governance, agriculture, rural health, education, national resource planning, disaster management and information kiosks, where people from all walks of life are involved.
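A toy illustration of the query-answering idea described above: map content words in a plain natural-language query onto filters over a relational table. This is far simpler than the agent-based, multilingual system in the abstract (the schema, vocabulary and data below are all hypothetical), but it shows the pipeline from plain query to database answer:

```python
import sqlite3

# Hypothetical domain database, e.g. for an agriculture information kiosk.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE crops (name TEXT, season TEXT, district TEXT)")
conn.executemany("INSERT INTO crops VALUES (?, ?, ?)", [
    ("rice", "monsoon", "Palakkad"),
    ("wheat", "winter", "Ludhiana"),
])

# Minimal "meaning extraction": map known content words to column filters.
# A real system would parse the full query (here, in Hindi or Malayalam).
VOCAB = {"monsoon": ("season", "monsoon"), "winter": ("season", "winter")}

def answer(query: str):
    """Translate a keyword query into a parameterized SQL SELECT."""
    filters = [VOCAB[w] for w in query.lower().split() if w in VOCAB]
    sql = "SELECT name FROM crops"
    params = []
    if filters:
        sql += " WHERE " + " AND ".join(f"{col} = ?" for col, _ in filters)
        params = [val for _, val in filters]
    return [row[0] for row in conn.execute(sql, params)]

print(answer("which crops grow in the monsoon season"))  # ['rice']
```

In the system described, the same database answer would then be rendered back into the user's native language before display.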
Abstract:
In this lecture, we will focus on analyzing user goals in search query logs. Readings: M. Strohmaier, P. Prettenhofer, M. Lux, Different Degrees of Explicitness in Intentional Artifacts - Studying User Goals in a Large Search Query Log, CSKGOI'08 International Workshop on Commonsense Knowledge and Goal Oriented Interfaces, in conjunction with IUI'08, Canary Islands, Spain, 2008.
Abstract:
Abstract based on that of the publication.
Abstract:
One of the main challenges for developers of new human-computer interfaces is to provide a more natural way of interacting with computer systems, avoiding excessive use of hand and finger movements. This also provides a valuable alternative communication pathway for people with motor disabilities. This paper describes the construction of a low-cost eye tracker using a fixed-head setup: a webcam, a laptop and an infrared light source were used together with a simple frame to fix the user's head. Detailed information is given on the image processing techniques used to isolate the centre of the pupil, and different methods of calculating the point of gaze are discussed. An overall accuracy of 1.5 degrees was obtained while keeping the hardware cost of the device below 100 euros.
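The pupil-centre step can be sketched minimally: threshold the grayscale frame and take the centroid of the dark pixels. Real pipelines add smoothing, blob selection and corneal-reflection handling; the 8x8 "frame" below is synthetic, not real camera data:

```python
# Synthetic 8x8 grayscale frame: low values (dark pixels) mark the pupil.
frame = [[200] * 8 for _ in range(8)]
for r in range(3, 6):
    for c in range(4, 7):
        frame[r][c] = 20  # dark 3x3 "pupil" blob

def pupil_centre(img, threshold=100):
    """Centre of mass (row, col) of all pixels darker than the threshold."""
    dark = [(r, c) for r, row in enumerate(img)
                   for c, v in enumerate(row) if v < threshold]
    n = len(dark)
    return (sum(r for r, _ in dark) / n, sum(c for _, c in dark) / n)

print(pupil_centre(frame))  # (4.0, 5.0)
```

The estimated centre then feeds a calibration mapping (one of the point-of-gaze methods the paper compares) to convert pupil position into screen coordinates.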
Abstract:
In recent years there has been a growing debate over whether or not standards should be produced for user system interfaces. Those in favor of standardization argue that standards in this area will result in more usable systems, while those against argue that standardization is neither practical nor desirable. The present paper reviews both sides of this debate in relation to expert systems. It argues that in many areas guidelines are more appropriate than standards for user interface design.
Abstract:
BCI systems require correct classification of signals interpreted from the brain for useful operation. To this end, this paper investigates a method proposed in [1] to correctly classify a series of images presented to a group of subjects in [2]. We show that the proposed methods can correctly recognise the original stimuli presented to a subject from analysis of their EEG. Additionally, we use a verification set to show that the trained classification method can be applied to a different set of data. We go on to investigate the issue of invariance in EEG signals, that is, whether the brain's representation of similar stimuli is recognisable across different subjects. Finally, we consider the usefulness of the methods investigated for an improved BCI system and discuss how they could lead to great improvements in ease of use for the end user by offering an alternative, more intuitive control-based mode of operation.
Abstract:
Human-like computer interaction systems require far more than simple speech input/output. Such a system should communicate with the user verbally, in a conversational style. It should be aware of its surroundings and use this context in any decisions it makes. As a synthetic character, it should have a computer-generated human-like appearance, which in turn should be used to convey emotions, expressions and gestures. Finally, and perhaps most important of all, the system should interact with the user in real time, in a fluent and believable manner.
Abstract:
Haptic computer interfaces provide users with feedback through the sense of touch, thereby allowing users to feel a graphical user interface. Force feedback gravity wells, i.e. attractive basins that can pull the cursor toward a target, are one type of haptic effect that has been shown to improve performance in "point and click" tasks; for motion-impaired users, gravity wells can improve times by as much as 50%. It has been reported that presenting information to multiple sensory modalities, e.g. haptics and vision, can provide performance benefits. However, previous studies of force feedback gravity wells have generally not provided visual representations of the haptic effect. Where force fields extend beyond clickable targets, the addition of visual cues may affect performance. This paper investigates how the performance of motion-impaired computer users is affected by on-screen visual representations of force feedback gravity wells. The results indicate that the visual representation does not affect times or errors in a "point and click" task involving multiple targets.
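A gravity well can be modelled as an attractive force that is active only inside a basin around the target. The sketch below uses a simple spring-like force profile; the radius, gain and linearity are illustrative assumptions, not the parameters used in the paper:

```python
import math

def gravity_well_force(cursor, target, radius=60.0, k=0.5):
    """Force (fx, fy) pulling the cursor toward the target centre.

    Zero outside the basin radius; inside, the pull grows as the cursor
    nears the centre, so the well "captures" the cursor over the target.
    """
    dx, dy = target[0] - cursor[0], target[1] - cursor[1]
    dist = math.hypot(dx, dy)
    if dist == 0 or dist > radius:
        return (0.0, 0.0)
    magnitude = k * (radius - dist)          # strongest at the centre
    return (magnitude * dx / dist, magnitude * dy / dist)

# Cursor 50 px from the target, inside the 60 px basin:
print(gravity_well_force((100, 100), (130, 140)))  # (3.0, 4.0)
```

Each frame, the force device adds this vector to the cursor dynamics; rendering the basin on-screen (e.g. as a shaded circle of the same radius) is the visual representation whose effect the paper measures.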