957 results for object orientated user interface


Relevance: 30.00%

Abstract:

Dissertation submitted for the degree of Master of Science in Informatics Engineering

Relevance: 30.00%

Abstract:

Based on the presentation and discussion at the 3rd Winter School on Technology Assessment, December 2012, Universidade Nova de Lisboa (Portugal), Caparica Campus, PhD programme on Technology Assessment

Relevance: 30.00%

Abstract:

Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.

Relevance: 30.00%

Abstract:

Novel input modalities such as touch, tangibles or gestures try to exploit humans' innate skills rather than imposing new learning processes. However, despite the recent boom of different natural interaction paradigms, it has not been systematically evaluated how these interfaces influence a user's performance, or whether each interface is more or less appropriate for: 1) different age groups; and 2) different basic operations, such as data selection, insertion or manipulation. This work presents the first step of an exploratory evaluation of whether users' performance is indeed influenced by the different interfaces. The key point is to understand how different interaction paradigms affect specific target audiences (children, adults and older adults) when dealing with a selection task. Sixty participants took part in this study to assess how different interfaces may influence the interaction of specific groups of users with regard to their age. Four input modalities were used to perform a selection task, and the methodology was based on usability testing (speed, accuracy and user preference). The study suggests a statistically significant difference between mean selection times for each group of users, and also raises new issues regarding the "old" mouse input versus the "new" input modalities.
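The between-groups comparison of mean selection times described above is typically tested with a one-way ANOVA. A minimal stdlib-only sketch follows; the latency values are invented placeholders, not the study's measurements.

```python
from statistics import mean

def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA over several sample groups."""
    values = [v for g in groups for v in g]
    grand = mean(values)
    k, n = len(groups), len(values)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((v - mean(g)) ** 2 for g in groups for v in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical selection times (seconds) per age group -- not the study's data.
children     = [1.9, 2.1, 2.4, 2.0]
adults       = [1.2, 1.1, 1.4, 1.3]
older_adults = [2.8, 3.1, 2.9, 3.3]
f_stat = one_way_anova_f([children, adults, older_adults])
```

The F statistic would then be compared against the F distribution with (k − 1, n − k) degrees of freedom to judge significance.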

Relevance: 30.00%

Abstract:

The present study proposes a dynamic constitutive material interface model that includes a non-associated flow rule and high strain rate effects, implemented in the finite element code ABAQUS as a user subroutine. First, the model's capability is validated with numerical simulations of unreinforced blockwork masonry walls subjected to low-velocity impact. The results obtained are compared with field test data, and good agreement is found. Subsequently, a comprehensive parametric analysis is performed with different joint tensile strengths, cohesion values and wall thicknesses, to evaluate the effect of these parameter variations on the impact response of masonry walls.

Relevance: 30.00%

Abstract:

Integrated master's dissertation in Industrial Electronics and Computers Engineering

Relevance: 30.00%

Abstract:

The purpose of this study was to evaluate the determinism of the AS-Interface network and the three main families of control systems which may use it, namely PLC, PC and RTOS. During the course of this study the PROFIBUS and Ethernet field-level networks were also considered, in order to ensure that they would not introduce unacceptable latencies into the overall control system. This research demonstrated that an incorrectly configured Ethernet network introduces unacceptable latencies of variable duration into the control system, so care must be exercised if the determinism of a control system is not to be compromised. This study introduces a new concept of using statistics and process capability metrics, in the form of Cpk values, to specify how suitable a control system is for a given control task. The PLC systems which were tested demonstrated extremely deterministic responses, but when a large number of iterations were introduced in the user program, the mean control system latency was much too great for an AS-I network. Thus the PLC was found to be unsuitable for an AS-I network if a large, complex user program is required. The PC systems which were tested were non-deterministic and had latencies of variable duration. These latencies became extremely exaggerated when a graphing ActiveX control was included in the control application. These PC systems also exhibited a non-normal frequency distribution of control system latencies, and as such are unsuitable for implementation with an AS-I network. The RTOS system which was tested overcame the problems identified with the PLC and PC systems and produced an extremely deterministic response, even when a large number of iterations were introduced in the user program. The RTOS system is therefore capable of providing a suitably deterministic control system response, even when an extremely large, complex user program is required.
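The process-capability approach mentioned above reduces to a simple formula, Cpk = min(USL − μ, μ − LSL) / 3σ, applied to the measured latencies. A small sketch follows; the latency samples and specification limits are invented for illustration, not taken from the study.

```python
from statistics import mean, stdev

def cpk(samples, lsl, usl):
    """Process capability index of samples against lower/upper spec limits."""
    mu, sigma = mean(samples), stdev(samples)  # sample mean and std deviation
    return min(usl - mu, mu - lsl) / (3 * sigma)

# Invented control-system latencies (ms) and an invented 0-10 ms spec window.
latencies_ms = [5.0, 5.1, 4.9, 5.2, 4.8]
capability = cpk(latencies_ms, lsl=0.0, usl=10.0)
```

A Cpk well above the conventional 1.33 threshold indicates a capable process; a deterministic controller yields a small standard deviation and hence a high Cpk.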

Relevance: 30.00%

Abstract:

Given a set of images of scenes containing different object categories (e.g. grass, roads), our objective is to discover these objects in each image, and to use these object occurrences to perform scene classification (e.g. beach scene, mountain scene). We achieve this with a supervised learning algorithm able to learn from few images, easing the user's labelling task. We use a probabilistic model to recognise the objects, and then classify the scene based on their object occurrences. Experimental results are shown and evaluated to prove the validity of our proposal. Object recognition performance is compared to the approaches of He et al. (2004) and Marti et al. (2001) using their own datasets. Furthermore, an unsupervised method is implemented in order to evaluate the advantages and disadvantages of our supervised classification approach versus an unsupervised one.
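The paper's probabilistic model is not reproduced here, but the idea of classifying a scene from its object occurrences can be sketched with invented prototype vectors and a simple cosine-similarity rule; the object vocabulary, prototypes and counts below are all hypothetical.

```python
import math

# Hypothetical object vocabulary: counts of (grass, road, sand, water).
PROTOTYPES = {                 # invented class prototypes, not the paper's data
    "beach":    [0.0, 0.1, 0.6, 0.8],
    "mountain": [0.7, 0.2, 0.0, 0.1],
}

def cosine(a, b):
    """Cosine similarity between two occurrence vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def classify_scene(occurrences):
    """Assign the scene label whose prototype vector is most similar."""
    return max(PROTOTYPES, key=lambda c: cosine(occurrences, PROTOTYPES[c]))
```

An image whose detected objects are mostly sand and water would thus be labelled a beach scene.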

Relevance: 30.00%

Abstract:

Currently, individuals including designers, contractors, and owners learn about project requirements by studying a combination of paper and electronic copies of the construction documents, including the drawings, specifications (standard and supplemental), road and bridge standard drawings, design criteria, contracts, addenda, and change orders. This can be a tedious process, since one needs to go back and forth between the various documents (paper or electronic) to obtain information about the entire project. Object-oriented computer-aided design (OO-CAD) is an innovative technology that can change this process through the graphical portrayal of information. OO-CAD allows users to point and click on portions of an object-oriented drawing that are linked to relevant databases of information (e.g., specifications, procurement status, and shop drawings). The vision of this study is to turn paper-based design standards and construction specifications into an object-oriented design and specification (OODAS) system, or visual electronic reference library (ERL). Individuals can use the system through a handheld wireless book-size laptop that includes all of the necessary software for operating in a 3D environment. All parties involved in transportation projects can access all of the standards and requirements simultaneously using a 3D graphical interface. By using this system, users will have all of the design elements and all of the specifications readily available, without concerns about omissions. A prototype object-oriented model was created and demonstrated to potential users representing counties, cities, and the state. Findings suggest that a system like this could improve the productivity of finding information by as much as 75% and provide a greater sense of confidence that all relevant information had been identified. It was also apparent that this system would be used by more people in construction than in design. There was also concern about the cost of developing and maintaining the complete system. The future direction should focus on a project-based system that can help contractors and DOT inspectors find information (e.g., road standards, specifications, instructional memorandums) more rapidly as it pertains to a specific project.

Relevance: 30.00%

Abstract:

In this work, a graphical remote-operation user interface was implemented for the mobile robot in research use at the Laboratory of Computer Science, Department of Information Technology, Lappeenranta University of Technology. The work is motivated by extensibility, which the existing closed-source user interface cannot offer. The most essential part of the work is the architectural separation of the robot's data model, implemented with object-oriented programming techniques, from its graphical presentation. In addition, the theory of remote-operation interfaces for mobile robots, and the suitability of WLAN technology for implementing the connection between the robot and the user interface, are briefly examined.
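The architectural separation described above, a robot data model decoupled from its graphical presentation, is commonly realised with an observer pattern. A minimal sketch follows; the class and attribute names are invented, and a text view stands in for the actual GUI.

```python
class RobotModel:
    """Holds robot state; notifies registered views whenever it changes."""
    def __init__(self):
        self._pose = (0.0, 0.0, 0.0)   # x, y, heading
        self._views = []

    def attach(self, view):
        self._views.append(view)

    def set_pose(self, x, y, heading):
        self._pose = (x, y, heading)
        for view in self._views:       # model knows views only via update()
            view.update(self._pose)

class TextView:
    """Stand-in for a graphical view; renders the pose as text."""
    def __init__(self):
        self.last = None

    def update(self, pose):
        self.last = "x=%.1f y=%.1f h=%.1f" % pose
```

With this split, the GUI (or a replacement front end) can be swapped out without touching the model, which is exactly the extensibility the closed-source interface lacked.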

Relevance: 30.00%

Abstract:

CORBA (Common Object Request Broker Architecture) is a widespread distributed-computing architecture commonly used in industry. CORBA scales to needs of different sizes and can also be exploited in embedded wireless devices. In an embedded environment it is essential to build the interfaces to be lightweight, stable and easily extensible, without endangering compatibility with earlier interfaces. In wireless devices, resources such as memory and processing power are very limited, so the interface must be designed and implemented optimally. The services must also take into account the limitations of wireless operation, such as slow data-transfer rates and the connectionless nature of the data transfer. In this work, a CORBA interface to a GSM terminal was designed and implemented, and it has been found to fulfil the goals set for it. The interface offers all the most common features of a GSM terminal and is extensible for future products and network technologies. Extensibility is achieved, for example, by describing the terminal's features in a general description language such as XML (Extensible Markup Language).
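The extensibility mechanism mentioned last, describing terminal features in XML, can be illustrated with Python's standard XML parser. The capability document and its schema below are invented, as the thesis's actual format is not shown here.

```python
import xml.etree.ElementTree as ET

# Invented capability description; the thesis does not publish its schema.
CAPS_XML = """
<terminal model="hypothetical-gsm">
  <feature name="sms" version="1"/>
  <feature name="voice" version="2"/>
</terminal>
"""

def supported_features(xml_text):
    """Map feature name -> version from a terminal capability description."""
    root = ET.fromstring(xml_text)
    return {f.get("name"): int(f.get("version")) for f in root.findall("feature")}
```

New terminal features can then be announced by adding elements to the document, without breaking clients that only look up the features they already know.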

Relevance: 30.00%

Abstract:

Current-day web search engines (e.g., Google) do not crawl and index a significant portion of the Web and, hence, web users relying on search engines alone are unable to discover and access a large amount of information in the non-indexable part of the Web. Specifically, dynamic pages generated from parameters provided by a user via web search forms (or search interfaces) are not indexed by search engines and cannot be found in searchers' results. Such search interfaces provide web users with online access to myriads of databases on the Web. To obtain information from a web database of interest, a user issues a query by specifying query terms in a search form and receives the query results: a set of dynamic pages that embed the required information from the database. At the same time, issuing a query via an arbitrary search interface is an extremely complex task for any kind of automatic agent, including web crawlers, which, at least up to the present day, do not even attempt to pass through web forms on a large scale. In this thesis, our primary and key object of study is a huge portion of the Web (hereafter referred to as the deep Web) hidden behind web search interfaces. We concentrate on three classes of problems around the deep Web: characterizing the deep Web, finding and classifying deep web resources, and querying web databases.

Characterizing the deep Web: Though the term deep Web was coined in 2000, a long time ago for any web-related concept or technology, we still do not know many important characteristics of the deep Web. Another matter of concern is that the surveys of the deep Web conducted so far are predominantly based on the study of deep web sites in English. One can then expect that findings from these surveys may be biased, especially owing to the steady increase in non-English web content. In this way, surveying national segments of the deep Web is of interest not only to national communities but to the whole web community as well. In this thesis, we propose two new methods for estimating the main parameters of the deep Web. We use the suggested methods to estimate the scale of one specific national segment of the Web and report our findings. We also build and make publicly available a dataset describing more than 200 web databases from the national segment of the Web.

Finding deep web resources: The deep Web has been growing at a very fast pace. It has been estimated that there are hundreds of thousands of deep web sites. Due to the huge volume of information in the deep Web, there has been significant interest in approaches that allow users and computer applications to leverage this information. Most approaches assume that the search interfaces to the web databases of interest are already discovered and known to query systems. However, such assumptions do not hold, mostly because of the large scale of the deep Web: for any given domain of interest there are too many web databases with relevant content. Thus, the ability to locate search interfaces to web databases becomes a key requirement for any application accessing the deep Web. In this thesis, we describe the architecture of the I-Crawler, a system for finding and classifying search interfaces. Specifically, the I-Crawler is intentionally designed to be used in deep Web characterization studies and for constructing directories of deep web resources. Unlike almost all other approaches to the deep Web so far, the I-Crawler is able to recognize and analyze JavaScript-rich and non-HTML searchable forms.

Querying web databases: Retrieving information by filling out web search forms is a typical task for a web user. This is all the more so as the interfaces of conventional search engines are also web forms. At present, a user needs to manually provide input values to search interfaces and then extract the required data from the result pages. Manually filling out forms is cumbersome, and infeasible for complex queries, yet such queries are essential for many web searches, especially in the area of e-commerce. In this way, automating the querying and retrieval of data behind search interfaces is desirable and essential for tasks such as building domain-independent deep web crawlers and automated web agents, searching for domain-specific information (vertical search engines), and extracting and integrating information from various deep web resources. We present a data model for representing search interfaces and discuss techniques for extracting field labels, client-side scripts and structured data from HTML pages. We also describe a representation of result pages and discuss how to extract and store the results of form queries. Besides, we present a user-friendly and expressive form query language that allows one to retrieve information behind search interfaces and extract useful data from the result pages based on specified conditions. We implement a prototype system for querying web databases and describe its architecture and component design.
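The kind of search-interface data model and programmatic form filling discussed above can be sketched minimally as follows; the interface description, field names and URL are invented for illustration and are not the thesis's actual model.

```python
from urllib.parse import urlencode

# Invented search-interface description; not the thesis's actual data model.
BOOK_SEARCH = {
    "action": "https://example.org/search",   # form target (hypothetical)
    "fields": {"title": "text", "year": "text"},
}

def build_query_url(interface, values):
    """Render a filled-out search form as the GET request it would submit."""
    unknown = set(values) - set(interface["fields"])
    if unknown:
        raise ValueError("no such form field(s): %s" % ", ".join(sorted(unknown)))
    return interface["action"] + "?" + urlencode(values)
```

A deep web crawler would fetch the resulting URL and then apply a result-page representation, as described in the thesis, to extract the embedded records.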

Relevance: 30.00%

Abstract:

Contact stains recovered at break-in crime scenes are frequently characterized by mixtures of DNA from several persons. Broad knowledge of the relative contribution of DNA left behind by different users over time is of paramount importance. Such information might help crime investigators to robustly evaluate the possibility of detecting a specific (or known) individual's DNA profile based on the type and history of an object. To address this issue, a contact stain simulation-based protocol was designed. Fourteen volunteers acting as either the first or the second user of an object were recruited. The first user was required to regularly handle/wear 9 different items during an 8-10-day period, whilst the second user handled/wore them for 5, 30 and 120 min in three independent simulation sessions, producing a total of 231 stains. Subsequently, the relative DNA profile contribution of each individual pair was investigated. Preliminary results showed a progressive increase in the percentage contribution of the second user compared to the first. Interestingly, the second user generally became the major DNA contributor when most objects were handled/worn for 120 min. Furthermore, the observation of unexpected additional alleles prompts the investigation of indirect DNA transfer events.

Relevance: 30.00%

Abstract:

The number of digital images has been increasing exponentially in the last few years. People have problems managing their image collections and finding a specific image. An automatic image categorization system could help them to manage their images and find specific ones. In this thesis, an unsupervised visual object categorization system was implemented to categorize a set of unknown images. Because the system is unsupervised, it does not need manually labelled training images; therefore, the number of possible categories and images can be huge. The system implemented in the thesis extracts local features from the images. These local features are used to build a codebook. The local features and the codebook are then used to generate a feature vector for an image. Images are categorized based on the feature vectors. The system is able to categorize any given set of images based on their visual appearance. Images that have similar image regions are grouped together in the same category. Thus, for example, images which contain cars are assigned to the same cluster. The unsupervised visual object categorization system can be used in many situations, e.g., in an Internet search engine: the system can categorize images for a user, and the user can then easily find a specific type of image.
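The codebook-and-feature-vector pipeline described above is essentially a bag-of-visual-words representation. A compact sketch follows, with a toy k-means codebook; the feature values are invented 2-D points standing in for real local descriptors.

```python
import random
from math import dist

def kmeans(points, k, iters=20, seed=0):
    """Tiny k-means: returns k cluster centres (the codebook entries)."""
    rng = random.Random(seed)
    centres = rng.sample(points, k)
    for _ in range(iters):
        buckets = [[] for _ in range(k)]
        for p in points:
            buckets[min(range(k), key=lambda i: dist(p, centres[i]))].append(p)
        centres = [
            tuple(sum(c) / len(b) for c in zip(*b)) if b else centres[i]
            for i, b in enumerate(buckets)
        ]
    return centres

def bow_histogram(features, codebook):
    """Represent an image as counts of its nearest codebook entries."""
    hist = [0] * len(codebook)
    for f in features:
        hist[min(range(len(codebook)), key=lambda i: dist(f, codebook[i]))] += 1
    return hist

# Toy usage: two "visual words" learned from pooled descriptors.
pooled = [(0.0, 0.0), (0.0, 1.0), (10.0, 10.0), (10.0, 11.0)]
codebook = kmeans(pooled, k=2)
vector = bow_histogram([(0.1, 0.2), (9.9, 10.4)], codebook)
```

Images with similar histograms can then be grouped by any clustering method, which is what puts, say, all car images into the same category.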

Relevance: 30.00%

Abstract:

The development of a load-bearing osseous implant with the desired mechanical and surface properties, in order to promote incorporation with bone and to eliminate the risk of bone resorption and implant failure, is a very challenging task. Bone formation and resorption processes depend on the mechanical environment: certain stress/strain conditions are required to promote new bone growth and to prevent bone mass loss. Conventional metallic implants with high stiffness carry most of the load, and the surrounding bone becomes virtually unloaded and inactive. Fibre-reinforced composites offer an interesting alternative to metallic implants, because their mechanical properties can be tailored to be equal to those of bone by careful selection of the matrix polymer, type of fibres, fibre volume fraction, orientation and length. Successful load transfer at the bone-implant interface requires proper fixation between the bone and the implant. One promising method to promote fixation is to prepare implants with a porous surface. Bone ingrowth into the porous surface structure stabilises the system and improves the clinical success of the implant. The experimental part of this work focused on polymethyl methacrylate (PMMA)-based composites with a dense load-bearing core and a porous surface. Three-dimensionally randomly orientated chopped glass fibres were used to reinforce the composite. A method to fabricate these composites was developed using a solvent treatment technique, and characterisations concerning the functionality of the surface structure were made in vitro and in vivo. Scanning electron microscope observations revealed that the pore size and interconnective porous architecture of the surface layer of the fibre-reinforced composite (FRC) could be optimal for bone ingrowth. Microhardness measurements showed that the solvent treatment did not affect the mechanical properties of the load-bearing core.
A push-out test, using dental stone as a bone model material, revealed that the short glass fibre-reinforced porous surface layer is strong enough to carry load. Unreacted monomers can cause chemical necrosis of the tissue, but the levels of leachable residual monomers were considerably lower than those found in chemically cured fibre-reinforced dentures and in modified acrylic bone cements. Animal experiments proved that a surface-porous FRC implant can enhance fixation between bone and FRC. New bone ingrowth into the pores was detected, and strong interlocking between bone and the implant was achieved.