983 results for Prototype system
Abstract:
Doctoral thesis, Industrial and Systems Engineering branch
Abstract:
In the United States many bridge structures have been designed without consideration for their unique construction problems. Many problems could have been avoided if construction knowledge and experience were utilized in the design process. A systematic process is needed to create and capture construction knowledge for use in the design process. This study was conducted to develop a system that captures construction considerations from field personnel and incorporates them into a knowledge base for use by bridge designers. This report presents the results of this study. As part of this study, a microcomputer-based constructability system has been developed. The system is a user-friendly microcomputer database which codifies construction knowledge, provides easy access to specifications, and provides simple design computation checks for the designer. A structure for the final database was developed and used in the prototype system. A process for collecting, developing and maintaining the database is presented and explained. The study involved a constructability survey, interviews with designers and constructors, and visits to construction sites to collect constructability concepts. The report describes the development of the constructability system and addresses the future needs for the Iowa Department of Transportation to make the system operational. A user's manual for the system is included with the report.
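As a concrete illustration of what such a codified constructability record and design computation check might look like, here is a minimal Python sketch; the field names and the clearance rule are illustrative assumptions, not the actual Iowa DOT database schema.

```python
# Minimal sketch of a constructability record and a simple design
# computation check; fields and the clearance rule are illustrative
# assumptions, not the actual database schema described in the report.
from dataclasses import dataclass

@dataclass
class ConstructabilityRecord:
    topic: str          # e.g., "falsework" or "girder erection"
    concept: str        # the field-collected construction concern
    specification: str  # pointer to the relevant specification
    source: str         # who contributed the concept

def check_vertical_clearance(clearance_m: float, minimum_m: float = 4.9) -> bool:
    """Simple design computation check against an assumed minimum clearance."""
    return clearance_m >= minimum_m

record = ConstructabilityRecord(
    topic="falsework",
    concept="Allow room for falsework removal under traffic.",
    specification="Spec 2403 (illustrative reference)",
    source="field interview",
)
print(check_vertical_clearance(5.2))  # True
```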
Abstract:
Due to limited budgets and reduced inspection staff, state departments of transportation (DOTs) need innovative approaches for providing more efficient quality assurance on concrete paving projects. The goal of this research was to investigate and test new methods that can determine pavement thickness in real time. Three methods were evaluated: laser scanning, ultrasonic sensors, and eddy current sensors. Laser scanning, which scans the surface of the base prior to paving and then scans the surface after paving, can determine the thickness at any point. Scanning lasers also provide thorough data coverage that can be used to calculate thickness variance accurately and identify any areas where the thickness is below tolerance. Ultrasonic and eddy current sensors also have the potential to measure thickness nondestructively at discrete points and may provide an easier method of obtaining thickness measurements. There appear to be two viable approaches for measuring concrete pavement thickness during the paving operation: laser scanning and eddy current sensors. Laser scanning has proved to be a reliable technique in terms of its ability to provide virtual core thickness with low variability. Research is still required to develop a prototype system that integrates point cloud data from two scanners. Eddy current sensors have also proved to be a suitable alternative, and are probably closer to field implementation than the laser scanning approach. As a next step for this research project, it is suggested that a pavement thickness measuring device using eddy current sensors be created, involving both handheld and paver-mounted versions of the device.
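To make the laser-scanning approach concrete, the following Python sketch computes per-point thickness as the difference between the post-paving surface scan and the pre-paving base scan; the synthetic point clouds, units, and tolerance value are illustrative assumptions.

```python
# Hedged sketch of the laser-scanning idea: thickness at a point is the
# elevation of the finished surface minus the elevation of the base
# scanned before paving. Grids, units, and tolerance are assumptions.
import numpy as np
from scipy.spatial import cKDTree

def thickness_at_points(base_xyz, surface_xyz):
    """For each surface point, find the nearest base point in plan (x, y)
    and return the elevation difference as thickness."""
    tree = cKDTree(base_xyz[:, :2])
    _, idx = tree.query(surface_xyz[:, :2])
    return surface_xyz[:, 2] - base_xyz[idx, 2]

# Synthetic example: a nominally 250 mm slab with some variation.
rng = np.random.default_rng(0)
xy = rng.uniform(0, 10, size=(1000, 2))
base = np.column_stack([xy, rng.normal(0.0, 0.005, 1000)])
surface = np.column_stack([xy, 0.25 + rng.normal(0.0, 0.01, 1000)])

t = thickness_at_points(base, surface)
print(f"mean={t.mean():.3f} m, std={t.std():.4f} m, below 0.24 m: {(t < 0.24).mean():.1%}")
```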
Abstract:
The low-voltage distribution network in Finland is implemented with 400 V three-phase alternating current. Because of the low voltage, the 20/0.4 kV transformers must be placed close to the consumer so that transmission losses do not become too large. Using a higher alternating or direct voltage in low-voltage distribution would increase the power transfer capacity of the network and enable longer transmission distances. An ongoing research project examines an alternative in which direct voltage is used for power transfer between the 20 kV network and the consumer, and an inverter located at the consumer converts the direct current into standard-compliant single- or three-phase alternating current. This master's thesis deals with the application of power electronics in the customer-end inverter. The thesis examines single-phase inverter topologies, their control, and their application in different inverter solutions, as well as the suitability of LC and LCL filters for filtering the inverter output voltage. In addition, different structural solutions for implementing the inversion are presented, and fault situations and the electrical safety of these systems are examined. Finally, the losses and efficiency of the whole system with different filter components and switching frequencies are discussed, and a laboratory prototype is presented. The study found that, because of the large capacitors required, a half-bridge inverter is not suitable for feeding a grid-frequency load, so a full-bridge inverter must be used. When examining the combination of a full-bridge inverter and an LC or LCL filter, it was found that the lowest losses are achieved with an LC filter under a 5% distortion requirement and with an LCL filter under a 1% distortion requirement. Examination of the efficiency curve gave the same result across the inverter's entire power range. However, accurate calculation of the filter losses is very challenging, so the results are indicative only.
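As a concrete illustration of the filter sizing discussed above, the sketch below computes the resonance frequency of an LC output filter and its ideal attenuation at the switching frequency; the component values and switching frequency are assumed for the example and are not taken from the thesis.

```python
# Illustrative LC output-filter check for a full-bridge inverter.
# Component values and switching frequency are assumptions.
import math

def lc_resonance_hz(L: float, C: float) -> float:
    """Resonance frequency of an LC low-pass filter: f0 = 1/(2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

def lc_attenuation_db(f: float, f0: float) -> float:
    """Ideal second-order roll-off above resonance: about -40 dB/decade."""
    return -40.0 * math.log10(f / f0)

L, C = 2e-3, 10e-6   # 2 mH, 10 uF (assumed)
f_sw = 16e3          # 16 kHz switching frequency (assumed)
f0 = lc_resonance_hz(L, C)
print(f"f0 = {f0:.0f} Hz, attenuation at f_sw = {lc_attenuation_db(f_sw, f0):.1f} dB")
```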
Abstract:
Current-day web search engines (e.g., Google) do not crawl and index a significant portion of the Web and, hence, web users relying only on search engines are unable to discover and access a large amount of information in the non-indexable part of the Web. Specifically, dynamic pages generated from parameters provided by a user via web search forms (or search interfaces) are not indexed by search engines and cannot be found in search results. Such search interfaces provide web users with online access to myriads of databases on the Web. To obtain information from a web database of interest, a user issues a query by specifying query terms in a search form and receives the query results: a set of dynamic pages that embed the required information from the database. At the same time, issuing a query via an arbitrary search interface is an extremely complex task for any kind of automatic agent, including web crawlers, which, at least up to the present day, do not even attempt to pass through web forms on a large scale. In this thesis, our primary object of study is a huge portion of the Web (hereafter referred to as the deep Web) hidden behind web search interfaces. We concentrate on three classes of problems around the deep Web: characterization of the deep Web, finding and classifying deep web resources, and querying web databases. Characterizing the deep Web: Though the term deep Web was coined in 2000, which is sufficiently long ago for any web-related concept or technology, we still do not know many important characteristics of the deep Web. Another matter of concern is that existing surveys of the deep Web are predominantly based on the study of deep web sites in English. One can therefore expect that findings from these surveys may be biased, especially owing to the steady increase in non-English web content. Thus, surveying national segments of the deep Web is of interest not only to national communities but to the whole web community as well. In this thesis, we propose two new methods for estimating the main parameters of the deep Web. We use the suggested methods to estimate the scale of one specific national segment of the Web and report our findings. We also build and make publicly available a dataset describing more than 200 web databases from the national segment of the Web. Finding deep web resources: The deep Web has been growing at a very fast pace. It has been estimated that there are hundreds of thousands of deep web sites. Due to the huge volume of information in the deep Web, there has been significant interest in approaches that allow users and computer applications to leverage this information. Most approaches assume that search interfaces to the web databases of interest have already been discovered and are known to query systems. However, such assumptions do not hold, mostly because of the large scale of the deep Web – indeed, for any given domain of interest there are too many web databases with relevant content. Thus, the ability to locate search interfaces to web databases becomes a key requirement for any application accessing the deep Web. In this thesis, we describe the architecture of the I-Crawler, a system for finding and classifying search interfaces. Specifically, the I-Crawler is intentionally designed to be used in deep Web characterization studies and for constructing directories of deep web resources.
Unlike almost all other existing approaches to the deep Web, the I-Crawler is able to recognize and analyze JavaScript-rich and non-HTML searchable forms. Querying web databases: Retrieving information by filling out web search forms is a typical task for a web user. This is all the more so as the interfaces of conventional search engines are also web forms. At present, a user needs to manually provide input values to search interfaces and then extract the required data from the result pages. Manually filling out forms is cumbersome and infeasible for complex queries, but such queries are essential to many web searches, especially in e-commerce. Thus, automating the querying and retrieval of data behind search interfaces is desirable and essential for tasks such as building domain-independent deep web crawlers and automated web agents, searching for domain-specific information (vertical search engines), and extracting and integrating information from various deep web resources. We present a data model for representing search interfaces and discuss techniques for extracting field labels, client-side scripts and structured data from HTML pages. We also describe a representation of result pages and discuss how to extract and store the results of form queries. In addition, we present a user-friendly and expressive form query language that allows one to retrieve information behind search interfaces and extract useful data from the result pages based on specified conditions. We implement a prototype system for querying web databases and describe its architecture and component design.
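The sketch below illustrates the kind of automated form querying the thesis addresses: submitting query terms through a web search form and extracting structured records from the result page. The URL, form field names, and CSS selectors are hypothetical placeholders, and this is not the thesis's actual query language.

```python
# Minimal sketch of automated form querying: POST query terms to a
# search form and scrape structured records from the dynamic result
# page. URL, field names, and selectors are hypothetical placeholders.
import requests
from bs4 import BeautifulSoup

def query_web_database(form_url: str, terms: dict) -> list[dict]:
    # Submit the form as an HTTP POST, as a browser would.
    response = requests.post(form_url, data=terms, timeout=30)
    response.raise_for_status()

    # Parse the result page and pull out each result record.
    soup = BeautifulSoup(response.text, "html.parser")
    results = []
    for row in soup.select("div.result"):  # assumed selector
        results.append({
            "title": row.select_one("a.title").get_text(strip=True),
            "summary": row.select_one("p.summary").get_text(strip=True),
        })
    return results

# Hypothetical usage:
# records = query_web_database("https://example.org/search", {"q": "used cars"})
```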
Abstract:
Open educational resources (OER) promise increased access, participation, quality, and relevance, in addition to cost reduction. These seemingly fantastic promises are based on the supposition that educators and learners will discover existing resources, improve them, and share the results, creating a virtuous cycle of improvement and re-use. By anecdotal metrics, existing web-scale search is not working for OER. This situation impairs the cycle underlying the promise of OER, endangering long-term growth and sustainability. While the scope of the problem is vast, targeted improvements in curation, indexing, and data exchange can improve the situation and create opportunities for further scale. I explore the ways in which the current system is inadequate, discuss areas for targeted improvement, and describe a prototype system built to test these ideas. I conclude with suggestions for further exploration and development.
Abstract:
Computer Supported Collaborative Learning (CSCL) is a widely adopted teaching and learning approach. However, several problems can still be observed when CSCL takes place. Studies show that game-like mechanics can increase motivation and engagement, as well as shape the behavior of players. Gamification, a rapidly growing trend built on the same mechanics, refers to the use of game design elements in non-game contexts. This thesis is about combining the gamification concept with computer-supported collaborative learning in the field of software engineering education. Finally, a gamified prototype system is designed.
Abstract:
This paper proposes a solution to the problems associated with network latency within distributed virtual environments. It begins by discussing the advantages and disadvantages of synchronous and asynchronous distributed models, in the areas of user and object representation and user-to-user interaction. By introducing a hybrid solution, which utilises the concept of a causal surface, the advantages of both synchronous and asynchronous models are combined. Object distortion is a characteristic feature of the hybrid system, and this is proposed as a solution which facilitates dynamic real-time user collaboration. The final section covers implementation details, with reference to a prototype system available from the Internet.
Abstract:
Numerical climate models constitute the best available tools to tackle the problem of climate prediction. Two assumptions lie at the heart of their suitability: (1) a climate attractor exists, and (2) the numerical climate model's attractor lies on the actual climate attractor, or at least on the projection of the climate attractor on the model's phase space. In this contribution, the Lorenz '63 system is used both as a prototype system and as an imperfect model to investigate the implications of the second assumption. By comparing results drawn from the Lorenz '63 system and from numerical weather and climate models, the implications of using imperfect models for the prediction of weather and climate are discussed. It is shown that the imperfect model's orbit and the system's orbit are essentially different, purely due to model error and not to sensitivity to initial conditions. Furthermore, if a model is a perfect model, then the attractor, reconstructed by sampling a collection of initialised model orbits (forecast orbits), will be invariant to forecast lead time. This conclusion provides an alternative method for the assessment of climate models.
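The experiment described above can be illustrated with a short sketch: integrate the Lorenz '63 equations as the "true" system and a copy with a slightly perturbed parameter as the imperfect model, starting both from the same initial condition so that the divergence is due purely to model error. The parameter values are the standard Lorenz '63 choices; the perturbation size and integration settings are arbitrary illustrative assumptions.

```python
# Sketch: Lorenz '63 as truth vs. a mis-parameterised imperfect model,
# integrated from the same initial condition with classical RK4.
import numpy as np

def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_orbit(state, rho, dt=0.01, steps=2000):
    orbit = [state]
    for _ in range(steps):
        s = orbit[-1]
        k1 = lorenz(s, rho=rho)
        k2 = lorenz(s + 0.5 * dt * k1, rho=rho)
        k3 = lorenz(s + 0.5 * dt * k2, rho=rho)
        k4 = lorenz(s + dt * k3, rho=rho)
        orbit.append(s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4))
    return np.array(orbit)

x0 = np.array([1.0, 1.0, 1.0])
truth = rk4_orbit(x0, rho=28.0)   # "true" system
model = rk4_orbit(x0, rho=28.5)   # imperfect model (perturbed rho, assumed)
print("separation after 20 time units:", np.linalg.norm(truth[-1] - model[-1]))
```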
Abstract:
Haptic devices tend to be kept small, as it is easier to achieve a large change of stiffness with a low associated apparent mass. If large movements are required, there is usually a reduction in the quality of the haptic sensations which can be displayed. The typical measure of haptic device performance is impedance-width (z-width), but this does not account for actuator saturation, usable workspace or the ability to make rapid movements. This paper presents the analysis and evaluation of a haptic device design, utilizing a variant of redundant kinematics sometimes referred to as a macro-micro configuration, intended to allow large and fast movements without loss of impedance-width. A brief mathematical analysis of the design constraints is given, and a prototype system is described in which the effects of different elements of the control scheme can be examined to better understand the potential benefits and trade-offs of the design. Finally, the performance of the system is evaluated using a Fitts' Law test and found to compare favourably with similar evaluations of smaller-workspace devices.
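For readers unfamiliar with the evaluation method, the following sketch shows a Fitts' Law computation using the common Shannon formulation; the distances, widths, and movement times are invented illustrative data, not measurements from the paper.

```python
# Fitts' Law sketch using the Shannon formulation ID = log2(D/W + 1).
# Trial data below are made up for illustration.
import math

def index_of_difficulty(distance: float, width: float) -> float:
    """Shannon formulation of Fitts' index of difficulty, in bits."""
    return math.log2(distance / width + 1.0)

# (movement distance [m], target width [m], measured movement time [s])
trials = [(0.20, 0.02, 0.65), (0.40, 0.02, 0.82), (0.40, 0.01, 0.98)]

for d, w, mt in trials:
    id_bits = index_of_difficulty(d, w)
    print(f"ID = {id_bits:.2f} bits, throughput = {id_bits / mt:.2f} bits/s")
```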
Abstract:
This article describes a prototype system for quantifying bioassays and for exchanging the results of the assays digitally with physicians located off-site. The system uses paper-based microfluidic devices for running multiple assays simultaneously, camera phones or portable scanners for digitizing the intensity of color associated with each colorimetric assay, and established communications infrastructure for transferring the digital information from the assay site to an off-site laboratory for analysis by a trained medical professional; the diagnosis then can be returned directly to the healthcare provider in the field. The microfluidic devices were fabricated in paper using photolithography and were functionalized with reagents for colorimetric assays. The results of the assays were quantified by comparing the intensities of the color developed in each assay with those of calibration curves. An example of this system quantified clinically relevant concentrations of glucose and protein in artificial urine. The combination of patterned paper, a portable method for obtaining digital images, and a method for exchanging results of the assays with off-site diagnosticians offers new opportunities for inexpensive monitoring of health, especially in situations that require physicians to travel to patients (e.g., in the developing world, in emergency management, and during field operations by the military) to obtain diagnostic information that might be obtained more effectively by less valuable personnel.
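The quantification step described above can be sketched in a few lines: map a measured colour intensity onto a concentration via a calibration curve. The calibration points below are invented for illustration and do not come from the article.

```python
# Sketch of calibration-curve quantification: interpolate a measured
# assay colour intensity onto concentration. Calibration data are
# invented illustrative values, not the article's measurements.
import numpy as np

# Assumed calibration: mean intensity (arbitrary units) for known
# glucose concentrations (mM), monotonically increasing.
calib_conc = np.array([0.0, 2.5, 5.0, 10.0, 20.0])
calib_intensity = np.array([0.05, 0.18, 0.33, 0.55, 0.80])

def intensity_to_concentration(intensity: float) -> float:
    """Linear interpolation along the calibration curve."""
    return float(np.interp(intensity, calib_intensity, calib_conc))

print(intensity_to_concentration(0.40))  # ~6.6 mM for this assumed curve
```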
Abstract:
In recent years the number of bicycles with e-motors has increased steadily. Among pedelecs – bikes where an e-motor supports the pedaling – a special group of transport bikes has developed. These bikes have storage boxes in addition to the basic parts of a bike. Because of the space available on top of those boxes, it is possible to install a PV system to generate electricity that could be used to recharge the pedelec's battery. Such a system would allow grid-independent charging of the battery and could increase the range of motor support. The feasibility of such a PV system is investigated for a three-wheeled pedelec supplied by the company BABBOE NORDIC. The measured electricity generation of this mobile system is compared to the possible electricity generation of a stationary system. To measure the consumption of the pedelec, different tracks are covered, and the energy necessary to recharge the bike battery is measured using an energy logger. This recharge energy is used as an indirect measure of the electricity consumption. A PV prototype system is installed on the bike: a simple stand-alone PV system consisting of a PV panel, a charge controller with MPP tracker, and a solar battery. Its task is to generate as much electricity as possible. The produced PV current and voltage are measured and documented using a data logger, and the average PV power is then calculated. To compare the electricity produced by the on-bike system to that of a stationary system, the irradiance on the latter is measured simultaneously. Due to partial shading of the on-bike PV panel, caused by the rider and other bike parts, the average power output while riding is very low – too low to support the motor directly. With an installation similar to the PV prototype system, and provided the bike is always parked in a sunny spot, an on-bike system could generate enough electricity to at least partly recharge a bike battery during one day. A stationary PV system using the same PV panel could have produced between 1.25 and 8.1 times as much as the on-bike PV system. Even though the investigation covers a very specific case, it can be concluded that an on-bike PV system using similar components is not feasible for recharging the battery of a pedelec in an appropriate manner. The biggest barrier is that partial shading of the PV panel, which can hardly be avoided during operation and parking, significantly reduces the generated electricity. In addition, installing the on-bike PV system would increase the weight of the whole bike and require space, reducing the storage capacity. To use solar energy for recharging a bike battery, an indirect approach gives better results: a stationary stand-alone PV system located in a sunny spot without shading and adjusted to use the maximum available solar energy. The bike's battery is then charged using the corresponding charger and an inverter that provides AC power from the captured solar energy.
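A minimal sketch of the energy bookkeeping involved is given below: average PV power from logged voltage/current samples, compared with the energy needed to recharge the battery. All numbers are illustrative assumptions, not the measured values from the study.

```python
# Sketch of the PV energy bookkeeping: average power from logged
# voltage/current samples vs. battery recharge demand. All numbers
# are illustrative assumptions.
import numpy as np

# Logged PV samples (assumed logger): voltage [V], current [A].
voltage = np.array([13.2, 13.0, 12.8, 13.1, 12.9])
current = np.array([0.40, 0.35, 0.10, 0.42, 0.05])  # dips from partial shading

power = voltage * current            # instantaneous PV power [W]
avg_power = power.mean()
energy_per_day_wh = avg_power * 8    # assumed 8 h of usable daylight

recharge_energy_wh = 300.0           # assumed battery recharge demand
print(f"avg PV power: {avg_power:.1f} W, daily yield: {energy_per_day_wh:.0f} Wh")
print(f"fraction of recharge covered: {energy_per_day_wh / recharge_energy_wh:.1%}")
```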
Abstract:
Very large scale computations are now routinely used as a methodology for scientific research. In this context, 'provenance systems' are regarded as the equivalent of the scientist's logbook for in silico experimentation: provenance captures the documentation of the process that led to some result. Using a protein compressibility analysis application, we derive a set of generic use cases for a provenance system. In order to support these, we address the following fundamental questions: What is provenance? How should it be recorded? What is the performance impact for grid execution? What is the performance of reasoning? In doing so, we define a technology-independent notion of provenance that captures interactions between components, internal component information and grouping of interactions, so as to allow us to analyse and reason about the execution of scientific processes. In order to support persistent provenance in heterogeneous applications, we introduce a separate provenance store, in which provenance documentation can be stored, archived and queried independently of the technology used to run the application. Through a series of practical tests, we evaluate the performance impact of such a provenance system. In summary, we demonstrate that the provenance recording overhead of our prototype system remains under 10% of execution time, and we show that the recorded information successfully supports our use cases in a performant manner.
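The separate provenance store idea can be sketched as follows: components submit records of their interactions, with optional grouping, to a store that can later be queried independently of the application. The record structure and API below are illustrative assumptions, not the actual system's interfaces.

```python
# Hedged sketch of a separate provenance store: interaction records
# are submitted by components and queried independently later.
# Record structure and API are illustrative assumptions.
import time

class ProvenanceStore:
    def __init__(self):
        self.records = []

    def record_interaction(self, sender, receiver, message, group=None):
        """Store one interaction record, with optional grouping so
        related interactions can be analysed together."""
        self.records.append({
            "time": time.time(),
            "sender": sender,
            "receiver": receiver,
            "message": message,
            "group": group,
        })

    def query(self, **criteria):
        """Return records matching all given field values."""
        return [r for r in self.records
                if all(r.get(k) == v for k, v in criteria.items())]

store = ProvenanceStore()
store.record_interaction("compressor", "analyser", "protein sequence sent",
                         group="run-42")
print(store.query(group="run-42"))
```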
Abstract:
Distributed systems comprised of autonomous self-interested entities require some sort of control mechanism to ensure the predictability of the interactions that drive them. This is certainly true in the aerospace domain, where manufacturers, suppliers and operators must coordinate their activities to maximise safety and profit, for example. To address this need, the notion of norms has been proposed which, when incorporated into formal electronic documents, allow for the specification and deployment of contract-driven systems. In this context, we describe the CONTRACT framework and architecture for exactly this purpose, and describe a concrete instantiation of this architecture as a prototype system applied to an aerospace aftercare scenario.
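As a rough illustration of how a norm from an electronic contract might be represented and checked, consider the sketch below; the fields and the example obligation are illustrative assumptions rather than the CONTRACT framework's actual schema.

```python
# Minimal sketch of a norm representation and violation check,
# in the spirit of contract-driven systems. Fields and the example
# obligation are illustrative assumptions, not the framework's schema.
from dataclasses import dataclass

@dataclass
class Norm:
    kind: str        # "obligation", "permission", or "prohibition"
    actor: str       # the party the norm applies to
    action: str      # what must (not) be done
    deadline_h: int  # hours allowed before the norm is violated

def violated(norm: Norm, action_done: bool, hours_elapsed: int) -> bool:
    """An obligation is violated if the action was not performed in time."""
    return (norm.kind == "obligation" and not action_done
            and hours_elapsed > norm.deadline_h)

report_fault = Norm("obligation", "operator", "report engine fault", deadline_h=24)
print(violated(report_fault, action_done=False, hours_elapsed=30))  # True
```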