63 results for Combined Web crippling and Flange Crushing
in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
Current-day web search engines (e.g., Google) do not crawl and index a significant portion of the Web, and hence web users who rely on search engines alone are unable to discover and access a large amount of information in the non-indexable part of the Web. Specifically, dynamic pages generated from parameters that a user provides via web search forms (or search interfaces) are not indexed by search engines and cannot be found in search results. Such search interfaces give web users online access to myriad databases on the Web. To obtain information from a web database of interest, a user issues a query by specifying query terms in a search form and receives the query results: a set of dynamic pages that embed the required information from the database. At the same time, issuing a query via an arbitrary search interface is an extremely complex task for any kind of automatic agent, including web crawlers, which, at least up to the present day, do not even attempt to pass through web forms on a large scale. In this thesis, our primary object of study is the huge portion of the Web (hereafter referred to as the deep Web) hidden behind web search interfaces. We concentrate on three classes of problems around the deep Web: characterizing the deep Web, finding and classifying deep web resources, and querying web databases.

Characterizing the deep Web: Though the term deep Web was coined in 2000, a long time ago by the standards of web-related concepts and technologies, many important characteristics of the deep Web are still unknown. Another concern is that existing surveys of the deep Web are predominantly based on studies of deep web sites in English. Findings from these surveys may therefore be biased, especially given the steady increase in non-English web content. Surveying national segments of the deep Web is thus of interest not only to national communities but to the whole web community as well. In this thesis, we propose two new methods for estimating the main parameters of the deep Web. We use the suggested methods to estimate the scale of one specific national segment of the Web and report our findings. We also build and make publicly available a dataset describing more than 200 web databases from that national segment of the Web.

Finding deep web resources: The deep Web has been growing at a very fast pace; it has been estimated that there are hundreds of thousands of deep web sites. Because of the huge volume of information in the deep Web, there has been significant interest in approaches that allow users and computer applications to leverage this information. Most approaches assume that the search interfaces to the web databases of interest have already been discovered and are known to query systems. Such assumptions rarely hold, mostly because of the large scale of the deep Web: for any given domain of interest there are simply too many web databases with relevant content. The ability to locate search interfaces to web databases therefore becomes a key requirement for any application accessing the deep Web. In this thesis, we describe the architecture of the I-Crawler, a system for finding and classifying search interfaces. The I-Crawler is intentionally designed to be used in deep web characterization studies and for constructing directories of deep web resources. Unlike almost all other approaches to the deep Web so far, the I-Crawler is able to recognize and analyze JavaScript-rich and non-HTML searchable forms.

Querying web databases: Retrieving information by filling out web search forms is a typical task for a web user, all the more so because the interfaces of conventional search engines are themselves web forms. At present, a user needs to manually provide input values to search interfaces and then extract the required data from the result pages. Filling out forms manually is cumbersome and infeasible for complex queries, yet such queries are essential for many web searches, especially in e-commerce. Automating the querying of search interfaces and the retrieval of data behind them is therefore desirable and essential for tasks such as building domain-independent deep web crawlers and automated web agents, domain-specific (vertical) search engines, and the extraction and integration of information from various deep web resources. We present a data model for representing search interfaces and discuss techniques for extracting field labels, client-side scripts, and structured data from HTML pages. We also describe a representation of result pages and discuss how to extract and store the results of form queries. In addition, we present a user-friendly and expressive form query language that allows one to retrieve information behind search interfaces and to extract useful data from the result pages based on specified conditions. We implement a prototype system for querying web databases and describe its architecture and component design.
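To make the form-querying workflow concrete, the following is a minimal Python sketch of issuing a query through a simple GET-based search form and pulling link texts out of the result page. It is an illustration under stated assumptions, not the thesis's prototype: the endpoint and the field name q are hypothetical, and real deep web interfaces (POST forms, JavaScript-rich forms, multi-field interfaces) require considerably more machinery.

```python
# A minimal, illustrative sketch of automated form querying, assuming a
# hypothetical search interface that accepts a GET parameter named "q".
from urllib.parse import urlencode
from urllib.request import urlopen
from html.parser import HTMLParser

class ResultExtractor(HTMLParser):
    """Collects the text of links on a result page: a crude stand-in for
    the structured result extraction the thesis describes."""
    def __init__(self):
        super().__init__()
        self.in_link = False
        self.results = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.in_link = True

    def handle_endtag(self, tag):
        if tag == "a":
            self.in_link = False

    def handle_data(self, data):
        if self.in_link and data.strip():
            self.results.append(data.strip())

def query_search_interface(base_url, **fields):
    # Encode the user's query terms as form parameters and fetch the
    # dynamically generated result page.
    url = base_url + "?" + urlencode(fields)
    with urlopen(url) as response:
        page = response.read().decode("utf-8", errors="replace")
    extractor = ResultExtractor()
    extractor.feed(page)
    return extractor.results

# Usage against a hypothetical endpoint:
# hits = query_search_interface("https://example.org/search", q="deep web")
```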
Abstract:
The objective of this thesis was to identify the effects of different factors on the tension and tension relaxation of wet paper webs after high-speed straining. The study was motivated by the plausible connection, shown by previous studies, between the mechanical properties of the wet web and its runnability on paper machines. The mechanical properties of wet paper were examined using a fast tensile test rig with a strain rate of 1000%/s. Most of the tests were carried out with laboratory handsheets, but samples from a pilot paper machine were also used. The tension relaxation of paper was evaluated as the tension remaining after 0.475 s of relaxation (residual tension). The tensile and relaxation properties of wet webs were found to be strongly dependent on the quality and amount of fines. At low fines content, the tensile strength and residual tension of wet paper were mainly determined by the mechanical interactions between fibres at their contact points. As fines strengthen the mechanical interaction in the network, the fibre properties also become important. Fibre deformations caused by the mechanical treatment of pulp were shown to reduce the mechanical properties of both dry and wet paper; however, the effect was significantly larger for wet paper. An increase in filler content from 10% to 25% greatly reduced the tensile strength of dry paper, but did not significantly impair wet web tensile strength or residual tension. Increased filler content was shown to increase the dryness of the wet web after the press section, which partly compensates for the reduction of fibrous material in the web. It is also plausible that fillers increase entanglement friction between fibres, which benefits wet web strength. Different contaminants present in white water during sheet formation lowered the surface tension and increased dryness after wet pressing. The addition of these contaminants reduced the tensile strength of dry paper; this reduction could not be explained by the reduced surface tension, but rather by the tendency of the contaminants to interfere with inter-fibre bonding. Wet web strength, in contrast, was not affected by the changes in the surface tension of the white water or by possible changes in the hydrophilicity of fibres caused by the contaminants. Spraying different polymers on wet paper before wet pressing had a significant effect on both dry and wet web tensile strength, whereas wet web elastic modulus and residual tension were essentially unaffected. We suggest that the increase in dry and wet paper strength may be explained by molecular-level interactions between these chemicals and the fibres. The most significant increases in dry and wet paper strength were achieved with a dual application of anionic and cationic polymers. Furthermore, adding papermaking chemicals selectively to different fibre fractions (as opposed to adding them to the whole pulp) improved both the wet web mechanical properties and the drainage of the pulp suspension.
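Since residual tension is defined operationally (the tension remaining 0.475 s after straining), a short sketch of how it might be read off a sampled tension-time signal is given below. The decay curve and units are invented for illustration; the thesis's actual data processing may differ.

```python
# Illustrative only: reading residual tension off a sampled tension-time
# curve, assuming relaxation starts at t = 0 after the fast strain.
import numpy as np

def residual_tension(t, tension, t_relax=0.475):
    """Tension remaining after t_relax seconds of relaxation."""
    return float(np.interp(t_relax, t, tension))

# Synthetic demonstration signal (made-up decay toward a plateau).
t = np.linspace(0.0, 1.0, 500)              # time since straining, s
tension = 150.0 * np.exp(-t / 0.2) + 80.0   # web tension, N/m (invented)
print(residual_tension(t, tension))         # tension at 0.475 s
```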
Abstract:
This thesis discusses the reasons why bio-fuels are used and why bio-energy is developing rapidly. It covers bio-energy technologies, applications, and methods, together with basic thermodynamic and economic theory. The economic calculation compares two combinations: a boiler plant below 20 MW combined with an ablative pyrolysis plant for bio-oil production, and a CHP plant below 100 MW combined with the RTP pyrolysis bio-oil production technology. The thesis provides material on the characteristics of wood chips and bio-oil and explains their nature, and it describes the situation in the bio-fuel market and bio-fuel trade. Pyrolysis technologies such as the ablative and RTP processes are described. The liquid product of the pyrolysis processes is bio-oil, which can differ even within the same production process because of the nature and characteristics of the raw material. The calculation shows the advantages and weaknesses of the two combinations and supports the initial suppositions. The work also shows that, to obtain more value from an energy project, it is often advantageous to build plants in combination.
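Because the abstract only names the comparison, here is a hedged Python sketch of the sort of discounted cash flow comparison such a calculation might involve. Every number (investments, revenues, discount rate, lifetime) is a placeholder; the thesis's actual figures are not reproduced.

```python
# Net present value comparison of two plant combinations. All figures
# below are invented placeholders, not values from the thesis.
def npv(rate, cashflows):
    """Net present value of yearly cash flows, cashflows[0] at year 0."""
    return sum(cf / (1.0 + rate) ** year for year, cf in enumerate(cashflows))

# Combination A: boiler plant (<20 MW) + ablative pyrolysis plant.
# Combination B: CHP plant (<100 MW) + RTP pyrolysis technology.
combo_a = [-10e6] + [1.5e6] * 20   # investment, then 20 years of net revenue
combo_b = [-40e6] + [5.5e6] * 20
rate = 0.08                        # assumed discount rate

better = "A" if npv(rate, combo_a) > npv(rate, combo_b) else "B"
print(f"NPV A = {npv(rate, combo_a):,.0f}, NPV B = {npv(rate, combo_b):,.0f} -> {better}")
```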
Abstract:
With the growth of new technologies, using online tools has become part of everyday life. This has had a particular impact on researchers: the data obtained from various experiments needs to be analyzed, and programming knowledge has become mandatory even for pure biologists. Hence, VTT developed R Executables (REX), a web application that provides a graphical interface to biological data functions such as image analysis, gene expression data analysis, plotting, and disease-control studies, using R functions to produce the results. REX gives biologists an interactive application in which they can directly enter values and run the required analysis with a single click; the program processes the given data in the background and returns results rapidly. Owing to the growth of data and the load on the server, the interface developed problems: long processing times, a poor GUI, data storage issues, security weaknesses, a minimally interactive user experience, and crashes with large amounts of data. This thesis describes the methods by which these problems were resolved to make REX a better application for the future. The old REX was developed with Python and Django; the new version is implemented with Vaadin, a Java framework for developing web applications in which code is written in Java and which offers rich components, better security, better speed, and a good, interactive interface. In this thesis, a subset of REX functionality, including IST bulk plotting and image segmentation, was selected and reimplemented using Vaadin. I wrote 662 lines of code, with Vaadin handling the front end while the R language was used for back-end data retrieval, computation, and plotting. The application is structured so that further functionality can be migrated with ease from the old REX. Future development focuses on adding high-throughput screening functions and gene expression database handling.
Abstract:
The objective of the study was to examine the stages of building a company's web operations and the measurement of their success. The building process was studied with a five-step model, whose steps are: evaluation, strategy formulation, planning, blueprint, and implementation. To complement the evaluation and implementation stages, and in particular to support the measurement of the success of web operations, the benefits of such operations (CRM and the communication, sales, and distribution channel benefits from a marketing perspective) were discussed. To support the assessment of success, a stair model of internet operations was also introduced; it defines the storefront, dynamic, transaction, and e-business levels. The study identified success factors for internet operations: high-quality content, attractiveness, entertainment value, informativeness, timeliness, personalization, trust, interactivity, usability, convenience, loyalty, performance, responsiveness, and the collection of user data. The metrics were divided into activity, behaviour, and conversion metrics, and other metrics and success indicators were also presented. These elements of success and the metrics were brought together in a new model for assessing the success of internet operations. In the empirical part of the thesis, the presented theories were reflected against the web operations of ABB (within ABB, especially ABB Stotz-Kontakt), with the help of document analysis and interviews. The empirical part illustrated the theories in practice and revealed an opportunity to extend them. The model for building internet operations can also be used for developing existing web operations, and the stair model is also suitable for evaluating current internet operations. Applying the metrics in practice, however, revealed a need to develop them further and to research the topic more. They should also be tied more closely to the measurement of overall business success.
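As an illustration of the three metric classes named above, here is a small Python sketch that computes one metric of each kind from a simplified visit log. The log format, fields, and thresholds are assumptions made for the example, not definitions from the thesis.

```python
# Illustrative sketch of activity, behaviour, and conversion metrics
# computed from a simplified visit log (fields are assumptions).
from dataclasses import dataclass

@dataclass
class Visit:
    pages_viewed: int
    duration_s: float
    converted: bool   # e.g., placed an order or submitted a lead form

def site_metrics(visits):
    n = len(visits)
    return {
        "visits (activity)": n,
        "page views (activity)": sum(v.pages_viewed for v in visits),
        "avg duration s (behaviour)": sum(v.duration_s for v in visits) / n,
        "bounce rate (behaviour)": sum(v.pages_viewed == 1 for v in visits) / n,
        "conversion rate (conversion)": sum(v.converted for v in visits) / n,
    }

log = [Visit(1, 12, False), Visit(5, 210, True), Visit(3, 95, False)]
for name, value in site_metrics(log).items():
    print(f"{name}: {value:.2f}" if isinstance(value, float) else f"{name}: {value}")
```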
Abstract:
Browsing the web has become one of the most important features of high-end mobile phones, and in the future more and more people will use a mobile phone for web browsing. Large touchscreens improve the browsing experience, but many web sites are designed to be used with a mouse. A touchscreen differs substantially from a mouse as a pointing device, and therefore mouse emulation logic is required in browsers to make more web sites usable. This Master's thesis lists the most significant cases in which the differences between a mouse and a touchscreen affect web browsing. Five touchscreen mobile phones and their web browsers were evaluated to find out whether and how these cases are handled. As part of this thesis, a simple QtWebKit-based mobile web browser with an advanced mouse emulation model was also implemented, aiming to solve all the problematic cases. The conclusion of this work is that it is feasible to emulate a mouse with a touchscreen and thus deliver a good user experience in mobile web browsing. However, current high-end touchscreen mobile phones have relatively underdeveloped mouse emulation in their web browsers, and there is much to improve.
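To illustrate the kind of logic involved, here is a simplified Python sketch of one common emulation policy: a first tap on a hover-sensitive element synthesizes only mouseover (so CSS hover menus can open), and a second tap on the same element delivers the click. This is a generic policy sketched for illustration, not necessarily the model implemented in the thesis's browser.

```python
# Simplified touch-to-mouse emulation policy (illustrative, not the
# thesis's actual model): hover-sensitive elements need two taps.
class MouseEmulator:
    def __init__(self):
        self.hovered = None  # element that last received synthetic mouseover

    def on_tap(self, element, wants_hover):
        # First tap on a hover-sensitive element: hover only, no click yet.
        first_hover = wants_hover and element != self.hovered
        self.hovered = element
        events = ["mousemove", "mouseover"]   # synthesized on every tap
        if not first_hover:
            # Plain element, or second tap on the same one: full click.
            events += ["mousedown", "mouseup", "click"]
        return events

emu = MouseEmulator()
print(emu.on_tap("menu", wants_hover=True))   # opens the hover menu
print(emu.on_tap("menu", wants_hover=True))   # second tap clicks it
print(emu.on_tap("link", wants_hover=False))  # plain link: click at once
```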
Abstract:
Technological innovations, the development of the internet, and globalization have increased the number and complexity of web applications. As a result, keeping web user interfaces understandable and usable (in terms of ease of use, effectiveness, and satisfaction) is a challenge. In particular, designing user-intuitive interface signs (the small elements of a web user interface, e.g., navigational links, command buttons, icons, small images, and thumbnails) is an issue for designers. Interface signs are key elements of web user interfaces because they act as communication artefacts that convey web content and system functionality, and because users interact with systems by means of them. In this light, applying concepts from semiotics (the study of signs) to web interface signs can uncover new and important perspectives on web user interface design and evaluation. The thesis focuses on web interface signs and uses semiotics as its background theory. Its underlying aim is to provide valuable insights for designing and evaluating web user interfaces from a semiotic perspective in order to improve overall web usability. The fundamental research question is: what do practitioners and researchers need to be aware of, from a semiotic perspective, when designing or evaluating web user interfaces to improve web usability? Methodologically, the thesis follows a design science research (DSR) approach. A systematic literature review and six empirical studies were carried out, the empirical studies with a total of 74 participants in Finland. The studies were designed and conducted following the steps of the design science research process: (a) problem identification and motivation, (b) definition of the objectives of a solution, (c) design and development, (d) demonstration, (e) evaluation, and (f) communication. Data were collected through observation in a usability testing laboratory, analytical (expert) inspection, questionnaires, and structured and semi-structured interviews, and were analyzed using user behaviour analysis, qualitative analysis, and statistics. The results led to the following contributions. First, they establish the current status of semiotic research in UI design and evaluation and highlight the importance of considering semiotic concepts in both. Second, the thesis explores interface sign ontologies (the sets of concepts and skills a user needs in order to interpret the meaning of interface signs), providing a set of ontologies used to interpret interface signs and a set of features related to ontology mapping in their interpretation. Third, the thesis explores the value of integrating semiotic concepts into usability testing. Fourth, the thesis proposes a semiotic framework (Semiotic Interface sign Design and Evaluation, SIDE) for interface sign design and evaluation, intended to make signs intuitive for end users and to improve web usability; the SIDE framework includes a set of determinants and attributes of user-intuitive interface signs and a set of semiotic heuristics for designing and evaluating them. Finally, the thesis assesses (a) the quality of the SIDE framework in terms of performance metrics (e.g., thoroughness, validity, effectiveness, and reliability) and (b) the contributions of the SIDE framework from the evaluators' perspective.
Abstract:
The aim of this thesis was to examine emotions in a web-based learning environment (WBLE). Theoretically, the thesis was grounded on the dimensional model of emotions. Four empirical studies were conducted. Study I focused on students' anxiety and self-efficacy in computer-using situations. Studies II and III examined the influence of experienced emotions on students' collaborative visible and non-collaborative invisible activities and on lurking in a WBLE; Study II also examined the antecedents of the emotions students experience in a web-based learning environment. Study IV concentrated on clarifying the differences between emotions experienced in face-to-face and web-based collaborative learning. The results of these studies are reported in four original research articles published in scientific journals. The studies demonstrate that emotions are important determinants of student behaviour in web-based learning and justify the conclusion that interactions on the web can and do have emotional content. Based on the results of these empirical studies, it can be concluded that the emotions students experience during web-based learning result mostly from social interaction rather than from the technological context; the technology itself is not the only antecedent of students' emotional reactions in collaborative web-based learning situations. Nevertheless, the technology also influenced students' behaviour: students' computer anxiety was associated with negative expectations of the consequences of using technology-based learning environments in their studies. Moreover, the results indicated that student behaviours in a WBLE can be divided into three partially overlapping classes: (i) collaborative visible activities, (ii) non-collaborative invisible activities, and (iii) lurking. The emotions students experienced during web-based learning affected how actively they participated in these activities. In particular, lurkers, i.e. students who seldom participated in discussions but frequently visited the online environment, experienced more negatively valenced emotions during the courses than the other students did. This indicates that such negatively toned emotional experiences can make lurking individuals less eager to participate in other WBLE courses in the future. Future research should therefore look more closely at the reasons that cause individuals to lurk in online learning groups, and at the development of learning tasks that do not encourage or permit lurking or inactivity. Finally, the study comparing emotional reactions in web-based and face-to-face collaborative learning indicated that learning via web-based communication produced more affective reactivity than learning in a face-to-face situation: the students in the web-based learning group experienced more intense emotions than those in the face-to-face group. One interpretation of this result is that the lack of means for expressing emotional reactions and for perceiving others' emotions increased the affectivity in the web-based learning groups. Such increased affective reactivity could, for example, impair an individual's learning performance, especially in complex learning tasks. It is therefore recommended that future studies examine the possibilities for expressing emotions in a text-based web environment, to ensure better means of communicating emotions and thereby possibly decrease the high level of affectivity. However, we do not yet know whether means of communicating emotional expressions via the web (for example, "smileys" or "emoticons") would be beneficial or disadvantageous in formal learning situations. Future studies should therefore also assess how the use of such symbols for expressing emotions in a text-based web environment affects students' and teachers' behaviour and emotional state in web-based learning environments.
Abstract:
The topical classification of web portals can be used to identify a user's interests by collecting statistics on his or her browsing habits across different categories. This Master's thesis examines the areas of web applications in which the collected statistics can be used for personalization. The general principles of content personalization, Internet advertising, and information retrieval are explained using mathematical models. In addition, the thesis describes the general characteristics of web portals and the issues involved in collecting statistical data.
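To make the idea concrete, the following Python sketch builds a user interest profile from category-level browsing statistics and matches it against a target profile using cosine similarity. The categories, sample data, and choice of similarity measure are illustrative assumptions, not models taken from the thesis.

```python
# Illustrative interest profiling from category-level browsing statistics.
import math
from collections import Counter

def interest_profile(visited_categories):
    """Normalized frequency vector over portal categories."""
    counts = Counter(visited_categories)
    total = sum(counts.values())
    return {cat: n / total for cat, n in counts.items()}

def cosine(p, q):
    """Cosine similarity between two sparse profiles."""
    dot = sum(p.get(k, 0.0) * q.get(k, 0.0) for k in set(p) | set(q))
    norm = (math.sqrt(sum(v * v for v in p.values()))
            * math.sqrt(sum(v * v for v in q.values())))
    return dot / norm if norm else 0.0

user = interest_profile(["sports", "sports", "finance", "sports", "travel"])
ad_profile = {"sports": 0.8, "travel": 0.2}   # target profile of an ad campaign
print(cosine(user, ad_profile))  # higher similarity -> better-matched ad
```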
Abstract:
Today, the rapid availability and good manageability of information are key business concerns, which is why existing information systems are being integrated. Integration imposes many kinds of requirements, so the selection of a suitable integration method and technology must be made with care. An integration implementation should aim at so-called loose coupling, which makes it possible to achieve independence of time, place, and platform. The assumptions between the integrated parties are then reduced to a minimum, which improves the manageability and fault tolerance of the integration. This Master's thesis focuses on the characteristics, advantages, and disadvantages of the integration methods and technologies currently used in industry. In addition, the thesis introduces Web services technology and implements an asynchronous data-copying application with it. Web services is a still-evolving service-oriented technology that aims to overcome many of the problems that plagued earlier technologies. One of the main goals of the technology is to create a loose coupling between the integrated parties and to enable operation in a heterogeneous environment. However, the technology still suffers from a lack of standards, for example in security matters, and from the development of overlapping standards by different vendors. For the technology to become widespread, these problems must be solved.
Abstract:
Combining economic calculation with life cycle assessment (LCA) has recently begun to interest various industries worldwide. Several LCA software programs include cost accounting features, and individual projects have combined environmental and economic calculation methods. This project examines the suitability of such combinations for the Finnish pulp and paper industry, as well as the addition of a cost accounting feature to KCL's LCA software, KCL-ECO 3.0. All the methods found during the study that combine LCA with economic calculation are presented in this work; many of them use life cycle cost assessment (LCCA). In principle, the life cycle is defined differently in LCCA and in LCA, which creates challenges for combining the two methods. A suitable life cycle must be defined according to the goals of the calculation. The work presents a recommended method that starts from the special characteristics of the Finnish pulp and paper industry. The basic requirement is compatibility with the life cycle conventionally used in the LCA of paper. The integration of the method into KCL-ECO 3.0 is discussed in detail.
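For orientation, LCCA methods of the kind surveyed here typically rest on discounting all life cycle costs to a present value. A generic textbook formulation (not necessarily the exact form recommended in the thesis) is

$$\mathrm{LCC} = \sum_{t=0}^{T} \frac{C_t}{(1+r)^t}$$

where $C_t$ collects the costs incurred in year $t$ (investment, operation, maintenance, disposal), $r$ is the discount rate, and $T$ is the length of the chosen life cycle. The choice of $T$ is precisely where the differing life cycle definitions of LCA and LCCA come into conflict.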
Abstract:
This thesis gives an overview of Web services technology and uses it to implement the integration of two information systems. Web services is a new, implementation-technology-independent approach to information systems integration, business-to-business e-commerce, and the distribution of application logic. The thesis focuses on the lower-level core technologies of Web services (SOAP, WSDL, and UDDI). The theoretical part defines Web services and describes the Web services architecture and the standards that implement it. In the applied part, the integration of two information systems is implemented using Web services. To facilitate the use and creation of Web services, a general-purpose component was implemented that can later be reused in similar projects. The thesis examines the applicability of Web services to the internal integration of an organization's information systems and to the distribution of application logic. The analysis concludes that Web services is currently an immature technology and is, for now, suitable only for solving simple problems. In the future, however, Web services has the potential to become a common base technology for both integration and the distribution of application logic.
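As a concrete illustration of the SOAP layer mentioned above, here is a minimal Python sketch that posts a SOAP 1.1 envelope over HTTP using only the standard library. The endpoint, XML namespace, and CopyData operation are hypothetical placeholders (loosely echoing the data-copying theme of the abstract above it), not the actual systems integrated in the thesis.

```python
# Minimal SOAP 1.1 call using only the Python standard library.
# Endpoint, namespace, and the CopyData operation are hypothetical.
from urllib.request import Request, urlopen

SOAP_ENVELOPE = """<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <CopyData xmlns="http://example.org/datacopy">
      <sourceTable>customers</sourceTable>
      <targetSystem>ERP</targetSystem>
    </CopyData>
  </soap:Body>
</soap:Envelope>"""

def call_soap(endpoint, soap_action, envelope):
    # SOAP 1.1 is a plain HTTP POST of an XML envelope; the SOAPAction
    # header tells the server which operation is being invoked.
    request = Request(
        endpoint,
        data=envelope.encode("utf-8"),
        headers={"Content-Type": "text/xml; charset=utf-8",
                 "SOAPAction": soap_action},
    )
    with urlopen(request) as response:
        return response.read().decode("utf-8")

# Usage (against a real endpoint described by a WSDL document):
# reply = call_soap("https://example.org/ws",
#                   "http://example.org/datacopy/CopyData", SOAP_ENVELOPE)
```

Loose coupling shows up here in the fact that the caller depends only on the message contract (the WSDL-described envelope), not on the platform or language of the receiving system.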