42 results for Web participatif


Relevance:

20.00%

Publisher:

Abstract:

Current-day web search engines (e.g., Google) do not crawl and index a significant portion of the Web, and hence web users who rely on search engines alone are unable to discover and access a large amount of information in the non-indexable part of the Web. Specifically, dynamic pages generated from parameters provided by a user via web search forms (or search interfaces) are not indexed by search engines and cannot be found in search results. Such search interfaces provide web users with online access to myriads of databases on the Web. To obtain information from a web database of interest, a user issues a query by specifying query terms in a search form and receives the query results, a set of dynamic pages that embed the required information from the database. At the same time, issuing a query via an arbitrary search interface is an extremely complex task for any kind of automatic agent, including web crawlers, which, at least up to the present day, do not even attempt to pass through web forms on a large scale. In this thesis, our primary object of study is the huge portion of the Web (hereafter referred to as the deep Web) hidden behind web search interfaces. We concentrate on three classes of problems around the deep Web: characterizing the deep Web, finding and classifying deep web resources, and querying web databases.

Characterizing the deep Web: Though the term deep Web was coined in 2000, which is a long time ago for any web-related concept or technology, we still do not know many important characteristics of the deep Web. Another matter of concern is that the surveys of the deep Web carried out so far are predominantly based on the study of deep web sites in English. One can therefore expect that the findings of these surveys may be biased, especially owing to the steady increase in non-English web content. In this way, surveying national segments of the deep Web is of interest not only to national communities but to the whole web community as well. In this thesis, we propose two new methods for estimating the main parameters of the deep Web. We use the suggested methods to estimate the scale of one specific national segment of the Web and report our findings. We also build and make publicly available a dataset describing more than 200 web databases from this national segment of the Web.

Finding deep web resources: The deep Web has been growing at a very fast pace. It has been estimated that there are hundreds of thousands of deep web sites. Due to the huge volume of information in the deep Web, there has been significant interest in approaches that allow users and computer applications to leverage this information. Most approaches have assumed that the search interfaces to the web databases of interest are already discovered and known to query systems. However, such assumptions rarely hold, mainly because of the large scale of the deep Web: for any given domain of interest there are simply too many web databases with relevant content. Thus, the ability to locate search interfaces to web databases becomes a key requirement for any application accessing the deep Web. In this thesis, we describe the architecture of the I-Crawler, a system for finding and classifying search interfaces. Specifically, the I-Crawler is intentionally designed to be used in deep Web characterization studies and for constructing directories of deep web resources. Unlike almost all other existing approaches to the deep Web, the I-Crawler is able to recognize and analyze JavaScript-rich and non-HTML searchable forms.

Querying web databases: Retrieving information by filling out web search forms is a typical task for a web user, all the more so as the interfaces of conventional search engines are also web forms. At present, a user needs to manually provide input values to search interfaces and then extract the required data from the result pages. Manually filling out forms is cumbersome and infeasible for complex queries, yet such queries are essential for many web searches, especially in the area of e-commerce. Thus, automating the querying and retrieval of data behind search interfaces is desirable and essential for tasks such as building domain-independent deep web crawlers and automated web agents, searching for domain-specific information (vertical search engines), and extracting and integrating information from various deep web resources. We present a data model for representing search interfaces and discuss techniques for extracting field labels, client-side scripts and structured data from HTML pages. We also describe a representation of result pages and discuss how to extract and store the results of form queries. In addition, we present a user-friendly and expressive form query language that allows one to retrieve information behind search interfaces and extract useful data from the result pages based on specified conditions. We implement a prototype system for querying web databases and describe its architecture and component design.
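
As a rough illustration of the kind of form querying and result extraction this thesis automates, the sketch below fills a hypothetical web search form and scrapes the returned dynamic page. The URL, form fields and result markup are invented placeholders, and the thesis's own data model and query language are considerably richer than this.

```python
# Hedged illustration only: submit a web search form programmatically and pull
# structured records out of the dynamically generated result page. The URL,
# field names and CSS selectors are hypothetical placeholders.
import requests
from bs4 import BeautifulSoup

FORM_URL = "http://books.example.org/search"   # hypothetical search interface

def query_web_database(title_term: str, max_price: str) -> list[dict]:
    # Submit the search form with user-style input values.
    response = requests.post(
        FORM_URL,
        data={"title": title_term, "price_max": max_price},
        timeout=30,
    )
    response.raise_for_status()

    # Parse the result page and extract the fields embedded in each result row.
    soup = BeautifulSoup(response.text, "html.parser")
    records = []
    for row in soup.select("table.results tr.item"):   # hypothetical markup
        cells = [cell.get_text(strip=True) for cell in row.find_all("td")]
        if len(cells) >= 2:
            records.append({"title": cells[0], "price": cells[1]})
    return records

if __name__ == "__main__":
    for record in query_web_database("deep web", "50"):
        print(record)
```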

Relevance:

20.00%

Publisher:

Abstract:

Online paper web analysis relies on traversing scanners that criss-cross on top of a rapidly moving paper web. The sensors embedded in the scanners measure many important quality variables of the paper, such as basis weight, caliper and porosity. Most of these quantities vary strongly, and the measurements are noisy at many different scales. The zigzagging nature of scanning makes it difficult to separate machine-direction (MD) and cross-direction (CD) variability from one another. To improve the 2D resolution of the quality variables above, the paper quality control team at the Department of Mathematics and Physics at LUT has implemented efficient Kalman filtering based methods that currently use 2D Fourier series. Fourier series are global and therefore resolve local spatial detail on the paper web rather poorly. The aim of the current thesis is to study alternative wavelet-based representations as candidates to replace the Fourier basis for a higher-resolution spatial reconstruction of these quality variables. The accuracy of wavelet-compressed 2D web fields will be compared with corresponding truncated Fourier series based fields.
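
The comparison outlined above can be sketched on synthetic data as follows; the test field, the wavelet choice and the compression ratio are illustrative assumptions only, not the thesis's actual web measurements.

```python
# Minimal sketch: compress a synthetic 2D "web field" by keeping only the largest
# wavelet coefficients and compare the reconstruction error against a truncated
# (low-frequency) 2D Fourier series that keeps roughly the same number of modes.
import numpy as np
import pywt

rng = np.random.default_rng(0)
ny, nx = 128, 512
y, x = np.mgrid[0:ny, 0:nx]
field = np.sin(2 * np.pi * x / 64) + 0.5 * np.cos(2 * np.pi * y / 32)
field[60:68, 200:208] += 2.0                      # a local defect-like feature
field += 0.1 * rng.standard_normal(field.shape)

keep = int(0.05 * field.size)                     # keep ~5% of the coefficients

# Wavelet compression: zero all but the 'keep' largest coefficients.
coeffs = pywt.wavedec2(field, "db4", level=4)
arr, slices = pywt.coeffs_to_array(coeffs)
threshold = np.sort(np.abs(arr).ravel())[-keep]
arr_compressed = np.where(np.abs(arr) >= threshold, arr, 0.0)
wave_recon = pywt.waverec2(
    pywt.array_to_coeffs(arr_compressed, slices, output_format="wavedec2"), "db4"
)[:ny, :nx]

# Truncated Fourier series: keep a centered block of the lowest-frequency modes.
spectrum = np.fft.fftshift(np.fft.fft2(field))
hy = int(round(np.sqrt(keep * ny / nx) / 2))
hx = int(round(np.sqrt(keep * nx / ny) / 2))
mask = np.zeros_like(spectrum)
mask[ny // 2 - hy : ny // 2 + hy, nx // 2 - hx : nx // 2 + hx] = 1.0
fft_recon = np.real(np.fft.ifft2(np.fft.ifftshift(spectrum * mask)))

print("wavelet RMSE:", np.sqrt(np.mean((field - wave_recon) ** 2)))
print("fourier RMSE:", np.sqrt(np.mean((field - fft_recon) ** 2)))
```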

Relevance:

20.00%

Publisher:

Abstract:

This thesis investigated good practices and established conventions for building a web application with a long service life. It was found that, over an application's life cycle, most of the costs come from maintenance. The goal was to build an application that would remain in use for a long time, so the share of maintenance costs had to be kept as small as possible. Errors detected as early as possible in the software development process reduce repair costs substantially compared with detecting them in the finished product. For this reason, the web application built in this thesis invested effort in the early phases of the process: requirements specification and design. Good software development practices have a substantial effect on the maintainability and clarity of a web application. By using an existing application framework and adding functionality with ready-made software components, an application built according to good practices is obtained. The web application implemented in this thesis was built using an application framework and a component architecture, and the resulting implementation was clear. The application was divided into logical units that handle the views, the database, and the mapping of data between them. Each unit works independently, which helps in maintaining and testing the application.
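
Purely as an illustration of the division into independent units described above (the abstract does not name the framework or implementation language used), a minimal sketch with a data-access unit, a view unit and a mapping unit could look like this:

```python
# Illustrative only: three independently testable units for data access,
# presentation, and the mapping between them. All names are hypothetical.
import sqlite3

class UserRepository:
    """Data-access unit: the only part of the application that talks to the database."""
    def __init__(self, connection: sqlite3.Connection):
        self._conn = connection

    def find_all(self) -> list[tuple[int, str]]:
        return self._conn.execute("SELECT id, name FROM users ORDER BY id").fetchall()

class UserListView:
    """View unit: turns plain data into output and knows nothing about the database."""
    def render(self, users: list[tuple[int, str]]) -> str:
        return "\n".join(f"{user_id}: {name}" for user_id, name in users)

class UserController:
    """Mapping unit: binds the data-access and view units together."""
    def __init__(self, repository: UserRepository, view: UserListView):
        self._repository = repository
        self._view = view

    def list_users(self) -> str:
        return self._view.render(self._repository.find_all())

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    conn.executemany("INSERT INTO users (name) VALUES (?)", [("Aino",), ("Mikko",)])
    print(UserController(UserRepository(conn), UserListView()).list_users())
```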

Relevance:

20.00%

Publisher:

Relevance:

20.00%

Publisher:

Abstract:

This thesis examined the state of the Ajax technique in web application development. For this purpose, a demo application was developed with which the usefulness of the techniques could be evaluated. The thesis also presents various technologies that are closely related to implementing Ajax applications. The demo application was built on the free LAMP (Linux, Apache, MySQL and PHP) stack. The thesis assesses the usefulness and problems of Ajax from the perspectives of the current web, web developers, the browsers in use, and users. Finally, the future of the web and Ajax's role in it are also briefly discussed.

Relevance:

20.00%

Publisher:

Abstract:

Information systems integration is nowadays an important part of companies' operations and of maintaining their competitiveness. Service-oriented architecture and Web services are a new, flexible way to carry out integration between information systems. One core component of Web services is UDDI, Universal Description, Discovery and Integration. UDDI works like a service registry: it defines a way to publish, find and take Web services into use. Web services can be searched for in UDDI using various criteria, such as the service's location, the company's name and the industry. UDDI is itself also a Web service, based on the XML description language and the SOAP protocol. This thesis looks at UDDI in more detail, including from a technical point of view. In the view of publishers and users, an essential shortcoming of UDDI has been the lack of security, which has considerably limited its use and adoption. The thesis therefore examines security-related issues and solutions in particular, as well as the significance of UDDI for companies.
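
A hedged sketch of the kind of UDDI inquiry mentioned above is given below: a find_business call expressed as a SOAP message and posted to a registry's inquiry endpoint. The endpoint URL is a placeholder, and the message follows the UDDI version 2 inquiry API; details vary between registries and UDDI versions.

```python
# Hedged sketch: search a UDDI registry for businesses by name using a raw
# SOAP/XML find_business message. The endpoint URL is hypothetical.
import requests

INQUIRY_URL = "https://uddi.example.com/inquiry"   # hypothetical registry endpoint

def find_business_by_name(name: str) -> str:
    envelope = f"""<?xml version="1.0" encoding="UTF-8"?>
<Envelope xmlns="http://schemas.xmlsoap.org/soap/envelope/">
  <Body>
    <find_business generic="2.0" xmlns="urn:uddi-org:api_v2">
      <name>{name}</name>
    </find_business>
  </Body>
</Envelope>"""
    response = requests.post(
        INQUIRY_URL,
        data=envelope.encode("utf-8"),
        headers={"Content-Type": "text/xml; charset=utf-8", "SOAPAction": '""'},
        timeout=30,
    )
    response.raise_for_status()
    return response.text   # businessList XML, to be parsed by the caller

if __name__ == "__main__":
    print(find_business_by_name("Example Corp"))
```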

Relevance:

20.00%

Publisher:

Abstract:

Due to concerns regarding globalisation and sustainable development, corporate social responsibility (CSR) is topical in the business context and in the field of accounting. The main objective of this study was to review previous academic literature in the field of CSR reporting and to develop an insight into CSR reporting in the Web-based environment. The main purpose was to find out what Web-based CSR reporting is like and how companies are utilising the Internet to communicate on responsibility issues. I did not, however, collect empirical research data but limited the study to a theoretical and descriptive examination. In order to create an insight into Web-based reporting, I examined the development, motives and current practices of CSR reporting. I concluded that the Internet is a unique, interactive communication channel that is used differently from annual reports. The number of companies engaging in Web-based CSR reporting is increasing, and reporting practices vary in terms of, for example, the content and accessibility of information. I also concluded that many companies have not yet discovered the true potential of the Web as an interactive communication medium.

Relevance:

20.00%

Publisher:

Abstract:

The aim of this thesis is to study and understand the theoretical concept of the Metanational corporation and how Web 2.0 technologies can be used to support the theory. The empirical part of the study compares the theory to the case company's current situation. The goal of the theoretical framework is to show how Web 2.0 technologies can be used on the three levels of the Metanational corporation. In order to do this, knowledge management, and more specifically knowledge transfer, is studied to understand what is needed from Web 2.0 technologies in the different functions and operations of the Metanational corporation. The final synthesis of the theoretical framework is a model in which the Web 2.0 technologies are placed on the levels of the Metanational corporation. The empirical part of the study is based on interviews conducted in the case company. The aim of the interviews is to understand the current state of the company in relation to the theoretical framework. Based on the interviews, the differences between the theoretical concept and the case company are presented and studied. Finally, based on the interviews and the theoretical framework, the study presents the identified problem areas and where the adoption of Web 2.0 tools is seen as beneficial.

Relevance:

20.00%

Publisher:

Abstract:

Web application performance testing is an emerging and important field of software engineering. As web applications become more commonplace and complex, the need for performance testing will only increase. This paper discusses common concepts, practices and tools that lie at the heart of web application performance testing. A pragmatic, hands-on approach is assumed where applicable; real-life examples of test tooling, execution and analysis are presented right next to the underpinning theory. On the client side, web application performance is primarily driven by the amount of data transmitted over the wire. On the server side, the selection of programming language and platform, implementation complexity and configuration are the primary contributors to web application performance. Web application performance testing is an activity that requires delicate coordination between project stakeholders, developers, system administrators and testers in order to produce reliable and useful results. Proper test definition, execution, reporting and repeatable test results are of utmost importance. Open-source performance analysis tools such as Apache JMeter, Firebug and YSlow can be used to realise effective web application performance tests. A sample case study using these tools is presented in this paper. The sample application was found to perform poorly even under the moderate load incurred by the sample tests.
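
As a simplified stand-in for what a tool such as Apache JMeter automates, the sketch below fires a fixed number of concurrent requests at one URL and summarises the observed latencies; the target URL, concurrency and request count are illustrative assumptions, not the setup of the paper's case study.

```python
# Simplified load-test sketch (not JMeter): concurrent GET requests against one
# URL, followed by a throughput and latency summary. Parameters are hypothetical.
import statistics
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET_URL = "http://localhost:8080/"   # hypothetical application under test
CONCURRENCY = 10
TOTAL_REQUESTS = 200

def timed_request(_: int) -> float:
    start = time.perf_counter()
    with urllib.request.urlopen(TARGET_URL, timeout=30) as response:
        response.read()                 # include transfer time in the measurement
    return time.perf_counter() - start

if __name__ == "__main__":
    started = time.perf_counter()
    with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
        latencies = sorted(pool.map(timed_request, range(TOTAL_REQUESTS)))
    elapsed = time.perf_counter() - started

    print(f"throughput: {TOTAL_REQUESTS / elapsed:.1f} requests/s")
    print(f"median latency: {statistics.median(latencies) * 1000:.1f} ms")
    print(f"95th percentile: {latencies[int(0.95 * len(latencies)) - 1] * 1000:.1f} ms")
```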

Relevance:

20.00%

Publisher:

Abstract:

Various strength properties of paper are measured to determine how well it resists breaks in a paper machine or in printing presses. The most frequently measured properties are dry tensile strength and dry tear strength. However, in many situations where paper breaks, it is not dry. For example, in web breaks after wet pressing the dry matter content can be around 45%. Thus, wet-web strength is often a more critical paper property than dry strength. Both the wet and dry strength properties of the samples were measured with an L&W tensile tester. Originally this device was not designed for measuring wet-web tensile strength, so a new procedure for handling the wet samples was developed. The method was tested with Pine Kraft (never dried). The effect of different strength additives on the wet-web and dry tensile strength of paper was studied. The polymers used in this experiment were an aqueous solution of a cationic polyamidoamine-epichlorohydrin resin (PAE), a cationic hydrophilised polyisocyanate and a cationic polyvinylamine (PVAm). Of the three chemicals used, only the cationic PAE considerably increased the wet-web strength. However, it was noticed that at constant solids content all the chemicals decreased the wet-web tensile strength. Since all the chemicals increased the solids content, it can be concluded that they work as drainage aids, not as wet-web strength additives. Of the chemicals, only PVAm increased the dry strength; the other two even decreased it. As the chemicals were used in highly diluted form and were added to the pulp slurry rather than to the surface of the paper sheets, the densities of the samples did not change. It should also be noted that all these chemicals are mainly used to improve wet strength after the drying of the web.

Relevance:

20.00%

Publisher:

Abstract:

A user interface is the boundary between the user and the functions offered by a system, and its usability affects the performance of those functions either positively or negatively. Therefore, during the design phase of an application it is worthwhile to assess the quality of the user interface and its functions and to test the viability of ideas by building prototypes. With prototyping, potential problems can be identified and fixed already at the drawing board. This Master's thesis deals with the prototyping of a user interface and its functions carried out during the development of a web application. User interfaces can be modelled with various methods, which the thesis reviews from a technological point of view, that is, how prototyping methods can be applied in the different phases of a project. A short overview of the tools used to support prototyping is given, presenting at a general level a few applications from different software categories, and the use of design patterns is also discussed. The thesis shows that general prototyping methods and principles can be applied to the prototyping of web applications. It is useful to start prototyping with sketches and to move on at an early stage to HTML models, which come close to the implementation technologies and allow the nature, look, feel and interaction of the application to be modelled. HTML prototypes can be refined into mixed-fidelity models, and they serve as the basis of the implementation. In further development, ideas can be presented with techniques of several different levels of fidelity.

Relevance:

20.00%

Publisher:

Abstract:

The research focus of this thesis is to explore options for building systems for business-critical web applications. Business criticality here includes requirements for data protection and system availability. The focus is on open-source software. The goals are to identify robust technologies and engineering practices for implementing such systems. The research methods include experiments with sample systems built around chosen software packages that represent certain technologies. The main research focused on finding a good method for database replication, a key functionality for high-availability, database-driven web applications. The research also included gathering engineering best practices from books written by administrators of high-traffic web applications. The database replication experiment showed that the block-level synchronous replication offered by the DRBD replication software provided considerably more robust data protection and high-availability functionality than the leading open-source database product MySQL with its built-in asynchronous replication. For master-master database setups, block-level replication is the more recommended way to build high availability into the system. Based on the thesis research, building high-availability web applications is possible using a combination of open-source software and engineering best practices for data protection, availability planning and scaling.
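
The data-protection difference reported above can be illustrated with a toy model (this is not DRBD or MySQL code): an asynchronous primary acknowledges a write before the replica has received it, so a crash in that window silently loses an acknowledged write, whereas a synchronous scheme refuses to acknowledge until the replica also holds the data.

```python
# Toy illustration only: acknowledged-write survival under asynchronous vs.
# synchronous replication when the primary fails before replicating.
class Replica:
    def __init__(self) -> None:
        self.data: dict[str, str] = {}

def write_async(primary: Replica, secondary: Replica, key: str, value: str,
                crash_before_replication: bool) -> bool:
    primary.data[key] = value
    # Acknowledged to the client here; replication would happen "later".
    if not crash_before_replication:
        secondary.data[key] = value
    return True                      # client sees success either way

def write_sync(primary: Replica, secondary: Replica, key: str, value: str,
               crash_before_replication: bool) -> bool:
    primary.data[key] = value
    if crash_before_replication:
        return False                 # no acknowledgement until the replica has it
    secondary.data[key] = value
    return True

if __name__ == "__main__":
    for writer in (write_async, write_sync):
        secondary = Replica()
        acked = writer(Replica(), secondary, "order-42", "paid",
                       crash_before_replication=True)
        survived = "order-42" in secondary.data
        print(f"{writer.__name__}: acknowledged={acked}, survives primary failure={survived}")
```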