936 results for Web system
Abstract:
Current-day web search engines (e.g., Google) do not crawl and index a significant portion of the Web and, hence, web users relying on search engines alone are unable to discover and access a large amount of information in the non-indexable part of the Web. Specifically, dynamic pages generated from parameters provided by a user via web search forms (or search interfaces) are not indexed by search engines and cannot be found in searchers' results. Such search interfaces provide web users with online access to myriads of databases on the Web. To obtain information from a web database of interest, a user issues a query by specifying query terms in a search form and receives the query results, a set of dynamic pages that embed the required information from the database. At the same time, issuing a query via an arbitrary search interface is an extremely complex task for any kind of automatic agent, including web crawlers, which, at least up to the present day, do not even attempt to pass through web forms on a large scale. In this thesis, our primary object of study is the huge portion of the Web (hereafter referred to as the deep Web) hidden behind web search interfaces. We concentrate on three classes of problems around the deep Web: characterizing the deep Web, finding and classifying deep web resources, and querying web databases.

Characterizing the deep Web: Though the term deep Web was coined in 2000, long ago by the standards of web-related concepts and technologies, we still do not know many important characteristics of the deep Web. Another matter of concern is that existing surveys of the deep Web are predominantly based on studies of deep web sites in English. One can thus expect that findings from these surveys may be biased, especially given the steady increase in non-English web content. Surveying national segments of the deep Web is therefore of interest not only to national communities but to the whole web community as well. In this thesis, we propose two new methods for estimating the main parameters of the deep Web. We use the suggested methods to estimate the scale of one specific national segment of the Web and report our findings. We also build and make publicly available a dataset describing more than 200 web databases from the national segment of the Web.

Finding deep web resources: The deep Web has been growing at a very fast pace, and it has been estimated that there are hundreds of thousands of deep web sites. Due to the huge volume of information in the deep Web, there has been significant interest in approaches that allow users and computer applications to leverage this information. Most approaches have assumed that the search interfaces to web databases of interest are already discovered and known to query systems. However, such assumptions rarely hold, mostly because of the large scale of the deep Web: for any given domain of interest there are too many web databases with relevant content. Thus, the ability to locate search interfaces to web databases becomes a key requirement for any application accessing the deep Web. In this thesis, we describe the architecture of the I-Crawler, a system for finding and classifying search interfaces. The I-Crawler is intentionally designed to be used in deep Web characterization studies and for constructing directories of deep web resources. Unlike almost all existing approaches to the deep Web, the I-Crawler is able to recognize and analyze JavaScript-rich and non-HTML searchable forms.

Querying web databases: Retrieving information by filling out web search forms is a typical task for a web user, all the more so since the interfaces of conventional search engines are also web forms. At present, a user needs to manually provide input values to search interfaces and then extract the required data from the result pages. Manually filling out forms is cumbersome and often infeasible for complex queries, yet such queries are essential for many web searches, especially in the area of e-commerce. Automating the querying and retrieval of data behind search interfaces is therefore desirable and essential for tasks such as building domain-independent deep web crawlers and automated web agents, searching for domain-specific information (vertical search engines), and extracting and integrating information from various deep web resources. We present a data model for representing search interfaces and discuss techniques for extracting field labels, client-side scripts and structured data from HTML pages. We also describe a representation of result pages and discuss how to extract and store the results of form queries. In addition, we present a user-friendly and expressive form query language that allows one to retrieve information behind search interfaces and extract useful data from the result pages based on specified conditions. We implement a prototype system for querying web databases and describe its architecture and component design.
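As a rough illustration of the interface-extraction side of this work, the sketch below enumerates the fields of a plain HTML search form using Python's standard html.parser. It is a simplified assumption of the technique, not the thesis's implementation, and it does not cover the JavaScript-rich and non-HTML forms that the I-Crawler handles.

    from html.parser import HTMLParser

    class FormFieldParser(HTMLParser):
        """Collects (name, type) pairs for the data-entry fields of a form."""
        def __init__(self):
            super().__init__()
            self.fields = []

        def handle_starttag(self, tag, attrs):
            a = dict(attrs)
            if tag == "input" and a.get("type", "text") != "submit":
                self.fields.append((a.get("name"), a.get("type", "text")))
            elif tag in ("select", "textarea"):
                self.fields.append((a.get("name"), tag))

    parser = FormFieldParser()
    parser.feed('<form><input name="title"><select name="genre"></select>'
                '<input type="submit" value="Search"></form>')
    print(parser.fields)  # [('title', 'text'), ('genre', 'select')]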
Abstract:
This paper presents the current state and development of a prototype web-GIS (Geographic Information System) decision support platform intended for application in natural hazard and risk management, mainly for floods and landslides. The platform uses open-source geospatial software and technologies, particularly the Boundless (formerly OpenGeo) framework and its client-side software development kit (SDK). The main purpose of the platform is to assist experts and stakeholders in the decision-making process for evaluating and selecting different risk management strategies through an interactive participation approach, integrating a web-GIS interface with a decision support tool based on a compromise programming approach. The access rights and functionality of the platform vary depending on the roles and responsibilities of stakeholders in managing the risk. The application of the prototype platform is demonstrated on an example case study site, the Malborghetto Valbruna municipality of north-eastern Italy, where flash floods and landslides are frequent and major events occurred in 2003. Preliminary feedback collected from stakeholders in the region is discussed to understand their perspectives on the proposed prototype platform.
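For readers unfamiliar with compromise programming, the sketch below shows the core idea the decision support tool builds on: alternatives are ranked by their weighted, normalized distance to an ideal point across the evaluation criteria. The criteria, weights and strategies are illustrative assumptions, not values from the platform.

    def compromise_distance(scores, ideal, anti_ideal, weights, p=2):
        """L_p distance of an alternative to the ideal point, normalized
        per criterion by the ideal/anti-ideal range."""
        total = 0.0
        for s, i, a, w in zip(scores, ideal, anti_ideal, weights):
            total += (w * abs(i - s) / abs(i - a)) ** p
        return total ** (1.0 / p)

    # Two hypothetical strategies scored on cost (best 0) and lives saved (best 100).
    ideal, anti_ideal, weights = [0.0, 100.0], [10.0, 0.0], [0.5, 0.5]
    strategies = {"dykes": [8.0, 90.0], "early warning": [2.0, 60.0]}
    ranked = sorted(strategies, key=lambda k: compromise_distance(
        strategies[k], ideal, anti_ideal, weights))
    print(ranked)  # the strategy closest to the ideal point comes first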
Abstract:
This paper presents a prototype of an interactive web-GIS tool for risk analysis of natural hazards, in particular floods and landslides, based on open-source geospatial software and technologies. The aim of the presented tool is to assist experts (risk managers) in analysing the impacts and consequences of a given hazard event in a considered region, providing an essential input to the decision-making process in the selection of risk management strategies by responsible authorities and decision makers. The tool is based on the Boundless (OpenGeo Suite) framework and its client-side environment for prototype development, and it is one of the main modules of a web-based collaborative decision support platform for risk management. Within this platform, users can import the necessary maps and information to analyse areas at risk. Based on the provided information and parameters, loss scenarios (amount of damage and number of fatalities) of a hazard event are generated on the fly and visualized interactively within the web-GIS interface of the platform. The annualized risk is calculated by combining the resulting loss scenarios with different return periods of the hazard event. The application of the developed prototype is demonstrated using a regional data set from one of the case study sites of the Marie Curie ITN CHANGES project, the Fella River area of north-eastern Italy.
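As a numerical sketch of combining loss scenarios with return periods (a textbook formulation assumed here, not necessarily the paper's exact procedure), the annualized risk can be approximated as the area under the loss-exceedance curve, where an event with return period T has annual exceedance probability p = 1/T:

    def annualized_risk(scenarios):
        """scenarios: list of (return_period_years, loss) pairs."""
        pts = sorted(((1.0 / t, loss) for t, loss in scenarios), reverse=True)
        risk, (p_prev, l_prev) = 0.0, pts[0]
        for p, l in pts[1:]:
            risk += 0.5 * (l_prev + l) * (p_prev - p)  # trapezoidal rule
            p_prev, l_prev = p, l
        return risk

    # Hypothetical flood losses (EUR) for 30-, 100- and 300-year events.
    print(annualized_risk([(30, 2e6), (100, 8e6), (300, 2e7)]))  # ~210000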
Abstract:
The aim of this project is to explore the possibilities of geographic information systems in epidemiological surveillance (VE) studies.
Abstract:
The aim of this project is to improve the current state of the technical repair and maintenance service for medical equipment at the Hospital Verge de la Cinta and the primary care centres of Tortosa, and to create clear, tidy, easy and intuitive interfaces for the users, whether clients or the employees themselves, which will serve as a communication bridge to the organization's central server, where all of its information will reside. The interfaces will be created using the user-centered design (UCD) methodology and, by means of a prototype, the tasks that the users (clients and employees) have to perform will be evaluated.
Abstract:
In the current economic situation it can be attractive to sell items that are no longer used and also to buy second-hand ones. From this idea arises this project: to create an online auction site where people can trade the things they no longer need. In keeping with the initial concept, the owner of the site will not receive any fee or percentage from each auction; the full amount goes to the seller. The main objective is to offer a place where, after registering, users can view and bid on the items that other people are auctioning, and can also create their own auctions. Each user will have a personal area where they can keep track of the auctions they have interacted with, and see at any moment the status of the auctions they have created. The view of an auction will update automatically without reloading the page, and if someone bids during the last minute, the auction will be extended by one more minute, to prevent last-second bids and thus maximize the final price. An administrator will be in charge of keeping the site running smoothly, with permission to add, edit, view and delete all the available information. The project was implemented using PHP for the programming and MySQL as the database management system.
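The one algorithmic detail in this abstract is the anti-sniping rule; a minimal sketch of it follows, in Python for illustration (the project itself is implemented in PHP with MySQL): any bid placed within the final minute pushes the deadline back by one minute.

    from datetime import datetime, timedelta

    class Auction:
        def __init__(self, ends_at):
            self.ends_at = ends_at
            self.highest_bid = None

        def place_bid(self, amount):
            now = datetime.utcnow()
            if now >= self.ends_at:
                raise ValueError("auction already closed")
            if self.highest_bid is not None and amount <= self.highest_bid:
                raise ValueError("bid must exceed the current highest bid")
            self.highest_bid = amount
            if self.ends_at - now <= timedelta(minutes=1):
                self.ends_at += timedelta(minutes=1)  # anti-sniping extension

    a = Auction(ends_at=datetime.utcnow() + timedelta(seconds=30))
    a.place_bid(100)   # arrives inside the final minute...
    print(a.ends_at)   # ...so the deadline has moved one minute later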
Abstract:
In this world where we have had to live through such hard changes with the economic crisis we are suffering, which has taken us from living in plenty to watching our day-to-day spending just to make it to the end of the month, it is time to reinvent ourselves. This is the motivation for this idea, whose objective is to develop a website that becomes a meeting point for users who want to pass on or broaden their knowledge, offering them the possibility of sharing their skills and abilities with one another. The site will consist of a board of activities where users, once registered, can create the activities they want to learn or teach, asking, if they wish, for something in return. Other users who are interested in an activity can then accept the request or make a proposal of their own. From that point on, the users must agree on how to carry out the activity. The site will have an area for users with administrator permissions so that they can manage the portal. This project was developed with the PHP framework CodeIgniter, which uses layered MVC programming, separating the code into three parts: the Model, the View and the Controller. The HTML5 and CSS3 languages were also used, as well as jQuery, a JavaScript library. MySQL was used as the database management system.
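As a framework-free illustration of the MVC separation mentioned above (sketched in Python; the project itself uses PHP with CodeIgniter), each layer keeps a single responsibility:

    class ActivityModel:                      # Model: data access only
        def __init__(self):
            self._rows = [{"title": "Guitar lessons", "wants": "English practice"}]
        def all(self):
            return list(self._rows)

    def activity_view(rows):                  # View: presentation only
        return "\n".join(f"- {r['title']} (in exchange for {r['wants']})"
                         for r in rows)

    class ActivityController:                 # Controller: glue between the two
        def __init__(self, model):
            self.model = model
        def index(self):
            return activity_view(self.model.all())

    print(ActivityController(ActivityModel()).index())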
Abstract:
Web application performance testing is an emerging and important field of software engineering. As web applications become more commonplace and complex, the need for performance testing will only increase. This paper discusses common concepts, practices and tools that lie at the heart of web application performance testing. A pragmatic, hands-on approach is assumed where applicable; real-life examples of test tooling, execution and analysis are presented right next to the underpinning theory. At the client-side, web application performance is primarily driven by the amount of data transmitted over the wire. At the server-side, selection of programming language and platform, implementation complexity and configuration are the primary contributors to web application performance. Web application performance testing is an activity that requires delicate coordination between project stakeholders, developers, system administrators and testers in order to produce reliable and useful results. Proper test definition, execution, reporting and repeatable test results are of utmost importance. Open-source performance analysis tools such as Apache JMeter, Firebug and YSlow can be used to realise effective web application performance tests. A sample case study using these tools is presented in this paper. The sample application was found to perform poorly even under the moderate load incurred by the sample tests.
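In the paper's hands-on spirit, the following minimal sketch shows the essence of what tools like Apache JMeter automate: fire concurrent requests at a page and report latency percentiles. The target URL, user count and request count are placeholder assumptions.

    import statistics
    import time
    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    URL, USERS, REQUESTS = "http://localhost:8080/", 10, 100

    def timed_request(_):
        start = time.perf_counter()
        with urlopen(URL) as resp:
            resp.read()  # client-side cost scales with bytes on the wire
        return time.perf_counter() - start

    with ThreadPoolExecutor(max_workers=USERS) as pool:
        latencies = sorted(pool.map(timed_request, range(REQUESTS)))

    print(f"median {statistics.median(latencies) * 1000:.1f} ms, "
          f"p95 {latencies[int(0.95 * len(latencies))] * 1000:.1f} ms")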
Abstract:
The research focus of this thesis is to explore options for building systems for business critical web applications. Business criticality here includes requirements for data protection and system availability. The focus is on open source software. The goals are to identify robust technologies and engineering practices for implementing such systems. The research methods include experiments with sample systems built around chosen software packages representing particular technologies. The main research focused on finding a good method for database data replication, a key functionality for high-availability, database-driven web applications. The research also surveyed engineering best practices from books written by administrators of high-traffic web applications. The database replication experiment showed that the block-level synchronous replication offered by the DRBD replication software provides considerably more robust data protection and high-availability functionality than the leading open source database product MySQL and its built-in asynchronous replication. For master-master database setups, block-level replication is the recommended way to build high availability into the system. Based on the thesis research, building high-availability web applications is possible using a combination of open source software and engineering best practices for data protection, availability planning and scaling.
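The data-protection difference found in the experiment can be illustrated with a toy model (an illustration only, not DRBD's or MySQL's actual protocol): a synchronous write is acknowledged only after the replica holds the data, while an asynchronous write is acknowledged immediately and can be lost together with the primary.

    def write_sync(primary_log, replica_log, record):
        replica_log.append(record)   # replicate first...
        primary_log.append(record)   # ...then acknowledge to the client
        return "ack"

    def write_async(primary_log, replica_log, record, network_up=True):
        primary_log.append(record)   # acknowledge immediately
        if network_up:               # replication happens later and may fail
            replica_log.append(record)
        return "ack"

    primary, replica = [], []
    write_async(primary, replica, "order-42", network_up=False)
    # The client saw "ack", but if the primary now dies the record is gone:
    print("replica has order-42:", "order-42" in replica)  # False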
Abstract:
Customer specific functionalities are a challenging part of procurement and invoice automation environments. In the Basware Enterprise Purchase to Payment product family, customer specific reports are supported only at a basic level, without a seamless interface across all EPP products. Other customer specific functionalities are not supported either, as there is no customizable interface between the applications and only the most common features are implemented in the products themselves. This thesis lays the foundations for a new web-based value-added module in which seamless customer specific functionalities can be created throughout the whole EPP product family. The work is implemented as a Proof of Concept pilot. The system is created in a user-centered way, in which the users can explain their requests and determine their needs. The result is an excellent foundation for a module that can be developed further.
Abstract:
App Engine takes its name from the English terms application and engine. It is a commercial service implemented by Google, Inc. that follows the principles of cloud computing and enables customers to carry out their own application development. A service of one's own design can be programmed on the system and made available over the Internet, either privately or publicly. It is thus a distributed server system that provides an application platform adapting dynamically to load, in which the customer does not rent virtual machines. The storage capacity offered by the system is likewise available flexibly. The bachelor's thesis itself looks in more detail at implementing an application on the service, at its restrictions and at its suitability. It opens by reviewing the cloud concept, of which many computer users have an unclear picture. Such systems can be built in a great many ways, of which we restrict our treatment to feasible, common solutions.
Abstract:
INTRODUCTION: Web-based e-learning is a teaching tool increasingly used in many medical schools and specialist fields, including ophthalmology. AIMS: This pilot study aimed to develop internet-based clinical cases for course use and to evaluate the effectiveness of this method within a graduate medical education group. METHODS: This was an interventional randomized study. First, a website was built using a distance learning platform. Sixteen first-year ophthalmology residents were then divided into two randomized groups: one experimental group, which received the intervention (use of the e-learning site), and a control group, which did not. The students answered a printed clinical case and their scores were compared. RESULTS: There was no statistically significant difference between the groups. CONCLUSION: We were able to successfully develop the e-learning site and the respective clinical cases. Although there was no statistically significant difference between the access and non-access groups, the study was a pioneer in our department, since an online clinical case program had never previously been developed there.
Abstract:
In a web service, the site's performance makes up a large part of how pleasant the user experience is. When the site is under heavier load than usual, it may respond more slowly than normal. By spreading the load of a single web server across other servers, using either hardware- or software-based load balancing, significant performance improvements can be achieved for the service as a whole. The theory part examines the operation of algorithms suitable for load balancing, as well as database replication, which passes the data it receives on to another server almost immediately. The theory part also reviews Apache's load balancing module and the algorithms it contains. In the practical part, a working server system was built from an Apache load balancing server and two backend servers. Sticky sessions were enabled on the load balancer, and their behaviour was examined using the Drupal application. Apache has had problems with sticky sessions, but in this work everything was found to work fine. Load balancing and database replication worked as expected.
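A minimal sketch of the two mechanisms combined in the experiment, round-robin balancing and sticky sessions, is given below (illustrative Python; the thesis itself relies on Apache's mod_proxy_balancer rather than hand-written code):

    import itertools

    class StickyRoundRobin:
        def __init__(self, backends):
            self._cycle = itertools.cycle(backends)
            self._sessions = {}

        def pick(self, session_id):
            # Returning sessions go back to "their" backend; new sessions
            # take the next backend in round-robin order.
            if session_id not in self._sessions:
                self._sessions[session_id] = next(self._cycle)
            return self._sessions[session_id]

    lb = StickyRoundRobin(["app1:8080", "app2:8080"])
    print(lb.pick("alice"), lb.pick("bob"), lb.pick("alice"))
    # app1:8080 app2:8080 app1:8080 -- alice stays on her backend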