917 results for Web page


Relevance: 20.00%

Publisher:

Abstract:

Urquhart, C., Spink, S., Thomas, R., Yeoman, A., Durbin, J., Turner, J., Fenton, R. & Armstrong, C. (2004). JUSTEIS: JISC Usage Surveys: Trends in Electronic Information Services Final report 2003/2004 Cycle Five. Aberystwyth: Department of Information Studies, University of Wales Aberystwyth. Sponsorship: JISC

Relevance: 20.00%

Publisher:

Abstract:

To be presented at the SIG/ISMB07 ontology workshop (http://bio-ontologies.org.uk/index.php); to be published in BMC Bioinformatics. Sponsorship: JISC

Relevance: 20.00%

Publisher:

Abstract:

R. Jensen and Q. Shen, 'Fuzzy-Rough Attribute Reduction with Application to Web Categorization,' Fuzzy Sets and Systems, vol. 141, no. 3, pp. 469-485, 2004.

Relevance: 20.00%

Publisher:

Abstract:

The poster presents the methods of communication with readers used at Poznań University Library (Biblioteka Uniwersytecka w Poznaniu) based on digital media technologies. Digital communication tools have become very helpful, almost indispensable, for attracting new readers and for maintaining and developing cooperation within the Web 2.0 community, both the global one and the local academic one. The library website, communicatively static, is supported by discussion forums, chats, videoconferences and information workshops conducted in real time. The creative power of social relations with the library has been developed by interactive social networking services (Facebook) and instant messengers integrated on the Ask a Librarian platform. The library has become a Library 2.0 oriented towards communication with readers. Active participation of readers in creating scholarly resources has been implemented in our institutional repository project, the Adam Mickiewicz Repository (AMUR). The library is changing for its readers and with its readers. The platforms and social networking services in use provide unique data on the new information needs and expectations of the target Patron 2.0, which results in improving existing services and creating new ones. The library monitors its services and readers' needs through social research. Digital technologies used in communication make the library closer and more accessible, so that it ultimately becomes a partner for regular and new readers alike. Poznań University Library takes part in European programmes on cataloguing and digitizing the collections of the WBC digital library, on implementing new technologies and solutions that raise the quality of library services, and in cultural activities (Poznańska Dyskusyjna Akademia Komiksu, deBiUty) and information literacy education. The library is a member of international organizations: LIBER (Association of European Research Libraries), IAML (International Association of Music Libraries, Archives and Documentation Centres) and CERL (Consortium of European Research Libraries).

Relevance: 20.00%

Publisher:

Abstract:

http://www.archive.org/details/missiontalesday00forbrich

Relevance: 20.00%

Publisher:

Abstract:

We present a type system, StaXML, which employs the stacked type syntax to represent essential aspects of the potential roles of XML fragments in the structure of complete XML documents. The simplest application of this system is to enforce well-formedness upon the construction of XML documents without requiring the use of templates or balanced "gap plugging" operators; this allows it to be applied to programs written according to common imperative web scripting idioms, particularly the echoing of unbalanced XML fragments to an output buffer. The system can be extended to verify particular XML applications such as XHTML and to identify individual XML tags constructed from their lexical components. We also present StaXML for PHP, a prototype precompiler for the PHP4 scripting language which infers StaXML types for expressions without assistance from the programmer.
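
The core idea behind such checking, tracking the net effect a fragment has on a stack of open tags so that unbalanced fragments can be composed and verified, can be sketched in a few lines. The following Python sketch is not StaXML itself (which is a static type system for PHP4); the tag matching is deliberately naive, the checking is dynamic rather than static, and all function names are invented for illustration.

# Minimal sketch (not the actual StaXML system): approximate a fragment's
# "stacked type" as its net effect on a stack of open tags, so that fragments
# echoed by imperative scripts can be composed and checked for well-formedness.
import re

TAG = re.compile(r"<(/?)([A-Za-z][\w:-]*)[^>]*?(/?)>")

def fragment_effect(fragment):
    """Return (closes, opens): tags the fragment expects to find open on entry,
    and tags it leaves open on exit."""
    closes, opens = [], []
    for m in TAG.finditer(fragment):
        is_close, name, self_close = m.group(1), m.group(2), m.group(3)
        if self_close:
            continue
        if is_close:
            if opens and opens[-1] == name:
                opens.pop()          # closes a tag opened in this fragment
            else:
                closes.append(name)  # closes a tag opened by an earlier fragment
        else:
            opens.append(name)
    return closes, opens

def compose(effect_a, effect_b):
    """Effect of echoing fragment A followed by fragment B."""
    closes_a, opens_a = effect_a
    closes_b, opens_b = effect_b
    closes, opens = list(closes_a), list(opens_a)
    for name in closes_b:
        if opens and opens[-1] == name:
            opens.pop()
        else:
            closes.append(name)
    return closes, opens + opens_b

# A document built from fragments is well formed iff the composed effect is ([], []).
header = fragment_effect("<html><body>")
footer = fragment_effect("</body></html>")
print(compose(header, footer))   # ([], []) -> balanced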

Relevance: 20.00%

Publisher:

Abstract:

We analyzed the logs of our departmental HTTP server http://cs-www.bu.edu as well as the logs of the more popular Rolling Stones HTTP server http://www.stones.com. These servers have very different purposes; the former caters primarily to local clients, whereas the latter caters exclusively to remote clients all over the world. In both cases, our analysis showed that remote HTTP accesses were confined to a very small subset of documents. Using a validated analytical model of server popularity and file access profiles, we show that by disseminating the most popular documents on servers (proxies) closer to the clients, network traffic could be reduced considerably, while server loads are balanced. We argue that this process could be generalized so as to provide for an automated demand-based duplication of documents. We believe that such server-based information dissemination protocols will be more effective at reducing both network bandwidth and document retrieval times than client-based caching protocols [2].
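
As an illustration of the counting step behind such an analysis (not the authors' validated analytical model), the following Python sketch tallies per-document request counts from an access log, here assumed to be in Common Log Format, and reports how much of the traffic the most popular documents cover.

# Rough sketch of the counting step: requests per document from a Common Log
# Format access log (an assumption about the log format), plus the share of
# traffic covered by the most popular documents.
from collections import Counter

def popularity(log_path, top_k=10):
    counts = Counter()
    with open(log_path) as log:
        for line in log:
            parts = line.split('"')
            if len(parts) < 2:
                continue
            request = parts[1].split()        # e.g. ['GET', '/index.html', 'HTTP/1.0']
            if len(request) >= 2:
                counts[request[1]] += 1
    total = sum(counts.values())
    top = counts.most_common(top_k)
    covered = sum(c for _, c in top)
    return top, covered / total if total else 0.0

# Example: if the top 10 documents cover most remote requests, replicating just
# those documents on proxies closer to the clients removes most wide-area traffic.
# top, share = popularity("access_log")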

Relevance: 20.00%

Publisher:

Abstract:

This report describes our attempt to add animation as another data type to be used on the World Wide Web. Our current network infrastructure, the Internet, is incapable of carrying the video and audio streams that would be needed to use them on the Web for presentation purposes. In contrast, object-oriented animation proves to be efficient in terms of network resource requirements. We defined an animation model that supports drawing-based and frame-based animation, and we extended the HyperText Markup Language to include this animation model. BU-NCSA Mosanim, a modified version of NCSA Mosaic for X (v2.5), is available to demonstrate the concept and potential of animation in presentations and interactive game playing over the Web.
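
Since the report's animation model is not reproduced in this abstract, the following is only a hypothetical Python sketch of a frame-based, drawing-command animation type; all class and field names are invented. It illustrates why command-based animation is cheap on the network: a frame is a handful of drawing commands rather than a grid of pixels.

# Hypothetical frame-based animation type (names invented, not the Mosanim model):
# the server ships compact drawing commands per frame instead of pixel data.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class DrawCommand:
    shape: str                      # e.g. "line", "circle", "text"
    params: Tuple[int, ...]         # coordinates, radius, etc.

@dataclass
class Frame:
    duration_ms: int
    commands: List[DrawCommand] = field(default_factory=list)

@dataclass
class Animation:
    width: int
    height: int
    frames: List[Frame] = field(default_factory=list)

    def byte_estimate(self):
        # A handful of bytes per command, versus width*height*3 per raw video frame.
        return sum(16 * len(f.commands) for f in self.frames)

bounce = Animation(320, 240, [
    Frame(40, [DrawCommand("circle", (160, 40 + 10 * i, 8))]) for i in range(20)
])
print(bounce.byte_estimate(), "bytes vs", 320 * 240 * 3 * 20, "for raw video frames")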

Relevance: 20.00%

Publisher:

Abstract:

Recently the notion of self-similarity has been shown to apply to wide-area and local-area network traffic. In this paper we examine the mechanisms that give rise to self-similar network traffic. We present an explanation for traffic self-similarity by using a particular subset of wide area traffic: traffic due to the World Wide Web (WWW). Using an extensive set of traces of actual user executions of NCSA Mosaic, reflecting over half a million requests for WWW documents, we show evidence that WWW traffic is self-similar. Then we show that the self-similarity in such traffic can be explained based on the underlying distributions of WWW document sizes, the effects of caching and user preference in file transfer, the effect of user "think time", and the superimposition of many such transfers in a local area network. To do this we rely on empirically measured distributions both from our traces and from data independently collected at over thirty WWW sites.
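
One standard way to check a trace for self-similarity, in the spirit of the analysis described above though not necessarily the paper's exact methodology, is the variance-time method: aggregate the per-interval arrival counts at increasing block sizes and fit the decay of the variance, which yields an estimate of the Hurst parameter H.

# Simplified variance-time estimator: for a self-similar series the variance of
# the m-aggregated means decays like m^(2H-2), so the slope s of
# log(variance) versus log(m) gives H = 1 + s/2.
import numpy as np

def hurst_variance_time(counts, block_sizes=(1, 2, 4, 8, 16, 32, 64, 128)):
    counts = np.asarray(counts, dtype=float)
    sizes, variances = [], []
    for m in block_sizes:
        n_blocks = len(counts) // m
        if n_blocks < 2:
            break
        blocks = counts[:n_blocks * m].reshape(n_blocks, m).mean(axis=1)
        sizes.append(m)
        variances.append(blocks.var())
    slope, _ = np.polyfit(np.log(sizes), np.log(variances), 1)
    return 1.0 + slope / 2.0

# Example: an independent Poisson series gives H near 0.5, whereas the WWW
# traces analyzed in the paper exhibit H well above 0.5.
poisson_counts = np.random.default_rng(0).poisson(10, 100_000)
print(round(hurst_variance_time(poisson_counts), 2))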

Relevance: 20.00%

Publisher:

Abstract:

We propose the development of a World Wide Web image search engine that crawls the web collecting information about the images it finds, computes the appropriate image decompositions and indices, and stores this extracted information for searches based on image content. Indexing and searching images need not require solving the image understanding problem. Instead, the general approach should be to provide an arsenal of image decompositions and discriminants that can be precomputed for images. At search time, users can select a weighted subset of these decompositions to be used for computing image similarity measurements. While this approach avoids the search-time-dependent problem of labeling what is important in images, it still leaves several important open problems that require further research in the area of query by image content. We briefly explore some of these problems as they pertain to shape.
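
The weighted-combination step can be sketched as follows; the feature names and the per-feature distance used here are illustrative assumptions, not the proposed system's actual decompositions.

# Sketch of weighted similarity over precomputed decompositions: each image has
# several feature vectors, and the user supplies a weight per feature at search time.
import numpy as np

def rank_images(index, query_features, weights):
    """index: {image_id: {feature_name: vector}}; returns ids sorted by distance."""
    scores = {}
    for image_id, features in index.items():
        d = 0.0
        for name, w in weights.items():
            if w == 0.0:
                continue
            d += w * np.linalg.norm(features[name] - query_features[name])
        scores[image_id] = d
    return sorted(scores, key=scores.get)

index = {
    "img1": {"color_hist": np.array([0.7, 0.2, 0.1]), "edge_orient": np.array([0.1, 0.9])},
    "img2": {"color_hist": np.array([0.1, 0.1, 0.8]), "edge_orient": np.array([0.8, 0.2])},
}
query = {"color_hist": np.array([0.6, 0.3, 0.1]), "edge_orient": np.array([0.2, 0.8])}
print(rank_images(index, query, {"color_hist": 0.5, "edge_orient": 0.5}))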

Relevance: 20.00%

Publisher:

Abstract:

The exploding demand for services like the World Wide Web reflects the potential that is presented by globally distributed information systems. The number of WWW servers world-wide has doubled every 3 to 5 months since 1993, outstripping even the growth of the Internet. At each of these self-managed sites, the Common Gateway Interface (CGI) and Hypertext Transfer Protocol (HTTP) already constitute a rudimentary basis for contributing local resources to remote collaborations. However, the Web has serious deficiencies that make it unsuited for use as a true medium for metacomputing --- the process of bringing hardware, software, and expertise from many geographically dispersed sources to bear on large scale problems. These deficiencies are, paradoxically, the direct result of the very simple design principles that enabled its exponential growth. There are many symptoms of the problems exhibited by the Web: disk and network resources are consumed extravagantly; information search and discovery are difficult; protocols are aimed at data movement rather than task migration, and ignore the potential for distributing computation. However, all of these can be seen as aspects of a single problem: as a distributed system for metacomputing, the Web offers unpredictable performance and unreliable results. The goal of our project is to use the Web as a medium (within either the global Internet or an enterprise intranet) for metacomputing in a reliable way with performance guarantees. We attack this problem at four levels:

(1) Resource Management Services: Globally distributed computing allows novel approaches to the old problems of performance guarantees and reliability. Our first set of ideas involves setting up a family of real-time resource management models organized by the Web Computing Framework with a standard Resource Management Interface (RMI), a Resource Registry, a Task Registry, and resource management protocols that allow resource needs and availability information to be collected and disseminated, so that a family of algorithms with varying computational precision and accuracy of representations can be chosen to meet real-time and reliability constraints.

(2) Middleware Services: Complementary to techniques for allocating and scheduling available resources to serve application needs under real-time and reliability constraints, the second set of ideas aims at reducing communication latency, traffic congestion, server workload, and so on. We develop customizable middleware services to exploit application characteristics in traffic analysis to drive new server/browser design strategies (e.g., exploit self-similarity of Web traffic), derive document access patterns via multiserver cooperation, and use them in speculative prefetching, document caching, and aggressive replication to reduce server load and bandwidth requirements.

(3) Communication Infrastructure: To achieve any guarantee of quality of service or performance, one must get at the network layer that can provide the basic guarantees of bandwidth, latency, and reliability. Therefore, the third area is a set of new techniques in network service and protocol designs.

(4) Object-Oriented Web Computing Framework: A useful resource management system must deal with job priority, fault-tolerance, quality of service, complex resources such as ATM channels, probabilistic models, etc., and models must be tailored to represent the best tradeoff for a particular setting. This requires a family of models, organized within an object-oriented framework, because no one-size-fits-all approach is appropriate. This presents a software engineering challenge requiring integration of solutions at all levels: algorithms, models, protocols, and profiling and monitoring tools. The framework captures the abstract class interfaces of the collection of cooperating components, but allows the concretization of each component to be driven by the requirements of a specific approach and environment.
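
A minimal sketch of the framework idea, abstract interfaces that concrete resource-management models specialize, might look as follows; every class and method name here is invented for illustration, since the abstract does not specify the Resource Management Interface itself.

# Hypothetical sketch of "a family of models in an object-oriented framework":
# an abstract resource-management interface that concrete models (real-time,
# probabilistic, fault-tolerant, ...) implement. Names are invented.
from __future__ import annotations
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    capacity: float        # e.g. CPU share, bandwidth in Mb/s
    available: float

@dataclass
class Task:
    name: str
    demand: float
    deadline_s: float

class ResourceManager(ABC):
    """Abstract interface: concrete subclasses trade off precision and accuracy
    to meet real-time and reliability constraints."""

    @abstractmethod
    def register(self, resource: Resource) -> None: ...

    @abstractmethod
    def schedule(self, task: Task) -> Resource | None:
        """Return a resource for the task, or None if none is suitable."""

class GreedyManager(ResourceManager):
    # Toy policy: checks only demand and ignores the deadline.
    def __init__(self):
        self.registry: list[Resource] = []

    def register(self, resource: Resource) -> None:
        self.registry.append(resource)

    def schedule(self, task: Task) -> Resource | None:
        candidates = [r for r in self.registry if r.available >= task.demand]
        if not candidates:
            return None
        best = max(candidates, key=lambda r: r.available)
        best.available -= task.demand
        return best

manager = GreedyManager()
manager.register(Resource("proxy-1", capacity=10.0, available=6.0))
print(manager.schedule(Task("render", demand=4.0, deadline_s=0.5)))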

Relevance: 20.00%

Publisher:

Abstract:

Server performance has become a crucial issue for improving the overall performance of the World-Wide Web. This paper describes Webmonitor, a tool for evaluating and understanding server performance, and presents new results for a realistic workload. Webmonitor measures activity and resource consumption, both within the kernel and in HTTP processes running in user space. Webmonitor is implemented using an efficient combination of sampling and event-driven techniques that exhibit low overhead. Our initial implementation is for the Apache World-Wide Web server running on the Linux operating system. We demonstrate the utility of Webmonitor by measuring and understanding the performance of a Pentium-based PC acting as a dedicated WWW server. Our workload uses a file size distribution with a heavy tail. This captures the fact that Web servers must concurrently handle some requests for large audio and video files, and a large number of requests for small documents, containing text or images. Our results show that in a Web server saturated by client requests, over 90% of the time spent handling HTTP requests is spent in the kernel. Furthermore, keeping TCP connections open, as required by TCP, causes a factor of 2-9 increase in the elapsed time required to service an HTTP request. Data gathered from Webmonitor provide insight into the causes of this performance penalty. Specifically, we observe a significant increase in resource consumption along three dimensions: the number of HTTP processes running at the same time, CPU utilization, and memory utilization. These results emphasize the important role of operating system and network protocol implementation in determining Web server performance.
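
A heavy-tailed workload of the kind described can be sketched by drawing file sizes from a Pareto distribution; the shape parameter and minimum size below are illustrative assumptions, not the measured values from the paper.

# Sketch of a heavy-tailed file size distribution: most requests are small
# documents, while a few audio/video-sized files carry much of the byte traffic.
import numpy as np

rng = np.random.default_rng(0)
alpha, x_min = 1.2, 1_000                      # illustrative shape and minimum size (bytes)
sizes = x_min * (1.0 + rng.pareto(alpha, size=100_000))

sizes.sort()
top_1_percent = sizes[int(0.99 * len(sizes)):]
print(f"median size: {np.median(sizes):,.0f} B")
print(f"largest 1% of files carry {top_1_percent.sum() / sizes.sum():.0%} of all bytes")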

Relevance: 20.00%

Publisher:

Abstract:

ImageRover is a search-by-image-content navigation tool for the World Wide Web. To gather images expediently, the image collection subsystem utilizes a distributed fleet of WWW robots running on different computers. The image robots gather information about the images they find, compute the appropriate image decompositions and indices, and store this extracted information in vector form for searches based on image content. At search time, users can iteratively guide the search through the selection of relevant examples. Search performance is made efficient through the use of an approximate, optimized k-d tree algorithm. The system employs a novel relevance feedback algorithm that selects the distance metrics appropriate for a particular query.
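
The two mechanisms mentioned, approximate k-d tree search and relevance feedback, can be sketched with generic tools; this is not ImageRover's own algorithm, and the re-weighting rule below is a crude stand-in for its metric selection.

# Sketch: approximate k-d tree query (scipy's eps parameter) plus a simple
# relevance-feedback step that re-weights feature dimensions by how consistent
# they are across the user's relevant examples.
import numpy as np
from scipy.spatial import cKDTree

features = np.random.default_rng(1).random((10_000, 16))   # precomputed image vectors
tree = cKDTree(features)

def search(query_vec, k=20, eps=0.5):
    # eps > 0 permits approximate neighbors for faster queries
    _, idx = tree.query(query_vec, k=k, eps=eps)
    return idx

def reweight(relevant_vecs):
    # Dimensions with low variance among relevant examples get higher weight.
    var = np.var(relevant_vecs, axis=0) + 1e-9
    w = 1.0 / var
    return w / w.sum()

hits = search(features[0])
weights = reweight(features[hits[:5]])        # user marks the first few hits as relevant
# A follow-up query could apply these weights in a weighted distance, e.g. by
# scaling the feature axes before rebuilding the tree.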