798 results for web application, semantic web, semantic publishing, AngularJS, user experience, usability


Relevance: 50.00%

Abstract:

Server performance has become a crucial issue for improving the overall performance of the World-Wide Web. This paper describes Webmonitor, a tool for evaluating and understanding server performance, and presents new results for a realistic workload. Webmonitor measures activity and resource consumption, both within the kernel and in HTTP processes running in user space. Webmonitor is implemented using an efficient combination of sampling and event-driven techniques that exhibit low overhead. Our initial implementation is for the Apache World-Wide Web server running on the Linux operating system. We demonstrate the utility of Webmonitor by measuring and understanding the performance of a Pentium-based PC acting as a dedicated WWW server. Our workload uses a file size distribution with a heavy tail. This captures the fact that Web servers must concurrently handle some requests for large audio and video files and a large number of requests for small documents containing text or images. Our results show that in a Web server saturated by client requests, over 90% of the time spent handling HTTP requests is spent in the kernel. Furthermore, keeping TCP connections open, as required by HTTP, causes a factor of 2-9 increase in the elapsed time required to service an HTTP request. Data gathered from Webmonitor provide insight into the causes of this performance penalty. Specifically, we observe a significant increase in resource consumption along three dimensions: the number of HTTP processes running at the same time, CPU utilization, and memory utilization. These results emphasize the important role of operating system and network protocol implementation in determining Web server performance.
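As a concrete illustration of the kind of kernel versus user-space accounting described in this abstract, the sketch below samples per-process CPU counters from /proc on Linux. It is a minimal sketch under stated assumptions, not Webmonitor itself: the caller is assumed to supply the PIDs of the HTTP server processes, and the /proc field layout is the standard Linux one.

```python
# A minimal sketch, not Webmonitor: estimate how much of the CPU time of a
# set of HTTP server processes is spent in the kernel, by sampling the
# utime/stime counters in /proc/<pid>/stat on Linux.
def cpu_split(pid: int) -> tuple[int, int]:
    """Return (utime, stime) jiffies for one process from /proc/<pid>/stat."""
    with open(f"/proc/{pid}/stat") as f:
        fields = f.read().split()   # naive split; fine for typical comm names
    return int(fields[13]), int(fields[14])   # fields 14/15: utime, stime

def kernel_fraction(pids: list[int]) -> float:
    """Fraction of total CPU time the given processes spent in the kernel."""
    utime = stime = 0
    for pid in pids:
        u, s = cpu_split(pid)
        utime, stime = utime + u, stime + s
    return stime / (utime + stime) if utime + stime else 0.0
```

Repeatedly sampling such counters, combined with event counts from the server itself, gives the kind of low-overhead sampling-plus-event measurement the abstract describes.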

Relevance: 50.00%

Abstract:

Web caching aims to reduce network traffic, server load, and user-perceived retrieval delays by replicating "popular" content on proxy caches that are strategically placed within the network. While key to effective cache utilization, popularity information (e.g. relative access frequencies of objects requested through a proxy) is seldom incorporated directly in cache replacement algorithms. Rather, other properties of the request stream (e.g. temporal locality and content size), which are easier to capture in an on-line fashion, are used to indirectly infer popularity information, and hence drive cache replacement policies. Recent studies suggest that the correlation between these secondary properties and popularity is weakening due in part to the prevalence of efficient client and proxy caches (which tend to mask these correlations). This trend points to the need for proxy cache replacement algorithms that directly capture and use popularity information. In this paper, we (1) present an on-line algorithm that effectively captures and maintains an accurate popularity profile of Web objects requested through a caching proxy, (2) propose a novel cache replacement policy that uses such information to generalize the well-known GreedyDual-Size algorithm, and (3) show the superiority of our proposed algorithm by comparing it to a host of recently-proposed and widely-used algorithms using extensive trace-driven simulations and a variety of performance metrics.
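To make the cache-policy discussion concrete, here is a minimal sketch of a popularity-weighted GreedyDual-Size variant in which an object's priority is H = L + frequency * cost/size, with L the usual inflation value. This illustrates the idea of folding popularity directly into GreedyDual-Size; it is not the authors' exact algorithm.

```python
# A minimal sketch of a popularity-weighted GreedyDual-Size cache.
# Priority H = L + freq * cost / size; L is an inflation value raised to the
# priority of each evicted object so that stale entries age out.
import heapq

class PopularityGDS:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.used = 0
        self.L = 0.0              # inflation "clock"
        self.freq = {}            # key -> access count (kept past eviction)
        self.entries = {}         # key -> (priority, size)
        self.heap = []            # (priority, key); may hold stale pairs

    def access(self, key: str, size: int, cost: float = 1.0) -> None:
        self.freq[key] = self.freq.get(key, 0) + 1
        if key in self.entries:                    # re-reference: re-insert
            self.used -= self.entries.pop(key)[1]
        # Evict lowest-priority objects until the new one fits.
        while self.used + size > self.capacity and self.heap:
            prio, victim = heapq.heappop(self.heap)
            if victim in self.entries and self.entries[victim][0] == prio:
                self.L = prio                      # raise the clock
                self.used -= self.entries.pop(victim)[1]
        prio = self.L + self.freq[key] * cost / size
        self.entries[key] = (prio, size)
        self.used += size
        heapq.heappush(self.heap, (prio, key))
```

Because the frequency table survives eviction, the policy retains a popularity profile of the request stream rather than inferring popularity indirectly from recency or size.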

Relevance: 50.00%

Abstract:

BACKGROUND: Web-based decision aids are increasingly important in medical research and clinical care. However, few have been studied in an intensive care unit setting. The objectives of this study were to develop a Web-based decision aid for family members of patients receiving prolonged mechanical ventilation and to evaluate its usability and acceptability. METHODS: Using an iterative process involving 48 critical illness survivors, family surrogate decision makers, and intensivists, we developed a Web-based decision aid addressing goals-of-care preferences for surrogate decision makers of patients with prolonged mechanical ventilation that could be either administered by study staff or completed independently by family members (Development Phase). After piloting the decision aid among 13 surrogate decision makers and seven intensivists, we assessed the decision aid's usability in the Evaluation Phase among a cohort of 30 surrogate decision makers using the System Usability Scale (SUS). Acceptability was assessed using measures of satisfaction and preference for electronic Collaborative Decision Support (eCODES) versus the original printed decision aid. RESULTS: The final decision aid, termed 'electronic Collaborative Decision Support', provides a framework for shared decision making, elicits relevant values and preferences, incorporates clinical data to personalize prognostic estimates generated from the ProVent prediction model, generates a printable document summarizing the user's interaction with the decision aid, and can digitally archive each user session. Usability was excellent (mean SUS, 80 ± 10) overall, but lower among those 56 years and older (73 ± 7) than among those who were younger (84 ± 9); p = 0.03. A total of 93% of users reported a preference for the electronic versus the printed version. CONCLUSIONS: The Web-based decision aid for ICU surrogate decision makers can facilitate highly individualized information sharing with excellent usability and acceptability. Decision aids that employ an electronic format such as eCODES represent a strategy that could enhance patient-clinician collaboration and decision-making quality in intensive care.
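For reference, the usability figures above come from the standard System Usability Scale scoring rule: ten items rated 1-5, odd items contributing (r - 1), even items contributing (5 - r), with the sum scaled by 2.5 to give a 0-100 score. A minimal sketch of that conventional formula (nothing here is specific to eCODES):

```python
# Standard SUS scoring: ten 1-5 responses -> 0-100 score.
def sus_score(responses: list[int]) -> float:
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    total = sum(r - 1 if i % 2 == 0 else 5 - r   # odd items: r-1; even: 5-r
                for i, r in enumerate(responses))
    return total * 2.5

print(sus_score([5, 2, 4, 1, 5, 2, 4, 2, 5, 1]))   # -> 87.5
```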

Relevance: 50.00%

Abstract:

BACKGROUND: Scientists rarely reuse expert knowledge of phylogeny, in spite of years of effort to assemble a great "Tree of Life" (ToL). A notable exception involves the use of Phylomatic, which provides tools to generate custom phylogenies from a large, pre-computed, expert phylogeny of plant taxa. This suggests great potential for a more generalized system that, starting with a query consisting of a list of any known species, would rectify non-standard names, identify expert phylogenies containing the implicated taxa, prune away unneeded parts, and supply branch lengths and annotations, resulting in a custom phylogeny suited to the user's needs. Such a system could become a sustainable community resource if implemented as a distributed system of loosely coupled parts that interact through clearly defined interfaces. RESULTS: With the aim of building such a "phylotastic" system, the NESCent Hackathons, Interoperability, Phylogenies (HIP) working group recruited two dozen scientist-programmers to a week-long programming hackathon in June 2012. During the hackathon (and a three-month follow-up period), five teams produced designs, implementations, documentation, presentations, and tests, including: (1) a generalized scheme for integrating components; (2) proof-of-concept pruners and controllers; (3) a meta-API for taxonomic name resolution services; (4) a system for storing, finding, and retrieving phylogenies using semantic web technologies for data exchange, storage, and querying; (5) an innovative new service, DateLife.org, which synthesizes pre-computed, time-calibrated phylogenies to assign ages to nodes; and (6) demonstration projects. These outcomes are accessible via a public code repository (GitHub.com), a website (http://www.phylotastic.org), and a server image. CONCLUSIONS: Approximately 9 person-months of effort (centered on a software development hackathon) resulted in the design and implementation of proof-of-concept software for 4 core phylotastic components, 3 controllers, and 3 end-user demonstration tools. While these products have substantial limitations, they suggest considerable potential for a distributed system that makes phylogenetic knowledge readily accessible in computable form. Widespread use of phylotastic systems will create an electronic marketplace for sharing phylogenetic knowledge that will spur innovation in other areas of the ToL enterprise, such as annotation of sources and methods and third-party methods of quality assessment.
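The "prune away unneeded parts" step has a simple core: compute the subtree induced by the query taxa, suppressing internal nodes left with a single child. The sketch below illustrates this on a toy nested-tuple tree encoding; it is an illustration only, not the data model any phylotastic component actually uses.

```python
# A minimal sketch of phylogeny pruning: keep only the taxa in `keep`,
# collapsing internal nodes that are left with a single child.
# Tree encoding (illustrative): (children, name); a leaf has no children.
def prune(tree, keep: set):
    """Return the subtree induced by the taxa in `keep`, or None."""
    children, name = tree
    if not children:                       # leaf
        return tree if name in keep else None
    kept = [t for t in (prune(c, keep) for c in children) if t]
    if not kept:
        return None
    if len(kept) == 1:                     # suppress unary internal nodes
        return kept[0]
    return (kept, name)

life = ([([([], "Homo sapiens"), ([], "Pan troglodytes")], "Hominini"),
         ([], "Mus musculus")], "Euarchontoglires")
print(prune(life, {"Homo sapiens", "Mus musculus"}))
```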

Relevance: 50.00%

Abstract:

This study evaluated the effect of an online diet-tracking tool on college students’ self-efficacy regarding fruit and vegetable intake. A convenience sample of students completed online self-efficacy surveys before and after a six-week intervention in which they tracked dietary intake with an online tool. Group one (n=22 fall, n=43 spring) accessed a tracking tool without nutrition tips; group two (n=20 fall, n=33 spring) accessed the tool and weekly nutrition tips. The control group (n=36 fall, n=60 spring) had access to neither. Each semester there were significant changes in self-efficacy from pre- to post-test for men and for women when experimental groups were combined (p<0.05 for all); however, these changes were inconsistent. Qualitative data showed that participants responded well to the simplicity of the tool, the immediacy of feedback, and the customized database containing foods available on campus. Future models should improve user engagement by increasing convenience, potentially by automation.
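Pre- to post-test changes of the kind reported above are commonly assessed with a paired test on each participant's scores. A minimal sketch with made-up illustrative data (not the study's):

```python
# A paired t-test on pre/post self-efficacy scores; the data are invented
# for illustration, not taken from the study.
from scipy import stats

pre  = [3.1, 2.8, 3.5, 3.0, 2.6, 3.2, 2.9, 3.4]
post = [3.4, 3.0, 3.6, 3.5, 2.9, 3.3, 3.1, 3.8]

t, p = stats.ttest_rel(post, pre)
print(f"t = {t:.2f}, p = {p:.3f}")   # change is significant if p < 0.05
```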

Relevance: 50.00%

Abstract:

Many Web applications walk the thin line between the need for dynamic data and the need to meet user performance expectations. In environments where funds are not available to constantly upgrade hardware in line with user demand, alternative approaches need to be considered. This paper introduces a ‘data farming’ model whereby dynamic data, which is ‘grown’ in operational applications, is ‘harvested’ and ‘packaged’ for various consumer markets. Like any well-managed agricultural operation, crops are harvested according to historical and perceived demand as inferred by a self-optimising process. This approach aims to make enhanced use of available resources through better utilisation of system downtime, thereby improving application performance and increasing the availability of key business data.
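One way to read the model in code: during low-demand windows, pre-render ("harvest") the dynamic pages with the highest historical demand into a static cache ("packaging" them for consumers). In the sketch below, the render_page function and the demand-count table are illustrative assumptions, not the paper's implementation.

```python
# A minimal sketch of the data-farming idea: within a downtime window,
# pre-render the most-demanded dynamic pages into a static cache.
import time

def harvest(demand: dict, render_page, cache: dict,
            budget_s: float, top_n: int = 50) -> None:
    """Pre-render the top-N pages by observed demand within a time budget."""
    deadline = time.monotonic() + budget_s
    ranked = sorted(demand, key=demand.get, reverse=True)[:top_n]
    for page in ranked:
        if time.monotonic() >= deadline:
            break                          # stay within the downtime window
        cache[page] = render_page(page)    # 'package' the crop for consumers
```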

Relevance: 50.00%

Abstract:

The Open Service Network for Marine Environmental Data (NETMAR) project uses semantic web technologies in its pilot system, which aims to allow users to search, download and integrate satellite, in situ and model data from open ocean and coastal areas. The semantic web is an extension of the fundamental ideas of the World Wide Web, building a web of data through annotation of metadata and data with hyperlinked resources. Within the framework of the NETMAR project, an interconnected semantic web resource was developed to aid in data and web service discovery and to validate Open Geospatial Consortium Web Processing Service orchestration. A second semantic resource was developed to support interoperability of coastal web atlases across jurisdictional boundaries. This paper outlines the approach taken to producing the resource registry used within the NETMAR project and demonstrates the use of these semantic resources to support user interactions with systems. Such interconnected semantic resources increase the ability to share and disseminate data by facilitating interoperability between data providers. The formal representation of geospatial knowledge to advance geospatial interoperability is a growing research area, and tools and methods such as those outlined in this paper have the potential to support these efforts.
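Discovery against such a registry typically reduces to a SPARQL query over linked vocabularies. A minimal sketch using rdflib, where the local registry dump and the search term are hypothetical (NETMAR's actual registry layout is not reproduced here):

```python
# A minimal sketch of vocabulary-based discovery against a semantic registry.
# "registry.ttl" and the search term are hypothetical placeholders.
from rdflib import Graph

g = Graph()
g.parse("registry.ttl", format="turtle")

q = """
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
SELECT ?concept ?label WHERE {
    ?concept skos:prefLabel ?label .
    FILTER (CONTAINS(LCASE(STR(?label)), "sea surface temperature"))
}
"""
for concept, label in g.query(q):
    print(concept, label)   # candidate concepts for data/service discovery
```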

Relevance: 50.00%

Abstract:

A service is a remote computational facility which is made available for general use by means of a wide-area network. Several types of service arise in practice: stateless services, shared-state services and services with states which are customised for individual users. A service-based orchestration is a multi-threaded computation which invokes remote services in order to deliver results back to a user (publication). In this paper a means of specifying services and reasoning about the correctness of orchestrations over stateless services is presented. As web services are potentially unreliable, the termination of even finite orchestrations cannot be guaranteed. For this reason a partial-correctness powerdomain approach is proposed to capture the semantics of recursive orchestrations.
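A practical consequence of unreliable services is that an orchestration must settle for partial results rather than guaranteed termination. The sketch below illustrates that behaviour with per-service timeouts; the table of service callables is an assumption for illustration, not the paper's formalism.

```python
# A minimal sketch of a multi-threaded orchestration over possibly
# unreliable remote services: each call gets a timeout, and whatever
# answers in time is published as a partial result.
from concurrent.futures import ThreadPoolExecutor, TimeoutError

def orchestrate(services: dict, query, timeout_s: float = 2.0) -> dict:
    """Invoke every service concurrently; keep the results that arrive."""
    results = {}
    with ThreadPoolExecutor(max_workers=max(1, len(services))) as pool:
        futures = {name: pool.submit(call, query)
                   for name, call in services.items()}
        for name, fut in futures.items():
            try:
                results[name] = fut.result(timeout=timeout_s)
            except TimeoutError:
                pass            # unresponsive service: omit its result
            except Exception:
                pass            # failed service: partial correctness
    return results
```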

Relevance: 50.00%

Abstract:

A rapidly increasing number of Web databases have become accessible via their HTML form-based query interfaces. Query result pages are dynamically generated in response to user queries; they encode structured data and are displayed for human use. Query result pages usually contain other types of information in addition to query results, e.g., advertisements and navigation bars. The problem of extracting structured data from query result pages is critical for web data integration applications, such as comparison shopping and meta-search engines, and has been intensively studied; a number of approaches have been proposed. As the structures of Web pages become more and more complex, the existing approaches start to fail, and most of them do not remove irrelevant content that may affect the accuracy of data record extraction. We propose an automated approach for Web data extraction. First, it makes use of visual features and query terms to identify data sections and extracts data records in these sections. We also represent several content and visual features of visual blocks in a data section, and use them to filter out noisy blocks. Second, it measures similarity between data items in different data records based on their visual and content features, and aligns them into groups so that the data items in the same group have the same semantics. The results of our experiments with a large set of Web query result pages in different domains show that our proposed approach is highly effective.
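As an illustration of the filtering step, the sketch below scores candidate visual blocks with a few simple content and visual features and keeps those that look like data records. The features and thresholds are illustrative assumptions, not the paper's.

```python
# A minimal sketch of noise filtering on visual blocks: data records tend
# to be text-rich, reasonably sized, and to mention the query terms, while
# adverts and navigation bars usually are not. Thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class Block:
    text: str
    width: int
    height: int

def is_data_record(block: Block, query_terms: set[str]) -> bool:
    text_rich = len(block.text.split()) >= 5
    plausible_size = block.width >= 200 and block.height >= 20
    mentions_query = any(t.lower() in block.text.lower()
                         for t in query_terms)
    return text_rich and plausible_size and mentions_query
```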

Relevance: 50.00%

Abstract:

We consider the behaviour of a set of services in a stressed web environment where performance patterns may be difficult to predict. In stressed environments the performance of some providers may degrade while the performance of others, with elastic resources, may improve. The allocation of web-based providers to users (brokering) is modelled by a strategic non-cooperative angel-daemon game with risk profiles. A risk profile specifies a bound on the number of unreliable service providers within an environment without identifying the names of these providers. Risk profiles offer a means of analysing the behaviour of broker agents which allocate service providers to users. A Nash equilibrium is a fixed point of such a game in which no user can locally improve their choice of provider – thus, a Nash equilibrium is a viable solution to the provider/user allocation problem. Angel-daemon games provide a means of reasoning about stressed environments and offer the possibility of designing brokers using risk profiles and Nash equilibria.
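To make the equilibrium notion concrete, the sketch below runs best-response dynamics in a toy provider-selection game in which a user's latency grows with the load on its chosen provider; when no user can improve by switching, the allocation is a pure Nash equilibrium. This illustrates only the fixed point, not the paper's angel-daemon machinery or risk profiles.

```python
# A minimal sketch: best-response dynamics in a toy provider-selection game.
# latency(user) = base_latency[provider] * load on that provider.
def best_response_dynamics(n_users: int, base_latency: list[float],
                           max_rounds: int = 100) -> list[int]:
    choice = [0] * n_users
    for _ in range(max_rounds):
        load = [choice.count(p) for p in range(len(base_latency))]
        changed = False
        for u in range(n_users):
            def cost(p):   # latency if user u were on provider p
                extra = 0 if choice[u] == p else 1
                return base_latency[p] * (load[p] + extra)
            best = min(range(len(base_latency)), key=cost)
            if best != choice[u]:
                load[choice[u]] -= 1; load[best] += 1
                choice[u] = best; changed = True
        if not changed:
            return choice   # fixed point: no profitable deviation
    return choice

print(best_response_dynamics(5, [1.0, 1.5, 3.0]))
```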

Relevance: 50.00%

Abstract:

Web sites that rely on databases for their content are now ubiquitous. Query result pages are dynamically generated from these databases in response to user-submitted queries. Automatically extracting structured data from query result pages is a challenging problem, as the structure of the data is not explicitly represented. While humans have shown good intuition in visually understanding data records on a query result page as displayed by a web browser, no existing approach to data record extraction has made full use of this intuition. We propose a novel approach in which we make use of the common sources of evidence that humans use to understand data records on a displayed query result page. These include structural regularity, and visual and content similarity between data records displayed on a query result page. Based on these observations, we propose new techniques that can identify each data record individually, while ignoring noise items such as navigation bars and adverts. We have implemented these techniques in a software prototype, rExtractor, and tested it using two datasets. Our experimental results show that our approach achieves significantly higher accuracy than previous approaches. Furthermore, it establishes the case for the use of vision-based algorithms in the context of data extraction from web sites.
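One structural-regularity cue is that data records repeat the same tag "shape" while noise items do not. A minimal sketch over pre-extracted sibling tag sequences; the pre-processing into blocks is assumed, and rExtractor itself works on richer, vision-based evidence.

```python
# A minimal sketch of the structural-regularity cue: sibling blocks whose
# tag-sequence "shape" repeats are likely data records; one-off blocks
# (navigation bars, adverts) are not.
from collections import Counter

def likely_record_indices(blocks: list[list[str]]) -> list[int]:
    """Return indices of blocks whose tag-sequence shape repeats."""
    shapes = [tuple(b) for b in blocks]
    counts = Counter(shapes)
    return [i for i, s in enumerate(shapes) if counts[s] > 1]

blocks = [["ul", "li", "li", "li"],                  # navigation bar
          ["div", "h3", "a", "p", "span"],           # record
          ["div", "h3", "a", "p", "span"],           # record
          ["div", "img", "a"]]                       # advert
print(likely_record_indices(blocks))                 # -> [1, 2]
```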

Relevance: 50.00%

Abstract:

An orchestration is a multi-threaded computation that invokes a number of remote services. In practice, the responsiveness of a web service fluctuates with demand; during surges in activity, service responsiveness may be degraded, perhaps even to the point of failure. An uncertainty profile formalizes a user's perception of the effects of stress on an orchestration of web services; it describes a strategic situation, modelled by a zero-sum angel–daemon game. Stressed web-service scenarios are analysed, using game theory, in a realistic way, lying between over-optimism (services are entirely reliable) and over-pessimism (all services are broken). The ‘resilience’ of an uncertainty profile can be assessed using the valuation of its associated zero-sum game. In order to demonstrate the validity of the approach, we consider two measures of resilience and a number of different stress models. It is shown how (i) uncertainty profiles can be ordered by risk (as measured by game valuations) and (ii) the structural properties of risk partial orders can be analysed.
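The valuation of the associated zero-sum game can be computed by linear programming. A minimal sketch using scipy, where the payoff matrix is an illustrative stress model rather than one from the paper:

```python
# A minimal sketch: value of a zero-sum matrix game via linear programming.
# Variables are the row player's mixed strategy x and the value v; we
# maximize v subject to (A^T x)_j >= v for every column and sum(x) = 1.
import numpy as np
from scipy.optimize import linprog

def game_value(A: np.ndarray) -> float:
    n, m = A.shape
    c = np.zeros(n + 1); c[-1] = -1.0              # minimize -v
    A_ub = np.hstack([-A.T, np.ones((m, 1))])      # v - (A^T x)_j <= 0
    b_ub = np.zeros(m)
    A_eq = np.ones((1, n + 1)); A_eq[0, -1] = 0.0  # sum(x) = 1
    b_eq = np.array([1.0])
    bounds = [(0, None)] * n + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds, method="highs")
    return res.x[-1]

# Higher value = more resilient under this (illustrative) stress model.
print(game_value(np.array([[0.9, 0.4], [0.5, 0.8]])))   # -> 0.65
```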

Relevance: 50.00%

Abstract:

This dissertation describes the development of an information system for managing the academic information of postgraduate programmes (Sistema WebMaster) whose goal is to make that information accessible to users through the World Wide Web (WWW). It begins by presenting concepts considered relevant to understanding information systems in their full scope within an organization, specializing some of these concepts to the case of universities. It then reflects on Web-based information systems, contrasting the concepts of a (traditional) Web site and a Web application in terms of technological architecture and their main advantages and disadvantages, with a brief reference to the main technologies for building solutions with dynamically generated content. Finally, the WebMaster system is presented across its development stages, from requirements analysis and system design through to implementation. The requirements analysis phase was carried out through a survey of potential users in order to identify their information needs. Based on the results of this phase, the system design is presented from conceptual, navigational and user-interface perspectives, using the OOHDM (Object-Oriented Hypermedia Design Method) methodology. The implementation phase, building on the previous stages and on the technologies selected during planning, provides an interactive space for information exchange for all interested members of the academic community involved in postgraduate courses.