862 results for Web, HTML 5, JavaScript, Dart, Structured Web Programming


Relevance: 50.00%

Publisher:

Abstract:

This report describes our attempt to add animation as another data type to be used on the World Wide Web. Our current network infrastructure, the Internet, is incapable of carrying the video and audio streams that would otherwise be needed for presentation purposes on the Web. In contrast, object-oriented animation proves efficient in terms of network resource requirements. We defined an animation model that supports both drawing-based and frame-based animation, and we extended the HyperText Markup Language to include this animation model. BU-NCSA Mosanim, a modified version of NCSA Mosaic for X (v2.5), is available to demonstrate the concept and potential of animation in presentations and interactive game playing over the Web.
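The report's distinction between frame-based and drawing-based animation can be made concrete with a short sketch. The following Python is purely illustrative (the report's actual model and markup extensions are not reproduced here); all class and method names are invented for the example.

```python
# Illustrative sketch only -- not the report's actual animation model.

class FrameAnimation:
    """Frame-based: a pre-rendered sequence of images shown in order."""
    def __init__(self, frames, fps=12):
        self.frames = frames          # list of image identifiers
        self.fps = fps

    def frame_at(self, t):
        # Wrap around so the animation loops.
        return self.frames[int(t * self.fps) % len(self.frames)]


class DrawingAnimation:
    """Drawing-based: compact drawing commands replayed by the client.
    Shipping commands instead of pixels is what keeps the network
    resource requirements low."""
    def __init__(self, commands):
        self.commands = commands      # e.g. [("line", 0, 0, 10, 10), ...]

    def replay(self, canvas):
        for op, *args in self.commands:
            getattr(canvas, op)(*args)   # canvas supplies line(), circle(), ...
```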

Relevance: 50.00%

Publisher:

Abstract:

The exploding demand for services like the World Wide Web reflects the potential that is presented by globally distributed information systems. The number of WWW servers world-wide has doubled every 3 to 5 months since 1993, outstripping even the growth of the Internet. At each of these self-managed sites, the Common Gateway Interface (CGI) and Hypertext Transfer Protocol (HTTP) already constitute a rudimentary basis for contributing local resources to remote collaborations. However, the Web has serious deficiencies that make it unsuited for use as a true medium for metacomputing --- the process of bringing hardware, software, and expertise from many geographically dispersed sources to bear on large-scale problems. These deficiencies are, paradoxically, the direct result of the very simple design principles that enabled its exponential growth. There are many symptoms of the problems exhibited by the Web: disk and network resources are consumed extravagantly; information search and discovery are difficult; protocols are aimed at data movement rather than task migration, and ignore the potential for distributing computation. However, all of these can be seen as aspects of a single problem: as a distributed system for metacomputing, the Web offers unpredictable performance and unreliable results. The goal of our project is to use the Web as a medium (within either the global Internet or an enterprise intranet) for metacomputing in a reliable way with performance guarantees. We attack this problem on four levels. (1) Resource Management Services: Globally distributed computing allows novel approaches to the old problems of performance guarantees and reliability. Our first set of ideas involves setting up a family of real-time resource management models, organized by the Web Computing Framework, with a standard Resource Management Interface (RMI), a Resource Registry, a Task Registry, and resource management protocols that allow resource needs and availability information to be collected and disseminated, so that a family of algorithms with varying computational precision and accuracy of representation can be chosen to meet real-time and reliability constraints. (2) Middleware Services: Complementary to techniques for allocating and scheduling available resources to serve application needs under real-time and reliability constraints, the second set of ideas aims at reducing communication latency, traffic congestion, server workload, and so on. We develop customizable middleware services that exploit application characteristics in traffic analysis to drive new server/browser design strategies (e.g., exploiting the self-similarity of Web traffic), derive document access patterns via multi-server cooperation, and use them in speculative prefetching, document caching, and aggressive replication to reduce server load and bandwidth requirements. (3) Communication Infrastructure: To achieve any guarantee of quality of service or performance, one must reach the network layer, which provides the basic guarantees of bandwidth, latency, and reliability. The third area is therefore a set of new techniques in network service and protocol design. (4) Object-Oriented Web Computing Framework: A useful resource management system must deal with job priority, fault tolerance, quality of service, complex resources such as ATM channels, probabilistic models, and so on, and models must be tailored to represent the best tradeoff for a particular setting.
This requires a family of models, organized within an object-oriented framework, because no one-size-fits-all approach is appropriate. This presents a software engineering challenge requiring the integration of solutions at all levels: algorithms, models, protocols, and profiling and monitoring tools. The framework captures the abstract class interfaces of the collection of cooperating components, but allows the concretization of each component to be driven by the requirements of a specific approach and environment.
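As a rough illustration of level (1), a Resource Registry that collects availability information and answers constraint-based queries might look like the sketch below. Only the component names (Resource Registry, RMI) come from the abstract; the fields and methods are assumptions made for the example.

```python
# Hypothetical sketch of one piece of the Resource Management Interface;
# field and method names are illustrative assumptions.

class ResourceRegistry:
    def __init__(self):
        self._resources = {}   # resource id -> availability record

    def advertise(self, rid, capacity, latency_ms):
        # Sites disseminate availability information here.
        self._resources[rid] = {"capacity": capacity, "latency_ms": latency_ms}

    def candidates(self, min_capacity, max_latency_ms):
        # A scheduler queries the registry to pick an algorithm variant
        # that can meet its real-time and reliability constraints.
        return [rid for rid, r in self._resources.items()
                if r["capacity"] >= min_capacity
                and r["latency_ms"] <= max_latency_ms]
```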

Relevance: 50.00%

Publisher:

Abstract:

One role for workload generation is as a means for understanding how servers and networks respond to variation in load. This enables management and capacity planning based on current and projected usage. This paper applies a number of observations of Web server usage to create a realistic Web workload generation tool that mimics a set of real users accessing a server. The tool, called Surge (Scalable URL Reference Generator), generates references matching empirical measurements of 1) server file size distribution; 2) request size distribution; 3) relative file popularity; 4) embedded file references; 5) temporal locality of reference; and 6) idle periods of individual users. This paper reviews the essential elements required in the generation of a representative Web workload. It also addresses the technical challenges of satisfying this large set of simultaneous constraints on the properties of the reference stream, the solutions we adopted, and their associated accuracy. Finally, we present evidence that Surge exercises servers in a manner significantly different from other Web server benchmarks.
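To make "matching empirical measurements" concrete, a minimal sketch of Surge-style reference generation follows. The distribution families used here (Zipf-like popularity, heavy-tailed sizes and idle periods) are typical choices for such workloads; the parameters are illustrative, not the paper's fitted values.

```python
import random

def zipf_rank(n_files, alpha=1.0):
    # Relative file popularity: rank i chosen with probability ~ 1/i^alpha.
    weights = [1.0 / (i ** alpha) for i in range(1, n_files + 1)]
    return random.choices(range(n_files), weights=weights)[0]

def pareto_size(k=1000, alpha=1.2):
    # Heavy-tailed file/request sizes, in bytes.
    return int(k / ((1.0 - random.random()) ** (1.0 / alpha)))

def user_session(n_files=1000, n_requests=50):
    # One simulated user: a reference stream with idle (OFF) periods.
    for _ in range(n_requests):
        yield {"file": zipf_rank(n_files),
               "bytes": pareto_size(),
               "idle_s": random.paretovariate(1.5)}

for req in user_session(n_requests=3):
    print(req)
```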

Relevance: 50.00%

Publisher:

Abstract:

There has been considerable work done in the study of Web reference streams: sequences of requests for Web objects. In particular, many studies have looked at the locality properties of such streams, because of the impact of locality on the design and performance of caching and prefetching systems. However, a general framework for understanding why reference streams exhibit given locality properties has not yet emerged. In this work we take a first step in this direction, based on viewing the Web as a set of reference streams that are transformed by Web components (clients, servers, and intermediaries). We propose a graph-based framework for describing this collection of streams and components. We identify three basic stream transformations that occur at nodes of the graph: aggregation, disaggregation and filtering, and we show how these transformations can be used to abstract the effects of different Web components on their associated reference streams. This view allows a structured approach to the analysis of why reference streams show given properties at different points in the Web. Applying this approach to the study of locality requires good metrics for locality. These metrics must meet three criteria: 1) they must accurately capture temporal locality; 2) they must be independent of trace artifacts such as trace length; and 3) they must not involve manual procedures or model-based assumptions. We describe two metrics meeting these criteria that each capture a different kind of temporal locality in reference streams. The popularity component of temporal locality is captured by entropy, while the correlation component is captured by interreference coefficient of variation. We argue that these metrics are more natural and more useful than previously proposed metrics for temporal locality. We use this framework to analyze a diverse set of Web reference traces. We find that this framework can shed light on how and why locality properties vary across different locations in the Web topology. For example, we find that filtering and aggregation have opposing effects on the popularity component of the temporal locality, which helps to explain why multilevel caching can be effective in the Web. Furthermore, we find that all transformations tend to diminish the correlation component of temporal locality, which has implications for the utility of different cache replacement policies at different points in the Web.
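Both metrics admit short, direct implementations. The sketch below assumes the straightforward readings of each: Shannon entropy over per-object reference probabilities for the popularity component, and the coefficient of variation of gaps between successive references to the same object for the correlation component; the paper's exact normalization may differ.

```python
import math
from collections import Counter

def popularity_entropy(stream):
    # Popularity component: entropy of the reference distribution.
    counts = Counter(stream)
    n = len(stream)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def interreference_cv(stream):
    # Correlation component: coefficient of variation of interreference gaps.
    last, gaps = {}, []
    for i, ref in enumerate(stream):
        if ref in last:
            gaps.append(i - last[ref])
        last[ref] = i
    mean = sum(gaps) / len(gaps)
    var = sum((g - mean) ** 2 for g in gaps) / len(gaps)
    return math.sqrt(var) / mean

stream = ["a", "b", "a", "a", "c", "b", "a"]
print(popularity_entropy(stream), interreference_cv(stream))
```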

Relevance: 50.00%

Publisher:

Abstract:

Some WWW image engines allow the user to form a query in terms of text keywords. To build the image index, keywords are extracted heuristically from HTML documents containing each image, and/or from the image URL and file headers. Unfortunately, text-based image engines have merely retro-fitted standard SQL database query methods, and it is difficult to include image cues within such a framework. On the other hand, visual statistics (e.g., color histograms) are often insufficient for helping users find desired images in a vast WWW index. By truly unifying textual and visual statistics, one would expect to get better results than with either used separately. In this paper, we propose an approach that allows the combination of visual statistics with textual statistics in the vector space representation commonly used in query-by-image-content systems. Text statistics are captured in vector form using latent semantic indexing (LSI). The LSI index for an HTML document is then associated with each of the images contained therein. Visual statistics (e.g., color, orientedness) are also computed for each image. The LSI and visual statistic vectors are then combined into a single index vector that can be used for content-based search of the resulting image database. By using an integrated approach, we are able to take advantage of possible statistical couplings between the topic of the document (latent semantic content) and the contents of images (visual statistics). This allows improved performance in conducting content-based search. This approach has been implemented in a WWW image search engine prototype.
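A hedged sketch of the combination step: normalize the LSI vector and the visual-statistics vector, weight them, and concatenate into one index vector that a standard cosine-similarity search can use. The equal-weight scheme here is an assumption, not the paper's tuned normalization.

```python
import numpy as np

def combined_index(lsi_vec, visual_vec, w_text=0.5):
    # Unit-normalize each modality so neither dominates by scale alone.
    t = lsi_vec / (np.linalg.norm(lsi_vec) or 1.0)
    v = visual_vec / (np.linalg.norm(visual_vec) or 1.0)
    return np.concatenate([w_text * t, (1.0 - w_text) * v])

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical 2-dim LSI vectors and 3-bin color histograms.
query = combined_index(np.array([0.2, 0.7]), np.array([0.1, 0.4, 0.5]))
image = combined_index(np.array([0.3, 0.6]), np.array([0.2, 0.3, 0.5]))
print(cosine(query, image))
```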

Relevance: 50.00%

Publisher:

Abstract:

We present a highly accurate method for classifying web pages based on link percentage: the percentage of a page's text characters that are part of links. K-means clustering is used to create unique thresholds to differentiate index pages and article pages on individual web sites. Index pages contain mostly links to articles and other indices, while article pages contain mostly text. We also present a novel link grouping algorithm using agglomerative hierarchical clustering that groups links in the same spatial neighborhood together while preserving link structure. Grouping allows users with severe disabilities to use a scan-based mechanism to tab through a web page and select items. In experiments, we saw up to a 40-fold reduction in the number of commands needed to click on a link with a scan-based interface, which shows that we can vastly improve the rate of communication for users with disabilities. We used web page classification and link grouping to alter web page display in an accessible web browser that we developed to provide a usable browsing interface for users with disabilities. Our classification method consistently outperformed a baseline classifier even when using minimal data to generate article and index clusters, and achieved classification accuracy of 94.0% on web sites with well-formed or slightly malformed HTML, compared with 80.1% accuracy for the baseline classifier.
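The classification signal is simple enough to sketch. Below, link percentage is computed per page and a one-dimensional 2-means stands in for the paper's per-site k-means thresholding; all values are hypothetical.

```python
def link_percentage(link_text_chars, total_text_chars):
    # Fraction of a page's text characters that are part of links.
    return link_text_chars / total_text_chars

def two_means_threshold(values, iters=20):
    # Lloyd-style k-means iteration in one dimension with k = 2.
    lo, hi = min(values), max(values)
    for _ in range(iters):
        split = (lo + hi) / 2
        a = [v for v in values if v <= split] or [lo]
        b = [v for v in values if v > split] or [hi]
        lo, hi = sum(a) / len(a), sum(b) / len(b)
    return (lo + hi) / 2   # above: index pages; below: article pages

site_pages = [0.05, 0.08, 0.12, 0.65, 0.71, 0.80]   # hypothetical link percentages
print(two_means_threshold(site_pages))
```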

Relevance: 50.00%

Publisher:

Abstract:

Global biodiversity is eroding at an alarming rate, through a combination of anthropogenic disturbance and environmental change. Ecological communities are bewildering in their complexity. Experimental ecologists strive to understand the mechanisms that drive the stability and structure of these complex communities in a bid to inform nature conservation and management. Two fields of research have had high profile success at developing theories related to these stabilising structures and testing them through controlled experimentation. Biodiversity-ecosystem functioning (BEF) research has explored the likely consequences of biodiversity loss on the functioning of natural systems and the provision of important ecosystem services. Empirical tests of BEF theory often consist of simplified laboratory and field experiments, carried out on subsets of ecological communities. Such experiments often overlook key information relating to patterns of interactions, important relationships, and fundamental ecosystem properties. The study of multi-species predator-prey interactions has also contributed much to our understanding of how complex systems are structured, particularly through the importance of indirect effects and predator suppression of prey populations. A growing number of studies describe these complex interactions in detailed food webs, which encompass all the interactions in a community. This has led to recent calls for an integration of BEF research with the comprehensive study of food web properties and patterns, to help elucidate the mechanisms that allow complex communities to persist in nature. This thesis adopts such an approach, through experimentation at Lough Hyne marine reserve, in southwest Ireland. Complex communities were allowed to develop naturally in exclusion cages, with only the diversity of top trophic levels controlled. Species removals were carried out and the resulting changes to predator-prey interactions, ecosystem functioning, food web properties, and stability were studied in detail. The findings of these experiments contribute greatly to our understanding of the stability and structure of complex natural communities.

Relevance: 50.00%

Publisher:

Abstract:

OBJECTIVE: The Veterans Health Administration has developed My HealtheVet (MHV), a Web-based portal that links veterans to their care in the Veterans Affairs (VA) system. The objective of this study was to measure diabetic veterans' access to and use of the Internet, and their interest in using MHV to help manage their diabetes. MATERIALS AND METHODS: Cross-sectional mailed survey of 201 patients with type 2 diabetes and hemoglobin A1c > 8.0% receiving primary care at any of five primary care clinic sites affiliated with a VA tertiary care facility. Main measures included Internet usage, access, and attitudes; computer skills; interest in using the Internet; awareness of and attitudes toward MHV; demographics; and socioeconomic status. RESULTS: A majority of respondents reported having access to the Internet at home. Nearly half of all respondents had searched online for information about diabetes, including some who did not have home Internet access. More than a third obtained "some" or "a lot" of their health-related information online. Forty-one percent reported being "very interested" in using MHV to help track their home blood glucose readings, a third of whom did not have home Internet access. Factors associated with being "very interested" were as follows: having access to the Internet at home (p < 0.001), "a lot/some" trust in the Internet as a source of health information (p = 0.002), younger age (p = 0.03), and some college education (p = 0.04). Neither race (p = 0.44) nor income (p = 0.25) was significantly associated with interest in MHV. CONCLUSIONS: This study found that a diverse sample of older VA patients with suboptimally controlled diabetes had a level of familiarity with and access to the Internet comparable to that of an age-matched national sample. In addition, there was a high degree of interest in using the Internet to help manage their diabetes.

Relevance: 50.00%

Publisher:

Abstract:

BACKGROUND: Web-based decision aids are increasingly important in medical research and clinical care. However, few have been studied in an intensive care unit setting. The objectives of this study were to develop a Web-based decision aid for family members of patients receiving prolonged mechanical ventilation and to evaluate its usability and acceptability. METHODS: Using an iterative process involving 48 critical illness survivors, family surrogate decision makers, and intensivists, we developed a Web-based decision aid addressing goals-of-care preferences for surrogate decision makers of patients with prolonged mechanical ventilation that could be either administered by study staff or completed independently by family members (Development Phase). After piloting the decision aid among 13 surrogate decision makers and seven intensivists, we assessed the decision aid's usability in the Evaluation Phase among a cohort of 30 surrogate decision makers using the System Usability Scale (SUS). Acceptability was assessed using measures of satisfaction and preference for electronic Collaborative Decision Support (eCODES) versus the original printed decision aid. RESULTS: The final decision aid, termed 'electronic Collaborative Decision Support', provides a framework for shared decision making, elicits relevant values and preferences, incorporates clinical data to personalize prognostic estimates generated from the ProVent prediction model, generates a printable document summarizing the user's interaction with the decision aid, and can digitally archive each user session. Usability was excellent (mean SUS, 80 ± 10) overall, but lower among those 56 years and older (73 ± 7) versus those who were younger (84 ± 9); p = 0.03. A total of 93% of users reported a preference for the electronic versus the printed version. CONCLUSIONS: The Web-based decision aid for ICU surrogate decision makers can facilitate highly individualized information sharing with excellent usability and acceptability. Decision aids that employ an electronic format, such as eCODES, represent a strategy that could enhance patient-clinician collaboration and the quality of decision making in intensive care.

Relevance: 50.00%

Publisher:

Abstract:

PURPOSE: Risk-stratified guidelines can improve quality of care and cost-effectiveness, but their uptake in primary care has been limited. MeTree, a Web-based, patient-facing risk-assessment and clinical decision support tool, is designed to facilitate uptake of risk-stratified guidelines. METHODS: A hybrid implementation-effectiveness trial of three clinics (two intervention, one control). PARTICIPANTS: Consentable nonadopted adults with upcoming appointments. PRIMARY OUTCOME: Agreement between patient risk level and risk management for those meeting evidence-based criteria for increased-risk risk-management strategies (increased risk) and those who do not (average risk), before and after MeTree. MEASURES: Chart abstraction was used to identify risk management related to colon, breast, and ovarian cancer, hereditary cancer, and thrombosis. RESULTS: Participants = 488; female = 284 (58.2%); white = 411 (85.7%); mean age = 58.7 (SD = 12.3). Agreement between risk management and risk level for all conditions for each participant (except colon cancer, which was limited to those <50 years of age) was (i) 1.1% (N = 2/174) for the increased-risk group before MeTree and 16.1% (N = 28/174) after, and (ii) 99.2% (N = 2,125/2,142) for the average-risk group before MeTree and 99.5% (N = 2,131/2,142) after. Of those receiving increased-risk risk-management strategies at baseline, 10.5% (N = 2/19) met criteria for increased risk. After MeTree, 80.7% (N = 46/57) met criteria. CONCLUSION: MeTree integration into primary care can improve uptake of risk-stratified guidelines and potentially reduce "overuse" and "underuse" of increased-risk services. Genet Med 18(10), 1020-1028.

Relevance: 50.00%

Publisher:

Abstract:

The idea of designing a web page in the mathematics workshop course arose during the 2000-01 academic year. We sought to highlight the playful character of the workshop, and we thought a web page could be a good motivating element for the course, while also letting us show other people part of the work we did there. In the workshop we tried to rediscover Mathematics, and on the web page we talked about mathematics, presenting the investigations and curiosities carried out over the year.

Relevance: 50.00%

Publisher:

Abstract:

Web services based systems have recently found their way into many applications such as e-commerce, corporate integration, and e-learning. Constructing new services, or introducing new functions into existing services, requires the composition of web services. Current approaches to service composition often require major programming effort; this is time consuming and requires considerable developer expertise. In this paper, we explore the real and rich scenarios found in e-learning, where education services are offered over the Internet by networked universities to potentially millions of learners worldwide. These services are derived from existing and emerging business operation processes and are commonly offered through a web interface, combined with other services such as email and FTP, to support partial or full business processes. We identify the requirements for a generic portal framework for easy integration of the existing expertise and services of individual institutions (enterprises). We examine the existing technologies and standards, and point out the gaps to be filled in designing the architecture of the framework.
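As a purely illustrative sketch of the kind of composition the paper motivates, the following chains an enrolment service and a notification service behind one portal operation. The endpoints and payloads are hypothetical; nothing here is the paper's framework.

```python
import json
from urllib import request

def call(url, payload):
    # Minimal JSON-over-HTTP client for the sketch.
    req = request.Request(url, data=json.dumps(payload).encode(),
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.load(resp)

def enrol_student(student_email, course):
    # Compose two independent services into one partial business process.
    enrolment = call("https://registry.example.edu/enrol",
                     {"student": student_email, "course": course})
    call("https://mail.example.edu/send",
         {"to": student_email, "subject": "Enrolment confirmed",
          "body": f"You are enrolled in {course}."})
    return enrolment
```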

Relevance: 50.00%

Publisher:

Abstract:

With the emergence of the Semantic Web there has been much discussion about the impact of technologies such as XML and RDF on the way we use the Web for developing e-learning applications and, perhaps more importantly, on how we can personalise these applications. Personalisation of e-learning is viewed by many authors (see, amongst others, Eklund & Brusilovsky, 1998; Kurzel, Slay, & Hagenus, 2003; Martinez, 2000; Sampson, Karagiannidis, & Kinshuk, 2002; Voigt & Swatman, 2003) as the key challenge for learning technologists. According to Kurzel (2004), the tailoring of e-learning applications can have an impact on content and how it is accessed, the media forms used, the method of instruction employed, and the learning styles supported. This paper reports on a research project currently underway at the eCentre at the University of Greenwich, which is exploring different approaches and methodologies for creating an e-learning platform with personalisation built in. This personalisation is proposed to be set at different levels within the system, ranging from being guided by the information that the user inputs into the system down to the lower level of being set using information inferred by the system's processing engine.
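To ground the RDF discussion, here is a small, purely illustrative learner profile expressed as triples (using the rdflib library; the ex: vocabulary is invented for the example). A personalisation engine could match content against such a profile, whether the triples come from explicit user input or are inferred by the system.

```python
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/elearn#")
g = Graph()
g.bind("ex", EX)

# Profile facts: some entered by the user, some inferred by the system.
g.add((EX.alice, EX.prefersMedia, Literal("video")))
g.add((EX.alice, EX.learningStyle, Literal("visual")))
g.add((EX.alice, EX.completedModule, EX.html_basics))

# A simple lookup a recommendation step might perform.
for style in g.objects(EX.alice, EX.learningStyle):
    print("Recommend content tagged:", style)
```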

Relevance: 50.00%

Publisher:

Abstract:

This short position paper considers issues in developing a Data Architecture for the Internet of Things (IoT) through the medium of an exemplar project, Domain Expertise Capture in Authoring and Development Environments (DECADE). A brief discussion sets the background for IoT and the development of the distinction between things and computers. The paper makes a strong argument to avoid reinventing the wheel: to reuse approaches to distributed heterogeneous data architectures and the lessons learned from that work, and to apply them to this situation. DECADE requires an autonomous recording system, local data storage, a semi-autonomous verification model, a sign-off mechanism, and qualitative and quantitative analysis carried out when and where required through a web-service architecture, based on ontology and analytic agents, with a self-maintaining ontology model. To develop this, we describe a web-service architecture combining a distributed data warehouse, web services for analysis agents, ontology agents, and a verification engine, with a centrally verified outcome database maintained by a certifying body for qualification/professional status.
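A minimal sketch of the pipeline the paper outlines: autonomous local recording, a semi-autonomous verification step, and sign-off into a centrally maintained outcome store. Every interface below is an assumption made for illustration.

```python
import time

class LocalRecorder:
    """Autonomous recording system with local data storage."""
    def __init__(self):
        self.events = []

    def record(self, actor, action):
        self.events.append({"t": time.time(), "actor": actor, "action": action})

def verify(events, required_actions):
    # Semi-autonomous verification: does the recorded evidence cover
    # every action the qualification requires?
    return required_actions <= {e["action"] for e in events}

def sign_off(candidate, events, required, central_db):
    # Centrally verified outcome database, maintained by a certifying body.
    if verify(events, required):
        central_db[candidate] = {"status": "verified", "evidence": len(events)}

central = {}
rec = LocalRecorder()
rec.record("alice", "author_module")
rec.record("alice", "peer_review")
sign_off("alice", rec.events, {"author_module", "peer_review"}, central)
print(central)
```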

Relevance: 50.00%

Publisher:

Abstract:

Differential phenological responses to climate among species are predicted to disrupt trophic interactions, but datasets to evaluate this are scarce. We compared phenological trends for species from 4 levels of a North Sea food web over 24 yr when sea surface temperature (SST) increased significantly. We found little consistency in phenological trends between adjacent trophic levels, no significant relationships with SST, and no significant pairwise correlations between predator and prey phenologies, suggesting that trophic mismatching is occurring. Finer resolution data on timing of peak energy demand (mid-chick-rearing) for 5 seabird species at a major North Sea colony were compared to modelled daily changes in length of 0-group (young of the year) lesser sandeels Ammodytes marinus. The date at which sandeels reached a given threshold length became significantly later during the study. Although the phenology of all the species except shags also became later, these changes were insufficient to keep pace with sandeel length, and thus mean length (and energy value) of 0-group sandeels at mid-chick-rearing showed net declines. The magnitude of declines in energy value varied among the seabirds, being more marked in species showing no phenological response (shag, 4.80 kJ) and in later breeding species feeding on larger sandeels (kittiwake, 2.46 kJ) where, due to the relationship between sandeel length and energy value being non-linear, small reductions in length result in relatively large reductions in energy. However, despite the decline in energy value of 0-group sandeels during chick-rearing, there was no evidence of any adverse effect on breeding success for any of the seabird species. Trophic mismatch appears to be prevalent within the North Sea pelagic food web, suggesting that ecosystem functioning may be disrupted.