986 results for Wide-Base Tires.
Abstract:
We analyzed the logs of our departmental HTTP server http://cs-www.bu.edu as well as the logs of the more popular Rolling Stones HTTP server http://www.stones.com. These servers have very different purposes; the former caters primarily to local clients, whereas the latter caters exclusively to remote clients all over the world. In both cases, our analysis showed that remote HTTP accesses were confined to a very small subset of documents. Using a validated analytical model of server popularity and file access profiles, we show that by disseminating the most popular documents on servers (proxies) closer to the clients, network traffic could be reduced considerably, while server loads are balanced. We argue that this process could be generalized to provide automated, demand-based duplication of documents. We believe that such server-based information dissemination protocols will be more effective at reducing both network bandwidth and document retrieval times than client-based caching protocols [2].
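To make the dissemination idea concrete, here is a minimal sketch, not taken from the paper's analytical model: given per-document access counts extracted from a server log, it selects the smallest set of popular documents that covers a chosen fraction of remote requests, i.e. the natural candidates for replication to proxies. The log format, the coverage threshold, and the document names are illustrative assumptions.

from collections import Counter

def replication_candidates(requests, coverage=0.7):
    """Return the most-requested documents that together cover `coverage` of all requests."""
    counts = Counter(requests)               # document path -> number of accesses
    total = sum(counts.values())
    chosen, covered = [], 0
    for doc, n in counts.most_common():
        chosen.append(doc)
        covered += n
        if covered / total >= coverage:
            break
    return chosen

log = ["/a.html"] * 50 + ["/b.html"] * 30 + ["/c.html"] * 15 + ["/d.html"] * 5
print(replication_candidates(log))           # ['/a.html', '/b.html'] already covers 80%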
Abstract:
This report describes our attempt to add animation as another data type to be used on the World Wide Web. Our current network infrastructure, the Internet, is incapable of carrying the video and audio streams that would be needed for presentation purposes on the web. In contrast, object-oriented animation proves to be efficient in terms of network resource requirements. We defined an animation model to support drawing-based and frame-based animation. We also extended the HyperText Markup Language in order to include this animation model. BU-NCSA Mosanim, a modified version of NCSA Mosaic for X (v2.5), is available to demonstrate the concept and potential of animation in presentations and interactive game playing over the web.
Abstract:
Recently the notion of self-similarity has been shown to apply to wide-area and local-area network traffic. In this paper we examine the mechanisms that give rise to self-similar network traffic. We present an explanation for traffic self-similarity by using a particular subset of wide area traffic: traffic due to the World Wide Web (WWW). Using an extensive set of traces of actual user executions of NCSA Mosaic, reflecting over half a million requests for WWW documents, we show evidence that WWW traffic is self-similar. Then we show that the self-similarity in such traffic can be explained based on the underlying distributions of WWW document sizes, the effects of caching and user preference in file transfer, the effect of user "think time", and the superimposition of many such transfers in a local area network. To do this we rely on empirically measured distributions both from our traces and from data independently collected at over thirty WWW sites.
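The standard definitions behind these claims can be stated compactly (these are the usual textbook formulations, not formulas reproduced from the paper). For a stationary process $X$ with $m$-aggregated series $X^{(m)}$, asymptotic second-order self-similarity is commonly characterized through the decay of the aggregated variance, and heavy-tailed transfer sizes are linked to the Hurst parameter of the superposed traffic:

\[
X^{(m)}_k = \frac{1}{m}\sum_{i=(k-1)m+1}^{km} X_i, \qquad
\operatorname{Var}\!\big(X^{(m)}\big) \sim c\, m^{-\beta}, \qquad H = 1 - \tfrac{\beta}{2}, \quad \tfrac{1}{2} < H < 1,
\]
\[
P[S > x] \sim x^{-\alpha}\ \ (1 < \alpha < 2) \;\Longrightarrow\; H = \frac{3-\alpha}{2}
\]

for the superposition of many ON/OFF sources whose ON periods (transfer sizes) follow such a heavy-tailed distribution.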
Abstract:
We propose the development of a World Wide Web image search engine that crawls the web collecting information about the images it finds, computes the appropriate image decompositions and indices, and stores this extracted information for searches based on image content. Indexing and searching images need not require solving the image understanding problem. Instead, the general approach should be to provide an arsenal of image decompositions and discriminants that can be precomputed for images. At search time, users can select a weighted subset of these decompositions to be used for computing image similarity measurements. While this approach avoids the search-time-dependent problem of labeling what is important in images, it still presents several important problems that require further research in the area of query by image content. We briefly explore some of these problems as they pertain to shape.
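As an illustration of the search-time weighting described above (the decomposition names, the distance measure, and the weights are assumptions, not part of the proposal), a per-image similarity score can be formed by combining precomputed feature vectors with user-selected weights:

import numpy as np

def weighted_distance(query_feats, image_feats, weights):
    """query_feats/image_feats map a decomposition name to a feature vector;
    weights map the same names to user-chosen weights. Lower = more similar."""
    score = 0.0
    for name, w in weights.items():
        q = np.asarray(query_feats[name], dtype=float)
        x = np.asarray(image_feats[name], dtype=float)
        score += w * np.linalg.norm(q - x)    # per-decomposition L2 distance
    return score

query = {"color": [0.20, 0.50, 0.30], "texture": [1.0, 0.1]}
image = {"color": [0.25, 0.45, 0.30], "texture": [0.9, 0.2]}
print(weighted_distance(query, image, {"color": 0.7, "texture": 0.3}))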
Abstract:
Replication is a commonly proposed solution to problems of scale associated with distributed services. However, when a service is replicated, each client must be assigned a server. Prior work has generally assumed that assignment to be static. In contrast, we propose dynamic server selection, and show that it enables application-level congestion avoidance. To make dynamic server selection practical, we demonstrate the use of three tools. In addition to direct measurements of round-trip latency, we introduce and validate two new tools: bprobe, which estimates the maximum possible bandwidth along a given path; and cprobe, which estimates the current congestion along a path. Using these tools we demonstrate dynamic server selection and compare it to previous static approaches. We show that dynamic server selection consistently outperforms static policies by as much as 50%. Furthermore, we demonstrate the importance of each of our tools in performing dynamic server selection.
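A hedged sketch of the selection policy, with invented numbers and server names: given a round-trip latency measurement, a bprobe-style bottleneck-bandwidth estimate, and a cprobe-style congestion (utilization) estimate per replica, pick the server with the lowest predicted fetch time. The prediction formula is a plausible stand-in, not the paper's.

def predicted_fetch_time(doc_bytes, rtt_s, bottleneck_bps, utilization):
    """Crude prediction: latency plus transfer time over the unutilized bandwidth."""
    available_bps = bottleneck_bps * max(1.0 - utilization, 0.05)
    return rtt_s + doc_bytes * 8 / available_bps

def choose_server(doc_bytes, probes):
    """probes: {server: (rtt_s, bottleneck_bps, utilization in [0, 1])}."""
    return min(probes, key=lambda s: predicted_fetch_time(doc_bytes, *probes[s]))

probes = {
    "replica-a.example.net": (0.040, 10e6, 0.80),   # close but congested path
    "replica-b.example.net": (0.120, 10e6, 0.10),   # farther but mostly idle path
}
print(choose_server(500_000, probes))                # -> replica-b for a 500 KB document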
Abstract:
Server performance has become a crucial issue for improving the overall performance of the World-Wide Web. This paper describes Webmonitor, a tool for evaluating and understanding server performance, and presents new results for a realistic workload. Webmonitor measures activity and resource consumption, both within the kernel and in HTTP processes running in user space. Webmonitor is implemented using an efficient combination of sampling and event-driven techniques that exhibit low overhead. Our initial implementation is for the Apache World-Wide Web server running on the Linux operating system. We demonstrate the utility of Webmonitor by measuring and understanding the performance of a Pentium-based PC acting as a dedicated WWW server. Our workload uses a file size distribution with a heavy tail. This captures the fact that Web servers must concurrently handle some requests for large audio and video files, and a large number of requests for small documents, containing text or images. Our results show that in a Web server saturated by client requests, over 90% of the time spent handling HTTP requests is spent in the kernel. Furthermore, keeping TCP connections open, as required by TCP, causes a factor of 2-9 increase in the elapsed time required to service an HTTP request. Data gathered from Webmonitor provide insight into the causes of this performance penalty. Specifically, we observe a significant increase in resource consumption along three dimensions: the number of HTTP processes running at the same time, CPU utilization, and memory utilization. These results emphasize the important role of operating system and network protocol implementation in determining Web server performance.
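This is not Webmonitor itself, but the kernel-versus-user split it reports can be illustrated with standard POSIX rusage counters for the current process; the workload below is a toy stand-in for request handling.

import os
import resource

def cpu_split():
    ru = resource.getrusage(resource.RUSAGE_SELF)
    total = ru.ru_utime + ru.ru_stime
    return ru.ru_utime, ru.ru_stime, (ru.ru_stime / total if total else 0.0)

# Toy workload: some user-space computation and some system-call activity.
sum(i * i for i in range(500_000))
for _ in range(20_000):
    os.getpid()

user_s, system_s, kernel_share = cpu_split()
print(f"user={user_s:.3f}s  system={system_s:.3f}s  kernel share={kernel_share:.0%}")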
Abstract:
ImageRover is a search-by-image-content navigation tool for the World Wide Web. To gather images expediently, the image collection subsystem utilizes a distributed fleet of WWW robots running on different computers. The image robots gather information about the images they find, compute the appropriate image decompositions and indices, and store this extracted information in vector form for searches based on image content. At search time, users can iteratively guide the search through the selection of relevant examples. Search performance is made efficient through the use of an approximate, optimized k-d tree algorithm. The system employs a novel relevance feedback algorithm that selects the distance metrics appropriate for a particular query.
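A sketch of the indexing and search flow, using SciPy's k-d tree as a stand-in for the optimized structure the abstract describes; the feature dimensionality, the eps setting, and the relevance-feedback rule (weighting dimensions on which the relevant examples agree) are illustrative assumptions rather than ImageRover's actual algorithm.

import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
features = rng.random((10_000, 16))          # 10,000 images, 16-D feature vectors
tree = cKDTree(features)

def search(query_vec, k=5, eps=0.1):
    """Approximate k-nearest-neighbour query; eps > 0 trades exactness for speed."""
    _, idx = tree.query(query_vec, k=k, eps=eps)
    return idx

def reweight(relevant_vectors):
    """Emphasize feature dimensions on which the user-marked relevant images agree."""
    var = np.var(relevant_vectors, axis=0) + 1e-6
    w = 1.0 / var
    return w / w.sum()                        # would rescale the feature space before re-querying

hits = search(features[42])
weights = reweight(features[hits])
print(hits, weights.round(3))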
Abstract:
Some WWW image engines allow the user to form a query in terms of text keywords. To build the image index, keywords are extracted heuristically from HTML documents containing each image, and/or from the image URL and file headers. Unfortunately, text-based image engines have merely retro-fitted standard SQL database query methods, and it is difficult to include image cues within such a framework. On the other hand, visual statistics (e.g., color histograms) are often insufficient for helping users find desired images in a vast WWW index. By truly unifying textual and visual statistics, one would expect to get better results than either used separately. In this paper, we propose an approach that allows the combination of visual statistics with textual statistics in the vector space representation commonly used in query by image content systems. Text statistics are captured in vector form using latent semantic indexing (LSI). The LSI index for an HTML document is then associated with each of the images contained therein. Visual statistics (e.g., color, orientedness) are also computed for each image. The LSI and visual statistic vectors are then combined into a single index vector that can be used for content-based search of the resulting image database. By using an integrated approach, we are able to take advantage of possible statistical couplings between the topic of the document (latent semantic content) and the contents of images (visual statistics). This allows improved performance in conducting content-based search. This approach has been implemented in a WWW image search engine prototype.
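A minimal sketch of the combined index described above; the LSI projection via a truncated SVD and the toy visual statistics are stand-ins, and only the idea of concatenating unit-normalized textual and visual vectors with a mixing weight is taken from the abstract.

import numpy as np
from numpy.linalg import norm, svd

def lsi_vectors(term_doc, k=2):
    """Project documents (columns of a term-by-document matrix) into a k-D LSI space."""
    U, S, Vt = svd(term_doc, full_matrices=False)
    return (np.diag(S[:k]) @ Vt[:k]).T        # one k-D row per document

def combined_index(lsi_vec, visual_vec, alpha=0.5):
    """Weighted concatenation of unit-normalized textual and visual statistics."""
    t = lsi_vec / (norm(lsi_vec) or 1.0)
    v = visual_vec / (norm(visual_vec) or 1.0)
    return np.concatenate([alpha * t, (1.0 - alpha) * v])

term_doc = np.array([[2.0, 0.0, 1.0],         # toy term-by-document counts
                     [1.0, 3.0, 0.0],
                     [0.0, 1.0, 2.0]])
page_lsi = lsi_vectors(term_doc)[0]           # LSI vector of the HTML page
color_hist = np.array([0.6, 0.3, 0.1])        # toy visual statistics of an image on that page
print(combined_index(page_lsi, color_hist))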
Abstract:
One of the most vexing questions facing researchers interested in the World Wide Web is why users often experience long delays in document retrieval. The Internet's size, complexity, and continued growth make this a difficult question to answer. We describe the Wide Area Web Measurement project (WAWM) which uses an infrastructure distributed across the Internet to study Web performance. The infrastructure enables simultaneous measurements of Web client performance, network performance and Web server performance. The infrastructure uses a Web traffic generator to create representative workloads on servers, and both active and passive tools to measure performance characteristics. Initial results based on a prototype installation of the infrastructure are presented in this paper.
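An illustrative active probe (not the WAWM toolset): timing the phases of a single document retrieval gives the client-side view that can be lined up against server- and network-side measurements taken elsewhere; the URL and timeout are placeholders.

import time
import urllib.request

def timed_fetch(url):
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=10) as resp:
        headers_done = time.perf_counter()    # status line and headers received
        body = resp.read()
    finished = time.perf_counter()
    return {"time_to_headers_s": headers_done - start,
            "total_s": finished - start,
            "bytes": len(body)}

print(timed_fetch("http://example.com/"))     # placeholder URL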
Abstract:
Log-polar image architectures, motivated by the structure of the human visual field, have long been investigated in computer vision for use in estimating motion parameters from an optical flow vector field. Practical problems with this approach have been: (i) dependence on assumed alignment of the visual and motion axes; (ii) sensitivity to occlusion from moving and stationary objects in the central visual field, where much of the numerical sensitivity is concentrated; and (iii) inaccuracy of the log-polar architecture (which is an approximation to the central 20°) for wide-field biological vision. In the present paper, we show that an algorithm based on a generalization of the log-polar architecture, termed the log-dipolar sensor, provides a large improvement in performance relative to the usual log-polar sampling. Specifically, our algorithm: (i) is tolerant of large misalignment of the optical and motion axes; (ii) is insensitive to significant occlusion by objects of unknown motion; and (iii) represents a more correct analogy to the wide-field structure of human vision. Using the Helmholtz-Hodge decomposition to estimate the optical flow vector field on a log-dipolar sensor, we demonstrate these advantages, using synthetic optical flow maps as well as natural image sequences.
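The two ingredients named above have standard forms, which may help fix notation (these are general textbook statements, not equations reproduced from the paper): the log-polar mapping of image coordinates and the Helmholtz-Hodge splitting of the optical flow field,

\[
(x, y) \;\longmapsto\; (\rho, \theta) = \Big(\log\sqrt{x^{2}+y^{2}},\; \operatorname{atan2}(y, x)\Big),
\]
\[
\mathbf{u} \;=\; \nabla\phi \;+\; \nabla\times\boldsymbol{\psi} \;+\; \mathbf{h},
\]

where $\phi$ is the curl-free potential, $\boldsymbol{\psi}$ the divergence-free potential, and $\mathbf{h}$ a harmonic remainder; the translational and rotational components of the flow are read off from the two potential terms.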
Abstract:
In this thesis a novel theory of electrocatalysis at metal (especially noble metal)/solution interfaces was developed, based on the assumption of metal adatom/incipient hydrous oxide cyclic redox transitions. Adatoms are considered to be metastable, low-coverage species that oxidise in situ at potentials often significantly cathodic to the regular metal/metal oxide transition. Because the adatom coverage is so low, the electrochemical or spectroscopic response for oxidation is frequently overlooked; however, the product of such oxidation, referred to here as incipient hydrous oxide, seems to be the important mediator in a wide variety of electrocatalytically demanding oxidation processes. Conversely, electrocatalytically demanding reductions apparently occur only at adatom sites at the metal/solution interface - such reactions generally occur only at potentials below, i.e. more cathodic than, the adatom/hydrous oxide transition. It was established that while silver in base oxidises in a regular manner (forming initially OHads species) at potentials above 1.0 V (RHE), there is a minor redox transition at much lower potentials, ca. 0.35 V (RHE). The latter process is assumed to be an adatom/hydrous oxide transition, and the low-coverage Ag(I) hydrous oxide (or hydroxide) species was shown to trigger or mediate the oxidation of aldehydes, e.g. HCHO. The results of a study of this system were shown to be in good agreement with a kinetic model based on the above assumptions; the similarity between this type of behaviour and enzyme-catalysed processes - both systems involve interfacial active sites - was pointed out. Similar behaviour was established for gold, where both Au(I) and Au(III) hydrous oxide mediators were shown to be the effective oxidants for different organic species. One of the most active electrocatalytic materials known at the present time is platinum. While the classical view of this high activity is based on the concept of activated chemisorption (and the important role of the latter is not discounted here), a vital role is attributed to the adatom/hydrous oxide transition. It was suggested that the well-known intermediate (or anomalous) peak in the hydrogen region of the cyclic voltammogram for platinum is in fact due to an adatom/hydrous oxide transition. Using potential stepping procedures to minimise the effect of deactivating (COads) species, it was shown that the onset (anodic sweep) and termination (cathodic sweep) potentials for the oxidation of a wide variety of organics coincided with the potential of the intermediate peak. The converse was also shown to apply: sluggish reduction reactions, which involve interaction with metal adatoms, occur at significant rates only in the region below the hydrous oxide/adatom transition.
Abstract:
Case-Based Reasoning (CBR) uses past experiences to solve new problems. The quality of the past experiences, which are stored as cases in a case base, is a big factor in the performance of a CBR system. The system's competence may be improved by adding problems to the case base after they have been solved and their solutions verified to be correct. However, from time to time, the case base may have to be refined to reduce redundancy and to get rid of any noisy cases that may have been introduced. Many case base maintenance algorithms have been developed to delete noisy and redundant cases. However, different algorithms work well in different situations and it may be difficult for a knowledge engineer to know which one is the best to use for a particular case base. In this thesis, we investigate ways to combine algorithms to produce better deletion decisions than the decisions made by individual algorithms, and ways to choose which algorithm is best for a given case base at a given time. We analyse five of the most commonly-used maintenance algorithms in detail and show how the different algorithms perform better on different datasets. This motivates us to develop a new approach: maintenance by a committee of experts (MACE). MACE allows us to combine maintenance algorithms to produce a composite algorithm which exploits the merits of each of the algorithms that it contains. By combining different algorithms in different ways we can also define algorithms that have different trade-offs between accuracy and deletion. While MACE allows us to define an infinite number of new composite algorithms, we still face the problem of choosing which algorithm to use. To make this choice, we need to be able to identify properties of a case base that are predictive of which maintenance algorithm is best. We examine a number of measures of dataset complexity for this purpose. These provide a numerical way to describe a case base at a given time. We use the numerical description to develop a meta-case-based classification system. This system uses previous experience about which maintenance algorithm was best to use for other case bases to predict which algorithm to use for a new case base. Finally, we give the knowledge engineer more control over the deletion process by creating incremental versions of the maintenance algorithms. These incremental algorithms suggest one case at a time for deletion rather than a group of cases, which allows the knowledge engineer to decide whether or not each case in turn should be deleted or kept. We also develop incremental versions of the complexity measures, allowing us to create an incremental version of our meta-case-based classification system. Since the case base changes after each deletion, the best algorithm to use may also change. The incremental system allows us to choose which algorithm is the best to use at each point in the deletion process.
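A toy sketch in the spirit of the committee-of-experts idea (the voting rules, the expert predicates, and the case attributes are illustrative assumptions, not the MACE algorithms themselves): each maintenance "expert" votes on whether a case should be deleted, and the committee combines the votes.

def committee_delete(case, experts, rule="majority"):
    """experts: callables that return True if they would delete the case."""
    votes = [expert(case) for expert in experts]
    if rule == "majority":
        return sum(votes) > len(votes) / 2    # ties keep the case (conservative)
    if rule == "unanimous":
        return all(votes)                     # delete only when every expert agrees
    raise ValueError(f"unknown rule: {rule}")

# Hypothetical experts standing in for real maintenance algorithms
# (e.g. one targeting noisy cases, another targeting redundant ones).
is_noisy      = lambda c: c["neighbour_agreement"] < 0.3
is_redundant  = lambda c: c["coverage_overlap"] > 0.9
rarely_reused = lambda c: c["reuse_count"] == 0

case = {"neighbour_agreement": 0.2, "coverage_overlap": 0.95, "reuse_count": 3}
print(committee_delete(case, [is_noisy, is_redundant, rarely_reused]))   # True: 2 of 3 vote delete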
Abstract:
Antifungal metabolites produced by lactic acid bacteria (LAB) can be a natural and reliable alternative for reducing fungal infections pre- and post-harvest, with a multitude of additional advantages for cereal-based products. Toxigenic and spoilage fungi are responsible for numerous diseases and economic losses. This thesis includes an overview of the impact fungi have on aspects of the cereal food chain. The applicability of LAB in plant protection and the cereal industry is discussed in detail. Specific case studies include Fusarium head blight and the impact of fungi in the malting and baking industries. The impact of Fusarium culmorum-infected raw barley on the final malt quality was part of the investigation. In vitro infected barley grains were fully characterized. The study showed that the germinative energy of infected barley grains decreased by 45% and that grains accumulated 199 μg kg⁻¹ of deoxynivalenol (DON). Barley grains were subsequently malted and fully characterized. Fungal biomass increased during all stages of malting. Infected malt accumulated 8 times its DON concentration during malting. Infected malt grains revealed extreme structural changes due to the proteolytic, (hemi-)cellulolytic and starch-degrading activity of the fungi, which led to increased friability and fragmentation. Infected grains also had higher protease and β-glucanase activities, lower amylase activity, a greater proportion of free amino and soluble nitrogen, and a lower β-glucan content. Malt loss was over 27% higher in infected malt when compared to the control. The protein compositional changes and respective enzymatic activity of infected barley and the resulting malt were characterized using a wide range of methods. F. culmorum-infected barley grains showed an increase in proteolytic activity and protein extractability. Several metabolic proteins decreased and increased at different rates during infection and malting, showing a complex interdependence with F. culmorum infection. In vitro F. culmorum-infected malt was used to produce lager beer to investigate changes caused by the fungi during the brewing process and their effect on beer quality attributes. It was found that the wort containing infected malt had a lower pH, higher FAN, higher β-glucan and a 45% increase in the purging rate, and led to premature yeast flocculation. The beer produced with infected malt (IB) also had a significantly different amino acid profile. IB flavour characterization revealed a higher concentration of esters, fusel alcohols, fatty acids, ketones, and dimethylsulfide, and in particular acetaldehyde, when compared to the control. IB had a greater proportion of Strecker aldehydes and Maillard products, contributing to an increased beer-staling character. IB had a 67% darker colour, with a trend towards better foam stability. It was also found that 78% of the deoxynivalenol accumulated in the malt was transferred into the beer. A LAB cell-free supernatant (cfs), produced in a wort-based substrate, was investigated for its ability to inhibit Fusarium growth during malting. Wort was a suitable substrate for LAB exhibiting antifungal activity. Lactobacillus amylovorus DSM19280 inhibited 10⁴ spores mL⁻¹ for 7 days after 120 h of fermentation, while Lactobacillus reuteri R29 inhibited 10⁵ spores mL⁻¹ for 7 days after 48 h of fermentation. The two LAB cfs had significantly different organic acid profiles. Antifungal acid compounds were identified, and phenyllactic, hydroxy-phenyllactic, and benzoic acids were present in higher concentrations when compared to the control. A 3 °P wort substrate inoculated with L. reuteri R29 (cfs) was applied in malting and successfully inhibited Fusarium growth by 23% and reduced mycotoxin DON by 80%. The resulting malt showed highly modified grains, lower pH, higher colouration, and higher extract yield. Selected antifungal-compound-producing LAB can thus be applied successfully in the malting process to reduce mould growth and mycotoxin production.
Abstract:
BACKGROUND: Palliative medicine has made rapid progress in establishing its scientific and clinical legitimacy, yet the evidence base to support clinical practice remains deficient in both the quantity and quality of published studies. Historically, the conduct of research in palliative care populations has been impeded by multiple barriers including health care system fragmentation, small number and size of potential sites for recruitment, vulnerability of the population, perceptions of inappropriateness, ethical concerns, and gate-keeping. METHODS: A group of experienced investigators with backgrounds in palliative care research convened to consider developing a research cooperative group as a mechanism for generating high-quality evidence on prioritized, clinically relevant topics in palliative care. RESULTS: The resulting Palliative Care Research Cooperative (PCRC) agreed on a set of core principles: active, interdisciplinary membership; commitment to shared research purposes; heterogeneity of participating sites; development of research capacity in participating sites; standardization of methodologies, such as consenting and data collection/management; agile response to research requests from government, industry, and investigators; focus on translation; education and training of future palliative care researchers; actionable results that can inform clinical practice and policy. Consensus was achieved on a first collaborative study, a randomized clinical trial of statin discontinuation versus continuation in patients with a prognosis of less than 6 months who are taking statins for primary or secondary prevention. This article describes the formation of the PCRC, highlighting processes and decisions taken to optimize the cooperative group's success.