968 results for Web sites


Relevance:

30.00%

Publisher:

Abstract:

This paper describes the use of Internet/Intranet technology, dynamic data publishing implemented with ASP, and remote design and manufacturing over a distributed network environment. A web-based market customer management system supporting remote robot design and manufacturing is designed, and through it the paper explores a design method for database publishing systems based on the Browser/Server architecture.
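
The abstract itself contains no code; as a loose modern analogue of the ASP-driven dynamic publishing it describes, the sketch below serves database rows to a browser at request time. Flask and SQLite stand in for ASP and the original database, and the customers table and its columns are assumptions made purely for illustration.

```python
# Minimal Browser/Server-style dynamic publishing sketch (illustrative only;
# the system in the abstract used ASP -- the "customers" table and its columns
# here are assumed, not taken from the paper).
import sqlite3
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/customers")
def list_customers():
    # Query the database at request time so the browser always sees current data.
    con = sqlite3.connect("crm.db")
    rows = con.execute("SELECT name, region, status FROM customers").fetchall()
    con.close()
    return jsonify(customers=[
        {"name": n, "region": r, "status": s} for n, r, s in rows
    ])

if __name__ == "__main__":
    app.run()
```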

Relevance:

30.00%

Publisher:

Abstract:

Recently the notion of self-similarity has been shown to apply to wide-area and local-area network traffic. In this paper we examine the mechanisms that give rise to self-similar network traffic. We present an explanation for traffic self-similarity by using a particular subset of wide area traffic: traffic due to the World Wide Web (WWW). Using an extensive set of traces of actual user executions of NCSA Mosaic, reflecting over half a million requests for WWW documents, we show evidence that WWW traffic is self-similar. Then we show that the self-similarity in such traffic can be explained based on the underlying distributions of WWW document sizes, the effects of caching and user preference in file transfer, the effect of user "think time", and the superimposition of many such transfers in a local area network. To do this we rely on empirically measured distributions both from our traces and from data independently collected at over thirty WWW sites.
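
One standard way to check a traffic trace for self-similarity is to estimate its Hurst parameter; the sketch below uses the aggregated-variance method on a series of per-interval byte counts. The choice of estimator and the synthetic placeholder trace are my assumptions, not details taken from the paper.

```python
# Aggregated-variance estimate of the Hurst parameter H for a traffic trace.
# H near 1 suggests strong self-similarity; H near 0.5 suggests short-range
# dependence. The trace used below is a synthetic placeholder only.
import numpy as np

def hurst_aggregated_variance(counts, block_sizes=(1, 2, 4, 8, 16, 32, 64)):
    counts = np.asarray(counts, dtype=float)
    log_m, log_var = [], []
    for m in block_sizes:
        n_blocks = len(counts) // m
        if n_blocks < 2:
            break
        # Aggregate the series into non-overlapping blocks of size m.
        blocks = counts[: n_blocks * m].reshape(n_blocks, m).mean(axis=1)
        log_m.append(np.log(m))
        log_var.append(np.log(blocks.var()))
    beta = -np.polyfit(log_m, log_var, 1)[0]   # var(blocks) ~ m ** (-beta)
    return 1.0 - beta / 2.0                    # H = 1 - beta / 2

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    trace = rng.poisson(100, 8192)   # placeholder; use per-interval byte counts
    print("estimated H:", round(hurst_aggregated_variance(trace), 3))
```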

Relevance:

30.00%

Publisher:

Abstract:

The exploding demand for services like the World Wide Web reflects the potential that is presented by globally distributed information systems. The number of WWW servers world-wide has doubled every 3 to 5 months since 1993, outstripping even the growth of the Internet. At each of these self-managed sites, the Common Gateway Interface (CGI) and Hypertext Transfer Protocol (HTTP) already constitute a rudimentary basis for contributing local resources to remote collaborations. However, the Web has serious deficiencies that make it unsuited for use as a true medium for metacomputing --- the process of bringing hardware, software, and expertise from many geographically dispersed sources to bear on large-scale problems. These deficiencies are, paradoxically, the direct result of the very simple design principles that enabled its exponential growth. There are many symptoms of the problems exhibited by the Web: disk and network resources are consumed extravagantly; information search and discovery are difficult; protocols are aimed at data movement rather than task migration, and ignore the potential for distributing computation. However, all of these can be seen as aspects of a single problem: as a distributed system for metacomputing, the Web offers unpredictable performance and unreliable results. The goal of our project is to use the Web as a medium (within either the global Internet or an enterprise intranet) for metacomputing in a reliable way with performance guarantees. We attack this problem at four levels: (1) Resource Management Services: Globally distributed computing allows novel approaches to the old problems of performance guarantees and reliability. Our first set of ideas involves setting up a family of real-time resource management models organized by the Web Computing Framework with a standard Resource Management Interface (RMI), a Resource Registry, a Task Registry, and resource management protocols that allow resource needs and availability information to be collected and disseminated, so that a family of algorithms with varying computational precision and accuracy of representations can be chosen to meet real-time and reliability constraints. (2) Middleware Services: Complementary to techniques for allocating and scheduling available resources to serve application needs under real-time and reliability constraints, the second set of ideas aims at reducing communication latency, traffic congestion, server workload, and so on. We develop customizable middleware services that exploit application characteristics in traffic analysis to drive new server/browser design strategies (e.g., exploiting the self-similarity of Web traffic), derive document access patterns via multiserver cooperation, and use them in speculative prefetching, document caching, and aggressive replication to reduce server load and bandwidth requirements. (3) Communication Infrastructure: To achieve any guarantee of quality of service or performance, one must reach the network layer that can provide the basic guarantees of bandwidth, latency, and reliability. The third area is therefore a set of new techniques in network service and protocol design. (4) Object-Oriented Web Computing Framework: A useful resource management system must deal with job priority, fault-tolerance, quality of service, complex resources such as ATM channels, probabilistic models, etc., and models must be tailored to represent the best tradeoff for a particular setting.
This requires a family of models, organized within an object-oriented framework, because no one-size-fits-all approach is appropriate. This presents a software engineering challenge requiring integration of solutions at all levels: algorithms, models, protocols, and profiling and monitoring tools. The framework captures the abstract class interfaces of the collection of cooperating components, but allows the concretization of each component to be driven by the requirements of a specific approach and environment.
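
As a concrete, simplified illustration of one middleware idea mentioned above (speculative prefetching driven by observed document access patterns), the sketch below records first-order transitions between requested documents and prefetches the most likely successor. It is an assumption-laden toy, not the project's actual design; fetch_document is a hypothetical stand-in for a real HTTP fetch.

```python
# Minimal speculative-prefetching sketch: learn first-order document access
# patterns and prefetch the most likely next document. Illustrative only; the
# fetch_document callable is a stand-in for a real HTTP fetch into a cache.
from collections import defaultdict

class PrefetchingCache:
    def __init__(self, fetch_document):
        self.fetch_document = fetch_document            # callable: url -> content
        self.cache = {}
        self.transitions = defaultdict(lambda: defaultdict(int))
        self.last_url = None

    def get(self, url):
        if self.last_url is not None:
            self.transitions[self.last_url][url] += 1   # record observed pattern
        self.last_url = url
        if url not in self.cache:
            self.cache[url] = self.fetch_document(url)
        self._prefetch_likely_successor(url)
        return self.cache[url]

    def _prefetch_likely_successor(self, url):
        successors = self.transitions.get(url)
        if not successors:
            return
        best = max(successors, key=successors.get)      # most frequent next request
        if best not in self.cache:
            self.cache[best] = self.fetch_document(best)
```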

Relevance:

30.00%

Publisher:

Abstract:

We present a highly accurate method for classifying web pages based on link percentage, which is the percentage of text characters that are parts of links normalized by the number of all text characters on a web page. K-means clustering is used to create unique thresholds to differentiate index pages and article pages on individual web sites. Index pages contain mostly links to articles and other indices, while article pages contain mostly text. We also present a novel link grouping algorithm using agglomerative hierarchical clustering that groups links in the same spatial neighborhood together while preserving link structure. Grouping allows users with severe disabilities to use a scan-based mechanism to tab through a web page and select items. In experiments, we saw up to a 40-fold reduction in the number of commands needed to click on a link with a scan-based interface, which shows that we can vastly improve the rate of communication for users with disabilities. We used web page classification and link grouping to alter web page display on an accessible web browser that we developed to make a usable browsing interface for users with disabilities. Our classification method consistently outperformed a baseline classifier even when using minimal data to generate article and index clusters, and achieved classification accuracy of 94.0% on web sites with well-formed or slightly malformed HTML, compared with 80.1% accuracy for the baseline classifier.
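
The classifier described above keys on link percentage with a per-site k-means threshold; the sketch below computes that feature and derives the two clusters. The library choices (BeautifulSoup, scikit-learn) and the exact normalisation are assumptions made for illustration rather than the authors' implementation.

```python
# Classify pages of one site as "index" or "article" from link percentage:
# the fraction of visible text characters that sit inside <a> tags. A 2-cluster
# k-means over the per-page values supplies the site-specific threshold.
# Library choices (BeautifulSoup, scikit-learn) are illustrative assumptions.
import numpy as np
from bs4 import BeautifulSoup
from sklearn.cluster import KMeans

def link_percentage(html):
    soup = BeautifulSoup(html, "html.parser")
    total = len(soup.get_text())
    linked = sum(len(a.get_text()) for a in soup.find_all("a"))
    return linked / total if total else 0.0

def classify_site(pages_html):
    feats = np.array([[link_percentage(h)] for h in pages_html])
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(feats)
    index_cluster = int(np.argmax(km.cluster_centers_))   # link-heavy cluster
    return ["index" if lbl == index_cluster else "article" for lbl in km.labels_]
```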

Relevance:

30.00%

Publisher:

Abstract:

OBJECTIVE: The Veterans Health Administration has developed My HealtheVet (MHV), a Web-based portal that links veterans to their care in the Veterans Affairs (VA) system. The objective of this study was to measure diabetic veterans' access to and use of the Internet, and their interest in using MHV to help manage their diabetes. MATERIALS AND METHODS: Cross-sectional mailed survey of 201 patients with type 2 diabetes and hemoglobin A1c > 8.0% receiving primary care at any of five primary care clinic sites affiliated with a VA tertiary care facility. Main measures included Internet usage, access, and attitudes; computer skills; interest in using the Internet; awareness of and attitudes toward MHV; demographics; and socioeconomic status. RESULTS: A majority of respondents reported having access to the Internet at home. Nearly half of all respondents had searched online for information about diabetes, including some who did not have home Internet access. More than a third obtained "some" or "a lot" of their health-related information online. Forty-one percent reported being "very interested" in using MHV to help track their home blood glucose readings, a third of whom did not have home Internet access. Factors associated with being "very interested" were as follows: having access to the Internet at home (p < 0.001), "a lot/some" trust in the Internet as a source of health information (p = 0.002), lower age (p = 0.03), and some college education (p = 0.04). Neither race (p = 0.44) nor income (p = 0.25) was significantly associated with interest in MHV. CONCLUSIONS: This study found that a diverse sample of older VA patients with sub-optimally controlled diabetes had a level of familiarity with and access to the Internet comparable to that of an age-matched national sample. In addition, there was a high degree of interest in using the Internet to help manage their diabetes.

Relevance:

30.00%

Publisher:

Abstract:

When mortality is high, animals run a risk if they wait to accumulate resources for improved reproduction, so they may trade off the timing of reproduction against the number and size of offspring. Animals may also attempt to improve food acquisition by relocating, even 'sit and wait' predators. We examine these factors in an isolated population of the orb-web spider Zygiella x-notata. The population was monitored for 200 days from first egg laying until all adults had died. Large females produced their first clutch earlier than did small females, and there was a positive correlation between female size and the number and size of eggs produced. Many females, presumably without eggs, abandoned their web site and relocated their web position; the presumption that they lacked eggs follows because female Zygiella typically guard their eggs. In total, c. 25% of females reproduced, but those that relocated were less likely to do so, and if they did, they produced their clutch at a later date than those that remained. When the date of lay was controlled for, there was no effect of relocation on egg number, but relocated females produced smaller eggs. The data are consistent with the idea that females in resource-poor sites are more likely to relocate. Relocation seems to be a gamble on finding a more productive site, but one that at best yields a late clutch of small eggs, and few females achieve even that.

Relevance:

30.00%

Publisher:

Abstract:

The rate of species loss is increasing on a global scale, and predators are most at risk from human-induced extinction. The effects of losing predators are difficult to predict, even with experimental single-species removals, because different combinations of species interact in unpredictable ways. We tested the effects of the loss of groups of common predators on herbivore and algal assemblages in a model benthic marine system. The predator groups were fish, shrimp and crabs. Each group was represented by at least two characteristic species based on data collected at local field sites. We examined the effects of the loss of predators while controlling for the loss of predator biomass. The identity, not the number, of predator groups affected herbivore abundance and assemblage structure. Removing fish led to a large increase in the abundance of dominant herbivores, such as ampithoids and caprellids. Predator identity also affected algal assemblage structure. It did not, however, affect total algal mass. Removing fish led to an increase in the final biomass of the least common taxa (red algae) and reduced the mass of the dominant taxa (brown algae). This compensatory shift in the algal assemblage appeared to facilitate the maintenance of a constant total algal biomass. In the absence of fish, shrimp at higher than ambient densities had a similar effect on herbivore abundance, showing that other groups could partially compensate for the loss of dominant predators. Crabs had no effect on herbivore or algal populations, possibly because they were not at carrying capacity in our experimental system. These findings show that, contrary to the assumptions of many food web models, predators cannot be classified into a single functional group; their role in food webs depends on their identity, their density and the carrying capacities of 'real' systems.

Relevance:

30.00%

Publisher:

Abstract:

REMA is an interactive web-based program which predicts endonuclease cut sites in DNA sequences. It analyses multiple sequences simultaneously, predicts the number and size of fragments, and provides restriction maps. Users can select single or paired combinations of all commercially available enzymes. Additionally, REMA permits prediction of multiple sequence terminal fragment sizes and suggests suitable restriction enzymes for maximally discriminatory results. REMA is an easy-to-use, web-based program which will have wide application in molecular biology research. Availability: REMA is written in Perl and is freely available for non-commercial use. Detailed information on installation can be obtained from Jan Szubert (jan.szubert@gmail.com) and the web-based application is accessible on the internet at the URL http://www.macaulay.ac.uk/rema. Contact: b.singh@macaulay.ac.uk. (C) 2007 Elsevier B.V. All rights reserved.
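
REMA itself is written in Perl and covers all commercially available enzymes; as a rough illustration of the underlying computation (locating recognition sites and deriving fragment sizes), here is a small Python sketch with a hand-picked enzyme table. It is not REMA's code and ignores features such as paired digests and terminal-fragment prediction.

```python
# Toy restriction-digest sketch: locate recognition sites for a few enzymes and
# report predicted fragment sizes for a linear sequence. The enzyme table is a
# small hand-written sample, not REMA's full catalogue.
ENZYMES = {
    "EcoRI":   ("GAATTC", 1),   # recognition sequence and cut offset
    "BamHI":   ("GGATCC", 1),
    "HindIII": ("AAGCTT", 1),
}

def cut_positions(seq, site, offset):
    seq, site = seq.upper(), site.upper()
    pos, cuts = seq.find(site), []
    while pos != -1:
        cuts.append(pos + offset)
        pos = seq.find(site, pos + 1)
    return cuts

def fragment_sizes(seq, enzyme):
    site, offset = ENZYMES[enzyme]
    cuts = sorted(cut_positions(seq, site, offset))
    bounds = [0] + cuts + [len(seq)]
    return [b - a for a, b in zip(bounds, bounds[1:]) if b > a]

if __name__ == "__main__":
    dna = "AAGAATTCTTGGATCCAAGAATTCTT"
    print("EcoRI fragments:", fragment_sizes(dna, "EcoRI"))
```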

Relevance:

30.00%

Publisher:

Abstract:

Objective. To investigate students' use of and views on social networking sites, and to assess differences in attitudes between genders and years in the program.

Methods. All pharmacy undergraduate students were invited via e-mail to complete an electronic questionnaire consisting of 21 questions relating to social networking.

Results. Most (91.8%) of the 377 respondents reported using social networking Web sites, with 98.6% using Facebook and 33.7% using Twitter. Female students were more likely than male students to agree that they had been made sufficiently aware of the professional behavior expected of them when using social networking sites (76.6% vs 58.1%; p=0.002) and to agree that students should have the same professional standards whether on placement or using social networking sites (76.3% vs 61.6%; p<0.001).

Conclusions. A high level of social networking use and potentially inappropriate attitudes towards professionalism were found among pharmacy students. Further training may be useful to ensure pharmacy students are aware of how to apply codes of conduct when using social networking sites.

Relevance:

30.00%

Publisher:

Abstract:

Web sites that rely on databases for their content are now ubiquitous. Query result pages are dynamically generated from these databases in response to user-submitted queries. Automatically extracting structured data from query result pages is a challenging problem, as the structure of the data is not explicitly represented. While humans have shown good intuition in visually understanding data records on a query result page as displayed by a web browser, no existing approach to data record extraction has made full use of this intuition. We propose a novel approach, in which we make use of the common sources of evidence that humans use to understand data records on a displayed query result page. These include structural regularity, and visual and content similarity between data records displayed on a query result page. Based on these observations we propose new techniques that can identify each data record individually, while ignoring noise items, such as navigation bars and adverts. We have implemented these techniques in a software prototype, rExtractor, and tested it using two datasets. Our experimental results show that our approach achieves significantly higher accuracy than previous approaches. Furthermore, it establishes the case for use of vision-based algorithms in the context of data extraction from web sites.
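
rExtractor's internals are not given here; the toy sketch below conveys the general idea of combining visual alignment and content similarity to cluster rendered blocks, keeping the largest cluster of mutually similar blocks as the data records and discarding one-off items such as navigation bars and adverts. The Block fields and thresholds are assumptions, not the system's actual heuristics.

```python
# Toy illustration of vision-style evidence for data record extraction:
# rendered blocks are clustered by left-edge alignment, rendered size and loose
# content similarity; the largest cluster of mutually similar blocks is kept as
# the data records. Fields and thresholds below are assumed for illustration.
from dataclasses import dataclass
from difflib import SequenceMatcher

@dataclass
class Block:
    x: int        # rendered left edge in pixels
    y: int        # rendered top edge in pixels
    height: int   # rendered height in pixels
    text: str     # visible text content

def similar(a, b, x_tol=10, size_tol=0.3, text_floor=0.2):
    aligned = abs(a.x - b.x) <= x_tol
    size_ok = abs(a.height - b.height) <= size_tol * max(a.height, b.height, 1)
    # Content similarity is kept loose: records share structure, not values.
    text_ok = SequenceMatcher(None, a.text, b.text).ratio() >= text_floor
    return aligned and size_ok and text_ok

def cluster_blocks(blocks):
    clusters = []
    for blk in sorted(blocks, key=lambda b: b.y):
        for cluster in clusters:
            if similar(cluster[0], blk):
                cluster.append(blk)
                break
        else:
            clusters.append([blk])
    return clusters

def extract_data_records(blocks):
    clusters = cluster_blocks(blocks)
    return max(clusters, key=len) if clusters else []
```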

Relevance:

30.00%

Publisher:

Abstract:

The purpose of this paper is to examine website adoption and its resultant effects on credit union performance in Ireland over the period 2002 to 2010. While there was a steady increase in web adoption over the period, a sizeable proportion (53%) of credit unions still did not have a web-based facility in 2010. To gauge web functionality, the researchers accessed all websites in 2010/2011; most sites were classified as informational, with limited transactional options. Panel data techniques are then used to capture the dynamic nature of website diffusion and to investigate the effect of website adoption on cost and performance. The empirical analysis reveals that credit unions with web-based functionality have a reduced spread between the loan rate and the pay-out rate, driven primarily by reduced loan rates. This reduced spread, although small, is found to both persist and increase over time.
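
The abstract does not specify the exact panel estimator; as a loose illustration of how such an analysis could be set up, the sketch below fits a two-way fixed-effects regression of the loan/pay-out spread on a web-adoption dummy with statsmodels. All variable names (spread, has_website, cu_id, year) are assumptions, not the paper's data.

```python
# Sketch of a two-way fixed-effects panel regression: the spread regressed on a
# website-adoption dummy with credit-union and year effects. Variable names
# (spread, has_website, cu_id, year) are assumed, not taken from the paper.
import pandas as pd
import statsmodels.formula.api as smf

def fit_web_adoption_model(df: pd.DataFrame):
    # Entity and time fixed effects via dummy variables (within-style estimate),
    # with standard errors clustered at the credit-union level.
    model = smf.ols("spread ~ has_website + C(cu_id) + C(year)", data=df)
    return model.fit(cov_type="cluster", cov_kwds={"groups": df["cu_id"]})

# Example call on a long-format panel (one row per credit union per year):
# results = fit_web_adoption_model(panel_df)
# print(results.params["has_website"])   # effect of web adoption on the spread
```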

Relevance:

30.00%

Publisher:

Abstract:

Automatically determining and assigning shared and meaningful text labels to data extracted from an e-Commerce web page is a challenging problem. An e-Commerce web page can display a list of data records, each of which can contain a combination of data items (e.g. product name and price) and explicit labels, which describe some of these data items. Recent advances in extraction techniques have made it much easier to precisely extract individual data items and labels from a web page; however, there are two open problems: 1. assigning an explicit label to a data item, and 2. determining labels for the remaining data items. Furthermore, improvements in the availability and coverage of vocabularies, especially in the context of e-Commerce web sites, mean that we now have access to a bank of relevant, meaningful and shared labels which can be assigned to extracted data items. However, there is a need for a technique which takes as input a set of extracted data items and automatically assigns to them the most relevant and meaningful labels from a shared vocabulary. We observe that the Information Extraction (IE) community has developed a great number of techniques which solve problems similar to our own. In this work-in-progress paper we propose to theoretically and experimentally evaluate different IE techniques to ascertain which is most suitable for this problem.
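
The paper proposes evaluating IE techniques rather than prescribing one; as a naive baseline for the label-assignment step, the sketch below maps extracted items onto a small shared vocabulary using string similarity for explicit labels and simple value patterns otherwise. The vocabulary, patterns and defaults are illustrative assumptions.

```python
# Naive baseline for assigning shared vocabulary labels to extracted data items:
# explicit labels are matched to the vocabulary by string similarity, and
# unlabeled items fall back to simple value patterns. Purely illustrative; the
# paper proposes evaluating proper IE techniques for this step.
import re
from difflib import get_close_matches

VOCABULARY = ["product_name", "price", "availability", "rating"]  # assumed terms

PATTERNS = {
    "price":        re.compile(r"^[£$€]\s?\d+(\.\d{2})?$"),
    "rating":       re.compile(r"^\d(\.\d)? out of 5$"),
    "availability": re.compile(r"in stock|out of stock", re.IGNORECASE),
}

def assign_label(item_value, explicit_label=None):
    if explicit_label:
        # Problem 1: map the page's own label text onto the shared vocabulary.
        match = get_close_matches(explicit_label.lower().replace(" ", "_"),
                                  VOCABULARY, n=1, cutoff=0.6)
        if match:
            return match[0]
    # Problem 2: no usable explicit label, so guess from the value itself.
    for label, pattern in PATTERNS.items():
        if pattern.search(item_value):
            return label
    return "product_name"   # default bucket for free-text items

# assign_label("£19.99") -> "price"; assign_label("Acme Widget", "Product") -> "product_name"
```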

Relevance:

30.00%

Publisher:

Abstract:

Master's dissertation in Business Management, Universidade do Algarve, 2006.

Relevance:

30.00%

Publisher:

Abstract:

Increasingly, users' experience in virtual environments deserves new research. The continuing evolution of technology and the modernisation of how we communicate, receive information, and make purchases in new online environments constitute a new research space. The online flow experience, described as a state in which the user feels cognitively efficient, motivated, and happy, appears to support users' navigation of a site, maximising the commercial effectiveness of a product presented on that site. We believe that, in the virtual world of the internet, users' flow experience can be enhanced through better human-computer interaction on the sites they browse, offering users more engaging virtual experiences with a product. This research aims to determine in which site model, content versus context, visitors experience higher levels of online flow, with implications for their consumption experience, acceptance of the product itself, virtual experience, and behavioural intention to use the product. Individual user characteristics, such as innovativeness in technology readiness, were also studied. The technological product used was Google Glass. Students of both genders took part in a laboratory setting, in a two-condition experimental design (content site versus context site), and were asked to complete a questionnaire after browsing one of these sites while immersed in a virtual environment. The results show that on the context site participants experienced higher levels of online flow, more pleasant feelings while browsing, and greater perceived usefulness of the product; they evaluated Google Glass more positively, showed a somewhat stronger attitude towards using the product, and browsed this site for longer than the content site. The results also revealed an interaction effect between innovativeness in technology readiness and site type on intention to use the product, with applications for online marketing and advertising.

Relevance:

30.00%

Publisher:

Abstract:

The aim of this quasi-experimental research study was to investigate whether guided small-group discussions that involved explaining, analysing or justifying design, and that followed a modeling session from the teacher, could improve students' creativity in web design. The convenience sample comprised 37 third-year students of the "Publication Design and Hypermedia Technology" program at John Abbott College in Sainte-Anne-de-Bellevue, Quebec, who had enrolled in the Web Design course offered in the Fall semester of 2011. The primary instrument of this study was a set of two assignments for the course. A traditional teaching method was used during the first assignment, and a small-group teaching strategy was implemented during the second. Another instrument used in this research was a questionnaire on willingness to participate in teamwork. The last instrument of this study was a questionnaire on the types of intelligences that students possessed. It is hoped that the knowledge gathered from the study will add to what is known about group-work activities, and about critiquing in particular.