903 results for web sites


Relevance:

30.00%

Publisher:

Abstract:

Residue depth accurately measures burial and parameterizes the local protein environment. Depth is the distance of any atom/residue to the closest bulk water. We consider the non-bulk waters to occupy cavities, whose volumes are determined using a Voronoi procedure. Our estimation of cavity sizes is statistically superior to estimates made by CASTp and VOIDOO, and on par with McVol, over a data set of 40 cavities. Our calculated cavity volumes correlated best with the experimentally determined destabilization of 34 mutants from five proteins. Some of the cavities identified are capable of binding small-molecule ligands. In this study, we have enhanced our depth-based predictions of binding sites by including evolutionary information. We have demonstrated that on a database (LigASite) of ~200 proteins, we perform on par with ConCavity and better than MetaPocket 2.0. Our predictions, while less sensitive, are more specific and precise. Finally, we use depth (and other features) to predict the pKa values of GLU, ASP, LYS and HIS residues. Our results produce an average error of less than 1 pH unit over 60 predictions. Our simple empirical method is statistically on par with two other methods, superior to three, and inferior to only one. The DEPTH server (http://mspc.bii.a-star.edu.sg/depth/) is an ideal tool for rapid yet accurate structural analyses of protein structures.
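As a rough illustration of the depth measure defined above, the sketch below (written for this summary, not taken from the DEPTH server) computes a residue's depth as the smallest distance from its atoms to any bulk-water oxygen; the coordinates are invented toy values.

```python
# Minimal sketch of the residue-depth idea: depth = distance from a residue's
# atoms to the nearest bulk-water molecule. Coordinates are toy values; a real
# calculation would take them from a solvated structure after cavity (non-bulk)
# waters have been excluded.
import numpy as np

def residue_depth(residue_atoms: np.ndarray, bulk_waters: np.ndarray) -> float:
    """Depth of a residue: the smallest atom-to-bulk-water distance (angstroms)."""
    diffs = residue_atoms[:, None, :] - bulk_waters[None, :, :]   # all atom-water pairs
    return float(np.linalg.norm(diffs, axis=-1).min())

atoms = np.array([[1.0, 2.0, 3.0], [1.5, 2.1, 2.8], [0.9, 1.7, 3.4]])   # residue atoms
waters = np.array([[5.0, 5.0, 5.0], [4.0, 6.0, 2.0], [7.0, 1.0, 3.0]])  # bulk-water oxygens
print(f"residue depth = {residue_depth(atoms, waters):.2f} A")
```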

Relevance:

30.00%

Publisher:

Abstract:

This research aims to discuss the reading models underlying the work proposed on sites for teaching French as a Foreign Language (FLE). To understand how reading models are presented in these contexts, we take the socio-interactional conception of language as our theoretical starting point. To that end, we contextualize the need for constant reflection on the FLE teaching/learning process. We then present the motivation for developing the research and briefly outline our methodological path. We highlight the literature review, presenting the reading models and the strategies involved in this activity in the virtual environment. The first stage of our research was exploratory in nature, because we had no prior knowledge of the universe of sites devoted to FLE teaching. The research is also documentary, since we treat the sites themselves as documents. We opted for a descriptive approach because it is from the description, based on the analysis criteria, of the material taken from the sites in our corpus that we answer and confirm our hypotheses. Our method of analysis is qualitative, since we seek to interpret, based on our observations of the documents initially selected. After establishing the criteria, we move on to the discussion and analysis of the data and then offer some guidance to teachers who wish to use the material made available by the sites analysed. In the final chapter, we offer considerations about the research, present the results of the analyses, make explicit the importance of this work for building knowledge about reading in the virtual environment and, finally, recommend further studies, in view of what we found, so that the teaching of reading in a foreign language contributes to the formation of autonomous readers.

Relevance:

30.00%

Publisher:

Abstract:

User collaboration on journalistic websites is a growing phenomenon. Increasingly, technological evolution opens space for greater user participation in the process of constructing the news narrative. In this context, a design perspective on the collaborative models of journalistic websites provides input for understanding this phenomenon and for examining in depth each of the stages that make up the collaborative process. Accordingly, this dissertation presents a theoretical and practical analysis of these different stages, as well as of the design solutions applicable to collaborative models, in order to establish concepts and guidelines for building models that optimize the use of user-submitted content and its relationship with the editorial content of news websites.

Relevance:

30.00%

Publisher:

Abstract:

This dissertation presents a study whose goal was to find out how children relate to the Internet and, more specifically, to websites. With five children living in the same residential complex (Vila Residencial) as interlocutors, the research, which took place in that space, was grounded in questions of everyday life in order to investigate the uses children make of the sites they access. The challenges of doing research in private spaces, where issues such as friendship, authority and research methodology came to the fore, were present throughout the process, from fieldwork to the writing of the text. A major question running through the methodological discussion is how, when researching through games, the challenge arises of reconciling the roles of researcher and player. The reflections on building a research methodology for private spaces drew on authors such as Nilda Alves, Mikhail Bakhtin, Marília Amorim, Angela Borba and Fabiana Marcello, among others. Questions of everyday life were approached mainly in dialogue with Michel de Certeau. The more specific reflections on the Internet arose from what emerged in the field, with the children, and drew on, among others, André Lemos, Edméa Santos, Lucia Santaella and Marco Silva.

Relevance:

30.00%

Publisher:

Abstract:

Planktonic microbial community structure and the classical food web were investigated in the large, shallow, eutrophic Lake Taihu (2,338 km², mean depth 1.9 m), located in subtropical southeast China. The water column of the lake was sampled biweekly at two sites located 22 km apart over a period of twelve months. Site 1 is subject to heavy eutrophication, while Site 2 is governed by wind-driven sediment resuspension. Within-lake comparison indicates that phosphorus enrichment resulted in increased abundance of microbial components. However, the coupling between total phosphorus and the abundance of microbial components differed between the two sites: much stronger coupling was observed at Site 1 than at Site 2. The weak coupling at Site 2 was mainly caused by strong sediment resuspension, which limited growth of phytoplankton and, consequently, growth of bacterioplankton and other microbial components. High percentages of attached bacteria, which were strongly correlated with the biomass of phytoplankton, especially Microcystis spp., were found at Site 1 during summer and early autumn, but no such correlation was observed at Site 2. This potentially leads to differences in carbon flow through the microbial food web at different locations. Overall, significant heterogeneity of microbial food web structure between the two sites was observed. Site-specific differences in nutrient enrichment (i.e., nitrogen and phosphorus) and sediment resuspension were identified as the driving forces of the observed intra-habitat differences in food web structure.
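The "coupling" contrasted between the two sites is essentially a site-by-site correlation between nutrient levels and microbial abundance. The sketch below is a hedged illustration with invented numbers (the study's biweekly measurements are not reproduced here) of how such a per-site Pearson correlation could be computed.

```python
# Illustrative only: per-site Pearson correlation between total phosphorus (TP)
# and bacterial abundance. The arrays are synthetic stand-ins for 26 biweekly samples.
import numpy as np

rng = np.random.default_rng(0)
tp_site1 = rng.uniform(0.1, 0.4, 26)                          # TP, mg/L
bact_site1 = 5e6 + 2e7 * tp_site1 + rng.normal(0, 5e5, 26)    # strong TP coupling
tp_site2 = rng.uniform(0.1, 0.4, 26)
bact_site2 = 8e6 + rng.normal(0, 2e6, 26)                     # weak coupling (resuspension-driven)

for name, tp, bact in [("Site 1", tp_site1, bact_site1), ("Site 2", tp_site2, bact_site2)]:
    r = np.corrcoef(tp, bact)[0, 1]
    print(f"{name}: Pearson r(TP, bacteria) = {r:.2f}")
```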

Relevance:

30.00%

Publisher:

Abstract:

This paper describes the use of Internet/Intranet technology, dynamic data publishing implemented with ASP, and remote design and manufacturing technology in a distributed network environment. A web-based market and customer management system supporting the remote design and manufacture of robots was designed, and the design method for a database publishing system based on the Browser/Server architecture is discussed.
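The abstract describes an ASP-based Browser/Server system; the original code is not available here, but the following Python sketch (standard library only, with hypothetical table and column names) illustrates the same Browser/Server database-publishing pattern: the browser issues a request, and the server queries a database and returns the result as HTML.

```python
# Analogous sketch of Browser/Server dynamic data publishing (not the paper's ASP code).
# Table and column names ("customers", "name", "order_status") are hypothetical.
import sqlite3
from http.server import BaseHTTPRequestHandler, HTTPServer

DB_PATH = "customers.db"  # hypothetical customer database

class CustomerHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        conn = sqlite3.connect(DB_PATH)
        rows = conn.execute("SELECT name, order_status FROM customers").fetchall()
        conn.close()
        body = "<html><body><h1>Customer orders</h1><ul>"
        body += "".join(f"<li>{name}: {status}</li>" for name, status in rows)
        body += "</ul></body></html>"
        data = body.encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), CustomerHandler).serve_forever()
```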

Relevance:

30.00%

Publisher:

Abstract:

Recently the notion of self-similarity has been shown to apply to wide-area and local-area network traffic. In this paper we examine the mechanisms that give rise to self-similar network traffic. We present an explanation for traffic self-similarity by using a particular subset of wide area traffic: traffic due to the World Wide Web (WWW). Using an extensive set of traces of actual user executions of NCSA Mosaic, reflecting over half a million requests for WWW documents, we show evidence that WWW traffic is self-similar. Then we show that the self-similarity in such traffic can be explained based on the underlying distributions of WWW document sizes, the effects of caching and user preference in file transfer, the effect of user "think time", and the superimposition of many such transfers in a local area network. To do this we rely on empirically measured distributions both from our traces and from data independently collected at over thirty WWW sites.
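The mechanism the paper describes (many superimposed transfers with heavy-tailed sizes producing self-similar aggregate traffic) can be illustrated with a toy simulation. The sketch below is not based on the paper's Mosaic traces; it generates synthetic ON/OFF traffic with Pareto-distributed transfer lengths and prints a simple variance-time table, where slowly decaying variance under aggregation is the signature of self-similarity.

```python
# Toy illustration of self-similarity arising from heavy-tailed transfer sizes.
import numpy as np

rng = np.random.default_rng(1)
n_slots, n_sources = 2**16, 50
traffic = np.zeros(n_slots)
for _ in range(n_sources):                       # superimpose many ON/OFF sources
    t = 0
    while t < n_slots:
        on = int(rng.pareto(1.2)) + 1            # heavy-tailed transfer length (Pareto)
        off = int(rng.exponential(20.0)) + 1     # user "think time" between transfers
        traffic[t:t + on] += 1.0
        t += on + off

# Variance-time check: for short-range-dependent traffic the variance of the
# m-aggregated series falls roughly like 1/m; a much slower decay suggests
# long-range dependence (self-similarity).
for m in (1, 4, 16, 64, 256):
    agg = traffic[: (n_slots // m) * m].reshape(-1, m).mean(axis=1)
    print(f"m = {m:4d}   variance = {agg.var():.4f}")
```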

Relevance:

30.00%

Publisher:

Abstract:

The exploding demand for services like the World Wide Web reflects the potential presented by globally distributed information systems. The number of WWW servers world-wide has doubled every 3 to 5 months since 1993, outstripping even the growth of the Internet. At each of these self-managed sites, the Common Gateway Interface (CGI) and Hypertext Transfer Protocol (HTTP) already constitute a rudimentary basis for contributing local resources to remote collaborations. However, the Web has serious deficiencies that make it unsuited for use as a true medium for metacomputing --- the process of bringing hardware, software, and expertise from many geographically dispersed sources to bear on large-scale problems. These deficiencies are, paradoxically, the direct result of the very simple design principles that enabled its exponential growth. There are many symptoms of the problems exhibited by the Web: disk and network resources are consumed extravagantly; information search and discovery are difficult; protocols are aimed at data movement rather than task migration, and ignore the potential for distributing computation. However, all of these can be seen as aspects of a single problem: as a distributed system for metacomputing, the Web offers unpredictable performance and unreliable results. The goal of our project is to use the Web as a medium (within either the global Internet or an enterprise intranet) for metacomputing in a reliable way with performance guarantees. We attack this problem on four levels: (1) Resource Management Services: Globally distributed computing allows novel approaches to the old problems of performance guarantees and reliability. Our first set of ideas involves setting up a family of real-time resource management models organized by the Web Computing Framework, with a standard Resource Management Interface (RMI), a Resource Registry, a Task Registry, and resource management protocols that allow resource needs and availability information to be collected and disseminated, so that a family of algorithms with varying computational precision and accuracy of representation can be chosen to meet real-time and reliability constraints. (2) Middleware Services: Complementary to techniques for allocating and scheduling available resources to serve application needs under real-time and reliability constraints, the second set of ideas aims at reducing communication latency, traffic congestion, server workload, and so on. We develop customizable middleware services that exploit application characteristics in traffic analysis to drive new server/browser design strategies (e.g., exploiting the self-similarity of Web traffic), derive document access patterns via multiserver cooperation, and use them in speculative prefetching, document caching, and aggressive replication to reduce server load and bandwidth requirements. (3) Communication Infrastructure: To achieve any guarantee of quality of service or performance, one must reach the network layer, which can provide the basic guarantees of bandwidth, latency, and reliability; the third area is therefore a set of new techniques in network service and protocol design. (4) Object-Oriented Web Computing Framework: A useful resource management system must deal with job priority, fault tolerance, quality of service, complex resources such as ATM channels, probabilistic models, etc., and models must be tailored to represent the best tradeoff for a particular setting.
This requires a family of models, organized within an object-oriented framework, because no one-size-fits-all approach is appropriate. This presents a software engineering challenge requiring the integration of solutions at all levels: algorithms, models, protocols, and profiling and monitoring tools. The framework captures the abstract class interfaces of the collection of cooperating components, but allows the concretization of each component to be driven by the requirements of a specific approach and environment.
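Item (4) sketches an object-oriented framework of abstract interfaces with interchangeable concrete models. The Python sketch below is a hedged illustration of that idea only; the class and method names (beyond the Resource Registry / Task Registry terms in the abstract) are assumptions, not the project's actual API.

```python
# Hedged sketch of an object-oriented resource-management framework: an abstract
# interface plus registries, with concrete scheduling models plugged in underneath.
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    capacity: float

@dataclass
class Task:
    name: str
    demand: float
    deadline: float  # seconds; stands in for a real-time constraint

class ResourceManagementInterface(ABC):
    """Abstract RMI: concrete models trade precision for real-time guarantees."""
    def __init__(self) -> None:
        self.resource_registry: list[Resource] = []
        self.task_registry: list[Task] = []

    def register_resource(self, r: Resource) -> None:
        self.resource_registry.append(r)

    def register_task(self, t: Task) -> None:
        self.task_registry.append(t)

    @abstractmethod
    def schedule(self) -> dict[str, str]:
        """Map task names to resource names under the model's constraints."""

class GreedyScheduler(ResourceManagementInterface):
    """One concrete model: earliest-deadline tasks claim the largest free resources."""
    def schedule(self) -> dict[str, str]:
        plan = {}
        free = sorted(self.resource_registry, key=lambda r: -r.capacity)
        for task in sorted(self.task_registry, key=lambda t: t.deadline):
            for res in free:
                if res.capacity >= task.demand:
                    plan[task.name] = res.name
                    res.capacity -= task.demand
                    break
        return plan
```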

Relevance:

30.00%

Publisher:

Abstract:

OBJECTIVE: The Veterans Health Administration has developed My HealtheVet (MHV), a Web-based portal that links veterans to their care in the Veterans Affairs (VA) system. The objective of this study was to measure diabetic veterans' access to and use of the Internet, and their interest in using MHV to help manage their diabetes. MATERIALS AND METHODS: Cross-sectional mailed survey of 201 patients with type 2 diabetes and hemoglobin A1c > 8.0% receiving primary care at any of five primary care clinic sites affiliated with a VA tertiary care facility. Main measures included Internet usage, access, and attitudes; computer skills; interest in using the Internet; awareness of and attitudes toward MHV; demographics; and socioeconomic status. RESULTS: A majority of respondents reported having access to the Internet at home. Nearly half of all respondents had searched online for information about diabetes, including some who did not have home Internet access. More than a third obtained "some" or "a lot" of their health-related information online. Forty-one percent reported being "very interested" in using MHV to help track their home blood glucose readings, a third of whom did not have home Internet access. Factors associated with being "very interested" were as follows: having access to the Internet at home (p < 0.001), "a lot/some" trust in the Internet as a source of health information (p = 0.002), lower age (p = 0.03), and some college education (p = 0.04). Neither race (p = 0.44) nor income (p = 0.25) was significantly associated with interest in MHV. CONCLUSIONS: This study found that a diverse sample of older VA patients with sub-optimally controlled diabetes had a level of familiarity with and access to the Internet comparable to that of an age-matched national sample. In addition, there was a high degree of interest in using the Internet to help manage their diabetes.

Relevance:

30.00%

Publisher:

Abstract:

When mortality is high, animals run a risk if they wait to accumulate resources for improved reproduction, so they may trade off the timing of reproduction against the number and size of offspring. Animals may attempt to improve food acquisition by relocation, even in 'sit and wait' predators. We examine these factors in an isolated population of the orb-web spider Zygiella x-notata. The population was monitored for 200 days, from the first egg laying until all adults had died. Large females produced their first clutch earlier than did small females, and there was a positive correlation between female size and the number and size of eggs produced. Many females, presumably without eggs, abandoned their web site and relocated their web position; these females are presumed to have been without eggs because female Zygiella typically guard their eggs. In total, c. 25% of females reproduced, but those that relocated were less likely to do so, and if they did, they produced the clutch at a later date than those that remained. When the date of lay was controlled for, there was no effect of relocation on egg number, but relocated females produced smaller eggs. The data are consistent with the idea that females in resource-poor sites are more likely to relocate. Relocation seems to be a gamble on finding a more productive site, but one that yields only a late clutch of small eggs, and few females achieve even that.

Relevance:

30.00%

Publisher:

Abstract:

The rate of species loss is increasing on a global scale, and predators are most at risk from human-induced extinction. The effects of losing predators are difficult to predict, even with experimental single-species removals, because different combinations of species interact in unpredictable ways. We tested the effects of the loss of groups of common predators on herbivore and algal assemblages in a model benthic marine system. The predator groups were fish, shrimp and crabs. Each group was represented by at least two characteristic species based on data collected at local field sites. We examined the effects of the loss of predators while controlling for the loss of predator biomass. The identity, not the number, of predator groups affected herbivore abundance and assemblage structure. Removing fish led to a large increase in the abundance of dominant herbivores, such as ampithoids and caprellids. Predator identity also affected algal assemblage structure. It did not, however, affect total algal mass. Removing fish led to an increase in the final biomass of the least common taxa (red algae) and reduced the mass of the dominant taxa (brown algae). This compensatory shift in the algal assemblage appeared to facilitate the maintenance of a constant total algal biomass. In the absence of fish, shrimp at higher than ambient densities had a similar effect on herbivore abundance, showing that other groups could partially compensate for the loss of dominant predators. Crabs had no effect on herbivore or algal populations, possibly because they were not at carrying capacity in our experimental system. These findings show that, contrary to the assumptions of many food web models, predators cannot be classified into a single functional group: their role in food webs depends on their identity, their density, and the carrying capacities of 'real' systems.

Relevance:

30.00%

Publisher:

Abstract:

REMA is an interactive web-based program which predicts endonuclease cut sites in DNA sequences. It analyses multiple sequences simultaneously and predicts the number and size of fragments, as well as providing restriction maps. Users can select single or paired combinations of all commercially available enzymes. Additionally, REMA permits prediction of multiple-sequence terminal fragment sizes and suggests suitable restriction enzymes for maximally discriminatory results. REMA is an easy-to-use, web-based program that will have wide application in molecular biology research. Availability: REMA is written in Perl and is freely available for non-commercial use. Detailed information on installation can be obtained from Jan Szubert (jan.szubert@gmail.com), and the web-based application is accessible on the internet at the URL http://www.macaulay.ac.uk/rema. Contact: b.singh@macaulay.ac.uk. (C) 2007 Elsevier B.V. All rights reserved.
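REMA itself is a Perl web application; the short Python sketch below is only an illustration of the core computation it performs: locating an enzyme's recognition sites in a sequence and reporting the predicted fragment sizes for a linear digest. The EcoRI recognition site and cut offset are standard; the sequence is invented.

```python
# Illustrative restriction digest (not REMA's code): find recognition sites
# for one enzyme and compute fragment sizes for a linear sequence.
def digest(seq: str, site: str = "GAATTC", cut_offset: int = 1):
    """Return cut positions and fragment lengths; EcoRI cuts G^AATTC (offset 1)."""
    seq = seq.upper()
    cuts = []
    pos = seq.find(site)
    while pos != -1:
        cuts.append(pos + cut_offset)
        pos = seq.find(site, pos + 1)
    bounds = [0] + cuts + [len(seq)]
    fragments = [bounds[i + 1] - bounds[i] for i in range(len(bounds) - 1)]
    return cuts, fragments

dna = "TTGAATTCAAACCCGGGAAGAATTCTT"
cuts, frags = digest(dna)
print("cut positions:", cuts)    # [3, 20]
print("fragment sizes:", frags)  # [3, 17, 7]
```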

Relevance:

30.00%

Publisher:

Abstract:

The purpose of this paper is to examine website adoption and its resultant effects on credit union performance in Ireland over the period 2002 to 2010. While there was a steady increase in web adoption over the period, a sizeable proportion (53%) of credit unions still did not have a web-based facility in 2010. To gauge web functionality, the researchers accessed all websites in 2010/2011, and most sites proved to be informational, with limited transactional options. Panel data techniques are then used to capture the dynamic nature of website diffusion and to investigate the effect of website adoption on cost and performance. The empirical analysis reveals that credit unions with web-based functionality have a reduced spread between the loan and pay-out rate, primarily driven by reduced loan rates. This reduced spread, although small, is found to both persist and increase over time.
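As a hedged sketch of the kind of panel estimation described above, the snippet below fits a two-way fixed-effects regression of the loan/pay-out spread on a website-adoption dummy. The variable names and the toy data are assumptions for illustration, not the authors' dataset or exact specification.

```python
# Toy two-way fixed-effects panel regression: spread on a website dummy,
# with credit-union and year fixed effects. Data and names are illustrative.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "cu_id":   [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "year":    [2008, 2009, 2010] * 3,
    "website": [0, 1, 1, 0, 0, 1, 1, 1, 1],
    "spread":  [5.1, 4.8, 4.6, 5.3, 5.2, 4.9, 4.5, 4.4, 4.3],  # loan minus pay-out rate, %
})

# C(cu_id) and C(year) absorb credit-union-specific and year-specific effects.
model = smf.ols("spread ~ website + C(cu_id) + C(year)", data=df).fit()
print(model.params["website"])   # estimated effect of web adoption on the spread
```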

Relevance:

30.00%

Publisher:

Abstract:

User experience in virtual environments increasingly calls for new research. The ongoing evolution of technology, the modernization of the way we communicate and receive information, and the way we now make purchases in new online environments constitute a new research space. The online flow experience, described as a state in which users feel cognitively efficient, motivated and happy, appears to contribute to users' navigation of a website, maximizing the commercial effectiveness of a product presented on that site. We believe that, in the virtual world of the Internet, users' flow experience can be enhanced through better human-computer interaction on the sites they browse, offering users more engaging virtual experiences with a product. This research aims to determine in which website model, content versus context, visitors experience higher levels of online flow, with implications for their consumption experience, acceptance of the product itself, virtual experience and behavioural intention to use the product. Individual user characteristics, such as innovativeness in technology readiness, are also examined. The technological product used was Google Glass. Students of both genders took part in a laboratory study with a two-condition experimental design (content site versus context site) and were asked to answer a questionnaire after browsing one of these sites, always immersed in a virtual environment. The results show that on the context site participants experienced higher levels of online flow and more pleasant feelings while browsing, perceived the product as more useful, evaluated Google Glass more positively, showed a tendency toward greater intention to use the product, and browsed the site for longer than those on the content site. The results also revealed an interaction effect between innovativeness in technology readiness and site type on the intention to use the product, with applications in online marketing and advertising.

Relevance:

30.00%

Publisher:

Abstract:

The aim of this quasi-experimental research study was to investigate whether guided small-group discussions that involved explaining, analysing or justifying design, and that followed a modeling session from the teacher, could improve students' creativity in web design. The convenience sample comprised 37 third-year students of the "Publication Design and Hypermedia Technology" program at John Abbott College in Sainte-Anne-de-Bellevue, Quebec, who had enrolled in the Web Design course offered in the Fall semester of 2011. The primary instrument of this study was a set of two assignments for the course. A traditional teaching method was used during the first assignment, and a small-group teaching strategy was implemented during the second. Another instrument used in this research was a questionnaire on willingness to participate in teamwork. The last instrument of this study was a questionnaire on the types of intelligence that students possessed. It is hoped that the knowledge gathered from the study will add to the information about group-work activities and critiquing in particular.