975 results for Sites Web


Relevance:

30.00%

Publisher:

Abstract:

Depth measures the extent of atom/residue burial within a protein. It correlates with properties such as protein stability, hydrogen exchange rate, protein-protein interaction hot spots, post-translational modification sites and sequence variability. Our server, DEPTH, accurately computes depth and solvent-accessible surface area (SASA) values. We show that depth can be used to predict small molecule ligand binding cavities in proteins. Often, some of the residues lining a ligand binding cavity are both deep and solvent exposed. Using the depth-SASA pair values for a residue, its likelihood to form part of a small molecule binding cavity is estimated. The parameters of the method were calibrated over a training set of 900 high-resolution X-ray crystal structures of single-domain proteins bound to small molecules (molecular weight < 1.5 kDa). The prediction accuracy of DEPTH is comparable to that of other geometry-based prediction methods including LIGSITE, SURFNET and Pocket-Finder (all with Matthews correlation coefficients of ~0.4) over a testing set of 225 single- and multi-chain protein structures. Users have the option of tuning several parameters to detect cavities of different sizes, for example, geometrically flat binding sites. The input to the server is a protein 3D structure in PDB format. Users can tune the values of four parameters associated with the computation of residue depth and the prediction of binding cavities. The computed depths, SASA values and binding cavity predictions are displayed in 2D plots and mapped onto 3D representations of the protein structure using Jmol. Links are provided to download the outputs. Our server is useful for all structural analyses based on residue depth and SASA, such as guiding site-directed mutagenesis experiments and small molecule docking exercises, in the context of protein functional annotation and drug discovery.
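The central quantity here — depth as the distance from an atom to the closest bulk water — can be sketched in a few lines. This is an illustrative toy, assuming bulk-water positions have already been identified; the function and variable names are invented, not the server's actual code:

```python
import numpy as np

def residue_depth(atom_coords, bulk_water_coords):
    """Depth of each atom: distance to the closest bulk-water molecule.

    atom_coords: (N, 3) array of protein atom positions.
    bulk_water_coords: (M, 3) array of bulk solvent water positions.
    """
    # Pairwise distances between every atom and every bulk water;
    # the minimum over waters gives each atom's depth.
    diff = atom_coords[:, None, :] - bulk_water_coords[None, :, :]
    return np.linalg.norm(diff, axis=2).min(axis=1)

# Toy example: two atoms and three waters on the x-axis.
atoms = np.array([[0.0, 0.0, 0.0], [5.0, 0.0, 0.0]])
waters = np.array([[10.0, 0.0, 0.0], [12.0, 0.0, 0.0], [-3.0, 0.0, 0.0]])
print(residue_depth(atoms, waters))  # [3. 5.]
```

A residue's depth is then typically an average over its atoms, and pairing it with SASA yields the two-dimensional feature on which the cavity prediction operates.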

Relevance:

30.00%

Publisher:

Abstract:

A fundamental task in bioinformatics involves the transfer of knowledge from one protein molecule to another by recognizing similarities. Such similarities are obtained at different levels: sequence, whole fold, or important substructures. Comparison of binding sites is important for understanding functional similarities among proteins and also for understanding drug cross-reactivities. Current methods in the literature have their own merits and demerits, warranting the exploration of newer concepts and algorithms, especially for large-scale comparisons and for obtaining accurate residue-wise mappings. Here, we report the development of a new algorithm, PocketAlign, for obtaining structural superpositions of binding sites. The software is available as a web service at http://proline.physics.iisc.ernet.in/pocketalign/. The algorithm encodes shape descriptors in the form of geometric perspectives, supplemented by chemical group classification. The shape descriptor considers several perspectives with each residue as the focus and captures the relative distribution of residues around it in a given site. Residue-wise pairings are computed by comparing the set of perspectives of the first site with that of the second, followed by a greedy approach that incrementally combines residue pairings into a mapping. The mappings in different frames are then evaluated by different metrics encoding the extent of alignment of individual geometric perspectives. Different initial seed alignments are computed, each subsequently extended by detecting consequential atomic alignments in a three-dimensional grid, and the best 500 are stored in a database. Alignments are then ranked, and the top-scoring alignments reported, which are then streamed into PyMOL for visualization and analyses. The method is validated for accuracy and sensitivity and benchmarked against existing methods.
An advantage of PocketAlign, as compared to some of the existing tools for binding site comparison in the literature, is that it explores different schemes for identifying an alignment and thus has a better potential to capture similarities in ligand recognition abilities. By finding a detailed alignment of a pair of sites, PocketAlign provides insights into why two sites are similar and which sets of residues and atoms contribute to the similarity.
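The greedy step described above — incrementally combining residue pairings into a mapping — can be illustrated with a minimal sketch. Assume a precomputed perspective-similarity matrix between the residues of two sites; the matrix and function names are hypothetical, not PocketAlign's actual data structures:

```python
import numpy as np

def greedy_residue_mapping(sim):
    """Greedily pair residues of site A (rows) with residues of
    site B (columns) in order of descending descriptor similarity,
    consuming each residue at most once."""
    sim = sim.astype(float).copy()
    mapping = []
    while sim.size and sim.max() > 0:
        i, j = np.unravel_index(np.argmax(sim), sim.shape)
        mapping.append((int(i), int(j)))
        sim[i, :] = -1.0  # residue i of site A is now used
        sim[:, j] = -1.0  # residue j of site B is now used
    return mapping

# Two residues per site; the diagonal pairs are the most similar.
sim = np.array([[0.9, 0.2],
                [0.3, 0.8]])
print(greedy_residue_mapping(sim))  # [(0, 0), (1, 1)]
```

A real implementation would additionally test each candidate pairing for geometric consistency with the pairings already accepted before committing it to the mapping.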

Relevance:

30.00%

Publisher:

Abstract:

Many web sites incorporate dynamic web pages to deliver customized contents to their users. However, dynamic pages result in increased user response times due to their construction overheads. In this paper, we consider mechanisms for reducing these overheads by utilizing the excess capacity with which web servers are typically provisioned. Specifically, we present a caching technique that integrates fragment caching with anticipatory page pre-generation in order to deliver dynamic pages faster during normal operating situations. A feedback mechanism is used to tune the page pre-generation process to match the current system load. The experimental results from a detailed simulation study of our technique indicate that, given a fixed cache budget, page construction speedups of more than fifty percent can be consistently achieved as compared to a pure fragment caching approach.
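The combination described above — fragment caching plus load-sensitive anticipatory pre-generation — can be sketched as follows. All class, method, and threshold names are illustrative; the paper's actual feedback mechanism is more elaborate:

```python
class PregenCache:
    """Fragment cache with anticipatory page pre-generation.

    Pages are assembled from cached fragments; while measured load is
    below a threshold, predicted-next pages are pre-assembled so that
    a later request skips the construction overhead entirely.
    """
    def __init__(self, load_threshold=0.7):
        self.fragments = {}     # fragment_id -> rendered HTML
        self.pregenerated = {}  # page_id -> fully assembled page
        self.load_threshold = load_threshold

    def render_fragment(self, fid):
        # A cache hit avoids the fragment's construction cost.
        if fid not in self.fragments:
            self.fragments[fid] = f"<div>{fid}</div>"
        return self.fragments[fid]

    def build_page(self, page_id, fragment_ids):
        if page_id in self.pregenerated:
            return self.pregenerated.pop(page_id)  # pre-generated hit
        return "".join(self.render_fragment(f) for f in fragment_ids)

    def maybe_pregenerate(self, current_load, predictions):
        # Feedback rule: pre-generate only while spare capacity exists.
        if current_load < self.load_threshold:
            for page_id, fragment_ids in predictions.items():
                self.pregenerated[page_id] = "".join(
                    self.render_fragment(f) for f in fragment_ids)

cache = PregenCache()
cache.maybe_pregenerate(0.3, {"home": ["header", "news"]})  # low load
page = cache.build_page("home", ["header", "news"])         # served pre-built
```

The feedback loop in the paper tunes how aggressively `maybe_pregenerate` runs against the observed load, so pre-generation consumes only the server's excess capacity.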

Relevance:

30.00%

Publisher:

Abstract:

Residue depth accurately measures burial and parameterizes the local protein environment. Depth is the distance of any atom/residue to the closest bulk water. We consider the non-bulk waters to occupy cavities, whose volumes are determined using a Voronoi procedure. Our estimation of cavity sizes is statistically superior to estimates made by CASTp and VOIDOO, and on par with McVol, over a data set of 40 cavities. Our calculated cavity volumes correlated best with the experimentally determined destabilization of 34 mutants from five proteins. Some of the cavities identified are capable of binding small molecule ligands. In this study, we have enhanced our depth-based predictions of binding sites by including evolutionary information. We have demonstrated that on a database (LigASite) of ~200 proteins, we perform on par with ConCavity and better than MetaPocket 2.0. Our predictions, while less sensitive, are more specific and precise. Finally, we use depth (and other features) to predict the pKa values of GLU, ASP, LYS and HIS residues. Our results produce an average error of less than 1 pH unit over 60 predictions. Our simple empirical method is statistically on par with two other methods, superior to three, and inferior to only one. The DEPTH server (http://mspc.bii.a-star.edu.sg/depth/) is an ideal tool for rapid yet accurate structural analyses of protein structures.
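A depth-based empirical pKa model of the kind described can be illustrated with a toy: shift a residue's model-compound pKa in proportion to its burial. The slope and surface-depth constants below are invented for illustration; the server's calibrated parameters and full feature set are not reproduced here:

```python
# Model-compound (intrinsic) pKa values for the titratable residues
# named in the abstract.
MODEL_PKA = {"ASP": 3.8, "GLU": 4.3, "HIS": 6.1, "LYS": 10.5}

def predict_pka(res_type, depth, slope=0.5, surface_depth=3.0):
    """Toy empirical predictor: the deeper a residue is buried past
    the surface layer, the further its pKa shifts from the model
    value.  Burial penalizes the charged form, so acids (ASP/GLU)
    shift up while bases (HIS/LYS) shift down."""
    burial = max(0.0, depth - surface_depth)
    sign = 1.0 if res_type in ("ASP", "GLU") else -1.0
    return MODEL_PKA[res_type] + sign * slope * burial

print(predict_pka("ASP", 3.0))  # surface residue: 3.8 (unshifted)
print(predict_pka("LYS", 5.0))  # buried lysine: 9.5
```

A calibrated version would fit the slope (and any extra features) against experimentally measured pKa values, which is how an average error below 1 pH unit becomes attainable.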

Relevance:

30.00%

Publisher:

Abstract:

This study aims to discuss the reading models underlying the work proposed on websites for teaching French as a Foreign Language (FLE). To understand how reading models are presented in these contexts, we take the socio-interactional conception of language as our theoretical starting point. To that end, we contextualize the need for constant reflection on the FLE teaching/learning process. We then present the motivation for the research and briefly outline our methodological path. We highlight the literature review, presenting the reading models and the strategies involved in this activity in a virtual environment. The first stage of our research was exploratory in nature, since we had no prior knowledge of the universe of websites devoted to FLE teaching. The research is also documentary in nature, since we treat the websites as documents. We opted for a descriptive approach because it is through the description, based on our analysis criteria, of the material drawn from the sites in our corpus that we answer and confirm our hypotheses. Our method of analysis is qualitative, as we seek to interpret on the basis of our initial observations of the selected documents. After establishing the criteria, we proceed to the discussion and analysis of the data and then offer some guidance to teachers who wish to use the material made available by the sites analyzed. In the final chapter, we reflect on the research, present the results of the analyses, explain the study's contribution to knowledge about reading in a virtual environment, and, finally, in light of our findings, recommend further studies so that the teaching of reading in a foreign language can contribute to the formation of autonomous readers.

Relevance:

30.00%

Publisher:

Abstract:

User collaboration on news websites is a growing phenomenon. Technological evolution increasingly opens space for greater user participation in the construction of the news narrative. In this context, a design perspective on the collaborative models of news websites provides a basis for understanding this phenomenon and for examining in depth each of the stages that make up the collaborative process. This dissertation therefore presents a theoretical and practical analysis of these different stages, as well as of the design solutions applicable to collaborative models, in order to establish concepts and guidelines for building models that optimize the use of user-submitted content and its relationship with the editorial content of news sites.

Relevance:

30.00%

Publisher:

Abstract:

This dissertation presents a study whose goal was to discover how children relate to the Internet and, more specifically, to websites. With five children living in the same residential complex as interlocutors, the research, carried out in that setting, drew on questions of everyday life to investigate the uses children make of the sites they access. The challenges of researching in private spaces, where issues such as friendship, authority, and research methodology came to the fore, were present throughout the process, from fieldwork to the writing of the text. A major issue running through the methodological discussion is how, when researching through games, the challenge arises of reconciling the roles of researcher and player. The reflections on constructing a research methodology for private spaces drew on authors such as Nilda Alves, Mikhail Bakhtin, Marília Amorim, Angela Borba, and Fabiana Marcello, among others. The questions of everyday life were framed mainly through dialogue with Michel de Certeau. The more specific reflections on the Internet emerged from the fieldwork with the children and drew on, among others, André Lemos, Edméa Santos, Lucia Santaella, and Marco Silva.

Relevance:

30.00%

Publisher:

Abstract:

Planktonic microbial community structure and the classical food web were investigated in the large shallow eutrophic Lake Taihu (2338 km², mean depth 1.9 m) located in subtropical Southeast China. The water column of the lake was sampled biweekly at two sites located 22 km apart over a period of twelve months. Site 1 is under a regime of heavy eutrophication while Site 2 is governed by wind-driven sediment resuspension. Within-lake comparison indicates that phosphorus enrichment resulted in increased abundance of microbial components. However, the coupling between total phosphorus and the abundance of microbial components differed between the two sites: much stronger coupling was observed at Site 1 than at Site 2. The weak coupling at Site 2 was mainly caused by strong sediment resuspension, which limited growth of phytoplankton and, consequently, growth of bacterioplankton and other microbial components. High percentages of attached bacteria, which were strongly correlated with the biomass of phytoplankton, especially Microcystis spp., were found at Site 1 during summer and early autumn, but no such correlation was observed at Site 2. This potentially leads to differences in carbon flow through the microbial food web at different locations. Overall, significant heterogeneity of microbial food web structure between the two sites was observed. Site-specific differences in nutrient enrichment (i.e. nitrogen and phosphorus) and sediment resuspension were identified as the driving forces of the observed intra-habitat differences in food web structure.

Relevance:

30.00%

Publisher:

Abstract:

This paper describes the use of Internet/Intranet technology and ASP-based dynamic data publishing, together with techniques for remote design and manufacturing in a distributed network environment. A web-based market and customer management system supporting remote robot design and manufacturing is designed, and the design method of a database publishing system with a Browser/Server architecture is discussed.

Relevance:

30.00%

Publisher:

Abstract:

Recently the notion of self-similarity has been shown to apply to wide-area and local-area network traffic. In this paper we examine the mechanisms that give rise to self-similar network traffic. We present an explanation for traffic self-similarity by using a particular subset of wide area traffic: traffic due to the World Wide Web (WWW). Using an extensive set of traces of actual user executions of NCSA Mosaic, reflecting over half a million requests for WWW documents, we show evidence that WWW traffic is self-similar. Then we show that the self-similarity in such traffic can be explained based on the underlying distributions of WWW document sizes, the effects of caching and user preference in file transfer, the effect of user "think time", and the superimposition of many such transfers in a local area network. To do this we rely on empirically measured distributions both from our traces and from data independently collected at over thirty WWW sites.
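Self-similarity of a traffic trace is commonly quantified by the Hurst parameter H; the aggregated-variance method used in studies of this kind can be sketched as follows (a generic estimator, not the paper's exact analysis code):

```python
import numpy as np

def hurst_aggregated_variance(series, block_sizes):
    """Aggregated-variance estimate of the Hurst parameter H.

    For a self-similar process, the variance of the m-aggregated
    series of block means decays as m**(2H - 2); H is recovered
    from the slope of a log-log regression.
    """
    logs_m, logs_var = [], []
    for m in block_sizes:
        n = len(series) // m
        blocks = np.asarray(series[:n * m]).reshape(n, m).mean(axis=1)
        logs_m.append(np.log(m))
        logs_var.append(np.log(blocks.var()))
    slope = np.polyfit(logs_m, logs_var, 1)[0]
    return 1.0 + slope / 2.0

# Sanity check: i.i.d. noise has no long-range dependence, so the
# estimate should sit near H = 0.5; genuinely self-similar traffic
# like the WWW traces above yields H well above 0.5.
rng = np.random.default_rng(0)
h = hurst_aggregated_variance(rng.normal(size=100_000), [1, 4, 16, 64, 256])
```

Applying such an estimator at several aggregation levels is what lets a trace of request arrivals be labeled self-similar rather than merely bursty.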

Relevance:

30.00%

Publisher:

Abstract:

The exploding demand for services like the World Wide Web reflects the potential that is presented by globally distributed information systems. The number of WWW servers world-wide has doubled every 3 to 5 months since 1993, outstripping even the growth of the Internet. At each of these self-managed sites, the Common Gateway Interface (CGI) and Hypertext Transfer Protocol (HTTP) already constitute a rudimentary basis for contributing local resources to remote collaborations. However, the Web has serious deficiencies that make it unsuited for use as a true medium for metacomputing --- the process of bringing hardware, software, and expertise from many geographically dispersed sources to bear on large scale problems. These deficiencies are, paradoxically, the direct result of the very simple design principles that enabled its exponential growth. There are many symptoms of the problems exhibited by the Web: disk and network resources are consumed extravagantly; information search and discovery are difficult; protocols are aimed at data movement rather than task migration, and ignore the potential for distributing computation. However, all of these can be seen as aspects of a single problem: as a distributed system for metacomputing, the Web offers unpredictable performance and unreliable results. The goal of our project is to use the Web as a medium (within either the global Internet or an enterprise intranet) for metacomputing in a reliable way with performance guarantees. We attack this problem at four levels: (1) Resource Management Services: Globally distributed computing allows novel approaches to the old problems of performance guarantees and reliability.
Our first set of ideas involves setting up a family of real-time resource management models organized by the Web Computing Framework with a standard Resource Management Interface (RMI), a Resource Registry, a Task Registry, and resource management protocols that allow resource needs and availability information to be collected and disseminated, so that a family of algorithms with varying computational precision and accuracy of representations can be chosen to meet realtime and reliability constraints. (2) Middleware Services: Complementary to techniques for allocating and scheduling available resources to serve application needs under realtime and reliability constraints, the second set of ideas aims at reducing communication latency, traffic congestion, server workload, etc. We develop customizable middleware services that exploit application characteristics in traffic analysis to drive new server/browser design strategies (e.g., exploiting the self-similarity of Web traffic), derive document access patterns via multiserver cooperation, and use them in speculative prefetching, document caching, and aggressive replication to reduce server load and bandwidth requirements. (3) Communication Infrastructure: To achieve any guarantee of quality of service or performance, one must get at the network layer, which can provide the basic guarantees of bandwidth, latency, and reliability. Therefore, the third area is a set of new techniques in network service and protocol designs. (4) Object-Oriented Web Computing Framework: A useful resource management system must deal with job priority, fault-tolerance, quality of service, complex resources such as ATM channels, probabilistic models, etc., and models must be tailored to represent the best tradeoff for a particular setting. This requires a family of models, organized within an object-oriented framework, because no one-size-fits-all approach is appropriate.
This presents a software engineering challenge requiring integration of solutions at all levels: algorithms, models, protocols, and profiling and monitoring tools. The framework captures the abstract class interfaces of the collection of cooperating components, but allows the concretization of each component to be driven by the requirements of a specific approach and environment.
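The object-oriented framework idea in level (4) — abstract interfaces shared by a family of interchangeable resource management models — might look like the following minimal sketch. The class and method names are hypothetical, not the project's actual Resource Management Interface:

```python
from abc import ABC, abstractmethod

class ResourceManagementModel(ABC):
    """Abstract interface shared by every model in the family; each
    concrete model embodies a different guarantee/precision tradeoff."""

    @abstractmethod
    def register_resource(self, resource_id, capacity):
        ...

    @abstractmethod
    def schedule(self, task_cost):
        """Return the id of the resource assigned, or None."""

class BestEffortModel(ResourceManagementModel):
    """One concrete model: first-fit allocation, no realtime guarantee."""
    def __init__(self):
        self.resources = {}

    def register_resource(self, resource_id, capacity):
        self.resources[resource_id] = capacity

    def schedule(self, task_cost):
        for rid, cap in self.resources.items():
            if cap >= task_cost:
                self.resources[rid] -= task_cost
                return rid
        return None  # admission control: reject rather than overload

model = BestEffortModel()
model.register_resource("server-a", 10)
model.register_resource("server-b", 4)
```

Other members of the family — deadline-aware or probabilistic models, say — would implement the same interface, which is exactly what lets the framework swap the concrete model to fit a particular setting.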

Relevance:

30.00%

Publisher:

Abstract:

We present a highly accurate method for classifying web pages based on link percentage, which is the percentage of text characters that are parts of links normalized by the number of all text characters on a web page. K-means clustering is used to create unique thresholds to differentiate index pages and article pages on individual web sites. Index pages contain mostly links to articles and other indices, while article pages contain mostly text. We also present a novel link grouping algorithm using agglomerative hierarchical clustering that groups links in the same spatial neighborhood together while preserving link structure. Grouping allows users with severe disabilities to use a scan-based mechanism to tab through a web page and select items. In experiments, we saw up to a 40-fold reduction in the number of commands needed to click on a link with a scan-based interface, which shows that we can vastly improve the rate of communication for users with disabilities. We used web page classification and link grouping to alter web page display on an accessible web browser that we developed to make a usable browsing interface for users with disabilities. Our classification method consistently outperformed a baseline classifier even when using minimal data to generate article and index clusters, and achieved classification accuracy of 94.0% on web sites with well-formed or slightly malformed HTML, compared with 80.1% accuracy for the baseline classifier.
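The two ingredients above — the link-percentage feature and a per-site 2-means split into article and index clusters — can be sketched minimally. The regex-based parse and the clustering loop are illustrative simplifications, not the paper's implementation:

```python
import re

def link_percentage(html):
    """Fraction of visible text characters that sit inside <a> links
    (a crude regex parse; a real system would use an HTML parser)."""
    link_chars = sum(len(re.sub(r"<[^>]+>", "", m))
                     for m in re.findall(r"<a\b[^>]*>(.*?)</a>",
                                         html, re.S | re.I))
    all_chars = len(re.sub(r"<[^>]+>", "", html))
    return link_chars / all_chars if all_chars else 0.0

def site_threshold(link_pcts, iters=20):
    """One-dimensional 2-means over one site's pages: the midpoint of
    the two final centroids separates article pages (low link
    percentage) from index pages (high link percentage)."""
    lo, hi = min(link_pcts), max(link_pcts)
    for _ in range(iters):
        near_lo = [v for v in link_pcts if abs(v - lo) <= abs(v - hi)]
        near_hi = [v for v in link_pcts if abs(v - lo) > abs(v - hi)]
        lo = sum(near_lo) / len(near_lo) if near_lo else lo
        hi = sum(near_hi) / len(near_hi) if near_hi else hi
    return (lo + hi) / 2.0

pages = [0.05, 0.10, 0.80, 0.90]  # two article pages, two index pages
t = site_threshold(pages)
labels = ["index" if p > t else "article" for p in pages]
```

Because the threshold is recomputed per site, it adapts to each site's house style, which is why a unique threshold per site outperforms a single global cutoff.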

Relevance:

30.00%

Publisher:

Abstract:

OBJECTIVE: The Veterans Health Administration has developed My HealtheVet (MHV), a Web-based portal that links veterans to their care in the Veterans Affairs (VA) system. The objective of this study was to measure diabetic veterans' access to and use of the Internet, and their interest in using MHV to help manage their diabetes. MATERIALS AND METHODS: Cross-sectional mailed survey of 201 patients with type 2 diabetes and hemoglobin A1c > 8.0% receiving primary care at any of five primary care clinic sites affiliated with a VA tertiary care facility. Main measures included Internet usage, access, and attitudes; computer skills; interest in using the Internet; awareness of and attitudes toward MHV; demographics; and socioeconomic status. RESULTS: A majority of respondents reported having access to the Internet at home. Nearly half of all respondents had searched online for information about diabetes, including some who did not have home Internet access. More than a third obtained "some" or "a lot" of their health-related information online. Forty-one percent reported being "very interested" in using MHV to help track their home blood glucose readings, a third of whom did not have home Internet access. Factors associated with being "very interested" were as follows: having access to the Internet at home (p < 0.001), "a lot/some" trust in the Internet as a source of health information (p = 0.002), lower age (p = 0.03), and some college (p = 0.04). Neither race (p = 0.44) nor income (p = 0.25) was significantly associated with interest in MHV. CONCLUSIONS: This study found that a diverse sample of older VA patients with sub-optimally controlled diabetes had a level of familiarity with and access to the Internet comparable to an age-matched national sample. In addition, there was a high degree of interest in using the Internet to help manage their diabetes.

Relevance:

30.00%

Publisher:

Abstract:

When mortality is high, animals run a risk if they wait to accumulate resources for improved reproduction, so they may trade off the timing of reproduction against the number and size of offspring. Animals may attempt to improve food acquisition by relocation, even in 'sit and wait' predators. We examine these factors in an isolated population of the orb-web spider Zygiella x-notata. The population was monitored for 200 days from first egg laying until all adults had died. Large females produced their first clutch earlier than did small females, and there was a positive correlation between female size and the number and size of eggs produced. Many females, presumably without eggs, abandoned their web site and relocated their web position; these females are presumed to have been without eggs because female Zygiella typically guard their eggs. In total, c. 25% of females reproduced, but those that relocated were less likely to do so, and if they did, they produced the clutch at a later date than those that remained. When the date of lay was controlled for, there was no effect of relocation on egg number, but relocated females produced smaller eggs. The data are consistent with the idea that females in resource-poor sites are more likely to relocate. Relocation seems to be a gamble to find a more productive site, but one that achieves only a late clutch of small eggs, and few achieve even that.

Relevance:

30.00%

Publisher:

Abstract:

The rate of species loss is increasing on a global scale, and predators are most at risk from human-induced extinction. The effects of losing predators are difficult to predict, even with experimental single-species removals, because different combinations of species interact in unpredictable ways. We tested the effects of the loss of groups of common predators on herbivore and algal assemblages in a model benthic marine system. The predator groups were fish, shrimp and crabs. Each group was represented by at least two characteristic species based on data collected at local field sites. We examined the effects of the loss of predators while controlling for the loss of predator biomass. The identity, not the number, of predator groups affected herbivore abundance and assemblage structure. Removing fish led to a large increase in the abundance of dominant herbivores, such as ampithoid and caprellid amphipods. Predator identity also affected algal assemblage structure. It did not, however, affect total algal mass. Removing fish led to an increase in the final biomass of the least common taxa (red algae) and reduced the mass of the dominant taxa (brown algae). This compensatory shift in the algal assemblage appeared to facilitate the maintenance of a constant total algal biomass. In the absence of fish, shrimp at higher than ambient densities had a similar effect on herbivore abundance, showing that other groups could partially compensate for the loss of dominant predators. Crabs had no effect on herbivore or algal populations, possibly because they were not at carrying capacity in our experimental system. These findings show that, contrary to the assumptions of many food web models, predators cannot be classified into a single functional group: their role in food webs depends on their identity and density in 'real' systems, and on carrying capacities.