936 results for "Web sites accessible to people with disabilities"


Relevance: 30.00%

Abstract:

A fundamental task in bioinformatics involves a transfer of knowledge from one protein molecule onto another by way of recognizing similarities. Such similarities are obtained at different levels, that of sequence, whole fold, or important substructures. Comparison of binding sites is important to understand functional similarities among the proteins and also to understand drug cross-reactivities. Current methods in the literature have their own merits and demerits, warranting exploration of newer concepts and algorithms, especially for large-scale comparisons and for obtaining accurate residue-wise mappings. Here, we report the development of a new algorithm, PocketAlign, for obtaining structural superpositions of binding sites. The software is available as a web service at http://proline.physics.iisc.ernet.in/pocketalign/. The algorithm encodes shape descriptors in the form of geometric perspectives, supplemented by chemical group classification. The shape descriptor considers several perspectives with each residue as the focus and captures the relative distribution of residues around it in a given site. Residue-wise pairings are computed by comparing the set of perspectives of the first site with that of the second, followed by a greedy approach that incrementally combines residue pairings into a mapping. The mappings in different frames are then evaluated by different metrics encoding the extent of alignment of individual geometric perspectives. Different initial seed alignments are computed, each subsequently extended by detecting consequential atomic alignments in a three-dimensional grid, and the best 500 are stored in a database. Alignments are then ranked, and the top-scoring alignments are reported and streamed into PyMOL for visualization and analyses. The method is validated for accuracy and sensitivity and benchmarked against existing methods. An advantage of PocketAlign, as compared to some of the existing tools available for binding site comparison in the literature, is that it explores different schemes for identifying an alignment and thus has a better potential to capture similarities in ligand recognition abilities. PocketAlign, by finding a detailed alignment of a pair of sites, provides insights as to why two sites are similar and which set of residues and atoms contribute to the similarity.
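
To make the greedy step concrete, the following is a minimal Python sketch (not PocketAlign's code; the pre-computed perspective-comparison scores are assumed as input) of how scored residue pairings can be combined incrementally into a one-to-one mapping between two sites:

```python
# Rough sketch only: greedily combine pre-scored residue pairings into a
# one-to-one mapping between binding sites A and B. The scores are assumed
# to come from comparing the geometric perspectives of the two sites.
def greedy_mapping(pair_scores):
    """pair_scores: dict {(res_a, res_b): similarity}; returns {res_a: res_b}."""
    mapping, used_a, used_b = {}, set(), set()
    # take the best-scoring pairings first, skipping residues already mapped
    for (res_a, res_b), _score in sorted(pair_scores.items(),
                                         key=lambda kv: kv[1], reverse=True):
        if res_a not in used_a and res_b not in used_b:
            mapping[res_a] = res_b
            used_a.add(res_a)
            used_b.add(res_b)
    return mapping

# e.g. greedy_mapping({("A:HIS57", "B:HIS41"): 0.92,
#                      ("A:SER195", "B:CYS145"): 0.88,
#                      ("A:HIS57", "B:CYS145"): 0.40})
```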

Relevance: 30.00%

Abstract:

Many web sites incorporate dynamic web pages to deliver customized contents to their users. However, dynamic pages result in increased user response times due to their construction overheads. In this paper, we consider mechanisms for reducing these overheads by utilizing the excess capacity with which web servers are typically provisioned. Specifically, we present a caching technique that integrates fragment caching with anticipatory page pre-generation in order to deliver dynamic pages faster during normal operating situations. A feedback mechanism is used to tune the page pre-generation process to match the current system load. The experimental results from a detailed simulation study of our technique indicate that, given a fixed cache budget, page construction speedups of more than fifty percent can be consistently achieved as compared to a pure fragment caching approach.
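
As a toy illustration of the mechanism described above (fragment caching combined with anticipatory page pre-generation that a feedback signal throttles under load), the sketch below uses hypothetical names and thresholds and is not the authors' system:

```python
class FragmentCache:
    """Toy sketch: cache page fragments, pre-generate whole pages from them
    during spare capacity, and back off when measured server load rises."""

    def __init__(self, render_fragment, load_probe, max_load=0.7):
        self.render_fragment = render_fragment  # fragment_id -> HTML (assumed)
        self.load_probe = load_probe            # () -> current utilization in [0, 1]
        self.max_load = max_load                # feedback threshold for pre-generation
        self.fragments = {}                     # fragment_id -> cached HTML
        self.pages = {}                         # page_id -> pre-generated full page

    def get_fragment(self, fragment_id):
        if fragment_id not in self.fragments:
            self.fragments[fragment_id] = self.render_fragment(fragment_id)
        return self.fragments[fragment_id]

    def build_page(self, page_id, fragment_ids):
        # serve a pre-generated page if available, else assemble from cached fragments
        if page_id in self.pages:
            return self.pages.pop(page_id)
        return "".join(self.get_fragment(f) for f in fragment_ids)

    def pregenerate(self, anticipated):
        """anticipated: iterable of (page_id, fragment_ids) predicted to be
        requested soon; runs only while measured load stays below the threshold."""
        for page_id, fragment_ids in anticipated:
            if self.load_probe() > self.max_load:  # feedback: stop under high load
                break
            self.pages[page_id] = "".join(self.get_fragment(f) for f in fragment_ids)
```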

Relevance: 30.00%

Abstract:

Residue depth accurately measures burial and parameterizes local protein environment. Depth is the distance of any atom/residue to the closest bulk water. We consider the non-bulk waters to occupy cavities, whose volumes are determined using a Voronoi procedure. Our estimation of cavity sizes is statistically superior to estimates made by CASTp and VOIDOO, and on par with McVol over a data set of 40 cavities. Our calculated cavity volumes correlated best with the experimentally determined destabilization of 34 mutants from five proteins. Some of the cavities identified are capable of binding small molecule ligands. In this study, we have enhanced our depth-based predictions of binding sites by including evolutionary information. We have demonstrated that on a database (LigASite) of ~200 proteins, we perform on par with ConCavity and better than MetaPocket 2.0. Our predictions, while less sensitive, are more specific and precise. Finally, we use depth (and other features) to predict pKa values of GLU, ASP, LYS and HIS residues. Our results produce an average error of just <1 pH unit over 60 predictions. Our simple empirical method is statistically on par with two and superior to three other methods while inferior to only one. The DEPTH server (http://mspc.bii.a-star.edu.sg/depth/) is an ideal tool for rapid yet accurate structural analyses of protein structures.
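
A minimal sketch of the depth definition itself is given below; it assumes the set of bulk-water molecules has already been identified, which is the substantive step in the DEPTH method and is not reproduced here:

```python
import numpy as np

def atom_depths(atom_coords, bulk_water_coords):
    """Depth of each protein atom = distance to the closest bulk water.
    atom_coords: (N, 3) array; bulk_water_coords: (M, 3) array (assumed given).
    A residue's depth can then be taken as the mean over its atoms."""
    atoms = np.asarray(atom_coords, dtype=float)
    waters = np.asarray(bulk_water_coords, dtype=float)
    # pairwise distances, then the minimum over waters for every atom
    dists = np.linalg.norm(atoms[:, None, :] - waters[None, :, :], axis=-1)
    return dists.min(axis=1)
```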

Relevance: 30.00%

Abstract:

Planktonic microbial community structure and the classical food web were investigated in the large shallow eutrophic Lake Taihu (2338 km², mean depth 1.9 m) located in subtropical Southeast China. The water column of the lake was sampled biweekly at two sites located 22 km apart over a period of twelve months. Site 1 is under the regime of heavy eutrophication, while Site 2 is governed by wind-driven sediment resuspension. Within-lake comparison indicates that phosphorus enrichment resulted in increased abundance of microbial components. However, the coupling between total phosphorus and the abundance of microbial components differed between the two sites: much stronger coupling was observed at Site 1 than at Site 2. The weak coupling at Site 2 was mainly caused by strong sediment resuspension, which limited growth of phytoplankton and, consequently, growth of bacterioplankton and other microbial components. High percentages of attached bacteria, which were strongly correlated with the biomass of phytoplankton, especially Microcystis spp., were found at Site 1 during summer and early autumn, but no such correlation was observed at Site 2. This potentially leads to differences in carbon flow through the microbial food web at different locations. Overall, significant heterogeneity of microbial food web structure between the two sites was observed. Site-specific differences in nutrient enrichment (i.e. nitrogen and phosphorus) and sediment resuspension were identified as driving forces of the observed intra-habitat differences in food web structure.

Relevance: 30.00%

Abstract:

This paper describes the use of Internet/Intranet technology, dynamic data publishing implemented with ASP, and remote design and manufacturing in a distributed network environment. A web-based market and customer management system supporting remote robot design and manufacturing is designed, and the design approach for a database publishing system based on a Browser/Server architecture is thereby explored.

Relevance: 30.00%

Abstract:

Recently the notion of self-similarity has been shown to apply to wide-area and local-area network traffic. In this paper we examine the mechanisms that give rise to self-similar network traffic. We present an explanation for traffic self-similarity by using a particular subset of wide area traffic: traffic due to the World Wide Web (WWW). Using an extensive set of traces of actual user executions of NCSA Mosaic, reflecting over half a million requests for WWW documents, we show evidence that WWW traffic is self-similar. Then we show that the self-similarity in such traffic can be explained based on the underlying distributions of WWW document sizes, the effects of caching and user preference in file transfer, the effect of user "think time", and the superimposition of many such transfers in a local area network. To do this we rely on empirically measured distributions both from our traces and from data independently collected at over thirty WWW sites.
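
For readers wishing to test for self-similarity in their own traffic traces, a textbook aggregated-variance estimate of the Hurst parameter H can be sketched as follows (a generic estimator, not the exact analysis pipeline used in the paper):

```python
import numpy as np

def hurst_aggregated_variance(counts, block_sizes=(1, 2, 4, 8, 16, 32, 64, 128)):
    """Estimate H from a traffic count series (e.g. bytes per time bin) using
    Var(X^(m)) ~ m^(2H - 2). H near 1 indicates strong self-similarity;
    H = 0.5 corresponds to short-range-dependent (Poisson-like) traffic."""
    x = np.asarray(counts, dtype=float)
    ms, variances = [], []
    for m in block_sizes:
        n_blocks = len(x) // m
        if n_blocks < 2:
            break
        block_means = x[: n_blocks * m].reshape(n_blocks, m).mean(axis=1)
        ms.append(m)
        variances.append(block_means.var())
    slope, _ = np.polyfit(np.log(ms), np.log(variances), 1)
    return 1.0 + slope / 2.0
```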

Relevance: 30.00%

Abstract:

The exploding demand for services like the World Wide Web reflects the potential that is presented by globally distributed information systems. The number of WWW servers world-wide has doubled every 3 to 5 months since 1993, outstripping even the growth of the Internet. At each of these self-managed sites, the Common Gateway Interface (CGI) and Hypertext Transfer Protocol (HTTP) already constitute a rudimentary basis for contributing local resources to remote collaborations. However, the Web has serious deficiencies that make it unsuited for use as a true medium for metacomputing --- the process of bringing hardware, software, and expertise from many geographically dispersed sources to bear on large-scale problems. These deficiencies are, paradoxically, the direct result of the very simple design principles that enabled its exponential growth. There are many symptoms of the problems exhibited by the Web: disk and network resources are consumed extravagantly; information search and discovery are difficult; protocols are aimed at data movement rather than task migration, and ignore the potential for distributing computation. However, all of these can be seen as aspects of a single problem: as a distributed system for metacomputing, the Web offers unpredictable performance and unreliable results. The goal of our project is to use the Web as a medium (within either the global Internet or an enterprise intranet) for metacomputing in a reliable way with performance guarantees. We attack this problem at four levels:

(1) Resource Management Services: Globally distributed computing allows novel approaches to the old problems of performance guarantees and reliability. Our first set of ideas involves setting up a family of real-time resource management models organized by the Web Computing Framework with a standard Resource Management Interface (RMI), a Resource Registry, a Task Registry, and resource management protocols that allow resource needs and availability information to be collected and disseminated, so that a family of algorithms with varying computational precision and accuracy of representations can be chosen to meet real-time and reliability constraints.

(2) Middleware Services: Complementary to techniques for allocating and scheduling available resources to serve application needs under real-time and reliability constraints, the second set of ideas aims at reducing communication latency, traffic congestion, server workload, and so on. We develop customizable middleware services that exploit application characteristics in traffic analysis to drive new server/browser design strategies (e.g., exploiting the self-similarity of Web traffic), derive document access patterns via multi-server cooperation, and use them in speculative prefetching, document caching, and aggressive replication to reduce server load and bandwidth requirements.

(3) Communication Infrastructure: To achieve any guarantee of quality of service or performance, one must work at the network layer, which provides the basic guarantees of bandwidth, latency, and reliability. The third area is therefore a set of new techniques in network service and protocol design.

(4) Object-Oriented Web Computing Framework: A useful resource management system must deal with job priority, fault tolerance, quality of service, complex resources such as ATM channels, probabilistic models, etc., and models must be tailored to represent the best tradeoff for a particular setting. This requires a family of models, organized within an object-oriented framework, because no one-size-fits-all approach is appropriate. This presents a software engineering challenge requiring integration of solutions at all levels: algorithms, models, protocols, and profiling and monitoring tools. The framework captures the abstract class interfaces of the collection of cooperating components, but allows the concretization of each component to be driven by the requirements of a specific approach and environment.
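
Purely as an illustration of the object-oriented framework idea in (4) (abstract class interfaces behind which concrete models can be swapped), the skeleton below uses the component names from the abstract but entirely hypothetical method signatures:

```python
from abc import ABC, abstractmethod

class ResourceManagementInterface(ABC):
    """Hypothetical sketch of a standard RMI: concrete real-time resource
    management models would subclass this and plug into the framework."""

    @abstractmethod
    def reserve(self, task_id: str, cpu: float, bandwidth: float, deadline: float) -> bool:
        """Try to reserve resources so a task can meet its real-time constraint."""

    @abstractmethod
    def release(self, task_id: str) -> None:
        """Release resources held by a completed or failed task."""

class ResourceRegistry:
    """Collects resource availability advertised by participating servers."""
    def __init__(self):
        self._capacity = {}  # server -> {'cpu': ..., 'bandwidth': ...}

    def advertise(self, server: str, cpu: float, bandwidth: float) -> None:
        self._capacity[server] = {"cpu": cpu, "bandwidth": bandwidth}

    def candidates(self, cpu: float, bandwidth: float):
        return [s for s, c in self._capacity.items()
                if c["cpu"] >= cpu and c["bandwidth"] >= bandwidth]

class TaskRegistry:
    """Records task requirements so schedulers can match tasks to resources."""
    def __init__(self):
        self._tasks = {}

    def register(self, task_id: str, cpu: float, bandwidth: float, deadline: float) -> None:
        self._tasks[task_id] = {"cpu": cpu, "bandwidth": bandwidth, "deadline": deadline}
```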

Relevance: 30.00%

Abstract:

We present a highly accurate method for classifying web pages based on link percentage, the proportion of a web page's text characters that are part of links (link characters normalized by all text characters on the page). K-means clustering is used to create unique thresholds to differentiate index pages and article pages on individual web sites. Index pages contain mostly links to articles and other indices, while article pages contain mostly text. We also present a novel link grouping algorithm using agglomerative hierarchical clustering that groups links in the same spatial neighborhood together while preserving link structure. Grouping allows users with severe disabilities to use a scan-based mechanism to tab through a web page and select items. In experiments, we saw up to a 40-fold reduction in the number of commands needed to click on a link with a scan-based interface, which shows that we can vastly improve the rate of communication for users with disabilities. We used web page classification and link grouping to alter web page display on an accessible web browser that we developed to provide a usable browsing interface for users with disabilities. Our classification method consistently outperformed a baseline classifier even when using minimal data to generate article and index clusters, and achieved classification accuracy of 94.0% on web sites with well-formed or slightly malformed HTML, compared with 80.1% accuracy for the baseline classifier.
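
A minimal sketch of the per-site thresholding idea (link percentage split into index versus article pages with k-means, k = 2) is shown below; function and variable names are illustrative and this is not the authors' implementation:

```python
import numpy as np
from sklearn.cluster import KMeans

def link_percentage(link_chars, total_chars):
    """Fraction of a page's text characters that are part of links."""
    return link_chars / max(total_chars, 1)

def classify_pages(link_percentages):
    """Cluster one site's pages into two groups on link percentage; pages in
    the cluster with the higher mean link percentage are labelled 'index',
    the rest 'article'."""
    x = np.asarray(link_percentages, dtype=float).reshape(-1, 1)
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(x)
    index_cluster = int(np.argmax(km.cluster_centers_.ravel()))
    return ["index" if label == index_cluster else "article" for label in km.labels_]
```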

Relevance: 30.00%

Abstract:

OBJECTIVE: The Veterans Health Administration has developed My HealtheVet (MHV), a Web-based portal that links veterans to their care in the Veterans Affairs (VA) system. The objective of this study was to measure diabetic veterans' access to and use of the Internet, and their interest in using MHV to help manage their diabetes. MATERIALS AND METHODS: Cross-sectional mailed survey of 201 patients with type 2 diabetes and hemoglobin A1c > 8.0% receiving primary care at any of five primary care clinic sites affiliated with a VA tertiary care facility. Main measures included Internet usage, access, and attitudes; computer skills; interest in using the Internet; awareness of and attitudes toward MHV; demographics; and socioeconomic status. RESULTS: A majority of respondents reported having access to the Internet at home. Nearly half of all respondents had searched online for information about diabetes, including some who did not have home Internet access. More than a third obtained "some" or "a lot" of their health-related information online. Forty-one percent reported being "very interested" in using MHV to help track their home blood glucose readings, a third of whom did not have home Internet access. Factors associated with being "very interested" were as follows: having access to the Internet at home (p < 0.001), "a lot/some" trust in the Internet as a source of health information (p = 0.002), younger age (p = 0.03), and some college education (p = 0.04). Neither race (p = 0.44) nor income (p = 0.25) was significantly associated with interest in MHV. CONCLUSIONS: This study found that a diverse sample of older VA patients with suboptimally controlled diabetes had a level of familiarity with and access to the Internet comparable to an age-matched national sample. In addition, there was a high degree of interest in using the Internet to help manage their diabetes.

Relevance: 30.00%

Abstract:

When mortality is high, animals run a risk if they wait to accumulate resources for improved reproduction, so they may trade off the timing of reproduction against the number and size of offspring. Animals may also attempt to improve food acquisition by relocation, even in 'sit and wait' predators. We examine these factors in an isolated population of the orb-web spider Zygiella x-notata. The population was monitored for 200 days from first egg laying until all adults had died. Large females produced their first clutch earlier than did small females, and there was a positive correlation between female size and the number and size of eggs produced. Many females abandoned their web site and relocated their web position; these females were presumably without eggs, because female Zygiella typically guard their eggs. In total, c. 25% of females reproduced, but those that relocated were less likely to do so, and if they did, they produced the clutch at a later date than those that remained. When the date of lay was controlled for, there was no effect of relocation on egg number, but relocated females produced smaller eggs. The data are consistent with the idea that females in resource-poor sites are more likely to relocate. Relocation seems to be a gamble on finding a more productive site, but one that yields at best a late clutch of small eggs, and few females achieve even that.

Relevance: 30.00%

Abstract:

The rate of species loss is increasing on a global scale, and predators are most at risk from human-induced extinction. The effects of losing predators are difficult to predict, even with experimental single-species removals, because different combinations of species interact in unpredictable ways. We tested the effects of the loss of groups of common predators on herbivore and algal assemblages in a model benthic marine system. The predator groups were fish, shrimp and crabs. Each group was represented by at least two characteristic species based on data collected at local field sites. We examined the effects of the loss of predators while controlling for the loss of predator biomass. The identity, not the number, of predator groups affected herbivore abundance and assemblage structure. Removing fish led to a large increase in the abundance of dominant herbivores, such as ampithoids and caprellids. Predator identity also affected algal assemblage structure; it did not, however, affect total algal mass. Removing fish led to an increase in the final biomass of the least common taxa (red algae) and reduced the mass of the dominant taxa (brown algae). This compensatory shift in the algal assemblage appeared to facilitate the maintenance of a constant total algal biomass. In the absence of fish, shrimp at higher than ambient densities had a similar effect on herbivore abundance, showing that other groups could partially compensate for the loss of dominant predators. Crabs had no effect on herbivore or algal populations, possibly because they were not at carrying capacity in our experimental system. These findings show that, contrary to the assumptions of many food web models, predators cannot be classified into a single functional group: their role in food webs depends on their identity, their density in 'real' systems, and their carrying capacities.

Relevance: 30.00%

Abstract:

REMA is an interactive web-based program which predicts endonuclease cut sites in DNA sequences. It analyses multiple sequences simultaneously, predicts the number and size of fragments, and provides restriction maps. Users can select single or paired combinations of all commercially available enzymes. Additionally, REMA permits prediction of multiple sequence terminal fragment sizes and suggests suitable restriction enzymes for maximally discriminatory results. REMA is an easy-to-use, web-based program which will have wide application in molecular biology research. Availability: REMA is written in Perl and is freely available for non-commercial use. Detailed information on installation can be obtained from Jan Szubert (jan.szubert@gmail.com) and the web-based application is accessible on the internet at the URL http://www.macaulay.ac.uk/rema. Contact: b.singh@macaulay.ac.uk.
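
As a rough illustration of what such in-silico digestion involves (not REMA's Perl implementation; real enzymes cut at a defined offset within or near the recognition site and may accept ambiguous bases):

```python
import re

def restriction_fragments(sequence, site):
    """Predicted fragment lengths after cutting a linear DNA sequence at every
    occurrence of a recognition site (cut placed, for simplicity, at the site start)."""
    seq = sequence.upper()
    cut_positions = [m.start() for m in re.finditer(site.upper(), seq)]
    bounds = [0] + cut_positions + [len(seq)]
    return [b - a for a, b in zip(bounds, bounds[1:]) if b > a]

# e.g. an EcoRI-like site GAATTC on a toy sequence:
print(restriction_fragments("AAGAATTCGGGAATTCTT", "GAATTC"))  # [2, 8, 8]
```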

Relevance: 30.00%

Abstract:

Objective. To investigate students' use of and views on social networking sites, and to assess differences in attitudes between genders and years in the program.

Methods. All pharmacy undergraduate students were invited via e-mail to complete an electronic questionnaire consisting of 21 questions relating to social networking.

Results. Most (91.8%) of the 377 respondents reported using social networking Web sites, with 98.6% using Facebook and 33.7% using Twitter. Female students were more likely than male students to agree that they had been made sufficiently aware of the professional behavior expected of them when using social networking sites (76.6% vs 58.1%; p=0.002) and to agree that students should have the same professional standards whether on placement or using social networking sites (76.3% vs 61.6%; p<0.001).

Conclusions. A high level of social networking use and potentially inappropriate attitudes towards professionalism were found among pharmacy students. Further training may be useful to ensure pharmacy students are aware of how to apply codes of conduct when using social networking sites.

Relevance: 30.00%

Abstract:

Web sites that rely on databases for their content are now ubiquitous. Query result pages are dynamically generated from these databases in response to user-submitted queries. Automatically extracting structured data from query result pages is a challenging problem, as the structure of the data is not explicitly represented. While humans show good intuition in visually understanding data records on a query result page as displayed by a web browser, no existing approach to data record extraction has made full use of this intuition. We propose a novel approach that makes use of the common sources of evidence humans use to understand data records on a displayed query result page: structural regularity, and visual and content similarity between the data records displayed on the page. Based on these observations we propose new techniques that can identify each data record individually, while ignoring noise items such as navigation bars and adverts. We have implemented these techniques in a software prototype, rExtractor, and tested it using two datasets. Our experimental results show that our approach achieves significantly higher accuracy than previous approaches. Furthermore, it establishes the case for the use of vision-based algorithms in the context of data extraction from web sites.
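
A small sketch of the regularity-based grouping idea follows; it assumes rendered-page geometry and DOM tag paths are available for each candidate block, the feature names are hypothetical, and it is not rExtractor's algorithm:

```python
from collections import defaultdict

def group_data_records(blocks, x_tol=5, w_tol=10):
    """Group rendered page blocks that look like repeated data records.
    Each block is a dict with 'x' and 'width' (pixel geometry) and 'tag_path'
    (its DOM tag signature). Blocks sharing a tag path and closely aligned
    geometry are treated as instances of one record type; singleton groups
    (e.g. navigation bars, adverts) are discarded as noise."""
    groups = defaultdict(list)
    for block in blocks:
        # quantize geometry so small rendering differences do not split groups
        key = (block["tag_path"], round(block["x"] / x_tol), round(block["width"] / w_tol))
        groups[key].append(block)
    return [g for g in groups.values() if len(g) > 1]
```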

Relevance: 30.00%

Abstract:

The purpose of this paper is to examine website adoption and its resultant effects on credit union performance in Ireland over the period 2002 to 2010. While there was a steady increase in web adoption over the period, a sizeable proportion (53%) of credit unions did not have a web-based facility in 2010. To gauge web functionality, the researchers accessed all websites in 2010/2011; most sites were classified as informational, with limited transactional options. Panel data techniques are then used to capture the dynamic nature of website diffusion and to investigate the effect of website adoption on cost and performance. The empirical analysis reveals that credit unions with web-based functionality have a reduced spread between the loan rate and the pay-out rate, primarily driven by reduced loan rates. This reduced spread, although small, is found to both persist and increase over time.
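
The panel setup hinted at above could be sketched as a two-way fixed-effects regression of the interest spread on website adoption; the column names below are hypothetical and this is not the authors' exact specification:

```python
import pandas as pd
import statsmodels.formula.api as smf

def estimate_web_effect(df: pd.DataFrame):
    """df: credit-union/year panel with hypothetical columns 'spread' (loan rate
    minus pay-out rate), 'web' (1 if a website existed that year), and the
    identifiers 'credit_union' and 'year'. Absorbs both effects via dummies and
    clusters standard errors by credit union."""
    model = smf.ols("spread ~ web + C(credit_union) + C(year)", data=df)
    return model.fit(cov_type="cluster", cov_kwds={"groups": df["credit_union"]})
```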