968 results for Web Sites


Relevance:

30.00%

Publisher:

Abstract:

Purpose – The work presented in this paper aims to provide an approach to classifying web logs by personal properties of users. Design/methodology/approach – The authors describe an iterative system that begins with a small set of manually labeled terms, which are used to label queries from the log. A set of background knowledge related to these labeled queries is acquired by combining web search results on these queries. This background set is used to obtain many terms that are related to the classification task. The system then ranks each of the related terms, choosing those that best fit the personal properties of the users. These terms are then used to begin the next iteration. Findings – The authors identify the difficulties of classifying web logs by approaching this problem from a machine learning perspective. By applying the approach developed, the authors are able to show that many queries in a large query log can be classified. Research limitations/implications – Testing results in this type of classification work is difficult, as the true personal properties of web users are unknown. Evaluating the classification results by comparing classified queries to well-known age-related sites is a direction currently being explored. Practical implications – This research is background work that can be incorporated in search engines or other web-based applications, to help marketing companies and advertisers. Originality/value – This research enhances the current state of knowledge in short-text classification and query log learning.
Keywords: Classification schemes, Computer networks, Information retrieval, Man-machine systems, User interfaces
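One iteration of the bootstrapping loop the abstract describes might be sketched as follows. All names, the co-occurrence scoring, and the top-k cutoff are illustrative assumptions, not the authors' implementation.

```python
def expand_seed_terms(queries, seed_terms, related_terms, top_k=2):
    """One iteration of the loop sketched above: label queries that contain
    a seed term, score candidate terms by co-occurrence with the labeled
    queries, and promote the best candidates into the seed set.
    All names and the top_k cutoff are illustrative assumptions."""
    # Label queries that match any current seed term.
    labeled = [q for q in queries if any(t in q for t in seed_terms)]
    # Score background-knowledge terms by how many labeled queries they
    # are related to.
    scores = {}
    for q in labeled:
        for term in related_terms.get(q, []):
            scores[term] = scores.get(term, 0) + 1
    ranked = sorted(scores, key=scores.get, reverse=True)
    new_terms = [t for t in ranked if t not in seed_terms][:top_k]
    return labeled, seed_terms | set(new_terms)
```

Each iteration grows the seed set, so later passes can label queries that the initial hand-labeled terms would miss.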


Building Web 2.0 sites does not necessarily ensure the success of the site. We aim to better understand what improves the success of a site by drawing insight from biologically inspired design patterns. Web 2.0 sites provide a mechanism for human interaction, enabling powerful intercommunication between massive volumes of users. Early Web 2.0 site providers that were previously dominant are being succeeded by newer sites providing innovative social interaction mechanisms. Understanding which site traits contribute to this success drives research into Web site mechanics, using models to describe the associated social networking behaviour. Some of these models attempt to show how the volume of users provides self-organisation and self-contextualisation of content. One model describing coordinated environments is stigmergy, a term originally describing coordinated insect behaviour. This paper explores how exploiting stigmergy can provide a valuable mechanism for identifying and analysing online user behaviour, specifically when considering that user freedom of choice is restricted by the provided web site functionality. This will aid us in building better collaborative Web sites, improving their collaborative processes.


As online business thrives, a company’s Web presence holds enormous importance as a source of information, entertainment, and customer service for Internet users. Besides being user-friendly, a Web site should offer interesting and enjoyable content to attract online visitors in an ever-changing multimedia environment. Companies that operate globally must know how cultural differences influence the way potential customers perceive their sites. This paper presents a model that highlights the importance of ease of use, enjoyment, content, and brand trust for Web site loyalty. The model is subsequently tested in four countries: Australia, Japan, Mongolia, and the USA. The results show that perceptual differences exist: while ease of use is crucial for Web site loyalty in all four countries, the importance of content, perceived enjoyment, and brand trust varies across different cultures.


A Flash Event (FE) represents a period of time when a web server experiences a dramatic increase in incoming traffic, either following a newsworthy event that has prompted users to locate and access it, or as a result of redirection from other popular web or social media sites. This usually leads to network congestion and Quality-of-Service (QoS) degradation. These events can be mistaken for Distributed Denial-of-Service (DDoS) attacks aimed at disrupting the server. Accurate detection of FEs and their distinction from DDoS attacks is important, since different actions need to be undertaken by network administrators in the two cases. However, a lack of public-domain FE datasets hinders research in this area. In this paper we present a detailed study of flash events and classify them into three broad categories. In addition, the paper describes FEs in terms of three key components: the volume of incoming traffic, the related source IP addresses, and the resources being accessed. We present such an FE model with minimal parameters and use publicly available datasets to analyse and validate it. The model can be used to generate different types of FE traffic, closely approximating real-world scenarios, in order to facilitate research into distinguishing FEs from DDoS attacks.
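As a rough illustration of the three-component view (traffic volume, source addresses, accessed resources), a synthetic FE traffic profile could be generated like this. The linear ramp/sustain/decay shape and every parameter here are assumptions for illustration, not the paper's calibrated model.

```python
def synth_flash_event(baseline_rps, peak_factor, ramp, sustain, decay, n_sources):
    """Sketch of the three FE components described above: a per-second
    traffic-volume timeline (linear ramp-up, sustained peak, linear decay
    back to baseline), a pool of source IPs, and the accessed resources.
    The linear shape and all parameters are illustrative assumptions."""
    peak = baseline_rps * peak_factor
    timeline = [baseline_rps + (peak - baseline_rps) * (t + 1) / ramp
                for t in range(ramp)]                      # ramp-up phase
    timeline += [peak] * sustain                           # sustained peak
    timeline += [peak + (baseline_rps - peak) * (t + 1) / decay
                 for t in range(decay)]                    # decay phase
    sources = ["10.0.%d.%d" % (i // 256, i % 256) for i in range(n_sources)]
    resources = ["/news/story.html"]  # FEs typically target few resources
    return timeline, sources, resources
```

A generator of this kind makes it possible to replay FE-like traffic against a detector and check that it is not flagged as a DDoS attack.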


Software development and Web site development techniques have evolved significantly over the past 20 years. The relatively young Web application development area has borrowed heavily from traditional software development methodologies, primarily due to the similarities in the areas of data persistence and User Interface (UI) design. Recent developments in this area propose a new Web Modeling Language (WebML) to facilitate the nuances specific to Web development. WebML is one of a number of implementations designed to enable modeling of web site interaction flows while being extendable to accommodate new features in Web site development into the future. Our research aims to extend WebML with a focus on stigmergy, a biological term originally used to describe coordination between insects. We see design features in existing Web sites that mimic stigmergic mechanisms as part of the UI. We believe that we can synthesize and embed stigmergy in Web 2.0 sites. This paper focuses on the sub-topics of site UI design and the stigmergic mechanism designs required to achieve this.


Accepting that culture and language are interrelated in second language learning (SLL), SLL web sites should be designed to integrate cultural aspects. Yet many SLL web sites fail to do so and/or focus on language acquisition only. This study identified three issues: (1) the anthropologists' cultural models mostly adopted in cross-cultural web user interface design have been used only superficially; (2) web designers treat culture as something fixed that needs to be modeled into interface design elements; so (3) there is a need for a communication framework between educators and design practitioners that can be utilized in web design processes. This paper discusses what anthropology can contribute to language learning, mediated through web design processes, and suggests a cultural user experience framework for web-based SLL by presenting an exemplary matrix. To evaluate the effectiveness of the framework, the key stakeholders (learners, teachers, and designers) participated in a case scenario-based evaluation. The results suggest that the framework can enhance effective communication and collaboration for cultural integration.


Stigmergy is a biological term originally used when discussing insect or swarm behaviour; it describes a model of environment-based communication that separates artefacts from agents. This phenomenon is demonstrated in the behaviour of ants and their food foraging supported by pheromone trails, and similarly in termites and their nest-building process. What is interesting about this mechanism is that highly organized societies are formed without an apparent central management function. We see design features in Web sites that mimic stigmergic mechanisms as part of the User Interface, and we have created generalizations of these patterns. Software development and Web site development techniques have evolved significantly over the past 20 years. Recent progress in this area proposes languages to model web applications, to facilitate the nuances specific to these developments. These modeling languages provide a suitable framework for building reusable components encapsulating our design patterns of stigmergy. We hypothesize that incorporating stigmergy as a separate feature of a site's primary function will ultimately lead to enhanced user coordination.


It is of course recognised that technology can be gendered and implicated in gender relations. However, it continues to be the case that men's experiences with technology are underexplored, and the situation is even more problematic where digital media is concerned. Over the past 30 years we have witnessed a dramatic rise in the pervasiveness of digital media across many parts of the world, associated with wide-ranging aspects of our lives. This rise has been fuelled over the last decade by the emergence of Web 2.0 and particularly Social Networking Sites (SNS). Given this context, I believe it is necessary for us to undertake more work to understand men's engagements with digital media, the implications this might have for masculinities, and the analysis of gender relations more generally. To begin to unpack this area, I engage theorizations of the properties of digital media networks and integrate this with the masculinity studies field. Using this framework, I suggest we need to consider the rise in what I call networked masculinities – those masculinities (co)produced and reproduced with digitally networked publics. Through this analysis I discuss themes related to digital mediators, relationships, play and leisure, work and commerce, and ethics. I conclude that as masculinities can be, and are being, complicated and given agency by advancing notions and practices of connectivity, mobility, classification and convergence, those engaged with masculinity studies and digital media have much to contribute.


This project investigated (1) Australian web designers' cultural perceptions of Australian Indigenous users and (2) Australian Indigenous cultural features in terms of user interface design. In doing so, it reviewed the literature on cross-cultural user interface design, focusing on feasible models and arguments to articulate and integrate Australian Indigenous Internet users' cultural needs in web user interfaces. Online survey results collected from 101 Indigenous users and 126 web designers showed a distinctive difference between the two groups on the integration of Indigenous users' culture in Web sites. Interview data collected from 14 Indigenous users and 14 web designers suggested practical approaches to the design implications of Indigenous culture.


Social network sites (SNSs) such as Facebook have the potential to persuade people to adopt a lifestyle based on exercise and healthy nutrition. We report the findings of a qualitative study of an SNS for bodybuilders, looking at how bodybuilders present themselves online and how they orchestrate the SNS with their offline activities. Discussing the persuasive element of appreciation, we aim to extend previous work on persuasion in Web 2.0 technologies.


Depth measures the extent of atom/residue burial within a protein. It correlates with properties such as protein stability, hydrogen exchange rate, protein-protein interaction hot spots, post-translational modification sites and sequence variability. Our server, DEPTH, accurately computes depth and solvent-accessible surface area (SASA) values. We show that depth can be used to predict small-molecule ligand binding cavities in proteins. Often, some of the residues lining a ligand binding cavity are both deep and solvent exposed. Using the depth-SASA pair values for a residue, its likelihood to form part of a small-molecule binding cavity is estimated. The parameters of the method were calibrated over a training set of 900 high-resolution X-ray crystal structures of single-domain proteins bound to small molecules (molecular weight < 1.5 kDa). The prediction accuracy of DEPTH is comparable to that of other geometry-based prediction methods, including LIGSITE, SURFNET and Pocket-Finder (all with a Matthews correlation coefficient of ~0.4), over a testing set of 225 single- and multi-chain protein structures. Users have the option of tuning several parameters to detect cavities of different sizes, for example, geometrically flat binding sites. The input to the server is a protein 3D structure in PDB format. Users can also tune the values of four parameters associated with the computation of residue depth and the prediction of binding cavities. The computed depths, SASA values and binding cavity predictions are displayed in 2D plots and mapped onto 3D representations of the protein structure using Jmol. Links are provided to download the outputs. Our server is useful for all structural analysis based on residue depth and SASA, such as guiding site-directed mutagenesis experiments and small-molecule docking exercises, in the context of protein functional annotation and drug discovery.
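The "deep yet solvent-exposed" observation above amounts to a joint condition on the depth-SASA pair. A toy version of that criterion, with hypothetical cutoffs rather than the server's calibrated parameters, would be:

```python
def lines_binding_cavity(depth, sasa, depth_cut=4.0, sasa_cut=20.0):
    """Toy version of the depth-SASA criterion described above: flag a
    residue as cavity-lining when it is BOTH deep and solvent exposed.
    The cutoffs (depth in angstroms, SASA in square angstroms) are
    hypothetical, not the values calibrated on the 900-structure set."""
    return depth > depth_cut and sasa > sasa_cut
```

The point of the joint rule is that either quantity alone is ambiguous: surface residues have high SASA but low depth, and core residues have high depth but near-zero SASA.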


A fundamental task in bioinformatics involves the transfer of knowledge from one protein molecule onto another by way of recognizing similarities. Such similarities are sought at different levels: sequence, whole fold, or important substructures. Comparison of binding sites is important to understand functional similarities among proteins and also to understand drug cross-reactivities. Current methods in the literature have their own merits and demerits, warranting exploration of newer concepts and algorithms, especially for large-scale comparisons and for obtaining accurate residue-wise mappings. Here, we report the development of a new algorithm, PocketAlign, for obtaining structural superpositions of binding sites. The software is available as a web service at http://proline.physics.iisc.ernet.in/pocketalign/. The algorithm encodes shape descriptors in the form of geometric perspectives, supplemented by chemical group classification. The shape descriptor considers several perspectives, with each residue as the focus, and captures the relative distribution of residues around it in a given site. Residue-wise pairings are computed by comparing the set of perspectives of the first site with that of the second, followed by a greedy approach that incrementally combines residue pairings into a mapping. The mappings in different frames are then evaluated by different metrics encoding the extent of alignment of the individual geometric perspectives. Different initial seed alignments are computed, each subsequently extended by detecting consequential atomic alignments in a three-dimensional grid, and the best 500 are stored in a database. Alignments are then ranked, and the top-scoring alignments reported, which are then streamed into PyMOL for visualization and analyses. The method is validated for accuracy and sensitivity and benchmarked against existing methods.
An advantage of PocketAlign, as compared to some of the existing tools available for binding-site comparison in the literature, is that it explores different schemes for identifying an alignment and thus has a better potential to capture similarities in ligand-recognition abilities. PocketAlign, by finding a detailed alignment of a pair of sites, provides insights into why two sites are similar and which set of residues and atoms contribute to the similarity.
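The greedy pairing step can be illustrated in miniature: score every cross-site residue pair with some similarity function, then accept pairs best-first while both residues are still unmatched. The residue representation and the `similarity` function (which in PocketAlign would compare geometric perspectives) are left abstract here; this is a sketch, not the published algorithm.

```python
def greedy_residue_mapping(site_a, site_b, similarity):
    """Miniature of the greedy step described above: rank all cross-site
    residue pairs by a similarity score, then incrementally accept the
    best-scoring pair whose residues are both still unmatched."""
    pairs = sorted(((similarity(a, b), a, b) for a in site_a for b in site_b),
                   reverse=True)
    used_a, used_b, mapping = set(), set(), []
    for score, a, b in pairs:
        if a not in used_a and b not in used_b:
            mapping.append((a, b))
            used_a.add(a)
            used_b.add(b)
    return mapping
```

Greedy matching is fast but order-sensitive, which is one reason the full method evaluates many seed alignments and re-ranks the results by several metrics.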


Many web sites incorporate dynamic web pages to deliver customized contents to their users. However, dynamic pages result in increased user response times due to their construction overheads. In this paper, we consider mechanisms for reducing these overheads by utilizing the excess capacity with which web servers are typically provisioned. Specifically, we present a caching technique that integrates fragment caching with anticipatory page pre-generation in order to deliver dynamic pages faster during normal operating situations. A feedback mechanism is used to tune the page pre-generation process to match the current system load. The experimental results from a detailed simulation study of our technique indicate that, given a fixed cache budget, page construction speedups of more than fifty percent can be consistently achieved as compared to a pure fragment caching approach.
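A minimal sketch of combining fragment caching with load-aware anticipatory pre-generation follows. The class shape, method names, and the load threshold are illustrative assumptions, not the paper's design.

```python
class FragmentCache:
    """Sketch of the technique described above: cache rendered page
    fragments, and pre-assemble whole pages in advance only when a
    load-feedback signal indicates spare server capacity."""
    def __init__(self, load_threshold=0.7):
        self.fragments = {}      # fragment id -> rendered HTML
        self.pregenerated = {}   # page id -> fully assembled page
        self.load_threshold = load_threshold  # illustrative value

    def render_page(self, page_id, fragment_ids, render_fragment):
        if page_id in self.pregenerated:          # fast path: page is ready
            return self.pregenerated.pop(page_id)
        parts = []
        for fid in fragment_ids:                  # assemble from fragments
            if fid not in self.fragments:
                self.fragments[fid] = render_fragment(fid)
            parts.append(self.fragments[fid])
        return "".join(parts)

    def maybe_pregenerate(self, page_id, fragment_ids, render_fragment, load):
        # Feedback mechanism: pre-generate only with spare capacity.
        if load < self.load_threshold:
            self.pregenerated[page_id] = self.render_page(
                page_id, fragment_ids, render_fragment)
```

The feedback check is what keeps pre-generation from competing with live requests: under high load the server degrades gracefully to plain fragment caching.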


Residue depth accurately measures burial and parameterizes the local protein environment. Depth is the distance of any atom/residue to the closest bulk water. We consider the non-bulk waters to occupy cavities, whose volumes are determined using a Voronoi procedure. Our estimation of cavity sizes is statistically superior to estimates made by CASTp and VOIDOO, and on par with McVol, over a data set of 40 cavities. Our calculated cavity volumes correlated best with the experimentally determined destabilization of 34 mutants from five proteins. Some of the cavities identified are capable of binding small-molecule ligands. In this study, we have enhanced our depth-based predictions of binding sites by including evolutionary information. We have demonstrated that on a database (LigASite) of ~200 proteins, we perform on par with ConCavity and better than MetaPocket 2.0. Our predictions, while less sensitive, are more specific and precise. Finally, we use depth (and other features) to predict the pKa values of GLU, ASP, LYS and HIS residues. Our results produce an average error of <1 pH unit over 60 predictions. Our simple empirical method is statistically on par with two and superior to three other methods, while inferior to only one. The DEPTH server (http://mspc.bii.a-star.edu.sg/depth/) is an ideal tool for rapid yet accurate structural analyses of protein structures.
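The depth definition above (distance to the closest bulk water) reduces to a nearest-neighbour distance query once the bulk waters are known. A direct, unoptimized sketch:

```python
import math

def atom_depth(atom_xyz, bulk_water_xyz):
    """Depth as defined above: the distance from an atom to the closest
    bulk water molecule. A residue's depth would then be averaged over
    its atoms. Brute force for clarity; a real implementation would use
    a spatial index and would first discard non-bulk (cavity) waters."""
    return min(math.dist(atom_xyz, w) for w in bulk_water_xyz)
```

Distinguishing bulk from cavity waters is the subtle part, and is exactly what the Voronoi-based cavity analysis above addresses.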


Planktonic microbial community structure and classical food web were investigated in the large shallow eutrophic Lake Taihu (2338 km(2), mean depth 1.9 m) located in subtropical Southeast China. The water column of the lake was sampled biweekly at two sites located 22 km apart over a period of twelve month. Site 1 is under the regime of heavy eutrophication while Site 2 is governed by wind-driven sediment resuspension. Within-lake comparison indicates that phosphorus enrichment resulted in increased abundance of microbial components. However, the coupling between total phosphorus and abundance of microbial components was different between the two sites. Much stronger coupling was observed at Site 1 than at Site 2. The weak coupling at Site 2 was mainly caused by strong sediment resuspension, which limited growth of phytoplankton and, consequently, growth of bacterioplankton and other microbial components. High percentages of attached bacteria, which were strongly correlated with the biomass of phytoplankton, especially Microcystis spp., were found at Site 1 during summer and early autumn, but no such correlation was observed at Site 2. This potentially leads to differences in carbon flow through microbial food web at different locations. Overall, significant heterogeneity of microbial food web structure between the two sites was observed. Site-specific differences in nutrient enrichment (i.e. nitrogen and phosphorus) and sediment resuspension were identified as driving forces of the observed intra-habitat differences in food web structure.