884 results for web content


Relevance: 30.00%

Abstract:

In this paper, we present WebPut, a prototype system that adopts a novel web-based approach to the data imputation problem. Towards this, WebPut utilizes the available information in an incomplete database in conjunction with the data consistency principle. Moreover, WebPut extends effective Information Extraction (IE) methods to formulate web search queries that are capable of retrieving missing values with high accuracy. WebPut employs a confidence-based scheme that efficiently leverages our suite of data imputation queries to automatically select the most effective imputation query for each missing value. A greedy iterative algorithm is also proposed to schedule the imputation order of the different missing values in a database, and in turn the issuing of their corresponding imputation queries, thereby improving the accuracy and efficiency of WebPut. Experiments based on several real-world data collections demonstrate that WebPut outperforms existing approaches.
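
To make the confidence-based selection and greedy scheduling concrete, the following is a minimal Python sketch of the idea; the names (ImputationQuery, run_query, rebuild_queries) and the cell representation are hypothetical illustrations, not WebPut's actual interfaces. The sketch repeatedly imputes the cell whose best remaining query is most confident, so freshly imputed values can be folded into later queries.

# Hypothetical sketch of WebPut-style confidence-based greedy scheduling;
# names and structures are illustrative, not taken from the paper.
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

Cell = Tuple[int, str]  # (row index, attribute name) of a missing value

@dataclass
class ImputationQuery:
    text: str          # web search query formulated for the missing value
    confidence: float  # estimated chance of retrieving the correct value

def greedy_impute(
    missing: Dict[Cell, List[ImputationQuery]],
    run_query: Callable[[str], str],
    rebuild_queries: Callable[[Cell, Dict[Cell, str]], List[ImputationQuery]],
) -> Dict[Cell, str]:
    """Impute cells in decreasing order of their best query's confidence."""
    imputed: Dict[Cell, str] = {}
    pending = dict(missing)
    while pending:
        # Pick the cell whose most confident query is the strongest overall.
        cell = max(pending, key=lambda c: max(q.confidence for q in pending[c]))
        best = max(pending.pop(cell), key=lambda q: q.confidence)
        imputed[cell] = run_query(best.text)
        # Reformulate the remaining cells' queries using newly imputed values.
        for other in list(pending):
            pending[other] = rebuild_queries(other, imputed)
    return imputed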

Relevance: 30.00%

Abstract:

What are the information practices of teen content creators? In the United States, over two thirds of teens have participated in creating and sharing content in online communities that are developed for the purpose of allowing users to be producers of content. This study investigates how teens participating in digital participatory communities find and use information, as well as how they experience the information. From this investigation emerged a model of their information practices while creating and sharing content such as film-making, visual art work, storytelling, music, programming, and website design in digital participatory communities. The research uses grounded theory methodology in a social constructionist framework to investigate the research problem: what are the information practices of teen content creators? Data were gathered through semi-structured interviews and observation of teens' digital communities. Analysis occurred concurrently with data collection, and the principle of constant comparison was applied in analysis. As findings were constructed from the data, additional data were collected until a substantive theory was constructed and no new information emerged from data collection. The theory that was constructed from the data describes five information practices of teen content creators: learning community, negotiating aesthetic, negotiating control, negotiating capacity, and representing knowledge. Describing the five information practices requires three components: the community of practice, the experiences of information, and the information actions. The experiences of information include information as participation, inspiration, collaboration, process, and artifact. Information actions include activities that occur in the categories of gathering, thinking, and creating. The experiences of information and information actions intersect in the information practices, which are situated within the specific community of practice, such as a digital participatory community. Finally, the information practices interact and build upon one another, as represented in a graphic model and accompanying explanation.

Relevance: 30.00%

Abstract:

Purpose – The article aims to review a university course, offered to students in both Australia and Germany, that encourages them to learn about designing, implementing, marketing and evaluating information programs and services in order to build active and engaged communities. The concepts and processes of Web 2.0 technologies come together in the learning activities, with students establishing their own personal learning networks (PLNs). Design/methodology/approach – The case study examines the principles of learning and teaching that underpin the course and presents the students' own experiences of the challenges they faced as they explored the interactive, participative and collaborative dimensions of the web. Findings – The online format of the course and the philosophy of learning through play provided students with a safe and supportive environment in which to move outside their comfort zones, to be creative, to experiment and to develop their professional personas. Reflection on learning was a key component that stressed the value of reflective practice in assisting library and information science (LIS) professionals to adapt confidently to the rapidly changing work environment. Originality/value – This study provides insights into the opportunities for LIS courses to work across geographical boundaries, to allow students to critically appraise library practice in different contexts and to become active participants in wider professional networks.

Relevance: 30.00%

Abstract:

The rapid growth of visual information on the Web has led to immense interest in multimedia information retrieval (MIR). While advances in MIR systems have achieved some success in specific domains, particularly with content-based approaches, general Web users still struggle to find the images they want. Despite the success in content-based object recognition or concept extraction, the major problem in current Web image searching remains in the querying process. Since most online users express their needs only in semantic terms or objects, systems that rely on visual features (e.g., color or texture) to search images create a semantic gap which hinders general users from fully expressing their needs. In addition, query-by-example (QBE) retrieval imposes extra obstacles for exploratory search because users may not always have a representative image at hand or in mind when starting a search (i.e., the page-zero problem). As a result, the majority of current online image search engines (e.g., Google, Yahoo, and Flickr) still primarily use textual queries. The problem with query-based retrieval systems is that they capture users' information needs only in terms of formal queries; the implicit and abstract parts of users' information needs are inevitably overlooked. Hence, users often struggle to formulate queries that best represent their needs, and some compromises have to be made. Studies of Web search logs suggest that multimedia searches are more difficult than textual Web searches, and that Web image searching is the most difficult compared to video or audio searches. Hence, online users need to put in more effort when searching multimedia content, especially for images. Most interactions in Web image searching occur during query reformulation. While log analysis provides intriguing views of how the majority of users search, their search needs and motivations are ultimately neglected. User studies on image searching have attempted to understand users' search contexts in terms of users' background (e.g., knowledge, profession, motivation for search and task types) and the search outcomes (e.g., use of retrieved images, search performance). However, these studies typically focused on particular domains with a selective group of professional users. General users' Web image searching contexts and behaviors are little understood, although they represent the majority of online image searching activity today. We argue that only by understanding Web image users' contexts can current Web search engines further improve their usefulness and provide more efficient searches. In order to understand users' search contexts, a user study was conducted based on university students' Web image searching in News, Travel, and commercial Product domains. The three search domains were deliberately chosen to reflect image users' interests in people, time, event, location, and objects. We investigated participants' Web image searching behavior, with a focus on query reformulation and search strategies. Participants' search contexts, such as their search background, motivation for search, and search outcomes, were gathered by questionnaires. The searching activity was recorded along with participants' think-aloud data for analyzing significant search patterns. The relationships between participants' search contexts and corresponding search strategies were identified using a Grounded Theory approach.
Our key findings include the following aspects:
- Effects of users' interactive intents on query reformulation patterns and search strategies
- Effects of task domain on task specificity and task difficulty, as well as on some specific searching behaviors
- Effects of searching experience on result expansion strategies

A contextual image searching model was constructed based on these findings. The model helped us understand Web image searching from the user's perspective, and introduced a context-aware searching paradigm for current retrieval systems. A query recommendation tool was also developed to demonstrate how users' query reformulation contexts can potentially contribute to more efficient searching.
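
As one hedged illustration of how reformulation contexts could drive a query recommendation tool, the sketch below mines past sessions for common follow-up queries; the log format and function names are hypothetical and are not taken from the study.

# Hypothetical sketch: recommend reformulations observed in past search sessions.
# A session is the ordered list of queries one user issued; each consecutive
# pair (q1, q2) counts as one observed reformulation of q1 into q2.
from collections import Counter, defaultdict
from typing import Dict, List

def build_reformulation_index(sessions: List[List[str]]) -> Dict[str, Counter]:
    index: Dict[str, Counter] = defaultdict(Counter)
    for queries in sessions:
        for current, following in zip(queries, queries[1:]):
            index[current.lower()][following.lower()] += 1
    return index

def recommend(index: Dict[str, Counter], query: str, k: int = 3) -> List[str]:
    """Return the k most frequent reformulations seen for this query."""
    return [q for q, _ in index.get(query.lower(), Counter()).most_common(k)]

sessions = [
    ["paris", "paris eiffel tower", "eiffel tower at night"],
    ["paris", "paris eiffel tower"],
    ["paris", "paris map"],
]
index = build_reformulation_index(sessions)
print(recommend(index, "paris"))  # ['paris eiffel tower', 'paris map']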

Relevance: 30.00%

Abstract:

Delirium is a significant problem for older hospitalized people and is associated with poor outcomes. It is poorly recognized, and evidence suggests that a major reason is lack of education. Nurses who are educated about delirium can play a significant role in improving delirium recognition. This study evaluated the impact of a delirium-specific educational website. A cluster randomized controlled trial, with a pretest/post-test time series design, was conducted to measure delirium knowledge (DK) and delirium recognition (DR) over three time points. Statistically significant differences were found between the intervention and non-intervention groups. The intervention group's DK scores were higher, and the change-over-time results were statistically significant [T3 vs. T1 (t = 3.78, p < 0.001) and T2 vs. T1 baseline (t = 5.83, p < 0.001)]. Statistically significant improvements were also seen for DR when comparing T2 and T1 results (t = 2.56, p = 0.011) between both groups, but not for changes in DR scores between T3 and T1 (t = 1.80, p = 0.074). Participants rated the website highly on its visual, functional and content elements. This study supports the concept that web-based delirium learning is an effective and satisfying method of information delivery for registered nurses. Future research is required to investigate clinical outcomes as a result of this web-based education.

Relevance: 30.00%

Abstract:

In this chapter the authors discuss informal learning settings, such as fan fiction sites, and their relation to teaching and learning within formal learning settings. Young people today spend a lot of time with social media built on user-generated content. These media are often characterized by a participatory culture, which offers a good environment for developing skills and for identity work. In this chapter the authors problematize fan fiction sites as informal learning settings where the possibilities to learn are powerful and significant. They also discuss the learning processes connected to the development of literacies. Here the rhetorical principle of “imitatio” plays a vital part, as does the co-production of texts on the sites, strongly supported by the beta reader and the power of positive feedback. They also show that some fans, through the online publication of fan fiction, are able to develop their craft in ways that were previously impossible.

Relevance: 30.00%

Abstract:

This article examines the design of ePortfolios for music postgraduate students, utilizing a practice-led, iterative design research process. It is suggested that the availability of Web 2.0 technologies such as blogs and social network software potentially provides creative artists with an opportunity to engage in a dialogue about art, with artefacts of the artist's products and processes present in that discussion. The design process applied the Software Development as Research (SoDaR) methodology to simultaneously develop design and pedagogy. The approach to designing ePortfolio systems applied four theoretical protocols to examine the use of digitized artefacts to enable a dynamic and inclusive dialogue around representations of the students' work. A negative case analysis identified a disjuncture between university access and control policy and the relative openness of Web 2.0 systems outside the institution, which led to the design of an integrated model of ePortfolio.

Relevance: 30.00%

Abstract:

To ensure the safe operation of Web-based systems in Web environments, we propose an SSPA (Server-based SHA-1 Page-digest Algorithm) to verify the integrity of Web contents before the server issues an HTTP response to a user request. In addition to standard security measures, our Java implementation of the SSPA, called the Dynamic Security Surveillance Agent (DSSA), provides further security in terms of content integrity for Web-based systems. Its function is to prevent the display of Web contents that have been altered through the malicious acts of attackers and intruders on client machines. This is to protect the reputation of organisations from cyber-attacks and to ensure the safe operation of Web systems by dynamically monitoring the integrity of a Web site's content on demand. We discuss our findings in terms of the applicability and practicality of the proposed system. We also discuss its time metrics, specifically its computational overhead at the Web server, as well as the overall latency from the clients' point of view, using different Internet access methods. The SSPA, our DSSA implementation, some experimental results and related work are all discussed.
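
The core mechanism — recompute a page digest and compare it with a stored baseline before the server responds — can be sketched briefly. The original DSSA is a Java agent; the Python below is a minimal illustration with hypothetical names, not the authors' implementation.

# Minimal sketch of server-side page-digest verification before responding.
# baseline_digests maps a page path to the SHA-1 digest recorded when the
# content was last known to be good; a mismatch means the page was altered.
import hashlib
from pathlib import Path
from typing import Dict

def sha1_digest(path: Path) -> str:
    return hashlib.sha1(path.read_bytes()).hexdigest()

def verify_before_response(path: Path, baseline_digests: Dict[str, str]) -> bytes:
    """Return the page content only if its digest matches the stored baseline."""
    expected = baseline_digests.get(str(path))
    if expected is None or sha1_digest(path) != expected:
        # In the SSPA scheme the server would withhold the altered page and
        # return an error (or a restored copy) instead of the tampered content.
        raise PermissionError(f"Integrity check failed for {path}")
    return path.read_bytes()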

Relevance: 30.00%

Abstract:

With increased consolidation and a few large vendors dominating the market, how can software vendors distinguish themselves in order to maintain profitability and gain market share? Increasingly, customers are becoming more proactive in selecting a vendor and a product, drawing upon various publications, market surveys, mailing lists, and, of course, other users. In particular, though, a company's Web site is the obvious place to begin information gathering. In sum, the days of the uninformed customer prepared to be "sold to" may be all but gone.

Relevance: 30.00%

Abstract:

Platforms for content created by web users have been associated with some of the most significant economic paradigm shifts in digital media. At the same time, user created content has often been at the center of heated scholarly debates around the democratization of media production, cultural participation, and public communication. In this entry, we provide an overview of such debates within media and communication research, particularly in relation to the evolution of mainstream platforms for content creation, curation, and sharing.

Relevance: 30.00%

Abstract:

We describe the design of a digital noticeboard to support communication within a remote Aboriginal community whose aspiration is to live in "both worlds", nurturing and extending their Aboriginal culture while actively participating in Western society and economy. Three bi-cultural aspects have emerged and are presented here: the need for a bi-lingual noticeboard to span both oral and written language traditions; the tension between perfunctory information exchange and the social, embodied protocols of telling in person; and the different ways in which time is represented in the two cultures. The design approach, developed iteratively through consultation, demonstration and testing, led to an "unsurprising interface" aimed at maximizing use and appropriation across cultures by unifying visual, textual and spoken content in both passive and interactive displays in a modeless manner.

Relevance: 30.00%

Abstract:

As the virtual world grows more complex, finding a standard way of storing data becomes increasingly important. Ideally, each data item would be brought into the computer system only once. References to data items need to be cryptographically verifiable, so the data can maintain its identity while being passed around. This way there will be only one copy of the user's family photo album, while the user can use multiple tools to show or manipulate the album. Copies of the user's data could be stored on some of his family members' computers, on some of his own computers, and also at some online services which he uses. When all actors operate over one replicated copy of the data, the system automatically avoids a single point of failure. Thus the data will not disappear with one computer breaking, or one service provider going out of business. One shared copy also makes it possible to delete a piece of data from all systems at once, on the user's request. In our research we tried to find a model that would make data manageable to users and make it possible to have the same data stored at various locations. We studied three systems, Persona, Freenet, and GNUnet, that suggest different models for protecting user data. The main application areas of the systems studied include securing online social networks, providing anonymous web access, and preventing censorship in file sharing. Each of the systems studied stores user data on machines belonging to third parties. The systems differ in the measures they take to protect their users from data loss, forged information, censorship, and being monitored. All of the systems use cryptography to secure the names used for the content and to protect the data from outsiders. Based on the gained knowledge, we built a prototype platform called Peerscape, which stores user data in a synchronized, protected database. Data items themselves are protected with cryptography against forgery, but not encrypted, as the focus has been on disseminating the data directly among family and friends instead of letting third parties store the information. We turned the synchronizing database into a peer-to-peer web by exposing its contents through an integrated HTTP server. The REST-like HTTP API supports development of applications in JavaScript. To evaluate the platform's suitability for application development we wrote some simple applications, including a public chat room, a BitTorrent site, and a flower-growing game. During our early tests we came to the conclusion that using the platform for simple applications works well. As web standards develop further, writing applications for the platform should become easier. Any system this complex will have its problems, and we are not expecting our platform to replace the existing web, but we are fairly impressed with the results and consider our work important from the perspective of managing user data.
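
Cryptographically verifiable references of the kind described above are commonly realized by naming a data item with a hash of its content, so any replica can be checked against its reference before it is trusted. The sketch below is a generic illustration of that idea using SHA-256; it is not Peerscape's actual naming or storage format.

# Generic sketch of content-addressed, verifiable references.
import hashlib

def make_reference(data: bytes) -> str:
    """Name a data item by the hash of its bytes."""
    return hashlib.sha256(data).hexdigest()

def verify(reference: str, data: bytes) -> bool:
    """True only if the bytes really are the item the reference names."""
    return hashlib.sha256(data).hexdigest() == reference

photo_album = b"...bytes of the family photo album..."
ref = make_reference(photo_album)
assert verify(ref, photo_album)              # an untampered replica checks out
assert not verify(ref, photo_album + b"x")   # a forged or corrupted copy does not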


Relevance: 30.00%

Abstract:

Protein-protein docking programs typically perform four major tasks: (i) generation of docking poses, (ii) selection of a subset of poses, (iii) their structural refinement and (iv) scoring and ranking for the final assessment of the true quaternary structure. Although the tasks can be integrated or performed in serial order, they are by nature modular, allowing an opportunity to substitute one algorithm with another. We have implemented two modular web services: (i) PRUNE, to select a subset of docking poses generated during the sampling search (http://pallab.serc.iisc.ernet.in/prune), and (ii) PROBE, to refine, score and rank them (http://pallab.serc.iisc.ernet.in/probe). The former uses a new interface-area-based edge-scoring function to eliminate >95% of the poses generated during the docking search. In contrast to other multi-parameter-based screening functions, this single-parameter elimination reduces the computational time significantly, in addition to increasing the chances of selecting native-like models in the top-ranked list. The PROBE server ranks the pruned poses after structure refinement and scoring, using a regression model for geometric compatibility and normalized interaction energy. While web services similar to PROBE are infrequent, no web service akin to PRUNE has been described before. Both servers are publicly accessible and free to use.
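
The single-parameter elimination step can be pictured as follows: compute one interface-area-based score per pose and keep only the best few percent. The scoring callable below is a stand-in placeholder; the actual edge-scoring function is defined in the paper and not reproduced here.

# Sketch of single-parameter pruning of docking poses, keeping the top 5%.
# interface_score stands in for PRUNE's interface-area-based edge score.
from typing import Callable, List, TypeVar

Pose = TypeVar("Pose")

def prune_poses(
    poses: List[Pose],
    interface_score: Callable[[Pose], float],
    keep_fraction: float = 0.05,
) -> List[Pose]:
    """Rank all poses by a single score and keep only the top-scoring fraction."""
    ranked = sorted(poses, key=interface_score, reverse=True)
    keep = max(1, int(len(ranked) * keep_fraction))
    return ranked[:keep]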

Relevance: 30.00%

Abstract:

Depth measures the extent of atom/residue burial within a protein. It correlates with properties such as protein stability, hydrogen exchange rate, protein-protein interaction hot spots, post-translational modification sites and sequence variability. Our server, DEPTH, accurately computes depth and solvent-accessible surface area (SASA) values. We show that depth can be used to predict small-molecule ligand binding cavities in proteins. Often, some of the residues lining a ligand binding cavity are both deep and solvent exposed. Using the depth-SASA pair values for a residue, its likelihood of forming part of a small-molecule binding cavity is estimated. The parameters of the method were calibrated over a training set of 900 high-resolution X-ray crystal structures of single-domain proteins bound to small molecules (molecular weight < 1.5 kDa). The prediction accuracy of DEPTH is comparable to that of other geometry-based prediction methods, including LIGSITE, SURFNET and Pocket-Finder (all with Matthews correlation coefficients of approximately 0.4), over a testing set of 225 single- and multi-chain protein structures. Users have the option of tuning several parameters to detect cavities of different sizes, for example, geometrically flat binding sites. The input to the server is a protein 3D structure in PDB format. Users can tune the values of four parameters associated with the computation of residue depth and the prediction of binding cavities. The computed depths, SASA values and binding cavity predictions are displayed in 2D plots and mapped onto 3D representations of the protein structure using Jmol. Links are provided to download the outputs. Our server is useful for all structural analyses based on residue depth and SASA, such as guiding site-directed mutagenesis experiments and small-molecule docking exercises, in the context of protein functional annotation and drug discovery.
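
The depth-SASA criterion can be illustrated with a toy filter that flags residues that are simultaneously deep and solvent exposed as candidate cavity-lining residues; the cutoffs below are illustrative placeholders, not DEPTH's calibrated parameters.

# Toy sketch of the depth-SASA idea: residues lining a small-molecule binding
# cavity are often both deep (far from the surface) and solvent exposed.
# The cutoffs are illustrative placeholders, not DEPTH's trained values.
from typing import Dict, List

def candidate_cavity_residues(
    depth: Dict[int, float],   # residue number -> depth (angstroms)
    sasa: Dict[int, float],    # residue number -> SASA (square angstroms)
    min_depth: float = 6.0,
    min_sasa: float = 20.0,
) -> List[int]:
    """Residues that exceed both the depth and the solvent-exposure cutoffs."""
    return sorted(
        res for res in depth
        if depth[res] >= min_depth and sasa.get(res, 0.0) >= min_sasa
    )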