12 results for Web-news sites

in QUB Research Portal - Research Directory and Institutional Repository for Queen's University Belfast


Relevance: 30.00%

Abstract:

When mortality is high, animals run a risk if they wait to accumulate resources for improved reproduction, so they may trade off the timing of reproduction against the number and size of offspring. Even 'sit-and-wait' predators may attempt to improve food acquisition by relocating. We examined these factors in an isolated population of the orb-web spider Zygiella x-notata, monitored for 200 days from first egg laying until all adults had died. Large females produced their first clutch earlier than small females, and female size was positively correlated with the number and size of eggs produced. Many females, presumably those without eggs, abandoned their web site and relocated; we presume they lacked eggs because female Zygiella typically guard their eggs. In total, c. 25% of females reproduced, but those that relocated were less likely to do so, and those that did produced their clutch later than females that remained. When date of lay was controlled for, relocation had no effect on egg number, but relocated females produced smaller eggs. The data are consistent with the idea that females in resource-poor sites are more likely to relocate. Relocation appears to be a gamble on finding a more productive site, but one that at best yields only a late clutch of small eggs, and few females achieve even that.

Relevance: 30.00%

Abstract:

The rate of species loss is increasing on a global scale, and predators are most at risk from human-induced extinction. The effects of losing predators are difficult to predict, even with experimental single-species removals, because different combinations of species interact in unpredictable ways. We tested the effects of the loss of groups of common predators (fish, shrimp and crabs) on herbivore and algal assemblages in a model benthic marine system. Each group was represented by at least two characteristic species, based on data collected at local field sites, and we examined the effects of predator loss while controlling for the loss of predator biomass. The identity, not the number, of predator groups affected herbivore abundance and assemblage structure. Removing fish led to a large increase in the abundance of dominant herbivores such as ampithoids and caprellids. Predator identity also affected algal assemblage structure, although not total algal mass: removing fish increased the final biomass of the least common taxa (red algae) and reduced that of the dominant taxa (brown algae). This compensatory shift in the algal assemblage appeared to maintain a constant total algal biomass. In the absence of fish, shrimp at higher-than-ambient densities had a similar effect on herbivore abundance, showing that other groups can partially compensate for the loss of dominant predators. Crabs had no effect on herbivore or algal populations, possibly because they were not at carrying capacity in our experimental system. These findings show that, contrary to the assumptions of many food web models, predators cannot be classified into a single functional group: their role in food webs depends on their identity, their density and the carrying capacity of the 'real' system.

Relevance: 30.00%

Abstract:

REMA is an interactive web-based program that predicts endonuclease cut sites in DNA sequences. It analyses multiple sequences simultaneously, predicts the number and size of restriction fragments, and provides restriction maps. Users can select single or paired combinations of all commercially available enzymes. Additionally, REMA predicts terminal fragment sizes for multiple sequences and suggests the restriction enzymes that give maximally discriminatory results. REMA is an easy-to-use, web-based program with wide application in molecular biology research. Availability: REMA is written in Perl and is freely available for non-commercial use. Detailed information on installation can be obtained from Jan Szubert (jan.szubert@gmail.com), and the web-based application is accessible at http://www.macaulay.ac.uk/rema. Contact: b.singh@macaulay.ac.uk.
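
To make the fragment-size prediction concrete, here is a minimal Python sketch of a restriction digest: it scans a sequence for enzyme recognition sites and derives fragment sizes from the sorted cut positions. The two-entry enzyme table and the example sequence are invented for illustration only; REMA itself is written in Perl and covers all commercially available enzymes.

```python
import re

# Hypothetical miniature enzyme table: recognition sequence and the cut
# offset within that sequence (illustrative subset, not REMA's full list).
ENZYMES = {
    "EcoRI": ("GAATTC", 1),    # cuts G^AATTC
    "HindIII": ("AAGCTT", 1),  # cuts A^AGCTT
}

def cut_sites(sequence: str, enzyme: str) -> list[int]:
    """Return 0-based cut positions for one enzyme on a linear sequence."""
    site, offset = ENZYMES[enzyme]
    return [m.start() + offset for m in re.finditer(site, sequence.upper())]

def fragment_sizes(sequence: str, enzymes: list[str]) -> list[int]:
    """Predict fragment sizes from a single or paired enzyme digest."""
    cuts = sorted({pos for enz in enzymes for pos in cut_sites(sequence, enz)})
    bounds = [0] + cuts + [len(sequence)]
    return [b - a for a, b in zip(bounds, bounds[1:])]

if __name__ == "__main__":
    seq = "AAGAATTCTTTTAAGCTTGG"
    print(cut_sites(seq, "EcoRI"))                    # [3]
    print(fragment_sizes(seq, ["EcoRI", "HindIII"]))  # [3, 10, 7]
```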

Relevance: 30.00%

Abstract:

Objective. To investigate students' use of, and views on, social networking sites, and to assess differences in attitudes between genders and across years of the program.

Methods. All pharmacy undergraduate students were invited via e-mail to complete an electronic questionnaire consisting of 21 questions relating to social networking.

Results. Most (91.8%) of the 377 respondents reported using social networking web sites, with 98.6% using Facebook and 33.7% using Twitter. Female students were more likely than male students to agree that they had been made sufficiently aware of the professional behavior expected of them when using social networking sites (76.6% vs 58.1%; p=0.002) and to agree that students should uphold the same professional standards whether on placement or using social networking sites (76.3% vs 61.6%; p<0.001).

Conclusions. A high level of social networking use and potentially inappropriate attitudes towards professionalism were found among pharmacy students. Further training may be useful to ensure pharmacy students are aware of how to apply codes of conduct when using social networking sites.
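
The abstract does not state which statistical test produced the quoted p-values; a chi-square test of independence on the 2x2 gender-by-agreement table is one common choice. The sketch below uses invented cell counts chosen only to approximate the reported percentages, not the study's raw data.

```python
from scipy.stats import chi2_contingency

# Hypothetical counts for illustration only: the abstract reports 76.6% of
# female vs 58.1% of male respondents agreeing, but the group sizes are not
# given, so these cells are invented to roughly match those percentages.
female_agree, female_total = 230, 300  # ~76.6%
male_agree, male_total = 43, 74        # ~58.1%

table = [
    [female_agree, female_total - female_agree],
    [male_agree, male_total - male_agree],
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
```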

Relevance: 30.00%

Abstract:

Web sites that rely on databases for their content are now ubiquitous, and query result pages are dynamically generated from these databases in response to user-submitted queries. Automatically extracting structured data from query result pages is a challenging problem, as the structure of the data is not explicitly represented. While humans show good intuition in visually understanding the data records on a query result page as displayed by a web browser, no existing approach to data record extraction has made full use of this intuition. We propose a novel approach that makes use of the common sources of evidence humans use to understand data records on a displayed query result page: structural regularity, and visual and content similarity between the data records displayed. Based on these observations we propose new techniques that identify each data record individually while ignoring noise items such as navigation bars and adverts. We have implemented these techniques in a software prototype, rExtractor, and tested it on two datasets. Our experimental results show that our approach achieves significantly higher accuracy than previous approaches, and they establish the case for using vision-based algorithms in data extraction from web sites.
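
As a rough illustration of the vision-based intuition (not rExtractor's actual algorithm, which is not specified here), the following Python sketch clusters rendered page blocks by simple visual and content features and keeps only clusters that repeat, discarding one-off blocks such as navigation bars and adverts.

```python
from dataclasses import dataclass

@dataclass
class Block:
    """A rendered page block with crude visual/content features. In a real
    system these would come from a browser's layout engine."""
    x: int          # left edge in pixels (alignment)
    width: int      # rendered size
    height: int
    n_links: int    # content makeup
    n_images: int

def similar(a: Block, b: Block, tol: float = 0.2) -> bool:
    """Visual + content similarity: same alignment, near-equal size, same makeup."""
    def close(u, v):
        return abs(u - v) <= tol * max(u, v, 1)
    return (a.x == b.x and close(a.width, b.width)
            and close(a.height, b.height)
            and a.n_links == b.n_links and a.n_images == b.n_images)

def extract_records(blocks: list[Block], min_repeats: int = 3) -> list[list[Block]]:
    """Group mutually similar blocks; clusters that repeat look like data
    records, while one-off blocks (nav bars, adverts) are dropped as noise."""
    clusters: list[list[Block]] = []
    for blk in blocks:
        for cluster in clusters:
            if similar(cluster[0], blk):
                cluster.append(blk)
                break
        else:
            clusters.append([blk])
    return [c for c in clusters if len(c) >= min_repeats]
```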

Relevance: 30.00%

Abstract:

The purpose of this paper is to examine website adoption and its resultant effects on credit union performance in Ireland over the period 2002 to 2010. While there was a steady increase in web adoption over the period, a sizeable proportion (53%) of credit unions still had no web-based facility in 2010. To gauge web functionality, the researchers accessed all websites in 2010/2011 and found that most sites were informational, with limited transactional options. Panel data techniques are then used to capture the dynamic nature of website diffusion and to investigate the effect of website adoption on cost and performance. The empirical analysis reveals that credit unions with web-based functionality have a reduced spread between the loan rate and the pay-out rate, primarily driven by reduced loan rates. This reduction in spread, although small, is found to both persist and increase over time.
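
The abstract does not give the paper's exact specification; a generic two-way fixed-effects panel regression of the loan/pay-out spread on a website indicator is one way such an effect could be estimated. The sketch below uses the linearmodels package on simulated data; all variable names (spread, has_website, log_assets) are illustrative, not the paper's.

```python
import numpy as np
import pandas as pd
from linearmodels.panel import PanelOLS

# Hypothetical panel: one row per credit union per year, 2002-2010.
rng = np.random.default_rng(0)
n_unions, years = 50, range(2002, 2011)
rows = [
    {"union": u, "year": y,
     "has_website": int(y >= 2002 + u % 9),
     "log_assets": 10 + rng.normal(),
     "spread": 5 - 0.1 * int(y >= 2002 + u % 9) + rng.normal(scale=0.5)}
    for u in range(n_unions) for y in years
]
panel = pd.DataFrame(rows).set_index(["union", "year"])

# Two-way fixed effects: union effects absorb time-invariant differences
# between credit unions; year effects absorb common yearly shocks.
model = PanelOLS(panel["spread"],
                 panel[["has_website", "log_assets"]],
                 entity_effects=True, time_effects=True)
print(model.fit().summary)
```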

Relevance: 30.00%

Abstract:

Automatically determining and assigning shared, meaningful text labels to data extracted from an e-Commerce web page is a challenging problem. An e-Commerce web page can display a list of data records, each of which can contain a combination of data items (e.g. product name and price) and explicit labels that describe some of these data items. Recent advances in extraction techniques have made it much easier to precisely extract individual data items and labels from a web page; however, two problems remain open: 1. assigning an explicit label to a data item, and 2. determining labels for the remaining data items. Furthermore, improvements in the availability and coverage of vocabularies, especially in the context of e-Commerce web sites, mean that we now have access to a bank of relevant, meaningful and shared labels that can be assigned to extracted data items. What is needed is a technique that takes as input a set of extracted data items and automatically assigns to them the most relevant and meaningful labels from a shared vocabulary. We observe that the Information Extraction (IE) community has developed a great number of techniques that solve similar problems. In this work-in-progress paper we propose to evaluate, theoretically and experimentally, different IE techniques to ascertain which is most suitable for solving this problem.
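
As a toy illustration of the matching step (the paper is a proposal to evaluate candidate IE techniques, not a specific algorithm), the following sketch maps an explicit page label to its closest term in a small shared vocabulary by string similarity. The vocabulary and threshold are invented for illustration.

```python
from difflib import SequenceMatcher

# A toy shared vocabulary, e.g. in the spirit of an e-Commerce schema such
# as schema.org/Product (the property names here are illustrative).
VOCABULARY = ["name", "price", "brand", "availability", "sku"]

def best_label(explicit_label: str, vocabulary=VOCABULARY,
               threshold: float = 0.5) -> str | None:
    """Map a page's explicit label (e.g. 'Price:') to the closest vocabulary
    term by normalised string similarity; return None if nothing fits."""
    cleaned = explicit_label.lower().strip(" :")
    scored = [(SequenceMatcher(None, cleaned, term).ratio(), term)
              for term in vocabulary]
    score, term = max(scored)
    return term if score >= threshold else None

print(best_label("Price:"))         # price
print(best_label("Item name"))      # name
print(best_label("Shipping info"))  # likely None: below threshold
```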

Relevance: 30.00%

Abstract:

Background: This study investigated the nature of newspaper reporting about online health information in the UK and US. Internet users frequently search for health information online, although the accuracy of the information retrieved varies greatly and the information can be misleading. Newspapers have the potential to influence public health behaviours, but little has been known about how newspapers portray online health information to their readers.

Methods: The newspaper database Nexis® UK was searched for articles published from 2003 to 2012 relating to online health information. Systematic content analysis of articles published in the highest-circulation newspapers in the UK and US was performed. A second researcher coded a 10% sample of articles to establish the inter-rater reliability of coding.

Results: In total, 161 newspaper articles were included in the analysis. Publication was most frequent in 2003, 2008 and 2009, years which coincided with global threats to public health. UK broadsheet newspapers were significantly more likely than UK tabloid newspapers to cover online health information (p = 0.04), and only one article was identified in US tabloid newspapers. Articles most frequently appeared in health sections. Among the 79 articles that linked online health information to specific diseases or health topics, diabetes was the most frequently mentioned disease, cancer the commonest group of diseases and sexual health the most frequent health topic. Articles portrayed the benefits of obtaining online health information more frequently than the risks, and quotations from health professionals expressed mixed opinions regarding public access to online health information. A total of 108 articles (67.1%) directed readers to specific health-related web sites, 135 (83.9%) were rated as having balanced judgement and 76 (47.2%) were judged to have excellent quality of reporting. No difference was found in the quality of reporting between UK and US articles.

Conclusions: Newspaper coverage of online health information was low during the 10-year period 2003 to 2012. Journalists tended to emphasise the benefits and understate the risks of online health information and the quality of reporting varied considerably. Newspapers directed readers to sources of online health information during global epidemics although, as most articles appeared in the health sections of broadsheet newspapers, coverage was limited to a relatively small readership.
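
The abstract does not name the reliability statistic used on the 10% double-coded sample; Cohen's kappa is a standard choice for chance-corrected agreement between two coders. A minimal sketch with invented codes:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical category codes assigned by the two researchers to the same
# reliability sample of articles (invented for illustration only).
coder_1 = ["benefit", "risk", "benefit", "balanced", "benefit", "risk"]
coder_2 = ["benefit", "risk", "balanced", "balanced", "benefit", "benefit"]

# Kappa of 1 means perfect agreement; 0 means agreement no better than chance.
kappa = cohen_kappa_score(coder_1, coder_2)
print(f"Cohen's kappa = {kappa:.2f}")
```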

Relevance: 30.00%

Abstract:

We consider the problem of linking web search queries to entities from a knowledge base such as Wikipedia. Such linking enables converting a user's web search session into a footprint in the knowledge base that could be used to enrich the user's profile. Traditional methods for entity linking have been directed towards finding entity mentions in text documents such as news reports, each of which may be linked to multiple entities, enabling the use of measures like entity-set coherence. Since web search queries are very short text fragments, criteria that rely on the existence of a multitude of mentions do not work well on them. We propose a three-phase method for linking web search queries to Wikipedia entities. The first phase performs IR-style scoring of entities against the search query to narrow the candidates down to a subset of entities; the second phase expands this subset into a larger set using hyperlink information; lastly, we use a graph-traversal approach to identify the top entities to which the query should be linked. Through an empirical evaluation on real-world web search queries, we show that our method significantly improves linking accuracy over state-of-the-art methods.
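
A skeletal Python rendering of the three-phase pipeline may help fix ideas: shortlist entities by term overlap (a crude stand-in for IR-style scoring), expand the shortlist along hyperlinks, then rank by a simple pass over the induced subgraph. The toy knowledge base and scoring rules below are invented; the paper's actual scoring and graph traversal are more sophisticated.

```python
from collections import Counter

# Toy knowledge base: entity -> (description terms, hyperlinked neighbours).
# A real system would use Wikipedia article text and its link graph.
KB = {
    "Apple_Inc.":    ({"apple", "iphone", "company", "mac"}, {"Steve_Jobs", "IPhone"}),
    "Apple_(fruit)": ({"apple", "fruit", "tree"}, {"Orchard"}),
    "Steve_Jobs":    ({"steve", "jobs", "apple", "founder"}, {"Apple_Inc."}),
    "IPhone":        ({"iphone", "apple", "smartphone"}, {"Apple_Inc."}),
    "Orchard":       ({"orchard", "fruit", "tree"}, {"Apple_(fruit)"}),
}

def link_query(query: str, k_seed: int = 2, top_n: int = 1) -> list[str]:
    terms = set(query.lower().split())

    def overlap(entity: str) -> int:
        return len(terms & KB[entity][0])

    # Phase 1: IR-style scoring to shortlist candidate entities.
    seeds = sorted(KB, key=overlap, reverse=True)[:k_seed]

    # Phase 2: expand the shortlist along hyperlinks to a larger set.
    expanded = set(seeds) | {n for e in seeds for n in KB[e][1]}

    # Phase 3: score the subgraph, favouring entities that both match the
    # query and are linked to by other candidates.
    votes: Counter = Counter()
    for e in expanded:
        votes[e] += overlap(e)
        for n in KB[e][1]:
            if n in expanded:
                votes[n] += 1
    return [e for e, _ in votes.most_common(top_n)]

print(link_query("apple iphone release"))  # ['Apple_Inc.']
```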