14 results for World-wide-web
at QUB Research Portal - Research Directory and Institutional Repository for Queen's University Belfast
Abstract:
Title. A concept analysis of renal supportive care: the changing world of nephrology
Aim. This paper is a report of a concept analysis of renal supportive care.
Background. Approximately 1.5 million people worldwide are kept alive by renal dialysis. As services are required to support patients who decide not to start or to withdraw from dialysis, the term renal supportive care is emerging. Being similar to the terms palliative care, end-of-life care, terminal care and conservative management, there is a need for conceptual clarity.
Method. Rodgers' evolutionary method was used as the organizing framework for this concept analysis. Data were collected from a review of CINAHL, Medline, PsycINFO, British Nursing Index, International Bibliography of the Social Sciences and ASSIA (1806-2006) using 'renal' and 'supportive care' as keywords. All articles with an abstract were considered. The World Wide Web was also searched in English using the phrase 'renal supportive care'.
Results. Five attributes of renal supportive care were identified: available from diagnosis to death, with an emphasis on honesty regarding prognosis and the impact of disease; an interdisciplinary approach to care; restorative care; family and carer support; and effective, lucid communication to ensure informed choice and clear lines of decision-making.
Conclusion. Renal supportive care is a dynamic and emerging concept relevant to, but not limited to, the end phase of life. It suggests a central philosophy underpinning renal service development that allows patients, carers and the multidisciplinary team time to work together to realize complex goals. It has relevance for the renal community and is likely to be integrated increasingly into everyday nephrology practice.
Abstract:
A service is a remote computational facility which is made available for general use by means of a wide-area network. Several types of service arise in practice: stateless services, shared-state services and services whose state is customised for individual users. A service-based orchestration is a multi-threaded computation which invokes remote services in order to deliver results back to a user (termed publication). In this paper a means of specifying services and reasoning about the correctness of orchestrations over stateless services is presented. As web services are potentially unreliable, the termination of even finite orchestrations cannot be guaranteed. For this reason a partial-correctness powerdomain approach is proposed to capture the semantics of recursive orchestrations.
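To make the notion of an orchestration over potentially unreliable, stateless services concrete, a minimal Python sketch is given below. It is not the paper's formalism; the service URLs, the fetch helper and the timeout values are illustrative assumptions. Several remote services are invoked in parallel and whatever results are obtained are published back to the user, mirroring the partial-correctness view: some calls may fail or never answer, so only partial results may be delivered.

```python
# A minimal sketch of a service-based orchestration over stateless services.
# Service URLs, the fetch helper and the timeouts are illustrative assumptions.
import concurrent.futures
import urllib.request

SERVICES = [
    "http://example.org/serviceA",   # hypothetical stateless services
    "http://example.org/serviceB",
    "http://example.org/serviceC",
]

def invoke(url: str, timeout: float = 5.0) -> str:
    """Call one remote service; it may fail or hang, so bound the wait."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return resp.read().decode("utf-8")

def orchestrate(publish) -> None:
    """Invoke the services in parallel and publish whatever completes.

    Web services are potentially unreliable, so some calls may raise or time
    out; the orchestration publishes only the partial results it obtains.
    """
    with concurrent.futures.ThreadPoolExecutor() as pool:
        futures = {pool.submit(invoke, url): url for url in SERVICES}
        for fut in concurrent.futures.as_completed(futures):
            try:
                publish(futures[fut], fut.result())
            except Exception as exc:        # unreliable service: record and continue
                publish(futures[fut], f"failed: {exc}")

if __name__ == "__main__":
    orchestrate(lambda url, value: print(url, "->", value))
```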
Abstract:
The major current commercial applications of semiconductor photochemistry promoted on the World Wide Web are reviewed. The basic principles behind the different applications are discussed, including the use of semiconductor photochemistry to photo-mineralise organics, photo-sterilise and photo-demist. The range of companies, and their products, that utilise semiconductor photochemistry is examined and typical examples are listed. An analysis of the geographical distribution of current commercial activity in this area is made. The results indicate that commercial activity in this area is growing world-wide, but is especially strong in Japan. The number and geographical distribution of patents in semiconductor photocatalysis are also commented on. The trends in the numbers of US and Japanese patents over the last 6 years are discussed. (C) 2002 Elsevier Science B.V. All rights reserved.
Abstract:
Web sites that rely on databases for their content are now ubiquitous. Query result pages are dynamically generated from these databases in response to user-submitted queries. Automatically extracting structured data from query result pages is a challenging problem, as the structure of the data is not explicitly represented. While humans have shown good intuition in visually understanding data records on a query result page as displayed by a web browser, no existing approach to data record extraction has made full use of this intuition. We propose a novel approach, in which we make use of the common sources of evidence that humans use to understand data records on a displayed query result page. These include structural regularity, and visual and content similarity between data records displayed on a query result page. Based on these observations we propose new techniques that can identify each data record individually, while ignoring noise items, such as navigation bars and adverts. We have implemented these techniques in a software prototype, rExtractor, and tested it using two datasets. Our experimental results show that our approach achieves significantly higher accuracy than previous approaches. Furthermore, it establishes the case for the use of vision-based algorithms in the context of data extraction from web sites.
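A minimal sketch, not rExtractor itself, of the kind of evidence the abstract describes is given below: candidate blocks from a rendered query result page are grouped by visual regularity (left alignment, similar height) and content similarity, and singleton groups such as navigation bars and adverts are discarded. The Block structure, similarity measures and thresholds are illustrative assumptions.

```python
# Group rendered page blocks into repeated record structures; drop singletons.
from dataclasses import dataclass
from difflib import SequenceMatcher
from typing import List

@dataclass
class Block:
    x: int          # left edge of the rendered bounding box, in pixels
    y: int          # top edge
    height: int     # rendered height
    text: str       # visible text content

def visually_similar(a: Block, b: Block, x_tol: int = 5, h_tol: int = 15) -> bool:
    """Records of the same list tend to be left-aligned and of similar height."""
    return abs(a.x - b.x) <= x_tol and abs(a.height - b.height) <= h_tol

def content_similar(a: Block, b: Block, threshold: float = 0.3) -> bool:
    """A crude string ratio stands in for shared token structure between records."""
    return SequenceMatcher(None, a.text, b.text).ratio() >= threshold

def extract_records(blocks: List[Block]) -> List[List[Block]]:
    groups: List[List[Block]] = []
    for blk in sorted(blocks, key=lambda b: b.y):      # top-to-bottom reading order
        for group in groups:
            if visually_similar(group[0], blk) and content_similar(group[0], blk):
                group.append(blk)
                break
        else:
            groups.append([blk])
    # keep only repeated structures; singletons are likely noise (nav bars, ads)
    return [g for g in groups if len(g) > 1]
```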
Abstract:
The global ETF industry provides investment vehicles that are more complicated than low-cost index trackers. We find that the real investments of ETFs that do not fully replicate their benchmarks may deviate from those benchmarks to leverage informational advantages (which leads to a surprising stock-selection ability), to benefit from the securities lending market, to support ETF-affiliated banks’ stock prices, and to help affiliated OEFs through cross-trading. These effects are more prevalent in ETFs domiciled in Europe. Market awareness of such additional risk is reflected in ETF outflows. These results have important normative implications for consumer protection and financial stability.
Abstract:
Automatically determining and assigning shared and meaningful text labels to data extracted from an e-Commerce web page is a challenging problem. An e-Commerce web page can display a list of data records, each of which can contain a combination of data items (e.g. product name and price) and explicit labels, which describe some of these data items. Recent advances in extraction techniques have made it much easier to precisely extract individual data items and labels from a web page; however, two problems remain open: 1. assigning an explicit label to a data item, and 2. determining labels for the remaining data items. Furthermore, improvements in the availability and coverage of vocabularies, especially in the context of e-Commerce web sites, mean that we now have access to a bank of relevant, meaningful and shared labels which can be assigned to extracted data items. However, there is a need for a technique which takes as input a set of extracted data items and automatically assigns to them the most relevant and meaningful labels from a shared vocabulary. We observe that the Information Extraction (IE) community has developed a great number of techniques which solve problems similar to our own. In this work-in-progress paper we propose to evaluate, theoretically and experimentally, different IE techniques to ascertain which is most suitable for solving this problem.
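Since the paper proposes evaluating existing IE techniques rather than committing to one, the sketch below shows only one simple candidate: pattern-based labelling of extracted data items against a small shared vocabulary, with any explicit on-page label taking precedence. The vocabulary, patterns and example items are illustrative assumptions.

```python
# Assign a vocabulary label to each extracted data item, preferring an explicit
# label found next to the item on the page; patterns here are illustrative.
import re
from typing import Dict, List, Optional, Tuple

# Hypothetical shared vocabulary of labels with regular-expression evidence.
VOCABULARY: Dict[str, re.Pattern] = {
    "price":        re.compile(r"^[£$€]\s?\d+(\.\d{2})?$"),
    "availability": re.compile(r"in stock|out of stock", re.IGNORECASE),
    "rating":       re.compile(r"^\d(\.\d)? out of 5$"),
}

def assign_label(item: str, explicit_label: Optional[str] = None) -> str:
    """Use the explicit label if present, else the first matching vocabulary label."""
    if explicit_label:
        return explicit_label.lower().rstrip(":")
    for label, pattern in VOCABULARY.items():
        if pattern.search(item):
            return label
    return "product name"          # default guess for unlabelled free text

if __name__ == "__main__":
    extracted: List[Tuple[str, Optional[str]]] = [
        ('Acme Laptop 14"', None),
        ("£499.99", None),
        ("In stock", None),
        ("4.5 out of 5", "Rating:"),
    ]
    for value, explicit in extracted:
        print(f"{value!r:25} -> {assign_label(value, explicit)}")
```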
Abstract:
We present a new version of the UMIST Database for Astrochemistry, the fourth such version to be released to the public. The current version contains some 4573 binary gas-phase reactions, an increase of 10% over the previous (1999) version, among 420 species, of which 23 are new to the database. Major updates have been made to ion-neutral reactions, neutral-neutral reactions, particularly at low temperature, and dissociative recombination reactions. We have included for the first time the interstellar chemistry of fluorine. In addition to the usual database, we have also released a reaction set in which the effects of dipole-enhanced ion-neutral rate coefficients are included. These two reaction sets have been used in a dark cloud model and the results of these models are presented and briefly discussed. The database and associated software are available on the World Wide Web at www.udfa.net. Tables 1, 2, 4 and 9 are only available in electronic form at http://www.aanda.org
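For illustration, each two-body gas-phase reaction in such a database is commonly tabulated as three parameters (alpha, beta, gamma) over a valid temperature range, with the rate coefficient given by a modified Arrhenius form. The sketch below evaluates that form with placeholder parameter values, not actual entries from the database.

```python
# Evaluate the modified Arrhenius form commonly used for two-body reactions:
# k(T) = alpha * (T / 300 K)**beta * exp(-gamma / T), in cm^3 s^-1.
# The parameter values below are placeholders, not database entries.
import math

def rate_coefficient(alpha: float, beta: float, gamma: float, temperature: float) -> float:
    return alpha * (temperature / 300.0) ** beta * math.exp(-gamma / temperature)

if __name__ == "__main__":
    # Hypothetical reaction entry evaluated at a dark-cloud temperature of 10 K.
    print(rate_coefficient(alpha=1.0e-9, beta=-0.5, gamma=0.0, temperature=10.0))
```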
Abstract:
A combination of linkage analyses and association studies is currently employed to promote the identification of genetic factors contributing to inherited renal disease. We have standardized and merged complex genetic data from disparate sources, creating unique chromosomal maps to enhance genetic epidemiological investigations. This database and the novel renal maps effectively summarize genomic regions of suggested linkage, association, or chromosomal abnormalities implicated in renal disease. Chromosomal regions associated with potential intermediate clinical phenotypes have been integrated, adding support for particular genomic intervals. More than 500 reports from medical databases, published scientific literature, and the World Wide Web were interrogated for relevant renal-related information. Chromosomal regions highlighted for prioritized investigation of renal complications include 3q13-26, 6q22-27, 10p11-15, 16p11-13, and 18q22. Combined genetic and physical maps are effective tools for organizing genetic data for complex diseases. These renal chromosome maps provide insights into renal phenotype-genotype relationships and act as a template for future genetic investigations into complex renal diseases. New data from individual researchers and/or future publications can be readily incorporated into this resource via a user-friendly web form accessed from the website: www.qub.ac.uk/neph-res/CORGI/index.php.
Abstract:
We report a new version of the UMIST database for astrochemistry. The previous (1995) version has been updated and its format has been revised. The database contains the rate coefficients, temperature ranges and - where available - the temperature dependence of 4113 gas-phase reactions important in astrophysical environments. The data involve 396 species and 12 elements. We have also tabulated permanent electric dipole moments of the neutral species and heats of formation. A new table lists the photo-process cross sections (ionisation, dissociation, fragmentation) for a few species for which these quantities have been measured. Data for deuterium fractionation are given in a separate table. Finally, a new online Java applet for data extraction has been created and its use is explained in detail. The new data files and associated software are available on the World Wide Web at http://www.rate99.co.uk.
Abstract:
In distributed networks, it is often useful for the nodes to be aware of dense subgraphs; for example, a dense subgraph could reveal dense substructures in otherwise sparse graphs (e.g. the World Wide Web or social networks), which might correspond to community clusters or dense regions suitable for maintaining good communication infrastructure. In this work, we address the problem of self-awareness of nodes in a dynamic network with regard to graph density, i.e., we give distributed algorithms for maintaining dense subgraphs that the member nodes are aware of. The only knowledge that the nodes need is the dynamic diameter D, i.e., the maximum number of rounds it takes for a message to traverse the dynamic network. For our work, we consider a model where the number of nodes is fixed, but a powerful adversary can add or remove a limited number of edges from the network at each time step. The communication is by broadcast only and follows the CONGEST model. Our algorithms are continuously executed on the network, and at any time (after some initialization) each node is aware of whether or not it is part of a particular dense subgraph. We give algorithms that (2 + ε)-approximate the densest subgraph and (3 + ε)-approximate the at-least-k-densest subgraph (for a given parameter k). Our algorithms work for a wide range of parameter values and run in O(D log n) time. Further, a special case of our results also gives the first fully decentralized approximation algorithms for the densest and at-least-k-densest subgraph problems for static distributed graphs. © 2012 Springer-Verlag.
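The paper's algorithms run distributed under the CONGEST model; as a point of reference only, the sketch below shows the classical centralized greedy peeling procedure for approximating the densest subgraph (density = edges / nodes): repeatedly remove a minimum-degree node and remember the densest intermediate subgraph. The graph representation and the example graph are illustrative assumptions.

```python
# Centralized greedy peeling for the densest-subgraph problem (a classical
# 2-approximation), shown only as a sequential reference point.
from typing import Dict, Set, Tuple

def densest_subgraph(adj: Dict[int, Set[int]]) -> Tuple[Set[int], float]:
    adj = {u: set(nbrs) for u, nbrs in adj.items()}       # work on a copy
    edges = sum(len(nbrs) for nbrs in adj.values()) // 2
    best_nodes, best_density = set(adj), edges / max(len(adj), 1)
    while len(adj) > 1:
        u = min(adj, key=lambda v: len(adj[v]))            # minimum-degree node
        edges -= len(adj[u])
        for v in adj[u]:
            adj[v].discard(u)
        del adj[u]
        density = edges / len(adj)
        if density > best_density:
            best_nodes, best_density = set(adj), density
    return best_nodes, best_density

if __name__ == "__main__":
    # A dense clique {1,2,3,4} attached to a sparse path 4-5-6.
    graph = {1: {2, 3, 4}, 2: {1, 3, 4}, 3: {1, 2, 4},
             4: {1, 2, 3, 5}, 5: {4, 6}, 6: {5}}
    print(densest_subgraph(graph))    # the clique, with density 1.5
```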
Abstract:
Product recommendation is an important aspect of many e-commerce systems. It provides an effective way to help users navigate complex product spaces. In this paper, we focus on critiquing-based recommenders. We present a new critiquing-based approach, History-Guided Recommendation (HGR), which can use the recommendation pairs (item and critique), or the critiques only, made so far in the current recommendation session to predict the most likely product recommendations, thereby short-cutting the sometimes protracted recommendation sessions of standard critiquing approaches. The HGR approach shows a significant improvement in the interaction between the user and the recommender. It also enables successfully accepted recommendations to be made much earlier in the session.
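As a rough illustration of the critiquing idea the paper builds on, not the HGR algorithm itself, the sketch below treats each critique made so far in the session as a constraint on a product attribute and ranks the remaining candidates by how many of those critiques they satisfy. The product attributes and the example catalogue are assumptions.

```python
# Rank catalogue items by compatibility with the critiques made so far.
import operator
from typing import Dict, List, Tuple

OPS = {"<": operator.lt, ">": operator.gt, "=": operator.eq}

# A critique is (attribute, relation, reference value), e.g. ("price", "<", 800).
Critique = Tuple[str, str, float]

def score(product: Dict[str, float], history: List[Critique]) -> int:
    """Number of session critiques this candidate satisfies."""
    return sum(1 for attr, rel, ref in history if OPS[rel](product[attr], ref))

def recommend(catalogue: List[Dict[str, float]], history: List[Critique]) -> Dict[str, float]:
    """Return the candidate most compatible with the critique history so far."""
    return max(catalogue, key=lambda p: score(p, history))

if __name__ == "__main__":
    catalogue = [
        {"id": 1, "price": 900, "screen": 15},
        {"id": 2, "price": 700, "screen": 13},
        {"id": 3, "price": 650, "screen": 15},
    ]
    history: List[Critique] = [("price", "<", 800), ("screen", ">", 14)]
    print(recommend(catalogue, history))   # item 3 satisfies both critiques
```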
Abstract:
Predicting the next location of a user based on their previous visiting pattern is one of the primary tasks over data from location-based social networks (LBSNs) such as Foursquare. Many different aspects of these so-called “check-in” profiles of a user have been used for this task, including the spatial and temporal information of check-ins as well as the social network information of the user. Building more sophisticated prediction models by enriching these check-in data with information from other sources is challenging because of the limited data that LBSNs expose owing to privacy concerns. In this paper, we propose a framework that uses the location data from LBSNs and combines it with data from maps in order to associate a set of venue categories with these locations. For example, if the user is found to be checking in at a mall that, according to the map, contains cafes, cinemas and restaurants, all of these categories are associated with the check-in. This category information is then leveraged to predict the user's next check-in location. Our experiments with a publicly available check-in dataset show that this approach improves on state-of-the-art methods for location prediction.
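A minimal sketch of the idea described above, not the paper's model, is given below: each check-in venue is associated with the categories a map source reports for it, a first-order transition count over categories is learned from the user's check-in history, and candidate venues are scored by how likely their categories are to follow those of the current venue. The venue names, categories and history are illustrative assumptions.

```python
# Next check-in prediction from category transitions; all data is illustrative.
from collections import Counter
from typing import Dict, List

# Hypothetical map-derived association: venue -> categories found at that venue.
VENUE_CATEGORIES: Dict[str, List[str]] = {
    "Riverside Mall": ["cafe", "cinema", "restaurant"],
    "Old Town Cafe":  ["cafe"],
    "Grand Cinema":   ["cinema"],
    "Harbour Grill":  ["restaurant"],
}

def category_transitions(history: List[str]) -> Counter:
    """Count category-to-category transitions along the check-in sequence."""
    counts: Counter = Counter()
    for prev, nxt in zip(history, history[1:]):
        for c_prev in VENUE_CATEGORIES[prev]:
            for c_next in VENUE_CATEGORIES[nxt]:
                counts[(c_prev, c_next)] += 1
    return counts

def predict_next(history: List[str], candidates: List[str]) -> str:
    """Score each candidate by how often its categories follow the current venue's."""
    counts = category_transitions(history)
    current = history[-1]
    def venue_score(venue: str) -> int:
        return sum(counts[(c_prev, c_next)]
                   for c_prev in VENUE_CATEGORIES[current]
                   for c_next in VENUE_CATEGORIES[venue])
    return max(candidates, key=venue_score)

if __name__ == "__main__":
    past = ["Old Town Cafe", "Grand Cinema", "Riverside Mall",
            "Grand Cinema", "Harbour Grill", "Grand Cinema"]
    print(predict_next(past, ["Harbour Grill", "Old Town Cafe"]))
```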