81 results for automated resource discovery
at University of Queensland eSpace - Australia
Abstract:
While others have attempted to determine, by way of mathematical formulae, optimal resource duplication strategies for random walk protocols, this paper is concerned with studying the emergent effects of dynamic resource propagation and replication. In particular, we show, via modelling and experimentation, that under any given decay (purge) rate the number of nodes that have knowledge of a particular resource converges to a fixed point or a limit cycle. We also show that even for high rates of decay - that is, when few nodes have knowledge of a particular resource - the number of hops required to find that resource is small.
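As a rough illustration of the convergence claim, the Python sketch below simulates a toy propagation/decay process. The node count, spread probability, and purge probability are illustrative assumptions, not the paper's model; the point is only that the informed-node count settles around a fixed point.

import random

def simulate(num_nodes=1000, spread_p=0.3, purge_p=0.1, steps=200, seed=42):
    # Toy model: each step, every node that knows the resource replicates
    # its record to one random node with probability spread_p, and purges
    # its own copy with probability purge_p (the decay rate).
    rng = random.Random(seed)
    informed = set(range(5))  # a few seed nodes hold the resource record
    history = []
    for _ in range(steps):
        gained = {rng.randrange(num_nodes)
                  for n in informed if rng.random() < spread_p}
        lost = {n for n in informed if rng.random() < purge_p}
        informed = (informed | gained) - lost
        history.append(len(informed))
    return history

print(simulate()[-10:])  # tail of the trajectory hovers near a fixed point

With these illustrative rates the equilibrium sits roughly where replication into uninformed nodes balances purging, i.e. spread_p * (1 - k/N) = purge_p, giving k ≈ 2N/3 here.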
Abstract:
Arguably, the world has become one large pervasive computing environment. Our planet is growing a digital skin of a wide array of sensors, hand-held computers, mobile phones, laptops, web services and publicly accessible web-cams. Often, these devices and services are deployed in groups, forming small communities of interacting devices. Service discovery protocols allow processes executing on each device to discover services offered by other devices within the community. These communities can be linked together to form a wide-area pervasive environment, allowing processes in one group to interact with services in another. However, the costs of communication and the protocols by which this communication is mediated in the wide-area differ from those of intra-group, or local-area, communication. Communication is an expensive operation for small, battery-powered devices, but it is less expensive for servers and workstations, which have a constant power supply and are connected to high-bandwidth networks. This paper introduces Superstring, a peer-to-peer service discovery protocol optimised for use in the wide-area. Its goals are to minimise computation and memory overhead in the face of large numbers of resources. It achieves this memory and computation scalability by distributing the storage cost of service descriptions and the computation cost of queries over multiple resolvers.
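As a minimal sketch of that scalability idea (spreading storage and query cost over several resolvers), the snippet below hashes each service key to an owning resolver. The class, resolver names, and service keys are hypothetical; this is not the actual Superstring protocol.

import hashlib

class ResolverPool:
    # Toy illustration: service descriptions are spread over resolvers by
    # hashing the service key, so no single node stores every description
    # and a query is handled by one resolver rather than all of them.
    def __init__(self, resolvers):
        self.resolvers = list(resolvers)
        self.tables = {r: {} for r in self.resolvers}

    def _owner(self, key):
        digest = hashlib.sha1(key.encode("utf-8")).digest()
        return self.resolvers[int.from_bytes(digest[:4], "big") % len(self.resolvers)]

    def register(self, service_key, description):
        self.tables[self._owner(service_key)][service_key] = description

    def resolve(self, service_key):
        return self.tables[self._owner(service_key)].get(service_key)

pool = ResolverPool(["resolver-a", "resolver-b", "resolver-c"])
pool.register("printer/building-7", "ipp://10.0.7.2:631")
print(pool.resolve("printer/building-7"))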
Abstract:
Manual curation has long been held to be the gold standard for functional annotation of DNA sequence. Our experience with the annotation of more than 20,000 full-length cDNA sequences revealed problems with this approach, including inaccurate and inconsistent assignment of gene names, as well as many good assignments that were difficult to reproduce using only computational methods. For the FANTOM2 annotation of more than 60,000 cDNA clones, we developed a number of methods and tools to circumvent some of these problems, including an automated annotation pipeline that provides high-quality preliminary annotation for each sequence by applying an 'uninformative' filter that eliminates uninformative annotations, controlled vocabularies that accurately reflect both the functional assignments and the evidence supporting them, and a highly refined, Web-based manual annotation tool that allows users to view a wide array of sequence analyses and to assign gene names and putative functions using a consistent nomenclature. The ultimate utility of our approach is reflected in the low rate of reassignment of automated assignments by manual curation. Based on these results, we propose a new standard for large-scale annotation, in which the initial automated annotations are manually investigated and then computational methods are iteratively modified and improved based on the results of manual curation.
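To make the 'uninformative' filter step concrete, here is a hedged Python sketch. The patterns are illustrative guesses at what counts as an uninformative description, not the FANTOM2 rule set.

import re

# Hypothetical patterns for hit descriptions that carry no functional information.
UNINFORMATIVE = re.compile(
    r"(hypothetical|unknown|unnamed|uncharacteri[sz]ed|"
    r"expressed sequence tag|\bEST\b|cDNA clone)",
    re.IGNORECASE)

def informative_hits(hit_descriptions):
    # Drop uninformative descriptions so the automated pipeline only
    # propagates annotations that actually say something about function.
    return [d for d in hit_descriptions if not UNINFORMATIVE.search(d)]

hits = ["hypothetical protein LOC12345",
        "ATP-binding cassette transporter ABCA1",
        "unknown EST, clone IMAGE:98765"]
print(informative_hits(hits))  # -> ['ATP-binding cassette transporter ABCA1']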
Abstract:
This paper reports on a system for automated agent negotiation, based on a formal and executable approach to capture the behavior of parties involved in a negotiation. It uses the JADE agent framework, and its major distinctive feature is the use of declarative negotiation strategies. The negotiation strategies are expressed in a declarative rules language, defeasible logic, and are applied using the implemented system DR-DEVICE. The key ideas and the overall system architecture are described, and a particular negotiation case is presented in detail.
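The Python fragment below sketches the core defeasible-logic idea behind such declarative strategies. The facts, rule names, and superiority pairs are invented for illustration, and the semantics is heavily simplified; DR-DEVICE's actual rule language is far richer.

# A conclusion holds defeasibly if some applicable rule supports it and
# every applicable attacking rule is weaker under the superiority relation.
facts = {"bulk_order", "low_stock"}

rules = [  # (name, body, head); "~" marks negation
    ("r1", {"bulk_order"}, "give_discount"),
    ("r2", {"low_stock"}, "~give_discount"),
]
superiority = {("r2", "r1")}  # r2 overrides r1 when both apply

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def defeasibly_holds(goal):
    for name, body, head in rules:
        if head == goal and body <= facts:
            attackers = [n for n, b, h in rules
                         if h == negate(goal) and b <= facts]
            if all((name, n) in superiority for n in attackers):
                return True
    return False

print(defeasibly_holds("give_discount"))   # False: r2 defeats r1
print(defeasibly_holds("~give_discount"))  # True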
Abstract:
The discovery of Neptune in September 1846 is a good example of scientific rationality but it has proven to be surprisingly difficult to explain how this is so from the perspective of Karl Popper, Thomas Kuhn and several other commentators who have been influenced by these thinkers. I try briefly to explain how to avoid these difficulties and understand the achievement of the astronomers who predicted the location of the new planet, Urbain J. J. Leverrier and John Couch Adams.
Abstract:
Knowledge of residual perturbations in the orbit of Uranus in the early 1840s did not lead to the refutation of Newton's law of gravitation but instead to the discovery of Neptune in 1846. Karl Popper asserts that this case is atypical of science and that the law of gravitation was at least prima facie falsified by these perturbations. I argue that these assertions are the product of a false, a priori methodological position I call 'Weak Popperian Falsificationism' (WPF). Further, on the evidence the law was not prima facie false and was not generally considered so by astronomers at the time. Many of Popper's commentators (Kuhn, Lakatos, Feyerabend and others) presuppose WPF, and their views on this case and its implications for scientific rationality and method suffer from this same defect.
Abstract:
A meeting was convened in Canberra, Australia, at the request of the Australian Drug Evaluation Committee (ADEC), on December 3-4, 1997, to discuss the role of population pharmacokinetics and pharmacodynamics in drug evaluation and development. The ADEC was particularly concerned about registration of drugs in the pediatric age group. The population approach could be used more often than is currently the case in pharmacokinetic and pharmacodynamic studies to provide valuable information for the safe and effective use of drugs in neonates, infants, and children. The meeting ultimately broadened to include discussion about other subgroups. The main conclusions of the meeting were:
1. The population approach to pharmacokinetic and pharmacodynamic analysis is a valuable tool both for drug registration purposes and for optimal dosing of drugs in specific groups of patients.
2. Population pharmacokinetic and pharmacodynamic studies are able to fill in the 'gaps' in registration of drugs, for example, by providing information on optimal pediatric dosing. Such studies provide a basis for enhancing product information to improve rational prescribing.
3. Expertise is required to perform population studies, and expertise with a clinical perspective is also required to evaluate such studies if they are to be submitted as part of a drug registration dossier. Such expertise is available in the Australasian region and is increasing. Centers of excellence with the appropriate expertise to advise and assist should be encouraged to develop and grow in the region.
4. The use of the population approach by the pharmaceutical industry needs to be encouraged, to provide valuable information not obtainable by other techniques. The acceptance of population pharmacokinetic and pharmacodynamic analyses by regulatory agencies also needs to be encouraged.
5. Development of the population approach to pharmacokinetics and pharmacodynamics is needed from a public health perspective to ensure that all available information is collected and used to improve the way drugs are used. This important endeavor needs funding and support at the local and international levels.
Abstract:
An automated method for extracting brain volumes from three commonly acquired three-dimensional (3D) MR images (proton density, T1-weighted, and T2-weighted) of the human head is described. The procedure is divided into four levels: preprocessing, segmentation, scalp removal, and postprocessing. A user-provided reference point is the sole operator-dependent input required. The method's parameters were first optimized, then fixed and applied to 30 repeat data sets from 15 normal older adult subjects to investigate its reproducibility. Percent differences between total brain volumes (TBVs) for the subjects' repeated data sets ranged from 0.5% to 2.2%. We conclude that the method is both robust and reproducible and has the potential for wide application.
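For reference, the reproducibility figure quoted above is a percent difference between repeated total brain volume (TBV) measurements. A minimal Python version under one plausible definition (difference relative to the mean; the paper may normalise differently, and the volumes below are made-up illustrative numbers, not the study's data) is:

def percent_difference(v1, v2):
    # Absolute difference between two repeated TBV measurements,
    # expressed relative to their mean.
    return abs(v1 - v2) / ((v1 + v2) / 2.0) * 100.0

scan1, scan2 = 1182.0, 1167.5  # illustrative volumes in cm^3
print(f"{percent_difference(scan1, scan2):.2f}%")  # ~1.23%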