55 results for Secure European System for Applications in a Multi-Vendor Environment (SESAME)
in University of Queensland eSpace - Australia
Abstract:
This paper presents the unique collection of additional features of Qu-Prolog, a variant of the AI programming language Prolog, and illustrates how they can be used for implementing DAI applications. By this we mean applications comprising communicating information servers, expert systems, or agents, with sophisticated reasoning capabilities and internal concurrency. Such an application exploits the key features of Qu-Prolog: support for the programming of sound non-clausal inference systems, multi-threading, and high-level inter-thread message communication between Qu-Prolog query threads anywhere on the Internet. The inter-thread communication uses email-style symbolic names for threads, allowing easy construction of distributed applications using public names for threads. How threads react to received messages is specified by a disjunction of reaction rules which the thread periodically executes. A communications API allows smooth integration of components written in C, which, to Qu-Prolog, look like remote query threads.
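The thread model this abstract describes (symbolically named threads that repeatedly try a disjunction of reaction rules against each received message) can be sketched outside Qu-Prolog. The following Python analogue is illustrative only: the names `spawn`, `send`, and the rule functions are invented here and are not Qu-Prolog's actual API, and a local dictionary stands in for Qu-Prolog's Internet-wide symbolic naming.

```python
import queue
import threading

# Registry mapping symbolic (email-style) names to mailboxes.
# In Qu-Prolog the naming and transport are built in and work across
# the Internet; this dict is a purely local stand-in for illustration.
registry = {}

def spawn(name, rules):
    """Start a thread that repeatedly takes a message from its mailbox
    and tries each reaction rule in turn -- a 'disjunction' of rules:
    the first rule that accepts the message handles it."""
    mailbox = queue.Queue()
    registry[name] = mailbox

    def loop():
        while True:
            msg = mailbox.get()
            if msg == "stop":
                break
            for rule in rules:   # try rules in order
                if rule(msg):    # a rule returns True if it reacted
                    break

    t = threading.Thread(target=loop)
    t.start()
    return t

def send(name, msg):
    """Deliver a message to the thread registered under a symbolic name."""
    registry[name].put(msg)

# Example: an 'echo' agent with two reaction rules.
replies = queue.Queue()

def on_ping(m):
    if m.startswith("ping"):
        replies.put("pong")
        return True
    return False

def on_other(m):            # default rule: reacts to anything
    replies.put("ignored:" + m)
    return True

t = spawn("echo@localhost", [on_ping, on_other])
send("echo@localhost", "ping 1")
send("echo@localhost", "hello")
send("echo@localhost", "stop")
t.join()
```

The ordered rule list mirrors the abstract's "disjunction of reaction rules": each message is matched against the rules in turn until one fires.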
Abstract:
The enormous amount of information generated through sequencing of the human genome has increased demands for more economical and flexible alternatives in genomics, proteomics and drug discovery. Many companies and institutions have recognised the potential of increasing the size and complexity of chemical libraries by producing large chemical libraries on colloidal support beads. Since colloid-based compounds in a suspension are randomly located, an encoding system such as optical barcoding is required to permit rapid elucidation of the compound structures. In this article we describe innovative methods for optical barcoding of colloids for use as support beads in both combinatorial and non-combinatorial libraries. We focus in particular on the difficult problem of barcoding extremely large libraries, which, if solved, will transform the manner in which genomics, proteomics and drug discovery research is currently performed.
Abstract:
Granulation is one of the fundamental operations in particulate processing and has a very ancient history and widespread use. Much fundamental particle science has occurred in the last two decades to help understand the underlying phenomena. Yet, until recently, the development of granulation systems was mostly based on popular practice. The use of process systems approaches to the integrated understanding of these operations is providing improved insight into the complex nature of the processes. Improved mathematical representations, new solution techniques and the application of the models to industrial processes are yielding better designs, improved optimisation and tighter control of these systems. The parallel development of advanced instrumentation and the use of inferential approaches provide real-time access to system parameters necessary for improvements in operation. The use of advanced models to help develop real-time plant diagnostic systems provides further evidence of the utility of process system approaches to granulation processes. This paper highlights some of those aspects of granulation.
Abstract:
Vector error-correction models (VECMs) have become increasingly important in their application to financial markets. Standard full-order VECM models assume non-zero entries in all their coefficient matrices. However, applications of VECM models to financial market data have revealed that zero entries are often a necessary part of efficient modelling. In such cases, the use of full-order VECM models may lead to incorrect inferences. Specifically, if indirect causality or Granger non-causality exists among the variables, the use of over-parameterised full-order VECM models may weaken the power of statistical inference. In this paper, it is argued that the zero–non-zero (ZNZ) patterned VECM is a more straightforward and effective means of testing for both indirect causality and Granger non-causality. For a ZNZ patterned VECM framework for time series integrated of order two, we provide a new algorithm to select cointegrating and loading vectors that can contain zero entries. Two case studies are used to demonstrate the usefulness of the algorithm in tests of purchasing power parity and a three-variable system involving the stock market.
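The role of zero entries can be seen in a minimal simulation of the error-correction form Δy_t = αβ′y_{t−1} + ε_t. The sketch below is illustrative only and is not the paper's selection algorithm; the coefficient values are invented for the example. A zero entry in the loading vector α means that variable never adjusts toward the cointegrating relation, which is the structural fact underlying the Granger non-causality tests discussed above.

```python
import numpy as np

rng = np.random.default_rng(0)

beta = np.array([[1.0], [-1.0]])   # cointegrating vector: the spread y1 - y2
alpha = np.array([[-0.5], [0.0]])  # ZNZ loading: the zero row means y2 never adjusts

T = 2000
y = np.zeros((T, 2))
for t in range(1, T):
    ect = beta.T @ y[t - 1]                    # error-correction term beta' y_{t-1}
    y[t] = y[t - 1] + (alpha @ ect).ravel() + rng.normal(0, 0.1, 2)

# Because alpha's second entry is zero, y2 evolves as a pure random walk,
# while y1 is pulled back toward y2, so the spread stays bounded.
spread = y[:, 0] - y[:, 1]
print("spread std:", round(float(np.std(spread)), 2))
```

Substituting the loadings shows why: the spread follows Δ(spread)_t = −0.5·(spread)_{t−1} + noise, a stationary AR process, even though each series individually is nonstationary.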
Abstract:
This paper proposes an architecture for pervasive computing which utilizes context information to provide adaptations based on vertical handovers (handovers between heterogeneous networks) while supporting application Quality of Service (QoS). The future of mobile computing will see an increase in ubiquitous network connectivity which allows users to roam freely between heterogeneous networks. One of the requirements for pervasive computing is to adapt computing applications or their environment if current applications can no longer be provided with the requested QoS. One possible adaptation is a vertical handover to a different network. Vertical handover operations include changing network interfaces on a single device or changes between different devices. Such handovers should be performed with minimal user distraction and minimal violation of communication QoS for user applications. The solution utilises context information regarding user devices, user location, application requirements, and network environment. The paper shows how vertical handover adaptations are incorporated into the whole infrastructure of a pervasive system.
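The context-driven handover decision can be sketched as a simple policy function. This is a minimal sketch under invented assumptions, not the paper's architecture: the attribute set (bandwidth, latency) is a stand-in for the richer context information (devices, location, application requirements, network environment) the abstract lists, and all names below are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Network:
    name: str
    bandwidth_mbps: float
    latency_ms: float

@dataclass
class AppQoS:
    min_bandwidth_mbps: float
    max_latency_ms: float

def choose_network(current, candidates, qos):
    """Return the network to use: keep the current one if it still meets
    the application's QoS (avoiding needless handovers and hence user
    distraction); otherwise pick a viable candidate for a vertical handover."""
    def meets_qos(n):
        return (n.bandwidth_mbps >= qos.min_bandwidth_mbps
                and n.latency_ms <= qos.max_latency_ms)

    if meets_qos(current):
        return current                     # no handover needed
    viable = [n for n in candidates if meets_qos(n)]
    if not viable:
        return current                     # degrade in place rather than churn
    # Vertical handover: choose the viable network with the most headroom.
    return max(viable, key=lambda n: n.bandwidth_mbps)

wlan = Network("wlan", 0.3, 40.0)          # congested WLAN can no longer serve video
umts = Network("umts", 2.0, 120.0)
lan = Network("lan", 100.0, 5.0)
video = AppQoS(min_bandwidth_mbps=1.0, max_latency_ms=150.0)
print(choose_network(wlan, [umts, lan], video).name)  # -> lan
```

Preferring the current network whenever it still satisfies the QoS constraints reflects the abstract's requirement of minimal user distraction: a handover is triggered only when the requested QoS can no longer be met.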
Abstract:
Current debates about educational theory are concerned with the relationship between knowledge and power, and thereby with issues such as who possesses a truth and how they have arrived at it, what questions are important to ask, and how they should best be answered. As such, these debates revolve around questions of preferred, appropriate, and useful theoretical perspectives. This paper overviews the key theoretical perspectives currently used in physical education pedagogy research and considers how these inform the questions we ask and shape the conduct of research. It also addresses what is contested with respect to these perspectives. The paper concludes with some cautions about allegiances to and use of theories, in line with concerns for the applicability of educational research to pressing social issues.
Abstract:
A system of secondary vessels emerging from the primary vessels as numerous coiled capillaries has been described in many teleost and holostean fishes. The systemic secondary vessels of the teleost Tandanus tandanus are typical of this system and are described in this study. The existence of a secondary vessel system has been postulated in the elasmobranch group. No secondary vessel origins, as seen in the teleosts, are present in the elasmobranchs Rhinobatos typus and Carcharhinus melanopterus. Vessels with a distribution similar to that of secondary arteries are observed, but these are venous rather than arterial in nature and do not connect with the primary arteries. Like the secondary veins in teleosts, the cutaneous veins in R. typus contain blood with a low haematocrit. There is no morphological evidence for a secondary vessel system in the dipnoan Neoceratodus forsteri.
Abstract:
The volume of the extracellular compartment (tubular system) within intact muscle fibres from cane toad and rat was measured under various conditions using confocal microscopy. Under physiological conditions at rest, the fractional volume of the tubular system (t-sys(Vol)) was 1.38 ± 0.09% (n = 17), 1.41 ± 0.09% (n = 12) and 0.83 ± 0.07% (n = 12) of the total fibre volume in the twitch fibres from toad iliofibularis muscle, rat extensor digitorum longus muscle and rat soleus muscle, respectively. In toad muscle fibres, the t-sys(Vol) decreased by 30% when the tubular system was fully depolarized and decreased by 15% when membrane cholesterol was depleted from the tubular system with methyl-β-cyclodextrin, but did not change as the sarcomere length was changed from 1.93 to 3.30 μm. There was also an increase by 30% and a decrease by 25% in t-sys(Vol) when toad fibres were equilibrated in solutions that were 2.5-fold hypertonic and 50% hypotonic, respectively. When the changes in total fibre volume were taken into consideration, the t-sys(Vol) expressed as a percentage of the isotonic fibre volume did actually decrease as tonicity increased, revealing that the tubular system in intact fibres cannot be compressed below 0.9% of the isotonic fibre volume. The results can be explained in terms of forces acting at the level of the tubular wall. These observations have important physiological implications, showing that the tubular system is a dynamic membrane structure capable of changing its volume in response to the membrane potential, cholesterol depletion and osmotic stress, but not when the sarcomere length is changed in resting muscle.
Abstract:
The field of protein crystallography inspires and enthrals, whether it be for the beauty and symmetry of a perfectly formed protein crystal, the unlocked secrets of a novel protein fold, or the precise atomic-level detail yielded from a protein-ligand complex. Since 1958, when the first protein structure was solved, there have been tremendous advances in all aspects of protein crystallography, from protein preparation and crystallisation through to diffraction data measurement and structure refinement. These advances have significantly reduced the time required to solve protein crystal structures, while at the same time substantially improving the quality and resolution of the resulting structures. Moreover, the technological developments have induced researchers to tackle ever more complex systems, including ribosomes and intact membrane-bound proteins, with a reasonable expectation of success. In this review, the steps involved in determining a protein crystal structure are described and the impact of recent methodological advances identified. Protein crystal structures have proved to be extraordinarily useful in medicinal chemistry research, particularly with respect to inhibitor design. The precise interaction between a drug and its receptor can be visualised at the molecular level using protein crystal structures, and this information then used to improve the complementarity and thus increase the potency and selectivity of an inhibitor. The use of protein crystal structures in receptor-based drug design is highlighted by (i) HIV protease, (ii) influenza virus neuraminidase and (iii) prostaglandin H2 synthetase.
These represent, respectively, examples of protein crystal structures that (i) influenced the design of drugs currently approved for use in the treatment of HIV infection, (ii) led to the design of compounds currently in clinical trials for the treatment of influenza infection and (iii) could enable the design of highly specific non-steroidal anti-inflammatory drugs that lack the common side-effects of this drug class.