125 results for computing
Abstract:
Formal Concept Analysis is an unsupervised machine learning technique that has been successfully applied to document organisation by treating documents as objects and keywords as attributes. The basic algorithms of Formal Concept Analysis then allow an intelligent information retrieval system to cluster documents according to keyword views. This paper investigates the scalability of this idea. In particular, we present the results of applying spatial data structures to large datasets in Formal Concept Analysis. Our experiments are motivated by the application of Formal Concept Analysis to virtual filesystems [11,17,15], in particular the libferris [1] Semantic File System. This paper presents customizations to an RD-Tree Generalized Index Search Tree based index structure to better support the application of Formal Concept Analysis to large data sources.
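For intuition, here is a minimal sketch of the two derivation operators that underlie FCA document clustering. The toy context (document names and keywords) is hypothetical, not taken from the paper, and no index structure is involved:

```python
# Formal context: each document (object) maps to its set of keywords
# (attributes). Toy data, purely illustrative.
context = {
    "doc1": {"fca", "lattice"},
    "doc2": {"fca", "filesystem"},
    "doc3": {"filesystem", "index"},
}

def extent(keywords):
    """All documents carrying every keyword in `keywords`."""
    return {d for d, kws in context.items() if keywords <= kws}

def intent(docs):
    """All keywords shared by every document in `docs`."""
    sets = [context[d] for d in docs]
    return set.intersection(*sets) if sets else set()

# A formal concept is a pair (A, B) with A = extent(B) and B = intent(A).
# The concept generated by the keyword view {"fca"}:
A = extent({"fca"})   # {'doc1', 'doc2'}
B = intent(A)         # {'fca'}
print(A, B)
```

The scalability question the paper addresses is precisely that `extent` queries like this one become subset-containment searches over very large collections, which is what an RD-Tree-style index accelerates.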
Abstract:
Gröbner bases have been generalised to polynomials over a commutative ring A in several ways. Here we focus on strong Gröbner bases, also known as D-bases. Several authors have shown that strong Gröbner bases can be effectively constructed over a principal ideal domain. We show that this extends to any principal ideal ring. We characterise Gröbner bases and strong Gröbner bases when A is a principal ideal ring. We also give algorithms for computing Gröbner bases and strong Gröbner bases which generalise known algorithms to principal ideal rings. In particular, we give an algorithm for computing a strong Gröbner basis over a finite chain ring, for example a Galois ring.
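As a toy illustration of the coefficient-divisibility issue that distinguishes the strong notion (an example over $\mathbb{Z}$, not one from the paper): consider

$$I = \langle 2x,\; 3x \rangle \subset \mathbb{Z}[x].$$

Since $x = 3x - 2x \in I$, we have $I = \langle x \rangle$, and $\{x\}$ is a strong Gröbner basis: the leading term of every element of $I$ is divisible by $x$, with a leading coefficient divisible by $1$. Over a field the coefficient condition is vacuous, since unit coefficients can be scaled away; over a principal ideal ring it is exactly the extra condition a strong Gröbner basis must satisfy.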
Abstract:
Classical dynamics is formulated as a Hamiltonian flow in phase space, while quantum mechanics is formulated as unitary dynamics in Hilbert space. These different formulations have made it difficult to directly compare quantum and classical nonlinear dynamics. Previous solutions have focused on computing quantities associated with a statistical ensemble, such as variance or entropy. However, a more direct comparison would compare classical predictions to the quantum predictions for continuous simultaneous measurement of position and momentum of a single system. In this paper we give a theory of such measurement and show that chaotic behavior in classical systems can be reproduced by continuously measured quantum systems.
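For reference, one standard form of such a continuous-measurement (stochastic master equation) model, for position monitoring alone and up to convention-dependent constants (textbook background, not necessarily the paper's exact equations):

$$d\rho = -\frac{i}{\hbar}[H,\rho]\,dt + 2k\,\mathcal{D}[x]\rho\,dt + \sqrt{2k\eta}\,\mathcal{H}[x]\rho\,dW,$$

where $\mathcal{D}[c]\rho = c\rho c^{\dagger} - \tfrac{1}{2}(c^{\dagger}c\rho + \rho c^{\dagger}c)$, $\mathcal{H}[c]\rho = c\rho + \rho c^{\dagger} - \mathrm{Tr}[(c + c^{\dagger})\rho]\,\rho$, $k$ is the measurement strength, $\eta$ the detector efficiency, and $dW$ a Wiener increment. Simultaneous monitoring of position and momentum adds an analogous pair of terms in $p$.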
Abstract:
It has been argued that, beyond software engineering and process engineering, ontological engineering is the third capability needed if successful e-commerce is to be realized. In our experience of building an ontology-based tendering system, we faced the problem of building an ontology. In this paper, we demonstrate how to build ontologies in the tendering domain. The ontology life cycle is identified. Extracting concepts from existing resources such as on-line catalogs is described. We have reused electronic data interchange (EDI) to build conceptual structures in the tendering domain. An algorithm to extract abstract ontological concepts from these structures is proposed.
Abstract:
The phase-estimation algorithm is so named because it allows estimation of the eigenvalues (phases) associated with a unitary operator. However, it has been proposed that the algorithm can also be used to generate eigenstates. Here we extend this proposal to small quantum systems, identifying the conditions under which the phase-estimation algorithm can successfully generate eigenstates. We then propose an implementation scheme based on an ion-trap quantum computer. This scheme allows us to illustrate two simple examples, one in which the algorithm effectively generates eigenstates, and one in which it does not.
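The mechanism behind eigenstate generation is standard phase-estimation behaviour (background, not a result of the paper): for a unitary $U$ with $U|u_j\rangle = e^{2\pi i\phi_j}|u_j\rangle$ and a $t$-qubit phase register,

$$\sum_j c_j\,|u_j\rangle|0\rangle^{\otimes t} \;\longmapsto\; \sum_j c_j\,|u_j\rangle|\tilde{\phi}_j\rangle,$$

so measuring the register returns an estimate $\tilde{\phi}_j$ with probability $\approx |c_j|^2$ and leaves the system approximately in the eigenstate $|u_j\rangle$. The approximation degrades when distinct $\phi_j$ are not resolved by $t$ bits, which is the kind of condition under which the algorithm fails to generate eigenstates.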
Abstract:
The QU-GENE Computing Cluster (QCC) is a hardware and software solution for automating and speeding up large QU-GENE (QUantitative GENEtics) simulation experiments that are designed to examine the properties of genetic models, particularly those that involve factorial combinations of treatment levels. QCC automates the distribution of simulation-experiment components among networked single-processor computers to achieve this speedup.
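As a generic illustration of the kind of work distribution described (hypothetical factor names and a dummy simulation function; QU-GENE's actual interfaces are not shown):

```python
# Enumerate the factorial combinations of treatment levels and farm
# each simulation run out to a pool of workers.
from itertools import product
from multiprocessing import Pool

treatments = {
    "heritability": [0.2, 0.5, 0.8],      # hypothetical factor levels
    "selection_intensity": [0.05, 0.10],
    "population_size": [200, 500],
}

def run_simulation(combo):
    """Placeholder for one simulation run at one treatment combination."""
    params = dict(zip(treatments, combo))
    return params, sum(params.values())    # dummy 'result'

if __name__ == "__main__":
    combos = list(product(*treatments.values()))
    with Pool() as pool:                   # one worker per local CPU core
        for params, result in pool.map(run_simulation, combos):
            print(params, result)
```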
Abstract:
Around 98% of all transcriptional output in humans is noncoding RNA. RNA-mediated gene regulation is widespread in higher eukaryotes, and complex genetic phenomena such as RNA interference, co-suppression, transgene silencing, imprinting, methylation, and possibly position-effect variegation and transvection all involve intersecting pathways based on or connected to RNA signaling. I suggest that the central dogma is incomplete, and that intronic and other non-coding RNAs have evolved to comprise a second tier of gene expression in eukaryotes, which enables the integration and networking of complex suites of gene activity. Although proteins are the fundamental effectors of cellular function, the basis of eukaryotic complexity and phenotypic variation may lie primarily in a control architecture composed of a highly parallel system of trans-acting RNAs that relay state information required for the coordination and modulation of gene expression, via chromatin remodeling, RNA-DNA, RNA-RNA and RNA-protein interactions. This system has interesting and perhaps informative analogies with small-world networks and dataflow computing.
Abstract:
The development of cropping systems simulation capabilities world-wide, combined with easy access to powerful computing, has resulted in a plethora of agricultural models and, consequently, model applications. Nonetheless, the scientific credibility of such applications and their relevance to farming practice is still being questioned. Our objective in this paper is to highlight some of the model applications from which benefits for farmers were or could be obtained via changed agricultural practice or policy. Changed on-farm practice due to the direct contribution of modelling, while keenly sought after, may in some cases be less achievable than a contribution via agricultural policies. This paper is intended to give some guidance for future model applications. It is not a comprehensive review of model applications, nor is it intended to discuss modelling in the context of social science or extension policy. Rather, we take snapshots around the globe to 'take stock' and to demonstrate that well-defined financial and environmental benefits can be obtained on-farm from the use of models. We highlight the importance of 'relevance' and hence the importance of true partnerships between all stakeholders (farmers, scientists, advisers) for the successful development and adoption of simulation approaches. Specifically, we address some key points that are essential for successful model applications, such as: (1) issues to be addressed must be neither trivial nor obvious; (2) a modelling approach must reduce complexity rather than proliferate choices in order to aid the decision-making process; and (3) the cropping systems must be sufficiently flexible to allow management interventions based on insights gained from models. The pros and cons of normative approaches (e.g. decision support software that can reach a wide audience quickly but is often poorly contextualized for any individual client) versus model applications within the context of an individual client's situation will also be discussed. We suggest that a tandem approach is necessary whereby the latter is used in the early stages of model application for confidence building amongst client groups. This paper focuses on five specific regions that differ fundamentally in terms of environment and socio-economic structure and hence in their requirements for successful model applications. Specifically, we will give examples from Australia and South America (high climatic variability, large areas, low input, technologically advanced); Africa (high climatic variability, small areas, low input, subsistence agriculture); India (high climatic variability, small areas, medium-level inputs, technologically progressing); and Europe (relatively low climatic variability, small areas, high input, technologically advanced). The contrast between Australia and Europe will further demonstrate how successful model applications are strongly influenced by the policy framework within which producers operate. We suggest that this might eventually lead to better adoption of fully integrated systems approaches and result in the development of resilient farming systems that are in tune with current climatic conditions and are adaptable to biophysical and socioeconomic variability and change. (C) 2001 Elsevier Science Ltd. All rights reserved.
Abstract:
As the use of technological devices in everyday environments becomes more prevalent, it is clear that access to these devices has become an important aspect of occupational performance. Children are increasingly required to competently manipulate technology such as the computer to fulfil the occupational roles of student and player. Occupational therapists are in a position to facilitate the successful interface between children and standard computer technologies. The literature has supported the use of direct manipulation interfaces in computing, which require mastery of devices such as the mouse. Identification of children likely to experience difficulties with mouse use will inform the development of appropriate methods of intervention promoting mouse skill and further enhance participation in occupational tasks. The aim of this paper is to discuss the development of an assessment of mouse proficiency for children. It describes the construction of the assessment, the content of the test, and its content validity.
Abstract:
The two-node tandem Jackson network serves as a convenient reference model for the analysis and testing of different methodologies and techniques in rare event simulation. In this paper we consider a new approach to efficiently estimate the probability that the content of the second buffer exceeds some high level L before it becomes empty, starting from a given state. The approach is based on a Markov additive process representation of the buffer processes, leading to an exponential change of measure to be used in an importance sampling procedure. Unlike changes of measure proposed and studied in the recent literature, the one derived here is a function of the content of the first buffer. We prove that when the first buffer is finite, this method yields asymptotically efficient simulation for any set of arrival and service rates. In fact, the relative error is bounded independently of the level L, a new result that has not been established for any other known method. When the first buffer is infinite, we propose a natural extension of the exponential change of measure for the finite-buffer case. In this case, the relative error is shown to be bounded (independently of L) only when the second server is the bottleneck, a result which is known to hold for some other methods derived through large deviations analysis. When the first server is the bottleneck, experimental results using our method suggest that the relative error grows at most linearly in L.
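For context, here is a crude Monte Carlo baseline for this rare-event probability using the embedded jump chain of a two-node tandem network (the rates are assumed for illustration; the paper's state-dependent change of measure, which is what makes large L tractable, is not reproduced here):

```python
# Estimate P(second buffer reaches L before emptying) by naive
# simulation. The estimator's relative error blows up as L grows,
# which is exactly the problem importance sampling addresses.
import random

lam, mu1, mu2 = 1.0, 2.0, 1.5   # arrival and service rates (assumed)

def hits_level(L, x1=0, x2=1):
    """One trajectory of the embedded jump chain, started from a
    given state; True if buffer 2 reaches L before emptying."""
    while 0 < x2 < L:
        rates = [lam, mu1 if x1 > 0 else 0.0, mu2]
        u = random.uniform(0.0, sum(rates))
        if u < rates[0]:
            x1 += 1                  # external arrival to node 1
        elif u < rates[0] + rates[1]:
            x1 -= 1; x2 += 1         # node 1 completes, feeds node 2
        else:
            x2 -= 1                  # node 2 completes a service
    return x2 >= L

n, L = 100_000, 15
estimate = sum(hits_level(L) for _ in range(n)) / n
print(f"P(buffer 2 hits {L} before emptying) ~ {estimate:.2e}")
```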
Abstract:
Crop modelling has evolved over the last 30 or so years in concert with advances in crop physiology, crop ecology and computing technology. Now that modelling has reached a respectable degree of acceptance, it is appropriate to review briefly the course of developments in crop modelling and to project what might be the major contributions of crop modelling in the future. Two major opportunities are envisioned for increased modelling activity in the future. One opportunity is a continuing central, heuristic role to support scientific investigation, to facilitate decision making by crop managers, and to aid in education. Heuristic activities will also extend to the broader system-level issues of environmental and ecological aspects of crop production. The second opportunity is projected as a prime contributor to understanding and advancing the genetic regulation of plant performance and plant improvement. Physiological dissection and modelling of traits provides an avenue by which crop modelling could contribute to enhancing the integration of molecular genetic technologies in crop improvement. Crown Copyright (C) 2002 Published by Elsevier Science B.V. All rights reserved.
Abstract:
Magnetic resonance imaging (MRI) magnets have very stringent constraints on the homogeneity of the static magnetic field that they generate over desired imaging regions. The magnet system should also generate very little stray field external to its structure, so that ease of siting and safety are assured. This work concentrates on deriving means of rapidly computing the effect of 'cold' and 'warm' ferromagnetic material in or around the superconducting magnet system, so as to facilitate the automated design of hybrid-material MR magnets. A complete scheme for the direct calculation of the spherical harmonics of the magnetic field generated by a circular ring of ferromagnetic material is derived under the conditions of arbitrary external magnetizing fields. The magnetic field produced by the superconducting coils in the system is computed using previously developed methods. The final hybrid algorithm is fast enough for use in large-scale optimization methods. The resultant fields from a practical example of a 4 T clinical MRI magnet containing both superconducting coils and magnetic material are presented.
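The homogeneity constraint is conventionally expressed through the interior solid-harmonic expansion of the axial field over the imaging region (the generic textbook form, not the paper's specific derivation):

$$B_z(r,\theta,\varphi) = \sum_{n=0}^{\infty}\sum_{m=0}^{n} r^{\,n}\,P_n^m(\cos\theta)\,\bigl[a_{nm}\cos m\varphi + b_{nm}\sin m\varphi\bigr],$$

where $P_n^m$ are associated Legendre functions. Designing for homogeneity amounts to driving all coefficients except the uniform term to near zero over the imaging volume; the scheme described here provides a fast way to compute the $a_{nm}, b_{nm}$ contributed by rings of ferromagnetic material under arbitrary magnetizing fields, which is why it can sit inside a large-scale optimization loop.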