64 results for Towards Seamless Integration of Geoscience Models and Data
at University of Queensland eSpace - Australia
Abstract:
Small mesothermal vein quartz-gold-base-metal sulfide deposits, from which some 20 t of Au-Ag bullion have been extracted, are the most common gold deposits in the Georgetown region of north Queensland; several hundred were mined or prospected between 1870 and 1950. These deposits are mostly hosted by Proterozoic granitic and metamorphic rocks and are similar to the much larger Charters Towers deposits such as Day Dawn and Brilliant, and in some respects to the Motherlode deposits of California. The largest deposit in the region, Kidston (>138 t of Au and Ag since 1985), is substantially different. It is hosted by sheeted quartz veins and cavities in brecciated Silurian granite and Proterozoic metamorphics above nested high-level Carboniferous intrusives associated with a nearby cauldron subsidence structure. This paper provides new information (K-Ar and Rb-Sr isotopic ages, preliminary oxygen isotope and fluid-inclusion data) from some of the mesothermal deposits and compares it with the Kidston deposit. All six dated mesothermal deposits have Siluro-Devonian (about 425 to 400 Ma) ages. All nine such deposits analysed have δ¹⁸O quartz values in the range 8.4 to 15.7 parts per thousand. Fluid-inclusion data indicate homogenisation temperatures in the range 230-350 °C. This information, and a re-interpretation of the spatial relationships of the deposits with various elements of the updated regional geology, is used to develop a preliminary metallogenic model of the mesothermal Etheridge Goldfield. The model indicates how the majority of deposits may have formed from hydrothermal systems initiated during the emplacement of granitic batholiths that were possibly, but not clearly, associated with Early Palaeozoic subduction, and that these fluid systems were dominated by substantially modified meteoric and/or magmatic fluids.
The large Kidston deposit and a few small relatives are of Carboniferous age and formed more directly from magmatic systems much closer to the surface.
Abstract:
This paper presents a method of formally specifying, refining and verifying concurrent systems which uses the object-oriented state-based specification language Object-Z together with the process algebra CSP. Object-Z provides a convenient way of modelling complex data structures needed to define the component processes of such systems, and CSP enables the concise specification of process interactions. The basis of the integration is a semantics of Object-Z classes identical to that of CSP processes. This allows classes specified in Object-Z to be used directly within the CSP part of the specification. In addition to specification, we also discuss refinement and verification in this model. The common semantic basis enables a unified method of refinement to be used, based upon CSP refinement. To enable state-based techniques to be used for the Object-Z components of a specification we develop state-based refinement relations which are sound and complete with respect to CSP refinement. In addition, a verification method for static and dynamic properties is presented. The method allows us to verify properties of the CSP system specification in terms of its component Object-Z classes by using the laws of the CSP operators together with the logic for Object-Z.
Specification, refinement and verification of concurrent systems: an integration of Object-Z and CSP
Abstract:
Paediatric emergency research is hampered by a number of barriers that can be overcome by a multicentre approach. In 2004, an Australia and New Zealand-based paediatric emergency research network was formed, the Paediatric Research in Emergency Departments International Collaborative (PREDICT). The founding sites include all major tertiary children’s hospital EDs in Australia and New Zealand and a major mixed ED in Australia. PREDICT aims to provide leadership and infrastructure for multicentre research at the highest standard, facilitate collaboration between institutions, health-care providers and researchers and ultimately improve patient outcomes. Initial network-wide projects have been determined. The present article describes the development of the network, its structure and future goals.
Abstract:
There is growing interest in the use of context-awareness as a technique for developing pervasive computing applications that are flexible, adaptable, and capable of acting autonomously on behalf of users. However, context-awareness introduces a variety of software engineering challenges. In this paper, we address these challenges by proposing a set of conceptual models designed to support the software engineering process, including context modelling techniques, a preference model for representing context-dependent requirements, and two programming models. We also present a software infrastructure and software engineering process that can be used in conjunction with our models. Finally, we discuss a case study that demonstrates the strengths of our models and software engineering approach with respect to a set of software quality metrics.
Abstract:
Traditional vegetation mapping methods use high-cost, labour-intensive aerial photography interpretation. This approach can be subjective and is limited by factors such as the extent of remnant vegetation, and the differing scale and quality of aerial photography over time. An alternative approach is proposed which integrates a data model, a statistical model and an ecological model using sophisticated Geographic Information Systems (GIS) techniques and rule-based systems to support fine-scale vegetation community modelling. This approach is based on a more realistic representation of vegetation patterns with transitional gradients from one vegetation community to another. Arbitrary, and often unrealistic, sharp boundaries can otherwise be imposed on the model by the application of statistical methods. This GIS-integrated multivariate approach is applied to the problem of vegetation mapping in the complex vegetation communities of the Innisfail Lowlands in the Wet Tropics bioregion of northeastern Australia. The paper presents the full cycle of this vegetation modelling approach, including site sampling, variable selection, model selection, model implementation, internal model assessment, model prediction assessment, integration of discrete vegetation community models to generate a composite pre-clearing vegetation map, independent data set model validation and assessment of model prediction scale. An accurate pre-clearing vegetation map of the Innisfail Lowlands was generated (r² = 0.83) through GIS integration of 28 separate statistical models. This modelling approach has good potential for wider application, including provision of vital information for conservation planning and management; a scientific basis for rehabilitation of disturbed and cleared areas; and a viable method for the production of adequate vegetation maps for conservation and forestry planning in poorly studied areas. (c) 2006 Elsevier B.V. All rights reserved.
Abstract:
Testing ecological models for management is an increasingly important part of the maturation of ecology as an applied science. Consequently, we need to work at applying fair tests of models with adequate data. We demonstrate that a recent test of a discrete time, stochastic model was biased towards falsifying the predictions. If the model was a perfect description of reality, the test falsified the predictions 84% of the time. We introduce an alternative testing procedure for stochastic models, and show that it falsifies the predictions only 5% of the time when the model is a perfect description of reality. The example is used as a point of departure to discuss some of the philosophical aspects of model testing.
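The testing issue described above can be illustrated with a minimal Monte Carlo sketch. This is not the authors' actual procedure; it assumes a hypothetical model that predicts standard-normal yearly indices, so the model is by construction a perfect description of the simulated reality. A naive test that falsifies the model whenever any single observation falls outside the model's central 95% interval rejects far too often, while a test based on a single summary statistic rejects at roughly the nominal 5% rate.

```python
import random

def naive_test(obs, lo=-1.96, hi=1.96):
    # "Falsify" the model if ANY observation falls outside the model's
    # central 95% interval -- an unfairly strict criterion, since the
    # chance of at least one excursion grows with the number of observations.
    return any(x < lo or x > hi for x in obs)

def summary_test(obs):
    # Fairer alternative: compare one summary statistic (the mean) with
    # its own sampling distribution under the model. The mean of n iid
    # N(0, 1) draws is N(0, 1/sqrt(n)), so reject at the 5% level when
    # the observed mean leaves its 95% interval.
    n = len(obs)
    mean = sum(obs) / n
    return abs(mean) > 1.96 / n ** 0.5

def false_falsification_rate(test, n_years=20, n_trials=1000, seed=1):
    # Fraction of trials in which `test` falsifies data generated by the
    # very model being tested (i.e. the model is perfect by construction).
    rng = random.Random(seed)
    rejections = sum(
        test([rng.gauss(0, 1) for _ in range(n_years)])
        for _ in range(n_trials)
    )
    return rejections / n_trials
```

With 20 observations per trial, the naive criterion rejects the perfect model in roughly 1 - 0.95^20 ≈ 64% of trials, while the summary-statistic test stays near 5%, echoing the kind of bias the abstract describes (the specific 84% figure depends on the model and test in the original paper).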
Abstract:
Models of population dynamics are commonly used to predict risks in ecology, particularly risks of population decline. There is often considerable uncertainty associated with these predictions. However, alternatives to predictions based on population models have not been assessed. We used simulation models of hypothetical species to generate the kinds of data that might typically be available to ecologists and then invited other researchers to predict risks of population declines using these data. The accuracy of the predictions was assessed by comparison with the forecasts of the original model. The researchers used either population models or subjective judgement to make their predictions. Predictions made using models were only slightly more accurate than subjective judgements of risk. However, predictions using models tended to be unbiased, while subjective judgements were biased towards over-estimation. Psychology literature suggests that the bias of subjective judgements is likely to vary somewhat unpredictably among people, depending on their stake in the outcome. This will make subjective predictions more uncertain and less transparent than those based on models. (C) 2004 Elsevier SAS. All rights reserved.
Abstract:
Information and content integration are believed to be a possible solution to the problem of information overload on the Internet. The article is an overview of a simple solution for integration of information and content on the Web. Previous approaches to content extraction and integration are discussed, followed by the introduction of a novel technology to deal with these problems, based on XML processing. The article includes lessons learned from solving issues of changing webpage layout, incompatibility with HTML standards and multiplicity of the results returned. The method, which applies relative XPath queries over the DOM tree, proves to be more robust than previous approaches to Web information integration. Furthermore, the prototype implementation demonstrates a simplicity that enables non-professional users to easily adopt this approach in their day-to-day information management routines.
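The robustness argument behind relative XPath queries can be sketched briefly. This is an illustrative example, not the paper's prototype: it uses a hypothetical well-formed (X)HTML fragment and Python's standard-library ElementTree, whose limited XPath subset suffices for attribute-anchored relative queries.

```python
import xml.etree.ElementTree as ET

# Hypothetical well-formed result page; the absolute layout (div order,
# nesting depth) may change between visits, but class attributes are stable.
PAGE = """
<html>
  <body>
    <div id="header">Search results</div>
    <div id="results">
      <div class="item"><span class="title">First hit</span></div>
      <div class="item"><span class="title">Second hit</span></div>
    </div>
  </body>
</html>
"""

def extract_titles(page):
    # A relative query anchored on stable attributes (.//div[@class='item'])
    # keeps working when surrounding layout shifts, unlike an absolute path
    # such as /html/body/div[2]/div[1]/span, which breaks on any reordering.
    root = ET.fromstring(page)
    spans = root.findall(".//div[@class='item']/span[@class='title']")
    return [span.text for span in spans]

print(extract_titles(PAGE))  # ['First hit', 'Second hit']
```

Real-world pages are rarely well-formed XML, which is why the approach described above depends on first repairing HTML into a clean DOM before issuing such queries.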