67 results for Nuclear structure models and methods
at Queensland University of Technology - ePrints Archive
Abstract:
The impact-induced deposition of Al13 clusters with icosahedral structure on a Ni(0 0 1) surface was studied by molecular dynamics (MD) simulation using Finnis–Sinclair potentials. The incident kinetic energy (Ein) ranged from 0.01 to 30 eV per atom. The structural and dynamical properties of Al clusters on Ni surfaces were found to depend strongly on the impact energy. At the lowest energies, the Al cluster was deposited on the surface as an intact unit; however, its original icosahedral structure transformed into an fcc-like one owing to the interaction with, and structural mismatch against, the Ni surface. With increasing impact energy, the cluster was severely deformed on contact with the substrate and then broken up by a dense collision cascade, its atoms finally spreading across the surface. When the impact energy exceeded 11 eV per atom, defects such as Al substitutions and Ni ejections were observed. The simulations indicate that there exists an optimum energy range suitable for layer-by-layer epitaxial growth of Al. In addition, at higher impact energies, atomic exchange between Al and Ni atoms favours surface alloying.
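For a rough sense of the energy scale involved, the short sketch below (plain Python with NumPy, not the Finnis–Sinclair MD machinery the study uses) converts an incident kinetic energy per atom into the corresponding impact speed of a single Al atom via v = sqrt(2 E_in / m).

```python
import numpy as np

# Convert the incident kinetic energy per atom (eV) quoted in the abstract
# into an impact speed for a single Al atom. Constants are standard values;
# the energy range is taken from the abstract.
EV = 1.602176634e-19               # J per eV
M_AL = 26.9815 * 1.66053907e-27    # mass of an Al atom in kg

def impact_speed(e_in_ev_per_atom: float) -> float:
    """Speed (m/s) of an atom carrying e_in_ev_per_atom of kinetic energy."""
    return np.sqrt(2.0 * e_in_ev_per_atom * EV / M_AL)

for e in (0.01, 1.0, 11.0, 30.0):
    print(f"E_in = {e:5.2f} eV/atom  ->  v = {impact_speed(e):8.0f} m/s")
```

At 0.01 eV/atom this gives roughly 270 m/s, rising to about 15 km/s at 30 eV/atom, which is why soft landing, deformation and cascade-driven break-up occupy different parts of the studied range.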
Abstract:
Statistical methods are often used to analyse commercial catch and effort data to provide standardised fishing effort and/or a relative index of fish abundance for input into stock assessment models. Achieving reliable results has proved difficult in Australia's Northern Prawn Fishery (NPF), due to a combination of factors including the biological characteristics of the animals, aspects of the fleet dynamics, and changes in fishing technology. For this set of data, we compared four modelling approaches (linear models, mixed models, generalised estimating equations, and generalised linear models) with respect to the resulting standardised fishing effort or relative index of abundance. We also varied the number and form of vessel covariates in the models. Within a subset of data from this fishery, modelling correlation structures did not alter the conclusions drawn from simpler statistical models, and the random-effects models yielded similar results. This is because the estimators are all consistent even if the correlation structure is misspecified, and the data set is very large. However, the standard errors from different models differed, suggesting that the methods differ in statistical efficiency. We suggest that there is value in modelling the variance function and the correlation structure, to make valid and efficient statistical inferences and gain insight into the data. We found that fishing power was separable from the indices of prawn abundance only when we offset the impact of vessel characteristics at values assumed from external sources. This may be due to the large degree of confounding within the data, and the extreme temporal changes in certain aspects of individual vessels, the fleet and the fleet dynamics.
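To make the comparison concrete, here is a minimal sketch, on synthetic data with a hypothetical vessel covariate, of the kind of contrast the abstract describes: an ordinary linear model versus a GEE with a within-vessel correlation structure, fitted with statsmodels. It illustrates the reported pattern (similar point estimates, differing standard errors), not the NPF analysis itself.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic catch-per-unit-effort data: log-CPUE depends on year and a
# hypothetical vessel-power covariate, with repeated observations per vessel.
rng = np.random.default_rng(0)
n_vessels, n_years = 30, 10
df = pd.DataFrame({
    "vessel": np.repeat(np.arange(n_vessels), n_years),
    "year": np.tile(np.arange(n_years), n_vessels),
})
df["power"] = rng.normal(size=n_vessels)[df["vessel"]]  # per-vessel effect
df["logcpue"] = 0.05 * df["year"] + 0.5 * df["power"] + rng.normal(0.0, 0.3, len(df))

# Ordinary linear model: consistent even if within-vessel correlation is
# ignored, as the abstract notes.
lm = smf.ols("logcpue ~ year + power", data=df).fit()

# GEE with an exchangeable within-vessel correlation structure: similar
# point estimates, but different standard errors.
gee = smf.gee("logcpue ~ year + power", groups="vessel", data=df,
              cov_struct=sm.cov_struct.Exchangeable()).fit()

print("OLS year effect:", lm.params["year"], "SE:", lm.bse["year"])
print("GEE year effect:", gee.params["year"], "SE:", gee.bse["year"])
```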
Abstract:
This study, through the attached play scripts and an exploration of their method of development, evaluates the forms, strategies, and methods of an organised model of formalised playwriting. Through examination of, reflection on, and reaction to a perceived crisis in playwriting in the Australian theatre sector, the notion of Industrial Playwriting is arrived at: a practice whereby plays are designed and constructed, and where the process of writing becomes central to the efficient creation of new work and the improvement of the writer’s skill and knowledge base. Using a practice-led methodology and action research, the study examines a system of play construction appropriate to, and addressing the challenges of, the contemporary Australian theatre sector. Specifically, using the action research methodology known as design-based research, a conceptual framework was constructed to form the basis of the notion of Industrial Playwriting. From this, two plays were constructed using a case study method, and the process was recorded and used to create a practical, step-by-step system of Industrial Playwriting. In the creative practice of manufacturing a single-authored play, and then a group-devised play, Industrial Playwriting was tested and found to offer a valid alternative approach to playwriting in the training of new and emerging playwrights. Finally, it offered insight into how Industrial Playwriting could greatly facilitate theatre companies’ ongoing need for access to new writers and new Australian works, and how it might form the basis of a cost-effective writer development model. This study of the methods of formalised writing as a means of confronting some of the challenges of the Australian theatre sector, the practice of playwriting and the history associated with it makes an original and important contribution to contemporary playwriting practice.
Abstract:
The behaviour of ion channels within cardiac and neuronal cells is intrinsically stochastic. When the number of channels is small, this stochastic noise is large and can affect the dynamics of the system, which is potentially an issue when modelling small neurons and drug block in cardiac cells. While exact methods correctly capture the stochastic dynamics of a system, they are computationally expensive, restricting their inclusion in tissue-level models, so approximations to exact methods are often used instead. A further issue in modelling ion channel dynamics is that the transition rates are voltage dependent, adding a level of complexity because the channel dynamics are coupled to the membrane potential. By assuming that such transition rates are constant over each time step, it is possible to derive a stochastic differential equation (SDE), in the same manner as for biochemical reaction networks, that describes the stochastic dynamics of ion channels. While such a model is more computationally efficient than exact methods, we show that there are analytical problems with the resulting SDE, as well as issues in using current numerical schemes to solve it. We therefore make two contributions: we develop a different model of stochastic ion channel dynamics that behaves analytically in the correct manner, and we discuss numerical methods that preserve the analytical properties of the model.
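A minimal sketch of the kind of SDE the abstract refers to, for a hypothetical two-state channel with illustrative rate functions: a naive Euler–Maruyama discretisation of the chemical-Langevin-type equation, with comments marking where the analytical and numerical problems the paper addresses arise. This is not the paper's corrected model.

```python
import numpy as np

# Chemical-Langevin-type SDE for a hypothetical two-state channel:
# X open channels out of N, opening rate alpha(V), closing rate beta(V).
# Rates are treated as constant over each step, as the abstract describes.
# The rate functions below are illustrative, not taken from the paper.
def alpha(v):
    return 0.1 * np.exp(v / 40.0)

def beta(v):
    return 0.125 * np.exp(-v / 80.0)

def euler_maruyama(x0, v, n_channels, dt, n_steps, rng):
    x = x0
    for _ in range(n_steps):
        a = alpha(v) * (n_channels - x)   # opening propensity
        b = beta(v) * x                   # closing propensity
        # Naive Euler-Maruyama step. The analytical problem the abstract
        # raises appears here: x can drift outside [0, N], making the
        # square-root diffusion terms ill-defined without the max() guard.
        x += (a - b) * dt \
             + np.sqrt(max(a, 0.0) * dt) * rng.normal() \
             - np.sqrt(max(b, 0.0) * dt) * rng.normal()
        # Crude clamping; constructing a model and scheme that preserve the
        # correct boundary behaviour is what the paper's contributions target.
        x = min(max(x, 0.0), float(n_channels))
    return x

rng = np.random.default_rng(1)
print(euler_maruyama(x0=50.0, v=-65.0, n_channels=100, dt=0.01,
                     n_steps=1000, rng=rng))
```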
Abstract:
Airports epitomise complex systems, with multiple stakeholders, multiple jurisdictions and complex interactions between many actors. The large number of existing models that capture different aspects of the airport is testament to this. However, these existing models do not systematically consider modelling requirements, nor how stakeholders such as airport operators or airlines would make use of the models. This can detrimentally affect the verification and validation of models and makes the development of extensible and reusable modelling tools difficult. This paper develops, from the Concept of Operations (CONOPS) framework, a methodology to help structure the review and development of modelling capabilities and usage scenarios. The method is applied to a review of existing airport terminal passenger models. Existing models can be broadly categorised according to four usage scenarios: capacity planning, operational planning and design, security policy and planning, and airport performance review. The models, the performance metrics they evaluate and their usage scenarios are discussed. Capacity and operational planning models predominantly focus on performance metrics such as waiting time, service time and congestion, whereas performance review models attempt to link those metrics to passenger satisfaction outcomes; security policy models, on the other hand, focus on probabilistic risk assessment. However, there is an emerging focus on the need to capture trade-offs between multiple criteria, such as security and processing time. Based on the CONOPS framework and the literature findings, guidance is provided for the development of future airport terminal models.
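As a concrete illustration of the waiting-time metrics that the capacity-planning category evaluates, the sketch below computes the mean queueing delay at a hypothetical bank of check-in desks using the standard M/M/c (Erlang C) result; it illustrates the metric itself, not any of the reviewed models.

```python
import math

def erlang_c_wait(arrival_rate, service_rate, servers):
    """Mean queueing delay W_q in an M/M/c queue via the Erlang C formula."""
    a = arrival_rate / service_rate          # offered load (Erlangs)
    rho = a / servers                        # server utilisation
    if rho >= 1.0:
        raise ValueError("unstable queue: utilisation must be < 1")
    tail = a**servers / (math.factorial(servers) * (1.0 - rho))
    # Probability an arriving passenger must wait (Erlang C), then W_q.
    p_wait = tail / (sum(a**k / math.factorial(k) for k in range(servers)) + tail)
    return p_wait / (servers * service_rate - arrival_rate)

# Hypothetical check-in bank: 120 passengers/h, 2 min mean service, 5 desks.
w = erlang_c_wait(arrival_rate=120 / 60.0, service_rate=1 / 2.0, servers=5)
print(f"mean wait: {w:.2f} minutes")
```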
Abstract:
Objective: The aim of this study was to demonstrate the potential of near-infrared (NIR) spectroscopy for categorizing cartilage degeneration induced in animal models. Method: Three models of osteoarthritic degeneration were induced in the right knee joint of laboratory rats, with 12 rats per model group, via one of the following methods: (i) meniscectomy (MSX); (ii) anterior cruciate ligament transection (ACLT); or (iii) intra-articular injection of monoiodoacetate (1 mg) (MIA). After 8 weeks, the animals were sacrificed and the tibial knee joints were collected. A custom-made near-infrared (NIR) probe of diameter 5 mm was placed on the cartilage surface and spectral data were acquired from each specimen in the wavenumber range 4,000–12,500 cm⁻¹. Following spectral data acquisition, the specimens were fixed and Safranin-O staining was performed to assess disease severity based on the Mankin scoring system. Using multivariate statistical analysis based on principal component analysis and partial least squares (PLS) regression, the spectral data were then related to the Mankin scores of the samples tested. Results: Mild to severe degenerative cartilage changes were observed in the subject animals. The ACLT models showed mild cartilage degeneration, the MSX models moderate, and the MIA models severe degenerative changes, both morphologically and histologically. Our results demonstrate that NIR spectroscopic information can separate the cartilage samples into groups according to the severity of degeneration, with the NIR data correlating significantly with the Mankin scores (R² = 88.85%). Conclusion: We conclude that NIR spectroscopy is a viable tool for evaluating articular cartilage health and physical properties such as the change in thickness with degeneration.
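A minimal sketch of the PLS-regression step described in the Method, using scikit-learn on synthetic stand-in spectra (the study's data are not reproduced here): spectra are regressed against scores and a cross-validated R² is computed, analogous to the figure reported.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# Synthetic stand-in: rows are specimens, columns are absorbance values
# across the measured wavenumber range; y plays the role of the Mankin score.
rng = np.random.default_rng(0)
n_specimens, n_wavenumbers = 36, 200
X = rng.normal(size=(n_specimens, n_wavenumbers))
y = X[:, :5].sum(axis=1) + rng.normal(0.0, 0.5, n_specimens)

pls = PLSRegression(n_components=5)
y_cv = cross_val_predict(pls, X, y, cv=6).ravel()

# Cross-validated R^2, analogous to the reported R^2 = 88.85%.
ss_res = np.sum((y - y_cv) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
print("R^2 =", 1 - ss_res / ss_tot)
```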
Abstract:
Finite element (FE) analysis is an effective method for studying the strength, and predicting the fracture risk, of endodontically treated teeth. This paper presents a rapid method for generating a comprehensive tooth FE model using data retrieved from micro-computed tomography (μCT). With this method, the inhomogeneity of the material properties of the tooth was included in the model without dividing the tooth into different regions. The material properties of the tooth were assumed to be related to the mineral density. The fracture risk at different tooth portions was assessed for root canal treatments. The micro-CT images of a tooth were processed by a MATLAB program and the CT numbers were retrieved. The tooth contours were obtained by thresholding segmentation using Amira. The inner and outer surfaces of the tooth were imported into SolidWorks and a three-dimensional (3D) tooth model was constructed. An assembly of the tooth model with the periodontal ligament (PDL) layer and surrounding bone was imported into ABAQUS. The material properties of the tooth were calculated from the retrieved CT numbers via ABAQUS user subroutines. Three root canal geometries (the original and two enlargements) were investigated. The proposed method can generate detailed 3D finite element models of a tooth with different root canal enlargements and filling materials, and should be very useful for assessing the fracture risk at different tooth portions after root canal treatment.
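A minimal sketch of the kind of CT-number-to-stiffness mapping the paper describes, with placeholder calibration constants (the paper's own calibration values are not reproduced here): grey values are mapped linearly to mineral density and then to Young's modulus via an assumed power law, one value per voxel or element.

```python
import numpy as np

# Placeholder calibration: density assumed linear in CT number, and
# Young's modulus assumed to follow a power law E = a * rho^b. The
# constants are illustrative only, not the study's calibration.
def ct_to_density(ct_number, slope=0.001, intercept=0.0):
    """Mineral density (g/cm^3) from a micro-CT grey value."""
    return slope * ct_number + intercept

def density_to_modulus(rho, a=8.0, b=2.0):
    """Young's modulus (GPa) via an assumed power law E = a * rho^b."""
    return a * rho ** b

ct_numbers = np.array([1200.0, 1600.0, 2100.0])  # hypothetical voxel values
E = density_to_modulus(ct_to_density(ct_numbers))
print(E)  # one modulus per voxel/element, as assigned in the FE model
```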
Abstract:
Dengue fever is one of the world’s most important vector-borne diseases. The transmission area of the disease continues to expand due to many factors, including urban sprawl, increased travel and global warming. Current preventative techniques are primarily based on controlling mosquito vectors, as other prophylactic measures, such as a tetravalent vaccine, are unlikely to be available in the foreseeable future. However, the continually increasing dengue incidence suggests that this strategy alone is not sufficient. Epidemiological models attempt to predict future outbreaks using information on the risk factors of the disease. Through a systematic literature review, this paper analyses the different modelling methods and their outputs in terms of accurately predicting disease outbreaks. We found that many previous studies have not sufficiently accounted for the spatio-temporal features of the disease in the modelling process. With advances in technology, however, the ability to incorporate such information, as well as socio-environmental factors, has allowed models to be used as early warning systems, albeit limited geographically to a local scale.
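For readers unfamiliar with the class of models reviewed, here is a minimal Ross–Macdonald-style host-vector model with entirely hypothetical parameter values; it illustrates the basic compartmental approach, without the spatio-temporal or socio-environmental structure the review argues is needed.

```python
from scipy.integrate import solve_ivp

# Minimal host-vector model: x is the infected host fraction, z the
# infected vector fraction. All parameter values are hypothetical.
def dengue_rhs(t, y, a=0.3, b=0.4, c=0.4, gamma=0.1, mu=0.1, m=2.0):
    x, z = y
    dx = m * a * b * z * (1 - x) - gamma * x   # host infection / recovery
    dz = a * c * x * (1 - z) - mu * z          # vector infection / mortality
    return [dx, dz]

sol = solve_ivp(dengue_rhs, t_span=(0.0, 365.0), y0=[0.01, 0.0])
print(sol.y[0, -1], sol.y[1, -1])  # host and vector levels after one year
```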
Abstract:
Ambiguity resolution plays a crucial role in real-time kinematic GNSS positioning, which yields centimetre-precision positioning results provided all the ambiguities in each epoch are correctly fixed to integers. However, incorrectly fixed ambiguities can result in positioning offsets of up to several metres without notice. Hence, ambiguity validation is essential to control the quality of ambiguity resolution. Currently, the most popular ambiguity validation method is the ratio test, whose criterion is often determined empirically. An empirically determined criterion can be dangerous, because a fixed criterion cannot fit all scenarios and does not directly control the ambiguity resolution risk. In practice, depending on the underlying model strength, the ratio test criterion can be too conservative for some models and too risky for others. A more rational approach is to determine the criterion according to the underlying model and the user requirement. Undetected incorrect integers lead to hazardous results and should be strictly controlled; in ambiguity resolution, this missed-detection rate is known as the failure rate. In this paper, a fixed-failure-rate ratio test method is presented and applied to the analysis of GPS and Compass positioning scenarios. The fixed-failure-rate approach is derived from integer aperture estimation theory and is theoretically rigorous: a criteria table for the ratio test is computed from extensive data simulations, and real-time users determine the ratio test criterion by looking up this table. The method has previously been applied to medium-distance GPS ambiguity resolution, but multi-constellation and high-dimensional scenarios have not been addressed. In this paper, a general ambiguity validation model is derived based on hypothesis testing theory, the fixed-failure-rate approach is introduced, and in particular the relationship between the ratio test threshold and the failure rate is examined. Finally, the factors that influence the fixed-failure-rate ratio test threshold are discussed on the basis of extensive data simulations. The results show that the fixed-failure-rate approach is a more reasonable ambiguity validation method when used with a proper stochastic model.
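A minimal sketch of the ratio test itself, with hypothetical float ambiguities and covariance: the best and second-best integer candidates are compared in the metric of the ambiguity covariance, and the fix is accepted only if their ratio exceeds the threshold. In the fixed-failure-rate approach, that threshold comes from the precomputed criteria table (not reproduced here) rather than being a fixed empirical value such as 2 or 3.

```python
import numpy as np

def ratio_test(a_float, Qa_inv, candidates, threshold):
    """Accept the best integer candidate only if the second-best-to-best
    ratio of squared norms exceeds the threshold.

    In the fixed-failure-rate approach the threshold is looked up from a
    precomputed table as a function of model strength and the tolerated
    failure rate; a fixed value is used here purely for illustration.
    """
    # Squared distances ||a_float - z||^2 in the Qa^{-1} metric.
    d2 = [float((a_float - z) @ Qa_inv @ (a_float - z)) for z in candidates]
    order = np.argsort(d2)
    best, second = order[0], order[1]
    if d2[second] / d2[best] >= threshold:
        return candidates[best]   # fix accepted
    return None                   # fall back to the float solution

a_float = np.array([3.2, -1.9])                       # hypothetical floats
Qa_inv = np.linalg.inv(np.array([[0.04, 0.01],
                                 [0.01, 0.03]]))      # hypothetical covariance
candidates = [np.array([3, -2]), np.array([3, -1]), np.array([4, -2])]
print(ratio_test(a_float, Qa_inv, candidates, threshold=2.5))
```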
Abstract:
Computational models in physiology often integrate functional and structural information across a large range of spatio-temporal scales, from the ionic to the whole-organ level. Their sophistication raises both expectations and scepticism concerning how computational methods can improve our understanding of living organisms, and also how they can reduce, replace and refine animal experiments. A fundamental requirement for fulfilling these expectations and achieving the full potential of computational physiology is a clear understanding of what models represent and how they can be validated. The present study aims to inform strategies for validation by elucidating the complex interrelations between experiments, models and simulations in cardiac electrophysiology. We describe the processes, data and knowledge involved in the construction of whole-ventricle multiscale models of cardiac electrophysiology. Our analysis reveals that models, simulations and experiments are intertwined in an assemblage that is a system itself, namely the model-simulation-experiment (MSE) system. Validation must therefore take into account the complex interplay between models, simulations and experiments. Key points for developing validation strategies are: (1) understanding sources of bio-variability is crucial to the comparison between simulation and experimental results; (2) robustness of techniques and tools is a prerequisite to conducting physiological investigations using the MSE system; (3) the definition and adoption of standards facilitates interoperability of experiments, models and simulations; and (4) physiological validation must be understood as an iterative process that defines the specific aspects of electrophysiology the MSE system targets, and is driven by advancements in experimental and computational methods and the combination of both.
Abstract:
"First published in 1988, Ecological and Behavioral Methods for the Study of Bats is widely acknowledged as the primary reference for both amateur and professional bat researchers. Bats are the second most diverse group of mammals on the earth. They live on every continent except Antarctica, ranging from deserts to tropical forests to mountains, and their activities have a profound effect on the ecosystems in which they live. Despite their ubiquity and importance, bats are challenging to study. This volume provides researchers, conservationists, and consultants with the ecological background and specific information essential for studying bats in the wild and in captivity. Chapters detail many of the newest and most commonly used field and laboratory techniques needed to advance the study of bats, describe how these methods are applied to the study of the ecology and behavior of bats, and offer advice on how to interpret the results of research. The book includes forty-three chapters, fourteen of which are new to the second edition, with information on molecular ecology and evolution, bioacoustics, chemical communication, flight dynamics, population models, and methods for assessing postnatal growth and development. Fully illustrated and featuring contributions from the world’s leading experts in bat biology, this reference contains everything bat researchers and natural resource managers need to know for the study and conservation of this wide-ranging, ecologically vital, and diverse taxon."--Publisher website
Abstract:
Five significant problems hinder advances in understanding of the volcanology of kimberlites: (1) kimberlite geology is very model driven; (2) a highly genetic terminology drives deposit or facies interpretation; (3) the effects of alteration on preserved depositional textures have been grossly underestimated; (4) the level of understanding of the physical process significance of preserved textures is limited; and (5) some inferred processes and deposits are not based on actual, modern volcanological processes. These issues need to be addressed in order to advance understanding of kimberlite volcanological pipe-forming processes and deposits. The traditional, steep-sided southern African pipe model (Class I) consists of a steep, tapering pipe with a deep root zone, a middle diatreme zone and an upper crater zone (if preserved). Each zone is thought to be dominated by a distinctive facies, respectively: hypabyssal kimberlite (HK, here called descriptively massive coherent porphyritic kimberlite), tuffisitic kimberlite breccia (TKB, here called descriptively massive, poorly sorted lapilli tuff) and crater zone facies, which include variably bedded pyroclastic kimberlite and resedimented and reworked volcaniclastic kimberlite (RVK). Porphyritic coherent kimberlite may, however, also be emplaced at different levels in the pipe, as later-stage intrusions, as well as dykes in the surrounding country rock. The relationship between HK and TKB is not always clear. Subterranean fluidisation as an emplacement process is a largely unsubstantiated hypothesis; modern in-vent volcanological processes should first be considered to explain observed deposits. Crater zone volcaniclastic deposits can occur within the diatreme zone of some pipes, indicating that the pipe was largely empty at the end of the eruption and subsequently began to fill, largely through resedimentation and the sourcing of pyroclastic deposits from nearby vents. The Class II and Class III Canadian kimberlite models have a more factual, descriptive basis, but are still inadequately documented given how recently they were discovered. The diversity amongst kimberlite bodies suggests that a three-model classification is an over-simplification. Every kimberlite is altered to varying degrees, an intrinsic consequence of the ultrabasic composition of kimberlite and the in-vent context; few preserve original textures. The effects of syn- to post-emplacement alteration on original textures have not been adequately considered to date, and should be back-stripped to identify original textural elements and configurations. Applying sedimentological textural configurations as a guide to emplacement processes would be useful. The traditional terminology carries many connotations about spatial position in the pipe and about process. It can perhaps be retained in industrial settings as a general lithofacies-mining terminological scheme because it is so entrenched; for research purposes, however, a more descriptive lithofacies terminology should be adopted to facilitate detailed understanding of deposit characteristics, the important variations in these, and their process origins. For example, every deposit of TKB differs in componentry, texture, or depositional structure; yet because so many deposits in many different pipes are called TKB, there is an implication that they are all similar and that similar processes were involved, which is far from clear.
Abstract:
The photocatalytic ability of cubic Bi1.5ZnNb1.5O7 (BZN) pyrochlore for the decolorization of acid orange 7 (AO7), an azo dye, in aqueous solution under ultraviolet (UV) irradiation has been investigated for the first time. BZN catalyst powders prepared using low-temperature sol-gel and higher-temperature solid-state methods have been evaluated and their reaction rates compared. The experimental band gap energy has been estimated from the optical absorption edge and used as a reference for the theoretical calculations. The electronic band structure of BZN has been investigated using first-principles density functional theory (DFT) calculations for random, completely ordered, and partially ordered solid solutions of Zn cations on both the A and B sites of the pyrochlore structure. The nature of the orbitals in the valence band (VB) and the conduction band (CB) has been identified, and the theoretical band gap energy is discussed in terms of the DFT model approximations.
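A minimal sketch of estimating a band gap from an optical absorption edge via a Tauc-style plot, assuming a direct allowed transition; the exponent, fitting window and synthetic data are placeholders, not the paper's spectra.

```python
import numpy as np

def tauc_band_gap(photon_ev, absorbance, n=0.5, edge_window=None):
    """Fit (alpha*h*nu)^(1/n) vs h*nu over the edge and extrapolate to zero.

    n = 0.5 corresponds to a direct allowed transition; the x-intercept
    of the linear region is the band gap estimate.
    """
    y = (absorbance * photon_ev) ** (1.0 / n)
    if edge_window is not None:
        lo, hi = edge_window
        mask = (photon_ev >= lo) & (photon_ev <= hi)
        photon_ev, y = photon_ev[mask], y[mask]
    slope, intercept = np.polyfit(photon_ev, y, 1)
    return -intercept / slope   # x-intercept gives the Eg estimate

# Synthetic direct-gap edge near 3 eV, purely to exercise the function.
e = np.linspace(2.5, 4.0, 60)
alpha = np.sqrt(np.clip(e - 3.0, 0.0, None)) / e
print(tauc_band_gap(e, alpha, n=0.5, edge_window=(3.2, 4.0)))
```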