975 results for Mathematical techniques


Relevance: 20.00%

Abstract:

This paper focuses on very young students' ability to engage in repeating pattern tasks and identifies strategies that assist them to ascertain the structure of a pattern. It describes results of a study, part of the Early Years Generalising Project (EYGP), involving Australian students in Years 1 to 4 (ages 5-10), and reports on results from the early years cohort (Year 1 and 2 students). Clinical interviews were used to collect data concerning students' ability to determine elements in different positions when two units of a repeating pattern were shown. This meant that students were required to identify the multiplicative structure of the pattern. Results indicate there are particular strategies that assist students to predict these elements, and there appears to be a hierarchy of pattern activities that help students to understand the structure of repeating patterns.
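The "multiplicative structure" referred to here can be made concrete with modular arithmetic: the element at any position of a repeating pattern is fixed by the position's remainder on division by the length of the repeating unit. A minimal sketch in Python (the ABB pattern and the queried positions are illustrative, not taken from the study):

```python
pattern = ["A", "B", "B"]          # one unit of a repeating ABB pattern

def element_at(position: int) -> str:
    """Pattern element at a 1-indexed position: position mod unit length."""
    return pattern[(position - 1) % len(pattern)]

# Predict elements well beyond the two units shown to a student
for position in (4, 7, 20):
    print(position, element_at(position))
```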

Relevance: 20.00%

Abstract:

Quantitative imaging methods to analyze cell migration assays are not standardized. Here we present a suite of two-dimensional barrier assays describing the collective spreading of an initially-confined population of 3T3 fibroblast cells. To quantify the motility rate we apply two different automatic image detection methods to locate the position of the leading edge of the spreading population after 24, 48 and 72 hours. These results are compared with a manual edge detection method where we systematically vary the detection threshold. Our results indicate that the observed spreading rates are very sensitive to the choice of image analysis tools, and we show that a standard measure of cell migration can vary by as much as 25% for the same experimental images depending on the details of the image analysis tools. Our results imply that it is very difficult, if not impossible, to meaningfully compare previously published measures of cell migration, since previous results have been obtained using different image analysis techniques and the details of these techniques are not always reported. Using a mathematical model, we provide a physical interpretation of our edge detection results. The physical interpretation is important since edge detection algorithms alone do not specify any physical measure, or physical definition, of the leading edge of the spreading population. Our modeling indicates that variations in the image threshold parameter correspond to a consistent variation in the local cell density; varying the threshold parameter is equivalent to varying the location of the leading edge over a range of approximately 1-5% of the maximum cell density.
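As a concrete illustration of the threshold sensitivity described above, the sketch below detects the leading edge of a synthetic one-dimensional density profile at several thresholds in the stated 1-5% range (the profile and positions are invented; real assays use microscopy images):

```python
import numpy as np

x = np.linspace(0, 2000, 401)                      # position (micrometres)
density = 1.0 / (1.0 + np.exp((x - 1200) / 120))   # synthetic spreading front

def leading_edge(density, x, threshold):
    """Furthest position at which the density still exceeds the threshold."""
    return x[np.where(density > threshold)[0][-1]]

for threshold in (0.01, 0.03, 0.05):               # 1-5% of maximum density
    print(f"threshold {threshold:.0%}: edge at {leading_edge(density, x, threshold):.0f} um")
```

Even for this idealised profile, moving the threshold within the 1-5% band shifts the detected edge by hundreds of micrometres, which is the mechanism behind the reported 25% variation in measured spreading.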

Relevance: 20.00%

Abstract:

Controlled drug delivery is a key topic in modern pharmacotherapy, where controlled drug delivery devices are required to prolong the period of release, maintain a constant release rate, or release the drug with a predetermined release profile. In the pharmaceutical industry, the development of a controlled drug delivery device may be facilitated enormously by mathematical modelling of drug release mechanisms, directly decreasing the number of necessary experiments. Such modelling is difficult because several mechanisms are involved during the drug release process. The main drug release mechanisms of a controlled release device are based on the device's physicochemical properties, and include diffusion, swelling and erosion. In this thesis, four controlled drug delivery models are investigated. These four models selectively involve solvent penetration into the polymeric device, swelling of the polymer, polymer erosion and drug diffusion out of the device, but all share two key features. The first is that solvent penetration into the polymer causes the polymer to undergo a transition from a glassy state to a rubbery state. The interface between the two states is modelled as a moving boundary whose speed is governed by a kinetic law. The second is that drug diffusion only occurs in the rubbery region of the polymer, with a nonlinear diffusion coefficient that depends on the concentration of solvent. These models are analysed using both formal asymptotics and numerical computation, where front-fixing methods and the method of lines with finite difference approximations are used to solve the models numerically. This numerical scheme is conservative, accurate and easily applied to moving boundary problems, and is explained thoroughly in Section 3.2. The small time asymptotic analysis in Sections 5.3.1, 6.3.1 and 7.2.1 shows that these models exhibit the non-Fickian behaviour referred to as Case II diffusion, with an initial constant rate of drug release; this is appealing to the pharmaceutical industry because it indicates zero-order release. The numerical results of the models qualitatively confirm the experimental behaviour reported in the literature. The knowledge gained from investigating these models can help in developing more complex multi-layered drug delivery devices that achieve sophisticated drug release profiles. A multi-layer matrix tablet, which consists of a number of polymer layers designed to provide sustained and constant drug release or bimodal drug release, is also discussed in this research. The moving boundary problem describing solvent penetration into the polymer also arises in melting and freezing problems, which have been modelled as the classical one-phase Stefan problem. The classical one-phase Stefan problem contains unrealistic singularities at the complete melting time. Hence we investigate the effect of including kinetic undercooling in the melting problem; the resulting problem is called the one-phase Stefan problem with kinetic undercooling. Interestingly, we discover that the unrealistic singularities of the classical one-phase Stefan problem at the complete melting time are regularised, and the small time asymptotic analysis in Section 3.3 shows that the small time behaviour of the one-phase Stefan problem with kinetic undercooling differs from that of the classical problem.
In the case of melting very small particles, it is known that surface tension effects are important. The effect of including surface tension in the melting problem for nanoparticles (without kinetic undercooling) has been investigated in the past; however, the one-phase Stefan problem with surface tension exhibits finite-time blow-up. We therefore investigate the effect of including both surface tension and kinetic undercooling in the melting problem for nanoparticles, and find that the solution continues to exist until complete melting. Including kinetic undercooling and surface tension in the melting problems reveals more insight into the regularisation of unphysical singularities in the classical one-phase Stefan problem. This investigation gives a better understanding of melting a particle, and contributes to the current body of knowledge related to melting and freezing due to heat conduction.
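A minimal sketch of the front-fixing plus method-of-lines approach named above, applied to the classical one-phase Stefan problem (without the kinetic undercooling or surface tension terms the thesis adds; grid size and time span are illustrative):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Classical one-phase Stefan problem: u_t = u_xx on 0 < x < s(t),
# u(0,t) = 1, u(s(t),t) = 0, with Stefan condition ds/dt = -u_x(s(t),t).
# Front-fixing maps x to xi = x/s(t), so the moving domain becomes [0, 1]:
#   u_t = u_xixi / s^2 + xi (s'/s) u_xi.
N = 100                                  # interior grid points
xi = np.linspace(0.0, 1.0, N + 2)
dxi = xi[1] - xi[0]

def rhs(t, y):
    u = np.concatenate(([1.0], y[:-1], [0.0]))   # Dirichlet boundary values
    s = y[-1]
    dsdt = -(u[-1] - u[-2]) / (dxi * s)          # one-sided u_x at the front
    uxx = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dxi**2
    ux = (u[2:] - u[:-2]) / (2.0 * dxi)
    dudt = uxx / s**2 + xi[1:-1] * (dsdt / s) * ux
    return np.append(dudt, dsdt)

y0 = np.append(1.0 - xi[1:-1], 1e-3)     # linear profile, small initial front
sol = solve_ivp(rhs, (0.0, 0.2), y0, method="BDF")
print("front position s(t):", sol.y[-1, ::10])
```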

Relevance: 20.00%

Abstract:

Deterministic computer simulations of physical experiments are now common techniques in science and engineering. Physical experiments are often too time-consuming, expensive or impossible to conduct, so complex computer models, or codes, are used in their place; this has led to the study of computer experiments, which are used to investigate many scientific phenomena. A computer experiment consists of a number of runs of the computer code with different input choices. The design and analysis of computer experiments is a rapidly growing area of statistical experimental design. This thesis investigates some practical issues in the design and analysis of computer experiments and attempts to answer some of the questions faced by experimenters using them. In particular, the question of the number of runs in a computer experiment and how they should be augmented is studied, and attention is given to the case when the response is a function over time.
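A standard ingredient in the design of computer experiments is a space-filling design such as a Latin hypercube, which spreads a fixed budget of runs evenly across the input space. A brief sketch (run counts and dimensions are arbitrary):

```python
import numpy as np

def latin_hypercube(n_runs: int, n_inputs: int, seed=None) -> np.ndarray:
    """Random Latin hypercube on [0, 1]^d: each input's range is cut into
    n_runs equal strata and every stratum is sampled exactly once."""
    rng = np.random.default_rng(seed)
    strata = np.argsort(rng.random((n_runs, n_inputs)), axis=0)  # random order
    return (strata + rng.random((n_runs, n_inputs))) / n_runs    # jitter in stratum

design = latin_hypercube(10, 3, seed=42)   # 10 runs of a toy 3-input code
print(design)
```

Augmenting such a design with further runs while preserving its space-filling properties is exactly the kind of question studied in the thesis.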

Relevance: 20.00%

Abstract:

LiFePO4 is a commercially available battery material with good theoretical discharge capacity, excellent cycle life and increased safety compared with competing Li-ion chemistries. It has been the focus of considerable experimental and theoretical scrutiny in the past decade, resulting in LiFePO4 cathodes that perform well at high discharge rates. This scrutiny has raised several questions about the behaviour of LiFePO4 material during charge and discharge. In contrast to many other battery chemistries that intercalate homogeneously, LiFePO4 can phase-separate into highly and lowly lithiated phases, with intercalation proceeding by the advance of an interface between these two phases. The main objective of this thesis is to construct mathematical models of LiFePO4 cathodes that can be validated against experimental discharge curves, in an attempt to understand some of the multi-scale dynamics of LiFePO4 cathodes that can be difficult to determine experimentally. The first section of this thesis constructs a three-scale mathematical model of LiFePO4 cathodes that uses a simple Stefan problem (used previously in the literature) to describe the assumed phase change. LiFePO4 crystals have been observed agglomerating in cathodes to form porous collections of crystals, and this morphology motivates the use of three size scales in the model. The multi-scale model validates well against experimental data, and the validated model is then used to examine the role of manufacturing parameters (including the agglomerate radius) in battery performance. The remainder of the thesis is concerned with investigating phase-field models as a replacement for the aforementioned Stefan problem. Phase-field models have recently been applied to LiFePO4 and are a far more accurate representation of experimentally observed crystal-scale behaviour. They are based around the Cahn-Hilliard-reaction (CHR) initial-boundary value problem (IBVP), a fourth-order PDE with electrochemical (flux) boundary conditions that is very stiff and possesses multiple time and space scales. Numerical solutions to the CHR IBVP can be difficult to compute, and hence a least-squares based finite volume method (FVM) is developed for discretising both the full CHR IBVP and the more traditional Cahn-Hilliard IBVP. Phase-field models are subject to two main physicality constraints, and the numerical scheme presented performs well under these constraints. This least-squares based FVM is then used to simulate the discharge of individual crystals of LiFePO4 in two dimensions. The discharge is subject to isotropic Li+ diffusion, based on experimental evidence suggesting that the normally orthotropic transport of Li+ in LiFePO4 may become more isotropic in the presence of lattice defects. Numerical investigation shows that two-dimensional Li+ transport results in crystals that phase-separate, even at very high discharge rates. This is very different from results in the literature, where phase separation in LiFePO4 crystals is suppressed during discharge with orthotropic Li+ transport. Finally, the three-scale cathodic model used at the beginning of the thesis is modified to simulate modern, high-rate LiFePO4 cathodes. High-rate cathodes typically do not contain (large) agglomerates, and therefore a two-scale model is developed. The Stefan problem used previously is also replaced with the phase-field models examined in earlier chapters.
The results from this model compare poorly with experimental data, though a significant parameter regime could not be investigated numerically. Many-particle effects, however, are evident in the simulated discharges, matching the conclusions of recent literature. These effects result in crystals that are subject to local currents very different from the discharge rate applied to the cathode, which impacts the phase-separating behaviour of the crystals and raises questions about the validity of using cathodic-scale experimental measurements to determine crystal-scale behaviour.
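For orientation, the sketch below time-steps the standard Cahn-Hilliard equation in one dimension with a simple finite-volume discretisation and zero-flux boundaries. It shows the phase separation discussed above, but it is not the least-squares FVM or the CHR problem with electrochemical boundary conditions developed in the thesis; all parameter values are illustrative:

```python
import numpy as np

# c_t = M * (mu)_xx with mu = f'(c) - kappa * c_xx and double well
# f(c) = c^2 (1-c)^2 / 4, so f'(c) = c (1-c)(1-2c) / 2.
L, N = 1.0, 128
dx = L / N
kappa, M = 1e-4, 1.0
dt = 0.1 * dx**4 / (M * kappa)            # explicit stepping needs dt ~ dx^4

rng = np.random.default_rng(0)
c = 0.5 + 0.05 * rng.standard_normal(N)   # noisy uniform mixture

def lap(u):
    up = np.pad(u, 1, mode="edge")        # zero-flux (reflecting) boundaries
    return (up[2:] - 2.0 * u + up[:-2]) / dx**2

for _ in range(20000):
    mu = 0.5 * c * (1.0 - c) * (1.0 - 2.0 * c) - kappa * lap(c)
    c += dt * M * lap(mu)

print("fraction in each phase:", (c < 0.5).mean(), (c >= 0.5).mean())
```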

Relevance: 20.00%

Abstract:

Availability has become a primary goal of information security and is as significant as other goals, in particular confidentiality and integrity. Maintaining the availability of essential services on the public Internet is an increasingly difficult task in the presence of sophisticated attackers. Attackers may abuse the limited computational resources of a service provider, and thus managing computational costs is a key strategy for achieving the goal of availability. In this thesis we focus on cryptographic approaches to managing computational costs, in particular computational effort. We focus on two cryptographic techniques: computational puzzles in cryptographic protocols and secure outsourcing of cryptographic computations. This thesis contributes to the area of cryptographic protocols in the following ways. First, we propose the most efficient puzzle scheme based on modular exponentiations which, unlike previous schemes of the same type, involves only a few modular multiplications for solution verification; our scheme is provably secure. We then introduce a new efficient gradual authentication protocol by integrating a puzzle into a specific signature scheme. Our software implementation results for the new authentication protocol show that our approach is more efficient and effective than the traditional RSA signature-based one, and improves the DoS resilience of the Secure Sockets Layer (SSL) protocol, the most widely used security protocol on the Internet. Our next contributions relate to capturing a specific property that enables secure outsourcing of cryptographic tasks, in particular partial decryption. We formally define the property of (non-trivial) public verifiability for general encryption schemes, key encapsulation mechanisms (KEMs), and hybrid encryption schemes, encompassing public-key, identity-based, and tag-based encryption flavors. We show that some generic transformations and concrete constructions enjoy this property, and then present a new public-key encryption (PKE) scheme having this property and a proof of security under standard assumptions. Finally, we combine puzzles with PKE schemes to enable delayed decryption in applications such as e-auctions and e-voting. For this we first introduce the notion of effort-release PKE (ER-PKE), encompassing the well-known timed-release encryption and encapsulated key escrow techniques. We then present a security model for ER-PKE and a generic construction of ER-PKE complying with our security notion.
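To illustrate the asymmetry that makes puzzles useful for DoS resilience, here is a generic hash-based client puzzle: the client performs roughly 2^k hash evaluations to find a valid nonce, while the server verifies with a single hash. This is a sketch of the general technique only; the thesis's scheme is based on modular exponentiations and achieves cheaper, provably secure verification:

```python
import hashlib, os
from itertools import count

DIFFICULTY = 18                                   # required leading zero bits

def solve(challenge: bytes) -> int:
    """Client-side work: ~2^DIFFICULTY hash evaluations on average."""
    target = 1 << (256 - DIFFICULTY)
    for nonce in count():
        h = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(h, "big") < target:
            return nonce

def verify(challenge: bytes, nonce: int) -> bool:
    """Server-side check: a single hash evaluation."""
    h = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(h, "big") < (1 << (256 - DIFFICULTY))

challenge = os.urandom(16)    # fresh, unpredictable server challenge
nonce = solve(challenge)
assert verify(challenge, nonce)
```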

Relevance: 20.00%

Abstract:

Cell migration is a behaviour critical to many key biological processes, including wound healing, cancerous cell invasion and morphogenesis, the development of an organism from an embryo. However, given that each of these situations is distinctly different and cells are extremely complicated biological objects, interest lies in more basic experiments which seek to remove confounding factors and present a less complex environment within which cell migration can be experimentally examined. These include in vitro studies like the scratch assay or circle migration assay, and ex vivo studies like the colonisation of the hindgut by neural crest cells. The reduced complexity of these experiments also makes them much more enticing as problems to model mathematically, as is done here. The primary goal of the mathematical models used in this thesis is to shed light on which cellular behaviours work to generate the travelling waves of invasion observed in these experiments, and to explore how variations in these behaviours can potentially predict differences in the invasive pattern which are experimentally observed when cell types or the chemical environment are changed. The relevant literature has already identified the difficulty of distinguishing between these behaviours when using traditional mathematical biology techniques operating on a macroscopic scale, and so here a sophisticated individual-cell-level model, an extension of the Cellular Potts Model (CPM), has been constructed and used to model a scratch assay experiment. This model includes a novel mechanism for dealing with cell proliferation that allows the differing properties of quiescent and proliferative cells to be incorporated into their behaviour. The model is considered both for its predictive power and is used to make comparisons with the travelling waves which arise in more traditional macroscopic simulations. These comparisons demonstrate a surprising amount of agreement between the two modelling frameworks, and suggest further novel modifications to the CPM that would allow it to better model cell migration. Considerations of the model's behaviour are used to argue that the dominant effect governing cell migration (random motility or signal-driven taxis) likely depends on the sort of invasion demonstrated by the cells, as easily seen by microscopic photography. Additionally, a scratch assay simulated on a non-homogeneous domain consisting of a 'fast' and a 'slow' region is used to further differentiate between these potential cell motility behaviours. A heterogeneous domain is a novel situation which has not been considered mathematically in this context, nor, to the best of the candidate's knowledge, constructed experimentally. This problem therefore serves as a thought experiment used to test the conclusions arising from the simulations on homogeneous domains, and to suggest what might be observed should this non-homogeneous assay be experimentally realised. Non-intuitive cell invasion patterns are predicted for diffusely-invading cells which respond to a cell-consumed signal or nutrient, contrasted with rather expected behaviour in the case of random-motility-driven invasion. The potential experimental observation of these behaviours is demonstrated by the individual-cell-level model used in this thesis, which agrees with the PDE model in predicting these unexpected invasion patterns.
In the interest of examining such a non-homogeneous domain experimentally, some brief suggestions are made as to how this could be achieved.
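The macroscopic travelling waves referred to above are classically produced by the Fisher-KPP equation, u_t = D u_xx + r u (1 - u). A minimal sketch of this PDE benchmark (parameters and domain are illustrative; the thesis's PDE models and the CPM are considerably more detailed):

```python
import numpy as np

D, r = 1.0, 1.0
L, N = 100.0, 1000
dx, dt, steps = L / N, 0.004, 10000     # dt satisfies D*dt/dx^2 <= 1/2
x = np.linspace(0.0, L, N)
u = np.where(x < 5.0, 1.0, 0.0)         # initially confined cell population

for _ in range(steps):
    up = np.pad(u, 1, mode="edge")      # zero-flux boundaries
    u = u + dt * (D * (up[2:] - 2.0 * u + up[:-2]) / dx**2 + r * u * (1.0 - u))

front = x[np.where(u > 0.5)[0][-1]]     # half-maximum density marks the front
print(f"front at t = {steps * dt:.0f}: x = {front:.1f}; theoretical speed 2*sqrt(D*r) = 2")
```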

Relevance: 20.00%

Abstract:

The feral pig, Sus scrofa, is a widespread and abundant invasive species in Australia. Feral pigs pose a significant threat to the environment, agricultural industry, and human health, and in far north Queensland they endanger World Heritage values of the Wet Tropics. Historical records document the first introduction of domestic pigs into Australia via European settlers in 1788 and subsequent introductions from Asia from 1827 onwards. Since this time, domestic pigs have been accidentally and deliberately released into the wild and significant feral pig populations have become established, resulting in the declaration of this species as a class 2 pest in Queensland. The overall objective of this study was to assess the population genetic structure of feral pigs in far north Queensland, in particular to enable delineation of demographically independent management units. The identification of ecologically meaningful management units using molecular techniques can assist in targeting feral pig control to bring about effective long-term management. Molecular genetic analysis was undertaken on 434 feral pigs from 35 localities between Tully and Innisfail. Seven polymorphic and unlinked microsatellite loci were screened and fixation indices (FST and analogues) and Bayesian clustering methods were used to identify population structure and management units in the study area. Sequencing of the hyper-variable mitochondrial control region (D-loop) of 35 feral pigs was also examined to identify pig ancestry. Three management units were identified in the study at a scale of 25 to 35 km. Even with the strong pattern of genetic structure identified in the study area, some evidence of long distance dispersal and/or translocation was found as a small number of individuals exhibited ancestry from a management unit outside of which they were sampled. Overall, gene flow in the study area was found to be influenced by environmental features such as topography and land use, but no distinct or obvious natural or anthropogenic geographic barriers were identified. Furthermore, strong evidence was found for non-random mating between pigs of European and Asian breeds indicating that feral pig ancestry influences their population genetic structure. Phylogenetic analysis revealed two distinct mitochondrial DNA clades, representing Asian domestic pig breeds and European breeds. A significant finding was that pigs of Asian origin living in Innisfail and south Tully were not mating randomly with European breed pigs populating the nearby Mission Beach area. Feral pig control should be implemented in each of the management units identified in this study. The control should be coordinated across properties within each management unit to prevent re-colonisation from adjacent localities. The adjacent rainforest and National Park Estates, as well as the rainforest-crop boundary should be included in a simultaneous control operation for greater success.
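The fixation index mentioned above measures how allele-frequency variation is partitioned between demes. A toy two-deme, two-allele sketch (frequencies invented; the study used seven microsatellite loci and Bayesian clustering across 35 localities):

```python
import numpy as np

p = np.array([[0.8, 0.2],             # deme 1 allele frequencies at one locus
              [0.3, 0.7]])            # deme 2
hs = (2 * p[:, 0] * p[:, 1]).mean()   # mean within-deme heterozygosity
pbar = p.mean(axis=0)                 # pooled allele frequencies
ht = 2 * pbar[0] * pbar[1]            # total-population heterozygosity
print("FST =", (ht - hs) / ht)        # ~0.25: strong differentiation
```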

Relevance: 20.00%

Abstract:

A user's query is considered to be an imprecise description of their information need. Automatic query expansion is the process of reformulating the original query with the goal of improving retrieval effectiveness. Many successful query expansion techniques ignore information about the dependencies that exist between words in natural language. However, more recent approaches have demonstrated that, by explicitly modeling associations between terms, significant improvements in retrieval effectiveness can be achieved over approaches that ignore these dependencies. State-of-the-art dependency-based approaches have been shown to primarily model syntagmatic associations. Syntagmatic associations capture the likelihood that two terms co-occur more often than by chance. However, structural linguistics relies on both syntagmatic and paradigmatic associations to deduce the meaning of a word. Given the success of dependency-based approaches and the reliance on word meanings in the query formulation process, we argue that modeling both syntagmatic and paradigmatic information in the query expansion process will improve retrieval effectiveness. This article develops and evaluates a new query expansion technique based on a formal, corpus-based model of word meaning that captures syntagmatic and paradigmatic associations. We demonstrate that when sufficient statistical information exists, as in the case of longer queries, including paradigmatic information alone provides significant improvements in retrieval effectiveness across a wide variety of data sets. More generally, when our new query expansion approach is applied to large-scale web retrieval, it demonstrates significant improvements in retrieval effectiveness over a strong baseline system based on a commercial search engine.
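A syntagmatic association score can be as simple as pointwise mutual information over document co-occurrence, as in this toy sketch (the corpus is invented; the article's model of word meaning is far richer and also captures paradigmatic associations):

```python
import math
from collections import Counter
from itertools import combinations

docs = ["cell migration assay imaging",
        "cell migration wound healing",
        "query expansion retrieval effectiveness",
        "retrieval effectiveness web search"]
tokenised = [set(d.split()) for d in docs]
term_df = Counter(t for doc in tokenised for t in doc)
pair_df = Counter(p for doc in tokenised for p in combinations(sorted(doc), 2))
n = len(docs)

def pmi(a: str, b: str) -> float:
    """log2 of how much more often a and b co-occur than chance predicts."""
    p_ab = pair_df[tuple(sorted((a, b)))] / n
    p_a, p_b = term_df[a] / n, term_df[b] / n
    return math.log2(p_ab / (p_a * p_b)) if p_ab else float("-inf")

print(pmi("cell", "migration"))   # co-occur in every "cell" doc: PMI = 1.0
print(pmi("cell", "retrieval"))   # never co-occur: -inf
```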

Relevance: 20.00%

Abstract:

In this thesis, three mathematical models describing the growth of a solid tumour, incorporating the host tissue and the immune system response, are developed and investigated. The initial model describes the dynamics of the growing tumour and the immune response; the second model extends this by introducing a time-varying dendritic cell-based treatment strategy. Finally, in the third model, we present a growing tumour modelled using a hybrid cellular automaton. These models can inform pre-experimental work, assisting in the design of more effective and efficient laboratory experiments on tumour growth and its interactions with the immune system and with immunotherapy.
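A flavour of such tumour-immune dynamics can be given by a two-equation ODE system in the spirit of classical tumour-immune models (the equations and parameter values below are illustrative assumptions, not the thesis's models):

```python
from scipy.integrate import solve_ivp

def rhs(t, y, s=0.1, d=0.05, k=1.0, r=0.4, K=1.0, c=0.3):
    E, T = y                                # effector immune cells, tumour cells
    dE = s + k * E * T / (1.0 + T) - d * E  # baseline supply, recruitment, decay
    dT = r * T * (1.0 - T / K) - c * E * T  # logistic growth minus immune kill
    return [dE, dT]

sol = solve_ivp(rhs, (0.0, 200.0), [0.3, 0.01])
print("final effector and tumour levels:", sol.y[0, -1], sol.y[1, -1])
```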

Relevance: 20.00%

Abstract:

Introduction: Recent advances in the planning and delivery of radiotherapy treatments have resulted in improvements in the accuracy and precision with which therapeutic radiation can be administered. As the complexity of the treatments increases it becomes more difficult to predict the dose distribution in the patient accurately. Monte Carlo (MC) methods have the potential to improve the accuracy of the dose calculations and are increasingly being recognised as the 'gold standard' for predicting dose deposition in the patient [1]. This project has three main aims: 1. To develop tools that enable the transfer of treatment plan information from the treatment planning system (TPS) to a MC dose calculation engine. 2. To develop tools for comparing the 3D dose distributions calculated by the TPS and the MC dose engine. 3. To investigate the radiobiological significance of any differences between the TPS patient dose distribution and the MC dose distribution in terms of Tumour Control Probability (TCP) and Normal Tissue Complication Probability (NTCP). The work presented here addresses the first two aims.

Methods: (1a) Plan importing: A database of commissioned accelerator models (Elekta Precise and Varian 2100CD) has been developed for treatment simulations in the MC system (EGSnrc/BEAMnrc). Beam descriptions can be exported from the TPS using the widespread DICOM framework, and the resultant files are parsed with the assistance of a software library (PixelMed Java DICOM Toolkit). The information in these files (such as the monitor units, the jaw positions and the gantry orientation) is used to construct a plan-specific accelerator model which allows an accurate simulation of the patient treatment field. (1b) Dose simulation: The calculation of a dose distribution requires patient CT images, which are prepared for the MC simulation using a tool (CTCREATE) packaged with the system. Beam simulation results are converted to absolute dose per MU using calibration factors recorded during the commissioning process and treatment simulation. These distributions are combined according to the MU meter settings stored in the exported plan to produce an accurate description of the prescribed dose to the patient. (2) Dose comparison: TPS dose calculations can be obtained using either a DICOM export or direct retrieval of binary dose files from the file system. Dose difference, gamma evaluation and normalised dose difference algorithms [2] were employed for the comparison of the TPS dose distribution and the MC dose distribution. These implementations are independent of spatial resolution and are able to interpolate for comparisons.

Results and Discussion: The tools successfully produced Monte Carlo input files for a variety of plans exported from the Eclipse (Varian Medical Systems) and Pinnacle (Philips Medical Systems) planning systems, ranging in complexity from a single uniform square field to a five-field step-and-shoot IMRT treatment. The simulation of collimated beams has been verified geometrically, and validation of dose distributions in a simple body phantom (QUASAR) will follow. The developed dose comparison algorithms have also been tested with controlled dose distribution changes.

Conclusion: The capability of the developed code to independently process treatment plans has been demonstrated. A number of limitations exist: only static fields are currently supported (dynamic wedges and dynamic IMRT will require further development), and the process has not been tested for planning systems other than Eclipse and Pinnacle. The tools will be used to independently assess the accuracy of the current treatment planning system dose calculation algorithms for complex treatment deliveries such as IMRT, in treatment sites where patient inhomogeneities are expected to be significant.

Acknowledgements: Computational resources and services used in this work were provided by the HPC and Research Support Group, Queensland University of Technology, Brisbane, Australia. Pinnacle dose parsing was made possible with the help of Paul Reich, North Coast Cancer Institute, North Coast, New South Wales.
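Of the comparison measures listed under (2), the gamma evaluation combines a dose-difference and a distance-to-agreement criterion. A one-dimensional sketch with a common 3%/3 mm global criterion (the profiles are synthetic; clinical implementations work in 3D and interpolate):

```python
import numpy as np

def gamma_1d(ref, ev, x, dose_tol=0.03, dist_tol=3.0):
    """Gamma index of each reference point against an evaluated profile."""
    norm = dose_tol * ref.max()               # global dose normalisation
    gamma = np.empty_like(ref)
    for i, (xi, di) in enumerate(zip(x, ref)):
        term = ((ev - di) / norm) ** 2 + ((x - xi) / dist_tol) ** 2
        gamma[i] = np.sqrt(term.min())
    return gamma

x = np.linspace(-50.0, 50.0, 201)              # positions in mm
tps = np.exp(-((x / 30.0) ** 2))               # toy TPS dose profile
mc = 1.02 * np.exp(-(((x - 1.0) / 30.0) ** 2)) # toy MC dose: 2% high, 1 mm shift
g = gamma_1d(tps, mc, x)
print("gamma pass rate (gamma <= 1):", (g <= 1.0).mean())
```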

Relevance: 20.00%

Abstract:

The increased adoption of business process management approaches, tools and practices has led organizations to accumulate large collections of business process models. These collections can easily include hundreds to thousands of models, especially in the context of multinational corporations or as a result of organizational mergers and acquisitions. A concrete problem is thus how to maintain these large repositories in such a way that their complexity does not hamper their practical usefulness as a means of describing and communicating business operations. This paper proposes a technique to automatically infer suitable names for business process models and fragments thereof. This technique is useful in model abstraction scenarios, for instance when user-specific views of a repository are required, or as part of a refactoring initiative aimed at simplifying the repository's complexity. The technique is grounded in an adaptation of the theory of meaning to the realm of business process models. We implemented the technique in a prototype tool and conducted an extensive evaluation using three process model collections from practice and a case study involving process modelers with different levels of experience.

Relevance: 20.00%

Abstract:

This thesis explored the development of statistical methods to support the monitoring and improvement of the quality of treatment delivered to patients undergoing coronary angioplasty procedures. To achieve this goal, a suite of outcome measures was identified to characterise the performance of the service, statistical tools were developed to monitor the various indicators, and measures to strengthen governance processes were implemented and validated. Although this work focused on the pursuit of these aims in the context of an angioplasty service located at a single clinical site, development of the tools and techniques was undertaken mindful of their potential application to other clinical specialties and a wider, potentially national, scope.
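One widely used tool for this kind of outcome monitoring is the Bernoulli CUSUM chart, which accumulates log-likelihood-ratio evidence case by case and signals when performance drifts from an acceptable to an unacceptable rate. A sketch with invented rates and threshold (the thesis's monitoring tools and their calibration are more involved):

```python
import numpy as np

p0, p1 = 0.02, 0.06            # acceptable vs unacceptable complication rates
h = 4.5                        # decision threshold (set by simulation in practice)
w_fail = np.log(p1 / p0)       # log-likelihood ratio for a complication
w_ok = np.log((1 - p1) / (1 - p0))   # ...and for a complication-free case

rng = np.random.default_rng(1)
outcomes = rng.random(500) < 0.06    # simulated cases from a degraded process

s, signals = 0.0, []
for i, failed in enumerate(outcomes):
    s = max(0.0, s + (w_fail if failed else w_ok))
    if s > h:
        signals.append(i)      # chart signals: investigate, then restart
        s = 0.0
print("CUSUM signals at cases:", signals)
```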

Relevance: 20.00%

Abstract:

Genomic DNA obtained from patient whole blood samples is a key element of genomic research. The advantages and disadvantages, in terms of time-efficiency, cost-effectiveness and laboratory requirements, of the procedures available to isolate nucleic acids need to be considered before choosing any particular method. These characteristics have not been fully evaluated for some laboratory techniques, such as the salting out method for DNA extraction, which has been excluded from comparison in the studies published to date. We compared three different protocols (a traditional salting out method, a modified salting out method and a commercially available kit method) to determine the most cost-effective and time-efficient method of extracting DNA. We extracted genomic DNA from whole blood samples obtained from breast cancer patient volunteers and compared the results in terms of quantity (concentration of DNA extracted and DNA obtained per ml of blood used) and quality (260/280 ratio and polymerase chain reaction product amplification) of the obtained yield. On average, the three methods showed no statistically significant differences in the final result, but when we accounted for the time and cost of each method, the differences were very significant. The modified salting out method resulted in a seven- and twofold reduction in cost compared to the commercial kit and the traditional salting out method, respectively, and reduced the time from 3 days to 1 hour compared to the traditional salting out method. This highlights the modified salting out method as a suitable choice for laboratories and research centres, particularly when dealing with a large number of samples.