901 results for Mathematical Modeling
Abstract:
Recent advances in computational geodynamics are applied to explore the link between Earth's heat, its chemistry and its mechanical behavior. Computational thermal-mechanical solutions now allow us to understand Earth patterns by solving the basic physics of heat transfer. This approach is currently used to solve basic convection patterns of terrestrial planets. Applying the same methodology at smaller scales delivers promising similarities between observed and predicted structures, which are often the sites of mineral deposits. The new approach involves a fully coupled solution to the energy, momentum and continuity equations of the system at all scales, allowing the prediction of fractures, shear zones and other typical geological patterns out of a randomly perturbed initial state. The results of this approach link a global geodynamic mechanical framework through regional-scale mineral deposits down to the underlying micro-scale processes. Ongoing work addresses the challenge of incorporating chemistry into the formulation.
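As a point of reference, the fully coupled system referred to here is commonly written as follows, in a minimal sketch assuming an incompressible Boussinesq fluid (generic notation, not taken from the paper):

```latex
% Coupled continuity, momentum (Stokes) and energy equations for
% incompressible Boussinesq convection -- a generic sketch, not
% necessarily the exact formulation used by the authors.
\begin{align}
  \nabla \cdot \mathbf{u} &= 0 && \text{(continuity)} \\
  -\nabla p + \nabla \cdot \left[ \eta \left( \nabla \mathbf{u} + \nabla \mathbf{u}^{\mathsf{T}} \right) \right]
    &= \rho_0 \alpha \left( T - T_0 \right) g \, \hat{\mathbf{z}} && \text{(momentum)} \\
  \frac{\partial T}{\partial t} + \mathbf{u} \cdot \nabla T
    &= \kappa \nabla^2 T + \frac{H}{\rho_0 c_p} && \text{(energy)}
\end{align}
```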
Abstract:
We examine which capabilities technologies provide to support collaborative process modeling. We develop a model that explains how technology capabilities impact cognitive group processes, and how they lead to improved modeling outcomes and positive technology beliefs. We test this model through a free simulation experiment of collaborative process modelers structured around a set of modeling tasks. With our study, we provide an understanding of the process of collaborative process modeling, and detail implications for research and guidelines for the practical design of collaborative process modeling.
Abstract:
This paper focuses on very young students' ability to engage in repeating pattern tasks and on identifying strategies that assist them to ascertain the structure of the pattern. It describes results of a study which is part of the Early Years Generalising Project (EYGP) and involves Australian students in Years 1 to 4 (ages 5-10). This paper reports on results from the early years' cohort (Year 1 and 2 students). Clinical interviews were used to collect data concerning students' ability to determine elements in different positions when two units of a repeating pattern were shown. This meant that students were required to identify the multiplicative structure of the pattern. Results indicate that there are particular strategies that assist students to predict these elements, and there appears to be a hierarchy of pattern activities that help students to understand the structure of repeating patterns.
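The multiplicative structure referred to here reduces to modular arithmetic: the element at position n of a pattern whose repeating unit has length k is the element at position n mod k within that unit. A minimal sketch in Python (the pattern and names are illustrative, not from the study):

```python
def element_at(position: int, unit: list) -> str:
    """Return the element at a 1-indexed position of a repeating pattern.

    The pattern repeats a fixed unit, so position n maps to index
    (n - 1) % len(unit) within that unit.
    """
    return unit[(position - 1) % len(unit)]

# Example: the unit "red, blue, blue" repeated indefinitely.
unit = ["red", "blue", "blue"]
print(element_at(7, unit))   # position 7  -> (7-1) % 3 = 0 -> "red"
print(element_at(12, unit))  # position 12 -> (12-1) % 3 = 2 -> "blue"
```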
Abstract:
Quantitative imaging methods to analyze cell migration assays are not standardized. Here we present a suite of two-dimensional barrier assays describing the collective spreading of an initially-confined population of 3T3 fibroblast cells. To quantify the motility rate we apply two different automatic image detection methods to locate the position of the leading edge of the spreading population after 24, 48 and 72 hours. These results are compared with a manual edge detection method where we systematically vary the detection threshold. Our results indicate that the observed spreading rates are very sensitive to the choice of image analysis tools, and we show that a standard measure of cell migration can vary by as much as 25% for the same experimental images depending on the details of the image analysis tools. Our results imply that it is very difficult, if not impossible, to meaningfully compare previously published measures of cell migration, since previous results have been obtained using different image analysis techniques and the details of these techniques are not always reported. Using a mathematical model, we provide a physical interpretation of our edge detection results. The physical interpretation is important since edge detection algorithms alone do not specify any physical measure, or physical definition, of the leading edge of the spreading population. Our modeling indicates that variations in the image threshold parameter correspond to a consistent variation in the local cell density. This means that varying the threshold parameter is equivalent to varying the location of the leading edge in the range of approximately 1-5% of the maximum cell density.
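As a concrete illustration of how a detection threshold determines the reported edge position, here is a minimal sketch assuming a grayscale image in which intensity is a proxy for cell density (the synthetic image and threshold values are illustrative, not the study's assays):

```python
import numpy as np

def leading_edge(image: np.ndarray, threshold_fraction: float) -> int:
    """Locate the leading edge of a spreading cell population.

    `image` is a 2D array with the population spreading along axis 1.
    The edge is the furthest column whose mean intensity (a proxy for
    cell density) exceeds `threshold_fraction` of the maximum column mean.
    """
    profile = image.mean(axis=0)                 # column-averaged density profile
    cutoff = threshold_fraction * profile.max()  # absolute detection threshold
    above = np.where(profile >= cutoff)[0]
    return int(above.max()) if above.size else -1

# Synthetic example: density decaying away from an initially-confined population.
rng = np.random.default_rng(0)
img = np.clip(np.linspace(1.0, 0.0, 200)[None, :]
              + 0.05 * rng.standard_normal((100, 200)), 0, 1)

# The detected edge position shifts as the threshold varies,
# mirroring the sensitivity reported above.
for frac in (0.01, 0.05):
    print(frac, leading_edge(img, frac))
```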
Abstract:
Controlled drug delivery is a key topic in modern pharmacotherapy, where controlled drug delivery devices are required to prolong the period of release, maintain a constant release rate, or release the drug with a predetermined release profile. In the pharmaceutical industry, the development process of a controlled drug delivery device may be facilitated enormously by the mathematical modelling of drug release mechanisms, directly decreasing the number of necessary experiments. Such mathematical modelling is difficult because several mechanisms are involved during the drug release process. The main drug release mechanisms of a controlled release device depend on the device's physicochemical properties, and include diffusion, swelling and erosion. In this thesis, four controlled drug delivery models are investigated. These four models selectively involve the solvent penetration into the polymeric device, the swelling of the polymer, the polymer erosion and the drug diffusion out of the device, but all share two common key features. The first is that the solvent penetration into the polymer causes the transition of the polymer from a glassy state into a rubbery state. The interface between the two states of the polymer is modelled as a moving boundary, and the speed of this interface is governed by a kinetic law. The second feature is that drug diffusion only happens in the rubbery region of the polymer, with a nonlinear diffusion coefficient which depends on the concentration of solvent. These models are analysed using both formal asymptotics and numerical computation, where front-fixing methods and the method of lines with finite difference approximations are used to solve the models numerically. This numerical scheme is conservative, accurate and easily implemented for moving boundary problems, and is thoroughly explained in Section 3.2. The small time asymptotic analysis in Sections 5.3.1, 6.3.1 and 7.2.1 shows that these models exhibit the non-Fickian behaviour referred to as Case II diffusion, with an initial constant rate of drug release, which is appealing to the pharmaceutical industry because it indicates zero-order release. The numerical results of the models qualitatively confirm the experimental behaviour identified in the literature. The knowledge obtained from investigating these models can help in developing more complex multi-layered drug delivery devices that achieve sophisticated drug release profiles. A multi-layer matrix tablet, which consists of a number of polymer layers designed to provide sustained and constant drug release or bimodal drug release, is also discussed in this research. The moving boundary problem describing the solvent penetration into the polymer also arises in melting and freezing problems, which have been modelled as the classical one-phase Stefan problem. The classical one-phase Stefan problem has unrealistic singularities at the complete melting time. Hence we investigate the effect of including kinetic undercooling in the melting problem; the resulting problem is called the one-phase Stefan problem with kinetic undercooling. Interestingly, we discover that the unrealistic singularities of the classical one-phase Stefan problem at the complete melting time are regularised, and the small time asymptotic analysis in Section 3.3 shows that the small time behaviour of the one-phase Stefan problem with kinetic undercooling differs from that of the classical one-phase Stefan problem.
In the case of melting very small particles, it is known that surface tension effects are important. The effect of including surface tension in the melting problem for nanoparticles (without kinetic undercooling) has been investigated in the past; however, the one-phase Stefan problem with surface tension exhibits finite-time blow-up. Therefore we investigate the effect of including both surface tension and kinetic undercooling in the melting problem for nanoparticles, and find that the solution continues to exist until complete melting. The investigation of including kinetic undercooling and surface tension in the melting problems reveals more insight into the regularisation of unphysical singularities in the classical one-phase Stefan problem. This investigation gives a better understanding of melting a particle, and contributes to the current body of knowledge related to melting and freezing due to heat conduction.
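For reference, a minimal sketch of the one-phase Stefan problem with kinetic undercooling in dimensionless form (generic notation and scalings; the thesis's exact formulation may differ):

```latex
% One-phase Stefan problem with kinetic undercooling (dimensionless sketch).
% u(x,t): temperature, s(t): moving melt front, \epsilon: kinetic coefficient.
\begin{align}
  \frac{\partial u}{\partial t} &= \frac{\partial^2 u}{\partial x^2},
      && 0 < x < s(t), \\
  u(0,t) &= 1
      && \text{(fixed boundary temperature)}, \\
  \frac{\mathrm{d}s}{\mathrm{d}t} &= -\left.\frac{\partial u}{\partial x}\right|_{x=s(t)}
      && \text{(Stefan condition)}, \\
  u\big(s(t),t\big) &= -\epsilon \, \frac{\mathrm{d}s}{\mathrm{d}t}
      && \text{(kinetic undercooling; } \epsilon = 0 \text{ recovers the classical problem)}.
\end{align}
```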
Abstract:
LiFePO4 is a commercially available battery material with good theoretical discharge capacity, excellent cycle life and increased safety compared with competing Li-ion chemistries. It has been the focus of considerable experimental and theoretical scrutiny in the past decade, resulting in LiFePO4 cathodes that perform well at high discharge rates. This scrutiny has raised several questions about the behaviour of LiFePO4 material during charge and discharge. In contrast to many other battery chemistries that intercalate homogeneously, LiFePO4 can phase-separate into lithium-rich and lithium-poor phases, with intercalation proceeding by advancing an interface between these two phases. The main objective of this thesis is to construct mathematical models of LiFePO4 cathodes that can be validated against experimental discharge curves, in an attempt to understand some of the multi-scale dynamics of LiFePO4 cathodes that can be difficult to determine experimentally. The first section of this thesis constructs a three-scale mathematical model of LiFePO4 cathodes that uses a simple Stefan problem (which has been used previously in the literature) to describe the assumed phase-change. LiFePO4 crystals have been observed agglomerating in cathodes to form a porous collection of crystals, and this morphology motivates the use of three size-scales in the model. The multi-scale model validates well against experimental data, and this validated model is then used to examine the role of manufacturing parameters (including the agglomerate radius) on battery performance. The remainder of the thesis is concerned with investigating phase-field models as a replacement for the aforementioned Stefan problem. Phase-field models have recently been applied to LiFePO4 and are a far more accurate representation of experimentally observed crystal-scale behaviour. They are based around the Cahn-Hilliard-reaction (CHR) IBVP, a fourth-order PDE with electrochemical (flux) boundary conditions that is very stiff and possesses multiple time and space scales. Numerical solutions to the CHR IBVP can be difficult to compute, and hence a least-squares based Finite Volume Method (FVM) is developed for discretising both the full CHR IBVP and the more traditional Cahn-Hilliard IBVP. Phase-field models are subject to two main physicality constraints, and the numerical scheme presented performs well under these constraints. This least-squares based FVM is then used to simulate the discharge of individual crystals of LiFePO4 in two dimensions. This discharge is subject to isotropic Li+ diffusion, based on experimental evidence that suggests the normally orthotropic transport of Li+ in LiFePO4 may become more isotropic in the presence of lattice defects. Numerical investigation shows that two-dimensional Li+ transport results in crystals that phase-separate, even at very high discharge rates. This is very different from results in the literature, where phase-separation in LiFePO4 crystals is suppressed during discharge with orthotropic Li+ transport. Finally, the three-scale cathodic model used at the beginning of the thesis is modified to simulate modern, high-rate LiFePO4 cathodes. High-rate cathodes typically do not contain (large) agglomerates, and therefore a two-scale model is developed. The Stefan problem used previously is also replaced with the phase-field models examined in earlier chapters.
The results from this model fit poorly when compared with experimental data, though a significant parameter regime could not be investigated numerically. Many-particle effects, however, are evident in the simulated discharges, matching the conclusions of recent literature. These effects result in crystals that are subject to local currents very different from the discharge rate applied to the cathode, which impacts the phase-separating behaviour of the crystals and raises questions about the validity of using cathodic-scale experimental measurements to determine crystal-scale behaviour.
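For context, a minimal sketch of the Cahn-Hilliard-reaction formulation referred to above (a generic form following the phase-field literature; the thesis's precise free energy, mobility and boundary conditions may differ):

```latex
% Cahn-Hilliard-reaction (CHR) model, generic sketch.
% c: intercalated Li fraction, \mu: chemical potential, M: mobility,
% \kappa: gradient-energy coefficient, R: electrochemical insertion rate.
\begin{align}
  \frac{\partial c}{\partial t} &= \nabla \cdot \big( M c \, \nabla \mu \big), \\
  \mu &= \frac{\partial f_{\mathrm{hom}}(c)}{\partial c} - \kappa \nabla^2 c, \\
  -\, M c \, \nabla \mu \cdot \mathbf{n} &= R(c, \mu, \Delta\phi)
      \quad \text{on the crystal surface (flux boundary condition)}.
\end{align}
```

Substituting the second equation into the first shows why the problem is fourth-order and stiff: the flux depends on third derivatives of c.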
Abstract:
Conceptual modelling supports developers and users of information systems in areas of documentation, analysis or system redesign. The ongoing interest in the modelling of business processes has led to a variety of different grammars, raising the question of the quality of these grammars for modelling. An established way of evaluating the quality of a modelling grammar is by means of an ontological analysis, which can determine the extent to which grammars contain construct deficit, overload, excess or redundancy. While several studies have shown the relevance of most of these criteria, predictions about construct redundancy have yielded inconsistent results in the past, with some studies suggesting that redundancy may even be beneficial for modelling in practice. In this paper we seek to contribute to clarifying the concept of construct redundancy by introducing a revision to the ontological analysis method. Based on the concept of inheritance we propose an approach that distinguishes between specialized and distinct construct redundancy. We demonstrate the potential explanatory power of the revised method by reviewing and clarifying previous results found in the literature.
Abstract:
A user's query is considered to be an imprecise description of their information need. Automatic query expansion is the process of reformulating the original query with the goal of improving retrieval effectiveness. Many successful query expansion techniques ignore information about the dependencies that exist between words in natural language. However, more recent approaches have demonstrated that, by explicitly modeling associations between terms, significant improvements in retrieval effectiveness can be achieved over approaches that ignore these dependencies. State-of-the-art dependency-based approaches have been shown to primarily model syntagmatic associations, which capture the likelihood that two terms co-occur more often than by chance. However, structural linguistics relies on both syntagmatic and paradigmatic associations to deduce the meaning of a word. Given the success of dependency-based approaches and the reliance on word meanings in the query formulation process, we argue that modeling both syntagmatic and paradigmatic information in the query expansion process will improve retrieval effectiveness. This article develops and evaluates a new query expansion technique based on a formal, corpus-based model of word meaning that captures syntagmatic and paradigmatic associations. We demonstrate that when sufficient statistical information exists, as in the case of longer queries, including paradigmatic information alone provides significant improvements in retrieval effectiveness across a wide variety of data sets. More generally, when our new query expansion approach is applied to large-scale web retrieval, it demonstrates significant improvements in retrieval effectiveness over a strong baseline system based on a commercial search engine.
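To make the syntagmatic notion concrete: a common way to score such associations is pointwise mutual information (PMI) over windowed co-occurrence counts. A minimal sketch with an illustrative toy corpus; this is not the article's corpus-based model of word meaning:

```python
import math
from collections import Counter

def pmi_scores(docs: list, window: int = 2) -> dict:
    """Score syntagmatic associations with pointwise mutual information.

    PMI(a, b) = log( p(a, b) / (p(a) * p(b)) ); positive values mean the
    pair co-occurs within `window` more often than chance would predict.
    """
    word_counts, pair_counts, n = Counter(), Counter(), 0
    for doc in docs:
        n += len(doc)
        word_counts.update(doc)
        for i, w in enumerate(doc):
            for v in doc[i + 1 : i + 1 + window]:
                pair_counts[tuple(sorted((w, v)))] += 1
    total_pairs = sum(pair_counts.values())
    return {
        (a, b): math.log((c / total_pairs) / ((word_counts[a] / n) * (word_counts[b] / n)))
        for (a, b), c in pair_counts.items()
    }

docs = [["query", "expansion", "improves", "retrieval"],
        ["query", "expansion", "adds", "terms"]]
scores = pmi_scores(docs)
print(scores[("expansion", "query")])  # frequently co-occurring pair scores highly
```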
Abstract:
In this thesis, three mathematical models describing the growth of a solid tumour, incorporating the host tissue and the immune system response, are developed and investigated. The first model describes the dynamics of the growing tumour and the immune response; it is extended in the second model by introducing a time-varying dendritic cell-based treatment strategy. Finally, in the third model, we present a mathematical model of a growing tumour using a hybrid cellular automaton. These models can inform pre-experimental work, assisting in the design of more effective and efficient laboratory experiments related to tumour growth and its interactions with the immune system and immunotherapy.
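As an illustration of this class of models, a Kuznetsov-type tumour-immune ODE system is a standard starting point; the following is a generic sketch, not necessarily the equations developed in the thesis:

```latex
% Generic Kuznetsov-type tumour-immune interaction model (sketch).
% T: tumour cell population, E: immune effector cell population.
\begin{align}
  \frac{\mathrm{d}T}{\mathrm{d}t} &= a T (1 - b T) - n E T, \\
  \frac{\mathrm{d}E}{\mathrm{d}t} &= s + \frac{\rho E T}{g + T} - m E T - d E.
\end{align}
% a, b: tumour growth and crowding; n: kill rate; s: baseline effector influx;
% \rho, g: immune recruitment; m: effector inactivation; d: natural death.
```

A time-varying treatment, such as the dendritic cell therapy mentioned above, typically enters as an additional source term in the effector equation.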
Abstract:
The impact-induced chemisorption of hydrocarbon molecules (CH3 and CH2) on the H-terminated diamond (001)-(2x1) surface was investigated by molecular dynamics simulation using the many-body Brenner potential. The deposition dynamics of the CH3 radical at impact energies of 0.1-50 eV per molecule were studied, and the energy threshold for chemisorption was calculated. The impact-induced decomposition of hydrogen atoms and the dimer-opening mechanism on the surface were investigated. Furthermore, the probability of a dimer-opening event induced by chemisorption of CH was simulated by randomly varying the impact position as well as the orientation of the molecule relative to the surface. Finally, energetic hydrocarbons were modeled slowing down one after the other to simulate the initial fabrication of diamond-like carbon (DLC) films. The structural characteristics of films synthesized with different hydrogen fluxes were studied. Our results indicate that CH3, CH2 and H are highly reactive and important species in diamond growth. In particular, the fraction of C atoms in the film having sp3 hybridization is enhanced in the presence of H atoms, which is in good agreement with experimental observations.
Abstract:
Recent literature has focused on realized volatility models to predict financial risk. This paper studies the benefit of explicitly modeling jumps in this class of models for value at risk (VaR) prediction. Several popular realized volatility models are compared in terms of their VaR forecasting performances through a Monte Carlo study and an analysis based on empirical data of eight Chinese stocks. The results suggest that careful modeling of jumps in realized volatility models can largely improve VaR prediction, especially for emerging markets where jumps play a stronger role than those in developed markets.
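For context, the standard device for explicitly modeling jumps in realized volatility is to split it into continuous and jump components via bipower variation; a brief sketch of the usual estimators follows (generic notation; the paper's exact model specifications may differ):

```latex
% Realized volatility and its jump decomposition (sketch).
% r_{t,i}: the i-th of M intraday returns on day t.
\begin{align}
  RV_t &= \sum_{i=1}^{M} r_{t,i}^2
      && \text{(realized volatility)} \\
  BV_t &= \frac{\pi}{2} \sum_{i=2}^{M} |r_{t,i}| \, |r_{t,i-1}|
      && \text{(bipower variation, robust to jumps)} \\
  J_t &= \max\big( RV_t - BV_t,\; 0 \big)
      && \text{(jump component)}
\end{align}
```

Forecasting models that include J_t as a separate regressor are what "explicitly modeling jumps" refers to above.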
Abstract:
Molecular-level computer simulations of restricted water diffusion can be used to develop models for relating diffusion tensor imaging measurements of anisotropic tissue to microstructural tissue characteristics. The diffusion tensors resulting from these simulations can then be analyzed in terms of their relationship to the structural anisotropy of the model used. As the translational motion of water molecules is essentially random, their dynamics can be effectively simulated using computers. In addition to modeling water dynamics and water-tissue interactions, the simulation software of the present study was developed to automatically generate collagen fiber networks from user-defined parameters. This flexibility provides the opportunity for further investigations of the relationship between the diffusion tensor of water and morphologically different models representing different anisotropic tissues.
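A minimal sketch of the underlying simulation idea, assuming unrestricted anisotropic Gaussian steps (the fiber-network geometry and water-tissue interactions of the actual software are omitted; all names and parameters are illustrative):

```python
import numpy as np

def simulate_diffusion_tensor(n_molecules: int, n_steps: int,
                              step_std: tuple, dt: float) -> np.ndarray:
    """Estimate a diffusion tensor from simulated 3D random walks.

    Each molecule takes Gaussian steps with per-axis standard deviations
    `step_std`; anisotropy in the steps mimics restriction by aligned
    fibers. The tensor follows from the Einstein relation
    D_ij = <dx_i dx_j> / (2 t).
    """
    rng = np.random.default_rng(42)
    steps = rng.standard_normal((n_molecules, n_steps, 3)) * np.asarray(step_std)
    displacement = steps.sum(axis=1)     # net displacement per molecule
    t = n_steps * dt
    return displacement.T @ displacement / (2.0 * t * n_molecules)

# Stronger diffusion along z, as for water diffusing along aligned collagen fibers.
D = simulate_diffusion_tensor(2000, 500, step_std=(0.5, 0.5, 1.0), dt=1e-3)
print(np.round(D, 3))  # approximately diagonal with D_zz > D_xx = D_yy
```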
Abstract:
It is common for organizations to maintain multiple variants of a given business process, such as multiple sales processes for different products or multiple bookkeeping processes for different countries. Conventional business process modeling languages do not explicitly support the representation of such families of process variants. This gap triggered significant research efforts over the past decade, leading to an array of approaches to business process variability modeling. This survey examines existing approaches in this field based on a common set of criteria and illustrates their key concepts using a running example. The analysis shows that existing approaches are characterized by the fact that they extend a conventional process modeling language with constructs that make it able to capture customizable process models. A customizable process model represents a family of process variants in such a way that each variant can be derived by adding or deleting fragments according to configuration parameters or according to a domain model. The survey reveals an abundance of customizable process modeling languages, embodying a diverse set of constructs. In contrast, there is comparatively little tool support for analyzing and constructing customizable process models, as well as a scarcity of empirical evaluations of languages in the field.
Abstract:
Time plays an important role in norms. In this paper we start from our previously proposed classification of obligations and point out some shortcomings of the Event Calculus (EC) in representing obligations. We propose an extension of EC that avoids such shortcomings and show how to use it to model the various types of obligations.
Abstract:
Background: Multiple sclerosis (MS) is the most common cause of chronic neurologic disability beginning in early to middle adult life. Results from recent genome-wide association studies (GWAS) have substantially lengthened the list of disease loci and provide convincing evidence supporting a multifactorial and polygenic model of inheritance. Nevertheless, the knowledge of MS genetics remains incomplete, with many risk alleles still to be revealed. Methods: We used a discovery GWAS dataset (8,844 samples; 2,124 cases and 6,720 controls) and a multi-step logistic regression protocol to identify novel genetic associations. The emerging genetic profile included 350 independent markers and was used to calculate and estimate the cumulative genetic risk in an independent validation dataset (3,606 samples). Analysis of covariance (ANCOVA) was implemented to compare clinical characteristics of individuals with various degrees of genetic risk. Gene Ontology and pathway enrichment analysis was done using the DAVID functional annotation tool, the GO Tree Machine, and the Pathway-Express profiling tool. Results: In the discovery dataset, the median cumulative genetic risk (P-Hat) was 0.903 and 0.007 in the case and control groups, respectively, together with 79.9% classification sensitivity and 95.8% specificity. The identified profile shows a significant enrichment of genes involved in the immune response, cell adhesion, cell communication/signaling, nervous system development, and neuronal signaling, including ionotropic glutamate receptors, which have been implicated in the pathological mechanism driving neurodegeneration. In the validation dataset, the median cumulative genetic risk was 0.59 and 0.32 in the case and control groups, respectively, with a classification sensitivity of 62.3% and a specificity of 75.9%. No differences in disease progression or T2-lesion volumes were observed among the four levels of predicted genetic risk groups (high, medium, low, misclassified). On the other hand, a significant difference (F = 2.75, P = 0.04) was detected for age of disease onset between the affected individuals misclassified as controls (mean = 36 years) and the other three groups (high, 33.5 years; medium, 33.4 years; low, 33.1 years). Conclusions: The results are consistent with the polygenic model of inheritance. The cumulative genetic risk established using currently available genome-wide association data provides important insights into disease heterogeneity and the completeness of current knowledge in MS genetics.
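A minimal sketch of the cumulative-risk idea: fit a logistic model on a marker matrix and use the predicted case probability (the P-Hat above) as each individual's genetic risk score. All data here are simulated, and the single-pass fit is a simplification of the multi-step protocol described above:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Simulated genotypes: 2,000 individuals x 350 markers coded 0/1/2,
# with a handful of truly associated risk alleles.
n, m = 2000, 350
X = rng.integers(0, 3, size=(n, m)).astype(float)
effects = np.zeros(m)
effects[:10] = 0.4                   # 10 markers carry real signal
logit = X @ effects - effects.sum()  # centre the linear predictor near zero
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

# Discovery/validation split, mirroring the two-dataset design.
model = LogisticRegression(max_iter=1000).fit(X[:1500], y[:1500])
p_hat = model.predict_proba(X[1500:])[:, 1]  # cumulative genetic risk score

cases = y[1500:]
print("median P-Hat, cases:   ", np.median(p_hat[cases]))
print("median P-Hat, controls:", np.median(p_hat[~cases]))
```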