67 results for Nonrandom two-liquid model


Relevance:

100.00%

Publisher:

Abstract:

Lamb wave propagation in composite materials has been studied extensively since it was first observed in 1982. In this paper, we present a procedure for simulating the propagation of Lamb waves in composite laminates using a two-dimensional model in ANSYS. This is done by simulating the Lamb waves propagating along the plane of the structure in the form of a time-dependent force excitation. An 8-layered carbon fibre reinforced plastic (CFRP) laminate is modelled as a transversely isotropic and dissipative medium, and the effect of flaws is analysed with respect to defects induced between the various layers of the composite laminate. This effort is the basis for the future development of a 3D model for similar applications.
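The time-dependent force excitation used to launch Lamb waves in such finite-element models is commonly a windowed toneburst applied at a node. A minimal Python sketch, assuming a Hanning-windowed sine burst with an illustrative centre frequency and cycle count (the abstract does not specify the actual signal):

```python
import math

def toneburst(t, f_c=100e3, n_cycles=5, amplitude=1.0):
    """Hanning-windowed sine toneburst: a common time-dependent force
    excitation for Lamb-wave FE simulations. f_c and n_cycles here are
    illustrative, not values from the paper."""
    T = n_cycles / f_c  # burst duration in seconds
    if t < 0 or t > T:
        return 0.0
    window = 0.5 * (1.0 - math.cos(2.0 * math.pi * t / T))  # Hanning envelope
    return amplitude * window * math.sin(2.0 * math.pi * f_c * t)

# Sample the force history, e.g. for use as a tabular load in an FE solver.
dt = 1e-7  # time step, s
force = [toneburst(i * dt) for i in range(600)]
```

The sampled list can then be supplied to a transient solver as the amplitude table of a nodal force.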


This paper contributes to the literature on subjective well-being (SWB) by taking into account different aspects of life, called domains, such as health, financial situation, job, leisure, housing, and environment. We postulate a two-layer model where individual total SWB depends on the different subjective domain satisfactions. A distinction is made between long-term and short-term effects. The individual domain satisfactions depend on objectively measurable variables, such as income. The model is estimated using a large German panel data set.
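The two-layer structure (objective variables drive domain satisfactions, which in turn drive total SWB) can be sketched with simulated data. All coefficients below are invented for illustration and are not the paper's estimates:

```python
import random
import statistics

random.seed(0)
n = 500
# Layer 1: domain satisfactions driven by an objective variable (income).
income = [random.gauss(0.0, 1.0) for _ in range(n)]
fin_sat = [0.8 * x + random.gauss(0.0, 0.5) for x in income]   # financial domain
hlth_sat = [0.3 * x + random.gauss(0.0, 0.5) for x in income]  # health domain
# Layer 2: total SWB aggregates the domain satisfactions.
swb = [0.6 * f + 0.4 * h + random.gauss(0.0, 0.3)
       for f, h in zip(fin_sat, hlth_sat)]

def ols2(y, x1, x2):
    """Two-predictor OLS slopes via the centred 2x2 normal equations."""
    my, m1, m2 = (statistics.fmean(v) for v in (y, x1, x2))
    s11 = sum((a - m1) ** 2 for a in x1)
    s22 = sum((a - m2) ** 2 for a in x2)
    s12 = sum((a - m1) * (b - m2) for a, b in zip(x1, x2))
    s1y = sum((a - m1) * (b - my) for a, b in zip(x1, y))
    s2y = sum((a - m2) * (b - my) for a, b in zip(x2, y))
    det = s11 * s22 - s12 * s12
    return (s22 * s1y - s12 * s2y) / det, (s11 * s2y - s12 * s1y) / det

# Second-layer estimates should recover the aggregation weights.
b_fin, b_hlth = ols2(swb, fin_sat, hlth_sat)
```

The actual paper estimates both layers on panel data with long- and short-term effects; this sketch shows only the stacked-regression idea.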


Objective: The Brief Michigan Alcoholism Screening Test (bMAST) is a 10-item test derived from the 25-item Michigan Alcoholism Screening Test (MAST). It is widely used in the assessment of alcohol dependence. In the absence of previous validation studies, the principal aim of this study was to assess the validity and reliability of the bMAST as a measure of the severity of problem drinking. Method: A total of 6,594 patients (4,854 men, 1,740 women) who had been referred for alcohol-use disorders to a hospital alcohol and drug service voluntarily participated in this study. Results: An exploratory factor analysis defined a two-factor solution, consisting of Perception of Current Drinking and Drinking Consequences factors. Structural equation modeling confirmed that the fit of a nine-item, two-factor model was superior to the original one-factor model. Concurrent validity was assessed through simultaneous administration of the Alcohol Use Disorders Identification Test (AUDIT) and associations with alcohol consumption and clinically assessed features of alcohol dependence. The two-factor bMAST model showed moderate correlations with the AUDIT. The two-factor bMAST and AUDIT were similarly associated with quantity of alcohol consumption and clinically assessed dependence severity features. No differences were observed between the existing weighted scoring system and the proposed simple scoring system. Conclusions: In this study, both the existing bMAST total score and the two-factor model identified were as effective as the AUDIT in assessing problem drinking severity. There are additional advantages of employing the two-factor bMAST in the assessment and treatment planning of patients seeking treatment for alcohol-use disorders. (J. Stud. Alcohol Drugs 68: 771-779, 2007)


The effect of conversion from forest to pasture on soil carbon stocks has been intensively discussed, but few studies focus on how this land-use change affects carbon (C) distribution across soil fractions in the Amazon basin. We investigated this to a depth of 20 cm along a chronosequence of sites from native forest to three successively older pastures. We performed a physicochemical fractionation of bulk soil samples to better understand the mechanisms by which soil C is stabilized and to evaluate the contribution of each C fraction to total soil C. Additionally, we used a two-pool model to estimate the mean residence time (MRT) of the slow and active pool C in each fraction. Soil C increased with conversion from forest to pasture in the particulate organic matter (>250 μm), microaggregate (53-250 μm), and d-clay (<2 μm) fractions. The microaggregate fraction contained the highest soil C content after the conversion from forest to pasture. The C content of the d-silt fraction decreased with time since conversion to pasture. Forest-derived C remained in all fractions, with the highest concentration in the finest fractions and the largest proportion of forest-derived soil C associated with clay minerals. Results from this work indicate that microaggregate formation is sensitive to changes in management and might serve as an indicator for management-induced soil carbon changes, and that the soil C changes in the fractions are dependent on soil texture.
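In a two-pool, first-order framework like the one used here, each fraction's C is the sum of an active and a slow pool decaying exponentially, and the MRT is the reciprocal of a pool's rate constant. A sketch with hypothetical parameter values (the study's fitted values are not reproduced here):

```python
import math

def two_pool_c(t, c0, f_active, k_active, k_slow):
    """Carbon remaining at time t (years) in a two-pool, first-order
    decay model. The split and rate constants used below are hypothetical."""
    active = c0 * f_active * math.exp(-k_active * t)
    slow = c0 * (1.0 - f_active) * math.exp(-k_slow * t)
    return active + slow

def mean_residence_time(k):
    """For first-order decay, MRT is the reciprocal of the rate constant."""
    return 1.0 / k

# Hypothetical fraction: 10% active pool cycling in ~0.5 yr,
# 90% slow pool cycling in ~50 yr.
mrt_active = mean_residence_time(2.0)
mrt_slow = mean_residence_time(0.02)
```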


The uncertainty associated with how projected climate change will affect global C cycling could have a large impact on predictions of soil C stocks. The purpose of our study was to determine how various soil decomposition and chemistry characteristics relate to the temperature sensitivity of soil organic matter (SOM). We accomplished this objective using long-term soil incubations at three temperatures (15, 25, and 35°C) and pyrolysis molecular beam mass spectrometry (py-MBMS) on 12 soils from 6 sites along a mean annual temperature (MAT) gradient (2–25.6°C). The Q10 values calculated from the CO2 respired during a long-term incubation using the Q10-q method showed decomposition of the more resistant fraction to be more temperature sensitive, with a Q10-q of 1.95 ± 0.08 for the labile fraction and a Q10-q of 3.33 ± 0.04 for the more resistant fraction. We compared the fit of soil respiration data between a two-pool model (active and slow) with first-order kinetics and a three-pool model, and found that the two- and three-pool models fit the data statistically equally well. The three-pool model changed the size and rate constant for the more resistant pool. The size of the active pool in these soils, calculated using the two-pool model, increased with incubation temperature and ranged from 0.1 to 14.0% of initial soil organic C. Sites with an intermediate MAT and the lowest C/N ratio had the largest active pool. Pyrolysis molecular beam mass spectrometry showed declines in carbohydrates with conversion from grassland to wheat cultivation, and a greater amount of protected carbohydrates in allophanic soils, which may have led to the differences found between the total amount of CO2 respired, the size of the active pool, and the Q10-q values of the soils.
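The classic Q10 ratio underlying these temperature-sensitivity estimates (the Q10-q regression method itself is more involved) is simple to compute:

```python
def q10(rate_low, rate_high, t_low, t_high):
    """Q10 = (R_high / R_low) ** (10 / (T_high - T_low)): the factor by
    which a rate changes per 10 degC increase in temperature."""
    return (rate_high / rate_low) ** (10.0 / (t_high - t_low))

# A respiration rate that doubles between 15 and 25 degC has Q10 = 2;
# the same doubling spread over 15 to 35 degC corresponds to Q10 = sqrt(2).
```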


The business value of IT has mostly been studied in developed countries, but because most IT investment in developing countries is derived from external sources, its influence on business value is likely to differ. We test this notion using a two-layer model: we examine the impact of IT investments on firm processes, and the relationship of these processes to firm performance, in a developing country. Our findings suggest that investment in different areas of IT positively relates to improvements in intermediate business processes, and that these intermediate business processes positively relate to the overall financial performance of firms in a developing country.


This book provides a much-needed international dimension on the payoffs of information technology investments. First, the bulk of the research on the impact of information technology investments has been undertaken in developed economies, mainly the United States; this research provides an alternative, developing-country perspective on how information technology investments impact organizations. Second, there has been much debate and controversy over how information technology investment payoffs should be measured. This research uses an innovative two-stage model, proposing that information technology investments first impact processes, and that improvements in those processes then impact performance. In doing so, it considers sectors of information technology investment rather than treating it as a single category. Finally, almost all prior studies in this area have considered only the tangible impact of information technology investments. This research proposes that the benefits can only be properly understood by looking at both the tangible and intangible benefits.


Extreme cold and heat waves, characterised by a number of cold or hot days in succession, place a strain on people’s cardiovascular and respiratory systems. The increase in deaths due to these waves may be greater than that predicted by extreme temperatures alone. We examined cold and heat waves in 99 US cities over 14 years (1987–2000) and investigated how the risk of death depended on the temperature threshold used to define a wave, and on a wave’s timing, duration and intensity. We defined cold and heat waves as temperatures below a cold threshold or above a heat threshold for two or more days. We tried five cold thresholds, using the first to fifth percentiles of temperature, and five heat thresholds, using the ninety-fifth to ninety-ninth percentiles. The extra wave effects were estimated using a two-stage model to ensure that they were estimated after removing the general effects of temperature. The increases in deaths associated with cold waves were generally small and not statistically significant, and there was even evidence of a decreased risk during the coldest waves. Heat waves generally increased the risk of death, particularly for the hottest heat threshold. Cold waves of a colder intensity or longer duration were not more dangerous. Cold waves earlier in the cool season were more dangerous, as were heat waves earlier in the warm season. In general there was no increased risk of death during cold waves above the known increased risk associated with cold temperatures. Cold or heat waves earlier in the cool or warm season may be more dangerous because of a build-up of the susceptible pool or a lack of preparedness for cold or hot temperatures.
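The wave definition used here (two or more consecutive days beyond a percentile threshold) is straightforward to implement. The sketch below is a generic illustration, not the authors' code:

```python
def find_waves(temps, threshold, min_days=2, hot=True):
    """Return (start, end) index pairs for runs of at least min_days
    consecutive days above (hot=True) or below (hot=False) a threshold,
    matching the two-or-more-day wave definition described above."""
    waves, start = [], None
    for i, t in enumerate(temps):
        extreme = t > threshold if hot else t < threshold
        if extreme and start is None:
            start = i  # a candidate wave begins
        elif not extreme and start is not None:
            if i - start >= min_days:
                waves.append((start, i - 1))  # run was long enough
            start = None
    if start is not None and len(temps) - start >= min_days:
        waves.append((start, len(temps) - 1))  # run reaches the series end
    return waves
```

In the study the thresholds are city-specific temperature percentiles; here the threshold is simply passed in.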


Carbon nanotubes (CNTs) have excellent electrical, mechanical and electromechanical properties. When CNTs are incorporated into polymers, the result is electrically conductive composites with high electrical conductivity at very low CNT content (often below 1 wt% CNT). Because their electrical properties change under mechanical load, carbon nanotube/polymer composites have attracted significant research interest, especially for their potential application in in-situ monitoring of stress distribution and active control of strain sensing in composite structures, or as strain sensors. To successfully develop novel devices for such applications, some of the major challenges that need to be overcome include: in-depth understanding of structure-electrical conductivity relationships, the response of the composites under changing environmental conditions, and the piezoresistivity of different types of carbon nanotube/polymer sensing devices. In this thesis, the direct current (DC) and alternating current (AC) conductivity of CNT-epoxy composites was investigated. Details of microstructure obtained by scanning electron microscopy were used to link observed electrical properties with structure using equivalent circuit modelling. The role of polymer coatings on macro- and micro-level electrical conductivity was investigated using atomic force microscopy. Thermal analysis and Raman spectroscopy were used to evaluate the heat flow and deformation of carbon nanotubes embedded in the epoxy, respectively, and related to temperature-induced resistivity changes. A comparative assessment of piezoresistivity was conducted using randomly mixed carbon nanotube/epoxy composites, and new-concept epoxy- and polyurethane-coated carbon nanotube films. The results indicate that equivalent circuit modelling is a reliable technique for estimating values of the resistance and capacitive components in linear, low aspect ratio CNT-epoxy composites.
Using this approach, the dominant role of tunnelling resistance in determining the electrical conductivity was confirmed, a result further verified using conductive atomic force microscopy analysis. Randomly mixed CNT-epoxy composites were found to be highly sensitive to mechanical strain and temperature variation compared to polymer-coated CNT films. In the vicinity of the glass transition temperature, the CNT-epoxy composites exhibited pronounced resistivity peaks. Thermal and Raman spectroscopy analyses indicated that this phenomenon can be attributed to physical aging of the epoxy matrix phase and structural rearrangement of the conductive network induced by matrix expansion. The resistivity of polymer-coated CNT composites was mainly dominated by the intrinsic resistivity of the CNTs and the CNT junctions, and their linear, weakly temperature-sensitive response can be described by a modified Luttinger liquid model. Piezoresistivity of the polymer-coated sensors was dominated by break-up of the conducting carbon nanotube network and the consequent degradation of nanotube-nanotube contacts, while that of the randomly mixed CNT-epoxy composites was determined by tunnelling resistance between neighbouring CNTs. This thesis has demonstrated that it is possible to use microstructure information to develop equivalent circuit models that are capable of representing the electrical conductivity of CNT/epoxy composites accurately. New designs of carbon nanotube based sensing devices, utilising carbon nanotube films as the key functional element, can be used to overcome the high temperature sensitivity of randomly mixed CNT/polymer composites without compromising the desired high strain sensitivity. This concept can be extended to develop large-area intelligent CNT-based coatings and targeted weak-point-specific strain sensors for use in structural health monitoring.
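The basic element in such equivalent-circuit fits of AC conductivity is a resistance (for example a tunnelling contact) in parallel with a capacitance. A sketch with illustrative component values, not fitted values from the thesis:

```python
import math

def parallel_rc_impedance(r, c, freq):
    """Complex impedance of a resistor in parallel with a capacitor,
    the building block of equivalent-circuit models for CNT-epoxy
    AC conductivity. Component values used below are illustrative."""
    omega = 2.0 * math.pi * freq
    z_c = 1.0 / (1j * omega * c)  # capacitor impedance
    return (r * z_c) / (r + z_c)

# At low frequency the element looks resistive; at high frequency the
# capacitive path dominates and the impedance magnitude drops.
z_low = parallel_rc_impedance(1e6, 1e-12, 1.0)
z_high = parallel_rc_impedance(1e6, 1e-12, 1e9)
```

Fitting networks of such elements to measured impedance spectra is what allows the resistance and capacitance values to be estimated from the data.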


The standard approach to tax compliance applies the economics-of-crime methodology pioneered by Becker (1968): in its first application, due to Allingham and Sandmo (1972), it models the behaviour of agents as a decision involving a choice of the extent of their income to report to tax authorities, given a certain institutional environment represented by parameters such as the probability of detection and the penalties applied in the event the agent is caught. While this basic framework yields important insights into tax compliance behaviour, it has some critical limitations. Specifically, it indicates a level of compliance that is significantly below what is observed in the data. This thesis revisits the original framework with a view towards addressing this issue, and examining the political economy implications of tax evasion for progressivity in the tax structure. The approach followed involves building a macroeconomic, dynamic equilibrium model for the purpose of examining these issues, using a step-wise model building procedure that starts with some very simple variations of the basic Allingham and Sandmo construct, which are eventually integrated into a dynamic general equilibrium overlapping generations framework with heterogeneous agents. One of the variations involves incorporating the Allingham and Sandmo construct into a two-period model of a small open economy of the type originally attributed to Fisher (1930). A further variation of this simple construct involves allowing agents to initially decide whether to evade taxes or not. In the event they decide to evade, the agents then have to decide the extent of income or wealth they wish to under-report.
We find that the ‘evade or not’ assumption has strikingly different and more realistic implications for the extent of evasion, and demonstrate that it is a more appropriate modeling strategy in the context of macroeconomic models, which are essentially dynamic in nature, and involve consumption smoothing across time and across various states of nature. Specifically, since deciding to undertake tax evasion impacts on the consumption smoothing ability of the agent by creating two states of nature in which the agent is ‘caught’ or ‘not caught’, there is a possibility that their utility under certainty, when they choose not to evade, is higher than the expected utility obtained when they choose to evade. Furthermore, the simple two-period model incorporating an ‘evade or not’ choice can be used to demonstrate some strikingly different political economy implications relative to its Allingham and Sandmo counterpart. In variations of the two models that allow for voting on the tax parameter, we find that agents typically choose to vote for a high degree of progressivity by choosing the highest available tax rate from the menu of choices available to them. There is, however, a small range of inequality levels for which agents in the ‘evade or not’ model vote for a relatively low value of the tax rate. The final steps in the model building procedure involve grafting the two-period models with a political economy choice into a dynamic overlapping generations setting with more general, non-linear tax schedules and a ‘cost-of evasion’ function that is increasing in the extent of evasion. Results based on numerical simulations of these models show further improvement in the model’s ability to match empirically plausible levels of tax evasion. 
In addition, the differences between the political economy implications of the ‘evade or not’ version of the model and its Allingham and Sandmo counterpart are now very striking; there is now a large range of values of the inequality parameter for which agents in the ‘evade or not’ model vote for a low degree of progressivity. This is because, in the ‘evade or not’ version of the model, low values of the tax rate encourage a large number of agents to choose the ‘not-evade’ option, so that the redistributive mechanism is more ‘efficient’ relative to situations in which tax rates are high. Some further implications of the models of this thesis relate to whether variations in the level of inequality, and parameters such as the probability of detection and penalties for tax evasion, matter for the political economy results. We find that (i) the political economy outcomes for the tax rate are quite insensitive to changes in inequality, and (ii) the voting outcomes change in non-monotonic ways in response to changes in the probability of detection and penalty rates. Specifically, the model suggests that changes in inequality should not matter, although the political outcome for the tax rate for a given level of inequality is conditional on whether there is a large or small extent of evasion in the economy. We conclude that further theoretical research into macroeconomic models of tax evasion is required to identify the structural relationships underpinning the link between inequality and redistribution in the presence of tax evasion. The models of this thesis provide a necessary first step in that direction.
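The ‘evade or not’ choice can be illustrated with a stylised Allingham-Sandmo payoff under log utility: the agent compares the certain utility of full compliance with the best expected utility attainable by under-reporting. All numbers below are invented for illustration; the thesis's calibration is not reproduced here:

```python
import math

def utility(consumption):
    """Log utility: risk-averse, so the uncertainty created by possible
    detection is itself costly to the agent."""
    return math.log(consumption)

def expected_utility_evade(y, declared, tax, p, penalty):
    """Stylised Allingham-Sandmo payoff: tax is paid on declared income;
    with probability p the agent is caught and also pays a penalty
    proportional to undeclared income."""
    undeclared = y - declared
    c_not_caught = y - tax * declared
    c_caught = y - tax * declared - penalty * undeclared
    return (1.0 - p) * utility(c_not_caught) + p * utility(c_caught)

def best_evasion(y, tax, p, penalty, steps=100):
    """The 'evade or not' comparison: best expected utility over a grid
    of declared incomes versus the certain utility of full compliance."""
    best = max(expected_utility_evade(y, y * s / steps, tax, p, penalty)
               for s in range(steps + 1))
    comply = utility(y * (1.0 - tax))
    return best, comply
```

With a low detection probability the evading optimum beats compliance; with harsh enough enforcement the grid optimum coincides with full declaration, so a risk-averse agent simply chooses not to evade.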


Exponential growth of genomic data in the last two decades has made manual analyses impractical for all but trial studies. As genomic analyses have become more sophisticated, and move toward comparisons across large datasets, computational approaches have become essential. One of the most important biological questions is to understand the mechanisms underlying gene regulation. Genetic regulation is commonly investigated and modelled through the use of transcriptional regulatory network (TRN) structures. These model the regulatory interactions between two key components: transcription factors (TFs) and the target genes (TGs) they regulate. Transcriptional regulatory networks have proven to be invaluable scientific tools in Bioinformatics. When used in conjunction with comparative genomics, they have provided substantial insights into the evolution of regulatory interactions. Current approaches to regulatory network inference, however, omit two additional key entities: promoters and transcription factor binding sites (TFBSs). In this study, we attempted to explore the relationships among these regulatory components in bacteria. Our primary goal was to identify relationships that can assist in reducing the high false positive rates associated with transcription factor binding site predictions and thereby enhance the reliability of the inferred transcription regulatory networks. In our preliminary exploration of relationships between the key regulatory components in Escherichia coli transcription, we discovered a number of potentially useful features. The combination of location score and sequence dissimilarity scores increased de novo binding site prediction accuracy by 13.6%. Another important observation concerned the relationship between transcription factors, grouped by their regulatory role, and the corresponding promoter strength.
Our study of E. coli σ70 promoters found support at the 0.1 significance level for our hypothesis that weak promoters are preferentially associated with activator binding sites to enhance gene expression, whilst strong promoters have more repressor binding sites to repress or inhibit gene transcription. Although the observations were specific to σ70, they nevertheless strongly encourage additional investigations when more experimentally confirmed data are available. In our preliminary exploration of relationships between the key regulatory components in E. coli transcription, we discovered a number of potentially useful features, some of which proved successful in reducing the number of false positives when applied to re-evaluate binding site predictions. Of chief interest was the relationship observed between promoter strength and TFs with respect to their regulatory role. Based on the common assumption that promoter homology positively correlates with transcription rate, we hypothesised that weak promoters would have more transcription factors that enhance gene expression, whilst strong promoters would have more repressor binding sites. The t-tests assessed for E. coli σ70 promoters returned a p-value of 0.072, which at the 0.1 significance level suggested support for our (alternative) hypothesis; albeit this trend may only be present for promoters where corresponding TFBSs are either all repressors or all activators. Nevertheless, such suggestive results strongly encourage additional investigations when more experimentally confirmed data become available. Much of the remainder of the thesis concerns a machine learning study of binding site prediction, using the SVM and kernel methods, principally the spectrum kernel.
Spectrum kernels have been successfully applied in previous studies of protein classification [91, 92], as well as to the related problem of promoter prediction [59], and we have here successfully applied the technique to refining TFBS predictions. The advantages provided by the SVM classifier were best seen in 'moderately' conserved transcription factor binding sites, as represented by our E. coli CRP case study. Inclusion of additional position feature attributes further increased accuracy by 9.1%, but more notable was the considerable decrease in false positive rate from 0.8 to 0.5 while retaining 0.9 sensitivity. Improved prediction of transcription factor binding sites is in turn extremely valuable in improving the inference of regulatory relationships, a problem notoriously prone to false positive predictions. Here, the number of false regulatory interactions inferred using the conventional two-component model was substantially reduced when we integrated de novo transcription factor binding site predictions as an additional criterion for acceptance, in a case study of inference in the Fur regulon. This initial work was extended to a comparative study of the iron regulatory system across 20 Yersinia strains. This work revealed interesting, strain-specific differences, especially between pathogenic and non-pathogenic strains. Such differences were made clear through interactive visualisations using the TRNDiff software developed as part of this work, and would have remained undetected using conventional methods. This approach led to the nomination of the Yfe iron-uptake system as a candidate for further wet-lab experimentation, due to its potential active functionality in non-pathogens and its known participation in full virulence of the bubonic plague strain. Building on this work, we introduced novel structures we have labelled 'regulatory trees', inspired by the phylogenetic tree concept.
Instead of using gene or protein sequence similarity, the regulatory trees were constructed based on the number of similar regulatory interactions. While the common phylogenetic trees convey information regarding changes in gene repertoire, which we might regard as analogous to 'hardware', the regulatory tree informs us of the changes in regulatory circuitry, in some respects analogous to 'software'. In this context, we explored the 'pan-regulatory network' for the Fur system: the entire set of regulatory interactions found for the Fur transcription factor across a group of genomes. In the pan-regulatory network, emphasis is placed on how the regulatory network for each target genome is inferred from multiple sources instead of a single source, as is the common approach. The benefit of using multiple reference networks is a more comprehensive survey of the relationships, and increased confidence in the regulatory interactions predicted. In the present study, we distinguish between relationships found across the full set of genomes, the 'core-regulatory-set', and interactions found only in a subset of the genomes explored, the 'sub-regulatory-set'. We found nine Fur target gene clusters present across the four genomes studied; this core set potentially identifies basic regulatory processes essential for survival. Species-level differences are seen at the sub-regulatory-set level; for example, the known virulence factors YbtA and PchR were found in Y. pestis and P. aeruginosa respectively, but were not present in either E. coli or B. subtilis. Such factors, and the iron-uptake systems they regulate, are ideal candidates for wet-lab investigation to determine whether or not they are pathogen-specific. In this study, we employed a broad range of approaches to address our goals and assessed these methods using the Fur regulon as our initial case study.
We identified a set of promising feature attributes; demonstrated their success in increasing transcription factor binding site prediction specificity while retaining sensitivity, and showed the importance of binding site predictions in enhancing the reliability of regulatory interaction inferences. Most importantly, these outcomes led to the introduction of a range of visualisations and techniques, which are applicable across the entire bacterial spectrum and can be utilised in studies beyond the understanding of transcriptional regulatory networks.
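The spectrum kernel central to the machine learning chapters reduces to counting shared k-mers between two sequences. A minimal generic sketch (not the thesis implementation):

```python
from collections import Counter

def spectrum(seq, k):
    """Count vector of all overlapping k-mers in a sequence."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def spectrum_kernel(s1, s2, k=3):
    """Spectrum kernel: inner product of the two k-mer count vectors.
    This is the similarity a spectrum-kernel SVM operates on."""
    a, b = spectrum(s1, k), spectrum(s2, k)
    return sum(count * b[kmer] for kmer, count in a.items())
```

An SVM then classifies candidate binding sites using a Gram matrix of these kernel values, optionally augmented with extra feature attributes such as the position features mentioned above.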


This study compared the performance of a local and three robust optimality criteria in terms of the standard error for a one-parameter and a two-parameter nonlinear model with uncertainty in the parameter values. The designs were also compared in conditions where there was misspecification in the prior parameter distribution. The impact of different correlation between parameters on the optimal design was examined in the two-parameter model. The designs and standard errors were solved analytically whenever possible and numerically otherwise.
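For a one-parameter nonlinear model, a locally optimal design concentrates observations where the sensitivity of the response to the parameter is largest, since that minimises the standard error of the estimate. A sketch using an exponential-decay model as a stand-in (the abstract does not specify the study's actual models):

```python
import math

def sensitivity(x, theta):
    """|d/dtheta exp(-theta * x)|: parameter sensitivity of a
    one-parameter exponential-decay model (an illustrative stand-in)."""
    return abs(-x * math.exp(-theta * x))

def local_d_optimal(theta, grid):
    """Locally D-optimal single design point: maximise the squared
    sensitivity, which minimises the asymptotic SE of the estimate."""
    return max(grid, key=lambda x: sensitivity(x, theta) ** 2)

grid = [i / 100.0 for i in range(1, 501)]
x_star = local_d_optimal(1.0, grid)  # analytic optimum is x = 1/theta
```

Robust criteria of the kind compared in the study replace the single prior guess `theta` with an average or worst case over a prior distribution of parameter values.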


Differing parental considerations for girls and boys in households are a primary cause of the gender gap in school enrolment and educational attainment in developing countries, particularly in Sub-Saharan Africa and South Asia. While a number of studies have focused on the inequality of educational opportunities in South Asia, little is known about Bhutan. This study uses recent household expenditure data from the Bhutan Living Standard Survey to evaluate the gender gap in the allocation of resources for schooling. The findings, based on cross-sectional as well as household fixed-effect approaches, suggest that girls are less likely to enrol in school but are not allocated fewer resources once they are enrolled.


The resection of DNA double-strand breaks (DSBs) to generate ssDNA tails is a pivotal event in the cellular response to these breaks. In the two-step model of resection, primarily elucidated in yeast, initial resection by Mre11-CtIP is followed by extensive resection by two distinct pathways involving Exo1 or BLM/WRN-Dna2. However, resection pathways and their exact contributions in humans in vivo are not as clearly worked out as in yeast. Here, we examined the contribution of Exo1 to DNA end resection in humans in vivo in response to ionizing radiation (IR) and its relationship with other resection pathways (Mre11-CtIP or BLM/WRN). We find that Exo1 plays a predominant role in resection in human cells along with an alternate pathway dependent on WRN. While Mre11 and CtIP stimulate resection in human cells, they are not absolutely required for this process and Exo1 can function in resection even in the absence of Mre11-CtIP. Interestingly, the recruitment of Exo1 to DNA breaks appears to be inhibited by the NHEJ protein Ku80, and the higher level of resection that occurs upon siRNA-mediated depletion of Ku80 is dependent on Exo1. In addition, Exo1 may be regulated by 53BP1 and Brca1, and the restoration of resection in BRCA1-deficient cells upon depletion of 53BP1 is dependent on Exo1. Finally, we find that Exo1-mediated resection facilitates a transition from ATM- to ATR-mediated cell cycle checkpoint signaling. Our results identify Exo1 as a key mediator of DNA end resection and DSB repair and damage signaling decisions in human cells.


LiFePO4 is a commercially available battery material with good theoretical discharge capacity, excellent cycle life and increased safety compared with competing Li-ion chemistries. It has been the focus of considerable experimental and theoretical scrutiny in the past decade, resulting in LiFePO4 cathodes that perform well at high discharge rates. This scrutiny has raised several questions about the behaviour of LiFePO4 material during charge and discharge. In contrast to many other battery chemistries that intercalate homogeneously, LiFePO4 can phase-separate into lithium-rich and lithium-poor phases, with intercalation proceeding by advancing an interface between these two phases. The main objective of this thesis is to construct mathematical models of LiFePO4 cathodes that can be validated against experimental discharge curves, in an attempt to understand some of the multi-scale dynamics of LiFePO4 cathodes that can be difficult to determine experimentally. The first section of this thesis constructs a three-scale mathematical model of LiFePO4 cathodes that uses a simple Stefan problem (which has been used previously in the literature) to describe the assumed phase change. LiFePO4 crystals have been observed agglomerating in cathodes to form a porous collection of crystals, and this morphology motivates the use of three size scales in the model. The multi-scale model developed validates well against experimental data, and this validated model is then used to examine the role of manufacturing parameters (including the agglomerate radius) on battery performance. The remainder of the thesis is concerned with investigating phase-field models as a replacement for the aforementioned Stefan problem. Phase-field models have recently been used for LiFePO4 and are a far more accurate representation of experimentally observed crystal-scale behaviour.
They are based around the Cahn-Hilliard-reaction (CHR) IBVP, a fourth-order PDE with electrochemical (flux) boundary conditions that is very stiff and possesses multiple time and space scales. Numerical solutions to the CHR IBVP can be difficult to compute, and hence a least-squares based Finite Volume Method (FVM) is developed for discretising both the full CHR IBVP and the more traditional Cahn-Hilliard IBVP. Phase-field models are subject to two main physicality constraints, and the numerical scheme presented performs well under these constraints. This least-squares based FVM is then used to simulate the discharge of individual crystals of LiFePO4 in two dimensions. This discharge is subject to isotropic Li+ diffusion, based on experimental evidence that suggests the normally orthotropic transport of Li+ in LiFePO4 may become more isotropic in the presence of lattice defects. Numerical investigation shows that two-dimensional Li+ transport results in crystals that phase-separate, even at very high discharge rates. This is very different from results shown in the literature, where phase separation in LiFePO4 crystals is suppressed during discharge with orthotropic Li+ transport. Finally, the three-scale cathodic model used at the beginning of the thesis is modified to simulate modern, high-rate LiFePO4 cathodes. High-rate cathodes typically do not contain (large) agglomerates, and therefore a two-scale model is developed. The Stefan problem used previously is also replaced with the phase-field models examined in earlier chapters. The results from this model are then compared with experimental data and fit poorly, though a significant parameter regime could not be investigated numerically. Many-particle effects, however, are evident in the simulated discharges, which match the conclusions of recent literature.
These effects result in crystals that are subject to local currents very different from the discharge rate applied to the cathode, which impacts the phase-separating behaviour of the crystals and raises questions about the validity of using cathodic-scale experimental measurements in order to determine crystal-scale behaviour.
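The Stefan-problem picture used in the first part of the thesis can be caricatured as a shrinking core: a sharp phase boundary moves inward as lithium inserted at constant current fills the shell behind it. The parameter values below are rough illustrations, not the thesis's fitted values:

```python
import math

def interface_radius(t, R, c_max, current):
    """Shrinking-core (Stefan-type) caricature of LiFePO4 discharge:
    lithium inserted at a constant molar rate fills the shell between
    the phase boundary at radius r and the crystal surface at R, so
    (4/3)*pi*(R^3 - r^3)*c_max = current * t."""
    filled = current * t  # moles of Li inserted by time t
    core = R ** 3 - 3.0 * filled / (4.0 * math.pi * c_max)
    return max(core, 0.0) ** (1.0 / 3.0)

R = 1e-6        # crystal radius, m (illustrative)
c_max = 2.29e4  # Li site concentration, mol/m^3 (approximate for LiFePO4)
I = 1e-16       # insertion rate, mol/s (hypothetical)
t_full = (4.0 / 3.0) * math.pi * R ** 3 * c_max / I  # time to full lithiation
```

The phase-field (CHR) models of the later chapters replace this sharp, imposed interface with a diffuse one whose motion emerges from the Cahn-Hilliard dynamics.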