931 results for Mathematical methods


Relevance:

20.00%

Publisher:

Abstract:

This work has led to the development of empirical mathematical models to quantitatively predict the changes in morphology of osteocyte-like cell lines (MLO-Y4) in culture. MLO-Y4 cells were cultured at low density and the changes in morphology recorded over 11 hours. Cell area and three shape features (aspect ratio, circularity and solidity) were then determined using widely accepted image analysis software (ImageJ). Based on the data obtained from the imaging analysis, mathematical models were developed using the non-linear regression method. The developed mathematical models accurately predict the morphology of MLO-Y4 cells for different culture times and can, therefore, be used as a reference model for analyzing MLO-Y4 cell morphology changes within various biological/mechanical studies, as necessary.
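As a concrete illustration of the modelling step, the sketch below fits a non-linear model of cell area against culture time by least-squares regression. The functional form, parameter names and data are hypothetical (the abstract does not specify the model), so this is a minimal sketch of the approach, not the study's actual model.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical model form: cell area relaxing exponentially towards a plateau
# over culture time. The study's actual functional form is not given here.
def area_model(t, a0, a_inf, tau):
    return a_inf + (a0 - a_inf) * np.exp(-t / tau)

# Synthetic illustration data: hourly "measurements" over 11 hours.
t = np.arange(0, 12, dtype=float)                  # hours
true_area = area_model(t, 400.0, 900.0, 3.0)       # ground-truth curve
rng = np.random.default_rng(0)
measured = true_area + rng.normal(0.0, 10.0, t.size)

# Non-linear least-squares fit of the model parameters
params, _ = curve_fit(area_model, t, measured, p0=(300.0, 800.0, 2.0))
a0_fit, a_inf_fit, tau_fit = params
```

Once fitted, the model can be evaluated at any culture time as a reference curve for comparison with treated cells.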


We develop a fast Poisson preconditioner for the efficient numerical solution of a class of two-sided nonlinear space fractional diffusion equations in one and two dimensions using the method of lines. Using the shifted Grünwald finite difference formulas to approximate the two-sided (i.e. the left and right Riemann-Liouville) fractional derivatives, the resulting semi-discrete nonlinear systems have dense Jacobian matrices owing to the non-local property of fractional derivatives. We employ a modern initial value problem solver utilising backward differentiation formulas and Jacobian-free Newton-Krylov methods to solve these systems. For efficient performance of the Jacobian-free Newton-Krylov method it is essential to apply an effective preconditioner to accelerate the convergence of the linear iterative solver. The key contribution of our work is to generalise the fast Poisson preconditioner, widely used for integer-order diffusion equations, so that it applies to the two-sided space fractional diffusion equation. A number of numerical experiments are presented to demonstrate the effectiveness of the preconditioner and the overall solution strategy.
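A minimal sketch of the spatial approximation named above: the shifted Grünwald formula approximates the left Riemann-Liouville derivative at x_i by h^(-alpha) * sum_{k=0}^{i+1} g_k u(x_{i-k+1}), with weights g_k = (-1)^k C(alpha, k). The test function and grid below are illustrative choices, not taken from the paper.

```python
import numpy as np
from math import gamma

def grunwald_weights(alpha, n):
    # g_k = (-1)^k * binom(alpha, k), computed via the stable recurrence
    # g_k = g_{k-1} * (1 - (alpha + 1)/k), with g_0 = 1.
    g = np.empty(n + 1)
    g[0] = 1.0
    for k in range(1, n + 1):
        g[k] = g[k - 1] * (1.0 - (alpha + 1.0) / k)
    return g

# Illustrative check on u(x) = x^2, whose left Riemann-Liouville derivative
# of order alpha is Gamma(3)/Gamma(3 - alpha) * x^(2 - alpha).
alpha, N = 1.5, 2000
h = 1.0 / N
x = np.linspace(0.0, 1.0, N + 1)
u = x ** 2
g = grunwald_weights(alpha, N + 1)

i = N // 2                                   # evaluate at x = 0.5
approx = h ** (-alpha) * sum(g[k] * u[i - k + 1] for k in range(i + 2))
exact = gamma(3) / gamma(3 - alpha) * x[i] ** (2 - alpha)
```

Because every node from the boundary up to x_{i+1} contributes to the sum, the assembled discrete operator is dense, which is the source of the dense Jacobians discussed in the abstract.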


The method of lines is a standard method for advancing the solution of partial differential equations (PDEs) in time. In one sense, the method applies equally well to space-fractional PDEs as it does to integer-order PDEs. However, there is a significant challenge when solving space-fractional PDEs in this way, owing to the non-local nature of the fractional derivatives. Each equation in the resulting semi-discrete system involves contributions from every spatial node in the domain. This has important consequences for the efficiency of the numerical solver, especially when the system is large. First, the Jacobian matrix of the system is dense, and hence methods that avoid the need to form and factorise this matrix are preferred. Second, since the cost of evaluating the discrete equations is high, it is essential to minimise the number of evaluations required to advance the solution in time. In this paper, we show how an effective preconditioner is essential for improving the efficiency of the method of lines for solving a quite general two-sided, nonlinear space-fractional diffusion equation. A key contribution is to show how to construct suitable banded approximations to the system Jacobian for preconditioning purposes that permit high orders and large stepsizes to be used in the temporal integration, without requiring dense matrices to be formed. The results of numerical experiments are presented that demonstrate the effectiveness of this approach.
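The idea of a banded preconditioner for a dense fractional-diffusion Jacobian can be sketched as follows. The matrix built here is a generic two-sided Grünwald-type Toeplitz operator standing in for the system Jacobian; the bandwidth, problem size and fractional order are illustrative assumptions, not the paper's choices.

```python
import numpy as np
from scipy.linalg import toeplitz

def grunwald_weights(alpha, n):
    # shifted Grünwald weights via the recurrence g_k = g_{k-1}*(1-(alpha+1)/k)
    g = np.empty(n + 1)
    g[0] = 1.0
    for k in range(1, n + 1):
        g[k] = g[k - 1] * (1.0 - (alpha + 1.0) / k)
    return g

def banded_approximation(A, p):
    # keep only entries within p diagonals of the main diagonal
    i, j = np.indices(A.shape)
    return np.where(np.abs(i - j) <= p, A, 0.0)

n, alpha = 200, 1.8
g = grunwald_weights(alpha, n)

# dense shifted-Grünwald Toeplitz operator (left-sided part) and its
# symmetrised "two-sided" combination, standing in for a dense Jacobian
col = g[1:n + 1]
row = np.concatenate(([g[1], g[0]], np.zeros(n - 2)))
B = toeplitz(col, row)
A = B + B.T

M = banded_approximation(A, 10)              # banded preconditioner
cond_A = np.linalg.cond(A)
cond_pre = np.linalg.cond(np.linalg.solve(M, A))
```

Because the Grünwald weights decay away from the diagonal, the band captures most of the operator, so the preconditioned matrix is far better conditioned than the original while M itself can be factorised cheaply in banded form.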


We consider a two-dimensional space-fractional reaction-diffusion equation with a fractional Laplacian operator and homogeneous Neumann boundary conditions. The finite volume method is used with the matrix transfer technique of Ilić et al. (2006) to discretise in space, yielding a system of equations in which the action of a matrix function must be computed at each timestep. Rather than form this matrix function explicitly, we use Krylov subspace techniques to approximate its action. Specifically, we apply the Lanczos method, after a suitable transformation of the problem to recover symmetry. To improve the convergence of this method, we utilise a preconditioner that deflates the smallest eigenvalues from the spectrum. We demonstrate the efficiency of our approach for a fractional Fisher’s equation on the unit disk.
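The Krylov idea described above, approximating the action of a matrix function on a vector from a small subspace, can be sketched with the Lanczos method for a symmetric matrix. The choice f(x) = exp(-x), the 1D Laplacian test matrix and the subspace dimension are illustrative assumptions, not the paper's setup (which involves a fractional power and a deflation preconditioner).

```python
import numpy as np

def lanczos_fab(A, b, f, m):
    # m-step Lanczos: f(A) b is approximated by ||b|| * V_m f(T_m) e_1
    # for symmetric A, where T_m is the tridiagonal Lanczos matrix.
    n = b.size
    V = np.zeros((n, m))
    a = np.zeros(m)          # diagonal of T_m
    beta = np.zeros(m - 1)   # off-diagonal of T_m
    b_norm = np.linalg.norm(b)
    V[:, 0] = b / b_norm
    for j in range(m):
        w = A @ V[:, j]
        if j > 0:
            w -= beta[j - 1] * V[:, j - 1]
        a[j] = V[:, j] @ w
        w -= a[j] * V[:, j]
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            V[:, j + 1] = w / beta[j]
    T = np.diag(a) + np.diag(beta, 1) + np.diag(beta, -1)
    evals, U = np.linalg.eigh(T)
    return b_norm * (V @ (U @ (f(evals) * U[0, :])))

# symmetric test matrix: 1D discrete Laplacian; the matrix-function action
# mirrors the kind of computation needed after the matrix transfer step
n = 100
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
approx = lanczos_fab(A, b, lambda x: np.exp(-x), 25)

# dense reference via a full eigendecomposition
evals, U = np.linalg.eigh(A)
exact = U @ (np.exp(-evals) * (U.T @ b))
```

Only the small m-by-m tridiagonal matrix is diagonalised; the large matrix A is touched solely through matrix-vector products, which is what makes the approach attractive for fine discretisations.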


PURPOSE: To test the reliability of Timed Up and Go Tests (TUGTs) in cardiac rehabilitation (CR) and compare TUGTs to the 6-Minute Walk Test (6MWT) for outcome measurement. METHODS: Sixty-one of 154 consecutive community-based CR patients were prospectively recruited. Subjects undertook repeated TUGTs and 6MWTs at the start of CR (start-CR), postdischarge from CR (post-CR), and 6 months postdischarge from CR (6 months post-CR). The main outcome measurements were TUGT time (TUGTT) and 6MWT distance (6MWD). RESULTS: Mean (SD) TUGTT1 and TUGTT2 at the 3 assessments were 6.29 (1.30) and 5.94 (1.20); 5.81 (1.22) and 5.53 (1.09); and 5.39 (1.60) and 5.01 (1.28) seconds, respectively. A reduction in TUGTT occurred between each outcome point (P ≤ .002). Repeated TUGTTs were strongly correlated at each assessment, intraclass correlation (95% CI) = 0.85 (0.76–0.91), 0.84 (0.73–0.91), and 0.90 (0.83–0.94), despite a reduction between TUGTT1 and TUGTT2 of 5%, 5%, and 7%, respectively (P ≤ .006). Relative decreases in TUGTT1 (TUGTT2) occurred from start-CR to post-CR and from start-CR to 6 months post-CR of −7.5% (−6.9%) and −14.2% (−15.5%), respectively, while relative increases in 6MWD1 (6MWD2) occurred, 5.1% (7.2%) and 8.4% (10.2%), respectively (P < .001 in all cases). Pearson correlation coefficients for 6MWD1 to TUGTT1 and TUGTT2 across all times were −0.60 and −0.68 (P < .001) and the intraclass correlations (95% CI) for the speeds derived from averaged 6MWDs and TUGTTs were 0.65 (0.54, 0.73) (P < .001). CONCLUSIONS: Similar relative changes occurred for the TUGT and the 6MWT in CR. A significant correlation between the TUGTT and 6MWD was demonstrated, and we suggest that the TUGT may provide a related or a supplementary measurement of functional capacity in CR.


Background Predicting protein subnuclear localization is a challenging problem. Some previous works based on non-sequence information including Gene Ontology annotations and kernel fusion have respective limitations. The aim of this work is twofold: one is to propose a novel individual feature extraction method; another is to develop an ensemble method to improve prediction performance using comprehensive information represented in the form of a high-dimensional feature vector obtained by 11 feature extraction methods. Methodology/Principal Findings A novel two-stage multiclass support vector machine is proposed to predict protein subnuclear localizations. It only considers those feature extraction methods based on amino acid classifications and physicochemical properties. In order to speed up our system, an automatic search method for the kernel parameter is used. The prediction performance of our method is evaluated on four datasets: the Lei dataset, a multi-localization dataset, the SNL9 dataset and a new independent dataset. The overall accuracy of prediction for 6 localizations on the Lei dataset is 75.2% and that for 9 localizations on the SNL9 dataset is 72.1% in the leave-one-out cross validation, with 71.7% for the multi-localization dataset and 69.8% for the new independent dataset, respectively. Comparisons with existing methods show that our method performs better for both single-localization and multi-localization proteins and achieves more balanced sensitivities and specificities on large-size and small-size subcellular localizations. The overall accuracy improvements are 4.0% and 4.7% for single-localization proteins and 6.5% for multi-localization proteins. The reliability and stability of our classification model are further confirmed by permutation analysis. Conclusions It can be concluded that our method is effective and valuable for predicting protein subnuclear localizations. A web server has been designed to implement the proposed method. It is freely available at http://bioinformatics.awowshop.com/snlpred_page.php.


Previous studies have enabled exact prediction of probabilities of identity-by-descent (IBD) in random-mating populations for a few loci (up to four or so), with extension to more using approximate regression methods. Here we present a precise predictor of multiple-locus IBD using simple formulas based on exact results for two loci. In particular, the probability of non-IBD, X_ABC, at each of ordered loci A, B, and C can be well approximated by X_ABC = X_AB X_BC / X_B, and generalizes to X_12...k = X_12 X_23 ... X_(k-1,k) / X^(k-2), where X is the probability of non-IBD at each locus. Predictions from this chain rule are very precise with population bottlenecks and migration, but are rather poorer in the presence of mutation. From these coefficients, the probabilities of multilocus IBD and non-IBD can also be computed for genomic regions as functions of population size, time, and map distances. An approximate but simple recurrence formula is also developed, which generally is less accurate than the chain rule but is more robust with mutation. Used together with the chain rule, it leads to explicit equations for non-IBD in a region. The results can be applied to detection of quantitative trait loci (QTL) by computing the probability of IBD at candidate loci in terms of identity-by-state at neighboring markers.
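The chain rule can be illustrated numerically. The helper below is a hypothetical implementation of the stated formula (adjacent pairwise non-IBD probabilities divided by X^(k-2)); the example probabilities are invented for illustration.

```python
def chain_rule_non_ibd(pairwise, x_single):
    # X_{12...k} is approximated by the product of adjacent pairwise
    # non-IBD probabilities divided by X^(k-2), where X is the
    # per-locus non-IBD probability (assumed equal across loci).
    prod = 1.0
    for x in pairwise:
        prod *= x
    k = len(pairwise) + 1          # number of loci
    return prod / x_single ** (k - 2)

# three ordered loci with invented probabilities:
# X_AB = 0.70, X_BC = 0.75, per-locus X = 0.80
x_abc = chain_rule_non_ibd([0.70, 0.75], 0.80)   # 0.525 / 0.8 = 0.65625
```

Two limiting cases act as sanity checks: with complete linkage (X_AB = X_BC = X) the formula returns X itself, and with independent loci (X_AB = X^2) it returns X^3, as expected.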


For the timber industry, the ability to simulate the drying of wood is invaluable for manufacturing high-quality wood products. Mathematically, however, modelling the drying of a wet porous material, such as wood, is a difficult task due to its heterogeneous and anisotropic nature, and the complex geometry of the underlying pore structure. The well-developed macroscopic modelling approach involves writing down classical conservation equations at a length scale where physical quantities (e.g., porosity) can be interpreted as averaged values over a small volume (typically containing hundreds or thousands of pores). This averaging procedure produces balance equations that resemble those of a continuum, with the exception that effective coefficients appear in their definitions. Exponential integrators are numerical schemes for initial value problems involving a system of ordinary differential equations. These methods differ from popular Newton-Krylov implicit methods (i.e., those based on the backward differentiation formulae (BDF)) in that they do not require the solution of a system of nonlinear equations at each time step; rather, they require computation of matrix-vector products involving the exponential of the Jacobian matrix. Although originally appearing in the 1960s, exponential integrators have recently experienced a resurgence in interest due to a greater undertaking of research in Krylov subspace methods for matrix function approximation. One of the simplest examples of an exponential integrator is the exponential Euler method (EEM), which requires, at each time step, approximation of φ(A)b, where φ(z) = (e^z - 1)/z, A ∈ R^(n×n) and b ∈ R^n. For drying in porous media, the most comprehensive macroscopic formulation is TransPore [Perre and Turner, Chem. Eng. J., 86: 117-131, 2002], which features three coupled, nonlinear partial differential equations.
The focus of the first part of this thesis is the use of the exponential Euler method (EEM) for performing the time integration of the macroscopic set of equations featured in TransPore. In particular, a new variable-stepsize algorithm for EEM is presented within a Krylov subspace framework, which allows control of the error during the integration process. The performance of the new algorithm highlights the great potential of exponential integrators not only for drying applications but across all disciplines of transport phenomena. For example, when applied to well-known benchmark problems involving single-phase liquid flow in heterogeneous soils, the proposed algorithm requires half the number of function evaluations required by an equivalent (sophisticated) Newton-Krylov BDF implementation. Furthermore, for all drying configurations tested, the new algorithm always produces, in less computational time, a solution of higher accuracy than the existing backward Euler module featured in TransPore. Some new results relating to Krylov subspace approximation of φ(A)b are also developed in this thesis. Most notably, an alternative derivation of the approximation error estimate of Hochbruck, Lubich and Selhofer [SIAM J. Sci. Comput., 19(5): 1552-1574, 1998] is provided, which reveals why it performs well in the error control procedure. Two of the main drawbacks of the macroscopic approach outlined above are that the effective coefficients must be supplied to the model, and that it fails for some drying configurations where typical dual-scale mechanisms occur. In the second part of this thesis, a new dual-scale approach for simulating wood drying is proposed that couples the porous medium (macroscale) with the underlying pore structure (microscale).
The proposed model is applied to the convective drying of softwood at low temperatures and is valid in the so-called hygroscopic range, where hygroscopically held liquid water is present in the solid phase and water exists only as vapour in the pores. Coupling between scales is achieved by imposing the macroscopic gradient on the microscopic field using suitably defined periodic boundary conditions, which allows the macroscopic flux to be defined as an average of the microscopic flux over the unit cell. This formulation provides a first step for moving from the macroscopic formulation featured in TransPore to a comprehensive dual-scale formulation capable of addressing any drying configuration. Simulation results reported for a sample of spruce highlight the potential and flexibility of the new dual-scale approach. In particular, for a given unit cell configuration it is not necessary to supply the effective coefficients prior to each simulation.
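The exponential Euler method described in the abstract can be sketched as follows. For large systems one would approximate φ(hJ)f(y) with a Krylov method, as the thesis does; the dense evaluation of φ and the 2x2 linear test problem below are illustrative simplifications.

```python
import numpy as np
from scipy.linalg import expm

def phi(Z):
    # phi(z) = (e^z - 1)/z for a matrix argument, computed from the
    # top-right block of exp([[Z, I], [0, 0]]) to avoid inverting Z
    n = Z.shape[0]
    M = np.zeros((2 * n, 2 * n))
    M[:n, :n] = Z
    M[:n, n:] = np.eye(n)
    return expm(M)[:n, n:]

def exponential_euler_step(f, jac, y, h):
    # one EEM step: y_{n+1} = y_n + h * phi(h J) f(y_n), J = Jacobian at y_n
    J = jac(y)
    return y + h * (phi(h * J) @ f(y))

# For a linear system y' = A y the method is exact:
# a single step reproduces expm(h A) @ y0.
A = np.array([[-2.0, 1.0], [1.0, -3.0]])
y0 = np.array([1.0, 0.5])
h = 0.1
y1 = exponential_euler_step(lambda y: A @ y, lambda y: A, y0, h)
y_exact = expm(h * A) @ y0
```

Exactness on linear problems is what allows exponential integrators to take the large stable steps on stiff semi-discrete systems that the thesis exploits.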


Compression ignition (CI) engine design is subject to many constraints, presenting a multi-criteria optimisation problem that the engine researcher must solve. In particular, the modern CI engine must not only be efficient, but must also deliver low gaseous, particulate and life-cycle greenhouse gas emissions so that its impact on urban air quality, human health and global warming is minimised. Consequently, this study undertakes a multi-criteria analysis which seeks to identify alternative fuels, injection technologies and combustion strategies that could potentially satisfy these CI engine design constraints. Three datasets are analysed with the Preference Ranking Organization Method for Enrichment Evaluations and Geometrical Analysis for Interactive Aid (PROMETHEE-GAIA) algorithm to explore the impact of: 1) an ethanol fumigation system; 2) alternative fuels (20% biodiesel and synthetic diesel) and alternative injection technologies (mechanical direct injection and common rail injection); and 3) various biodiesel fuels made from three feedstocks (soy, tallow and canola) tested at several blend percentages (20-100%), on the resulting emissions and efficiency profile of the various test engines. The results show that moderate ethanol substitutions (~20% by energy) at moderate load, high-percentage soy blends (60-100%), and alternative fuels (biodiesel and synthetic diesel) provide an efficiency and emissions profile that yields the most “preferred” solutions to this multi-criteria engine design problem. Further research is, however, required to reduce reactive oxygen species (ROS) emissions with alternative fuels, and to deliver technologies that do not significantly reduce the median diameter of particle emissions.
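A minimal sketch of the PROMETHEE II net-flow ranking underlying PROMETHEE-GAIA, using the simple "usual" preference function; the engine data, criteria and weights below are invented for illustration and are not the study's dataset.

```python
import numpy as np

def promethee_ii(X, weights, maximize):
    # minimal PROMETHEE II: pairwise comparisons with the 'usual'
    # preference function (1 if strictly better, else 0), returning
    # net outranking flows (higher flow = more preferred alternative)
    m = X.shape[0]
    Y = np.where(maximize, X, -X)        # recast all criteria as 'maximise'
    flows = np.zeros(m)
    for a in range(m):
        for b in range(m):
            if a == b:
                continue
            pref_ab = weights @ (Y[a] > Y[b]).astype(float)
            pref_ba = weights @ (Y[b] > Y[a]).astype(float)
            flows[a] += pref_ab - pref_ba
    return flows / (m - 1)

# invented example: three engine configurations scored on thermal
# efficiency (maximise) and two emission indices (minimise)
X = np.array([[0.38, 5.1, 0.12],
              [0.36, 3.9, 0.08],
              [0.40, 6.0, 0.15]])
w = np.array([0.4, 0.3, 0.3])            # criteria weights (sum to 1)
flows = promethee_ii(X, w, np.array([True, False, False]))
ranking = np.argsort(-flows)             # best-to-worst alternative indices
```

Net flows sum to zero by construction, and the GAIA plane used in the study is essentially a principal-component view of the per-criterion flow contributions.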


This paper describes an approach to investigating the adoption of Web 2.0 in the classroom using a mixed methods study. By using a combination of qualitative and quantitative data collection and analysis techniques, we attempt to synergize the results and provide a more valid understanding of Web 2.0 adoption for learning by both teachers and students. This approach is expected to yield a better holistic view of the adoption issues associated with the e-learning 2.0 concept in current higher education, as opposed to the single-method studies done previously. This paper also presents some early findings on e-learning 2.0 adoption using this research method.


Purpose. The purpose of this article was to present methods capable of estimating the size and shape of the human eye lens without resorting to phakometry or magnetic resonance imaging (MRI). Methods. Previously published biometry and phakometry data of 66 emmetropic eyes of 66 subjects (age range [18, 63] years, spherical equivalent range [−0.75, +0.75] D) were used to define multiple linear regressions for the radii of curvature and thickness of the lens, from which the lens refractive index could be derived. MRI biometry was also available for a subset of 30 subjects, from which regressions could be determined for the vertex radii of curvature, conic constants, equatorial diameter, volume, and surface area. All regressions were compared with the phakometry and MRI data; the radii of curvature regressions were also compared with a method proposed by Bennett and Royston et al. Results. The regressions were in good agreement with the original measurements. This was especially the case for the regressions of lens thickness, volume, and surface area, which each had an R2 > 0.6. The regression for the posterior radius of curvature had an R2 < 0.2, making this regression unreliable. For all other regressions we found 0.25 < R2 < 0.6. The Bennett-Royston method also produced a good estimation of the radii of curvature, provided its parameters were adjusted appropriately. Conclusions. The regressions presented in this article offer a valuable alternative when no measured lens biometry values are available; however, care must be taken for possible outliers.


Metrics such as passengers per square metre have been developed to define optimum or crowded rail passenger density. Whilst such metrics are important for operational procedures, service evaluation and reporting, they fail to fully capture and convey the ways in which passengers experience crowded situations. This paper reports findings from a two-year study of rail passenger crowding in five Australian capital cities, which involved a novel mixed methodology including ethnography, focus groups and an online stated preference choice experiment. The resulting data address the following four fundamental research questions: 1) to what extent are Australian rail passengers concerned by crowding, 2) what conditions exacerbate feelings of crowdedness, 3) what conditions mitigate feelings of crowdedness, and 4) how can we usefully understand passengers’ experiences of crowdedness? The paper concludes with some observations on the significance and implications of these findings for customer service provision. The findings outlined in this paper demonstrate that the experience of crowdedness (including its tolerance) cannot be understood in isolation from other customer service issues such as interior design, quality of environment, safety and public health concerns. It is hypothesised that tolerance of crowding will increase alongside improvements to overall customer service. This was the first comprehensive study of crowding in the Australian rail industry.


We perform an analytic and numerical study of an inviscid contracting bubble in a two-dimensional Hele-Shaw cell, where the effects of both surface tension and kinetic undercooling on the moving bubble boundary are not neglected. In contrast to expanding bubbles, in which both boundary effects regularise the ill-posedness arising from the viscous (Saffman-Taylor) instability, we show that in contracting bubbles the two boundary effects are in competition, with surface tension stabilising the boundary, and kinetic undercooling destabilising it. This competition leads to interesting bifurcation behaviour in the asymptotic shape of the bubble in the limit it approaches extinction. In this limit, the boundary may tend to become either circular, or approach a line or "slit" of zero thickness, depending on the initial condition and the value of a nondimensional surface tension parameter. We show that over a critical range of surface tension values, both these asymptotic shapes are stable. In this regime there exists a third, unstable branch of limiting self-similar bubble shapes, with an asymptotic aspect ratio (dependent on the surface tension) between zero and one. We support our asymptotic analysis with a numerical scheme that utilises the applicability of complex variable theory to Hele-Shaw flow.


Background and Objectives  In Australia, the risk of transfusion-transmitted malaria is managed through the identification of ‘at-risk’ donors, antibody screening enzyme-linked immunoassay (EIA) and, if reactive, exclusion from fresh blood component manufacture. Donor management depends on the duration of exposure in malarious regions (>6 months: ‘Resident’, <6 months: ‘Visitor’) or a history of malaria diagnosis. We analysed antibody testing and demographic data to investigate antibody persistence dynamics. To assess the yield from retesting 3 years after an initial EIA reactive result, we estimated the proportion of donors who would become non-reactive over this period. Materials and Methods  Test results and demographic data from donors who were malaria EIA reactive were analysed. Time since possible exposure was estimated and antibody survival modelled. Results  Among seroreverters, the time since last possible exposure was significantly shorter in ‘Visitors’ than in ‘Residents’. The antibody survival modelling predicted 20% of previously EIA reactive ‘Visitors’, but only 2% of ‘Residents’ would become non-reactive within 3 years of their first reactive EIA. Conclusion  Antibody persistence in donors correlates with exposure category, with semi-immune ‘Residents’ maintaining detectable antibodies significantly longer than non-immune ‘Visitors’.