62 results for method of lines
Abstract:
An existing model for solvent penetration and drug release from a spherically-shaped polymeric drug delivery device is revisited. The model has two moving boundaries, one that describes the interface between the glassy and rubbery states of the polymer, and another that defines the interface between the polymer ball and the pool of solvent. The model is extended so that the nonlinear diffusion coefficient of drug explicitly depends on the concentration of solvent, and the resulting equations are solved numerically using a front-fixing transformation together with a finite difference spatial discretisation and the method of lines. We present evidence that our scheme is much more accurate than a previous scheme. Asymptotic results in the small-time limit are presented, which show how the use of a kinetic law as a boundary condition on the innermost moving boundary dictates qualitative behaviour, the scalings being very different from those of the similar moving boundary problem that arises from modelling the melting of an ice ball. The implication is that the model considered here exhibits what is referred to as "non-Fickian" or Case II diffusion which, together with the initially constant rate of drug release, has certain appeal from a pharmaceutical perspective.
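The front-fixing idea named above can be illustrated with a minimal method-of-lines sketch. This is not the authors' scheme or their drug-release model; it applies the Landau transformation ξ = x/s(t) to a classical one-phase Stefan (melting) problem, purely to show how the moving boundary becomes one extra ODE unknown in the semi-discrete system:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative sketch only: the front-fixing (Landau) transformation
# xi = x/s(t) maps the moving domain [0, s(t)] onto the fixed interval
# [0, 1]; the method of lines then discretises xi and integrates the
# resulting ODE system in time.  The toy problem is a one-phase Stefan
# (melting) problem, NOT the drug-release model of the paper:
#   u_t = u_xx on 0 < x < s(t),  u(0,t) = 1,  u(s,t) = 0,
#   ds/dt = -u_x(s,t)

N = 100                         # grid intervals in the fixed variable xi
h = 1.0 / N
xi = np.linspace(0.0, 1.0, N + 1)

def rhs(t, y):
    u = np.empty(N + 1)
    u[0], u[1:N], u[N] = 1.0, y[:-1], 0.0    # Dirichlet boundary values
    s = y[-1]                                # current front position
    ux1 = (u[N] - u[N - 1]) / h              # one-sided flux at xi = 1
    sdot = -ux1 / s                          # Stefan condition (u_x = u_xi / s)
    # transformed PDE: u_t = u_xixi / s^2 + xi * sdot / s * u_xi
    uxx = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / h**2
    ux = (u[2:] - u[:-2]) / (2.0 * h)
    dudt = uxx / s**2 + xi[1:-1] * sdot / s * ux
    return np.append(dudt, sdot)

y0 = np.append(np.zeros(N - 1), 0.05)        # cold interior, small initial front
sol = solve_ivp(rhs, (0.0, 0.5), y0, method="BDF", rtol=1e-8, atol=1e-10)
print("front position s(0.5) =", sol.y[-1, -1])
```

The stiff BDF integrator is chosen because the transformed diffusion term carries a factor 1/s², which is large while the front is close to the origin.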
Abstract:
The method of lines is a standard method for advancing the solution of partial differential equations (PDEs) in time. In one sense, the method applies equally well to space-fractional PDEs as it does to integer-order PDEs. However, there is a significant challenge when solving space-fractional PDEs in this way, owing to the non-local nature of the fractional derivatives: each equation in the resulting semi-discrete system involves contributions from every spatial node in the domain. This has important consequences for the efficiency of the numerical solver, especially when the system is large. First, the Jacobian matrix of the system is dense, and hence methods that avoid the need to form and factorise this matrix are preferred. Second, since the cost of evaluating the discrete equations is high, it is essential to minimise the number of evaluations required to advance the solution in time. In this paper, we show that an effective preconditioner is essential for improving the efficiency of the method of lines applied to a quite general two-sided, nonlinear space-fractional diffusion equation. A key contribution is to show how to construct suitable banded approximations to the system Jacobian for preconditioning purposes, which permit high orders and large stepsizes to be used in the temporal integration without requiring dense matrices to be formed. The results of numerical experiments are presented that demonstrate the effectiveness of this approach.
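The banded-preconditioner idea can be sketched in a few lines. The following is an illustrative toy, not the paper's construction: it builds a dense matrix from a shifted Grünwald-Letnikov discretisation of a one-sided fractional derivative, keeps only a narrow band of it, and uses the LU factors of that band to precondition a GMRES solve with the full dense operator:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Illustrative toy, not the paper's construction: a shifted
# Grunwald-Letnikov discretisation of a one-sided fractional derivative
# yields a DENSE matrix; keeping only a narrow band of it and
# LU-factorising that band gives a cheap preconditioner for Krylov
# solves with the full dense operator.
alpha, N, k = 1.8, 400, 10      # fractional order, grid size, bandwidth
h = 1.0 / N

# Grunwald-Letnikov weights g_j = (-1)^j * binom(alpha, j), by recurrence
g = np.empty(N + 1)
g[0] = 1.0
for j in range(1, N + 1):
    g[j] = g[j - 1] * (j - 1 - alpha) / j

# dense lower-Hessenberg fractional-derivative matrix (shifted GL scheme)
Afrac = np.zeros((N, N))
for i in range(N):
    for j in range(min(i + 2, N)):
        Afrac[i, j] = g[i + 1 - j] / h**alpha

A = np.eye(N) - 0.1 * Afrac     # e.g. one implicit Euler step, I - dt * A_frac

# banded approximation: zero every entry outside a bandwidth of k
B = sp.csc_matrix(np.triu(np.tril(A, k), -k))
lu = spla.splu(B)
prec = spla.LinearOperator((N, N), matvec=lu.solve)

b = np.ones(N)
x, info = spla.gmres(A, b, M=prec)
print("GMRES converged:", info == 0)
```

Because the Grünwald-Letnikov weights decay with distance from the diagonal, the truncated band captures most of the operator and the preconditioned iteration converges in few steps while only a banded factorisation is ever stored.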
Abstract:
The objective of this PhD research program is to investigate numerical methods for simulating variably-saturated flow and sea water intrusion in coastal aquifers in a high-performance computing environment. The work is divided into three overlapping tasks: to develop an accurate and stable finite volume discretisation and numerical solution strategy for the variably-saturated flow and salt transport equations; to implement the chosen approach in a high performance computing environment that may have multiple GPUs or CPU cores; and to verify and test the implementation. The geological description of aquifers is often complex, with porous materials possessing highly variable properties that are best described using unstructured meshes. The finite volume method is a popular method for the solution of the conservation laws that describe sea water intrusion, and is well-suited to unstructured meshes. In this work we apply a control volume-finite element (CV-FE) method to an extension of a recently proposed formulation (Kees and Miller, 2002) for variably saturated groundwater flow. The CV-FE method evaluates fluxes at points where material properties and gradients in pressure and concentration are consistently defined, making it both suitable for heterogeneous media and mass conservative. Using the method of lines, the CV-FE discretisation gives a set of differential algebraic equations (DAEs) amenable to solution using higher-order implicit solvers. Heterogeneous computer systems, which use a combination of computational hardware such as CPUs and GPUs, are attractive for scientific computing due to the potential advantages offered by GPUs for accelerating data-parallel operations. We present a C++ library that implements data-parallel methods on both CPUs and GPUs. The finite volume discretisation is expressed in terms of these data-parallel operations, which gives an efficient implementation of the nonlinear residual function.
This makes the implicit solution of the DAE system possible on the GPU, because the inexact Newton-Krylov method used by the implicit time stepping scheme can approximate the action of a matrix on a vector using residual evaluations. We also propose preconditioning strategies that are amenable to GPU implementation, so that all computationally-intensive aspects of the implicit time stepping scheme are implemented on the GPU. Results are presented that demonstrate the efficiency and accuracy of the proposed numerical methods and formulation. The formulation offers excellent conservation of mass, and higher-order temporal integration increases both the numerical efficiency and the accuracy of the solutions. Flux limiting produces accurate, oscillation-free solutions on coarse meshes, where much finer meshes are required to obtain solutions with equivalent accuracy using upstream weighting. The computational efficiency of the software is investigated using CPUs and GPUs on a high-performance workstation. The GPU version offers considerable speedup over the CPU version, with one GPU giving a speedup factor of 3 over the eight-core CPU implementation.
Abstract:
The high level of scholarly writing required for a doctoral thesis is a challenge for many research students. However, formal academic writing training is not a core component of many doctoral programs. Informal writing groups for doctoral students may be one method of contributing to the improvement of scholarly writing. In this paper, we report on a writing group that was initiated by an experienced writer and higher degree research supervisor to support and improve her doctoral students’ writing capabilities. Over time, this group developed a workable model to suit their varying needs and circumstances. The model comprised group sessions, an email group, and individual writing. Here, we use a narrative approach to explore the effectiveness and value of our research writing group model in improving scholarly writing. The data consisted of doctoral students’ reflections in response to stimulus questions about their writing progress and experiences. The stimulus questions sought to probe individual concerns about their own writing, what they had learned in the research writing group, the benefits of the group, and the disadvantages and challenges to participation. These reflections were analysed using thematic analysis. Following this analysis, the supervisor provided her perspective on the key themes that emerged. Results revealed that, through the writing group, members learned technical elements (e.g., paragraph structure), non-technical elements (e.g., working within limited timeframes), conceptual elements (e.g., constructing a cohesive argument), collaborative writing processes, and how to edit and respond to feedback. In addition to improved writing quality, other benefits were opportunities for shared writing experiences, peer support, and increased confidence and motivation. The writing group provides a unique social learning environment with opportunities for professional dialogue about writing, peer learning and review, and developing a supportive peer network.
Thus, our research writing group has proved an effective avenue for building doctoral students’ capability in scholarly writing. The proposed model for a research writing group could be applicable to any context, regardless of the type and location of the university, university faculty, doctoral program structure, or number of postgraduate students. It could also be used within a group of students with diverse research abilities, needs, topics and methodologies. However, it requires a group facilitator with sufficient expertise in scholarly writing and experience in doctoral supervision who can both engage the group in planned writing activities and capitalise on fruitful lines of discussion related to students’ concerns as they arise. The research writing group is not intended to replace traditional supervision processes or existing training. However, it has clear benefits for improving scholarly writing in doctoral research programs, particularly in an era of rapidly increasing student load.
Abstract:
We extended an earlier study (Vision Research, 45, 1967–1974, 2005) in which we investigated limits at which induced blur of letter targets becomes noticeable, troublesome and objectionable. Here we used a deformable adaptive optics mirror to vary spherical defocus for conditions of a white background with correction of astigmatism; a white background with reduction of all aberrations other than defocus; and a monochromatic background with reduction of all aberrations other than defocus. We used seven cyclopleged subjects, lines of three high-contrast letters as targets, 3–6 mm artificial pupils, and 0.1–0.6 logMAR letter sizes. Subjects used a method of adjustment to control the defocus component of the mirror to set the 'just noticeable', 'just troublesome' and 'just objectionable' defocus levels. For the white-no adaptive optics condition combined with 0.1 logMAR letter size, mean 'noticeable' blur limits were ±0.30, ±0.24 and ±0.23 D at 3, 4 and 6 mm pupils, respectively. White-adaptive optics and monochromatic-adaptive optics conditions reduced blur limits by 8% and 20%, respectively. Increasing pupil size from 3 to 6 mm decreased blur limits by 29%, and increasing letter size increased blur limits by 79%. Ratios of troublesome to noticeable, and of objectionable to noticeable, blur limits were 1.9 and 2.7, respectively. The study shows that the deformable mirror can be used to vary defocus in vision experiments. Overall, the results for noticeable, troublesome and objectionable blur agreed well with those of the previous study. Attempting to reduce higher-order or chromatic aberrations reduced blur limits only to a small extent.
Abstract:
Purpose: The cornea is known to be susceptible to forces exerted by eyelids. There have been previous attempts to quantify eyelid pressure but the reliability of the results is unclear. The purpose of this study was to develop a technique using piezoresistive pressure sensors to measure upper eyelid pressure on the cornea. Methods: The technique was based on the use of thin (0.18 mm) tactile piezoresistive pressure sensors, which generate a signal related to the applied pressure. A range of factors that influence the response of this pressure sensor were investigated along with the optimal method of placing the sensor in the eye. Results: Curvature of the pressure sensor was found to impart force, so the sensor needed to remain flat during measurements. A large rigid contact lens was designed to have a flat region to which the sensor was attached. To stabilise the contact lens during measurement, an apparatus was designed to hold and position the sensor and contact lens combination on the eye. A calibration system was designed to apply even pressure to the sensor when attached to the contact lens, so the raw digital output could be converted to actual pressure units. Conclusions: Several novel procedures were developed to use tactile sensors to measure eyelid pressure. The quantification of eyelid pressure has a number of applications including eyelid reconstructive surgery and the design of soft and rigid contact lenses.
Abstract:
The tear film plays an important role in preserving the health of the ocular surface and maintaining the optimal refractive power of the cornea. Moreover, dry eye syndrome is one of the most commonly reported eye health problems. This syndrome is caused by abnormalities in the properties of the tear film. Current clinical tools to assess the tear film properties have shown certain limitations. The traditional invasive methods for the assessment of tear film quality, which are used by most clinicians, have been criticized for their lack of reliability and/or repeatability. A range of non-invasive methods of tear assessment have been investigated, but these also present limitations. Hence, no “gold standard” test is currently available to assess the tear film integrity. Therefore, improving techniques for the assessment of tear film quality is of clinical significance and the main motivation for the work described in this thesis. In this study, tear film surface quality (TFSQ) changes were investigated by means of high-speed videokeratoscopy (HSV). In this technique, a set of concentric rings formed in an illuminated cone or bowl is projected onto the anterior cornea and its reflection from the ocular surface is imaged on a charge-coupled device (CCD). The reflection of the light is produced in the outermost layer of the cornea, the tear film. Hence, when the tear film is smooth, the reflected image presents a well-structured pattern. In contrast, when the tear film surface presents irregularities, the pattern also becomes irregular due to scattering and deviation of the reflected light. The videokeratoscope provides an estimate of the corneal topography associated with each Placido disk image. Topographical estimates, which have been used in the past to quantify tear film changes, may not always be suitable for the evaluation of all the dynamic phases of the tear film.
However, the Placido disk image itself, which contains the reflected pattern, may be more appropriate for assessing the tear film dynamics. A set of novel routines has been purposely developed to quantify the changes of the reflected pattern and to extract a time series estimate of the TFSQ from the video recording. The routine extracts from each frame of the video recording a maximized area of analysis. In this area, a metric of the TFSQ is calculated. Initially, two metrics based on Gabor filter and Gaussian gradient-based techniques were used to quantify the consistency of the pattern’s local orientation as a measure of TFSQ. These metrics have helped to demonstrate the applicability of HSV to assess the tear film, and the influence of contact lens wear on TFSQ. The results suggest that the dynamic-area analysis method of HSV was able to distinguish and quantify the subtle but systematic degradation of tear film surface quality in the inter-blink interval in contact lens wear. It was also able to clearly show a difference between bare eye and contact lens wearing conditions. Thus, the HSV method appears to be a useful technique for quantitatively investigating the effects of contact lens wear on the TFSQ. Subsequently, a larger clinical study was conducted to compare HSV with two other non-invasive techniques, lateral shearing interferometry (LSI) and dynamic wavefront sensing (DWS). Of these non-invasive techniques, the HSV appeared to be the most precise method for measuring TFSQ, by virtue of its lower coefficient of variation, while the LSI appeared to be the most sensitive method for analyzing tear build-up time (TBUT). The capability of each of the non-invasive methods to discriminate dry eye from normal subjects was also investigated. The receiver operating characteristic (ROC) curves were calculated to assess the ability of each method to predict dry eye syndrome.
The LSI technique gave the best results under both natural and suppressed blinking conditions, closely followed by HSV. The DWS did not perform as well as LSI or HSV. The main limitation of the HSV technique, identified during the former clinical study, was its lack of sensitivity in quantifying the build-up/formation phase of the tear film cycle. For that reason, an extra metric based on image transformation and block processing was proposed. In this metric, the area of analysis was transformed from Cartesian to polar coordinates, converting the concentric-ring pattern into an image of quasi-straight lines from which a block statistic was extracted. This metric showed better sensitivity under low pattern disturbance and improved the performance of the ROC curves. Additionally, a theoretical study based on ray-tracing techniques and topographical models of the tear film was undertaken to fully understand the HSV measurement and the instrument’s potential limitations. Of special interest was the assessment of the instrument’s sensitivity to subtle topographic changes. The theoretical simulations have helped to provide some understanding of the tear film dynamics; for instance, the model extracted for the build-up phase has provided some insight into the dynamics of this initial phase. Finally, some aspects of the mathematical modeling of TFSQ time series have been reported in this thesis. Over the years, different functions have been used to model the time series and to extract the key clinical parameters (i.e., timing). Unfortunately, those techniques for modeling the tear film time series do not simultaneously consider the underlying physiological mechanism and the parameter extraction methods. A set of guidelines is proposed to meet both criteria.
Special attention was given to a commonly used fit, the polynomial function, and to the selection of an appropriate model order to ensure that the true derivative of the signal is accurately represented. The work described in this thesis has shown the potential of using high-speed videokeratoscopy to assess tear film surface quality. A set of novel image and signal processing techniques has been proposed to quantify different aspects of tear film assessment, analysis and modeling. The dynamic-area HSV has shown good performance in a broad range of conditions (i.e., contact lens, normal and dry eye subjects). As a result, this technique could be a useful clinical tool to assess tear film surface quality in the future.
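The polar-remapping metric described in this abstract can be sketched as follows. The thesis routines are not reproduced here; this is a hedged illustration using a synthetic ring image, a nearest-neighbour Cartesian-to-polar resampling, and a simple block statistic (per-block variability of each ring column) standing in for the actual TFSQ measure:

```python
import numpy as np

# Hedged sketch of the polar-remapping idea: a Placido-disk image of
# concentric rings becomes a set of near-straight horizontal bands after
# a Cartesian-to-polar transform; a block statistic (here the standard
# deviation across angle within each block) then measures how much the
# pattern deviates from straightness.  Synthetic data, not thesis code.

def to_polar(img, cx, cy, n_r=64, n_theta=180):
    r = np.linspace(1, min(cx, cy) - 1, n_r)
    th = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    R, T = np.meshgrid(r, th)                    # shape (n_theta, n_r)
    x = (cx + R * np.cos(T)).astype(int)
    y = (cy + R * np.sin(T)).astype(int)
    return img[y, x]                             # nearest-neighbour sample

# synthetic 'ideal' Placido image: perfect concentric rings
n = 201
yy, xx = np.mgrid[0:n, 0:n]
rad = np.hypot(xx - n // 2, yy - n // 2)
rings = (np.sin(rad * 0.5) > 0).astype(float)

polar = to_polar(rings, n // 2, n // 2)
# block statistic: variability of each ring column within angular blocks
blocks = polar.reshape(6, 30, -1)                # 6 angular blocks of 30 rows
score = blocks.std(axis=1).mean()
print("pattern-disturbance score:", score)       # low for an undisturbed pattern
```

A disturbed tear film would bend or break the rings, raising the within-block variability and hence the score.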
Abstract:
Green energy is a key factor in green building: it drives down electricity bills and generates electricity with zero carbon emissions. Climate change and environmental policies are also pushing people to use renewable energy in place of coal-fired (conventional) energy, which is not environmentally friendly. Solar energy is one clean energy option that reduces environmental impact and lowers electricity costs. A solar installation collects sunlight through a solar array and stores the energy in batteries, which then supply the whole house with zero-carbon electricity. Because the market contains many suppliers of solar arrays, this paper applies the superiority and inferiority multi-criteria ranking (SIR) method, using 13 criteria to establish the I-flow and S-flow matrices, to evaluate four alternative solar energy systems and determine which alternative best provides power to a sustainable building. SIR is a well-known structured multi-criteria decision support tool that is increasingly used in construction and building. The outcome of this paper gives users meaningful guidance in selecting solar energy systems.
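As a hedged illustration of the SIR mechanics (with invented scores, only three criteria and equal weights, not the paper's 13 criteria or its data), the superiority and inferiority flows can be computed as:

```python
import numpy as np

# Illustrative SIR sketch: rows are alternative solar arrays, columns are
# benefit criteria.  All numbers are invented for demonstration.
scores = np.array([[7., 5., 8.],
                   [6., 6., 7.],
                   [8., 4., 6.],
                   [5., 7., 5.]])
m, n = scores.shape

# true-criterion preference function: P(d) = 1 if d > 0 else 0
diff = scores[:, None, :] - scores[None, :, :]   # pairwise differences
S = (diff > 0).sum(axis=1).astype(float)         # superiority matrix (m x n)
I = (diff < 0).sum(axis=1).astype(float)         # inferiority matrix (m x n)

w = np.full(n, 1.0 / n)                          # equal criterion weights
phi_S, phi_I = S @ w, I @ w                      # S-flow and I-flow
net = phi_S - phi_I                              # net flow used for ranking
ranking = np.argsort(-net)
print("alternatives ranked best to worst:", ranking)
```

The S-flow counts how often an alternative beats the others on each criterion, the I-flow how often it is beaten; ranking by the weighted net flow is one common aggregation.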
Abstract:
Establishing age-at-death for skeletal remains is a vital component of forensic anthropology. The Suchey-Brooks (S-B) method of age estimation has been widely utilised since 1986 and relies on a visual assessment of the pubic symphyseal surface in comparison to a series of casts. Inter-population studies (Kimmerle et al., 2005; Djuric et al., 2007; Sakaue, 2006) demonstrate limitations of the S-B method; however, no assessment of this technique specific to Australian populations has been published. Aim: This investigation assessed the accuracy and applicability of the S-B method to an adult Australian Caucasian population by highlighting error rates associated with this technique. Methods: Computed tomography (CT) and contact scans of the S-B casts were performed; each geometrically modelled surface was extracted and quantified for reference purposes. A Queensland skeletal database for Caucasian remains aged 15–70 years was initiated at the Queensland Health Forensic and Scientific Services – Forensic Pathology Mortuary (n=350). Three-dimensional reconstruction of the bone surface was performed using innovative volume visualisation protocols in the Amira® and Rapidform® platforms. Samples were allocated into 11 sub-sets of 5-year age intervals and changes associated with the surface geometry were quantified in relation to age, gender and asymmetry. Results: Preliminary results indicate that computational analysis was successfully applied to model morphological surface changes. Significant differences in observed versus actual ages were noted. Furthermore, initial morphological assessment demonstrates significant bilateral asymmetry of the pubic symphysis, which is unaccounted for in the S-B method. These results suggest refinements to the S-B method when applied to Australian casework. Conclusion: This investigation promises to transform anthropological analysis to be more quantitative and less invasive using CT imaging.
The overarching goal contributes to improving skeletal identification and medico-legal death investigation in the coronial process by narrowing the range of age-at-death estimation in a biological profile.
Abstract:
This practice-based inquiry investigates the process of composing notated scores using improvised solos by saxophonists John Butcher and Anthony Braxton. To compose with these improvised sources, I developed a new method of analysis, and through this method I developed new compositional techniques for applying these materials in a score. This method of analysis and composition utilizes the conceptual language of Gilles Deleuze and Felix Guattari found in A Thousand Plateaus. The conceptual language of Deleuze and Guattari, in particular the terms assemblage, refrain and deterritorialization, is discussed in depth to give a context for the philosophical origins and to explain how the language is used in reference to improvised music and the compositional process. The project seeks to elucidate the conceptual language through the creative practice and, in turn, for the creative practice to clarify the use of the conceptual terminology. The research resulted in four notated works: firstly, Gravity, for soloist and ensemble, based on the improvisational language of John Butcher; and secondly, a series of three studies titled Transbraxton Studies, for solo instruments, based on the improvisational-compositional language of Anthony Braxton. The implications of this research include the application of the analysis method to a number of musical contexts: in the process of composing with improvised music; in the study of style and authorship in solo improvisation; as a way of analyzing group improvisation; in the analysis of textural music, including electronic music; and in the analysis of music from different cultures, particularly cultures where improvisation and performative aspects of the music are significant to the overall meaning of the work.
The compositional technique that was developed has further applications as an expressive method of composing with non-metered improvised materials, one that merges well with the transcription method developed for notating pitches and sounds on a timeline. It is hoped that this research can open further lines of enquiry into the application of the conceptual ideas of Deleuze and Guattari to the analysis of other forms of music.
Abstract:
A method of producing porous complex oxides includes the steps of providing a mixture of (a) precursor elements suitable to produce the complex oxide, or (b) one or more precursor elements suitable to produce particles of the complex oxide and one or more metal oxide particles; and (c) a particulate carbon-containing pore-forming material selected to provide pore sizes in the range of 7–250 nm; and treating the mixture to (i) form the porous complex oxide, in which two or more of the precursor elements from (a) above, or one or more of the precursor elements and one or more of the metals in the metal oxide particles from (b) above, are incorporated into a phase of the complex metal oxide, the complex metal oxide having grain sizes in the range of 1–150 nm, and (ii) remove the pore-forming material under conditions such that the porous structure and composition of the complex oxide are substantially preserved. The method may also be used to produce nonrefractory metal oxides. The mixture further includes a surfactant or a polymer.
Abstract:
Cell line array (CMA) and tissue microarray (TMA) technologies are high-throughput methods for analysing both the abundance and distribution of gene expression in a panel of cell lines or multiple tissue specimens in an efficient and cost-effective manner. The process is based on Kononen's method of extracting a cylindrical core of paraffin-embedded donor tissue and inserting it into a recipient paraffin block. Donor tissue from surgically resected paraffin-embedded tissue blocks, frozen needle biopsies or cell line pellets can all be arrayed in the recipient block. The representative area of interest is identified and circled on a haematoxylin and eosin (H&E)-stained section of the donor block. Using a predesigned map showing a precise spacing pattern, a high density array of up to 1,000 cores of cell pellets and/or donor tissue can be embedded into the recipient block using a tissue arrayer from Beecher Instruments. Depending on the depth of the cell line/tissue removed from the donor block 100-300 consecutive sections can be cut from each CMA/TMA block. Sections can be stained for in situ detection of protein, DNA or RNA targets using immunohistochemistry (IHC), fluorescent in situ hybridisation (FISH) or mRNA in situ hybridisation (RNA-ISH), respectively. This chapter provides detailed methods for CMA/TMA design, construction and analysis with in-depth notes on all technical aspects including tips to deal with common pitfalls the user may encounter.
Abstract:
BACKGROUND: An examination of melanoma incidence according to anatomical region may be one method of monitoring the impact of public health initiatives. OBJECTIVES: To examine melanoma incidence trends by body site, sex and age at diagnosis or body site and morphology in a population at high risk. MATERIALS AND METHODS: Population-based data on invasive melanoma cases (n = 51,473) diagnosed between 1982 and 2008 were extracted from the Queensland Cancer Registry. Age-standardized incidence rates were calculated using the direct method (2000 world standard population) and joinpoint regression models were used to fit trend lines. RESULTS: Significantly decreasing trends for melanomas on the trunk and upper limbs/shoulders were observed during recent years for both sexes under the age of 40 years and among males aged 40–59 years. However, in the 60 and over age group, the incidence of melanoma is continuing to increase at all sites (apart from the trunk) for males and on the scalp/neck and upper limbs/shoulders for females. Rates of nodular melanoma are currently decreasing on the trunk and lower limbs. In contrast, superficial spreading melanoma is significantly increasing on the scalp/neck and lower limbs, along with substantial increases in lentigo maligna melanoma since the late 1990s at all sites apart from the lower limbs. CONCLUSIONS: In this large study we have observed significant decreases in rates of invasive melanoma in the younger age groups on less frequently exposed body sites. These results may provide some indirect evidence of the impact of long-running primary prevention campaigns.
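The direct age-standardisation step named in the methods can be illustrated numerically. The figures below are invented (a truncated three-band standard population), not Queensland Cancer Registry data:

```python
# Direct age-standardisation sketch with invented figures: weight each
# age band's crude rate by the standard population's share of that band.
cases      = [10, 60, 180]          # melanoma cases per age band (invented)
person_yrs = [5e5, 4e5, 2e5]        # population at risk per band (invented)
std_pop    = [0.55, 0.30, 0.15]     # standard population weights (sum to 1)

asr = sum(w * c / p for w, c, p in zip(std_pop, cases, person_yrs)) * 1e5
print(f"age-standardised rate: {asr:.1f} per 100,000")   # 19.1 here
```

Weighting by a fixed standard population makes rates comparable across calendar years even as the real population ages.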
Abstract:
Reported homocysteine (HCY) concentrations in human serum show poor concordance amongst laboratories due to endogenous HCY in the matrices used for assay calibrators and QCs. Hence, we have developed a fully validated LC–MS/MS method for measurement of HCY concentrations in human serum samples that addresses this issue by minimising matrix effects. We used small volumes (20 μL) of 2% Bovine Serum Albumin (BSA) as surrogate matrix for making calibrators and QCs with concentrations adjusted for the endogenous HCY concentration in the surrogate matrix using the method of standard additions. To aliquots (20 μL) of human serum samples, calibrators or QCs, were added HCY-d4 (internal standard) and tris-(2-carboxyethyl) phosphine hydrochloride (TCEP) as reducing agent. After protein precipitation, diluted supernatants were injected into the LC–MS/MS. Calibration curves were linear; QCs were accurate (5.6% deviation from nominal), precise (CV% ≤ 9.6%), stable for four freeze–thaw cycles, and when stored at room temperature for 5 h or at −80 °C (27 days). Recoveries from QCs in surrogate matrix or pooled human serum were 91.9 and 95.9%, respectively. There was no matrix effect using 6 different individual serum samples including one that was haemolysed. Our LC–MS/MS method has satisfied all of the validation criteria of the 2012 EMA guideline.
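The standard-additions adjustment mentioned above can be sketched with invented numbers (not the validated assay's data): known amounts are spiked into the surrogate matrix, the response is regressed on the added concentration, and extrapolating the line back to zero response estimates the endogenous concentration:

```python
import numpy as np

# Method-of-standard-additions sketch with invented data: the x-intercept
# magnitude of the response-vs-added-concentration line estimates the
# endogenous analyte concentration in the matrix.
added = np.array([0.0, 2.0, 4.0, 8.0])        # spiked HCY, umol/L (invented)
resp  = np.array([0.52, 1.51, 2.49, 4.53])    # instrument response (invented)

slope, intercept = np.polyfit(added, resp, 1)
endogenous = intercept / slope                # x-intercept magnitude
print(f"endogenous HCY ~ {endogenous:.2f} umol/L")
```

The calibrator concentrations can then be corrected by this endogenous offset, which is the role it plays in the validated method described above.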
Abstract:
Porosity is one of the key parameters of the macroscopic structure of porous media, generally defined as the ratio of the free space (the volume of air) within the material to the total volume of the material. Porosity is determined by measuring the skeletal volume and the envelope volume. The solid displacement method is an inexpensive and easy way to determine the envelope volume of a sample with an irregular shape. In this method, glass beads are generally used as the solid owing to their uniform size, compactness and fluidity. However, beads that are smaller than the openings of surface pores can enter any open pore whose diameter is larger than the bead size. Although extensive research has been carried out on porosity determination using the displacement method, no study exists that adequately reports micro-level observation of the sample during measurement. This study set out to assess the accuracy of the solid displacement method of bulk density measurement of dried foods by micro-level observation. Solid displacement measurements were conducted using a cylindrical vial (a cylindrical plastic container) and 57 µm glass beads in order to measure the bulk density of apple slices at different moisture contents. A scanning electron microscope (SEM), a profilometer and ImageJ software were used to investigate the penetration of glass beads into the surface pores during the determination of the porosity of dried food. A helium pycnometer was used to measure the particle density of the sample. Results show that a significant number of pores were large enough to allow the glass beads to enter, thereby causing erroneous results. It was also found that coating the dried sample with an appropriate coating material prior to measurement can resolve this problem.
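The porosity relationships underlying these measurements can be illustrated with a small worked example (invented values, not the study's data):

```python
# Porosity from envelope and skeletal measurements, invented values:
# bulk density comes from the envelope volume (bead displacement),
# particle density from the skeletal volume (helium pycnometry).
m_sample   = 2.50                 # g, dried apple slice (invented)
v_envelope = 3.20                 # cm^3, from glass-bead displacement
v_skeletal = 1.60                 # cm^3, from helium pycnometry

bulk_density     = m_sample / v_envelope
particle_density = m_sample / v_skeletal
porosity = 1.0 - bulk_density / particle_density   # = 1 - Vsk/Venv
print(f"porosity = {porosity:.3f}")                # 0.500 here
```

Beads penetrating surface pores inflate the displaced volume reading, understating the envelope volume and hence the computed porosity, which is the error the micro-level observations in this study reveal.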