174 results for Survival analysis (Biometry) Mathematical models
Abstract:
The concept of non-destructive testing (NDT) of materials and structures is of immense importance in engineering and medicine. Several NDT methods, including electromagnetic (EM)-based methods such as X-ray and infrared, ultrasound, and S-waves, have been proposed for medical applications. This paper evaluates the viability of near infrared (NIR) spectroscopy, an EM method, for rapid non-destructive evaluation of articular cartilage. Specifically, we tested the hypothesis that there is a correlation between the NIR spectrum and the physical and mechanical characteristics of articular cartilage such as thickness, stress and stiffness. Intact, visually normal cartilage-on-bone plugs from 2–3-year-old bovine patellae were exposed to NIR light from a diffuse reflectance fibre-optic probe and tested mechanically to obtain their thickness, stress, and stiffness. Predictive models based on multivariate statistical analysis, relating the articular cartilage NIR spectra to these characterising parameters, were developed. Our results show a varying degree of correlation between the different parameters and the NIR spectra of the samples, with R² varying between 65% and 93%. We therefore conclude that NIR can be used to determine, non-destructively, the physical and functional characteristics of articular cartilage.
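The abstract does not name the multivariate technique used; partial least squares (PLS) regression is a standard choice for relating spectra to reference measurements, and a minimal sketch of that workflow, on purely synthetic spectra with a hypothetical thickness target, might look like this:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Synthetic stand-in for the NIR data: 60 plugs x 200 wavelength bins
rng = np.random.default_rng(0)
spectra = rng.normal(size=(60, 200))
# Hypothetical target: thickness driven by a few spectral bands plus noise
thickness = 2.0 + 0.4 * spectra[:, 50] - 0.2 * spectra[:, 120] \
            + rng.normal(scale=0.1, size=60)

X_tr, X_te, y_tr, y_te = train_test_split(spectra, thickness, random_state=0)
pls = PLSRegression(n_components=5)   # component count chosen by CV in practice
pls.fit(X_tr, y_tr)
print("held-out R^2:", r2_score(y_te, pls.predict(X_te).ravel()))
```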
Abstract:
Prognostics and asset life prediction is an active research area in engineering asset health management. We previously developed the Explicit Hazard Model (EHM) to effectively and explicitly predict asset life using three types of information: population characteristics, condition indicators, and operating environment indicators. We have previously studied the application of both the semi-parametric EHM and the non-parametric EHM to survival probability estimation in the reliability field. The survival time in these models depends not only upon the age of the monitored asset, but also upon the condition and operating environment information obtained. This paper is a further study of the semi-parametric and non-parametric EHMs, applied to hazard and residual life prediction for a set of resistance elements. The resistance elements were used as corrosion sensors for measuring the atmospheric corrosion rate in a laboratory experiment. The estimated hazard of the resistance element using the semi-parametric EHM and the non-parametric EHM is compared to the traditional Weibull model and the Aalen Linear Regression Model (ALRM), respectively: because the semi-parametric EHM assumes a Weibull distribution for its baseline hazard, its estimate is compared to the traditional Weibull model, while the estimate from the non-parametric EHM is compared to ALRM, a well-known non-parametric covariate-based hazard model. Finally, the predicted residual life of the resistance element using both EHMs is compared to the actual life data.
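The EHM itself is defined in the authors' earlier work; the structure the abstract describes, a baseline hazard adjusted by condition and operating-environment covariates, has a proportional-hazards shape. A minimal sketch assuming a Weibull baseline and a log-linear covariate term (all parameter values are illustrative):

```python
import numpy as np

def weibull_hazard(t, shape, scale):
    """Baseline hazard h0(t) of a Weibull distribution."""
    return (shape / scale) * (t / scale) ** (shape - 1)

def covariate_hazard(t, shape, scale, beta, z):
    """Proportional-hazards form h(t|z) = h0(t) * exp(beta . z), where z holds
    condition and operating-environment indicators (hypothetical covariates)."""
    return weibull_hazard(t, shape, scale) * np.exp(np.dot(beta, z))

t = np.linspace(0.1, 10, 50)
h = covariate_hazard(t, shape=1.5, scale=5.0,
                     beta=np.array([0.3]), z=np.array([1.0]))
print(h[:5])   # hazard rises with t since shape > 1
```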
Abstract:
Almost 10% of all births globally are preterm, and 2.2% are stillbirths. Recent research has suggested that environmental factors may be a contributory cause of these adverse birth outcomes. The authors examined the relationship between ambient temperature and preterm birth and stillbirth in Brisbane, Australia, between 2005 and 2009 (n = 101,870). They used a Cox proportional hazards model with live birth and stillbirth as competing risks. They also examined whether there were periods of the pregnancy during which exposure to high temperatures had a greater effect. Exposure to higher ambient temperatures during pregnancy increased the risk of stillbirth: the hazard ratio for stillbirth was 0.3 at 12 °C relative to the reference temperature of 21 °C. The temperature effect was greatest for fetuses of less than 36 weeks of gestation. There was also an association between higher temperature and shorter gestation, as the hazard ratio for live birth was 0.96 at 15 °C and 1.02 at 25 °C; this effect was greatest at later gestational ages. The results provide strong evidence of an association between increased temperature and increased risk of stillbirth and shorter gestations.
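A cause-specific Cox fit of this kind can be sketched with the lifelines library, treating the competing outcome as censoring. The data below are synthetic stand-ins, not the Brisbane cohort, and the column names and effect sizes are hypothetical:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Synthetic stand-in for a birth cohort
rng = np.random.default_rng(1)
n = 5000
temp = rng.uniform(12, 30, n)                                  # mean exposure (deg C)
gestation = 40 - 0.05 * (temp - 21) + rng.normal(0, 1.5, n)    # weeks
stillbirth = (rng.random(n) <
              0.02 + 0.004 * np.clip(temp - 21, 0, None)).astype(int)

df = pd.DataFrame({"gestation": gestation, "temp": temp,
                   "stillbirth": stillbirth})

# Cause-specific hazard for stillbirth: live births act as censoring events
cph = CoxPHFitter()
cph.fit(df, duration_col="gestation", event_col="stillbirth")
cph.print_summary()   # exp(coef) is the hazard ratio per degree C
```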
Abstract:
We study Krylov subspace methods for approximating the matrix-function vector product φ(tA)b where φ(z) = [exp(z) - 1]/z. This product arises in the numerical integration of large stiff systems of differential equations by the Exponential Euler Method, where A is the Jacobian matrix of the system. Recently, this method has found application in the simulation of transport phenomena in porous media within mathematical models of wood drying and groundwater flow. We develop an a posteriori upper bound on the Krylov subspace approximation error and provide a new interpretation of a previously published error estimate. This leads to an alternative Krylov approximation to φ(tA)b, the so-called Harmonic Ritz approximant, which we find does not exhibit oscillatory behaviour of the residual error.
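A standard Arnoldi-based Krylov approximation of φ(tA)b (not the paper's Harmonic Ritz variant) projects A onto a small subspace and applies φ to the reduced matrix. A sketch, assuming the reduced matrix tH_m is nonsingular:

```python
import numpy as np
from scipy.linalg import expm, solve

def phi(M):
    """phi(M) = M^{-1} (expm(M) - I); assumes M is nonsingular."""
    return solve(M, expm(M) - np.eye(M.shape[0]))

def krylov_phi(A, b, t, m=20):
    """Arnoldi projection: phi(tA) b ~= ||b|| * V_m phi(t H_m) e_1."""
    n = len(b)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(b)
    V[:, 0] = b / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):            # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:           # happy breakdown: subspace is exact
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(m)
    e1[0] = 1.0
    return beta * V[:, :m] @ (phi(t * H[:m, :m]) @ e1)

# Example: a stiff 1-D diffusion Jacobian, one Exponential Euler building block
n = 400
A = (np.diag(-2 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) * n**2
y = krylov_phi(A, np.ones(n), t=1e-3, m=30)
```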
Abstract:
This prospective study examined the association between physical activity and the incidence of self-reported stiff or painful joints (SPJ) among mid-age women and older women over a 3-year period. Data were collected from cohorts of mid-age (48–55 years at Time 1; n = 4,780) and older women (72–79 years at Time 1; n = 3,970) who completed mailed surveys 3 years apart for the Australian Longitudinal Study on Women's Health. Physical activity was measured with the Active Australia questions and categorized based on metabolic equivalent value minutes per week: none (<40 MET.min/week); very low (40 to <300 MET.min/week); low (300 to <600 MET.min/week); moderate (600 to <1,200 MET.min/week); and high (1,200+ MET.min/week). Cohort-specific logistic regression models were used to examine the association between physical activity at Time 1 and SPJ 'sometimes or often' and separately 'often' at Time 2. Respondents reporting SPJ 'sometimes or often' at Time 1 were excluded from analysis. In univariate models, the odds of reporting SPJ 'sometimes or often' were lower for mid-age respondents reporting low (odds ratio (OR) = 0.77, 95% confidence interval (CI) = 0.63–0.94), moderate (OR = 0.82, 95% CI = 0.68–0.99), and high (OR = 0.75, 95% CI = 0.62–0.90) physical activity levels and for older respondents who were moderately (OR = 0.80, 95% CI = 0.65–0.98) or highly active (OR = 0.83, 95% CI = 0.69–0.99) than for those who were sedentary. After adjustment for confounders, these associations were no longer statistically significant. The odds of reporting SPJ 'often' were lower for mid-age respondents who were moderately active (OR = 0.71, 95% CI = 0.52–0.97) than for sedentary respondents in univariate but not adjusted models. Older women in the low (OR = 0.72, 95% CI = 0.55–0.96), moderate (OR = 0.54, 95% CI = 0.39–0.76), and high (OR = 0.61, 95% CI = 0.46–0.82) physical activity categories had lower odds of reporting SPJ 'often' at Time 2 than their sedentary counterparts, even after adjustment for confounders. These results are the first to show a dose–response relationship between physical activity and arthritis symptoms in older women. They suggest that advice for older women not currently experiencing SPJ should routinely include counseling on the importance of physical activity for preventing the onset of these symptoms.
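The cohort-specific logistic regressions reported here can be sketched with statsmodels: fit a logit of the SPJ outcome on activity category and exponentiate the coefficients to obtain odds ratios against the sedentary reference. The data, category risks and column names below are synthetic stand-ins:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic cohort: activity category (from MET.min/week cut-points in the
# study) and a binary stiff-or-painful-joints outcome at follow-up
rng = np.random.default_rng(2)
n = 2000
levels = ["none", "very_low", "low", "moderate", "high"]
pa = rng.choice(levels, size=n)
risk = {"none": 0.35, "very_low": 0.33, "low": 0.30,
        "moderate": 0.26, "high": 0.27}
spj = (rng.random(n) < pd.Series(pa).map(risk).to_numpy()).astype(int)
df = pd.DataFrame({"pa": pa, "spj": spj})

fit = smf.logit("spj ~ C(pa, Treatment(reference='none'))", data=df)\
         .fit(disp=False)
print(np.exp(fit.params))      # odds ratios vs the sedentary ("none") group
print(np.exp(fit.conf_int()))  # 95% confidence intervals
```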
Abstract:
Stochastic models for competing clonotypes of T cells, formulated as multivariate, continuous-time, discrete-state Markov processes, have been proposed in the literature by Stirk, Molina-París and van den Berg (2008). A stochastic modelling framework is important because of rare events associated with small populations of some critical cell types. Usually, computational methods for these problems employ a trajectory-based approach based on Monte Carlo simulation, partly because the complementary probability density function (PDF) approaches can be expensive. Here, however, we describe some efficient PDF approaches that directly solve the governing equations, known as the Master Equation. These computations are made very efficient through an approximation of the state space by the Finite State Projection and through the use of Krylov subspace methods when evaluating the matrix exponential. These computational methods allow us to explore the evolution of the PDFs associated with these stochastic models, and bimodal distributions arise in some parameter regimes. Time-dependent propensities naturally arise in immunological processes due to, for example, age-dependent effects. Incorporating time-dependent propensities into the framework of the Master Equation significantly complicates the corresponding computational methods, but here we describe an efficient approach via Magnus formulas. Although this contribution focuses on the example of competing clonotypes, the general principles are relevant to multivariate Markov processes and provide fundamental techniques for computational immunology.
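A Finite State Projection computation of the kind described truncates the infinite state space, assembles the sparse truncated generator, and propagates the PDF with a matrix-exponential action; the probability mass lost to truncation bounds the FSP error. Below is a sketch for a one-species birth-death process, a simplified stand-in for the two-clonotype model; SciPy's expm_multiply plays the role the paper assigns to Krylov methods:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import expm_multiply

# FSP for a birth-death process with hypothetical propensity constants
N = 200                          # state-space truncation: copy numbers 0..N
birth, death = 5.0, 0.1

n = np.arange(N + 1)
A = diags(
    [birth * np.ones(N),             # sub-diagonal: n -> n+1 transitions
     -(birth + death * n),           # diagonal: total outflow from each state
     death * np.arange(1, N + 1)],   # super-diagonal: n -> n-1 transitions
    offsets=[-1, 0, 1], format="csr")

p0 = np.zeros(N + 1)
p0[0] = 1.0                          # start with zero copies
p = expm_multiply(2.0 * A, p0)       # PDF at t = 2 via expm action on a vector
print("probability lost to truncation:", 1.0 - p.sum())  # FSP error bound
```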
Abstract:
The action potential (AP) of a cardiac cell is made up of a complex balance of ionic currents which flow across the cell membrane in response to electrical excitation of the cell. Biophysically detailed mathematical models of the AP have grown larger in terms of the variables and parameters required to model new findings in subcellular ionic mechanisms. The fitting of parameters to such models has seen a large degree of parameter and module re-use from earlier models. An alternative method for modelling electrically excitable cardiac tissue is a phenomenological model, which reconstructs tissue-level AP wave behaviour without subcellular details. A new parameter estimation technique to fit the morphology of the AP in a four-variable phenomenological model is presented. An approximation of a nonlinear ordinary differential equation model is established that corresponds to the given phenomenological model of the cardiac AP. The parameter estimation problem is converted into a minimisation problem for the unknown parameters. A modified hybrid of Nelder–Mead simplex search and particle swarm optimization is then used to solve the minimisation problem. The successful fitting of data generated from a well-known biophysically detailed model is demonstrated. A successful fit to an experimental AP recording that contains both noise and experimental artefacts is also produced. The parameter estimation method's ability to fit a complex morphology using a model with substantially more parameters than previously attempted is established.
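The hybrid idea, a population-based global stage followed by simplex refinement, can be sketched as follows. The exponential-pulse "AP morphology" and its two parameters are stand-ins for the four-variable model, and crude random sampling stands in for the particle swarm:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical two-parameter surrogate: fit amplitude and decay of an
# exponential pulse to a noisy recorded trace
t = np.linspace(0, 300, 300)                       # ms
recorded = 100 * np.exp(-t / 80.0) \
           + np.random.default_rng(3).normal(0, 2, t.size)

def loss(p):
    amp, tau = p
    return np.sum((amp * np.exp(-t / tau) - recorded) ** 2)

# Crude global stage (stand-in for the particle swarm) ...
candidates = np.random.default_rng(4).uniform([50, 20], [150, 200],
                                              size=(64, 2))
best = min(candidates, key=loss)
# ... refined by a Nelder-Mead simplex search, as in the hybrid scheme
fit = minimize(loss, best, method="Nelder-Mead")
print(fit.x)   # should approach (100, 80)
```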
Abstract:
There is worldwide interest in reducing aircraft emissions. The difficulty of reducing emissions, including water vapour, carbon dioxide (CO2) and oxides of nitrogen (NOx), stems mainly from the fact that a commercial aircraft is usually designed for a particular optimal cruise altitude but may be requested or required to operate at different altitudes and speeds to achieve a desired or commanded flight plan, resulting in increased emissions. This is a multi-disciplinary problem with multiple trade-offs, such as optimising engine efficiency and minimising fuel burn and emissions while maintaining aircraft separation and air safety. This project presents the coupling of an advanced optimisation technique with mathematical models and algorithms for aircraft emission reduction through flight optimisation. Numerical results show that the method is able to capture a set of useful trade-offs between aircraft range and NOx, and between mission fuel consumption and NOx. In addition, alternative cruise operating conditions, including Mach number and altitude, that produce minimum NOx and minimum CO2 (minimum mission fuel weight) are suggested.
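The trade-off capture the abstract mentions amounts to extracting a Pareto front over competing objectives. A minimal dominance filter over hypothetical (fuel, NOx) scores for candidate cruise conditions:

```python
import numpy as np

# Candidate cruise conditions scored by two objectives to minimise
# (all values are hypothetical)
candidates = np.array([
    # fuel_kg, nox_kg
    [5200, 41], [5100, 45], [5350, 38], [5050, 52], [5400, 37], [5300, 44],
])

def pareto_front(points):
    """Keep points not dominated in both objectives (minimisation)."""
    keep = []
    for i, p in enumerate(points):
        dominated = any(np.all(q <= p) and np.any(q < p)
                        for j, q in enumerate(points) if j != i)
        if not dominated:
            keep.append(p)
    return np.array(keep)

print(pareto_front(candidates))   # the non-dominated fuel/NOx trade-offs
```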
Abstract:
Experimental action potential (AP) recordings in isolated ventricular myocytes display significant temporal beat-to-beat variability in morphology and duration. Furthermore, significant cell-to-cell differences in AP also exist, even for isolated cells originating from the same region of the same heart. However, current mathematical models of the ventricular AP fail to replicate the temporal and cell-to-cell variability in AP observed experimentally. In this study, we propose a novel mathematical framework for the development of phenomenological AP models capable of capturing cell-to-cell and temporal variability in cardiac APs. A novel stochastic phenomenological model of the AP is developed, based on the deterministic Bueno-Orovio/Fenton model. Experimental recordings of AP are fit to the model to produce AP models of individual cells from the apex and the base of the guinea-pig ventricles. Our results show that the phenomenological model is able to capture the considerable differences in AP recorded from isolated cells originating from the two locations. We demonstrate the closeness of fit to the available experimental data that may be achieved using a phenomenological model, and also demonstrate the ability of the stochastic form of the model to capture the observed beat-to-beat variability in action potential duration.
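One common way to add stochasticity to a deterministic phenomenological model is an additive noise term integrated by the Euler-Maruyama scheme. The sketch below uses a two-variable FitzHugh-Nagumo-style surrogate rather than the four-variable Bueno-Orovio/Fenton model, with illustrative parameter values:

```python
import numpy as np

# Euler-Maruyama integration of a noisy two-variable excitable model:
# the stochastic term produces beat-to-beat variability across stimuli
rng = np.random.default_rng(5)
dt, steps = 0.01, 50_000
a, b, eps, sigma = 0.7, 0.8, 0.08, 0.02

v, w = -1.2, -0.625            # near the resting fixed point
trace = np.empty(steps)
for k in range(steps):
    Iapp = 1.0 if (k * dt) % 400 < 5.0 else 0.0   # periodic stimulus pulse
    dv = v - v**3 / 3 - w + Iapp
    dw = eps * (v + a - b * w)
    v += dv * dt + sigma * np.sqrt(dt) * rng.normal()  # stochastic membrane eqn
    w += dw * dt
    trace[k] = v
```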
Abstract:
Popular wireless networks, such as IEEE 802.11/15/16, are not designed for real-time applications, so supporting real-time quality of service (QoS) in wireless real-time control is challenging. This paper adopts the widely used IEEE 802.11, with a focus on its distributed coordination function (DCF), for soft real-time control systems. The concept of the critical real-time traffic condition is introduced to characterize the marginal satisfaction of real-time requirements. Then, mathematical models are developed to describe the dynamics of DCF-based real-time control networks with periodic traffic, a unique feature of control systems. Performance indices such as throughput and packet delay are evaluated using the developed models, particularly under the critical real-time traffic condition. Finally, the proposed modelling is applied to traffic rate control for cross-layer networked control system design.
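The classic starting point for analytical DCF models is Bianchi's saturated-traffic fixed point, which couples the per-station transmission probability and the conditional collision probability; the paper's periodic-traffic models extend this style of analysis rather than reproduce it. A damped fixed-point sketch with illustrative parameters:

```python
# Bianchi-style DCF fixed point (saturated traffic): solve jointly for the
# transmission probability tau and conditional collision probability p
n_stations, W, m = 10, 32, 5     # stations, min contention window, backoff stages

tau, p = 0.1, 0.1
for _ in range(500):
    tau_new = 2 * (1 - 2 * p) / ((1 - 2 * p) * (W + 1)
                                 + p * W * (1 - (2 * p) ** m))
    tau = 0.5 * tau + 0.5 * tau_new          # damped update for stable convergence
    p = 1 - (1 - tau) ** (n_stations - 1)    # collision seen by a transmitting station
print(f"tau = {tau:.4f}, collision prob = {p:.4f}")
```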
Abstract:
Computer resource allocation represents a significant challenge, particularly for multiprocessor systems, which consist of shared computing resources to be allocated among co-runner processes and threads. While efficient resource allocation yields a highly efficient and stable overall multiprocessor system and good individual thread performance, poor resource allocation causes significant performance bottlenecks even on systems with abundant computing resources. This thesis proposes a cache-aware adaptive closed-loop scheduling framework as an efficient resource allocation strategy for this highly dynamic resource management problem, which requires instant estimation of highly uncertain and unpredictable resource patterns. Many approaches to this problem have been developed, but neither the dynamic nature nor the time-varying and uncertain characteristics of the resource allocation problem are well considered. These approaches employ either static or dynamic optimization methods or advanced scheduling algorithms such as the Proportional Fair (PFair) scheduling algorithm. Some of the approaches that do consider the dynamic nature of multiprocessor systems apply only a basic closed-loop system and hence fail to take the time-varying and uncertain nature of the system into account. Therefore, further research into multiprocessor resource allocation is required. Our closed-loop cache-aware adaptive scheduling framework takes resource availability and resource usage patterns into account by measuring time-varying factors such as cache miss counts, stalls and instruction counts. More specifically, the cache usage pattern of a thread is identified using the QR recursive least squares (RLS) algorithm and cache miss count time-series statistics. For the identified cache resource dynamics, the framework enforces instruction fairness for the threads. Fairness, in the context of our research project, is defined as resource allocation equity, which reduces co-runner thread dependence in a shared resource environment. In this way, instruction count degradation due to shared cache resource conflicts is overcome. The framework contributes to the research field in two major and three minor aspects. The two major contributions lead to the cache-aware scheduling system. The first major contribution is the development of the execution fairness algorithm, which mitigates the co-runner cache impact on thread performance. The second is the development of the relevant mathematical models, such as the thread execution pattern and cache access pattern models, which formulate the execution fairness algorithm in terms of mathematical quantities. Following the development of the cache-aware scheduling system, our adaptive self-tuning control framework is constructed to add an adaptive closed-loop aspect to it. This control framework consists of two main components: the parameter estimator and the controller design module. The first minor contribution is the development of the parameter estimators; the QR RLS algorithm is applied within our framework to estimate the highly uncertain and time-varying cache resource patterns of threads. The second minor contribution is the design of the controller design module; the algebraic controller design algorithm, pole placement, is used to design a controller able to provide the desired time-varying control action. The adaptive self-tuning control framework and the cache-aware scheduling system together constitute our final framework: the closed-loop cache-aware adaptive scheduling framework. The third minor contribution is the validation of this framework's efficiency in overcoming co-runner cache dependency. Time-series statistical counters were developed for the M-Sim Multi-Core Simulator, and the theoretical findings and mathematical formulations were implemented as MATLAB m-file code. In this way, the overall framework was tested and the experimental outcomes analysed. Based on these outcomes, we conclude that our closed-loop cache-aware adaptive scheduling framework successfully drives the instruction count of a co-runner cache-dependent thread towards its co-runner-independent instruction count, with an error margin of up to 25% when the cache is highly utilized. In addition, thread cache access patterns are estimated with 75% accuracy.
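The thesis's QR-factorised RLS is not reproduced here, but the underlying exponentially-weighted RLS recursion it stabilises can be sketched directly; the cache-miss-to-throughput relationship below is a synthetic stand-in:

```python
import numpy as np

# Exponentially-weighted RLS tracking a time-varying linear relationship
# between cache-miss rate and instruction throughput (synthetic data)
rng = np.random.default_rng(6)
lam = 0.98                       # forgetting factor
theta = np.zeros(2)              # estimate of [gain, offset]
P = np.eye(2) * 1000.0           # inverse correlation matrix

for k in range(500):
    misses = rng.uniform(0, 1)
    x = np.array([misses, 1.0])                       # regressor
    ipc = 2.0 - 1.2 * misses + rng.normal(0, 0.05)    # measured throughput
    K = P @ x / (lam + x @ P @ x)                     # gain vector
    theta += K * (ipc - x @ theta)                    # update estimate
    P = (P - np.outer(K, x @ P)) / lam                # update inverse correlation
print(theta)   # should approach [-1.2, 2.0]
```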
Abstract:
Handling information overload online is, from the user's point of view, a big challenge, especially as the number of websites grows rapidly with the growth of e-commerce and other related activities. Personalization based on user needs is the key to solving this problem. Personalization methods help in identifying relevant information that a user may like. User profiles and object profiles are the important elements of a personalization system. When creating user and object profiles, most existing methods adopt two-dimensional similarity measures based on vector or matrix models to find inter-user and inter-object similarity. Moreover, for recommending similar objects to users, personalization systems use user-user, item-item and user-item similarity measures, most commonly Euclidean, Manhattan, cosine and other vector- or matrix-based measures. Web logs are high-dimensional datasets, consisting of multiple users and multiple searches, with many attributes attached to each. Two-dimensional data analysis methods may overlook latent relationships that exist between users and items. In contrast to other studies, this thesis utilises tensors, which are high-dimensional data models, to build user and object profiles and to find the inter-relationships between users and users, and between users and items. To create an improved personalized Web system, this thesis proposes to build three types of profiles, individual user, group user and object profiles, utilising the decomposition factors of tensor data models. A hybrid recommendation approach utilising group profiles (forming the basis of a collaborative filtering method) and object profiles (forming the basis of a content-based method), in conjunction with individual user profiles (forming the basis of a model-based approach), is proposed for making effective recommendations. A tensor-based clustering method is proposed that utilises the outcomes of popular tensor decomposition techniques such as PARAFAC, Tucker and HOSVD to group similar instances. An individual user profile, showing the user's highest interest, is represented by the top dimension values extracted from the component matrix obtained after tensor decomposition. A group profile, showing similar users and their highest interest, is built by clustering similar users based on tensor-decomposed values and is represented by the top association rules (containing various unique object combinations) derived from the searches made by the users of the cluster. An object profile is created to represent similar objects clustered on the basis of their feature similarity. Depending on the category of a user (known, anonymous, or frequent visitor to the website), any of the profiles or their combinations is used for making personalized recommendations. A ranking algorithm is also proposed that utilizes the personalized information to order and rank the recommendations. The proposed methodology is evaluated on data collected from a real-life car website. Empirical analysis confirms the effectiveness of the recommendations made by the proposed approach over other collaborative filtering and content-based recommendation approaches based on two-dimensional data analysis methods.
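The PARAFAC step can be sketched with the tensorly library: decompose a (users x items x sessions) tensor and read each user's dominant latent component as the seed of an individual profile. The tensor shape, rank and data below are illustrative:

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

# Illustrative (users x items x sessions) interaction-count tensor
rng = np.random.default_rng(7)
tensor = tl.tensor(rng.poisson(1.0, size=(20, 30, 5)).astype(float))

weights, factors = parafac(tensor, rank=3)     # CP/PARAFAC decomposition
user_factors = factors[0]                      # 20 x 3 user component matrix
top_component = np.argmax(user_factors, axis=1)
print(top_component)   # dominant latent component per user (profile seed)
```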
Abstract:
A Multimodal Seaport Container Terminal (MSCT) is a complex system which requires careful planning and control in order to operate efficiently. It consists of a number of subsystems that require optimisation of the operations within them, as well as synchronisation of machines and containers between the various subsystems. Inefficiency in the terminal can delay ships from their scheduled timetables, as well as cause delays in delivering containers to their inland destinations, both of which can be very costly to their operators. The purpose of this PhD thesis is to use Operations Research methodologies to optimise and synchronise these subsystems as an integrated application. An initial model is developed for the overall MSCT; however, due to the large number of assumptions that had to be made, as well as other issues, it is found to be too inaccurate and infeasible for practical use. Instead, a method is proposed whereby models are developed for each subsystem and then integrated with one another. Mathematical models are developed for the Storage Area System (SAS) and the Intra-terminal Transportation System (ITTS). The SAS deals with the movement and assignment of containers to stacks within the storage area, both when they arrive and when they are rehandled to retrieve containers below them. The ITTS deals with scheduling the movement of containers and machines between the storage areas and other sections of the terminal, such as the berth and road/rail terminals. Various constructive heuristics are explored and compared for these models to produce good initial solutions for large-sized problems, which are otherwise impractical to solve by exact methods. These initial solutions are further improved through an innovative hyper-heuristic algorithm that integrates the SAS and ITTS solutions and optimises them through meta-heuristic techniques. The method by which the two models interact as an integrated system is discussed, as well as how this method can be extended to the other subsystems of the MSCT.
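As a flavour of the constructive heuristics explored for the SAS, a minimal greedy rule assigns each arriving container to the currently shortest stack; the thesis's heuristics also account for rehandling risk, retrieval times and machine synchronisation, so this is only an illustrative skeleton:

```python
import heapq

# Greedy constructive heuristic sketch: balance stack heights on arrival
def assign_containers(arrivals, n_stacks, max_height):
    heap = [(0, s) for s in range(n_stacks)]   # (current height, stack id)
    heapq.heapify(heap)
    assignment = {}
    for container in arrivals:
        height, stack = heapq.heappop(heap)    # shortest stack first
        if height >= max_height:
            raise RuntimeError("storage area full")
        assignment[container] = stack
        heapq.heappush(heap, (height + 1, stack))
    return assignment

print(assign_containers(["C1", "C2", "C3", "C4"], n_stacks=2, max_height=4))
```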