389 results for computational models


Relevance: 30.00%

Abstract:

Purpose: The management of unruptured aneurysms remains controversial, as treatment carries potentially significant risk for a currently well patient. The decision to treat is based upon aneurysm location, size and abnormal morphology (e.g. bleb formation). A method to predict bleb formation would therefore help stratify patient treatment. Our study aims to investigate possible associations between intra-aneurysmal flow dynamics and bleb formation within intracranial aneurysms. Competing theories on aetiology appear in the literature, and our purpose is to further clarify this issue. Methodology: We obtained 3D rotational angiogram (3DRA) data from 30 patients with cerebral aneurysms and bleb formation. Models representing the aneurysms before bleb formation were reconstructed by digitally removing the bleb, and computational fluid dynamics (CFD) simulations were then run on both the pre- and post-bleb models. Pulsatile flow and standard boundary conditions were imposed. Results: Aneurysmal flow structure, impingement regions, and wall shear stress magnitude and gradients were produced for all models, and correlation of these parameters with bleb formation was sought. Certain CFD parameters show significant inter-patient variability, making statistically significant correlation difficult on the partial data subset obtained so far. Conclusion: CFD models are readily producible from 3DRA data. Preliminary results indicate that bleb formation is related to regions of high wall shear stress and direct flow impingement on the aneurysm wall.

Relevance: 30.00%

Abstract:

Quality-oriented management systems and methods have become the dominant business and governance paradigm. From this perspective, satisfying customers' expectations by supplying reliable, good-quality products and services is the key factor for an organization and even for government. During recent decades, Statistical Quality Control (SQC) methods have been developed as the technical core of quality management and the continuous improvement philosophy, and are now being applied widely to improve the quality of products and services in the industrial and business sectors. Recently, SQC tools, in particular quality control charts, have been used in healthcare surveillance. In some cases, these tools have been modified and developed to better suit the characteristics and needs of the health sector. It seems that some of the work in the healthcare area has evolved independently of the development of industrial statistical process control methods. Analysing and comparing paradigms and the characteristics of quality control charts and techniques across the different sectors therefore presents opportunities for transferring knowledge and for future development in each sector. Meanwhile, the capabilities of the Bayesian approach, particularly Bayesian hierarchical models and computational techniques in which all uncertainty is expressed as a structure of probability, facilitate decision making and cost-effectiveness analyses. This research therefore investigates the use of the quality improvement cycle in a health setting using clinical data from a hospital. The need for clinical data for monitoring purposes is investigated in two respects. A framework and appropriate tools from the industrial context are proposed and applied to evaluate and improve data quality in the available datasets and data flow; a data-capturing algorithm using Bayesian decision-making methods is then developed to determine an economical sample size for statistical analyses within the quality improvement cycle. Having ensured clinical data quality, some characteristics of control charts in the health context are considered, including the necessity of monitoring attribute data and correlated quality characteristics. To this end, multivariate control charts from the industrial context are adapted to monitor radiation delivered to patients undergoing diagnostic coronary angiograms, and various risk-adjusted control charts are constructed and investigated for monitoring binary outcomes of clinical interventions as well as post-intervention survival time. Meanwhile, adoption of a Bayesian approach is proposed as a new framework for estimating the change point following a control chart signal. This estimate aims to facilitate root-cause analysis in the quality improvement cycle, since it narrows the search for potential causes of detected changes to a tighter time frame prior to the signal. The approach yields highly informative estimates of the change point parameters, since the results are obtained as probability distributions. Using Bayesian hierarchical models and Markov chain Monte Carlo computational methods, Bayesian estimators of the time and magnitude of various change scenarios, including step changes, linear trends and multiple changes in a Poisson process, are developed and investigated.
The benefits of change point investigation are revisited and promoted in the monitoring of hospital outcomes, where the developed Bayesian estimator reports the true time of shifts, compared with a priori known causes, detected by control charts monitoring the rate of excess usage of blood products and of major adverse events during and after cardiac surgery at a local hospital. The development of the Bayesian change point estimators is then extended to healthcare surveillance of processes in which the pre-intervention characteristics of patients affect the outcomes. In this setting, the Bayesian estimator is first extended to capture patient mix (covariates) through the risk models underlying risk-adjusted control charts. Variations of the estimator are developed to estimate the true time of step changes and linear trends in the odds ratio of intensive care unit outcomes at a local hospital. Second, the Bayesian estimator is extended to identify the time of a shift in mean survival time after a clinical intervention monitored by risk-adjusted survival time control charts. In this context, the survival time after a clinical intervention is also affected by patient mix, and the survival function is constructed using a survival prediction model. The simulation studies undertaken in each research component, and the results obtained, strongly recommend the developed Bayesian estimators as an alternative for change point estimation within the quality improvement cycle in healthcare surveillance as well as in industrial and business contexts. The superiority of the proposed Bayesian framework and estimators is enhanced when the probability quantification, flexibility and generalizability of the developed models are also considered. The advantages of the Bayesian approach seen here in the general context of quality control may also extend to the industrial and business domains in which quality monitoring was initially developed.
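
The thesis's estimators rely on Bayesian hierarchical models and Markov chain Monte Carlo, which the abstract does not detail. As a minimal, self-contained illustration of the underlying idea for a single step change in a Poisson process, the sketch below computes the exact posterior over the change point by enumeration, marginalising the two rates analytically under conjugate Gamma priors; the priors and the simulated adverse-event counts are hypothetical and chosen purely for illustration, not taken from the thesis.

```python
import numpy as np
from scipy.special import gammaln

def log_marginal(counts, a=1.0, b=1.0):
    """Log marginal likelihood of a Poisson segment with a Gamma(a, b) prior on its rate."""
    s, m = counts.sum(), len(counts)
    return (a * np.log(b) - gammaln(a)
            + gammaln(a + s) - (a + s) * np.log(b + m)
            - gammaln(counts + 1).sum())

def changepoint_posterior(counts):
    """Posterior over the step-change time tau (last index governed by the first rate)."""
    n = len(counts)
    log_post = np.array([log_marginal(counts[:tau]) + log_marginal(counts[tau:])
                         for tau in range(1, n)])
    log_post -= log_post.max()            # stabilise before exponentiating
    post = np.exp(log_post)
    return post / post.sum()

# Hypothetical monthly counts of adverse events with a rate shift after month 20.
rng = np.random.default_rng(0)
y = np.concatenate([rng.poisson(2.0, 20), rng.poisson(5.0, 15)])
post = changepoint_posterior(y)
print("most probable change point:", np.argmax(post) + 1)
```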

Relevance: 30.00%

Abstract:

The use of Bayesian methodologies for solving optimal experimental design problems has increased, but many of these methods have been found to be computationally intensive for design problems that require a large number of design points. This paper presents a simulation-based approach for solving optimal design problems in which one is interested in finding a large number of (near) optimal design points for a small number of design variables. The approach uses lower-dimensional parameterisations, consisting of a few design variables, that generate multiple design points. One therefore has to search over only a few design variables rather than over a large number of optimal design points, which provides substantial computational savings. The methodologies are demonstrated on four applications involving nonlinear models, including the selection of sampling times for pharmacokinetic and heat transfer studies. Several Bayesian design criteria are compared and contrasted, as are several lower-dimensional parameterisation schemes for generating the many design points.
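
As a hedged sketch of the core idea, searching over a lower-dimensional parameterisation that generates many design points, the example below uses a hypothetical exponential-decay response y = exp(-k t), a single design variable alpha that generates 15 sampling times through a power-rule schedule, and a Monte Carlo approximation of a pseudo-Bayesian D-optimality utility over prior draws of k. The model, prior, schedule and noise level are all assumptions for illustration, not the designs or criteria of the paper.

```python
import numpy as np

def schedule(alpha, n=15, t_max=24.0):
    """Lower-dimensional parameterisation: one variable (alpha) generates n sampling times."""
    i = np.arange(1, n + 1)
    return t_max * (i / n) ** alpha

def expected_utility(alpha, k_draws, sigma=0.1):
    """Pseudo-Bayesian D-optimality for a hypothetical exponential-decay model y = exp(-k t)."""
    t = schedule(alpha)
    util = 0.0
    for k in k_draws:
        dydk = -t * np.exp(-k * t)                # sensitivity of the response to k
        fisher = (dydk ** 2).sum() / sigma ** 2   # scalar Fisher information for k
        util += np.log(fisher)
    return util / len(k_draws)

rng = np.random.default_rng(1)
k_draws = rng.lognormal(mean=np.log(0.2), sigma=0.3, size=500)   # prior draws for k
alphas = np.linspace(0.2, 3.0, 50)                               # search over ONE design variable
best = max(alphas, key=lambda a: expected_utility(a, k_draws))
print("near-optimal alpha:", best, "->", np.round(schedule(best), 2))
```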

Relevance: 30.00%

Abstract:

Process mining is the research area concerned with knowledge discovery from information system event logs. Within process mining, two prominent tasks can be discerned. First, process discovery deals with the automatic construction of a process model from an event log. Second, conformance checking focuses on assessing the quality of a discovered or designed process model with respect to the actual behavior captured in event logs. To this end, multiple techniques and metrics have been developed and described in the literature. However, the process mining domain still lacks a comprehensive framework for assessing the goodness of a process model from a quantitative perspective. In this study, we describe the architecture of an extensible framework within ProM that allows for the consistent, comparative and repeatable calculation of conformance metrics. Such a framework is of great value for the development and assessment of both process discovery and conformance techniques.
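
The framework described here lives inside ProM and the abstract gives no code-level detail; purely to illustrate the kind of quantity a conformance metric computes, the toy sketch below scores an event log against a hypothetical process model expressed as allowed directly-follows pairs. Real conformance techniques replay traces on Petri nets and report richer fitness, precision and generalization measures; the model and log here are invented.

```python
from typing import List, Set, Tuple

# Hypothetical process model expressed as allowed directly-follows pairs.
MODEL: Set[Tuple[str, str]] = {
    ("register", "check"), ("check", "approve"),
    ("check", "reject"), ("approve", "archive"), ("reject", "archive"),
}

def trace_fitness(trace: List[str], model: Set[Tuple[str, str]]) -> float:
    """Fraction of directly-follows steps in the trace that the model allows."""
    steps = list(zip(trace, trace[1:]))
    if not steps:
        return 1.0
    return sum(step in model for step in steps) / len(steps)

def log_fitness(log: List[List[str]], model: Set[Tuple[str, str]]) -> float:
    """Average trace fitness over the whole event log."""
    return sum(trace_fitness(t, model) for t in log) / len(log)

event_log = [
    ["register", "check", "approve", "archive"],
    ["register", "check", "reject", "archive"],
    ["register", "approve", "archive"],   # skips the mandated check step
]
print(f"log fitness: {log_fitness(event_log, MODEL):.2f}")
```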

Relevance: 30.00%

Abstract:

The shoulder joint is a complex integration of soft and hard tissues. It plays an important role in performing daily activities and can be considered a perfect compromise between mobility and stability. However, the shoulder is vulnerable to complications such as dislocation and osteoarthritis. Finite element (FE) models have been developed to understand shoulder injury mechanisms and the implications of disease for the shoulder complex, and to assess the quality of shoulder implants. A few FE shoulder models have also been used to answer important clinical questions, such as how a normal shoulder joint differs from an osteoarthritic one. However, in the absence of experimental validation, it is questionable whether the constitutive models applied in these FE models adequately represent the mechanical behavior of the shoulder elements (cartilage, ligaments, muscles, etc.), and therefore how much confidence can be placed in current models when answering clinically relevant questions. The main objective of this review is to critically evaluate the existing FE shoulder models that have been used to investigate clinical problems. Particular attention is given to the adequacy of the constitutive models of the shoulder elements for drawing clinically relevant conclusions. Suggestions are given for improving the existing shoulder models by including adequate constitutive models of the shoulder elements so that clinically relevant questions can be answered with confidence.

Relevance: 30.00%

Abstract:

Motivation: Unravelling the genetic architecture of complex traits requires large amounts of data, sophisticated models and large computational resources. The lack of user-friendly software incorporating all of these requisites is delaying progress in the analysis of complex traits. Methods: Linkage disequilibrium and linkage analysis (LDLA) is a high-resolution gene-mapping approach based on sophisticated mixed linear models and applicable to any population structure. LDLA can use population history information in addition to pedigree and molecular markers to decompose traits into genetic components. Analyses are distributed in parallel over a large public grid of computers in the UK. Results: We have demonstrated the performance of LDLA with analyses of simulated data. There are real gains in statistical power to detect quantitative trait loci when historical information is used compared with traditional linkage analysis. Moreover, the use of a grid of computers significantly increases computational speed, allowing analyses that would have been prohibitive on a single computer. © The Author 2009. Published by Oxford University Press. All rights reserved.

Relevance: 30.00%

Abstract:

Travelling wave phenomena are observed in many biological applications. The mathematical theory of standard reaction-diffusion problems shows that simple partial differential equations exhibit travelling wave solutions with constant wavespeed, and such models are used to describe, for example, waves of chemical concentration, electrical signals, cell migration, waves of epidemics and population dynamics. However, as in the study of cell motion in complex spatial geometries, experimental data are often not consistent with a constant wavespeed. Non-local spatial models have successfully been used to model anomalous diffusion and spatial heterogeneity in different physical contexts. In this paper, we develop a fractional model based on the Fisher-Kolmogoroff equation and analyse its wavespeed properties, attempting to relate the numerical results obtained from our simulations to experimental data describing enteric neural crest-derived cells migrating along the intact gut of mouse embryos. The proposed model essentially combines fractional and standard diffusion in different regions of the spatial domain and qualitatively reproduces the behaviour of neural crest-derived cells observed in the caecum and the hindgut of mouse embryos during in vivo experiments.
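
For reference, the standard Fisher-Kolmogoroff equation, with diffusivity D and growth rate r, and its minimum constant wavespeed are

\[
\frac{\partial u}{\partial t} = D\,\frac{\partial^2 u}{\partial x^2} + r\,u(1-u), \qquad c_{\min} = 2\sqrt{rD}.
\]

A space-fractional generalisation of the kind described here replaces the second derivative with a fractional operator of order 1 < α ≤ 2, for example the Riesz form

\[
\frac{\partial u}{\partial t} = D_\alpha\,\frac{\partial^{\alpha} u}{\partial |x|^{\alpha}} + r\,u(1-u),
\]

for which constant-speed fronts are generally lost. The specific fractional operator and the way fractional and standard diffusion are coupled across regions in the authors' model are not given in the abstract, so this form is indicative only.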

Relevance: 30.00%

Abstract:

This paper illustrates the use of the finite element (FE) technique to investigate the behaviour of laminated glass (LG) panels under blast loads. Two- and three-dimensional (2D and 3D) modelling approaches available in the LS-DYNA FE code for modelling LG panels are presented. Results from the FE analysis for mid-span deflection and principal stresses compared well with those from large-deflection plate theory. The FE models are further validated using the results of a free-field blast test on an LG panel. Both the 2D and the 3D LG models predict the experimental results with reasonable accuracy; the 3D LG models give slightly more accurate results but require considerably more computational time than the 2D LG models.

Relevance: 30.00%

Abstract:

This paper evaluates the efficiency of a number of popular corpus-based distributional models in performing literature-based discovery on very large document sets, including online collections. Literature-based discovery is the process of identifying previously unknown connections from text, often published literature, that could lead to the development of new techniques or technologies. It has attracted growing research interest ever since Swanson's serendipitous discovery of the therapeutic effects of fish oil on Raynaud's disease in 1986. The successful application of distributional models in automating the identification of the indirect associations underpinning literature-based discovery has been demonstrated extensively in the medical domain. However, we wish to investigate the computational complexity of distributional models for literature-based discovery on much larger document collections, as they may provide computationally tractable solutions to tasks such as predicting future disruptive innovations. In this paper we perform a computational complexity analysis of four successful corpus-based distributional models to evaluate their fit for such tasks. Our results indicate that corpus-based distributional models that store their representations in fixed dimensions provide superior efficiency on literature-based discovery tasks.
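
As a hedged, toy illustration of how a fixed-dimension distributional model supports the indirect (A-B-C) associations of the Swanson kind, the sketch below uses a random-indexing-style representation: each term gets a fixed-dimension random index vector, context vectors accumulate the index vectors of co-occurring terms, and an indirect association is scored by cosine similarity between terms that never co-occur directly. The corpus, dimensionality and scoring are invented for illustration and are not the four models analysed in the paper.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
DIM = 64   # fixed dimensionality, independent of vocabulary and corpus size

# Toy corpus; in literature-based discovery these would be titles/abstracts.
docs = [
    "fish oil reduces blood viscosity",
    "blood viscosity is elevated in raynaud disease",
    "dietary fish oil lowers platelet aggregation",
    "platelet aggregation contributes to raynaud disease",
]

vocab = sorted({w for d in docs for w in d.split()})
index = {w: rng.choice([-1.0, 1.0], DIM) for w in vocab}   # random index vectors
context = {w: np.zeros(DIM) for w in vocab}

# Accumulate fixed-dimension context vectors from document co-occurrence.
for d in docs:
    for a, b in combinations(set(d.split()), 2):
        context[a] += index[b]
        context[b] += index[a]

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Indirect (A-C) association between two terms that never co-occur directly.
print("fish ~ raynaud:", round(cosine(context["fish"], context["raynaud"]), 3))
```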

Relevance: 30.00%

Abstract:

Background The management of unruptured aneurysms is controversial, with the decision to treat influenced by aneurysm characteristics including size and morphology. Aneurysmal bleb formation is thought to be associated with an increased risk of rupture. Objective To correlate computational fluid dynamic (CFD) indices with bleb formation. Methods Anatomical models were constructed from three-dimensional rotational angiogram (3DRA) data in 27 patients with cerebral aneurysms harbouring single blebs. Additional models representing each aneurysm before bleb formation were constructed by digitally removing the bleb. We characterised the haemodynamic features of the models both with and without the bleb using CFD. Flow structure, wall shear stress (WSS), pressure and oscillatory shear index (OSI) were analysed. Results There was a statistically significant association between bleb location and the point of maximal WSS, with blebs forming at or adjacent to this point in 74.1% of cases (p=0.019), irrespective of rupture status. Aneurysmal blebs were related to the inflow or outflow jet in 88.9% of cases (p<0.001), while 11.1% were unrelated. Maximal wall pressure and OSI were not significantly related to bleb location. The bleb region attained a lower WSS following its formation in 96.3% of cases (p<0.001), and this WSS was also lower than the average aneurysm WSS in 86% of cases (p<0.001). Conclusion Cerebral aneurysm blebs generally form at or adjacent to the point of maximal WSS and are aligned with major flow structures. Wall pressure and OSI do not contribute to determining bleb location. The measurement of WSS using CFD models may potentially predict bleb formation and thus improve the assessment of rupture risk in unruptured aneurysms.

Relevance: 30.00%

Abstract:

Flow-induced shear stress plays an important role in regulating cell growth and distribution in scaffolds. This study sought to correlate wall shear stress and chondrocyte activity for the engineering design of micro-porous osteochondral grafts, based on the hypothesis that it is possible to capture, and discriminate between, the transmitted force and the cell response at the inner irregularities. Unlike common tissue engineering therapies with perfusion bioreactors, in which flow-mediated stress is the controlling parameter, this work treated the associated stress as a function of porosity in order to influence the in vitro proliferation of chondrocytes. A D-optimality criterion was used to accommodate three pore characteristics for appraisal in a mixed-level fractional design of experiments (DOE): pore size (4 levels), distribution pattern (2 levels) and density (3 levels). Micro-porous scaffolds (n=12) were fabricated according to the DOE using rapid prototyping of an acrylic-based bio-photopolymer. Computational fluid dynamics (CFD) models were created correspondingly and run with idealized boundary conditions and a Newtonian fluid domain to simulate the dynamic microenvironment inside the pores. In vitro conditions were reproduced for the 3D-printed constructs, which were seeded with high pellet densities of human chondrocytes and cultured for 72 hours. The results showed that cell proliferation differed significantly between the constructs (p<0.05). An inlet fluid velocity of 3 × 10^-2 mm/s and an average shear stress of 5.65 × 10^-2 Pa corresponded with increased cell proliferation for scaffolds with smaller pores in a hexagonal pattern and lower densities. Although the analytical solution for Poiseuille flow inside the pores was found insufficient to describe the flow profile, probably due to turbulence induced by the outside flow, it showed that the shear stress would increase with cell growth and decrease with pore size. This correlation provides a basis for determining the relation between the induced stress and chondrocyte activity in order to optimize the microfabrication of engineered cartilaginous constructs.
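
For context, the analytical Poiseuille reference mentioned above gives, for fully developed laminar flow of a fluid with dynamic viscosity μ and mean velocity ū through a circular pore of radius R (diameter d), a wall shear stress of

\[
\tau_w = \frac{4\mu\,\bar{u}}{R} = \frac{8\mu\,\bar{u}}{d},
\]

so that, at a given mean velocity, shear stress increases as the pore radius decreases, consistent with the trend reported above. The pore dimensions and medium viscosity needed to reproduce the reported 5.65 × 10^-2 Pa are not given in the abstract, so no numerical check is attempted here.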

Relevance: 30.00%

Abstract:

Carbon nanotubes with specific nitrogen doping are proposed for controllable, highly selective and reversible CO2 capture. Using density functional theory incorporating long-range dispersion corrections, we investigated the adsorption behavior of CO2 on (7,7) single-walled carbon nanotubes (CNTs) with several nitrogen doping configurations and varying charge states. Pyridinic-nitrogen incorporation in CNTs is found to increase the CO2 adsorption strength with electron injection, leading to highly selective CO2 adsorption in comparison with N2. This functionality could enable intrinsically reversible CO2 adsorption, as capture and release can be controlled by switching the charge-carrying state of the system on and off. The phenomenon is verified for a number of different models and theoretical methods, with clear ramifications for implementation with a broader class of graphene-based materials. A scheme for implementing this remarkable reversible electrocatalytic CO2-capture phenomenon is considered.

Relevance: 30.00%

Abstract:

MapReduce is a computation model for processing large data sets in parallel on large clusters of machines in a reliable, fault-tolerant manner. A MapReduce computation is broken down into a number of map tasks and reduce tasks, which are performed by so-called mappers and reducers, respectively. The placement of the mappers and reducers on the machines directly affects the performance and cost of the MapReduce computation in cloud computing. From a computational point of view, the mapper/reducer placement problem is a generalization of the classical bin packing problem, which is NP-complete. In this paper we therefore propose a new heuristic algorithm for the mapper/reducer placement problem in cloud computing and evaluate it against several other heuristics on solution quality and computation time by solving a set of test problems with various characteristics. The computational results show that our heuristic algorithm is much more efficient than the other heuristics and can obtain a better solution in a reasonable time. Furthermore, we verify the effectiveness of our heuristic algorithm by comparing the mapper/reducer placement it generates for a benchmark problem with a conventional placement that puts a fixed number of mappers/reducers on each machine. The comparison shows that the computation using our mapper/reducer placement is much cheaper than the computation using the conventional placement while still satisfying the computation deadline.
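
The paper's own heuristic is not described in the abstract. To make the bin-packing connection concrete, the sketch below applies a classical first-fit-decreasing placement of mapper/reducer tasks with hypothetical resource demands onto identical machines, opening a new machine only when no already-used machine has room. This is a generic baseline, not the authors' algorithm, and it ignores the deadline and pricing model the paper considers.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Machine:
    capacity: float                     # resource units available on the machine
    load: float = 0.0
    tasks: List[str] = field(default_factory=list)

def place_tasks(demands: Dict[str, float], capacity: float) -> List[Machine]:
    """First-fit-decreasing placement of mapper/reducer tasks onto identical machines."""
    machines: List[Machine] = []
    for name, demand in sorted(demands.items(), key=lambda kv: kv[1], reverse=True):
        for m in machines:
            if m.load + demand <= capacity:   # first machine with enough room
                m.load += demand
                m.tasks.append(name)
                break
        else:                                 # no room anywhere: rent a new machine
            machines.append(Machine(capacity, demand, [name]))
    return machines

demands = {"map-1": 0.6, "map-2": 0.5, "map-3": 0.4, "reduce-1": 0.7, "reduce-2": 0.3}
for i, m in enumerate(place_tasks(demands, capacity=1.0)):
    print(f"machine {i}: {m.tasks} (load {m.load:.1f})")
```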

Relevance: 30.00%

Abstract:

Many model-based investigation techniques, such as sensitivity analysis, optimization and statistical inference, require a large number of model evaluations to be performed at different input and/or parameter values. This limits the application of these techniques to models that can be implemented in computationally efficient computer codes. Emulators, by providing efficient interpolation between the outputs of deterministic simulation models, can considerably extend the range of applicability of such computationally demanding techniques. So far, the dominant technique for developing emulators has been to use priors in the form of Gaussian stochastic processes (GASP) conditioned on a design data set of inputs and corresponding model outputs. In the context of dynamic models, this approach has two essential disadvantages: (i) these emulators do not consider our knowledge of the structure of the model, and (ii) they run into numerical difficulties if there are a large number of closely spaced input points, as is often the case in the time dimension of dynamic models. To address both of these problems, a new concept for developing emulators for dynamic models is proposed. This concept is based on a prior that combines a simplified linear state-space model of the temporal evolution of the dynamic model with Gaussian stochastic processes for the innovation terms as functions of model parameters and/or inputs. These innovation terms are intended to correct the error of the linear model at each output step. Conditioning this prior on the design data set is done by Kalman smoothing. This leads to an efficient emulator that, because it incorporates our knowledge of the dominant mechanisms built into the simulation model, can be expected to outperform purely statistical emulators, at least when the design data set is small. The feasibility and potential difficulties of the proposed approach are demonstrated by application to a simple hydrological model.
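
The full emulator combines a linear state-space prior with Gaussian process innovation terms conditioned on design data by Kalman smoothing, which is beyond a short sketch. Purely to illustrate the Kalman-smoothing step on which that conditioning rests, the scalar example below filters and then RTS-smooths a hypothetical design-data output series under an AR(1) state prior; the state model, noise variances and data are assumptions, and the Gaussian process innovation terms of the proposed method are omitted entirely.

```python
import numpy as np

def kalman_smooth(y, a, q, r, m0=0.0, p0=1.0):
    """Scalar Kalman filter + RTS smoother for x_t = a x_{t-1} + w_t, y_t = x_t + v_t."""
    n = len(y)
    m_f, p_f = np.zeros(n), np.zeros(n)          # filtered means / variances
    m_pred, p_pred = m0, p0
    for t in range(n):
        k = p_pred / (p_pred + r)                # Kalman gain
        m_f[t] = m_pred + k * (y[t] - m_pred)
        p_f[t] = (1 - k) * p_pred
        m_pred, p_pred = a * m_f[t], a**2 * p_f[t] + q
    m_s, p_s = m_f.copy(), p_f.copy()            # RTS backward (smoothing) pass
    for t in range(n - 2, -1, -1):
        p_pred = a**2 * p_f[t] + q
        g = a * p_f[t] / p_pred
        m_s[t] = m_f[t] + g * (m_s[t + 1] - a * m_f[t])
        p_s[t] = p_f[t] + g**2 * (p_s[t + 1] - p_pred)
    return m_s, p_s

# Hypothetical design-data output of a dynamic simulation model at 50 time steps.
rng = np.random.default_rng(3)
t = np.linspace(0, 5, 50)
y = np.exp(-0.5 * t) + rng.normal(0, 0.02, t.size)
mean, var = kalman_smooth(y, a=0.95, q=1e-3, r=4e-4)
print("smoothed output at final step:", round(mean[-1], 3), "+/-", round(var[-1]**0.5, 3))
```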