21 results for MARKOV MODEL
in QUB Research Portal - Research Directory and Institutional Repository for Queen's University Belfast
Abstract:
Quality of care is an important aspect of healthcare monitoring, used to ensure that the healthcare system is delivering care of the highest standard. With populations growing older, there is increased urgency in ensuring that the healthcare delivered meets this standard, and providers are under mounting pressure from public and government expectations of a high-quality healthcare system. Quality of care is difficult to model and measure because it can be defined in many ways. This paper introduces a potential model for taking quality of care into account when modelling length of stay. The Coxian phase-type distribution is used to model length of stay, and the associated quality of care is incorporated into the Coxian using a hidden Markov model. Covariates are also introduced to determine their impact on the hidden level and so identify what can potentially affect quality of care. The model is applied to geriatric patient data from the Lombardy region of Italy. The results highlight that bed numbers and the type of hospital (public or private) can affect the quality of care delivered.
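For reference, the Coxian phase-type distribution that recurs throughout these abstracts has a standard textbook form; the sketch below uses our own generic notation (onward rates lambda_i, absorption rates mu_i), not this paper's fitted values.

```latex
% Coxian phase-type with n transient phases, always entered at phase 1.
% From phase i the process moves to phase i+1 at rate \lambda_i or is
% absorbed (e.g. discharge) at rate \mu_i; duration is time to absorption.
\[
S = \begin{pmatrix}
-(\lambda_1+\mu_1) & \lambda_1 & & \\
 & -(\lambda_2+\mu_2) & \lambda_2 & \\
 & & \ddots & \ddots \\
 & & & -\mu_n
\end{pmatrix},
\qquad \boldsymbol{\alpha} = (1,0,\dots,0)
\]
\[
f(t) = \boldsymbol{\alpha}\, e^{St}\, \mathbf{s},
\qquad \mathbf{s} = -S\mathbf{1} = (\mu_1,\dots,\mu_n)^{\top}
\]
```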
Abstract:
In this paper, a novel and effective lip-based biometric identification approach with the Discrete Hidden Markov Model Kernel (DHMMK) is developed. Lips are described by shape features (both geometrical and sequential) on two different grid layouts: rectangular and polar. These features are then specifically modelled by a DHMMK and learnt by a support vector machine classifier. Our experiments are carried out using ten-fold cross-validation on three different datasets: the GPDS-ULPGC Face Dataset, the PIE Face Dataset and the RaFD Face Dataset. Results show that our approach achieved average classification accuracies of 99.8%, 97.13% and 98.10% on these three datasets, respectively, using only two training images per class. Our comparative studies further show that the DHMMK achieved a 53% improvement over the baseline HMM approach. The comparative ROC curves also confirm the efficacy of the proposed lip-contour-based biometrics learned by DHMMK. We also show that the performance of linear and RBF SVMs is comparable under the DHMMK framework.
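As a rough sketch of this kind of pipeline (per-class discrete-HMM scores feeding an SVM), the Python below implements a plain scaled forward algorithm and a likelihood-space representation; the feature coding, training procedure and kernel form are our assumptions, not the authors' exact DHMMK construction.

```python
import numpy as np
from sklearn.svm import SVC

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM
    (scaled forward algorithm). pi: (n,), A: (n, n), B: (n, m)."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        s = alpha.sum()
        loglik += np.log(s)
        alpha /= s
    return loglik

def hmm_score_features(sequences, hmms):
    """Map each symbol sequence to a vector of per-class HMM log-likelihoods."""
    return np.array([[forward_loglik(seq, *h) for h in hmms]
                     for seq in sequences])

# hmms: one (pi, A, B) triple per class, trained elsewhere (e.g. Baum-Welch);
# train_seqs / test_seqs: lists of integer-coded lip-contour feature sequences.
# An RBF SVM is then trained on the likelihood-space representation:
# X_train = hmm_score_features(train_seqs, hmms)
# clf = SVC(kernel="rbf").fit(X_train, y_train)
```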
Abstract:
The aim of this paper is to use Markov modelling to investigate survival for particular types of kidney patients in relation to their exposure to anti-hypertensive treatment drugs. In order to monitor kidney function, an intuitive three-point assessment is proposed through the collection of blood samples from Chronic Kidney Disease patients in Northern Ireland. A five-state Markov model was devised using specific transition probabilities for males and females over all age groups. These transition probabilities were then adjusted appropriately using relative risk scores for the event of death for different subgroups of patients. The model was built using the TreeAge software package in order to explore the effects of anti-hypertensive drugs on patients.
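The sketch below illustrates this kind of relative-risk adjustment of a five-state transition matrix; the state labels, baseline probabilities and relative-risk value are invented placeholders, not the paper's TreeAge estimates.

```python
import numpy as np

states = ["CKD stage 3", "CKD stage 4", "CKD stage 5", "Dialysis", "Death"]

# Baseline annual transition matrix (rows sum to 1); Death is absorbing.
P = np.array([
    [0.85, 0.10, 0.02, 0.01, 0.02],
    [0.00, 0.80, 0.12, 0.04, 0.04],
    [0.00, 0.00, 0.70, 0.20, 0.10],
    [0.00, 0.00, 0.00, 0.85, 0.15],
    [0.00, 0.00, 0.00, 0.00, 1.00],
])

def apply_relative_risk(P, rr, death=4):
    """Scale each transition-to-death probability by a relative risk,
    then renormalise the remaining probabilities in that row."""
    Q = P.copy()
    for i in range(len(Q) - 1):          # skip the absorbing Death row
        d = min(Q[i, death] * rr, 1.0)
        rest = 1.0 - Q[i, death]
        Q[i, :] *= (1.0 - d) / rest      # rescale the non-death mass
        Q[i, death] = d
    return Q

P_treated = apply_relative_risk(P, rr=0.8)   # hypothetical treatment benefit
# 10-year survival for a cohort starting in CKD stage 3:
dist = np.zeros(5); dist[0] = 1.0
for _ in range(10):
    dist = dist @ P_treated
print("P(alive at 10 years) =", 1.0 - dist[-1])
```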
Abstract:
The number of hospital admissions in England due to heart failure is projected to increase by over 50% during the next 25 years. This will place greater pressure on hospital managers to allocate resources effectively. A reliable indicator of the quantity of resources consumed by hospital patients is their length of stay (LOS) in care. This paper proposes modelling the length of time heart failure patients spend in hospital using a special type of Markov model, where the flow of patients through hospital can be thought of as consisting of three stages of care: short-, medium- and longer-term. If it is assumed that new admissions to the ward are replacements for discharges, such a model may be used to investigate the case-mix of patients in hospital and the expected patient turnover during a specified period of time. An example is illustrated using admissions to a Belfast hospital in Northern Ireland between 2000 and 2004.
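A minimal sketch of such a three-stage model, under the stated assumption that every discharge is immediately replaced by a new admission, might look as follows; all daily probabilities are invented for illustration.

```python
import numpy as np

# Daily probabilities: move to the next stage of care, or be discharged.
# Stage:            short   medium  long
p_next      = np.array([0.10, 0.03, 0.00])   # progress to next stage
p_discharge = np.array([0.15, 0.05, 0.02])   # leave hospital

# One-day transition matrix over (short, medium, long, discharged).
P = np.array([
    [1 - p_next[0] - p_discharge[0], p_next[0], 0.0,  p_discharge[0]],
    [0.0, 1 - p_next[1] - p_discharge[1], p_next[1],  p_discharge[1]],
    [0.0, 0.0, 1 - p_discharge[2],                    p_discharge[2]],
    [1.0, 0.0, 0.0, 0.0],   # each discharge replaced by a new admission
])

# Long-run case-mix: stationary distribution of P (left eigenvector for 1).
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))]); pi /= pi.sum()
print("case-mix (short, medium, long, turnover):", np.round(pi, 3))
# pi[3] is the long-run probability of a discharge-and-replacement event
# per bed per day, i.e. the expected patient turnover rate.
```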
Abstract:
Coxian phase-type distributions are a special type of Markov model that describes duration until an event occurs in terms of a process consisting of a sequence of latent phases. This paper considers the use of Coxian phase-type distributions for modelling patient duration of stay for the elderly in hospital and investigates the potential for using the resulting distribution as a classifying variable to identify common characteristics between different groups of patients according to their (anticipated) length of stay in hospital. The identification of common characteristics for patient length of stay groups would offer hospital managers and clinicians possible insights into the overall management and bed allocation of the hospital wards.
Abstract:
Coxian phase-type distributions are a special type of Markov model that can be used to represent survival times in terms of phases through which an individual may progress until they eventually leave the system completely. Previous research has considered the Coxian phase-type distribution to be ideal in representing patient survival in hospital. However, problems exist in fitting the distributions. This paper investigates the problems that arise with the fitting process by simulating various Coxian phase-type models for the representation of patient survival and examining the estimated parameter values and eigenvalues obtained. The results indicate that numerical methods previously used for fitting the model parameters do not always converge. An alternative technique is therefore considered. All methods are influenced by the choice of initial parameter values. The investigation uses a data set of 1439 elderly patients and models their survival time, the length of time they spend in a UK hospital.
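The sketch below shows one way such a fit can be attempted and its eigenvalues inspected, here for a two-phase Coxian with a log-parameterisation and a Nelder-Mead optimiser of our own choosing; it is not the paper's fitting code, and the data are simulated stand-ins.

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

def coxian_neg_loglik(log_rates, t):
    lam1, mu1, mu2 = np.exp(log_rates)       # log-parameterised: rates > 0
    S = np.array([[-(lam1 + mu1), lam1],
                  [0.0,           -mu2]])
    s = -S @ np.ones(2)                       # exit (absorption) rates
    alpha = np.array([1.0, 0.0])              # always start in phase 1
    dens = np.array([alpha @ expm(S * ti) @ s for ti in t])
    return -np.sum(np.log(dens))

rng = np.random.default_rng(0)
t = rng.exponential(10.0, size=200)           # stand-in for LOS data (days)

res = minimize(coxian_neg_loglik, x0=np.log([0.2, 0.1, 0.05]), args=(t,),
               method="Nelder-Mead")
lam1, mu1, mu2 = np.exp(res.x)
S = np.array([[-(lam1 + mu1), lam1], [0.0, -mu2]])
# Eigenvalues of S must have negative real parts; near-equal eigenvalues
# signal the ill-conditioning / convergence problems discussed above.
print("converged:", res.success, "eigenvalues:", np.linalg.eigvals(S))
```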
Abstract:
In this paper, we present a new approach to visual speech recognition which improves contextual modelling by combining Inter-Frame Dependent and Hidden Markov Models. This approach captures contextual information in visual speech that may be lost using a Hidden Markov Model alone. We apply contextual modelling to a large speaker-independent isolated-digit recognition task and compare our approach to two commonly adopted feature-based techniques for incorporating speech dynamics. Results are presented for baseline feature-based systems and for the combined modelling technique. We show that both techniques achieve similar levels of performance when used independently; however, significant improvements can be achieved by combining the two. In particular, we report an improvement in excess of 17% in relative Word Error Rate over our best baseline system.
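One common feature-based technique for incorporating speech dynamics, of the general kind the paper compares against, is regression-based delta coefficients; a sketch follows (the window length is a typical choice, not taken from the paper).

```python
import numpy as np

def delta(features, w=2):
    """Regression-based delta coefficients over a (T, D) feature matrix."""
    T = len(features)
    padded = np.pad(features, ((w, w), (0, 0)), mode="edge")
    denom = 2 * sum(k * k for k in range(1, w + 1))
    return np.array([
        sum(k * (padded[t + w + k] - padded[t + w - k])
            for k in range(1, w + 1)) / denom
        for t in range(T)
    ])

# Static lip-shape features and their deltas are then concatenated frame-wise:
# X = np.hstack([feats, delta(feats)])
```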
Abstract:
Baited cameras are often used for abundance estimation wherever alternative techniques are precluded, e.g. in abyssal systems and areas such as reefs. This method has thus far used models of the arrival process that are deterministic and, therefore, permit no estimate of precision. Furthermore, errors due to multiple counting of fish and missing those not seen by the camera have restricted the technique to using only the time of first arrival, leaving a lot of data redundant. Here, we reformulate the arrival process using a stochastic model, which allows the precision of abundance estimates to be quantified. Assuming a non-gregarious, cross-current-scavenging fish, we show that prediction of abundance from first arrival time is extremely uncertain. Using example data, we show that simple regression-based prediction from the initial (rising) slope of numbers at the bait gives good precision, accepting certain assumptions. The most precise abundance estimates were obtained by including the declining phase of the time series, using a simple model of departures, and taking account of scavengers beyond the camera's view, using a hidden Markov model.
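A toy version of the regression-on-rising-slope estimate might look as follows; the counts and encounter-rate constant are invented, and the paper's stochastic model is considerably richer.

```python
import numpy as np

# counts[i] = number of fish visible at the bait at minute i (rising phase).
minutes = np.arange(1, 16)
counts = np.array([0, 1, 1, 2, 3, 3, 4, 5, 5, 6, 7, 7, 8, 9, 9])

# If fish arrive independently at a per-capita encounter rate k, the early
# rise is approximately linear: E[N(t)] ~ (k * density) * t.
slope = np.polyfit(minutes, counts, 1)[0]    # fish per minute

k = 0.12   # hypothetical per-unit-density arrival rate (odour-plume model)
density_hat = slope / k
print(f"initial slope = {slope:.2f} fish/min -> density estimate {density_hat:.1f}")
```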
Abstract:
This paper presents a new algorithm for learning the structure of a special type of Bayesian network. The conditional phase-type (C-Ph) distribution is a Bayesian network that models the probabilistic causal relationships between a skewed continuous variable, modelled by the Coxian phase-type distribution (a special type of Markov model), and a set of interacting discrete variables. The algorithm takes a dataset as input and produces the structure, parameters and graphical representations of the fit of the C-Ph distribution as output. The algorithm, which uses a greedy-search technique and has been implemented in MATLAB, is evaluated using a simulated dataset consisting of 20,000 cases. The results show that the original C-Ph distribution is recaptured; the fit of the network to the data is also discussed.
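The greedy-search component can be pictured as a generic hill-climbing skeleton; the C-Ph-specific moves and scoring function are abstracted away here as placeholders, so this is a shape, not the paper's MATLAB implementation.

```python
def greedy_search(initial, neighbours, score):
    """Repeatedly apply the single structural change that most improves
    the score. `neighbours` yields candidate structures (e.g. add/remove
    an arc or phase); `score` measures fit (e.g. penalised likelihood)."""
    current, best = initial, score(initial)
    improved = True
    while improved:
        improved = False
        for cand in neighbours(current):
            s = score(cand)
            if s > best:
                current, best, improved = cand, s, True
    return current, best
```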
Abstract:
BACKGROUND: Age-related macular degeneration is the most common cause of sight impairment in the UK. In neovascular age-related macular degeneration (nAMD), vision worsens rapidly (over weeks) due to abnormal blood vessels developing that leak fluid and blood at the macula.
OBJECTIVES: To determine the optimal role of optical coherence tomography (OCT) in diagnosing people newly presenting with suspected nAMD and monitoring those previously diagnosed with the disease.
DATA SOURCES: Databases searched: MEDLINE (1946 to March 2013), MEDLINE In-Process & Other Non-Indexed Citations (March 2013), EMBASE (1988 to March 2013), Biosciences Information Service (1995 to March 2013), Science Citation Index (1995 to March 2013), The Cochrane Library (Issue 2 2013), Database of Abstracts of Reviews of Effects (inception to March 2013), Medion (inception to March 2013), Health Technology Assessment database (inception to March 2013).
REVIEW METHODS: Types of studies: direct/indirect studies reporting diagnostic outcomes.
INDEX TEST: time domain optical coherence tomography (TD-OCT) or spectral domain optical coherence tomography (SD-OCT).
COMPARATORS: clinical evaluation, visual acuity, Amsler grid, colour fundus photographs, infrared reflectance, red-free images/blue reflectance, fundus autofluorescence imaging, indocyanine green angiography, preferential hyperacuity perimetry, microperimetry. Reference standard: fundus fluorescein angiography (FFA). Risk of bias was assessed using quality assessment of diagnostic accuracy studies, version 2. Meta-analysis models were fitted using hierarchical summary receiver operating characteristic curves. A Markov model was developed (65-year-old cohort, nAMD prevalence 70%), with nine strategies for diagnosis and/or monitoring, and cost-utility analysis conducted. NHS and Personal Social Services perspective was adopted. Costs (2011/12 prices) and quality-adjusted life-years (QALYs) were discounted (3.5%). Deterministic and probabilistic sensitivity analyses were performed.
RESULTS: In pooled estimates of diagnostic studies (all TD-OCT), sensitivity and specificity [95% confidence interval (CI)] were 88% (46% to 98%) and 78% (64% to 88%) respectively. For monitoring, the pooled sensitivity and specificity (95% CI) were 85% (72% to 93%) and 48% (30% to 67%) respectively. The strategy of FFA for diagnosis with nurse-technician-led monitoring had the lowest cost (£39,769; QALYs 10.473) and dominated all others except FFA for diagnosis with ophthalmologist-led monitoring (£44,649; QALYs 10.575; incremental cost-effectiveness ratio £47,768). The least costly strategy had a 46.4% probability of being cost-effective at the £30,000 willingness-to-pay threshold.
LIMITATIONS: Very few studies provided sufficient information for inclusion in meta-analyses. Only a few studies reported other tests; for some tests no studies were identified. The modelling was hampered by a lack of data on the diagnostic accuracy of strategies involving several tests.
CONCLUSIONS: Based on a small body of evidence of variable quality, OCT had high sensitivity and moderate specificity for diagnosis, and relatively high sensitivity but low specificity for monitoring. Strategies involving OCT alone for diagnosis and/or monitoring were unlikely to be cost-effective. Further research is required on (i) the performance of SD-OCT compared with FFA, especially for monitoring but also for diagnosis; (ii) the performance of strategies involving combinations/sequences of tests, for diagnosis and monitoring; (iii) the likelihood of active and inactive nAMD becoming inactive or active respectively; and (iv) assessment of treatment-associated utility weights (e.g. decrements), through a preference-based study.
STUDY REGISTRATION: This study is registered as PROSPERO CRD42012001930.
FUNDING: The National Institute for Health Research Health Technology Assessment programme.
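To make the structure of the cost-utility model described under REVIEW METHODS concrete, a minimal discounted Markov cohort sketch is given below; every cost, utility weight and transition probability is an invented placeholder, not a value from the study.

```python
import numpy as np

states = ["active nAMD", "inactive nAMD", "dead"]
P = np.array([[0.70, 0.25, 0.05],     # annual transitions (illustrative)
              [0.20, 0.75, 0.05],
              [0.00, 0.00, 1.00]])
cost    = np.array([4000.0, 800.0, 0.0])   # per state-year, GBP
utility = np.array([0.65,   0.80,  0.0])   # QALY weight per state-year

def run_cohort(P, cost, utility, years=25, disc=0.035):
    dist = np.array([1.0, 0.0, 0.0])       # cohort starts with active disease
    total_cost = total_qaly = 0.0
    for y in range(years):
        df = 1.0 / (1.0 + disc) ** y        # 3.5% discounting, as in the study
        total_cost += df * dist @ cost
        total_qaly += df * dist @ utility
        dist = dist @ P
    return total_cost, total_qaly

c, q = run_cohort(P, cost, utility)
print(f"discounted cost £{c:,.0f}, QALYs {q:.3f}")
# An ICER between two strategies is then (c2 - c1) / (q2 - q1).
```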
Abstract:
To cope with the rapid growth of multimedia applications that require dynamic levels of quality of service (QoS), cross-layer (CL) design, where multiple protocol layers are jointly combined, has been considered as a way to provide diverse QoS provisions for mobile multimedia networks. However, a general mathematical framework for modelling such CL schemes in wireless networks with different types of multimedia classes has been lacking. In this paper, to overcome this shortcoming, we propose a novel CL design for integrated real-time/non-real-time traffic with strict preemptive priority via a finite-state Markov chain. The main strategy of the CL scheme is to design a Markov model that explicitly includes adaptive modulation and coding at the physical layer, queueing at the data link layer, and the bursty nature of the multimedia traffic classes at the application layer. Using this Markov model, several important performance metrics in terms of packet loss rate, delay and throughput are examined. In addition, our proposed framework is exploited in various multimedia applications, for example end-to-end real-time video streaming and CL optimization, which require priority-based QoS adaptation for different applications. More importantly, the CL framework reveals important guidelines on how to optimize network performance.
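The physical-layer part of such a model is often abstracted as a finite-state Markov chain over AMC modes; the sketch below computes the chain's stationary distribution and long-run throughput under invented parameters, leaving the link-layer queue as a comment.

```python
import numpy as np

# Three AMC modes (e.g. BPSK, QPSK, 16-QAM) as channel states.
rate = np.array([1.0, 2.0, 4.0])        # bits/symbol per mode
P = np.array([[0.6, 0.4, 0.0],          # slow fading: adjacent-state moves
              [0.3, 0.5, 0.2],
              [0.0, 0.4, 0.6]])

w, v = np.linalg.eig(P.T)                # stationary distribution of the FSMC
pi = np.real(v[:, np.argmax(np.real(w))]); pi /= pi.sum()

throughput = pi @ rate                   # long-run average bits/symbol
print("stationary dist:", np.round(pi, 3), "avg rate:", round(throughput, 3))
# Queueing at the data link layer is layered on top: the service rate in
# each slot is drawn from `rate` according to the current channel state.
```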
Abstract:
Credal networks are graph-based statistical models whose parameters take values in a set, instead of being sharply specified as in traditional statistical models (e.g., Bayesian networks). The computational complexity of inferences on such models depends on the irrelevance/independence concept adopted. In this paper, we study inferential complexity under the concepts of epistemic irrelevance and strong independence. We show that inferences under strong independence are NP-hard even in trees with binary variables except for a single ternary one. We prove that under epistemic irrelevance the polynomial-time complexity of inferences in credal trees is not likely to extend to more general models (e.g., singly connected topologies). These results clearly distinguish networks that admit efficient inferences and those where inferences are most likely hard, and settle several open questions regarding their computational complexity. We show that these results remain valid even if we disallow the use of zero probabilities. We also show that the computation of bounds on the probability of the future state in a hidden Markov model is the same whether we assume epistemic irrelevance or strong independence, and we prove an analogous result for inference in Naive Bayes structures. These inferential equivalences are important for practitioners, as hidden Markov models and Naive Bayes networks are used in real applications of imprecise probability.
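For intuition, bounding a future-state probability in a tiny imprecise Markov chain can be done by brute force over the extreme points of the transition credal set; the sketch below does this for a two-state chain with invented intervals, as a simplified illustration rather than the paper's algorithms.

```python
import itertools
import numpy as np

# Two states; each row's transition probability to state 0 lies in an interval.
row_intervals = [(0.6, 0.8), (0.1, 0.3)]   # bounds on P(next = 0 | state)

def vertex_matrices(intervals):
    """All extreme transition matrices of the row-wise credal set."""
    for picks in itertools.product(*[(lo, hi) for lo, hi in intervals]):
        yield np.array([[p, 1.0 - p] for p in picks])

start = np.array([1.0, 0.0])               # known initial state
steps = 3
bounds = []
for T in vertex_matrices(row_intervals):   # sharp bounds would allow a fresh
    dist = start.copy()                    # vertex matrix at every step; the
    for _ in range(steps):                 # time-homogeneous sweep here is a
        dist = dist @ T                    # simple illustration
    bounds.append(dist[0])
print(f"P(state 0 after {steps} steps) in [{min(bounds):.3f}, {max(bounds):.3f}]")
```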