844 results for hierarchical hidden Markov model
Abstract:
Translucent wavelength-division multiplexing optical networks use sparse placement of regenerators to overcome the physical impairments and wavelength contention introduced by fully transparent networks, and achieve performance close to that of fully opaque networks at much lower cost. In previous studies, we addressed the placement of regenerators based on static schemes, allowing for only a limited number of regenerators at fixed locations. This paper furthers those studies by proposing a dynamic resource allocation and dynamic routing scheme to operate translucent networks. This scheme is realized by dynamically sharing regeneration resources, including transmitters, receivers, and electronic interfaces, between regeneration and access functions under a multidomain hierarchical translucent network model. An intradomain routing algorithm, which takes into consideration optical-layer constraints as well as dynamic allocation of regeneration resources, is developed to address the problem of translucent dynamic routing in a single routing domain. Network performance in terms of blocking probability, resource utilization, and running times under different resource allocation and routing schemes is measured through simulation experiments.
Abstract:
In this paper, the influence of a secondary variable, as a function of its correlation with the primary variable, on collocated cokriging is examined. For this study, five exhaustive data sets were generated by computer, from which samples with 60 and 104 data points were drawn using stratified random sampling. These exhaustive data sets were generated starting from a pair of primary and secondary variables showing good correlation; successive sets were then generated by adding increasing amounts of white noise, so that the correlation becomes progressively poorer. Using these samples, it was possible to determine how primary and secondary information is used to estimate an unsampled location, depending on the correlation level.
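The noise-degradation design described above can be sketched in a few lines of Python: a secondary variable is built to correlate well with a primary one, and white noise of increasing variance is added so the correlation worsens. All distributions and parameters here are illustrative, not the study's actual data sets.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 104  # one of the sample sizes used in the study

# Hypothetical primary variable and a well-correlated secondary variable
primary = rng.normal(10.0, 2.0, size=n)
secondary = 0.9 * primary + rng.normal(0.0, 0.5, size=n)

def degrade(values, noise_sd, rng):
    """Add white noise to the secondary variable so that its
    correlation with the primary variable gets poorer."""
    return values + rng.normal(0.0, noise_sd, size=values.shape)

# Successive data sets with increasing noise -> decreasing correlation
for noise_sd in (0.0, 1.0, 3.0, 9.0):
    noisy = degrade(secondary, noise_sd, rng)
    r = np.corrcoef(primary, noisy)[0, 1]
    print(f"noise sd = {noise_sd:4.1f}  correlation = {r:.2f}")
```

Each pass through the loop plays the role of one of the successively noisier data sets from which samples would then be drawn.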
Abstract:
In modern society, individuals constantly pass judgments on their own body and physical competence as well as those of other people. All too often, the verdict is less favourable. For the person, these physical self-perceptions (PSP) may negatively affect global self-esteem, identity, and general mental well-being. The overall aim of this thesis is to examine primarily the role that exercise, and secondarily the roles that gender and culture, play in the formation of PSP. In Study I, using confirmatory factor analyses, strong support for the validity of a first-order, and a second-order hierarchical and multidimensional, model of the Physical Self-Perception Profile (PSPP: Fox & Corbin, 1989) was found across three national samples (Great Britain, Sweden and Turkey) of university students. Cross-cultural differences were detected, with the British sample demonstrating higher latent means than the Swedish and Turkish samples on all PSPP subdomains except the physical condition subdomain (Condition). In Study II, a higher self-reported exercise frequency was associated with more positive PSP (in particular for Condition) and more importance attributed to PSP in Swedish university students. Males demonstrated higher overall PSPP scores than females. In Study III, a true-experimental design with randomisation into an intervention and a control group was adopted. Strong support was found for the effects of an empowerment-based exercise intervention programme on PSP and social physique anxiety (SPA) over six months for adolescent girls. The relations of exercise, gender and culture with PSP, SPA and self-esteem are discussed from the standpoints of a variety of theoretical models (the EXSEM model) and frameworks (self-presentation and objectification theory). The two theories of self-enhancement and skill-development are examined with regard to the direction of the exercise-physical self relationship and motivation for exercise.
Arguments for the relevance of exercise and PSP for practitioners in promoting general mental well-being and preventing modern-day diseases are outlined.
Abstract:
In this work we propose a new approach for preliminary epidemiological studies on Standardized Mortality Ratios (SMR) collected over many spatial regions. A preliminary study on SMRs aims to formulate hypotheses to be investigated via individual epidemiological studies, which avoid the bias carried by aggregated analyses. Starting from collected disease counts and expected disease counts calculated by means of reference-population disease rates, in each area an SMR is derived as the MLE under the Poisson assumption on each observation. Such estimators have high standard errors in small areas, i.e. where the expected count is low either because of the small population underlying the area or the rarity of the disease under study. Disease mapping models and other techniques for screening disease rates across the map, aiming to detect anomalies and possible high-risk areas, have been proposed in the literature under both the classical and the Bayesian paradigm. Our proposal approaches this issue with a decision-oriented method, which focuses on multiple testing control without, however, abandoning the preliminary-study perspective that an analysis of SMR indicators is meant to retain. We implement control of the FDR, a quantity widely used to address multiple comparison problems in the field of microarray data analysis but not usually employed in disease mapping. Controlling the FDR means providing an estimate of the FDR for a set of rejected null hypotheses. The small-areas issue raises difficulties in applying traditional methods for FDR estimation, which are usually based only on knowledge of the p-values (Benjamini and Hochberg, 1995; Storey, 2003). Tests evaluated by a traditional p-value have weak power in small areas, where the expected number of disease cases is small. Moreover, the tests cannot be assumed independent when spatial correlation between SMRs is expected, nor are they identically distributed when the population underlying the map is heterogeneous.
The Bayesian paradigm offers a way to overcome the inappropriateness of p-value-based methods. Another peculiarity of the present work is to propose a hierarchical fully Bayesian model for FDR estimation when testing many null hypotheses of absence of risk. We use concepts from Bayesian disease mapping models, referring in particular to the Besag, York and Mollié model (1991), often used in practice for its flexible prior assumptions on the distribution of risks across regions. The borrowing of strength between prior and likelihood, typical of a hierarchical Bayesian model, has the advantage of evaluating a single test (i.e. a test in a single area) by means of all observations in the map under study, rather than just the single observation. This improves the power of tests in small areas and addresses more appropriately the spatial correlation issue, which suggests that relative risks are closer in spatially contiguous regions. The proposed model estimates the FDR by means of the MCMC-estimated posterior probabilities b_i of the null hypothesis (absence of risk) for each area. An estimate of the expected FDR conditional on the data (denoted FDR-hat) can be calculated for any set of b_i's relative to areas declared at high risk (where the null hypothesis is rejected), by averaging the b_i's themselves. FDR-hat can be used to provide an easy decision rule for selecting high-risk areas, i.e. selecting as many areas as possible such that FDR-hat does not exceed a prefixed value; we call these FDR-hat-based decision (or selection) rules. The sensitivity and specificity of such a rule depend on the accuracy of the FDR estimate, over-estimation of the FDR causing a loss of power and under-estimation of the FDR producing a loss of specificity. Moreover, our model has the interesting feature of still providing an estimate of the relative risk values, as in the Besag, York and Mollié model (1991).
A simulation study was set up to evaluate the model's performance in terms of FDR estimation accuracy, sensitivity and specificity of the decision rule, and goodness of estimation of the relative risks. We chose a real map from which we generated several spatial scenarios whose disease counts vary according to the degree of spatial correlation, the area sizes, the number of areas where the null hypothesis is true, and the risk level in the latter areas. In summarizing simulation results we always consider FDR estimation in sets consisting of all areas whose b_i falls below a threshold t. We show graphs of FDR-hat and of the true FDR (known by simulation) plotted against the threshold t to assess FDR estimation. By varying the threshold we can learn which FDR values can be accurately estimated by a practitioner willing to apply the model (by the closeness between FDR-hat and the true FDR). By plotting the calculated sensitivity and specificity (both known by simulation) against FDR-hat we can check the sensitivity and specificity of the corresponding FDR-hat-based decision rules. To investigate the over-smoothing of relative risk estimates we compare box-plots of such estimates in high-risk areas (known by simulation), obtained by both our model and the classic Besag, York and Mollié model. All the summary tools are worked out for all simulated scenarios (54 scenarios in total). Results show that the FDR is well estimated (in the worst case we get an over-estimation, hence a conservative FDR control) in scenarios with small areas, low risk levels and spatially correlated risks, which are our primary aims. In such scenarios we obtain good estimates of the FDR for all values less than or equal to 0.10. The sensitivity of FDR-hat-based decision rules is generally low, but specificity is high. In such scenarios the use of a selection rule based on FDR-hat = 0.05 or FDR-hat = 0.10 can be suggested.
In cases where the number of true alternative hypotheses (the number of truly high-risk areas) is small, FDR values up to 0.15 are also well estimated, and FDR-hat = 0.15 based decision rules gain power while maintaining high specificity. On the other hand, in scenarios with non-small areas and non-small risk levels the FDR is under-estimated except for very small values (much lower than 0.05), resulting in a loss of specificity of an FDR-hat = 0.05 based decision rule. In such scenarios, decision rules based on FDR-hat = 0.05 or, even worse, FDR-hat = 0.10 cannot be recommended because the true FDR is actually much higher. As regards relative risk estimation, our model achieves almost the same results as the classic Besag, York and Mollié model. For this reason, our model is interesting for its ability to perform both the estimation of relative risk values and FDR control, except in scenarios with non-small areas and large risk levels. A case study is finally presented to show how the method can be used in epidemiology.
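The selection rule the abstract describes — average the posterior null probabilities b_i over a candidate rejection set, and reject the largest set whose average (the estimated FDR) stays at or below a prefixed level — can be sketched as follows. The probabilities below are invented for illustration, not actual MCMC output.

```python
import numpy as np

def fdr_selection(b, alpha):
    """Estimated-FDR selection rule: sort the posterior null
    probabilities b_i in ascending order and reject the largest
    prefix whose running mean (the estimated FDR of the rejection
    set) does not exceed alpha."""
    b = np.asarray(b, dtype=float)
    order = np.argsort(b)
    running_fdr = np.cumsum(b[order]) / np.arange(1, b.size + 1)
    # prefix means of ascending values are non-decreasing, so counting works
    k = int(np.sum(running_fdr <= alpha))
    return order[:k], (float(running_fdr[k - 1]) if k else 0.0)

# Invented posterior probabilities of "absence of risk" for 8 areas
b = [0.01, 0.03, 0.40, 0.02, 0.90, 0.08, 0.65, 0.05]
selected, est_fdr = fdr_selection(b, alpha=0.10)
print(sorted(selected.tolist()), round(est_fdr, 3))
```

Because the running mean over the sorted probabilities only grows, the rule rejects as many areas as possible while keeping the estimated FDR of the rejected set under the chosen level.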
Abstract:
We introduce the notion of Markov chains and their properties, and define ergodic, irreducible and aperiodic chains, with corresponding examples. Then the definition of hidden Markov models is given and their characteristics are examined. We formulate three basic problems regarding hidden Markov models and discuss the solutions of two of them: the Viterbi algorithm and the forward-backward algorithm.
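The Viterbi algorithm mentioned above can be sketched compactly. The two-state weather HMM below is the familiar textbook example; its transition and emission probabilities are illustrative only.

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most probable hidden state path for an observation sequence.
    pi: initial state distribution, A: transition matrix,
    B: emission matrix (states x symbols). Works in log space."""
    with np.errstate(divide="ignore"):
        log_pi, log_A, log_B = np.log(pi), np.log(A), np.log(B)
    n_states, T = len(pi), len(obs)
    delta = np.zeros((T, n_states))      # best log-prob ending in each state
    back = np.zeros((T, n_states), int)  # backpointers
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A  # scores[i, j]: state i -> j
        back[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_B[:, obs[t]]
    # Trace the best path backwards from the best final state
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Hidden states: 0=rainy, 1=sunny; observations: 0=walk, 1=shop, 2=clean
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.1, 0.4, 0.5], [0.6, 0.3, 0.1]])
print(viterbi([0, 1, 2], pi, A, B))  # -> [1, 0, 0]
```

Working in log space avoids the numerical underflow that multiplying many small probabilities would cause on longer sequences.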
Abstract:
A simulation model adopting a health system perspective showed population-based screening with DXA, followed by alendronate treatment of persons with osteoporosis, or with anamnestic fracture and osteopenia, to be cost-effective in Swiss postmenopausal women from age 70, but not in men. INTRODUCTION: We assessed the cost-effectiveness of a population-based screen-and-treat strategy for osteoporosis (DXA followed by alendronate treatment if osteoporotic, or osteopenic in the presence of fracture), compared to no intervention, from the perspective of the Swiss health care system. METHODS: A published Markov model assessed by first-order Monte Carlo simulation was refined to reflect the diagnostic process and treatment effects. Women and men entered the model at age 50. Main screening ages were 65, 75, and 85 years. Age at bone densitometry was flexible for persons fracturing before the main screening age. Realistic assumptions were made with respect to persistence with intended 5 years of alendronate treatment. The main outcome was cost per quality-adjusted life year (QALY) gained. RESULTS: In women, costs per QALY were Swiss francs (CHF) 71,000, CHF 35,000, and CHF 28,000 for the main screening ages of 65, 75, and 85 years. The threshold of CHF 50,000 per QALY was reached between main screening ages 65 and 75 years. Population-based screening was not cost-effective in men. CONCLUSION: Population-based DXA screening, followed by alendronate treatment in the presence of osteoporosis, or of fracture and osteopenia, is a cost-effective option in Swiss postmenopausal women after age 70.
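A minimal sketch of what first-order (individual-level) Monte Carlo evaluation of a Markov model looks like follows. The states, transition probabilities, costs and utilities are invented for illustration; they are not the published model's inputs.

```python
import random

P = {  # hypothetical yearly transition probabilities
    "well":     {"well": 0.97, "fracture": 0.02, "dead": 0.01},
    "fracture": {"fracture": 0.95, "dead": 0.05},
    "dead":     {"dead": 1.0},
}
COST = {"well": 100.0, "fracture": 5000.0, "dead": 0.0}  # CHF per year
UTILITY = {"well": 0.85, "fracture": 0.60, "dead": 0.0}  # QALYs per year

def simulate_patient(rng, years=40):
    """Walk one simulated patient through the Markov model,
    accumulating costs and quality-adjusted life years."""
    state, cost, qalys = "well", 0.0, 0.0
    for _ in range(years):
        cost += COST[state]
        qalys += UTILITY[state]
        states, probs = zip(*P[state].items())
        state = rng.choices(states, weights=probs)[0]
    return cost, qalys

rng = random.Random(42)
results = [simulate_patient(rng) for _ in range(1000)]
mean_cost = sum(c for c, _ in results) / len(results)
mean_qalys = sum(q for _, q in results) / len(results)
print(f"mean cost: CHF {mean_cost:.0f}, mean QALYs: {mean_qalys:.2f}")
```

Comparing such means between a screen-and-treat arm and a no-intervention arm yields the incremental cost per QALY gained that the study reports.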
Abstract:
Ovariectomy interrupts the regulatory loop in the hypothalamus-pituitary-gonad axis, leading to a several-fold increase in gonadotropin levels. This rise in hormonal secretion may play a causal role in ovariectomy-related urinary incontinence. The purpose of this study was to examine the effect of ovariectomy in bitches on the expression of GnRH- and LH-receptors in the lower urinary tract, and assess the relationship between receptor expression and plasma gonadotropin concentrations. Plasma gonadotropins were measured in 37 client-owned bitches. Biopsies were harvested from the mid-ventral bladder wall in all dogs, and from nine further locations within the lower urinary tract in 17 of the 37 animals. Messenger RNA of the LH and GnRH receptors was quantified using RT-PCR with the TaqMan Universal PCR Master Mix. Gonadotropins were measured with a canine-specific FSH-immunoradiometric assay and LH-radioimmunoassay. The hierarchical mixed ANOVA model using MINITAB, Mann-Whitney U-test, unpaired means comparison and linear regressions using StatView were applied for statistical analyses. Messenger RNA for both receptors was detected in all biopsy samples. Age was negatively correlated to mRNA expression of the LH and the GnRH receptors. A relationship between the mRNA values and the plasma gonadotropin concentrations was not established. Evaluation of results within each of the biopsy locations revealed greater LH-receptor expression in the proximal second quarter of the urethra in spayed bitches than in intact bitches (P=0.0481). Increased mRNA expression of LH receptors in this location could possibly play a role in the decrease in closing pressure of the urethra following ovariectomy.
Abstract:
The past decade has seen the energy consumption of servers and Internet Data Centers (IDCs) skyrocket. A recent survey estimated that worldwide spending on servers and cooling has risen above $30 billion and is likely to exceed spending on new server hardware. The rapid rise in energy consumption has posed a serious threat to both energy resources and the environment, which makes green computing not only worthwhile but also necessary. This dissertation tackles the challenges of reducing both the energy consumption of server systems and the costs incurred by Online Service Providers (OSPs). Two distinct subsystems account for most of an IDC's power: the server system, which accounts for 56% of the total power consumption of an IDC, and the cooling and humidification systems, which account for about 30%. The server system dominates the energy consumption of an IDC, and its power draw can vary drastically with data center utilization. In this dissertation, we propose three models to achieve energy efficiency in web server clusters: an energy-proportional model, an optimal server allocation and frequency adjustment strategy, and a constrained Markov model. The proposed models combine Dynamic Voltage/Frequency Scaling (DV/FS) and Vary-On, Vary-Off (VOVF) mechanisms that work together for greater energy savings. Meanwhile, corresponding strategies are proposed to deal with the transition overheads. We further extend server energy management to the IDC's cost management, helping OSPs conserve energy, manage their own electricity costs, and lower carbon emissions. We have developed an optimal energy-aware load dispatching strategy that periodically maps more requests to the locations with lower electricity prices. A carbon emission limit is imposed, and the volatility of the carbon offset market is also considered. Two energy-efficient strategies are applied to the server system and the cooling system, respectively.
With the rapid development of cloud services, we also carry out research to reduce server energy in cloud computing environments. In this work, we propose a new live virtual machine (VM) placement scheme that can effectively map VMs to Physical Machines (PMs) with substantial energy savings in a heterogeneous server cluster. A VM/PM mapping probability matrix is constructed, in which each VM request is assigned a probability of running on each PM. The VM/PM mapping probability matrix takes into account resource limitations, VM operation overheads, server reliability, and energy efficiency. The evolution of Internet Data Centers and the increasing demands of web services raise great challenges for improving the energy efficiency of IDCs. We also identify several potential areas for future research in each chapter.
Abstract:
BACKGROUND/AIMS: While several risk factors for the histological progression of chronic hepatitis C have been identified, the contribution of HCV genotypes to liver fibrosis evolution remains controversial. The aim of this study was to assess independent predictors for fibrosis progression. METHODS: We identified 1189 patients from the Swiss Hepatitis C Cohort database with at least one biopsy prior to antiviral treatment and assessable date of infection. Stage-constant fibrosis progression rate was assessed using the ratio of fibrosis Metavir score to duration of infection. Stage-specific fibrosis progression rates were obtained using a Markov model. Risk factors were assessed by univariate and multivariate regression models. RESULTS: Independent risk factors for accelerated stage-constant fibrosis progression (>0.083 fibrosis units/year) included male sex (OR=1.60, [95% CI 1.21-2.12], P<0.001), age at infection (OR=1.08, [1.06-1.09], P<0.001), histological activity (OR=2.03, [1.54-2.68], P<0.001) and genotype 3 (OR=1.89, [1.37-2.61], P<0.001). Slower progression rates were observed in patients infected by blood transfusion (P=0.02) and invasive procedures or needle stick (P=0.03), compared to those infected by intravenous drug use. Maximum likelihood estimates (95% CI) of stage-specific progression rates (fibrosis units/year) for genotype 3 versus the other genotypes were: F0-->F1: 0.126 (0.106-0.145) versus 0.091 (0.083-0.100), F1-->F2: 0.099 (0.080-0.117) versus 0.065 (0.058-0.073), F2-->F3: 0.077 (0.058-0.096) versus 0.068 (0.057-0.080) and F3-->F4: 0.171 (0.106-0.236) versus 0.112 (0.083-0.142, overall P<0.001). CONCLUSIONS: This study shows a significant association of genotype 3 with accelerated fibrosis using both stage-constant and stage-specific estimates of fibrosis progression rates. This observation may have important consequences for the management of patients infected with this genotype.
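The stage-constant progression rate used above is a simple ratio; a short sketch (with a hypothetical patient) shows how the study's >0.083 fibrosis-units/year threshold for accelerated progression would be applied.

```python
def stage_constant_rate(metavir_stage, years_infected):
    """Stage-constant fibrosis progression rate: the ratio of the
    Metavir fibrosis stage to the duration of infection,
    in fibrosis units per year."""
    return metavir_stage / years_infected

# Hypothetical patient: Metavir stage F2 reached after 20 years of infection
rate = stage_constant_rate(2, 20)
accelerated = rate > 0.083  # the study's threshold for accelerated progression
print(rate, accelerated)
```

The stage-specific rates in the abstract refine this by estimating a separate transition rate for each Metavir stage pair (F0->F1 through F3->F4) via a Markov model, rather than assuming one constant rate.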
Abstract:
Traditional methods do not measure people's risk attitudes naturally and precisely. Therefore, a fuzzy risk attitude classification method is developed. Since prospect theory is usually considered an effective model of decision making, the personalized parameters in prospect theory are first fuzzified to distinguish people with different risk attitudes, and then a fuzzy classification database schema is applied to calculate the exact values of risk value attitude and risk behavior attitude. Finally, by applying a two-level hierarchical classification model, the precise value of the synthetic risk attitude can be acquired.
Abstract:
PURPOSE: To develop and implement a method for improved cerebellar tissue classification on MRI of the brain by automatically isolating the cerebellum prior to segmentation. MATERIALS AND METHODS: Dual fast spin echo (FSE) and fluid-attenuated inversion recovery (FLAIR) images were acquired on 18 normal volunteers on a 3 T Philips scanner. The cerebellum was isolated from the rest of the brain using a symmetric inverse-consistent nonlinear registration of the individual brain with the parcellated template. The cerebellum was then separated by masking the anatomical image with the individual FLAIR images. Tissues in both the cerebellum and the rest of the brain were separately classified using a hidden Markov random field (HMRF), a parametric method, and then combined to obtain tissue classification of the whole brain. The proposed method for tissue classification on real MR brain images was evaluated subjectively by two experts. The segmentation results on Brainweb images with varying noise and intensity nonuniformity levels were quantitatively compared with the ground truth by computing the Dice similarity indices. RESULTS: The proposed method significantly improved the cerebellar tissue classification on all normal volunteers included in this study without compromising the classification in the remaining part of the brain. The average similarity indices for gray matter (GM) and white matter (WM) in the cerebellum are 89.81 (+/-2.34) and 93.04 (+/-2.41), demonstrating excellent performance of the proposed methodology. CONCLUSION: The proposed method significantly improved tissue classification in the cerebellum. The GM was overestimated when segmentation was performed on the whole brain as a single object.
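The Dice similarity index used above for the quantitative comparison is straightforward to compute; a minimal sketch on toy 1-D label maps follows (the labels and values are illustrative, not Brainweb data).

```python
import numpy as np

def dice_index(seg, truth, label):
    """Dice similarity index, in percent, between a segmentation and
    the ground truth for one tissue label: 200*|A∩B| / (|A|+|B|)."""
    a = (np.asarray(seg) == label)
    b = (np.asarray(truth) == label)
    denom = a.sum() + b.sum()
    return 200.0 * np.logical_and(a, b).sum() / denom if denom else 100.0

# Toy 1-D label maps (0=background, 1=GM, 2=WM)
truth = np.array([0, 1, 1, 1, 2, 2, 2, 0])
seg = np.array([0, 1, 1, 2, 2, 2, 2, 0])
print(round(dice_index(seg, truth, 1), 1))  # GM overlap
print(round(dice_index(seg, truth, 2), 1))  # WM overlap
```

An index of 100 means perfect overlap with the ground truth; the averages near 90 reported above therefore indicate close agreement.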
Abstract:
We present a search for a light (mass < 2 GeV) boson predicted by Hidden Valley supersymmetric models that decays into a final state consisting of collimated muons or electrons, denoted "lepton-jets". The analysis uses 5 fb(-1) of root s = 7 TeV proton-proton collision data recorded by the ATLAS detector at the Large Hadron Collider to search for the following signatures: single lepton-jets with at least four muons; pairs of lepton-jets, each with two or more muons; and pairs of lepton-jets with two or more electrons. This study finds no statistically significant deviation from the Standard Model prediction and places 95% confidence-level exclusion limits on the production cross section times branching ratio of light bosons for several parameter sets of a Hidden Valley model.
Abstract:
Mathematical models of disease progression predict disease outcomes and are useful epidemiological tools for planners and evaluators of health interventions. The R package gems is a tool that simulates disease progression in patients and predicts the effect of different interventions on patient outcome. Disease progression is represented by a series of events (e.g., diagnosis, treatment and death), displayed in a directed acyclic graph. The vertices correspond to disease states and the directed edges represent events. The package gems allows simulations based on a generalized multistate model that can be described by a directed acyclic graph with continuous transition-specific hazard functions. The user can specify an arbitrary hazard function and its parameters. The model includes parameter uncertainty, does not need to be a Markov model, and may take the history of previous events into account. Applications are not limited to the medical field and extend to other areas where multistate simulation is of interest. We provide a technical explanation of the multistate models used by gems, explain the functions of gems and their arguments, and show a sample application.
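The kind of simulation gems performs — sampling transition-specific waiting times on a directed acyclic graph of disease states — can be sketched generically. The sketch below is in Python rather than R, uses purely illustrative exponential hazards, and is not the gems API.

```python
import random

# Toy disease-progression DAG: each edge carries its own
# transition-time sampler (here exponential; rates are illustrative)
transitions = {
    "healthy": [("ill",  lambda: random.expovariate(0.10)),
                ("dead", lambda: random.expovariate(0.02))],
    "ill":     [("dead", lambda: random.expovariate(0.30))],
    "dead":    [],  # absorbing state
}

def simulate_path(start="healthy", horizon=50.0, seed=None):
    """Simulate one patient's history: in each state, sample a candidate
    time for every outgoing transition and follow the earliest one."""
    if seed is not None:
        random.seed(seed)
    t, state = 0.0, start
    history = [(state, 0.0)]
    while transitions[state]:
        nxt, wait = min(((s, f()) for s, f in transitions[state]),
                        key=lambda p: p[1])
        if t + wait > horizon:
            break
        t += wait
        state = nxt
        history.append((state, round(t, 2)))
    return history

print(simulate_path(seed=1))
```

Replacing the exponential samplers with arbitrary (possibly history-dependent) hazard functions gives the generalized, non-Markov multistate behavior the package supports.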
Abstract:
The ability to determine which activity of daily living a person is performing is of interest in many application domains. It is possible to assess the physical and cognitive capabilities of the elderly by inferring which activities they perform in their houses. Our primary aim was to establish a proof of concept that a wireless sensor system can monitor and record physical activity and that these data can be modeled to predict activities of daily living. The secondary aim was to determine the optimal placement of the sensor boxes for detecting activities in a room. A wireless sensor system was set up in a laboratory kitchen. Ten healthy participants were asked to make tea following a defined sequence of tasks. Data were collected from the eight wireless sensor boxes placed at specific locations in the test kitchen and analyzed to detect the sequences of tasks performed by the participants. These task sequences were used to train and test a Markov model. Data analysis focused on the reliability of the system and the integrity of the collected data. The sequences of tasks were successfully recognized for all subjects, and the averaged patterns of task sequences across subjects were highly correlated. Analysis of the collected data indicates that sensors placed in different locations are capable of recognizing activities, with the movement detection sensor contributing the most to the detection of tasks. The central top of the room, with no obstruction of view, was considered the best location from which to record data for activity detection. Wireless sensor systems show much promise as easily deployable tools for monitoring and recognizing activities of daily living.
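A first-order Markov model of task sequences, of the sort described above, can be sketched by estimating a transition matrix from observed sequences. The task coding and the sequences below are hypothetical, loosely inspired by the tea-making protocol.

```python
import numpy as np

def fit_transition_matrix(sequences, n_tasks):
    """Estimate a first-order Markov transition matrix from observed
    task sequences, with add-one (Laplace) smoothing so unseen
    transitions keep a small nonzero probability."""
    counts = np.ones((n_tasks, n_tasks))  # Laplace smoothing
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

# Hypothetical tea-making task codes: 0=fill kettle, 1=boil water,
# 2=fetch cup, 3=add tea bag, 4=pour water
sequences = [
    [0, 1, 2, 3, 4],
    [0, 2, 1, 3, 4],
    [0, 1, 2, 3, 4],
]
P = fit_transition_matrix(sequences, n_tasks=5)
print(int(P[0].argmax()))  # most likely task to follow "fill kettle"
```

A new sensor-derived sequence can then be scored against the fitted matrix, so well-formed task orders receive higher likelihood than anomalous ones.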
Abstract:
Mental imagery and perception are thought to rely on similar neural circuits, and many recent behavioral studies have attempted to demonstrate interactions between actual physical stimulation and sensory imagery in the corresponding sensory modality. However, there has been a lack of theoretical understanding of the nature of these interactions, and both interferential and facilitatory effects have been found. Facilitatory effects appear strikingly similar to those that arise due to experimental manipulations of expectation. Using a self-motion discrimination task, we try to disentangle the effects of mental imagery from those of expectation by using a hierarchical drift diffusion model to investigate both choice data and response times. Manipulations of expectation are reasonably well understood in terms of their selective influence on parameters of the drift diffusion model, and in this study, we make the first attempt to similarly characterize the effects of mental imagery. We investigate mental imagery within the computational framework of control theory and state estimation.
• Mental imagery and perception are thought to rely on similar neural circuits; however, on more theoretical grounds, imagery seems to be closely related to the output of forward models (sensory predictions).
• We reanalyzed data from a study of imagined self-motion.
• Bayesian modeling of response times may allow us to disentangle the effects of mental imagery on behavior from other cognitive (top-down) effects, such as expectation.
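A basic (non-hierarchical) drift diffusion process of the kind underlying the model above can be simulated directly: evidence accumulates noisily toward one of two decision boundaries, jointly producing a choice and a response time. The parameters below are illustrative only.

```python
import numpy as np

def simulate_ddm(drift, boundary, n_trials, dt=0.001, noise=1.0, seed=0):
    """Simulate a simple drift diffusion model: evidence starts at 0
    and accumulates at rate `drift` with Gaussian noise until it
    crosses +boundary (choice 1) or -boundary (choice 0).
    Returns per-trial choices and response times."""
    rng = np.random.default_rng(seed)
    choices, rts = [], []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < boundary:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        choices.append(1 if x > 0 else 0)
        rts.append(t)
    return np.array(choices), np.array(rts)

# A positive drift rate biases choices toward the upper boundary
choices, rts = simulate_ddm(drift=1.0, boundary=1.0, n_trials=200)
print(f"upper-boundary choices: {choices.mean():.2f}, mean RT: {rts.mean():.2f} s")
```

In the experimental logic sketched in the abstract, expectation is typically modeled as shifting parameters such as the starting point or drift rate; fitting which parameters shift under imagery is what the hierarchical version of the model addresses.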