947 results for Random coefficient logit (RCL) model
Abstract:
Initializing the ocean for decadal predictability studies is a challenge, as it requires reconstructing the little-observed subsurface trajectory of ocean variability. In this study we explore to what extent surface nudging using well-observed sea surface temperature (SST) can reconstruct the deeper ocean variations for the 1949–2005 period. An ensemble produced with a nudged version of the IPSLCM5A model is compared to ocean reanalyses and reconstructed datasets. The SST is restored to observations using a physically based relaxation coefficient, in contrast to earlier studies, which use a much larger value. The assessment is restricted to the regions where the ocean reanalyses agree, i.e. the upper 500 m of the ocean, although this can be latitude and basin dependent. Significant reconstruction of the subsurface is achieved in specific regions, namely regions of subduction in the subtropical Atlantic, below the thermocline in the equatorial Pacific and, in some cases, in the North Atlantic deep convection regions. Beyond the mean correlations, ocean integrals are used to explore the time evolution of the correlation over 20-year windows. Classical fixed-depth heat content diagnostics do not exhibit any significant reconstruction between the different existing observation-based references and therefore cannot be used to assess global average time-varying correlations in the nudged simulations. Using the physically based average temperature above an isotherm (14°C) alleviates this issue in the tropics and subtropics and shows significant reconstruction of these quantities in the nudged simulations for several decades. This skill is attributed to the wind stress reconstruction in the tropics, as already demonstrated in a perfect model study using the same model. Thus, we also show here the robustness of this result in a historical and observational context.
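As a rough illustration of the surface nudging described above, here is a minimal sketch of Newtonian relaxation of model SST toward observations. The restoring coefficient, mixed-layer depth and seawater properties are illustrative values, not numbers taken from the study:

```python
def nudge_sst(sst_model, sst_obs, dt_s, gamma=40.0, rho=1025.0, cp=3990.0, h=50.0):
    """One Euler step of SST relaxation toward observations.

    gamma : restoring coefficient (W m^-2 K^-1); a physically based value is
            of the order of tens of W m^-2 K^-1, whereas the stronger nudging
            of earlier studies corresponds to a much larger gamma.
    rho, cp : seawater density (kg m^-3) and heat capacity (J kg^-1 K^-1).
    h : mixed-layer depth (m).
    """
    flux = -gamma * (sst_model - sst_obs)             # restoring heat flux, W m^-2
    return sst_model + dt_s * flux / (rho * cp * h)   # temperature tendency, K

# toy example: model SST starts 2 K too warm; daily steps for one year
sst = 20.0
for _ in range(360):
    sst = nudge_sst(sst, 18.0, dt_s=86400.0)
```

With these values the relaxation e-folding time is rho*cp*h/gamma, roughly two months, so after a year the bias has essentially vanished.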
Abstract:
Oscillations between high and low values of the membrane potential (UP and DOWN states, respectively) are a ubiquitous feature of cortical neurons during slow wave sleep and anesthesia. Nevertheless, surprisingly few quantitative studies have dealt with this phenomenon’s implications for computation. Here we present a novel theory that explains on a detailed mathematical level the computational benefits of UP states. The theory is based on random sampling by means of interspike intervals (ISIs) of the exponential integrate-and-fire (EIF) model neuron, such that each spike is considered a sample whose analog value corresponds to the spike’s preceding ISI. As we show, the EIF’s exponential sodium current, which kicks in when a noisy membrane potential is balanced around values close to the firing threshold, leads to a particularly simple, approximative relationship between the neuron’s ISI distribution and input current. Approximation quality depends on the frequency spectrum of the current and improves as the voltage baseline is raised towards threshold. Thus, the conceptually simpler leaky integrate-and-fire neuron, which lacks such an additional current boost, performs consistently worse than the EIF and does not improve when the voltage baseline is increased. For the EIF, in contrast, the presented mechanism is particularly effective in the high-conductance regime, which is a hallmark feature of UP states. Our theoretical results are confirmed by accompanying simulations, which were conducted for input currents of varying spectral composition. Moreover, we provide analytical estimates of the range of ISI distributions the EIF neuron can sample from at a given approximation level. Such samples may be considered by any algorithmic procedure that is based on random sampling, such as Markov chain Monte Carlo or message-passing methods.
Finally, we explain how spike-based random sampling relates to existing computational theories about UP states during slow wave sleep and present possible extensions of the model in the context of spike-frequency adaptation.
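The sampling mechanism above rests on simulating the EIF neuron and collecting ISIs. A minimal Euler-Maruyama sketch, with illustrative parameter values that are not those of the paper:

```python
import math
import random

def simulate_eif(i_ext=0.8, t_max=5000.0, dt=0.02, seed=1):
    """Euler-Maruyama simulation of the exponential integrate-and-fire (EIF)
    neuron; returns the list of interspike intervals (ms).
    All parameter values are illustrative, not those of the paper."""
    tau, e_l, v_t, d_t = 20.0, -65.0, -50.0, 2.0     # ms, mV
    v_spike, v_reset, sigma = -30.0, -65.0, 3.0      # mV
    rng = random.Random(seed)
    v, t, t_last, isis = e_l, 0.0, 0.0, []
    while t < t_max:
        # leak + exponential sodium boost + external drive (mV/ms)
        drift = (-(v - e_l) + d_t * math.exp((v - v_t) / d_t)) / tau + i_ext
        v += dt * drift + sigma * math.sqrt(dt / tau) * rng.gauss(0.0, 1.0)
        t += dt
        if v >= v_spike:                              # spike: record ISI, reset
            isis.append(t - t_last)
            t_last, v = t, v_reset
    return isis

isis = simulate_eif()
```

Each recorded ISI is one "sample" in the sense of the theory; varying `i_ext` (or feeding a time-varying current) changes the ISI distribution being sampled from.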
Abstract:
Peritoneal transport characteristics and residual renal function require regular control and subsequent adjustment of the peritoneal dialysis (PD) prescription. Prescription models should facilitate the prediction of the outcome of such adaptations for a given patient. In the present study, the prescription model implemented in the PatientOnLine software was validated in patients requiring a prescription change. This multicenter, international prospective cohort study, which aimed to validate a PD prescription model, included patients treated with continuous ambulatory peritoneal dialysis. Patients were examined with the peritoneal function test (PFT) to determine the outcome of their current prescription and the necessity for a prescription change. For these patients, a new prescription was modeled using the PatientOnLine software (Fresenius Medical Care, Bad Homburg, Germany). Two to four weeks after implementation of the new PD regimen, a second PFT was performed. The validation of the prescription model included 54 patients. Predicted and measured peritoneal Kt/V were 1.52 ± 0.31 and 1.66 ± 0.35, and total (peritoneal + renal) Kt/V values were 1.96 ± 0.48 and 2.06 ± 0.44, respectively. Predicted and measured peritoneal creatinine clearances were 42.9 ± 8.6 and 43.0 ± 8.8 L/1.73 m2/week and total creatinine clearances were 65.3 ± 26.0 and 63.3 ± 21.8 L/1.73 m2/week, respectively. The analysis revealed a Pearson's correlation coefficient for peritoneal Kt/V of 0.911 and a Lin's concordance coefficient of 0.829. The value of both coefficients was 0.853 for peritoneal creatinine clearance. Predicted and measured daily net ultrafiltration was 0.77 ± 0.49 and 1.16 ± 0.63 L/24 h, respectively. Pearson's correlation and Lin's concordance coefficient were 0.518 and 0.402, respectively.
Predicted and measured peritoneal glucose absorption was 125.8 ± 38.8 and 79.9 ± 30.7 g/24 h, respectively, and Pearson's correlation and Lin's concordance coefficient were 0.914 and 0.477, respectively. With good predictability of peritoneal Kt/V and creatinine clearance, the present model provides support for individual dialysis prescription in clinical practice. Peritoneal glucose absorption and ultrafiltration are less predictable and are likely influenced by additional clinical factors that need to be taken into consideration.
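The two agreement measures used throughout this validation can be computed directly. A minimal sketch of Pearson's r and Lin's concordance correlation coefficient, with hypothetical data illustrating why the two can diverge (as they do for glucose absorption above):

```python
from statistics import mean

def pearson_r(x, y):
    """Pearson's correlation coefficient: strength of linear association."""
    mx, my = mean(x), mean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def lin_ccc(x, y):
    """Lin's concordance correlation coefficient: unlike Pearson's r, it
    penalizes systematic deviation from the 45-degree identity line."""
    n = float(len(x))
    mx, my = mean(x), mean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    sx2 = sum((a - mx) ** 2 for a in x) / n
    sy2 = sum((b - my) ** 2 for b in y) / n
    return 2.0 * sxy / (sx2 + sy2 + (mx - my) ** 2)

# hypothetical predicted/measured pairs: a constant offset keeps r at 1
# but lowers the concordance coefficient
predicted = [1.0, 2.0, 3.0, 4.0]
measured = [1.5, 2.5, 3.5, 4.5]
r = pearson_r(predicted, measured)    # perfectly linear
ccc = lin_ccc(predicted, measured)    # offset is penalized
```

This is exactly the pattern reported for glucose absorption: high Pearson correlation with a much lower concordance coefficient signals systematic bias between prediction and measurement.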
Abstract:
We estimate the momentum diffusion coefficient of a heavy quark within a pure SU(3) plasma at a temperature of about 1.5Tc. Large-scale Monte Carlo simulations on a series of lattices extending up to 192³×48 permit us to carry out a continuum extrapolation of the so-called color-electric imaginary-time correlator. The extrapolated correlator is analyzed with the help of theoretically motivated models for the corresponding spectral function. Evidence for a nonzero transport coefficient is found and, incorporating systematic uncertainties reflecting model assumptions, we obtain κ=(1.8–3.4)T³. This implies that the “drag coefficient,” characterizing the time scale at which heavy quarks adjust to hydrodynamic flow, is η_D⁻¹=(1.8–3.4)(Tc/T)²(M/1.5 GeV) fm/c, where M is the heavy quark kinetic mass. The results apply to bottom and, with somewhat larger systematic uncertainties, to charm quarks.
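The conversion from κ to the kinetic equilibration time can be sketched using the standard Einstein relation η_D = κ/(2MT). The quenched deconfinement temperature Tc ≈ 310 MeV assumed below is an illustrative input, not a value quoted in the abstract:

```python
HBAR_C = 0.1973  # GeV*fm, conversion between natural units and fm/c

def drag_time_fm(kappa_hat, m_gev=1.5, t_gev=0.47):
    """Kinetic equilibration time 1/eta_D = 2*M*T/kappa (Einstein relation),
    with kappa = kappa_hat * T^3, returned in fm/c.
    t_gev = 0.47 GeV corresponds to T = 1.5*Tc with an assumed Tc ~ 0.31 GeV."""
    kappa = kappa_hat * t_gev ** 3
    return 2.0 * m_gev * t_gev / kappa * HBAR_C

lo = drag_time_fm(3.4)  # stronger coupling -> faster equilibration
hi = drag_time_fm(1.8)
```

With κ̂ in the quoted range 1.8–3.4, this gives equilibration times of roughly 0.8–1.5 fm/c at T = 1.5Tc for a 1.5 GeV kinetic mass, consistent with the (Tc/T)² scaling stated in the abstract.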
Abstract:
BACKGROUND Cam-type femoroacetabular impingement (FAI) resulting from an abnormal nonspherical femoral head shape leads to chondrolabral damage and is considered a cause of early osteoarthritis. A previously developed experimental ovine FAI model induces a cam-type impingement that results in localized chondrolabral damage, replicating the patterns found in the human hip. Biochemical MRI modalities such as T2 and T2* may allow for evaluation of the cartilage biochemistry long before cartilage loss occurs and, for that reason, may be a worthwhile avenue of inquiry. QUESTIONS/PURPOSES We asked: (1) Does the histological grading of degenerated cartilage correlate with T2 or T2* values in this ovine FAI model? (2) How accurately can zones of degenerated cartilage be predicted with T2 or T2* MRI in this model? METHODS A cam-type FAI was induced in eight Swiss alpine sheep by performing a closing wedge intertrochanteric varus osteotomy. After ambulation of 10 to 14 weeks, the sheep were euthanized and a 3-T MRI of the hip was performed. T2 and T2* values were measured at six locations on the acetabulum and compared with the histological damage pattern using the Mankin score. This is an established histological scoring system to quantify cartilage degeneration. Both T2 and T2* values are determined by cartilage water content and its collagen fiber network. Of those, the T2* mapping is a more modern sequence with technical advantages (eg, shorter acquisition time). Correlation of the Mankin score and the T2 and T2* values, respectively, was evaluated using the Spearman's rank correlation coefficient. We used a hierarchical cluster analysis to calculate the positive and negative predictive values of T2 and T2* to predict advanced cartilage degeneration (Mankin ≥ 3). RESULTS We found a negative correlation between the Mankin score and both the T2 (p < 0.001, r = -0.79) and T2* values (p < 0.001, r = -0.90). 
For the T2 MRI technique, we found a positive predictive value of 100% (95% confidence interval [CI], 79%-100%) and a negative predictive value of 84% (95% CI, 67%-95%). For the T2* technique, we found a positive predictive value of 100% (95% CI, 79%-100%) and a negative predictive value of 94% (95% CI, 79%-99%). CONCLUSIONS T2 and T2* MRI modalities can reliably detect early cartilage degeneration in the experimental ovine FAI model. CLINICAL RELEVANCE T2 and T2* MRI modalities have the potential to allow for monitoring the natural course of osteoarthrosis noninvasively and to evaluate the results of surgical treatments targeted to joint preservation.
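Predictive values and their binomial confidence intervals are straightforward to reproduce. A sketch with hypothetical counts (not the study's data), using the Wilson score interval as one common choice of binomial CI:

```python
import math

def wilson_ci(k, n, z=1.96):
    """Wilson score interval (approx. 95% for z = 1.96) for a binomial
    proportion k/n; better behaved than the Wald interval near 0 or 1."""
    p = k / n
    denom = 1.0 + z * z / n
    center = (p + z * z / (2.0 * n)) / denom
    half = z * math.sqrt(p * (1.0 - p) / n + z * z / (4.0 * n * n)) / denom
    return center - half, center + half

def ppv_npv(tp, fp, tn, fn):
    """Positive and negative predictive values from a confusion matrix."""
    return tp / (tp + fp), tn / (tn + fn)

# hypothetical counts for a perfect-PPV situation like the one reported
ppv, npv = ppv_npv(tp=16, fp=0, tn=30, fn=2)
lo, hi = wilson_ci(16, 16)  # CI for a proportion of 16/16 = 100%
```

Note how a PPV of 100% still carries a lower confidence bound around 80% when it rests on only 16 positive calls, mirroring the 79%-100% intervals in the abstract.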
Abstract:
Potential home buyers may initiate contact with a real estate agent by asking to see a particular advertised house. This paper asks whether an agent's response to such a request depends on the race of the potential buyer or on whether the house is located in an integrated neighborhood. We build on previous research about the causes of discrimination in housing by using data from fair housing audits, a matched-pair technique for comparing the treatment of equally qualified black and white home buyers. However, we shift the focus from differences in the treatment of paired buyers to agent decisions concerning an individual housing unit, using a sample of all houses seen during the 1989 Housing Discrimination Study. We estimate a random-effects multinomial logit model to explain a real estate agent's joint decisions concerning whether to show each unit to a black auditor and to a white auditor. We find evidence that agents withhold houses in suburban, integrated neighborhoods from all customers (redlining), that agents' decisions to show houses in integrated neighborhoods are not the same for black and white customers (steering), and that the houses agents show are more likely to deviate from the initial request when the customer is black than when the customer is white. These deviations are consistent with the possibility that agents act upon the belief that some types of transactions are relatively unlikely for black customers (statistical discrimination).
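The core of a multinomial logit is the softmax mapping from utilities to choice probabilities, with a random effect shifting the utilities of a given house or agent. A toy sketch with entirely hypothetical utilities (the four joint outcomes mirror the model's structure: show to neither auditor, black only, white only, or both):

```python
import math
import random

def choice_probs(utilities):
    """Multinomial-logit choice probabilities: softmax of the utilities,
    computed stably by subtracting the maximum utility first."""
    m = max(utilities)
    exps = [math.exp(u - m) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

# hypothetical joint decision for one housing unit; a shared normal random
# effect captures unobserved house/agent heterogeneity, as in a
# random-effects specification
rng = random.Random(0)
effect = rng.gauss(0.0, 1.0)
utilities = [0.0,                 # show to neither (reference)
             -1.2 + effect,       # black auditor only
             -0.3 + effect,       # white auditor only
             0.5 + effect]        # both auditors
probs = choice_probs(utilities)
```

Integrating such probabilities over the random-effect distribution is what makes estimation of this model nonstandard relative to a plain multinomial logit.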
Abstract:
We present a framework for fitting multiple random walks to animal movement paths consisting of ordered sets of step lengths and turning angles. Each step and turn is assigned to one of a number of random walks, each characteristic of a different behavioral state. Behavioral state assignments may be inferred purely from movement data or may include the habitat type in which the animals are located. Switching between different behavioral states may be modeled explicitly using a state transition matrix estimated directly from data, or switching probabilities may take into account the proximity of animals to landscape features. Model fitting is undertaken within a Bayesian framework using the WinBUGS software. These methods allow for identification of different movement states using several properties of observed paths and lead naturally to the formulation of movement models. Analysis of relocation data from elk released in east-central Ontario, Canada, suggests a biphasic movement behavior: elk are either in an "encamped" state in which step lengths are small and turning angles are high, or in an "exploratory" state, in which daily step lengths are several kilometers and turning angles are small. Animals encamp in open habitat (agricultural fields and open forest), but the exploratory state is not associated with any particular habitat type.
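A generative sketch of the biphasic movement model described above: two behavioral states, each with its own step-length and turning-angle distribution, linked by a Markov transition rule. All parameter values are illustrative, not estimates from the elk data:

```python
import math
import random

def simulate_path(n_steps=500, seed=42):
    """Two-state ('encamped' vs 'exploratory') correlated random walk.
    Step lengths are exponential with a state-specific mean; headings change
    by a normal turning angle whose spread is state-specific. Switching
    between states follows fixed stay probabilities (a 2x2 transition matrix)."""
    rng = random.Random(seed)
    p_stay = {"encamped": 0.9, "exploratory": 0.8}
    mean_step = {"encamped": 0.1, "exploratory": 3.0}  # km, illustrative
    turn_sd = {"encamped": 2.0, "exploratory": 0.3}    # radians
    state, heading, x, y = "encamped", 0.0, 0.0, 0.0
    steps, states = [], []
    for _ in range(n_steps):
        if rng.random() > p_stay[state]:               # Markov switching
            state = "exploratory" if state == "encamped" else "encamped"
        step = rng.expovariate(1.0 / mean_step[state])
        heading += rng.gauss(0.0, turn_sd[state])
        x += step * math.cos(heading)
        y += step * math.sin(heading)
        steps.append(step)
        states.append(state)
    return steps, states

steps, states = simulate_path()
enc = [s for s, st in zip(steps, states) if st == "encamped"]
exp_ = [s for s, st in zip(steps, states) if st == "exploratory"]
```

Fitting reverses this generative story: given the observed steps and turns, the Bayesian machinery infers the hidden state sequence and the state-specific parameters.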
Abstract:
This study investigates a theoretical model in which a longitudinal process that is a stationary Markov chain and a Weibull survival process share a bivariate random effect. Furthermore, a quality-of-life-adjusted survival is calculated as the weighted sum of survival time. Theoretical values of the population mean adjusted survival of the described model are computed numerically. The parameters of the bivariate random effect significantly affect the theoretical values of the population mean. Maximum likelihood and Bayesian methods are applied to simulated data to estimate the model parameters. Based on the parameter estimates, the predicted population mean adjusted survival can then be calculated numerically and compared with the theoretical values. The Bayesian and maximum likelihood methods provide parameter estimates and population mean predictions with comparable accuracy; however, the Bayesian method suffers from poor convergence due to autocorrelation and inter-variable correlation.
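A simulation sketch of a joint model of this kind, with illustrative parameters: a shared bivariate normal random effect drives both a two-state quality-of-life Markov chain and a Weibull survival time, and adjusted survival accumulates as weighted time in each state:

```python
import math
import random

def simulate_subject(rng, rho=0.5):
    """One subject from a toy version of the joint model. The pair (b1, b2)
    is bivariate normal with correlation rho; b1 shifts the Markov chain,
    b2 shifts the Weibull scale. QoL weights (1.0 good / 0.5 poor) and all
    other values are illustrative."""
    b1 = rng.gauss(0.0, 1.0)
    b2 = rho * b1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
    shape, scale = 1.5, math.exp(1.0 + 0.3 * b2)            # Weibull survival
    surv = scale * (-math.log(1.0 - rng.random())) ** (1.0 / shape)
    p_stay_good = 1.0 / (1.0 + math.exp(-(1.0 + b1)))       # logit + b1
    p_recover = 0.3                                         # poor -> good
    state, qaly, t, dt = 1, 0.0, 0.0, 1.0 / 12.0            # monthly steps
    while t < surv:
        step = min(dt, surv - t)
        qaly += (1.0 if state == 1 else 0.5) * step         # weighted time
        if state == 1:
            state = 1 if rng.random() < p_stay_good else 0
        else:
            state = 1 if rng.random() < p_recover else 0
        t += step
    return surv, qaly

rng = random.Random(7)
sims = [simulate_subject(rng) for _ in range(2000)]
mean_adjusted = sum(q for _, q in sims) / len(sims)
```

Averaging `mean_adjusted` over many simulated subjects is the Monte Carlo analogue of the numerical population-mean computation described in the abstract.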
Abstract:
With the recognition of the importance of evidence-based medicine, there is an emerging need for methods to systematically synthesize available data. Specifically, methods to provide accurate estimates of test characteristics for diagnostic tests are needed to help physicians make better clinical decisions. To provide more flexible approaches for meta-analysis of diagnostic tests, we developed three Bayesian generalized linear models. Two of these models, a bivariate normal and a binomial model, analyzed pairs of sensitivity and specificity values while incorporating the correlation between these two outcome variables. Noninformative independent uniform priors were used for the variance of sensitivity, specificity and correlation. We also applied an inverse Wishart prior to check the sensitivity of the results. The third model was a multinomial model where the test results were modeled as multinomial random variables. All three models can include specific imaging techniques as covariates in order to compare performance. Vague normal priors were assigned to the coefficients of the covariates. The computations were carried out using the 'Bayesian inference using Gibbs sampling' implementation of Markov chain Monte Carlo techniques. We investigated the properties of the three proposed models through extensive simulation studies. We also applied these models to a previously published meta-analysis dataset on cervical cancer as well as to an unpublished melanoma dataset. In general, our findings show that the point estimates of sensitivity and specificity were consistent among Bayesian and frequentist bivariate normal and binomial models. However, in the simulation studies, the estimates of the correlation coefficient from Bayesian bivariate models are not as good as those obtained from frequentist estimation regardless of which prior distribution was used for the covariance matrix. 
The Bayesian multinomial model consistently underestimated the sensitivity and specificity regardless of the sample size and correlation coefficient. In conclusion, the Bayesian bivariate binomial model provides the most flexible framework for future applications because of its following strengths: (1) it facilitates direct comparison between different tests; (2) it captures the variability in both sensitivity and specificity simultaneously as well as the intercorrelation between the two; and (3) it can be directly applied to sparse data without ad hoc correction.
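The generative side of the bivariate binomial model can be sketched directly: study-level logit-sensitivity and logit-specificity are drawn from a correlated normal pair, and the observed counts are binomial. All hyperparameter values below are illustrative:

```python
import math
import random

def simulate_study(rng, mu_se=1.5, mu_sp=2.0, sd=0.5, rho=-0.4, n=100):
    """Generate one study from a bivariate binomial meta-analysis model:
    (logit Se, logit Sp) are correlated normals across studies; true
    positives and true negatives are binomial counts. Hyperparameters
    (means, sd, correlation, study size n) are illustrative."""
    z1 = rng.gauss(0.0, 1.0)
    z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
    se = 1.0 / (1.0 + math.exp(-(mu_se + sd * z1)))   # study sensitivity
    sp = 1.0 / (1.0 + math.exp(-(mu_sp + sd * z2)))   # study specificity
    tp = sum(rng.random() < se for _ in range(n))     # n diseased subjects
    tn = sum(rng.random() < sp for _ in range(n))     # n healthy subjects
    return tp, tn, se, sp

rng = random.Random(3)
studies = [simulate_study(rng) for _ in range(25)]
avg_se = sum(tp for tp, _, _, _ in studies) / (25 * 100)
```

Bayesian estimation (e.g., the Gibbs sampling mentioned in the abstract) inverts this process: given the counts, it recovers the population means, variances and the between-outcome correlation.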
Abstract:
Objective. To measure the demand for primary care and its associated factors by building and estimating a demand model of primary care in urban settings. Data source. Secondary data from the 2005 California Health Interview Survey (CHIS 2005), a population-based random-digit-dial telephone survey conducted by the UCLA Center for Health Policy Research in collaboration with the California Department of Health Services and the Public Health Institute between July 2005 and April 2006. Study design. A literature review was done to specify the demand model by identifying relevant predictors and indicators. CHIS 2005 data were utilized for demand estimation. Analytical methods. Probit regression was used to estimate the use/non-use equation, and negative binomial regression was applied to the utilization equation with its non-negative integer dependent variable. Results. The model included two equations, in which the use/non-use equation explained the probability of making a doctor visit in the past twelve months and the utilization equation estimated the demand for primary care conditional on at least one visit. Among independent variables, wage rate and income did not affect the primary care demand, whereas age had a negative effect on demand. People with college and graduate educational levels were associated with 1.03 (p < 0.05) and 1.58 (p < 0.01) more visits, respectively, compared to those with no formal education. Insurance was significantly and positively related to the demand for primary care (p < 0.01). Need-for-care variables exhibited positive effects on demand (p < 0.01): existence of chronic disease was associated with 0.63 more visits, disability status was associated with 1.05 more visits, and people with poor health status had 4.24 more visits than those with excellent health status. Conclusions. The average probability of visiting doctors in the past twelve months was 85% and the average number of visits was 3.45.
The study emphasized the importance of need variables in explaining healthcare utilization, as well as the impact of insurance, employment and education on demand. The two-equation model of decision-making, estimated with probit and negative binomial regressions, was a useful approach to demand estimation for primary care in urban settings.
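The two-part structure can be sketched generatively: a probit equation decides any use, and a negative binomial (here built as a gamma-Poisson mixture) generates visit counts among users. The coefficients are hypothetical, not the CHIS 2005 estimates, and zero truncation is approximated crudely by bumping zeros to one:

```python
import math
import random

def poisson(rng, lam):
    """Knuth's Poisson sampler; adequate for the small means used here."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def norm_cdf(x):
    """Standard normal CDF, used as the probit link."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def simulate_visits(rng, insured, chronic):
    """Two-part demand model sketch with hypothetical coefficients."""
    p_use = norm_cdf(0.6 + 0.5 * insured + 0.4 * chronic)  # probit: any use?
    if rng.random() >= p_use:
        return 0
    mu = math.exp(0.8 + 0.2 * insured + 0.3 * chronic)     # conditional mean
    shape = 1.5                                            # NB dispersion
    lam = rng.gammavariate(shape, mu / shape)              # gamma mixing
    return max(1, poisson(rng, lam))                       # crude truncation

rng = random.Random(11)
sample = [simulate_visits(rng, insured=1, chronic=1) for _ in range(3000)]
```

Estimation runs the other way: the probit part is fit on the use/non-use indicator and the negative binomial part on the positive counts, exactly the split described in the abstract.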
Abstract:
Patients who had started HAART (Highly Active Anti-Retroviral Treatment) under the previous, aggressive DHHS guidelines (1997) underwent life-long continuous HAART that was associated with many short-term as well as long-term complications. Many interventions attempted to reduce those complications, including intermittent treatment, also called pulse therapy. Many studies have examined the determinants of the rate of fall in CD4 count after interruption, as these data would help guide treatment interruptions. The data set used here was part of a cohort study taking place at the Johns Hopkins AIDS service since January 1984, in which the data were collected both prospectively and retrospectively. The patients in this data set consisted of 47 patients receiving pulse therapy with the aim of reducing long-term complications. The aim of this project was to study the impact of virologic and immunologic factors on the rate of CD4 loss after treatment interruption. The exposure variables under investigation included CD4 cell count and viral load at treatment initiation. The rates of change of CD4 cell count after treatment interruption were estimated from observed data using advanced longitudinal data analysis methods (i.e., a linear mixed model). Random effects accounted for repeated measures of CD4 per person after treatment interruption. The regression coefficient estimates from the model were then used to produce subject-specific rates of CD4 change accounting for group trends in change. The exposure variables of interest were age, race, gender, CD4 cell count and HIV RNA level at HAART initiation. The rate of fall of CD4 count did not depend on CD4 cell count or viral load at initiation of treatment. Thus these factors may not be used to determine who can have a chance of successful treatment interruption.
CD4 and viral load were further examined using t-tests and ANOVA after grouping based on medians and quartiles to detect any difference in mean rates of CD4 fall after interruption. There was no significant difference between the groups, suggesting no association between the rate of fall of CD4 after treatment interruption and the above-mentioned exposure variables.
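In the spirit of the subject-specific rates described above, a simplified sketch: simulate patients with random CD4 slopes and recover each patient's rate by per-subject least squares. This drops the mixed model's shrinkage toward the group trend, and all values (47 patients, monthly visits, slope and noise scales) are illustrative:

```python
import random

def ols_slope(ts, ys):
    """Ordinary least-squares slope of y on t for one subject."""
    n = len(ts)
    mt, my = sum(ts) / n, sum(ys) / n
    num = sum((t - mt) * (y - my) for t, y in zip(ts, ys))
    den = sum((t - mt) ** 2 for t in ts)
    return num / den

rng = random.Random(5)
true_slopes, est_slopes = [], []
for _ in range(47):                        # one trajectory per patient
    slope = rng.gauss(-15.0, 5.0)          # CD4 cells lost per month (toy)
    times = list(range(8))                 # 8 monthly CD4 measurements
    counts = [600.0 + slope * t + rng.gauss(0.0, 15.0) for t in times]
    true_slopes.append(slope)
    est_slopes.append(ols_slope(times, counts))

# agreement between true and recovered subject-specific rates
mt = sum(true_slopes) / len(true_slopes)
me = sum(est_slopes) / len(est_slopes)
cov = sum((a - mt) * (b - me) for a, b in zip(true_slopes, est_slopes))
var_t = sum((a - mt) ** 2 for a in true_slopes)
var_e = sum((b - me) ** 2 for b in est_slopes)
corr = cov / (var_t * var_e) ** 0.5
```

A linear mixed model improves on this by pooling information across patients, which stabilizes the individual slopes when a patient has few measurements.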
Abstract:
The problem of analyzing data with updated measurements in the time-dependent proportional hazards model arises frequently in practice. One available option is to reduce the number of intervals (or updated measurements) to be included in the Cox regression model. We empirically investigated the bias of the estimator of the coefficient of the time-dependent covariate while varying the failure rate, sample size, true values of the parameters and the number of intervals. We also evaluated how often a time-dependent covariate needs to be collected and assessed the effect of sample size and failure rate on the power of testing a time-dependent effect. A time-dependent proportional hazards model with two binary covariates was considered. The time axis was partitioned into k intervals. The baseline hazard was assumed to be 1 so that the failure times were exponentially distributed in the ith interval. A type II censoring model was adopted to characterize the failure rate. The factors of interest were sample size (500, 1000), type II censoring with failure rates of 0.05, 0.10, and 0.20, and three values for each of the non-time-dependent and time-dependent covariates (1/4, 1/2, 3/4). The mean bias of the estimator of the coefficient of the time-dependent covariate decreased as sample size and the number of intervals increased, whereas it increased as the failure rate and the true values of the covariates increased. The mean bias was smallest when all of the updated measurements were used in the model, compared with two models that used selected measurements of the time-dependent covariate. For the model that included all the measurements, the coverage rates of the estimator of the coefficient of the time-dependent covariate were in most cases 90% or more, except when the failure rate was high (0.20).
The power associated with testing a time-dependent effect was highest when all of the measurements of the time-dependent covariate were used. An example from the Systolic Hypertension in the Elderly Program Cooperative Research Group is presented.
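The type II censoring scheme used in these simulations can be sketched directly: with unit-rate exponential failure times (matching the baseline hazard of 1), observation stops at the k-th failure, so the chosen failure rate fixes the number of observed events exactly:

```python
import random

def type2_censored_sample(n, failure_rate, seed=0):
    """Type II censoring: follow-up ends at the k-th failure, where
    k = n * failure_rate, so exactly k events are observed and the
    remaining n - k subjects are censored at the k-th order statistic.
    Failure times are unit-rate exponential."""
    rng = random.Random(seed)
    k = int(n * failure_rate)
    times = sorted(rng.expovariate(1.0) for _ in range(n))
    cutoff = times[k - 1]                      # time of the k-th failure
    return [(min(t, cutoff), t <= cutoff) for t in times]  # (time, event)

data = type2_censored_sample(n=1000, failure_rate=0.10)
events = sum(e for _, e in data)
```

This is why failure rate, rather than a fixed censoring time, is the controlled factor in the design: the event count is deterministic under type II censoring.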
Abstract:
The silicoflagellate and ebridian assemblages in early middle Eocene Arctic cores obtained by IODP Expedition 302 (ACEX) were studied in order to decipher the paleoceanography of the upper water column. The assemblages in Lithologic Unit 2 (49.7-45.1 Ma), one of the biosiliceous intervals, were usually endemic as compared to the assemblages that occurred outside of the Arctic Ocean. The presence of these endemic assemblages is probably due to a unique environmental setting, controlled by the degree of mixing between the low-salinity Arctic waters and relatively high salinity waters supplied from outside the Arctic Ocean, such as the Atlantic and possibly the Western Siberian Sea. Using the basin-to-basin fractionation model, the early middle Eocene Arctic Ocean corresponds to an estuarine circulation type, which includes the modern-day Black Sea. The abundant down-core occurrence of ebridians strongly suggests the past presence of low-salinity waters, and may indicate that low oxygen concentrations prevailed in the euphotic layer, on the basis of the ecology of the modern ebridian Hermesinum adriaticum.
Abstract:
The global aerosol/climate model ECHAM5-HAM is used to investigate the dust cycle for four interglacial and one glacial climate conditions. The 20-year time-slices are the pre-industrial control (CTRL), mid-Holocene (6000 years BP), last glacial inception (115000 years BP), Eemian (126000 years BP) and Last Glacial Maximum (LGM) (21000 years BP) time intervals. The study is focused on the Antarctic region. The model is able to reproduce the order of magnitude of dust deposition globally for the pre-industrial and LGM climates. The correlation coefficient between the natural logarithms of the observed and modeled values is 0.78 for the CTRL and 0.81 for the LGM. For the pre-industrial simulation the model overestimates observed values in Antarctica by a factor of about 2-3 due to overestimation of the Australian dust source and too much wet deposition in the Antarctic interior. In the LGM, the model underestimates dust deposition in eastern Antarctica by a factor of about 4-5 due to underestimation of the South American dust source. More records are needed to validate dust deposition for the past interglacial time-slices. The modeled results show that dust deposition in Antarctica in the past interglacial time-slices is higher than in the CTRL simulation. The largest increase of dust deposition in Antarctica is simulated for the LGM, showing about a 10-fold increase compared to CTRL.
Abstract:
The Armington Assumption in the context of multi-regional CGE models is commonly interpreted as follows: the same commodities with different origins are imperfect substitutes for each other. In this paper, a static spatial CGE model that is compatible with this assumption and explicitly considers the transport sector and regional price differentials is formulated. Trade coefficients, which are derived endogenously from the optimization behaviors of firms and households, are shown to take the form of a potential function. To investigate how the elasticity of substitution affects equilibrium solutions, a simpler version of the model that incorporates three regions and two sectors (besides the transport sector) is introduced. Results indicate: (1) if commodities produced in different regions are perfect substitutes, regional economies will be either autarkic or completely symmetric, and (2) if they are imperfect substitutes, the impact of the elasticity on the price equilibrium system as well as on trade coefficients will be nonlinear and sometimes very sensitive.
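Although the paper's trade coefficients emerge from a full spatial CGE model, their CES/Armington flavor can be illustrated with a toy import-share computation, assuming iceberg-style transport cost factors (all numbers hypothetical):

```python
def trade_coefficients(prices, transport_factors, sigma):
    """CES/Armington import shares for one destination region: goods from
    different origins are imperfect substitutes with elasticity sigma.
    Delivered price = mill price * transport factor (iceberg-style).
    Shares are proportional to delivered price to the power (1 - sigma)."""
    delivered = [p * t for p, t in zip(prices, transport_factors)]
    weights = [d ** (1.0 - sigma) for d in delivered]
    total = sum(weights)
    return [w / total for w in weights]

# three origin regions; raising sigma concentrates trade on the cheapest origin
shares_low = trade_coefficients([1.0, 1.1, 1.2], [1.0, 1.05, 1.1], sigma=2.0)
shares_high = trade_coefficients([1.0, 1.1, 1.2], [1.0, 1.05, 1.1], sigma=8.0)
```

This mirrors the paper's second finding: the sensitivity of trade coefficients to prices grows nonlinearly with the elasticity of substitution, and in the perfect-substitutes limit all trade collapses onto the cheapest origin.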