898 results for Exponential Random Graph Model
Abstract:
In epidemiologic studies, measurement error in dietary variables often attenuates the association between dietary intake and disease occurrence. Regression calibration is commonly used to adjust for this attenuation, but it requires unbiased reference measurements. Short-term reference measurements for foods that are not consumed daily contain excess zeroes that pose challenges for the calibration model. We adapted the two-part regression calibration model, initially developed for multiple replicates of reference measurements per individual, to a single-replicate setting. We showed how to handle excess zero reference measurements with a two-step modeling approach, how to explore heteroscedasticity in the consumed amount with a variance-mean graph, how to explore nonlinearity with generalized additive modeling (GAM) and the empirical logit approach, and how to select covariates in the calibration model. The performance of the two-part calibration model was compared with its one-part counterpart. We used vegetable intake and mortality data from the European Prospective Investigation into Cancer and Nutrition (EPIC) study, in which reference measurements were taken with 24-hour recalls. For each of the three vegetable subgroups assessed separately, correcting for error with an appropriately specified two-part calibration model resulted in an approximately threefold increase in the strength of association with all-cause mortality, as measured by the log hazard ratio. We further found that the standard way of including covariates in the calibration model can lead to overfitting the two-part calibration model, and that the extent of adjustment for error is influenced by the number and forms of covariates in the calibration model. For episodically consumed foods, we advise researchers to pay special attention to the response distribution, nonlinearity, and covariate inclusion when specifying the calibration model.
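The attenuation-and-correction mechanism described above can be sketched with simulated data (all distributions and coefficients below are illustrative assumptions, not EPIC values): regression calibration divides the naive, attenuated slope by the calibration slope that relates the error-prone measure to the unbiased reference.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
true = rng.normal(5.0, 1.0, n)            # hypothetical true (log) intake
ffq = true + rng.normal(0.0, 1.0, n)      # error-prone questionnaire measure
ref = true + rng.normal(0.0, 0.5, n)      # unbiased short-term reference
y = 2.0 * true + rng.normal(0.0, 1.0, n)  # outcome driven by true intake

naive = np.cov(ffq, y)[0, 1] / np.var(ffq, ddof=1)  # attenuated slope (~1.0)
lam = np.cov(ffq, ref)[0, 1] / np.var(ffq, ddof=1)  # calibration slope (~0.5)
corrected = naive / lam                             # recovers the true slope (~2.0)
```

With half the questionnaire's variance coming from error, the naive slope is attenuated by a factor of two, and dividing by the calibration slope undoes it; the paper's two-part extension additionally models the zero/non-zero consumption indicator before calibrating the consumed amount.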
Abstract:
BACKGROUND: Physicians need a specific risk-stratification tool to facilitate safe and cost-effective approaches to the management of patients with cancer and acute pulmonary embolism (PE). The objective of this study was to develop a simple risk score for predicting 30-day mortality in patients with PE and cancer by using measures readily obtained at the time of PE diagnosis. METHODS: Investigators randomly allocated 1,556 consecutive patients with cancer and acute PE from the international multicenter Registro Informatizado de la Enfermedad TromboEmbólica to derivation (67%) and internal validation (33%) samples. The external validation cohort for this study consisted of 261 patients with cancer and acute PE. Investigators compared 30-day all-cause mortality and nonfatal adverse medical outcomes across the derivation and two validation samples. RESULTS: In the derivation sample, multivariable analyses produced the risk score, which contained six variables: age > 80 years, heart rate ≥ 110/min, systolic BP < 100 mm Hg, body weight < 60 kg, recent immobility, and presence of metastases. In the internal validation cohort (n = 508), the 22.2% of patients (113 of 508) classified as low risk by the prognostic model had a 30-day mortality of 4.4% (95% CI, 0.6%-8.2%) compared with 29.9% (95% CI, 25.4%-34.4%) in the high-risk group. In the external validation cohort, the 18% of patients (47 of 261) classified as low risk by the prognostic model had a 30-day mortality of 0%, compared with 19.6% (95% CI, 14.3%-25.0%) in the high-risk group. CONCLUSIONS: The developed clinical prediction rule accurately identifies low-risk patients with cancer and acute PE.
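As a sketch of how such a bedside rule might be applied: the summary above lists the six predictors but not their point weights or the low-risk cutoff, so equal one-point weights and a zero-point low-risk class are assumptions made for illustration only.

```python
def pe_cancer_risk_score(age, heart_rate, sbp, weight_kg, immobile, metastases):
    """Count the six predictors from the derivation model.
    One point each is an illustrative assumption; the published rule's
    exact weights may differ."""
    points = 0
    points += int(age > 80)
    points += int(heart_rate >= 110)   # beats/min
    points += int(sbp < 100)           # systolic BP, mm Hg
    points += int(weight_kg < 60)
    points += int(bool(immobile))      # recent immobility
    points += int(bool(metastases))
    return points

def risk_class(points):
    # assumed cutoff: zero points = low risk
    return "low" if points == 0 else "high"
```

A patient with no risk factors scores 0 and would fall into the low-risk class that had 0% 30-day mortality in the external validation cohort.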
Abstract:
In medical imaging, merging automated segmentations obtained from multiple atlases has become a standard practice for improving accuracy. In this letter, we propose two new fusion methods: "Global Weighted Shape-Based Averaging" (GWSBA) and "Local Weighted Shape-Based Averaging" (LWSBA). These methods extend the well-known Shape-Based Averaging (SBA) by additionally incorporating similarity information between the reference (i.e., atlas) images and the target image to be segmented. We also propose a new spatially varying, similarity-weighted neighborhood prior model and an edge-preserving smoothness term that can be used with many existing fusion methods. We first present our new Markov Random Field (MRF)-based fusion framework that models the above information. The proposed methods are evaluated on segmentation of lymph nodes in 3D CT images of the head and neck, and both yielded more accurate segmentations than the existing SBA.
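A minimal sketch of the weighted shape-based averaging idea: average signed distance maps of the atlas label masks, weighted by a per-atlas similarity score, and threshold at zero. This uses 2D binary masks, a brute-force distance transform, and a single global weight per atlas, which are simplifications of the paper's 3D MRF framework.

```python
import numpy as np

def signed_distance(mask):
    """Brute-force signed Euclidean distance map: negative inside the
    object, positive outside (adequate for small illustrative images)."""
    h, w = mask.shape
    yy, xx = np.mgrid[:h, :w]
    pts = np.stack([yy.ravel(), xx.ravel()], axis=1).astype(float)
    inside = np.argwhere(mask).astype(float)
    outside = np.argwhere(~mask).astype(float)

    def min_dist(targets):
        d = np.sqrt(((pts[:, None, :] - targets[None, :, :]) ** 2).sum(-1))
        return d.min(axis=1).reshape(h, w)

    return np.where(mask, -min_dist(outside), min_dist(inside))

def weighted_sba(masks, weights):
    """Globally weighted shape-based averaging: average the signed
    distance maps with per-atlas similarity weights, then take the
    zero sublevel set as the fused segmentation."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    fused = sum(wi * signed_distance(m) for wi, m in zip(w, masks))
    return fused < 0
```

Plain SBA is the special case of equal weights; the paper's LWSBA variant would make the weights vary spatially rather than per atlas.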
Abstract:
BACKGROUND: Risks of significant infant drug exposure through breast milk are poorly defined for many drugs, and large-scale population data are lacking. We used population pharmacokinetic (PK) modeling to predict fluoxetine exposure levels of infants via mother's milk in a simulated population of 1000 mother-infant pairs. METHODS: Using our original data on fluoxetine PK in 25 breastfeeding women, a population PK model was developed with NONMEM, and parameters, including milk concentrations, were estimated. An exponential distribution model was used to account for individual variation. Simulation with random and distribution-constrained assignment of doses, dosing times, feeding intervals, and milk volumes was conducted to generate 1000 mother-infant pairs with characteristics such as the steady-state serum concentration (Css) and the infant dose relative to the maternal weight-adjusted dose (relative infant dose: RID). Full bioavailability and a conservative point estimate of 1-month-old infant CYP2D6 activity at 20% of the adult value (adjusted by weight), according to a recent study, were assumed for infant Css calculations. RESULTS: A linear 2-compartment model was selected as the best model. Derived parameters, including milk-to-plasma ratios (mean: 0.66; SD: 0.34; range: 0-1.1), were consistent with values reported in the literature. The estimated RID was below 10% in >95% of infants. The model-predicted median infant-mother Css ratio was 0.096 (range 0.035-0.25); the literature-reported mean is 0.07 (range 0-0.59). Moreover, the predicted incidence of an infant-mother Css ratio >0.2 was less than 1%. CONCLUSION: Our in silico model prediction is consistent with clinical observations, suggesting that substantial systemic fluoxetine exposure in infants through human milk is rare, but further analysis should include active metabolites. Our approach may be valid for other drugs. [Supported by CIHR and the Swiss National Science Foundation (SNSF)]
Abstract:
In the framework of the classical compound Poisson process in collective risk theory, we study a modification of the horizontal dividend barrier strategy by introducing random observation times at which dividends can be paid and ruin can be observed. This model contains both the continuous-time and the discrete-time risk model as a limit and represents a certain type of bridge between them which still enables the explicit calculation of moments of total discounted dividend payments until ruin. Numerical illustrations for several sets of parameters are given and the effect of random observation times on the performance of the dividend strategy is studied.
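The model's key mechanism, that dividends are paid and ruin is checked only at randomly spaced observation times, can be sketched with a small Monte Carlo simulation (all parameter values below are illustrative, and the paper computes the corresponding moments explicitly rather than by simulation):

```python
import math
import random

def discounted_dividends(c=1.5, lam=1.0, mean_claim=1.0, barrier=5.0,
                         obs_rate=2.0, delta=0.03, x0=5.0,
                         horizon=200.0, seed=1):
    """One Monte Carlo path of total discounted dividends until observed
    ruin. Premiums accrue at rate c, claims arrive at Poisson rate lam
    with exponential sizes, and the surplus is inspected only at
    Poissonian observation times (rate obs_rate): there, any surplus
    above the barrier is paid out as a dividend, and ruin is declared
    only if the observed surplus is negative."""
    rng = random.Random(seed)
    t, x, total = 0.0, x0, 0.0
    while t < horizon:
        t_claim = rng.expovariate(lam)
        t_obs = rng.expovariate(obs_rate)
        dt = min(t_claim, t_obs)
        t += dt
        x += c * dt                                  # continuous premiums
        if t_claim < t_obs:                          # a claim occurs
            x -= rng.expovariate(1.0 / mean_claim)
        else:                                        # an observation time
            if x < 0:
                break                                # ruin is observed
            if x > barrier:
                total += math.exp(-delta * t) * (x - barrier)
                x = barrier
    return total
```

As `obs_rate` grows, the strategy approaches the classical continuous-time barrier strategy; as it shrinks, the model behaves like a discrete-time one, which is exactly the bridge described in the abstract.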
Abstract:
In Quantitative Microbial Risk Assessment, it is vital to understand how lag times of individual cells are distributed over a bacterial population. Such identified distributions can be used to predict the time by which, in a growth-supporting environment, a few pathogenic cells can multiply to a poisoning concentration level. We model the lag time of a single cell, inoculated into a new environment, by the delay of the growth function characterizing the generated subpopulation. We introduce an easy-to-implement procedure, based on the method of moments, to estimate the parameters of the distribution of single cell lag times. The advantage of the method is especially apparent for cases where the initial number of cells is small and random, and the culture is detectable only in the exponential growth phase.
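The method-of-moments step can be sketched as follows, assuming for illustration that single-cell lag times follow a gamma law (the abstract does not name the distribution; matching the first two sample moments to the parameters is the core of the technique):

```python
import random
import statistics

def gamma_moments(samples):
    """Method-of-moments fit of a gamma(shape, scale) distribution:
    match the sample mean m and variance v, since for a gamma law
    m = shape * scale and v = shape * scale**2."""
    m = statistics.fmean(samples)
    v = statistics.variance(samples)
    return m * m / v, v / m  # (shape, scale)

# Simulated single-cell lag times with known parameters (illustrative)
rng = random.Random(42)
lags = [rng.gammavariate(4.0, 0.5) for _ in range(5000)]
shape_hat, scale_hat = gamma_moments(lags)
```

In the paper's setting the individual lag times are not observed directly; they are inferred from the delay of the subpopulation growth curves, and the moments are then matched in the same way.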
Abstract:
We asked whether locally applied recombinant Bone Morphogenetic Protein-2 (rh-BMP-2) with an absorbable Type I collagen sponge (ACS) carrier could enhance the consolidation phase in a callotasis model. We performed unilateral transverse osteotomy of the tibia in 21 immature male rabbits. After a latency period of 7 days, a 3-week distraction period was begun at a rate of 0.5 mm/12 h. At the end of the distraction period (Day 28), animals were randomly divided into three groups and underwent a second surgical procedure: 6 rabbits in Group I (control group; the callus was exposed and nothing was added), 6 rabbits in Group II (ACS group; receiving the absorbable collagen sponge soaked with saline), and 9 rabbits in Group III (rh-BMP-2/ACS group; receiving the ACS soaked with 100 μg/kg of rh-BMP-2, Inductos(®), Medtronic). Starting on Day 28, we assessed quantitative and qualitative radiographic parameters as well as densitometric parameters every two weeks (Days 28, 42, 56, 70, and 84). Animals were sacrificed after 8 weeks of consolidation (Day 84). Qualitative radiographic evaluation revealed hypertrophic calluses in the Group III animals. The rh-BMP-2/ACS also influenced the development of the cortex of the calluses, as shown by the modified radiographic patterns in Group III compared with Groups I and II. Densitometric analysis revealed that bone mineral content (BMC) was significantly higher in the rh-BMP-2/ACS-treated animals (Group III).
Abstract:
PURPOSE: To compare the effect of a rat anti-VEGF antibody, administered by either the topical or the subconjunctival (SC) route, in a rat model of corneal transplant rejection. METHODS: Twenty-four rats underwent corneal transplantation and were randomized into four treatment groups (n=6 in each group). G1 and G2 received six SC injections (0.02 ml, 10 µg/ml) of denatured (G1) or active (G2) anti-VEGF from Day 0 to Day 21, every third day. G3 and G4 were instilled three times a day with denatured (G3) or active (G4) anti-VEGF drops (10 µg/ml) from Day 0 to Day 21. Corneal mean clinical scores (MCSs) of edema (E), transparency (T), and neovessels (nv) were recorded on Days 3, 9, 15, and 21. Quantification of neovessels was performed after lectin staining of vessels on flat-mounted corneas. RESULTS: Twenty-one days after surgery, MCSs differed significantly between G1 and G2, but not between G3 and G4, and the rejection rate was significantly reduced in rats receiving active antibodies regardless of the route of administration (G2=50%, G4=66.65% versus G1 and G3=100%; p<0.05). The mean surfaces of neovessels were significantly reduced in the groups treated with active anti-VEGF (G2, G4). However, anti-VEGF therapy did not completely suppress corneal neovessels. CONCLUSIONS: Specific rat anti-VEGF antibodies significantly reduced neovascularization and subsequent corneal graft rejection. The SC administration of the anti-VEGF antibody was more effective than topical instillation.
Abstract:
An incentives-based theory of policing is developed that can explain the phenomenon of random "crackdowns," i.e., intermittent periods of high interdiction/surveillance. For a variety of police objective functions, random crackdowns can be part of the optimal monitoring strategy. We demonstrate support for the implications of the crackdown theory using traffic data gathered by the Belgian Police Department, and we use the model to estimate the deterrence effect of additional resources spent on speeding interdiction.
Abstract:
A mixture of 3 MAbs directed against 3 different CEA epitopes was radiolabelled with 131I and used for the treatment of a human colon carcinoma transplanted s.c. into nude mice. Intact MAbs and F(ab')2 fragments were mixed because it had been shown by autoradiography that these 2 antibody forms can penetrate into different areas of the tumor nodule. Ten days after transplantation of colon tumor T380 a single dose of 600 microCi of 131I MAbs was injected i.v. The tumor grafts were well established (as evidenced by exponential growth in untreated mice) and their size continued to increase up to 6 days after radiolabelled antibody injection. Tumor shrinking was then observed lasting for 4-12 weeks. In a control group injected with 600 microCi of 131I coupled to irrelevant monoclonal IgG, tumor growth was delayed, but no regression was observed. Tumors of mice injected with the corresponding amount of unlabelled antibodies grew like those of untreated mice. Based on measurements of the effective whole-body half-life of injected 131I, the mean radiation dose received by the animals was calculated to be 382 rads for the antibody group and 478 rads for the normal IgG controls. The genetically immunodeficient animals exhibited no increase in mortality, and only limited bone-marrow toxicity was observed. Direct measurement of radioactivity in mice dissected 1, 3 and 7 days after 131I-MAb injection showed that 25, 7.2 and 2.2% of injected dose were recovered per gram of tumor, the mean radiation dose delivered to the tumor being thus more than 5,000 rads. These experiments show that therapeutic doses of radioactivity can be selectively directed to human colon carcinoma by i.v. injection of 131I-labelled anti-CEA MAbs.
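The whole-body dose calculation above rests on the time-integrated activity of a mono-exponentially cleared isotope; a minimal sketch of that step (the conversion from cumulated activity to absorbed dose in rads depends on animal- and geometry-specific energy-deposition factors, which are therefore left out):

```python
import math

def cumulated_activity(a0, t_eff_half):
    """Time-integrated activity for mono-exponential clearance with
    effective half-life t_eff_half: the integral of
    a0 * exp(-ln(2) * t / t_eff_half) over t >= 0, in units of a0
    times the time unit of t_eff_half."""
    return a0 * t_eff_half / math.log(2)
```

Absorbed dose is this cumulated activity times an energy-deposition factor, so for the same injected 600 microCi, a longer effective half-life yields a larger whole-body dose, consistent with the control IgG group (478 rads) exceeding the antibody group (382 rads).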
Abstract:
Random coefficient regression models have been applied in different fields and they constitute a unifying setup for many statistical problems. The nonparametric study of this model started with Beran and Hall (1992) and it has become a fruitful framework. In this paper we propose and study statistics for testing a basic hypothesis concerning this model: the constancy of coefficients. The asymptotic behavior of the statistics is investigated and bootstrap approximations are used in order to determine the critical values of the test statistics. A simulation study illustrates the performance of the proposals.
Abstract:
This paper proposes a common and tractable framework for analyzing different definitions of fixed and random effects in a constant-slope, variable-intercept model. It is shown that, regardless of whether effects (i) are treated as parameters or as an error term, (ii) are estimated in different stages of a hierarchical model, or (iii) are allowed to be correlated with the regressors, when the same information on effects is introduced into all estimation methods, the resulting slope estimator is the same across methods. If different methods produce different results, it is ultimately because different information is being used in each method.
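The central claim — same information on the effects, same slope estimator — is illustrated by its best-known special case: treating the intercepts as parameters (dummy variables) and sweeping them out by group-demeaning yield numerically identical slopes. A sketch with simulated data (all numbers illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
groups, per = 6, 30
g = np.repeat(np.arange(groups), per)
alpha = rng.normal(0.0, 2.0, groups)              # group intercepts ("effects")
x = rng.normal(0.0, 1.0, groups * per)
y = alpha[g] + 1.7 * x + rng.normal(0.0, 0.5, x.size)

# (a) dummy-variable (LSDV) estimator: effects treated as parameters
D = np.eye(groups)[g]                             # one dummy column per group
X = np.column_stack([x, D])
beta_lsdv = np.linalg.lstsq(X, y, rcond=None)[0][0]

# (b) within estimator: effects removed by group-demeaning
def demean(v):
    return v - (np.bincount(g, weights=v) / per)[g]

beta_within = (demean(x) @ demean(y)) / (demean(x) @ demean(x))
```

The two estimates agree to machine precision, by the Frisch-Waugh-Lovell theorem: both use exactly the same information about the intercepts.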
Abstract:
In many research areas (such as public health and environmental contamination), one often needs to use data to infer whether some proportion of a population of interest lies below and/or above some threshold, through the computation of a tolerance interval. The idea is that, once a threshold is given, one computes the tolerance interval or limit (which may be one- or two-sided) and then checks whether it satisfies the given threshold. Since in this work we deal with the computation of one-sided tolerance intervals, for the two-sided case we refer the reader to, for instance, Krishnamoorthy and Mathew [5]. Krishnamoorthy and Mathew [4] computed upper tolerance limits in balanced and unbalanced one-way random effects models, whereas Fonseca et al. [3] did so, based on similar ideas, in a two-way nested mixed or random effects model. In the random effects case, Fonseca et al. [3] computed such intervals only for balanced data, whereas in the mixed effects case they did so only for unbalanced data. For the computation of two-sided tolerance intervals in models with mixed and/or random effects we refer to, for instance, Sharma and Mathew [7]. The purpose of this paper is the computation of upper and lower tolerance intervals in a two-way nested mixed effects model with balanced data. For unbalanced data, as mentioned above, Fonseca et al. [3] have already computed the upper tolerance interval. Hence, using the notions presented in Fonseca et al. [3] and Krishnamoorthy and Mathew [4], we present some results on the construction of one-sided tolerance intervals for the balanced case: we first perform the construction for the upper limit, and then for the lower one.
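For intuition, here is the simplest instance of a one-sided (p, γ) tolerance limit — an i.i.d. normal sample, using the standard Natrella approximation to the one-sided tolerance factor. The paper's nested mixed-model setting replaces the single sample variance with combinations of mean squares, which this sketch does not attempt.

```python
from statistics import NormalDist

def upper_tolerance_limit(sample_mean, sample_sd, n, p=0.95, conf=0.95):
    """One-sided upper (p, conf) tolerance limit for a normal sample:
    with confidence conf, at least a proportion p of the population
    lies below the returned value. Uses the Natrella approximation
    k = (z_p + sqrt(z_p**2 - a*b)) / a to the exact noncentral-t factor."""
    zp = NormalDist().inv_cdf(p)
    zg = NormalDist().inv_cdf(conf)
    a = 1.0 - zg ** 2 / (2.0 * (n - 1))
    b = zp ** 2 - zg ** 2 / n
    k = (zp + (zp ** 2 - a * b) ** 0.5) / a
    return sample_mean + k * sample_sd
```

One then checks whether this limit falls below the regulatory threshold; a lower limit is obtained symmetrically as `sample_mean - k * sample_sd`.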
Abstract:
The study of organism movement is essential for understanding ecosystem functioning. In the case of exploited marine ecosystems, this leads to an interest in the spatial strategies of fishers. One of the most widely used approaches for modeling the movement of top predators is the Lévy random walk. A random walk is a mathematical model composed of random displacements; in the Lévy case, the displacement lengths follow a Lévy stable law. In this case as well, the lengths, as they tend to infinity (in practice, when they are large, say relative to the median or the third quartile), follow a power law characteristic of the type of Lévy random walk (Cauchy, Brownian, or strictly Lévy). In practice, besides the fact that this property is invoked in the converse direction without theoretical justification, the distribution tails (themselves an imprecise notion) are modeled by power laws without any discussion of the sensitivity of the results to the definition of the tail, or of the relevance of the goodness-of-fit tests and model selection criteria. In this work, which concerns the observed movements of three Peruvian anchovy fishing vessels, several tail models (log-normal, exponential, truncated exponential, power law, and truncated power law) were compared, together with two possible definitions of the distribution tail (from the median to infinity, or from the third quartile to infinity). In terms of the statistical criteria and tests used, the truncated laws (exponential and power) emerged as the best. They also capture the fact that, in practice, vessels do not exceed a certain displacement length.
Model selection turned out to be sensitive to the choice of the start of the distribution tail: for the same vessel, the choice of one truncated model or the other depends on the interval of variable values over which the model is fitted. Finally, we discuss the ecological implications of these results.
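The model-comparison step can be sketched as follows: define the tail as values above the third quartile (one of the two tail definitions compared here), fit exponential and power-law (Pareto) tail models by maximum likelihood, and compare AICs — a minimal two-model version of the five-model comparison described above.

```python
import math

def tail_fit_aic(steps, q=0.75):
    """Fit exponential and Pareto models to the tail of a step-length
    sample (tail = values above the q-quantile) and return each model's
    AIC; the smaller AIC wins."""
    xs = sorted(steps)
    xmin = xs[int(q * len(xs))]
    tail = [x for x in xs if x > xmin]
    n = len(tail)
    # exponential excess model: MLE rate is 1 / mean excess over xmin
    excess = sum(x - xmin for x in tail)
    lam = n / excess
    ll_exp = n * math.log(lam) - lam * excess
    # Pareto (power-law) tail: Hill-type MLE for the exponent alpha
    logs = sum(math.log(x / xmin) for x in tail)
    alpha = 1.0 + n / logs
    ll_pow = n * math.log((alpha - 1.0) / xmin) - alpha * logs
    # one free parameter each, so AIC = 2 - 2 * log-likelihood
    return {"exponential": 2.0 - 2.0 * ll_exp, "power": 2.0 - 2.0 * ll_pow}
```

Rerunning the comparison with `q=0.5` (tail starting at the median) makes the sensitivity to the tail definition directly visible; truncated variants would add an upper cutoff to each density.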
Abstract:
The life history of the fruit fly (Drosophila melanogaster) is well understood, but fitness components are rarely measured by following single individuals over their lifetime, thereby limiting insights into lifetime reproductive success, reproductive senescence and post-reproductive lifespan. Moreover, most studies have examined long-established laboratory strains rather than freshly caught individuals and may thus be confounded by adaptation to laboratory culture, inbreeding or mutation accumulation. Here, we have followed the life histories of individual females from three recently caught, non-laboratory-adapted wild populations of D. melanogaster. Populations varied in a number of life-history traits, including ovariole number, fecundity, hatchability and lifespan. To describe individual patterns of age-specific fecundity, we developed a new model that allowed us to distinguish four phases during a female's life: a phase of reproductive maturation, followed by a period of linear and then exponential decline in fecundity and, finally, a post-ovipository period. Individual females exhibited clear-cut fecundity peaks, which contrasts with previous analyses, and post-peak levels of fecundity declined independently of how long females lived. Notably, females had a pronounced post-reproductive lifespan, which on average made up 40% of total lifespan. Post-reproductive lifespan did not differ among populations and was not correlated with reproductive fitness components, supporting the hypothesis that this period is a highly variable, random 'add-on' at the end of reproductive life rather than a correlate of selection on reproductive fitness. Most life-history traits were positively correlated, a pattern that might be due to genotype by environment interactions when wild flies are brought into a novel laboratory environment but that is unlikely explained by inbreeding or positive mutational covariance caused by mutation accumulation.
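The four-phase fecundity model can be sketched as a piecewise curve; all breakpoints and rates below are hypothetical values chosen only to illustrate the shape (maturation, linear decline, exponential decline, and a post-ovipository phase of zero egg laying):

```python
import math

def fecundity(t, t_mat=5.0, t_lin=30.0, t_exp=55.0,
              peak=60.0, lin_slope=0.8, decay=0.1):
    """Illustrative age-specific fecundity (eggs/day) at age t days:
    1) linear rise to the peak during reproductive maturation,
    2) linear decline from the peak,
    3) exponential decline,
    4) post-ovipository period with zero fecundity."""
    if t < t_mat:                                   # 1) maturation
        return peak * t / t_mat
    if t < t_lin:                                   # 2) linear decline
        return peak - lin_slope * (t - t_mat)
    f_lin_end = peak - lin_slope * (t_lin - t_mat)
    if t < t_exp:                                   # 3) exponential decline
        return f_lin_end * math.exp(-decay * (t - t_lin))
    return 0.0                                      # 4) post-ovipository
```

Fitting such a curve per female is what allows the post-reproductive lifespan (the time spent in phase 4 before death) to be separated from the reproductive phases.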