907 results for Generalized linear mixed model


Relevance:

100.00%

Publisher:

Abstract:

Objective: The objective of the present study is to test the validity of the integrated cognitive model (ICM) of depression proposed by Kwon and Oei with a Latin-American sample. The ICM of depression postulates that the interaction of negative life events with dysfunctional attitudes increases the frequency of negative automatic thoughts, which in turn affects the depressive symptomatology of a person. The model was developed with Western populations such as Americans and Australians, and its validity has not been tested on Latin-Americans. Method: Participants were 101 Latin-American migrants living permanently in Brisbane, including people from Chile, El Salvador, Nicaragua, Argentina and Guatemala. Participants completed the Beck Depression Inventory, the Dysfunctional Attitudes Scale, the Automatic Thoughts Questionnaire and the Life Events Inventory. Alternative or competing models of depression were examined, including the alternative aetiologies model, the linear mediational model and the symptom model. Results: Six models were tested, and the structural equation modelling analysis indicated that only the symptom model fits the Latin-American data. Conclusions: Results show that, in the Latin-American sample, depression symptoms can have an impact on negative cognitions. This finding adds to growing evidence in the literature that the relationship between cognitions and depression is bidirectional, rather than unidirectional from cognitions to symptoms.
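A minimal sketch (not the authors' code) of how one of the competing path models, the linear mediational model, could be specified and assessed with structural equation modelling using the Python `semopy` package; the column names DAS, LEI, ATQ and BDI are hypothetical stand-ins for the four questionnaires:

```python
# Hypothetical illustration: fit the linear mediational model
# (life events and dysfunctional attitudes -> automatic thoughts
# -> depressive symptoms) and print fit indices for comparison
# against competing models.
import pandas as pd
import semopy

data = pd.read_csv("depression_sample.csv")  # hypothetical dataset

desc = """
ATQ ~ DAS + LEI
BDI ~ ATQ
"""
model = semopy.Model(desc)
model.fit(data)
print(semopy.calc_stats(model))  # CFI, RMSEA, etc. for model comparison
```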

Relevance:

100.00%

Publisher:

Abstract:

A combination of uni- and multiplex PCR assays targeting 58 virulence genes (VGs) associated with Escherichia coli strains causing intestinal and extraintestinal disease in humans and other mammals was used to analyze the VG repertoire of 23 commensal E. coli isolates from healthy pigs and 52 clinical isolates associated with porcine neonatal diarrhea (ND) and postweaning diarrhea (PWD). The relationship between the presence and absence of VGs was interrogated using three statistical methods. According to the generalized linear model, 17 of 58 VGs were found to be significant (P < 0.05) in distinguishing between commensal and clinical isolates. Nine of the 17 genes represented by iha, hlyA, aidA, east1, aah, fimH, iroN(E. coli), traT, and saa have not been previously identified as important VGs in clinical porcine isolates in Australia. The remaining eight VGs code for fimbriae (F4, F5, F18, and F41) and toxins (STa, STh, LT, and Stx2), normally associated with porcine enterotoxigenic E. coli. Agglomerative hierarchical algorithm analysis grouped E. coli strains into subclusters based primarily on their serogroup. Multivariate analyses of clonal relationships based on the 17 VGs were collapsed into two-dimensional space by principal coordinate analysis. PWD clones were distributed in two quadrants, separated from ND and commensal clones, which tended to cluster within one quadrant. Clonal subclusters within quadrants were highly correlated with serogroups. These methods of analysis provide different perspectives in our attempts to understand how commensal and clinical porcine enterotoxigenic E. coli strains have evolved and are engaged in the dynamic process of losing or acquiring VGs within the pig population.
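A minimal sketch, on simulated presence/absence data rather than the study's isolates, of the kind of per-gene screen a generalized linear model supports here: a binomial (logistic) GLM of isolate class on each virulence gene, keeping genes with P < 0.05:

```python
# Simulated stand-in for the study's data: a 0/1 matrix `vg` of gene
# presence (one row per isolate) and a 0/1 vector `clinical`
# (0 = commensal, 1 = clinical). Not the authors' code.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
vg = rng.integers(0, 2, size=(75, 58))      # 75 isolates x 58 VGs
clinical = rng.integers(0, 2, size=75)

significant = []
for j in range(vg.shape[1]):
    X = sm.add_constant(vg[:, j].astype(float))
    fit = sm.GLM(clinical, X, family=sm.families.Binomial()).fit()
    if fit.pvalues[1] < 0.05:               # P < 0.05 threshold, as in the abstract
        significant.append(j)

print(f"{len(significant)} of {vg.shape[1]} genes flagged at P < 0.05")
```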

Relevance:

100.00%

Publisher:

Abstract:

Count data with excess zeros relative to a Poisson distribution are common in many biomedical applications. A popular approach to the analysis of such data is to use a zero-inflated Poisson (ZIP) regression model. Often, because of the hierarchical study design or the data collection procedure, zero-inflation and lack of independence may occur simultaneously, which renders the standard ZIP model inadequate. To account for the preponderance of zero counts and the inherent correlation of observations, a class of multi-level ZIP regression models with random effects is presented. Model fitting is facilitated using an expectation-maximization algorithm, whereas variance components are estimated via residual maximum likelihood estimating equations. A score test for zero-inflation is also presented. The multi-level ZIP model is then generalized to cope with a more complex correlation structure. Application to the analysis of correlated count data from a longitudinal infant feeding study illustrates the usefulness of the approach.
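A minimal sketch of the single-level starting point, on simulated data: statsmodels' ZeroInflatedPoisson fits the kind of ZIP regression described above, though the paper's multi-level extension with random effects is beyond it and would need specialised code:

```python
# Simulate counts with structural zeros, then fit a ZIP regression
# with an intercept-only inflation part. Illustration only.
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)
p_zero = 0.3                                  # structural-zero probability
counts = np.where(rng.random(n) < p_zero, 0,
                  rng.poisson(np.exp(0.5 + 0.8 * x)))

X = sm.add_constant(x)
model = ZeroInflatedPoisson(counts, X, exog_infl=np.ones((n, 1)),
                            inflation='logit')
res = model.fit(disp=False)
print(res.summary())
```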

Relevance:

100.00%

Publisher:

Abstract:

Based on the three-dimensional elastic inclusion model proposed by Dobrovolskii, we developed a rheological inclusion model to study earthquake preparation processes. Using the correspondence principle of rheological mechanics, we derived analytic expressions for the viscoelastic displacements U(r, t), V(r, t) and W(r, t) along the X, Y and Z axes, the normal strains ε_xx(r, t), ε_yy(r, t) and ε_zz(r, t), and the bulk strain θ(r, t) at an arbitrary point (x, y, z), produced by a three-dimensional inclusion in a semi-infinite rheological medium described by the standard linear rheological model. After computing the spatio-temporal variation of the bulk strain at the ground surface produced by such a spherical rheological inclusion, we obtain interesting results: the bulk strain produced by a hard inclusion changes with time through three stages (α, β, γ) with distinct characteristics, similar to geodetic deformation observations but different from the results for a soft inclusion. These theoretical results can be used to explain the spatio-temporal evolution, patterns and quadrant distribution of earthquake precursors, as well as the variability, spontaneity and complexity of short-term and imminent precursors. They offer a theoretical basis for building physical models of earthquake precursors and for predicting earthquakes.
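For orientation, and as generic background rather than the paper's exact parameterisation, the standard linear (Zener) model invoked above has a relaxation modulus of exponential form, and the correspondence principle obtains the viscoelastic solution from the elastic one by substituting the Laplace-domain operator modulus:

```latex
% Zener (standard linear solid) relaxation modulus and the generic
% correspondence-principle substitution; E_0, E_\infty and \tau denote
% the unrelaxed modulus, relaxed modulus and relaxation time.
E(t) = E_\infty + (E_0 - E_\infty)\, e^{-t/\tau},
\qquad
\hat{u}^{\mathrm{ve}}(\mathbf{r}, s)
   = \hat{u}^{\mathrm{el}}\big(\mathbf{r};\, s\hat{E}(s)\big).
```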

Relevance:

100.00%

Publisher:

Abstract:

E. L. DeLosh, J. R. Busemeyer, and M. A. McDaniel (1997) found that when learning a positive, linear relationship between a continuous predictor (x) and a continuous criterion (y), trainees tend to underestimate y on items that ask the trainee to extrapolate. In 3 experiments, the authors examined the phenomenon and found that the tendency to underestimate y is reliable only in the so-called lower extrapolation region, that is, for new values of x that lie between zero and the edge of the training region. Existing models of function learning, such as the extrapolation-association model (DeLosh et al., 1997) and the population of linear experts model (M. L. Kalish, S. Lewandowsky, & J. Kruschke, 2004), cannot account for these results. The authors show that with minor changes, both models can predict the correct pattern of results.

Relevance:

100.00%

Publisher:

Abstract:

Exploratory analysis of data in all sciences seeks to find common patterns to gain insights into the structure and distribution of the data. Typically, visualisation methods like principal components analysis are used, but these methods cannot easily deal with missing data, nor can they capture non-linear structure in the data. One approach to discovering complex, non-linear structure in the data is through the use of linked plots, or brushing, while ignoring the missing data. In this technical report we discuss a complementary approach based on a non-linear probabilistic model. The generative topographic mapping enables the visualisation of the effects of very many variables on a single plot, which can incorporate far more structure than a two-dimensional principal components plot and, at the same time, deal with missing data. We show that the generative topographic mapping provides an optimal method to explore the data while being able to replace missing values in a dataset, particularly where a large proportion of the data is missing.
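To make the idea concrete, here is a toy, self-contained EM implementation of a basic generative topographic mapping; it is a sketch of the standard algorithm rather than the report's code, and it omits the missing-data machinery discussed above:

```python
# Toy GTM: a 2-D grid of latent points is mapped through an RBF basis
# into data space; EM alternates responsibilities (E-step) with a
# regularised least-squares update of the mapping (M-step).
import numpy as np

def gtm_fit(X, grid=10, n_rbf=4, sigma=1.0, lam=1e-3, iters=30):
    N, D = X.shape
    g = np.linspace(-1, 1, grid)
    Z = np.array([[a, b] for a in g for b in g])            # K x 2 latent grid
    c = np.linspace(-1, 1, n_rbf)
    C = np.array([[a, b] for a in c for b in c])            # M x 2 RBF centres
    d2 = ((Z[:, None, :] - C[None, :, :]) ** 2).sum(-1)
    Phi = np.exp(-d2 / (2 * sigma ** 2))                    # K x M basis matrix
    Phi = np.hstack([Phi, np.ones((len(Z), 1))])            # bias column
    W = np.random.default_rng(0).normal(scale=0.1, size=(Phi.shape[1], D))
    beta = 1.0                                              # noise precision
    for _ in range(iters):
        # E-step: responsibility of each grid point for each datum.
        Y = Phi @ W                                         # K x D mixture centres
        dist = ((Y[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # K x N
        R = np.exp(-0.5 * beta * (dist - dist.min(axis=0)))
        R /= R.sum(axis=0)
        # M-step: solve for W, then refresh the noise precision.
        G = np.diag(R.sum(axis=1))
        W = np.linalg.solve(Phi.T @ G @ Phi + (lam / beta) * np.eye(Phi.shape[1]),
                            Phi.T @ (R @ X))
        dist = (((Phi @ W)[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        beta = X.size / (R * dist).sum()
    return R.T @ Z               # posterior-mean 2-D coordinate per data point

coords = gtm_fit(np.random.default_rng(1).normal(size=(200, 5)))
print(coords.shape)              # (200, 2): one plot position per observation
```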

Relevance:

100.00%

Publisher:

Abstract:

For a query submitted to multiple search engines, finding the relevant results is an important task. This paper formulates the problem of aggregating and ranking the results of multiple search engines as a minimax linear programming model. Besides the novel application, this study detects the most relevant information among the returned set of ranked lists of documents retrieved by distinct search engines. Furthermore, two numerical examples are used to illustrate the usefulness of the proposed approach.
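A minimal sketch, under assumptions of my own rather than the paper's exact formulation, of one natural minimax LP for rank aggregation: choose consensus scores s_j that minimise the largest deviation t from any engine's rank, solved with scipy.optimize.linprog:

```python
# Variables x = [s_1 .. s_m, t]; minimise t subject to
# |s_j - r_ij| <= t for every engine i and document j.
import numpy as np
from scipy.optimize import linprog

# ranks[i, j]: rank that engine i assigns document j (invented numbers).
ranks = np.array([[1, 2, 3, 4],
                  [2, 1, 4, 3],
                  [1, 3, 2, 4]], dtype=float)
n_eng, n_doc = ranks.shape

c = np.zeros(n_doc + 1)
c[-1] = 1.0                                   # objective: minimise t
rows, b = [], []
for i in range(n_eng):
    for j in range(n_doc):
        r1 = np.zeros(n_doc + 1); r1[j] = 1.0;  r1[-1] = -1.0   # s_j - t <= r_ij
        r2 = np.zeros(n_doc + 1); r2[j] = -1.0; r2[-1] = -1.0   # -s_j - t <= -r_ij
        rows += [r1, r2]; b += [ranks[i, j], -ranks[i, j]]

res = linprog(c, A_ub=np.vstack(rows), b_ub=np.array(b))
print("consensus order (best first):", np.argsort(res.x[:n_doc]))
```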

Relevance:

100.00%

Publisher:

Abstract:

In this work the solution of a class of capital investment problems is considered within the framework of mathematical programming. On the basis of the net present value criterion, the problems in question are mainly characterized by the fact that the cost of capital is defined as a non-decreasing function of the investment requirements. Capital rationing and some cases of technological dependence are also included, this approach leading to zero-one non-linear programming problems, for which specifically designed solution procedures supported by a general branch and bound development are presented. In the context of both this development and the relevant mathematical properties of the previously mentioned zero-one programs, a generalized zero-one model is also discussed. Finally, a variant of the scheme, connected with the search sequencing of optimal solutions, is presented as an alternative in which reduced storage limitations are encountered.
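A toy branch-and-bound sketch for a zero-one capital-budgeting problem in the spirit of the abstract; it is deliberately simplified (fixed NPVs, a single capital-rationing constraint) and does not model the paper's investment-dependent cost of capital:

```python
# Maximise total NPV of accepted projects subject to a capital budget,
# pruning subtrees whose optimistic bound cannot beat the incumbent.
npv  = [12.0, 9.0, 7.0, 4.0]     # project net present values (invented)
cost = [6.0, 5.0, 4.0, 3.0]      # capital each project requires
budget = 10.0                    # capital-rationing constraint

best = {"value": 0.0, "plan": None}

def bound(i, value, spent):
    # Optimistic bound: add every remaining project that fits on its own.
    return value + sum(v for v, c in zip(npv[i:], cost[i:]) if spent + c <= budget)

def branch(i=0, value=0.0, spent=0.0, plan=()):
    if bound(i, value, spent) <= best["value"]:
        return                                   # prune: cannot beat incumbent
    if i == len(npv):
        best.update(value=value, plan=plan)      # new incumbent solution
        return
    if spent + cost[i] <= budget:                # branch 1: accept project i
        branch(i + 1, value + npv[i], spent + cost[i], plan + (1,))
    branch(i + 1, value, spent, plan + (0,))     # branch 2: reject project i

branch()
print(best)   # {'value': 19.0, 'plan': (1, 0, 1, 0)}
```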

Relevance:

100.00%

Publisher:

Abstract:

Exploratory analysis of data seeks to find common patterns to gain insights into the structure and distribution of the data. In geochemistry it is a valuable means to gain insights into the complicated processes making up a petroleum system. Typically, linear visualisation methods like principal components analysis, linked plots, or brushing are used. These methods cannot be employed directly when data are missing, and while they struggle to capture global non-linear structure in the data, they can do so locally. This thesis discusses a complementary approach based on a non-linear probabilistic model. The generative topographic mapping (GTM) enables the visualisation of the effects of very many variables on a single plot, which can incorporate more structure than a two-dimensional principal components plot. The model can deal with uncertainty and missing data, and allows for the exploration of the non-linear structure in the data. In this thesis a novel approach to initialise the GTM with arbitrary projections is developed. This makes it possible to combine GTM with algorithms like Isomap and to fit complex non-linear structures like the Swiss roll. Another novel extension is the incorporation of prior knowledge about the structure of the covariance matrix; this greatly enhances the modelling capabilities of the algorithm, resulting in a better fit to the data and better imputation of missing data. Additionally, an extensive benchmark study of the missing-data imputation capabilities of GTM is performed. Further, a novel approach based on missing data is introduced to benchmark the fit of probabilistic visualisation algorithms on unlabelled data. Finally, the work is complemented by evaluating the algorithms on real-life datasets from geochemical projects.

Relevance:

100.00%

Publisher:

Abstract:

Exploratory analysis of petroleum geochemical data seeks to find common patterns to help distinguish between different source rocks, oils and gases, and to explain their source, maturity and any intra-reservoir alteration. However, at the outset, one is typically faced with (a) a large matrix of samples, each with a range of molecular and isotopic properties, (b) a spatially and temporally unrepresentative sampling pattern, (c) noisy data and (d) often, a large number of missing values. This inhibits analysis using conventional statistical methods. Typically, visualisation methods like principal components analysis are used, but these methods are not easily able to deal with missing data nor can they capture non-linear structure in the data. One approach to discovering complex, non-linear structure in the data is through the use of linked plots, or brushing, while ignoring the missing data. In this paper we introduce a complementary approach based on a non-linear probabilistic model. Generative topographic mapping enables the visualisation of the effects of very many variables on a single plot, while also dealing with missing data. We show how using generative topographic mapping also provides an optimal method with which to replace missing values in two geochemical datasets, particularly where a large proportion of the data is missing.
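A minimal sketch, with illustrative names of my own rather than the paper's code, of the posterior-mean imputation idea: given a fitted GTM's mixture centres and responsibilities computed from the observed dimensions only, each missing entry is replaced by the responsibility-weighted mean of the centres:

```python
# Y (K x D): mixture centres of an already-fitted GTM; beta: its noise
# precision. Responsibilities for each incomplete row use only the
# observed dimensions; missing entries take the posterior-mean value.
import numpy as np

def impute_with_gtm(X, Y, beta=1.0):
    X = X.copy()
    for n in range(X.shape[0]):
        obs = ~np.isnan(X[n])                 # observed dimensions of row n
        if obs.all():
            continue
        d2 = ((Y[:, obs] - X[n, obs]) ** 2).sum(axis=1)
        r = np.exp(-0.5 * beta * (d2 - d2.min()))
        r /= r.sum()                          # responsibilities over K centres
        X[n, ~obs] = r @ Y[:, ~obs]           # posterior-mean imputation
    return X

X = np.array([[1.0, np.nan], [0.5, 0.4]])     # toy data with one gap
Y = np.array([[0.9, 0.8], [0.5, 0.4], [0.1, 0.0]])
print(impute_with_gtm(X, Y))
```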

Relevance:

100.00%

Publisher:

Abstract:

Background: Parkinson’s disease (PD) is an incurable neurological disease with approximately 0.3% prevalence. The hallmark symptom is gradual movement deterioration. Current scientific consensus about disease progression holds that symptoms will worsen smoothly over time unless treated. Accurate information about symptom dynamics is of critical importance to patients, caregivers, and the scientific community for the design of new treatments, clinical decision making, and individual disease management. Long-term studies characterize the typical time course of the disease as an early linear progression gradually reaching a plateau in later stages. However, symptom dynamics over durations of days to weeks remain unquantified. Currently, there is a scarcity of objective clinical information about symptom dynamics at intervals shorter than 3 months stretching over several years, but Internet-based patient self-report platforms may change this. Objective: To assess the clinical value of online self-reported PD symptom data recorded by users of the health-focused Internet social research platform PatientsLikeMe (PLM), in which patients quantify their symptoms on a regular basis on a subset of the Unified Parkinson’s Disease Ratings Scale (UPDRS). By analyzing these data, we aim for a scientific window on the nature of symptom dynamics for assessment intervals shorter than 3 months over durations of several years. Methods: Online self-reported data were validated against the gold-standard Parkinson’s Disease Data and Organizing Center (PD-DOC) database, containing clinical symptom data at intervals greater than 3 months. The data were compared visually using quantile-quantile plots, and numerically using the Kolmogorov-Smirnov test. Using a simple piecewise linear trend estimation algorithm, the PLM data were smoothed to separate random fluctuations from continuous symptom dynamics. Subtracting the trends from the original data revealed random fluctuations in symptom severity. The average magnitude of fluctuations versus time since diagnosis was modeled using a gamma generalized linear model. Results: Distributions of ages at diagnosis and UPDRS in the PLM and PD-DOC databases were broadly consistent. The PLM patients were systematically younger than the PD-DOC patients and showed increased symptom severity in the PD off state. The average fluctuation in symptoms (UPDRS Parts I and II) was 2.6 points at the time of diagnosis, rising to 5.9 points 16 years after diagnosis. These fluctuations exceed the estimated minimal and moderate clinically important differences, respectively. Not all patients conformed to the current clinical picture of gradual, smooth changes: many patients had regimes where symptom severity varied in an unpredictable manner, or underwent large rapid changes in an otherwise more stable progression. Conclusions: This information about short-term PD symptom dynamics contributes new scientific understanding of disease progression that is currently very costly to obtain without self-administered Internet-based reporting. This understanding should have implications for the optimization of clinical trials of new treatments and for the choice of treatment decision timescales.
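A minimal sketch, on simulated data rather than the PLM dataset, of the final modelling step described above: regressing fluctuation magnitude on time since diagnosis with a gamma generalized linear model in statsmodels (the simulated trend borrows the 2.6-point and 5.9-point figures from the abstract purely for illustration):

```python
# Gamma GLM with log link: fluctuation magnitude ~ years since diagnosis.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
years = rng.uniform(0, 16, size=300)              # time since diagnosis
mu = 2.6 + (5.9 - 2.6) * years / 16               # rough trend from the abstract
fluct = rng.gamma(shape=4.0, scale=mu / 4.0)      # gamma-distributed fluctuations

X = sm.add_constant(years)
res = sm.GLM(fluct, X,
             family=sm.families.Gamma(link=sm.families.links.Log())).fit()
print(res.summary())
```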

Relevance:

100.00%

Publisher:

Abstract:

The measurement of different aspects of the information society has long been problematic, and the International Telecommunication Union (ITU) is spearheading the development of a single ICT index. In Geneva, during the first World Summit on the Information Society (WSIS) in December 2003, the heads of state declared their commitment to the importance of benchmarking and measuring progress toward the information society. They subsequently re-affirmed their Geneva commitments at their second summit, held in Tunis in 2005. In this paper, we propose a multiplicative linear programming model to measure the Opportunity Index. We also compared our results with the common measure of the ICT Opportunity Index and found that the two indices are consistent in their measurement of digital opportunity, though differences still exist among regions.
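The abstract does not spell out the model, so the following is only a loose illustration of one multiplicative-LP approach to composite indices ("benefit of the doubt" weighting of a geometric mean, which becomes linear after taking logs); the indicator values and the weight floor are invented:

```python
# For each country, choose indicator weights maximising the log of its
# weighted geometric mean, subject to weights summing to 1 with a floor.
import numpy as np
from scipy.optimize import linprog

X = np.array([[0.8, 0.5, 0.9],     # countries x normalised ICT indicators
              [0.4, 0.7, 0.6],
              [0.9, 0.9, 0.3]])
logX = np.log(X)

for k, row in enumerate(logX):
    # max sum_j w_j * log(x_kj)  s.t.  sum_j w_j = 1,  w_j >= 0.1
    res = linprog(-row, A_eq=np.ones((1, 3)), b_eq=[1.0],
                  bounds=[(0.1, 1.0)] * 3)
    print(f"country {k}: opportunity index = {np.exp(row @ res.x):.3f}")
```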

Relevance:

100.00%

Publisher:

Abstract:

Aims - To build a population pharmacokinetic model that describes the apparent clearance of tacrolimus and the potential demographic, clinical and genetically controlled factors that could lead to inter-patient pharmacokinetic variability in children following liver transplantation. Methods - The present study retrospectively examined tacrolimus whole-blood pre-dose concentrations (n = 628) of 43 children during their first year post-liver transplantation. Population pharmacokinetic analysis was performed using the non-linear mixed effects modelling program NONMEM to determine the population mean parameter estimate of clearance and influential covariates. Results - The final model identified time post-transplantation and the CYP3A5*1 allele as influential covariates on tacrolimus apparent clearance according to the following equation: TVCL = 12.9 × (Weight/13.2)^0.35 × exp(−0.0058 × TPT) × exp(0.428 × CYP3A5), where TVCL is the typical value for apparent clearance, TPT is time post-transplantation in days, and CYP3A5 is 1 if the *1 allele is present and 0 otherwise. The population estimate and inter-individual variability (%CV) of tacrolimus apparent clearance were found to be 0.977 l h⁻¹ kg⁻¹ (95% CI 0.958, 0.996) and 40.0%, respectively, while the residual variability between the observed and predicted concentrations was 35.4%. Conclusion - Tacrolimus apparent clearance was influenced by time post-transplantation and CYP3A5 genotype. The results of this study, once confirmed by a large-scale prospective study, can be used in conjunction with therapeutic drug monitoring to recommend tacrolimus dose adjustments that take into account not only body weight but also genetic and time-related changes in tacrolimus clearance.
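The covariate equation quoted above transcribes directly into code; a small sketch, for illustration only and not dosing advice:

```python
# Typical tacrolimus apparent clearance (l/h) from the published
# equation: TVCL = 12.9 * (Weight/13.2)^0.35 * exp(-0.0058*TPT)
#                       * exp(0.428*CYP3A5).
from math import exp

def tacrolimus_tvcl(weight_kg: float, days_post_tx: float,
                    cyp3a5_star1: bool) -> float:
    return (12.9 * (weight_kg / 13.2) ** 0.35
            * exp(-0.0058 * days_post_tx)
            * exp(0.428 * (1 if cyp3a5_star1 else 0)))

print(tacrolimus_tvcl(13.2, 30, True))   # ~16.6 l/h for a CYP3A5*1 carrier
```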

Relevance:

100.00%

Publisher:

Abstract:

Background: To determine the pharmacokinetics (PK) of a new i.v. formulation of paracetamol (Perfalgan) in children ≤15 yr of age. Methods: After obtaining written informed consent, children under 16 yr of age were recruited to this study. Blood samples were obtained at 0, 15, and 30 min and 1, 2, 4, 6, and 8 h after administration of a weight-dependent dose of i.v. paracetamol. Paracetamol concentration was measured using a validated high-performance liquid chromatography assay with ultraviolet detection, with a lower limit of quantification (LLOQ) of 900 pg on column and an intra-day coefficient of variation of 14.3% at the LLOQ. Population PK analysis was performed by non-linear mixed-effect modelling using NONMEM. Results: One hundred and fifty-nine blood samples from 33 children aged 1.8–15 yr, weighing 13.7–56 kg, were analysed. Data were best described by a two-compartment model. Only body weight as a covariate significantly improved the goodness of fit of the model. The final population models for paracetamol clearance (CL), central volume of distribution (V1), inter-compartmental clearance (Q), and peripheral volume of distribution (V2) were 16.51 × (WT/70)^0.75, 28.4 × (WT/70), 11.32 × (WT/70)^0.75, and 13.26 × (WT/70), respectively (CL and Q in litres per hour, V1 and V2 in litres, WT in kilograms). Conclusions: In children aged 1.8–15 yr, the PK parameters for i.v. paracetamol were not influenced directly by age but were influenced by total body weight which, through allometric size scaling, significantly affected the clearances (CL, Q) and volumes of distribution (V1, V2).
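The final population model quoted above also transcribes directly into code; a small sketch, for illustration only:

```python
# Allometric scaling of the reported population PK parameters to a
# 70 kg reference weight, per the equations in the abstract.
def paracetamol_pk(weight_kg: float) -> dict:
    f = weight_kg / 70.0
    return {
        "CL (l/h)": 16.51 * f ** 0.75,   # clearance
        "V1 (l)":   28.4 * f,            # central volume of distribution
        "Q (l/h)":  11.32 * f ** 0.75,   # inter-compartmental clearance
        "V2 (l)":   13.26 * f,           # peripheral volume of distribution
    }

print(paracetamol_pk(20.0))  # e.g. a 20 kg child: CL ~ 6.5 l/h
```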

Relevance:

100.00%

Publisher:

Abstract:

Fuzzy data envelopment analysis (DEA) models have emerged as another class of DEA models to account for imprecise inputs and outputs of decision-making units (DMUs). Although several approaches for solving fuzzy DEA models have been developed, they have drawbacks, ranging from an inability to provide satisfactory discrimination power to simplistic numerical examples that handle only triangular or symmetrical fuzzy numbers. To address these drawbacks, this paper proposes using the concept of expected value in a generalized DEA (GDEA) model. This allows the unification of three models - fuzzy expected CCR, fuzzy expected BCC, and fuzzy expected FDH - and enables these models to handle both symmetrical and asymmetrical fuzzy numbers. We also explore the role of the fuzzy GDEA model as a ranking method and compare it to existing super-efficiency evaluation models. Our proposed model is always feasible, while infeasibility problems remain in certain cases under existing super-efficiency models. In order to illustrate the performance of the proposed method, it is first tested using two established numerical examples and compared with the results obtained from alternative methods. A third example, on energy dependency among 23 European Union (EU) member countries, is further used to validate and describe the efficacy of our approach under asymmetric fuzzy numbers.
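A hedged sketch of the expected-value idea only, not the paper's GDEA formulation: defuzzify triangular fuzzy inputs and outputs to their expected values, then solve the ordinary CCR multiplier LP for each DMU; the single input/output and the numbers are invented:

```python
# Expected value of a triangular fuzzy number (a, b, c): (a + 2b + c)/4,
# then the input-oriented CCR multiplier LP per DMU with scipy.
import numpy as np
from scipy.optimize import linprog

def expected(tri):
    a, b, c = tri
    return (a + 2 * b + c) / 4.0

X = np.array([expected(t) for t in [(1, 2, 3), (2, 3, 4), (1.5, 2, 2.5)]])  # inputs
Y = np.array([expected(t) for t in [(3, 4, 5), (2, 2.5, 3), (4, 5, 6)]])    # outputs
n = len(X)

for k in range(n):
    # Variables [u, v]: max u*y_k  s.t.  v*x_k = 1,  u*y_j - v*x_j <= 0.
    c = np.array([-Y[k], 0.0])
    A_ub = np.column_stack([Y, -X])
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n),
                  A_eq=[[0.0, X[k]]], b_eq=[1.0])
    print(f"DMU {k}: efficiency = {-res.fun:.3f}")   # 1.0 marks the frontier
```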