917 results for linear mixed model
Abstract:
Objective: The objective of the present study is to test the validity of the integrated cognitive model (ICM) of depression proposed by Kwon and Oei with a Latin-American sample. The ICM of depression postulates that the interaction of negative life events with dysfunctional attitudes increases the frequency of negative automatic thoughts, which in turn affects the depressive symptomatology of a person. The model was developed with Western samples, such as Americans and Australians, and its validity has not been tested on Latin-Americans. Method: Participants were 101 Latin-American migrants living permanently in Brisbane, including people from Chile, El Salvador, Nicaragua, Argentina and Guatemala. Participants completed the Beck Depression Inventory, the Dysfunctional Attitudes Scale, the Automatic Thoughts Questionnaire and the Life Events Inventory. Alternative or competing models of depression were examined, including the alternative aetiologies model, the linear mediational model and the symptom model. Results: Six models were tested, and the structural equation modelling analysis indicated that only the symptom model fits the Latin-American data. Conclusions: Results show that in the Latin-American sample depression symptoms can have an impact on negative cognitions. This finding adds to growing evidence in the literature that the relationship between cognitions and depression is bidirectional, rather than unidirectional from cognitions to symptoms.
Abstract:
Based on the three-dimensional elastic inclusion model proposed by Dobrovolskii, we developed a rheological inclusion model to study earthquake preparation processes. Using the correspondence principle of rheological mechanics, we derived analytic expressions for the viscoelastic displacements U(r, t), V(r, t) and W(r, t), the normal strains ε_xx(r, t), ε_yy(r, t) and ε_zz(r, t), and the bulk strain θ(r, t) at an arbitrary point (x, y, z) along the X, Y and Z axes, produced by a three-dimensional inclusion in a semi-infinite rheological medium described by the standard linear rheological model. After computing the spatial-temporal variation of the bulk strain produced on the ground by such a spherical rheological inclusion, we obtained interesting results: the bulk strain produced by a hard inclusion changes with time in three stages (alpha, beta, gamma) with different characteristics, similar to geodetic deformation observations but different from the results for a soft inclusion. These theoretical results can be used to explain the characteristics of the spatial-temporal evolution, patterns and quadrant distribution of earthquake precursors, as well as the changeability, spontaneity and complexity of short-term and imminent-term precursors. They offer a theoretical basis for building physical models of earthquake precursors and for earthquake prediction.
Abstract:
E. L. DeLosh, J. R. Busemeyer, and M. A. McDaniel (1997) found that when learning a positive, linear relationship between a continuous predictor (x) and a continuous criterion (y), trainees tend to underestimate y on items that ask the trainee to extrapolate. In 3 experiments, the authors examined the phenomenon and found that the tendency to underestimate y is reliable only in the so-called lower extrapolation region, that is, new values of x that lie between zero and the edge of the training region. Existing models of function learning, such as the extrapolation-association model (DeLosh et al., 1997) and the population of linear experts model (M. L. Kalish, S. Lewandowsky, & J. Kruschke, 2004), cannot account for these results. The authors show that with minor changes, both models can predict the correct pattern of results.
Abstract:
Many variables that are of interest in social science research are nominal variables with two or more categories, such as employment status, occupation, political preference, or self-reported health status. With longitudinal survey data it is possible to analyse the transitions of individuals between different employment states or occupations (for example). In the statistical literature, models for analysing categorical dependent variables with repeated observations belong to the family of models known as generalized linear mixed models (GLMMs). The specific GLMM for a dependent variable with three or more categories is the multinomial logit random effects model. For these models, the marginal distribution of the response does not have a closed-form solution, and hence numerical integration must be used to obtain maximum likelihood estimates of the model parameters. Techniques for implementing the numerical integration are available but are computationally intensive, requiring a large amount of computer processing time that increases with the number of clusters (or individuals) in the data, and they are not always readily accessible to the practitioner in standard software. For the purposes of analysing categorical response data from a longitudinal social survey, there is clearly a need to evaluate the existing procedures for estimating multinomial logit random effects models in terms of accuracy, efficiency and computing time. The computing time has significant implications for the approach preferred by researchers. In this paper we evaluate statistical software procedures that utilise adaptive Gaussian quadrature and MCMC methods, with specific application to modelling the employment status of women over three waves of the HILDA survey using a GLMM.
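The integration step described above can be illustrated with a small sketch. This is not the multinomial model or the adaptive quadrature procedures evaluated in the paper: it is a minimal, assumed example that marginalises a single Gaussian random intercept out of a binary logit likelihood using ordinary (non-adaptive) Gauss-Hermite quadrature, with hypothetical names and data.

```python
import numpy as np

def cluster_marginal_loglik(y, x, beta, sigma, n_nodes=20):
    """Marginal log-likelihood of one cluster in a random-intercept
    logit model: the random effect b ~ N(0, sigma^2) is integrated out
    with Gauss-Hermite quadrature via the substitution b = sqrt(2)*sigma*t."""
    nodes, weights = np.polynomial.hermite.hermgauss(n_nodes)
    b = np.sqrt(2.0) * sigma * nodes                  # random-effect value at each node
    eta = beta * x[None, :] + b[:, None]              # linear predictor, node x observation
    p = 1.0 / (1.0 + np.exp(-eta))                    # Bernoulli probabilities
    lik = np.prod(np.where(y[None, :] == 1, p, 1.0 - p), axis=1)  # conditional likelihood per node
    return float(np.log(np.sum(weights * lik) / np.sqrt(np.pi)))
```

Adaptive Gaussian quadrature additionally recentres and rescales the nodes around each cluster's conditional mode, which is what makes it accurate with far fewer nodes, at extra computational cost per cluster.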
Abstract:
Exploratory analysis of data in all sciences seeks to find common patterns to gain insights into the structure and distribution of the data. Typically, visualisation methods like principal components analysis are used, but these methods are not easily able to deal with missing data, nor can they capture non-linear structure in the data. One approach to discovering complex, non-linear structure in the data is through the use of linked plots, or brushing, while ignoring the missing data. In this technical report we discuss a complementary approach based on a non-linear probabilistic model. The generative topographic mapping enables the visualisation of the effects of very many variables on a single plot, which is able to incorporate far more structure than a two-dimensional principal components plot could, while at the same time dealing with missing data. We show that the generative topographic mapping provides an optimal method to explore the data while being able to replace missing values in a dataset, particularly where a large proportion of the data is missing.
Abstract:
Finding relevant results for a query submitted to multiple search engines is an important task. This paper formulates the problem of aggregating and ranking the results of multiple search engines as a minimax linear programming model. Beyond the novel application, this study detects the most relevant information among a returned set of ranked lists of documents retrieved by distinct search engines. Furthermore, two numerical examples are used to illustrate the usefulness of the proposed approach.
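The abstract does not give the paper's exact formulation, so the following is only a plausible sketch of the minimax idea: introduce consensus scores s_j and a single deviation bound z, and minimise z subject to |s_j − r_ij| ≤ z for every engine i and document j, which becomes linear after splitting the absolute value. All names and data are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def minimax_consensus(ranks):
    """ranks: (n_engines, n_docs) matrix of rank positions.
    Returns consensus scores s and the minimised worst-case
    deviation z, solved as a linear programme."""
    m, n = ranks.shape
    c = np.zeros(n + 1); c[-1] = 1.0                 # objective: minimise z
    rows, rhs = [], []
    for i in range(m):
        for j in range(n):
            up = np.zeros(n + 1); up[j] = 1.0; up[-1] = -1.0
            rows.append(up); rhs.append(ranks[i, j])      #  s_j - z <= r_ij
            lo = np.zeros(n + 1); lo[j] = -1.0; lo[-1] = -1.0
            rows.append(lo); rhs.append(-ranks[i, j])     # -s_j - z <= -r_ij
    res = linprog(c, A_ub=np.array(rows), b_ub=np.array(rhs))
    return res.x[:n], res.x[-1]
```

For two engines ranking three documents as [1, 2, 3] and [3, 2, 1], the optimal z is 1: no consensus can disagree with both engines by less than one rank position on the first and last documents.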
Abstract:
Exploratory analysis of data seeks to find common patterns to gain insights into the structure and distribution of the data. In geochemistry it is a valuable means to gain insights into the complicated processes making up a petroleum system. Typically, linear visualisation methods like principal components analysis, linked plots, or brushing are used. These methods cannot directly be employed when dealing with missing data, and they struggle to capture global non-linear structure in the data, although they can do so locally. This thesis discusses a complementary approach based on a non-linear probabilistic model. The generative topographic mapping (GTM) enables the visualisation of the effects of very many variables on a single plot, which is able to incorporate more structure than a two-dimensional principal components plot. The model can deal with uncertainty and missing data, and allows for the exploration of the non-linear structure in the data. In this thesis a novel approach to initialise the GTM with arbitrary projections is developed. This makes it possible to combine GTM with algorithms like Isomap and to fit complex non-linear structures like the Swiss roll. Another novel extension is the incorporation of prior knowledge about the structure of the covariance matrix. This extension greatly enhances the modelling capabilities of the algorithm, resulting in a better fit to the data and better imputation capabilities for missing data. Additionally, an extensive benchmark study of the missing-data imputation capabilities of GTM is performed. Further, a novel approach based on missing data is introduced to benchmark the fit of probabilistic visualisation algorithms on unlabelled data. Finally, the work is complemented by evaluating the algorithms on real-life datasets from geochemical projects.
Abstract:
Exploratory analysis of petroleum geochemical data seeks to find common patterns to help distinguish between different source rocks, oils and gases, and to explain their source, maturity and any intra-reservoir alteration. However, at the outset, one is typically faced with (a) a large matrix of samples, each with a range of molecular and isotopic properties, (b) a spatially and temporally unrepresentative sampling pattern, (c) noisy data and (d) often, a large number of missing values. This inhibits analysis using conventional statistical methods. Typically, visualisation methods like principal components analysis are used, but these methods are not easily able to deal with missing data nor can they capture non-linear structure in the data. One approach to discovering complex, non-linear structure in the data is through the use of linked plots, or brushing, while ignoring the missing data. In this paper we introduce a complementary approach based on a non-linear probabilistic model. Generative topographic mapping enables the visualisation of the effects of very many variables on a single plot, while also dealing with missing data. We show how using generative topographic mapping also provides an optimal method with which to replace missing values in two geochemical datasets, particularly where a large proportion of the data is missing.
Abstract:
The measurement of different aspects of the information society has been problematic for a long time, and the International Telecommunication Union (ITU) is spearheading the development of a single ICT index. In Geneva, during the first World Summit on the Information Society (WSIS) in December 2003, the heads of states declared their commitment to the importance of benchmarking and measuring progress toward the information society. They subsequently re-affirmed their Geneva commitments at their second summit, held in Tunis in 2005. In this paper, we propose a multiplicative linear programming model to measure the ICT Opportunity Index. We also compared our results with the common measure of the ICT opportunity index and found that the two indices are consistent in their measurement of digital opportunity, though differences still exist among regions.
Abstract:
Aims - To build a population pharmacokinetic model that describes the apparent clearance of tacrolimus and the potential demographic, clinical and genetically controlled factors that could lead to inter-patient pharmacokinetic variability in children following liver transplantation. Methods - The present study retrospectively examined tacrolimus whole-blood pre-dose concentrations (n = 628) of 43 children during their first year post-liver transplantation. Population pharmacokinetic analysis was performed using the non-linear mixed effects modelling program (NONMEM) to determine the population mean parameter estimate of clearance and influential covariates. Results - The final model identified time post-transplantation and the CYP3A5*1 allele as influential covariates on tacrolimus apparent clearance, according to the following equation: TVCL = 12.9 × (Weight/13.2)^0.35 × EXP(-0.0058 × TPT) × EXP(0.428 × CYP3A5), where TVCL is the typical value for apparent clearance, TPT is time post-transplantation in days and CYP3A5 is 1 where the *1 allele is present and 0 otherwise. The population estimate and inter-individual variability (%CV) of tacrolimus apparent clearance were found to be 0.977 l h−1 kg−1 (95% CI 0.958, 0.996) and 40.0%, respectively, while the residual variability between the observed and predicted concentrations was 35.4%. Conclusion - Tacrolimus apparent clearance was influenced by time post-transplantation and CYP3A5 genotype. The results of this study, once confirmed by a large-scale prospective study, can be used in conjunction with therapeutic drug monitoring to recommend tacrolimus dose adjustments that take into account not only body weight but also genetic and time-related changes in tacrolimus clearance.
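The covariate equation above translates directly into a small calculator; a minimal sketch using only the coefficients quoted in the abstract (the function and argument names are our own):

```python
import math

def tacrolimus_tvcl(weight_kg, days_post_transplant, cyp3a5_star1):
    """Typical value of tacrolimus apparent clearance from the final
    covariate model quoted in the abstract: CYP3A5 is coded 1 if the
    *1 allele is present and 0 otherwise."""
    cyp = 1 if cyp3a5_star1 else 0
    return (12.9 * (weight_kg / 13.2) ** 0.35       # allometric weight term
            * math.exp(-0.0058 * days_post_transplant)  # decline with time post-transplant
            * math.exp(0.428 * cyp))                # CYP3A5*1 allele effect
```

At the reference weight of 13.2 kg, zero days post-transplant and no *1 allele, the function returns the base clearance of 12.9; carrying the *1 allele multiplies clearance by exp(0.428) ≈ 1.53.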
Abstract:
Background - To determine the pharmacokinetics (PK) of a new i.v. formulation of paracetamol (Perfalgan) in children ≤15 yr of age. Methods - After obtaining written informed consent, children under 16 yr of age were recruited to this study. Blood samples were obtained at 0, 15, 30 min, 1, 2, 4, 6, and 8 h after administration of a weight-dependent dose of i.v. paracetamol. Paracetamol concentration was measured using a validated high-performance liquid chromatographic assay with ultraviolet detection, with a lower limit of quantification (LLOQ) of 900 pg on column and an intra-day coefficient of variation of 14.3% at the LLOQ. Population PK analysis was performed by non-linear mixed-effect modelling using NONMEM. Results - One hundred and fifty-nine blood samples from 33 children aged 1.8–15 yr, weight 13.7–56 kg, were analysed. Data were best described by a two-compartment model. Only body weight as a covariate significantly improved the goodness of fit of the model. The final population models for paracetamol clearance (CL), V1 (central volume of distribution), Q (inter-compartmental clearance), and V2 (peripheral volume of distribution) were: 16.51 × (WT/70)^0.75, 28.4 × (WT/70), 11.32 × (WT/70)^0.75, and 13.26 × (WT/70), respectively (CL and Q in litres per hour, WT in kilograms, and V1 and V2 in litres). Conclusions - In children aged 1.8–15 yr, the PK parameters for i.v. paracetamol were not influenced directly by age but by total body weight, which, through allometric size scaling, significantly affected the clearances (CL, Q) and volumes of distribution (V1, V2).
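The allometric pattern above (3/4-power scaling for clearances, linear scaling for volumes) can be written out as code; a minimal sketch using the reported coefficients (the function name is our own):

```python
def paracetamol_pk(weight_kg):
    """Two-compartment paracetamol parameters from the reported
    allometric model (reference weight 70 kg; CL and Q in l/h,
    V1 and V2 in litres)."""
    f = weight_kg / 70.0
    return {"CL": 16.51 * f ** 0.75,   # clearances scale with weight^0.75
            "Q": 11.32 * f ** 0.75,
            "V1": 28.4 * f,            # volumes scale linearly with weight
            "V2": 13.26 * f}
```

For a 35 kg child this gives, for example, V1 = 14.2 litres, since volumes scale linearly with weight, while CL scales by 0.5^0.75 ≈ 0.59 of the 70 kg value.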
Abstract:
Objective: To describe the effect of age and body size on the enantiomer-selective pharmacokinetics (PK) of intravenous ketorolac in children using a microanalytical assay. Methods: Blood samples were obtained at 0, 15 and 30 min and at 1, 2, 4, 6, 8 and 12 h after a weight-dependent dose of ketorolac. Enantiomer concentrations were measured using a liquid chromatography tandem mass spectrometry method. Non-linear mixed-effect modelling was used to assess PK parameters. Key findings: Data from 11 children (1.7–15.6 years, weight 10.7–67.4 kg) were best described by a two-compartment model for R(+), S(−) and racemic ketorolac. Only weight (WT) significantly improved the goodness of fit. The final population models were CL = 1.5 × (WT/46)^0.75, V1 = 8.2 × (WT/46), Q = 3.4 × (WT/46)^0.75, V2 = 7.9 × (WT/46); CL = 2.98 × (WT/46), V1 = 13.2 × (WT/46), Q = 2.8 × (WT/46)^0.75, V2 = 51.5 × (WT/46); and CL = 1.1 × (WT/46)^0.75, V1 = 4.9 × (WT/46), Q = 1.7 × (WT/46)^0.75 and V2 = 6.3 × (WT/46) for R(+), S(−) and racemic ketorolac, respectively. Conclusions: Only body weight influenced the PK parameters for R(+) and S(−) ketorolac. Using allometric size scaling significantly affected the clearances (CL, Q) and volumes of distribution (V1, V2).
Abstract:
2000 Mathematics Subject Classification: 62J12, 62F35
Abstract:
One of the major challenges in measuring efficiency in terms of resources and outcomes is the assessment of the evolution of units over time. Although Data Envelopment Analysis (DEA) has been applied for time series datasets, DEA models, by construction, form the reference set for inefficient units (lambda values) based on their distance from the efficient frontier, that is, in a spatial manner. However, when dealing with temporal datasets, the proximity in time between units should also be taken into account, since it reflects the structural resemblance among time periods of a unit that evolves. In this paper, we propose a two-stage spatiotemporal DEA approach, which captures both the spatial and temporal dimension through a multi-objective programming model. In the first stage, DEA is solved iteratively extracting for each unit only previous DMUs as peers in its reference set. In the second stage, the lambda values derived from the first stage are fed to a Multiobjective Mixed Integer Linear Programming model, which filters peers in the reference set based on weights assigned to the spatial and temporal dimension. The approach is demonstrated on a real-world example drawn from software development.
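The first-stage building block, solving a DEA model per unit to obtain the lambda (peer-weight) values, can be sketched with the standard input-oriented CCR envelopment LP. This is the generic textbook formulation, not the paper's iterative spatiotemporal variant, and the data in the test are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr(X, Y, o):
    """Input-oriented CCR efficiency of DMU o and its peer weights.
    X: (n_dmus, n_inputs), Y: (n_dmus, n_outputs).
    Solves: min theta  s.t.  sum_j lambda_j x_j <= theta x_o,
                             sum_j lambda_j y_j >= y_o,  lambda >= 0."""
    n = X.shape[0]
    c = np.zeros(n + 1); c[0] = 1.0                      # variables: [theta, lambda_1..lambda_n]
    A_in = np.hstack([-X[[o]].T, X.T])                   # input rows:  -theta*x_o + X'lambda <= 0
    A_out = np.hstack([np.zeros((Y.shape[1], 1)), -Y.T]) # output rows: -Y'lambda <= -y_o
    b_ub = np.concatenate([np.zeros(X.shape[1]), -Y[o]])
    res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun, res.x[1:]                            # theta, lambda values
```

With one input and one output, a DMU consuming 4 units for the same output an efficient peer produces from 2 gets efficiency 0.5, with all peer weight on the efficient unit. The second stage described above would then filter these lambda values by spatial and temporal criteria.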
Abstract:
Firms worldwide are taking major initiatives to reduce the carbon footprint of their supply chains in response to growing governmental and consumer pressures. In real life, these supply chains face stochastic and non-stationary demand, but most studies of the inventory lot-sizing problem with emission concerns consider deterministic demand. In this paper, we study the inventory lot-sizing problem under non-stationary stochastic demand with emission and cycle service level constraints, considering a carbon cap-and-trade regulatory mechanism. Using a mixed integer linear programming model, this paper investigates the effects of emission parameters and product- and system-related features on supply chain performance through extensive computational experiments designed to cover general business settings rather than a specific scenario. Results show that cycle service level and demand coefficient of variation have significant impacts on total cost and emissions irrespective of the level of demand variability, while the impact of the product's demand pattern is significant only at lower levels of demand variability. Finally, results also show that an increasing carbon price reduces total cost, total emissions and total inventory, and that the scope for emission reduction by increasing the carbon price is greater at higher levels of cycle service level and demand coefficient of variation. The analysis of these results helps supply chain managers make the right decisions in different demand and service-level situations.
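As an assumed illustration of the modelling style only (not the paper's model, which is stochastic and service-level constrained), here is a deterministic single-item lot-sizing MILP with a carbon cap, solved with scipy.optimize.milp (SciPy >= 1.9); all cost, demand and emission numbers are hypothetical.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# variables: [q0, q1, q2, I0, I1, I2, y0, y1, y2]
# q_t = production, I_t = end-of-period inventory, y_t = setup indicator
d = np.array([20.0, 30.0, 25.0])            # hypothetical period demands
K, h = 100.0, 1.0                           # setup cost, holding cost per unit-period
e_setup, e_unit, cap = 10.0, 1.0, 100.0     # hypothetical emission data and carbon cap
M = d.sum()                                 # big-M for the setup-forcing constraint

c = np.concatenate([np.zeros(3), np.full(3, h), np.full(3, K)])

# inventory balance: I_t - I_{t-1} - q_t = -d_t, with I_{-1} = 0
A_bal = np.zeros((3, 9))
for t in range(3):
    A_bal[t, t] = -1.0
    A_bal[t, 3 + t] = 1.0
    if t:
        A_bal[t, 2 + t] = -1.0
balance = LinearConstraint(A_bal, -d, -d)

# setup forcing: q_t - M * y_t <= 0
A_force = np.zeros((3, 9))
for t in range(3):
    A_force[t, t] = 1.0
    A_force[t, 6 + t] = -M
forcing = LinearConstraint(A_force, -np.inf, 0.0)

# carbon cap on total emissions from setups and production
A_cap = np.concatenate([np.full(3, e_unit), np.zeros(3), np.full(3, e_setup)])
carbon = LinearConstraint(A_cap.reshape(1, -1), -np.inf, cap)

res = milp(c,
           constraints=[balance, forcing, carbon],
           integrality=np.concatenate([np.zeros(6), np.ones(3)]),  # only y_t integer
           bounds=Bounds(0.0, np.concatenate([np.full(6, np.inf), np.ones(3)])))
```

With these numbers the cap of 100 leaves room for a single setup (emissions 10 + 75 = 85), so the optimum produces all 75 units in the first period at total cost 180 (one setup of 100 plus 80 of holding). Tightening the cap below 85 makes the instance infeasible, which is how a cap constraint forces a cost-emission trade-off.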