884 results for Linear coregionalization model
Abstract:
In order to overcome the limitations of the linear-quadratic model and include synergistic effects of heat and radiation, a novel radiobiological model is proposed. The model is based on a chain of cell populations which are characterized by the number of radiation-induced damages (hits). Cells can shift downward along the chain by collecting hits and upward by a repair process. The repair process is governed by a repair probability which depends upon state variables used for a simplistic description of the impact of heat and radiation upon repair proteins. For the parameters used, populations with up to 4-5 hits are relevant for the calculation of survival. The model describes intuitively the mathematical behaviour of apoptotic and non-apoptotic cell death. Linear-quadratic-linear behaviour of the logarithmic cell survival, fractionation, and (with one exception) the dose-rate dependencies are described correctly. The model covers the time-gap dependence of the synergistic cell killing due to combined application of heat and radiation, but further validation of the proposed approach against experimental data is needed. Nevertheless, the model offers a workbench for testing different biological concepts of damage induction, repair, and statistical approaches for calculating the variables of state.
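As an illustration of the hit-chain idea, the following is a minimal sketch: populations indexed by hit number exchange cells through a hit-collection flux and a repair flux. All rates, the cutoff of 5 hits, and the function name simulate_chain are illustrative assumptions, not the paper's fitted parameters.

```python
import numpy as np

# Minimal sketch of the hit-chain idea. All rates are hypothetical, not the
# paper's parameters: n[k] is the fraction of cells carrying k hits; cells
# move down the chain by collecting hits and back up through repair.
def simulate_chain(dose_rate=1.0, repair_prob=0.3, k_max=5,
                   t_end=10.0, dt=0.01):
    n = np.zeros(k_max + 1)
    n[0] = 1.0                        # whole population starts undamaged
    for _ in range(int(t_end / dt)):
        dn = np.zeros_like(n)
        for k in range(k_max + 1):
            if k < k_max:             # collect a hit: k -> k+1
                flux = dose_rate * n[k] * dt
                dn[k] -= flux
                dn[k + 1] += flux
            if k > 0:                 # repair one hit: k -> k-1
                flux = repair_prob * n[k] * dt
                dn[k] -= flux
                dn[k - 1] += flux
        n += dn
    return n

# Treating k_max hits as lethal, the surviving fraction is the mass below it:
survival = simulate_chain()[:-1].sum()
```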
Abstract:
Many image processing methods, such as techniques for people re-identification, assume photometric constancy between different images. This study addresses the correction of photometric variations, using changes in background areas to correct foreground areas. The authors assume a multiple-light-source model in which all light sources can have different colours and can change over time. In training mode, the authors learn per-location relations between foreground and background colour intensities. In correction mode, the authors apply a double linear correction model based on the learned relations. This double linear correction includes a dynamic local illumination correction mapping as well as an inter-camera mapping. The authors evaluate their illumination correction by computing the similarity between two images based on the earth mover's distance. The authors compare the results to a representative auto-exposure algorithm from the recent literature and to a colour correction algorithm based on inverse-intensity chromaticity. Especially in complex scenarios, the authors' method outperforms these state-of-the-art algorithms.
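A minimal sketch of the background-driven correction step, under strong simplifying assumptions (a single channel and one global gain/offset per region; the paper's per-location and inter-camera mappings are richer than this). The names fit_linear_map and correct_foreground are hypothetical.

```python
import numpy as np

# Sketch: estimate the illumination change on background pixels as a linear
# map, then invert that map on the foreground pixels.
def fit_linear_map(bg_ref, bg_cur):
    """Least-squares fit of bg_cur ~ gain * bg_ref + offset on background pixels."""
    A = np.stack([bg_ref.ravel(), np.ones(bg_ref.size)], axis=1)
    gain, offset = np.linalg.lstsq(A, bg_cur.ravel(), rcond=None)[0]
    return gain, offset

def correct_foreground(fg_cur, gain, offset):
    """Undo the illumination change estimated on the background."""
    return (fg_cur - offset) / gain
```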
Finite mixture regression model with random effects: application to neonatal hospital length of stay
Abstract:
A two-component mixture regression model that allows simultaneously for heterogeneity and dependency among observations is proposed. By specifying random effects explicitly in the linear predictor of the mixture probability and the mixture components, parameter estimation is achieved by maximising the corresponding best linear unbiased prediction type log-likelihood. Approximate residual maximum likelihood estimates are obtained via an EM algorithm in the manner of a generalised linear mixed model (GLMM). The method can be extended to a g-component mixture regression model with component densities from the exponential family, leading to the development of the class of finite mixture GLMMs. For illustration, the method is applied to analyse neonatal length of stay (LOS). It is shown that identification of pertinent factors that influence hospital LOS can provide important information for health care planning and resource allocation.
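For illustration, here is an EM sketch for a plain two-component Gaussian mixture of linear regressions; it omits the random effects and REML refinements that are the paper's contribution and shows only the E- and M-steps of the mixture part.

```python
import numpy as np

# EM for a two-component mixture of linear regressions (no random effects).
def mixture_regression_em(X, y, n_iter=100):
    n, p = X.shape
    rng = np.random.default_rng(0)
    beta = rng.normal(size=(2, p)); sigma2 = np.ones(2); pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibilities from the component densities
        resid = y[None, :] - beta @ X.T                        # (2, n)
        dens = pi[:, None] * np.exp(-0.5 * resid**2 / sigma2[:, None]) \
               / np.sqrt(2 * np.pi * sigma2[:, None])
        r = dens / dens.sum(axis=0, keepdims=True)             # (2, n)
        # M-step: weighted least squares per component
        for k in range(2):
            W = r[k]
            beta[k] = np.linalg.solve(X.T * W @ X, X.T @ (W * y))
            sigma2[k] = (W * (y - X @ beta[k])**2).sum() / W.sum()
        pi = r.mean(axis=1)
    return beta, sigma2, pi
```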
Abstract:
Objective: The objective of the present study is to test the validity of the integrated cognitive model (ICM) of depression proposed by Kwon and Oei with a Latin-American sample. The ICM of depression postulates that the interaction of negative life events with dysfunctional attitudes increases the frequency of negative automatic thoughts, which in turn affects the depressive symptomatology of a person. The model was developed with Western samples, such as Americans and Australians, and its validity has not been tested on Latin-Americans. Method: Participants were 101 Latin-American migrants living permanently in Brisbane, including people from Chile, El Salvador, Nicaragua, Argentina and Guatemala. Participants completed the Beck Depression Inventory, the Dysfunctional Attitudes Scale, the Automatic Thoughts Questionnaire and the Life Events Inventory. Alternative or competing models of depression were examined, including the alternative aetiologies model, the linear mediational model and the symptom model. Results: Six models were tested, and the results of the structural equation modelling analysis indicated that only the symptom model fits the Latin-American data. Conclusions: The results show that in the Latin-American sample depression symptoms can have an impact on negative cognitions. This finding adds to growing evidence in the literature that the relationship between cognitions and depression is bidirectional, rather than unidirectional from cognitions to symptoms.
Abstract:
Based on the three-dimensional elastic inclusion model proposed by Dobrovolskii, we developed a rheological inclusion model to study earthquake preparation processes. Using the correspondence principle of rheological mechanics, we derived analytic expressions for the viscoelastic displacements $U(r,t)$, $V(r,t)$ and $W(r,t)$, the normal strains $\epsilon_{xx}(r,t)$, $\epsilon_{yy}(r,t)$ and $\epsilon_{zz}(r,t)$, and the bulk strain $\theta(r,t)$ at an arbitrary point $(x, y, z)$ along the $X$, $Y$ and $Z$ axes, produced by a three-dimensional inclusion in a semi-infinite rheological medium described by the standard linear rheological model. After computing the spatial-temporal variation of the bulk strain produced on the ground by such a spherical rheological inclusion, interesting results are obtained: the bulk strain produced by a hard inclusion changes with time in three stages ($\alpha$, $\beta$, $\gamma$) with different characteristics, similar to geodetic deformation observations but different from the results for a soft inclusion. These theoretical results can be used to explain the characteristics of the spatial-temporal evolution, patterns and quadrant distribution of earthquake precursors, as well as the changeability, spontaneity and complexity of short-term and imminent precursors. They offer a theoretical basis for building physical models of earthquake precursors and for predicting earthquakes.
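For reference, a compact statement of the standard linear (Zener) constitutive law and of the correspondence principle in generic notation; the symbols below are standard textbook ones, not necessarily the paper's.

```latex
% Standard linear (Zener) model, generic notation:
\sigma + \tau_\varepsilon \,\dot{\sigma}
  = E_R \left( \varepsilon + \tau_\sigma \,\dot{\varepsilon} \right),
\qquad \tau_\sigma > \tau_\varepsilon .

% Correspondence principle: Laplace-transform the elastic solution and
% replace each elastic modulus by its s-domain counterpart,
E \;\longrightarrow\; \bar{E}(s)
  = E_R \, \frac{1 + \tau_\sigma s}{1 + \tau_\varepsilon s},
% then invert the transform to obtain the viscoelastic fields
% U(r,t), \epsilon_{ij}(r,t) and \theta(r,t).
```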
Abstract:
E. L. DeLosh, J. R. Busemeyer, and M. A. McDaniel (1997) found that when learning a positive, linear relationship between a continuous predictor (x) and a continuous criterion (y), trainees tend to underestimate y on items that ask the trainee to extrapolate. In 3 experiments, the authors examined the phenomenon and found that the tendency to underestimate y is reliable only in the so-called lower extrapolation region, that is, for new values of x that lie between zero and the edge of the training region. Existing models of function learning, such as the extrapolation-association model (DeLosh et al., 1997) and the population of linear experts model (M. L. Kalish, S. Lewandowsky, & J. Kruschke, 2004), cannot account for these results. The authors show that with minor changes, both models can predict the correct pattern of results.
Abstract:
Exploratory analysis of data in all sciences seeks to find common patterns to gain insights into the structure and distribution of the data. Typically, visualisation methods like principal components analysis are used, but these methods cannot easily deal with missing data, nor can they capture non-linear structure in the data. One approach to discovering complex, non-linear structure in the data is through the use of linked plots, or brushing, while ignoring the missing data. In this technical report we discuss a complementary approach based on a non-linear probabilistic model. The generative topographic mapping enables the visualisation of the effects of very many variables on a single plot, which can incorporate far more structure than a two-dimensional principal components plot and deal at the same time with missing data. We show that the generative topographic mapping provides an optimal method to explore the data while being able to replace missing values in a dataset, particularly where a large proportion of the data is missing.
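As a concrete reference point, a minimal GTM fit after Bishop, Svensén and Williams (1998) can be sketched as below. This version assumes complete data and a random initialisation (the PCA-based initialisation, bias basis function and missing-data E-step are omitted), and gtm_fit is a hypothetical name.

```python
import numpy as np

# Minimal GTM: a 2-D latent grid is mapped through RBF features Phi and a
# weight matrix W into data space; EM alternates responsibilities and a
# regularised least-squares update of W and the noise precision beta.
def gtm_fit(X, grid=10, n_rbf=4, sigma=1.0, lam=1e-3, n_iter=50):
    N, D = X.shape
    g = np.linspace(-1, 1, grid); Z = np.array([[a, b] for a in g for b in g])
    c = np.linspace(-1, 1, n_rbf); C = np.array([[a, b] for a in c for b in c])
    d2 = ((Z[:, None, :] - C[None, :, :])**2).sum(-1)
    Phi = np.exp(-d2 / (2 * sigma**2))                       # (K, M) RBF features
    K, M = Phi.shape
    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.1, size=(M, D))
    beta = 1.0
    for _ in range(n_iter):
        Y = Phi @ W                                          # (K, D) mapped grid
        dist = ((Y[:, None, :] - X[None, :, :])**2).sum(-1)  # (K, N)
        R = np.exp(-0.5 * beta * (dist - dist.min(0)))       # responsibilities
        R /= R.sum(0, keepdims=True)
        G = R.sum(1)
        A = Phi.T * G @ Phi + (lam / beta) * np.eye(M)       # regularised M-step
        W = np.linalg.solve(A, Phi.T @ (R @ X))
        beta = N * D / (R * dist).sum()                      # noise update
    return Z, Phi, W, beta
```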
Abstract:
For a query submitted to multiple search engines, finding the relevant results is an important task. This paper formulates the problem of aggregating and ranking the results of multiple search engines as a minimax linear programming model. Beyond the novel application, this study detects the most relevant information among the returned set of ranked document lists retrieved by distinct search engines. Furthermore, two numerical examples are used to illustrate the usefulness of the proposed approach.
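One plausible minimax formulation, sketched with scipy.optimize.linprog, chooses aggregate scores that minimise the worst-case deviation from any engine's scores; this is an illustrative assumption, not necessarily the paper's exact model, and minimax_aggregate is a hypothetical name.

```python
import numpy as np
from scipy.optimize import linprog

# Variables: aggregate scores a_1..a_D plus t, the worst-case deviation;
# minimise t subject to |a_d - s_ed| <= t for every engine e and document d.
def minimax_aggregate(S):
    """S: (n_engines, n_docs) normalised relevance scores."""
    E, D = S.shape
    c = np.zeros(D + 1); c[-1] = 1.0        # objective: minimise t
    A_ub, b_ub = [], []
    for e in range(E):
        for d in range(D):
            row = np.zeros(D + 1)
            row[d], row[-1] = 1.0, -1.0     #  a_d - t <= s_ed
            A_ub.append(row); b_ub.append(S[e, d])
            row = np.zeros(D + 1)
            row[d], row[-1] = -1.0, -1.0    # -a_d - t <= -s_ed
            A_ub.append(row); b_ub.append(-S[e, d])
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0, None)] * (D + 1))
    return res.x[:D]                         # rank documents by these scores
```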
Abstract:
Exploratory analysis of data seeks to find common patterns to gain insights into the structure and distribution of the data. In geochemistry it is a valuable means to gain insights into the complicated processes making up a petroleum system. Typically, linear visualisation methods like principal components analysis, linked plots, or brushing are used. These methods cannot be employed directly when dealing with missing data, and they struggle to capture global non-linear structures in the data, although they can do so locally. This thesis discusses a complementary approach based on a non-linear probabilistic model. The generative topographic mapping (GTM) enables the visualisation of the effects of very many variables on a single plot, which can incorporate more structure than a two-dimensional principal components plot. The model can deal with uncertainty and missing data, and allows for the exploration of the non-linear structure in the data. In this thesis a novel approach to initialise the GTM with arbitrary projections is developed. This makes it possible to combine GTM with algorithms like Isomap and to fit complex non-linear structures like the Swiss roll. Another novel extension is the incorporation of prior knowledge about the structure of the covariance matrix. This extension greatly enhances the modelling capabilities of the algorithm, resulting in a better fit to the data and better imputation of missing data. Additionally, an extensive benchmark study of the missing-data imputation capabilities of GTM is performed. Further, a novel approach based on missing data is introduced to benchmark the fit of probabilistic visualisation algorithms on unlabelled data. Finally, the work is complemented by evaluating the algorithms on real-life datasets from geochemical projects.
Abstract:
Exploratory analysis of petroleum geochemical data seeks to find common patterns to help distinguish between different source rocks, oils and gases, and to explain their source, maturity and any intra-reservoir alteration. However, at the outset, one is typically faced with (a) a large matrix of samples, each with a range of molecular and isotopic properties, (b) a spatially and temporally unrepresentative sampling pattern, (c) noisy data and (d) often, a large number of missing values. This inhibits analysis using conventional statistical methods. Typically, visualisation methods like principal components analysis are used, but these methods cannot easily deal with missing data, nor can they capture non-linear structure in the data. One approach to discovering complex, non-linear structure in the data is through the use of linked plots, or brushing, while ignoring the missing data. In this paper we introduce a complementary approach based on a non-linear probabilistic model. Generative topographic mapping enables the visualisation of the effects of very many variables on a single plot, while also dealing with missing data. We show how generative topographic mapping also provides an optimal method with which to replace missing values in two geochemical datasets, particularly where a large proportion of the data is missing.
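Building on the gtm_fit sketch given earlier, missing values can be imputed as the responsibility-weighted mean of the mapped grid points, with responsibilities computed on the observed dimensions only. This is a sketch of one standard posterior-mean approach, not necessarily the exact procedure used in the paper.

```python
import numpy as np

# Posterior-mean imputation on top of a fitted GTM (Phi, W, beta as
# returned by the gtm_fit sketch above).
def gtm_impute(x, Phi, W, beta):
    """x: 1-D sample with np.nan marking missing values."""
    Y = Phi @ W                                   # (K, D) mapped grid points
    obs = ~np.isnan(x)
    dist = ((Y[:, obs] - x[obs])**2).sum(1)       # distances on observed dims only
    R = np.exp(-0.5 * beta * (dist - dist.min()))
    R /= R.sum()
    x_filled = x.copy()
    x_filled[~obs] = R @ Y[:, ~obs]               # responsibility-weighted mean
    return x_filled
```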
Abstract:
The measurement of different aspects of the information society has been problematic for a long time, and the International Telecommunication Union (ITU) is spearheading the development of a single ICT index. In Geneva, during the first World Summit on the Information Society (WSIS) in December 2003, heads of state declared their commitment to the importance of benchmarking and measuring progress toward the information society. They re-affirmed their Geneva commitments in their second summit, held in Tunis in 2005. In this paper, we propose a multiplicative linear programming model to measure the Opportunity Index. We compare our results with the common measure of the ICT Opportunity Index and find that the two indices are consistent in their measurement of digital opportunity, though differences still exist among regions.
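One plausible reading of a multiplicative LP index is a "benefit of the doubt" weighting solved in log space: each country receives the weight vector most favourable to it, subject to common bounds, and its index is the resulting weighted geometric mean. The formulation, the bounds and the name opportunity_index are assumptions for illustration, not the paper's model.

```python
import numpy as np
from scipy.optimize import linprog

# For each country c, maximise sum_i w_i * log x_ci over weights w with
# sum w_i = 1 and common bounds; the index is exp of the optimum.
def opportunity_index(X, w_lo=0.05, w_hi=0.6):
    """X: (n_countries, n_indicators), all entries positive."""
    logX = np.log(X)
    n, m = X.shape
    scores = []
    for c in range(n):
        res = linprog(-logX[c],                     # linprog minimises, so negate
                      A_eq=np.ones((1, m)), b_eq=[1.0],
                      bounds=[(w_lo, w_hi)] * m)
        scores.append(np.exp(-res.fun))             # weighted geometric mean
    return np.array(scores)
```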
Abstract:
2000 Mathematics Subject Classification: 62J12, 62F35
Abstract:
The cell-cell bond between an immune cell and an antigen-presenting cell is a necessary event in the activation of the adaptive immune response. At the junction between the cells, cell surface molecules on the opposing cells form non-covalent bonds, and a distinct patterning is observed that is termed the immunological synapse. An important binding molecule in the synapse is the T-cell receptor (TCR), which is responsible for antigen recognition through its binding with a major histocompatibility complex with bound peptide (pMHC). This bond leads to intracellular signalling events that culminate in the activation of the T-cell, and ultimately leads to the expression of the immune effector function. The temporal analysis of the TCR bonds during the formation of the immunological synapse presents a problem to biologists, because the spatio-temporal scales involved (nanometres and picoseconds) are comparable to experimental uncertainty limits. In this study, a linear stochastic model, derived from a nonlinear model of the synapse, is used to analyse the temporal dynamics of the bond attachments for the TCR. Mathematical analysis and numerical methods are employed to analyse the qualitative dynamics of the nonequilibrium membrane dynamics, with the specific aim of calculating the average persistence time for the TCR:pMHC bond. A single-threshold method, previously used to calculate the TCR:pMHC contact path sizes in the synapse, is applied to produce results for the average contact times of the TCR:pMHC bonds. This method is extended through the development of a two-threshold method, which produces results suggesting the average persistence time for the TCR:pMHC bond is on the order of 2-4 seconds, values that agree with experimental evidence for TCR signalling. The study reveals two distinct scaling regimes in the persistence survival probability profile of these bonds, one dominated by thermal fluctuations and the other associated with TCR signalling. Analysis of the thermal fluctuation regime reveals a minimal contribution to the average persistence time, which has an important biological implication when comparing the probabilistic models to experimental evidence: in cases where only a few statistics can be gathered under experimental conditions, the results are unlikely to match the probabilistic predictions. The results also identify a rescaling relationship between the thermal noise and the bond length, suggesting that recalibrating the experimental conditions to adhere to this scaling relationship will enable biologists to identify the start of the signalling regime for previously unobserved receptor:ligand bonds. Moreover, the regime associated with TCR signalling exhibits a universal decay rate for the persistence probability that is independent of the bond length.
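The two-threshold measurement can be illustrated on a surrogate signal, here an Ornstein-Uhlenbeck process standing in for the linearised membrane-separation dynamics; all thresholds and parameters below are illustrative, not the study's calibrated values.

```python
import numpy as np

# Two-threshold persistence on a surrogate membrane-separation signal:
# a bond "forms" when the signal drops below theta_lo and "breaks" when it
# exceeds theta_hi; the dwell times between these events are collected.
def two_threshold_persistence(theta_lo, theta_hi, tau=1.0, noise=1.0,
                              dt=1e-3, n_steps=500_000, seed=0):
    rng = np.random.default_rng(seed)
    x, bound, t_start, times = 0.0, False, 0.0, []
    for i in range(n_steps):
        # Euler-Maruyama step of an Ornstein-Uhlenbeck process
        x += (-x / tau) * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t = i * dt
        if not bound and x < theta_lo:      # separation small enough: bond forms
            bound, t_start = True, t
        elif bound and x > theta_hi:        # upper threshold crossed: bond breaks
            bound = False
            times.append(t - t_start)
    return np.array(times)

# e.g. average persistence time: two_threshold_persistence(-0.5, 0.5).mean()
```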
Abstract:
Spectral unmixing (SU) is a technique to characterize the mixed pixels of hyperspectral images measured by remote sensors. Most existing spectral unmixing algorithms are developed using linear mixing models. Since the number of endmembers/materials present at each mixed pixel is normally small compared with the total number of endmembers (the dimension of the spectral library), the problem is sparse. This thesis introduces sparse hyperspectral unmixing methods for the linear mixing model in two different scenarios. In the first scenario, the library of spectral signatures is assumed to be known, and the main problem is to find the minimum number of endmembers subject to a reasonably small approximation error. Mathematically, this is the $\ell_0$-norm problem, which is NP-hard. The main contribution of the first part of the thesis is to find more accurate and reliable approximations of the $\ell_0$-norm term and to propose sparse unmixing methods based on these approximations. The resulting methods show considerable improvements in reconstructing the fractional abundances of endmembers compared with state-of-the-art methods, such as lower reconstruction errors. In the second part of the thesis, the first scenario (i.e., the dictionary-aided semiblind unmixing scheme) is generalized to the blind unmixing scenario, in which the library of spectral signatures is also estimated. We apply the nonnegative matrix factorization (NMF) method to propose new unmixing methods, owing to its notable advantages, such as enforcing the nonnegativity constraints on the two decomposed matrices. Furthermore, we introduce new cost functions based on statistical and physical features of the spectral signatures of materials (SSoM) and of hyperspectral pixels, such as the collaborative property of hyperspectral pixels and the mathematical representation of the concentration of the energy of SSoM in the first few subbands. Finally, we introduce sparse unmixing methods for the blind scenario and evaluate the efficiency of the proposed methods via simulations on synthetic and real hyperspectral data sets. The results show considerable improvements in estimating the spectral library of materials and their fractional abundances, such as smaller values of the spectral angle distance (SAD) and the abundance angle distance (AAD).
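As a baseline for comparison, the common $\ell_1$ relaxation of the $\ell_0$ term (in the spirit of SUnSAL, not the thesis's tighter approximations) can be solved with a projected ISTA loop:

```python
import numpy as np

# Sparse unmixing baseline: minimise 0.5*||A x - y||^2 + lam*||x||_1 with
# x >= 0, via projected iterative soft-thresholding (ISTA).
def sparse_unmix(A, y, lam=1e-3, n_iter=500):
    """A: (bands, library_size) spectral library; y: (bands,) mixed pixel."""
    L = np.linalg.norm(A, 2)**2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        x = x - grad / L                   # gradient step
        x = np.maximum(x - lam / L, 0.0)   # soft-threshold + nonnegativity
    return x                               # sparse fractional abundances
```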
Abstract:
A novel surrogate model is proposed in lieu of a computational fluid dynamics (CFD) code for fast nonlinear aerodynamic modeling. First, a nonlinear function is identified on selected interpolation points defined by the discrete empirical interpolation method (DEIM). The flow field is then reconstructed by a least-squares approximation using flow modes extracted by proper orthogonal decomposition (POD). The proposed model is applied to the prediction of limit cycle oscillation for a plunge/pitch airfoil and a delta wing with a linear structural model, and the results are validated against a time-accurate CFD-FEM code. The results show the model is able to replicate the aerodynamic forces and flow fields with sufficient accuracy while requiring a fraction of the CFD cost.
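A generic sketch of the POD extraction, DEIM point selection and least-squares reconstruction steps follows; the coupling to the identified nonlinear aerodynamic function is omitted, and the function names are illustrative.

```python
import numpy as np

def pod_modes(snapshots, m):
    """snapshots: (n_dof, n_snap); returns the first m POD modes via SVD."""
    U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :m]

def deim_indices(U):
    """Greedy DEIM interpolation-point selection from POD modes U (n_dof, m)."""
    idx = [int(np.argmax(np.abs(U[:, 0])))]
    for j in range(1, U.shape[1]):
        # interpolate mode j at the points chosen so far, pick the worst residual
        c = np.linalg.solve(U[np.ix_(idx, range(j))], U[idx, j])
        r = U[:, j] - U[:, :j] @ c
        idx.append(int(np.argmax(np.abs(r))))
    return np.array(idx)

def reconstruct(U, idx, values_at_points):
    """Least-squares reconstruction of the full field from values at DEIM points."""
    coeff, *_ = np.linalg.lstsq(U[idx, :], values_at_points, rcond=None)
    return U @ coeff
```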