187 results for Adjacency matrices


Relevance: 10.00%

Publisher:

Abstract:

Diffusion weighted magnetic resonance (MR) imaging is a powerful tool that can be employed to study white matter microstructure by examining the 3D displacement profile of water molecules in brain tissue. By applying diffusion-sensitized gradients along a minimum of six directions, second-order tensors can be computed to model dominant diffusion processes. However, conventional diffusion tensor imaging (DTI) is not sufficient to resolve crossing fiber tracts. Recently, a number of high-angular resolution schemes with more than six gradient directions have been employed to address this issue. In this paper, we introduce the Tensor Distribution Function (TDF), a probability function defined on the space of symmetric positive definite matrices. Here, fiber crossing is modeled as an ensemble of Gaussian diffusion processes with weights specified by the TDF. Once the optimal TDF is determined, the diffusion orientation distribution function (ODF) can easily be computed by analytic integration of the resulting displacement probability function.
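The tensor-fitting step this abstract builds on can be sketched in a few lines: with at least six gradient directions, the log of the Stejskal-Tanner signal is linear in the six unique tensor elements, so a least-squares solve recovers the tensor. The gradient scheme, b-value, and ground-truth tensor below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical scheme of 6 unit gradient directions (minimum for DTI).
dirs = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
                 [1, 1, 0], [1, 0, 1], [0, 1, 1]], float)
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
b = 1000.0  # assumed b-value, s/mm^2

# Design matrix for the unique elements Dxx, Dyy, Dzz, Dxy, Dxz, Dyz.
g = dirs
B = b * np.column_stack([g[:, 0]**2, g[:, 1]**2, g[:, 2]**2,
                         2*g[:, 0]*g[:, 1], 2*g[:, 0]*g[:, 2],
                         2*g[:, 1]*g[:, 2]])

# Simulate signals from a known prolate tensor, then recover it.
D_true = np.diag([1.7e-3, 0.3e-3, 0.3e-3])
d_true = np.array([D_true[0, 0], D_true[1, 1], D_true[2, 2],
                   D_true[0, 1], D_true[0, 2], D_true[1, 2]])
S0 = 1.0
S = S0 * np.exp(-B @ d_true)      # Stejskal-Tanner: S = S0 exp(-b g'Dg)

# Log-linear least squares: -log(S/S0) = B d.
d_hat, *_ = np.linalg.lstsq(B, -np.log(S / S0), rcond=None)
D_hat = np.array([[d_hat[0], d_hat[3], d_hat[4]],
                  [d_hat[3], d_hat[1], d_hat[5]],
                  [d_hat[4], d_hat[5], d_hat[2]]])
```

With exactly six independent directions the system is square, so the noiseless tensor is recovered exactly; the TDF approach replaces this single tensor with a distribution over tensors.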

Relevance: 10.00%

Publisher:

Abstract:

As connectivity analyses become more popular, claims are often made about how the brain's anatomical networks depend on age, sex, or disease. It is unclear how results depend on the tractography methods used to compute fiber networks. We applied 11 tractography methods to high angular resolution diffusion images of the brain (4-Tesla 105-gradient HARDI) from 536 healthy young adults. We parcellated 70 cortical regions, yielding 70×70 connectivity matrices that encode fiber density. We computed popular graph theory metrics, including network efficiency and characteristic path length. Both metrics were robust to the number of spherical harmonics used to model diffusion (4th-8th order). Age effects were detected only for networks computed with the probabilistic Hough transform method, which excludes smaller fibers. Sex and total brain volume affected networks measured with deterministic, tensor-based fiber tracking but not with the Hough method. Each tractography method includes different fibers, which affects inferences made about the reconstructed networks.
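The two graph metrics named in the abstract can be computed directly from an adjacency matrix. A minimal sketch on a toy 4-node binary network (not the study's 70×70 fiber-density matrices), using hop-count distances:

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path

def path_length_and_efficiency(A):
    """Characteristic path length and global efficiency of a binary,
    undirected adjacency matrix, using hop-count shortest paths."""
    D = shortest_path(A, method='D', directed=False, unweighted=True)
    n = A.shape[0]
    off = ~np.eye(n, dtype=bool)          # exclude self-distances
    L = D[off].mean()                     # characteristic path length
    E = (1.0 / D[off]).mean()             # global efficiency
    return L, E

# Toy path graph 0-1-2-3.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], float)
L, E = path_length_and_efficiency(A)
```

For weighted fiber-density matrices one would first convert densities to lengths (e.g. reciprocal weights); that step is omitted here.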

Relevance: 10.00%

Publisher:

Abstract:

Classifying each stage of a progressive disease such as Alzheimer's disease is a key issue for disease prevention and treatment. In this study, we derived structural brain networks from diffusion-weighted MRI using whole-brain tractography, since there is growing interest in relating connectivity measures to clinical, cognitive, and genetic data. Relatively little work has used machine learning to make inferences about variations in brain networks during the progression of Alzheimer's disease. Here we developed a framework that utilizes generalized low rank approximations of matrices (GLRAM) and modified linear discriminant analysis for unsupervised feature learning and classification of connectivity matrices. We applied the methods to brain networks derived from DWI scans of 41 people with Alzheimer's disease, 73 people with early mild cognitive impairment (EMCI), 38 people with late mild cognitive impairment (LMCI), 47 elderly healthy controls, and 221 young healthy controls. Our results show that this new framework can significantly improve classification accuracy when combining multiple datasets; this suggests the value of using data beyond the classification task at hand to model variations in brain connectivity.
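GLRAM itself is a short alternating eigen-decomposition: it finds shared left and right projections L, R that compress every matrix in a set at once. A rough sketch of the standard algorithm (Ye, 2004), applied to synthetic matrices rather than the study's connectivity data:

```python
import numpy as np

def glram(As, r, c, iters=20):
    """Generalized low rank approximation of matrices: find
    column-orthonormal L (n x r) and R (m x c) maximizing
    sum_i ||L.T @ A_i @ R||_F^2 by alternating eigen-decompositions."""
    n, m = As[0].shape
    R = np.eye(m)[:, :c]                       # simple initialisation
    for _ in range(iters):
        ML = sum(A @ R @ R.T @ A.T for A in As)
        _, vecs = np.linalg.eigh(ML)
        L = vecs[:, -r:]                       # top-r eigenvectors
        MR = sum(A.T @ L @ L.T @ A for A in As)
        _, vecs = np.linalg.eigh(MR)
        R = vecs[:, -c:]                       # top-c eigenvectors
    # Compressed r x c feature matrices, one per input matrix.
    return L, R, [L.T @ A @ R for A in As]

# Synthetic matrices sharing exact rank-2 row/column subspaces.
rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.normal(size=(8, 2)))
V, _ = np.linalg.qr(rng.normal(size=(6, 2)))
As = [U @ rng.normal(size=(2, 2)) @ V.T for _ in range(5)]
L, R, cores = glram(As, r=2, c=2)
```

In the paper's pipeline, the compressed cores would then feed a discriminant classifier; that stage is not sketched here.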

Relevance: 10.00%

Publisher:

Abstract:

In this work we discuss the development of a mathematical model to predict the shift in gas composition observed over time from a producing CSG (coal seam gas) well, and investigate the effect that physical properties of the coal seam have on gas production. A detailed (local) one-dimensional, two-scale mathematical model of a coal seam has been developed. The model describes the competitive adsorption and desorption of three gas species (CH4, CO2 and N2) within a microscopic, porous coal matrix structure. The (diffusive) flux of these gases between the coal matrices (microscale) and a cleat network (macroscale) is accounted for in the model. The cleat network is modelled as a one-dimensional, volume averaged, porous domain that extends radially from a central well. Diffusive and advective transport of the gases occurs within the cleat network, which also contains liquid water that can be advectively transported. The water and gas phases are assumed to be immiscible. The driving force for the advection in the gas and liquid phases is taken to be a pressure gradient with capillarity also accounted for. In addition, the relative permeabilities of the water and gas phases are considered as functions of the degree of water saturation.
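Competitive adsorption of several gas species on coal is commonly described by an extended Langmuir isotherm, in which each species' adsorbed amount is suppressed by the partial pressures of its competitors. A small sketch under that model; the capacities and affinities are illustrative assumptions, not fitted coal-seam parameters:

```python
import numpy as np

# Illustrative extended-Langmuir parameters for CH4, CO2, N2:
# saturation capacities q_max (m^3/t) and affinities b (1/MPa).
q_max = np.array([25.0, 45.0, 12.0])
b     = np.array([0.8,  2.0,  0.3])

def competitive_adsorption(p):
    """Adsorbed amount per species under the extended Langmuir
    isotherm: q_i = q_max_i * b_i * p_i / (1 + sum_j b_j * p_j)."""
    p = np.asarray(p, float)
    return q_max * b * p / (1.0 + np.sum(b * p))

# Pure CH4 at 2 MPa vs the same CH4 partial pressure with CO2 present:
q_pure = competitive_adsorption([2.0, 0.0, 0.0])
q_mix  = competitive_adsorption([2.0, 1.0, 0.0])
```

The mixture case shows the competitive effect the model captures: adding CO2 lowers the CH4 holding capacity at fixed CH4 partial pressure, which drives the composition shift over a well's life.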

Relevance: 10.00%

Publisher:

Abstract:

Diffusion weighted magnetic resonance imaging is a powerful tool that can be employed to study white matter microstructure by examining the 3D displacement profile of water molecules in brain tissue. By applying diffusion-sensitized gradients along a minimum of six directions, second-order tensors (represented by three-by-three positive definite matrices) can be computed to model dominant diffusion processes. However, conventional DTI is not sufficient to resolve more complicated white matter configurations, e.g., crossing fiber tracts. Recently, a number of high-angular resolution schemes with more than six gradient directions have been employed to address this issue. In this article, we introduce the tensor distribution function (TDF), a probability function defined on the space of symmetric positive definite matrices. Using the calculus of variations, we solve for the TDF that optimally describes the observed data. Here, fiber crossing is modeled as an ensemble of Gaussian diffusion processes with weights specified by the TDF. Once this optimal TDF is determined, the orientation distribution function (ODF) can easily be computed by analytic integration of the resulting displacement probability function. Moreover, a tensor orientation distribution function (TOD) may also be derived from the TDF, allowing for the estimation of principal fiber directions and their corresponding eigenvalues.
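The "ensemble of Gaussian diffusion processes" idea can be illustrated numerically: each Gaussian's radial projection has the closed form psi_D(u) proportional to (u' D^-1 u)^(-3/2) / sqrt(det D), and the mixture ODF is the weighted sum. The two crossing-fiber tensors and equal weights below are assumptions for the sketch, not the paper's estimated TDF:

```python
import numpy as np

def mixture_odf(u, tensors, weights):
    """ODF (up to normalisation) of a weighted mixture of Gaussian
    diffusion processes, via the analytic radial projection of each
    Gaussian: psi_D(u) ~ (u' D^-1 u)^(-3/2) / sqrt(det D)."""
    u = np.asarray(u, float)
    u = u / np.linalg.norm(u)
    val = 0.0
    for w, D in zip(weights, tensors):
        Dinv = np.linalg.inv(D)
        val += w * (u @ Dinv @ u) ** (-1.5) / np.sqrt(np.linalg.det(D))
    return val

# Two crossing fiber populations along x and y (illustrative tensors).
D1 = np.diag([1.7e-3, 0.2e-3, 0.2e-3])
D2 = np.diag([0.2e-3, 1.7e-3, 0.2e-3])
tensors, weights = [D1, D2], [0.5, 0.5]

along_x = mixture_odf([1.0, 0.0, 0.0], tensors, weights)
oblique = mixture_odf([1.0, 1.0, 0.0], tensors, weights)
```

The ODF peaks along each fiber direction and dips between them, which is what allows the crossing to be resolved where a single tensor cannot.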

Relevance: 10.00%

Publisher:

Abstract:

Social enterprise is important, yet understandings of the phenomenon in the literature have been diverse. This paper attempts to make sense of the social enterprise phenomenon using a two-layer framework of two-by-two matrices. The first layer juxtaposes social enterprise against other organizations (a typology of organizations), and the second layer classifies different types of social enterprises (a typology of social enterprise). This framework may provide researchers with tools to develop a clear and comprehensive definition of social enterprise. For practitioners, the ability to recognize the structures of different types of social enterprises may offer guidelines for designing a business model appropriate to their purposes.

Relevance: 10.00%

Publisher:

Abstract:

Projective Hjelmslev planes and affine Hjelmslev planes are generalisations of projective planes and affine planes. We present an algorithm for constructing projective Hjelmslev planes and affine Hjelmslev planes that uses projective planes, affine planes and orthogonal arrays. We show that all 2-uniform projective Hjelmslev planes, and all 2-uniform affine Hjelmslev planes, can be constructed in this way. As a corollary it is shown that all 2-uniform affine Hjelmslev planes are sub-geometries of 2-uniform projective Hjelmslev planes.
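Orthogonal arrays, one ingredient of the construction, are easy to build over a prime field: the classical OA(p^2, p+1, p, 2) takes rows indexed by pairs (x, y) in Z_p^2, with entry a*x + y (mod p) in column a, plus a final column equal to x. This is the standard textbook construction, not the paper's algorithm:

```python
from itertools import product

def orthogonal_array(p):
    """OA(p^2, p+1, p, 2) over Z_p (p prime): in any two columns,
    every ordered pair of symbols occurs exactly once (strength 2),
    because the map (x, y) -> (a*x + y, a'*x + y) is a bijection
    whenever a != a', and likewise for (a*x + y, x)."""
    rows = []
    for x, y in product(range(p), repeat=2):
        rows.append([(a * x + y) % p for a in range(p)] + [x])
    return rows

oa = orthogonal_array(3)   # 9 runs, 4 factors, 3 levels, strength 2
```

The strength-2 property is exactly what lets pairs of columns act like the parallel classes of lines used when assembling a Hjelmslev plane from smaller planes.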

Relevance: 10.00%

Publisher:

Abstract:

Molecular phylogenetic studies of homologous sequences of nucleotides often assume that the underlying evolutionary process was globally stationary, reversible, and homogeneous (SRH), and that a model of evolution with one or more site-specific and time-reversible rate matrices (e.g., the GTR rate matrix) is enough to accurately model the evolution of data over the whole tree. However, an increasing body of data suggests that evolution under these conditions is an exception, rather than the norm. To address this issue, several non-SRH models of molecular evolution have been proposed, but they either ignore heterogeneity in the substitution process across sites (HAS) or assume it can be modeled accurately using the Γ distribution. As an alternative to these models of evolution, we introduce a family of mixture models that approximate HAS without the assumption of an underlying predefined statistical distribution. This family of mixture models is combined with non-SRH models of evolution that account for heterogeneity in the substitution process across lineages (HAL). We also present two algorithms for searching model space and identifying an optimal model of evolution that is less likely to over- or underparameterize the data. The performance of the two new algorithms was evaluated using alignments of nucleotides with 10 000 sites simulated under complex non-SRH conditions on a 25-tipped tree. The algorithms were found to be very successful, identifying the correct HAL model with a 75% success rate (the average success rate for assigning rate matrices to the tree's 48 edges was 99.25%) and, for the correct HAL model, identifying the correct HAS model with a 98% success rate. Finally, parameter estimates obtained under the correct HAL-HAS model were found to be accurate and precise.
The merits of our new algorithms were illustrated with an analysis of 42 337 second codon sites extracted from a concatenation of 106 alignments of orthologous genes encoded by the nuclear genomes of Saccharomyces cerevisiae, S. paradoxus, S. mikatae, S. kudriavzevii, S. castellii, S. kluyveri, S. bayanus, and Candida albicans. Our results show that second codon sites in the ancestral genome of these species contained 49.1% invariable sites, 39.6% variable sites belonging to one rate category (V1), and 11.3% variable sites belonging to a second rate category (V2). The ancestral nucleotide content was found to differ markedly across these three sets of sites, and the evolutionary processes operating at the variable sites were found to be non-SRH and best modeled by a combination of eight edge-specific rate matrices (four for V1 and four for V2). The number of substitutions per site at the variable sites also differed markedly, with sites belonging to V1 evolving slower than those belonging to V2 along the lineages separating the seven species of Saccharomyces. Finally, sites belonging to V1 appeared to have ceased evolving along the lineages separating S. cerevisiae, S. paradoxus, S. mikatae, S. kudriavzevii, and S. bayanus, implying that they might have become so selectively constrained that they could be considered invariable sites in these species.
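The GTR rate matrix at the heart of these models has a simple parameterisation: off-diagonal rates q_ij = s_ij * pi_j with symmetric exchangeabilities s_ij and stationary frequencies pi, and substitution probabilities P(t) = exp(Qt). A sketch with illustrative (not estimated) parameter values:

```python
import numpy as np
from scipy.linalg import expm

def gtr_rate_matrix(pi, rates):
    """GTR rate matrix for (A, C, G, T): q_ij = s_ij * pi_j for i != j,
    rows summing to zero. `rates` lists the six exchangeabilities
    s_AC, s_AG, s_AT, s_CG, s_CT, s_GT."""
    sAC, sAG, sAT, sCG, sCT, sGT = rates
    S = np.array([[0,   sAC, sAG, sAT],
                  [sAC, 0,   sCG, sCT],
                  [sAG, sCG, 0,   sGT],
                  [sAT, sCT, sGT, 0  ]], float)
    Q = S * np.asarray(pi)               # scale columns by frequencies
    np.fill_diagonal(Q, -Q.sum(axis=1))  # rows sum to zero
    return Q

pi = np.array([0.3, 0.2, 0.2, 0.3])                      # assumed frequencies
Q = gtr_rate_matrix(pi, [1.0, 4.0, 1.0, 1.0, 4.0, 1.0])  # transition bias
P = expm(0.5 * Q)        # substitution probabilities over branch length 0.5
```

The symmetry of s_ij is what makes the process time-reversible (pi_i q_ij = pi_j q_ji); the non-SRH models discussed above relax exactly these constraints by letting different edges carry different rate matrices.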

Relevância:

10.00% 10.00%

Publicador:

Resumo:

Structural equation modeling (SEM) is a powerful statistical approach for the testing of networks of direct and indirect theoretical causal relationships in complex data sets with intercorrelated dependent and independent variables. SEM is commonly applied in ecology, but the spatial information commonly found in ecological data remains difficult to model in an SEM framework. Here we propose a simple method for spatially explicit SEM (SE-SEM) based on the analysis of variance/covariance matrices calculated across a range of lag distances. This method provides readily interpretable plots of the change in path coefficients across scale and can be implemented using any standard SEM software package. We demonstrate the application of this method using three studies examining the relationships between environmental factors, plant community structure, nitrogen fixation, and plant competition. By design, these data sets had a spatial component, but were previously analyzed using standard SEM models. Using these data sets, we demonstrate the application of SE-SEM to regularly spaced, irregularly spaced, and ad hoc spatial sampling designs and discuss the increased inferential capability of this approach compared with standard SEM. We provide an R package, sesem, to easily implement spatial structural equation modeling.
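The core preprocessing step, covariance matrices computed over sample pairs binned by lag distance, is compact to sketch. The sesem R package implements the full method; the Python function below is a rough sketch of only the lag-binned covariance idea, on simulated coordinates and variables:

```python
import numpy as np

def lag_covariances(coords, X, bins):
    """Cross-covariance matrices of the columns of X, computed over all
    sample pairs whose separation distance falls in each lag bin."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    Xc = X - X.mean(axis=0)                 # centre each variable
    out = []
    for lo, hi in bins:
        ii, jj = np.where((d > lo) & (d <= hi))
        # average outer product over the pairs in this distance class
        C = (Xc[ii].T @ Xc[jj]) / len(ii)
        out.append(0.5 * (C + C.T))         # symmetrise
    return out

rng = np.random.default_rng(1)
coords = rng.uniform(0, 10, size=(60, 2))   # simulated sample locations
X = rng.normal(size=(60, 3))                # three simulated variables
covs = lag_covariances(coords, X, bins=[(0, 2), (2, 5), (5, 10)])
```

Each matrix in `covs` would then be fed to a standard SEM fit, giving one set of path coefficients per spatial scale.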

Relevância:

10.00% 10.00%

Publicador:

Resumo:

In vitro pre-vascularization is one of the main vascularization strategies in the tissue engineering field. Culturing cells within a tissue-engineered construct (TEC) prior to implantation provides researchers with a greater degree of control over the fate of the cells. However, balancing the diverse range of different cell culture parameters in vitro is seldom easy and in most cases, especially in highly vascularized tissues, more than one cell type will reside within the cell culture system. Culturing multiple cell types in the same construct presents its own unique challenges and pitfalls. The following review examines endothelial-driven vascularization and evaluates the direct and indirect role other cell types have in vessel and capillary formation. The article then analyses the different parameters researchers can modulate in a co-culture system in order to design optimal tissue-engineered constructs to match desired clinical applications.

Relevância:

10.00% 10.00%

Publicador:

Resumo:

The application of decellularized extracellular matrices to aid tissue regeneration in reconstructive surgery and regenerative medicine has been promising. Several decellularization protocols for removing cellular materials from natural tissues such as heart valves are currently in use. This paper evaluates the feasibility of extending this methodology relative to the desirable properties of load-bearing joint tissues, such as stiffness, porosity and the ability to recover adequately after deformation to facilitate physiological function. Two decellularization protocols, namely trypsin and Triton X-100, were evaluated for their effects on bovine articular cartilage, using biomechanical, biochemical and microstructural techniques. These analyses revealed that decellularization with trypsin resulted in severe loss of mechanical stiffness, including deleterious collapse of the collagen architecture, which in turn significantly compromised the porosity of the construct. In contrast, Triton X-100 detergent treatment yielded samples that retained mechanical stiffness relative to that of the normal intact cartilage sample, but the resulting construct contained remnant cellular constituents. We conclude that both of these common decellularization protocols are inadequate for producing constructs that can serve as effective replacements or scaffolds to regenerate articular joint tissue.

Relevância:

10.00% 10.00%

Publicador:

Resumo:

A wet-milling protocol was employed to produce pressed powder tablets with excellent cohesion and homogeneity suitable for laser ablation (LA) analysis of volatile and refractory elements in sediment. The influence of sample preparation on analytical performance was also investigated, including sample homogeneity, accuracy and limit of detection. Milling in a volatile solvent for 40 min ensured the sample was well mixed and could reasonably recover both volatile (Hg) and refractory (Zr) elements. With the exception of Cr (−52%) and Nb (+26%), major, minor and trace elements in STSD-1 and MESS-3 could be analysed within ±20% of the certified values. The method was also compared with a total digestion method using HF by analysing 10 different sediment samples. The laser method recovers significantly higher amounts of analytes such as Ag, Cd and Sn than the total digestion method, making it a more robust method for elements across the periodic table. LA-ICP-MS also eliminates interferences from chemical reagents, as well as the health and safety risks associated with digestion processes. Therefore, it can be considered an enhanced method for the analysis of heterogeneous matrices such as river sediments.
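The ±20% acceptance criterion is a simple percent-bias check against certified values. A small sketch; the measured concentrations below are illustrative numbers chosen to reproduce the Cr and Nb biases quoted above, not the study's data:

```python
# Hypothetical certified vs measured concentrations (mg/kg) for a
# reference sediment; values are illustrative only.
certified = {"Cr": 67.0, "Nb": 12.0, "Zn": 178.0, "Cu": 36.0}
measured  = {"Cr": 32.2, "Nb": 15.1, "Zn": 170.0, "Cu": 39.0}

def recovery_bias(cert, meas):
    """Percent deviation of each measured value from its certified
    value; elements outside +/-20 % fail the acceptance criterion."""
    bias = {el: 100.0 * (meas[el] - cert[el]) / cert[el] for el in cert}
    flagged = {el for el, b in bias.items() if abs(b) > 20.0}
    return bias, flagged

bias, flagged = recovery_bias(certified, measured)
```

With these illustrative inputs, Cr (about −52%) and Nb (about +26%) fail the criterion while Zn and Cu pass, mirroring the pattern reported in the abstract.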

Relevância:

10.00% 10.00%

Publicador:

Resumo:

We consider rank regression for clustered data analysis and investigate the induced smoothing method for obtaining the asymptotic covariance matrices of the parameter estimators. We prove that the induced estimating functions are asymptotically unbiased and the resulting estimators are strongly consistent and asymptotically normal. The induced smoothing approach provides an effective way to obtain asymptotic covariance matrices for between- and within-cluster estimators, and for a combined estimator that takes account of within-cluster correlations. We also carry out extensive simulation studies to assess the performance of the different estimators. The proposed methodology is substantially faster in computation and more stable in numerical results than existing methods. We apply the proposed methodology to a dataset from a randomized clinical trial.
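The induced smoothing idea is that the discontinuous indicator inside a rank estimating function is replaced by a normal CDF, making the function smooth enough to solve and differentiate. A minimal single-covariate sketch (a Wilcoxon-type slope estimator on simulated independent data, not the paper's clustered setting):

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def smoothed_rank_estfun(beta, x, y, h):
    """Induced-smoothing version of a Wilcoxon-type rank estimating
    function for y = beta*x + e: the jump I(e_i > e_j) is replaced by
    the smooth surrogate Phi((e_i - e_j) / h)."""
    e = y - beta * x
    de = e[:, None] - e[None, :]         # all pairwise residual gaps
    dx = x[:, None] - x[None, :]         # all pairwise covariate gaps
    return np.sum(dx * (norm.cdf(de / h) - 0.5))

rng = np.random.default_rng(2)
x = rng.normal(size=200)
y = 1.5 * x + rng.normal(size=200)       # true slope 1.5
h = 1.0 / np.sqrt(len(x))                # bandwidth ~ n^(-1/2)

# The smoothed function is monotone in beta, so a root bracket works.
beta_hat = brentq(smoothed_rank_estfun, -2.0, 5.0, args=(x, y, h))
```

Because the smoothed function is differentiable, its Jacobian can be used for sandwich-type covariance estimation, which is the payoff of the approach for clustered data.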

Relevância:

10.00% 10.00%

Publicador:

Resumo:

We consider the analysis of longitudinal data when the covariance function is modeled by parameters additional to the mean parameters. In general, inconsistent estimators of the covariance (variance/correlation) parameters will be produced when the "working" correlation matrix is misspecified, which may result in a great loss of efficiency of the mean parameter estimators (although their consistency is preserved). We consider using different "working" correlation models for the variance and the mean parameters. In particular, we find that an independence working model should be used for estimating the variance parameters to ensure their consistency in case the correlation structure is misspecified. The designated "working" correlation matrices should be used for estimating the mean and the correlation parameters to attain high efficiency for estimating the mean parameters. Simulation studies indicate that the proposed algorithm performs very well. We also applied different estimation procedures to a data set from a clinical trial for illustration.
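The two working correlation structures most often plugged into such analyses, exchangeable and AR(1), are one-liners to construct. A minimal sketch (generic structures, not the paper's estimation procedure):

```python
import numpy as np

def exchangeable(m, alpha):
    """Exchangeable working correlation: all off-diagonals equal alpha."""
    return (1 - alpha) * np.eye(m) + alpha * np.ones((m, m))

def ar1(m, rho):
    """AR(1) working correlation: corr(y_j, y_k) = rho^|j-k|."""
    idx = np.arange(m)
    return rho ** np.abs(idx[:, None] - idx[None, :])

R_ex  = exchangeable(4, 0.3)   # cluster size 4, assumed alpha = 0.3
R_ar1 = ar1(4, 0.5)            # cluster size 4, assumed rho = 0.5
```

Under the paper's recommendation, the independence structure (alpha = 0, i.e. the identity) would be used when estimating variance parameters, and one of these designated structures when estimating the mean parameters.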

Relevância:

10.00% 10.00%

Publicador:

Resumo:

The method of generalised estimating equations for regression modelling of clustered outcomes allows for specification of a working correlation matrix that is intended to approximate the true correlation matrix of the observations. We investigate the asymptotic relative efficiency of the generalised estimating equation estimator of the mean parameters when the correlation parameters are estimated by various methods. The asymptotic relative efficiency depends on three features of the analysis, namely (i) the discrepancy between the working correlation structure and the unobservable true correlation structure, (ii) the method by which the correlation parameters are estimated, and (iii) the 'design', by which we refer to both the structures of the predictor matrices within clusters and the distribution of cluster sizes. Analytical and numerical studies of realistic data-analysis scenarios show that the choice of working covariance model has a substantial impact on regression estimator efficiency. Protection against avoidable loss of efficiency associated with covariance misspecification is obtained when a 'Gaussian estimation' pseudolikelihood procedure is used with an AR(1) structure.
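The efficiency comparison described above can be reproduced numerically with the standard GEE sandwich formula for a linear model with unit variances; the cluster size, correlation, and design below are illustrative assumptions, not the paper's scenarios:

```python
import numpy as np

def sandwich_var(X, R_work, R_true):
    """Asymptotic covariance of the GEE mean-parameter estimator for a
    linear model with cluster design matrix X, working correlation
    R_work and true correlation R_true (unit variances):
    (X'WX)^-1 X'W R_true W X (X'WX)^-1 with W = R_work^-1."""
    W = np.linalg.inv(R_work)
    bread = np.linalg.inv(X.T @ W @ X)
    meat = X.T @ W @ R_true @ W @ X
    return bread @ meat @ bread

m, rho = 5, 0.7
idx = np.arange(m)
R_true = rho ** np.abs(idx[:, None] - idx[None, :])   # true AR(1)
X = np.column_stack([np.ones(m), idx])                # intercept + time

V_indep = sandwich_var(X, np.eye(m), R_true)  # working independence
V_ar1 = sandwich_var(X, R_true, R_true)       # correctly specified
are_slope = V_ar1[1, 1] / V_indep[1, 1]       # slope efficiency ratio
```

When the working structure matches the truth, the sandwich collapses to the model-based covariance (X' R^-1 X)^-1, which is why the ratio measures the efficiency cost of misspecification.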