954 results for Bochner tensor


Relevance: 10.00%

Abstract:

A user’s query is considered to be an imprecise description of their information need. Automatic query expansion is the process of reformulating the original query with the goal of improving retrieval effectiveness. Many successful query expansion techniques ignore information about the dependencies that exist between words in natural language. However, more recent approaches have demonstrated that by explicitly modeling associations between terms significant improvements in retrieval effectiveness can be achieved over those that ignore these dependencies. State-of-the-art dependency-based approaches have been shown to primarily model syntagmatic associations. Syntagmatic associations infer a likelihood that two terms co-occur more often than by chance. However, structural linguistics relies on both syntagmatic and paradigmatic associations to deduce the meaning of a word. Given the success of dependency-based approaches and the reliance on word meanings in the query formulation process, we argue that modeling both syntagmatic and paradigmatic information in the query expansion process will improve retrieval effectiveness. This article develops and evaluates a new query expansion technique that is based on a formal, corpus-based model of word meaning that models syntagmatic and paradigmatic associations. We demonstrate that when sufficient statistical information exists, as in the case of longer queries, including paradigmatic information alone provides significant improvements in retrieval effectiveness across a wide variety of data sets. More generally, when our new query expansion approach is applied to large-scale web retrieval it demonstrates significant improvements in retrieval effectiveness over a strong baseline system, based on a commercial search engine.
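The syntagmatic half of this idea (rewarding candidate terms that co-occur with query terms inside a window of pseudo-relevant text) can be sketched in a few lines. This is an illustration only, not the article's actual model; `expand_query`, the toy documents, and all parameters are invented for the example:

```python
from collections import Counter

def expand_query(query_terms, feedback_docs, window=5, k=3):
    """Toy syntagmatic expansion: rank candidate terms by how often
    they co-occur with query terms inside a sliding window over the
    (pseudo) relevant feedback documents."""
    scores = Counter()
    qset = set(query_terms)
    for doc in feedback_docs:
        tokens = doc.lower().split()
        for i, tok in enumerate(tokens):
            if tok in qset:
                # credit every nearby non-query term (syntagmatic association)
                lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
                for neigh in tokens[lo:hi]:
                    if neigh not in qset:
                        scores[neigh] += 1
    expansion = [t for t, _ in scores.most_common(k)]
    return list(query_terms) + expansion

docs = ["tensor imaging measures water diffusion in tissue",
        "diffusion tensor imaging maps white matter tissue"]
print(expand_query(["diffusion", "tensor"], docs, k=2))
```

A paradigmatic component would additionally score terms by the similarity of the contexts they appear in (substitutability), which this toy version omits.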

Relevance: 10.00%

Abstract:

Molecular-level computer simulations of restricted water diffusion can be used to develop models for relating diffusion tensor imaging measurements of anisotropic tissue to microstructural tissue characteristics. The diffusion tensors resulting from these simulations can then be analyzed in terms of their relationship to the structural anisotropy of the model used. As the translational motion of water molecules is essentially random, their dynamics can be effectively simulated using computers. In addition to modeling water dynamics and water-tissue interactions, the simulation software of the present study was developed to automatically generate collagen fiber networks from user-defined parameters. This flexibility provides the opportunity for further investigations of the relationship between the diffusion tensor of water and morphologically different models representing different anisotropic tissues.
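As a rough sketch of how such a simulation recovers an apparent diffusion tensor from random walks, consider a toy model with reflecting planar walls along one axis (not the study's collagen-fibre generator; all parameters are illustrative):

```python
import numpy as np

def simulate_diffusion_tensor(n_molecules=20000, n_steps=200, dt=1e-3,
                              D_free=2.0e-3, wall_y=0.02, seed=0):
    """Toy Monte Carlo: free Gaussian steps along x and z, with
    reflecting walls at y = +/- wall_y mimicking restriction
    perpendicular to a fibre-like structure.  The apparent diffusion
    tensor is estimated from the displacement covariance, <r r^T> = 2 D t."""
    rng = np.random.default_rng(seed)
    sigma = np.sqrt(2.0 * D_free * dt)          # per-axis step size
    pos = np.zeros((n_molecules, 3))
    for _ in range(n_steps):
        pos += rng.normal(0.0, sigma, pos.shape)
        y = pos[:, 1]                            # view into pos
        # reflect molecules that crossed the restricting walls in y
        y[y > wall_y] = 2.0 * wall_y - y[y > wall_y]
        y[y < -wall_y] = -2.0 * wall_y - y[y < -wall_y]
    t = n_steps * dt
    return pos.T @ pos / n_molecules / (2.0 * t)

D = simulate_diffusion_tensor()
print(np.diag(D))   # diffusion along the restricted axis (y) is reduced
```

The estimated tensor is anisotropic: the y-diagonal entry falls below the free value while x and z stay near `D_free`, which is exactly the kind of structure-to-tensor relationship the simulations probe.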

Relevance: 10.00%

Abstract:

Many successful query expansion techniques ignore information about the term dependencies that exist within natural language. However, researchers have recently demonstrated that consistent and significant improvements in retrieval effectiveness can be achieved by explicitly modelling term dependencies within the query expansion process. This has created an increased interest in dependency-based models. State-of-the-art dependency-based approaches primarily model term associations known within structural linguistics as syntagmatic associations, which are formed when terms co-occur more often than by chance. However, structural linguistics proposes that the meaning of a word is also dependent on its paradigmatic associations, which are formed between words that can substitute for each other without affecting the acceptability of a sentence. Given the reliance on word meanings when a user formulates their query, our approach takes the novel step of modelling both syntagmatic and paradigmatic associations within the query expansion process, based on the (pseudo) relevant documents returned in web search. The results demonstrate that this approach can provide significant improvements in web retrieval effectiveness when compared to a strong benchmark retrieval system.

Relevance: 10.00%

Abstract:

We present a mini-review of the development and contemporary applications of diffusion-sensitive nuclear magnetic resonance (NMR) techniques in biomedical sciences. Molecular diffusion is a fundamental physical phenomenon present in all biological systems. Due to the connection between experimentally measured diffusion metrics and the microscopic environment sensed by the diffusing molecules, diffusion measurements can be used for characterisation of molecular size, molecular binding and association, and the morphology of biological tissues. The emergence of magnetic resonance was instrumental in the development of biomedical applications of diffusion. We discuss the fundamental physical principles of diffusion NMR spectroscopy and diffusion MR imaging. The emphasis is placed on conceptual understanding, historical evolution and practical applications rather than complex technical details. Mathematical description of diffusion is presented to the extent required for a basic understanding of the concepts. We present a wide range of spectroscopic and imaging applications of diffusion magnetic resonance, including colloidal drug delivery vehicles; protein association; characterisation of cell morphology; neural fibre tractography; cardiac imaging; and the imaging of load-bearing connective tissues. This paper is intended as an accessible introduction to the exciting and growing field of diffusion magnetic resonance.
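The basic quantitative relation behind many of these measurements is the mono-exponential Stejskal-Tanner decay, S(b) = S0·exp(−bD). A minimal illustration of fitting an apparent diffusion coefficient from it (a sketch with synthetic b-values and signals, not the review's own material):

```python
import numpy as np

def fit_adc(b_values, signals):
    """Estimate the apparent diffusion coefficient (ADC) from the
    Stejskal-Tanner mono-exponential decay  S(b) = S0 * exp(-b * D)
    via a linear least-squares fit of log(S) against b."""
    b = np.asarray(b_values, dtype=float)
    logS = np.log(np.asarray(signals, dtype=float))
    # slope of log-signal vs b gives -D; intercept gives log(S0)
    slope, intercept = np.polyfit(b, logS, 1)
    return -slope, np.exp(intercept)

# synthetic water-like example: D = 2.0e-3 mm^2/s, S0 = 1000
b = np.array([0.0, 250.0, 500.0, 750.0, 1000.0])
S = 1000.0 * np.exp(-b * 2.0e-3)
D, S0 = fit_adc(b, S)
print(D, S0)   # recovers D ~ 2.0e-3 and S0 ~ 1000
```

Diffusion tensor imaging generalizes this scalar fit by acquiring signals along many gradient directions and fitting a full 3x3 tensor instead of a single D.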

Relevance: 10.00%

Abstract:

Aiming at the large-scale numerical simulation of particle-reinforced materials, the concept of a local Eshelby matrix is introduced into the computational model of the eigenstrain boundary integral equation (BIE) to solve the problem of interactions among particles. The local Eshelby matrix can be considered an extension, in numerical form, of the concepts of the Eshelby tensor and the equivalent inclusion. Taking the subdomain boundary element method as a reference, three-dimensional stress analyses are carried out for ellipsoidal particles in full space with the proposed computational model. The numerical examples verify not only the correctness and feasibility but also the high efficiency of the present model and its solution procedure, demonstrating its potential for the large-scale numerical simulation of particle-reinforced materials.

Relevance: 10.00%

Abstract:

Recent advances suggest that encoding images through Symmetric Positive Definite (SPD) matrices and then interpreting such matrices as points on Riemannian manifolds can lead to increased classification performance. Taking into account manifold geometry is typically done via (1) embedding the manifolds in tangent spaces, or (2) embedding into Reproducing Kernel Hilbert Spaces (RKHS). While embedding into tangent spaces allows the use of existing Euclidean-based learning algorithms, manifold shape is only approximated which can cause loss of discriminatory information. The RKHS approach retains more of the manifold structure, but may require non-trivial effort to kernelise Euclidean-based learning algorithms. In contrast to the above approaches, in this paper we offer a novel solution that allows SPD matrices to be used with unmodified Euclidean-based learning algorithms, with the true manifold shape well-preserved. Specifically, we propose to project SPD matrices using a set of random projection hyperplanes over RKHS into a random projection space, which leads to representing each matrix as a vector of projection coefficients. Experiments on face recognition, person re-identification and texture classification show that the proposed approach outperforms several recent methods, such as Tensor Sparse Coding, Histogram Plus Epitome, Riemannian Locality Preserving Projection and Relational Divergence Classification.
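A simplified variant of this pipeline can be sketched by vectorizing SPD matrices through a log-Euclidean embedding and then applying random projection hyperplanes in plain Euclidean space (the paper projects over RKHS; this sketch, its function names, and the toy matrices are invented for illustration):

```python
import numpy as np

def spd_log_vector(M):
    """Map an SPD matrix to a Euclidean vector via the matrix logarithm
    (log-Euclidean embedding), computed from its eigendecomposition."""
    w, V = np.linalg.eigh(M)
    logM = V @ np.diag(np.log(w)) @ V.T
    # vectorize the upper triangle (off-diagonals scaled by sqrt(2)
    # so that Euclidean distance matches the Frobenius norm)
    iu = np.triu_indices_from(logM)
    scale = np.where(iu[0] == iu[1], 1.0, np.sqrt(2.0))
    return logM[iu] * scale

def random_project(vecs, out_dim, seed=0):
    """Project embedded SPD descriptors onto random hyperplanes,
    giving a compact vector of projection coefficients per matrix."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(vecs.shape[1], out_dim)) / np.sqrt(out_dim)
    return vecs @ W

# two toy 3x3 SPD matrices (e.g. region covariance descriptors)
A = np.array([[2.0, 0.3, 0.0], [0.3, 1.5, 0.2], [0.0, 0.2, 1.0]])
B = np.eye(3)
X = np.vstack([spd_log_vector(A), spd_log_vector(B)])
print(random_project(X, out_dim=4).shape)   # (2, 4)
```

The resulting coefficient vectors can be fed directly to unmodified Euclidean-based learning algorithms, which is the property the paper's RKHS-based projections are designed to preserve while keeping more of the manifold structure.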

Relevance: 10.00%

Abstract:

Strain-based failure criteria have several advantages over stress-based failure criteria: they can account for elastic and inelastic strains, they utilise direct, observable effects instead of inferred ones (strain gauges vs. stress estimates), and they can model complete stress-strain curves, including pre-peak non-linear elasticity and post-peak strain weakening. In this study, a strain-based failure criterion derived from thermodynamic first principles utilising the concepts of continuum damage mechanics is presented. Furthermore, implementation of this failure criterion in a finite-element simulation is demonstrated and applied to the stability of underground coal mine pillars. In numerical studies, pillar strength is usually expressed in terms of critical stresses or stress-based failure criteria, where scaling with pillar width and height is common. Previous publications have employed the finite-element method for pillar stability analysis using stress-based failure criteria such as Mohr-Coulomb and Hoek-Brown, or stress-based scalar damage models. A novel constitutive material model, which takes into consideration anisotropy as well as elastic strain and damage as state variables, has been developed and is presented in this paper. The damage threshold and its evolution are strain-controlled, and coupling of the state variables is achieved through the damage-induced degradation of the elasticity tensor. This material model is implemented in the finite-element software ABAQUS and can be applied to 3D problems. Initial results show that this new material model is capable of describing the non-linear behaviour of geomaterials commonly observed before peak strength is reached, as well as post-peak strain softening. Furthermore, it is demonstrated that the model can account for the directional dependency of failure behaviour (i.e. anisotropy) and has the potential to be expanded to environmental controls such as temperature or moisture.
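A one-dimensional caricature of a strain-controlled damage law shows how pre-peak non-linearity and post-peak softening emerge from a degradation factor (1 − d). This is not the paper's anisotropic tensor model; the modulus, threshold strain, and failure strain below are invented numbers:

```python
import numpy as np

def damage_stress_strain(strains, E=10e9, eps0=0.001, eps_f=0.004):
    """1-D sketch of a strain-controlled continuum damage law:
    stress = (1 - d) * E * eps, with damage d growing linearly from 0
    at the threshold strain eps0 to 1 at the failure strain eps_f.
    Damage is irreversible: d can only grow."""
    d = 0.0
    stresses = []
    for eps in strains:
        d_trial = np.clip((eps - eps0) / (eps_f - eps0), 0.0, 1.0)
        d = max(d, d_trial)           # irreversibility of damage
        stresses.append((1.0 - d) * E * eps)
    return np.array(stresses)

eps = np.linspace(0.0, 0.004, 9)
sigma = damage_stress_strain(eps)
# linear up to eps0, then a peak followed by strain softening to zero
print(sigma / 1e6)   # MPa
```

In the paper's full model the scalar (1 − d) is replaced by damage-induced degradation of the elasticity tensor, which is what allows directional (anisotropic) failure behaviour.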

Relevance: 10.00%

Abstract:

A crucial issue with hybrid quantum secret sharing schemes is the amount of data that is allocated to the participants: the smaller the amount of allocated data, the better the performance of a scheme. Moreover, quantum data are very hard and expensive to deal with, so it is desirable to use as little quantum data as possible. To achieve this goal, we first construct extended unitary operations by the tensor product of n (n ≥ 2) basic unitary operations, and then use those extended operations to design two quantum secret sharing schemes. The resulting dual compressible hybrid quantum secret sharing schemes, in which classical data play a complementary role to quantum data, range from threshold to access structure. Compared with existing hybrid quantum secret sharing schemes, our proposed schemes reduce not only the number of quantum participants, but also the number of particles and the size of the classical shares. To be exact, the number of particles used to carry quantum data is reduced to 1, while the size of the classical secret shares is reduced to l−2 m−1 based on the ((m+1, n′)) threshold and to l−2 r2 (where r2 is the number of maximal unqualified sets) based on the adversary structure. Consequently, our proposed schemes can greatly reduce the cost and difficulty of generating and storing EPR pairs and lower the risk of transmitting encoded particles.
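The building block described here, an extended unitary formed as a tensor product of basic unitary operations, can be illustrated with Pauli gates (a generic sketch, not the schemes' actual operator choices):

```python
import numpy as np

# basic single-qubit unitary operations (Pauli gates)
I = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

def extended_unitary(ops):
    """Build an extended unitary as the tensor (Kronecker) product of
    n >= 2 basic unitary operations: U = U1 (x) U2 (x) ... (x) Un."""
    U = ops[0]
    for op in ops[1:]:
        U = np.kron(U, op)
    return U

U = extended_unitary([X, Z, I])   # acts on 3 qubits -> an 8x8 matrix
print(U.shape)                    # (8, 8)
# a tensor product of unitaries is itself unitary: U U^dagger = identity
print(np.allclose(U @ U.conj().T, np.eye(8)))   # True
```

Because the tensor product of unitaries is again unitary, a single such extended operation can act on one multi-qubit carrier particle, which is the structural reason the particle count can be compressed.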

Relevance: 10.00%

Abstract:

In an estuary, mixing and dispersion result from a combination of large-scale advection and small-scale turbulence, which are complex to estimate. Predictions of scalar transport and mixing are often inferred and rarely accurate, due to an inadequate understanding of the contributions of these different scales to estuarine recirculation. A multi-device field study was conducted in a small sub-tropical estuary under neap tide conditions with near-zero freshwater discharge for about 48 hours. During the study, acoustic Doppler velocimeters (ADVs) were sampled at high frequency (50 Hz), while an acoustic Doppler current profiler (ADCP) and GPS-tracked drifters were used to obtain lower-frequency spatial distributions of the flow parameters within the estuary. The velocity measurements were complemented with continuous measurements of water depth, conductivity, temperature and other physicochemical parameters. Thorough quality control was carried out by applying relevant error-removal filters to the individual data sets to intercept spurious data. A triple decomposition (TD) technique was introduced to assess the contributions of tides, resonance and 'true' turbulence in the flow field. The time series of mean flow measurements for both the ADCP and the drifters were consistent with those of the mean ADV data when sampled within a similar spatial domain. The tidal-scale fluctuations of velocity and water level were used to examine the response of the estuary to tidal inertial currents. The channel exhibited a mixed-type wave with a typical phase lag between 0.035π and 0.116π. A striking feature of the ADV velocity data was the slow fluctuations, which exhibited large amplitudes of up to 50% of the tidal amplitude, particularly in slack waters. Such slow fluctuations were simultaneously observed in a number of physicochemical properties of the channel. The ensuing turbulence field showed some degree of anisotropy.
For all ADV units, the horizontal turbulence ratio ranged between 0.4 and 0.9 and decreased towards the bed, while the vertical turbulence ratio was on average unity at z = 0.32 m and approximately 0.5 for the upper ADV (z = 0.55 m). The statistical analysis suggested that the ebb-phase turbulence field was dominated by eddies that evolved from ejection-type processes, while that of the flood phase contained mixed eddies with a significant proportion related to sweep-type processes. Over 65% of the skewness values fell within the range expected of a finite Gaussian distribution, and the bulk of the excess kurtosis values (over 70%) fell between -0.5 and +2. The TD technique described herein allowed the characterisation of a broader temporal scale of fluctuations of the high-frequency data sampled within the duration of a few tidal cycles. The study characterises the ranges of fluctuation required for accurate modelling of shallow-water dispersion and mixing in a sub-tropical estuary.
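A minimal form of such a triple decomposition splits a velocity record into tidal, slow (resonance-scale) and turbulent parts with two moving-average low-pass filters. The window lengths, sampling rate, and synthetic record below are illustrative, not the study's actual cutoffs:

```python
import numpy as np

def triple_decompose(u, fs, tidal_win_s=3600.0, slow_win_s=100.0):
    """Sketch of a triple decomposition of a velocity record:
    tidal component (slowest scale), slow 'resonance' fluctuation,
    and residual turbulence, via two moving-average low-pass filters."""
    def moving_avg(x, win_samples):
        win = max(1, int(win_samples))
        kernel = np.ones(win) / win
        return np.convolve(x, kernel, mode="same")
    tidal = moving_avg(u, tidal_win_s * fs)          # slowest scale
    slow = moving_avg(u, slow_win_s * fs) - tidal    # resonance band
    turb = u - tidal - slow                          # residual turbulence
    return tidal, slow, turb

fs = 1.0                                    # Hz (downsampled for brevity)
t = np.arange(0.0, 7200.0, 1.0 / fs)        # two hours of record
rng = np.random.default_rng(1)
u = (0.3 * np.sin(2 * np.pi * t / 7200.0)    # tidal-scale motion
     + 0.05 * np.sin(2 * np.pi * t / 400.0)  # slow "resonance" fluctuation
     + 0.02 * rng.normal(size=t.size))       # turbulence-like noise
tidal, slow, turb = triple_decompose(u, fs)
print(u.shape == tidal.shape == slow.shape == turb.shape)   # True
```

By construction the three components sum back to the original record, so statistics such as turbulence ratios, skewness and kurtosis can then be computed on the residual turbulence alone.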

Relevance: 10.00%

Abstract:

Because brain structure and function are affected in neurological and psychiatric disorders, it is important to disentangle the sources of variation in these phenotypes. Over the past 15 years, twin studies have found evidence for both genetic and environmental influences on neuroimaging phenotypes, but considerable variation across studies makes it difficult to draw clear conclusions about the relative magnitude of these influences. Here we performed the first meta-analysis of structural MRI data from 48 studies on >1,250 twin pairs, and diffusion tensor imaging data from 10 studies on 444 twin pairs. The proportion of total variance accounted for by genes (A), shared environment (C), and unshared environment (E) was calculated by averaging A, C, and E estimates across studies from independent twin cohorts, weighting by sample size. The results indicated that additive genetic estimates were significantly different from zero for all meta-analyzed phenotypes, with the exception of fractional anisotropy (FA) of the callosal splenium, and cortical thickness (CT) of the uncus, left parahippocampal gyrus, and insula. For many phenotypes there was also a significant influence of C. We now have good estimates of heritability for many regional and lobar CT measures, in addition to the global volumes. Confidence intervals are wide and the number of individuals is small for many of the other phenotypes. In conclusion, while our meta-analysis shows that imaging measures are strongly influenced by genes, and that novel phenotypes such as CT measures, FA measures, and brain activation measures look especially promising, replication across independent samples and demographic groups is necessary.
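The pooling step described, averaging A, C, and E estimates across cohorts weighted by sample size, is straightforward to sketch (the cohort numbers below are hypothetical, not study data):

```python
import numpy as np

def pooled_ace(estimates, n_pairs):
    """Pool A/C/E variance-component estimates across independent twin
    cohorts by a sample-size-weighted average, as in the meta-analytic
    approach described above (illustrative numbers, not study data)."""
    w = np.asarray(n_pairs, dtype=float)
    est = np.asarray(estimates, dtype=float)   # rows: studies; cols: A, C, E
    return (est * w[:, None]).sum(axis=0) / w.sum()

# three hypothetical cohorts reporting (A, C, E) for one phenotype
ace = [[0.80, 0.10, 0.10],
       [0.70, 0.15, 0.15],
       [0.60, 0.20, 0.20]]
pairs = [100, 50, 25]
A, C, E = pooled_ace(ace, pairs)
print(round(A, 3), round(C, 3), round(E, 3))   # 0.743 0.129 0.129
```

Because each study's components sum to 1, the weighted averages do too, so the pooled A can be read directly as a heritability estimate for the phenotype.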

Relevance: 10.00%

Abstract:

The NTRK1 gene (also known as TRKA) encodes a high-affinity receptor for NGF, a neurotrophin involved in nervous system development and myelination. NTRK1 has been implicated in neurological function via links between the T allele at rs6336 (NTRK1-T) and schizophrenia risk. A variant in the neurotrophin gene BDNF was previously associated with white matter integrity in young adults, highlighting the importance of neurotrophins to white matter development. We hypothesized that NTRK1-T would relate to lower fractional anisotropy in healthy adults. We scanned 391 healthy adult human twins and their siblings (mean age: 23.6 ± 2.2 years; 31 NTRK1-T carriers, 360 non-carriers) using 105-gradient diffusion tensor imaging at 4 tesla. We evaluated in brain white matter how NTRK1-T and the NTRK1 rs4661063 allele A (rs4661063-A, which is in moderate linkage disequilibrium with rs6336) related to voxelwise fractional anisotropy, a common diffusion tensor imaging measure of white matter microstructure. We used mixed-model regression to control for family relatedness, age, and sex. The sample was split in half to test the reproducibility of results. The false discovery rate method corrected for voxelwise multiple comparisons. NTRK1-T and rs4661063-A correlated with lower white matter fractional anisotropy, independent of age and sex (multiple-comparisons corrected: false discovery rate critical p = 0.038 for NTRK1-T and 0.013 for rs4661063-A). In each half-sample, the NTRK1-T effect was replicated in the cingulum, corpus callosum, superior and inferior longitudinal fasciculi, inferior fronto-occipital fasciculus, superior corona radiata, and uncinate fasciculus. Our results suggest that NTRK1-T is important for the development of white matter microstructure.
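The fractional anisotropy measure used throughout these studies is a standard function of the diffusion tensor's eigenvalues; a minimal sketch (the example tensors are invented):

```python
import numpy as np

def fractional_anisotropy(tensor):
    """Compute fractional anisotropy (FA) from a 3x3 diffusion tensor:
    FA = sqrt(3/2) * ||lambda - mean(lambda)|| / ||lambda||,
    where lambda are the tensor's eigenvalues.  FA is 0 for isotropic
    diffusion and approaches 1 for diffusion along a single axis."""
    lam = np.linalg.eigvalsh(tensor)
    md = lam.mean()                              # mean diffusivity
    num = np.sqrt(((lam - md) ** 2).sum())
    den = np.sqrt((lam ** 2).sum())
    return np.sqrt(1.5) * num / den

isotropic = np.diag([1.0, 1.0, 1.0]) * 1e-3      # free-water-like
fibre_like = np.diag([1.7, 0.3, 0.3]) * 1e-3     # strongly anisotropic
print(fractional_anisotropy(isotropic))          # ~0
print(round(fractional_anisotropy(fibre_like), 3))   # 0.799
```

Voxelwise maps of this quantity are what the genetic association analyses regress against carrier status.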

Relevance: 10.00%

Abstract:

There is a strong genetic risk for late-onset Alzheimer's disease (AD), but so far few gene variants have been identified that reliably contribute to that risk. A newly confirmed genetic risk allele C of the clusterin (CLU) gene variant rs11136000 is carried by ~88% of Caucasians. The C allele confers a 1.16 greater odds of developing late-onset AD than the T allele. AD patients have reductions in regional white matter integrity. We evaluated whether the CLU risk variant was similarly associated with lower white matter integrity in healthy young humans. Evidence of early brain differences would offer a target for intervention decades before symptom onset. We scanned 398 healthy young adults (mean age, 23.6 ± 2.2 years) with diffusion tensor imaging, a variation of magnetic resonance imaging sensitive to white matter integrity in the living brain. We assessed genetic associations using mixed-model regression at each point in the brain to map the profile of these associations with white matter integrity. Each C allele copy of the CLU variant was associated with lower fractional anisotropy, a widely accepted measure of white matter integrity, in multiple brain regions, including several known to degenerate in AD. These regions included the splenium of the corpus callosum, the fornix, cingulum, and superior and inferior longitudinal fasciculi in both brain hemispheres. Young healthy carriers of the CLU gene risk variant showed a distinct profile of lower white matter integrity that may increase vulnerability to developing AD later in life.

Relevance: 10.00%

Abstract:

The NTRK3 gene (also known as TRKC) encodes a high-affinity receptor for neurotrophin-3 (NT-3), which is implicated in oligodendrocyte and myelin development. We previously found that white matter integrity in young adults is related to common variants in genes encoding neurotrophins and their receptors, underscoring the importance of neurotrophins for white matter development. NTRK3 variants are putative risk factors for schizophrenia, bipolar disorder, and obsessive-compulsive disorder hoarding, suggesting that some NTRK3 variants may affect the brain. To test this, we scanned 392 healthy adult twins and their siblings (mean age: 23.6 ± 2.2 years; range: 20-29 years) with 105-gradient 4-tesla diffusion tensor imaging (DTI). We identified 18 single nucleotide polymorphisms (SNPs) in the NTRK3 gene that have been associated with neuropsychiatric disorders. We used a multi-SNP model, adjusting for family relatedness, age, and sex, to relate these variants to voxelwise fractional anisotropy (FA), a DTI measure of white matter integrity. FA was optimally predicted (based on the highest false discovery rate critical p) by five SNPs (rs1017412, rs2114252, rs16941261, rs3784406, and rs7176429; overall FDR critical p = 0.028). Gene effects were widespread and included the corpus callosum genu and inferior longitudinal fasciculus, regions implicated in several neuropsychiatric disorders and previously associated with other neurotrophin-related genetic variants in an overlapping sample of subjects. NTRK3 genetic variants, and neurotrophins more generally, may influence white matter integrity in brain regions implicated in neuropsychiatric disorders.

Relevance: 10.00%

Abstract:

In this paper, we develop and validate a new Statistically Assisted Fluid Registration Algorithm (SAFIRA) for brain images. A non-statistical version of this algorithm was first implemented in [2] and re-formulated using Lagrangian mechanics in [3]. Here we extend this algorithm to 3D: given 3D brain images from a population, vector fields and their corresponding deformation matrices are computed in a first round of registrations using the non-statistical implementation. Covariance matrices for both the deformation matrices and the vector fields are then obtained and incorporated (separately or jointly) in the regularizing (i.e., the non-conservative Lagrangian) terms, creating four versions of the algorithm. We evaluated the accuracy of each algorithm variant using the manually labeled LPBA40 dataset, which provides ground-truth anatomical segmentations. We also compared the power of the different algorithms using tensor-based morphometry (a technique to analyze local volumetric differences in brain structure) applied to 46 3D brain scans from healthy monozygotic twins.

Relevance: 10.00%

Abstract:

In this paper, we used a nonconservative Lagrangian mechanics approach to formulate a new statistical algorithm for fluid registration of 3-D brain images, named SAFIRA (statistically-assisted fluid image registration algorithm). A nonstatistical version of this algorithm was implemented first, in which the deformation was regularized by penalizing deviations from a zero rate of strain. In the statistical versions, the terms regularizing the deformation include the covariance of the deformation matrices (Σ) and of the vector fields (q). Here, we used a Lagrangian framework to reformulate this algorithm, showing that the regularizing terms essentially allow nonconservative work to occur during the flow. Given 3-D brain images from a group of subjects, vector fields and their corresponding deformation matrices are computed in a first round of registrations using the nonstatistical implementation. Covariance matrices for both the deformation matrices and the vector fields are then obtained and incorporated (separately or jointly) in the nonconservative terms, creating four versions of SAFIRA. We evaluated and compared our algorithms' performance on 92 3-D brain scans from healthy monozygotic and dizygotic twins; 2-D validations are also shown for corpus callosum shapes delineated at midline in the same subjects. After preliminary tests to demonstrate each method, we compared their detection power using tensor-based morphometry (TBM), a technique to analyze local volumetric differences in brain structure. We compared the accuracy of each algorithm variant using various statistical metrics derived from the images and deformation fields. All these tests were also run with a traditional fluid method, which has been widely used in TBM studies. The versions incorporating vector-based empirical statistics on brain variation were consistently more accurate than their counterparts when used for automated volumetric quantification in new brain images, suggesting the advantages of this approach for large-scale neuroimaging studies.
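The tensor-based morphometry step used for comparing detection power maps a deformation field to voxelwise Jacobian determinants (local volume-change factors). A generic sketch with an invented uniform-scaling field, not SAFIRA itself:

```python
import numpy as np

def jacobian_det_map(def_field):
    """Tensor-based morphometry sketch: given a 3-D deformation field
    of shape (X, Y, Z, 3) mapping voxels to deformed coordinates,
    compute the voxelwise Jacobian determinant.  det J > 1 indicates
    local expansion relative to the template, det J < 1 contraction."""
    # spatial gradients of each deformed-coordinate component
    grads = [np.gradient(def_field[..., i]) for i in range(3)]
    # J[..., i, j] = d u_i / d x_j at every voxel
    J = np.stack([np.stack(g, axis=-1) for g in grads], axis=-2)
    return np.linalg.det(J)

# identity-plus-uniform-scaling field on a small grid (scale 1.1 per axis)
shape = (8, 8, 8)
grid = np.stack(np.meshgrid(*[np.arange(s) for s in shape],
                            indexing="ij"), axis=-1)
field = 1.1 * grid.astype(float)
detJ = jacobian_det_map(field)
print(round(float(detJ.mean()), 3))   # 1.331 = 1.1**3, uniform expansion
```

In a TBM study, maps like `detJ` (usually log-transformed) are computed per subject and compared statistically across groups, which is why registration accuracy directly drives detection power.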