976 results for Christoffel pairs
Abstract:
So far, low-probability differentials in the key schedules of block ciphers have been used as a straightforward proof of security against related-key differential analysis. To achieve resistance, it is believed that for a cipher with a k-bit key it suffices for the upper bound on the probability to be 2^-k. Surprisingly, we show that this reasonable assumption is incorrect, and that the probability should be (much) lower than 2^-k. Our counterexample is a related-key differential analysis of the well-established block cipher CLEFIA-128. We show that although the key schedule of CLEFIA-128 prevents differentials with a probability higher than 2^-128, the linear part of the key schedule that produces the round keys, together with the Feistel structure of the cipher, allows an attacker to exploit particularly chosen differentials with a probability as low as 2^-128. CLEFIA-128 has 2^14 such differentials, which translate into 2^14 pairs of weak keys. The probability of each differential is too low on its own, but the weak keys have a special structure that allows a divide-and-conquer approach to gain an advantage of 2^7 over generic analysis. We exploit this advantage to give a membership test for the weak-key class and to provide an analysis of the hashing modes. The proposed analysis has been tested with computer experiments on small-scale variants of CLEFIA-128. Our results do not threaten the practical use of CLEFIA.
Abstract:
This paper presents an enhanced algorithm for matching laser scan maps using histogram correlations. The histogram representation effectively summarizes a map's salient features such that pairs of maps can be matched efficiently without any prior guess as to their alignment. The histogram matching algorithm has been enhanced to work well in outdoor unstructured environments by using entropy metrics, weighted histograms, and proper thresholding of quality metrics. As a result, our large-scale scan-matching SLAM implementation has a vastly improved ability to close large loops in real time even when odometry is not available. Our experimental results demonstrate successful mapping of the largest area mapped to date using a single laser scanner. We also demonstrate the ability to solve the lost-robot problem by localizing a robot to a previously built map without any prior initialization.
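The core idea of histogram-based matching can be illustrated with a toy sketch: build an angle histogram for each scan and find the cyclic shift that maximizes their correlation, which recovers the relative rotation without an initial alignment guess. This is a minimal illustration only, not the authors' enhanced variant with entropy metrics, weighted histograms, and quality thresholding; all names and values here are illustrative.

```python
def circular_correlation(h1, h2):
    """Correlate two equal-length histograms at every cyclic shift."""
    n = len(h1)
    return [sum(h1[i] * h2[(i + s) % n] for i in range(n)) for s in range(n)]

def best_rotation(h1, h2, bin_width_deg):
    """Return the rotation (degrees) that best aligns histogram h2 to h1."""
    scores = circular_correlation(h1, h2)
    return scores.index(max(scores)) * bin_width_deg

# Toy data: h2 is h1 cyclically shifted by 3 bins (30 degrees at 10 deg/bin).
h1 = [0, 1, 4, 9, 4, 1, 0, 0, 0, 0, 0, 0]
h2 = h1[-3:] + h1[:-3]
print(best_rotation(h1, h2, 10))  # -> 30
```

The correlation peak sits at the shift where the two histograms overlap best, which is why no initial rotation estimate is required.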
Abstract:
Relative abundance data is common in the life sciences, but appreciation that it needs special analysis and interpretation is scarce. Correlation is popular as a statistical measure of pairwise association but should not be used on data that carry only relative information. Using timecourse yeast gene expression data, we show how correlation of relative abundances can lead to conclusions opposite to those drawn from absolute abundances, and that its value changes when different components are included in the analysis. Once all absolute information has been removed, only a subset of those associations will reliably endure in the remaining relative data, specifically, associations where pairs of values behave proportionally across observations. We propose a new statistic φ to describe the strength of proportionality between two variables and demonstrate how it can be straightforwardly used instead of correlation as the basis of familiar analyses and visualization methods.
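The abstract does not give the formula for φ, but a commonly published formulation of a proportionality statistic is var(log(x/y)) / var(log x), which is exactly 0 when y is proportional to x and grows as proportionality breaks down. A minimal sketch under that assumption:

```python
import math

def _var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def phi(x, y):
    """Proportionality statistic: var(log(x/y)) / var(log x).
    0 means y is exactly proportional to x. (One published
    formulation; the abstract itself does not give the formula.)"""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    log_ratio = [a - b for a, b in zip(lx, ly)]
    return _var(log_ratio) / _var(lx)

x = [1.0, 2.0, 4.0, 8.0]
print(phi(x, [3.0, 6.0, 12.0, 24.0]))  # proportional pair: ~0
print(phi(x, [1.0, 1.0, 1.0, 1.0]))   # unrelated pair: 1.0
```

Unlike correlation, this statistic is computed entirely from log-ratios, so it is unaffected by the unknown scaling that relative abundance data carry.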
Abstract:
This paper addresses the problem of identifying and explaining behavioral differences between two business process event logs. The paper presents a method that, given two event logs, returns a set of statements in natural language capturing behavior that is present or frequent in one log, while absent or infrequent in the other. This log delta analysis method allows users to diagnose differences between normal and deviant executions of a process or between two versions or variants of a process. The method relies on a novel approach to losslessly encode an event log as an event structure, combined with a frequency-enhanced technique for differencing pairs of event structures. A validation of the proposed method shows that it accurately diagnoses typical change patterns and can explain differences between normal and deviant cases in a real-life log, more compactly and precisely than previously proposed methods.
Abstract:
Background Little is known about the relation between vitamin D status in early life and neurodevelopment outcomes. Objective This study was designed to examine the association of cord blood 25-hydroxyvitamin D [25(OH)D] at birth with neurocognitive development in toddlers. Methods As part of the China-Anhui Birth Cohort Study, 363 mother-infant pairs with complete data were selected. Concentrations of 25(OH)D in cord blood were measured by radioimmunoassay. Mental development index (MDI) and psychomotor development index (PDI) in toddlers were assessed at age 16-18 mo by using the Bayley Scales of Infant Development. Data on maternal sociodemographic characteristics and other confounding factors were also prospectively collected. Results Toddlers in the lowest quintile of cord blood 25(OH)D exhibited a deficit of 7.60 (95% CI: -12.4, -2.82; P = 0.002) and 8.04 (95% CI: -12.9, -3.11; P = 0.001) points in the MDI and PDI scores, respectively, compared with the reference category. Unexpectedly, toddlers in the highest quintile of cord blood 25(OH)D also had a significant deficit of 12.3 (95% CI: -17.9, -6.67; P < 0.001) points in PDI scores compared with the reference category. Conclusions This prospective study suggested an inverted U-shaped relation between neonatal vitamin D status and neurocognitive development in toddlers. Additional studies on the optimal 25(OH)D concentrations in early life are needed.
Abstract:
Traditional text classification technology based on machine learning and data mining techniques has made significant progress. However, it remains difficult to draw an exact decision boundary between relevant and irrelevant objects in binary classification, because of the uncertainty produced by traditional algorithms. The proposed model, CTTC (Centroid Training for Text Classification), aims to build an uncertainty boundary that absorbs as many indeterminate objects as possible, so as to raise the certainty of the relevant and irrelevant groups through a centroid clustering and training process. The clustering starts from two training subsets, labelled relevant and irrelevant respectively, which yield two principal centroid vectors by which all the training samples are further separated into three groups: POS, NEG, and BND, with all the indeterminate objects absorbed into the uncertain decision boundary BND. Two pairs of centroid vectors are then trained and optimized through a subsequent iterative multi-learning process, and together they predict the polarities of incoming objects. For assessment, F1 and Accuracy were chosen as the key evaluation measures; we stress F1 because it reflects the overall performance of the final classifier better than Accuracy. A large number of experiments were conducted with the proposed model on the Reuters Corpus Volume 1 (RCV1), an important standard dataset in the field. The results show that the proposed model significantly improves binary text classification performance in both F1 and Accuracy compared with three other influential baseline models.
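The three-way centroid decision that CTTC describes can be sketched as follows: a document clearly closer to one centroid goes to POS or NEG, while near-ties fall into the boundary region BND. The cosine measure and the margin threshold here are illustrative assumptions; the paper's iterative multi-learning refinement of the centroids is not shown.

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def centroid(vectors):
    n = len(vectors)
    return [sum(col) / n for col in zip(*vectors)]

def three_way_label(doc, pos_c, neg_c, margin=0.1):
    """POS/NEG when one centroid is clearly closer; otherwise
    defer the object to the uncertain boundary region BND."""
    sp, sn = cosine(doc, pos_c), cosine(doc, neg_c)
    if sp - sn > margin:
        return "POS"
    if sn - sp > margin:
        return "NEG"
    return "BND"

pos_c = centroid([[1.0, 0.0], [0.9, 0.1]])
neg_c = centroid([[0.0, 1.0], [0.1, 0.9]])
print(three_way_label([1.0, 0.0], pos_c, neg_c))  # -> POS
print(three_way_label([0.5, 0.5], pos_c, neg_c))  # -> BND
```

Objects landing in BND would then be the ones targeted by the subsequent training iterations, shrinking the uncertain region over time.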
Abstract:
Advances in neural network language models have demonstrated that these models can effectively learn representations of word meaning. In this paper, we explore a variation of neural language models that learns from concepts taken from structured ontologies and extracted from free text, rather than directly from terms in free text. This model is employed for the task of measuring semantic similarity between medical concepts, a task that is central to a number of techniques in medical informatics and information retrieval. The model is built with two medical corpora (journal abstracts and patient records) and empirically validated on two ground-truth datasets of human-judged concept pairs assessed by medical professionals. Empirically, our approach correlates closely with expert human assessors (≈ 0.9) and outperforms a number of state-of-the-art benchmarks for medical semantic similarity. The demonstrated superiority of this model in providing an effective semantic similarity measure is promising, in that it may translate into effectiveness gains for techniques in medical information retrieval and medical informatics (e.g., query expansion and literature-based discovery).
Abstract:
We propose a novel technique for conducting robust voice activity detection (VAD) in high-noise recordings. We use Gaussian mixture modeling (GMM) to train two generic models: speech and non-speech. We then score smaller segments of a given (unseen) recording against each of these GMMs to obtain two respective likelihood scores for each segment. These scores are used to compute a dissimilarity measure between pairs of segments and to carry out complete-linkage clustering of the segments into speech and non-speech clusters. We compare the accuracy of our method against state-of-the-art and standardised VAD techniques to demonstrate an absolute improvement of 15% in half-total error rate (HTER) over the best-performing baseline system across the QUT-NOISE-TIMIT database. We then apply our approach to the Audio-Visual Database of American English (AVDBAE) to demonstrate the performance of our algorithm using visual, audio-visual, or a proposed fusion of these features.
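The clustering stage can be sketched with a naive complete-linkage agglomeration over per-segment scores. Here each segment is reduced to a single log-likelihood-ratio score (speech GMM minus non-speech GMM), which is an illustrative simplification of the paper's pairwise dissimilarity measure; the GMM training and scoring themselves are omitted.

```python
def complete_linkage(points, k=2):
    """Naive agglomerative clustering with complete linkage:
    repeatedly merge the two clusters whose farthest pair of
    members is closest, until k clusters remain. Returns lists
    of indices into `points`."""
    clusters = [[i] for i in range(len(points))]

    def dist(a, b):
        return max(abs(points[i] - points[j]) for i in a for j in b)

    while len(clusters) > k:
        i, j = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: dist(clusters[ij[0]], clusters[ij[1]]))
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters

# Hypothetical per-segment scores: 0-2 look speech-like, 3-4 do not.
scores = [5.0, 4.8, 5.2, -3.0, -2.9]
print(complete_linkage(scores, k=2))  # groups {0,1,2} and {3,4}
```

Splitting the recording into exactly two clusters and labelling the higher-scoring cluster as speech mirrors the speech/non-speech decision described in the abstract.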
Abstract:
The 12.7-10.5 Ma Cougar Point Tuff in southern Idaho, USA, consists of 10 large-volume (>10²-10³ km³ each), high-temperature (800-1000 °C), rhyolitic ash-flow tuffs erupted from the Bruneau-Jarbidge volcanic center of the Yellowstone hotspot. These tuffs provide evidence for compositional and thermal zonation in pre-eruptive rhyolite magma, and suggest the presence of a long-lived reservoir that was tapped by numerous large explosive eruptions. Pyroxene compositions exhibit discrete compositional modes with respect to Fe and Mg that define a linear spectrum punctuated by conspicuous gaps. Airfall glass compositions also cluster into modes, and the presence of multiple modes indicates tapping of different magma volumes during early phases of eruption. Equilibrium assemblages of pigeonite and augite are used to reconstruct compositional and thermal gradients in the pre-eruptive reservoir. The recurrence of identical compositional modes and of mineral pairs equilibrated at high temperatures in successive eruptive units is consistent with the persistence of their respective liquids in the magma reservoir. Recurrence intervals of identical modes range from 0.3 to 0.9 Myr and suggest possible magma residence times of similar duration. Eruption ages, magma temperatures, Nd isotopes, and pyroxene and glass compositions are consistent with a long-lived, dynamically evolving magma reservoir that was chemically and thermally zoned and composed of multiple discrete magma volumes.
Abstract:
The phase relations have been investigated experimentally at 200 and 500 MPa as a function of water activity for one of the least evolved (Indian Batt Rhyolite) and of a more evolved rhyolite composition (Cougar Point Tuff XV) from the 12·8-8·1 Ma Bruneau-Jarbidge eruptive center of the Yellowstone hotspot. Particular priority was given to accurate determination of the water content of the quenched glasses using infrared spectroscopic techniques. Comparison of the composition of natural and experimentally synthesized phases confirms that high temperatures (>900°C) and extremely low melt water contents (<1·5 wt % H₂O) are required to reproduce the natural mineral assemblages. In melts containing 0·5-1·5 wt % H₂O, the liquidus phase is clinopyroxene (excluding Fe-Ti oxides, which are strongly dependent on fO₂), and the liquidus temperature of the more evolved Cougar Point Tuff sample (BJR; 940-1000°C) is at least 30°C lower than that of the Indian Batt Rhyolite lava sample (IBR2; 970-1030°C). For the composition BJR, the comparison of the compositions of the natural and experimental glasses indicates a pre-eruptive temperature of at least 900°C. The composition of clinopyroxene and pigeonite pairs can be reproduced only for water contents below 1·5 wt % H₂O at 900°C, or lower water contents if the temperature is higher. For the composition IBR2, a minimum temperature of 920°C is necessary to reproduce the main phases at 200 and 500 MPa. At 200 MPa, the pre-eruptive water content of the melt is constrained in the range 0·7-1·3 wt % at 950°C and 0·3-1·0 wt % at 1000°C. At 500 MPa, the pre-eruptive temperatures are slightly higher (by 30-50°C) for the same ranges of water concentration. The experimental results are used to explore possible proxies to constrain the depth of magma storage. The crystallization sequence of tectosilicates is strongly dependent on pressure between 200 and 500 MPa. 
In addition, the normative Qtz-Ab-Or contents of glasses quenched from melts coexisting with quartz, sanidine and plagioclase depend on pressure and melt water content, assuming that the normative Qtz and Ab/Or content of such melts is mainly dependent on pressure and water activity, respectively. The combination of results from the phase equilibria and from the composition of glasses indicates that the depth of magma storage for the IBR2 and BJR compositions may be in the range 300-400 MPa (13 km) and 200-300 MPa (10 km), respectively.
Abstract:
Because brain structure and function are affected in neurological and psychiatric disorders, it is important to disentangle the sources of variation in these phenotypes. Over the past 15 years, twin studies have found evidence for both genetic and environmental influences on neuroimaging phenotypes, but considerable variation across studies makes it difficult to draw clear conclusions about the relative magnitude of these influences. Here we performed the first meta-analysis of structural MRI data from 48 studies on >1,250 twin pairs, and diffusion tensor imaging data from 10 studies on 444 twin pairs. The proportion of total variance accounted for by genes (A), shared environment (C), and unshared environment (E) was calculated by averaging A, C, and E estimates across studies from independent twin cohorts, weighting by sample size. The results indicated that additive genetic estimates were significantly different from zero for all meta-analyzed phenotypes, with the exception of fractional anisotropy (FA) of the callosal splenium, and cortical thickness (CT) of the uncus, left parahippocampal gyrus, and insula. For many phenotypes there was also a significant influence of C. We now have good estimates of heritability for many regional and lobar CT measures, in addition to the global volumes. Confidence intervals are wide and the number of individuals is small for many of the other phenotypes. In conclusion, while our meta-analysis shows that imaging measures are strongly influenced by genes, and that novel phenotypes such as CT measures, FA measures, and brain activation measures look especially promising, replication across independent samples and demographic groups is necessary.
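The pooling step described in the abstract (averaging A, C, and E estimates across cohorts, weighted by sample size) amounts to a simple weighted mean. A sketch with made-up numbers:

```python
def weighted_ace(estimates):
    """Pool (A, C, E) variance-component estimates across studies,
    weighting each study by its sample size. `estimates` is a list
    of (n, A, C, E) tuples; the numbers below are illustrative, not
    values from the meta-analysis."""
    total_n = sum(n for n, _, _, _ in estimates)
    A = sum(n * a for n, a, _, _ in estimates) / total_n
    C = sum(n * c for n, _, c, _ in estimates) / total_n
    E = sum(n * e for n, _, _, e in estimates) / total_n
    return A, C, E

studies = [(100, 0.8, 0.1, 0.1), (300, 0.6, 0.2, 0.2)]
print(weighted_ace(studies))  # larger study pulls the pooled A toward 0.6
```

Weighting by sample size means a large cohort dominates the pooled estimate, which is why wide confidence intervals remain for phenotypes measured only in small samples.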
Abstract:
Working memory-related brain activation has been widely studied, and impaired activation patterns have been reported for several psychiatric disorders. We investigated whether variation in N-back working memory brain activation is genetically influenced in 60 twin pairs (29 monozygotic (MZ), 31 dizygotic (DZ); mean age 24.4 ± 1.7 SD). Task-related brain response (BOLD percent signal difference of 2-back minus 0-back) was measured in three regions of interest. Although statistical power was low due to the small sample size, for the middle frontal gyrus, angular gyrus, and supramarginal gyrus the MZ correlations were, in general, approximately twice those of the DZ pairs, with non-significant heritability estimates (14-30%) in the low-to-moderate range. Task performance was strongly influenced by genes (57-73%) and highly correlated with cognitive ability (0.44-0.55). This study, which will be expanded over the next 3 years, provides the first support that individual variation in working memory-related brain activation is to some extent influenced by genes.
Abstract:
We incorporated a new Riemannian fluid registration algorithm into a general MRI analysis method called tensor-based morphometry to map the heritability of brain morphology in MR images from 23 monozygotic and 23 dizygotic twin pairs. All 92 3D scans were fluidly registered to a common template. Voxelwise Jacobian determinants were computed from the deformation fields to assess local volumetric differences across subjects. Heritability maps were computed from the intraclass correlations and their significance was assessed using voxelwise permutation tests. Lobar volume heritability was also studied using the ACE genetic model. The performance of this Riemannian algorithm was compared to a more standard fluid registration algorithm: 3D maps from both registration techniques displayed similar heritability patterns throughout the brain. Power improvements were quantified by comparing the cumulative distribution functions of the p-values generated from both competing methods. The Riemannian algorithm outperformed the standard fluid registration.
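Heritability from MZ and DZ intraclass correlations is classically estimated with Falconer's formula. The abstract does not spell out its estimator (and also fits the full ACE model for lobar volumes), so this is a standard-textbook sketch rather than the paper's exact computation:

```python
def falconer_heritability(r_mz, r_dz):
    """Falconer's classic decomposition from twin intraclass
    correlations: additive genetics h2 = 2*(rMZ - rDZ),
    shared environment c2 = 2*rDZ - rMZ,
    unique environment e2 = 1 - rMZ."""
    h2 = 2 * (r_mz - r_dz)
    c2 = 2 * r_dz - r_mz
    e2 = 1 - r_mz
    return h2, c2, e2

# Illustrative voxel: MZ pairs correlate 0.8, DZ pairs 0.5.
print(falconer_heritability(0.8, 0.5))  # -> roughly (0.6, 0.2, 0.2)
```

Applying this voxelwise to the Jacobian-determinant maps yields a heritability map of the kind the abstract describes, with significance then assessed by permutation.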
Abstract:
Genetic and environmental factors influence brain structure and function profoundly. The search for heritable anatomical features and their influencing genes would be accelerated with detailed 3D maps showing the degree to which brain morphometry is genetically determined. As part of an MRI study that will scan 1150 twins, we applied Tensor-Based Morphometry to compute morphometric differences in 23 pairs of identical twins and 23 pairs of same-sex fraternal twins (mean age: 23.8 ± 1.8 SD years). All 92 twins' 3D brain MRI scans were nonlinearly registered to a common space using a Riemannian fluid-based warping approach to compute volumetric differences across subjects. A multi-template method was used to improve volume quantification. Vector fields driving each subject's anatomy onto the common template were analyzed to create maps of local volumetric excesses and deficits relative to the standard template. Using a new structural equation modeling method, we computed the voxelwise proportion of variance in volumes attributable to additive (A) or dominant (D) genetic factors versus shared environmental (C) or unique environmental factors (E). The method was also applied to various anatomical regions of interest (ROIs). As hypothesized, the overall volumes of the brain, basal ganglia, thalamus, and each lobe were under strong genetic control; local white matter volumes were mostly controlled by common environment. After adjusting for individual differences in overall brain scale, genetic influences were still relatively high in the corpus callosum and in early-maturing brain regions such as the occipital lobes, while environmental influences were greater in frontal brain regions that have a more protracted maturational time-course.
Abstract:
Genetic correlation (rg) analysis determines how much of the correlation between two measures is due to common genetic influences. In an analysis of 4 Tesla diffusion tensor images (DTI) from 531 healthy young adult twins and their siblings, we generalized the concept of genetic correlation to determine common genetic influences on white matter integrity, measured by fractional anisotropy (FA), at all points of the brain, yielding an NxN genetic correlation matrix rg(x,y) between FA values at all pairs of voxels in the brain. With hierarchical clustering, we identified brain regions with relatively homogeneous genetic determinants, to boost the power to identify causal single nucleotide polymorphisms (SNP). We applied genome-wide association (GWA) to assess associations between 529,497 SNPs and FA in clusters defined by hubs of the clustered genetic correlation matrix. We identified a network of genes, with a scale-free topology, that influences white matter integrity over multiple brain regions.