925 results for predictive coding
Abstract:
HD (Huntington's disease) is a late-onset heritable neurodegenerative disorder that is characterized by neuronal dysfunction and death, particularly in the cerebral cortex and medium spiny neurons of the striatum. This is followed by progressive chorea, dementia and emotional dysfunction, eventually resulting in death. HD is caused by an expanded CAG repeat in the first exon of the HD gene that results in an abnormally elongated polyQ (polyglutamine) tract in its protein product, Htt (Huntingtin). Wild-type Htt is largely cytoplasmic; however, in HD, proteolytic N-terminal fragments of Htt form insoluble deposits in both the cytoplasm and nucleus, provoking the idea that mutHtt (mutant Htt) causes transcriptional dysfunction. While a number of specific transcription factors and co-factors have been proposed as mediators of mutHtt toxicity, the causal relationship between these Htt/transcription factor interactions and HD pathology remains unknown. Previous work has highlighted REST [RE1 (repressor element 1)-silencing transcription factor] as one such transcription factor. REST is a master regulator of neuronal genes, repressing their expression. Many of its direct target genes are known or suspected to have a role in HD pathogenesis, including BDNF (brain-derived neurotrophic factor). Recent evidence has also shown that REST regulates transcription of regulatory miRNAs (microRNAs), many of which are known to regulate neuronal gene expression and are dysregulated in HD. Thus, repression of miRNAs constitutes a second, indirect mechanism by which REST can alter the neuronal transcriptome in HD. We will describe the evidence that disruption to the REST regulon brought about by a loss of interaction between REST and mutHtt may be a key contributory factor in the widespread dysregulation of gene expression in HD.
Abstract:
The complete sequences of the dsrA and dsrB genes, coding for the α- and β-subunits, respectively, of the sulphite reductase enzyme in Desulfovibrio desulfuricans were determined. Analyses of the amino acid sequences indicated a number of sirohaem/Fe4S4-binding consensus sequences, whilst predictive secondary structure analysis revealed a similar pattern of α-helix and β-strand structures between the two subunits, which was indicative of gene duplication.
Abstract:
There is strong evidence that neonates imitate previously unseen behaviors. These behaviors are predominantly used in social interactions, demonstrating neonates’ ability and motivation to engage with others. Research on neonatal imitation can provide a wealth of information about the early mirror neuron system (MNS): namely, its functional characteristics, its plasticity from birth, and its relation to skills later in development. Though numerous studies document the existence of neonatal imitation in the laboratory, little is known about its natural occurrence during parent-infant interactions and its plasticity as a consequence of experience. We review these critical aspects of imitation, which we argue are necessary for understanding the early action-perception system. We address common criticisms and misunderstandings about neonatal imitation and discuss methodological differences among studies. Recent work reveals that individual differences in neonatal imitation positively correlate with later social, cognitive, and motor development. We propose that such variation in neonatal imitation could reflect important individual differences of the MNS. Although postnatal experience is not necessary for imitation, we present evidence that neonatal imitation is influenced by experience in the first week of life.
Abstract:
The incidence and severity of light leaf spot epidemics caused by the ascomycete fungus Pyrenopeziza brassicae on UK oilseed rape crops are increasing. The disease is currently controlled by a combination of host resistance, cultural practices and fungicide applications. We report decreases in sensitivities of modern UK P. brassicae isolates to the azole (imidazole and triazole) class of fungicides. By cloning and sequencing the P. brassicae CYP51 (PbCYP51) gene, encoding the azole target sterol 14α-demethylase, we identified two non-synonymous mutations encoding substitutions G460S and S508T associated with reduced azole sensitivity. We confirmed the impact of the encoded PbCYP51 changes on azole sensitivity and protein activity by heterologous expression in a Saccharomyces cerevisiae mutant YUG37::erg11 carrying a controllable promoter of native CYP51 expression. In addition, we identified insertions in the predicted regulatory regions of PbCYP51 in isolates with reduced azole sensitivity. The presence of these insertions was associated with enhanced transcription of PbCYP51 in response to sub-inhibitory concentrations of the azole fungicide tebuconazole. Genetic analysis of in vitro crosses of sensitive and resistant isolates confirmed the impact of PbCYP51 alterations in coding and regulatory sequences on a reduced-sensitivity phenotype, as well as identifying a second major gene at another locus contributing to resistance in some isolates. The least sensitive field isolates carry combinations of upstream insertions and non-synonymous mutations, suggesting PbCYP51 evolution is ongoing and the progressive decline in azole sensitivity of UK P. brassicae populations will continue. The implications for the future control of light leaf spot are discussed.
Abstract:
When the sensory consequences of an action are systematically altered our brain can recalibrate the mappings between sensory cues and properties of our environment. This recalibration can be driven by both cue conflicts and altered sensory statistics, but neither mechanism offers a way for cues to be calibrated so they provide accurate information about the world, as sensory cues carry no information as to their own accuracy. Here, we explored whether sensory predictions based on internal physical models could be used to accurately calibrate visual cues to 3D surface slant. Human observers played a 3D kinematic game in which they adjusted the slant of a surface so that a moving ball would bounce off the surface and through a target hoop. In one group, the ball's bounce was manipulated so that the surface behaved as if it had a different slant to that signaled by visual cues. With experience of this altered bounce, observers recalibrated their perception of slant so that it was more consistent with the assumed laws of kinematics and physical behavior of the surface. In another group, making the ball spin in a way that could physically explain its altered bounce eliminated this pattern of recalibration. Importantly, both groups adjusted their behavior in the kinematic game in the same way, experienced the same set of slants and were not presented with low-level cue conflicts that could drive the recalibration. We conclude that observers use predictive kinematic models to accurately calibrate visual cues to 3D properties of the world.
Abstract:
Low-power medium access control (MAC) protocols used for communication between energy-constrained wireless embedded devices do not cope well with situations where transmission channels are highly erroneous. Existing MAC protocols discard corrupted messages, which leads to costly retransmissions. Transmission performance can be improved by two complementary schemes: redundant information can be added to transmitted packets so that data can be recovered from corrupted packets, and transmit/receive diversity via multiple antennas can improve the error resiliency of transmissions. Both schemes may be used in conjunction to further improve performance. In this study, the authors show how an error correction scheme and transmit/receive diversity can be integrated in low-power MAC protocols. Furthermore, the authors investigate the achievable performance gains of both methods. This is important as both methods have associated costs (processing requirements; additional antennas and power) and, for a given communication situation, it must be decided which methods should be employed. The authors' results show that, in many practical situations, error control coding outperforms transmission diversity; however, if very high reliability is required, it is useful to employ both schemes together.
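As a hedged illustration of the first idea above (adding redundancy so a receiver can repair a corrupted packet instead of requesting a retransmission), the Python sketch below uses a trivial (3,1) repetition code with bitwise majority voting. It is a stand-in for whatever error-correcting code a real low-power MAC would employ, not the authors' scheme.

```python
# Stand-in forward error correction: a (3,1) repetition code with bitwise
# majority voting. A real low-power MAC would use a stronger code; this
# only shows how redundancy lets a receiver repair a corrupted packet.

def encode_packet(payload: bytes) -> bytes:
    """Repeat every payload byte three times."""
    return bytes(b for byte in payload for b in (byte, byte, byte))

def decode_packet(received: bytes) -> bytes:
    """Recover each byte by bitwise majority vote over its three copies."""
    out = bytearray()
    for i in range(0, len(received), 3):
        a, b, c = received[i], received[i + 1], received[i + 2]
        out.append((a & b) | (a & c) | (b & c))  # bitwise 2-of-3 majority
    return bytes(out)

packet = encode_packet(b"sensor-reading")
corrupted = bytearray(packet)
corrupted[4] ^= 0xFF  # corrupt one of the three copies of one byte
assert decode_packet(bytes(corrupted)) == b"sensor-reading"
```

A stronger code (e.g. BCH or Reed-Solomon) would trade less redundancy for more processing, which is exactly the cost trade-off the study quantifies.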
Abstract:
Our digital universe is rapidly expanding: more and more daily activities are digitally recorded, data arrive in streams, need to be analyzed in real time, and may evolve over time. In the last decade many adaptive learning algorithms and prediction systems, which can automatically update themselves with newly incoming data, have been developed. The majority of those algorithms focus on improving predictive performance and assume that a model update is always desired, as soon as possible and as frequently as possible. In this study we consider a potential model update as an investment decision which, as in the financial markets, should be taken only if a certain return on investment is expected. We introduce and motivate a new research problem for data streams: cost-sensitive adaptation. We propose a reference framework for analyzing adaptation strategies in terms of costs and benefits. Our framework allows us to characterize and decompose the costs of model updates, and to assess and interpret the gains in performance due to model adaptation for a given learning algorithm on a given prediction task. Our proof-of-concept experiment demonstrates how the framework can aid in analyzing and managing adaptation decisions in the chemical industry.
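A toy rendering of the "model update as investment" idea, as a minimal Python sketch: adapt only when the estimated benefit exceeds the update cost. The cost and benefit estimates here are placeholders, not the authors' framework.

```python
# Toy cost-sensitive adaptation rule: update the model only when the
# expected return on investment is positive. All quantities are
# illustrative placeholders for the paper's cost/benefit decomposition.

def should_adapt(error_current: float, error_after_update: float,
                 update_cost: float, value_per_error_unit: float) -> bool:
    """Return True if the expected benefit of updating exceeds its cost."""
    expected_benefit = (error_current - error_after_update) * value_per_error_unit
    return expected_benefit > update_cost

# Example: a drop from 20% to 15% error, where each unit of error reduction
# is worth 1000, against an update (retraining/deployment) cost of 30.
print(should_adapt(0.20, 0.15, update_cost=30.0, value_per_error_unit=1000.0))
# -> True (expected benefit 50 > cost 30)
```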
Abstract:
This paper employs a probit and a Markov switching model using information from the Conference Board Leading Indicator and other predictor variables to forecast the signs of future rental growth in four key U.S. commercial rent series. We find that both approaches have considerable power to predict changes in the direction of commercial rents up to two years ahead, exhibiting strong improvements over a naïve model, especially for the warehouse and apartment sectors. We find that while the Markov switching model appears to be more successful, it lags behind actual turnarounds in market outcomes whereas the probit is able to detect whether rental growth will be positive or negative several quarters ahead.
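To make the probit approach concrete, here is a hypothetical Python sketch using statsmodels: a binary "rental growth positive next period?" indicator is regressed on a lagged leading-indicator value. The synthetic data and the single regressor are illustrative assumptions; the paper uses the Conference Board Leading Indicator together with other predictor variables.

```python
# Hedged sketch of a direction-of-change probit: the data are synthetic
# and the single regressor is an assumption, not the paper's model.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
indicator = rng.normal(size=200)                     # lagged leading indicator
growth_up = (indicator + rng.normal(scale=0.5, size=200) > 0).astype(int)

X = sm.add_constant(indicator)                       # intercept + predictor
model = sm.Probit(growth_up, X).fit(disp=False)

# Predicted probability that rental growth turns positive when the
# (standardized) indicator reads 0.8; the row is [intercept, indicator].
print(model.predict(np.array([[1.0, 0.8]])))
```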
Abstract:
Traditional dictionary learning algorithms find a sparse representation of high-dimensional data by transforming samples into a one-dimensional (1D) vector. This 1D model loses the inherent spatial structure of the data. An alternative solution is to employ tensor decomposition for dictionary learning on the original structural form (a tensor) by learning multiple dictionaries, one along each mode, and the corresponding sparse representation with respect to the Kronecker product of these dictionaries. To learn tensor dictionaries along each mode, all existing methods update each dictionary iteratively in an alternating manner. Because atoms from the different mode dictionaries jointly contribute to the sparsity of the tensor, treating each mode dictionary independently, as existing works do, ignores the correlations between atoms of different mode dictionaries. In this paper, we propose a joint multiple dictionary learning method for tensor sparse coding, which exploits atom correlations for sparse representation and updates multiple atoms from each mode dictionary simultaneously. In this algorithm, the Frequent-Pattern Tree (FP-tree) mining algorithm is employed to discover frequent atom patterns in the sparse representation. Inspired by the idea of K-SVD, we develop a new dictionary update method that jointly updates the elements in each pattern. Experimental results demonstrate that our method outperforms other tensor-based dictionary learning algorithms.
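The representation this line of work builds on can be shown in a few lines of numpy: a 2-mode sample X is represented via per-mode dictionaries D1 and D2 and a sparse coefficient matrix S, which is equivalent to coding vec(X) over the Kronecker product of the dictionaries. The sizes and data below are arbitrary assumptions, and the joint atom-update scheme itself is not reproduced.

```python
# Tensor sparse-coding model in 2 modes: X = D1 @ S @ D2.T with sparse S,
# equivalently vec(X) = kron(D2, D1) @ vec(S) (column-major vectorization).
import numpy as np

rng = np.random.default_rng(1)
D1 = rng.normal(size=(8, 5))   # mode-1 dictionary: 5 atoms of dimension 8
D2 = rng.normal(size=(6, 4))   # mode-2 dictionary: 4 atoms of dimension 6

S = np.zeros((5, 4))           # sparse coefficients: only two active entries
S[1, 2] = 2.0
S[3, 0] = -1.5

X = D1 @ S @ D2.T              # structured (tensor) form of the sample

# The same model in flattened form over the Kronecker-product dictionary:
x_vec = np.kron(D2, D1) @ S.reshape(-1, order="F")
assert np.allclose(x_vec, X.reshape(-1, order="F"))
```

The assert checks the standard identity vec(D1 S D2^T) = (D2 ⊗ D1) vec(S), which is why atoms from the two mode dictionaries act jointly rather than independently.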
Abstract:
Species distribution models (SDM) are increasingly used to understand the factors that regulate variation in biodiversity patterns and to help plan conservation strategies. However, these models are rarely validated with independently collected data, and it is unclear whether SDM performance is maintained across distinct habitats and for species with different functional traits. Highly mobile species, such as bees, can be particularly challenging to model. Here, we use independent sets of occurrence data collected systematically in several agricultural habitats to test how the predictive performance of SDMs for wild bee species depends on species traits, habitat type, and sampling technique. We used a species distribution modeling approach parametrized for the Netherlands, with presence records from 1990 to 2010 for 193 Dutch wild bee species. For each species, we built a Maxent model based on 13 climate and landscape variables. We tested the predictive performance of the SDMs with independent datasets collected from orchards and arable fields across the Netherlands from 2010 to 2013, using transect surveys or pan traps. Model predictive performance depended on species traits and habitat type. The occurrence of bee species specialized in habitat and diet was better predicted than that of generalist bees. Predictions of habitat suitability were also more precise for habitats that are temporally more stable (orchards) than for habitats that undergo regular alterations (arable fields), particularly for small, solitary bees. As a conservation tool, SDMs are better suited to modeling rarer, specialist species than more generalist ones, and will work best in long-term stable habitats. The variability of complex, short-term habitats is difficult to capture in such models, and historical land-use data generally have low thematic resolution. To improve the usefulness of SDMs, models require explanatory variables and collection data that include detailed landscape characteristics, for example, variability of crops and flower availability. Additionally, testing SDMs with field surveys should involve multiple collection techniques.
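A schematic of the validation workflow described above, as a hedged Python sketch: fit a distribution model on historical presence records, then score it on independently collected survey data. LogisticRegression stands in for Maxent here, and the 13 variables and all records are synthetic placeholders.

```python
# Schematic SDM validation: train on historical records, evaluate on an
# independent field survey. LogisticRegression is a stand-in for Maxent;
# all data below are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
env_train = rng.normal(size=(500, 13))    # 13 climate/landscape variables
presence_train = (env_train[:, 0] + rng.normal(scale=0.7, size=500) > 0)

model = LogisticRegression(max_iter=1000).fit(env_train, presence_train)

# Independent survey data (e.g. transects or pan traps in other habitats)
env_test = rng.normal(size=(200, 13))
presence_test = (env_test[:, 0] + rng.normal(scale=0.7, size=200) > 0)
auc = roc_auc_score(presence_test, model.predict_proba(env_test)[:, 1])
print(f"AUC on independent data: {auc:.2f}")
```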
Abstract:
Regional information on climate change is urgently needed but often deemed unreliable. To achieve credible regional climate projections, it is essential to understand underlying physical processes, reduce model biases and evaluate their impact on projections, and adequately account for internal variability. In the tropics, where atmospheric internal variability is small compared with the forced change, advancing our understanding of the coupling between long-term changes in upper-ocean temperature and the atmospheric circulation will help most to narrow the uncertainty. In the extratropics, relatively large internal variability introduces substantial uncertainty, while exacerbating risks associated with extreme events. Large ensemble simulations are essential to estimate the probabilistic distribution of climate change on regional scales. Regional models inherit atmospheric circulation uncertainty from global models and do not automatically solve the problem of regional climate change. We conclude that the current priority is to understand and reduce uncertainties on scales greater than 100 km to aid assessments at finer scales.
Abstract:
Subspace clustering groups a set of samples from a union of several linear subspaces into clusters, so that the samples in the same cluster are drawn from the same linear subspace. In the majority of the existing work on subspace clustering, clusters are built based on feature information, while sample correlations in their original spatial structure are simply ignored. Besides, the original high-dimensional feature vector contains noisy/redundant information, and the time complexity grows exponentially with the number of dimensions. To address these issues, we propose a tensor low-rank representation (TLRR) and sparse coding-based (TLRRSC) subspace clustering method that simultaneously considers feature information and spatial structures. TLRR seeks the lowest-rank representation over the original spatial structure along all spatial directions. Sparse coding learns a dictionary along the feature spaces, so that each sample can be represented by a few atoms of the learned dictionary. The affinity matrix used for spectral clustering is built from the joint similarities in both the spatial and feature spaces. TLRRSC can well capture the global structure and inherent feature information of data, and provides a robust subspace segmentation from corrupted data. Experimental results on both synthetic and real-world data sets show that TLRRSC outperforms several established state-of-the-art methods.
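As a bare-bones illustration of the final spectral-clustering stage mentioned above (not TLRRSC itself), the Python sketch below builds a symmetric affinity matrix from a simple cosine-similarity stand-in for the learned representation and feeds it to spectral clustering; the two synthetic subspaces are illustrative assumptions.

```python
# Minimal subspace-clustering pipeline: affinity matrix -> spectral
# clustering. Absolute cosine similarity stands in for the affinity that
# TLRRSC would build from its low-rank/sparse representation.
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(3)
# Synthetic data: 30 samples from each of two 2-D subspaces of R^5
basis_a, basis_b = rng.normal(size=(5, 2)), rng.normal(size=(5, 2))
X = np.vstack([rng.normal(size=(30, 2)) @ basis_a.T,
               rng.normal(size=(30, 2)) @ basis_b.T])

# Symmetric, non-negative affinity between samples
U = X / np.linalg.norm(X, axis=1, keepdims=True)
affinity = np.abs(U @ U.T)

labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(affinity)
print(labels)  # two groups, ideally matching the two subspaces
```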