892 results for Sparse unmixing
Abstract:
A 77-year-old man with an 8-year progressive language deterioration in the face of grossly intact memory was followed. No acute or chronic physiological or psychological event was associated with symptom onset. CT revealed a small left basal ganglia infarct. MRI registered mild atrophy, no lacunar infarcts, and mild diffuse periventricular changes. Gait normal but slow. Speech hesitant and sparse. Affect euthymic; neurobehavioral disturbance absent. MMSE 26/30; clock incorrect, concrete. Neuropsychological testing revealed simple attention intact; complex attention and processing speed impaired. Visuospatial copying and delayed recall of copy average, with some perseveration. Apraxia absent. Recall mildly impaired. Mild deficits in planning and organization apparent. Patient severely aphasic and dysarthric, without paraphasias. Repetition of automatic speech and recitation moderately impaired; prosody intact. Understanding of written language and nonverbal communication abilities intact. Frontal release signs developed over the last 12 months. Repeated cognitive testing revealed mild deterioration across all domains, with a significant further decrease in expressive and receptive language. Neurobehavioral changes remain absent to date; he remains interested, engaged, and independent in basic ADLs. Speech has completely deteriorated; gait and movements are appreciably slowed. Although signs of frontal/executive dysfunction are present, the lack of behavioral abnormalities, psychiatric disturbance, or personality change argues against focal or progressive frontal impairment or dementia. The relative intactness of memory and comprehension argues against Alzheimer's disease. The lack of findings on neuroimaging argues against CVA or tumor. It is possible that the small basal ganglia infarct has resulted in a mild lateral prefrontal syndrome. However, the absence of depression, as well as the relatively circumscribed language problem, suggests otherwise. The progressive, severe nature of the language impairments, with relatively minor impairments in attention and memory, argues for a possible diagnosis of primary progressive aphasia.
Abstract:
The performance of Gallager's error-correcting code is investigated via methods of statistical physics. In this method, the transmitted codeword comprises products of the original message bits selected by two randomly-constructed sparse matrices; the number of non-zero elements per row/column of these matrices defines a family of codes. We show that Shannon's channel capacity is saturated for many of the codes, while slightly lower performance is obtained for others, which may be of higher practical relevance. Decoding aspects are considered by employing the TAP approach, which is identical to the commonly used belief-propagation-based decoding.
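A minimal sketch of the construction described above, in Python/NumPy (illustrative only, not the authors' code): the transmitted word t is formed from mod-2 combinations of the message bits s selected by two randomly generated sparse matrices A and B, so that B t = A s (mod 2). The matrix sizes, the sparsity, and the lower-triangular trick used to keep B invertible are assumptions made for this toy example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_binary_matrix(rows, cols, ones_per_row, rng):
    """Random sparse binary matrix with a fixed number of 1s per row."""
    M = np.zeros((rows, cols), dtype=np.uint8)
    for r in range(rows):
        M[r, rng.choice(cols, size=ones_per_row, replace=False)] = 1
    return M

def gf2_inverse(B):
    """Invert a binary matrix over GF(2) by Gauss-Jordan elimination
    (assumes B is invertible, which the construction below guarantees)."""
    n = B.shape[0]
    W = np.concatenate([B % 2, np.eye(n, dtype=np.uint8)], axis=1)
    for col in range(n):
        pivot = next(r for r in range(col, n) if W[r, col])
        W[[col, pivot]] = W[[pivot, col]]
        for r in range(n):
            if r != col and W[r, col]:
                W[r] ^= W[col]
    return W[:, n:]

K, N = 4, 8                                       # toy message/codeword lengths
s = rng.integers(0, 2, size=K, dtype=np.uint8)    # original message bits

A = sparse_binary_matrix(N, K, ones_per_row=2, rng=rng)               # sparse N x K
B = np.tril(sparse_binary_matrix(N, N, ones_per_row=2, rng=rng), -1)  # sparse N x N
np.fill_diagonal(B, 1)                            # unit diagonal, hence invertible over GF(2)

t = gf2_inverse(B) @ (A @ s) % 2                  # codeword satisfying B t = A s (mod 2)
print("message :", s)
print("codeword:", t)
```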
Abstract:
We employ the methods presented in the previous chapter for decoding corrupted codewords, encoded using sparse parity check error correcting codes. We show the similarity between the equations derived from the TAP approach and those obtained from belief propagation, and examine their performance as practical decoding methods.
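The decoding side can be illustrated with a compact belief-propagation (sum-product) decoder, the algorithm the chapter shows to coincide with the TAP equations. The sketch below (Python/NumPy, illustrative only) decodes channel log-likelihood ratios on a toy parity-check matrix; the matrix, the crossover probability, and the single flipped bit are assumptions chosen for the example.

```python
import numpy as np

def bp_decode(H, llr_channel, max_iters=50):
    """Sum-product (belief-propagation) decoding of a binary parity-check code.
    H holds 0/1 entries; llr_channel holds per-bit log-likelihood ratios,
    with positive values favouring bit 0."""
    M, N = H.shape
    checks = [np.flatnonzero(H[a]) for a in range(M)]     # bits involved in check a
    neighb = [np.flatnonzero(H[:, i]) for i in range(N)]  # checks touching bit i
    msg_v2c = {(i, a): llr_channel[i] for i in range(N) for a in neighb[i]}
    msg_c2v = {(a, i): 0.0 for a in range(M) for i in checks[a]}

    for _ in range(max_iters):
        for a in range(M):                                # check-to-variable messages
            for i in checks[a]:
                prod = 1.0
                for j in checks[a]:
                    if j != i:
                        prod *= np.tanh(0.5 * msg_v2c[(j, a)])
                msg_c2v[(a, i)] = 2.0 * np.arctanh(np.clip(prod, -0.999999, 0.999999))
        llr_post = llr_channel.astype(float)
        for i in range(N):                                # variable-to-check messages
            total = sum(msg_c2v[(a, i)] for a in neighb[i])
            llr_post[i] += total
            for a in neighb[i]:
                msg_v2c[(i, a)] = llr_channel[i] + total - msg_c2v[(a, i)]
        x_hat = (llr_post < 0).astype(np.uint8)
        if not np.any(H @ x_hat % 2):                     # all parity checks satisfied
            break
    return x_hat

# toy example: all-zero codeword sent over a binary symmetric channel, one bit flipped
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]], dtype=np.uint8)
received = np.array([0, 0, 1, 0, 0, 0])
p = 0.1                                                   # assumed crossover probability
llr = (1 - 2.0 * received) * np.log((1 - p) / p)          # BSC log-likelihood ratios
print(bp_decode(H, llr))                                  # expected: all zeros
```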
Abstract:
A variation of low-density parity check (LDPC) error-correcting codes defined over Galois fields (GF(q)) is investigated using statistical physics. A code of this type is characterised by a sparse random parity check matrix composed of C non-zero elements per column. We examine the dependence of the code performance on the value of q, for finite and infinite C values, both in terms of the thermodynamical transition point and the practical decoding phase characterised by the existence of a unique (ferromagnetic) solution. We find different q-dependence in the cases of C = 2 and C ≥ 3; the analytical solutions are in agreement with simulation results, providing a quantitative measure of the improvement in performance obtained using non-binary alphabets.
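As a small illustration of the ensemble studied here (Python/NumPy sketch; q, C, and the matrix size are arbitrary choices for the demo), a parity-check matrix over GF(q) with prime q can be built column by column with exactly C non-zero entries drawn from {1, ..., q-1}; a vector is a codeword when its syndrome vanishes mod q.

```python
import numpy as np

rng = np.random.default_rng(1)

def gfq_parity_check(M, N, C, q, rng):
    """Sparse parity-check matrix over GF(q) with exactly C non-zero
    entries per column (q assumed prime, so arithmetic is mod q)."""
    H = np.zeros((M, N), dtype=np.int64)
    for col in range(N):
        rows = rng.choice(M, size=C, replace=False)
        H[rows, col] = rng.integers(1, q, size=C)   # non-zero GF(q) values
    return H

q, C, M, N = 5, 3, 6, 12                  # illustrative parameters only
H = gfq_parity_check(M, N, C, q, rng)
candidate = rng.integers(0, q, size=N)    # a random word over GF(q)
syndrome = H @ candidate % q              # zero syndrome <=> valid codeword
print(H)
print("syndrome:", syndrome)
```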
Abstract:
We employ the methods of statistical physics to study the performance of Gallager type error-correcting codes. In this approach, the transmitted codeword comprises Boolean sums of the original message bits selected by two randomly-constructed sparse matrices. We show that a broad range of these codes potentially saturate Shannon's bound but are limited due to the decoding dynamics used. Other codes show sub-optimal performance but are not restricted by the decoding dynamics. We show how these codes may also be employed as a practical public-key cryptosystem and offer performance competitive with modern cryptographic methods.
Abstract:
We study online approximations to Gaussian process models for spatially distributed systems. We apply our method to the prediction of wind fields over the ocean surface from scatterometer data. Our approach combines a sequential update of a Gaussian approximation to the posterior with a sparse representation that allows us to treat problems with a large number of observations.
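The flavour of the method can be conveyed by a simplified sketch (Python/NumPy, not the authors' algorithm): the posterior over function values at a fixed set of basis points is updated with one observation at a time by a rank-one Gaussian update, a fixed-basis stand-in for the sparse sequential scheme described above. The class name, the RBF kernel, the basis grid, and the noise level are all assumptions for the toy demo, and the predictive variance shown accounts only for the uncertainty in the basis values.

```python
import numpy as np

def rbf(X1, X2, lengthscale=1.0, variance=1.0):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    d = X1[:, None] - X2[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

class SparseOnlineGP:
    """Sequential Gaussian approximation on a fixed set of basis (inducing)
    points Z; a simplified stand-in for the sparse online scheme above."""

    def __init__(self, Z, noise_var=0.1):
        self.Z = Z
        self.noise_var = noise_var
        Kzz = rbf(Z, Z) + 1e-8 * np.eye(len(Z))
        self.Kzz_inv = np.linalg.inv(Kzz)
        self.m = np.zeros(len(Z))   # posterior mean of the basis values
        self.S = Kzz.copy()         # posterior covariance of the basis values

    def update(self, x, y):
        """Assimilate a single observation (x, y) by a rank-one Gaussian update."""
        a = self.Kzz_inv @ rbf(self.Z, np.array([x]))[:, 0]
        Sa = self.S @ a
        denom = self.noise_var + a @ Sa
        self.m += Sa * (y - a @ self.m) / denom
        self.S -= np.outer(Sa, Sa) / denom

    def predict(self, Xstar):
        """Approximate predictive mean and variance at new inputs."""
        A = rbf(Xstar, self.Z) @ self.Kzz_inv
        mean = A @ self.m
        var = np.einsum('ij,jk,ik->i', A, self.S, A)
        return mean, var

# toy demo with synthetic 1-D data (illustrative only)
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=200)
y = np.sin(X) + 0.3 * rng.standard_normal(200)
gp = SparseOnlineGP(Z=np.linspace(-3, 3, 15))
for xi, yi in zip(X, y):            # observations arrive one at a time
    gp.update(xi, yi)
mean, var = gp.predict(np.array([0.0, 1.5]))
print(mean, var)
```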
Abstract:
We study the performance of Low Density Parity Check (LDPC) error-correcting codes using the methods of statistical physics. LDPC codes are based on the generation of codewords using Boolean sums of the original message bits by employing two randomly-constructed sparse matrices. These codes can be mapped onto Ising spin models and studied using common methods of statistical physics. We examine various regular constructions and obtain insight into their theoretical and practical limitations. We also briefly report on results obtained for irregular code constructions, for codes with non-binary alphabet, and on how a finite system size affects the error probability.
Abstract:
Modern digital communication systems achieve reliable transmission by employing error-correction techniques that add redundancy. Low-density parity-check codes work along the principles of the Hamming code, but the parity-check matrix is very sparse and multiple errors can be corrected. The sparseness of the matrix allows the decoding process to be carried out by probability propagation methods similar to those employed in Turbo codes. The relation between spin systems in statistical physics and digital error correcting codes is based on the existence of a simple isomorphism between the additive Boolean group and the multiplicative binary group. Shannon proved general results on the natural limits of compression and error-correction by setting up the framework known as information theory. Error-correction codes are based on mapping the original space of words onto a higher-dimensional space in such a way that the typical distance between encoded words increases.
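The isomorphism mentioned above is the map x to (-1)^x, which sends the XOR of bits to the multiplication of +/-1 spins; a short check (Python, illustrative) makes it explicit.

```python
from itertools import product

# x -> (-1)**x maps the additive Boolean group ({0,1}, XOR)
# onto the multiplicative binary group ({+1,-1}, *)
for x, y in product((0, 1), repeat=2):
    assert (-1) ** (x ^ y) == (-1) ** x * (-1) ** y
print("(-1)**(x XOR y) equals (-1)**x * (-1)**y for all bit pairs")
```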
Abstract:
The replica method, developed in statistical physics, is employed in conjunction with Gallager's methodology to accurately evaluate zero error noise thresholds for Gallager code ensembles. Our approach generally provides more optimistic evaluations than those reported in the information theory literature for sparse matrices; the difference vanishes as the parity check matrix becomes dense.
Abstract:
In many Environmental Information Systems the actual observations arise from a discrete monitoring network which might be rather heterogeneous in both location and types of measurements made. In this paper we describe the architecture and infrastructure for a system, developed as part of the EU FP6 funded INTAMAP project, to provide a service oriented solution that allows the construction of an interoperable, automatic interpolation system. This system will be based on the Open Geospatial Consortium’s Web Feature Service (WFS) standard. The essence of our approach is to extend the GML3.1 observation feature to include information about the sensor using SensorML, and to further extend this to incorporate observation error characteristics. Our extended WFS will accept observations, and will store them in a database. The observations will be passed to our R-based interpolation server, which will use a range of methods, including a novel sparse, sequential kriging method (only briefly described here) to produce an internal representation of the interpolated field resulting from the observations currently uploaded to the system. The extended WFS will then accept queries, such as ‘What is the probability distribution of the desired variable at a given point?’, ‘What is the mean value over a given region?’, or ‘What is the probability of exceeding a certain threshold at a given location?’. To support information-rich transfer of complex and uncertain predictions we are developing schema to represent probabilistic results in a GML3.1 (object-property) style. The system will also offer more easily accessible Web Map Service and Web Coverage Service interfaces to allow users to access the system at the level of complexity they require for their specific application. Such a system will offer a very valuable contribution to the next generation of Environmental Information Systems in the context of real time mapping for monitoring and security, particularly for systems that employ a service oriented architecture.
Abstract:
Problem: The vast majority of research examining the interplay between aggressive emotions, beliefs, behaviors, cognitions, and situational contingencies in competitive athletes has focused on Western populations and only select sports (e.g., ice hockey). Research involving Eastern, particularly Chinese, athletes is surprisingly sparse given the sheer size of these populations. Thus, this study examines the aggressive emotions, beliefs, behaviors, and cognitions of competitive Chinese athletes. Method: Several measures related to aggression were distributed to a large sample (N = 471) of male athletes, representing four sports (basketball, rugby union, association football/soccer, and squash). Results: Higher levels of anger and aggression tended to be associated with higher levels of play for rugby and low levels of play for contact (e.g., football, basketball) and individual sports (e.g., squash). Conclusions: The results suggest that the experience of angry emotions and aggressive behaviors of Chinese athletes is similar to that of Western populations, but that sport psychology practitioners should be aware of some potentially important differences, such as the general tendency of Chinese athletes to disapprove of aggressive behavior.
Abstract:
The rapidly increasing demand for cellular telephony is placing greater demand on the limited bandwidth resources available. This research is concerned with techniques which enhance the capacity of a Direct-Sequence Code-Division-Multiple-Access (DS-CDMA) mobile telephone network. The capacity of both Private Mobile Radio (PMR) and cellular networks is derived, and the many techniques which are currently available are reviewed. Areas which may be further investigated are identified. One technique which is developed is the sectorisation of a cell into toroidal rings. This is shown to provide an increased system capacity when the cell is split into these concentric rings, and this is compared with cell clustering and other sectorisation schemes. Another technique for increasing the capacity is achieved by adding to the amount of inherent randomness within the transmitted signal so that the system is better able to extract the wanted signal. A system model has been produced for a cellular DS-CDMA network and the results are presented for two possible strategies. One of these strategies is the variation of the chip duration over a signal bit period. Several different variation functions are tried, and a sinusoidal function is shown to provide the greatest increase in the maximum number of system users for any given signal-to-noise ratio. The other strategy considered is the use of additive amplitude modulation together with data/chip phase-shift-keying. The amplitude variations are determined by a sparse code so that the average system power is held near its nominal level. This strategy is shown to provide no further capacity since the system is sensitive to amplitude variations. When both strategies are employed, however, the sensitivity to amplitude variations is shown to reduce, thus indicating that the first strategy increases both the capacity and the ability to handle fluctuations in the received signal power.
Abstract:
Methods for understanding classical disordered spin systems with interactions conforming to some idealized graphical structure are well developed. The equilibrium properties of the Sherrington-Kirkpatrick model, which has a densely connected structure, have become well understood. Many features generalize to sparse Erdős-Rényi graph structures above the percolation threshold and to Bethe lattices when appropriate boundary conditions apply. In this paper, we consider spin states subject to a combination of sparse strong interactions with weak dense interactions, which we term a composite model. The equilibrium properties are examined through the replica method, with exact analysis of the high-temperature paramagnetic, spin-glass, and ferromagnetic phases by perturbative schemes. We present results of replica symmetric variational approximations, where perturbative approaches fail at lower temperature. Results demonstrate re-entrant behaviors from spin-glass to ferromagnetic phases as temperature is lowered, including transitions from replica symmetry broken to replica symmetric phases. The nature of the high-temperature transitions is found to be sensitive to the connectivity profile in the sparse subgraph: with regular connectivity, a discontinuous transition from the paramagnetic to the ferromagnetic phase is apparent.
Abstract:
Colouring sparse graphs under various restrictions is a theoretical problem of significant practical relevance. Here we consider the problem of maximizing the number of different colours available at the nodes and their neighbourhoods, given a predetermined number of colours. In the analytical framework of a tree approximation, carried out at both zero and finite temperatures, solutions obtained by population dynamics give rise to estimates of the threshold connectivity for the incomplete to complete transition, which are consistent with those of existing algorithms. The nature of the transition as well as the validity of the tree approximation are investigated.
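A naive finite-temperature Monte Carlo search on the same objective (Python/NumPy sketch; a direct Metropolis stand-in, not the population-dynamics/cavity analysis used in the paper) illustrates the quantity being maximised: the number of distinct colours seen by each node in its closed neighbourhood, given a predetermined number of colours q. Graph size, mean degree, q, and the inverse temperature are arbitrary choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_sparse_graph(n, mean_degree, rng):
    """Erdos-Renyi random graph as an adjacency list."""
    p = mean_degree / (n - 1)
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].append(j)
                adj[j].append(i)
    return adj

def neighbourhood_diversity(colours, adj, i):
    """Number of distinct colours seen at node i and its neighbours."""
    return len({colours[i], *[colours[j] for j in adj[i]]})

def total_diversity(colours, adj):
    return sum(neighbourhood_diversity(colours, adj, i) for i in range(len(adj)))

def metropolis(adj, q, beta=2.0, sweeps=200, rng=rng):
    """Finite-temperature Metropolis search maximizing neighbourhood diversity."""
    n = len(adj)
    colours = rng.integers(0, q, size=n)
    for _ in range(sweeps * n):
        i = rng.integers(n)
        new = rng.integers(q)
        affected = [i] + adj[i]          # only these nodes' scores can change
        before = sum(neighbourhood_diversity(colours, adj, k) for k in affected)
        old = colours[i]
        colours[i] = new
        after = sum(neighbourhood_diversity(colours, adj, k) for k in affected)
        if after < before and rng.random() >= np.exp(beta * (after - before)):
            colours[i] = old             # reject the move
    return colours

adj = random_sparse_graph(n=200, mean_degree=4, rng=rng)
colours = metropolis(adj, q=4)
print("average neighbourhood diversity:", total_diversity(colours, adj) / len(adj))
```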
Abstract:
Sparse code division multiple access (CDMA), a variation on the standard CDMA method in which the spreading (signature) matrix contains only a relatively small number of nonzero elements, is presented and analysed using methods of statistical physics. The analysis provides results on the performance of maximum likelihood decoding for sparse spreading codes in the large system limit. We present results for both cases of regular and irregular spreading matrices for the binary additive white Gaussian noise channel (BIAWGN) with a comparison to the canonical (dense) random spreading code. © 2007 IOP Publishing Ltd.
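A toy Python/NumPy sketch of the set-up analysed here (illustrative only): a sparse signature matrix with a few non-zero chips per user, a binary-input AWGN channel, and brute-force maximum likelihood detection, which is feasible only at toy sizes; the paper's analysis concerns the large system limit. The numbers of users, chips, non-zeros per signature, and the noise level are assumptions for the example.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)

K, N = 6, 12            # users, chips (toy sizes)
nonzeros_per_user = 3   # sparsity of each signature (column)
sigma = 0.5             # AWGN standard deviation

# sparse spreading (signature) matrix: a few +/-1 chips per user, zero elsewhere
S = np.zeros((N, K))
for k in range(K):
    chips = rng.choice(N, size=nonzeros_per_user, replace=False)
    S[chips, k] = rng.choice([-1.0, 1.0], size=nonzeros_per_user)
S /= np.sqrt(nonzeros_per_user)          # unit-energy signatures

b = rng.choice([-1, 1], size=K)          # transmitted BPSK symbols
y = S @ b + sigma * rng.standard_normal(N)

# maximum likelihood detection by exhaustive search over all 2^K symbol vectors
candidates = np.array(list(product([-1, 1], repeat=K)))
distances = np.linalg.norm(y[None, :] - candidates @ S.T, axis=1)
b_hat = candidates[np.argmin(distances)]
print("sent   :", b)
print("decoded:", b_hat)
```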