623 results for Nonverbal Decoding
Abstract:
Terrigenous sediment supply, marine transport, and depositional processes along tectonically active margins are key to decoding turbidite successions as potential archives of climatic and seismic forcings. Sequence stratigraphic models predict coarse-grained sediment delivery to deep-marine sites mainly during sea-level fall and lowstand. Marine siliciclastic deposition during transgressions and highstands has been attributed to sustained connectivity between terrigenous sources and marine sinks facilitated by narrow shelves. To decipher the controls on Holocene highstand turbidite deposition, we analyzed 12 sediment cores from spatially discrete, coeval turbidite systems along the Chile margin (29°–40°S) with differing climatic and geomorphic characteristics but uniform changes in sea level. Sediment cores from intraslope basins in north-central Chile (29°–33°S) offshore a narrow to absent shelf record a shut-off of turbidite deposition during the Holocene due to postglacial aridification. In contrast, core sites in south-central Chile (36°–40°S) offshore a wide shelf record frequent turbidite deposition during highstand conditions. Two core sites are linked to the Biobío river-canyon system and receive sediment directly from the river mouth. However, intraslope basins are not connected via canyons to fluvial systems but yield even higher turbidite frequencies. High sediment supply combined with a wide shelf and an undercurrent moving sediment toward the shelf edge appear to control Holocene turbidite sedimentation and distribution. Shelf undercurrents may play an important role in lateral sediment transport and supply to the deep sea and need to be accounted for in sediment-mass balances.
Abstract:
Thesis (Master's)--University of Washington, 2016-06
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
Genetic algorithms (GAs) are known to locate the globally optimal solution provided a sufficient population size and/or number of generations is used. In practice, a near-optimal, satisfactory result can be found by GAs within a limited number of generations. In wireless communications, exhaustive searching is widely applied in techniques such as maximum likelihood decoding (MLD) and distance spectrum (DS) analysis. For multiple-input multiple-output (MIMO) communication systems, the complexity of exhaustive searching in the MLD or DS technique is exponential in the number of transmit antennas and the size of the signal constellation. If a large number of antennas and large signal constellations, e.g. PSK and QAM, are employed in a MIMO system, exhaustive searching becomes impractical and time-consuming. In this paper, GAs are applied to the MLD and DS techniques to provide near-optimal performance with reduced computational complexity for MIMO systems. Two different GA-based efficient searching approaches are proposed, one for the MLD technique and one for the DS technique. The first approach is based on a GA with a sharing-function method, which is employed to locate the multiple solutions of the distance spectrum for Space-Time Trellis Coded Orthogonal Frequency Division Multiplexing (STTC-OFDM) systems. The second approach is a GA-based MLD that attempts to find the constellation point closest to the transmitted signal; it can return a satisfactory result when a good initial signal vector is provided to the GA. Simulation results show that the proposed GA-based efficient searching approaches achieve near-optimal performance with a lower searching complexity compared with the original MLD and DS techniques for MIMO systems.
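To make the idea concrete, the following is a minimal sketch (not the paper's exact algorithm) of GA-based maximum likelihood detection over a flat-fading MIMO channel: candidate transmit vectors are evolved to minimize ||y − Hx||² instead of exhaustively searching all constellation^Nt possibilities. Antenna counts, population size, and mutation rate are illustrative assumptions.

```python
# Sketch of GA-based ML detection for a MIMO system (illustrative parameters).
import numpy as np

rng = np.random.default_rng(0)

NT, NR = 4, 4                      # transmit / receive antennas (hypothetical)
QPSK = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
POP, GENS, PMUT = 60, 40, 0.05     # GA parameters (hypothetical)

def fitness(pop_idx, H, y):
    """Negative squared Euclidean distance ||y - H x||^2 per candidate."""
    X = QPSK[pop_idx]                          # (POP, NT) symbol vectors
    r = y[None, :] - X @ H.T                   # residuals, shape (POP, NR)
    return -np.sum(np.abs(r) ** 2, axis=1)

def ga_mld(H, y):
    pop = rng.integers(0, len(QPSK), size=(POP, NT))        # random start
    for _ in range(GENS):
        fit = fitness(pop, H, y)
        # tournament selection: keep the fitter of two random candidates
        a, b = rng.integers(0, POP, (2, POP))
        parents = np.where((fit[a] > fit[b])[:, None], pop[a], pop[b])
        # single-point crossover between candidate i and candidate POP-1-i
        cut = rng.integers(1, NT, POP)
        children = parents.copy()
        mask = np.arange(NT)[None, :] >= cut[:, None]
        children[mask] = parents[::-1][mask]
        # mutation: randomly replace a few symbols
        mut = rng.random((POP, NT)) < PMUT
        children[mut] = rng.integers(0, len(QPSK), mut.sum())
        # elitism: carry the best candidate of the current generation forward
        children[0] = pop[np.argmax(fit)]
        pop = children
    fit = fitness(pop, H, y)
    return QPSK[pop[np.argmax(fit)]]

# usage: flat-fading channel, QPSK transmit vector, light additive noise
H = (rng.standard_normal((NR, NT)) + 1j * rng.standard_normal((NR, NT))) / np.sqrt(2)
x = QPSK[rng.integers(0, 4, NT)]
y = H @ x + 0.1 * (rng.standard_normal(NR) + 1j * rng.standard_normal(NR))
print("recovered transmit vector:", np.allclose(ga_mld(H, y), x))
```

Elitism keeps the best candidate between generations, which is what lets a modest population approach the exhaustive-search (ML) decision at moderate SNR.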
Abstract:
Operator quantum error correction is a recently developed theory that provides a generalized and unified framework for active error correction and passive error avoiding schemes. In this Letter, we describe these codes using the stabilizer formalism. This is achieved by adding a gauge group to stabilizer codes that defines an equivalence class between encoded states. Gauge transformations leave the encoded information unchanged; their effect is absorbed by virtual gauge qubits that do not carry useful information. We illustrate the construction by identifying a gauge symmetry in Shor's 9-qubit code that allows us to remove 3 of its 8 stabilizer generators, leading to a simpler decoding procedure and a wider class of logical operations without affecting its essential properties. This opens the path to possible improvements of the error threshold of fault-tolerant quantum computing.
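For reference, a sketch of the standard eight stabilizer generators of Shor's nine-qubit code, written for qubits 1–9 grouped into three blocks of three; which of these generators are traded for gauge operators follows the construction in the Letter and is not reproduced here.

```latex
% Standard stabilizer generators of Shor's [[9,1,3]] code (qubits 1..9):
\begin{align*}
  &Z_1 Z_2,\quad Z_2 Z_3,\quad Z_4 Z_5,\quad Z_5 Z_6,\quad Z_7 Z_8,\quad Z_8 Z_9,\\
  &X_1 X_2 X_3 X_4 X_5 X_6,\quad X_4 X_5 X_6 X_7 X_8 X_9 .
\end{align*}
```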
Abstract:
Objectives: The objectives of this study were to examine the extent of clustering of smoking, high levels of television watching, overweight, and high blood pressure among adolescents and whether this clustering varies by socioeconomic position and cognitive function. Methods: This study was a cross-sectional analysis of 3613 (1742 females) participants of an Australian birth cohort who were examined at age 14. Results: Three hundred fifty-three (9.8%) of the participants had co-occurrence of three or four risk factors. Risk factors clustered in these adolescents: more participants than would be predicted under an assumption of independence had either no risk factors or three or four risk factors. The extent of clustering tended to be greater in those from lower-income families and among those with lower cognitive function. The age-adjusted ratio of observed to expected co-occurrence of three or four risk factors was 2.70 (95% confidence interval [CI], 1.80-4.06) among those from low-income families and 1.70 (95% CI, 1.34-2.16) among those from more affluent families. The ratio among those with low Raven's scores (nonverbal reasoning) was 2.36 (95% CI, 1.69-3.30) and among those with higher scores was 1.51 (95% CI, 1.19-1.92); similar results for the WRAT 3 score (reading ability) were 2.69 (95% CI, 1.85-3.94) and 1.68 (95% CI, 1.34-2.11). Clustering did not differ by sex. Conclusion: Among adolescents, coronary heart disease risk factors cluster, and there is some evidence that this clustering is greater among those from families with low income and those who have lower cognitive function.
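As an illustration of the observed-to-expected measure of clustering, the expected proportion with three or four of the four risk factors can be computed under an independence assumption and compared with an observed proportion. The prevalences below are hypothetical, so the resulting ratio is illustrative only and is not the study's estimate.

```python
# Illustrative observed/expected clustering ratio under independence
# (hypothetical prevalences, not the study's data).
from itertools import product

prev = {"smoking": 0.15, "tv": 0.30, "overweight": 0.25, "high_bp": 0.10}

def expected_at_least(k, prevalences):
    """P(at least k of the factors) assuming the factors occur independently."""
    p = list(prevalences.values())
    total = 0.0
    for pattern in product([0, 1], repeat=len(p)):
        if sum(pattern) >= k:
            prob = 1.0
            for has, pi in zip(pattern, p):
                prob *= pi if has else (1 - pi)
            total += prob
    return total

observed = 0.098                       # 9.8% with three or four factors (from the abstract)
expected = expected_at_least(3, prev)  # expected proportion under independence
print(f"illustrative observed/expected ratio: {observed / expected:.2f}")
```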
Abstract:
The rigors of establishing innateness and domain specificity pose challenges to adaptationist models of music evolution. In articulating a series of constraints, the authors of the target articles provide strategies for investigating the potential origins of music. We propose additional approaches for exploring theories based on exaptation. We discuss a view of music as a multimodal system of engaging with affect, enabled by capacities of symbolism and a theory of mind.
Abstract:
A set of DCT-domain properties for shifting and scaling by real amounts, and for applying linear operations such as differentiation, is described. The DCT coefficients of a sampled signal are subjected to a linear transform, which returns the DCT coefficients of the shifted, scaled and/or differentiated signal. The properties are derived by considering the inverse discrete transform as a cosine series expansion of the original continuous signal, assuming sampling in accordance with the Nyquist criterion. This approach can be applied in the signal domain, to give, for example, DCT-based interpolation or derivatives. The same approach can be taken in decoding from the DCT to give, for example, derivatives in the signal domain. The techniques may prove useful in compressed-domain processing applications, and are interesting because they allow operations from the continuous domain such as differentiation to be implemented in the discrete domain. An image matching algorithm illustrates the use of the properties, with improvements in computation time and matching quality.
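A minimal sketch of the cosine-series view underlying these properties, assuming an unnormalized DCT-II convention: the coefficients define a cosine series that can be re-evaluated at arbitrary real sample positions, giving interpolation or a shift by a non-integer amount directly from the DCT coefficients.

```python
# Cosine-series view of the DCT: fractional shifts and interpolation from coefficients.
import numpy as np

def dct2(x):
    """Unnormalized DCT-II: X[k] = sum_n x[n] cos(pi k (n + 0.5) / N)."""
    N = len(x)
    n = np.arange(N)
    k = np.arange(N)[:, None]
    return (x * np.cos(np.pi * k * (n + 0.5) / N)).sum(axis=1)

def cosine_series(X, t):
    """Evaluate the cosine series defined by DCT-II coefficients X at
    real-valued sample positions t (t = n recovers the original samples)."""
    N = len(X)
    k = np.arange(1, N)[:, None]
    return X[0] / N + (2.0 / N) * (X[1:, None] *
                                   np.cos(np.pi * k * (t + 0.5) / N)).sum(axis=0)

x = np.sin(2 * np.pi * np.arange(32) / 32)                # test signal
X = dct2(x)
print(np.allclose(cosine_series(X, np.arange(32)), x))    # exact at integer positions
shifted = cosine_series(X, np.arange(32) - 0.5)           # shift by half a sample
```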
Abstract:
In this paper, a channel emulator for assessing the performance of a MIMO testbed implemented in field-programmable gate array (FPGA) technology is described. The FPGA-based MIMO system includes a signal generator, modulation/demodulation, and space-time coding/decoding modules. The emulator uses information about a wireless channel from computer simulations or actual measurements. In the simulations, a single-bounce scattering model for an indoor environment is applied. The generated data are stored on the FPGA board. The tests are performed for a 2×2 MIMO system that uses the Alamouti scheme for space-time coding/decoding. The performed tests show proper operation of the FPGA-implemented MIMO testbed. Good agreement between the results using measured and simulated channel data is obtained.
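For context, a minimal sketch of Alamouti space-time encoding and linear combining over a 2×2 quasi-static flat-fading channel; variable names and parameters are illustrative and not taken from the testbed implementation.

```python
# Alamouti 2x2 encoding and combining over a quasi-static flat-fading channel.
import numpy as np

rng = np.random.default_rng(1)

def alamouti_encode(s1, s2):
    """Rows = time slots, columns = transmit antennas."""
    return np.array([[s1, s2],
                     [-np.conj(s2), np.conj(s1)]])

def alamouti_combine(r, H):
    """r: (2 time slots, 2 rx antennas); H: (2 rx, 2 tx) channel matrix."""
    s1_hat = 0j
    s2_hat = 0j
    for j in range(2):                      # sum contributions over receive antennas
        h1, h2 = H[j, 0], H[j, 1]
        r1, r2 = r[0, j], r[1, j]
        s1_hat += np.conj(h1) * r1 + h2 * np.conj(r2)
        s2_hat += np.conj(h2) * r1 - h1 * np.conj(r2)
    return s1_hat, s2_hat                   # scaled estimates of s1, s2

# usage: a QPSK pair through a random channel, noiseless for clarity
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
s1, s2 = qpsk[0], qpsk[3]
H = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)
C = alamouti_encode(s1, s2)                 # (2 slots, 2 tx antennas)
r = C @ H.T                                 # received: (2 slots, 2 rx antennas)
gain = np.sum(np.abs(H) ** 2)               # combined channel gain
print(np.allclose(np.array([s1, s2]),
                  np.array(alamouti_combine(r, H)) / gain))
```

The combiner collapses the 2×2 channel into a single scalar gain equal to the squared Frobenius norm of H, which is why the final check divides by that gain.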
Abstract:
This paper describes the design of a Multiple Input Multiple Output (MIMO) testbed for assessing various MIMO transmission schemes in rich-scattering indoor environments. In the undertaken design, a Field Programmable Gate Array (FPGA) board is used for fast processing of Intermediate Frequency signals. At the present stage, the testbed performance is assessed with a channel emulator introduced between the transmitter and receiver modules. Here, the results are presented for the case when a 2×2 Alamouti scheme for space-time coding/decoding at the transmitter and receiver is used. Various programming details of the FPGA board along with the obtained simulation results are reported.
Abstract:
A 77-year-old man with 8 year progressive language deterioration in the face of grossly intact memory was followed. No acute or chronic physiological or psychological event was associated with symptom onset. CT revealed small left basal ganglia infarct. Mild atrophy, no lacunar infarcts, mild diffuse periventricular changes registered on MRI. Gait normal but slow. Speech hesitant and sparse. Affect euthymic; neurobehavioral disturbance absent. MMSE 26/30; clock incorrect, concrete. Neuropsychological testing revealed simple attention intact; complex attention, processing speed impaired. Visuospatial copying and delayed recall of copy average with some perseveration. Apraxia absent. Recall mildly impaired. Mild deficits in planning, organization apparent. Patient severely aphasic, dysarthric without paraphasias. Repetition of automatic speech, recitation moderately impaired; prosody intact. Understanding of written language, nonverbal communication abilities, intact. Frontal release signs developed over last 12 months. Repeated cognitive testing revealed mild deterioration across all domains with significant further decrease in expressive, receptive language. Neurobehavioral changes remain absent to date; he remains interested, engaged and independent in basic ADLs. Speech completely deteriorated; gait and movements appreciably slowed. Although signs of frontal/executive dysfunction present, lack of behavioral abnormalities, psychiatric disturbance, personality change argue against focal or progressive frontal impairment or dementia. Relative intactness of memory and comprehension argue against Alzheimer’s disease. Lack of findings on neuroimaging argue against CVA or tumor. It is possible that the small basal ganglia infarct has resulted in a mild lateral prefrontal syndrome. However, the absence of depression as well as the relatively circumscribed language problem suggests otherwise. The progressive, severe nature of language impairments, with relatively minor impairments in attention and memory, argues for a possible diagnosis of primary progressive aphasia.
Abstract:
Despite extensive progress on the theoretical aspects of spectrally efficient communication systems, hardware impairments, such as phase noise, are key bottlenecks in next-generation wireless communication systems. The presence of non-ideal oscillators at the transceiver introduces time-varying phase noise and degrades the performance of the communication system. A significant body of research focuses on joint synchronization and decoding based on the joint posterior distribution, which incorporates both the channel and the code graph. These joint synchronization and decoding approaches rely on well-designed sum-product algorithms, in which probabilistic messages are iteratively passed between the channel statistical information and the decoding information. The channel statistical information generally entails a high computational complexity because its probabilistic model may involve continuous random variables. The detailed knowledge of channel statistics required by these algorithms makes them an inadequate choice for real-world applications with power and computational limitations. In this thesis, novel phase estimation strategies are proposed: soft decision-directed iterative receivers that perform separate A Posteriori Probability (APP)-based synchronization and decoding. These algorithms do not require any a priori statistical characterization of the phase noise process. The proposed approach relies on a Maximum A Posteriori (MAP)-based algorithm to perform phase noise estimation and does not depend on the considered modulation/coding scheme, as it only exploits the APPs of the transmitted symbols. Different variants of APP-based phase estimation are considered. The proposed algorithm has significantly lower computational complexity than joint synchronization/decoding approaches at the cost of a slight performance degradation. With the aim of improving the robustness of the iterative receiver, we derive a new system model for an oversampled (more than one sample per symbol interval) phase noise channel. We extend the separate APP-based synchronization and decoding algorithm to a multi-sample receiver, which exploits the received information from the channel by exchanging information in an iterative fashion to achieve robust convergence. Two algorithms based on sliding block-wise processing with soft ISI cancellation and detection are proposed, based on the use of reliable information from the channel decoder. Dually polarized systems provide a cost- and space-effective solution to increase spectral efficiency and are competitive candidates for next-generation wireless communication systems. A novel soft decision-directed iterative receiver for separate APP-based synchronization and decoding is proposed. This algorithm relies on a Minimum Mean Square Error (MMSE)-based cancellation of the cross-polarization interference (XPI) followed by phase estimation on the polarization of interest. This iterative receiver structure is motivated by Master/Slave Phase Estimation (M/S-PE), where M-PE corresponds to the polarization of interest. The operational principle of an M/S-PE block is to improve the phase-tracking performance of both polarization branches: more precisely, the M-PE block tracks the co-polar phase and the S-PE block reduces the residual phase error on the cross-polar branch. Two variants of MMSE-based phase estimation are considered: BW and PLP.
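To illustrate the general idea of estimating phase from symbol decisions rather than from a statistical phase-noise model, here is a minimal sketch of a first-order decision-directed phase tracker for QPSK. It uses hard decisions as a stand-in for the soft APP-based receivers developed in the thesis; the loop gain, noise level, and phase-noise variance are assumed values.

```python
# Decision-directed phase tracking for QPSK over a random-walk phase-noise channel.
import numpy as np

rng = np.random.default_rng(2)
QPSK = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

def dd_phase_track(r, mu=0.1):
    """First-order loop: theta <- theta + mu * angle(derotated sample * conj(decision))."""
    theta = 0.0
    decisions = np.empty_like(r)
    for k, rk in enumerate(r):
        z = rk * np.exp(-1j * theta)                 # de-rotate by current phase estimate
        d = QPSK[np.argmin(np.abs(QPSK - z))]        # hard symbol decision
        decisions[k] = d
        theta += mu * np.angle(z * np.conj(d))       # update from residual phase error
    return decisions

# usage: QPSK symbols through a Wiener (random-walk) phase-noise channel with AWGN
N = 2000
tx = QPSK[rng.integers(0, 4, N)]
phase = np.cumsum(rng.normal(0, 0.02, N))            # random-walk phase noise
r = tx * np.exp(1j * phase) + 0.05 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
err = np.mean(dd_phase_track(r) != tx)
print(f"symbol error rate with tracking: {err:.4f}")
```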
Abstract:
We investigate the performance of error-correcting codes, where the code word comprises products of K bits selected from the original message and decoding is carried out utilizing a connectivity tensor with C connections per index. Shannon's bound for the channel capacity is recovered for large K and zero temperature when the code rate K/C is finite. Close to optimal error-correcting capability is obtained for finite K and C. We examine the finite-temperature case to assess the use of simulated annealing for decoding and extend the analysis to accommodate other types of noisy channels.
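A toy sketch of the construction for K = 2, decoded with a few Metropolis sweeps at the Nishimori temperature of a binary symmetric channel; sizes, rate, and flip probability are illustrative, and this is a simulation sketch rather than the replica analysis itself.

```python
# Toy Sourlas-type code (K = 2): codeword bits are products of pairs of message
# spins, the channel flips each with probability p, and decoding runs Metropolis
# sweeps on the spin system whose couplings are the received products.
import numpy as np

rng = np.random.default_rng(3)

N, C, p = 50, 6, 0.08             # message bits, checks per bit, flip probability
beta = 0.5 * np.log((1 - p) / p)  # Nishimori temperature for a binary symmetric channel

msg = rng.choice([-1, 1], N)
# each codeword bit is the product of K = 2 randomly chosen message bits
pairs = np.array([rng.choice(N, 2, replace=False) for _ in range(N * C // 2)])
codeword = msg[pairs[:, 0]] * msg[pairs[:, 1]]
received = codeword * np.where(rng.random(len(codeword)) < p, -1, 1)

def decode(received, pairs, sweeps=200):
    s = rng.choice([-1, 1], N)                      # random initial spins
    for _ in range(sweeps):
        for i in rng.permutation(N):
            hit = (pairs == i).any(axis=1)          # checks containing spin i
            partner = pairs[hit].sum(axis=1) - i    # the other index in each pair
            h = np.sum(received[hit] * s[partner])  # local field on spin i
            dE = 2 * s[i] * h                       # energy change if spin i flips
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                s[i] = -s[i]
    return s

s = decode(received, pairs)
overlap = abs(np.mean(s * msg))                     # |m| absorbs the global sign flip for K = 2
print(f"overlap with original message: {overlap:.2f}")
```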
Abstract:
We investigate the performance of parity check codes using the mapping onto spin glasses proposed by Sourlas. We study codes where each parity check comprises products of K bits selected from the original digital message, with exactly C parity checks per message bit. We show, using the replica method, that these codes saturate Shannon's coding bound for K → ∞ when the code rate K/C is finite. We then examine the finite-temperature case to assess the use of simulated annealing methods for decoding, study the performance of the finite-K case, and extend the analysis to accommodate different types of noisy channels. The analogy between statistical physics methods and decoding by belief propagation is also discussed.
Abstract:
We show the similarity between belief propagation and TAP for decoding corrupted messages encoded by Sourlas's method. The latter is a special case of the Gallager error-correcting code, where the code word comprises products of K bits selected randomly from the original message. We examine the efficacy of solutions obtained by the two methods for various values of K and show that solutions for K ≥ 3 may be sensitive to the choice of initial conditions in the case of unbiased patterns. Good approximations are obtained generally for K = 2 and for biased patterns in the case of K ≥ 3, especially when Nishimori's temperature is used.