44 results for Symbols
Abstract:
The field of Molecular Spectroscopy was surveyed in order to determine a set of conventions and symbols which are in common use in the spectroscopic literature. This document, which is Part 2 in a series, establishes the notations and conventions used for the description of symmetry in rigid molecules, using the Schoenflies notation. It deals firstly with the symmetry operators of the molecular point groups (also drawing attention to the difference between symmetry operators and elements). The conventions and notations of the molecular point groups are then established, followed by those of the representations of these groups as used in molecular spectroscopy. Further parts will follow, dealing inter alia with permutation and permutation-inversion symmetry notation, vibration-rotation spectroscopy and electronic spectroscopy.
Abstract:
The field of Molecular Spectroscopy was surveyed in order to determine a set of conventions and symbols which are in common use in the spectroscopic literature. This document, which is Part 3 in a series, deals with symmetry notation referring to groups that involve nuclear permutations and the inversion operation. Further parts will follow, dealing inter alia with vibration-rotation spectroscopy and electronic spectroscopy.
Abstract:
Some aspects of the use and misuse of scientific language are discussed, particularly in relation to quantity calculus, the names and symbols for quantities and units, and the choice of units – including the possible use of non-SI units. The discussion is intended to be constructive, and to suggest ways in which common usage can be improved.
Abstract:
Once unit-cell dimensions have been determined from a powder diffraction data set and therefore the crystal system is known (e.g. orthorhombic), the method presented by Markvardsen, David, Johnson & Shankland [Acta Cryst. (2001), A57, 47-54] can be used to generate a table ranking the extinction symbols of the given crystal system according to probability. Markvardsen et al. tested a computer program (ExtSym) implementing the method against Pawley refinement outputs generated using the TF12LS program [David, Ibberson & Matthewman (1992). Report RAL-92-032. Rutherford Appleton Laboratory, Chilton, Didcot, Oxon, UK]. Here, it is shown that ExtSym can be used successfully with many well known powder diffraction analysis packages, namely DASH [David, Shankland, van de Streek, Pidcock, Motherwell & Cole (2006). J. Appl. Cryst. 39, 910-915], FullProf [Rodriguez-Carvajal (1993). Physica B, 192, 55-69], GSAS [Larson & Von Dreele (1994). Report LAUR 86-748. Los Alamos National Laboratory, New Mexico, USA], PRODD [Wright (2004). Z. Kristallogr. 219, 1-11] and TOPAS [Coelho (2003). Bruker AXS GmbH, Karlsruhe, Germany]. In addition, a precise description of the optimal input for ExtSym is given to enable other software packages to interface with ExtSym and to allow the improvement/modification of existing interfacing scripts. ExtSym takes as input the powder data in the form of integrated intensities and error estimates for these intensities. The output returned by ExtSym is demonstrated to be strongly dependent on the accuracy of these error estimates and the reason for this is explained. ExtSym is tested against a wide range of data sets, confirming the algorithm to be very successful at ranking the published extinction symbol as the most likely. (C) 2008 International Union of Crystallography Printed in Singapore - all rights reserved.
Abstract:
Identifying 2 target stimuli in a rapid stream of visual symbols is much easier if the 2nd target appears immediately after the 1st target (i.e., at Lag 1) than if distractor stimuli intervene. As this phenomenon comes with a strong tendency to confuse the order of the targets, it seems to be due to the integration of both targets into the same attentional episode or object file. The authors investigated the degree to which people can control the temporal extension of their (episodic) integration windows by manipulating the expectations participants had with regard to the time available for target processing. As predicted, expecting more time to process increased the number of order confusions at Lag 1. This was true for between-subjects and within-subjects (trial-to-trial) manipulations, suggesting that integration windows can be adapted actively and rather quickly.
Abstract:
Finding an estimate of the channel impulse response (CIR) by correlating a received known (training) sequence with the sent training sequence is commonplace. Where required, it is also common to truncate the longer correlation to a subset of correlation coefficients by finding the set of N sequential correlation coefficients with the maximum power. This paper presents a new approach to selecting the optimal set of N CIR coefficients from the correlation rather than relying on power. The algorithm reconstructs a set of predicted symbols using the training sequence and various subsets of the correlation, in order to find the subset that results in the minimum mean squared error between the actual received symbols and the reconstructed symbols. The application of the algorithm is presented in the context of the TDMA-based GSM/GPRS system, and the results demonstrate an improvement in system performance with the new algorithm. However, the approach lends itself to any training-sequence-based communication system, as often found within wireless consumer electronic devices.
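The selection step described above can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: the function name, the real-valued signal model and the exhaustive sliding-window search are all assumptions.

```python
import math

def best_cir_window(train, received, h_full, N):
    """Pick the N-tap window of a full correlation-based CIR estimate
    that minimizes the MSE between reconstructed and actual received
    symbols, instead of picking the window with maximum power."""
    best_mse, best_start = math.inf, 0
    for start in range(len(h_full) - N + 1):
        taps = h_full[start:start + N]
        # reconstruct the received symbols by convolving the training
        # sequence with the candidate sub-CIR (overall delay = start)
        mse = 0.0
        for n in range(len(received)):
            pred = sum(taps[k] * train[n - start - k]
                       for k in range(N)
                       if 0 <= n - start - k < len(train))
            mse += (received[n] - pred) ** 2
        if mse < best_mse:
            best_mse, best_start = mse, start
    return best_start, h_full[best_start:best_start + N]
```

When the true channel energy is concentrated in N consecutive taps, the window reproducing the received symbols most faithfully wins even if a spurious high-power coefficient appears elsewhere in the correlation.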
Abstract:
Wireless Personal Area Networks (WPANs) offer high data rates suitable for interconnecting high-bandwidth personal consumer devices (WirelessHD streaming, Wireless USB and Bluetooth EDR). ECMA-368 is the Physical (PHY) and Media Access Control (MAC) backbone of many of these wireless devices. WPAN devices tend to operate in an ad hoc network, and it is therefore important to successfully latch onto the network and become part of one of the available piconets. This paper presents a new algorithm for detecting the Packet/Frame Sync (PFS) signal in ECMA-368 to identify piconets and aid symbol timing. The algorithm correlates the received PFS symbols with the expected locally stored symbols over the 24 or 12 PFS symbols, but selects the likely time-frequency code (TFC) based on the highest statistical mode of the 24 or 12 best correlation results. The results are very favorable, showing an improvement margin on the order of 11.5 dB in reference sensitivity tests between the performance achieved using this algorithm and that of comparable systems.
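The mode-based selection can be illustrated as follows. This is a simplifying sketch, not ECMA-368 processing: the data layout (one stored template vector per TFC per symbol position) and the real-valued correlation are assumptions, since the actual receiver correlates complex baseband samples.

```python
from statistics import mode

def detect_tfc(received_syms, templates):
    """templates: {tfc_id: list of expected PFS symbol vectors, one per
    position}. Each received symbol votes for the TFC whose template
    correlates best; the statistical mode of the votes wins."""
    def corr(a, b):
        # inner product of two real-valued symbol vectors
        return sum(x * y for x, y in zip(a, b))

    winners = []
    for i, r in enumerate(received_syms):
        # per-symbol winner: the TFC whose stored template correlates best
        best = max(templates, key=lambda t: corr(r, templates[t][i]))
        winners.append(best)
    # mode across all PFS symbol positions tolerates a few corrupted symbols
    return mode(winners)
```

Taking the mode over 24 (or 12) per-symbol decisions is what makes the detector robust: a handful of symbols corrupted by noise cannot outvote the majority.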
Abstract:
This study investigates the superposition-based cooperative transmission system. In this system, a key point is for the relay node to detect data transmitted from the source node. This issue has received little attention in the existing literature, as the channel is usually assumed to be flat fading and a priori known. In practice, however, the channel is not only a priori unknown but also subject to frequency-selective fading. Channel estimation is thus necessary. Of particular interest is channel estimation at the relay node, which imposes extra requirements on the system resources. The authors propose a novel turbo least-squares channel estimator that exploits the superposition structure of the transmitted data. The proposed channel estimator not only requires no pilot symbols but also performs significantly better than the classic approach. The soft-in soft-out minimum mean square error (MMSE) equaliser is also re-derived to match the superimposed data structure. Finally, computer simulation results are presented to verify the proposed algorithm.
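For orientation, the least-squares estimate that such turbo estimators iterate around can be written in the standard textbook form; this is the classic formula, not the authors' pilot-free derivation, and the notation ($S$, $h$, $n$) is assumed here:

```latex
% received block r modelled as the (superimposed) data symbols sent
% through the frequency-selective channel h, plus noise n:
%   r = S h + n,   S = convolution matrix built from the data symbols
\hat{h}_{\mathrm{LS}} = \left( S^{\mathsf{H}} S \right)^{-1} S^{\mathsf{H}} r
```

In a turbo scheme, soft symbol decisions from the equaliser refine $S$ on each iteration, so the estimate improves without dedicated pilot symbols.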
Abstract:
This correspondence proposes a new algorithm for joint OFDM data detection and phase noise (PHN) cancellation for constant-modulus modulations. We highlight that it is important to address the overfitting problem, since this is a major detrimental factor impairing the joint detection process. To attack the overfitting problem, we propose an iterative approach based on the minimum mean square prediction error (MMSPE), subject to the constraint that the estimated data symbols have constant power. The proposed constrained MMSPE algorithm (C-MMSPE) significantly improves the performance of existing approaches with little extra complexity. Simulation results are also given to verify the proposed algorithm.
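The constant-power constraint itself is simple to state in code. The sketch below shows only that projection step (rescaling each symbol estimate to the known modulus), not the full C-MMSPE iteration; the function name and zero-symbol handling are assumptions.

```python
def project_constant_modulus(symbols, power=1.0):
    """Enforce the constant-power constraint on a list of complex
    symbol estimates: keep each symbol's phase, fix its magnitude."""
    r = power ** 0.5
    out = []
    for s in symbols:
        if s == 0:
            out.append(complex(r, 0.0))  # arbitrary phase for a zero estimate
        else:
            out.append(r * s / abs(s))
    return out
```

Constraining every estimate onto the modulation's circle is what prevents the joint detector from "explaining" phase noise by drifting symbol magnitudes, i.e. from overfitting.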
Abstract:
Let $A$ be an infinite Toeplitz matrix with a real symbol $f$ defined on $[-\pi, \pi]$. It is well known that the sequence of spectra of finite truncations $A_N$ of $A$ converges to the convex hull of the range of $f$. Recently, Levitin and Shargorodsky, on the basis of some numerical experiments, conjectured, for symbols $f$ with two discontinuities located at rational multiples of $\pi$, that the eigenvalues of $A_N$ located in the gap of $f$ asymptotically exhibit periodicity in $N$, and suggested a formula for the period as a function of the position of discontinuities. In this paper, we quantify and prove the analog of this conjecture for the matrix $A^2$ in a particular case when $f$ is a piecewise constant function taking values $-1$ and $1$.
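For reference, the standard convention behind this statement is that the Toeplitz matrix is built from the Fourier coefficients of its symbol:

```latex
A = \bigl( \hat{f}(j-k) \bigr)_{j,k \ge 0}, \qquad
\hat{f}(n) = \frac{1}{2\pi} \int_{-\pi}^{\pi} f(\theta)\, e^{-i n \theta}\, \mathrm{d}\theta .
```

For a piecewise constant symbol taking the values $-1$ and $1$, the range of $f$ is $\{-1, 1\}$, its convex hull is $[-1, 1]$, and the "gap" in question is the open interval $(-1, 1)$ where the eigenvalues of the truncations $A_N$ studied here can appear.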
Abstract:
In developing Isotype, Otto Neurath and his colleagues were the first to systematically explore a consistent visual language as part of an encyclopedic approach to representing all aspects of the physical world. The pictograms used in Isotype have a secure legacy in today's public information symbols, but Isotype was more than this: it was designed to communicate social facts memorably to less educated groups, including schoolchildren and workers, reflecting its initial testing ground in the socialist municipality of Vienna during the 1920s. The social engagement and methodology of Isotype are examined here in order to draw some lessons for information design today.
Abstract:
Adaptive filters used in code division multiple access (CDMA) receivers to counter interference have been formulated both with and without the assumption of training symbols being transmitted. They are known as training-based and blind detectors respectively. We show that the convergence behaviour of the blind minimum-output-energy (MOE) detector can be quite easily derived, unlike what was implied by the procedure outlined in a previous paper. The simplification results from the observation that the correlation matrix determining convergence performance can be made symmetric, after which many standard results from the literature on least mean square (LMS) filters apply immediately.
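The "standard results" invoked here are the usual LMS step-size/eigenvalue conditions for the symmetric input correlation matrix. A generic textbook LMS identification loop (not the blind MOE detector itself; the real-valued signal model and step size are assumptions) looks like:

```python
def lms_identify(x, d, num_taps, mu):
    """Plain LMS adaptive filter: adapt weights w so that the filter
    output w . [x[n], x[n-1], ...] tracks the desired signal d[n].
    Convergence is governed by mu relative to the eigenvalues of the
    symmetric input correlation matrix."""
    w = [0.0] * num_taps
    for n in range(len(x)):
        # current input vector (zero-padded before the start of x)
        xvec = [x[n - k] if n - k >= 0 else 0.0 for k in range(num_taps)]
        y = sum(wk * xk for wk, xk in zip(w, xvec))  # filter output
        e = d[n] - y                                 # instantaneous error
        # stochastic-gradient weight update
        w = [wk + mu * e * xk for wk, xk in zip(w, xvec)]
    return w
```

Once the MOE detector's correlation matrix is made symmetric, as the abstract notes, exactly this kind of step-size and eigenvalue-spread analysis carries over to it.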
Abstract:
Dictionary compilers and designers use punctuation to structure and clarify entries and to encode information. Dictionaries with a relatively simple structure can have simple typography, and simple punctuation; as dictionaries grew more complex, and encountered the space constraints of the printed page, complex encoding systems were developed, using punctuation and symbols. Two recent trends have emerged in dictionary design: to eliminate punctuation, and sometimes to use a larger number of fonts, so that the boundaries between elements are indicated by font change, not punctuation.
Abstract:
Treating algebraic symbols as objects (e.g. "'a' means 'apple'") is a means of introducing elementary simplification of algebra, but causes problems further on. This school-based research included an examination of texts still in use in the mathematics department, and interviews with mathematics teachers, year 7 pupils and year 10 pupils, asking them how they would explain "3a + 2a = 5a" to year 7 pupils. Results included the finding that the 'algebra as object' analogy can be found in textbooks in current usage, including those recently published. Teachers knew that they were not 'supposed' to use the analogy but were not always clear why, and nevertheless described teaching methods consistent with an 'algebra as object' approach. Year 7 pupils did not explicitly refer to 'algebra as object', although some of their responses could be so interpreted. In the main, year 10 pupils used 'algebra as object' to explain simplification of algebra, with some complicated attempts to get round its limitations. Further research would look to establish whether the appearance of 'algebra as object' in pupils' thinking between years 7 and 10 is consistent and, if so, where it arises. There are also implications for ongoing teacher training, covering alternatives for introducing such simplification.
Abstract:
We revisit the boundedness of Hankel and Toeplitz operators acting on the Hardy space $H^1$ and give a new proof of the old result stating that the Hankel operator $H_a$ is bounded if and only if $a$ has bounded logarithmic mean oscillation. We also establish a necessary and sufficient condition for $H_a$ to be compact on $H^1$. The Fredholm properties of Toeplitz operators on $H^1$ are studied for symbols in a Banach algebra similar to $C + H^\infty$, under mild additional conditions caused by the differences in the boundedness of Toeplitz operators acting on $H^1$ and $H^2$.