68 results for Hilbert symbol
Abstract:
Practice-led or multimodal theses (describing examinable outcomes of postgraduate study which comprise the practice of dancing/choreography with an accompanying exegesis) are an emerging strength of dance scholarship; a form of enquiry that has been gaining momentum for over a decade, particularly in Australia and the United Kingdom. It has been strongly argued that, in this form of research, legitimate claims to new knowledge are embodied predominantly within the practice itself (Pakes, 2003) and that these findings are emergent, contingent and often interstitial, contained within both the material form of the practice and the symbolic languages surrounding the form. In a recent study on ‘dancing’ theses, Phillips, Stock and Vincs (2009) found general agreement among academics and artists that ‘there could be more flexibility in matching written language with conceptual thought expressed in practice’. The authors discuss how the seemingly intangible nature of danced/embodied research, reliant on what Melrose (2003) terms ‘performance mastery’ by the ‘expert practitioner’ (2006, Point 4) involving ‘expert’ intuition (2006, Point 5), might be accessed, articulated and validated in terms of alternative ways of knowing, through exploring an ongoing dialogue in which the danced practice develops emergent theory. They also propose ways in which the danced thesis can be ‘converted’ into the required ‘durable’ artefact which the ephemerality of live performance denies, drawing on Rye’s ‘multi-view’ digital record (2003) and Stapleton’s ‘multi-voiced audio visual document’ (2006, 82). Building on a two-year research project (2007-2008), Dancing Between Diversity and Consistency: Refining Assessment in Postgraduate Degrees in Dance, which examined such issues in relation to assessment in an Australian context, the three researchers have further explored issues around interdisciplinarity, cultural differences and documentation through engaging with the following questions: How do we represent research in which understandings, meanings and findings are situated within the body of the dancer/choreographer? Do these need a form of ‘translating’ into textual form in order to be accessed as research? What kind of language structures can be developed to effect this translation: metaphor, allusion, symbol? How important is contextualising the creative practice? How do we incorporate differing cultural inflections and practices into our reading and evaluation? What kind of layered documentation can assist in producing a ‘durable’ research artefact from a non-reproducible live event?
Abstract:
Aims: This study investigated the effect of simulated visual impairment on the speed and accuracy of performance on a series of commonly used cognitive tests. Methods: Cognitive performance was assessed for 30 young, visually normal subjects (mean age 22.0 ± 3.1 years) using the Digit Symbol Substitution Test (DSST), Trail Making Tests A and B (TMT A and B) and the Stroop Colour Word Test under three visual conditions: normal vision and two levels of visually degrading filters (Vistech™), administered in a random order. Distance visual acuity and contrast sensitivity were also assessed for each filter condition. Results: The visual filters, which degraded contrast sensitivity to a greater extent than visual acuity, significantly increased the time taken to complete the DSST and the TMT A and B (p<0.05), but not the number of errors made, and affected only some components of the Stroop test. Conclusions: Reduced contrast sensitivity had a marked effect on the speed but not the accuracy of performance on commonly used cognitive tests, even in young individuals; the implications of these findings are discussed.
Abstract:
The Sydney Harbour Bridge provides an imaginative space that is revisited by Australian writers in particular ways. In this space novelists, poets, and cultural historians negotiate questions of emotional and psychological transformation as well as reflect on social and environmental change in the city of Sydney. The writerly tensions that mark these accounts often alter, or query, representations of the Bridge as a symbol of material progress and demonstrate a complex creative engagement with the Bridge. This discussion of ‘the Bridge’ focuses on the work of four authors: Eleanor Dark, P.R. Stephensen, Peter Carey and Vicki Hastrich; it also draws on a range of other fictional and non-fictional accounts of ‘Bridge-writing’. The ideas proffered are framed by a theorising of space, especially referencing the work of Michel de Certeau, whose writing on the spatial ambiguity of a bridge is important to the examination of the diverse ways in which Australian writers have engaged with the imaginative potential and almost mythic resonance of the Sydney Harbour Bridge.
Abstract:
Purpose: To evaluate the on-road driving performance of persons with homonymous hemianopia or quadrantanopia in comparison to age-matched controls with normal visual fields. Methods: Participants were 22 hemianopes and eight quadrantanopes (mean age 53 years) and 30 persons with normal visual fields (mean age 52 years), all of whom were either current drivers or aiming to resume driving. All participants completed a battery of vision tests (ETDRS visual acuity, Pelli-Robson letter contrast sensitivity, Humphrey visual fields), cognitive tests (Trail Making Tests A and B, Mini Mental State Examination, Digit Symbol Substitution) and an on-road driving assessment. Driving performance was assessed in a dual-brake vehicle with safety monitored by a certified driving rehabilitation specialist. Backseat evaluators masked to the clinical characteristics of participants independently rated driving performance along a 22.7 kilometre route involving urban and interstate driving. Results: Seventy-three per cent of the hemianopes, 88 per cent of the quadrantanopes and all of the drivers with normal fields received safe driving ratings. Those hemianopic and quadrantanopic drivers rated as unsafe tended to have problems with maintaining appropriate lane position, steering steadiness and gap judgment compared to controls. Unsafe driving was associated with slower visual processing speed and impairments in contrast sensitivity, visual field sensitivity and executive function. Conclusions: Our findings suggest that some drivers with hemianopia or quadrantanopia are capable of safe driving performance when compared to those of the same age with normal visual fields. This finding has important implications for the assessment of fitness to drive in this population.
Abstract:
This paper investigates the impact of carrier frequency offset (CFO) on Single Carrier wireless communication systems with Frequency Domain Equalization (SC-FDE). We show that CFO in SC-FDE systems causes an irrecoverable channel estimation error, which leads to inter-symbol interference (ISI). The impact of CFO on SC-FDE and OFDM is compared in the presence of CFO and channel estimation errors. Closed-form expressions for the signal to interference and noise ratio (SINR) are derived for both systems and verified by simulation results. We find that when channel estimation errors are considered, SC-FDE is similarly sensitive, or even more sensitive, to CFO than OFDM. In particular, in SC-FDE systems CFO mainly deteriorates system performance by degrading the channel estimation. Both analytical and simulation results highlight the importance of accurate CFO estimation in SC-FDE systems.
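The mechanism described above, CFO corrupting the frequency-domain channel estimate, can be illustrated with a minimal numpy sketch (all parameter values are arbitrary illustrations, not taken from the paper): a constant-amplitude training block is sent through a known multipath channel, a small CFO adds a growing phase rotation across the block, and the least-squares channel estimate inherits that rotation.

```python
import numpy as np

N = 64                          # block length (arbitrary)
eps = 0.05                      # normalized CFO, in subcarrier-spacing units (arbitrary)
h = np.array([1.0, 0.5, 0.2])   # example multipath channel impulse response

# Constant-amplitude Zadoff-Chu training: its DFT has flat magnitude,
# so the least-squares estimate below has no ill-conditioned bins.
n = np.arange(N)
x = np.exp(-1j * np.pi * n * n / N)

# Cyclic-prefixed block transmission = circular convolution with the channel
y = np.fft.ifft(np.fft.fft(x) * np.fft.fft(h, N))

# CFO rotates each received sample by a linearly growing phase
y_cfo = y * np.exp(1j * 2 * np.pi * eps * n / N)

# Frequency-domain least-squares channel estimate from the rotated block
H_true = np.fft.fft(h, N)
H_est = np.fft.fft(y_cfo) / np.fft.fft(x)

mse = np.mean(np.abs(H_est - H_true) ** 2) / np.mean(np.abs(H_true) ** 2)
print(f"normalized channel-estimation error with eps={eps}: {mse:.4f}")
# With eps = 0 the estimate is exact; any nonzero eps leaves a residual
# estimation error that the equalizer then turns into ISI.
```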
Abstract:
Structural health is a vital aspect of infrastructure sustainability. As part of a vital infrastructure and transportation network, bridge structures must function safely at all times. However, due to heavier and faster-moving vehicular loads and changes of function, such as Busway accommodation, many bridges now operate overloaded, beyond their design capacity. Additionally, the huge renovation and replacement costs are a difficult burden for infrastructure owners. The structural health monitoring (SHM) systems proposed recently incorporate vibration-based damage detection techniques, statistical methods and signal processing techniques, and have been regarded as efficient and economical ways to assess bridge condition and foresee probable costly failures. In this chapter, recent developments in damage detection and condition assessment techniques based on vibration-based damage detection and statistical methods are reviewed. The vibration-based damage detection methods discussed are based on changes, before and after damage, in natural frequencies, curvature or strain modes, modal strain energy and dynamic flexibility, as well as artificial neural networks and signal processing methods such as wavelet techniques, empirical mode decomposition and Hilbert spectrum methods.
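As a toy illustration of the simplest method in this family, detecting damage from a shift in natural frequency before and after damage, the following sketch compares the spectral peaks of free-vibration responses of a single-degree-of-freedom system with intact and reduced stiffness. All values are invented for illustration; real bridges have many closely spaced modes, noise and environmental variability.

```python
import numpy as np

def free_vibration(fn, zeta, fs, T):
    """Free decay response of a SDOF system with natural frequency fn (Hz)."""
    t = np.arange(0, T, 1 / fs)
    wd = 2 * np.pi * fn * np.sqrt(1 - zeta ** 2)   # damped natural frequency
    return np.exp(-zeta * 2 * np.pi * fn * t) * np.cos(wd * t)

def peak_frequency(x, fs):
    """Frequency of the largest FFT peak (simple spectral estimate)."""
    X = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    f = np.fft.rfftfreq(len(x), 1 / fs)
    return f[np.argmax(X)]

fs, T, zeta = 256.0, 8.0, 0.01
f_intact = 10.0                       # healthy natural frequency (Hz), invented
f_damaged = f_intact * np.sqrt(0.85)  # 15% stiffness loss: fn scales as sqrt(k)

for label, fn in [("intact", f_intact), ("damaged", f_damaged)]:
    x = free_vibration(fn, zeta, fs, T)
    print(f"{label}: estimated natural frequency = {peak_frequency(x, fs):.2f} Hz")
```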
Abstract:
This thesis deals with the problem of the instantaneous frequency (IF) estimation of sinusoidal signals. This topic plays a significant role in signal processing and communications. Depending on the type of signal, two major approaches are considered. For IF estimation of single-tone or digitally-modulated sinusoidal signals (such as frequency shift keying signals), the approach of digital phase-locked loops (DPLLs) is considered; this is Part I of the thesis. For FM signals the approach of time-frequency analysis is considered; this is Part II of the thesis. In Part I we have utilized sinusoidal DPLLs with a non-uniform sampling scheme, as this type is widely used in communication systems. The digital tanlock loop (DTL) has introduced significant advantages over other existing DPLLs, and in the last 10 years many efforts have been made to improve its performance. However, this loop and all of its modifications utilize a Hilbert transformer (HT) to produce a signal-independent 90-degree phase-shifted version of the input signal. The Hilbert transformer can be realized approximately using a finite impulse response (FIR) digital filter; this realization introduces further complexity into the loop, in addition to approximations and frequency limitations on the input signal. We have tried to avoid the practical difficulties associated with the conventional tanlock scheme while keeping its advantages. A time delay is utilized in the tanlock scheme of the DTL to produce a signal-dependent phase shift, giving rise to the time-delay digital tanlock loop (TDTL). Fixed point theorems are used to analyze the behavior of the new loop. As such, TDTL combines the two major approaches in DPLLs: the non-linear approach of the sinusoidal DPLL based on fixed point analysis, and the linear tanlock approach based on arctan phase detection. TDTL preserves the main advantages of the DTL despite its reduced structure. An application of TDTL in FSK demodulation is also considered. The idea of replacing the HT by a time delay may be of interest in other signal processing systems; hence we have analyzed and compared the behaviors of the HT and the time delay in the presence of additive Gaussian noise. Based on this analysis, the behavior of the first- and second-order TDTLs has been analyzed in additive Gaussian noise. Since DPLLs need time for locking, they are normally not efficient in tracking the continuously changing frequencies of non-stationary signals, i.e. signals with time-varying spectra. Non-stationary signals are of importance in synthetic and real-life applications; an example is the frequency-modulated (FM) signals widely used in communication systems. Part II of this thesis is dedicated to the IF estimation of non-stationary signals. For such signals the classical spectral techniques break down, due to the time-varying nature of their spectra, and more advanced techniques are needed. For the IF estimation of non-stationary signals there are two major approaches: parametric and non-parametric. We chose the non-parametric approach, which is based on time-frequency analysis; it is computationally less expensive and more effective in dealing with multicomponent signals, which are the main focus of this part of the thesis. A time-frequency distribution (TFD) of a signal is a two-dimensional transformation of the signal to the time-frequency domain. Multicomponent signals can be identified by multiple energy peaks in the time-frequency domain.
Many real-life and synthetic signals are of a multicomponent nature, and there is little in the literature concerning IF estimation of such signals. This is why we have concentrated on multicomponent signals in Part II. An adaptive algorithm for IF estimation using quadratic time-frequency distributions has been analyzed. A class of time-frequency distributions that are more suitable for this purpose has been proposed. The kernels of this class are time-only or one-dimensional, rather than the time-lag (two-dimensional) kernels; hence this class has been named the T-class. If the parameters of these TFDs are properly chosen, they are more efficient than the existing fixed-kernel TFDs in terms of resolution (energy concentration around the IF) and artifact reduction. The T-distributions have been used in the adaptive IF algorithm and proved to be efficient in tracking rapidly changing frequencies. They also enable direct amplitude estimation for the components of a multicomponent signal.
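A minimal sketch of the non-parametric, time-frequency route to IF estimation described in Part II: here a plain spectrogram stands in for the thesis's adaptive T-class distributions (which are not reproduced here), and the IF of a linear FM test signal is taken as the peak of each time slice. The signal parameters are arbitrary choices for illustration.

```python
import numpy as np
from scipy.signal import spectrogram

# Linear FM test signal: IF sweeps 20 -> 100 Hz over 2 s (values invented)
fs, T = 1000.0, 2.0
t = np.arange(0, T, 1 / fs)
f0, f1 = 20.0, 100.0
x = np.cos(2 * np.pi * (f0 * t + 0.5 * (f1 - f0) / T * t ** 2))

# Spectrogram (the simplest quadratic TFD); the IF estimate at each
# time slice is the frequency of the energy peak.
f, tt, S = spectrogram(x, fs=fs, nperseg=256, noverlap=192)
if_est = f[np.argmax(S, axis=0)]

if_true = f0 + (f1 - f0) / T * tt        # true IF of the linear chirp
print("mean absolute IF error (Hz):", np.mean(np.abs(if_est - if_true)))
```

The error here is limited by the fixed window's frequency resolution; the adaptive, time-only kernels proposed in the thesis aim precisely at improving this concentration around the IF.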
Abstract:
The topic of the present work is the relationship between the power of learning algorithms on the one hand, and the expressive power of the logical language which is used to represent the problems to be learned on the other hand. The central question is whether enriching the language results in more learning power. In order to make the question relevant and nontrivial, it is required that both texts (sequences of data) and hypotheses (guesses) be translatable from the “rich” language into the “poor” one. The issue is considered for several logical languages suitable for describing structures whose domain is the set of natural numbers. It is shown that enriching the language does not give any advantage for those languages which define a monadic second-order language that is decidable in the following sense: there is a fixed interpretation in the structure of natural numbers such that the set of sentences of this extended language true in that structure is decidable. But enriching the original language even by only one constant gives an advantage if this language contains a binary function symbol (which will be interpreted as addition). Furthermore, it is shown that behaviourally correct learning has exactly the same power as learning in the limit for those languages which define a monadic second-order language with the property given above, but has more power in the case of languages containing a binary function symbol. Adding the natural requirement that the set of all structures to be learned is recursively enumerable, it is shown that it pays off to enrich the language of arithmetic for both finite learning and learning in the limit, but it does not pay off to enrich the language for behaviourally correct learning.
Abstract:
Type unions, pointer variables and function pointers are a long-standing source of subtle security bugs in C program code. Their use can lead to hard-to-diagnose crashes or exploitable vulnerabilities that allow an attacker to attain privileged access to classified data. This paper describes an automatable framework for detecting such weaknesses in C programs statically, where possible, and for generating assertions that will detect them dynamically, in other cases. Based exclusively on analysis of the source code, it identifies required assertions using a type inference system supported by a custom-made symbol table. In our preliminary findings, our type system was able to infer the correct type of unions in different scopes, without manual code annotations or rewriting. Whenever an evaluation is not possible or is difficult to resolve, appropriate runtime assertions are formed and inserted into the source code. The approach is demonstrated via a prototype C analysis tool.
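The framework itself operates on C source, but the kind of runtime guard its generated assertions provide can be modeled in a few lines of Python (a toy illustration with invented names, not the authors' tool): a tag records which union member was last written, and every read asserts that the tag matches.

```python
class CheckedUnion:
    """Toy model of a C union guarded by generated runtime assertions.

    A real C union reinterprets the same bytes; here a tag records which
    member was last written, and every read asserts the tag matches --
    the dynamic check inserted where static inference cannot resolve a type.
    """

    def __init__(self):
        self._tag = None
        self._value = None

    def write(self, member: str, value):
        self._tag, self._value = member, value

    def read(self, member: str):
        # Corresponds to an assertion inserted before each union read
        assert self._tag == member, (
            f"union read of '{member}' but last write was '{self._tag}'"
        )
        return self._value

u = CheckedUnion()
u.write("as_int", 42)
print(u.read("as_int"))   # OK
u.read("as_float")        # AssertionError: type-confused access caught
```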
Abstract:
In computational linguistics, information retrieval and applied cognition, words and concepts are often represented as vectors in high-dimensional spaces computed from a corpus of text. These high-dimensional spaces are often referred to as Semantic Spaces. We describe a novel and efficient approach to computing these semantic spaces via the use of complex-valued vector representations. We report on the practical implementation of the proposed method and some associated experiments. We also briefly discuss how the proposed system relates to previous theoretical work in Information Retrieval and Quantum Mechanics, and how the notions of probability, logic and geometry are integrated within a single Hilbert space representation. In this sense the proposed system has more general application and gives rise to a variety of opportunities for future research.
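The abstract does not spell out its construction, so the following is offered purely as one plausible toy version of a complex-valued semantic space, not the authors' method: each word gets a random unit-phasor vector, context vectors accumulate by superposition over co-occurrence windows, and similarity is the magnitude of the Hermitian inner product.

```python
import numpy as np

rng = np.random.default_rng(1)
D = 512  # dimensionality of the space (arbitrary)

def random_phasor(d):
    """A random complex vector of unit-magnitude entries (a point in C^D)."""
    return np.exp(1j * rng.uniform(0, 2 * np.pi, d))

# Each vocabulary word gets a fixed random complex "environment" vector
vocab = ["cat", "dog", "pet", "tensor"]
env = {w: random_phasor(D) for w in vocab}

# Context vectors: superpose environment vectors of co-occurring words
ctx = {w: np.zeros(D, dtype=complex) for w in vocab}
corpus = [["cat", "pet"], ["dog", "pet"], ["cat", "dog"]]  # toy corpus
for window in corpus:
    for w in window:
        for c in window:
            if c != w:
                ctx[w] += env[c]

def sim(a, b):
    """Similarity as the normalized Hermitian inner product magnitude."""
    return abs(np.vdot(a, b)) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

print("cat~dog   :", sim(ctx["cat"], ctx["dog"]))     # share context 'pet'
print("cat~tensor:", sim(ctx["cat"], ctx["tensor"]))  # no shared contexts
```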
Abstract:
Objective: To investigate how age-related declines in vision (particularly contrast sensitivity), simulated using cataract goggles and low-contrast stimuli, influence the accuracy and speed of cognitive test performance in older adults. An additional aim was to investigate whether declines in vision affect secondary memory more than primary memory. Method: Using a fully within-subjects design, 50 older drivers aged 66-87 years completed two tests of cognitive performance, letter matching (perceptual speed) and symbol recall (short-term memory), under different viewing conditions that degraded visual input (low-contrast stimuli, cataract goggles, and low-contrast stimuli combined with cataract goggles, compared with normal viewing). Presentation time was also manipulated for letter matching. Visual function, as measured using standard charts, was taken into account in the statistical analyses. Results: Accuracy and speed on the cognitive tasks were significantly impaired when visual input was degraded. Furthermore, cognitive performance was positively associated with contrast sensitivity. Presentation time did not influence cognitive performance, and visual degradation did not differentially influence primary and secondary memory. Conclusion: Age-related declines in visual function can impact the accuracy and speed of cognitive performance, and therefore the cognitive abilities of older adults may be underestimated in neuropsychological testing. It is thus critical that visual function be assessed prior to testing, and that stimuli be adapted to older adults' sensory capabilities (e.g., by maximising stimulus contrast).
Abstract:
Purpose: To investigate the impact of different levels of simulated visual impairment on the cognitive test performance of older adults and to compare this with previous findings in younger adults. Methods: Cognitive performance was assessed in 30 visually normal, community-dwelling older adults (mean age 70.2 ± 3.9 years). Four standard cognitive tests were used: the Digit Symbol Substitution Test, Trail Making Tests A and B, and the Stroop Color Word Test, administered under three visual conditions: normal baseline vision and two levels of cataract-simulating filters (Vistech™), presented in a random order. Distance high-contrast visual acuity and Pelli-Robson letter contrast sensitivity were also assessed for all three visual conditions. Results: Simulated cataract significantly impaired performance across all cognitive test performance measures. In addition, the impact of simulated cataract was significantly greater in this older cohort than in a younger cohort previously investigated. Individual differences in contrast sensitivity better predicted cognitive test performance than did visual acuity. Conclusions: Visual impairment can lead to slowing of cognitive performance in older adults; these effects are greater than those observed in younger participants. This has important implications for neuropsychological testing of older populations, who have a high prevalence of cataract.
Abstract:
Signal Processing (SP) is a subject of central importance in engineering and the applied sciences. Signals are information-bearing functions, and SP deals with the analysis and processing of signals (by dedicated systems) to extract or modify information. Signal processing is necessary because signals normally contain information that is not readily usable or understandable, or which might be disturbed by unwanted sources such as noise. Although many signals are non-electrical, it is common to convert them into electrical signals for processing. Most natural signals (such as acoustic and biomedical signals) are continuous functions of time, and these signals are referred to as analog signals. Prior to the onset of digital computers, Analog Signal Processing (ASP) and analog systems were the only tools for dealing with analog signals. Although ASP and analog systems are still widely used, Digital Signal Processing (DSP) and digital systems are attracting more attention, due in large part to the significant advantages of digital systems over their analog counterparts. These advantages include superiority in performance, speed, reliability, efficiency of storage, size and cost. In addition, DSP can solve problems that cannot be solved using ASP, such as the spectral analysis of multicomponent signals, adaptive filtering, and operations at very low frequencies. Following the developments in engineering which occurred in the 1980s and 1990s, DSP became one of the world's fastest growing industries. Since that time DSP has not only impacted on traditional areas of electrical engineering, but has had far-reaching effects on other domains that deal with information, such as economics, meteorology, seismology, bioengineering, oceanology, communications, astronomy, radar engineering, control engineering and various other applications. This book is based on the lecture notes of Associate Professor Zahir M. Hussain at RMIT University (Melbourne, 2001-2009), the research of Dr. Amin Z. Sadik (at QUT and RMIT, 2005-2008), and the notes of Professor Peter O'Shea at Queensland University of Technology. Part I of the book addresses the representation of analog and digital signals and systems in the time domain and in the frequency domain. The core topics covered are convolution, transforms (Fourier, Laplace, Z, Discrete-Time Fourier, and Discrete Fourier), filters, and random signal analysis. There is also a treatment of some important applications of DSP, including signal detection in noise, radar range estimation, banking and financial applications, and audio effects production. The design and implementation of digital systems (such as integrators, differentiators, resonators and oscillators) are also considered, along with the design of conventional digital filters. Part I is suitable for an elementary course in DSP. Part II, which is suitable for an advanced signal processing course, considers selected signal processing systems and techniques. Core topics covered are the Hilbert transformer, binary signal transmission, phase-locked loops, sigma-delta modulation, noise shaping, quantization, adaptive filters, and non-stationary signal analysis. Part III presents some selected advanced DSP topics. We hope that this book will contribute to the advancement of engineering education and that it will serve as a general reference book on digital signal processing.
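As a taste of one Part II core topic, the Hilbert transformer, here is a short scipy sketch (filter length, band edges and test-tone parameters are arbitrary choices, not from the book): an FIR approximation is designed with the Parks-McClellan remez routine and used to build an approximate analytic signal from a cosine, whose envelope should be nearly constant.

```python
import numpy as np
from scipy.signal import remez, lfilter

# FIR Hilbert transformer: odd length, passband kept away from DC and Nyquist
numtaps = 101
h = remez(numtaps, [0.05, 0.95], [1.0], type='hilbert', fs=2.0)

fs, f0 = 1000.0, 50.0                      # illustrative sampling rate / tone
t = np.arange(0, 0.5, 1 / fs)
x = np.cos(2 * np.pi * f0 * t)

delay = (numtaps - 1) // 2                 # group delay of the linear-phase FIR
xq = lfilter(h, 1.0, x)[delay:]            # approx 90-degree shifted branch
xi = x[:len(xq)]                           # in-phase branch, delay-aligned

# The analytic signal xi + j*xq should have a near-constant envelope
env = np.sqrt(xi ** 2 + xq ** 2)
print("envelope mean/std:", env[delay:-delay].mean(), env[delay:-delay].std())
```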