292 results for graphics processing unit (GPU)
Abstract:
The School of Electrical and Electronic Systems Engineering at Queensland University of Technology, Brisbane, Australia (QUT), offers three bachelor degree courses in electrical and computer engineering. In all its courses there is a strong emphasis on signal processing. A newly established Signal Processing Research Centre (SPRC) has played an important role in the development of the signal processing units in these courses. This paper describes the unique design of the undergraduate program in signal processing at QUT, the laboratories developed to support it, and the criteria that influenced the design.
Abstract:
The School of Electrical and Electronic Systems Engineering of Queensland University of Technology (like many other universities around the world) has recognised the importance of complementing the teaching of signal processing with computer based experiments. A laboratory has been developed to provide a "hands-on" approach to the teaching of signal processing techniques. The motivation for the development of this laboratory was the cliché "What I hear I remember, but what I do I understand." The laboratory has been named the "Signal Computing and Real-time DSP Laboratory" and provides practical training to approximately 150 final year undergraduate students each year. The paper describes the novel features of the laboratory, the techniques used in laboratory based teaching, interesting aspects of the experiments that have been developed, and student evaluation of the teaching techniques.
Abstract:
As the graphics race subsides and gamers grow weary of predictable and deterministic game characters, game developers must put aside their “old faithful” finite state machines and look to more advanced techniques that give users the gaming experience they crave. The next industry breakthrough will come from characters that behave realistically and that can learn and adapt, rather than from more polygons, higher resolution textures and more frames per second. This paper explores the various artificial intelligence techniques that are currently being used by game developers, as well as techniques that are new to the industry. The techniques covered in this paper are finite state machines, scripting, agents, flocking, fuzzy logic and fuzzy state machines, decision trees, neural networks, genetic algorithms and extensible AI. This paper introduces each of these techniques, explains how they can be applied to games, and describes how commercial games are currently making use of them. Finally, the effectiveness of these techniques and their future role in the industry are evaluated.
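The finite state machines named above can be illustrated with a minimal sketch; the guard character, its states and its events below are hypothetical examples for illustration, not taken from the paper:

```python
# Minimal finite state machine sketch for a game character (illustrative only;
# the states, events and transition rules here are hypothetical).
class FiniteStateMachine:
    def __init__(self, initial, transitions):
        # transitions maps (current state, event) -> next state
        self.state = initial
        self.transitions = transitions

    def handle(self, event):
        # Stay in the current state if no transition rule matches the event.
        self.state = self.transitions.get((self.state, event), self.state)
        return self.state

# A guard that patrols, attacks on sight, and flees at low health.
guard = FiniteStateMachine("patrol", {
    ("patrol", "player_seen"): "attack",
    ("attack", "low_health"): "flee",
    ("attack", "player_lost"): "patrol",
    ("flee", "safe"): "patrol",
})
```

The deterministic transition table is precisely what makes such characters predictable, which motivates the learning-based alternatives the paper surveys.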
Abstract:
We have developed a digital image registration program for an MC 68000 based fundus image processing system (FIPS). FIPS is not only capable of executing typical image processing algorithms in the spatial as well as the Fourier domain; the execution time for many operations has also been made much quicker by using a hybrid of "C", Fortran and MC 68000 assembly languages.
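As a rough illustration of what a registration step in a system like FIPS involves (the function below is an assumed sketch for translation-only alignment, not FIPS code), one can search for the integer shift that maximizes cross-correlation between the two images:

```python
# Translation-only image registration sketch (illustrative; not FIPS code):
# exhaustively search for the integer shift maximizing cross-correlation.
def register_translation(ref, moving, max_shift=3):
    """Return the (dy, dx) shift of `moving` that best aligns it to `ref`."""
    h, w = len(ref), len(ref[0])
    best_score, best_shift = float("-inf"), (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            score = 0
            for y in range(h):
                for x in range(w):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        score += ref[y][x] * moving[yy][xx]
            if score > best_score:
                best_score, best_shift = score, (dy, dx)
    return best_shift
```

Production systems replace this brute-force search with Fourier-domain correlation, which is presumably why FIPS supports both spatial and Fourier processing.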
Abstract:
Research has shown that people with a mental illness are an at-risk group for sexually transmitted infections. A programme for preventing risk behaviours for sexually transmitted infections among people with psychiatric disorder was designed and implemented by mental health occupational therapists. This programme used an interactive didactic approach to provide education and awareness of sexual health issues to acute psychiatric inpatients. Twenty-four participants completed a sexual health questionnaire, which was designed for this study, both before and after attending the programme. They had a higher than expected knowledge of sexually transmitted infections and safe sex practices at pre-test. The education programme resulted in a statistically significant but modest increase in sexual health knowledge. These findings indicate that there are benefits in providing sexual health education to clients with a mental illness. Further programme development should be directed towards sexual health decision-making and behaviour change.
Abstract:
This paper describes the feasibility of applying an Imputer in a multiple-choice answer sheet marking system based on image processing techniques.
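The core decision in such a marking system can be sketched as follows; the `read_answers` helper and its gray-level threshold are hypothetical illustrations, not the paper's method:

```python
# Sketch of answer-sheet marking from bubble darkness (illustrative only).
def read_answers(bubble_means, threshold=128):
    """bubble_means[q][c] is the mean gray level of choice c's bubble on
    question q (lower = darker = filled). Pick the darkest bubble on each
    question, or None if even the darkest is not clearly marked."""
    answers = []
    for row in bubble_means:
        darkest = min(range(len(row)), key=row.__getitem__)
        answers.append(darkest if row[darkest] < threshold else None)
    return answers
```

The image processing pipeline upstream of this step would locate each bubble region and compute its mean intensity.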
Abstract:
The Queensland University of Technology (QUT) allows the presentation of a thesis for the Degree of Doctor of Philosophy in the format of published or submitted papers, where such papers have been published, accepted or submitted during the period of candidature. This thesis is composed of seven published/submitted papers, of which one has been published, three have been accepted for publication and the other three are under review. The project is financially supported by an Australian Research Council (ARC) Discovery Grant, with the aim of proposing strategies for the performance control of Distributed Generation (DG) systems with digital estimation of power system signal parameters. Distributed generation has recently been introduced as a new concept for the generation of power and the enhancement of conventionally produced electricity. The global warming issue calls for renewable energy resources in electricity production. Distributed generation based on solar energy (photovoltaic and solar thermal), wind, biomass and mini-hydro, along with the use of fuel cells and micro turbines, will gain substantial momentum in the near future. Technically, DG can be a viable solution to the issue of integrating renewable or non-conventional energy resources. Basically, DG sources can be connected to the local power system through power electronic devices, i.e. inverters or ac-ac converters. The interconnection of DG systems to the power system as a compensator or a power source with high quality performance is the main aim of this study. Source and load unbalance, load non-linearity, interharmonic distortion, supply voltage distortion, distortion at the point of common coupling in weak source cases, source current power factor, and synchronism of generated currents or voltages are the issues of concern. The interconnection of DG sources is carried out using power electronic switching devices, which inject high frequency components in addition to the desired current.
Also, noise and harmonic distortions can impact the performance of the control strategies. To mitigate the negative effects of high frequency, harmonic and noise distortion, and so achieve satisfactory performance of DG systems, new methods of signal parameter estimation have been proposed in this thesis. These methods are based on processing the digital samples of power system signals. Thus, proposing advanced techniques for the digital estimation of signal parameters, and methods for the generation of DG reference currents using the estimates provided, is the targeted scope of this thesis. An introduction to this research, including a description of the research problem, the literature review and an account of the research progress linking the research papers, is presented in Chapter 1. One of the main parameters of a power system signal is its frequency. The Phasor Measurement (PM) technique is one of the well-established and advanced techniques used for the estimation of power system frequency. Chapter 2 presents an in-depth analysis of the PM technique to reveal its strengths and drawbacks. The analysis is followed by a new technique proposed to enhance the speed of the PM technique when the input signal is free of even-order harmonics. The novel techniques proposed in this thesis are compared with the PM technique comprehensively studied in Chapter 2. An algorithm based on the concept of Kalman filtering is proposed in Chapter 3. The algorithm is intended to estimate signal parameters such as amplitude, frequency and phase angle in the online mode. The Kalman filter is modified to operate on the output signal of a Finite Impulse Response (FIR) filter designed as a plain summation. The frequency estimation unit is independent of the Kalman filter and uses the samples refined by the FIR filter. The estimated frequency is given to the Kalman filter to be used in building the transition matrices.
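The phasor-rotation idea behind phasor-based frequency estimation can be sketched as follows; this is a simplified illustration under the assumption of a clean single-tone input, not the thesis's implementation:

```python
import cmath
import math

def dft_phasor(samples, start, N):
    # One-cycle DFT phasor of the fundamental over samples[start:start+N].
    return (2.0 / N) * sum(
        samples[start + n] * cmath.exp(-2j * math.pi * n / N) for n in range(N)
    )

def estimate_frequency(samples, fs, f0):
    # The phasor of an off-nominal sinusoid rotates between successive
    # one-cycle windows at a rate proportional to the signal frequency;
    # averaging the increments over a full cycle reduces spectral leakage.
    N = round(fs / f0)                      # samples per nominal cycle
    phasors = [dft_phasor(samples, k, N) for k in range(N + 1)]
    rot = sum(phasors[k + 1] * phasors[k].conjugate() for k in range(N))
    return cmath.phase(rot) * fs / (2 * math.pi)
```

At exactly the nominal frequency the phasor is stationary from window to window; a 0.5 Hz deviation shows up as a slow, measurable rotation.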
The initial settings for the modified Kalman filter are obtained through a trial and error exercise. Another algorithm, again based on the concept of Kalman filtering, is proposed in Chapter 4 for the estimation of signal parameters. The Kalman filter is likewise modified to operate on the output signal of the same FIR filter explained above. Nevertheless, the frequency estimation unit, unlike the one proposed in Chapter 3, is not segregated; it interacts with the Kalman filter. The estimated frequency is given to the Kalman filter, and other parameters, such as the amplitudes and phase angles estimated by the Kalman filter, are fed back to the frequency estimation unit. Chapter 5 proposes another algorithm based on the concept of Kalman filtering. This time, the state parameters are obtained through matrix arrangements in which the noise level in the sample vector is reduced. The purified state vector is used to obtain a new measurement vector for a basic Kalman filter. The Kalman filter used has a structure similar to a basic Kalman filter, except that the initial settings are computed through extensive mathematical work based on the matrix arrangement utilized. Chapter 6 proposes another algorithm based on the concept of Kalman filtering, similar to that of Chapter 3. However, this time the initial settings required for the better performance of the modified Kalman filter are calculated instead of being guessed through trial and error. The simulation results for the estimated signal parameters are enhanced due to the correct settings applied. Moreover, an enhanced Least Error Square (LES) technique is proposed to take over the estimation when a critical transient is detected in the input signal. In fact, large, sudden changes in the parameters of the signal at these critical transients are not tracked very well by Kalman filtering; the proposed LES technique is found to be much faster in tracking these changes.
Therefore, an appropriate combination of the LES technique and modified Kalman filtering is proposed in Chapter 6. Also, this time the ability of the proposed algorithm is verified on real data obtained from a prototype test object. Chapter 7 proposes another algorithm based on the concept of Kalman filtering, similar to those of Chapters 3 and 6. However, this time an optimal digital filter is designed instead of the simple summation FIR filter. New initial settings for the modified Kalman filter are calculated based on the coefficients of the digital filter applied. Again, the ability of the proposed algorithm is verified on real data obtained from a prototype test object. Chapter 8 uses the estimation algorithm proposed in Chapter 7 in the interconnection scheme of a DG to the power network. Robust estimates of the signal amplitudes and phase angles obtained by the estimation approach are used in the reference generation of the compensation scheme. Several simulation tests provided in this chapter show that the proposed scheme handles source and load unbalance, load non-linearity, interharmonic distortion, supply voltage distortion, and the synchronism of generated currents or voltages very well. The proposed compensation scheme also prevents distortion of the voltage at the point of common coupling in weak source cases, balances the source currents, and brings the supply-side power factor to a desired value.
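The Least Error Square estimation mentioned above can be sketched for the simplified case of a known frequency; `les_phasor` is an illustrative name and the code below is a textbook least-squares sinusoid fit, not the thesis's enhanced LES:

```python
import math

def les_phasor(samples, f, fs):
    # Least-squares fit of x[n] = a*cos(w n) + b*sin(w n) at a known
    # frequency f (sketch only); returns the amplitude and phase of the
    # fitted sinusoid by solving the 2x2 normal equations directly.
    w = 2 * math.pi * f / fs
    scc = scs = sss = sxc = sxs = 0.0
    for n, x in enumerate(samples):
        c, s = math.cos(w * n), math.sin(w * n)
        scc += c * c; scs += c * s; sss += s * s
        sxc += x * c; sxs += x * s
    det = scc * sss - scs * scs
    a = (sxc * sss - sxs * scs) / det
    b = (sxs * scc - sxc * scs) / det
    # a*cos + b*sin  ==  R*cos(w n + phi) with R = hypot(a, b), phi = atan2(-b, a)
    return math.hypot(a, b), math.atan2(-b, a)
```

Because the fit is a direct linear solve over a short window, it reacts to a step change in amplitude or phase within one window, which is why an LES stage can out-track a Kalman filter through a transient.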
Abstract:
Gray's (2000) revised Reinforcement Sensitivity Theory (r-RST) was used to investigate personality effects on information processing biases towards gain-framed and loss-framed anti-speeding messages, and the persuasiveness of these messages. The r-RST postulates that behaviour is regulated by two major motivational systems: a reward system and a punishment system. It was hypothesised that both message processing and persuasiveness would depend upon an individual's sensitivity to reward or punishment. Student drivers (N = 133) were randomly assigned to view one of four anti-speeding messages or no message (control group). Individual processing differences were then measured using a lexical decision task, prior to participants completing a personality and persuasion questionnaire. Results indicated that participants who were more sensitive to reward showed a marginally significant (p = .050) tendency to report higher intentions to comply with the social gain-framed message, and to demonstrate a cognitive processing bias towards this message, compared with those of lower reward sensitivity.
Abstract:
Niklas Luhmann's theory of social systems has been widely influential in the German-speaking countries in the past few decades. However, despite its significance, particularly for organization studies, it is only very recently that Luhmann's work has attracted attention on the international stage as well. This Special Issue is a response to that. In this introductory paper, we provide a systematic overview of Luhmann's theory. Reading his work as a theory about distinction generating and processing systems, we especially highlight the following aspects: (i) Organizations are processes that come into being by permanently constructing and reconstructing themselves by means of using distinctions, which mark what is part of their realm and what is not. (ii) Such an organizational process belongs to a social sphere sui generis possessing its own logic, which cannot be traced back to human actors or subjects. (iii) Organizations are a specific kind of social process characterized by a specific kind of distinction: decision, which makes up what is specifically organizational about organizations as social phenomena. We conclude by introducing the papers in this Special Issue. Copyright © 2006 SAGE.
Abstract:
In Chapter 10, Adam and Dougherty describe the application of medical image processing to the assessment and treatment of spinal deformity, with a focus on the surgical treatment of idiopathic scoliosis. The natural history of spinal deformity and current approaches to surgical and non-surgical treatment are briefly described, followed by an overview of current clinically used imaging modalities. The key metrics currently used to assess the severity and progression of spinal deformities from medical images are presented, followed by a discussion of the errors and uncertainties involved in manual measurements. This provides the context for an analysis of automated and semi-automated image processing approaches to measure spinal curve shape and severity in two and three dimensions.
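One key metric alluded to above, the Cobb angle, reduces to the angle between the two endplate lines on a radiograph; the helper below is an illustrative sketch (the function name and point format are assumptions, not the chapter's method):

```python
import math

def cobb_angle(upper_endplate, lower_endplate):
    # Each endplate is a pair of image points ((x1, y1), (x2, y2)) marked on
    # the most tilted vertebrae; the Cobb angle is the angle between the two
    # endplate lines, in degrees.
    def line_angle(p, q):
        return math.atan2(q[1] - p[1], q[0] - p[0])
    diff = abs(line_angle(*upper_endplate) - line_angle(*lower_endplate))
    diff = diff % math.pi                  # lines, not rays: fold into [0, pi)
    return math.degrees(min(diff, math.pi - diff))
```

Manual placement of those four points is exactly where the measurement errors discussed in the chapter arise, which motivates the automated curve-fitting approaches.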
Abstract:
Virtual environments can provide, through digital games and online social interfaces, extremely exciting forms of interactive entertainment. Because of their capability in displaying and manipulating information in natural and intuitive ways, such environments have found extensive applications in decision support, education and training in the health and science domains, amongst others. Currently, the burden of validating both the interactive functionality and the visual consistency of virtual environment content is entirely carried by developers and play-testers. While considerable research has been conducted in assisting the design of virtual world content and mechanics, to date, only limited contributions have been made regarding the automatic testing of the underpinning graphics software and hardware. The aim of this thesis is to determine whether the correctness of the images generated by a virtual environment can be quantitatively defined, and automatically measured, in order to facilitate the validation of the content. In an attempt to provide an environment-independent definition of visual consistency, a number of classification approaches were developed. First, a novel model-based object description was proposed in order to enable reasoning about the color and geometry changes of virtual entities during a play-session. From such an analysis, two view-based connectionist approaches were developed to map from geometry and color spaces to a single, environment-independent, geometric transformation space; we used such a mapping to predict the correct visualization of the scene. Finally, an appearance-based aliasing detector was developed to show how incorrectness, too, can be quantified for debugging purposes. Since computer games heavily rely on the use of highly complex and interactive virtual worlds, they provide an excellent test bed against which to develop, calibrate and validate our techniques.
Experiments were conducted on a game engine and other virtual worlds prototypes to determine the applicability and effectiveness of our algorithms. The results show that quantifying visual correctness in virtual scenes is a feasible enterprise, and that effective automatic bug detection can be performed through the techniques we have developed. We expect these techniques to find application in large 3D games and virtual world studios that require a scalable solution to testing their virtual world software and digital content.
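A per-pixel error metric of the kind such automatic image validation builds on can be sketched as follows; the RMSE measure, threshold and function names below are illustrative assumptions, not the thesis's detectors:

```python
import math

def frame_rmse(frame_a, frame_b):
    # Root-mean-square per-pixel difference between two grayscale frames,
    # given as equally sized nested lists of gray levels.
    total, count = 0.0, 0
    for row_a, row_b in zip(frame_a, frame_b):
        for pa, pb in zip(row_a, row_b):
            total += (pa - pb) ** 2
            count += 1
    return math.sqrt(total / count)

def is_consistent(reference, rendered, tolerance=2.0):
    # Flag the rendered frame as visually inconsistent with the reference
    # when the RMSE exceeds the tolerance (in gray-level units).
    return frame_rmse(reference, rendered) <= tolerance
```

Pixel-exact comparison breaks down across GPUs and driver versions, which is why the thesis pursues environment-independent, model-based definitions of consistency instead.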
Abstract:
The present study used ERPs to compare processing of fear-relevant (FR) animals (snakes and spiders) and non-fear-relevant (NFR) animals similar in appearance (worms and beetles). EEG was recorded from 18 undergraduate participants (10 females) as they completed two animal-viewing tasks that required simple categorization decisions. Participants were divided on a post hoc basis into low snake/spider fear and high snake/spider fear groups. Overall, FR animals were rated higher on fear and elicited a larger LPC. However, individual differences qualified these effects. Participants in the low fear group showed clear differentiation between FR and NFR animals on subjective ratings of fear and LPC modulation. In contrast, participants in the high fear group did not show such differentiation between FR and NFR animals. These findings suggest that the salience of feared FR animals may generalize, on both a behavioural and an electro-cortical level, to other animals of similar appearance but of a non-harmful nature.
Abstract:
The ability to decode graphics is an increasingly important component of mathematics assessment and curricula. This study examined 50 9- to 10-year-old students (23 male, 27 female) as they solved items from six distinct graphical languages (e.g., maps) that are commonly used to convey mathematical information. The results of the study revealed: 1) factors which contribute to success or hinder performance on tasks with various graphical representations; and 2) how the literacy and graphical demands of tasks influence the mathematical sense making of students. The outcomes of this study highlight the changing nature of assessment in school mathematics and identify the function and influence of graphics in the design of assessment tasks.
Abstract:
The Graphics-Decoding Proficiency (G-DP) instrument was developed as a screening test for the purpose of measuring students’ (aged 8-11 years) capacity to solve graphics-based mathematics tasks. These tasks include number lines, column graphs, maps and pie charts. The instrument was developed within a theoretical framework which highlights the various types of information graphics commonly presented to students in large-scale national and international assessments. The instrument provides researchers, classroom teachers and test designers with an assessment tool which measures students’ graphics decoding proficiency across and within five broad categories of information graphics. The instrument has implications for a number of stakeholders in an era where graphics have become an increasingly important way of representing information.