187 results for applied field
in Queensland University of Technology - ePrints Archive
Abstract:
The electron field emission (EFE) characteristics of vertically aligned carbon nanotubes (VACNTs), with and without nitrogen plasma treatment, are investigated. The plasma-treated VACNTs showed a significant improvement in EFE properties compared to the untreated VACNTs. The morphological, structural, and compositional properties of the VACNTs are examined in detail by scanning electron microscopy, transmission electron microscopy, Raman spectroscopy, and energy dispersive X-ray spectroscopy. It is shown that the significant EFE improvement of the VACNTs after nitrogen plasma treatment is closely related to changes in their morphological and structural properties. The high current density (299.6 μA/cm²) achieved at a low applied field (3.50 V/μm) suggests that nitrogen-plasma-treated VACNTs can serve as effective electron field emission sources for numerous applications.
Abstract:
Utilities worldwide are focused on supplying peak electricity demand reliably and cost-effectively, which requires a thorough understanding of all the factors influencing residential electricity use at peak times. An electricity demand reduction project based on comprehensive residential consumer engagement was established within an Australian community in 2008, and by 2011 peak demand had decreased to below pre-intervention levels. This paper applied field data, gathered through qualitative in-depth interviews with 22 residential households in the community, to a Bayesian Network complex-system model to examine whether the model could explain the successful peak demand reduction at the case study location. The knowledge acquired through insights into the major influential factors, and the potential impact of changes to these factors on peak demand, would underpin demand-reduction intervention strategies for a wider target group.
Abstract:
Participatory evaluation and participatory action research (PAR) are increasingly used in community-based programs and initiatives, and there is a growing acknowledgement of their value. These methodologies focus more on knowledge generated and constructed through lived experience than through social science (Vanderplaat 1995). The scientific ideal of objectivity is usually rejected in favour of a holistic approach that acknowledges and takes into account the diverse perspectives, values and interpretations of participants and evaluation professionals. However, evaluation rigour need not be lost in this approach. Increasing the rigour and trustworthiness of participatory evaluations and PAR increases the likelihood that results are seen as credible and are used to continually improve programs and policies.
Drawing on learnings and critical reflections about the use of feminist and participatory forms of evaluation and PAR over a 10-year period, significant sources of rigour identified include:
• participation and communication methods that develop relations of mutual trust and open communication
• using multiple theories and methodologies, multiple sources of data, and multiple methods of data collection
• ongoing meta-evaluation and critical reflection
• critically assessing the intended and unintended impacts of evaluations, using relevant theoretical models
• using rigorous data analysis and reporting processes
• participant reviews of evaluation case studies, impact assessments and reports.
Abstract:
Matrix function approximation is a current focus of worldwide interest and finds application in a variety of areas of applied mathematics and statistics. In this thesis we focus on the approximation of A^(-α/2)b, where A ∈ ℝ^(n×n) is a large, sparse symmetric positive definite matrix and b ∈ ℝ^n is a vector. In particular, we will focus on matrix function techniques for sampling from Gaussian Markov random fields in applied statistics and for the solution of fractional-in-space partial differential equations. Gaussian Markov random fields (GMRFs) are multivariate normal random variables characterised by a sparse precision (inverse covariance) matrix. GMRFs are popular models in computational spatial statistics as the sparse structure can be exploited, typically through the use of the sparse Cholesky decomposition, to construct fast sampling methods. It is well known, however, that for sufficiently large problems, iterative methods for solving linear systems outperform direct methods. Fractional-in-space partial differential equations arise in models of processes undergoing anomalous diffusion. Unfortunately, as the fractional Laplacian is a non-local operator, numerical methods based on the direct discretisation of these equations typically require the solution of dense linear systems, which is impractical for fine discretisations. In this thesis, novel applications of Krylov subspace approximations to matrix functions for both of these problems are investigated. Matrix functions arise when sampling from a GMRF by noting that the Cholesky decomposition A = LL^T is, essentially, a 'square root' of the precision matrix A. Therefore, we can replace the usual sampling method, which forms x = L^(-T)z, with x = A^(-1/2)z, where z is a vector of independent and identically distributed standard normal random variables.
Similarly, the matrix transfer technique can be used to build solutions to the fractional Poisson equation of the form ϕn = A^(-α/2)b, where A is the finite difference approximation to the Laplacian. Hence both applications require the approximation of f(A)b, where f(t) = t^(-α/2) and A is sparse. In this thesis we compare the Lanczos approximation, the shift-and-invert Lanczos approximation, the extended Krylov subspace method, rational approximations and the restarted Lanczos approximation for approximating matrix functions of this form. A number of novel results are presented in this thesis. Firstly, we prove the convergence of the matrix transfer technique for the solution of the fractional Poisson equation and give conditions under which the finite difference discretisation can be replaced by other methods for discretising the Laplacian. We then investigate a number of methods for approximating matrix functions of the form A^(-α/2)b and investigate stopping criteria for these methods. In particular, we derive a new method for restarting the Lanczos approximation to f(A)b. We then apply these techniques to the problem of sampling from a GMRF and construct a full suite of methods for sampling conditioned on linear constraints and approximating the likelihood. Finally, we consider the problem of sampling from a generalised Matérn random field, which combines our techniques for solving fractional-in-space partial differential equations with our method for sampling from GMRFs.
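The Lanczos approach described in this abstract can be sketched in a few lines: build an orthonormal Krylov basis V and a small tridiagonal matrix T, then approximate f(A)b ≈ ||b|| V f(T) e₁, with f(T) evaluated cheaply by eigendecomposition. The sketch below (a minimal illustration, not the thesis's implementation; the 1D shifted-Laplacian precision matrix is an assumed toy example) uses this to draw an approximate GMRF sample x = A^(-1/2)z:

```python
import numpy as np

def lanczos_fAb(A, b, m, f):
    """Approximate f(A) @ b with an m-step Lanczos iteration.

    Builds an orthonormal Krylov basis V and tridiagonal T so that
    f(A) b ~= ||b|| * V @ f(T) @ e1, where f(T) is evaluated via the
    eigendecomposition of the small matrix T.
    """
    n = b.shape[0]
    V = np.zeros((n, m))
    alpha = np.zeros(m)
    beta = np.zeros(m - 1)
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ V[:, j]
        alpha[j] = V[:, j] @ w
        w = w - alpha[j] * V[:, j]
        if j > 0:
            w = w - beta[j - 1] * V[:, j - 1]
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            V[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    evals, evecs = np.linalg.eigh(T)
    fT_e1 = evecs @ (f(evals) * evecs[0, :])   # f(T) @ e1
    return np.linalg.norm(b) * (V @ fT_e1)

# Toy SPD precision matrix: shifted 1D Laplacian (tridiagonal, sparse in practice)
rng = np.random.default_rng(0)
n = 200
A = np.diag(2.1 * np.ones(n)) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)

# Approximate GMRF sample: x = A^(-1/2) z without any Cholesky factorisation
z = rng.standard_normal(n)
x = lanczos_fAb(A, z, m=60, f=lambda t: t ** -0.5)
```

Only matrix-vector products with A are needed, which is what makes the approach attractive when A is large and sparse but its Cholesky factor would fill in.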
Looking awkward when winning and foolish when losing: Inequity aversion and performance in the field
Abstract:
Current-voltage (I-V) curves of poly(3-hexylthiophene) (P3HT) diodes have been collected to investigate hole-dominated charge transport in the polymer. At room temperature and at low electric fields the I-V characteristic is purely Ohmic, whereas at medium-high electric fields the experimental data show that hole transport is Trap-Dominated Space-Charge-Limited Current (TD-SCLC). In this regime it is possible to extract the I-V characteristic of the P3HT/Al junction, which shows ideal Schottky diode behaviour over five orders of magnitude. At high applied electric fields, hole transport is found to be in the trap-free SCLC regime. In this regime we have measured and modelled the hole mobility to evaluate its dependence on the applied electric field and the device temperature.
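The trap-free SCLC regime mentioned above is commonly described by the Mott-Gurney law, J = (9/8)·ε₀εᵣμV²/L³, which lets mobility be read off from a J-V point. A minimal sketch (all parameter values are illustrative placeholders, not data from this paper):

```python
# Illustrative Mott-Gurney extraction of hole mobility in the trap-free
# SCLC regime: J = (9/8) * eps0 * eps_r * mu * V^2 / L^3.
# Parameter values below are placeholders, not measurements from the paper.
EPS0 = 8.854e-12      # vacuum permittivity, F/m
eps_r = 3.0           # assumed relative permittivity of P3HT
L = 100e-9            # assumed film thickness, m

def scl_current_density(mu, V):
    """Space-charge-limited current density (A/m^2) for mobility mu (m^2/Vs)."""
    return 9.0 / 8.0 * EPS0 * eps_r * mu * V ** 2 / L ** 3

def mobility_from_jv(J, V):
    """Invert the Mott-Gurney law for mobility given one (J, V) point."""
    return 8.0 * J * L ** 3 / (9.0 * EPS0 * eps_r * V ** 2)

# Round trip: forward model at mu = 1e-8 m^2/Vs and 5 V, then invert
J = scl_current_density(1e-8, 5.0)
mu_fit = mobility_from_jv(J, 5.0)
```

In practice one fits the V² slope of the log-log J-V curve rather than a single point, and field- and temperature-dependent mobility models refine this simple picture.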
Abstract:
The material presented in this thesis may be viewed as comprising two key parts: the first part concerns batch cryptography specifically, whilst the second deals with how this form of cryptography may be applied to security-related applications, such as electronic cash, for improving the efficiency of the protocols. The objective of batch cryptography is to devise more efficient primitive cryptographic protocols. In general, these primitives make use of some property such as homomorphism to perform a computationally expensive operation on a collective input set. The idea is to amortise an expensive operation, such as modular exponentiation, over the input. Most of the research work in this field has concentrated on its employment as a batch verifier of digital signatures. It is shown that several new attacks may be launched against these published schemes as some weaknesses are exposed. Another common use of batch cryptography is the simultaneous generation of digital signatures. There is significantly less previous work in this area, and the present schemes have limited use in practical applications. Several new batch signature schemes are introduced that improve upon the existing techniques, and some practical uses are illustrated. Electronic cash is a technology that demands complex protocols in order to furnish several security properties. These typically include anonymity, traceability of a double spender, and off-line payment features. Presently, the most efficient schemes make use of coin divisibility to withdraw one large financial amount that may be progressively spent with one or more merchants. Several new cash schemes are introduced here that make use of batch cryptography for improving the withdrawal, payment, and deposit of electronic coins. The devised schemes apply both the batch signature and verification techniques introduced, demonstrating improved performance over contemporary divisibility-based structures.
The solutions also provide an alternative paradigm for the construction of electronic cash systems. Whilst electronic cash is used as the vehicle for demonstrating the relevance of batch cryptography to security related applications, the applicability of the techniques introduced extends well beyond this.
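The amortisation idea at the heart of batch verification can be shown with a toy RSA example: because (s₁s₂)^e = s₁^e·s₂^e (mod n), one modular exponentiation over a product of signatures can replace many individual ones. This is a generic textbook-style sketch, not a scheme from the thesis; the parameters are tiny and insecure, and this naive test is known to be attackable (e.g. by permuting signatures), which is exactly the kind of weakness batch-verification research addresses:

```python
# Toy batch verification of RSA signatures, exploiting the homomorphism
# (s1 * s2)^e = s1^e * s2^e (mod n): one modular exponentiation over the
# product of k signatures replaces k separate exponentiations.
# Parameters are tiny, illustrative values only -- not secure.
n = 3233          # 61 * 53, a toy RSA modulus
e = 17            # public exponent
d = 2753          # private exponent (17 * 2753 = 1 mod 3120)

msgs = [42, 123, 999]
sigs = [pow(m, d, n) for m in msgs]          # "sign" each message directly

def verify_individually(msgs, sigs):
    """k modular exponentiations: s_i^e == m_i (mod n) for each pair."""
    return all(pow(s, e, n) == m % n for m, s in zip(msgs, sigs))

def verify_batch(msgs, sigs):
    """One modular exponentiation over the product of all signatures."""
    prod_s, prod_m = 1, 1
    for m, s in zip(msgs, sigs):
        prod_s = prod_s * s % n
        prod_m = prod_m * m % n
    return pow(prod_s, e, n) == prod_m

ok = verify_individually(msgs, sigs) and verify_batch(msgs, sigs)
```

Real batch verifiers add randomised small exponents to each term precisely to defeat the swap-and-cancel attacks that this naive product test permits.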
Abstract:
On the microscale, migration, proliferation and death are crucial in the development, homeostasis and repair of an organism; on the macroscale, such effects are important in the sustainability of a population in its environment. Dependent on the relative rates of migration, proliferation and death, spatial heterogeneity may arise within an initially uniform field; this leads to the formation of spatial correlations and can have a negative impact upon population growth. Usually, such effects are neglected in modeling studies and simple phenomenological descriptions, such as the logistic model, are used to model population growth. In this work we outline some methods for analyzing exclusion processes which include agent proliferation, death and motility in two and three spatial dimensions with spatially homogeneous initial conditions. The mean-field description for these types of processes is of logistic form; we show that, under certain parameter conditions, such systems may display large deviations from the mean field, and suggest computationally tractable methods to correct the logistic-type description.
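The contrast between an exclusion-process simulation and its logistic mean-field description can be illustrated with a minimal one-dimensional sketch (the work itself treats two and three dimensions; all parameter values here are illustrative). Agents move to a randomly chosen neighbour only if it is empty and proliferate by placing a daughter on an empty neighbour; since motility alone conserves density, the mean-field occupancy obeys the logistic update C ← C + Pp·C·(1 − C):

```python
import numpy as np

# Minimal 1D exclusion process with motility and proliferation on a
# periodic lattice, compared against its logistic mean-field description.
# Parameters are illustrative, not taken from the paper.
rng = np.random.default_rng(1)
N, Pm, Pp, steps = 1000, 1.0, 0.05, 150

lattice = rng.random(N) < 0.05          # ~5% initial occupancy, spatially uniform
density = [lattice.mean()]
for _ in range(steps):
    for i in rng.permutation(np.flatnonzero(lattice)):
        j = (i + (2 * rng.integers(0, 2) - 1)) % N   # random neighbour
        if rng.random() < Pp and not lattice[j]:
            lattice[j] = True                        # proliferation (exclusion)
        elif rng.random() < Pm and not lattice[j]:
            lattice[i], lattice[j] = False, True     # motility (exclusion)
    density.append(lattice.mean())

# Logistic mean-field trajectory: C_{t+1} = C_t + Pp * C_t * (1 - C_t)
C = [density[0]]
for _ in range(steps):
    C.append(C[-1] + Pp * C[-1] * (1 - C[-1]))
```

With high motility the simulated density tracks the logistic curve closely; lowering Pm relative to Pp lets spatial correlations build up and the simulation fall below the mean-field prediction, which is the regime the abstract's corrections target.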
Abstract:
In the exclusion-process literature, mean-field models are often derived by assuming that the occupancy status of lattice sites is independent. Although this assumption is questionable, it is the foundation of many mean-field models. In this work we develop methods to relax the independence assumption for a range of discrete exclusion process-based mechanisms motivated by applications from cell biology. Previous investigations that focussed on relaxing the independence assumption have been limited to studying initially uniform populations and ignored any spatial variations. By ignoring spatial variations, these previous studies were greatly simplified by the translational invariance of the lattice. These previous corrected mean-field models could not be applied to many important problems in cell biology, such as invasion waves of cells that are characterised by moving fronts. Here we propose generalised methods that relax the independence assumption for spatially inhomogeneous problems, leading to corrected mean-field descriptions of a range of exclusion process-based models that incorporate (i) unbiased motility, (ii) biased motility, and (iii) unbiased motility with agent birth and death processes. The corrected mean-field models derived here are applicable to spatially variable processes, including invasion wave-type problems. We show that there can be large deviations between simulation data and traditional mean-field models based on invoking the independence assumption. Furthermore, we show that the corrected mean-field models give an improved match to the simulation data in all cases considered.
Abstract:
For many people, a relatively large proportion of daily exposure to a multitude of pollutants may occur inside an automobile. A key determinant of exposure is the amount of outdoor air entering the cabin (i.e. the air change or flow rate). We have quantified this parameter in six passenger vehicles, ranging in age from less than 1 year to 18 years, at three vehicle speeds and under four different ventilation settings. Average infiltration into the cabin with all operable air entry pathways closed was between 1 and 33.1 air changes per hour (ACH) at a vehicle speed of 60 km/h, and between 2.6 and 47.3 ACH at 110 km/h, with these results representing the most (2005 Volkswagen Golf) and least air-tight (1989 Mazda 121) vehicles, respectively. Average infiltration into stationary vehicles parked outdoors varied between ~0 and 1.4 ACH and was moderately related to wind speed. Measurements were also performed under an air recirculation setting with low fan speed, while airflow rate measurements were conducted under two non-recirculating ventilation settings with low and high fan speeds. The windows were closed in all cases, and over 200 measurements were performed. The results can be applied to estimate pollutant exposure inside vehicles.
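How an air-change rate translates into in-cabin exposure can be sketched with the standard well-mixed box model: with no internal sources or losses, the cabin concentration relaxes toward the outdoor level as C(t) = C_out + (C₀ − C_out)·exp(−ACH·t). A minimal illustration, using ACH values in the range the study reports (the outdoor concentration and time are assumed, illustrative inputs):

```python
import math

# Well-mixed box model for in-cabin pollutant concentration:
#   C(t) = C_out + (C0 - C_out) * exp(-ACH * t),  t in hours.
# ACH values correspond to the least and most air-tight vehicles reported
# at 60 km/h; concentration units and values are illustrative.
def cabin_concentration(t_hours, ach, c_out, c0):
    return c_out + (c0 - c_out) * math.exp(-ach * t_hours)

# A clean cabin (c0 = 0) driving through polluted outdoor air for 6 minutes:
leaky = cabin_concentration(0.1, 33.1, c_out=100.0, c0=0.0)  # 1989 Mazda 121
tight = cabin_concentration(0.1, 1.0, c_out=100.0, c0=0.0)   # 2005 VW Golf
```

After six minutes the leaky cabin has nearly equilibrated with outdoor air while the tight cabin is still below 15% of the outdoor level, which is why infiltration rates dominate short-trip exposure estimates.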
Abstract:
This chapter provides a historical materialist review of the development of applied and critical linguistics and their extensions and applications to the fields of English language studies. Following Bourdieu, we view intellectual fields and their affiliated discourses as constructed in relation to specific economic and political formations and sociocultural contexts. We therefore take 'applied linguistics', 'critical language studies' and 'English language studies' as fields in dynamic and contested formation and relationship. Our review focuses on three historical moments. In the postwar period, we describe the technologisation of linguistics, with the enlistment of linguistics in the applied fields of language planning, literacy education and second/foreign language teaching. We then turn to document the multinationalisation of English, which, we argue, entails a rationalisation of English as a universal form of economic capital in globalised economic and cultural flows. We conclude by exploring scenarios for the displacement of English language studies as a major field by other emergent economic lingua francas (e.g., Mandarin, Spanish), and shifts in the economic and cultural nexus of control over English from an Anglo-American centre to East and West Asia.
Abstract:
Over the last few decades, electric and electromagnetic fields have achieved an important role as stimulation and therapeutic tools in biology and medicine. In particular, low-magnitude, low-frequency pulsed electromagnetic fields have shown significant positive effects on bone fracture healing and the treatment of some bone diseases. Nevertheless, to date, little attention has been paid to the possible effect of high-frequency, high-magnitude pulsed electromagnetic fields (pulse power) on the functional behaviour and biomechanical properties of bone tissue. Bone is a dynamic, complex organ made of bone material (consisting of organic components, inorganic mineral and water), known as the extracellular matrix, and bone cells (the living part). The cells give bone the capability of self-repair by adapting it to its mechanical environment. The specific bone material composite, comprising a collagen matrix reinforced with mineral apatite, gives bone its particular biomechanical properties in an anisotropic, inhomogeneous structure. This project investigated the possible effect of pulse power signals on cortical bone characteristics by evaluating the fundamental mechanical properties of bone material. A positive buck-boost converter was applied to generate adjustable high-voltage, high-frequency pulses of up to 500 V and 10 kHz. Bone shows distinctive characteristics in different loading modes. Thus, the functional behaviour of bone in response to pulse power excitation was elucidated using three different conventional mechanical tests: three-point bending in the elastic region, and tensile and compressive loading until failure. Flexural stiffness, tensile and compressive strength, hysteresis and total fracture energy were determined as measures of the main bone characteristics.
To assess bone structure variation due to pulse power excitation in greater depth, a supplementary fractographic study was also conducted using scanning electron micrographs of the tensile fracture surfaces. Furthermore, a non-destructive ultrasonic technique was applied to determine and compare bone elasticity before and after pulse power stimulation. This method provided the ability to evaluate the stiffness of millimetre-sized bone samples in three orthogonal directions. According to the results of the non-destructive bending test, the flexural elasticity of the cortical bone samples appeared to remain unchanged after pulse power excitation. Similar results were observed in the bone stiffness for all three orthogonal directions obtained from the ultrasonic technique, and in the bone stiffness from the compression test. From the tensile tests, no significant changes were found in the tensile strength or total strain energy absorption of the bone samples exposed to pulse power compared with the control samples. Likewise, the apparent microstructure of the fracture surfaces of the pulse-power-exposed samples (including porosity and microcrack distribution) showed no significant variation due to pulse power stimulation. Nevertheless, the compressive strength and toughness of the millimetre-sized samples appeared to increase when the samples were exposed for 66 hours to a high-power pulsed electromagnetic field through screws with a small contact cross-section (increasing the pulsed electric field intensity), compared to the control samples. This may reflect the different load-bearing characteristics of cortical bone tissue in response to pulse power excitation, and the effectiveness of this type of stimulation on smaller samples. These overall results suggest that although pulse power stimulation can influence the arrangement or quality of the collagen network, causing the augmentation of bone strength and toughness, it apparently did not affect the mineral phase of the cortical bone material.
The results also confirmed that the indirect application of the high-power pulsed electromagnetic field at 500 V and 10 kHz through a capacitive coupling method was athermal and did not damage the bone tissue structure.
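The flexural stiffness measurement behind the three-point bending test follows from standard beam theory: for a rectangular beam of span L, width b and depth d, the mid-span deflection is δ = FL³/(48EI) with I = bd³/12, so the flexural modulus is E = (F/δ)·L³/(4bd³). A minimal sketch (sample dimensions and the load-deflection slope below are assumed illustrative values, not the study's data):

```python
# Flexural modulus from a three-point bending test on a rectangular beam:
#   delta = F * L^3 / (48 * E * I),  I = b * d^3 / 12
#   =>  E = (F / delta) * L^3 / (4 * b * d^3)
# All inputs are illustrative placeholders, not measurements from the thesis.
def flexural_modulus(slope_N_per_m, span, width, depth):
    """Flexural modulus (Pa) from the elastic-region load-deflection slope."""
    return slope_N_per_m * span ** 3 / (4.0 * width * depth ** 3)

E = flexural_modulus(
    slope_N_per_m=2.0e5,   # assumed slope of the elastic load-deflection curve, N/m
    span=20e-3,            # millimetre-scale cortical bone sample, m
    width=4e-3,
    depth=2e-3,
)
```

With these illustrative inputs E comes out in the tens of gigapascals, the right order of magnitude for cortical bone, which is a useful sanity check on any such measurement.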
Abstract:
Many computationally intensive scientific applications involve repetitive floating point operations other than addition and multiplication, which may present a significant performance bottleneck due to the relatively large latency or low throughput involved in executing such arithmetic primitives on commodity processors. A promising alternative is to execute such primitives on Field Programmable Gate Array (FPGA) hardware acting as an application-specific custom co-processor in a high performance reconfigurable computing platform. The use of FPGAs can provide advantages such as fine-grain parallelism, but issues relating to code development in a hardware description language and efficient data transfer to and from the FPGA chip can present significant application development challenges. In this paper, we discuss our practical experiences in developing a selection of floating point hardware designs to be implemented using FPGAs. Our designs include some basic mathematical library functions which can be implemented for user-defined precisions suitable for novel applications requiring non-standard floating point representation. We discuss the details of our designs along with results from performance and accuracy analysis tests.
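The "user-defined precision" idea can be illustrated in software by rounding values to a floating-point format with a chosen number of mantissa bits, mimicking the narrower significands a custom FPGA datapath would carry. This is a generic sketch of the concept, not the paper's hardware designs; the exponent range is left unlimited for simplicity:

```python
import math

# Round a value to a floating-point format with `mantissa_bits` bits of
# significand (round-to-nearest on the scaled significand). This mimics the
# accuracy of a custom-precision FPGA datapath; exponent range is not
# limited here for simplicity.
def quantize(x, mantissa_bits):
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)                 # x = m * 2**e with 0.5 <= |m| < 1
    scale = 2.0 ** mantissa_bits
    return math.ldexp(round(m * scale) / scale, e)

# Fewer mantissa bits -> larger rounding error but smaller hardware:
x = math.pi
errs = [abs(quantize(x, b) - x) for b in (8, 16, 24)]
```

Sweeping the mantissa width like this is how one would pick the smallest representation that still meets an application's accuracy budget before committing it to FPGA logic.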
Abstract:
Reviews have criticised universities for not embedding sufficient praxis in preparing preservice teachers for the profession. The Teacher Education Done Differently (TEDD) project explored praxis development for preservice teachers within existing university coursework. This mixed-method investigation involved an analysis of multiple case studies of preservice teacher involvement in university programs, namely: Ed Start for practicums I (n=26), III (n=23), and IV (n=12); Move It Use It (a Health and Physical Education program; n=38); Studies of Society and its Environment (SOSE; n=24); and Science in Schools (n=38). The project also included preservice teachers teaching primary students at the campus site in gifted education (the B-GR8 program, n=22). Preservice teacher agreement on their praxis development leading up to practicums I, III, and IV ranged between 91-100%, with a high mean score range (4.26-5.00). Other university units had similar findings, except for SOSE (percentage range: 10-86%; M range: 2.33-4.00; SD range: 0.55-1.32). Qualitative data provided an understanding of this praxis development, leading to the conclusion that additional applied learning experiences, as lead-up days for field experiences and as avenues for exploring the teaching of specific subject areas, presented opportunities for enhancing praxis.