22 results for Scintillation counters.
at Queensland University of Technology - ePrints Archive
Abstract:
An alternative approach to digital PWM generation using an adder rather than a counter is presented. This offers several advantages. The resolution and gain of the pulse width modulator remain constant regardless of the module clock frequency and PWM output frequency. The PWM resolution also becomes fixed at the register width. Even at high PWM frequencies, the resolution remains high when averaged over a number of PWM cycles. An inherent dithering of the PWM waveform introduced over successive cycles blurs the switching spectra without distorting the modulating waveform. The technique also lends itself to easily generating several phase shifted PWM waveforms suitable for multilevel converter modulation.
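The adder-based scheme can be sketched as a phase accumulator: each module clock tick adds a fixed increment to a fixed-width register, and the output compares the accumulator against the duty value. The register width, increment, and duty values below are illustrative assumptions, not figures from the paper — a minimal sketch of the idea, not the paper's exact design:

```python
# Hypothetical phase-accumulator PWM sketch (values are illustrative).
REG_WIDTH = 8                # resolution is fixed at the register width
MOD = 1 << REG_WIDTH

def adder_pwm(increment, duty, n_ticks):
    """Each tick: accumulator += increment (mod 2**REG_WIDTH);
    output is high while the accumulator is below the duty value.
    When the increment does not divide the modulus, successive PWM
    cycles are naturally dithered."""
    acc = 0
    out = []
    for _ in range(n_ticks):
        acc = (acc + increment) % MOD
        out.append(1 if acc < duty else 0)
    return out

# Averaged over many cycles, the duty ratio converges to duty/MOD.
samples = adder_pwm(increment=37, duty=128, n_ticks=10000)
print(sum(samples) / len(samples))  # ≈ 0.5
```

Because gcd(37, 256) = 1, the accumulator visits all 256 states before repeating, so the long-run average recovers the full register-width resolution even though any single cycle is coarser.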
Abstract:
Railway signaling facilitates two main functions, train detection and train control, in order to maintain safe separation among trains. Track circuits are the most commonly used train detection means, based on simple open/closed-circuit principles, and the subsequent adoption of axle counters further allows the detection of trains under adverse track conditions. However, with electrification and power-electronics traction drive systems, aggravated by electromagnetic interference in the vicinity of the signaling system, railway engineers often find unstable or even faulty operation of track circuits and axle counting systems, which inevitably jeopardizes the safe operation of trains. A new means of train detection, completely free from electromagnetic interference, is therefore required for the modern railway signaling system. This paper presents a novel optical fiber sensor signaling system. The sensor operation, field setup, axle detection solution set, and test results of an installation in a trial system on a busy suburban railway line are given.
Abstract:
The knowledge and skills of fashion and textiles design have traditionally been transferred through the indenture of an apprentice to a master. This relationship relied heavily on the transfer of explicit methods of design and making but also on the transfer of tacit knowledge, explained by Michael Polanyi as knowledge that cannot be explicitly known. By watching the master and emulating his efforts in the presence of his example, the apprentice unconsciously picks up the rules of the art, including those which are not explicitly known to the master himself (Polanyi, 1962 p.53). However, it has been almost half a century since Michael Polanyi defined the tacit dimension as a state in which "we can know more than we can tell" (Polanyi, 1967 p.4), at a time when the accepted means of 'telling' was through academic writing and publishing in hardcopy format. The idea that tacit knowledge transfer involves a one-to-one relationship between apprentice and master would appear to have dire consequences for a discipline, such as fashion design, where there is no such tradition of academic writing. This paper counters this point of view by providing examples of strategies currently being employed in online environments (principally through 'craft') and explains how these methods might prove useful in supporting tacit knowledge transfer with respect to academic research within the field of fashion design, and in the wider academic community involved in creative practice research. A summary of the implications of these new ideas for contemporary fashion research concludes the paper.
Abstract:
This paper describes a novel probabilistic approach to incorporating odometric information into appearance-based SLAM systems, without performing metric map construction or calculating relative feature geometry. The proposed system, dubbed Continuous Appearance-based Trajectory SLAM (CAT-SLAM), represents location as a probability distribution along a trajectory, and represents appearance continuously over the trajectory rather than at discrete locations. The distribution is evaluated using a Rao-Blackwellised particle filter, which weights particles based on local appearance and odometric similarity and explicitly models both the likelihood of revisiting previous locations and visiting new locations. A modified resampling scheme counters particle deprivation and allows loop closure updates to be performed in constant time regardless of map size. We compare the performance of CAT-SLAM to FAB-MAP (an appearance-only SLAM algorithm) in an outdoor environment, demonstrating a threefold increase in the number of correct loop closures detected by CAT-SLAM.
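The abstract does not specify the modified resampling scheme, but low-variance (systematic) resampling is a standard technique for curbing particle deprivation and runs in time linear in the particle count; a minimal sketch with illustrative weights, offered as a generic point of reference rather than the paper's method:

```python
import random

def systematic_resample(weights):
    """Systematic (low-variance) resampling: draw one random offset,
    then sweep evenly spaced pointers across the cumulative weights.
    High-weight particles are duplicated; low-weight ones may vanish,
    but never the whole diverse tail at once."""
    n = len(weights)
    step = sum(weights) / n
    u = random.uniform(0.0, step)     # single random draw
    indices, cum, i = [], weights[0], 0
    for _ in range(n):
        while u > cum:                # advance to the particle covering u
            i += 1
            cum += weights[i]
        indices.append(i)
        u += step
    return indices

print(systematic_resample([0.1, 0.1, 0.7, 0.1]))
```

With these weights the dominant particle (index 2) receives two or three of the four slots regardless of the random offset, illustrating why the variance of this scheme is lower than independent multinomial draws.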
Abstract:
Arguing that Baz Luhrmann's "Australia" (2008) is a big-budget, non-independent film espousing a left-leaning political ideology in its non-racist representations of Aborigines on film, this paper suggests the addition of a 'fourth formation' to the 1984 Moore and Muecke model is warranted. According to their theorising, racist "first formation" films promote policies of assimilation whereas "second formation" films avoid overt political statements in favour of more acceptable multicultural liberalism. Moore and Muecke's seemingly ultimate "third formation films", however, blatantly foreground the director's leftist political dogma in a necessarily low budget, independent production. "Australia", on the other hand, is an advance on the third formation because its feminised Aboriginal voice is safely backed by a colossal production budget and indicates a transformation in public perceptions of Aboriginal issues. Furthermore, this paper argues that the use of low-cost post-production techniques such as voice-over narration by racially appropriate individuals and the use of diegetic song in Australia work to ensure the positive reception of the left-leaning message regarding the Stolen Generations. With these devices Luhrmann effectively counters the claims of right-wing denialists such as Andrew Bolt and Keith Windschuttle.
Abstract:
Computer resource allocation represents a significant challenge, particularly for multiprocessor systems, in which shared computing resources must be allocated among co-runner processes and threads. While efficient resource allocation yields a highly efficient and stable overall multiprocessor system and good individual thread performance, poor resource allocation causes significant performance bottlenecks even in systems with ample computing resources. This thesis proposes a cache-aware adaptive closed-loop scheduling framework as an efficient resource allocation strategy for this highly dynamic resource management problem, which requires instant estimation of highly uncertain and unpredictable resource patterns. Many approaches to this problem have been developed, but neither the dynamic nature nor the time-varying and uncertain characteristics of resource allocation are well considered. These approaches employ either static or dynamic optimization methods, or advanced scheduling algorithms such as the Proportional Fair (PFair) scheduling algorithm. Some of the approaches that do consider the dynamic nature of multiprocessor systems apply only a basic closed-loop system and hence fail to take the time-varying and uncertain nature of the system into account. Therefore, further research into multiprocessor resource allocation is required. Our closed-loop cache-aware adaptive scheduling framework takes resource availability and resource usage patterns into account by measuring time-varying factors such as cache miss counts, stalls, and instruction counts. More specifically, the cache usage pattern of a thread is identified using the QR recursive least squares (RLS) algorithm and cache miss count time-series statistics. For the identified cache resource dynamics, the framework enforces instruction fairness for the threads.
Fairness in the context of our research project is defined as resource allocation equity, which reduces co-runner thread dependence in a shared resource environment. In this way, instruction count degradation due to shared cache resource conflicts is overcome. In this respect, our closed-loop cache-aware adaptive scheduling framework contributes to the research field in two major and three minor aspects. The two major contributions lead to the cache-aware scheduling system. The first major contribution is the development of the execution fairness algorithm, which mitigates the co-runner cache impact on thread performance. The second is the development of the relevant mathematical models, such as the thread execution pattern and cache access pattern models, which formulate the execution fairness algorithm in terms of mathematical quantities. Following the development of the cache-aware scheduling system, our adaptive self-tuning control framework is constructed to add an adaptive closed-loop aspect to it. This control framework consists of two main components: the parameter estimator and the controller design module. The first minor contribution is the development of the parameter estimator; the QR recursive least squares (RLS) algorithm is applied within our framework to estimate the highly uncertain and time-varying cache resource patterns of threads. The second minor contribution is the design of the controller design module; an algebraic controller design algorithm, pole placement, is utilized to design a controller able to provide the desired time-varying control action. The adaptive self-tuning control framework and the cache-aware scheduling system together constitute our final framework, the closed-loop cache-aware adaptive scheduling framework.
The third minor contribution is the validation of the framework's efficiency in overcoming co-runner cache dependency. Time-series statistical counters were developed for the M-Sim multi-core simulator, and the theoretical findings and mathematical formulations were implemented as MATLAB m-file code. In this way, the overall framework was tested and the experimental outcomes analyzed. According to these outcomes, our closed-loop cache-aware adaptive scheduling framework successfully drives the co-runner-cache-dependent thread instruction count towards the co-runner-independent instruction count, with an error margin of up to 25% when the cache is highly utilized. In addition, thread cache access patterns are estimated with 75% accuracy.
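The estimator component can be illustrated with the plain RLS recursion (the thesis applies the QR-factorized variant; this simplified form, and the synthetic noiseless data, are assumptions for illustration only):

```python
import numpy as np

def rls_identify(xs, ys, lam=0.99, delta=100.0):
    """Plain recursive least squares: tracks time-varying parameters
    theta that minimize an exponentially weighted squared prediction
    error. lam < 1 forgets old samples, suiting drifting cache patterns."""
    n = xs.shape[1]
    theta = np.zeros(n)
    P = delta * np.eye(n)             # large initial covariance
    for x, y in zip(xs, ys):
        k = P @ x / (lam + x @ P @ x)            # gain vector
        theta = theta + k * (y - x @ theta)      # correct prediction error
        P = (P - np.outer(k, x @ P)) / lam       # covariance update
    return theta

# Synthetic check: recover y = 2*x1 - 3*x2 from noiseless regressors.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = X @ np.array([2.0, -3.0])
print(rls_identify(X, y))  # ≈ [2, -3]
```

In the thesis's setting the regressors would be measured quantities such as recent cache miss counts rather than synthetic Gaussians; the recursion itself is unchanged.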
Abstract:
This paper describes a new system, dubbed Continuous Appearance-based Trajectory Simultaneous Localisation and Mapping (CAT-SLAM), which augments sequential appearance-based place recognition with local metric pose filtering to improve the frequency and reliability of appearance-based loop closure. As in other approaches to appearance-based mapping, loop closure is performed without calculating global feature geometry or performing 3D map construction. Loop-closure filtering uses a probabilistic distribution of possible loop closures along the robot's previous trajectory, which is represented by a linked list of previously visited locations connected by odometric information. Sequential appearance-based place recognition and local metric pose filtering are evaluated simultaneously using a Rao–Blackwellised particle filter, which weights particles based on appearance matching over sequential frames and the similarity of robot motion along the trajectory. The particle filter explicitly models both the likelihood of revisiting previous locations and exploring new locations. A modified resampling scheme counters particle deprivation and allows loop-closure updates to be performed in constant time for a given environment. We compare the performance of CAT-SLAM with FAB-MAP (a state-of-the-art appearance-only SLAM algorithm) using multiple real-world datasets, demonstrating an increase in the number of correct loop closures detected by CAT-SLAM.
Abstract:
Housing demand and supply are not in balance. The Australian situation counters the experience of many other parts of the world in the aftermath of the Global Financial Crisis, with residential housing prices proving particularly resilient. Seemingly inexorable housing demand remains a critical issue affecting the socio-economic landscape. Underpinned by high levels of population growth fuelled by immigration, and further buoyed by sustained historically low interest rates, increasing income levels, and increased government assistance for first home buyers, this strong demand ensures that problems related to housing affordability continue almost unabated. A significant but less visible factor affecting housing affordability is holding costs. Although only one contributor in the housing affordability matrix, the nature and extent of their impact requires elucidation: for example, the computation and methodology behind the calculation of holding costs varies widely, and in some instances holding costs are ignored completely. In addition, ambiguity exists over which elements comprise holding costs, affecting the assessment of their relative contribution. Such anomalies may be explained by the fact that assessment is conducted over time in an ever-changing environment. A strong relationship with opportunity cost, in turn dependent inter alia upon prevailing inflation and/or interest rates, adds further complexity. By extending research in the general area of housing affordability, this thesis provides a detailed investigation of the elements of holding costs in the context of mid-sized (i.e. 15-200 lots) greenfield residential property developments in South East Queensland. With the dimensions of holding costs and their influence over housing affordability determined, the null hypothesis H0, that holding costs are not passed on, can be addressed.
Arriving at these conclusions involves the development of robust economic and econometric models which seek to clarify the component impacts of the holding cost elements. An explanatory sequential design research methodology has been adopted, whereby the compilation and analysis of quantitative data and the development of an economic model are informed by the subsequent collection and analysis of primarily qualitative data derived from surveying development-related organisations. Ultimately, there are significant policy implications for the framework used in Australian jurisdictions to promote, retain, or otherwise maximise the opportunities for affordable housing.
Abstract:
The overall aim of this project was to contribute to existing knowledge regarding methods for measuring the characteristics of airborne nanoparticles and controlling occupational exposure to them, and to gather data on nanoparticle emission and transport in various workplaces. The scope of this study involved investigating the characteristics and behaviour of particles arising from the operation of six nanotechnology processes, subdivided into nine processes for measurement purposes. It did not include toxicological evaluation of the aerosols, and therefore no direct conclusion was drawn regarding the health effects of exposure to these particles. Our research included real-time measurement of sub- and supermicrometre particle number and mass concentrations, count median diameter, and alveolar deposited surface area using condensation particle counters, an optical particle counter, a DustTrak photometer, a scanning mobility particle sizer, and a nanoparticle surface area monitor, respectively. Off-line particle analysis included scanning and transmission electron microscopy, energy-dispersive X-ray spectrometry, and thermal-optical analysis of elemental carbon. Sources of both fibrous and non-fibrous particles were included.
Abstract:
A frame-rate stereo vision system, based on non-parametric matching metrics, is described. Traditional metrics, such as normalized cross-correlation, are expensive in terms of logic. Non-parametric measures require only simple, parallelizable, functions such as comparators, counters and exclusive-or, and are thus very well suited to implementation in reprogrammable logic.
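The census transform with a Hamming-distance cost is a widely used example of such a non-parametric metric, built exactly from the primitives the abstract names: comparators to form a bit string from brightness ordering, XOR plus a bit counter to score a match. A minimal sketch (window contents are illustrative, not from the paper):

```python
def census(window):
    """Census transform of a square window: one bit per neighbour,
    set when the neighbour is brighter than the centre pixel.
    Uses only comparators - no arithmetic on intensities."""
    n = len(window)
    c = window[n // 2][n // 2]
    bits = 0
    for i in range(n):
        for j in range(n):
            if i == n // 2 and j == n // 2:
                continue                      # skip the centre pixel
            bits = (bits << 1) | (1 if window[i][j] > c else 0)
    return bits

def hamming(a, b):
    """Matching cost: XOR the two bit strings, count set bits."""
    return bin(a ^ b).count("1")

left  = [[10, 20, 30], [40, 50, 60], [70, 80, 90]]
right = [[12, 18, 33], [41, 50, 58], [69, 82, 88]]
print(hamming(census(left), census(right)))  # 0: same brightness ordering
```

Because only the local brightness ordering matters, the cost is invariant to gain and offset differences between cameras, which is why such metrics are robust where raw-intensity correlation fails, and why they map so cheaply onto reprogrammable logic.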
Abstract:
An accurate evaluation of the airborne particle dose-response relationship requires detailed measurement of the actual particle concentration levels that people are exposed to in every microenvironment in which they reside. The aim of this work was to perform an exposure assessment of children in relation to two different aerosol species: ultrafine particles (UFPs) and black carbon (BC). To this purpose, personal exposure measurements, in terms of UFP and BC concentrations, were performed on 103 children aged 8-11 years (10.1 ± 1.1 years) using hand-held particle counters and aethalometers. Simultaneously, a time-activity diary and a portable GPS were used to determine the children's daily time-activity pattern and estimate their inhaled dose of UFPs and BC. The median concentration to which the study population was exposed was found to be comparable to the high levels typically detected in urban traffic microenvironments, in terms of both particle number (2.2×10⁴ part. cm⁻³) and BC (3.8 μg m⁻³) concentrations. Daily inhaled doses were also found to be relatively high, equal to 3.35×10¹¹ part. day⁻¹ for UFPs and 3.92×10¹ μg day⁻¹ for BC. Cooking and using transportation were recognized as the main activities contributing to overall daily exposure to UFPs and BC, respectively, when normalized according to their corresponding time contribution. Therefore, UFPs and BC could represent tracers of children's exposure to particulate pollution from indoor cooking activities and transportation microenvironments, respectively.
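The daily inhaled dose follows from summing, over activities, concentration × inhalation rate × duration. The activity breakdown, concentrations, and inhalation rates below are illustrative assumptions, not the study's measured values; they simply show why such a sum lands in the 10¹¹ particles-per-day range reported:

```python
# Hypothetical time-activity budget for one child (illustrative values).
activities = [
    # (activity,  UFP conc. part/cm^3, inhalation rate cm^3/min, minutes)
    ("sleep",     1.0e4,  5000, 480),
    ("cooking",   8.0e4,  8000,  60),
    ("transport", 4.0e4,  9000,  90),
    ("other",     2.0e4,  8000, 810),
]

# Dose = sum of concentration * inhalation rate * duration per activity.
daily_dose = sum(conc * rate * minutes for _, conc, rate, minutes in activities)
print(f"{daily_dose:.2e} particles/day")  # → 2.24e+11 particles/day
```

Note how cooking and transport dominate the dose per minute spent, mirroring the abstract's finding that these activities are the main contributors once normalized by time.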
Abstract:
According to a study conducted by the International Maritime Organization (IMO), the shipping sector is responsible for 3.3% of global greenhouse gas (GHG) emissions. The 1997 Kyoto Protocol calls upon states to pursue the limitation or reduction of GHG emissions from marine bunker fuels working through the IMO. In 2011, 14 years after the adoption of the Kyoto Protocol, the Marine Environment Protection Committee (MEPC) of the IMO adopted mandatory energy efficiency measures for international shipping, which can be treated as the first ever mandatory global GHG reduction instrument for an international industry. The MEPC approved an amendment to Annex VI of the 1973 International Convention for the Prevention of Pollution from Ships (MARPOL 73/78) to introduce a mandatory Energy Efficiency Design Index (EEDI) for new ships and a Ship Energy Efficiency Management Plan (SEEMP) for all ships. Considering the growth projections for human population and world trade, these technical and operational measures may not be able to reduce GHG emissions from international shipping to a satisfactory level. Therefore, the IMO is considering introducing market-based mechanisms that may serve two purposes: providing a fiscal incentive for the maritime industry to invest in a more energy-efficient manner, and offsetting growing ship emissions. Some leading developing countries have already voiced serious reservations about the newly adopted IMO regulations, stating that, by imposing the same obligation on all countries irrespective of their economic status, the amendment rejects the Principle of Common but Differentiated Responsibility (the CBDR Principle), which has always been the cornerstone of international climate change law discourses.
They also claimed that negotiations for a market-based mechanism should not continue without a clear commitment from the developed countries to promote technical co-operation and the transfer of technology relating to the improvement of the energy efficiency of ships. Against this backdrop, this article explores the challenges for the developing countries in implementing the already adopted technical and operational measures.
Abstract:
The development of global navigation satellite systems (GNSS) nowadays provides solutions to many applied problems with ever higher quality and accuracy. Research carried out by the Bavarian Academy of Sciences and Humanities in Munich (BAW) in the field of airborne gravimetry is based on sophisticated processing of data from high-frequency GNSS receivers for kinematic aircraft positioning. The applied algorithms for inertial acceleration determination rely on a high sampling rate (50 Hz) and on reducing factors such as ionospheric scintillation, multipath, and aircraft/antenna near-field effects. The quality of the GNSS-derived kinematic heights is also studied by intercomparison with lift height variations collected by a precise high-sampling-rate vertical scale [1]. This work is aimed at more accurate determination of mini-aircraft altitude by means of high-frequency GNSS receivers, in particular by considering their dynamic behaviour.
Abstract:
This paper describes the results of experiments made in the vicinity of EHV overhead lines to investigate sources of clouds of charged particles using simultaneously-recording arrays of electric field meters to measure direct electric fields produced under ion clouds. E-field measurements, made at one metre above ground level, are correlated with wind speed and direction, and with measurements from ionisation counters and audible corona effects to identify possible positions of sources of corona on adjacent power lines. Measurements made in dry conditions on EHV lines in flat remote locations with no adjacent buildings or large vegetation indicate the presence of discrete ion sources associated with high stress points on some types of line hardware such as connectors and conductor spacers. Faulty line components such as insulators and line fittings are also found to be a possible source of ion clouds.