46 results for Molar - Average distances
at Queensland University of Technology - ePrints Archive
Abstract:
Effective flocculation and dewatering of mineral processing streams containing clays depend on the microstructure of the clay-water system. Initial clay flocculation is crucial in the design and development of a new methodology for gas exploitation. Microstructural engineering of clay aggregates using divalent cations and Keggin macromolecules has been monitored using a new state-of-the-art Transmission X-ray Microscope (TXM) with 60 nm tomographic resolution installed at a Taiwanese synchrotron. The 3-D reconstructions from TXM images show complex aggregation structures in montmorillonite aqueous suspensions after treatment with Na+, Ca2+ and Al13 Keggin macromolecules. Na-montmorillonite displays elongated, parallel, well-orientated, closed-void cellular networks 0.5–3 μm in diameter. After treatment with divalent cations, the coagulated structure displays much smaller, randomly orientated and openly connected cells 300–600 nm in diameter. The average distance measured between montmorillonite sheets was around 450 nm, less than half of the cell dimension measured in Na-montmorillonite. The most dramatic structural changes were observed after treatment with Al13 Keggin; aggregates then became arranged in compacted domains of 300 nm average diameter composed of thick face-to-face oriented sheets, which form porous aggregates with larger intra-aggregate open and connected voids.
Abstract:
The paper provides an assessment of the performance of commercial Real Time Kinematic (RTK) systems over longer-than-recommended inter-station distances. The experiments were set up to test and analyse solutions from the i-MAX, MAX and VRS systems operated with three triangle-shaped network cells with average inter-station distances of 69 km, 118 km and 166 km. The performance characteristics appraised included initialisation success rate, initialisation time, RTK position accuracy and availability, ambiguity resolution risk and RTK integrity risk, in order to provide a wider perspective of the performance of the tested systems.

The results showed that the performances of all network RTK solutions assessed were affected to similar degrees by the increase in inter-station distance. The MAX solution achieved the highest initialisation success rate, 96.6% on average, albeit with a longer initialisation time. The two VRS approaches achieved a lower initialisation success rate of 80% over the large triangle. In terms of RTK positioning accuracy after successful initialisation, the results indicated a good agreement between the actual error growth in both horizontal and vertical components and the accuracy specified by the manufacturers in RMS and parts-per-million (ppm) values.

Additionally, the VRS approaches performed better than MAX and i-MAX when tested on the standard triangle network with a mean inter-station distance of 69 km. However, as the inter-station distance increases, the network RTK software may fail to generate VRS corrections and instead operate in the nearest single-base RTK (or RAW) mode. Position uncertainty occasionally exceeded 2 metres, showing that the RTK rover software was using an incorrectly fixed ambiguity solution to estimate the rover position rather than automatically falling back to an ambiguity-float solution.
Results identified that the risk of incorrectly resolving ambiguities reached 18%, 20%, 13% and 25% for i-MAX, MAX, Leica VRS and Trimble VRS respectively when operating over the large triangle network. Additionally, the Coordinate Quality indicator values given by the Leica GX1230 GG rover receiver tended to be over-optimistic and did not function well in identifying incorrectly fixed integer ambiguity solutions. In summary, this independent assessment has identified problems and failures that can occur in all of the systems tested, especially when they are pushed beyond the recommended limits. While such failures are expected, they offer useful insights into where users should be wary and how manufacturers might improve their products. The results also demonstrate that integrity monitoring of RTK solutions is necessary for precision applications and thus deserves serious attention from researchers and system providers.
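The manufacturer accuracy specifications referred to above combine a constant term with a distance-proportional (ppm) term. A minimal sketch of that error model follows; the coefficients (8 mm + 1 ppm) are illustrative assumptions, not the manufacturers' actual figures:

```python
def expected_rtk_error_mm(base_mm, ppm, baseline_km):
    """Expected RTK positioning error as a constant part plus a
    distance-proportional part; 1 ppm adds 1 mm per km of distance."""
    return base_mm + ppm * baseline_km

# Illustrative horizontal spec (8 mm + 1 ppm) over the three tested cell sizes
for d_km in (69, 118, 166):
    print(d_km, "km ->", expected_rtk_error_mm(8.0, 1.0, d_km), "mm")
```

This linear growth is what the study compares the observed horizontal and vertical error against after successful initialisation.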
Abstract:
Reasons for performing study: The distance travelled by Australian feral horses in an unrestricted environment has not previously been determined. It is important to investigate horse movement in wilderness environments to establish baseline data against which the movement of domestically managed horses and wild equids can be compared. Objectives: To determine the travel dynamics of 2 groups of feral horses in unrestricted but different wilderness environments. Methods: Twelve feral horses living in 2 wilderness environments (2000 vs. 20,000 km²) in outback Australia were tracked for 6.5 consecutive days using custom-designed, collar-mounted global positioning system (GPS) units. Collars were attached after darting and immobilising the horses, and were recovered after a minimum of 6.5 days by re-darting the horses. Average daily distance travelled was calculated. Range use and watering patterns were analysed by viewing GPS tracks overlaid on satellite photographs of the study area. Results: Average distance travelled was 15.9 ± 1.9 km/day (range 8.1–28.3 km/day). Horses were recorded up to 55 km from their watering points, and some walked for 12 h from feeding grounds to water. Mean watering frequency was every 2.67 days (range 1–4 days). Central Australian horses watered less frequently and showed a different range use compared with horses from central Queensland: they walked long distances in direct lines to patchy food sources, whereas central Queensland horses were able to graze close to water sources and moved in a more or less circular pattern around the central water source. Conclusions: The distances travelled by feral horses were far greater than those previously observed for managed domestic horses and other equid species. Feral horses are able to travel long distances and withstand long periods without water, allowing them to survive in semi-arid conditions.
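Daily distance from collar GPS fixes can be estimated by summing great-circle distances between successive fixes. A minimal sketch of that calculation (the study's actual processing chain is not described in the abstract, so this is only one plausible approach):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(a, b):
    """Great-circle distance in km between two (lat, lon) fixes in degrees."""
    (lat1, lon1), (lat2, lon2) = a, b
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    h = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(h))  # mean Earth radius ~6371 km

def track_distance_km(fixes):
    """Total distance along a sequence of consecutive GPS fixes."""
    return sum(haversine_km(a, b) for a, b in zip(fixes, fixes[1:]))
```

Dividing the track total by the number of tracking days gives the average daily distance reported in studies of this kind.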
Abstract:
The third edition of Australian Standard AS1742 Manual of Uniform Traffic Control Devices Part 7 provides a method of calculating the sighting distance required to safely proceed at passive level crossings, based on the physics of moving vehicles. The required distance grows with higher line speeds and slower, heavier vehicles, so the method may return quite a long sighting distance. At such distances, there are concerns about whether drivers can reliably identify a train in order to make an informed decision on whether it is safe to proceed across the level crossing. To determine whether drivers can make reliable judgements in these circumstances, this study assessed the distance at which a train first becomes identifiable to a driver, as well as the driver's ability to detect the movement of the train. A site was selected in Victoria, and 36 participants with good visual acuity observed 4 trains in the 100–140 km/h range. While most participants could detect the train from a very long distance (2.2 km on average), they could only detect that the train was moving at much shorter distances (1.3 km on average). Large variability was observed between participants, with 4 participants consistently detecting trains later than the others. Participants tended to improve in their capacity to detect the presence of the train with practice, but a similar trend was not observed for detection of its movement. Participants were consistently poor at judging the approach speed of trains, with large underestimations at all investigated distances.
Abstract:
Infrared spectroscopy has been used to study nano- to micro-sized gallium oxyhydroxide, α-GaO(OH), prepared using a low-temperature hydrothermal route. Rod-like α-GaO(OH) crystals with an average length of ~2.5 μm and width of 1.5 μm were prepared when the initial molar ratio of Ga to OH was 1:3. β-Ga2O3 nano- and micro-rods were prepared through the calcination of α-GaO(OH); the initial morphology of α-GaO(OH) is retained in the β-Ga2O3 nanorods. The combination of infrared and infrared emission spectroscopy, complemented with dynamic thermal analysis, was used to characterise the α-GaO(OH) nanorods and the formation of β-Ga2O3 nanorods. Bands at around 2903 and 2836 cm⁻¹ are assigned to the -OH stretching vibration of α-GaO(OH) nanorods, and infrared bands at around 952 and 1026 cm⁻¹ are assigned to the Ga-OH deformation modes of α-GaO(OH). A significant number of bands observed in the 620 to 725 cm⁻¹ region are assigned to GaO stretching vibrations.
Abstract:
The importance of agriculture in many countries has tended to decline as their economies move from a resource base to a manufacturing base. Although the level of agricultural production in first-world countries has increased over the past two decades, this increase has generally been at a less significant rate than in other sectors of those economies. Despite this growth in secondary and high-technology industries, developed countries have continued to encourage and support their agricultural industries, through both tariffs and price support. Following pressure from developing economies, particularly through the World Trade Organisation (WTO), the GATT Uruguay Round and the Cairns Group, developed countries are now in various stages of winding back or de-coupling agricultural support within their economies. A major concern of farmers in protected agricultural markets is the impact of free-market trade in agricultural commodities on farm incomes, profitability and land values. This paper analyses both the capital and income performance of the NSW rural land market over the period 1990–1999. The analysis is based on several rural land use classifications and compares the total return from rural properties based on the farm income generated both by the average farmer and by farmers considered to be in the top 20% of the various land use areas. The analysis provides a comprehensive overview of rural production in a free-trade economy.
Abstract:
Purpose: Students with low vision may be disadvantaged compared with their normally sighted peers, as they frequently work at very short working distances and need to use low vision devices. The aim of this study was to examine the sustained reading rates of students with low vision and compare them with those of their peers with normal vision. The effects of visual acuity, acuity reserve and age on reading rate were also examined. Method: Fifty-six students (10 to 16 years of age), 26 with low vision and 30 with normal vision, were required to read text continuously for 30 minutes. Their position in the text was recorded at two-minute intervals. Distance and near visual acuity, working distance, cause of low vision, reading rates and reading habits were recorded. Results: A total of 80.7 per cent of the students with low vision maintained a constant reading rate during the 30 minutes of reading, although they read at approximately half the rate of their normally sighted peers (104 wpm vs. 195 wpm). Only four of the low vision subjects could not complete the reading task. Reading rates increased significantly with acuity reserve and with distance and near visual acuity, but there was no significant relationship between age and sustained reading rate. Conclusions: The majority of students with low vision were able to maintain reading rates appropriate for coping in integrated educational settings. Surprisingly, only a few subjects (16 per cent) used their prescribed low vision devices, even though the average accommodative demand was 9 D, and they generally revealed a greater dislike of reading than students with normal vision.
Abstract:
In this chapter we propose clipping with amplitude and phase corrections to reduce the peak-to-average power ratio (PAR) of orthogonal frequency division multiplexed (OFDM) signals in the high-speed wireless local area networks defined in the IEEE 802.11a physical layer. The proposed techniques can be implemented with a small modification at the transmitter, while the receiver remains standard compliant. PAR reductions of as much as 4 dB can be achieved by selecting a suitable clipping ratio and a correction factor depending on the constellation used. Out-of-band noise (OBN) is also reduced.
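As a rough illustration of the clipping idea, the sketch below clips one OFDM symbol's envelope at a chosen ratio above its RMS level and compares the PAR before and after. This is amplitude clipping only; the amplitude-and-phase correction scheme proposed in the chapter is not reproduced, and the 3 dB clipping ratio is an arbitrary example:

```python
import numpy as np

rng = np.random.default_rng(0)

def par_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def clip_envelope(x, clip_ratio_db):
    """Clip the signal envelope at clip_ratio_db above the RMS level,
    preserving phase (plain amplitude clipping)."""
    rms = np.sqrt(np.mean(np.abs(x) ** 2))
    a_max = rms * 10 ** (clip_ratio_db / 20)
    mag = np.maximum(np.abs(x), 1e-12)  # guard against zero magnitude
    return x * np.minimum(1.0, a_max / mag)

# One 64-subcarrier OFDM symbol (as in IEEE 802.11a) with random QPSK data
X = (rng.choice([-1.0, 1.0], 64) + 1j * rng.choice([-1.0, 1.0], 64)) / np.sqrt(2)
x = np.fft.ifft(X)
print(par_db(x), par_db(clip_envelope(x, 3.0)))  # PAR falls after clipping
```

Clipping alone distorts the constellation and raises out-of-band noise, which is what motivates the correction factors discussed in the chapter.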
Abstract:
Parallel combinatory orthogonal frequency division multiplexing (PC-OFDM) yields a lower maximum peak-to-average power ratio (PAR), higher bandwidth efficiency and lower bit error rate (BER) on Gaussian channels than OFDM systems. However, PC-OFDM does not significantly improve the statistics of the PAR. In this chapter, the use of a set of fixed permutations to improve the PAR statistics of a PC-OFDM signal is presented. In this technique, interleavers are used to produce K-1 permuted sequences from the same information sequence, and the sequence with the lowest PAR among the K candidates is chosen for transmission. The PAR of a PC-OFDM signal can be further reduced by 3–4 dB in this way. Mathematical expressions for the complementary cumulative distribution function (CCDF) of the PAR of PC-OFDM and interleaved PC-OFDM signals are also presented.
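The selection step can be sketched as follows: generate K candidate sequences (the original plus K-1 fixed permutations known to both transmitter and receiver) and transmit the one with the lowest PAR. Plain QPSK/OFDM stands in for the PC-OFDM mapping here, and the permutations and parameters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
N, K = 64, 8  # subcarriers; K candidates (identity + K-1 interleavers)

def par_db(x):
    """Peak-to-average power ratio in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

# Fixed permutations, agreed by both ends; here drawn randomly once
perms = [np.arange(N)] + [rng.permutation(N) for _ in range(K - 1)]

# Plain QPSK stands in for the PC-OFDM mapping
X = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], N) / np.sqrt(2)

candidates = [np.fft.ifft(X[p]) for p in perms]
best = min(candidates, key=par_db)  # transmit the lowest-PAR candidate
print(par_db(candidates[0]), par_db(best))
```

The receiver only needs the index of the chosen permutation to invert the interleaving, which is why a small set of fixed permutations suffices.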
Abstract:
A pragmatic method is proposed for assessing the accuracy and precision of a given processing pipeline for converting computed tomography (CT) image data of bones into representative three-dimensional (3D) models of bone shapes. The method is based on coprocessing a control object of known geometry, which enables the quality of the resulting 3D models to be assessed. Distance measurements were obtained and statistically evaluated at three stages of the conversion process. For this study, 31 CT datasets were processed. The final 3D model of the control object showed an average deviation from reference values of −1.07 ± 0.52 mm standard deviation (SD) for edge distances and −0.647 ± 0.43 mm SD for parallel side distances of the control object. Coprocessing a reference object enables the assessment of the accuracy and precision of a given processing pipeline for creating CT-based 3D bone models and is suitable for detecting most systematic or human errors when processing a CT scan. Typical errors are of about the same size as the scan resolution.
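The reported figures take the form of a mean signed deviation plus or minus the sample SD over a set of distance measurements against the control object's known geometry. A minimal sketch of that summary statistic, with hypothetical measurements:

```python
from statistics import mean, stdev

def deviation_summary(measured_mm, reference_mm):
    """Signed deviations of measured distances from their reference
    values, summarised as (mean, sample SD) -- the form in which the
    control-object results are reported."""
    devs = [m - r for m, r in zip(measured_mm, reference_mm)]
    return mean(devs), stdev(devs)

# Hypothetical edge-distance measurements (mm) against a 10 mm reference
m, s = deviation_summary([9.2, 8.8, 9.1, 8.9], [10.0, 10.0, 10.0, 10.0])
print(f"{m:+.2f} mm +/- {s:.2f} mm SD")
```

A non-zero mean indicates a systematic bias in the pipeline, while the SD reflects its precision.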
Abstract:
Freeways are divided roadways designed to facilitate the uninterrupted movement of motor vehicles. However, many freeways now experience demand flows in excess of capacity, leading to recurrent congestion. The Highway Capacity Manual (TRB, 1994) uses empirical macroscopic relationships between speed, flow and density to quantify freeway operations and performance, and capacity may be predicted as the maximum uncongested flow achievable. Although macroscopic models are effective tools for design and analysis, they lack an understanding of the nature of the processes taking place in the system. Szwed and Smith (1972, 1974) and Makigami and Matsuo (1990) have shown that microscopic modelling is also applicable to freeway operations. Such models facilitate an understanding of the processes whilst providing for the assessment of performance through measures of capacity and delay; however, they are limited to only a few circumstances. The aim of this study was to produce more comprehensive and practical microscopic models. These models were required to accurately portray the mechanisms of freeway operations at the specific locations under consideration, to be calibrated using data acquired at those locations, and to have their outputs validated with data acquired at the same sites, so that the outputs are truly descriptive of the performance of the facility. A theoretical basis needed to underlie the form of these models, rather than the empiricism of the macroscopic models currently used, and the models needed to be adaptable to variable operating conditions so that they may be applied, where possible, to other similar systems and facilities. It was not possible to produce a stand-alone model applicable to all facilities and locations in this single study; however, the scene has been set for the application of the models to a much broader range of operating conditions.
Opportunities for further development of the models were identified, and procedures provided for the calibration and validation of the models to a wide range of conditions. The models developed do, however, have limitations in their applicability. Only uncongested operations were studied and represented. Driver behaviour in Brisbane was applied to the models; different mechanisms are likely in other locations due to variability in road rules and driving cultures. Not all manoeuvres evident were modelled, some unusual manoeuvres being considered unwarranted to model. However, the models developed contain the principal processes of freeway operations: merging and lane changing. Gap acceptance theory was applied to these critical operations to assess freeway performance. Gap acceptance theory was found to be applicable to merging; however, the major stream, the kerb lane traffic, exercises only a limited priority over the minor stream, the on-ramp traffic. Theory was established to account for this behaviour. Kerb lane drivers were also found to change to the median lane where possible, to assist coincident mergers. The net limited priority model accounts for this by predicting a reduced major stream flow rate which excludes lane changers. Cowan's M3 model was calibrated for both streams, with on-ramp and total upstream flow required as input. Relationships between flow and the proportion of headways greater than 1 s differed between on-ramps fed by signalised intersections and those fed by unsignalised intersections. Constant-departure on-ramp metering was also modelled. Minimum follow-on times of 1 to 1.2 s were calibrated. Critical gaps were shown to lie between the minimum follow-on time and the sum of the minimum follow-on time and the 1 s minimum headway. Limited priority capacity and other boundary relationships were established by Troutbeck (1995).
The minimum average minor stream delay and the corresponding proportion of drivers delayed were quantified theoretically in this study. A simulation model was constructed to predict intermediate minor and major stream delays across all minor and major stream flows, and pseudo-empirical relationships were established to predict average delays. Major stream average delay is limited to 0.5 s, insignificant compared with minor stream delay, which reaches infinity at capacity. Minor stream delays were shown to be smaller when unsignalised intersections, rather than signalised intersections, are located upstream of on-ramps, and smaller still when ramp metering is installed. Smaller delays correspond to improved merge area performance. A more tangible performance measure, the distribution of distances required to merge, was established by including design speeds. This distribution can be measured to validate the model, and merging probabilities can be predicted for given taper lengths, a most useful performance measure. This model was also shown to be applicable to lane changing. Tolerable limits to merging probabilities require calibration; from these, practical capacities can be estimated. Further calibration is required of the traffic inputs, critical gap and minimum follow-on time, for both merging and lane changing, and a general relationship to predict the proportion of drivers delayed requires development. These models can then be used to complement existing macroscopic models in assessing performance, and to provide further insight into the nature of operations.
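Cowan's M3 headway model referred to above describes the major stream as a proportion alpha of free vehicles with shifted-exponential headways and the remainder bunched at the minimum headway delta. A minimal sketch of its survival (CCDF) function; the parameter values in the example are illustrative, not calibrated values from the thesis:

```python
import math

def m3_headway_ccdf(t, alpha, delta, flow):
    """P(headway > t) under Cowan's M3 model: a proportion alpha of
    free vehicles with shifted-exponential headways, the remainder
    bunched at the minimum headway delta (s). flow is in veh/s."""
    if t < delta:
        return 1.0
    # decay rate chosen so the mean headway equals 1/flow
    lam = alpha * flow / (1.0 - delta * flow)
    return alpha * math.exp(-lam * (t - delta))

# Example: 80% free vehicles, 1 s minimum headway, 1800 veh/h kerb lane flow
q = 1800 / 3600.0
print(m3_headway_ccdf(1.0, 0.8, 1.0, q))  # proportion of headways > 1 s
```

Gap acceptance capacity estimates follow from integrating the usable gaps this distribution predicts against the critical gap and follow-on time.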
Abstract:
This paper presents a study on estimating the latent demand for rail transit in the Australian context. Based on travel mode-choice modelling, a two-stage analysis approach is proposed: market population identification and mode share estimation. A case study is conducted on the Midland–Fremantle rail transit corridor in Perth, Western Australia. The required data mainly comprise journey-to-work trip data from the Australian Bureau of Statistics Census 2006 and the work-purpose mode-choice model in the Perth Strategic Transport Evaluation Model. The market profile is analysed in terms of catchment areas, market population, mode shares, mode-specific trip distributions and average trip distances. A numerical simulation is performed to test the sensitivity of transit ridership to changes in fuel price. A corridor-level transit demand function of fuel price is thus obtained and its elasticity characteristics are discussed. This study explores a viable approach to developing a decision-support tool for assessing the short-term impacts of policy and operational adjustments on corridor-level demand for rail transit.
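The sensitivity of a demand function to fuel price is commonly summarised as an elasticity. A minimal sketch using the midpoint (arc) formula, which may differ from the formulation used in the paper; the numbers in the example are hypothetical:

```python
def arc_elasticity(q0, q1, p0, p1):
    """Midpoint (arc) elasticity of demand between two observations of
    price p and ridership q: percentage change in quantity divided by
    percentage change in price, both taken about the midpoints."""
    return ((q1 - q0) / ((q0 + q1) / 2)) / ((p1 - p0) / ((p0 + p1) / 2))

# Hypothetical: fuel price rises AUD 1.00 -> 1.50 and corridor transit
# ridership rises 100,000 -> 110,000 trips
print(arc_elasticity(100_000, 110_000, 1.00, 1.50))
```

A positive cross-elasticity of this kind would indicate that rail ridership grows as driving becomes more expensive, the effect the paper's simulation probes.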
Abstract:
Using GIS to evaluate travel behaviour is an important technique for increasing our understanding of the relationship between accessibility and transport demand. In this paper, the activity space concept was used to identify the nature of participation in activities (or lack of it) amongst a group of students using a 2-day travel-activity diary. Three indicators were used to measure the size of activity spaces: the number of unique locations visited, average daily distance travelled, and average daily activity duration. These reflect levels of accessibility, personal mobility, and the extent of participation respectively. Multiple regression analyses were used to assess the impacts of students' socio-economic status and the spatial characteristics of their home location. Although no differences were found in the accessibility and extent-of-participation measures, home location with respect to a demand responsive transport (DRT) service was found to be the most important determinant of mobility patterns. Despite being able to travel longer distances, students who live outside the DRT service area were found to be temporally excluded from some opportunities. Student activity spaces were also visualised within a GIS environment, and a spatial analysis was conducted to underpin the evaluation of the performance of the DRT. This approach was also used to identify the activity spaces of individuals who are geographically excluded from the service. Evaluation of these results indicated that although the service currently covers areas of high demand, 90% of the activity spaces remained un-served by the DRT service. Using these data, six new routes were designed to meet the coverage goal of public transport, using a network impedance measure derived from inverse activity density.
Following the assessment of public transport service coverage, the study was extended using a Spatial Multi-Criteria Evaluation (SMCE) technique to assess the effect of service provision on patronage.
Abstract:
A forced landing is an unscheduled event in flight requiring an emergency landing, most commonly attributed to engine failure, failure of avionics or adverse weather. Since the ability to conduct a successful forced landing is the primary indicator of safety in the aviation industry, automating this capability for unmanned aerial vehicles (UAVs) will help facilitate their integration into, and subsequent routine operations over, civilian airspace. Currently, no commercial system is available to perform this task; however, a team at the Australian Research Centre for Aerospace Automation (ARCAA) is working towards developing such an automated forced landing system. This system, codenamed Flight Guardian, will operate onboard the aircraft and use machine vision for site identification; artificial intelligence for data assessment and evaluation; and path planning, guidance and control techniques to actualise the landing. This thesis focuses on research specific to the third category, and presents the design, testing and evaluation of a Trajectory Generation and Guidance System (TGGS) that navigates the aircraft to land at a chosen site following an engine failure. Firstly, two algorithms are developed that adapt manned-aircraft forced landing techniques to the UAV planning problem. Algorithm 1 allows the UAV to select a route (from a library) based on a fixed glide range and the ambient wind conditions, while Algorithm 2 uses a series of adjustable waypoints to cater for changing winds. A comparison of the two algorithms in over 200 simulated forced landings found that with Algorithm 2, twice as many landings fell within the designated area, with an average lateral miss distance of 200 m at the aimpoint. These results provide a baseline for further refinement of the planning algorithms.
A significant contribution is seen in the design of the 3-D Dubins Curves planning algorithm, which extends the elementary concepts underlying 2-D Dubins paths to account for powerless flight in three dimensions. This has also resulted in the development of new methods for testing path traversability, for losing excess altitude, and for forming the actual path so as to ensure aircraft stability. Simulations using this algorithm have demonstrated lateral and vertical miss distances of under 20 m at the approach point in wind speeds of up to 9 m/s. This is more than a tenfold improvement on Algorithm 2 and emulates the performance of manned, powered aircraft. The lateral guidance algorithm originally developed by Park, Deyst and How (2007) is enhanced to include wind information in the guidance logic. A simple assumption is also made that reduces the complexity of the algorithm when following a circular path, without sacrificing performance, and a specific method of supplying the correct turning direction is used. Simulations have shown that this new algorithm, named the Enhanced Nonlinear Guidance (ENG) algorithm, performs much better in changing winds, with cross-track errors at the approach point within 2 m, compared with over 10 m using Park's algorithm. A fourth contribution is made in designing the Flight Path Following Guidance (FPFG) algorithm, which uses path angle calculations and MacCready theory to determine the optimal speed to fly in winds. This algorithm also uses proportional-integral-derivative (PID) gain schedules to finely tune the tracking accuracies, and has demonstrated vertical miss distances of under 2 m in simulations with changing winds. A fifth contribution is made in designing the Modified Proportional Navigation (MPN) algorithm, which uses principles from proportional navigation and the ENG algorithm, as well as methods specifically its own, to calculate the required pitch to fly.
This algorithm is robust to wind changes, and is easily adaptable to any aircraft type. Tracking accuracies obtained with this algorithm are also comparable to those obtained using the FPFG algorithm. For all three preceding guidance algorithms, a novel method utilising the geometric and time relationship between aircraft and path is also employed to ensure that the aircraft is still able to track the desired path to completion in strong winds, while remaining stabilised. Finally, a derived contribution is made in modifying the 3-D Dubins Curves algorithm to suit helicopter flight dynamics. This modification allows a helicopter to autonomously track both stationary and moving targets in flight, and is highly advantageous for applications such as traffic surveillance, police pursuit, security or payload delivery. Each of these achievements serves to enhance the on-board autonomy and safety of a UAV, which in turn will help facilitate the integration of UAVs into civilian airspace for a wider appreciation of the good that they can provide. The automated UAV forced landing planning and guidance strategies presented in this thesis will allow the progression of this technology from the design and developmental stages, through to a prototype system that can demonstrate its effectiveness to the UAV research and operations community.
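The lateral guidance law of Park, Deyst and How that this thesis builds on commands a lateral acceleration toward a reference point a fixed distance ahead on the desired path. A minimal sketch of the base law only; the wind and turn-direction enhancements described in the thesis are not reproduced, and the example values are illustrative:

```python
import math

def lateral_accel_cmd(v, l1, eta):
    """Base nonlinear lateral guidance law of Park, Deyst and How:
    a_cmd = 2 * V^2 / L1 * sin(eta), where V is ground speed, L1 the
    lookahead distance, and eta the angle from the velocity vector to
    a reference point a distance L1 ahead on the path."""
    return 2.0 * v ** 2 / l1 * math.sin(eta)

# Example: 20 m/s glide, 100 m lookahead, 30 degrees off the reference line
print(lateral_accel_cmd(20.0, 100.0, math.radians(30)))  # m/s^2
```

Because the commanded acceleration scales with sin(eta), the law behaves like a proportional controller for small path errors while remaining well defined for large ones, one reason it adapts naturally to circular as well as straight segments.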