295 results for speed reduction
Practical improvements to simultaneous computation of multi-view geometry and radial lens distortion
Abstract:
This paper discusses practical issues related to the use of the division model for lens distortion in multi-view geometry computation. A data normalisation strategy is presented, which has been absent from previous discussions on the topic. The convergence properties of the Rectangular Quadric Eigenvalue Problem solution for computing division model distortion are examined. It is shown that the existing method can require more than 1000 iterations when dealing with severe distortion. A method is presented for accelerating convergence to less than 10 iterations for any amount of distortion. The new method is shown to produce equivalent or better results than the existing method with up to two orders of magnitude reduction in iterations. Through detailed simulation it is found that the number of data points used to compute geometry and lens distortion has a strong influence on convergence speed and solution accuracy. It is recommended that more than the minimal number of data points be used when computing geometry using a robust estimator such as RANSAC. Adding two to four extra samples improves the convergence rate and accuracy sufficiently to compensate for the increased number of samples required by the RANSAC process.
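The division model referred to above is compact enough to sketch directly. Below is a minimal, illustrative Python snippet (not the paper's code) showing how distorted points map to undistorted points under a one-parameter division model; the distortion centre, coordinate normalisation and the coefficient value are assumptions made for the example.

```python
import numpy as np

def undistort_division_model(points, lam, centre=(0.0, 0.0)):
    """Map distorted image points to undistorted points with the
    one-parameter division model: x_u = c + (x_d - c) / (1 + lam * r_d**2).

    points : (N, 2) array of distorted coordinates
    lam    : division-model distortion coefficient (negative for barrel)
    centre : assumed centre of distortion
    """
    p = np.asarray(points, dtype=float) - np.asarray(centre)
    r2 = np.sum(p**2, axis=1, keepdims=True)      # squared radius per point
    return np.asarray(centre) + p / (1.0 + lam * r2)

# Toy usage: undistort a few normalised points with a mild barrel distortion.
pts = np.array([[0.2, 0.1], [-0.4, 0.3], [0.0, -0.5]])
print(undistort_division_model(pts, lam=-0.25))
```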
Abstract:
Signal-degrading speckle is one factor that can reduce the quality of optical coherence tomography images. We demonstrate the use of a hierarchical model-based motion estimation processing scheme based on an affine-motion model to reduce speckle in optical coherence tomography imaging, by image registration and the averaging of multiple B-scans. The proposed technique is evaluated against other methods available in the literature. The results from a set of retinal images show the benefit of the proposed technique, which provides an improvement in signal-to-noise ratio of the square root of the number of averaged images, leading to clearer visual information in the averaged image. The benefits of the proposed technique are also explored in the case of ocular anterior segment imaging.
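The square-root-of-N signal-to-noise improvement from averaging registered B-scans can be illustrated with a small synthetic example. The sketch below is not the authors' pipeline (no affine registration is performed); it simply averages frames that share a fixed structure corrupted by independent noise, an assumption made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "registered B-scans": a fixed structure plus independent noise.
signal = np.sin(np.linspace(0, 4 * np.pi, 512))   # underlying structure

def snr(img, truth):
    noise = img - truth
    return truth.std() / noise.std()

n_frames = 16
frames = [signal + 0.5 * rng.standard_normal(signal.shape) for _ in range(n_frames)]

single_snr = snr(frames[0], signal)
avg_snr = snr(np.mean(frames, axis=0), signal)
print(f"single-frame SNR: {single_snr:.2f}")
print(f"{n_frames}-frame average SNR: {avg_snr:.2f} "
      f"(~ sqrt({n_frames}) = {np.sqrt(n_frames):.1f}x improvement)")
```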
Abstract:
Introduction: An observer, looking sideways from a moving vehicle while wearing a neutral density filter over one eye, can have a distorted perception of speed, known as the Enright phenomenon. The purpose of this study was to determine how the Enright phenomenon influences driving behaviour. Methods: A geometric model of the Enright phenomenon was developed. Ten young, visually normal participants (mean age = 25.4 years) were tested on a straight section of a closed driving circuit and instructed to look out of the right side of the vehicle and drive at either 40 km/h or 60 km/h under the following binocular viewing conditions: with a 0.9 ND filter over the left eye (leading eye); a 0.9 ND filter over the right eye (trailing eye); 0.9 ND filters over both eyes; and with no filters over either eye. The order of filter conditions was randomised and the speed driven recorded for each condition. Results: Speed judgments did not differ significantly between the two baseline conditions (no filters and both eyes filtered) for either speed tested. For the baseline conditions, when subjects were asked to drive at 60 km/h they matched this speed well (61 ± 10.2 km/h) but drove significantly faster than requested (51.6 ± 9.4 km/h) when asked to drive at 40 km/h. Subjects significantly exceeded baseline speeds by 8.7 ± 5.0 km/h when the trailing eye was filtered, and travelled slower than baseline speeds by 3.7 ± 4.6 km/h when the leading eye was filtered. Conclusions: This is the first quantitative study demonstrating how the Enright effect can influence perceptions of driving speed, and it demonstrates that monocular filtering of an eye can significantly impact driving speeds, albeit to a lesser extent than predicted by geometric models of the phenomenon.
Abstract:
Background: Few studies have specifically investigated the functional effects of uncorrected astigmatism on measures of reading fluency. This information is important to provide evidence for the development of clinical guidelines for the correction of astigmatism. Methods: Participants included 30 visually normal, young adults (mean age 21.7 ± 3.4 years). Distance and near visual acuity and reading fluency were assessed with optimal spectacle correction (baseline) and for two levels of astigmatism, 1.00DC and 2.00DC, at two axes (90° and 180°) to induce both against-the-rule (ATR) and with-the-rule (WTR) astigmatism. Reading and eye movement fluency were assessed using standardized clinical measures including the test of Discrete Reading Rate (DRR), the Developmental Eye Movement (DEM) test and by recording eye movement patterns with the Visagraph (III) during reading for comprehension. Results: Both distance and near acuity were significantly decreased compared to baseline for all of the astigmatic lens conditions (p < 0.001). Reading speed with the DRR for N16 print size was significantly reduced for the 2.00DC ATR condition (a reduction of 10%), while for smaller text sizes reading speed was reduced by up to 24% for the 1.00DC ATR and 2.00DC condition in both axis directions (p<0.05). For the DEM, sub-test completion speeds were significantly impaired, with the 2.00DC condition affecting both vertical and horizontal times and the 1.00DC ATR condition affecting only horizontal times (p<0.05). Visagraph reading eye movements were not significantly affected by the induced astigmatism. Conclusions: Induced astigmatism impaired performance on selected tests of reading fluency, with ATR astigmatism having significantly greater effects on performance than did WTR, even for relatively small amounts of astigmatic blur of 1.00DC. These findings have implications for the minimal prescribing criteria for astigmatic refractive errors.
Abstract:
Contamination of packaged foods due to micro-organisms entering through air leaks can cause serious public health issues and cost companies large amounts of money due to product recalls, compensation claims, consumer impact and subsequent loss of market share. The main source of contamination is leaks in packaging, which allow air, moisture and micro-organisms to enter the package. In the food processing and packaging industry worldwide, there is an increasing demand for cost-effective, state-of-the-art inspection technologies that are capable of reliably detecting leaky seals and delivering products at six-sigma. This project develops non-destructive testing technology using digital imaging and sensing combined with a differential vacuum technique to assess the seal integrity of food packages on a high-speed production line. The cost of leaky packages to Australian food industries is estimated at close to AUD $35 million per year. Flexible plastic packages are widely used and are the least expensive form of retaining the quality of the product. These packets can be used to seal, and therefore maximise, the shelf life of both dry and moist products. The seals of food packages need to be airtight so that the food content is not contaminated through contact with micro-organisms that enter as a result of air leakage. Airtight seals also extend the shelf life of packaged foods, and manufacturers attempt to prevent food products with leaky seals being sold to consumers. There are many current NDT (non-destructive testing) methods for testing the seals of flexible packages, best suited to random sampling and laboratory purposes. The three most commonly used methods are vacuum/pressure decay, the bubble test, and helium leak detection. Although these methods can detect very fine leaks, they are limited by their high processing time and are not viable on a production line. Two non-destructive in-line packaging inspection machines are currently available and are discussed in the literature review. The detailed design and development of the High-Speed Sensing and Detection System (HSDS) is the fundamental requirement of this project and the future prototype and production unit. Successful laboratory testing was completed, and a methodical design procedure was needed for a successful concept. The mechanical tests confirmed the vacuum hypothesis and seal integrity with good, consistent results. The electrical testing also provided solid results, giving the researcher confidence to move the project forward. The laboratory design testing allowed the researcher to confirm theoretical assumptions before moving into the detailed design phase. Discussion of the development of alternative concepts in both the mechanical and electrical disciplines enabled the researcher to make an informed decision. Each major mechanical and electrical component is detailed through the research and design process. The design procedure methodically works through the various major functions from both a mechanical and an electrical perspective.
It also canvasses alternative ideas for the major components which, although sometimes not practical in this application, show that the researcher has exhausted the available engineering and functional options. Further concepts were then designed and developed for the entire HSDS unit based on previous practice and theory. It is envisaged that both the prototype and production versions of the HSDS would use standard, locally manufactured and distributed industry components. Future research and testing of the prototype unit could result in a successful trial unit being incorporated into a working food processing production environment. Recommendations and future work are discussed, along with options in other food processing and packaging disciplines and in other areas of the non-food processing industry.
Abstract:
Within Australia, motor vehicle injury is the leading cause of hospital admissions and fatalities. Road crash data reveal that, among the factors contributing to crashes in Queensland, speed and alcohol continue to be overrepresented. While alcohol is the number one contributing factor to fatal crashes, speeding also contributes to a high proportion of crashes. Research indicates that risky driving is an important contributor to road crashes. However, it has been debated whether all risky driving behaviours are similar enough to be explained by the same combination of factors. Further, road safety authorities have traditionally relied upon deterrence-based countermeasures to reduce the incidence of illegal driving behaviours such as speeding and drink driving. However, more recent research has focussed on social factors to explain illegal driving behaviours. The purpose of this research was to examine and compare the psychological, legal, and social factors contributing to two illegal driving behaviours: exceeding the posted speed limit and driving when over the legal blood alcohol concentration (BAC) for the driver's licence type. Complementary theoretical perspectives were chosen to comprehensively examine these two behaviours, including Akers' social learning theory, Stafford and Warr's expanded deterrence theory, and personality perspectives encompassing alcohol misuse, sensation seeking, and Type-A behaviour pattern. The program of research consisted of two phases: a preliminary pilot study and the main quantitative phase. The preliminary pilot study was undertaken to inform the development of the quantitative study and to ensure the clarity of the theoretical constructs operationalised in this research. Semi-structured interviews were conducted with 11 Queensland drivers recruited from Queensland Transport Licensing Centres and Queensland University of Technology (QUT). These interviews demonstrated that the majority of participants had engaged in at least one of the behaviours, or knew of someone who had. It was also found among these drivers that the social environment in which both behaviours operate, including family and friends, and the social rewards and punishments associated with the behaviours, are important in their decision making. The main quantitative phase of the research involved a cross-sectional survey of 547 Queensland licensed drivers. The aim of this study was to determine the relationship between speeding and drink driving and whether there were any similarities or differences in the factors that contribute to a driver's decision to engage in one or the other. A comparison of the participants' self-reported speeding and self-reported drink driving behaviour demonstrated that there was a weak positive association between these two behaviours. Further, participants reported engaging in speeding at both low (i.e., up to 10 kilometres per hour over the limit) and high (i.e., 10 kilometres per hour or more over the limit) levels more frequently than engaging in drink driving. It was noted that those who indicated they drove when they may have been over the legal limit for their licence type more frequently exceeded the posted speed limit by 10 kilometres per hour or more than those who complied with the regulatory limits for drink driving. A series of regression analyses were conducted to investigate the factors that predict self-reported speeding, self-reported drink driving, and the preparedness to engage in both behaviours.
In relation to self-reported speeding (n = 465), it was found that, among the sociodemographic and person-related factors, younger drivers and those who scored high on measures of sensation seeking were more likely to report exceeding the posted speed limit. In addition, among the legal and psychosocial factors it was observed that direct exposure to punishment (i.e., being detected by police), direct punishment avoidance (i.e., engaging in an illegal driving behaviour and not being detected by police), personal definitions (i.e., personal orientation or attitudes toward the behaviour), both the normative and behavioural dimensions of differential association (i.e., the orientation or attitudes of friends and family, as well as the behaviour of these individuals), and anticipated punishments were significant predictors of self-reported speeding. It was interesting to note that associating with significant others who held unfavourable definitions towards speeding (the normative dimension of differential association) and anticipating punishments from others were both significant predictors of a reduction in self-reported speeding. In relation to self-reported drink driving (n = 462), a logistic regression analysis indicated that there were a number of significant predictors which increased the likelihood of whether participants had driven in the last six months when they thought they may have been over the legal alcohol limit. These included: experiences of direct punishment avoidance; having a family member convicted of drink driving; higher levels of Type-A behaviour pattern; greater alcohol misuse (as measured by the AUDIT); and the normative dimension of differential association (i.e., associating with others who held favourable attitudes to drink driving). A final logistic regression analysis examined the predictors of whether the participants reported engaging in both drink driving and speeding versus only speeding (the more common of the two behaviours) (n = 465). It was found that experiences of punishment avoidance for speeding decreased the likelihood of engaging in both speeding and drink driving, whereas in the case of drink driving, direct punishment avoidance increased the likelihood of engaging in both behaviours. It was also noted that holding favourable personal definitions toward speeding and drink driving, as well as higher levels of Type-A behaviour pattern and greater alcohol misuse, significantly increased the likelihood of engaging in both speeding and drink driving. This research has demonstrated that compliance with the regulatory limits was much higher for drink driving than for speeding. It is acknowledged that while speed limits are a fundamental component of speed management practices in Australia, the countermeasures applied to speeding and drink driving do not appear to elicit the same level of compliance across the driving population. Further, the findings suggest that while the principles underpinning the current regime of deterrence-based countermeasures are sound, current enforcement practices are insufficient to ensure compliance among the driving population, particularly in the case of speeding.
Future research should further examine the degree of overlap between speeding and drink driving behaviour and whether punishment avoidance experiences for a specific illegal driving behaviour serve to undermine the deterrent effect of countermeasures aimed at reducing the incidence of another illegal driving behaviour. Furthermore, future work should seek to understand the factors which predict engaging in speeding and drink driving behaviours at the same time. Speeding has shown itself to be a pervasive and persistent behaviour, hence it would be useful to examine why road safety authorities have been successful in convincing the majority of drivers of the dangers of drink driving, but not those associated with speeding. In conclusion, the challenge for road safety practitioners will be to convince drivers that speeding and drink driving are equally risky behaviours, with the ultimate goal to reduce the prevalence of both behaviours.
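The logistic regression analyses described above can be sketched in outline. The snippet below uses synthetic data and hypothetical predictor names standing in for the study's constructs (punishment avoidance, family conviction, Type-A score, AUDIT score, normative differential association); it illustrates the modelling approach only and does not reproduce the reported results.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 462  # sample size reported for the drink-driving analysis

# Hypothetical predictor names standing in for the constructs in the study.
df = pd.DataFrame({
    "punish_avoidance": rng.integers(0, 2, n),   # direct punishment avoidance
    "family_conviction": rng.integers(0, 2, n),  # family member convicted
    "type_a": rng.normal(0, 1, n),               # Type-A behaviour score
    "audit": rng.normal(0, 1, n),                # AUDIT alcohol-misuse score
    "diff_assoc_norm": rng.normal(0, 1, n),      # normative differential association
})
# Synthetic outcome generated so the predictors matter (arbitrary weights).
lin = (0.8 * df.punish_avoidance + 0.5 * df.family_conviction
       + 0.4 * df.type_a + 0.6 * df.audit + 0.5 * df.diff_assoc_norm - 1.5)
df["drink_drive"] = rng.binomial(1, 1 / (1 + np.exp(-lin)))

model = smf.logit(
    "drink_drive ~ punish_avoidance + family_conviction + type_a + audit + diff_assoc_norm",
    data=df,
).fit(disp=False)
print(np.exp(model.params))   # odds ratios for each predictor
```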
Abstract:
Gold nanoparticles supported on CeO2 were found to be efficient photocatalysts for three selective reductions of organic compounds at ambient temperatures, under irradiation of visible light; their reduction ability can be tuned by manipulating the irradiation wavelength.
Abstract:
High Speed Rail (HSR) is rapidly gaining popularity worldwide as a safe and efficient transport option for long-distance travel. Designed to win market share from air transport, HSR systems balance increasing speeds against station spacing to offer a high quality service and gain ridership. Recent studies have investigated the effects that the deployment of HSR infrastructure has on the spatial distribution and economic development of cities and regions. Findings appear mostly positive at higher geographical scales, where HSR links connect major urban centres several hundred kilometres apart that are already well positioned within a national or international context. At the urban level, studies have also shown regeneration and concentration effects around HSR station areas, with positive returns for the city's image and economy. However, doubts persist about the effects of HSR at an intermediate scale, where the accessibility trade-off on station spacing limits access for many small and medium agglomerations, significantly reducing their ability to participate in the development opportunities facilitated by HSR infrastructure. This thesis is set in this context, in which intermediate and regional cities do not directly enjoy the presence of an HSR station despite having an existing or planned HSR corridor nearby. With the aim of understanding whether there might be a solution to this apparent incongruity, the research investigates strategies to integrate HSR accessibility at the regional level. While the current literature recommends committing to ancillary investments to uplift station areas and renew feeder systems, I hypothesised interoperability between the HSR and conventional networks, exploring the possibilities offered by mixed traffic and infrastructure sharing. I then developed a methodology to quantify the exchange of benefits deriving from this synergistic interaction. In this way, it was possible to understand which level of service quality offered by alternative transit strategies best facilitates the distribution of accessibility benefits to areas far from actual HSR stations. Strategies were therefore selected for service types capable of regional extension and urban penetration, while incorporating a combination of specific advantages (e.g. speed, sub-urbanity, capacity, frequency and automation) in order to emulate HSR quality with increasingly efficient services. The North-eastern Italian macro-region was selected as the case study to ground the research, since it concurrently offers a peripheral polycentric metropolitan form, a planned HSR corridor with some portions of the HSR infrastructure already implemented, and a project to develop a regionally extended suburban rail service. Results show significant distributive potential, in terms of network effects produced in relation to HSR, in increasing proportions for all the strategies considered: a regional metro rail (RMR) strategy, a regional high speed rail (RHSR) strategy, a regional light rail transit (LRT) strategy, and a non-stopping continuous railway system (CRS) strategy.
The provision of additional tools to value HSR infrastructure against its accessibility benefits, and against the regional distribution of those benefits through alternative strategies beyond the actual HSR stations, would have great implications, both politically and technically, in moving towards new dimensions of HSR evaluation and development.
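The thesis's benefit-quantification methodology is not reproduced here, but a standard gravity-type accessibility index gives a feel for how reduced travel times translate into the regional accessibility gains discussed above. The node values, travel times and decay parameter in the sketch below are illustrative assumptions.

```python
import numpy as np

def gravity_accessibility(travel_time, opportunities, beta=0.03):
    """Gravity-type accessibility: A_i = sum_j O_j * exp(-beta * t_ij).

    travel_time   : (n, n) matrix of door-to-door times in minutes
    opportunities : (n,) vector of jobs/population at each node
    beta          : impedance decay parameter (assumed value)
    """
    return (np.exp(-beta * travel_time) * opportunities).sum(axis=1)

# Toy network of three cities; a rail upgrade cuts some travel times.
opportunities = np.array([500_000, 120_000, 80_000])
t_before = np.array([[0, 90, 120], [90, 0, 60], [120, 60, 0]], dtype=float)
t_after = np.array([[0, 45, 70], [45, 0, 40], [70, 40, 0]], dtype=float)

gain = (gravity_accessibility(t_after, opportunities)
        / gravity_accessibility(t_before, opportunities))
print(gain)   # relative accessibility gain per city
```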
Abstract:
This final report outlines the research conducted by the Centre for Accident Research and Road Safety – Queensland (CARRS-Q) for the research project (title above). This report provides an outline of the project methodology, literature review, three stages of research results (including the focus group discussions, review of organisational records, documentation and initiatives, and analysis of previous CARRS-Q occupational road safety self-report surveys), and recommendations for intervention strategy and initiatives development and implementation.
Abstract:
The paper investigates a detailed Active Shock Control Bump (SCB) design optimisation on a Natural Laminar Flow (NLF) aerofoil, the RAE 5243, to reduce cruise drag at transonic flow conditions using Evolutionary Algorithms (EAs) coupled to a robust design approach. For the uncertain design parameters, the position of boundary layer transition (xtr) and the coefficient of lift (Cl) are considered (250 stochastic samples in total). Two robust design methods are considered; the first uses a standard robust design method, which evaluates one design model at 250 stochastic conditions for uncertainty. The second combines the standard robust design method with the concept of hierarchical (multi-population) sampling (250, 50, 15) for uncertainty. Numerical results show that the evolutionary optimisation method coupled to uncertainty design techniques produces useful and reliable Pareto optimal SCB shapes which have low sensitivity and high aerodynamic performance while achieving significant total drag reduction. In addition, the results show the benefit of using the hierarchical robust method for detailed uncertainty design optimisation.
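A robust fitness of the kind described, evaluating each candidate design over many stochastic samples of the uncertain transition location (xtr) and lift coefficient (Cl), can be sketched as follows. The toy drag function and the simple evolution strategy below are stand-ins for the CFD evaluation and the EAs used in the paper, included only to illustrate the mean-plus-variability robust formulation.

```python
import numpy as np

rng = np.random.default_rng(2)

def drag(design, xtr, cl):
    """Toy stand-in for the CFD drag evaluation of an SCB design."""
    bump_pos, bump_height = design
    return ((bump_pos - 0.55 - 0.1 * xtr) ** 2
            + (bump_height - 0.01 * cl) ** 2 + 0.002)

def robust_fitness(design, n_samples=250):
    """Mean + std of drag over stochastic samples of the uncertain
    transition location (xtr) and lift coefficient (Cl)."""
    xtr = rng.uniform(0.3, 0.7, n_samples)
    cl = rng.uniform(0.5, 0.8, n_samples)
    d = drag(design, xtr, cl)
    return d.mean() + d.std()

# Minimal (mu + lambda) evolution strategy over the two bump parameters.
pop = rng.uniform([0.4, 0.0], [0.7, 0.01], size=(20, 2))
for gen in range(50):
    children = pop + rng.normal(0, 0.01, pop.shape)
    both = np.vstack([pop, children])
    fitness = np.array([robust_fitness(x) for x in both])
    pop = both[np.argsort(fitness)[:20]]      # keep the 20 most robust designs

print("best robust design (bump position, height):", pop[0])
```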
Abstract:
Background: When observers are asked to identify two targets in rapid sequence, they often suffer profound performance deficits for the second target, even when the spatial location of the targets is known. This attentional blink (AB) is usually attributed to the time required to process a previous target, implying that a link should exist between individual differences in information processing speed and the AB. Methodology/Principal Findings: The present work investigated this question by examining the relationship between a rapid automatized naming task typically used to assess information-processing speed and the magnitude of the AB. The results indicated that faster processing actually resulted in a greater AB, but only when targets were presented amongst high similarity distractors. When target-distractor similarity was minimal, processing speed was unrelated to the AB. Conclusions/Significance: Our findings indicate that information-processing speed is unrelated to target processing efficiency per se, but rather to individual differences in observers' ability to suppress distractors. This is consistent with evidence that individuals who are able to avoid distraction are more efficient at deploying temporal attention, but argues against a direct link between general processing speed and efficient information selection.
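As a rough illustration of the analysis described (not the study's data or code), the snippet below computes an attentional blink magnitude per observer as the drop in second-target accuracy at short lags relative to long lags, and correlates it with naming-task completion time; the data-generating assumptions are arbitrary.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(3)
n_obs = 40

# Synthetic per-observer data (assumed structure, for illustration only).
ran_time = rng.normal(30, 5, n_obs)     # naming-task completion time (s)
t2_acc_short = np.clip(0.9 - 0.01 * (35 - ran_time) + rng.normal(0, 0.05, n_obs), 0, 1)
t2_acc_long = np.clip(0.9 + rng.normal(0, 0.03, n_obs), 0, 1)

# AB magnitude: drop in T2|T1 accuracy at short lags relative to long lags.
ab_magnitude = t2_acc_long - t2_acc_short

r, p = pearsonr(ran_time, ab_magnitude)
print(f"r = {r:.2f}, p = {p:.3f}")   # faster naming (smaller time) -> larger AB here
```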
Abstract:
Contact lenses are a common method for the correction of refractive errors of the eye. While there have been significant advancements in contact lens designs and materials over the past few decades, the lenses still represent a foreign object in the ocular environment and may lead to physiological as well as mechanical effects on the eye. When contact lenses are placed in the eye, the ocular anatomical structures behind and in front of the lenses are directly affected. This thesis presents a series of experiments that investigate the mechanical and physiological effects of the short-term use of contact lenses on anterior and posterior corneal topography, corneal thickness, the eyelids, tarsal conjunctiva and tear film surface quality. The experimental paradigm used in these studies was a repeated measures, cross-over design in which subjects wore various types of contact lenses on different days and the lenses were varied in one or more key parameters (e.g. material or design). Both older and newer lens materials were investigated; soft and rigid lenses were used; high and low oxygen permeability materials were tested; toric and spherical lens designs were examined; and high and low powers and small and large diameter lenses were used in the studies. To establish the natural variability in the ocular measurements used in the studies, each experiment also contained at least one “baseline” day on which an identical measurement protocol was followed with no contact lenses worn. In this way, changes associated with contact lens wear were considered in relation to those changes that occurred naturally during the 8-hour period of the experiment. In the first study, the regional distribution and magnitude of change in corneal thickness and topography were investigated in the anterior and posterior cornea after short-term use of soft contact lenses in 12 young adults using the Pentacam. Four different types of contact lenses (Silicone Hydrogel/Spherical/–3D, Silicone Hydrogel/Spherical/–7D, Silicone Hydrogel/Toric/–3D and HEMA/Toric/–3D) of different materials, designs and powers were worn for 8 hours each, on 4 different days. The natural diurnal changes in corneal thickness and curvature were measured on two separate days before any contact lens wear. Significant diurnal changes in corneal thickness and curvature within the duration of the study were observed, and these were taken into consideration when calculating the contact lens induced corneal changes. Corneal thickness changed significantly with lens wear, and the greatest corneal swelling was seen with the hydrogel (HEMA) toric lens, with noticeable regional swelling of the cornea beneath the stabilization zones, the thickest regions of the lenses. The anterior corneal surface generally showed a slight flattening with lens wear. All contact lenses resulted in central posterior corneal steepening, which correlated with the relative degree of corneal swelling. The corneal swelling induced by the silicone hydrogel contact lenses was typically less than the natural diurnal thinning of the cornea over this same period (i.e. net thinning). This highlights why it is important to consider the natural diurnal variations in corneal thickness observed from morning to afternoon to accurately interpret contact lens induced corneal swelling.
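The diurnal correction described in the first study can be expressed as a simple calculation: the thickness change measured on the lens-wear day minus the change measured at the same times on the no-lens baseline day. The sketch below, with illustrative thickness values, shows how an apparent thinning on the lens-wear day can still correspond to net lens-induced swelling.

```python
def diurnal_corrected_swelling(baseline_am, baseline_pm, lens_am, lens_pm):
    """Contact-lens-induced corneal swelling (%) corrected for natural
    diurnal change, using thickness values in micrometres.

    baseline_* : thickness on a no-lens day, morning and afternoon
    lens_*     : thickness on a lens-wear day, morning and afternoon
    """
    diurnal_change = (baseline_pm - baseline_am) / baseline_am * 100.0
    raw_change = (lens_pm - lens_am) / lens_am * 100.0
    return raw_change - diurnal_change

# Illustrative values (um): the cornea naturally thins over the day, so a
# small apparent thinning on the lens-wear day can still mean net swelling.
print(diurnal_corrected_swelling(540.0, 532.0, 541.0, 538.0))  # ~ +0.9 %
```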
In the second experiment, the relative influence of lenses of different rigidity (polymethyl methacrylate – PMMA, rigid gas permeable – RGP and silicone hydrogel – SiHy) and diameter (9.5, 10.5 and 14.0 mm) on corneal thickness, topography, refractive power and wavefront error was investigated. Four different types of contact lenses (PMMA/9.5, RGP/9.5, RGP/10.5, SiHy/14.0) were worn by 14 young healthy adults for a period of 8 hours on 4 different days. There was a clear association between fluorescein fitting pattern characteristics (i.e. regions of minimum clearance in the fluorescein pattern) and the resulting corneal shape changes. PMMA lenses resulted in significant corneal swelling (more in the centre than the periphery) along with anterior corneal steepening and posterior flattening. RGP lenses, on the other hand, caused less corneal swelling (more in the periphery than the centre) along with opposite effects on corneal curvature: anterior corneal flattening and posterior steepening. RGP lenses also resulted in a clinically and statistically significant decrease in corneal refractive power (ranging from 0.99 to 0.01 D), large enough to affect vision and require adjustment of the lens power. Wavefront analysis also showed a significant increase in higher order aberrations after PMMA lens wear, which may partly explain previous reports of "spectacle blur" following PMMA lens wear. We further explored corneal curvature, thickness and refractive changes with back surface toric and spherical RGP lenses in a group of 6 subjects with toric corneas. The lenses were worn for 8 hours and measurements were taken before and after lens wear, as in the previous experiments. Both lens types caused anterior corneal flattening and a decrease in corneal refractive power, but the changes were greater with the spherical lens. The spherical lens also caused a significant decrease in WTR astigmatism (WTR astigmatism defined as the major axis within 30 degrees of horizontal). Both lenses caused slight posterior corneal steepening and corneal swelling, with a greater effect in the periphery compared to the central cornea. Eyelid position, lid-wiper staining and tarsal conjunctival staining were also measured in Experiment 2 after short-term use of the rigid and SiHy contact lenses. Digital photos of the external eyes were captured for lid position analysis. The lid-wiper region of the marginal conjunctiva was stained using fluorescein and lissamine green dyes, and digital photos were graded by an independent masked observer. A grading scale was developed in order to describe the tarsal conjunctival staining. A significant decrease in palpebral aperture height (blepharoptosis) was found after wearing the PMMA/9.5 and RGP/10.5 lenses. All three rigid contact lenses caused a significant increase in lid-wiper and tarsal staining after 8 hours of lens wear. There was also a significant diurnal increase in tarsal staining, even without contact lens wear. These findings highlight the need for better contact lens edge designs to minimise the interactions between the lid and the contact lens edge during blinking, and for more lubricious contact lens surfaces to reduce ocular surface micro-trauma due to friction. Tear film surface quality (TFSQ) was measured using a high-speed videokeratoscopy technique in Experiment 2. TFSQ was worse with all of the lenses (PMMA/9.5, RGP/9.5, RGP/10.5 and SiHy/14.0) compared to baseline in the afternoon (after 8 hours), during both normal and suppressed blinking conditions.
The reduction in TFSQ was similar with all the contact lenses used, irrespective of their material and diameter. An unusual pattern of change in TFSQ under suppressed blinking conditions was also found: TFSQ with a contact lens was found to decrease until a certain time, after which it improved to a value even better than that of the bare eye. This is likely to be due to the tear film drying completely over the surface of the contact lenses. The findings of this study also show that there is still scope for improvement in contact lens materials, in terms of better wettability and hydrophilicity, in order to improve TFSQ and patient comfort. These experiments showed that a variety of changes can occur in the anterior eye as a result of the short-term use of a range of commonly used contact lens types. The greatest corneal changes occurred with lenses manufactured from the older HEMA and PMMA materials, whereas modern SiHy and rigid gas permeable materials caused more subtle changes in corneal shape and thickness. All lenses caused signs of micro-trauma to the lid wiper and palpebral conjunctiva, although rigid lenses appeared to cause more significant changes. Tear film surface quality was also significantly reduced with all types of contact lenses. These short-term changes in the anterior eye are potential markers for further long-term changes, and the relative differences between lens types that we have identified provide an indication of areas of contact lens design and manufacture that warrant further development.
Abstract:
A 4-cylinder Ford 2701C test engine was used in this study to explore the impact of ethanol fumigation on gaseous and particle emission concentrations. The fumigation technique delivered vaporised ethanol into the intake manifold of the engine, using an injector, a pump and pressure regulator, a heat exchanger for vaporising the ethanol, and a separate fuel tank and lines. Gaseous (nitric oxide (NO), carbon monoxide (CO) and hydrocarbon (HC)) and particulate (particle mass (PM2.5) and particle number) emissions testing was conducted at intermediate speed (1700 rpm) using four load settings, with ethanol substitution percentages ranging from 10 to 40% (by energy). With ethanol fumigation, NO and PM2.5 emissions were reduced, whereas CO and HC emissions increased considerably and particle number emissions increased at most test settings. It was found that ethanol fumigation reduced the excess air factor for the engine, and this led to increased emissions of CO and HC, but decreased emissions of NO. PM2.5 emissions were reduced with ethanol fumigation, as ethanol has a very low “sooting” tendency. This is due to the higher hydrogen-to-carbon ratio of this fuel, and also because ethanol does not contain aromatics, both of which are known soot precursors. The use of a diesel oxidation catalyst (as an after-treatment device) is recommended to achieve a reduction in the four pollutants that are currently regulated for compression ignition engines. The increase in particle number emissions with ethanol fumigation was due to the formation of volatile (organic) particles; consequently, using a diesel oxidation catalyst will also assist in reducing particle number emissions.
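The ethanol substitution percentage "by energy" quoted above is conventionally computed from the fuel energy flows. The sketch below shows one way to do this; the mass flow rates are made up, and the lower heating values are typical literature figures rather than values from this study.

```python
def ethanol_energy_substitution(m_dot_ethanol, m_dot_diesel,
                                lhv_ethanol=26.9, lhv_diesel=42.5):
    """Ethanol substitution (% by energy) for fumigation.

    m_dot_* : fuel mass flow rates (kg/h)
    lhv_*   : lower heating values (MJ/kg), typical literature values
    """
    e_eth = m_dot_ethanol * lhv_ethanol
    e_die = m_dot_diesel * lhv_diesel
    return 100.0 * e_eth / (e_eth + e_die)

# Example: flow rates giving roughly 25% substitution by energy.
print(ethanol_energy_substitution(m_dot_ethanol=2.0, m_dot_diesel=3.8))
```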
Abstract:
In order to support intelligent transportation system (ITS) road safety applications such as collision avoidance, lane departure warnings and lane keeping, Global Navigation Satellite System (GNSS) based vehicle positioning systems have to provide lane-level (0.5 to 1 m) or even in-lane-level (0.1 to 0.3 m) accurate and reliable positioning information to vehicle users. However, current vehicle navigation systems equipped with a single frequency GPS receiver can only provide road-level accuracy of 5-10 meters. The positioning accuracy can be improved to sub-meter level or better with augmented GNSS techniques such as Real Time Kinematic (RTK) and Precise Point Positioning (PPP), which have traditionally been used in land surveying or in slowly moving environments. In these techniques, GNSS correction data generated from a local, regional or global network of GNSS ground stations are broadcast to the users via various communication data links, mostly 3G cellular networks and communication satellites. This research aimed to investigate the performance of precise positioning systems operating in high mobility environments. This involved evaluating the performance of both RTK and PPP techniques using: i) a state-of-the-art dual frequency GPS receiver; and ii) a low-cost single frequency GNSS receiver. Additionally, this research evaluated the effectiveness of several operational strategies in reducing the load on data communication networks due to correction data transmission, which may be problematic for the future deployment of wide-area ITS services. These strategies include the use of different data transmission protocols, different correction data format standards, and correction data transmission at less frequent intervals. A series of field experiments were designed and conducted for each research task. Firstly, the performance of the RTK and PPP techniques was evaluated in both static and kinematic (highway driving at speeds exceeding 80 km/h) experiments. RTK solutions achieved an RMS precision of 0.09 to 0.2 meters in static tests and 0.2 to 0.3 meters in kinematic tests, while PPP reported 0.5 to 1.5 meters in static tests and 1 to 1.8 meters in kinematic tests using the RTKlib software. These RMS precision values could be further improved if better RTK and PPP algorithms were adopted. The test results also showed that RTK may be more suitable for lane-level accuracy vehicle positioning. The professional grade (dual frequency) and mass-market grade (single frequency) GNSS receivers were tested for their RTK performance in static and kinematic modes. The analysis showed that mass-market grade receivers provide good solution continuity, although the overall positioning accuracy is worse than that of the professional grade receivers. In an attempt to reduce the load on the data communication network, we first evaluated the use of different correction data format standards, namely the RTCM version 2.x and RTCM version 3.0 formats. A 24-hour transmission test was conducted to compare the network throughput. The results showed that a 66% reduction in network throughput can be achieved by using the newer RTCM version 3.0 format, compared to the older RTCM version 2.x format. Secondly, experiments were conducted to examine the use of two data transmission protocols, TCP and UDP, for correction data transmission through the Telstra 3G cellular network.
The performance of each transmission method was analysed in terms of packet transmission latency, packet dropout, packet throughput, packet retransmission rate, etc. The overall network throughput and latency of UDP data transmission were 76.5% and 83.6% of those of TCP data transmission respectively, while the overall accuracy of the positioning solutions remained at the same level. Additionally, due to the nature of UDP transmission, it was found that 0.17% of UDP packets were lost during the kinematic tests, but this loss did not lead to a significant reduction in the quality of the positioning results. The experimental results from the static and kinematic field tests also showed that the mobile network communication may be blocked for a couple of seconds, but the positioning solutions can be kept at the required accuracy level by appropriate setting of the Age of Differential parameter. Finally, we investigated the effects of using less frequent correction data (transmitted at 1, 5, 10, 15, 20, 30 and 60 second intervals) on the precise positioning system. As the time interval increases, the percentage of ambiguity-fixed solutions gradually decreases, while the positioning error increases from 0.1 to 0.5 meters. The results showed that the positioning accuracy could still be kept at the in-lane level (0.1 to 0.3 m) when correction data were transmitted at intervals of up to 20 seconds.
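As an illustration of the UDP option examined above (not the actual test setup), the sketch below streams correction frames over UDP with Python's standard socket module; the host, port, interval and payload are placeholders, and a real deployment would forward RTCM 3 messages from a base station or caster.

```python
import socket
import time

CORRECTIONS_HOST = "203.0.113.10"   # placeholder base-station/relay address
CORRECTIONS_PORT = 2101             # placeholder port

def stream_corrections_udp(get_rtcm_frame, interval_s=1.0):
    """Send one correction frame per interval over UDP.

    get_rtcm_frame : callable returning the next correction message as bytes
    interval_s     : transmission interval; larger intervals cut network
                     load at the cost of fewer ambiguity-fixed solutions
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        while True:
            frame = get_rtcm_frame()
            if frame:
                # UDP has no retransmission: an occasional lost packet only
                # ages the corrections slightly at the rover.
                sock.sendto(frame, (CORRECTIONS_HOST, CORRECTIONS_PORT))
            time.sleep(interval_s)
    finally:
        sock.close()

# Usage (dummy payload standing in for real RTCM 3 frames):
# stream_corrections_udp(lambda: b"dummy correction frame", interval_s=5.0)
```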