Abstract:
Dispersion characteristics of respiratory droplets in indoor environments are of special interest in controlling the transmission of airborne diseases. This study adopts an Eulerian method to investigate the spatial concentration distribution and temporal evolution of exhaled and sneezed/coughed droplets in the 1.0–10.0 μm range in an office room with three air distribution methods: mixing ventilation (MV), displacement ventilation (DV), and under-floor air distribution (UFAD). The diffusion, gravitational settling, and deposition mechanisms of particulate matter are accounted for in the one-way-coupled Eulerian approach. The simulations show that exhaled droplets with diameters up to 10.0 μm from the normal respiration process are uniformly distributed in MV, while they are trapped at breathing height by thermal stratification in DV and UFAD, resulting in a high droplet concentration and a high exposure risk to other occupants. Sneezed/coughed droplets are diluted much more slowly in DV/UFAD than in MV, and the low air speed in the breathing zone in DV/UFAD can prolong the residence of droplets there.
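The abstract does not state the governing equation; a minimal sketch of the one-way-coupled Eulerian (drift-flux) droplet transport equation commonly used in such studies, where \(C\) is droplet concentration, \(\mathbf{u}\) the air velocity, \(\mathbf{v}_s\) the size-dependent gravitational settling velocity, and \(D_{\mathrm{eff}}\) the effective diffusivity (symbols assumed, not taken from the paper):

\[
\frac{\partial C}{\partial t} + \nabla \cdot \big[(\mathbf{u} + \mathbf{v}_s)\,C\big] = \nabla \cdot \left(D_{\mathrm{eff}}\,\nabla C\right),
\qquad
\mathbf{v}_s = \frac{\rho_p d_p^{2}\,\mathbf{g}}{18\,\mu}
\]

The Stokes settling term is what separates the size classes: it grows with the square of droplet diameter \(d_p\), so it matters far more for 10 μm droplets than for 1 μm ones, for which diffusion and advection dominate.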
Abstract:
Bag sampling techniques can be used to temporarily store an aerosol and therefore provide sufficient time to apply sensitive but slow instrumental techniques for recording detailed particle size distributions. Laboratory-based assessments of the method were conducted to examine size-dependent deposition loss coefficients for aerosols held in Velostat™ bags conforming to a horizontal cylindrical geometry. Deposition losses of NaCl particles in the range of 10 nm to 160 nm were analysed in relation to bag size, storage time, and sampling flow rate. The results suggest that the bag sampling method is most useful for moderately short sampling periods of about 5 minutes.
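A deposition loss coefficient of this kind is usually defined through first-order decay of the in-bag concentration; a sketch in the standard aerosol-physics convention (not quoted from the paper), with \(N(d_p, t)\) the number concentration at particle diameter \(d_p\) and \(\beta(d_p)\) the size-dependent loss coefficient:

\[
N(d_p, t) = N(d_p, 0)\, e^{-\beta(d_p)\, t}
\]

Fitting \(\ln N\) against storage time \(t\) within each size bin yields \(\beta(d_p)\), which is the quantity the bag-size, storage-time, and flow-rate comparisons act on.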
Abstract:
Problem: This study considers whether requiring learner drivers to complete a set number of hours while on a learner licence affects the number of hours of supervised practice that they undertake. It compares the amount of practice that learners in Queensland and New South Wales report undertaking. At the time the study was conducted, learner drivers in New South Wales were required to complete 50 hours of supervised practice, while those from Queensland were not. Method: Participants were approached outside driver licensing centres just after completing the practical driving test for their provisional (intermediate) licence. Those agreeing to participate were later interviewed over the phone and asked a range of questions covering socio-demographic details and the amount of supervised practice completed. Results: There was a significant difference in the amount of practice that learners reported undertaking. Participants from New South Wales reported completing significantly more practice (M = 73.3 hours, SD = 29.12 hours) on their learner licence than those from Queensland (M = 64.1 hours, SD = 51.05 hours). However, the distribution of hours of practice among the Queensland participants was bimodal: Queensland participants reported completing either much less or much more practice than the New South Wales average. Summary: While it appears that requiring learner drivers to complete a set number of hours may increase the average number of hours of practice obtained, it may also discourage drivers from obtaining additional practice over and above the required hours. Impact on Industry: The results of this study suggest that the implications of requiring learner drivers to complete a set number of hours of supervised practice are complex. In some cases, policy makers may inadvertently limit the number of hours learners obtain to the mandated amount rather than encouraging them to obtain as much practice as possible.
Abstract:
In this study, biometric and structural engineering tools have been used to examine a possible relationship within the Chuaria–Tawuia complex, and micro-FTIR (Fourier Transform Infrared Spectroscopy) analyses to understand the biological affinity of Chuaria circularis Walcott, collected from the Mesoproterozoic Suket Shales of the Vindhyan Supergroup and the Neoproterozoic Halkal Shales of the Bhima Group of peninsular India. Biometric analyses of well-preserved carbonized specimens show wide variation in morphology and a unimodal size distribution. We demonstrate, to a reasonable extent, that C. circularis was most likely part of a Tawuia-like cylindrical body of algal origin. Specimens with a notch/cleft and overlapping preservation, mostly recorded in the 3–5 mm size range, are of special interest. Five models proposed earlier for the life cycle of C. circularis are discussed. A new model, termed the ‘Hybrid model’, based on the present multidisciplinary study assessing cylindrical and spherical shapes and suggesting variable cell wall strength and algal affinity, is proposed. This model demonstrates the varied geometrical morphologies assumed by Chuaria and Tawuia, and also shows the inter-relationship of the Chuaria–Tawuia complex. A structural engineering tool (thin-walled pressure vessel theory) was applied to investigate the implications of the possible geometrical shapes (sphere and cylinder), membrane (cell wall) stresses, and ambient pressure environment for the morphologically similar C. circularis and Tawuia. The results suggest that the membrane stresses developed in structures similar to the Chuaria–Tawuia complex were directly proportional to radius and inversely proportional to wall thickness in both cases. For a hollow cylindrical structure, the membrane stress in the circumferential direction (hoop stress) is twice that in the longitudinal direction, indicating that rupture or fragmentation in the body of Tawuia would have occurred due to hoop stress. It appears that the notches and discontinuities seen in some specimens of Chuaria may be related to such rupture, suggesting their possible location in the three-dimensional Chuaria. The micro-FTIR spectra of C. circularis are characterized by both aliphatic and aromatic absorption bands. Aliphaticity is indicated by prominent alkyl group bands between 2800–3000 and 1300–1500 cm−1. The prominent absorption signals at 700–900 cm−1 (peaking at 875 and 860 cm−1) are due to aromatic CH out-of-plane deformation. A narrow, strong band centred at 1540 cm−1 could be a COOH band. The presence of strong aliphatic bands in the FTIR spectra suggests that the biogeopolymer of C. circularis is aliphatic in nature. The wall chemistry indicates the presence of ‘algaenan’, a biopolymer of algae.
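The hoop-versus-longitudinal argument follows from the standard thin-walled pressure vessel relations (textbook forms, not quoted from the paper), for internal pressure \(p\), radius \(r\), and wall thickness \(t\):

\[
\sigma_{\text{hoop}} = \frac{p\,r}{t}, \qquad
\sigma_{\text{long}} = \frac{p\,r}{2\,t}, \qquad
\sigma_{\text{sphere}} = \frac{p\,r}{2\,t}
\]

Both stresses scale with \(r/t\), matching the proportionality claim above, and \(\sigma_{\text{hoop}} = 2\,\sigma_{\text{long}}\) is why a cylindrical Tawuia body would be expected to split along its length before failing circumferentially.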
Abstract:
Principal Topic: According to Shane & Venkataraman (2000), entrepreneurship consists of the recognition and exploitation of venture ideas, or opportunities as they are often called, to create future goods and services. This definition puts venture ideas at the heart of entrepreneurship research. Substantial research has been done on venture ideas in order to enhance our understanding of this phenomenon (e.g. Choi & Shepherd, 2004; Shane, 2000; Shepherd & DeTienne, 2005). However, we are yet to learn what factors drive entrepreneurs' perceptions of the relative attractiveness of venture ideas, and how important different idea characteristics are for such assessments. Ruef (2002) recognized that there is an uneven distribution of venture ideas undertaken by entrepreneurs in the USA. A majority introduce either a new product/service or access a new market or market segment; a smaller percentage introduce a new method of production, organizing, or distribution. This implies that some forms of venture ideas are perceived by entrepreneurs as more important or valuable than others. However, Ruef does not explain why some forms of venture ideas are more common than others among entrepreneurs. Therefore, this study empirically investigates what factors affect the attractiveness of venture ideas, as well as their relative importance. Based on two key characteristics of venture ideas, namely newness and relatedness, our study investigates how different types and degrees of newness and relatedness affect their attractiveness as perceived by expert entrepreneurs. Methodology/Key Propositions: According to Schumpeter (1934), entrepreneurs introduce different types of venture ideas, such as new products/services, new methods of production, entry into new markets/customer segments, and new methods of promotion. Further, according to Schumpeter (1934) and Kirzner (1973), venture ideas introduced to the market range along a continuum from innovative to imitative. The distinction between these two extremes highlights an important property of venture ideas, namely their newness. To gain competitive advantage or above-average returns, entrepreneurs introduce venture ideas that may be new to the world, new to the market they seek to enter, substantial improvements on current offerings, or imitations of existing offerings. Expert entrepreneurs may be more attracted to venture ideas that exhibit a high degree of newness because higher newness is coupled with increased market potential (Drucker, 1985). Moreover, certain individual characteristics also affect the attractiveness of a venture idea. According to Shane (2000), an individual's prior knowledge is closely associated with the recognition of venture ideas. Sarasvathy's (2001) effectuation theory proposes a high degree of relatedness between venture ideas and the resource position of the individual. Thus, entrepreneurs may be more attracted to venture ideas that are closely aligned with the knowledge and/or resources they already possess. On the other hand, the potential financial gain (Shepherd & DeTienne, 2005) may be larger for ideas that are not close to the entrepreneur's home turf. Therefore, potential financial gain is a stimulus that has to be considered separately.
We aim to examine how entrepreneurs weigh considerations of different forms of newness and relatedness, as well as potential financial gain, in assessing the attractiveness of venture ideas. We use conjoint analysis to determine how expert entrepreneurs develop preferences for venture ideas involving different degrees of newness, relatedness, and potential gain. This analytical method makes it possible to measure the trade-offs they make when choosing a particular venture idea. The conjoint analysis estimates respondents' preferences in terms of utilities (or part-worths) for each level of newness, relatedness, and potential gain. A sample of 50 expert entrepreneurs who received young entrepreneurship awards in Sri Lanka in 2007 is used for interviews. Each respondent is interviewed using 32 scenarios that present different combinations of possible idea profiles for their consideration. Conjoint software (SPSS) is used to analyse the data. Results and Implications: The data collection for this study is still underway. However, the results will provide information on the attractiveness of each level of newness, relatedness, and potential gain of a venture idea, and on their relative importance in a business model. Additionally, these results provide important implications for entrepreneurs, consultants, and other stakeholders regarding the importance of different attributes of venture ideas at different levels, so that they can make decisions accordingly.
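The abstract names conjoint analysis but not the computation; a minimal sketch of part-worth estimation via dummy-coded least squares, on entirely hypothetical profiles and ratings (the study's actual attributes, levels, and SPSS procedure are not reproduced):

```python
# Minimal conjoint-style part-worth estimation via dummy-coded OLS.
# Hypothetical data: each profile combines a newness level, a relatedness
# level, and a potential-gain level; ratings are illustrative only.
import numpy as np

levels = {"newness": 3, "relatedness": 2, "gain": 2}  # assumed level counts

rng = np.random.default_rng(0)
n_profiles = 32
# Randomly generated profiles (columns: newness, relatedness, gain).
profiles = np.column_stack([rng.integers(0, k, n_profiles) for k in levels.values()])
ratings = rng.normal(5, 1, n_profiles)  # stand-in for respondent preferences

# Dummy-code each attribute, dropping one reference level per attribute.
X = [np.ones(n_profiles)]
for j, k in enumerate(levels.values()):
    for level in range(1, k):
        X.append((profiles[:, j] == level).astype(float))
X = np.column_stack(X)

# Least-squares fit: coefficients are part-worths relative to reference levels.
part_worths, *_ = np.linalg.lstsq(X, ratings, rcond=None)
print(part_worths)
```

Each coefficient is the utility of a level relative to the dropped reference level of its attribute; relative attribute importance is then typically taken from the range of part-worths within each attribute.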
Abstract:
Conventional cameras have limited dynamic range, and as a result vision-based robots cannot effectively view an environment made up of both sunny outdoor areas and darker indoor areas. This paper presents an approach to extending the effective dynamic range of a camera by changing its exposure level in real time to form a sequence of images that collectively cover a wide range of radiance. Individual control algorithms for each image have been developed to maximize the viewable area across the sequence. Spatial discrepancies between images, caused by the moving robot, are reduced by a real-time image registration process. The sequence is then combined by merging color and contour information. By integrating these techniques it becomes possible to operate a vision-based robot in scenes with a wide radiance range.
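A minimal sketch of the align-then-merge stage using OpenCV, assuming a pre-captured bracketed sequence; the paper's per-image exposure controllers and its color/contour merging are not reproduced, with exposure fusion (Mertens) standing in for the latter:

```python
# Merge a bracketed exposure sequence into one wide-dynamic-range frame.
# Sketch only: file names are hypothetical, and the paper's real-time
# exposure control is not shown.
import cv2

paths = ["under.png", "mid.png", "over.png"]  # hypothetical bracketed frames
images = [cv2.imread(p) for p in paths]

# Median-threshold bitmap alignment compensates small inter-frame motion,
# a stand-in for the paper's real-time registration step.
cv2.createAlignMTB().process(images, images)

# Exposure fusion (Mertens) blends the sequence without radiometric calibration.
fused = cv2.createMergeMertens().process(images)  # float image in [0, 1]
cv2.imwrite("fused.png", (fused * 255).clip(0, 255).astype("uint8"))
```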
Abstract:
Background: The quality of stormwater runoff from ports is significant as it can be an important source of pollution to the marine environment. This is also a significant issue for the Port of Brisbane as it is located in an area of high environmental values. Therefore, it is imperative to develop an in-depth understanding of stormwater runoff quality to ensure that appropriate strategies are in place for quality improvement, where necessary. To this end, the Port of Brisbane Corporation aimed to develop a port-specific stormwater model for the Fisherman Islands facility. This need had to be considered in the context of the proposed future development of the Port area. ----------------- The Project: The research project is an outcome of the collaborative Partnership between the Port of Brisbane Corporation (POBC) and Queensland University of Technology (QUT). A key feature of this Partnership is that it seeks to undertake research that assists the Port in strengthening its environmental custodianship of the Port area through ‘cutting edge’ research and its translation into practical application. ------------------ The project was separated into two stages. The first stage developed a quantitative understanding of the pollutant load generation potential of the existing land uses. This knowledge was then used as input for the stormwater quality model developed in the subsequent stage. The aim is to expand this model across the yet-to-be-developed port expansion area, in order to predict pollutant loads associated with stormwater flows from this area, with the longer-term objective of contributing to the development of ecological risk mitigation strategies for future expansion scenarios. ----------------- Study approach: Stage 1 of the overall study confirmed that Port land uses are unique in terms of the anthropogenic activities occurring on them. This uniqueness results in distinctive stormwater quality characteristics different to those of conventional urban land uses. Therefore, it was not scientifically valid to treat the Port as a single land use category or to consider it similar to any typical urban land use. The approach adopted in this study was very different to conventional modelling studies, where modelling parameters are developed through calibration. The field investigations undertaken in Stage 1 created fundamental knowledge on pollutant build-up and wash-off in different Port land uses. This knowledge was then used in computer modelling so that the specific characteristics of pollutant build-up and wash-off could be replicated. This meant that no calibration was involved, since measured build-up and wash-off parameters were used. ---------------- Conclusions: Stage 2 of the study was primarily undertaken using the SWMM stormwater quality model. It is a physically based model which replicates natural processes as closely as possible. The time step used and the catchment variability considered were adequate to accommodate the temporal and spatial variability of input parameters, and the parameters used in the modelling reflect the true nature of rainfall-runoff and pollutant processes to the best of currently available knowledge. In this study, the initial loss values adopted for the impervious surfaces are relatively high compared to values noted in the research literature.
However, given the scientifically valid approach used for the field investigations, it is appropriate to adopt the initial losses derived from this study for future modelling of Port land uses. The relatively high initial losses will significantly reduce both the runoff volume generated and the frequency of runoff events. Apart from initial losses, most of the other parameters used in SWMM modelling are generic to most modelling studies. Development of parameters for MUSIC model source nodes was one of the primary objectives of this study. MUSIC uses the mean and standard deviation of pollutant parameters based on a normal distribution. However, based on the values generated in this study, the variation of Event Mean Concentrations (EMCs) for Port land uses within the investigation period does not fit a normal distribution. This is possibly because only one specific location, the Port of Brisbane, was considered, unlike in the case of the MUSIC model, where a range of areas with different geographic and climatic conditions was investigated. Consequently, the assumptions used in MUSIC are not fully applicable to the analysis of water quality in Port land uses. Therefore, in using the parameters included in this report for MUSIC modelling, it is important to note that under- or over-estimation of annual pollutant loads may result. It is recommended that the annual pollutant load values given in the report be used as a guide to assess the accuracy of the modelling outcomes. A step-by-step guide for using the knowledge generated from this study for MUSIC modelling is given in Table 4.6. ------------------ Recommendations: The following recommendations are provided to further strengthen the cutting edge nature of the work undertaken: * It is important to further validate the approach recommended for stormwater quality modelling at the Port. Validation will require data collection in relation to rainfall, runoff and water quality from the selected Port land uses. Additionally, the recommended modelling approach could be applied to a soon-to-be-developed area to assess ‘before’ and ‘after’ scenarios. * In the modelling study, TSS was adopted as the surrogate parameter for other pollutants. This approach was based on other urban water quality research undertaken at QUT. The validity of this approach should be further assessed for Port land uses. * The adoption of TSS as a surrogate parameter for other pollutants, and the confirmation that the <150 μm particle size range was predominant in suspended solids in pollutant wash-off, give rise to a number of important considerations. The ability of the existing structural stormwater mitigation measures to remove the <150 μm particle size range needs to be assessed. The feasibility of introducing source control measures, as opposed to end-of-pipe measures, for stormwater quality improvement may also need to be considered.
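For context, the exponential build-up and wash-off forms that SWMM provides, into which the measured Stage 1 parameters would be substituted (generic formulation only; the study's fitted coefficients are not reproduced here):

\[
B(t) = B_{\max}\left(1 - e^{-k_b t}\right), \qquad
W(t) = k_w\, q(t)^{\,n_w}\, B(t)
\]

Here \(B(t)\) is the pollutant mass built up after \(t\) dry days (capped at \(B_{\max}\)), \(q(t)\) is the runoff rate, and \(k_b\), \(k_w\), \(n_w\) are the build-up and wash-off coefficients that the field measurements supply in place of calibration.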
Abstract:
Multi-level concrete buildings require substantial temporary formwork structures to support the slabs during construction. The primary function of this formwork is to safely disperse the applied loads so that the slab being constructed, or the portion of the permanent structure already constructed, is not overloaded. Multi-level formwork is a procedure in which a limited number of formwork and shoring sets are cycled up the building as construction progresses. In this process, each new slab is supported by a number of lower-level slabs, and the new slab load is, essentially, distributed to these supporting slabs in direct proportion to their relative stiffness. When a slab is post-tensioned using draped tendons, slab lift occurs as a portion of the slab self-weight is balanced, and the formwork and shores supporting that slab are unloaded by an amount equivalent to the load balanced by the post-tensioning. This produces a load distribution inherently different from that of a conventionally reinforced slab. Through theoretical modelling and extensive on-site shore load measurement, this research examines the effects of post-tensioning on multi-level formwork load distribution. The research demonstrates that the load distribution process for post-tensioned slabs allows for improvements to current construction practice. These enhancements include a shortening of the construction period, an improvement in the safety of multi-level formwork operations, and a reduction in the quantity of formwork materials required for a project. They are achieved through the general improvement in safety offered by post-tensioning during the various formwork operations. The research demonstrates a generally significant improvement in the factors of safety over those for conventionally reinforced slabs, occurring at all stages of the multi-level formwork operation. This general improvement allows for a shortening of the slab construction cycle time. Further, the low level of load redistribution that occurs during stripping operations makes post-tensioned slabs ideally suited to reshoring procedures. Provided the overall number of interconnected levels remains unaltered, it is possible to increase the number of reshored levels while reducing the number of undisturbed shoring levels without altering the factors of safety, thereby reducing the overall quantity of formwork and shoring materials.
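The stiffness-proportional sharing described above can be written directly (a sketch with assumed symbols, not the thesis's notation): if a new slab applies load \(P\) and supporting slab \(i\) has stiffness \(K_i\), its share is

\[
P_i = \frac{K_i}{\sum_j K_j}\, P
\]

Post-tensioning modifies this picture by balancing part of the self-weight at the newly stressed level, so roughly \(P - P_b\) (with \(P_b\) the balanced load) remains to be carried through the shores below.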
Abstract:
The concept of radar was developed for the estimation of the distance (range) and velocity of a target from a receiver. The distance measurement is obtained by measuring the time taken for the transmitted signal to propagate to the target and return to the receiver. The target's velocity is determined by measuring the Doppler-induced frequency shift of the returned signal caused by the rate of change of the time delay from the target. As researchers further developed conventional radar systems, it became apparent that additional information was contained in the backscattered signal and that this information could in fact be used to describe the shape of the target itself. This is because a target can be considered to be a collection of individual point scatterers, each of which has its own velocity and time delay. Delay-Doppler parameter estimation of each of these point scatterers thus corresponds to a mapping of the target's range and cross-range, producing an image of the target. Much research has been done in this area since the early radar imaging work of the 1960s. At present, radar imaging falls into two main categories: the case where the backscattered signal is considered to be deterministic, and the case where it is of a stochastic nature. In both cases, the information describing the target's scattering function is extracted by use of the ambiguity function, a function which correlates the backscattered signal in time and frequency with the transmitted signal. In practical situations, it is often necessary to have the transmitter and the receiver of the radar system sited at different locations. The problem in these situations is that a reference signal must be present in order to calculate the ambiguity function, which in turn requires detailed phase information about the transmitted signal at the receiver. It is this latter problem which has led to the investigation of radar imaging using time-frequency distributions. As shown in this thesis, the phase information about the transmitted signal can be extracted from the backscattered signal using time-frequency distributions. The principal aim of this thesis was the development of, and subsequent discussion of, the theory of radar imaging using time-frequency distributions. Consideration is first given to the case where the target is diffuse, i.e. where the backscattered signal has temporal stationarity and a spatially white power spectral density. The complementary situation is also investigated, i.e. where the target is no longer diffuse, but some degree of correlation exists between the time-frequency points. Computer simulations are presented to demonstrate the concepts and theories developed in the thesis. For the proposed radar system to be practically realisable, both the time-frequency distributions and the associated algorithms developed must be able to be implemented in a timely manner. For this reason an optical architecture is proposed, specifically designed to obtain the required time and frequency resolution for laser radar imaging. The complex light amplitude distributions produced by this architecture have been computer simulated using an optical compiler.
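The ambiguity function referred to above has a standard form (one common convention; sign and normalisation conventions differ between texts): for transmitted signal \(s(t)\), delay \(\tau\), and Doppler shift \(\nu\),

\[
\chi(\tau, \nu) = \int_{-\infty}^{\infty} s(t)\, s^{*}(t - \tau)\, e^{\,j 2\pi \nu t}\, dt
\]

In the bistatic setting the thesis addresses, it is the cross-ambiguity between the received and reference signals that maps each point scatterer to its delay-Doppler (range/cross-range) cell.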
Abstract:
Voltage Unbalance (VU) is a power quality issue arising in low voltage residential distribution networks due to the random location and rating of single-phase rooftop photovoltaic (PV) units. In this paper, an analysis is carried out to investigate how PV installations, through their random location and power generation capacity, can cause an increase in VU, and several efficient practical methods for VU reduction are discussed. Based on this analysis, it is shown that the installation of a DSTATCOM can reduce VU. The best possible location for the DSTATCOM and an efficient control method for reducing VU are presented, and the results are verified through PSCAD/EMTDC and Monte Carlo simulations.
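For reference, the symmetrical-component definition of voltage unbalance typically used in such studies (the abstract does not restate which index the paper adopts):

\[
\mathrm{VU\%} = \frac{\lvert V_{-} \rvert}{\lvert V_{+} \rvert} \times 100
\]

where \(V_{+}\) and \(V_{-}\) are the positive- and negative-sequence voltage components; single-phase PV generation injected unevenly across the three phases raises \(\lvert V_{-} \rvert\) and hence the index.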
Abstract:
There has been considerable research conducted over the last 20 years focused on predicting motor vehicle crashes on transportation facilities. The range of statistical models commonly applied includes binomial, Poisson, Poisson-gamma (or negative binomial), zero-inflated Poisson and negative binomial (ZIP and ZINB), and multinomial probability models. Given the range of possible modeling approaches and the host of assumptions accompanying each, making an intelligent choice for modeling motor vehicle crash data is difficult. There is little discussion in the literature comparing different statistical modeling approaches, identifying which statistical models are most appropriate for modeling crash data, and providing a strong justification from basic crash principles. In the recent literature, it has been suggested that the motor vehicle crash process can successfully be modeled by assuming a dual-state data-generating process, which implies that entities (e.g., intersections, road segments, pedestrian crossings, etc.) exist in one of two states: perfectly safe and unsafe. As a result, the ZIP and ZINB are two models that have been applied to account for the preponderance of “excess” zeros frequently observed in crash count data. The objective of this study is to provide defensible guidance on how to appropriately model crash data. We first examine the motor vehicle crash process using theoretical principles and a basic understanding of the crash process. It is shown that the fundamental crash process follows Bernoulli trials with unequal probabilities of independent events, also known as Poisson trials. We examine the evolution of statistical models as they apply to the motor vehicle crash process, and indicate how well they statistically approximate it. We also present the theory behind dual-state count models, and note why they have become popular for modeling crash data. A simulation experiment is then conducted to demonstrate how crash data give rise to the “excess” zeros frequently observed. It is shown that the Poisson and other mixed probabilistic structures are approximations assumed for modeling the motor vehicle crash process. Furthermore, it is demonstrated that under certain (fairly common) circumstances excess zeros are observed, and that these circumstances arise from low exposure and/or inappropriate selection of time/space scales, not from an underlying dual-state process. In conclusion, carefully selecting the time/space scales for analysis, including an improved set of explanatory variables and/or unobserved heterogeneity effects in count regression models, or applying small-area statistical methods (observations with low exposure) represent the most defensible modeling approaches for datasets with a preponderance of zeros.
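A minimal sketch of the kind of simulation the abstract describes, with assumed (illustrative) parameters: modeling crashes as Poisson trials, i.e. independent Bernoulli trials with small, heterogeneous probabilities, produces a large share of zero-count sites at low exposure without any "perfectly safe" state:

```python
# Simulate crash counts as Poisson trials (Bernoulli with unequal p)
# at low exposure, and count the zeros that result.
import numpy as np

rng = np.random.default_rng(1)
n_sites, n_trials = 1000, 50          # sites and exposure (trials per site)
p = rng.uniform(0.0, 0.02, n_sites)   # heterogeneous per-trial crash risk

counts = rng.binomial(n_trials, p)    # observed crash counts per site
print("share of zero-crash sites:", np.mean(counts == 0))
# Low exposure alone yields a preponderance of zeros; no "perfectly
# safe" state is needed to explain them.
```

Increasing `n_trials` (exposure) or aggregating over longer time/space scales shrinks the zero share toward what a plain Poisson fit would predict, which is the abstract's point about scale selection.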
Abstract:
Before 2001, most Africans immigrating to Australia were white South Africans and Zimbabweans who arrived as economic and family-reunion migrants (Cox, Cooper & Adepoju, 1999). Black African communities are a more recent addition to the Australian landscape, with most entering Australia as refugees after 2001. African refugees are a particularly disadvantaged immigrant group, which the Department of Immigration and Multicultural Affairs (in the Community Relations Commission of New South Wales, 2006, p. 23) suggests requires high levels of settlement support. Decision makers and settlement service providers need settlement data on these communities so that they can be effective in planning, budgeting and delivering support where it is most needed. Settlement data are also useful for determining the challenges these communities face in trying to establish themselves in resettlement. However, there has been no verification of existing secondary data sources, nor any previous formal study of African refugee settlement geography in Southeast Queensland. This research addresses the knowledge gap by using a mixed-method approach to identify and describe the distribution and population size of eight African communities in Southeast Queensland, examine secondary migration patterns in these communities, and assess the relationship between these geographic features and housing, a critical factor in successful settlement. Significant discrepancies exist between the primary data gathered in the study and existing secondary data on the population size and distribution of the communities. Results also reveal a tension between the socio-cultural forces and the housing and economic imperatives driving secondary migration in the communities, and a general lack of engagement by African refugees with structured support networks. These findings have a wide range of implications for policy and for groups that provide settlement support to these communities.
Abstract:
Tobacco yellow dwarf virus (TbYDV, family Geminiviridae, genus Mastrevirus) is an economically important pathogen causing summer death in bean (Phaseolus vulgaris L.) and yellow dwarf disease in tobacco (Nicotiana tabacum L.). Prior to the commencement of this project, little was known about the epidemiology of TbYDV, its vector and its host-plant range. As a result, disease control strategies have been restricted to regular, poorly timed insecticide applications which are largely ineffective, environmentally hazardous and expensive. To address this problem, this PhD project was carried out to better understand the epidemiology of TbYDV, to identify its host-plants and vectors, and to characterise the population dynamics and feeding physiology of the main insect vector and other possible vectors. The host-plants and possible leafhopper vectors of TbYDV were assessed over three consecutive growing seasons at seven field sites on commercial tobacco- and bean-growing properties in the Ovens Valley, northeastern Victoria. Leafhoppers and plants were collected and tested for the presence of TbYDV by PCR. Using sweep nets, twenty-three leafhopper species were identified at the seven sites, with Orosius orientalis the predominant leafhopper. Of the 23 leafhopper species screened for TbYDV, only Orosius orientalis and Anzygina zealandica tested positive. Forty-two different plant species were also identified at the seven sites and tested. Of these, TbYDV was only detected in four dicotyledonous species: Amaranthus retroflexus, Phaseolus vulgaris, Nicotiana tabacum and Raphanus raphanistrum. Using a quadrat survey, the temporal distribution and diversity of vegetation at four of the field sites were monitored in order to assess the presence of, and changes in, potential host-plants for the leafhopper vector(s) and the virus. These surveys showed that plant composition and the climatic conditions at each site were the major influences on vector numbers, virus presence and the subsequent occurrence of tobacco yellow dwarf and bean summer death diseases. Sites with the lowest incidence of disease had the highest proportion of monocotyledonous plants, which are non-hosts for both the vector and the virus. In contrast, the sites with the highest disease incidence had more host-plant species for both vector and virus, and experienced higher temperatures and less rainfall. It is likely that these climatic conditions forced the leafhopper to move into the irrigated commercial tobacco and bean crops, resulting in disease. In an attempt to understand leafhopper species diversity and abundance in and around the field borders of commercially grown tobacco crops, leafhoppers were collected from four field sites using three different sampling techniques: pan traps, sticky traps and sweep nets. Over 51,000 leafhopper samples were collected, comprising 57 species from 11 subfamilies and 19 tribes. Twenty-three leafhopper species were recorded for the first time in Victoria, in addition to several economically important pest species of crops other than tobacco and bean. The highest number and greatest diversity of leafhoppers were collected in yellow pan traps, followed by sticky traps and sweep nets. Orosius orientalis was the most abundant leafhopper collected at all sites, with the greatest numbers also caught using the yellow pan trap.
Using the three sampling methods mentioned above, the seasonal distribution and population dynamics of O. orientalis were studied at four field sites over three successive growing seasons. The population dynamics of the leafhopper were characterised by trimodal peaks of activity occurring in the spring and summer months. Although O. orientalis was present in large numbers early in the growing season (September-October), TbYDV was only detected in these leafhoppers between late November and the end of January. The peak in detection of TbYDV in O. orientalis correlated with the observation of disease symptoms in tobacco and bean, and was also associated with warmer temperatures and lower rainfall. To understand the feeding requirements of Orosius orientalis and to enable screening of potential control agents, a chemically defined artificial diet (designated PT-07) and feeding system were developed. This novel diet formulation allowed survival of O. orientalis for up to 46 days, including complete development from first instar through to adulthood. The effect of three selected plant-derived proteins, cowpea trypsin inhibitor (CpTi), Galanthus nivalis agglutinin (GNA) and wheat germ agglutinin (WGA), on leafhopper survival and development was assessed. Both GNA and WGA were shown to reduce leafhopper survival and development significantly when incorporated at a 0.1% (w/v) concentration. In contrast, CpTi at the same concentration did not exhibit significant antimetabolic properties. Based on these results, GNA and WGA are potentially useful antimetabolic agents for expression in genetically modified crops to improve the management of O. orientalis, TbYDV and the other pathogens it vectors. Finally, an electrical penetration graph (EPG) was used to study the feeding behaviour of O. orientalis, providing insights into TbYDV acquisition and transmission. Waveforms representing different feeding activities were acquired by EPG from adult O. orientalis feeding on two plant species, Phaseolus vulgaris and Nicotiana tabacum, and on a simple sucrose-based artificial diet. Five waveforms (designated O1-O5) were observed when O. orientalis fed on P. vulgaris, while only four (O1-O4) and three (O1-O3) waveforms were observed during feeding on N. tabacum and the artificial diet, respectively. The mean duration of each waveform and the waveform type differed markedly depending on the food source. This is the first detailed study of the tritrophic interactions between TbYDV, its leafhopper vector O. orientalis, and host-plants. The results of this research provide important fundamental information which can be used to develop more effective control strategies not only for O. orientalis, but also for TbYDV and other pathogens vectored by the leafhopper.
Abstract:
Freeways are divided roadways designed to facilitate the uninterrupted movement of motor vehicles. However, many freeways now experience demand flows in excess of capacity, leading to recurrent congestion. The Highway Capacity Manual (TRB, 1994) uses empirical macroscopic relationships between speed, flow and density to quantify freeway operations and performance, and capacity may be predicted as the maximum uncongested flow achievable. Although they are effective tools for design and analysis, macroscopic models lack an understanding of the nature of the processes taking place in the system. Szwed and Smith (1972, 1974) and Makigami and Matsuo (1990) have shown that microscopic modelling is also applicable to freeway operations. Such models facilitate an understanding of the processes while providing for the assessment of performance through measures of capacity and delay. However, these models are limited to only a few circumstances. The aim of this study was to produce more comprehensive and practical microscopic models. These models were required to accurately portray the mechanisms of freeway operations at the specific locations under consideration, to be calibrated using data acquired at those locations, and to have their outputs validated against data acquired at the same sites, so that the outputs would be truly descriptive of the performance of the facility. A theoretical basis needed to underlie the form of these models, rather than the empiricism of the macroscopic models currently used, and the models needed to be adaptable to variable operating conditions, so that they could be applied, where possible, to other similar systems and facilities. It was not possible in this single study to produce a stand-alone model applicable to all facilities and locations; however, the scene has been set for the application of the models to a much broader range of operating conditions. Opportunities for further development of the models were identified, and procedures provided for their calibration and validation across a wide range of conditions. The models developed do, however, have limitations in their applicability. Only uncongested operations were studied and represented. Driver behaviour in Brisbane was applied to the models; different mechanisms are likely in other locations due to variability in road rules and driving cultures. Not all observed manoeuvres were modelled, as some unusual manoeuvres were considered unwarranted to model. However, the models developed contain the principal processes of freeway operations: merging and lane changing. Gap acceptance theory was applied to these critical operations to assess freeway performance. Gap acceptance theory was found to be applicable to merging; however, the major stream, the kerb lane traffic, exercises only a limited priority over the minor stream, the on-ramp traffic. Theory was established to account for this activity. Kerb lane drivers were also found to change to the median lane where possible, to assist coincident mergers. The net limited priority model accounts for this by predicting a reduced major stream flow rate which excludes lane changers. Cowan's M3 model was calibrated for both streams, with on-ramp and total upstream flow required as input. Relationships between the proportion of headways greater than 1 s and flow differed between on-ramps fed by signalised intersections and those fed by unsignalised intersections.
Constant-departure on-ramp metering was also modelled. Minimum follow-on times of 1 to 1.2 s were calibrated. Critical gaps were shown to lie between the minimum follow-on time and the sum of the minimum follow-on time and the 1 s minimum headway. Limited priority capacity and other boundary relationships were established by Troutbeck (1995). The minimum average minor stream delay and the corresponding proportion of drivers delayed were quantified theoretically in this study. A simulation model was constructed to predict intermediate minor and major stream delays across all minor and major stream flows, and pseudo-empirical relationships were established to predict average delays. Major stream average delays are limited to 0.5 s, insignificant compared with minor stream delays, which reach infinity at capacity. Minor stream delays were shown to be smaller when unsignalised intersections, rather than signalised intersections, are located upstream of on-ramps, and smaller still when ramp metering is installed. Smaller delays correspond to improved merge area performance. A more tangible performance measure, the distribution of distances required to merge, was established by including design speeds. This distribution can be measured to validate the model, and merging probabilities can be predicted for given taper lengths, a most useful performance measure. This model was also shown to be applicable to lane changing. Tolerable limits to merging probabilities require calibration; from these, practical capacities can be estimated. Further calibration is required of the traffic inputs, critical gap and minimum follow-on time, for both merging and lane changing, and a general relationship to predict the proportion of drivers delayed requires development. These models can then be used to complement existing macroscopic models in assessing performance, and to provide further insight into the nature of operations.
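The Cowan M3 headway model calibrated above has the standard form (generic notation; the thesis's fitted parameters are not reproduced): the probability that a major-stream headway \(h\) exceeds \(t\) is

\[
P(h > t) =
\begin{cases}
1, & t < \Delta \\[2pt]
\varphi\, e^{-\lambda (t - \Delta)}, & t \ge \Delta
\end{cases}
\]

where \(\Delta\) is the minimum headway (1 s in the abstract), \(\varphi\) the proportion of free (non-bunched) headways, and \(\lambda\) the decay rate of the free headways; the relationships between flow and the proportion of headways greater than 1 s mentioned above amount to calibrations of \(\varphi\) against flow.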