471 results for DISTRIBUTION RANGE
Abstract:
The measurement of submicrometre (diameter < 1.0 µm) and ultrafine (diameter < 0.1 µm) particle number concentrations has attracted attention over the last decade because the potential health impacts associated with exposure to these particles can be more significant than those due to exposure to larger particles. At present, ultrafine particles are not regularly monitored and they are yet to be incorporated into air quality monitoring programs. As a result, very few studies have analysed long-term and spatial variations in ultrafine particle concentrations, and none have been conducted in Australia. To address this gap in scientific knowledge, the aim of this research was to investigate the long-term trends and seasonal variations in particle number concentrations in Brisbane, Australia. Data collected over a five-year period were analysed using weighted regression models. Monthly mean concentrations in the morning (6:00-10:00) and the afternoon (16:00-19:00) were plotted against time in months, using the monthly variances as the weights. During the five-year period, submicrometre and ultrafine particle concentrations increased in the morning by 105.7% and 81.5% respectively, whereas in the afternoon there was no significant trend. The morning concentrations were associated with fresh traffic emissions and the afternoon concentrations with the background. The statistical tests applied to the seasonal models, on the other hand, indicated that there was no seasonal component. The spatial variation in size distribution in a large urban area was investigated using particle number size distribution data collected at nine different locations during different campaigns. The size distributions were represented by their modal structures and cumulative size distributions. Particle number peaked at around 30 nm, except at an isolated site dominated by diesel trucks, where it peaked at around 60 nm.
It was found that ultrafine particles contributed 82%-90% of the total particle number. At the sites dominated by petrol vehicles, nanoparticles (< 50 nm) contributed 60%-70% of the total particle number, and at the site dominated by diesel trucks they contributed 50%. Although the sampling campaigns took place during different seasons and were of varying duration, these variations did not have an effect on the particle size distributions. The results suggested that the distributions were instead affected by differences in traffic composition and distance to the road. To investigate the occurrence of nucleation events, that is, secondary particle formation from gaseous precursors, particle size distribution data collected over a 13-month period during five different campaigns were analysed. The study area was a complex urban environment influenced by anthropogenic and natural sources. The study introduced a new application of time series differencing for the identification of nucleation events. To evaluate the conditions favourable to nucleation, the meteorological conditions and gaseous concentrations prior to and during nucleation events were recorded. Gaseous concentrations did not exhibit a clear pattern of change. It was also found that nucleation was associated with sea breezes and long-range transport. The implications of this finding are that whilst vehicles are the most important source of ultrafine particles, sea breezes and aged gaseous emissions play a more important role in secondary particle formation in the study area.
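The weighted-regression trend analysis described in this abstract can be sketched in a few lines. Everything below is illustrative: the data are synthetic, and the weights are taken as inverse variances (the usual weighted-least-squares choice), which may differ from the study's actual setup.

```python
# Closed-form weighted least squares for a linear monthly trend,
# y = intercept + slope * x, with per-month weights w.
def wls_trend(x, y, w):
    sw = sum(w)
    swx = sum(wi * xi for wi, xi in zip(w, x))
    swy = sum(wi * yi for wi, yi in zip(w, y))
    swxx = sum(wi * xi * xi for wi, xi in zip(w, x))
    swxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
    slope = (sw * swxy - swx * swy) / (sw * swxx - swx * swx)
    intercept = (swy - slope * swx) / sw
    return slope, intercept

# Synthetic monthly morning means rising by 0.5 units per month
months = list(range(12))
means = [10.0 + 0.5 * m for m in months]
weights = [1.0 / 4.0] * 12   # inverse of an assumed common monthly variance
slope, intercept = wls_trend(months, means, weights)
```

A significant positive fitted slope over the 60 monthly points is what underlies the reported 105.7% and 81.5% morning increases.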
Abstract:
This review outlines current international patterns in prostate cancer incidence and mortality rates and survival, including recent trends and a discussion of the possible impact of prostate-specific antigen (PSA) testing on the observed data. Internationally, prostate cancer is the second most common cancer diagnosed among men (behind lung cancer), and is the sixth most common cause of cancer death among men. Prostate cancer is particularly prevalent in developed countries such as the United States and the Scandinavian countries, with about a six-fold difference between high-incidence and low-incidence countries. Interpretation of trends in incidence and survival is complicated by the increasing impact of PSA testing, particularly in more developed countries. As Western influences become more pronounced in less developed countries, prostate cancer incidence rates in those countries are tending to increase, even though the prevalence of PSA testing is relatively low. Larger proportions of younger men are being diagnosed with prostate cancer and living longer following diagnosis, which has many implications for health systems. Decreasing mortality rates are becoming widespread among more developed countries, although it is not clear whether this is due to earlier diagnosis (PSA testing), improved treatment, or some combination of these or other factors.
Abstract:
This paper reports the initial steps of research on the planning of rural MV and LV networks. Two different cases are studied. In the first case, 100 loads are distributed uniformly along a 100 km line in a distribution network; in the second case, the load structure becomes closer to the rural situation. In case 2, 21 loads are located in a distribution system such that the distance between consecutive loads increases (the distance between loads 1 and 2 is 3 km, between loads 2 and 3 is 6 km, and so on). These two models represent, to some extent, the distribution system in urban and rural areas, respectively. The objective function for the design of the optimal system consists of three main parts: the cost of transformers, MV conductors and LV conductors. The bus voltage is expressed as a constraint and should be maintained within a standard level, rising or falling by no more than 5%.
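The three-part objective function and the ±5% voltage constraint described above can be expressed compactly. This is a minimal sketch under stated assumptions: the unit costs and the p.u. voltage check are illustrative, not the paper's data.

```python
# Planning objective: total cost = transformers + MV conductors + LV conductors.
# All cost figures are illustrative assumptions.
def planning_cost(n_transformers, mv_km, lv_km,
                  transformer_cost=12000.0,
                  mv_cost_per_km=9000.0,
                  lv_cost_per_km=4000.0):
    return (n_transformers * transformer_cost
            + mv_km * mv_cost_per_km
            + lv_km * lv_cost_per_km)

# Constraint: bus voltage must stay within +/-5% of nominal (1.0 p.u.).
def voltage_ok(bus_voltage_pu, limit=0.05):
    return abs(bus_voltage_pu - 1.0) <= limit

cost = planning_cost(3, 100.0, 25.0)
```

An optimizer would search over transformer counts and conductor routings, discarding any candidate for which `voltage_ok` fails at some bus.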
Abstract:
Key topics: Since the birth of the Open Source movement in the mid-1980s, open source software has become more and more widespread. Amongst others, the Linux operating system, the Apache web server and the Firefox web browser have taken substantial market share from their proprietary competitors. Open source software is governed by particular types of licenses. Whereas proprietary licenses only allow the software's use in exchange for a fee, open source licenses grant users more rights, such as free use, free copying, free modification and free distribution of the software, as well as free access to the source code. This new phenomenon has raised many managerial questions: organizational issues related to the system of governance that underlies such open source communities (Raymond, 1999a; Lerner and Tirole, 2002; Lee and Cole, 2003; Mockus et al., 2000; Tuomi, 2000; Demil and Lecocq, 2006; O'Mahony and Ferraro, 2007; Fleming and Waguespack, 2007), collaborative innovation issues (Von Hippel, 2003; Von Krogh et al., 2003; Von Hippel and Von Krogh, 2003; Dahlander, 2005; Osterloh, 2007; David, 2008), issues related to the nature as well as the motivations of developers (Lerner and Tirole, 2002; Hertel, 2003; Dahlander and McKelvey, 2005; Jeppesen and Frederiksen, 2006), public policy and innovation issues (Jullien and Zimmermann, 2005; Lee, 2006), technological competition issues related to standard battles between proprietary and open source software (Bonaccorsi and Rossi, 2003; Bonaccorsi et al., 2004; Economides and Katsamakas, 2005; Chen, 2007), and intellectual property rights and licensing issues (Laat, 2005; Lerner and Tirole, 2005; Gambardella, 2006; Determann et al., 2007). A major unresolved issue concerns open source business models and revenue capture, given that open source licenses imply no fee for users.
On this topic, articles show that a commercial activity based on open source software is possible, as they describe different possible ways of doing business around open source (Raymond, 1999; Dahlander, 2004; Daffara, 2007; Bonaccorsi and Merito, 2007). These studies usually look at open source-based companies, which encompass a wide range of firms with different categories of activities: providers of packaged open source solutions, IT Services & Software Engineering firms, and open source software publishers. However, the business model implications are different for each of these categories: the activities of providers of packaged solutions and of IT Services & Software Engineering firms are based on software developed outside their boundaries, whereas commercial software publishers sponsor the development of the open source software. This paper focuses on open source software publishers' business models, as this issue is even more crucial for this category of firms, which take the risk of investing in the development of the software. To date, the literature identifies and depicts only two generic types of business models for open source software publishers: the ''bundling'' business model (Pal and Madanmohan, 2002; Dahlander, 2004) and the dual licensing business model (Välimäki, 2003; Comino and Manenti, 2007). Nevertheless, these business models are not applicable in all circumstances. Methodology: The objectives of this paper are: (1) to explore in which contexts the two generic business models described in the literature can be implemented successfully; and (2) to depict an additional business model for open source software publishers which can be used in a different context. To do so, this paper draws upon an explorative case study of IdealX, a French open source security software publisher. The case study consists of a series of three interviews conducted between February 2005 and April 2006 with the co-founder and the business manager.
It aims at depicting the process of IdealX's search for an appropriate business model between its creation in 2000 and 2006. This software publisher tried both generic types of open source software publishers' business models before designing its own. Consequently, through IdealX's trials and errors, I investigate the conditions under which such generic business models can be effective. Moreover, this study describes the business model finally designed and adopted by IdealX: an additional open source software publisher's business model based on the principle of ''mutualisation'', which is applicable in a different context. Results and implications: Finally, this article contributes to ongoing empirical work within entrepreneurship and strategic management on open source software publishers' business models: it provides the characteristics of three generic business models (the bundling business model, the dual licensing business model and the mutualisation business model), as well as the conditions under which they can be successfully implemented (regarding the type of product developed and the competencies of the firm). This paper also goes further than the traditional concept of business model used by scholars in the open source literature. In this article, a business model is not only considered as a way of generating income (a ''revenue model'' (Amit and Zott, 2001)), but rather as the necessary conjunction of value creation and value capture, in line with the recent literature on business models (Amit and Zott, 2001; Chesbrough and Rosenbloom, 2002; Teece, 2007). Consequently, this paper analyses business models from the point of view of these two components.
Abstract:
Quantum key distribution (QKD) promises secure key agreement by using quantum mechanical systems. We argue that QKD will be an important part of future cryptographic infrastructures. It can provide long-term confidentiality for encrypted information without reliance on computational assumptions. Although QKD still requires authentication to prevent man-in-the-middle attacks, it can make use of either information-theoretically secure symmetric key authentication or computationally secure public key authentication: even when using public key authentication, we argue that QKD still offers stronger security than classical key agreement.
Abstract:
In this paper, the placement of sectionalizers, as well as a cross-connection, is optimally determined so that an objective function is minimized. The objective function employed in this paper consists of two main parts: the switch cost and the reliability cost. The switch cost is composed of the cost of the sectionalizers and the cross-connection, and the reliability cost is assumed to be proportional to a reliability index, SAIDI. To make the sectionalizer and cross-connection allocation problem realistic, the cost related to each element is considered as discrete. Because of the binary variables representing the availability of sectionalizers, the problem is highly discrete. Therefore, the risk of becoming trapped in a local minimum is high and a heuristic-based optimization method is needed. Discrete Particle Swarm Optimization (DPSO) is employed in this paper to deal with this discrete problem. Finally, a test distribution system is used to validate the proposed method.
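The two-part objective evaluated by the DPSO can be sketched over a binary placement vector. All numbers below are illustrative assumptions, not values from the paper; in particular, the linear SAIDI-reduction model stands in for whatever reliability evaluation the authors actually use.

```python
# Objective = switch cost + reliability cost (proportional to SAIDI).
# placement: binary list, 1 = sectionalizer installed at that candidate spot.
def objective(placement, switch_cost=5000.0, saidi_weight=2000.0,
              base_saidi=8.0, saidi_reduction_per_switch=1.5):
    n_switches = sum(placement)
    # Illustrative reliability model: each switch trims SAIDI, floored at 0.
    saidi = max(base_saidi - saidi_reduction_per_switch * n_switches, 0.0)
    return n_switches * switch_cost + saidi_weight * saidi

no_switches = objective([0, 0, 0, 0])
two_switches = objective([1, 0, 1, 0])
```

A DPSO would flip these binary bits particle by particle, keeping the placement with the lowest objective value.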
Abstract:
Isolating a faulted segment, from either side of the fault, in a radial feeder that has several converter-interfaced DGs is a challenging task when current-sensing protective devices are employed. A protective device, even if it senses a downstream fault, may not operate if the fault current level is low due to the current-limiting operation of the converters. In this paper, a new inverse-type relay based on line admittance measurement is introduced to protect a distribution network that has several converter-interfaced DGs. The basic operation of this relay, its grading and its reach settings are explained. Moreover, a method is proposed to compensate for the fault resistance so that relay operation under this condition is reliable. The performance of the designed relay is then evaluated in a radial distribution network. The results are validated through PSCAD/EMTDC simulations and MATLAB calculations.
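To give a feel for what "inverse-type" means here: trip time falls as the measured quantity rises past its pickup. The sketch below is a hypothetical admittance-driven analogue of the familiar inverse-time overcurrent form t = TMS·A/(M^p − 1); the constants and the multiple M = Y_measured / Y_pickup are assumptions for illustration, not the relay characteristic designed in the paper.

```python
# Hypothetical inverse-time characteristic driven by measured line admittance.
# A close-in fault presents a high admittance (multiple M >> 1), so it trips
# fast; a remote fault presents a lower admittance and trips more slowly.
def trip_time(y_measured, y_pickup, tms=0.1, a=0.14, p=0.02):
    m = y_measured / y_pickup
    if m <= 1.0:
        return float("inf")   # below pickup: relay does not operate
    return tms * a / (m ** p - 1.0)

close_in = trip_time(10.0, 1.0)   # high admittance, close-in fault
remote = trip_time(2.0, 1.0)      # lower admittance, remote fault
```

This ordering (faster for close-in faults) is what allows grading between relays along the radial feeder.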
Abstract:
The rise of videosharing and self-(re)broadcasting Web services is posing new threats to a television industry already struggling with the impact of filesharing networks. This paper outlines these threats, focussing especially on the DIY re-broadcasting of live sports using Websites such as Justin.tv and a range of streaming media networks built on peer-to-peer filesharing technology.
Abstract:
Distribution network reliability can be increased if distributed generators (DGs) are allowed to operate in both grid-connected and islanded modes when the network has a high DG penetration level. However, current utility regulations do not allow islanded operation. Arc faults are one of the major issues preventing islanded operation, since the arc will not extinguish if the DGs are not disconnected. In this paper, the effect of a converter-interfaced DG on an arc fault is investigated by considering different control strategies for the converter. A foldback current control characteristic is proposed for the converter-interfaced DG to achieve quick arc extinction and self-restoration without disconnecting the DG in the event of an arc fault. The results are validated through PSCAD/EMTDC simulations.
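The idea behind foldback current control is that the converter's current reference shrinks as the terminal voltage collapses, starving the arc of current so it can extinguish. A generic sketch of such a characteristic follows; the knee and floor thresholds are illustrative assumptions, not the values proposed in the paper.

```python
# Foldback current limiting: full rated current at healthy voltage,
# linearly reduced ("folded back") reference as the voltage collapses.
def current_reference(v_pu, i_rated=1.0, v_knee=0.9, v_min=0.2, i_min=0.1):
    if v_pu >= v_knee:
        return i_rated            # normal operation
    if v_pu <= v_min:
        return i_min              # deep fault: minimal current feeds the arc
    frac = (v_pu - v_min) / (v_knee - v_min)
    return i_min + frac * (i_rated - i_min)
```

Once the arc extinguishes and the voltage recovers above the knee, the same characteristic restores rated output, which is the self-restoration behaviour the abstract mentions.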
Abstract:
Curriculum initiatives in Australia emphasise the use of technologies and new media in classrooms. Some English teachers might fear this deployment of technologies because we are not all ‘digital natives’ like our students. If we embrace new media forms such as podcasts, blogs, vodcasts and digital stories, a whole new world of possibilities opens up for literary response and recreative texts, with new audiences and publication spaces. This article encourages English teachers to embrace these new digital forms and shows how we can go about it.
Abstract:
Citrus canker is a disease of citrus and closely related species, caused by the bacterium Xanthomonas citri subsp. citri. This disease, previously exotic to Australia, was detected on a single farm [infested premise 1 (IP1); IP is the terminology used in official biosecurity protocols to describe a locality at which an exotic plant pest has been confirmed or is presumed to exist, and IPs are numbered sequentially as they are detected] in Emerald, Queensland in July 2004. During the following 10 months the disease was detected on two other farms (IP2 and IP3) within the same area, and studies indicated the disease first occurred on IP1 and spread to IP2 and IP3. The oldest naturally infected plant tissue observed on any of these farms indicated the disease was present on IP1 for several months before detection and established on IP2 and IP3 during the second quarter (i.e. autumn) of 2004. Transect studies on some IP1 blocks showed disease incidences ranging between 52 and 100% (trees infected). This contrasted with the very low disease incidence, less than 4% of trees within a block, on IP2 and IP3. The mechanisms proposed for disease spread within blocks include weather-assisted dispersal of the bacterium (e.g. wind-driven rain) and movement of contaminated farm equipment, in particular by pivot irrigator towers via mechanical damage in combination with abundant water. Spread between blocks on IP2 was attributed to movement of contaminated farm equipment and/or people. Epidemiology results suggest: (i) successive surveillance rounds increase the likelihood of disease detection; (ii) surveillance sensitivity is affected by tree size; and (iii) individual destruction zones (for the purpose of eradication) could be determined using disease incidence and severity data rather than a predefined set area.
Abstract:
Neurodegenerative disorders are heterogeneous in nature and include a range of ataxias with oculomotor apraxia, which are characterised by a wide variety of neurological and ophthalmological features. This family includes recessive and dominant disorders. A subfamily of autosomal recessive cerebellar ataxias is characterised by defects in the cellular response to DNA damage. These include the well characterised disorders Ataxia-Telangiectasia (A-T) and Ataxia-Telangiectasia Like Disorder (A-TLD), as well as the recently identified diseases Spinocerebellar Ataxia with Axonal Neuropathy Type 1 (SCAN1), Ataxia with Oculomotor Apraxia Type 2 (AOA2) and the subject of this thesis, Ataxia with Oculomotor Apraxia Type 1 (AOA1). AOA1 is caused by mutations in the APTX gene, which is located at chromosomal locus 9p13. This gene codes for the 342 amino acid protein Aprataxin. Mutations in APTX cause destabilization of Aprataxin; thus AOA1 is a result of Aprataxin deficiency. Aprataxin has three functional domains: an N-terminal Forkhead Associated (FHA) phosphoprotein interaction domain, a central Histidine Triad (HIT) nucleotide hydrolase domain and a C-terminal C2H2 zinc finger. Aprataxin's FHA domain has homology to the FHA domain of the DNA repair protein 5' polynucleotide kinase 3' phosphatase (PNKP). PNKP interacts with a range of DNA repair proteins via its FHA domain and plays a critical role in processing damaged DNA termini. The presence of this domain together with a nucleotide hydrolase domain and a DNA binding motif suggested that Aprataxin may be involved in DNA repair and that AOA1 may be caused by a DNA repair deficit. This was substantiated by the interaction of Aprataxin with proteins involved in the repair of both single and double strand DNA breaks (X-Ray Cross-Complementing 1, XRCC4 and Poly-ADP Ribose Polymerase-1) and the hypersensitivity of AOA1 patient cell lines to single and double strand break inducing agents.
At the commencement of this study, little was known about the in vitro and in vivo properties of Aprataxin. Initially this study focused on the generation of recombinant Aprataxin proteins to facilitate examination of the in vitro properties of Aprataxin. Using recombinant Aprataxin proteins, I found that Aprataxin binds to double stranded DNA. Consistent with a role for Aprataxin as a DNA repair enzyme, this binding is not sequence specific. I also report that the HIT domain of Aprataxin hydrolyses adenosine derivatives and, interestingly, that this activity is competitively inhibited by DNA. This provided initial evidence that DNA binds to the HIT domain of Aprataxin and that Aprataxin may be a DNA-processing factor. Following these studies, Aprataxin was found to hydrolyse 5′-adenylated DNA, which can be generated by unscheduled ligation at DNA breaks with non-standard termini. I found that cell extracts from AOA1 patients do not have DNA-adenylate hydrolase activity, indicating that Aprataxin is the only DNA-adenylate hydrolase in mammalian cells. I further characterised this activity by examining the contribution of the zinc finger and FHA domains to DNA-adenylate hydrolysis by the HIT domain. I found that deletion of the zinc finger ablated the activity of the HIT domain against adenylated DNA, indicating that the zinc finger may be required for the formation of a stable enzyme-substrate complex. Deletion of the FHA domain stimulated DNA-adenylate hydrolysis, which indicated that the activity of the HIT domain may be regulated by the FHA domain. Given that the FHA domain is involved in protein-protein interactions, I propose that the activity of Aprataxin's HIT domain may be regulated by proteins which interact with its FHA domain.
We examined this possibility by measuring the DNA-adenylate hydrolase activity of extracts from cells deficient for the Aprataxin-interacting DNA repair proteins XRCC1 and PARP-1. XRCC1 deficiency did not affect Aprataxin activity, but I found that Aprataxin is destabilized in the absence of PARP-1, resulting in a deficiency of DNA-adenylate hydrolase activity in PARP-1 knockout cells. This implies a critical role for PARP-1 in the stabilization of Aprataxin. Conversely, I found that PARP-1 is destabilized in the absence of Aprataxin. PARP-1 is a central player in a number of DNA repair mechanisms, which implies that not only do AOA1 cells lack Aprataxin, they may also have defects in PARP-1 dependent cellular functions. Based on this, I identified a defect in a PARP-1 dependent DNA repair mechanism in AOA1 cells. Additionally, I identified elevated levels of oxidized DNA in AOA1 cells, which is indicative of a defect in Base Excision Repair (BER). I attribute this to the reduced level of the BER protein Apurinic Endonuclease 1 (APE1) that I identified in Aprataxin deficient cells. This study has identified and characterised multiple DNA repair defects in AOA1 cells, indicating that Aprataxin deficiency has far-reaching cellular consequences. Consistent with the literature, I show that Aprataxin is a nuclear protein with nucleoplasmic and nucleolar distribution. Previous studies have shown that Aprataxin interacts with the nucleolar rRNA processing factor nucleolin and that AOA1 cells appear to have a mild defect in rRNA synthesis. Given the nucleolar localization of Aprataxin, I examined its protein-protein interactions and found that Aprataxin interacts with a number of rRNA transcription and processing factors. Based on this and the nucleolar localization of Aprataxin, I proposed that Aprataxin may have an alternative role in the nucleolus.
I therefore examined the transcriptional activity of Aprataxin deficient cells using nucleotide analogue incorporation. I found that AOA1 cells do not display a defect in basal levels of RNA synthesis; however, they display defective transcriptional responses to DNA damage. In summary, this thesis demonstrates that Aprataxin is a DNA repair enzyme responsible for the repair of adenylated DNA termini and that it is required for the stabilization of at least two other DNA repair proteins. Thus, not only do AOA1 cells have no Aprataxin protein or activity, they also have deficiencies in Poly-ADP Ribose Polymerase-1 and Apurinic Endonuclease 1 dependent DNA repair mechanisms. I additionally demonstrate DNA-damage-inducible transcriptional defects in AOA1 cells, indicating that Aprataxin deficiency confers a broad range of cellular defects and highlighting the complexity of the cellular response to DNA damage. My detailed characterization of the cellular consequences of Aprataxin deficiency provides an important contribution to our understanding of interlinking DNA repair processes.
Abstract:
This paper examines an aspect of the data taken from a larger study evaluating the effect of speeding penalty changes on speeding recidivism in Queensland. Traffic offence data from May 1996 to August 2007 were provided to the research team for two cohorts of offenders: individuals who committed a speeding offence in May 2001, and individuals who committed a speeding offence in May 2003. Data included details of the offenders' index offence, previous and subsequent traffic offences (speeding and other) and their demographic characteristics. Using these data, the aim of this component of the research was to use demographic data and the previous traffic offences of these individuals to explore the characteristics and predictors of high-range speeding offenders. High-range offenders were identified as those individuals who committed two or more speeding offences with a recorded speed of 30 km/h or more above the speed limit. For the purposes of comparison, low-range offenders (who committed one speeding offence in the time-frame, with that offence less than 15 km/h over the speed limit) and mid-range offenders (all other offenders) were identified. Using chi-square and logistic regression analyses, characteristics and predictors of high-range speeding offenders were identified. The implications and limitations of this study are also discussed.
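The offender categorisation defined above translates directly into code. The rules are taken from the abstract's own definitions; the record format (a list of km/h over the limit, one entry per offence in the window) is an illustrative assumption.

```python
# Categorise an offender from their offences in the study window.
# speeds_over_limit: km/h above the posted limit for each recorded offence.
def categorise(speeds_over_limit):
    high_range_offences = sum(1 for s in speeds_over_limit if s >= 30)
    if high_range_offences >= 2:
        return "high-range"   # two or more offences at >= 30 km/h over
    if len(speeds_over_limit) == 1 and speeds_over_limit[0] < 15:
        return "low-range"    # single offence under 15 km/h over
    return "mid-range"        # everyone else

label = categorise([35, 31])
```

The logistic regression then models membership of the high-range group against demographic and prior-offence predictors.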
Abstract:
Emissions from airport operations are of significant concern because of their potential impact on local air quality and human health. The currently limited scientific knowledge of aircraft emissions is an important issue worldwide when considering air pollution associated with airport operations, and this is especially so for ultrafine particles. This limited knowledge is due to the scientific complexities associated with measuring aircraft emissions during normal operations on the ground. In particular, this type of research has required the development of novel sampling techniques which must take into account aircraft plume dispersion and dilution, as well as the various particle dynamics that can affect measurements of the engine plume from an operational aircraft. In order to address this scientific problem, a novel mobile emission measurement method, the Plume Capture and Analysis System (PCAS), was developed and tested. The PCAS permits the capture and analysis of aircraft exhaust during ground level operations including landing, taxiing, takeoff and idle. The PCAS uses a sampling bag to temporarily store a sample, providing sufficient time to employ sensitive but slow instrumental techniques to measure gas and particle emissions simultaneously and to record detailed particle size distributions. The challenges in the development of the technique include the complexities associated with assessing the various particle loss and deposition mechanisms which are active during storage in the PCAS. Laboratory based assessment of the method showed that the bag sampling technique can be used to accurately measure particle emissions (e.g. particle number, mass and size distribution) from a moving aircraft or vehicle. Further assessment of the sensitivity of PCAS results to distance from the source and plume concentration was conducted in the airfield with taxiing aircraft.
The results showed that the PCAS is a robust method capable of capturing the plume in only 10 seconds. The PCAS is able to account for aircraft plume dispersion and dilution at distances of 60 to 180 meters downwind of a moving aircraft, along with particle deposition loss mechanisms during the measurements. Characterization of the plume in terms of particle number, mass (PM2.5), gaseous emissions and particle size distribution takes only 5 minutes, allowing large numbers of tests to be completed in a short time. The results were broadly consistent and compared well with the available data. Comprehensive measurements and analyses of the aircraft plumes during various modes of the landing and takeoff (LTO) cycle (e.g. idle, taxi, landing and takeoff) were conducted at Brisbane Airport (BNE). Gaseous (NOx, CO2) emission factors, particle number and mass (PM2.5) emission factors and size distributions were determined for a range of Boeing and Airbus aircraft, as a function of aircraft type and engine thrust level. The scientific complexities, including the analysis of the often multimodal particle size distributions to describe the contributions of different particle source processes during the various stages of aircraft operation, were addressed through comprehensive data analysis and interpretation. The measurement results were used to develop an inventory of aircraft emissions at BNE, including all modes of the aircraft LTO cycle and ground running procedures (GRP). Measuring the actual duration of aircraft activity in each mode of operation (time-in-mode) and compiling a comprehensive matrix of gas and particle emission rates as a function of aircraft type and engine thrust level for real world situations was crucial for developing the inventory. The significance of the resulting matrix of emission rates lies in the estimate it provides of the annual particle emissions due to aircraft operations, especially in terms of particle number.
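A common way to turn captured-plume concentrations into fuel-based emission factors is to ratio the pollutant rise above background against the CO2 rise, then scale by the CO2 emitted per kilogram of fuel burned. The sketch below is a generic dilution-correction calculation, not the thesis's actual procedure; the 3.16 kg CO2 per kg of jet fuel figure is a commonly used approximation, stated here as an assumption.

```python
# Fuel-based emission factor from paired plume/background measurements.
# delta_pollutant: plume minus background concentration (units per m^3).
# delta_co2_kg_m3: plume minus background CO2 (kg per m^3).
# Returns pollutant emitted per kg of fuel burned.
def emission_factor(delta_pollutant, delta_co2_kg_m3, co2_per_kg_fuel=3.16):
    return delta_pollutant / delta_co2_kg_m3 * co2_per_kg_fuel

# Illustrative particle-number example: 1e12 particles/m^3 excess against a
# 1e-3 kg/m^3 CO2 excess gives particles emitted per kg of fuel.
pn_ef = emission_factor(1e12, 1e-3)
```

Because both concentrations are diluted by the same plume air, the ratio cancels the unknown dilution, which is what makes the 60-180 m capture distances workable.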
In summary, this PhD thesis presents for the first time a comprehensive study of the particle and NOx emission factors and rates, along with the particle size distributions, from aircraft operations, and provides a basis for estimating such emissions at other airports. This is a significant addition to scientific knowledge of particle emissions from aircraft operations, since standard particle number emission rates are not currently available for aircraft activities.
Abstract:
Exposure to particles emitted by cooking activities may be responsible for a variety of respiratory health effects. However, the relationship between these exposures and their subsequent effects on health cannot be evaluated without understanding the properties of the emitted aerosol or the main parameters that influence particle emissions during cooking. Whilst traffic-related emissions, stack emissions and ultrafine particle concentrations (UFP, diameter < 100 nm) in urban ambient air have been widely investigated for many years, indoor exposure to UFPs is a relatively new field and in order to evaluate indoor UFP emissions accurately, it is vital to improve scientific understanding of the main parameters that influence particle number, surface area and mass emissions. The main purpose of this study was to characterise the particle emissions produced during grilling and frying as a function of the food, source, cooking temperature and type of oil. Emission factors, along with particle number concentrations and size distributions, were determined in the size range 0.006-20 µm using a Scanning Mobility Particle Sizer (SMPS) and an Aerodynamic Particle Sizer (APS). An infrared camera was used to measure the temperature field. Overall, increased emission factors were observed to be a function of increased cooking temperatures. Cooking fatty foods also produced higher particle emission factors than vegetables, mainly in terms of mass concentration, and particle emission factors also varied significantly according to the type of oil used.
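Instruments like the SMPS and APS report number concentrations per size bin; mass concentration then follows by assuming spherical particles of a given density. The sketch below shows that standard conversion; the bin diameters, counts and unit density are illustrative assumptions, not measurements from this study.

```python
import math

# Convert a binned number size distribution to mass concentration,
# assuming spherical particles of the given density.
# diameters_um: midpoint diameter of each bin (micrometres)
# number_conc_cm3: particle number concentration in each bin (per cm^3 of air)
def mass_concentration(diameters_um, number_conc_cm3, density_g_cm3=1.0):
    total_g_per_cm3 = 0.0
    for d_um, n in zip(diameters_um, number_conc_cm3):
        d_cm = d_um * 1e-4                       # micrometres -> centimetres
        volume_cm3 = math.pi / 6.0 * d_cm ** 3   # volume of one sphere
        total_g_per_cm3 += n * volume_cm3 * density_g_cm3
    return total_g_per_cm3 * 1e12               # g/cm^3 -> microgram/m^3

# One illustrative bin: 1000 particles/cm^3 of 1 micrometre diameter
pm = mass_concentration([1.0], [1000.0])
```

The cube dependence on diameter is why the fatty-food result above shows up "mainly in terms of mass concentration": a few large droplets dominate mass even when ultrafine particles dominate number.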