Abstract:
Urbanisation significantly changes the characteristics of a catchment as natural areas are transformed to impervious surfaces such as roads, roofs and parking lots. The increased fraction of impervious surfaces leads to changes to the stormwater runoff characteristics, whilst a variety of anthropogenic activities common to urban areas generate a range of pollutants such as nutrients, solids and organic matter. These pollutants accumulate on catchment surfaces and are removed and transported by stormwater runoff, thereby contributing pollutant loads to receiving waters. In summary, urbanisation influences the stormwater characteristics of a catchment, including hydrology and water quality. Due to the growing recognition that stormwater pollution is a significant environmental problem, the implementation of mitigation strategies to improve the quality of stormwater runoff is becoming increasingly common in urban areas. A scientifically robust stormwater quality treatment strategy is an essential requirement for effective urban stormwater management. The efficient design of treatment systems is closely dependent on the state of knowledge in relation to the primary factors influencing stormwater quality. In this regard, stormwater modelling outcomes provide designers with important guidance and datasets which significantly underpin the design of effective stormwater treatment systems. Therefore, the accuracy of modelling approaches and the reliability of modelling outcomes are of particular concern. This book discusses the inherent complexity and key characteristics of urban hydrology and stormwater quality, based on the influence exerted by a range of rainfall and catchment characteristics. A comprehensive field sampling and testing programme in relation to pollutant build-up, an urban catchment monitoring programme in relation to stormwater quality and the outcomes from advanced statistical analyses provided the platform for the knowledge creation.
Two case studies and two real-world applications are discussed to illustrate the translation of the knowledge created to practical use in relation to the role of rainfall and catchment characteristics on urban stormwater quality. An innovative rainfall classification based on stormwater quality was developed to support the effective and scientifically robust design of stormwater treatment systems. Underpinned by the rainfall classification methodology, a reliable approach for design rainfall selection is proposed in order to optimise stormwater treatment based on both stormwater quality and quantity. This is a paradigm shift from the common approach where stormwater treatment systems are designed based solely on stormwater quantity data. Additionally, how pollutant build-up and stormwater runoff quality vary with a range of catchment characteristics was also investigated. Based on the study outcomes, it can be concluded that the use of only a limited number of catchment parameters such as land use and impervious surface percentage, as is the case in current modelling approaches, could result in appreciable error in water quality estimation. Influential catchment characteristics incorporated into modelling should also include urban form and impervious surface area distribution. The knowledge created through the research investigations discussed in this monograph is expected to make a significant contribution to engineering practice such as hydrologic and stormwater quality modelling, stormwater treatment design and urban planning, as the study outcomes provide practical approaches and recommendations for urban stormwater quality enhancement.
Furthermore, this monograph demonstrates how fundamental knowledge of stormwater quality processes can be translated into guidance for engineering practice, illustrates the comprehensive application of multivariate data analysis techniques, and presents a paradigm for the integrative use of computer models and mathematical models to derive practical outcomes.
Abstract:
Guaranteeing Quality of Service (QoS) with minimum computation cost is the most important objective of cloud-based MapReduce computations. Minimizing the total computation cost of cloud-based MapReduce computations is done through MapReduce placement optimization. MapReduce placement optimization approaches can be classified into two categories: homogeneous MapReduce placement optimization and heterogeneous MapReduce placement optimization. It is generally believed that heterogeneous MapReduce placement optimization is more effective than homogeneous MapReduce placement optimization in reducing the total running cost of cloud-based MapReduce computations. This paper proposes a new approach to the heterogeneous MapReduce placement optimization problem. In this new approach, the heterogeneous MapReduce placement optimization problem is transformed into a constrained combinatorial optimization problem and is solved by an innovative constructive algorithm. Experimental results show that the running cost of the cloud-based MapReduce computation platform using this new approach is 24.3%–44.0% lower than that using the most popular homogeneous MapReduce placement approach, and 2.0%–36.2% lower than that using the heterogeneous MapReduce placement approach not considering the spare resources from the existing MapReduce computations. The experimental results have also demonstrated the good scalability of this new approach.
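The abstract does not detail the paper's constructive algorithm, but the core idea of a constructive placement heuristic that exploits spare resources from existing computations can be sketched as follows. Everything here is a hypothetical illustration: the VM types, cost model, and greedy rule are assumptions, not the authors' method.

```python
# Hypothetical sketch of a constructive heuristic for heterogeneous
# MapReduce placement: reuse spare capacity on already-rented VMs first
# (at no extra cost), then rent the cheapest VM type that fits the task.
# VM types, demands and the cost model are illustrative, not from the paper.

def place_tasks(task_demands, vm_types):
    """Greedy constructive placement.

    task_demands: list of per-task resource demands (e.g. vCPU units).
    vm_types: list of (capacity, cost) tuples; heterogeneous VM offerings.
    Assumes every task fits on at least one VM type.
    Returns (placements, total_cost); placements maps task index -> rented VM id.
    """
    rented = []        # each entry: [vm_type_index, remaining_capacity]
    placements = {}
    for t, demand in enumerate(task_demands):
        # 1. Try spare capacity on an already-rented VM (tightest feasible fit).
        reuse = [i for i, (_, spare) in enumerate(rented) if spare >= demand]
        if reuse:
            best = min(reuse, key=lambda i: rented[i][1])
            rented[best][1] -= demand
            placements[t] = best
            continue
        # 2. Otherwise rent the cheapest VM type that can host the task.
        cost, j = min((cost, j) for j, (cap, cost) in enumerate(vm_types)
                      if cap >= demand)
        rented.append([j, vm_types[j][0] - demand])
        placements[t] = len(rented) - 1
    total_cost = sum(vm_types[j][1] for j, _ in rented)
    return placements, total_cost
```

For example, `place_tasks([2, 2, 3], [(4, 1.0), (8, 1.5)])` rents two small VMs, packing the second task into the first VM's spare capacity. A real formulation would add QoS (deadline) constraints, which is what makes the problem a constrained combinatorial optimization.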
Abstract:
The advent of the Australian Charities and Not-for-profits Commission (ACNC) in 2012, and the submission of Annual Information Statements (AIS) in 2013 by those charities which registered with it, have allowed new measures to be taken of charities and their activities. This report examines the filed AIS data for Queensland charities and compares it with the overall Australian population of charities.
Abstract:
This thesis provides a review of 199 papers published on Green IT/IS between 2007 and 2014, in order to present a taxonomy of segments in Green IT/IS publications, where the segments are later used for multiple analyses to facilitate future research and to provide a retrospective analysis of existing knowledge and gaps thereof. This research also attempts to make a unique contribution to our understanding of Green IT/IS: by consolidating papers, it observes current patterns in the literature through approach analysis and segmentation, and allocates studies to the technology, process, or outcome (TPO) stage. Highlighting the necessity of a consolidated approach, these classification systems have been combined into a TPO matrix so that the studies could be arranged according to which stage of the Green IT/IS cycle they were focused on. We believe that these analyses will provide a solid platform from which future Green IT/IS research can be launched.
Abstract:
Place recognition has long been an incompletely solved problem in that all approaches involve significant compromises. Current methods address many but never all of the critical challenges of place recognition – viewpoint invariance, condition invariance and minimizing training requirements. Here we present an approach that adapts state-of-the-art object proposal techniques to identify potential landmarks within an image for place recognition. We use the astonishing power of convolutional neural network features to identify matching landmark proposals between images to perform place recognition over extreme appearance and viewpoint variations. Our system does not require any form of training; all components are generic enough to be used off-the-shelf. We present a range of challenging experiments in varied viewpoint and environmental conditions, and demonstrate superior performance to current state-of-the-art techniques. Furthermore, by building on existing and widely used recognition frameworks, this approach provides a highly compatible place recognition system with the potential for easy integration of other techniques such as object detection and semantic scene interpretation.
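The matching step described above – comparing ConvNet features of landmark proposals across two images – can be illustrated conceptually with mutual nearest-neighbour matching on feature vectors. This is a minimal sketch, not the authors' implementation; the similarity threshold and score definition are assumptions, and the arrays stand in for CNN descriptors of object proposals.

```python
import numpy as np

def match_landmarks(feats_a, feats_b, threshold=0.8):
    """Mutual nearest-neighbour matching of landmark descriptors.

    feats_a, feats_b: (n, d) and (m, d) arrays of L2-normalised feature
    vectors (one row per landmark proposal). Returns index pairs (i, j)
    that pick each other as best match with cosine similarity >= threshold.
    """
    sims = feats_a @ feats_b.T            # cosine similarity matrix
    best_b = sims.argmax(axis=1)          # best match in B for each A
    best_a = sims.argmax(axis=0)          # best match in A for each B
    return [(i, j) for i, j in enumerate(best_b)
            if best_a[j] == i and sims[i, j] >= threshold]

def place_match_score(feats_a, feats_b):
    """Fraction of mutually matched landmarks; a simple place-similarity score."""
    matches = match_landmarks(feats_a, feats_b)
    return len(matches) / max(len(feats_a), len(feats_b))
```

Mutual (cross-checked) matching is a standard way to suppress spurious one-directional matches; the training-free property in the text corresponds to using the features off-the-shelf, with no fine-tuning.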
Abstract:
Background: The overrepresentation of young drivers in road crashes, injuries and fatalities around the world has resulted in a breadth of injury prevention efforts including education, enforcement, engineering, and exposure control. Despite multifaceted intervention, the young driver problem remains a challenge for injury prevention researchers, practitioners and policy-makers. The intractable nature of young driver crash risks suggests that a deeper understanding of their car use – that is, the purpose of their driving – is required to inform the design of more effective young driver countermeasures. Aims: This research examined the driving purpose reported by young drivers, including the relationship with self-reported risky driving behaviours including offences. Methods: Young drivers with a Learner or Provisional licence participated in three online surveys (N1 = 656, 17–20 years; N2 = 1051, 17–20 years; N3 = 351, 17–21 years) as part of a larger state-wide project in Queensland, Australia. Results: A driving purpose scale was developed (the PsychoSocial Purpose Driving Scale, PSPDS), revealing that young drivers drove for psychosocial reasons such as for a sense of freedom and to feel independent. Drivers who reported the greatest psychosocial purpose for driving were more likely to be male and to report more risky driving behaviours such as speeding. Drivers who deliberately avoided on-road police presence and reported a prior driving-related offence had significantly greater PSPDS scores, and higher reporting of psychosocial driving purposes was found over time as drivers transitioned from the supervised Learner licence phase to the independent Provisional (intermediate) licence phase. Discussion and conclusions: The psychosocial needs met by driving suggest that effective intervention to prevent young driver injury requires further consideration of their driving purpose.
Enforcement, education, and engineering efforts which consider the psychosocial purpose of driving are likely to be more efficacious than those which do not. Road safety countermeasures could also reduce young drivers' exposure to risk through mechanisms such as encouraging the use of public transport.
Abstract:
The rise of the peer economy poses complex new regulatory challenges for policy-makers. The peer economy, typified by services like Uber and AirBnB, promises substantial productivity gains through the more efficient use of existing resources and a marked reduction in regulatory overheads. These services are rapidly disrupting existing established markets, but the regulatory trade-offs they present are difficult to evaluate. In this paper, we examine the peer economy through the context of ride-sharing and the ongoing struggle over regulatory legitimacy between the taxi industry and new entrants Uber and Lyft. We first sketch the outlines of ride-sharing as a complex regulatory problem, showing how questions of efficiency are necessarily bound up in questions about levels of service, controls over pricing, and different approaches to setting, upholding, and enforcing standards. We outline the need for data-driven policy to understand the way that algorithmic systems work and what effects these might have in the medium to long term on measures of service quality, safety, labour relations, and equality. Finally, we discuss how the competition for legitimacy is not primarily being fought on utilitarian grounds, but is instead carried out within the context of a heated ideological battle between different conceptions of the role of the state and private firms as regulators. We ultimately argue that the key to understanding these regulatory challenges is to develop better conceptual models of the governance of complex systems by private actors and the available methods the state has of influencing their actions. These struggles are not, as is often thought, struggles between regulated and unregulated systems. The key to understanding these regulatory challenges is to better understand the important regulatory work carried out by powerful, centralised private firms – both the incumbents of existing markets and the disruptive network operators in the peer-economy.
Abstract:
This presentation discussed the growing recognition of sustainable diets at international governance levels and how this reflects the challenges and win-win opportunities of living within our ecological limits. I assert that sustainable diets provide an example of how living within our ecological limits would actually make us better off even apart from environmental benefits. After determining whether Australians generally have a sustainable diet, I outlined how Australian regulators are attempting to address sustainable diets. I argued that the personal responsibility approach, coupled with the focus on preventing or reducing overweight and obesity levels, is proving incapable of bringing about long-term sustainable diets that will contribute to the health and well-being of the Australian people.
Abstract:
Particle Swarm Optimization (PSO) is a biologically inspired computational search and optimization method based on the social behaviours of birds flocking or fish schooling. Although PSO has proven effective on many well-known numerical test problems, it suffers from premature convergence. A number of basic variations have been developed to address the premature convergence problem and improve the quality of solutions found by PSO. This study presents a comprehensive survey of the various PSO-based algorithms. As part of this survey, the authors have included a classification of the approaches and have identified the main features of each proposal. In the last part of the study, some of the topics within this field that are considered promising areas of future research are listed.
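The canonical global-best PSO that these variants build on can be sketched as follows: each particle's velocity is updated from an inertia term plus cognitive (personal-best) and social (global-best) attraction terms. The inertia weight and acceleration coefficients below are commonly used textbook values, not values taken from the survey.

```python
import random

def pso(f, dim, bounds, n_particles=30, iters=200,
        w=0.72, c1=1.49, c2=1.49, seed=0):
    """Minimise f over [lo, hi]^dim with canonical global-best PSO.

    Velocity update: v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x),
    with positions clamped to the search bounds.
    """
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal bests
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

Premature convergence arises when the swarm collapses onto the current global best too quickly; the surveyed variants typically counteract this by adapting `w`, reshaping the neighbourhood topology, or re-injecting diversity.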
Abstract:
There has been much debate about the relationship between international trade, the environment, biodiversity protection, and climate change. The Obama Administration has pushed such issues into sharp relief, with its advocacy for sweeping international trade agreements, such as the Trans-Pacific Partnership and the Trans-Atlantic Trade and Investment Partnership. There has been much public concern about the impact of the mega-trade deals upon the protection of the environment. In particular, there has been a debate about whether the Trans-Pacific Partnership will promote dirty fracking. Will the Trans-Pacific Partnership transform the Pacific Rim into a Gasland? There has been a particular focus upon investor-state dispute settlement being used by unconventional mining companies. Investor-state dispute settlement is a mechanism which enables foreign investors to seek compensation from national governments at international arbitration tribunals. In her prescient 2009 book, The Expropriation of Environmental Governance, Kyla Tienhaara foresaw the rise of investor-state dispute resolution of environmental matters. She observed: 'Over the last decade there has been an explosive increase of cases of investment arbitration. This is significant in terms of not only the number of disputes that have arisen and the number of states that have been involved, but also the novel types of dispute that have emerged. Rather than solely involving straightforward incidences of nationalization or breach of contract, modern disputes often revolve around public policy measures and implicate sensitive issues such as access to drinking water, development on sacred indigenous sites and the protection of biodiversity.' In her study, Kyla Tienhaara observed that investment agreements, foreign investment contracts and investment arbitration had significant implications for the protection of the environment.
She concluded that arbitrators have made it clear that they can, and will, award compensation to investors that claim to have been harmed by environmental regulation. She also found that some of the cases suggest that the mere threat of arbitration is sufficient to chill environmental policy development. Tienhaara was equally concerned by the possibility that a government may use the threat of arbitration as an excuse or cover for its failure to improve environmental regulation. In her view, it is evident that arbitrators have expropriated certain fundamental aspects of environmental governance from states. Tienhaara held: 'As a result, environmental regulation has become riskier, more expensive, and less democratic, especially in developing countries.' This article provides a comparative analysis of the battles over fracking, investment, trade, and the environment in a number of key jurisdictions including the United States, Canada, Australia, and New Zealand. Part 1 focuses upon the United States. Part 2 examines the dispute between Lone Pine Resources Inc. and the Government of Canada over a fracking moratorium in Quebec. Part 3 charts the rise of the Lock the Gate Alliance in Australia, and its demands for a moratorium in respect of coal seam gas and unconventional mining. Part 4 focuses upon parallel developments in New Zealand. This article concludes that Pacific Rim countries should withdraw from investor-state dispute settlement procedures, because of the threat posed to environmental regulation in respect of air, land, and water.
Abstract:
Head motion (HM) is a critical confounding factor in functional MRI. Here we investigate whether HM during resting-state functional MRI (RS-fMRI) is influenced by genetic factors in a sample of 462 twins (65% female; 101 MZ (monozygotic) and 130 DZ (dizygotic) twin pairs; mean age: 21 years (SD = 3.16), range 16–29). Heritability estimates for three HM components – mean translation (MT), maximum translation (MAXT) and mean rotation (MR) – ranged from 37 to 51%. We detected a significant common genetic influence on HM variability, with about two-thirds (genetic correlations range 0.76–1.00) of the variance shared between MR, MT and MAXT. A composite metric (HM-PC1), which aggregated these three, was also moderately heritable (h2 = 42%). Using a sub-sample (N = 35) of the twins, we confirmed that mean and maximum translational and rotational motions were consistent "traits" over repeated scans (r = 0.53–0.59); reliability was even higher for the composite metric (r = 0.66). In addition, phenotypic and cross-trait cross-twin correlations between HM and resting-state functional connectivities (RS-FCs) with Brodmann areas (BA) 44 and 45, in which RS-FCs were found to be moderately heritable (BA44: h2 = 0.23 (SD = 0.041); BA45: h2 = 0.26 (SD = 0.061)), indicated that HM might not represent a major bias in genetic studies using FCs. Even so, the HM effect on FC was not completely eliminated after regression. HM may be a valuable endophenotype whose relationship with brain disorders remains to be elucidated.
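Twin-based heritability estimates like those reported above are typically obtained with structural-equation (ACE) modelling; the classical back-of-envelope version, which conveys the same logic of contrasting MZ and DZ twin-pair correlations, is Falconer's formula. The correlation values in the example below are illustrative, not taken from the study.

```python
# Falconer's classical twin-study decomposition of phenotypic variance.
# Identical (MZ) twins share ~100% of segregating genes, fraternal (DZ)
# twins ~50%, so a larger MZ than DZ correlation implies genetic influence.

def falconer_h2(r_mz, r_dz):
    """Broad heritability estimate: h^2 = 2 * (r_MZ - r_DZ)."""
    return 2.0 * (r_mz - r_dz)

def shared_environment_c2(r_mz, r_dz):
    """Shared-environment component: c^2 = 2 * r_DZ - r_MZ."""
    return 2.0 * r_dz - r_mz

def unique_environment_e2(r_mz):
    """Unique environment (plus measurement error): e^2 = 1 - r_MZ."""
    return 1.0 - r_mz
```

For instance, hypothetical twin-pair correlations of r_MZ = 0.45 and r_DZ = 0.24 would yield h2 = 0.42, in the range of the motion heritabilities reported; the study itself would have used maximum-likelihood ACE modelling, which additionally provides standard errors like the SD values quoted.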
Abstract:
Background: The majority of studies investigating the neural mechanisms underlying treatment in people with aphasia have examined task-based brain activity. However, the use of resting-state fMRI may provide another method of examining the brain mechanisms responsible for treatment-induced recovery, and allows for investigation into connectivity within complex functional networks. Methods: Eight people with aphasia underwent 12 treatment sessions that aimed to improve object naming. Half the sessions employed a phonologically-based task, and half the sessions employed a semantic-based task, with resting-state fMRI conducted pre- and post-treatment. Brain regions in which the amplitude of low frequency fluctuations (ALFF) correlated with treatment outcomes were used as seeds for functional connectivity (FC) analysis. FC maps were compared from pre- to post-treatment, as well as with a group of 12 healthy older controls. Results: Pre-treatment ALFF in the right middle temporal gyrus (MTG) correlated with greater outcomes for the phonological treatment, with a shift to the left MTG and supramarginal gyrus, as well as the right inferior frontal gyrus, post-treatment. When compared to controls, participants with aphasia showed both normalization and up-regulation of connectivity within language networks post-treatment, predominantly in the left hemisphere. Conclusions: The results provide preliminary evidence that treatments for naming impairments affect the FC of language networks, and may aid in understanding the neural mechanisms underlying the rehabilitation of language post-stroke.