197 results for flood basalt
Abstract:
Waterfront communities in the Mekong Delta live with the inundation of their homes and businesses from seasonal flooding every year. This project investigated housing types, social practices and feelings of vulnerability of local people in the Cai Rang waterfront community in Can Tho City. The project made a significant contribution to methods for assessing vulnerability, adaptability and resilience of inhabitants of flood-prone housing in Vietnam. It also developed a new concept of 'Deltaic Urbanism' that offers a better urbanist approach specifically for deltaic regions subject to the potential impacts of climate change.
Abstract:
This study investigates the impact of floods on property values using the hedonic property price approach and other relevant econometric techniques. The main objectives of this research are to investigate (1) the impact of the release of flood-risk information and of actual floods on property values, (2) the temporal behaviour of the negative impacts, (3) property submarket behaviour, (4) the behaviour of flood-affected versus non-flood-affected areas, and (5) property market efficiency. The thesis expanded on the existing literature on natural disasters by applying a range of econometric techniques. The findings of this research are useful for policy decision-making aimed at minimizing the negative impacts of natural hazards on property markets. The thesis findings also provide a better framework for decision-making in the property insurance market. The methodological improvements made in the thesis will be invaluable for analysing the impacts of natural hazards elsewhere.
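As a rough illustration of the hedonic property price approach described above, the sketch below regresses log sale price on structural attributes plus flood-zone and timing indicators. The dataset file, column names and interaction structure are hypothetical placeholders, not the thesis's actual specification.

```python
# Minimal hedonic price regression sketch (all column names are hypothetical).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

sales = pd.read_csv("property_sales.csv")  # hypothetical transaction dataset

# log(price) on structural attributes plus flood-risk indicators:
#   in_flood_zone   - property lies inside the mapped flood-risk area
#   post_disclosure - sale occurred after the flood-risk maps were released
#   post_flood      - sale occurred after an actual flood event
model = smf.ols(
    "np.log(price) ~ bedrooms + bathrooms + land_area + dist_cbd"
    " + in_flood_zone * post_disclosure + in_flood_zone * post_flood",
    data=sales,
).fit(cov_type="HC1")  # heteroskedasticity-robust standard errors

# The interaction coefficients estimate the price discount attached to
# flood-zone properties after information release and after actual flooding.
print(model.summary())
```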
Abstract:
The thick package of ~2.7 Ga mafic and ultramafic lavas and intrusions preserved within the Neoarchean Kalgoorlie Terrane of Western Australia provides valuable insight into the geological processes controlling the most prodigious episode of growth and preservation of juvenile continental crust in Earth’s history. Limited exposure of these rocks results in uncertainty about their age, physical and chemical characteristics, and stratigraphic relationships. This in turn prevents confident correlation of regional occurrences of mafic and ultramafic successions (both intrusive and extrusive) and hinders the interpretation of tectonic setting and magmatic evolution. A recent stratigraphic drilling program through the Neoarchean stratigraphy of the Agnew Greenstone Belt in Western Australia has provided continuous exposures through a c. 7 km thick sequence of mafic and ultramafic units. In this study, we present a volcanological, lithogeochemical and chronological study of the Agnew Greenstone Belt, and provide the first pre-2690 Ma regional correlation across the Kalgoorlie Terrane. The Agnew Greenstone Belt records ~30 m.y. of episodic ultramafic-mafic magmatism comprising two cycles, each defined by a komatiite overlain by units that become more evolved and contaminated with time. The sequence is divided into nine conformable packages, each consisting of stacked subaqueous lava flows and comagmatic intrusions, as well as two sills without associated extrusions. Lavas, with the exception of intercalations between two units, form a layer-cake stratigraphy and were likely erupted from a system of fissures tapping the same magma source. The komatiites are not contaminated by continental crust ([La/Sm]PM ~0.7) and are of the Al-undepleted Munro-type. Crustal contamination is evident in many units (Songvang Basalt, Never Can Tell Basalt, Redeemer Basalt, and Turrett Dolerite), as judged by [La/Sm] > 1, negative Nb and Ti anomalies, and geochemical mixing trends towards felsic contaminants. Crystal fractionation was also significant, with early olivine and chromite removal (Mg# > 65) followed by plagioclase and clinopyroxene removal (Mg# < 65), and, in the most evolved case, titanomagnetite accumulation. Three new TIMS dates on granophyric zones of mafic sills and one ICP-MS date from an interflow felsic tuff are presented and used for regional stratigraphic correlation. Cycle I magmatism began at ~2720 Ma and ended at ~2705 Ma, whereas cycle II began at ~2705 Ma and ended at 2690.7±1.2 Ma. Regional correlations indicate that the western Kalgoorlie Terrane preserves a remarkably similar stratigraphy that can be recognised at Agnew, Ora Banda and Coolgardie, whereas the eastern part of the terrane (e.g., the Kambalda Domain) does not include cycle I but correlates well with cycle II. This research supports an autochthonous model of greenstone formation, in which one large igneous province, represented by two complete cycles, is constructed on sialic crust. New stratigraphic correlations for the Kalgoorlie Terrane indicate that many units can be traced over distances >100 km, which has implications for exploration targeting of stratigraphically hosted ultramafic Ni and VMS deposits.
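For readers unfamiliar with the two indices quoted above, the snippet below shows how Mg# and a primitive-mantle-normalised La/Sm ratio are typically computed. The molar masses and primitive-mantle values are standard reference numbers used here only as illustrative defaults, not values taken from the study.

```python
# Illustrative calculation of two indices used above: Mg# and [La/Sm]_PM.
# Molar masses and primitive-mantle values are approximate reference numbers,
# included as placeholders rather than the values used in the study.

def mg_number(mgo_wt, feo_wt):
    """Mg# = 100 * molar Mg / (Mg + Fe2+), from MgO and FeO in wt%."""
    mg = mgo_wt / 40.30   # molar mass of MgO (g/mol)
    fe = feo_wt / 71.84   # molar mass of FeO (g/mol)
    return 100.0 * mg / (mg + fe)

def la_sm_pm(la_ppm, sm_ppm, la_pm=0.648, sm_pm=0.406):
    """Primitive-mantle-normalised La/Sm (default PM values after
    Sun & McDonough 1989, used here only as an example)."""
    return (la_ppm / la_pm) / (sm_ppm / sm_pm)

# Example: an uncontaminated, komatiite-like composition
print(round(mg_number(28.0, 10.5), 1))   # Mg# well above 65
print(round(la_sm_pm(0.45, 0.42), 2))    # [La/Sm]_PM below 1 (~0.7)
```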
Abstract:
Nepal, as a consequence of its geographical location and changing climate, faces frequent threats from natural disasters. According to the World Bank’s 2005 Natural Disasters Hotspots report, Nepal is ranked the 11th most vulnerable country to earthquake risk and the 30th to flood risk. Geo-Hazards International (2011) has classified Kathmandu as one of the world’s most earthquake-vulnerable cities. In the last four decades, more than 32,000 people in Nepal have lost their lives, and the annual monetary loss is estimated at more than US$15 million. This review identifies gaps in knowledge and progress towards implementation of the post-Hyogo Framework for Action. Nepal has identified priority areas: community resilience, sustainable development and climate change induced disaster risk reduction. However, one gap between policy and action lies in Nepal's ability to act effectively in accordance with an appropriate framework for media activities. Supporting media agencies include the Press Council, the Federation of Nepalese Journalists, Nepal Television, Radio Nepal, the Telecommunications Authority and community-based organizations. The challenge lies in further strengthening traditional and new media to undertake systematic work supported by government bodies and the Nepal Risk Reduction Consortium (NRRC). Within this context, the ideal role for the media is a proactive one in which journalists pay attention to a range of appropriate angles or frames when preparing and disseminating information. It is important to develop policy for effective information collection, sharing and dissemination in collaboration with the telecommunications sector, media organisations and journalists. The aim of this paper is to describe developments in disaster management in Nepal and their implications for media management. This study provides lessons for government, communities and the media to help improve the framing of disaster messages. Significantly, the research highlights the prominence that should be given to floods, landslides, lightning and earthquakes.
Abstract:
The world is rich with information, such as signage and maps, that helps humans navigate. We present a method to extract topological spatial information from a generic bitmap floor plan and build a topometric graph that can be used by a mobile robot for tasks such as path planning and guided exploration. The algorithm first detects and extracts text in an image of the floor plan. Using the locations of the extracted text, flood fill is used to find the rooms and hallways. Doors are found by matching SURF features, and these form the connections between rooms, which are the edges of the topological graph. Our system is able to automatically detect doors and differentiate between hallways and rooms, which is important for effective navigation. We show that our method can extract a topometric graph from a floor plan and is robust against ambiguous cases commonly seen in floor plans, including elevators and stairwells.
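A rough sketch of the pipeline described above is given below: flood fill from text-label seed points segments rooms and hallways, and keypoint matching against a door symbol supplies the graph edges. The file names and seed coordinates are hypothetical, and ORB stands in for SURF (which requires the non-free opencv-contrib build); this is an illustration of the idea, not the authors' implementation.

```python
# Sketch of the floor-plan pipeline: flood fill from text locations to segment
# rooms, then keypoint matching to locate doors and connect rooms in a graph.
import cv2
import numpy as np
import networkx as nx

plan = cv2.imread("floor_plan.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
graph = nx.Graph()

# (1) Assume text detection has produced one seed point per room label.
room_seeds = {"R101": (120, 240), "R102": (380, 240), "Hallway": (250, 60)}

# (2) Flood fill from each label location to recover the room region.
for name, (x, y) in room_seeds.items():
    mask = np.zeros((plan.shape[0] + 2, plan.shape[1] + 2), np.uint8)
    cv2.floodFill(plan.copy(), mask, (x, y), 255)
    graph.add_node(name, seed=(x, y), area=int(mask.sum()))

# (3) Detect door-like features by matching keypoints of a door symbol
#     (ORB used here as a freely available stand-in for SURF).
orb = cv2.ORB_create()
door_template = cv2.imread("door_symbol.png", cv2.IMREAD_GRAYSCALE)
kp_t, des_t = orb.detectAndCompute(door_template, None)
kp_p, des_p = orb.detectAndCompute(plan, None)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des_t, des_p)

# (4) Each matched door joins the two nearest room seeds with a graph edge.
for m in matches:
    dx, dy = kp_p[m.trainIdx].pt
    nearest = sorted(room_seeds, key=lambda n: (room_seeds[n][0] - dx) ** 2
                                               + (room_seeds[n][1] - dy) ** 2)[:2]
    graph.add_edge(*nearest, door=(int(dx), int(dy)))
```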
Abstract:
We consider the development of statistical models for predicting the constituent concentrations of riverine pollutants, a key step in load estimation from frequent flow-rate data and less frequently collected concentration data. We consider how to capture the impacts of past flow patterns via the average discounted flow (ADF), which discounts past flux according to the time elapsed, so that more recent fluxes are given more weight. However, the effectiveness of the ADF depends critically on the choice of the discount factor, which reflects the unknown environmental cumulating process of the concentration compounds. We propose to choose the discount factor by maximizing the adjusted R² value or the Nash-Sutcliffe model efficiency coefficient. The R² values are adjusted to take account of the number of parameters in the model fit. The resulting optimal discount factor can be interpreted as a measure of the constituent exhaustion rate during flood events. To evaluate the performance of the proposed regression estimators, we examine two different sampling scenarios by resampling fortnightly and opportunistically from two real daily datasets, which come from two United States Geological Survey (USGS) gaging stations located in the Des Plaines River and Illinois River basin. The generalized rating-curve approach produces estimates of the total sediment loads with biases ranging from -30% to 83%, whereas the new approaches produce much lower biases, ranging from -24% to 35%. This substantial improvement in the estimates of the total load is due to the fact that the predictability of concentration is greatly improved by the additional predictors.
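One plausible way to formalise the ADF and its tuning is sketched below: past flows are exponentially discounted, and the discount factor is chosen by maximising the adjusted R² of a simple log-log concentration regression. The recursive ADF form, column names and model are illustrative assumptions, not the paper's exact definitions.

```python
# Sketch of the average-discounted-flow (ADF) idea with the discount factor
# chosen by maximising adjusted R-squared. Column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def average_discounted_flow(flow, discount):
    """One possible ADF form: ADF_t = d*ADF_{t-1} + (1-d)*flow_t (recursive)."""
    adf = np.empty_like(flow, dtype=float)
    adf[0] = flow[0]
    for t in range(1, len(flow)):
        adf[t] = discount * adf[t - 1] + (1.0 - discount) * flow[t]
    return adf

def fit_for_discount(df, discount):
    # Simple log-log regression of concentration on flow and ADF.
    X = sm.add_constant(np.column_stack([
        np.log(df["flow"]),
        np.log(average_discounted_flow(df["flow"].to_numpy(), discount)),
    ]))
    return sm.OLS(np.log(df["concentration"]), X).fit()

daily = pd.read_csv("usgs_daily.csv")   # hypothetical daily flow/concentration data
grid = np.linspace(0.50, 0.99, 50)      # candidate discount factors
best = max(grid, key=lambda d: fit_for_discount(daily, d).rsquared_adj)
print("discount factor maximising adjusted R2:", round(best, 3))
```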
Abstract:
The Three Gorges Dam holds many records in the history of engineering. While the dam has produced benefits in terms of flood control, hydropower generation and increased navigation capacity on the Yangtze River, serious questions have been raised concerning its impact on both upstream and downstream ecosystems. It has been suggested that the dam's operation intensifies the extremes of wet and dry conditions in the downstream Poyang Lake and adversely affects important local wetlands. A floodgate has been proposed to maintain the lake water level by controlling the flow between Poyang Lake and the Yangtze River. Using extensive hydrological data and generalized linear statistical models, we demonstrated that the dam's operation induces major changes in the downstream river discharge near the dam, including an average "water loss". The analysis also revealed considerable effects on the Poyang Lake water level, particularly a reduced level over the dry period from late summer to autumn. However, the dam's impact needs to be further assessed through long-term monitoring of the lake ecosystem, covering a wide range of parameters related to the hydrological and hydraulic characteristics of the lake, water quality, geomorphological characteristics, aquatic biota and their habitat, and wetland vegetation and associated fauna.
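To make the statistical approach concrete, the sketch below fits a generalized linear model relating daily discharge to seasonal terms and a pre/post-impoundment indicator. The file, variable names, cutoff date and Gamma/log specification are illustrative assumptions rather than the model actually used in the study.

```python
# Minimal GLM sketch: downstream discharge vs. season and a pre/post-dam
# indicator. Everything here is illustrative, not the study's specification.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

flows = pd.read_csv("yangtze_discharge.csv", parse_dates=["date"])  # hypothetical
flows["post_dam"] = (flows["date"] >= "2003-06-01").astype(int)  # illustrative cutoff
flows["doy"] = flows["date"].dt.dayofyear

model = smf.glm(
    "discharge ~ post_dam + np.cos(2 * np.pi * doy / 365.25)"
    " + np.sin(2 * np.pi * doy / 365.25)",
    data=flows,
    family=sm.families.Gamma(link=sm.families.links.Log()),
).fit()
print(model.summary())  # a negative post_dam coefficient would suggest lower discharge
```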
Abstract:
We consider estimating the total load from frequent flow data and less frequent concentration data. There are numerous load estimation methods available, some of which are captured in various online tools. However, most estimators are subject to large statistical biases, and their associated uncertainties are often not reported. This makes interpretation difficult and the estimation of trends or determination of optimal sampling regimes impossible to assess. In this paper, we first propose two indices for measuring the extent of sampling bias, and then provide steps for obtaining reliable load estimates that minimize the biases and make use of informative predictive variables. The key step in this approach is the development of an appropriate predictive model for concentration. This is achieved using a generalized rating-curve approach with additional predictors that capture unique features in the flow data, such as the concept of the first flush, the location of the event on the hydrograph (e.g. rise or fall) and the discounted flow. The latter may be thought of as a measure of constituent exhaustion occurring during flood events. Adding this information can significantly improve the predictability of concentration, and ultimately the precision with which the pollutant load is estimated. We also provide a measure of the standard error of the load estimate which incorporates model, spatial and/or temporal errors. The method also has the capacity to incorporate measurement error incurred through the sampling of flow. We illustrate this approach for two rivers delivering to the Great Barrier Reef, Queensland, Australia. The first dataset, from the Burdekin River, consists of total suspended sediment (TSS), nitrogen oxides (NOx) and gauged flow for 1997. The other dataset is from the Tully River, for the period July 2000 to June 2008. For NOx in the Burdekin, the new estimates are very similar to the ratio estimates even when there is no relationship between concentration and flow. For the Tully dataset, however, incorporating the additional predictive variables, namely the discounted flow and the flow phase (rising or receding), substantially improved the model fit, and thus the certainty with which the load is estimated.
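A compact sketch of a rating-curve concentration model with the additional predictors mentioned above, and of the resulting load estimate, is given below. Column names, the exponential-smoothing proxy for discounted flow and the model form are illustrative assumptions only.

```python
# Rating-curve style concentration model with extra predictors (hydrograph
# phase and a discounted-flow proxy), followed by a load estimate.
# File names, columns and the model form are illustrative only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

obs = pd.read_csv("concentration_samples.csv")   # sampled conc with matching flow
flow = pd.read_csv("gauged_flow.csv")            # frequent gauged flow record

for df in (obs, flow):
    df["log_q"] = np.log(df["flow"])
    df["rising"] = (df["flow"].diff().fillna(0) > 0).astype(int)   # hydrograph phase
    df["disc_q"] = df["flow"].ewm(alpha=0.05).mean()               # discounted-flow proxy

model = smf.ols("np.log(conc) ~ log_q + rising + np.log(disc_q)", data=obs).fit()

# Predict concentration at every flow record and sum flux over each interval
# (dt in seconds); retransformation bias is ignored here for brevity.
flow["conc_hat"] = np.exp(model.predict(flow))
dt = 600  # e.g. 10-minute flow records
load = float((flow["conc_hat"] * flow["flow"] * dt).sum())
print(f"estimated load: {load:.3g}")
```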
Abstract:
There are numerous load estimation methods available, some of which are captured in various online tools. However, most estimators are subject to large statistical biases, and their associated uncertainties are often not reported. This makes interpretation difficult and the estimation of trends or determination of optimal sampling regimes impossible to assess. In this paper, we first propose two indices for measuring the extent of sampling bias, and then provide steps for obtaining reliable load estimates by minimizing the biases and making use of possible predictive variables. The load estimation procedure can be summarized by the following four steps: (i) output the flow rates at regular time intervals (e.g. 10 minutes) using a time series model that captures all the peak flows; (ii) output the predicted flow rates as in (i) at the concentration sampling times, if the corresponding flow rates are not collected; (iii) establish a predictive model for the concentration data, which incorporates all possible predictor variables, and output the predicted concentrations at the regular time intervals as in (i); and (iv) obtain the sum of all the products of the predicted flow and the predicted concentration over the regular time intervals to represent an estimate of the load. The key step in this approach is the development of an appropriate predictive model for concentration. This is achieved using a generalized regression (rating-curve) approach with additional predictors that capture unique features in the flow data, namely the concept of the first flush, the location of the event on the hydrograph (e.g. rise or fall) and the cumulative discounted flow. The latter may be thought of as a measure of constituent exhaustion occurring during flood events. The model also has the capacity to accommodate autocorrelation in the model errors, which results from intensive sampling during floods. Incorporating this additional information can significantly improve the predictability of concentration, and ultimately the precision with which the pollutant load is estimated. We also provide a measure of the standard error of the load estimate which incorporates model, spatial and/or temporal errors. The method also has the capacity to incorporate measurement error incurred through the sampling of flow. We illustrate this approach using concentrations of total suspended sediment (TSS) and nitrogen oxides (NOx) and gauged flow data from the Burdekin River, a catchment delivering to the Great Barrier Reef. The sampling biases for NOx concentrations range from 2 to 10 times, indicating severe bias. As expected, the traditional averaging and extrapolation methods produce much higher estimates than those obtained when sampling bias is taken into account.
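The skeleton below walks through the four steps listed above, with simple interpolation and a plain log-log rating curve standing in for the time series and concentration models. File layout and column names are hypothetical.

```python
# Skeleton of the four-step load estimation procedure, with placeholder
# interpolation and regression standing in for the actual models.
import numpy as np
import pandas as pd

flow_obs = pd.read_csv("flow_obs.csv", parse_dates=["time"]).set_index("time")
conc_obs = pd.read_csv("conc_obs.csv", parse_dates=["time"]).set_index("time")

# (i) flow on a regular 10-minute grid (simple interpolation as a stand-in
#     for a time series model that preserves the peaks)
grid = flow_obs["flow"].resample("10min").mean().interpolate("time").dropna()

# (ii) predicted flow at the concentration sampling times
flow_at_samples = np.interp(conc_obs.index.astype("int64"),
                            grid.index.astype("int64"), grid.to_numpy())

# (iii) a concentration model; here just a log-log rating-curve placeholder
coef = np.polyfit(np.log(flow_at_samples), np.log(conc_obs["conc"]), 1)
conc_grid = np.exp(np.polyval(coef, np.log(grid.to_numpy())))

# (iv) load = sum over the grid of predicted flow x predicted concentration x interval
dt = 600.0  # seconds per 10-minute interval
load = float(np.sum(grid.to_numpy() * conc_grid * dt))
print(f"estimated load: {load:.3g}")
```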
Abstract:
This study aims to help broaden the use of electronic portal imaging devices (EPIDs) for pre-treatment patient positioning verification, from photon-beam radiotherapy to photon- and electron-beam radiotherapy, by proposing and testing a method for acquiring clinically useful EPID images of patient anatomy using electron beams, with a view to enabling and encouraging further research in this area. The EPID images used in this study were acquired using all available beams from a linac configured to deliver electron beams with nominal energies of 6, 9, 12, 16 and 20 MeV, as well as photon beams with nominal energies of 6 and 10 MV. A widely available, heterogeneous, approximately humanoid thorax phantom was used to provide an indication of the contrast and noise produced when imaging different types of tissue with comparatively realistic thicknesses. The acquired images were automatically calibrated and corrected for the effects of variations in the sensitivity of individual photodiodes using a flood-field image. For electron-beam imaging, flood-field EPID calibration images were acquired with and without blocks of water-equivalent plastic (with thicknesses approximately equal to the practical range of the electrons in the plastic) placed upstream of the EPID, to filter out the primary electron beam and leave only the bremsstrahlung photon signal. While the electron-beam images acquired using a standard (unfiltered) flood-field calibration were noisy and difficult to interpret, the electron-beam images acquired using the filtered flood-field calibration showed tissues and bony anatomy with levels of contrast and noise similar to those seen in the clinically acceptable photon-beam EPID images. The best electron-beam imaging results (highest contrast, signal-to-noise and contrast-to-noise ratios) were achieved when the images were acquired using the higher-energy electron beams (16 and 20 MeV) and the EPID was calibrated using an intermediate (12 MeV) electron beam energy. These results demonstrate the feasibility of acquiring clinically useful EPID images of patient anatomy using electron beams and suggest important avenues for future investigation. There is manifest potential for the EPID imaging method proposed in this work to lead to the clinical use of electron-beam imaging for geometric verification of electron treatments in the future.
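The flood-field correction referred to above can be illustrated with a toy calculation: dividing the raw image by the (normalised) flood-field response removes per-pixel sensitivity variations. The arrays below are synthetic, and the example deliberately ignores the additional filtering step the study applies when acquiring electron-beam flood fields.

```python
# Toy illustration of flood-field calibration: each pixel's raw signal is
# divided by the normalised flood-field response so that photodiode
# sensitivity variations cancel out. All array contents are synthetic.
import numpy as np

rng = np.random.default_rng(0)

sensitivity = rng.normal(1.0, 0.05, size=(256, 256))          # per-pixel gain
anatomy = np.ones((256, 256)); anatomy[100:160, 80:180] = 0.6  # synthetic "patient"

flood_field = sensitivity * 1.0    # open-field (flood) acquisition
raw_image = sensitivity * anatomy  # acquisition through the phantom

corrected = raw_image * flood_field.mean() / flood_field
# 'corrected' now shows the anatomy with the gain pattern removed; the key
# point of the abstract is that, for electron beams, the flood field itself
# must be acquired through a filter thick enough to stop the primary electrons.
```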
Abstract:
The QUT Centre for Subtropical Design conducted a design-led, interdisciplinary collaborative workshop (charrette) to develop initial ideas for how innovation in research and practice can be applied to the complex problem of resilient, future-focussed urban renewal in Rockhampton’s flood-prone suburbs and core grid. Three creative teams explored a range of scenarios for Rockhampton’s resilience in built form over the longer term, producing a large number of sketches, drawings and texts over two days. This report identifies the themes, principles and strategies which emerged from the charrette. Each group proposed multiple guiding principles that fell into three strategic approaches: defend (through construction of a levee); adapt (by designing with flood in mind); and retreat (a long-term view to relocating populations living in flood-prone areas). All three groups identified the importance of design that accommodates art, heritage, recreation, sustainability and tourism, and proposed these as principles to guide future strategies that mediate between Rockhampton’s broader ecological landscape and urban living. Such strategies would accommodate more affordable housing options, demonstrate sustainability, and respond to the predicted increase in extreme weather events, including flooding. The charrette outcomes pave the way for investigating wider issues and solutions for Rockhampton’s resilient future, beyond a levee as an isolated structure.
Abstract:
Introduction: The last half-century of epidemiological enquiry into schizophrenia can be characterized by the search for neurological imbalances and lesions, and for genetic factors. The growing consensus is that these directions have failed, and there is now growing interest in psychosocial and developmental models. Another area of recent interest is epigenetics, the modulation of genetic influences by environmental factors.
Methods: This integrative review comparatively maps current psychosocial, developmental and epigenetic models of schizophrenia epidemiology to identify crossover and theoretical gaps.
Results: In the flood of data being produced around schizophrenia epidemiology, one of the most consistent findings is that schizophrenia is an urban syndrome. Once demographic factors have been discounted, between one-quarter and one-third of all incidence is repeatedly traced back to urbanicity, potentially threatening more established models such as the psychosocial, genetic and developmental hypotheses.
Conclusions: Close analysis demonstrates how current models for schizophrenia epidemiology appear to miss the mark. Furthermore, the built environment appears to be an inextricable factor in all current models and may indeed be a valid epidemiological factor on its own. The reason the built environment has not already become a de rigueur area of epidemiological research is possibly trivial: it simply does not attract enough science, and lacks a hero to promote it alongside other hypotheses.
Abstract:
Purification of drinking water is routinely achieved by use of conventional coagulants and disinfection procedures. However, there are instances, such as flood events, when turbidity reaches extreme levels, while natural organic matter (NOM) may be an issue throughout the year. Consequently, there is a need to develop technologies which can effectively treat water of high turbidity during flood events and NOM content year round. It was our hypothesis that pebble matrix filtration potentially offered a relatively cheap, simple and reliable means of clarifying such challenging water samples. A laboratory-scale pebble matrix filter (PMF) column was therefore used to evaluate turbidity and NOM pre-treatment performance on 2013 Brisbane River flood water. Since the high turbidity was only a seasonal and short-term problem, the general applicability of pebble matrix filters for NOM removal was also investigated. A 1.0 m deep bed of pebbles (the matrix), partly in-filled with either sand or crushed glass, was tested, upon which was situated a layer of granular activated carbon (GAC). Turbidity was measured as a surrogate for suspended solids (SS), whereas total organic carbon (TOC) and UV absorbance at 254 nm were measured as surrogate parameters for NOM. Experiments using natural flood water showed that, without the addition of any chemical coagulants, PMF columns achieved at least 50% turbidity reduction when the source water contained moderate hardness levels. For harder water samples, above 85% turbidity reduction was obtained. The ability to remove 50% of turbidity without chemical coagulants may represent significant cost savings to water treatment plants, and added environmental benefits accrue due to less sludge formation. A TOC reduction of 35-47% and a UV-254 nm reduction of 24-38% were also observed. In addition to turbidity removal during flood periods, the ability to remove NOM with the pebble matrix filter throughout the year may have the benefit of reducing disinfection by-product (DBP) formation potential and coagulant demand at water treatment plants. Final head losses were remarkably low, reaching only 11 cm at a filtration velocity of 0.70 m/h.
Abstract:
This paper presents a novel RTK-based GNSS Lagrangian drifter system that is capable of monitoring water velocity, turbulence and dispersion coefficients in rivers and estuaries. The Lagrangian drifters use the dual-frequency real-time kinematic (RTK) technique for both position and velocity estimation. The capsule is designed to meet requirements such as minimal height and diameter to reduce direct wind drag, positive buoyancy for satellite signal reception and stability, and waterproof housing for electronic components such as the GNSS receiver and computing board. The collected GNSS data are processed with post-processing RTK software. Several experiments have been carried out in two rivers near Brisbane and the Sunshine Coast in Queensland. Results show that the high-accuracy GNSS drifters can be used to measure the dispersion coefficient resulting from sub-tidal velocity fluctuations in shallow tidal water. In addition, the RTK-GNSS drifters respond well to vertical motion and thus could be applicable to flood monitoring.
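As an illustration of how a dispersion coefficient might be extracted from drifter tracks, the sketch below uses a common estimator: half the growth rate of the cluster's along-channel position variance. The data layout is hypothetical, and this is not necessarily the estimator used in the paper.

```python
# One common estimator of a longitudinal dispersion coefficient from drifter
# tracks: K ~ 0.5 * d(variance)/dt of the cluster's along-channel positions.
# The track file layout is hypothetical.
import numpy as np
import pandas as pd

tracks = pd.read_csv("drifter_tracks.csv")   # columns: time_s, drifter_id, x_m

# Variance of along-channel position across the drifter cluster at each time.
var_x = tracks.groupby("time_s")["x_m"].var()

# Half the slope of a linear fit of variance against time gives K.
slope = np.polyfit(var_x.index.to_numpy(float), var_x.to_numpy(), 1)[0]
K = 0.5 * slope
print(f"longitudinal dispersion coefficient ~ {K:.3f} m^2/s")
```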
Abstract:
Purpose: The purpose of this paper is to reduce the potential for litigation by improving valuers’ awareness of water risks. As part of a valuer’s due diligence, the paper provides guidance on how to identify such risks by explaining the different types and examining how online search tools can be used in conjunction with more traditional methods to evaluate the probability of these risks occurring.
Design/methodology/approach: The paper builds on prior research which examined the impact of water on, and its relevance to, valuations. By means of legal/doctrinal analysis, this paper considers the relevant issues from the perspective of managing client expectations and needs. In so doing, it identifies online tools available to assist in identifying at-risk properties and better informing clients.
Findings: While the internet provides a variety of tools for accessing relevant information, this information is most commonly provided subject to disclaimer. Valuers need to ensure that they do not rely blindly on these tools but use them in conjunction with individual property inspections.
Research limitations/implications: Although the examples considered are primarily Australian, increasing water risks generally make the issues considered relevant for any jurisdiction. The research will be of particular interest to practitioners in coastal or riverine areas.
Practical implications: Valuation reports are sought for a variety of purposes by a variety of clients, ranging from the experienced, knowledgeable developer looking to maximise available equity to the inexperienced, uneducated individual looking to acquire a home and thinking, more often than not, with their heart rather than their head. More informed practices by valuers will lead to valuation reports that are more easily understood by clients, thus lessening the likelihood of litigation against the valuer for negligence.
Originality/value: The paper highlights the issue of water risks; the need for valuers to properly address potential and actual risks in their reports; and the corresponding need to undertake all appropriate searches and enquiries of the property to be valued. It reinforces the importance of access to the internet as a tool in the valuation process.