931 results for Extraction of Sedimentary Phosphorus


Relevance:

100.00%

Publisher:

Abstract:

Synchronous fluorescence spectroscopy (SFS) was applied for the investigation of interactions of the antibiotic, tetracycline (TC), with DNA in the presence of aluminium ions (Al3+). The study was facilitated by the use of the Methylene Blue (MB) dye probe, and the interpretation of the spectral data with the aid of the chemometrics method, parallel factor analysis (PARAFAC). Three-way synchronous fluorescence analysis extracted the important optimum constant wavelength differences, Δλ, and showed that for the TC–Al3+–DNA, TC–Al3+ and MB dye systems, the associated Δλ values were different (Δλ = 80, 75 and 30 nm, respectively). Subsequent PARAFAC analysis demonstrated the extraction of the equilibrium concentration profiles for the TC–Al3+, TC–Al3+–DNA and MB probe systems. This information is unobtainable by conventional means of data interpretation. The results indicated that the MB dye interacted with the TC–Al3+–DNA surface complex, presumably via a reaction intermediate, TC–Al3+–DNA–MB, leading to the displacement of the TC–Al3+ by the incoming MB dye probe.
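
To illustrate the kind of three-way decomposition described above, the following is a minimal, hypothetical sketch (not the authors' code or data) of a PARAFAC decomposition of a fluorescence data cube using the tensorly library; the array shape, rank and parameter values are assumptions for illustration only.

```python
# A minimal sketch of PARAFAC on a three-way fluorescence array of hypothetical
# shape (samples x wavelengths x constant-wavelength offsets).
# Requires: numpy, tensorly (pip install tensorly).
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

# Hypothetical data cube: 20 titration samples, 120 wavelength points, 3 delta-lambda scans
X = tl.tensor(np.random.rand(20, 120, 3))

# Decompose into 3 trilinear components (one per fluorescing species assumed here)
weights, factors = parafac(X, rank=3, n_iter_max=200, tol=1e-8)

sample_mode, spectral_mode, offset_mode = factors
# Columns of sample_mode approximate relative concentration profiles across samples;
# columns of spectral_mode approximate the pure-component synchronous spectra.
print(sample_mode.shape, spectral_mode.shape, offset_mode.shape)
```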

Relevance:

100.00%

Publisher:

Abstract:

Objective: To summarise the extent to which narrative text fields in administrative health data are used to gather information about the event resulting in presentation to a health care provider for treatment of an injury, and to highlight best practice approaches to conducting narrative text interrogation for injury surveillance purposes.----- Design: Systematic review----- Data sources: Electronic databases searched included CINAHL, Google Scholar, Medline, Proquest, PubMed and PubMed Central. Snowballing strategies were employed by searching the bibliographies of retrieved references to identify relevant associated articles.----- Selection criteria: Papers were selected if the study used a health-related database and if the study objectives were to a) use text fields to identify injury cases or use text fields to extract additional information on injury circumstances not available from coded data, b) use text fields to assess accuracy of coded data fields for injury-related cases, or c) describe methods/approaches for extracting injury information from text fields.----- Methods: The papers identified through the search were independently screened by two authors for inclusion, resulting in 41 papers selected for review. Due to heterogeneity between studies, meta-analysis was not performed.----- Results: The majority of papers reviewed focused on describing injury epidemiology trends using coded data and text fields to supplement coded data (28 papers), with these studies demonstrating the value of text data for providing more specific information beyond what had been coded to enable case selection or provide circumstantial information. Caveats were expressed in terms of the consistency and completeness of recording of text information, resulting in underestimates when using these data. Four coding validation papers were reviewed, with these studies showing the utility of text data for validating and checking the accuracy of coded data. Seven studies (9 papers) described methods for interrogating injury text fields for systematic extraction of information, with a combination of manual and semi-automated methods used to refine and develop algorithms for extraction and classification of coded data from text. Quality assurance approaches to assessing the robustness of the methods for extracting text data were only discussed in 8 of the epidemiology papers and 1 of the coding validation papers. All of the text interrogation methodology papers described systematic approaches to ensuring the quality of the approach.----- Conclusions: Manual review and coding approaches, text search methods, and statistical tools have been utilised to extract data from narrative text and translate it into usable, detailed injury event information. These techniques can be, and have been, applied to administrative datasets to identify specific injury types and add value to previously coded injury datasets. Only a few studies thoroughly described the methods which were used for text mining, and less than half of the studies reviewed used or described quality assurance methods for ensuring the robustness of the approach. New techniques utilising semi-automated computerised approaches and Bayesian/clustering statistical methods offer the potential to further develop and standardise the analysis of narrative text for injury surveillance.
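
As a concrete illustration of the simplest "text search" approach referred to in the conclusions, the following hedged sketch flags injury cases and extracts mechanism labels from a narrative field using keyword patterns; the column names, patterns and example narratives are hypothetical, not drawn from any reviewed study.

```python
# A minimal sketch of keyword-based interrogation of a narrative text field to
# flag candidate injury cases and extract simple circumstance information.
import re
import pandas as pd

records = pd.DataFrame({
    "narrative": [
        "Fell from ladder while cleaning gutters, landed on concrete",
        "Burn to left hand from hot oil while cooking",
        "Routine review, no injury reported",
    ]
})

# Simple keyword/pattern dictionary for injury mechanisms (illustrative only)
patterns = {
    "fall": r"\bfell\b|\bfall\b|\bslip(ped)?\b",
    "burn": r"\bburn(ed|t)?\b|\bscald(ed)?\b",
}

def classify(text):
    """Return the mechanism labels whose pattern matches the narrative."""
    text = text.lower()
    return [label for label, pat in patterns.items() if re.search(pat, text)]

records["mechanisms"] = records["narrative"].apply(classify)
records["injury_case"] = records["mechanisms"].apply(len) > 0
print(records[["narrative", "mechanisms", "injury_case"]])
```

In practice such rule sets would be refined iteratively against manually coded records, which is essentially the semi-automated algorithm development the reviewed methodology papers describe.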

Relevance:

100.00%

Publisher:

Abstract:

The automatic extraction of road features from remotely sensed images has been a topic of great interest within the photogrammetric and remote sensing communities for over three decades. Although various techniques have been reported in the literature, it remains challenging to efficiently extract road details as image resolution increases and as the requirement for accurate and up-to-date road data grows. In this paper, we focus on the automatic detection of road lane markings, which are crucial for many applications, including lane-level navigation and lane departure warning. The approach consists of four steps: i) data preprocessing, ii) image segmentation and road surface detection, iii) road lane marking extraction based on the generated road surface, and iv) testing and system evaluation. The proposed approach utilizes the unsupervised ISODATA image segmentation algorithm, which segments the image into vegetation regions and road surface based only on the Cb component of the YCbCr color space. A shadow detection method based on the YCbCr color space is also employed to detect and recover shadows cast on the road surface by vehicles and trees. Finally, the lane marking features are detected from the road surface using histogram clustering. Experiments applying the proposed method to an aerial imagery dataset of Gympie, Queensland, demonstrate the efficiency of the approach.
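
The following is a minimal sketch of the general idea, not the authors' implementation: separate road surface from vegetation using only the Cb channel of YCbCr, then pull out bright lane-marking pixels on that surface. Otsu thresholding and a percentile cut-off stand in for the paper's ISODATA segmentation and histogram clustering, and the file name is hypothetical.

```python
import cv2
import numpy as np

img = cv2.imread("aerial_tile.png")             # BGR aerial image tile (hypothetical file)
ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)  # OpenCV channel order: Y, Cr, Cb
y, cr, cb = cv2.split(ycrcb)

# Segment road surface vs. vegetation from the Cb channel alone (simplified stand-in
# for the unsupervised ISODATA segmentation used in the paper)
_, road_mask = cv2.threshold(cb, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Lane markings: bright (high-luminance) pixels restricted to the road surface.
# The 98th-percentile cut-off is an illustrative stand-in for histogram clustering.
road_y = y[road_mask > 0]
cutoff = np.percentile(road_y, 98)
markings = np.zeros_like(y)
markings[(road_mask > 0) & (y >= cutoff)] = 255

cv2.imwrite("lane_markings.png", markings)
```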

Relevance:

100.00%

Publisher:

Abstract:

This chapter looks at issues of non-stationarity in determining when a transient has occurred and when it is possible to fit a linear model to a non-linear response. The first issue is associated with the detection of loss of damping of power system modes. When a control device such as an SVC fails, the operator needs to know whether the damping of key power system oscillation modes has deteriorated significantly. This question is posed here as an alarm detection problem rather than an identification problem, in order to obtain fast detection of a change. The second issue concerns when a significant disturbance has occurred and the operator is seeking to characterize the system oscillation. The disturbance is initially large, giving a nonlinear response; this then decays and can become smaller than the noise level of normal customer load changes. The difficulty is one of determining when a linear response can be reliably identified between the non-linear phase and the large-noise phase of the signal. The solution proposed in this chapter uses “Time-Frequency” analysis tools to assist the extraction of the linear model.
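
A hedged, hypothetical sketch of the kind of time-frequency view involved: a spectrogram of a decaying oscillation buried in load noise, used to locate the window in which the modal component stands clear of the noise floor and a linear model could plausibly be fitted. The frequencies, damping, noise level and SNR threshold below are made up for illustration.

```python
import numpy as np
from scipy import signal

fs = 50.0                                   # sample rate, Hz
t = np.arange(0, 60, 1 / fs)
mode = np.exp(-0.05 * t) * np.sin(2 * np.pi * 0.4 * t)   # 0.4 Hz mode, light damping
x = mode + 0.02 * np.random.randn(t.size)                # ambient load-noise floor

f, tt, Sxx = signal.spectrogram(x, fs=fs, nperseg=512, noverlap=384)

# Energy in a narrow band around the mode vs. broadband noise energy, per time window
band = (f > 0.3) & (f < 0.5)
mode_energy = Sxx[band].sum(axis=0)
noise_energy = Sxx[~band].sum(axis=0)
snr = 10 * np.log10(mode_energy / noise_energy)

# Windows where the band SNR is high are candidates for fitting a linear (modal) model
usable = tt[snr > 6.0]
print(f"linear-fit window candidates: {usable.min():.1f}s to {usable.max():.1f}s"
      if usable.size else "mode never clears the noise floor")
```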

Relevance:

100.00%

Publisher:

Abstract:

The contribution of risky behaviour to the increased crash and fatality rates of young novice drivers is recognised in the road safety literature around the world. Exploring such risky driver behaviour has led to the development of tools like the Driver Behaviour Questionnaire (DBQ) to examine driving violations, errors, and lapses [1]. Whilst the DBQ has been utilised in young novice driver research, some items within this tool seem specifically designed for the older, more experienced driver, whilst others appear to assess both behaviour and related motives. The current study was prompted by the need for a risky behaviour measurement tool that can be utilised with young drivers with a provisional driving licence. Sixty-three items exploring young driver risky behaviour developed from the road safety literature were incorporated into an online survey. These items assessed driver, passenger, journey, car and crash-related issues. A sample of 476 drivers aged 17-25 years (M = 19, SD = 1.59 years) with a provisional driving licence and matched for age, gender, and education was drawn from a state-wide sample of 761 young drivers who completed the survey. Factor analysis based upon a principal components extraction of factors, followed by an oblique rotation, was used to investigate the underlying dimensions of young novice driver risky behaviour. A five-factor solution comprising 44 items was identified, accounting for 55% of the variance in young driver risky behaviour. Factor 1 accounted for 32.5% of the variance and appeared to measure driving violations that were transient in nature - risky behaviours that followed risky decisions that occurred during the journey (e.g., speeding). Factor 2 accounted for 10.0% of variance and appeared to measure driving violations that were fixed in nature; the risky decisions being undertaken before the journey (e.g., drink driving). Factor 3 accounted for 5.4% of variance and appeared to measure misjudgment (e.g., misjudged speed of oncoming vehicle). Factor 4 accounted for 4.3% of variance and appeared to measure risky driving exposure (e.g., driving at night with friends as passengers). Factor 5 accounted for 2.8% of variance and appeared to measure driver emotions or mood (e.g., anger). Given that the aim of the study was to create a research tool, the factors informed the development of five subscales and one composite scale. The composite scale had a very high internal consistency measure (Cronbach’s alpha) of .947. Self-reported data relating to police-detected driving offences, crash involvement, and intentions to break road rules within the next year were also collected. While the composite scale was only weakly correlated with self-reported crashes (r = .16, p < .001), it was moderately correlated with offences (r = .26, p < .001), and highly correlated with intentions to break the road rules (r = .57, p < .001). Further application of the developed scale is needed to confirm the factor structure within other samples of young drivers both in Australia and in other countries. In addition, future research could explore the applicability of the scale for investigating the behaviour of other types of drivers.
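
A hedged sketch of the analysis pipeline described above, run on synthetic Likert-style data rather than the study's survey: factor extraction with an oblique (oblimin) rotation plus Cronbach's alpha for the composite scale. The factor_analyzer package's default minres extraction is used here as a stand-in for the principal components extraction reported; item counts and names are assumptions.

```python
# Requires: numpy, pandas, factor_analyzer (pip install factor_analyzer)
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

rng = np.random.default_rng(0)
n_respondents, n_items = 476, 44
X = pd.DataFrame(rng.integers(1, 6, size=(n_respondents, n_items)),   # 1-5 Likert items
                 columns=[f"item_{i}" for i in range(n_items)])

fa = FactorAnalyzer(n_factors=5, rotation="oblimin")   # oblique rotation
fa.fit(X)
loadings = pd.DataFrame(fa.loadings_, index=X.columns)
cumulative_variance = fa.get_factor_variance()[2][-1]   # cumulative proportion

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Internal consistency of a scale formed by summing the item columns."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

print(f"cumulative variance explained: {cumulative_variance:.2f}")
print(f"composite-scale alpha: {cronbach_alpha(X):.3f}")
```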

Relevance:

100.00%

Publisher:

Abstract:

Advances in symptom management strategies through a better understanding of cancer symptom clusters depend on the identification of symptom clusters that are valid and reliable. The purpose of this exploratory research was to investigate alternative analytical approaches to identify symptom clusters for patients with cancer, using readily accessible statistical methods, and to justify which methods of identification may be appropriate for this context. Three studies were undertaken: (1) a systematic review of the literature, to identify analytical methods commonly used for symptom cluster identification for cancer patients; (2) a secondary data analysis to identify symptom clusters and compare alternative methods, as a guide to best practice approaches in cross-sectional studies; and (3) a secondary data analysis to investigate the stability of symptom clusters over time. The systematic literature review identified, in the 10 years prior to March 2007, 13 cross-sectional studies implementing multivariate methods to identify cancer-related symptom clusters. The methods commonly used to group symptoms were exploratory factor analysis, hierarchical cluster analysis and principal components analysis. Common factor analysis methods were recommended as the best practice cross-sectional methods for cancer symptom cluster identification. A comparison of alternative common factor analysis methods was conducted, in a secondary analysis of a sample of 219 ambulatory cancer patients with mixed diagnoses, assessed within one month of commencing chemotherapy treatment. Principal axis factoring, unweighted least squares and image factor analysis identified five consistent symptom clusters, based on patient self-reported distress ratings of 42 physical symptoms. Extraction of an additional cluster was necessary when using alpha factor analysis to determine clinically relevant symptom clusters. The recommended approaches for symptom cluster identification using non-multivariate normal data were: principal axis factoring or unweighted least squares for factor extraction, followed by oblique rotation; and use of the scree plot and Minimum Average Partial procedure to determine the number of factors. In contrast to other studies which typically interpret pattern coefficients alone, in these studies symptom clusters were determined on the basis of structure coefficients. This approach was adopted for the stability of the results, as structure coefficients are correlations between factors and symptoms that are unaffected by the correlations between factors. Symptoms could be associated with multiple clusters as a foundation for investigating potential interventions. The stability of these five symptom clusters was investigated in separate common factor analyses, 6 and 12 months after chemotherapy commenced. Five qualitatively consistent symptom clusters were identified over time (Musculoskeletal-discomforts/lethargy, Oral-discomforts, Gastrointestinal-discomforts, Vasomotor-symptoms, Gastrointestinal-toxicities), but at 12 months two additional clusters were determined (Lethargy and Gastrointestinal/digestive symptoms). Future studies should include physical, psychological, and cognitive symptoms. Further investigation of the identified symptom clusters is required for validation, to examine causality, and potentially to suggest interventions for symptom management.
Future studies should use longitudinal analyses to investigate change in symptom clusters, the influence of patient related factors, and the impact on outcomes (e.g., daily functioning) over time.
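
A small numeric illustration (hypothetical numbers, not the study's data) of the structure-coefficient point made above: with an oblique rotation, the structure matrix is the pattern (loading) matrix post-multiplied by the factor correlation matrix, S = P Phi, so a symptom can show a meaningful correlation with more than one cluster even when its pattern values look modest.

```python
import numpy as np

# Pattern coefficients for 4 symptoms on 2 correlated factors (illustrative values)
P = np.array([[0.70, 0.05],
              [0.65, 0.10],
              [0.05, 0.72],
              [0.30, 0.40]])

# Factor correlation matrix from the oblique rotation (illustrative)
Phi = np.array([[1.00, 0.45],
                [0.45, 1.00]])

S = P @ Phi   # structure coefficients: symptom-factor correlations
print(np.round(S, 2))
# The last symptom loads modestly on both factors, and its structure coefficients
# (0.48 and 0.54) show the cross-cluster association that pattern values alone hide.
```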

Relevance:

100.00%

Publisher:

Abstract:

Despite recent developments in fixed-film combined biological nutrient removal (BNR) technology, fixed-film systems (i.e., biofilters) are still at the early stages of development and their application has been limited to a few laboratory-scale experiments. Achieving enhanced biological phosphorus removal in fixed-film systems requires exposing the micro-organisms and the waste stream to alternating anaerobic/aerobic or anaerobic/anoxic conditions in cycles. The concept of cycle duration (CD) as a process control parameter is unique to fixed-film BNR systems, has not been previously investigated, and can be used to optimise the performance of such systems. The CD refers to the elapsed time before the biomass is re-exposed to the same environmental conditions in cycles. Fixed-film systems offer many advantages over suspended growth systems such as reduced operating costs, simplicity of operation, absence of sludge recycling problems, and compactness. The control of nutrient discharges to water bodies improves water quality and fish production, and allows water reuse. The main objective of this study was to develop a fundamental understanding of the effect of CD on the transformations of nutrients in fixed-film biofilter systems subjected to alternating aeration/no-aeration cycles. A fixed-film biofilter system consisting of three up-flow biofilters connected in series was developed and tested. The first and third biofilters were operated in a cyclic mode in which the biomass was subjected to aeration/no-aeration cycles. The influent wastewater was simulated aquaculture wastewater whose composition was based on actual water quality parameters of aquaculture wastewater from a prawn grow-out facility. The influent contained 8.5-9.3 mg/L ammonia-N, 8.5-8.7 mg/L phosphate-P, and 45-50 mg/L acetate. Two independent studies were conducted at two biofiltration rates to evaluate and confirm the effect of CD on nutrient transformations in the biofilter system for application in aquaculture. A third study was conducted to enhance denitrification in the system using an external carbon source at a rate varying from 0 to 24 mL/min. The CD was varied in the range of 0.25-120 hours for the first two studies and fixed at 12 hours for the third study. This study identified the CD as an important process control parameter that can be used to optimise the performance of full-scale fixed-film systems for BNR, which represents a novel contribution in this field of research. The CD resulted in environmental conditions that inhibited or enhanced nutrient transformations. The effect of CD on BNR in fixed-film systems in terms of phosphorus biomass saturation and depletion has been established. Short CDs did not permit the establishment of anaerobic activity in the un-aerated biofilter and, thus, inhibited phosphorus release. Long CDs resulted in extended anaerobic activity and, thus, resulted in active phosphorus release. Long CDs, however, resulted in depleting the biomass phosphorus reservoir in the releasing biofilter and saturating the biomass phosphorus reservoir in the up-taking biofilter in the cycle. This phosphorus biomass saturation/depletion phenomenon imposes a practical limit on how short or long the CD can be. The length of the CD should be set just before saturation or depletion occurs; for the system and biofiltration rates tested, the optimal CD was 12 hours.
The system achieved limited net phosphorus removal due to the limited sludge wasting and lack of external carbon supply during phosphorus uptake. The phosphorus saturation and depletion reflected the need to extract phosphorus from the phosphorus-rich micro-organisms, for example, through back-washing. The major challenges of achieving phosphorus removal in the system included: (1) overcoming the deterioration in the performance of the system during the transition period following the start of each new cycle; and (2) wasting excess phosphorus-saturated biomass following the aeration cycle. Denitrification occurred in poorly aerated sections of the third biofilter and generally declined as the CD increased and as time progressed in the individual cycle. Denitrification and phosphorus uptake were supplied by an internal organic carbon source, and the addition of an external carbon source (acetate) to the third biofilter improved the denitrification efficiency of the system from 18.4% without supplemental carbon to 88.7% when the carbon dose reached 24 mL/min. The removal of TOC and nitrification improved as the CD increased, as a result of the reduction in the frequency of transition periods between the cycles. A conceptual design of an effective fixed-film BNR biofilter system for the treatment of the influent simulated aquaculture wastewater was proposed based on the findings of the study.

Relevance:

100.00%

Publisher:

Abstract:

This thesis applies Monte Carlo techniques to the study of X-ray absorptiometric methods of bone mineral measurement. These studies seek to obtain information that can be used in efforts to improve the accuracy of the bone mineral measurements. A Monte Carlo computer code for X-ray photon transport at diagnostic energies has been developed from first principles. This development was undertaken as there was no readily available code which included electron binding energy corrections for incoherent scattering, and one of the objectives of the project was to study the effects of inclusion of these corrections in Monte Carlo models. The code includes the main Monte Carlo program plus utilities for dealing with input data. A number of geometrical subroutines which can be used to construct complex geometries have also been written. The accuracy of the Monte Carlo code has been evaluated against the predictions of theory and the results of experiments. The results show a high correlation with theoretical predictions. In comparisons of model results with those of direct experimental measurements, agreement to within the model and experimental variances is obtained. The code is an accurate and valid modelling tool. A study of the significance of inclusion of electron binding energy corrections for incoherent scatter in the Monte Carlo code has been made. The results show this significance to be very dependent upon the type of application. The most significant effect is a reduction of low angle scatter flux for high atomic number scatterers. To effectively apply the Monte Carlo code to the study of bone mineral density measurement by photon absorptiometry, the results must be considered in the context of a theoretical framework for the extraction of energy dependent information from planar X-ray beams. Such a theoretical framework is developed and the two-dimensional nature of tissue decomposition based on attenuation measurements alone is explained. This theoretical framework forms the basis for analytical models of bone mineral measurement by dual energy X-ray photon absorptiometry techniques. Monte Carlo models of dual energy X-ray absorptiometry (DEXA) have been established. These models have been used to study the contribution of scattered radiation to the measurements. It has been demonstrated that the measurement geometry has a significant effect upon the scatter contribution to the detected signal. For the geometry of the models studied in this work the scatter has no significant effect upon the results of the measurements. The model has also been used to study a proposed technique which involves dual energy X-ray transmission measurements plus a linear measurement of the distance along the ray path. This is designated as the DPA(+) technique. The addition of the linear measurement enables the tissue decomposition to be extended to three components. Bone mineral, fat and lean soft tissue are the components considered here. The results of the model demonstrate that the measurement of bone mineral using this technique is stable over a wide range of soft tissue compositions and hence would indicate the potential to overcome a major problem of the two component DEXA technique. However, the results also show that the accuracy of the DPA(+) technique is highly dependent upon the composition of the non-mineral components of bone and has poorer precision (approximately twice the coefficient of variation) than the standard DEXA measurements. These factors may limit the usefulness of the technique.
These studies illustrate the value of Monte Carlo computer modelling of quantitative X-ray measurement techniques. The Monte Carlo models of bone densitometry measurement have: (1) demonstrated the significant effects of the measurement geometry upon the contribution of scattered radiation to the measurements; (2) demonstrated that the statistical precision of the proposed DPA(+) three-tissue-component technique is poorer than that of the standard DEXA two-tissue-component technique; (3) demonstrated that the proposed DPA(+) technique has difficulty providing accurate simultaneous measurement of body composition in terms of a three-component model of fat, lean soft tissue and bone mineral; and (4) provided a knowledge base for input to decisions about development (or otherwise) of a physical prototype DPA(+) imaging system. The Monte Carlo computer code, data, utilities and associated models represent a set of significant, accurate and valid modelling tools for quantitative studies of physical problems in the fields of diagnostic radiology and radiography.
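
A hedged sketch of the two-component decomposition that underlies standard DEXA, using made-up mass attenuation coefficients: two transmission measurements at two energies give two linear equations in the areal densities of bone mineral and soft tissue. The DPA(+) idea described above adds a path-length measurement, giving a third equation and a three-component (bone, fat, lean) solution; the simpler two-component solve is shown here.

```python
import numpy as np

# Hypothetical mass attenuation coefficients (cm^2/g); rows = energy, cols = [bone, soft]
mu = np.array([[0.60, 0.25],    # low energy
               [0.30, 0.20]])   # high energy

# True areal densities (g/cm^2) used to synthesise a "measurement"
x_true = np.array([1.2, 18.0])              # [bone mineral, soft tissue]
log_attenuation = mu @ x_true               # ln(I0/I) at each energy

# Decomposition: solve the 2x2 linear system for the areal densities
x_est = np.linalg.solve(mu, log_attenuation)
print("estimated areal densities [bone, soft]:", np.round(x_est, 3))
```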

Relevance:

100.00%

Publisher:

Abstract:

Biomass represents an abundant and relatively low cost carbon resource that can be utilized to produce platform chemicals such as levulinic acid. Current processing technology limits the cost-effective production of levulinic acid in commercial quantities from biomass. The key to improving the yield and efficiency of levulinic acid production from biomass lies in the ability to optimize and isolate the intermediate products at each step of the reaction pathway and reduce re-polymerization and side reactions. New technologies (including the use of microwave irradiation and ionic liquids) and the development of highly selective catalysts would provide the necessary step change for the optimization of key reactions. A processing environment that allows the use of biphasic systems and/or continuous extraction of products would increase reaction rates, yields and product quality. This review outlines the chemistry of levulinic acid synthesis and discusses current and potential technologies for producing levulinic acid from lignocellulosics.

Relevance:

100.00%

Publisher:

Abstract:

The size of rat-race and branch-line couplers can be reduced by using periodic loading or artificial transmission lines. The objective of this work is to extend the idea of size reduction through periodic loading to coupled-line 90° hybrids. A procedure for the extraction of the characteristic parameters of a coupled-line 4-port from a single set of S-parameters is described. This method can be employed in the design of coupled artificial transmission line couplers of arbitrary geometry. The procedure is illustrated through the design of a broadside-coupled stripline hybrid, periodically loaded with stubs. Measured results for a prototype coupler confirm the validity of the theory.
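
For orientation, the following is a minimal sketch of the standard single-transmission-line version of this kind of extraction (the paper's coupled-line 4-port procedure is more involved and is not reproduced here): convert one set of 2-port S-parameters to ABCD parameters, then recover the characteristic impedance and electrical length. The S-parameter values below are synthetic.

```python
import numpy as np

def extract_line_params(s11, s21, s12, s22, z0=50.0):
    """Characteristic impedance Zc and complex electrical length gamma*l of a
    reciprocal, symmetric line, from its 2-port S-parameters (reference z0)."""
    a = ((1 + s11) * (1 - s22) + s12 * s21) / (2 * s21)
    b = z0 * ((1 + s11) * (1 + s22) - s12 * s21) / (2 * s21)
    c = ((1 - s11) * (1 - s22) - s12 * s21) / (z0 * 2 * s21)
    zc = np.sqrt(b / c)            # characteristic impedance
    gamma_l = np.arccosh(a)        # propagation constant times length
    return zc, gamma_l

# Synthetic example: a lossless 70.7-ohm quarter-wave line in a 50-ohm system
zc_true, theta = 70.7, np.pi / 2
A = np.cos(theta)
B = 1j * zc_true * np.sin(theta)
C = 1j * np.sin(theta) / zc_true
denom = A + B / 50 + C * 50 + A            # A + B/Z0 + C*Z0 + D, with D = A
s11 = (A + B / 50 - C * 50 - A) / denom
s21 = 2 / denom                            # reciprocal: AD - BC = 1
print(extract_line_params(s11, s21, s21, s11))   # ~ (70.7, j*pi/2)
```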

Relevance:

100.00%

Publisher:

Abstract:

This study examines whether, in the presentation of financial information, digital formats address the concern over users’ functional fixation. The accounting literature indicates that the presentation of financial information either within the financial statements or in the notes to the financial statements often creates functional fixation where users of financial statements fail to adjust for differences in accounting policy. This leads users to judge what would otherwise be identical financial situations as being different due to the different accounting policies and methods adopted. It has been suggested that the use of digital formats in presenting financial reports may overcome functional fixation. Using an experimental design involving accountants in public practice, the results indicate that the use of digital formats to present financial reports does not fully overcome the issue of functional fixation in the processing of financial information. Although the participants were able to identify and extract relevant information, irrespective of whether the information was presented within the financial statements or in the notes to the accounts, the evidence indicates that functional fixation remained when the participants made final decisions based on available information. This suggests that functional fixation may not be caused by access to or extraction of information but by the level of perceived significance based on where the information is reported in the financial statements. In general, the results indicate that current technology may not be able to fully reduce functional fixation in the evaluation of financial information prepared in accordance with different accounting policies and methods.

Relevance:

100.00%

Publisher:

Abstract:

A new method for the detection of abnormal vehicle trajectories is proposed. It couples optical flow extraction of vehicle velocities with a neural network classifier. Abnormal trajectories are indicative of drunk or sleepy drivers. A single feature of the vehicle, e.g., a tail light, is isolated and the optical flow is computed only around this feature rather than at each pixel in the image.
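
A hedged sketch of the core idea (not the authors' system): track a single vehicle feature, such as a tail light, between frames with sparse Lucas-Kanade optical flow and accumulate its image-plane velocities as the trajectory sequence that a classifier would later label normal or abnormal. The video path and the initial feature location are hypothetical.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("traffic.mp4")
ok, prev = cap.read()
if not ok:
    raise SystemExit("could not read video")
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

# Initial location of the isolated feature (tail light), picked by hand here
p0 = np.array([[[320.0, 240.0]]], dtype=np.float32)
velocities = []

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Sparse optical flow only around the tracked feature, not over the whole image
    p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, p0, None,
                                             winSize=(21, 21), maxLevel=3)
    if status[0][0] == 1:
        velocities.append((p1 - p0).ravel())   # per-frame displacement (vx, vy)
        p0 = p1
    prev_gray = gray

trajectory = np.array(velocities)   # input sequence for the neural network classifier
print(trajectory.shape)
```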

Relevance:

100.00%

Publisher:

Abstract:

Current approaches to the regulation of coal mining activities in Australia have facilitated the extraction of substantial amounts of coal and coal seam gas. The regulation of coal mining activities must now achieve the reduction or mitigation of greenhouse gas emissions in order to address the challenge of climate change and achieve ecologically sustainable development. Several legislative mechanisms currently exist which appear to offer the means to bring about the reduction or mitigation of greenhouse gas emissions from coal mining activities, yet Australia’s emissions from coal mining continue to rise. This article critiques these existing legislative mechanisms and presents recommendations for reform.

Relevance:

100.00%

Publisher:

Abstract:

Efficient management of domestic wastewater is a primary requirement for human well-being. Failure to adequately address issues of wastewater collection, treatment and disposal can lead to adverse public health and environmental impacts. The increasing spread of urbanisation has led to the conversion of previously rural land into urban developments and the more intensive development of semi-urban areas. However, the provision of reticulated sewerage facilities has not kept pace with this expansion in urbanisation. This has resulted in a growing dependency on onsite sewage treatment. Though considered only as a temporary measure in the past, these systems are now considered the most cost-effective option and have become a permanent feature in some urban areas. This report is the first of a series of reports to be produced and is the outcome of a research project initiated by the Brisbane City Council. The primary objective of the research undertaken was to relate the treatment performance of onsite sewage treatment systems to soil conditions at the site, with the emphasis being on septic tanks. This report consists of a ‘state of the art’ review of research undertaken in the arena of onsite sewage treatment. The evaluation of research brings together significant work undertaken locally and overseas. It focuses mainly on septic tanks in keeping with the primary objectives of the project. This report has acted as the springboard for the later field investigations and analysis undertaken as part of the project. Septic tanks continue to be used widely due to their simplicity and low cost. Generally the treatment performance of septic tanks can be highly variable due to numerous factors, but a properly designed, operated and maintained septic tank can produce effluent of satisfactory quality. The reduction of hydraulic surges from washing machines and dishwashers, regular removal of accumulated septage and the elimination of harmful chemicals are some of the practices that can improve system performance considerably. The relative advantages of multi-chamber over single-chamber septic tanks are an issue that needs to be resolved in view of the conflicting research outcomes. In recent years, aerobic wastewater treatment systems (AWTS) have been gaining in popularity. This can be mainly attributed to the desire to avoid subsurface effluent disposal, which is the main cause of septic tank failure. The use of aerobic processes for treatment of wastewater and the disinfection of effluent prior to disposal is capable of producing effluent of a quality suitable for surface disposal. However, the field performance of these systems has been disappointing. A significant number of these systems do not perform to stipulated standards and quality can be highly variable. This is primarily due to houseowner neglect or ignorance of correct operational and maintenance procedures. The other problems include greater susceptibility to shock loadings and sludge bulking. As identified in the literature, a number of design features can also contribute to this wide variation in quality. The other treatment processes in common use are the various types of filter systems. These include intermittent and recirculating sand filters. These systems too have their inherent advantages and disadvantages. Furthermore, as in the case of aerobic systems, their performance is very much dependent on individual houseowner operation and maintenance practices.
In recent years the use of biofilters has attracted research interest, particularly the use of peat. High removal rates of various wastewater pollutants have been reported in the research literature. Despite these satisfactory results, leachate from peat has been reported in various studies. This is an issue that needs further investigation, and as such biofilters can still be considered to be in the experimental stage. The use of other filter media such as absorbent plastic and bark has also been reported in the literature. The safe and hygienic disposal of treated effluent is a matter of concern in the case of onsite sewage treatment. Subsurface disposal is the most common and the only option in the case of septic tank treatment. Soil is an excellent treatment medium if suitable conditions are present. The processes of sorption, filtration and oxidation can remove the various wastewater pollutants. The subsurface characteristics of the disposal area are among the most important parameters governing process performance. Therefore it is important that the soil and topographic conditions are taken into consideration in the design of the soil absorption system. Seepage trenches and beds are the common systems in use. Seepage pits or chambers can be used where subsurface conditions warrant, whilst above-grade mounds have been recommended for a variety of difficult site conditions. All these systems have their inherent advantages and disadvantages and the preferable soil absorption system should be selected based on site characteristics. The use of gravel as in-fill for beds and trenches is open to question. It does not contribute to effluent treatment and has been shown to reduce the effective infiltrative surface area. This is due to physical obstruction and the migration of fines entrained in the gravel into the soil matrix. The surface application of effluent is coming into increasing use with the advent of aerobic treatment systems. This has the advantage that treatment is undertaken in the upper soil horizons, which are chemically and biologically the most effective in effluent renovation. Numerous research studies have demonstrated the feasibility of this practice. However, the overriding criterion is the quality of the effluent. It has to be of exceptionally good quality in order to ensure that there are no resulting public health impacts due to aerosol drift. This essentially is the main issue of concern, due to the unreliability of the effluent quality from aerobic systems. Secondly, it has also been found that most householders do not take adequate care in the operation of spray irrigation systems or in the maintenance of the irrigation area. Under these circumstances surface disposal of effluent should be approached with caution and would require appropriate householder education and stringent compliance requirements. However, despite all this, the efficiency with which the process is undertaken will ultimately rest with the individual householder and this is where most concern rests. Greywater, too, requires similar consideration. Surface irrigation of greywater is currently being permitted in a number of local authority jurisdictions in Queensland. Considering the fact that greywater constitutes the largest fraction of the total wastewater generated in a household, it could be considered to be a potential resource. Unfortunately in most circumstances the only pretreatment that is required to be undertaken prior to reuse is the removal of oil and grease.
This is an issue of concern, as greywater can be considered a weak to medium-strength sewage: it contains primary pollutants such as BOD material and nutrients and may also include microbial contamination. Therefore its use for surface irrigation can pose a potential health risk. This is further compounded by the fact that most householders are unaware of the potential adverse impacts of indiscriminate greywater reuse. As in the case of blackwater effluent reuse, there have been suggestions that greywater should also be subjected to stringent guidelines. Under these circumstances the surface application of any wastewater requires careful consideration. The other option available for the disposal of effluent is the use of evaporation systems. The use of evapotranspiration systems has been covered in this report. Research has shown that these systems are susceptible to a number of factors and in particular to climatic conditions. As such their applicability is location-specific. Also, the design of systems based solely on evapotranspiration is questionable. In order to ensure more reliability, the systems should be designed to include soil absorption. The successful use of these systems for intermittent usage has been noted in the literature. Taking into consideration the issues discussed above, subsurface disposal of effluent is the safest under most conditions. This is provided the facility has been designed to accommodate site conditions. The main problem associated with subsurface disposal is the formation of a clogging mat on the infiltrative surfaces. Due to the formation of the clogging mat, the capacity of the soil to handle effluent is no longer governed by the soil’s hydraulic conductivity as measured by the percolation test, but rather by the infiltration rate through the clogged zone. The characteristics of the clogging mat have been shown to be influenced by various soil and effluent characteristics. Secondly, the mechanisms of clogging mat formation have been found to be influenced by various physical, chemical and biological processes. Biological clogging is the most common process taking place and occurs due to bacterial growth or its by-products reducing the soil pore diameters. Biological clogging is generally associated with anaerobic conditions. The formation of the clogging mat provides significant benefits. It acts as an efficient filter for the removal of microorganisms. Also, as the clogging mat increases the hydraulic impedance to flow, unsaturated flow conditions will occur below the mat. This permits greater contact between effluent and soil particles, thereby enhancing the purification process. This is particularly important in the case of highly permeable soils. However, the adverse impacts of the clogging mat formation cannot be ignored as they can lead to significant reduction in the infiltration rate. This in fact is the most common cause of soil absorption system failure. As the formation of the clogging mat is inevitable, it is important to ensure that it does not impede effluent infiltration beyond tolerable limits. Various strategies have been investigated to either control clogging mat formation or to remediate its severity. Intermittent dosing of effluent is one such strategy that has attracted considerable attention. Research conclusions with regard to short-duration time intervals are contradictory.
It has been claimed that the intermittent rest periods would result in the aerobic decomposition of the clogging mat, leading to a subsequent increase in the infiltration rate. Contrary to this, it has also been claimed that short-duration rest periods are insufficient to completely decompose the clogging mat, and the intermediate by-products that form as a result of aerobic processes would in fact lead to even more severe clogging. It has been further recommended that the rest periods should be much longer and should be in the range of about six months. This entails the provision of a second and alternating seepage bed. The other concepts that have been investigated are the design of the bed to meet the equilibrium infiltration rate that would eventuate after clogging mat formation; improved geometry such as the use of seepage trenches instead of beds; serial instead of parallel effluent distribution; and low-pressure dosing of effluent. The use of physical measures such as oxidation with hydrogen peroxide and replacement of the infiltration surface has been shown to be only of short-term benefit. Another issue of importance is the degree of pretreatment that should be provided to the effluent prior to subsurface application and the influence exerted by pollutant loadings on the clogging mat formation. Laboratory studies have shown that the total mass loadings of BOD and suspended solids are important factors in the formation of the clogging mat. It has also been found that the nature of the suspended solids is an important factor. The finer particles from extended aeration systems, when compared to those from septic tanks, will penetrate deeper into the soil and hence will ultimately cause a denser clogging mat. However, the importance of improved pretreatment in clogging mat formation may need to be qualified in view of other research studies. It has also been shown that effluent quality may be a factor in the case of highly permeable soils, but this may not be the case with fine-structured soils. The ultimate test of onsite sewage treatment system efficiency rests with the final disposal of effluent. The implications of system failure, as evidenced by the surface ponding of effluent or the seepage of contaminants into the groundwater, can be very serious, as they can lead to environmental and public health impacts. Significant microbial contamination of surface and groundwater has been attributed to septic tank effluent. There are a number of documented instances of septic tank-related waterborne disease outbreaks affecting large numbers of people. In a recent incident, the local authority, and not the individual septic tank owners, was found liable for an outbreak of viral hepatitis A, as no action had been taken to remedy septic tank failure. This illustrates the responsibility placed on local authorities in terms of ensuring the proper operation of onsite sewage treatment systems. Even a properly functioning soil absorption system is only capable of removing phosphorus and microorganisms. The nitrogen remaining after plant uptake will not be retained in the soil column, but will instead gradually seep into the groundwater as nitrate. Conditions for nitrogen removal by denitrification are not generally present in a soil absorption bed. Dilution by groundwater is the only treatment available for reducing the nitrogen concentration to specified levels. Therefore, based on subsurface conditions, this essentially entails a maximum allowable concentration of septic tanks in a given area.
Unfortunately, nitrogen is not the only wastewater pollutant of concern. Relatively long survival times and travel distances have been noted for microorganisms originating from soil absorption systems. This is likely to happen if saturated conditions persist under the soil absorption bed or due to surface runoff of effluent as a result of system failure. Soils have a finite capacity for the removal of phosphorus. Once this capacity is exceeded, phosphorus too will seep into the groundwater. The relatively high mobility of phosphorus in sandy soils has been noted in the literature. These issues have serious implications for the design and siting of soil absorption systems. It is important not only to ensure that the system design is based on subsurface conditions, but also to recognise that the density of these systems in a given area is a critical issue. This essentially involves the adoption of a land capability approach to determine the limitations of an individual site for onsite sewage disposal. The most limiting factor at a particular site would determine the overall capability classification for that site, which would also dictate the type of effluent disposal method to be adopted.

Relevance:

100.00%

Publisher:

Abstract:

For interactive systems, recognition, reproduction, and generalization of observed motion data are crucial for successful interaction. In this paper, we present a novel method for analysis of motion data that we refer to as K-OMM-trees. K-OMM-trees combine Ordered Means Models (OMMs), a model-based machine learning approach for time series, with a hierarchical analysis technique for very large data sets, the K-tree algorithm. The proposed K-OMM-trees enable unsupervised prototype extraction of motion time series data with hierarchical data representation. After introducing the algorithmic details, we apply the proposed method to a gesture data set that includes substantial inter-class variations. Results from our studies show that K-OMM-trees are able to substantially increase the recognition performance and to learn an inherent data hierarchy with meaningful gesture abstractions.
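
A loose, hypothetical approximation of the hierarchical prototype idea only: OMMs are not a standard library component, so plain k-means on flattened, fixed-length time series stands in for the model-based OMM layer, and a small recursive K-way split stands in for the K-tree hierarchy. The synthetic "gesture" data and all parameters are assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

def prototype_tree(series: np.ndarray, k: int = 3, depth: int = 2) -> dict:
    """Recursively cluster (n_series, length) data; each node stores a prototype."""
    node = {"prototype": series.mean(axis=0), "children": []}
    if depth == 0 or len(series) < k:
        return node
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(series)
    for c in range(k):
        members = series[labels == c]
        if len(members):
            node["children"].append(prototype_tree(members, k, depth - 1))
    return node

# Synthetic "gesture" trajectories: 90 sequences of length 50 (one channel each)
rng = np.random.default_rng(0)
gestures = np.concatenate([
    np.sin(np.linspace(0, 4 * np.pi, 50)) + 0.1 * rng.standard_normal((30, 50)),
    np.cos(np.linspace(0, 2 * np.pi, 50)) + 0.1 * rng.standard_normal((30, 50)),
    np.linspace(0, 1, 50) + 0.1 * rng.standard_normal((30, 50)),
])
tree = prototype_tree(gestures, k=3, depth=2)
print(len(tree["children"]), "top-level gesture prototypes")
```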