81 results for Area Under Curve


Relevance:

90.00%

Abstract:

The bentiromide test was evaluated using plasma p-aminobenzoic acid as an indirect test of pancreatic insufficiency in young children between 2 months and 4 years of age. To determine the optimal test method, the following were examined: (a) the best dose of bentiromide (15 mg/kg or 30 mg/kg); (b) the optimal sampling time for plasma p-aminobenzoic acid; and (c) the effect of coadministration of a liquid meal. Sixty-nine children (1.6 ± 1.0 years) were studied, including 34 controls with normal fat absorption and 35 patients (34 with cystic fibrosis) with fat maldigestion due to pancreatic insufficiency. Control and pancreatic-insufficient subjects were studied in three age-matched groups: (a) low-dose bentiromide (15 mg/kg) with clear fluids; (b) high-dose bentiromide (30 mg/kg) with clear fluids; and (c) high-dose bentiromide with a liquid meal. Plasma p-aminobenzoic acid was determined at 0, 30, 60, and 90 minutes, then hourly for 6 hours. The dose effect of bentiromide with clear liquids was evaluated. High-dose bentiromide best discriminated between control and pancreatic-insufficient subjects, due to a higher peak plasma p-aminobenzoic acid level in controls, but poor sensitivity and specificity remained. High-dose bentiromide with a liquid meal produced a delayed increase in plasma p-aminobenzoic acid in the control subjects, probably caused by retarded gastric emptying. However, in the pancreatic-insufficient subjects, use of a liquid meal resulted in significantly lower plasma p-aminobenzoic acid levels at all time points; plasma p-aminobenzoic acid at 2 and 3 hours completely discriminated between control and pancreatic-insufficient patients. Evaluation of the data by area under the time-concentration curve failed to improve test results. In conclusion, the bentiromide test is a simple, clinically useful means of detecting pancreatic insufficiency in young children, but a higher dose administered with a liquid meal is recommended.
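
For readers unfamiliar with the metric used above, the area under a time-concentration curve is conventionally computed with the trapezoidal rule. A minimal Python sketch using the protocol's sampling schedule but entirely invented concentration values (none of these numbers come from the study):

```python
import numpy as np

# Sampling schedule from the protocol: 0, 30, 60, 90 min, then hourly to 6 h.
t = np.array([0, 30, 60, 90, 120, 180, 240, 300, 360], dtype=float)  # minutes
# Hypothetical plasma p-aminobenzoic acid concentrations (illustrative only).
c = np.array([0.0, 1.2, 2.8, 3.5, 3.1, 2.4, 1.6, 1.0, 0.6])  # µg/mL

# Trapezoidal rule: sum of interval width times the mean of the two endpoints.
auc = np.sum(np.diff(t) * (c[:-1] + c[1:]) / 2)
print(f"AUC(0-360 min) = {auc:.0f} µg·min/mL")
```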

Relevance:

90.00%

Abstract:

OBJECTIVE This study determined if deficits in corneal nerve fiber length (CNFL) assessed using corneal confocal microscopy (CCM) can predict future onset of diabetic peripheral neuropathy (DPN). RESEARCH DESIGN AND METHODS CNFL and a range of other baseline measures were compared between 90 nonneuropathic patients with type 1 diabetes who did or did not develop DPN after 4 years. Receiver operating characteristic (ROC) curve analysis was used to determine the capability of single and combined measures of neuropathy to predict DPN. RESULTS DPN developed in 16 participants (18%) after 4 years. Factors predictive of 4-year incident DPN were lower CNFL (P = 0.041); longer duration of diabetes (P = 0.002); higher triglycerides (P = 0.023); retinopathy (higher on the Early Treatment Diabetic Retinopathy Study scale) (P = 0.008); nephropathy (higher albumin-to-creatinine ratio) (P = 0.001); higher neuropathy disability score (P = 0.037); lower cold sensation (P = 0.001) and cold pain (P = 0.027) thresholds; higher warm sensation (P = 0.008), warm pain (P = 0.024), and vibration (P = 0.003) thresholds; impaired monofilament response (P = 0.003); and slower peroneal (P = 0.013) and sural (P = 0.002) nerve conduction velocity. CCM could predict 4-year incident DPN with 63% sensitivity and 74% specificity for a CNFL threshold cutoff of 14.1 mm/mm² (area under ROC curve = 0.66, P = 0.041). Combining neuropathy measures did not improve predictive capability. CONCLUSIONS DPN can be predicted by various demographic, metabolic, and conventional neuropathy measures. The ability of CCM to predict DPN broadens the already impressive diagnostic capabilities of this novel ophthalmic marker.
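
The ROC analysis described here pairs each candidate CNFL cutoff with a sensitivity/specificity trade-off. A minimal sketch of that workflow with scikit-learn, on simulated data (the CNFL values and the Youden-index cutoff rule below are illustrative assumptions, not the study's data or method):

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
# Hypothetical CNFL values (mm/mm²): lower in those who later develop DPN.
cnfl_no_dpn = rng.normal(16.0, 3.0, 74)
cnfl_dpn = rng.normal(13.0, 3.0, 16)
y = np.r_[np.zeros(74), np.ones(16)]   # 1 = incident DPN
score = -np.r_[cnfl_no_dpn, cnfl_dpn]  # negate: lower CNFL = higher risk

auc = roc_auc_score(y, score)
fpr, tpr, thr = roc_curve(y, score)
j = tpr - fpr                          # Youden's J at each threshold
best = j.argmax()
print(f"AUC = {auc:.2f}; cutoff = {-thr[best]:.1f} mm/mm², "
      f"sensitivity = {tpr[best]:.0%}, specificity = {1 - fpr[best]:.0%}")
```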

Relevance:

90.00%

Abstract:

Species distribution modelling (SDM) typically analyses species’ presence together with some form of absence information. Ideally, absences comprise observations or are inferred from comprehensive sampling. When such information is not available, pseudo-absences are often generated from the background locations within the study region of interest containing the presences, or else absence is implied through the comparison of presences to the whole study region, e.g. as is the case in Maximum Entropy (MaxEnt) or Poisson point process modelling. However, the choice of which absence information to include can be both challenging and highly influential on SDM predictions (e.g. Oksanen and Minchin, 2002). In practice, the use of pseudo- or implied absences often leads to an imbalance where absences far outnumber presences. This leaves analysis highly susceptible to ‘naughty noughts’: absences that occur beyond the envelope of the species, which can exert strong influence on the model and its predictions (Austin and Meyers, 1996). Also known as ‘excess zeros’, naughty noughts can be estimated via an overall proportion in simple hurdle or mixture models (Martin et al., 2005). However, absences, especially those that occur beyond the species envelope, can often be more diverse than presences. Here we consider an extension to excess zero models. The two-staged approach first exploits the compartmentalisation provided by classification trees (CTs) (as in O’Leary, 2008) to identify multiple sources of naughty noughts and simultaneously delineate several species envelopes. SDMs can then be fit separately within each envelope, and for this stage we examine both CTs (as in Falk et al., 2014) and the popular MaxEnt (Elith et al., 2006). We introduce a wider range of model performance measures to improve the treatment of naughty noughts in SDM. We retain an overall measure of model performance, the area under the curve (AUC) of the receiver operating characteristic (ROC) curve, but focus on its constituent measures of false negative rate (FNR) and false positive rate (FPR), and how these relate to the threshold in the predicted probability of presence that delimits predicted presence from absence. We also propose error rates more relevant to users of predictions: the false omission rate (FOR), the chance that a predicted absence corresponds to (and hence wastes) an observed presence, and the false discovery rate (FDR), reflecting those predicted (or potential) presences that correspond to absence. A high FDR may be desirable since it could help target future search efforts, whereas zero or low FOR is desirable since it indicates none of the (often valuable) presences have been ignored in the SDM. For illustration, we chose Bradypus variegatus, a species that has previously been published as an exemplar species for MaxEnt, proposed by Phillips et al. (2006). We used CTs to increasingly refine the species envelope, starting with the whole study region (E0) and eliminating more and more potential naughty noughts (E1–E3). When combined with an SDM fit within the species envelope, the best CT SDM had similar AUC and FPR to the best MaxEnt SDM, but otherwise performed better. The FNR and FOR were greatly reduced, suggesting that CTs handle absences better. Interestingly, MaxEnt predictions showed low discriminatory performance, with the most common predicted probability of presence falling in the same range (0.00–0.20) for both true absences and presences.
In summary, this example shows that SDMs can be improved by introducing an initial hurdle to identify naughty noughts and partition the envelope before applying SDMs. This improvement was barely detectable via AUC and FPR, yet clearly visible in FOR, FNR, and the comparison of the predicted probability of presence distributions for presences and absences.
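
The error rates proposed in this abstract all derive from the standard 2×2 confusion matrix. A small illustrative Python helper (the function name and demo data are my own, not the authors'):

```python
import numpy as np

def sdm_error_rates(y_true, y_pred):
    """Confusion-matrix error rates discussed above; 1 = presence, 0 = absence."""
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    return {
        "FNR": fn / (fn + tp),  # observed presences predicted absent
        "FPR": fp / (fp + tn),  # observed absences predicted present
        "FOR": fn / (fn + tn),  # predicted absences that waste a presence
        "FDR": fp / (fp + tp),  # predicted presences that are actually absent
    }

# Toy example: 8 sites, observed vs. predicted presence.
y_true = np.array([1, 1, 0, 0, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 0, 1, 0])
print(sdm_error_rates(y_true, y_pred))
```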

Relevance:

90.00%

Abstract:

OBJECTIVE Quantitative assessment of small fiber damage is key to the early diagnosis and assessment of progression or regression of diabetic sensorimotor polyneuropathy (DSPN). Intraepidermal nerve fiber density (IENFD) is the current gold standard, but corneal confocal microscopy (CCM), an in vivo ophthalmic imaging modality, has the potential to be a noninvasive and objective imaging biomarker for identifying small fiber damage. The purpose of this study was to determine the diagnostic performance of CCM and IENFD, using the current guidelines as the reference standard. RESEARCH DESIGN AND METHODS Eighty-nine subjects (26 control subjects and 63 patients with type 1 diabetes), with and without DSPN, underwent a detailed assessment of neuropathy, including CCM and skin biopsy. RESULTS Manual and automated corneal nerve fiber density (CNFD) (P < 0.0001), branch density (CNBD) (P < 0.0001) and length (CNFL) (P < 0.0001), and IENFD (P < 0.001) were significantly reduced in patients with diabetes and DSPN compared with control subjects. The area under the receiver operating characteristic curve for identifying DSPN was 0.82 for manual CNFD, 0.80 for automated CNFD, and 0.66 for IENFD, and these did not differ significantly (P = 0.14). CONCLUSIONS This study shows comparable diagnostic efficiency between CCM and IENFD, providing further support for the clinical utility of CCM as a surrogate end point for DSPN.
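
Comparing AUCs of two markers measured on the same subjects calls for a paired test; the abstract does not state which was used, but a paired bootstrap is one common approach. A rough sketch on simulated data (all values and the method choice are assumptions):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n = 89
dspn = rng.integers(0, 2, n)               # hypothetical DSPN labels
cnfd = rng.normal(30 - 6 * dspn, 7)        # hypothetical CCM measure
ienfd = rng.normal(9 - 2 * dspn, 4)        # hypothetical fibers/mm

# Paired bootstrap of the AUC difference (both markers on the same subjects).
diffs = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    if dspn[idx].min() == dspn[idx].max():
        continue                           # resample must contain both classes
    diffs.append(roc_auc_score(dspn[idx], -cnfd[idx])
                 - roc_auc_score(dspn[idx], -ienfd[idx]))
lo, hi = np.percentile(diffs, [2.5, 97.5])
print(f"AUC difference 95% CI: [{lo:.2f}, {hi:.2f}]")
```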

Relevance:

90.00%

Abstract:

PURPOSE: In vivo corneal confocal microscopy (CCM) is increasingly used as a surrogate endpoint in studies of diabetic polyneuropathy (DPN). However, it is not clear whether imaging the central cornea provides optimal diagnostic utility for DPN. Therefore, we compared nerve morphology in the central cornea and the inferior whorl, a more distal and densely innervated area located inferior and nasal to the central cornea. METHODS: A total of 53 subjects with type 1/type 2 diabetes and 15 age-matched control subjects underwent detailed assessment of neuropathic symptoms (NPS), deficits (neuropathy disability score [NDS]), quantitative sensory testing (vibration perception threshold [VPT], cold and warm threshold [CT/WT], and cold- and heat-induced pain [CIP/HIP]), and electrophysiology (sural and peroneal nerve conduction velocity [SSNCV/PMNCV], and sural and peroneal nerve amplitude [SSNA/PMNA]) to diagnose patients with (DPN+) and without (DPN-) neuropathy. Corneal nerve fiber density (CNFD) and length (CNFL) in the central cornea, and inferior whorl length (IWL), were quantified. RESULTS: Comparing control subjects to DPN- and DPN+ patients, there was a significant increase in NDS (0 vs. 2.6 ± 2.3 vs. 3.3 ± 2.7, P < 0.01), VPT (V; 5.4 ± 3.0 vs. 10.6 ± 10.3 vs. 17.7 ± 11.8, P < 0.01), and WT (°C; 37.7 ± 3.5 vs. 39.1 ± 5.1 vs. 41.7 ± 4.7, P < 0.05), and a significant decrease in SSNCV (m/s; 50.2 ± 5.4 vs. 48.4 ± 5.0 vs. 39.5 ± 10.6, P < 0.05), CNFD (fibers/mm²; 37.8 ± 4.9 vs. 29.7 ± 7.7 vs. 27.1 ± 9.9, P < 0.01), CNFL (mm/mm²; 27.5 ± 3.6 vs. 24.4 ± 7.8 vs. 20.7 ± 7.1, P < 0.01), and IWL (mm/mm²; 35.1 ± 6.5 vs. 26.2 ± 10.5 vs. 23.6 ± 11.4, P < 0.05). For the diagnosis of DPN, CNFD, CNFL, and IWL achieved an area under the curve (AUC) of 0.75, 0.74, and 0.70, respectively, and a combination of IWL and CNFD achieved an AUC of 0.76. CONCLUSIONS: The parameters CNFD, CNFL, and IWL have a comparable ability to diagnose patients with DPN. However, IWL detects an abnormality even in patients without DPN. Combining IWL with CNFD may improve the diagnostic performance of CCM.
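
A combined score such as the IWL plus CNFD result reported here is typically obtained by feeding both measures into a logistic model and using the fitted probabilities as the classifier; the abstract does not specify the method, so the sketch below is an assumption, with simulated data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 68
dpn = rng.integers(0, 2, n)                # hypothetical DPN labels
cnfd = rng.normal(30 - 4 * dpn, 8)         # fibers/mm², lower with DPN
iwl = rng.normal(28 - 5 * dpn, 10)         # mm/mm², lower with DPN

X = np.column_stack([cnfd, iwl])
model = LogisticRegression().fit(X, dpn)
combined = model.predict_proba(X)[:, 1]    # combined IWL + CNFD score

for name, s in [("CNFD", -cnfd), ("IWL", -iwl), ("combined", combined)]:
    print(f"{name}: AUC = {roc_auc_score(dpn, s):.2f}")
```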

Relevance:

90.00%

Abstract:

OBJECTIVE Public health organizations recommend that preschool-aged children accumulate at least 3 h of physical activity (PA) daily. Objective monitoring using pedometers offers an opportunity to measure preschoolers’ PA and assess compliance with this recommendation. The purpose of this study was to derive step-based recommendations consistent with the 3 h PA recommendation for preschool-aged children. METHOD The study sample comprised 916 preschool-aged children, aged 3 to 6 years (mean age = 5.0 ± 0.8 years). Children were recruited from kindergartens located in Portugal between 2009 and 2013. Children wore an ActiGraph GT1M accelerometer that measured PA intensity and steps per day simultaneously over a 7-day monitoring period. Receiver operating characteristic (ROC) curve analysis was used to identify the daily step count threshold associated with meeting the daily 3 h PA recommendation. RESULTS A significant correlation was observed between minutes of total PA and steps per day (r = 0.76, p < 0.001). The optimal step count for ≥3 h of total PA was 9099 steps per day (sensitivity 90%, specificity 66%), with an area under the ROC curve of 0.86 (95% CI: 0.84 to 0.88). CONCLUSION Preschool-aged children who accumulate fewer than 9000 steps per day may be considered insufficiently active.
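
A sketch of the threshold-selection step with scikit-learn, here picking the cutoff that preserves at least 90% sensitivity to mirror the reported operating point; the data are simulated and the selection criterion is my assumption, since the abstract does not state how the 9099-step cutoff was chosen:

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(2)
# Hypothetical sample: 1 = meets the 3 h/day PA recommendation.
meets = rng.integers(0, 2, 916)
steps = rng.normal(8000 + 2500 * meets, 2000)   # invented daily step counts

fpr, tpr, thr = roc_curve(meets, steps)
print(f"AUC = {roc_auc_score(meets, steps):.2f}")

# Highest step threshold that still keeps sensitivity >= 90%, reflecting a
# preference for not misclassifying sufficiently active children.
i = np.argmax(tpr >= 0.90)
print(f"cutoff ≈ {thr[i]:.0f} steps/day, "
      f"sensitivity = {tpr[i]:.0%}, specificity = {1 - fpr[i]:.0%}")
```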

Relevance:

90.00%

Abstract:

Background Skin temperature assessment is a promising modality for early detection of diabetic foot problems, but its diagnostic value has not been studied. Our aims were to investigate the diagnostic value of different cutoff skin temperature values for detecting diabetes-related foot complications such as ulceration, infection, and Charcot foot, and to determine urgency of treatment in case of diagnosed infection or a red-hot swollen foot. Materials and Methods The plantar foot surfaces of 54 patients with diabetes visiting the outpatient foot clinic were imaged with an infrared camera. Nine patients had complications requiring immediate treatment, 25 patients had complications requiring non-immediate treatment, and 20 patients had no complications requiring treatment. Average pixel temperature was calculated for six predefined spots and for the whole foot. We calculated the area under the receiver operating characteristic curve for different cutoff skin temperature values, using clinical assessment as the reference, and determined the sensitivity and specificity for the optimal cutoff temperature value. The mean temperature difference between feet was analyzed using the Kruskal–Wallis test. Results The optimal cutoff skin temperature value for detection of diabetes-related foot complications was a 2.2°C difference between contralateral spots (sensitivity, 76%; specificity, 40%). The optimal cutoff skin temperature value for determining urgency of treatment was a 1.35°C difference between the mean temperature of the left and right foot (sensitivity, 89%; specificity, 78%). Conclusions Detection of diabetes-related foot complications based on local skin temperature assessment is hindered by low diagnostic values. The mean temperature difference between the two feet may be an adequate marker for determining urgency of treatment.
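
The decision rule implied by the 2.2°C cutoff is a simple contralateral comparison. An illustrative sketch with invented temperatures:

```python
import numpy as np

# Hypothetical mean temperatures (°C) for six predefined spots on each foot.
left = np.array([30.1, 29.8, 31.2, 30.5, 29.9, 30.8])
right = np.array([30.0, 32.4, 31.0, 30.6, 33.0, 30.9])

# Flag the foot if any contralateral spot pair differs by >= 2.2 °C.
delta = np.abs(left - right)
flagged = bool(np.any(delta >= 2.2))
print(delta.round(1), "-> complication suspected:", flagged)
```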

Relevance:

90.00%

Abstract:

BACKGROUND Polygenic risk scores comprising established susceptibility variants have been shown to be informative classifiers for several complex diseases, including prostate cancer. For prostate cancer, it is unknown whether inclusion of genetic markers that have so far not been associated with prostate cancer risk at a genome-wide significant level will improve disease prediction. METHODS We built polygenic risk scores in a large training set comprising over 25,000 individuals. Initially, 65 established prostate cancer susceptibility variants were selected. After LD pruning, additional variants were prioritized based on their association with prostate cancer. Six-fold cross-validation was performed to assess genetic risk scores and optimize the number of additional variants to be included. The final model was evaluated in an independent study population including 1,370 cases and 1,239 controls. RESULTS The polygenic risk score with 65 established susceptibility variants provided an area under the curve (AUC) of 0.67. Adding a further 68 novel variants significantly increased the AUC to 0.68 (P = 0.0012) and the net reclassification index by 0.21 (P = 8.5E-08). All novel variants were located in genomic regions established as associated with prostate cancer risk. CONCLUSIONS Inclusion of additional genetic variants from established prostate cancer susceptibility regions improves disease prediction.
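
A polygenic risk score of this kind is usually a weighted sum of risk-allele dosages, with per-variant log odds ratios as weights. A minimal illustration (all numbers invented, not the study's variants or effect sizes):

```python
import numpy as np

# Hypothetical genotype dosages (0, 1, or 2 risk alleles) for 5 subjects
# at 4 variants, with per-variant log odds ratios as weights.
dosages = np.array([[0, 1, 2, 1],
                    [1, 1, 0, 2],
                    [2, 0, 1, 1],
                    [0, 0, 0, 1],
                    [1, 2, 2, 2]], dtype=float)
log_or = np.array([0.12, 0.08, 0.15, 0.05])  # illustrative effect sizes

prs = dosages @ log_or                        # weighted risk-allele sum per subject
print(prs.round(2))
```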

Relevance:

90.00%

Abstract:

Aim To describe glycaemia in both breastfeeding and artificially feeding women with Type 1 diabetes, and the changes in glycaemia induced by suckling. Methods A blinded continuous glucose monitor was applied for up to 6 days in eight breastfeeding and eight artificially feeding women with Type 1 diabetes 2–4 months postpartum. Women recorded glucose levels, insulin dosages, oral intake and breastfeeding episodes. A standardized breakfast was consumed on 2 days. A third group (clinic controls) was identified from a historical database. Results Carbohydrate intake tended to be higher in breastfeeding than in artificially feeding women (P = 0.09), despite similar insulin requirements. Compared with breastfeeding women, the high blood glucose index and standard deviation of glucose were higher in artificially feeding women (P = 0.02 and 0.06, respectively) and in the clinic control group (P = 0.02 and 0.05, respectively). The low blood glucose index and hypoglycaemia were similar. After suckling, the low blood glucose index increased compared with before (P < 0.01) and during (P < 0.01) suckling. Hypoglycaemia (blood glucose < 4.0 mmol/l) occurred within 3 h of suckling in 14% of suckling episodes, and was associated with time from last oral intake (P = 0.04) and last rapid-acting insulin (P = 0.03). After a standardized breakfast, the area under the glucose curve was positive. In breastfeeding women, the area under the glucose curve was positive if suckling was avoided for 1 h after eating and negative if suckling occurred within 30 min of eating. Conclusions Breastfeeding women with Type 1 diabetes had similar hypoglycaemia but lower glucose variability than artificially feeding women. Suckling reduced maternal glucose levels but did not cause hypoglycaemia in most episodes.
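
The positive or negative "area under the glucose curve" here refers to the net area relative to the pre-meal baseline, which can be negative when glucose dips below it. An illustrative computation with invented CGM readings:

```python
import numpy as np

# Hypothetical CGM readings after a standardized breakfast (mmol/L).
t = np.array([0, 30, 60, 90, 120, 150, 180], dtype=float)  # minutes
glucose = np.array([6.0, 7.5, 8.2, 7.0, 5.8, 5.2, 5.5])

# Net trapezoidal area relative to the pre-meal baseline: positive if glucose
# stays above baseline overall, negative if suckling pulls it below.
excess = glucose - glucose[0]
net_auc = np.sum(np.diff(t) * (excess[:-1] + excess[1:]) / 2)
print(f"net AUC = {net_auc:.0f} mmol·min/L")
```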

Relevance:

80.00%

Abstract:

Aims – To develop local contemporary coefficients for the Trauma and Injury Severity Score in New Zealand, TRISS(NZ), and to evaluate their performance at predicting survival against the original TRISS coefficients. Methods – Retrospective cohort study of adults who sustained a serious traumatic injury and survived until presentation at Auckland City, Middlemore, Waikato, or North Shore Hospitals between 2002 and 2006. Coefficients were estimated using ordinary and multilevel mixed-effects logistic regression models. Results – 1735 eligible patients were identified, 1672 (96%) injured from a blunt mechanism and 63 (4%) from a penetrating mechanism. For blunt mechanism trauma, 1250 (75%) were male and the average age was 38 years (range: 15–94 years). TRISS information was available for 1565 patients, of whom 204 (13%) died. The area under the receiver operating characteristic (ROC) curve was 0.901 (95% CI: 0.879–0.923) for the TRISS(NZ) model and 0.890 (95% CI: 0.866–0.913) for TRISS (P < 0.001). Insufficient data were available to determine coefficients for penetrating mechanism TRISS(NZ) models. Conclusions – Both TRISS models accurately predicted survival for blunt mechanism trauma. However, the TRISS(NZ) coefficients were statistically superior to the TRISS coefficients. A strong case exists for replacing TRISS coefficients in the New Zealand benchmarking software with these updated TRISS(NZ) estimates.
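
TRISS converts the Revised Trauma Score (RTS), Injury Severity Score (ISS) and an age index into a survival probability through a logistic model; re-deriving coefficients, as this study does, means re-estimating the b's on local data. A sketch with placeholder coefficients (not the published MTOS or NZ values):

```python
import math

def triss_ps(rts, iss, age_years, coef):
    """Probability of survival under the TRISS logistic model.

    coef = (b0, b1, b2, b3); the age index is 0 if age < 55, else 1.
    """
    age_index = 0 if age_years < 55 else 1
    b = coef[0] + coef[1] * rts + coef[2] * iss + coef[3] * age_index
    return 1.0 / (1.0 + math.exp(-b))

# Illustrative blunt-trauma coefficients only (placeholders, not published values).
print(f"Ps = {triss_ps(rts=7.84, iss=16, age_years=38, coef=(-1.25, 0.95, -0.08, -1.9)):.3f}")
```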

Relevance:

80.00%

Abstract:

Rapidly changing economic, social, and environmental conditions have created a need for urban and regional planning practitioners who are resilient, innovative, and able to cope with the increasingly complex and cosmopolitan nature of major metropolitan areas. This need should be reflected in planning education that allows students to experience a diverse range of approaches to problems and challenges, and that exposes students to the diverse array of perspectives on planning issues. This paper investigates the outcomes of a collaborative regional planning exercise organised jointly by planning academics from Queensland University of Technology and the International Islamic University of Malaysia, and involving planning students from both universities. The regional planning exercise consisted of a regional appraisal and report topics for the area under investigation, Klang Valley – Kuala Lumpur, Malaysia. It culminated in the presentation of regional development strategies for the area, with a field trip to Malaysia as the cornerstone of the project. The collaborative exercise involved a series of workshops and seminars organised locally, in which both Australian and Malaysian planning students participated, as well as meetings with local and federal planning officials and a forum for Young Planners of the Australian and Malaysian Planning Institutes. The experience attempted to bridge the teaching of theoretical concepts of regional planning and development with the more professional knowledge of planning practice as it relates to specific political, institutional and cultural contexts. A survey of participating students from both Queensland University of Technology and the International Islamic University of Malaysia highlights the benefits of such a project in terms of learning experience and exposure to different cultural contexts.

Relevance:

80.00%

Abstract:

Motorised countries have more fatal road crashes in rural areas than in urban areas. In Australia, over two thirds of the population live in urban areas, yet approximately 55 percent of road fatalities occur in rural areas (ABS, 2006; Tziotis, Mabbot, Edmonston, Sheehan & Dwyer, 2005). Road and environmental factors increase the challenges of rural driving, but do not fully account for the disparity. Rural drivers are less compliant with recommendations regarding the “fatal four” behaviours of speeding, drink driving, seatbelt non-use and fatigue, and the reasons for their lower apparent receptivity to road safety messages are not well understood. Countermeasures targeting driver behaviour that have been effective in reducing road crashes in urban areas have been less successful in rural areas (FORS, 1995). However, potential barriers to receptivity to road safety information among rural road users have not been systematically investigated. This thesis aims to develop a road safety countermeasure that addresses three areas that potentially affect receptivity to rural road safety information. The first is the psychological barriers formed by road users’ attitudes, including risk evaluation, optimism bias, locus of control and readiness to change. A second area is the timing and method of intervention delivery, which includes the production of a brief intervention and the feasibility of delivering it at a “teachable moment”. The third area under investigation is the content of the brief intervention. This study describes the process of developing an intervention that includes content to address road safety attitudes and improve the safety behaviours of rural road users regarding the “fatal four”. The research commences with a review of the literature on rural road crashes, brief interventions, intervention design and implementation, and potential psychological barriers to receptivity. This literature provides a rationale for the development of a brief intervention for rural road safety with a focus on driver attitudes and behaviour. The research is then divided into four studies. The primary aim of Study One and Study Two is to investigate the receptivity of rural drivers to road safety interventions, with a view to identifying barriers to the efficacy of these strategies.

Relevance:

80.00%

Abstract:

Background: Currently used Trauma and Injury Severity Score (TRISS) coefficients, which measure probability of survival (Ps), were derived from the Major Trauma Outcome Study (MTOS) in 1995 and are now unlikely to be optimal. This study aims to estimate new TRISS coefficients using a contemporary database of injured patients presenting to emergency departments in the United States, and to compare these against the MTOS coefficients. Methods: Data were obtained from the National Trauma Data Bank (NTDB) and the NTDB National Sample Project (NSP). TRISS coefficients were estimated using logistic regression. Separate coefficients were derived from complete case and multistage multiple imputation analyses for each of the NTDB and NSP datasets. Associated Ps over Injury Severity Score values were graphed and compared by age (adult ≥ 15 years; pediatric < 15 years) and injury mechanism (blunt; penetrating) groups. The area under the receiver operating characteristic curve was used to assess the coefficients’ predictive performance. Results: Overall, 1,072,033 NTDB and 1,278,563 weighted NSP injury events were included, compared with 23,177 used in the original MTOS analyses. Large differences were seen between results from complete case and imputed analyses. For blunt mechanism and adult penetrating mechanism injuries, there were similarities between coefficients estimated on imputed samples, and marked divergences between the associated Ps estimates and those from the MTOS. However, negligible differences existed between estimates of the area under the receiver operating characteristic curve, because the overwhelming majority of patients had minor trauma and survived. For pediatric penetrating mechanism injuries, variability in coefficients was large and Ps estimates were unreliable. Conclusions: Imputed NTDB coefficients are recommended as the TRISS coefficients 2009 revision for blunt mechanism and adult penetrating mechanism injuries. Coefficients for pediatric penetrating mechanism injuries could not be reliably estimated.
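
A compact sketch of the multiple-imputation idea used here: impute several completed datasets, fit the logistic model on each, and pool the coefficients. Everything below (the simulated data, missingness rate, and pooling by simple averaging rather than full Rubin's rules) is illustrative only:

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 1000
X = rng.normal(size=(n, 3))                      # e.g. RTS, ISS, age index
true_b = np.array([0.9, -0.08, -1.7])            # invented coefficients
y = (X @ true_b + rng.logistic(size=n) > 0).astype(int)
X[rng.random((n, 3)) < 0.15] = np.nan            # 15% missing at random

# Multiple imputation: m completed datasets, coefficients pooled by averaging
# (Rubin's rules would also pool the variances; omitted for brevity).
coefs = []
for m in range(5):
    imp = IterativeImputer(sample_posterior=True, random_state=m)
    Xm = imp.fit_transform(X)
    coefs.append(LogisticRegression(max_iter=1000).fit(Xm, y).coef_[0])
print(np.mean(coefs, axis=0).round(2))
```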

Relevance:

80.00%

Abstract:

Efficient management of domestic wastewater is a primary requirement for human well-being. Failure to adequately address issues of wastewater collection, treatment and disposal can lead to adverse public health and environmental impacts. The increasing spread of urbanisation has led to the conversion of previously rural land into urban developments and the more intensive development of semi-urban areas. However, the provision of reticulated sewerage facilities has not kept pace with this expansion in urbanisation. This has resulted in a growing dependency on onsite sewage treatment. Though considered only a temporary measure in the past, these systems are now considered the most cost-effective option and have become a permanent feature in some urban areas. This report is the first of a series of reports to be produced and is the outcome of a research project initiated by the Brisbane City Council. The primary objective of the research undertaken was to relate the treatment performance of onsite sewage treatment systems to soil conditions at the site, with the emphasis being on septic tanks. This report consists of a ‘state of the art’ review of research undertaken in the arena of onsite sewage treatment. The evaluation of research brings together significant work undertaken locally and overseas. It focuses mainly on septic tanks, in keeping with the primary objectives of the project. This report has acted as the springboard for the later field investigations and analysis undertaken as part of the project. Septic tanks continue to be used widely due to their simplicity and low cost. Generally, the treatment performance of septic tanks can be highly variable due to numerous factors, but a properly designed, operated and maintained septic tank can produce effluent of satisfactory quality. The reduction of hydraulic surges from washing machines and dishwashers, regular removal of accumulated septage and the elimination of harmful chemicals are some of the practices that can improve system performance considerably. The relative advantage of multi-chamber over single-chamber septic tanks is an issue that needs to be resolved in view of the conflicting research outcomes. In recent years, aerobic wastewater treatment systems (AWTS) have been gaining in popularity. This can be mainly attributed to the desire to avoid subsurface effluent disposal, which is the main cause of septic tank failure. The use of aerobic processes for treatment of wastewater and the disinfection of effluent prior to disposal is capable of producing effluent of a quality suitable for surface disposal. However, the field performance of these systems has been disappointing. A significant number of these systems do not perform to stipulated standards, and quality can be highly variable. This is primarily due to householder neglect or ignorance of correct operational and maintenance procedures. The other problems include greater susceptibility to shock loadings and sludge bulking. As identified in the literature, a number of design features can also contribute to this wide variation in quality. The other treatment processes in common use are the various types of filter systems, including intermittent and recirculating sand filters. These systems too have their inherent advantages and disadvantages. Furthermore, as in the case of aerobic systems, their performance is very much dependent on individual householder operation and maintenance practices.
In recent years the use of biofilters has attracted research interest, particularly the use of peat. High removal rates of various wastewater pollutants have been reported in the research literature. Despite these satisfactory results, leachate from peat has been reported in various studies. This is an issue that needs further investigation, and as such biofilters can still be considered to be in the experimental stage. The use of other filter media such as absorbent plastic and bark has also been reported in the literature. The safe and hygienic disposal of treated effluent is a matter of concern in the case of onsite sewage treatment. Subsurface disposal is the most common option, and the only option in the case of septic tank treatment. Soil is an excellent treatment medium if suitable conditions are present. The processes of sorption, filtration and oxidation can remove the various wastewater pollutants. The subsurface characteristics of the disposal area are among the most important parameters governing process performance. Therefore it is important that soil and topographic conditions are taken into consideration in the design of the soil absorption system. Seepage trenches and beds are the common systems in use. Seepage pits or chambers can be used where subsurface conditions warrant, whilst above-grade mounds have been recommended for a variety of difficult site conditions. All these systems have their inherent advantages and disadvantages, and the preferable soil absorption system should be selected based on site characteristics. The use of gravel as in-fill for beds and trenches is open to question. It does not contribute to effluent treatment and has been shown to reduce the effective infiltrative surface area, due to physical obstruction and the migration of fines entrained in the gravel into the soil matrix. The surface application of effluent is coming into increasing use with the advent of aerobic treatment systems. This has the advantage that treatment is undertaken in the upper soil horizons, which are chemically and biologically the most effective in effluent renovation. Numerous research studies have demonstrated the feasibility of this practice. However, the overriding criterion is the quality of the effluent. It has to be of exceptionally good quality in order to ensure that there are no resulting public health impacts due to aerosol drift. This essentially is the main issue of concern, given the unreliability of effluent quality from aerobic systems. Secondly, it has also been found that most householders do not take adequate care in the operation of spray irrigation systems or in the maintenance of the irrigation area. Under these circumstances, surface disposal of effluent should be approached with caution and would require appropriate householder education and stringent compliance requirements. Despite all this, however, the efficiency with which the process is undertaken will ultimately rest with the individual householder, and this is where most concern lies. Greywater requires similar consideration. Surface irrigation of greywater is currently permitted in a number of local authority jurisdictions in Queensland. Considering the fact that greywater constitutes the largest fraction of the total wastewater generated in a household, it could be considered a potential resource. Unfortunately, in most circumstances the only pretreatment required prior to reuse is the removal of oil and grease.
This is an issue of concern, as greywater can be considered a weak to medium sewage: it contains primary pollutants such as BOD material and nutrients and may also include microbial contamination. Therefore its use for surface irrigation can pose a potential health risk. This is further compounded by the fact that most householders are unaware of the potential adverse impacts of indiscriminate greywater reuse. As in the case of blackwater effluent reuse, there have been suggestions that greywater should also be subjected to stringent guidelines. Under these circumstances, the surface application of any wastewater requires careful consideration. The other option available for the disposal of effluent is the use of evaporation systems. The use of evapotranspiration systems has been covered in this report. Research has shown that these systems are susceptible to a number of factors, in particular climatic conditions. As such, their applicability is location specific. Also, the design of systems based solely on evapotranspiration is questionable. In order to ensure more reliability, the systems should be designed to include soil absorption. The successful use of these systems for intermittent usage has been noted in the literature. Taking into consideration the issues discussed above, subsurface disposal of effluent is the safest under most conditions, provided the facility has been designed to accommodate site conditions. The main problem associated with subsurface disposal is the formation of a clogging mat on the infiltrative surfaces. Due to the formation of the clogging mat, the capacity of the soil to handle effluent is no longer governed by the soil’s hydraulic conductivity as measured by the percolation test, but rather by the infiltration rate through the clogged zone. The characteristics of the clogging mat have been shown to be influenced by various soil and effluent characteristics. Secondly, the mechanisms of clogging mat formation have been found to be influenced by various physical, chemical and biological processes. Biological clogging is the most common process taking place and occurs due to bacterial growth or its by-products reducing the soil pore diameters. Biological clogging is generally associated with anaerobic conditions. The formation of the clogging mat provides significant benefits. It acts as an efficient filter for the removal of microorganisms. Also, as the clogging mat increases the hydraulic impedance to flow, unsaturated flow conditions will occur below the mat. This permits greater contact between effluent and soil particles, thereby enhancing the purification process. This is particularly important in the case of highly permeable soils. However, the adverse impacts of clogging mat formation cannot be ignored, as they can lead to a significant reduction in the infiltration rate. This in fact is the most common cause of soil absorption system failure. As the formation of the clogging mat is inevitable, it is important to ensure that it does not impede effluent infiltration beyond tolerable limits. Various strategies have been investigated to either control clogging mat formation or to remediate its severity. Intermittent dosing of effluent is one such strategy that has attracted considerable attention. Research conclusions with regard to short-duration time intervals are contradictory.
It has been claimed that intermittent rest periods would result in the aerobic decomposition of the clogging mat, leading to a subsequent increase in the infiltration rate. Contrary to this, it has also been claimed that short-duration rest periods are insufficient to completely decompose the clogging mat, and that the intermediate by-products that form as a result of aerobic processes would in fact lead to even more severe clogging. It has been further recommended that the rest periods should be much longer, in the range of about six months. This entails the provision of a second, alternating seepage bed. The other concepts that have been investigated are the design of the bed to meet the equilibrium infiltration rate that would eventuate after clogging mat formation; improved geometry, such as the use of seepage trenches instead of beds; serial instead of parallel effluent distribution; and low-pressure dosing of effluent. The use of physical measures such as oxidation with hydrogen peroxide and replacement of the infiltration surface has been shown to be only of short-term benefit. Another issue of importance is the degree of pretreatment that should be provided to the effluent prior to subsurface application, and the influence exerted by pollutant loadings on clogging mat formation. Laboratory studies have shown that the total mass loadings of BOD and suspended solids are important factors in the formation of the clogging mat. It has also been found that the nature of the suspended solids is an important factor. The finer particles from extended aeration systems, when compared to those from septic tanks, will penetrate deeper into the soil and hence will ultimately cause a denser clogging mat. However, the importance of improved pretreatment in clogging mat formation may need to be qualified in view of other research studies. It has also been shown that effluent quality may be a factor in the case of highly permeable soils, but this may not be the case with fine-structured soils. The ultimate test of onsite sewage treatment system efficiency rests with the final disposal of effluent. The implications of system failure, as evidenced by the surface ponding of effluent or the seepage of contaminants into the groundwater, can be very serious, as they can lead to environmental and public health impacts. Significant microbial contamination of surface and groundwater has been attributed to septic tank effluent. There are a number of documented instances of septic tank-related waterborne disease outbreaks affecting large numbers of people. In a recent incident, it was the local authority, and not the individual septic tank owners, that was found liable for an outbreak of viral hepatitis A, as no action had been taken to remedy septic tank failure. This illustrates the responsibility placed on local authorities in terms of ensuring the proper operation of onsite sewage treatment systems. Even a properly functioning soil absorption system is only capable of removing phosphorus and microorganisms. The nitrogen remaining after plant uptake will not be retained in the soil column, but will instead gradually seep into the groundwater as nitrate. Conditions for nitrogen removal by denitrification are not generally present in a soil absorption bed. Dilution by groundwater is the only treatment available for reducing the nitrogen concentration to specified levels. Therefore, based on subsurface conditions, this essentially entails a maximum allowable concentration of septic tanks in a given area.
Unfortunately, nitrogen is not the only wastewater pollutant of concern. Relatively long survival times and travel distances have been noted for microorganisms originating from soil absorption systems. This is likely to happen if saturated conditions persist under the soil absorption bed, or due to surface runoff of effluent as a result of system failure. Soils have a finite capacity for the removal of phosphorus. Once this capacity is exceeded, phosphorus too will seep into the groundwater. The relatively high mobility of phosphorus in sandy soils has been noted in the literature. These issues have serious implications for the design and siting of soil absorption systems. It is important to ensure not only that the system design is based on subsurface conditions, but also that the density of these systems in a given area is kept within critical limits. This essentially involves the adoption of a land capability approach to determine the limitations of an individual site for onsite sewage disposal. The most limiting factor at a particular site would determine the overall capability classification for that site, which would also dictate the type of effluent disposal method to be adopted.

Relevance:

80.00%

Abstract:

The texture of agricultural crops changes during harvesting, post-harvest handling and processing due to different loading processes. The sources of loading that deform agricultural crop tissues include impact, compression, and tension. Scanning electron microscopy (SEM) is a common way of analysing the cellular changes of materials before and after these loading operations. This paper examines the structural changes of pumpkin peel and flesh tissues under mechanical loading. Compression and indentation tests were performed on peel and flesh samples. Sample structures were then fixed and dehydrated in order to capture the cellular changes under SEM. The results were compared with images of normal peel and flesh tissues. The findings suggest that normal flesh tissue had larger cells, while the cellular arrangement of the peel was smaller. Structural damage was clearly observed in the tissue structure after compression and indentation. However, the damage that resulted from the flat-end indenter was much more severe than that from the spherical-end indenter or the compression test. An integrated deformed tissue layer was observed in compressed tissue, while the indentation tests formed a deformed area under the indenter and left the rest of the tissue unharmed. There was an obvious broken layer of cells on the walls of the hole after the flat-end indentations, whereas the spherical indenter created a squashed layer all around the hole. Furthermore, the influence of loading was lower on peel samples in comparison with the flesh samples. The experiments have shown that the rate of damage to tissue under a constant rate of loading is highly dependent on the shape of the equipment. This fact, and the structural changes observed after loading, underline the significance of designing post-harvest equipment to reduce the rate of damage to agricultural crop tissues.