Abstract:
Bacteria, yeasts, and viruses are rapidly killed on metallic copper surfaces, and the term "contact killing" has been coined for this process. While the phenomenon was already known in ancient times, it is currently receiving renewed attention because of the potential use of copper as an antibacterial material in health care settings. Contact killing was observed to take place at a rate of at least 7 to 8 logs per hour, and no live microorganisms were generally recovered from copper surfaces after prolonged incubation. The antimicrobial activity of copper and copper alloys is now well established, and copper has recently been registered with the U.S. Environmental Protection Agency as the first solid antimicrobial material. In several clinical studies, copper has been evaluated for use on touch surfaces, such as door handles, bathroom fixtures, or bed rails, in attempts to curb nosocomial infections. In connection with these new applications of copper, it is important to understand the mechanism of contact killing, since it may bear on central issues such as the possibility of the emergence and spread of resistant organisms, cleaning procedures, and questions of material and object engineering. Recent work has shed light on mechanistic aspects of contact killing. These findings will be reviewed here and juxtaposed with the toxicity mechanisms of ionic copper. The merit of copper as a hygienic material in hospitals and related settings will also be discussed.
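As a rough quantitative gloss (ours, not a formula from the review), a contact-killing rate of k log reductions per hour corresponds to first-order decay of the viable count N(t):

$$N(t) = N_0 \cdot 10^{-kt}, \qquad k \approx 7 \text{ to } 8 \ \log_{10}\,\mathrm{h}^{-1},$$

so an inoculum of 10^9 CFU would fall below a single surviving cell in well under two hours, consistent with the report that no live microorganisms are recovered after prolonged incubation.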
Abstract:
The purpose of this study is to provide a procedure to include emissions to the atmosphere resulting from the combustion of diesel fuel during dredging operations in the decision-making process of dredging equipment selection. The proposed procedure is demonstrated for typical dredging methods and data from the Illinois Waterway as performed by the U.S. Army Corps of Engineers, Rock Island District. The equipment included in this study is a 16-inch cutterhead pipeline dredge and a mechanical bucket dredge used during the 2005 dredging season on the Illinois Waterway. Considerable effort has been put forth to identify and reduce environmental impacts from dredging operations. Though the environmental impacts of dredging have been studied, no prior efforts have evaluated air emissions from comparable types of dredging equipment, as this study does. By identifying the type of dredging equipment with the lowest air emissions, when cost, site conditions, and equipment availability are comparable, adverse environmental impacts can be minimized without compromising the dredging project. A total of 48 scenarios were developed by varying the dredged material quantity, transport distance, and production rates. This produced an "envelope" of results applicable to a broad range of site conditions. Total diesel fuel consumed was calculated using standard cost estimating practices as defined in the U.S. Army Corps of Engineers Construction Equipment Ownership and Operating Expense Schedule (USACE, 2005). The diesel fuel usage was estimated for all equipment used to mobilize and/or operate each dredging crew for every scenario. A limited Life Cycle Assessment (LCA) was used to estimate the air emissions from two comparable dredging operations utilizing SimaPro LCA software. An Environmental Impact Single Score (EISS) was the SimaPro output selected for comparison with the cost per CY of dredging, potential production rates, and transport distances to identify possible decision points. The total dredging time was estimated for each dredging crew and scenario. An average hourly cost for both dredging crews was calculated based on Rock Island District 2005 dredging season records (Graham 2007/08). The results from this study confirm commonly used rules of thumb in the dredging industry by indicating that mechanical bucket dredges are better suited to long transport distances and have lower air emissions and cost per CY for smaller quantities of dredged material. In addition, the results show that a cutterhead pipeline dredge would be preferable for moderate and large volumes of dredged material when no additional booster pumps are required. Finally, the results indicate that production rates can be a significant factor when evaluating the air emissions from comparable dredging equipment.
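A minimal sketch of the scenario-envelope calculation described above (function and variable names, fuel rates, and costs are illustrative assumptions, not values from the study or the USACE schedule):

import itertools

# Illustrative per-scenario fuel use and unit cost for one dredging crew.
def scenario(quantity_cy, production_cy_per_hr, hourly_fuel_gal, hourly_cost_usd):
    hours = quantity_cy / production_cy_per_hr           # total dredging time
    fuel_gal = hours * hourly_fuel_gal                   # diesel consumed
    cost_per_cy = hours * hourly_cost_usd / quantity_cy  # unit cost of dredging
    return fuel_gal, cost_per_cy

# Vary quantity and production rate to build an "envelope" of results.
for qty, rate in itertools.product((50_000, 150_000, 500_000), (300, 600)):
    fuel, unit_cost = scenario(qty, rate, hourly_fuel_gal=120, hourly_cost_usd=2_500)
    print(f"{qty:>7} CY @ {rate} CY/hr: {fuel:,.0f} gal diesel, ${unit_cost:.2f}/CY")

In the study, the resulting fuel totals would then feed the LCA step (SimaPro) to produce the EISS used for comparison.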
Abstract:
The occurrence of elevated uranium (U) in sandstone aquifers was investigated in the Upper Peninsula of Michigan, focusing on aquifers of the Jacobsville Sandstone. The hydrogeochemical controls on groundwater U concentrations were characterized using a combination of water sampling and spectral gamma-ray logging of sandstone cliffs and residential water wells. 235U/238U isotope ratios were consistent with naturally occurring U. Approximately 25% of the 270 wells tested had U concentrations above the U.S. Environmental Protection Agency Maximum Contaminant Level (MCL) of 30 μg/L, with elevated U generally occurring in localized clusters. Water wells were logged to determine whether groundwater U anomalies could be explained by the heterogeneous distribution of U in the sandstone. Not all wells with relative U enrichment in the sandstone produced water with U above the MCL, indicating that the effect of U enrichment in the sandstone may be modified by other hydrogeochemical factors. Well waters had high redox potentials, indicating that U occurs in its highly soluble hexavalent, U(VI), state. Equilibrium modeling indicated that aqueous U is complexed with carbonates. In general, wells with elevated U concentrations had low 235U/238U activity ratios. However, in some areas U concentrations and 235U/238U activity ratios were simultaneously high, possibly indicating differences in rock-water interactions. Limited groundwater age dating suggested that residence time may also help explain variations in well water U concentrations. Low levels of U enrichment (4 to 30 ppm) in the Jacobsville Sandstone may put wells in its oxidized aquifers at risk of U concentrations above the MCL. On average, U concentrations were highest in heavy mineral and clay layers and rip-up conglomerates. Uranium concentrations above 4 ppm also occurred in siltstones, sandstones, and conglomerates. Uranium enrichment was likely controlled by depositional processes, sorption to clays, and groundwater flow, which was in turn controlled by permeability variations in the sandstone. Low levels of U enrichment were found in deltaic, lacustrine, and alluvial fan deposits and were not isolated to specific depositional environments.
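For context, the natural 235U/238U activity ratio against which such measurements are judged follows from isotopic abundances and half-lives (a standard calculation, not taken from the study); ratios near this value are consistent with naturally occurring U:

import math

# Activity A = lambda * N, with decay constant lambda = ln(2) / half-life,
# so the activity ratio is (abundance ratio) * (inverse half-life ratio).
T_HALF_U235 = 7.04e8   # years
T_HALF_U238 = 4.468e9  # years
ABUND_U235 = 0.0072    # natural atom fraction of 235U
ABUND_U238 = 0.9927    # natural atom fraction of 238U

activity_ratio = (ABUND_U235 / ABUND_U238) * (T_HALF_U238 / T_HALF_U235)
print(f"natural 235U/238U activity ratio ~ {activity_ratio:.3f}")  # ~0.046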
Abstract:
This dissertation examines the global technological and environmental history of copper smelting and the conflict that developed between historic preservation and environmental remediation at major copper smelting sites in the United States after their productive periods ended. Part I of the dissertation is a synthetic overview of the history of copper smelting and its environmental impact. After reviewing the basic metallurgy of copper ores, the dissertation contains successive chapters on the history of copper smelting to 1640, culminating in the so-called German, or Continental, processing system; on the emergence of the rival Welsh system during the British industrial revolution; and on the growth of American dominance in copper production in the late 19th and early 20th centuries. The latter chapter focuses, in particular, on three of the most important early American copper districts: Michigan's Keweenaw Peninsula, Tennessee's Copper Basin, and Butte-Anaconda, Montana. As these three districts went into decline and ultimately out of production, they left a rich industrial heritage and significant waste and pollution problems generated by increasingly sophisticated technologies capable of commercially processing steadily growing volumes of decreasingly rich ores. Part II of the dissertation looks at the conflict between historic preservation and environmental remediation that emerged locally and nationally in copper districts as they went into decline and eventually ceased production. Locally, former copper mining communities often split between those who wished to commemorate a region's past importance and develop heritage tourism, and local developers who wished to clear up and clean out old industrial sites for other purposes. Nationally, Congress passed laws in the 1960s and 1970s mandating the preservation of historical resources (National Historic Preservation Act) and laws mandating the cleanup of contaminated landscapes (CERCLA, or Superfund), objectives that were sometimes in conflict, especially at copper smelting sites. The dissertation devotes individual chapters to the conflicts that developed between environmental remediation, particularly involving the Environmental Protection Agency, and the heritage movement in the Tennessee, Montana, and Michigan copper districts. A concluding chapter provides a broad model to illustrate the relationship between industrial decline, federal environmental remediation activities, and the growth of heritage consciousness in former copper mining and smelting areas, analyzes why the outcome varied in the three areas, and suggests methods for dealing with heritage-remediation issues to minimize conflict and maximize heritage preservation.
Abstract:
Purpose. To examine the association between living in proximity to Toxics Release Inventory (TRI) facilities and the incidence of childhood cancer in the State of Texas.
Design. This is a secondary data analysis utilizing the publicly available Toxics Release Inventory (TRI), maintained by the U.S. Environmental Protection Agency, which lists the facilities that release any of the 650 TRI chemicals. Total childhood cancer cases and childhood cancer rates (ages 0-14 years) by county for the years 1995-2003 were taken from the Texas Cancer Registry, available at the Texas Department of State Health Services website. Setting: This study was limited to the child population of the State of Texas.
Method. Analysis was done using Stata version 9 and SPSS version 15.0. SaTScan was used for geographical spatial clustering of childhood cancer cases based on county centroids, using the Poisson clustering algorithm, which adjusts for population density. Pictorial maps were created using MapInfo Professional version 8.0.
Results. One hundred and twenty-five counties had no TRI facilities in their region, while 129 counties had at least one TRI facility. An increasing trend in the number of facilities and total disposal across cancer rate quartiles was observed, except for the highest quartile. Linear regression using log-transformed number of facilities and total disposal to predict cancer rates found neither variable to be a significant predictor. Seven significant geographical spatial clusters of counties with high childhood cancer rates (p<0.05) were identified. Binomial logistic regression, categorizing the cancer rate into two groups (<=150 and >150), indicated an odds ratio of 1.58 (CI 1.127, 2.222) for the natural log of the number of facilities.
Conclusion. We used a unique methodology, combining GIS and spatial clustering techniques with existing statistical approaches, to examine the association between living in proximity to TRI facilities and the incidence of childhood cancer in the State of Texas. Although a concrete association was not indicated, further studies examining specific TRI chemicals are required. This information can enable researchers and the public to identify potential concerns, gain a better understanding of potential risks, and work with industry and government to reduce toxic chemical use, disposal, or other releases and the risks associated with them. TRI data, in conjunction with other information, can be used as a starting point in evaluating exposures and risks.
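A minimal sketch of the final regression step (the data and column names are hypothetical, and the SaTScan clustering step is not reproduced here):

import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical county-level data: cancer rate and TRI facility count.
df = pd.DataFrame({
    "cancer_rate": [120, 180, 95, 210, 130, 165, 175, 90],
    "n_facilities": [1, 12, 1, 40, 9, 3, 22, 2],
})

df["high_rate"] = (df["cancer_rate"] > 150).astype(int)  # dichotomize at 150
df["log_facilities"] = np.log(df["n_facilities"])        # natural-log predictor

model = sm.Logit(df["high_rate"], sm.add_constant(df["log_facilities"])).fit(disp=0)
print(np.exp(model.params["log_facilities"]))            # odds ratio per unit ln(facilities)
print(np.exp(model.conf_int().loc["log_facilities"]))    # 95% confidence interval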
Abstract:
This study is a secondary analysis of a survey developed by Dr. Jimmy Perkins and administered by the San Antonio/Bexar County Metropolitan Health District. The survey was developed subsequent to the implementation of the city smoking ordinance effective January 1, 2004. The survey had a multi-purpose plan: to establish the number of restaurants having smoke free status prior to and following the ordinance, to determine compliance as it relates to a necessary smoking section and proper signage, and to expose the rationale for restaurants to become smoke free. The data resulting from the survey were presented to the San Antonio/Bexar County Metropolitan Health District. The summary presented the types of establishments surveyed, smoking status of the establishment, reasons for the establishment becoming smoke free, compliance with smoking sections, compliance with signage requirements, awareness of the ordinance, and chain status of the establishment.
The results of this study display the relationships among the variables previously mentioned; each relationship was examined and tested for significance. The analysis showed that knowledge of the ordinance translates into compliance with signage regulations, which in turn translates into ordinance compliance. Size does matter as it relates to an establishment's number of employees and seating capacity: the smaller the establishment, the more likely it was to have become smoke free before the ordinance went into effect. Restaurants, rather than fast food establishments, most commonly cited compliance with the ordinance as their reason for becoming smoke free, and only ten percent of restaurants gave policy as the main reason.
This study is important for public health because the negative health effects of environmental tobacco smoke (ETS) are still an overwhelming problem in the United States (3). ETS is a known Group A human carcinogen (5). The Environmental Protection Agency (EPA) has estimated that around 3,000 non-smoking Americans die every year from lung cancer caused by ETS (6). This information illustrates the importance of providing smoke free establishments, especially to non-smoking patrons.
Abstract:
The occurrence of waste pharmaceuticals in water sources has been identified and well documented throughout North America and Europe, and many studies have identified the occurrence of various pharmaceutical compounds in these waters. This project is an extensive review of the documented evidence of this occurrence published in the scientific literature, performed to determine whether this occurrence has a significant impact on the environment and public health. The review found that pharmaceuticals such as sex hormone drugs, antibiotics, and antineoplastic/cytostatic agents, as well as their metabolites, occur in water sources throughout the United States at levels high enough to have noticeable impacts on human health and the environment. The primary sources of these pharmaceuticals were determined to be wastewater effluent and solid wastes from sewage treatment plants, pharmaceutical manufacturing plants, and healthcare and biomedical research facilities, as well as runoff from veterinary medicine applications (including aquaculture).
In addition, current public policies of US governmental agencies such as the Environmental Protection Agency (EPA), Food and Drug Administration (FDA), and Drug Enforcement Administration (DEA) were evaluated to see whether they adequately control this issue. Specific recommendations for developing these EPA, FDA, and DEA policies have been made to mitigate, prevent, or eliminate this issue.
Other possible interventions, such as engineering controls, were also evaluated. These include improved treatment technologies, such as advances in the wastewater treatment processes used by conventional sewage treatment and pharmaceutical manufacturing plants. In addition, administrative controls, such as the use of "green chemistry" in drug synthesis and design, were explored and evaluated as possible alternatives. Specific recommendations for incorporating these engineering and administrative controls into the applicable EPA, FDA, and DEA policies have also been made.
Abstract:
A life table methodology was developed which estimates the expected remaining Army service time and the expected remaining Army sick time by years of service for the United States Army population. A measure of illness impact was defined as the ratio of expected remaining Army sick time to expected remaining Army service time. The variances of the resulting estimators were developed on the basis of current data. The theory of partial and complete competing risks was considered for each type of decrement (death, administrative separation, and medical separation) and for the causes of sick time.
The methodology was applied to worldwide U.S. Army data for calendar year 1978. A total of 669,493 enlisted personnel and 97,704 officers were reported on active duty as of 30 September 1978. During calendar year 1978, the Army Medical Department reported 114,647 inpatient discharges and 1,767,146 sick days. Although the methodology is completely general with respect to the definition of sick time, only sick time associated with an inpatient episode was considered in this study.
Since the temporal measure was years of Army service, an age-adjusting process was applied to the life tables for comparative purposes. Analyses were conducted by rank (enlisted and officer), race, and sex, and were based on the ratio of expected remaining Army sick time to expected remaining Army service time. Seventeen major diagnostic groups, classified by the Eighth Revision, International Classification of Diseases, Adapted for Use in the United States, were ranked according to their cumulative (across years of service) contribution to expected remaining sick time.
The study results indicated that enlisted personnel tend to have more expected hospital-associated sick time relative to their expected Army service time than officers. Non-white officers generally have more expected sick time relative to their expected Army service time than white officers; this racial differential was not supported within the enlisted population. Females tend to have more expected sick time relative to their expected Army service time than males, a tendency that remained after diagnostic groups 580-629 (Genitourinary System) and 630-678 (Pregnancy and Childbirth) were removed. Problems associated with the circulatory system, digestive system, and musculoskeletal system were the three leading causes of cumulative sick time across years of service.
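A minimal sketch of the central ratio (the decrement probabilities and sick-day rates below are invented for illustration; the study estimated them from the 1978 Army data and treated the competing risks more carefully):

# Life-table style estimate of expected remaining service time and sick time
# from year-of-service 0, using a single combined decrement.
# leave_prob[i]: probability of leaving the Army (death, administrative or
# medical separation) during year of service i; sick_days[i]: mean inpatient
# sick days accrued while serving year i. All values are placeholders.
leave_prob = [0.20, 0.15, 0.12, 0.10, 0.10, 0.25, 0.30, 0.40, 0.60, 1.00]
sick_days  = [2.5, 2.0, 1.8, 1.8, 2.0, 2.2, 2.5, 3.0, 3.5, 4.0]

surviving = 1.0
expected_service_years = 0.0
expected_sick_days = 0.0
for q, s in zip(leave_prob, sick_days):
    expected_service_years += surviving      # person-years contributed (crude: full year)
    expected_sick_days += surviving * s      # sick days contributed this year
    surviving *= (1.0 - q)                   # attrition to the next year of service

# Illness impact: expected remaining sick time / expected remaining service time.
ratio = (expected_sick_days / 365.25) / expected_service_years
print(f"expected remaining service: {expected_service_years:.2f} yr; "
      f"illness impact ratio: {ratio:.5f}")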
Abstract:
Background. Research has shown that elevations of only 10 mmHg in diastolic blood pressure (BP) and 5 mmHg in systolic BP are associated with substantial (as large as 50%) increases in the risk of cardiovascular disease, a leading cause of death worldwide. Epidemiological studies have found that particulate matter (PM) increases BP, and several proposed biological mechanisms suggest that the organic fraction of PM contributes to this increase. To understand which components of PM may contribute to the increase in BP, this study focuses on diesel particulate matter (DPM) and polycyclic aromatic hydrocarbons (PAHs). To our knowledge, there have been only four epidemiological studies on BP and DPM, and no epidemiological studies on BP and PAHs.
Objective. Our objective was to evaluate the association between prevalent hypertension and two ambient exposures, DPM and PAHs, in the Mano a Mano cohort.
Methods. The Mano a Mano cohort, established by the M.D. Anderson Cancer Center in 2001, comprises individuals of Mexican origin residing in Houston, TX. Using geographic information systems, we linked modeled annual estimates of PAHs and DPM at the census tract level from the U.S. Environmental Protection Agency's National-Scale Air Toxics Assessment to residential addresses of cohort members. Mixed-effects logistic regression models were applied to determine associations between DPM and PAHs and hypertension while adjusting for confounders.
Results. Ambient levels of DPM, categorized into quartiles, were not statistically associated with hypertension and did not indicate a dose-response relationship. Ambient levels of PAHs, categorized into quartiles, were not associated with hypertension, but did indicate a dose-response relationship in multiple models (for example: Q2: OR = 0.98; 95% CI, 0.73–1.31; Q3: OR = 1.08; 95% CI, 0.82–1.41; Q4: OR = 1.26; 95% CI, 0.94–1.70).
Conclusion. This is the first assessment of the relationship between ambient levels of PAHs and hypertension, and it is among the few studies investigating the association between ambient levels of DPM and hypertension. Future analyses are warranted to explore the effects of DPM and PAHs using different categorizations in order to clarify their relationships with hypertension.
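A condensed sketch of the exposure-quartile analysis (the data are simulated placeholders, and a plain logistic model stands in for the study's mixed-effects specification):

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "pah": rng.lognormal(mean=0.0, sigma=0.5, size=n),  # modeled ambient PAH estimate
    "hypertension": rng.integers(0, 2, size=n),         # placeholder binary outcome
})

# Categorize exposure into quartiles, lowest quartile (Q1) as reference.
df["quartile"] = pd.qcut(df["pah"], 4, labels=["Q1", "Q2", "Q3", "Q4"])
X = pd.get_dummies(df["quartile"], drop_first=True).astype(float)

fit = sm.Logit(df["hypertension"], sm.add_constant(X)).fit(disp=0)
print(np.exp(fit.params[["Q2", "Q3", "Q4"]]))  # odds ratios vs Q1, as quoted above

A monotone rise in the Q2-Q4 odds ratios is what the abstract describes as a dose-response pattern, even when individual CIs include 1.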
Abstract:
There is scant evidence regarding the associations between ambient levels of combustion pollutants and small for gestational age (SGA) infants, and no studies of this type have been completed in the Southern United States. The main objective of this project was to determine associations between combustion pollutants and SGA infants in Texas using three different exposure assessments.
Birth certificate data containing information on maternal and infant characteristics were obtained from the Texas Department of State Health Services (TX DSHS). Exposure assessment data for the three aims came from (1) the U.S. Environmental Protection Agency (EPA) National Air Toxics Assessment (NATA), (2) the U.S. EPA Air Quality System (AQS), and (3) the TX Department of Transportation (DOT), respectively. Multiple logistic regression models were used to determine the associations between combustion pollutants and SGA.
The first study looked at annual estimates of four air toxics at the census tract level in the Greater Houston Area. After controlling for maternal race, maternal education, tobacco use, maternal age, number of prenatal visits, marital status, maternal weight gain, and median census tract income level, adjusted ORs and 95% confidence intervals (CI) for exposure to PAHs (per 10 ng/m3), naphthalene (per 10 ng/m3), benzene (per 1 µg/m3), and diesel engine emissions (per 10 µg/m3) were 1.01 (0.97–1.05), 1.00 (0.99–1.01), 1.01 (0.97–1.05), and 1.08 (0.95–1.23), respectively. In the second study, of Hispanics in El Paso County, AORs and 95% CIs for increases of 5 ng/m3 in the sum of carcinogenic PAHs (Σ c-PAHs), 1 ng/m3 in benzo[a]pyrene, and 100 ng/m3 in naphthalene during the third trimester of pregnancy were 1.02 (0.97–1.07), 1.03 (0.96–1.11), and 1.01 (0.97–1.06), respectively. In the third study, using maternal proximity to major roadways as the exposure metric, there was a negative association with increasing distance from the maternal residence to the nearest major roadway (odds ratio (OR) = 0.96 per 1000 m; 95% CI = 0.94–0.97); however, once adjusted for covariates this effect was no longer significant (AOR = 0.98; 95% CI = 0.96–1.00). There was no association with distance-weighted traffic density (DWTD).
This project is the first to look at SGA and combustion pollutants in the Southern United States with three different exposure metrics. Although no associations were found between SGA and the air pollutants examined, the results contribute to the body of literature assessing maternal exposure to ambient air pollution and adverse birth outcomes.
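The per-increment odds ratios quoted above follow from a standard rescaling of the fitted logistic coefficient (a general fact, not specific to this study): if β is the coefficient per unit of exposure, the odds ratio for an increment Δ is

$$\mathrm{OR}_{\Delta} = e^{\beta\Delta},$$

so, for example, an OR of 0.96 per 1000 m corresponds to β = ln(0.96)/1000 ≈ -4.1 × 10^-5 per metre.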
Abstract:
Birth defects are a leading cause of infant mortality in the United States; about one in 33 births in the United States is diagnosed with a birth defect. Common birth defects include neural tube defects, Down syndrome, and oral clefts. The present study focused on oral clefts.
Oral clefts refer to malformations of the lip, the mouth, or both. The birth prevalence of oral clefts in Texas is about 11 per 10,000 births. Etiologically, oral clefts are classified into two groups: cleft lip with or without cleft palate (CL±P) and isolated cleft palate (CP). In spite of their high prevalence and clinical significance, the etiology of oral clefts in humans is not well understood. Though a number of risk factors have been identified in epidemiological studies, most of them do not explain the majority of cases. The need to identify novel risk factors associated with oral clefts provided the motivation for this study. The present study focused on maternal exposure to several hazardous air pollutants. A common subgroup of hazardous air pollutants is the volatile organic compounds found in petroleum derivatives; four important hydrocarbons in this group are benzene, toluene, ethylbenzene, and xylenes (BTEX).
The specific aim of this study was to evaluate the association between maternal exposure to environmental levels of BTEX and oral clefts among offspring in Texas for the period 1999-2008.
A case-control study design was used to assess whether maternal exposure to BTEX increased the risk of oral clefts. The Texas Birth Defects Registry provided data on cases of non-syndromic oral clefts delivered in Texas during the period 1999-2008. Census-tract-level maternal exposures to BTEX were obtained from the Hazardous Air Pollutant Exposure Model (HAPEM) developed by the U.S. Environmental Protection Agency. Unconditional logistic regression was used to assess the relationship between maternal exposure to BTEX levels and the risk of oral clefts in offspring.
In the selected population, mothers with high estimated exposure to any of the BTEX compounds were not more likely to deliver an offspring with an oral cleft. Future research efforts will focus on additional birth defects and a thorough assessment of additional potential confounders.
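A minimal sketch of the exposure-linkage step that precedes the regression (tract IDs, columns, and values are hypothetical; the study keyed HAPEM exposure estimates to registry records by census tract):

import pandas as pd

# Hypothetical HAPEM-style census-tract exposure estimates (µg/m3).
exposure = pd.DataFrame({
    "tract": ["48201X01", "48201X02", "48201X03"],
    "benzene": [1.4, 0.8, 2.1],
    "toluene": [3.2, 1.9, 4.4],
})

# Hypothetical case-control records with maternal residence tract.
births = pd.DataFrame({
    "id": [1, 2, 3, 4],
    "tract": ["48201X01", "48201X03", "48201X02", "48201X03"],
    "case": [1, 0, 0, 1],   # 1 = oral cleft case, 0 = control
})

# Attach tract-level exposures to each birth record for the logistic model.
linked = births.merge(exposure, on="tract", how="left")
print(linked)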
Abstract:
Saharan dust incursions and particulates emitted from human activities degrade air quality throughout West Africa, especially in the rapidly expanding urban centers in the region. Inhalable particulate matter (PM) is strongly associated with increased incidence of, and mortality from, cardiovascular and respiratory diseases and cancer. Air samples collected in the capital of a Saharan-Sahelian country (Bamako, Mali) between September 2012 and July 2013 were found to contain inhalable PM concentrations that exceeded World Health Organization (WHO) and US Environmental Protection Agency (USEPA) PM2.5 and PM10 24-h limits on 58-98% of days and the European Union (EU) PM10 24-h limit on 98% of days. Mean concentrations were 1.2- to 4.5-fold greater than existing limits. Inhalable PM was enriched in transition metals, which are known to produce reactive oxygen species and initiate the inflammatory reaction, and in other potentially bioactive and biotoxic metals/metalloids. Eroded mineral dust composed the bulk of inhalable PM, whereas most enriched metals/metalloids were likely emitted from oil combustion, biomass burning, refuse incineration, vehicle traffic, and mining activities. Human exposure to inhalable PM and associated metals/metalloids over 24 h was estimated. The findings indicate that inhalable PM in the Sahara-Sahel region may present a threat to human health, especially in urban areas with greater inhalable PM and transition metal exposure.
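A common way to flag non-crustal (anthropogenic) metal sources of the kind described above, sketched here with made-up sample numbers, is the crustal enrichment factor relative to a reference element such as Al:

# Crustal enrichment factor: EF = (X/Al)_sample / (X/Al)_crust.
# EF >> 1 suggests a non-crustal source (e.g., combustion, traffic, incineration).

CRUST = {"Al": 8.2e4, "Fe": 5.6e4, "Pb": 13.0, "Zn": 70.0}    # ppm, typical crustal values
sample = {"Al": 4.0e4, "Fe": 3.1e4, "Pb": 210.0, "Zn": 890.0}  # ppm in PM, invented

for metal in ("Fe", "Pb", "Zn"):
    ef = (sample[metal] / sample["Al"]) / (CRUST[metal] / CRUST["Al"])
    print(f"EF({metal}) = {ef:.1f}")   # Fe ~1 (crustal); Pb, Zn >> 1 (enriched)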
Abstract:
The reuse of treated effluent has always been an option in places with a situational or structural water deficit, whether or not the practice has been regulated and planned. The need arises from the demand of a sector, commonly agricultural irrigation, which benefits from this new resource. Within the EU, Spain leads in the annual volume of reclaimed water and is among the top ten countries worldwide. The regulation of this practice through the Royal Decree 1620/2007 has helped incorporate water reuse into the hydrological plans as part of the programme of measures to mitigate pressures, such as surface and groundwater extraction, or to achieve environmental improvements by preventing discharges. The object of this study is to gain an overview of the state of water reuse in Spain, the different scenarios and approaches to this activity, and the development of the legal framework and its enforceability, together with the treatments that achieve the quality levels required by the current law, broken down by application. Additionally, a cost analysis of technologies and regeneration treatment lines for water reclamation is performed, covering both lines placed after a secondary wastewater treatment and other options such as membrane bioreactors (MBRs). To develop these objectives, the state of water reuse in Spain is studied by means of a database designed to encompass all aspects of the activity: data from the wastewater treatment plants (WWTP) and the water reclamation plants (WRP), treated and reclaimed annual volumes and qualities, facilities and applications, geographic references, technologies, regeneration treatment lines, etc.
The main data providers are the River Basin authorities, through the concessions or authorizations for water reuse, the sanitation and wastewater treatment entities of the regional governments, local governments, the Hydrological Plans of the River Basins, and field visits to the main water reuse systems. Additionally, a review of different plans and programmes on wastewater treatment and water reuse was done, aiming to put the development and consolidation of this activity in the different regions of Spain in perspective. An inventory of 322 reuse systems and 216 regeneration treatments has been gathered in the database. The most widespread regeneration treatment line was sand filtration followed by hypochlorite disinfection, although it is being replaced by a physical-chemical treatment with a lamella settling system, depth sand filtration, and disinfection by ultraviolet radiation with hypochlorite as residual disinfectant, named conventional regeneration treatment (CRT), or by treatments that may include a membrane process, named advanced regeneration treatment (ART), to adapt to legal requirements. Agricultural use is the most extended, accounting for 70% of the reclaimed water demand, estimated at 408 hm3, even though the expected total capacity of WRPs for 2015, after the implementation of the National Water Reuse Plan (NWRP), is three times higher. Regarding the development of the water reuse legal framework, there were pioneer areas where the competent authorities developed quality and use recommendations for this new resource. Agricultural use and golf course irrigation in touristic areas were the first two uses with recommendations and even legislation. The initial lack of common legislation for water reuse at a national or European level created doubts which affected the implementation of water reuse, both from a planning and a licensing point of view. Currently there is still no common international legislation on water reuse, technologies, and applications. Regarding agricultural use, the model recommendations at a global scale are those set by the World Health Organization, published in 1989, with subsequent reviews and extensions on risk prevention (WHO, 2006). These documents combine wastewater treatments with basic regeneration treatments, reinforced by good practices based on different levels of protection to avoid deleterious health effects. Other relevant legal references for this practice have been those of the Environmental Protection Agency of the US (USEPA, 2012) and those published by the State of California (Title 22, 2001). These establish indicator targets and maximum thresholds, where the regeneration treatment line is responsible for the final quality according to the different uses. During 2015, the ISO worked on a document aimed at urban use, covering both the parameters to be monitored and risk prevention measures. Meanwhile, the European Commission has been promoting the reuse of treated effluents within the Common Implementation Strategy of the Water Framework Directive, mainly through the work of the Programme of Measures Working Group. Within this context, the publication of a recommendation guide during 2016 is intended as a tool to fill the legal gaps of the different Member States on the matter.
The Royal Decree 1620/2007, where the water reuse regulation is set, more closely resembles the principles of the USEPA, whereas the EU tends to prioritize risk assessment, establishing tolerance levels and control points according to the socioeconomic conditions of the different countries without specifying indicators, maximum thresholds, or treatments. In contrast to the US guidelines, which indicate regeneration treatments, the Spanish legislation compiles its only recommendations in this respect in a non-compulsory guide. Therefore, the treatment lines used to achieve the required quality standards remain unregulated, leaving room for inappropriate practices. This is the case of disinfection, where the use of hypochlorite may produce harmful byproducts. The recommendation guide for the application of the Royal Decree (RD), published by the Ministry of Agriculture and Environment (MAGRAMA) in 2010, clarifies typical issues that arise from the application of the RD and gives basic technical parameters to consider in reuse setups, as well as good practices according to final use. Even so, the RD still presents difficulties in its application and requires a review of issues such as the sampling frequency of the current quality parameters or the omission of the nematode egg indicator, which has been shown to be absent after CRT. In this regard, there is a global tendency to employ water reuse for drinking water supply, to include indicators for the presence of viruses and protozoa, and to include technologies such as membranes or advanced oxidation processes to tackle problems like emerging pollutants. Another objective of this study is to propose regeneration treatment lines that meet the quality requirements established in the RD 1620/2007, broken down by application, and to estimate their capital and operational costs. This proposal is based on what is established in the above-mentioned guide and the NWRP. The proposed treatment typologies are divided into trains with brackish-water desalination, by reverse osmosis or electrodialysis reversal, and those without it. This separation is made because of coastal facilities, where sea water may permeate the collecting pipes, raising the salt content of the wastewater and limiting certain uses. To develop this objective, a study of the most common treatment units set up in Spanish WRPs was conducted in terms of treatment train reliability, relating the required quality to the capital and operational costs. The CRT has a capital cost of 28 to 48 €.m-3.d and an operating cost of 0.06 to 0.09 €.m-3, while, if desalination were required, these costs would increase tenfold for implementation and fivefold for operation. For uses that require ART, such as residential or certain industrial uses, the costs would be 185 to 398 €.m-3.d for implementation and 0.14 to 0.20 €.m-3 for operation. When selecting regeneration treatment lines, the relation between treatment capacity and cost is a paramount indicator. This project provides cost-capacity models for regeneration treatment trains, which may serve as a selection tool when comparing regeneration options with alternatives such as MBR facilities, seawater desalination plants, or inter-basin water transfers within a water planning framework.
In Spain, the demand for high quality water in areas with low resource availability, the increasing number of sensitive zones, such as drinking water extraction points, recreational bathing areas, and fish protected areas, and the lack of available land to set up new WWTPs have turned MBRs into a suitable option for water reuse. In this work this technology is analyzed in contrast to CRT and ART, providing cost-capacity models and identifying when and where this treatment option may outcompete other regeneration treatments. An MBR is an activated sludge treatment in which the secondary settling is substituted by a UF or MF membrane system. The quality of the effluent is therefore comparable to that of a WWTP followed by an ART. MBRs ensure a sufficient quality level for the requirements of the different uses established in the RD, even producing an effluent that can be fed directly to RO or EDR desalination processes. The implementation of this technology in Spain has grown exponentially, from 13 facilities of less than 5,000 m3.d-1 in 2006 to more than 55 facilities in operation or under construction by the end of 2014, 6 of them with capacities over 15,000 m3.d-1. The membrane filtration systems are what set the pace of operation and design of this type of facility. The most widespread system in Spain is the hollow fiber membrane configuration, especially for high flow capacities, with Zenon, which accounts for 57% of the total installed capacity, as the main contributor. The next commercial technology by number of plants is Kubota, which uses a flat sheet membrane configuration and accounts for 30% of the total installed capacity. Other commercial technologies exist within the Spanish MBR context, such as Toray, Huber, Koch, and Microdym. In this document all of these membrane filtration systems are described, providing information about their characteristics and their most relevant design and operation parameters. The study of 14 full-scale operating MBRs has enabled another objective of this work: estimating the implementation and operation costs of this type of system in contrast to other regeneration alternatives. ACA and ESAMUR, the public wastewater treatment and reuse entities of Cataluña and Murcia respectively, which have wide experience operating such systems, participated actively in this study. Typical operating problems and their possible solutions are discussed, both for operation and for future plant designs. The conclusion of this study is that MBRs are another option for water reuse, advantageous in both implementation and operational costs compared with WWTPs followed by ART for flow capacities above 10,000 m3.d-1.
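A minimal sketch of the kind of cost-capacity comparison described above (the power-law form is a common cost-engineering assumption and the coefficients are invented, not the study's fitted curves):

# Illustrative cost-capacity curves: unit capital cost typically falls with
# capacity following a power law, cost_per_m3d = a * Q**(-b).

def unit_capex(q_m3d, a, b):
    """Capital cost per m3/d of capacity for a plant of capacity q_m3d."""
    return a * q_m3d ** (-b)

for q in (2_000, 5_000, 10_000, 20_000, 50_000):
    crt = unit_capex(q, a=300.0, b=0.25)   # WWTP followed by ART, invented coefficients
    mbr = unit_capex(q, a=900.0, b=0.37)   # MBR train, invented coefficients
    better = "MBR" if mbr < crt else "WWTP+ART"
    print(f"{q:>6} m3/d: WWTP+ART {crt:6.1f}, MBR {mbr:6.1f} EUR per m3/d -> {better}")

With these illustrative coefficients the curves cross near 10,000 m3/d, mirroring the study's conclusion that MBRs become advantageous above that capacity; the actual crossover depends on the fitted coefficients.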