743 results for automated assessment tool
National Centers for Coastal Ocean Science Coastal Ecosystem Assessment Program: a manual of methods
Abstract:
Environmental managers strive to preserve natural resources for future generations but have limited decision-making tools to define ecosystem health. Many programs offer relevant broad-scale environmental policy information on regional ecosystem health. These programs provide evidence of environmental condition and change, but lack connections between local impacts and direct effects on living resources. To address this need, the National Oceanic and Atmospheric Administration/National Ocean Service (NOAA/NOS) Cooperative Oxford Laboratory (COL), in cooperation with federal, state, and academic partners, implemented an integrated biotic ecosystem assessment on a sub-watershed 14-digit Hydrologic Unit Code (HUC) scale in Chesapeake Bay. The goals of this effort were to 1) establish a suite of bioindicators that are sensitive to ecosystem change, 2) establish the effects of varying land-use patterns on water quality and the subsequent health of living resources, 3) communicate these findings to local decision-makers, and 4) evaluate the success of management decisions in these systems. To establish indicators, three sub-watersheds were chosen based on statistical analysis of land-use patterns to represent a gradient from developed to agricultural. The Magothy (developed), Corsica (agricultural), and Rhode (reference) Rivers were identified. A stratified random design was developed based on depth (2 m contour) and river mile. Sampling approaches were coordinated within this structure to allow for robust system comparisons. The sampling approach was hierarchical, with metrics chosen to represent a range from community- to cellular-level responses across multiple organisms. This approach allowed for the identification of sub-lethal stressors and assessment of their impact on the organism and subsequently the population.
Fish, crabs, clams, oysters, benthic organisms, and bacteria were targeted, as each occupies a separate ecological niche and may respond dissimilarly to environmental stressors. Particular attention was focused on the use of pathobiology as a tool for assessing environmental condition. By integrating the biotic component with water quality, sediment indices, and land-use information, this holistic evaluation of ecosystem health will provide management entities with information needed to inform local decision-making processes and establish benchmarks for future restoration efforts.
Abstract:
This report is the second in a series from a project to assess land-based sources of pollution (LBSP) and their effects in the St. Thomas East End Reserves (STEER) in St. Thomas, USVI, and is the result of a collaborative effort between NOAA’s National Centers for Coastal Ocean Science, the USVI Department of Planning and Natural Resources, the University of the Virgin Islands, and The Nature Conservancy. Passive water samplers (POCIS), developed by the US Geological Survey (USGS) as a tool to detect the presence of water-soluble contaminants in the environment, were deployed at five locations in the STEER in February 2012. In addition to the February 2012 deployment, results from an earlier POCIS deployment in May 2010 in Turpentine Gut, a perennial freshwater stream which drains to the STEER, are also reported. A total of 26 stormwater contaminants were detected at least once during the February 2012 deployment in the STEER. Detected levels were high enough to estimate ambient water concentrations for nine contaminants using USGS sampling rate values. From the May 2010 deployment in Turpentine Gut, 31 stormwater contaminants were detected, and ambient water concentrations could be estimated for 17 compounds. Ambient water concentrations were estimated for a number of contaminants, including the detergent/surfactant metabolite 4-tert-octylphenol; the phthalate ester plasticizers DEHP and DEP; bromoform; personal care products including menthol, indole, and n,n-diethyltoluamide (DEET); the animal/plant sterol cholesterol; and the plant sterol beta-sitosterol. Only DEHP appeared to have exceeded a water quality guideline for the protection of aquatic organisms.
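Ambient-concentration estimates from passive samplers of this kind typically use the standard first-order model C_w = N_s / (R_s · t). A minimal sketch in Python; the function name and the example mass, sampling rate, and deployment length are invented for illustration, not values from the report:

```python
# First-order passive-sampler estimate of ambient water concentration:
#   C_w = N_s / (R_s * t)
# where N_s is the analyte mass accumulated on the sorbent (ng),
# R_s is the published sampling rate (L/day), and t is deployment time (days).
# Function name and example values are illustrative assumptions.

def ambient_concentration_ng_per_l(mass_ng: float,
                                   sampling_rate_l_per_day: float,
                                   deployment_days: float) -> float:
    return mass_ng / (sampling_rate_l_per_day * deployment_days)

# e.g. 100 ng accumulated at R_s = 0.2 L/day over a 25-day deployment
print(ambient_concentration_ng_per_l(100.0, 0.2, 25.0))  # 20.0 ng/L
```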
Abstract:
The primary objective of this project, “the Assessment of Existing Information on Atlantic Coastal Fish Habitat”, is to inform conservation planning for the Atlantic Coastal Fish Habitat Partnership (ACFHP). ACFHP is recognized as a Partnership by the National Fish Habitat Action Plan (NFHAP), whose overall mission is to protect, restore, and enhance the nation’s fish and aquatic communities through partnerships that foster fish habitat conservation. This project is a cooperative effort of the NOAA/NOS Center for Coastal Monitoring and Assessment (CCMA) Biogeography Branch and ACFHP. The Assessment includes three components: (1) a representative bibliographic and assessment database, (2) a Geographic Information System (GIS) spatial framework, and (3) a summary document with a description of methods, analyses of habitat assessment information, and recommendations for further work. The spatial bibliography was created by linking the bibliographic table, developed in Microsoft Excel and exported to SQL Server, with the spatial framework, developed in ArcGIS and exported to Google Maps. The bibliography is a comprehensive, searchable database of over 500 selected documents and data sources on Atlantic coastal fish species and habitats. Key information captured for each entry includes basic bibliographic data, spatial footprint (e.g. waterbody or watershed), species and habitats covered, and electronic availability. Information on habitat condition indicators, threats, and conservation recommendations is extracted from each entry and recorded in a separate linked table. The spatial framework is a functional digital map based on polygon layers of watersheds and estuarine and marine waterbodies derived from NOAA’s Coastal Assessment Framework, MMS/NOAA’s Multipurpose Marine Cadastre, and other sources, providing spatial reference for all of the documents cited in the bibliography.
Together, the bibliography and assessment tables and their spatial framework provide a powerful tool to query and assess available information through a publicly available web interface. They were designed to support the development of priorities for ACFHP’s conservation efforts within a geographic area extending from Maine to Florida, and from coastal watersheds seaward to the edge of the continental shelf. The Atlantic Coastal Fish Habitat Partnership has made initial use of the Assessment of Existing Information (AEI). Though it has not yet applied the AEI in a systematic or structured manner, it expects to find further uses as the draft conservation strategic plan is refined and as regional action plans are developed. The AEI also provides a means to move beyond an “assessment of existing information” towards an “assessment of fish habitat”, and is being applied towards the National Fish Habitat Action Plan (NFHAP) 2010 Assessment. Beyond the scope of the current project, there may be application to broader initiatives such as Integrated Ecosystem Assessments (IEAs), Ecosystem-Based Management (EBM), and Marine Spatial Planning (MSP).
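The linked bibliography and assessment tables can be illustrated with a toy relational sketch using Python's stdlib sqlite3. The schema, table and column names, and sample rows below are assumptions for illustration, not the project's actual SQL Server design:

```python
import sqlite3

# Toy sketch of a bibliography table linked to an assessment table,
# queryable by spatial footprint. All names and rows are illustrative.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE bibliography (
    doc_id INTEGER PRIMARY KEY, title TEXT, waterbody TEXT, species TEXT);
CREATE TABLE assessment (
    doc_id INTEGER REFERENCES bibliography(doc_id),
    condition_indicator TEXT, threat TEXT, recommendation TEXT);
""")
con.execute("INSERT INTO bibliography VALUES (1, 'Estuarine habitat survey', "
            "'Chesapeake Bay', 'striped bass')")
con.execute("INSERT INTO assessment VALUES (1, 'SAV coverage', "
            "'nutrient loading', 'reduce runoff')")

# Query documents and their extracted assessment info for one waterbody
rows = con.execute("""
    SELECT b.title, a.threat FROM bibliography b
    JOIN assessment a ON a.doc_id = b.doc_id
    WHERE b.waterbody = 'Chesapeake Bay'
""").fetchall()
print(rows)  # [('Estuarine habitat survey', 'nutrient loading')]
```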
Abstract:
Tailored sustainability assessment represents one approach to addressing sustainability issues on large-scale urban projects with varying geographical, social and political constraints and diverse incentives among stakeholders. This paper examines the value and limitations of this approach. Three case studies of tailored systems developed by the authors for three unique masterplanning projects are discussed in terms of: contextual sustainability drivers; the nature and evolution of the systems developed; outcomes of implementation; and overall value delivered. Analysis leads to conclusions on the key features of effective tailored assessment, the value of tailored sustainability assessment from various perspectives (including client, designer, end-users and the environment), and the limitations of tailored assessment as a tool for comparative analysis between projects. Although the systems considered here are specific to individual projects and were developed commercially, the challenges and lessons learned are relevant to a range of sustainability assessment approaches developed under different conditions.
Abstract:
Consumer goods manufacturers aiming to reduce the environmental impact associated with their products commonly pursue incremental change strategies, but more radical approaches may be required if we are to address the challenges of sustainable consumption. One strategy to realize step-change reductions is to prepare a portfolio of innovations providing different levels of impact reduction in exchange for different levels of organizational resource commitment. In this research a tool is developed to support this strategy, starting with the assumption that a long list of candidate innovations has been created through brainstorming or other eco-innovation approaches. The tool assesses the potential greenhouse gas benefit of an innovative option against the difficulty of its implementation. A simple greenhouse gas benefit assessment method based on streamlined LCA was used to analyze impact reduction potential, and a novel measure of implementation difficulty was developed. Predictions of implementation difficulty were compared against expert opinion and showed similar results, indicating that the measure can sensibly be used to predict implementation difficulty. The assessment of environmental gain versus implementation difficulty is visualized in a matrix showing the trade-offs among several options. The tool is deliberately simple, with scalar measures of CO2 emissions benefits and implementation difficulty, so tool users must remain aware of other potential environmental burdens besides greenhouse gases (e.g. water, waste). In addition, although the relative life cycle emissions benefit of an option may be low, its absolute impact can be high and there may be other co-benefits, which could justify higher levels of implementation difficulty. Different types of consumer products (e.g. household, personal care, foods) have been evaluated using the tool.
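The benefit-versus-difficulty matrix can be sketched as a simple quadrant classification. The scores, cutoffs, quadrant labels, and example options below are illustrative assumptions, not the tool's calibrated scales:

```python
# Illustrative sketch of a benefit-vs-difficulty matrix. All scores,
# cutoffs, labels, and option names are invented for illustration.

def quadrant(ghg_benefit: float, difficulty: float,
             benefit_cutoff: float = 50.0, difficulty_cutoff: float = 50.0) -> str:
    """Place an innovation option in one of four matrix quadrants."""
    if ghg_benefit >= benefit_cutoff:
        return "priority" if difficulty < difficulty_cutoff else "strategic bet"
    return "quick win" if difficulty < difficulty_cutoff else "deprioritise"

options = {
    "concentrated formulation": (70, 30),  # (benefit score, difficulty score)
    "cold-wash detergent": (80, 75),
    "lighter packaging": (20, 15),
    "new supply chain": (10, 90),
}
matrix = {name: quadrant(b, d) for name, (b, d) in options.items()}
print(matrix)
```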
Initial trials of the tool within Unilever demonstrate that the tool facilitates rapid evaluation of low-carbon innovations. © 2011 Elsevier Ltd. All rights reserved.
Abstract:
Industrialists have few example processes they can benchmark against in order to choose a multi-agent development kit. In this paper we present a review of commercial and academic agent tools with the aim of selecting one for developing an intelligent, self-serving asset architecture. In doing so, we map and enhance relevant assessment criteria found in the literature. After a preliminary review of 20 multi-agent platforms, we examine in further detail JADE, JACK and Cougaar. Our findings indicate that Cougaar is well suited to our requirements, showing excellent support for criteria such as scalability, persistence, mobility and lightweightness. © 2010 IEEE.
Abstract:
In recent years, many industrial firms have been able to use roadmapping as an effective process methodology for projecting future technology and for coordinating technology planning and strategy. Firms potentially realize a number of benefits in deploying technology roadmapping (TRM) processes. Roadmaps provide information identifying which new technologies will meet firms' future product demands, allowing companies to leverage R&D investments by choosing appropriately from a range of alternative technologies. Moreover, the roadmapping process serves as an important communication tool, helping to bring about consensus among roadmap developers, as well as between participants brought in during the development process, who may communicate their understanding of shared corporate goals through the roadmap. However, few conceptual accounts or case studies have made the argument that roadmapping processes may be used effectively as communication tools. This paper therefore seeks to elaborate a theoretical foundation for identifying the factors that must be considered in setting up a roadmap and for analyzing the effect of these factors on technology roadmap credibility as perceived by its users. Based on survey results from 120 different R&D units, this empirical study found that firms need to explore further how they can enable frequent interactions between the TRM development team and TRM participants. A high level of interaction will improve the credibility of a TRM, with communication channels selected by the organization also positively affecting TRM credibility. © 2011 Elsevier Inc.
Abstract:
RFID is a technology that enables the automated capture of observations of uniquely identified physical objects as they move through supply chains. Discovery Services provide links to repositories that hold traceability information about specific physical objects. Each supply chain party publishes records to a Discovery Service to create such links and also specifies access control policies to restrict who has visibility of link information, since it is commercially sensitive and could reveal inventory levels, flow patterns, trading relationships, etc. The requirement to share information on a need-to-know basis, e.g. within the specific chain of custody of an individual object, poses a particular challenge for authorization and access control: in many supply chain situations the information owner does not know in advance all the companies that should be authorized to view the information, because the path taken by an individual physical object emerges over time rather than being fully pre-determined at the time of manufacture. This led us to consider novel approaches to delegating trust and controlling access to information. This paper presents an assessment of visibility restriction mechanisms for Discovery Services capable of handling emergent object paths. We compare three approaches: enumerated access control (EAC), chain-of-communication tokens (CCT), and chain-of-trust assertions (CTA). A cost model was developed to estimate the additional cost of restricting visibility in a baseline traceability system, and the estimates were used to compare the approaches and discuss the trade-offs. © 2012 IEEE.
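The chain-of-communication-token idea can be sketched as a token the information owner issues for an object and each custodian extends when handing the object on, so that a custody path which only emerges over time can still be verified afterwards. Real schemes of this kind use cryptographic signatures; the hash chain, party names, and EPC below are only an illustration:

```python
import hashlib

# Illustrative token chain: the owner issues a token for an object; each
# custodian extends it on hand-off; the owner verifies a presented token by
# replaying the claimed chain of custody. Names and keys are invented.

def issue_token(owner_secret: str, object_id: str) -> str:
    return hashlib.sha256(f"{owner_secret}:{object_id}".encode()).hexdigest()

def extend_token(token: str, next_party: str) -> str:
    return hashlib.sha256(f"{token}:{next_party}".encode()).hexdigest()

def verify(owner_secret: str, object_id: str, path: list, presented: str) -> bool:
    """Replay the claimed custody chain and compare against the token."""
    token = issue_token(owner_secret, object_id)
    for party in path:
        token = extend_token(token, party)
    return token == presented

epc = "urn:epc:id:sgtin:0614141.107346.2017"  # example identifier
t = issue_token("owner-key", epc)
t = extend_token(t, "distributor")
t = extend_token(t, "retailer")
print(verify("owner-key", epc, ["distributor", "retailer"], t))  # True
```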
Abstract:
The safety of post-earthquake structures is evaluated manually through inspecting the visible damage inflicted on structural elements. This process is time-consuming and costly. In order to automate this type of assessment, several crack detection methods have been created. However, they focus on locating crack points. The next step, retrieving useful properties (e.g. crack width, length, and orientation) from the crack points, has not yet been adequately investigated. This paper presents a novel method of retrieving crack properties. In the method, crack points are first located through state-of-the-art crack detection techniques. Then, the skeleton configurations of the points are identified using image thinning. The configurations are integrated into the distance field of crack points calculated through a distance transform. This way, crack width, length, and orientation can be automatically retrieved. The method was implemented using Microsoft Visual Studio and its effectiveness was tested on real crack images collected from Haiti.
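The width-from-distance-field step described above can be sketched on a toy binary mask: a multi-source BFS assigns each crack pixel its 4-connected distance to the background, and the crack width at a centreline (skeleton) pixel is roughly twice that distance minus one. The grid below stands in for the output of a real crack detector:

```python
from collections import deque

# Multi-source BFS distance transform over a binary crack mask.
# The toy mask is an illustrative stand-in for real crack-detection output.

def distance_field(mask):
    rows, cols = len(mask), len(mask[0])
    dist = [[0 if not mask[r][c] else None for c in range(cols)] for r in range(rows)]
    q = deque((r, c) for r in range(rows) for c in range(cols) if not mask[r][c])
    while q:  # expand outward from all background pixels at once
        r, c = q.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and dist[nr][nc] is None:
                dist[nr][nc] = dist[r][c] + 1
                q.append((nr, nc))
    return dist

# A horizontal crack 3 pixels wide in a 5x7 image (rows 1-3 are crack)
mask = [[1 <= r <= 3 for c in range(7)] for r in range(5)]
d = distance_field(mask)
width_at_centreline = 2 * d[2][3] - 1  # centreline pixel has distance 2
print(width_at_centreline)  # 3
```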
Abstract:
There are over 600,000 bridges in the US, and not all of them can be inspected and maintained within the specified time frame, because manually inspecting bridges is a time-consuming and costly task and some state Departments of Transportation (DOTs) cannot afford the essential costs and manpower. In this paper, a novel method that can detect large-scale bridge concrete columns is proposed, with the eventual goal of creating an automated bridge condition assessment system. The method employs image stitching techniques (feature detection and matching, image affine transformation, and blending) to combine images containing different segments of one column into a single image. Bridge columns are then detected by locating their boundaries and classifying the material within each boundary in the stitched image. Preliminary test results on 114 concrete bridge columns, stitched from 373 close-up partial images of the columns, indicate that the method can correctly detect 89.7% of these elements, demonstrating the viability of this approach.
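One step of the stitching pipeline can be illustrated with coordinate mapping: once feature matching has estimated an affine transform between two partial images of a column, mapping the second image's corners into the first image's frame gives the extent of the stitched canvas. The transform values and image size below are invented for illustration:

```python
# Apply a 2x3 affine transform to a pixel coordinate. The transform here is
# an assumed feature-matching result (pure 400 px downward translation, as if
# the second image shows the column segment below the first).

def apply_affine(m, pt):
    (a, b, tx), (c, d, ty) = m
    x, y = pt
    return (a * x + b * y + tx, c * x + d * y + ty)

M = ((1.0, 0.0, 0.0),
     (0.0, 1.0, 400.0))

h, w = 480, 640
corners = [(0, 0), (w, 0), (0, h), (w, h)]
mapped = [apply_affine(M, p) for p in corners]

# Stitched canvas height spans both images' (mapped) corners
ys = [y for _, y in mapped + corners]
canvas_h = max(ys) - min(ys)
print(canvas_h)  # 880.0
```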
Abstract:
Manually inspecting concrete surface defects (e.g., cracks and air pockets) is labor-intensive and not always reliable. In order to overcome these limitations, automated inspection using image processing techniques has been proposed. However, current work can only detect defects in an image, without the ability to evaluate them. This paper presents a novel approach for automatically assessing the impact of two common surface defects (i.e., air pockets and discoloration). These two defects are first located using the developed detection methods. Their attributes, such as the number of air pockets and the area of discoloration regions, are then retrieved to calculate the defects' visual impact ratios (VIRs). Appropriate threshold values for these VIRs were selected through a manual rating survey. This way, for a given concrete surface image, its quality in terms of air pockets and discoloration can be automatically measured by judging whether the VIRs fall below the threshold values. The method was implemented in C++ and validated on a database of concrete surface images.
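The VIR check can be sketched as a ratio of defect pixels to surface pixels compared against a threshold. The ratio definitions and threshold values below are illustrative assumptions, not the thresholds calibrated by the paper's rating survey:

```python
# Illustrative visual-impact-ratio (VIR) check. Thresholds and example
# pixel counts are assumptions for illustration only.

def visual_impact_ratio(defect_pixels: int, surface_pixels: int) -> float:
    return defect_pixels / surface_pixels

def surface_acceptable(vir_air_pockets: float, vir_discoloration: float,
                       air_threshold: float = 0.01,
                       discoloration_threshold: float = 0.05) -> bool:
    """Surface passes only if both VIRs are at or below their thresholds."""
    return (vir_air_pockets <= air_threshold
            and vir_discoloration <= discoloration_threshold)

air_vir = visual_impact_ratio(1_500, 1_000_000)    # 0.0015
disc_vir = visual_impact_ratio(80_000, 1_000_000)  # 0.08
print(surface_acceptable(air_vir, disc_vir))  # False (discoloration too large)
```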
Abstract:
After earthquakes, licensed inspectors use established codes to assess the impact of damage on structural elements, a process that typically takes days to weeks. However, emergency responders (e.g. firefighters) must act within hours of a disaster event to enter damaged structures to save lives, and therefore cannot wait until an official assessment is complete. This is a risk that firefighters have to take. Although search and rescue organizations offer training seminars to familiarize firefighters with structural damage assessment, their effectiveness is hard to guarantee when firefighters must perform life-rescue and damage-assessment operations simultaneously; moreover, the training is not available to every firefighter. The authors therefore propose a novel framework that can provide firefighters with a quick but crude assessment of damaged buildings by evaluating the visible damage on their critical structural elements (concrete columns in this study). This paper presents the first step of the framework: automating the detection of concrete columns from visual data. To achieve this, the typical shape of columns (long vertical lines) is recognized using edge detection and the Hough transform. The bounding rectangle for each pair of long vertical lines is then formed. When the resulting rectangle resembles a column and the material contained in the region between the two long vertical lines is recognized as concrete, the region is marked as a concrete column surface. Real video/image data were used to test the method. Preliminary results indicate that concrete columns can be detected when they are not distant and have at least one surface visible.
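The long-vertical-line cue can be illustrated on a toy binary edge map. A real system would use edge detection plus the Hough transform; simply counting edge pixels per column captures the idea of keeping near-full-height lines and then testing whether a pair of them bounds a tall, narrow (column-like) region. All parameters below are illustrative assumptions:

```python
# Toy stand-in for Hough-based vertical line detection: keep columns of a
# binary edge map where a near-full-height run of edge pixels is present,
# then test whether a pair of lines bounds a tall, narrow region.

def long_vertical_lines(edges, min_frac=0.8):
    rows, cols = len(edges), len(edges[0])
    return [c for c in range(cols)
            if sum(1 for r in range(rows) if edges[r][c]) >= min_frac * rows]

def looks_like_column(x_left, x_right, height, min_aspect=2.0):
    """Column-like if tall and narrow; a real system would additionally
    verify concrete material inside the bounding rectangle."""
    width = x_right - x_left
    return width > 0 and height / width >= min_aspect

rows, cols = 40, 20
edges = [[c in (4, 12) for c in range(cols)] for r in range(rows)]  # two edges
lines = long_vertical_lines(edges)
print(lines)                                         # [4, 12]
print(looks_like_column(lines[0], lines[1], rows))   # True (aspect 40/8 = 5.0)
```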
Abstract:
In an effort to develop cultured cell models for toxicity screening and environmental biomonitoring, we compared primary cultured gill epithelia and hepatocytes from freshwater tilapia (Oreochromis niloticus) to assess their sensitivity to AhR agonist toxicants. Epithelia were cultured on permeable supports (terephthalate membranes, "filters") and bathed on the apical side with waterborne toxicants (pseudo in vivo asymmetrical culture conditions). Hepatocytes were cultured in multi-well plates and exposed to toxicants in culture medium. Cytochrome P4501A (measured as 7-ethoxyresorufin-O-deethylase activity, EROD) was selected as a biomarker. For cultured gill epithelia, the integrity of the epithelia remained unchanged on exposure to model toxicants such as 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD), benzo[a]pyrene (B[a]P), a polychlorinated biphenyl (PCB) mixture (Aroclor 1254), and a polybrominated diphenyl ether (PBDE) mixture (DE71). A clear concentration-dependent response of EROD activity was observed in both cultured gill epithelia and hepatocytes. The time-course response of EROD was measured as early as 3 h, and was maximal after 6 h of exposure to TCDD, B[a]P, and Aroclor 1254. The estimated 6 h EC50 values for TCDD, B[a]P, and Aroclor 1254 were 1.2×10⁻⁹, 5.7×10⁻⁸, and 6.6×10⁻⁶ M, respectively. For the cultured hepatocytes, the time-course study showed that significant induction of EROD took place at 18 h, with maximal induction observed at 24 h after exposure. The estimated 24 h EC50 values for TCDD, B[a]P, and Aroclor 1254 were 1.4×10⁻⁹, 8.1×10⁻⁸, and 7.3×10⁻⁶ M, respectively. DE71 exposure produced neither induction nor inhibition of EROD in either gill epithelia or hepatocytes. The results show that cultured gill epithelia induce EROD more rapidly and are slightly more sensitive than cultured hepatocytes, and could be used as a rapid and sensitive tool for screening chemicals and monitoring environmental AhR agonist toxicants. (c) 2006 Elsevier B.V. All rights reserved.
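An EC50 is the concentration giving half-maximal response, typically read off a Hill-type concentration-response curve. A sketch using the reported 6 h gill-epithelia EC50 for TCDD; the Hill coefficient and maximal response are assumptions for illustration, not values fitted in the study:

```python
# Hill-type concentration-response curve of the sort underlying an EC50
# estimate. Emax and the Hill coefficient n are illustrative assumptions.

def erod_response(conc_m: float, ec50_m: float,
                  emax: float = 100.0, n: float = 1.0) -> float:
    """Percent-of-maximum EROD induction at a given toxicant concentration."""
    return emax * conc_m ** n / (ec50_m ** n + conc_m ** n)

ec50_tcdd = 1.2e-9  # reported 6 h EC50 for TCDD in cultured gill epithelia (M)
print(erod_response(ec50_tcdd, ec50_tcdd))  # 50.0: half-maximal at the EC50
print(round(erod_response(1.2e-8, ec50_tcdd), 1))  # ~90.9 at 10x the EC50
```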
Abstract:
Q. Meng and M. H. Lee, Automated cross-modal mapping in robotic eye/hand systems using plastic radial basis function networks, Connection Science, 19(1), pp. 25-52, 2007.
Abstract:
BACKGROUND: Biomonitoring studies can provide information about individual and population-wide exposure. However, they must be designed in a way that protects the rights and welfare of participants. This descriptive qualitative study was conducted as a follow-up to a breastmilk biomonitoring study. The primary objectives were to assess participants' experiences in the study, including the report-back of individual body burden results, and to determine if participation in the study negatively affected breastfeeding rates or duration. METHODS: Participants of the Greater Boston PBDE Breastmilk Biomonitoring Study were contacted and asked about their experiences in the study: the impact of study recruitment materials on attitudes towards breastfeeding; whether participants had wanted individual biomonitoring results; whether the protocol by which individual results were distributed met participants' needs; and the impact of individual results on attitudes towards breastfeeding. RESULTS: No participants reported reducing the duration of breastfeeding because of the biomonitoring study, but some responses suggested that breastmilk biomonitoring studies have the potential to raise anxieties about breastfeeding. Almost all participants wished to obtain individual results. Although several reported some concern about individual body burden, none reported reducing the duration of breastfeeding because of biomonitoring results. The study literature and report-back method were found to mitigate potential negative impacts. CONCLUSION: Biomonitoring study design, including clear communication about the benefits of breastfeeding and the manner in which individual results are distributed, can prevent negative impacts of biomonitoring on breastfeeding. Adoption of more specific standards for biomonitoring studies and continued study of risk communication issues related to biomonitoring will help protect participants from harm.