967 results for Habitat quality assessment
Abstract:
X-ray computed tomography (CT) imaging constitutes one of the most widely used diagnostic tools in radiology today, with nearly 85 million CT examinations performed in the U.S. in 2011. CT imparts a relatively high radiation dose to the patient compared to other x-ray imaging modalities; as a result, coupled with its popularity, CT is currently the single largest source of medical radiation exposure to the U.S. population. For this reason, there is a critical need to optimize CT examinations such that dose is minimized while image quality is not degraded. This optimization can be difficult to achieve because of the coupling between dose and image quality: all else being equal, reducing the dose degrades image quality and can impact the diagnostic value of the CT examination.
A recent push from the medical and scientific community towards lower doses has spawned new dose reduction technologies such as automatic exposure control (i.e., tube current modulation) and iterative reconstruction algorithms. In theory, these technologies allow scanning at reduced dose while maintaining acceptable image quality. There is therefore a scientific need to establish the dose reduction potential of these new technologies in an objective and rigorous manner. Doing so requires precise and clinically relevant metrics of CT image quality, as well as practical and efficient methodologies to measure such metrics on real CT systems. The currently established methodologies for assessing CT image quality are not appropriate for modern CT scanners that implement the aforementioned dose reduction technologies.
Thus the purpose of this doctoral project was to develop, assess, and implement new phantoms, image quality metrics, analysis techniques, and modeling tools appropriate for image quality assessment of modern clinical CT systems. The project developed image quality assessment methods in the context of three distinct paradigms: (a) uniform phantoms, (b) textured phantoms, and (c) clinical images.
The work in this dissertation used the “task-based” definition of image quality. That is, image quality was broadly defined as the effectiveness by which an image can be used for its intended task. Under this definition, any assessment of image quality requires three components: (1) a well-defined imaging task (e.g., detection of subtle lesions), (2) an “observer” to perform the task (e.g., a radiologist or a detection algorithm), and (3) a way to measure the observer’s performance in completing the task at hand (e.g., detection sensitivity/specificity).
First, this task-based image quality paradigm was implemented using a novel multi-sized phantom platform (with uniform background) developed specifically to assess modern CT systems (Mercury Phantom, v3.0, Duke University). A comprehensive evaluation was performed on a state-of-the-art CT system (SOMATOM Definition Force, Siemens Healthcare) in terms of noise, resolution, and detectability as a function of patient size, dose, tube energy (i.e., kVp), automatic exposure control, and reconstruction algorithm (i.e., Filtered Back-Projection (FBP) vs. Advanced Modeled Iterative Reconstruction (ADMIRE)). A mathematical observer model (i.e., a computer detection algorithm) was implemented and used as the basis of image quality comparisons. The CT system exhibited nonlinear noise and resolution properties, especially at very low doses, large phantom sizes, and for low-contrast objects. Objective image quality metrics generally increased with increasing dose and ADMIRE strength, and with decreasing phantom size. The ADMIRE algorithm could offer comparable image quality at reduced dose or improved image quality at the same dose (an increase in detectability index of up to 163%, depending on iterative strength). The use of automatic exposure control resulted in more consistent image quality across phantom sizes.
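The detectability-index comparisons described above follow the standard task-based cascade: a task function W(f) (the Fourier transform of the lesion to be detected), the system's task transfer function (TTF), and the noise power spectrum (NPS) combine into a non-prewhitening d'. The sketch below is a minimal 1D-radial illustration of that general formula using a Gaussian "designer" lesion; it is not the dissertation's exact implementation, and all parameter values are assumed.

```python
import numpy as np

def npw_dprime(contrast_hu, sigma_mm, ttf, nps, freqs):
    """Non-prewhitening detectability index, 1D radial approximation:

        d'^2 = [ int W^2 TTF^2 2*pi*f df ]^2 / int W^2 TTF^4 NPS 2*pi*f df

    with a Gaussian lesion task function
        W(f) = C * 2*pi*sigma^2 * exp(-2*pi^2*sigma^2*f^2).
    Uniform frequency sampling is assumed (simple Riemann integration).
    """
    df = freqs[1] - freqs[0]
    w = contrast_hu * 2 * np.pi * sigma_mm**2 * np.exp(
        -2 * np.pi**2 * sigma_mm**2 * freqs**2)
    num = (np.sum(w**2 * ttf**2 * 2 * np.pi * freqs) * df) ** 2
    den = np.sum(w**2 * ttf**4 * nps * 2 * np.pi * freqs) * df
    return np.sqrt(num / den)
```

Since the NPS enters the denominator linearly, halving it (e.g., doubling dose under quantum-limited noise) raises d' by a factor of sqrt(2), which is exactly the kind of dose/quality trade-off the phantom measurements quantify.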
Based on those results, the dose reduction potential of ADMIRE was further assessed specifically for the task of detecting small (<=6 mm) low-contrast (<=20 HU) lesions. A new low-contrast detectability phantom (with uniform background) was designed and fabricated using a multi-material 3D printer. The phantom was imaged at multiple dose levels and images were reconstructed with FBP and ADMIRE. Human perception experiments were performed to measure the detection accuracy from FBP and ADMIRE images. It was found that ADMIRE had equivalent performance to FBP at 56% less dose.
Using the same image data as the previous study, a number of different mathematical observer models were implemented to assess which models yield image quality metrics that best correlate with human detection performance. The models ranged from simple metrics of image quality, such as the contrast-to-noise ratio (CNR), to more sophisticated observer models such as the non-prewhitening matched filter observer model family and the channelized Hotelling observer model family. Both the non-prewhitening matched filter observers and the channelized Hotelling observers correlated strongly with human performance. Conversely, CNR did not correlate strongly with human performance, especially when comparing different reconstruction algorithms.
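As a concrete illustration of the channelized-Hotelling idea, the sketch below builds a small bank of radial Gaussian channels (an assumed, deliberately simplistic channel set; the model families in the study use more principled channels), applies them to simulated signal-present and signal-absent ROIs, and computes a Hotelling d'. Everything here (ROI size, channel widths, signal amplitude) is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 32                                      # ROI size in pixels
y, x = np.mgrid[:n, :n] - n // 2
r = np.hypot(x, y)

# Channel matrix: radially symmetric Gaussian channels of assumed widths
widths = [1.0, 2.0, 4.0, 8.0]
U = np.stack([np.exp(-r**2 / (2 * s**2)).ravel() for s in widths], axis=1)

# Simulated ensemble: Gaussian lesion in white noise (illustrative only)
signal = 1.5 * np.exp(-r**2 / (2 * 2.0**2)).ravel()
absent = rng.normal(0.0, 1.0, (500, n * n))
present = rng.normal(0.0, 1.0, (500, n * n)) + signal

v_a, v_p = absent @ U, present @ U          # channel outputs (500 x 4)
S = 0.5 * (np.cov(v_a.T) + np.cov(v_p.T))   # pooled channel covariance
w = np.linalg.solve(S, v_p.mean(0) - v_a.mean(0))  # Hotelling template
t_a, t_p = v_a @ w, v_p @ w                 # scalar test statistics
cho_dprime = (t_p.mean() - t_a.mean()) / np.sqrt(0.5 * (t_a.var() + t_p.var()))
```

Reducing a 32x32 ROI to 4 channel outputs is what makes the covariance estimation tractable from a few hundred images; the same statistic computed on raw pixels would need a far larger ensemble.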
The uniform background phantoms used in the previous studies provided a good first-order approximation of image quality. However, due to their simplicity and the complexity of iterative reconstruction algorithms, such phantoms may not be fully adequate to assess the clinical impact of iterative algorithms, because patient images do not have smooth uniform backgrounds. To test this hypothesis, two textured phantoms (classified as gross texture and fine texture) and a uniform phantom of similar size were built and imaged on a SOMATOM Flash scanner (Siemens Healthcare). Images were reconstructed using FBP and Sinogram Affirmed Iterative Reconstruction (SAFIRE). Using an image subtraction technique, quantum noise was measured in all images of each phantom. With FBP, the noise was independent of the background (textured vs. uniform). For SAFIRE, however, noise increased by up to 44% in the textured phantoms compared to the uniform phantom. As a result, the noise reduction from SAFIRE was up to 66% in the uniform phantom but as low as 29% in the textured phantoms. Based on this result, it is clear that further investigation was needed to understand the impact that background texture has on image quality when iterative reconstruction algorithms are used.
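The image-subtraction technique mentioned above exploits the fact that two repeated scans share the same deterministic background (uniform or textured) but carry independent noise, so their difference isolates the quantum noise. A minimal sketch, with the ROI/masking details assumed:

```python
import numpy as np

def quantum_noise_std(scan1, scan2, roi_mask):
    """Estimate single-image quantum noise from two repeated scans.
    The identical background cancels in the difference image, whose
    standard deviation is sqrt(2) times the per-image noise."""
    diff = scan1.astype(float) - scan2.astype(float)
    return diff[roi_mask].std(ddof=1) / np.sqrt(2.0)
```

This is what makes the textured-phantom noise measurement possible at all: a plain standard deviation over a textured ROI would conflate texture with noise.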
To further investigate this phenomenon with more realistic textures, two anthropomorphic textured phantoms were designed to mimic lung vasculature and fatty soft-tissue texture. The phantoms (along with a corresponding uniform phantom) were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Scans were repeated a total of 50 times in order to obtain ensemble statistics of the noise. A novel method of estimating the noise power spectrum (NPS) from irregularly shaped ROIs was developed. It was found that SAFIRE images had highly locally non-stationary noise patterns, with pixels near edges having higher noise than pixels in more uniform regions. Compared to FBP, SAFIRE images had on average 60% less noise in uniform regions; for edge pixels, noise ranged from 20% higher to 40% lower. The noise texture (i.e., the NPS) was also highly dependent on the background texture for SAFIRE. It was therefore concluded that quantum noise properties in uniform phantoms are not representative of those in patients for iterative reconstruction algorithms, and that texture should be considered when assessing the image quality of iterative algorithms.
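One common way to estimate an NPS from an irregularly shaped ROI is to zero-fill a mean-subtracted masked region and normalise by the number of pixels inside the mask rather than by the full FFT grid. The sketch below follows that convention as an illustration of the general approach; it is not necessarily the exact estimator developed in the dissertation.

```python
import numpy as np

def nps_irregular_roi(noise_imgs, mask, pix_mm=1.0):
    """Ensemble NPS estimate from an irregularly shaped ROI.
    Each noise image is mean-subtracted inside the binary mask and
    zero-filled outside; normalisation uses the pixel count inside
    the mask (a common convention for irregular ROIs)."""
    mask = mask.astype(bool)
    n_roi = mask.sum()
    acc = np.zeros(mask.shape)
    for img in noise_imgs:
        z = np.where(mask, img - img[mask].mean(), 0.0)
        acc += np.abs(np.fft.fftshift(np.fft.fft2(z))) ** 2
    return acc * pix_mm**2 / (len(noise_imgs) * n_roi)
```

For white noise of variance sigma^2 and a full mask, the estimate is flat at sigma^2 times the pixel area, which is a convenient sanity check on the normalisation.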
To move beyond assessing noise properties in textured phantoms toward assessing detectability, a series of new phantoms was designed specifically to measure low-contrast detectability in the presence of background texture. The textures used were optimized, using a genetic algorithm, to match the texture in the liver regions of actual patient CT images. The so-called “Clustered Lumpy Background” texture synthesis framework was used to generate the modeled texture. Three textured phantoms and a corresponding uniform phantom were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Images were reconstructed with FBP and SAFIRE and analyzed using a multi-slice channelized Hotelling observer to measure detectability and the dose reduction potential of SAFIRE based on the uniform and textured phantoms. It was found that at the same dose, the improvement in detectability from SAFIRE (compared to FBP) was higher when measured in a uniform phantom than in textured phantoms.
The final trajectory of this project aimed at developing methods to mathematically model lesions, as a means to help assess image quality directly from patient images. The mathematical modeling framework is first presented. The models describe a lesion’s morphology in terms of size, shape, contrast, and edge profile as an analytical equation. The models can be voxelized and inserted into patient images to create so-called “hybrid” images. These hybrid images can then be used to assess detectability or estimability with the advantage that the ground truth of the lesion morphology and location is known exactly. Based on this framework, a series of liver lesions, lung nodules, and kidney stones were modeled based on images of real lesions. The lesion models were virtually inserted into patient images to create a database of hybrid images to go along with the original database of real lesion images. ROI images from each database were assessed by radiologists in a blinded fashion to determine the realism of the hybrid images. It was found that the radiologists could not readily distinguish between real and virtual lesion images (area under the ROC curve was 0.55). This study provided evidence that the proposed mathematical lesion modeling framework could produce reasonably realistic lesion images.
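A toy version of the modeling idea described above: express the lesion's morphology analytically (here a disk with a smooth tanh edge, with size, contrast, and edge sharpness as parameters, all chosen purely for illustration), voxelize it on the image grid, and add it to a patient image to form a hybrid image whose ground truth is known exactly.

```python
import numpy as np

def lesion_model(shape, center, radius, contrast_hu, edge_sigma):
    """Simple analytical lesion: a disk of given contrast (in HU) with a
    smooth tanh edge profile; an illustrative stand-in for the richer
    morphology models (size, shape, contrast, edge profile)."""
    y, x = np.indices(shape)
    r = np.hypot(x - center[1], y - center[0])
    # Full contrast inside the radius, smooth falloff across the edge
    return contrast_hu * 0.5 * (1.0 - np.tanh((r - radius) / edge_sigma))

def insert_lesion(patient_img, lesion):
    """Create a 'hybrid' image by adding the voxelised lesion model."""
    return patient_img + lesion
```

Because the inserted lesion's location and morphology are known exactly, detection and estimation performance can be scored against ground truth directly in patient images.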
Based on that result, two studies were conducted which demonstrated the utility of the lesion models. The first study used the modeling framework as a measurement tool to determine how dose and reconstruction algorithm affected the quantitative analysis of liver lesions, lung nodules, and renal stones in terms of their size, shape, attenuation, edge profile, and texture features. The same database of real lesion images used in the previous study was used for this study. That database contained images of the same patient at 2 dose levels (50% and 100%) along with 3 reconstruction algorithms from a GE 750HD CT system (GE Healthcare). The algorithms in question were FBP, Adaptive Statistical Iterative Reconstruction (ASiR), and Model-Based Iterative Reconstruction (MBIR). A total of 23 quantitative features were extracted from the lesions under each condition. It was found that both dose and reconstruction algorithm had a statistically significant effect on the feature measurements. In particular, radiation dose affected five, three, and four of the 23 features (related to lesion size, conspicuity, and pixel-value distribution) for liver lesions, lung nodules, and renal stones, respectively. MBIR significantly affected 9, 11, and 15 of the 23 features (including size, attenuation, and texture features) for liver lesions, lung nodules, and renal stones, respectively. Lesion texture was not significantly affected by radiation dose.
The second study demonstrating the utility of the lesion modeling framework focused on assessing the detectability of very low-contrast liver lesions in abdominal imaging. Specifically, detectability was assessed as a function of dose and reconstruction algorithm. As part of a parallel clinical trial, images from 21 patients were collected at 6 dose levels per patient on a SOMATOM Flash scanner. Subtle liver lesion models (contrast = -15 HU) were inserted into the raw projection data from the patient scans. The projections were then reconstructed with FBP and SAFIRE (strength 5), and lesion-less images were also reconstructed. Noise, contrast, CNR, and the detectability index of an observer model (non-prewhitening matched filter) were assessed. It was found that, compared to FBP, SAFIRE reduced noise by 52%, reduced contrast by 12%, increased CNR by 87%, and increased the detectability index by 65%. Further, a 2AFC human perception experiment was performed to assess the dose reduction potential of SAFIRE, which was found to be 22% relative to the standard-of-care dose.
In conclusion, this dissertation provides to the scientific community a series of new methodologies, phantoms, analysis techniques, and modeling tools that can be used to rigorously assess image quality from modern CT systems. Specifically, methods to properly evaluate iterative reconstruction have been developed and are expected to aid in the safe clinical implementation of dose reduction technologies.
Abstract:
The availability of BRAF inhibitors has given metastatic melanoma patients an effective new treatment choice, and molecular testing to determine the presence or absence of a BRAF codon 600 mutation is pivotal in the clinical management of these patients. This molecular test must be performed accurately and appropriately to ensure that the patient receives the most suitable treatment in a timely manner. Laboratories have introduced such testing; however, some experience low sample throughput, making it critical that an external quality assurance programme is available to help promote a high standard of testing and reporting and to provide an educational component for BRAF molecular testing. Laboratories took part in three rounds of external quality assessment (EQA) during a 12-month period, giving participants a measure of the accuracy of genotyping, clinical interpretation of the result, and experience in testing a range of different samples. Formalin-fixed paraffin-embedded tissue sections from malignant melanoma patients were distributed to participants for BRAF molecular testing. The standard of testing was generally high, but distribution of a mutation other than the most common, p.(Val600Glu), highlighted concerns with detection or reporting of rarer mutations. The main issues raised in the interpretation of the results were the importance of clear, unambiguous interpretation tailored to the patient, and the understanding that the treatment differs from that given under other stratified medicine programmes. The variability in reporting and the wide range of methodologies used indicate a continuing need for EQA in this field.
Abstract:
Nanotechnology has revolutionised humanity's capability to build microscopic systems by manipulating materials on a molecular and atomic scale. Nanosystems are becoming increasingly smaller and chemically more complex, which increases the demand for microscopic characterisation techniques. Among others, transmission electron microscopy (TEM) is an indispensable tool that is increasingly used to study the structures of nanosystems down to the molecular and atomic scale. However, despite the effectiveness of this tool, it can only provide 2D projection (shadow) images of the 3D structure, leaving the third dimension hidden, which can lead to incomplete or erroneous characterisation. One very promising inspection method is electron tomography (ET), which is rapidly becoming an important tool to explore the 3D nano-world. ET provides (sub-)nanometre resolution in all three dimensions of the sample under investigation. However, the fidelity of the tomogram achieved by current ET reconstruction procedures remains a major challenge. This thesis addresses the assessment and advancement of electron tomographic methods to enable high-fidelity three-dimensional investigations. A quality assessment investigation was conducted to provide a quantitative analysis of the main established ET reconstruction algorithms and to study the influence of the experimental conditions on the quality of the reconstructed tomogram. Regularly shaped nanoparticles were used as a ground truth for this study. It is concluded that the fidelity of post-reconstruction quantitative analysis and segmentation is limited mainly by the fidelity of the reconstructed tomogram. This motivates the development of an improved tomographic reconstruction process. In this thesis, a novel ET method is proposed, named dictionary learning electron tomography (DLET).
DLET is based on the recent mathematical theory of compressed sensing (CS), which exploits the sparsity of ET tomograms to enable accurate reconstruction from undersampled (S)TEM tilt series. DLET learns the sparsifying transform (dictionary) in an adaptive way and reconstructs the tomogram simultaneously from highly undersampled tilt series. In this method, sparsity is applied on overlapping image patches, favouring local structures. Furthermore, the dictionary is adapted to the specific tomogram instance, thereby favouring better sparsity and consequently higher-quality reconstructions. The reconstruction algorithm is based on an alternating procedure that learns the sparsifying dictionary and employs it to remove artifacts and noise in one step, and then restores the tomogram data in the other step. Simulation and real ET experiments on several morphologies were performed with a variety of setups. Reconstruction results validate its efficiency in both noiseless and noisy cases and show that it yields improved reconstruction quality with fast convergence. The proposed method enables the recovery of high-fidelity information without the need to worry about which sparsifying transform to select, or whether the images strictly follow the pre-conditions of a certain transform (e.g., strictly piecewise constant for Total Variation minimisation). This also avoids artifacts that can be introduced by specific sparsifying transforms (e.g., the staircase artifacts that may result from Total Variation minimisation). Moreover, this thesis shows how reliable elementally sensitive tomography using EELS is possible with the aid of both the appropriate use of dual electron energy loss spectroscopy (DualEELS) and the DLET compressed sensing algorithm, making the best use of the limited data volume and signal-to-noise inherent in core-loss electron energy loss spectroscopy (EELS) from nanoparticles of an industrially important material.
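The alternation at the heart of DLET (learn a sparsifying dictionary from the current estimate, use it to suppress noise, then restore the data) can be caricatured in a few lines. The sketch below is a drastically simplified, assumption-laden stand-in, not the DLET algorithm: a 1-sparse code over mean-removed patches with k-means-style atom updates, used purely as an image denoiser, with no tilt-series data-consistency step.

```python
import numpy as np

def extract_patches(img, p):
    """All overlapping p x p patches of img, flattened row-wise."""
    H, W = img.shape
    return np.array([img[i:i + p, j:j + p].ravel()
                     for i in range(H - p + 1) for j in range(W - p + 1)])

def denoise_1sparse(img, p=6, n_atoms=16, iters=5, seed=0):
    """Toy dictionary-learning denoiser: alternate a 1-sparse patch code
    (best matching unit-norm atom) with k-means-style atom updates."""
    rng = np.random.default_rng(seed)
    X = extract_patches(img, p)
    means = X.mean(axis=1, keepdims=True)
    Xc = X - means                          # code the patch fluctuations
    D = rng.normal(size=(n_atoms, p * p))
    D /= np.linalg.norm(D, axis=1, keepdims=True)
    for _ in range(iters):
        corr = Xc @ D.T                     # correlation with each atom
        idx = np.abs(corr).argmax(axis=1)   # 1-sparse assignment
        coef = corr[np.arange(len(X)), idx]
        for k in range(n_atoms):            # atom update (rank-1, k-means-like)
            sel = idx == k
            if sel.any():
                atom = (coef[sel, None] * Xc[sel]).sum(axis=0)
                norm = np.linalg.norm(atom)
                if norm > 0:
                    D[k] = atom / norm
    corr = Xc @ D.T                         # final sparse code with learned D
    idx = np.abs(corr).argmax(axis=1)
    coef = corr[np.arange(len(X)), idx]
    patches = coef[:, None] * D[idx] + means
    # Average overlapping patch reconstructions back into the image
    H, W = img.shape
    out, cnt = np.zeros((H, W)), np.zeros((H, W))
    t = 0
    for i in range(H - p + 1):
        for j in range(W - p + 1):
            out[i:i + p, j:j + p] += patches[t].reshape(p, p)
            cnt[i:i + p, j:j + p] += 1
            t += 1
    return out / cnt
```

Because the dictionary is learned from the image itself, structure (here, a step edge) is preserved while unstructured noise, which no small atom set can represent, is suppressed; that adaptivity is the point the thesis makes against fixed transforms.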
Taken together, the results presented in this thesis demonstrate how high-fidelity ET reconstructions can be achieved using a compressed sensing approach.
Abstract:
Sediment quality from the Paranagua Estuarine System (PES), a highly important port and ecological zone, was evaluated by assessing three lines of evidence: (1) sediment physical-chemical characteristics; (2) sediment toxicity (elutriates, sediment-water interface, and whole sediment); and (3) benthic community structure. Results revealed a gradient of increasing degradation of sediments (i.e., higher concentrations of trace metals, higher toxicity, and impoverishment of benthic community structure) towards the inner PES. Data integration by principal component analysis (PCA) showed a positive correlation between some contaminants (mainly As, Cr, Ni, and Pb) and toxicity in samples collected from stations located in the upper estuary and one station placed away from contamination sources. Benthic community structure seems to be affected by both pollution and the naturally fine-grained character of the sediments, which reinforces the importance of a weight-of-evidence approach to evaluate sediments of the PES. (C) 2008 Elsevier B.V. All rights reserved.
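The data-integration step used above (PCA over standardised sediment variables) can be sketched as follows; the variables (metal concentrations, toxicity endpoints, etc.) and data are of course hypothetical.

```python
import numpy as np

def pca_scores(data, n_components=2):
    """PCA via eigendecomposition of the correlation matrix of
    standardised variables (rows = stations, columns = measured
    variables). Returns component scores and explained-variance ratios."""
    z = (data - data.mean(axis=0)) / data.std(axis=0, ddof=1)
    corr = np.cov(z.T)                      # correlation matrix of z-scores
    vals, vecs = np.linalg.eigh(corr)
    order = np.argsort(vals)[::-1]          # sort components by variance
    vals, vecs = vals[order], vecs[:, order]
    explained = vals / vals.sum()
    return z @ vecs[:, :n_components], explained[:n_components]
```

Plotting stations in the space of the first two scores, with contaminant loadings overlaid, is the usual way such correlations between contamination and toxicity are visualised.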
Abstract:
Context: The neonatal mortality rate is declining globally. The aim of the present study was to identify, through a systematic review, relevant indicators for assessing newborn care in hospitals. Evidence Acquisition: Electronic database searches and manual searches of personal files were carried out for studies on quality indicators of newborn care. Searching 9 bibliographic databases, we found 85 articles, of which 22 directly relevant ones were selected and studied; a hand search yielded 1 further record, and 2 additional records were included. Results: A list of 87 structure, process and outcome indicators was compiled from the articles, and 26 additional measures were identified in the grey literature. After removing duplicates and categorising into 3 domains, 18 measures were structure (input) measures, 41 were process measures, and 34 were outcome measures. Conclusions: These 93 indicators provide a framework for assessing how well hospitals provide neonatal care. The measures should be discussed by expert panels in each context to arrive at nationally applicable indices of neonatal care, and may be adapted for local health settings.
Abstract:
The assessment of water quality has changed markedly worldwide over recent years, especially in Europe due to the implementation of the Water Framework Directive. Fish were considered a key element in this context, and several fish-based multi-metric indices have been proposed. In this study, we propose a multi-metric index, the Estuarine Fish Assessment Index (EFAI), developed for Portuguese estuaries and designed for the overall assessment of transitional waters, which can also be applied at the water body level within an estuary. The EFAI integrates seven metrics: species richness, percentage of marine migrants, number of species and abundance of estuarine resident species, number of species and abundance of piscivorous species, status of diadromous species, status of introduced species, and status of disturbance-sensitive species. Fish sampling surveys were conducted in 2006, 2009 and 2010, using beam trawl, in 13 estuarine systems along the Portuguese coast. Most of the metrics presented high variability among the transitional systems surveyed. According to the EFAI values, Portuguese estuaries presented a "Good" water quality status (except the Douro in one particular year), and the assessments in different years were generally concordant, with a few exceptions. The relationship between the EFAI and the Anthropogenic Pressure Index (API) was not significant, but a negative and significant correlation was registered between the EFAI and the expert-judgement pressure index, at both estuary and water body level. The ordination analysis performed to evaluate similarities among North-East Atlantic Geographical Intercalibration Group (NEAGIG) fish-based indices revealed four main groups: the French index, which is substantially different from all the others (it uses only four metrics, based on densities); the indices from Ireland, the United Kingdom and Spain (Asturias and Cantabria); the Dutch and German indices; and the indices of Belgium, Portugal and Spain (Basque Country). The need for detailed studies, including comparative approaches, on several aspects of these assessment tools, especially as regards their response to anthropogenic pressures, was stressed. (C) 2011 Elsevier Ltd. All rights reserved.
Abstract:
Soil is a key resource that provides the basis of food production and sustains and delivers several ecosystem services, including regulating and supporting services such as water and climate regulation, soil formation, and the cycling of nutrients, carbon and water. During the last decades, population growth, dietary changes and the subsequent pressure on food production have caused severe damage to soil quality as a consequence of intensive, high-input agriculture. While agriculture is supposed to maintain and steward its most important resource base, it compromises soil quality and fertility through its impact on erosion, soil organic matter and biodiversity decline, compaction, etc., and thus the yield increases necessary for the coming decades. New or improved cropping systems and agricultural practices are needed to ensure a sustainable use of this resource and to take full advantage of its associated ecosystem services. New and better soil quality indicators are also crucial for fast, in-field soil diagnosis, to help farmers decide on the best management practices to adopt under specific pedo-climatic conditions. Conservation Agriculture and its fundamental principles (minimum or no soil disturbance, permanent organic soil cover, and crop rotation/intercropping) certainly figure among the approaches capable of guaranteeing sustainable soil management. The iSQAPER project (Interactive Soil Quality Assessment in Europe and China for Agricultural Productivity and Environmental Resilience) is tackling this problem with the development of a Soil Quality application (SQAPP) that links soil and agricultural management practices to soil quality indicators and will provide an easy-to-use tool for farmers and land managers to judge their soil status. The University of Évora is the leader of WP6 - Evaluating and demonstrating measures to improve Soil Quality.
In this work package, several promising soil and agricultural management practices will be tested at selected sites and evaluated using the set of soil quality indicators defined for the SQAPP tool. The project as a whole, and WP6 in particular, can contribute to proving and demonstrating, under different pedoclimatic conditions, the impact of Conservation Agriculture practices on soil quality and function, as stated in the call under which this project was submitted.
Abstract:
Several anthropogenic disturbances affect rivers and streams in Portugal, which is why the implementation of measures for their protection is urgent. In this context, the Water Framework Directive (WFD) was created in 2000, aiming to achieve good status of water bodies by 2015. The objectives of this work were: to contribute to the knowledge of the macroinvertebrate community in the streams of the Algarve; to verify whether the macroinvertebrate community differs between streams and across habitats; to compare two sampling methodologies (INAG and “Habitats”) for assessing the ecological quality of the water; and to evaluate the ecological status (through biotic indices: IBMWP and IPtIs) of the Odelouca, Foupana and Odeleite streams using macroinvertebrates. These indices are based on each family's degree of tolerance to pollution and its abundance. Benthic macroinvertebrates (bioindicators) were collected with a hand net (500 μm mesh), with a sampling effort of 6 sweeps per site. No differences were found in ecological indices or in macroinvertebrate community structure (at family level), either between streams or between habitats. Community structure did not differ between the two methodologies tested, suggesting that the INAG methodology loses no information through insufficient sampling of particular habitats. However, the biotic indices tended to be equal or higher under the “Habitats” methodology. The water quality index values generally classified the water quality of the streams under study as good.
Information from the biotic indices should be complemented with physico-chemical analyses, or with methods such as the River Habitat Survey (RHS), which yields the “Habitat Quality Assessment” (HQA) index, expressing the natural features of rivers that are important for organisms (number of riffles, trees), and the “Habitat Modification Score” (HMS), which quantifies the modifications to the streams (land use around the rivers: agriculture, bridges).
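For context, IBMWP-type biotic indices are simple additive scores: each pollution-sensitive macroinvertebrate family found at a site contributes a fixed tolerance score, and the total maps to a quality class. The sketch below uses assumed scores and class boundaries purely for illustration; the official IBMWP tables differ.

```python
# Hypothetical subset of family tolerance scores (the official IBMWP
# table assigns 1-10 per family; these values are assumed)
IBMWP_SCORES = {"Heptageniidae": 10, "Leuctridae": 10, "Hydropsychidae": 5,
                "Baetidae": 4, "Chironomidae": 2, "Oligochaeta": 1}

def ibmwp(families_present):
    """IBMWP-style biotic index: sum the tolerance score of every scoring
    family found at the site (each family counts once); higher totals
    indicate better water quality."""
    return sum(IBMWP_SCORES.get(f, 0) for f in set(families_present))

def quality_class(score):
    """Map a score to a quality class (boundaries assumed for illustration)."""
    if score > 100:
        return "very good"
    if score > 60:
        return "good"
    if score > 35:
        return "moderate"
    return "poor"
```

This additive structure is why sampling methodology matters: missing a habitat can mean missing families and hence systematically lower scores, which is exactly what the INAG vs. “Habitats” comparison tests.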
Abstract:
This paper considers the conditions that are necessary at system and local levels for teacher assessment to be valid, reliable and rigorous. With sustainable assessment cultures as a goal, the paper examines how education systems can support local level efforts for quality learning and dependable teacher assessment. This is achieved through discussion of relevant research and consideration of a case study involving an evaluation of a cross-sectoral approach to promoting confidence in school-based assessment in Queensland, Australia. Building on the reported case study, essential characteristics for developing sustainable assessment cultures are presented, including: leadership in learning; alignment of curriculum, pedagogy and assessment; the design of quality assessment tasks and accompanying standards, and evidence-based judgement and moderation. Taken together, these elements constitute a new framework for building assessment capabilities and promoting quality assurance.
Abstract:
- Considers broad-scale assessment approaches and how they impact on educational opportunity and outcomes. - Brings together internationally recognised scholars providing new insights into assessment for learning improvement and accountability. - Presents different theoretical and methodological perspectives influential in the field of assessment, learning and social change. - Contributes to the theorising of assessment in contexts characterised by heightened accountability requirements and constant change. This book brings together internationally recognised scholars with an interest in how to use the power of assessment to improve student learning and to engage with accountability priorities at both national and global levels. It includes distinguished writers who have worked together for some two decades to shift the assessment paradigm from a dominant focus on assessment as measurement towards assessment as central to efforts to improve learning. These writers have worked with the teaching profession and, in so doing, have researched and generated key insights into different ways of understanding assessment and its relationship to learning. The volume contributes to the theorising of assessment in contexts characterised by heightened accountability requirements and constant change. The book’s structure and content reflect already significant and growing international interest in assessment as contextualised practice, as well as theories of learning and teaching that underpin and drive particular assessment approaches. Learning theories and practices, assessment literacies, teachers’ responsibilities in assessment, the role of leadership, and assessment futures are the organisers within the book’s structure and content. The contributors to this book have in common the view that quality assessment, and quality learning and teaching are integrally related. 
Another shared view is that the alignment of assessment with curriculum, teaching and learning is the linchpin of efforts to improve both learning opportunities and outcomes for all. Essentially, the book presents new perspectives on the enabling power of assessment. In so doing, the writers recognise that validity and reliability, the traditional canons of assessment, remain foundational and therefore necessary. However, they are not of themselves sufficient for quality education. The book argues that assessment needs to be radically reconsidered in the context of unprecedented societal change. Increasingly, communities are segregating more by wealth, with clear signs of social, political, economic and environmental instability. These changes raise important issues relating to ethics and equity, taken to be core dimensions in enabling the power of assessment to contribute to quality learning for all. This book offers readers new knowledge about how assessment can be used to re/engage learners across all phases of education.
Abstract:
Vision-based underwater navigation and obstacle avoidance demand robust computer vision algorithms, particularly for operation in turbid water with reduced visibility. This paper describes a novel method for simultaneous underwater image quality assessment, visibility enhancement and disparity computation to increase stereo range resolution under dynamic, natural lighting and turbid conditions. The technique estimates the visibility properties from a sparse 3D map of the original degraded image using a physical underwater light attenuation model. Firstly, an iterated distance-adaptive image contrast enhancement enables a dense disparity computation and visibility estimation. Secondly, using a light attenuation model for ocean water, a color-corrected stereo underwater image is obtained along with a visibility distance estimate. Experimental results in shallow, naturally lit, high-turbidity coastal environments show that the proposed technique improves range estimation over the original images, as well as image quality and color for habitat classification. Furthermore, the recursiveness and robustness of the technique allow implementation onboard an Autonomous Underwater Vehicle for improving navigation and obstacle avoidance performance.
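A physical underwater light attenuation model of the kind the abstract describes is commonly written as I = J·t + B·(1−t) with transmission t = exp(−β·d), where d is the scene distance recovered from stereo. The sketch below inverts that model per pixel; the attenuation coefficients and veiling-light color are illustrative placeholders, not values from the paper, which estimates them from a sparse 3D map:

```python
import numpy as np

# Hypothetical per-channel attenuation coefficients for coastal water
# (red attenuates fastest) and a hypothetical veiling-light color; the
# paper estimates these quantities from a sparse 3D map instead.
BETA = np.array([0.60, 0.25, 0.20])   # attenuation per metre, R/G/B
B_INF = np.array([0.10, 0.35, 0.45])  # backscatter (veiling light) color

def restore(image, distance):
    """Invert the attenuation model I = J*t + B*(1 - t), t = exp(-beta*d).

    image    : HxWx3 float array in [0, 1], the degraded observation
    distance : HxW float array of scene distances in metres (from stereo)
    """
    t = np.exp(-distance[..., None] * BETA)  # per-pixel, per-channel transmission
    t = np.clip(t, 0.05, 1.0)                # floor t to avoid amplifying noise
    j = (image - B_INF * (1.0 - t)) / t      # recovered scene radiance
    return np.clip(j, 0.0, 1.0)
```

On a synthetic image degraded with the same model, this inversion recovers the original radiance exactly wherever the transmission stays above the noise floor; on real imagery the quality of the result depends entirely on how well β and the veiling light are estimated.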
Abstract:
Over the past four decades, the state of Hawaii has developed a system of eleven Marine Life Conservation Districts (MLCDs) to conserve and replenish marine resources around the state. Initially established to provide opportunities for public interaction with the marine environment, these MLCDs vary in size, habitat quality, and management regimes, providing an excellent opportunity to test hypotheses concerning marine protected area (MPA) design and function using multiple discrete sampling units. NOAA/NOS/NCCOS/Center for Coastal Monitoring and Assessment’s Biogeography Team developed digital benthic habitat maps for all MLCD and adjacent habitats. These maps were used to evaluate the efficacy of existing MLCDs for biodiversity conservation and fisheries replenishment, using a spatially explicit stratified random sampling design. Coupling the distribution of habitats and species habitat affinities using GIS technology elucidates species habitat utilization patterns at scales that are commensurate with ecosystem processes and is useful in defining essential fish habitat and biologically relevant boundaries for MPAs. Analysis of benthic cover validated the a priori classification of habitat types and provided justification for using these habitat strata to conduct stratified random sampling and analyses of fish habitat utilization patterns. Results showed that the abundance and distribution of species and assemblages exhibited strong correlations with habitat types. Fish assemblages in the colonized and uncolonized hardbottom habitats were found to be most similar among all of the habitat types. Much of the macroalgae habitat sampled was macroalgae growing on hard substrate, and as a result showed similarities with the other hardbottom assemblages. The fish assemblages in the sand habitats were highly variable but distinct from the other habitat types. Management regime also played an important role in the abundance and distribution of fish assemblages.
MLCDs had higher values for most fish assemblage characteristics (e.g. biomass, size, diversity) compared with adjacent fished areas and Fisheries Management Areas (FMAs) across all habitat types. In addition, apex predators and other targeted resource species were more abundant and larger in the MLCDs, illustrating the effectiveness of these closures in conserving fish populations. Habitat complexity, quality, size and level of protection from fishing were important determinants of MLCD effectiveness with respect to their associated fish assemblages.
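The stratified random sampling design described above allocates survey effort across habitat strata before drawing sites at random within each stratum. A minimal sketch of the proportional-allocation step, with hypothetical stratum names and areas rather than figures from the study:

```python
import random

# Hypothetical habitat strata and mapped areas (hectares); the actual
# strata come from the NOAA digital benthic habitat maps.
strata_area = {
    "colonized_hardbottom": 120.0,
    "uncolonized_hardbottom": 80.0,
    "macroalgae": 60.0,
    "sand": 40.0,
}

def allocate_sites(strata, n_total, seed=None):
    """Allocate n_total survey sites across strata in proportion to area,
    guaranteeing at least one site per stratum, then draw placeholder
    random values per site (a real design would sample coordinates
    inside each stratum's mapped polygons)."""
    total_area = sum(strata.values())
    alloc = {s: max(1, round(n_total * a / total_area))
             for s, a in strata.items()}
    rng = random.Random(seed)
    sites = {s: [rng.random() for _ in range(n)] for s, n in alloc.items()}
    return alloc, sites
```

With 30 sites over these areas the allocation works out to 12, 8, 6 and 4 sites respectively, so sampling effort tracks stratum area while every habitat type is still represented.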
Abstract:
In recent collaborative biological sampling exercises organised by the Nottingham Regional Laboratory of the Severn-Trent Water Authority, the effect of handnet sampling variation on the quality and usefulness of the data obtained has been questioned, especially when these data are transcribed into one or more of the commonly used biological methods of water quality assessment. This study investigates whether this effect is constant at sites with similar topography but differing water quality states when the sampling method is standardized and carried out by a single operator. An argument is made for the use of a lowest-common-denominator approach to give a more consistent result and obviate the effect of sampling variation on these biological assessment methods.
Abstract:
There is a need to determine quantitative relationships between fishery status and water quality in order to make informed judgements concerning fishery health and the setting of environmental quality standards for fishery protection. Such relationships would also assist in the formulation of a system for classifying fisheries. A national database of fisheries and water quality has been collated from the archives of pollution control authorities throughout the UK. A number of probable and potential water quality effects on fish populations have been identified from a thorough analysis of the database, notwithstanding large confounding effects such as habitat variation and fish mobility, and the generally sparse nature of water quality information. A number of different approaches to data analysis were utilised, and the value of each has been appraised. Recommendations concerning the integration of water quality assessment approaches have been made, and further research on fishery status, and its measurement, in relation to water quality has been suggested.