974 results for Spatial Resolution
Abstract:
Low-rise buildings are often subjected to high wind loads during hurricanes that lead to severe damage and cause water intrusion. It is therefore important to estimate accurate wind pressures for design purposes to reduce losses. Wind loads measured on low-rise buildings can differ significantly depending on the laboratory in which they were measured. The differences are due in large part to inadequate simulations of the low-frequency content of atmospheric velocity fluctuations in the laboratory and to the small scale of the models used for the measurements. A new partial turbulence simulation methodology was developed for simulating the effect of low-frequency flow fluctuations on low-rise buildings more effectively, from the point of view of testing accuracy and repeatability, than is currently the case. The methodology was validated by comparing aerodynamic pressure data for building models obtained in the open-jet 12-Fan Wall of Wind (WOW) facility against their counterparts in a boundary-layer wind tunnel. Field measurements of pressures on the Texas Tech University building and the Silsoe building were also used for validation purposes. Tests in partial simulation are freed of integral length scale constraints, meaning that model length scales in such testing are limited only by blockage considerations. Thus the partial simulation methodology can be used to produce aerodynamic data for low-rise buildings by using large-scale models in wind tunnels and WOW-like facilities. This is a major advantage, because large-scale models allow for accurate modeling of architectural details, testing at higher Reynolds numbers, greater spatial resolution of the pressure taps in high-pressure zones, and assessment of the performance of aerodynamic devices intended to reduce wind effects. The technique eliminates a major cause of discrepancies among measurements conducted in different laboratories, can help to standardize flow simulations for testing residential homes, and significantly improves testing accuracy and repeatability. Partial turbulence simulation was used in the WOW to determine the performance of discontinuous perforated parapets in mitigating roof pressures. Comparisons of pressures with and without parapets showed significant reductions in pressure coefficients in the zones with high suctions, demonstrating the potential of such aerodynamic add-on devices to reduce uplift forces.
Abstract:
The purpose of this project was to evaluate the use of remote sensing 1) to detect and map Everglades wetland plant communities at different scales; and 2) to compare map products delineated and resampled at various scales, with the intent to quantify and describe the quantitative and qualitative differences between such products. We evaluated data provided by DigitalGlobe's WorldView-2 (WV2) sensor, with a spatial resolution of 2 m, and data from Landsat's Thematic Mapper and Enhanced Thematic Mapper (TM and ETM+) sensors, with a spatial resolution of 30 m. We were also interested in the comparability and scalability of products derived from these data sources. The adequacy of each data set for mapping wetland plant communities was evaluated using two metrics: 1) model-based accuracy estimates of the classification procedures; and 2) design-based post-classification accuracy estimates of the derived maps.
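To make the design-based accuracy assessment concrete, the sketch below computes overall, user's, and producer's accuracy from a post-classification error matrix; the class labels and sample counts are invented placeholders, not values from this project.

```python
# Hypothetical post-classification accuracy assessment from an error
# matrix (rows: mapped class, columns: reference class). The class
# labels and sample counts are illustrative only.
import numpy as np

error_matrix = np.array([
    [50,  4,  2],   # e.g. sawgrass
    [ 6, 38,  5],   # e.g. cattail
    [ 1,  3, 41],   # e.g. open water
])

overall = np.trace(error_matrix) / error_matrix.sum()
users = np.diag(error_matrix) / error_matrix.sum(axis=1)      # map perspective
producers = np.diag(error_matrix) / error_matrix.sum(axis=0)  # reference perspective

print(f"overall accuracy: {overall:.2%}")
print("user's accuracy:", np.round(users, 3))
print("producer's accuracy:", np.round(producers, 3))
```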
Abstract:
The development of the ecosystem approach and of models for the management of ocean marine resources requires easy access to standard, validated datasets of historical catch data for the main exploited species. These are used to measure the impact of biomass removal by fisheries and to evaluate model skill, while the use of a standard dataset facilitates model inter-comparison. North Atlantic albacore tuna is exploited all year round by longline fisheries and, in summer and autumn, by surface fisheries; the fishery statistics are compiled by the International Commission for the Conservation of Atlantic Tunas (ICCAT). Catch and effort with geographical coordinates, at monthly temporal resolution and a spatial resolution of 1° or 5° squares, were extracted for this species with a careful definition of fisheries and data screening. In total, thirteen fisheries were defined for the period 1956-2010, covering the longline, troll, mid-water trawl and bait-fishing gears. However, the spatialized catch-and-effort data available in the ICCAT database represent only a fraction of the total catch. Length frequencies of catch were also extracted, according to the fishery definitions above, for the period 1956-2010 at a quarterly temporal resolution and spatial resolutions varying from 1° x 1° to 10° x 20°. The measurement resolution also varies, with size bins of 1, 2 or 5 cm (fork length). Screening of the data detected inconsistencies: a relatively large number of samples exceeded 150 cm, whereas all studies on albacore growth suggest that fish rarely grow beyond 130 cm. Therefore, a threshold value of 130 cm was arbitrarily fixed, and all length-frequency data above this value were removed from the original data set.
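A minimal sketch of that final screening step, assuming a tabular length-frequency layout with hypothetical column names (the actual ICCAT field names may differ):

```python
# Drop length-frequency records above the 130 cm fork-length threshold,
# mirroring the screening rule described above. Column names are assumed.
import pandas as pd

MAX_FORK_LENGTH_CM = 130  # albacore rarely grow beyond this length

lf = pd.DataFrame({
    "fishery_id":     [1, 1, 7, 13],
    "year":           [1999, 1999, 2005, 2010],
    "fork_length_cm": [95, 152, 128, 161],  # bin lower bounds (1, 2 or 5 cm bins)
    "n_fish":         [240, 3, 85, 1],
})

screened = lf[lf["fork_length_cm"] <= MAX_FORK_LENGTH_CM]
print(f"removed {len(lf) - len(screened)} bins above {MAX_FORK_LENGTH_CM} cm")
```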
Abstract:
An emerging approach to downscaling the projections from General Circulation Models (GCMs) to scales relevant for basin hydrology is to use output of GCMs to force higher-resolution Regional Climate Models (RCMs). With spatial resolution often in the tens of kilometers, however, even RCM output will likely fail to resolve local topography that may be climatically significant in high-relief basins. Here we develop and apply an approach for downscaling RCM output using local topographic lapse rates (empirically estimated, spatially and seasonally variable changes in climate variables with elevation). We calculate monthly local topographic lapse rates from the 800-m Parameter-elevation Regressions on Independent Slopes Model (PRISM) dataset, which is based on regressions of observed climate against topographic variables. We then use these lapse rates to elevationally correct two sources of regional climate-model output: (1) the North American Regional Reanalysis (NARR), a retrospective dataset produced from a regional forecasting model constrained by observations, and (2) a range of baseline climate scenarios from the North American Regional Climate Change Assessment Program (NARCCAP), which is produced by a series of RCMs driven by GCMs. By running a calibrated and validated hydrologic model, the Soil and Water Assessment Tool (SWAT), using observed station data and elevationally adjusted NARR and NARCCAP output, we are able to estimate the sensitivity of hydrologic modeling to the source of the input climate data. Topographic correction of regional climate-model data is a promising method for modeling the hydrology of mountainous basins for which no weather station datasets are available or for simulating hydrology under past or future climates.
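A minimal sketch of the elevational correction, assuming a simple linear adjustment with a monthly, locally estimated lapse rate (the function and variable names are illustrative):

```python
# Adjust an RCM value from its grid-cell elevation to a local elevation
# using an empirically estimated topographic lapse rate (here for
# temperature; other variables would use their own seasonal lapse rates).
def correct_to_local_elevation(value_rcm, z_rcm_m, z_local_m, lapse_per_km):
    """Shift a climate variable from grid-cell to local elevation."""
    dz_km = (z_local_m - z_rcm_m) / 1000.0
    return value_rcm + lapse_per_km * dz_km

# Example: RCM cell at 1,400 m, subbasin centroid at 2,600 m, January
# temperature lapse rate of -5.8 degC/km -> about 7 degC colder locally.
print(correct_to_local_elevation(2.0, 1400.0, 2600.0, -5.8))
```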
Abstract:
Geo-referenced catch and fishing effort data for the bigeye tuna fisheries in the Indian Ocean over 1952-2014 were analysed and standardized to facilitate population dynamics modelling studies. During this sixty-two-year historical period of exploitation, many changes occurred in both the fishing techniques and the monitoring of activity. This study includes a series of processing steps used for the standardization of spatial resolution, the conversion and standardization of catch and effort units, the raising of geo-referenced catch to the nominal catch level, the screening and correction of outliers, and the detection of major catchability changes over long time series of fishing data (i.e., the Japanese longline fleet operating in the tropical Indian Ocean). A total of thirty fisheries were finally determined from the longline, purse seine and other-gears data sets, of which ten longline and four purse seine fisheries represented 96% of the whole historical catch. The geo-referenced records consist of catch, fishing effort and associated length frequency samples for all fisheries.
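As an illustration of the raising step, the sketch below scales geo-referenced catch so that its total matches the nominal catch per fishery and year; the field names and numbers are assumptions, not the actual database schema:

```python
# Raise geo-referenced catch to the nominal catch level: each record is
# multiplied by the ratio of nominal to geo-referenced total for its
# fishery/year stratum. Values are invented for the example.
import pandas as pd

georef = pd.DataFrame({
    "fishery": ["LL_JPN", "LL_JPN", "PS_EUR"],
    "year":    [1980, 1980, 1990],
    "catch_t": [120.0, 80.0, 300.0],           # geo-referenced catch (t)
})
nominal = pd.Series({("LL_JPN", 1980): 250.0,   # nominal (total) catch (t)
                     ("PS_EUR", 1990): 330.0})

stratum_sum = georef.groupby(["fishery", "year"])["catch_t"].transform("sum")
factors = [nominal[k] for k in zip(georef["fishery"], georef["year"])]
georef["raised_catch_t"] = georef["catch_t"] * (pd.Series(factors) / stratum_sum)
print(georef)
```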
Abstract:
This dataset provides an inventory of thermo-erosional landforms and streams in three lowland areas underlain by ice-rich permafrost of the Yedoma-type Ice Complex at the Siberian Laptev Sea coast. It consists of two shapefiles per study region: one shapefile for the digitized thermo-erosional landforms and streams, and one for the study area extent. Thermo-erosional landforms were manually digitized from topographic maps and satellite data as line features and subsequently analyzed in a Geographic Information System (GIS) using ArcGIS 10.0. The mapping included in particular thermo-erosional gullies and valleys as well as streams and rivers, since the development of all of these features potentially involved thermo-erosional processes. For the Cape Mamontov Klyk site, data from Grosse et al. [2006], which had been digitized from 1:100000 topographic map sheets, were clipped to the Ice Complex extent of Cape Mamontov Klyk, which excludes the hill range in the southwest with outcropping bedrock and rocky slope debris, coastal barrens, and a large sandy floodplain area in the southeast. The mapped features (streams, intermittent streams) were then visually compared with panchromatic Landsat-7 ETM+ satellite data (4 August 2000, 15 m spatial resolution) and panchromatic Hexagon data (14 July 1975, 10 m spatial resolution). Smaller valleys and gullies not captured in the maps were subsequently digitized from the satellite data. The criterion for mapping linear features as thermo-erosional valleys and gullies was their clear incision into the surface with visible slopes. Thermo-erosional features of the Lena Delta site were mapped on the basis of a Landsat-7 ETM+ image mosaic (2000 and 2001, 30 m ground resolution) [Schneider et al., 2009] and a Hexagon satellite image mosaic (1975, 10 m ground resolution) [G. Grosse, unpublished data] of the Lena River Delta within the extent of the Lena Delta Ice Complex [Morgenstern et al., 2011]. For the Buor Khaya Peninsula, data from Arcos [2012], which had been digitized based on RapidEye satellite data (8 August 2010, 6.5 m ground resolution), were supplemented with smaller thermo-erosional features using the same RapidEye scene as a mapping basis. The spatial resolution, acquisition date, time of day, and viewing geometry of the satellite data used may have influenced the identification of thermo-erosional landforms in the images. For Cape Mamontov Klyk and the Lena Delta, thermo-erosional features were digitized using both Hexagon and Landsat data; Hexagon provided higher resolution, while Landsat provided the modern extent of features. An allowance of up to decameters was made for the lateral expansion of features between the Hexagon and Landsat acquisitions (between 1975 and 2000).
Abstract:
The Lena River Delta, situated in Northern Siberia (72.0 - 73.8° N, 122.0 - 129.5° E), is the largest Arctic delta and covers 29,000 km**2. Since natural deltas are characterised by complex geomorphological patterns and various types of ecosystems, high spatial resolution information on the distribution and extent of the delta environments is necessary for a spatial assessment and accurate quantification of biogeochemical processes as drivers for the emission of greenhouse gases from tundra soils. In this study, the first land cover classification for the entire Lena Delta based on Landsat 7 Enhanced Thematic Mapper (ETM+) images was conducted and used for the quantification of methane emissions from the delta ecosystems on the regional scale. The applied supervised minimum distance classification was very effective with the few ancillary data that were available for training site selection. Nine land cover classes of aquatic and terrestrial ecosystems in the wetland-dominated (72%) Lena Delta could be defined by this classification approach. The mean daily methane emission of the entire Lena Delta was calculated to be 10.35 mg CH4/m**2/d. Taking our multi-scale approach into account, we find that the methane source strength of certain tundra wetland types is lower than calculated previously on coarser scales.
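The regional upscaling amounts to an area-weighted mean of per-class fluxes; the sketch below illustrates the calculation with placeholder classes and flux values rather than the study's actual figures:

```python
# Area-weighted mean methane flux over mapped land cover classes.
# Class areas and per-class fluxes are invented placeholders.
class_area_km2 = {"wet sedge tundra": 9000.0,
                  "dry tundra":       7000.0,
                  "water bodies":     5000.0}
flux_mg_ch4_m2_d = {"wet sedge tundra": 18.0,
                    "dry tundra":        2.5,
                    "water bodies":      9.0}

total_area = sum(class_area_km2.values())
mean_flux = sum(class_area_km2[c] * flux_mg_ch4_m2_d[c]
                for c in class_area_km2) / total_area
print(f"area-weighted mean flux: {mean_flux:.2f} mg CH4/m**2/d")
```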
Abstract:
Underwater photo-transect surveys were conducted on September 23-27, 2007 at different sections of the reef flat, reef crest and reef slope of Heron Reef. The survey was done by swimming along pre-defined transect sites and taking a picture of the bottom substrate, parallel to the bottom at a constant vertical distance (30 cm), every two to three metres. A total of 3,586 benthic photos were taken. A floating GPS setup connected to the swimmer/diver by a line enabled recording of the coordinates of the transect surveys. Approximation of the coordinates for each benthic photo was based on the photo timestamp and the GPS coordinate timestamp, using the GPS Photo Link software. Coordinates of each photo were interpolated by finding the GPS coordinates that were logged at a set time before and after the photo was captured. The output of this process was an ArcMap point shapefile, a Google Earth KML file and a thumbnail of each benthic photo taken. The data in the ArcMap shapefile and in the Google Earth KML file consist of the approximated coordinates of each benthic photo taken during the survey. With the GPS Photo Link extension installed in the ArcMap environment, opening the ArcMap shapefile enables a thumbnail of the associated benthic cover photo to be displayed whenever the mouse hovers over a point on the transect. The GPS Photo Link software can be downloaded from www.geospatialexperts.com; installing it, even as a trial version, adds the extension to the ArcMap environment.
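The timestamp-based matching is essentially a linear interpolation between the GPS fixes bracketing each photo; a minimal sketch of that idea follows (assuming this mirrors what GPS Photo Link does internally; times and positions are invented):

```python
# Interpolate a photo's position between the GPS fixes logged just
# before and just after the photo timestamp.
from datetime import datetime

def interpolate_position(photo_time, fix_before, fix_after):
    """fix_* are (time, lat, lon) tuples bracketing the photo time."""
    t0, lat0, lon0 = fix_before
    t1, lat1, lon1 = fix_after
    w = (photo_time - t0).total_seconds() / (t1 - t0).total_seconds()
    return lat0 + w * (lat1 - lat0), lon0 + w * (lon1 - lon0)

photo_t = datetime(2007, 9, 24, 10, 15, 30)
before = (datetime(2007, 9, 24, 10, 15, 25), -23.4420, 151.9140)
after  = (datetime(2007, 9, 24, 10, 15, 35), -23.4421, 151.9142)
print(interpolate_position(photo_t, before, after))  # midpoint of the two fixes
```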
Abstract:
This data set contains LPJ-LMfire dynamic global vegetation model output covering Europe and the Mediterranean for the Last Glacial Maximum (LGM; 21 ka) and for a preindustrial control simulation (20th century detrended climate). The netCDF data files are time averages of the final 30 years of the model simulation. Each netCDF file contains four or five variables: fractional cover of 9 plant functional types (PFTs; cover), total fractional coverage of trees (treecover), population density of hunter-gatherers (foragerPD; only for the "people" simulations), fraction of the gridcell burned on 30-year average (burnedf), and vegetation net primary productivity (NPP). The model spatial resolution is 0.5 degrees. For the LGM simulations, LPJ-LMfire was driven by the PMIP3 suite of eight GCMs for which LGM climate simulations were available. Also provided in this archive is the result of an LPJ-LMfire run that was forced by the average climate of all GCMs (the "GCM-mean" files), and the average of the individual LPJ-LMfire runs over the eight LGM scenarios (the "LPJ-mean" files). Model simulations are provided that include the influence of human presence on the landscape (the "people" files) and in a "world without humans" scenario (the "natural" files). Finally, this archive contains the preindustrial reference simulation with and without human influence ("PI_reference_people" and "PI_reference_nat", respectively). There are therefore 22 netCDF files in this archive: 8 each of the LGM simulations with and without people (16 total), the "GCM-mean" simulations (2 files), the "LPJ-mean" aggregates (2 files), and the two preindustrial "control" simulations ("PI"), with and without humans (2 files). In addition to the LPJ-LMfire model output (netCDF files), this archive also contains a table of arboreal pollen percentages calculated from pollen samples dated to the LGM at sites throughout the study region (lgmAP.txt), and a table containing the locations of archaeological sites dated to the LGM (LGM_archaeological_site_locations.txt).
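A brief sketch of how such files might be read and compared with xarray; the file names below are hypothetical placeholders, though the variable name (treecover) is as described above:

```python
# Compare 30-year-mean tree cover with and without human presence for one
# LGM scenario. File names are hypothetical; 'treecover' is the variable
# described above.
import xarray as xr

people = xr.open_dataset("LGM_people_GCM1.nc")    # hypothetical file name
natural = xr.open_dataset("LGM_natural_GCM1.nc")  # hypothetical file name

effect = people["treecover"] - natural["treecover"]
print(float(effect.mean()))  # domain-mean anthropogenic effect on tree cover
```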
Abstract:
Sediment dynamics on a storm-dominated shelf (western Bay of Plenty, New Zealand) were mapped and analyzed using the newly developed multi-sensor benthic profiler MARUM NERIDIS III. An area of 60 km × 7 km between 2 and 35 m water depth was surveyed with this bottom-towed sled, equipped with a high-resolution camera for continuous close-up seafloor photography and a CTD with a connected turbidity sensor. Here we introduce our approach of using this multi-parameter dataset, combined with sidescan sonography and sedimentological analyses, to create detailed lithofacies and bedform distribution maps and to derive regional sediment transport patterns. For the assessment of sediment distribution, photographs were classified and their spatial distribution mapped out according to the associated acoustic backscatter from a sidescan sonar. This provisional map was used to choose target locations for surficial sediment sampling and subsequent laboratory analysis of grain size distribution and mineralogical composition. Finally, the photographic, granulometric and mineralogical facies were combined into a unified lithofacies map and a corresponding stratigraphic model. Eight distinct types of lithofacies with seaward increasing grain size were discriminated and interpreted as reworked relict deposits overlain by post-transgressional fluvial sediments. The dominant transport processes at different water depths were identified based on the type and orientation of bedforms, as well as bottom water turbidity and lithofacies distribution. Observed bedforms include subaquatic dunes, coarse sand ribbons and sorted bedforms of varying dimensions, which were interpreted as being initially formed by erosion. Under fair-weather conditions, sediment is transported from the northwest towards the southeast by littoral drift. During storm events, a current from the southeast to the northwest is induced, which transports sediment along the shore at water depths of up to 35 m. Shoreward cross-shore transport takes place at water depths of up to 60 m and is likewise initiated by storm events. Our study demonstrates how benthic photographic profiling delivers comprehensive compositional, structural and environmental information, which compares well with results obtained by traditional probing methods but offers much higher spatial resolution while covering larger areas. Multi-sensor benthic profiling enhances the interpretability of acoustic seafloor mapping techniques and is a rapid and economic approach to seabed and habitat mapping, especially in muddy to sandy facies.
Abstract:
The composition and abundance of algal pigments provide information on phytoplankton community characteristics such as photoacclimation, overall biomass and taxonomic composition. In particular, pigments play a major role in photoprotection and in the light-driven part of photosynthesis. Most phytoplankton pigments can be measured by high-performance liquid chromatography (HPLC) techniques applied to filtered water samples. This method, as well as other laboratory analyses, is time consuming and therefore limits the number of samples that can be processed in a given time. In order to obtain information on phytoplankton pigment composition at higher temporal and spatial resolution, we have developed a method to assess pigment concentrations from continuous optical measurements. The method applies an empirical orthogonal function (EOF) analysis to remote-sensing reflectance data derived from ship-based hyperspectral underwater radiometry and from multispectral satellite data (using the Medium Resolution Imaging Spectrometer - MERIS - Polymer product developed by Steinmetz et al., 2011, doi:10.1364/OE.19.009783) measured in the Atlantic Ocean. Subsequently, we developed multiple linear regression models with measured (collocated) pigment concentrations as the response variable and EOF loadings as predictor variables. The model results show that surface concentrations of a suite of pigments and pigment groups can be well predicted from the ship-based reflectance measurements, even when only a multispectral resolution is chosen (i.e., eight bands, similar to those used by MERIS). Based on the MERIS reflectance data, concentrations of total and monovinyl chlorophyll a and of the groups of photoprotective and photosynthetic carotenoids can be predicted with high quality. As a demonstration of the utility of the approach, the fitted model based on satellite reflectance data as input was applied to 1 month of MERIS Polymer data to predict the concentration of those pigment groups for the whole eastern tropical Atlantic area. Bootstrapping explorations of cross-validation error indicate that the method can produce reliable predictions with relatively small data sets (e.g., < 50 collocated values of reflectance and pigment concentration). The method allows for the derivation of time series from continuous reflectance data of various pigment groups in various regions, which can be used to study variability and change of phytoplankton composition and photophysiology.
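A compact sketch of the EOF-plus-regression pipeline, using synthetic data in place of the collocated reflectance and HPLC pigment measurements (the EOF decomposition is computed here via principal component analysis):

```python
# EOF analysis of reflectance spectra followed by multiple linear
# regression of pigment concentration on the EOF loadings. Synthetic
# data stand in for the collocated observations.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_samples, n_bands = 60, 8                 # e.g. eight MERIS-like bands
reflectance = rng.random((n_samples, n_bands))
pigment = rng.random(n_samples)            # e.g. total chlorophyll a (mg/m^3)

eof = PCA(n_components=4).fit(reflectance)  # EOF modes of the spectra
loadings = eof.transform(reflectance)       # per-sample EOF loadings

model = LinearRegression().fit(loadings, pigment)
print("training R^2:", model.score(loadings, pigment))
```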
Abstract:
A new approach for the estimation of soil organic carbon (SOC) pools north of the tree line has been developed based on synthetic aperture radar (SAR; ENVISAT Advanced SAR Global Monitoring mode) data. SOC values are determined directly from backscatter values instead of upscaling via land cover or soil classes. The multi-mode capability of SAR allows application across scales. It can be shown that measurements in C band under frozen conditions represent vegetation and surface structure properties which relate to soil properties, specifically SOC. It is estimated that at least 29 Pg C is stored in the upper 30 cm of soils north of the tree line. This is approximately 25 % less than the stocks derived from the soil-map-based Northern Circumpolar Soil Carbon Database (NCSCD). The total stored carbon is underestimated, since the established empirical relationship is not valid for peatlands or strongly cryoturbated soils. The approach does, however, provide the first spatially consistent account of soil organic carbon across the Arctic. Furthermore, it could be shown that values obtained from 1 km resolution SAR correspond to accounts based on a high spatial resolution (2 m) land cover map over a study area of about 7 × 7 km in NE Siberia. The approach can also potentially be transferred to medium-resolution C-band SAR data such as ENVISAT ASAR Wide Swath with ~120 m resolution, but it is in general limited to regions without woody vegetation. Global-Monitoring-mode-derived SOC increases with the length of the unfrozen period. This indicates the importance of this parameter for modelling the spatial distribution of soil organic carbon storage.
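Conceptually, the retrieval applies an empirical backscatter-to-SOC relationship pixel by pixel, with no intermediate land cover classification; the sketch below uses a purely hypothetical linear form and invented coefficients:

```python
# Map frozen-season C-band backscatter (dB) directly to SOC in the top
# 30 cm. The linear form and coefficients are hypothetical stand-ins for
# the empirically fitted relationship.
import numpy as np

sigma0_db = np.array([[-16.5, -14.2],
                      [-12.8, -18.1]])      # per-pixel backscatter (dB)

a, b = -1.9, -12.0                          # hypothetical fit: SOC = a*sigma0 + b
soc_kg_m2 = np.clip(a * sigma0_db + b, 0.0, None)
print(soc_kg_m2)                            # SOC estimate (kg C/m^2) per pixel
```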
Abstract:
Cs atom beams, transversely collimated and cooled, passing through material masks in the form of arrays of reactive-ion-etched hollow Si pyramidal tips, and through optical masks formed by intense standing light waves, write submicron features on self-assembled monolayers (SAMs). Features with widths as narrow as 43 ± 6 nm, and with spatial resolution limited only by the grain boundaries of the substrate, have been realized in SAMs of alkanethiols. The material masks write two-dimensional arrays of submicron holes; the optical masks result in parallel lines spaced by half the optical wavelength. Both types of feature are written to the substrate by exposure of the masked SAM to the Cs flux and a subsequent wet chemical etch. For the arrays of pyramidal tips, acting as passive shadow masks, the resolution and size of the resultant features depend on the distance of the mask array from the SAM, an effect caused by the residual divergence of the Cs atom beam. The standing-wave optical mask acts as an array of microlenses focusing the atom flux onto the substrate. Atom 'pencils' writing on SAMs have the potential to create arbitrary submicron figures in massively parallel arrays. The smallest features and highest resolutions were realized with SAMs grown on smooth, sputtered gold substrates.
Abstract:
This dissertation studies coding strategies for computational imaging that overcome the limitations of conventional sensing techniques. The information capacity of conventional sensing is limited by the physical properties of the optics, such as aperture size, detector pixels, quantum efficiency, and sampling rate. These parameters determine the spatial, depth, spectral, temporal, and polarization sensitivity of each imager. Increasing sensitivity in any one dimension can significantly compromise the others.
This research implements various coding strategies for optical multidimensional imaging and acoustic sensing in order to extend their sensing abilities. The proposed coding strategies combine hardware modification and signal processing to extract greater bandwidth and sensitivity from conventional sensors. We discuss the hardware architecture, compression strategies, sensing process modeling, and reconstruction algorithm of each sensing system.
Optical multidimensional imaging measures three or more dimensions of the optical signal. Traditional multidimensional imagers acquire the extra dimensional information at the cost of degraded temporal or spatial resolution. Compressive multidimensional imaging multiplexes the transverse spatial, spectral, temporal, and polarization information onto a two-dimensional (2D) detector. The corresponding spectral, temporal and polarization coding strategies adapt optics, electronic devices, and designed modulation techniques for multiplexed measurement. This computational imaging technique provides multispectral, temporal super-resolution, and polarization imaging abilities with minimal loss in spatial resolution and noise performance, while maintaining or gaining temporal resolution. The experimental results show that appropriate coding strategies can increase sensing capacity by a factor of hundreds.
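The measurement model common to such systems is compressive multiplexing, y = Ax, followed by sparse reconstruction; below is a toy one-dimensional sketch using iterative soft thresholding (ISTA), with invented sizes and a random code standing in for the physical modulation:

```python
# Toy compressive sensing example: a sparse signal is multiplexed into
# fewer measurements by a random code and recovered with ISTA.
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 64, 24, 3                            # signal size, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.random(k) + 0.5
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random coding matrix
y = A @ x                                      # multiplexed measurements

x_hat, step, lam = np.zeros(n), 0.1, 0.02      # step < 1/||A||^2 for stability
for _ in range(1000):                          # ISTA iterations
    r = x_hat + step * A.T @ (y - A @ x_hat)   # gradient step on ||y - Ax||^2
    x_hat = np.sign(r) * np.maximum(np.abs(r) - step * lam, 0.0)  # soft threshold
print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```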
The human auditory system has an astonishing ability to localize, track, and filter selected sound sources or information from a noisy environment. Accomplishing the same task with engineering methods usually requires multiple detectors, advanced computational algorithms, or artificial intelligence systems. Compressive acoustic sensing incorporates acoustic metamaterials into compressive sensing theory to emulate the abilities of sound localization and selective attention. This research investigates and optimizes the sensing capacity and the spatial sensitivity of the acoustic sensor. The well-modeled acoustic sensor allows multiple speakers to be localized in both stationary and dynamic auditory scenes, and mixed conversations from independent sources to be distinguished with a high audio recognition rate.
Abstract:
'Image volumes' refer to realizations of images in other dimensions such as time, spectrum, and focus. Recent advances in scientific, medical, and consumer applications demand improvements in image volume capture. Though image volume acquisition continues to advance, it maintains the same sampling mechanisms that have been used for decades; every voxel must be scanned and is presumed independent of its neighbors. Under these conditions, improving performance comes at the cost of increased system complexity, data rates, and power consumption.
This dissertation explores systems and methods capable of efficiently improving sensitivity and performance for image volume cameras, and specifically proposes several sampling strategies that utilize temporal coding to improve imaging system performance and enhance our awareness in a variety of dynamic applications.
Video cameras and camcorders sample the video volume (x,y,t) at fixed intervals to gain understanding of the volume's temporal evolution. Conventionally, one must reduce the spatial resolution to increase the framerate of such cameras. Using temporal coding via physical translation of an optical element known as a coded aperture, the coded aperture compressive temporal imaging (CACTI) camera demonstrates a method by which to embed the temporal dimension of the video volume into spatial (x,y) measurements, thereby greatly improving temporal resolution with minimal loss of spatial resolution. This technique, which is among a family of compressive sampling strategies developed at Duke University, temporally codes the exposure readout functions at the pixel level.
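A toy sketch of the CACTI forward model as described above: frames of a short video are modulated by a translated binary mask and summed into a single coded snapshot (the mask, sizes, and shift pattern here are illustrative, not the camera's actual parameters):

```python
# Code T video frames with per-frame shifted copies of one binary mask
# and integrate them into a single 2-D snapshot, embedding time in (x, y).
import numpy as np

rng = np.random.default_rng(2)
H, W, T = 32, 32, 8
video = rng.random((T, H, W))                   # the (x, y, t) video volume
mask = (rng.random((H, W)) > 0.5).astype(float) # binary coded aperture

# Physical translation of the coded aperture: shift the mask each frame.
codes = np.stack([np.roll(mask, t, axis=0) for t in range(T)])
snapshot = (codes * video).sum(axis=0)          # single coded measurement
print(snapshot.shape)                           # (32, 32); frame recovery is a
                                                # separate inverse problem
```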
Since video cameras nominally integrate the remaining image volume dimensions (e.g. spectrum and focus) at capture time, spectral (x,y,t,λ) and focal (x,y,t,z) image volumes are traditionally captured via sequential changes to the spectral and focal state of the system, respectively. The CACTI camera's ability to embed video volumes into images motivates the exploration of other information within that video, namely focal and spectral information. The next part of the thesis demonstrates derivative works of CACTI: compressive extended depth of field and compressive spectral-temporal imaging. These works successfully extend temporal coding to improve sensing performance in these other dimensions.
Geometrical-optics-related tradeoffs, such as the classic challenge of combining a wide field of view with high-resolution photography, have motivated the development of multiscale camera arrays. The advent of such designs less than a decade ago heralds a new era of research- and engineering-related challenges. One significant challenge is that of managing the focal volume (x,y,z) over wide fields of view and at high resolutions. The fourth chapter shows advances on focus and image quality assessment for a class of multiscale gigapixel cameras developed at Duke.
Along the same line of work, we have explored methods for dynamic and adaptive addressing of focus via point spread function engineering. We demonstrate another form of temporal coding: physical translation of the image plane from its nominal focal position. We demonstrate this technique's capability to generate arbitrary point spread functions.