893 results for SPARSE


Relevance: 10.00%

Abstract:

Global databases of calcium carbonate concentrations and mass accumulation rates in Holocene and last glacial maximum sediments were used to estimate the deep-sea sedimentary calcium carbonate burial rate during these two time intervals. Sparse calcite mass accumulation rate data were extrapolated across regions of varying calcium carbonate concentration using a gridded map of calcium carbonate concentrations and the assumption that accumulation of noncarbonate material is uncorrelated with calcite concentration within a given geographical region. Mean noncarbonate accumulation rates were estimated within each of nine regions, determined by the distribution and nature of the accumulation rate data. For core-top sediments the regions of reasonable data coverage encompass 67% of the high-calcite (>75%) sediments globally, and within these regions we estimate an accumulation rate of 55.9 ± 3.6 × 10^11 mol/yr. The same regions cover 48% of glacial high-CaCO3 sediments (the smaller fraction is due to a shift of calcite deposition to the poorly sampled South Pacific), where accumulation totals 44.1 ± 6.0 × 10^11 mol/yr. Projecting both estimates to 100% coverage yields burial estimates of 8.3 × 10^12 mol/yr today and 9.2 × 10^12 mol/yr during glacial time. Given the incomplete data coverage this is little better than a guess, but it suggests that the glacial deep-sea calcite burial rate was probably not considerably faster than today's, in spite of a presumed decrease in shallow-water burial during glacial time.
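
The extrapolation step described above amounts to simple arithmetic that can be sketched in a few lines. In the toy example below (all numbers invented, not the study's data), a regional mean noncarbonate rate is estimated from dated cores and then projected onto grid cells where only the calcite fraction is known:

```python
import numpy as np

# Cores with measured total mass accumulation rate (g/cm^2/kyr)
# and calcite fraction (0-1); values are invented for illustration.
core_mar = np.array([2.1, 1.8, 2.5, 1.6])
core_calcite = np.array([0.80, 0.72, 0.85, 0.65])

# Mean noncarbonate accumulation rate for the region, assumed
# uncorrelated with calcite concentration within the region.
noncarb = np.mean(core_mar * (1.0 - core_calcite))

# Gridded calcite concentrations for cells lacking rate data.
grid_calcite = np.array([0.90, 0.75, 0.50])

# Extrapolated carbonate accumulation: f / (1 - f) * noncarbonate rate.
grid_carb_mar = grid_calcite / (1.0 - grid_calcite) * noncarb
print(grid_carb_mar)
```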

Relevance: 10.00%

Abstract:

Vegetation changes, such as shrub encroachment and wetland expansion, have been observed in many Arctic tundra regions. These changes feed back to permafrost and climate. Permafrost can be protected by soil shading through vegetation, which reduces the amount of solar energy available for thawing. Regional climate can be affected by a reduction in surface albedo, as more energy is then available for atmospheric and soil heating. Here, we compared the shortwave radiation budget of two common Arctic tundra vegetation types dominated by dwarf shrubs (Betula nana) and wet sedges (Eriophorum angustifolium) in North-East Siberia. We measured time series of the shortwave and longwave radiation budget above the canopy and transmitted radiation below the canopy. Additionally, we quantified soil temperature and heat flux as well as active layer thickness. The mean growing season albedo of dwarf shrubs was 0.15 ± 0.01; for sedges it was higher (0.17 ± 0.02). Dwarf shrub transmittance was 0.36 ± 0.07 on average, and sedge transmittance was 0.28 ± 0.08. The standing dead leaves contributed strongly to the soil shading of wet sedges. Despite a lower albedo and less soil shading, the soil below dwarf shrubs conducted less heat, resulting in a 17 cm shallower active layer compared with sedges. This result was supported by additional, spatially distributed measurements of both vegetation types. Clouds strongly influenced albedo and transmittance, particularly in sedge vegetation. Cloud cover reduced the albedo by 0.01 in dwarf shrubs and by 0.03 in sedges, while transmittance was increased by 0.08 and 0.10 in dwarf shrubs and sedges, respectively. Our results suggest that the observed deeper active layer below wet sedges is not primarily a result of the summer canopy radiation budget. Soil properties, such as soil albedo, moisture, and thermal conductivity, may be more influential, at least in our comparison between dwarf shrub vegetation on relatively dry patches and sedge vegetation with higher soil moisture.
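
For reference, the two canopy radiation metrics reported above follow directly from the measured radiation time series; the sketch below uses invented numbers and illustrative variable names, not the study's data:

```python
import numpy as np

# Illustrative daily-mean radiation values in W m^-2 (invented).
sw_down = np.array([420.0, 310.0, 150.0])   # incoming shortwave above canopy
sw_up = np.array([63.0, 46.0, 21.0])        # reflected shortwave above canopy
sw_below = np.array([150.0, 115.0, 60.0])   # transmitted shortwave below canopy

albedo = sw_up / sw_down               # fraction reflected by the surface
transmittance = sw_below / sw_down     # fraction reaching the soil surface
print(albedo.round(2), transmittance.round(2))
```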

Relevance: 10.00%

Abstract:

Registration of point clouds captured by depth sensors is an important task in 3D reconstruction applications based on computer vision. In many applications with strict performance requirements, the registration should be executed not only with precision, but also at the same frequency as data are acquired by the sensor. This thesis proposes the use of the pyramidal sparse optical flow algorithm to incrementally register point clouds captured by RGB-D sensors (e.g. Microsoft Kinect) in real time. The accumulated error inherent to the process is subsequently minimized by utilizing a marker and pose graph optimization. Experimental results gathered by processing several RGB-D datasets validate the system proposed by this thesis in visual odometry and simultaneous localization and mapping (SLAM) applications.
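
The thesis's exact pipeline is not reproduced here, but a minimal sketch of the frame-to-frame step it describes — pyramidal sparse (Lucas-Kanade) optical flow on the RGB stream, back-projection through the depth map, and a closed-form rigid alignment — might look like this (OpenCV and NumPy; all parameter values illustrative):

```python
import cv2
import numpy as np

def track_sparse_flow(prev_gray, curr_gray):
    """Track corner features between frames with pyramidal Lucas-Kanade."""
    p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                 qualityLevel=0.01, minDistance=7)
    p1, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, curr_gray, p0, None,
        winSize=(21, 21), maxLevel=3)        # 3-level image pyramid
    good = status.ravel() == 1
    return p0[good].reshape(-1, 2), p1[good].reshape(-1, 2)

def backproject(pts, depth, fx, fy, cx, cy):
    """Lift tracked pixels to 3-D camera coordinates via the depth map."""
    u, v = pts[:, 0], pts[:, 1]
    z = depth[v.astype(int), u.astype(int)]
    return np.column_stack(((u - cx) * z / fx, (v - cy) * z / fy, z))

def rigid_transform(A, B):
    """Least-squares rotation/translation mapping points A onto B (Kabsch)."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    U, _, Vt = np.linalg.svd((A - ca).T @ (B - cb))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cb - R @ ca
```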

Relevance: 10.00%

Abstract:

How experience alters neuronal ensemble dynamics and how locus coeruleus-mediated norepinephrine release facilitates memory formation in the brain are the topics of this thesis. Here we employed a visualization technique, cellular compartment analysis of temporal activity by fluorescence in situ hybridization (catFISH), to assess activation patterns of neuronal ensembles in the olfactory bulb (OB) and anterior piriform cortex (aPC) in response to repeated odor inputs. Two associative learning models were used: early odor preference learning in rat pups and adult rat go/no-go odor discrimination learning. With catFISH of an immediate early gene, Arc, we showed that odor representation in the OB and aPC was sparse (~5-10%) and widely distributed. Odor associative learning enhanced the stability of the rewarded odor representation in the OB and aPC. The stable component, indexed by the overlap between the two ensembles activated by the rewarded odor at two time points, increased from ~25% to ~50% (p = 0.004 to 1.43 × 10^-4; Chapters 3 and 4). Adult odor discrimination learning promoted pattern separation between rewarded and unrewarded odor representations in the aPC: the overlap between the two representations was reduced from ~25% to ~14% (p = 2.28 × 10^-5). However, learning an odor mixture as a rewarded odor increased the overlap of the component odor representations in the aPC from ~23% to ~44% (p = 0.010; Chapter 4). Blocking both α- and β-adrenoceptors in the aPC prevented discrimination learning of highly similar odors in adult rats and reduced the stability of OB mitral and granule cell ensembles responding to the rewarded odor. Similar treatment in the OB only slowed odor discrimination learning; however, OB adrenoceptor blockade disrupted pattern separation and ensemble stability in the aPC when the rats showed impaired discrimination (Chapter 5). In another project, the role of α₂-adrenoceptors in the OB during early odor preference learning was studied. OB α₂-adrenoceptor activation was necessary for odor learning in rat pups, and α₂-adrenoceptor activation was additive with β-adrenoceptor-mediated signalling in promoting learning (Chapter 2). Together, these experiments suggest that odor representations are highly adaptive at the early stages of odor processing. The OB and aPC work in concert to support odor learning, and top-down adrenergic input exerts powerful modulation over both learning and odor representation.
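
The ensemble-overlap index used in such catFISH analyses can be illustrated with a toy computation. The normalization below (co-active cells divided by mean ensemble size) is one common convention, assumed here rather than taken from the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)
n_cells = 1000

# Hypothetical binary activation maps for the same odor at two time
# points, each engaging a sparse (~8%) ensemble.
epoch1 = rng.random(n_cells) < 0.08
epoch2 = rng.random(n_cells) < 0.08

co_active = np.logical_and(epoch1, epoch2).sum()
mean_ensemble = (epoch1.sum() + epoch2.sum()) / 2
overlap = co_active / mean_ensemble   # stable fraction of the ensemble
print(f"ensemble overlap: {overlap:.2%}")
```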

Relevance: 10.00%

Abstract:

The way we've always envisioned computer programs is slowly changing. Thanks to the recent development of wearable technologies, we're experiencing the birth of new applications that are no longer limited to a fixed screen, but are instead sparse in our surroundings by means of fully fledged computational objects. In this paper we discuss proper techniques and technologies to be used for the creation of "Augmented Worlds", through the design and development of a novel framework that can help us understand how to build these new programs.

Relevance: 10.00%

Abstract:

The general knowledge of the hydrographic structure of the Southern Ocean is still rather incomplete, since observations, particularly in the ice-covered regions, are cumbersome to carry out. But we know from the available information that thermohaline processes have large amplitudes and cover a wide range of scales in this part of the world ocean. The modification of water masses around Antarctica has indeed a worldwide impact; these processes ultimately determine the cold state of the present climate in the world ocean. We have combined the efforts of the German and Russian polar research institutions to collect and validate the presently available temperature, salinity and oxygen data of the ocean south of 30°S latitude. We have carried out this work in spite of the fact that the hydrographic programme of the World Ocean Circulation Experiment (WOCE) will provide more new information in due time, because its contribution to the high latitudes of the Southern Ocean is quite sparse. The modified picture of the hydrographic structure of the Southern Ocean presented in this atlas may serve the oceanographic community in many ways and help to unravel the role of this ocean in the global climate system. This atlas could only be prepared with the altruistic assistance of many colleagues from various institutions worldwide who have provided us with their data and their advice. Their generous help is gratefully acknowledged. For two years, scientists from the Arctic and Antarctic Research Institute in St. Petersburg and the Alfred Wegener Institute for Polar and Marine Research in Bremerhaven cooperated in a fruitful way to establish the atlas and the archive of about 38,749 validated hydrographic stations. We hope that both sources of information will be widely applied in future ocean studies and will serve as a reference state for global change considerations.

Relevance: 10.00%

Abstract:

The research vessel and supply icebreaker POLARSTERN is the flagship of the Alfred-Wegener-Institut in Bremerhaven (Germany) and one of the infrastructural pillars of German Antarctic research. Since its commissioning in 1982, POLARSTERN has conducted 30 campaigns to Antarctica (157 legs, mostly in austral summer) and 29 to the Arctic (94 legs, in northern summer). POLARSTERN is usually in operation more than 300 days per year and crosses the Atlantic Ocean along a meridional section twice a year. The first radiosonde on POLARSTERN was released on 29 December 1982, two days after POLARSTERN started on its maiden voyage to the Antarctic, and these daily soundings have continued up to the present. Because POLARSTERN has reliably and regularly been providing upper-air observations from data-sparse regions (oceans and polar regions), the radiosonde data are of special value to researchers and weather forecast services alike. In the course of 30 years (1982-12-29 to 2012-11-25), a total of 12378 radiosonde balloons were launched on POLARSTERN. All radiosonde data can now be found here. Each dataset contains the directly measured parameters air temperature, relative humidity and air pressure, as well as the derived altitude, wind direction and wind speed. 432 datasets additionally contain ozone measurements.

Relevance: 10.00%

Abstract:

An 1180-cm-long core recovered from Lake Lyadhej-To (68°15'N, 65°45'E, 150 m a.s.l.) at the NW rim of the Polar Urals Mountains reflects the Holocene environmental history from ca. 11,000 cal. yr BP. Pollen assemblages from the diamicton (ca. 11,000-10,700 cal. yr BP) are dominated by Pre-Quaternary spores and redeposited Pinaceae pollen, pointing to a high terrestrial input. Turbid and nutrient-poor conditions existed in the lake ca. 10,700-10,550 cal. yr BP. The chironomid-inferred reconstructions suggest that mean July temperature increased rapidly from 10.0 to 11.8 °C during this period. Sparse, treeless vegetation dominated the disturbed and denuded soils in the catchment area. A distinct dominance of planktonic diatoms ca. 10,500-8800 cal. yr BP points to the lowest lake-ice coverage, the longest growing season and the highest bioproductivity in the lake's history. Birch forest with some shrub alder grew around the lake, reflecting the warmest climate conditions of the Holocene. Mean July temperature was likely 11-13 °C and annual precipitation 400-500 mm. The period ca. 8800-5500 cal. yr BP is characterized by a gradual deterioration of environmental conditions in the lake and its catchment. The pollen- and chironomid-inferred temperatures reflect a warm period (ca. 6500-6000 cal. yr BP) with a mean July temperature at least 1-2 °C higher than today. Birch forests disappeared from the lake vicinity after 6000 cal. yr BP. The vegetation in the Lyadhej-To region became similar to the modern one. Shrub (Betula nana, Salix) and herb tundra have dominated the lake catchment since ca. 5500 cal. yr BP. All proxies suggest rather harsh environmental conditions. Diatom assemblages reflect relatively short growing seasons and a longer persistence of lake ice ca. 5500-2500 cal. yr BP. Pollen-based climate reconstructions suggest significant cooling between ca. 5500 and 3500 cal. yr BP, with a mean July temperature of 8-10 °C and annual precipitation of 300-400 mm. The bioproductivity in the lake remained low after 2500 cal. yr BP, but biogeochemical proxies reflect a higher terrestrial influx. Changes in the diatom content may indicate warmer water temperatures and a reduced ice cover on the lake. However, chironomid-based reconstructions reflect a period with the lowest temperatures in the lake's history.

Relevance: 10.00%

Abstract:

Based on a well-established stratigraphic framework and 47 AMS-14C-dated sediment cores, the distribution of facies types on the NW Iberian margin is analysed in response to the last deglacial sea-level rise, thus providing a case study on the sedimentary evolution of a high-energy, low-accumulation shelf system. Altogether, four main types of sedimentary facies are defined. (1) A gravel-dominated facies occurs mostly as time-transgressive ravinement beds, which initially developed as shoreface and storm deposits in shallow waters on the outer shelf during the last sea-level lowstand. (2) A widespread, time-transgressive mixed siliceous/biogenic-carbonaceous sand facies indicates areas of moderate hydrodynamic regimes, high contribution of reworked shelf material, and fluvial supply to the shelf. (3) A glaucony-containing sand facies in a stationary position on the outer shelf formed mostly during the last-glacial sea-level rise by reworking of older deposits as well as authigenic mineral formation. (4) A mud facies is mostly restricted to confined Holocene fine-grained depocentres, which are located in mid-shelf position. The observed spatial and temporal distribution of these facies types on the high-energy, low-accumulation NW Iberian shelf was essentially controlled by the local interplay of sediment supply, shelf morphology, and strength of the hydrodynamic system. These patterns contrast with high-accumulation systems, where extensive sediment supply is the dominant control on facies distribution. This study emphasises the importance of large-scale erosion and material recycling on the sedimentary buildup during the deglacial drowning of the shelf. The presence of a homogeneous and up to 15-m-thick transgressive cover above a lag horizon contradicts the common assumption of sparse and laterally confined sediment accumulation on high-energy shelf systems during deglacial sea-level rise. In contrast to this extensive sand cover, laterally very confined and at most 4-m-thick mud depocentres developed during the Holocene sea-level highstand. This restricted formation of fine-grained depocentres was related to the combination of: (1) frequently occurring high-energy hydrodynamic conditions; (2) low overall terrigenous input from the adjacent rivers; and (3) the large distance of the Galicia Mud Belt from its main sediment supplier.

Relevance: 10.00%

Abstract:

There is persistent interest in understanding responses of passerine birds to habitat fragmentation, but research findings have been inconsistent and sometimes contradictory in their conclusions about how birds respond to characteristics of the sites they occupy, such as habitat patch size or edge density. We examined whether these inconsistencies could result from differences in the amount of habitat in the surrounding landscape, e.g., for woodland birds, the amount of tree cover in the surrounding landscape. We compared responses of 22 woodland bird species to proximate-scale tree cover in open landscapes versus wooded landscapes. Our main expectation was that woodland birds would tolerate less suitable sites (less tree cover at the site scale) in open environments where they had little choice, i.e., where little tree cover was available in the surrounding area. We compared responses using logistic regression coefficients and loess plots in open and wooded landscapes in eastern North Dakota, USA. Contrary to our expectation, responses to proximate-scale tree cover were stronger, not weaker, in open landscapes. In some cases the sign of the response changed from positive to negative between contrasting landscapes. We draw two conclusions. First, observed responses to proximate habitat measures such as habitat extent or edge density cannot be interpreted reliably unless landscape context is specified. Second, birds appear more selective, not less so, where habitat is sparse. Habitat loss and fragmentation at the landscape scale are likely to reduce the usefulness of local habitat conservation, and regional drivers of land-use change can have important effects on site-scale habitat use.
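
For illustration only, the kind of stratified comparison described above can be sketched as follows, with simulated data (not the study's) in which the occupancy response to site-scale tree cover is steeper in open landscapes:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

def fit_occupancy(tree_cover, occupied):
    """Logistic regression of site occupancy on site-scale tree cover."""
    X = sm.add_constant(tree_cover)
    return sm.Logit(occupied, X).fit(disp=False)

# Simulated sites: the cover slope is steeper in open landscapes.
cover = rng.uniform(0, 1, 400)
landscape_open = rng.random(400) < 0.5
slope = np.where(landscape_open, 4.0, 1.0)
p = 1 / (1 + np.exp(-(-2 + slope * cover)))
occupied = (rng.random(400) < p).astype(int)

for label, mask in [("open", landscape_open), ("wooded", ~landscape_open)]:
    res = fit_occupancy(cover[mask], occupied[mask])
    print(label, "cover coefficient:", round(res.params[1], 2))
```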

Relevance: 10.00%

Abstract:

This dissertation focuses on two vital challenges in relation to whale acoustic signals: detection and classification.

In detection, we evaluated the influence of the uncertain ocean environment on the spectrogram-based detector and derived the likelihood ratio of the proposed Short-Time Fourier Transform detector. Experimental results showed that the proposed detector outperforms detectors based on the spectrogram. The proposed detector is more sensitive to environmental changes because it retains phase information.

In classification, our focus is on finding a robust and sparse representation of whale vocalizations. Because whale vocalizations can be modeled as polynomial-phase signals, we can represent whale calls by their polynomial-phase coefficients. In this dissertation, we used the Weyl transform to capture chirp-rate information and a two-dimensional feature set to represent whale vocalizations globally. Experimental results showed that our Weyl feature set outperforms chirplet coefficients and MFCC (Mel-frequency cepstral coefficients) when applied to our collected data.

Since whale vocalizations can be represented by polynomial-phase coefficients, it is plausible that the signals lie on a manifold parameterized by these coefficients. We also studied the intrinsic structure of high-dimensional whale data by exploiting its geometry. Experimental results showed that nonlinear mappings such as Laplacian Eigenmap and ISOMAP outperform linear mappings such as PCA and MDS, suggesting that the whale acoustic data has nonlinear structure.
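
A small sketch of this kind of comparison is given below, with synthetic chirps standing in for the whale calls and scikit-learn's SpectralEmbedding serving as the Laplacian eigenmap; all data and parameters are invented:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap, SpectralEmbedding

# Synthetic stand-in for high-dimensional call data: signals
# parameterized by a chirp rate, tracing a nonlinear curve in
# feature space.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 128)
chirp_rates = rng.uniform(5, 50, 300)
X = np.array([np.sin(2 * np.pi * (10 * t + r * t**2)) for r in chirp_rates])

for name, model in [("PCA", PCA(n_components=2)),
                    ("Isomap", Isomap(n_components=2)),
                    ("Laplacian Eigenmap", SpectralEmbedding(n_components=2))]:
    Y = model.fit_transform(X)
    # How well does the first embedding axis recover the chirp rate?
    r = np.corrcoef(Y[:, 0], chirp_rates)[0, 1]
    print(f"{name}: |corr with chirp rate| = {abs(r):.2f}")
```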

We also explored deep learning algorithms on whale acoustic data. We built each layer as convolutions with either a PCA filter bank (PCANet) or a DCT filter bank (DCTNet). With the DCT filter bank, each layer has a different time-frequency scale representation, from which one can extract different physical information. Experimental results showed that our PCANet and DCTNet achieve a high classification rate on the whale vocalization data set. The word error rate of the DCTNet feature is similar to that of MFSC in speech recognition tasks, suggesting that the convolutional network is able to reveal the acoustic content of speech signals.
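
As a rough sketch of what a DCT filter-bank convolutional layer can look like (a generic illustration, not the dissertation's DCTNet code; the 8×8 filter size is an assumption):

```python
import numpy as np
from scipy.fft import dct
from scipy.signal import convolve2d

def dct_filter_bank(k=8):
    """k*k separable 2-D DCT filters (a DCTNet-style filter bank)."""
    basis = dct(np.eye(k), norm='ortho', axis=0)   # columns: 1-D DCT basis
    return [np.outer(basis[:, i], basis[:, j])
            for i in range(k) for j in range(k)]

def dct_layer(image, k=8):
    """One convolutional layer: filter the input with every DCT filter."""
    return [convolve2d(image, f, mode='valid') for f in dct_filter_bank(k)]

# Hypothetical input patch standing in for a call spectrogram.
rng = np.random.default_rng(0)
spectrogram = rng.standard_normal((64, 64))
maps = dct_layer(spectrogram)
print(len(maps), maps[0].shape)   # 64 feature maps of size 57x57
```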

Relevance: 10.00%

Abstract:

Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, despite the continued relevance of uncertainty quantification in the sciences, where the number of parameters to estimate often exceeds the sample size even with the huge increases in the value of n typically seen in many fields. Thus, the tendency in some areas of industry to dispense with traditional statistical analysis, on the basis that "n = all", is of little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and it is the primary motivation for the work presented here.

Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is design and characterization of computational algorithms that scale better in n or p. In the first instance, the focus is on joint inference outside of the standard problem of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second area, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms, and characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.

One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
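
To make the connection concrete, here is a minimal NumPy sketch (invented dimensions, not from the dissertation) of the latent structure representation: a rank-k nonnegative PARAFAC-style factorization of a categorical probability mass function:

```python
import numpy as np

rng = np.random.default_rng(0)
k = 3                      # latent classes
levels = [2, 3, 2, 4]      # categories per variable (invented)

pi = rng.dirichlet(np.ones(k))                              # class weights
psi = [rng.dirichlet(np.ones(c), size=k) for c in levels]   # per-class marginals

# p(x1,...,xd) = sum_h pi_h * prod_j psi_j[h, x_j]: a rank-k
# nonnegative factorization of the probability tensor.
P = np.zeros(levels)
for h in range(k):
    factor = pi[h]
    for j in range(len(levels)):
        factor = np.multiply.outer(factor, psi[j][h])
    P += factor

assert np.isclose(P.sum(), 1.0)   # a valid joint pmf
```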

Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations, and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and other common population structure inference problems is assessed in simulations and a real data application.

In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis-Ylvisaker priors for the parameters of log-linear models do not give rise to closed-form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis-Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.

Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.

The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo (MCMC), the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel, but comparatively little attention has been paid to convergence and estimation error in these approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.

Data augmentation Gibbs samplers are arguably the most popular class of algorithm for approximately sampling from the posterior distribution for the parameters of generalized linear models. The truncated Normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.
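
The mixing pathology is easy to reproduce. Below is a minimal sketch of the standard Albert-Chib truncated-normal data augmentation sampler for probit regression, run on simulated rare-event data (flat prior, invented data; not the chapter's advertising dataset):

```python
import numpy as np
from scipy.stats import truncnorm

def albert_chib_probit(X, y, n_iter=1000, seed=0):
    """Truncated-normal data augmentation (Albert-Chib) for probit
    regression with a flat prior on the coefficients."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    chol = np.linalg.cholesky(XtX_inv)
    beta = np.zeros(p)
    draws = np.empty((n_iter, p))
    for it in range(n_iter):
        mu = X @ beta
        # Latent utilities: N(mu, 1) truncated positive if y=1, negative if y=0
        lo = np.where(y == 1, -mu, -np.inf)
        hi = np.where(y == 1, np.inf, -mu)
        z = mu + truncnorm.rvs(lo, hi, random_state=rng)
        # Conditional posterior of beta given z
        beta = XtX_inv @ (X.T @ z) + chol @ rng.standard_normal(p)
        draws[it] = beta
    return draws

# Rare-events regime: large n, very few successes.
rng = np.random.default_rng(1)
n = 2000
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
y = (rng.random(n) < 0.005).astype(int)   # ~0.5% successes
draws = albert_chib_probit(X, y)
lag1 = np.corrcoef(draws[:-1, 0], draws[1:, 0])[0, 1]
print("lag-1 autocorrelation of the intercept chain:", round(lag1, 3))
```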

Relevance: 10.00%

Abstract:

Wetland ecosystems provide many valuable ecosystem services, including carbon (C) storage and improvement of water quality. Yet restored and managed wetlands are rarely evaluated for their capacity to function and deliver these values. Specific restoration or management practices designed to meet one set of criteria may yield unrecognized biogeochemical costs or co-benefits. The goal of this dissertation is to improve scientific understanding of how wetland restoration practices and waterfowl habitat management affect critical wetland biogeochemical processes related to greenhouse gas emissions and nutrient cycling. I met this goal through field and laboratory research experiments in which I tested for relationships between management factors and the biogeochemical responses of wetland soil, water, plants and trace gas emissions. Specifically, I quantified: (1) the effect of organic matter amendments on the carbon balance of a restored wetland; (2) the effectiveness of two static chamber designs in measuring methane (CH4) emissions from wetlands; (3) the impact of waterfowl herbivory on the oxygen-sensitive processes of methane emission and coupled nitrification-denitrification; and (4) nitrogen (N) exports caused by prescribed drawdown of a waterfowl impoundment.

The potency of CH4 emissions from wetlands raises the concern that widespread restoration and/or creation of freshwater wetlands may present a radiative forcing hazard. Yet data on greenhouse gas emissions from restored wetlands are sparse, and there has been little investigation into the greenhouse gas effects of amending wetland soils with organic matter, a recent practice used to improve the function of mitigation wetlands in the Eastern United States. I measured trace gas emissions across an organic matter gradient at a restored wetland in the coastal plain of Virginia to test the hypothesis that added C substrate would increase the emission of CH4. I found that soils heavily loaded with organic matter emitted significantly more carbon dioxide than those that had received little or no organic matter. CH4 emissions from the wetland were low compared to reference wetlands and, contrary to my hypothesis, showed no relationship with the loading rate of added organic matter or total soil C. The addition of moderate amounts of organic matter (< 11.2 kg m⁻²) to the wetland did not greatly increase greenhouse gas emissions, while the addition of high amounts produced additional carbon dioxide, but not CH4.

I found that the static chambers I used for sampling CH4 in wetlands were highly sensitive to soil disturbance. Temporary compression around chambers during sampling inflated the initial chamber CH4 headspace concentration and/or led to nonlinear, unreliable flux estimates that had to be discarded. I tested an often-used rubber-gasket-sealed static chamber against a water-filled-gutter-seal chamber I designed, which could be set up and sampled from a distance of 2 m with a remote rod sampling system to reduce soil disturbance. Compared to the conventional design, the remotely sampled static chambers reduced the chance of detecting inflated initial CH4 concentrations from 66% to 6%, and nearly doubled the proportion of robust linear regressions, from 45% to 86%. The new system I developed allows for more accurate and reliable CH4 sampling without costly boardwalk construction.
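
For context, a static-chamber flux estimate is typically obtained by regressing headspace concentration on time and converting the slope with the ideal gas law. The sketch below uses invented numbers and the generic method, not the dissertation's exact code; it also reports R² as the linearity check that motivated discarding nonlinear runs:

```python
import numpy as np

def chamber_flux(times_min, conc_ppm, chamber_vol_L, chamber_area_m2,
                 temp_K=298.15, pressure_kPa=101.325):
    """CH4 flux from a static-chamber headspace time series.

    Fits a linear regression to concentration vs. time and converts
    the slope to a molar flux with the ideal gas law (PV = nRT,
    R = 8.314 L kPa / (K mol))."""
    slope_ppm_min, _intercept = np.polyfit(times_min, conc_ppm, 1)
    r2 = np.corrcoef(times_min, conc_ppm)[0, 1] ** 2   # linearity check
    mol_air = pressure_kPa * chamber_vol_L / (8.314 * temp_K)
    flux = slope_ppm_min * 1e-6 * mol_air / chamber_area_m2
    return flux, r2   # mol CH4 m^-2 min^-1, R^2

# Hypothetical 5-point sampling over 20 minutes.
t = np.array([0, 5, 10, 15, 20])
c = np.array([1.9, 2.4, 2.8, 3.3, 3.7])   # ppm CH4
print(chamber_flux(t, c, chamber_vol_L=10.0, chamber_area_m2=0.07))
```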

I explored the relationship between CH4 emissions and aquatic herbivores, which are recognized for imposing top-down control on the structure of wetland ecosystems. The biogeochemical consequences of herbivore-driven disruption of plant growth, and in turn of plant-mediated oxygen transport into wetland sediments, were not previously known. Two growing seasons of herbivore-exclusion experiments in a major waterfowl overwintering wetland in the Southeastern U.S. demonstrate that waterfowl herbivory had a strong impact on the oxygen-sensitive processes of CH4 emission and nitrification. Denudation by herbivorous birds increased cumulative CH4 flux by 233% (a mean of 63 g CH4 m⁻² y⁻¹) and inhibited coupled nitrification-denitrification, as indicated by nitrate availability and emissions of nitrous oxide. The recognition that large populations of aquatic herbivores may influence the capacity of wetlands to emit greenhouse gases and cycle nitrogen is particularly salient in the context of climate change and nutrient pollution mitigation goals. For example, our results suggest that annual emissions of 23 Gg of CH4 y⁻¹ from ~55,000 ha of publicly owned waterfowl impoundments in the Southeastern U.S. could be tripled by overgrazing.

Hydrologically controlled moist-soil impoundment wetlands provide critical habitat for high densities of migratory bird populations, so their potential to export nitrogen (N) to downstream waters may contribute to the eutrophication of aquatic ecosystems. To investigate the relative importance of N export from these built and managed habitats, I conducted a field study at an impoundment wetland that drains into hypereutrophic Lake Mattamuskeet. I found that prescribed hydrologic drawdowns of the impoundment exported roughly the same amount of N (14 to 22 kg ha⁻¹) as adjacent fertilized agricultural fields (16 to 31 kg ha⁻¹), and contributed approximately one-fifth of the total N load (~45 Mg N y⁻¹) to Lake Mattamuskeet. Ironically, the prescribed drawdown regime, designed to maximize waterfowl production in impoundments, may be exacerbating the degradation of habitat quality in the downstream lake. Few studies of wetland N dynamics have targeted impoundments managed to provide wildlife habitat, but a similar phenomenon may occur in some of the 36,000 ha of similarly managed moist-soil impoundments on National Wildlife Refuges in the southeastern U.S. I suggest early drawdown as a potential method to mitigate impoundment N pollution and estimate that it could reduce N export from our study impoundment by more than 70%.

In this dissertation research I found direct relationships between wetland restoration and impoundment management practices and the biogeochemical responses of greenhouse gas emission and nutrient cycling. Elevated soil C at a restored wetland increased CO2 losses even ten years after the organic matter was originally added, and intensive herbivory of emergent aquatic vegetation resulted in a ~230% increase in CH4 emissions and impaired N cycling and removal. These findings have important implications for the basic understanding of the biogeochemical functioning of wetlands, and practical importance for wetland restoration and impoundment management in the face of pressure to mitigate the environmental challenges of global warming and aquatic eutrophication.

Relevance: 10.00%

Abstract:

The goal of my Ph.D. thesis is to enhance the visualization of the peripheral retina using wide-field optical coherence tomography (OCT) in a clinical setting.

OCT has gained widespread adoption in clinical ophthalmology due to its ability to visualize diseases of the macula and central retina in three dimensions; however, clinical OCT has a limited field-of-view of 30°. There has been increasing interest in obtaining high-resolution images outside of this narrow field-of-view, because three-dimensional imaging of the peripheral retina may prove important in the early detection of neurodegenerative diseases, such as Alzheimer's and dementia, and in the monitoring of known ocular diseases, such as diabetic retinopathy, retinal vein occlusions, and choroidal masses.

Before attempting to build a wide-field OCT system, we need to better understand the peripheral optics of the human eye. Shack-Hartmann wavefront sensors are commonly used tools for measuring the optical imperfections of the eye, but their acquisition speed is limited by their underlying camera hardware. The first aim of my thesis research is to create a fast method of ocular wavefront sensing such that we can measure wavefront aberrations at numerous points across a wide visual field. To address aim one, we will develop a sparse Zernike reconstruction technique (SPARZER) that will enable Shack-Hartmann wavefront sensors to use as little as 1/10th of the data that would normally be required for an accurate wavefront reading. If less data needs to be acquired, then we can increase the speed at which wavefronts are recorded.
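
SPARZER's actual algorithm is not spelled out in this abstract, so the sketch below only illustrates the general idea of reconstructing Zernike coefficients from roughly a tenth of the slope measurements by least squares, with a random matrix standing in for the true Zernike-derivative (coefficient-to-slope) matrix:

```python
import numpy as np

# Toy illustration: recover Zernike coefficients from ~1/10 of the
# Shack-Hartmann slope measurements. All quantities are synthetic.
rng = np.random.default_rng(0)
n_slopes, n_modes = 2000, 36
A = rng.standard_normal((n_slopes, n_modes))          # stand-in slope matrix
c_true = rng.standard_normal(n_modes) * np.exp(-np.arange(n_modes) / 8.0)
slopes = A @ c_true + 0.01 * rng.standard_normal(n_slopes)

keep = rng.choice(n_slopes, n_slopes // 10, replace=False)   # 10% of the data
c_hat, *_ = np.linalg.lstsq(A[keep], slopes[keep], rcond=None)
err = np.linalg.norm(c_hat - c_true) / np.linalg.norm(c_true)
print(f"relative coefficient error from 10% of slopes: {err:.3f}")
```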

For my second aim, we will create a sophisticated optical model that reproduces the measured aberrations of the human eye. If we know how the average eye's optics distort light, then we can engineer ophthalmic imaging systems that preemptively cancel inherent ocular aberrations. This invention will help the retinal imaging community to design systems that are capable of acquiring high resolution images across a wide visual field. The proposed model eye is also of interest to the field of vision science as it aids in the study of how anatomy affects visual performance in the peripheral retina.

Using the optical model from aim two, we will design and reduce to practice a clinical OCT system that is capable of imaging a large (80°) field-of-view with enhanced visualization of the peripheral retina. A key aspect of this third and final aim is to make the imaging system compatible with standard clinical practices. To this end, we will incorporate sensorless adaptive optics to correct the inter- and intra-patient variability in ophthalmic aberrations. Sensorless adaptive optics will improve both the brightness (signal) and clarity (resolution) of features in the peripheral retina without affecting the size of the imaging system.

The proposed work should not only be a noteworthy contribution to the ophthalmic and engineering communities, but it should also strengthen our existing collaborations with the Duke Eye Center by advancing their capability to diagnose pathologies of the peripheral retina.

Relevance: 10.00%

Abstract:

The full-scale base-isolated structure studied in this dissertation is the only base-isolated building in the South Island of New Zealand. It sustained hundreds of earthquake ground motions from September 2010 well into 2012. Several large earthquake responses were recorded in December 2011 by NEES@UCLA and by a GeoNet recording station near Christchurch Women's Hospital. The primary focus of this dissertation is to advance the state of the art of methods for evaluating the performance of seismically isolated structures and the effects of soil-structure interaction, by developing new data-processing methodologies to overcome current limitations and by implementing advanced numerical modeling in OpenSees for direct analysis of soil-structure interaction.

This dissertation presents a novel method for recovering force-displacement relations within the isolators of building structures with unknown nonlinearities from sparse seismic-response measurements of floor accelerations. The method requires only direct matrix calculations (factorizations and multiplications); no iterative trial-and-error methods are required. The method requires a mass matrix, or at least an estimate of the floor masses; a stiffness matrix may be used, but is not necessary. Essentially, the method operates on a matrix of incomplete measurements of floor accelerations. In the special case of complete floor measurements of systems with linear dynamics, real modes, and equal floor masses, the principal components of this matrix are the modal responses. In the more general case of partial measurements and nonlinear dynamics, the method extracts a number of linearly dependent components from Hankel matrices of measured horizontal response accelerations, assembles these components row-wise, and extracts principal components from the singular value decomposition of this large matrix of linearly dependent components. These principal components are then interpolated between floors in a way that minimizes the curvature energy of the interpolation. This interpolation step can make use of a reduced-order stiffness matrix, a backward-difference matrix, or a central-difference matrix. The measured and interpolated floor acceleration components at all floors are then assembled and multiplied by a mass matrix. The recovered in-service force-displacement relations are then incorporated into the OpenSees soil-structure interaction model.
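
A toy sketch of the Hankel/SVD step alone (synthetic two-mode signals; the inter-floor interpolation and mass-matrix steps are omitted) might look like:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def hankel_principal_components(floor_accels, window=200, n_keep=6):
    """Stack Hankel matrices of each measured floor's acceleration
    row-wise, then extract shared components with one pooled SVD."""
    hankels = [sliding_window_view(a, window).T for a in floor_accels]
    stacked = np.vstack(hankels)
    U, S, Vt = np.linalg.svd(stacked, full_matrices=False)
    return S[:n_keep, None] * Vt[:n_keep]   # dominant temporal components

# Hypothetical two-floor record: two damped modes plus noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 20.0, 2000)
mode1 = np.exp(-0.05 * t) * np.sin(2 * np.pi * 1.2 * t)
mode2 = np.exp(-0.08 * t) * np.sin(2 * np.pi * 3.4 * t)
floors = np.array([mode1 + 0.5 * mode2, 0.7 * mode1 - mode2])
floors += 0.01 * rng.standard_normal(floors.shape)
components = hankel_principal_components(floors)
print(components.shape)
```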

Numerical simulations of soil-structure interaction involving non-uniform soil behavior are conducted following the development of the complete soil-structure interaction model of Christchurch Women's Hospital in OpenSees. In these 2D OpenSees models, the superstructure is modeled as two-dimensional frames in the short-span and long-span directions, respectively. The lead rubber bearings are modeled as elastomeric bearing (Bouc-Wen) elements. The soil underlying the concrete raft foundation is modeled with linear elastic plane-strain quadrilateral elements. The non-uniformity of the soil profile is incorporated by extracting and interpolating shear-wave velocity profiles from the Canterbury Geotechnical Database. The validity of the complete two-dimensional soil-structure interaction OpenSees model of the hospital is checked by comparing peak floor responses and force-displacement relations within the isolation system obtained from the OpenSees simulations with the recorded measurements. General explanations and implications, supported by displacement drifts, floor acceleration and displacement responses, and force-displacement relations, are presented to address the effects of soil-structure interaction.