930 results for "Bit error rate algorithm"


Relevance:

30.00%

Publisher:

Abstract:

Traffic incidents are non-recurring events that can cause a temporary reduction in roadway capacity. They have been recognized as a major contributor to traffic congestion on our nation’s highway systems. To alleviate their impacts on capacity, automatic incident detection (AID) has been applied as an incident management strategy to reduce the total incident duration. AID relies on an algorithm to identify the occurrence of incidents by analyzing real-time traffic data collected from surveillance detectors. Significant research has been performed to develop AID algorithms for incident detection on freeways; however, similar research on major arterial streets remains largely at the initial stage of development and testing. This dissertation research aims to identify design strategies for the deployment of an Artificial Neural Network (ANN) based AID algorithm for major arterial streets. A section of the US-1 corridor in Miami-Dade County, Florida was coded in the CORSIM microscopic simulation model to generate data for both model calibration and validation. To better capture the relationship between the traffic data and the corresponding incident status, Discrete Wavelet Transform (DWT) and data normalization were applied to the simulated data. Multiple ANN models were then developed for different detector configurations, historical data usage, and the selection of traffic flow parameters. To assess the performance of different design alternatives, the model outputs were compared based on both detection rate (DR) and false alarm rate (FAR). The results show that the best models were able to achieve a high DR of between 90% and 95%, a mean time to detect (MTTD) of 55-85 seconds, and a FAR below 4%. The results also show that a detector configuration including only the mid-block and upstream detectors performs almost as well as one that also includes a downstream detector. In addition, DWT was found to be able to improve model performance, and the use of historical data from previous time cycles improved the detection rate. Speed was found to have the most significant impact on the detection rate, while volume was found to contribute the least. The results from this research provide useful insights on the design of AID for arterial street applications.
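
The evaluation metrics used in this abstract (detection rate, false alarm rate, and mean time to detect) can be computed directly from per-interval incident and alarm flags. The sketch below is illustrative only and is not taken from the dissertation; the array names and the 30-second polling interval are assumptions.

```python
import numpy as np

def aid_metrics(incident, alarm, interval_s=30):
    """Illustrative DR / FAR / MTTD computation for an AID algorithm.

    incident : 0/1 array, ground-truth incident status per polling interval
    alarm    : 0/1 array, algorithm output per polling interval
    interval_s : length of one polling interval in seconds (assumed 30 s)
    """
    incident = np.asarray(incident, dtype=bool)
    alarm = np.asarray(alarm, dtype=bool)

    # False alarm rate: share of non-incident intervals flagged by the algorithm.
    far = alarm[~incident].mean() if (~incident).any() else 0.0

    # Split the incident flags into contiguous incident episodes.
    starts = np.flatnonzero(incident & ~np.roll(incident, 1))
    ends = np.flatnonzero(incident & ~np.roll(incident, -1))
    detected, delays = 0, []
    for s, e in zip(starts, ends):
        hits = np.flatnonzero(alarm[s:e + 1])
        if hits.size:                      # episode detected at least once
            detected += 1
            delays.append(hits[0] * interval_s)
    dr = detected / len(starts) if len(starts) else 0.0
    mttd = float(np.mean(delays)) if delays else float("nan")
    return dr, far, mttd

# Toy example: two incidents, the first detected after one interval.
incident = [0, 0, 1, 1, 1, 0, 0, 1, 1, 0]
alarm    = [0, 0, 0, 1, 1, 0, 0, 0, 0, 0]
print(aid_metrics(incident, alarm))
```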

Relevance:

30.00%

Publisher:

Abstract:

Exchange rate economics has achieved substantial development in the past few decades. Despite extensive research, a large number of unresolved problems remain in the exchange rate debate. This dissertation studied three puzzling issues aiming to improve our understanding of exchange rate behavior. Chapter Two used advanced econometric techniques to model and forecast exchange rate dynamics. Chapter Three and Chapter Four studied issues related to exchange rates using the theory of New Open Economy Macroeconomics. Chapter Two empirically examined the short-run forecastability of nominal exchange rates. It analyzed important empirical regularities in daily exchange rates. Through a series of hypothesis tests, a best-fitting fractionally integrated GARCH model with a skewed Student-t error distribution was identified. The forecasting performance of the model was compared with that of a random walk model. Results supported the contention that nominal exchange rates seem to be unpredictable over the short run, in the sense that the best-fitting model cannot beat the random walk model in forecasting exchange rate movements. Chapter Three assessed the ability of dynamic general-equilibrium sticky-price monetary models to generate volatile foreign exchange risk premia. It developed a tractable two-country model where agents face a cash-in-advance constraint and set prices to the local market; the exogenous money supply process exhibits time-varying volatility. The model yielded approximate closed-form solutions for risk premia and real exchange rates. Numerical results provided quantitative evidence that volatile risk premia can endogenously arise in a new open economy macroeconomic model. Thus, the model had the potential to rationalize the Uncovered Interest Parity Puzzle. Chapter Four sought to resolve the consumption-real exchange rate anomaly, which refers to the inability of most international macro models to generate the negative cross-correlations between real exchange rates and relative consumption across two countries observed in the data. While maintaining the assumption of complete asset markets, this chapter introduced endogenously segmented asset markets into a dynamic sticky-price monetary model. Simulation results showed that such a model could replicate the stylized fact that real exchange rates tend to move in the opposite direction from relative consumption.
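
As a rough illustration of the random-walk comparison described above (not the fractionally integrated GARCH specification used in the chapter), the sketch below compares out-of-sample forecast errors of a no-change (random-walk) forecast against a simple AR(1) baseline on simulated daily log exchange rates; all data and parameter values are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a daily log exchange rate as a near-random-walk (illustrative only).
n = 1000
log_fx = np.cumsum(0.001 * rng.standard_normal(n))
train, test = log_fx[:800], log_fx[800:]

def rmse(errors):
    return np.sqrt(np.mean(np.asarray(errors) ** 2))

# Random-walk forecast: tomorrow's level equals today's level,
# so the forecast error is simply the realized change.
rw_errors = np.diff(test)

# AR(1) on first differences, fit by least squares on the training sample.
dx = np.diff(train)
phi = np.dot(dx[:-1], dx[1:]) / np.dot(dx[:-1], dx[:-1])
dtest = np.diff(test)
ar_errors = dtest[1:] - phi * dtest[:-1]   # one-step-ahead errors

print("RW  RMSE:", rmse(rw_errors[1:]))
print("AR1 RMSE:", rmse(ar_errors))
```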

Relevance:

30.00%

Publisher:

Abstract:

This study investigates the impact of a combined treatment of Systematic Error Correction and Repeated Reading on reading rate and errors for 18-year-olds with undiagnosed reading difficulties on a Caribbean island. In addition to direct daily measures of reading accuracy, the Reading Self Perception Scale was administered to determine whether the intervention was associated with changes in the way the student perceives himself as a reader.

Relevance:

30.00%

Publisher:

Abstract:

The purpose of the research is to investigate emerging data security methodologies that will work with the most suitable applications in academic, industrial, and commercial environments. Of several methodologies considered for the Advanced Encryption Standard (AES), MARS, a block cipher developed by IBM, was selected. Its design takes advantage of the powerful capabilities of modern computers to allow a much higher level of performance than can be obtained from less optimized algorithms such as the Data Encryption Standard (DES). MARS is unique in combining virtually every design technique known to cryptographers in one algorithm. The thesis presents the performance of a 128-bit cipher, a scaled-down version of the MARS algorithm. The cryptosystem showed performance in speed, flexibility, and security comparable to that of the original algorithm. The algorithm is considered to be very secure and robust and is expected to be implemented for most applications.
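
The speed comparison described here could be reproduced with a simple throughput benchmark that times a block-encryption callable on random data. The sketch below uses a trivial XOR placeholder in place of the MARS and DES implementations, which are not reproduced here; the function names and block counts are assumptions made for illustration.

```python
import os
import time

def xor_cipher_block(block: bytes, key: bytes) -> bytes:
    # Placeholder "cipher" used only to exercise the benchmark harness;
    # a real MARS or DES block-encryption function would be dropped in here.
    return bytes(b ^ k for b, k in zip(block, key * (len(block) // len(key) + 1)))

def throughput_mbps(encrypt, key: bytes, block_size: int = 16, n_blocks: int = 50_000) -> float:
    """Encrypt n_blocks random blocks and report throughput in MB/s."""
    data = os.urandom(block_size * n_blocks)
    start = time.perf_counter()
    for i in range(n_blocks):
        encrypt(data[i * block_size:(i + 1) * block_size], key)
    elapsed = time.perf_counter() - start
    return (block_size * n_blocks) / elapsed / 1e6

key = os.urandom(16)  # 128-bit key, matching the scaled-down cipher discussed above
print(f"throughput: {throughput_mbps(xor_cipher_block, key):.1f} MB/s")
```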

Relevance:

30.00%

Publisher:

Abstract:

The tropical echinoid Echinometra viridis was reared in controlled laboratory experiments at temperatures of approximately 20°C and 30°C to mimic winter and summer temperatures and at carbon dioxide (CO2) partial pressures of approximately 487 ppm-v and 805 ppm-v to simulate current and predicted end-of-century levels. Spine material produced during the experimental period and dissolved inorganic carbon (DIC) of the corresponding culture solutions were then analyzed for stable oxygen (delta18Oe, delta18ODIC) and carbon (delta13Ce, delta13CDIC) isotopic composition. Fractionation of oxygen stable isotopes between the echinoid spines and the DIC of their corresponding culture solutions (Delta delta18O = delta18Oe - delta18ODIC) was significantly inversely correlated with seawater temperature but not significantly correlated with atmospheric pCO2. Fractionation of carbon stable isotopes between the echinoid spines and the DIC of their corresponding culture solutions (Delta delta13C = delta13Ce - delta13CDIC) was significantly positively correlated with pCO2 and significantly inversely correlated with temperature, with pCO2 functioning as the primary factor and temperature moderating the pCO2-delta13C relationship. Echinoid calcification rate was significantly inversely correlated with both Delta delta18O and Delta delta13C, both within treatments (i.e., pCO2 and temperature fixed) and across treatments (i.e., with effects of pCO2 and temperature controlled for through ANOVA). Therefore, calcification rate, and potentially the rate of co-occurring dissolution, appear to be important drivers of the kinetic isotope effects observed in the echinoid spines. Study results suggest that echinoid delta18O monitors seawater temperature, but not atmospheric pCO2, and that echinoid delta13C monitors atmospheric pCO2, with temperature moderating this relationship. These findings, coupled with echinoids' long and generally high-quality fossil record, support prior assertions that fossil echinoid delta18O is a viable archive of paleo-seawater temperature throughout Phanerozoic time, and that delta13C merits further investigation as a potential proxy of paleo-atmospheric pCO2. However, the apparent impact of calcification rate on echinoid delta18O and delta13C suggests that paleoceanographic reconstructions derived from these proxies in fossil echinoids could be improved by incorporating the effects of growth rate.
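
The fractionation quantities defined above (Delta delta18O = delta18Oe - delta18ODIC, and analogously for carbon) and their correlation with temperature can be computed directly once spine and culture-solution isotope values are tabulated. The sketch below uses invented numbers purely to show the arithmetic; it is not data from the study.

```python
import numpy as np

# Invented per-treatment values: spine and seawater-DIC oxygen isotopes (per mil).
d18O_spine = np.array([-0.5, -1.8, -0.4, -1.9])   # delta18Oe
d18O_dic   = np.array([ 0.8,  0.9,  0.8,  0.9])   # delta18ODIC
temp_c     = np.array([20.0, 30.0, 20.0, 30.0])   # culture temperature (deg C)

# Fractionation between spine carbonate and the DIC of its culture solution.
Dd18O = d18O_spine - d18O_dic

# Pearson correlation of fractionation with temperature (negative for these toy values).
r = np.corrcoef(Dd18O, temp_c)[0, 1]
print("Delta d18O:", Dd18O)
print("correlation with temperature:", round(r, 3))
```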

Relevance:

30.00%

Publisher:

Abstract:

Performing experiments on small-scale quantum computers is certainly a challenging endeavor. Many parameters need to be optimized to achieve high-fidelity operations. This can be done efficiently for operations acting on single qubits, as errors can be fully characterized. For multiqubit operations, though, this is no longer the case, as in the most general case, analyzing the effect of the operation on the system requires a full state tomography for which resources scale exponentially with the system size. Furthermore, in recent experiments, additional electronic levels beyond the two-level system encoding the qubit have been used to enhance the capabilities of quantum-information processors, which additionally increases the number of parameters that need to be controlled. For the optimization of the experimental system for a given task (e.g., a quantum algorithm), one has to find a satisfactory error model and also efficient observables to estimate the parameters of the model. In this manuscript, we demonstrate a method to optimize the encoding procedure for a small quantum error correction code in the presence of unknown but constant phase shifts. The method, which we implement here on a small-scale linear ion-trap quantum computer, is readily applicable to other AMO platforms for quantum-information processing.
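
As a toy illustration of estimating an unknown but constant phase shift from efficient observables (not the actual ion-trap encoding protocol of the paper), the sketch below recovers a phase phi on a single qubit from finite-shot estimates of <X> = cos(phi) and <Y> = sin(phi); all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
phi_true = 0.7          # unknown constant phase shift (radians), invented
shots = 2000

# For the state (|0> + e^{i*phi}|1>)/sqrt(2), measuring X gives +1 with
# probability (1 + cos(phi))/2 and measuring Y gives +1 with probability (1 + sin(phi))/2.
p_x = (1 + np.cos(phi_true)) / 2
p_y = (1 + np.sin(phi_true)) / 2
ex = 2 * rng.binomial(shots, p_x) / shots - 1   # finite-shot estimate of <X>
ey = 2 * rng.binomial(shots, p_y) / shots - 1   # finite-shot estimate of <Y>

phi_hat = np.arctan2(ey, ex)                    # phase estimate from the two observables
print(f"true phase {phi_true:.3f}, estimated phase {phi_hat:.3f}")
```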

Relevance:

30.00%

Publisher:

Abstract:

In this work, we present an adaptive unequal loss protection (ULP) scheme for H.264/AVC video transmission over lossy networks. This scheme combines erasure coding, H.264/AVC error resilience techniques, and importance measures in video coding. The unequal importance of the video packets is identified at the group of pictures (GOP) and H.264/AVC data partitioning levels. The presented method can adaptively assign unequal amounts of forward error correction (FEC) parity across the video packets according to the network conditions, such as the available network bandwidth, packet loss rate, and average packet burst loss length. A near-optimal algorithm is developed for the FEC assignment. The simulation results show that our scheme can effectively utilize network resources such as bandwidth while improving the quality of the video transmission. In addition, the proposed ULP strategy ensures graceful degradation of the received video quality as the packet loss rate increases. © 2010 IEEE.
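
A greedy version of the unequal FEC assignment idea can be sketched as follows: given per-class importance weights and a packet loss rate, parity packets are added one at a time to the class where they most increase the expected recovered importance, until the parity budget is spent. This is an illustrative approximation under an independent-loss, MDS erasure-code assumption, not the near-optimal algorithm of the paper; the weights, loss rate, and class sizes are invented.

```python
from math import comb

def recover_prob(k: int, m: int, p: float) -> float:
    """Probability a (k + m) erasure-coded block is recoverable, i.e. at most
    m of the k + m packets are lost, with independent loss probability p."""
    n = k + m
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(m + 1))

def greedy_fec(importance, k_source, budget, loss_rate):
    """Assign `budget` parity packets across packet classes greedily."""
    parity = [0] * len(importance)
    for _ in range(budget):
        # Marginal gain in expected recovered importance for one more parity packet.
        gains = [
            w * (recover_prob(k, m + 1, loss_rate) - recover_prob(k, m, loss_rate))
            for w, k, m in zip(importance, k_source, parity)
        ]
        parity[gains.index(max(gains))] += 1
    return parity

# Invented example: I-frame partition, P-frame partition, B-frames.
importance = [10.0, 4.0, 1.0]     # relative importance weights
k_source   = [8, 8, 8]            # source packets per class
print(greedy_fec(importance, k_source, budget=6, loss_rate=0.1))
```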

Relevance:

30.00%

Publisher:

Abstract:

Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, despite the continued relevance of uncertainty quantification in the sciences, where the number of parameters to estimate often exceeds the sample size even after the huge increases in n typically seen in many fields. Thus, the tendency in some areas of industry to dispense with traditional statistical analysis on the basis that "n = all" has little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification, but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and is the primary motivation for the work presented here.

Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is design and characterization of computational algorithms that scale better in n or p. In the first instance, the focus is on joint inference outside of the standard problem of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second area, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms, and characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.

One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
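
The PARAFAC (latent class) structure mentioned here expresses the joint pmf of multivariate categorical data as a mixture of products of independent categorical distributions, pr(x1, ..., xp) = sum_h nu_h * prod_j lambda_hj(xj). A minimal sketch of evaluating such a pmf, with invented dimensions and randomly drawn parameters, is below; it is not the collapsed Tucker model proposed in Chapter 2.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(2)

H, p, d = 3, 4, 2          # latent classes, variables, categories per variable (invented)
nu = rng.dirichlet(np.ones(H))                 # class weights, sum to 1
lam = rng.dirichlet(np.ones(d), size=(H, p))   # lam[h, j, :] = P(x_j = . | class h)

def parafac_pmf(x, nu, lam):
    """P(x) under a latent class (PARAFAC) model for categorical data."""
    # For each class h, multiply the per-variable probabilities of the observed levels.
    per_class = np.prod(lam[:, np.arange(len(x)), x], axis=1)
    return float(nu @ per_class)

x = np.array([0, 1, 1, 0])
print(parafac_pmf(x, nu, lam))

# Sanity check: the pmf sums to 1 over all d**p cells.
total = sum(parafac_pmf(np.array(cell), nu, lam) for cell in product(range(d), repeat=p))
print(round(total, 6))
```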

Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations, and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and other common population structure inference problems is assessed in simulations and a real data application.

In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis-Ylvisaker priors for the parameters of log-linear models do not give rise to closed form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis-Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.

Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.

The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo. The Markov Chain Monte Carlo method is the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel. Comparatively little attention has been paid to convergence and estimation error in these approximating Markov Chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.
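
One concrete instance of the random-subset approximating kernels discussed here is a Metropolis sampler whose acceptance ratio uses the log-likelihood of a random data subset, rescaled to the full sample size. The sketch below shows this for a Gaussian mean with everything invented for illustration; it exhibits the kind of kernel-approximation error the chapter's framework is designed to control, and is not the chapter's own algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)
y = rng.normal(2.0, 1.0, size=10_000)       # invented data, unknown mean, unit variance
n, subset_size, n_iter, step = len(y), 500, 2000, 0.05

def subset_loglik(theta, idx):
    # Subset log-likelihood rescaled to the full data size (the approximation).
    return (n / len(idx)) * np.sum(-0.5 * (y[idx] - theta) ** 2)

theta, samples = 0.0, []
for _ in range(n_iter):
    prop = theta + step * rng.standard_normal()
    idx = rng.choice(n, subset_size, replace=False)    # fresh subset each iteration
    # Flat prior; Metropolis ratio evaluated on the same subset for both states.
    if np.log(rng.uniform()) < subset_loglik(prop, idx) - subset_loglik(theta, idx):
        theta = prop
    samples.append(theta)

print("approx posterior mean:", np.mean(samples[500:]), " sample mean:", y.mean())
```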

Data augmentation Gibbs samplers are arguably the most popular class of algorithm for approximately sampling from the posterior distribution for the parameters of generalized linear models. The truncated Normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.
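
The truncated-normal data augmentation sampler referred to here (the Albert and Chib scheme for probit regression) alternates between drawing latent Gaussian utilities truncated according to the observed labels and drawing the coefficients from their conditional Gaussian posterior. A minimal intercept-only sketch on invented rare-event data is below; with few successes its draws move slowly, which is the mixing problem the chapter analyzes.

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(4)
n, n_success = 10_000, 20                      # invented rare-event data
y = np.zeros(n)
y[:n_success] = 1

beta, draws = 0.0, []                          # intercept-only probit, N(0, 100) prior
prior_var = 100.0
for _ in range(2000):
    # 1) Latent utilities z_i ~ N(beta, 1), truncated to (0, inf) if y_i = 1, else (-inf, 0).
    z = np.empty(n)
    z[y == 1] = truncnorm.rvs(-beta, np.inf, loc=beta, scale=1,
                              size=n_success, random_state=rng)
    z[y == 0] = truncnorm.rvs(-np.inf, -beta, loc=beta, scale=1,
                              size=n - n_success, random_state=rng)
    # 2) Conditional posterior of the intercept given z is Gaussian.
    post_var = 1.0 / (n + 1.0 / prior_var)
    beta = rng.normal(post_var * z.sum(), np.sqrt(post_var))
    draws.append(beta)

print("posterior mean of intercept:", np.mean(draws[500:]))
```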

Relevance:

30.00%

Publisher:

Abstract:

Purpose: Computed Tomography (CT) is one of the standard diagnostic imaging modalities for the evaluation of a patient’s medical condition. In comparison to other imaging modalities such as Magnetic Resonance Imaging (MRI), CT is a fast-acquisition imaging modality with higher spatial resolution and a higher contrast-to-noise ratio (CNR) for bony structures. CT images are presented through a gray scale of independent values in Hounsfield units (HU), where higher HU values represent denser materials. High-density materials, such as metal, tend to erroneously increase the HU values around them due to reconstruction software limitations. This problem of increased HU values due to the presence of metal is referred to as metal artefacts. Hip prostheses, dental fillings, aneurysm clips, and spinal clips are a few examples of metal objects that are of clinical relevance. These implants create artefacts such as beam hardening and photon starvation that distort CT images and degrade image quality. This is of great significance because the distortions may cause improper evaluation of images and inaccurate dose calculation in the treatment planning system. Different algorithms are being developed to reduce these artefacts and improve image quality for both diagnostic and therapeutic purposes. However, very limited information is available about the effect of artefact correction on dose calculation accuracy. This research study evaluates the dosimetric effect of metal artefact reduction algorithms on severe artefacts in CT images. This study uses the Gemstone Spectral Imaging (GSI)-based MAR algorithm, the projection-based Metal Artefact Reduction (MAR) algorithm, and the Dual-Energy method.

Materials and Methods: The Gemstone Spectral Imaging (GSI)-based and SMART Metal Artefact Reduction (MAR) algorithms are metal artefact reduction protocols embedded in two different CT scanner models by General Electric (GE), and the Dual-Energy Imaging Method was developed at Duke University. All three approaches were applied in this research for dosimetric evaluation on CT images with severe metal artefacts. The first part of the research used a water phantom with four iodine syringes. Two sets of plans, multi-arc plans and single-arc plans, were designed using the Volumetric Modulated Arc Therapy (VMAT) technique to avoid or minimize influences from high-density objects. The second part of the research used the projection-based MAR algorithm and the Dual-Energy method. Calculated doses (mean, minimum, and maximum) to the planning treatment volume (PTV) were compared and the homogeneity index (HI) was calculated.

Results: (1) Without the GSI-based MAR application, a percent error between the mean dose and the absolute dose ranging from 3.4% to 5.7% per fraction was observed. In contrast, the error decreased to a range of 0.09% to 2.3% per fraction with the GSI-based MAR algorithm. There was a percent difference of 1.7% to 4.2% per fraction between plans with and without the GSI-based MAR algorithm. (2) A difference of 0.1% to 3.2% was observed for the maximum dose values, 1.5% to 10.4% for the minimum dose, and 1.4% to 1.7% for the mean doses. Homogeneity indices (HI) ranging from 0.068 to 0.065 for the dual-energy method and from 0.063 to 0.141 with the projection-based MAR algorithm were also calculated.

Conclusion: (1) The percent error without the GSI-based MAR algorithm may deviate by as much as 5.7%. This error undermines the goal of radiation therapy to provide precise treatment. Thus, the GSI-based MAR algorithm is desirable due to its better dose calculation accuracy. (2) Based on direct numerical observation, there was no apparent deviation between the mean doses of the different techniques, but deviation was evident in the maximum and minimum doses. The HI for the dual-energy method almost achieved the desirable null value. In conclusion, the Dual-Energy method gave better dose calculation accuracy to the planning treatment volume (PTV) for images with metal artefacts than was obtained either with or without the GE MAR algorithm.

Relevance:

30.00%

Publisher:

Abstract:

Transfer of organic carbon (OC) from the terrestrial to the oceanic carbon pool is largely driven by riverine and aeolian transport. Before transport, however, terrigenous organic matter can be retained in intermediate terrestrial reservoirs such as soils. Using compound-specific radiocarbon analysis of terrigenous biomarkers their average terrestrial residence time can be evaluated. Here we show compound-specific radiocarbon (14C) ages of terrigenous biomarkers and bulk 14C ages accompanied by geochemical proxy data from core top samples collected along transects in front of several river mouths in the Black Sea. 14C ages of long chain n-alkanes, long chain n-fatty acids and total organic carbon (TOC) are highest in front of the river mouths, correlating well with BIT (branched and isoprenoid tetraether) indices, which indicates contribution of pre-aged, soil-derived terrigenous organic matter. The radiocarbon ages decrease further offshore towards locations where organic matter is dominated by marine production and aeolian input potentially contributes terrigenous organic matter. Average terrestrial residence times of vascular plant biomarkers deduced from n-C29+31 alkanes and n-C28+30 fatty acids ages from stations directly in front of the river mouths range from 900 ± 70 years to 4400 ± 170 years. These average residence times correlate with size and topography in climatically similar catchments, whereas the climatic regime appears to control continental carbon turnover times in morphologically similar drainage areas of the Black Sea catchment. Along-transect data imply petrogenic contribution of n-C29+31 alkanes and input via different terrigenous biomarker transport modes, i.e., riverine and aeolian, resulting in aged biomarkers at offshore core locations. Because n-C29+31 alkanes show contributions from petrogenic sources, n-C28+30 fatty acids likely provide better estimates of average terrestrial residence times of vascular plant biomarkers. Moreover, sedimentary n-C28 and n-C30 fatty acids appear clearly much less influenced by autochthonous sources than n-C24 and n-C26 fatty acids as indicated by increasing radiocarbon ages with increasing chain-length and are, thus, more representative as vascular plant biomarkers.
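
The compound-specific 14C ages underlying these residence-time estimates are conventional radiocarbon ages, obtained from the measured fraction modern as age = -8033 * ln(Fm), where 8033 years is the Libby mean life. The sketch below shows that conversion and the age offset between a biomarker and a reference pool, using invented fraction-modern values rather than data from this study.

```python
import math

LIBBY_MEAN_LIFE = 8033.0  # years; basis of the conventional radiocarbon age

def radiocarbon_age(fraction_modern: float) -> float:
    """Conventional 14C age (years BP) from fraction modern."""
    return -LIBBY_MEAN_LIFE * math.log(fraction_modern)

# Invented fraction-modern values for a long-chain fatty acid and bulk TOC.
fm_fatty_acid, fm_toc = 0.78, 0.85

age_fa = radiocarbon_age(fm_fatty_acid)
age_toc = radiocarbon_age(fm_toc)
print(f"fatty acid age: {age_fa:.0f} yr, TOC age: {age_toc:.0f} yr, "
      f"offset: {age_fa - age_toc:.0f} yr")
```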

Relevance:

30.00%

Publisher:

Abstract:

Phytoplankton are the basis of marine food webs, and affect biogeochemical cycles. As CO2 levels increase, shifts in the frequencies and physiology of ecotypes within phytoplankton groups will affect their nutritional value and biogeochemical function. However, studies so far are based on a few representative genotypes from key species. Here, we measure changes in cellular function and growth rate at atmospheric CO2 concentrations predicted for the year 2100 in 16 ecotypes of the marine picoplankton Ostreococcus. We find that variation in plastic responses among ecotypes is on par with published between-genera variation, so the responses of one or a few ecotypes cannot estimate changes to the physiology or composition of a species under CO2 enrichment. We show that ecotypes best at taking advantage of CO2 enrichment by changing their photosynthesis rates most should increase in relative fitness, and so in frequency in a high-CO2 environment. Finally, information on sampling location, and not phylogenetic relatedness, is a good predictor of ecotypes likely to increase in frequency in this system.
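
The claim that ecotypes with the largest plastic increase in photosynthesis should rise in frequency can be illustrated with standard haploid selection arithmetic: after one generation, an ecotype's frequency is its current frequency weighted by its relative fitness. The sketch below iterates that recursion for invented fitness values; it is not a model fitted in the paper.

```python
import numpy as np

# Invented relative fitnesses (growth-rate based) of four Ostreococcus ecotypes at high CO2.
fitness = np.array([1.00, 1.05, 0.98, 1.10])
freq = np.full(4, 0.25)                 # start at equal frequencies

for generation in range(50):
    # Haploid selection: weight each frequency by fitness, then renormalize.
    freq = freq * fitness
    freq /= freq.sum()

print(np.round(freq, 3))                # the fittest ecotype dominates after 50 generations
```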

Relevance:

30.00%

Publisher:

Abstract:

We investigated the effect of an elevated partial pressure of CO2 (pCO2) on the photosynthesis and growth of four phylotypes (ITS2 types A1, A13, A2, and B1) from the genus Symbiodinium, a diverse dinoflagellate group that is important, both free-living and in symbiosis, for the viability of cnidarians and is thus a potentially important model dinoflagellate group. The response of Symbiodinium to elevated pCO2 was phylotype-specific. Phylotypes A1 and B1 were largely unaffected by a doubling in pCO2; in contrast, the growth rate of A13 and the photosynthetic capacity of A2 both increased by ~60%. In no case was there an effect of ocean acidification (OA) on respiration (dark- or light-dependent) for any of the phylotypes examined. Our observations suggest that OA might preferentially select among free-living populations of Symbiodinium, with implications for future symbioses that rely on algal acquisition from the environment (i.e., horizontal transmission). Furthermore, the carbon environment within the host could differentially affect the physiology of different Symbiodinium phylotypes. The range of responses we observed also highlights that the choice of species is an important consideration in OA research and that further investigation across phylogenetic diversity, for both the direction of effect and the underlying mechanism(s) involved, is warranted.

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we consider the secure beamforming design for an underlay cognitive radio multiple-input single-output broadcast channel in the presence of multiple passive eavesdroppers. Our goal is to design a jamming noise (JN) transmit strategy to maximize the secrecy rate of the secondary system. By utilizing the zero-forcing method to eliminate the interference caused by the JN to the secondary user, we study the joint optimization of the information and JN beamforming for secrecy rate maximization of the secondary system while satisfying all the interference power constraints at the primary users, as well as the per-antenna power constraint at the secondary transmitter. For an optimal beamforming design, the original problem is a nonconvex program, which can be reformulated as a convex program by applying the rank relaxation method. To this end, we prove that the rank relaxation is tight and propose a barrier interior-point method to solve the resulting saddle-point problem based on a duality result. To find the globally optimal solution, we transform the considered problem into an unconstrained optimization problem. We then employ the Broyden-Fletcher-Goldfarb-Shanno (BFGS) method to solve the resulting unconstrained problem, which reduces the complexity significantly compared to conventional methods. Simulation results show the fast convergence of the proposed algorithm and substantial performance improvements over existing approaches.
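
The zero-forcing step described above (choosing the jamming-noise beamformer so that it causes no interference at the secondary user) amounts to projecting the jamming signal onto the null space of the secondary user's channel. A minimal sketch with an invented channel, using an SVD-based null-space basis, is given below; it covers only this one step, not the full secrecy-rate optimization of the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
n_tx = 4                                   # transmit antennas at the secondary transmitter (invented)

# Invented complex channel from the secondary transmitter to the secondary user (1 x n_tx).
h_su = rng.standard_normal(n_tx) + 1j * rng.standard_normal(n_tx)

# Orthonormal basis of the null space of h_su via the SVD: the right singular
# vectors associated with zero singular values of the 1 x n_tx channel matrix.
_, _, vh = np.linalg.svd(h_su.reshape(1, -1))
null_basis = vh.conj().T[:, 1:]            # n_tx x (n_tx - 1), spans null(h_su)

# Any jamming beamformer built from this basis is invisible to the secondary user.
w_jam = null_basis @ (rng.standard_normal(n_tx - 1) + 1j * rng.standard_normal(n_tx - 1))
w_jam /= np.linalg.norm(w_jam)

print("leakage to secondary user:", abs(h_su @ w_jam))   # ~0 up to numerical precision
```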