981 results for Sequential error ratio
Abstract:
Background: The production of high yields of recombinant proteins is an enduring bottleneck in the post-genomic sciences that has yet to be addressed in a truly rational manner. Typically, eukaryotic protein production experiments have relied on varying expression construct cassettes such as promoters and tags, or culture process parameters such as pH, temperature and aeration, to enhance yields. These approaches require repeated rounds of trial-and-error optimization and cannot provide mechanistic insight into the biology of recombinant protein production. We published an early transcriptome analysis that identified genes implicated in successful membrane protein production experiments in yeast. While there has been a subsequent explosion in such analyses in a range of production organisms, no one has yet exploited the genes identified. The aim of this study was to use the results of our previous comparative transcriptome analysis to engineer improved yeast strains and thereby gain an understanding of the mechanisms involved in high-yielding protein production hosts. Results: We show that tuning BMS1 transcript levels in a doxycycline-dependent manner resulted in optimized yields of functional membrane and soluble protein targets. Online flow microcalorimetry demonstrated that there had been a substantial metabolic change in cells cultured under high-yielding conditions, and in particular that high-yielding cells were more metabolically efficient. Polysome profiling showed that the key molecular event contributing to this metabolically efficient, high-yielding phenotype is a perturbation of the ratio of 60S to 40S ribosomal subunits from approximately 1:1 to 2:1, and correspondingly of 25S:18S ratios from 2:1 to 3:1. This result is consistent with the role of the gene product of BMS1 in ribosome biogenesis. Conclusion: This work demonstrates the power of a rational approach to recombinant protein production by using the results of transcriptome analysis to engineer improved strains, thereby revealing the underlying biological events involved.
Abstract:
Recently, within the machine learning and spatial statistics communities, many papers have explored the potential of reduced-rank representations of the covariance matrix, often referred to as projected or fixed rank approaches. In such methods the covariance function of the posterior process is represented by a reduced-rank approximation chosen such that there is minimal information loss. In this paper a sequential framework for inference in such projected processes is presented, in which the observations are considered one at a time. We introduce a C++ library for carrying out such projected, sequential estimation, which adds several novel features. In particular, we have incorporated the ability to use a generic observation operator, or sensor model, to permit data fusion. We can also cope with a range of observation error characteristics, including non-Gaussian observation errors. Inference for the variogram parameters is based on maximum likelihood estimation. We illustrate the projected sequential method in application to synthetic and real data sets. We discuss the software implementation and suggest possible future extensions.
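The core of such a sequential scheme is a rank-one Gaussian update of the posterior over the projected (inducing-point) coefficients, applied once per observation. The NumPy sketch below illustrates that idea under simple assumptions (a squared-exponential covariance with fixed hyperparameters and Gaussian noise); it is not the C++ library described in the abstract, and all names and parameter values are illustrative.

```python
import numpy as np

def rbf(a, b, variance=1.0, lengthscale=1.0):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    d2 = (np.atleast_1d(a)[:, None] - np.atleast_1d(b)[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

# Inducing (projection) points defining the reduced-rank representation.
U = np.linspace(0.0, 10.0, 15)                   # hypothetical inducing inputs
Kuu = rbf(U, U) + 1e-8 * np.eye(U.size)          # jitter for numerical stability

# Gaussian posterior over the inducing values u ~ N(m, S), initialised to the prior.
m, S = np.zeros(U.size), Kuu.copy()

def assimilate(x, y, noise_var, m, S):
    """Rank-one (Kalman-style) update absorbing one observation y = f(x) + eps."""
    # Under the projected process, f(x) is approximated as a @ u.
    a = np.linalg.solve(Kuu, rbf(U, x)).ravel()
    v = S @ a
    gain = v / (a @ v + noise_var)
    return m + gain * (y - a @ m), S - np.outer(gain, v)

# Observations arrive one at a time, e.g. streamed from a sensor network.
rng = np.random.default_rng(0)
xs = rng.uniform(0, 10, 200)
ys = np.sin(xs) + rng.normal(scale=0.3, size=xs.size)
for x, y in zip(xs, ys):
    m, S = assimilate(x, y, noise_var=0.09, m=m, S=S)
```

Because each update costs O(m^2) for m inducing points, a full sweep through the data scales linearly in the number of observations, which is the attraction of the one-at-a-time formulation.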
Abstract:
Large monitoring networks are becoming increasingly common and can generate large datasets, from thousands to millions of observations in size, often with high temporal resolution. Processing large datasets using traditional geostatistical methods is prohibitively slow, and in real-world applications different types of sensor can be found across a monitoring network. Heterogeneity in the error characteristics of different sensors, both in terms of distribution and magnitude, presents problems for generating coherent maps. An assumption in traditional geostatistics is that observations are made directly of the underlying process being studied and that the observations are contaminated with Gaussian errors. Under this assumption, sub-optimal predictions will be obtained if the error characteristics of the sensor are effectively non-Gaussian. One approach, model-based geostatistics, places a Gaussian process prior over the (latent) process being studied, with the sensor model forming part of the likelihood term. One problem with this type of approach is that the corresponding posterior distribution is non-Gaussian and computationally demanding, as Monte Carlo methods have to be used. An extension of a sequential, approximate Bayesian inference method enables observations with arbitrary likelihoods to be treated within a projected process kriging framework, which is less computationally intensive. The approach is illustrated using a simulated dataset with a range of sensor models and error characteristics.
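One way to handle an arbitrary (non-Gaussian) sensor likelihood within such a sequential Gaussian approximation is to moment-match the one-dimensional marginal of the affected latent variable and then propagate the matched moments back through a rank-one update, in the spirit of assumed-density filtering. The sketch below is illustrative only: the Gauss-Hermite quadrature, the Student-t sensor model and all dimensions and parameters are assumptions, not the method or implementation referred to in the abstract.

```python
import numpy as np
from scipy import stats

def moment_match(mu, s2, loglik, n_quad=41):
    """Posterior mean/variance of a scalar latent g ~ N(mu, s2) under an
    arbitrary likelihood p(y|g), computed by Gauss-Hermite quadrature."""
    x, w = np.polynomial.hermite.hermgauss(n_quad)
    g = mu + np.sqrt(2.0 * s2) * x                 # quadrature nodes under the prior
    lik = np.exp(loglik(g))
    Z = np.sum(w * lik) / np.sqrt(np.pi)
    m1 = np.sum(w * lik * g) / np.sqrt(np.pi) / Z
    m2 = np.sum(w * lik * g**2) / np.sqrt(np.pi) / Z
    return m1, m2 - m1**2

def assimilate_non_gaussian(a, loglik, m, S):
    """Fold one observation with an arbitrary sensor likelihood into the
    Gaussian approximation N(m, S) over the projected-process coefficients."""
    mu, s2 = a @ m, a @ S @ a                      # marginal of g = a @ u
    mu_new, s2_new = moment_match(mu, s2, loglik)
    v = S @ a
    m = m + v * (mu_new - mu) / s2
    S = S - np.outer(v, v) * (1.0 - s2_new / s2) / s2
    return m, S

# Tiny standalone example: a heavy-tailed (Student-t) sensor reading an
# outlier-prone instrument, with a 3-dimensional projected process.
y_obs = 1.7
loglik = lambda g: stats.t.logpdf(y_obs, df=3, loc=g, scale=0.3)
m, S = np.zeros(3), np.eye(3)
a = np.array([0.2, 0.5, 0.3])
m, S = assimilate_non_gaussian(a, loglik, m, S)
```

With a Gaussian likelihood this update reduces exactly to the usual Kalman-style correction, so the non-Gaussian case is handled at essentially the same cost plus a one-dimensional quadrature.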
Abstract:
Heterogeneous datasets arise naturally in most applications due to the use of a variety of sensors and measuring platforms. Such datasets can be heterogeneous in terms of the error characteristics and sensor models. Treating such data is most naturally accomplished using a Bayesian or model-based geostatistical approach; however, such methods generally scale rather badly with the size of the dataset and require computationally expensive Monte Carlo based inference. Recently, within the machine learning and spatial statistics communities, many papers have explored the potential of reduced-rank representations of the covariance matrix, often referred to as projected or fixed rank approaches. In such methods the covariance function of the posterior process is represented by a reduced-rank approximation chosen such that there is minimal information loss. In this paper a sequential Bayesian framework for inference in such projected processes is presented. The observations are considered one at a time, which avoids the need for the high-dimensional integrals typically required in a Bayesian approach. A C++ library, gptk, which is part of the INTAMAP web service, is introduced; it implements projected, sequential estimation and adds several novel features. In particular, the library includes the ability to use a generic observation operator, or sensor model, to permit data fusion. It is also possible to cope with a range of observation error characteristics, including non-Gaussian observation errors. Inference for the covariance parameters is explored, including the impact of the projected process approximation on likelihood profiles. We illustrate the projected sequential method in application to synthetic and real datasets. Limitations and extensions are discussed.
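A generic observation operator in this setting means that each observation is a known linear functional of the latent field (a point reading, an areal average over a sensor footprint, and so on), which keeps the sequential update linear in the projected coefficients and makes data fusion straightforward. The sketch below illustrates fusing a point sensor and an areal-average sensor into one posterior; it does not reproduce the gptk API, and all function names, kernels and values are illustrative assumptions.

```python
import numpy as np

def rbf(a, b, variance=1.0, lengthscale=1.0):
    d2 = (np.atleast_1d(a)[:, None] - np.atleast_1d(b)[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

U = np.linspace(0.0, 10.0, 15)                   # inducing inputs of the projected process
Kuu = rbf(U, U) + 1e-8 * np.eye(U.size)
m, S = np.zeros(U.size), Kuu.copy()

def assimilate_operator(x_support, h, y, noise_var, m, S):
    """One observation y = sum_i h[i] * f(x_support[i]) + eps, i.e. a linear
    observation operator h acting on the latent field (the data-fusion case)."""
    # Projected process: f(x_support) ~= Kxu @ Kuu^{-1} @ u, so y is linear in u.
    A = np.linalg.solve(Kuu, rbf(U, x_support))  # shape (n_inducing, n_support)
    a = A @ h
    v = S @ a
    gain = v / (a @ v + noise_var)
    return m + gain * (y - a @ m), S - np.outer(gain, v)

# A point sensor and an areal-average (footprint) sensor fused into one posterior.
m, S = assimilate_operator(np.array([3.2]), np.array([1.0]), 0.4, 0.05, m, S)
footprint = np.linspace(6.0, 7.0, 5)
m, S = assimilate_operator(footprint, np.full(5, 0.2), -0.1, 0.1, m, S)
```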
Abstract:
Orthogonal Frequency-Division Multiplexing (OFDM) has proved to be a promising technology for transmission at higher data rates. Multicarrier Code-Division Multiple Access (MC-CDMA) is a transmission technique that combines the advantages of OFDM and Code-Division Multiple Access (CDMA), allowing high transmission rates over severely time-dispersive multi-path channels without the need for a complex receiver implementation. MC-CDMA also exploits frequency diversity via the different subcarriers, and therefore allows high code rate systems to achieve good Bit Error Rate (BER) performance. Furthermore, the spreading in the frequency domain makes the time synchronization requirement much less stringent than in traditional direct-sequence CDMA schemes. There are, however, still some problems with MC-CDMA. One is the high Peak-to-Average Power Ratio (PAPR) of the transmit signal. High PAPR leads to nonlinear distortion in the amplifier and results in inter-carrier self-interference plus out-of-band radiation. Suppressing Multiple Access Interference (MAI) is another crucial problem in MC-CDMA systems. Imperfect cross-correlation characteristics of the spreading codes and multipath fading destroy the orthogonality among users, causing MAI, which produces serious BER degradation. Moreover, in the uplink the signals received at the base station are always asynchronous; this also destroys the orthogonality among users and hence generates MAI, which degrades system performance. Beyond these two problems, external interference must always be considered seriously in any communication system. In this dissertation, we design a novel MC-CDMA system with low PAPR and mitigated MAI. New semi-blind channel estimation and multi-user data detection based on Parallel Interference Cancellation (PIC) are applied in the system. Low-Density Parity-Check (LDPC) codes are also introduced into the system to improve performance. Different interference models for multi-carrier communication systems are analyzed, and effective interference suppression for MC-CDMA systems is then employed. The experimental results indicate that our system not only significantly reduces the PAPR and MAI but also effectively suppresses outside interference with low complexity. Finally, we present a practical cognitive application of the proposed system on a software-defined radio platform.
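For reference, the PAPR discussed here is simply the peak instantaneous power of the transmitted multicarrier signal divided by its average power. The NumPy sketch below computes it for one downlink MC-CDMA symbol built with Walsh-Hadamard spreading and an oversampled IFFT; the parameters are illustrative and the code does not reproduce the dissertation's system design.

```python
import numpy as np
from scipy.linalg import hadamard

def mc_cdma_papr(n_users=8, spread_len=64, oversample=4, seed=0):
    """PAPR (dB) of one MC-CDMA symbol: user QPSK symbols are spread with
    Walsh-Hadamard codes across the subcarriers, then OFDM-modulated via IFFT."""
    rng = np.random.default_rng(seed)
    codes = hadamard(spread_len)[:n_users]                 # one spreading code per user
    qpsk = (rng.choice([-1, 1], n_users)
            + 1j * rng.choice([-1, 1], n_users)) / np.sqrt(2)
    subcarriers = qpsk @ codes                             # frequency-domain chips
    # Zero-padded IFFT approximates the continuous-time envelope (oversampling).
    padded = np.concatenate([subcarriers,
                             np.zeros((oversample - 1) * spread_len)])
    x = np.fft.ifft(padded)
    papr = np.max(np.abs(x) ** 2) / np.mean(np.abs(x) ** 2)
    return 10.0 * np.log10(papr)

print(mc_cdma_papr())   # one random realisation; the value varies from symbol to symbol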
Abstract:
Trials in a temporal two-interval forced-choice discrimination experiment consist of two sequential intervals presenting stimuli that differ from one another in magnitude along some continuum. The observer must report in which interval the stimulus had the larger magnitude. The standard difference model from signal detection theory analyses posits that order of presentation should not affect the results of the comparison, a property known as the balance condition (J.-C. Falmagne, 1985, in Elements of Psychophysical Theory). But empirical data prove otherwise and consistently reveal what Fechner (1860/1966, in Elements of Psychophysics) called time-order errors, whereby the magnitude of the stimulus presented in one of the intervals is systematically underestimated relative to the other. Here we discuss sensory factors (temporary desensitization) and procedural glitches (short interstimulus or intertrial intervals and response bias) that might explain the time-order error, and we derive a formal model indicating how these factors make observed performance vary with presentation order despite a single underlying mechanism. Experimental results are also presented that illustrate the commonly observed failure of the balance condition and test the hypothesis that time-order errors result from contamination by the factors included in the model.
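The failure of the balance condition is easy to reproduce in simulation: under the difference model, adding a constant bias to the sensory effect of one interval makes proportion correct depend on which interval carried the larger stimulus. The sketch below is such a simulation under assumed illustrative parameters; it is not the formal model derived in the paper.

```python
import numpy as np

def simulate_2ifc(d_standard=1.0, d_comparison=1.3, bias=0.25,
                  n_trials=100_000, seed=1):
    """Two-interval forced choice under the difference model.

    The decision variable is the difference of the two noisy sensory effects,
    with an additive bias acting on the second interval (a time-order effect);
    the observer reports the interval with the larger effect.  Returns the
    proportion correct for each presentation order of the comparison stimulus."""
    rng = np.random.default_rng(seed)
    pc = {}
    for order in ("comparison_first", "comparison_second"):
        if order == "comparison_first":
            s1, s2 = d_comparison, d_standard
            correct_sign = -1          # correct response: "interval 1 larger"
        else:
            s1, s2 = d_standard, d_comparison
            correct_sign = +1          # correct response: "interval 2 larger"
        e1 = s1 + rng.standard_normal(n_trials)
        e2 = s2 + rng.standard_normal(n_trials) + bias
        decision = e2 - e1
        pc[order] = np.mean(np.sign(decision) == correct_sign)
    return pc

print(simulate_2ifc())   # unequal proportions correct => the balance condition fails
```

With bias set to zero the two proportions coincide, recovering the balance condition that the standard difference model predicts.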
Abstract:
Phytoplankton are the basis of marine food webs and affect biogeochemical cycles. As CO2 levels increase, shifts in the frequencies and physiology of ecotypes within phytoplankton groups will affect their nutritional value and biogeochemical function. However, studies so far have been based on a few representative genotypes from key species. Here, we measure changes in cellular function and growth rate at the atmospheric CO2 concentrations predicted for the year 2100 in 16 ecotypes of the marine picoplankton Ostreococcus. We find that variation in plastic responses among ecotypes is on par with published between-genera variation, so the responses of one or a few ecotypes cannot be used to estimate changes to the physiology or composition of a species under CO2 enrichment. We show that the ecotypes best able to take advantage of CO2 enrichment, by changing their photosynthesis rates the most, should increase in relative fitness, and so in frequency, in a high-CO2 environment. Finally, sampling location, and not phylogenetic relatedness, is a good predictor of which ecotypes are likely to increase in frequency in this system.
Abstract:
The stable hydrogen isotope composition of lipid biomarkers, such as alkenones, is a promising new tool for the improvement of palaeosalinity reconstructions. Laboratory studies confirmed the correlation between lipid biomarker δD composition (δDlipid), water δD composition (δDH2O) and salinity, yet there is limited insight into the applicability of this proxy in oceanic environments. To fill this gap, we test the use of the δD composition of alkenones (δDC37) and palmitic acid (δDPA) as salinity proxies using samples of surface suspended material along the distinct salinity gradient induced by the Amazon Plume. Our results indicate a positive correlation between salinity and δDH2O, while the relationship between δDH2O and δDlipid is more complex: δDPA correlates strongly with δDH2O (r² = 0.81) and shows a salinity-dependent isotopic fractionation factor. δDC37 only correlates with δDH2O in a small number (n = 8) of samples with alkenone concentrations > 10 ng L⁻¹, and there is no correlation if all samples are taken into account. These findings are mirrored by alkenone-based temperature reconstructions, which are inaccurate for samples with low alkenone concentrations. Deviations in δDC37 and temperature are likely caused by limited haptophyte algal growth due to low salinity and light limitation imposed by the Amazon Plume. Our study confirms the applicability of δDlipid as a salinity proxy in oceanic environments, but it raises a note of caution concerning regions where low alkenone production can be expected due to low salinity and light limitation, for instance under strong riverine discharge.
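For readers unfamiliar with the proxy, the quantities involved are δD values in permil and the apparent fractionation factor between lipid and water, α = (δDlipid + 1000)/(δDH2O + 1000). The sketch below shows how a δDH2O-salinity calibration and the apparent fractionation might be computed; all numbers are synthetic placeholders, not data from this study.

```python
import numpy as np

def fractionation_factor(delta_lipid, delta_water):
    """alpha_lipid/water from delta-D values expressed in permil (vs. VSMOW)."""
    return (delta_lipid + 1000.0) / (delta_water + 1000.0)

# Synthetic placeholder data: paired surface-water delta-D (permil) and salinity,
# plus delta-D of palmitic acid from the same stations.
salinity = np.array([15.0, 20.0, 25.0, 30.0, 33.0, 36.0])
dD_water = np.array([-20.0, -14.0, -8.0, -2.0, 2.0, 6.0])
dD_pa = np.array([-190.0, -182.0, -175.0, -168.0, -163.0, -158.0])

# Linear calibration of water delta-D against salinity (ordinary least squares).
slope, intercept = np.polyfit(salinity, dD_water, 1)
print(f"dD_water ~ {slope:.2f} * salinity + {intercept:.2f}")

# Apparent fractionation between palmitic acid and water along the gradient.
alpha = fractionation_factor(dD_pa, dD_water)
epsilon = (alpha - 1.0) * 1000.0     # permil
print(np.round(epsilon, 1))
```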
Abstract:
Excess nutrient loads carried by streams and rivers are a great concern for environmental resource managers. In agricultural regions, excess loads are transported downstream to receiving water bodies, potentially causing algal blooms, which can lead to numerous ecological problems. To better understand nutrient load transport, and to develop appropriate water management plans, it is important to have accurate estimates of annual nutrient loads. This study used a Monte Carlo sub-sampling method and error-corrected statistical models to estimate annual nitrate-N loads from two watersheds in central Illinois. The performance of three load estimation methods (the seven-parameter log-linear model, the ratio estimator, and the flow-weighted averaging estimator) applied at one-, two-, four-, six-, and eight-week sampling frequencies was compared. Five error correction techniques (the existing composite method and four new error correction techniques developed in this study) were applied to each combination of sampling frequency and load estimation method. On average, the most accurate error correction technique (proportional rectangular) resulted in 15% and 30% more accurate load estimates, compared with the most accurate uncorrected load estimation method (the ratio estimator), for the two watersheds. Using error correction methods, it is possible to design more cost-effective monitoring plans by achieving the same load estimation accuracy with fewer observations. Finally, the optimum combinations of monitoring threshold and sampling frequency that minimize the number of samples required to achieve specified levels of accuracy in load estimation were determined. For one- to three-week sampling frequencies, combined threshold/fixed-interval monitoring approaches produced the best outcomes, while fixed-interval-only approaches produced the most accurate results for four- to eight-week sampling frequencies.
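As a point of reference for the estimators compared, the basic ratio estimator scales the flow-weighted mean concentration of the sampled days by the total annual discharge, which is assumed to be known from continuous flow gauging. The sketch below applies it to a synthetic daily record sub-sampled at a weekly interval; it omits the bias-correction factor often used with ratio estimators and the error correction techniques developed in this study, and all values are made up.

```python
import numpy as np

def ratio_estimator(daily_flow, sampled_days, sampled_conc):
    """Basic ratio estimator of annual load: flow-weighted mean concentration
    on the sampled days, scaled by the total annual discharge."""
    q_sampled = daily_flow[sampled_days]
    fw_mean_conc = np.sum(sampled_conc * q_sampled) / np.sum(q_sampled)
    return fw_mean_conc * np.sum(daily_flow)

# Synthetic year of daily flow (m3/day) and nitrate-N concentration (mg/L = g/m3),
# with concentration loosely tracking flow as is common in tile-drained watersheds.
rng = np.random.default_rng(7)
flow = np.exp(rng.normal(11.0, 0.6, 365))
conc = 5.0 + 3.0 * (flow / flow.mean() - 1.0) + rng.normal(0, 0.5, 365)

true_load = np.sum(conc * flow)                  # grams of nitrate-N per year
weekly = np.arange(0, 365, 7)                    # one-week fixed-interval sampling
estimate = ratio_estimator(flow, weekly, conc[weekly])
print(f"relative error: {100 * (estimate - true_load) / true_load:+.1f}%")
```

Repeating this with many random sub-samples (the Monte Carlo sub-sampling idea) gives the distribution of estimation errors for a given sampling frequency.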
Abstract:
We develop energy norm a posteriori error estimation for hp-version discontinuous Galerkin (DG) discretizations of elliptic boundary-value problems on 1-irregularly, isotropically refined affine hexahedral meshes in three dimensions. We derive a reliable and efficient indicator for the errors measured in terms of the natural energy norm. The ratio of the efficiency and reliability constants is independent of the local mesh sizes and depends only weakly on the polynomial degrees. In our analysis we make use of an hp-version averaging operator in three dimensions, which we explicitly construct and analyze. We use our error indicator in an hp-adaptive refinement algorithm and illustrate its practical performance in a series of numerical examples. Our numerical results indicate that exponential rates of convergence are achieved for problems with smooth solutions, as well as for problems with isotropic corner singularities.
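For orientation, a residual-based energy-norm indicator for such a DG discretization typically sums element residual and face jump contributions with hp-dependent weights. The schematic form below, written for the model problem -Δu = f, is indicative only; the weights shown are assumptions and the paper's exact indicator and hp-version averaging operator are not reproduced here.

```latex
% Schematic residual-based energy-norm indicator for an hp-DG discretization
% of -\Delta u = f (weights indicative only).
\[
  \eta^2 \;=\; \sum_{K\in\mathcal{T}} \eta_K^2,
  \qquad
  \eta_K^2 \;=\;
    \frac{h_K^2}{p_K^2}\,\| f + \Delta u_{hp} \|_{L^2(K)}^2
    \;+\; \frac{h_K}{p_K}\,\| [\![ \nabla u_{hp} ]\!] \|_{L^2(\partial K\setminus\partial\Omega)}^2
    \;+\; \frac{p_K^2}{h_K}\,\| [\![ u_{hp} ]\!] \|_{L^2(\partial K)}^2 .
\]
% Reliability and efficiency then take the generic form
\[
  \| u - u_{hp} \|_{E}^2 \;\le\; C_{\mathrm{rel}}\,\eta^2,
  \qquad
  \eta^2 \;\le\; C_{\mathrm{eff}}\,\bigl( \| u - u_{hp} \|_{E}^2 + \mathrm{osc}^2 \bigr).
\]
```

The abstract's statement is then that the ratio of these two constants is independent of the local mesh sizes h_K and depends only weakly on the polynomial degrees p_K.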
Abstract:
The one-dimensional propagation of a combustion wave through a premixed solid fuel with two-stage kinetics is studied. We re-examine the analysis of a single-reaction travelling wave and extend it to the case of two-stage reactions. We derive an expression for the travelling wave speed in the limit of large activation energy for both reactions. The analysis shows that when both reactions are exothermic, the wave structure is similar to that of the single-reaction case. However, when the second reaction is endothermic, the wave structure can differ significantly from the single-reaction case; in particular, as might be expected, a travelling wave does not necessarily exist. We establish conditions in the large activation energy limit for non-existence of the wave, and for monotonicity of the temperature profile in the travelling wave.
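The starting point of such an analysis is the travelling-wave reduction: substituting the moving coordinate ξ = x - ct turns the governing reaction-diffusion equations into a system of ODEs in which the wave speed c appears as an eigenvalue. The block below writes this out for a generic sequential two-step solid-fuel model with an illustrative nondimensionalisation; the paper's exact scalings and its asymptotic wave-speed formula are not reproduced.

```latex
% Generic two-step solid-fuel model (illustrative): temperature T diffuses,
% the solid reactant fractions y_1, y_2 do not; Arrhenius rates r_i.
\begin{align*}
  T_t     &= T_{xx} + Q_1\, r_1(T)\, y_1 + Q_2\, r_2(T)\, y_2,
           \qquad r_i(T) = A_i \exp\!\bigl(-E_i/(R T)\bigr),\\
  (y_1)_t &= -\, r_1(T)\, y_1,
           \qquad (y_2)_t = r_1(T)\, y_1 - r_2(T)\, y_2 .
\end{align*}
% Travelling-wave ansatz: seek solutions of permanent form moving at speed c,
% i.e. functions of \xi = x - ct, so that \partial_t \mapsto -c\, d/d\xi.
\begin{align*}
  T'' + c\, T' + Q_1\, r_1(T)\, y_1 + Q_2\, r_2(T)\, y_2 &= 0,\\
  c\, y_1' &= r_1(T)\, y_1,\\
  c\, y_2' &= r_2(T)\, y_2 - r_1(T)\, y_1 .
\end{align*}
```

Here Q_2 < 0 corresponds to an endothermic second reaction; the wave speed c is the eigenvalue selected by the boundary conditions as ξ → ±∞, and the large-activation-energy analysis referred to in the abstract evaluates this eigenvalue asymptotically.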