993 results for linear approximation
Abstract:
In addition to the importance of sample preparation and extract separation, MS detection is a key factor in the sensitive quantification of large undigested peptides. In this article, a linear ion trap MS (LIT-MS) and a triple quadrupole MS (TQ-MS) were compared for the detection of large peptides at subnanomolar concentrations. Natural brain natriuretic peptide, C-peptide, substance P and D-JNK-inhibitor peptide, a full D-amino acid therapeutic peptide, were chosen. They were detected by ESI with simultaneous MS(1) and MS(2) acquisition. With direct peptide infusion, MS(2) spectra revealed that fragmentation was peptide dependent: it was milder on the LIT-MS, whereas the TQ-MS required high collision energies to obtain high-intensity product ions. Peptide adsorption on surfaces was overcome, and peptide dilutions ranging from 0.1 to 25 nM were injected onto an ultra-high-pressure LC system fitted with a 1 mm i.d. analytical column and coupled to the MS instruments. No difference was observed between the two instruments in LC-MS(1) acquisitions. In LC-MS(2) acquisitions, however, the LIT-MS showed better sensitivity for the detection of large peptides. Indeed, with the three longer peptides, the typical fragmentation in the TQ-MS resulted in a dramatic loss of sensitivity (≥10×).
Abstract:
This report describes a new approach to scheduling highway-construction-type projects. The technique can accurately model linear activities and identify the controlling activity path on a linear schedule. Current scheduling practices cannot accomplish these two tasks with any accuracy for linear activities, leaving planners and managers suspicious of the information they provide. Basic linear scheduling is not a new technique, and many attempts have been made to apply it to various types of work in the past. However, the technique has never been widely used because it lacked an analytical approach to activity relationships and to the determination of controlling activities. The Linear Scheduling Model (LSM) developed in this report completes the linear scheduling technique by adding to it all of the analytical capabilities, including computer applications, present in CPM scheduling today. The LSM has tremendous potential and will likely have a significant impact on the way linear construction is scheduled in the future.
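To make the geometric idea concrete, here is a minimal sketch, not the report's LSM algorithm: each activity is a straight line in location-versus-time space, and a successor's start is shifted so a minimum time buffer to its predecessor is preserved at the critical location (all rates, lengths, and buffer values below are hypothetical).

```python
# Hypothetical sketch of linear-schedule interference checking.

def earliest_start(pred_start, pred_rate, succ_rate, length, buffer_days=1.0):
    """Earliest start of a successor so it keeps >= buffer_days behind its
    predecessor over a stretch of `length` km.

    Each activity reaches location x at t(x) = start + x / rate, so the gap
    t_succ(x) - t_pred(x) is linear in x and its minimum sits at an endpoint.
    """
    gap_slope = 1.0 / succ_rate - 1.0 / pred_rate
    critical_x = 0.0 if gap_slope >= 0 else length   # pinch point location
    return pred_start + critical_x / pred_rate - critical_x / succ_rate + buffer_days

# Example: clearing advances 0.5 km/day from day 0; paving does 0.8 km/day.
# Paving is faster, so the pinch point is at the far end of a 10 km job.
print(earliest_start(pred_start=0.0, pred_rate=0.5, succ_rate=0.8, length=10.0))  # 8.5
```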
Abstract:
PURPOSE: The longitudinal relaxation rate (R1) measured in vivo depends on the local microstructural properties of the tissue, such as macromolecular, iron, and water content. Here, we use whole-brain multiparametric in vivo data and a general linear relaxometry model to describe the dependence of R1 on these components. We explore a) the validity of having a single fixed set of model coefficients for the whole brain and b) the stability of the model coefficients in a large cohort. METHODS: Maps of magnetization transfer (MT) and effective transverse relaxation rate (R2*) were used as surrogates for macromolecular and iron content, respectively. Spatial variations in these parameters reflected variations in underlying tissue microstructure. A linear model was applied to the whole brain, including gray/white matter and deep brain structures, to determine the global model coefficients. Synthetic R1 values were then calculated using these coefficients and compared with the measured R1 maps. RESULTS: The model's validity was demonstrated by correspondence between the synthetic and measured R1 values and by high stability of the model coefficients across a large cohort. CONCLUSION: A single set of global coefficients can be used to relate R1, MT, and R2* across the whole brain. Our population study demonstrates the robustness and stability of the model. Magn Reson Med 73:1309-1314, 2015. © 2014 The Authors. Magnetic Resonance in Medicine published by Wiley Periodicals, Inc.
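The abstract does not reproduce the model equation; the following is a minimal sketch of how one global coefficient set could be fitted, assuming the linear form R1 ≈ β0 + β1·MT + β2·R2* (the coefficient values and synthetic maps are illustrative, not the study's data):

```python
import numpy as np

# Hypothetical voxelwise maps flattened to 1-D (synthetic data for illustration).
rng = np.random.default_rng(0)
n_vox = 10_000
mt = rng.uniform(0.5, 2.0, n_vox)        # magnetization transfer (a.u.)
r2s = rng.uniform(10.0, 50.0, n_vox)     # effective R2* (1/s)
r1 = 0.25 + 0.35 * mt + 0.005 * r2s + rng.normal(0, 0.01, n_vox)  # "measured" R1

# Fit one global set of coefficients by ordinary least squares.
X = np.column_stack([np.ones(n_vox), mt, r2s])
beta, *_ = np.linalg.lstsq(X, r1, rcond=None)

# Synthesize R1 from the coefficients and compare with the measured map.
r1_synth = X @ beta
print("coefficients:", beta)
print("RMSE:", np.sqrt(np.mean((r1_synth - r1) ** 2)))
```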
Abstract:
Polynomial constraint solving plays a prominent role in several areas of hardware and software analysis and verification, e.g., termination proving, program invariant generation and hybrid system verification, to name a few. In this paper we propose a new method for solving non-linear constraints based on encoding the problem into an SMT problem over linear arithmetic only. Unlike other existing methods, ours focuses on proving satisfiability of the constraints rather than unsatisfiability, which is more relevant in many applications, as we illustrate with several examples. Nevertheless, we also present new techniques, based on the analysis of unsatisfiable cores, that allow unsatisfiability to be proved efficiently for a broad class of problems. The power of our approach is demonstrated by means of extensive experiments comparing our prototype with state-of-the-art tools on benchmarks taken from both academia and industry.
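The abstract leaves the encoding unspecified; here is a minimal sketch of one standard way to reduce a non-linear constraint to linear arithmetic, case-splitting a bounded variable so every branch is linear, shown with the Z3 SMT solver (the constraints and bounds are illustrative):

```python
from z3 import Int, Solver, Or, And, sat

# Non-linear target: x*y == 12 together with the linear x + y == 7.
x, y = Int("x"), Int("y")

s = Solver()
s.add(x + y == 7)                      # linear as-is
s.add(x >= 1, x <= 6)                  # assumed finite bounds on x

# Case-split x over its domain: for each fixed value v the product x*y
# becomes the linear term v*y, so the encoding stays in linear arithmetic.
s.add(Or([And(x == v, v * y == 12) for v in range(1, 7)]))

if s.check() == sat:
    print(s.model())                   # e.g. [x = 3, y = 4]
```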
Abstract:
Biometric system performance can be improved by means of data fusion. Several kinds of information can be fused in order to obtain a more accurate classification (identification or verification) of an input sample. In this paper we present a method for computing the weights of a weighted-sum fusion of score combinations by means of a likelihood model. The maximum likelihood estimation is cast as a linear programming problem. The scores are derived from GMM classifiers, each working on a different feature extractor. Our experimental results assessed the robustness of the system against changes over time (different sessions) and against a change of microphone. The improvements obtained were significantly better (error bars of two standard deviations) than a uniform weighted sum, a uniform weighted product, or the best single classifier. The proposed method scales computationally with the number of scores to be fused in the same way as the simplex method for linear programming.
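The paper's exact likelihood model is not given in the abstract; the following is a hedged sketch of how fusion weights can come out of a linear program, here maximizing the margin between fused genuine and impostor scores with scipy.optimize.linprog (the margin formulation and the synthetic scores are assumptions for illustration, not the paper's model):

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical per-classifier scores: rows = trials, cols = classifiers.
rng = np.random.default_rng(1)
genuine = rng.normal([2.0, 1.5, 1.0], 0.5, size=(50, 3))
impostor = rng.normal([0.0, 0.3, 0.5], 0.5, size=(50, 3))

k = genuine.shape[1]
# Variables: [w_1..w_k, theta, t]; maximize the margin t around threshold theta.
c = np.zeros(k + 2)
c[-1] = -1.0                                   # linprog minimizes, so use -t

# -w.g_i + theta + t <= 0  (genuine trials above the threshold by >= t)
A_gen = np.hstack([-genuine, np.ones((len(genuine), 2))])
#  w.m_j - theta + t <= 0  (impostor trials below the threshold by >= t)
A_imp = np.hstack([impostor, -np.ones((len(impostor), 1)), np.ones((len(impostor), 1))])
A_ub = np.vstack([A_gen, A_imp])
b_ub = np.zeros(len(A_ub))

A_eq = np.array([[1.0] * k + [0.0, 0.0]])      # weights sum to one
b_eq = np.array([1.0])
bounds = [(0, None)] * k + [(None, None), (None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
w, theta, t = res.x[:k], res.x[k], res.x[k + 1]
print("weights:", w, "threshold:", theta, "margin:", t)
```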
Abstract:
This paper proposes a very fast method for blindly approximating a nonlinear mapping that transforms a sum of random variables. The estimation is surprisingly good even when the basic assumption is not satisfied. We use the method to provide a good initialization for inverting post-nonlinear mixtures and Wiener systems. Experiments show that the algorithm's speed is strongly improved and the asymptotic performance is preserved, at a very low extra computational cost.
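One classical way to blindly approximate such a mapping, consistent with the sum-of-random-variables setting, is Gaussianization: since the sum before the nonlinearity is nearly Gaussian by the central limit theorem, the inverse mapping can be estimated by matching the empirical distribution of the observations to a Gaussian. A minimal sketch under that assumption (not necessarily the paper's exact estimator):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

# Sum of independent variables (≈ Gaussian), passed through an unknown
# invertible nonlinearity.
s = rng.uniform(-1, 1, (8, 5000)).sum(axis=0)
x = np.tanh(0.8 * s)                      # observed, nonlinearly distorted

# Gaussianization: map empirical ranks of x to standard-normal quantiles.
ranks = np.argsort(np.argsort(x)) + 1
s_hat = norm.ppf(ranks / (len(x) + 1))

# s_hat should be linearly related to the true s (up to scale).
print("correlation:", np.corrcoef(s_hat, s)[0, 1])
```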
Abstract:
This paper deals with non-linear transformations for improving the performance of an entropy-based voice activity detector (VAD). The idea of using a non-linear transformation has already been applied in the field of speech linear prediction, or linear predictive coding (LPC), based on source separation techniques, where a score function is added to the classical equations in order to take into account the true distribution of the signal. We explore the possibility of estimating the entropy of frames after calculating their score function, instead of using the original frames. We observe that if the signal is clean, the estimated entropy is essentially the same; if the signal is noisy, however, the frames transformed using the score function may yield entropies that differ between voiced and unvoiced frames. Experimental evidence is given to show that this enables voice activity detection under high noise, where the simple entropy method fails.
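For reference, a minimal skeleton of the underlying entropy-based VAD (spectral entropy per frame); the score-function transform discussed above would be applied to each frame beforehand, and the signal, frame length, and threshold here are illustrative:

```python
import numpy as np

def spectral_entropy(frame):
    """Shannon entropy of the frame's normalized power spectrum: low for
    organized (voiced) frames, high for noise-like frames."""
    psd = np.abs(np.fft.rfft(frame)) ** 2
    p = psd / psd.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Illustrative signal: white noise followed by a noisy tone (the "voiced" part).
rng = np.random.default_rng(3)
fs, flen = 8000, 256
noise = rng.normal(0, 1, 4 * flen)
tone = np.sin(2 * np.pi * 200 * np.arange(4 * flen) / fs) + 0.1 * rng.normal(0, 1, 4 * flen)
signal = np.concatenate([noise, tone])

frames = signal.reshape(-1, flen)
entropies = np.array([spectral_entropy(f) for f in frames])
voiced = entropies < entropies.mean()     # tone frames have low spectral entropy
print(np.round(entropies, 2), voiced)
```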
Abstract:
This special issue aims to cover some problems related to non-linear and non-conventional speech processing. The origin of this volume is the ISCA Tutorial and Research Workshop on Non-Linear Speech Processing, NOLISP'09, held at the Universitat de Vic (Catalonia, Spain) on June 25–27, 2009. The series of NOLISP workshops, started in 2003, has become a biennial event whose aim is to discuss alternative techniques for speech processing that, in a sense, do not fit into mainstream approaches. A selection of papers based on the presentations delivered at NOLISP'09 has given rise to this issue of Cognitive Computation.
Abstract:
Acoustic waveform inversions are an increasingly popular tool for extracting subsurface information from seismic data. They are computationally much more efficient than elastic inversions. Naturally, an inherent disadvantage is that any elastic effects present in the recorded data are ignored in acoustic inversions. We investigate the extent to which elastic effects influence seismic crosshole data. Our numerical modeling studies reveal that in the presence of high-contrast interfaces, at which P-to-S conversions occur, elastic effects can dominate the seismic sections, even for experiments involving pressure sources and pressure receivers. Comparisons of waveform inversion results using a purely acoustic algorithm on synthetic data that are either acoustic or elastic show that subsurface models comprising small low-to-medium-contrast (≤30%) structures can be successfully resolved in the acoustic approximation. However, in the presence of extended high-contrast anomalous bodies, P-to-S conversions may substantially degrade the quality of the tomographic images. In particular, extended low-velocity zones are difficult to image. Likewise, relatively small low-velocity features are unresolved, even when advanced a priori information is included. One option for mitigating elastic effects is data windowing, which suppresses later-arriving seismic phases, such as shear waves. Our tests of this approach found it to be inappropriate, because elastic effects are also included in earlier-arriving wavetrains. Furthermore, data windowing removes later-arriving P-wave phases that may provide critical constraints on the tomograms. Finally, we investigated the extent to which acoustic inversions of elastic data are useful for time-lapse analyses of high-contrast engineered structures, for which accurate reconstruction of the subsurface structure is not as critical as imaging differential changes between sequential experiments. Based on a realistic scenario for monitoring a radioactive waste repository, we demonstrate that acoustic inversions of elastic data yield substantial distortions of the tomograms and also unreliable information on trends in the velocity changes.
Abstract:
The relationship between source separation and blind deconvolution is well known: if a filtered version of an unknown i.i.d. signal is observed, temporal independence between samples can be used to retrieve the original signal, in the same manner as spatial independence is used for source separation. In this paper we propose the use of a Genetic Algorithm (GA) to blindly invert linear channels. The use of a GA is justified when the number of samples is small, where gradient-like methods fail because of the poor estimation of the statistics.
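A minimal sketch of the idea, not the paper's exact algorithm: a GA searches for the taps of an FIR inverse filter, scoring candidates by the absolute kurtosis of the equalized output, which is maximal when the output is restored to the i.i.d. non-Gaussian source (the channel, filter length, and GA settings are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(4)

# Unknown i.i.d. non-Gaussian source filtered by an unknown channel.
s = rng.choice([-1.0, 1.0], size=2000)           # BPSK-like i.i.d. signal
x = np.convolve(s, [1.0, 0.6, -0.3], mode="full")[: len(s)]

def fitness(w):
    """Absolute excess kurtosis of the equalized output; restoring the
    i.i.d. source maximizes it (Gaussian mixtures score near zero)."""
    y = np.convolve(x, w, mode="full")[: len(x)]
    y = (y - y.mean()) / (y.std() + 1e-12)
    return abs(np.mean(y ** 4) - 3.0)

# Plain GA: elitist selection, blend crossover, Gaussian mutation.
pop = rng.normal(0, 0.5, (60, 8))                # 8-tap inverse filters
for gen in range(80):
    fit = np.array([fitness(w) for w in pop])
    elite = pop[np.argsort(fit)[::-1][:10]]      # keep the best 10
    children = [0.5 * (elite[rng.integers(10)] + elite[rng.integers(10)])
                + rng.normal(0, 0.05, 8) for _ in range(len(pop) - 10)]
    pop = np.vstack([elite, children])

fit = np.array([fitness(w) for w in pop])
print("best |kurtosis|:", fit.max())             # ≈ 2 when inversion succeeds
```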
Abstract:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending the corresponding approaches to the regional scale represents a major, and as yet largely unresolved, challenge. To address this problem, we have developed an upscaling procedure based on a Bayesian sequential simulation approach. This method is then applied to the stochastic integration of low-resolution, regional-scale electrical resistivity tomography (ERT) data with high-resolution, local-scale downhole measurements of the hydraulic and electrical conductivities. Finally, the overall viability of this upscaling approach is tested and verified by performing and comparing flow and transport simulations through the original and the upscaled hydraulic conductivity fields. Our results indicate that the proposed procedure does indeed yield remarkably faithful estimates of the regional-scale hydraulic conductivity structure and correspondingly reliable predictions of transport characteristics over relatively long distances.
Abstract:
A simple method using liquid chromatography-linear ion trap mass spectrometry for the simultaneous determination of testosterone glucuronide (TG), testosterone sulfate (TS), epitestosterone glucuronide (EG) and epitestosterone sulfate (ES) in urine samples was developed. For validation purposes, a urine containing no detectable amount of TG, TS and EG was selected and fortified with steroid conjugate standards. Quantification was performed using deuterated testosterone conjugates to correct for ion suppression/enhancement during ESI. Assay validation was performed in terms of lower limit of detection (1-3 ng/mL), recovery (89-101%), intraday precision (2.0-6.8%), interday precision (3.4-9.6%) and accuracy (101-103%). Application of the method to short-term stability testing of urine samples at temperatures ranging from 4 to 37°C over one week of storage led to the conclusion that the addition of sodium azide (10 mg/mL) is required for preservation of the analytes.
Abstract:
The objective of this study was to adapt a nonlinear model (Wang and Engel - WE) for simulating the phenology of maize (Zea mays L.), and to evaluate this model and a linear one (thermal time) in predicting the developmental stages of a field-grown maize variety. A field experiment was conducted in Santa Maria, RS, Brazil, during the 2005/2006 and 2006/2007 growing seasons, with seven sowing dates each. Dates of emergence, silking, and physiological maturity of the maize variety BRS Missões were recorded in six replications at each sowing date. Data collected in the 2005/2006 growing season were used to estimate the coefficients of the two models, and data collected in the 2006/2007 growing season were used as an independent data set for model evaluation. The nonlinear WE model accurately predicted the dates of silking and physiological maturity, with a lower root mean square error (RMSE) than the linear (thermal time) model. The overall RMSE for silking and physiological maturity was 2.7 and 4.8 days with the WE model, and 5.6 and 8.3 days with the thermal time model, respectively.
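For orientation, a minimal sketch of the two temperature-response formulations being compared: the Wang-Engel beta function and linear thermal time (growing degree days). The cardinal and base temperatures below are illustrative values for maize, not the calibrated coefficients of the study:

```python
import numpy as np

def we_temperature_response(t, tmin=8.0, topt=28.0, tmax=36.0):
    """Wang-Engel beta response f(T) in [0, 1]; zero outside the cardinal
    temperatures, 1 at the optimum (cardinal values here are illustrative)."""
    if t <= tmin or t >= tmax:
        return 0.0
    alpha = np.log(2.0) / np.log((tmax - tmin) / (topt - tmin))
    num = 2.0 * (t - tmin) ** alpha * (topt - tmin) ** alpha - (t - tmin) ** (2 * alpha)
    return num / (topt - tmin) ** (2 * alpha)

def thermal_time(mean_temps, tbase=10.0):
    """Classical linear model: accumulated growing degree days above Tbase."""
    return sum(max(t - tbase, 0.0) for t in mean_temps)

daily_t = [18, 22, 26, 30, 24, 20]                 # hypothetical daily means (°C)
print([round(we_temperature_response(t), 2) for t in daily_t])
print("GDD:", thermal_time(daily_t))
```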
Genetic analysis of visual evaluation scores of cattle using Bayesian threshold and linear models
Abstract:
The objective of this work was to compare the estimates of genetic parameters obtained in single-trait and two-trait Bayesian analyses, under linear and threshold animal models, for categorical morphological traits of Nellore cattle. Data on muscularity, physical structure and conformation were collected between 2000 and 2005 on 3,864 animals from 13 farms participating in the Nelore Brasil Program. Single-trait and two-trait Bayesian analyses were carried out under threshold and linear models. In general, both the threshold and the linear models were efficient in estimating genetic parameters for visual scores in single-trait Bayesian analyses. In the two-trait analyses, it was observed that, when continuous and categorical data were used together, the threshold model yielded genetic correlation estimates of greater magnitude than those of the linear model, while with categorical data only, the heritability estimates were similar. The advantage of the linear model was the shorter time spent processing the analyses. In the genetic evaluation of animals for visual scores, the use of the threshold or the linear model did not affect the ranking of animals by predicted breeding values, which indicates that both models can be used in breeding programs.
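A minimal sketch of the threshold-model idea underlying this comparison: observed categorical scores arise from an unobserved continuous liability cut at fixed thresholds, whereas a linear model treats the scores themselves as continuous (all variances and cut points below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)

# Threshold model: observed category = bin of an unobserved continuous
# liability l = genetic value + residual (illustrative h2 = 0.3, var(l) = 1).
n = 1000
breeding_value = rng.normal(0, np.sqrt(0.3), n)
residual = rng.normal(0, np.sqrt(0.7), n)
liability = breeding_value + residual

thresholds = [-0.8, 0.0, 0.8]                      # cut points for scores 1..4
score = np.digitize(liability, thresholds) + 1     # categorical visual score

# A linear model would regress on `score` directly; a threshold model
# instead infers the `liability` behind the categories.
print(np.bincount(score)[1:])                      # counts per score class
```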