53 results for Sums of squares


Relevance: 30.00%

Abstract:

OBJECTIVE: This 12-week study assessed the efficacy and tolerability of imeglimin as add-on therapy to the dipeptidyl peptidase-4 inhibitor sitagliptin in patients with type 2 diabetes inadequately controlled with sitagliptin monotherapy. RESEARCH DESIGN AND METHODS: In a multicenter, randomized, double-blind, placebo-controlled, parallel-group study, imeglimin (1,500 mg b.i.d.) or placebo was added to sitagliptin (100 mg q.d.) over 12 weeks in 170 patients with type 2 diabetes (mean age 56.8 years; BMI 32.2 kg/m²) that was inadequately controlled with sitagliptin alone (A1C ≥7.5%) during a 12-week run-in period. The primary efficacy end point was the change in A1C from baseline versus placebo; secondary end points included corresponding changes in fasting plasma glucose (FPG) levels, stratification by baseline A1C, and the percentage of A1C responders. RESULTS: Imeglimin reduced A1C levels (least-squares mean difference) from baseline (8.5%) by 0.60%, compared with an increase of 0.12% with placebo (between-group difference 0.72%, P < 0.001). The corresponding changes in FPG were -0.93 mmol/L with imeglimin vs. -0.11 mmol/L with placebo (P = 0.014). With imeglimin, the A1C level decreased by ≥0.5% in 54.3% of subjects vs. 21.6% with placebo (P < 0.001), and 19.8% of subjects receiving imeglimin achieved an A1C level of ≤7%, compared with 1.1% of subjects receiving placebo (P = 0.004). Imeglimin was generally well tolerated, with a safety profile comparable to that of placebo and no treatment-emergent adverse events related to imeglimin. CONCLUSIONS: Imeglimin demonstrated incremental efficacy benefits as add-on therapy to sitagliptin, with tolerability comparable to placebo, highlighting the potential for imeglimin to complement other oral antihyperglycemic therapies. © 2014 by the American Diabetes Association.
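
A minimal sketch of how such a between-group least-squares mean difference is typically estimated, via an ANCOVA-style regression of A1C change on a treatment indicator with baseline A1C as a covariate. The data, variable names and effect sizes below are simulated assumptions for illustration, not the trial's data or analysis code.

```python
# Hypothetical ANCOVA sketch on simulated data: estimating a between-group
# least-squares mean difference in A1C change. Not the study's actual analysis.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 170
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),          # 1 = imeglimin, 0 = placebo (assumed coding)
    "baseline_a1c": rng.normal(8.5, 0.8, n),   # run-in A1C (%)
})
# Simulate change from baseline with a treatment effect near the reported -0.72%.
df["a1c_change"] = np.where(df["treated"] == 1, -0.60, 0.12) + rng.normal(0, 0.7, n)

# ANCOVA: change ~ treatment + baseline covariate; the treatment coefficient
# is the adjusted (LS mean) difference versus placebo.
fit = smf.ols("a1c_change ~ treated + baseline_a1c", data=df).fit()
print(fit.params["treated"], fit.pvalues["treated"])
```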

Relevance: 30.00%

Abstract:

Data fluctuation across multiple measurements in Laser-Induced Breakdown Spectroscopy (LIBS) greatly affects the accuracy of quantitative analysis. A new LIBS quantitative analysis method based on a Robust Least Squares Support Vector Machine (RLS-SVM) regression model is proposed. The usual way to enhance analysis accuracy is to improve the quality and consistency of the emission signal, for example by averaging the spectral signals or applying spectrum standardization over a number of laser shots. The proposed method focuses instead on enhancing the robustness of the quantitative analysis regression model. The RLS-SVM regression model originates from the Weighted Least Squares Support Vector Machine (WLS-SVM) but has an improved segmented weighting function and residual error calculation based on the statistical distribution of the measured spectral data. Through the improved segmented weighting function, information from spectral data within the normal distribution is retained in the regression model, while information from outliers is down-weighted or removed. Copper concentration analysis experiments were carried out on 16 certified standard brass samples. The average relative standard deviation obtained from the RLS-SVM model was 3.06% and the root mean square error was 1.537%. The experimental results showed that the proposed method achieved better prediction accuracy and better modeling robustness than quantitative analysis methods based on Partial Least Squares (PLS) regression, the standard Support Vector Machine (SVM) and WLS-SVM. It was also demonstrated that the improved weighting function had better comprehensive performance in model robustness and convergence speed than the four known weighting functions.
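
The abstract does not reproduce the improved segmented weighting function, but the underlying idea can be sketched with the classic Hampel-type weights used in WLS-SVM: full weight for residuals in the normal range, a linear taper in a middle band, and near-zero weight for outliers. The thresholds and the MAD scale estimate below are standard assumptions, not the authors' exact scheme.

```python
# Sketch of a segmented residual-weighting function of the WLS-SVM kind
# (Hampel-style); thresholds c1, c2 and the MAD scale are assumptions.
import numpy as np

def segmented_weights(residuals, c1=2.5, c2=3.0):
    """Down-weight samples by standardized residual: weight 1 in the normal
    range, linear taper between c1 and c2, near zero beyond c2 (outliers)."""
    med = np.median(residuals)
    s = 1.483 * np.median(np.abs(residuals - med))   # robust scale via MAD
    z = np.abs((residuals - med) / s)
    w = np.ones_like(z)
    mid = (z > c1) & (z <= c2)
    w[mid] = (c2 - z[mid]) / (c2 - c1)               # linear taper
    w[z > c2] = 1e-4                                 # effectively discard outliers
    return w

# Typical use: fit an unweighted LS-SVM, compute residuals, then re-fit
# with these weights so outlying laser shots barely influence the model.
```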

Relevance: 30.00%

Abstract:

Measurements of area summation for luminance-modulated stimuli are typically confounded by variations in sensitivity across the retina. Recently we conducted a detailed analysis of sensitivity across the visual field (Baldwin et al., 2012) and found it to be well described by a bilinear “witch’s hat” function: sensitivity declines rapidly over the first 8 cycles or so, and more gently thereafter. Here we multiplied luminance-modulated stimuli (4 c/deg gratings and “Swiss cheeses”) by the inverse of the witch’s hat function to compensate for the inhomogeneity. This revealed summation functions that were straight lines (on double-log axes) with a slope of -1/4 extending to ≥33 cycles, demonstrating fourth-root summation of contrast over a wider area than has previously been reported for the central retina. Fourth-root summation is typically attributed to probability summation, but recent studies have rejected that interpretation in favour of a noisy energy model that performs local square-law transduction of the signal, adds noise at each location of the target, and then sums over the signal area. Modelling shows our results to be consistent with a wide-field application of such a contrast integrator. We reject a probability summation model, a quadratic model and a matched template model of our results under the assumptions of signal detection theory. We also reject the high threshold theory of contrast detection under the assumption of probability summation over area.
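
A fourth-root summation law means threshold contrast falls as area^(-1/4), i.e. a straight line of slope -1/4 on double-log axes. A minimal numerical sketch, with illustrative values rather than the study's data:

```python
# Fourth-root summation: log threshold vs log area has slope -1/4.
# Illustrative values only.
import numpy as np

area = np.array([1.0, 2, 4, 8, 16, 32])   # stimulus area (arbitrary units)
threshold = area ** -0.25                 # ideal fourth-root summation

slope, _ = np.polyfit(np.log10(area), np.log10(threshold), 1)
print(slope)   # -0.25: the signature of fourth-root summation
```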

Relevance: 30.00%

Abstract:

Background: Allergy is a form of hypersensitivity to normally innocuous substances, such as dust, pollen, foods or drugs. Allergens are small antigens that commonly provoke an IgE antibody response. There are two types of bioinformatics-based allergen prediction. The first approach follows the FAO/WHO Codex alimentarius guidelines and searches for sequence similarity. The second approach is based on identifying conserved allergenicity-related linear motifs. Both approaches assume that allergenicity is a linearly coded property. In the present study, we applied auto- and cross-covariance (ACC) pre-processing to sets of known allergens, developing alignment-independent models for allergen recognition based on the main chemical properties of amino acid sequences. Results: A set of 684 food, 1,156 inhalant and 555 toxin allergens was collected from several databases. A set of non-allergens from the same species was selected to mirror the allergen set. The amino acids in the protein sequences were described by three z-descriptors (z1, z2 and z3) and converted into uniform-length vectors by ACC transformation. Each protein was represented as a vector of 45 variables. Five machine learning methods for classification were applied to derive models for allergen prediction: discriminant analysis by partial least squares (DA-PLS), logistic regression (LR), decision tree (DT), naïve Bayes (NB) and k nearest neighbours (kNN). The best performing model was derived by kNN at k = 3; it was optimized, cross-validated and implemented in a server named AllerTOP, freely accessible at http://www.pharmfac.net/allertop. AllerTOP also predicts the most probable route of exposure and outperforms other servers for allergen prediction, with 94% sensitivity. Conclusions: AllerTOP is the first alignment-free server for in silico prediction of allergens based on the main physicochemical properties of proteins. Significantly, as well as allergenicity, AllerTOP is able to predict the route of allergen exposure: food, inhalant or toxin. © 2013 Dimitrov et al.; licensee BioMed Central Ltd.
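
A sketch of the ACC transform under the dimensions stated above: three z-descriptors combined pairwise over lags 1 to 5 gives 3 × 3 × 5 = 45 variables per protein, regardless of sequence length. The z-scale values below are random placeholders; the published z1-z3 tables would be used in practice.

```python
# Auto- and cross-covariance (ACC) sketch: variable-length sequence ->
# fixed 45-dimensional vector (3 z-descriptors x 3 x 5 lags).
import numpy as np

rng = np.random.default_rng(1)
Z = {aa: rng.normal(size=3) for aa in "ACDEFGHIKLMNPQRSTVWY"}  # placeholder z1-z3 values

def acc_transform(seq, max_lag=5):
    zmat = np.array([Z[aa] for aa in seq])   # (L, 3) descriptor matrix
    zmat -= zmat.mean(axis=0)                # centre each descriptor
    L, feats = len(seq), []
    for lag in range(1, max_lag + 1):
        # covariance of descriptor j at position i with descriptor k at i+lag
        feats.extend(((zmat[:-lag].T @ zmat[lag:]) / (L - lag)).ravel())
    return np.array(feats)

print(acc_transform("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ").shape)   # (45,)
```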

Relevance: 30.00%

Abstract:

The focus of this study is on governance decisions in a concurrent-channels context under uncertainty. The study examines how a firm chooses to deploy its sales force in times of uncertainty, and the subsequent performance outcomes of those deployment choices. The theoretical framework is based on multiple theories of governance, including transaction cost analysis (TCA), agency theory, and institutional economics. Three uncertainty variables are investigated. The first two are demand and competitive uncertainty, which are industry-level forms of market uncertainty. The third, political uncertainty, is chosen because it is an important dimension of institutional environments, capturing non-economic circumstances such as regulations and political systemic issues. The study employs longitudinal secondary data from a Thai hotel chain, comprising monthly observations from January 2007 to December 2012. This hotel chain operates in four countries, Thailand, the Philippines, the United Arab Emirates (Dubai), and Egypt, all of which experienced substantial demand, competitive, and political uncertainty during the study period, making them ideal contexts for this study. Two econometric models, both deploying Newey-West estimations, are employed to test 13 hypotheses. The first model considers the relationship between uncertainty and governance. The second is a Newey-West variant that uses an Instrumental Variables (IV) estimator with a Two-Stage Least Squares (2SLS) model to test the direct effect of uncertainty on performance and the moderating effect of governance on the relationship between uncertainty and performance. The observed relationship between uncertainty and governance follows a core prediction of TCA: that vertical integration is the preferred choice of governance when uncertainty rises. As for the subsequent performance outcomes, the results corroborate that uncertainty has a negative effect on performance. Importantly, the findings show that becoming more vertically integrated cannot moderate the effect of demand and competitive uncertainty, but can significantly moderate the effect of political uncertainty. These findings have significant theoretical and practical implications, extend our knowledge of the impact of uncertainty, and bring an institutional perspective to TCA. Further, they offer managers novel insight into the nature of different types of uncertainty, their impact on performance, and how channel decisions can mitigate these impacts.
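
An illustrative sketch (not the thesis code) of the two estimation strategies named here: an OLS regression with Newey-West (HAC) standard errors, and a 2SLS IV regression with a kernel covariance estimator, using statsmodels and the linearmodels package. All column names and simulated relationships are hypothetical.

```python
# Hypothetical sketch of Newey-West and 2SLS estimation on simulated data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from linearmodels.iv import IV2SLS   # pip install linearmodels

rng = np.random.default_rng(0)
n = 72   # e.g. monthly observations, 2007-2012
df = pd.DataFrame({
    "uncertainty": rng.normal(size=n),
    "integration": rng.normal(size=n),   # degree of vertical integration (endogenous)
    "instrument": rng.normal(size=n),
})
df["performance"] = -0.5 * df["uncertainty"] + 0.3 * df["integration"] + rng.normal(size=n)

# Model 1: OLS with Newey-West (HAC) standard errors.
X = sm.add_constant(df[["uncertainty"]])
m1 = sm.OLS(df["performance"], X).fit(cov_type="HAC", cov_kwds={"maxlags": 12})

# Model 2: 2SLS, instrumenting the endogenous governance choice,
# with a kernel (Newey-West-style) covariance estimator.
m2 = IV2SLS(df["performance"], sm.add_constant(df[["uncertainty"]]),
            df["integration"], df["instrument"]).fit(cov_type="kernel")
print(m1.params, m2.params, sep="\n")
```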

Relevance: 30.00%

Abstract:

Motivation: In any macromolecular polyprotic system (for example protein, DNA or RNA), the isoelectric point, commonly referred to as the pI, can be defined as the point of singularity in a titration curve: the solution pH value at which the net overall surface charge, and thus the electrophoretic mobility, of the ampholyte sums to zero. Several modern analytical biochemistry and proteomics methods depend on the isoelectric point as a principal feature for protein and peptide characterization. Protein separation by isoelectric point is a critical part of 2-D gel electrophoresis, a key precursor of proteomics, where discrete spots can be digested in-gel and the proteins subsequently identified by analytical mass spectrometry. Peptide fractionation according to pI is also widely used in current proteomics sample preparation procedures prior to LC-MS/MS analysis. Accurate theoretical prediction of pI would therefore expedite such analyses. While pI calculation is widely used, it remains largely untested, motivating our efforts to benchmark pI prediction methods. Results: Using data from the database PIP-DB and one publicly available dataset as our reference gold standard, we have undertaken the benchmarking of pI calculation methods. We find that methods vary in their accuracy and are highly sensitive to the choice of basis set. The machine-learning algorithms, especially the SVM-based algorithm, showed superior performance when studying peptide mixtures. In general, learning-based pI prediction methods (such as Cofactor, SVM and Branca) require a large training dataset, and their resulting performance depends strongly on the quality of that data. In contrast to iterative methods, machine-learning algorithms have the advantage of being able to add new features to improve the accuracy of prediction. Contact: yperez@ebi.ac.uk Availability and Implementation: The software and data are freely available at https://github.com/ypriverol/pIR. Supplementary information: Supplementary data are available at Bioinformatics online.
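
As a concrete example of the iterative methods being benchmarked, a minimal pI calculator can be written in a few lines: compute the peptide's net charge from the Henderson-Hasselbalch equation and bisect for the pH at which it crosses zero. The pKa values below are one common choice of basis set; the benchmarked methods differ mainly in this set.

```python
# Minimal iterative pI sketch: Henderson-Hasselbalch net charge + bisection.
# pKa values are one common basis set; published scales vary.
PKA = {"D": 3.65, "E": 4.25, "C": 8.30, "Y": 10.07, "H": 6.00,
       "K": 10.53, "R": 12.48, "Nterm": 8.00, "Cterm": 3.55}

def net_charge(seq, ph):
    positive = ["Nterm"] + [a for a in seq if a in "HKR"]
    negative = ["Cterm"] + [a for a in seq if a in "DECY"]
    pos = sum(1.0 / (1.0 + 10 ** (ph - PKA[g])) for g in positive)
    neg = sum(-1.0 / (1.0 + 10 ** (PKA[g] - ph)) for g in negative)
    return pos + neg

def isoelectric_point(seq, lo=0.0, hi=14.0, tol=1e-4):
    # Net charge decreases monotonically with pH, so bisection converges.
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        lo, hi = (mid, hi) if net_charge(seq, mid) > 0 else (lo, mid)
    return round((lo + hi) / 2.0, 2)

print(isoelectric_point("ACDKKRH"))   # pI of a short example peptide
```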

Relevance: 30.00%

Abstract:

Background and aims: Glucagon-like peptide-1 (GLP-1) receptor agonists improve islet function and delay gastric emptying in subjects with type 2 diabetes mellitus. We evaluated 2-hour glucose, glucagon and insulin changes following a standardized mixed-meal tolerance test before and after 24 weeks of treatment with the once-daily prandial GLP-1 receptor agonist lixisenatide (approved at a therapeutic dose of 20 μg once daily) in six randomized, placebo-controlled studies within the lixisenatide Phase III GetGoal programme. In these studies, the mixed-meal test was conducted before and after: (1) lixisenatide treatment in patients insufficiently controlled despite diet and exercise (GetGoal-Mono); (2) lixisenatide treatment in combination with oral antidiabetic drugs (OADs) (GetGoal-M and GetGoal-S); or (3) lixisenatide treatment in combination with basal insulin ± OAD (GetGoal-Duo 1, GetGoal-L and GetGoal-L-Asia). Materials and methods: A meta-analysis was performed (lixisenatide n=1124 vs placebo n=707) combining ANCOVA least squares (LS) mean values using an inverse-variance weighted analysis. Results: Lixisenatide significantly reduced 2-hour postprandial glucose from baseline (LS mean difference vs placebo: -4.9 mmol/L, p<0.0001) and glucose excursions (LS mean difference vs placebo: -4.5 mmol/L, p<0.0001). As measured in two studies, lixisenatide also reduced postprandial glucagon (LS mean difference vs placebo: -19.0 ng/L, p<0.0001) and insulin (LS mean difference vs placebo: -64.8 pmol/L, p<0.0001), although the glucagon/insulin ratio increased (LS mean difference vs placebo: 0.15, p=0.02) compared with placebo. Conclusion: The results show that lixisenatide potently reduces the glucose excursion after meal ingestion in subjects with type 2 diabetes, in association with marked reductions in glucagon and insulin levels. It is suggested that diminished glucagon secretion and slower gastric emptying contribute to reduced hepatic glucose production and delayed glucose absorption, enabling postprandial glycaemia to be controlled with less demand on beta-cell insulin secretion. Clinical Trial Registration Numbers: NCT00688701; NCT00712673; NCT00713830; NCT00975286; NCT00715624; NCT00866658. Supported by: Sanofi.
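
A fixed-effect, inverse-variance-weighted combination of per-study LS mean differences, as described here, reduces to a few lines: weight each study by 1/SE², then take the weighted mean. The estimates and standard errors below are made up for illustration, not the GetGoal results.

```python
# Sketch of inverse-variance weighting of per-study LS mean differences.
# Illustrative numbers only.
import numpy as np

estimates = np.array([-4.5, -5.2, -4.8, -5.0, -4.6, -5.1])  # per-study LS mean differences
se = np.array([0.4, 0.5, 0.3, 0.6, 0.4, 0.5])               # per-study standard errors

weights = 1.0 / se**2                                       # inverse-variance weights
pooled = np.sum(weights * estimates) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))
print(pooled, pooled_se)
```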

Relevance: 30.00%

Abstract:

Objective: The aim of this study was to design a novel experimental approach to investigate the morphological characteristics of auditory cortical responses elicited by rapidly changing synthesized speech sounds. Methods: Six sound-evoked magnetoencephalographic (MEG) responses were measured to a synthesized train of speech sounds using the vowels /e/ and /u/ in 17 normal-hearing young adults. Responses were measured to: (i) the onset of the speech train; (ii) an F0 increment; (iii) an F0 decrement; (iv) an F2 decrement; (v) an F2 increment; and (vi) the offset of the speech train, using short (jittered around 135 ms) and long (1500 ms) stimulus onset asynchronies (SOAs). The least squares (LS) deconvolution technique was used to disentangle the overlapping MEG responses in the short SOA condition only. Results: Comparison between the morphology of the recovered cortical responses in the short and long SOA conditions showed high similarity, suggesting that the LS deconvolution technique was successful in disentangling the MEG waveforms. Waveform latencies and amplitudes differed between the two SOA conditions and were influenced by the spectro-temporal properties of the sound sequence. The magnetic acoustic change complex (mACC) for the short SOA condition showed significantly lower amplitudes and shorter latencies compared to the long SOA condition. The F0 transition showed a larger reduction in amplitude from long to short SOA compared to the F2 transition. Lateralization of the cortical responses was observed under some stimulus conditions and appeared to be associated with the spectro-temporal properties of the acoustic stimulus. Conclusions: The LS deconvolution technique provides a new tool to study the properties of the auditory cortical response to rapidly changing sound stimuli. The presence of cortical auditory evoked responses to rapid transitions of synthesized speech stimuli suggests that the temporal code is preserved at the level of the auditory cortex. Further, the reduced amplitudes and shorter latencies might reflect intrinsic properties of cortical neurons responding to rapidly presented sounds. Significance: This is the first demonstration of the separation of overlapping cortical responses to rapidly changing speech sounds, and it offers a potential new biomarker for discrimination of rapid sound transitions.
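
A minimal sketch of the least-squares deconvolution idea: stack shifted event indicators into a design matrix so each column corresponds to one post-stimulus sample, then solve the overdetermined system with linear least squares; the jitter in the SOAs is what makes the overlapping columns separable. Sampling rate, kernel length and response shape below are assumptions on simulated data, not the study's recordings.

```python
# Sketch of LS deconvolution of overlapping evoked responses on simulated data.
import numpy as np

fs = 250                        # sampling rate (Hz), assumed
kernel_len = int(0.5 * fs)      # recover 500 ms of response per event
n_samples = 30 * fs
rng = np.random.default_rng(0)

# Jittered event onsets and an assumed damped-oscillation "true" response.
onsets = np.sort(rng.choice(n_samples - kernel_len, size=150, replace=False))
true_resp = np.sin(2 * np.pi * 8 * np.arange(kernel_len) / fs) * np.exp(-np.arange(kernel_len) / 40)

# Continuous recording: overlapping responses plus noise.
x = np.zeros(n_samples)
for t in onsets:
    x[t:t + kernel_len] += true_resp
x += rng.normal(0, 0.5, n_samples)

# Design matrix: column k has a 1 at sample (onset + k) for every event.
D = np.zeros((n_samples, kernel_len))
for t in onsets:
    D[t:t + kernel_len, :] += np.eye(kernel_len)

recovered, *_ = np.linalg.lstsq(D, x, rcond=None)   # LS estimate of the evoked response
```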