922 results for "Sums of squares"
Abstract:
With its implications for vaccine discovery, the accurate prediction of T cell epitopes is one of the key aspirations of computational vaccinology. We have developed a robust multivariate statistical method, based on partial least squares, for the quantitative prediction of peptide binding to major histocompatibility complexes (MHC), the principal checkpoint on the antigen presentation pathway. As a service to the immunobiology community, we have made a Perl implementation of the method available via a World Wide Web server. We call this server MHCPred. Access to the server is freely available from the URL: http://www.jenner.ac.uk/MHCPred. We have exemplified our method with a model for peptides binding to the common human MHC molecule HLA-B*3501.
Abstract:
Accurate T-cell epitope prediction is a principal objective of computational vaccinology. As a service to the immunology and vaccinology communities at large, we have implemented, as a server on the World Wide Web, a partial least squares-based multivariate statistical approach to the quantitative prediction of peptide binding to major histocompatibility complexes (MHC), the key checkpoint on the antigen presentation pathway within adaptive, cellular immunity. MHCPred implements robust statistical models for both Class I alleles (HLA-A*0101, HLA-A*0201, HLA-A*0202, HLA-A*0203, HLA-A*0206, HLA-A*0301, HLA-A*1101, HLA-A*3301, HLA-A*6801, HLA-A*6802 and HLA-B*3501) and Class II alleles (HLA-DRB*0401 and HLA-DRB*0701).
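The PLS approach behind MHCPred can be illustrated with a stripped-down sketch: a single-component, NIPALS-style PLS regression in plain Python. The toy data, function names and the one-component restriction are illustrative assumptions, not the server's actual implementation, which fits multi-component PLS models to peptide sequence descriptors.

```python
# Minimal one-component PLS regression (NIPALS-style), pure Python.
# A sketch of the kind of multivariate model MHCPred builds; the real
# server uses multi-component PLS on peptide descriptors.

def pls1_fit(X, y):
    """Fit a single-component PLS model. X: list of rows, y: list."""
    n, p = len(X), len(X[0])
    xm = [sum(row[j] for row in X) / n for j in range(p)]  # column means
    ym = sum(y) / n
    Xc = [[row[j] - xm[j] for j in range(p)] for row in X]  # centred X
    yc = [v - ym for v in y]
    # Weight vector w proportional to X'y, then normalised
    w = [sum(Xc[i][j] * yc[i] for i in range(n)) for j in range(p)]
    norm = sum(v * v for v in w) ** 0.5
    w = [v / norm for v in w]
    t = [sum(Xc[i][j] * w[j] for j in range(p)) for i in range(n)]  # scores
    q = sum(t[i] * yc[i] for i in range(n)) / sum(v * v for v in t)
    return {"w": w, "q": q, "xm": xm, "ym": ym}

def pls1_predict(model, x):
    xc = [x[j] - model["xm"][j] for j in range(len(x))]
    return model["ym"] + model["q"] * sum(a * b for a, b in zip(xc, model["w"]))

# Toy data: "binding affinity" driven by a single latent direction
X = [[u, 2 * u, -u] for u in (1, 2, 3, 4)]
y = [3 * u for u in (1, 2, 3, 4)]
model = pls1_fit(X, y)
pred = pls1_predict(model, [5, 10, -5])  # expect 15 on this rank-1 toy set
```

Because the toy predictors are rank one, a single PLS component captures the relationship exactly; real peptide data would need several components chosen by cross-validation.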
Abstract:
The kinetic parameters of the pyrolysis of miscanthus and its acid hydrolysis residue (AHR) were determined using thermogravimetric analysis (TGA). The AHR was produced at the University of Limerick by treating miscanthus with 5 wt.% sulphuric acid at 175 °C, as representative of a lignocellulosic acid hydrolysis product. For the TGA experiments, 3 to 6 mg of sample, milled and sieved to a particle size below 250 μm, were placed in the TGA ceramic crucible. The experiments were carried out under non-isothermal conditions, heating the samples from 50 to 900 °C at heating rates of 2.5, 5, 10, 17 and 25 °C/min. The activation energy (EA) of the decomposition process was determined from the TGA data by a differential isoconversional method (Friedman) and three integral isoconversional methods (Kissinger–Akahira–Sunose, Ozawa–Flynn–Wall and Vyazovkin). The activation energy ranged from 129 to 156 kJ/mol for miscanthus and from 200 to 376 kJ/mol for AHR, increasing with increasing conversion. The reaction model was selected using the non-linear least squares method and the pre-exponential factor was calculated from the Arrhenius approximation. The results showed that the best-fitting reaction model was the third-order reaction for both feedstocks. The pre-exponential factor was in the range of 5.6 × 10¹⁰ to 3.9 × 10¹³ min⁻¹ for miscanthus and 2.1 × 10¹⁶ to 7.7 × 10²⁵ min⁻¹ for AHR.
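The Kissinger–Akahira–Sunose (KAS) method mentioned above rests on the relation ln(β/T²) = const − EA/(R·T) at a fixed conversion, so EA falls out of a straight-line fit across heating rates. A minimal sketch, using synthetic numbers rather than the paper's data:

```python
import math

# KAS isoconversional estimate of the activation energy: at a fixed
# conversion, ln(beta/T^2) is linear in 1/T with slope -EA/R.
# The (beta, T) pairs below are synthetic and purely illustrative.

R = 8.314  # gas constant, J/(mol K)

def kas_activation_energy(betas, temps):
    """Least-squares slope of ln(beta/T^2) vs 1/T gives -EA/R."""
    xs = [1.0 / T for T in temps]
    ys = [math.log(b / T**2) for b, T in zip(betas, temps)]
    xbar, ybar = sum(xs) / len(xs), sum(ys) / len(ys)
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    return -slope * R  # J/mol

# Exact synthetic data generated from EA = 150 kJ/mol
Ea_true, C = 150e3, 20.0
temps = [600.0, 610.0, 620.0, 630.0]   # K, peak temps at fixed conversion
betas = [T**2 * math.exp(C - Ea_true / (R * T)) for T in temps]
Ea_est = kas_activation_energy(betas, temps)  # recovers ~150,000 J/mol
```

Repeating the fit at each conversion level yields the EA-versus-conversion profile the abstract describes.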
Abstract:
* Supported by the Army Research Office under grant DAAD-19-02-10059.
Abstract:
The paper contains calculus rules for coderivatives of compositions, sums and intersections of set-valued mappings. The types of coderivatives considered correspond to Dini-Hadamard and limiting Dini-Hadamard subdifferentials in Gâteaux differentiable spaces, Fréchet and limiting Fréchet subdifferentials in Asplund spaces, and approximate subdifferentials in arbitrary Banach spaces. The key element of the unified approach to obtaining the various calculus rules for the various types of derivatives presented in the paper is a set of simple formulas for subdifferentials of marginal, or performance, functions.
Abstract:
The research presented in this thesis was developed as part of DIBANET, an EC-funded project aiming to develop an energetically self-sustainable process for the production of diesel-miscible biofuels (i.e. ethyl levulinate) via acid hydrolysis of selected biomass feedstocks. Three thermal conversion technologies, pyrolysis, gasification and combustion, were evaluated in the present work with the aim of recovering the energy stored in the acid hydrolysis solid residue (AHR). Mainly consisting of lignin and humins, the AHR can contain up to 80% of the energy in the original feedstock. Pyrolysis of AHR proved unsatisfactory, so attention focussed on gasification and combustion with the aim of producing heat and/or power to supply the energy demanded by the ethyl levulinate production process. A thermal processing rig consisting of a Laminar Entrained Flow Reactor (LEFR) equipped with solid and liquid collection and online gas analysis systems was designed and built to explore pyrolysis, gasification and air-blown combustion of AHR. The maximum liquid yield for pyrolysis of AHR was 30 wt% with a volatile conversion of 80%. The gas yield for AHR gasification was 78 wt%, with 8 wt% tar yield and conversion of volatiles close to 100%. In combustion, 90 wt% of the AHR was transformed into gas, with volatile conversions above 90%. Gasification with 5 vol% O2–95 vol% N2 resulted in a nitrogen-diluted, low heating value gas (2 MJ/m³). Steam- and oxygen-blown gasification of AHR were additionally investigated in a batch gasifier at KTH in Sweden. Steam promoted the formation of hydrogen (25 vol%) and methane (14 vol%), improving the gas heating value to 10 MJ/m³, below the typical value for steam gasification due to equipment limitations. Arrhenius kinetic parameters were calculated using data collected with the LEFR to provide reaction rate information for process design and optimisation.
The activation energy (EA) and pre-exponential factor (k₀ in s⁻¹) for pyrolysis (EA = 80 kJ/mol, ln k₀ = 14), gasification (EA = 69 kJ/mol, ln k₀ = 13) and combustion (EA = 42 kJ/mol, ln k₀ = 8) were calculated after linearly fitting the data using the random pore model. Kinetic parameters for pyrolysis and combustion were also determined by dynamic thermogravimetric analysis (TGA), including studies of the original biomass feedstocks for comparison. Results obtained by differential and integral isoconversional methods for activation energy determination were compared. The activation energy calculated by the Vyazovkin method was 103–204 kJ/mol for pyrolysis of untreated feedstocks and 185–387 kJ/mol for AHRs. Combustion activation energy was 138–163 kJ/mol for biomass and 119–158 kJ/mol for AHRs. The non-linear least squares method was used to determine the reaction model and pre-exponential factor. Pyrolysis and combustion of biomass were best modelled by a combination of third-order reaction and three-dimensional diffusion models, while AHR decomposed following the third-order reaction for pyrolysis and three-dimensional diffusion for combustion.
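The reaction-model selection by non-linear least squares described above amounts to fitting a rate constant k for each candidate model f(α) and keeping the model with the smallest residual sum of squares. A sketch under that interpretation, with an illustrative candidate set and synthetic rate data rather than the thesis measurements:

```python
# Reaction-model selection sketch: given rate data r = dalpha/dt at
# known conversions, fit k for each candidate f(alpha) by least squares
# and keep the model with the smallest residual sum of squares.

MODELS = {
    "first order (F1)":  lambda a: (1 - a),
    "second order (F2)": lambda a: (1 - a) ** 2,
    "third order (F3)":  lambda a: (1 - a) ** 3,
    "3-D diffusion (Jander)":
        lambda a: 3 * (1 - a) ** (2 / 3) / (2 * (1 - (1 - a) ** (1 / 3))),
}

def select_model(alphas, rates):
    best = None
    for name, f in MODELS.items():
        fv = [f(a) for a in alphas]
        # closed-form least-squares rate constant for r = k * f(alpha)
        k = sum(r * v for r, v in zip(rates, fv)) / sum(v * v for v in fv)
        rss = sum((r - k * v) ** 2 for r, v in zip(rates, fv))
        if best is None or rss < best[2]:
            best = (name, k, rss)
    return best

alphas = [0.1 * i for i in range(1, 9)]        # conversions 0.1 .. 0.8
rates = [0.05 * (1 - a) ** 3 for a in alphas]  # exact third-order data
name, k, rss = select_model(alphas, rates)     # picks the F3 model
```

With exact third-order input the F3 model fits with essentially zero residual, which is the discrimination principle the thesis applies to real TGA data.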
Abstract:
From a Service-Dominant Logic (S-DL) perspective, employees constitute operant resources that firms can draw on to enhance the outcomes of innovation efforts. While research acknowledges that frontline employees (FLEs) constitute, through service encounters, a key interface for the transfer of valuable external knowledge into the firm, the range of potential benefits derived from FLE-driven innovation deserves more investigation. Using a sample of knowledge-intensive business services (KIBS) firms, this study examines how collaboration with FLEs along the new service development (NSD) process, namely FLE co-creation, affects service innovation performance through two routes with distinct effects. Partial least squares structural equation modeling (PLS-SEM) results indicate that FLE co-creation benefits new service (NS) success among FLEs and the firm's customers, the constituents of the resources route. FLE co-creation also has a positive effect on NSD speed, which in turn enhances NS quality. NSD speed and NS quality constitute the operational route, which proves to be the most effective path to influencing NS market performance. Accordingly, KIBS managers must value their FLEs as essential partners in achieving successful innovation from an internal and external perspective, and develop the appropriate mechanisms to guarantee their effective involvement along the NSD process.
Abstract:
OBJECTIVE: This 12-week study assessed the efficacy and tolerability of imeglimin as add-on therapy to the dipeptidyl peptidase-4 inhibitor sitagliptin in patients with type 2 diabetes inadequately controlled with sitagliptin monotherapy. RESEARCH DESIGN AND METHODS: In a multicenter, randomized, double-blind, placebo-controlled, parallel-group study, imeglimin (1,500 mg b.i.d.) or placebo was added to sitagliptin (100 mg q.d.) over 12 weeks in 170 patients with type 2 diabetes (mean age 56.8 years; BMI 32.2 kg/m²) that was inadequately controlled with sitagliptin alone (A1C ≥7.5%) during a 12-week run-in period. The primary efficacy end point was the change in A1C from baseline versus placebo; secondary end points included corresponding changes in fasting plasma glucose (FPG) levels, stratification by baseline A1C, and percentage of A1C responders. RESULTS: Imeglimin reduced A1C levels (least-squares mean difference) from baseline (8.5%) by 0.60% compared with an increase of 0.12% with placebo (between-group difference 0.72%, P < 0.001). The corresponding changes in FPG were -0.93 mmol/L with imeglimin vs. -0.11 mmol/L with placebo (P = 0.014). With imeglimin, the A1C level decreased by ≥0.5% in 54.3% of subjects vs. 21.6% with placebo (P < 0.001), and 19.8% of subjects receiving imeglimin achieved an A1C level of ≤7%, compared with 1.1% of subjects receiving placebo (P = 0.004). Imeglimin was generally well tolerated, with a safety profile comparable to placebo and no related treatment-emergent adverse events. CONCLUSIONS: Imeglimin demonstrated incremental efficacy benefits as add-on therapy to sitagliptin, with comparable tolerability to placebo, highlighting the potential for imeglimin to complement other oral antihyperglycemic therapies. © 2014 by the American Diabetes Association.
Abstract:
Data fluctuation in multiple measurements of Laser Induced Breakdown Spectroscopy (LIBS) greatly affects the accuracy of quantitative analysis. A new LIBS quantitative analysis method based on the Robust Least Squares Support Vector Machine (RLS-SVM) regression model is proposed. The usual way to enhance analysis accuracy is to improve the quality and consistency of the emission signal, such as by averaging the spectral signals or applying spectrum standardization over a number of laser shots. The proposed method focuses instead on enhancing the robustness of the quantitative analysis regression model. The proposed RLS-SVM regression model originates from the Weighted Least Squares Support Vector Machine (WLS-SVM) but has an improved segmented weighting function and residual error calculation based on the statistical distribution of the measured spectral data. Through the improved segmented weighting function, information on spectral data within the normal distribution is retained in the regression model while information on outliers is restrained or removed. Copper elemental concentration analysis experiments on 16 certified standard brass samples were carried out. The average relative standard deviation obtained from the RLS-SVM model was 3.06% and the root mean square error was 1.537%. The experimental results showed that the proposed method achieved better prediction accuracy and better modeling robustness compared with quantitative analysis methods based on Partial Least Squares (PLS) regression, the standard Support Vector Machine (SVM) and WLS-SVM. It was also demonstrated that the improved weighting function had better overall performance in model robustness and convergence speed, compared with the four known weighting functions.
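The weighting idea that WLS-SVM contributes, and that the paper's segmented function refines, can be sketched with the classical three-segment scheme: full weight for small standardised residuals, a linear taper, and a near-zero floor for gross outliers. The cutoffs c1 = 2.5 and c2 = 3.0 and the MAD-based scale below follow common robust-statistics practice; the paper's improved segmented function is not reproduced here.

```python
import statistics

# Classical WLS-SVM-style robust weighting: weights shrink as the
# standardised residual grows, down-weighting outlier spectra.

C1, C2, FLOOR = 2.5, 3.0, 1e-4

def robust_scale(residuals):
    """1.483 * MAD, a common robust estimate of the residual sd."""
    med = statistics.median(residuals)
    return 1.483 * statistics.median(abs(e - med) for e in residuals)

def weight(e, s):
    """Weight for a residual e given robust scale s."""
    z = abs(e / s)
    if z <= C1:
        return 1.0                      # inliers keep full weight
    if z <= C2:
        return (C2 - z) / (C2 - C1)     # linear taper between c1 and c2
    return FLOOR                        # gross outliers nearly removed
```

Refitting the LS-SVM with these weights suppresses shot-to-shot outliers while leaving normally distributed spectra untouched, which is the robustness route the abstract contrasts with signal averaging.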
Abstract:
Resolutions which are orthogonal to at least one other resolution (RORs), and sets of m mutually orthogonal resolutions (m-MORs), of 2-(v, k, λ) designs are considered. A dependence of the number of nonisomorphic RORs and m-MORs of multiple designs on the number of inequivalent sets of v/k − 1 mutually orthogonal Latin squares (MOLS) of size m is obtained. ACM Computing Classification System (1998): G.2.1.
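The MOLS condition underlying the result above is easy to state computationally: two Latin squares of order n are orthogonal when superimposing them produces every ordered pair of symbols exactly once. A minimal check, with an order-3 example chosen for illustration:

```python
# Orthogonality of Latin squares: superimposing two order-n Latin
# squares must yield all n*n ordered symbol pairs exactly once.

def is_latin(sq):
    """Every symbol 0..n-1 appears once per row and once per column."""
    n = len(sq)
    syms = set(range(n))
    return (all(set(row) == syms for row in sq)
            and all({sq[i][j] for i in range(n)} == syms for j in range(n)))

def are_orthogonal(a, b):
    """True when the superimposed pairs (a[i][j], b[i][j]) are all distinct."""
    n = len(a)
    pairs = {(a[i][j], b[i][j]) for i in range(n) for j in range(n)}
    return len(pairs) == n * n

# A pair of orthogonal Latin squares of order 3
A = [[0, 1, 2], [1, 2, 0], [2, 0, 1]]
B = [[0, 1, 2], [2, 0, 1], [1, 2, 0]]
```

A square is never orthogonal to itself (only the n diagonal pairs occur), which the check reports correctly.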
Abstract:
Only a few characterizations of the negative binomial distribution have been obtained in the literature (see Johnson et al., Chap. 5, 1992). In this article a characterization of the negative binomial distribution related to random sums is obtained, motivated by the geometric distribution characterization given by Khalil et al. (1991). An interpretation in terms of an unreliable system is given.
Abstract:
2000 Mathematics Subject Classification: 60J80, 62M05.
Abstract:
2000 Mathematics Subject Classification: 65C05
Abstract:
Measurements of area summation for luminance-modulated stimuli are typically confounded by variations in sensitivity across the retina. Recently we conducted a detailed analysis of sensitivity across the visual field (Baldwin et al, 2012) and found it to be well-described by a bilinear “witch’s hat” function: sensitivity declines rapidly over the first 8 cycles or so, more gently thereafter. Here we multiplied luminance-modulated stimuli (4 c/deg gratings and “Swiss cheeses”) by the inverse of the witch’s hat function to compensate for the inhomogeneity. This revealed summation functions that were straight lines (on double log axes) with a slope of -1/4 extending to ≥33 cycles, demonstrating fourth-root summation of contrast over a wider area than has previously been reported for the central retina. Fourth-root summation is typically attributed to probability summation, but recent studies have rejected that interpretation in favour of a noisy energy model that performs local square-law transduction of the signal, adds noise at each location of the target and then sums over signal area. Modelling shows our results to be consistent with a wide field application of such a contrast integrator. We reject a probability summation model, a quadratic model and a matched template model of our results under the assumptions of signal detection theory. We also reject the high threshold theory of contrast detection under the assumption of probability summation over area.
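The fourth-root (slope −1/4) signature favoured above follows directly from the noisy energy account: square-law transduction turns n pooled locations into a signal proportional to n·c², while the summed noise standard deviation grows as √n, so d′ ∝ √n·c² and the threshold contrast at a fixed criterion falls as n^(−1/4). A numeric sketch of that derivation (constants arbitrary):

```python
import math

# Noisy energy model: d' = sqrt(n) * c^2 / sigma, so the threshold
# contrast at fixed criterion d' scales as n^(-1/4) with pooled area n.

def threshold(n, d_crit=1.0, sigma=1.0):
    """Contrast needed to reach criterion d' when pooling n locations."""
    return (d_crit * sigma / math.sqrt(n)) ** 0.5

# Log-log slope of threshold vs area: expect exactly -1/4
slope = (math.log(threshold(16)) - math.log(threshold(1))) / math.log(16)
```

On double log axes this is the straight line of slope −1/4 the summation functions exhibit; probability summation under high threshold theory predicts the same slope, which is why the model comparison in the abstract has to be done under signal detection theory rather than from the slope alone.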
Abstract:
Background: Allergy is a form of hypersensitivity to normally innocuous substances, such as dust, pollen, foods or drugs. Allergens are small antigens that commonly provoke an IgE antibody response. There are two types of bioinformatics-based allergen prediction. The first approach follows FAO/WHO Codex Alimentarius guidelines and searches for sequence similarity. The second approach is based on identifying conserved allergenicity-related linear motifs. Both approaches assume that allergenicity is a linearly coded property. In the present study, we applied ACC pre-processing to sets of known allergens, developing alignment-independent models for allergen recognition based on the main chemical properties of amino acid sequences. Results: A set of 684 food, 1,156 inhalant and 555 toxin allergens was collected from several databases. A set of non-allergens from the same species was selected to mirror the allergen set. The amino acids in the protein sequences were described by three z-descriptors (z1, z2 and z3) and converted into uniform vectors by auto- and cross-covariance (ACC) transformation. Each protein was presented as a vector of 45 variables. Five machine learning methods for classification were applied to derive models for allergen prediction: discriminant analysis by partial least squares (DA-PLS), logistic regression (LR), decision tree (DT), naïve Bayes (NB) and k nearest neighbours (kNN). The best performing model was derived by kNN at k = 3. It was optimized, cross-validated and implemented in a server named AllerTOP, freely accessible at http://www.pharmfac.net/allertop. AllerTOP also predicts the most probable route of exposure. AllerTOP outperforms other servers for allergen prediction, with 94% sensitivity. Conclusions: AllerTOP is the first alignment-free server for in silico prediction of allergens based on the main physicochemical properties of proteins.
Significantly, as well as allergenicity, AllerTOP is able to predict the route of allergen exposure: food, inhalant or toxin. © 2013 Dimitrov et al.; licensee BioMed Central Ltd.
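The ACC transformation that makes the models alignment-independent can be sketched directly: with 3 z-descriptors, 3 × 3 descriptor pairings over 5 lags give the 45 variables per protein the abstract mentions. The normalisation below (dividing each lagged sum by n − lag) is one common convention; AllerTOP's exact formula may differ, and the toy descriptor values are made up.

```python
# Auto- and cross-covariance (ACC) transform of a z-descriptor sequence:
# 3 descriptors x 3 descriptors x 5 lags = 45 variables per protein,
# independent of sequence length and alignment.

def acc_transform(z, max_lag=5):
    """z: list of (z1, z2, z3) tuples per residue -> flat 45-vector."""
    n = len(z)
    vec = []
    for j in range(3):                  # descriptor on residue i
        for k in range(3):              # descriptor on residue i + lag
            for lag in range(1, max_lag + 1):
                s = sum(z[i][j] * z[i + lag][k] for i in range(n - lag))
                vec.append(s / (n - lag))
    return vec

# Toy 10-residue "protein" with arbitrary z-descriptor values
toy = [(0.1 * i, -0.2 * i, 0.05 * i) for i in range(1, 11)]
v = acc_transform(toy)   # fixed-length 45-vector, ready for kNN/DA-PLS
```

Because every protein maps to the same 45-dimensional vector regardless of its length, distance-based classifiers such as the kNN model chosen for AllerTOP can compare sequences without any alignment step.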