34 results for Least Square Method
Abstract:
57Fe Mössbauer spectroscopy has been used to study the thermal spin crossover of Fe(II) between the low-spin (S = 0) and high-spin (S = 2) states in the mononuclear compound [Fe(II)(isoxazole)6](BF4)2. A temperature-dependent spin transition curve has been constructed with the least-squares-fitted data obtained from the Mössbauer spectra measured at various temperatures in the 240-60 K range during the cooling and heating cycle. The compound exhibits a temperature-dependent two-step spin transition phenomenon with Tsco (step 1) = 92 K and Tsco (step 2) = 191 K. The compound has three high-spin Fe(II) sites at the highest temperature of study; among them, two have slightly different coordination environments. These two Fe(II) sites are found to undergo a spin transition, while the third Fe(II) site retains the high-spin state over the whole temperature range. Possible reasons for the formation of the two steps in the spin transition curve are discussed. The observations made in the present study are in complete agreement with those envisaged from earlier magnetic and structural studies of [Fe(II)(isoxazole)6](BF4)2, but highlight the nature of the spin crossover mechanism.
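The spin-transition curve described above is built point by point from the least-squares-fitted subspectral areas. A minimal sketch in Python, assuming (as a common first approximation) equal recoil-free fractions so that each fitted area is proportional to that spin state's population; all areas and temperatures below are invented for illustration, not values from the study:

```python
# Sketch: building a spin-transition curve from least-squares-fitted
# Mossbauer subspectral areas. Assumes equal recoil-free fractions, so the
# fitted area of each subspectrum is proportional to the population of that
# spin state; all numbers below are hypothetical.

def high_spin_fraction(area_hs, area_ls):
    """High-spin fraction gamma_HS from fitted subspectral areas."""
    return area_hs / (area_hs + area_ls)

def transition_temperature(temps, fractions, level=0.5):
    """Linearly interpolate the temperature where gamma_HS crosses `level`."""
    pairs = sorted(zip(temps, fractions))
    for (t1, f1), (t2, f2) in zip(pairs, pairs[1:]):
        if (f1 - level) * (f2 - level) <= 0 and f1 != f2:
            return t1 + (level - f1) * (t2 - t1) / (f2 - f1)
    raise ValueError("no crossing found")

# Hypothetical fitted areas (arbitrary units) at four temperatures
temps = [60, 92, 150, 240]
areas = [(10, 30), (20, 20), (30, 10), (38, 2)]   # (high-spin, low-spin)
gamma = [high_spin_fraction(hs, ls) for hs, ls in areas]
tsco = transition_temperature(temps, gamma)       # gamma_HS = 0.5 crossing
```

With real data this interpolation would be applied separately to each step of the two-step transition.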
Abstract:
Background - The objective of this study was to investigate the association between ethnicity and health related quality of life (HRQoL) in patients with type 2 diabetes. Methods - The EuroQol EQ-5D measure was administered to 1,978 patients with type 2 diabetes in the UK Asian Diabetes Study (UKADS): 1,486 of south Asian origin (Indian, Pakistani, Bangladeshi or other south Asian) and 492 of white European origin. Multivariate regression using ordinary least squares (OLS), Tobit, fractional logit and Censored Least Absolute Deviations estimators was used to estimate the impact of ethnicity on both visual analogue scale (VAS) and utility scores for the EuroQol EQ-5D. Results - Mean EQ-5D VAS and utility scores were lower among south Asians with diabetes compared to the white European population; the unadjusted effect on the mean EQ-5D VAS score was −7.82 (Standard error [SE] = 1.06, p < 0.01) and on the EQ-5D utility score was −0.06 (SE = 0.02, p < 0.01) (OLS estimator). After controlling for socio-demographic and clinical confounders, the adjusted effect on the EQ-5D VAS score was −9.35 (SE = 2.46, p < 0.01) and on the EQ-5D utility score was 0.06 (SE = 0.04), although the latter was not statistically significant. Conclusions - There was a large and statistically significant association between south Asian ethnicity and lower EQ-5D VAS scores. In contrast, there was no significant difference in EQ-5D utility scores between the south Asian and white European sub-groups. Further research is needed to explain the differences in effects on subjective EQ-5D VAS scores and population-weighted EQ-5D utility scores in this context.
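The OLS estimator used above amounts to regressing the score on a binary group indicator plus covariates. A hedged sketch on synthetic data (the group, age and score variables are invented stand-ins, not UKADS data):

```python
import numpy as np

# Minimal OLS sketch for estimating the effect of a binary group indicator
# on a score, as in the ethnicity/EQ-5D comparison. All data are synthetic.

def ols(X, y):
    """Ordinary least squares fit of y on the columns of X."""
    beta, *_ = np.linalg.lstsq(np.asarray(X, float), np.asarray(y, float),
                               rcond=None)
    return beta

rng = np.random.default_rng(0)
n = 200
group = rng.integers(0, 2, n)            # 1 = comparison group, 0 = reference
age = rng.uniform(40, 80, n)             # a socio-demographic covariate
vas = 80 - 7.8 * group - 0.1 * age + rng.normal(0, 1, n)

X = np.column_stack([np.ones(n), group, age])
beta = ols(X, vas)                       # [intercept, group effect, age effect]
```

The coefficient on the indicator is the adjusted mean difference between groups; Tobit or fractional-logit estimators would be substituted when the outcome is censored or bounded, as the study does for utility scores.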
Abstract:
Glucagon-like peptide-1 (GLP-1) receptor agonists improve islet function and delay gastric emptying in patients with type 2 diabetes mellitus (T2DM). This meta-analysis aimed to investigate the effects of the once-daily prandial GLP-1 receptor agonist lixisenatide on postprandial plasma glucose (PPG), glucagon and insulin levels. Methods: Six randomized, placebo-controlled studies of lixisenatide 20 μg once daily were included in this analysis: lixisenatide as monotherapy (GetGoal-Mono), as add-on to oral antidiabetic drugs (OADs; GetGoal-M, GetGoal-S) or in combination with basal insulin (GetGoal-L, GetGoal-Duo-1 and GetGoal-L-Asia). Change in 2-h PPG and glucose excursion were evaluated across six studies. Change in 2-h glucagon and postprandial insulin were evaluated across two studies. A meta-analysis was performed on least squares (LS) mean estimates obtained from analysis of covariance (ANCOVA)-based linear regression. Results: Lixisenatide significantly reduced 2-h PPG from baseline (LS mean difference vs. placebo: -4.9 mmol/l, p < 0.001) and glucose excursion (LS mean difference vs. placebo: -4.5 mmol/l, p < 0.001). As measured in two studies, lixisenatide also reduced postprandial glucagon (LS mean difference vs. placebo: -19.0 ng/l, p < 0.001) and insulin (LS mean difference vs. placebo: -64.8 pmol/l, p < 0.001). There was a stronger correlation between 2-h postprandial glucagon and 2-h PPG with lixisenatide than with placebo. Conclusions: Lixisenatide significantly reduced 2-h PPG and glucose excursion together with a marked reduction in postprandial glucagon and insulin; thus, lixisenatide appears to have biological effects on blood glucose that are independent of increased insulin secretion. These effects may be, in part, attributed to reduced glucagon secretion. © 2014 John Wiley and Sons Ltd.
Abstract:
Purpose - The paper aims to examine the role of market orientation (MO) and innovation capability in determining business performance during an economic upturn and downturn. Design/methodology/approach - The data comprise two national-level surveys conducted in Finland in 2008, representing an economic boom, and in 2010, when the global economic crisis had hit the Finnish market. Partial least squares (PLS) path analysis is used to test the potential mediating effect of innovation capability on the relationship between MO and business performance during economic boom and bust. Findings - The results show that innovation capability fully mediates the performance effects of MO during an economic upturn, whereas the mediation is only partial during a downturn. Innovation capability also mediates the relationship between a customer orientation and business performance during an upturn, whereas the mediating effect culminates in a competitor orientation during a downturn. Thus, the role of innovation capability as a mediator between the individual market-orientation components and business performance varies along the business cycle. Originality/value - This paper is one of the first studies that empirically examine the impact of the economic cycle on the relationship between strategic marketing concepts, such as MO or innovation capability, and the firm's business performance.
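The study performs the mediation test with PLS path modelling; the underlying logic (indirect effect = path a × path b, full mediation when the direct path vanishes) can be sketched with plain OLS on synthetic data. The `mo`, `innov` and `perf` variables below are hypothetical stand-ins for the survey constructs, not the Finnish survey data:

```python
import numpy as np

# Simplified mediation sketch with OLS (the paper itself uses PLS path
# analysis). Full mediation: MO affects performance only through innovation
# capability, so the direct path should estimate near zero.

def slope(x, y, controls=None):
    """OLS coefficient of x in a regression of y on [1, x, controls]."""
    cols = [np.ones_like(x), x]
    if controls is not None:
        cols.append(controls)
    beta, *_ = np.linalg.lstsq(np.column_stack(cols), y, rcond=None)
    return beta[1]

rng = np.random.default_rng(1)
n = 500
mo = rng.normal(size=n)
innov = 0.8 * mo + rng.normal(scale=0.5, size=n)               # path a
perf = 0.6 * innov + rng.normal(scale=0.5, size=n)             # path b only

a = slope(mo, innov)                      # MO -> innovation capability
b = slope(innov, perf, controls=mo)       # innovation -> performance | MO
direct = slope(mo, perf, controls=innov)  # direct MO -> performance
indirect = a * b                          # mediated (indirect) effect
```

Partial mediation, as found during the downturn, would correspond to a non-zero direct path alongside the indirect one.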
Abstract:
The kinetic parameters of the pyrolysis of miscanthus and its acid hydrolysis residue (AHR) were determined using thermogravimetric analysis (TGA). The AHR was produced at the University of Limerick by treating miscanthus with 5 wt.% sulphuric acid at 175 °C as representative of a lignocellulosic acid hydrolysis product. For the TGA experiments, 3 to 6 g of sample, milled and sieved to a particle size below 250 μm, were placed in the TGA ceramic crucible. The experiments were carried out under non-isothermal conditions, heating the samples from 50 to 900 °C at heating rates of 2.5, 5, 10, 17 and 25 °C/min. The activation energy (EA) of the decomposition process was determined from the TGA data by differential analysis (Friedman) and three isoconversional methods of integral analysis (Kissinger–Akahira–Sunose, Ozawa–Flynn–Wall, Vyazovkin). The activation energy ranged from 129 to 156 kJ/mol for miscanthus and from 200 to 376 kJ/mol for AHR, increasing with increasing conversion. The reaction model was selected using the non-linear least squares method and the pre-exponential factor was calculated from the Arrhenius approximation. The results showed that the best-fitting reaction model was the third-order reaction for both feedstocks. The pre-exponential factor was in the range of 5.6 × 10^10 to 3.9 × 10^13 min^-1 for miscanthus and 2.1 × 10^16 to 7.7 × 10^25 min^-1 for AHR.
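The Ozawa-Flynn-Wall method named above estimates the activation energy at a fixed conversion from the slope of ln(beta) against 1/T across heating rates. A minimal sketch, with temperatures synthesized to satisfy the OFW relation exactly for an assumed EA of 150 kJ/mol (a value inside the range reported above, chosen only for the demonstration):

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def ofw_activation_energy(betas, temps):
    """Ozawa-Flynn-Wall: ln(beta) vs 1/T has slope -1.052*EA/R."""
    slope, _ = np.polyfit(1.0 / np.asarray(temps, float),
                          np.log(np.asarray(betas, float)), 1)
    return -slope * R / 1.052

# Synthetic check: temperatures generated to satisfy the OFW relation
# exactly for EA = 150 kJ/mol; the intercept C is arbitrary.
ea_true = 150e3
C = 30.0
betas = [2.5, 5, 10, 17, 25]   # heating rates, K/min (as in the TGA runs)
temps = [1.052 * ea_true / (R * (C - np.log(b))) for b in betas]
ea_est = ofw_activation_energy(betas, temps)
```

In practice this fit is repeated at each conversion level, which is how the conversion-dependent EA ranges quoted above arise.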
Abstract:
The research presented in this thesis was developed as part of DIBANET, an EC-funded project aiming to develop an energetically self-sustainable process for the production of diesel-miscible biofuels (i.e. ethyl levulinate) via acid hydrolysis of selected biomass feedstocks. Three thermal conversion technologies, pyrolysis, gasification and combustion, were evaluated in the present work with the aim of recovering the energy stored in the acid hydrolysis solid residue (AHR). Mainly consisting of lignin and humins, the AHR can contain up to 80% of the energy in the original feedstock. Pyrolysis of AHR proved unsatisfactory, so attention focussed on gasification and combustion with the aim of producing heat and/or power to supply the energy demanded by the ethyl levulinate production process. A thermal processing rig consisting of a Laminar Entrained Flow Reactor (LEFR) equipped with solid and liquid collection and online gas analysis systems was designed and built to explore pyrolysis, gasification and air-blown combustion of AHR. Maximum liquid yield for pyrolysis of AHR was 30 wt% with volatile conversion of 80%. Gas yield for AHR gasification was 78 wt%, with 8 wt% tar yields and conversion of volatiles close to 100%. 90 wt% of the AHR was transformed into gas by combustion, with volatile conversions above 90%. Gasification with 5 vol% O2-95 vol% N2 resulted in a nitrogen-diluted, low heating value gas (2 MJ/m3). Steam and oxygen-blown gasification of AHR were additionally investigated in a batch gasifier at KTH in Sweden. Steam promoted the formation of hydrogen (25 vol%) and methane (14 vol%), improving the gas heating value to 10 MJ/m3, below the typical value for steam gasification due to equipment limitations. Arrhenius kinetic parameters were calculated using data collected with the LEFR to provide reaction rate information for process design and optimisation.
Activation energy (EA) and pre-exponential factor (k0 in s^-1) for pyrolysis (EA = 80 kJ/mol, ln k0 = 14), gasification (EA = 69 kJ/mol, ln k0 = 13) and combustion (EA = 42 kJ/mol, ln k0 = 8) were calculated after linearly fitting the data using the random pore model. Kinetic parameters for pyrolysis and combustion were also determined by dynamic thermogravimetric analysis (TGA), including studies of the original biomass feedstocks for comparison. Results obtained by differential and integral isoconversional methods for activation energy determination were compared. Activation energy calculated by the Vyazovkin method was 103-204 kJ/mol for pyrolysis of untreated feedstocks and 185-387 kJ/mol for AHRs. Combustion activation energy was 138-163 kJ/mol for biomass and 119-158 kJ/mol for AHRs. The non-linear least squares method was used to determine the reaction model and pre-exponential factor. Pyrolysis and combustion of biomass were best modelled by a combination of third-order reaction and three-dimensional diffusion models, while AHR decomposed following the third-order reaction for pyrolysis and three-dimensional diffusion for combustion.
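The reaction-model selection step can be sketched as a least-squares competition between candidate integral models g(alpha): the model whose g(alpha) is most nearly proportional to time wins. This is a simplified, linearized stand-in for the thesis's non-linear fitting; synthetic data are generated from a third-order (F3) model, which the selection should recover:

```python
import numpy as np

# Least-squares model selection among common integral reaction models.
# Synthetic conversion data follow the exact F3 solution, so the F3 model
# should give the (numerically) smallest residual.

models = {
    "F1 first order":   lambda a: -np.log(1 - a),
    "F3 third order":   lambda a: ((1 - a) ** -2 - 1) / 2,
    "D3 3-D diffusion": lambda a: (1 - (1 - a) ** (1 / 3)) ** 2,
}

def best_model(t, alpha):
    scores = {}
    for name, g in models.items():
        y = g(alpha)
        k = (t @ y) / (t @ t)                       # LSQ slope through origin
        scores[name] = np.sum((y - k * t) ** 2) / np.sum(y ** 2)
    return min(scores, key=scores.get)

t = np.linspace(0.1, 20, 50)                        # time, arbitrary units
alpha_f3 = 1 - (1 + 2 * 0.05 * t) ** -0.5           # exact F3 solution, k = 0.05
```

The fitted slope k then yields the rate constant, from which the pre-exponential factor follows via the Arrhenius relation once EA is known.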
Abstract:
Circulating low density lipoproteins (LDL) are thought to play a crucial role in the onset and development of atherosclerosis, though the detailed molecular mechanisms responsible for their biological effects remain controversial. The complexity of biomolecules (lipids, glycans and protein) and structural features (isoforms and chemical modifications) found in LDL particles hampers the complete understanding of the mechanism underlying its atherogenicity. For this reason, screening LDL for features discriminative of a particular pathology in search of biomarkers is of high importance. Three major biomolecule classes (lipids, protein and glycans) in LDL particles were screened using mass spectrometry coupled to liquid chromatography. Dual-polarity screening resulted in good lipidome coverage, identifying over 300 lipid species from 12 lipid sub-classes. Multivariate analysis was used to investigate potential discriminators in the individual lipid sub-classes for different study groups (age, gender, pathology). Additionally, the high protein sequence coverage of ApoB-100 routinely achieved (≥70%) assisted in the search for protein modifications correlating to aging and pathology. The large size and complexity of the datasets required the use of chemometric methods (Partial Least Squares Discriminant Analysis, PLS-DA) for their analysis and for the identification of ions that discriminate between study groups. The peptide profile from enzymatically digested ApoB-100 can be correlated with the high structural complexity of lipids associated with ApoB-100 using exploratory data analysis. In addition, using targeted scanning modes, glycosylation sites within neutral and acidic sugar residues in ApoB-100 are also being explored.
Together or individually, knowledge of the profiles and modifications of the major biomolecules in LDL particles will contribute towards an in-depth understanding, will help to map the structural features that contribute to the atherogenicity of LDL, and may allow identification of reliable, pathology-specific biomarkers. This research was supported by a Marie Curie Intra-European Fellowship within the 7th European Community Framework Program (IEF 255076). Work of A. Rudnitskaya was supported by Portuguese Science and Technology Foundation, through the European Social Fund (ESF) and "Programa Operacional Potencial Humano - POPH".
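The PLS-DA approach mentioned above can be illustrated, in heavily reduced form, with a single-component two-class discriminant: the lone PLS weight vector is proportional to X'y, and samples are classified by thresholding their score along it. A real lipidomics analysis would use several components and cross-validation; the data here are synthetic:

```python
import numpy as np

# One-component PLS-DA sketch. The weight vector X'y (normalized) points
# from one class mean toward the other; scores are projections onto it and
# class is assigned by a midpoint threshold. Synthetic data only.

def pls_da_fit(X, y):
    Xc = X - X.mean(axis=0)
    w = Xc.T @ (y - y.mean())
    w /= np.linalg.norm(w)
    scores = Xc @ w
    thr = (scores[y == 0].mean() + scores[y == 1].mean()) / 2
    return X.mean(axis=0), w, thr

def pls_da_predict(model, X):
    mu, w, thr = model
    return ((X - mu) @ w > thr).astype(int)

rng = np.random.default_rng(2)
n, p = 100, 20                        # e.g. samples x lipid features
X0 = rng.normal(size=(n, p))          # study group 0
X1 = rng.normal(size=(n, p))
X1[:, :3] += 2.0                      # 3 discriminating features in group 1
X = np.vstack([X0, X1])
y = np.array([0] * n + [1] * n)
model = pls_da_fit(X, y)
acc = (pls_da_predict(model, X) == y).mean()
```

Inspecting the largest entries of the weight vector is the usual way such models point to candidate discriminating features (here, the first three columns).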
Abstract:
Data fluctuation in multiple measurements of Laser Induced Breakdown Spectroscopy (LIBS) greatly affects the accuracy of quantitative analysis. A new LIBS quantitative analysis method based on the Robust Least Squares Support Vector Machine (RLS-SVM) regression model is proposed. The usual way to enhance the analysis accuracy is to improve the quality and consistency of the emission signal, such as by averaging the spectral signals or spectrum standardization over a number of laser shots. The proposed method focuses more on how to enhance the robustness of the quantitative analysis regression model. The proposed RLS-SVM regression model originates from the Weighted Least Squares Support Vector Machine (WLS-SVM) but has an improved segmented weighting function and residual error calculation according to the statistical distribution of measured spectral data. Through the improved segmented weighting function, the information on the spectral data in the normal distribution will be retained in the regression model while the information on the outliers will be restrained or removed. Copper elemental concentration analysis experiments of 16 certified standard brass samples were carried out. The average value of relative standard deviation obtained from the RLS-SVM model was 3.06% and the root mean square error was 1.537%. The experimental results showed that the proposed method achieved better prediction accuracy and better modeling robustness compared with the quantitative analysis methods based on Partial Least Squares (PLS) regression, standard Support Vector Machine (SVM) and WLS-SVM. It was also demonstrated that the improved weighting function had better comprehensive performance in model robustness and convergence speed, compared with the four known weighting functions.
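The segmented weighting idea behind RLS-SVM can be illustrated with its plain-regression analogue: iteratively reweighted least squares with a Huber-style segmented weight function, where residuals inside a band keep weight 1 and outliers are down-weighted. This is a sketch of the weighting principle only, not the paper's LS-SVM formulation or its specific weighting function; the calibration data are synthetic:

```python
import numpy as np

# Robust calibration sketch: iteratively reweighted least squares with a
# segmented (Huber-style) weight function. Residuals within c*scale keep
# weight 1; larger residuals (outlier shots) are down-weighted.

def segmented_weights(r, c=1.345):
    s = np.median(np.abs(r)) / 0.6745 + 1e-12   # robust MAD scale estimate
    u = np.abs(r) / s
    return np.where(u <= c, 1.0, c / u)

def robust_fit(X, y, iters=20):
    w = np.ones(len(y))
    for _ in range(iters):
        sw = np.sqrt(w)
        beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
        w = segmented_weights(y - X @ beta)
    return beta

# Synthetic calibration line (e.g. intensity vs concentration) with outliers
rng = np.random.default_rng(3)
x = np.linspace(0, 10, 40)
y = 2.0 * x + 1.0 + rng.normal(0, 0.1, 40)
y[::10] += 5.0                                  # gross outlier shots
X = np.column_stack([np.ones_like(x), x])
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
beta_rob = robust_fit(X, y)
```

As in the paper's comparison, the robust fit should sit much closer to the true line than plain least squares when outlier spectra are present.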
Abstract:
In developed countries travel time savings can account for as much as 80% of the overall benefits arising from transport infrastructure and service improvements. In developing countries they are generally ignored in transport project appraisals, notwithstanding their importance. One of the reasons for ignoring these benefits in the developing countries is that there is insufficient empirical evidence to support the conventional models for valuing travel time where work patterns, particularly of the poor, are diverse and it is difficult to distinguish between work and non-work activities. The exclusion of time saving benefits may lead to a bias against investment decisions that benefit the poor and understate the poverty reduction potential of transport investments in Least Developed Countries (LDCs). This is because the poor undertake most travel and transport by walking and headloading on local roads, tracks and paths, and improvements of local infrastructure and services bring large time saving benefits for them through modal shifts. The paper reports on an empirical study to develop a methodology for valuing rural travel time savings in the LDCs. Apart from identifying the theoretical and empirical issues in valuing travel time savings in the LDCs, the paper presents and discusses the results of an analysis of data from Bangladesh. Some of the study findings challenge the conventional wisdom concerning time saving values. The Bangladesh study suggests that the western concept of dividing travel time savings into working and non-working time savings is broadly valid in the developing country context. The study validates the use of preference methods in valuing non-working time savings. However, the stated preference (SP) method is more appropriate than the revealed preference (RP) method.
Abstract:
When the data are counts or frequencies of particular events and can be expressed as a contingency table, they can be analysed using the chi-square distribution. When applied to a 2 x 2 table, the test is approximate, and care needs to be taken when the expected frequencies are small, either by applying Yates's correction or by using Fisher's exact test. Larger contingency tables can also be analysed using this method. Note that it is a serious statistical error to use any of these tests on measurement data!
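For a 2 x 2 table the statistic has a closed form, with Yates's correction subtracting n/2 from |ad - bc| before squaring. A self-contained sketch:

```python
# Chi-square statistic for a 2x2 contingency table [[a, b], [c, d]], with
# optional Yates's continuity correction, in plain Python.

def chi_square_2x2(a, b, c, d, yates=True):
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    if 0 in (row1, row2, col1, col2):
        raise ValueError("degenerate table")
    num = abs(a * d - b * c)
    if yates:
        num = max(num - n / 2, 0)       # continuity correction
    return n * num ** 2 / (row1 * row2 * col1 * col2)
```

The statistic is then compared against the chi-square distribution with one degree of freedom; as the text notes, with small expected frequencies Fisher's exact test should be preferred.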
Abstract:
Economic factors such as the rise in cost of raw materials, labour and power are compelling manufacturers of cold-drawn polygonal sections to seek new production routes which will enable the expansion in the varieties of metals used and the inclusion of difficult-to-draw materials. One such method generating considerable industrial interest is the drawing of polygonal sections from round at elevated temperature. The technique of drawing mild steel, medium carbon steel and boron steel wire into octagonal, hexagonal and square sections from round at up to 850 deg C and 50% reduction of area in one pass has been established. The main objective was to provide a basic understanding of the process, with particular emphasis being placed on modelling using both experimental and theoretical considerations. Elevated temperature stress-strain data were obtained using a modified torsion testing machine. Data were used in the upper bound solution derived and solved numerically to predict drawing stress, strain, strain-rate, temperature and flow stress distribution in the deforming zone for a range of variables. The success of this warm working process will, of course, depend on the use of a satisfactory elevated temperature lubricant, an efficient cooling system, a suitable tool material having good wear and thermal shock resistance and an efficient die profile design which incorporates the principle of least work. The merits and demerits of die materials such as tungsten carbide, chromium carbide, Syalon and Stellite are discussed, principally from the standpoint of minimising drawing force and die wear. Generally, the experimental and theoretical results were in good agreement; the drawing stress could be predicted within close limits and the process proved to be technically feasible.
Finite element analysis has been carried out on the various die geometries and die materials, to gain a greater understanding of the behaviour of these dies under the process of elevated temperature drawing, and to establish the temperature distribution and thermal distortion in the deforming zone, thus establishing the optimum die design and die material for the process. It is now possible to predict, for the materials already tested, (i) the optimum drawing temperature range, (ii) the maximum possible reduction of area per pass, (iii) the optimum drawing die profiles and die materials, (iv) the most efficient lubricant in terms of reducing the drawing force and die wear.
Abstract:
Distortion or deprivation of vision during an early 'critical' period of visual development can result in permanent visual impairment, which indicates the need to identify and treat visually at-risk individuals early. A significant difficulty in this respect is that conventional, subjective methods of visual acuity determination are ineffective before approximately three years of age. In laboratory studies, infant visual function has been quantified precisely using objective methods based on visual evoked potentials (VEP), preferential looking (PL) and optokinetic nystagmus (OKN), but clinical assessment of infant vision has presented a particular difficulty. An initial aim of this study was to evaluate the relative clinical merits of the three techniques. Clinical derivatives were devised; the OKN method proved unsuitable, but the PL and VEP methods were evaluated in a pilot study. Most infants participating in the study had known ocular and/or neurological abnormalities, but a few normals were included for comparison. The study suggested that the PL method was more clinically appropriate for the objective assessment of infant acuity. A study of normal visual development from birth to one year was subsequently conducted. Observations included cycloplegic refraction, ophthalmoscopy and preferential looking visual acuity assessment using horizontally and vertically oriented square wave gratings. The aims of the work were to investigate the efficiency and sensitivity of the technique and to study possible correlates of visual development. The success rate of the PL method varied with age; 87% of newborns and 98% of infants attending follow-up successfully completed at least one acuity test. Below two months, monocular acuities were difficult to secure; infants were most testable around six months. The results produced were similar to published data using the acuity card procedure and slightly lower than, but comparable with, acuity data derived using extended PL methods.
Acuity development was not impaired in infants found to have retinal haemorrhages as newborns. A significant relationship was found between newborn binocular acuity and anisometropia but not with other refractive findings. No strong or consistent correlations between grating acuity and refraction were found for three-, six- or twelve-month-olds. Improvements in acuity and decreases in levels of hyperopia over the first week of life were suggestive of recovery from minor birth trauma. The refractive data were analysed separately to investigate the natural history of refraction in normal infants. Most newborns (80%) were hyperopic; significant astigmatism was found in 86% and significant anisometropia in 22%. No significant alteration in spherical equivalent refraction was noted between birth and three months, a significant reduction in hyperopia was evident by six months and this trend continued until one year. Observations on the astigmatic component of the refractive error revealed a rather erratic series of changes which would be worthy of further investigation, since a repeat refraction study suggested difficulties in obtaining stable measurements in newborns. Astigmatism tended to decrease between birth and three months, increased significantly from three to six months and decreased significantly from six to twelve months. A constant decrease in the degree of anisometropia was evident throughout the first year. These findings have implications for the correction of infantile refractive error.
Abstract:
The performance of wireless networks is limited by multiple access interference (MAI) in the traditional communication approach, where the interfered signals of concurrent transmissions are treated as noise. In this paper, we treat the interfered signals from a new perspective on the basis of additive electromagnetic (EM) waves and propose a network coding based interference cancelation (NCIC) scheme. In the proposed scheme, adjacent nodes can transmit simultaneously with careful scheduling; therefore, network performance will not be limited by the MAI. Additionally, we design a space segmentation method for general wireless ad hoc networks, which organizes the network into clusters with regular shapes (e.g., square and hexagon) to reduce the number of relay nodes. The segmentation method works with the scheduling scheme and can help achieve better scalability and reduced complexity. We derive accurate analytic models for the probability of connectivity between two adjacent cluster heads, which is important for successful information relay. We prove that with the proposed NCIC scheme, the transmission efficiency can be improved by at least 50% for general wireless networks as compared to the traditional interference avoidance schemes. Numeric results also show the space segmentation is feasible and effective. Finally, we propose and discuss a method to implement the NCIC scheme in practical orthogonal frequency division multiplexing (OFDM) communication networks. Copyright © 2009 John Wiley & Sons, Ltd.
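The gain from combining transmissions can be illustrated with the classic XOR relay example, in which two nodes exchange packets through a relay in three transmissions instead of four. This toy sketch shows the coding idea only, not the paper's physical-layer NCIC scheme or its 50% result:

```python
# XOR network-coding relay sketch: A and B each send a packet to relay R;
# R broadcasts the XOR of the two; each node recovers the other's packet
# by XOR-ing the broadcast with its own packet.

def xor_bytes(p, q):
    """Bytewise XOR of two equal-length packets."""
    return bytes(x ^ y for x, y in zip(p, q))

packet_a = b"hello"                     # A -> R  (transmission 1)
packet_b = b"world"                     # B -> R  (transmission 2)
coded = xor_bytes(packet_a, packet_b)   # R broadcasts (transmission 3)

recovered_at_a = xor_bytes(coded, packet_a)   # A decodes B's packet
recovered_at_b = xor_bytes(coded, packet_b)   # B decodes A's packet
```

NCIC extends this principle to the signal level, exploiting the additivity of EM waves rather than operating on decoded bits.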
Abstract:
A significant change of scene in a gradually changing scene is detected with the aid of at least one camera means for capturing digital images of the scene. A current image of the scene is formed together with a present weighted reference image which is formed from a plurality of previous images of the scene. Cell data is established based on the current image and the present weighted reference image. The cell data is statistically analysed so as to identify at least one difference corresponding to a significant change of scene. When identified, an indication of such significant change of scene is provided.
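A minimal sketch of the mechanism described: a weighted reference image maintained as an exponential moving average of previous frames, with the current frame differenced against it and a change flagged when the difference is large. The blending factor and threshold are illustrative choices, and the patent's per-cell statistical analysis is reduced here to a single mean-absolute-difference test:

```python
# Weighted-reference-image change detection sketch. Frames are lists of
# pixel rows; the reference is an exponential moving average of previous
# frames, so gradual drift is absorbed while abrupt changes stand out.

def update_reference(ref, frame, alpha=0.1):
    """Blend the current frame into the weighted reference image."""
    return [[(1 - alpha) * r + alpha * f for r, f in zip(rr, fr)]
            for rr, fr in zip(ref, frame)]

def cell_changed(ref, frame, threshold=20.0):
    """Mean absolute difference of one cell against a fixed threshold."""
    diffs = [abs(f - r) for rr, fr in zip(ref, frame)
             for r, f in zip(rr, fr)]
    return sum(diffs) / len(diffs) > threshold

ref = [[100.0] * 4 for _ in range(4)]       # reference cell (4x4 pixels)
slow = [[102.0] * 4 for _ in range(4)]      # gradual drift: not significant
sudden = [[160.0] * 4 for _ in range(4)]    # abrupt change: flagged
drifted_ref = update_reference(ref, slow)   # reference tracks slow change
```

In a full implementation the image is partitioned into many cells and the per-cell statistics are compared against their own distribution rather than a fixed threshold.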
Abstract:
The accurate identification of T-cell epitopes remains a principal goal of bioinformatics within immunology. As the immunogenicity of peptide epitopes is dependent on their binding to major histocompatibility complex (MHC) molecules, the prediction of binding affinity is a prerequisite to the reliable prediction of epitopes. The iterative self-consistent (ISC) partial-least-squares (PLS)-based additive method is a recently developed bioinformatic approach for predicting class II peptide-MHC binding affinity. The ISC-PLS method overcomes many of the conceptual difficulties inherent in the prediction of class II peptide-MHC affinity, such as the binding of a mixed population of peptide lengths due to the open-ended class II binding site. The method has applications in both the accurate prediction of class II epitopes and the manipulation of affinity for heteroclitic and competitor peptides. The method is applied here to six class II mouse alleles (I-Ab, I-Ad, I-Ak, I-As, I-Ed, and I-Ek) and included peptides up to 25 amino acids in length. A series of regression equations highlighting the quantitative contributions of individual amino acids at each peptide position was established. The initial model for each allele exhibited only moderate predictivity. Once the set of selected peptide subsequences had converged, the final models exhibited a satisfactory predictive power. Convergence was reached between the 4th and 17th iterations, and the leave-one-out cross-validation statistical terms (q2, SEP, and NC) ranged between 0.732 and 0.925, 0.418 and 0.816, and 1 and 6, respectively. The non-cross-validated statistical terms r2 and SEE ranged between 0.98 and 0.995 and 0.089 and 0.180, respectively. The peptides used in this study are available from the AntiJen database (http://www.jenner.ac.uk/AntiJen). The PLS method is available commercially in the SYBYL molecular modeling software package.
The resulting models, which can be used for accurate T-cell epitope prediction, will be made freely available online (http://www.jenner.ac.uk/MHCPred).
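The q2 statistic quoted above is the leave-one-out cross-validated analogue of r2: q2 = 1 - PRESS/SS, where PRESS accumulates the squared errors of predictions made with the tested sample held out. A sketch for a plain linear model rather than the ISC-PLS models of the paper; the data are synthetic:

```python
import numpy as np

# Leave-one-out cross-validated q2 sketch: refit the model n times, each
# time predicting the held-out sample, and compare PRESS to the total sum
# of squares about the mean.

def loo_q2(X, y):
    n = len(y)
    press = 0.0
    for i in range(n):
        keep = np.arange(n) != i
        beta, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
        press += (y[i] - X[i] @ beta) ** 2
    ss = np.sum((y - y.mean()) ** 2)
    return 1 - press / ss

rng = np.random.default_rng(4)
x = rng.uniform(0, 10, 30)
X = np.column_stack([np.ones_like(x), x])
y = 1.5 * x + rng.normal(0, 0.5, 30)    # strong linear signal
q2 = loo_q2(X, y)
```

Because each prediction is made without the tested sample, q2 is always below the fitted r2, which is why the abstract reports both.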