953 results for Linear multivariate methods


Relevance: 30.00%

Publisher:

Abstract:

PURPOSE: Thoracic fat has been associated with an increased risk of coronary artery disease (CAD). As endothelium-dependent vasoreactivity is a surrogate of cardiovascular events and is impaired early in atherosclerosis, we aimed to assess the possible relationship between thoracic fat volume (TFV) and endothelium-dependent coronary vasomotion. METHODS: Fifty healthy volunteers without known CAD or major cardiovascular risk factors (CRFs) prospectively underwent ⁸²Rb cardiac PET/CT to quantify myocardial blood flow (MBF) at rest, and the MBF response to cold pressor testing (CPT-MBF) and to adenosine (i.e., stress MBF). TFV was measured by a 2D volumetric CT method, and common laboratory blood tests (glucose and insulin levels, HOMA-IR, cholesterol, triglycerides, hsCRP) were performed. Relationships between CPT-MBF, TFV and the other CRFs were assessed using non-parametric Spearman rank correlation and multivariate linear regression analysis. RESULTS: All 50 participants (58 ± 10 y) had normal stress MBF (2.7 ± 0.6 mL/min/g; 95% CI: 2.6-2.9) and myocardial flow reserve (2.8 ± 0.8; 95% CI: 2.6-3.0), excluding underlying CAD. Univariate analysis revealed a significant inverse relation between absolute CPT-MBF and sex (ρ = -0.47, p = 0.0006), triglyceride (ρ = -0.32, p = 0.024) and insulin levels (ρ = -0.43, p = 0.0024), HOMA-IR (ρ = -0.39, p = 0.007), BMI (ρ = -0.51, p = 0.0002) and TFV (ρ = -0.52, p = 0.0001). The MBF response to adenosine was also correlated with TFV (ρ = -0.32, p = 0.026). On multivariate analysis, TFV emerged as the only significant predictor of the MBF response to CPT (p = 0.014). CONCLUSIONS: TFV is significantly correlated with endothelium-dependent and -independent coronary vasomotion. A high thoracic fat burden might negatively influence the MBF response to CPT and to adenosine stress, even in persons without CAD, suggesting a link between thoracic fat and future cardiovascular events.
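The two analysis steps named in the abstract (Spearman rank correlation, then multivariate linear regression) can be sketched on synthetic data; the variable names, units and effect sizes below are invented for illustration, not the study's values.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 50
tfv = rng.normal(500, 100, n)                         # thoracic fat volume (synthetic)
bmi = rng.normal(25, 3, n)
cpt_mbf = 2.0 - 0.001 * tfv + rng.normal(0, 0.1, n)   # inverse TFV relation + noise

# Univariate step: non-parametric Spearman rank correlation
rho, p = stats.spearmanr(tfv, cpt_mbf)

# Multivariate step: ordinary least squares with TFV and BMI as predictors
X = np.column_stack([np.ones(n), tfv, bmi])
coef, *_ = np.linalg.lstsq(X, cpt_mbf, rcond=None)
print(rho < 0, p < 0.05, coef[1] < 0)
```

With a genuine inverse relation built into the synthetic data, both the rank correlation and the regression coefficient for TFV come out negative, mirroring the direction of the reported associations.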


OBJECTIVE: To quantify the relation between body mass index (BMI) and endometrial cancer risk, and to describe the shape of such a relation. DESIGN: Pooled analysis of three hospital-based case-control studies. SETTING: Italy and Switzerland. POPULATION: A total of 1449 women with endometrial cancer and 3811 controls. METHODS: Multivariate odds ratios (OR) and 95% confidence intervals (95% CI) were obtained from logistic regression models. The shape of the relation was determined using a class of flexible regression models. MAIN OUTCOME MEASURE: The relation of BMI with endometrial cancer. RESULTS: Compared with women with BMI 18.5 to <25 kg/m², the odds ratio was 5.73 (95% CI 4.28-7.68) for women with a BMI ≥35 kg/m². The odds ratios were 1.10 (95% CI 1.09-1.12) and 1.63 (95% CI 1.52-1.75), respectively, for increments of BMI of 1 and 5 units. The relation was stronger in never-users of oral contraceptives (OR 3.35, 95% CI 2.78-4.03, for BMI ≥30 versus <25 kg/m²) than in users (OR 1.22, 95% CI 0.56-2.67), and in women with diabetes (OR 8.10, 95% CI 4.10-16.01, for BMI ≥30 versus <25 kg/m²) than in those without diabetes (OR 2.95, 95% CI 2.44-3.56). The relation was best fitted by a cubic model, although after the exclusion of the 5% upper and lower tails, it was best fitted by a linear model. CONCLUSIONS: The results of this study confirm a role of elevated BMI in the aetiology of endometrial cancer and suggest that the risk in obese women increases in a cubic nonlinear fashion. The relation was stronger in never-users of oral contraceptives and in women with diabetes. TWEETABLE ABSTRACT: Risk of endometrial cancer increases with elevated body weight in a cubic nonlinear fashion.
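As a back-of-the-envelope consistency check on the reported per-unit odds ratios (not a re-analysis of the data): under a log-linear logistic model, the OR for a 5-unit BMI increment should be roughly the 1-unit OR raised to the fifth power.

```python
# A k-unit increment multiplies the odds by the per-unit OR to the k-th power.
or_per_unit = 1.10            # reported OR for a 1-unit BMI increment
or_per_5 = or_per_unit ** 5   # implied OR for a 5-unit increment
print(round(or_per_5, 2))     # 1.61, close to the reported 1.63
```

The small gap between 1.61 and the reported 1.63 is expected, since the two ORs were estimated separately rather than derived from one another.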


Learning of preference relations has recently received significant attention in the machine learning community. It is closely related to classification and regression analysis and can be reduced to these tasks. However, preference learning involves prediction of an ordering of the data points rather than prediction of a single numerical value, as in regression, or a class label, as in classification. Therefore, studying preference relations within a separate framework not only facilitates a better theoretical understanding of the problem, but also motivates the development of efficient algorithms for the task. Preference learning has many applications in domains such as information retrieval, bioinformatics and natural language processing. For example, algorithms that learn to rank are frequently used in search engines for ordering the documents retrieved by a query. Preference learning methods have also been applied to collaborative filtering problems for predicting individual customer choices from the vast amount of user-generated feedback. In this thesis we propose several algorithms for learning preference relations. These algorithms stem from the well-founded and robust class of regularized least-squares methods and have many attractive computational properties. In order to improve the performance of our methods, we introduce several non-linear kernel functions. Thus, the contribution of this thesis is twofold: kernel functions for structured data, used to take advantage of various non-vectorial data representations, and preference learning algorithms suitable for different tasks, namely efficient learning of preference relations, learning with large amounts of training data, and semi-supervised preference learning. The proposed kernel-based algorithms and kernels are applied to the parse ranking task in natural language processing, document ranking in information retrieval, and remote homology detection in bioinformatics.
Training kernel-based ranking algorithms can be infeasible when the training set is large. This problem is addressed by proposing a preference learning algorithm whose computational complexity scales linearly with the number of training data points. We also introduce a sparse approximation of the algorithm that can be trained efficiently with large amounts of data. For situations where only a small amount of labeled data but a large amount of unlabeled data is available, we propose a co-regularized preference learning algorithm. To conclude, the methods presented in this thesis address not only the efficient training of the algorithms but also fast regularization parameter selection, multiple-output prediction, and cross-validation. Furthermore, the proposed algorithms lead to notably better performance in many of the preference learning tasks considered.


Rosin is a natural product from pine forests and is used as a raw material in resinate syntheses. Resinates are polyvalent metal salts of rosin acids; Ca- and Ca/Mg-resinates in particular find wide application in the printing ink industry. In this thesis, analytical methods were applied to increase general knowledge of resinate chemistry, and the reaction kinetics was studied in order to model the non-linear increase of solution viscosity during resinate syntheses by the fusion method. Solution viscosity in toluene is an important quality factor for resinates to be used in printing inks. The concept of a critical resinate concentration, c_crit, was introduced to define an abrupt change in the dependence of viscosity on resinate concentration in solution. The concept was then used to explain the non-linear increase of solution viscosity during resinate syntheses. A semi-empirical model with two estimated parameters was derived for the viscosity increase on the basis of apparent reaction kinetics. The model was used to control the viscosity and to predict the total reaction time of the resinate process. The kinetic data from the complex reaction media were obtained by acid value titration and by FTIR spectroscopic analyses, using a conventional calibration method to measure the resinate concentration and the concentration of free rosin acids. A multivariate calibration method was successfully applied to build partial least squares (PLS) models for monitoring acid value and solution viscosity in both the mid-infrared (MIR) and near-infrared (NIR) regions during the syntheses. The calibration models can be used for on-line monitoring of the resinate process. In the kinetic studies, two main reaction steps were observed during the syntheses. First, a fast irreversible resination reaction occurs at 235 °C, and then a slow thermal decarboxylation of rosin acids starts to take place at 265 °C.
Rosin oil is formed during the decarboxylation reaction step causing significant mass loss as the rosin oil evaporates from the system while the viscosity increases to the target level. The mass balance of the syntheses was determined based on the resinate concentration increase during the decarboxylation reaction step. A mechanistic study of the decarboxylation reaction was based on the observation that resinate molecules are partly solvated by rosin acids during the syntheses. Different decarboxylation mechanisms were proposed for the free and solvating rosin acids. The deduced kinetic model supported the analytical data of the syntheses in a wide resinate concentration region, over a wide range of viscosity values and at different reaction temperatures. In addition, the application of the kinetic model to the modified resinate syntheses gave a good fit. A novel synthesis method with the addition of decarboxylated rosin (i.e. rosin oil) to the reaction mixture was introduced. The conversion of rosin acid to resinate was increased to the level necessary to obtain the target viscosity for the product at 235 °C. Due to a lower reaction temperature than in traditional fusion synthesis at 265 °C, thermal decarboxylation is avoided. As a consequence, the mass yield of the resinate syntheses can be increased from ca. 70% to almost 100% by recycling the added rosin oil.
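The abstract does not give the form of the two-parameter semi-empirical viscosity model, so the fitting step can only be illustrated with a hypothetical model: here an exponential dependence of solution viscosity on resinate concentration, fitted with scipy. The model form, constants and data are all invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def viscosity(c, a, b):
    # hypothetical two-parameter form: eta = a * exp(b * c)
    return a * np.exp(b * c)

c = np.linspace(0.4, 0.8, 20)                    # resinate mass fraction (synthetic)
rng = np.random.default_rng(2)
eta = viscosity(c, 0.5, 8.0) * (1 + rng.normal(0, 0.01, c.size))  # 1% noise

(a_fit, b_fit), _ = curve_fit(viscosity, c, eta, p0=(0.6, 7.0))
print(round(a_fit, 1), round(b_fit, 1))
```

With two estimated parameters and measured viscosity-versus-concentration data, a fit of this kind is what allows the viscosity to be controlled and the total reaction time to be predicted, as described above.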


Abstract Background: Little is known about how sitting time, alone or in combination with markers of physical activity (PA), influences mental well-being and work productivity. Given the need to develop workplace PA interventions that target employees' health-related efficiency outcomes, this study examined the associations between self-reported sitting time, PA, mental well-being and work productivity in office employees. Methods: Descriptive cross-sectional study. Spanish university office employees (n = 557) completed a survey measuring socio-demographics, total and domain-specific (work and travel) self-reported sitting time, PA (International Physical Activity Questionnaire, short version), mental well-being (Warwick-Edinburgh Mental Well-Being Scale) and work productivity (Work Limitations Questionnaire). Multivariate linear regression analyses determined associations between the main variables, adjusted for gender, age, body mass index and occupation. PA levels (low, moderate and high) were introduced into the model to examine interactive associations. Results: Higher volumes of PA were related to higher mental well-being, higher work productivity and less time spent sitting at work, throughout the working day and while travelling during the week, including the weekends (p < 0.05). Greater levels of sitting during weekends were associated with lower mental well-being (p < 0.05). Similarly, more sitting while travelling at weekends was linked to lower work productivity (p < 0.05). In highly active employees, higher sitting times on work days and occupational sitting were associated with decreased mental well-being (p < 0.05). Higher sitting times while travelling on weekend days were also linked to lower work productivity in the highly active (p < 0.05). No significant associations were observed in low-active employees. Conclusions: Employees' PA levels exert different influences on the associations between sitting time, mental well-being and work productivity.
The specific associations and the broad sweep of evidence in the current study suggest that workplace PA strategies to improve the mental well-being and productivity of all employees should focus on reducing sitting time alongside efforts to increase PA.
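The interactive associations described above can be probed with a sitting × activity-level interaction term in the regression; a minimal sketch on synthetic data, with variable names and effect sizes invented rather than taken from the study:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 557
sitting = rng.normal(7, 2, n)                  # daily sitting time (h, synthetic)
high_pa = rng.integers(0, 2, n).astype(float)  # 1 = highly active employee
# synthetic effect: sitting lowers well-being only in the highly active group
wellbeing = 50 - 1.5 * sitting * high_pa + rng.normal(0, 3, n)

# design matrix: intercept, sitting, activity level, interaction
X = np.column_stack([np.ones(n), sitting, high_pa, sitting * high_pa])
coef, *_ = np.linalg.lstsq(X, wellbeing, rcond=None)
print(round(coef[3], 1))   # interaction coefficient, close to -1.5
```

A clearly negative interaction coefficient with a near-zero main sitting effect reproduces the pattern reported here: sitting time is associated with lower well-being only within the highly active group.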


The environmental impact of detergents and other consumer products lies behind the continued interest in the chemistry of the surfactants used. Of these, linear alkylbenzene sulfonates (LASs) are the most widely employed in detergent formulations. The precursors to LASs are linear alkylbenzenes (LABs). There is also interest in the chemistry of these hydrocarbons, because they are usually present in commercial LASs (due to incomplete sulfonation) or form as one of their degradation products. Additionally, they may be employed as molecular tracers of domestic waste in the aquatic environment. The following aspects are covered in the present review: the chemistry of surfactants, in particular LAS; the environmental impact of LAS production; environmental and toxicological effects of LAS; mechanisms of removal of LAS in the environment; and methods for monitoring LAS and LAB, the latter in domestic wastes. Classical and novel analytical methods employed for the determination of LAS and LAB are discussed in detail, and a brief comment on detergents in Brazil is given.


The electroencephalogram (EEG) is one of the most widely used techniques for studying the brain. It records the electrical signals produced in the human cortex through electrodes placed on the scalp. The technique, however, has some limitations when making recordings, the main one being artifacts: unwanted signals that mix with the EEG signals. The aim of this master's thesis is to present three new artifact-removal methods that can be applied to EEG. They are based on the Multivariate Empirical Mode Decomposition, a new technique used for signal processing. The proposed cleaning methods are applied to simulated EEG data containing artifacts (eye blinks); once the cleaning procedures have been applied, the results are compared with blink-free EEG data to assess the improvement they provide. Subsequently, two of the three proposed cleaning methods are applied to real EEG data. The conclusions drawn from this work are that two of the proposed cleaning procedures can be used to preprocess real data in order to remove eye blinks.


A new analytical method was developed to non-destructively determine the pH and degree of polymerisation (DP) of cellulose in fibres in 19th- and 20th-century painting canvases, and to identify the fibre type: cotton, linen, hemp, ramie or jute. The method is based on NIR spectroscopy and multivariate data analysis; for calibration and validation, a reference collection of 199 historical canvas samples was used. The reference collection was analysed destructively using microscopy and chemical analytical methods. Partial least squares regression was used to build quantitative methods to determine pH and DP, and linear discriminant analysis was used to determine the fibre type. To interpret the obtained chemical information, an expert assessment panel developed a categorisation system to discriminate between canvases that may not be fit to withstand excessive mechanical stress, e.g. transportation. The limiting DP for this category was found to be 600. With the new method and categorisation system, the canvases of 12 Dalí paintings from the Fundació Gala-Salvador Dalí (Figueres, Spain) were non-destructively analysed for pH, DP and fibre type, and their fitness determined, which informs conservation recommendations. The study demonstrates that collection-wide canvas condition surveys can be performed efficiently and non-destructively, which could significantly improve collection management.


Reversed-phase liquid chromatographic (LC) and ultraviolet (UV) spectrophotometric methods were developed and validated for the assay of bromopride in oral and injectable solutions. The methods were validated according to the ICH guidelines. Both methods were linear in the range of 5-25 μg mL⁻¹ (y = 41837x - 5103.4, r = 0.9996 and y = 0.0284x - 0.0351, r = 1, respectively). Statistical analysis showed no significant difference between the results obtained by the two methods. The proposed methods were found to be simple, rapid, precise, accurate and sensitive. The LC and UV methods can be used in the routine quantitative analysis of bromopride in oral and injectable solutions.
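Given the reported UV calibration line (y = 0.0284x − 0.0351, with y the response and x the concentration in μg mL⁻¹), recovering a concentration from a measured response is a one-line inversion; a small sketch using a simulated mid-range standard:

```python
# Invert the reported UV calibration line y = 0.0284x - 0.0351.
slope, intercept = 0.0284, -0.0351

def concentration(response):
    return (response - intercept) / slope

# Simulated response of a 15 ug/mL mid-range standard should give back ~15.
mid_standard = slope * 15 + intercept
print(round(concentration(mid_standard), 6))  # 15.0
```

The same inversion applies to the LC line (y = 41837x − 5103.4) with its own slope and intercept.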


The simultaneous determination of two or more active components in pharmaceutical preparations, without previous chemical separation, is a common analytical problem. Published works describe the determination of AZT and 3TC separately, as raw materials or in different pharmaceutical preparations. In this work, a method using UV spectroscopy and multivariate calibration is described for the simultaneous measurement of 3TC and AZT in fixed-dose combinations. The methodology was validated and applied to determine the AZT+3TC content in tablets from five different manufacturers, as well as their dissolution profiles. The results obtained with the proposed methodology were similar to those of methods using the first-derivative technique and HPLC.


Ten common doubts of chemistry students and professionals about statistical applications are discussed. The use of the N-1 denominator, instead of N, in the standard deviation is described. The statistical meaning of the denominators of the root mean square error of calibration (RMSEC) and the root mean square error of validation (RMSEV) is given for researchers using multivariate calibration methods. The reason why scientists and engineers use the average instead of the median is explained. Several problematic aspects of regression and correlation are treated. The popular use of triplicate experiments in teaching and research laboratories is shown to have its origin in statistical confidence intervals. Nonparametric statistics and bootstrapping methods round out the discussion.
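Two of the points above lend themselves to a quick worked illustration: the N-1 denominator in the sample standard deviation, and an RMSEC-style error (here computed with a plain 1/n mean; published definitions sometimes subtract the number of model components from the denominator).

```python
import numpy as np

x = np.array([9.8, 10.1, 10.0, 10.3, 9.9])   # five replicate measurements

sd_n = np.std(x, ddof=0)    # population formula, denominator N
sd_n1 = np.std(x, ddof=1)   # sample formula, denominator N-1 (the usual choice)
print(sd_n < sd_n1)         # dividing by N-1 always gives the larger value

# RMSEC-style error: reference values vs calibration-model predictions
y_ref = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.1, 1.9, 3.2, 3.9])
rmsec = np.sqrt(np.mean((y_ref - y_pred) ** 2))
print(round(rmsec, 3))      # 0.132
```

The N-1 denominator compensates for estimating the mean from the same data, which is exactly the kind of degrees-of-freedom argument the article applies to the RMSEC and RMSEV denominators.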


In this work, a spectrophotometric methodology with a multivariate approach was applied to determine epinephrine (EP), uric acid (UA) and acetaminophen (AC) in pharmaceutical formulations and in spiked human serum, plasma and urine. Multivariate calibration methods, such as partial least squares (PLS) and its derivatives, were used to obtain a model for the simultaneous determination of EP, UA and AC with good figures of merit; the mixture design covered the ranges 1.8-35.3, 1.7-16.8 and 1.5-12.1 µg mL⁻¹, respectively. The second-derivative PLS model showed recoveries of 95.3-103.3%, 93.3-104.0% and 94.0-105.5% for EP, UA and AC, respectively.


The aim of the present work was to provide a faster, simpler and less expensive way to analyze the sulfur content of diesel samples than the standard methods currently used. Samples of diesel fuel with sulfur concentrations varying from 400 to 2500 mg kg⁻¹ were analyzed by two methodologies: X-ray fluorescence, according to ASTM D4294, and Fourier transform infrared spectrometry (FTIR). The spectral data obtained from FTIR were used to build multivariate calibration models by partial least squares (PLS). Four models were built in three different ways: 1) a model using the full spectrum (665 to 4000 cm⁻¹), 2) two models using specific spectral regions, and 3) a model with variables selected by the classical stepwise variable-selection method. The model obtained by stepwise variable selection and the model built with the spectral regions between 665 and 856 cm⁻¹ and between 1145 and 2717 cm⁻¹ showed the best results in the determination of sulfur content.
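The region-selection step described above, keeping only the 665-856 cm⁻¹ and 1145-2717 cm⁻¹ windows of the full spectrum before building the PLS model, amounts to a boolean mask over the wavenumber axis; a small sketch on a synthetic spectrum:

```python
import numpy as np

# Synthetic stand-in for a full FTIR spectrum on the 665-4000 cm^-1 axis.
wavenumbers = np.linspace(665, 4000, 1668)
spectrum = np.random.default_rng(6).random(wavenumbers.size)

# Keep only the two windows reported to give the best PLS results.
mask = ((wavenumbers >= 665) & (wavenumbers <= 856)) | \
       ((wavenumbers >= 1145) & (wavenumbers <= 2717))
selected = spectrum[mask]
print(mask.sum(), selected.size)
```

The reduced matrix of selected variables, rather than the full spectrum, is then what gets passed to the PLS fit.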


Currently, numerous high-throughput technologies are available for the study of human carcinomas, and many variations of these techniques have been described in the literature. The common denominator of these methodologies is the large amount of data obtained in a single experiment, in a short time period and at a fairly low cost. However, these methods also have several problems and limitations. The purpose of this study was to test the applicability of two selected high-throughput methods, cDNA and tissue microarrays (TMA), in cancer research. Two common human malignancies, breast and colorectal cancer, were used as examples. This thesis aims to present some practical considerations that need to be addressed when applying these techniques. cDNA microarrays were applied to screen for aberrant gene expression in breast and colon cancers. Immunohistochemistry was used to validate the results and to evaluate the association of selected novel tumour markers with patient outcome. The type of histological material used in immunohistochemistry was evaluated, especially considering the applicability of whole tissue sections and different types of TMAs. Special attention was paid to the methodological details of the cDNA microarray and TMA experiments. In conclusion, many potential tumour markers were identified in the cDNA microarray analyses. Immunohistochemistry could be applied to validate the observed gene expression changes of selected markers and to associate their expression changes with patient outcome. In the current experiments, both TMAs and whole tissue sections could be used for this purpose. This study showed for the first time that securin and p120 catenin protein expression predict breast cancer outcome and that the immunopositivity of carbonic anhydrase IX is associated with the outcome of rectal cancer.
The predictive value of these proteins was statistically evident also in multivariate analyses, with up to a 13.1-fold risk of cancer-specific death in a specific subgroup of patients.


The switched reluctance technology is probably best suited for industrial low-speed or zero-speed applications where the power can be small but the torque, or the force in linear-movement cases, might be relatively high. Because of its simple structure, the SR motor is an interesting alternative for low-power applications where pneumatic or hydraulic linear drives are to be avoided. This study analyses the basic parts of an LSR motor, the two mover poles and one stator pole, which form the "basic pole pair" in linear-movement transversal-flux switched-reluctance motors. The static properties of the basic pole pair are modelled and the basic design rules are derived. The models developed are validated with experiments. A one-sided one-pole-pair transversal-flux switched-reluctance linear-motor prototype is demonstrated and its static properties are measured. The modelling of the static properties is performed with FEM calculations. Two-dimensional models are accurate enough to model the key static features for the basic dimensioning of LSR motors. Three-dimensional models must be used in order to obtain the most accurate calculations of the static traction force production. The developed dimensioning and modelling methods, which could be systematically validated by laboratory measurements, are the most significant contributions of this thesis.