952 results for Multivariate statistical method
Abstract:
In the present study, a reversed-phase high-performance liquid chromatographic (RP-HPLC) procedure was developed and validated for the simultaneous determination of seven water-soluble vitamins (thiamine, riboflavin, niacin, cyanocobalamin, ascorbic acid, folic acid, and p-aminobenzoic acid) and four fat-soluble vitamins (retinol acetate, cholecalciferol, α-tocopherol, and phytonadione) in multivitamin tablets. The linearity of the method was excellent (R² > 0.999) over the concentration range of 10-500 ng mL-1. The method was evaluated statistically by determining intra- and inter-day precision. Accuracy was tested by measuring average recovery; values ranged between 87.4% and 98.5%, acceptable quantitative results consistent with the label claims.
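The two validation figures reported above, linearity (R²) and percent recovery, can be reproduced with a short least-squares check. The calibration points and spike values below are illustrative stand-ins, not the study's raw data:

```python
import numpy as np

# Hypothetical calibration data (ng/mL) -- illustrative, not the study's raw data
conc = np.array([10, 50, 100, 250, 500], dtype=float)
signal = np.array([1.02, 5.05, 10.1, 25.2, 50.4])  # detector response

# Least-squares line and coefficient of determination R^2
slope, intercept = np.polyfit(conc, signal, 1)
pred = slope * conc + intercept
ss_res = np.sum((signal - pred) ** 2)
ss_tot = np.sum((signal - signal.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot

# Percent recovery: measured amount relative to the spiked amount
spiked, measured = 100.0, 95.3
recovery = 100.0 * measured / spiked
```

With nearly proportional responses like these, R² exceeds 0.999, matching the kind of linearity claimed in the abstract.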
Abstract:
The quantitative structure-property relationship (QSPR) for the boiling point (Tb) of polychlorinated dibenzo-p-dioxins and polychlorinated dibenzofurans (PCDD/Fs) was investigated. The molecular distance-edge vector (MDEV) index was used as the structural descriptor. The quantitative relationship between the MDEV index and Tb was modeled using multivariate linear regression (MLR) and an artificial neural network (ANN), respectively. Leave-one-out cross validation and external validation were carried out to assess the prediction performance of the models developed. For the MLR method, the prediction root mean square relative errors (RMSRE) of leave-one-out cross validation and external validation were 1.77 and 1.23, respectively; for the ANN method, they were 1.65 and 1.16. A quantitative relationship between the MDEV index and Tb of PCDD/Fs was demonstrated, and both MLR and ANN are practicable for modeling it. The MLR and ANN models developed can be used to predict the Tb of PCDD/Fs, and the Tb of each PCDD/F was accordingly predicted with them.
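The leave-one-out cross validation with an RMSRE criterion used to assess the MLR model can be sketched as follows. The descriptor/boiling-point pairs below are invented for illustration and are not the PCDD/F data set:

```python
import numpy as np

# Illustrative data: descriptor values (MDEV-like indices) vs. boiling points (K).
# These numbers are made up for demonstration, not the PCDD/F data.
X = np.array([[1.0], [1.5], [2.0], [2.5], [3.0], [3.5]])
y = np.array([580.0, 610.0, 645.0, 672.0, 700.0, 731.0])

def loo_rmsre(X, y):
    """Leave-one-out cross validation, root mean square relative error (%)."""
    rel_err = []
    n = len(y)
    for i in range(n):
        mask = np.arange(n) != i
        A = np.c_[X[mask], np.ones(mask.sum())]       # design matrix with intercept
        coef, *_ = np.linalg.lstsq(A, y[mask], rcond=None)
        pred = np.r_[X[i], 1.0] @ coef                # predict the held-out point
        rel_err.append((pred - y[i]) / y[i])
    return 100.0 * np.sqrt(np.mean(np.square(rel_err)))

rmsre = loo_rmsre(X, y)
```

Each point is predicted by a model fitted to the remaining points, so the RMSRE measures genuine out-of-sample error, as in the study's validation scheme.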
Abstract:
A novel, sensitive and relatively selective kinetic method is presented for the determination of V(V), based on its catalytic effect on the oxidation of Ponceau Xylidine by potassium bromate in the presence of 5-sulfosalicylic acid (SSA) as activator. The reaction was monitored spectrophotometrically by measuring the decrease in absorbance of Ponceau Xylidine at 640 nm between 0.5 and 7 min (the fixed-time method) in H3PO4 medium at 25 ºC. The effects of various parameters, such as the concentrations of H3PO4, SSA, bromate and Ponceau Xylidine, temperature and ionic strength, on the rate of the net reaction were studied. The method is free from most interferences, especially from large amounts of V(IV); only a few ions, such as Cr(VI) and Hg(II), interfere. The decrease in absorbance is proportional to the concentration of V(V) over the entire concentration range tested (1-15 ng mL−1), with a detection limit of 0.46 ng mL−1 (according to the statistical 3Sblank/k criterion) and a coefficient of variation (CV) of 1.8% (for ten replicate measurements at the 95% confidence level). The method was successfully applied to the determination of V(V) in tap water, drinking water, bottled mineral water samples and the certified standard reference material SRM-1640, with satisfactory results. The vanadium contents of the water samples were also determined by FAAS for comparison. The recovery of spiked V(V) was quantitative, the reproducibility was satisfactory, and the results for SRM 1640 were in good agreement with the certified value.
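The 3Sblank/k detection-limit criterion cited above amounts to three standard deviations of replicate blank readings divided by the calibration slope. A minimal sketch, with made-up blank readings and slope:

```python
import numpy as np

# Illustrative blank absorbance readings and calibration slope -- not the study's data.
blank_signals = np.array([0.0102, 0.0098, 0.0105, 0.0097, 0.0101,
                          0.0099, 0.0103, 0.0100, 0.0104, 0.0096])
slope = 0.0065                         # sensitivity k, absorbance per (ng/mL)

s_blank = blank_signals.std(ddof=1)    # sample standard deviation of the blanks
lod = 3.0 * s_blank / slope            # 3*s_blank/k detection limit, ng/mL
```

With these illustrative numbers the limit of detection comes out on the order of 0.1 ng/mL, i.e. the same order of magnitude as the 0.46 ng mL−1 reported.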
Abstract:
The aim of the present work was to provide a faster, simpler and less expensive way to analyze the sulfur content of diesel samples than the standard methods currently used. Samples of diesel fuel with sulfur concentrations varying from 400 to 2500 mg kg-1 were analyzed by two methodologies: X-ray fluorescence, according to ASTM D4294, and Fourier transform infrared spectrometry (FTIR). The spectral data obtained from FTIR were used to build multivariate calibration models by partial least squares (PLS). Four models were built in three different ways: 1) a model using the full spectrum (665 to 4000 cm-1); 2) two models using specific spectral regions; and 3) a model with variables selected by the classic stepwise variable-selection method. The stepwise model and the model built with the spectral regions between 665 and 856 cm-1 and between 1145 and 2717 cm-1 gave the best results in the determination of sulfur content.
Abstract:
A simple, rapid, accurate and inexpensive spectrophotometric method for the determination of tetracycline and doxycycline has been developed. The method is based on the reaction between these drugs and chloramine-T in alkaline medium, producing red-colored products with absorbance maxima at λ = 535 and 525 nm for tetracycline and doxycycline, respectively. The best reaction conditions were found using a multivariate method. Beer's law is obeyed over the concentration ranges 1.03 x 10-5 to 3.61 x 10-4 mol L-1 and 1.75 x 10-5 to 3.48 x 10-4 mol L-1 for tetracycline and doxycycline, respectively. The quantification limits were 5.63 x 10-6 mol L-1 and 7.12 x 10-7 mol L-1, respectively. The proposed method was successfully applied to the determination of these drugs in pharmaceutical formulations, and the results obtained were in good agreement with those obtained by the comparative method at the 95% confidence level.
Abstract:
Thermal and air conditions inside animal facilities change during the day due to the influence of the external environment. For statistical and geostatistical analyses to be representative, a large number of points spatially distributed in the facility area must be monitored. This work suggests that the time variation of environmental variables of interest for animal production, monitored within an animal facility, can be modeled accurately from discrete-time records. The aim of this study was to develop a numerical method to correct for the temporal variation of these environmental variables, transforming the data so that the observations are independent of the time spent during the measurement. The proposed method adjusts values recorded with time delays toward those expected at the exact moment of interest, as if the data had been measured simultaneously at all spatially distributed points. The correction model was validated for the air temperature variable: the values corrected by the method did not differ, by Tukey's test at 5% significance, from the actual values recorded by data loggers.
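The correction idea, shifting each delayed reading back to a common reference instant, can be sketched minimally by assuming a known temporal trend at the site. The linear trend and the readings below are invented for illustration; the paper's correction model may take a different form:

```python
import numpy as np

# Air temperatures measured sequentially across a facility (illustrative values).
measured = np.array([24.0, 24.3, 24.9, 25.4])   # deg C
delay = np.array([0.0, 5.0, 10.0, 15.0])        # minutes after the reference instant
trend = 0.04                                    # assumed warming rate, deg C per minute

# Shift each reading back to the reference instant, as if all points had
# been measured simultaneously.
corrected = measured - trend * delay
```

After the correction, spatial differences between points reflect the facility's spatial variability rather than the time elapsed while walking the measurement route.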
Abstract:
Radiation balance (Rn) is the fraction of incident solar radiation upon the earth's surface that is available for several natural processes, such as biological metabolism, water loss by vegetated surfaces, temperature variation in farming systems and organic decomposition. The present study aimed to assess and validate the performance of two estimation models for Rn in Ponta Grossa city, Paraná State, Brazil. To this end, radiometric data were collected from 04/01/2008 to 04/30/2011 by an automatic weather station set at the Experimental Station of the State University of Ponta Grossa. We performed a linear regression study comparing measurements of the radiation balance against Rn estimates obtained from the classical Brunt method and from the proposed method. Both models showed excellent performance, confirmed by the statistical parameters applied. However, the alternative method has the advantage of requiring only global solar radiation, temperature, and relative humidity values.
Abstract:
Longitudinal surveys are increasingly used to collect event history data on person-specific processes, such as transitions between labour market states. Survey-based event history data pose a number of challenges for statistical analysis, including survey errors due to sampling, non-response, attrition and measurement. This study deals with non-response, attrition and measurement errors in event history data and the bias they cause in event history analysis. The study also discusses some choices faced by a researcher using longitudinal survey data for event history analysis and demonstrates their effects: whether a design-based or a model-based approach is taken, which subset of data to use and, if a design-based approach is taken, which weights to use. The study takes advantage of the possibility of using combined longitudinal survey-register data. The Finnish subset of the European Community Household Panel (FI ECHP) survey for waves 1–5 was linked at person level with longitudinal register data. Unemployment spells were used as the study variables of interest. Lastly, a simulation study was conducted in order to assess the statistical properties of the Inverse Probability of Censoring Weighting (IPCW) method in a survey data context. The study shows how combined longitudinal survey-register data can be used to analyse and compare the non-response and attrition processes, test the missingness mechanism type and estimate the size of the bias due to non-response and attrition. In our empirical analysis, initial non-response turned out to be a more important source of bias than attrition. Reported unemployment spells were subject to seam effects, omissions and, to a lesser extent, overreporting. The use of proxy interviews tended to cause spell omissions. An often-ignored phenomenon, classification error in reported spell outcomes, was also found in the data.
Neither the Missing At Random (MAR) assumption about the non-response and attrition mechanisms, nor the classical assumptions about measurement errors, turned out to be valid. Measurement errors in both spell durations and spell outcomes were found to cause bias in estimates from event history models. Low measurement accuracy affected the estimates of the baseline hazard most. The design-based estimates based on data from respondents to all waves of interest, weighted by the last-wave weights, displayed the largest bias. Using all the available data, including the spells of attriters until the time of attrition, helped to reduce attrition bias. Finally, the simulation study showed that the IPCW correction to design weights reduces the bias due to dependent censoring in design-based Kaplan-Meier and Cox proportional hazards model estimators. The study discusses the implications of the results for survey organisations collecting event history data, researchers using surveys for event history analysis, and researchers who develop methods to correct for non-sampling biases in event history data.
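The IPCW idea applied to a Kaplan-Meier estimator can be sketched minimally: each subject is weighted by the inverse of a probability of remaining under observation (which in practice would come from a fitted model of the attrition process), and the product-limit estimator is formed from the weighted risk sets. All numbers below are invented for illustration:

```python
import numpy as np

# Toy event history data: months until an unemployment spell ends (illustrative).
time = np.array([2., 3., 5., 5., 7., 8., 10., 12.])
event = np.array([1, 1, 0, 1, 1, 0, 1, 1])      # 1 = spell ended, 0 = censored
# Assumed probabilities of still being under observation at each subject's time,
# e.g. from a model of attrition (made-up values here).
p_uncensored = np.array([0.95, 0.9, 0.85, 0.85, 0.8, 0.75, 0.7, 0.6])
w = 1.0 / p_uncensored                          # inverse probability of censoring weights

def weighted_km(time, event, w):
    """Kaplan-Meier survival curve with IPC-weighted risk and event counts."""
    surv = 1.0
    curve = []
    for t in np.unique(time[event == 1]):
        at_risk = w[time >= t].sum()            # weighted size of the risk set
        ended = w[(time == t) & (event == 1)].sum()
        surv *= 1.0 - ended / at_risk
        curve.append((t, surv))
    return curve

curve = weighted_km(time, event, w)
```

Subjects who were unlikely to stay in the panel are up-weighted, which is what counteracts the dependent-censoring bias the simulation study examines.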
Abstract:
The purpose of this study was to examine and expand understanding of young Finnish registered nurses' (RNs') intention to leave the profession and the related variables, specifically when that intention has emerged before the age of 30. The overall goal of the study was to develop a conceptual model of young RNs' intention to leave the profession. Suggestions are presented for how policymakers, nurse leaders and nurse managers can retain more young RNs in the nursing workforce, along with suggestions for future nursing research. Phase I consists of two sequential integrative literature reviews of 75 empirical articles concerning nurses' intention to leave the profession. In phase II, data had been collected as part of the Nurses' Early Exit (NEXT) study, using the BQ-12 structured postal questionnaire; a total of 147 young RNs participated, and the data were analysed with statistical methods. In phase III, firstly, an in-depth interpretive case study was conducted in order to understand how young RNs explain and make sense of their intention to leave the profession. The data consisted of longitudinal career stories by three young RNs and were analysed using narrative holistic-content and thematic methods. Secondly, a total of 15 young RNs were interviewed in order to explore in depth their experiences of organizational turnover and their intent to leave the profession; these data were analysed using conventional content analysis. According to earlier research, empirical work on young RNs' intention to leave the profession is scarce. Nurses' intention to leave the profession has mainly been studied with quantitative descriptive studies conducted with survey questionnaires, and the quality of previous studies varies considerably. Moreover, nurses' intention to leave the profession seems to be driven by a number of variables.
According to the survey study, 26% of young RNs had often considered giving up nursing completely and starting a different kind of job during the course of the previous year. Many different variables were associated with an intention to leave the profession (e.g. personal burnout, job dissatisfaction). According to the in-depth inquiries, poor nursing practice environments and a nursing career as a 'second-best' or serendipitous career choice were themes associated with young RNs' intention to leave the profession. In summary, young RNs' intention to leave the profession is a complex phenomenon with multiple associated variables. These findings suggest that policymakers, nurse leaders and nurse managers should enable improvements in nursing practice environments in order to retain more young RNs. Such improvements can include, for example, adequate staffing levels, balanced nursing workloads, measures to reduce work-related stress, and possibilities for advancement and development. Young RNs' requirements to provide high-quality and ethical nursing care must be recognized in society and health-care organizations. Moreover, sufficient mentoring and orientation programmes should be provided for all graduate RNs. Future research is needed into whether the motive for choosing a nursing career affects the length of tenure in the profession. Both quantitative and in-depth research is needed for the comprehensive development of nursing-turnover research.
Abstract:
Thirty-seven patients were submitted to kidney transplantation after transfusion at 2-week intervals with 4-week stored blood from their potential donors. All patients and donors were typed for HLA-A-B and DR antigens. The patients were also tested for cytotoxic antibodies against donor antigens before each transfusion. The percentage of panel reactive antibodies (PRA) was determined against a selected panel of 30 cell donors before and after the transfusions. The patients were immunosuppressed with azathioprine and prednisone, and rejection crises were treated with methylprednisolone. The control group consisted of 23 patients who received grafts from an unrelated donor but who did not receive donor-specific pretransplant blood transfusion. The incidence and reversibility of rejection episodes, allograft loss caused by rejection, and patient and graft survival rates were determined for both groups. Non-parametric methods (chi-square and Fisher tests) were used for statistical analysis, with the level of significance set at P<0.05. The incidence and reversibility of rejection crises during the first 60 post-transplant days did not differ significantly between groups. The actuarial graft and patient survival rates at five years were 56% and 77%, respectively, for the treated group and 39.8% and 57.5% for the control group. Graft loss due to rejection was significantly higher in the untreated group (P = 0.0026), which also required more intense immunosuppression (P = 0.0001). We conclude that transfusions using stored blood have the immunosuppressive effect of fresh blood transfusions without the risk of provoking widespread formation of antibodies. In addition, this method permits a reduction of the immunosuppressive drugs during the process without impairing the adequate functioning of the renal graft.
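The non-parametric comparisons mentioned (chi-square and Fisher tests on group frequencies) can be run with SciPy on a 2x2 contingency table. The counts below are hypothetical, chosen only to illustrate the procedure, not the study's data:

```python
from scipy.stats import fisher_exact, chi2_contingency

# Illustrative 2x2 table: graft loss due to rejection by treatment group.
# Counts are made up for demonstration; they are not the study's data.
table = [[3, 34],    # transfused group: lost, not lost
         [10, 13]]   # control group:    lost, not lost

odds_ratio, p_fisher = fisher_exact(table)               # Fisher's exact test
chi2, p_chi2, dof, expected = chi2_contingency(table)    # chi-square test
significant = p_fisher < 0.05                            # study's significance level
```

Fisher's exact test is preferred when expected cell counts are small, which is common with outcomes as rare as graft loss in groups of this size.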
Abstract:
Identification of low-dimensional structures and of the main sources of variation in multivariate data are fundamental tasks in data analysis. Many methods aimed at these tasks involve the solution of an optimization problem. The objective of this thesis is thus to develop computationally efficient and theoretically justified methods for solving such problems. Most of the thesis is based on a statistical model in which ridges of the density estimated from the data are considered the relevant features. Finding ridges, which are generalized maxima, necessitates the development of advanced optimization methods. An efficient and convergent trust region Newton method for projecting a point onto a ridge of the underlying density is developed for this purpose. The method is utilized in a differential equation-based approach for tracing ridges and computing projection coordinates along them. The density estimation is done nonparametrically using Gaussian kernels, which allows the application of ridge-based methods with only mild assumptions on the underlying structure of the data. The statistical model and the ridge-finding methods are adapted to two different applications. The first is the extraction of curvilinear structures from noisy data mixed with background clutter. The second is a novel nonlinear generalization of principal component analysis (PCA) and its extension to time series data. The methods have a wide range of potential applications where most earlier approaches are inadequate; examples include the identification of faults from seismic data and of filaments from cosmological data. The applicability of the nonlinear PCA to climate analysis and to the reconstruction of periodic patterns from noisy time series data is also demonstrated. Other contributions of the thesis include the development of an efficient semidefinite optimization method for embedding graphs into Euclidean space.
The method produces structure-preserving embeddings that maximize interpoint distances. It is primarily developed for dimensionality reduction, but it also has potential applications in graph theory and in various areas of physics, chemistry and engineering. The asymptotic behaviour of ridges and maxima of Gaussian kernel densities is also investigated as the kernel bandwidth approaches infinity. The results are applied to the nonlinear PCA and to finding significant maxima of such densities, a typical problem in visual object tracking.
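One common way to realize "projecting a point onto a ridge" of a Gaussian kernel density is subspace-constrained mean shift, which restricts each mean shift step to the Hessian eigendirections orthogonal to the ridge. The thesis's trust region Newton method is a different (and more sophisticated) solver; the 2-D sketch below, on synthetic data scattered around the x-axis, only illustrates the underlying idea:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 2-D data scattered around a 1-D ridge (the x-axis); illustrative only.
n = 500
data = np.c_[rng.uniform(-2, 2, n), 0.05 * rng.standard_normal(n)]
h = 0.3                                          # Gaussian kernel bandwidth

def scms_step(x, data, h):
    """One subspace-constrained mean shift step toward a density ridge."""
    d = data - x                                          # offsets X_i - x, shape (n, 2)
    c = np.exp(-0.5 * np.sum(d * d, axis=1) / h ** 2)     # Gaussian kernel weights
    m = (c[:, None] * data).sum(axis=0) / c.sum() - x     # mean shift vector
    # Hessian of the kernel density estimate (up to a positive constant)
    H = (c[:, None, None] * d[:, :, None] * d[:, None, :]).sum(axis=0) / h ** 4 \
        - c.sum() * np.eye(2) / h ** 2
    vals, vecs = np.linalg.eigh(H)                        # eigenvalues in ascending order
    V = vecs[:, :1]                                       # direction orthogonal to the ridge
    return x + (V @ V.T) @ m                              # step constrained to that subspace

x = np.array([0.5, 0.2])
for _ in range(30):
    x = scms_step(x, data, h)
# x is now approximately the projection of the start point onto the ridge (y near 0)
```

Because the step is confined to the directions of most negative density curvature, the iterate slides onto the ridge instead of drifting along it toward a mode.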
Abstract:
The influence of some process variables on the productivity of the fractions (liquid yield times fraction percent) obtained from SCFE of a Brazilian mineral coal, using isopropanol and ethanol as primary solvents, is analyzed using statistical techniques. A full factorial 2³ experimental design was adopted to investigate the effects of the process variables (temperature, pressure and cosolvent concentration) on the extraction products. The extracts were analyzed by the Preparative Liquid Chromatography-8 fractions (PLC-8) method, a reliable, non-destructive solvent fractionation method developed especially for coal-derived liquids. Empirical statistical modeling was carried out in order to reproduce the experimental data; the correlations obtained were always greater than 0.98. Four specific process criteria were used to allow process optimization. The results show that it is not possible to maximize both extract productivity and purity (through minimization of the heavy-fraction content) simultaneously by manipulating the process variables mentioned.
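A full factorial 2³ design simply crosses the low and high coded levels of the three process variables, giving 8 runs. A minimal sketch (the factor names match the abstract; the coding is the conventional -1/+1):

```python
from itertools import product

# Coded levels for the three process variables of the full factorial 2^3 design:
# temperature, pressure and cosolvent concentration at low (-1) / high (+1).
factors = ["temperature", "pressure", "cosolvent"]
design = [dict(zip(factors, levels)) for levels in product((-1, +1), repeat=3)]

n_runs = len(design)   # 2^3 = 8 experimental runs
```

Each run's response (fraction productivity) would then be regressed on these coded levels to estimate main effects and interactions, which is what the empirical statistical modeling step does.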
Abstract:
New microbiological methods have been developed and commercialized, but their performance must be guaranteed. The aim of the present study was to evaluate the Petrifilm™ and TEMPO® systems against the conventional method for counting microorganisms in pasteurized milk. A total of 141 samples of pasteurized milk were analyzed by counting mesophilic aerobic microorganisms, coliforms at 35 ºC, coliforms at 45 ºC, and Escherichia coli. High correlation was found between the methods for counting coliforms at 35 ºC, but low correlation for counting mesophilic aerobic microorganisms, coliforms at 45 ºC, and Escherichia coli. No statistically significant difference was found among the three methods for counting coliforms at 35 ºC; however, the mean counts of mesophilic aerobic microorganisms, coliforms at 45 ºC, and Escherichia coli showed statistically significant differences. The Petrifilm™ and TEMPO® systems had satisfactory results for coliforms at 35 ºC in pasteurized milk but low performance for mesophilic aerobic microorganisms, coliforms at 45 ºC and Escherichia coli.
Abstract:
This study developed a gluten-free granola and evaluated it during storage, applying multivariate and regression analysis to the sensory and instrumental parameters. The physicochemical, sensory, and nutritional characteristics of a product containing quinoa, amaranth and linseed were evaluated. The crude protein and lipid contents were 97.49 and 122.72 g kg-1 of food, respectively; the polyunsaturated/saturated and n-6:n-3 fatty acid ratios were 2.82 and 2.59:1, respectively. The granola had the best alpha-linolenic acid content, nutritional indices in the lipid fraction, and mineral content. Hygienic and sanitary conditions remained good during storage, probably because the low water activity of the formulation helped inhibit microbial growth. The sensory attributes ranged from 'like very much' to 'like slightly', and the regression models were highly fitted and correlated over the storage period. A reduction in the sensory attribute levels and in the product's physical stability was verified by principal component analysis. The use of the affective acceptance test and instrumental analysis, combined with statistical methods, allowed us to obtain promising results about the characteristics of gluten-free granola.
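A principal component analysis of the kind used to follow the sensory and instrumental attributes over storage can be sketched via SVD of the mean-centred data matrix. The sample-by-attribute matrix below is simulated, not the study's measurements:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative matrix: 10 storage-time samples x 5 sensory/instrumental attributes.
# Values are simulated for demonstration, not the study's data.
X = rng.standard_normal((10, 5))
X[:, 0] = np.linspace(9, 6, 10) + 0.1 * rng.standard_normal(10)  # declining attribute

# Principal component analysis via SVD of the mean-centred matrix
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U * S                                  # sample coordinates on the PCs
explained = S ** 2 / np.sum(S ** 2)             # variance fraction per component
```

Plotting the score of each storage time on the first components is how a steady reduction in attribute levels, like the one reported, becomes visible as a trajectory.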
Abstract:
Four problems of physical interest have been solved in this thesis using the path integral formalism. Using the trigonometric expansion method of Burton and de Borde (1955), we found the kernel for two interacting one-dimensional oscillators. The result is the same as one would obtain using a normal coordinate transformation. We next introduced the method of Papadopoulos (1969), a systematic perturbation-type method specifically geared to finding the partition function Z, or equivalently the Helmholtz free energy F, of a system of interacting oscillators, and applied it to the three remaining problems. First, by summing the perturbation expansion, we found F for a system of N interacting Einstein oscillators; the result is the same as the usual one obtained by Shukla and Muller (1972). Next, we found F to O(λ²), where λ is the usual Van Hove ordering parameter. The results obtained are the same as those of Shukla and Cowley (1971), who used a diagrammatic procedure and did the necessary sums in Fourier space; we performed the work in temperature space. Finally, slightly modifying the method of Papadopoulos, we found the finite-temperature expressions for the Debye-Waller factor in Bravais lattices, to O(λ²) and to fourth order in the scattering vector K. The high-temperature limits of the expressions obtained here are in complete agreement with the classical results of Maradudin and Flinn (1963).