898 results for least absolute deviation (LAD) fitting
Abstract:
Different types of spin–spin coupling constants (SSCCs) for several representative small molecules are evaluated and analyzed using combinations of 10 exchange functionals with 12 correlation functionals. For comparison, calculations performed with MCSCF, SOPPA, and other common DFT methods, as well as experimental data, are considered. A detailed study of the percentage of Hartree–Fock exchange energy in SSCCs and in each of their four contributions is carried out. From this analysis, a hybrid functional combining local Slater exchange (34%) and Hartree–Fock exchange (66%) with the P86 correlation functional (S66P86) is proposed in this paper. The accuracy of the values obtained with this hybrid functional (mean absolute deviation of 4.5 Hz) is similar to that of the SOPPA method (mean absolute deviation of 4.6 Hz).
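For orientation, the composition of the proposed S66P86 functional, using only the percentages quoted above, can be written as the standard hybrid mixing expression:

\[ E_{xc}^{\mathrm{S66P86}} = 0.34\,E_{x}^{\mathrm{Slater}} + 0.66\,E_{x}^{\mathrm{HF}} + E_{c}^{\mathrm{P86}} \]

where E_x^Slater is the local Slater exchange energy, E_x^HF the exact Hartree–Fock exchange energy, and E_c^P86 the Perdew 1986 correlation energy.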
Abstract:
Master's thesis, Bioinformatics and Computational Biology (Bioinformatics), Universidade de Lisboa, Faculdade de Ciências, 2016
Abstract:
Correlation and regression are two of the statistical procedures most widely used by optometrists. However, these tests are often misused or interpreted incorrectly, leading to erroneous conclusions from clinical experiments. This review examines the major statistical tests concerned with correlation and regression that are most likely to arise in clinical investigations in optometry. First, the use, interpretation and limitations of Pearson's product moment correlation coefficient are described. Second, the least squares method of fitting a linear regression to data, and methods for testing how well a regression line fits the data, are described. Third, the problems of using linear regression methods in observational studies, when there are errors associated with measuring the independent variable and when predicting a new value of Y for a given X, are discussed. Finally, methods for testing whether a non-linear relationship provides a better fit to the data, and for comparing two or more regression lines, are considered.
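As a concrete illustration of the first two procedures reviewed (not taken from the review itself), a minimal Python sketch using SciPy, with hypothetical measurement vectors, is:

import numpy as np
from scipy import stats

# Hypothetical paired clinical measurements (X, Y)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

r, p = stats.pearsonr(x, y)       # Pearson's product moment correlation
fit = stats.linregress(x, y)      # least-squares line y = intercept + slope*x

print(f"r = {r:.3f} (p = {p:.4f})")
print(f"slope = {fit.slope:.3f}, intercept = {fit.intercept:.3f}")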
Abstract:
Feature selection is important in the medical field for many reasons. However, selecting important variables is a difficult task in the presence of censoring, a distinctive feature of survival data analysis. This paper proposes an approach that deals with the censoring problem in endovascular aortic repair survival data through Bayesian networks, merged and embedded with a hybrid feature selection process that combines Cox's univariate analysis with machine learning approaches, such as ensemble artificial neural networks, to select the most relevant predictive variables. The proposed algorithm was compared with common survival variable selection approaches, namely the least absolute shrinkage and selection operator (LASSO) and Akaike information criterion (AIC) methods. The results showed that it was capable of dealing with the high censoring in the datasets. Moreover, ensemble classifiers increased the area under the ROC curves of the two datasets, collected separately from two centers located in the United Kingdom. Furthermore, ensembles constructed with center 1 data enhanced the concordance index of center 2 predictions compared to a model built with a single network. Although the final reduced model obtained with the neural networks and their ensembles is larger than those of the other methods, it outperformed them in both concordance index and sensitivity for center 2 prediction. This indicates that the reduced model is more powerful for cross-center prediction.
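A schematic of the two-stage hybrid selection described above (univariate Cox screening followed by evaluation with an ensemble of neural networks) might look as follows in Python; the lifelines and scikit-learn calls are one plausible realization, not the authors' implementation, and the column names are assumptions:

from lifelines import CoxPHFitter
from sklearn.ensemble import BaggingClassifier
from sklearn.neural_network import MLPClassifier

def cox_univariate_filter(df, features, alpha=0.05):
    """Stage 1: keep features whose univariate Cox p-value is below alpha.
    df is assumed to carry 'time' and 'event' columns."""
    kept = []
    for f in features:
        cph = CoxPHFitter()
        cph.fit(df[[f, "time", "event"]], duration_col="time", event_col="event")
        if cph.summary.loc[f, "p"] < alpha:
            kept.append(f)
    return kept

def ensemble_of_networks(X, y, n_nets=10):
    """Stage 2: bagged artificial neural networks over the screened features."""
    return BaggingClassifier(MLPClassifier(max_iter=500),
                             n_estimators=n_nets).fit(X, y)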
Abstract:
This thesis studies survival analysis techniques dealing with censoring to produce predictive tools for the risk of endovascular aortic aneurysm repair (EVAR) re-intervention. Censoring means that some patients do not continue follow-up, so their outcome class is unknown. Existing methods for dealing with censoring have drawbacks and cannot handle the high censoring of the two EVAR datasets collected. This thesis therefore presents a new solution to high censoring by modifying an approach that was previously incapable of differentiating between risk groups of aortic complications. Feature selection (FS) becomes complicated under censoring. Most survival FS methods depend on Cox's model, whereas machine learning classifiers (MLC) are preferred here. Few methods have adopted MLC for survival FS, and those cannot be used with high censoring. This thesis proposes two FS methods that use MLC to evaluate features; both rely on the new solution to deal with censoring. They combine factor analysis with a greedy stepwise FS search that allows eliminated features to re-enter the FS process. The first FS method searches for the best neural network configuration and subset of features. The second combines support vector machine, neural network, and K nearest neighbour classifiers using simple and weighted majority voting to construct a multiple classifier system (MCS) that improves on the performance of the individual classifiers; it introduces a new hybrid FS process that uses the MCS as a wrapper method and merges it with an iterated feature-ranking filter method to further reduce the features. The proposed techniques outperformed FS methods based on Cox's model, namely the Akaike and Bayesian information criteria and the least absolute shrinkage and selection operator, in the log-rank test's p-values, sensitivity, and concordance. This indicates that the proposed techniques are more powerful in correctly predicting the risk of re-intervention, and so enable doctors to set an appropriate future observation plan for patients.
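The voting combination at the heart of the MCS can be sketched in a few lines of scikit-learn; this is a generic illustration of simple and weighted majority voting, not the thesis code, and the weights are placeholders:

from sklearn.ensemble import VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

base = [
    ("svm", SVC(probability=True)),
    ("ann", MLPClassifier(max_iter=500)),
    ("knn", KNeighborsClassifier(n_neighbors=5)),
]

simple_vote = VotingClassifier(estimators=base, voting="hard")
weighted_vote = VotingClassifier(estimators=base, voting="soft",
                                 weights=[2, 1, 1])  # assumed weights
# Usage: simple_vote.fit(X_train, y_train).predict(X_test)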
Abstract:
Correct specification of the simple location quotients used in regionalizing the national direct requirements table is essential to the accuracy of regional input-output multipliers. The purpose of this research is to examine the relative accuracy of these multipliers when earnings, employment, number of establishments, and payroll data specify the simple location quotients. For each specification type, I derive a column of total output multipliers and a column of total income multipliers. These multipliers are based on the 1987 benchmark input-output accounts of the U.S. economy and 1988-1992 state of Florida data. Error sign tests and Standardized Mean Absolute Deviation (SMAD) statistics indicate that the output multiplier estimates overestimate the output multipliers published by the Department of Commerce's Bureau of Economic Analysis (BEA) for the state of Florida. In contrast, the income multiplier estimates underestimate the BEA's income multipliers. For a given multiplier type, Spearman rank correlation analysis shows that the multiplier estimates and the BEA multipliers have statistically different rank orderings of row elements. The above tests also find no significant differences, in either size or ranking distributions, among the vectors of multiplier estimates.
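For reference, the simple location quotient underlying all four specifications is the standard ratio (written here generically; earnings, employment, establishments, or payroll substitute for x):

\[ \mathrm{SLQ}_i = \frac{x_i^{R} / x^{R}}{x_i^{N} / x^{N}} \]

where x_i^R is regional activity in industry i, x^R the regional total, and the N superscripts denote the national counterparts.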
Abstract:
Seagrass is expected to benefit from increased carbon availability under future ocean acidification. This hypothesis has been little tested by in situ manipulation. To test for ocean acidification effects on seagrass meadows under controlled CO2/pH conditions, we used a Free Ocean Carbon Dioxide Enrichment (FOCE) system, which allows the manipulation of pH as a continuous offset from ambient. It was deployed in a Posidonia oceanica meadow at 11 m depth in the Northwestern Mediterranean Sea. It consisted of two benthic enclosures, an experimental and a control unit (each 1.7 m³), and an additional reference plot in the ambient environment (2 m²) to account for structural artifacts. The meadow was monitored from April to November 2014. The pH of the experimental enclosure was lowered by 0.26 pH units for the second half of the 8-month study. The greatest changes in P. oceanica leaf biometrics, photosynthesis, and leaf growth accompanied seasonal changes recorded in the environment, and values were similar between the two enclosures. Leaf thickness may change in response to lower pH, but this requires further testing. The results are congruent with other short-term and natural studies that have investigated the response of P. oceanica over a wide range of pH. They suggest that any benefit from ocean acidification over the next century (at a pH of 7.7 on the total scale) on Posidonia physiology and growth may be minimal and difficult to detect without increased replication or longer experimental duration. The limited stimulation, which did not surpass enclosure or seasonal effects, casts doubt on speculations that elevated CO2 would confer resistance to thermal stress and increase the buffering capacity of meadows.
Abstract:
Reliable and fine-resolution estimates of surface net radiation are required for estimating latent and sensible heat fluxes between the land surface and the atmosphere. However, fine-resolution estimates of net radiation are currently unavailable, making it challenging to develop multi-year estimates of evapotranspiration at scales that capture land surface heterogeneity and are relevant for policy and decision-making. We developed and evaluated a global net-radiation product at 5 km and 8-day resolution by combining mutually consistent atmosphere and land data from the Moderate Resolution Imaging Spectroradiometer (MODIS) on board Terra. Comparison with net-radiation measurements from 154 globally distributed sites (414 site-years) from FLUXNET and the Surface Radiation Budget Network (SURFRAD) showed that the net-radiation product agreed well with measurements across seasons and climate types in the extratropics (Willmott's index ranged from 0.74 for boreal to 0.63 for Mediterranean sites). The mean absolute deviation between MODIS and measured net radiation ranged from 38.0 ± 1.8 W·m⁻² in boreal to 72.0 ± 4.1 W·m⁻² in tropical climates. The mean bias was small, constituting only 11%, 0.7%, 8.4%, 4.2%, 13.3%, and 5.4% of the mean absolute error in daytime net radiation in boreal, Mediterranean, temperate-continental, temperate, semi-arid, and tropical climates, respectively. To assess the accuracy of the broader spatiotemporal patterns, we upscaled the error-quantified MODIS net radiation and compared it with the coarse spatial (1° × 1°) but high temporal resolution gridded net-radiation product from the Clouds and the Earth's Radiant Energy System (CERES). Our estimates agreed closely with the CERES net-radiation estimates; the difference between the two was less than 10 W·m⁻² in 94% of the total land area. The MODIS net-radiation product will be a valuable resource for the science community studying turbulent fluxes and the energy budget at the Earth's surface.
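The two agreement measures quoted above follow standard definitions and can be computed in a few lines; this is a generic Python sketch, not the authors' processing code:

import numpy as np

def willmott_d(p, o):
    """Willmott's index of agreement: 0 (none) to 1 (perfect)."""
    o_bar = o.mean()
    return 1.0 - np.sum((p - o) ** 2) / np.sum(
        (np.abs(p - o_bar) + np.abs(o - o_bar)) ** 2)

def mean_absolute_deviation(p, o):
    """Mean absolute deviation, in the units of the inputs (W·m⁻² here)."""
    return np.mean(np.abs(p - o))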
Abstract:
The viscosity of ionic liquids (ILs) has been modeled as a function of temperature at atmospheric pressure using a new method based on the UNIFAC–VISCO approach. This model extends the calculations previously reported by our group (see Zhao et al. J. Chem. Eng. Data 2016, 61, 2160–2169), which used 154 experimental viscosity data points of 25 ionic liquids to regress a set of binary interaction parameters and ion Vogel–Fulcher–Tammann (VFT) parameters. Discrepancies among the experimental data for the same IL affect the quality of the correlation and thus the development of the predictive method. In this work, mathematical gnostics was used to analyze the experimental data from different sources and to recommend one set of reliable data for each IL. These recommended data (819 data points in total) for 70 ILs were correlated with this model to obtain an extended set of binary interaction parameters and ion VFT parameters, with a regression accuracy of 1.4%. In addition, 966 experimental viscosity data points for 11 binary mixtures of ILs were collected from the literature to parameterize and test the model for mixtures: 128 training data points were used to optimize binary interaction parameters, and 838 test data points were used for comparison with purely predicted values. The relative average absolute deviation (RAAD) is 2.9% for training and 3.9% for testing.
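For context, the ion VFT parameters mentioned above enter through the usual Vogel–Fulcher–Tammann temperature dependence of viscosity, quoted here in its standard textbook form rather than from the paper:

\[ \eta(T) = A \exp\!\left( \frac{B}{T - T_0} \right) \]

where A, B, and T_0 are the fitted VFT parameters for a given ion.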
Abstract:
BACKGROUND: The purpose of the present study was to investigate the diagnostic value of T2-mapping in acute myocarditis (ACM) and to define cut-off values for edema detection. METHODS: Cardiovascular magnetic resonance (CMR) data of 31 patients with ACM were retrospectively analyzed. Thirty healthy volunteers (HV) served as controls. In addition to the routine CMR protocol, T2-mapping data were acquired at 1.5 T using a breathhold gradient-spin-echo T2-mapping sequence in six short-axis slices. T2-maps were segmented according to the 16-segment AHA model, and segmental T2 values as well as the segmental pixel standard deviation (SD) were analyzed. RESULTS: Mean differences in global myocardial T2 or pixel-SD between HV and ACM patients were small, lying within the normal range of HV. In contrast, the variation of segmental T2 values and pixel-SD was much larger in ACM patients than in HV. In random forests and multiple logistic regression analyses, the combination of the highest segmental T2 value within each patient (maxT2) and the mean absolute deviation (MAD) of the log-transformed pixel-SD (madSD) over all 16 segments within each patient proved to be the best discriminator between HV and ACM patients, with an AUC of 0.85 in ROC analysis. In classification trees, a combined cut-off of 0.22 for madSD and of 68 ms for maxT2 resulted in 83% specificity and 81% sensitivity for the detection of ACM. CONCLUSIONS: The proposed cut-off values for maxT2 and madSD allow edema detection in ACM with high sensitivity and specificity and therefore have the potential to overcome the hurdles of T2-mapping for its integration into clinical routine.
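The two discriminators can be restated in code; the sketch below assumes per-patient arrays of 16 segmental T2 values and pixel-SDs, and since the abstract does not spell out how the classification tree combines the two cut-offs, requiring both to be exceeded is an assumption:

import numpy as np

def acm_flag(seg_t2, seg_pixel_sd, cut_maxT2=68.0, cut_madSD=0.22):
    """Restates the published cut-offs; the combination logic is assumed."""
    max_t2 = np.max(seg_t2)                           # maxT2, in ms
    log_sd = np.log(seg_pixel_sd)
    mad_sd = np.mean(np.abs(log_sd - log_sd.mean()))  # madSD
    return (max_t2 > cut_maxT2) and (mad_sd > cut_madSD)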
Abstract:
Master's dissertation, Universidade de Brasília, Faculdade de Tecnologia, 2016.
Abstract:
There is a need to identify factors that influence health in old age and to develop interventions that could slow the process of aging and its associated pathologies. Lifestyle modifications, and especially nutrition, appear to be promising strategies for promoting healthy aging, but their impact on aging biomarkers has been poorly investigated. In the first part of this work, we evaluated the impact of a one-year Mediterranean-like diet, delivered within the framework of the NU-AGE project to 120 elderly subjects, on epigenetic age acceleration measures assessed with Horvath's clock. We observed a rejuvenation of participants after the nutritional intervention. The effect was more marked in the group of Polish females and in subjects who were epigenetically older at baseline. In the second part of this work, we developed a new epigenetic biomarker model based on a gene-targeted approach with the EpiTYPER® system. We selected six regions of interest (associated with the ELOVL2, NHLRC1, SIRT7/MAFG, AIM2, EDARADD and TFAP2E genes) and constructed our model through ridge regression analysis. In controls, estimation of chronological age was accurate, with a correlation coefficient between predicted and chronological age of 0.92 and a mean absolute deviation of 4.70 years. Our model was able to capture accelerated and decelerated aging, in Down syndrome subjects and in centenarians and their offspring, respectively. Applying our model to samples from the NU-AGE project, we observed results similar to those obtained with the canonical epigenetic clock, with a rejuvenation of the individuals after one year of nutritional intervention. Together, our findings indicate that nutrition can promote epigenetic rejuvenation and that epigenetic age acceleration measures could be suitable biomarkers to evaluate its impact. We demonstrated that the effect of the dietary intervention is country-, sex- and individual-specific, suggesting the need for a personalized approach to nutritional interventions.
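A schematic of the modelling step described above (ridge regression from targeted methylation values to chronological age) could look like the following Python sketch; the methylation matrix, its dimensions, and the penalty grid are placeholders for illustration:

import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
X = rng.random((100, 24))        # placeholder: CpG methylation fractions
age = rng.uniform(20, 90, 100)   # placeholder: chronological ages

model = RidgeCV(alphas=np.logspace(-3, 3, 13)).fit(X, age)
pred = model.predict(X)

print("r =", np.corrcoef(pred, age)[0, 1])
print("MAD (years) =", mean_absolute_error(pred, age))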
Abstract:
Some aspects of the application of electrochemical impedance spectroscopy to studies of the solid electrode/solution interface, in the absence of faradaic processes, are analysed. For this analysis, gold electrodes with (111) and (210) crystallographic orientations were used in an aqueous solution containing 10 mmol dm⁻³ KF as supporting electrolyte and a pyridine concentration varying from 0.01 to 4.6 mmol dm⁻³. The experimental data were analysed using the EQUIVCRT software, which employs non-linear least-squares routines, attributing to the solid electrode/solution interface a behaviour described by an equivalent circuit with a resistance in series with a constant phase element. The results of this fitting procedure were analysed through the dependence of two parameters on the electrode potential: the pre-exponential factor, Y0, and the exponent n, related to the phase angle shift. This analysis showed that pyridine adsorption is strongly affected by the crystallographic orientation of the electrode surface and that the extent of the deviation from ideal capacitive behaviour is mainly of interfacial origin.
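The two fitted parameters appear in the standard impedance expression for a constant phase element, quoted here in textbook form for orientation:

\[ Z_{\mathrm{CPE}}(\omega) = \frac{1}{Y_0 (j\omega)^{n}} \]

with n = 1 recovering an ideal capacitor (Y_0 = C), so that departures of n from unity quantify the deviation from ideal capacitive behaviour discussed above.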