882 results for the least squares distance method
Abstract:
The cultivation of dessert apples has to meet consumers' increasing demand for high fruit quality and a sustainable, largely residue-free production, while ensuring competitive agricultural productivity. It is therefore of great interest to know the impact of different cultivation methods on fruit quality and chemical composition. Previous studies have demonstrated the feasibility of High Resolution Magic Angle Spinning (HR-MAS) NMR spectroscopy performed directly on apple tissue as an analytical tool for metabonomic studies. In this study, HR-MAS NMR spectroscopy is applied to apple tissue to analyze the metabolic profiles of apples grown under three different cultivation methods. Golden Delicious apples were grown applying organic (Bio), integrated (IP) and low-input (LI) plant protection strategies. A total of 70 ¹H HR-MAS NMR spectra were analyzed by means of principal component analysis (PCA) and partial least squares discriminant analysis (PLS-DA). Apples derived from Bio production could be well separated from the two other cultivation methods by both PCA and PLS-DA. Apples obtained from integrated (IP) and low-input (LI) production could be discriminated when the third PLS component was taken into account. The identified chemical composition and the compounds responsible for the separation, i.e. the PLS loadings, are discussed. The results are compared with fruit quality parameters assessed by conventional methods. The present study demonstrates the potential of HR-MAS NMR spectroscopy of fruit tissue as an analytical tool for finding markers of specific fruit production conditions such as the cultivation method.
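The PCA step described in the abstract can be sketched with a singular value decomposition; this is a generic illustration on invented data, not the study's 70 NMR spectra:

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for 70 spectra x 200 spectral bins (invented data)
X = rng.normal(size=(70, 200))
X[:35] += 0.5                        # pretend one group has a shifted baseline

Xc = X - X.mean(axis=0)              # mean-center each spectral variable
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U * s                       # PCA scores (samples x components)
explained = s**2 / np.sum(s**2)      # variance fraction per component
```

Group separation would then be inspected in the first few columns of `scores`; PLS-DA differs in that it additionally uses the class labels when building the components.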
Abstract:
(1) A mathematical theory for computing the probabilities of various nucleotide configurations is developed, and the probability of obtaining the correct phylogenetic tree (model tree) from sequence data is evaluated for six phylogenetic tree-making methods (UPGMA, distance Wagner method, transformed distance method, Fitch-Margoliash's method, maximum parsimony method, and compatibility method). The number of nucleotides (m*) necessary to obtain the correct tree with a probability of 95% is estimated with special reference to the human, chimpanzee, and gorilla divergence. m* is at least 4,200, but the availability of outgroup species greatly reduces m* for all methods except UPGMA. m* increases if transitions occur more frequently than transversions, as in the case of mitochondrial DNA. (2) A new tree-making method called the neighbor-joining method is proposed. This method is applicable either to distance data or to character state data. Computer simulation has shown that the neighbor-joining method is generally better than UPGMA, Farris' method, Li's method, and the modified Farris method in recovering the true topology when distance data are used. A related method, the simultaneous partitioning method, is also discussed. (3) The maximum likelihood (ML) method for phylogeny reconstruction under the assumption of both constant and varying evolutionary rates is studied, and a new algorithm for obtaining the ML tree is presented. This method gives a tree similar to that obtained by UPGMA when a constant evolutionary rate is assumed, whereas it gives a tree similar to those obtained by the maximum parsimony method and the neighbor-joining method when varying evolutionary rates are assumed.
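The pair-selection step of the neighbor-joining method proposed in (2) can be sketched as follows: at each iteration the pair minimizing Q(i,j) = (n-2)·d(i,j) - Σ_k d(i,k) - Σ_k d(j,k) is joined. The 5-taxon distance matrix below is a standard textbook example, not data from this study:

```python
import numpy as np

def q_matrix(d):
    """Neighbor-joining Q criterion: Q[i,j] = (n-2)*d[i,j] - S[i] - S[j]."""
    n = d.shape[0]
    s = d.sum(axis=1)                 # row sums S[i]
    q = (n - 2) * d - s[:, None] - s[None, :]
    np.fill_diagonal(q, np.inf)       # ignore self-pairs
    return q

# Classic 5-taxon example distance matrix (taxa a..e)
d = np.array([[0, 5, 9, 9, 8],
              [5, 0, 10, 10, 9],
              [9, 10, 0, 8, 7],
              [9, 10, 8, 0, 3],
              [8, 9, 7, 3, 0]], float)

q = q_matrix(d)
i, j = np.unravel_index(np.argmin(q), q.shape)
# taxa 0 and 1 (a, b) minimize Q and are joined first; Q[a,b] = -50
```

The full algorithm then replaces the joined pair by a new node, recomputes distances, and repeats until the tree is resolved.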
Abstract:
The direct Bayesian admissible region approach is an a priori state-free measurement association and initial orbit determination technique for optical tracks. In this paper, we test a hybrid approach that appends a least squares estimator to the direct Bayesian method, on measurements taken at the Zimmerwald Observatory of the Astronomical Institute at the University of Bern. Over half of the association pairs agreed with conventional geometric track correlation and least squares techniques. The remaining pairs shed light on the fundamental limits of conducting tracklet association based solely on dynamical and geometrical information.
Abstract:
OBJECT The authors developed a new mapping technique to overcome the temporal and spatial limitations of classic subcortical mapping of the corticospinal tract (CST). The feasibility and safety of continuous (0.4-2 Hz) and dynamic (at the site of and synchronized with tissue resection) subcortical motor mapping was evaluated. METHODS The authors prospectively studied 69 patients who underwent tumor surgery adjacent to the CST (< 1 cm using diffusion tensor imaging and fiber tracking) with simultaneous subcortical monopolar motor mapping (short train, interstimulus interval 4 msec, pulse duration 500 μsec) and a new acoustic motor evoked potential alarm. Continuous (temporal coverage) and dynamic (spatial coverage) mapping was technically realized by integrating the mapping probe at the tip of a new suction device, with the concept that this device will be in contact with the tissue where the resection is performed. Motor function was assessed 1 day after surgery, at discharge, and at 3 months. RESULTS All procedures were technically successful. There was a 1:1 correlation of motor thresholds for stimulation sites simultaneously mapped with the new suction mapping device and the classic fingerstick probe (24 patients, 74 stimulation points; r² = 0.98, p < 0.001). The lowest individual motor thresholds were as follows: > 20 mA, 7 patients; 11-20 mA, 13 patients; 6-10 mA, 8 patients; 4-5 mA, 17 patients; and 1-3 mA, 24 patients. At 3 months, 2 patients (3%) had a persistent postoperative motor deficit, both caused by a vascular injury. No patient had a permanent motor deficit caused by a mechanical injury of the CST. CONCLUSIONS Continuous dynamic mapping was found to be a feasible and ergonomic technique for localizing the exact site of the CST and the distance to the motor fibers.
The acoustic feedback and the ability to stimulate the tissue continuously, exactly at the site of tissue removal, improve the accuracy of mapping, especially at low (< 5 mA) stimulation intensities. This new technique may increase the safety of motor-eloquent tumor surgery.
Abstract:
With most clinical trials, missing data presents a statistical problem in evaluating a treatment's efficacy. There are many methods commonly used to handle missing data; however, these methods leave room for bias to enter the study. This thesis was a secondary analysis of data taken from TIME, a phase 2 randomized clinical trial conducted to evaluate the safety and effect of the administration timing of bone marrow mononuclear cells (BMMNC) for subjects with acute myocardial infarction (AMI). We evaluated the effect of missing data by comparing the variance inflation factor (VIF) of the effect of therapy between all subjects and only subjects with complete data. Through the general linear model, an unbiased solution was obtained for the VIF of the treatment's efficacy using the weighted least squares method to incorporate missing data. Two groups were identified from the TIME data: 1) all subjects and 2) subjects with complete data (baseline and follow-up measurements). After the general solution was found for the VIF, it was migrated to Excel 2010 to evaluate data from TIME. The resulting numerical values from the two groups were compared to assess the effect of missing data. The VIF values from the TIME study were considerably lower in the group with missing data. By design, we varied the correlation factor in order to evaluate the VIFs of both groups. As the correlation factor increased, the VIF values increased at a faster rate in the group with only complete data. Furthermore, while varying the correlation factor, the number of subjects with missing data was also varied to see how missing data affects the VIF. When the number of subjects with only baseline data was increased, we saw a significant rate increase in VIF values in the group with only complete data, while the group with missing data saw a steady and consistent increase in the VIF. The same was seen when we varied the group with follow-up-only data.
This essentially showed that the VIFs increase steadily when missing data are not ignored. When missing data are ignored, as with our comparison group, the VIF values increase sharply as correlation increases.
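The variance inflation factor at the center of this comparison can be illustrated with a generic numpy sketch; the data and the common 1/(1 - R²) formulation below are illustrative assumptions, not the TIME data or the thesis's weighted least squares derivation:

```python
import numpy as np

def vif(X, j):
    """Variance inflation factor of column j of design matrix X,
    computed as 1/(1 - R^2) from regressing X[:, j] on the other columns."""
    y = X[:, j]
    Z = np.delete(X, j, axis=1)
    Z = np.column_stack([np.ones(len(Z)), Z])      # add intercept
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    resid = y - Z @ beta
    r2 = 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
    return 1.0 / (1.0 - r2)

rng = np.random.default_rng(1)
x1 = rng.normal(size=200)
x2 = 0.9 * x1 + 0.1 * rng.normal(size=200)   # strongly correlated with x1
X = np.column_stack([x1, x2, rng.normal(size=200)])
# vif(X, 0) is large (collinearity); vif(X, 2) is near 1
```

Higher correlation among predictors inflates the VIF, which mirrors the behavior the thesis reports as the correlation factor is varied.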
Abstract:
A new technique for the harmonic analysis of current observations is described. It consists of applying a linear band-pass filter which separates the various species and removes the contribution of non-tidal effects at intertidal frequencies. The tidal constituents are then evaluated by the method of least squares. In spite of the narrowness of the filter, only three days of data are lost through the filtering procedure, and the only requirement on the data is that the time interval between samples be an integer fraction of one day. This technique is illustrated through the analysis of a few French current observations from the English Channel within the framework of INOUT. The characteristics of the main tidal constituents are given.
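The least-squares evaluation of tidal constituents can be sketched as a linear fit of sine and cosine terms at known constituent frequencies; the series below is synthetic, and the band-pass filtering step is omitted:

```python
import numpy as np

# Fit amplitudes/phases of known tidal constituents by linear least squares:
# u(t) ~ mean + sum_k [a_k cos(w_k t) + b_k sin(w_k t)]
t = np.arange(0, 30 * 24, 1.0)               # 30 days of hourly samples (hours)
w = 2 * np.pi / np.array([12.42, 12.00])     # M2 and S2 periods in hours

# Synthetic "observations" with known constituents plus noise (invented)
rng = np.random.default_rng(2)
u = 1.5 * np.cos(w[0] * t - 0.7) + 0.4 * np.cos(w[1] * t - 1.2)
u += 0.05 * rng.normal(size=t.size)

# Design matrix: [1, cos(w0 t), sin(w0 t), cos(w1 t), sin(w1 t)]
A = np.column_stack([np.ones_like(t)] +
                    [f(wk * t) for wk in w for f in (np.cos, np.sin)])
coef, *_ = np.linalg.lstsq(A, u, rcond=None)
amp_M2 = np.hypot(coef[1], coef[2])          # recovered M2 amplitude, ~1.5
```

The amplitude and phase of each constituent follow from the paired cosine/sine coefficients; 30 days of hourly data are enough here to separate M2 from S2.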
Abstract:
In recent decades, meshless methods (MMs), like the element-free Galerkin method (EFGM), have been widely studied, and interesting results have been obtained when solving partial differential equations. However, such solutions show a problem near the boundary conditions, where adequate accuracy is not achieved. This is caused by the use of moving least squares or reproducing kernel particle methods to obtain the shape functions needed in MMs, since such methods are accurate enough in the interior of the integration domains, but not at their boundaries. Bernstein curves, which themselves form a partition of unity, can solve this problem with the same accuracy in the interior of the domain and at its boundaries.
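The partition-of-unity property of the Bernstein basis, on which the proposed remedy relies, is easy to verify numerically (a minimal sketch):

```python
from math import comb

def bernstein(n, i, x):
    """i-th Bernstein basis polynomial of degree n on [0, 1]."""
    return comb(n, i) * x**i * (1 - x)**(n - i)

# Partition of unity: the degree-n basis sums to 1 everywhere on [0, 1],
# at interior points and at the boundary alike
n = 5
totals = [sum(bernstein(n, i, x) for i in range(n + 1))
          for x in (0.0, 0.25, 0.5, 1.0)]
# every entry of totals equals 1.0
```

Unlike moving least squares shape functions, whose accuracy degrades near the boundary, this sum is exactly 1 at x = 0 and x = 1 as well.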
Abstract:
The present research focuses on the application of hyperspectral images to monitoring quality deterioration in ready-to-use leafy spinach (Spinacia oleracea) during storage. Two sets of samples of packed leafy spinach were considered: (a) a first set of samples was stored at 20 °C (E-20) in order to accelerate the degradation process; these samples were measured on the day of reception in the laboratory and after 2 days of storage; (b) a second set of samples was kept at 10 °C (E-10), and measurements were taken throughout storage, beginning on the day of reception and repeating the image acquisition 3, 6 and 9 days later. Twenty leaves per test were analyzed. Hyperspectral images were acquired with a push-broom CCD camera equipped with a VNIR spectrograph (400-1000 nm). A calibration set of spectra was extracted from the E-20 samples, containing three classes of degradation: class A (optimal quality), class B, and class C (maximum deterioration). Reference average spectra were defined for each class. Three models, computed on the calibration set, with decreasing degrees of complexity, were compared according to their ability to segregate leaves at different quality stages (fresh; with incipient, non-visible symptoms of degradation; and degraded): spectral angle mapper distance (SAM), partial least squares discriminant analysis (PLS-DA), and a non-linear index (Leafy Vegetable Evolution, LEVE) combining five wavelengths from among those previously selected by the CovSel procedure. In sets E-10 and E-20, artificial images of the membership degree, based on the distance of each pixel to the reference classes, were computed by assigning each pixel to the closest reference class. The three methods were able to show the degradation of the leaves with storage time.
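The spectral angle mapper classification by closest reference class can be sketched as follows; the three-band reference spectra are invented, not the study's calibration set:

```python
import numpy as np

def sam(spectrum, reference):
    """Spectral angle mapper: angle (radians) between two spectra,
    insensitive to overall intensity scaling."""
    cos = np.dot(spectrum, reference) / (
        np.linalg.norm(spectrum) * np.linalg.norm(reference))
    return np.arccos(np.clip(cos, -1.0, 1.0))

# Invented reference spectra for three degradation classes A/B/C
refs = {"A": np.array([0.9, 0.8, 0.2]),
        "B": np.array([0.6, 0.6, 0.4]),
        "C": np.array([0.3, 0.4, 0.7])}
pixel = np.array([1.8, 1.6, 0.4])    # a scaled version of class A's shape

label = min(refs, key=lambda c: sam(pixel, refs[c]))   # -> "A"
```

Because SAM compares spectral shape rather than magnitude, the pixel is assigned to class A even though its overall intensity differs from the reference.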
Abstract:
In the present work we report theoretical Stark widths and shifts, calculated using the Griem semi-empirical approach, for 237 spectral lines of Mg III. Data are presented for an electron density of 10¹⁷ cm⁻³ and temperatures T = 0.5–10.0 × 10⁴ K. The matrix elements used in these calculations have been determined from 23 configurations of Mg III: 2s²2p⁶, 2s²2p⁵3p, 2s²2p⁵4p, 2s²2p⁵4f and 2s²2p⁵5f for even parity, and 2s²2p⁵ns (n = 3–6), 2s²2p⁵nd (n = 3–9), 2s²2p⁵5g and 2s2p⁶np (n = 3–8) for odd parity. For the intermediate coupling (IC) calculations, we use the standard method of least-squares fitting from experimental energy levels by means of the Cowan computer code. In order to test the matrix elements used in our calculations, we also present calculated values of 70 transition probabilities of Mg III spectral lines and 14 calculated values of radiative lifetimes of Mg III levels. There is good agreement between our calculations and the experimental radiative lifetimes. Spectral lines of Mg III are relevant in astrophysics and also play an important role in the spectral analysis of laboratory plasmas. Theoretical trends of the Stark broadening parameters versus temperature for relevant lines are presented. No previously published values of the Stark parameters could be found for comparison.
Abstract:
The impact of Parkinson's disease and its treatment on patients' health-related quality of life can be estimated either by generic measures such as the European Quality of Life-5 Dimensions (EQ-5D) or by specific measures such as the 8-item Parkinson's Disease Questionnaire (PDQ-8). In clinical studies, PDQ-8 may be used instead of EQ-5D owing to a lack of resources, time, or clinical interest in generic measures. Nevertheless, PDQ-8 cannot be applied in cost-effectiveness analyses, which require generic measures with quantitative utility scores such as EQ-5D. To deal with this problem, a commonly used solution is to predict EQ-5D from PDQ-8. In this paper, we propose a new probabilistic method to predict EQ-5D from PDQ-8 using multi-dimensional Bayesian network classifiers. Our approach is evaluated using five-fold cross-validation experiments carried out on a Parkinson's data set containing 488 patients, and is compared with two additional Bayesian network-based approaches; two commonly used mapping methods, namely ordinary least squares and censored least absolute deviations; and a deterministic model. Experimental results are promising in terms of predictive performance as well as the identification of dependence relationships among EQ-5D and PDQ-8 items that the mapping approaches are unable to detect.
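The ordinary least squares mapping used as one of the paper's baselines can be sketched generically; the item weights, scores and utilities below are invented, not the 488-patient data set:

```python
import numpy as np

# OLS mapping from PDQ-8 item scores to an EQ-5D utility, fit on
# invented data (weights and noise level are illustrative assumptions)
rng = np.random.default_rng(3)
pdq8 = rng.integers(0, 5, size=(100, 8)).astype(float)   # 8 items, 0-4 scale
true_w = np.array([-.02, -.03, -.01, -.04, -.02, -.01, -.03, -.02])
eq5d = 0.95 + pdq8 @ true_w + 0.01 * rng.normal(size=100)

X = np.column_stack([np.ones(100), pdq8])                # intercept + items
beta, *_ = np.linalg.lstsq(X, eq5d, rcond=None)
predicted = X @ beta                                     # mapped utilities
```

Such a mapping yields a single point prediction per patient; the paper's multi-dimensional Bayesian network classifiers instead model the joint distribution of the EQ-5D items, which is what lets them expose inter-item dependencies.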
Abstract:
The Jones-Wilkins-Lee (JWL) equation of state parameters for ANFO and emulsion-type explosives have been obtained from cylinder test expansion measurements. The calculation method comprises a new radial expansion function, with a non-zero initial velocity at the onset of the expansion in order to comply with a positive Gurney energy at unit relative volume, as the isentropic expansion from the CJ state predicts. The equations reflecting the CJ state conditions and the measured expansion energy were solved for the JWL parameters by a non-linear least squares scheme. The JWL parameters of thirteen ANFO and emulsion-type explosives have been determined in this way from their cylinder test expansion data. The results were evaluated through numerical modelling of the tests with the LS-DYNA hydrocode; the expansion histories from the modelling were compared with the measured ones, and excellent agreement was found.
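The abstract does not specify the non-linear least squares scheme used, so as a hedged illustration only, a generic Gauss-Newton iteration is sketched on a toy exponential model (not the JWL equations themselves):

```python
import numpy as np

def gauss_newton(resid, jac, p0, tol=1e-12, maxit=50):
    """Gauss-Newton for nonlinear least squares: solve J dp = -r at each
    step (via lstsq) and update p <- p + dp until the step is tiny."""
    p = np.asarray(p0, float)
    for _ in range(maxit):
        r, J = resid(p), jac(p)
        dp, *_ = np.linalg.lstsq(J, -r, rcond=None)
        p = p + dp
        if np.linalg.norm(dp) < tol:
            break
    return p

# Toy two-parameter model y = a*exp(-b*x), standing in for a pressure-volume fit
x = np.linspace(0, 4, 40)
y = 3.0 * np.exp(-1.5 * x)                      # noiseless synthetic data
resid = lambda p: p[0] * np.exp(-p[1] * x) - y  # residual vector
jac = lambda p: np.column_stack([np.exp(-p[1] * x),
                                 -p[0] * x * np.exp(-p[1] * x)])
p = gauss_newton(resid, jac, [2.0, 1.0])        # converges to ~(3.0, 1.5)
```

In the paper's setting the residuals would instead encode the CJ state conditions and the measured expansion energy, with the JWL coefficients as the unknowns.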
Abstract:
The present thesis is focused on the development of a thorough mathematical modelling and computational solution framework aimed at the numerical simulation of journal and sliding bearing systems operating under a wide range of lubrication regimes (mixed, elastohydrodynamic and full-film lubrication) and working conditions (static, quasi-static and transient). The fluid flow effects have been considered in terms of the Isothermal Generalized Equation of the Mechanics of Viscous Thin Films (Reynolds equation), along with the mass-conserving p-θ Elrod-Adams cavitation model, which ensures the so-called JFO complementary boundary conditions for fluid film rupture. The variation of the lubricant rheological properties due to the viscosity-pressure (Barus and Roelands equations), viscosity-shear-thinning (Eyring and Carreau-Yasuda equations) and density-pressure (Dowson-Higginson equation) relationships has also been taken into account in the overall modelling. Generic models have been derived for the aforementioned bearing components in order to enable their application in general multibody dynamic systems (MDS), including the effects of angular misalignments, superficial geometric defects (form/waviness deviations, EHL deformations, etc.) and axial motion. The bearing flexibility (conformal EHL) has been incorporated by means of FEM model reduction (or condensation) techniques. The macroscopic influence of the mixed-lubrication phenomena has been included in the modelling through the stochastic Patir and Cheng average flow model and the Greenwood-Williamson/Greenwood-Tripp formulations for rough contacts. Furthermore, a deterministic mixed-lubrication model with inter-asperity cavitation has also been proposed for full-scale simulations at the microscopic (roughness) level. Building on this extensive mathematical modelling background, three significant contributions have been accomplished.
Firstly, a general numerical solution for the Reynolds lubrication equation with the mass-conserving p-θ cavitation model has been developed based on the hybrid-type Element-Based Finite Volume Method (EbFVM). This new solution scheme allows lubrication problems with complex geometries to be discretized by unstructured grids. The numerical method was validated against several example cases from the literature, and further used in numerical experiments to explore its flexibility in coping with irregular meshes for reducing the number of nodes required in the solution of textured sliding bearings. Secondly, novel robust partitioned techniques, namely the Fixed Point Gauss-Seidel Method (PGMF), the Point Gauss-Seidel Method with Aitken Acceleration (PGMA) and the Interface Quasi-Newton Method with Inverse Jacobian from Least-Squares approximation (IQN-ILS), commonly adopted for solving fluid-structure interaction problems, have been introduced in the context of tribological simulations, particularly for the coupled calculation of dynamic conformal EHL contacts. The performance of these partitioned methods was evaluated through simulations of dynamically loaded connecting-rod big-end bearings of both heavy-duty and high-speed engines. Finally, the proposed deterministic mixed-lubrication modelling was applied to investigate the influence of cylinder liner wear after a 100 h dynamometer engine test on the hydrodynamic pressure generation and friction of Twin-Land Oil Control Rings.
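The Aitken-accelerated fixed-point coupling mentioned above (PGMA) can be sketched in scalar form with the standard dynamic relaxation update; this is a generic illustration, not the thesis's EHL coupling code:

```python
import numpy as np

def aitken_fixed_point(g, x0, omega0=0.5, tol=1e-10, maxit=100):
    """Fixed-point iteration x <- x + omega*(g(x) - x) with dynamic
    Aitken relaxation of the factor omega (scalar version)."""
    x, omega = x0, omega0
    r_old = None
    for _ in range(maxit):
        r = g(x) - x                   # residual of the coupling iteration
        if abs(r) < tol:
            return x
        if r_old is not None and r != r_old:
            omega = -omega * r_old / (r - r_old)   # Aitken update
        x, r_old = x + omega * r, r
    return x

# Example: solve the fixed-point problem x = cos(x)
root = aitken_fixed_point(np.cos, 1.0)
```

In a partitioned EHL (or fluid-structure) coupling, `g` would be the composition of the fluid and structure solvers, and the same update is applied component-wise or via the IQN-ILS least-squares generalization.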
Abstract:
From the Introduction. The aim of the present "letter" is to provoke, rather than to prove. It is intended to further stimulate the already well-engaged scientific dialogue on the open method of coordination (OMC).1 This explains why some of the arguments put forward are not entirely new, while others are overstretched. This contribution, belated as it is in entering the debate, has the benefit of some hindsight. This hindsight is based on three factors (in chronological order): a) the fact that the author has himself participated, as a member of a national delegation, in one of the OMC-induced benchmarking exercises (only to see the final evaluation report get lost in the labyrinth of the national bureaucracy, despite the fact that it contained an overall favorable assessment), as well as in an OECD-led coordination exercise concerning regulatory reform; b) the extremely rich and knowledgeable academic input, offering a very promising theoretical background for the OMC; and c) some recent empirical research on the efficiency of the OMC, the accounts of which are, to say the least, ambiguous. This recent empirical research grounds the basic assumption of the present paper: that the OMC has only restricted, if not negligible, direct effects in the short term, while it may have some indirect effects in the medium to long term (2). On the basis of this assumption, a series of arguments against the current "spread" of the OMC will be put forward (3). Some proposals on how to neutralize some of the shortfalls of the OMC will follow (4).
Abstract:
Maximum entropy spectral analyses and a fitting test to find the best-suited curve for the modified time series, based on the non-linear least squares method, were performed on Td (diatom temperature) values for the Quaternary portions of DSDP Sites 579 and 580 in the western North Pacific. The sampling interval averages 13.7 kyr in the Brunhes Chron (0-780 ka) and 16.5 kyr in the later portion of the Matuyama Chron (780-1800 ka) at Site 580, but increases to 17.3 kyr and 23.2 kyr, respectively, at Site 579. Among the dominant cycles during the Brunhes Chron, those of 411.5 kyr and 126.0 kyr at Site 579, and of 467.0 kyr and 136.7 kyr at Site 580, correspond to the 413 kyr and 95-124 kyr periods of the orbital eccentricity. Minor cycles of 41.2 kyr at Site 579 and 41.7 kyr at Site 580 are close to the 41 kyr obliquity (tilt) period. During the Matuyama Chron at Site 580, cycles of 49.7 kyr and 43.6 kyr are dominant. The surface-water temperature estimated from diatoms at the western North Pacific DSDP Sites 579 and 580 shows correlation with the fundamental Earth orbital parameters during Quaternary time.
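How such dominant periods are picked out of an evenly sampled series can be illustrated with a plain FFT periodogram (a simpler stand-in for the maximum entropy analysis; the data below are synthetic, with an exact 126 kyr cycle and an even 14 kyr sampling step close to the average spacings quoted above):

```python
import numpy as np

dt = 14.0                                  # sampling interval, kyr (invented)
t = np.arange(0, 1008, dt)                 # 72 samples spanning ~1 Myr
x = np.sin(2 * np.pi * t / 126.0)          # a 126 kyr "eccentricity" cycle
x = x - x.mean()                           # remove the mean (DC) component

freqs = np.fft.rfftfreq(t.size, d=dt)      # cycles per kyr
power = np.abs(np.fft.rfft(x))**2          # periodogram
peak_period = 1.0 / freqs[np.argmax(power[1:]) + 1]   # dominant period, kyr
```

With real Td series the spectrum is noisy and unevenly resolved, which is why the study relies on maximum entropy estimation rather than a raw periodogram.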
Abstract:
Includes bibliographical references (p. 58-59)