49 results for Linear multivariate methods
in CentAUR: Central Archive, University of Reading - UK
Abstract:
The current energy requirements system used in the United Kingdom for lactating dairy cows utilizes key parameters such as metabolizable energy intake (MEI) at maintenance (MEm), the efficiency of utilization of MEI for maintenance, for milk production (k(l)), and for growth (k(g)), and the efficiency of utilization of body stores for milk production (k(t)). Traditionally, these have been determined using linear regression methods to analyze energy balance data from calorimetry experiments. Many studies have highlighted concerns over current energy feeding systems, particularly in relation to these key parameters and the linear models used to analyze them. Therefore, a database containing 652 dairy cow observations was assembled from calorimetry studies in the United Kingdom. Five functions for analyzing energy balance data were considered: a straight line, two diminishing-returns functions (the Mitscherlich and the rectangular hyperbola), and two sigmoidal functions (the logistic and the Gompertz). Meta-analysis of the data was conducted to estimate k(g) and k(t). Values of 0.83 to 0.86 for k(g) and 0.66 to 0.69 for k(t) were obtained using all the functions (with standard errors of 0.028 and 0.027, respectively), which differ considerably from previous reports of 0.60 to 0.75 for k(g) and 0.82 to 0.84 for k(t). Using the estimated values of k(g) and k(t), the data were corrected to allow for body tissue changes. Based on the definition of k(l) as the derivative of the ratio of milk energy derived from MEI to MEI directed towards milk production, MEm and k(l) were determined. Meta-analysis of the pooled data showed that the average k(l) ranged from 0.50 to 0.58 and MEm ranged between 0.34 and 0.64 MJ/kg of BW^0.75 per day. Although the constrained Mitscherlich fitted the data as well as the straight line, more observations at high energy intakes (above 2.4 MJ/kg of BW^0.75 per day) are required to determine conclusively whether milk energy is linearly related to MEI.
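The model comparison described above can be sketched in a few lines of Python. The data points below are synthetic and illustrative (generated from a Mitscherlich curve), not the paper's 652-observation database, and the coarse grid search is a stand-in for a proper nonlinear least squares fit.

```python
import math

# Hypothetical (MEI, milk energy) pairs in MJ/kg BW^0.75 per day --
# synthetic points lying near a Mitscherlich curve, for illustration only.
data = [(0.8, 0.68), (1.2, 0.84), (1.6, 0.94), (2.0, 1.00), (2.4, 1.04)]

def sse(model, params):
    """Sum of squared errors of a model over the data."""
    return sum((y - model(x, *params)) ** 2 for x, y in data)

# Straight line y = a + b*x fitted by ordinary least squares.
n = len(data)
sx = sum(x for x, _ in data); sy = sum(y for _, y in data)
sxx = sum(x * x for x, _ in data); sxy = sum(x * y for x, y in data)
b_lin = (n * sxy - sx * sy) / (n * sxx - sx * sx)
a_lin = (sy - b_lin * sx) / n
line_sse = sse(lambda x, a, b: a + b * x, (a_lin, b_lin))

# Mitscherlich (diminishing returns) y = a*(1 - exp(-b*x)), fitted by a
# coarse grid search over (a, b) -- a stand-in for nonlinear least squares.
def mitscherlich(x, a, b):
    return a * (1.0 - math.exp(-b * x))

best = min(((a / 100, b / 100) for a in range(50, 301) for b in range(10, 301)),
           key=lambda p: sse(mitscherlich, p))
mit_sse = sse(mitscherlich, best)

print(f"line SSE = {line_sse:.5f}, Mitscherlich SSE = {mit_sse:.5f}")
```

On concave data like this the diminishing-returns curve fits better; when observations are confined to a near-linear range the two models are hard to separate, which is why the abstract calls for more high-intake observations.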
Abstract:
Understanding the metabolic processes associated with aging is key to developing effective management and treatment strategies for age-related diseases. We investigated the metabolic profiles associated with age in a Taiwanese and an American population. 1H NMR spectral profiles were generated for urine specimens collected from the Taiwanese Social Environment and Biomarkers of Aging Study (SEBAS; n = 857; age 54–91 years) and the Mid-Life in the USA study (MIDUS II; n = 1148; age 35–86 years). Multivariate and univariate linear projection methods revealed some common age-related characteristics in urinary metabolite profiles in the American and Taiwanese populations, as well as some distinctive features. In both populations, two metabolites—4-cresyl sulfate (4CS) and phenylacetylglutamine (PAG)—were positively associated with age. In addition, creatine and β-hydroxy-β-methylbutyrate (HMB) were negatively correlated with age in both populations (p < 4 × 10^-6). These age-associated gradients in creatine and HMB reflect decreasing muscle mass with age. The systematic increase in PAG and 4CS was confirmed using ultra-performance liquid chromatography–mass spectrometry (UPLC–MS). Both are products of concerted microbial–mammalian host co-metabolism and indicate an age-related association with the balance of host–microbiome metabolism.
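Per metabolite, a univariate age-association screen of the kind reported here reduces to something like the following sketch; the ages and creatine levels are invented for illustration and are not SEBAS or MIDUS measurements.

```python
# Pearson correlation of a metabolite level with age -- the per-metabolite
# univariate screen behind reported age associations. Synthetic values only.
ages = [40, 50, 55, 60, 65, 70, 75, 80]
creatine = [8.2, 7.9, 7.5, 7.1, 6.8, 6.4, 6.1, 5.7]  # declines with age

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

r = pearson(ages, creatine)
print(f"r = {r:.3f}")  # strongly negative: creatine falls with age
```

In a real screen each correlation would be accompanied by a p-value tested against a multiple-comparison-corrected threshold, as the quoted p < 4 × 10^-6 suggests.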
Abstract:
Accurate monitoring of degradation levels in soils is essential in order to understand and achieve complete degradation of petroleum hydrocarbons in contaminated soils. We aimed to develop the use of multivariate methods for monitoring the biodegradation of diesel in soils and to determine whether diesel-contaminated soils could be remediated to a chemical composition similar to that of an uncontaminated soil. An incubation experiment was set up with three contrasting soil types. Each soil was exposed to diesel at varying stages of degradation and then analysed for key hydrocarbons throughout 161 days of incubation. Hydrocarbon distributions were analysed by Principal Coordinate Analysis and similar samples were grouped by cluster analysis. Variation and differences between samples were determined using permutational multivariate analysis of variance. It was found that all soils followed trajectories approaching the chemical composition of the unpolluted soil. Some contaminated soils were no longer significantly different from the uncontaminated soil after 161 days of incubation. The use of cluster analysis allows the assignment of a percentage chemical similarity of a diesel-contaminated soil to an uncontaminated soil sample. This will aid the monitoring of hydrocarbon-contaminated sites and the establishment of potential endpoints for successful remediation.
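The percentage-similarity idea can be illustrated by comparing hydrocarbon profiles with a Bray-Curtis-style measure; the compound abundances below are invented, and Bray-Curtis is an assumed choice of dissimilarity, not necessarily the one used in the study.

```python
# Hypothetical hydrocarbon profiles (relative abundances of four marker
# compounds); illustrative values only, not the study's measurements.
uncontaminated = [0.05, 0.10, 0.15, 0.70]
day_0   = [0.40, 0.30, 0.20, 0.10]   # freshly contaminated soil
day_161 = [0.08, 0.12, 0.16, 0.64]   # after 161 days of incubation

def bray_curtis(p, q):
    """Bray-Curtis dissimilarity between two composition vectors."""
    num = sum(abs(a - b) for a, b in zip(p, q))
    den = sum(a + b for a, b in zip(p, q))
    return num / den

def percent_similarity(p, q):
    """Percent chemical similarity of profile p to reference q."""
    return 100.0 * (1.0 - bray_curtis(p, q))

print(percent_similarity(uncontaminated, day_0))    # low similarity at day 0
print(percent_similarity(uncontaminated, day_161))  # approaches the reference
```

A remediation endpoint could then be phrased as the contaminated sample clustering with, or exceeding a similarity threshold to, the uncontaminated reference.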
Abstract:
Sixteen years (1994–2009) of ozone profiling by ozonesondes at Valentia Meteorological and Geophysical Observatory, Ireland (51.94° N, 10.23° W), along with data from a co-located MkIV Brewer spectrophotometer for the period 1993–2009, are analyzed. Simple and multiple linear regression methods are used to infer the recent trend, if any, in stratospheric column ozone over the station. The decadal trend from 1994 to 2010 is also calculated from the monthly mean Brewer data and from column ozone data derived from satellite observations. Both of these show a 1.5 % increase per decade during this period, with an uncertainty of about ±0.25 %. Monthly mean data for March show a much stronger trend of ~4.8 % increase per decade for both ozonesonde and Brewer data. The ozone profile is divided into three vertical regions of 0–15 km, 15–26 km, and 26 km to the top of the atmosphere, and an 11-year running average is calculated. Ozone values for the month of March only are observed to increase at each level, with a maximum change of +9.2 ± 3.2 % per decade (between 1994 and 2009) observed in the vertical region from 15 to 26 km. In the tropospheric region from 0 to 15 km the trend is positive but has poor statistical significance. However, for the top region above 26 km the trend is significantly positive, at about 4 % per decade. The March integrated ozonesonde column ozone during this period is found to increase at a rate of ~6.6 % per decade, compared with the Brewer and satellite positive trends of ~5 % per decade.
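A decadal trend of this kind is essentially an ordinary least squares slope on monthly means, rescaled to percent per decade; the series below is synthetic with a built-in drift, not the Valentia record.

```python
# OLS trend on a synthetic monthly-mean column ozone series.
# Illustrative numbers only: 330 DU baseline with a +0.5 DU/year drift.
years = [1994 + m / 12 for m in range(16 * 12)]   # 16 years of monthly means
ozone = [330.0 + 0.5 * (y - 1994) for y in years]  # Dobson units

n = len(years)
mx = sum(years) / n
my = sum(ozone) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(years, ozone))
         / sum((x - mx) ** 2 for x in years))      # DU per year

percent_per_decade = 100.0 * slope * 10.0 / my
print(f"trend: {percent_per_decade:.2f} % per decade")
```

On real data one would also report the uncertainty of the slope (e.g. its standard error) alongside the trend, as the abstract does with its ±0.25 % figure.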
Abstract:
Purpose – The HRM literature provides various typologies of HR managers' roles in organizations. The purpose of this paper is to examine how the roles and required competencies of HR managers in Slovenian multinational companies change when these companies enter the international arena. Design/methodology/approach – The authors explored the total population of 25 Slovenian multinational companies (MNCs) operating in Serbia. In these companies the authors conducted interviews with 16 expatriates working in branches in Serbia, sent questionnaires to the CEOs, and conducted a survey of 50 HR managers along with interviews with 15 of them. The authors used a triangulation approach and analyzed the results using multivariate methods and content analysis. Findings – The authors found that the complexity of HR managers' roles, and the expectations of their competencies, increase with the level of internationalization of the company. Orientation to people and conflict resolution are seen as elementary competencies needed at all stages of internationalization. The key competence is seen to be strategic thinking which, according to CEOs and expatriates, goes hand in hand with cultural sensitivity, openness to change, and a comprehensive understanding of the international environment and business processes. Practical implications – These results can potentially be used for assessing HRM roles and competencies at different stages of company internationalization, especially for MNCs operating in the ex-communist states of Europe, and will help HR managers to support expatriates, CEOs and other employees working in branches abroad more efficiently. Originality/value – This study contributes to the review and evaluation of the quite limited research on HR managers' roles and competencies in MNCs. It focuses on MNCs and outward internationalization in the Central and Eastern European region. It contributes to studies of HR managers' roles and competencies and is the first study to establish a set of roles and competencies for HR managers in Slovenian MNCs.
Abstract:
The Gauss–Newton algorithm is an iterative method regularly used for solving nonlinear least squares problems. It is particularly well suited to the treatment of very large scale variational data assimilation problems that arise in atmosphere and ocean forecasting. The procedure consists of a sequence of linear least squares approximations to the nonlinear problem, each of which is solved by an “inner” direct or iterative process. In comparison with Newton’s method and its variants, the algorithm is attractive because it does not require the evaluation of second-order derivatives in the Hessian of the objective function. In practice the exact Gauss–Newton method is too expensive to apply operationally in meteorological forecasting, and various approximations are made in order to reduce computational costs and to solve the problems in real time. Here we investigate the effects on the convergence of the Gauss–Newton method of two types of approximation used commonly in data assimilation. First, we examine “truncated” Gauss–Newton methods where the inner linear least squares problem is not solved exactly, and second, we examine “perturbed” Gauss–Newton methods where the true linearized inner problem is approximated by a simplified, or perturbed, linear least squares problem. We give conditions ensuring that the truncated and perturbed Gauss–Newton methods converge and also derive rates of convergence for the iterations. The results are illustrated by a simple numerical example. A practical application to the problem of data assimilation in a typical meteorological system is presented.
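The structure of the algorithm can be sketched on a toy problem: each outer iteration linearizes the residuals and solves a linear least squares subproblem. The example below fits y = a·exp(b·x) to exact synthetic data, solving the inner normal equations exactly; it is an illustration of the method's shape with an added step-halving safeguard, not the operational variational assimilation setting.

```python
import math

# Damped Gauss-Newton on a tiny nonlinear least squares problem:
# fit y ~ a * exp(b * x) to exact synthetic data.
xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [2.0 * math.exp(0.7 * x) for x in xs]

def sse(a, b):
    return sum((a * math.exp(b * x) - y) ** 2 for x, y in zip(xs, ys))

a, b = 1.5, 0.5  # initial guess
for _ in range(50):
    # Residuals r_i = f_i - y_i and Jacobian rows (df/da, df/db).
    r = [a * math.exp(b * x) - y for x, y in zip(xs, ys)]
    J = [(math.exp(b * x), a * x * math.exp(b * x)) for x in xs]
    # "Inner" problem: solve the 2x2 normal equations J^T J d = -J^T r
    # exactly. A truncated Gauss-Newton method would solve this system
    # only approximately; a perturbed one would simplify J itself.
    g11 = sum(u * u for u, _ in J); g12 = sum(u * v for u, v in J)
    g22 = sum(v * v for _, v in J)
    c1 = -sum(u * ri for (u, _), ri in zip(J, r))
    c2 = -sum(v * ri for (_, v), ri in zip(J, r))
    det = g11 * g22 - g12 * g12
    da = (c1 * g22 - c2 * g12) / det
    db = (g11 * c2 - g12 * c1) / det
    # Step-halving safeguard keeps the objective non-increasing.
    t = 1.0
    while sse(a + t * da, b + t * db) > sse(a, b) and t > 1e-10:
        t /= 2
    a, b = a + t * da, b + t * db
    if abs(t * da) + abs(t * db) < 1e-12:
        break

print(f"a = {a:.6f}, b = {b:.6f}")  # should recover approximately a = 2, b = 0.7
```

Note that only first derivatives appear: J^T J approximates the Hessian without evaluating second-order terms, which is the attraction of the method described above.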
Abstract:
The goal of this review is to provide a state-of-the-art survey of sampling and probe methods for the solution of inverse problems. Further, a configuration approach to some of the problems will be presented. We study the concepts and analytical results for several recent sampling and probe methods. We give an introduction to the basic idea behind each method using a simple model problem and then provide a general formulation in terms of particular configurations to study the range of the arguments used to set up the method. This provides a novel way to present the algorithms and the analytic arguments for their investigation in a variety of different settings. In detail, we investigate the probe method (Ikehata), the linear sampling method (Colton–Kirsch), the factorization method (Kirsch), the singular sources method (Potthast), the no response test (Luke–Potthast), the range test (Kusiak, Potthast and Sylvester) and the enclosure method (Ikehata) for the solution of inverse acoustic and electromagnetic scattering problems. The main ideas, approaches and convergence results of the methods are presented. For each method, we provide a historical survey of applications to different situations.
Abstract:
In this paper we consider the scattering of a plane acoustic or electromagnetic wave by a one-dimensional, periodic rough surface. We restrict the discussion to the case when the boundary is sound soft in the acoustic case, or perfectly reflecting with TE polarization in the EM case, so that the total field vanishes on the boundary. We propose a uniquely solvable first kind integral equation formulation of the problem, which amounts to a requirement that the normal derivative of the Green's representation formula for the total field vanish on a horizontal line below the scattering surface. We then discuss the numerical solution by Galerkin's method of this (ill-posed) integral equation. We point out that, with two particular choices of the trial and test spaces, we recover the so-called SC (spectral-coordinate) and SS (spectral-spectral) numerical schemes of DeSanto et al., Waves Random Media, 8, 315–414, 1998. We next propose a new Galerkin scheme, a modification of the SS method that we term the SS* method, which is an instance of the well-known dual least squares Galerkin method. We show that the SS* method is always well-defined and is optimally convergent as the size of the approximation space increases. Moreover, we make a connection with the classical least squares method, in which the coefficients in the Rayleigh expansion of the solution are determined by enforcing the boundary condition in a least squares sense, pointing out that the linear system to be solved in the SS* method is identical to that in the least squares method. Using this connection we show that (reflecting the ill-posed nature of the integral equation solved) the condition number of the linear system in the SS* and least squares methods approaches infinity as the approximation space increases in size. We also provide theoretical error bounds on the condition number and on the errors induced in the numerical solution computed as a result of ill-conditioning. Numerical results confirm the convergence of the SS* method and illustrate the ill-conditioning that arises.
Abstract:
We develop the linearization of a semi-implicit semi-Lagrangian model of the one-dimensional shallow-water equations using two different methods. The usual tangent linear model, formed by linearizing the discrete nonlinear model, is compared with a model formed by first linearizing the continuous nonlinear equations and then discretizing. Both models are shown to perform equally well for finite perturbations. However, the asymptotic behaviour of the two models differs as the perturbation size is reduced. This leads to difficulties in showing that the models are correctly coded using the standard tests. To overcome this difficulty we propose a new method for testing linear models, which we demonstrate both theoretically and numerically. © Crown copyright, 2003. Royal Meteorological Society
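The standard correctness test alluded to above checks that the nonlinear and tangent linear models agree to first order as the perturbation shrinks; the toy scalar "model" below is an assumption for illustration, not the shallow-water code.

```python
import math

# Tangent linear model correctness test: the ratio
#   (N(x + e*dx) - N(x)) / (e * L(x) dx)
# should approach 1 as the perturbation size e shrinks.
def N(x):
    """Toy nonlinear 'model' (a single scalar map)."""
    return math.sin(x) + 0.5 * x * x

def L(x, dx):
    """Its tangent linear model: derivative of N applied to dx."""
    return (math.cos(x) + x) * dx

x0, dx = 1.2, 1.0
for e in [1e-1, 1e-2, 1e-3, 1e-4]:
    ratio = (N(x0 + e * dx) - N(x0)) / (e * L(x0, dx))
    print(f"eps = {e:g}  ratio = {ratio:.8f}")
```

In finite precision this ratio first approaches 1 and then degrades once rounding error in the numerator dominates the shrinking perturbation, which is one reason the asymptotic behaviour of a discretized linear model can be hard to verify with this test alone.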
Abstract:
In the past decade, the amount of data in the biological field has grown rapidly; techniques for the analysis of biological data have been developed and new tools have been introduced. Several computational methods are based on unsupervised neural network algorithms that are widely used for multiple purposes including clustering and visualization, e.g. the Self-Organizing Map (SOM). Unfortunately, even though this method is unsupervised, its performance in terms of quality of result and learning speed is strongly dependent on the initialization of the neuron weights. In this paper we present a new initialization technique based on a fully connected undirected graph that reports relations among some interesting features of the input data. Results of experimental tests, in which the proposed algorithm is compared to the original initialization techniques, show that our technique assures faster learning and better performance in terms of quantization error.
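A minimal SOM sketch shows why initialization matters; here a data-driven start (one sample per apparent cluster) is a simple stand-in for the paper's graph-based scheme, which this sketch does not implement.

```python
import random

# Minimal 1-D SOM comparing random weight initialization with a
# data-driven one. Synthetic 1-D data drawn around four cluster centers.
random.seed(0)
data = [[random.gauss(c, 0.3)] for c in (0.0, 2.0, 4.0, 6.0) for _ in range(25)]

def train_som(weights, samples, epochs=20, lr=0.5):
    """Online SOM training with a radius-1 neighbourhood and decaying rate."""
    w = [list(v) for v in weights]
    for t in range(epochs):
        a = lr * (1.0 - t / epochs)
        for x in samples:
            bmu = min(range(len(w)), key=lambda i: (w[i][0] - x[0]) ** 2)
            for i in range(len(w)):
                if abs(i - bmu) <= 1:  # update winner and its neighbours
                    w[i][0] += a * (x[0] - w[i][0])
    return w

def quantization_error(w, samples):
    """Mean squared distance from each sample to its nearest unit."""
    return sum(min((wi[0] - x[0]) ** 2 for wi in w) for x in samples) / len(samples)

random_init = [[random.uniform(-10, 10)] for _ in range(4)]
sampled_init = [data[i] for i in (0, 25, 50, 75)]  # one sample per cluster

qe_rand = quantization_error(train_som(random_init, data), data)
qe_samp = quantization_error(train_som(sampled_init, data), data)
print(f"random init QE = {qe_rand:.3f}, data-driven init QE = {qe_samp:.3f}")
```

Starting the units near the structure of the data gives the map less to learn, which is the intuition behind initialization schemes that exploit relations among input features.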