37 results for linear weighting methods
in CentAUR: Central Archive University of Reading - UK
Abstract:
The current energy requirements system used in the United Kingdom for lactating dairy cows utilizes key parameters such as metabolizable energy intake (MEI) at maintenance (MEm), the efficiency of utilization of MEI for 1) maintenance, 2) milk production (k(l)), 3) growth (k(g)), and the efficiency of utilization of body stores for milk production (k(t)). Traditionally, these have been determined by linear regression analysis of energy balance data from calorimetry experiments. Many studies have raised concerns over current energy feeding systems, particularly in relation to these key parameters and the linear models used to estimate them. Therefore, a database containing 652 dairy cow observations was assembled from calorimetry studies in the United Kingdom. Five functions for analyzing energy balance data were considered: a straight line, two diminishing-returns functions (the Mitscherlich and the rectangular hyperbola), and two sigmoidal functions (the logistic and the Gompertz). Meta-analysis of the data was conducted to estimate k(g) and k(t). Values of 0.83 to 0.86 for k(g) and 0.66 to 0.69 for k(t) were obtained using all the functions (with standard errors of 0.028 and 0.027, respectively), which differ considerably from previous reports of 0.60 to 0.75 for k(g) and 0.82 to 0.84 for k(t). Using the estimated values of k(g) and k(t), the data were corrected to allow for body tissue changes. Based on the definition of k(l) as the derivative of the ratio of milk energy derived from MEI to MEI directed towards milk production, MEm and k(l) were determined. Meta-analysis of the pooled data showed that the average k(l) ranged from 0.50 to 0.58 and MEm ranged between 0.34 and 0.64 MJ/kg of BW^0.75 per day. Although the constrained Mitscherlich fitted the data as well as the straight line, more observations at high energy intakes (above 2.4 MJ/kg of BW^0.75 per day) are required to determine conclusively whether milk energy is linearly related to MEI.
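As a hedged illustration of the curve comparison described above, the sketch below fits a straight line and a Mitscherlich (diminishing-returns) function to synthetic energy-balance data with SciPy; all variable names and coefficients are invented, not taken from the study.

```python
# Hedged sketch: comparing a straight-line fit with a Mitscherlich
# (diminishing-returns) fit to energy-balance data. Data are synthetic.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

# Synthetic intakes: MEI in MJ per kg BW^0.75 per day
mei = rng.uniform(0.8, 2.4, size=200)
milk_energy = 0.62 * mei - 0.35 + rng.normal(0, 0.05, size=200)

def straight_line(x, a, b):
    return a + b * x

def mitscherlich(x, ymax, k, c):
    # Diminishing-returns response approaching the asymptote ymax
    return ymax * (1.0 - np.exp(-k * (x - c)))

p_lin, _ = curve_fit(straight_line, mei, milk_energy)
p_mit, _ = curve_fit(mitscherlich, mei, milk_energy,
                     p0=[2.0, 0.5, 0.5], maxfev=20000)

# Residual sums of squares: does the Mitscherlich fit the data
# as well as the straight line does?
rss_lin = np.sum((milk_energy - straight_line(mei, *p_lin)) ** 2)
rss_mit = np.sum((milk_energy - mitscherlich(mei, *p_mit)) ** 2)
print(f"RSS line: {rss_lin:.3f}  RSS Mitscherlich: {rss_mit:.3f}")
```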
Abstract:
In financial decision-making processes, the weights adopted for the objective functions have a significant impact on the final decision outcome. However, conventional rating and weighting methods have difficulty deriving appropriate weights for complex decision-making problems with imprecise information. Entropy is a quantitative measure of uncertainty and has proved useful for exploring attribute weights in decision making. A fuzzy, entropy-based mathematical approach is employed to solve the weighting problem of the objective functions in an overall cash-flow model, and its application to multiproject cash-flow situations is demonstrated. A multiproject portfolio undertaken by a medium-size construction firm in Hong Kong was used as a real case study. The results indicate that the overall before-tax profit was HK$0.11 million lower after the introduction of appropriate weights. In addition, the best time to invest in new projects arising from positive cash flow was identified to be two working months earlier than under the unweighted system.
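The entropy-weighting step can be sketched as follows; this is the standard Shannon-entropy attribute-weighting calculation, applied to an invented decision matrix rather than the Hong Kong case-study data.

```python
# Minimal sketch of entropy-based weighting (Shannon entropy over
# normalized attribute values); matrix entries are illustrative only
# and assumed positive.
import numpy as np

# Rows: alternatives (e.g. projects); columns: objectives/attributes
x = np.array([[0.8, 120.0, 3.0],
              [0.6, 150.0, 2.0],
              [0.9,  90.0, 4.0],
              [0.7, 110.0, 5.0]])

p = x / x.sum(axis=0)                       # column-wise proportions
k = 1.0 / np.log(x.shape[0])                # normalizing constant
entropy = -k * (p * np.log(p)).sum(axis=0)  # entropy of each attribute
diversity = 1.0 - entropy                   # degree of divergence
weights = diversity / diversity.sum()       # entropy-based weights

print(weights)  # attributes with more dispersion receive larger weights
```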
Abstract:
Understanding the metabolic processes associated with aging is key to developing effective management and treatment strategies for age-related diseases. We investigated the metabolic profiles associated with age in a Taiwanese and an American population. ¹H NMR spectral profiles were generated for urine specimens collected from the Taiwanese Social Environment and Biomarkers of Aging Study (SEBAS; n = 857; age 54–91 years) and the Mid-Life in the USA study (MIDUS II; n = 1148; age 35–86 years). Multivariate and univariate linear projection methods revealed some common age-related characteristics in urinary metabolite profiles in the American and Taiwanese populations, as well as some distinctive features. In both cases, two metabolites—4-cresyl sulfate (4CS) and phenylacetylglutamine (PAG)—were positively associated with age. In addition, creatine and β-hydroxy-β-methylbutyrate (HMB) were negatively correlated with age in both populations (p < 4 × 10⁻⁶). These age-associated gradients in creatine and HMB reflect decreasing muscle mass with age. The systematic increase in PAG and 4CS was confirmed using ultraperformance liquid chromatography–mass spectrometry (UPLC–MS). Both are products of concerted microbial–mammalian host cometabolism and indicate an age-related association with the balance of host–microbiome metabolism.
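A minimal sketch of the univariate screening step, assuming simulated metabolite signals rather than the SEBAS/MIDUS spectra: each signal is regressed on age and flagged against the p < 4 × 10⁻⁶ threshold quoted above.

```python
# Hedged sketch: regress each metabolite signal on age and flag
# associations passing a stringent threshold. Data are simulated.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(1)
n = 500
age = rng.uniform(35, 90, size=n)
metabolites = {
    "PAG":      0.02 * age + rng.normal(0, 0.5, n),   # rises with age
    "creatine": -0.03 * age + rng.normal(0, 0.5, n),  # falls with age
    "other":    rng.normal(0, 0.5, n),                # no association
}

for name, signal in metabolites.items():
    res = linregress(age, signal)
    flag = "significant" if res.pvalue < 4e-6 else "n.s."
    print(f"{name}: slope={res.slope:+.4f}, p={res.pvalue:.2e} ({flag})")
```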
Abstract:
Sixteen years (1994–2009) of ozone profiling by ozonesondes at Valentia Meteorological and Geophysical Observatory, Ireland (51.94° N, 10.23° W), along with data from a co-located MkIV Brewer spectrophotometer for the period 1993–2009, are analyzed. Simple and multiple linear regression methods are used to infer the recent trend, if any, in stratospheric column ozone over the station. The decadal trend from 1994 to 2010 is also calculated from the monthly mean data of the Brewer instrument and from column ozone data derived from satellite observations. Both show a 1.5 % increase per decade during this period, with an uncertainty of about ±0.25 %. Monthly mean data for March show a much stronger trend of ~4.8 % increase per decade for both ozonesonde and Brewer data. The ozone profile is divided into three vertical layers, 0–15 km, 15–26 km, and 26 km to the top of the atmosphere, and an 11-year running average is calculated. Ozone values for the month of March are observed to increase at each level, with a maximum change of +9.2 ± 3.2 % per decade (between 1994 and 2009) observed in the vertical region from 15 to 26 km. In the tropospheric region from 0 to 15 km the trend is positive but of poor statistical significance. However, for the top level above 26 km the trend is significantly positive, at about 4 % per decade. The March integrated ozonesonde column ozone during this period is found to increase at a rate of ~6.6 % per decade, compared with the Brewer and satellite positive trends of ~5 % per decade.
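A hedged sketch of the regression-based trend estimate, on synthetic monthly means (all values invented): the annual cycle is removed as a monthly climatology, and a straight line is fitted to the residual anomaly.

```python
# Illustrative sketch on synthetic data: a decadal column-ozone trend
# from simple linear regression on deseasonalized monthly means.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(2)
months = np.arange(16 * 12)                      # 1994-2009, monthly index
seasonal = 20 * np.sin(2 * np.pi * months / 12)  # annual cycle (DU)
ozone = 330 + seasonal + 0.04 * months + rng.normal(0, 8, months.size)

# Remove the mean annual cycle (monthly climatology) before fitting
clim = np.array([ozone[m::12].mean() for m in range(12)])
anomaly = ozone - clim[months % 12]

res = linregress(months, anomaly)
pct = 100 * res.slope * 120 / ozone.mean()       # slope per 120 months
err = 100 * res.stderr * 120 / ozone.mean()
print(f"trend: {pct:+.2f} +/- {err:.2f} % per decade")
```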
Abstract:
The Gauss–Newton algorithm is an iterative method regularly used for solving nonlinear least squares problems. It is particularly well suited to the treatment of very large scale variational data assimilation problems that arise in atmosphere and ocean forecasting. The procedure consists of a sequence of linear least squares approximations to the nonlinear problem, each of which is solved by an “inner” direct or iterative process. In comparison with Newton’s method and its variants, the algorithm is attractive because it does not require the evaluation of second-order derivatives in the Hessian of the objective function. In practice the exact Gauss–Newton method is too expensive to apply operationally in meteorological forecasting, and various approximations are made in order to reduce computational costs and to solve the problems in real time. Here we investigate the effects of two types of approximation commonly used in data assimilation on the convergence of the Gauss–Newton method. First, we examine “truncated” Gauss–Newton methods, where the inner linear least squares problem is not solved exactly, and second, we examine “perturbed” Gauss–Newton methods, where the true linearized inner problem is approximated by a simplified, or perturbed, linear least squares problem. We give conditions ensuring that the truncated and perturbed Gauss–Newton methods converge and also derive rates of convergence for the iterations. The results are illustrated by a simple numerical example. A practical application to the problem of data assimilation in a typical meteorological system is presented.
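A schematic of the truncated Gauss–Newton idea on a toy nonlinear least-squares problem; the model, tolerances, and inner-iteration limit below are illustrative, not those of an operational assimilation system.

```python
# Schematic truncated Gauss-Newton iteration on a toy nonlinear
# least-squares problem: fit y = exp(a*t) + b. The inner normal-equation
# solve is deliberately cut off after a few conjugate-gradient steps.
import numpy as np
from scipy.sparse.linalg import cg

t = np.linspace(0.0, 1.0, 50)
y_obs = np.exp(0.7 * t) + 0.3 + np.random.default_rng(3).normal(0, 0.01, 50)

def residual(x):
    return np.exp(x[0] * t) + x[1] - y_obs

def jacobian(x):
    return np.column_stack([t * np.exp(x[0] * t), np.ones_like(t)])

x = np.array([0.0, 0.0])
for outer in range(20):
    r, J = residual(x), jacobian(x)
    grad = J.T @ r                        # gradient of 0.5*||r||^2
    if np.linalg.norm(grad) < 1e-10:
        break
    # Inner linear least-squares step (J^T J) dx = -grad, truncated:
    dx, _ = cg(J.T @ J, -grad, maxiter=3)
    x = x + dx
print("estimate:", x)                     # should approach (0.7, 0.3)
```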
Abstract:
The goal of this review is to provide a state-of-the-art survey of sampling and probe methods for the solution of inverse problems. Further, a configuration approach to some of the problems is presented. We study the concepts and analytical results for several recent sampling and probe methods. We give an introduction to the basic idea behind each method using a simple model problem and then provide a general formulation in terms of particular configurations to study the range of the arguments that are used to set up the method. This provides a novel way to present the algorithms and the analytic arguments for their investigation in a variety of different settings. In detail, we investigate the probe method (Ikehata), the linear sampling method (Colton–Kirsch), the factorization method (Kirsch), the singular sources method (Potthast), the no response test (Luke–Potthast), the range test (Kusiak, Potthast and Sylvester) and the enclosure method (Ikehata) for the solution of inverse acoustic and electromagnetic scattering problems. The main ideas, approaches and convergence results of the methods are presented. For each method, we provide a historical survey of its application to different situations.
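As one concrete instance of the methods listed, the linear sampling method (Colton–Kirsch) in the 2D acoustic setting reduces to solving a far-field equation at each sampling point z; the formulation below is the textbook version under assumed notation, not a statement of the review's own configuration approach.

```latex
% Sketch (2D acoustic case, notation assumed): the far-field equation
% at a sampling point z, where F is the far-field operator built from
% the measured far-field pattern u_\infty.
\[
  (F g_z)(\hat{x}) := \int_{S^1} u_\infty(\hat{x}, d)\, g_z(d)\, ds(d)
  = \frac{e^{i\pi/4}}{\sqrt{8\pi k}}\, e^{-ik\, \hat{x}\cdot z},
  \qquad \hat{x} \in S^1 .
\]
% The sampling indicator is the norm of the regularized solution:
% \|g_z\| blows up as z leaves the scatterer, revealing its support.
```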
Abstract:
In this paper we consider the scattering of a plane acoustic or electromagnetic wave by a one-dimensional, periodic rough surface. We restrict the discussion to the case when the boundary is sound-soft in the acoustic case, or perfectly reflecting with TE polarization in the EM case, so that the total field vanishes on the boundary. We propose a uniquely solvable first-kind integral equation formulation of the problem, which amounts to a requirement that the normal derivative of the Green's representation formula for the total field vanish on a horizontal line below the scattering surface. We then discuss the numerical solution by Galerkin's method of this (ill-posed) integral equation. We point out that, with two particular choices of the trial and test spaces, we recover the so-called SC (spectral-coordinate) and SS (spectral-spectral) numerical schemes of DeSanto et al. (Waves Random Media, 8, 315–414, 1998). We next propose a new Galerkin scheme, a modification of the SS method that we term the SS* method, which is an instance of the well-known dual least squares Galerkin method. We show that the SS* method is always well-defined and is optimally convergent as the size of the approximation space increases. Moreover, we make a connection with the classical least squares method, in which the coefficients in the Rayleigh expansion of the solution are determined by enforcing the boundary condition in a least squares sense, pointing out that the linear system to be solved in the SS* method is identical to that in the least squares method. Using this connection we show that (reflecting the ill-posed nature of the integral equation solved) the condition number of the linear system in the SS* and least squares methods approaches infinity as the approximation space increases in size. We also provide theoretical error bounds on the condition number and on the errors induced in the numerical solution as a result of the ill-conditioning. Numerical results confirm the convergence of the SS* method and illustrate the ill-conditioning that arises.
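For readers unfamiliar with the notation, the Rayleigh expansion underlying the least squares connection can be sketched as follows; the period L, quasi-periodicity parameter α, and wavenumber k are assumed symbols, not the paper's.

```latex
% Sketch: the Rayleigh expansion of the scattered field above a
% surface of period L, for an incident wave with quasi-periodicity
% parameter \alpha and wavenumber k.
\[
  u^s(x_1, x_2) = \sum_{n \in \mathbb{Z}} a_n\,
    e^{\,i\alpha_n x_1 + i\beta_n x_2},
  \qquad
  \alpha_n = \alpha + \frac{2\pi n}{L},
  \quad
  \beta_n = \sqrt{k^2 - \alpha_n^2}, \ \operatorname{Im}\beta_n \ge 0 ,
\]
% valid above the surface; the classical least squares method fixes the
% coefficients a_n by minimizing the misfit of the boundary condition
% on the sound-soft surface in the L^2 sense.
```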
Abstract:
We develop the linearization of a semi-implicit semi-Lagrangian model of the one-dimensional shallow-water equations using two different methods. The usual tangent linear model, formed by linearizing the discrete nonlinear model, is compared with a model formed by first linearizing the continuous nonlinear equations and then discretizing. Both models are shown to perform equally well for finite perturbations. However, the asymptotic behaviour of the two models differs as the perturbation size is reduced, which leads to difficulties in showing that the models are correctly coded using the standard tests. To overcome this difficulty, we propose a new method for testing linear models, which we demonstrate both theoretically and numerically.
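The standard test mentioned above is, in generic form, the following ratio check (notation assumed here, not taken from the paper).

```latex
% Generic correctness test for a tangent linear model L of a
% nonlinear model M (notation assumed):
\[
  R(\epsilon) \;=\;
  \frac{\lVert M(x + \epsilon\,\delta x) - M(x) \rVert}
       {\lVert \epsilon\, L\,\delta x \rVert}
  \;\longrightarrow\; 1
  \quad \text{as } \epsilon \to 0 ,
\]
% with the departure from 1 expected to shrink linearly in \epsilon.
% The difficulty noted above is that the two linearizations behave
% differently in precisely this small-\epsilon asymptotic regime.
```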
Abstract:
In the past decade, the amount of data in the biological field has grown rapidly; techniques for analyzing biological data have been developed and new tools have been introduced. Several computational methods are based on unsupervised neural network algorithms that are widely used for multiple purposes, including clustering and visualization, e.g. the Self-Organizing Map (SOM). Unfortunately, even though this method is unsupervised, its performance in terms of result quality and learning speed depends strongly on the initialization of the neuron weights. In this paper we present a new initialization technique based on a totally connected undirected graph that captures relations among interesting features of the input data. Results of experimental tests, in which the proposed algorithm is compared with the original initialization techniques, show that our technique ensures faster learning and better performance in terms of quantization error.
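A hedged illustration of the quantization-error comparison, using the open-source MiniSom package (assumed installed, e.g. via pip install minisom) on random data; it compares generic random vs. PCA initialization, not the graph-based initializer proposed in the paper.

```python
# Illustrative SOM-initialization comparison with MiniSom; this is NOT
# the paper's graph-based initializer, just a generic baseline contrast.
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(4)
data = rng.random((500, 8))  # 500 samples, 8 features (random data)

def train_and_score(init_pca: bool) -> float:
    som = MiniSom(10, 10, 8, sigma=1.0, learning_rate=0.5, random_seed=4)
    if init_pca:
        som.pca_weights_init(data)       # data-driven initialization
    else:
        som.random_weights_init(data)    # weights drawn from the samples
    som.train_random(data, 2000)
    return som.quantization_error(data)  # mean distance to best unit

print("random init QE:", train_and_score(False))
print("PCA init QE:   ", train_and_score(True))
```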
Abstract:
Small-scale dairy systems play an important role in the Mexican dairy sector, and farm planning activities related to resource allocation have a significant impact on the profitability of such enterprises. Linear programming is a technique widely used for planning and ration formulation, and partial budgeting is a technique for assessing the impact of changes on the profitability of an enterprise. This study used both methods to optimise land use for forage production and nutrient availability, and to evaluate the economic impact of such changes in small-scale Mexican dairy systems. The model showed satisfactory performance when its optimal solutions were compared with the traditional strategy. The strategy using fresh ryegrass, maize silage and oat hay, and the strategy using a combination of alfalfa hay, maize silage, fresh ryegrass and oat hay, appeared to be attractive options for providing a better nutrient supply and maintaining a higher stocking rate throughout the year than the traditional strategy.
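A toy land-allocation linear program in the spirit of the study can be set up with scipy.optimize.linprog; every coefficient below (costs, yields, requirements, the land limit) is invented for illustration.

```python
# Toy forage land-allocation LP: choose hectares of each forage to
# minimize cost subject to nutrient requirements and a land limit.
# All coefficients are hypothetical.
from scipy.optimize import linprog

# Decision variables: hectares of [ryegrass, maize silage, oat hay, alfalfa]
cost = [300.0, 450.0, 250.0, 500.0]  # cost per hectare (to minimize)

# Constraints in linprog's A_ub @ x <= b_ub form (>= flipped by negation)
A_ub = [
    [-40.0, -90.0, -35.0, -60.0],  # ME yield per ha: total >= 2500 GJ
    [-0.9, -0.7, -0.5, -1.6],      # protein yield per ha: total >= 30 t
    [1.0, 1.0, 1.0, 1.0],          # total land <= 52 ha
]
b_ub = [-2500.0, -30.0, 52.0]

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 4)
print(res.x, res.fun)  # optimal hectares per forage and total cost
```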
Abstract:
This paper identifies indicators for energy efficiency assessment of residential buildings in China through an extensive literature review. Indicators are derived from three main sources: 1) existing building assessment methods; 2) existing Chinese standards and technology codes for building energy efficiency; and 3) academic research. As a result, we propose an indicator list obtained by refining the indicators in the above sources. The identified indicators are weighted by the group analytic hierarchy process (AHP) method, implemented in five key steps. Step 1: experienced experts are selected to form a group; Step 2: a survey is conducted to collect the group members' individual judgments on the importance of the indicators; Step 3: the members' judgments are synthesized into group judgments; Step 4: the indicators are weighted by AHP applied to the group judgments; Step 5: a consistency check confirms that the judgment matrix is acceptable. We believe that the weighted indicators in this paper will provide an important reference for building energy efficiency assessment.
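A minimal sketch of the AHP weighting and consistency check (Steps 4 and 5 above), with a made-up 3×3 judgment matrix; aggregating group judgments by geometric mean is one common convention, assumed here rather than taken from the paper.

```python
# Sketch of the AHP weighting step: derive indicator weights from a
# pairwise comparison matrix via the principal eigenvector and check
# consistency. The 3x3 Saaty-scale matrix is hypothetical; group
# judgments are often aggregated beforehand by geometric mean.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])   # reciprocal judgment matrix

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)          # principal eigenvalue lambda_max
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()             # normalized priority vector

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)   # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]    # Saaty's random index
print("weights:", weights, "CR:", ci / ri)  # CR < 0.1 is acceptable
```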