118 results for Quadratic polynomial
Abstract:
Using the classical Parzen window (PW) estimate as the desired response, the kernel density estimation is formulated as a regression problem and the orthogonal forward regression technique is adopted to construct sparse kernel density (SKD) estimates. The proposed algorithm incrementally minimises a leave-one-out test score to select a sparse kernel model, and a local regularisation method is incorporated into the density construction process to further enforce sparsity. The kernel weights of the selected sparse model are finally updated using the multiplicative nonnegative quadratic programming algorithm, which ensures the nonnegative and unity constraints for the kernel weights and has the desired ability to reduce the model size further. Except for the kernel width, the proposed method has no other parameters that need tuning, and the user is not required to specify any additional criterion to terminate the density construction procedure. Several examples demonstrate the ability of this simple regression-based approach to effectively construct an SKD estimate with comparable accuracy to that of the full-sample optimised PW density estimate. (c) 2007 Elsevier B.V. All rights reserved.
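As a rough illustration of the final weight-update stage, a multiplicative nonnegative quadratic programming step of the kind described can be sketched as below, assuming a positive definite Gaussian kernel Gram matrix B (nonnegative entries) and a Parzen-window target vector c from the regression formulation; the function name, stopping rule and safeguards are illustrative, not taken from the paper.

```python
import numpy as np

def mnqp(B, c, n_iter=200, tol=1e-10):
    """Minimise 0.5*b'Bb - c'b subject to b >= 0 and sum(b) = 1.

    Multiplicative update in the spirit of the MNQP algorithm; the
    nonnegative entries of a Gaussian kernel Gram matrix keep the
    iterates nonnegative, and many weights are driven to zero, which
    is the source of the extra sparsity noted in the abstract.
    """
    n = len(c)
    b = np.full(n, 1.0 / n)                 # feasible starting point
    for _ in range(n_iter):
        Bb = B @ b                          # strictly positive for PD B, b on the simplex
        h = (1.0 - np.sum(b * c / Bb)) / np.sum(b / Bb)  # multiplier enforcing sum(b) = 1
        b_new = np.clip(b * (c + h) / Bb, 0.0, None)
        b_new /= b_new.sum()                # guard against round-off drift
        if np.abs(b_new - b).max() < tol:
            return b_new
        b = b_new
    return b
```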
Abstract:
We discuss the feasibility of wireless terahertz communications links deployed in a metropolitan area and model the large-scale fading of such channels. The model takes into account reception through direct line of sight, ground and wall reflection, as well as diffraction around a corner. The movement of the receiver is modeled by an autonomous dynamic linear system in state space, whereas the geometric relations involved in the attenuation and multipath propagation of the electric field are described by a static nonlinear mapping. A subspace algorithm in conjunction with polynomial regression is used to identify a single-output Wiener model from time-domain measurements of the field intensity when the receiver motion is simulated using a constant angular speed and an exponentially decaying radius. The identification procedure is validated by using the model to perform q-step ahead predictions. The sensitivity of the algorithm to small-scale fading, detector noise, and atmospheric changes is discussed. The performance of the algorithm is tested in the diffraction zone assuming a range of emitter frequencies (2, 38, 60, 100, 140, and 400 GHz). Extensions of the simulation results to situations where a more complicated trajectory describes the motion of the receiver are also implemented, providing information on the performance of the algorithm under a worst-case scenario. Finally, a sensitivity analysis to model parameters for the identified Wiener system is proposed.
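The Wiener structure referred to here separates into two stages, which the following minimal sketch mirrors: a linear state-space block (which a subspace method such as N4SID would estimate from data) followed by a static polynomial nonlinearity fitted by regression. All matrices and signal names are illustrative placeholders, not the paper's identified model.

```python
import numpy as np

def linear_block(A, B, C, u):
    """Propagate the linear dynamics x(t+1) = A x(t) + B u(t);
    return the intermediate (unmeasured) signal z(t) = C x(t)."""
    x = np.zeros(A.shape[0])
    z = np.empty(len(u))
    for t, ut in enumerate(u):
        z[t] = C @ x
        x = A @ x + B * ut                  # A: (n, n); B, C: (n,) arrays
    return z

def fit_static_nonlinearity(z, y, degree=3):
    """Polynomial regression of the measured field intensity y on z;
    returns coefficients (highest order first) for np.polyval."""
    return np.polyfit(z, y, degree)

# q-step-ahead prediction: run the state q steps forward with the known
# input, then map the predicted z through the fitted polynomial.
```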
Abstract:
A quadratic programming optimization procedure for designing asymmetric apodization windows tailored to the shape of time-domain sample waveforms recorded using a terahertz transient spectrometer is proposed. The performance of the designed window in both the time and the frequency domains is compared, using artificially degraded waveforms, with that of conventional rectangular, triangular (Mertz), and Hamming windows. Examples of window optimization assuming Gaussian functions as the building elements of the apodization window are provided. The formulation is sufficiently general to accommodate other basis functions. (C) 2007 Optical Society of America
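As a hedged sketch of the general idea (nonnegative least squares is a special case of quadratic programming; the paper's actual objective and constraints may differ), one can fit a nonnegative combination of Gaussian basis functions to a target envelope derived from the recorded waveform:

```python
import numpy as np
from scipy.optimize import nnls

t = np.linspace(0.0, 1.0, 500)             # time axis (arbitrary units)
target = np.abs(np.sinc(8 * (t - 0.3)))    # stand-in for a waveform envelope

centres = np.linspace(0.0, 1.0, 25)        # Gaussian centres (illustrative)
s = 0.03                                   # common Gaussian width
G = np.exp(-(t[:, None] - centres[None, :])**2 / (2 * s**2))

a, _ = nnls(G, target)                     # min ||G a - target||  s.t.  a >= 0
window = G @ a                             # asymmetric apodization window
```

Because the target envelope is asymmetric about the waveform peak, the fitted window is too, which is the point of tailoring the apodization to the sample waveform.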
Abstract:
The large-scale fading of wireless mobile communications links is modelled assuming the mobile receiver motion is described by a dynamic linear system in state-space. The geometric relations involved in the attenuation and multi-path propagation of the electric field are described by a static non-linear mapping. A Wiener system subspace identification algorithm in conjunction with polynomial regression is used to identify a model from time-domain estimates of the field intensity assuming a multitude of emitters and an antenna array at the receiver end.
Abstract:
We introduce and describe the Multiple Gravity Assist problem, a global optimisation problem that is of great interest in the design of spacecraft and their trajectories. We discuss its formalization and we show, in one particular problem instance, the performance of selected state-of-the-art heuristic global optimisation algorithms. A deterministic search space pruning algorithm is then developed and its polynomial time and space complexity derived. The algorithm is shown to achieve search space reductions of greater than six orders of magnitude, thus significantly reducing the complexity of the subsequent optimisation.
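A minimal sketch of the flavour of such pruning (the paper's actual algorithm, bounds and variables are more elaborate): discretise each trajectory phase, discard grid cells that fail a cheap per-phase feasibility test, and pass only the survivors to the next phase; per-phase reduction factors multiply across phases, which is how reductions of several orders of magnitude can arise. The cost model and budget below are invented placeholders.

```python
import numpy as np

def prune_phase(candidates, phase_cost, dv_max):
    """Keep only grid cells whose best-case phase cost is within budget."""
    return [c for c in candidates if phase_cost(c) <= dv_max]

grid = np.linspace(0.0, 1000.0, 10_000)    # e.g. departure epochs (days)
cost = lambda epoch: 2.0 + 5.0 * abs(np.sin(epoch / 50.0))  # toy delta-v model
survivors = prune_phase(grid, cost, dv_max=4.0)
print(len(survivors) / len(grid))          # fraction of the phase retained
```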
Abstract:
Near isogenic lines (NILs) varying for reduced height (Rht) and photoperiod insensitivity (Ppd-D1) alleles in a cv. Mercia background (rht (tall), Rht-B1b, Rht-D1b, Rht-B1c, Rht8c+Ppd-D1a, Rht-D1c, Rht12) were compared for interception of photosynthetically active radiation (PAR), radiation use efficiency (RUE), above-ground biomass (AGB), harvest index (HI), height, weed prevalence, lodging and grain yield, at one field site but within contrasting (‘organic’ vs ‘conventional’) rotational and agronomic contexts, in each of three years. In the final year, further NILs (rht (tall), Rht-B1b, Rht-D1b, Rht-B1c, Rht-B1b+Rht-D1b, Rht-D1b+Rht-B1c) in Maris Huntsman and Maris Widgeon backgrounds were added, together with 64 lines of a doubled haploid (DH) population [Savannah (Rht-D1b) × Renesansa (Rht8c+Ppd-D1a)]. There were highly significant genotype × system interactions for grain yield, mostly because differences were greater in the conventional system than in the organic system. Quadratic fits of NIL grain yield against height were appropriate for both systems when all NILs and years were included. Extreme dwarfing was associated with reduced PAR, RUE, AGB and HI, and increased weed prevalence. Intermediate dwarfing was often associated with improved HI in the conventional system, but not in the organic system. Heights in excess of the optimum for yield were associated particularly with reduced HI and, in the conventional system, lodging. There was no statistical evidence that optimum height for grain yield varied with system, although fits peaked at 85 cm and 96 cm in the conventional and organic systems, respectively. Amongst the DH lines, the marker for Ppd-D1a was associated with earlier flowering and, in the conventional system only, with reduced PAR, AGB and grain yield. The marker for Rht-D1b was associated with reduced height and, again in the conventional system only, with increased HI and grain yield. The marker for Rht8c was associated with reduced height and, in the conventional system only, with increased HI. When the system × DH line means were used as observations, grain yield was associated with height and early vegetative growth in the organic system, but not in the conventional system. In the conventional system, PAR interception after anthesis correlated with yield. Savannah was the highest yielding line in the conventional system, producing significantly more grain than several lines that outyielded it in the organic system.
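The optimum heights quoted above follow from the vertex of the fitted parabola: with yield = b0 + b1·h + b2·h², the peak sits at h* = -b1/(2·b2). A minimal sketch with made-up illustrative numbers (not the trial data):

```python
import numpy as np

height = np.array([45.0, 60, 75, 85, 95, 110, 130])    # cm
grain = np.array([4.1, 6.0, 7.4, 8.0, 7.9, 7.2, 5.8])  # t/ha (invented)

b2, b1, b0 = np.polyfit(height, grain, 2)   # quadratic fit, highest order first
h_opt = -b1 / (2.0 * b2)                    # height at maximum fitted yield
```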
Abstract:
In this paper the meteorological processes responsible for transporting tracer during the second ETEX (European Tracer EXperiment) release are determined using the UK Met Office Unified Model (UM). The UM-predicted distribution of tracer is also compared with observations from the ETEX campaign. The dominant meteorological process is a warm conveyor belt which transports large amounts of tracer away from the surface up to a height of 4 km over a 36 h period. Convection is also an important process, transporting tracer to heights of up to 8 km. Potential sources of error when using an operational numerical weather prediction model to forecast air quality are also investigated. These potential sources of error include model dynamics, model resolution and model physics. In the UM a semi-Lagrangian monotonic advection scheme is used with cubic polynomial interpolation. This can predict unrealistic negative values of tracer, which are subsequently set to zero, and hence results in an overprediction of tracer concentrations. In order to conserve mass in the UM tracer simulations it was necessary to include a flux-corrected transport method. Model resolution can also affect the accuracy of predicted tracer distributions. Low-resolution simulations (50 km grid length) were unable to resolve a change in wind direction observed during ETEX 2; this led to an error in the transport direction and hence an error in tracer distribution. High-resolution simulations (12 km grid length) captured the change in wind direction and hence produced a tracer distribution that compared better with the observations. The representation of convective mixing was found to have a large effect on the vertical transport of tracer. Turning off the convective mixing parameterisation in the UM significantly reduced the vertical transport of tracer. Finally, air quality forecasts were found to be sensitive to the timing of synoptic scale features. Errors of only 1 h in the position of the cold front relative to the tracer release location resulted in changes in the predicted tracer concentrations that were of the same order of magnitude as the absolute tracer concentrations.
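The negative-tracer mechanism is easy to reproduce in miniature: cubic interpolation of a sharp front undershoots below zero, and resetting those values to zero adds spurious mass. The toy grid and field below are invented for illustration and are not a UM configuration.

```python
import numpy as np

x = np.arange(10, dtype=float)
tracer = np.where(x < 5, 0.0, 1.0)          # sharp tracer front at x = 5

xq = np.linspace(1.5, 6.5, 200)             # departure points
idx = np.searchsorted(x, xq)                # xq lies in [x[i-1], x[i]]
vals = np.array([np.polyval(np.polyfit(x[i-2:i+2], tracer[i-2:i+2], 3), q)
                 for q, i in zip(xq, idx)]) # 4-point cubic interpolation

print(vals.min())                           # < 0: the cubic undershoots
clipped = np.clip(vals, 0.0, None)
print(clipped.sum() - vals.sum())           # > 0: clipping has created mass
```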
Abstract:
A sparse kernel density estimator is derived based on the zero-norm constraint, in which the zero-norm of the kernel weights is incorporated to enhance model sparsity. The classical Parzen window estimate is adopted as the desired response for density estimation, and an approximate function of the zero-norm is used to achieve mathematical tractability and algorithmic efficiency. Under the mild condition of a positive definite design matrix, the kernel weights of the proposed density estimator based on the zero-norm approximation can be obtained using the multiplicative nonnegative quadratic programming algorithm. Using the D-optimality based selection algorithm as a preprocessing step to select a small significant subset of the design matrix, the proposed zero-norm based approach offers an effective means of constructing very sparse kernel density estimates with excellent generalisation performance.
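The zero-norm itself is discontinuous, so a smooth surrogate is what makes the problem tractable; one standard choice (an assumption for illustration, as the paper's exact approximating function may differ) is

```latex
\|\beta\|_0 \;=\; \#\{\, j : \beta_j \neq 0 \,\}
\;\approx\; \sum_{j=1}^{M} \bigl( 1 - e^{-\beta_j/\sigma} \bigr),
\qquad \beta_j \ge 0,\; \sigma > 0,
```

which recovers the true zero-norm as σ → 0 while keeping the penalised objective differentiable.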
Abstract:
A generalized or tunable-kernel model is proposed for probability density function estimation based on an orthogonal forward regression procedure. Each stage of the density estimation process determines a tunable kernel, namely its center vector and diagonal covariance matrix, by minimizing a leave-one-out test criterion. The kernel mixing weights of the constructed sparse density estimate are finally updated using the multiplicative nonnegative quadratic programming algorithm to ensure the nonnegative and unity constraints, and this weight-updating process additionally has the desired ability to further reduce the model size. The proposed tunable-kernel model has advantages, in terms of model generalization capability and model sparsity, over the standard fixed-kernel model that restricts kernel centers to the training data points and employs a single common kernel variance for every kernel. On the other hand, it does not optimize all the model parameters together and thus avoids the problems of high-dimensional ill-conditioned nonlinear optimization associated with the conventional finite mixture model. Several examples are included to demonstrate the ability of the proposed tunable-kernel model to construct very compact and accurate density estimates.
Abstract:
This paper derives an efficient algorithm for constructing sparse kernel density (SKD) estimates. The algorithm first selects a very small subset of significant kernels using an orthogonal forward regression (OFR) procedure based on the D-optimality experimental design criterion. The weights of the resulting sparse kernel model are then calculated using a modified multiplicative nonnegative quadratic programming algorithm. Unlike most SKD estimators, the proposed D-optimality regression approach is an unsupervised construction algorithm and does not require an empirical desired response for the kernel selection task. The strength of the D-optimality OFR lies in the fact that the algorithm automatically selects a small subset of the most significant kernels, related to the largest eigenvalues of the kernel design matrix, which account for most of the energy of the kernel training data; this also guarantees the most accurate kernel weight estimate. The proposed method is also computationally attractive in comparison with many existing SKD construction algorithms. Extensive numerical investigation demonstrates the ability of this regression-based approach to efficiently construct a very sparse kernel density estimate with excellent test accuracy, and our results show that the proposed method compares favourably with other existing sparse methods, in terms of test accuracy, model sparsity and complexity, for constructing kernel density estimates.
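The selection criterion can be illustrated by a naive greedy scheme: at each step, add the kernel that most increases the log-determinant of the selected Gram submatrix. The sketch below recomputes the determinant from scratch at every step for clarity; the OFR implementation referred to in the abstract updates it incrementally via orthogonalisation.

```python
import numpy as np

def greedy_d_optimal(K, m):
    """Select m kernel indices from the Gram matrix K by D-optimality."""
    chosen = []
    for _ in range(m):
        best, best_logdet = None, -np.inf
        for j in range(K.shape[0]):
            if j in chosen:
                continue
            idx = chosen + [j]
            sign, logdet = np.linalg.slogdet(K[np.ix_(idx, idx)])
            if sign > 0 and logdet > best_logdet:   # skip singular candidates
                best, best_logdet = j, logdet
        chosen.append(best)
    return chosen
```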
Abstract:
We consider problems of splitting and connectivity augmentation in hypergraphs. In a hypergraph G = (V + s, E), to split two edges su, sv is to replace them with a single edge uv. We are interested in doing this in such a way as to preserve a defined level of connectivity in V. The splitting technique is often used as a way of adding new edges into a graph or hypergraph, so as to augment the connectivity to some prescribed level. We begin by providing a short history of work done in this area. Then several preliminary results are given in a general form so that they may be used to tackle several problems. We then analyse the hypergraphs G = (V + s, E) for which there is no split preserving the local-edge-connectivity present in V. We provide two structural theorems, one of which implies a slight extension to Mader’s classical splitting theorem. We also provide a characterisation of the hypergraphs for which there is no such “good” split and a splitting result concerned with a specialisation of the local-connectivity function. We then use our splitting results to provide an upper bound on the smallest number of size-two edges we must add to any given hypergraph to ensure that in the resulting hypergraph we have λ(x, y) ≥ r(x, y) for all x, y in V, where r is an integer-valued, symmetric requirement function on V × V. This is the so-called “local-edge-connectivity augmentation problem” for hypergraphs. We also provide an extension to a theorem of Szigeti, about augmenting to satisfy a requirement r, but using hyperedges. Next, in a result born of collaborative work with Zoltán Király from Budapest, we show that the local-connectivity augmentation problem is NP-complete for hypergraphs. Lastly we concern ourselves with an augmentation problem that includes a locational constraint. The premise is that we are given a hypergraph H = (V, E) with a bipartition P = {P1, P2} of V and asked to augment it with size-two edges, so that the result is k-edge-connected and has no new edge contained in some Pi. We consider the splitting technique and describe the obstacles that prevent us from forming “good” splits. From this we deduce results about which hypergraphs have a complete Pk-split. This leads to a minimax result on the optimal number of edges required and a polynomial algorithm to provide an optimal augmentation.
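For concreteness, the basic splitting operation on size-two edges can be written in a few lines of code; the multiset-of-frozensets representation is an illustrative choice, not taken from the thesis.

```python
from collections import Counter

def split_off(edges, s, u, v):
    """Replace the edge pair su, sv by the single edge uv.

    edges is a Counter of frozenset hyperedges; u != v is assumed."""
    assert u != v
    su, sv = frozenset({s, u}), frozenset({s, v})
    if edges[su] == 0 or edges[sv] == 0:
        raise ValueError("both su and sv must be present to split")
    edges[su] -= 1
    edges[sv] -= 1
    edges[frozenset({u, v})] += 1
    return edges
```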
Abstract:
Six Holstein cows fitted with ruminal cannulas and permanent indwelling catheters in the portal vein, hepatic vein, mesenteric vein, and an artery were used to study the effects of abomasal glucose infusion on splanchnic plasma concentrations of gut peptides. The experimental design was a randomized block design with repeated measurements. Cows were assigned to one of 2 treatments: control or infusion of 1,500 g of glucose/d into the abomasum from the day of parturition to 29 d in milk. Cows were sampled 12 ± 6 d prepartum and at 4, 15, and 29 d in milk. Concentrations of glucose-dependent insulinotropic polypeptide, glucagon-like peptide 1(7–36) amide, and oxyntomodulin were measured in pooled samples within cow and sampling day, whereas active ghrelin was measured in samples obtained 30 min before and after feeding at 0800 h. Postpartum, dry matter intake increased at a lower rate with infusion compared with the control. Arterial, portal venous, and hepatic venous plasma concentrations of the measured gut peptides were unaffected by abomasal glucose infusion. The arterial, portal venous, and hepatic venous plasma concentrations of glucose-dependent insulinotropic polypeptide and glucagon-like peptide 1(7–36) amide increased linearly from 12 d prepartum to 29 d postpartum. Plasma concentrations of oxyntomodulin were unaffected by day relative to parturition. Arterial and portal venous plasma concentrations of ghrelin were lower postfeeding compared with prefeeding concentrations. Arterial plasma concentrations of ghrelin were greatest prepartum and lowest at 4 d postpartum, giving a quadratic pattern of change over the transition period. Positive portal venous–arterial and hepatic venous–arterial concentration differences were observed for glucagon-like peptide 1(7–36) amide. A negative portal venous–arterial concentration difference was observed for ghrelin prefeeding. The remaining portal venous–arterial and hepatic venous–arterial concentration differences of gut peptides did not differ from zero. In conclusion, increased postruminal glucose supply to postpartum transition dairy cows reduced feed intake relative to control cows, but did not affect arterial, portal venous, or hepatic venous plasma concentrations of gut peptide hormones. Instead, gut peptide plasma concentrations increased as lactation progressed. Thus, the lower feed intake of postpartum dairy cows receiving abomasal glucose infusion was not attributable to changes in gut peptide concentrations.
Abstract:
A cause and effect relationship between glucagon-like peptide 1(7–36) amide (GLP-1) and cholecystokinin (CCK) and DMI regulation has not been established in ruminants. Three randomized complete block experiments were conducted to determine the effect of feeding fat or infusing GLP-1 or CCK intravenously on DMI, nutrient digestibility, and Cr rate of passage (using Cr₂O₃ as a marker) in wethers. A total of 18 Targhee × Hampshire wethers (36.5 ± 2.5 kg of BW) were used, and each experiment consisted of four 21-d periods (14 d for adaptation and 7 d for infusion and sampling). Wethers allotted to the control treatments served as the controls for all 3 experiments; experiments were performed simultaneously. The basal diet was 60% concentrate and 40% forage. In Exp. 1, treatments were the control (0% added fat) and addition of 4 or 6% Ca salts of palm oil fatty acids (DM basis). Treatments in Exp. 2 and 3 were the control and 3 jugular vein infusion dosages of GLP-1 (0.052, 0.103, or 0.155 µg·kg BW⁻¹·d⁻¹) or CCK (0.069, 0.138, or 0.207 µg·kg BW⁻¹·d⁻¹), respectively. Increases in plasma GLP-1 and CCK concentrations during hormone infusions were comparable with increases observed when increasing amounts of fat were fed. Feeding fat and infusion of GLP-1 tended (linear, P = 0.12; quadratic, P = 0.13) to decrease DMI. Infusion of CCK did not affect (P > 0.21) DMI. Retention time of Cr in the total gastrointestinal tract decreased (linear, P < 0.01) when fat was fed, but was not affected by GLP-1 or CCK infusion. In conclusion, jugular vein infusion produced plasma CCK and GLP-1 concentrations similar to those observed when fat was fed. The effects of feeding fat on DMI may be partially regulated by plasma concentration of GLP-1, but are not likely due solely to changes in a single hormone concentration.
Abstract:
Generalizing the notion of an eigenvector, invariant subspaces are frequently used in the context of linear eigenvalue problems, leading to conceptually elegant and numerically stable formulations in applications that require the computation of several eigenvalues and/or eigenvectors. Similar benefits can be expected for polynomial eigenvalue problems, for which the concept of an invariant subspace needs to be replaced by the concept of an invariant pair. Little has been known so far about numerical aspects of such invariant pairs. The aim of this paper is to fill this gap. The behavior of invariant pairs under perturbations of the matrix polynomial is studied and a first-order perturbation expansion is given. From a computational point of view, we investigate how to best extract invariant pairs from a linearization of the matrix polynomial. Moreover, we describe efficient refinement procedures directly based on the polynomial formulation. Numerical experiments with matrix polynomials from a number of applications demonstrate the effectiveness of our extraction and refinement procedures.
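For readers meeting the term for the first time, the definition standard in this literature is as follows: given a matrix polynomial of degree d, a pair (X, S) with X ∈ C^{n×k} and S ∈ C^{k×k} is an invariant pair if

```latex
P(\lambda) = \sum_{i=0}^{d} \lambda^{i} A_{i}, \qquad
P(X, S) := \sum_{i=0}^{d} A_{i}\, X S^{i} = 0 .
```

The case k = 1, S = (λ) recovers an ordinary eigenpair, since then P(X, S) = P(λ)x.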
Abstract:
We consider the scattering of a time-harmonic acoustic incident plane wave by a sound-soft convex curvilinear polygon with Lipschitz boundary. For standard boundary or finite element methods, with a piecewise polynomial approximation space, the number of degrees of freedom required to achieve a prescribed level of accuracy grows at least linearly with respect to the frequency of the incident wave. Here we propose a novel Galerkin boundary element method with a hybrid approximation space, consisting of the products of plane wave basis functions with piecewise polynomials supported on several overlapping meshes: a uniform mesh on illuminated sides, and graded meshes refined towards the corners of the polygon on illuminated and shadow sides. Numerical experiments suggest that the number of degrees of freedom required to achieve a prescribed level of accuracy need only grow logarithmically as the frequency of the incident wave increases.
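Schematically (a sketch of the construction, with notation assumed here rather than quoted from the paper), the hybrid space contains boundary functions of the form

```latex
v_h(s) \;=\; \sum_{j} p_j(s)\, e^{\pm \mathrm{i} k s},
```

where s is arc length along the boundary, k is the wavenumber of the incident wave, and the p_j are piecewise polynomials on the overlapping uniform and graded meshes; the oscillatory factor carries the known high-frequency behaviour, leaving only slowly varying amplitudes for the polynomials to resolve.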