955 results for Convex Polygon
Abstract:
This article is about modeling count data with zero truncation. A parametric count density family is considered. The truncated mixture of densities from this family is different from the mixture of truncated densities from the same family. Whereas the former model is more natural to formulate and to interpret, the latter model is theoretically easier to treat. It is shown that for any mixing distribution leading to a truncated mixture, a (usually different) mixing distribution can be found so that the associated mixture of truncated densities equals the truncated mixture, and vice versa. This implies that the likelihood surfaces for both situations agree, and in this sense both models are equivalent. Zero-truncated count data models are used frequently in the capture-recapture setting to estimate population size, and it can be shown that the two Horvitz-Thompson estimators, associated with the two models, agree. In particular, it is possible to achieve strong results for mixtures of truncated Poisson densities, including reliable, global construction of the unique NPMLE (nonparametric maximum likelihood estimator) of the mixing distribution, implying a unique estimator for the population size. The benefit of these results lies in the fact that it is valid to work with the mixture of truncated count densities, which is less appealing for the practitioner but theoretically easier. Mixtures of truncated count densities form a convex linear model, for which a developed theory exists, including global maximum likelihood theory as well as algorithmic approaches. Once the problem has been solved in this class, it might readily be transformed back to the original problem by means of an explicitly given mapping. Applications of these ideas are given, particularly in the case of the truncated Poisson family.
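The Horvitz-Thompson step described above is easy to make concrete. The sketch below (an illustration under assumed data, not the article's code; the function names are our own) fits a zero-truncated Poisson by maximum likelihood and plugs the fitted zero probability into the population-size estimate N-hat = n / (1 - exp(-lambda-hat)).

```python
import numpy as np
from scipy.optimize import brentq

def fit_ztp_lambda(counts):
    """MLE of lambda for a zero-truncated Poisson given observed counts k >= 1.

    The likelihood equation reduces to the moment condition
    lambda / (1 - exp(-lambda)) = sample mean, solvable by bracketing
    (requires sample mean > 1)."""
    xbar = np.mean(counts)
    f = lambda lam: lam / (1.0 - np.exp(-lam)) - xbar
    return brentq(f, 1e-8, 1e3)

def horvitz_thompson_N(counts):
    """Horvitz-Thompson population-size estimate N = n / (1 - P(zero))."""
    lam = fit_ztp_lambda(counts)
    n = len(counts)
    return n / (1.0 - np.exp(-lam))
```

For example, with 60 individuals seen once, 30 seen twice and 10 seen three times, the fitted lambda is about 0.87 and the estimated population size about 172, of which 100 were observed.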
Abstract:
1. The UK Biodiversity Action Plan (UKBAP) identifies invertebrate species in danger of national extinction. For many of these species, targets for recovery specify the number of populations that should exist by a specific future date but offer no procedure to plan strategically to achieve the target for any species. 2. Here we describe techniques based upon geographic information systems (GIS) that produce conservation strategy maps (CSM) to assist with achieving recovery targets based on all available and relevant information. 3. The heath fritillary Mellicta athalia is a UKBAP species used here to illustrate the use of CSM. A phase 1 habitat survey was used to identify habitat polygons across the county of Kent, UK. These were systematically filtered using relevant habitat, botanical and autecological data to identify seven types of polygon, including those with extant colonies or in the vicinity of extant colonies, areas managed for conservation but without colonies, and polygons that had the appropriate habitat structure and may therefore be suitable for reintroduction. 4. Five clusters of polygons of interest were found across the study area. The CSM of two of them are illustrated here: the Blean Wood complex, which contains the existing colonies of heath fritillary in Kent, and the Orlestone Forest complex, which offers opportunities for reintroduction. 5. Synthesis and applications. Although the CSM concept is illustrated here for the UK, we suggest that CSM could be part of species conservation programmes throughout the world. CSM are dynamic and should be stored in electronic format, preferably on the world-wide web, so that they can be easily viewed and updated. CSM can be used to illustrate opportunities and to develop strategies with scientists and non-scientists, enabling the engagement of all communities in a conservation programme. CSM for different years can be presented to illustrate the progress of a plan or to provide continuous feedback on how a field scenario develops.
Abstract:
The title compound, C21H28O4, a synthetic glucocorticoid, crystallizes with a single molecule in the asymmetric unit. Ring A is almost in a half-chair conformation, rings B and C are almost in chair conformations, and ring D is between a twist and a 13beta-envelope conformation. The A/B ring junction is quasi-trans, whereas the B/C and C/D ring junctions both approach trans characteristics. The molecule as a whole is slightly convex towards the beta side, with an angle of 9.60 (2) degrees between the C10-C19 and C13-C18 vectors. Molecular-packing and hydrogen-bonding (both intra- and inter-molecular) interactions play a major role in the structural association of the compound.
Abstract:
A new primary model based on a thermodynamically consistent first-order kinetic approach was constructed to describe non-log-linear inactivation kinetics of pressure-treated bacteria. The model assumes a first-order process in which the specific inactivation rate changes inversely with the square root of time. The model gave reasonable fits to experimental data over six to seven orders of magnitude. It was also tested on 138 published data sets and provided good fits in about 70% of cases in which the shape of the curve followed the typical convex upward form. In the remainder of published examples, curves contained additional shoulder regions or extended tail regions. Curves with shoulders could be accommodated by including an additional time delay parameter, and curves with tails could be accommodated by omitting points in the tail beyond the point at which survival levels remained more or less constant. The model parameters varied regularly with pressure, which may reflect a genuine mechanistic basis for the model. This property also allowed the calculation of (a) parameters analogous to the decimal reduction time D and the z value (the temperature increase needed to change the D value by a factor of 10) in thermal processing, and hence the processing conditions needed to attain a desired level of inactivation; and (b) the apparent thermodynamic volumes of activation associated with the lethal events. The hypothesis that inactivation rates changed as a function of the square root of time would be consistent with a diffusion-limited process.
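A specific rate proportional to 1/sqrt(t) integrates to a survival curve that is linear in sqrt(t): log10 N(t) = log10 N0 - b*sqrt(t). A minimal fitting sketch (our own illustration on synthetic data, not the paper's code):

```python
import numpy as np

def fit_sqrt_time_model(t, log10_survivors):
    """Least-squares fit of log10 N(t) = log10 N0 - b*sqrt(t).

    The sqrt(t) dependence follows from integrating a first-order process
    whose specific rate varies as 1/sqrt(t)."""
    slope, log10_N0 = np.polyfit(np.sqrt(t), log10_survivors, 1)
    return -slope, log10_N0  # b (decline per sqrt-time unit), initial log count
```

On exact synthetic data with log10 N0 = 8 and b = 0.5, the fit recovers both parameters; on real data the quality of the straight-line fit against sqrt(t) is itself a check of the convex-upward shape the paper describes.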
Abstract:
The identification of non-linear systems using only observed finite datasets has become a mature research area over the last two decades. A class of linear-in-the-parameter models with universal approximation capabilities have been intensively studied and widely used due to the availability of many linear-learning algorithms and their inherent convergence conditions. This article presents a systematic overview of basic research on model selection approaches for linear-in-the-parameter models. One of the fundamental problems in non-linear system identification is to find the minimal model with the best model generalisation performance from observational data only. The important concepts in achieving good model generalisation used in various non-linear system-identification algorithms are first reviewed, including Bayesian parameter regularisation and model selection criteria based on cross-validation and experimental design. A significant advance in machine learning has been the development of the support vector machine as a means for identifying kernel models based on the structural risk minimisation principle. The developments on the convex optimisation-based model construction algorithms including the support vector regression algorithms are outlined. Input selection algorithms and on-line system identification algorithms are also included in this review. Finally, some industrial applications of non-linear models are discussed.
Abstract:
An efficient algorithm is presented for the solution of the equations of isentropic gas dynamics with a general convex gas law. The scheme is based on solving linearized Riemann problems approximately, and in more than one dimension incorporates operator splitting. In particular, only two function evaluations in each computational cell are required. The scheme is applied to a standard test problem in gas dynamics for a polytropic gas.
Abstract:
An approximate Riemann solver is presented for the compressible flow equations with a general (convex) equation of state in a Lagrangian frame of reference.
Abstract:
An efficient algorithm based on flux difference splitting is presented for the solution of the three-dimensional equations of isentropic flow in a generalised coordinate system, and with a general convex gas law. The scheme is based on solving linearised Riemann problems approximately and in more than one dimension incorporates operator splitting. The algorithm requires only one function evaluation of the gas law in each computational cell. The scheme has good shock capturing properties and the advantage of using body-fitted meshes. Numerical results are shown for Mach 3 flow of air past a circular cylinder. Furthermore, the algorithm also applies to shallow water flows by employing the familiar gas dynamics analogy.
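The one-dimensional core of such a finite-volume scheme can be sketched compactly. The code below is our own illustration, using a simple Rusanov (local Lax-Friedrichs) flux as a stand-in for the paper's linearised Riemann solver, and an assumed convex polytropic gas law p = K*rho^gamma:

```python
import numpy as np

def isentropic_step(rho, m, dx, dt, K=1.0, gamma=1.4):
    """One conservative finite-volume step for 1-D isentropic gas dynamics:
    rho_t + m_x = 0,  m_t + (m^2/rho + p)_x = 0,  p = K*rho^gamma.
    Uses a Rusanov flux (a stand-in for a linearised Riemann solver);
    the two boundary cells are held fixed."""
    def flux(r, q):
        u = q / r
        p = K * r ** gamma
        return np.array([q, q * u + p])

    def wavespeed(r, q):
        c = np.sqrt(gamma * K * r ** (gamma - 1))  # sound speed of the convex gas law
        return np.abs(q / r) + c

    U = np.array([rho, m])
    FL = flux(rho[:-1], m[:-1])
    FR = flux(rho[1:], m[1:])
    a = np.maximum(wavespeed(rho[:-1], m[:-1]), wavespeed(rho[1:], m[1:]))
    # interface fluxes: centred average plus local dissipation
    F = 0.5 * (FL + FR) - 0.5 * a * (U[:, 1:] - U[:, :-1])
    Un = U.copy()
    Un[:, 1:-1] -= dt / dx * (F[:, 1:] - F[:, :-1])
    return Un[0], Un[1]
```

Because the update is in conservation form, total mass is preserved exactly until waves reach the boundary, which is a useful sanity check when experimenting with the flux.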
Abstract:
Let $A$ be an infinite Toeplitz matrix with a real symbol $f$ defined on $[-\pi, \pi]$. It is well known that the sequence of spectra of finite truncations $A_N$ of $A$ converges to the convex hull of the range of $f$. Recently, Levitin and Shargorodsky, on the basis of some numerical experiments, conjectured, for symbols $f$ with two discontinuities located at rational multiples of $\pi$, that the eigenvalues of $A_N$ located in the gap of $f$ asymptotically exhibit periodicity in $N$, and suggested a formula for the period as a function of the position of discontinuities. In this paper, we quantify and prove the analog of this conjecture for the matrix $A^2$ in a particular case when $f$ is a piecewise constant function taking values $-1$ and $1$.
Abstract:
We consider a quantity κ(Ω)—the distance to the origin from the null variety of the Fourier transform of the characteristic function of Ω. We conjecture, firstly, that κ(Ω) is maximised, among all convex balanced domains of a fixed volume, by a ball, and also that κ(Ω) is bounded above by the square root of the second Dirichlet eigenvalue of Ω. We prove some weaker versions of these conjectures in dimension two, as well as their validity for domains asymptotically close to a disk, and also discuss further links between κ(Ω) and the eigenvalues of the Laplacians.
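Both conjectured quantities can be computed explicitly for a disk, where they coincide. For the unit disk, the Fourier transform of the characteristic function is radial, $\hat{\chi}(\xi) = 2\pi J_1(|\xi|)/|\xi|$, so $\kappa$(disk) is the first zero $j_{1,1}$ of $J_1$; the second Dirichlet eigenvalue of the unit disk is $j_{1,1}^2$, so the conjectured bound $\kappa \le \sqrt{\lambda_2}$ holds with equality. A short numerical cross-check (our own illustration):

```python
import numpy as np
from scipy.special import j0, j1, jn_zeros
from scipy.integrate import quad

# kappa for the unit disk: the Fourier transform of the characteristic
# function is radial, F(k) = 2*pi*J1(k)/k, so kappa is the first zero of J1.
kappa = jn_zeros(1, 1)[0]  # j_{1,1} ~ 3.8317

# cross-check by direct radial quadrature:
# integral_0^1 J0(kappa*r) r dr = J1(kappa)/kappa, which must vanish
val, _ = quad(lambda r: j0(kappa * r) * r, 0.0, 1.0)

# sqrt of the second Dirichlet eigenvalue of the unit disk is also j_{1,1},
# so kappa = sqrt(lambda_2) exactly in this case
sqrt_lambda2 = jn_zeros(1, 1)[0]
```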
Abstract:
In recent years nonpolynomial finite element methods have received increasing attention for the efficient solution of wave problems. As with their close cousin the method of particular solutions, high efficiency comes from using solutions to the Helmholtz equation as basis functions. We present and analyze such a method for the scattering of two-dimensional scalar waves from a polygonal domain that achieves exponential convergence purely by increasing the number of basis functions in each element. Key ingredients are the use of basis functions that capture the singularities at corners and the representation of the scattered field towards infinity by a combination of fundamental solutions. The solution is obtained by minimizing a least-squares functional, which we discretize in such a way that a matrix least-squares problem is obtained. We give computable exponential bounds on the rate of convergence of the least-squares functional that are in very good agreement with the observed numerical convergence. Challenging numerical examples, including a nonconvex polygon with several corner singularities, and a cavity domain, are solved to around 10 digits of accuracy with a few seconds of CPU time. The examples are implemented concisely with MPSpack, a MATLAB toolbox for wave computations with nonpolynomial basis functions, developed by the authors. A code example is included.
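The least-squares mechanics behind such methods can be illustrated in a few lines with the closely related method of fundamental solutions, here on an interior Helmholtz problem (our own sketch, not MPSpack and not the paper's scattering setup): basis functions $H_0^{(1)}(k|x - y_j|)$ with source points $y_j$ outside the domain, coefficients found by boundary least squares.

```python
import numpy as np
from scipy.special import hankel1

def mfs_helmholtz_square(k=5.0, n_src=60, n_col=200):
    """Method-of-fundamental-solutions sketch on the unit square.

    Approximates a known Helmholtz solution (a plane wave) by a least-squares
    combination of fundamental solutions H0^(1)(k|x - y_j|), with sources y_j
    on a circle enclosing the square; returns the error at an interior point."""
    t = np.linspace(0, 1, n_col // 4, endpoint=False)
    bottom = np.column_stack([t, 0 * t])
    right = np.column_stack([0 * t + 1, t])
    top = np.column_stack([1 - t, 0 * t + 1])
    left = np.column_stack([0 * t, 1 - t])
    bdry = np.vstack([bottom, right, top, left])
    th = np.linspace(0, 2 * np.pi, n_src, endpoint=False)
    src = 0.5 + 1.5 * np.column_stack([np.cos(th), np.sin(th)])  # outside the square
    d = np.array([np.cos(0.3), np.sin(0.3)])
    u_exact = lambda p: np.exp(1j * k * (p @ d))  # plane wave solves Helmholtz
    A = hankel1(0, k * np.linalg.norm(bdry[:, None, :] - src[None, :, :], axis=2))
    c, *_ = np.linalg.lstsq(A, u_exact(bdry), rcond=None)
    p = np.array([[0.37, 0.62]])
    u_mfs = hankel1(0, k * np.linalg.norm(p[:, None, :] - src[None, :, :], axis=2)) @ c
    return abs(u_mfs[0] - u_exact(p)[0])
```

As in the paper, accuracy improves exponentially with the number of basis functions; the severe ill-conditioning of the basis matrix is handled here by the truncated-SVD behaviour of `lstsq`.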
Abstract:
This paper introduces a new fast, effective and practical model structure construction algorithm for a mixture of experts network system utilising only process data. The algorithm is based on a novel forward constrained regression procedure. Given a full set of the experts as potential model bases, the structure construction algorithm, formed on the forward constrained regression procedure, selects the most significant model base one by one so as to minimise the overall system approximation error at each iteration, while the gate parameters in the mixture of experts network system are accordingly adjusted so as to satisfy the convex constraints required in the derivation of the forward constrained regression procedure. The procedure continues until a proper system model is constructed that utilises some or all of the experts. A pruning algorithm of the consequent mixture of experts network system is also derived to generate an overall parsimonious construction algorithm. Numerical examples are provided to demonstrate the effectiveness of the new algorithms. The mixture of experts network framework can be applied to a wide variety of applications ranging from multiple model controller synthesis to multi-sensor data fusion.
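The flavour of forward selection under a convexity constraint can be sketched generically (our own Frank-Wolfe-style stand-in, not the paper's forward constrained regression procedure): starting from the single best expert, repeatedly blend in one more expert with a step size chosen by exact line search, which keeps the gate weights nonnegative and summing to one.

```python
import numpy as np

def forward_convex_selection(F, y, n_steps=10):
    """Greedy forward selection under a simplex (convex-combination) constraint.

    F: (n_samples, n_experts) matrix of expert outputs; y: target vector.
    Each step replaces w by (1 - a)*w + a*e_j for the expert j and step a in
    [0, 1] giving the largest decrease in squared error, so w stays convex."""
    n, m = F.shape
    w = np.zeros(m)
    w[np.argmin(((F - y[:, None]) ** 2).sum(axis=0))] = 1.0  # best single expert
    for _ in range(n_steps):
        yh = F @ w
        r = y - yh
        best, best_gain, best_a = None, 0.0, 0.0
        for j in range(m):
            d = F[:, j] - yh
            dd = d @ d
            if dd == 0.0:
                continue
            a = np.clip((r @ d) / dd, 0.0, 1.0)  # exact line search, clipped
            gain = 2 * a * (r @ d) - a * a * dd  # decrease in squared error
            if gain > best_gain:
                best, best_gain, best_a = j, gain, a
        if best is None:
            break
        w *= 1.0 - best_a
        w[best] += best_a
    return w
```

When the target is itself a convex combination of two experts, the procedure recovers the exact weights in one blending step.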
Abstract:
Neurofuzzy modelling systems combine fuzzy logic with quantitative artificial neural networks via a concept of fuzzification by using a fuzzy membership function usually based on B-splines and algebraic operators for inference, etc. The paper introduces a neurofuzzy model construction algorithm using Bezier-Bernstein polynomial functions as basis functions. The new network maintains most of the properties of the B-spline expansion based neurofuzzy system, such as the non-negativity of the basis functions, and unity of support but with the additional advantages of structural parsimony and Delaunay input space partitioning, avoiding the inherent computational problems of lattice networks. This new modelling network is based on the idea that an input vector can be mapped into barycentric co-ordinates with respect to a set of predetermined knots as vertices of a polygon (a set of tiled Delaunay triangles) over the input space. The network is expressed as the Bezier-Bernstein polynomial function of barycentric co-ordinates of the input vector. An inverse de Casteljau procedure using backpropagation is developed to obtain the input vector's barycentric co-ordinates that form the basis functions. Extension of the Bezier-Bernstein neurofuzzy algorithm to n-dimensional inputs is discussed followed by numerical examples to demonstrate the effectiveness of this new data based modelling approach.
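The two geometric ingredients above, mapping an input into barycentric coordinates and evaluating a Bernstein basis on them, can be sketched as follows (an illustration of the standard constructions, not the paper's network code):

```python
import numpy as np
from math import factorial

def barycentric(p, tri):
    """Barycentric coordinates of 2-D point p w.r.t. triangle vertices tri (3x2).

    Solves the 2x2 linear system expressing p as a convex/affine combination
    of the vertices; coordinates sum to 1 by construction."""
    T = np.column_stack([tri[0] - tri[2], tri[1] - tri[2]])
    l = np.linalg.solve(T, p - tri[2])
    return np.array([l[0], l[1], 1.0 - l[0] - l[1]])

def bernstein_basis(lam, n=2):
    """Bivariate Bernstein basis of degree n at barycentric coords lam = (u,v,w):
    B_{ijk} = n!/(i! j! k!) * u^i v^j w^k over all i+j+k = n.

    Inside the triangle the basis functions are nonnegative and sum to one,
    the properties the neurofuzzy construction relies on."""
    u, v, w = lam
    vals = []
    for i in range(n + 1):
        for j in range(n + 1 - i):
            k = n - i - j
            c = factorial(n) // (factorial(i) * factorial(j) * factorial(k))
            vals.append(c * u ** i * v ** j * w ** k)
    return np.array(vals)
```

By the multinomial theorem the basis values sum to $(u + v + w)^n = 1$, i.e. they form a partition of unity over each Delaunay triangle.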
Abstract:
Associative memory networks such as Radial Basis Functions, Neurofuzzy and Fuzzy Logic used for modelling nonlinear processes suffer from the curse of dimensionality (COD), in that as the input dimension increases the parameterization, computation cost, training data requirements, etc. increase exponentially. Here a new algorithm is introduced for the construction of Delaunay input-space-partitioned optimal piecewise locally linear models, to overcome the COD as well as to generate locally linear models directly amenable to linear control and estimation algorithms. The training of the model is configured as a new mixture of experts network with a new fast decision rule derived using convex set theory. A very fast simulated reannealing (VFSR) algorithm is utilized to search for a global optimal solution of the Delaunay input space partition. A benchmark non-linear time series is used to demonstrate the new approach.
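The basic prediction machinery of a Delaunay-partitioned piecewise linear model is compact (our own sketch of the standard construction, not the paper's VFSR-trained system): locate the simplex containing a query point, which is exactly a convex-set membership test, then interpolate linearly with barycentric weights.

```python
import numpy as np
from scipy.spatial import Delaunay

def piecewise_linear_predict(pts, vals, query):
    """Piecewise locally linear model on a Delaunay partition of the input space.

    pts: (n, 2) training inputs; vals: (n,) values at those inputs;
    query: (q, 2) points to predict. For each query point, find the containing
    simplex (a convex-set decision rule) and interpolate with barycentric
    coordinates; points outside the convex hull get NaN."""
    tri = Delaunay(pts)
    s = tri.find_simplex(query)
    out = np.full(len(query), np.nan)
    ok = s >= 0
    T = tri.transform[s[ok]]  # affine maps to barycentric coordinates
    b = np.einsum('nij,nj->ni', T[:, :2, :], query[ok] - T[:, 2, :])
    lam = np.column_stack([b, 1.0 - b.sum(axis=1)])
    out[ok] = np.einsum('ni,ni->n', lam, vals[tri.simplices[s[ok]]])
    return out
```

Because each local model is linear and the partition pieces are simplices, any globally linear target is reproduced exactly, which is a convenient correctness check.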
Abstract:
Models which define fitness in terms of per capita rate of increase of phenotypes are used to analyse patterns of individual growth. It is shown that sigmoid growth curves are an optimal strategy (i.e. maximize fitness) if (Assumption 1a) mortality decreases with body size; (Assumption 2a) mortality is a convex function of specific growth rate, viewed from above; (Assumption 3) there is a constraint on growth rate, which is attained in the first phase of growth. If the constraint is not attained then size should increase at a progressively reducing rate. These predictions are biologically plausible. Catch-up growth, for retarded individuals, is generally not an optimal strategy though in special cases (e.g. seasonal breeding) it might be. Growth may be advantageous after first breeding if birth rate is a convex function of G (the fraction of production devoted to growth) viewed from above (Assumption 5a), or if mortality rate is a convex function of G, viewed from above (Assumption 6c). If Assumptions 5a and 6c are both false, growth should cease at the age of first reproduction. These predictions could be used to evaluate the incidence of indeterminate versus determinate growth in the animal kingdom though the data currently available do not allow quantitative tests. In animals with invariant adult size a method is given which allows one to calculate whether an increase in body size is favoured given that fecundity and developmental time are thereby increased.