17 results for Set theory.
in CentAUR: Central Archive at the University of Reading - UK
Abstract:
Associative memory networks such as Radial Basis Function, neurofuzzy and fuzzy logic networks, used for modelling nonlinear processes, suffer from the curse of dimensionality (COD): as the input dimension increases, the parameterisation, computation cost, training data requirements, etc. increase exponentially. Here a new algorithm is introduced for the construction of Delaunay input space partitioned optimal piecewise locally linear models, to overcome the COD and to generate locally linear models directly amenable to linear control and estimation algorithms. The training of the model is configured as a new mixture of experts network with a new fast decision rule derived using convex set theory. A very fast simulated reannealing (VFSR) algorithm is utilised to search for a globally optimal solution of the Delaunay input space partition. A benchmark nonlinear time series is used to demonstrate the new approach.
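The routing step at the heart of such a model can be made concrete with a minimal sketch (not the authors' implementation: the vertex set, the per-cell linear models, and the use of SciPy's Delaunay/find_simplex are assumptions here; in the paper the partition itself is optimised by VFSR):

```python
# Minimal sketch: piecewise locally linear prediction over a Delaunay
# partition of the input space. Vertices and per-cell linear models are
# placeholders; the paper optimises the partition with VFSR.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
vertices = rng.uniform(-1, 1, size=(12, 2))   # candidate partition vertices
tri = Delaunay(vertices)                      # Delaunay input-space partition

n_cells = tri.simplices.shape[0]
W = rng.normal(size=(n_cells, 2))             # local linear weights, one row per cell
b = rng.normal(size=n_cells)                  # local biases, one per cell

def predict(x):
    """Route x to its Delaunay cell (the decision rule), then apply
    that cell's locally linear model."""
    cell = tri.find_simplex(x)                # convex-set membership test
    if cell < 0:
        raise ValueError("query point lies outside the partitioned region")
    return W[cell] @ x + b[cell]

print(predict(np.array([0.1, -0.2])))
```

Because each Delaunay cell is a simplex, and hence a convex set, the routing decision is an exact convex-set membership test, which is what makes such a rule fast.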
Abstract:
A new autonomous ship collision-free (ASCF) trajectory navigation and control system is introduced, with a new recursive navigation algorithm based on analytic geometry and convex set theory for collision-free ship guidance. The underlying assumption is that geometric information about the ship's environment is available in the form of a polygon-shaped free space, which may easily be generated from a 2D image or from plots relating to physical hazards or other constraints such as collision avoidance regulations. The navigation command is given as a heading command sequence based on generating a way point that falls within a small neighbourhood of the current position, and the sequence of way points along the trajectory is guaranteed to lie within a bounded obstacle-free region using convex set theory. A neurofuzzy network predictor, which in practice uses only observed input/output data generated by on-board sensors or external sensors (or a sensor fusion algorithm), based on using the rudder deflection angle to control the ship heading angle, is utilised in a simulation of an ESSO 190000 dwt tanker model to demonstrate the effectiveness of the system.
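The convex-set guarantee in the abstract reduces, for each candidate way point, to a membership test against a convex polygon of free space. A minimal sketch follows (the polygon, the ship state, and the small-neighbourhood step are illustrative, not the paper's navigation algorithm):

```python
# Minimal sketch: accept a candidate way point only if it lies inside a
# convex polygon of obstacle-free space, via half-plane (cross-product)
# tests. All geometry below is illustrative.
import numpy as np

def inside_convex_polygon(p, poly):
    """poly: (n, 2) vertices in counter-clockwise order.
    p is inside iff it lies left of (or on) every directed edge."""
    n = len(poly)
    for i in range(n):
        a, b = poly[i], poly[(i + 1) % n]
        edge, to_p = b - a, p - a
        if edge[0] * to_p[1] - edge[1] * to_p[0] < 0:  # z of cross product
            return False
    return True

free_space = np.array([[0, 0], [10, 0], [12, 6], [5, 9], [-1, 5]], float)
ship = np.array([4.0, 3.0])
candidate = ship + np.array([1.0, 0.5])   # step within a small neighbourhood
print(inside_convex_polygon(candidate, free_space))
```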
Abstract:
The perturbed Hartree–Fock theory developed in the preceding paper is applied to LiH, BH, and HF, using limited basis-set SCF–MO wavefunctions derived by previous workers. The calculated values for the force constant ke and the dipole-moment derivative μ(1) are (experimental values in parentheses): LiH, ke = 1.618 (1.026) mdyn/Å, μ(1) = −18.77 (−2.0 ± 0.3) D/Å; BH, ke = 5.199 (3.032) mdyn/Å, μ(1) = −1.03 (−) D/Å; HF, ke = 12.90 (9.651) mdyn/Å, μ(1) = −2.15 (+1.50) D/Å. The values of the force on the proton were calculated exactly and according to the Hellmann–Feynman theorem in each case, and the discrepancies show that none of the wavefunctions used is close to the Hartree–Fock limit, so the large errors in ke and μ(1) are not surprising. However, no difficulties arose in the perturbed Hartree–Fock calculation, so the application of the theory to more accurate wavefunctions appears quite feasible.
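For reference, the tabulated quantities are the standard derivatives at the equilibrium bond length Re, and the Hellmann–Feynman force used in the comparison has the usual form (standard definitions, not text from the paper):

```latex
k_e=\left.\frac{\partial^{2}E}{\partial R^{2}}\right|_{R=R_e},\qquad
\mu^{(1)}=\left.\frac{\partial\mu}{\partial R}\right|_{R=R_e},\qquad
F_A^{\mathrm{HF}}=-\left\langle\Psi\,\middle|\,\frac{\partial\hat{H}}{\partial R_A}\,\middle|\,\Psi\right\rangle
```

The exact force is the negative derivative of the total energy; the two expressions agree only at the Hartree–Fock limit, which is why their discrepancy is a useful diagnostic of wavefunction quality.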
Abstract:
The theory of harmonic force constant refinement calculations is reviewed, and a general-purpose program for force constant and normal coordinate calculations is described. The program, called ASYM20, is available through the Quantum Chemistry Program Exchange. It will work on molecules of any symmetry containing up to 20 atoms and will produce results on a series of isotopomers as desired. The vibrational secular equations are solved in either nonredundant valence internal coordinates or symmetry coordinates. As well as calculating the (harmonic) vibrational wavenumbers and normal coordinates, the program will calculate centrifugal distortion constants, Coriolis zeta constants, harmonic contributions to the α's, root-mean-square amplitudes of vibration, and other quantities related to gas electron-diffraction studies and thermodynamic properties. The program will work in either a predict mode, in which it calculates results from an input force field, or a refine mode, in which it refines an input force field by least squares to fit observed data on the quantities mentioned above. Predicate values of the force constants may be included in the data set for a least-squares refinement. The program is written in FORTRAN for use on a PC or a mainframe computer. Operation is mainly controlled by steering indices in the input data file, but some interactive control is also implemented.
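The secular problem underlying such programs can be sketched numerically in a few lines (hypothetical 2x2 G and F matrices, not ASYM20's input format; the conversion factor is the customary approximate value for these units):

```python
# Minimal sketch of the harmonic vibrational secular problem: the
# wavenumbers follow from the eigenvalues of the product G F
# (Wilson GF method). G and F below are hypothetical 2x2 matrices
# in internal coordinates.
import numpy as np

G = np.array([[1.05, 0.10],      # inverse kinetic-energy matrix (amu^-1)
              [0.10, 2.10]])
F = np.array([[5.50, 0.30],      # force-constant matrix (mdyn/Angstrom)
              [0.30, 0.75]])

lam = np.linalg.eigvals(G @ F).real   # lambda = (2*pi*c*nu)^2 in these units
nu = 1302.8 * np.sqrt(lam)            # approx. conversion to cm^-1 for mdyn/A, amu
print(np.sort(nu))
```

In refine mode a program of this kind would instead adjust F by least squares until the predicted wavenumbers (and the other observables listed above) match the data.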
Abstract:
1. The management of threatened species is an important practical way in which conservationists can intervene in the extinction process and reduce the loss of biodiversity. Understanding the causes of population declines (past, present and future) is pivotal to designing effective practical management. This is the declining-population paradigm identified by Caughley. 2. There are three broad classes of ecological tool used by conservationists to guide management decisions for threatened species: statistical models of habitat use, demographic models and behaviour-based models. Each of these is described here, illustrated with a case study and evaluated critically in terms of its practical application. 3. These tools are fundamentally different. Statistical models of habitat use and demographic models both use descriptions of patterns in abundance and demography, in relation to a range of factors, to inform management decisions. In contrast, behaviour-based models describe the evolutionary processes underlying these patterns, and derive such patterns from the strategies employed by individuals when competing for resources under a specific set of environmental conditions. 4. Statistical models of habitat use and demographic models have been used successfully to make management recommendations for declining populations. To do this, assumptions are made about population growth or vital rates that will apply when environmental conditions are restored, based on either past data collected under favourable environmental conditions or estimates of these parameters when the agent of decline is removed. As a result, they can only be used to make reliable quantitative predictions about future environments when a comparable environment has been experienced by the population of interest in the past. 5. Many future changes in the environment driven by management will not have been experienced by a population in the past. Under these circumstances, vital rates and their relationship with population density will change in the future in a way that is not predictable from past patterns. Reliable quantitative predictions about population-level responses then need to be based on an explicit consideration of the evolutionary processes operating at the individual level. 6. Synthesis and applications. It is argued that evolutionary theory underpins Caughley's declining-population paradigm, and that it needs to become much more widely used within mainstream conservation biology. This will help conservationists examine critically the reliability of the tools they have traditionally used to aid management decision-making. It will also give them access to alternative tools, particularly when predictions are required for changes in the environment that have not been experienced by a population in the past.
Abstract:
Population subdivision complicates the analysis of molecular variation. Even if neutrality is assumed, three evolutionary forces need to be considered: migration, mutation, and drift. Simplification can be achieved by assuming that the process of migration among, and drift within, subpopulations occurs fast compared to mutation and drift in the entire population. This allows a two-step approach in the analysis: (i) analysis of population subdivision and (ii) analysis of molecular variation in the migrant pool. We model population subdivision using an infinite island model, where we allow the migration/drift parameter Θ to vary among populations. Thus, central and peripheral populations can be differentiated. For inference of Θ, we use a coalescence approach, implemented via a Markov chain Monte Carlo (MCMC) integration method that allows estimation of allele frequencies in the migrant pool. The second step of this approach (analysis of molecular variation in the migrant pool) uses the estimated allele frequencies in the migrant pool for the study of molecular variation. We apply this method to a Drosophila ananassae sequence data set. We find little indication of isolation by distance, but large differences in the migration parameter among populations. The population as a whole seems to be expanding. A population from Bogor (Java, Indonesia) shows the highest variation and seems closest to the species center.
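A toy Metropolis sketch conveys the flavour of MCMC inference of a migration/drift parameter Θ. Everything here is a placeholder: the beta-binomial island-model approximation (Balding-Nichols style, with subpopulation allele frequencies distributed Beta(Θp, Θ(1−p)) around a migrant-pool frequency p), the data, and the flat prior. The paper's coalescent-based likelihood is far more involved.

```python
# Toy Metropolis sampler for Theta under a beta-binomial island-model
# approximation. Data, prior, and model are illustrative only.
import numpy as np
from scipy.stats import betabinom

rng = np.random.default_rng(1)
p = 0.3                                               # migrant-pool frequency (fixed here)
n = np.array([40, 40, 40])                            # alleles sampled per subpopulation
k = np.array([10, 14, 8])                             # derived-allele counts

def log_post(theta):
    """Log posterior under a flat prior on Theta > 0."""
    if theta <= 0:
        return -np.inf
    return betabinom.logpmf(k, n, theta * p, theta * (1 - p)).sum()

theta, samples = 5.0, []
for _ in range(5000):
    prop = theta + rng.normal(0, 1.0)                 # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop                                  # Metropolis accept
    samples.append(theta)

print("posterior mean Theta ~", np.mean(samples[1000:]))
```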
Abstract:
Formal and analytical models that contractors can use to assess and price project risk at the tender stage have proliferated in recent years. However, they are rarely used in practice. Introducing more models would therefore not necessarily help. A better understanding is needed of how contractors arrive at a bid price in practice, and of how, and in what circumstances, risk apportionment actually influences pricing levels. More than 60 risk models proposed for contractors in the published literature were examined and classified. Exploratory interviews with five UK contractors and documentary analyses of how contractors price work generally, and risk specifically, were then carried out to compare the propositions from the literature with what contractors actually do. No comprehensive literature on the real bidding processes used in practice was found, and there is no evidence that pricing is systematic. Hence, systematic risk and pricing models for contractors may have no justifiable basis. Contractors process their bids through certain tendering gateways. They acknowledge the risk that they should price. However, the final settlement depends on a set of complex, microeconomic factors. Hence, risk accountability may be smaller than its true cost to the contractor. Risk apportionment occurs at three stages of the whole bid-pricing process. However, analytical approaches tend not to incorporate this, although they could.
Abstract:
Experimentally and theoretically determined infrared spectra are reported for a series of straight-chain perfluorocarbons: C2F6, C3F8, C4F10, C5F12, C6F14, and C8F18. Theoretical spectra were determined using both density functional (DFT) and ab initio methods. Radiative efficiencies (REs) were determined using the method of Pinnock et al. (1995) and combined with atmospheric lifetimes from the literature to determine global warming potentials (GWPs). Theoretically determined absorption cross sections were within 10% of experimentally determined values. Despite being much less computationally expensive, DFT calculations were generally found to perform better than ab initio methods. There is a strong wavenumber dependence of radiative forcing in the region of the fundamental C-F vibration, and small differences in wavelength between band positions determined by theory and experiment have a significant impact on the REs. We apply an empirical correction to the theoretical spectra and then test this correction on a number of branched chain and cyclic perfluoroalkanes. We then compute absorption cross sections, REs, and GWPs for an additional set of perfluoroalkenes.
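The final bookkeeping step can be sketched in a few lines: a Pinnock-style radiative efficiency is a wavenumber-weighted sum of the absorption cross section against an instantaneous-forcing kernel, and the GWP follows from the ratio of absolute GWPs. The spectrum, kernel, lifetime, and CO2 AGWP below are all placeholders in schematic units, not data from the paper.

```python
# Minimal sketch of RE (after Pinnock et al. 1995) and GWP. Toy numbers.
import numpy as np

wn = np.arange(0.0, 2500.0, 10.0)                         # wavenumber grid, cm^-1
dnu = 10.0
sigma = 1e-18 * np.exp(-0.5 * ((wn - 1250.0) / 40.0)**2)  # toy C-F band cross section

# Forcing per unit cross section per unit wavenumber. A constant is used
# here for brevity; the real kernel is strongly wavenumber dependent,
# which is precisely the paper's point about band-position errors.
kernel = np.full_like(wn, 3.0e15)

RE = np.sum(sigma * kernel * dnu)                         # radiative efficiency

def gwp(re, tau, horizon=100.0, agwp_co2=9.17e-14):
    """GWP = AGWP_x / AGWP_CO2 with AGWP_x = RE * tau * (1 - exp(-H/tau)).
    Assumes RE and agwp_co2 are in consistent units; agwp_co2 is a
    placeholder value."""
    return re * tau * (1.0 - np.exp(-horizon / tau)) / agwp_co2

print(gwp(RE, tau=3100.0))                                # tau: lifetime in years (toy)
```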
Abstract:
Real estate development appraisal is a quantification of future expectations. The appraisal model relies upon the valuer/developer having an understanding of the future in terms of the future marketability of the completed development and the future cost of development. In some cases the developer has some degree of control over the possible variation in the variables, as with the cost of construction through the choice of specification. However, other variables, such as the sale price of the final product, are totally dependent upon the vagaries of the market at the completion date. To address the risk of an outcome different from the one expected (modelled), the developer will often carry out a sensitivity analysis on the development. However, traditional sensitivity analysis has generally looked only at the best and worst scenarios and has focused on the anticipated or expected outcomes. This does not take into account uncertainty and the full range of outcomes that can occur. A fuller analysis should include examination of the uncertainties in each of the components of the appraisal and account for the appropriate distributions of the variables. Similarly, as many of the variables in the model are not independent, the variables need to be correlated. This requires a standardised approach, and we suggest that the use of a generic forecasting software package, in this case Crystal Ball, allows the analyst to work with an existing development appraisal model set up in Excel (or another spreadsheet) and with a predetermined set of probability distributions. Without a full knowledge of risk, developers are unable to determine the anticipated level of return that should be sought to compensate for that risk. This model allows the user a better understanding of the possible outcomes for the development. Ultimately the final decision will be made relative to current expectations and current business constraints, but by assessing the upside and downside risks more appropriately, the decision maker should be better placed to make a more informed and "better" decision.
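A minimal sketch of the correlated Monte Carlo idea, with NumPy standing in for Crystal Ball, follows. The appraisal model, the lognormal distributions, and the 0.6 correlation are all illustrative assumptions.

```python
# Minimal sketch: Monte Carlo over a toy residual development appraisal
# with correlated inputs (sale price and build cost). All figures and
# distributions are illustrative.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Correlate two standard normals via Cholesky, then map to input scales.
corr = np.array([[1.0, 0.6],
                 [0.6, 1.0]])
z = rng.standard_normal((n, 2)) @ np.linalg.cholesky(corr).T

sale_price = 12_000_000 * np.exp(0.15 * z[:, 0])   # lognormal gross development value
build_cost = 7_000_000 * np.exp(0.10 * z[:, 1])    # lognormal construction cost
land_and_finance = 3_000_000                        # held fixed for simplicity

profit = sale_price - build_cost - land_and_finance
print(f"mean profit: {profit.mean():,.0f}")
print(f"P(loss): {np.mean(profit < 0):.1%}")
print(f"5th/95th percentiles: {np.percentile(profit, [5, 95]).round(0)}")
```

The output distribution, rather than a best/worst pair of scenarios, is what lets the decision maker weigh upside against downside risk.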
Abstract:
We consider the general response theory recently proposed by Ruelle for describing the impact of small perturbations on the non-equilibrium steady states resulting from Axiom A dynamical systems. We show that the causality of the response functions entails the possibility of writing a set of Kramers-Kronig (K-K) relations for the corresponding susceptibilities at all orders of nonlinearity. Nonetheless, only a special class of directly observable susceptibilities obey K-K relations. Specific results are provided for the case of arbitrary-order harmonic response, which allows for a very comprehensive K-K analysis and for the establishment of sum rules connecting the asymptotic behavior of the harmonic generation susceptibility to the short-time response of the perturbed system. These results place in a more general theoretical framework previous findings obtained for optical systems and simple mechanical models, and shed light on the very general impact of considering the principle of causality for testing self-consistency: the described dispersion relations constitute unavoidable benchmarks that any experimental and model-generated dataset must obey. The theory exposed in the present paper is dual to the time-dependent theory of perturbations to equilibrium states and to non-equilibrium steady states, and has in principle a similar range of applicability and similar limitations. In order to connect the equilibrium and the non-equilibrium steady state cases, we show how to rewrite the classical response theory by Kubo so that response functions formally identical to those proposed by Ruelle, apart from the measure involved in the phase space integration, are obtained. These results, taking into account the chaotic hypothesis by Gallavotti and Cohen, might be relevant in several fields, including climate research. In particular, whereas the fluctuation-dissipation theorem does not work for non-equilibrium systems, because of the non-equivalence between internal and external fluctuations, K-K relations might be robust tools for the definition of a self-consistent theory of climate change.
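For orientation, the first-order (linear) form of the dispersion relations that the paper generalises to arbitrary order reads, for a susceptibility χ(ω) that causality makes analytic in the upper half of the complex ω plane (standard textbook form, not taken from the paper):

```latex
\operatorname{Re}\chi(\omega)=\frac{2}{\pi}\,\mathcal{P}\!\int_{0}^{\infty}
\frac{\omega'\,\operatorname{Im}\chi(\omega')}{\omega'^{2}-\omega^{2}}\,\mathrm{d}\omega',
\qquad
\operatorname{Im}\chi(\omega)=-\frac{2}{\pi}\,\mathcal{P}\!\int_{0}^{\infty}
\frac{\omega\,\operatorname{Re}\chi(\omega')}{\omega'^{2}-\omega^{2}}\,\mathrm{d}\omega'
```

Real and imaginary parts determine each other, which is why measured or simulated susceptibilities that violate these relations cannot be self-consistent.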
Abstract:
In the first half of this memoir we explore the interrelationships between the abstract theory of limit operators (see e.g. the recent monographs of Rabinovich, Roch and Silbermann (2004) and Lindner (2006)) and the concepts and results of the generalised collectively compact operator theory introduced by Chandler-Wilde and Zhang (2002). We build up to results obtained by applying this generalised collectively compact operator theory to the set of limit operators of an operator (its operator spectrum). In the second half of this memoir we study bounded linear operators on the generalised sequence space ℓp(Z^N, U), where 1 ≤ p ≤ ∞ and U is some complex Banach space. We make what seems to be a more complete study than hitherto of the connections between Fredholmness, invertibility, invertibility at infinity, and invertibility or injectivity of the set of limit operators, with some emphasis on the case when the operator is a locally compact perturbation of the identity. Especially, we obtain stronger results than previously known for the subtle limiting cases of p = 1 and p = ∞. Our tools in this study are the results from the first half of the memoir and an exploitation of the partial duality between ℓ1 and ℓ∞ and its implications for bounded linear operators which are also continuous with respect to the weaker topology (the strict topology) introduced in the first half of the memoir. Results in this second half of the memoir include a new proof that injectivity of all limit operators (the classic Favard condition) implies invertibility for a general class of almost periodic operators, and characterisations of invertibility at infinity and Fredholmness for operators in the so-called Wiener algebra. In two final chapters our results are illustrated by and applied to concrete examples. Firstly, we study the spectra and essential spectra of discrete Schrödinger operators (both self-adjoint and non-self-adjoint), including operators with almost periodic and random potentials. In the final chapter we apply our results to integral operators on Lp(R).
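Schematically, the Favard-type result mentioned above has the following shape (the notation is assumed here, not quoted from the memoir): writing σ_op(A) for the operator spectrum of A, and taking A in a suitable class of almost periodic operators,

```latex
A_h \ \text{injective for every } A_h \in \sigma^{\mathrm{op}}(A)
\quad\Longrightarrow\quad
A \ \text{invertible}.
```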
Abstract:
Straightforward mathematical techniques are used innovatively to form a coherent theoretical system for dealing with chemical equilibrium problems. For a systematic theory it is necessary to establish a framework connecting different concepts. This paper shows the usefulness and consistency of the system through applications of the theorems introduced previously. Some theorems are shown, somewhat unexpectedly, to be mathematically correlated, and relationships are obtained in a coherent manner. It is shown that theorem 1 plays an important part in interconnecting most of the theorems. The usefulness of theorem 2 is illustrated by proving it to be consistent with theorem 3. A set of uniform mathematical expressions is associated with theorem 3. A variety of mathematical techniques based on theorems 1–3 are shown to establish the direction of equilibrium shift. The equilibrium properties expressed in initial and equilibrium conditions are shown to be connected via theorem 5. Theorem 6 is connected with theorem 4 through the mathematical representation of theorem 1.
Abstract:
In this paper a generalization of collectively compact operator theory in Banach spaces is developed. A feature of the new theory is that the operators involved are no longer required to be compact in the norm topology. Instead, it is required that the image of a bounded set under the operator family be sequentially compact in a weaker topology. As an application, the theory developed is used to establish solvability results for a class of systems of second-kind integral equations on unbounded domains, this class including in particular systems of Wiener-Hopf integral equations with L1 convolution kernels.
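Concretely, a system of second-kind Wiener-Hopf integral equations of the class covered has the schematic form (notation assumed for illustration):

```latex
\psi_i(s)-\sum_{j=1}^{m}\int_{0}^{\infty}\kappa_{ij}(s-t)\,\psi_j(t)\,\mathrm{d}t=f_i(s),
\qquad s\ge 0,\quad i=1,\dots,m,
\qquad \kappa_{ij}\in L^{1}(\mathbb{R}).
```

The half-line domain is what makes these equations a natural test case for a compactness theory that does not rely on norm compactness.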
Abstract:
There exists a well-developed body of theory based on quasi-geostrophic (QG) dynamics that is central to our present understanding of large-scale atmospheric and oceanic dynamics. An important question is the extent to which this body of theory may generalize to more accurate dynamical models. As a first step in this process, we here generalize a set of theoretical results, concerning the evolution of disturbances to prescribed basic states, to semi-geostrophic (SG) dynamics. SG dynamics, like QG dynamics, is a Hamiltonian balanced model whose evolution is described by the material conservation of potential vorticity, together with an invertibility principle relating the potential vorticity to the advecting fields. SG dynamics has features that make it a good prototype for balanced models that are more accurate than QG dynamics. In the first part of this two-part study, we derive a pseudomomentum invariant for the SG equations, and use it to obtain: (i) linear and nonlinear generalized Charney–Stern theorems for disturbances to parallel flows; (ii) a finite-amplitude local conservation law for the invariant, obeying the group-velocity property in the WKB limit; and (iii) a wave-mean-flow interaction theorem consisting of generalized Eliassen–Palm flux diagnostics, an elliptic equation for the stream-function tendency, and a non-acceleration theorem. All these results are analogous to their QG forms. The pseudomomentum invariant – a conserved second-order disturbance quantity that is associated with zonal symmetry – is constructed using a variational principle in a similar manner to the QG calculations. Such an approach is possible when the equations of motion under the geostrophic momentum approximation are transformed to isentropic and geostrophic coordinates, in which the ageostrophic advection terms are no longer explicit. Symmetry-related wave-activity invariants such as the pseudomomentum then arise naturally from the Hamiltonian structure of the SG equations. We avoid use of the so-called ‘massless layer’ approach to the modelling of isentropic gradients at the lower boundary, preferring instead to incorporate explicitly those boundary contributions into the wave-activity and stability results. This makes the analogy with QG dynamics most transparent. This paper treats the f-plane Boussinesq form of SG dynamics, and its recent extension to β-plane, compressible flow by Magnusdottir & Schubert. In the limit of small Rossby number, the results reduce to their respective QG forms. Novel features particular to SG dynamics include apparently unnoticed lateral boundary stability criteria in (i), and the necessity of including additional zonal-mean eddy correlation terms besides the zonal-mean potential vorticity fluxes in the wave-mean-flow balance in (iii). In the companion paper, wave-activity conservation laws and stability theorems based on the SG form of the pseudoenergy are presented.
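In generic wave-activity notation (schematic only; the paper derives the SG-specific forms), the pseudomomentum results in (ii) and (iii) assert a local conservation law for a second-order disturbance density A with flux F:

```latex
\frac{\partial A}{\partial t}+\nabla\cdot\mathbf{F}=0,
\qquad
\mathbf{F}\to \mathbf{c}_g\,A \ \ \text{in the WKB limit},
```

so that for steady, conservative disturbances the flux divergence vanishes and the zonal-mean flow is unaccelerated, which is the content of a non-acceleration theorem.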