959 results for analytic semigroup
Abstract:
This paper tackles the problem of computing smooth, optimal trajectories on the Euclidean group of motions SE(3). The problem is formulated as an optimal control problem where the cost function to be minimized is equal to the integral of the classical curvature squared. This problem is analogous to the elastic problem from differential geometry, and thus the resulting rigid body motions will trace elastic curves. An application of the Maximum Principle to this optimal control problem shifts the emphasis to the language of symplectic geometry and to the associated Hamiltonian formalism. This results in a system of first order differential equations that yield coordinate-free necessary conditions for optimality for these curves. From these necessary conditions we identify an integrable case, and this particular set of curves is solved analytically. These analytic solutions provide interpolating curves between a given initial position and orientation and a desired position and orientation, which would be useful in motion planning for systems such as robotic manipulators and autonomous vehicles.
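As a sketch of the variational setup the abstract describes (generic notation, not necessarily the paper's symbols), the cost is the integrated squared curvature of a motion g(t) with fixed endpoint poses:

```latex
J(g) \;=\; \tfrac{1}{2}\int_{0}^{T}\kappa(t)^{2}\,dt ,
\qquad g(0)=g_{0},\quad g(T)=g_{T},\quad g(t)\in SE(3),
```

and the Maximum Principle replaces direct minimization of J by a Hamiltonian system of first-order equations on the cotangent bundle T*SE(3), whose extremals are the elastic curves.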
Abstract:
Many kernel classifier construction algorithms adopt classification accuracy as the performance metric in model evaluation. Moreover, equal weighting is often applied to each data sample in parameter estimation. These modeling practices often become problematic if the data sets are imbalanced. We present a kernel classifier construction algorithm using orthogonal forward selection (OFS) in order to optimize the model generalization for imbalanced two-class data sets. This kernel classifier identification algorithm is based on a new regularized orthogonal weighted least squares (ROWLS) estimator and the model selection criterion of maximal leave-one-out area under the curve (LOO-AUC) of the receiver operating characteristic (ROC). It is shown that, owing to the orthogonalization procedure, the LOO-AUC can be calculated via an analytic formula based on the new ROWLS parameter estimator, without actually splitting the estimation data set. The proposed algorithm can achieve minimal computational expense via a set of forward recursive updating formulae when searching for the model terms with maximal incremental LOO-AUC value. Numerical examples are used to demonstrate the efficacy of the algorithm.
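The LOO-AUC formula is specific to the paper's ROWLS recursions, but the AUC criterion itself can be computed for any scored two-class sample as the Mann-Whitney statistic; a minimal generic sketch (function name and data are illustrative, not from the paper):

```python
import numpy as np

def auc_mann_whitney(scores_pos, scores_neg):
    """Area under the ROC curve as the Mann-Whitney statistic: the fraction
    of (positive, negative) pairs ranked correctly, ties counting one half."""
    s_pos = np.asarray(scores_pos, dtype=float)[:, None]
    s_neg = np.asarray(scores_neg, dtype=float)[None, :]
    wins = (s_pos > s_neg).sum() + 0.5 * (s_pos == s_neg).sum()
    return wins / (s_pos.size * s_neg.size)
```

Equivalently, AUC is the probability that a randomly chosen positive receives a higher score than a randomly chosen negative, which is why it is insensitive to class imbalance in a way plain accuracy is not.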
Abstract:
We propose a simple and computationally efficient construction algorithm for two-class linear-in-the-parameters classifiers. In order to optimize model generalization, an orthogonal forward selection (OFS) procedure is used to minimize the leave-one-out (LOO) misclassification rate directly. An analytic formula and a set of forward recursive updating formulae for the LOO misclassification rate are developed and applied in the proposed algorithm. Numerical examples are used to demonstrate that the proposed algorithm is an excellent alternative approach for constructing sparse two-class classifiers in terms of performance and computational efficiency.
Abstract:
A fundamental principle in practical nonlinear data modeling is the parsimonious principle of constructing the minimal model that explains the training data well. Leave-one-out (LOO) cross validation is often used to estimate generalization errors by choosing amongst different network architectures (M. Stone, "Cross-validatory choice and assessment of statistical predictions", J. R. Statist. Soc., Ser. B, 36, pp. 111-147, 1974). Based upon the minimization of LOO criteria, either the mean squares of LOO errors or the LOO misclassification rate respectively, we present two backward elimination algorithms as model post-processing procedures for regression and classification problems. The proposed backward elimination procedures exploit an orthogonalization procedure to ensure orthogonality between the subspace spanned by the pruned model and the deleted regressor. Subsequently, it is shown that the LOO criteria used in both algorithms can be calculated via analytic recursive formulae, as derived in this contribution, without actually splitting the estimation data set, so as to reduce computational expense. Compared to most other model construction methods, the proposed algorithms are advantageous in several aspects: (i) there are no tuning parameters to be optimized through an extra validation data set; (ii) the procedure is fully automatic without an additional stopping criterion; and (iii) the model structure selection is directly based on model generalization performance. Illustrative examples on regression and classification are used to demonstrate that the proposed algorithms are viable post-processing methods to prune a model to gain extra sparsity and improved generalization.
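The paper derives its own recursive formulae; for plain ordinary least squares there is a classical analytic LOO identity in the same spirit, e_i^LOO = e_i / (1 - h_ii), with h_ii the i-th leverage of the hat matrix. A sketch of that standard identity (not the authors' orthogonalized recursion):

```python
import numpy as np

def loo_residuals(X, y):
    """Leave-one-out residuals of ordinary least squares, computed analytically
    as e_i / (1 - h_ii) with leverages h_ii, so no data splitting or refitting
    is needed (the classical hat-matrix identity)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ beta                              # ordinary residuals
    G = np.linalg.inv(X.T @ X)
    h = np.einsum('ij,jk,ik->i', X, G, X)         # h_ii = x_i^T (X^T X)^{-1} x_i
    return e / (1.0 - h)
```

The n refits that naive LOO would require collapse to one fit plus a diagonal of the hat matrix, which is the kind of saving the abstract's recursive formulae deliver for its orthogonalized models.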
Abstract:
We use a spectral method to solve numerically two nonlocal, nonlinear, dispersive, integrable wave equations, the Benjamin-Ono and the Intermediate Long Wave equations. The proposed numerical method is able to capture well the dynamics of the solutions; we use it to investigate the behaviour of solitary wave solutions of the equations with special attention to those, among the properties usually connected with integrability, for which there is at present no analytic proof. Thus we study in particular the resolution property of arbitrary initial profiles into sequences of solitary waves for both equations and clean interaction of Benjamin-Ono solitary waves. We also verify numerically that the behaviour of the solution of the Intermediate Long Wave equation as the model parameter tends to the infinite depth limit is the one predicted by the theory.
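A minimal illustration of the pseudospectral machinery such solvers rest on: differentiating a periodic function by multiplying its Fourier modes by ik (for Benjamin-Ono the nonlocal dispersive term additionally uses the Hilbert-transform symbol -i sgn(k)). Grid and function names are illustrative, not the authors' scheme:

```python
import numpy as np

def spectral_derivative(u, length):
    """Derivative of a smooth periodic sample u on [0, length): transform to
    Fourier space, multiply each mode by i*k, transform back."""
    n = u.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=length / n)   # angular wavenumbers
    return np.real(np.fft.ifft(1j * k * np.fft.fft(u)))
```

For smooth periodic data the error decays faster than any power of the grid spacing, which is what lets spectral methods track solitary-wave dynamics accurately over long times.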
Abstract:
A numerical mesoscale model is used to make a high-resolution simulation of the marine boundary layer in the Persian Gulf, during conditions of offshore flow from Saudi Arabia. A marine internal boundary layer (MIBL) and a sea-breeze circulation (SBC) are found to co-exist. The sea breeze develops in the mid-afternoon, at which time its front is displaced several tens of kilometres offshore. Between the coast and the sea-breeze system, the MIBL that occurs is consistent with a picture described in the existing literature. However, the MIBL is perturbed by the SBC, the boundary layer deepening significantly seaward of the sea-breeze front. Our analysis suggests that this strong, localized deepening is not a direct consequence of frontal uplift, but rather that the immediate cause is the retardation of the prevailing, low-level offshore wind by the SBC. The simulated boundary-layer development can be accounted for by using a simple 1D Lagrangian model of growth driven by the surface heat flux. This model is obtained as a straightforward modification of an established MIBL analytic growth model.
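The 1D growth model is described only qualitatively; a textbook encroachment-type law, in which the layer deepens according to rho * cp * gamma * h * dh/dt = H_s, gives the flavour. The constants and closed form below are a generic sketch, not the authors' modified MIBL model:

```python
def mixed_layer_depth(h0, surface_heat_flux, elapsed_seconds,
                      gamma=0.005, rho=1.2, cp=1004.0):
    """Depth of a boundary layer growing by encroachment into a stably
    stratified layer (potential-temperature lapse rate gamma, K/m) under a
    constant surface heat flux (W/m^2): integrating
    rho * cp * gamma * h * dh/dt = H_s gives the closed form below.
    Default constants are illustrative values, not the paper's."""
    return (h0 ** 2 + 2.0 * surface_heat_flux * elapsed_seconds
            / (rho * cp * gamma)) ** 0.5
```

In a Lagrangian application the elapsed time would be the advective travel time of an air column from the coast, so slowing the offshore wind (as the sea-breeze circulation does) lengthens the heating time and deepens the layer.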
Abstract:
A new autonomous ship collision-free (ASCF) trajectory navigation and control system is introduced, with a new recursive navigation algorithm based on analytic geometry and convex set theory for collision-free ship guidance. The underlying assumption is that the geometric information of the ship's environment is available in the form of a polygon-shaped free space, which may be easily generated from a 2D image or from plots relating to physical hazards or other constraints such as collision avoidance regulations. The navigation command is given as a heading command sequence based on generating a waypoint which falls within a small neighborhood of the current position, and the sequence of waypoints along the trajectory is guaranteed to lie within a bounded obstacle-free region using convex set theory. A neurofuzzy network predictor, which in practice uses only observed input/output data generated by on-board sensors or external sensors (or a sensor fusion algorithm), based on using the rudder deflection angle to control the ship heading angle, is utilised in the simulation of an ESSO 190000 dwt tanker model to demonstrate the effectiveness of the system.
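The convex-set containment test underlying such waypoint guarantees can be sketched with a standard half-plane check: a point lies in a convex polygon iff it is on the inner side of every edge. A generic routine, not the paper's algorithm:

```python
def in_convex_polygon(point, vertices):
    """True if point lies inside or on a convex polygon whose vertices are
    listed counter-clockwise: the point must be on the left of every edge
    (all edge cross products non-negative)."""
    px, py = point
    n = len(vertices)
    for i in range(n):
        (x1, y1), (x2, y2) = vertices[i], vertices[(i + 1) % n]
        if (x2 - x1) * (py - y1) - (y2 - y1) * (px - x1) < 0.0:
            return False
    return True
```

Screening each candidate waypoint with a test of this kind is what lets the guidance law certify that the commanded trajectory stays inside the bounded obstacle-free region.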
Abstract:
This paper examines the extent to which the valuation of partial interests in private property vehicles should be closely aligned to the valuation of the underlying assets. A sample of vehicle managers and investors replied to a questionnaire on the qualities of private property vehicles relative to direct property investment. Applying the Analytic Hierarchy Process (AHP) technique, the relative importance of the various advantages and disadvantages of investment in private property vehicles, relative to acquisition of the underlying assets, is assessed. The results suggest that the main driver of the growth of this sector has been the ability of certain categories of investor to acquire interests in assets that are normally inaccessible due to the amount of specific risk. Additionally, investors have been attracted by the ability to ‘outsource’ asset management in a manner that minimises perceived agency problems. It is concluded that deviations from NAV should be expected, given that investment in private property vehicles differs from investment in the underlying assets in terms of liquidity, management structures, lot size and financial structure, inter alia. However, reliably appraising the pricing implications of these variations is likely to be extremely difficult due to the lack of secondary market trading and vehicle heterogeneity.
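The AHP weighting step mentioned here is commonly implemented as Saaty's eigenvector method: the priority weights are the normalized principal eigenvector of the reciprocal pairwise-comparison matrix. A minimal sketch (generic input, not the authors' questionnaire data):

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights from a reciprocal pairwise-comparison matrix via
    Saaty's eigenvector method: the normalized principal eigenvector."""
    A = np.asarray(pairwise, dtype=float)
    eigvals, eigvecs = np.linalg.eig(A)
    principal = eigvecs[:, np.argmax(eigvals.real)].real
    principal = np.abs(principal)
    return principal / principal.sum()
```

For a perfectly consistent matrix the principal eigenvalue equals the matrix dimension; the gap between the two is the basis of AHP's usual consistency check.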
Abstract:
Cognitive phenomenology starts from something that has been obscured in much recent analytic philosophy: the fact that lived conscious experience isn’t just a matter of sensation or feeling, but is also cognitive in character, through and through. This is obviously true of ordinary human perceptual experience, and cognitive phenomenology is also concerned with something more exclusively cognitive, which we may call propositional meaning-experience, e.g. occurrent experience of linguistic representations as meaning something, as this occurs in thinking or reading or hearing others speak.
Abstract:
We give an asymptotic expansion for the Taylor coefficients of L(P(z)), where L(z) is analytic in the open unit disc with Taylor coefficients that vary 'smoothly' and P(z) is a probability generating function. We show how this result applies to a variety of problems, amongst them obtaining the asymptotics of Bernoulli transforms and weighted renewal sequences.
Abstract:
Statistical graphics are a fundamental, yet often overlooked, set of components in the repertoire of data analytic tools. Graphs are quick and efficient, yet simple instruments of preliminary exploration of a dataset to understand its structure and to provide insight into influential aspects of inference such as departures from assumptions and latent patterns. In this paper, we present and assess a graphical device for choosing a method for estimating population size in capture-recapture studies of closed populations. The basic concept is derived from a homogeneous Poisson distribution where the ratios of neighboring Poisson probabilities multiplied by the value of the larger neighbor count are constant. This property extends to the zero-truncated Poisson distribution which is of fundamental importance in capture-recapture studies. In practice however, this distributional property is often violated. The graphical device developed here, the ratio plot, can be used for assessing specific departures from a Poisson distribution. For example, simple contaminations of an otherwise homogeneous Poisson model can be easily detected and a robust estimator for the population size can be suggested. Several robust estimators are developed and a simulation study is provided to give some guidance on which should be used in practice. More systematic departures can also easily be detected using the ratio plot. In this paper, the focus is on Gamma mixtures of the Poisson distribution which leads to a linear pattern (called structured heterogeneity) in the ratio plot. More generally, the paper shows that the ratio plot is monotone for arbitrary mixtures of power series densities.
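The ratio-plot quantity itself is simple to compute: under a homogeneous Poisson model the ratios r_x = (x+1) f_{x+1} / f_x are constant and equal to the Poisson mean, so any trend in a plot of r_x against x signals a departure. A minimal sketch (generic frequency input, not the paper's robust estimators):

```python
import numpy as np

def ratio_plot_values(freqs):
    """Ratios r_x = (x+1) * f_{x+1} / f_x from observed frequencies
    freqs = [f_1, f_2, ...] (f_x = number of units captured exactly x times).
    Under a homogeneous Poisson model every r_x equals the Poisson mean."""
    f = np.asarray(freqs, dtype=float)
    x = np.arange(1, f.size)          # x = 1, 2, ..., len(freqs) - 1
    return (x + 1) * f[x] / f[x - 1]
```

A flat sequence of ratios supports a homogeneous Poisson fit; an increasing linear pattern is the signature of Gamma-mixed (structured) heterogeneity discussed in the paper.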
Abstract:
Harmonic analysis on configuration spaces is used to extend explicit expressions for the images of creation, annihilation, and second quantization operators in L2-spaces with respect to Poisson point processes to a set of functions larger than the space obtained by directly using chaos expansion. This permits, in particular, the derivation of an explicit expression for the generator of the second quantization of a sub-Markovian contraction semigroup on a set of functions which forms a core of the generator.
Abstract:
The adaptive thermal comfort theory considers people as active, rather than passive, recipients in response to ambient physical thermal stimuli, in contrast with the conventional, heat-balance-based thermal comfort theory. Occupants actively interact with the environments they occupy by means of physiological, behavioural and psychological adaptations to achieve ‘real world’ thermal comfort. This paper introduces a method of quantifying the physiological, behavioural and psychological portions of the adaptation process by using the analytic hierarchy process (AHP), based on case studies conducted in the UK and China. In addition to the three categories of adaptation, which are viewed as criteria, six possible alternatives are considered: physiological indices/health status, the indoor environment, the outdoor environment, personal physical factors, environmental control and thermal expectation. With the AHP technique, all the above-mentioned criteria, factors and corresponding elements are arranged in a hierarchy tree and quantified by using a series of pair-wise judgements. A sensitivity analysis is carried out to improve the quality of these results. The proposed quantitative weighting method provides researchers with opportunities to better understand the adaptive mechanisms and reveal the significance of each category for the achievement of adaptive thermal comfort.
Abstract:
By eliminating the short range negative divergence of the Debye–Hückel pair distribution function, but retaining the exponential charge screening known to operate at large interparticle separation, the thermodynamic properties of one-component plasmas of point ions or charged hard spheres can be well represented even in the strong coupling regime. Predicted electrostatic free energies agree within 5% of simulation data for typical Coulomb interactions up to a factor of 10 times the average kinetic energy. Here, this idea is extended to the general case of a uniform ionic mixture, comprising an arbitrary number of components, embedded in a rigid neutralizing background. The new theory is implemented in two ways: (i) by an unambiguous iterative algorithm that requires numerical methods and breaks the symmetry of cross correlation functions; and (ii) by invoking generalized matrix inverses that maintain symmetry and yield completely analytic solutions, but which are not uniquely determined. The extreme computational simplicity of the theory is attractive when considering applications to complex inhomogeneous fluids of charged particles.
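The short-range defect referred to here can be seen in the standard linearized Debye-Hückel pair distribution (Gaussian units; generic notation, not necessarily the paper's):

```latex
g_{ij}^{\mathrm{DH}}(r) \;=\; 1 \;-\; \frac{\beta\, q_i q_j}{r}\, e^{-\kappa r},
\qquad \kappa^{2} \;=\; 4\pi\beta \sum_{k} n_k q_k^{2},
```

which for like charges diverges to negative values as r tends to 0. Excising that divergence while retaining the screening factor e^{-kappa r} at large separation is the modification the theory builds on.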
Abstract:
A military operation is about to take place during an ongoing international armed conflict; it can be carried out either by aerial attack, which is expected to cause the deaths of enemy civilians, or by using ground troops, which is expected to cause the deaths of fewer enemy civilians but is expected to result in more deaths of compatriot soldiers. Does the principle of proportionality in international humanitarian law impose a duty on an attacker to expose its soldiers to life-threatening risks in order to minimise or avert risks of incidental damage to enemy civilians? If such a duty exists, is it absolute or qualified? And if it is a qualified duty, what considerations may be taken into account in determining its character and scope? This article presents an analytic framework under the current international humanitarian law (IHL) legal structure, following a proportionality analysis. The proposed framework identifies five main positions for addressing the above queries. The five positions are arranged along two ‘axes’: a value ‘axis’, which identifies the value assigned to the lives of compatriot soldiers in relation to lives of enemy civilians; and a justification ‘axis’, which outlines the justificatory bases for assigning certain values to lives of compatriot soldiers and enemy civilians: intrinsic, instrumental or a combination thereof. The article critically assesses these positions, and favours a position which attributes a value to compatriot soldiers’ lives, premised on a justificatory basis which marries intrinsic considerations with circumscribed instrumental considerations, avoiding the indeterminacy and normative questionability entailed by more expansive instrumental considerations.