388 results for Lipschitz Mappings
Abstract:
This paper describes an experimental application of constrained predictive control and feedback linearisation based on dynamic neural networks. It also verifies experimentally a method for handling input constraints, which are transformed by the feedback linearisation mappings. A performance comparison with a PID controller is also provided. The experimental system consists of a laboratory-based single-link manipulator arm, which is controlled in real time using MATLAB/SIMULINK together with data acquisition equipment.
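The paper's controller is built on dynamic neural networks; purely as an illustration of how feedback linearisation turns a fixed actuator constraint into a state-dependent constraint on the new input, the sketch below uses an idealised rigid single-link model. All parameter values and the PD outer loop are assumptions for illustration, not taken from the paper.

```python
import numpy as np

# Idealised single-link arm: I*theta_dd + b*theta_d + m*g*l*sin(theta) = u
# (parameter values are illustrative assumptions, not from the paper)
I, b, m, g, l = 0.05, 0.1, 0.5, 9.81, 0.3
U_MAX = 2.0                                   # actuator torque limit |u| <= U_MAX

def feedback_linearised_control(theta, theta_d, theta_ref, kp=40.0, kd=8.0):
    """Cancel the nonlinearity, apply a PD law to the linearised system,
    and map the torque constraint onto the new input v."""
    v = kp * (theta_ref - theta) - kd * theta_d          # outer linear law
    # The constraint |u| <= U_MAX, with u = u_ff + I*v, becomes a
    # state-dependent bound on v:
    u_ff = m * g * l * np.sin(theta) + b * theta_d
    v_min = (-U_MAX - u_ff) / I
    v_max = ( U_MAX - u_ff) / I
    v = np.clip(v, v_min, v_max)
    return u_ff + I * v                                   # applied torque

# Simple Euler simulation of a step to 1 rad
dt, theta, theta_d = 1e-3, 0.0, 0.0
for _ in range(3000):
    u = feedback_linearised_control(theta, theta_d, theta_ref=1.0)
    theta_dd = (u - b * theta_d - m * g * l * np.sin(theta)) / I
    theta_d += dt * theta_dd
    theta   += dt * theta_d
print(f"final angle: {theta:.3f} rad")
```

The key point is the clipping step: the fixed bound on the torque u becomes bounds on v that move with the state, which is the kind of transformed constraint a predictive controller operating on the linearised system must respect.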
Abstract:
Vekua operators map harmonic functions defined on a domain in $\mathbb{R}^2$ to solutions of elliptic partial differential equations on the same domain, and vice versa. In this paper, following the original work of I. Vekua (Ilja Vekua (1907–1977), Soviet-Georgian mathematician), we define Vekua operators in the case of the Helmholtz equation in a completely explicit fashion, in any space dimension $N \ge 2$. We prove (i) that they actually transform harmonic functions and Helmholtz solutions into each other; (ii) that they are inverse to each other; and (iii) that they are continuous in any Sobolev norm in star-shaped Lipschitz domains. Finally, we define and compute the generalized harmonic polynomials as the Vekua transforms of harmonic polynomials. These results are instrumental in proving approximation estimates for solutions of the Helmholtz equation in spaces of circular, spherical, and plane waves.
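As a reminder of the setting (standard facts only; the explicit integral formulas for the operators are the paper's contribution and are not reproduced here), the operators act between the kernels of the Laplace and Helmholtz operators and invert each other:

```latex
\Delta \phi = 0 \ \text{ in } \Omega \subset \mathbb{R}^N
\qquad\longleftrightarrow\qquad
\Delta u + \omega^2 u = 0 \ \text{ in } \Omega,
\qquad
u = V_1[\phi], \quad \phi = V_2[u], \quad V_2 \circ V_1 = V_1 \circ V_2 = \mathrm{Id}.
```

Here $V_1$ and $V_2$ denote the Vekua operator and its inverse (our notation); item (iii) above is the statement that both are bounded in every Sobolev norm on star-shaped Lipschitz domains.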
Abstract:
We investigate the error dynamics for cycled data assimilation systems, in which the inverse problem of state determination is solved at times $t_k$, $k = 1, 2, 3, \ldots$, with a first guess given by the state propagated via a dynamical system model from time $t_{k-1}$ to time $t_k$. In particular, for nonlinear dynamical systems that are Lipschitz continuous with respect to their initial states, we provide deterministic estimates for the development of the error $\|e_k\| := \|x^{(a)}_k - x^{(t)}_k\|$ between the estimated state $x^{(a)}$ and the true state $x^{(t)}$ over time. Clearly, an observation error of size $\delta > 0$ leads to an estimation error in every assimilation step. These errors can accumulate if they are not (a) controlled in the reconstruction and (b) damped by the dynamical system under consideration. A data assimilation method is called stable if the error in the estimate is bounded in time by some constant $C$. The key task of this work is to provide estimates for the error $\|e_k\|$, depending on the size $\delta$ of the observation error, the reconstruction operator $R_\alpha$, the observation operator $H$, and the Lipschitz constants $K^{(1)}$ and $K^{(2)}$ on the lower and higher modes controlling the damping behaviour of the dynamics. We show that systems can be stabilized by choosing $\alpha$ sufficiently small, but the bound $C$ will then depend on the data error $\delta$ in the form $c\|R_\alpha\|\delta$ with some constant $c$. Since $\|R_\alpha\| \to \infty$ for $\alpha \to 0$, the constant might be large. Numerical examples for this behaviour in the nonlinear case are provided using a (low-dimensional) Lorenz '63 system.
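To fix ideas, a schematic one-step recursion of the kind such estimates rest on (a simplification, with notation partly ours: $M$ is the model propagator, $\eta_k$ the observation noise, and $\kappa$ collects $\|I - R_\alpha H\|$ together with the Lipschitz constants of $M$) reads

```latex
e_{k+1} = (I - R_\alpha H)\big(M(x^{(a)}_k) - M(x^{(t)}_k)\big) + R_\alpha \eta_k,
\qquad \|\eta_k\| \le \delta,
\\[2mm]
\|e_{k+1}\| \le \kappa\,\|e_k\| + c\,\|R_\alpha\|\,\delta
\quad\Longrightarrow\quad
\|e_k\| \le \kappa^k \|e_0\| + \frac{c\,\|R_\alpha\|\,\delta}{1-\kappa}
\qquad (\kappa < 1).
```

Stability thus requires the reconstruction and the dynamics together to keep $\kappa$ below one, while the price of a small $\alpha$ appears in the $c\|R_\alpha\|\delta$ term, exactly as stated above.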
Abstract:
Using a cross-layer approach, two enhancement techniques applied to adaptive modulation and coding (AMC) with truncated automatic repeat request (T-ARQ) are investigated, namely aggressive AMC (A-AMC) and constellation rearrangement (CoRe). Aggressive AMC selects the appropriate modulation and coding schemes (MCSs) to achieve higher spectral efficiency, profiting from the feasibility of using different MCSs for retransmitting a packet, whereas in CoRe-based AMC, retransmissions of the same data packet are performed using different mappings so as to provide different degrees of protection to the bits involved, thus achieving mapping diversity gain. The performance of both schemes is evaluated in terms of average spectral efficiency and average packet loss rate, which are derived in closed form considering transmission over Nakagami-m fading channels. Numerical results and comparisons are provided. In particular, it is shown that A-AMC combined with T-ARQ yields higher spectral efficiency than the conventional AMC-based scheme while keeping the achieved packet loss rate closer to the system's requirement, and that it can achieve larger spectral efficiency than the scheme using AMC along with CoRe.
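For orientation, the standard ingredients behind closed-form averages of this kind (generic expressions, not the paper's own derivations, which additionally account for ARQ rounds and the packet-loss constraint) are the Nakagami-m SNR density and a rate-weighted sum over the MCS switching regions:

```latex
p_\gamma(\gamma) = \frac{m^m \gamma^{m-1}}{\bar{\gamma}^{\,m}\,\Gamma(m)}
\exp\!\left(-\frac{m\gamma}{\bar{\gamma}}\right), \quad \gamma \ge 0,
\qquad\qquad
\bar{S} = \sum_{n=1}^{N} R_n \int_{\gamma_n}^{\gamma_{n+1}} p_\gamma(\gamma)\,\mathrm{d}\gamma,
```

where $R_n$ is the rate of MCS $n$ and $[\gamma_n, \gamma_{n+1})$ its SNR switching interval.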
Abstract:
There are now many reports of imaging experiments with small cohorts of typical participants that precede large-scale, often multicentre studies of psychiatric and neurological disorders. Data from these calibration experiments are sufficient to make estimates of statistical power and predictions of sample size and minimum observable effect sizes. In this technical note, we suggest how previously reported voxel-based power calculations can support decision making in the design, execution and analysis of cross-sectional multicentre imaging studies. The choice of MRI acquisition sequence, distribution of recruitment across acquisition centres, and changes to the registration method applied during data analysis are considered as examples. The consequences of modification are explored in quantitative terms by assessing the impact on sample size for a fixed effect size and detectable effect size for a fixed sample size. The calibration experiment dataset used for illustration was a precursor to the now complete Medical Research Council Autism Imaging Multicentre Study (MRC-AIMS). Validation of the voxel-based power calculations is made by comparing the predicted values from the calibration experiment with those observed in MRC-AIMS. The effect of non-linear mappings during image registration to a standard stereotactic space on the prediction is explored with reference to the amount of local deformation. In summary, power calculations offer a validated, quantitative means of making informed choices on important factors that influence the outcome of studies that consume significant resources.
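As a concrete, much-simplified counterpart of such calculations (a generic two-group computation, not the voxel-based method of the note; the effect sizes, alpha and power targets are illustrative), the two directions mentioned above look like this in Python:

```python
from statsmodels.stats.power import TTestIndPower

# Generic two-group power calculation standing in for voxel-based estimates;
# all numeric targets below are illustrative assumptions.
analysis = TTestIndPower()

# (a) sample size per group needed to detect a fixed effect size
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"n per group for d = 0.5: {n_per_group:.1f}")

# (b) minimum detectable effect size for a fixed sample size
min_effect = analysis.solve_power(nobs1=40, alpha=0.05, power=0.8)
print(f"detectable d with n = 40 per group: {min_effect:.2f}")
```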
Abstract:
When the sensory consequences of an action are systematically altered our brain can recalibrate the mappings between sensory cues and properties of our environment. This recalibration can be driven by both cue conflicts and altered sensory statistics, but neither mechanism offers a way for cues to be calibrated so they provide accurate information about the world, as sensory cues carry no information as to their own accuracy. Here, we explored whether sensory predictions based on internal physical models could be used to accurately calibrate visual cues to 3D surface slant. Human observers played a 3D kinematic game in which they adjusted the slant of a surface so that a moving ball would bounce off the surface and through a target hoop. In one group, the ball’s bounce was manipulated so that the surface behaved as if it had a different slant to that signaled by visual cues. With experience of this altered bounce, observers recalibrated their perception of slant so that it was more consistent with the assumed laws of kinematics and physical behavior of the surface. In another group, making the ball spin in a way that could physically explain its altered bounce eliminated this pattern of recalibration. Importantly, both groups adjusted their behavior in the kinematic game in the same way, experienced the same set of slants and were not presented with low-level cue conflicts that could drive the recalibration. We conclude that observers use predictive kinematic models to accurately calibrate visual cues to 3D properties of the world.
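The kinematic relationship that links surface slant to bounce direction is, in the idealised no-spin case, just reflection of the incoming velocity about the surface normal; the snippet below is a hypothetical simplification of the game's physics (the spin manipulation used for the second group is deliberately omitted).

```python
import numpy as np

def bounce_direction(v_in, slant_deg):
    """Reflect an incoming 2D velocity off a surface tilted by `slant_deg`
    from horizontal (ideal elastic bounce, no spin -- a simplification)."""
    s = np.radians(slant_deg)
    n = np.array([-np.sin(s), np.cos(s)])      # unit normal of the slanted surface
    return v_in - 2.0 * np.dot(v_in, n) * n    # standard reflection formula

v_in = np.array([1.0, -1.0])                   # ball moving right and down
for slant in (0.0, 10.0, 20.0):
    print(slant, bounce_direction(v_in, slant))
```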
Abstract:
This paper provides an overview of interpolation of Banach and Hilbert spaces, with a focus on establishing when equivalence of norms is in fact equality of norms in the key results of the theory. (In brief, our conclusion for the Hilbert space case is that, with the right normalisations, all the key results hold with equality of norms.) In the final section we apply the Hilbert space results to the Sobolev spaces $H^s(\Omega)$ and $\tilde{H}^s(\Omega)$, for $s \in \mathbb{R}$ and an open $\Omega \subset \mathbb{R}^n$. We exhibit examples in one and two dimensions of sets $\Omega$ for which these scales of Sobolev spaces are not interpolation scales. In the cases when they are interpolation scales (in particular, if $\Omega$ is Lipschitz) we exhibit examples that show that, in general, the interpolation norm does not coincide with the intrinsic Sobolev norm and, in fact, the ratio of these two norms can be arbitrarily large.
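For reference, the norms being compared are the intrinsic Sobolev norm and the real-interpolation (K-method) norm, which in its standard form (up to the normalisation choices the paper discusses) reads

```latex
K(t,u) = \inf_{u = u_0 + u_1}\big(\|u_0\|_{X_0} + t\,\|u_1\|_{X_1}\big),
\qquad
\|u\|_{(X_0,X_1)_{\theta,2}} =
\left(\int_0^\infty \big(t^{-\theta} K(t,u)\big)^2 \frac{\mathrm{d}t}{t}\right)^{1/2},
\quad 0 < \theta < 1.
```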
Abstract:
A universal systems design process is specified, tested in a case study and evaluated. It links English narratives to numbers using a categorical language framework with mathematical mappings taking the place of conjunctions and numbers. The framework is a ring of English narrative words between 1 (option) and 360 (capital); beyond 360 the ring cycles again to 1. English narratives are shown to correspond to the field of fractional numbers. The process can enable the development, presentation and communication of complex narrative policy information among communities of any scale, on a software implementation known as the "ecoputer". The information is more accessible and comprehensive than that in conventional decision support, because: (1) it is expressed in narrative language; and (2) the narratives are expressed as compounds of words within the framework. Hence option generation is made more effective than in conventional decision support processes including Multiple Criteria Decision Analysis, Life Cycle Assessment and Cost-Benefit Analysis. The case study is of a participatory workshop on UK bioenergy project objectives and criteria, at which attributes were elicited in environmental, economic and social systems. From the attributes, the framework was used to derive consequences at a range of levels of precision; these are compared with the project objectives and criteria as set out in the Case for Support. The design process is to be supported by a social information manipulation, storage and retrieval system for numeric and verbal narratives attached to the "ecoputer". The "ecoputer" will have an integrated verbal and numeric operating system. A novel design source code language will assist the development of narrative policy. The utility of the program, including in the transition to sustainable development and in applications at both community micro-scale and policy macro-scale, is discussed from public, stakeholder, corporate, Governmental and regulatory perspectives.
A benchmark-driven modelling approach for evaluating deployment choices on a multi-core architecture
Abstract:
The complexity of current and emerging architectures provides users with options about how best to use the available resources, but makes predicting performance challenging. In this work a benchmark-driven model is developed for a simple shallow water code on a Cray XE6 system, to explore how deployment choices such as domain decomposition and core affinity affect performance. The resource sharing present in modern multi-core architectures adds various levels of heterogeneity to the system. Shared resources often include cache, memory, network controllers and, in some cases, floating-point units (as in the AMD Bulldozer), which means that access time depends on the mapping of application tasks and on each core's location within the system. Heterogeneity further increases with the use of hardware accelerators such as GPUs and the Intel Xeon Phi, where many specialist cores are attached to general-purpose cores. This trend towards shared resources and non-uniform cores is expected to continue into the exascale era. The complexity of these systems means that various runtime scenarios are possible, and it has been found that under-populating nodes, altering the domain decomposition and using non-standard task-to-core mappings can dramatically alter performance. Discovering this, however, is often a process of trial and error. To better inform this process, a performance model was developed for a simple regular grid-based kernel code, shallow. The code comprises two distinct types of work: loop-based array updates and nearest-neighbour halo exchanges. Separate performance models were developed for each part, both based on a similar methodology. Application-specific benchmarks were run to measure performance for different problem sizes under different execution scenarios. These results were then fed into a performance model that derives resource usage for a given deployment scenario, with interpolation between results as necessary.
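A minimal sketch of that benchmark-plus-interpolation idea is below; the benchmark timings, problem sizes and the four-neighbour halo count are hypothetical placeholders, not measurements from this work.

```python
import numpy as np

# Hypothetical benchmark measurements: time per iteration (s) versus
# local subdomain size, gathered for one execution scenario.
compute_sizes = np.array([64, 128, 256, 512, 1024])        # cells per side
compute_times = np.array([0.4e-3, 1.5e-3, 6.2e-3, 25e-3, 101e-3])
halo_sizes    = np.array([64, 128, 256, 512, 1024])        # halo elements
halo_times    = np.array([8e-6, 12e-6, 21e-6, 40e-6, 77e-6])

def predicted_iteration_time(local_n, n_halo_exchanges=4):
    """Combine interpolated compute and halo-exchange costs for one
    deployment scenario (a simplification of the two sub-models)."""
    t_compute = np.interp(local_n, compute_sizes, compute_times)
    t_halo = n_halo_exchanges * np.interp(local_n, halo_sizes, halo_times)
    return t_compute + t_halo

# Compare two candidate domain decompositions by their local subdomain size
for local_n in (256, 512):
    print(f"local size {local_n}: {predicted_iteration_time(local_n)*1e3:.2f} ms/iter")
```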
Abstract:
This contribution is concerned with a posteriori error analysis of discontinuous Galerkin (dG) schemes approximating hyperbolic conservation laws. In the scalar case the a posteriori analysis is based on the L1 contraction property and the doubling-of-variables technique. In the system case the appropriate stability framework is in L2, based on relative entropies, and it is only applicable if one of the two solutions being compared is Lipschitz. For dG schemes approximating hyperbolic conservation laws, neither the entropy solution nor the numerical solution needs to be Lipschitz. We explain how this obstacle can be overcome using a reconstruction approach, which leads to an a posteriori error estimate.
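The two stability mechanisms referred to are, in standard form (notation ours), the Kruzhkov $L^1$ contraction for scalar laws and the relative entropy for systems:

```latex
\partial_t u + \partial_x f(u) = 0, \qquad
\|u(\cdot,t) - v(\cdot,t)\|_{L^1} \le \|u(\cdot,0) - v(\cdot,0)\|_{L^1},
\\[2mm]
\eta(u \mid v) = \eta(u) - \eta(v) - D\eta(v)\cdot(u - v),
```

the latter yielding $L^2$-type stability only when one of $u$, $v$ is Lipschitz, which is precisely the restriction the reconstruction approach is designed to circumvent.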
Abstract:
We study Toeplitz operators on the Besov spaces in the case of the open unit disk. We prove that a symbol satisfying a weak Lipschitz type condition induces a bounded Toeplitz operator. Such symbols do not need to be bounded functions or have continuous extensions to the boundary of the open unit disk. We discuss the problem of the existence of nontrivial compact Toeplitz operators, and also consider Fredholm properties and prove an index formula.
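In the usual analytic-function-space setting, the Toeplitz operator with symbol $f$ is defined via the Bergman projection $P$ on the unit disk $\mathbb{D}$; this is the standard definition, and the paper's precise Besov-space framework may differ in detail:

```latex
T_f u = P(fu), \qquad
(Pg)(z) = \int_{\mathbb{D}} \frac{g(w)}{(1 - z\bar{w})^2}\,\mathrm{d}A(w), \quad z \in \mathbb{D}.
```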
Abstract:
Multidimensional visualization techniques are invaluable tools for the analysis of structured and unstructured data with variable dimensionality. This paper introduces PEx-Image (Projection Explorer for Images), a tool aimed at supporting the analysis of image collections. The tool supports a methodology that employs interactive visualizations to aid user-driven feature detection and classification tasks, thus offering improved analysis and exploration capabilities. The visual mappings employ similarity-based multidimensional projections and point placement to lay out the data on a plane for visual exploration. In addition to its application to image databases, we also illustrate how the proposed approach can be successfully employed in the simultaneous analysis of different data types, such as text and images, offering a common visual representation for data expressed in different modalities.
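As a generic illustration of a similarity-based projection of image feature vectors onto the plane (classical MDS from scikit-learn is used here purely as a stand-in; PEx-Image's own projection techniques differ):

```python
import numpy as np
from sklearn.manifold import MDS

# Hypothetical image feature vectors (e.g. colour histograms), one row per image.
rng = np.random.default_rng(0)
features = rng.random((50, 64))

# Similarity-based projection to 2D: metric MDS as a generic stand-in
# for the projection techniques used in PEx-Image.
xy = MDS(n_components=2, random_state=0).fit_transform(features)

print(xy.shape)   # (50, 2) -- one 2D point per image, ready to plot and explore
```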
Abstract:
Given a model 2-complex $K(P)$ of a group presentation $P$, we associate to it an integer matrix $\Delta(P)$ and we prove that a cellular map $f : K(P) \to S^2$ is root free (is not strongly surjective) if and only if the Diophantine linear system $\Delta(P)\,Y = \overrightarrow{\deg}(f)$ has an integer solution, where $\overrightarrow{\deg}(f)$ is the so-called vector-degree of $f$.
Abstract:
Let $Y = (f, g, h) : \mathbb{R}^3 \to \mathbb{R}^3$ be a $C^2$ map and let $\mathrm{Spec}(Y)$ denote the set of eigenvalues of the derivative $DY(p)$ as $p$ varies in $\mathbb{R}^3$. We begin by proving that if, for some $\varepsilon > 0$, $\mathrm{Spec}(Y) \cap (-\varepsilon, \varepsilon) = \emptyset$, then the foliation $\mathcal{F}(k)$, with $k \in \{f, g, h\}$, made up of the level surfaces $\{k = \text{constant}\}$, consists only of planes. As a consequence, we prove a bijectivity result related to the three-dimensional case of Jelonek's Jacobian Conjecture for polynomial maps of $\mathbb{R}^n$.
Abstract:
In this paper we present results for the systematic study of reversible-equivariant vector fields - namely, in the simultaneous presence of symmetries and reversing symmetries - by employing algebraic techniques from invariant theory for compact Lie groups. The Hilbert-Poincaré series and their associated Molien formulae are introduced, and we prove the character formulae for the computation of dimensions of spaces of homogeneous anti-invariant polynomial functions and reversible-equivariant polynomial mappings. A symbolic algorithm is obtained for the computation of generators for the module of reversible-equivariant polynomial mappings over the ring of invariant polynomials. We show that this computation can be obtained directly from a well-known situation, namely from the generators of the ring of invariants and the module of the equivariants.
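For context, the classical Molien formula, of which the character formulae for anti-invariants and reversible-equivariants are refinements, gives the Hilbert-Poincaré series of the invariant ring of a finite group $\Gamma$ acting through a representation $\rho$ (for compact groups the average becomes a Haar integral):

```latex
P_{\mathcal{P}(\Gamma)}(t) = \frac{1}{|\Gamma|} \sum_{\gamma \in \Gamma}
\frac{1}{\det\!\big(I - t\,\rho(\gamma)\big)}.
```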