7 results for MAPPINGS
in CentAUR: Central Archive University of Reading - UK
Abstract:
This paper studies periodic traveling gravity waves at the free surface of water in a flow of constant vorticity over a flat bed. Using conformal mappings, the free-boundary problem is transformed into a quasilinear pseudodifferential equation for a periodic function of one variable. The new formulation leads to a regularity result and, by means of bifurcation theory, to the existence of waves of small amplitude even in the presence of stagnation points in the flow.
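As a hedged illustration only (the equation, symbols and normalisation below are not taken from the paper), the irrotational counterpart of this kind of conformal-mapping reformulation is the classical Babenko equation: a quasilinear equation for a single 2π-periodic function involving the periodic Hilbert transform. The constant-vorticity case studied here leads to an equation of broadly similar type, with additional terms.

```latex
% Babenko's equation for irrotational Stokes waves, shown only to
% illustrate the general shape of a quasilinear pseudodifferential
% equation for one periodic unknown obtained via conformal mapping.
% v is the 2*pi-periodic surface profile in conformal coordinates,
% \mathcal{C} the periodic Hilbert transform, c the wave speed and
% g gravity; normalisations vary between references.
\frac{c^{2}}{g}\,\mathcal{C}(v') \;=\; v + v\,\mathcal{C}(v') + \mathcal{C}(v\,v')
```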
Abstract:
This paper describes an experimental application of constrained predictive control and feedback linearisation based on dynamic neural networks. It also experimentally verifies a method for handling input constraints, which are transformed by the feedback linearisation mappings. A performance comparison with a PID controller is also provided. The experimental system consists of a laboratory-based single-link manipulator arm, which is controlled in real time using MATLAB/SIMULINK together with data acquisition equipment.
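The constraint-transformation step can be pictured with a minimal sketch. Assuming a feedback-linearisation law of the common affine form u = alpha(x) + beta(x)*v (the paper's neural-network-based mappings are not reproduced here), actuator limits on the physical input u become state-dependent bounds on the linearising input v that the predictive controller must respect:

```python
# Hedged sketch (not the paper's code): mapping actuator limits through
# an assumed feedback-linearisation law u = alpha(x) + beta(x) * v into
# state-dependent bounds on the linearising input v.

def transform_input_constraints(alpha, beta, u_min, u_max):
    """Map bounds on the physical input u to bounds on v at the current state."""
    if beta > 0:
        v_min = (u_min - alpha) / beta
        v_max = (u_max - alpha) / beta
    else:  # a negative gain flips the interval
        v_min = (u_max - alpha) / beta
        v_max = (u_min - alpha) / beta
    return v_min, v_max

# Example with illustrative numbers: torque limits of +/- 5 Nm and
# model-estimated terms alpha = 1.2, beta = 0.8 at the current state.
print(transform_input_constraints(alpha=1.2, beta=0.8, u_min=-5.0, u_max=5.0))
```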
Abstract:
Using a cross-layer approach, two enhancement techniques for adaptive modulation and coding (AMC) with truncated automatic repeat request (T-ARQ) are investigated, namely aggressive AMC (A-AMC) and constellation rearrangement (CoRe). Aggressive AMC selects the appropriate modulation and coding schemes (MCSs) to achieve higher spectral efficiency, exploiting the possibility of using different MCSs when retransmitting a packet, whereas in CoRe-based AMC, retransmissions of the same data packet are performed using different mappings so as to provide different degrees of protection to the bits involved, thus achieving mapping diversity gain. The performance of both schemes is evaluated in terms of average spectral efficiency and average packet loss rate, which are derived in closed form for transmission over Nakagami-m fading channels. Numerical results and comparisons are provided. In particular, it is shown that A-AMC combined with T-ARQ yields higher spectral efficiency than the conventional AMC-based scheme while keeping the achieved packet loss rate close to the system's requirement, and that it can meet larger spectral-efficiency targets than the scheme using AMC together with CoRe.
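A minimal sketch of the mode-selection idea, with made-up MCS thresholds rather than the closed-form expressions of the paper: conventional AMC picks the highest-rate mode whose SNR threshold is met, while an "aggressive" variant in the spirit of A-AMC moves one mode higher, accepting a larger first-transmission error rate because truncated ARQ can recover it.

```python
# Hedged sketch with invented numbers, not the paper's scheme.
# Each entry: (spectral efficiency in bit/s/Hz, illustrative SNR threshold in dB).
MCS_TABLE = [(0.5, 1.0), (1.0, 4.0), (2.0, 9.0), (3.0, 14.0), (4.0, 19.0)]

def select_mcs(snr_db, aggressive=False):
    """Return the MCS chosen for the current SNR."""
    idx = 0
    for i, (_, threshold) in enumerate(MCS_TABLE):
        if snr_db >= threshold:
            idx = i
    if aggressive:
        # push one mode higher and rely on T-ARQ retransmissions
        idx = min(idx + 1, len(MCS_TABLE) - 1)
    return MCS_TABLE[idx]

print(select_mcs(10.0))                   # conventional AMC
print(select_mcs(10.0, aggressive=True))  # aggressive AMC
```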
Abstract:
There are now many reports of imaging experiments with small cohorts of typical participants that precede large-scale, often multicentre studies of psychiatric and neurological disorders. Data from these calibration experiments are sufficient to estimate statistical power and to predict sample sizes and minimum observable effect sizes. In this technical note, we suggest how previously reported voxel-based power calculations can support decision making in the design, execution and analysis of cross-sectional multicentre imaging studies. The choice of MRI acquisition sequence, the distribution of recruitment across acquisition centres, and changes to the registration method applied during data analysis are considered as examples. The consequences of each modification are explored in quantitative terms by assessing the impact on sample size for a fixed effect size and on detectable effect size for a fixed sample size. The calibration experiment dataset used for illustration was a precursor to the now complete Medical Research Council Autism Imaging Multicentre Study (MRC-AIMS). The voxel-based power calculations are validated by comparing the values predicted from the calibration experiment with those observed in MRC-AIMS. The effect of non-linear mappings during image registration to a standard stereotactic space on the prediction is explored with reference to the amount of local deformation. In summary, power calculations offer a validated, quantitative means of making informed choices about important factors that influence the outcome of studies that consume significant resources.
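As a hedged sketch (not the MRC-AIMS pipeline), voxel-wise power calculations of this kind rest on the standard two-sample normal-approximation relation between effect size and sample size, which can be solved for n at a fixed effect size d, or inverted for the detectable d at a fixed n:

```python
# Minimal two-sample power calculation using the normal approximation:
#   n per group = 2 * ((z_{1-alpha/2} + z_{1-beta}) / d)^2
from math import ceil, sqrt
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.8):
    """Sample size per group needed to detect standardized effect size d."""
    z = NormalDist().inv_cdf
    return ceil(2 * ((z(1 - alpha / 2) + z(power)) / d) ** 2)

def detectable_effect(n, alpha=0.05, power=0.8):
    """Smallest standardized effect size detectable with n per group."""
    z = NormalDist().inv_cdf
    return (z(1 - alpha / 2) + z(power)) * sqrt(2 / n)

print(n_per_group(d=0.5))       # sample size for a medium effect
print(detectable_effect(n=64))  # smallest effect detectable with n = 64
```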
Abstract:
When the sensory consequences of an action are systematically altered, our brain can recalibrate the mappings between sensory cues and properties of our environment. This recalibration can be driven by both cue conflicts and altered sensory statistics, but neither mechanism offers a way for cues to be calibrated so they provide accurate information about the world, as sensory cues carry no information as to their own accuracy. Here, we explored whether sensory predictions based on internal physical models could be used to accurately calibrate visual cues to 3D surface slant. Human observers played a 3D kinematic game in which they adjusted the slant of a surface so that a moving ball would bounce off the surface and through a target hoop. In one group, the ball’s bounce was manipulated so that the surface behaved as if it had a different slant to that signaled by visual cues. With experience of this altered bounce, observers recalibrated their perception of slant so that it was more consistent with the assumed laws of kinematics and the physical behavior of the surface. In another group, making the ball spin in a way that could physically explain its altered bounce eliminated this pattern of recalibration. Importantly, both groups adjusted their behavior in the kinematic game in the same way, experienced the same set of slants and were not presented with low-level cue conflicts that could drive the recalibration. We conclude that observers use predictive kinematic models to accurately calibrate visual cues to 3D properties of the world.
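The kind of internal kinematic model implied by the task can be sketched in 2D, purely as an illustration with invented numbers: a ball bouncing elastically off a surface has its velocity reflected about the surface normal, so a bounce consistent with a slant different from the visually signalled one creates the prediction discrepancy that could drive recalibration.

```python
# Hedged 2D sketch of an elastic bounce off a slanted surface.
import numpy as np

def bounce(v, slant_rad):
    """Reflect incoming velocity v about the normal of a surface with given slant."""
    n = np.array([-np.sin(slant_rad), np.cos(slant_rad)])  # unit surface normal
    return v - 2 * np.dot(v, n) * n

v_in = np.array([1.0, -1.0])
print(bounce(v_in, np.deg2rad(0)))   # flat surface: vertical component flips
print(bounce(v_in, np.deg2rad(10)))  # slanted surface: bounce direction rotates
```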
Abstract:
A universal systems design process is specified, tested in a case study and evaluated. It links English narratives to numbers using a categorical language framework with mathematical mappings taking the place of conjunctions and numbers. The framework is a ring of English narrative words between 1 (option) and 360 (capital); beyond 360 the ring cycles again to 1. English narratives are shown to correspond to the field of fractional numbers. The process can enable the development, presentation and communication of complex narrative policy information among communities of any scale, on a software implementation known as the "ecoputer". The information is more accessible and comprehensive than that in conventional decision support, because: (1) it is expressed in narrative language; and (2) the narratives are expressed as compounds of words within the framework. Hence option generation is made more effective than in conventional decision support processes, including Multiple Criteria Decision Analysis, Life Cycle Assessment and Cost-Benefit Analysis. The case study is of a participatory workshop on UK bioenergy project objectives and criteria, at which attributes were elicited in environmental, economic and social systems. From the attributes, the framework was used to derive consequences at a range of levels of precision; these are compared with the project objectives and criteria as set out in the Case for Support. The design process is to be supported by a social information manipulation, storage and retrieval system for numeric and verbal narratives attached to the "ecoputer". The "ecoputer" will have an integrated verbal and numeric operating system. A novel design source code language will assist the development of narrative policy. The utility of the program, including in the transition to sustainable development and in applications at both community micro-scale and policy macro-scale, is discussed from public, stakeholder, corporate, governmental and regulatory perspectives.
A benchmark-driven modelling approach for evaluating deployment choices on a multi-core architecture
Abstract:
The complexity of current and emerging architectures provides users with options about how best to use the available resources, but makes predicting performance challenging. In this work a benchmark-driven model is developed for a simple shallow water code on a Cray XE6 system, to explore how deployment choices such as domain decomposition and core affinity affect performance. The resource sharing present in modern multi-core architectures adds various levels of heterogeneity to the system. Shared resources often include cache, memory, network controllers and, in some cases, floating point units (as in the AMD Bulldozer), which means that access time depends on the mapping of application tasks and on each core's location within the system. Heterogeneity increases further with the use of hardware accelerators such as GPUs and the Intel Xeon Phi, where many specialist cores are attached to general-purpose cores. This trend towards shared resources and non-uniform cores is expected to continue into the exascale era. The complexity of these systems means that various runtime scenarios are possible, and it has been found that under-populating nodes, altering the domain decomposition and using non-standard task-to-core mappings can dramatically alter performance. Finding this out, however, is often a process of trial and error. To better inform this process, a performance model was developed for a simple regular grid-based kernel code, shallow. The code comprises two distinct types of work: loop-based array updates and nearest-neighbour halo exchanges. Separate performance models were developed for each part, both based on a similar methodology. Application-specific benchmarks were run to measure performance for different problem sizes under different execution scenarios. These results were then fed into a performance model that derives resource usage for a given deployment scenario, interpolating between results as necessary.
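A minimal sketch of this modelling approach, with invented benchmark numbers and machine parameters rather than the measured Cray XE6 values: the time per step is modelled as an interpolated per-cell compute cost plus a latency/bandwidth halo-exchange cost, evaluated for a candidate domain decomposition.

```python
# Hedged sketch of a benchmark-driven performance model; all figures are
# illustrative assumptions, not measurements from the paper.
import numpy as np

# Assumed single-task benchmark: cells per subdomain vs. time per cell (s).
BENCH_CELLS = np.array([1e4, 1e5, 1e6, 1e7])
BENCH_T_PER_CELL = np.array([2.0e-9, 2.5e-9, 4.0e-9, 4.5e-9])  # cache effects

LATENCY = 2.0e-6         # s per message (assumed)
BANDWIDTH = 5.0e9        # bytes/s (assumed)
HALO_BYTES_PER_CELL = 8  # one double per halo cell (assumed)

def step_time(nx, ny, px, py):
    """Predicted time per timestep for an nx*ny grid on a px*py task grid."""
    local_cells = (nx / px) * (ny / py)
    compute = local_cells * np.interp(local_cells, BENCH_CELLS, BENCH_T_PER_CELL)
    halo_cells = 2 * (nx / px + ny / py)  # four edges of the subdomain
    comms = 4 * LATENCY + halo_cells * HALO_BYTES_PER_CELL / BANDWIDTH
    return compute + comms

# Compare two decompositions of the same grid over 64 tasks.
print(step_time(4096, 4096, 8, 8))
print(step_time(4096, 4096, 64, 1))
```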