955 results for Boolean Computations


Relevance: 10.00%

Abstract:

Manipulation of an object by a multi-fingered robot hand requires task planning that involves computation of joint space vectors and fingertip forces. To implement a task as fast as possible, these computations have to be carried out in minimum time. The state of the art in multi-fingered robot hand design has shown the potential of remotely driven finger joints. Such remotely driven hands require computation of tendon displacements for evaluating joint space vectors before signals are sent to the actuators. Alternatively, a direct drive hand is a mechanical hand in which the shafts of the articulated joints are directly coupled to the rotors of motors with high output torques. This article is divided into two main sections. The first presents a brief view of manipulation using the direct drive approach; the second presents ongoing research on the design of a four-finger articulated hand in the Department of Cybernetics at the University of Reading.
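
A minimal sketch of the tendon-displacement computation mentioned above, under the common assumption of a linear coupling between joint rotations and tendon lengths; the routing matrix and all values are hypothetical, not taken from the Reading design.

```python
import numpy as np

# Tendon displacements for a remotely driven finger under a linear coupling
# model: delta_l = R @ delta_theta, where R holds the signed pulley radii at
# each joint. All values below are illustrative.
R = np.array([
    [0.010, 0.008, 0.006],   # tendon 1 routed over all three joints (radii in m)
    [0.010, -0.008, 0.000],  # tendon 2 routed antagonistically over two joints
])

delta_theta = np.radians([10.0, 15.0, 20.0])  # commanded joint rotations (rad)
delta_l = R @ delta_theta                     # tendon displacements to command (m)
print(delta_l)
```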

Relevance: 10.00%

Abstract:

Reconfigurable computing is becoming an important new alternative for implementing computations. Field programmable gate arrays (FPGAs) are the ideal integrated circuit technology to experiment with the potential benefits of using different strategies of circuit specialization by reconfiguration. The final form of the reconfiguration strategy is often non-trivial to determine. Consequently, in this paper, we examine strategies for reconfiguration and, based on our experience, propose general guidelines for the tradeoffs using an area-time metric called functional density. Three experiments are set up to explore different reconfiguration strategies for FPGAs applied to a systolic implementation of a scalar quantizer used as a case study. Quantitative results for each experiment are given. The regular nature of the example means that the results can be generalized to a wide class of industry-relevant problems based on arrays.
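
A minimal sketch of how such an area-time comparison might be scored with a functional density metric; the formula simply charges any reconfiguration overhead to the total time, and the numbers are made up rather than taken from the paper's experiments.

```python
def functional_density(ops, area, exec_time, config_time=0.0):
    """Operations per unit of area-time; reconfiguration overhead, if any,
    is charged to the total time. Units are illustrative."""
    return ops / (area * (exec_time + config_time))

# Compare a general-purpose circuit against a smaller specialized one that
# must be reconfigured between uses (hypothetical numbers):
static_design = functional_density(ops=1e6, area=400, exec_time=1e-3)
reconfigured = functional_density(ops=1e6, area=250, exec_time=1e-3,
                                  config_time=5e-4)
print(static_design, reconfigured)  # specialization pays off only if density rises
```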

Relevance: 10.00%

Abstract:

DISOPE is a technique for solving optimal control problems where there are differences in structure and parameter values between reality and the model employed in the computations. The model-reality differences can also allow for deliberate simplification of model characteristics and performance indices in order to facilitate the solution of the optimal control problem. The technique was developed originally in continuous time and later extended to discrete time. The main property of the procedure is that, by iterating on appropriately modified model-based problems, the correct optimal solution is achieved in spite of the model-reality differences. Algorithms have been developed in both continuous and discrete time for a general nonlinear optimal control problem with terminal weighting, bounded controls and terminal constraints. The aim of this paper is to show how the DISOPE technique can aid receding horizon optimal control computation in nonlinear model predictive control.
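
The iterative structure can be sketched as follows; this is only a schematic of "iterating on appropriately modified model-based problems", with hypothetical helper functions, not the published algorithm's update rules.

```python
# Schematic DISOPE-style loop. The three helpers are hypothetical placeholders;
# the actual algorithm prescribes specific modifier updates derived from the
# measured model-reality differences.
def disope_iterate(solve_modified_model_problem, measure_reality,
                   update_modifiers, modifiers, tol=1e-6, max_iter=100):
    for _ in range(max_iter):
        u, x = solve_modified_model_problem(modifiers)  # model-based solve
        reality = measure_reality(u)                    # apply control to reality
        new_modifiers = update_modifiers(modifiers, u, x, reality)
        if all(abs(a - b) < tol for a, b in zip(new_modifiers, modifiers)):
            break                                       # converged despite model error
        modifiers = new_modifiers
    return u, x
```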

Relevance: 10.00%

Abstract:

Many natural and technological applications generate time-ordered sequences of networks defined over a fixed set of nodes; for example, time-stamped information about ‘who phoned whom’ or ‘who came into contact with whom’ arises naturally in studies of communication and the spread of disease. Concepts and algorithms for static networks do not immediately carry through to this dynamic setting. For example, suppose A and B interact in the morning, and then B and C interact in the afternoon. Information, or disease, may then pass from A to C, but not vice versa. This subtlety is lost if we simply summarize using the daily aggregate network given by the chain A-B-C. However, using a natural definition of a walk on an evolving network, we show that classic centrality measures from the static setting can be extended in a computationally convenient manner. In particular, communicability indices can be computed to summarize the ability of each node to broadcast and receive information. The computations involve basic operations in linear algebra, and the asymmetry caused by time’s arrow is captured naturally through the non-commutativity of matrix-matrix multiplication. Illustrative examples are given for both synthetic and real-world communication data sets. We also discuss the use of the new centrality measures for real-time monitoring and prediction.
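
A minimal sketch of the kind of linear-algebra computation involved, using one standard resolvent-product formulation of dynamic communicability; the damping parameter a and the toy interaction schedule are illustrative, and this need not match the paper's exact definition.

```python
import numpy as np

def dynamic_communicability(As, a):
    """Q = (I - a*A_1)^-1 (I - a*A_2)^-1 ... for time-ordered adjacency
    matrices; requires a < 1 / max spectral radius. Time order matters
    because the matrix factors do not commute."""
    n = As[0].shape[0]
    Q = np.eye(n)
    for A in As:
        Q = Q @ np.linalg.inv(np.eye(n) - a * A)
    return Q

# Morning: A and B interact; afternoon: B and C (nodes 0=A, 1=B, 2=C).
A1 = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], float)
A2 = np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0]], float)
Q = dynamic_communicability([A1, A2], a=0.5)
print(Q[0, 2], Q[2, 0])        # A reaches C through time, but not vice versa

broadcast, receive = Q.sum(axis=1), Q.sum(axis=0)  # per-node communicability
```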

Relevance: 10.00%

Abstract:

The problem of a manipulator operating in a noisy workspace and required to move from an initial fixed position P0 to a final position Pf is considered. However, Pf is corrupted by noise, giving rise to an estimate P̂f, which may be obtained from sensors. The use of learning automata is proposed to tackle this problem. An automaton is placed at each joint of the manipulator, which moves according to the action chosen by the automaton (forward, backward, stationary) at each instant. The simultaneous reward or penalty of the automata makes it possible to avoid the inverse kinematics computations that would be necessary if the distance of each joint from the final position had to be calculated. Three variable-structure learning algorithms are used, namely the discretized linear reward-penalty (DLR-P), the linear reward-penalty (LR-P) and a nonlinear scheme. Each algorithm is tested separately with two (forward, backward) and three (forward, backward, stationary) actions.
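
A minimal sketch of the linear reward-penalty update named above; the learning rates and the three-action layout are illustrative choices.

```python
import numpy as np

def lrp_update(p, chosen, rewarded, a=0.1, b=0.1):
    """One linear reward-penalty (LR-P) step on the action-probability
    vector p; a and b are reward/penalty learning rates (illustrative)."""
    p = p.copy()
    r = len(p)
    if rewarded:
        p *= (1 - a)
        p[chosen] += a                # reinforce the chosen action
    else:
        p[chosen] *= (1 - b)          # penalize the chosen action...
        others = np.arange(r) != chosen
        p[others] = b / (r - 1) + (1 - b) * p[others]   # ...and boost the rest
    return p

# One automaton per joint; actions: 0=forward, 1=backward, 2=stationary.
p = np.full(3, 1 / 3)
p = lrp_update(p, chosen=0, rewarded=True)   # the move reduced the distance
print(p, p.sum())                            # probabilities still sum to 1
```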

Relevance: 10.00%

Abstract:

Practically all extant work on flows over obstacle arrays, whether laboratory experiments or numerical modelling, is for cases where the oncoming wind is normal to salient faces of the obstacles. In the field, however, this is rarely the case. Here, simulations of flows at various directions over arrays of cubes representing typical urban canopy regions are presented and discussed. The computations are of both direct numerical simulation and large-eddy simulation type. Attention is concentrated on the differences in the mean flow within the canopy region arising from the different wind directions and the consequent effects on global properties such as the total surface drag, which can change very significantly, by up to a factor of three in some circumstances. It is shown that, for a given Reynolds number, the typical viscous forces are generally a rather larger fraction of the pressure forces (principally the drag) for non-normal than for normal wind directions and that, depending on the surface morphology, the average flow direction deep within the canopy can be largely independent of the oncoming wind direction. Even for regular arrays of regular obstacles, a wind direction not normal to the obstacle faces can in general generate a lateral lift force (in the direction normal to the oncoming flow). The results demonstrate this, and it is shown how computations in a finite domain, with the oncoming flow generated by an appropriate forcing term (e.g. a pressure gradient), lead inevitably to an oncoming wind direction aloft that is not aligned with the forcing-term vector.

Relevance: 10.00%

Abstract:

This spreadsheet contains key data about that part of the endgame of Western Chess for which Endgame Tables (EGTs) have been generated by computer. It is derived from the EGT work since 1969 of Thomas Ströhlein, Ken Thompson, Christopher Wirth, Eugene Nalimov, Marc Bourzutschky, John Tamplin and Yakov Konoval. The data include the percentages of wins, draws and losses (with White and with Black to move), the maximum and average depths of win under various metrics (DTC = Depth to Conversion, DTM = Depth to Mate, DTZ = Depth to Conversion or Pawn-push), and examples of positions of maximum depth. It is essentially about sub-7-man Chess but is updated as news comes in of 7-man EGT computations.

Relevance: 10.00%

Abstract:

How fast can a mammal evolve from the size of a mouse to the size of an elephant? Achieving such a large transformation calls for major biological reorganization. Thus, the speed at which this occurs has important implications for extensive faunal changes, including adaptive radiations and recovery from mass extinctions. To quantify the pace of large-scale evolution we developed a metric, the clade maximum rate, which represents the maximum evolutionary rate of a trait within a clade. We applied this metric to body mass evolution in mammals over the last 70 million years, during which multiple large evolutionary transitions occurred in oceans and on continents and islands. Our computations suggest that it took a minimum of 1.6, 5.1, and 10 million generations for terrestrial mammal mass to increase 100-, 1,000-, and 5,000-fold, respectively. Values for whales were around half as long (1.1, 3, and 5 million generations), perhaps due to the reduced mechanical constraints of living in an aquatic environment. When differences in generation time are considered, we find an exponential increase in maximum mammal body mass during the 35 million years following the Cretaceous–Paleogene (K–Pg) extinction event. Our results also indicate a basic asymmetry in macroevolution: very large decreases (such as extreme insular dwarfism) can happen at more than 10 times the rate of increases. Our findings allow more rigorous comparisons of microevolutionary and macroevolutionary patterns and processes.

Keywords: haldanes, biological time, scaling, pedomorphosis
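
To give a feel for these figures, a back-of-envelope conversion of the first one into a per-generation factor (illustrative arithmetic only, not a computation from the paper):

```python
import math

# A 100-fold mass increase spread over 1.6 million generations implies a
# minuscule per-generation factor:
fold, generations = 100.0, 1.6e6
per_generation = fold ** (1.0 / generations)
print(per_generation)   # ~1.0000029, i.e. roughly a 0.0003% change per generation
```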

Relevance: 10.00%

Abstract:

We consider the problem of determining the pressure and velocity fields for a weakly compressible fluid flowing in a three-dimensional layer, composed of an inhomogeneous, anisotropic porous medium, with vertical side walls and variable upper and lower boundaries, in the presence of vertical wells injecting and/or extracting fluid. Numerical solution of this three-dimensional evolution problem may be expensive, particularly in the case that the depth scale of the layer h is small compared to the horizontal length scale l, a situation which occurs frequently in the application to oil and gas reservoir recovery and which leads to significant stiffness in the numerical problem. Under the assumption that $\epsilon\propto h/l\ll 1$, we show that, to leading order in $\epsilon$, the pressure field varies only in the horizontal directions away from the wells (the outer region). We construct asymptotic expansions in $\epsilon$ in both the inner (near the wells) and outer regions and use the asymptotic matching principle to derive expressions for all significant process quantities. The only computations required are for the solution of non-stiff linear, elliptic, two-dimensional boundary-value, and eigenvalue problems. This approach, via the method of matched asymptotic expansions, takes advantage of the small aspect ratio of the layer, $\epsilon$, at precisely the stage where full numerical computations become stiff, and also reveals the detailed structure of the dynamics of the flow, both in the neighbourhood of wells and away from wells.
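
Schematically, and with notation that is illustrative rather than taken from the paper, the leading-order structure of the outer expansion described above is:

```latex
% Outer (away-from-wells) region, with \epsilon \propto h/l \ll 1 and z the
% scaled vertical coordinate (notation illustrative):
p(x, y, z, t) \sim p_0(x, y, t) + o(1) \quad \text{as } \epsilon \to 0,
% so the leading-order pressure is independent of depth; an inner expansion in
% stretched coordinates centred on each well is matched to p_0 to fix its
% behaviour near the wells.
```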

Relevance: 10.00%

Abstract:

A system for continuous data assimilation is presented and discussed. To simulate the dynamical development, a channel version of a balanced barotropic model is used, and geopotential (height) data are assimilated into the model's computations as they become available. In the first experiment the updating is performed every 24, 12 and 6 hours with a given network. The stations are distributed at random in 4 groups in order to simulate 4 areas with different densities of stations. Optimum interpolation is performed on the difference between the forecast and the valid observations. The RMS error of the analyses is reduced in time, the error being smaller the more frequently the updating is performed. Updating every 6 hours yields an analysis error smaller than the RMS error of the observations. In a second experiment the updating is performed with data from a moving satellite with a side-scan capability of about 15°. If the satellite data are analysed at every time step before they are introduced into the system, the error of the analysis is reduced to a value below the RMS error of the observations after only 24 hours, and on the whole this yields a better result than updating from a fixed network. If the satellite data are introduced without any modification, the error of the analysis is reduced much more slowly, and it takes about 4 days to reach a result comparable to the one where the data have been analysed.
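
A minimal sketch of an optimum-interpolation update of the kind used here: the analysis corrects the forecast by weighting observation-minus-forecast differences with gains that minimize the analysis error variance. All matrices and numbers below are illustrative.

```python
import numpy as np

def oi_update(x_f, y, H, B, R):
    """Optimum interpolation: x_a = x_f + K (y - H x_f) with the optimal
    gain K = B H^T (H B H^T + R)^-1."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
    return x_f + K @ (y - H @ x_f)

x_f = np.array([500.0, 520.0, 540.0])   # forecast heights at three grid points
H = np.array([[1.0, 0.0, 0.0]])         # a single station observing point 0
B = 25.0 * np.exp(-np.abs(np.subtract.outer(np.arange(3), np.arange(3))))
R = np.array([[4.0]])                   # observation error variance
x_a = oi_update(x_f, np.array([505.0]), H, B, R)
print(x_a)   # the correction spreads to neighbours via the covariances in B
```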

Relevance: 10.00%

Abstract:

Ensemble-based data assimilation is rapidly proving itself to be a computationally efficient and skilful assimilation method for numerical weather prediction, one that can provide a viable alternative to more established variational assimilation techniques. However, a fundamental shortcoming of ensemble techniques is that the resulting analysis increments can only span a limited subspace of the state space, whose dimension is less than the ensemble size. This limits the amount of observational information that can effectively constrain the analysis. This paper presents a data selection strategy, usable with both stochastic and deterministic ensemble filters, that aims to assimilate only the observational components that matter most. This avoids unnecessary computations, reduces round-off errors and minimizes the risk of importing observation bias into the analysis. When an ensemble-based assimilation technique is used to assimilate high-density observations, the data-selection procedure allows the use of larger localization domains, which may lead to a more balanced analysis. Results from the use of this data selection technique with two-dimensional linear and nonlinear advection models, using both in situ and remote sounding observations, are discussed.
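
The subspace limitation described above is easy to demonstrate: ensemble-filter increments are linear combinations of the ensemble perturbations, so they span at most (ensemble size - 1) directions of the state space. A small illustrative check:

```python
import numpy as np

rng = np.random.default_rng(0)
n_state, n_ens = 100, 10
X = rng.standard_normal((n_state, n_ens))     # ensemble of model states
Xp = X - X.mean(axis=1, keepdims=True)        # perturbations about the mean
print(np.linalg.matrix_rank(Xp))              # at most n_ens - 1 = 9

# Any increment an ensemble filter can produce has the form Xp @ w, so at
# most 9 independent directions of this 100-dimensional state are corrected.
increment = Xp @ rng.standard_normal(n_ens)
```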

Relevance: 10.00%

Abstract:

We consider the Dirichlet and Robin boundary value problems for the Helmholtz equation in a non-locally perturbed half-plane, modelling time-harmonic acoustic scattering of an incident field by, respectively, sound-soft and impedance infinite rough surfaces. Recently proposed novel boundary integral equation formulations of these problems are discussed. It is usual in practical computations to truncate the infinite rough surface, solving a boundary integral equation on a finite section of the boundary, of length 2A, say. In the case of surfaces of small amplitude and slope we prove the stability and convergence as A→∞ of this approximation procedure. For surfaces of arbitrarily large amplitude and/or surface slope we prove stability and convergence of a modified finite section procedure in which the truncated boundary is ‘flattened’ in finite neighbourhoods of its two endpoints.
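
A minimal sketch of the finite-section idea: replace the integral over the infinite boundary by one over [-A, A] and solve the resulting second-kind equation on a grid. The smooth, decaying toy kernel stands in for the actual Helmholtz boundary-integral kernel.

```python
import numpy as np

def finite_section_solve(k, f, A, n=400):
    """Nystrom-style solve of phi(x) - int_{-A}^{A} k(x,y) phi(y) dy = f(x)
    with a simple uniform-grid quadrature."""
    y, h = np.linspace(-A, A, n, retstep=True)
    K = k(y[:, None], y[None, :])                    # kernel matrix on the grid
    phi = np.linalg.solve(np.eye(n) - h * K, f(y))   # (I - K_A) phi = f
    return y, phi

k = lambda x, y: 0.1 * np.exp(-np.abs(x - y))        # illustrative toy kernel
f = lambda x: np.exp(-x ** 2)                        # illustrative incident term
y, phi = finite_section_solve(k, f, A=20.0)
# Stability and convergence of phi as A -> infinity is what the paper proves
# (for small amplitude/slope), motivating the flattened variant otherwise.
```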

Relevance: 10.00%

Abstract:

This article examines the potential to improve numerical weather prediction (NWP) by estimating upper and lower bounds on predictability, revisiting the original study of Lorenz (1982) but applied to the most recent version of the European Centre for Medium-Range Weather Forecasts (ECMWF) forecast system, for both the deterministic and ensemble prediction systems (EPS). These bounds are contrasted with those from an older version of the same NWP system to see how they have changed as the system has improved. The computations were performed for the earlier seasons of DJF 1985/1986 and JJA 1986 and the later seasons of DJF 2010/2011 and JJA 2011, using the 500-hPa geopotential height field. Results indicate that for this field we may be approaching the limit of deterministic forecasting, so that further improvements might only be obtained by improving the initial state. The results also show that predictability calculations with earlier versions of the model may overestimate potential forecast skill, possibly because of insufficient internal variability in the model and because recent versions of the model represent the true atmospheric evolution more realistically. The same methodology is applied to the EPS to calculate upper and lower bounds of predictability of the ensemble mean forecast, in order to explore how ensemble forecasting could extend the limits of the deterministic forecast. The results show that there is large potential to improve the ensemble predictions, but that the increased predictability of the ensemble mean comes with a trade-off in information, as the forecasts become increasingly smoothed with time. From around the 10-d forecast time, the ensemble mean begins to converge towards climatology. Until this point, the ensemble mean is able to predict the main features of the large-scale flow accurately and with high consistency from one forecast cycle to the next. By the 15-d forecast time, the ensemble mean has lost information, with the anomaly of the flow strongly smoothed out. In contrast, the control forecast is much less consistent from run to run, but provides more detailed (unsmoothed), if less useful, information.
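
A sketch of the Lorenz (1982) device underlying such bounds: the RMS difference between pairs of forecasts started a day apart but verifying at the same time serves as a proxy for error growth. The array layout and the synthetic data are hypothetical.

```python
import numpy as np

def lorenz_error_curves(forecasts):
    """forecasts[start_day, lead_day, grid_point]: RMS difference between a
    forecast at lead+1 from day s and one at lead from day s+1, which are
    valid at the same time."""
    n_start, n_lead, _ = forecasts.shape
    growth = []
    for lead in range(n_lead - 1):
        d = forecasts[:-1, lead + 1, :] - forecasts[1:, lead, :]
        growth.append(np.sqrt(np.mean(d ** 2)))
    return np.array(growth)

# Synthetic demo: independent random-walk 'forecasts' drift apart with lead.
rng = np.random.default_rng(0)
forecasts = np.cumsum(rng.standard_normal((20, 15, 50)), axis=1)
print(lorenz_error_curves(forecasts))   # differences grow with forecast lead
```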

Relevance: 10.00%

Abstract:

Approximate Bayesian computation (ABC) methods make use of comparisons between simulated and observed summary statistics to overcome the problem of computationally intractable likelihood functions. As the practical implementation of ABC requires computations based on vectors of summary statistics rather than full data sets, a central question is how to derive low-dimensional summary statistics from the observed data with minimal loss of information. In this article we provide a comprehensive review and comparison of the performance of the principal methods of dimension reduction proposed in the ABC literature. The methods are split into three non-mutually exclusive classes: best subset selection methods, projection techniques and regularization. In addition, we introduce two new methods of dimension reduction. The first is a best subset selection method based on the Akaike and Bayesian information criteria, and the second uses ridge regression as a regularization procedure. We illustrate the performance of these dimension reduction techniques through the analysis of three challenging models and data sets.
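
A minimal ABC rejection sketch showing where the summary statistics enter: parameter draws are kept when simulated summaries fall within eps of the observed ones. The model, prior and summaries are illustrative stand-ins, not those of the article.

```python
import numpy as np

rng = np.random.default_rng(1)
obs = rng.normal(2.0, 1.0, size=200)          # "observed" data
s_obs = np.array([obs.mean(), obs.std()])     # low-dimensional summaries

def abc_rejection(s_obs, n_draws=50_000, eps=0.1):
    theta = rng.uniform(-5, 5, size=n_draws)  # prior draws for the unknown mean
    accepted = []
    for t in theta:
        sim = rng.normal(t, 1.0, size=200)
        s_sim = np.array([sim.mean(), sim.std()])
        if np.linalg.norm(s_sim - s_obs) < eps:   # summaries, not full data
            accepted.append(t)
    return np.array(accepted)

posterior = abc_rejection(s_obs)
print(posterior.mean(), posterior.size)       # concentrates near the true mean 2.0
```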