50 results for "Thread safe parallel run-time"
Abstract:
This paper describes the design, implementation and testing of a high-speed controlled stereo “head/eye” platform which facilitates the rapid redirection of gaze in response to visual input. It details the mechanical device, which is based around geared DC motors, and describes hardware aspects of the controller and vision system, which are implemented on a reconfigurable network of general-purpose parallel processors. The servo-controller is described in detail and higher-level gaze and vision constructs are outlined. The paper gives performance figures gained both from mechanical tests on the platform alone and from closed-loop tests on the entire system using visual feedback from a feature detector.
Abstract:
The poor performance of the Stock Market in the US up to the middle of 2003 has meant that REITs are increasingly being seen as an attractive addition to the mixed-asset portfolio. However, there is little evidence to indicate the consistency of the role REITs should play in the mixed-asset portfolio over different investment horizons. The results highlight that REITs do play a significant role over both different time horizons and holding periods. The findings show that REITs' attractiveness as a diversification asset increases as the holding period increases. In addition, their diversification qualities span the entire efficient frontier, providing return enhancement properties at the lower end and switching to risk reduction qualities at the top end of the frontier.
Abstract:
Proposed is a unique cell histogram architecture which will process k data items in parallel to compute 2^q histogram bins per time step. An array of m/2^q cells computes an m-bin histogram with a speed-up factor of k; k ⩾ 2 makes it faster than current dual-ported memory implementations. Furthermore, simple mechanisms for conflict-free storing of the histogram bins into an external memory array are discussed.
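To make the division of labour concrete, here is a minimal software sketch of the cell idea, assuming each cell owns 2^q consecutive bins and k samples arrive per time step; the names and the Python rendering are ours, not the paper's.

def make_cells(m, q):
    """Partition an m-bin histogram into m // 2**q cells of 2**q bins each."""
    width = 2 ** q
    return [[0] * width for _ in range(m // width)]

def step(cells, q, samples):
    """Consume k samples in one time step. Each cell updates only its own
    bins, so no two cells ever touch the same counter (conflict-free)."""
    for value in samples:
        cell, offset = divmod(value, 2 ** q)
        cells[cell][offset] += 1

cells = make_cells(m=16, q=2)            # 4 cells of 4 bins each
step(cells, q=2, samples=[0, 5, 5, 15])  # k = 4 items in one step
print([count for cell in cells for count in cell])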
Abstract:
Many numerical models for weather prediction and climate studies are run at resolutions that are too coarse to resolve convection explicitly, but too fine to justify the local equilibrium assumed by conventional convective parameterizations. The Plant-Craig (PC) stochastic convective parameterization scheme, developed in this paper, solves this problem by removing the assumption that a given grid-scale situation must always produce the same sub-grid-scale convective response. Instead, for each timestep and gridpoint, one of the many possible convective responses consistent with the large-scale situation is randomly selected. The scheme requires as input the large-scale state, as opposed to the instantaneous grid-scale state, but must nonetheless be able to account for genuine variations in the large-scale situation. Here we investigate the behaviour of the PC scheme in three-dimensional simulations of radiative-convective equilibrium, demonstrating in particular that the space-time averaging required to produce a good representation of the input large-scale state is not in conflict with the requirement to capture large-scale variations. The resulting equilibrium profiles agree well with those obtained from established deterministic schemes, and with corresponding cloud-resolving model simulations. Unlike the conventional schemes, the statistics for mass flux and rainfall variability from the PC scheme also agree well with relevant theory and vary appropriately with spatial scale. The scheme is further shown to adapt automatically to changes in grid length and in forcing strength.
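As a rough illustration of what "randomly selecting one of the many possible convective responses" can look like, here is a hedged Python sketch: a Poisson-distributed number of plumes with exponentially distributed mass fluxes, which is the flavour of the Plant-Craig framework; the function names and parameter values are ours.

import math
import random

def _poisson(lam, rng):
    """Knuth's simple Poisson sampler (adequate for small means)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def sample_convective_response(M, m_bar, rng=None):
    """Draw one convective realisation consistent with a large-scale mass
    flux M: a Poisson number of plumes, each with an exponentially
    distributed cloud-base mass flux of mean m_bar."""
    rng = rng or random.Random()
    n_plumes = _poisson(M / m_bar, rng)
    return [rng.expovariate(1.0 / m_bar) for _ in range(n_plumes)]

# Averaged over many draws, sum(sample_convective_response(0.05, 0.01))
# fluctuates about M = 0.05; the draw-to-draw variability is the point.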
Abstract:
This paper uses long-term regional construction data to investigate whether increases in infrastructure investment in the English regions lead to subsequent rises in housebuilding and new commercial property, using time series modeling. Both physical (roads and harbours) and social (education and health) infrastructure impacts are investigated across nine regions in England. Significant effects for physical infrastructure are found across most regions and, also, some evidence of a social infrastructure effect. The results are not consistent across regions, which may be due to geographical differences and to network and diversionary effects. However, the results do suggest that infrastructure has some impact, albeit following differential lag structures. These results provide a test of the hypothesis of the economic benefits of infrastructure investment in an approach that has not been used before.
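For readers unfamiliar with this style of test, the following is a hedged sketch of a distributed-lag regression in Python with statsmodels; the variable names and the lag depth of four are illustrative, not taken from the paper.

import numpy as np
import statsmodels.api as sm

def lagged(x, lag):
    """Shift a series forward by `lag` periods, padding the start with NaN."""
    x = np.asarray(x, dtype=float)
    out = np.full(len(x), np.nan)
    out[lag:] = x[:len(x) - lag]
    return out

def distributed_lag_fit(housing, infra, max_lag=4):
    """Regress housebuilding on lags 1..max_lag of infrastructure investment."""
    X = np.column_stack([lagged(infra, l) for l in range(1, max_lag + 1)])
    keep = ~np.isnan(X).any(axis=1)           # drop the NaN-padded start
    model = sm.OLS(np.asarray(housing, dtype=float)[keep],
                   sm.add_constant(X[keep]))
    return model.fit()   # .params and .pvalues reveal the lag structure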
Abstract:
This paper sets out progress during the first eighteen months of doctoral research into the City of London office market. The overall aim of the research is to explore relationships between office rents and the economy in the UK over the last 150 years. To do this, a database of lettings has been created from which a long run index of City office rents can be constructed. With this index, it should then be possible to analyse trends in rents and relationships with their long run determinants. The focus of this paper is on the creation of the rent database. First, it considers the existing secondary sources of long run rental data for the UK. This highlights a lack of information for years prior to 1970 and the need for primary data collection if earlier periods are to be studied. The paper then discusses the selection of the City of London and of the time period chosen for research. After this, it describes how a dataset covering the period 1860-1960 has been assembled using the records of property companies active in the City office market. It is hoped that, if successful, this research will contribute to existing knowledge on the long run characteristics of commercial real estate. In particular, it should add a price dimension (rents) to the existing long run information on stock/supply and investment. Hence, it should enable a more complete picture of the development and performance of commercial real estate through time to be gained.
Abstract:
The benefits of property in the mixed-asset portfolio have been the subject of a number of studies, both in the UK and around the world. The traditional way of investigating this issue is to use MPT, with the results suggesting that Property should play a significant role in the mixed-asset portfolio. These results are not without criticism, which generally revolves around the quality and quantity of the property data series. To overcome these deficiencies, this paper uses cointegration methodology, which examines the longer-term time-series behaviour of various asset markets, applied to a very long-run desmoothed data series. Using a number of different cointegration tests, both pair-wise and multivariate, the results show, in unambiguous terms, that there is no contemporaneous cointegration between the major asset classes Property, Equities and Bonds. The implication is that Property does indeed have a risk-reducing role to play in the long-run strategic mixed-asset portfolio. This result is of particular relevance to institutions such as pension funds and life insurance companies, which wish to hold investments for the long term.
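As a pointer to how such tests are run in practice, here is a minimal Python sketch using statsmodels; the three series are placeholders for the desmoothed long-run data used in the paper, and the test settings are illustrative.

import numpy as np
from statsmodels.tsa.stattools import coint
from statsmodels.tsa.vector_ar.vecm import coint_johansen

def pairwise_tests(series):
    """Engle-Granger test for each pair of price series in a dict."""
    names = list(series)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            tstat, pvalue, _ = coint(series[a], series[b])
            print(f"{a} vs {b}: p = {pvalue:.3f}")  # large p => no cointegration

def johansen_test(series):
    """Johansen trace test across all assets jointly."""
    data = np.column_stack([series[name] for name in series])
    result = coint_johansen(data, det_order=0, k_ar_diff=1)
    print("trace statistics:", result.lr1)  # compare with result.cvt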
Abstract:
A parallel processor architecture based on a communicating sequential processor chip, the transputer, is described. The architecture is easily linearly extensible, enabling separate functions to be included in the controller. To demonstrate the power of the resulting controller, some experimental results are presented comparing PID and full inverse dynamics control on the first three joints of a Puma 560 robot. Also examined are some of the sample-rate issues raised by the asynchronous updating of inertial parameters, and the need for full inverse dynamics at every sample interval is questioned.
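For concreteness, the two control laws being compared look roughly as follows; this is a generic sketch, with the dynamics terms M, C and g supplied as placeholder callables rather than the paper's actual controller code.

import numpy as np

def pid_torque(e, e_int, e_dot, Kp, Ki, Kd):
    """Independent-joint PID: tau = Kp e + Ki * integral(e) + Kd * de/dt."""
    return Kp @ e + Ki @ e_int + Kd @ e_dot

def inverse_dynamics_torque(q, qd, qdd_ref, M, C, g):
    """Full inverse (computed-torque) dynamics:
    tau = M(q) qdd_ref + C(q, qd) + g(q).
    Evaluating M, C and g at every sample is what the asynchronous updating
    of inertial parameters, questioned in the paper, seeks to avoid."""
    return M(q) @ qdd_ref + C(q, qd) + g(q)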
Abstract:
In this paper, we extend to the time-harmonic Maxwell equations the p-version analysis technique developed in [R. Hiptmair, A. Moiola and I. Perugia, Plane wave discontinuous Galerkin methods for the 2D Helmholtz equation: analysis of the p-version, SIAM J. Numer. Anal., 49 (2011), 264-284] for Trefftz-discontinuous Galerkin approximations of the Helmholtz problem. While error estimates in a mesh-skeleton norm are derived parallel to the Helmholtz case, the derivation of estimates in a mesh-independent norm requires new twists in the duality argument. The particular case where the local Trefftz approximation spaces are built of vector-valued plane wave functions is considered, and convergence rates are derived.
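For orientation, a standard choice of vector-valued plane wave basis (our notation, not necessarily the paper's) is

\[
  \mathbf{w}_\ell(\mathbf{x}) = \mathbf{a}_\ell \, e^{i k \, \mathbf{d}_\ell \cdot \mathbf{x}},
  \qquad |\mathbf{d}_\ell| = 1, \qquad \mathbf{a}_\ell \cdot \mathbf{d}_\ell = 0,
\]

so that each basis function satisfies \(\nabla \times \nabla \times \mathbf{w} - k^2 \mathbf{w} = \mathbf{0}\) exactly; it is this property that makes the local approximation spaces Trefftz spaces for the time-harmonic Maxwell operator.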
Abstract:
In a world where massive amounts of data are recorded on a large scale, we need data mining technologies to gain knowledge from the data in a reasonable time. The Top Down Induction of Decision Trees (TDIDT) algorithm is a very widely used technology for predicting the classification of newly recorded data. However, alternative technologies have been derived that often produce better rules but do not scale well on large datasets. One such alternative to TDIDT is the PrismTCS algorithm. PrismTCS performs particularly well on noisy data but does not scale well on large datasets. In this paper we introduce Prism and investigate its scaling behaviour. We describe how we improved the scalability of the serial version of Prism and investigate its limitations. We then describe our work to overcome these limitations by developing a framework to parallelise algorithms of the Prism family and similar algorithms. We also present the scale-up results of a first prototype implementation.
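To fix ideas, the core Prism specialisation step works roughly as sketched below: a rule for a target class is grown by greedily adding the attribute-value test with the highest conditional probability of the class. This is a simplified illustration of ours; tie-breaking and the TCS ordering heuristic are omitted.

def induce_rule(rows, target, cls):
    """Grow one rule (a list of attribute-value tests) for class `cls`.
    rows is a list of dicts mapping attribute names to values."""
    rule, covered = [], rows
    while covered and any(r[target] != cls for r in covered):
        candidates = {(a, v) for r in covered for a, v in r.items()
                      if a != target and (a, v) not in rule}
        if not candidates:
            break  # attributes exhausted; cannot specialise further
        best = max(candidates,
                   key=lambda av: conditional_prob(covered, target, cls, *av))
        rule.append(best)
        covered = [r for r in covered if r[best[0]] == best[1]]
    return rule

def conditional_prob(rows, target, cls, attr, value):
    """P(class | attribute = value) over the currently covered rows."""
    matches = [r for r in rows if r[attr] == value]
    return sum(r[target] == cls for r in matches) / len(matches) if matches else 0.0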
Abstract:
Advances in hardware and software technology enable us to collect, store and distribute large quantities of data on a very large scale. Automatically discovering and extracting hidden knowledge in the form of patterns from these large data volumes is known as data mining. Data mining technology is not only a part of business intelligence, but is also used in many other application areas such as research, marketing and financial analytics. For example, medical scientists can use patterns extracted from historic patient data to determine whether a new patient is likely to respond positively to a particular treatment; marketing analysts can use patterns extracted from customer data for future advertisement campaigns; and finance experts are interested in patterns that forecast the development of certain stock market shares for investment recommendations. However, extracting knowledge in the form of patterns from massive data volumes imposes a number of computational challenges in terms of processing time, memory, bandwidth and power consumption. These challenges have led to the development of parallel and distributed data analysis approaches and the utilisation of Grid and Cloud computing. This chapter gives an overview of parallel and distributed computing approaches and how they can be used to scale up data mining to large datasets.
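A minimal illustration of the data-parallel pattern such approaches share (partition the data, compute partial results per worker, merge) is given below for simple frequency counting; real distributed miners follow the same shape with heavier local computations.

from collections import Counter
from multiprocessing import Pool

def local_counts(partition):
    """The per-worker step: mine a partial result from one data partition."""
    return Counter(partition)

def parallel_counts(records, n_workers=4):
    chunk = max(1, len(records) // n_workers)
    partitions = [records[i:i + chunk] for i in range(0, len(records), chunk)]
    with Pool(n_workers) as pool:
        partials = pool.map(local_counts, partitions)
    total = Counter()
    for partial in partials:
        total.update(partial)   # merge step: partial results combine associatively
    return total

if __name__ == "__main__":
    print(parallel_counts(["a", "b", "a", "c"] * 1000))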
Abstract:
A parallel formulation of an algorithm for the histogram computation of n data items using on-the-fly data decomposition and a novel quantum-like representation (QR) is developed. The QR transformation separates multiple data read operations from multiple bin update operations, thereby making it easier to bind data items to their corresponding histogram bins. Under this model, the number of steps required to compute the histogram is n/s + t, where s is a speedup factor and t is associated with pipeline latency. Here, we show that an overall speedup factor s of up to eight is available. Our evaluation also shows that each of these cells has lower area/time complexity than similar proposals found in the literature.
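The stated step count is easy to sanity-check numerically; in the snippet below the latency t = 16 is an illustrative value of ours, not a figure from the paper.

def histogram_steps(n, s, t):
    """Steps to histogram n items with speedup factor s and pipeline latency t."""
    return n / s + t

for s in (1, 2, 4, 8):                       # up to the eightfold speedup
    print(s, histogram_steps(n=1_000_000, s=s, t=16))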
Abstract:
The question of linear sheared-disturbance evolution in constant-shear parallel flow is here reexamined with regard to the temporary-amplification phenomenon first noted by Orr in 1907. The results apply directly to Rossby waves on a beta-plane, and are also relevant to the Eady model of baroclinic instability. It is shown that an isotropic initial distribution of standing waves maintains a constant energy level throughout the shearing process, the amplification of some waves being precisely balanced by the decay of the others. An expression is obtained for the energy of a distribution of disturbances whose wavevectors lie within a given angular wedge, and an upper bound is derived. It is concluded that the case for ubiquitous amplification made in recent studies may have been somewhat overstated: while carefully-chosen individual Fourier components can amplify considerably before they decay, a general distribution will tend to exhibit little or no amplification.
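For a single Fourier mode the mechanism can be written down directly (our notation, not necessarily the paper's): in a constant shear U = S y the cross-stream wavenumber tilts over as l(t) = l_0 - S k t, and with disturbance vorticity conserved the mode's energy evolves as

\[
  \frac{E(t)}{E(0)} = \frac{k^2 + l_0^2}{k^2 + (l_0 - S k t)^2},
\]

which peaks when l(t) = 0 (the Orr amplification) and decays thereafter. Summing such factors over an isotropic distribution of modes, the gains and losses cancel, which is the constant-energy result stated above.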
Abstract:
A parallel pipelined array of cells suitable for real-time computation of histograms is proposed. The cell architecture builds on previous work based on C-slow retiming techniques and can be clocked at a 65 percent higher frequency than previous arrays. The new arrays can be exploited for higher throughput, particularly when dual data rate sampling techniques are used to operate on single streams of data from image sensors. In this way, the new cell operates on a p-bit data bus, which is more convenient for interfacing to camera sensors or to microprocessors in consumer digital cameras.
Abstract:
The time to process each of the W/B processing blocks of a median calculation method on a set of N W-bit integers is improved here by a factor of three compared to the literature. Parallelism uncovered in blocks containing B-bit slices is exploited by independent accumulative parallel counters, so that the median is calculated faster than any previously known method for any values of N and W. The improvements to the method are discussed in the context of calculating the median of a moving set of N integers, for which a pipelined architecture is developed. An extra benefit of a smaller area for the architecture is also reported.
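A software rendering of the bit-slice idea, resolving the median one bit at a time from the most significant bit using population counts, is sketched below with B = 1 for clarity; the paper's hardware processes W/B wider blocks with parallel counters.

def bitslice_median(values, W):
    """Lower median of N W-bit unsigned integers, MSB-first refinement."""
    rank = (len(values) - 1) // 2          # index of the lower median
    median = 0
    for bit in range(W - 1, -1, -1):
        zeros = [v for v in values if not (v >> bit) & 1]
        if rank < len(zeros):              # the median has a 0 in this slice
            values = zeros
        else:                              # it has a 1; skip past the 0 group
            rank -= len(zeros)
            values = [v for v in values if (v >> bit) & 1]
            median |= 1 << bit
    return median

print(bitslice_median([9, 3, 14, 7, 2], W=4))   # -> 7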