187 results for Cable-Driven Parallel Manipulator


Relevance: 20.00%

Abstract:

A parallel pipelined array of cells suitable for real-time computation of histograms is proposed. The cell architecture builds on previous work and now allows operating on a stream of data at one pixel per clock cycle. This new cell is better suited to interfacing with camera sensors or with microprocessors with 8-bit data buses, which are common in consumer digital cameras. Arrays using the proposed cells are obtained via C-slow retiming techniques and can be clocked at a 65% higher frequency than previous arrays, achieving over 80% of the performance of two-pixel-per-clock-cycle parallel pipelined arrays.
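
As an illustration only, the following C fragment emulates in software the behaviour the abstract describes: a stream of 8-bit pixels consumed at one per "clock" tick, with each cell owning one bin and all cells comparing in parallel in hardware. The names and structure are assumptions for this sketch, not the paper's cell design.

```c
/* Software emulation of a histogram cell array: one 8-bit pixel
 * enters per "clock" tick; the cell whose bin matches increments.
 * Names (cell_t, NBINS) are illustrative, not from the paper. */
#include <stdint.h>
#include <stdio.h>

#define NBINS 256

typedef struct {
    uint8_t  bin;     /* the bin value this cell is responsible for */
    uint32_t count;   /* occurrences seen so far                    */
} cell_t;

static void stream_pixels(cell_t cells[NBINS],
                          const uint8_t *pixels, size_t n)
{
    for (size_t t = 0; t < n; t++)      /* one pixel per tick        */
        for (int c = 0; c < NBINS; c++) /* all cells compare at once
                                           in hardware; serial here  */
            if (cells[c].bin == pixels[t])
                cells[c].count++;
}

int main(void)
{
    cell_t cells[NBINS] = {{0, 0}};
    for (int i = 0; i < NBINS; i++) cells[i].bin = (uint8_t)i;

    uint8_t img[] = {0, 3, 3, 255, 3};
    stream_pixels(cells, img, sizeof img);
    printf("bin 3: %u\n", cells[3].count);  /* prints 3 */
    return 0;
}
```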

Relevance: 20.00%

Abstract:

A parallel formulation of an algorithm for computing the histogram of n data items is developed, using on-the-fly data decomposition and a novel quantum-like representation (QR). The QR transformation separates multiple data-read operations from multiple bin-update operations, making it easier to bind data items to their corresponding histogram bins. Under this model the histogram is computed in n/s + t steps, where s is a speedup factor and t is the pipeline latency. We show that an overall speedup factor s of up to eight is available. Our evaluation also shows that each of these cells requires less area/time complexity than similar proposals found in the literature.
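
The quantum-like representation itself is specific to the paper and is not reproduced here; the sketch below only illustrates the n/s + t accounting with a plain s-way decomposition, where s lanes each consume one item per step and a fixed merge pass stands in for the latency t. All names are illustrative.

```c
/* s-way histogram decomposition: n items take ~n/s "steps" when s
 * lanes each consume one item per step, plus a fixed merge cost
 * playing the role of the pipeline latency t. */
#include <stdint.h>
#include <string.h>

#define NBINS 256
#define MAXLANES 8

void histogram_s_way(const uint8_t *data, size_t n, int s,
                     uint32_t out[NBINS])
{
    uint32_t lane[MAXLANES][NBINS];     /* per-lane bins, s <= 8 */
    memset(lane, 0, sizeof lane);

    /* Each "step" feeds one item to each of the s lanes. */
    for (size_t i = 0; i < n; i++)
        lane[i % s][data[i]]++;

    /* Merge: the fixed extra cost corresponding to latency t. */
    memset(out, 0, NBINS * sizeof out[0]);
    for (int l = 0; l < s; l++)
        for (int b = 0; b < NBINS; b++)
            out[b] += lane[l][b];
}
```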

Relevance: 20.00%

Abstract:

Purpose – The purpose of this paper is to demonstrate the key strategic decisions involved in turning around a large multinational operating in a dynamic market. Design/methodology/approach – The paper is based on analysis of archival documents and a semi-structured interview with the chairman of the company, who is credited with its rescue. Findings – Turnaround is complex and involves both planned and emergent strategies. Progress is non-linear, requiring adjustment and changes in direction. Top management credibility and vision are critical to success. Rescue is only possible if the company has a strongly cash-generative business among its businesses. Speed of decision making, decisiveness and the ability to implement strategy are among the key ingredients of success. Originality/value – Turnaround is an under-researched area in strategy. This paper contributes to a better understanding of this important area and bridges the gap between theory and practice. It provides a practical view and demonstrates how a leading executive with significant expertise and a successful turnaround track record deals with the inherent dilemmas of turnaround.

Relevance: 20.00%

Abstract:

To improve the quality of healthcare services, an integrated, large-scale medical information system is needed that can adapt to a changing medical environment. In this paper, we propose a requirement-driven, hierarchically layered architecture for a healthcare information system. The system operates through a mapping mechanism between these layers and can thus organize its functions dynamically, adapting to users' requirements. Furthermore, we introduce organizational semiotics methods to capture and analyze user requirements through ontology charts and norms. On this basis, the structure of the user's requirement pattern (URP) is established as the driving factor of our system. Our research contributes to the design of healthcare system architectures that can adapt to a changing medical environment.
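
Purely as a hypothetical illustration of the URP-as-driving-factor idea (none of the names below come from the paper), a table-driven dispatch can map a requirement pattern to a dynamically organized set of functions:

```c
/* Hypothetical sketch: bind a user's requirement pattern (URP) to
 * a composition of service functions. The table and all names are
 * invented for illustration, not taken from the paper. */
#include <stdio.h>
#include <string.h>

typedef void (*service_fn)(void);

static void schedule_appointment(void) { puts("scheduling"); }
static void fetch_records(void)        { puts("fetching records"); }

typedef struct {
    const char *urp;        /* requirement pattern identifier   */
    service_fn  fns[4];     /* functions composed for this URP  */
} urp_binding;

static const urp_binding table[] = {
    { "outpatient-visit", { schedule_appointment, fetch_records } },
    { "records-review",   { fetch_records } },
};

/* Organise functions dynamically for a given requirement pattern. */
void dispatch(const char *urp)
{
    for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
        if (strcmp(table[i].urp, urp) == 0)
            for (int j = 0; j < 4 && table[i].fns[j]; j++)
                table[i].fns[j]();
}
```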

Relevance: 20.00%

Abstract:

We compare future changes in global mean temperature in response to different future scenarios which, for the first time, arise from an emission-driven rather than concentration-driven perturbed-parameter ensemble of a global climate model (GCM). These new GCM simulations sample uncertainties in atmospheric feedbacks, the land carbon cycle, ocean physics and aerosol sulphur-cycle processes. We find broader ranges of projected temperature responses when considering emission-driven rather than concentration-driven simulations (with 10th–90th percentile ranges spanning 1.7 K for the aggressive mitigation scenario and up to 3.9 K for the high-end, business-as-usual scenario). A small minority of simulations, resulting from combinations of strong atmospheric feedbacks and carbon-cycle responses, show temperature increases in excess of 9 K (RCP8.5), and even under aggressive mitigation (RCP2.6) temperatures in excess of 4 K. While the simulations point to much larger temperature ranges for emission-driven experiments, they do not change existing expectations (based on previous concentration-driven experiments) about the timescales over which different sources of uncertainty are important. The new simulations sample a range of future atmospheric concentrations for each emission scenario. For both SRES A1B and the Representative Concentration Pathways (RCPs), the concentration scenario used to drive GCM ensembles lies towards the lower end of our simulated distribution. This design decision (a legacy of previous assessments) is likely to lead concentration-driven experiments to under-sample strong feedback responses in future projections. Our ensemble of emission-driven simulations spans the global temperature response of the CMIP5 emission-driven simulations, except at the low end: combinations of low climate sensitivity and low carbon-cycle feedbacks lead a number of CMIP5 responses to lie below our ensemble range. The ensemble also simulates a number of high-end responses which lie above the CMIP5 carbon-cycle range. These high-end simulations can be linked to the sampling of stronger carbon-cycle feedbacks and of climate sensitivities above 4.5 K. This latter aspect highlights the priority of identifying real-world climate-sensitivity constraints which, if achieved, would reduce the upper bound of projected global mean temperature change. The ensembles of simulations presented here provide a framework to explore relationships between present-day observables and future changes, while the large spread of projected future changes highlights the ongoing need for such work.

Relevance: 20.00%

Abstract:

A novel method is presented for obtaining rigorous upper bounds on the finite-amplitude growth of instabilities to parallel shear flows on the beta-plane. The method relies on the existence of finite-amplitude Liapunov (normed) stability theorems, due to Arnol'd, which are nonlinear generalizations of the classical stability theorems of Rayleigh and Fjørtoft. Briefly, the idea is to use the finite-amplitude stability theorems to constrain the evolution of unstable flows in terms of their proximity to a stable flow. Two classes of general bounds are derived, and various examples are considered. It is also shown that, for a certain kind of forced-dissipative problem with dissipation proportional to vorticity, the finite-amplitude stability theorems (which were originally derived for inviscid, unforced flow) remain valid (though they are no longer strictly Liapunov); the saturation bounds therefore continue to hold under these conditions.
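
Schematically, and with illustrative notation rather than the paper's, the saturation-bound argument can be summarized as follows: if $Q_s$ is an Arnol'd-stable flow whose conserved disturbance invariant $N$ is bounded by norms with convexity constants $0 < c_1 \le c_2$, then the disturbance to the nearby unstable flow $Q$ is constrained by the proximity of $Q$ to $Q_s$.

```latex
% Schematic form of the argument (notation illustrative): with
% \delta q the departure from the stable flow Q_s,
\begin{align*}
  c_1 \,\|\delta q(t)\|^2 \;\le\; N[\delta q(t)]
      \;=\; N[\delta q(0)] \;\le\; c_2 \,\|\delta q(0)\|^2 ,
\end{align*}
% so the disturbance q' to the unstable flow Q saturates at
\begin{align*}
  \|q'(t)\| \;\le\; \Bigl(\tfrac{c_2}{c_1}\Bigr)^{1/2}
      \|\delta q(0)\| \;+\; \|Q - Q_s\| ,
\end{align*}
% i.e. finite-amplitude growth is bounded in terms of the
% proximity of the unstable flow to a stable one.
```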

Relevance: 20.00%

Abstract:

Disturbances of arbitrary amplitude are superposed on a basic flow which is assumed to be steady and either (a) two-dimensional, homogeneous, and incompressible (rotating or non-rotating) or (b) stably stratified and quasi-geostrophic. Flow over shallow topography is allowed in either case. The basic flow, as well as the disturbance, is assumed to be subject neither to external forcing nor to dissipative processes such as viscosity. An exact, local 'wave-activity conservation theorem' is derived in which the density A and flux F are second-order 'wave properties' or 'disturbance properties', meaning that they are O(a²) in magnitude as the disturbance amplitude a → 0, and that they are evaluable correct to O(a²) from linear theory, to O(a³) from second-order theory, and so on to higher orders in a. For a disturbance in the form of a single, slowly varying, non-stationary Rossby wavetrain, $\overline{F}/\overline{A}$ reduces approximately to the Rossby-wave group velocity, where the overbar denotes an appropriate averaging operator. F and A have the formal appearance of Eulerian quantities, but generally involve a multivalued function, the correct branch of which requires a certain amount of Lagrangian information for its determination. It is shown that, in a certain sense, the construction of conservable, quasi-Eulerian wave properties like A is unique and that the multivaluedness is inescapable in general. The connection with the concepts of pseudoenergy (quasi-energy), pseudomomentum (quasi-momentum), and 'Eliassen-Palm wave activity' is noted. The relationship of this and similar conservation theorems to dynamical fundamentals and to Arnol'd's nonlinear stability theorems is discussed in the light of recent advances in Hamiltonian dynamics, which show where such conservation theorems come from and how to construct them in other cases. An elementary proof of the Hamiltonian structure of two-dimensional Eulerian vortex dynamics is put on record, with explicit attention to the boundary conditions. The connection between Arnol'd's second stability theorem and the suppression of shear and self-tuning resonant instabilities by boundary constraints is discussed, and a finite-amplitude counterpart to Rayleigh's inflection-point theorem is noted.
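
In schematic notation, the conservation relation stated above reads:

```latex
% Local wave-activity conservation law, with A and F second-order
% in the disturbance amplitude a (notation schematic):
\begin{align*}
  \frac{\partial A}{\partial t} + \nabla \cdot \mathbf{F} = 0,
  \qquad A,\ \mathbf{F} = O(a^2) \ \text{as } a \to 0 ,
\end{align*}
% and, for a single slowly varying Rossby wavetrain,
\begin{align*}
  \overline{\mathbf{F}} \,/\, \overline{A} \;\approx\; \mathbf{c}_g ,
\end{align*}
% the Rossby-wave group velocity, the overbar denoting an
% appropriate averaging operator.
```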

Relevance: 20.00%

Abstract:

A parallel pipelined array of cells suitable for real-time computation of histograms is proposed. The cell architecture builds on previous work obtained via C-slow retiming techniques and can be clocked at a 65% higher frequency than previous arrays. The new arrays can be exploited for higher throughput, particularly when dual-data-rate sampling techniques are used to operate on single streams of data from image sensors. In this way, the new cell operates on a p-bit data bus, which is more convenient for interfacing with camera sensors or with microprocessors in consumer digital cameras.
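
As a hedged illustration of the dual-data-rate idea only (the bank structure below is an assumption, not the paper's cell design), two samples can be consumed per "clock" into interleaved bin banks that are merged at readout:

```c
/* Emulation of dual-data-rate sampling: two pixels per "clock",
 * one on each edge, into separate banks so both updates can
 * proceed in the same cycle; banks are merged at readout. */
#include <stdint.h>
#include <string.h>

#define NBINS 256

void histogram_ddr(const uint8_t *px, size_t n, uint32_t out[NBINS])
{
    uint32_t bank[2][NBINS];
    memset(bank, 0, sizeof bank);

    for (size_t i = 0; i + 1 < n; i += 2) {
        bank[0][px[i]]++;       /* rising-edge sample  */
        bank[1][px[i + 1]]++;   /* falling-edge sample */
    }
    if (n & 1) bank[0][px[n - 1]]++;  /* odd leftover sample */

    for (int b = 0; b < NBINS; b++)
        out[b] = bank[0][b] + bank[1][b];
}
```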

Relevance: 20.00%

Abstract:

A statistical model is derived relating the diurnal variation of sea surface temperature (SST) to the net surface heat flux and surface wind speed from a numerical weather prediction (NWP) model. The model is derived using fluxes and winds from the European Centre for Medium-Range Weather Forecasting (ECMWF) NWP model and SSTs from the Spinning Enhanced Visible and Infrared Imager (SEVIRI). In the model, diurnal warming has a linear dependence on the net surface heat flux integrated since (approximately) dawn and an inverse quadratic dependence on the maximum of the surface wind speed over the same period. The model coefficients are found by matching, for a given integrated heat flux, the frequency distributions of the maximum wind speed and the observed warming. Diurnal cooling, where it occurs, is modelled as proportional to the integrated heat flux divided by the heat capacity of the seasonal mixed layer. The model reproduces the statistics (mean, standard deviation, and 95th percentile) of the diurnal variation of SST seen by SEVIRI and reproduces the geographical pattern of mean warming seen by the Advanced Microwave Scanning Radiometer (AMSR-E). We use the functional dependencies in the statistical model to test the behaviour of two physical models of diurnal warming that display contrasting systematic errors.
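
One plausible functional form consistent with the stated dependencies (the exact form and any coefficients are assumptions, not taken from the paper) is:

```latex
% Warming: linear in the heat flux integrated since dawn, inverse
% quadratic in the maximum wind speed over the same period
% (u_0 is an assumed regularising offset):
\begin{align*}
  \Delta T_{\mathrm{warm}} \;\propto\;
      \frac{\hat{Q}}{(u_{\max} + u_0)^2},
  \qquad \hat{Q} = \int_{\mathrm{dawn}}^{t} Q_{\mathrm{net}}\,dt' ,
\end{align*}
% Cooling: integrated heat flux over the heat capacity of the
% seasonal mixed layer of depth h:
\begin{align*}
  \Delta T_{\mathrm{cool}} \;\propto\; \frac{\hat{Q}}{\rho\, c_p\, h} .
\end{align*}
```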

Relevance: 20.00%

Abstract:

The exchange between the open ocean and sub-ice-shelf cavities is important to both water-mass transformations and ice-shelf melting. Here we use a high-resolution (500 m) numerical model to investigate the degree to which eddies produced by frontal instability at the edge of a polynya are capable of transporting dense High Salinity Shelf Water (HSSW) underneath an ice shelf. The applied surface buoyancy flux and ice-shelf geometry are based on Ronne Ice Shelf in the southern Weddell Sea, an area of intense wintertime sea-ice production where a flow of HSSW into the cavity has been observed. Results show that eddies are able to enter the cavity at the southwestern corner of the polynya, where an anticyclonic rim current intersects the ice-shelf front. The size and time scale of the simulated eddies are in agreement with observations close to the Ronne Ice Front. The properties and strength of the inflow are sensitive to the prescribed total ice production, flushing the ice-shelf cavity at a rate of 0.2–0.4 × 10⁶ m³ s⁻¹ depending on polynya size and the magnitude of the surface buoyancy flux. Eddy-driven HSSW transport into the cavity is reduced by about 50% if the model grid resolution is coarsened to 2–5 km, so that eddies are no longer properly resolved.

Relevance: 20.00%

Abstract:

We have optimised the atmospheric radiation algorithm of the FAMOUS climate model on several hardware platforms. The optimisation involved translating the Fortran code to C and restructuring the algorithm around the computation of a single air column. Instead of the existing MPI-based domain decomposition, we used a task queue and a thread pool to schedule the computation of individual columns on the available processors. Finally, four air columns are packed together into a single data structure and computed simultaneously using Single Instruction Multiple Data (SIMD) operations. The modified algorithm runs more than 50 times faster on the Cell's Synergistic Processing Elements than on its main PowerPC processing element. On Intel-compatible processors, the new radiation code runs four times faster. On the tested graphics processor, using OpenCL, we find a speed-up of more than 2.5 times compared with the original code on the main CPU. Because the radiation code accounts for more than 60% of the total CPU time, FAMOUS as a whole executes more than twice as fast. Our version of the algorithm returns bit-wise identical results, which demonstrates the robustness of our approach. We estimate that this project required around two and a half man-years of work.
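
A minimal sketch of the scheduling idea, assuming a shared atomic counter as the task queue and a stand-in kernel (compute_column4 is invented here, not the FAMOUS code): workers pop columns four at a time so the inner kernel can be written with 4-wide SIMD.

```c
/* Task-queue/thread-pool column scheduling: a shared counter hands
 * out air columns to worker threads in groups of four. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define NCOLUMNS 1024
#define NTHREADS 4

static atomic_int next_task;                 /* shared task queue  */

static void compute_column4(int first)       /* stand-in kernel    */
{
    /* ... radiation kernel over columns first..first+3, written
     *     so the inner loops map onto 4-wide SIMD ... */
    (void)first;
}

static void *worker(void *arg)
{
    (void)arg;
    for (;;) {
        int c = atomic_fetch_add(&next_task, 4);  /* pop 4 columns */
        if (c >= NCOLUMNS) break;
        compute_column4(c);
    }
    return NULL;
}

int main(void)
{
    pthread_t pool[NTHREADS];
    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&pool[i], NULL, worker, NULL);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(pool[i], NULL);
    puts("all columns done");
    return 0;
}
```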

Relevance: 20.00%

Abstract:

Descent and spreading of high-salinity water generated by salt rejection during sea-ice formation in an Antarctic coastal polynya is studied using a hydrostatic, primitive-equation, three-dimensional ocean model, the Proudman Oceanographic Laboratory Coastal Ocean Modeling System (POLCOMS). The polynya is assumed to be a rectangle 100 km long and 30 km wide, and the salinity flux into the polynya at its surface is constant. The model has been run at high horizontal resolution (500 m), and the simulations reveal a buoyancy-driven coastal current. The coastal current is a robust feature and appears in a range of simulations designed to investigate the influence of a sloping bottom, variable bottom drag, variable vertical turbulent diffusivities, higher salinity flux, and an offshore position of the polynya. It is shown that bottom drag is the main factor determining the current width. This coastal current has not been produced by other numerical models of polynyas, which may be because those models were run at coarser resolutions. The coastal current becomes unstable upstream of its front when the polynya is adjacent to the coast. When the polynya is situated offshore, an unstable current is produced from the outset owing to the capture of cyclonic eddies. The effect of a coastal protrusion and a canyon on the current's motion is investigated; in particular, owing to the convex shape of the coastal protrusion, the current sheds a dipolar eddy.

Relevance: 20.00%

Abstract:

Garfield produces a critique of neo-minimalist art practice by demonstrating how the artist Melanie Jackson's Some things you are not allowed to send around the world (2003 and 2006) and the experimental film-maker Vivienne Dick's Liberty's booty (1980) – neither of which can be said to be about feeling 'at home' in the world, whether as a resident or as a nomad – examine global humanity through multi-positionality, excess and contingency. In doing so, both works begin to articulate a new cosmopolitan relationship with the local – or, rather, with many different localities – in one and the same maximalist sweep. 'Maximalism' in Garfield's coinage signifies an excessive overloading (through editing, collage, and the sheer density of the range of material) that enables viewers to insert themselves into the narrative of the work. In the art of both Jackson and Dick, Garfield detects a refusal to know or to judge the world; instead, there is an attempt to incorporate the complexities of its full range into the singular vision of the work, challenging the viewer to identify what is at stake.

Relevance: 20.00%

Abstract:

Exascale systems are the next frontier in high-performance computing and are expected to deliver performance of the order of 10^18 operations per second using massive multicore processors. Very large- and extreme-scale parallel systems pose critical algorithmic challenges, especially related to concurrency, locality and the need to avoid global communication patterns. This work investigates a novel protocol for dynamic group communication that can be used to remove the global communication requirement and to reduce the communication cost in parallel formulations of iterative data mining algorithms. The protocol is used to provide a communication-efficient parallel formulation of the k-means algorithm for cluster analysis. The approach is based on a collective communication operation for dynamic groups of processes and exploits non-uniform data distributions. Non-uniform data distributions can either be found in real-world distributed applications or be induced by means of multidimensional binary search trees. Analysis of the proposed dynamic group communication protocol shows that it does not introduce significant communication overhead. The parallel clustering algorithm has also been extended to accommodate an approximation error, which allows a further reduction of the communication costs. The effectiveness of the exact and approximate methods has been tested on a parallel computing system with 64 processors and in simulations with 1024 processing elements.
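
For contrast, the sketch below shows the conventional, globally communicating k-means update step: the MPI_Allreduce calls are precisely the global pattern the proposed dynamic-group protocol is designed to avoid (that protocol itself is not reproduced here). Dimensions and names are illustrative, and MPI_Init is assumed to have been called.

```c
/* Conventional parallel k-means update: each rank accumulates
 * local partial sums, then a GLOBAL reduction combines them --
 * the communication step the paper's protocol removes. */
#include <mpi.h>

#define K   8      /* clusters   */
#define DIM 4      /* dimensions */

void kmeans_step(const double *pts, int n_local, double cent[K][DIM])
{
    double sum[K][DIM] = {{0}};
    long   cnt[K] = {0};

    /* Assign each local point to its nearest centroid. */
    for (int i = 0; i < n_local; i++) {
        const double *p = pts + i * DIM;
        int best = 0; double bestd = 1e300;
        for (int k = 0; k < K; k++) {
            double d = 0;
            for (int j = 0; j < DIM; j++) {
                double diff = p[j] - cent[k][j];
                d += diff * diff;
            }
            if (d < bestd) { bestd = d; best = k; }
        }
        for (int j = 0; j < DIM; j++) sum[best][j] += p[j];
        cnt[best]++;
    }

    /* Global reduction across ALL processes. */
    MPI_Allreduce(MPI_IN_PLACE, sum, K * DIM, MPI_DOUBLE,
                  MPI_SUM, MPI_COMM_WORLD);
    MPI_Allreduce(MPI_IN_PLACE, cnt, K, MPI_LONG,
                  MPI_SUM, MPI_COMM_WORLD);

    /* Recompute centroids from the global sums. */
    for (int k = 0; k < K; k++)
        if (cnt[k] > 0)
            for (int j = 0; j < DIM; j++)
                cent[k][j] = sum[k][j] / cnt[k];
}
```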