961 results for Special purpose operations


Relevance: 100.00%

Publisher:

Abstract:

Purpose – This paper aims to provide a critical comment on complex funding systems. Design/methodology/approach – This is a critical comment written in the form of a poem. The poem is in the style of the English light opera duo Gilbert and Sullivan, and is a variation on their song "I Am the Very Model of a Modern Major-General" from The Pirates of Penzance. Findings – The poem spotlights financial failure. Originality/value – The poem spotlights the crazy names and poor transparency of special purpose vehicles.

Relevance: 100.00%

Publisher:

Abstract:

An Application-Specific Instruction-set Processor (ASIP) is a specialized processor tailored to run a particular application, or set of applications, efficiently. However, when there are multiple candidate applications in the target domain, it is difficult and time-consuming to find the optimum set of applications to implement. Existing ASIP design approaches perform this selection manually, based on a designer's knowledge. We cut down the number of candidate applications by devising a classification method that clusters similar applications based on the special-purpose operations they share. This significantly reduces the comparison overhead while yielding customized ASIP instruction sets that can benefit a whole family of related applications. Our method lets users quantify the degree of similarity between the sets of shared operations in order to control the size of the clusters. A case study involving twelve algorithms confirms that our approach successfully clusters similar algorithms based on the similarity of their component operations.
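A minimal sketch of the kind of shared-operation clustering the abstract describes, assuming a Jaccard similarity over sets of operation names and a tunable threshold; the application names, operation names and threshold value below are illustrative assumptions, not data from the paper:

```python
from itertools import combinations

def jaccard(a: set, b: set) -> float:
    """Degree of overlap between two sets of special-purpose operations."""
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster_by_shared_ops(apps: dict, threshold: float) -> list:
    """Greedy single-link clustering: two applications join the same cluster
    when the similarity of their operation sets reaches `threshold`.
    Raising `threshold` yields smaller, tighter clusters."""
    clusters = [{name} for name in apps]
    for a, b in combinations(apps, 2):
        if jaccard(apps[a], apps[b]) >= threshold:
            ca = next(c for c in clusters if a in c)
            cb = next(c for c in clusters if b in c)
            if ca is not cb:
                ca |= cb
                clusters.remove(cb)
    return clusters

# Hypothetical applications described by the operations they use.
apps = {
    "aes": {"xor", "sbox_lookup", "shift", "mul_gf2"},
    "des": {"xor", "sbox_lookup", "permute", "shift"},
    "fft": {"mul", "add", "bit_reverse", "twiddle"},
    "dct": {"mul", "add", "transpose"},
}
print(cluster_by_shared_ops(apps, threshold=0.4))
# [{'aes', 'des'}, {'fft', 'dct'}]
```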

Relevance: 100.00%

Publisher:

Abstract:

With the increasing importance of Application Domain Specific Processor (ADSP) design, a significant challenge is to identify special-purpose operations for implementation as a customized instruction. While many methodologies have been proposed for this purpose, they all work for a single algorithm chosen from the target application domain. Such algorithm-specific approaches are not suitable for designing instruction sets applicable to a whole family of related algorithms. For an entire range of related algorithms, this paper develops a methodology for identifying compound operations, as a basis for designing “domain-specific” Instruction Set Architectures (ISAs) that can efficiently run most of the algorithms in a given domain. Our methodology combines three different static analysis techniques to identify instruction sequences common to several related algorithms: identification of (non-branching) instruction sequences that occur commonly across the algorithms; identification of instruction sequences nested within iterative constructs that are thus executed frequently; and identification of commonly-occurring instruction sequences that span basic blocks. Choosing different combinations of these results enables us to design domain-specific special operations with different desired characteristics, such as performance or suitability as a library function. To demonstrate our approach, case studies are carried out for a family of thirteen string matching algorithms. Finally, the validity of our static analysis results is confirmed through independent dynamic analysis experiments and performance improvement measurements.
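One of the three static analyses described above, finding (non-branching) instruction sequences that occur in every algorithm of the family, can be illustrated as an n-gram intersection over basic blocks; the mnemonics and algorithm names below are illustrative assumptions rather than the paper's actual data:

```python
from itertools import chain

def ngrams(block, n):
    """All length-n instruction sequences inside one basic block (no branches crossed)."""
    return [tuple(block[i:i + n]) for i in range(len(block) - n + 1)]

def common_sequences(algorithms: dict, n: int = 3):
    """Return the length-n instruction sequences that appear in every algorithm.
    `algorithms` maps an algorithm name to its list of basic blocks, each block
    being a list of opcode mnemonics."""
    per_algorithm = [
        set(chain.from_iterable(ngrams(block, n) for block in blocks))
        for blocks in algorithms.values()
    ]
    return set.intersection(*per_algorithm)

# Hypothetical straight-line code for two string-matching kernels.
algorithms = {
    "naive":       [["load", "cmp", "add", "load", "cmp"]],
    "boyer_moore": [["load", "cmp", "sub", "add", "load", "cmp"]],
}
print(common_sequences(algorithms, n=2))   # e.g. ('load', 'cmp'), ('add', 'load')
```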

Relevance: 100.00%

Publisher:

Abstract:

This software engineering thesis presents a novel automated framework for identifying similar operations used by multiple algorithms that solve related computing problems. It provides an effective new way of performing multi-application algorithm analysis, employing fundamentally lightweight static analysis techniques compared with state-of-the-art approaches. Significant performance improvements are achieved across the target algorithms by optimising the identified similar operations, across several distinct application domains.

Relevance: 100.00%

Publisher:

Abstract:

General-purpose computing devices allow us to (1) customize computation after fabrication and (2) conserve area by reusing expensive active circuitry for different functions in time. We define RP-space, a restricted domain of the general-purpose architectural space focused on reconfigurable computing architectures. Two dominant features differentiate reconfigurable from special-purpose architectures and account for most of the area overhead associated with RP devices: (1) instructions which tell the device how to behave, and (2) flexible interconnect which supports task-dependent dataflow between operations. We can characterize RP-space by the allocation and structure of these resources and compare the efficiencies of architectural points across broad application characteristics. Conventional FPGAs fall at one extreme end of this space and their efficiency ranges over two orders of magnitude across the space of application characteristics. Understanding RP-space and its consequences allows us to pick the best architecture for a task and to search for more robust design points in the space. Our DPGA, a fine-grained computing device which adds small, on-chip instruction memories to FPGAs, is one such design point. For typical logic applications and finite-state machines, a DPGA can implement tasks in one-third the area of a traditional FPGA. TSFPGA, a variant of the DPGA which focuses on heavily time-switched interconnect, achieves circuit densities close to the DPGA, while reducing typical physical mapping times from hours to seconds. Rigid, fabrication-time organization of instruction resources significantly narrows the range of efficiency for conventional architectures. To avoid this performance brittleness, we developed MATRIX, the first architecture to defer the binding of instruction resources until run-time, allowing the application to organize resources according to its needs. Our focus MATRIX design point is based on an array of 8-bit ALU and register-file building blocks interconnected via a byte-wide network. With today's silicon, a single-chip MATRIX array can deliver over 10 Gop/s (8-bit ops). On sample image processing tasks, we show that MATRIX yields 10-20x the computational density of conventional processors. Understanding the cost structure of RP-space helps us identify these intermediate architectural points and may provide useful insight more broadly in guiding our continual search for robust and efficient general-purpose computing structures.

Relevance: 90.00%

Publisher:

Abstract:

Insulated Rail Joints (IRJs) are designed to electrically isolate two rails in rail tracks to control the signalling system for safer train operations. Unfortunately, the gapped section of the IRJs is structurally weak and often fails prematurely, especially in heavy haul tracks, which adversely affects service reliability and efficiency. IRJs suffer from a number of failure modes; railhead ratchetting at the gap is, however, regarded as the root cause and is the mode attended to in this thesis. Ratchetting increases with increasing wheel loads; in the absence of a life prediction model, effective management of IRJs under increased wagon wheel loads has become very challenging. The main aim of this thesis is therefore to develop a method for predicting the service life of IRJs. The distinct discontinuity of the railhead at the gap makes Hertzian theory and the rolling contact shakedown map, commonly used for continuously welded rails, inapplicable to the metal ratchetting of IRJs. The Finite Element (FE) technique is therefore used to explore the railhead metal ratchetting characteristics, with boundary conditions determined from a full-scale study of IRJ specimens under rolling contact of loaded wheels. A special-purpose test set-up containing a full-scale wagon wheel was used to apply rolling wheel loads to the railhead edges of the test specimens. The rail end face strain state was determined using a non-contact digital imaging technique and used to calibrate the FE model. The basic material parameters for this FE model were obtained through independent uniaxial, monotonic tensile tests on specimens cut from head-hardened virgin rails. The monotonic tensile test data were used to establish a cyclic load simulation model of the railhead steel specimen; the simulated cyclic load test provided the necessary data for Chaboche's three-component decomposed kinematic hardening model of plastic strain accumulation. A performance-based service life prediction algorithm for IRJs was established using the plastic strain accumulation obtained from the Chaboche model; the service lives predicted by this algorithm agree well with published data. The finite element model was also used to carry out a sensitivity study on the effect of wheel diameter on railhead metal plasticity. This study revealed that the depth of the plastic zone at the railhead edges is independent of wheel diameter; however, a larger wheel diameter is shown to increase the service life of IRJs.
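For reference, the decomposed kinematic hardening rule of Chaboche mentioned above is commonly written as follows (the thesis uses three backstress components); the general form and symbols are standard textbook notation, not parameter values taken from the thesis:

```latex
% Chaboche kinematic hardening with m decomposed backstress components
% (three in the thesis). C_i and \gamma_i are material parameters calibrated
% from cyclic test data; \dot{\varepsilon}^{p} is the plastic strain rate and
% \dot{p} the accumulated plastic strain rate.
\begin{align}
  \boldsymbol{\alpha} &= \sum_{i=1}^{m} \boldsymbol{\alpha}_i , \\
  \dot{\boldsymbol{\alpha}}_i &= \tfrac{2}{3}\, C_i\, \dot{\boldsymbol{\varepsilon}}^{\,p}
      - \gamma_i\, \boldsymbol{\alpha}_i\, \dot{p} .
\end{align}
```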

Relevance: 90.00%

Publisher:

Abstract:

Garbage collector performance in LISP systems on custom hardware has been substantially improved by the adoption of lifetime-based garbage collection techniques. To date, however, successful lifetime-based garbage collectors have required special-purpose hardware, or at least privileged access to data structures maintained by the virtual memory system. I present here a lifetime-based garbage collector requiring no special-purpose hardware or virtual memory system support, and discuss its performance.
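A minimal sketch of the lifetime heuristic behind such collectors (most objects die young, so the collector scans only the youngest generation and promotes survivors). The two-generation split, promotion policy and object model here are illustrative simplifications, not the collector described in the paper; a real implementation would track old-to-young pointers with a remembered set or write barrier instead of rescanning the old generation:

```python
class Obj:
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)

class GenerationalHeap:
    def __init__(self):
        self.young, self.old, self.roots = [], [], []

    def allocate(self, name, children=()):
        obj = Obj(name, children)
        self.young.append(obj)   # new objects start life in the young generation
        return obj

    def _reachable(self, frontier):
        seen = set()
        while frontier:
            obj = frontier.pop()
            if id(obj) not in seen:
                seen.add(id(obj))
                frontier.extend(obj.children)
        return seen

    def minor_collect(self):
        # Scan only the young generation: survivors are promoted to the old
        # generation, everything else allocated since the last collection is
        # reclaimed. (Here the whole old generation is rescanned for
        # old-to-young pointers; a real collector would use a remembered set.)
        live = self._reachable(list(self.roots) + list(self.old))
        survivors = [o for o in self.young if id(o) in live]
        self.old.extend(survivors)
        self.young = []
        return survivors

heap = GenerationalHeap()
a = heap.allocate("a")
b = heap.allocate("b", children=[a])
heap.allocate("garbage")          # unreachable: dies young and is reclaimed
heap.roots.append(b)
print([o.name for o in heap.minor_collect()])   # ['a', 'b']
```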

Relevance: 90.00%

Publisher:

Abstract:

This paper addresses the problem of efficiently computing the motor torques required to drive a lower-pair kinematic chain (e.g., a typical manipulator arm in free motion, or a mechanical leg in the swing phase) given the desired trajectory; i.e., the Inverse Dynamics problem. It investigates the high degree of parallelism inherent in the computations, and presents two "mathematically exact" formulations especially suited to high-speed, highly parallel implementations using special-purpose hardware or VLSI devices. In principle, the formulations should permit the calculations to run at a speed bounded only by I/O. The first presented is a parallel version of the recent linear Newton-Euler recursive algorithm. The time cost is also linear in the number of joints, but the real-time coefficients are reduced by almost two orders of magnitude. The second formulation reports a new parallel algorithm which shows that it is possible to improve upon the linear time dependency. The real time required to perform the calculations increases only as the log2 of the number of joints. Either formulation is susceptible to a systolic pipelined architecture in which complete sets of joint torques emerge at successive intervals of four floating-point operations. Hardware requirements necessary to support the algorithm are considered and found not to be excessive, and a VLSI implementation architecture is suggested. We indicate possible applications to incorporating dynamical considerations into trajectory planning, e.g., it may be possible to build an on-line trajectory optimizer.
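The log2-time result relies on recasting the joint-to-joint recursion as a prefix computation over an associative composition operator, which parallel hardware can evaluate in ceil(log2 n) combining rounds. A minimal sketch of that idea, using ordinary addition as a stand-in for the actual composition of joint quantities (the operator and the sample data are illustrative assumptions):

```python
def parallel_prefix(values, op):
    """Kogge-Stone style inclusive scan: after ceil(log2(n)) rounds, element i
    holds op(values[0], ..., values[i]). Within a round every update is
    independent, so on parallel hardware each round is a single time step."""
    x = list(values)
    n = len(x)
    step = 1
    while step < n:
        x = [x[i] if i < step else op(x[i - step], x[i]) for i in range(n)]
        step *= 2
    return x

# Stand-in for composing joint-to-joint quantities along a kinematic chain:
# plain addition here, but any associative operator combines the same way.
chain_quantities = [3, 1, 4, 1, 5, 9, 2, 6]
print(parallel_prefix(chain_quantities, op=lambda u, v: u + v))
# [3, 4, 8, 9, 14, 23, 25, 31] after log2(8) = 3 rounds
```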

Relevance: 90.00%

Publisher:

Abstract:

Since the late 1990s, asset securitization has established itself in the Brazilian market as an important alternative for raising funds. This financial innovation gives companies direct access to the capital markets to sell securities backed by their receivables portfolios, eliminating bank intermediation and enabling reductions in the cost of capital, even in comparison with conventional corporate debt securities. Securitization notes are, as a rule, issued by a special purpose vehicle (an FIDC or a securitization company) in order to segregate the risks of the originator/borrower from the securitized credits (the collateral backing the issue). In 2004, Law 11,076 created the new agribusiness securities (CDA-WA, CDCA, LCA and CRA), inspired by the real estate securitization legislation (SFI - Law 9,514/97), seeking to provide agribusiness with a new source of funds through the issuance of securities in the capital markets. Since then, a growing number of structured transactions using these instruments has been observed, demonstrating their functionality and potential. However, the volume of more sophisticated public offerings based on the issuance of FIDC quotas and CRAs is still very small relative to agribusiness demand for financing, especially considering the importance of this sector in Brazil. The sugar-energy sector is probably the agribusiness segment best placed for the development of securitization transactions, given characteristics such as scale, product standardization, the degree of consolidation of its corporate groups and its growth prospects, in addition to its strong environmental appeal. The securities associated with this segment therefore have a singular potential to overcome investors' natural resistance to financial investments in agriculture. This work investigates how the concept of securitization can be applied to agribusiness transactions, particularly in the sugar and ethanol sector. Based on a case study, operational aspects of a public issue of CRAs are analysed, highlighting the mechanisms and procedures adopted to deal with the particularities of agribusiness-backed securities. The study shows that structuring this type of transaction presents characteristics and challenges that differ from transactions based on other instruments, but that these are in principle manageable using traditional securitization techniques together with supplementary risk management mechanisms.

Relevance: 90.00%

Publisher:

Abstract:

Mode of access: Internet.

Relevance: 90.00%

Publisher:

Abstract:

This thesis deals with the challenging problem of designing systems able to perceive objects in underwater environments. In the last few decades, research activities in robotics have advanced the state of the art regarding the intervention capabilities of autonomous systems. The state of the art in fields such as localization and navigation, real-time perception and cognition, and safe action and manipulation, applied to ground environments (both indoor and outdoor), has now reached a readiness level that allows high-level autonomous operations. By contrast, the underwater environment remains a very difficult one for autonomous robots. Water influences the mechanical and electrical design of systems, interferes with sensors by limiting their capabilities, heavily impacts data transmission, and generally requires systems with low power consumption in order to enable reasonable mission durations. Interest in underwater applications is driven by the need to explore and intervene in environments in which human capabilities are very limited. Nowadays, most underwater field operations are carried out by manned or remotely operated vehicles, deployed for exploration and limited intervention missions. Manned vehicles, controlled directly on board, expose human operators to the risks of remaining in the field for the duration of the mission within a hostile environment. Remotely Operated Vehicles (ROVs) currently represent the most advanced technology for underwater intervention services available on the market. These vehicles can be remotely operated for long periods, but they need support from an oceanographic vessel with multiple teams of highly specialized pilots. Vehicles equipped with multiple state-of-the-art sensors and capable of autonomously planning missions have been deployed in the last ten years and exploited as observers of underwater fauna, the seabed, shipwrecks, and so on. On the other hand, underwater operations like object recovery and equipment maintenance are still challenging tasks to conduct without human supervision, since they require object perception and localization with much higher accuracy and robustness, to a degree seldom available in Autonomous Underwater Vehicles (AUVs). This thesis reports the study, from design to deployment and evaluation, of a general-purpose and configurable platform dedicated to stereo-vision perception in underwater environments. Several aspects related to the peculiar characteristics of the environment have been taken into account during all stages of system design and evaluation: depth of operation and light conditions, together with water turbidity and external weather, heavily impact perception capabilities. The vision platform proposed in this work is a modular system comprising off-the-shelf components for both the imaging sensors and the computational unit, linked by a high-performance Ethernet network. The adopted design philosophy aims at achieving high flexibility in terms of feasible perception applications, which should not be as limited as in the case of special-purpose, dedicated hardware. Flexibility is required by the variability of underwater environments, with water conditions ranging from clear to turbid, light backscattering varying with daylight and depth, strong color distortion, and other environmental factors. Furthermore, the proposed modular design ensures easier maintenance and updating of the system over time.
The performance of the proposed system, in terms of perception capabilities, has been evaluated in several underwater contexts, taking advantage of the opportunity offered by the MARIS national project. Design issues such as power consumption, heat dissipation and network capabilities have been evaluated in different scenarios. Finally, real-world experiments conducted in multiple and variable underwater contexts, including open sea waters, have led to the collection of several datasets that have been publicly released to the scientific community. The vision system has been integrated into a state-of-the-art AUV equipped with a robotic arm and gripper, and has been exploited in the robot control loop to successfully perform underwater grasping operations.

Relevance: 80.00%

Publisher:

Abstract:

Policy instruments of education, regulation, fines and inspection have all been utilised by Australian jurisdictions as they attempt to improve the poor performance of occupational health and safety (OH&S) in the construction industry. However, such policy frameworks have been largely uncoordinated across Australia, resulting in differing policy systems with differing requirements and compliance regimes. Such complexity, particularly for construction firms operating across jurisdictional borders, led to various attempts to improve the consistency of OH&S regulation across Australia, four of which are reviewed in this paper.
1. The Occupational Health and Safety Act 1991 (Commonwealth), which enabled certain organisations to opt out of state-based regulatory regimes.
2. The development of national standards, codes of practice and guidance documents by the National Occupational Health and Safety Council (NOHSC). The intent was that the OH&S requirements, principles and practices contained in these documents would be adopted by state and territory governments into their legislation and policy, thereby promoting regulatory consistency across Australia.
3. The attachment of conditions to special purpose payments from the Commonwealth to the States, in the form of OH&S accreditation with the Office of the Federal Safety Commissioner.
4. The development of national voluntary codes of OH&S practice for the construction industry.
It is interesting to note that the tempo of change has increased significantly since 2003, with the release of the findings of the Cole Royal Commission. This paper examines and evaluates each of these attempts to promote consistency across Australia. It concludes that while there is a high level of information sharing between jurisdictions, particularly around the NOHSC standards, a fragmented OH&S policy framework still remains in place across Australia. The utility of emergent industry initiatives, such as voluntary codes and guidelines for safer construction practices, in enhancing consistency is discussed.

Relevance: 80.00%

Publisher:

Abstract:

In the study of complex neurobiological movement systems, measurement indeterminacy has typically been overcome by imposing artificial modelling constraints to reduce the number of unknowns (e.g., reducing all muscle, bone and ligament forces crossing a joint to a single vector). However, this approach prevents human movement scientists from investigating more fully the role, functionality and ubiquity of coordinative structures or functional motor synergies. Advancements in measurement methods and analysis techniques are required if the contribution of individual component parts or degrees of freedom of these task-specific structural units is to be established, thereby effectively solving the indeterminacy problem by reducing the number of unknowns. A further benefit of establishing more of the unknowns is that human movement scientists will be able to gain greater insight into ubiquitous processes of physical self-organising that underpin the formation of coordinative structures and the confluence of organismic, environmental and task constraints that determine the exact morphology of these special-purpose devices.

Relevance: 80.00%

Publisher:

Abstract:

Against a background of already thin markets in some sectors of major public sector infrastructure in Australia, and the desire of the Australian federal government to leverage private finance, concerns about ensuring sufficient levels of competition are prompting the federal government to seek new sources of inbound Foreign Direct Investment (FDI). The aim of this paper is to justify and develop a means of deploying the eclectic paradigm of internationalisation, as part of an Australian federally funded research project designed to explain the determinants of multinational contractors' willingness to bid for Australian public sector major infrastructure projects. Despite the dominance of the eclectic paradigm as a theory of internationalisation for over two decades, it has seen limited application to multinational construction. The research project is expected to be the first empirical study to apply the eclectic paradigm to inbound FDI in Australia while using the dominant economic theories advocated for use within the eclectic paradigm. Furthermore, the research project is anticipated to yield a number of practical benefits. These include estimates of the potential scope to attract more multinational contractors to bid for Australian public sector infrastructure, including the nature and extent to which this scope can be influenced by the Australian governments responsible for the delivery of infrastructure. On the other hand, the research is also expected to indicate the extent to which indigenous and other multinational contractors domiciled in Australia are investing in special purpose technology and achieving productivity gains relative to foreign multinational contractors.

Relevance: 80.00%

Publisher:

Abstract:

Ratchetting failure of the railhead material adjacent to the endpost, which is placed in the air gap between the two rail ends at insulated rail joints, causes significant economic problems for railway operators, who rely on the proper functioning of these joints for train control using the signalling track circuitry. The ratchetting failure is a localised problem and is very difficult to predict even when complex analytical methods are employed. This paper presents a novel experimental technique that enables measurement of the progressive ratchetting. A special purpose test rig was developed for this purpose and commissioned by the Centre for Railway Engineering at Central Queensland University. The rig also provides the capability of testing wheel/rail rolling contact conditions. The results provide confidence that accurate measurement of the localised failure of railhead material can be achieved using the test rig.