982 results for dynamic heterogeneous panels


Relevance: 30.00%

Abstract:

With the intention of introducing unique and value-added products to the market, organizations have become more conscious of how best to create knowledge, as reported by Ganesh Bhatt (2000) in 'Information dynamics, learning and knowledge creation in organizations'. Knowledge creation is recognized as playing an important role in generating and sustaining competitive advantage as well as in meeting organizational goals, as reported by Aleda Roth and her colleagues (1994) in 'The knowledge factory for accelerated learning practices'. One of the successful ingredients of value management (VM) is its utilization of diverse knowledge resources, drawing upon different organizational functions, professional disciplines, and stakeholders in a facilitated team process. Multidisciplinary VM study teams are viewed as having high potential to innovate due to their heterogeneous nature. This paper looks at one of the major benefits of the VM workshop, namely knowledge creation. A case study approach was used to explore the nature, processes, and issues associated with fostering a dynamic knowledge creation capability within VM teams. The results indicate that the dynamic knowledge-creating process is embedded in and influenced by managing team constellation, creating shared awareness, developing shared understanding, and producing aligned action. Open dialogue and discussion among participants are the catalysts that can speed up these processes, and the process is further enhanced by the use of facilitators skilled at extracting knowledge.

Relevance: 30.00%

Abstract:

This paper investigates the performance of the tests proposed by Hadri and by Hadri and Larsson for testing for stationarity in heterogeneous panel data under model misspecification. The panel tests are based on the well-known KPSS test (cf. Kwiatkowski et al.), which considers two models: stationarity around a deterministic level and stationarity around a deterministic trend. As far as we know, there is no study of the statistical properties of the test when the wrong model is used. We also consider the case where the two types of model are simultaneously present in a panel. We employ two asymptotics: joint asymptotics, where T, N -> infinity simultaneously, and fixed-T asymptotics, where T is fixed and N is allowed to grow indefinitely. We use Monte Carlo experiments to investigate the effects of misspecification for the sample sizes usually used in practice. The results indicate that the tests derived under fixed-T asymptotics have smaller size distortions than those derived under the joint asymptotics, particularly for panels with relatively small T and large N (micro-panels). We also find that choosing a deterministic trend when a deterministic level is true does not significantly affect the properties of the test, but choosing a deterministic level when a deterministic trend is true leads to extreme over-rejections. Therefore, when unsure about which model has generated the data, it is suggested to use the model with a trend. We also propose a new statistic for testing for stationarity in mixed panel data where the mixture is known. The performance of this new test is very good for both the T asymptotic and T fixed cases. The statistic for T asymptotic is slightly undersized when T is very small (
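As a rough illustration of the statistics involved (not the paper's exact procedure), the sketch below computes individual KPSS-type statistics under the level and trend models and averages them across a panel, in the spirit of Hadri-type tests; the Newey-West bandwidth rule and the omission of the standardisation needed for the panel limiting distribution are simplifying assumptions.

```python
# Hedged sketch: individual KPSS statistics (level or trend model) averaged
# over a panel. Illustrative only; not the exact Hadri / Hadri-Larsson tests.
import numpy as np

def kpss_stat(y, trend=False, lags=None):
    """KPSS statistic for one series: level (trend=False) or trend model."""
    T = len(y)
    t = np.arange(1, T + 1)
    X = np.column_stack([np.ones(T), t]) if trend else np.ones((T, 1))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ beta                          # residuals from the deterministic fit
    S = np.cumsum(e)                          # partial sums of residuals
    if lags is None:
        lags = int(4 * (T / 100) ** 0.25)     # a common Newey-West bandwidth rule
    lrv = e @ e / T                           # Newey-West long-run variance estimate
    for k in range(1, lags + 1):
        w = 1.0 - k / (lags + 1.0)
        lrv += 2.0 * w * (e[k:] @ e[:-k]) / T
    return (S @ S) / (T ** 2 * lrv)

def panel_kpss(Y, trend=False):
    """Average of the individual KPSS statistics over the N series of Y (T x N)."""
    return np.mean([kpss_stat(Y[:, i], trend=trend) for i in range(Y.shape[1])])

# A stationary panel should give small statistics under either model.
rng = np.random.default_rng(0)
Y = rng.normal(size=(200, 20))
print(panel_kpss(Y, trend=False), panel_kpss(Y, trend=True))
```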

Relevance: 30.00%

Abstract:

The most promising way to maintain reliable data transfer across the rapidly fluctuating channels used by next-generation multiple-input multiple-output communications schemes is to exploit run-time variable modulation and antenna configurations. This demands that the baseband signal processing architectures employed in communications terminals provide low cost and high performance together with run-time reconfigurability. We present a softcore-processor-based solution to this problem and show, for the first time, that such programmable architectures can enable real-time data operation for cutting-edge standards such as 802.11n; furthermore, by exploiting deep processing pipelines and interleaved task execution, the cost and performance of these architectures are shown to be on a par with traditional dedicated-circuit solutions. We believe this to be the first such programmable architecture to achieve this, and the combination of implementation efficiency and programmability makes this implementation style the most promising approach for hosting such dynamic architectures.

Relevance: 30.00%

Abstract:

Chemoenzymatic dynamic kinetic resolution (DKR) of rac-1-phenylethanol into (R)-1-phenylethanol acetate was investigated with emphasis on the minimization of side reactions. The organometallic hydrogen transfer (racemization) catalyst was varied, and this was observed to alter the rate and extent of oxidation of the alcohol to ketone side products. The performance of the highly active catalyst [(pentamethylcyclopentadienyl)IrCl2(1-benzyl-3-methylimidazol-2-ylidene)] was found to depend on the batch of lipase B used. The interaction between the bio- and chemo-catalysts was reduced by physically entrapping the enzyme in silica using a sol-gel process. The nature of the gelation method was found to be important, with an alkaline method preferred, as an acidic method was found to initiate a further side reaction, the acid-catalyzed dehydration of the secondary alcohol. The acidic gel was found to be a heterogeneous solid acid.

Relevance: 30.00%

Abstract:

A postbuckling blade-stiffened composite panel was loaded in uniaxial compression until failure. During loading beyond initial buckling, the panel was observed to undergo a secondary instability characterised by a dynamic mode-shape change. Such abrupt changes cause considerable numerical difficulties for standard path-following quasi-static solution procedures in finite element analysis. Improved methods, such as arc-length-related procedures, are better at traversing certain critical points along an equilibrium path, but may also encounter difficulties in highly non-linear problems. This paper presents a robust, modified explicit dynamic analysis for the modelling of postbuckling structures. The method was shown to predict the mode switch with good accuracy and is more efficient than standard explicit dynamic analysis. (C) 2003 Elsevier Science Ltd. All rights reserved.
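For readers unfamiliar with the terminology, the minimal sketch below applies explicit central-difference time integration to a one-degree-of-freedom system with a snap-through force law; it only illustrates why explicit dynamics can traverse limit points that stall quasi-static path-following solvers. The force law, damping, load ramp and time step are illustrative assumptions, not the panel model of the paper.

```python
# Hedged sketch: explicit central-difference integration of a 1-DOF
# snap-through problem (illustrative values only).
def internal_force(u):
    # cubic force law with an unstable branch, giving snap-through behaviour
    return 10.0 * u - 15.0 * u**2 + 5.0 * u**3

m, c = 1.0, 0.5        # mass and artificial damping for slow, quasi-static-like loading
dt = 1e-3              # time step (must respect the explicit stability limit)
u_prev = u = 0.0
for n in range(20000):
    p = 2.5 * min(n * dt / 10.0, 1.0)                      # slowly ramped external load
    f_res = p - internal_force(u) - c * (u - u_prev) / dt
    u_prev, u = u, 2.0 * u - u_prev + dt**2 * f_res / m    # central-difference update
print("final displacement:", round(u, 3))
```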

Relevance: 30.00%

Abstract:

Damage-tolerant hat-stiffened thin-skinned composite panels, with and without a centrally located circular cutout, were investigated experimentally and analytically under uniaxial compression loading. These panels incorporated a highly postbuckling design characterised by two integral stiffeners separated by a large skin bay with a high width-to-skin-thickness ratio. In both configurations, the skin initially buckled into three half-wavelengths and underwent two mode-shape changes: the first, a gradual mode change characterised by a central deformation with double curvature; the second, a dynamic snap to five half-wavelengths. Standard path-following non-linear finite element analysis did not consistently capture the dynamic mode change, and an approximate solution for the prediction of mode changes using a Marguerre-type Rayleigh-Ritz energy method is presented. Shortcomings of both methods of analysis are discussed and improvements suggested. The panels failed catastrophically, and their strength was limited by the local buckling strength of the hat stiffeners. (C) 2001 Elsevier Science Ltd. All rights reserved.

Relevance: 30.00%

Abstract:

This paper describes the ParaPhrase project, a new 3-year targeted research project funded under EU Framework 7 Objective 3.4 (Computer Systems), starting in October 2011. ParaPhrase aims to follow a new approach to introducing parallelism, using advanced refactoring techniques coupled with high-level parallel design patterns. The refactoring approach will use these design patterns to restructure programs defined as networks of software components into other forms that are better suited to parallel execution. The programmer will be aided by high-level cost information integrated into the refactoring tools. The implementation of these patterns will then use a well-understood algorithmic skeleton approach to achieve good parallelism. A key ParaPhrase design goal is that parallel components should match heterogeneous architectures, defined, for example, in terms of CPU/GPU combinations. To achieve this, the ParaPhrase approach will map components to the available hardware at link time and then re-map them during program execution, taking account of multiple applications, changes in hardware resource availability, the desire to reduce communication costs, etc. In this way, we aim to develop a new approach to programming that can produce software able to adapt to dynamic changes in the system environment. Moreover, by using a strong component basis for parallelism, we can achieve potentially significant gains by reducing sharing at a high level of abstraction, and so reduce or even eliminate the costs usually associated with cache management, locking, and synchronisation. © 2013 Springer-Verlag Berlin Heidelberg.
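The "algorithmic skeleton" idea can be conveyed with the classic task-farm pattern; the Python sketch below is a generic stand-in chosen purely for illustration and is not ParaPhrase's tooling, target language or API.

```python
# Hedged sketch: a task-farm skeleton applying a worker to inputs in parallel.
from concurrent.futures import ProcessPoolExecutor

def farm(worker, inputs, nworkers=4):
    """Apply `worker` to every input in parallel, preserving input order."""
    with ProcessPoolExecutor(max_workers=nworkers) as pool:
        return list(pool.map(worker, inputs))

def expensive_component(x):        # stand-in for a refactored software component
    return sum(i * i for i in range(x))

if __name__ == "__main__":
    print(farm(expensive_component, [10_000, 20_000, 30_000, 40_000]))
```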

Relevance: 30.00%

Abstract:

Low-power processors and accelerators that were originally designed for the embedded systems market are emerging as building blocks for servers. Power capping has been actively explored as a technique to reduce the energy footprint of high-performance processors. The opportunities and limitations of power capping on the new low-power processor and accelerator ecosystem are less well understood. This paper presents an efficient power capping and management infrastructure for heterogeneous SoCs based on hybrid ARM/FPGA designs. The infrastructure coordinates dynamic voltage and frequency scaling with task allocation on a customised Linux system for the Xilinx Zynq SoC. We present a compiler-assisted power model to guide voltage and frequency scaling, in conjunction with workload allocation between the ARM cores and the FPGA, under given power caps. The model achieves less than 5% estimation bias relative to mean power consumption. In an FFT case study, the proposed power capping schemes achieve on average 97.5% of the performance of the optimal execution and match the optimal execution in 87.5% of the cases, while always meeting power constraints.
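As a rough illustration of capping against a power model, the sketch below picks the fastest (frequency, FPGA-share) operating point whose predicted power stays under a cap; the linear model, its coefficients and the candidate operating points are invented for illustration and are not the paper's compiler-assisted model for the Zynq.

```python
# Hedged sketch: choose the fastest operating point that satisfies a power cap.
from itertools import product

FREQS_MHZ = [333, 444, 667, 800]            # candidate ARM frequencies (assumed)
FPGA_SHARE = [0.0, 0.25, 0.5, 0.75, 1.0]    # fraction of work offloaded to the FPGA

def predicted_power(f_mhz, share):
    # toy linear model: static power + CPU dynamic term + FPGA term (watts)
    return 0.8 + 0.004 * f_mhz * (1.0 - share) + 1.2 * share

def predicted_runtime(f_mhz, share, work=1.0):
    cpu_t = work * (1.0 - share) * 800.0 / f_mhz
    fpga_t = work * share * 0.6               # assume the FPGA is faster per unit work
    return max(cpu_t, fpga_t)                 # CPU and FPGA portions run concurrently

def best_config(power_cap_w):
    feasible = [(predicted_runtime(f, s), f, s)
                for f, s in product(FREQS_MHZ, FPGA_SHARE)
                if predicted_power(f, s) <= power_cap_w]
    return min(feasible) if feasible else None

print(best_config(3.0))    # -> (runtime, frequency, fpga_share) under a 3 W cap
```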

Relevance: 30.00%

Abstract:

Exascale computation is the next target of high-performance computing. In the push to create exascale computing platforms, simply increasing the number of hardware devices is not an acceptable option, given the limitations of power consumption, heat dissipation, and programming models designed for current hardware platforms. Instead, new hardware technologies, coupled with improved programming abstractions and more autonomous runtime systems, are required to achieve this goal. This position paper presents the design of a new runtime for a new heterogeneous hardware platform being developed to explore energy-efficient, high-performance computing. By combining a number of different technologies, this framework will both simplify the programming of current and future HPC applications and automate the scheduling of data and computation across this new hardware platform. In particular, this work explores the use of FPGAs to achieve both the power and performance goals of exascale, and the use of the runtime to automatically effect dynamic configuration and reconfiguration of these platforms.

Relevance: 30.00%

Abstract:

Power capping is a fundamental method for reducing the energy consumption of a wide range of modern computing environments, ranging from mobile embedded systems to datacentres. Unfortunately, maximising performance and system efficiency under static power caps remains challenging, while maximising performance under dynamic power caps has been largely unexplored. We present an adaptive power capping method that reduces the power consumption and maximises the performance of heterogeneous SoCs for mobile and server platforms. Our technique combines power capping with coordinated DVFS, data partitioning and core allocation on a heterogeneous SoC with ARM processors and FPGA resources. We design our framework as a run-time system based on OpenMP and OpenCL to utilise the heterogeneous resources. We evaluate it through five data-parallel benchmarks on the Xilinx SoC, which allows full voltage and frequency control. Our experiments show a significant performance boost of 30% under dynamic power caps with concurrent execution on ARM and FPGA, compared to a naive separate approach.
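To illustrate adapting the CPU/FPGA data partition to a cap that changes at run time, the sketch below re-selects the partition whenever a new cap arrives; the toy power and throughput models are assumptions, and this is not the OpenMP/OpenCL run-time system described above.

```python
# Hedged sketch: re-partition data between CPU and FPGA when the power cap changes.
def pick_partition(power_cap_w, shares=(0.0, 0.25, 0.5, 0.75, 1.0)):
    """Return the FPGA work share with the highest predicted throughput under the cap."""
    def power(s):       # toy model (W): idle + CPU dynamic + FPGA dynamic
        return 0.8 + 2.4 * (1.0 - s) + 1.2 * s
    def throughput(s):  # CPU and FPGA process their shares concurrently
        return 1.0 / max((1.0 - s) / 1.0, s / 1.5, 1e-9)
    feasible = [s for s in shares if power(s) <= power_cap_w]
    return max(feasible, key=throughput) if feasible else min(shares, key=power)

for cap in (4.0, 2.4, 2.1):     # caps arriving from a power manager at run time
    share = pick_partition(cap)
    print(f"cap={cap} W -> offload {share:.0%} of the data to the FPGA")
```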

Relevance: 30.00%

Abstract:

The scarcity and diversity of resources among the devices of heterogeneous computing environments may affect their ability to perform services with specific Quality of Service constraints, particularly in dynamic distributed environments where the characteristics of the computational load cannot always be predicted in advance. Our work addresses this problem by allowing resource-constrained devices to cooperate with more powerful neighbour nodes, opportunistically taking advantage of globally distributed resources and processing power. Rather than assuming that the dynamic configuration of this cooperative service executes until it computes its optimal output, the paper proposes an anytime approach that can trade off deliberation time against solution quality. Extensive simulations demonstrate that the proposed anytime algorithms are able to quickly find a good initial solution and effectively optimise the rate at which the quality of the current solution improves at each iteration, with an overhead that can be considered negligible.
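The anytime idea itself is compact: keep a best-so-far solution, improve it while time remains, and return whatever is best when interrupted. The sketch below shows that loop around a generic local-search step; it is not the paper's cooperative service-configuration algorithm.

```python
# Hedged sketch: the general shape of an anytime optimiser with a time budget.
import random
import time

def anytime_optimise(cost, initial, neighbours, deadline_s):
    best, best_cost = initial, cost(initial)
    t_end = time.monotonic() + deadline_s
    while time.monotonic() < t_end:               # interruptible at any point
        candidate = random.choice(neighbours(best))
        c = cost(candidate)
        if c < best_cost:                         # quality improves monotonically
            best, best_cost = candidate, c
    return best, best_cost

# Toy usage: minimise a quadratic over the integers from a poor starting point.
cost = lambda x: (x - 42) ** 2
neighbours = lambda x: [x - 1, x + 1, x + random.randint(-5, 5)]
print(anytime_optimise(cost, initial=0, neighbours=neighbours, deadline_s=0.05))
```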

Relevance: 30.00%

Abstract:

Heterogeneous multicore platforms are becoming an interesting alternative for embedded computing systems with limited power supply, as they can execute specific tasks in an efficient manner. Nonetheless, one of the main challenges of such platforms is optimising energy consumption in the presence of temporal constraints. This paper addresses the problem of task-to-core allocation onto heterogeneous multicore platforms such that the overall energy consumption of the system is minimised. To this end, we propose a two-phase approach that considers both dynamic and leakage energy consumption: (i) the first phase allocates tasks to the cores so as to reduce dynamic energy consumption; (ii) the second phase refines this allocation in order to achieve better sleep states, trading off dynamic energy consumption against the reduction in leakage energy consumption. This hybrid approach considers core frequency set-points, task energy consumption and the sleep states of the cores to reduce the energy consumption of the system. Particular emphasis has been placed on a realistic power model, which increases the practical relevance of the proposed approach. Finally, extensive simulations have been carried out to demonstrate the effectiveness of the proposed algorithm. In the best case, energy savings of up to 18% are achieved over the first-fit algorithm, which has been shown in previous work to perform better than other bin-packing heuristics for the target heterogeneous multicore platform.
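The two-phase structure can be sketched as follows: phase 1 places each task on the core where its dynamic energy is lowest (balancing load across equally efficient cores), and phase 2 empties lightly loaded cores whenever the leakage saved by letting them sleep outweighs the extra dynamic energy on the target core. The capacities, coefficients and tie-breaking rule below are illustrative assumptions, not the paper's power model or algorithm.

```python
# Hedged sketch: two-phase task-to-core allocation with toy energy coefficients.
CORES = {                       # capacity + toy energy coefficients per core
    "big0":    {"cap": 1.0, "dyn_per_util": 2.0, "leak": 0.5},
    "little0": {"cap": 1.0, "dyn_per_util": 1.0, "leak": 0.4},
    "little1": {"cap": 1.0, "dyn_per_util": 1.0, "leak": 0.4},
}
TASKS = {"t1": 0.4, "t2": 0.3, "t3": 0.2, "t4": 0.1}   # task utilisations

def phase1(tasks, cores):
    """Place each task on the core with the lowest dynamic energy for it,
    balancing load across equally efficient cores."""
    load = {c: 0.0 for c in cores}
    alloc = {}
    for t, u in sorted(tasks.items(), key=lambda kv: -kv[1]):
        feasible = [c for c in cores if load[c] + u <= cores[c]["cap"]]
        best = min(feasible, key=lambda c: (u * cores[c]["dyn_per_util"], load[c]))
        alloc[t] = best
        load[best] += u
    return alloc, load

def phase2(alloc, load, tasks, cores):
    """Empty lightly loaded cores so they can sleep, whenever the leakage saved
    outweighs any extra dynamic energy on an already-active target core."""
    for src in sorted(cores, key=lambda c: load[c]):
        moved = [(t, u) for t, u in tasks.items() if alloc[t] == src]
        if not moved:
            continue
        for dst in cores:
            if dst == src or load[dst] == 0.0 or load[dst] + load[src] > cores[dst]["cap"]:
                continue
            extra_dyn = sum(u * (cores[dst]["dyn_per_util"] - cores[src]["dyn_per_util"])
                            for _, u in moved)
            if cores[src]["leak"] > extra_dyn:          # sleeping src pays off
                for t, _ in moved:
                    alloc[t] = dst
                load[dst] += load[src]
                load[src] = 0.0
                break
    return alloc, load

alloc, load = phase2(*phase1(TASKS, CORES), TASKS, CORES)
print(alloc)   # with this toy data, all tasks end up on one little core; the others can sleep
```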

Relevance: 30.00%

Abstract:

In today's society, concern for the environment on the one hand, and for comfort and safety on the other, has made energy sustainability a form of intervention suited to the demands of quality of life and of economic efficiency. In this context, the added value of the Smart Panel, an intelligent electrical switchboard created to meet those goals, is undeniable, and it motivated the topic of this work. The aim is therefore to demonstrate the potential of the Smart Panel, a new switchboard concept intended to optimise the dynamic and pragmatic management of electrical installations, namely the control, monitoring and actuation of devices, both on site and, above all, remotely. Supporting objectives include understanding the operation of the traditional electrical switchboard, comparing it with the Smart Panel, and demonstrating the advantages of using this new technology. The broader purpose of the work is, on the one hand, to put academic training at the service of good future professional performance and, on the other, to follow the technological trend inherent in today's need for control. Accordingly, the work first gives a general overview of the traditional switchboard so that its operation, applications and potential can be understood; this discussion presents the theoretical concepts underlying the design, production and assembly of the switchboard, describes its components, their functions, the interactions between them and the standards with which they must comply, and includes supporting images for the explanations, descriptions and technical procedures. The third chapter addresses the Smart Panel technology, introducing its underlying concept and objectives; it explains how this system combines protection, supervision, control, data storage and preventive maintenance, and shows how the ability to read data from, communicate with and command the switchboard remotely amounts to a technological step change in meeting modern needs for safety, comfort and economy. Chapters four, five and six cover the practical component of the work. Chapter four presents training material and a subsequent demonstration of the test kit that will support presentations of the Smart Panel technology to customers. In addition to this training material, chapter five develops a checklist of verification procedures to be carried out on the communication components of the Smart Panel, to be supplied to the panel builder. Finally, chapter six includes two case studies: study A applies the Smart Panel technology to the design of a traditional switchboard, which involves surveying all the existing equipment and then migrating it to the Smart Panel technology so as to meet the requirements set by the client; study B develops a switchboard design based on the Smart Panel technology from a given set of client requirements and needs, so as to guarantee the desired functions.

Relevance: 30.00%

Abstract:

Almost everyone sketches. People use sketches day in and day out in many different and heterogeneous fields, for example to share their thoughts and to clarify ambiguous interpretations. The media used for sketching range from analog tools like flipcharts to digital tools like smartboards. Whereas analog tools usually suffer from insufficient editing capabilities such as cut/copy/paste, digital tools support these scenarios well. Digital tools can be grouped into informal and formal tools. Informal tools can be understood as simple drawing environments, whereas formal tools offer sophisticated support to create, optimize and validate diagrams of a certain application domain. Most digital formal tools force users to stick to a concrete syntax and editing workflow, limiting the user's creativity. For that reason, many people first sketch their ideas using the flexibility of analog or digital informal tools, and the sketch is subsequently "portrayed" in an appropriate digital formal tool. This work presents Scribble, a highly configurable and extensible sketching framework which allows sketching features to be dynamically injected into existing graphical diagram editors based on Eclipse GEF. This combines the flexibility of informal tools with the power of formal tools without any effort: no additional code is required to augment a GEF editor with sophisticated sketching features. Scribble recognizes drawn elements as well as handwritten text and automatically generates the corresponding domain elements. A local training data library is created dynamically by incrementally learning shapes drawn by the user. Training data can be shared with others using the WebScribble web application, which has been created as part of this work.
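As a rough illustration of incrementally learned, template-based shape recognition (loosely in the style of nearest-neighbour "$1 recognizer" approaches, and not Scribble's actual recognition code or API), consider the sketch below: each user-drawn example extends a local template library, and new strokes are matched against it.

```python
# Hedged sketch: incremental template learning and nearest-neighbour matching.
import math

def resample(points, n=32):
    """Resample a stroke to n equally spaced points along its path."""
    d = [0.0]                                   # cumulative arc length
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        d.append(d[-1] + math.hypot(x1 - x0, y1 - y0))
    total = d[-1] or 1.0
    out, j = [], 0
    for i in range(n):
        target = total * i / (n - 1)
        while j < len(d) - 2 and d[j + 1] < target:
            j += 1
        t = (target - d[j]) / ((d[j + 1] - d[j]) or 1.0)
        x = points[j][0] + t * (points[j + 1][0] - points[j][0])
        y = points[j][1] + t * (points[j + 1][1] - points[j][1])
        out.append((x, y))
    return out

def normalise(points):
    """Translate the stroke to its centroid and scale it to a unit box."""
    pts = resample(points)
    cx = sum(x for x, _ in pts) / len(pts)
    cy = sum(y for _, y in pts) / len(pts)
    pts = [(x - cx, y - cy) for x, y in pts]
    scale = max(max(abs(x) for x, _ in pts), max(abs(y) for _, y in pts)) or 1.0
    return [(x / scale, y / scale) for x, y in pts]

class ShapeLibrary:
    """Incrementally learned templates: each drawn example extends the library."""
    def __init__(self):
        self.templates = []                     # (label, normalised points)

    def learn(self, label, stroke):
        self.templates.append((label, normalise(stroke)))

    def recognise(self, stroke):
        probe = normalise(stroke)
        def dist(tmpl):
            return sum(math.hypot(px - tx, py - ty)
                       for (px, py), (tx, ty) in zip(probe, tmpl))
        label, _ = min(self.templates, key=lambda lt: dist(lt[1]))
        return label

lib = ShapeLibrary()
lib.learn("line", [(0, 0), (10, 10)])
lib.learn("rectangle", [(0, 0), (10, 0), (10, 5), (0, 5), (0, 0)])
print(lib.recognise([(1, 1), (9, 9)]))          # -> "line"
```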

Relevance: 30.00%

Abstract:

This report addresses the problem of achieving cooperation within small- to medium-sized teams of heterogeneous mobile robots. I describe a software architecture I have developed, called ALLIANCE, that facilitates robust, fault-tolerant, reliable, and adaptive cooperative control. In addition, an extended version of ALLIANCE, called L-ALLIANCE, is described, which incorporates a dynamic parameter update mechanism that allows teams of mobile robots to improve the efficiency of their mission performance through learning. A number of experimental results from implementing these architectures on both physical and simulated mobile robot teams are described. In addition, this report presents the results of studies of a number of issues in mobile robot cooperation, including fault-tolerant cooperative control, adaptive action selection, distributed control, robot awareness of team member actions, improving efficiency through learning, inter-robot communication, action recognition, and local versus global control.