962 results for ubiquitous multi-core framework


Relevance:

100.00%

Publisher:

Abstract:

This study examines the business model complexity of Irish credit unions using a latent class approach to measure structural performance over the period 2002 to 2013. The latent class approach allows the endogenous identification of a multi-class framework for business models based on credit-union-specific characteristics. The analysis finds a three-class system to be appropriate, with the multi-class model dependent on three financial viability characteristics. This finding is consistent with the deliberations of the Irish Commission on Credit Unions (2012), which identified complexity and diversity in the business models of Irish credit unions and concluded that such complexity and diversity could not be accommodated within a one-size-fits-all regulatory framework. The analysis also highlights that two of the classes are subject to diseconomies of scale. This may suggest that these credit unions would benefit from a reduction in scale, or perhaps that there is an imbalance in the present change process. Finally, relative performance differences are identified for each class in terms of technical efficiency. This suggests that there is an opportunity for credit unions to improve their performance by adopting within-class best practice or, alternatively, by switching to another class.
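
As a minimal illustration of the endogenous class-selection step, the sketch below fits finite mixture models with one to five classes and picks the class count by BIC. It uses scikit-learn's GaussianMixture as a stand-in for the study's latent class estimator, and the three "financial viability" columns are hypothetical placeholders, not the study's data:

```python
# Sketch: selecting the number of latent classes by information criterion.
# GaussianMixture stands in for the paper's latent class estimator; the
# three "financial viability" columns are hypothetical placeholders.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Hypothetical data with three shifted groups, echoing the three-class finding.
X = np.vstack([rng.normal(loc=m, size=(150, 3)) for m in (-2.0, 0.0, 2.0)])

bics = {}
for k in range(1, 6):
    gm = GaussianMixture(n_components=k, n_init=5, random_state=0).fit(X)
    bics[k] = gm.bic(X)  # lower BIC = preferred class structure

best_k = min(bics, key=bics.get)
print(f"BIC-preferred number of classes: {best_k}")
```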

Relevance:

100.00%

Publisher:

Abstract:

Due to the growth of design size and complexity, design verification is an important aspect of the logic circuit development process. The purpose of verification is to validate that the design meets the system requirements and specification. This is done by either functional or formal verification. The most popular approach to functional verification is the use of simulation-based techniques, where models replicate the behaviour of the actual system. In this thesis, a software/data-structure architecture without explicit locks is proposed to accelerate logic gate circuit simulation. We call this system ZSIM. The ZSIM software architecture simulator targets low-cost SIMD multi-core machines. Its performance is evaluated on the Intel Xeon Phi and two other machines (Intel Xeon and AMD Opteron). The aims of these experiments are to:
• Verify that the data structure used allows SIMD acceleration, particularly on machines with gather instructions (Section 5.3.1).
• Verify that, on sufficiently large circuits, substantial gains can be made from multi-core parallelism (Section 5.3.2).
• Show that a simulator using this approach out-performs an existing commercial simulator on a standard workstation (Section 5.3.3).
• Show that the performance on a cheap Xeon Phi card is competitive with results reported elsewhere on much more expensive supercomputers (Section 5.3.5).
To evaluate ZSIM, two types of test circuit were used: 1. circuits from the IWLS benchmark suite [1], which allow direct comparison with other published studies of parallel simulators; 2. circuits generated by a parametrised circuit synthesizer. The synthesizer used an algorithm that has been shown to generate circuits that are statistically representative of real logic circuits, and it allowed testing of a range of very large circuits, larger than those for which open source files could be obtained. The experimental results show that with SIMD acceleration and multi-core parallelism, ZSIM achieved a peak parallelisation factor of 300 on the Intel Xeon Phi and 11 on the Intel Xeon. With only SIMD enabled, ZSIM achieved a maximum parallelisation gain of 10 on the Intel Xeon Phi and 4 on the Intel Xeon. Furthermore, it was shown that this software architecture simulator running on a SIMD machine is much faster than, and can handle much bigger circuits than, a widely used commercial simulator (Xilinx) running on a workstation. The performance of ZSIM was also compared with similar pre-existing work on logic simulation targeting GPUs and supercomputers: ZSIM running on a Xeon Phi machine gives simulation performance comparable to the IBM Blue Gene supercomputer at very much lower cost, and the Xeon Phi is competitive with simulation on GPUs while allowing the handling of much larger circuits than have been reported for GPU simulation. When targeting the Xeon Phi architecture, the automatic cache management of the Xeon Phi handles the on-chip local store without any explicit mention of the local store in the architecture of the simulator itself; targeting GPUs, by contrast, requires explicit cache management in the program, which increases the complexity of the software architecture. Furthermore, one of the strongest points of the ZSIM simulator is its portability: the same code was tested on both the AMD and Xeon Phi machines, and the same architecture that performs efficiently on the Xeon Phi was ported to a 64-core NUMA AMD Opteron.
To conclude, the two main achievements are restated as follows. The primary achievement of this work was demonstrating that the ZSIM architecture is faster than previously published logic simulators on low-cost platforms. The secondary achievement was the development of a synthetic testing suite that went beyond the scale range previously publicly available, based on prior work showing that the synthesis technique is valid.
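
The abstract does not reproduce ZSIM's code, but the core idea it describes, a lock-free data layout in which gates are evaluated with data-parallel bitwise operations so that SIMD lanes and cores process many test patterns at once, can be sketched as follows (a toy NumPy illustration, not ZSIM's actual data structure):

```python
# Toy illustration of word-parallel logic simulation: pack 64 test
# patterns per 64-bit word and evaluate a levelized netlist with bitwise
# ops, so SIMD hardware processes many patterns per instruction.
# This sketches the idea only; it is not ZSIM's actual data structure.
import numpy as np

WORDS = 1024  # 1024 words * 64 bits = 65536 test patterns per signal

# Netlist in topological (levelized) order: (output, op, input_a, input_b).
netlist = [
    ("n1", "AND", "a", "b"),
    ("n2", "NOT", "n1", None),
    ("y",  "XOR", "n2", "a"),
]

rng = np.random.default_rng(0)
u64max = np.iinfo(np.uint64).max
signals = {
    "a": rng.integers(0, u64max, WORDS, dtype=np.uint64, endpoint=True),
    "b": rng.integers(0, u64max, WORDS, dtype=np.uint64, endpoint=True),
}

# No explicit locks: each gate writes only its own output array, and the
# levelized order guarantees inputs are ready before a gate is evaluated.
for out, op, a, b in netlist:
    if op == "AND":
        signals[out] = signals[a] & signals[b]
    elif op == "XOR":
        signals[out] = signals[a] ^ signals[b]
    elif op == "NOT":
        signals[out] = ~signals[a]

print(hex(int(signals["y"][0])))  # first 64 patterns of output y
```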

Relevance:

100.00%

Publisher:

Abstract:

In the multi-core CPU world, transactional memory (TM) has emerged as an alternative to lock-based programming for thread synchronization. Recent research proposes the use of TM in GPU architectures, where a high number of computing threads, organized in SIMT fashion, requires an effective synchronization method. In contrast to CPUs, GPUs offer two memory spaces: global memory and local memory. The local memory space serves as a shared scratch-pad for a subset of the computing threads, and programmers use it to speed up their applications thanks to its low latency. Prior work from the authors proposed lightweight hardware TM (HTM) support based on the local memory, modifying the SIMT execution model and adding a conflict detection mechanism. An efficient implementation of these features is key to providing an effective synchronization mechanism at the local memory level. After a brief description of the main features of our HTM design for GPU local memory, in this work we gather together a number of proposals designed to improve those mechanisms with high impact on performance. Firstly, the SIMT execution model is modified to increase the parallelism of the application when transactions must be serialized in order to make forward progress. Secondly, the conflict detection mechanism is optimized depending on application characteristics, such as the read/write sets, the probability of conflict between transactions, and the existence of read-only transactions. As these features can be present in hardware simultaneously, it is the task of the compiler and runtime to determine which are more important for a given application. This work includes a discussion of the analysis needed to choose the best configuration.
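
As a rough, CPU-side model of the conflict detection the abstract refers to, the sketch below checks read/write-set overlap between two transactions. The transaction layout and helper names are hypothetical and simplified; real HTM designs typically compare hardware signatures, not Python sets:

```python
# Simplified model of TM conflict detection over read/write sets, as a
# CPU-side illustration of the check the proposed GPU hardware performs.
# Transaction layout and API are hypothetical, not the authors' design.

class Txn:
    def __init__(self, tid):
        self.tid = tid
        self.read_set: set[int] = set()   # addresses read
        self.write_set: set[int] = set()  # addresses written

def conflicts(t1: Txn, t2: Txn) -> bool:
    # Two transactions conflict iff one writes what the other reads/writes.
    return bool(t1.write_set & (t2.read_set | t2.write_set) or
                t2.write_set & t1.read_set)

# A read-only transaction never conflicts with another read-only one,
# which is why detecting read-only transactions pays off.
a, b = Txn(0), Txn(1)
a.read_set = {0x10, 0x14}; a.write_set = set()
b.read_set = {0x10};       b.write_set = {0x18}
print(conflicts(a, b))  # False: b writes 0x18, which a never touches
```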

Relevance:

100.00%

Publisher:

Abstract:

Hardware vendors invest considerable effort in creating low-power CPUs that keep battery life above acceptable levels. In order to achieve this goal and provide a good performance/energy trade-off for a wide variety of applications, ARM designed the big.LITTLE architecture. This heterogeneous multi-core architecture features two different types of core: big cores oriented towards performance, and LITTLE cores, which are slower and aimed at saving energy. As all the cores have access to the same memory, multi-threaded applications must resort to some mutual exclusion mechanism to coordinate access to shared data by concurrent threads. Transactional Memory (TM) represents an optimistic approach to shared-memory synchronization. To take full advantage of the features offered by software TM, while also benefiting from the characteristics of heterogeneous big.LITTLE architectures, our focus is to propose TM solutions that take into account both the power/performance requirements of the application and what is offered by the architecture. In order to understand the current state of the art and obtain useful information for future power-aware software TM solutions, we have performed an analysis of a popular TM library running on top of an ARM big.LITTLE processor. Experiments show, in general, better scalability on the LITTLE cores for all of the applications except one, which requires the computing performance that the big cores offer.
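
To make the optimistic approach concrete, here is a minimal sketch in the TM style: read a version, compute speculatively without holding the lock, and commit only if no conflicting commit happened in between, retrying otherwise. This mimics what a software TM runtime does per transaction; it is not the API of the TM library evaluated in the study:

```python
# Minimal illustration of optimistic synchronization in the TM style:
# speculative work, validation, commit-or-retry. Not the studied library.
import threading

class VersionedCell:
    def __init__(self, value=0):
        self._lock = threading.Lock()
        self.version = 0
        self.value = value

    def transact(self, fn):
        while True:  # retry loop: aborts show up as extra iterations
            v, snapshot = self.version, self.value
            new_value = fn(snapshot)       # speculative work, no lock held
            with self._lock:               # short commit phase
                if self.version == v:      # validate: no conflicting commit
                    self.value, self.version = new_value, v + 1
                    return
            # conflict detected -> abort and retry

cell = VersionedCell()
threads = [threading.Thread(target=lambda: [cell.transact(lambda x: x + 1)
                                            for _ in range(1000)])
           for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(cell.value)  # 4000: every increment committed exactly once
```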

Relevance:

100.00%

Publisher:

Abstract:

Catering to society’s demand for high-performance computing, billions of transistors are now integrated on IC chips to deliver unprecedented performance. With increasing transistor density, power consumption and power density are growing exponentially. The increasing power consumption translates directly into high chip temperatures, which not only raise packaging/cooling costs, but also degrade the performance, reliability and life span of computing systems. Moreover, high chip temperature greatly increases leakage power consumption, which is becoming more and more significant with the continuous scaling of transistor sizes. As the semiconductor industry continues to evolve, power and thermal challenges have become the most critical challenges in the design of new generations of computing systems. In this dissertation, we addressed the power/thermal issues from the system-level perspective. Specifically, we sought to employ real-time scheduling methods to optimize the power/thermal efficiency of real-time computing systems, with the leakage/temperature dependency taken into consideration. In our research, we first explored the fundamental principles of how to employ dynamic voltage scaling (DVS) techniques to reduce the peak operating temperature when running a real-time application on a single-core platform. We further proposed a novel real-time scheduling method, “M-Oscillations”, to reduce the peak temperature when scheduling a hard real-time periodic task set. We also developed three checking methods to guarantee the feasibility of a periodic real-time schedule under a peak temperature constraint. We then extended our research from single-core to multi-core platforms, investigating the energy estimation problem and developing a lightweight and accurate method to calculate the energy consumption of a given voltage schedule on a multi-core platform. Finally, we concluded the dissertation with a discussion of future extensions of our research.
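
As a toy numerical illustration of the leakage/temperature interplay the dissertation models, the sketch below integrates a first-order thermal model dT/dt = a·P − b·(T − T_amb) with power P(s, T) = s³ + k·T (dynamic plus temperature-dependent leakage), and compares the peak temperature of a long high-speed burst against a schedule that oscillates between speeds while doing the same work. All constants are made up for illustration; this is not the dissertation's M-Oscillations algorithm:

```python
# Toy thermal model: dT/dt = a*P - b*(T - T_amb), with power
# P(s, T) = s**3 + k*T (dynamic power plus temperature-dependent
# leakage). Constants are illustrative, not from the dissertation.

def peak_temperature(speed_schedule, a=5.0, b=0.2, k=0.01,
                     dt=0.01, t_amb=25.0):
    T, peak = t_amb, t_amb
    for s in speed_schedule:            # one speed per unit-length slot
        for _ in range(int(1.0 / dt)):  # forward-Euler integration
            P = s**3 + k * T
            T += dt * (a * P - b * (T - t_amb))
            peak = max(peak, T)
    return peak

burst = [1.0] * 20 + [0.6] * 20        # long hot burst, then slow
osc = [1.0, 0.6] * 20                  # same work, speeds interleaved
print(f"burst peak: {peak_temperature(burst):.1f}  "
      f"oscillating peak: {peak_temperature(osc):.1f}")
# Oscillating keeps the peak lower because temperature never has time
# to build up during a sustained high-speed interval.
```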

Relevance:

50.00%

Publisher:

Abstract:

This research was undertaken to determine how far successful multi-organisational enterprise strategy relies on the correct type of Enterprise Resource Planning (ERP) information system being used. There appears, however, to be a dearth of research on the strategic alignment between ERP systems development and multi-organisational enterprise governance: guidelines and frameworks to assist practitioners in making decisions about multi-organisational collaboration supported by different types of ERP systems are still missing from both theoretical and empirical perspectives. This research therefore investigates ERP systems development and emerging practices in the management of multi-organisational enterprises (i.e. parts of companies working with parts of other companies to deliver complex product-service systems) and identifies how different ERP systems fit into different multi-organisational enterprise structures, in order to achieve sustainable competitive success. An empirical inductive study was conducted using a Grounded Theory-based methodological approach, based on successful manufacturing and service companies in the UK and China. This involved an initial pre-study literature review and data collection via 48 semi-structured interviews with 8 companies delivering complex products and services across organisational boundaries whilst adopting ERP systems to support their collaborative business strategies: 4 cases cover the printing, semiconductor manufacturing, and parcel distribution industries in the UK, and 4 cases cover the crane manufacturing, concrete production, and banking industries in China. From these, a set of 29 tentative propositions was formed and then validated via a questionnaire that received 116 responses from 16 companies. The research resulted in the consolidation of the validated propositions into a novel concept referred to as the ‘Dynamic Enterprise Reference Grid for ERP’ (DERG-ERP), which draws on multiple theoretical perspectives. The core of the DERG-ERP concept is a contingency management framework which indicates that different multi-organisational enterprise paradigms and their supporting ERP information systems are not the result of different strategies, but are best considered part of a strategic continuum with the same overall business purpose of multi-organisational cooperation. At different times and circumstances in a partnership lifecycle, firms may prefer particular multi-organisational enterprise structures and the use of different types of ERP system to satisfy business requirements. The DERG-ERP concept thus helps decision makers to select, manage and co-develop the most appropriate multi-organisational enterprise strategy and its corresponding ERP systems, drawing on core competence, expected competitiveness, and information systems strategic capabilities as the main contingency factors. Specifically, this research suggests that traditional ERP(I) systems are associated with the Vertically Integrated Enterprise (VIE), whilst ERPII systems can be correlated with Extended Enterprise (EE) requirements, and ERPIII systems can best support the operations of the Virtual Enterprise (VE). The contribution of this thesis is threefold.
Firstly, this work addresses a gap in the extant literature concerning the best fit between ERP system types and multi-organisational enterprise structure types, and proposes a new contingency framework, the DERG-ERP, which can be used to explain how and why enterprise managers need to change and adapt their ERP information systems in response to changing business and operational requirements. Secondly, with respect to a priori theoretical models, the new DERG-ERP furthers multi-organisational enterprise management thinking by incorporating information systems strategy, rather than focusing purely on the strategic, structural, and operational aspects of enterprise design and management. At the same time, the DERG-ERP makes a theoretical contribution to the current IS Strategy Formulation Model, which does not explicitly address multi-organisational enterprise governance. Thirdly, this research clarifies and emphasises the concept of future ERP systems (referred to as ERPIII), which is inadequately covered in the extant literature. The novel DERG-ERP concept and its elements have also been applied to the 8 empirical cases to serve as a practical guide for ERP vendors, information systems managers, and operations managers hoping to grow and sustain their competitive advantage with respect to effective enterprise strategy, enterprise structures, and ERP systems use, referred to in this thesis as the “enterprisation of operations”.

Relevance:

50.00%

Publisher:

Abstract:

In this paper, we describe recent architectural and technological advances of the end-to-end optical network architecture proposed by the DISCUS project (the DIStributed Core for unlimited bandwidth supply for all Users and Services). The two main targets of DISCUS are the principle of equivalence in access and the reduction of optical-to-electronic conversions in the metro-core network. Technological advances and a techno-economic evaluation of Long-Reach Passive Optical Networks (LR-PON) are reported, together with the optimal metro-core node architecture and the required network control plane framework. Network infrastructure sharing challenges are also discussed. © 2014 IEEE.

Relevance:

40.00%

Publisher:

Abstract:

Multislice computed tomography (MSCT) for the noninvasive detection of coronary artery stenoses is a promising candidate for widespread clinical application because of its high sensitivity and high negative predictive value, as found in several previous studies using 16 to 64 simultaneous detector rows. However, a multi-centre study of CT coronary angiography using 16 simultaneous detector rows showed that 16-slice CT is limited by a high number of nondiagnostic cases and a high false-positive rate. A recent meta-analysis indicated a significant interaction between study sample size and diagnostic odds ratios, suggestive of small-study bias, highlighting the importance of evaluating MSCT using 64 simultaneous detector rows in a multi-centre approach with a larger sample size. In this manuscript we detail the objectives and methods of the prospective CORE-64 trial (“Coronary Evaluation Using Multidetector Spiral Computed Tomography Angiography Using 64 Detectors”). This multi-centre trial was unique in that it assessed the diagnostic performance of 64-slice CT coronary angiography in nine centres worldwide, in comparison with conventional coronary angiography. In conclusion, the multi-centre, multi-institutional and multi-continental CORE-64 trial has great potential to ultimately assess the per-patient diagnostic performance of coronary CT angiography using 64 simultaneous detector rows.

Relevance:

40.00%

Publisher:

Abstract:

Work presented within the scope of the Master's programme in Informatics Engineering, as a partial requirement for obtaining the degree of Master in Informatics Engineering.

Relevance:

40.00%

Publisher:

Abstract:

Master's dissertation in Informatics Engineering.

Relevance:

40.00%

Publisher:

Abstract:

13th IEEE/IFIP International Conference on Embedded and Ubiquitous Computing (EUC 2015), 21-23 October 2015, Porto, Portugal. Session W1-A: Multiprocessing and Multicore Architectures.

Relevance:

40.00%

Publisher:

Abstract:

Recent technological advancements and market trends are driving an interesting convergence of the High-Performance Computing (HPC) and Embedded Computing (EC) domains. On one side, new kinds of HPC applications are being required by markets that need huge amounts of information to be processed within a bounded amount of time. On the other side, EC systems are increasingly expected to provide higher performance in real time, challenging the capabilities of current architectures. The advent of next-generation many-core embedded platforms offers the chance to intercept this converging need for predictable high performance, allowing HPC and EC applications to be executed on efficient and powerful heterogeneous architectures that integrate general-purpose processors with many-core computing fabrics. To this end, it is of paramount importance to develop new techniques for exploiting the massively parallel computation capabilities of such platforms in a predictable way. P-SOCRATES will tackle this important challenge by bringing together leading research groups from the HPC and EC communities. The time-criticality and parallelisation challenges common to both areas will be addressed by proposing an integrated framework for executing workload-intensive applications with real-time requirements on top of next-generation commercial off-the-shelf (COTS) platforms based on many-core accelerated architectures. The project will investigate new HPC techniques that fulfil real-time requirements, identify the main sources of indeterminism, and propose efficient mapping and scheduling algorithms, along with the associated timing and schedulability analysis, to guarantee the real-time and performance requirements of the applications.
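
As a flavour of the kind of timing analysis the project builds on, the sketch below implements the classic fixed-priority response-time iteration from the real-time scheduling literature. It is a standard textbook technique with hypothetical task parameters, not a P-SOCRATES deliverable:

```python
# Classic fixed-priority response-time analysis (textbook RTA). Task
# parameters are hypothetical; this is not a P-SOCRATES deliverable.
import math

# (C, T) = worst-case execution time, period; sorted by priority (high first).
tasks = [(1, 4), (2, 6), (3, 12)]

def response_time(i, tasks):
    C, _ = tasks[i]
    R = C
    while True:  # fixed-point iteration: R = C + interference from hp tasks
        interference = sum(math.ceil(R / Tj) * Cj for Cj, Tj in tasks[:i])
        R_next = C + interference
        if R_next == R:
            return R
        if R_next > tasks[i][1]:
            return None  # deadline (= period) missed: unschedulable
        R = R_next

for i, (C, T) in enumerate(tasks):
    print(f"task {i}: C={C} T={T} R={response_time(i, tasks)}")
```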

Relevance:

40.00%

Publisher:

Abstract:

Group Decision Support Systems (GDSS) emerged with the goal of supporting a set of decision-makers in the decision-making process. One of the most common approaches in the literature for implementing GDSS is the use of Multi-Agent Systems (MAS). MAS make it possible to reflect the real context more transparently, both in each agent's representation of the decision-maker it represents and in the communication format used. With the growth of organisations, the concept of decision-making is currently undergoing a shift. Increasingly, due to factors such as lifestyle, global markets and the types of technology available, it makes sense to speak of ubiquitous decision-making. This means that the decision-maker should be able to use the system from any location, at any time, and through a wide variety of electronic devices such as tablets, smartphones, etc. In this work, a new argumentation model is proposed, adapted to the context of ubiquitous decision-making, to be used by a MAS in solving multi-criteria problems. It is assumed that each agent may adopt a behaviour style that affects how that agent interacts with other agents in conflict situations. The aim is therefore to study the impact of using behaviour styles throughout the decision-making process and to understand whether agents modelled with behaviour styles can reach consensus more easily than agents without any behaviour style. A further aim is to study whether the number of arguments exchanged between agents is proportional to the final level of consensus after the decision-making process. To test the research hypotheses, a GDSS prototype was developed using a MAS, together with an argumentation framework adapted to the prototype. The results obtained validated the hypotheses defined in this work: agents modelled with behaviour styles can, in most cases, reach consensus more easily than agents without any behaviour style, and the number of arguments exchanged between agents during the decision-making process is not proportional to the final level of consensus.
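
A minimal sketch of the kind of experiment described, where a behaviour style scales how much an agent concedes when it receives a counter-argument, might look as follows. The styles, update rule and consensus measure are hypothetical illustrations, not the thesis's actual argumentation model:

```python
# Sketch: agents negotiate over an alternative; a behaviour style scales
# how much an agent concedes per received argument. Styles and update
# rule are hypothetical illustrations, not the thesis's actual model.
import random

STYLES = {"conciliatory": 0.5, "neutral": 0.25, "dominant": 0.1}

class Agent:
    def __init__(self, style):
        self.concession = STYLES[style]
        self.preference = random.random()  # preference for alternative A

def negotiate(agents, rounds=50, threshold=0.05):
    exchanged = 0
    for _ in range(rounds):
        a, b = random.sample(agents, 2)
        exchanged += 1                     # one argument exchanged
        # receiver concedes toward the sender according to its style
        b.preference += b.concession * (a.preference - b.preference)
        prefs = [ag.preference for ag in agents]
        if max(prefs) - min(prefs) < threshold:  # consensus reached
            return exchanged, True
    return exchanged, False

random.seed(1)
agents = [Agent("conciliatory"), Agent("neutral"),
          Agent("dominant"), Agent("conciliatory")]
print(negotiate(agents))  # (arguments exchanged, consensus?)
```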

Relevance:

40.00%

Publisher:

Abstract:

Dissertation for obtaining the degree of Master in Informatics Engineering.

Relevance:

40.00%

Publisher:

Abstract:

The National Obesity Observatory was established to provide a single point of contact for wide-ranging, authoritative information on data and evidence related to obesity, overweight, underweight and their determinants. The Standard Evaluation Framework is a list of data collection criteria, with supporting guidance, for collecting high-quality information to support the evaluation of weight management interventions. This is a quick reference guide to the core criteria of the Standard Evaluation Framework. Essential criteria are presented as the minimum recommended data for evaluating a weight management intervention; desirable criteria are additional data that would enhance the evaluation.