962 results for ubiquitous multi-core framework


Relevance: 100.00%

Publisher:

Abstract:

The main aim of this thesis is the controlled and reproducible synthesis of functional materials at the nanoscale. In the first chapter, the tuning of the morphology and magnetic properties of magnetite nanoparticles is presented. It was achieved by an innovative approach, which involves the use of an organic macrocycle (a calixarene) to induce the oriented aggregation of NPs during the synthesis. This method is potentially applicable to the preparation of other metal oxide NPs by thermal decomposition of the respective precursors. The products obtained, in particular the multi-core nanoparticles, show remarkable magnetic and colloidal properties, making them very interesting for biomedical applications. The synthesis and functionalisation of plasmonic Au and Ag nanoparticles are presented in the second chapter. Here, a supramolecular approach was exploited to achieve a controlled and potentially reversible aggregation between Au and Ag NPs. This aggregation phenomenon was followed by UV-visible spectroscopy and dynamic light scattering. In the final chapters, the conjugation of plasmonic and magnetic functionalities was tackled through the preparation of dimeric nanostructures. Au-Fe oxide heterodimeric nanoparticles were prepared and their magnetic properties thoroughly characterised. The results demonstrate the formation of FeO (wustite), together with magnetite, during the thermal decomposition of the iron precursor. Through an oxidation process that preserves Au in the dimeric structures, the wustite disappeared completely, forming magnetite and/or maghemite, which are much better from the magnetic point of view. The plasmon resonance of Au is damped by the presence of the iron oxide, a material with a high refractive index, but it is still present when the Au domain of the nanoparticles is exposed to the bulk. Finally, remarkable hyperthermia performance, including in vitro, was found for these structures.


A parallel algorithm for image noise removal is proposed. The algorithm is based on the peer group concept and uses a fuzzy metric. An optimization study on the use of the CUDA platform to remove impulsive noise with this algorithm is presented. Moreover, an implementation of the algorithm on multi-core platforms using OpenMP is presented. Performance is evaluated in terms of execution time, and the multi-core, GPU and combined implementations are compared. A performance analysis with large images is conducted in order to identify the number of pixels to allocate to the CPU and the GPU. The observed times show that both devices should share the workload, with the larger share assigned to the GPU. The results show that parallel implementations of denoising filters on GPUs and multi-core CPUs are highly advisable, and they open the door to using such algorithms for real-time processing.
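The peer group test described above can be sketched in a minimal single-threaded form (the paper parallelises the per-pixel loop with OpenMP and CUDA). The fuzzy-metric formula, the parameter values and the median replacement rule below are illustrative assumptions, not the paper's exact design.

```python
# Sketch of peer-group impulsive-noise filtering with a fuzzy metric.
# K, D and MIN_PEERS are assumed values, not the paper's parameters.

K = 1024.0      # fuzzy-metric smoothing constant (assumed)
D = 0.90        # similarity threshold for declaring a "peer" (assumed)
MIN_PEERS = 2   # minimum peers for a pixel to be considered noise-free

def fuzzy_metric(x, y):
    """Fuzzy similarity of two RGB pixels, in (0, 1]; 1 means identical."""
    m = 1.0
    for xi, yi in zip(x, y):
        m *= (min(xi, yi) + K) / (max(xi, yi) + K)
    return m

def denoise(img):
    """img: 2-D list of RGB tuples. Returns a filtered copy."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for i in range(h):
        for j in range(w):
            centre = img[i][j]
            neigh = [img[a][b]
                     for a in range(max(0, i - 1), min(h, i + 2))
                     for b in range(max(0, j - 1), min(w, j + 2))
                     if (a, b) != (i, j)]
            peers = [p for p in neigh if fuzzy_metric(centre, p) >= D]
            if len(peers) < MIN_PEERS:   # pixel flagged as impulsive
                # replace each channel by the neighbourhood median
                out[i][j] = tuple(sorted(p[c] for p in neigh)[len(neigh) // 2]
                                  for c in range(3))
    return out
```

In the parallel versions, each iteration of the outer pixel loop is independent, which is what makes the per-pixel work easy to distribute across CPU threads or GPU threads.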


A parallel algorithm to remove impulsive noise in digital images using heterogeneous CPU/GPU computing is proposed. The parallel denoising algorithm is based on the peer group concept and uses a Euclidean metric. In order to identify the number of pixels to be allocated to the multi-core CPU and the GPU, a performance analysis using large images is presented. A comparison of the multi-core, GPU and combined parallel implementations is performed. Performance is evaluated in terms of execution time and Megapixels/second. We present several optimization strategies that are especially effective in the multi-core environment, and demonstrate significant performance improvements. The main advantage of the proposed noise removal methodology is its computational speed, which enables efficient filtering of color images in real-time applications.
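The CPU/GPU pixel-allocation question studied empirically above can be illustrated with a simple static split: once each device's throughput (in Megapixels/second, the metric used here) has been measured, assigning work in proportion to throughput makes both devices finish at roughly the same time. This is a hedged sketch of that proportional model; the throughput figures are invented, not the paper's measurements.

```python
def split_rows(total_rows, cpu_mpx_s, gpu_mpx_s):
    """Split image rows between CPU and GPU in proportion to their
    measured throughputs, so both devices finish at about the same time."""
    gpu_share = gpu_mpx_s / (cpu_mpx_s + gpu_mpx_s)
    gpu_rows = round(total_rows * gpu_share)
    return total_rows - gpu_rows, gpu_rows

# invented throughputs: the GPU is 5x faster, so it takes 5/6 of the rows
cpu_rows, gpu_rows = split_rows(2160, cpu_mpx_s=40.0, gpu_mpx_s=200.0)
```

This matches the observation in the abstracts that both devices should have work to do, with the larger share going to the GPU.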


In Bioinformatics, problems are common whose treatment requires considerable processing power and/or a large data storage capacity with high access bandwidth (so as not to compromise processing efficiency). An example of this type of problem is the search for regions of similarity in protein amino-acid sequences, or in DNA nucleotide sequences, by comparison with a supplied query sequence. In this field, the best-known and most widely used computational tool is arguably BLAST (Basic Local Alignment Search Tool) [1]. Hence, any increase in the performance of this tool has a considerable (and positive) impact on the activity of those who use it regularly, whether for research or for commercial purposes. Indeed, since BLAST was first introduced, several versions with improved performance have appeared, notably through the application of parallelisation techniques to the various phases of the algorithm (e.g., partitioning and distribution of the databases to be searched, segmentation of the queries, etc.), capable of exploiting different parallel execution environments, such as multi-core machines (BLAST+ [2]), clusters of multi-core nodes (mpiBLAST [3]) and, more recently, accelerator co-processors such as GPUs or FPGAs. It is also possible to use the BLAST family of tools through a web interface/site [5], which conveniently allows searching a variety of well-known (and permanently updated) databases, with response times small enough for most users, thanks to the high-performance computational resources that support its backend. Even so, this way of using BLAST may not be the best option in some situations, for example when the databases to be searched are not yet in the public domain or, if they are, are not available on that website.
Additionally, using that site as a regular work tool presupposes its permanent availability (which depends on third parties) and, on the client side, enough bandwidth for efficient interaction with it. For these reasons, it may be of interest (or even necessary) to deploy a local BLAST infrastructure, capable of hosting the relevant databases and of supporting their search as efficiently as possible, while taking into account possible financial constraints that limit the type of hardware used to implement that infrastructure. In this context, a comparative study of several BLAST versions was carried out on a parallel computing infrastructure of the IPB, based on commodity components: a cluster of 8 (virtual, under VMware ESXi) compute nodes (each with an i7-4790K 4 GHz CPU, 32 GB RAM and a 128 GB SSD) and one node equipped with a GPU (i7-2600 3.8 GHz CPU, 32 GB RAM, 128 GB SSD, 1 TB HD, NVIDIA GTX 580). The main focus was on evaluating the performance of the original BLAST and of mpiBLAST, since both are provided out of the box by the Linux distribution on which the cluster is based [6]. In addition, BLAST+ and gpuBLAST were also evaluated on the GPU node. The evaluation covered several resource configurations, including different numbers of nodes and different storage platforms for the databases (HD, SSD, NFS). The databases searched correspond to a representative subset of those available on the BLAST website, covering a variety of sizes (from a few tens of MBytes up to around a hundred GBytes) and containing both amino-acid sequences (env_nr and nr) and nucleotide sequences (drosohp.nt, env_nt, mito.nt, nt and patnt). The searches used arbitrary 568-letter sequences in FASTA format, with the default options of the various BLAST applications.
Unless stated otherwise, the execution times considered in the comparisons and in the speedup calculations refer to the first execution of a search, and thus do not benefit from any cache effects; this choice assumes a realistic scenario in which the same query is not usually executed several times in a row (although it may be re-executed later). The main conclusions of the comparative study were the following:
- it is necessary to provision, in advance, storage resources with enough capacity to host the databases in their several versions (original/compressed, decompressed and formatted); in our test scenario the coexistence of all these versions consumed 600 GBytes;
- the time to prepare (format) the databases for subsequent searching can be considerable; in our experimental scenario, formatting the heaviest databases (nr, env_nt and nt) took between 30 m and 40 m (for BLAST), and between 45 m and 55 m (for mpiBLAST);
- although economically more onerous, using solid-state disks instead of traditional hard disks improves database formatting time; however, the benefits observed (around 9%) fall well short of what was initially expected;
- BLAST execution time is heavily penalised when the databases are accessed over the network, via NFS; in this case it is not even worth using several cores; when the databases are local and stored on SSD, execution time improves considerably, especially with several cores; in this case, with 4 cores, the speedup reaches 3.5 (the ideal being 4) when searching protein databases, but no more than 1.8 when searching nucleotide databases;
- mpiBLAST execution time suffers greatly when the database fragments are not yet present in the cluster nodes and have to be distributed before the search itself; after distribution, repeating the same queries benefits from speedups of 14 to 70; however, since the same database may be used to answer different queries, it is not necessary to repeat the same query to amortise the distribution effort;
- in the test scenario, using mpiBLAST with 32+2 cores, compared with BLAST with 4 cores, yields speedups that, depending on the database searched (and previously distributed), vary between 2 and 5; these values fall short of the theoretical maximum of 8.5 (34/4), but still demonstrate that, when the possibility exists, it pays off to run the searches on a cluster;
- the comparison of BLAST with BLAST+ (able to exploit several cores) and with gpuBLAST, carried out on the GPU node (representative of a typical workstation), makes it possible to determine the best option when cluster searches are not possible; the observations indicate that there are no significant differences between BLAST and BLAST+; additionally, the performance of gpuBLAST was always worse (by approximately 50%) than that of BLAST and BLAST+, which may be explained by the age of the GPU model used;
- finally, comparing the best option in our test scenario, namely mpiBLAST, with online searching on the BLAST website [5] reveals that mpiBLAST is quite competitive with online BLAST, and even clearly superior if we consider mpiBLAST times that benefit from cache effects; this assumption turns out to be fair, since online BLAST also exploits the same kind of effects; nevertheless, with such small search times (< 30 s), using mpiBLAST on a local infrastructure is only defensible if the goal is to search databases that cannot be searched via the online BLAST.
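As a concrete illustration of the local-infrastructure workflow described above, the sketch below builds the two standard BLAST+ command lines involved: a one-off database formatting step and a multi-threaded first-run search. The file and database names are hypothetical; the flags shown (-in, -dbtype, -query, -db, -num_threads, -outfmt) are standard BLAST+ options, and the commands are only constructed here, not executed.

```python
import shlex

def makeblastdb_cmd(fasta, dbtype="prot"):
    """One-off formatting step (the study reports 30-55 min for the
    largest databases)."""
    return ["makeblastdb", "-in", fasta, "-dbtype", dbtype]

def blastp_cmd(query, db, threads=4):
    """First-run protein search; -num_threads exploits the multi-core node."""
    return ["blastp", "-query", query, "-db", db,
            "-num_threads", str(threads), "-outfmt", "6"]

# hypothetical 568-letter query against the nr database, as in the study
cmd = blastp_cmd("query568.fasta", "nr", threads=4)
print(shlex.join(cmd))
```

The same command lists could be passed to subprocess.run when the databases are actually hosted locally.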


This research adopts a resource allocation theoretical framework to generate predictions regarding the relationship between self-efficacy and task performance from two levels of analysis and specificity. Participants were given multiple trials of practice on an air traffic control task. Measures of task-specific self-efficacy and performance were taken at repeated intervals. The authors used multilevel analysis to demonstrate dynamic main effects, dynamic mediation and dynamic moderation. As predicted, the positive effects of overall task-specific self-efficacy and general self-efficacy on task performance strengthened throughout practice. In line with these dynamic main effects, the effect of general self-efficacy was mediated by overall task-specific self-efficacy; however, this pattern emerged over time. Finally, changes in task-specific self-efficacy were negatively associated with changes in performance at the within-person level; however, this effect only emerged towards the end of practice for individuals with high levels of overall task-specific self-efficacy. These novel findings emphasise the importance of conceptualising self-efficacy within a multi-level and multi-specificity framework and make a significant contribution to understanding the way this construct relates to task performance.


Drawing on extensive academic research and theory on clusters and their analysis, the methodology employed in this pilot study (sponsored by the Welsh Assembly Government's Economic Research Grants Assessment Board) seeks to create a framework for reviewing and monitoring clusters in Wales on an ongoing basis, and to generate the information necessary for successful cluster development policy. The multi-method framework developed and tested in the pilot study is designed to map existing Welsh sectors with cluster characteristics, uncover existing linkages, and better understand areas of strength and weakness. The approach adopted relies on synthesising both quantitative and qualitative evidence. Statistical measures, including the size of potential clusters, are combined with other evidence on input-output derived inter-linkages within clusters and with other sectors in Wales and the UK, as well as on the export and import intensity of the cluster. Multi Sector Qualitative Analysis is then used to assess competencies/capacity, risk factors, markets, types and, crucially, the perceived strengths of cluster structures and relationships. The approach outlined above can, with the refinements recommended through the review process, provide policy-makers with a valuable tool for reviewing and monitoring individual sectors and for ameliorating problems in sectors likely to decline further.


This paper presents the results of a multinational large-scale survey investigating current trends in strategic planning. The survey was conducted online using the Warwick Business School alumni database. Considering the development and implementation of strategy within a multi-process framework, based on the 'Strategic Development Process' model by Dyson and O'Brien (1998), four distinct factors of strategic planning were produced using factor analysis, and their impact on the success of strategic planning, from a process point of view, was assessed with regression analysis. The results indicate that significant variation in the practices involved is created by complexity, whether arising from organizational size or from environmental turbulence.


Lack of discrimination power and poor weight dispersion remain major issues in Data Envelopment Analysis (DEA). Since the initial multiple criteria DEA (MCDEA) model developed in the late 1990s, only goal programming approaches, namely the GPDEA-CCR and GPDEA-BCC, have been introduced for solving these problems in a multi-objective framework. We found the GPDEA models to be invalid and demonstrate that our proposed bi-objective multiple criteria DEA (BiO-MCDEA) outperforms the GPDEA models in discrimination power and weight dispersion, while also requiring less computational code. An application to energy dependency among 25 European Union member countries is further used to describe the efficacy of our approach. © 2013 Elsevier B.V. All rights reserved.


The main requirements for DRM platforms, namely implementing an effective user experience and strong security measures to prevent unauthorized use of content, are discussed. A comparison of hardware-based and software-based platforms is made, showing the general inherent advantages of hardware DRM solutions. An analysis and evaluation of the main flaws of hardware platforms is conducted, pointing out possibilities for overcoming them. An overview of the existing concepts for the practical realization of hardware DRM protection reveals their advantages and disadvantages, as well as the increasing demand for a multi-core architecture that could ensure effective DRM protection without decreasing the user's freedom or introducing risks to end-system security.


The authors have previously suggested a hierarchical method for defining class descriptions in the solution of pattern recognition problems. In this paper, the development and use of such hierarchical descriptions for the parallel representation of complex patterns, on the basis of multi-core computers or neural networks, is proposed.


The traffic carried by core optical networks grows at a steady but remarkable pace of 30-40% year-over-year. Advances in optical transmission and networking continue to satisfy these traffic requirements by delivering content over the network infrastructure in a cost- and energy-efficient manner. Such core optical networks serve information traffic demands in a dynamic way, responding to shifts in traffic demand both temporally (day/night) and spatially (business district/residential). However, as we approach the fundamental spectral efficiency limits of single-mode fibers, the scientific community has recently been pursuing the development of an innovative, all-optical network architecture that introduces the spatial degree of freedom when designing and operating future transport networks. Space-division multiplexing, through the use of bundles of single-mode fibers, multi-core fibers and/or few-mode fibers, can offer up to a 100-fold capacity increase in future optical networks. The EU INSPACE project is working on the development of a complete spatial-spectral flexible optical networking solution, offering the ultra-high network capacity, flexibility and energy efficiency required to meet the challenge of delivering the exponentially growing traffic demands of the internet over the next twenty years. In this paper we present the motivation and main research activities of the INSPACE consortium towards the realization of the overall project solution. © 2014 Copyright SPIE.


Since the development of large-scale power grid interconnections and power markets, research on available transfer capability (ATC) has attracted great attention. The challenge of accurately assessing ATC originates from the numerous uncertainties in the electricity generation, transmission, distribution and utilization sectors. Power system uncertainties can be broadly described as two types: randomness and fuzziness. However, the traditional transmission reliability margin (TRM) approach only considers randomness. Based on credibility theory, this paper first builds models of generators, transmission lines and loads according to their features of both randomness and fuzziness. A random fuzzy simulation is then applied, along with a novel method proposed for ATC assessment, in which both randomness and fuzziness are considered. The bootstrap method and multi-core parallel computing techniques are introduced to enhance processing speed. By implementing the simulation for the IEEE 30-bus system and a real-life system located in Northwest China, the viability of the models and the proposed method is verified.
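The bootstrap step mentioned above can be sketched as follows: resample an observed sample with replacement many times and take a percentile interval of the statistic. Each replicate is independent, which is what makes the method amenable to the multi-core parallelisation the paper applies (the map below could be swapped for multiprocessing.Pool.map). The margin values are invented for illustration, and the statistic is a plain mean rather than the paper's random fuzzy ATC model.

```python
import random

def bootstrap_ci(sample, stat, n_boot=2000, alpha=0.05, seed=42):
    """Percentile bootstrap confidence interval for stat(sample)."""
    rng = random.Random(seed)
    def one_replicate(_):
        resample = [rng.choice(sample) for _ in sample]
        return stat(resample)
    # each replicate is independent: this map is the parallelisable step
    reps = sorted(map(one_replicate, range(n_boot)))
    lo = reps[int(n_boot * alpha / 2)]
    hi = reps[int(n_boot * (1 - alpha / 2))]
    return lo, hi

margins = [312.0, 298.5, 305.2, 290.1, 320.7, 301.3, 295.8, 310.4]  # MW, invented
mean = lambda xs: sum(xs) / len(xs)
lo, hi = bootstrap_ci(margins, mean)
```

Note that with a fixed seed the estimate is reproducible, which matters when comparing serial and parallel runs.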


In this talk we review some of the key enabling technologies of optical communications and potential future bottlenecks. Single-mode fibre (SMF) has long been the preferred waveguide for long-distance communication, largely due to its low loss, low cost and relative linearity over a wide bandwidth. As capacity demands have grown, SMF has largely been able to keep pace. Several groups have identified the possibility of exhausting the bandwidth provided by SMF [1,2,3]. This so-called "capacity crunch" has potentially vast economic and social consequences and will be discussed in detail. As demand grows, the optical power launched into the fibre can cause nonlinearities that are detrimental to transmission. Considerable work has been done on identifying this nonlinear limit [4,5], and there is currently strong research interest in nonlinear compensation [6,7]. Embracing and compensating for nonlinear transmission is one potential solution that may extend the lifetime of the current waveguide technology. However, at sufficiently high powers the waveguide will fail due to heat-induced mechanical failure. Moving forward, it becomes necessary to address the waveguide itself; several promising contenders are discussed, including few-mode fibre and multi-core fibre.


Catering to society's demand for high-performance computing, billions of transistors are now integrated on IC chips to deliver unprecedented performance. With increasing transistor density, power consumption and power density are growing exponentially. The increasing power consumption translates directly into high chip temperature, which not only raises packaging/cooling costs, but also degrades the performance, reliability and life span of computing systems. Moreover, high chip temperature also greatly increases leakage power consumption, which is becoming more and more significant with the continuous scaling of transistor size. As the semiconductor industry continues to evolve, power and thermal challenges have become the most critical challenges in the design of new generations of computing systems.

In this dissertation, we addressed the power/thermal issues from the system-level perspective. Specifically, we sought to employ real-time scheduling methods to optimize the power/thermal efficiency of real-time computing systems, with the leakage/temperature dependency taken into consideration. In our research, we first explored the fundamental principles of how to employ dynamic voltage scaling (DVS) techniques to reduce the peak operating temperature when running a real-time application on a single-core platform. We then proposed a novel real-time scheduling method, "M-Oscillations", to reduce the peak temperature when scheduling a hard real-time periodic task set. We also developed three checking methods to guarantee the feasibility of a periodic real-time schedule under a peak temperature constraint. We further extended our research from single-core to multi-core platforms: we investigated the energy estimation problem on multi-core platforms and developed a lightweight and accurate method to calculate the energy consumption of a given voltage schedule on a multi-core platform.

Finally, we conclude the dissertation with a detailed discussion of future extensions of our research.
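The kind of voltage-schedule energy estimation described above, with the leakage/temperature dependency included, can be sketched with a simple lumped model: dynamic power scales with v^3 (v^2 times a frequency proportional to v), leakage is linearised in temperature, and the die heats with dissipated power while cooling toward ambient. All constants and the linearised leakage form are illustrative assumptions, not the dissertation's actual model or the "M-Oscillations" algorithm.

```python
# Toy energy/thermal model for a piecewise-constant voltage schedule.
# Every constant below is an assumed, normalised value for illustration.

C_EFF = 1.0      # effective switched capacitance (assumed)
K_LEAK = 0.05    # leakage sensitivity to temperature (assumed)
I_LEAK0 = 0.1    # baseline leakage current (assumed)
A_HEAT = 0.8     # heating coefficient (assumed)
B_COOL = 0.2     # Newtonian cooling coefficient (assumed)

def schedule_energy(schedule, t_amb=25.0, dt=0.01):
    """schedule: list of (voltage, duration) pairs.
    Returns (total_energy, final_temperature)."""
    temp, energy = t_amb, 0.0
    for v, duration in schedule:
        for _ in range(int(duration / dt)):
            p_dyn = C_EFF * v ** 3                      # P = C v^2 f, f ~ v
            p_leak = v * (I_LEAK0 + K_LEAK * (temp - t_amb))
            p = p_dyn + p_leak
            energy += p * dt
            # lumped thermal model: heating minus cooling toward ambient
            temp += (A_HEAT * p - B_COOL * (temp - t_amb)) * dt
    return energy, temp

# oscillating between a high and a low voltage (cf. "M-Oscillations")
e_osc, t_osc = schedule_energy([(1.0, 1.0), (0.6, 1.0)] * 3)
# a constant-voltage schedule of the same total length, for comparison
e_flat, t_flat = schedule_energy([(0.85, 6.0)])
```

Because leakage feeds back through temperature, candidate schedules have to be integrated like this rather than summed analytically, which is what motivates a lightweight estimation method.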


Though there is now a growing commitment to publicly engaged research, the role and definition of the public in such processes is wide-ranging, contested, and often rather vague. This article addresses this problem by showing that, although there is no single agreed-upon theory or way of being public, it is still possible, and very important, to develop clear, public-centric understandings of engaged research practice. The article introduces a multi-dimensional framework based on the theoretical literature on the 'public', and demonstrates, in the context of a recent engaged research project, how it is possible to conceptualise, design and evaluate context-specific formations of the public. Starting from an understanding of publics as mediated and dynamic entities, the article seeks to illuminate some of the choices that researchers face and how the framework can help them navigate these. This article is for all those interested in what it means to address, support and account for an engaged public in contemporary settings.