996 results for NMR quantum computing
Abstract:
Although the computational power of mobile devices has been increasing, it is still insufficient for some classes of applications. At present, these applications offload their computational burden to servers located on the Internet. This model assumes always-on Internet connectivity and implies a non-negligible latency. The thesis addresses the challenges posed by, and the contributions made towards, applying the concept of a mobile collaborative computing environment to wireless networks. The goal is to define a reference architecture for high-performance mobile applications. Current work focuses on efficient data dissemination in a highly transient environment, suitable for many mobile applications, and on the reputation and incentive system available in this mobile collaborative computing environment. To this end, we are improving our previously published reputation/incentive algorithm with knowledge of usage patterns from the eduroam wireless network in the Lisbon area.
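The abstract does not describe the reputation/incentive algorithm itself. Purely as an illustrative sketch, and explicitly not the authors' published algorithm, the snippet below maintains a peer's reputation as an exponentially weighted average of observed cooperative behaviour; the `Node` structure and the smoothing factor `alpha` are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Node:
    """Hypothetical peer in a mobile collaborative computing environment."""
    node_id: str
    reputation: float = 0.5  # neutral starting reputation in [0, 1]

def update_reputation(node: Node, cooperated: bool, alpha: float = 0.2) -> float:
    """Exponentially weighted update: recent behaviour counts more than old behaviour.

    `cooperated` is True when the peer completed a delegated task,
    False when it defected (e.g. dropped the task or returned no result).
    """
    observation = 1.0 if cooperated else 0.0
    node.reputation = (1 - alpha) * node.reputation + alpha * observation
    return node.reputation

# Example: a peer that cooperates twice and then defects once.
peer = Node("peer-42")
for outcome in (True, True, False):
    update_reputation(peer, outcome)
print(f"{peer.node_id}: reputation = {peer.reputation:.3f}")
```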
Abstract:
Physical computing has sparked a true global revolution in the way the digital interfaces with the real world. From bicycle jackets with turn-signal lights to Twitter-controlled Christmas trees, the Do-it-Yourself (DiY) hardware movement has been driving endless innovations and stimulating an age of creative engineering. This ongoing (r)evolution has been led by popular electronics platforms such as the Arduino, the LilyPad, and the Raspberry Pi; however, these are not designed with the specific requirements of biosignal acquisition in mind. To date, the physiological computing community has severely lacked a parallel to what is found in the DiY electronics realm, especially with regard to suitable hardware frameworks. In this paper, we build on previous work developed within our group, focusing on an all-in-one, low-cost, and modular biosignal acquisition hardware platform that makes it quicker and easier to build biomedical devices. We describe the main design considerations, the experimental evaluation and circuit characterization results, and the results of a usability study performed with volunteers from multiple target user groups, namely health sciences and electrical, biomedical, and computer engineering.
Abstract:
Floating-point computing with more than one TFLOP of peak performance is already a reality in recent Field-Programmable Gate Arrays (FPGAs). General-Purpose Graphics Processing Units (GPGPUs) and recent many-core CPUs have also taken advantage of recent technological innovations in integrated circuit (IC) design and have dramatically improved their peak performance. In this paper, we compare the trends of these computing architectures for high-performance computing and survey these platforms on the execution of algorithms from different scientific application domains. Trends in peak performance, power consumption and sustained performance for particular applications show that the gap between FPGAs and GPUs or many-core CPUs is widening, moving FPGAs away from high-performance computing with intensive floating-point calculations. FPGAs remain competitive for custom floating-point or fixed-point representations, for smaller input sizes of certain algorithms, for combinational logic problems and for parallel map-reduce problems.
Abstract:
This work was carried out under the supervision of Prof. António Brandão Moniz for the course "Factores Sociais da Inovação" (Social Factors of Innovation) of the Master's programme in Informatics Engineering at the Faculdade de Ciências e Tecnologia da Universidade Nova de Lisboa (Portugal).
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia da Universidade Nova de Lisboa to obtain the degree of Master in Environmental Engineering, specialization in Environmental Management and Systems.
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia da Universidade Nova de Lisboa to obtain the degree of Master in Biotechnology.
Abstract:
Dynamically reconfigurable SRAM-based field-programmable gate arrays (FPGAs) enable the implementation of reconfigurable computing systems in which several applications may run simultaneously, sharing the available resources according to their own immediate functional requirements. To exclude malfunctions due to faulty elements, the reliability of all FPGA resources must be guaranteed. Since resource allocation takes place asynchronously, an online structural test scheme is the only way of ensuring reliable system operation. On the other hand, this test scheme should not disturb the operation of the circuit; otherwise, availability would be compromised. System performance is also influenced by the efficiency of the management strategies, which must be able to dynamically allocate enough resources when requested by each application. As those resources are allocated and later released, many small free resource blocks are created and left unused due to performance and routing restrictions. To avoid wasting logic resources, the FPGA logic space must be defragmented regularly. This paper presents a non-intrusive active replication procedure that supports the proposed test methodology and the implementation of defragmentation strategies, assuring both the availability of resources and their perfect working condition without disturbing system operation.
Abstract:
For many, teaching was and still is an "art": the most effective teachers and great masters are those with the ability to convey their messages and knowledge in a simple and appealing way, regardless of the field of study. Classroom-related information is increasingly digital, so it is important for teachers to master technologies for creating, organizing and making content available. This sharing was initially made possible through Web pages and later through LMS (Learning Management System) platforms. Creating a website used to be a complicated task, both in terms of cost and of mastering Web technology, and it was sometimes necessary to hire professionals for the purpose. CMS (Content Management Systems), Open Source technologies that enable content management, then emerged. In this context, a study was carried out to assess teachers' competences in managing and sharing digital content. The study allowed conclusions to be drawn about the potential and applicability of CMSs in education. Its main objective focused on the potential for distributing and sharing Digital Educational Resources, organized from a pedagogical point of view, with students. The role of Cloud Computing in the collaborative sharing of documents was also analysed and studied. To support this research, a model course was designed, implemented in the three main CMSs available today, and the potential of each was evaluated in this context. Finally, the conclusions drawn from the study are presented.
Abstract:
Empowered by virtualisation technology, cloud infrastructures enable the construction of flexible and elastic computing environments, providing an opportunity for energy and resource cost optimisation while enhancing system availability and achieving high performance. A crucial requirement for effective consolidation is the ability to efficiently utilise system resources for high-availability computing and energy-efficiency optimisation to reduce operational costs and carbon footprints in the environment. Additionally, failures in highly networked computing systems can negatively impact system performance substantially, prohibiting the system from achieving its initial objectives. In this paper, we propose algorithms to dynamically construct and readjust virtual clusters to enable the execution of users' jobs. Allied with an energy optimising mechanism to detect and mitigate energy inefficiencies, our decision-making algorithms leverage virtualisation tools to provide proactive fault-tolerance and energy-efficiency to virtual clusters. We conducted simulations by injecting random synthetic jobs and jobs using the latest version of the Google cloud tracelogs. The results indicate that our strategy improves the work per Joule ratio by approximately 12.9% and the working efficiency by almost 15.9% compared with other state-of-the-art algorithms.
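The paper's own consolidation and fault-tolerance algorithms are not spelled out in the abstract. The sketch below is only a minimal illustration of the kind of energy-aware placement such work typically builds on: a first-fit-decreasing packing of jobs onto hosts plus a work-per-Joule metric. The `Host` and `Job` structures and the linear power model constants are assumptions, not the paper's models.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Job:
    job_id: str
    cpu_demand: float   # normalised CPU share required (0..1)
    work_units: float   # abstract amount of useful work the job performs

@dataclass
class Host:
    host_id: str
    capacity: float = 1.0
    idle_power_w: float = 100.0   # hypothetical idle power draw
    peak_power_w: float = 250.0   # hypothetical draw at full load
    jobs: List[Job] = field(default_factory=list)

    def load(self) -> float:
        return sum(j.cpu_demand for j in self.jobs)

    def power_w(self) -> float:
        # Linear load-to-power model: a common simplification, not the paper's model.
        return self.idle_power_w + (self.peak_power_w - self.idle_power_w) * self.load()

def first_fit_decreasing(jobs: List[Job], hosts: List[Host]) -> None:
    """Pack the largest jobs first so fewer hosts need to stay powered on."""
    for job in sorted(jobs, key=lambda j: j.cpu_demand, reverse=True):
        for host in hosts:
            if host.load() + job.cpu_demand <= host.capacity:
                host.jobs.append(job)
                break

def work_per_joule(hosts: List[Host], interval_s: float) -> float:
    """Useful work divided by the energy drawn by the hosts that are actually used."""
    active = [h for h in hosts if h.jobs]
    energy_j = sum(h.power_w() for h in active) * interval_s
    work = sum(j.work_units for h in active for j in h.jobs)
    return work / energy_j if energy_j else 0.0

jobs = [Job("j1", 0.6, 60), Job("j2", 0.3, 30), Job("j3", 0.5, 50)]
hosts = [Host("h1"), Host("h2"), Host("h3")]
first_fit_decreasing(jobs, hosts)
print(f"work/Joule over 1 h: {work_per_joule(hosts, 3600):.6f}")
```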
Abstract:
Extracting the semantic relatedness of terms is an important topic in several areas, including data mining, information retrieval and web recommendation. This paper presents an approach for computing the semantic relatedness of terms using the knowledge base of DBpedia, a community effort to extract structured information from Wikipedia. Several approaches for extracting semantic relatedness from Wikipedia using bag-of-words vector models are already available in the literature. The research presented in this paper explores a novel approach using paths on an ontological graph extracted from DBpedia. It is based on an algorithm for finding and weighting a collection of paths connecting concept nodes. This algorithm was implemented in a tool called Shakti, which extracts relevant ontological data for a given domain from DBpedia using its SPARQL endpoint. To validate the proposed approach, Shakti was used to recommend web pages on a Portuguese social site dedicated to alternative music, and the results of that experiment are reported in this paper.
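The abstract does not give the exact path-weighting scheme used by Shakti. As an illustrative sketch only, the snippet below scores the relatedness of two concepts on a toy RDF-like graph by summing a weight that decays with path length over all simple paths up to a cutoff; the toy graph, the decay factor and the cutoff are assumptions, not the paper's parameters.

```python
import networkx as nx

# Toy ontological graph: nodes stand in for DBpedia resources,
# edges for properties connecting them (direction ignored here).
G = nx.Graph()
G.add_edges_from([
    ("dbr:Punk_rock", "dbr:Rock_music"),
    ("dbr:Rock_music", "dbr:Alternative_rock"),
    ("dbr:Punk_rock", "dbr:Post-punk"),
    ("dbr:Post-punk", "dbr:Alternative_rock"),
    ("dbr:Alternative_rock", "dbr:Grunge"),
])

def relatedness(graph: nx.Graph, a: str, b: str, cutoff: int = 4, decay: float = 0.5) -> float:
    """Sum a length-decaying weight over all simple paths between two concepts.

    Shorter paths contribute more; several independent paths also raise the score.
    """
    score = 0.0
    for path in nx.all_simple_paths(graph, source=a, target=b, cutoff=cutoff):
        hops = len(path) - 1
        score += decay ** hops
    return score

print(relatedness(G, "dbr:Punk_rock", "dbr:Alternative_rock"))
print(relatedness(G, "dbr:Punk_rock", "dbr:Grunge"))
```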
Abstract:
The design of magnetic cores can be carried out by optimizing different parameters in accordance with the application requirements. Considering the specifications of the fast field cycling nuclear magnetic resonance (FFC-NMR) technique, the magnetic flux density distribution at the sample insertion volume is one of the core parameters that needs to be evaluated. Recently, it has been shown that FFC-NMR magnets can be built on the basis of solenoid coils with ferromagnetic cores. Since this type of apparatus requires magnets with high magnetic flux density uniformity, a new type of magnet using a ferromagnetic core, copper coils, and superconducting blocks was designed with improved magnetic flux density distribution. In this paper, the design aspects of the magnet are described and discussed, with emphasis on the improvement of the magnetic flux density homogeneity (ΔB/B₀) in the air gap. The magnetic flux density distribution is analyzed based on 3-D simulations and NMR experimental results.
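The abstract reports the relative homogeneity ΔB/B₀ but does not define it. A common definition, offered here only as a plausible reading and not necessarily the authors' exact figure of merit, takes the spread of the flux density magnitude over the sample insertion volume V relative to the nominal value B₀ at its centre:

```latex
\[
  \frac{\Delta B}{B_0}
  = \frac{\displaystyle \max_{\mathbf{r}\in V}\lvert\mathbf{B}(\mathbf{r})\rvert
        - \min_{\mathbf{r}\in V}\lvert\mathbf{B}(\mathbf{r})\rvert}{B_0}
\]
```

Smaller values of ΔB/B₀ correspond to a more uniform field in the air gap.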
Abstract:
Background: Nanotechnology has the potential to provide agriculture with new tools that may be used in the rapid detection and molecular treatment of diseases and in enhancing the ability of plants to absorb nutrients, among others. Data on nanoparticle toxicity in plants are largely heterogeneous, with a diversity of physicochemical parameters reported, which makes generalizations difficult. Here, a cell biology approach was used to evaluate the impact of Quantum Dot (QD) nanocrystals on plant cells, including their effect on cell growth, cell viability, oxidative stress and ROS accumulation, as well as their mobility within the cell. Results: A plant cell suspension culture of Medicago sativa was established to assess the impact of adding mercaptopropanoic acid-coated CdSe/ZnS QDs. Cell growth was significantly reduced when 100 nM of mercaptopropanoic acid-QDs was added during the exponential growth phase, with less than 50% of the cells viable 72 hours after the addition. The QDs were taken up by Medicago sativa cells and accumulated in the cytoplasm and nucleus, as revealed by confocal imaging of thin optical sections. As part of the cellular response to internalization, Medicago sativa cells were found to increase the production of Reactive Oxygen Species (ROS) in a dose- and time-dependent manner. Using the fluorescent dye H2DCFDA, mercaptopropanoic acid-QD concentrations between 5 and 180 nM led to a progressive and linear increase in ROS accumulation. Conclusions: Our results show that the extent of the cytotoxicity of mercaptopropanoic acid-coated CdSe/ZnS QDs in plant cells depends on a number of factors, including QD properties, dose and the environmental conditions of administration, and that, for Medicago sativa cells, a safe range of 1-5 nM should not be exceeded for biological applications.
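The reported linear dose dependence of ROS accumulation over 5-180 nM lends itself to a simple least-squares fit. The sketch below, using entirely made-up fluorescence values rather than the study's data, shows how such a linear dose-response relationship could be estimated with NumPy.

```python
import numpy as np

# Hypothetical H2DCFDA fluorescence readings (arbitrary units) at each
# QD concentration; these numbers are illustrative, not the study's data.
concentration_nM = np.array([5, 20, 40, 80, 120, 180], dtype=float)
ros_fluorescence = np.array([1.1, 1.9, 3.2, 5.8, 8.1, 11.9])

# Least-squares fit of a straight line: fluorescence = slope * dose + intercept
slope, intercept = np.polyfit(concentration_nM, ros_fluorescence, deg=1)

# Coefficient of determination to judge how linear the dose-response is.
predicted = slope * concentration_nM + intercept
ss_res = np.sum((ros_fluorescence - predicted) ** 2)
ss_tot = np.sum((ros_fluorescence - ros_fluorescence.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot

print(f"slope = {slope:.4f} a.u./nM, intercept = {intercept:.3f}, R^2 = {r_squared:.3f}")
```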
Abstract:
Cellulosic lyotropic liquid crystals have long been regarded as potential materials for producing fibers competitive with spider silk or Kevlar, yet the processing of high-modulus materials from cellulose-based precursors has been hampered by their complex rheological behavior. In this work, using the Rheo-NMR technique, which combines deuterium NMR with rheology, we investigate the high shear rate regimes that may be of interest for the industrial processing of these materials. Whereas the low shear rate regimes have already been investigated with this technique in different works [1-4], the high shear rate range still lacks a detailed study. This work focuses on the orientational order in the system, both under shear and during the subsequent relaxation process after shear cessation, through the analysis of deuterium spectra of the deuterated water solvent. At the analyzed shear rates the cholesteric order is suppressed and a flow-aligned nematic is observed, which, at the higher shear rates, develops after a certain time periodic perturbations that transiently annihilate the order in the system. During relaxation the flow-aligned nematic starts losing order due to the onset of cholesteric helices, leading to a period of very low order in which cholesteric helices with different orientations form from the aligned nematic, followed in the final stage by an increase in order at long relaxation times, corresponding to the development of aligned cholesteric domains. This study sheds light on the complex rheological behavior of chiral nematic cellulose-based systems and opens ways to improve their processing.
Abstract:
When a pesticide is released into the environment, most of it is lost before it reaches its target. An effective way to reduce environmental losses of pesticides is to use controlled-release technology. Microencapsulation has become a promising technique for the production of controlled-release agricultural formulations. In this work, the microencapsulation of the chlorophenoxy herbicide MCPA with native β-cyclodextrin and its methyl and hydroxypropyl derivatives was investigated. The phase solubility study showed that both the native and the derivative β-CDs increased the water solubility of the herbicide and that inclusion complexes are formed in a 1:1 stoichiometric ratio. The stability constants describing the extent of formation of the complexes were determined from the phase solubility studies. 1H NMR experiments were also performed on the prepared solid systems, and the data gathered confirm the formation of the inclusion complexes. The 1H NMR data obtained for the MCPA/CD complexes disclosed noticeable proton shift displacements for the OCH2 group and the H6 aromatic proton of MCPA, providing clear evidence of the inclusion complexation process and suggesting that the phenyl moiety of the herbicide is included in the hydrophobic cavity of the CDs. Free-energy molecular mechanics calculations confirm these findings. The results gathered can be regarded as an essential step toward the development of controlled-release agricultural formulations containing the herbicide MCPA.
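For 1:1 inclusion complexes with a linear (AL-type) phase solubility diagram, stability constants are commonly estimated with the Higuchi-Connors relation shown below; the abstract does not state the exact estimator used, so this is given only as the standard treatment, where S_0 is the intrinsic solubility of MCPA in the absence of cyclodextrin and the slope is that of the phase solubility line.

```latex
\[
  K_{1:1} \;=\; \frac{\text{slope}}{S_{0}\,\bigl(1 - \text{slope}\bigr)}
\]
```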
Abstract:
Single-processor architectures are unable to provide the performance required by high-performance embedded systems. Parallel processing based on general-purpose processors can achieve this performance, but with a considerable increase in the required resources. However, in many cases, simplified, optimized parallel cores can be used instead of general-purpose processors, achieving better performance at lower resource utilization. In this paper, we propose a configurable many-core architecture to serve as a co-processor for high-performance embedded computing on Field-Programmable Gate Arrays. The architecture consists of an array of configurable simple cores with support for floating-point operations, interconnected by a configurable interconnection network. For each core it is possible to configure the size of the internal memory, the supported operations and the number of interfacing ports. The architecture was tested on a ZYNQ-7020 FPGA in the execution of several parallel algorithms. The results show that the proposed many-core architecture achieves better performance than a parallel general-purpose processor and that up to 32 floating-point cores can be implemented in a ZYNQ-7020 SoC FPGA.