872 results for High Performance Computing


Relevance: 100.00%

Abstract:

Ubiquitous sensor network deployments, such as those found in smart city and ambient intelligence applications, impose constantly increasing computational demands in order to process data and offer services to users, and the nature of these applications implies the use of data centers. Research has paid much attention to the energy consumption of the sensor nodes in WSN infrastructures; however, it is the supercomputing facilities that have the higher economic and environmental impact, owing to their very high power consumption, and this problem has so far been disregarded in the field of smart environment services. This paper proposes an energy-minimization workload assignment technique, based on heterogeneity and application awareness, that redistributes low-demand computational tasks from high-performance facilities to idle nodes with low and medium resources in the WSN infrastructure. Although not optimal for each task in isolation, these allocation policies reduce the energy consumed by the whole infrastructure as well as the total execution time.
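As a sketch of the idea only (the paper's actual policy and cost model are not given in the abstract), a greedy application-aware assignment might look like this; all node and task figures below are invented for illustration:

```python
# Greedy energy-aware assignment: prefer idle low-power WSN nodes over
# the data center whenever they can meet a task's demand.

from dataclasses import dataclass

@dataclass
class Node:
    name: str
    capacity: float        # spare compute, arbitrary units (assumed)
    watts_per_unit: float  # marginal power per compute unit (assumed)

@dataclass
class Task:
    name: str
    demand: float

def assign(tasks, edge_nodes, datacenter):
    """Map each task to the cheapest node that can still host it;
    fall back to the data center when no edge node has capacity."""
    plan = {}
    for task in sorted(tasks, key=lambda t: t.demand):
        fits = [n for n in edge_nodes if n.capacity >= task.demand]
        target = min(fits, key=lambda n: n.watts_per_unit, default=datacenter)
        if target is not datacenter:
            target.capacity -= task.demand
        plan[task.name] = target.name
    return plan

edge = [Node("mote-1", 2.0, 0.5), Node("gateway", 8.0, 1.2)]
dc = Node("datacenter", float("inf"), 4.0)
print(assign([Task("sense-agg", 1.5), Task("render", 12.0)], edge, dc))
```

Under this toy model the low-demand task lands on an idle mote while the heavy task still goes to the data center, which is the redistribution pattern the paper describes.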

Relevance: 100.00%

Abstract:

Although the computational power of mobile devices has been increasing, it is still not enough for some classes of applications. At present, these applications delegate the computational burden to servers located on the Internet. This model assumes always-on Internet connectivity and implies non-negligible latency. This thesis addresses the challenges of applying the concept of a mobile collaborative computing environment to wireless networks, and the contributions made in doing so. The goal is to define a reference architecture for high performance mobile applications. Current work focuses on efficient data dissemination in a highly transient environment, suitable for many mobile applications, and on the reputation and incentive system available in this mobile collaborative computing environment. To this end, we are improving our already published reputation/incentive algorithm with knowledge of usage patterns from the eduroam wireless network in the Lisbon area.
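The abstract does not specify the published algorithm, but reputation/incentive schemes of this kind are often built around an exponentially weighted update; a minimal sketch, with the decay factor and service threshold as invented assumptions:

```python
# Hypothetical reputation/incentive update for collaborating mobile
# nodes; NOT the thesis's published algorithm.

ALPHA = 0.2            # weight of the newest observation (assumed)
SERVE_THRESHOLD = 0.4  # minimum reputation to receive service (assumed)

reputation: dict[str, float] = {}

def observe(node_id: str, cooperated: bool) -> None:
    """Blend the newest interaction into the node's running score."""
    old = reputation.get(node_id, 0.5)  # neutral prior for newcomers
    reputation[node_id] = (1 - ALPHA) * old + ALPHA * (1.0 if cooperated else 0.0)

def may_receive_service(node_id: str) -> bool:
    return reputation.get(node_id, 0.5) >= SERVE_THRESHOLD

observe("n1", True); observe("n1", False)
print(reputation["n1"], may_receive_service("n1"))
```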

Relevance: 100.00%

Abstract:

Single-processor architectures are unable to provide the performance required by high-performance embedded systems. Parallel processing based on general-purpose processors can achieve this performance, but with a considerable increase in required resources. In many cases, however, simplified optimized parallel cores can be used instead of general-purpose processors, achieving better performance at lower resource utilization. In this paper, we propose a configurable many-core architecture to serve as a co-processor for high-performance embedded computing on Field-Programmable Gate Arrays. The architecture consists of an array of configurable simple cores with support for floating-point operations, interconnected by a configurable interconnection network. For each core, it is possible to configure the size of the internal memory, the supported operations, and the number of interfacing ports. The architecture was tested on a ZYNQ-7020 FPGA executing several parallel algorithms. The results show that the proposed many-core architecture achieves better performance than a parallel general-purpose processor and that up to 32 floating-point cores can be implemented in a ZYNQ-7020 SoC FPGA.
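The paper's concrete generator parameters are not reproduced in the abstract, but the kind of per-core configuration it describes can be sketched as a small model; the BRAM cost model below is an invented assumption (only the ZYNQ-7020's total of 140 36Kb BRAM blocks is a device fact):

```python
# Illustrative model of a configurable many-core generator: each core
# declares its memory size, supported ops and port count, and we check
# whether a given array still fits the device's BRAM budget.

from dataclasses import dataclass, field

@dataclass
class CoreConfig:
    mem_words: int = 1024
    ops: set = field(default_factory=lambda: {"fadd", "fmul"})
    ports: int = 2

    def bram_blocks(self) -> int:
        return max(1, self.mem_words // 512)  # crude assumed cost model

def fits_zynq7020(cores: list[CoreConfig], bram_budget: int = 140) -> bool:
    """ZYNQ-7020 has 140 36Kb BRAM blocks; other resources ignored here."""
    return sum(c.bram_blocks() for c in cores) <= bram_budget

array = [CoreConfig(mem_words=2048, ops={"fadd", "fmul", "fdiv"}) for _ in range(32)]
print(fits_zynq7020(array))  # True under this toy cost model
```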

Relevance: 100.00%

Abstract:

This article examines, from the energy viewpoint, a new lightweight, slim, highly energy-efficient, light-transmitting envelope system that allows seamless, free-form designs for use in architectural projects. The research was based on envelope components already on the market, especially components implemented with granular silica gel insulation, as this is the most effective translucent thermal insulation available today. The tests run on these materials revealed that no single one has all the features required of the new envelope model, although some have properties that could be exploited to generate this envelope: the vacuum chamber of vacuum insulated panels (VIP), the monolithic aerogel used as insulation in some prototypes, and reinforced polyester barriers. By combining these three design components (the high-performance thermal insulation of a vacuum chamber with monolithic silica gel, and the free-form design potential provided by materials like reinforced polyester and epoxy resins), we were able to define and test a new, variable-geometry, energy-saving envelope system.
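The article's measured values are not reproduced here, but the standard building-physics relation behind such energy comparisons is the thermal transmittance of a layered envelope, U = 1 / (R_si + sum(d_i / lambda_i) + R_se). A worked sketch with assumed layer data (thicknesses and conductivities are illustrative, not the article's):

```python
# Thermal transmittance U (W/m^2.K) of a layered translucent envelope.
# Layer data below are assumptions, not the article's measurements.

R_SI, R_SE = 0.13, 0.04  # standard internal/external surface resistances

layers = [
    # (name, thickness in m, conductivity in W/m.K)
    ("polyester skin", 0.003, 0.20),
    ("monolithic silica aerogel", 0.020, 0.014),
    ("polyester skin", 0.003, 0.20),
]

R_total = R_SI + sum(d / k for _, d, k in layers) + R_SE
print(f"U = {1 / R_total:.2f} W/m2K")  # ~0.61 under these assumptions
```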

Relevance: 100.00%

Abstract:

The high performance and capacity of current FPGAs make them suitable as acceleration co-processors. This article studies the implementation, for such accelerators, of the floating-point power function x^y as defined by the C99 and IEEE 754-2008 standards, generalized here to arbitrary exponent and mantissa sizes. Last-bit accuracy at the smallest possible cost is obtained thanks to a careful study of the various subcomponents: a floating-point logarithm, a modified floating-point exponential, and a truncated floating-point multiplier. A parameterized architecture generator in the open-source FloPoCo project is presented in detail and evaluated.
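The classic decomposition behind such an architecture is x^y = e^(y·ln x), with the accuracy difficulty concentrated in the y·ln x product; a plain software sketch of the same structure (the FPGA datapath itself obviously differs, and the C99 special cases for negative bases are omitted):

```python
# pow(x, y) via the log-multiply-exp decomposition mirrored by the
# abstract's subcomponents: a logarithm, a multiplier, an exponential.
# Plain double precision, so last-bit accuracy is NOT guaranteed: any
# error in y*log(x) is amplified by exp, which is why the hardware
# uses an extended-precision (truncated) multiplier internally.

import math

def fp_pow(x: float, y: float) -> float:
    if y == 0.0:
        return 1.0  # pow(x, 0) is 1 in C99
    if x < 0:
        raise ValueError("negative bases need sign/parity special cases")
    if x == 0.0:
        return math.inf if y < 0 else 0.0
    return math.exp(y * math.log(x))

for x, y in [(2.0, 10.0), (1.0000001, 1e7)]:
    print(fp_pow(x, y), x ** y)  # second case stresses the y*log(x) step
```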

Relevance: 100.00%

Abstract:

The self-assembly of cobalt coordination frameworks (Co-CPs) with a two-dimensional morphology is demonstrated by a solvothermal method. The morphology of the Co-CPs was controlled by varying the solvothermal conditions. Two-dimensional nanostructures agglomerated from Co3O4 nanoparticles remained after pyrolysis of the Co-CPs. The as-synthesized Co3O4 anode material was characterized by cyclic voltammetry (CV), electrochemical impedance spectroscopy (EIS), and galvanostatic charge-discharge measurements. The morphology of Co3O4 plays a crucial role in its performance as an anode material for lithium batteries. Co3O4 nanoparticles with an opened-book morphology deliver a high capacity of 597 mA h g⁻¹ after 50 cycles at a current rate of 800 mA g⁻¹. The opened-book morphology provides efficient lithium-ion diffusion tunnels and increases the electrolyte/Co3O4 contact/interfacial area. At a relatively high current rate of 1200 mA g⁻¹, Co3O4 with the opened-book morphology delivers an excellent rate capability of 574 mA h g⁻¹.
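For context, the theoretical conversion capacity of Co3O4 (8 electrons per formula unit via Co3O4 + 8 Li → 3 Co + 4 Li2O) follows from the standard relation Q = nF/(3.6 M); a quick check of how the reported 597 mA h g⁻¹ compares:

```python
# Theoretical gravimetric capacity of a conversion anode:
# Q = n*F/(3.6*M), with n electrons per formula unit, F in C/mol,
# M in g/mol; the 3.6 factor converts C/g to mAh/g.

F = 96485.0                        # Faraday constant, C/mol
M_CO3O4 = 3 * 58.933 + 4 * 15.999  # ~240.8 g/mol
n = 8                              # Co3O4 + 8 Li -> 3 Co + 4 Li2O

q_theory = n * F / (3.6 * M_CO3O4)
print(f"theoretical: {q_theory:.0f} mAh/g")           # ~890 mAh/g
print(f"retained at cycle 50: {597 / q_theory:.0%}")  # ~67%
```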

Relevance: 100.00%

Abstract:

Computing and information technology have made significant advances. The use of computing and technology is a major aspect of our lives, and this use will only continue to increase. Electronic digital computers and high performance communication networks are central to contemporary information technology. Computing applications in a wide range of areas, including business, communications, medical research, transportation, entertainment, and education, are transforming societies around the globe. The rapid changes in the fields of computing and information technology also make the study of ethics exciting and challenging, as nearly every day the media report on a new invention, controversy, or court ruling. This tutorial will give a broad overview of the scientific foundations, technological advances, social implications, and ethical and legal issues related to computing. It will present the milestones in computing and networking, the social context of computing, professional and ethical responsibilities, philosophical frameworks, and the social, ethical, historical, and political implications of computer and information technology. It will outline the impact of the tremendous growth of computer and information technology on people, ethics, and the law. Political and legal implications become clear when we analyze how technology has outpaced the legal and political arenas.

Relevance: 100.00%

Abstract:

The phenomenal growth of the Internet has connected us to a vast amount of computation and information resources around the world. However, making use of these resources is difficult due to the unparalleled massiveness, high communication latency, share-nothing architecture, and unreliable connections of the Internet. In this dissertation, we present a distributed software agent approach, which brings a new distributed problem-solving paradigm to Internet computing research with an enhanced client-server scheme, inherent scalability, and heterogeneity. Our study discusses the role of a distributed software agent in Internet computing and classifies it into three major categories by the objects it interacts with: computation agent, information agent, and interface agent. The problem domain and the deployment of the computation agent and the information agent are presented together with the analysis, design, and implementation of experimental systems in high performance Internet computing and in scalable Web searching. In the computation agent study, high performance Internet computing is achieved with our proposed Java massive computation agent (JAM) model. We analyzed the JAM computing scheme and built a brute-force ciphertext decryption prototype. In the information agent study, we discuss the scalability problem of existing Web search engines and design an approach to Web searching with distributed collaborative index agents. This approach can be used to construct a more accurate, reusable, and scalable solution to deal with the growth of the Web and of the information on it. Our research reveals that with the deployment of distributed software agents in Internet computing, we have a more cost-effective approach to making better use of the gigantic network of computation and information resources on the Internet. The case studies in our research show that we are now able to solve many practically hard or previously unsolvable problems caused by the inherent difficulties of Internet computing.
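The JAM model itself is Java-based and not reproduced here, but the core idea of the decryption prototype, carving a keyspace into independent work units that idle agents can claim, can be sketched as follows; the XOR "cipher" and 24-bit keyspace are toy assumptions:

```python
# Sketch of keyspace partitioning for a distributed brute-force search:
# each agent claims a contiguous slice of the key range and reports any
# key whose trial decryption matches a known plaintext marker.

def trial_decrypt(key: int, ciphertext: bytes) -> bytes:
    # toy cipher: XOR with the key's 3 bytes, repeated (self-inverse)
    return bytes(b ^ ((key >> (8 * (i % 3))) & 0xFF)
                 for i, b in enumerate(ciphertext))

def search_slice(lo: int, hi: int, ciphertext: bytes, marker: bytes):
    """Work unit an agent would run; returns the key or None."""
    for key in range(lo, hi):
        if trial_decrypt(key, ciphertext).startswith(marker):
            return key
    return None

def slices(keyspace: int, chunk: int):
    """Yield (lo, hi) work units for agents to claim."""
    for lo in range(0, keyspace, chunk):
        yield lo, min(lo + chunk, keyspace)

secret = 0x0A0B0C
ct = trial_decrypt(secret, b"ATTACK AT DAWN")  # XOR is its own inverse
found = next(k for lo, hi in slices(1 << 24, 1 << 16)
             if (k := search_slice(lo, hi, ct, b"ATTACK")) is not None)
print(hex(found))
```

In the dissertation's setting each slice would be shipped to a remote agent rather than run in one loop; the partitioning logic is the same.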

Relevance: 100.00%

Abstract:

The scientific success of the LHC experiments at CERN depends strongly on the availability of computing resources that efficiently store, process, and analyse the amount of data collected every year. This is ensured by the Worldwide LHC Computing Grid infrastructure, which connects computing centres distributed all over the world through a high performance network. The LHC has an ambitious experimental programme for the coming years, which includes large investments in and improvements to both the detector hardware and the software and computing systems, in order to deal with the huge increase in the event rate expected from the High Luminosity LHC (HL-LHC) phase and, consequently, with the huge amount of data that will be produced. In recent years the role of Artificial Intelligence has become relevant in the High Energy Physics (HEP) world. Machine Learning (ML) and Deep Learning algorithms have been used successfully in many areas of HEP, such as online and offline reconstruction programs, detector simulation, object reconstruction and identification, and Monte Carlo generation, and they will surely be crucial in the HL-LHC phase. This thesis contributes to a CMS R&D project on a ML "as a Service" solution for HEP needs (MLaaS4HEP). It consists of a data service able to perform an entire ML pipeline (reading data, processing data, training ML models, serving predictions) in a completely model-agnostic fashion, directly using ROOT files of arbitrary size from local or distributed data sources. This framework has been extended with new features in the data preprocessing phase, allowing more flexibility for the user. Since the MLaaS4HEP framework is experiment-agnostic, the ATLAS Higgs Boson ML challenge was chosen as the physics use case, with the aim of testing MLaaS4HEP and the contributions made in this work.
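A minimal sketch in the spirit of such a pipeline, using the uproot library to stream branches out of a ROOT file chunk by chunk and hand each chunk to whatever model the user supplies; file, tree, and branch names are placeholder assumptions, and this is not the MLaaS4HEP code itself:

```python
# Model-agnostic read -> preprocess -> train loop over a ROOT file.

import uproot  # pip install uproot
import numpy as np

def pipeline(path, tree_name, branches, model, step="100 MB"):
    """Stream the tree in chunks; the model only needs partial_fit()."""
    with uproot.open(path) as f:
        for chunk in f[tree_name].iterate(branches, library="np",
                                          step_size=step):
            X = np.column_stack([chunk[b] for b in branches])
            X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-9)  # toy preprocessing
            model.partial_fit(X)

class NullModel:  # stand-in for a user-supplied model
    def partial_fit(self, X):
        print("chunk:", X.shape)

# pipeline("events.root", "Events", ["pt", "eta", "phi"], NullModel())
```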

Relevance: 90.00%

Publisher:

Universidade Estadual de Campinas. Faculdade de Educação Física

Relevance: 90.00%

Abstract:

Geographic Data Warehouses (GDW) are one of the main technologies used in decision-making processes and spatial analysis, and the literature proposes several conceptual and logical data models for GDW. However, little effort has been devoted to studying how spatial data redundancy affects SOLAP (Spatial On-Line Analytical Processing) query performance over GDW. In this paper, we investigate this issue. First, we compare redundant and non-redundant GDW schemas and conclude that redundancy is associated with high performance losses. We also analyze the issue of indexing, aiming to improve SOLAP query performance on a redundant GDW. Comparisons of the SB-index approach, the star-join aided by the R-tree, and the star-join aided by GiST indicate that the SB-index significantly improves the elapsed time in query processing, from 25% up to 99%, for SOLAP queries defined over the spatial predicates of intersection, enclosure, and containment and applied to roll-up and drill-down operations. We also investigate the impact of an increase in data volume on performance. The increase did not impair the performance of the SB-index, which still greatly improved the elapsed time in query processing. Performance tests also show that the SB-index is far more compact than the star-join, requiring at most 0.20% of its volume. Moreover, we propose a specific enhancement of the SB-index to deal with spatial data redundancy. This enhancement improved performance by 80% to 91% for redundant GDW schemas.
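The SB-index internals are not reproduced here, but the class of query it accelerates, an aggregation filtered by a spatial predicate, can be illustrated with a naive scan; the geometry and fact data are invented, and the SB-index exists precisely to avoid this full scan:

```python
# Naive evaluation of a SOLAP-style query: total sales for stores whose
# location intersects a query window.

from shapely.geometry import Point, Polygon  # pip install shapely

facts = [  # (store location, sales) -- invented fact table rows
    (Point(1.0, 1.0), 120.0),
    (Point(5.0, 5.0), 300.0),
    (Point(2.5, 2.0), 75.0),
]
window = Polygon([(0, 0), (3, 0), (3, 3), (0, 3)])

total = sum(sales for loc, sales in facts if window.intersects(loc))
print(total)  # 195.0: only the first and third stores fall in the window
```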

Relevance: 90.00%

Abstract:

High performance composite membranes based on molecular sieving silica (MSS) were synthesized using sols containing silicon co-polymers (methyltriethoxysilane and tetraethylorthosilicate). Alpha-alumina supports were treated with hydrochloric acid prior to sol deposition. A permselectivity of CO2 over CH4 as high as 16.68 was achieved, while a CO2 permeability of up to 36.7 GPU (10⁻⁶ cm³(STP) cm⁻² s⁻¹ cmHg⁻¹) was measured. The best membrane's permeability was fingerprinted during the various stages of the synthesis process, showing an increase in CO2/CH4 permselectivity of over 25 times from the initial support condition (no membrane film) to the completion of pore structure tailoring. Transport measurements indicate that the membrane pretreated with HCl has the highest permselectivity and permeation rate. In particular, there is a definite cut-off pore size between 3.3 and 3.4 angstroms, just below the kinetic diameters of Ar and CH4. This demonstrates that the separation mechanism in the prepared composite membrane is molecular sieving (activated diffusion) rather than Knudsen diffusion.
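A quick arithmetic check of the abstract's mechanistic claim: Knudsen diffusion alone would give a CO2/CH4 selectivity of sqrt(M_CH4/M_CO2), which is below 1 because CH4 is the lighter gas, so the measured 16.68 cannot be Knudsen transport.

```python
# Knudsen selectivity vs. the measured value, plus the CH4 permeance
# implied by the reported CO2 permeance and selectivity.

from math import sqrt

M_CO2, M_CH4 = 44.01, 16.04          # molar masses, g/mol
alpha_knudsen = sqrt(M_CH4 / M_CO2)  # ~0.60, i.e. CH4-selective
alpha_measured = 16.68

p_co2 = 36.7                         # GPU, reported CO2 permeance
p_ch4 = p_co2 / alpha_measured       # ~2.2 GPU implied for CH4

print(f"Knudsen alpha = {alpha_knudsen:.2f}, measured = {alpha_measured}")
print(f"implied CH4 permeance = {p_ch4:.2f} GPU")
```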

Relevance: 90.00%

Abstract:

An increased degree of utilization of the potential N-glycosylation site in the fourth repeat unit of the human tau protein may be involved in the inability of tau to bind to the corresponding tubulin sequence(s) and in the subsequent development of the paired helical filaments of Alzheimer's disease. To model these processes, we synthesized the octadecapeptide spanning this region without sugar and with the addition of an N-acetyl-glucosamine moiety. The carbohydrate-protected, glycosylated asparagine was incorporated as a building block during conventional Fmoc solid-phase peptide synthesis. While the crude non-glycosylated analog was obtained as a single peptide, two peptides with the identical expected masses, in approximately equal amounts, were detected after cleavage of the peracetylated glycopeptide. Surprisingly, the two glycopeptides switched positions on the reversed-phase high performance liquid chromatogram after removal of the sugar-protecting acetyl groups. Nuclear magnetic resonance spectroscopy and peptide sequencing identified the more hydrophobic deprotected peak as the target peptide, and the more hydrophilic deprotected peak as a peptide analog in which the aspartic acid bond just preceding the glycosylated asparagine residue was isomerized, resulting in the formation of a beta-peptide. The anomalous chromatographic behavior of the acetylated beta-isomer could be explained by the generation of an extended hydrophobic surface which is not present in any of the other three glycopeptide variants. Repetition of the syntheses with altered conditions and reagents reproducibly revealed high levels of aspartic acid bond isomerization for the glycopeptide, as well as a lack of isomerization for the non-glycosylated parent analog. If a similar increase in aspartic acid bond isomerization occurs in vivo, a protein modification well known to take place in both the amyloid deposits and the neurofibrillary tangles of Alzheimer's disease, this process may explain the aggregation of glycosylated tau into the paired helical filaments in affected brains. Copyright (C) 1999 European Peptide Society and John Wiley & Sons, Ltd.

Relevance: 90.00%

Abstract:

Recent advances in the control of molecular engineering architectures have allowed unprecedented molecular recognition ability in biosensing, with a promising impact on clinical diagnosis and environmental monitoring. The availability of large amounts of data from electrical, optical, or electrochemical measurements requires, however, sophisticated data treatment in order to optimize sensing performance. In this study, we show how an information visualization system based on projections, referred to as Projection Explorer (PEx), can be used to achieve high performance for biosensors made with nanostructured films containing immobilized antigens. As a proof of concept, various visualizations were obtained with impedance spectroscopy data from an array of sensors whose electrical response could be specific toward a given antibody (analyte) owing to molecular recognition processes. In addition to discussing the distinct methods for projecting and normalizing the data, we demonstrate that an excellent distinction can be made between real samples that tested positive for Chagas disease and for Leishmaniasis, which could not be achieved with conventional statistical methods. Such high performance probably arose from the possibility of treating the data over the whole frequency range. Through a systematic analysis, it was inferred that Sammon's mapping with standardization to normalize the data gives the best results, where a distinction could be made for blood serum samples containing 10⁻⁷ mg/mL of the antibody. The method inherent in PEx and the procedures for analyzing the impedance data are entirely generic and can be extended to optimize any type of sensor or biosensor.
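A minimal educational sketch of Sammon's mapping (Sammon, 1969) by plain gradient descent, with standardization first, as the study found best; PEx's own implementation is not shown here, and the fake spectra stand in for the impedance data:

```python
# Sammon mapping: project high-dimensional spectra to 2-D while
# preserving small pairwise distances (stress weighted by 1/D_ij).

import numpy as np

def sammon(X, n_iter=500, lr=0.3, eps=1e-12):
    X = (X - X.mean(0)) / (X.std(0) + eps)                # standardization
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)  # input distances
    scale = D.sum()
    Y = np.random.default_rng(0).normal(scale=1e-2, size=(len(X), 2))
    for _ in range(n_iter):
        d = np.linalg.norm(Y[:, None] - Y[None, :], axis=-1)
        np.fill_diagonal(d, 1.0); np.fill_diagonal(D, 1.0)
        # gradient of the Sammon stress with respect to Y
        g = ((d - D) / (d * D))[:, :, None] * (Y[:, None] - Y[None, :])
        Y -= 2 * lr / scale * g.sum(axis=1)
    return Y

spectra = np.random.default_rng(1).normal(size=(8, 200))  # fake spectra
print(sammon(spectra).shape)  # (8, 2) embedding, one point per sensor
```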

Relevance: 90.00%

Abstract:

A new high-throughput and scalable architecture for unified transform coding in H.264/AVC is proposed in this paper. This flexible structure is capable of computing all the 4x4 and 2x2 transforms for Ultra High Definition Video (UHDV) applications (4320x7680 @ 30 fps) in real time and at low hardware cost. These significantly high performance levels were proven by implementing several different configurations of the proposed structure in both FPGA and ASIC 90 nm technologies. The experimental evaluation also demonstrated the high area efficiency of the proposed architecture, which, in terms of Data Throughput per Unit of Area (DTUA), is at least 1.5 times more efficient than its most prominent related designs (1).
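For reference, the operations such hardware unifies are the standard H.264/AVC 4x4 forward integer core transform, Y = Cf·X·Cfᵀ, and the 2x2 Hadamard transform used for chroma DC coefficients; a software sketch (scaling and quantization omitted):

```python
# H.264/AVC 4x4 forward integer core transform and 2x2 Hadamard.

import numpy as np

CF = np.array([[1,  1,  1,  1],
               [2,  1, -1, -2],
               [1, -1, -1,  1],
               [1, -2,  2, -1]])
H2 = np.array([[1,  1],
               [1, -1]])

def forward_4x4(block: np.ndarray) -> np.ndarray:
    return CF @ block @ CF.T

def hadamard_2x2(dc: np.ndarray) -> np.ndarray:
    return H2 @ dc @ H2.T

x = np.arange(16).reshape(4, 4)  # toy residual block
print(forward_4x4(x))
print(hadamard_2x2(np.array([[4, 2], [1, 3]])))
```

Integer-only matrices like CF are what make the transform cheap to replicate many times in hardware, which is the basis of the throughput figures reported above.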