836 results for Overhead conductors


Abstract:

Dynamically typed programming languages such as JavaScript and Python defer type checking to run time. To optimize the performance of these languages, virtual machine implementations for dynamic languages must try to eliminate redundant dynamic type tests. This is usually done with a type inference analysis. However, such analyses are often costly and involve trade-offs between compilation time and the precision of the results obtained. This has led to the design of increasingly complex VM architectures. We propose lazy basic block versioning, a simple just-in-time compilation technique that effectively eliminates redundant dynamic type tests on critical execution paths. This new approach lazily generates specialized versions of basic blocks while propagating contextualized type information. Our technique does not require costly program analyses, is not constrained by the precision limitations of traditional type inference analyses, and avoids the complexity of speculative optimization techniques. Three extensions to basic block versioning give it interprocedural optimization capabilities. A first extension lets it attach type information to object properties and global variables. Entry point specialization then lets it pass type information from calling functions to callees. Finally, call continuation specialization transmits the types of return values from callees back to callers at no dynamic cost. We demonstrate empirically that these extensions allow basic block versioning to eliminate more dynamic type tests than any static type inference analysis.
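
To illustrate the idea, here is a minimal Python sketch of block versioning, not taken from the thesis: block versions are cached per incoming type context, so a version compiled for known integer operands carries no type test, while the generic version discovers the types once and branches to a specialized version. All names (`compile_block`, `versions`) are hypothetical.

```python
# Toy illustration of lazy basic block versioning: one block version is
# generated per incoming type context, so a type test that the context
# already answers is compiled away. Names are hypothetical.

versions = {}  # (block_id, frozen type context) -> compiled function

def compile_block(block_id, ctx):
    """Lazily compile a specialized version of `block_id` for `ctx`."""
    key = (block_id, tuple(sorted(ctx.items())))
    if key in versions:
        return versions[key]          # reuse: no recompilation, no new tests

    if block_id == "add":
        if ctx.get("x") is int and ctx.get("y") is int:
            # Both operand types known: emit the fast path, no type tests.
            def code(env):
                return env["x"] + env["y"]
        else:
            # Unknown types: perform the dynamic test once, then branch to
            # a version specialized with the discovered type information.
            def code(env):
                new_ctx = {name: type(env[name]) for name in ("x", "y")}
                return compile_block("add", new_ctx)(env)
        versions[key] = code
        return code
    raise KeyError(block_id)

f = compile_block("add", {})          # generic entry: types unknown
print(f({"x": 1, "y": 2}))            # first call discovers int/int
print(compile_block("add", {"x": int, "y": int})({"x": 3, "y": 4}))
```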

Abstract:

This document presents GEmSysC, a unified cryptographic API for embedded systems. Software layers implementing this API can be built over existing libraries, allowing embedded software to access cryptographic functions in a consistent way that does not depend on the underlying library. The API follows good practices for API design and for embedded software development, and drew inspiration from other cryptographic libraries and standards. The main inspiration for creating GEmSysC was the CMSIS-RTOS standard, which defines a unified API for embedded software in an implementation-independent way, but targets operating systems instead of cryptographic functions. GEmSysC is made of a generic core and attachable modules, one for each cryptographic algorithm. This document contains the specification of the core of GEmSysC and three of its modules: AES, RSA and SHA-256. GEmSysC was built targeting embedded systems, but this does not restrict its use to such systems; after all, embedded systems are just very limited computing devices. As a proof of concept, two implementations of GEmSysC were made. One was built over wolfSSL, an open-source library for embedded systems. The other was built over OpenSSL, which is open source and a de facto standard but, unlike wolfSSL, does not specifically target embedded systems. The implementation built over wolfSSL was evaluated on a Cortex-M3 processor with no operating system, while the implementation built over OpenSSL was evaluated on a personal computer running Windows 10. Test results show GEmSysC to be simpler than other libraries in some aspects, and that both implementations incur little computation-time overhead compared with the underlying cryptographic libraries: measured per cryptographic algorithm, the overhead is between roughly 0% and 0.17% for the implementation over wolfSSL and between 0.03% and 1.40% for the one over OpenSSL. This document also presents the memory costs of each implementation.
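
The core-plus-modules architecture can be sketched in a few lines. GEmSysC itself is specified as a C API; the Python analogue below, with invented names (`CryptoCore`, `Sha256Module`), only illustrates the structure: a library-independent core that dispatches to attachable algorithm modules.

```python
import hashlib

# Hypothetical Python analogue of a core-plus-modules crypto API:
# the core exposes one consistent interface; each module binds an
# algorithm to whatever backend library actually implements it.

class CryptoCore:
    def __init__(self):
        self._modules = {}

    def register(self, name, module):
        self._modules[name] = module             # attach an algorithm module

    def digest(self, name, data: bytes) -> bytes:
        return self._modules[name].digest(data)  # backend-independent call

class Sha256Module:
    """SHA-256 module backed by hashlib; a wolfSSL- or OpenSSL-backed
    module would expose the same `digest` signature."""
    def digest(self, data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

core = CryptoCore()
core.register("sha256", Sha256Module())
print(core.digest("sha256", b"hello").hex())
```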

Abstract:

During the lifetime of a research project, different partners develop several research prototype tools that share many common aspects. This is equally true for researchers as individuals and as groups: over a period of time they often develop several related tools to pursue a specific research line. Making research prototype tools easily accessible to the community is of utmost importance to promote the corresponding research, get feedback, and increase the tools' lifetime beyond the duration of a specific project. One way to achieve this is to build graphical user interfaces (GUIs) that facilitate trying tools; in particular, with web interfaces one avoids the overhead of downloading and installing the tools. Building GUIs from scratch is a tedious task, in particular for web interfaces, and thus it typically gets low priority when developing a research prototype. Often we opt for copying the GUI of one tool and modifying it to fit the needs of a new related tool. Apart from code duplication, these tools will "live" separately, even though we might benefit from having them all in a common environment, since they are related. This work aims at simplifying the process of building GUIs for research prototype tools. In particular, we present EasyInterface, a toolkit based on a novel methodology that provides an easy way to make research prototype tools available via different common environments, such as a web interface or an Eclipse plugin. It includes a novel text-based output language that allows results to be presented graphically without requiring any knowledge of GUI/web programming. For example, an output of a tool could be (a structured version of) "highlight line number 10 of file ex.c" and "when the user clicks on line 10, open a dialog box with the text ...". The environment interprets this output and converts it into the corresponding visual effects. The advantage of this approach is that the output is interpreted equally by all environments of EasyInterface, e.g., the web interface, the Eclipse plugin, etc. EasyInterface has been developed in the context of the Envisage [5] project and has been evaluated on tools developed in this project, which include static analyzers, test-case generators, compilers, simulators, etc. EasyInterface is open source and available on GitHub.
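
The abstract's own example can be made concrete with a miniature command stream. The JSON format and the `render_console` environment below are invented for illustration; EasyInterface defines its own output language.

```python
import json

# Hypothetical miniature of a text-based GUI output language in the
# spirit of EasyInterface: a tool emits declarative commands; each
# environment (web, Eclipse, console, ...) interprets them with its
# own widgets.

tool_output = json.loads("""
[
  {"cmd": "highlight", "file": "ex.c", "line": 10},
  {"cmd": "on_click", "file": "ex.c", "line": 10,
   "action": {"cmd": "dialog", "text": "possible division by zero"}}
]
""")

def render_console(commands):
    """A console 'environment': the same stream could drive a web UI."""
    for c in commands:
        if c["cmd"] == "highlight":
            print(f"highlight {c['file']}:{c['line']}")
        elif c["cmd"] == "on_click":
            act = c["action"]
            print(f"on click {c['file']}:{c['line']} -> "
                  f"show dialog: {act['text']}")

render_console(tool_output)
```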

Abstract:

Tungsten/copper composites are commonly used for electrical and thermal purposes such as heat sinks and electrical conductors, since they combine excellent thermal and electrical conductivity. These properties depend on the composition, the crystallite size, and the production process. High-energy milling (HEM) of W-Cu powder produces high dispersion and homogenization levels, with very small W crystallites embedded in the ductile Cu phase. This work discusses the effect of HEM on the preparation of W-25Cu composite powders. Three powder preparation techniques were used: dry milling with coarse Cu powder, dry milling with fine Cu powder, and wet milling with coarse Cu powder. The shape, size, and composition of the milled powder particles were observed by scanning electron microscopy (SEM). X-ray diffraction (XRD) was used to analyze the phases, lattice parameters, and crystallite size and microstrain. Rietveld analysis of the crystalline structure of the milled W-25Cu powders suggests partial solid solubility of Cu in the W lattice. This analysis also shows that HEM produces a strong reduction in crystallite size and an increase in the lattice strain of both phases, more intensely in the W phase.
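
Rietveld refinement is beyond a short example, but a common first-order estimate of crystallite size from XRD peak broadening is the Scherrer equation, D = K·λ / (β·cosθ). The numbers below are illustrative, not values from this work.

```python
import math

# Scherrer estimate of crystallite size from XRD peak broadening:
#   D = K * lambda / (beta * cos(theta))
# A generic first-order estimate, not the Rietveld refinement used in
# the work; the peak width below is illustrative only.

K = 0.9                     # shape factor (dimensionless)
wavelength_nm = 0.15406     # Cu K-alpha wavelength
beta_deg = 0.8              # peak FWHM in degrees (illustrative)
two_theta_deg = 40.26       # W (110) reflection with Cu K-alpha

beta = math.radians(beta_deg)
theta = math.radians(two_theta_deg / 2)
D_nm = K * wavelength_nm / (beta * math.cos(theta))
print(f"estimated crystallite size: {D_nm:.1f} nm")
```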

Abstract:

Dynamically reconfigurable hardware is a promising technology that combines in the same device both the high performance and the flexibility that many recent applications demand. However, one of its main drawbacks is the reconfiguration overhead, which introduces significant delays in task execution, usually on the order of hundreds of milliseconds, as well as high energy consumption. One of the most powerful ways to tackle this problem is configuration reuse, since reusing a task does not involve any reconfiguration overhead. In this paper we propose a configuration replacement policy for reconfigurable systems that maximizes task reuse in highly dynamic environments. We have integrated this policy in an external task-graph execution manager that applies task prefetch by loading and executing the tasks as soon as possible (ASAP). However, we have also modified this ASAP technique to make the replacements more flexible, by taking into account the mobility of the tasks and delaying some of the reconfigurations. In addition, this replacement policy is a hybrid design-time/run-time approach, which performs the bulk of the computations at design time in order to save run-time computations. Our results illustrate that the proposed strategy outperforms other state-of-the-art replacement policies in terms of reuse rates and achieves near-optimal reconfiguration overhead reductions. In addition, by performing the bulk of the computations at design time, we reduce the execution time of the replacement technique tenfold with respect to an equivalent purely run-time approach.
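
A minimal sketch of a reuse-oriented replacement choice, assuming the design-time schedule makes each task's next use known (a Belady-style heuristic; the paper's actual hybrid policy additionally weighs task mobility and delayed reconfigurations, omitted here).

```python
# Belady-style configuration replacement: evict the loaded configuration
# whose next scheduled use is farthest away, maximizing reuse. A sketch
# assuming design-time knowledge of the task schedule.

def next_use(task, schedule, now):
    """Index of the next time `task` is needed, or infinity."""
    for t in range(now, len(schedule)):
        if schedule[t] == task:
            return t
    return float("inf")

def run(schedule, n_units):
    loaded, reconfigs = [], 0
    for now, task in enumerate(schedule):
        if task in loaded:
            continue                      # reuse: no reconfiguration
        reconfigs += 1
        if len(loaded) < n_units:
            loaded.append(task)
        else:                             # evict farthest-next-use config
            victim = max(loaded, key=lambda c: next_use(c, schedule, now + 1))
            loaded[loaded.index(victim)] = task
    return reconfigs

schedule = ["A", "B", "C", "A", "B", "A", "D", "A", "B"]
print(run(schedule, n_units=2), "reconfigurations")
```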

Abstract:

Double Degree

Abstract:

Reconfigurable platforms are a promising technology that offers an interesting trade-off between flexibility and performance, which many recent embedded system applications demand, especially in fields such as multimedia processing. These applications typically involve multiple ad hoc tasks for hardware acceleration, which are usually represented using formalisms such as Data Flow Diagrams (DFDs), Data Flow Graphs (DFGs), Control and Data Flow Graphs (CDFGs) or Petri Nets. However, none of these models is able to capture at the same time the pipeline behavior between tasks (which can therefore coexist in order to minimize the application execution time), their communication patterns, and their data dependencies. This paper proves that knowledge of all this information can be effectively exploited to reduce the resource requirements and improve the timing performance of modern reconfigurable systems, where a set of hardware accelerators is used to support the computation. For this purpose, this paper proposes a novel task representation model, named Temporal Constrained Data Flow Diagram (TCDFD), which includes all this information. This paper also presents a mapping-scheduling algorithm that is able to take advantage of the new TCDFD model. It aims at minimizing the dynamic reconfiguration overhead while meeting the communication requirements among the tasks. Experimental results show that the presented approach achieves up to 75% resource savings and up to 89% reconfiguration overhead reduction with respect to other state-of-the-art techniques for reconfigurable platforms.
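
The abstract does not give TCDFD's concrete syntax; the sketch below invents a minimal Python structure carrying the three ingredients the model combines: pipelining between tasks, communication volumes, and data dependencies. All field names are illustrative.

```python
from dataclasses import dataclass, field

# Hypothetical mini-structure in the spirit of a TCDFD: each edge records
# not just a data dependency but whether the two tasks may pipeline and
# what communication volume the dependency implies.

@dataclass
class Edge:
    src: str
    dst: str
    pipelined: bool          # may src and dst overlap in time?
    bytes_per_token: int     # communication volume on this edge

@dataclass
class TCDFD:
    tasks: dict = field(default_factory=dict)   # name -> exec time (cycles)
    edges: list = field(default_factory=list)

    def may_coexist(self, a, b):
        """Tasks joined by a pipelined edge can coexist on the fabric,
        which a mapper-scheduler can exploit to hide reconfigurations."""
        return any(e.pipelined and {e.src, e.dst} == {a, b}
                   for e in self.edges)

g = TCDFD(tasks={"decode": 1200, "filter": 900, "encode": 1500})
g.edges += [Edge("decode", "filter", True, 4096),
            Edge("filter", "encode", False, 2048)]
print(g.may_coexist("decode", "filter"))   # True
```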

Abstract:

Reconfigurable HW can be used to build a hardware multitasking system where tasks are assigned to the reconfigurable HW at run time according to the requirements of the running applications. Normally, execution in this kind of system is controlled by an embedded processor. In these systems, tasks are frequently represented as subtask graphs, where a subtask is the basic scheduling unit that can be assigned to the reconfigurable HW. In order to control the execution of these tasks, the processor must manage complex data structures at run time, such as graphs or linked lists, which may generate significant execution-time penalties. In addition, HW/SW communications are frequently a system bottleneck. Hence, it is very interesting to find a way to reduce the run-time SW computations and the HW/SW communications. To this end we have developed a HW execution manager that controls the execution of subtask graphs over a set of reconfigurable units. This manager receives as input a subtask graph coupled to a subtask schedule, and guarantees its proper execution. In addition, it includes support to reduce the execution-time overhead due to reconfigurations. With this HW support, the execution of task graphs can be managed efficiently, generating only very small run-time penalties.
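
As a software model of what the HW manager does, the sketch below (invented names, simplified timing) walks a subtask graph with a fixed schedule, starting each subtask as soon as its predecessors have finished and a reconfigurable unit is free.

```python
from collections import deque

# Sketch of a subtask-graph execution manager: it receives a subtask
# graph plus a precomputed schedule and fires each subtask once its
# predecessors finish, loading subtasks ASAP onto free units. A
# simplified software model of the HW manager, not its actual design.

def execute(schedule, deps, n_units):
    done, events = set(), []
    pending = deque(schedule)           # schedule order fixed at design time
    running = []                        # subtasks currently on the fabric
    while pending or running:
        # retire finished subtasks (modelled as: all running complete)
        for s in running:
            done.add(s)
            events.append(f"finish {s}")
        running = []
        # launch ready subtasks in schedule order, up to n_units
        while pending and len(running) < n_units and \
                deps.get(pending[0], set()) <= done:
            s = pending.popleft()
            events.append(f"reconfigure+start {s}")
            running.append(s)
    return events

deps = {"B": {"A"}, "C": {"A"}, "D": {"B", "C"}}
for e in execute(["A", "B", "C", "D"], deps, n_units=2):
    print(e)
```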

Abstract:

2D materials have attracted tremendous attention due to their unique physical and chemical properties since the discovery of graphene. Beyond these intrinsic properties, various modification methods have been applied to 2D materials, yielding even more exciting results. Among all modification methods, intercalation of 2D materials provides the highest possible doping and/or phase change to the pristine material. This doping effect strongly modifies 2D materials, producing extraordinary electrical transport as well as optical, thermal, magnetic, and catalytic properties, which are advantageous for optoelectronics, superconductors, thermoelectronics, catalysis, and energy storage applications. To study these property changes, we designed and built a planar nanobattery that allows electrochemical ion intercalation in 2D materials. More importantly, this planar nanobattery enables in situ, real-time characterization of the electrical, optical, and structural properties of 2D materials upon ion intercalation. With this device, we successfully intercalated Li-ions into few-layer graphene (FLG) and ultrathin graphite, heavily doping the graphene to 0.6 x 10^15 /cm2, which simultaneously increased its conductivity and its transmittance in the visible range. The intercalated LiC6 single crystallite achieved extraordinary optoelectronic properties: an eight-layer Li-intercalated FLG achieved a transmittance of 91.7% (at 550 nm) and a sheet resistance of 3 ohm/sq. We extended this research to obtain scalable, printable graphene-based transparent conductors with ion intercalation. A surfactant-free, printed reduced graphene oxide (rGO) transparent conductor thin film with Na-ion intercalation was obtained, with a transmittance of 79% (at 550 nm) and a sheet resistance of 300 ohm/sq. The calculated figure of merit is the best among pure rGO-based transparent conductors. We further improved the tunability of the rGO film by sandwiching it between two layers of CNT films; the tunable range of the rGO film spans wavelengths from 0.9 um to 10 um. Other ions, such as K-ions, were also studied with respect to their intercalation chemistry and the optical properties of graphitic materials. We also used the in situ characterization tools to understand the fundamental properties and improve the performance of battery electrode materials. We investigated the Na-ion interaction with rGO by in situ transmission electron microscopy (TEM). For the first time, we observed reversible Na metal cluster deposition (with diameters larger than 10 nm) on the rGO surface, evidenced by atom-resolved HRTEM images of Na metal and electron diffraction patterns. This discovery leads to a porous reduced graphene oxide sodium-ion battery anode with a record-high reversible specific capacity of around 450 mAh/g at 25 mA/g, a high rate performance of 200 mAh/g at 250 mA/g, and stable cycling performance up to 750 cycles. In addition, direct observation of the irreversible formation of Na2O on rGO unveils the origin of the commonly observed low first-cycle Coulombic efficiency of rGO-containing electrodes. Another example of in situ characterization of battery electrodes is using the planar nanobattery with a 2D MoS2 crystallite. The planar nanobattery allows intrinsic electrical conductivity measurements on a single-crystalline 2D battery electrode during ion intercalation and deintercalation, which conventional battery characterization techniques lack. We discovered that with a "rapid-charging" process in the first cycle, the lithiated MoS2 undergoes a drastic resistance decrease, whereas in a regular lithiation process the resistance always increases in the final stage of lithiation. This discovery leads to a two-fold increase in specific capacity for the rapidly first-lithiated MoS2 composite electrode material compared with the regularly first-lithiated one, at a current density of 250 mA/g.
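
The abstract does not state which transparent-conductor figure of merit was used; a common one is the DC-to-optical conductivity ratio, sigma_dc/sigma_opt = Z0 / (2 Rs (T^(-1/2) - 1)) with Z0 ≈ 377 ohm. The snippet below applies that assumed metric to the two quoted films.

```python
# Common transparent-conductor figure of merit (the abstract does not
# specify which one was used; this choice is an assumption):
#   sigma_dc / sigma_opt = Z0 / (2 * Rs * (T**-0.5 - 1))

Z0 = 376.73  # impedance of free space, ohm

def fom(T, Rs_ohm_per_sq):
    return Z0 / (2 * Rs_ohm_per_sq * (T ** -0.5 - 1))

print(f"LiC6 FLG,    T=91.7%, Rs=3 ohm/sq:   FoM = {fom(0.917, 3):.0f}")
print(f"Na-rGO film, T=79%,   Rs=300 ohm/sq: FoM = {fom(0.79, 300):.2f}")
```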

Abstract:

Ionic liquids (ILs) are organic compounds that are liquid at room temperature and good electrical conductors, with the potential to serve as electrolytes for the electrolysis of water, in which the electrodes would not be subjected to such chemically demanding extreme conditions [1]. This paper describes the synthesis, characterization, and feasibility study of the ionic liquid 1-methyl-3-(2,6-(S)-dimethyloct-2-ene)-imidazole tetrafluoroborate (MDI-BF4) as an electrolyte for producing hydrogen through the electrolysis of water. The synthesized MDI-BF4 was characterized by thermal methods of analysis (thermogravimetric analysis, TG, and differential scanning calorimetry, DSC), Fourier-transform mid-infrared spectroscopy using the attenuated total reflectance method (FTIR-ATR), hydrogen nuclear magnetic resonance spectroscopy (1H NMR), and cyclic voltammetry (CV). The thermal methods were used to calculate the yield of the MDI-BF4 synthesis, which was 88.84%; infrared spectroscopy characterized the functional groups of the compound and the B-F bond at 1053 cm-1; the 1H NMR spectra, analyzed and compared with literature data, define the structure of MDI-BF4; and the current density achieved by MDI-BF4 in the voltammogram shows that the IL can conduct electrical current, indicating that MDI-BF4 is a good electrolyte whose behavior does not change with increasing water concentration.

Abstract:

Power distribution systems are susceptible to extreme damage from natural hazards, especially hurricanes. Hurricane winds can knock down distribution poles, thereby damaging the system and causing power outages that can result in millions of dollars in lost revenue and restoration costs. Timber has been the dominant material used to support overhead lines in distribution systems. Recently, however, utility companies have been searching for a cost-effective alternative to timber poles due to environmental concerns, durability issues, high maintenance costs, and the need for improved aesthetics. Steel has emerged as a viable alternative to timber due to advantages such as relatively lower maintenance cost, light weight, consistent performance, and invulnerability to woodpecker attacks. Both timber and steel poles are prone to deterioration over time, through decay of the timber and corrosion of the steel. This research proposes a framework for fragility analysis of timber and steel poles subjected to hurricane winds, considering deterioration of the poles over time. Monte Carlo simulation was used to develop the fragility curves, considering uncertainties in strength, geometry, and wind loads. A framework for life-cycle cost analysis is also proposed to compare steel and timber poles. The results show that steel poles can have superior reliability and lower life-cycle cost than timber poles, which makes them suitable substitutes.
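
A minimal Monte Carlo fragility sketch, with illustrative distributions and numbers (not the paper's calibrated models): sample pole capacity, degrade it with age, sample the wind-load demand, and estimate the failure probability at each wind speed.

```python
import numpy as np

# Monte Carlo fragility sketch: probability that a pole's wind-induced
# stress demand exceeds its (deteriorated) capacity, per wind speed.
# All distributions and coefficients below are illustrative assumptions.

rng = np.random.default_rng(0)
N = 100_000

def fragility(v, age_years, decay_per_year=0.005):
    # capacity: lognormal pole strength (Pa), reduced by deterioration
    strength = rng.lognormal(mean=np.log(50e6), sigma=0.15, size=N)
    strength *= (1 - decay_per_year) ** age_years
    # demand: stress scales with v^2, with lognormal load uncertainty
    demand = 1.2e4 * v**2 * rng.lognormal(0.0, 0.2, size=N)
    return np.mean(demand > strength)

for v in (30, 45, 60, 75):   # wind speeds, m/s
    print(f"v = {v:2d} m/s: P(failure) = {fragility(v, age_years=30):.3f}")
```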

Abstract:

Future power grids are envisioned to be serviced by heterogeneous arrangements of renewable energy sources. Due to their stochastic nature, energy storage distribution and management are pivotal in realizing microgrids serviced heavily by renewable energy assets. Identifying the response characteristics required to meet the operational requirements of a power grid is of great importance and must be illuminated in order to discern optimal hardware topologies. Hamiltonian Surface Shaping and Power Flow Control (HSSPFC) provides the tools to identify such characteristics. By using energy storage as actuation within the closed-loop controller, the response requirements may be identified while providing a decoupled controller solution. A DC microgrid servicing a fixed RC load through source- and bus-level storage managed by HSSPFC was realized in hardware. A procedure was developed to calibrate this DC microgrid architecture to the reduced-order model used by the HSSPFC law. Storage requirements were examined through simulation and experimental testing. Bandwidth contributions of the feedforward and PI components of the HSSPFC law are illuminated, and suggest that system losses must be well known to avoid additional overhead in storage allocations. This work outlines the steps taken in realizing a DC microgrid and presents design considerations for system calibration and storage requirements per the closed-loop controls for future DC microgrids.
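
HSSPFC derives its control law from a Hamiltonian energy surface, which is beyond a short sketch; the code below is only a generic stand-in showing the actuation pattern described in the abstract: storage current regulating a DC bus that feeds an RC load, with a feedforward term plus PI feedback. All parameters are illustrative.

```python
# Generic stand-in for storage-based bus regulation (NOT the HSSPFC law):
# a DC bus capacitor feeds an RC load; storage current is the actuator,
# with a feedforward term (load current) plus PI feedback on bus voltage.

C, R = 0.01, 5.0            # bus capacitance (F), load resistance (ohm)
Vref, dt = 48.0, 1e-4       # setpoint (V), time step (s)
kp, ki = 2.0, 50.0          # PI gains (illustrative)

v, integ = 40.0, 0.0        # initial bus voltage, integrator state
for step in range(5000):
    i_load = v / R                               # load draw
    err = Vref - v
    integ += err * dt
    i_storage = i_load + kp * err + ki * integ   # feedforward + PI
    v += dt / C * (i_storage - i_load)           # bus capacitor dynamics
    if step % 1000 == 0:
        print(f"t={step*dt:.2f}s  v={v:.2f} V")
```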

Abstract:

The lack of analytical models that can accurately describe large-scale networked systems makes empirical experimentation indispensable for understanding complex behaviors. Research on network testbeds for testing network protocols and distributed services, including physical, emulated, and federated testbeds, has made steady progress. Although the success of these testbeds is undeniable, they fail to provide: 1) scalability, for handling large-scale networks with hundreds or thousands of hosts and routers organized in different scenarios, 2) flexibility, for testing new protocols or applications in diverse settings, and 3) inter-operability, for combining simulated and real network entities in experiments. This dissertation tackles these issues in three different dimensions. First, we present SVEET, a system that enables inter-operability between real and simulated hosts. In order to increase the scalability of networks under study, SVEET enables time-dilated synchronization between real hosts and the discrete-event simulator. Realistic TCP congestion control algorithms are implemented in the simulator to allow seamless interactions between real and simulated hosts. SVEET is validated via extensive experiments and its capabilities are assessed through case studies involving real applications. Second, we present PrimoGENI, a system that allows a distributed discrete-event simulator, running in real time, to interact with real network entities in a federated environment. PrimoGENI greatly enhances the flexibility of network experiments, through which a great variety of network conditions can be reproduced to examine what-if questions. Furthermore, PrimoGENI performs resource management functions, on behalf of the user, for instantiating network experiments on shared infrastructures. Finally, to further increase the scalability of network testbeds to handle large-scale high-capacity networks, we present a novel symbiotic simulation approach. We present SymbioSim, a testbed for large-scale network experimentation where a high-performance simulation system closely cooperates with an emulation system in a mutually beneficial way. On the one hand, the simulation system benefits from incorporating the traffic metadata from real applications in the emulation system to reproduce realistic traffic conditions. On the other hand, the emulation system benefits from receiving continuous updates from the simulation system to calibrate the traffic between real applications. Specific techniques that support the symbiotic approach include: 1) a model downscaling scheme that can significantly reduce the complexity of the large-scale simulation model, resulting in an efficient emulation system for modulating the high-capacity network traffic between real applications; 2) a queuing network model for the downscaled emulation system to accurately represent the network effects of the simulated traffic; and 3) techniques for reducing the synchronization overhead between the simulation and emulation systems.
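
The time-dilation idea used to synchronize real hosts with a slower-than-real-time simulator can be sketched as follows; `DilatedClock` and the TDF value are invented for illustration, not SVEET's actual implementation.

```python
import time

# Sketch of time-dilated synchronization: real elapsed time is scaled
# by a time-dilation factor (TDF) so a simulator that runs TDF times
# slower than real time still appears, in virtual time, to keep pace
# with the real hosts. Names and values are hypothetical.

TDF = 10.0                       # 10 real seconds = 1 virtual second

class DilatedClock:
    def __init__(self, tdf):
        self.tdf = tdf
        self.t0 = time.monotonic()

    def virtual_now(self):
        """Virtual time advances TDF times slower than wall-clock time."""
        return (time.monotonic() - self.t0) / self.tdf

clock = DilatedClock(TDF)
time.sleep(0.5)                  # 0.5 s of wall-clock time ...
print(f"virtual time elapsed: {clock.virtual_now():.3f} s")  # ~0.05 s
```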

Abstract:

Many important problems in communication networks, transportation networks, and logistics networks are solved by minimizing cost functions. In general, these can be complex optimization problems involving many variables. However, physicists noted that in a network a node variable (such as the amount of resources at a node) is connected to a set of link variables (such as the flows through that node), and similarly each link variable is connected to a number of (usually two) node variables. This enables one to break the problem into local components, often arriving at distributed algorithms to solve the problems. Compared with centralized algorithms, distributed algorithms have the advantages of lower computational complexity and lower communication overhead. Since they respond faster to local changes in the environment, they are especially useful for networks with evolving conditions. This review covers message-passing algorithms in applications such as resource allocation, transportation networks, facility location, traffic routing, and the stability of power grids.
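
As a concrete instance in the traffic-routing family, distance-vector routing is a min-sum message-passing algorithm: each node keeps only a local cost estimate and updates it from its neighbors' messages, with no global view of the graph. The topology and costs below are invented for illustration.

```python
# Min-sum message passing for routing (distance-vector style): each node
# broadcasts its current cost-to-destination to its neighbors and updates
# its own estimate locally; no node ever sees the whole graph.

edges = {                      # undirected link costs (illustrative)
    ("a", "b"): 1, ("b", "c"): 2, ("a", "c"): 5, ("c", "d"): 1,
}
nodes = {"a", "b", "c", "d"}
nbrs = {n: {} for n in nodes}
for (u, v), w in edges.items():
    nbrs[u][v] = w
    nbrs[v][u] = w

cost = {n: float("inf") for n in nodes}
cost["d"] = 0.0                # destination

for _ in range(len(nodes) - 1):            # rounds of message exchange
    msgs = {n: cost[n] for n in nodes}     # each node broadcasts its cost
    for n in nodes - {"d"}:
        cost[n] = min(w + msgs[m] for m, w in nbrs[n].items())

print(cost)   # min cost from each node to d: a->4, b->3, c->1
```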

Abstract:

The K-means algorithm is one of the most popular clustering algorithms in current use, as it is relatively fast yet simple to understand and deploy in practice. Nevertheless, its use entails certain restrictive assumptions about the data, the negative consequences of which are not always immediately apparent, as we demonstrate. While more flexible algorithms have been developed, their widespread use has been hindered by their computational and technical complexity. Motivated by these considerations, we present a flexible alternative to K-means that relaxes most of the assumptions while remaining almost as fast and simple. This novel algorithm, which we call MAP-DP (maximum a-posteriori Dirichlet process mixtures), is statistically rigorous, as it is based on nonparametric Bayesian Dirichlet process mixture modeling. This approach allows us to overcome most of the limitations imposed by K-means. The number of clusters K is estimated from the data instead of being fixed a priori as in K-means. In addition, while K-means is restricted to continuous data, the MAP-DP framework can be applied to many kinds of data, for example binary, count, or ordinal data. Also, it can efficiently separate outliers from the data. This additional flexibility does not incur a significant computational overhead compared to K-means, with MAP-DP convergence typically achieved on the order of seconds for many practical problems. Finally, in contrast to K-means, since the algorithm is based on an underlying statistical model, the MAP-DP framework can deal with missing data and enables model testing such as cross-validation in a principled way. We demonstrate the simplicity and effectiveness of this algorithm on the health informatics problem of clinical sub-typing in a cluster of diseases known as parkinsonism.
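
The key difference from K-means lies in the assignment step: a MAP-DP-style rule may open a new cluster when the Dirichlet-process penalty for doing so is lower than the cost of joining any existing cluster, which is how K is inferred from the data. The sketch below is a heavily simplified spherical-Gaussian, single-pass illustration of that rule, not the paper's full algorithm.

```python
import numpy as np

# Simplified MAP-DP-style assignment (spherical Gaussian clusters):
# unlike K-means, a point may open a NEW cluster if every existing one
# explains it worse than the Dirichlet-process new-cluster penalty.
# A sketch of the idea only; the full algorithm iterates to convergence.

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 2)),
               rng.normal(8, 1, (50, 2)),
               rng.normal((0, 8), 1, (50, 2))])

sigma2, alpha = 1.0, 1.0          # known variance, DP concentration
centers, counts = [X[0]], [1]

for x in X[1:]:
    # negative log-probability of joining cluster k (up to constants):
    costs = [np.sum((x - c) ** 2) / (2 * sigma2) - np.log(n)
             for c, n in zip(centers, counts)]
    new_cost = np.sum((x - np.mean(X, axis=0)) ** 2) / (2 * sigma2) \
        - np.log(alpha)
    k = int(np.argmin(costs))
    if new_cost < costs[k]:
        centers.append(x.copy())    # open a new cluster
        counts.append(1)
    else:
        counts[k] += 1              # online mean update of cluster k
        centers[k] = centers[k] + (x - centers[k]) / counts[k]

print(f"clusters found: {len(centers)}")   # K inferred from the data
```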