999 results for SOC Design
Abstract:
This research presents design considerations for environmental monitoring platforms that detect hazardous materials using System-on-a-Chip (SoC) design. The design considerations focus on improving three key areas: (1) sampling methodology; (2) context awareness; and (3) sensor placement. These design considerations for environmental monitoring platforms using wireless sensor networks (WSN) are applied to the detection of methylmercury (MeHg) and the environmental parameters affecting its formation (methylation) and degradation (demethylation).

The sampling methodology investigates a proof of concept for monitoring MeHg using three primary components: (1) chemical derivatization; (2) preconcentration using the purge-and-trap (P&T) method; and (3) sensing using Quartz Crystal Microbalance (QCM) sensors. This study focuses on the measurement of inorganic mercury (Hg) (e.g., Hg2+) and applies lessons learned to organic Hg (e.g., MeHg) detection.

Context awareness of a WSN and its sampling strategies is enhanced by using spatial analysis techniques, namely geostatistical analysis (i.e., classical variography and ordinary point kriging), to help predict the phenomenon of interest at unmonitored locations (i.e., locations without sensors). This aids in making more informed decisions on control of the WSN (e.g., communications strategy, power management, resource allocation, sampling rate and strategy, etc.). This methodology improves the precision of controllability by adding potentially significant information about unmonitored locations.

Two types of sensors are investigated in this study for near-optimal placement in a WSN: (1) environmental (e.g., humidity, moisture, temperature, etc.) and (2) visual (e.g., camera) sensors. Near-optimal placement of environmental sensors is found using a strategy that minimizes the variance of the spatial analysis over randomly chosen candidate sensor locations; the spatial analysis uses geostatistical techniques and the optimization is driven by Monte Carlo analysis. Visual sensor placement is accomplished for omnidirectional cameras operating in a WSN using an optimal placement metric (OPM), calculated for each grid point from the line-of-sight (LOS) in a defined number of directions while taking known obstacles into consideration. Optimal areas for camera placement are those generating the largest OPMs. Statistical behavior is examined using Monte Carlo analysis with varying numbers of obstacles and cameras in a defined space.
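As a rough illustration of the camera-placement step, the sketch below computes a hypothetical OPM for each cell of an occupancy grid by casting rays in a fixed number of directions and counting unobstructed line-of-sight cells. The grid, ray count, and scoring are illustrative assumptions, not the author's exact formulation.

```python
import math

def opm(grid, x, y, n_dirs=16, max_range=20):
    """Hypothetical optimal placement metric: total unobstructed
    line-of-sight cells over n_dirs equally spaced directions.
    grid[r][c] == 1 marks a known obstacle."""
    rows, cols = len(grid), len(grid[0])
    if grid[y][x]:
        return 0
    score = 0
    for k in range(n_dirs):
        angle = 2 * math.pi * k / n_dirs
        dx, dy = math.cos(angle), math.sin(angle)
        for step in range(1, max_range + 1):
            cx, cy = int(round(x + dx * step)), int(round(y + dy * step))
            if not (0 <= cx < cols and 0 <= cy < rows) or grid[cy][cx]:
                break  # left the grid or hit an obstacle
            score += 1
    return score

# Rank all grid points; the largest OPMs mark candidate camera sites.
grid = [[0] * 30 for _ in range(30)]
grid[10][5:25] = [1] * 20  # a wall-like obstacle
best = max((opm(grid, x, y), x, y) for y in range(30) for x in range(30))
print("best OPM %d at (%d, %d)" % best)
```

In a Monte Carlo study like the one described, this evaluation would be repeated over randomized obstacle and camera configurations to characterize placement quality statistically.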
Abstract:
This project consists of the development of a graphical environment for generating SoCs based on the OpenRISC soft-core processor. The environment allows different components to be added dynamically to an IP repository, and any component available in this repository to be displayed and selected in order to attach it to the system bus and make it accessible to the OpenRISC processor. The environment also shows at all times how the SoC is evolving, saves each of the projects created with it, and finally allows the designed SoC to be generated.
Abstract:
With the growing complexity of systems-on-chip, new challenges keep emerging in the design of these systems with respect to formal verification and high-level synthesis. Several efforts around SystemC, considered the standard for system-level design, are under way to address these new challenges. However, because of SystemC's complex concurrency model, addressing them remains a difficult task. We therefore believe it is essential to start from a better foundation by using a more effective concurrency model. Consequently, in this thesis we study a design methodology that offers a better abstraction for modeling parallel components, based on the concept of transactions. We show how, thanks to the simple reasoning that the transaction concept provides, it becomes easier to apply formal verification, incremental refinement, and high-level synthesis. To evaluate the effectiveness of this methodology, we set the goal of optimizing the simulation speed of a transactional model by exploiting a multicore machine. We present the parallel modeling and simulation environment we developed, and we study different scheduling strategies in terms of parallelism and synchronization overhead. An experiment on a model of the 802.11a Wi-Fi transmitter achieved a speedup of about 1.8 using two threads. With 8 threads, even though the workload of the individual transactions was not large, we reached a speedup of about 4.6, which is a very promising result.
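To make the scheduling idea concrete, here is a minimal sketch (not the thesis's actual environment) that dispatches independent transactions from a ready queue onto a thread pool and measures the resulting speedup. The transaction workload and the absence of inter-transaction dependencies are placeholder assumptions.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def transaction(payload):
    """Placeholder transaction body standing in for one unit of
    simulation work (a real model would compute here)."""
    time.sleep(0.01)
    return payload

ready_queue = list(range(64))  # transactions with no mutual dependencies

def run(n_threads):
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        list(pool.map(transaction, ready_queue))
    return time.perf_counter() - start

t1 = run(1)
for n in (2, 8):
    print("threads=%d speedup=%.2f" % (n, t1 / run(n)))
```

A real transaction-level scheduler must additionally dispatch only transactions whose dependencies are resolved, which is where the synchronization overhead studied in the thesis comes from.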
Abstract:
Current SoC design trends are characterized by the integration of a growing number of IPs targeting a wide range of application fields. Such multi-application systems are constrained by a set of requirements, and in this scenario networks-on-chip (NoCs) are becoming more important as the on-chip communication structure. Designing an optimal NoC that satisfies the requirements of each individual application requires specifying a large set of configuration parameters, leading to a wide solution space. IP mapping has been shown to be one of the most critical of these parameters, strongly influencing SoC performance. IP mapping has been solved for single-application systems using single- and multi-objective optimization algorithms. In this paper we propose the use of a multi-objective adaptive immune algorithm (M²AIA), an evolutionary approach, to solve the multi-application NoC mapping problem. Latency and power consumption were adopted as the target objective functions. To assess the efficiency of our approach, our results are compared with those of genetic and branch-and-bound multi-objective mapping algorithms. We tested 11 well-known benchmarks, including random and real applications, combining up to 8 applications in the same SoC. The experimental results show that M²AIA reduces power consumption and latency by, on average, 27.3% and 42.1% compared to the branch-and-bound approach and 29.3% and 36.1% compared to the genetic approach.
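To illustrate the multi-objective core shared by such mappers, the sketch below keeps only the Pareto-optimal candidate mappings scored on latency and power. The cost numbers and mapping identifiers are invented for illustration and do not come from the paper.

```python
def dominates(a, b):
    """a, b are (latency, power) pairs; a dominates b when it is
    no worse in both objectives and strictly better in at least one."""
    return a[0] <= b[0] and a[1] <= b[1] and a != b

def pareto_front(candidates):
    """Keep only non-dominated (latency, power, mapping) entries."""
    return [c for c in candidates
            if not any(dominates(o[:2], c[:2]) for o in candidates)]

# Hypothetical scored mappings: (latency_cycles, power_mW, mapping_id)
scored = [(120, 45, "m0"), (100, 60, "m1"), (130, 40, "m2"), (100, 44, "m3")]
print(pareto_front(scored))  # -> the non-dominated mappings m2 and m3
```

An evolutionary algorithm such as M²AIA repeatedly mutates candidate mappings and retains an archive of this non-dominated set.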
Abstract:
Nowadays the rise of non-recurring engineering (NRE) costs associated with complexity is becoming a major factor in SoC design, limiting both scaling opportunities and the flexibility advantages offered by the integration of complex computational units. The introduction of embedded programmable elements can represent an appealing solution, able both to guarantee the desired flexibility and upgradability and to widen the SoC market. In particular, embedded FPGA (eFPGA) cores can provide bit-level optimization for those applications which benefit from synthesis, paying on the other side in terms of performance penalties and area overhead with respect to standard-cell ASIC implementations. In this scenario, this thesis proposes a design methodology for a synthesizable programmable device designed to be embedded in a SoC. A soft-core embedded FPGA (eFPGA) is presented and analyzed in terms of the opportunities offered by a fully synthesizable approach, following an implementation flow based on standard-cell methodology. A key point of the proposed eFPGA template is that it adopts a Multi-Stage Switching Network (MSSN) as the foundation of the programmable interconnect, since it can be efficiently synthesized and optimized through a standard-cell-based implementation flow while ensuring an intrinsically congestion-free network topology. The flexibility potential of the eFPGA has been evaluated using different technology libraries (STMicroelectronics CMOS 65nm and BCD9s 0.11μm) through a design space exploration in terms of area-speed-leakage tradeoffs, enabled by the full synthesizability of the template. Since the most relevant disadvantage of the adopted soft approach, compared to a hard core, is a performance overhead, the eFPGA analysis targets small area budgets. The configuration bitstream is generated by a custom CAD flow, which also enabled functional verification and performance evaluation through an application-aware analysis.
Abstract:
Contemporary integrated circuits are designed and manufactured in a globalized environment, leading to concerns of piracy, overproduction, and counterfeiting. One class of techniques to combat these threats is circuit obfuscation, which seeks to modify the gate-level (or structural) description of a circuit without affecting its functionality, in order to increase the complexity and cost of reverse engineering. Most existing circuit obfuscation methods are based on inserting additional logic (called "key gates") or camouflaging existing gates so that a malicious user cannot obtain the complete layout information without extensive computation to determine the key-gate values. However, when the netlist or the circuit layout, although camouflaged, is available to the attacker, advanced logic analysis, circuit simulation tools, and Boolean SAT solvers can reveal the unknown gate-level information without exhaustively trying all input vectors, thus lowering the complexity of reverse engineering. To counter this problem, some 'provably secure' logic encryption algorithms that emphasize methodical selection of camouflaged gates have been proposed in the literature [1,2,3]. The contribution of this paper is the creation and simulation of a new layout obfuscation method that uses don't-care conditions. We also present a proof of concept of a new functional (logic) obfuscation technique that not only conceals but modifies the circuit functionality in addition to the gate-level description, and that can be applied automatically during the design process. Our layout obfuscation technique utilizes don't-care conditions (namely, Observability and Satisfiability Don't Cares) inherent in the circuit to camouflage selected gates and modify sub-circuit functionality while meeting the overall circuit specification. Here, camouflaging or obfuscating a gate means replacing the candidate gate with a 4-to-1 multiplexer that can be configured to perform all possible 2-input/1-output functions, as proposed by Bao et al. [4]. It is important to emphasize that our approach not only obfuscates but alters sub-circuit-level functionality in an attempt to make IP piracy difficult. The choice of gates to obfuscate determines the effort required to reverse engineer or brute-force the design. We therefore propose a method of camouflaged-gate selection based on the intersection of output logic cones (see the sketch after the references). By choosing candidate gates methodically, the complexity of reverse engineering can be made exponential, making it computationally very expensive to determine the true circuit functionality. We propose several heuristic algorithms to maximize the reverse-engineering complexity based on don't-care-based obfuscation and methodical gate selection. Thus, the goal of protecting the design IP from malicious end users is achieved, and it becomes significantly harder for rogue elements in the supply chain to use, copy, or replicate the design with different logic. We analyze the reverse-engineering complexity by applying our obfuscation algorithm to the ISCAS-85 benchmarks. Our experimental results indicate that significant reverse-engineering complexity can be achieved at minimal design overhead (average area overhead for the proposed layout obfuscation methods is 5.51% and average delay overhead is about 7.732%). We discuss the strengths and limitations of our approach and suggest directions that may lead to improved logic encryption algorithms in the future.

References:
[1] R. Chakraborty and S. Bhunia, "HARPOON: An Obfuscation-Based SoC Design Methodology for Hardware Protection," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 28, no. 10, pp. 1493–1502, 2009.
[2] J. A. Roy, F. Koushanfar, and I. L. Markov, "EPIC: Ending Piracy of Integrated Circuits," in Design, Automation and Test in Europe (DATE), 2008, pp. 1069–1074.
[3] J. Rajendran, M. Sam, O. Sinanoglu, and R. Karri, "Security Analysis of Integrated Circuit Camouflaging," in ACM Conference on Computer and Communications Security (CCS), 2013.
[4] B. Liu and B. Wang, "Embedded Reconfigurable Logic for ASIC Design Obfuscation Against Supply Chain Attacks," in Design, Automation and Test in Europe (DATE), 2014, pp. 1–6.
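As a rough illustration of cone-intersection-based gate selection (the netlist, cone computation, and scoring here are illustrative assumptions, not the paper's exact heuristics), the sketch below scores each gate by how many primary-output fan-in cones contain it and picks the highest-scoring gates for camouflaging.

```python
from collections import defaultdict

# Hypothetical netlist: gate -> list of fan-in gates / primary inputs.
netlist = {
    "g1": ["a", "b"], "g2": ["b", "c"], "g3": ["g1", "g2"],
    "g4": ["g2", "d"], "out1": ["g3"], "out2": ["g3", "g4"],
}
primary_outputs = ["out1", "out2"]

def fanin_cone(node, seen=None):
    """All nodes in the transitive fan-in of node (its logic cone)."""
    seen = set() if seen is None else seen
    for src in netlist.get(node, []):
        if src not in seen:
            seen.add(src)
            fanin_cone(src, seen)
    return seen

# Score each gate by the number of output cones it belongs to; gates
# at cone intersections force an attacker to resolve several outputs
# at once, raising reverse-engineering effort.
score = defaultdict(int)
for po in primary_outputs:
    for gate in fanin_cone(po):
        if gate in netlist:  # skip primary inputs
            score[gate] += 1

candidates = sorted(score, key=score.get, reverse=True)
print(candidates[:2])  # top gates to replace with camouflaged MUXes
```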
Abstract:
Modern System-on-a-Chip (SoC) systems have grown rapidly in processing power while maintaining the size of the hardware circuit. The number of transistors on a chip continues to increase, but current SoC designs may not be able to exploit the potential performance, especially with energy consumption and chip area becoming two major concerns. Traditional SoC designs usually separate software and hardware, so improving system performance is a complicated task for both software and hardware designers. The aim of this research is to develop a hardware acceleration workflow for software applications, so that system performance can be improved under energy consumption and on-chip resource constraints. The characteristics of software applications are identified using profiling tools; hardware acceleration yields significant performance improvements for highly mathematical calculations or frequently repeated functions. SoC system performance can then be improved by using hardware acceleration on the elements that incur performance overheads. The concepts in this study can be readily applied to a variety of sophisticated software applications. The contributions of SoC-based hardware acceleration in the hardware-software co-design platform include the following: (1) Software profiling methods are applied to an H.264 coder-decoder (CODEC) core; the hotspot function of the target application is identified using critical attributes such as cycles per loop and loop counts (a profiling sketch follows this abstract). (2) A hardware acceleration method based on a Field-Programmable Gate Array (FPGA) is used to resolve system bottlenecks and improve system performance; the identified hotspot function is converted to a hardware accelerator and mapped onto the hardware platform. Two types of hardware acceleration methods, central bus design and co-processor design, are implemented for comparison in the proposed architecture. (3) System characteristics such as performance, energy consumption, and resource costs are measured and analyzed, and the trade-off among these three factors is compared and balanced; different hardware accelerators are implemented and evaluated based on system requirements. (4) The system verification platform is designed based on an Integrated Circuit (IC) workflow, with hardware optimization techniques used for higher performance and lower resource costs. Experimental results show that the proposed hardware acceleration workflow for software applications is an efficient technique: the system reaches a 2.8x performance improvement and saves 31.84% of energy consumption with the Bus-IP design, while the co-processor design achieves 7.9x performance and saves 75.85% of energy consumption.
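As a minimal illustration of the profiling step (using Python's standard profiler rather than the hardware-level tools the thesis employs; the workload function is a placeholder, not an H.264 stage), the sketch below identifies the hotspot that would become the candidate for FPGA offload.

```python
import cProfile
import pstats

def dct_like_kernel(n=200):
    """Placeholder numeric hotspot standing in for a CODEC stage."""
    total = 0.0
    for i in range(n):
        for j in range(n):
            total += (i * j) % 7
    return total

def application():
    for _ in range(20):
        dct_like_kernel()

profiler = cProfile.Profile()
profiler.runcall(application)
stats = pstats.Stats(profiler).sort_stats("cumulative")
stats.print_stats(5)  # top entries are the hardware-offload candidates
```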
Abstract:
This paper proposes a new design methodology for discrete multi-pumped Raman amplifiers. In a multi-objective optimization scenario, the whole solution space is first inspected by a CW analytical formulation; the most promising solutions are then fully investigated by a rigorous numerical treatment, so the Raman amplification performance is determined by the combination of analytical and numerical approaches. As an application of our methodology we designed a photonic crystal fiber Raman amplifier configuration which provides low ripple, high gain, clear eye opening, and a low power penalty. The amplifier configuration also fully compensates the dispersion introduced by a 70-km single-mode fiber in a 10 Gbit/s system. We obtained a configuration with 8.5 dB average gain over the C-band and 0.71 dB ripple with almost zero eye penalty using only two pump lasers with relatively low pump power. © 2009 Optical Society of America.
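The coarse-then-fine search pattern the paper describes (cheap analytical screening of the full space, then expensive numerical evaluation of a shortlist) can be sketched generically as below. The parameter grid, both stand-in cost functions, and the shortlist size are assumptions, not the paper's actual amplifier models.

```python
import itertools

def analytical_estimate(params):
    """Cheap stand-in for the CW analytical gain/ripple formulation."""
    p1, p2 = params
    return abs(p1 - 0.3) + abs(p2 - 0.7)  # lower is more promising

def numerical_evaluate(params):
    """Expensive stand-in for the rigorous numerical treatment."""
    p1, p2 = params
    return (p1 - 0.32) ** 2 + (p2 - 0.68) ** 2

# Step 1: sweep the whole (normalized) solution space cheaply.
grid = itertools.product([i / 10 for i in range(11)], repeat=2)
ranked = sorted(grid, key=analytical_estimate)

# Step 2: refine only the most promising candidates numerically.
shortlist = ranked[:5]
best = min(shortlist, key=numerical_evaluate)
print("best pump setting (normalized):", best)
```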
Abstract:
In this work, the effects of conical indentation variables on the load-depth indentation curves were analyzed using finite element modeling and dimensional analysis. A 2^6 factorial design was used with the aim of quantifying the effects of the mechanical properties of the indented material and of the indenter geometry. The analysis was based on the input variables Y/E, R/h_max, n, θ, E, and h_max. The dimensional variables E and h_max were used such that each value of the dimensionless ratio Y/E was obtained with two different values of E, and each value of the dimensionless ratio R/h_max was obtained with two different h_max values. A set of dimensionless functions was defined to analyze the effect of the input variables: Π₁ = P/(E·h²), Π₂ = h_c/h, Π₃ = H/Y, Π₄ = S/(E·h_max), Π₆ = h_max/h_f, and Π₇ = W_P/W_T. These six functions were found to depend only on the dimensionless variables studied (Y/E, R/h_max, n, θ). Another dimensionless function, Π₅ = β, was not well defined for most of the dimensionless variables, and the only variable with a significant effect on β was θ. However, β showed a strong dependence on the fraction of the data selected to fit the unloading curve, which means that β is especially susceptible to the error in the calculation of the initial unloading slope.
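As a small illustration of enumerating the 2^6 factorial design (the factor names follow the abstract; the low/high levels are invented placeholders, not the study's values), the sketch below generates all 64 treatment combinations.

```python
import itertools

# Two levels per factor; numeric levels are placeholders only.
factors = {
    "Y/E":     (0.001, 0.05),
    "R/h_max": (0.5, 5.0),
    "n":       (0.0, 0.4),
    "theta":   (60.0, 80.0),   # degrees
    "E":       (70e9, 210e9),  # Pa
    "h_max":   (0.5e-6, 2e-6), # m
}

runs = [dict(zip(factors, levels))
        for levels in itertools.product(*factors.values())]
print(len(runs))   # 2**6 = 64 runs
print(runs[0])     # first treatment combination
```

Each run would then drive one finite element simulation, from which the dimensionless responses Π₁ through Π₇ are extracted.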
Abstract:
Three new bimetallic oxamato-based magnets with the proligand 4,5-dimethyl-1,2-phenylenebis(oxamato) (dmopba) were synthesized using water or dimethylsulfoxide (DMSO) as solvents. Single-crystal X-ray diffraction provided structures for two of them: [MnCu(dmopba)(H₂O)₃]ₙ·4nH₂O (1) and [MnCu(dmopba)(DMSO)₃]ₙ·nDMSO (2). The crystalline structures of both 1 and 2 consist of linearly ordered oxamato-bridged Mn(II)Cu(II) bimetallic chains. The magnetic characterization revealed behaviour typical of ferrimagnetic chains for 1 and 2. Least-squares fits of the experimental magnetic data in the 300-20 K temperature range led to J_MnCu = -27.9 cm⁻¹, g_Cu = 2.09 and g_Mn = 1.98 for 1, and J_MnCu = -30.5 cm⁻¹, g_Cu = 2.09 and g_Mn = 2.02 for 2 (spin Hamiltonian H = -J_MnCu Σᵢ S_Mn,i·(S_Cu,i + S_Cu,i-1)). The two-dimensional ferrimagnetic system [Me₄N]₂ₙ{Co₂[Cu(dmopba)]₃}·4nDMSO·nH₂O (3) was prepared by the reaction of Co(II) ions with an excess of [Cu(dmopba)]²⁻ in DMSO. The study of the temperature dependence of the magnetic susceptibility, as well as the temperature and field dependences of the magnetization, revealed cluster-glass-like behaviour for 3.
Abstract:
The cost of a new ship design depends heavily on the principal dimensions of the ship; however, minimizing the dimensions often conflicts with minimizing oil outflow (in the event of an accidental spill). This study demonstrates a rational methodology for selecting the optimal dimensions and coefficients of form of tankers via a genetic algorithm. A multi-objective optimization problem was formulated using two objective attributes in the evaluation of each design: total cost and mean oil outflow. In addition, a procedure that can be used to balance the designs in terms of weight and useful space is proposed. A genetic algorithm was implemented to search for optimal design parameters and to identify the non-dominated Pareto frontier. At the end of this study, three real ships are used as case studies. [DOI: 10.1115/1.4002740]
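For illustration, here is a minimal genetic-algorithm loop over a two-objective design problem in the spirit of the paper. The design encoding (two normalized dimensions), the cost and outflow surrogates, and all rates are placeholders, not the study's models.

```python
import random

def evaluate(design):
    """Placeholder objectives: (total_cost, mean_oil_outflow)."""
    length, beam = design
    return (length + 2 * beam, 1.0 / (0.1 + length * beam))

def dominated(a, b):
    """True when design b dominates design a on both objectives."""
    fa, fb = evaluate(a), evaluate(b)
    return all(y <= x for x, y in zip(fa, fb)) and fa != fb

random.seed(1)
pop = [(random.random(), random.random()) for _ in range(40)]
for _ in range(50):  # generations
    a, b = random.sample(pop, 2)                    # parent selection
    child = tuple((x + y) / 2 + random.gauss(0, 0.02)
                  for x, y in zip(a, b))            # crossover + mutation
    child = tuple(min(max(g, 0.01), 1.0) for g in child)  # keep feasible
    pop[random.randrange(len(pop))] = child         # steady-state replace

front = [d for d in pop if not any(dominated(d, o) for o in pop)]
print(len(front), "non-dominated designs on the Pareto frontier")
```

A production implementation would add the weight/useful-space balancing constraint as a feasibility check before a child enters the population.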
Abstract:
A new concept and a preliminary study for a monocolumn floating unit are introduced, aimed at exploring and producing oil in ultradeep waters. This platform, which combines two relevant features, great oil storage capacity and dry-tree production capability, comprises two bodies with relatively independent heave motions. A parametric model is used to define the main design characteristics of the floating units, and a set of design alternatives is generated using this procedure. These solutions are evaluated in terms of stability requirements and dynamic response. A mathematical model is developed to estimate the first-order heave and pitch motions of the platform, and experimental tests are carried out to calibrate this model. The response of each body alone is estimated numerically using the WAMIT® code. This paper also includes a preliminary study of the platform's mooring system and appendages. The study of the heave plates quantifies the gain, in terms of reduced motions, achieved by adding the appropriate appendages to the platform. [DOI: 10.1115/1.4001429]
Abstract:
Thymidine monophosphate kinase (TMPK) has emerged as an attractive target for developing inhibitors of Mycobacterium tuberculosis growth. In this study the receptor-independent (RI) 4D-QSAR formalism was used to develop QSAR models and corresponding 3D pharmacophores for a set of 5′-thiourea-substituted α-thymidine inhibitors. Models were developed for the entire training set and for a subset consisting of the most potent inhibitors. The optimized RI 4D-QSAR models are statistically significant (r² = 0.90, q² = 0.83 for the entire set; r² = 0.86, q² = 0.80 for the high-potency subset) and also show good predictivity on the test set. The most and least potent inhibitors, in their respective postulated active conformations derived from the models, were docked in the active site of the TMPK crystallographic structure. There is a solid consistency between the 3D pharmacophore sites defined by the QSAR models and the interactions with binding-site residues. The models identify new regions of the inhibitors that contain pharmacophore sites, such as the sugar-pyrimidine ring structure and the region of the 5′-arylthiourea moiety. These regions of the ligands can be further explored, and possibly exploited, to identify new, novel, and perhaps better antituberculosis inhibitors of TMPKmt. Furthermore, the 3D pharmacophores defined by these models can be used as a starting point for future receptor-dependent antituberculosis drug design, as well as to elucidate candidate sites for substituent addition to optimize the ADMET properties of analog inhibitors.
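For reference, the q² statistic quoted above is the leave-one-out cross-validated analogue of r². A small sketch of both computations follows, on invented toy activity data rather than the paper's dataset.

```python
def r_squared(y, y_pred):
    """r² = 1 - SS_res / SS_tot for fitted predictions."""
    mean_y = sum(y) / len(y)
    ss_res = sum((yi - pi) ** 2 for yi, pi in zip(y, y_pred))
    ss_tot = sum((yi - mean_y) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

def q_squared(y, y_loo):
    """q² = 1 - PRESS / SS_tot: same formula, but using
    leave-one-out predictions in place of fitted ones."""
    return r_squared(y, y_loo)

# Toy activities, fitted predictions, and LOO predictions.
y     = [5.1, 6.0, 6.8, 7.4, 8.2]
y_fit = [5.2, 5.9, 6.9, 7.3, 8.1]
y_loo = [5.4, 5.7, 7.1, 7.1, 7.9]
print(round(r_squared(y, y_fit), 3), round(q_squared(y, y_loo), 3))
```

Because LOO predictions are always at least as scattered as fitted ones, q² ≤ r² for a given model, which is why both are reported.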
Abstract:
Dietary changes combined with drug therapy can reduce high serum cholesterol levels and dramatically decrease the risk of coronary artery disease, stroke, and overall mortality. Statins are hypolipemic drugs that effectively reduce serum cholesterol levels by attenuating cholesterol synthesis in the liver through competitive inhibition of the molecular target HMG-CoA reductase. We have used computer-aided molecular design tools, i.e., flexible docking, virtual screening of large databases, and molecular interaction fields, to propose novel potential HMG-CoA reductase inhibitors that are promising for the treatment of hypercholesterolemia.