899 results for Integrated circuits Very large scale integration Design and construction.
Abstract:
Macroscopic brain networks have been widely described with the many metrics available from graph theory. However, most analyses do not incorporate information about the physical position of network nodes. Here, we provide a multimodal macroscopic network characterization that considers the physical positions of nodes. To do so, we examined anatomical and functional macroscopic brain networks in a sample of twenty healthy subjects. Anatomical networks are obtained with a graph-based tractography algorithm from diffusion-weighted magnetic resonance images (DW-MRI). Anatomical connections identified via DW-MRI provided probabilistic constraints for determining the connectedness of 90 different brain areas. Functional networks are derived from temporal linear correlations between blood-oxygenation-level-dependent signals from the same brain areas. Rentian scaling analysis, a technique adapted from the analysis of very-large-scale integration circuits, shows that functional networks are more random and less optimized than the anatomical networks. We also provide a new metric that quantifies the global connectivity arrangements for both structural and functional networks. While the functional networks show a higher contribution of inter-hemispheric connections, the anatomical networks' strongest connections follow a dorsal-ventral arrangement. These results indicate that anatomical and functional networks present different connectivity organizations that can only be identified when the physical locations of the nodes are included in the analysis.
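Rentian scaling relates the number of nodes n inside a physical partition to the number of edges e crossing its boundary, e ∝ n^p; lower exponents indicate more wiring-efficient embeddings. A minimal sketch of the box-sampling estimate, using randomly generated node positions and connections as stand-ins for the DW-MRI networks described above (all sizes and densities are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in data: 3-D node coordinates (mm) and a symmetric adjacency matrix.
n_nodes = 90
pos = rng.uniform(0.0, 100.0, size=(n_nodes, 3))
adj = np.triu(rng.random((n_nodes, n_nodes)) < 0.1, k=1)
adj = adj | adj.T                     # undirected, no self-connections

def rentian_points(pos, adj, n_boxes=500):
    """Sample random boxes; return (nodes inside, boundary-crossing edges) pairs."""
    pts = []
    for _ in range(n_boxes):
        lo = rng.uniform(pos.min(0), pos.max(0))
        size = rng.uniform(5.0, 50.0, size=3)
        inside = np.all((pos >= lo) & (pos <= lo + size), axis=1)
        n = int(inside.sum())
        e = int(adj[inside][:, ~inside].sum())   # edges leaving the box
        if n >= 2 and e >= 1:
            pts.append((n, e))
    return np.array(pts)

pts = rentian_points(pos, adj)
# Rent exponent p from a log-log fit of e against n.
p, _ = np.polyfit(np.log(pts[:, 0]), np.log(pts[:, 1]), 1)
print(f"estimated Rent exponent p ~ {p:.2f}")
```

For a random graph like this stand-in, p comes out near 1; an optimized physical embedding would show a markedly lower exponent, which is the contrast the abstract draws between functional and anatomical networks.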
Abstract:
Data Envelopment Analysis (DEA) is one of the most widely used methods for measuring the efficiency and productivity of Decision Making Units (DMUs). DEA for a large dataset with many inputs/outputs requires huge computer resources in terms of memory and CPU time. This paper proposes a back-propagation neural network approach to Data Envelopment Analysis to address this problem for the very large datasets now emerging in practice. The neural network's requirements for computer memory and CPU time are far lower than those of conventional DEA methods, making it a useful tool for measuring the efficiency of large datasets. Finally, the back-propagation DEA algorithm is applied to five large datasets and its results are compared with those obtained by conventional DEA.
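The core idea can be sketched with a small NumPy network trained by back-propagation to map each DMU's inputs and outputs to an efficiency score in (0, 1). The targets below are synthetic placeholders standing in for scores that a conventional DEA linear-programming solver would produce on a training subsample; the dataset size, architecture and learning rate are all illustrative, not those used in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in data: 200 DMUs with 3 inputs and 2 outputs each.
X = rng.uniform(1.0, 10.0, size=(200, 5))
# Placeholder targets in (0, 1) standing in for conventional DEA scores.
theta = (X[:, 3] + X[:, 4]) / X.sum(axis=1)
Xn = (X - X.mean(0)) / X.std(0)          # normalize features for training

# One hidden layer; sigmoid output keeps predicted scores in (0, 1).
W1 = rng.normal(0.0, 0.5, (5, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.1
for _ in range(5000):
    h = np.tanh(Xn @ W1 + b1)                     # forward pass
    pred = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    err = pred - theta[:, None]
    d2 = err * pred * (1.0 - pred) / len(Xn)      # MSE gradient at the output
    dh = (d2 @ W2.T) * (1.0 - h**2)               # back-propagate through tanh
    W2 -= lr * (h.T @ d2);  b2 -= lr * d2.sum(0)
    W1 -= lr * (Xn.T @ dh); b1 -= lr * dh.sum(0)

mse = float(np.mean((pred - theta[:, None])**2))
print(f"training MSE ~ {mse:.4f}")
```

Once trained, scoring a new DMU is a single forward pass, which is where the memory and CPU savings over re-solving one linear program per DMU come from.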
Abstract:
Two-dimensional (2D) materials have generated great interest in the last few years as a new toolbox for electronics. This family of materials includes, among others, metallic graphene, semiconducting transition metal dichalcogenides (such as MoS2), and insulating boron nitride. These materials and their heterostructures offer excellent mechanical flexibility, optical transparency, and favorable transport properties for realizing electronic, sensing and optical systems on arbitrary surfaces. In this work, we develop several etch-stop-layer technologies that allow the fabrication of complex 2D devices and present, for the first time, the large-scale integration of graphene with molybdenum disulfide (MoS2), both grown using the fully scalable CVD technique. Transistor devices and logic circuits with an MoS2 channel and graphene as contacts and interconnects are constructed and show high performance. In addition, the graphene/MoS2 heterojunction contact has been systematically compared with MoS2-metal junctions experimentally and studied using density functional theory. The tunability of the graphene work function significantly improves the ohmic contact to MoS2. These high-performance large-scale devices and circuits based on 2D heterostructures pave the way for practical flexible, transparent electronics in the future. The authors acknowledge financial support from the Office of Naval Research (ONR) Young Investigator Program, the ONR GATE MURI program, and the Army Research Laboratory. This research has made use of the MI.
Abstract:
Dynamically reconfigurable SRAM-based field-programmable gate arrays (FPGAs) enable the implementation of reconfigurable computing systems where several applications may be run simultaneously, sharing the available resources according to their own immediate functional requirements. To exclude malfunctioning due to faulty elements, the reliability of all FPGA resources must be guaranteed. Since resource allocation takes place asynchronously, an online structural test scheme is the only way of ensuring reliable system operation. On the other hand, this test scheme should not disturb the operation of the circuit, otherwise availability would be compromised. System performance is also influenced by the efficiency of the management strategies that must be able to dynamically allocate enough resources when requested by each application. As those resources are allocated and later released, many small free resource blocks are created, which are left unused due to performance and routing restrictions. To avoid wasting logic resources, the FPGA logic space must be defragmented regularly. This paper presents a non-intrusive active replication procedure that supports the proposed test methodology and the implementation of defragmentation strategies, assuring both the availability of resources and their perfect working condition, without disturbing system operation.
Abstract:
Over recent years there has been an increasing deployment of renewable energy generation technologies, particularly large-scale wind farms. As wind farm deployment increases, it is vital to gain a good understanding of how the energy produced is affected by climate variations, over a wide range of time-scales, from short (hours to weeks) to long (months to decades) periods. By relating wind speed at specific sites in the UK to a large-scale climate pattern (the North Atlantic Oscillation or "NAO"), the power generated by a modelled wind turbine under three different NAO states is calculated. It was found that the wind conditions under these NAO states may yield a difference in the mean wind power output of up to 10%. A simple model is used to demonstrate that forecasts of future NAO states can potentially be used to improve month-ahead statistical forecasts of monthly-mean wind power generation. The results confirm that the NAO has a significant impact on the hourly-, daily- and monthly-mean power output distributions from the turbine with important implications for (a) the use of meteorological data (e.g. their relationship to large scale climate patterns) in wind farm site assessment and, (b) the utilisation of seasonal-to-decadal climate forecasts to estimate future wind farm power output. This suggests that further research into the links between large-scale climate variability and wind power generation is both necessary and valuable.
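The chain from climate state to generated power can be illustrated with an idealized turbine power curve applied to wind-speed samples drawn under two hypothetical NAO states. All parameters here (cut-in/rated/cut-out speeds, rated power, Weibull scales) are illustrative, not those of the modelled turbine or UK sites in the study:

```python
import numpy as np

def turbine_power(v, cut_in=3.5, rated=13.0, cut_out=25.0, p_rated=2.0):
    """Idealized power curve (MW): cubic ramp between cut-in and rated speed,
    constant at rated power up to cut-out, zero elsewhere. Illustrative values."""
    v = np.asarray(v, dtype=float)
    ramp = p_rated * (v**3 - cut_in**3) / (rated**3 - cut_in**3)
    p = np.where((v >= cut_in) & (v < rated), ramp, 0.0)
    return np.where((v >= rated) & (v < cut_out), p_rated, p)

# Hypothetical hourly wind speeds over one month under two NAO states:
# a positive NAO shifts the wind-speed distribution upward at many UK sites.
rng = np.random.default_rng(2)
v_pos = rng.weibull(2.0, 720) * 9.0   # NAO+ : Weibull scale 9 m/s
v_neg = rng.weibull(2.0, 720) * 8.0   # NAO- : Weibull scale 8 m/s
mean_pos = float(turbine_power(v_pos).mean())
mean_neg = float(turbine_power(v_neg).mean())
print(f"monthly-mean power: NAO+ {mean_pos:.2f} MW vs NAO- {mean_neg:.2f} MW")
```

Because power rises with the cube of wind speed below rated, even a modest NAO-driven shift in the wind-speed distribution produces a disproportionate change in mean output, which is why NAO-state forecasts can add skill to month-ahead generation forecasts.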
Abstract:
As part of an international intercomparison project, the weak temperature gradient (WTG) and damped gravity wave (DGW) methods are used to parameterize large-scale dynamics in a set of cloud-resolving models (CRMs) and single-column models (SCMs). The WTG or DGW method is implemented using a configuration that couples a model to a reference state defined with profiles obtained from the same model in radiative-convective equilibrium. We investigated the sensitivity of each model to changes in SST, given a fixed reference state, and performed a systematic comparison of the WTG and DGW methods across models, as well as of each model's behavior under the two methods. The sensitivity to the SST depends on both the large-scale parameterization method and the choice of the cloud model. In general, SCMs display a wider range of behaviors than CRMs. All CRMs using either the WTG or DGW method show an increase of precipitation with SST, while SCMs show sensitivities which are not always monotonic. CRMs using either method show a similar relationship between mean precipitation rate and column relative humidity, while SCMs exhibit a much wider range of behaviors. DGW simulations produce large-scale velocity profiles which are smoother and less top-heavy than those produced by the WTG simulations. These large-scale parameterization methods provide a useful tool to identify the impact of parameterization differences on model behavior in the presence of two-way feedback between convection and the large-scale circulation.
Abstract:
A comparison tool has been developed by mapping global GPS total electron content (TEC) and large-coverage ionospheric scintillation measurements together on geomagnetic latitude/magnetic local time coordinates. Using this tool, a comparison between large-scale ionospheric irregularities and scintillations is pursued during a geomagnetic storm. Irregularities, such as storm-enhanced density (SED), the middle-latitude trough and polar cap patches, are clearly identified from the TEC maps. At the edges of these irregularities clear scintillations appeared, but their behaviors were different. Phase scintillations (σφ) were almost always larger than amplitude scintillations (S4) at the edges of these irregularities, associated with bursty flows or flow reversals with large density gradients. An unexpected scintillation feature appeared inside the modeled auroral oval, where S4 was much larger than σφ, most likely caused by particle precipitation around the exiting polar cap patches.
Abstract:
Large parts of the world are subject to one or more natural hazards, such as earthquakes, tsunamis, landslides, tropical storms (hurricanes, cyclones and typhoons), coastal inundation and flooding. Virtually the entire world is at risk of man-made hazards. In recent decades, rapid population growth and economic development in hazard-prone areas have greatly increased the potential of multiple hazards to cause damage and destruction of buildings, bridges, power plants, and other infrastructure, posing a grave danger to communities and disrupting economic and societal activities. Although an individual hazard is significant in many parts of the United States (U.S.), in certain areas more than one hazard may pose a threat to the constructed environment. In such areas, structural design and construction practices should address multiple hazards in an integrated manner to achieve structural performance that is consistent with owner expectations and general societal objectives. The growing interest in and importance of multiple-hazard engineering has been recognized recently. This has spurred the evolution of multiple-hazard risk-assessment frameworks and the development of design approaches, paving the way for future research towards sustainable construction of new and improved structures and retrofitting of existing structures. This report provides a review of the literature and the current state of practice for assessment, design and mitigation of the impact of multiple hazards on structural infrastructure. It also presents an overview of future research needs related to the multiple-hazard performance of constructed facilities.
Abstract:
Alpine heavy precipitation events often affect small catchments, although the circulation pattern leading to the event extends over the entire North Atlantic. The various scale interactions involved are particularly challenging for the numerical weather prediction of such events. Unlike previous studies focusing on the southern Alps, here a comprehensive study of a heavy precipitation event in the northern Alps in October 2011 is presented, with particular focus on the role of the large-scale circulation in the North Atlantic/European region. During the event exceptionally high amounts of total precipitable water occurred in and north of the Alps. This moisture was initially transported along the flanks of a blocking ridge over the North Atlantic. Subsequently, strong and persistent northerly flow established at the upstream flank of a trough over Europe and steered the moisture towards the northern Alps. Lagrangian diagnostics reveal that a large fraction of the moisture emerged from the West African coast, where a subtropical upper-level cut-off low served as an important moisture collector. Wave activity flux diagnostics show that the ridge was initiated as part of a low-frequency, large-scale Rossby wave train, while convergence of fast transients helped to amplify it locally in the North Atlantic. A novel diagnostic for advective potential vorticity tendencies sheds more light on this amplification and further emphasizes the role of the ridge in amplifying the trough over Europe. Operational forecasts misrepresented the amplitude and orientation of this trough. For the first time, this study documents an important pathway for northern Alpine flooding, in which the interaction of synoptic- and large-scale weather systems and long-range moisture transport from the tropics are dominant. Moreover, the trapping of moisture in a subtropical cut-off near the West African coast is found to be a crucial precursor to the observed European high-impact weather.
Abstract:
The appearance of radix-$2^{2}$ was a milestone in the design of pipelined FFT hardware architectures. Later, radix-$2^{2}$ was extended to radix-$2^{k}$. However, radix-$2^{k}$ was only proposed for single-path delay feedback (SDF) architectures, not for feedforward ones, also called multi-path delay commutator (MDC). This paper presents the radix-$2^{k}$ feedforward (MDC) FFT architectures. In feedforward architectures, radix-$2^{k}$ can be used for any number of parallel samples which is a power of two. Furthermore, both decimation-in-frequency (DIF) and decimation-in-time (DIT) decompositions can be used. In addition, the designs can achieve very high throughputs, which makes them suitable for the most demanding applications. Indeed, the proposed radix-$2^{k}$ feedforward architectures require fewer hardware resources than parallel feedback ones, also called multi-path delay feedback (MDF), when several samples in parallel must be processed. As a result, the proposed radix-$2^{k}$ feedforward architectures not only offer an attractive solution for current applications, but also open up a new research line on feedforward structures.
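As a behavioral reference only (the abstract concerns hardware pipelines, not software), the decimation-in-frequency butterflies that an SDF or MDC pipeline unrolls across its stages can be sketched for the basic radix-2 case:

```python
import numpy as np

def fft_dif_radix2(x):
    """Iterative radix-2 decimation-in-frequency (DIF) FFT: a software model
    of the butterfly stages that a pipelined architecture unrolls in hardware."""
    x = np.asarray(x, dtype=complex).copy()
    n = len(x)
    assert n > 0 and (n & (n - 1)) == 0, "length must be a power of two"
    span = n // 2
    while span >= 1:
        w = np.exp(-2j * np.pi * np.arange(span) / (2 * span))  # stage twiddles
        for start in range(0, n, 2 * span):
            a = x[start:start + span].copy()
            b = x[start + span:start + 2 * span].copy()
            x[start:start + span] = a + b
            x[start + span:start + 2 * span] = (a - b) * w  # twiddle after subtract (DIF)
        span //= 2
    # DIF leaves outputs in bit-reversed order; undo the permutation.
    bits = n.bit_length() - 1
    idx = [int(format(i, f"0{bits}b")[::-1], 2) for i in range(n)]
    return x[idx]
```

Each pass of the outer loop corresponds to one pipeline stage; radix-$2^{k}$ architectures merge $k$ such stages so that all but one twiddle multiplication per group becomes a trivial rotation, which is the source of the hardware savings claimed above.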
Abstract:
Paper submitted to the IFIP International Conference on Very Large Scale Integration (VLSI-SOC), Darmstadt, Germany, 2003.
Abstract:
Hardware/software (HW/SW) co-simulation integrates software simulation and hardware simulation simultaneously. An HW/SW co-simulation platform is typically used to ease debugging and verification in very large-scale integration (VLSI) design. To accelerate the computation of a gesture recognition technique, an HW/SW implementation using field-programmable gate array (FPGA) technology is presented in this paper. The major contributions of this work are: (1) a novel design of a memory controller in the Verilog Hardware Description Language (Verilog HDL) that reduces memory consumption and the load on the processor; (2) the testing part of the neural network algorithm is hardwired to improve speed and performance; and (3) the design takes only a few milliseconds to recognize a hand gesture, which makes it computationally efficient. American Sign Language gesture recognition is chosen to verify the performance of the approach, and several experiments were carried out on four databases of gestures (alphabet signs A to Z).
Abstract:
The focus of this research is to explore the applications of the finite difference formulation based on the latency insertion method (LIM) to the analysis of circuit interconnects. Special attention is devoted to addressing the issues that arise in very large networks such as on-chip signal and power distribution networks. We demonstrate that the LIM has the power and flexibility to handle various types of analysis required at different stages of circuit design. The LIM is particularly suitable for simulations of very large scale linear networks and can significantly outperform conventional circuit solvers (such as SPICE).
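A minimal sketch of the LIM idea on a uniform RLC ladder standing in for an on-chip interconnect: branch currents and node voltages are advanced in alternation with explicit leapfrog updates, much like FDTD. All element values, line length and step count are illustrative:

```python
import numpy as np

# LIM sketch: leapfrog (alternating) updates on an RLC ladder.
n = 50                       # nodes along the interconnect
L, C, R = 1e-9, 1e-12, 0.1   # per-segment series L (H), shunt C (F), series R (ohm)
dt = 0.2 * np.sqrt(L * C)    # well below the leapfrog stability limit (~sqrt(L*C))
v = np.zeros(n)              # node voltages
i = np.zeros(n - 1)          # branch currents between adjacent nodes
vmax_far = 0.0

for _ in range(400):
    v[0] = 1.0                                     # ideal step source at the near end
    # branch update: L di/dt = v_k - v_{k+1} - R*i
    i += (dt / L) * (v[:-1] - v[1:] - R * i)
    # node update: C dv/dt = (current in) - (current out); far end left open
    v[1:] += (dt / C) * (i - np.append(i[1:], 0.0))
    vmax_far = max(vmax_far, float(v[-1]))

print(f"peak far-end voltage: {vmax_far:.2f} V")
```

Because every update is explicit and local, the cost per time step grows only linearly with the number of elements; this is what lets LIM outperform matrix-based solvers such as SPICE on very large signal and power distribution networks.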
Abstract:
In this brief, a ROM-less structure for binary-to-residue number system (RNS) conversion modulo {2^n ± k} is proposed. This structure is based only on adders and constant multipliers. This brief is motivated by the existing {2^n ± k} binary-to-RNS converters, which are particularly inefficient for larger values of n. The experimental results obtained for 4n and 8n bits of dynamic range suggest that the proposed conversion structures significantly improve forward conversion efficiency, with an AT-metric improvement above 100% with respect to the related state of the art. Delay improvements of 2.17 times with only a 5% area increase can be achieved if a proper selection of the {2^n ± k} moduli is performed.
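The arithmetic behind an adder-and-constant-multiplier converter can be modeled in software: since 2^n ≡ k (mod 2^n − k), the high part of the binary word is folded down with one constant multiply by k and an add, repeated until the result fits in n bits. A minimal sketch of the {2^n − k} case (the {2^n + k} case works analogously with a subtraction, since 2^n ≡ −k there); the function name is ours, not the brief's:

```python
def mod_2n_minus_k(x, n, k):
    """Reduce x modulo (2**n - k) using only shifts, adds and a constant
    multiply by k, mirroring the adder/constant-multiplier datapath.
    Assumes 0 < k < 2**n."""
    mask = (1 << n) - 1
    m = (1 << n) - k
    while x >> n:                    # fold high bits: hi*2**n + lo ≡ hi*k + lo
        x = (x >> n) * k + (x & mask)
    while x >= m:                    # final correction subtraction(s)
        x -= m
    return x

print(mod_2n_minus_k(0x1234_5678_9ABC, 16, 3))
```

Each pass of the folding loop corresponds to one adder/constant-multiplier level in hardware; small k keeps the constant multiplier cheap, which is why the choice of moduli affects the AT metric reported above.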