942 results for high-level synthesis
Abstract:
Porous, large-surface-area, metastable zirconias are of importance to catalytic, electrochemical, biological, and thermal insulation applications. Combustion synthesis is a very commonly used method for producing such zirconias; however, its rapid nature makes control difficult. A simple modification has been made to traditional solution combustion synthesis to address this problem: the addition of starch to yield a starting mixture with a "dough-like" consistency. Just 5 wt% starch significantly alters the combustion characteristics of the "dough." In particular, it helps to achieve better control over the reaction zone temperature, which is significantly lower than that calculated by the adiabatic approximation typically used in self-propagating high-temperature synthesis. The effect of such control is demonstrated by the ability to tune the dough composition to yield zirconias with different phase compositions, from the relatively elusive "amorphous" phase to monoclinic (> 30 nm grain size) and tetragonal (< 30 nm grain size) pure zirconia. The nature of this amorphous phase has been investigated using infrared spectroscopy. The starch content also helps tailor porosity in the final product. Zirconias with an average pore size of about 50 μm and specific surface areas as large as 110 m²/g have been obtained.
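For reference, the adiabatic approximation mentioned above estimates the reaction zone temperature from an energy balance in which the entire reaction enthalpy heats the products; a minimal statement of this standard SHS relation (a textbook form, not taken from the abstract itself) is

\[ -\Delta H_r(T_0) = \int_{T_0}^{T_{ad}} \sum_i n_i\, C_{p,i}(T)\, dT, \]

where \(T_{ad}\) is the adiabatic combustion temperature, \(T_0\) the initial temperature, and \(C_{p,i}\) the molar heat capacities of the products. The reaction zone temperature reported for the starch-modified dough falls significantly below the \(T_{ad}\) predicted this way.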
Abstract:
Service discovery is vital in ubiquitous applications, where a large number of devices and software components collaborate unobtrusively and provide numerous services without user intervention. Existing service discovery schemes use a service matching process to offer services of interest to users. Potentially, the context information of the users and the surrounding environment can be used to improve the quality of service matching. To make use of context information in service matching, a service discovery technique needs to address certain challenges. First, the context information must have an unambiguous representation. Second, the devices in the environment must be able to disseminate high-level and low-level context information seamlessly across different networks. Third, the dynamic nature of the context information must be taken into account. We propose a C-IOB (Context-Information, Observation, and Belief) based service discovery model that deals with the above challenges by processing the context information and formulating beliefs based on observations. With these formulated beliefs, the required services are provided to the users. The method has been tested with a typical ubiquitous museum-guide application over different cases. Simulation shows the method to be time-efficient, and the results are quite encouraging.
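As a rough sketch of the context-to-belief pipeline the C-IOB model describes, the Python below filters context observations by confidence into beliefs and matches services against them. All class, field, and service names are illustrative assumptions; the paper's actual model and API are not given in the abstract.

    # Hypothetical context -> observation -> belief -> service-matching pipeline
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Observation:
        attribute: str      # e.g. "location"
        value: str          # e.g. "gallery_3"
        confidence: float   # reliability of the sensed context

    def form_beliefs(observations, threshold=0.6):
        """Keep only observations trusted enough to act on."""
        return {o.attribute: o.value
                for o in observations if o.confidence >= threshold}

    def match_services(beliefs, services):
        """Return services whose context requirements the beliefs satisfy."""
        return [name for name, required in services.items()
                if all(beliefs.get(a) == v for a, v in required.items())]

    obs = [Observation("location", "gallery_3", 0.9),
           Observation("language", "en", 0.8)]
    services = {"audio_tour_gallery_3": {"location": "gallery_3", "language": "en"},
                "cafe_menu": {"location": "cafe"}}
    print(match_services(form_beliefs(obs), services))  # ['audio_tour_gallery_3']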
Abstract:
This paper reports ab initio, DFT, and transition state theory (TST) calculations on HF, HCl, and ClF elimination reactions from the CH2Cl-CH2F molecule. Both the ground state and the transition states for the HX elimination reactions have been optimized at the HF, MP2, and DFT levels with the 6-31G*, 6-31G**, and 6-311++G** basis sets. In addition, CCSD(T) single-point calculations were carried out at the MP2/6-311++G** optimized geometry for a more accurate determination of the energies of the minima and transition states than the other methods employed here provide. Classical barriers are converted to Arrhenius activation energies by TST calculations for comparison with experimental results. The pre-exponential factors, A, calculated at all levels of theory are significantly larger than the experimental values. For the activation energy, E_a, DFT gives good results for HF elimination, within 4-8 kJ mol⁻¹ of the experimental values. None of the methods employed, including CCSD(T), gives comparable results for the HCl elimination reactions. However, rate constants calculated by the CCSD(T) method are in very good agreement with experiment for HCl elimination and in reasonable agreement for the HF elimination reactions. Due to the strong correlation between A and E_a, the rate constants can be fit either to a lower A and E_a (as given by experimental fitting, corresponding to a tight TS) or to a larger A and E_a (as given by high-level ab initio calculations, corresponding to a loose TS). The barrier for ClF elimination is determined to be 607 kJ mol⁻¹ at the HF level, and this channel is unlikely to be important for CH2FCH2Cl. Results for other CH2X-CH2Y (X, Y = F/Cl) molecules are included for comparison.
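For reference, the TST and Arrhenius quantities compared in this abstract are connected, for a unimolecular reaction, by the standard relations (textbook forms, not specific to this paper)

\[ k(T) = \frac{k_B T}{h}\, e^{\Delta S^{\ddagger}/R}\, e^{-\Delta H^{\ddagger}/RT} = A\, e^{-E_a/RT}, \qquad E_a = \Delta H^{\ddagger} + RT, \qquad A = \frac{e\, k_B T}{h}\, e^{\Delta S^{\ddagger}/R}, \]

so a loose transition state (larger \(\Delta S^{\ddagger}\)) corresponds to a larger \(A\) and, through the fit to the same rate constants, a larger \(E_a\), while a tight transition state lowers both; this is the A-versus-E_a correlation the abstract invokes.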
Abstract:
Methylated guanine damage at the O6 position (i.e., O6MG) is dangerous due to its mutagenic and carcinogenic character, which often gives rise to G:C→A:T mutations. However, the reason for this mutagenicity is not known precisely and has been a matter of controversy. Further, although it is known that O6-alkylguanine-DNA alkyltransferase (AGT) repairs O6MG paired with cytosine in DNA, the complete mechanism of target recognition and repair is not known. All these aspects of DNA damage and repair have been addressed here by employing high-level density functional theory in the gas phase and in aqueous medium. It is found that the actual cause of O6MG-mediated mutation may be that DNA polymerases incorporate thymine opposite O6MG, misreading the resulting O6MG:T complex as an A:T base pair due to their analogous binding energies and structural alignments. It is further revealed that AGT-mediated nucleotide flipping occurs in two successive steps. The intercalation of the finger residue Arg128 into the DNA double helix and its interaction with the O6MG:C base pair, followed by rotation of the O6MG nucleotide, are found to be crucial for damage recognition and nucleotide flipping.
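The "analogous binding energies" referred to above are conventionally obtained as supermolecular interaction energies; a generic definition (an assumption about the methodology, since the abstract does not state the formula) is

\[ \Delta E_{\text{bind}} = E_{\text{pair}} - E_{\text{base}_1} - E_{\text{base}_2}, \]

evaluated at the optimized pair geometry, typically with a basis set superposition error (BSSE) correction. Near-equal \(\Delta E_{\text{bind}}\) values for O6MG:T and A:T are what would allow a polymerase to misread one pair as the other.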
Abstract:
Tuberous sclerosis complex (TSC) is an autosomal dominant disorder with loci on chromosome 9q34.12 (TSC1) and chromosome 16p13.3 (TSC2). Genes for both loci have been isolated and characterized, but their promoters have not been characterized so far and little is known about the regulation of these genes. This study reports the characterization of the human TSC1 promoter region for the first time. We have identified a novel alternative isoform in the 5' untranslated region (UTR) of the TSC1 gene transcript involving exon 1. Alternative isoforms in the 5' UTR of the mouse Tsc1 gene transcript involving exon 1 and exon 2 have also been identified. We have identified three upstream open reading frames (uORFs) in the 5' UTR of the TSC1/Tsc1 gene. A comparative study of the 5' UTR of the TSC1/Tsc1 gene has revealed a high degree of similarity not only in the sequence but also in the splicing pattern of the human and mouse TSC1 genes. We have used PCR methodology to isolate approximately 1.6 kb of genomic DNA 5' to the TSC1 cDNA. This sequence has directed a high level of luciferase expression in both HeLa and HepG2 cells. Successive 5' and 3' deletion analyses suggest that a 587-bp region, from position +77 to -510 relative to the transcription start site (TSS), contains the promoter activity. Interestingly, this region contains no consensus TATA box or CAAT box. However, a 521-bp fragment surrounding the TSS exhibits the characteristics of a CpG island, which overlaps the promoter region. The identification of the TSC1 promoter region will help in designing a suitable strategy to identify mutations in this region in patients who do not show any mutations in the coding regions. It will also help in studying the regulation of the TSC1 gene and its role in tumorigenesis.
Abstract:
The use of engineered landfills for the disposal of industrial wastes is currently common practice. Bentonite is attracting greater attention not only as a capping and lining material in landfills but also as a buffer and backfill material for repositories of high-level nuclear waste around the world. In the design of buffer and backfill materials, it is important to know the swelling pressures of compacted bentonite with different electrolyte solutions. Theoretical studies of swell pressure behaviour are all based on diffuse double layer (DDL) theory. To establish a relation between the swell pressure and the void ratio of the soil, it is necessary to calculate the mid-plane potential in the diffuse part of the interacting ionic double layers. The difficulty in these calculations is the elliptic integral involved in the relation between the half-space distance and the mid-plane potential. Several investigators have circumvented this problem using indirect methods or cumbersome numerical techniques. In this work, a novel approach is proposed for theoretical estimation of the swell pressures of fine-grained soils from DDL theory. The proposed approach circumvents the complex computations in establishing the relationship between the mid-plane potential and the distance between the diffuse plates, in other words, between swell pressure and void ratio.
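For reference, the classical relations whose elliptic integral the paper circumvents are, in van Olphen's standard nondimensional form (the established theory, not the paper's new approximation),

\[ \kappa d = \int_{u}^{z} \frac{dy}{\sqrt{2\cosh y - 2\cosh u}}, \qquad p = 2 n k T\, (\cosh u - 1), \]

where \(z\) and \(u\) are the nondimensional surface and mid-plane potentials, \(d\) is the half distance between clay platelets (which fixes the void ratio), \(1/\kappa\) is the double-layer thickness, \(n\) the bulk ion concentration, \(k\) the Boltzmann constant, and \(T\) the absolute temperature. Eliminating \(u\) between the two equations is what links swell pressure to void ratio.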
Abstract:
The present study examines the geotechnical properties of Indian bentonite clays for their suitability as buffer materials in deep geological repositories for high-level nuclear wastes. The bentonite samples are characterized for index properties, compaction, hydraulic conductivity, and swelling characteristics. Evaluation of the geotechnical properties of compacted bentonite-sand admixtures from parts of NW India reveals swelling potentials and hydraulic conductivities in the ranges of 55%-108% and 1.2 × 10⁻¹⁰ cm/s to 5.42 × 10⁻¹¹ cm/s, respectively. A strong correlation was observed between ESP (exchangeable sodium percentage) and the liquid limit/swell potential of the tested specimens. Relatively less well-defined trends emerged between ESP and swell pressure/hydraulic conductivity. The Barmer-1 bentonite, despite possessing a relatively low montmorillonite content of 68%, developed a higher Atterberg limit and swell potential, and exhibited swelling pressure and hydraulic conductivity comparable to those of other bentonites with higher montmorillonite contents (82-86%). The desirable geotechnical properties of the Barmer clay as a buffer material are attributed to its large ESP (63%) and the EMDD (1.17 Mg/m³) attained at the experimental compactive stress (5 MPa).
Abstract:
Scan design is a widely practiced DFT technique. The scan testing procedure consists of state initialization, test application, response capture, and observation. During state initialization, the scan vectors are shifted into the scan cells while the responses captured in the last cycle are simultaneously shifted out. During this shift operation, the transitions that arise in the scan cells propagate into the combinational circuit, which in turn creates many more toggling activities in the combinational block and hence increases the dynamic power consumption. The dynamic power consumed during the scan shift operation is much higher than that of normal-mode operation.
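The dynamic power in question follows the standard CMOS switching-power relation (a textbook formula, not taken from this abstract):

\[ P_{\text{dyn}} = \alpha\, C_L\, V_{DD}^{2}\, f, \]

where \(\alpha\) is the toggle-activity factor, \(C_L\) the switched load capacitance, \(V_{DD}\) the supply voltage, and \(f\) the clock frequency. Scan shifting inflates \(\alpha\) in both the scan cells and the combinational logic they feed, which is why shift-mode power exceeds normal-mode power.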
INTACTE: An Interconnect Area, Delay, and Energy Estimation Tool for Microarchitectural Explorations
Abstract:
Prior work on modeling interconnects has focused on optimizing wire and repeater design to trade off energy and delay, and is largely based on low-level circuit parameters. Hence these models are hard to use directly to make high-level microarchitectural trade-offs in the initial exploration phase of a design. In this paper, we propose INTACTE, a tool that architects can use to get reasonably accurate interconnect area, delay, and power estimates based on a few architecture-level parameters for the interconnect, such as length, width (in number of bits), frequency, and latency, for a specified technology and voltage. The tool uses well-known models of interconnect delay and energy, taking into account the wire pitch, repeater size, and spacing, for a range of voltages and technologies. It then solves an optimization problem: finding the lowest-energy interconnect design, in terms of the low-level circuit parameters, that meets the architectural constraints given as inputs. In addition, the tool provides the area, energy, and delay for a range of supply voltages and degrees of pipelining, which can be used for microarchitectural exploration of a chip. The delay and energy models used by the tool have been validated against low-level circuit simulations. We discuss several potential applications of the tool and present an example of optimizing interconnect design in the context of clustered VLIW architectures.
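As a rough illustration of the kind of optimization INTACTE solves, the sketch below grid-searches repeater count and size for the lowest-energy repeated wire that meets a delay constraint. The Elmore-style delay expression, switched-capacitance energy, and all constants are generic textbook approximations chosen for the example, not the tool's actual validated models.

    # Toy interconnect optimizer: minimize energy subject to a delay bound.
    R_W, C_W = 2000.0, 200e-15   # wire resistance (ohm/mm) and capacitance (F/mm), assumed
    R_0, C_0 = 10e3, 1e-15       # unit repeater output resistance (ohm) and input cap (F), assumed
    E_PER_F = 0.5 * 1.1**2       # energy per farad switched at an assumed Vdd of 1.1 V

    def delay_energy(length_mm, n_rep, size):
        """Elmore-style delay and switched-capacitance energy of a repeated wire."""
        seg = length_mm / n_rep
        r_w, c_w = R_W * seg, C_W * seg
        t_seg = (0.69 * (R_0 / size) * (c_w + size * C_0)   # driver charging segment
                 + 0.38 * r_w * c_w                         # distributed wire RC
                 + 0.69 * r_w * size * C_0)                 # wire charging next repeater
        energy = E_PER_F * (C_W * length_mm + n_rep * size * C_0)
        return n_rep * t_seg, energy

    def optimize(length_mm, max_delay):
        best = None
        for n_rep in range(1, 64):
            for size in range(1, 200):
                d, e = delay_energy(length_mm, n_rep, size)
                if d <= max_delay and (best is None or e < best[0]):
                    best = (e, n_rep, size, d)
        return best  # (energy J, repeaters, repeater size, delay s), or None

    print(optimize(5.0, 1.0e-9))  # lowest-energy design for a 5 mm wire under 1 ns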
Abstract:
This letter proposes the combination of a passive muffler and an active noise control system for the control of very high‐level noise in ducts used with large industrial fans and similar equipment. The analysis of such a hybrid system is presented making use of electroacoustic analogies and the transfer matrix method. It turns out that a passive muffler upstream of the input microphone can indeed lower the acoustic pressure and, hence, the power requirement of the auxiliary source. The parameter that needs to be optimized (or maximized) for this purpose is a certain velocity ratio that can readily be evaluated in a closed form, making it more or less straightforward to synthesize the configuration of an effective passive muffler to go with the active noise control system.
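For reference, the transfer matrix method mentioned above represents each duct element by a 2x2 matrix relating the acoustic state variables at its two ends; for a uniform pipe of length \(l\), wavenumber \(k\), and characteristic impedance \(Y\), the standard form (not specific to this letter) is

\[ \begin{bmatrix} p_1 \\ v_1 \end{bmatrix} = \begin{bmatrix} \cos kl & jY \sin kl \\ (j/Y)\sin kl & \cos kl \end{bmatrix} \begin{bmatrix} p_2 \\ v_2 \end{bmatrix}, \]

where \(p\) and \(v\) are the acoustic pressure and volume velocity. A cascade of muffler elements is analyzed by multiplying their matrices in order, which is how the velocity ratio to be maximized can be evaluated in closed form.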
Abstract:
Continuous advances in VLSI technology have made the implementation of very complicated systems possible. Modern Systems-on-Chip (SoCs) contain many processors, IP cores, and other functional units. As a result, complete verification of whole systems before implementation is becoming infeasible; hence it is likely that these systems will have some errors after manufacturing. This increases the need to find design errors in chips after fabrication. The main challenge for post-silicon debug is the observability of internal signals. Post-silicon debug is the problem of determining what is wrong when the fabricated chip of a new design behaves incorrectly. This problem now consumes over half of the overall verification effort on large designs, and it is growing worse. Traditional post-silicon debug methods concentrate on the functional parts of systems and provide mechanisms to increase the observability of the internal state of systems. Those methods may not be sufficient, as modern SoCs have many blocks (processors, IP cores, etc.) that communicate with one another, and communication is another source of design errors. This tutorial will provide an insight into various observability enhancement techniques, on-chip instrumentation techniques, and the use of high-level models to support the debug process, targeting both the insides of blocks and the communication among them. It will also cover the use of formal methods to help the debug process.
Abstract:
Instruction reuse is a microarchitectural technique that improves the execution time of a program by removing redundant computations at run-time. Although this is the job of an optimizing compiler, compilers often do not succeed due to their limited knowledge of run-time data. In this paper we examine instruction reuse of integer ALU and load instructions in network processing applications. Specifically, this paper attempts to answer the following questions: (1) How much instruction reuse is inherent in network processing applications? (2) Can reuse be improved by reducing interference in the reuse buffer? (3) What characteristics of network applications can be exploited to improve reuse? (4) What is the effect of reuse on resource contention and memory accesses? We propose an aggregation scheme that combines a high-level concept of network traffic, i.e., "flows," with a low-level microarchitectural feature of programs, i.e., the repetition of instructions and data, along with an architecture that exploits temporal locality in incoming packet data to improve reuse. We find that for the benchmarks considered, 1% to 50% of instructions are reused, while the speedup achieved varies between 1% and 24%. As a side effect, instruction reuse reduces memory traffic and can therefore also be considered a scheme for low power.
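As a minimal illustration of the reuse buffer these questions refer to, the sketch below caches instruction results keyed by program counter and operand values so that a repeated computation can skip execution. It is purely illustrative; the paper's flow-based aggregation and interference-reduction schemes are not reproduced here.

    # Toy reuse buffer: (pc, operands) -> previously computed result.
    class ReuseBuffer:
        def __init__(self, capacity=1024):
            self.capacity = capacity
            self.table = {}

        def lookup(self, pc, operands):
            """Return a previously computed result, or None on a reuse miss."""
            return self.table.get((pc, operands))

        def insert(self, pc, operands, result):
            if len(self.table) >= self.capacity:      # naive FIFO-ish eviction
                self.table.pop(next(iter(self.table)))
            self.table[(pc, operands)] = result

    rb = ReuseBuffer()
    pc, ops = 0x400, (7, 5)
    if rb.lookup(pc, ops) is None:   # first occurrence: execute and remember
        rb.insert(pc, ops, 7 + 5)
    print(rb.lookup(pc, ops))        # later occurrence reuses the result: 12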
Abstract:
The solar radiation flux at the earth's surface has gone through decadal changes, with decreasing and increasing trends over the globe. These phenomena, known as dimming and brightening respectively, have attracted scientific interest in relation to changes in the radiative balance and climate. Despite the interest in the solar dimming/brightening phenomenon in various parts of the world, south Asia has not attracted great scientific attention so far. The present work uses the net downward shortwave radiation (NDSWR) values derived from satellites (Modern Era Retrospective-analysis for Research and Applications, MERRA 2D) in order to examine the multi-decadal variations in the incoming solar radiation over south Asia for the period 1979-2004. The analysis shows that solar dimming continues over south Asia, with a trend of -0.54 W m⁻² yr⁻¹. Assuming clear skies, an average decrease of -0.05 W m⁻² yr⁻¹ in NDSWR was observed, which is attributed to increased aerosol emissions over the region. There is evidence that the increase in cloud optical depth plays the major role in the solar dimming over the area. The cloud optical depth (MERRA retrievals) increased by 10.7% during the study period, with the largest increase detected for the high-level (atmospheric pressure P < 400 hPa) clouds (31.2%). Nevertheless, the decrease in solar radiation and the roles of aerosols and clouds exhibit large monthly and seasonal variations, directly affected by the local monsoon system and the anthropogenic and natural aerosol emissions. All these aspects are examined in detail with the aim of shedding light on the solar dimming phenomenon over a densely populated area.