999 results for Automated circuit synthesis


Relevance: 100.00%

Abstract:

A genetic algorithm used to design radio-frequency binary-weighted differential switched capacitor arrays (RFDSCAs) is presented in this article. The algorithm provides a set of circuits all having the same maximum performance. This article also describes the design, implementation, and measurement results of a 0.25 µm BiCMOS 3-bit RFDSCA. The experimental results show that the circuit presents the expected performance up to 40 GHz. The similarity between the evolutionary solutions, circuit simulations, and measured results indicates that the genetic synthesis method is a very useful tool for designing optimum performance RFDSCAs.
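The binary-weighted structure such an array optimizes can be made concrete with a small sketch: each control bit switches in a capacitor whose weight doubles from bit to bit, so an n-bit word selects one of 2^n evenly spaced capacitance values. The 50 fF unit capacitance below is a hypothetical illustrative value, not taken from the article.

```python
def array_capacitance(bits, c_unit=50e-15):
    """Total capacitance selected by a binary control word (LSB first).
    c_unit = 50 fF is a hypothetical unit value for illustration."""
    return sum(b * (2 ** i) * c_unit for i, b in enumerate(bits))

# A 3-bit array spans 2**3 = 8 evenly spaced capacitance states.
states = [array_capacitance([(w >> i) & 1 for i in range(3)]) for w in range(8)]
```

Each step between adjacent states is one unit capacitance, which is what makes the array "binary-weighted".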

Relevance: 100.00%

Abstract:

This paper presents a technique for performing analog design synthesis at circuit level, providing feedback to the designer through exploration of the Pareto frontier. A modified simulated annealing algorithm, able to perform crossover with past anchor points when a local minimum is found, is used as the optimization algorithm in the initial synthesis procedure. After all specifications are met, the algorithm searches for the extreme points of the Pareto frontier in order to obtain a non-exhaustive exploration of the Pareto front. Finally, multi-objective particle swarm optimization is used to spread the results and to find a more accurate frontier. Piecewise-linear functions are used as single-objective cost functions to produce a smooth and equal convergence of all measurements to the desired specifications during the composition of the aggregate objective function. To verify the presented technique, two circuits were designed: a Miller amplifier with 96 dB voltage gain, 15.48 MHz unity-gain frequency, and a slew rate of 19.2 V/µs with a current supply of 385.15 µA, and a complementary folded cascode with 104.25 dB voltage gain, 18.15 MHz unity-gain frequency, and a slew rate of 13.370 MV/µs. These circuits were synthesized using a 0.35 µm technology. The results show that the method provides a fast route to good solutions using the modified SA, with further exploration of the Pareto front through its connection to the particle swarm optimization algorithm.
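The piecewise-linear single-objective costs described above can be sketched as follows. This is an illustrative shape only, assuming a cost that is zero once a specification is met and grows linearly with the normalized distance to it otherwise; the paper's exact functions are not reproduced here, and the example specs are invented.

```python
def pw_linear_cost(value, spec, maximize=True):
    """Piecewise-linear cost: zero once the specification is met,
    linear in the normalized distance to the spec otherwise."""
    if maximize:
        return 0.0 if value >= spec else (spec - value) / abs(spec)
    return 0.0 if value <= spec else (value - spec) / abs(spec)

def aggregate_cost(measurements, specs):
    """Equal-weight sum, so all measurements converge at a similar pace."""
    return sum(pw_linear_cost(v, s, mx) for v, (s, mx) in zip(measurements, specs))

# Hypothetical example: voltage gain (maximize, spec 96 dB) and
# supply current (minimize, spec 400 µA) for an amplifier sizing.
cost = aggregate_cost([90.0, 500.0], [(96.0, True), (400.0, False)])
```

Normalizing by the spec keeps a 6 dB gain shortfall and a 100 µA current overshoot on comparable scales inside the aggregate objective.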

Relevance: 100.00%

Abstract:

Bibliography: p. 45.

Relevance: 90.00%

Abstract:

We develop an automated spectral synthesis technique for the estimation of metallicities ([Fe/H]) and carbon abundances ([C/Fe]) for metal-poor stars, including carbon-enhanced metal-poor stars, for which other methods may prove insufficient. This technique, autoMOOG, is designed to operate on relatively strong features visible even in low- to medium-resolution spectra, yielding results comparable to much more telescope-intensive high-resolution studies. We validate this method by comparison with 913 stars that have existing high-resolution and low- to medium-resolution spectra and that cover a wide range of stellar parameters. We find that at low metallicities ([Fe/H] ≲ -2.0) we successfully recover both the metallicity and carbon abundance, where possible, with an accuracy of ~0.20 dex. At higher metallicities, due to issues of continuum placement in the spectral normalization performed prior to running autoMOOG, a general underestimate of the overall metallicity of a star is seen, although the carbon abundance is still successfully recovered. As a result, this method is only recommended for use on samples of stars of known, sufficiently low metallicity. For these low-metallicity stars, however, autoMOOG performs much more consistently and quickly than similar existing techniques, which should allow for analyses of large samples of metal-poor stars in the near future. Steps to improve and correct the continuum placement difficulties are being pursued.

Relevance: 90.00%

Abstract:

Peptidic Nucleic Acids (PNAs) are achiral, uncharged nucleic acid mimetics, with a novel backbone composed of N-(2-aminoethyl)glycine units attached to the DNA bases through carboxymethylene linkers. The aim of this work was to synthesise PNA building block intermediates containing a series of substituted purine bases for subsequent use in automated PNA synthesis, with a view to extending and improving upon the molecular recognition properties of PNAs. Four purine bases: 2,6-diaminopurine (D), isoguanine (isoG), xanthine (X) and hypoxanthine (H) were identified for incorporation into PNAs targeted to DNA, with the promise of increased hybrid stability over extended pH ranges, together with improvements over the use of adenine (A) in duplex formation and cytosine (C) in triplex formation. A reliable, high-yielding synthesis of the PNA backbone component N-(2-butyloxycarbonyl-aminoethyl)glycinate ethyl ester was established. The precursor N-(2-butyloxycarbonyl)aminoacetonitrile was crystallised and analysed by X-ray crystallography for the first time. An excellent refinement (R = 0.0276) was attained for this structure, allowing comparisons with known analogues. Although chemical synthesis of pure, fully-characterised PNA monomers was not achieved, chemical synthesis of PNA building blocks composed of diaminopurine, xanthine and hypoxanthine was completely successful. In parallel, a second objective of this work was to characterise and evaluate novel crystalline intermediates, which formed a new series of substituted purine bases generated by attaching alkyl substituents at the N9 or N7 sites of purine bases. Crystallographic analysis was undertaken to probe the regiochemistry of isomers and to reveal interesting structural features of the new series of similarly-substituted purine bases.
The attainment of the versatile synthetic intermediate 2,6-dichloro-9-(carboxymethyl)purine ethyl ester, and its homologous regioisomers 6-chloro-9-(carboxymethyl)purine ethyl ester and 6-chloro-7-(carboxymethyl)purine ethyl ester, necessitated the use of X-ray crystallographic analysis for unambiguous structural assignment. Successful refinement of the disordered 2,6-diamino-9-(carboxymethyl)purine ethyl ester allowed comparison with the reported structure of the adenine analogue, ethyl adenin-9-yl acetate. Replacement of the chloro moieties with amino, azido and methoxy groups expanded the internal angles at their point of attachment to the purine ring. Crystallographic analysis played a pivotal role in confirming the identity of the peralkylated hypoxanthine derivative diethyl 6-oxo-6,7-dihydro-3H-purine-3,7-diacetate, where two ethyl side chains were found to attach at N3 and N7.

Relevance: 80.00%

Abstract:

Fractional Calculus (FC) goes back to the beginning of the theory of differential calculus. Nevertheless, the application of FC emerged only in the last two decades. The advantages of this mathematical tool in the modelling and control of many dynamical systems have been recognized. With these ideas in mind, this paper discusses a FC perspective in the study of the dynamics and control of several systems. The paper investigates the use of FC in the fields of controller tuning, legged robots, electrical systems and digital circuit synthesis.

Relevance: 80.00%

Abstract:

Fractional Calculus (FC) goes back to the beginning of the theory of differential calculus. Nevertheless, the application of FC emerged only in the last two decades, due to progress in the area of chaos, which revealed subtle relationships with FC concepts. In the field of dynamical systems theory some work has been carried out, but the proposed models and algorithms are still at a preliminary stage of establishment. With these ideas in mind, the paper discusses FC in the study of system dynamics and control. In this perspective, the paper investigates the use of FC in the fields of controller tuning, legged robots, redundant robots, heat diffusion, and digital circuit synthesis.
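A concrete handle on the FC toolkit is the Grünwald-Letnikov definition, which approximates a derivative of arbitrary (possibly fractional) order by a weighted sum over the function's history. The sketch below is the standard textbook formula, shown only to illustrate the mathematical tool the paper builds on; it is not code from the paper.

```python
import math

def gl_fracderiv(f, alpha, t, h=1e-3):
    """Grünwald-Letnikov approximation of the order-alpha derivative of f
    at t, using step h. The weight c_k = (-1)**k * binom(alpha, k) is
    built recursively: c_k = c_{k-1} * (1 - (alpha + 1) / k)."""
    n = int(round(t / h))
    acc, c = 0.0, 1.0
    for k in range(n + 1):
        acc += c * f(t - k * h)
        c *= 1.0 - (alpha + 1.0) / (k + 1)
    return acc / h ** alpha
```

For α = 1 the weights collapse to the ordinary backward difference; for the half-derivative of a constant the sum reproduces the known 1/√(πt) behavior, which is a handy sanity check.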

Relevance: 80.00%

Abstract:

Swarm Intelligence (SI) is the property of a system whereby the collective behaviors of (unsophisticated) agents interacting locally with their environment cause coherent functional global patterns to emerge. Particle swarm optimization (PSO) is a form of SI and a population-based search algorithm that is initialized with a population of random solutions, called particles. These particles fly through hyperspace and have two essential reasoning capabilities: memory of their own best position and knowledge of the swarm's best position. In a PSO scheme, each particle flies through the search space with a velocity that is adjusted dynamically according to its historical behavior; the particles therefore tend to fly towards the best search areas over the course of the search. This work proposes a PSO-based algorithm for logic circuit synthesis. The results show the statistical characteristics of this algorithm with respect to the number of generations required to reach a solution. A comparison with two other evolutionary algorithms, namely genetic and memetic algorithms, is also presented.
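The velocity update described above — inertia plus attraction to the particle's own best and to the swarm's best — can be sketched in a minimal form. This is generic textbook PSO with illustrative parameter values (w, c1, c2, swarm size), not the implementation used in this work.

```python
import random

def pso_minimize(f, dim, n_particles=20, iters=100, lo=-5.0, hi=5.0,
                 w=0.7, c1=1.5, c2=1.5, seed=1):
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]              # memory of each particle's own best
    pval = [f(p) for p in pos]
    g_i = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g_i][:], pval[g_i]   # knowledge of the swarm's best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # inertia + pull toward own best + pull toward swarm best
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = f(pos[i])
            if v < pval[i]:
                pval[i], pbest[i] = v, pos[i][:]
                if v < gval:
                    gval, gbest = v, pos[i][:]
    return gbest, gval

# Toy usage: minimize the 2-D sphere function x^2 + y^2.
best, val = pso_minimize(lambda x: sum(t * t for t in x), dim=2)
```

For circuit synthesis the continuous positions would be mapped to discrete gate choices and the objective to a circuit fitness; the toy sphere function above only demonstrates the update rule itself.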

Relevance: 80.00%

Abstract:

Advancements in high-throughput technologies to measure increasingly complex biological phenomena at the genomic level are rapidly changing the face of biological research, from the single-gene, single-protein experimental approach to studying the behavior of a gene in the context of the entire genome (and proteome). This shift in research methodologies has resulted in a new field of network biology that deals with modeling cellular behavior in terms of network structures such as signaling pathways and gene regulatory networks. In these networks, different biological entities such as genes, proteins, and metabolites interact with each other, giving rise to a dynamical system. Even though there exists a mature field of dynamical systems theory to model such network structures, some technical challenges are unique to biology, such as the inability to measure precise kinetic information on gene-gene or gene-protein interactions and the need to model increasingly large networks comprising thousands of nodes. These challenges have renewed interest in developing new computational techniques for modeling complex biological systems. This chapter presents a modeling framework based on Boolean algebra and finite-state machines that is reminiscent of the approach used for digital circuit synthesis and simulation in the field of very-large-scale integration (VLSI). The proposed formalism enables a common mathematical framework for developing computational techniques to model different aspects of regulatory networks, such as steady-state behavior, stochasticity, and gene perturbation experiments.
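As a toy illustration of this Boolean/finite-state view, consider a hypothetical three-gene network (invented here, not taken from the chapter): each gene's next state is a Boolean function of the current state, exactly like a logic gate, and exhaustively stepping all 2^n states exposes the steady-state behavior (attractors).

```python
from itertools import product

def step(state):
    """One synchronous update of a hypothetical three-gene Boolean network."""
    a, b, c = state
    return (b and not c,   # gene A: activated by B, repressed by C
            a,             # gene B: follows A
            a or b)        # gene C: activated by A or B

def attractors():
    """Treat the network as a finite-state machine: walk all 2**3 states
    and collect the cycles (steady states) each trajectory falls into."""
    found = set()
    for s0 in product([False, True], repeat=3):
        seen, s = [], s0
        while s not in seen:
            seen.append(s)
            s = step(s)
        found.add(tuple(sorted(seen[seen.index(s):])))
    return found
```

For this particular rule set every start state funnels into the all-genes-off fixed point; richer networks exhibit multiple fixed points and limit cycles, which is the steady-state behavior the formalism is meant to capture.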

Relevance: 80.00%

Abstract:

The evolution of integrated circuit technologies demands the development of new CAD tools. The traditional development of digital circuits at the physical level is based on libraries of cells. These cell libraries offer a certain predictability of the electrical behavior of the design, due to the previous characterization of the cells. Besides, different versions of each cell are required so that delay and power consumption characteristics are taken into account, increasing the number of cells in a library. Automatic full-custom layout generation is an increasingly important alternative to cell-based generation approaches. This strategy implements transistors and connections according to patterns defined by algorithms, so it is possible to implement any logic function, avoiding the limitations of a library of cells. Tools for analysis and estimation must offer predictability in automatic full-custom layouts. These tools must be able to work with layout estimates and to generate information related to delay, power consumption and area occupation. This work includes research on new methods of physical synthesis and the implementation of an automatic layout generator in which the cells are generated at the moment of layout synthesis. The research investigates different strategies for the disposition of elements (transistors, contacts and connections) in a layout and their effects on area occupation and circuit delay. The presented layout strategy applies delay optimization through integration with a gate sizing technique, in such a way that the folding method allows individual discrete sizing of transistors. The main characteristics of the proposed strategy are: power supply lines between rows, over-the-layout routing (channel routing is not used), circuit routing performed before layout generation, and layout generation targeting delay reduction through application of the sizing technique.

The possibility of implementing any logic function, without restrictions imposed by a library of cells, allows circuit synthesis with optimization of the number of transistors. This reduction in the number of transistors decreases the delay and power consumption, mainly the static power consumption in submicrometer circuits. Comparisons between the proposed strategy and other well-known methods are presented, validating the proposed method.
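The folding step the sizing scheme relies on can be sketched in miniature (hypothetical numbers, not from the paper): a transistor wider than the available row height is split into parallel fingers, and discrete sizing then proceeds in whole-finger steps.

```python
import math

def fold_transistor(width, max_finger):
    """Split a transistor of the given total width into parallel fingers
    no wider than max_finger (e.g. limited by the cell row height)."""
    n_fingers = math.ceil(width / max_finger)
    return n_fingers, width / n_fingers

# Hypothetical example: a 10 µm device in a row allowing 3 µm of diffusion
# folds into 4 fingers of 2.5 µm each.
fingers, w_each = fold_transistor(10.0, 3.0)
```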

Relevance: 80.00%

Abstract:

Molecular beacons (MBs) are stem-loop DNA probes used for identifying and reporting the presence and localization of nucleic acid targets in vitro and in vivo via target-dependent dequenching of fluorescence. A drawback of conventional MB design lies in the stem sequence, which is necessary to keep the MBs in a closed conformation in the absence of a target but can participate in target binding in the open (target-on) conformation, giving rise to the possibility of false-positive results. In order to circumvent these problems, we designed MBs in which the stem was replaced by an orthogonal DNA analog that does not cross-pair with natural nucleic acids. Homo-DNA seemed especially suited, as it forms stable adenine-adenine base pairs of the reversed Hoogsteen type, potentially reducing the number of necessary building blocks for stem design to one. We found that MBs in which the stem part was replaced by homo-adenylate residues can easily be synthesized using conventional automated DNA synthesis. Like conventional MBs, such hybrid MBs show cooperative hairpin-to-coil transitions in the absence of a DNA target, indicating stable homo-DNA base pair formation in the closed conformation. Furthermore, our results show that the homo-adenylate stem is excluded from DNA target binding, which leads to a significant increase in target binding selectivity.

Relevance: 80.00%

Abstract:

10.1002/hlca.19900730309.abs

In three steps, 2-deoxy-D-ribose has been converted into a phosphoramidite building block bearing a (t-Bu)Me2Si protecting group at the OH function of the anomeric centre of the furanose ring. This building block was subsequently incorporated into DNA oligomers of various base sequences using the standard phosphoramidite protocol for automated DNA synthesis. The resulting silyl-oligomers were purified by HPLC and selectively desilylated to the corresponding free apurinic DNA sequences. The hexamer d(A-A-A-A-X-A) (X representing the apurinic site) prepared in this way was characterized by 1H- and 31P-NMR spectroscopy. The other sequences, as well as their fragments formed upon treatment with alkali, were analyzed by polyacrylamide gel electrophoresis.

Relevance: 80.00%

Abstract:

Graphics Processing Units (GPUs) have become a booster for the microelectronics industry. However, due to intellectual property issues, there is a serious lack of information on the implementation details of the hardware architecture behind GPUs. For instance, the way texture is handled and decompressed in a GPU to reduce bandwidth usage has never been dealt with in depth from a hardware point of view. This work addresses a comparative study of the hardware implementation of different texture decompression algorithms for both conventional (PCs and video game consoles) and mobile platforms. Circuit synthesis is performed targeting both a reconfigurable hardware platform and a 90 nm standard cell library. Area-delay trade-offs have been extensively analyzed, which allows us to compare the complexity of the decompressors and thus determine the suitability of each algorithm for systems with limited hardware resources.

Relevance: 80.00%

Abstract:

We present new methodologies to generate rational function approximations of broadband electromagnetic responses of linear and passive networks of high-speed interconnects, and to construct SPICE-compatible, equivalent circuit representations of the generated rational functions. These new methodologies are driven by the desire to improve the computational efficiency of the rational function fitting process, and to ensure enhanced accuracy of the generated rational function interpolation and its equivalent circuit representation. Toward this goal, we propose two new methodologies for rational function approximation of high-speed interconnect network responses. The first one relies on the use of both time-domain and frequency-domain data, obtained either through measurement or numerical simulation, to generate a rational function representation that extrapolates the input, early-time transient response data to late-time response while at the same time providing a means to both interpolate and extrapolate the used frequency-domain data. The aforementioned hybrid methodology can be considered as a generalization of the frequency-domain rational function fitting utilizing frequency-domain response data only, and the time-domain rational function fitting utilizing transient response data only. In this context, a guideline is proposed for estimating the order of the rational function approximation from transient data. The availability of such an estimate expedites the time-domain rational function fitting process. The second approach relies on the extraction of the delay associated with causal electromagnetic responses of interconnect systems to provide for a more stable rational function process utilizing a lower-order rational function interpolation. A distinctive feature of the proposed methodology is its utilization of scattering parameters. For both methodologies, the approach of fitting the electromagnetic network matrix one element at a time is applied. 
It is shown that, with regard to the computational cost of the rational function fitting process, such an element-by-element rational function fitting is more advantageous than full matrix fitting for systems with a large number of ports. Despite the disadvantage that different sets of poles are used in the rational functions of the different elements of the network matrix, such an approach provides for improved accuracy in the fitting of network matrices of systems characterized by both strongly coupled and weakly coupled ports. Finally, in order to provide a means for enforcing passivity in the adopted element-by-element rational function fitting approach, the methodology for passivity enforcement via quadratic programming is modified appropriately for this purpose and demonstrated in the context of element-by-element rational function fitting of the admittance matrix of an electromagnetic multiport.
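The pole-residue form underlying such fits has a simple linear core: once a set of poles is fixed, the residues of each matrix element follow from frequency samples by linear least squares. The sketch below illustrates only that sub-step on synthetic data (assumed poles, invented residues); practical vector-fitting schemes also relocate the poles iteratively, which is omitted here.

```python
import numpy as np

def fit_residues(freqs, H, poles):
    """Least-squares residues r_k of H(s) = sum_k r_k / (s - p_k),
    given frequency samples H at the given frequencies (Hz) and
    an assumed, fixed set of poles."""
    s = 2j * np.pi * np.asarray(freqs)
    A = 1.0 / (s[:, None] - poles[None, :])   # A[i, k] = 1 / (s_i - p_k)
    r, *_ = np.linalg.lstsq(A, H, rcond=None)
    return r

# Synthetic check: build H from known residues, then recover them.
poles = np.array([-1e3 + 2e4j, -1e3 - 2e4j, -5e3 + 0j])
true_r = np.array([2.0 + 1.0j, 2.0 - 1.0j, 4.0 + 0j])
freqs = np.linspace(1e2, 1e4, 200)
H = (1.0 / (2j * np.pi * freqs[:, None] - poles[None, :])) @ true_r
r_fit = fit_residues(freqs, H, poles)
```

Fitting the network matrix one element at a time, as in the text, amounts to one such least-squares solve per matrix entry, each free to use its own pole set.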

Relevance: 40.00%

Abstract:

The paper presents an RFDSCA automated synthesis procedure. This algorithm derives several RFDSCA circuits from the top-level system specifications, all with the same maximum performance. The genetic synthesis tool optimizes a fitness function proportional to the RFDSCA quality factor and uses the ε-concept and a maximin sorting scheme to achieve a set of solutions well distributed along the non-dominated front. To confirm the results of the algorithm, three RFDSCAs were simulated in SpectreRF and one of them was implemented and tested. The design used a 0.25 µm BiCMOS process. All the results (synthesized, simulated and measured) are very close, which indicates that the genetic synthesis method is a very useful tool for designing optimum performance RFDSCAs.
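The genetic machinery behind such a synthesis tool can be sketched generically. This is a textbook binary GA for illustration only: the paper's tool, its quality-factor fitness, and its ε-concept/maximin sorting are not reproduced, and the "one-max" objective below is a toy stand-in for evaluating RFDSCA performance.

```python
import random

def tournament(pop, fitness, rng):
    """Binary tournament selection: the fitter of two random individuals."""
    a, b = rng.sample(pop, 2)
    return a if fitness(a) >= fitness(b) else b

def ga_maximize(fitness, n_bits=16, pop_size=30, gens=60, p_mut=0.05, seed=3):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(gens):
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(pop, fitness, rng), tournament(pop, fitness, rng)
            cut = rng.randrange(1, n_bits)                       # one-point crossover
            child = [b ^ (rng.random() < p_mut)                  # bit-flip mutation
                     for b in p1[:cut] + p2[cut:]]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# Toy objective ("one-max": count of set bits) standing in for a
# circuit quality factor; a real tool would decode the bit string
# into component values and simulate the circuit instead.
best = ga_maximize(sum)
```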