984 results for 100603 Logic Design


Relevance:

30.00%

Publisher:

Abstract:

There has recently been an emphasis within literacy studies on both the spatial dimensions of social practices (Leander & Sheehy, 2004) and the importance of incorporating design and multiple modes of meaning-making into contemporary understandings of literacy (Cope & Kalantzis, 2000; New London Group, 1996). Kress (2003) in particular has outlined the potential implications of the cultural shift from the dominance of writing, based on a logic of time and sequence in time, to the dominance of the mode of the image, based on a logic of space. However, the widespread re-design of curriculum and pedagogy by classroom teachers to allow students to capitalise on the various affordances of different modes of meaning-making – including the spatial – remains in an emergent stage. We report on a project in which university researchers’ expertise in architecture, literacy and communications enabled two teachers in one school to expand the forms of literacy that primary school children engaged in. Starting from the school community’s concerns about an urban renewal project in their neighbourhood, we worked together to develop a curriculum of spatial literacies with real-world goals and outcomes.

Relevance:

30.00%

Publisher:

Abstract:

This paper is based on an Australian Learning & Teaching Council (ALTC) funded evaluation, conducted in 13 universities across Australia and New Zealand, of the use of Engineers Without Borders (EWB) projects in first-year engineering courses. All of the partner institutions have implemented this innovation differently, and comparison of these implementations affords us the opportunity to assemble "a body of carefully gathered data that provides evidence of which approaches work for which students in which learning environments". This study used a mixed-methods data collection approach and a realist analysis. Data were collected by program logic analysis with course co-ordinators, observation of classes, focus groups with students, an exit survey of students and interviews with staff, as well as scrutiny of relevant course and curriculum documents. Course designers and co-ordinators gave us a range of reasons for using the projects, most of which alluded to their presumed capacity to deliver experience in, and learning of, higher-order thinking skills in areas such as sustainability, ethics, teamwork and communication. For some students, however, the nature of the projects decreased their interest in issues such as ethical development, sustainability and how to work in teams. We also found that the projects provoked different responses from students depending on the nature of the courses in which they were embedded (general introduction, design, communication, or problem-solving courses) and their mode of delivery (lecture, workshop or online).

Relevance:

30.00%

Publisher:

Abstract:

Based on trial interchanges, this paper develops three algorithms for the solution of the placement problem of logic modules in a circuit. A significant decrease in the computation time of such placement algorithms can be achieved by restricting the trial interchanges to only a subset of all the modules in a circuit. The three algorithms are simulated on a DEC 1090 system in Pascal, and their performance in terms of total wirelength and computation time is compared with the results obtained by Steinberg for the 34-module backboard wiring problem. Performance analysis of the first two algorithms reveals that algorithms based on pairwise trial interchanges (2-interchanges) achieve a desired placement faster than algorithms based on trial N-interchanges. The first two algorithms do not perform better than Steinberg's algorithm, whereas the third algorithm, based on trial pairwise interchanges among unconnected pairs of modules (UPM) and connected pairs of modules (CPM), performs better than Steinberg's algorithm in terms of both total wirelength (TWL) and computation time.
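
To make the idea of trial interchanges concrete, the following is a minimal Python sketch of placement by pairwise trial interchange: two modules are swapped, and the swap is kept only if the total wirelength decreases. The slot grid, netlist format and half-perimeter cost model are illustrative assumptions, not the paper's three algorithms.

```python
# A minimal sketch (not the paper's algorithms) of placement by pairwise trial
# interchange: swap two modules, keep the swap only if total wirelength drops.
from itertools import combinations

def wirelength(placement, nets, slots):
    """Total half-perimeter wirelength over all nets.
    placement: module -> slot index; slots: list of (x, y); nets: lists of modules."""
    total = 0
    for net in nets:
        xs = [slots[placement[m]][0] for m in net]
        ys = [slots[placement[m]][1] for m in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

def pairwise_interchange(placement, nets, slots, max_passes=10):
    """Repeat passes of trial pairwise interchanges until no swap helps."""
    best = wirelength(placement, nets, slots)
    for _ in range(max_passes):
        improved = False
        for a, b in combinations(list(placement), 2):
            placement[a], placement[b] = placement[b], placement[a]  # trial swap
            cost = wirelength(placement, nets, slots)
            if cost < best:
                best, improved = cost, True                          # accept the swap
            else:
                placement[a], placement[b] = placement[b], placement[a]  # undo
        if not improved:
            break
    return placement, best

# Example: 4 modules on a 2x2 grid, two 2-pin nets.
slots = [(0, 0), (1, 0), (0, 1), (1, 1)]
place = {"m1": 0, "m2": 3, "m3": 1, "m4": 2}
nets = [["m1", "m2"], ["m3", "m4"]]
print(pairwise_interchange(place, nets, slots))
```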

Relevance:

30.00%

Publisher:

Abstract:

This study addresses three important issues in tree bucking optimization in the context of cut-to-length harvesting. (1) Would the fit between the log demand and log output distributions be better if the price and/or demand matrices controlling the bucking decisions on modern cut-to-length harvesters were adjusted to the unique conditions of each individual stand? (2) In what ways can we generate stand- and product-specific price and demand matrices? (3) What alternatives do we have for measuring the fit between the log demand and log output distributions, and what would be an ideal goodness-of-fit measure?

Three iterative search systems were developed for seeking stand-specific price and demand matrix sets: (1) a fuzzy logic control system for calibrating the price matrix of one log product for one stand at a time (the stand-level one-product approach); (2) a genetic algorithm system for adjusting the price matrices of one log product in parallel for several stands (the forest-level one-product approach); and (3) a genetic algorithm system for dividing the overall demand matrix of each of several log products into stand-specific sub-demands simultaneously for several stands and products (the forest-level multi-product approach).

The stem material used for testing the performance of the stand-specific price and demand matrices against that of the reference matrices comprised 9 155 Norway spruce (Picea abies (L.) Karst.) sawlog stems gathered by harvesters from 15 mature spruce-dominated stands in southern Finland. The reference price and demand matrices were either direct copies or slightly modified versions of those used by two Finnish sawmilling companies. Two types of stand-specific bucking matrices were compiled for each log product: one from the harvester-collected stem profiles and the other from the pre-harvest inventory data.

Four goodness-of-fit measures were analyzed for their appropriateness in determining the similarity between the log demand and log output distributions: (1) the apportionment degree (index), (2) the chi-square statistic, (3) the Laspeyres quantity index, and (4) the price-weighted apportionment degree.

The study confirmed that any improvement in the fit between the log demand and log output distributions can only be realized at the expense of the log volumes produced. Stand-level pre-control of price matrices was found to be advantageous, provided the control is done with perfect stem data. Forest-level pre-control of price matrices resulted in no improvement in the cumulative apportionment degree. Cutting stands under the control of stand-specific demand matrices yielded a better total fit between the demand and output matrices at the forest level than was obtained by cutting each stand with non-stand-specific reference matrices. The theoretical and experimental analyses suggest that none of the three alternative goodness-of-fit measures clearly outperforms the traditional apportionment degree measure.

Keywords: harvesting, tree bucking optimization, simulation, fuzzy control, genetic algorithms, goodness-of-fit
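
As an illustration of the goodness-of-fit measures discussed, here is a minimal sketch assuming the apportionment degree is the summed overlap (element-wise minimum) of the demand and output matrices after each is normalised to proportions; the matrix layout and numbers are invented for the example.

```python
# A minimal sketch of comparing a log demand matrix with a log output matrix,
# assuming the apportionment degree is the summed overlap (minimum) of the two
# distributions after normalising each to proportions. The 3x3 length-by-diameter
# layout and the example volumes are illustrative only.
import numpy as np

def apportionment_degree(demand, output):
    """Overlap of two bucking matrices: 1.0 = perfect fit, 0.0 = no overlap."""
    d = demand / demand.sum()          # demand shares per length-diameter class
    o = output / output.sum()          # output shares per length-diameter class
    return np.minimum(d, o).sum()

def chi_square(demand, output):
    """Chi-square-type distance between output and demand shares."""
    d = demand / demand.sum()
    o = output / output.sum()
    mask = d > 0                       # skip classes with no demand
    return (((o[mask] - d[mask]) ** 2) / d[mask]).sum()

demand = np.array([[10, 20, 5],
                   [15, 25, 10],
                   [ 5, 10, 0]], dtype=float)   # target volumes per class
output = np.array([[ 8, 22, 6],
                   [12, 20, 14],
                   [ 9,  9, 0]], dtype=float)   # volumes actually bucked

print(f"apportionment degree: {apportionment_degree(demand, output):.3f}")
print(f"chi-square distance:  {chi_square(demand, output):.3f}")
```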

Relevance:

30.00%

Publisher:

Abstract:

Digital positioning systems often require a down counter for their operation. Because such applications call for particular logic sequences and control of individual terminals, the design of down counters tailored to a particular use is essential. In this paper, the design procedure and logic diagram for a synchronous decade down counter with parallel carry are presented.
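
A minimal behavioural sketch of such a counter is given below: it models a synchronous decade down counter that counts 9 down to 0, wraps back to 9, and asserts a borrow output at 0 so that stages can be cascaded. The signal names and BCD state ordering are assumptions for illustration, not the paper's logic diagram.

```python
# A minimal behavioural sketch (not the paper's logic diagram) of a synchronous
# decade down counter: four flip-flops counting 9 -> 0 and wrapping back to 9,
# with a borrow/carry output asserted at 0 so counters can be cascaded.

def decade_down_counter(cycles, start=9):
    """Yield (Q3, Q2, Q1, Q0, borrow) for each clock edge."""
    state = start
    for _ in range(cycles):
        bits = [(state >> i) & 1 for i in (3, 2, 1, 0)]   # Q3..Q0 (BCD)
        borrow = 1 if state == 0 else 0                   # parallel carry to next stage
        yield (*bits, borrow)
        state = 9 if state == 0 else state - 1            # synchronous down count

for q3, q2, q1, q0, bw in decade_down_counter(12):
    print(f"Q3Q2Q1Q0 = {q3}{q2}{q1}{q0}  borrow = {bw}")
```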

Relevance:

30.00%

Publisher:

Abstract:

The recent trend towards minimizing the interconnections in large-scale integration (LSI) circuits has led to intensive investigation into the development of ternary circuits and the improvement of their design. The ternary multiplexer is a convenient and useful logic module which can be used as a basic building block in the design of a ternary system. This paper discusses a systematic procedure for the simplification and realization of ternary functions using ternary multiplexers as building blocks. Both single-level and multilevel multiplexing techniques are considered. The importance of the design procedure is highlighted by considering two specific applications, namely the development of a ternary adder/subtractor and a TCD-to-ternary converter.
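
To illustrate the multiplexer-as-building-block idea, the sketch below realises a two-variable ternary function with a two-level tree of 3-to-1 ternary multiplexers: the first level selects on one variable and the second level supplies the residue functions of the other. The example function (ternary MIN) is invented for illustration and is not taken from the paper.

```python
# A minimal sketch of using a 3-to-1 ternary multiplexer as a building block:
# f(x, y) is realised by one mux selecting on x, whose data inputs are the
# residue functions f(0, y), f(1, y), f(2, y), each realised by a second-level
# mux selecting on y with constant inputs.

def tmux(select, d0, d1, d2):
    """3-to-1 ternary multiplexer: route one of three data lines (values 0/1/2)."""
    return (d0, d1, d2)[select]

# Truth table of the target function, here f(x, y) = min(x, y).
f = {(x, y): min(x, y) for x in range(3) for y in range(3)}

def realise(x, y):
    # Second level: each residue function f(k, y) is a mux on y with constants.
    r0 = tmux(y, f[0, 0], f[0, 1], f[0, 2])
    r1 = tmux(y, f[1, 0], f[1, 1], f[1, 2])
    r2 = tmux(y, f[2, 0], f[2, 1], f[2, 2])
    # First level: select among the residues with x.
    return tmux(x, r0, r1, r2)

assert all(realise(x, y) == f[x, y] for x in range(3) for y in range(3))
print("ternary multiplexer realisation matches the truth table")
```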

Relevance:

30.00%

Publisher:

Abstract:

A compact model for the noise margin (NM) of single-electron transistor (SET) logic is developed as a function of the device capacitances and the background charge (ζ). The noise margin is then used as a metric to evaluate the robustness of SET logic against background charge, temperature, and variation of the SET gate and tunnel junction capacitances (CG and CT). It is shown that choosing α = CT/CG = 1/3 maximizes the NM. An estimate of the maximum tolerable ζ is shown to be equal to ±0.03 e. Finally, the effect of mismatch in device parameters on the NM is studied through exhaustive simulations, which indicate that α ∈ [0.3, 0.4] provides maximum robustness. It is also observed that mismatch can have a significant impact on static power dissipation.

Relevance:

30.00%

Publisher:

Abstract:

In this paper, the static noise margin for SET (single-electron transistor) logic is defined, and compact models for the noise margin are developed using the MIB (Mahapatra-Ionescu-Banerjee) model. The variation of the noise margin with temperature and background charge is also studied. A chain of SET inverters is simulated to validate the definitions of the various logic levels (VIH, VOH, etc.) and the noise margin. Finally, the noise immunity of SET logic is compared with that of current CMOS logic.
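
As background to the logic-level definitions mentioned above, the following sketch shows one common way of extracting VIL, VIH and the static noise margins from a sampled inverter voltage transfer characteristic using the unity-gain (slope = -1) points. The logistic-shaped VTC used here is a stand-in, not the MIB SET model.

```python
# A minimal sketch of extracting logic levels and static noise margins from an
# inverter voltage transfer characteristic (VTC): VIL and VIH are taken at the
# unity-gain points (slope = -1), and NM_L = VIL - VOL, NM_H = VOH - VIH.
import numpy as np

def noise_margins(vin, vout):
    """Return (VOL, VOH, VIL, VIH, NML, NMH) from a sampled VTC."""
    gain = np.gradient(vout, vin)                        # dVout/dVin along the curve
    unity = np.where(np.diff(np.sign(gain + 1.0)))[0]    # crossings of slope = -1
    vil, vih = vin[unity[0]], vin[unity[-1]]
    vol, voh = vout.min(), vout.max()
    return vol, voh, vil, vih, vil - vol, voh - vih

# Stand-in inverter VTC: smooth high-to-low transition around Vin = 0.5 V.
vin = np.linspace(0.0, 1.0, 1001)
vout = 1.0 / (1.0 + np.exp(20 * (vin - 0.5)))

vol, voh, vil, vih, nml, nmh = noise_margins(vin, vout)
print(f"VIL={vil:.3f}  VIH={vih:.3f}  NML={nml:.3f}  NMH={nmh:.3f}")
```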

Relevance:

30.00%

Publisher:

Abstract:

The problem of determining a minimal number of control inputs for converting a programmable logic array (PLA) with undetectable faults into a crosspoint-irredundant PLA for testing has been formulated as a nonstandard set covering problem. By representing subsets of sets as cubes, this problem has been reformulated as familiar problems. This result is significant because a crosspoint-irredundant PLA can be converted to a completely testable PLA in a straightforward fashion, thus achieving very good fault coverage and easy testability.
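
For illustration only, the sketch below shows the covering step in its generic greedy form: pick control inputs until every fault condition is covered. It is not the paper's cube-based reformulation, and the fault and control-input names are invented.

```python
# A minimal greedy set-covering sketch in the spirit of the abstract: choose a
# small set of control inputs so that every undetectable-fault condition is
# covered. Element and subset names are illustrative.

def greedy_set_cover(universe, subsets):
    """subsets: name -> set of covered elements. Returns a list of chosen names."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # Pick the candidate that covers the most still-uncovered elements.
        name = max(subsets, key=lambda n: len(subsets[n] & uncovered))
        if not subsets[name] & uncovered:
            raise ValueError("remaining elements cannot be covered")
        chosen.append(name)
        uncovered -= subsets[name]
    return chosen

faults = {"f1", "f2", "f3", "f4", "f5"}   # conditions to make testable
controls = {                              # control input -> conditions it handles
    "c1": {"f1", "f2"},
    "c2": {"f2", "f3", "f4"},
    "c3": {"f4", "f5"},
    "c4": {"f1", "f5"},
}
print(greedy_set_cover(faults, controls))  # prints ['c2', 'c4'] for this data
```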

Relevance:

30.00%

Publisher:

Abstract:

Indian logic has a long history. It broadly covers the domains of two of the six schools (darsanas) of Indian philosophy, namely Nyaya and Vaisesika. The generally accepted definition of Indian logic over the ages is the science which ascertains valid knowledge either by means of the six senses or by means of the five members of the syllogism. In other words, perception and inference constitute the subject matter of logic. The science of logic evolved in India through three ages: the ancient, the medieval and the modern, spanning almost thirty centuries. Over the past three decades, advances in Computer Science, and in Artificial Intelligence in particular, have drawn researchers in these areas to the basic problems of language, logic and cognition. In the 1980s, Artificial Intelligence evolved into knowledge-based and intelligent system design, and the knowledge base and inference engine became standard subsystems of an intelligent system. One of the important issues in the design of such systems is knowledge acquisition: eliciting knowledge from humans who are experts in a branch of learning (such as medicine or law) and transferring it to a computing system. The second important issue in such systems is the validation of the knowledge base, i.e., ensuring that the knowledge is complete and consistent. It is in this context that a comparative study of Indian logic with recent theories of logic, language and knowledge engineering will help the computer scientist understand the deeper implications of the terms and concepts he is currently using and attempting to develop.
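
As a toy illustration of the knowledge base, inference engine and consistency check mentioned above, the sketch below forward-chains over a few if-then rules (including the classic smoke-implies-fire inference) and flags a fact derived together with its negation. The rule syntax and example facts are illustrative only, not drawn from any Indian logic text.

```python
# A minimal sketch of the two subsystems mentioned above: a knowledge base of
# if-then rules plus a forward-chaining inference engine, with a toy consistency
# check that flags a fact derived together with its negation.

def forward_chain(facts, rules):
    """rules: list of (premises, conclusion). Returns all derivable facts."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return known

def inconsistent(known):
    """True if some fact and its negation ('not X') were both derived."""
    return any(f"not {f}" in known for f in known)

kb_rules = [
    (["smoke"], "fire"),      # the classic inference-from-sign example
    (["fire"], "heat"),
    (["rain"], "not fire"),
]
derived = forward_chain({"smoke", "rain"}, kb_rules)
print(sorted(derived), "inconsistent:", inconsistent(derived))
```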

Relevance:

30.00%

Publisher:

Abstract:

A new range of programmable logic devices is revolutionizing the way complex digital hardware is designed and built all over the world. Being able to test these devices in order to validate and dynamically improve the design is crucial. This paper describes a low-cost FPGA tester that can test SRAM-based FPGAs in the laboratory.

Relevance:

30.00%

Publisher:

Abstract:

Wave pipelining is a design technique for increasing the throughput of a digital circuit or system without introducing pipelining registers between adjacent combinational logic blocks in the circuit/system. However, it requires balancing the delays along all paths from the input to the output, which complicates its implementation. Static CMOS is inherently susceptible to delay variation with input data and hence receives a low priority for wave-pipelined digital design. On the other hand, ECL and CML, which are amenable to wave pipelining, lack the compactness and low-power attributes of CMOS. In this paper we attempt to exploit wave pipelining in CMOS technology. We use a single generic building block in Normal Process Complementary Pass Transistor Logic (NPCPL), modeled after CPL, to achieve equal delay along all the propagation paths in the logic structure. An 8×8-bit multiplier is designed using this logic in a 0.8 μm technology. The carry-save multiplier architecture is modified suitably to support wave pipelining, viz., the logic depths of all the paths are made identical. The 1 mm×0.6 mm multiplier core supports a throughput of 400 MHz and dissipates a total power of 0.6 W. We develop simple enhancements to the NPCPL building blocks that allow the multiplier to sustain throughputs in excess of 600 MHz. The methodology can be extended to introduce wave pipelining in other circuits as well.
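
The timing intuition behind balancing path delays can be sketched as follows: in a wave-pipelined block, the minimum clock period is set mainly by the spread between the longest and shortest path delays rather than by the longest path alone. The simplified constraint, delay figures and margins below are illustrative assumptions, not measurements from the NPCPL multiplier.

```python
# A minimal sketch of the wave-pipelining timing idea: the minimum clock period
# is limited by the spread between the longest and shortest combinational path
# delays (plus register setup and clock-uncertainty margins), so equalising the
# path delays raises the sustainable throughput. Numbers are illustrative only.

def min_wave_pipelined_period(path_delays_ns, t_setup_ns=0.1, t_skew_ns=0.1):
    """Smallest clock period at which successive data 'waves' do not collide."""
    spread = max(path_delays_ns) - min(path_delays_ns)
    return spread + t_setup_ns + t_skew_ns

unbalanced = [1.0, 1.6, 2.2, 2.5]     # ns: paths with very different logic depth
balanced   = [2.40, 2.50, 2.45, 2.48] # ns: paths equalised in depth

for name, paths in [("unbalanced", unbalanced), ("balanced", balanced)]:
    period = min_wave_pipelined_period(paths)
    print(f"{name}: min period {period:.2f} ns -> {1e3 / period:.0f} MHz throughput")
```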

Relevance:

30.00%

Publisher:

Abstract:

In this paper, analytical expressions for the optimal Vdd and Vth that minimize energy under a given speed constraint are derived. These expressions are based on the EKV model for transistors and are valid in both the strong inversion and subthreshold regions. The effect of gate leakage on the optimal Vdd and Vth is analyzed. A new gradient-based algorithm for controlling Vdd and Vth based on delay and power monitoring results is proposed. A Vdd-Vth controller which uses the algorithm to dynamically control the supply and threshold voltages of a representative logic block (the sum-of-absolute-differences computation of an MPEG decoder) is designed. Simulation results using 65 nm predictive technology models are given.
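
As a rough illustration of the control idea (not the paper's algorithm or its EKV-based expressions), the sketch below adjusts Vdd and Vth by finite-difference gradient steps on a penalty that combines a stand-in power model with a delay-target violation term; the models, step size and voltage limits are assumptions.

```python
# A minimal gradient-style Vdd-Vth control sketch: monitor delay and power, then
# nudge Vdd and Vth downhill on a penalty cost so the delay target is met while
# power is reduced. Delay/power models, step size and clamps are illustrative.
import math

def delay(vdd, vth):
    """Stand-in alpha-power-law style delay model (arbitrary units)."""
    return vdd / (vdd - vth) ** 1.3

def power(vdd, vth):
    """Stand-in dynamic + subthreshold-leakage power model (arbitrary units)."""
    return 0.5 * vdd ** 2 + 0.05 * vdd * math.exp(-vth / 0.1)

def controller_step(vdd, vth, delay_target, mu=0.002):
    """One monitoring-and-update cycle: finite-difference gradient step."""
    cost = lambda v, t: power(v, t) + 2.0 * max(0.0, delay(v, t) - delay_target) ** 2
    eps = 1e-4
    g_vdd = (cost(vdd + eps, vth) - cost(vdd - eps, vth)) / (2 * eps)
    g_vth = (cost(vdd, vth + eps) - cost(vdd, vth - eps)) / (2 * eps)
    vdd = min(1.2, max(0.6, vdd - mu * g_vdd))    # clamp to assumed Vdd range
    vth = min(0.45, max(0.1, vth - mu * g_vth))   # clamp to assumed Vth range
    return vdd, vth

vdd, vth = 1.2, 0.4
for _ in range(500):
    vdd, vth = controller_step(vdd, vth, delay_target=3.0)
print(f"Vdd={vdd:.3f} V  Vth={vth:.3f} V  "
      f"delay={delay(vdd, vth):.2f}  power={power(vdd, vth):.3f}")
```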

Relevance:

30.00%

Publisher:

Abstract:

The yaw rate of a vehicle is strongly influenced by the lateral forces generated at the tire contact patch to attain the desired lateral acceleration, and/or by external disturbances resulting from factors such as crosswinds, a flat tire or split-μ braking. The presence of the latter and the insufficiency of the former may lead to undesired yaw motion of the vehicle. This paper proposes a steer-by-wire system based on fuzzy logic as a yaw-stability controller for a four-wheeled road vehicle with active front steering. The dynamics governing the yaw behavior of the vehicle has been modeled in MATLAB/Simulink. The fuzzy controller receives the yaw rate error of the vehicle and the steering signal given by the driver as inputs, and generates an additional steering angle as output which provides the corrective yaw moment. The results of simulations with various drive input signals show that the proposed fuzzy-logic yaw-stability controller performs well in situations involving unexpected yaw motion. The yaw rate errors of a vehicle equipped with the proposed controller are notably smaller than those of an uncontrolled vehicle, and the controlled vehicle recovers lateral distance and the desired yaw rate more quickly than the uncontrolled vehicle.
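
A minimal sketch of a fuzzy controller of this kind is shown below: the yaw-rate error is fuzzified with three membership functions, a small rule base maps each fuzzy set to a corrective steering singleton, and a weighted average defuzzifies the output. The breakpoints, rule table and output angles are illustrative assumptions, not the paper's tuned controller.

```python
# A minimal fuzzy-controller sketch: fuzzify the yaw-rate error, apply a tiny
# rule base, and defuzzify with a weighted average of output singletons.

def tri(x, a, b, c):
    """Triangular membership function with peak at b and feet at a and c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_corrective_steer(e):
    """Map yaw-rate error e (rad/s) to an additional front steering angle (rad)."""
    # Fuzzify the error into negative / zero / positive sets (outer sets saturate).
    mu = {
        "neg":  1.0 if e <= -0.5 else tri(e, -1.0, -0.5, 0.0),
        "zero": tri(e, -0.5, 0.0, 0.5),
        "pos":  1.0 if e >= 0.5 else tri(e, 0.0, 0.5, 1.0),
    }
    # Rule base as output singletons: counter-steer against the error.
    out = {"neg": +0.05, "zero": 0.0, "pos": -0.05}   # corrective angles (rad)
    return sum(mu[s] * out[s] for s in mu) / sum(mu.values())

for e in (-0.8, -0.3, 0.0, 0.2, 0.7):
    print(f"yaw-rate error {e:+.2f} rad/s -> corrective steer "
          f"{fuzzy_corrective_steer(e):+.4f} rad")
```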

Relevance:

30.00%

Publisher:

Abstract:

On the basis of signed-digit negabinary representation, parallel two-step addition and one-step subtraction can be performed for arbitrary-length negabinary operands. The arithmetic is realized by signed logic operations and optically implemented by spatial encoding and decoding techniques. The proposed algorithm and optical system are simple, reliable, and practicable, and they have the property of parallel processing of two-dimensional data. This leads to an efficient design for an optical arithmetic and logic unit. (C) 1997 Optical Society of America.
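
For readers unfamiliar with the number system, the sketch below shows plain negabinary (base -2) conversion, in which every integer, positive or negative, gets an unsigned digit string; it does not implement the paper's parallel signed-digit addition/subtraction or its optical encoding.

```python
# A minimal sketch of the negabinary (base -2) number system underlying the
# signed-digit scheme above: conversion to and from base -2 digits only.

def to_negabinary(n):
    """Return the base -2 digits of n, least significant digit first."""
    if n == 0:
        return [0]
    digits = []
    while n != 0:
        n, r = divmod(n, -2)
        if r < 0:             # force the remainder into {0, 1}
            r += 2
            n += 1
        digits.append(r)
    return digits

def from_negabinary(digits):
    """Evaluate least-significant-first base -2 digits back to an integer."""
    return sum(d * (-2) ** i for i, d in enumerate(digits))

for value in (6, -13, 25, 0):
    digits = to_negabinary(value)
    msb_first = "".join(str(d) for d in reversed(digits))
    assert from_negabinary(digits) == value
    print(f"{value:>4} -> {msb_first}")
```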