950 results for Dataflow diagrams
Abstract:
We introduce a class of exactly solvable models exhibiting an ordering noise-induced phase transition in which order arises as a result of a balance between the relaxing deterministic dynamics and the randomizing character of the fluctuations. A finite-size scaling analysis of the phase transition reveals that it belongs to the universality class of the equilibrium Ising model. All these results are analyzed in the light of the nonequilibrium probability distribution of the system, which can be obtained analytically. Our results could constitute a possible scenario for inverted phase diagrams in the so-called lower critical solution temperature transitions.
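For reference, finite-size scaling analyses of this kind conventionally rest on the standard ansatz (a textbook form, written here under the assumption that the noise intensity sigma plays the role of temperature; it is not reproduced from the paper):

    m(\sigma, L) = L^{-\beta/\nu}\, \tilde{m}\!\big( (\sigma - \sigma_c)\, L^{1/\nu} \big)

Data for different system sizes L collapse onto a single curve when plotted with the equilibrium Ising exponents (for example, beta = 1/8 and nu = 1 in two dimensions), which is how membership in the Ising universality class is typically established.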
Abstract:
The uncertainties inherent in experimental differential scanning calorimetric data are evaluated. A new procedure is developed to perform the kinetic analysis of continuous-heating calorimetric data when the heat capacity of the sample changes during crystallization. The accuracy of isothermal calorimetric data is analyzed in terms of the peak-to-peak noise of the calorimetric signal and the baseline drift typical of differential scanning calorimetry equipment. Their influence on the evaluation of the kinetic parameters is discussed. An empirical construction of the time-temperature and temperature-heating-rate transformation diagrams, based on the kinetic parameters, is presented. The method is applied to the kinetic study of the primary crystallization of Te in an amorphous alloy of nominal composition Ga20Te80, obtained by rapid solidification.
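The construction of a time-temperature-transformation curve from kinetic parameters can be sketched as follows (a generic illustration assuming JMAK crystallization kinetics with an Arrhenius rate constant; the parameter values are placeholders, not those reported for Ga20Te80):

    import math

    # JMAK kinetics: x(t) = 1 - exp(-(k(T) * t)**n), with Arrhenius k(T).
    E_A = 150e3   # activation energy, J/mol (placeholder value)
    K0 = 1e12     # pre-exponential factor, 1/s (placeholder value)
    N = 3.0       # Avrami exponent (placeholder value)
    R = 8.314     # gas constant, J/(mol K)

    def time_to_fraction(T, x):
        """Isothermal time at temperature T (K) to reach crystallized fraction x."""
        k = K0 * math.exp(-E_A / (R * T))
        return (-math.log(1.0 - x)) ** (1.0 / N) / k

    # One TTT curve: the locus of (t, T) points where x reaches 1 %.
    for T in range(450, 551, 20):
        print(f"T = {T} K: t(x=0.01) = {time_to_fraction(T, 0.01):.3g} s")

Repeating the loop for several fractions x traces the family of curves that makes up the diagram.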
Abstract:
Rich and Suter diagrams are a very useful tool to explain the electron configurations of all transition elements, and in particular, the s¹ and s⁰ configurations of the elements Cr, Cu, Nb, Mo, Ru, Rh, Pd, Ag, and Pt. The application of these diagrams to the inner transition elements also explains the electron configurations of lanthanoids and actinoids, except for Ce, Pa, U, Np, and Cm, whose electron configurations are indeed very special because they are a mixture of several configurations.
Abstract:
With the shift towards many-core computer architectures, dataflow programming has been proposed as one potential solution for producing software that scales to a varying number of processor cores. Programming for parallel architectures is considered difficult, as the current popular programming languages are inherently sequential and introducing parallelism is typically left to the programmer. Dataflow, however, is inherently parallel, describing an application as a directed graph where nodes represent calculations and edges represent data dependencies in the form of queues. These queues are the only allowed communication between the nodes, making the dependencies between the nodes explicit and thereby also the parallelism. Once a node has sufficient inputs available, it can, independently of any other node, perform calculations, consume inputs, and produce outputs.

Dataflow models have existed for several decades and have become popular for describing signal processing applications, as the graph representation is a very natural representation within this field; digital filters are typically described with boxes and arrows in textbooks as well. Dataflow is also becoming more interesting in other domains, and in principle any application working on an information stream fits the dataflow paradigm. Such applications include, among others, network protocols, cryptography, and multimedia applications. As an example, the MPEG group standardized a dataflow language called RVC-CAL to be used within reconfigurable video coding. Describing a video coder as a dataflow network instead of in a conventional programming language makes the coder more readable, as it describes how the video data flows through the different coding tools.

While dataflow provides an intuitive representation for many applications, it also introduces some new problems that need to be solved in order for dataflow to be more widely used. The explicit parallelism of a dataflow program is descriptive and enables improved utilization of the available processing units; however, the independent nodes also imply that some kind of scheduling is required. The need for efficient scheduling becomes even more evident when the number of nodes is larger than the number of processing units and several nodes run concurrently on one processor core. There exist several dataflow models of computation, with different trade-offs between expressiveness and analyzability. These vary from rather restricted but statically schedulable models, with minimal scheduling overhead, to dynamic ones where each firing requires a firing rule to be evaluated. The model used in this work, RVC-CAL, is a very expressive language that in the general case requires dynamic scheduling; however, the strong encapsulation of dataflow nodes enables analysis, and the scheduling overhead can be reduced by using quasi-static, or piecewise-static, scheduling techniques.

The scheduling problem is concerned with finding the few scheduling decisions that must be made at run time, while most decisions are pre-calculated. The result is then a set of static schedules, as small as possible, that are dynamically scheduled. To identify these dynamic decisions and to find the concrete schedules, this thesis shows how quasi-static scheduling can be represented as a model checking problem. This involves identifying the relevant information to generate a minimal but complete model to be used for model checking.
The model must describe everything that may affect the scheduling of the application while omitting everything else, in order to avoid state space explosion. This kind of simplification is necessary to make the state space analysis feasible. For the model checker to find the actual schedules, a set of scheduling strategies is defined which is able to produce quasi-static schedulers for a wide range of applications. The results of this work show that actor composition with quasi-static scheduling can be used to transform dataflow programs to fit many different computer architectures with different types and numbers of cores. This, in turn, enables dataflow to provide a more platform-independent representation, as one application can be fitted to a specific processor architecture without changing the actual program representation. Instead, the program representation is optimized by the development tools, in the context of design space exploration, to fit the target platform. This work focuses on representing the dataflow scheduling problem as a model checking problem and is implemented as part of a compiler infrastructure. The thesis also presents experimental results as evidence of the usefulness of the approach.
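To make the model concrete, here is a minimal sketch of a dataflow network with FIFO queues and a naive dynamic scheduler (Python standing in for an actor language such as RVC-CAL; all names and the firing-rule encoding are illustrative, not taken from the thesis):

    from collections import deque

    class Actor:
        """A dataflow node: fires when enough input tokens are available."""
        def __init__(self, name, inputs, outputs, required, action):
            self.name = name
            self.inputs = inputs      # FIFO queues read by this actor
            self.outputs = outputs    # FIFO queues written by this actor
            self.required = required  # tokens needed per input queue (firing rule)
            self.action = action      # pure function: input tokens -> output tokens

        def can_fire(self):
            return all(len(q) >= n for q, n in zip(self.inputs, self.required))

        def fire(self):
            args = [[q.popleft() for _ in range(n)]
                    for q, n in zip(self.inputs, self.required)]
            for q, tokens in zip(self.outputs, self.action(*args)):
                q.extend(tokens)

    # A two-actor pipeline: scale each sample, then sum pairs.
    src_q, mid_q, out_q = deque([1, 2, 3, 4]), deque(), deque()
    scale = Actor("scale", [src_q], [mid_q], [1], lambda xs: [[2 * xs[0]]])
    pairsum = Actor("pairsum", [mid_q], [out_q], [2], lambda xs: [[xs[0] + xs[1]]])

    # Naive dynamic scheduler: repeatedly test every firing rule.
    actors = [scale, pairsum]
    while any(a.can_fire() for a in actors):
        for a in actors:
            if a.can_fire():
                a.fire()
    print(list(out_q))  # [6, 14]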
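The scheduler above re-tests every firing rule on every pass, which is exactly the overhead that quasi-static scheduling aims to remove. Continuing the same sketch, most of those run-time tests are replaced by a pre-computed firing sequence, leaving only a few decisions dynamic (the schedule below is a hypothetical result of such an analysis, not an output of the thesis's tool):

    # Continuing the sketch above: refill the source queue with fresh input.
    src_q.extend([5, 6, 7, 8])

    # One steady-state iteration fixed at compile time (hypothetical result
    # of quasi-static analysis): fire scale twice, then pairsum once.
    STATIC_SCHEDULE = [scale, scale, pairsum]

    while len(src_q) >= 2:              # the single remaining run-time decision
        for actor in STATIC_SCHEDULE:   # no firing-rule tests inside the loop
            actor.fire()
    print(list(out_q))                  # previous results plus [22, 30]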
Abstract:
Diagrams of cross sections on the Welland Railway, Port Dalhousie (4 hand-drawn diagrams), March 1860.
Abstract:
Specifications for the erection of a building at Long Point with diagrams (9 pages, handwritten), n.d.
Abstract:
Diagrams (charts and graphs) made into a booklet with a newspaper cover. This booklet contains cross sections of the back ditch on the south side of the Welland Canal feeder, west of the Marshville culverts (45 pages, hand-drawn). This was created by Fred Holmes, Oct. 3, 1857.
Abstract:
The dataflow model of computation exposes and exploits parallelism in programs without requiring programmer annotation; however, instruction-level dataflow is too fine-grained to be efficient on general-purpose processors. A popular solution is to develop a "hybrid" model of computation where regions of dataflow graphs are combined into sequential blocks of code. I have implemented such a system to allow the J-Machine to run Id programs, leaving a high amount of parallelism exposed, such as among loop iterations. I describe this system and provide an analysis of its strengths and weaknesses, and those of the J-Machine, along with ideas for improvement.
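As a rough illustration of such a hybrid (a generic sketch, not the thesis's J-Machine/Id implementation; all names are hypothetical): a chain of fine-grained dataflow nodes is compiled into one sequential block, while coarse-grained parallelism among loop iterations stays exposed.

    from concurrent.futures import ThreadPoolExecutor

    def sequential_block(x):
        """A region of the dataflow graph compiled into straight-line code:
        the internal nodes (mul, add, square) no longer need dataflow
        synchronization among themselves."""
        t = x * 2      # node 1
        t = t + 1      # node 2
        return t * t   # node 3

    # Parallelism among loop iterations stays exposed: each iteration's
    # block instance is an independent coarse-grained dataflow node.
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(sequential_block, range(8)))
    print(results)

The trade-off this illustrates is the one the abstract names: sequencing within a block removes fine-grained scheduling overhead, at the cost of hiding the parallelism inside that block.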