175 results for Programmable array logic
Abstract:
This paper further generalizes the notion of independence in constraint logic programs to the context of constraint logic programs with dynamic scheduling. The complexity of this new setting made it necessary to first formally define the relationship between independence and search space preservation in the context of CLP languages. In particular, we show that search space preservation is, in the context of CLP languages, not only a sufficient but also a necessary condition for ensuring that both the intended solutions and the number of transitions performed do not change. These results are then extended to dynamically scheduled languages and used as the basis for extending the concepts of independence. We then propose several a priori sufficient conditions for independence and give correctness and efficiency results for the parallel execution of constraint logic programs based on the proposed notions of independence.
Abstract:
Global analyzers traditionally read and analyze the entire program at once, in a non-incremental way. However, there are many situations which are not well suited to this simple model and which instead require reanalysis of certain parts of a program that has already been analyzed. In these cases, it appears inefficient to perform the analysis of the program again from scratch, as must be done with current systems. We describe how the fixpoint algorithms in current generic analysis engines can be extended to support incremental analysis. The possible changes to a program are classified into three types: addition, deletion, and arbitrary change. For each of these, we provide one or more algorithms for identifying the parts of the analysis that must be recomputed and for performing the actual recomputation. The potential benefits and drawbacks of these algorithms are discussed. Finally, we present some experimental results obtained with an implementation of the algorithms in the PLAI generic abstract interpretation framework. The results show significant benefits when using the proposed incremental analysis algorithms.
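To make the incremental idea above concrete, here is a minimal, hedged sketch of a worklist fixpoint that can be resumed after a program change instead of restarting from scratch; it is not the PLAI algorithm, and all names (transfer, callers, reanalyze_after_addition) are illustrative.

# Illustrative sketch of incremental fixpoint recomputation (not the actual PLAI code).
# 'transfer(p, table)' recomputes the abstract result of predicate p from the current
# table; 'callers' maps each predicate to the predicates whose results depend on it.

def fixpoint(preds, transfer, callers, table=None, dirty=None):
    """Run (or resume) a worklist fixpoint; 'dirty' seeds the worklist,
    and when it is None the whole program is analyzed from scratch."""
    table = {} if table is None else table
    worklist = list(preds) if dirty is None else list(dirty)
    while worklist:
        p = worklist.pop()
        new = transfer(p, table)                 # recompute abstract value of p
        if table.get(p) != new:                  # value changed: propagate
            table[p] = new
            worklist.extend(callers.get(p, ()))  # re-queue dependent predicates
    return table

def reanalyze_after_addition(edited, preds, transfer, callers, old_table):
    """Incremental addition: reuse the previous table and re-queue only the
    edited predicate; its callers are reached transitively if results change."""
    return fixpoint(preds, transfer, callers, table=dict(old_table), dirty=[edited])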
Abstract:
In this paper, we examine the issue of memory management in the parallel execution of logic programs. We concentrate on non-deterministic and-parallel schemes, which we believe present a relatively general set of problems to be solved, including most of those encountered in the memory management of or-parallel systems. We present a distributed stack memory management model which allows flexible scheduling of goals. Previously proposed models (based on the "Marker model") are lacking in that they either impose restrictions on the selection of goals to be executed or may consume a large amount of virtual memory. This paper first presents results which imply that the above-mentioned shortcomings can have a significant performance impact. An extension of the Marker Model is then proposed which allows flexible scheduling of goals while keeping (virtual) memory consumption down. Measurements are presented which show the advantage of this solution. Methods for handling forward and backward execution, cut, and roll back are discussed in the context of the proposed scheme. In addition, the paper shows how the same mechanism for flexible scheduling can be applied to allow the efficient handling of the very general form of suspension that can occur in systems which combine several types of and-parallelism and more sophisticated methods of executing logic programs. We believe that the results are applicable to many and- and or-parallel systems.
Abstract:
Knowing the size of the terms to which program variables are bound at run-time in logic programs is required in a class of applications related to program optimization, such as recursion elimination and granularity analysis. Such sizes are difficult to even approximate at compile time and are thus generally computed at run-time by using (possibly predefined) predicates which traverse the terms involved. We propose a technique based on program transformation which has the potential to perform this computation much more efficiently. The technique is based on finding program procedures which are called before those in which knowledge of term sizes is needed and which traverse the terms whose size is to be determined, and transforming such procedures so that they compute term sizes "on the fly". We present a systematic way of determining whether a given program can be transformed to compute a given term size at a given program point without additional term traversal. Also, if several such transformations are possible, our approach allows finding minimal transformations under certain criteria. We also discuss the advantages and present some applications of our technique.
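The paper works on Prolog procedures; the hedged Python analogue below only illustrates the underlying idea of the transformation, namely that a routine which already traverses a term is extended so the same traversal also yields the size, sparing a later consumer (e.g. a granularity test) a second traversal. All names are illustrative.

# Original: builds a new list; a later granularity check that needs its length
# would force a second traversal (or a separate length computation).
def double_all(xs):
    return [2 * x for x in xs]

# Transformed "on the fly" version: the same traversal also accumulates the size,
# so callers that need the term size get it without any extra traversal.
def double_all_with_size(xs):
    out, n = [], 0
    for x in xs:
        out.append(2 * x)
        n += 1
    return out, n

ys, size = double_all_with_size([1, 2, 3])   # size == 3, computed during the traversal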
Abstract:
Bruynooghe described a framework for the top-down abstract interpretation of logic programs. In this framework, abstract interpretation is carried out by constructing an abstract and-or tree in a top-down fashion for a given query and program. Such an abstract interpreter requires fixpoint computation for programs which contain recursive predicates. This paper presents in detail a fixpoint algorithm that has been developed for this purpose and the motivation behind it. We start off by describing a simple-minded algorithm. After pointing out its shortcomings, we present a series of refinements to this algorithm, until we reach the final version. The aim is to give an intuitive grasp and provide justification for the relative complexity of the final algorithm. We also present an informal proof of correctness of the algorithm and some results obtained from an implementation.
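As a rough illustration of the starting point described above (not the refined algorithm the paper arrives at), a naive fixpoint for recursive predicates can be sketched as follows, assuming one abstract success value per predicate and user-supplied clause_semantics and lub operations; these names and the simplification are illustrative.

# Naive fixpoint over abstract success values (one value per predicate, for brevity).
# 'clause_semantics(p, table)' abstractly executes the clauses of p, looking up the
# current approximation of any (possibly recursive) callee in 'table'; 'lub' is the
# least upper bound of the abstract domain.

BOTTOM = frozenset()

def naive_fixpoint(predicates, clause_semantics, lub):
    table = {p: BOTTOM for p in predicates}     # start from bottom everywhere
    changed = True
    while changed:                              # re-traverse until nothing changes
        changed = False
        for p in predicates:
            new = lub(table[p], clause_semantics(p, table))
            if new != table[p]:
                table[p] = new
                changed = True
    return table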
Abstract:
A cylindrical liquid crystal (LC) microlens array has been designed and built, and a study of its electro-optical behavior has been carried out. The lenticular array is novel in terms of the materials used in its fabrication: nickel has been used as the key material for implementing a high-resistivity electrode. The combination of the high-resistivity electrode with the LC (whose parallel impedance is high) forms a reactive divider that produces a hyperbolic voltage gradient from the center to the edge of each lens. This effect, together with the homogeneous alignment of the LC molecules, allows a refractive-index gradient to be generated, so that the device behaves as a GRIN (GRadient INdex) lens. To characterize its operation, its phase profile has been analyzed using interferometric methods and image processing; in addition, several angular contrast measurements have been performed.
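As a rough aid to the description above, the following is the standard distributed-RC (transmission-line) model commonly used for such modal/GRIN liquid crystal lenses; it is a sketch of the generic theory, not necessarily the exact model used by the authors. Here R_s is the sheet resistance of the high-resistivity electrode, C the LC capacitance per unit area, x the distance from the lens center, and a the half-aperture:

\frac{d^{2}V(x)}{dx^{2}} = j\omega R_s C \, V(x),
\qquad
V(x) = V_0 \, \frac{\cosh(\gamma x)}{\cosh(\gamma a)},
\qquad
\gamma = \sqrt{j\omega R_s C}

The hyperbolic magnitude profile of V(x), mapped through the voltage-dependent birefringence of the homogeneously aligned LC, is what produces the refractive-index gradient mentioned above.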
Abstract:
In antenna arrays with a large number of elements, the number of measurements required to characterize the array is very demanding in cost and time. This letter presents a new offline calibration process for active antenna arrays that reduces the number of measurements through subarray-level characterization. The letter treats measurement, characterization, and calibration as a single global procedure, assessing the most suitable calibration technique and the computation of the compensation matrices. The procedure has been fully validated with measurements of a 45-element triangular panel array designed for Low Earth Orbit (LEO) satellite tracking, compensating the degradation due to gain and phase imbalances and mutual coupling.
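The letter's actual procedure is not reproduced here; the sketch below only illustrates the generic idea behind a gain/phase compensation matrix, assuming per-element complex responses have already been measured. Function and variable names are illustrative, and mutual-coupling compensation, which generally requires a full matrix inverse, is not shown.

import numpy as np

def compensation_coefficients(measured, reference=None):
    """measured: complex per-element responses (gain and phase) from characterization.
    Returns the diagonal of a compensation matrix that equalizes the elements."""
    measured = np.asarray(measured, dtype=complex)
    ref = measured.mean() if reference is None else reference
    return ref / measured           # multiply the excitations by this to undo imbalances

# Example with three elements showing gain/phase errors (values illustrative):
meas = np.array([1.0, 0.8 * np.exp(1j * 0.3), 1.1 * np.exp(-1j * 0.2)])
corrected = compensation_coefficients(meas) * meas   # ~uniform amplitude and phase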
Abstract:
Fuzzy logic is used to build a model for analyzing the sustainable development of projects. Sustainable development is defined as "development that satisfies the needs of the present without endangering the capacity of future generations to satisfy theirs". The term "sustainable development" thus represents a balance between satisfying present and future needs, offering options for technological and social growth that reduce the risks associated with current growth trends. The idea of sustainability can be analyzed from three perspectives: environmental, social, and economic.
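Purely as an illustration of the kind of fuzzy machinery such a model might use (the membership functions, scales, and aggregation operator below are assumptions, not the paper's model):

# Illustrative fuzzy aggregation over the three sustainability dimensions.

def triangular(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def sustainability_degree(env, soc, eco):
    """Degree of membership in 'sustainable' for scores in [0, 10] per dimension,
    combined with the conservative min (t-norm) operator."""
    high = lambda v: triangular(v, 4.0, 8.0, 10.0)   # 'high' fuzzy set on each axis
    return min(high(env), high(soc), high(eco))

print(sustainability_degree(7.0, 6.5, 8.0))   # prints 0.625 (limited by the social score)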
Abstract:
In this work, a new design concept of SMS moving optics is developed, in which the movement is no longer lateral but follows a curved trajectory calculated in the design process. The curved tracking trajectory helps to broaden the range of incident angles significantly. We have chosen an afocal-type structure, which aims to direct parallel rays at large incident angles into parallel output rays. The RMS of the divergence angle of the output rays remains below 1 degree over an incident angular range of ±45°. Potential applications of this beam-steering device include skylights providing steerable natural illumination, building-integrated CPV systems, and steerable LED illumination.
Abstract:
This paper presents a simple gravity evaluation model for large reflector antennas, together with an experimental example for a case study of an uplink array of four 35-m antennas at X and Ka bands. The model can be used to evaluate the gain reduction as a function of the maximum gravity distortion, and also to specify this distortion at the system design level. The case study consists of an array of 35-m antennas for deep-space missions. The main issues due to the gravity effect have been explored with Monte Carlo simulation analysis.
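The paper's gravity model is not reproduced here; the sketch below only illustrates the kind of evaluation described, combining the classical Ruze equation for gain loss due to surface RMS error with a Monte Carlo draw over a hypothetical gravity-induced distortion. The frequencies, distortion range, and distribution are illustrative assumptions.

import numpy as np

C = 299792458.0                         # speed of light (m/s)

def ruze_loss_db(rms_error_m, freq_hz):
    """Gain reduction in dB from surface RMS error, via Ruze's equation."""
    lam = C / freq_hz
    return -10.0 * np.log10(np.exp(-(4.0 * np.pi * rms_error_m / lam) ** 2))

rng = np.random.default_rng(0)
rms = rng.uniform(0.1e-3, 0.3e-3, 10_000)      # hypothetical 0.1-0.3 mm RMS distortion
print(ruze_loss_db(rms, 8e9).mean())           # X band (illustrative frequency)
print(ruze_loss_db(rms, 32e9).mean())          # Ka band: markedly larger loss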
Abstract:
This paper introduces novel calibration processes for antenna arrays based on new architectures and technologies, designed to improve the performance of traditional earth stations for satellite communications in response to the growing data-capacity requirements of recent decades. In addition, the Radiation Group of the Technical University of Madrid has been developing antenna arrays based on these novel architectures and technologies across many projects, as a ground-segment solution for the near future. Calibration is currently an active and expanding research field, with much work still to be done on both the transmit and receive calibration of these novel antennas under development.
Abstract:
We present an integrated Phased Array (PhA II) transducer that is permanently bonded onto a structure and provides a reliable electromechanical connection with the corresponding sophisticated, miniaturized "all-in-one" SHM electronic device installed directly above it, without the need for any interface cabling, during all aerospace-structure lifecycle phases and across the wide variety of harsh service environments of the structures to be monitored. This integrated PhA II transducer [1], as a key component of the PAMELA SHM (Phased Array Monitoring for Enhanced Life Assessment) system, has two principal tasks at the same time: to reliably transceive elastic waves in real aerospace service environments, and to serve as the sole reliable carrier or support for the associated integrated on-board SHM electronic device attached above it. The PhA II transducer successfully accomplished both required tasks throughout extensive test campaigns, which included low- to high-temperature tests, temperature cycling, mechanical loading, combined thermo-mechanical loading, vibration resistance, etc., both with and without the SHM device attached, in accordance with RTCA DO-160F.
Abstract:
This paper proposes an automatic framework for the seamless integration of hardware accelerators, starting from an OpenMP-based application and an XML file describing the HW/SW partitioning. It extends a purely software architecture by generating and integrating the accelerator cores, along with the proper interfaces and the code for scheduling and synchronization. Experimental results show that different solutions can be validated simply by varying the input code.
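The abstract does not show the framework's actual XML schema; the snippet below is a purely hypothetical illustration of the kind of HW/SW partitioning description such a flow could consume, read here with Python's standard library.

# Hypothetical partitioning description; the real framework's XML schema may differ.
import xml.etree.ElementTree as ET

PARTITION_XML = """
<partitioning>
  <function name="fir_filter" target="hw" interface="bus"/>
  <function name="update_ui"  target="sw"/>
</partitioning>
"""

for fn in ET.fromstring(PARTITION_XML).findall("function"):
    kind = "hardware accelerator core" if fn.get("target") == "hw" else "software thread"
    print(f'{fn.get("name")} -> {kind}')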
Abstract:
Adaptive hardware requires some reconfiguration capability. FPGAs with native dynamic partial reconfiguration (DPR) support pose a dilemma for system designers: whether to use native DPR or to build a virtual reconfigurable circuit (VRC) on top of the FPGA, which allows alternative functions to be selected by a multiplexing scheme. The latter solution allows much faster reconfiguration, but with a higher resource overhead. This paper discusses the advantages of both implementations for a 2D image processing matrix. Results show that a higher operating frequency is obtained for the matrix when using DPR; however, this is compensated in the VRC during evolution by its comparatively negligible reconfiguration time. Regarding area, the DPR implementation consumes slightly more resources due to the reconfiguration engine, but adds further capabilities to the system.
Abstract:
Side Channel Attacks (SCAs) typically gather unintentional (side channel) physical leakages from running crypto-devices to reveal confidential data. Dual-rail Precharge Logic (DPL) is one of the most efficient countermeasures against power or EM side channel threats. This logic relies on the implementation of complementary rails to counterbalance the data-dependent variations of the leakage arising from the dynamic behavior of the original circuit. However, the lack of flexibility of commercial FPGA design tools makes it quite difficult to obtain completely balanced routing between complementary networks. In this paper, a controllable repair mechanism to guarantee identical net pairs between the two rails is presented: (i) it repairs nets that are identical yet conflicting after duplication (copy & paste) from the original rail to the complementary rail, and (ii) it repairs non-identical nets in off-the-shelf DPL circuits. These rerouting steps are carried out starting from a placed and routed netlist expressed in the Xilinx Description Language (XDL). Low-level XDL modifications have been completely automated using a set of APIs named RapidSmith. Experimental EM attacks show that the resistance level of an AES core after the automatic routing repair is increased by a factor of at least 3.5. Timing analyses further demonstrate that net delay differences between complementary networks are significantly minimized.
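The repair itself is performed on the XDL netlist through the RapidSmith APIs, which are not reproduced here; the short sketch below only illustrates the balance check the timing analysis amounts to, i.e. comparing the delays of each complementary net pair. The _t/_f naming convention and the delay values are purely hypothetical.

# Hypothetical per-net delays (ns), paired by a _t/_f suffix for the two rails.
delays = {"sbox_out_t": 1.42, "sbox_out_f": 1.44,
          "key_xor_t": 0.97, "key_xor_f": 0.97}

def rail_imbalances(delays):
    """Absolute delay difference between each true/false complementary net pair."""
    return {name[:-2]: abs(d - delays[name[:-2] + "_f"])
            for name, d in delays.items() if name.endswith("_t")}

print(rail_imbalances(delays))   # values near 0.0 indicate well-balanced routing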