Abstract:
With the shift towards many-core computer architectures, dataflow programming has been proposed as one potential solution for producing software that scales to a varying number of processor cores. Programming for parallel architectures is considered difficult, as the currently popular programming languages are inherently sequential and introducing parallelism is typically left to the programmer. Dataflow, however, is inherently parallel, describing an application as a directed graph where nodes represent calculations and edges represent data dependencies in the form of queues. These queues are the only allowed communication between the nodes, making the dependencies between the nodes explicit and thereby also the parallelism. Once a node has sufficient inputs available, it can, independently of any other node, perform calculations, consume inputs, and produce outputs. Dataflow models have existed for several decades and have become popular for describing signal processing applications, as the graph representation is a very natural one within this field; digital filters are typically described with boxes and arrows in textbooks as well. Dataflow is also becoming more interesting in other domains, and in principle any application working on an information stream fits the dataflow paradigm. Such applications include, among others, network protocols, cryptography, and multimedia applications. As an example, the MPEG group standardized a dataflow language called RVC-CAL to be used within reconfigurable video coding. Describing a video coder as a dataflow network, instead of with conventional programming languages, makes the coder more readable, as it describes how the video data flows through the different coding tools. While dataflow provides an intuitive representation for many applications, it also introduces some new problems that need to be solved for dataflow to be more widely used.
The explicit parallelism of a dataflow program is descriptive and enables improved utilization of the available processing units; however, the independent nodes also imply that some kind of scheduling is required. The need for efficient scheduling becomes even more evident when the number of nodes is larger than the number of processing units and several nodes run concurrently on one processor core. There exist several dataflow models of computation, with different trade-offs between expressiveness and analyzability. These vary from rather restricted but statically schedulable models, with minimal scheduling overhead, to dynamic models in which each firing requires a firing rule to be evaluated. The model used in this work, namely RVC-CAL, is a very expressive language, and in the general case it requires dynamic scheduling; however, the strong encapsulation of dataflow nodes enables analysis, and the scheduling overhead can be reduced by using quasi-static, or piecewise static, scheduling techniques. The scheduling problem is concerned with finding the few scheduling decisions that must be made at run time, while most decisions are pre-calculated. The result is an as-small-as-possible set of static schedules that are dynamically scheduled. To identify these dynamic decisions and to find the concrete schedules, this thesis shows how quasi-static scheduling can be represented as a model checking problem. This involves identifying the relevant information to generate a minimal but complete model to be used for model checking. The model must describe everything that may affect the scheduling of the application while omitting everything else, in order to avoid state space explosion. This kind of simplification is necessary to make the state space analysis feasible. For the model checker to find the actual schedules, a set of scheduling strategies is defined that is able to produce quasi-static schedulers for a wide range of applications.
The results of this work show that actor composition with quasi-static scheduling can be used to transform dataflow programs to fit many different computer architectures with different types and numbers of cores. This, in turn, enables dataflow to provide a more platform-independent representation, as one application can be fitted to a specific processor architecture without changing the actual program representation. Instead, the program representation is optimized by the development tools, in the context of design space exploration, to fit the target platform. This work focuses on representing the dataflow scheduling problem as a model checking problem and is implemented as part of a compiler infrastructure. The thesis also presents experimental results as evidence of the usefulness of the approach.
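The execution model described above (nodes communicating only through FIFO queues, each firing when its firing rule is satisfied) can be sketched in a few lines. This is an illustrative toy, not RVC-CAL: the `Queue` and `Adder` classes and the simple dynamic-scheduling loop are names invented for the example.

```python
from collections import deque

class Queue:
    """FIFO edge between two dataflow nodes."""
    def __init__(self):
        self._items = deque()
    def push(self, v):
        self._items.append(v)
    def pop(self):
        return self._items.popleft()
    def size(self):
        return len(self._items)

class Adder:
    """A node whose firing rule requires one token on each input queue."""
    def __init__(self, a, b, out):
        self.a, self.b, self.out = a, b, out
    def can_fire(self):
        return self.a.size() >= 1 and self.b.size() >= 1
    def fire(self):
        # consume inputs, produce output -- independently of any other node
        self.out.push(self.a.pop() + self.b.pop())

a, b, out = Queue(), Queue(), Queue()
node = Adder(a, b, out)
for x, y in [(1, 2), (3, 4)]:
    a.push(x)
    b.push(y)

# fully dynamic scheduling: the firing rule is evaluated before every firing;
# quasi-static scheduling would replace parts of this loop with fixed sequences
while node.can_fire():
    node.fire()

results = [out.pop() for _ in range(out.size())]
print(results)  # [3, 7]
```

The while-loop is exactly the per-firing overhead that quasi-static scheduling aims to pre-compute away.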
Abstract:
Feature extraction is the part of pattern recognition where the sensor data is transformed into a form more suitable for the machine to interpret. The purpose of this step is also to reduce the amount of information passed to the later stages of the system while preserving the information essential for discriminating the data into different classes. For instance, in image analysis the raw image intensities are vulnerable to various environmental effects, such as lighting changes, and feature extraction can be used as a means of detecting features that are invariant to certain types of illumination change. Finally, classification makes decisions based on the previously transformed data. The main focus of this thesis is on developing new methods for embedded feature extraction based on local non-parametric image descriptors. Feature analysis is also carried out for the selected image features, with low-level Local Binary Pattern (LBP) features playing the main role in the analysis. In the embedded domain, a pattern recognition system must usually meet strict performance constraints, such as high speed, compact size, and low power consumption. The characteristics of the final system can be seen as a trade-off between these metrics, which is largely determined by the decisions made during the implementation phase. The implementation alternatives for LBP-based feature extraction are explored in the embedded domain in the context of focal-plane vision processors. In particular, the thesis demonstrates LBP extraction with the MIPA4k massively parallel focal-plane processor IC. Higher-level processing is also incorporated into this framework, by means of a framework for implementing a single-chip face recognition system. Furthermore, a new method for determining optical flow based on LBPs, designed specifically for the embedded domain, is presented.
Inspired by some of the principles observed through the feature analysis of Local Binary Patterns, an extension to the well-known non-parametric rank transform is proposed, and its performance is evaluated in face recognition experiments with a standard dataset. Finally, an a priori model in which LBPs are seen as combinations of n-tuples is also presented.
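For readers unfamiliar with the descriptor, here is a minimal sketch of the basic 3x3 LBP operator discussed above, in Python rather than on focal-plane hardware. The neighbour ordering and the `>=` comparison convention are assumptions of this example; implementations vary.

```python
def lbp_code(patch):
    """8-bit Local Binary Pattern code for a 3x3 patch (list of lists).
    Each of the 8 neighbours contributes one bit: 1 if the neighbour is
    greater than or equal to the centre pixel, else 0."""
    c = patch[1][1]
    # neighbour coordinates, clockwise starting at the top-left corner
    coords = [(0, 0), (0, 1), (0, 2), (1, 2),
              (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit, (r, col) in enumerate(coords):
        if patch[r][col] >= c:
            code |= 1 << bit
    return code

patch = [[6, 5, 2],
         [7, 6, 1],
         [9, 8, 7]]
print(lbp_code(patch))  # 241
```

Because only the sign of each intensity difference matters, the code is invariant to any monotonic change in illumination, which is the property the thesis exploits.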
Abstract:
This bachelor's thesis provides a comprehensive overview of the various multi-monitor setup methods available on PC hardware, of which several exist with differing features and intended uses. The thesis examines the history of Windows multi-monitor support and its development across Windows versions, from the beginnings of the support in the 1990s up to the most recent Windows operating systems. Finally, multi-monitor support in games is examined, including how to make use of multiple monitors in games that do not support them natively.
Abstract:
Duchenne muscular dystrophy (DMD) is an inherited muscle-wasting disease that occurs almost exclusively in boys and leads to death at around 25 years of age. Approximately one in 3500–6000 boys has DMD. The disease is caused by a mutation in the dystrophin gene on the X chromosome, as a result of which functional dystrophin, which holds muscle tissue together, is not produced. Promising treatments are in clinical trials, so the introduction of newborn screening for DMD is being considered. Muscle-type creatine kinase (creatine kinase MM isoform, CK-MM), which leaks into the blood when muscle cells are damaged, could be used as the analyte in DMD screening. In newborns with DMD, the blood CK-MM level is many times higher than in healthy newborns, owing to muscle degeneration. Traditionally, creatine kinase has been measured with enzyme activity assays, which measure all creatine kinase isoforms, including the cardiac and brain forms (CK-MB and CK-BB). The aim of this work was to develop a CK-MM-specific two-site immunoassay performed on dried blood spots that could be transferred to PerkinElmer's automated GSP® Genetic Screening Processor analyzer. The work was carried out in three stages. First, the affinities of commercially available CK-MM antibodies were compared with a biosensor. Next, a manual two-site immunoassay was set up using the antibodies selected in the first stage, and the immunoassay parameters were optimized. Finally, the immunoassay was adapted to the GSP instrument. Based on the biosensor measurements and the manual immunoassays, two potential tracer antibodies and one antibody suitable as the capture antibody were selected. With these, the assay is fairly specific for CK-MM: CK-BB produced no signal at all, and the cross-reactivity of CK-MB was about 7%.
When measured on the GSP instrument, the median (range) CK-MM concentration was 7590 ng/ml (1490–13400 ng/ml) for newborns with DMD (n = 10) and 165 ng/ml (108–263 ng/ml) for healthy newborns (n = 8). The dynamic measurement range of the assay has not yet been determined, but preliminary measurements indicate that it covers the concentrations of healthy newborns and those of affected newborns at least up to 8770 ng/ml, which allows affected newborns to be distinguished. The assay developed in this work therefore appears suitable for newborn screening for DMD.
Abstract:
Molecular mechanics calculations were performed on tetrahedral phosphine oxide zinc complexes in simulated water, benzene, and hexane phases using the DREIDING II force field in the BIOGRAF molecular modeling program. A SUN workstation (SUN 4c, with a SPARCstation 1 processor) was used for the calculations. Experimental structural information used in the parameterization was obtained from the September 1989 version of the Cambridge Structural Database. Steric and solvation energies were calculated for complexes of the type ZnCl2(R3PO)2. The calculations were done with and without the inclusion of electrostatic interactions; more reliable simulation results were obtained without the inclusion of charges. In the simulated gas phase, the steric energies increase regularly with the number of carbons in the alkyl group, whereas they go through a maximum when solvent shells are included in the calculation. Simulated distribution ratios vary with chain length and type of chain branching, and the complexes are found to be more favourable for extraction by benzene than by hexane, in accord with experimental data. Also, in line with what would be expected for a favourable extraction, calculations without electrostatics predict that the complexes are better solvated by the organic solvents than by water.
Abstract:
On-chip multiprocessor (OCM) systems are considered the best structures for occupying the space available on today's integrated circuits. In this work, we study an architectural model, called the isometric on-chip multiprocessor architecture, that makes it possible to evaluate, predict, and optimize OCM systems through an efficient organization of the nodes (processors and memories), together with methodologies for using these architectures effectively. In the first part of the thesis, we address the topology of the model and propose an architecture that makes efficient and massive use of on-chip memories. Processors and memories are organized according to an isometric approach that brings data closer to processes, rather than optimizing transfers between conventionally arranged processors and memories. The architecture is a three-dimensional mesh model. The placement of units on this model is inspired by the crystal structure of sodium chloride (NaCl), in which each processor can access six memories at once and each memory can communicate with as many processors at once. In the second part of our work, we focus on a decomposition methodology in which the ideal number of nodes in the model can be determined from a matrix specification of the application to be processed by the proposed model. Since the performance of a model depends on the amount of data flow exchanged between its units, and hence on their number, and since our goal is to guarantee good computational performance for the application at hand, we propose to find the ideal number of processors and memories for the system to be built.
We also consider decomposing the specification of the model to be built, or of the application to be processed, according to the load balance of the units. We therefore propose a three-part decomposition approach: transforming the specification or the application into an incidence matrix whose elements are the data flows between processes and data; a new methodology based on the Cell Formation Problem (CFP); and load balancing of processes across processors and of data across memories. In the third part, still with the aim of designing an efficient, high-performance system, we address the assignment of processors and memories through a two-step methodology. First, we assign units to the nodes of the system, considered here as an undirected graph; second, we assign values to the edges of this graph. For the assignment, we propose modelling the decomposed applications with a matrix approach and using the Quadratic Assignment Problem (QAP). For assigning values to the edges, we propose a gradual perturbation approach to search for the best assignment cost while respecting parameters such as temperature, heat dissipation, energy consumption, and chip area. The ultimate goal of this work is to offer architects of on-chip multiprocessor systems a non-traditional methodology and a systematic, effective design-support tool usable from the functional specification phase of the system onward.
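The QAP formulation mentioned above can be illustrated with a toy instance: assign processes to nodes so that the sum of data flow multiplied by inter-node distance is minimized. The brute-force search below is a sketch that is only feasible for tiny instances, not the thesis's methodology; the `flow` and `dist` matrices are invented for illustration.

```python
from itertools import permutations

def qap_cost(flow, dist, perm):
    """QAP cost of placing process i at node perm[i]:
    sum over all pairs of flow[i][j] * dist[perm[i]][perm[j]]."""
    n = len(flow)
    return sum(flow[i][j] * dist[perm[i]][perm[j]]
               for i in range(n) for j in range(n))

# toy instance: 3 processes (data flows between them) and 3 nodes (distances)
flow = [[0, 5, 2],
        [5, 0, 3],
        [2, 3, 0]]
dist = [[0, 1, 2],
        [1, 0, 1],
        [2, 1, 0]]

# exhaustive search over all placements (n! of them -- toy sizes only)
best = min(permutations(range(3)), key=lambda p: qap_cost(flow, dist, p))
print(best, qap_cost(flow, dist, best))
```

Because QAP is NP-hard, realistic instances require heuristics such as the gradual perturbation approach described above rather than exhaustive search.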
Abstract:
Our R package PoweR aims to facilitate obtaining or verifying empirical power studies for goodness-of-fit tests. As such, it can be regarded as a tool for reproducible-research computing, since it makes it very easy to reproduce (or to detect errors in) simulation results already published in the literature. Using our package, it becomes easy to design new simulation studies. Critical values and powers of many test statistics under a wide variety of alternative distributions are obtained quickly and accurately using a C/C++ and R environment. One can even rely on R's snow package for parallel computation on a multicore processor. Results can be displayed as LaTeX tables or specialized graphs, which can be incorporated directly into publications. This paper gives an overview of the main goals and design principles, as well as strategies for adaptation and extension.
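PoweR itself is an R package; as a language-neutral illustration of the empirical power workflow it automates (simulate the null to obtain a critical value, then simulate the alternative to estimate the rejection rate), here is a sketch in Python using a sample-skewness statistic. The function names and the chosen test statistic are assumptions of this example, not part of PoweR.

```python
import random
import statistics

def skewness(xs):
    """Sample skewness: third central moment over cubed (population) std."""
    m = statistics.fmean(xs)
    s = statistics.pstdev(xs)
    return sum((x - m) ** 3 for x in xs) / (len(xs) * s ** 3)

def empirical_power(stat, null_draw, alt_draw,
                    n=50, reps=2000, alpha=0.05, seed=1):
    rng = random.Random(seed)
    # step 1: critical value = (1 - alpha) quantile of |stat| under the null
    null_stats = sorted(abs(stat([null_draw(rng) for _ in range(n)]))
                        for _ in range(reps))
    crit = null_stats[int((1 - alpha) * reps)]
    # step 2: power = rejection rate of the same test under the alternative
    hits = sum(abs(stat([alt_draw(rng) for _ in range(n)])) > crit
               for _ in range(reps))
    return hits / reps

power = empirical_power(skewness,
                        lambda r: r.gauss(0, 1),       # H0: standard normal
                        lambda r: r.expovariate(1.0))  # H1: exponential (skewed)
print(round(power, 2))
```

The two simulation loops are independent across replications, which is why PoweR can farm them out to multiple cores via snow.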
Abstract:
Methodologically, this work innovates by combining several complementary means of observing the writing process and the revision process. The qualitative observations thus collected are transcribed and combined in chronological order of appearance, then processed and analyzed with the QDA Miner software.
Abstract:
Embedded systems are usually designed for a single task or a specified set of tasks. This specificity means that the system design, as well as its hardware/software development, can be highly optimized. Embedded software must meet requirements such as highly reliable operation on resource-constrained platforms, real-time constraints, and rapid development. This necessitates the adoption of static machine-code analysis tools, running on a host machine, for the validation and optimization of embedded system code, which can help meet all of these goals. This can significantly improve software quality and is still a challenging field.
This dissertation contributes an architecture-oriented code validation, error localization, and optimization technique that assists the embedded system designer in software debugging, making it more effective at the early detection of software bugs that are otherwise hard to detect, using static analysis of machine code. The focus of this work is to develop methods that automatically localize faults as well as optimize the code, and thus improve both the debugging process and the quality of the code. Validation is done with the help of rules of inference formulated for the target processor. The rules govern the occurrence of illegitimate or out-of-place instructions and code sequences for executing the computational and integrated peripheral functions. The stipulated rules are encoded in propositional logic formulae, and their compliance is tested individually in all possible execution paths of the application programs. An incorrect sequence of machine-code patterns is identified using slicing techniques on the control flow graph generated from the machine code. An algorithm is proposed to assist the compiler in eliminating redundant bank-switching code and deciding the optimum data allocation to banked memory, resulting in the minimum number of bank-switching instructions in embedded system software.
A relation matrix and a state transition diagram, formed for the active-memory-bank state transitions corresponding to each bank selection instruction, are used for the detection of redundant code. Instances of code redundancy, based on the stipulated rules for the target processor, are identified. This validation and optimization tool can be integrated into the system development environment. It is a novel approach, independent of the compiler/assembler and applicable to a wide range of processors once appropriate rules are formulated. Program states are identified mainly by machine-code patterns, which drastically reduces the state space and thereby contributes to improved model checking. Though the technique described is general, the implementation is architecture-oriented, and hence the feasibility study was conducted on PIC16F87X microcontrollers. The proposed tool will be very useful in steering novices towards the correct use of difficult microcontroller features when developing embedded systems.
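To make the rule-checking idea concrete, here is a minimal sketch that validates a straight-line instruction sequence against a single bank-selection rule, loosely modelled on the PIC16F87X's RP0 bank bit. The instruction encoding, the register set, and the single-path (rather than all-paths, control-flow-graph) analysis are simplifications invented for this example.

```python
# Hypothetical rule: an access to a bank-1 register is illegitimate unless
# a preceding "bsf STATUS,RP0" has selected bank 1 on that execution path.
BANK1_REGS = {"TRISB", "OPTION_REG"}

def validate(instrs):
    """Return the indices of instructions violating the bank-selection rule."""
    bank = 0          # reset state: bank 0 active
    errors = []
    for i, (op, arg) in enumerate(instrs):
        if op == "bsf" and arg == "STATUS,RP0":
            bank = 1
        elif op == "bcf" and arg == "STATUS,RP0":
            bank = 0
        elif arg in BANK1_REGS and bank != 1:
            errors.append(i)   # bank-1 register touched while bank 0 active
    return errors

program = [("bcf", "STATUS,RP0"),
           ("movwf", "TRISB"),   # violation: TRISB lives in bank 1
           ("bsf", "STATUS,RP0"),
           ("movwf", "TRISB")]   # legal: bank 1 is selected here
print(validate(program))  # [1]
```

A real checker, as described above, would evaluate such rules as propositional formulae along every path of the control flow graph rather than along one linear sequence.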
Abstract:
India is the largest producer and processor of cashew in the world. The export value of cashew was about Rupees 2600 crore during 2004-05. Kerala is the main processing and exporting centre of cashew, and in Kerala most of the cashew processing factories are located in Kollam district. The industry provides a livelihood for about 6-7 lakh employees and farmers, giving it national importance. In Kollam district alone there are more than 2.5 lakh employees directly involved in the industry, which comes to about 10 per cent of the population of the district, of whom 95 per cent are women workers. Any amount received by a woman worker tends to be utilized directly for the benefit of the family, so the link to family welfare is quite clear. Even though the Government of Kerala has incorporated the Kerala State Cashew Development Corporation (KSCDC) and the Kerala State Cashew Workers Apex Industrial Co-operative Society (CAPEX) to develop the cashew industry, the cashew industry and its ancillary industries did not grow as expected. In this context, an attempt has been made to analyze the problems and potential of the industry so as to make it viable and sustainable for perpetual employment and income generation, as well as for the overall development of Kollam district.
Abstract:
We have investigated the effects of swift heavy ion irradiation on thermally evaporated, 44 nm thick, amorphous Co77Fe23 thin films on silicon substrates, using 100 MeV Ag7+ ions at fluences of 1×10^11, 1×10^12, 1×10^13, and 3×10^13 ions/cm2. The structural modifications upon swift heavy ion irradiation were investigated using glancing-angle X-ray diffraction. The surface morphological evolution of the thin films with irradiation was studied using atomic force microscopy. Power spectral density analysis was used to correlate the roughness variation with the structural modifications observed by X-ray diffraction. Magnetic measurements were carried out using vibrating sample magnetometry, and the observed variation in the coercivity of the irradiated films is explained on the basis of stress relaxation. Magnetic force microscopy images were analyzed using the scanning probe image processor software; these results are in agreement with those obtained using vibrating sample magnetometry. The magnetic and structural properties are correlated.
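As a small illustration of the power spectral density analysis mentioned above, the sketch below computes a one-sided periodogram of a synthetic 1-D height profile and locates the dominant spatial-frequency bin. It is a generic textbook estimate under invented data, not the algorithm of the scanning probe image processor software.

```python
import cmath
import math

def periodogram(profile):
    """One-sided periodogram |X_k|^2 / N of a 1-D height profile --
    a minimal power-spectral-density estimate relating surface roughness
    to spatial frequency."""
    n = len(profile)
    mean = sum(profile) / n
    xs = [h - mean for h in profile]       # remove the mean height (DC term)
    psd = []
    for k in range(n // 2 + 1):
        X = sum(x * cmath.exp(-2j * math.pi * k * i / n)
                for i, x in enumerate(xs))
        psd.append(abs(X) ** 2 / n)
    return psd

# synthetic "roughness": a pure sinusoid with 4 cycles over 32 samples,
# so the power should concentrate in frequency bin k = 4
profile = [math.sin(2 * math.pi * 4 * i / 32) for i in range(32)]
psd = periodogram(profile)
print(max(range(len(psd)), key=psd.__getitem__))  # 4
```

For real AFM images the same idea is applied in 2-D and radially averaged, so that changes in the PSD slope can be tied to roughness evolution.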
Abstract:
Bank switching in embedded processors with a partitioned memory architecture results in code-size as well as run-time overhead. An algorithm, and its application in assisting the compiler to eliminate the redundant bank-switching code it introduces and to decide the optimum data allocation to banked memory, is presented in this work. A relation matrix, formed for the memory-bank state transitions corresponding to each bank selection instruction, is used for the detection of redundant code. Data allocation to memory is done by considering all possible permutations of memory banks and combinations of data. The compiler output corresponding to each data-mapping scheme is subjected to a static machine-code analysis, which identifies the one with the minimum number of bank-switching instructions. Even though the method is compiler-independent, the algorithm utilizes certain architectural features of the target processor. A prototype based on PIC16F87X microcontrollers is described. The method scales well to larger numbers of memory banks and to other architectures, so that high-performance compilers can integrate this technique for efficient code generation. The technique is illustrated with an example.
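A minimal sketch of the redundant-code detection idea: track the active memory bank through a straight-line instruction sequence and drop any bank-select that re-selects the already-active bank. The `(op, bank)` encoding is invented for the example; real code requires the control-flow-aware relation-matrix analysis described above.

```python
def eliminate_redundant_switches(instrs):
    """Drop bank-select instructions that re-select the active bank.
    Instructions are (op, bank) pairs; 'select' switches the active bank,
    other ops merely use it."""
    active = None      # unknown bank state on entry
    kept = []
    for op, bank in instrs:
        if op == "select":
            if bank == active:
                continue            # redundant: bank state is unchanged
            active = bank
        kept.append((op, bank))
    return kept

code = [("select", 1), ("load", 1),
        ("select", 1),              # redundant re-selection of bank 1
        ("select", 0), ("store", 0)]
print(eliminate_redundant_switches(code))
```

Choosing the data-to-bank mapping that minimizes the number of surviving `select` instructions is exactly the allocation search over bank permutations described in the abstract.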
Abstract:
Research in the area of geopolymers has gained momentum during the past 20 years. Studies confirm that geopolymer concrete has good compressive strength, tensile strength, flexural strength, modulus of elasticity, and durability; these properties are comparable with those of OPC concrete. There are many occasions when concrete is exposed to elevated temperatures, such as fire, exposure from thermal processors and furnaces, nuclear exposure, etc. In such cases, an understanding of the behaviour of concrete and structural members exposed to elevated temperatures is vital. Even though many research reports are available on the behaviour of OPC concrete at elevated temperatures, there is limited information on the behaviour of geopolymer concrete after exposure to elevated temperatures. A preliminary study was carried out for the selection of a mix proportion. The important variables considered in the present study include the alkali/fly ash ratio, percentage of total aggregate content, fine aggregate to total aggregate ratio, molarity of sodium hydroxide, sodium silicate to sodium hydroxide ratio, curing temperature, and curing period. The influence of the different variables on the engineering properties of geopolymer concrete was investigated. A study on the interface shear strength of reinforced and unreinforced geopolymer concrete, as well as OPC concrete, was also carried out. The engineering properties of fly-ash-based geopolymer concrete after exposure to elevated temperatures (ambient to 800 °C) were studied, and the corresponding results were compared with those of conventional concrete. Scanning electron microscope analysis, Fourier transform infrared analysis, X-ray powder diffractometer analysis, and thermogravimetric analysis of geopolymer mortar or paste, at ambient temperature and after exposure to elevated temperature, were also carried out in the present research work.
An experimental study was conducted on geopolymer concrete beams after exposure to elevated temperatures (ambient to 800 °C). The load-deflection characteristics, ductility, and moment-curvature behaviour of the geopolymer concrete beams after exposure to elevated temperatures were investigated. Based on the present study, the major conclusions can be summarized as follows. There is a definite proportion of the various ingredients that achieves maximum strength properties: geopolymer concrete with a total aggregate content of 70% by volume, a fine aggregate to total aggregate ratio of 0.35, NaOH molarity of 10, a Na2SiO3/NaOH ratio of 2.5, and an alkali to fly ash ratio of 0.55 gave the maximum compressive strength in the present study. Early strength development in geopolymer concrete can be achieved by the proper selection of curing temperature and curing period; with 24 hours of curing at 100 °C, 96.4% of the 28th-day cube compressive strength could be achieved in 7 days in the present study. The interface shear strength of geopolymer concrete is lower than that of OPC concrete: compared to OPC concrete, a reduction in the interface shear strength of 33% and 29% was observed for unreinforced and reinforced geopolymer specimens respectively. The interface shear strength of geopolymer concrete can be approximately estimated as 50% of the value obtained from the available equations for the interface shear strength of ordinary Portland cement concrete (the method used in Mattock and ACI). Fly-ash-based geopolymer concrete undergoes a high rate of strength loss (compressive strength, tensile strength, and modulus of elasticity) during its early heating period (up to 200 °C) compared to OPC concrete. At temperature exposures beyond 600 °C, the unreacted crystalline materials in geopolymer concrete transform into an amorphous state and undergo polymerization.
As a result, there is no further loss of strength (compressive strength, tensile strength, or modulus of elasticity) in geopolymer concrete, whereas OPC concrete continues to lose its strength properties at a faster rate beyond a temperature exposure of 600 °C. At present, no equation is available to predict the strength properties of geopolymer concrete after exposure to elevated temperatures. Based on the study carried out, new equations have been proposed to predict the residual strengths (cube compressive strength, split tensile strength, and modulus of elasticity) of geopolymer concrete after exposure to elevated temperatures (up to 800 °C). These equations could be used for material modelling until more refined equations are available. Compared to OPC concrete, geopolymer concrete shows better resistance to surface cracking when exposed to elevated temperatures: in the present study, while OPC concrete started developing cracks at 400 °C, geopolymer concrete did not show any visible cracks up to 600 °C and developed only minor cracks at an exposure temperature of 800 °C. Geopolymer concrete beams develop cracks at earlier load stages if they have been exposed to elevated temperatures. Even though the material strength of geopolymer concrete does not decrease beyond 600 °C, the flexural strength of the corresponding beams reduces rapidly after exposure to temperatures above 600 °C, primarily due to the rapid loss of strength of the steel. With an increase in temperature, the curvature at the yield point of a geopolymer concrete beam increases and its ductility thereby reduces; in the present study, compared to the ductility at ambient temperature, the ductility of geopolymer concrete beams reduced by 63.8% after exposure to 800 °C. Appropriate equations have been proposed to predict the service-load crack width of geopolymer concrete beams exposed to elevated temperatures.
These equations could be used to limit the service load on geopolymer concrete beams exposed to elevated temperatures (up to 800 °C) for a predefined crack width (between 0.1 mm and 0.3 mm), or vice versa. The moment-curvature relationship of geopolymer concrete beams at ambient temperature is similar to that of RCC beams and can be predicted using the strain compatibility approach. Once exposed to an elevated temperature, however, the strain compatibility approach underestimates the curvature of geopolymer concrete beams between the first cracking and yielding points.
Abstract:
For the theoretical investigation of local phenomena (adsorption at surfaces, defects or impurities within a crystal, etc.) one can assume that the effects caused by the local disturbance are limited to the neighbouring particles. With this model, well known as the cluster approximation, an infinite system can be simulated by a much smaller segment of the surface (a cluster). The size of this segment varies strongly for different systems. Calculations of the convergence of the bond distance and binding energy of an aluminium atom adsorbed on an Al(100) surface showed that more than 100 atoms are necessary to obtain a sufficient description of the surface properties. With a fully quantum-mechanical approach, however, systems of this size cannot be calculated because of the demands on computer memory and processor speed. We therefore developed an embedding procedure for the simulation of surfaces and solids, in which the whole system is partitioned into several parts that are treated differently: the internal part (the cluster), located near the site of the adsorbate, is calculated fully self-consistently and is embedded into an environment, while the influence of the environment on the cluster enters the relativistic Kohn-Sham equations as an additional, external potential. The basis of the procedure is density functional theory. This means, however, that the choice of the electronic density of the environment determines the quality of the embedding procedure. The environment density was modelled in three different ways: atomic densities; densities transferred from a large preceding calculation without embedding; and copied bulk densities. The embedding procedure was tested on the atomic adsorption of Al on Al(100) and of Cu on Cu(100). The result was that, if the environment is chosen appropriately, for the Al system only 9 embedded atoms are needed to reproduce the results of exact slab calculations.
For the Cu system, calculations without the embedding procedure were first performed, with the result that 60 atoms already suffice as a surface cluster. Using the embedding procedure, the same values were obtained with only 25 atoms. This is a substantial improvement if one takes into consideration that the calculation time increases cubically with the number of atoms. With the embedding method, infinite systems can be treated by molecular methods. Additionally, the program code was extended with the capability of performing molecular-dynamics simulations. It is now possible, apart from the previous fixed-core calculations, to also investigate the structures of small clusters and surfaces. As a first application we studied the adsorption of Cu on Cu(100): we calculated the relaxed positions of the atoms located close to the adsorption site and afterwards performed the fully quantum-mechanical calculation of this system, repeating the procedure for different distances from the surface. Thus a realistic adsorption process could be examined for the first time. It should be remarked that, for the Cu reference calculations (without embedding), we began to parallelize the entire program code; only because of this were the investigations of the 100-atom Cu surface clusters possible. Owing to the good efficiency of both the parallelization and the developed embedding procedure, we will be able to apply the combination in the future. This will make it possible to treat larger systems, and in these areas it will be possible to bring in results of fully relativistic molecular calculations, which will be very interesting especially for the regime of heavy systems.
Abstract:
The Scheme86 and HP Precision architectures represent different trends in computer processor design. The former uses wide micro-instructions, parallel hardware, and a low-latency memory interface; the latter encourages pipelined implementation and visible interlocks. To compare the merits of these approaches, algorithms frequently encountered in numerical and symbolic computation were hand-coded for each architecture. Timings were done in simulators, and the results were evaluated to determine the speed of each design. Based on these measurements, conclusions were drawn as to which aspects of each architecture are suitable for a high-performance computer.