994 results for Transition P systems
Abstract:
The application of Concurrency Theory to Systems Biology is still at an early stage. The metaphor of cells as computing systems, put forward by Regev and Shapiro, opened the way to the use of concurrent languages for modelling biological systems, and the peculiar characteristics of these systems led to the design of many bio-inspired formalisms that achieve greater faithfulness and specificity. In this thesis we present pi@, an extremely simple and conservative extension of the pi-calculus that represents a keystone in this respect thanks to its expressiveness. The pi@ calculus is obtained by adding polyadic synchronisation and priority to the pi-calculus, in order to achieve compartment semantics and atomicity of complex operations, respectively. In its direct application to biological modelling, the stochastic variant of the calculus, Spi@, is shown to model consistently several phenomena, such as the formation of molecular complexes, the hierarchical subdivision of the system into compartments, inter-compartment reactions, and the dynamic reorganisation of the compartment structure consistently with volume variation. The pivotal role of pi@ is evidenced by its ability to encode several bio-inspired formalisms in a compositional way, so that it represents the optimal core of a framework for the analysis and implementation of bio-inspired languages. In this respect, encodings of BioAmbients, Brane Calculi and a variant of P Systems into pi@ are formalised. The conciseness of their translations into pi@ allows an indirect comparison of these formalisms by means of their encodings; furthermore, it provides a ready-to-run implementation obtained with minimal effort, whose correctness is guaranteed by the correctness of the respective encoding functions. Further results of general validity are established on the expressive power of priority. Several impossibility results are described, which establish the superior expressiveness of prioritised languages and the problems that arise when attempting to provide a parallel implementation for them. To this end, a new setting in distributed computing (the last man standing problem) is singled out and exploited to prove the impossibility of a purely parallel implementation of priority by means of point-to-point or broadcast communication.
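As a schematic illustration only (the concrete pi@ syntax used in the thesis may differ), polyadic synchronisation lets a channel be a vector of names, so that an output and an input interact only when every name component matches; tagging a channel with a compartment name therefore confines the interaction to that compartment:

\overline{comp \cdot a}\langle z \rangle.P \;\mid\; comp \cdot a(x).Q \;\longrightarrow\; P \mid Q\{z/x\}

Priority then allows a chain of such interactions to complete before any lower-priority interaction is considered, which is what gives complex operations their atomic, transaction-like semantics.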
Abstract:
The aim of this work was to develop novel organic materials that absorb in the NIR region. For this purpose, more highly conjugated carbinols and tetrakis(stilbenyl)methanes were synthesised, and carbocation salts were prepared from the carbinols with trifluoroacetic acid. The oligomers were prepared by a convergent synthesis strategy in which most of the synthesised precursors could be used in several subsequent reactions. The Horner reaction was employed almost exclusively as the key step of the syntheses; its advantages lie in the easy accessibility of the starting materials, the high product yields and, above all, the high trans selectivity of the reaction. The effective conjugation lengths (ECL) of the carbinols and carbocations were calculated with the aid of the fit function. With increasing extension of the π-system, the HOMO-LUMO gap of the lowest-energy π-π* transition decreases and the absorption is shifted bathochromically (to longer wavelengths) towards the visible region. Substitution with a dialkylamino group likewise causes a bathochromic shift of the absorption band. The reverse holds for the carbocations: here the dialkylamino-substituted compounds absorb at shorter wavelengths than the alkoxy-substituted ones, because when the cations are generated from the carbinols the nitrogen, as the most electron-rich site, is protonated first. The amount of added trifluoroacetic acid has a decisive influence on the formation of the carbocations from the carbinols (see Table 4-3). Whereas for the alkoxy-substituted carbinols the equilibrium between carbinol and carbocation is already reached after addition of about 4 % trifluoroacetic acid, the dialkylamino-substituted carbinols require a considerably higher acid concentration (10 %) for complete protonation (n = 4). Both systems have in common that, once the required equilibrium concentration of trifluoroacetic acid is exceeded, a further increase in the amount of acid leads to degradation of the chromophore. Apparently the double bond is then protonated, a process that is, however, reversible and does not involve isomerisation. E/Z isomerisations do occur, however, upon irradiation with light of wavelength λ = 366 nm. For the tetrakis(stilbenyl)methane 25a the photostationary equilibrium lies at about 1/9 (Z/E); for the tris(stilbenyl)methanol 19a it lies at 1/7 (Z/E).
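The fit function used for the effective conjugation lengths is not given in the abstract; a common choice for oligomer series of this kind (stated here only as an assumption, in the spirit of Meier's exponential-saturation fit, not necessarily the one used in this work) is

\lambda_{\max}(n) = \lambda_{\infty} - (\lambda_{\infty} - \lambda_{1})\, e^{-b(n-1)}

where \lambda_{1} is the absorption maximum of the shortest member, \lambda_{\infty} the extrapolated limit for the infinite chain, b a fit parameter, and the effective conjugation length is the smallest n for which \lambda_{\max}(n) reaches \lambda_{\infty} within the experimental resolution.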
Abstract:
The potential and adaptive flexibility of Population Dynamics P systems (PDP) for studying population dynamics suggest that they may be suitable for modelling complex fluvial ecosystems, characterized by a composition of dynamic habitats with many variables that interact simultaneously. Taking as a case study a reservoir occupied by the zebra mussel Dreissena polymorpha, we designed a computational model based on P systems to study the population dynamics of the larvae, in order to evaluate management actions to control or eradicate this invasive species. The population dynamics of the species was simulated under different scenarios, ranging from no change in water flow, through weekly variations with different flow rates, to the actual hydrodynamic situation of an intermediate flow rate. Our results show that PDP models can be very useful tools for modelling complex, partially desynchronized processes that work in parallel. This allows the study of complex hydroecological processes such as the one presented here, in which reproductive cycles, temperature and water dynamics are involved in the desynchronization of the population dynamics both within areas and among them. The results obtained may be useful in the management of other reservoirs with similar hydrodynamic situations in which the presence of this invasive species has been documented.
Abstract:
Membrane systems are parallel, bio-inspired computing systems that simulate the behaviour of membranes when processing information. As a branch of unconventional computing, P systems have proven effective in solving complex problems. A software technique is presented here that obtains good results when dealing with such problems. The rule application phase is studied and updated accordingly to obtain the desired results. Certain rules are candidates for elimination, which can improve the model in terms of execution time.
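The abstract does not detail the technique; the following toy sketch (not the authors' algorithm; all names are hypothetical) illustrates a rule application phase over an object multiset in which rules that cannot fire for the current configuration are pruned before the application loop, which is where eliminating candidate rules saves time:

from collections import Counter
import random

def applicable(rule, region):
    # A rule (lhs, rhs) is applicable if the region contains its whole left-hand side.
    lhs, _ = rule
    return all(region[obj] >= n for obj, n in lhs.items())

def prune_rules(rules, region):
    # Candidate elimination: drop rules that cannot fire for the current contents,
    # so the application loop iterates over fewer rules.
    return [r for r in rules if applicable(r, region)]

def apply_phase(rules, region):
    # One non-deterministic, maximally parallel rule application phase.
    active = prune_rules(rules, region)
    produced = Counter()
    while active:
        lhs, rhs = random.choice(active)
        region.subtract(lhs)          # consume the left-hand side
        produced.update(rhs)          # products become available next step
        active = prune_rules(active, region)
    region.update(produced)
    return region

region = Counter({"a": 3, "b": 1})
rules = [(Counter({"a": 1}), Counter({"b": 2})),          # a  -> bb
         (Counter({"a": 1, "b": 1}), Counter({"c": 1}))]  # ab -> c
print(apply_phase(rules, region))

The pruning step is repeated after each application because consuming objects may disable further rules; the produced objects only become available at the end of the phase, mimicking the usual P-system step semantics.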
Abstract:
Theoretical computer science is a fundamental discipline, since most advances in computing rest on solid results from this field. In recent years, owing both to the increase in computing power and to the approaching physical limits of the miniaturisation of electronic components, interest has revived in formal models of computation that offer alternatives to the classical von Neumann architecture. Many of these models are inspired by the way nature efficiently solves very complex problems; most are computationally complete and intrinsically parallel. For this reason they are coming to be regarded as new computing paradigms (natural computing). We therefore have at our disposal a range of abstract architectures as powerful as conventional computers and sometimes more efficient: some of them improve the performance, at least in time, on NP-complete problems, providing non-exponential costs. The formal representation of networks of evolutionary processors requires both context-free and context-dependent constructions; in other words, a complete formal representation of a NEP generally involves both syntactic and semantic restrictions, meaning that many apparently (syntactically) correct representations of particular instances of these devices would make no sense because they might not satisfy further semantic constraints. Applying semantic grammatical evolution to NEPs involves choosing a subset of them within which to search for those that solve a given problem. This work studies a model inspired by cell biology called networks of evolutionary processors [55, 53], that is, networks whose nodes are very simple processors capable of performing only one type of point mutation (insertion, deletion or substitution of a symbol). These nodes are associated with a filter defined by some random-context or membership condition. Networks of at most six nodes whose filters are defined by membership in regular languages are able to generate all recursively enumerable languages regardless of the underlying graph. This result is not surprising, since similar results have been documented in the literature. If one considers networks whose nodes have filters defined by random contexts (which seem closer to biological implementations), then more complex languages, such as non-context-free languages, can be generated. However, these very simple mechanisms are able to solve complex problems in polynomial time; a linear-time solution to an NP-complete problem, the 3-colouring problem, has been presented. As a first significant contribution, a new dynamics for networks of evolutionary processors with non-deterministic and massively parallel behaviour has been proposed [55], so that all the research work in the area of processor networks can be transferred to massively parallel networks. For example, massively parallel networks can be modified according to certain rules in order to move the filters onto the connections; each connection is then viewed as a bidirectional channel in which the input and output filters coincide. Despite this, these networks are computationally complete.
Other kinds of rules can also be implemented to extend this computational model: the point mutations associated with each node are replaced by the splicing operation. This new type of processor is called a splicing processor, and the resulting computational model, accepting networks of splicing processors (ANSP), is to some extent similar to distributed test-tube systems based on splicing. In addition, a new model has been defined [56], networks of evolutionary processors with filters on the connections, in which the processors only have rules and the filters have been moved to the connections. Under certain restrictions this model is equivalent to classical networks of evolutionary processors; without those restrictions the proposed model is a superset of classical NEPs. The main advantage of moving the filters to the connections lies in the simplicity of the modelling. A further contribution of this work is the design of a Java simulator [54, 52] for the networks of evolutionary processors proposed in this thesis. Regarding the term "evolutionary processor" used in this thesis, the computational process described here is not exactly an evolutionary process in the Darwinian sense, but the rewriting operations considered may be interpreted as mutations and the filtering processes may be viewed as selection; moreover, this work does not address the possible biological implementation of these networks, important though it is. Throughout the thesis, the complexity measure adopted for ANSPs is what we call size, namely the number of nodes of the underlying graph. It has been shown that any recursively enumerable language L can be accepted by an ANSP in which the number of processors is linearly bounded by the cardinality of the tape alphabet of a Turing machine recognising L. Following the concept of universal ANSPs introduced by Manea [65], it has been proved that an ANSP with a fixed graph structure can accept any recursively enumerable language. An ANSP can thus be regarded as a problem-solving device with a further property that is relevant from a practical point of view: a universal ANSP can be defined as a subnetwork in which only a limited number of parameters depend on the language. This feature can be interpreted as a method for solving any NP problem in polynomial time using an ANSP of constant size, namely thirty-one. It means that the solution of any NP problem is uniform in the sense that the network, except for the universal subnetwork, can be seen as a program: adapting it to the instance of the problem to be solved amounts to choosing the filters and rules that do not belong to the universal subnetwork. An interesting open problem, from our point of view, is how to choose the optimal size of this network. ---ABSTRACT--- This thesis deals with recent research work in the area of natural computing (bio-inspired models), more precisely networks of evolutionary processors, first developed by Victor Mitrana and based on the P systems introduced by Gheorghe Paun. In these models there is a set of processors connected in an underlying undirected graph; each processor holds a multiset of objects (strings) and a set of rules, named evolution rules, that transform the objects inside it [55, 53].
These objects can be sent and received over the graph connections provided they satisfy the constraints defined by the input and output filters of the processors. This symbolic model, which is non-deterministic (processors are not synchronized) and massively parallel [55] (all rules can be applied in one computational step), has important properties regarding the solution of NP problems in linear time and, of course, with linear resources. There is a great number of variants, such as hybrid networks, splicing processors, etc., that give the model computational power equivalent to that of Turing machines. The origin of networks of evolutionary processors (NEPs for short) is a basic architecture for parallel and distributed symbolic processing, related to the Connection Machine as well as to the Logic Flow paradigm, which consists of several processors, each placed in a node of a virtual complete graph and able to handle data associated with the respective node. All the nodes send their data simultaneously, and the receiving nodes likewise handle all the arriving messages simultaneously, according to some strategy. In a series of papers, each node is viewed as a cell with genetic information encoded in DNA sequences that may evolve by local evolutionary events, that is, point mutations; each node is specialized for just one of these evolutionary operations. Furthermore, the data in each node are organized as multisets of words (each word appears in an arbitrarily large number of copies), and all the copies are processed in parallel such that all the events that can take place do actually take place. Obviously, the computational process just described is not exactly an evolutionary process in the Darwinian sense, but the rewriting operations we have considered might be interpreted as mutations and the filtering process might be viewed as a selection process. Recombination is missing, but it has been asserted that evolutionary and functional relationships between genes can be captured by taking only local mutations into consideration. It is clear that the filters associated with each node allow strong control of the computation: every node has an input and an output filter, and two nodes can exchange data only if the data passes the output filter of the sender and the input filter of the receiver. Moreover, if some data sent out by a node cannot enter any node, it is lost. In this work we simplify the ANSP model considered above by moving the filters from the nodes to the edges. Each edge is viewed as a two-way channel such that the input and output filters coincide. Clearly, the possibility of controlling the computation in such networks seems diminished; for instance, there is no possibility of losing data during the communication steps. In spite of this, and of the fact that splicing is not a powerful operation (recall that splicing systems generate only regular languages), we prove here that these devices are computationally complete. As a consequence, we propose characterizations of two complexity classes, namely NP and PSPACE, in terms of accepting networks of restricted splicing processors with filtered connections. We also propose a uniform linear-time solution to SAT based on ANSPFCs with linearly bounded resources. This solution should be understood correctly: we do not solve SAT in linear time and space.
Since every word and auxiliary word appears in an arbitrarily large number of copies, one can generate in linear time, by parallelism and communication, an exponential number of words, each of them in an exponential number of copies. However, this does not seem to be a major drawback, since by PCR (polymerase chain reaction) one can generate an exponential number of identical DNA molecules in a linear number of reactions. It is worth mentioning that the ANSPFC constructed above remains unchanged for any instance with the same number of variables; therefore, the solution is uniform in the sense that the network, excepting the input and output nodes, may be viewed as a program: according to the number of variables, we choose the filters, the splicing words and the rules, then we assign all possible values to the variables and evaluate the formula. We proved that ANSPs are computationally complete. Do ANSPFCs remain computationally complete? If not, what other problems can be efficiently solved by these ANSPFCs? Moreover, the complexity class NP is exactly the class of all languages decided by ANSPs in polynomial time. Can NP be characterized in a similar way with ANSPFCs?
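As context for the model described above, the following toy sketch (not the thesis' Java simulator; names and filter choices are hypothetical) shows the two alternating phases of a network of evolutionary processors: an evolutionary step in which every node applies its single kind of point mutation to all of its words in parallel, and a communication step in which words migrate along edges if they pass the relevant output and input filters:

import re

def substitutions(word, a, b):
    # All words obtained by substituting one occurrence of a with b.
    return {word[:i] + b + word[i+1:] for i, c in enumerate(word) if c == a}

def insertions(word, b):
    return {word[:i] + b + word[i:] for i in range(len(word) + 1)}

def deletions(word, a):
    return {word[:i] + word[i+1:] for i, c in enumerate(word) if c == a}

class Node:
    def __init__(self, words, step, out_filter, in_filter):
        self.words = set(words)        # each word in arbitrarily many copies
        self.step = step               # the single point mutation this node performs
        self.out_filter = out_filter   # regex: what may leave the node
        self.in_filter = in_filter     # regex: what may enter the node

def evolve_and_communicate(nodes, edges):
    # Evolutionary step: every node rewrites all of its words in parallel
    # (a node whose mutation produces nothing keeps its words unchanged).
    for n in nodes:
        n.words = set().union(*(n.step(w) for w in n.words)) or n.words
    # Communication step: a word travels along an edge if it passes the
    # sender's output filter and the receiver's input filter.
    for a, b in edges:
        to_b = {w for w in a.words
                if re.fullmatch(a.out_filter, w) and re.fullmatch(b.in_filter, w)}
        to_a = {w for w in b.words
                if re.fullmatch(b.out_filter, w) and re.fullmatch(a.in_filter, w)}
        a.words |= to_a
        b.words |= to_b

n1 = Node({"aab"}, lambda w: substitutions(w, "a", "c"), r"c.*", r".*")
n2 = Node({"b"}, lambda w: insertions(w, "d"), r".*", r"c.*")
evolve_and_communicate([n1, n2], [(n1, n2)])
print(n1.words, n2.words)

In the filtered-connection variant discussed above, the two regular expressions attached to each node would instead collapse into a single filter attached to each edge.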
Abstract:
The results presented in this doctoral thesis fall within so-called cellular computing with membranes, a new research branch within natural computing created by Gh. Paun in 1998, which is why these models are usually called P systems. This new distributed computing model is inspired by the structure and functioning of the cell. The aim of this thesis has been to analyse the computational power and efficiency of these cellular computing systems. Specifically, two types of P systems have been analysed: on the one hand, spiking neural P systems, and on the other, P systems with proteins on membranes. For the first type, the results obtained show that these systems can retain their universality even when many of their features are restricted or even removed. For the second type, the computational efficiency is analysed and it is shown that they can solve problems in the complexity class PSPACE in polynomial time. Analysis of computational power: spiking neural P systems (SN P systems for short) are inspired by the functioning of neurons and by the way spikes propagate through synaptic networks. Bio-inspired SN P systems possess a wide range of features that make them universal and therefore equivalent, in computational power, to a Turing machine. These systems are computationally powerful, but as defined they incorporate numerous features, perhaps too many. In (Ibarra et al. 2007) it was shown that their functionality can be restricted without compromising their universality. The results presented here continue that line of work and provide new normal forms, that is, new simplified variants of SN P systems with a minimal set of features that nevertheless retain universal computational power. Analysis of computational efficiency: this thesis studies the computational efficiency of so-called P systems with proteins on membranes. It is shown that this computing model is equivalent to parallel random access machines (PRAM) or to alternating Turing machines, since a P system with proteins is proved able to solve a PSPACE-complete problem such as QSAT (the quantified Boolean satisfiability problem) in polynomial time. This variant of P systems with proteins is very efficient thanks to the power of proteins to catalyse intercellular communication processes. ABSTRACT The results presented in this thesis belong to membrane computing, a new research branch within natural computing. This branch was created by Gh. Paun in 1998, hence its models usually receive the name of P systems. This new distributed computing model is inspired by the structure and functioning of the cell. The aim of this thesis is to analyse the efficiency and computational power of these cellular computing systems. Specifically, two different classes of P systems have been analysed: on the one hand spiking neural P systems, and on the other P systems with proteins on membranes. For the first class it is shown that it is possible to reduce or restrict the features of these systems without loss of computational power.
For the second class the computational efficiency is analysed, showing that PSPACE problems can be solved in polynomial time. Computational power analysis: spiking neural P systems (SN P systems for short) are inspired by the way neural cells operate, sending spikes through synaptic networks. Bio-inspired SN P systems possess a large range of features that make these systems universal and therefore equivalent in computational power to a Turing machine. Such systems are computationally powerful, but by definition they incorporate a lot of features, perhaps too many. In (Ibarra et al. 2007) it was shown that their functionality may be limited without compromising their universality. The results presented herein continue the line of work of (Ibarra et al. 2007), providing new normal forms, that is, new simplified SN P variants with a minimum set of features that keep the universal computational power. Computational efficiency analysis: in this thesis we study the computational efficiency of P systems with proteins on membranes. We show that this computational model is equivalent to the parallel random access machine (PRAM) or the alternating Turing machine, since P systems with proteins can solve a PSPACE-complete problem, QSAT (the quantified propositional satisfiability problem), in polynomial time. This variant of P systems with proteins is very efficient thanks to the power of proteins to catalyse intercellular communication processes.
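The abstract does not reproduce the rule format; the sketch below assumes the standard firing rules usually associated with SN P systems, of the form E/a^c -> a; d (consume c spikes when the spike content matches the regular expression E, then emit one spike after delay d), and is only a minimal illustration, not the thesis' construction:

import re

class Neuron:
    def __init__(self, spikes, rules):
        self.spikes = spikes      # current number of spikes a^n
        self.rules = rules        # list of (regex E, consumed c, delay d)
        self.countdown = None     # remaining delay of a fired rule

    def step(self, out_neurons):
        if self.countdown is not None:       # neuron is closed (waiting out a delay)
            self.countdown -= 1
            if self.countdown == 0:
                for m in out_neurons:        # emit one spike along every synapse
                    m.spikes += 1
                self.countdown = None
            return
        for E, c, d in self.rules:           # pick the first enabled rule (a real
            if re.fullmatch(E, "a" * self.spikes):   # system chooses non-deterministically)
                self.spikes -= c
                if d > 0:
                    self.countdown = d
                else:
                    for m in out_neurons:    # no delay: spike immediately
                        m.spikes += 1
                break

n1, n2 = Neuron(3, [(r"a+", 1, 0)]), Neuron(0, [])
n1.step([n2])
print(n1.spikes, n2.spikes)   # 2 1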
Abstract:
The criteria involved in the degradation of polyethylene-based degradable polymer samples have been investigated, with a view to establishing a clearer mechanism of photo-biodegradation. The compatibility of degradable polymer samples during materials recycling was also studied. Commercial and laboratory-prepared degradable polymer samples were oxidised in different environments, and the oxidation products formed were studied using various analytical chromatographic and spectroscopic techniques such as HPLC, FT-IR and NMR. It was found that commercial degradable polymer samples based on ECO systems degrade predominantly via the Norrish II process, whereas the other degradable systems studied (starch-filled polyethylene systems and transition-metal systems, including metal carboxylate based polyethylene systems and the photoantioxidant-activator systems) photodegrade essentially via the Norrish I process. In all cases the major photooxidation products extracted from the degradable polymer samples were found to be carboxylic acids, although in the polymer itself a mixture of carbonyl-containing products such as esters, lactones, ketones and aldehydes was observed. The study also found that the formation of these hydrophilic carbonyl products causes surface swelling of the polymer, thus making bioerosion possible. It was therefore concluded that environmental degradation of LDPE is a two-step process: the initiation stage is oxidation of the polymer, which gives rise to bioassimilable products that are subsequently bioeroded in the second stage (the biodegradation step). Recycling of the degradable polymer samples as 10% homogeneous and heterogeneous blends was carried out using a single-screw extruder (180°C and 210°C) and an internal mixer (190°C). The study showed that commercial degradable polymer samples may be recycled with a minimal loss of their properties.
Abstract:
The aim of this study is to describe the changes in nursing education during the process prior to and after the establishment of democracy in Spain. It begins with the hypothesis that differences in social and political organization influenced the way the system of nursing education evolved, keeping it in line with neopositivistic schemes and exclusively technical approaches up until the advent of democracy. The evolution of a specific profile for nursing within the educational system has been shaped by the relationship between the systems of social and political organization in Spain. To examine the insertion of subjects such as the anthropology of healthcare into education programs for Spanish nursing, one must consider the cultural, intercultural and transcultural factors that are key to understanding the changes in nursing education that allowed for the adoption of a holistic approach in the curricula. Until the arrival of democracy in 1977, Spanish nursing education was solely technical in nature and the role of nurses was limited to the tasks and procedures defined by the bureaucratic thinking characteristic of the rational-technological paradigm. Consequently, during the long period prior to democracy, nursing in Spain was under the influence of neopositivistic and technical thinking, which had its effect on educational curricula. The addition of humanities and anthropology to the curricula, which facilitated a holistic approach, occurred once nursing became a field of study at the university level in 1977, a period that coincided with the beginnings of democracy in Spain.
Abstract:
We propose a method to compute the entanglement degree E of bipartite systems of dimension 2 x 2 and demonstrate that the partial transposition of the density matrix, the Peres criterion, arises as a consequence of our method. Differently from other existing measures of entanglement, the one presented here makes it possible to derive a criterion to verify whether an arbitrary bipartite entangled state will suffer sudden death (SD) based only on the initial-state parameters. Our method also makes it possible to characterize SD as a dynamical quantum phase transition, with order parameter epsilon having a universal critical exponent -1/2. (C) 2009 Elsevier Inc. All rights reserved.
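The abstract refers to the Peres criterion (positivity of the partial transpose); as a generic illustration of that standard test (not of the authors' measure E or of their sudden-death criterion), the check can be carried out numerically as follows, here on a two-qubit Werner state, which is entangled exactly for p > 1/3:

import numpy as np

def partial_transpose(rho, dims=(2, 2)):
    # Transpose the second subsystem of a bipartite density matrix.
    dA, dB = dims
    r = rho.reshape(dA, dB, dA, dB)
    return r.transpose(0, 3, 2, 1).reshape(dA * dB, dA * dB)

def peres_entangled(rho):
    # For 2x2 (and 2x3) systems the Peres criterion is necessary and sufficient:
    # rho is entangled iff its partial transpose has a negative eigenvalue.
    return np.min(np.linalg.eigvalsh(partial_transpose(rho))) < -1e-12

# Werner state: p |psi-><psi-| + (1 - p) I/4.
psi_minus = np.array([0, 1, -1, 0]) / np.sqrt(2)
def werner(p):
    return p * np.outer(psi_minus, psi_minus) + (1 - p) * np.eye(4) / 4

for p in (0.2, 0.5, 0.9):
    print(p, peres_entangled(werner(p)))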
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
In this paper an unprecedented thermo-reversible sol-gel transition for titania nanoparticles dispersed in a solution of p-toluene sulfonic acid (PTSH) in isopropanol is reported. The sol formed by the thermo-hydrolysis at 60 degrees C of titanium tetraisopropoxide (Ti(OiPr)4) reversibly changes into a turbid gel upon cooling to room temperature. Turbidimetric measurements performed on samples containing different nominal acidity ratios (A = [PTSH]/[Ti]) show that the gel transformation temperature increases from 20 to 35 degrees C as the [PTSH]/[Ti] ratio increases from 0.2 to 2.0. SAXS results indicate that the thermo-reversible gelation is associated with a reversible aggregation of a monodisperse set of titania nanoparticles with an average gyration radius of approximately 2 nm. From the different PTSH species evidenced by Raman spectroscopy and TG/DTA of dried gels, we propose that the thermo-reversible gelation in this system is induced by the formation of a supramolecular network in which the protonated surface of the nanoparticles is interconnected through cooperative hydrogen bonds involving the -SO3 groups of p-toluene sulfonic acid. (C) 2009 Elsevier Ltd. All rights reserved.
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
A new criterion has recently been proposed combining the topological instability (lambda criterion) and the average electronegativity difference (Delta e) among the elements of an alloy to predict and select new glass-forming compositions. In the present work, this criterion (lambda.Delta e) is applied to the Al-Ni-La and Al-Ni-Gd ternary systems, and its predictability is validated using literature data for both systems and, additionally, our own experimental data for the Al-La-Ni system. The compositions with a high lambda.Delta e value found in each ternary system exhibit a very good correlation with the glass-forming ability of the different alloys, as indicated by their supercooled liquid regions (Delta T(x)) and their critical casting thicknesses. In the case of the Al-La-Ni system, the alloy with the largest lambda.Delta e value, La(56)Al(26.5)Ni(17.5), exhibits the highest glass-forming ability verified for this system. Therefore, the combined lambda.Delta e criterion is a simple and efficient tool for selecting new glass-forming compositions in Al-Ni-RE systems. (C) 2011 American Institute of Physics. [doi: 10.1063/1.3563099]