348 results for Coloring
Abstract:
Modern GPUs are well suited to intensive computational tasks and massive parallel computation. Sparse matrix-vector multiplication and the triangular linear solver are among the most important and heavily used kernels in scientific computation, and several challenges in developing a high-performance kernel combining these two modules are investigated. The main interest is in solving linear systems derived from elliptic equations discretized with triangular elements; the resulting linear system has a symmetric positive-definite matrix. The sparse matrix is stored in the compressed sparse row (CSR) format. A CUDA algorithm is proposed to execute the matrix-vector multiplication directly on the CSR format. A dependence-tree algorithm is used to determine which variables the triangular solver can compute in parallel. To increase the number of parallel threads, a graph coloring algorithm is applied to reorder the mesh numbering in a pre-processing phase. The proposed method is compared with available parallel and serial libraries. The results show that the proposed method reduces the cost of the matrix-vector multiplication, and the pre-processing associated with the triangular solver needs to be executed only once. The conjugate gradient method was implemented and showed a similar convergence rate for all the compared methods, with the proposed method achieving significantly smaller execution times.
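As an illustration of the row-parallel structure such a kernel exploits, here is a minimal NumPy sketch of CSR matrix-vector multiplication (the function name and the 2x2 example are ours, not the thesis's; in the CUDA version each row would typically map to a thread or warp):

```python
import numpy as np

def csr_spmv(indptr, indices, data, x):
    """y = A @ x for a sparse matrix A stored in CSR format.

    Row i owns the nonzeros data[indptr[i]:indptr[i+1]], whose column
    positions are indices[indptr[i]:indptr[i+1]].
    """
    n = len(indptr) - 1
    y = np.zeros(n)
    for i in range(n):  # in a CUDA kernel, one thread (or warp) per row
        lo, hi = indptr[i], indptr[i + 1]
        y[i] = data[lo:hi] @ x[indices[lo:hi]]
    return y

# 2x2 example: A = [[4, 1], [0, 3]]
indptr, indices, data = [0, 2, 3], [0, 1, 1], np.array([4.0, 1.0, 3.0])
print(csr_spmv(indptr, indices, data, np.array([1.0, 2.0])))  # -> [6. 6.]
```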
Abstract:
A new parallel algorithm for simultaneous untangling and smoothing of tetrahedral meshes is proposed in this paper. We provide a detailed analysis of its performance on shared-memory many-core computer architectures. This performance analysis includes the evaluation of execution time, parallel scalability, load balancing, and parallelism bottlenecks. Additionally, we compare the impact of three previously published graph coloring procedures on the performance of our parallel algorithm. We use six benchmark meshes with a wide range of sizes. Using these experimental data sets, we describe the behavior of the parallel algorithm for different data sizes. We demonstrate that this algorithm is highly scalable when it runs on two different high-performance many-core computers with up to 128 processors...
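For context, graph coloring serves here to split the mesh vertices into independent sets: vertices of one color share no edge, so they can be untangled or smoothed concurrently without two threads moving neighboring vertices. A minimal greedy sketch (our illustration, not one of the three published coloring procedures the paper compares):

```python
from collections import defaultdict

def coloring_to_batches(adj):
    """Greedy-color an adjacency graph, then group vertices by color.

    adj: dict mapping each vertex to its list of neighbors.
    Vertices in the same returned batch are pairwise non-adjacent,
    so a smoothing kernel may update a whole batch in parallel.
    """
    color = {}
    for v in adj:  # visiting order affects the color count, not correctness
        taken = {color[u] for u in adj[v] if u in color}
        color[v] = min(c for c in range(len(adj)) if c not in taken)
    batches = defaultdict(list)
    for v, c in color.items():
        batches[c].append(v)
    return [batches[c] for c in sorted(batches)]

# Path a-b-c: {a, c} can be smoothed in parallel, then b.
print(coloring_to_batches({"a": ["b"], "b": ["a", "c"], "c": ["b"]}))
```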
Abstract:
The tyrosinase from Streptomyces castaneoglobisporus HUT6202 is particularly well suited for biochemical and structural studies because it occurs as a globular binary protein complex. As a bacterial protein, the Streptomyces tyrosinase can be cloned into an E. coli expression strain and expressed there. In this work, the tyrosinase was heterologously expressed together with its auxiliary protein (ORF378) polycistronically in Escherichia coli BL21 (DE3) cells. Expression yielded a functional binary protein complex, which could be purified to up to 95% purity via a C-terminal His tag followed by size-exclusion chromatography, with a yield of up to 0.8 mg/L. The purified binary complex of tyrosinase and auxiliary protein was examined by isoelectric focusing to determine the isoelectric points of the two proteins (pI 4.8 for the tyrosinase and 4.9 for the auxiliary protein), which deviate strongly from the pIs calculated from the amino acid sequences (6.2 and 6.4). Furthermore, the substrate specificity of the tyrosinase was tested: it preferentially converted caffeic acid (Km 1.4 mM; Vmax 21.5 µM min-1) and p-coumaric acid, whereas tyrosine and tyramine were not converted and L-Dopa only to a small extent. It was also shown that enzymatic turnover takes place only after the tyrosinase has been activated with CuSO4; no activation with SDS was observed. Regarding the activation of the binary complex, dynamic light scattering and analytical ultracentrifugation suggest a dissociation of the complex into its monomeric components after activation with CuSO4, which would confirm the hitherto hypothetical activation mechanism of the tyrosinase from S. castaneoglobisporus. In silico work was carried out to gain a deeper understanding of the substrate specificity. Substrate docking experiments confirmed the results obtained in the laboratory, and a structural analysis points to steric hindrance of substrate uptake for substrates with secondary amino groups. Analyses of the protein interface between tyrosinase and auxiliary protein revealed copper-fixing folding motifs on the surface of the auxiliary protein, mostly consisting of 3-4 polar amino acids capable of fixing a copper atom. Binding of copper atoms to these fixing motifs probably breaks numerous hydrogen bonds that stabilize the complex in its inactive form.
Abstract:
The focus of this thesis is to contribute to the development of new, exact solution approaches to different combinatorial optimization problems. In particular, we derive dedicated algorithms for a special class of Traveling Tournament Problems (TTPs), the Dial-A-Ride Problem (DARP), and the Vehicle Routing Problem with Time Windows and Temporal Synchronized Pickup and Delivery (VRPTWTSPD). Furthermore, we extend the concept of using dual-optimal inequalities for stabilized Column Generation (CG) and detail its application to improved CG algorithms for the cutting stock problem, the bin packing problem, the vertex coloring problem, and the bin packing problem with conflicts. In all approaches, we make use of some knowledge about the structure of the problem at hand to individualize and enhance existing algorithms. Specifically, we utilize knowledge about the input data (TTP), problem-specific constraints (DARP and VRPTWTSPD), and the dual solution space (stabilized CG). Extensive computational results proving the usefulness of the proposed methods are reported.
Abstract:
The aim of this in vitro study was to assess the agreement among four techniques used as gold standards for the validation of methods for occlusal caries detection. Sixty-five human permanent molars were selected, and one site on each occlusal surface was chosen as the test site. The teeth were cut and prepared according to each technique: stereomicroscopy without coloring (1), dye enhancement with rhodamine B (2) and with fuchsine/acetic light green (3), and semi-quantitative microradiography (4). Digital photographs of each prepared tooth were assessed by three examiners for caries extension. Weighted kappa and Friedman's test with multiple comparisons were performed to compare the techniques and verify statistically significant differences. Kappa values varied from 0.62 to 0.78, the latter found for both dye enhancement methods. Friedman's test showed a statistically significant difference (P < 0.001), and multiple comparisons identified differences among all techniques except between the two dye enhancement methods (rhodamine B and fuchsine/acetic light green). Cross-tabulation showed that stereomicroscopy overscored the lesions, whereas both dye enhancement methods showed good agreement. Furthermore, the outcome of caries diagnostic tests may be influenced by the validation method applied. Dye enhancement methods seem to be reliable as gold standard methods.
Abstract:
Self-stabilization is a property of a distributed system whereby, regardless of the legitimacy of its current state, the system eventually reaches a legitimate state and remains legitimate thereafter. The elegance of self-stabilization stems from the fact that it distinguishes distributed systems by a strong fault-tolerance property against arbitrary state perturbations. The difficulty of designing and reasoning about self-stabilization has been witnessed by many researchers; most existing techniques for the verification and design of self-stabilization are either brute-force or adopt manual approaches that are not amenable to automation. In this dissertation, we first investigate the possibility of automatically designing self-stabilization through global state-space exploration. In particular, we develop a set of heuristics for automating the addition of recovery actions to distributed protocols on various network topologies. Our heuristics exploit both the computational power of a single workstation and the parallelism available on computer clusters. We obtain existing and new stabilizing solutions for classical protocols such as maximal matching, ring coloring, mutual exclusion, leader election, and agreement. Second, we consider a foundation for local reasoning about self-stabilization; i.e., studying the global behavior of the distributed system by exploring the state space of just one of its components. It turns out that local reasoning about deadlocks and livelocks is possible for an interesting class of protocols whose proofs of stabilization are otherwise complex. In particular, we provide necessary and sufficient conditions, verifiable in the local state space of every process, for global deadlock- and livelock-freedom of protocols on ring topologies. Local reasoning potentially circumvents two fundamental problems that complicate the automated design and verification of distributed protocols: (1) state explosion and (2) partial state information. Moreover, local proofs of convergence are independent of the number of processes in the network, so our assertions about deadlocks and livelocks apply to rings of arbitrary size without incurring state explosion.
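To give a flavor of the ring-coloring protocols mentioned above, here is a minimal simulation sketch of self-stabilizing ring coloring under a central daemon (our illustration, not one of the dissertation's synthesized protocols). Each move recolors one conflicting process to differ from both neighbors; a move removes that process's conflicts and creates none, so the number of conflicting edges decreases monotonically and the ring converges from any initial state, assuming at least three colors.

```python
import random

def stabilize_ring_coloring(colors, k=3, seed=0):
    """Self-stabilizing proper coloring of a ring, central-daemon model.

    colors: arbitrary (possibly illegitimate) initial colors, one per
    process on the ring. Requires k >= 3 so a free color always exists.
    """
    rng = random.Random(seed)
    n = len(colors)
    while True:
        # a process is enabled iff it conflicts with a ring neighbor
        enabled = [i for i in range(n)
                   if colors[i] in (colors[i - 1], colors[(i + 1) % n])]
        if not enabled:
            return colors                # legitimate state reached
        i = rng.choice(enabled)          # central daemon picks one process
        forbidden = {colors[i - 1], colors[(i + 1) % n]}
        colors[i] = min(c for c in range(k) if c not in forbidden)

print(stabilize_ring_coloring([0, 0, 0, 0, 1]))  # converges to a proper coloring
```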
Abstract:
Given arbitrary pictures, we explore the possibility of using new techniques from computer vision and artificial intelligence to create customized visual games on-the-fly. These include popular games such as coloring books, link-the-dots, and spot-the-difference. The feasibility of these systems is discussed, and we describe prototype implementations that work well in practice in an automatic or semi-automatic way.
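As a hint of how such games can be generated automatically, here is a toy sketch of the simplest building block of spot-the-difference (our illustration with hypothetical names; the paper's prototypes are not described at this level of detail and would add smoothing and blob grouping):

```python
import numpy as np

def spot_the_differences(img_a, img_b, threshold=30):
    """Toy detector for two aligned RGB images of equal shape.

    Flags pixels whose maximum per-channel absolute difference
    exceeds the threshold; returns a boolean change mask.
    """
    diff = np.abs(img_a.astype(int) - img_b.astype(int)).max(axis=2)
    return diff > threshold

a = np.zeros((4, 4, 3), dtype=np.uint8)
b = a.copy()
b[1, 2] = (255, 0, 0)                            # plant one difference
print(np.argwhere(spot_the_differences(a, b)))   # -> [[1 2]]
```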
Abstract:
Many cell types in the retina are coupled via gap junctions, so there is a pressing need for a potent and reversible gap junction antagonist. We screened a series of potential gap junction antagonists by evaluating their effects on dye coupling in the network of A-type horizontal cells. We evaluated the following compounds: meclofenamic acid (MFA), mefloquine, 2-aminoethyldiphenyl borate (2-APB), 18-alpha-glycyrrhetinic acid, 18-beta-glycyrrhetinic acid (18-beta-GA), retinoic acid, flufenamic acid, niflumic acid, and carbenoxolone. The efficacy of each drug was determined by measuring the diffusion coefficient for Neurobiotin (Mills & Massey, 1998). MFA, 18-beta-GA, 2-APB, and mefloquine were the most effective antagonists, completely eliminating A-type horizontal cell coupling at a concentration of 200 µM. Niflumic acid, flufenamic acid, and carbenoxolone were less potent. Additionally, carbenoxolone was difficult to wash out and may also be harmful, as the retina became opaque and swollen. MFA, 18-beta-GA, 2-APB, and mefloquine also blocked coupling in B-type horizontal cells and AII amacrine cells. Because these cell types express different connexins, this suggests that the antagonists were relatively non-selective across several types of gap junction. It should be emphasized that MFA was water-soluble and its effects on dye coupling were easily reversible. In contrast, the other gap junction antagonists, except carbenoxolone, required DMSO for stock solutions and were difficult to wash out of the preparation at the doses required to block coupling in A-type horizontal cells. The combination of potency, water solubility, and reversibility suggests that MFA may be a useful compound for manipulating gap junction coupling.
Abstract:
Cores from Deep Sea Drilling Project Holes 501, 504B, and 505B show an unusual near-vein zonation in basalts. Megascopically, the zonation appears as differently colored strips and zones whose typical thickness does not exceed 6 to 7 cm. Microscopically, the color of the zones depends on variably colored clay minerals, which are products of low-temperature hydrothermal alteration of the basalt. These differently colored zones constitute the so-called "oxidative" type of basalt alteration. Another, "background" (less precisely termed "non-oxidative") type of alteration is characterized by large-scale, homogeneous replacement of olivine and by the filling of vesicles and cracks with an olive-brown or olive-green clay mineral. The compositions of clay minerals of the "background" type of alteration, as well as the composition of coexisting titanomagnetites, were determined with an electron microprobe. Clay minerals in the colored zones near veins show sharp maxima in potassium and iron content and minima in alumina, silica, and magnesia. Coloring of clay and rock-forming minerals by iron hydroxides, and a decrease in the amount of titanomagnetite, which apparently was the source of redeposited iron, occur frequently in the colored zones. We assume that the large-scale "background" alteration of the basalts occurred under the effect of pore waters slowly penetrating through the bottom sediments. Faulting can facilitate the access of fresh seawater to the basalts; thus, zones of "oxidative" alteration arise along fractures, superimposed on the general homogeneous background. The main factors controlling these processes are time (age of the basalt), grain size, temperature, thickness of the sedimentary cover, and heat flow.
Abstract:
Pliocene and Pleistocene sediments of the Oman margin and Owen Ridge are characterized by continuous alternation of light and dark layers of nannofossil ooze and marly nannofossil ooze and by cyclic variation of wet-bulk density. The origin of the wet-bulk density and color cycles was examined at Ocean Drilling Program Site 722 on the Owen Ridge and Site 728 on the Oman margin using 3.4-m.y.-long GRAPE (gamma-ray attenuation) wet-bulk density records and records of sediment color, represented as changes in gray level on black-and-white core photographs. At Sites 722 and 728 the sediments display a weak correlation of decreasing wet-bulk density with darkening sediment color. Wet-bulk density is inversely related to organic carbon concentration and displays little relation to calcium carbonate concentration, which varies inversely with the abundance of terrigenous sediment components. Sediment color darkens with increasing terrigenous sediment abundance (decreasing carbonate content) and with increasing organic carbon concentration. Upper Pleistocene sediments at Site 722 display a regular pattern of dark intervals coinciding with glacial periods, whereas at Site 728 the pattern of color variation is more irregular. There is no consistent relationship between the dark intervals and their relative wet-bulk density in the upper Pleistocene sections at Sites 722 and 728, suggesting that the dominance of organic matter or terrigenous sediment as the primary coloring agent varies. Spectra of the wet-bulk density and optical density time series concentrate variance at the orbital periodicities of 100, 41, 23, and 19 k.y. A strong 41-k.y. periodicity characterizes wet-bulk density and optical density variation at both sites throughout most of the past 3.4 m.y. Cyclicity at the 41-k.y. periodicity is characterized by a lack of coherence between wet-bulk density and optical density, suggesting that the bulk density and color cycles reflect the mixed influence of varying abundances of terrigenous sediments and organic matter. The 23-k.y. periodicity in the wet-bulk density and sediment color cycles is generally characterized by significant coherence between wet-bulk density and optical density, reflecting an inverse relationship between these parameters. Varying organic matter abundance, associated with changes in productivity or preservation, is inferred to influence changes in wet-bulk density and sediment color more strongly at this periodicity.
Abstract:
The progressive increase in mean annual temperature, together with water deficit, is causing significant changes in grape composition and ripening, which directly affect the fermentation process and hence wine quality. This work evaluates different strategies for reducing alcoholic strength, improving wine color and its stability, and increasing aromatic complexity and persistence. Using yeasts with glycolytic inefficiency, mean reductions in alcoholic strength of between 0.3 and 1.7% v/v were achieved, while sequential fermentations allowed maximum reductions of 3.3 and 3.4% v/v by combining strains 938 (Schizosaccharomyces pombe) and 7013 (Torulaspora delbrueckii) with 7VA (Saccharomyces cerevisiae). When a heat shock treatment was applied to the inoculum, only the TP2A(16) strain showed a significant mean reduction in alcohol content of 1% v/v compared with the control. The main drawback of all the techniques used to reduce the alcohol content was the lack of repeatability of the results. Moreover, the application of high pressure to destemmed grapes was effective both as a pasteurization treatment and as an enhancer of polyphenol extraction, achieving an increase of 12.4-18.5% in the mean content of total anthocyanins. The addition of flavonoids to the must stimulated the formation of stable pigments, mainly through acetaldehyde-mediated condensation between anthocyanins and flavanols. With Torulaspora delbrueckii in sequential fermentation with Saccharomyces cerevisiae, it was possible to increase the production of diacetyl and 2-phenylethyl acetate, besides the synthesis of a new compound, 3-ethoxy-1-propanol; however, its contribution to color was negligible, so it should be combined with a Saccharomyces cerevisiae strain that readily forms pyranoanthocyanin pigments. The use of Schizosaccharomyces pombe (938, V1) and Torulaspora delbrueckii (1880) strains in sequential and mixed fermentations with Saccharomyces cerevisiae improved the sensory profile of the red wine by increasing polyol synthesis and enhancing fruity, floral, and herbaceous aromas, and it also increased the stability of the coloring matter by favoring the formation of vitisins and vinylphenolic pyranoanthocyanins. Ageing on lees in barrels with selected yeasts can improve the complexity and aromatic persistence of red wine, albeit without major changes in color.
Abstract:
The aim of the present research is to explore new graph-based implementation techniques for neural networks, with the goal of simplifying and optimizing network architectures and their computational complexity. We have focused our attention on a well-known class of neural networks: recursive neural networks, also known as Hopfield networks. The general problem of constructing the synaptic matrix associated with a recursive neural network while imposing a given set of vectors as fixed points is far from completely solved; the number of prototype vectors (learning patterns) that can be stored using Hebb's law is rather limited, and the network quickly saturates as new prototypes are acquired. Hebb's law therefore needs to be revised so that new prototypes can be stored at the expense of older ones. Some approaches to this problem have already been developed. We have developed a new way of implementing a recursive neural network in order to address these problems: the synaptic matrix is obtained by superposing the components of the prototype vectors over the vertices of a graph, which may also be interpreted as a coloring of that graph. When training is finished, the adjacency matrix of the resulting graph, i.e., the weight matrix, exhibits certain properties for which such matrices are called tetrahedral. The energy associated with any state of the network is represented as a point (a, b) in R2, and all energy points associated with state vectors at the same Hamming distance from the zero vector lie on the same energy line in R2. The state-vector space can therefore be classified into n classes corresponding to the n possible distances from a state vector to the zero vector. The (n x n) weight matrix can be reduced to an n-vector, so both the computation time and the memory required to store the weights are simplified and optimized. In the recall stage, a parameter vector is introduced to control the capacity of the network: we prove that the larger its i-th component, the smaller the number of fixed points on the i-th energy line. Once the capacity of the network has been controlled by this parameter, a second parameter, defined as the relative weight-vector deviation, is introduced to sharply reduce the number of spurious states. Throughout the text we develop a running example that corroborates the theoretical results. The algorithms are given in pseudocode and have also been implemented with the Mathematica 2.2 package; they are shown in a supplementary volume to the text.
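For context, here is a minimal NumPy sketch of the classical Hebbian outer-product rule whose limited capacity motivates this thesis (illustrative only; it is not the graph-based, tetrahedral-matrix construction proposed above):

```python
import numpy as np

def hebb_weights(prototypes):
    """Classical Hebbian (outer-product) storage for a Hopfield network.

    prototypes: array of shape (p, n) with entries in {-1, +1}.
    Returns the symmetric (n x n) weight matrix with zero diagonal.
    """
    _, n = prototypes.shape
    w = prototypes.T @ prototypes / n   # superpose the outer products
    np.fill_diagonal(w, 0.0)            # no self-coupling
    return w

def recall(w, state, sweeps=5):
    """Asynchronous recall: updating one neuron at a time guarantees
    convergence to a local energy minimum for symmetric weights."""
    state = state.copy()
    for _ in range(sweeps):
        for i in range(len(state)):
            state[i] = 1 if w[i] @ state >= 0 else -1
    return state

patterns = np.array([[1, 1, 1, -1, -1, -1],
                     [1, -1, 1, -1, 1, -1]])
w = hebb_weights(patterns)
noisy = np.array([-1, 1, 1, -1, -1, -1])   # first pattern, one bit flipped
print(recall(w, noisy))                    # -> [ 1  1  1 -1 -1 -1]
```

With only a few stored patterns relative to n the noisy probe is pulled back to the nearest prototype; as more prototypes are superposed, cross-talk grows and recall degrades, which is the saturation the thesis addresses.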
Abstract:
The importance of embedded software is growing, as it is required for a large number of systems. Devising cheap, efficient, and reliable development processes for embedded systems is thus a notable challenge nowadays. Computer processing power is continuously increasing, and as a result it is currently possible to integrate complex systems in a single processor, which was not feasible a few years ago. Embedded systems may have safety-critical requirements, and their failure may result in personal or substantial economic loss. The development of these systems requires stringent development processes that are usually defined by suitable standards; in some cases their certification is also necessary. This scenario fosters the use of mixed-criticality systems, in which applications of different criticality levels must coexist in a single system. In these cases it is usually necessary to certify the whole system, including non-critical applications, which is costly. Virtualization emerges as an enabling technology for dealing with this problem: the system is structured as a set of partitions, or virtual machines, that execute with temporal and spatial isolation, so applications can be developed and certified independently.
The development of MCPS (Mixed-Criticality Partitioned Systems) requires additional roles and activities that traditional systems do not. The system integrator has to define the system partitions, and application development has to consider the characteristics of the partition to which the application is allocated. In addition, traditional software process models have to be adapted to this scenario. The V-model, commonly used in embedded systems development, can be adapted to MCPS development by enabling the parallel development of applications or the addition of a new partition to an existing system. The objective of this PhD is to improve the available technology for MCPS development by providing a framework tailored to the development of this type of system and by defining a flexible and efficient algorithm for automatically generating system partitionings. The goal of the framework is to integrate all the activities required for developing MCPS and to support the different roles involved. The framework is based on MDE (Model-Driven Engineering), which emphasizes the use of models in the development process, and it provides the basic means for modeling the system, generating system partitions, validating the system, and generating the final artifacts for compiling, building, and deploying it. The framework has been designed to facilitate its extension and its integration with external validation tools; in particular, it can be extended with support for additional non-functional requirements and for new final artifacts, such as new programming languages or additional documentation.
The framework includes a novel partitioning algorithm. It has been designed to be independent of the types of application requirements and to enable the system integrator to tailor the partitioning to the specific requirements of a system. This independence is achieved by defining partitioning constraints that must be met by the resulting partitioning. They have sufficient expressive capacity to state the most common non-functional requirements and can be defined manually by the system integrator or generated automatically from the functional and non-functional requirements of the applications. The partitioning algorithm takes the system models and the partitioning constraints as its inputs and generates a deployment model composed of a set of partitions, each of which is in turn composed of its allocated applications and assigned resources. The partitioning problem, including applications and constraints, is modeled as a colored graph in which a valid partitioning is a proper vertex coloring; a specially designed algorithm generates this coloring and can provide alternative partitionings if required. The framework, including the partitioning algorithm, has been successfully used in the development of two industrial use cases: the UPMSat-2 satellite and the control system of a wind-power turbine. The partitioning algorithm has also been validated on a large number of synthetic loads, including complex scenarios with more than 500 applications.
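As an illustration of the coloring view of partitioning described above, here is a minimal greedy sketch assuming only separation constraints of the form "applications a and b must not share a partition" (the application names are hypothetical, and the thesis's constraint language and algorithm are considerably richer):

```python
def partition(apps, conflicts):
    """Greedy proper coloring of a conflict graph: colors = partitions.

    apps: iterable of application names.
    conflicts: set of frozensets {a, b} meaning a and b must be
    separated (e.g., different criticality levels). Each app gets the
    lowest-numbered partition used by none of its conflicting apps.
    """
    assigned = {}
    for a in apps:
        used = {assigned[b] for b in assigned
                if frozenset((a, b)) in conflicts}
        p = 0
        while p in used:
            p += 1
        assigned[a] = p
    return assigned

conflicts = {frozenset(pair) for pair in [("flight_ctrl", "telemetry"),
                                          ("flight_ctrl", "payload"),
                                          ("telemetry", "payload")]}
print(partition(["flight_ctrl", "telemetry", "payload"], conflicts))
# -> {'flight_ctrl': 0, 'telemetry': 1, 'payload': 2}
```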
Abstract:
The objective of this work was to obtain spray-dried guavira pulp using maltodextrin or gum arabic as carrier agents. First, the influence of the process conditions, drying air temperature (130, 155, and 180) °C and volumetric feed flow rate (20 and 40) mL/min, and of the type and concentration of carrier agent (10 and 20) % on the physical and physicochemical characteristics and antioxidant activity of the product was evaluated. The properties analyzed were moisture content, water activity, hygroscopicity, solubility, color, particle size distribution and mean particle size, morphology, total phenolic compounds, and antioxidant activity. Drying air temperature and feed flow rate significantly influenced all properties of the guavira powder. Moisture content and water activity showed the lowest values at the intermediate temperature, regardless of the type and concentration of the carrier used. The solubility of the samples with added maltodextrin was higher than that of the samples with gum arabic. Increasing the carrier concentration generally increased the L* parameter and decreased the a* and b* parameters, making the samples lighter and reducing the red and yellow hues. The guavira powder showed a color between yellow and brown, with large variation in the C* and H* color parameters as a function of the drying conditions. The particle size distribution showed no definite pattern, and the mean particle size of the samples with maltodextrin was larger than that of the samples with gum arabic at a drying air temperature of 130 °C. For the other temperatures (155 and 180) °C, however, particle size showed no specific behavior as a function of feed flow rate or carrier type and concentration. Scanning electron microscopy showed that the particles obtained with both maltodextrin and gum arabic were spherical, with rough surfaces and with smaller particles adhering to larger ones; the surfaces of the particles with gum arabic also showed concavities. Antioxidant activity was higher at the intermediate drying temperature. Under the conditions selected in the first stage (air temperature of 155 °C, volumetric feed flow rate of 40 mL/min, and 10% maltodextrin or gum arabic), the spray-dried guavira pulp was characterized in terms of glass transition temperature, adsorption isotherms, and the storage stability of ascorbic acid, total phenolic compounds, and antioxidant activity over 120 days. The glass transition temperatures were (25.2 ± 2.7) °C and (31.4 ± 0.4) °C for the powders produced with gum arabic and maltodextrin, respectively. The BET model fit very well (R2 > 0.99) in describing the water sorption behavior of the samples at (20, 30, and 40) °C. The guavira powder produced with gum arabic showed higher water adsorption than the samples obtained with maltodextrin. In the stability study, the samples were packed in laminated polyethylene packaging and stored at 25 °C and 75% relative humidity. The laminated polyethylene packaging was effective in maintaining the ascorbic acid content and antioxidant activity of the guavira powder over 120 days, regardless of the carrier added. The phenolic compound content of the guavira powder with gum arabic decreased during the first 22 days, whereas the sample with maltodextrin remained stable over 120 days of storage.
Abstract:
In the field of energy saving, finding composite materials able to color both upon illumination and upon a change of the applied electrode potential remains an important goal. In this context, chemical bath deposition of Ni(OH)2 into nanoporous TiO2 thin films supported on conducting glass leads to electrodes showing conventional electrochromic behavior (from colorless to dark brown and vice versa) together with photochromism at constant applied potential. The latter phenomenon, reported here for the first time, is characterized by fast and reversible coloration upon UV illumination. The bleaching kinetics is first order with respect to the Ni(III) centers in the film and of order 1.2 with respect to electrons in the TiO2 film. From a more applied point of view, this study opens up the possibility of two-mode smart windows showing not only conventional electrochromism but also reversible darkening upon illumination.