936 results for Thread safe parallel run-time
Abstract:
Observing the execution of JavaScript applications is usually done by instrumenting an industrial virtual machine (VM) or by performing an ad hoc and complex source-to-source translation. This thesis presents an alternative based on virtual machine layering. Our approach performs a source-to-source translation of a program during its execution in order to expose its low-level operations through a flexible object model. These low-level operations can then be redefined at run time so that they can be observed. To limit the performance penalty introduced, our approach exploits the fast original operations of the underlying VM whenever possible and applies just-in-time compilation techniques in the layered VM. Our implementation, Photon, is on average 19% faster than a modern interpreter and between 19× and 56× slower on average than the just-in-time compilers used in popular web browsers. This thesis therefore shows that virtual machine layering is a competitive alternative to modifying a modern JavaScript interpreter when applied to the run-time observation of object operations and function calls.
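The abstract's core mechanism can be pictured as routing an object's low-level operations through handlers that can be replaced while the program runs. The sketch below is a hypothetical Python analogue of that idea, not Photon's actual JavaScript implementation; the class and handler names are invented for illustration.

```python
# Minimal sketch (not Photon's API): property reads/writes go through handler
# functions that can be swapped at run time to observe execution.

class ObservableObject:
    """Object whose non-underscore attribute accesses use redefinable handlers."""

    # Default handlers: delegate to the fast built-in operations.
    _get_handler = staticmethod(lambda obj, name: object.__getattribute__(obj, name))
    _set_handler = staticmethod(lambda obj, name, value: object.__setattr__(obj, name, value))

    def __getattribute__(self, name):
        if name.startswith("_"):
            return object.__getattribute__(self, name)
        return ObservableObject._get_handler(self, name)

    def __setattr__(self, name, value):
        ObservableObject._set_handler(self, name, value)


def trace_reads(obj, name):
    # Replacement handler: observe the read, then fall back to the fast path.
    print(f"read {name}")
    return object.__getattribute__(obj, name)


if __name__ == "__main__":
    p = ObservableObject()
    p.x = 1                  # fast default write handler
    _ = p.x                  # fast default read handler
    ObservableObject._get_handler = staticmethod(trace_reads)  # redefined at run time
    _ = p.x                  # now observed: prints "read x"
```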
Abstract:
This thesis summarizes the results of studies on a syntax-based approach to translation between Malayalam, one of the Dravidian languages, and English, and on the development of the major modules of a prototype machine translation system from Malayalam to English. The development of the system is a pioneering effort for the Malayalam language, unattempted by previous researchers, and the computational models chosen for the system are the first of their kind for Malayalam. An in-depth study has been carried out on the design of the computational models and data structures needed for the different modules required for the prototype system: a morphological analyzer, a parser, a syntactic structure transfer module and a target-language sentence generator. The lists of part-of-speech tags and chunk tags, and the hierarchical dependencies among the chunks required for the translation process, have also been generated. In the development process the major goals are (a) accuracy of translation, (b) speed and (c) space. Accuracy-wise, smart tools for handling the transfer grammar and translation standards, including equivalent words, expressions, phrases and styles in the target language, are to be developed; the grammar should be optimized with a view to obtaining a single correct parse and hence a single translated output. Speed-wise, innovative use of corpus analysis, an efficient parsing algorithm, efficient data-structure design and run-time frequency-based rearrangement of the grammar, which substantially reduces parsing and generation time, are required. The space requirement also has to be minimised.
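To make the module structure concrete, here is a deliberately toy Python sketch of a transfer-based pipeline with the four stages the abstract names. The lexicon, the assumed SOV-to-SVO reordering rule and the romanized example sentence are illustrative stand-ins, not the thesis system.

```python
# Illustrative sketch only: four stub stages wired as a transfer pipeline.
# The tagger, chunker and dictionary entries below are hypothetical.

TOY_LEXICON = {"kutti": "child", "pustakam": "book", "vaayichu": "read"}  # stand-in entries

def morph_analyze(sentence):
    # Stand-in morphological analyzer: just tokenise.
    return sentence.split()

def chunk_parse(tokens):
    # Stand-in parser/chunker: assume a simple Subject-Object-Verb clause.
    return {"S": tokens[0], "O": tokens[1], "V": tokens[2]}

def structure_transfer(parse):
    # Transfer step: reorder the source SOV structure into English SVO.
    return [parse["S"], parse["V"], parse["O"]]

def generate(ordered_tokens):
    # Stand-in target-language generator: dictionary lookup plus naive determiners.
    words = [TOY_LEXICON.get(t, t) for t in ordered_tokens]
    return "The " + words[0] + " " + words[1] + " the " + words[2] + "."

if __name__ == "__main__":
    src = "kutti pustakam vaayichu"   # approximate romanization, for illustration
    print(generate(structure_transfer(chunk_parse(morph_analyze(src)))))
    # -> "The child read the book."
```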
Abstract:
Bank switching in embedded processors with a partitioned memory architecture results in code-size as well as run-time overhead. This work presents an algorithm, and its application, that assists the compiler in eliminating the redundant bank-switching code introduced and in deciding the optimum data allocation to banked memory. A relation matrix, formed for the memory-bank state transition corresponding to each bank-selection instruction, is used to detect redundant code. Data allocation to memory is done by considering all possible permutations of memory banks and combinations of data. The compiler output corresponding to each data-mapping scheme is subjected to a static machine-code analysis which identifies the one with the minimum number of bank-switching instructions. Even though the method is compiler independent, the algorithm utilizes certain architectural features of the target processor. A prototype based on PIC 16F87X microcontrollers is described. The method scales well to larger numbers of memory blocks and to other architectures, so that high-performance compilers can integrate this technique for efficient code generation. The technique is illustrated with an example.
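As a rough illustration of the redundancy-elimination step, the Python sketch below drops a bank-select instruction whenever the bank it selects is already active. It uses a hypothetical instruction list and simple state tracking rather than the paper's relation-matrix formulation, and it ignores control flow.

```python
# Simplified sketch: remove a BANKSEL when its target bank is already selected.
# Straight-line code only; real compilers must also reason about branches.

def eliminate_redundant_bank_selects(code):
    """code: list of ('BANKSEL', bank) or ('OP', text) tuples."""
    optimized, current_bank = [], None
    for instr in code:
        if instr[0] == "BANKSEL":
            if instr[1] == current_bank:
                continue                  # redundant: this bank is already selected
            current_bank = instr[1]
        optimized.append(instr)
    return optimized

if __name__ == "__main__":
    program = [("BANKSEL", 1), ("OP", "movwf var_a"),
               ("BANKSEL", 1), ("OP", "movwf var_b"),   # redundant select of bank 1
               ("BANKSEL", 0), ("OP", "movwf var_c")]
    for instr in eliminate_redundant_bank_selects(program):
        print(instr)
```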
Abstract:
Self-adaptive software provides a profound solution for adapting applications to changing contexts in dynamic and heterogeneous environments. Having emerged from Autonomic Computing, it incorporates fully autonomous decision making based on predefined structural and behavioural models. The most common approach for architectural runtime adaptation is the MAPE-K adaptation loop, implementing an external adaptation manager without manual user control. However, it has turned out that adaptation behaviour lacks acceptance if it does not correspond to a user’s expectations – particularly in Ubiquitous Computing scenarios with user interaction. Adaptations can be irritating and distracting if they are not appropriate for a certain situation. In general, uncertainty during development and at run-time causes problems when users are outside the adaptation loop. In a literature study, we analyse publications about self-adaptive software research. The results show a discrepancy between the motivated application domains, the maturity of examples, and the quality of evaluations on the one hand, and the provided solutions on the other. Only a few publications analysed the impact of their work on the user, but many employ user-oriented examples for motivation and demonstration. To incorporate the user within the adaptation loop and to deal with uncertainty, our proposed solutions enable user participation for interactive self-adaptive software while at the same time maintaining the benefits of intelligent autonomous behaviour. We define three dimensions of user participation, namely temporal, behavioural, and structural user participation. This dissertation contributes solutions for user participation in the temporal and behavioural dimensions. The temporal dimension addresses the moment of adaptation, which is classically determined by the self-adaptive system. We provide mechanisms allowing users to influence or to define the moment of adaptation. With our solutions, users can either take full control over the moment of adaptation, or the self-adaptive software considers the user’s situation more appropriately. The behavioural dimension addresses the actual adaptation logic and the resulting run-time behaviour. Application behaviour is established during development and does not necessarily match run-time expectations. Our contributions are three distinct solutions which allow users to make changes to the application’s run-time behaviour: dynamic utility functions, fuzzy-based reasoning, and learning-based reasoning. The foundation of our work is a notification and feedback solution that improves the intelligibility and controllability of self-adaptive applications by implementing bi-directional communication between self-adaptive software and the user. The different mechanisms from the temporal and behavioural participation dimensions require the notification and feedback solution to inform users about adaptation actions and to provide a mechanism to influence adaptations. Case studies show the feasibility of the developed solutions. Moreover, an extensive user study with 62 participants was conducted to evaluate the impact of notifications before and after adaptations. Although the study revealed that there is no preference for a particular notification design, participants clearly appreciated intelligibility and controllability over autonomous adaptations.
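A minimal Python sketch of a MAPE-K style loop with the user placed inside the loop is given below. The callbacks, threshold and adaptation action are hypothetical, and the sketch is not the dissertation's framework; it only illustrates notifying the user and letting them veto an adaptation before execution.

```python
# Sketch of a MAPE-K style loop with a user notification/feedback hook.
# All names and values here are hypothetical illustrations.

class MapeKLoop:
    def __init__(self, knowledge, notify_user, ask_user):
        self.knowledge = knowledge        # shared Knowledge (the K in MAPE-K)
        self.notify_user = notify_user    # callback: inform the user about an adaptation
        self.ask_user = ask_user          # callback: let the user approve or veto it

    def monitor(self, sensors):
        return {"load": sensors()}

    def analyze(self, symptoms):
        return symptoms["load"] > self.knowledge["load_threshold"]

    def plan(self):
        return {"action": "switch_to_low_power_mode"}

    def execute(self, adaptation, effector):
        self.notify_user(f"Planned adaptation: {adaptation['action']}")
        if self.ask_user("Apply this adaptation?"):
            effector(adaptation["action"])
        else:
            self.knowledge["vetoed"].append(adaptation["action"])  # feedback for later reasoning

    def run_once(self, sensors, effector):
        symptoms = self.monitor(sensors)
        if self.analyze(symptoms):
            self.execute(self.plan(), effector)

if __name__ == "__main__":
    loop = MapeKLoop({"load_threshold": 0.8, "vetoed": []},
                     notify_user=print,
                     ask_user=lambda question: True)        # stand-in: user always approves
    loop.run_once(sensors=lambda: 0.9, effector=lambda a: print("executed", a))
```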
Abstract:
General-purpose computing devices allow us to (1) customize computation after fabrication and (2) conserve area by reusing expensive active circuitry for different functions in time. We define RP-space, a restricted domain of the general-purpose architectural space focussed on reconfigurable computing architectures. Two dominant features differentiate reconfigurable from special-purpose architectures and account for most of the area overhead associated with RP devices: (1) instructions which tell the device how to behave, and (2) flexible interconnect which supports task-dependent dataflow between operations. We can characterize RP-space by the allocation and structure of these resources and compare the efficiencies of architectural points across broad application characteristics. Conventional FPGAs fall at one extreme end of this space, and their efficiency ranges over two orders of magnitude across the space of application characteristics. Understanding RP-space and its consequences allows us to pick the best architecture for a task and to search for more robust design points in the space. Our DPGA, a fine-grained computing device which adds small, on-chip instruction memories to FPGAs, is one such design point. For typical logic applications and finite-state machines, a DPGA can implement tasks in one-third the area of a traditional FPGA. TSFPGA, a variant of the DPGA which focuses on heavily time-switched interconnect, achieves circuit densities close to the DPGA while reducing typical physical mapping times from hours to seconds. Rigid, fabrication-time organization of instruction resources significantly narrows the range of efficiency for conventional architectures. To avoid this performance brittleness, we developed MATRIX, the first architecture to defer the binding of instruction resources until run-time, allowing the application to organize resources according to its needs. Our focus MATRIX design point is based on an array of 8-bit ALU and register-file building blocks interconnected via a byte-wide network. With today's silicon, a single-chip MATRIX array can deliver over 10 Gop/s (8-bit ops). On sample image processing tasks, we show that MATRIX yields 10-20x the computational density of conventional processors. Understanding the cost structure of RP-space helps us identify these intermediate architectural points and may provide useful insight more broadly in guiding our continual search for robust and efficient general-purpose computing structures.
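As a back-of-the-envelope reading of the quoted "over 10 Gop/s" figure, the snippet below assumes a hypothetical 10x10 array of 8-bit ALUs at a 100 MHz clock with one operation per ALU per cycle; the abstract itself does not state the array size or clock rate, so these numbers are illustrative only.

```python
# Hedged sanity check of the "over 10 Gop/s (8-bit ops)" claim.
# Array size and clock frequency are assumptions for illustration.
alus = 10 * 10                  # hypothetical 10x10 array of 8-bit ALU building blocks
clock_hz = 100e6                # hypothetical 100 MHz clock
throughput = alus * clock_hz    # one 8-bit op per ALU per cycle
print(f"{throughput / 1e9:.0f} Gop/s")   # -> 10 Gop/s
```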
Abstract:
The abstract is a copy of the one published with the article.
Abstract:
Networks are an important topological element that has little support in free software. Some networks have millions of nodes, which makes it necessary to handle them carefully in order to optimise resources. Reviewing the state of the art, we concluded that a number of open-source libraries are available, but they generally employ a graph management model that builds a meshed structure of objects in memory, requiring large amounts of memory and long start-up times. These shortcomings become particularly relevant when handling large networks. Moreover, the libraries analysed are generally not suitable for multithreaded processing, so they cannot be used in server environments. For these cases we have launched the IDELabRoute project. The proposed solution is a generic, thread-safe network analysis library with dynamic memory management; to this end, it uses a modular architecture with interchangeable memory managers which, drawing on different persistent storage sources (i.e. databases or file systems), handles graphs dynamically according to spatial and/or topological criteria. It is a compromise solution, since the price to pay for reducing the size of the objects held in memory is an increase in response time, owing to the management of storage tiers with different response times. It is therefore a dynamic graph management system that can handle large network models in a scalable way, making it suitable for environments with few resources relative to the total size of the network. The first practical objective of the project is to provide the free GIS community with a WPS service for route calculation.
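The design idea can be pictured as a graph whose adjacency data is pulled on demand from an interchangeable storage backend, with a lock guarding concurrent access. The Python sketch below uses a hypothetical API invented for illustration; it is not IDELabRoute's actual interface.

```python
# Sketch of a thread-safe graph with a swappable storage backend and a bounded
# in-memory working set. Names and the eviction policy are illustrative only.
import threading

class DictBackend:
    """Stand-in persistent store; a real backend could wrap a database or files."""
    def __init__(self, adjacency):
        self._adjacency = adjacency
    def load_neighbours(self, node_id):
        return self._adjacency.get(node_id, [])

class DynamicGraph:
    def __init__(self, backend, cache_limit=1000):
        self._backend = backend          # interchangeable memory manager / storage source
        self._cache = {}                 # only a bounded working set is kept in memory
        self._cache_limit = cache_limit
        self._lock = threading.Lock()    # thread-safe access for server environments

    def neighbours(self, node_id):
        with self._lock:
            if node_id not in self._cache:
                if len(self._cache) >= self._cache_limit:
                    self._cache.pop(next(iter(self._cache)))   # naive eviction
                self._cache[node_id] = self._backend.load_neighbours(node_id)
            return list(self._cache[node_id])

if __name__ == "__main__":
    g = DynamicGraph(DictBackend({"a": ["b", "c"], "b": ["c"]}), cache_limit=2)
    print(g.neighbours("a"))   # loaded from the backend on first access
    print(g.neighbours("a"))   # served from the in-memory cache
```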
Abstract:
Self-assessment and the learning contract as means of improving the teaching of drawing in architecture. Several years of experience in teaching architectural graphic expression have led us to the progressive introduction of self-assessment sheets, which have given good results: notable improvements in student work and a reduction in the time needed to complete the assigned tasks.
Abstract:
This work focuses on the study, at different levels, of the carotenoids of the brown-coloured species of Green Sulfur Bacteria (GSB). The overall objective has been to determine the function of these pigments within the photosynthetic apparatus of these microorganisms and to deepen the knowledge of their structure and their interactions with the other pigments of the photosynthetic apparatus. First, a new high-performance liquid chromatography (HPLC) method was designed to analyse the carotenoids of different GSB strains more quickly and precisely (Chapter 3). This method is based on a prior purification of the pigment extracts on alumina columns to remove the bacteriochlorophylls (BChls). This made it possible to analyse, at high resolution and in a chromatographic run of only 45 min, the different carotenoids and their precursors, as well as the trans and cis configurations of their isomers. The second method used was a modification of the method of Borrego and Garcia-Gil (1994) and allowed the precise separation of all types of pigments, from both pure cultures and complex samples. A specific example was a set of palaeosediments from the Banyoles lacustrine area. In these sediments (0.7-1.5 million years old), carotenoids specific to the brown GSB species were detected among other pigments, which confirmed the presence of these bacteria in the Banyoles lacustrine area as early as the Lower Pleistocene. In this first chapter the carotenoids of Chlorobium (Chl.) phaeobacteroides CL1401 were also analysed by liquid chromatography coupled to tandem mass spectrometry (LC-MS/MS), with the aim of confirming their identification and molecular weight. In addition, the effect of temperature, light and different oxidising and reducing agents on the quantitative and qualitative composition of the carotenoids and BChls of this species was evaluated. This confirmed the photosensitive nature of the BChls and showed that the trans/cis isomers of the different carotenoids are not artefacts produced during sample handling but are constitutive of the photosynthetic apparatus of these microorganisms. Chapter 4 covers the physiology experiments carried out with several GSB species, which aimed to elucidate the dynamics of synthesis of the different pigments of the photosynthetic apparatus (antenna BChl, BChl a and carotenoids) during the growth of these species. These investigations also made it possible to monitor the changes in the number of reaction centres (RC) during light adaptation. The experimental determination of the number of RCs was based on the quantification of BChl 663, the primary acceptor in the electron transport chain of GSB. The number of RCs per chlorosome was estimated both from stoichiometric and biometric data in the literature and from the experimental data obtained in this work. The good agreement between the different estimates lent robustness to the calculated stoichiometric value, which was, on average, about 70 RCs per chlorosome. This physiology chapter also examined the variations in the trans/cis ratios of the main carotenoids of the brown GSB species, determined both in pure laboratory cultures and in natural GSB populations.
Regarding the values found in laboratory cultures, no marked differences were observed between the ratio calculated at high light intensity and that calculated at low intensity, which was close to 2 in both cases. In chlorosomes isolated from different brown strains this quotient took a similar value for the isomers of isorenieratene (Isr) and for those of β-isorenieratene (β-Isr). In natural populations of Chl. phaeobacteroides this ratio was also 2 trans isomers per cis isomer, remaining constant both with depth and over time. Finally, Chapter 5 presents a molecular marker that allows the specific identification of brown GSB species. Although this marker was initially designed from a gene involved in carotenoid synthesis (crtY, which codes for a lycopene cyclase), the final sequence from which the selective primers were obtained is related to the polyketide ketosynthase (PKT) family of proteins. Even so, the tool designed can be very useful for discriminating brown GSB species from green ones in mixed populations such as those found in natural environments, and it opens the door to future microbial ecology experiments using techniques such as real-time PCR, which would allow the selective monitoring of brown GSB populations in natural ecosystems.
Abstract:
A rapid capillary electrophoresis method was developed for the simultaneous determination of artificial sweeteners, preservatives and colours used as additives in carbonated soft drinks. Resolution between all the additives occurring together in soft drinks was successfully achieved within a 15-min run time by employing the micellar electrokinetic chromatography mode with a 20 mM carbonate buffer at pH 9.5 as the aqueous phase and 62 mM sodium dodecyl sulfate as the micellar phase. By using a diode-array detector to monitor the UV-visible range (190-600 nm), the identity of sample components, suggested by migration time, could be confirmed by spectral matching relative to standards.
Abstract:
Whole fresh goat's milk was heat treated at 135 °C for 4 s using a miniature UHT plant. The temperature of the milk in the preheating and sterilizer sections, and the milk flow rate, were monitored to evaluate the overall heat transfer coefficient (OHTC). The decrease in OHTC was used to estimate the extent of fouling. Goat's milk fouled very quickly and run times of the UHT plant were short. The use of sodium hexametaphosphate, trisodium citrate and cation exchange resins to reduce ionic calcium prior to UHT processing increased the pH and alcohol stability of the milk and markedly increased the run time of the UHT plant.
Abstract:
Pull pipelining, a pipeline technique in which data is pulled by successor stages from predecessor stages, is proposed. Control circuits using a synchronous, a semi-synchronous and an asynchronous approach are given. Simulation examples for a DLX generic RISC datapath show that the usual control-pipeline circuit overhead is avoided using the proposal. Applications to linear systolic arrays, in cases where computation finishes at early stages of the array, are foreseen. This would allow run-time, data-driven digital frequency modulation of synchronous pipelined designs, which has applications in implementing algorithms exhibiting average-case processing time using a synchronous approach.
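The pull principle can be illustrated in software terms with Python generators, where a demand at the last stage propagates backwards through its predecessors. This is only an analogy for the hardware control circuits the abstract describes; all names below are invented for illustration.

```python
# Software analogy of pull pipelining: each stage requests ("pulls") its next
# datum from its predecessor only when the successor asks for a result.

def source(values):
    for v in values:
        yield v                      # produces data only when pulled from downstream

def double(upstream):
    for v in upstream:               # pulls from the predecessor stage
        yield 2 * v

def add_one(upstream):
    for v in upstream:
        yield v + 1

if __name__ == "__main__":
    pipeline = add_one(double(source([1, 2, 3])))
    print(next(pipeline))            # demand at the last stage propagates backwards: -> 3
    print(list(pipeline))            # -> [5, 7]
```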
Abstract:
A reconfigurable scalar quantiser capable of accepting n-bit input data is presented. The data length n can be varied in the range 1 to N-1 under partial run-time reconfiguration (p-RTR). Issues such as the improvement in throughput obtained by using this reconfigurable quantiser with p-RTR, rather than full RTR, for variable-length data are considered. The quantiser design, referred to as the priority quantiser (PQ), is then compared against a direct design of the quantiser (DIQ). It is shown that, for practical quantiser sizes, the PQ achieves better area usage when both are targeted onto the same FPGA. Other benefits are also identified.
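For readers unfamiliar with scalar quantisation, the Python sketch below shows a generic uniform quantiser whose accepted input width n can be changed, mimicking the effect of reconfiguring the data length at run time. It is a hypothetical software illustration, not the priority quantiser hardware design.

```python
# Generic n-bit uniform scalar quantiser; function and parameter names are illustrative.

def make_quantiser(n_bits, levels):
    """Map n-bit inputs (0 .. 2**n - 1) onto `levels` uniform output codes."""
    full_scale = (1 << n_bits) - 1
    step = (full_scale + 1) / levels
    def quantise(x):
        if not 0 <= x <= full_scale:
            raise ValueError(f"input must fit in {n_bits} bits")
        return min(int(x / step), levels - 1)
    return quantise

if __name__ == "__main__":
    q8 = make_quantiser(n_bits=8, levels=4)    # accepts 8-bit data
    print([q8(x) for x in (0, 63, 128, 255)])  # -> [0, 0, 2, 3]
    q4 = make_quantiser(n_bits=4, levels=4)    # "reconfigured" for 4-bit data
    print([q4(x) for x in (0, 5, 10, 15)])     # -> [0, 1, 2, 3]
```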
Abstract:
A rapid, sensitive and specific LC-MS/MS method was developed and validated for quantifying chlordesmethyldiazepam (CDDZ or delorazepam), the active metabolite of cloxazolam, in human plasma. In the analytical assay, bromazepam (internal standard) and CDDZ were extracted by a liquid-liquid extraction (diethyl ether/hexane, 80/20, v/v) procedure. The LC-MS/MS method, on an RP-C18 column, had an overall run time of 5.0 min and was linear (1/x weighted) over the range 0.5-50 ng/mL (R > 0.999). The between-run precision was 8.0% (1.5 ng/mL), 7.6% (9 ng/mL), 7.4% (40 ng/mL), and 10.9% at the lower limit of quantification (LLOQ, 0.500 ng/mL). The between-run accuracies were 0.1, -1.5, -2.7 and 8.7% for the above-mentioned concentrations, respectively. All current bioanalytical method validation requirements (FDA and ANVISA) were met, and the method was applied to a bioequivalence study (cloxazolam test formulation, Eurofarma Lab. Ltda, versus the Olcadil® reference, Novartis Biociencias S/A). The relative bioavailability of the two formulations was assessed by calculating individual test/reference ratios for Cmax, AUClast and AUC0-inf. The pharmacokinetic profiles indicated bioequivalence, since all ratios were within the limits proposed by the FDA and ANVISA. Copyright © 2009 John Wiley & Sons, Ltd.
Abstract:
A rapid, sensitive and specific method for quantifying ciprofibrate in human plasma, using bezafibrate as the internal standard (IS), is described. The sample was acidified with formic acid (88%) prior to extraction. The analyte and the IS were extracted from plasma by liquid-liquid extraction using an organic solvent (diethyl ether/dichloromethane, 70/30, v/v). The extracts were analyzed by high-performance liquid chromatography coupled with electrospray tandem mass spectrometry (HPLC-MS/MS). Chromatography was performed on a Genesis C18 4 µm analytical column (4.6 x 150 mm i.d.) with a mobile phase consisting of acetonitrile/water (70/30, v/v) and 1 mM acetic acid. The method had a chromatographic run time of 3.4 min and a linear calibration curve over the range 0.1-60 µg/mL (r > 0.99). The limit of quantification was 0.1 µg/mL. The intra- and inter-day accuracy and precision values of the assay were less than 13.5%. The stability tests indicated no significant degradation. The recovery of ciprofibrate was 81.2%, 73.3% and 76.2% for the 0.3, 5.0 and 48.0 ng/mL standard concentrations, respectively. For ciprofibrate, the optimized declustering potential, collision energy and collision exit potential were -51 V, -16 eV and -5 V, respectively. The method was also validated without the use of the internal standard. This HPLC-MS/MS procedure was used to assess the bioequivalence of two ciprofibrate 100 mg tablet formulations in healthy volunteers of both sexes. The following pharmacokinetic parameters were obtained from the ciprofibrate plasma concentration vs. time curves: AUClast, AUC0-168h, Cmax and Tmax. The geometric means with corresponding 90% confidence intervals (CI) for the test/reference percent ratios were 93.80% (90% CI = 88.16-99.79%) for Cmax, 98.31% (90% CI = 94.91-101.83%) for AUClast and 97.67% (90% CI = 94.45-101.01%) for AUC0-168h. Since the 90% CIs for the AUClast, AUC0-168h and Cmax ratios were within the 80-125% interval proposed by the US FDA, it was concluded that the ciprofibrate (Lipless® 100 mg tablet) formulation manufactured by Biolab Sanus Farmaceutica Ltda. is bioequivalent to the Oroxadin® (100 mg tablet) formulation with respect to both the rate and the extent of absorption. © 2011 Published by Elsevier B.V.