893 results for 090407 Process Control and Simulation


Relevance:

100.00%

Publisher:

Abstract:

Five permanent cell lines were developed from Xiphophorus maculatus, X. helleri, and their hybrids using three tissue sources, including adults and embryos of different stages. To evaluate cell line gene expression for retention of either tissue-of-origin-specific or ontogenetic stage-specific characters, the activity distribution of 44 enzyme loci was determined in 11 X. maculatus tissues, and the developmental genetics of 17 enzyme loci was charted in X. helleri and in helleri x maculatus hybrids using starch gel electrophoresis. In the process, eight new loci were discovered and characterized for Xiphophorus.

No Xiphophorus cell line showed retention of tissue-of-origin-specific or ontogenetic stage-specific enzyme gene expression traits. Instead, gene expression was similar among the cell lines. One enzyme, adenosine deaminase (ADA), was an exception: two adult-origin cell lines expressed ADA, whereas three cell lines derived independently from embryos did not. The ADA⁻ expression of Xiphophorus embryo-derived cell lines may represent retention of an embryonic gene expression trait. In one cell line (T₃), derived from 13 pooled interspecific hybrid (F₂) embryos, shifts with time were observed at enzyme loci polymorphic between the two species. This suggested shifts in the ratios of cells of different genotypes in the population rather than unstable gene expression in one dominant cell type.

Verification of this hypothesis was attempted by cloning the culture and seeking clones having different genetic signatures. The large number of loci that are electrophoretically polymorphic between the two species and whose alleles segregated independently into the 13 progeny from which this culture originated almost guaranteed the presence of different genetic signatures (lineages) in T₃.

Seven lineages of cells were found within T₃, each expressing genotypes at some loci not characteristic of the expression of the culture as a whole, supporting the hypothesis tested. Quantitative studies of ADA expression in the whole culture (ADA⁻) and in clones of these seven lineages suggested the predominance in T₃ of ADA-deficient cell lineages, although moderate- to high-ADA-output clones also occurred. Thus, T₃ has the potential to shift phenotype from ADA⁻ to ADA⁺ simply by changing the proportions of its constituent cell types, demonstrating that such shifts can occur in any cell culture containing cells of mixed expressional characteristics.

Relevance:

100.00%

Publisher:

Abstract:

The main objective of this paper is to present modelling solutions for floating devices that can be used for harnessing energy from ocean currents. It is structured into three main parts. First, the growing interest in marine renewable energy in general, and in extracting energy from currents in particular, is presented, showing the large number of solutions that are emerging and some of the most significant types. The GESMEY generator is presented in the second section. It is based on a new concept patented by the Universidad Politécnica de Madrid, which is currently being developed through a collaborative agreement with the SOERMAR Foundation. The main feature of this generator is that it operates fully submerged and requires no additional facilities to bring it to a floating state for maintenance, which greatly increases its performance. The third part of the article presents the modelling and simulation challenges that arise in the development of devices for harnessing the energy of marine currents, along with some solutions adopted within the frame of the GESMEY Project, with particular emphasis on the dynamics of the generator and its control.
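As a rough illustration of the kind of dynamic model and controller mentioned above (not the actual GESMEY model), the sketch below simulates a single-degree-of-freedom rotor driven by a marine current, with a PI controller adjusting the generator torque to regulate rotational speed; every parameter value and the constant torque coefficient are assumptions made only for the example.

```python
# Illustrative sketch only (not the GESMEY project's actual model): a one-degree-
# of-freedom rotor driven by a marine current, with a PI controller adjusting
# the generator braking torque to hold a target rotational speed.

import math

RHO = 1025.0            # seawater density, kg/m^3
R = 10.0                # rotor radius, m (assumed)
J = 5.0e6               # rotor plus generator inertia, kg*m^2 (assumed)
CQ = 0.05               # torque coefficient, treated as constant (simplification)
OMEGA_REF = 1.0         # target rotor speed, rad/s (assumed)
KP, KI = 2.0e6, 5.0e5   # PI gains (assumed)

def hydrodynamic_torque(current_speed):
    """Torque exerted by the current on the rotor, constant torque coefficient."""
    return 0.5 * RHO * math.pi * R**3 * CQ * current_speed**2

def simulate(t_end=600.0, dt=0.05):
    omega, integral = 0.0, 0.0
    for step in range(int(t_end / dt)):
        t = step * dt
        current = 2.0 + 0.3 * math.sin(2.0 * math.pi * t / 120.0)    # current speed, m/s
        error = omega - OMEGA_REF                    # positive when rotor is too fast
        integral = max(0.0, integral + error * dt)   # crude anti-windup
        tau_gen = max(0.0, KP * error + KI * integral)   # generator braking torque
        # Rigid-body rotation: J * d(omega)/dt = tau_hydro - tau_gen
        omega += dt * (hydrodynamic_torque(current) - tau_gen) / J
    return omega

if __name__ == "__main__":
    print(f"rotor speed at end of run: {simulate():.3f} rad/s")
```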

Relevance:

100.00%

Publisher:

Abstract:

Reliability is becoming the main concern for integrated circuits as technology scales below 22 nm. Small imperfections in device manufacturing now result in significant random differences in electrical characteristics, which must be taken into account during the design phase. The new processes and materials required to fabricate such extremely small devices introduce effects that ultimately increase static power consumption or make circuits more vulnerable to radiation. SRAMs have become the most vulnerable part of electronic systems: not only do they account for more than half of the chip area of today's SoCs and microprocessors, but process variations affect them critically, since the failure of a single cell makes the whole memory fail. This thesis addresses the challenges that SRAM design faces in the smallest technology nodes. In a scenario of increasing variability, issues such as energy consumption, design aware of low-level technology effects, and radiation hardening are considered.

First, given the growing device variability in the smallest nodes, as well as the new sources of variability introduced by new devices and shrinking dimensions, accurate modeling of that variability is crucial. The thesis proposes extending the injector method, which models variability at circuit level while abstracting its physical causes, with two new injectors for the sub-threshold slope and for drain-induced barrier lowering (DIBL), both of growing importance in FinFET technology. The two new injectors increase the accuracy of figures of merit at different abstraction levels of electronic design: transistor, gate, and circuit. The mean square error when simulating stability and performance metrics of SRAM cells is reduced by a factor of at least 1.5 and up to 7.5, while the estimation of the failure probability improves by several orders of magnitude.

Low-power design is a major requirement given the growing market of battery-powered mobile devices. It is equally necessary because of the high power densities of current systems, in order to reduce thermal dissipation and its consequences for aging. The traditional approach of lowering the supply voltage to reduce consumption is problematic for SRAMs because of the increased impact of variability at low voltages. A cell design is proposed that uses a negative bit-line write assist to overcome write failures as the main supply voltage is lowered. Despite using a second power source for the negative bit-line voltage, the proposed design reduces energy consumption by up to 20% compared with a conventional cell. A new metric, the hold trip point, is introduced to prevent new failure modes caused by the use of negative voltages, together with an alternative method for estimating read speed that requires fewer simulations.

As device scaling continues, new mechanisms are introduced to ease the fabrication process or to reach the performance targets of each new technology node. One example is the compressive or tensile strain applied to the fins in FinFET technologies, which alters the mobility of the transistors built on those fins. The effects of these mechanisms are strongly layout dependent: the position of some transistors affects neighboring transistors, and different transistor types can be affected in different ways. The thesis proposes a complementary SRAM cell that uses pMOS pass-gate transistors, thereby shortening the fins of the nMOS transistors and lengthening those of the pMOS transistors, extending them to neighboring cells and up to the edges of the cell array. Considering the effects of shallow trench isolation (STI) and SiGe stressors, the proposed design improves both transistor types, boosting the performance of the complementary SRAM cell by more than 10% for the same failure probability and static power consumption, with no area overhead.

Finally, radiation has been a long-standing concern in space electronics, but the small currents and voltages of current devices are making them vulnerable to radiation-induced noise even at ground level. Although technologies such as SOI or FinFET reduce the amount of energy collected by the circuit when a particle strikes, the large process variations of the smallest nodes will affect their radiation immunity. It is shown that radiation-induced errors can increase by up to 40% at the 7 nm node when process variations are considered, compared with the nominal case. This increase is larger than the improvement achieved by specifically radiation-hardened memory cell designs, suggesting that reducing variability would bring a greater improvement.
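As a hedged sketch of the general idea only (not the thesis's injector method, which also models sub-threshold slope and DIBL and evaluates real cell metrics with circuit simulation), the following Monte Carlo loop draws random threshold-voltage shifts for the six transistors of a cell, applies a toy failure criterion, and propagates the cell failure probability to a full array; every constant is an assumption.

```python
# Minimal Monte Carlo sketch of variability-aware SRAM yield estimation.
# The "injectors" here are just independent Gaussian threshold-voltage shifts
# and the failure criterion is a toy proxy for a real stability metric.

import random

SIGMA_VTH = 0.030       # per-transistor threshold-voltage sigma, volts (assumed)
MARGIN_NOMINAL = 0.090  # nominal stability margin, volts (assumed)
SENSITIVITY = 1.0       # margin lost per volt of worst-case Vth shift (assumed)

def sample_cell_margin(rng):
    """Draw Vth shifts for the six transistors and return a toy stability margin."""
    worst_shift = max(abs(rng.gauss(0.0, SIGMA_VTH)) for _ in range(6))
    return MARGIN_NOMINAL - SENSITIVITY * worst_shift

def cell_failure_probability(n_samples=200_000, seed=1):
    rng = random.Random(seed)
    failures = sum(1 for _ in range(n_samples) if sample_cell_margin(rng) <= 0.0)
    return failures / n_samples

if __name__ == "__main__":
    p_cell = cell_failure_probability()
    # A 1 Mbit array only works if every cell works (independent cells assumed).
    p_array = 1.0 - (1.0 - p_cell) ** (1 << 20)
    print(f"estimated cell failure probability:  {p_cell:.2e}")
    print(f"estimated 1 Mbit array failure rate: {p_array:.3f}")
```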

Relevance:

100.00%

Publisher:

Abstract:

Dissection of the primary and secondary response to an influenza A virus established that the liver contains a substantial population of CD8+ T cells specific for the immunodominant epitope formed by H-2Db and the influenza virus nucleoprotein peptide fragment NP366–374 (DbNP366). The numbers of CD8+ DbNP366+ cells in the liver reflected the magnitude of the inflammatory process in the pneumonic lung, though replication of this influenza virus is limited to the respiratory tract. Analysis of surface phenotypes indicated that the liver CD8+ DbNP366+ cells tended to be more “activated” than the set recovered from lymphoid tissue but generally less so than those from the lung. The distinguishing characteristic of the lymphocytes from the liver was that the prevalence of the CD8+ DbNP366+ set was always much higher than the percentage of CD8+ T cells that could be induced to synthesize interferon γ after short-term, in vitro stimulation with the NP366–374 peptide, whereas these values were generally comparable for virus-specific CD8+ T cells recovered from other tissue sites. Also, the numbers of apoptotic CD8+ T cells were higher in the liver. The results overall are consistent with the idea that antigen-specific CD8+ T cells are destroyed in the liver during the control and resolution phases of this viral infection, though this destruction is not necessarily an immediate process.

Relevance:

100.00%

Publisher:

Abstract:

We have measured experimental adsorption isotherms of water in zeolite LTA4A, and studied the regeneration process by performing subsequent adsorption cycles after degassing at different temperatures. We observed incomplete desorption at low temperatures, and cation rearrangement at successive adsorption cycles. We also developed a new molecular simulation force field able to reproduce experimental adsorption isotherms in the range of temperatures between 273 K and 374 K. Small deviations observed at high pressures are attributed to the change in the water dipole moment at high loadings. The force field correctly describes the preferential adsorption sites of water at different pressures. We tested the influence of the zeolite structure, framework flexibility, and cation mobility when considering adsorption and diffusion of water. Finally, we performed checks on force field transferability between different hydrophilic zeolite types, concluding that classical, non-polarizable water force fields are not transferable.
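The force-field results themselves cannot be reproduced here; as a generic companion to adsorption-isotherm work, the snippet below fits a single-site Langmuir model to a set of (pressure, loading) points with SciPy. The data points are invented for illustration and are not the paper's measurements on zeolite LTA 4A.

```python
# Generic illustration only: fitting a single-site Langmuir isotherm to
# hypothetical (pressure, loading) data with scipy.optimize.curve_fit.

import numpy as np
from scipy.optimize import curve_fit

def langmuir(p, q_sat, b):
    """Langmuir isotherm: q(p) = q_sat * b * p / (1 + b * p)."""
    return q_sat * b * p / (1.0 + b * p)

# Hypothetical isotherm points: pressure in kPa, loading in mol per kg of solid.
pressure = np.array([0.05, 0.1, 0.2, 0.5, 1.0, 2.0, 5.0])
loading = np.array([2.1, 3.8, 6.0, 9.1, 11.0, 12.4, 13.5])

(q_sat, b), _ = curve_fit(langmuir, pressure, loading, p0=(14.0, 5.0))
print(f"fitted saturation loading: {q_sat:.2f} mol/kg, affinity: {b:.2f} 1/kPa")
```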

Relevance:

100.00%

Publisher:

Abstract:

Standards reduce production costs and increase products’ value to consumers. Standards, however, entail risks of anti-competitive abuse. After the adoption of a standard, the chosen technology normally lacks credible substitutes. The owner of the patented technology might thus have additional market power relative to locked-in licensees, and might exploit this power to charge higher access rates. In the economic literature this phenomenon is referred to as ‘hold-up’. To reduce the risk of hold-up, standard-setting organisations often require patent holders to disclose their standard-essential patents before the adoption of the standard and to commit to license on fair, reasonable and non-discriminatory (FRAND) terms. The European Commission normally investigates unfair pricing abuse in a standard-setting context if a patent holder who committed to FRAND ex-ante is suspected of not abiding by it ex-post. However, this approach risks ignoring a number of potential abuses which are likely to be harmful to welfare. That can happen if, for example, ex-post a licensee is able to impose excessively low access rates (‘reverse hold-up’), or if a patent holder acquires additional market power thanks to the standard but its essential patents are not encumbered by FRAND commitments, for instance because the patent holder did not directly participate in the standard-setting process and was therefore not required by the standard-setting organisations to commit to FRAND ex-ante. A consistent policy by the Commission, capable of tackling all sources of harm, should be enforced regardless of whether FRAND commitments are given. Antitrust enforcement should hinge on the identification of a distortion in the bargaining process around technology access prices, one which is determined by the adoption of the standard and is not attributable to the pro-competitive merits of any of the involved players.

Relevance:

100.00%

Publisher:

Abstract:

Fuzzy data has grown to be an important factor in data mining. Whenever uncertainty exists, simulation can be used as a model. Simulation is very flexible, although it can involve significant levels of computation. This article discusses fuzzy decision-making using the grey related analysis method. Fuzzy models are expected to better reflect decision-making uncertainty, at some cost in accuracy relative to crisp models. Monte Carlo simulation is used to incorporate experimental levels of uncertainty into the data and to measure the impact of fuzzy decision tree models using categorical data. Results are compared with decision tree models based on crisp continuous data.
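A minimal sketch of how grey relational grades can be computed and combined with Monte Carlo perturbation of the inputs is shown below; the alternatives, attribute values, noise level, and the conventional distinguishing coefficient of 0.5 are illustrative assumptions, not the article's data set.

```python
# Hedged sketch of grey relational analysis on a small decision matrix, with a
# simple Monte Carlo perturbation of the inputs to reflect data uncertainty.

import random

def grey_relational_grades(matrix):
    """Rank alternatives (rows) against the per-attribute ideal (benefit attributes)."""
    cols = list(zip(*matrix))
    # Normalise each attribute to [0, 1] (larger-is-better assumed for all columns).
    norm = [[(v - min(c)) / (max(c) - min(c)) for v, c in zip(row, cols)]
            for row in matrix]
    reference = [1.0] * len(cols)
    deltas = [[abs(r - v) for r, v in zip(reference, row)] for row in norm]
    d_min = min(min(row) for row in deltas)
    d_max = max(max(row) for row in deltas)
    zeta = 0.5  # distinguishing coefficient, conventional value
    coeff = [[(d_min + zeta * d_max) / (d + zeta * d_max) for d in row]
             for row in deltas]
    return [sum(row) / len(row) for row in coeff]

# Three hypothetical alternatives scored on three benefit attributes.
data = [[7.0, 0.62, 55.0],
        [9.0, 0.55, 61.0],
        [6.0, 0.70, 48.0]]

rng = random.Random(0)
trials = [grey_relational_grades([[v * (1 + rng.gauss(0, 0.05)) for v in row]
                                  for row in data])
          for _ in range(1000)]
means = [sum(t[i] for t in trials) / len(trials) for i in range(len(data))]
print("mean grey relational grade per alternative:", [round(m, 3) for m in means])
```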

Relevance:

100.00%

Publisher:

Abstract:

Discrete stochastic simulations are a powerful tool for understanding the dynamics of chemical kinetics when there are small-to-moderate numbers of certain molecular species. In this paper we introduce delays into the stochastic simulation algorithm, thus mimicking delays associated with transcription and translation. We then show that this process may well explain more faithfully than continuous deterministic models the observed sustained oscillations in expression levels of hes1 mRNA and Hes1 protein.
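A minimal sketch of a delayed stochastic simulation in this spirit is given below for a toy hes1-like negative-feedback loop, where transcription releases its mRNA only after a fixed delay; the rate constants, Hill repression term, and delay value are illustrative assumptions rather than fitted parameters.

```python
# Minimal delayed stochastic simulation of a toy negative-feedback loop:
# transcription completes after a fixed delay TAU, while translation and decay
# fire as ordinary (instantaneous) Gillespie reactions.

import heapq
import math
import random

A, P0, H = 1.0, 100.0, 4        # max transcription rate, repression threshold, Hill coeff.
K_TRANSLATE = 1.0               # translation rate per mRNA (assumed)
MU_M, MU_P = 0.03, 0.03         # mRNA and protein degradation rates (assumed)
TAU = 20.0                      # transcriptional delay (assumed)

def simulate(t_end=1000.0, seed=2):
    rng = random.Random(seed)
    t, m, p = 0.0, 0, 0
    pending = []                 # completion times of delayed transcription events
    while t < t_end:
        rates = [A / (1.0 + (p / P0) ** H),  # start transcription (delayed product)
                 K_TRANSLATE * m,            # translation: P += 1
                 MU_M * m,                   # mRNA decay
                 MU_P * p]                   # protein decay
        total = sum(rates)
        dt = rng.expovariate(total) if total > 0 else math.inf
        if pending and pending[0] <= t + dt:
            # A delayed transcript finishes first: jump there and release the mRNA.
            t = heapq.heappop(pending)
            m += 1
            continue
        t += dt
        r = rng.uniform(0.0, total)
        if r < rates[0]:
            heapq.heappush(pending, t + TAU)   # schedule delayed mRNA appearance
        elif r < rates[0] + rates[1]:
            p += 1
        elif r < rates[0] + rates[1] + rates[2]:
            m = max(0, m - 1)
        else:
            p = max(0, p - 1)
    return m, p

if __name__ == "__main__":
    print("final (mRNA, protein) copy numbers:", simulate())
```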

Relevance:

100.00%

Publisher:

Abstract:

The potential for the use of DEA and simulation in a mutually supporting role in guiding operating units to improved performance is presented. An analysis following a three-stage process is suggested. Stage one involves obtaining the data for the DEA analysis; this can be sourced from historical data, simulated data, or a combination of the two. Stage two involves the DEA analysis that identifies benchmark operating units. In the third stage, simulation can be used to offer practical guidance to operating units towards improved performance. This can be achieved through sensitivity analysis of the benchmark unit using a simulation model, offering direct support as to the feasibility and efficiency of any variations in operating practices to be tested. Alternatively, the simulation can be used as a mechanism to transmit the practices of the benchmark unit to weaker-performing units by building a simulation model of the weaker unit around the process design of the benchmark unit. The model can then compare the performance of the current and benchmark process designs. Quantifying improvement in this way provides a useful driver for any process change initiative required to bring the performance of weaker units up to the best in class.
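To make stage two concrete, the sketch below computes input-oriented CCR (constant-returns-to-scale) DEA efficiency scores by solving the standard envelopment linear program with SciPy; the operating-unit data are invented, and the formulation is the textbook one rather than anything specific to this article.

```python
# Hedged sketch of a DEA stage: input-oriented CCR efficiency of each unit,
# solved as the envelopment-form linear program with scipy.optimize.linprog.

import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(inputs, outputs, unit):
    """min theta  s.t.  sum_j lambda_j x_j <= theta * x_unit,
                        sum_j lambda_j y_j >= y_unit,  lambda >= 0."""
    n, m = inputs.shape          # units x input dimensions
    _, s = outputs.shape         # units x output dimensions
    c = np.zeros(n + 1)          # decision variables: [theta, lambda_1 .. lambda_n]
    c[0] = 1.0
    # Input constraints: sum_j lambda_j x_ij - theta * x_i,unit <= 0
    a_in = np.hstack([-inputs[unit].reshape(-1, 1), inputs.T])
    # Output constraints: -sum_j lambda_j y_rj <= -y_r,unit
    a_out = np.hstack([np.zeros((s, 1)), -outputs.T])
    a_ub = np.vstack([a_in, a_out])
    b_ub = np.concatenate([np.zeros(m), -outputs[unit]])
    res = linprog(c, A_ub=a_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n, method="highs")
    return res.fun

# Hypothetical operating units: two inputs (staff, cost) and one output (jobs done).
x = np.array([[20.0, 150.0], [30.0, 140.0], [25.0, 200.0], [40.0, 180.0]])
y = np.array([[100.0], [120.0], [90.0], [130.0]])

for j in range(len(x)):
    print(f"unit {j}: efficiency = {ccr_efficiency(x, y, j):.3f}")
```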

Relevance:

100.00%

Publisher:

Abstract:

Cellular mobile radio systems will be of increasing importance in the future. This thesis describes research work concerned with the teletraffic capacity and the computer control requirements of such systems. The work involves theoretical analysis and experimental investigations using digital computer simulation. New formulas are derived for the congestion in single-cell systems in which there are both land-to-mobile and mobile-to-mobile calls and in which mobile-to-mobile calls go via the base station. Two approaches are used: the first yields modified forms of the familiar Erlang and Engset formulas, while the second gives more complicated but more accurate formulas. The results of computer simulations to establish the accuracy of the formulas are described. New teletraffic formulas are also derived for the congestion in multi-cell systems. Fixed, dynamic and hybrid channel assignments are considered. The formulas agree with previously published simulation results. Simulation programs are described for the evaluation of the speech traffic of mobiles and for the investigation of a possible computer network for the control of the speech traffic. The programs were developed according to the structured programming approach, leading to programs of modular construction. Two simulation methods are used for the speech traffic: the roulette method and the time-true method. The first is economical but has some restrictions, while the second is expensive but gives comprehensive answers. The proposed control network operates at three hierarchical levels performing various control functions, which include the setting-up and clearing-down of calls, the hand-over of calls between cells and the address-changing of mobiles travelling between cities. The results demonstrate the feasibility of the control network and indicate that small mini-computers inter-connected via voice-grade data channels would be capable of providing satisfactory control.
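As a baseline for the congestion formulas mentioned above, the snippet below evaluates the classical Erlang B blocking probability with the standard numerically stable recursion; the modified land-to-mobile/mobile-to-mobile formulas derived in the thesis are not reproduced, and the traffic values are illustrative.

```python
# Classical Erlang B blocking probability via the standard stable recursion:
# B(0) = 1,  B(k) = A * B(k-1) / (k + A * B(k-1)).

def erlang_b(traffic_erlangs, channels):
    """Blocking probability for Poisson traffic offered to `channels` servers."""
    b = 1.0
    for k in range(1, channels + 1):
        b = traffic_erlangs * b / (k + traffic_erlangs * b)
    return b

if __name__ == "__main__":
    for a in (5.0, 10.0, 15.0):
        print(f"offered traffic {a:4.1f} E, 15 channels: "
              f"blocking = {erlang_b(a, 15):.4f}")
```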

Relevance:

100.00%

Publisher:

Abstract:

A graphical process control language has been developed as a means of defining process control software. The user configures a block diagram describing the required control system from a menu of functional blocks, using a graphics software system with a graphics terminal. Additions may be made to the menu of functional blocks to extend the system capability, and a group of blocks may be defined as a composite block. This latter feature provides for segmentation of the overall system diagram and the repeated use of the same group of blocks within the system. The completed diagram is analyzed by a graphics compiler which generates the programs and data structure to realise the run-time software. The run-time software has been designed as a data-driven system which allows for modifications at the run-time level in both parameters and system configuration. Data structures have been specified to ensure efficient execution and minimal storage requirements in the final control software. Machine independence has been accommodated as far as possible by using CORAL 66 as the high-level language throughout the entire system, the final run-time code being generated by a CORAL 66 compiler appropriate to the target processor.
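A compact sketch of the data-driven run-time idea is given below: the control scheme is held as a table of functional blocks and signal connections, and a small interpreter evaluates it each scan, so parameters and configuration can be changed without recompiling. The original system generated CORAL 66 code; Python and the two block types shown are used here purely for illustration.

```python
# Sketch of a data-driven functional-block executor: the diagram is data (a list
# of block records), and one interpreter routine runs a control scan over it.

def pid_block(params, inputs, state):
    error = inputs["setpoint"] - inputs["measurement"]
    state["integral"] = state.get("integral", 0.0) + error * params["dt"]
    return params["kp"] * error + params["ki"] * state["integral"]

def limit_block(params, inputs, state):
    return min(params["high"], max(params["low"], inputs["value"]))

BLOCK_TYPES = {"PID": pid_block, "LIMIT": limit_block}

# The "compiled diagram": ordered blocks, with inputs wired to signals by name.
diagram = [
    {"id": "tc1", "type": "PID", "params": {"kp": 2.0, "ki": 0.1, "dt": 1.0},
     "inputs": {"setpoint": "temp_sp", "measurement": "temp_pv"}, "output": "tc1_out"},
    {"id": "lim1", "type": "LIMIT", "params": {"low": 0.0, "high": 100.0},
     "inputs": {"value": "tc1_out"}, "output": "valve_demand"},
]

def scan(diagram, signals, states):
    """Execute one control scan: evaluate every block against the signal table."""
    for block in diagram:
        inputs = {name: signals[src] for name, src in block["inputs"].items()}
        state = states.setdefault(block["id"], {})
        signals[block["output"]] = BLOCK_TYPES[block["type"]](block["params"], inputs, state)
    return signals

signals = {"temp_sp": 80.0, "temp_pv": 72.5}
print(scan(diagram, signals, states={}))
```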

Relevance:

100.00%

Publisher:

Abstract:

A major application of computers has been to control physical processes, in which the computer is embedded within some larger physical process and is required to control concurrent physical processes. The main difficulty with these systems is their event-driven characteristics, which complicate their modelling and analysis. Although a number of researchers in the process systems community have approached the problems of modelling and analysing such systems, there is still a lack of standardised software development formalisms for system (controller) development, particularly at the early stages of the system design cycle. This research forms part of a larger research programme concerned with the development of real-time process-control systems in which software is used to control concurrent physical processes. The general objective of the research in this thesis is to investigate the use of formal techniques in the analysis of such systems at their early stages of development, with a particular bias towards application to high-speed machinery. Specifically, the research aims to generate a standardised software development formalism for real-time process-control systems, particularly for software controller synthesis. In this research, a graphical modelling formalism called Sequential Function Chart (SFC), a variant of Grafcet, is examined. SFC, which is defined in the international standard IEC 1131 as a graphical description language, has been used widely in industry and has achieved an acceptable level of maturity and acceptance. A comparative study between SFC and Petri nets is presented in this thesis. To overcome identified inaccuracies in SFC, a formal definition of its firing rules is given. To provide a framework in which SFC models can be analysed formally, an extended time-related Petri net model for SFC is proposed and the transformation method is defined. The SFC notation lacks a systematic way of synthesising system models from real-world systems. A standardised approach to the development of real-time process-control systems is therefore required, such that the system (software) functional requirements can be identified, captured and analysed. A rule-based approach and a method called the system behaviour driven method (SBDM) are proposed as a development formalism for real-time process-control systems.
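To make the firing-rule discussion concrete, the sketch below encodes the basic SFC evolution rule (a transition is enabled when all of its preceding steps are active, and it fires when its condition is true, deactivating those steps and activating its successors); the two-step chart is invented, and the thesis's formal definition is not reproduced here.

```python
# Hedged sketch of basic SFC step/transition evolution on a tiny invented chart.

from dataclasses import dataclass, field

@dataclass
class Transition:
    before: set          # steps that must all be active to enable the transition
    after: set           # steps activated when it fires
    condition: str       # name of a Boolean input

@dataclass
class Chart:
    active: set
    transitions: list = field(default_factory=list)

    def evolve(self, inputs):
        """Fire every enabled transition whose condition is true (one evolution)."""
        fired = [t for t in self.transitions
                 if t.before <= self.active and inputs.get(t.condition, False)]
        for t in fired:
            self.active = (self.active - t.before) | t.after
        return self.active

chart = Chart(active={"idle"},
              transitions=[Transition({"idle"}, {"running"}, "start_pressed"),
                           Transition({"running"}, {"idle"}, "stop_pressed")])

print(chart.evolve({"start_pressed": True}))   # {'running'}
print(chart.evolve({"stop_pressed": True}))    # {'idle'}
```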

Relevance:

100.00%

Publisher:

Abstract:

The concept of a task is fundamental to the discipline of ergonomics. Approaches to the analysis of tasks began in the early 1900s. These approaches have evolved and developed to the present day, when a vast array of methods is available. Some of these methods are specific to particular contexts or applications, while others are more general. However, whilst many of these analyses allow tasks to be examined in detail, they do not act as tools to aid the design process or the designer. The present thesis examines the use of task analysis in a process control context, and in particular its use to specify operator information and display requirements in such systems. The first part of the thesis examines the theoretical aspects of task analysis and presents a review of the methods, issues and concepts relating to task analysis. A review of over 80 methods of task analysis was carried out to form a basis for the development of a task analysis method to specify operator information requirements in industrial process control contexts. Of the methods reviewed, Hierarchical Task Analysis was selected to provide such a basis and was developed to meet the criteria outlined for such a method. The second section outlines the practical application and evolution of the developed task analysis method. Four case studies were used to examine the method in an empirical context. The case studies represent a range of plant contexts and types: complex and simple, batch and continuous, and high-risk and low-risk processes. The theoretical and empirical issues are drawn together, and a method is developed to provide a task analysis technique that specifies operator information requirements and provides the first stages of a tool to aid the design of VDU displays for process control.
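As an illustration of the kind of structure Hierarchical Task Analysis produces, the sketch below represents tasks as a tree of redescribed subtasks, each with a plan and the operator information it requires; the example task and information items are invented and not taken from the thesis's case studies.

```python
# Illustrative only: a minimal data structure for a Hierarchical Task Analysis,
# where each task is redescribed into subtasks governed by a plan and annotated
# with the operator information it requires.

from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    plan: str = ""                                     # how subtasks are ordered/triggered
    information: list = field(default_factory=list)    # operator information requirements
    subtasks: list = field(default_factory=list)

    def walk(self, depth=0):
        """Print the hierarchy with the information required at each task."""
        pad = "  " * depth
        print(f"{pad}{self.name}" + (f"  [plan: {self.plan}]" if self.plan else ""))
        for item in self.information:
            print(f"{pad}  - needs: {item}")
        for sub in self.subtasks:
            sub.walk(depth + 1)

hta = Task("0. Control reactor temperature",
           plan="do 1, then repeat 2-3 until batch complete",
           subtasks=[
               Task("1. Establish setpoint", information=["current setpoint", "batch recipe"]),
               Task("2. Monitor temperature trend", information=["PV trend", "alarm limits"]),
               Task("3. Adjust cooling valve", information=["valve position", "PV vs SP error"]),
           ])
hta.walk()
```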

Relevance:

100.00%

Publisher:

Abstract:

The open content creation process has proven itself to be a powerful and influential way of developing text-based content, as demonstrated by the success of Wikipedia and related sites. Distributed individuals independently edit, revise, or refine content, thereby creating knowledge artifacts of considerable breadth and quality. Our study explores the mechanisms that control and guide the content creation process and develops an understanding of open content governance. The repertory grid method is employed to systematically capture the experiences of individuals involved in the open content creation process and to determine the relative importance of the diverse control and guiding mechanisms. Our findings illustrate the important control and guiding mechanisms and highlight the multifaceted nature of open content governance. A range of governance mechanisms is discussed with regard to the varied levels of formality, the different loci of authority, and the diverse interaction environments involved. Limitations and opportunities for future research are provided.

Relevance:

100.00%

Publisher:

Abstract:

Aggregation and caking of particles are common and severe problems in many operations and processes involving granular materials, of which granulated sugar is an important example. Preventing aggregation and caking of granular materials requires a good understanding of moisture migration and caking mechanisms. In this paper, the modeling of solid bridge formation between particles is introduced, based on the migration of atmospheric moisture into containers packed with granular materials through vapor evaporation and condensation. A model for the caking process is then developed, based on the growth of liquid bridges (during condensation) and their hardening and subsequent creation of solid bridges (during evaporation). The predicted caking strengths agree well with available experimental data on granulated sugar under storage conditions.
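As a toy illustration of the mechanism (not the paper's model), the sketch below steps through humidity cycles in which a liquid bridge grows by condensation and its dissolved sugar is deposited as a solid bridge on evaporation; all rates, the solubility value, and the strength scaling are assumptions made only for the example.

```python
# Toy humidity-cycle illustration of liquid-bridge growth and solid-bridge
# formation between two sugar particles. Every constant is an assumption.

CONDENSATION_RATE = 2.0e-12   # m^3 of liquid added per hour while humid (assumed)
SOLUBILITY = 0.67             # kg of sugar dissolved per kg of water (approximate)
WATER_DENSITY = 1000.0        # kg/m^3
SUGAR_DENSITY = 1590.0        # kg/m^3
STRENGTH_PER_VOLUME = 5.0e8   # bridge strength per m^3 of solid bridge, N/m^3 (assumed)

def humidity_cycles(n_cycles=10, humid_hours=12):
    liquid_volume = 0.0   # current liquid bridge volume, m^3
    solid_volume = 0.0    # accumulated solid bridge volume, m^3
    for _ in range(n_cycles):
        # Condensation phase: the liquid bridge grows and saturates with sugar.
        liquid_volume += CONDENSATION_RATE * humid_hours
        dissolved_sugar = liquid_volume * WATER_DENSITY * SOLUBILITY   # kg
        # Evaporation phase: water leaves; dissolved sugar solidifies into the bridge.
        solid_volume += dissolved_sugar / SUGAR_DENSITY
        liquid_volume = 0.0
    return solid_volume

solid = humidity_cycles()
print(f"solid bridge volume after 10 cycles: {solid:.2e} m^3")
print(f"toy bridge strength estimate: {solid * STRENGTH_PER_VOLUME:.2e} N")
```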