924 results for "Reasonable Length of Process"


Relevance: 100.00%

Abstract:

This letter presents a temperature-sensing technique based on the temperature dependence of MOSFET leakage currents. To mitigate the effects of process variation, the ratio of two different leakage-current measurements is calculated. Simulations show that this ratio is robust to process spread. The resulting sensor is quite small (0.0016 mm², including analog-to-digital conversion) and very energy efficient, consuming less than 640 pJ/conversion. After a two-point calibration, the error over the 40°C-110°C range is less than 1.5°C, which makes the technique suitable for thermal management applications.
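As an aside, the cancellation that makes the ratio technique robust can be sketched in a few lines. The model below is a textbook subthreshold-leakage expression, not the circuit from the letter; I0, Vth and the slope factor n are illustrative values standing in for process spread.

```python
import math

K_B = 1.380649e-23     # Boltzmann constant, J/K
Q_E = 1.602176634e-19  # elementary charge, C

def subthreshold_leakage(vgs, vth, temp_k, i0, n=1.5):
    """Simplified subthreshold leakage: I = I0 * exp((Vgs - Vth) / (n * kT/q))."""
    vt = K_B * temp_k / Q_E  # thermal voltage
    return i0 * math.exp((vgs - vth) / (n * vt))

def temperature_from_ratio(i1, i2, v1, v2, n=1.5):
    """Invert the ratio I1/I2 = exp((V1 - V2) / (n * kT/q)) to recover T in kelvin.
    I0 and Vth appear in both measurements and cancel out of the ratio."""
    return Q_E * (v1 - v2) / (n * K_B * math.log(i1 / i2))

# Two leakage measurements on the same device at different gate biases.
# I0 and Vth model process spread; both cancel in the ratio.
for i0, vth in [(1e-9, 0.30), (3e-9, 0.35)]:  # two hypothetical process corners
    t_true = 350.0  # 77 °C
    i1 = subthreshold_leakage(0.10, vth, t_true, i0)
    i2 = subthreshold_leakage(0.05, vth, t_true, i0)
    t_est = temperature_from_ratio(i1, i2, 0.10, 0.05)
    print(f"corner I0={i0:.0e}, Vth={vth}: T_est = {t_est:.1f} K")
```

Both corners recover the same temperature, which is the property the abstract exploits.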

Relevance: 100.00%

Abstract:

Speech is a major function whose emergence and development radically change the whole course of personality formation, starting in early childhood. While language and speech development in singleton children has been studied quite thoroughly, in twins this process remains practically unexplored. Our research studied the distinctive features of speech acquisition by opposite-sex twin pairs within the communicative-pragmatic approach (T. N. Ushakova, G. V. Chirkina). Applying this approach to the analysis of communication in twins allowed us to identify the particular devices and means of communication that they develop functionally in the twin-pair situation, including speech phenomena not found in their singleton peers. This work presents the results of observation and study of a pair of opposite-sex twins in the second year of life, carried out with a technique we developed under the scientific guidance of G. V. Chirkina.

Relevance: 100.00%

Abstract:

We analyze the performance of the geometric distortion incurred when coding depth maps in 3D video as an estimator of the distortion of synthesized views. Our analysis is motivated by the need to reduce the computational complexity of computing the synthesis distortion in 3D video encoders. We propose several geometric distortion models that capture (i) the geometric distortion caused by the depth coding error, and (ii) the pixel-mapping precision in view synthesis. Our analysis starts with the evaluation of the correlation between geometric distortion values obtained with these models and the actual distortion on synthesized views. Then, the different geometric distortion models are employed in the rate-distortion optimization cycle of depth map coding, in order to assess the results obtained by the correlation analysis. Results show that one of the geometric distortion models performs consistently better than the others in all tests. Therefore, it can be used as a reasonable estimator of the synthesis distortion in low-complexity depth encoders.
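For intuition, the standard depth-image-based rendering relation linking a depth-coding error to a pixel shift in the synthesized view can be sketched as follows. The camera parameters are hypothetical, and this is not necessarily one of the paper's proposed models.

```python
def pixel_shift_from_depth_error(delta_v, focal_px, baseline, z_near, z_far, levels=255):
    """Horizontal pixel shift in a synthesized view caused by a depth-coding
    error of delta_v quantization levels, for an 8-bit inverse-depth map where
    1/Z is mapped linearly onto [0, levels]."""
    inv_depth_step = (1.0 / z_near - 1.0 / z_far) / levels
    return focal_px * baseline * delta_v * inv_depth_step

# e.g. a coding error of 4 quantization levels with hypothetical camera geometry
shift = pixel_shift_from_depth_error(4, focal_px=1000.0, baseline=0.05,
                                     z_near=1.0, z_far=10.0)
print(f"geometric distortion: {shift:.3f} px")
```

The linearity in `delta_v` is what makes geometric distortion cheap to evaluate inside a rate-distortion loop, compared with actually rendering the view.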

Relevance: 100.00%

Abstract:

A great challenge for future information technologies is building reliable systems on top of unreliable components. Parameters of modern and future technology devices are affected by severe levels of process variability, and devices will degrade and even fail during the normal lifetime of the chip due to aging mechanisms. These extreme levels of variability are caused by high device miniaturization and the random placement of individual atoms. Variability is considered a "red brick" by the International Technology Roadmap for Semiconductors. The session is devoted to this topic, presenting research experiences from the Spanish Network on Variability, VARIABLES. In this session a talk entitled "Modeling sub-threshold slope and DIBL mismatch of sub-22nm FinFET" was presented.

Relevance: 100.00%

Abstract:

Impact response surfaces (IRSs) depict the response of an impact variable to changes in two explanatory variables as a plotted surface. Here, IRSs of spring and winter wheat yields were constructed from a 25-member ensemble of process-based crop simulation models. Twenty-one models were calibrated by different groups using a common set of calibration data, with calibrations applied independently to the same models in three cases. The sensitivity of modelled yield to changes in temperature and precipitation was tested by systematically modifying values of 1981-2010 baseline weather data to span the range of changes projected for the late 21st century at three locations in Europe.
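A minimal sketch of how such a surface is assembled: a yield model is evaluated over a grid of temperature and precipitation changes. The response function and its coefficients below are invented purely for illustration; the real surfaces come from process-based crop models.

```python
def toy_yield_model(d_temp, d_precip, base_yield=6.0):
    """Hypothetical yield response (t/ha): warming reduces yield linearly,
    extra precipitation helps up to a point. Illustrative only."""
    return base_yield * (1 - 0.05 * d_temp) * (1 + 0.004 * d_precip - 0.00002 * d_precip ** 2)

# Impact response surface: the impact variable (yield) evaluated on a grid of
# the two explanatory variables (temperature change in C, precipitation change in %).
d_temps = [t * 0.5 for t in range(-2, 13)]    # -1.0 .. +6.0 C in 0.5 C steps
d_precips = list(range(-50, 51, 10))          # -50 .. +50 % in 10 % steps
irs = [[toy_yield_model(dt, dp) for dt in d_temps] for dp in d_precips]
print(len(irs), "x", len(irs[0]), "grid of simulated yields")
```

Each cell of `irs` corresponds to one perturbed-weather simulation; plotting the grid as contours gives the impact response surface.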

Relevance: 100.00%

Abstract:

ABSTRACT: Reliability is becoming the main concern in integrated circuits as technology scales below 22nm. Small imperfections in device manufacturing now result in significant random differences in the electrical characteristics of devices, which must be dealt with during design. New processes and materials, required to fabricate such extremely small devices, are giving rise to new effects that ultimately result in increased static power consumption or higher vulnerability to radiation. SRAMs have become the most vulnerable part of electronic systems: not only do they account for more than half of the chip area of today's SoCs and microprocessors, but they are also critically affected by process variations, since the failure of a single cell makes the whole memory fail. This thesis addresses the different challenges that SRAM design faces in the smallest technologies. In a common scenario of increasing variability, issues such as energy consumption, technology-aware design and radiation hardening are considered. First, given the increasing magnitude of device variability in the smallest nodes, as well as the new sources of variability that appear as a consequence of new devices and shrinking dimensions, accurate modeling of that variability is crucial. We propose to extend the injectors method, which models variability at circuit level while abstracting its physical sources, to better model the sub-threshold slope and drain-induced barrier lowering (DIBL), which are gaining importance in FinFET technology. The two new proposed injectors increase the accuracy of figures of merit at different abstraction levels of electronic design: transistor, gate and circuit level. The mean square error when estimating performance and stability metrics of SRAM cells is reduced by a factor of at least 1.5 and up to 7.5, while the yield estimation is improved by orders of magnitude. Low-power design is a major constraint given the fast-growing market of battery-powered mobile devices.
It is also relevant because of the high power densities of today's systems, in order to reduce thermal dissipation and its impact on aging. The traditional approach of reducing the supply voltage to lower energy consumption is challenging in the case of SRAMs, given the increased impact of process variations at low supply voltages. We propose a cell design that uses a negative bit-line write assist to overcome write failures as the main supply voltage is lowered. Despite using a second power source for the negative bit-line voltage, the design achieves an energy reduction of up to 20% compared with a conventional cell. A new metric, the hold trip point, has been introduced to deal with the new failure modes of cells using a negative bit-line voltage, together with an alternative method to estimate cell speed that requires fewer simulations. As device sizes continue to shrink, new mechanisms are included to ease the fabrication process or to meet the performance targets of successive nodes. As an example, consider the compressive or tensile strain applied in FinFET technology, which alters the mobility of the transistors built on the affected fins. The effects of these mechanisms depend strongly on the layout: transistors are affected by their neighbors, and different types of transistors are affected in different ways. We propose the use of complementary SRAM cells with pMOS pass-gates, which shortens the fins of the nMOS devices and yields long, uncut fins for the pMOS devices when the cell is placed in its array. Once shallow trench isolation (STI) and SiGe stressors are considered, the proposed design improves both kinds of transistor, boosting the performance of complementary SRAM cells by more than 10% for the same failure probability and static power consumption, with no area overhead.
While radiation has been a traditional concern in space electronics, the small currents and voltages of the latest nodes are making them vulnerable to radiation-induced transient noise even at ground level. Even though SOI and FinFET technologies reduce the amount of energy transferred from a striking particle to the circuit, the large process variations of the smallest nodes will affect their radiation hardness. We demonstrate that process variations can increase the radiation-induced error rate by up to 40% in the 7nm node compared with the nominal case. This increase is larger than the improvement achieved by radiation-hardened cells, suggesting that reducing process variations would bring a greater improvement.
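To illustrate why SRAM robustness under variability is framed as a failure probability, here is a toy Monte Carlo sketch. The margin model, sigma values and voltages are all hypothetical; it only shows the shape of the analysis (sampling threshold-voltage spread, counting failing cells, comparing a negative bit-line assist against the nominal case), not the thesis' actual methods.

```python
import random

def write_margin(vdd, vth_shift, bitline_v=0.0):
    """Hypothetical write margin (V): shrinks with supply voltage and Vth mismatch;
    a negative bit-line voltage adds headroom. Purely illustrative."""
    return (vdd - 0.45) - vth_shift - bitline_v

def failure_probability(vdd, sigma_vth, bitline_v=0.0, trials=100_000, seed=1):
    """Monte Carlo estimate of the cell failure probability under Gaussian Vth spread."""
    random.seed(seed)
    fails = sum(write_margin(vdd, random.gauss(0.0, sigma_vth), bitline_v) < 0
                for _ in range(trials))
    return fails / trials

p_nominal = failure_probability(0.6, 0.05)
p_assist = failure_probability(0.6, 0.05, bitline_v=-0.1)  # negative bit-line assist
print(f"P(fail) nominal: {p_nominal:.5f}, with negative bit-line: {p_assist:.5f}")
```

Because a single failing cell compromises the whole array, even tiny per-cell failure probabilities matter, which is why the abstract emphasizes improving yield estimation by orders of magnitude.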

Relevance: 100.00%

Abstract:

Polysilicon production costs contribute approximately 25-33% of the overall cost of solar panels, and a similar fraction of the total energy invested in their fabrication. Understanding the energy losses and the behaviour of process temperature is an essential requirement as one moves forward to design and build large-scale polysilicon manufacturing plants. In this paper we present thermal models for two polysilicon production processes, viz., the Siemens process using trichlorosilane (TCS) as precursor and the fluidized bed process using silane (monosilane, MS). We validate the models with experimental measurements on prototype laboratory reactors, relating the temperature profiles to product quality. A model sensitivity analysis is also performed, and the effects of some key parameters, such as reactor wall emissivity and gas distributor temperature, on temperature distribution and product quality are examined. The information presented in this paper is useful for further understanding the strengths and weaknesses of both deposition technologies, and will help in the optimal temperature profiling of these systems aimed at lowering production costs without compromising solar cell quality.
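As a rough illustration of why wall emissivity is a key parameter, the radiative loss from a hot silicon rod can be estimated with a gray-body approximation. Temperatures, areas and emissivities below are placeholder values; the paper's thermal models are far more complete.

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiative_loss(t_rod_k, t_wall_k, area_m2, emissivity):
    """Net radiative heat loss (W) from a hot rod to the reactor wall,
    simple gray-body approximation."""
    return emissivity * SIGMA * area_m2 * (t_rod_k ** 4 - t_wall_k ** 4)

# Sensitivity of the energy loss to emissivity at a deposition-like temperature
for eps in (0.3, 0.5, 0.7):
    q = radiative_loss(t_rod_k=1423.0, t_wall_k=600.0, area_m2=1.0, emissivity=eps)
    print(f"emissivity {eps}: {q / 1000:.0f} kW lost per m^2 of rod surface")
```

The T^4 dependence is why radiation dominates the energy balance at Siemens-process temperatures and why small emissivity changes shift the losses substantially.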

Relevance: 100.00%

Abstract:

An overview is presented of the current situation regarding radioactive dating of the matter of which our Galaxy is composed. A firm lower bound on the age from nuclear chronometers of ≈9-10 Gyr is entirely consistent with age determinations from globular clusters and white dwarf cooling histories. The reasonable assumption of an approximately uniform nucleosynthesis rate yields an age for the Galaxy of 12.8 ± 3 Gyr, which again is consistent with current determinations from other methods.
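A simplified chronometer calculation can illustrate the idea. The sketch below assumes a single production event rather than the uniform nucleosynthesis rate used in the abstract (which yields older ages); the half-lives are the well-known values for U-238 and Th-232, while the abundance ratios are placeholders.

```python
import math

def chronometer_age_gyr(ratio_produced, ratio_observed, half_life_a, half_life_b):
    """Age (Gyr) from a pair of radioactive chronometers A/B, assuming a single
    production event: R_obs = R_prod * exp(-(lam_a - lam_b) * t)."""
    lam_a = math.log(2) / half_life_a
    lam_b = math.log(2) / half_life_b
    return math.log(ratio_produced / ratio_observed) / (lam_a - lam_b)

# U-238 (4.468 Gyr) and Th-232 (14.05 Gyr) half-lives are well established;
# the production and observed U/Th abundance ratios below are placeholders.
age = chronometer_age_gyr(ratio_produced=0.6, ratio_observed=0.3,
                          half_life_a=4.468, half_life_b=14.05)
print(f"single-event age: {age:.1f} Gyr")
```

Because U-238 decays faster than Th-232, the U/Th ratio falls with time, and the logarithm of that decline dates the material.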

Relevance: 100.00%

Abstract:

The friction of rocks in the laboratory is a function of time, velocity of sliding, and displacement. Although the processes responsible for these dependencies are unknown, constitutive equations have been developed that do a reasonable job of describing the laboratory behavior. These constitutive laws have been used to create a model of earthquakes at Parkfield, CA, by using boundary conditions appropriate for the section of the fault that slips in magnitude 6 earthquakes every 20-30 years. The behavior of this model prior to the earthquakes is investigated to determine whether or not the model earthquakes could be predicted in the real world by using realistic instruments and instrument locations. Premonitory slip does occur in the model, but it is relatively restricted in time and space and detecting it from the surface may be difficult. The magnitude of the strain rate at the earth's surface due to this accelerating slip seems lower than the detectability limit of instruments in the presence of earth noise. Although not specifically modeled, microseismicity related to the accelerating creep and to creep events in the model should be detectable. In fact the logarithm of the moment rate on the hypocentral cell of the fault due to slip increases linearly with minus the logarithm of the time to the earthquake. This could conceivably be used to determine when the earthquake was going to occur. An unresolved question is whether this pattern of accelerating slip could be recognized from the microseismicity, given the discrete nature of seismic events. Nevertheless, the model results suggest that the most likely solution to earthquake prediction is to look for a pattern of acceleration in microseismicity and thereby identify the microearthquakes as foreshocks.
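The constitutive equations alluded to are typically of the rate-and-state (Dieterich-Ruina) form; a minimal sketch with illustrative parameter values follows.

```python
import math

def steady_state_friction(v, mu0=0.6, a=0.010, b=0.015, v0=1e-6):
    """Steady-state rate-and-state friction: mu_ss = mu0 + (a - b) * ln(V / V0).
    With a < b the fault is velocity-weakening, the condition for stick-slip."""
    return mu0 + (a - b) * math.log(v / v0)

def evolve_state(theta, v, dt, d_c=1e-5):
    """Aging-law evolution of the state variable (one Euler step):
    d(theta)/dt = 1 - V * theta / Dc. theta carries the time/history dependence."""
    return theta + dt * (1.0 - v * theta / d_c)

for v in (1e-8, 1e-6, 1e-4):  # sliding velocity, m/s
    print(f"V = {v:.0e} m/s -> mu_ss = {steady_state_friction(v):.4f}")
```

The velocity-weakening branch (friction falling as sliding speeds up) is what allows models built on these laws to produce the accelerating premonitory slip discussed in the abstract.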

Relevance: 100.00%

Abstract:

There are now several crystal structures of antibody Fab fragments complexed to their protein antigens. These include Fab complexes with lysozyme, two Fab complexes with influenza virus neuraminidase, and three Fab complexes with their anti-idiotype Fabs. The pattern of binding that emerges is similar to that found with other protein-protein interactions, with good shape complementarity between the interacting surfaces and reasonable juxtapositions of polar residues so as to permit hydrogen-bond formation. Water molecules have been observed in cavities within the interface and on the periphery, where they often form bridging hydrogen bonds between antibody and antigen. For the most part the antigen is bound in the middle of the antibody combining site with most of the six complementarity-determining residues involved in binding. For the most studied antigen, lysozyme, the epitopes for four antibodies occupy approximately 45% of the accessible surface area. Some conformational changes have been observed to accompany binding in both the antibody and the antigen, although most of the information on conformational change in the latter comes from studies of complexes with small antigens.

Relevance: 100.00%

Abstract:

The refugee status determination process is the set of rules and principles needed to apply Refugee Law to concrete cases. When this set respects the democratic standards of due process of law, the historical tendencies toward political exploitation and manipulation of the institution of refuge can be limited, and the humanitarian goals of this branch of Human Rights can be achieved with greater transparency. When due process of law is respected in refugee proceedings, the asylum seeker is also treated as a subject of rights rather than as an object of the proceedings. Since the 1951 Geneva Convention Relating to the Status of Refugees established no procedural rules, each signatory country must create its own regime for processing requests for the determination, extension, loss and cessation of refugee status in its territory. Brazil's first procedural regime was created in 1997, by Federal Law 9,474. Since then, the country has developed, through the National Committee for Refugees (CONARE), infra-legal rules and practical routines that have produced a procedural standard that is still fragmented and insecure. The study of the national normative apparatus and of the reality observed between 2012 and 2014 reveals problems (occasional or chronic) in complying with several procedural principles, such as legality, impartiality and independence of the adjudicating authority, the right to be heard, full defense, publicity, reasoned decisions, equality, and the reasonable length of process. These problems pose varied challenges to Brazil, in both the legislative and the structural dimension, and must be confronted quickly.
The reason for the urgency, however, is not the new immigration demand observed in the country, but the fact that the violations of due process found in the Brazilian refugee procedure are themselves Human Rights violations which, moreover, undermine the country's commitment to the international protection of refugees.

Relevance: 100.00%

Abstract:

This work addresses the optimization of ammonia–water absorption cycles for cooling and refrigeration applications with economic and environmental concerns. Our approach combines the capabilities of process simulation, multi-objective optimization (MOO), cost analysis and life cycle assessment (LCA). The optimization task is posed in mathematical terms as a multi-objective mixed-integer nonlinear program (moMINLP) that seeks to minimize the total annualized cost and environmental impact of the cycle. This moMINLP is solved by an outer-approximation strategy that iterates between primal nonlinear programming (NLP) subproblems with fixed binaries and a tailored mixed-integer linear programming (MILP) model. The capabilities of our approach are illustrated through its application to an ammonia–water absorption cycle used in cooling and refrigeration applications.
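The core trade-off in such a multi-objective formulation can be illustrated with a toy Pareto filter. The designs and their cost/impact values below are hypothetical, and the actual moMINLP is solved by outer approximation rather than by enumeration; this only shows what "minimizing cost and impact simultaneously" means.

```python
def pareto_front(designs):
    """Keep designs that are not dominated in (cost, impact): a design is
    dominated if another is at least as good in both objectives and strictly
    better in one."""
    front = []
    for d in designs:
        dominated = any(o["cost"] <= d["cost"] and o["impact"] <= d["impact"]
                        and (o["cost"] < d["cost"] or o["impact"] < d["impact"])
                        for o in designs)
        if not dominated:
            front.append(d)
    return front

# Hypothetical absorption-cycle designs: annualized cost (k$/yr) vs LCA impact (pts/yr)
designs = [
    {"name": "A", "cost": 120, "impact": 9.0},
    {"name": "B", "cost": 135, "impact": 7.5},
    {"name": "C", "cost": 150, "impact": 7.6},  # dominated by B
    {"name": "D", "cost": 180, "impact": 6.0},
]
print([d["name"] for d in pareto_front(designs)])
```

The optimization delivers a front of such non-dominated cycle designs, from which a decision-maker picks the preferred cost/impact compromise.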

Relevance: 100.00%

Abstract:

The optimal integration of work and its interaction with heat can represent large energy savings in industrial plants. This paper introduces a new optimization model for the simultaneous synthesis of work exchange networks (WENs), with heat integration, for the optimal pressure recovery of gaseous process streams. The proposed approach to WEN synthesis is analogous to the well-known problem of heat exchanger network (HEN) synthesis. Thus, there is work exchange between high-pressure (HP) and low-pressure (LP) streams, achieved by pressure-manipulation equipment running on common shafts. The model allows the use of several single-shaft turbine-compressor (SSTC) units, as well as stand-alone compressors, turbines and valves. Helper motors and generators are used to cover any deficit or excess of energy. Moreover, between the WEN stages the streams are sent to the HEN to promote thermal recovery, aiming to enhance the work integration. A multi-stage superstructure is proposed to represent the process. The WEN superstructure is optimized in a mixed-integer nonlinear programming (MINLP) formulation and solved with the GAMS software, with the goal of minimizing the total annualized cost. Three examples are conducted to verify the accuracy of the proposed method. In all case studies, the heat integration between WEN stages is essential to improve the pressure recovery and to reduce the total costs involved in the process.
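The shaft energy balance that motivates the helper motors and generators can be sketched with ideal-gas isentropic work expressions. Flows, temperatures, pressure ratios and efficiencies below are placeholder values, not data from the paper.

```python
GAMMA = 1.4  # heat capacity ratio, ideal diatomic gas
R = 8.314    # gas constant, J/(mol K)

def compressor_work(molar_flow, t_in, p_ratio, eff=0.8):
    """Shaft power (W) required to compress an ideal gas (isentropic relation
    divided by an isentropic efficiency)."""
    cp = GAMMA * R / (GAMMA - 1)
    w_ideal = molar_flow * cp * t_in * (p_ratio ** ((GAMMA - 1) / GAMMA) - 1)
    return w_ideal / eff

def turbine_work(molar_flow, t_in, p_ratio, eff=0.8):
    """Shaft power (W) recovered by expanding a high-pressure stream (p_ratio > 1)."""
    cp = GAMMA * R / (GAMMA - 1)
    w_ideal = molar_flow * cp * t_in * (1 - p_ratio ** (-(GAMMA - 1) / GAMMA))
    return w_ideal * eff

# One SSTC unit: a turbine on an HP stream drives a compressor on an LP stream.
w_t = turbine_work(molar_flow=10.0, t_in=450.0, p_ratio=4.0)
w_c = compressor_work(molar_flow=10.0, t_in=320.0, p_ratio=3.0)
balance = w_t - w_c
role = "generator absorbs surplus" if balance > 0 else "helper motor supplies deficit"
print(f"turbine {w_t / 1e3:.1f} kW, compressor {w_c / 1e3:.1f} kW -> {role}")
```

Because the turbine work depends on the stream's inlet temperature, cooling or heating streams in the HEN between WEN stages directly changes this balance, which is the coupling the paper exploits.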

Relevance: 100.00%

Abstract:

This paper introduces a new optimization model for the simultaneous synthesis of heat and work exchange networks. The work integration is performed in the work exchange network (WEN), while the heat integration is carried out in the heat exchanger network (HEN). In the WEN synthesis, streams at high pressure (HP) and low pressure (LP) are subjected to pressure-manipulation stages via turbines and compressors running on common shafts, as well as stand-alone equipment. The model allows the use of several single-shaft turbine-compressor (SSTC) units, together with helper motors and generators to respond to any shortage or excess of energy, respectively, on the SSTC axes. The heat integration of the streams occurs in the HEN between each WEN stage. Thus, as the inlet and outlet stream temperatures in the HEN depend on the WEN design, they must be treated as optimization variables. The proposed multi-stage superstructure is formulated as a mixed-integer nonlinear program (MINLP), in order to minimize the total annualized cost, composed of capital and operational expenses. A case study is conducted to verify the accuracy of the proposed approach. The results indicate that the heat integration between the WEN stages is essential to enhance the work integration and to reduce the total cost of the process, owing to the need for smaller amounts of hot and cold utilities.

Relevance: 100.00%

Abstract:

AIM: To define the financial and management conditions required to introduce a femtosecond laser system for cataract surgery in a clinic, using a fuzzy logic approach. METHODS: In the simulation performed in the current study, the costs associated with the acquisition and use of a commercially available femtosecond laser platform for cataract surgery (VICTUS, TECHNOLAS Perfect Vision GmbH, Bausch & Lomb, Munich, Germany) over a period of 5 years were considered. A sensitivity analysis was performed considering these costs and the accounting amortization of the system over this 5-year period. Furthermore, a fuzzy logic analysis was used to estimate the income (G) associated with each femtosecond laser-assisted cataract surgery. RESULTS: According to the sensitivity analysis, the femtosecond laser system under evaluation can be profitable if 1400 cataract surgeries are performed per year and each surgery is invoiced at more than $500. The fuzzy logic analysis, in contrast, indicated that the patient would have to pay more per surgery, between $661.8 and $667.4, without considering the cost of the intraocular lens (IOL). CONCLUSION: Femtosecond laser systems for cataract surgery can be profitable after a detailed financial analysis, especially in centers with large volumes of patients. The cost of the surgery for patients should be adapted to the actual flow of patients able to pay within a reasonable cost range.
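The sensitivity-analysis logic reduces to a simple break-even calculation. The $700,000/yr annualized cost below is an assumption back-computed from the abstract's figures (1400 surgeries at $500 each), not a number reported by the study.

```python
def breakeven_price(annualized_cost, surgeries_per_year):
    """Minimum fee per surgery (excluding the IOL) for the laser platform to
    break even: annualized cost spread over the yearly surgical volume."""
    return annualized_cost / surgeries_per_year

# Hypothetical annualized cost (purchase, maintenance, consumables) of $700,000/yr,
# evaluated at different patient volumes.
for n in (800, 1400, 2000):
    print(f"{n} surgeries/yr -> break-even fee ${breakeven_price(700_000, n):,.0f}")
```

The inverse relation between volume and break-even fee is why the conclusion singles out centers with large patient volumes; the fuzzy logic analysis then adds margin and uncertainty on top of this floor.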