Abstract:
Combustion is a complex phenomenon involving a multiplicity of variables. Some important variables measured in flame tests follow [1]. In order to characterize ignition, such related parameters as ignition time, ease of ignition, flash ignition temperature, and self-ignition temperature are measured. For studying the propagation of the flame, parameters such as distance burned or charred, area of flame spread, time of flame spread, burning rate, charred or melted area, and fire endurance are measured. Smoke characteristics are studied by determining such parameters as specific optical density, maximum specific optical density, time of occurrence of the densities, maximum rate of density increase, visual obscuration time, and smoke obscuration index. In addition to the above variables, there are a number of specific properties of the combustible system which could be measured. These are soot formation, toxicity of combustion gases, heat of combustion, dripping phenomena during the burning of thermoplastics, afterglow, flame intensity, fuel contribution, visual characteristics, limiting oxygen index (OI), products of pyrolysis and combustion, and so forth. A multitude of flammability tests measuring one or more of these properties have been developed [2]. Admittedly, no single small-scale test is adequate to mimic or assess the performance of a plastic in a real fire situation. The conditions are much too complicated [3, 4]. Some conceptual problems associated with flammability testing of polymers have been reviewed [5, 6].
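As an illustration of the smoke parameters above: the specific optical density used in smoke-chamber tests is derived from the measured light transmittance. A minimal sketch, assuming the nominal geometry factor quoted for the ASTM E662 chamber (treat the constant as an assumption, not an authoritative value):

```python
import math

def specific_optical_density(transmittance_pct, geometry_factor=132.0):
    """Ds = (V / (A * L)) * log10(100 / T), where T is the percent light
    transmittance and the geometry factor lumps chamber volume V,
    specimen area A and optical path length L (nominal E662 value)."""
    return geometry_factor * math.log10(100.0 / transmittance_pct)

# Maximum specific optical density and its time of occurrence,
# two of the smoke parameters listed above (one sample per minute).
trace = [100.0, 80.0, 45.0, 20.0, 12.0, 15.0]
densities = [specific_optical_density(t) for t in trace]
ds_max = max(densities)
t_max = densities.index(ds_max)  # minutes into the test
```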
Abstract:
Data flow computers are high-speed machines in which an instruction is executed as soon as all its operands are available. This paper describes the EXtended MANchester (EXMAN) data flow computer, which incorporates three major extensions to the basic Manchester machine: a multiple matching units scheme, an efficient implementation of the array data structure, and a facility to concurrently execute reentrant routines. A simulator for the EXMAN computer has been coded in the discrete event simulation language SIMULA 67 on the DEC 1090 system. Performance analysis studies have been conducted on the simulated EXMAN computer to study the effectiveness of the proposed extensions. The performance experiments have been carried out using three sample problems: matrix multiplication, Bresenham's line drawing algorithm, and the polygon scan-conversion algorithm.
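To make the dataflow firing rule concrete, the sketch below shows the token-matching idea in miniature: an instruction fires as soon as tokens for all its operand slots have arrived. Names and structure are hypothetical, not the EXMAN or Manchester implementation:

```python
from collections import defaultdict

# instruction id -> (operand count, operation)
program = {"add1": (2, lambda a, b: a + b)}
pending = defaultdict(dict)  # instruction id -> {operand slot: token value}

def arrive(instr_id, slot, value):
    """Match an incoming token against waiting partners; fire the
    instruction once every operand slot is filled (the dataflow rule)."""
    pending[instr_id][slot] = value
    arity, op = program[instr_id]
    if len(pending[instr_id]) == arity:
        args = [pending[instr_id][s] for s in range(arity)]
        del pending[instr_id]   # consume the matched token set
        return op(*args)        # a real machine would emit a result token
    return None

arrive("add1", 0, 3)            # one operand: not yet enabled
result = arrive("add1", 1, 4)   # second operand arrives -> fires; result == 7
```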
Abstract:
The rail-sleeper system is idealized as an infinite, periodic beam-mass system. Use is made of the periodicity principle for the semi-infinite halves on either side of the forcing point to evaluate the wave propagation constants and the corresponding modal vectors. It is shown that the spread of acceleration away from the forcing point depends primarily upon one of the wave propagation constants. However, all four modal vectors (two for the left-hand side and two for the right-hand side) determine the driving point impedance of the rail-sleeper system, which in combination with the driving point impedance of the wheel (adopted from the preceding companion paper) determines the forces generated by combined surface roughness and the resultant accelerations. The compound one-third octave acceleration levels generated by typical roughness spectra are generally of the same order as the observed levels.
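The claim that the spread of acceleration is governed mainly by one propagation constant can be written in the usual periodic-structure (Floquet) form; a sketch under that standard convention, not the paper's full derivation:

\[ a_n \approx a_0\, e^{-n \operatorname{Re}(\mu)}, \]

where \(a_n\) is the acceleration amplitude \(n\) sleeper bays from the forcing point and \(\mu\) is the dominant wave propagation constant, giving an attenuation of about \(8.686\operatorname{Re}(\mu)\) dB per bay.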
Abstract:
A major concern of embedded system architects is design for low power. We address one aspect of the problem in this paper, namely the effect of executable code compression. There are two benefits of code compression: firstly, a reduction in the memory footprint of embedded software, and secondly, a potential reduction in memory bus traffic and power consumption. Since decompression has to be performed at run time, it is done in hardware. We describe a tool called COMPASS which can evaluate a range of strategies for any given set of benchmarks and display compression ratios. Also, given an execution trace, it can compute the effect on bus toggles and cache misses for a range of compression strategies. The tool is interactive and allows the user to vary a set of parameters and observe their effect on performance. We describe an implementation of the tool and demonstrate its effectiveness. To the best of our knowledge this is the first tool proposed for such a purpose.
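The two trace-driven metrics mentioned are easy to state precisely. A minimal sketch (illustrative only; COMPASS's strategies and trace format are not specified here), with a stand-in compressor and bus toggles counted as the Hamming distance between successive bus words:

```python
import zlib  # stand-in compressor, not one of COMPASS's strategies

def compression_ratio(original: bytes, compressed: bytes) -> float:
    """Smaller is better: compressed size over original size."""
    return len(compressed) / len(original)

def bus_toggles(word_trace, width=32):
    """Count bit transitions between consecutive words on the memory
    bus; fewer toggles means less switching activity and power."""
    mask = (1 << width) - 1
    return sum(bin((a ^ b) & mask).count("1")
               for a, b in zip(word_trace, word_trace[1:]))

code = bytes(range(256)) * 64          # placeholder for an executable image
ratio = compression_ratio(code, zlib.compress(code))
toggles = bus_toggles([0x0, 0xFFFF, 0xFF00, 0xFF00])  # 16 + 8 + 0 = 24
```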
Abstract:
Reliable protection against a direct lightning hit is essential for satellite launch pads, and suitable protection systems are generally employed. Evaluating the efficacy of a lightning protection scheme requires, among other things, accurate knowledge of the consequential potential rise at the struck point and the current injected into the soil at the earth termination. The present work deduces these quantities for the lightning protection scheme of the Indian satellite launch pad-I. A reduced-scale model of the system with a frequency domain approach is employed for the experimental study. For further validation of the experimental approach, numerical simulations using the Numerical Electromagnetics Code (NEC-2) are also carried out on schemes involving a single tower. The results show that the present design is quite safe with regard to top potential rise. It is shown that by connecting ground wires to the tower, its base current and, hence, the soil potential rise can be reduced. An alternate design philosophy involving an insulated mast scheme is also evaluated. The potential rise in that design is quantified and the possibility of a flashover to the supporting tower is briefly examined. The supporting tower is shown to carry significant induced currents.
Abstract:
The motivation behind the fusion of Intrusion Detection Systems was the realization that, with increasing traffic and increasingly complex attacks, no present-day stand-alone Intrusion Detection System can meet the demand for a very high detection rate together with an extremely low false positive rate. Multi-sensor fusion can meet these requirements by refining the combined response of different Intrusion Detection Systems. In this paper, we show the design technique of sensor fusion to best utilize the useful response from multiple sensors by an appropriate adjustment of the fusion threshold. The threshold is generally chosen according to past experience or by an expert system. In this paper, we show that choosing the threshold bounds according to the Chebyshev inequality performs better. This approach also helps to solve the problem of scalability and has the advantage of fail-safe capability. This paper theoretically models the fusion of Intrusion Detection Systems for the purpose of proving the improvement in performance, supplemented with an empirical evaluation. The combination of complementary sensors is shown to detect more attacks than the individual components. Since the individual sensors chosen detect sufficiently different attacks, their results can be merged for improved performance. The combination is done in different ways: (i) taking all the alarms from each system and avoiding duplications, (ii) taking alarms from each system by fixing threshold bounds, and (iii) rule-based fusion with a priori knowledge of the individual sensor performance. A number of evaluation metrics are used, and the results indicate an overall enhancement in the performance of the combined detector using sensor fusion incorporating the threshold bounds, and significantly better performance using simple rule-based fusion.
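A sketch of the Chebyshev-based threshold choice (the general idea only, not the authors' exact formulation): for any alarm-score distribution with mean mu and standard deviation sigma, Chebyshev's inequality bounds the mass beyond mu + k*sigma by 1/k^2, so a desired false-positive ceiling alpha gives k = 1/sqrt(alpha) regardless of the distribution:

```python
import math
import statistics

def chebyshev_threshold(benign_scores, alpha=0.01):
    """Upper threshold bound from Chebyshev's inequality:
    P(X >= mu + k*sigma) <= 1/k**2 for any distribution, so
    k = 1/sqrt(alpha) caps the false-positive rate at alpha."""
    mu = statistics.mean(benign_scores)
    sigma = statistics.stdev(benign_scores)
    return mu + sigma / math.sqrt(alpha)

def fused_alarm(sensor_scores, thresholds):
    """One simple fusion rule: alarm if any sensor exceeds its bound
    (the paper compares several combination schemes)."""
    return any(s >= t for s, t in zip(sensor_scores, thresholds))

scores = [0.10, 0.12, 0.15, 0.20, 0.11]      # illustrative benign scores
t = chebyshev_threshold(scores, alpha=0.04)  # k = 5
```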
Abstract:
Methodologies are presented for minimization of risk in a river water quality management problem. A risk minimization model is developed to minimize the risk of low water quality along a river in the face of conflict among various stakeholders. The model consists of three parts: a water quality simulation model, a risk evaluation model with uncertainty analysis and an optimization model. Sensitivity analysis, First Order Reliability Analysis (FORA) and Monte-Carlo simulations are performed to evaluate the fuzzy risk of low water quality. Fuzzy multiobjective programming is used to formulate the multiobjective model. Probabilistic Global Search Lausanne (PGSL), a recently developed global search algorithm, is used for solving the resulting non-linear optimization problem. The algorithm is based on the assumption that better sets of points are more likely to be found in the neighborhood of good sets of points, and it therefore intensifies the search in regions that contain good solutions. Another model is developed for risk minimization, which deals only with the moments of the generated probability density functions of the water quality indicators. Suitable skewness values of the water quality indicators, which lead to low fuzzy risk, are identified. Results of the models are compared with the results of a deterministic fuzzy waste load allocation model (FWLAM) when the methodologies are applied to the case study of the Tunga-Bhadra river system in southern India, with a steady-state BOD-DO model. The fractional removal levels resulting from the risk minimization model are slightly higher, but result in a significant reduction in the risk of low water quality. (c) 2005 Elsevier Ltd. All rights reserved.
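The steady-state BOD-DO model used in the case study is conventionally of the Streeter-Phelps type; a minimal sketch of that model and of a Monte-Carlo estimate of the risk of low water quality (rate constants, loads and standards below are illustrative assumptions, not the Tunga-Bhadra calibration):

```python
import math
import random

def do_deficit(t_days, bod0, d0, kd=0.35, ka=0.70):
    """Streeter-Phelps dissolved-oxygen deficit downstream of a waste load:
    D(t) = kd*L0/(ka - kd) * (exp(-kd*t) - exp(-ka*t)) + D0*exp(-ka*t)."""
    return (kd * bod0 / (ka - kd)) * (math.exp(-kd * t_days)
            - math.exp(-ka * t_days)) + d0 * math.exp(-ka * t_days)

def risk_of_low_do(n=10_000, do_sat=8.5, do_std=4.0, seed=7):
    """Monte-Carlo estimate of P(DO < standard) under an assumed
    uncertainty in the initial BOD load (crisp, not fuzzy, risk)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        bod0 = rng.gauss(20.0, 4.0)   # mg/L, assumed distribution
        worst = max(do_deficit(t, bod0, d0=1.0) for t in range(1, 11))
        hits += (do_sat - worst) < do_std
    return hits / n
```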
Abstract:
Energy-based direct methods for transient stability analysis are potentially useful both as offline tools for planning and for online security assessment. In this paper, a novel structure-preserving energy function (SPEF) is developed using the philosophy of a structure-preserving model for the system and a detailed generator model including flux decay, transient saliency, automatic voltage regulator (AVR), exciter and damper winding. A simpler and yet general expression for the SPEF is also derived, which can simplify the computation of the energy function. The system equations and the energy function are derived using the centre-of-inertia (COI) formulation, and the system loads are modelled as arbitrary functions of the respective bus voltages. Application of the proposed SPEF to transient stability evaluation of power systems is illustrated with numerical examples.
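For reference, the centre-of-inertia (COI) formulation mentioned above refers every rotor angle to the inertia-weighted system average; a standard sketch (the notation only, not the paper's full SPEF):

\[ \delta_{COI} = \frac{1}{M_T}\sum_{i=1}^{n} M_i\,\delta_i, \qquad M_T = \sum_{i=1}^{n} M_i, \qquad \theta_i = \delta_i - \delta_{COI}, \]

with the energy function then expressed in terms of the COI-referred angles \(\theta_i\) and the corresponding speeds.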
Abstract:
The insulin-like growth factors (IGFs; IGF-1 and IGF-2) play central roles in cell growth, differentiation, survival, transformation and metastasis. The biologic effects of the IGFs are mediated by the IGF-1 receptor (IGF-1R), a receptor tyrosine kinase with homology to the insulin receptor (IR). Dysregulation of the IGF system is well recognized as a key contributor to the progression of multiple cancers, with IGF-1R activation increasing the tumorigenic potential of breast, prostate, lung, colon and head and neck squamous cell carcinoma (HNSCC). Despite this relationship, targeting the IGF-1R has only recently undergone development as a molecular cancer therapeutic. As it has taken hold, we are witnessing a robust increase in interest in targeting the inhibition of IGF-1R signaling. This is accentuated by the list of over 30 drugs, including monoclonal antibodies (mAbs) and tyrosine kinase inhibitors (TKIs), that are under evaluation as single agents or in combination therapies [1]. The IGF-binding proteins (IGFBPs) represent the third component of the IGF system, consisting of a class of six soluble secretory proteins. They represent a unique class of naturally occurring IGF antagonists that bind to and sequester IGF-1 and IGF-2, inhibiting their access to the IGF-1R. Due to their dual targeting of the IGFs without affecting insulin action, the IGFBPs are an untapped "third" class of IGF-1R inhibitors. In this commentary, we highlight some of the significant aspects of and prospects for targeting the IGF-1R and describe what the future may hold. (C) 2010 Elsevier Inc. All rights reserved.
Abstract:
Erosion characteristics of high chromium (Cr, 16-19%) alloy cast iron with 5% and 10% manganese (Mn), prepared in metal and sand moulds through induction melting, are investigated using a jet erosion test setup in both as-cast and heat-treated conditions. The samples were characterised for hardness and microstructural properties. A novel non-destructive evaluation technique, positron lifetime spectroscopy, has also been used for the first time to characterise the microstructure of the material in terms of defects and their concentration. We found that the hardness decreases, irrespective of the sample condition, when the mould type is changed from metal to sand; the erosion volume loss, on the other hand, shows an increasing trend. Since the macroscopic properties have a bearing on the microstructure, good support for this is obtained from the microstructural features seen in light and scanning electron micrographs. Faster cooling in the metal mould yielded fine carbide precipitation on the surface. The defect size and concentration derived from the positron method are higher for the sand mould than for the metal mould. The lower erosion loss corresponds to the smaller defects in the metal mould, a result of quicker heat transfer in the metal mould compared to the sand mould. Heat-treatment effects are clearly seen in the reduced concentration of defects and the spheroidisation of carbides. The erosion loss correlates very well with defect size and concentration.
Abstract:
We propose a novel formulation of the points-to analysis as a system of linear equations. With this, the efficiency of the points-to analysis can be significantly improved by leveraging advances in solution procedures for systems of linear equations. However, such a formulation is non-trivial and becomes challenging due to several features of pointer languages, namely multiple pointer indirections, address-of operators and multiple assignments to the same variable. Further, the problem is exacerbated by the need to keep the transformed equations linear. Despite this, we successfully model all the pointer operations, and propose a novel inclusion-based context-sensitive points-to analysis algorithm based on prime factorization. Experimental evaluation on SPEC 2000 benchmarks and two large open source programs reveals that our approach is competitive with state-of-the-art algorithms. With an average memory requirement of a mere 21 MB, our context-sensitive points-to analysis algorithm analyzes each benchmark in 55 seconds on average.
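For orientation, the baseline such formulations start from is inclusion-based (Andersen-style) propagation over subset constraints; the sketch below shows only that baseline, not the authors' linear-equation or prime-factorization encoding:

```python
def andersen(addr_of, copies):
    """Minimal inclusion-based points-to analysis.

    addr_of: (p, x) pairs for statements p = &x
    copies:  (p, q) pairs for statements p = q, i.e. pts(q) <= pts(p).
    Iterates to a fixed point; a full analysis also handles *p = q and
    p = *q by adding copy edges as points-to sets grow."""
    pts = {}
    for p, x in addr_of:
        pts.setdefault(p, set()).add(x)
    changed = True
    while changed:
        changed = False
        for p, q in copies:
            new = pts.get(q, set()) - pts.get(p, set())
            if new:
                pts.setdefault(p, set()).update(new)
                changed = True
    return pts

# p = &a; q = p; r = q  ->  p, q and r all point to {a}
print(andersen([("p", "a")], [("q", "p"), ("r", "q")]))
```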
Abstract:
The amount of reactive power margin available in a system determines its proximity to voltage instability under normal and emergency conditions: the greater the reactive power margin, the better the system's security, and vice versa. One way of improving the reactive margin of a synchronous generator is to reduce its real power generation within its mega volt-ampere (MVA) rating. This reduction in real power generation will affect the power contract agreements the generator has entered into in the electricity market; the benefit the generator forgoes must therefore be compensated by paying a lost opportunity cost. The objective of this study is threefold. First, the reactive power margins of the generators are evaluated. Second, they are improved using a reactive power optimization technique and optimally placed unified power flow controllers. Third, the reactive power capacity exchanges along the tie-lines are evaluated under base case and improved conditions. A detailed analysis of all the reactive power sources and sinks scattered throughout the network is carried out. Studies are performed on a real-life, three-zone, 72-bus equivalent of the Indian southern grid, considering normal and contingency conditions, with the base case operating point and optimised results presented.
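The capability-curve arithmetic behind that trade-off is simple: within the machine's MVA rating, backing off real power frees reactive capability. A minimal sketch considering the armature (MVA) limit only; field-winding and end-region limits of a real capability curve are ignored:

```python
import math

def reactive_margin(s_mva, p_mw, q_mvar):
    """Reactive headroom under the armature limit:
    Q_max = sqrt(S^2 - P^2), margin = Q_max - Q_operating."""
    return math.sqrt(s_mva**2 - p_mw**2) - q_mvar

# Backing P off from 95 MW to 80 MW on a 100 MVA machine raises
# Q_max from ~31.2 to 60.0 MVAr -- the capability freed is what the
# lost opportunity cost discussed above pays for.
margin_before = reactive_margin(100.0, 95.0, 20.0)   # ~11.2 MVAr
margin_after = reactive_margin(100.0, 80.0, 20.0)    # 40.0 MVAr
```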
Abstract:
A new class of nets, called S-nets, is introduced for the performance analysis of scheduling algorithms used in real-time systems. Deterministic timed Petri nets do not adequately model the scheduling of resources encountered in real-time systems, and need to be augmented with resource places, signal places and a scheduler block to facilitate the modeling of scheduling algorithms. The tokens are colored, and the transition firing rules are suitably modified. Further, the concept of transition folding is used to obtain intuitively simple models of multiframe real-time systems. Two generic performance measures, called "load index" and "balance index," which characterize the resource utilization and the uniformity of workload distribution, respectively, are defined. The utility of S-nets for evaluating heuristic-based scheduling schemes is illustrated by considering three heuristics for real-time scheduling. S-nets are useful in tuning the hardware configuration and the underlying scheduling policy, so that the system utilization is maximized and the workload distribution among the computing resources is balanced.
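The abstract characterizes the two measures without defining them; reading load index as mean resource utilization and balance index as uniformity of the per-resource load (both readings are assumptions for illustration, not the paper's definitions) gives a sketch like:

```python
import statistics

def load_index(busy_times, horizon):
    """Assumed definition: mean fraction of the horizon each
    computing resource spends busy (overall utilization)."""
    return statistics.mean(b / horizon for b in busy_times)

def balance_index(busy_times):
    """Assumed definition: one minus the coefficient of variation of
    per-resource load; 1.0 means a perfectly uniform distribution."""
    mu = statistics.mean(busy_times)
    return 1.0 - statistics.pstdev(busy_times) / mu

busy = [80.0, 75.0, 85.0]   # busy time of each processor over the horizon
print(load_index(busy, horizon=100.0), balance_index(busy))
```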
Abstract:
A structured systems methodology was developed to analyse the problems of production interruptions occurring at random intervals in continuous-process manufacturing systems. At a macro level, the methodology focuses on identifying suitable investment policies to reduce interruptions of a total manufacturing system that is a combination of several process plants; an interruption-tree-based simulation model was developed for this macro-level analysis. At a micro level, the methodology focuses on finding the effects of alternative configurations of individual process plants on the overall system performance; a Markov simulation model was developed for this micro-level analysis. The methodology was tested with an industry-specific application.
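As a sketch of the micro-level idea: a single process plant can be modelled as a two-state (up/down) Markov chain whose failure and repair rates depend on the plant configuration; availability then follows from simulation, or in closed form as mu / (lambda + mu). The rates below are illustrative assumptions:

```python
import random

def simulated_availability(lam=0.02, mu=0.25, horizon_h=200_000, seed=1):
    """Two-state Markov (up/down) model of one process plant: failures
    occur at rate lam per hour, repairs complete at rate mu per hour.
    Returns the simulated fraction of time up; compare with the
    analytic availability mu / (lam + mu)."""
    rng = random.Random(seed)
    up, up_time, t = True, 0.0, 0.0
    while t < horizon_h:
        dwell = rng.expovariate(lam if up else mu)
        if up:
            up_time += dwell
        t += dwell
        up = not up
    return up_time / t

print(simulated_availability(), 0.25 / (0.02 + 0.25))  # both ~0.926
```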
Abstract:
The ability of Static Var Compensators (SVCs) to rapidly and continuously control reactive power in response to changing system conditions can improve system stability and also increase the power transfer capability of the transmission system. This paper concerns the application of strategically located SVCs to enhance transient stability limits, and the direct evaluation of the effect of these SVCs on transient stability using a Structure Preserving Energy Function (SPEF). The SVC control system can be modelled from the steady-state control characteristic to accurately simulate its effect on transient stability. Treating the SVC as a voltage-dependent reactive power load leads to the derivation of a path-independent SPEF for the SVC. Case studies on a 10-machine test system using multiple SVCs illustrate the effects of SVCs on transient stability and the accuracy of its prediction.
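For reference, the steady-state control characteristic mentioned above is conventionally a voltage-current line with a small slope (droop) in the controlled range; a standard sketch, not the paper's exact model (sign conventions vary):

\[ V = V_{ref} + X_s\, I_{SVC}, \qquad B_{min} \le B_{SVC} \le B_{max}, \]

where \(X_s\) is the slope reactance and \(B_{SVC}\) the net SVC susceptance. Outside the controlled range the SVC behaves as a fixed susceptance, so its reactive power varies with the square of the bus voltage, which is what justifies the voltage-dependent reactive-load treatment.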