12 results for Assurance
at Indian Institute of Science - Bangalore - India
Abstract:
The current philosophy and trends in quality assurance through nondestructive testing are discussed in brief. The techniques currently in use and those being developed for newer and advanced materials such as composites are reviewed.
Abstract:
Engineering education quality embraces the activities through which a technical institution satisfies itself that the quality of education it provides and the standards it has set are appropriate and are being maintained. There is a need to develop a standardised approach to most aspects of quality assurance for engineering programmes which is sufficiently well defined to be accepted for all assessments. We have designed a Technical Educational Quality Assurance and Assessment (TEQ-AA) System, which makes use of information on the web and analyzes the standards of the institution. With the standards as anchors for definition, the institution is clearer about its present state and can plan better for its future while enhancing the level of educational quality. The system has been tested and implemented on technical educational institutions in Karnataka State, which usually host web pages advertising their technical education programs, institutional objectives, policies, etc., for commercialization and for better outreach to students and faculty. This assists students in selecting an institution for study and in seeking employment.
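The abstract does not spell out how the web information is analyzed; purely as an illustrative sketch (the criteria, keywords, and URL below are hypothetical assumptions, not part of the published TEQ-AA system), one simple keyword-anchored scoring of an institution's web page could look like this:

    # Hypothetical sketch only: score an institution's web page against assumed
    # standard criteria by keyword matching. Criteria, keywords and the URL are
    # illustrative placeholders, not the TEQ-AA system's actual method.
    import re
    import urllib.request

    CRITERIA = {
        "curriculum": ["syllabus", "curriculum", "credits"],
        "faculty": ["faculty", "professor", "research"],
        "infrastructure": ["laboratory", "library", "hostel"],
    }

    def fetch_text(url):
        # Download the page and crudely strip HTML tags.
        html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "ignore")
        return re.sub(r"<[^>]+>", " ", html).lower()

    def score(text):
        # Fraction of anchor keywords found, per criterion.
        return {name: sum(kw in text for kw in kws) / len(kws)
                for name, kws in CRITERIA.items()}

    if __name__ == "__main__":
        page = fetch_text("http://www.example-institution.ac.in")  # placeholder URL
        print(score(page))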
Abstract:
This paper presents an overview of the issues in precisely defining, specifying and evaluating the dependability of software, particularly in the context of computer-controlled process systems. Dependability is intended to be a generic term embodying various quality factors and is useful for both software and hardware. While developments in quality assurance and reliability theories have proceeded mostly in independent directions for hardware and software systems, we present here the case for developing a unified framework of dependability, a facet of the operational effectiveness of modern technological systems, and develop a hierarchical systems model helpful in clarifying this view. In the second half of the paper, we survey the models and methods available for measuring and improving software reliability. The nature of software “bugs”, the failure history of the software system in the various phases of its lifecycle, the reliability growth in the development phase, the estimation of the number of errors remaining in the operational phase, and the complexity of the debugging process have all been considered to varying degrees of detail. We also discuss the notion of software fault tolerance, methods of achieving it, and the status of other measures of software dependability such as maintainability, availability and safety.
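The survey covers several software reliability models; as one hedged illustration (a commonly surveyed growth model, not one singled out by this paper), the Goel-Okumoto model takes the expected cumulative number of failures by test time t to be m(t) = a(1 - e^(-bt)), so roughly a*e^(-bt) errors are expected to remain. A minimal fitting sketch with invented failure data:

    # Minimal sketch: fit the Goel-Okumoto growth model m(t) = a*(1 - exp(-b*t))
    # to cumulative failure counts and estimate the errors still remaining.
    # The failure data below are invented for illustration only.
    import numpy as np
    from scipy.optimize import curve_fit

    def m(t, a, b):
        return a * (1.0 - np.exp(-b * t))

    t = np.array([1, 2, 4, 8, 16, 32], dtype=float)            # test weeks (assumed)
    failures = np.array([5, 9, 15, 22, 27, 30], dtype=float)   # cumulative failures (assumed)

    (a_hat, b_hat), _ = curve_fit(m, t, failures, p0=(40.0, 0.1))
    remaining = a_hat * np.exp(-b_hat * t[-1])                 # expected latent errors
    print(f"a = {a_hat:.1f}, b = {b_hat:.3f}, expected remaining errors ~ {remaining:.1f}")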
Abstract:
Urban sprawl is the outgrowth along the periphery of cities and along highways. Although an exact definition of urban sprawl may be debated, there is a consensus that it is characterized by an unplanned and uneven pattern of growth, driven by a multitude of processes and leading to inefficient resource utilization. Urbanization in India has never been as rapid as it is in recent times. As one of the fastest growing economies in the world, India faces stiff challenges in managing urban sprawl while ensuring effective delivery of basic services in urban areas. Urban areas contribute significantly to the national economy (more than 50% of GDP) while facing critical challenges in accessing basic services and necessary infrastructure, both social and economic. The overall rise in the population of the urban poor and the increase in travel times due to congestion along road networks are indicators of how effectively planning and governance are assessing and catering for this demand. Agencies of governance at all levels (local bodies, state government and federal government) are facing the brunt of this rapid urban growth. It is imperative for planning and governance to facilitate, augment and service the requisite infrastructure systematically over time. Provision of infrastructure and assurance of the delivery of basic services cannot happen overnight, and hence planning has to facilitate forecasting and service provision with appropriate financial mechanisms.
Abstract:
In order to protect critical electronic equipment/systems against damped sine transient currents induced into their cables by transient electromagnetic fields, switching phenomena, platform resonances, etc., it is necessary to provide proper hardening. The hardness assurance provided can be evaluated as per test CS116 of MIL-STD-461E/F in the laboratory by generating and inducing the necessary damped sine currents into the cables of the Equipment Under Test (EUT). The need for, and the stringent requirements of, building a damped sine wave current generator for generating damped sine current transients of very high frequencies (30 MHz and 100 MHz) are presented. A method using LC discharge has been considered in the development. This involves building extremely small, nearly lossless inductors (about 5 nH and 14 nH) as well as a capacitor and a switch with much lower inductances. A technique for achieving this is described. Two units (one each for 30 MHz and 100 MHz) have been built. Experiments to verify the output are being conducted.
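The abstract quotes the inductances but not the capacitances; as a rough consistency check only (not design data from the paper), the ring frequency of a lightly damped LC discharge is approximately f = 1/(2*pi*sqrt(L*C)), which implies capacitor values of the order computed below:

    # Illustrative arithmetic only: the capacitance implied by f = 1/(2*pi*sqrt(L*C))
    # for the inductances quoted in the abstract, assuming a lightly damped LC ring.
    import math

    def required_capacitance(f_hz, l_henry):
        # Capacitance that resonates with L at frequency f (lossless approximation).
        return 1.0 / ((2.0 * math.pi * f_hz) ** 2 * l_henry)

    for f, l in [(30e6, 14e-9), (100e6, 5e-9)]:
        c = required_capacitance(f, l)
        print(f"f = {f/1e6:.0f} MHz, L = {l*1e9:.0f} nH -> C ~ {c*1e12:.0f} pF")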
Abstract:
The MIT Lincoln Laboratory IDS evaluation methodology is a practical solution for evaluating the performance of Intrusion Detection Systems, and it has contributed tremendously to research progress in that field. The DARPA IDS evaluation dataset has been criticized and considered by many to be a very outdated dataset, unable to accommodate the latest trends in attacks. The question then naturally arises as to whether detection systems have improved beyond detecting these older attacks. If not, is it worth regarding this dataset as obsolete? The paper presented here tries to provide supporting facts for the use of the DARPA IDS evaluation dataset. Two commonly used signature-based IDSs, Snort and Cisco IDS, and two anomaly detectors, PHAD and ALAD, are used for this evaluation, and the results support the usefulness of the DARPA dataset for IDS evaluation.
Abstract:
The motivation behind the fusion of Intrusion Detection Systems was the realization that, with increasing traffic and increasingly complex attacks, none of the present-day stand-alone Intrusion Detection Systems can meet the demand for a very high detection rate together with an extremely low false positive rate. Multi-sensor fusion can meet these requirements by refining the combined response of different Intrusion Detection Systems. In this paper, we show the design technique of sensor fusion to best utilize the useful response from multiple sensors by an appropriate adjustment of the fusion threshold. The threshold is generally chosen according to past experience or by an expert system. In this paper, we show that choosing the threshold bounds according to the Chebyshev inequality performs better. This approach also helps to solve the problem of scalability and has the advantage of fail-safe capability. The paper theoretically models the fusion of Intrusion Detection Systems in order to prove the improvement in performance, supplemented with empirical evaluation. The combination of complementary sensors is shown to detect more attacks than the individual components. Since the individual sensors chosen detect sufficiently different attacks, their results can be merged for improved performance. The combination is done in different ways: (i) taking all the alarms from each system and avoiding duplications, (ii) taking alarms from each system by fixing threshold bounds, and (iii) rule-based fusion with a priori knowledge of the individual sensor performance. A number of evaluation metrics are used, and the results indicate an overall enhancement in the performance of the combined detector using sensor fusion incorporating the threshold bounds, and significantly better performance using simple rule-based fusion.
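The abstract invokes the Chebyshev inequality for setting the fusion threshold; as a hedged sketch of that idea (the paper's exact fused score and rule are not reproduced here), Chebyshev's bound P(|X - mu| >= k*sigma) <= 1/k^2 lets one place the threshold at mu + k*sigma so that the benign-traffic exceedance rate is bounded by 1/k^2 regardless of the score distribution:

    # Hedged sketch: choose a fusion threshold from the Chebyshev inequality,
    # P(|X - mu| >= k*sigma) <= 1/k**2, using the mean and standard deviation of
    # the fused alarm score under normal traffic. The scores here are simulated.
    import numpy as np

    rng = np.random.default_rng(0)
    benign_scores = rng.gamma(shape=2.0, scale=1.0, size=10_000)  # assumed benign-traffic scores

    mu, sigma = benign_scores.mean(), benign_scores.std()
    target_fp_rate = 0.01                      # desired bound on false positives
    k = 1.0 / np.sqrt(target_fp_rate)          # 1/k**2 <= target  ->  k = 10
    threshold = mu + k * sigma                 # distribution-free threshold

    observed = (benign_scores > threshold).mean()
    print(f"threshold = {threshold:.2f}, observed benign exceedance = {observed:.4f}")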
Abstract:
Suspensions of testicular germ cells from six species of mammals were prepared, stained for DNA content with a fluorochrome (ethidium bromide) using a common technique, and subjected to DNA flow cytometry. While uniform staining of the germ cells of the mouse, hamster, rat and monkey could be obtained by treating with 0.5% pepsin for 60 min followed by staining with ethidium bromide for 30 min, the germ cells of the guinea pig and rabbit required pepsinization for 90 min and treatment with ethidium bromide for 60 min for optimal staining. The procedure adopted here provided a uniform recovery of over 80% of germ cells with each of the species tested, and the cell population distributed itself according to DNA content (expressed as C values) into 5 major classes: spermatogonia (2C), cells in S-phase, primary spermatocytes (4C), round spermatids (1C), and elongating/elongated spermatids (HC). Comparison of the DNA distribution pattern of the germ cell populations between species revealed little variation in the relative quantities of cells with 2C (8-11%), S-phase (6-9%), and 4C (6-9%) amounts of DNA. Though the spermatid cell populations exhibited variations (1C: 31-46%, HC1: 7-20% and HC2: 11-25%), they represented the bulk of the germ cells (70-80%). The kinetics of the overall conversion of 2C to 1C (1C:2C ratio) and of the meiotic transformation of 4C cells to 1C (1C:4C ratio) were relatively constant between the species studied. The present study clearly demonstrates that DNA flow cytometry can be adopted with ease and assurance to quantify germ cell transformation, and as such spermatogenesis, by analysing a large number of samples with consistency both within and across the species barrier. Any variation from the norms in germ cell proportions observed following treatment, e.g. hormonal stimulation or deprivation, can then be ascribed to a specific effect of the hormone/drug on single or multiple steps in germ cell transformation.
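Purely as illustrative arithmetic (the class percentages below are placeholders within the ranges quoted above, not the paper's measurements), the conversion indices described in the abstract reduce to simple ratios of class frequencies:

    # Illustrative arithmetic only: transformation ratios from DNA-content class
    # frequencies. The percentages are placeholders, not measured values.
    classes = {"2C": 10.0, "S": 8.0, "4C": 8.0, "1C": 40.0, "HC": 30.0}  # % of cells

    ratio_1c_2c = classes["1C"] / classes["2C"]   # overall 2C -> 1C conversion
    ratio_1c_4c = classes["1C"] / classes["4C"]   # meiotic 4C -> 1C transformation
    print(f"1C:2C = {ratio_1c_2c:.1f}, 1C:4C = {ratio_1c_4c:.1f}")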
Abstract:
Current standard security practices do not provide substantial assurance about information flow security: the end-to-end behavior of a computing system. Noninterference is the basic semantic condition used to account for information flow security. In the literature, there are many definitions of noninterference: Non-inference, Separability and so on. Mantel presented a framework of Basic Security Predicates (BSPs) for characterizing the definitions of noninterference in the literature. Model-checking these BSPs for finite-state systems was shown to be decidable in [8]. In this paper, we show that verifying these BSPs for the more expressive system model of pushdown systems is undecidable. We also give an example of a simple security property which is undecidable even for finite-state systems: the property is a weak form of non-inference called WNI, which is not expressible in Mantel’s BSP framework.
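For concreteness, the standard trace-based statement of non-inference from the literature (the notation here is an assumption of this note, not reproduced from the paper) requires that deleting all confidential events from any possible trace again yields a possible trace:

    % Standard trace-based form of non-inference (notation assumed here):
    % Tr is the set of traces of the system, and tau|_L deletes every high
    % (confidential) event from tau, keeping only low (public) events.
    \[
      \mathrm{NonInference}(\mathit{Tr}) \iff
      \forall \tau \in \mathit{Tr}.\;\; \tau|_{L} \in \mathit{Tr}
    \]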
Abstract:
Moore's Law has driven the semiconductor revolution, enabling over four decades of scaling in frequency, size, complexity, and power. However, the limits of physics are preventing further scaling of speed, forcing a paradigm shift towards multicore computing and parallelization. In effect, the system is taking over the role that the single CPU used to play: high-speed signals running through chips, but also packages and boards, connect ever more complex systems. High-speed signals making their way through the entire system cause new challenges in the design of computing hardware. Inductance, phase shifts and velocity-of-light effects, material resonances, and wave behavior not only become prevalent but also need to be calculated accurately and rapidly to enable short design cycle times. In essence, continuing to scale with Moore's Law requires the incorporation of Maxwell's equations in the design process. Incorporating Maxwell's equations into the design flow is only possible through the combined power of new algorithms, parallelization and high-speed computing. At the same time, incorporation of Maxwell-based models into circuit- and system-level simulation presents a massive accuracy, passivity, and scalability challenge. In this tutorial, we navigate through the often confusing terminology and concepts behind field solvers, show how advances in field solvers enable integration into EDA flows, present novel methods for model generation and passivity assurance in large systems, and demonstrate the power of cloud computing in enabling the next generation of scalable Maxwell solvers and the next generation of Moore's Law scaling of systems. We intend to show the truly symbiotic growing relationship between Maxwell and Moore!
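Among the topics listed, passivity assurance admits a compact check; as a hedged sketch (the frequency sweep and S-parameters below are synthetic placeholders, and the tutorial's own methods are not reproduced here), a standard necessary condition is that no singular value of the scattering matrix exceeds one at any frequency:

    # Hedged sketch: flag passivity violations of an S-parameter model by checking
    # that the largest singular value of S(f) stays at or below 1 at every
    # sampled frequency. The S-parameter data here are synthetic placeholders.
    import numpy as np

    rng = np.random.default_rng(1)
    freqs = np.linspace(1e9, 10e9, 10)                       # Hz, assumed sweep
    S = 0.4 * (rng.standard_normal((10, 2, 2)) + 1j * rng.standard_normal((10, 2, 2)))

    max_sv = np.linalg.svd(S, compute_uv=False).max(axis=1)  # largest singular value per frequency
    violations = freqs[max_sv > 1.0]
    print("passive at all sampled frequencies" if violations.size == 0
          else f"passivity violated near {violations / 1e9} GHz")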
Abstract:
Increasingly, scientific collaborations and contracts cross country borders. The need for assurance that the quality of animal welfare and the caliber of animal research conducted are equivalent among research partners around the globe is of concern to the scientific and laboratory animal medicine communities, the general public, and other key stakeholders. Therefore, global harmonization of animal care and use standards and practices, with the welfare of the animals as a cornerstone, is essential. In the evolving global landscape of enhanced attention to animal welfare, a widely accepted path to achieving this goal is the successful integration of the 3Rs into animal care and use programs. Currently, awareness of the 3Rs, their implementation, and the resulting animal care and use standards and practices vary across countries. This variability has direct effects on the animals used in research and potentially on the data generated, and may also have secondary effects on a country's ability to be viewed as a global research partner. Here we review the status of implementation of the 3Rs worldwide and focus on three countries (Brazil, China and India) with increasing economic influence and an increasing footprint in the biomedical research enterprise.