19 results for Risk Assessment Methods
at Indian Institute of Science - Bangalore - India
Abstract:
Importance of the field: Antibiotic resistance in bacterial pathogens has increased worldwide, leading to treatment failures. Concerns have been raised about the use of biocides as a contributing factor to the risk of antimicrobial resistance (AMR) development. In vitro studies demonstrating increases in resistance have often been cited as evidence for increased risks. It is therefore important to understand the mechanisms of resistance employed by bacteria toward biocides used in consumer products and their potential to impart cross-resistance to therapeutic antibiotics. Areas covered: In this review, the mechanisms of resistance and cross-resistance reported in the literature toward biocides commonly used in consumer products are summarized. The physiological and molecular techniques used in describing and examining these mechanisms are reviewed, and the application of these techniques to the systematic assessment of biocides for their potential to induce resistance and/or cross-resistance is discussed. Expert opinion: Guidelines on the use of biocides for household or industrial purposes should be monitored and enforced to avoid the emergence of multidrug-resistant (MDR) strains. Genetic and molecular methods to monitor the development of resistance to biocides should be developed and included in preclinical and clinical studies.
Abstract:
This paper presents an experimental study conducted to compare the results obtained from using different design methods (brainstorming (BR), functional analysis (FA), and SCAMPER) in design processes. The objectives of this work are twofold. The first was to determine whether there are differences in the length of time devoted to the different types of activities carried out in the design process depending on the method employed; in other words, whether the design method used makes a difference to the profile of time spent across design activities. The second was to analyze whether there is any relationship between the time spent on design process activities and the degree of creativity in the solutions obtained. Creativity was evaluated through the degree of novelty and the level of resolution of the designed solutions, using the creative product semantic scale (CPSS) questionnaire. The results show that there are significant differences between the amount of time devoted to activities related to understanding the problem and the typology of the design method, intuitive or logical, that is used. While the amount of time spent analyzing the problem is very small with intuitive methods such as brainstorming and SCAMPER (around 8-9% of the time), with logical methods like functional analysis practically half the time is devoted to analyzing the problem. It was also found that the amount of time spent in each design phase influences the results in terms of creativity, but the results are not strong enough to determine the extent of this effect. This paper offers new data and results on the distinct benefits to be obtained from applying design methods. [DOI: 10.1115/1.4007362]
Abstract:
Objective identification and description of mimicked calls is a primary component of any study on avian vocal mimicry, but few studies have adopted a quantitative approach. We used spectral feature representations commonly used in human speech analysis, in combination with various distance metrics, to distinguish between mimicked and non-mimicked calls of the greater racket-tailed drongo, Dicrurus paradiseus, and cross-validated the results with human assessment of spectral similarity. We found that the automated method and human subjects performed similarly in terms of the overall number of correct matches of mimicked calls to putative model calls. However, the two methods also misclassified different subsets of calls, and we achieved a maximum accuracy of ninety-five percent only when we combined the results of both methods. This study is the first to use Mel-frequency Cepstral Coefficients and Relative Spectral Amplitude-filtered Linear Predictive Coding coefficients to quantify vocal mimicry. Our findings also suggest that in spite of several advances in automated methods of song analysis, corresponding cross-validation by humans remains essential.
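As a rough illustration of the feature-plus-distance pipeline described in this abstract, the sketch below computes MFCCs for two recordings and scores their spectral similarity with a Euclidean distance between time-averaged coefficient vectors. It uses the librosa library; the file names and parameter values are placeholders, not those of the study, and only the MFCC branch (not RASTA-LPC) is shown.

```python
# Hedged sketch of MFCC-based call comparison (librosa assumed installed).
# File paths and parameter values are illustrative placeholders.
import numpy as np
import librosa

def mfcc_signature(path, sr=22050, n_mfcc=13):
    """Load a call and return its time-averaged MFCC vector."""
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # (n_mfcc, frames)
    return mfcc.mean(axis=1)                                # collapse time axis

def spectral_distance(path_a, path_b):
    """Euclidean distance between MFCC signatures; smaller = more similar."""
    return float(np.linalg.norm(mfcc_signature(path_a) - mfcc_signature(path_b)))

# Compare a putative mimicked call against a model-species call.
d = spectral_distance("drongo_mimic.wav", "model_species.wav")
print(f"MFCC distance: {d:.2f}")
```

In a study setting, each mimicked call would be matched to the putative model call with the smallest such distance, and the matches compared against human judgments.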
Abstract:
Crystals growing from solution, from the vapour phase and from supercooled melt exhibit, as a rule, planar faces. The geometry and distribution of dislocations present within the crystals thus grown are strongly related to the growth on planar faces and to the different growth sectors, rather than to the physical properties of the crystals or the growth methods employed. As a result, many features of the generation and geometrical arrangement of defects are common to extremely different crystal species. In this paper these common aspects of dislocation generation and configuration, which permit one to predict their nature and distribution, are discussed. For imaging the defects, a very versatile and widely applicable technique, viz. X-ray diffraction topography, is used. Growth dislocations in solution-grown crystals follow straight paths with strongly defined directions. These preferred directions, which in most cases lie within an angle of ±15° to the growth normal, depend on the growth direction and on the Burgers vector involved. The likely configuration of dislocations in the growing crystal can be evaluated using the theory developed by Klapper, which is based on linear anisotropic elasticity theory. The preferred line direction of a particular dislocation is that for which the dislocation energy per unit growth length is a minimum. Line-direction analysis based on this theory enables one to characterise dislocations propagating in a growing crystal. A combined theoretical analysis and experimental investigation based on the above theory is presented.
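Klapper's minimum-energy criterion invoked above can be stated compactly as follows (notation chosen here for illustration; E(l) is the elastic energy per unit length of the dislocation, computed from linear anisotropic elasticity for the given Burgers vector):

```latex
% Preferred dislocation line direction l* minimizes the elastic energy
% per unit growth length; alpha is the angle between the line direction l
% and the growth direction n (typically within +/-15 deg of the normal).
\mathbf{l}^{*} \;=\; \arg\min_{\mathbf{l}} \frac{E(\mathbf{l})}{\cos\alpha},
\qquad \alpha = \angle\!\left(\mathbf{l}, \mathbf{n}\right)
```

The division by cos α penalizes directions inclined away from the growth normal, which is why the observed preferred directions cluster around it.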
Abstract:
The paper presents a method for evaluating the external stability of reinforced soil walls subjected to earthquakes in the framework of the pseudo-dynamic method. The seismic reliability of the wall is evaluated by considering the different possible failure modes: sliding along the base, overturning about the toe of the wall, bearing capacity, and the eccentricity of the resultant force. The analysis treats the properties of the reinforced backfill, the foundation soil below the base of the wall, the length of the geosynthetic reinforcement, and characteristics of the earthquake ground motion, such as shear wave and primary wave velocity, as random variables. The optimum length of reinforcement needed to maintain stability against the four modes of failure, targeting various component reliability indices, is obtained. Differences between pseudo-static and pseudo-dynamic methods are clearly highlighted in the paper. A complete comparison of the pseudo-static and pseudo-dynamic methodologies shows that the pseudo-dynamic method yields realistic design values for the length of geosynthetic reinforcement under earthquake conditions.
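To make the component-reliability idea concrete, here is a minimal Monte Carlo sketch for one failure mode (base sliding): the friction angle and seismic coefficient are treated as random variables, a failure probability is estimated, and it is converted to a reliability index. All distributions, parameter values, and the schematic limit-state form are invented for illustration; the paper itself uses the pseudo-dynamic formulation.

```python
# Minimal Monte Carlo sketch of a component reliability index for base
# sliding of a reinforced soil wall. Distributions and parameters are
# illustrative assumptions, not the paper's pseudo-dynamic model.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
n = 200_000

phi = np.radians(rng.normal(32.0, 2.0, n))   # backfill friction angle (deg)
kh = rng.lognormal(np.log(0.10), 0.3, n)     # horizontal seismic coefficient

# Toy limit state: factor of safety against sliding (schematic form only).
fs = np.tan(phi) / (kh + 0.45)

pf = np.mean(fs < 1.0)                       # probability of failure
beta = -norm.ppf(pf)                         # component reliability index
print(f"P(failure) = {pf:.4f}, beta = {beta:.2f}")
```

In the paper's setting, a target beta for each of the four failure modes drives the required reinforcement length; here the index is simply back-calculated from the sampled failure fraction.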
Abstract:
Hydrologic impacts of climate change are usually assessed by downscaling General Circulation Model (GCM) output of large-scale climate variables to local-scale hydrologic variables. Such an assessment is characterized by uncertainty resulting from the ensembles of projections generated with multiple GCMs, known as intermodel or GCM uncertainty. Ensemble averaging, with weights assigned to GCMs based on model evaluation, is one method to address such uncertainty and is used in the present study for regional-scale impact assessment. GCM outputs of large-scale climate variables are downscaled to subdivisional-scale monsoon rainfall. Weights are assigned to the GCMs on the basis of model performance and model convergence, which are evaluated with the Cumulative Distribution Functions (CDFs) generated from the downscaled GCM output (for both 20th Century [20C3M] and future scenarios) and observed data. The ensemble averaging approach is itself characterized by uncertainty caused by partial ignorance, which stems from the non-availability of the outputs of some GCMs for a few scenarios (in the Intergovernmental Panel on Climate Change [IPCC] data distribution center for Assessment Report 4 [AR4]). This uncertainty is modeled with imprecise probability, i.e., probability represented as an interval gray number. Furthermore, the CDF generated with one GCM is entirely different from that generated with another, so the use of multiple GCMs results in a band of CDFs; representing this band with a single-valued weighted mean CDF may be misleading. Such a band of CDFs can only be represented by an envelope that contains all the CDFs generated with the available GCMs. An imprecise CDF represents such an envelope, which not only contains the CDFs generated with all the available GCMs but also, to an extent, accounts for the uncertainty resulting from the missing GCM output. This concept of imprecise probability is also validated in the present study. The imprecise CDFs of monsoon rainfall are derived for three 30-year time slices, the 2020s, 2050s and 2080s, under the A1B, A2 and B1 scenarios. The model is demonstrated with the prediction of monsoon rainfall in the Orissa meteorological subdivision, which shows a possible decreasing trend in the future.
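A small numeric sketch of the weighted-CDF and envelope ideas: build an empirical CDF per GCM, combine them with weights into a single CDF, and take pointwise bounds across models as the envelope that an imprecise CDF would enclose. The rainfall samples and model weights below are synthetic placeholders, not the study's downscaled data.

```python
# Sketch: weighted mean CDF and CDF envelope across GCM ensembles.
# Rainfall samples and model weights are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(7)
# Downscaled monsoon rainfall (mm) from three hypothetical GCMs.
gcm_samples = [rng.normal(mu, 80, 300) for mu in (900, 850, 800)]
weights = np.array([0.5, 0.3, 0.2])          # performance-based weights

grid = np.linspace(500, 1200, 200)           # rainfall evaluation grid

def ecdf(samples, x):
    # Empirical CDF of `samples` evaluated at points `x`.
    return np.searchsorted(np.sort(samples), x, side="right") / len(samples)

cdfs = np.array([ecdf(s, grid) for s in gcm_samples])   # shape (3, 200)
weighted_cdf = weights @ cdfs                # single-valued combination
lower_env = cdfs.min(axis=0)                 # pointwise envelope bounds
upper_env = cdfs.max(axis=0)                 # (what an imprecise CDF encloses)

i = np.searchsorted(grid, 850)
print(f"P(rain <= 850 mm): weighted {weighted_cdf[i]:.2f}, "
      f"envelope [{lower_env[i]:.2f}, {upper_env[i]:.2f}]")
```

The interval printed for each rainfall level is the band the study argues should be reported instead of the potentially misleading single weighted value.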
Abstract:
We propose a self-regularized pseudo-time marching strategy for ill-posed, nonlinear inverse problems involving the recovery of system parameters given partial and noisy measurements of the system response. While various regularized Newton methods are popularly employed to solve these problems, the resulting solutions are known to depend sensitively on the noise intensity in the data and on the regularization parameters, an optimal choice of which remains a tricky issue. Through limited numerical experiments on two parameter reconstruction problems, one involving the identification of a truss bridge and the other related to imaging soft-tissue organs for early detection of cancer, we demonstrate the superior features of the pseudo-time marching scheme.
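The core of a pseudo-time marching scheme can be sketched in a few lines for a generic nonlinear least-squares inversion: instead of a regularized Newton solve, the parameter estimate is evolved in an artificial time along the data-misfit gradient. The toy forward model, step size, and fixed iteration count below are assumptions for illustration, not the paper's scheme.

```python
# Pseudo-time marching sketch for a toy nonlinear inverse problem:
# recover p from noisy data d = G(p) + noise, with no explicit
# regularization parameter. Forward model and step size are illustrative.
import numpy as np

rng = np.random.default_rng(3)

def G(p):
    # Toy nonlinear forward map (stands in for the truss/tissue models).
    return np.array([p[0] * p[1], p[0] + p[1] ** 2, np.sin(p[0])])

def jacobian(p, eps=1e-6):
    # Central finite-difference Jacobian of G at p.
    J = np.zeros((3, len(p)))
    for j in range(len(p)):
        dp = np.zeros(len(p)); dp[j] = eps
        J[:, j] = (G(p + dp) - G(p - dp)) / (2 * eps)
    return J

p_true = np.array([1.2, 0.8])
d = G(p_true) + rng.normal(0, 0.01, 3)       # partial, noisy measurements

p = np.array([0.5, 0.5])                     # initial guess
dt = 0.1                                     # pseudo-time step
for k in range(2000):                        # march until misfit plateaus
    r = d - G(p)                             # data misfit
    p = p + dt * jacobian(p).T @ r           # dp/dt = J^T r (gradient flow)

print("recovered p:", p.round(3), " true p:", p_true)
```

Stopping the march once the misfit reaches the noise floor plays the role of regularization, which is the sense in which such schemes are self-regularized.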
Abstract:
Protein structure validation is an important step in computational modeling and structure determination. Stereochemical assessment of protein structures examines internal parameters such as bond lengths and Ramachandran (phi, psi) angles. Gross structure prediction methods, such as inverse folding procedures, and structure determination, especially at low resolution, can sometimes give rise to models that are incorrect due to the assignment of misfolds or the mistracing of electron density maps. Such errors are not reflected as strain in internal parameters. HARMONY is a procedure that examines the compatibility between the sequence and the structure of a protein by assigning scores to individual residues and their amino acid exchange patterns after considering their local environments. Local environments are described by the backbone conformation, solvent accessibility and hydrogen bonding patterns. We now provide HARMONY through a web server, so that users can submit their protein structure files and, if required, an alignment of homologous sequences. Scores are mapped onto the structure for subsequent examination, which is also useful for recognizing regions of possible local error in protein structures. The HARMONY server is located at http://caps.ncbs.res.in/harmony/
Abstract:
Purpose: To assess the effect of ultrasound modulation of near-infrared (NIR) light on the quantification of the scattering coefficient in tissue-mimicking biological phantoms. Methods: A unique method to estimate the phase of the modulated NIR light, making use of only time-averaged intensity measurements from a charge-coupled device camera, is used in this investigation. These experimental measurements from tissue-mimicking biological phantoms are used to estimate the differential pathlength, in turn leading to an estimate of the optical scattering coefficient. A Monte Carlo model-based numerical estimation of the phase in the presence of ultrasound modulation is performed to verify the experimental results. Results: The results indicate that ultrasound modulation of NIR light enhances the effective scattering coefficient. The observed enhancement of the effective scattering coefficient in tissue-mimicking viscoelastic phantoms increases with increasing ultrasound drive voltage. The same trend is noticed as the ultrasound modulation frequency approaches the natural vibration frequency of the phantom material. The contrast enhancement is smaller for stiffer (larger storage modulus) tissue, mimicking a tumor's necrotic core, than for normal tissue. Conclusions: Ultrasound modulation of the insonified region leads to an increase in the effective number of scattering events experienced by the NIR light, increasing the measured phase and causing the enhancement of the effective scattering coefficient. Ultrasound modulation of NIR light could therefore provide a better estimation of the scattering coefficient. The observed local enhancement of the effective scattering coefficient in the ultrasound focal region is validated using both experimental measurements and Monte Carlo simulations. (C) 2010 American Association of Physicists in Medicine. [DOI: 10.1118/1.3456441]
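The chain of reasoning in the conclusions can be summarized with the standard frequency-domain relation between measured phase and mean photon pathlength, written here in generic form as a hedged aid rather than as the paper's exact expressions:

```latex
% For modulation frequency omega, refractive index n and source-detector
% distance d: the measured phase grows with the mean pathlength <L>,
% which in turn grows with the effective scattering coefficient;
% DPF denotes the differential pathlength factor.
\phi \;\approx\; \frac{\omega\, n}{c}\,\langle L \rangle,
\qquad \langle L \rangle = \mathrm{DPF}\cdot d
```

More scattering events imply a longer mean pathlength, hence a larger measured phase and a larger inferred effective scattering coefficient, which is the trend reported above.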
Abstract:
An application of direct methods to the dynamic security assessment of power systems using structure-preserving energy functions (SPEF) is presented. The transient energy margin (TEM) is used as an index for checking the stability of the system as well as for ranking contingencies based on their severity. The computation of the TEM requires the evaluation of the critical energy and of the energy at fault clearing. Usually this is done by simulating the faulted trajectory, which is time-consuming. In this paper, a new algorithm which eliminates the faulted-trajectory simulation is presented to calculate the TEM. The system equations and the SPEF are developed using the centre-of-inertia (COI) formulation, and the loads are modelled as arbitrary functions of the respective bus voltages. The critical energy is evaluated using the potential energy boundary surface (PEBS) method. The method is illustrated on two realistic power system examples.
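In symbols, the index used above is the margin between the critical energy and the system energy at fault clearing (a standard definition in direct methods; the notation here is generic):

```latex
% Transient energy margin: positive values indicate stability, and the
% magnitude ranks contingency severity. V is the SPEF evaluated in the
% COI frame; V_cr comes from the PEBS method, V_cl from the clearing state.
\Delta V_{\mathrm{TEM}} \;=\; V_{\mathrm{cr}} - V_{\mathrm{cl}}
```

The paper's contribution is obtaining the clearing energy V_cl without simulating the faulted trajectory.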
Abstract:
The restoration, conservation and management of water resources require a thorough understanding of what constitutes a healthy ecosystem. Monitoring and assessment provide the basic information on the condition of our waterbodies. The present work details a study carried out at two waterbodies, the Chamarajasagar reservoir and Madiwala lake, selected on the basis of their current use and locations. Chamarajasagar reservoir supplies drinking water to Bangalore city and is located on the outskirts of the city, surrounded by agricultural and forest land. Madiwala lake, on the other hand, is situated in the heart of Bangalore city and receives an influx of pollutants from domestic and industrial sewage. A comparative assessment of the surface water quality of both was carried out by measuring various physico-chemical and biological parameters. The physico-chemical analyses included temperature, transparency, pH, electrical conductivity, dissolved oxygen, alkalinity, total hardness, calcium hardness, magnesium hardness, nitrates, phosphates, sodium, potassium and COD. The analyses followed the standard methods prescribed (or recommended) by APHA and NEERI. The biological parameter was phytoplankton analysis.

Detailed investigation of the parameters, which are well within the tolerance limits in Chamarajasagar reservoir, indicates that it is fairly unpolluted, except for the pH values, which indicate greater alkalinity; this may be attributed to natural causes and to agricultural runoff from the catchment. By contrast, the limnology of Madiwala lake is greatly influenced by the inflow of sewage, which contributes significantly to the dissolved solids, total hardness and alkalinity of the lake water and to its low DO level. Although the two study areas differ in age, physiography, chemistry and type of inflows, both maintain a phytoplankton distribution overwhelmingly dominated by Cyanophyceae members, specifically Microcystis aeruginosa. These blue-green algae apparently enter the waterbodies from soil, which is known to harbour a rich diversity of blue-green flora with several species common to limnoplankton, a feature reported to be unique to south Indian lakes.

Chamarajasagar water samples revealed five classes of phytoplankton, of which Cyanophyceae (92.15 percent), comprising the single species Microcystis aeruginosa, dominated the other algal forms. The next major class was Chlorophyceae (3.752 percent), followed by Dinophyceae (3.51 percent), Bacillariophyceae (0.47 percent) and a sparsely available, unidentified class (0.12 percent). Madiwala lake phytoplankton, in addition to Cyanophyceae (26.20 percent), showed a high density of Chlorophyceae members (73.44 percent) dominated by Scenedesmus sp., Pediastrum sp. and Euglena sp., which are considered indicators of organic pollution; the domestic and industrial sewage that finds its way into the lake is the factor causing this pollution. Euglenophyceae and Bacillariophyceae members were the lowest in number compared with the other classes.

Thus, the analysis of the various parameters indicates that Chamarajasagar reservoir is relatively unpolluted, except for the high percentage of Microcystis aeruginosa and the slightly alkaline nature of its water. Madiwala lake samples revealed eutrophication and high levels of pollution, as confirmed by the physico-chemical analysis, whose values are well above the tolerance limits; the phytoplankton analysis likewise reveals the dominance of Chlorophyceae members, indicating organic pollution, with sewage as the causative factor.
Abstract:
Genetic Algorithms (GAs) are recognized as an alternative class of computational models, which mimic natural evolution to solve problems in a wide domain including machine learning, music generation, genetic synthesis, etc. In the present study a Genetic Algorithm has been employed for damage assessment of composite structural elements. A state of damage is modeled as a reduction in stiffness, and the task is to determine the magnitude and location of the damage. In a composite plate discretized into a set of finite elements, if the j-th element is damaged, the GA-based technique will predict the reductions in E_x and E_y and the location j. The method exploits the fact that the natural frequency decreases with decreasing stiffness. The computation of the natural frequencies of any two modes of the damaged plate for assumed damage parameters is facilitated by eigenvalue sensitivity analysis; the eigenvalue sensitivities are the derivatives of the eigenvalues with respect to certain design parameters. If ω_i^u is the natural frequency of the i-th mode of the undamaged plate and ω_i^d that of the damaged plate, with δω_i the difference between the two and δω_k the corresponding difference in the k-th mode, then R is defined as the ratio R = δω_i/δω_k. For a random selection of E_x, E_y and j, a ratio R_i is obtained, and a combination of E_x, E_y and j that makes R_i − R = 0 is found by the Genetic Algorithm.
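A compact sketch of the GA search loop just described, with the finite-element modal analysis replaced by a hypothetical linear sensitivity surrogate (the surrogate function and all constants are invented stand-ins, not the study's model): individuals encode the two stiffness reductions and the damaged element index, and the fitness drives R_i − R toward zero.

```python
# GA sketch for ratio-matching damage identification. The modal model is
# a hypothetical linear sensitivity surrogate; everything here is
# illustrative, not the study's finite-element implementation.
import random

N_ELEMS = 20              # number of plate elements (assumed)
MODES = (1, 2)            # the two modes whose frequency shifts are compared

random.seed(0)
# Hypothetical per-element, per-mode sensitivities and mode weightings.
sens = {k: [random.uniform(0.1, 1.0) for _ in range(N_ELEMS)] for k in MODES}
wx, wy = {1: 0.7, 2: 0.4}, {1: 0.3, 2: 0.6}

def delta_omega(k, dEx, dEy, j):
    # Toy surrogate: frequency drop of mode k for reductions dEx, dEy at j.
    return sens[k][j] * (wx[k] * dEx + wy[k] * dEy) + 1e-12

def ratio(dEx, dEy, j):
    return delta_omega(MODES[0], dEx, dEy, j) / delta_omega(MODES[1], dEx, dEy, j)

TRUE = (0.30, 0.20, 7)                       # "measured" damage (demo only)
R_target = ratio(*TRUE)

def fitness(ind):
    return abs(ratio(*ind) - R_target)       # drive R_i - R toward zero

def random_ind():
    return (random.random(), random.random(), random.randrange(N_ELEMS))

def crossover(a, b):
    # Blend continuous genes; inherit the element index from one parent.
    return ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2,
            a[2] if random.random() < 0.5 else b[2])

def mutate(ind):
    dEx, dEy, j = ind
    return (min(1.0, max(0.0, dEx + random.gauss(0, 0.05))),
            min(1.0, max(0.0, dEy + random.gauss(0, 0.05))),
            j if random.random() > 0.2 else random.randrange(N_ELEMS))

pop = [random_ind() for _ in range(60)]
for gen in range(200):
    pop.sort(key=fitness)
    elite = pop[:20]                          # truncation selection
    pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                   for _ in range(40)]

best = min(pop, key=fitness)
print("best (dEx, dEy, element):", best, "residual:", fitness(best))
```

Note that a single ratio constrains only the element and the proportion dEx : dEy; the study's use of measured frequency shifts from two modes is what pins down the damage magnitudes as well.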
Abstract:
This paper discusses an approach for river mapping and flood evaluation based on multi-temporal time-series analysis of satellite images, utilizing pixel spectral information for image clustering and region-based segmentation for extracting water-covered regions. MODIS satellite images are analyzed at two stages: before the flood and during the flood. The multi-temporal MODIS images are processed in two steps. In the first step, clustering algorithms such as the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) are used to distinguish water regions from non-water regions based on spectral information; these algorithms are chosen because they are quite efficient at solving multi-modal optimization problems. The classified images are then segmented using spatial features of the water region to extract the river. From the results obtained, we evaluate the performance of the methods and conclude that incorporating region-based image segmentation along with clustering algorithms provides an accurate and reliable approach for the extraction of water-covered regions.
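As an illustration of the clustering stage, the sketch below runs a plain global-best PSO over the coordinates of two cluster centroids (water vs. non-water) in a two-band spectral space, minimizing the within-cluster sum of squares. The pixel data, band choices, and PSO constants are synthetic assumptions, and the subsequent region-based segmentation stage is not shown.

```python
# PSO-based two-cluster (water / non-water) pixel clustering sketch.
# Synthetic stand-in for the paper's GA/PSO step, not the authors' code.
import numpy as np

rng = np.random.default_rng(1)
# Synthetic "MODIS" pixels with 2 spectral bands; water is darker in NIR.
water = rng.normal([0.05, 0.02], 0.01, size=(500, 2))
land = rng.normal([0.10, 0.30], 0.03, size=(1500, 2))
pixels = np.vstack([water, land])

def wcss(centroids):
    # Within-cluster sum of squares for a (2, 2) pair of centroids.
    d = np.linalg.norm(pixels[:, None, :] - centroids[None, :, :], axis=2)
    return np.sum(d.min(axis=1) ** 2)

# Global-best PSO over the flattened centroid coordinates.
n_particles, dim = 30, 4
pos = rng.uniform(0.0, 0.4, size=(n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([wcss(p.reshape(2, 2)) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(100):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([wcss(p.reshape(2, 2)) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

centroids = gbest.reshape(2, 2)
labels = np.linalg.norm(pixels[:, None, :] - centroids[None], axis=2).argmin(1)
print("centroids:\n", centroids, "\ncluster sizes:", np.bincount(labels))
```

The population-based search is what makes GA/PSO attractive for the multi-modal objective here: unlike k-means, a swarm is less prone to getting trapped in a poor local optimum of the centroid placement.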
Abstract:
Northeast India is one of the most seismically active regions in the world, with on average more than seven earthquakes of magnitude 5.0 and above per year. Reliable seismic hazard assessment can provide the necessary design inputs for earthquake-resistant design of structures in this region. In this study, both deterministic and probabilistic methods have been applied for seismic hazard assessment of the Tripura and Mizoram states at bedrock level. An updated earthquake catalogue was compiled from various national and international seismological agencies for the period 1731 to 2011. Homogenization, declustering and data-completeness analysis of the events were carried out before the hazard evaluation. Seismicity parameters were estimated for each source zone using the Gutenberg-Richter (G-R) recurrence relation (given below). Based on seismicity, tectonic features and fault rupture mechanisms, the region was divided into six major subzones. Region-specific correlations were used for magnitude conversion to homogenize earthquake size. Ground motion equations (Atkinson and Boore 2003; Gupta 2010) were validated against observed PGA (peak ground acceleration) values before use in the hazard evaluation. The hazard is estimated using linear sources identified in and around the study area. Results are presented as PGA from both DSHA (deterministic seismic hazard analysis) and PSHA (probabilistic seismic hazard analysis) with 2% and 10% probability of exceedance in 50 years, and as spectral acceleration (T = 0.2 s, 1.0 s) for both states (2% probability of exceedance in 50 years). The results provide inputs for planning risk-reduction strategies, developing risk-acceptance criteria and analyzing potential financial losses in the study area, supporting comprehensive, higher-resolution hazard mapping.
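For reference, the seismicity parameters mentioned above come from fitting the Gutenberg-Richter recurrence relation to each source zone's catalogue:

```latex
% N(M): annual number of earthquakes with magnitude >= M. The a-value
% measures overall activity; the b-value measures the relative frequency
% of large versus small events.
\log_{10} N(M) \;=\; a - b\,M
```

The declustered, completeness-checked catalogue yields zone-specific a and b values that feed both the DSHA and PSHA computations.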