18 results for Common Assessment Framework (CAF)
Abstract:
Crystals growing from solution, from the vapour phase and from supercooled melt exhibit, as a rule, planar faces. The geometry and distribution of dislocations within the crystals thus grown are strongly related to the growth on planar faces and to the different growth sectors, rather than to the physical properties of the crystals or the growth methods employed. As a result, many features of the generation and geometrical arrangement of defects are common to very different crystal species. In this paper these common aspects of dislocation generation and configuration, which permit one to predict their nature and distribution, are discussed. For imaging the defects a very versatile and widely applicable technique, viz. X-ray diffraction topography, is used. Growth dislocations in solution-grown crystals follow straight paths with well-defined preferred directions. These preferred directions, which in most cases lie within an angle of ±15° to the growth normal, depend on the growth direction and on the Burgers vector involved. The potential configurations of dislocations in a growing crystal can be evaluated using the theory developed by Klapper, which is based on linear anisotropic elasticity. The preferred line direction of a particular dislocation corresponds to that for which the dislocation energy per unit growth length is a minimum. Line direction analysis based on this theory enables one to characterise dislocations propagating in a growing crystal. A combined theoretical analysis and experimental investigation based on this theory is presented.
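As a compact statement of the minimum-energy criterion described above (written here in generic notation as a sketch, not taken verbatim from the paper), the preferred line direction is the one that minimizes the elastic energy per unit length of crystal grown,

\[
\mathbf{l}_0 = \arg\min_{\mathbf{l}} \frac{E(\mathbf{l})}{\cos\alpha},
\]

where E(l) is the dislocation energy per unit line length for direction l and the given Burgers vector, computed from linear anisotropic elasticity, and alpha is the angle between l and the growth normal; division by cos(alpha) converts energy per unit line length into energy per unit length grown normal to the face.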
Abstract:
The paper presents a method for evaluating the external stability of reinforced soil walls subjected to earthquakes within the framework of the pseudo-dynamic method. The seismic reliability of the wall is evaluated by considering the different possible failure modes, namely sliding along the base, overturning about the toe of the wall, bearing capacity failure and the eccentricity of the resultant force. The analysis treats the properties of the reinforced backfill, the foundation soil below the base of the wall, the length of the geosynthetic reinforcement and characteristics of the earthquake ground motion, such as the shear wave and primary wave velocities, as random variables. The optimum length of reinforcement needed to maintain stability against the four modes of failure is obtained by targeting various component reliability indices. Differences between the pseudo-static and pseudo-dynamic methods are clearly highlighted in the paper. A complete analysis of the pseudo-static and pseudo-dynamic methodologies shows that the pseudo-dynamic method results in realistic design values for the length of geosynthetic reinforcement under earthquake conditions.
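A minimal sketch of how a component reliability index against a single failure mode (here sliding) could be estimated by Monte Carlo simulation. The limit-state function g() and all distributions and coefficients below are hypothetical placeholders chosen for illustration, not the paper's pseudo-dynamic formulation.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical random variables (means and scatter are illustrative only)
phi = rng.normal(34.0, 2.0, n)      # backfill friction angle (degrees)
vs = rng.normal(200.0, 20.0, n)     # shear wave velocity (m/s)
kh = 0.25                           # horizontal seismic coefficient (deterministic here)

def g(phi_deg, vs, kh):
    # Hypothetical sliding limit state: resisting capacity minus seismic demand.
    # Stands in for the pseudo-dynamic sliding check described in the abstract.
    resistance = np.tan(np.radians(phi_deg))
    demand = kh * (1.0 + 200.0 / vs)
    return resistance - demand

pf = np.mean(g(phi, vs, kh) <= 0.0)                       # probability of failure
beta = -norm.ppf(pf) if 0.0 < pf < 1.0 else float("inf")  # reliability index
print(f"Pf = {pf:.2e}, beta = {beta:.2f}")

The same loop would be repeated for each of the four failure modes, and the reinforcement length increased until every component reliability index meets its target.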
Abstract:
Downscaling from large-scale atmospheric variables simulated by general circulation models (GCMs) to station-scale hydrologic variables is usually necessary to assess the hydrologic impact of climate change. This work presents CRF-downscaling, a new probabilistic downscaling method that represents the daily precipitation sequence as a conditional random field (CRF). The conditional distribution of the precipitation sequence at a site, given the daily sequence of large-scale atmospheric variables, is modeled as a linear-chain CRF. CRFs make no independence assumptions about the observations, which gives them flexibility in using high-dimensional feature vectors. Maximum likelihood parameter estimation for the model is performed using limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) optimization. Maximum a posteriori estimation with the Viterbi algorithm is used to determine the most likely precipitation sequence for a given set of atmospheric input variables. Direct classification of dry/wet days as well as prediction of precipitation amounts is achieved within a single modeling framework. The model is used to project the future cumulative distribution function of precipitation. Uncertainty in the precipitation prediction is addressed through a modified Viterbi algorithm that predicts the n most likely sequences. The model is applied to downscale monsoon (June-September) daily precipitation at eight sites in the Mahanadi basin in Orissa, India, using the MIROC3.2 medium-resolution GCM. The predicted distributions at all sites show an increase in the number of wet days and an increase in wet-day precipitation amounts. A comparison of the current and future predicted probability density functions for daily precipitation shows a change in the shape of the density function, with a decreasing probability of lower precipitation and an increasing probability of higher precipitation.
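A minimal numpy sketch of Viterbi decoding for a two-state (dry/wet) linear-chain model, illustrating the decoding step mentioned above; the score matrices are toy log-potentials, not the learned feature weights of the paper's CRF.

import numpy as np

def viterbi(emission, transition):
    # Most likely state sequence for a linear-chain model.
    # emission:   (T, S) per-step state scores (log-potentials)
    # transition: (S, S) state-to-state scores (log-potentials)
    T, S = emission.shape
    delta = np.zeros((T, S))              # best score of any path ending in each state
    back = np.zeros((T, S), dtype=int)    # argmax backpointers
    delta[0] = emission[0]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + transition + emission[t][None, :]
        back[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0)
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# Toy example: state 0 = dry day, state 1 = wet day
emission = np.log(np.array([[0.8, 0.2], [0.2, 0.8], [0.2, 0.8], [0.9, 0.1]]))
transition = np.log(np.array([[0.7, 0.3], [0.4, 0.6]]))
print(viterbi(emission, transition))      # -> [0, 1, 1, 0]

A standard n-best variant of this recursion keeps the top n partial paths per state at each step, which is one way to realize the modified Viterbi algorithm referred to in the abstract.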
Abstract:
A Batch Processing Machine (BPM) is one which processes a number of jobs simultaneously as a batch with common beginning and ending times. Also, a BPM, once started, cannot be interrupted (pre-emption is not allowed). This research is motivated by a BPM in the steel casting industry. There are three main stages in any steel casting industry, viz. the pre-casting stage, the casting stage and the post-casting stage. A quick overview of the entire process is shown in Figure 1. There are two BPMs: (1) the melting furnace in the pre-casting stage and (2) the Heat Treatment Furnace (HTF) in the post-casting stage of the steel casting manufacturing process. This study focuses on scheduling the latter, namely the HTF. The heat-treatment operation is one of the most important stages in steel casting industries. It determines the final properties that enable components to perform under demanding service conditions such as large mechanical loads, high temperatures and corrosive environments. In general, different types of castings have to undergo more than one type of heat-treatment operation, and the total heat-treatment processing times therefore vary. For better control, castings are primarily classified into a number of job families based on the alloy type, such as low-alloy castings and high-alloy castings. For technical reasons such as the type of alloy, the temperature level and the expected combination of heat-treatment operations, castings from different families cannot be processed together in the same batch.
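A minimal sketch of the batching constraint described above: jobs may be grouped into a batch only within their own job family and subject to a furnace capacity limit. The jobs, sizes and capacity are hypothetical, and the greedy first-fit-decreasing rule is only an illustration of the constraint structure, not the scheduling method studied in the paper.

from collections import defaultdict

# (job id, job family, size) -- all values are illustrative
jobs = [("J1", "low-alloy", 40), ("J2", "low-alloy", 30),
        ("J3", "high-alloy", 50), ("J4", "low-alloy", 45),
        ("J5", "high-alloy", 20)]
CAPACITY = 80   # hypothetical HTF capacity per batch

def form_batches(jobs, capacity):
    # Greedy first-fit-decreasing batching; jobs from different families never share a batch.
    by_family = defaultdict(list)
    for job_id, family, size in jobs:
        by_family[family].append((job_id, size))
    batches = []
    for family, items in by_family.items():
        open_batches = []                                 # each entry: [remaining capacity, [job ids]]
        for job_id, size in sorted(items, key=lambda x: -x[1]):
            for batch in open_batches:
                if size <= batch[0]:
                    batch[0] -= size
                    batch[1].append(job_id)
                    break
            else:
                open_batches.append([capacity - size, [job_id]])
        batches.extend((family, b[1]) for b in open_batches)
    return batches

print(form_batches(jobs, CAPACITY))
# -> [('low-alloy', ['J4', 'J2']), ('low-alloy', ['J1']), ('high-alloy', ['J3', 'J5'])]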
Abstract:
Hydrogen plasma can be used for the deoxidation of functional materials containing reactive metals, in both bulk and thin-film forms. Since the different species in the plasma are not in thermodynamic equilibrium, applying classical thermodynamics to the analysis of such a system presents some difficulties. While global equilibrium approaches have been tried, with and without additional approximations or constraints, there is some ambiguity in the results obtained. This article presents the application of a local equilibrium concept to assess the thermodynamic limit of the reaction of each species present in the gas with oxides or with oxygen dissolved in metals. Each reaction results in a different partial pressure of H2O. Because of the higher reactivity of the dissociated and ionized species and the larger thermodynamic driving force for reactions involving these species, they act as powerful reducing agents. It is necessary to remove the reaction products from the plasma to prevent back reaction and a gradual approach to global equilibrium. A quantitative description within the framework of the Ellingham-Richardson-Jeffes diagrams is presented.
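As an illustration of the local-equilibrium idea (a generic sketch assuming unit activities of the condensed phases; the oxide MO is a placeholder, not a system from the article), each gaseous hydrogen species defines its own reduction equilibrium and hence its own limiting H2O partial pressure:

\[
\mathrm{MO + H_2 = M + H_2O}, \qquad \mathrm{MO + 2H = M + H_2O}, \qquad \Delta G^{\circ} = -RT \ln K,
\]

with \(K = p_{\mathrm{H_2O}}/p_{\mathrm{H_2}}\) for the molecular reaction and \(K = p_{\mathrm{H_2O}}/p_{\mathrm{H}}^{2}\) for the atomic one. Because dissociation and ionization make \(\Delta G^{\circ}\) for the corresponding reactions much more negative, the equilibrium \(p_{\mathrm{H_2O}}\) they can sustain is far higher, which is how their greater reducing power appears on an Ellingham-Richardson-Jeffes type construction.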
Abstract:
We address the problem of allocating a single divisible good to a number of agents. The agents have concave valuation functions parameterized by a scalar type, and they report only the type. The goal is to find allocatively efficient, strategy-proof, nearly budget-balanced mechanisms within the Groves class. Near budget balance is attained by returning as much of the received payments as possible to the agents as rebates. Two performance criteria are of interest within the class of linear rebate functions: the maximum ratio of budget surplus to efficient surplus, and the expected budget surplus; the goal is to minimize these. Assuming that the valuation functions are known, we show that both problems reduce to convex optimization problems in which the convex constraint sets are characterized by a continuum of half-plane constraints parameterized by the vector of reported types. We then propose a randomized relaxation of these problems by sampling constraints. The relaxed problem is a linear programming problem (LP). We identify the number of samples needed for "near-feasibility" of the relaxed constraint set. Under some conditions on the valuation function, we show that the value of the approximate LP is close to the optimal value. Simulation results show significant improvements of our proposed method over the Vickrey-Clarke-Groves (VCG) mechanism without rebates. In the special case of indivisible goods, the mechanisms in this paper reduce to those proposed by Moulin, by Guo and Conitzer, and by Gujar and Narahari, without any need for randomization. Extensions of the proposed mechanisms to situations in which the valuation functions are not known to the central planner are also discussed.

Note to Practitioners: Our results will be useful in all resource allocation problems that involve gathering information privately held by strategic users, where the utilities are any concave function of the allocations and the resource planner is interested not in maximizing revenue but in efficient sharing of the resource. Such situations arise quite often in fair sharing of internet resources, fair sharing of funds across departments within the same parent organization, auctioning of public goods, etc. We study methods to achieve near budget balance by first collecting payments according to the celebrated VCG mechanism and then returning as much of the collected money as possible as rebates. Our focus on linear rebate functions allows for easy implementation. The resulting convex optimization problem is solved via relaxation to a randomized linear programming problem, for which several efficient solvers exist. This relaxation is enabled by constraint sampling. Keeping practitioners in mind, we identify the number of samples that assures a desired level of "near-feasibility" with the desired confidence level. Our methodology will occasionally require a subsidy from outside the system. We demonstrate via simulation, however, that if the mechanism is repeated several times over independent instances, then past surplus can support the subsidy requirements. We also extend our results to situations where the strategic users' utility functions are not known to the allocating entity, a common situation in the context of internet users and other problems.
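A minimal sketch of the constraint-sampling relaxation: a linear objective in the rebate coefficients is minimized subject to a finite random sample of the otherwise continuum of type-dependent half-plane constraints, and the result is solved as an ordinary LP. The surplus expression, feature map and dimensions below are hypothetical placeholders, not the paper's actual mechanism.

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n_agents, n_samples = 5, 2000

def surplus_coeffs(theta):
    # Hypothetical linear form: surplus(theta, c) = p(theta) - phi(theta).c,
    # where p is a payment proxy and phi a feature vector of the reported types.
    phi = np.sort(theta)            # linear rebates are often built on order statistics
    p = theta.sum() / n_agents      # placeholder for the collected VCG payments
    return phi, p

# Epigraph variable t bounds the surplus on every sampled type profile:
#   p(theta) - phi(theta).c <= t   and   p(theta) - phi(theta).c >= 0  (no subsidy on samples)
A_ub, b_ub = [], []
for _ in range(n_samples):
    theta = rng.uniform(0.0, 1.0, n_agents)                       # sampled reported types
    phi, p = surplus_coeffs(theta)
    A_ub.append(np.concatenate([-phi, [-1.0]])); b_ub.append(-p)  # p - phi.c - t <= 0
    A_ub.append(np.concatenate([phi, [0.0]]));   b_ub.append(p)   # phi.c <= p

c_obj = np.zeros(n_agents + 1)
c_obj[-1] = 1.0                                                   # minimize t
bounds = [(None, None)] * n_agents + [(0, None)]
res = linprog(c_obj, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=bounds, method="highs")
print(res.status, res.x[:-1], res.fun)   # rebate coefficients and worst sampled surplus

The sample-complexity result referred to in the abstract controls how large n_samples must be so that, with the desired confidence, only a small fraction of the unsampled constraints can be violated by the LP solution.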
Abstract:
The restoration, conservation and management of water resources require a thorough understanding of what constitutes a healthy ecosystem. Monitoring and assessment provide the basic information on the condition of our waterbodies. The present work details a study carried out at two waterbodies, the Chamarajasagar reservoir and Madiwala Lake, selected on the basis of their current use and locations. The Chamarajasagar reservoir supplies drinking water to Bangalore city and is located on the outskirts of the city, surrounded by agricultural and forest land. Madiwala Lake, on the other hand, is situated in the heart of Bangalore city and receives an influx of pollutants from domestic and industrial sewage. A comparative assessment of the surface water quality of both was carried out using various physico-chemical and biological parameters. The physico-chemical analyses included temperature, transparency, pH, electrical conductivity, dissolved oxygen, alkalinity, total hardness, calcium hardness, magnesium hardness, nitrates, phosphates, sodium, potassium and COD measurements of the given waterbody. The analysis was based on the standard methods prescribed (or recommended) by APHA and NEERI. The biological parameter was phytoplankton analysis. Detailed investigation of the parameters, which are well within the tolerance limits in the Chamarajasagar reservoir, indicates that it is fairly unpolluted, except for the pH values, which indicate greater alkalinity; this may be attributed to natural causes and to agricultural runoff from the catchment. By contrast, the limnology of Madiwala Lake is greatly influenced by the inflow of sewage, which contributes significantly to the dissolved solids, total hardness and alkalinity of the lake water and to a low DO level. Although the two study areas differ in age, physiography, chemistry and type of inflows, they both maintain a phytoplankton distribution overwhelmingly dominated by Cyanophyceae members, specifically Microcystis aeruginosa. These blue-green algae apparently enter the waterbodies from soil, which is known to harbour a rich diversity of blue-green flora with several species common to the limnoplankton, a feature reported to be unique to south Indian lakes. Chamarajasagar water samples revealed five classes of phytoplankton, of which Cyanophyceae (92.15 percent), which dominated the other algal forms, comprised the single species Microcystis aeruginosa. The next major class of algae was Chlorophyceae (3.752 percent), followed by Dinophyceae (3.51 percent), Bacillariophyceae (0.47 percent) and a sparsely represented, unidentified class (0.12 percent). Madiwala Lake phytoplankton, in addition to Cyanophyceae (26.20 percent), showed a high density of Chlorophyceae members (73.44 percent) dominated by Scenedesmus sp., Pediastrum sp. and Euglena sp., which are considered indicators of organic pollution; the domestic and industrial sewage that finds its way into the lake is the factor causing this organic pollution. Compared with the other classes, Euglenophyceae and Bacillariophyceae members were the fewest in number. Thus, the analysis of the various parameters indicates that the Chamarajasagar reservoir is relatively unpolluted except for the high percentage of Microcystis aeruginosa and the slightly alkaline nature of the water.
The Madiwala Lake samples revealed eutrophication and high levels of pollution, confirmed by the physico-chemical analysis, whose values are well above the tolerance limits. The phytoplankton analysis of Madiwala Lake likewise reveals the dominance of Chlorophyceae members, which indicates organic pollution, with sewage as the causative factor.
Abstract:
The study focuses on probabilistic assessment of the internal seismic stability of reinforced soil structures (RSS) subjected to earthquake loading within the framework of the pseudo-dynamic method. In the literature, the pseudo-static approach has been used to compute reliability indices against the tension and pullout failure modes, but it cannot account for the real dynamic nature of earthquake accelerations. The work presented in this paper makes use of horizontal and vertical sinusoidal accelerations, amplification of vibrations, shear wave and primary wave velocities, and the time period of excitation. This approach is applied to quantify the influence of the backfill properties, the geosynthetic reinforcement and the characteristics of the earthquake ground motion on the reliability indices for the tension and pullout failure modes. Seismic reliability indices at different levels of the geosynthetic layers are determined for different magnitudes of seismic acceleration, soil amplification, and shear wave and primary wave velocities. The results are compared with the pseudo-static method, and the significance of the present methodology for designing reinforced soil structures is discussed.
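For context, pseudo-dynamic analyses of this kind typically represent the horizontal shaking as a shear wave travelling up through the structure with a linearly varying amplification factor; one commonly used form (a sketch in generic notation; the paper's exact expressions may differ) is

\[
a_h(z,t) = \left[1 + \frac{H - z}{H}\,(f_a - 1)\right] k_h\, g \,\sin\!\left[\omega\left(t - \frac{H - z}{V_s}\right)\right],
\]

with an analogous expression for the vertical acceleration written in terms of the primary wave velocity V_p. Here H is the wall height, z the depth below the crest, f_a the amplification factor, k_h the horizontal seismic coefficient, and omega the angular frequency corresponding to the period of lateral shaking.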
Assessment of seismic hazard and liquefaction potential of Gujarat based on probabilistic approaches
Abstract:
Gujarat is one of the fastest-growing states of India, with major industrial activities coming up in the principal cities of the state. Analysing the seismic hazard is indispensable, as the region is considered among the most seismically active in the stable continental region of India. The Bhuj earthquake of 2001 caused extensive damage in terms of casualties and economic loss. In the present study, the seismic hazard of Gujarat is evaluated using a probabilistic approach within a logic tree framework that minimizes the uncertainties in hazard assessment. The peak horizontal acceleration (PHA) and spectral acceleration (Sa) values were evaluated for 10% and 2% probabilities of exceedance in 50 years. Two important geotechnical effects of earthquakes, site amplification and liquefaction, are also evaluated, considering site characterization based on site classes. The liquefaction return period for the entire state of Gujarat is evaluated using a performance-based approach. The maps of PHA and PGA values prepared in this study are very useful for seismic hazard mitigation of the region in the future.
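For reference, the quoted exceedance probabilities map onto return periods through the usual Poisson assumption:

\[
P(\text{exceedance in } T \text{ years}) = 1 - e^{-\lambda T}, \qquad T_R = \frac{1}{\lambda},
\]

so a 10% probability of exceedance in 50 years corresponds to a return period of about 475 years, and a 2% probability in 50 years to about 2475 years.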
Abstract:
The safety of an in-service brick arch railway bridge is assessed through field testing and finite-element analysis. Different test train loading configurations were used in the field testing. The response of the bridge in terms of displacements, strains and accelerations was measured under ambient and design train traffic loading conditions. Nonlinear fracture-mechanics-based finite-element analyses are performed to assess the margin of safety. A parametric study examines the effect of tensile strength on the progress of cracking in the arch. Furthermore, a stability analysis is carried out to assess collapse of the arch caused by lateral movement at the springing of one of the abutments, which is elastically supported. The margin of safety with respect to cracking and stability failure is computed. Conclusions are drawn, with some remarks on the state of the bridge, within the framework of the available and inferred information. DOI: 10.1061/(ASCE)BE.1943-5592.0000338. (C) 2013 American Society of Civil Engineers.
Abstract:
Functions are important in designing. However, several issues hinder progress in the understanding and usage of functions: the lack of a clear and overarching definition of function, the lack of overall justification for the inevitability of the multiple views of function, and the scarcity of systematic attempts to relate these views to one another. To help resolve these, the objectives of this research are to propose a common definition of function that underlies the multiple views in the literature, and to identify and validate the views of function that are logically justified to be present in designing. Function is defined as a change intended by designers between two scenarios: before and after the introduction of the design. A framework is proposed that comprises the above definition of function and an empirically validated model of designing, the extended Generate, Evaluate, Modify and Select model of state change combined with the Action, Part, Phenomenon, Input, oRgan and Effect model of causality (known together as GEMS of SAPPhIRE), comprising the views of activity, outcome, requirement-solution-information, and system-environment. The framework is used to identify the logically possible views of function in the context of designing and is validated by comparing these with the views of function in the literature. Describing the different views of function using the proposed framework should enable comparisons and help determine relationships among the various views, leading to better understanding and usage of functions in designing.
Abstract:
In the analysis and design of municipal solid waste (MSW) landfills, there are many uncertainties associated with the properties of MSW during and after waste placement. Several studies involving different laboratory and field tests have been performed to understand the complex behavior and properties of MSW, and, based on these studies, different models have been proposed for the analysis of the time-dependent settlement response of MSW. For the analysis of MSW settlement, it is very important to account for the variability of the model parameters that reflect the different processes involved, such as primary compression under loading, mechanical creep and biodegradation. In this paper, regression equations based on the response surface method (RSM) are used to represent the complex behavior of MSW described by a newly developed constitutive model. An approach to assess landfill capacities and to develop landfill closure plans based on the prediction of landfill settlements is proposed. The variability associated with the model parameters relating to primary compression, mechanical creep and biodegradation is used to examine their influence on MSW settlement within a reliability analysis framework, and the influence of the various parameters on the settlement of MSW is estimated through sensitivity analysis. (C) 2013 Elsevier Ltd. All rights reserved.
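For orientation, time-dependent MSW settlement models of the kind referred to above are often written as a sum of three strain components (this is one widely used functional form, following Marques and co-workers, not the newly developed constitutive model of this paper):

\[
\varepsilon(t) = C_c' \log\!\frac{\sigma_0 + \Delta\sigma}{\sigma_0}
+ \Delta\sigma\, b \left(1 - e^{-ct}\right)
+ E_{DG} \left(1 - e^{-dt}\right),
\]

where the first term is primary compression under the applied stress increment, the second is mechanical creep, and the third is biodegradation-induced compression; b, c, E_DG and d are the model parameters whose variability is propagated through the reliability analysis.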
Abstract:
Intermolecular cooperativity and structural relaxations in PVDF/PMMA blends were studied in this work with respect to differently surface-modified multiwalled nanotubes (MWNTs) at 1 wt % (amine, ~NH2; carboxylic acid, ~COOH; and pristine), near the blend's Tg and in the vicinity of demixing, using dielectric spectroscopy, SAXS, DSC and WAXD. Intermolecular cooperativity and configurational entropy at Tg were addressed within the framework of the cooperative rearranging region (CRR). Because of specific interactions between PVDF and the NH2-MWNTs, the local composition fluctuates about its average value, resulting in a broad Tg. The scale of cooperativity (xi_CRR) and the number of segments in the cooperative volume (N_CRR) are comparatively smaller in the blends with NH2-MWNTs. This clearly suggests that the number of segments relaxing cooperatively is reduced in these blends owing to specific interactions, leading to greater heterogeneity. The configurational entropy at Tg, as derived from Vogel-Fulcher and Adam-Gibbs analysis, was reduced in the blends in the presence of MWNTs, manifesting as an entropic penalty for the chains. The crystallite size and the amorphous miscibility were evaluated using SAXS and were observed to be strongly contingent on the surface functional groups on the MWNTs. Three distinct relaxations were observed in the blends in the temperature range Tg < T < Tc: alpha_c, due to relaxations in the crystalline phase of PVDF; alpha_m, indicating the amorphous miscibility in PVDF/PMMA blends; and alpha-beta, concerning the segmental dynamics of PMMA. Both the dynamics and the nature of these relaxations were observed to depend on the surface functionality of the MWNTs. The dielectric permittivity was also enhanced in the presence of MWNTs, especially the NH2-MWNTs, with minimal losses. The influence of the MWNTs on the spherulite size and crystalline morphology of the blends was also confirmed by POM and SEM.
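For context, the two relations invoked above take their standard forms (written here in generic notation):

\[
\tau(T) = \tau_0 \exp\!\left(\frac{B}{T - T_0}\right) \quad \text{(Vogel-Fulcher)}, \qquad
\tau(T) = \tau_0 \exp\!\left(\frac{C}{T\, S_c(T)}\right) \quad \text{(Adam-Gibbs)},
\]

so that fitting the measured relaxation times to the first expression and equating it with the second yields the configurational entropy S_c near Tg. Within Donth's fluctuation approach the CRR size is commonly estimated as \(\xi_{CRR}^{3} = k_B T_g^{2}\, \Delta(1/c_p) / (\rho\, \delta T^{2})\), with the heat-capacity step and the temperature fluctuation taken from calorimetric data; this is the general route, not necessarily the exact expressions used in the paper.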
Abstract:
This article describes a new performance-based approach for evaluating the return period of seismic soil liquefaction based on standard penetration test (SPT) and cone penetration test (CPT) data. Conventional liquefaction evaluation methods consider a single acceleration level and magnitude, and these approaches fail to take into account the uncertainty in earthquake loading. Probabilistic seismic hazard analysis clearly shows that a given acceleration value receives contributions from different magnitudes with varying probability. In the new method presented in this article, the entire range of ground shaking and the entire range of earthquake magnitude are considered, and the liquefaction return period is evaluated on the basis of the SPT and CPT data. The article explains the performance-based methodology for liquefaction analysis, starting from probabilistic seismic hazard analysis (PSHA) for the evaluation of seismic hazard and proceeding to the performance-based evaluation of the liquefaction return period. A case study has been carried out for Bangalore, India, based on SPT data and converted CPT values, and a comparison of the results obtained from the two methods is presented. For an area of 220 km2 in Bangalore city, the site class was assessed based on a large number of borehole records and 58 multichannel analysis of surface waves (MASW) surveys. Using the site class and the peak acceleration at rock level from the PSHA, the peak ground acceleration at the ground surface was estimated using a probabilistic approach. The liquefaction analysis was based on data from 450 boreholes in the study area. The results of the CPT-based analysis match well with the results obtained from the analogous analysis with SPT data.
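In the spirit of the performance-based formulation described above (generic notation; the paper's own derivation may differ in detail), the mean annual rate at which the factor of safety against liquefaction falls below a target value is obtained by summing over the full seismic hazard,

\[
\Lambda_{FS_L^{*}} = \sum_{i}\sum_{j} P\!\left[FS_L < FS_L^{*} \mid a_i, m_j\right] \Delta\lambda_{a_i, m_j},
\qquad T_R = \frac{1}{\Lambda_{FS_L^{*}}},
\]

where \(\Delta\lambda_{a_i, m_j}\) is the incremental annual rate of ground motions with amplitude \(a_i\) and magnitude \(m_j\) from the PSHA deaggregation, and \(T_R\) is the resulting liquefaction return period.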
Abstract:
Using first-principles calculations, we show that both the storage capacity and the desorption temperature of MOFs can be significantly enhanced by decorating pyridine (a common linker in MOFs) with metal atoms. The storage capacity of the metal-pyridine complexes is found to depend on the type of decorating metal atom. Among the 3d transition metal atoms, Sc turns out to be the most efficient, storing up to four H2 molecules. Most importantly, Sc does not undergo dimerisation on the surface of pyridine, keeping the storage capacity of every metal atom intact. Based on these findings, we propose a metal-decorated, pyridine-based MOF, which has the potential to meet the H2 storage capacity required for vehicular usage. Copyright (C) 2014, Hydrogen Energy Publications, LLC. Published by Elsevier Ltd. All rights reserved.
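As a note on how such storage energetics are typically quantified in first-principles studies (a generic definition, not necessarily the exact convention of the paper), the average binding energy per adsorbed H2 molecule is

\[
E_b(n) = \frac{E[\mathrm{M\text{-}pyridine}] + n\,E[\mathrm{H_2}] - E[\mathrm{M\text{-}pyridine} + n\mathrm{H_2}]}{n},
\]

where M is the decorating metal atom and the E[...] are total energies from the calculations; binding energies of roughly 0.2-0.6 eV per H2 are often quoted as the window needed for reversible storage and release near ambient conditions.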