836 results for model-based security management
Abstract:
A health-monitoring and life-estimation strategy for composite rotor blades is developed in this work. The cross-sectional stiffness reduction obtained from physics-based models is expressed as a function of the life of the structure using a recent phenomenological damage model. This stiffness reduction is then used to study the behavior of measurable system parameters such as blade deflections, loads, and strains of a composite rotor blade in static analysis and in forward flight. The simulated measurements are obtained from an aeroelastic analysis of the composite rotor blade, based on finite elements in space and time, with physics-based damage models that are then linked to the life consumption of the blade. The model-based measurements are contaminated with noise to simulate real data. Genetic fuzzy systems are developed for global online prediction of physical damage and life consumption using displacement- and force-based measurement deviations between damaged and undamaged conditions. Furthermore, local online prediction of physical damage and life consumption is performed using strains measured along the blade length. It is observed that life consumption in the matrix-cracking zone is about 12-15%, and life consumption in the debonding/delamination zone is about 45-55%, of the total life of the blade. It is also observed that the success rate of the genetic fuzzy systems depends upon the number and type of measurements and on the training and testing noise levels. The genetic fuzzy systems work well with noisy data and are recommended for online structural health monitoring of composite helicopter rotor blades.
Abstract:
XML has emerged as a medium for interoperability over the Internet. As the number of documents published in the form of XML increases, there is a need for selective dissemination of XML documents based on user interests. In the proposed technique, a combination of the Self Adaptive Migration Model Genetic Algorithm (SAMGA) [5] and a multi-class Support Vector Machine (SVM) is used to learn a user model. Based on feedback from the users, the system automatically adapts to each user's preferences and interests. The user model and a similarity metric are used for selective dissemination of a continuous stream of XML documents. Experimental evaluations performed over a wide range of XML documents indicate that the proposed approach significantly improves the performance of the selective dissemination task with respect to accuracy and efficiency.
Abstract:
Masonry strength is dependent upon the characteristics of the masonry unit, the mortar, and the bond between them. Empirical formulae as well as analytical and finite element (FE) models have been developed to predict the structural behaviour of masonry. This paper focuses on developing a three-dimensional non-linear FE model, based on a micro-modelling approach, to predict masonry prism compressive strength and crack pattern. The proposed FE model uses multi-linear stress-strain relationships to model the non-linear behaviour of the solid masonry unit and the mortar. Willam-Warnke's five-parameter failure theory, developed for modelling the tri-axial behaviour of concrete, has been adopted to model the failure of masonry materials. The post-failure regime has been modelled by applying orthotropic constitutive equations based on the smeared crack approach. The compressive strength of the masonry prism predicted by the proposed FE model has been compared with experimental values as well as with the values predicted by other failure theories and the Eurocode formula. The crack pattern predicted by the FE model shows vertical splitting cracks in the prism. The FE model predicts the ultimate failure compressive stress to be close to 85% of the mean experimental compressive strength value.
Abstract:
In the Taita Hills, south-eastern Kenya, remnants of indigenous mountain rainforests play a crucial role as water towers and socio-cultural sites. They are under pressure from poverty, shortage of cultivable land and the fading of traditional knowledge. This study examines the traditional ecological knowledge of the Taita and the ways it may be applied within transforming natural resource management regimes. I have analyzed some justifications for and hindrances to ethnodevelopment and participatory forest management in light of recently renewed Kenyan forest policies. Mixed methods were applied by combining an ethnographic approach with participatory GIS. I learned about traditionally protected forests and their ecological and cultural status through a "seek out the expert" method and with remote sensing data and tools. My informants were 107 household interviewees, 257 focus group participants, 73 key informants and 87 common informants in participatory mapping. Religious leaders and state officials also shared their knowledge for this study. I have gained a better understanding of the traditionally protected forests and sites by examining their ecological characteristics and their relation to social dynamics, and by evaluating their strengths and hindrances as sites for conservation of cultural and biological diversity. My results show that these sites are important components of a complex socio-ecological system, with symbolic status and sacred and mystical elements, that contributes to the connectivity of remnant forests in the agroforestry-dominated landscape. Altogether, 255 plant species and 220 uses were recognized by the tradition experts, whereas 161 species with 108 beneficial uses were listed by farmers. Of the traditionally protected forests studied, 47% were on private land and 23% on community land, leaving 9% within state forest reserves.
A paradigm shift in conservation is needed; the conservation area approach is not functional for private lands or areas entrusted to communities. The role of traditionally protected forests in community-based forest management is, however, paradoxical: communal approaches suggest equal participation of people, whereas management of these sites has traditionally been the duty of solely accredited experts in the village. As modernization has gathered pace, such experts have become fewer. Sacredness clearly contributes to protection, but it does not equal conservation. Various social, political and economic arrangements further affect the integrity of traditionally protected forests and sites, control of witchcraft being one of them. My results suggest that the Taita have a rich traditional ecological knowledge base, which should be more firmly integrated into natural resource management planning processes.
Abstract:
A swarm is a temporary structure formed when several thousand honey bees leave their hive and settle on some object such as the branch of a tree. They remain in this position until a suitable site for a new home is located by the scout bees. A continuum model based on heat conduction and heat generation is used to predict temperature profiles in swarms. Since internal convection is neglected, the model is applicable only at low values of the ambient temperature T-a. Guided by the experimental observations of Heinrich (1981a-c, J. Exp. Biol. 91, 25-55; Science 212, 565-566; Sci. Am. 244, 147-160), the analysis is carried out mainly for non-spherical swarms. The effective thermal conductivity is estimated using the data of Heinrich (1981a, J. Exp. Biol. 91, 25-55) for dead bees. For T-a = 5 and 9 degrees C, results based on a modified version of the heat generation function due to Southwick (1991, The Behaviour and Physiology of Bees, pp. 28-47. C.A.B. International, London) are in reasonable agreement with measurements. Results obtained with the heat generation function of Myerscough (1993, J. Theor. Biol. 162, 381-393) are qualitatively similar to those obtained with Southwick's function, but the error is larger in the former case. The results suggest that the bees near the periphery generate more heat than those near the core, in accord with the conjecture of Heinrich (1981c, Sci. Am. 244, 147-160). On the other hand, for T-a = 5 degrees C, the heat generation function of Omholt and Lonvik (1986, J. Theor. Biol. 120, 447-456) leads to a trivial steady state where the entire swarm is at the ambient temperature. Therefore an acceptable heat generation function must result in a steady state which is both non-trivial and stable with respect to small perturbations. Omholt and Lonvik's function satisfies the first requirement, but not the second.
For T-a = 15 degrees C, there is a considerable difference between predicted and measured values, probably due to the neglect of internal convection in the model.
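The conduction-with-generation picture above can be sketched numerically. The following is a minimal illustration, not the paper's model: it assumes a spherical swarm (the paper treats mainly non-spherical ones), a hypothetical temperature-dependent generation function that merely mimics the qualitative Southwick-type shape, and made-up parameter values.

```python
import numpy as np

# Illustrative parameters (not Heinrich's measured values)
R = 0.1      # swarm radius, m
k = 0.1      # effective thermal conductivity, W/(m K)
T_a = 5.0    # ambient temperature, deg C
N = 100      # radial grid points

r = np.linspace(0.0, R, N)
dr = r[1] - r[0]

def q_gen(T):
    # Hypothetical metabolic heat generation (W/m^3) that increases
    # as the bees cool; qualitative stand-in for a Southwick-type law
    return 1000.0 * np.clip((36.0 - T) / 36.0, 0.05, 1.0)

# Steady conduction in a sphere: k (T'' + 2 T'/r) = -q(T),
# with symmetry at r = 0 and T(R) = T_a; discretized as A T = b
A = np.zeros((N, N))
A[0, 0], A[0, 1] = -6.0, 6.0          # limit form 3 k T''(0) = -q at r = 0
for i in range(1, N - 1):
    A[i, i - 1] = 1.0 - dr / r[i]
    A[i, i] = -2.0
    A[i, i + 1] = 1.0 + dr / r[i]
A[N - 1, N - 1] = 1.0                  # Dirichlet condition at the mantle

T = np.full(N, T_a + 10.0)             # initial guess
for _ in range(200):                   # damped fixed-point iteration on q(T)
    b = -q_gen(T) * dr**2 / k
    b[N - 1] = T_a
    T_new = np.linalg.solve(A, b)
    if np.max(np.abs(T_new - T)) < 1e-8:
        T = T_new
        break
    T = 0.5 * T + 0.5 * T_new
# T now decreases from a warm core to T_a at the mantle
```

A generation function that vanishes at T = T_a would instead admit the trivial uniform steady state discussed above, which is why the steady state's non-triviality and stability both matter.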
Abstract:
A newly developed and validated constitutive model that accounts for primary compression and time-dependent mechanical creep and biodegradation is used in a parametric study to investigate the effects of model parameters on the predicted settlement of municipal solid waste (MSW) with time. The model enables the prediction of the stress-strain response and yield surfaces for three components of settlement: primary compression, mechanical creep, and biodegradation. The MSW parameters investigated include the compression index, coefficient of earth pressure at rest, overconsolidation ratio, and biodegradation parameters of MSW. A comparison of the predicted settlements for typical MSW landfill conditions showed significant differences in time-settlement response depending on the selected model input parameters. The effect of the lift thickness of MSW on predicted settlement is also investigated. Overall, the study shows that variation in the model parameters can lead to significantly different results; therefore, the model parameter values should be carefully selected to predict landfill settlements accurately. It is shown that the proposed model captures a time-settlement response in general agreement with the results obtained from two other reported models having similar features. (C) 2011 Elsevier Ltd. All rights reserved.
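The three settlement components can be combined in a simplified phenomenological sketch. This is not the paper's constitutive model: the functional forms below (log-stress primary compression, log-time creep, first-order biodegradation decay) and every parameter value are assumptions chosen only to show how the components superpose.

```python
import math

def msw_settlement(H0, Cc_mod, sigma0, dsigma, c_alpha, E_b, lam, t, t_p=0.1):
    """Total settlement (m) of an MSW lift of initial thickness H0 (m)
    at time t (years) after load application; illustrative only."""
    # 1) Primary compression: modified compression index x log stress ratio
    s_primary = H0 * Cc_mod * math.log10((sigma0 + dsigma) / sigma0)
    # 2) Mechanical creep: secondary compression after the primary period t_p
    s_creep = H0 * c_alpha * math.log10(max(t, t_p) / t_p)
    # 3) Biodegradation: first-order consumption of the degradable strain E_b
    s_bio = H0 * E_b * (1.0 - math.exp(-lam * t))
    return s_primary + s_creep + s_bio

# A 10 m lift under a 100 kPa stress increase, evaluated at 20 years
s20 = msw_settlement(H0=10.0, Cc_mod=0.2, sigma0=50.0, dsigma=100.0,
                     c_alpha=0.03, E_b=0.15, lam=0.2, t=20.0)
```

Sweeping `lam` (the biodegradation rate) or `c_alpha` shifts the long-term portion of the time-settlement curve, which is the kind of parametric sensitivity the study reports.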
Abstract:
A one-dimensional, biphasic, multicomponent steady-state model based on phenomenological transport equations for the catalyst layer, diffusion layer, and polymeric electrolyte membrane has been developed for a liquid-feed solid polymer electrolyte direct methanol fuel cell (SPE-DMFC). The model employs three important requisites: (i) analytical treatment of nonlinear terms to obtain a faster numerical solution and to render the iterative scheme easier to converge, (ii) an appropriate description of two-phase transport phenomena in the diffusive region of the cell to account for flooding and water condensation/evaporation effects, and (iii) treatment of polarization effects due to methanol crossover. An improved numerical solution has been achieved by coupling analytical integration of the kinetics and transport equations in the reaction layer, which explicitly includes the effect of concentration and pressure gradients on cell polarization within the bulk catalyst layer. In particular, the integrated kinetic treatment explicitly accounts for the nonhomogeneous porous structure of the catalyst layer and the diffusion of reactants within and between the pores in the cathode. At the anode, the analytical integration of the electrode kinetics has been obtained under the assumption of a macrohomogeneous porous electrode structure, because methanol transport in a liquid-feed SPE-DMFC is essentially a single-phase process owing to the high miscibility of methanol with water and its higher concentration relative to the gaseous reactants. A simple empirical model accounts for the effect of capillary forces on liquid-phase saturation in the diffusion layer. Consequently, the diffusive and convective flow equations, comprising the Nernst-Planck relation for solutes, Darcy's law for liquid water, and the Stefan-Maxwell equation for gaseous species, have been modified to include the capillary flow contribution to transport.
To understand fully the role of model parameters in simulating the performance of the DMFC, we have carried out a parametric study. An experimental validation of the model has also been carried out. (C) 2003 The Electrochemical Society.
Abstract:
Modeling the performance behavior of parallel applications, to predict execution times for larger problem sizes and numbers of processors, has been an active area of research for several years. Existing curve-fitting strategies for performance modeling utilize data from experiments conducted under uniform loading conditions. Hence the accuracy of these models degrades when the load conditions on the machines and network change. In this paper, we analyze a curve-fitting model that attempts to predict execution times for any load conditions that may exist on the systems during application execution. Based on experiments conducted with the model for a parallel eigenvalue problem, we propose a multi-dimensional curve-fitting model based on rational polynomials for performance prediction of parallel applications in non-dedicated environments. We used the rational-polynomial-based model to predict execution times for two other parallel applications on systems with large load dynamics. In all cases, the model gave good predictions of execution times, with average percentage prediction errors of less than 20%.
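As a hedged illustration of the rational-polynomial idea (not the authors' actual model form or measurements), execution time can be fitted as a ratio of polynomials in problem size and load by multiplying through by the denominator, which makes the parameters appear linearly; the specific model form and the synthetic data below are assumptions.

```python
import numpy as np

# Synthetic timings: execution time grows with problem size n and with
# background load L (illustrative data, not the paper's measurements)
rng = np.random.default_rng(0)
n = rng.uniform(100.0, 1000.0, 50)     # problem sizes
L = rng.uniform(0.0, 4.0, 50)          # load averages during the runs
T = 0.01 * n * (1.0 + L) / (1.0 + 0.002 * n) + rng.normal(0.0, 0.05, 50)

# Rational model T ~ (a0 + a1*n + a2*n*L) / (1 + b1*n).  Multiplying
# through by the denominator linearizes it:
#   a0 + a1*n + a2*n*L - b1*(n*T) = T,
# which ordinary least squares solves directly.
X = np.column_stack([np.ones_like(n), n, n * L, -n * T])
a0, a1, a2, b1 = np.linalg.lstsq(X, T, rcond=None)[0]

def predict(n_new, L_new):
    return (a0 + a1 * n_new + a2 * n_new * L_new) / (1.0 + b1 * n_new)
```

The linearization avoids a nonlinear solver entirely, at the cost of a slightly reweighted residual; for low-noise timing data the difference is minor.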
Abstract:
Purpose: To optimize the data-collection strategy for diffuse optical tomography and to obtain a set of independent measurements among the total measurements using the characteristics of the model-based data-resolution matrix. Methods: The data-resolution matrix is computed from the sensitivity matrix and the regularization scheme used in the reconstruction procedure, by matching the predicted data with the actual data. The diagonal values of the data-resolution matrix indicate the importance of a particular measurement, and the magnitude of the off-diagonal entries indicates the dependence among measurements. The choice of independent measurements is made based on the closeness of the diagonal value magnitudes to the off-diagonal entries. The reconstruction results obtained using all measurements were compared to those obtained using only the independent measurements, in both numerical and experimental phantom cases. A traditional singular value analysis was also performed for comparison with the proposed method. Results: The results indicate that choosing only independent measurements, based on data-resolution matrix characteristics, for the image reconstruction does not compromise the reconstructed image quality significantly, and in turn reduces the data-collection time associated with the procedure. When the same number of measurements (equivalent to the independent ones) was chosen at random, the reconstruction results had poor quality with major boundary artifacts. The number of independent measurements obtained using data-resolution matrix analysis is much higher than that obtained using singular value analysis. Conclusions: The data-resolution matrix analysis is able to provide the high level of optimization needed for effective data collection in diffuse optical imaging. The analysis itself is independent of the noise characteristics in the data, resulting in a universal framework for characterizing and optimizing a given data-collection strategy.
(C) 2012 American Association of Physicists in Medicine. http://dx.doi.org/10.1118/1.4736820
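The data-resolution idea can be sketched as follows. For a linearized forward model with sensitivity matrix A and Tikhonov regularization lam, the matrix mapping measured data to predicted data is D = A A^T (A A^T + lam I)^-1. The selection rule below (keep a measurement when its diagonal entry dominates its row's off-diagonal entries) is a simplified stand-in for the paper's criterion, and the sensitivity matrix is a random toy example.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 40, 100                     # measurements x image parameters
A = rng.normal(size=(m, n))        # toy sensitivity (Jacobian) matrix
lam = 0.1 * np.trace(A @ A.T) / m  # Tikhonov regularization parameter

# Data-resolution matrix: predicted data = D @ measured data for the
# regularized inverse, via A (A^T A + lam I)^-1 A^T = A A^T (A A^T + lam I)^-1
D = A @ A.T @ np.linalg.inv(A @ A.T + lam * np.eye(m))

# Simplified selection rule: keep measurement i when its diagonal entry
# dominates the off-diagonal entries of its row
diag = np.abs(np.diag(D))
offdiag = np.abs(D - np.diag(np.diag(D))).max(axis=1)
independent = np.where(diag > offdiag)[0]
```

Diagonal entries near one mark measurements the regularized model reproduces well on their own; large off-diagonal entries flag redundancy among measurements.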
Abstract:
Context-aware computing is useful for providing individualized services, focusing mainly on acquiring the surrounding context of the user. By comparison, only very little research has been completed on integrating context from different environments, despite its usefulness in diverse applications such as healthcare, M-commerce and tourist guide applications. In particular, one of the most important criteria for providing personalized service in a highly dynamic and constantly changing user environment is to develop a context model which aggregates context from different domains to infer the context of an entity at a more abstract level. Hence, the purpose of this paper is to propose a context model, based on cognitive aspects, to relate contextual information that better captures the observations of certain worlds of interest for a more sophisticated context-aware service. We developed a C-IOB (Context-Information, Observation, Belief) conceptual model to analyze context data from the physical, system, application, and social domains and to infer context at a more abstract level. The beliefs developed about an entity (person, place, thing) are primitive in most theories of decision making, so applications can use these beliefs, in addition to transaction histories, to provide intelligent service. We enhance the proposed context model by further classifying context information into three categories: well-defined, qualitative, and credible context information, to make the system more realistic for real-world implementation. The proposed model is deployed to assist an M-commerce application. The simulation results show that the service selection and service delivery of the system are high compared to a traditional system.
Abstract:
A novel Projection Error Propagation-based Regularization (PEPR) method is proposed to improve image quality in Electrical Impedance Tomography (EIT). The PEPR method defines the regularization parameter as a function of the projection error, given by the difference between the experimental measurements and the calculated data. The regularization parameter in the reconstruction algorithm is thus modified automatically according to the noise level in the measured data and the ill-posedness of the Hessian matrix. Resistivity imaging of practical phantoms is performed with PEPR in a Model Based Iterative Image Reconstruction (MoBIIR) algorithm as well as with the Electrical Impedance Diffuse Optical Reconstruction Software (EIDORS). The effect of the PEPR method is also studied with phantoms of different configurations and with different current injection methods. All the resistivity images reconstructed with the PEPR method are compared with those from the single step regularization (STR) and Modified Levenberg Regularization (LMR) techniques. The results show that the PEPR technique reduces the projection error and solution error in each iteration, for both simulated and experimental data and in both algorithms, and improves the reconstructed images in terms of contrast to noise ratio (CNR), percentage of contrast recovery (PCR), coefficient of contrast (COC) and diametric resistivity profile (DRP). (C) 2013 Elsevier Ltd. All rights reserved.
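A minimal sketch of the PEPR idea, on a toy linear problem rather than EIT: at each iteration the Tikhonov regularization parameter is set as a function of the current projection-error norm, so it relaxes automatically as the data misfit shrinks. The specific rule lam = ||r||^2 / m and all problem sizes below are assumptions, not the paper's formula.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 120, 60                       # measurements x resistivity pixels
J = rng.normal(size=(m, n))          # toy Jacobian (sensitivity) matrix
x_true = np.zeros(n)
x_true[10:20] = 1.0                  # a block "inhomogeneity"
v_meas = J @ x_true + rng.normal(0.0, 0.01, m)   # noisy boundary data

x = np.zeros(n)
for it in range(30):
    r = v_meas - J @ x               # projection error
    # PEPR-style rule (illustrative): regularization parameter taken as
    # a function of the projection-error norm, so early iterations are
    # heavily damped and later ones barely regularized
    lam = np.dot(r, r) / m
    dx = np.linalg.solve(J.T @ J + lam * np.eye(n), J.T @ r)
    x = x + dx

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

The same update with a fixed lam either over-smooths late iterations or under-damps early ones; tying lam to the residual is what lets the scheme adapt to the noise level without manual tuning.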
Abstract:
We performed Gaussian network model (GNM) based normal mode analysis of the three-dimensional structures of multiple active and inactive forms of protein kinases. In 14 different kinases, a larger number of residues (1095) show higher structural fluctuations in inactive states than in active states (525), suggesting that, in general, the mobility of inactive states is higher than that of active states. This statistically significant difference is consistent with higher crystallographic B-factors and conformational energies for inactive than for active states, suggesting lower stability of the inactive forms. Only a small number of inactive conformations with the DFG motif in the "in" state were found to have fluctuation magnitudes comparable to the active conformation. Our study therefore reports, for the first time, intrinsically higher structural fluctuation for almost all inactive conformations compared to the active forms. Regions with higher fluctuations in the inactive states are often localized to the αC-helix, αG-helix and activation loop, which are involved in regulation and/or in structural transitions between active and inactive states. Further analysis of 476 kinase structures involved in interactions with another domain/protein showed that many of the regions with higher inactive-state fluctuation correspond to contact interfaces. We also performed extensive GNM analysis of (i) insulin receptor kinase bound to another protein and (ii) holo and apo forms of active and inactive conformations, followed by multi-factor analysis of variance. We conclude that binding of small molecules or other domains/proteins reduces the extent of fluctuation irrespective of the active or inactive form. Finally, we show that the computed fluctuations serve as a useful input to predict the functional state of a kinase.
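The GNM computation behind such fluctuation comparisons can be sketched as follows: build the Kirchhoff (connectivity) matrix from residue contacts within a distance cutoff, then read mean-square fluctuations off the diagonal of its pseudo-inverse. The random-walk "chain" below stands in for real C-alpha coordinates; the 7 Å cutoff is a commonly used GNM value.

```python
import numpy as np

rng = np.random.default_rng(3)
# Toy "protein": a compact random chain of 100 points standing in for
# C-alpha positions (angstrom-like units)
coords = np.cumsum(rng.normal(0.0, 1.5, size=(100, 3)), axis=0)

cutoff = 7.0  # commonly used GNM contact cutoff, in angstroms

# Kirchhoff (connectivity) matrix: -1 for contacting residue pairs,
# node degrees on the diagonal (a graph Laplacian)
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
K = -(d < cutoff).astype(float)
np.fill_diagonal(K, 0.0)
np.fill_diagonal(K, -K.sum(axis=1))

# Mean-square fluctuations are proportional to the diagonal of the
# pseudo-inverse (pinv discards the zero mode of the Laplacian)
msf = np.diag(np.linalg.pinv(K))
```

Contrasting `msf` profiles computed from an active-state and an inactive-state structure of the same kinase is the kind of comparison the study draws, and residue-wise differences localize the flexible regions.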
Abstract:
Electronic monitoring of perimeters plays a vital role in homeland security and in the management of traffic and of human-wildlife conflict. This paper reports the design and development of an optical beam-interruption-based ranging and profiling sensor for monitoring perimeters. The developed sensor system can determine the distance of an object from the sensing units and its temporal height profile as the object crosses the system. Together, these quantities can also be used to classify the object and to determine its speed. The sensor is designed, fabricated, and evaluated. The design enables compact construction, high sensitivity, and low measurement crosstalk. The evaluation demonstrates accuracy better than 98.5% in the determination of height and over 94% in the determination of the distance of an object from the sensing units. Finally, a strategy is proposed to classify objects based on the obtained height profiles. The strategy is demonstrated to correctly classify objects despite differences in their speed and in the location at which they cross the system.
Abstract:
In this paper an explicit guidance law for the powered descent phase of a soft lunar landing is presented. The descent trajectory, expressed in polynomial form, is fixed based on the boundary conditions imposed by the precise soft-landing mission. Adopting an inverse-model-based approach, the guidance command is computed from the known spacecraft trajectory. The guidance formulation ensures the vertical orientation of the spacecraft during touchdown. A closed-form relation for the final flight time is also proposed. The final time is expressed as a function of the initial position and velocity of the spacecraft (at the start of descent) and also depends on the desired landing site. To ensure a minimum-fuel descent, the proposed explicit method is extended to an optimal guidance formulation. The effectiveness of the proposed guidance laws is demonstrated with simulation results.
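The inverse-model idea can be sketched in one dimension: fix an altitude polynomial from the boundary conditions, then read the thrust-acceleration command off its second derivative plus gravity compensation. A quintic in altitude only, with soft-touchdown conditions h(tf) = h'(tf) = h''(tf) = 0 and a prescribed tf, is an assumption for illustration; the paper's trajectory and closed-form final-time relation are more general.

```python
import numpy as np

g_moon = 1.62  # lunar surface gravity, m/s^2

def basis_row(ti, order):
    """Row of the boundary-condition matrix for the order-th derivative
    of h(t) = sum_k c_k t^k (k = 0..5), evaluated at t = ti."""
    row = np.zeros(6)
    for k in range(order, 6):
        coef = 1.0
        for j in range(order):
            coef *= (k - j)                 # k (k-1) ... (k-order+1)
        row[k] = coef * ti ** (k - order)
    return row

def quintic_descent(h0, v0, a0, tf):
    """Quintic altitude coefficients meeting six boundary conditions:
    h(0)=h0, h'(0)=v0, h''(0)=a0, h(tf)=h'(tf)=h''(tf)=0."""
    M = np.array([basis_row(0.0, 0), basis_row(0.0, 1), basis_row(0.0, 2),
                  basis_row(tf, 0), basis_row(tf, 1), basis_row(tf, 2)])
    b = np.array([h0, v0, a0, 0.0, 0.0, 0.0])
    return np.linalg.solve(M, b)

def accel_command(c, t):
    # Inverse-model guidance: required thrust acceleration equals the
    # trajectory's second derivative plus gravity compensation
    h_dd = sum(k * (k - 1) * c[k] * t ** (k - 2) for k in range(2, 6))
    return h_dd + g_moon

# Descent from 2000 m, sinking at 50 m/s, to touchdown in 60 s
c = quintic_descent(h0=2000.0, v0=-50.0, a0=0.0, tf=60.0)
```

Because h''(tf) = 0, the command at touchdown reduces to pure gravity compensation, consistent with a vertical, zero-residual-velocity landing.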
Abstract:
A numerical model has been developed for simulating the rapid solidification processing (RSP) of Ni-Al alloys in order to predict the resultant phase composition semi-quantitatively during RSP. The present model couples a method for evaluating the initial nucleation temperature, based on time-dependent nucleation theory, with a model for calculating the solidified volume fraction, based on the kinetics of dendrite growth in undercooled melts. The model has been applied to predict the cooling curve and the volume fractions of solidified phases of a Ni-Al alloy in planar flow casting. The numerical results agree semi-quantitatively with the experimental results.