20 results for Computational modelling by homology
at Cochin University of Science and Technology
Abstract:
We propose a novel, simple, efficient and distribution-free re-sampling technique for developing prediction intervals for returns and volatilities following ARCH/GARCH models. In particular, our key idea is to employ a Box–Jenkins linear representation of an ARCH/GARCH equation and then to adapt a sieve bootstrap procedure to the nonlinear GARCH framework. Our simulation studies indicate that the new re-sampling method provides sharp and well calibrated prediction intervals for both returns and volatilities while reducing computational costs by up to 100 times, compared to other available re-sampling techniques for ARCH/GARCH models. The proposed procedure is illustrated by an application to Yen/U.S. dollar daily exchange rate data.
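As background to the linear representation referred to above, a GARCH(1,1) model can be rewritten as an ARMA(1,1) model in the squared returns, which is the kind of Box–Jenkins form to which a sieve bootstrap can be applied (shown here only as a standard illustration, not the authors' exact construction):

    $r_t = \sigma_t \varepsilon_t, \qquad \sigma_t^2 = \omega + \alpha\, r_{t-1}^2 + \beta\, \sigma_{t-1}^2,$

and, writing $u_t = r_t^2 - \sigma_t^2$ (a martingale difference sequence),

    $r_t^2 = \omega + (\alpha + \beta)\, r_{t-1}^2 + u_t - \beta\, u_{t-1},$

so the squared returns follow an ARMA(1,1) process whose fitted residuals can be re-sampled.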
Abstract:
In this article, techniques are presented for faster evolution of wavelet lifting coefficients for fingerprint image compression (FIC). In addition to increasing the computational speed by 81.35%, the evolved coefficients performed much better than the coefficients reported in the literature. Generally, full-size images are used for evolving wavelet coefficients, which is time consuming. To overcome this, in this work, wavelets were evolved with resized, cropped, resized-average and cropped-average images. On comparing the peak signal-to-noise ratios (PSNR) offered by the evolved wavelets, it was found that the cropped images outperformed the resized images and are on par with the results reported to date. Wavelet lifting coefficients evolved from an average of four 256 × 256 centre-cropped images took less than 1/5th of the evolution time reported in the literature and produced an improvement of 1.009 dB in average PSNR. Improvements in average PSNR were observed for other compression ratios (CR) and for degraded images as well. The proposed technique also gave better PSNR at various bit rates with the set partitioning in hierarchical trees (SPIHT) coder. These coefficients performed well with other fingerprint databases as well.
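For reference, the PSNR figures quoted above use the standard measure for 8-bit images; a minimal sketch of how PSNR and a 256 × 256 centre crop could be computed is given below (Python/NumPy, illustrative only; the array names are hypothetical and this is not the authors' code).

    import numpy as np

    def psnr(original, reconstructed, peak=255.0):
        # Peak signal-to-noise ratio in dB for 8-bit images.
        mse = np.mean((original.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
        return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

    def centre_crop(image, size=256):
        # Extract a size x size window around the image centre.
        h, w = image.shape[:2]
        top, left = (h - size) // 2, (w - size) // 2
        return image[top:top + size, left:left + size]

    # Average of four centre-cropped fingerprint images, as used for evolving the
    # coefficients; 'images' is assumed to be a list of four greyscale arrays.
    # average = np.mean([centre_crop(im) for im in images], axis=0)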
Abstract:
A data centre is a centralized repository, either physical or virtual, for the storage, management and dissemination of data and information organized around a particular body, and it is the nerve centre of the present IT revolution. Data centres are expected to serve uninterruptedly round the year, and in doing so they consume enormous amounts of energy. The tremendous growth in demand from the IT industry has made it customary to develop newer technologies for the better operation of data centres. Energy conservation activities in data centres concentrate mainly on the air conditioning system, since it is the major mechanical sub-system and accounts for a considerable share of the total power consumption of the data centre. The data centre energy metric is best represented by the power utilization efficiency (PUE), defined as the ratio of the total facility power to the IT equipment power. Its value is always greater than one, and a large PUE indicates that the sub-systems draw more power from the facility, so the data centre performs poorly from the standpoint of energy conservation. PUE values of 1.4 to 1.6 are achievable by proper design and management techniques. Optimizing the air conditioning system offers an enormous opportunity for bringing down the PUE value. The air conditioning system can be optimized by two approaches, namely thermal management and air flow management. Thermal management systems have been introduced by some companies, but they are highly sophisticated and costly and are not yet reflected in common rules of thumb.
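As a worked illustration of the definition above (the numbers are illustrative, not taken from the study):

    $\mathrm{PUE} = \dfrac{P_{\text{total facility}}}{P_{\text{IT equipment}}}$

For example, a facility drawing 700 kW in total to support 500 kW of IT load has $\mathrm{PUE} = 700/500 = 1.4$, at the lower end of the 1.4–1.6 range cited above.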
Abstract:
An alkaline protease gene (Eap) was isolated for the first time from a marine fungus, Engyodontium album. Eap consists of an open reading frame of 1,161 bp encoding a prepropeptide of 387 amino acids with a calculated molecular mass of 40.923 kDa. Homology comparison of the deduced amino acid sequence of Eap with other known proteins indicated that Eap encodes an extracellular protease belonging to the subtilase family of serine proteases (Family S8). A comparative homology model of the Engyodontium album protease (EAP) was developed using the crystal structure of proteinase K. The model revealed that EAP has broad substrate specificity similar to proteinase K, with a preference for bulky hydrophobic residues at P1 and P4. EAP is also suggested to have two disulfide bonds and more than two Ca2+ binding sites in its 3D structure, both of which are assumed to contribute to the thermostable nature of the protein.
Abstract:
The thesis gives a general introduction to the topic, and the spatial and temporal variation of the surface meteorological parameters over India is dealt with in detail. The general pattern of the winds over the region in different seasons, and the generation and movement of the thermally and dynamically driven local wind systems of the Western Ghats region, have been studied. The modification of the prevailing winds over the region by the Palghat Gap, and its effect on the mouth regions of the gap, is analysed in depth. The thesis presents information on the climatic elements of the mountain region, such as energy budgets, rainfall, evaporation and condensation, and the variation of the heat fluxes over the region. The impact of orography is studied with a different approach: this type of hypothetical study gives more insight into the control exerted by the mountains on the distribution of meteorological parameters over the study region and helps to quantify the impact of the mountains in modifying the weather and climate of the region. A detailed study of the hydro-meteorological aspects of the main river basins of the region should also be included in the climatic studies for a complete understanding of the weather and climate over the region.
Abstract:
The primary aim of the present study is to acquire a large amount of gravity data, to prepare gravity maps and to interpret the data in terms of the crustal structure below the Bavali shear zone and adjacent regions of northern Kerala. Gravity modeling is essentially a tool for obtaining knowledge of the subsurface extension of the exposed geological units and their structural relationship with the surroundings. The study is expected to throw light on the nature of the shear zone, the crustal configuration below the high-grade granulite terrain and the tectonics operating during geological times in the region. The Bavali shear is manifested in the gravity profiles by a steep gravity gradient. The gravity models indicate that the Bavali shear coincides with a steep plane separating two contrasting crustal densities that extends beyond a depth of 30 km, possibly down to the Moho, justifying its interpretation as a mantle fault. It is difficult to construct a generalized model of crustal evolution in terms of its varied manifestations using gravity data alone. However, the data constrain several aspects of crustal evolution and provide insights into some of the major events.
Abstract:
In this paper, we study a k-out-of-n system with a single server who also provides service to external customers. The system consists of two parts: (i) a main queue consisting of customers (failed components of the k-out-of-n system) and (ii) a pool (of finite capacity M) of external customers, together with an orbit for external customers who find the pool full. An external customer who finds the pool full on arrival joins the orbit with probability γ and leaves the system forever with probability 1 − γ. An orbital customer who finds the pool full at an epoch of repeated attempt returns to the orbit with probability δ (< 1) and leaves the system forever with probability 1 − δ. We compute the steady-state system size probabilities. Several performance measures are computed and numerical illustrations are provided.
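The steady-state probabilities mentioned above are, in general, obtained from the balance equations of the underlying Markov chain. The sketch below (Python/NumPy, purely illustrative and not the authors' algorithm; the generator matrix Q is a hypothetical small example) shows the generic numerical step of solving πQ = 0 subject to the probabilities summing to one.

    import numpy as np

    def stationary_distribution(Q):
        # Solve pi Q = 0 with sum(pi) = 1 for a CTMC generator matrix Q.
        n = Q.shape[0]
        A = np.vstack([Q.T, np.ones(n)])   # balance equations plus normalisation
        b = np.zeros(n + 1)
        b[-1] = 1.0
        pi, *_ = np.linalg.lstsq(A, b, rcond=None)
        return pi

    # Hypothetical 3-state generator (each row sums to zero), just to exercise the routine.
    Q = np.array([[-2.0, 1.5, 0.5],
                  [ 1.0, -3.0, 2.0],
                  [ 0.5, 0.5, -1.0]])
    print(stationary_distribution(Q))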
Abstract:
Sharing information with those in need of it has always been an idealistic goal of networked environments. With the proliferation of computer networks, information is so widely distributed among systems that it is imperative to have well-organized schemes for retrieval and also for discovery. This thesis investigates the problems associated with such schemes and suggests a software architecture aimed at achieving meaningful discovery. The usage of information elements as a modelling base for efficient information discovery in distributed systems is demonstrated with the aid of a novel conceptual entity called the infotron.

The investigations focus on distributed systems and their associated problems. The study was directed towards identifying a suitable software architecture and incorporating it in an environment where information growth is phenomenal, so that a proper mechanism for carrying out information discovery becomes feasible. An empirical study undertaken with the aid of an election database of geographically distributed constituencies provided the insights required. This is manifested in the Election Counting and Reporting Software (ECRS) system, which is essentially distributed in nature and is designed to prepare reports for district administrators about the election counting process and to generate other miscellaneous statutory reports. Most distributed systems of the nature of ECRS normally possess a "fragile architecture" that makes them amenable to collapse with the occurrence of minor faults. This is resolved with the help of the proposed penta-tier architecture, which contains five different technologies at the different tiers of the architecture. The results of the experiments conducted, and their analysis, show that such an architecture helps to keep the different components of the software intact and impermeable to internal or external faults.

The architecture thus evolved needed a mechanism to support information processing and discovery. This necessitated the introduction of the novel concept of infotrons. Further, when a computing machine has to perform any meaningful extraction of information, it is guided by what is termed an infotron dictionary. Another empirical study was carried out to find out which of the two prominent markup languages, HTML and XML, is better suited for the incorporation of infotrons. A comparative study of 200 documents in HTML and XML was undertaken; the result was in favor of XML.

The concepts of the infotron and the infotron dictionary were applied to implement an Information Discovery System (IDS). IDS is essentially a system that starts with the infotron(s) supplied as clue(s) and brews the information required to satisfy the need of the information discoverer, utilizing the documents available at its disposal (the information space). The various components of the system and their interactions follow the penta-tier architectural model and can therefore be considered fault-tolerant. IDS is generic in nature, and its characteristics and specifications were drawn up accordingly. Many subsystems interact with the multiple infotron dictionaries maintained in the system.

In order to demonstrate the working of the IDS, and to discover information without modifying a typical Library Information System (LIS), an Information Discovery in Library Information System (IDLIS) application was developed. IDLIS is essentially a wrapper for the LIS, which maintains all the databases of the library. The purpose was to demonstrate that the functionality of a legacy system can be enhanced by augmenting it with IDS, leading to an information discovery service. IDLIS demonstrates IDS in action and proves that any legacy system can be effectively augmented with IDS to provide the additional functionality of an information discovery service. Possible applications of IDS and the scope for further research in the field are also covered.
Abstract:
The current study is aimed at the development of a theoretical simulation tool based on the Discrete Element Method (DEM) to interpret the granular dynamics of the solid bed in the cross-section of a horizontal rotating cylinder at the microscopic level, and subsequently to apply this model to establish the transition behaviour, mixing and segregation. The simulation of granular motion developed in this work is based on solving Newton's equations of motion for each particle in the granular bed, subject to collisional forces, external forces and boundary forces. At every instant of time the forces are tracked, and the positions, velocities and accelerations of each particle are updated. The software code for the simulation is written in VISUAL FORTRAN 90. After checking the validity of the code with special tests, it is used to investigate the transition behaviour of granular solids motion in the cross-section of a rotating cylinder for various rotational speeds and fill fractions.

This work is hence directed towards a theoretical investigation, based on the Discrete Element Method (DEM), of the motion of granular solids in the radial direction of the horizontal cylinder, to elucidate the relationship between the operating parameters of the rotating cylinder geometry and the physical properties of the granular solid. The operating parameters of the rotating cylinder include the various rotational velocities of the cylinder and the volumetric fill. The physical properties of the granular solids include particle sizes, densities, stiffness coefficients and coefficients of friction. Further, the work highlights the fundamental basis for the important phenomena of the system, namely: (i) the different modes of solids motion observed in a transverse cross-section of the rotating cylinder for various rotational speeds, (ii) the radial mixing of the granular solid in terms of active layer depth, (iii) the rate coefficient of mixing as well as the transition behaviour in terms of the bed turnover time and rotational speed, and (iv) the segregation mechanisms resulting from differences in the size and density of particles.

The transition behaviour, involving six different modes of motion of the granular solid bed, is quantified in terms of the Froude number, and the results obtained are validated against experimental and theoretical results reported in the literature. The transition from slumping to rolling mode is quantified using the bed turnover time, and a linear relationship is established between the bed turnover time and the inverse of the rotational speed of the cylinder, as predicted by Davidson et al. [2000]. The effects of the rotational speed, fill fraction and coefficient of friction on the dynamic angle of repose are presented and discussed. The variation of the active layer depth with respect to fill fraction and rotational speed has been investigated. The results obtained through simulation are compared with the experimental results reported by Van Puyvelde et al. [2000] and Ding et al. [2002].

The theoretical model has been further extended to study the mixing and segregation in the transverse direction for different particle sizes and their size ratios. The effect of fill fraction and rotational speed on the transverse mixing behaviour is presented in the form of a mixing index and mixing kinetics curves. The segregation pattern obtained by simulating the granular solid bed at different rotational speeds of the cylinder is presented in both graphical and numerical forms. The segregation behaviour of the granular solid bed with respect to particle size, density and volume fraction of each particle size has been investigated. Several important macroscopic parameters characterising segregation, such as the mixing index, percolation index and segregation index, have been derived from the simulation tool based on first principles developed in this work.
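As an illustration of the kind of per-particle update a DEM simulation performs, a minimal sketch in Python is given below (not the VISUAL FORTRAN 90 code described above; the linear spring-dashpot contact law and all parameter values are assumptions for illustration, not the thesis model).

    import numpy as np

    def dem_step(pos, vel, radii, masses, dt, k=1.0e4, c=5.0, g=9.81):
        # One explicit time step: accumulate pairwise contact forces plus gravity,
        # then integrate Newton's equations of motion for every particle.
        n = len(pos)
        forces = np.zeros_like(pos)
        forces[:, 1] -= masses * g                  # gravity acting downwards
        for i in range(n):
            for j in range(i + 1, n):
                rij = pos[j] - pos[i]
                dist = np.linalg.norm(rij)
                overlap = radii[i] + radii[j] - dist
                if overlap > 0.0 and dist > 0.0:    # particles in contact
                    normal = rij / dist
                    rel_vn = np.dot(vel[j] - vel[i], normal)
                    fn = k * overlap - c * rel_vn   # linear spring-dashpot normal force
                    forces[i] -= fn * normal
                    forces[j] += fn * normal
        acc = forces / masses[:, None]
        vel = vel + acc * dt                        # semi-implicit Euler update
        pos = pos + vel * dt
        return pos, vel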
Abstract:
The hazards associated with major accident hazard (MAH) industries are fire, explosion and toxic gas releases. Of these, toxic gas release is the worst, as it has the potential to cause extensive fatalities. Qualitative and quantitative hazard analyses are essential for the identification and quantification of the hazards associated with chemical industries. This research work presents the results of a consequence analysis carried out to assess the damage potential of the hazardous material storages in an industrial area of central Kerala, India. A survey carried out in the major accident hazard (MAH) units in the industrial belt revealed that the major hazardous chemicals stored by the various industrial units are ammonia, chlorine, benzene, naphtha, cyclohexane, cyclohexanone and LPG. The damage potential of the above chemicals is assessed using consequence modelling. Modelling of pool fires for naphtha, cyclohexane, cyclohexanone, benzene and ammonia is carried out using the TNO model. Vapor cloud explosion (VCE) modelling of LPG, cyclohexane and benzene is carried out using the TNT equivalent model. Boiling liquid expanding vapor explosion (BLEVE) modelling of LPG is also carried out. Dispersion modelling of toxic chemicals like chlorine, ammonia and benzene is carried out using the ALOHA air quality model. Threat zones for the different hazardous storages are estimated based on the consequence modelling. The distance covered by the threat zone was found to be maximum for chlorine release from a chlor-alkali industry located in the area. The results of the consequence modelling are useful for the estimation of individual risk and societal risk in the above industrial area. Vulnerability assessment is carried out using probit functions for toxic, thermal and pressure loads. Individual and societal risks are also estimated at different locations. Mapping of the threat zones due to different incident outcome cases from the different MAH industries is done with the help of ArcGIS.

Fault Tree Analysis (FTA) is an established technique for hazard evaluation. This technique has the advantage of being both qualitative and quantitative, provided the probabilities and frequencies of the basic events are known. However, it is often difficult to estimate precisely the failure probability of the components, due to insufficient data or the vague characteristics of the basic events. It has been reported that the availability of failure probability data pertaining to local conditions is surprisingly limited in India. This thesis outlines the generation of failure probability values of the basic events that lead to the release of chlorine from the storage and filling facility of a major chlor-alkali industry located in the area, using expert elicitation and fuzzy logic. Sensitivity analysis has been done to evaluate the percentage contribution of each basic event that could lead to chlorine release. Two-dimensional fuzzy fault tree analysis (TDFFTA) has been proposed for balancing the hesitation factor involved in expert elicitation.
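For context, vulnerability assessment with probit functions typically converts a dose or load into a probit value Y and then into a probability of a given harm level via the standard normal distribution. The sketch below (Python, illustrative only; the coefficients a, b and n are hypothetical placeholders, not the substance-specific values used in the thesis) shows that generic step.

    from math import log
    from statistics import NormalDist

    def probit_to_probability(Y):
        # Probability of the harm level corresponding to probit value Y.
        return NormalDist().cdf(Y - 5.0)

    def toxic_probit(concentration_ppm, exposure_min, a=-5.0, b=1.0, n=2.0):
        # Generic toxic-load probit Y = a + b * ln(C**n * t); the coefficients are
        # substance-specific and the defaults here are placeholders only.
        return a + b * log(concentration_ppm ** n * exposure_min)

    Y = toxic_probit(concentration_ppm=60.0, exposure_min=30.0)
    print(probit_to_probability(Y))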
Abstract:
Department of Statistics, Cochin University of Science and Technology
Abstract:
This thesis, Reliability Modelling and Analysis in Discrete Time, presents some concepts and models useful in the analysis of discrete lifetime data. The present study consists of five chapters. In Chapter II we take up the derivation of some general results useful in reliability modelling that involve two-component mixtures. Expressions for the failure rate, mean residual life and second moment of residual life of the mixture distributions, in terms of the corresponding quantities of the component distributions, are investigated. Some applications of these results are also pointed out. The role of the geometric, Waring and negative hypergeometric distributions as models of life lengths in the discrete time domain has already been discussed. While describing various reliability characteristics, it was found that they can often be considered as a class. The applicability of these models to single populations naturally extends to populations composed of sub-populations, making mixtures of these distributions worth investigating. Accordingly, the general properties, various reliability characteristics and characterizations of these models are discussed in Chapter III. Inference of the parameters of a mixture distribution is usually a difficult problem, because the mass function of the mixture is a linear function of the component masses, which makes manipulation of the likelihood equations, least-squares function, etc., and the resulting computations, very difficult. We show that one of our characterizations helps in inferring the parameters of the geometric mixture without computational hazards. As mentioned in the review of results in the previous sections, partial moments have not been studied extensively in the literature, especially in the case of discrete distributions. Chapters IV and V deal with descending and ascending partial factorial moments. Apart from studying their properties, we prove characterizations of distributions by functional forms of partial moments and establish recurrence relations between successive moments for some well-known families. It is further demonstrated that partial moments are equally efficient and convenient, compared with many of the conventional tools, for resolving practical problems in reliability modelling and analysis. The study concludes by indicating some new problems that surfaced during the course of the present investigation, which could be the subject of future work in this area.
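As an illustration of the kind of relationship referred to above (a standard identity, stated here only as background rather than as the thesis's exact expressions): for a two-component mixture with mixing proportion $p$, component survival functions $S_1, S_2$ and failure rates $h_1, h_2$ in discrete time, the mixture failure rate is a weighted average of the component failure rates,

    $h(x) = w(x)\, h_1(x) + \bigl(1 - w(x)\bigr)\, h_2(x), \qquad w(x) = \dfrac{p\, S_1(x)}{p\, S_1(x) + (1-p)\, S_2(x)},$

with the weights $w(x)$ themselves depending on $x$ through the component survival functions.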
Abstract:
This work identifies the importance of plenum pressure for the performance of the data centre. The methodology currently followed in the industry treats the pressure drop across the tile as a dependent variable, but it is shown in this work that it is in fact the single independent variable responsible for the entire flow dynamics in the data centre, and any design or assessment procedure must consider the pressure difference across the tile as the primary independent variable. This concept is further explained by studies on the effect of dampers on the flow characteristics. The dampers were found to introduce an additional pressure drop, thereby reducing the effective pressure drop across the tile. The effect of a damper is to change the flow in both quantitative and qualitative terms, but only the quantitative effect is usually considered when a damper is used as an aid for capacity control. Results from the present study suggest that the use of dampers should be avoided in data centres, and that well-designed tiles giving the required flow rates should be used in the appropriate locations instead. In the present study the effect of hot air recirculation is also studied, with suitable assumptions. It is identified that the pressure drop across the tile is the dominant parameter governing recirculation. The rack suction pressure of the hardware, together with the pressure drop across the tile, determines the point of recirculation in the cold aisle. The positioning of hardware in the racks plays an important role in controlling the recirculation point. The present study is thus helpful in the design of data centre air flow based on the theory of jets, and the air flow can be modelled both quantitatively and qualitatively from the results.
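For reference, the link between the tile pressure drop and the airflow it delivers is commonly modelled with an orifice-type relation (the formula and the numbers below are a generic illustration, not results from this study):

    $Q = C_d\, A_{\text{open}} \sqrt{\dfrac{2\,\Delta p}{\rho}}$

For example, a 600 mm × 600 mm tile with 25% open area ($A_{\text{open}} = 0.09\ \mathrm{m^2}$), a discharge coefficient $C_d \approx 0.65$, air density $\rho = 1.2\ \mathrm{kg\,m^{-3}}$ and a plenum-to-room pressure difference $\Delta p = 25\ \mathrm{Pa}$ gives $Q \approx 0.65 \times 0.09 \times \sqrt{2 \times 25 / 1.2} \approx 0.38\ \mathrm{m^3\,s^{-1}}$ (about 1360 m³/h), showing how directly the delivered flow depends on the effective pressure drop across the tile.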
Abstract:
Tsunamis are water waves generated by a sudden vertical displacement of the water surface. They are generated in the ocean by disturbances associated with seismic activity, undersea volcanic eruptions, submarine landslides, nuclear explosions or meteorite impacts, and they travel into coastal bays, gulfs, estuaries and rivers. These waves travel as gravity waves with a velocity dependent on the water depth. The term tsunami is Japanese and means harbour (tsu) and wave (nami); the name arose because such waves often develop resonant phenomena in harbours after offshore earthquakes.
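The depth dependence mentioned above is the standard shallow-water result (stated here as general background, with illustrative numbers):

    $c = \sqrt{g\, h}$

so that in the open ocean, with $h \approx 4000\ \mathrm{m}$ and $g = 9.81\ \mathrm{m\,s^{-2}}$, $c \approx 198\ \mathrm{m\,s^{-1}}$ (roughly 710 km/h), while the wave slows dramatically as it enters shallow coastal water.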