7 results for Mathematical and Computer Modelling

at Cochin University of Science


Relevance:

100.00%

Publisher:

Abstract:

An alkaline protease gene (Eap) was isolated for the first time from the marine fungus Engyodontium album. Eap consists of an open reading frame of 1,161 bp encoding a prepropeptide of 387 amino acids with a calculated molecular mass of 40.923 kDa. Homology comparison of the deduced amino acid sequence of Eap with other known proteins indicated that Eap encodes an extracellular protease belonging to the subtilase family of serine proteases (family S8). A comparative homology model of the Engyodontium album protease (EAP) was developed using the crystal structure of proteinase K. The model revealed that EAP has broad substrate specificity similar to proteinase K, with a preference for bulky hydrophobic residues at P1 and P4. EAP is also suggested to have two disulfide bonds and more than two Ca2+ binding sites in its 3D structure, both of which are assumed to contribute to the thermostable nature of the protein.
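As a quick consistency check on the reported numbers (a sketch under stated assumptions, not code from the study; the peptide string below is a hypothetical placeholder, not the real Eap sequence, and Biopython is assumed to be available):

```python
# Minimal sketch: codon arithmetic for the reported ORF, plus a molecular
# mass computation with Biopython's ProtParam. The sequence below is a
# hypothetical placeholder, NOT the real Eap sequence.
from Bio.SeqUtils.ProtParam import ProteinAnalysis

orf_bp = 1161
print(orf_bp // 3)                       # 387 codons -> 387 residues, as reported

peptide = "MKFLAVLSLLALAAA"              # placeholder sequence
mass_da = ProteinAnalysis(peptide).molecular_weight()
print(f"{mass_da / 1000:.3f} kDa")       # mass in kDa, cf. 40.923 kDa for EAP
```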

Relevance:

100.00%

Publisher:

Abstract:

In this paper, a family of bivariate distributions whose marginals are weighted distributions of the original variables is studied. The relationships between the failure rates of the derived and original models are obtained, and these relationships are used to provide characterizations of specific bivariate models.
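For orientation only (standard textbook definitions, not reproduced from the paper): a nonnegative weight function w tilts the original density, and the failure rate of the weighted model can be written against that of the original. The univariate case is shown; the bivariate construction applies a weight w(x, y) to the joint density in the same way.

```latex
% Standard weighted-distribution definitions (univariate case shown);
% f is the original density, \bar{F} its survival function, and w a
% nonnegative weight with finite mean.
f_w(x) = \frac{w(x)\, f(x)}{E[w(X)]}, \qquad
h(x)   = \frac{f(x)}{\bar{F}(x)}, \qquad
h_w(x) = h(x)\, \frac{w(x)\, \bar{F}(x)}{\int_x^{\infty} w(t)\, f(t)\, dt}.
```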

Relevance:

100.00%

Publisher:

Abstract:

Upgrading two widely used standard plastics, polypropylene (PP) and high density polyethylene (HDPE), and generating a variety of useful engineering materials based on their blends have been the main objectives of this study. The upgrading was effected using nanomodifiers and/or fibrous modifiers. PP and HDPE were selected for modification because of their attractive inherent properties and wide spectrum of use. Blending is an engineered method of producing new materials with tailor-made properties that combine the advantages of both constituents: PP contributes high tensile and flexural strength, while HDPE acts as an impact modifier in the resultant blend. Hence an optimized blend of PP and HDPE was selected as the matrix material for upgrading, and nanokaolinite clay and E-glass fibre were chosen as its modifiers.

In the first stage of the work, the mechanical, thermal, morphological, rheological, dynamic mechanical and crystallization characteristics of polymer nanocomposites prepared from the PP/HDPE blend and differently surface-modified nanokaolinite clays were analyzed. In the second stage, the effect of the simultaneous inclusion of nanokaolinite clay (both N100A and N100) and short glass fibres was investigated. The presence of the nanofiller increased the properties of the hybrid composites to a greater extent than those of the microcomposites.

In the last stage, micromechanical modeling of both the nano and hybrid composites was carried out to analyze their behavior under load-bearing conditions. These theoretical analyses indicate that the polymer-nanoclay interfacial characteristics partially converge to a state of perfect interfacial bonding (Takayanagi model) with an iso-stress (Reuss, inverse rule of mixtures) response. In the case of the hybrid composites, the experimental data follow the trend of the Halpin-Tsai model. This implies that the matrix and filler experience varying amounts of strain, and that the interfacial adhesion between filler and matrix, and also between the two fillers, plays a vital role in determining the modulus of the hybrid composites.

A significant observation from this study is that the high fibre loading normally required for efficient reinforcement of polymers can be substantially reduced when the nanofiller is present together with a much lower fibre content in the composite. Hybrid composites with both nanokaolinite clay and micron-sized E-glass fibre as reinforcements in a PP/HDPE matrix can generate a novel class of high-performance, cost-effective engineering materials.
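For reference, the micromechanical estimates named above are textbook formulas and can be sketched as follows (illustrative moduli and shape factor, not measured data from the thesis). Voigt and Reuss bound the composite modulus from above and below; Halpin-Tsai interpolates via a geometry factor.

```python
# Minimal sketch of classical micromechanical modulus estimates.
# E_f, E_m are filler and matrix moduli (GPa); v_f is filler volume fraction.
# Values below are illustrative only, not data from the study.

def voigt(E_f, E_m, v_f):
    """Iso-strain upper bound (rule of mixtures)."""
    return v_f * E_f + (1.0 - v_f) * E_m

def reuss(E_f, E_m, v_f):
    """Iso-stress lower bound (inverse rule of mixtures, IROM)."""
    return 1.0 / (v_f / E_f + (1.0 - v_f) / E_m)

def halpin_tsai(E_f, E_m, v_f, zeta):
    """Halpin-Tsai estimate; zeta is a geometry factor
    (often 2*(length/diameter) for aligned short fibres)."""
    eta = (E_f / E_m - 1.0) / (E_f / E_m + zeta)
    return E_m * (1.0 + zeta * eta * v_f) / (1.0 - eta * v_f)

if __name__ == "__main__":
    E_f, E_m, v_f = 72.0, 1.2, 0.15   # e.g. E-glass in a PP/HDPE-like matrix
    print(f"Voigt       : {voigt(E_f, E_m, v_f):6.2f} GPa")
    print(f"Reuss (IROM): {reuss(E_f, E_m, v_f):6.2f} GPa")
    print(f"Halpin-Tsai : {halpin_tsai(E_f, E_m, v_f, zeta=10.0):6.2f} GPa")
```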

Relevance:

100.00%

Publisher:

Abstract:

Cerebral glioma is the most prevalent primary brain tumor; gliomas are classified broadly into low and high grades according to the degree of malignancy. High grade gliomas are highly malignant and carry a poor prognosis, with patients surviving less than eighteen months after diagnosis. Low grade gliomas are slow growing, least malignant and respond better to therapy. To date, histological grading has been the standard technique for diagnosis, treatment planning and survival prediction.

The main objective of this thesis is to propose novel methods for the automatic extraction of low and high grade glioma and other brain tissues, grade detection techniques for glioma using conventional magnetic resonance imaging (MRI) modalities, and 3D modelling of glioma from segmented tumor slices in order to assess tumor growth rate. Two new methods are developed for extracting tumor regions, of which the second, named the Adaptive Gray level Algebraic set Segmentation Algorithm (AGASA), can also extract white matter and grey matter from T1 FLAIR and T2-weighted images. The methods were validated against manual ground-truth images and showed promising results. The developed methods were compared with the widely used fuzzy c-means clustering technique, and the robustness of the algorithm with respect to noise was checked at different noise levels.

Image texture can provide significant information on the (ab)normality of tissue, and this thesis extends this idea to tumour texture grading and detection. Based on thresholds of discriminant first-order and gray level co-occurrence matrix (GLCM) based second-order statistical features, three feature sets were formulated and a decision system was developed for grade detection of glioma from the conventional T2-weighted MRI modality. Quantitative performance analysis using the ROC curve showed 99.03% accuracy in distinguishing between advanced (aggressive) and early stage (non-aggressive) malignant glioma. The developed brain texture analysis techniques can improve the physician's ability to detect and analyse pathologies, leading to more reliable diagnosis and treatment of disease. The segmented tumors were also used for volumetric modelling, which gives an idea of tumor growth rate and can be used for assessing response to therapy and patient prognosis.
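As an illustration of the second-order texture step (a generic sketch using scikit-image, not the thesis' own code; the random array stands in for a tumor region of interest, and in older scikit-image releases the functions are spelled greycomatrix/greycoprops):

```python
# Minimal sketch: GLCM-based second-order texture features of the kind
# used for texture-based grade detection, via scikit-image.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(0)
roi = rng.integers(0, 64, size=(64, 64), dtype=np.uint8)  # stand-in for a tumor ROI

# GLCM at distance 1, four directions, 64 gray levels.
glcm = graycomatrix(roi, distances=[1],
                    angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                    levels=64, symmetric=True, normed=True)

features = {prop: graycoprops(glcm, prop).mean()   # average over directions
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
print(features)
```

A threshold-based decision system of the kind described would then compare such features against cut-offs learned from graded training cases.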

Relevance:

100.00%

Publisher:

Abstract:

The study of variable stars is an important topic of modern astrophysics. Since the advent of powerful telescopes and high-resolution CCDs, variable star data have been accumulating on the order of petabytes. This huge amount of data calls for many automated methods as well as human experts. This thesis is devoted to data analysis of variable star astronomical time series and hence belongs to the interdisciplinary field of astrostatistics.

For an observer on Earth, stars whose apparent brightness changes over time are called variable stars. The variation in brightness may be regular (periodic), quasi-periodic (semi-periodic) or irregular (aperiodic), and is caused by various mechanisms. In some cases the variation is due to internal thermo-nuclear processes; these are generally known as intrinsic variables. In other cases it is due to external processes, like eclipses or rotation; these are known as extrinsic variables. Intrinsic variables can be further grouped into pulsating variables, eruptive variables and flare stars, and extrinsic variables into eclipsing binary stars and chromospherical stars. Pulsating variables can again be classified into Cepheid, RR Lyrae, RV Tauri, Delta Scuti, Mira, etc. The eruptive or cataclysmic variables are novae, supernovae, etc., which occur rarely and are not periodic phenomena. Most of the other variations are periodic in nature.

Variable stars can be observed in many ways, such as photometry, spectrophotometry and spectroscopy. A sequence of photometric observations of a variable star produces time series data containing time, magnitude and error. The plot of a variable star's apparent magnitude against time is known as a light curve. If the time series data are folded on a period, the plot of apparent magnitude against phase is known as a phased light curve. The unique shape of the phased light curve is characteristic of each type of variable star, and one way to identify and classify a variable star is for an expert to inspect its phased light curve visually. For the last several years, automated algorithms have been used to classify groups of variable stars with the help of computers.

Research on variable stars can be divided into stages such as observation, data reduction, data analysis, modeling and classification. Modeling of variable stars helps to determine their short-term and long-term behaviour, to construct theoretical models (e.g., the Wilson-Devinney model for eclipsing binaries) and to derive stellar properties like mass, radius, luminosity, temperature, internal and external structure, chemical composition and evolution. Classification requires the determination of basic parameters like period, amplitude and phase, as well as some derived parameters. Of these, period is the most important, since wrong periods lead to sparse light curves and misleading information.

Time series analysis applies mathematical and statistical tests to data in order to quantify the variation, understand the nature of the time-varying phenomena, gain physical understanding of the system and predict its future behavior. Astronomical time series usually suffer from unevenly spaced time instants, varying error conditions and the possibility of large gaps. For ground-based observations this is due to the daily cycle of daylight and the weather conditions, while observations from space may suffer from the impact of cosmic ray particles.

Many large-scale astronomical surveys, such as MACHO, OGLE, EROS, ROTSE, PLANET, Hipparcos, MISAO, NSVS, ASAS, Pan-STARRS, Kepler, ESA, Gaia, LSST and CRTS, provide variable star time series data even though their primary intention is not variable star observation. The Center for Astrostatistics, Pennsylvania State University, was established to help the astronomical community with statistical tools for harvesting and analysing archival data. Most of these surveys release their data to the public for further analysis.

Many period search algorithms exist for astronomical time series analysis; they can be classified into parametric (assuming some underlying distribution for the data) and non-parametric (assuming no statistical model, Gaussian or otherwise) methods. Many of the parametric methods are based on variations of the discrete Fourier transform, such as the Generalised Lomb-Scargle periodogram (GLSP) by Zechmeister (2009) and Significant Spectrum (SigSpec) by Reegen (2007). Non-parametric methods include Phase Dispersion Minimisation (PDM) by Stellingwerf (1978) and the cubic spline method by Akerlof (1994). Even though most of these methods can be automated, none of them fully recovers the true periods. Wrong period detection can have several causes, such as power leakage to other frequencies, which arises from the finite total interval, finite sampling interval and finite amount of data. Another problem is aliasing, due to the influence of regular sampling. Spurious periods also appear because of long gaps, and power flow to harmonic frequencies is an inherent problem of Fourier methods. Hence obtaining the exact period of a variable star from its time series data remains a difficult problem for huge databases subjected to automation. As Matthew Templeton, AAVSO, states, “Variable star data analysis is not always straightforward; large-scale, automated analysis design is non-trivial”. Derekas et al. (2007) and Deb et al. (2010) state, “The processing of huge amount of data in these databases is quite challenging, even when looking at seemingly small issues such as period determination and classification”.

It would benefit the variable star astronomical community if basic parameters such as period, amplitude and phase could be obtained more accurately when huge time series databases are subjected to automation. In the present thesis work, the theories of four popular period search methods are studied, the strengths and weaknesses of these methods are evaluated by applying them to two survey databases, and finally a modified form of the cubic spline method is introduced to confirm the exact period of a variable star. For the classification of newly discovered variable stars and their entry into the “General Catalogue of Variable Stars” or other databases like the “Variable Star Index”, the characteristics of the variability have to be quantified in terms of variable star parameters.
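As an illustration of the period-search and phase-folding steps (a generic sketch with synthetic data, not the thesis' modified cubic spline method; astropy is assumed to be available):

```python
# Minimal sketch: find a candidate period in an unevenly sampled light
# curve with the Lomb-Scargle periodogram, then fold on the best period.
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(1)
true_period = 0.57                        # days; illustrative value
t = np.sort(rng.uniform(0, 100, 400))     # unevenly sampled epochs
mag = 12.0 + 0.3*np.sin(2*np.pi*t/true_period) + rng.normal(0, 0.02, t.size)

freq, power = LombScargle(t, mag).autopower(minimum_frequency=0.1,
                                            maximum_frequency=10.0)
best_period = 1.0 / freq[np.argmax(power)]

phase = (t / best_period) % 1.0           # phased light curve: plot mag vs phase
print(f"recovered period: {best_period:.4f} d (true: {true_period} d)")
```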

Relevance:

100.00%

Publisher:

Abstract:

Coordination among supply chain members is essential for better supply chain performance, and an effective way to improve coordination is to implement proper coordination mechanisms. The primary objective of this research is to study the performance of a multi-level supply chain while using selected coordination mechanisms separately, and in combination, under lost sale and backorder cases. The coordination mechanisms used in this study are price discount, delay in payment and different types of information sharing. Mathematical modelling and simulation modelling are used to analyse the performance of the supply chain under these mechanisms.

Initially, a three-level supply chain consisting of a supplier, a manufacturer and a retailer was used to study the combined effect of price discount and delay in payment on the performance (profit) of the supply chain using mathematical modelling. This study showed that implementing the individual mechanisms improves the performance of the supply chain compared with no coordination, and that when more than one mechanism is used in combination, performance in most cases improves further.

The three-level supply chain considered in the mathematical modelling was then extended to a three-level network supply chain consisting of four retailers, two wholesalers, and a manufacturer with an infinite part supplier. The performance of this network supply chain was analysed under both lost sale and backorder cases using simulation modelling with the same mechanisms, price discount and delay in payment, used in the mathematical modelling. This study also showed that the performance of the supply chain improves significantly when combinations of mechanisms are used, as obtained earlier, and that the effect (increase in profit) of delay in payment, and of the combination of price discount and delay in payment, on supply chain profit is relatively high in the lost sale case. Sensitivity analysis showed that the retailer's order cost plays a major role in the performance of the supply chain, as it decides the order quantities of the other players, and that supply chain profit changes proportionally with a change in the rate of return of any player. In the case of price discount, elasticity of demand is an important factor in improving the performance of the supply chain. It was also found that a change in the permissible delay in payment given by the seller to the buyer affects supply chain profit more than the delay in payment availed by the buyer from the seller.

In continuation of the above, the performance of a four-level supply chain consisting of a manufacturer, a wholesaler, a distributor and a retailer, with information sharing as the coordination mechanism, was studied under lost sale and backorder cases using a simulation game with live players. In this study, the best performance was obtained when sharing 'demand and supply chain performance', compared with the other seven types of information sharing, including the traditional method. The study also revealed that the effect of information sharing on supply chain performance is higher in the lost sale case than in the backorder case. In-depth analysis in this part of the study showed that a lack of information sharing need not always result in the bullwhip effect; instead, it produced a large increase in lost sales cost or backorder cost, which is also unfavorable for the supply chain. The overall analysis quantified the extent of improvement in supply chain performance under the different cases, and the sensitivity analysis revealed useful insights about the decision variables of the supply chain that should help supply chain management practitioners take appropriate decisions.
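As a toy illustration of the information-sharing comparison (a much-simplified two-level chain with illustrative parameters, not the thesis' four-level simulation game; how much sharing helps depends entirely on the chosen parameters):

```python
# Minimal sketch: a retailer-wholesaler chain under the lost sale case.
# The wholesaler plans either from the retailer's order stream (no sharing)
# or from end-customer demand (demand sharing). All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(7)
T, LEAD = 365, 2
demand = rng.poisson(20, T).astype(float)

def simulate(share_demand: bool) -> float:
    r_inv, w_inv = 60.0, 120.0
    w_pipe = [20.0] * LEAD           # wholesaler's inbound orders in transit
    orders = []                      # retailer order history (wholesaler's view)
    lost = 0.0
    for t in range(T):
        w_inv += w_pipe.pop(0)                       # wholesaler receives stock
        sales = min(r_inv, demand[t])                # retailer serves customers
        lost += demand[t] - sales                    # lost sale: unmet demand gone
        r_inv -= sales
        r_order = max(0.0, 60.0 - r_inv)             # retailer order-up-to 60
        orders.append(r_order)
        shipped = min(w_inv, r_order)                # wholesaler ships what it can
        w_inv -= shipped
        r_inv += shipped                             # retailer lead time ~ 0 here
        # planning signal: shared end demand, or the (distorted) order stream
        signal = demand[max(0, t-9):t+1] if share_demand else orders[-10:]
        target = float(np.mean(signal)) * (LEAD + 2)  # wholesaler base-stock level
        w_pipe.append(max(0.0, target - w_inv - sum(w_pipe)))
    return lost

for share in (False, True):
    print(f"demand sharing={share}: retailer lost sales = {simulate(share):.0f}")
```

Costing the lost-sales (or, with backlogging, backorder) units at the respective penalty rates gives the kind of cost comparison across sharing schemes that the study reports.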