40 results for problem of mediation


Relevance:

90.00%

Publisher:

Abstract:

Although steel is most commonly used as a reinforcing material in concrete due to its competitive cost and favorable mechanical properties, the corrosion of steel rebars reduces the life span of the structure and adds to maintenance costs. Many techniques have been developed in the recent past to reduce corrosion (galvanizing, epoxy coating, etc.), but none has proved an adequate solution to the corrosion problem. Apart from the use of fiber reinforced polymer (FRP) rebars, hybrid rebars consisting of both FRP and steel are also being tried to overcome the problem of steel corrosion. This paper evaluates the performance of hybrid rebars as longitudinal reinforcement in normal strength concrete beams. The hybrid rebars used in this study consist of glass fiber reinforced polymer (GFRP) strands of 2 mm diameter wound helically on a mild steel core of 6 mm diameter. GFRP stirrups have been used as shear reinforcement. An attempt has been made to evaluate the flexural and shear performance of beams with hybrid rebars in normal strength concrete, with and without polypropylene fibers added to the concrete matrix.
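For orientation, the flexural evaluation of such beams is typically benchmarked against the nominal moment capacity of a singly reinforced rectangular section; the relation below is a standard textbook sketch (not taken from the paper), and for hybrid rebars the steel yield stress f_y would be replaced by an effective stress reflecting the GFRP strands:

\[
a = \frac{A_s f_y}{0.85\, f'_c\, b}, \qquad M_n = A_s f_y \left( d - \frac{a}{2} \right),
\]

where A_s is the reinforcement area, f'_c the concrete compressive strength, b the beam width and d the effective depth.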

Relevance:

90.00%

Publisher:

Abstract:

Increasing amounts of plastic waste in the environment have become a problem of gigantic proportions. The case of linear low-density polyethylene (LLDPE) is especially significant, as it is widely used for packaging and other applications. This synthetic polymer is normally not biodegradable until it is degraded into low molecular mass fragments that can be assimilated by microorganisms. Blends of nonbiodegradable polymers and biodegradable commercial polymers such as poly(vinyl alcohol) (PVA) can facilitate a reduction in the volume of plastic waste when they undergo partial degradation; the remaining fragments then stand a greater chance of biodegrading in a much shorter span of time. In this investigation, LLDPE was blended with different proportions of PVA (5–30%) in a torque rheometer. Mechanical, thermal, and biodegradation studies were carried out on the blends. The biodegradability of LLDPE/PVA blends was studied in two environments, (1) a culture medium containing Vibrio sp. and (2) a soil environment, both over a period of 15 weeks. Blends exposed to the culture medium degraded more than those exposed to the soil environment. Changes in various properties of LLDPE/PVA blends before and after degradation were monitored using Fourier transform infrared spectroscopy, differential scanning calorimetry (DSC) for crystallinity, and scanning electron microscopy (SEM) for surface morphology, among other techniques. Percentage crystallinity decreased as the PVA content increased, and biodegradation resulted in an increase of crystallinity in the LLDPE/PVA blends. The results show that partial biodegradation of the blends occurred, holding promise for an eventual biodegradable product.
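The percentage crystallinity reported from DSC is conventionally computed from the melting endotherm; a standard relation (not quoted from the paper) is

\[
X_c\,(\%) = \frac{\Delta H_m}{w \cdot \Delta H_m^{\circ}} \times 100,
\]

where ΔH_m is the measured enthalpy of fusion of the blend, w is the weight fraction of LLDPE, and ΔH_m° ≈ 293 J/g is a commonly used literature value for 100% crystalline polyethylene.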

Relevance:

90.00%

Publisher:

Abstract:

The classical methods of analysing time series by the Box-Jenkins approach assume that the observed series fluctuates around changing levels with constant variance; that is, the time series is assumed to be homoscedastic. Financial time series, however, exhibit heteroscedasticity in the sense that they possess non-constant conditional variance given the past observations. The analysis of financial time series therefore requires the modelling of such variances, which may depend on some time-dependent factors or on their own past values. This led to the introduction of several classes of models to study the behaviour of financial time series; see Taylor (1986), Tsay (2005), Rachev et al. (2007). The class of models used to describe the evolution of conditional variances is referred to as stochastic volatility models. The stochastic models available to analyse the conditional variances are based on either normal or log-normal distributions. One of the objectives of the present study is to explore the possibility of employing some non-Gaussian distributions to model the volatility sequences and then study the behaviour of the resulting return series. This led us to work on the related problem of statistical inference, which is the main contribution of the thesis.
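For concreteness, a generic discrete-time stochastic volatility model for a return series (an illustrative formulation, not necessarily the one adopted in the thesis) is

\[
y_t = \sigma_t \varepsilon_t, \qquad \log \sigma_t^2 = \alpha + \beta \log \sigma_{t-1}^2 + \eta_t,
\]

where ε_t and η_t are independent noise sequences; it is the usual Gaussian assumption on η_t that a non-Gaussian specification would relax.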

Relevance:

90.00%

Publisher:

Abstract:

In this study, an improved technology was developed to convert recalcitrant coir pith into environment-friendly organic manure. The improved composting method substitutes the urea used in the standard method with nitrogen-fixing bacteria, viz. Azotobacter vinelandii and Azospirillum brasilense. The combined action of the microorganisms enhances the biodegradation of coir pith. In the present study, Pleurotus sajor-caju, an edible mushroom with the ability to degrade coir pith, was employed, and the addition of the nitrogen-fixing bacteria Azotobacter vinelandii and Azospirillum brasilense accelerated the action of the fungus on coir pith. The use of these microorganisms brings about definite changes in the NPK, ammonia, organic carbon and lignin contents of coir pith. This study should encourage the use of biodegraded coir pith as organic manure for agricultural and horticultural purposes to obtain better yields, and offers a technology to solve the problem of accumulated coir pith in coir-based industries.

Relevance:

90.00%

Publisher:

Abstract:

Effective solids-liquid separation is the basic concept of any wastewater treatment system. Biological treatment methods employ microorganisms for the treatment of wastewater. The conventional activated sludge process (ASP) poses the problem of poor settleability and hence requires a large footprint. Biogranulation is an effective biotechnological process which can overcome the drawbacks of conventional ASP to a great extent. Aerobic granulation represents an innovative cell immobilization strategy in biological wastewater treatment. Aerobic granules are self-immobilized microbial aggregates that are cultivated in sequencing batch reactors (SBRs). Aerobic granules have several advantages over conventional activated sludge flocs, such as a dense and compact microbial structure, good settleability and high biomass retention. For cells in a culture to aggregate, a number of conditions have to be satisfied; aerobic granulation is therefore affected by many operating parameters. The organic loading rate (OLR) has been argued to be an influencing parameter because it helps to enrich different bacterial species and influences the size and settling ability of granules. Hydrodynamic shear force, caused by aeration and measured as superficial upflow air velocity (SUAV), has a strong influence and is hence used to control the granulation process. Settling time (ST) and volume exchange ratio (VER) are two further key factors, which can be considered as selection pressures responsible for aerobic granulation based on the concept of minimal settling velocity. Hence these four parameters (OLR, SUAV, ST and VER) were selected as the major influencing parameters for the present study, and their influence on aerobic granulation was investigated in this work.
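The four parameters have simple operational definitions; the sketch below computes them under common definitions from the granulation literature (function and variable names are illustrative, not taken from the thesis):

```python
def olr(cod_mg_l: float, volume_fed_l: float, cycles_per_day: float,
        reactor_volume_l: float) -> float:
    """Organic loading rate in kg COD per m^3 of reactor per day."""
    kg_cod_per_day = cod_mg_l * 1e-6 * volume_fed_l * cycles_per_day
    return kg_cod_per_day / (reactor_volume_l * 1e-3)

def suav(air_flow_l_min: float, column_area_cm2: float) -> float:
    """Superficial upflow air velocity in cm/s."""
    return (air_flow_l_min * 1000.0 / 60.0) / column_area_cm2

def ver(volume_decanted_l: float, working_volume_l: float) -> float:
    """Volume exchange ratio: fraction of working volume replaced per cycle."""
    return volume_decanted_l / working_volume_l

# Example: 1000 mg COD/L feed, 2 L fed per cycle, 6 cycles/day, 4 L reactor
print(olr(1000.0, 2.0, 6.0, 4.0))  # -> 3.0 kg COD m^-3 d^-1
```

Settling time (ST) needs no formula: it is simply the time allowed for settling before decanting in each SBR cycle.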

Relevance:

90.00%

Publisher:

Abstract:

The problem of using information available from one variable X to make inference about another variable Y is classical in many physical and social sciences. In statistics this is often done via regression analysis, where the mean response is used to model the data. One stipulates the model Y = μ(X) + ε. Here μ(x) is the mean response at the predictor value X = x, and ε = Y − μ(X) is the error. In classical regression analysis both (X, Y) are observable, and one then proceeds to make inference about the mean response function μ(X). In practice there are numerous examples where X is not available, but a variable Z is observed which provides an estimate of X. As an example, consider the herbicide study of Rudemo et al. [3], in which a nominal measured amount Z of herbicide was applied to a plant but the actual amount X absorbed by the plant is unobservable. As another example, from Wang [5], an epidemiologist studies the severity of a lung disease Y among the residents of a city in relation to the amount of certain air pollutants. The amount of air pollutants Z can be measured at certain observation stations in the city, but the actual exposure X of the residents to the pollutants is unobservable and may vary randomly from the Z-values. In both cases X = Z + error. This is the so-called Berkson measurement error model. In the more classical measurement error model one observes an unbiased estimator W of X and stipulates the relation W = X + error. An example of this model occurs when assessing the effect of nutrition X on a disease: measuring nutrition intake precisely within 24 hours is almost impossible. There are many similar examples in agricultural or medical studies; see, e.g., Carroll, Ruppert and Stefanski [1] and Fuller [2], among others. In this talk we shall address the question of fitting a parametric model to the regression function μ(X) in the Berkson measurement error model: Y = μ(X) + ε, X = Z + η, where η and ε are random errors with E(ε) = 0, X and η are d-dimensional, and Z is the observable d-dimensional random vector.
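A small simulation makes the Berkson structure concrete; the sketch below (illustrative only, with a linear μ chosen for transparency) generates data with X = Z + η and regresses Y on the observed Z:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
# Berkson setup: Z is the nominal (observed) value, X = Z + eta is the
# actual (unobserved) value, and Y = mu(X) + eps is the response.
Z = rng.uniform(0.0, 10.0, n)      # e.g. nominal herbicide dose
eta = rng.normal(0.0, 1.0, n)      # Berkson error, independent of Z
X = Z + eta                        # true absorbed dose (unobserved)
eps = rng.normal(0.0, 0.5, n)
Y = 2.0 + 3.0 * X + eps            # linear mu(x) = 2 + 3x

# With a linear mu, E[Y | Z] = 2 + 3Z, so regressing Y on Z still
# recovers the parameters; for nonlinear mu this no longer holds,
# which is what makes inference in the Berkson model non-trivial.
slope, intercept = np.polyfit(Z, Y, 1)
print(slope, intercept)            # approximately 3.0 and 2.0
```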

Relevance:

90.00%

Publisher:

Abstract:

The study of variable stars is an important topic of modern astrophysics. With the advent of powerful telescopes and high-resolution CCDs, variable star data are accumulating on the order of petabytes, and this huge amount of data demands automated methods as well as human experts. This thesis is devoted to data analysis of variable stars' astronomical time series data and hence belongs to the interdisciplinary field of astrostatistics.

For an observer on Earth, stars whose apparent brightness changes over time are called variable stars. The variation in brightness may be regular (periodic), quasi-periodic (semi-periodic) or irregular (aperiodic), and is caused by various mechanisms. In some cases the variation is due to internal thermonuclear processes; such stars are generally known as intrinsic variables. In other cases it is due to external processes such as eclipses or rotation; these are known as extrinsic variables. Intrinsic variables can be further grouped into pulsating variables, eruptive variables and flare stars. Extrinsic variables are grouped into eclipsing binary stars and chromospherically active stars. Pulsating variables can again be classified into Cepheid, RR Lyrae, RV Tauri, Delta Scuti, Mira, etc. The eruptive or cataclysmic variables are novae, supernovae, etc., which occur rarely and are not periodic phenomena. Most of the other variations are periodic in nature.

Variable stars can be observed in many ways, such as photometry, spectrophotometry and spectroscopy. A sequence of photometric observations of a variable star produces time series data containing time, magnitude and error. The plot of a variable star's apparent magnitude against time is known as a light curve. If the time series data are folded on a period, the plot of apparent magnitude against phase is known as a phased light curve. The unique shape of the phased light curve is characteristic of each type of variable star, so one way to identify and classify a variable star is visual inspection of the phased light curve by an expert. For the last several years, automated algorithms have been used to classify groups of variable stars with the help of computers.

Research on variable stars can be divided into stages such as observation, data reduction, data analysis, modeling and classification. Modeling of variable stars helps to determine their short-term and long-term behaviour, to construct theoretical models (e.g., the Wilson-Devinney model for eclipsing binaries) and to derive stellar properties like mass, radius, luminosity, temperature, internal and external structure, chemical composition and evolution. Classification requires the determination of basic parameters like period, amplitude and phase, as well as other derived parameters. Of these, period is the most important, since a wrong period leads to sparse phased light curves and misleading information.

Time series analysis applies mathematical and statistical tests to data to quantify the variation, understand the nature of the time-varying phenomena, gain physical understanding of the system and predict its future behavior. Astronomical time series usually suffer from unevenly spaced time instants, varying error conditions and the possibility of big gaps; for ground-based observations this is due to the day-night cycle and varying weather conditions, while observations from space may suffer from the impact of cosmic-ray particles.
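Phase folding, as described above, is straightforward to implement; a minimal sketch (illustrative, not code from the thesis):

```python
import numpy as np

def phase_fold(t: np.ndarray, period: float, t0: float = 0.0) -> np.ndarray:
    """Fold observation times on a trial period, returning phases in [0, 1)."""
    return ((t - t0) / period) % 1.0

# Example: unevenly sampled times, folded on a 2.5-day trial period
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 100.0, 300))            # days
mag = 12.0 + 0.3 * np.sin(2 * np.pi * t / 2.5)       # synthetic light curve
phase = phase_fold(t, period=2.5)
# Plotting mag against phase yields the phased light curve; only the
# correct period produces a coherent, non-scattered shape.
```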
Many large-scale astronomical surveys, such as MACHO, OGLE, EROS, ROTSE, PLANET, Hipparcos, MISAO, NSVS, ASAS, Pan-STARRS, Kepler, ESA, Gaia, LSST and CRTS, provide variable star time series data even though their primary intention is not variable star observation. The Center for Astrostatistics at Pennsylvania State University was established to help the astronomical community with statistical tools for harvesting and analysing archival data. Most of these surveys release their data to the public for further analysis.

There exist many period-search algorithms for astronomical time series analysis, which can be classified into parametric methods (which assume some underlying distribution for the data) and non-parametric methods (which do not assume any statistical model, such as a Gaussian). Many of the parametric methods are based on variations of discrete Fourier transforms, like the Generalised Lomb-Scargle periodogram (GLSP) by Zechmeister (2009) and Significant Spectrum (SigSpec) by Reegen (2007). Non-parametric methods include Phase Dispersion Minimisation (PDM) by Stellingwerf (1978) and the cubic spline method by Akerlof (1994). Even though most of these methods can be automated, none of them fully recovers the true periods. Wrong period detection can have several causes, such as power leakage to other frequencies due to the finite total interval, finite sampling interval and finite amount of data. Another problem is aliasing, which is due to the influence of regular sampling. Spurious periods also appear due to long gaps, and power flow to harmonic frequencies is an inherent problem of Fourier methods. Hence obtaining the exact period of a variable star from its time series data remains a difficult problem for huge databases subjected to automation. As Matthew Templeton, AAVSO, states, “Variable star data analysis is not always straightforward; large-scale, automated analysis design is non-trivial”. Derekas et al. (2007) and Deb et al. (2010) state: “The processing of huge amount of data in these databases is quite challenging, even when looking at seemingly small issues such as period determination and classification”.

It would benefit the variable star community if basic parameters such as period, amplitude and phase were obtained more accurately when huge time series databases are subjected to automation. In the present thesis, the theories of four popular period-search methods are studied, the strengths and weaknesses of these methods are evaluated by applying them to two survey databases, and finally a modified form of the cubic spline method is introduced to confirm the exact period of a variable star. For classifying newly discovered variable stars and entering them in the “General Catalogue of Variable Stars” or other databases like the “Variable Star Index”, the characteristics of the variability have to be quantified in terms of variable star parameters.
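Of the non-parametric methods named above, Phase Dispersion Minimisation is the simplest to sketch: each trial period is scored by how much the folded light curve's scatter within phase bins falls below the overall scatter. The following is a simplified illustration of the idea (not the full Stellingwerf 1978 statistic, which pools bin variances with degree-of-freedom corrections):

```python
import numpy as np

def pdm_theta(t, mag, period, n_bins=10):
    """Ratio of pooled within-bin variance to overall variance (lower = better)."""
    phase = (t / period) % 1.0
    bins = np.floor(phase * n_bins).astype(int)
    num = den = 0.0
    for b in range(n_bins):
        m = mag[bins == b]
        if m.size > 1:
            num += m.size * np.var(m)
            den += m.size
    return (num / den) / np.var(mag)

def best_period(t, mag, trial_periods):
    """Return the trial period minimising the PDM statistic."""
    scores = [pdm_theta(t, mag, p) for p in trial_periods]
    return trial_periods[int(np.argmin(scores))]
```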

Relevance:

90.00%

Publisher:

Abstract:

The study focuses on the education of tribes, particularly the problem of the high dropout rate among tribal students at school level. The Scheduled Tribes are among the marginalized communities experiencing a high level of educational deprivation, and the analysis shows that the extent of deprivation among STs of Kerala is much higher than that of other communities. The present study covered tribes of three tribal-predominant districts of Kerala: Idukki, Palakkad and Wayanad. Of the 35 tribal communities in the State, 17 are concentrated in these districts; tribes concentrated in Idukki include the Muthuvans, Malai Arayan, Uraly, Mannan and Hill Pulaya. The study analyzed the dropout situation in tribal areas of Kerala through a field survey of dropout and non-dropout students at school level. High dropout rates among STs persist due to many problems that are structural in nature. The important problems faced by tribal students were analyzed and can be classified as economic, social, cultural and institutional. A high correlation was found between family income and expenditure and the well-being of individuals; the significant economic factors are poverty and financial indebtedness of the family. Common cultural factors of the tribes include nature of habitation, difference in dialect, and medium of instruction. The social factors analyzed in the study are illiteracy of parents, migration of the family, family environment, motivation by parents, activities engaged in to help the family, and students' lack of interest in studies; the analysis showed that all these factors except migration of the family affect the education of tribal students. Apart from social, economic and cultural factors, a few institutional factors also influence the education of tribal students; those analyzed in the study include students' absenteeism, irregularity of teachers, attitudes of non-tribal teachers and non-tribal students, infrastructure facilities, and accessibility to school. The study found irregularity of students and accessibility to school to be significant factors determining student dropout.

Relevance:

90.00%

Publisher:

Abstract:

Post-transcriptional gene silencing by RNA interference is mediated by small interfering RNA (siRNA). This gene silencing mechanism can be exploited therapeutically against a wide variety of disease-associated targets, and has been applied in mice to AIDS, neurodegenerative diseases, cholesterol regulation and cancer, with the hope of extending these approaches to humans. Over the recent past, a significant amount of work has been undertaken to understand the gene silencing mediated by exogenous siRNA. The design of efficient exogenous siRNA sequences is challenging because of many issues related to siRNA. When designing efficient siRNA, target mRNAs must be selected such that their corresponding siRNAs are likely to be efficient against that target and unlikely to accidentally silence other transcripts due to sequence similarity. Before performing gene silencing with siRNAs, it is therefore essential to analyze their off-target effects in addition to their inhibition efficiency against a particular target. Hence, designing exogenous siRNA with good knock-down efficiency and target specificity is an area of concern to be addressed. Some methods that consider both the inhibition efficiency and the off-target possibility of siRNA against a gene have already been developed, but only a few achieve good inhibition efficiency, specificity and sensitivity.

The main focus of this thesis is to develop computational methods to optimize the efficiency of siRNA in terms of inhibition capacity and off-target possibility against target mRNAs, which may be useful in the areas of gene silencing and drug design for tumors. This study investigates the currently available siRNA prediction approaches and devises a better computational approach to tackle the problem of siRNA efficacy in terms of inhibition capacity and off-target possibility. The strengths and limitations of the available approaches are investigated and taken into consideration in building an improved solution. The approaches proposed in this study extend some of the well-scoring state-of-the-art techniques by incorporating machine learning and statistical approaches, along with thermodynamic features like whole stacking energy, to improve prediction accuracy, inhibition efficiency, sensitivity and specificity. We propose one Support Vector Machine (SVM) model and two Artificial Neural Network (ANN) models for siRNA efficiency prediction. In the SVM model, the classification property is used to classify whether an siRNA is efficient or inefficient in silencing a target gene. The first ANN model, named siRNA Designer, is used for optimizing the inhibition efficiency of siRNA against target genes. The second ANN model, named Optimized siRNA Designer (OpsiD), produces efficient siRNAs with high inhibition efficiency against target genes with improved sensitivity and specificity, and identifies the off-target knockdown possibility of siRNA against non-target genes. The models are trained and tested against a large data set of siRNA sequences, and validated using the Pearson Correlation Coefficient, Matthews Correlation Coefficient, Receiver Operating Characteristic analysis, prediction accuracy, sensitivity and specificity. The approach OpsiD is found capable of predicting the inhibition capacity of siRNA against a target mRNA with improved results over the state-of-the-art techniques, and the study also clarifies the influence of whole stacking energy on siRNA efficiency.

The model is further improved by including the ability to identify the off-target possibility of a predicted siRNA on non-target genes. Thus the proposed model, OpsiD, can predict optimized siRNA by considering both inhibition efficiency on target genes and off-target possibility on non-target genes, with improved inhibition efficiency, specificity and sensitivity. Since efforts have been taken to optimize siRNA efficacy in terms of inhibition efficiency and off-target possibility, we hope that the risk of off-target effects during gene silencing can be overcome to a great extent. These findings may provide new insights into cancer diagnosis, prognosis and therapy by gene silencing, and the approach may prove useful for designing exogenous siRNA for therapeutic applications and gene silencing techniques in different areas of bioinformatics.
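As an illustration of the SVM component described above, the sketch below classifies siRNAs as efficient or inefficient; the features, the toy labeling rule and all data are hypothetical placeholders, not the trained model or data set from the thesis:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def features(sirna: str) -> list[float]:
    """Toy sequence features: GC content plus per-base composition.
    A real model would add thermodynamic terms such as whole stacking energy."""
    n = len(sirna)
    gc = (sirna.count("G") + sirna.count("C")) / n
    return [gc] + [sirna.count(b) / n for b in "AUGC"]

# Hypothetical data: random 19-nt siRNA sequences with toy labels
# (1 = efficient, 0 = inefficient); real training data would come from
# published knockdown experiments.
rng = np.random.default_rng(0)
seqs = ["".join(rng.choice(list("AUGC"), 19)) for _ in range(200)]
y = [1 if features(s)[0] > 0.5 else 0 for s in seqs]  # toy labeling rule

X = np.array([features(s) for s in seqs])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```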

Relevance:

90.00%

Publisher:

Abstract:

In fish processing plants, a huge amount of skin is left as waste. Processing this skin into fish collagen can save a large amount of money otherwise spent on extracting collagen from other animals. Fish collagen can be used as an alternative to mammalian collagen, especially bovine collagen, given the outbreaks of bovine spongiform encephalopathy (BSE), transmissible spongiform encephalopathy (TSE) and foot-and-mouth disease (FMD). BSE and TSE are progressive neurological disorders affecting cattle, caused by proteinaceous infectious particles called prions. The study aims to produce collagen extracted from fish skin to replace other animal collagen and thereby overcome the issues associated with it. The study also utilizes the discarded waste produced by the fish processing industry, since fish bone, skin, fin and scales can be a useful source of collagen.