909 results for Smoothed ANOVA


Relevance: 10.00%

Publisher:

Abstract:

In Québec, attention deficit/hyperactivity disorder (ADHD) is the condition requiring the largest number of child-psychiatry consultations (50% to 75%). To date, multimodal intervention (pharmacological treatment, a parent skills training program (PEHP), and a cognitive-behavioural intervention program (PICC) for children with ADHD) has achieved good long-term results. In this study, we evaluated changes in family functioning following a PEHP. The design of this PEHP rests on two approaches: the Calgary family systems approach (Wright & Leahey, 2013) and the collaborative and proactive solutions approach (Greene, 2014). The short version of the Family Assessment Device (FAD; Epstein, Baldwin, & Bishop, 1983) was used to measure the family's general functioning (GF). Data were collected from two groups (participant group and control group) at two measurement times (before and after the PEHP). The sample comprised 28 participating families and 18 control families. A repeated-measures analysis of variance (ANOVA) was used to test the effect of the independent variables (Time and Intervention) on the dependent variable (GF). The results indicate that parents who take part in a PEHP perceive improved general family functioning relative to the control group. Interpreting the changes following the PEHP suggests avenues for nursing intervention with these families, so as to avoid the long-term impact of the disorder on family functioning.
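The Group × Time design above can be illustrated with a minimal sketch (not the study's code, and with hypothetical data): for a two-group pre/post design, computing a one-way ANOVA F-statistic on pre-to-post change scores is statistically equivalent to testing the Group × Time interaction.

```python
from statistics import mean

def anova_f(groups):
    """One-way ANOVA F-statistic for a list of groups (lists of values)."""
    all_vals = [v for g in groups for v in g]
    grand = mean(all_vals)
    k, n = len(groups), len(all_vals)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((v - mean(g)) ** 2 for g in groups for v in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical pre-to-post changes in FAD general functioning
# (lower FAD scores indicate better functioning):
pehp = [-0.40, -0.35, -0.50, -0.20, -0.45, -0.30]
control = [-0.05, 0.10, -0.10, 0.00, 0.05, -0.15]
f = anova_f([pehp, control])
```

A large F relative to the critical value for (1, n - 2) degrees of freedom would indicate a group difference in change, i.e. an intervention effect.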

Abstract:

This research examines "attribution of responsibility" in a population of 166 adolescent perpetrators of sexual aggression aged 12 to 19. The first aim of the investigation is to determine which psychological aspects (age, post-traumatic stress, cognitive distortion, self-esteem, alienation, immaturity) influence three types of attribution of responsibility, namely guilt, external attribution and internal attribution, and hence at which levels treatment should be focused. Multiple regression results yielded two models. For the model predicting guilt, a single component was retained: post-traumatic stress. This model explains 26% (adjusted) of the variance in guilt (R² = 0.29, F(6,120) = 8.35, p < 0.01). The model predicting external attribution comprises age and cognitive distortions and explains 25% (adjusted) of the variance (R² = 0.28, F(6,122) = 8.03, p < 0.01). Internal attribution showed no correlation with the variables studied. The second objective is to assess the effectiveness of the youth's treatment in modifying attribution of responsibility, according to the modalities "treatment setting", "treatment duration" and "therapeutic approach", so as to choose the most suitable program. Using analysis of variance (ANOVA), it was determined that none of these modalities influences attribution of responsibility. This study has limitations, notably statistical power. As an avenue for future research, the link between attribution of responsibility and recidivism could be examined.
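The adjusted R² figures quoted above follow the standard correction for the number of predictors. A minimal sketch (not the study's code; the sample size is not stated in the abstract, so n below is inferred from the F denominator degrees of freedom and is illustrative):

```python
def adjusted_r2(r2, n, p):
    """Standard adjustment of R-squared for n observations and p predictors."""
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

# With 6 predictors and the residual df implied by F(6, 120), n = 127:
adj = adjusted_r2(0.29, 127, 6)  # close to the reported 26% (adjusted)
```

Small discrepancies with the published percentages likely reflect rounding in the reported statistics.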

Abstract:

A rapidly developing field, knowledge transfer (KT) is defined as the set of activities, mechanisms and processes that promote the use of relevant knowledge (tacit and empirical) by a target audience such as psychosocial practitioners. This research aims to improve the effectiveness of written linear KT methods by better identifying the information needs of youth-protection practitioners. Written linear KT methods refer to one-way written information tools such as journals, publications, websites, etc. The first objective is to determine the categories of needs expressed by practitioners, that is, whether the needs they report cluster into types or kinds of needs. The second objective is to establish the relative importance of each of these categories. Finally, the study seeks to determine whether these needs differ according to the characteristics of the practitioner or the environment. Two factors are examined: the practitioner's experience and the directorate for which he or she works (Direction des services milieu à l'enfance or Direction des services milieu à l'adolescence et ressources). An exploratory sequential mixed design was developed. In the first step, a thematic analysis was carried out on responses to an open question put to the members of three teams and on a document summarizing requests made to the library team of the Centre jeunesse de Montréal. The results address the first objective of this thesis: the analyses produced a thematic tree of 42 hierarchically classified elements.
The needs fall into two general themes: those concerning "operations" (the practitioner's actions) and those concerning "systems" (the elements an intervention may target). The latter category subdivides into the client, his or her environments, and the cultural and societal context. In the second step, an analysis of variance (ANOVA) and a multivariate analysis of variance (MANOVA) were performed on the responses of 82 practitioners to an online questionnaire structured according to the categories of information needs identified in the preceding qualitative step. The results address the second objective of this thesis, measuring the strength or importance of each category of needs identified in the first step, as rated by the practitioners themselves. The needs could thus be ranked in decreasing order of importance. It was possible to define a group of nine priority needs (concerning facilitation, clients' personal characteristics, parents' characteristics and their relationship with the child, as well as intercultural intervention and social problems) and another group of seven lower-ranked needs (concerning the other "operations" and the professional services the client has received). These results indicate that practitioners' KT needs are limited to information that directly concerns their mandate, their practice or the problems they encounter. The results of this step also address the third objective: the perceived importance of needs (on a scale of 1 to 7) does not differ significantly according to the directorate for which the practitioner works, but it does differ significantly according to the practitioner's experience (under 10 years or over 10 years).
This difference is discussed and several explanatory hypotheses are considered, such as the accumulation of knowledge with experience or the cognitive changes associated with expertise. Finally, the discussion situates the results among the other existing types of needs and the other characteristics of knowledge that must be taken into account. This leads to recommendations for improving the production of written documents and for pursuing research in the field of KT needs assessment. Despite certain methodological limitations, this research opens the way to better needs-assessment tools and to improved written linear transfer techniques.

Abstract:

Introduction: Dalcetrapib, an inhibitor of the hydrophobic cholesteryl ester transfer protein (CETP), was studied in the phase II clinical trial dal-PLAQUE2 (DP2). The main objective is to study the effect of dalcetrapib after 1 year of treatment on HDL structure and function in a subpopulation of the DP2 cohort. Methods: DP2 subjects with a series of cIMT measurements and plasma and serum samples at baseline and after 1 year of treatment were selected (379 subjects: 193 in the placebo group (PCB) and 186 in the dalcetrapib group (DAL)). Predetermined biochemical data, the concentration and size profiles of HDL and LDL subclasses by nuclear magnetic resonance (NMR), and 2 measures of serum cholesterol efflux capacity (CEC) were explored. Statistical results were obtained by comparing changes at one year from baseline using ANOVA or ANCOVA. The standard operating procedure for the cholesterol efflux assay calculates the fractional efflux (in %) of ³H-cholesterol from the BHK-ABCA1 (fibroblast), J774 (macrophage, ABCA1 pathway) and HepG2 (hepatocyte, SR-BI pathway) cell lines to the serum samples of the DP2 cohort. Results: For plasma biochemistry, a combined effect of the changes in CETP activity in the 2 groups produced a 30% reduction in the DAL group. After 1 year of treatment in the DAL group, HDL-C increased by 35.5% (p < 0.001) and apoA-I by 14.0% (p < 0.001). On the NMR profile, in the DAL group after 1 year of treatment, there was an increase in HDL-P size (5.2%; p < 0.001), in large HDL particles (68.7%; p < 0.001) and in large LDL particles (37.5%; p < 0.01). Small HDL particles decreased (-9.1%; p < 0.001). There was no significant difference in cIMT between the two groups after 1 year of treatment.
For CEC, there was a significant increase via the SR-BI pathway and an increase via the ABCA1 pathway in the DAL group after 1 year of treatment. Conclusion: After one year of dalcetrapib treatment, we note a rise in HDL-C, fairly neutral results for the NMR lipid profile, and an increased CEC that is too small to affect cIMT in the samples tested.

Abstract:

In this article, we present the results of a longitudinal study of the proportion of shelf space devoted, on the one hand, to pseudoscience books for adults (paranormal, esotericism, new age, divination, etc.) and science books for adults and, on the other hand, to spirituality and science books for children in Québec bookstores. Two measurements were taken, one in 2001 in 55 bookstores and the other in 2011 in 72 bookstores. Statistical analyses were performed only on the measurements taken in bookstores visited at both times. Correlational analyses show that the bookstores devoting more space to pseudoscience books for adults (n = 40) and to spirituality books for children (n = 38) are the same in 2001 and in 2011. Moreover, a repeated-measures ANOVA shows that the proportion of space devoted to pseudoscience books for adults decreased at the second measurement time, which is not the case for spirituality books offered to children. After briefly revisiting the method and the results, we advance four reasons that may explain the popularity of pseudoscience, along with some ethical and social consequences of its vogue. In conclusion, we propose two solutions for promoting the scientific approach to adolescents and children.

Abstract:

As industry grew in the twentieth century, technology grew with it. This led to collective efforts and thinking directed at controlling work-related hazards and accidents, and safety management thus developed into an important part of industrial management. While considerable research on safety management in industry has been reported from various parts of the world, literature from India is scarce. It is logical to think that a clear understanding of the critical safety management practices and their relationships with accident rates and management system certifications would help in the development and implementation of safety management systems. In the first phase of the research, a set of six critical safety management practices was identified based on a thorough review of the prescriptive, practitioner, conceptual and empirical literature. An instrument for measuring the level of practice of these safety management practices was developed, and a questionnaire survey was conducted in the chemical/process industry. The instrument was empirically validated using the Confirmatory Factor Analysis (CFA) approach. As the second step, the predictive validity of the safety management practices and their relationships with self-reported accident rates and management system certifications were investigated using ANOVA. Results of the ANOVA tests show that the identified safety management practices differ significantly across the groups compared. The determinants of safety performance were investigated using Multiple Regression Analysis, and the inter-relationships between safety management practices, determinants of safety performance and components of safety performance were examined with the help of structural equation modeling. Further investigations into the engineering and construction industries reveal that safety climate factors are not stable across industries.
However, some factors are found to be common irrespective of the type of industry. This study identifies the critical safety management practices in the major accident hazard chemical/process industry from the perspective of employees, and the findings empirically support the necessity of obtaining safety-specific management system certifications.

Abstract:

This research was undertaken with the objective of studying software development project risk, risk management, project outcomes and their inter-relationships in the Indian context. Validated instruments were used to measure risk, risk management and project outcome in software development projects undertaken in India. A second-order factor model was developed for risk, with five first-order factors. Risk management was also identified as a second-order construct, with four first-order factors. These structures were validated using confirmatory factor analysis. Variation in risk across categories of selected organization/project characteristics was studied through a series of one-way ANOVA tests. A regression model was developed for each of the risk factors by linking it to risk management factors and project/organization characteristics. Similarly, regression models were developed for the project outcome measures, linking them to the risk factors. Integrated models linking risk factors, risk management factors and project outcome measures were tested through structural equation modeling. The quality of the software developed was seen to have a positive relationship with risk management and a negative relationship with risk. The other outcome variables, namely time overrun and cost overrun, had strong positive relationships with risk. Risk management did not have a direct effect on the overrun variables; risk was seen to act as an intervening variable between risk management and the overrun variables.

Abstract:

To ensure the quality of machined products at minimum machining cost and maximum machining effectiveness, it is very important to select optimum parameters when metal cutting machine tools are employed. Traditionally, the experience of the operator plays a major role in the selection of optimum metal cutting conditions; however, attaining optimum values every time is difficult even for a skilled operator. The non-linear nature of the machining process has compelled engineers to search for more effective methods of optimization. The design objective preceding most engineering design activities is simply to minimize the cost of production or to maximize production efficiency. The main aim of the research reported here is to build robust optimization algorithms by exploiting ideas that nature has to offer and using them to solve real-world optimization problems in manufacturing processes. In this thesis, after an exhaustive literature review, several optimization techniques used in various manufacturing processes were identified. The selection of optimal cutting parameters, such as depth of cut, feed and speed, is a very important issue for every machining process. Experiments were designed using the Taguchi technique, and dry turning of SS420 was performed on a Kirlosker turn master 35 lathe. S/N and ANOVA analyses were performed to find the optimum level and the percentage contribution of each parameter, and the optimum machining parameters were obtained from the experiments using S/N analysis. Optimization algorithms begin with one or more design solutions supplied by the user and then iteratively search the space of new design solutions in order to reach the true optimum.
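The S/N analysis mentioned above can be sketched as follows. This is not the thesis's code, just the standard Taguchi "smaller-the-better" signal-to-noise ratio applied to hypothetical surface-roughness replicates:

```python
import math

def sn_smaller_the_better(values):
    """Taguchi S/N ratio (dB) for a 'smaller-the-better' response such as
    surface roughness: -10 * log10(mean of the squared observations)."""
    return -10 * math.log10(sum(v * v for v in values) / len(values))

# Hypothetical roughness (um) replicates for two parameter settings:
setting_a = [1.2, 1.3, 1.1]
setting_b = [0.8, 0.9, 0.7]
# The setting with the larger S/N ratio (setting_b here) is preferred.
```

In a Taguchi study, the S/N ratio is computed per trial, and the level of each factor that maximizes the mean S/N ratio is selected as the optimum.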
A mathematical model for surface roughness was developed using response surface analysis, and the model was validated against published results from the literature. Optimization methodologies, namely Simulated Annealing (SA), Particle Swarm Optimization (PSO), a Conventional Genetic Algorithm (CGA) and an Improved Genetic Algorithm (IGA), were applied to optimize the machining parameters for dry turning of SS420 material. All of the above algorithms were tested for efficiency, robustness and accuracy, and were observed to often outperform conventional optimization methods on difficult real-world problems. The SA, PSO, CGA and IGA codes were developed using MATLAB. For each evolutionary algorithmic method, optimum cutting conditions are provided to achieve a better surface finish. The computational results using SA clearly demonstrate that the proposed solution procedure is quite capable of solving such complicated problems effectively and efficiently. Particle Swarm Optimization is a relatively recent heuristic search method whose mechanics are inspired by the swarming or collaborative behavior of biological populations; the results show that PSO provides better results and is also more computationally efficient. Comparing CGA and IGA for the optimization of the machining process, the proposed IGA provides better results than the conventional GA; the improved genetic algorithm incorporates a stochastic crossover technique and an artificial initial-population scheme to provide a faster search mechanism. Finally, a comparison among these algorithms was made for the specific example of dry turning of SS420 material, arriving at optimum machining parameters of feed, cutting speed, depth of cut and tool nose radius with minimum surface roughness as the criterion.
To summarize, this research fills conspicuous gaps between research prototypes and industry requirements by simulating the evolutionary procedures nature uses to optimize its own systems.
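As one concrete instance of the nature-inspired methods discussed above, a minimal simulated-annealing loop is sketched below. The objective function and parameter range are hypothetical stand-ins for a single-variable surface-roughness model, not the thesis's actual response surface:

```python
import math
import random

def simulated_annealing(f, lo, hi, t0=1.0, cooling=0.95, steps=500, seed=42):
    """Minimize f over [lo, hi]: propose a nearby candidate, always accept
    downhill moves, accept uphill moves with probability exp(-delta/T),
    and cool the temperature T geometrically each step."""
    rng = random.Random(seed)
    x = rng.uniform(lo, hi)
    fx = f(x)
    best_x, best_f, t = x, fx, t0
    for _ in range(steps):
        cand = min(hi, max(lo, x + rng.gauss(0, 0.1 * (hi - lo))))
        fc = cand_f = f(cand)
        if fc - fx < 0 or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, cand_f
            if fx < best_f:
                best_x, best_f = x, fx
        t *= cooling
    return best_x, best_f

# Hypothetical roughness model with its minimum at feed = 0.2 mm/rev:
roughness = lambda feed: (feed - 0.2) ** 2 + 0.5
feed_opt, ra_min = simulated_annealing(roughness, 0.05, 0.5)
```

The early high-temperature phase lets the search escape local minima; the geometric cooling schedule gradually turns it into a greedy local search.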

Abstract:

Occupational stress is becoming a major issue on both corporate and social agendas. In industrialized countries, there have been quite dramatic changes in working conditions during the last decade, caused by economic, social and technical development. As a consequence, people at work today are exposed to high quantitative and qualitative demands as well as hard competition driven by the global economy. A recent report says that ailments due to work-related stress are likely to cost India's exchequer around 72000 crores between 2009 and 2015. Though India is a fast-developing country, it is yet to create facilities to mitigate the adverse effects of work stress; moreover, only little effort has been made to assess work-related stress. In the absence of well-defined standards for assessing work-related stress in India, an attempt is made here to develop factors for the evaluation of work stress. Accordingly, with the help of the existing literature and in consultation with safety experts, seven factors for the evaluation of work stress were developed. An instrument (questionnaire) was developed using these seven factors. The validity and unidimensionality of the questionnaire were ensured by confirmatory factor analysis, and its reliability was ensured before administration. While analyzing the relationships between the variables, it was noted that no relationship exists between them; hence the above factors are treated as independent factors/variables for the purposes of the research. Initially, five profit-making manufacturing industries under the public sector in the state of Kerala were selected for the study, and the influence of the factors responsible for work stress was analyzed in these industries.
These industries were classified into two types, namely chemical and heavy engineering, based on the product manufactured and the work environment, and the analysis was then carried out for these two categories. The variation of work stress with the age, designation and experience of the employees was analyzed by means of one-way ANOVA. Further, three different models of work stress, namely a factor model, a structural equation model and a multinomial logistic regression model, were built to analyze the association of the factors responsible for work stress; all of these models were found equally good at predicting work stress. The present study indicates that work stress exists among employees in public sector industries in Kerala. Employees in the 40-45-year age group and the 15-20-year experience group had relatively higher work demands, low job control and low support at work. Low job control was noted at lower designation levels, particularly at the worker level, in these industries. Hence the instrument developed using the seven factors, namely demand, control, manager support, peer support, relationship, role and change, can be effectively used for the evaluation of work stress in industries.

Abstract:

The study is about Gulf-returned Keralites and their personal financial planning during the Gulf period. The researcher examined the nature of their income, expenditure, savings and investments during the Gulf period and after their return. Even though the Gulf-returned Keralites had remitted huge amounts to Kerala, it appears that the majority of them are struggling hard to make both ends meet. The sample consists of 318 Gulf-returned Keralites from 5 districts, selected using a stratified random sampling technique. After a pilot study, the data were collected through personal interviews using a structured schedule. In order to find out whether the respondents had personal financial planning during the Gulf period, the researcher evaluated 15 elements of personal finance using a five-point rating scale. The hypotheses were tested using correlation, t-tests, chi-square and ANOVA, through SPSS.

Abstract:

The preceding discussion and review of literature show that studies on gear selectivity have received great attention, while gear efficiency studies do not seem to have received equal consideration. In temperate waters the fishing industry is well organised, relatively large and well-equipped vessels and gear are used for commercial fishing, and the number of species is small; whereas in the tropics, particularly in India, small-scale fishery dominates the scene and the fishery is a multispecies one exploited by multiple gears. Therefore many of the problems faced in India may not exist in developed countries, which is perhaps the reason for the paucity of literature on the problems of estimating relative efficiency. Much work has been carried out on estimating relative efficiency (Pycha, 1962; Pope, 1963; Gulland, 1967; Dickson, 1971 and Collins, 1979). The main subject of interest in the present thesis is an investigation into the problems in the comparison of fishing gears, especially in using classical test procedures with special reference to the prevailing fishing practices (that is, with reference to the catch data generated by the existing system). This has been taken up with a view to standardizing an approach for comparing the efficiency of fishing gears. Besides this, the implications of the terms 'gear efficiency' and 'gear selectivity' have been examined, and, based on the commonly used selectivity model (Holt, 1963), estimation of the ratio of the fishing powers of two gears has been considered. An attempt has also been made to determine the size of fish for which a gear is most efficient. The work is presented in eight chapters.
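The Holt (1963) model referred to above assumes a normal (bell-shaped) selection curve, with retention peaking at an optimum fish length for a given mesh size. A minimal sketch with hypothetical parameters (not the thesis's data):

```python
import math

def holt_selectivity(length, l_opt, sigma):
    """Holt (1963) normal selection curve: relative probability that a fish
    of the given length is retained by the gear, maximal at l_opt."""
    return math.exp(-((length - l_opt) ** 2) / (2 * sigma ** 2))

# Hypothetical gear with optimum length 25 cm and spread 4 cm:
s_peak = holt_selectivity(25, 25, 4)  # retention is maximal at l_opt
s_tail = holt_selectivity(35, 25, 4)  # and falls off away from it
```

Under this model, the length at which a gear is most efficient is simply l_opt, which is why fitting the curve to catch-at-length data is central to the comparisons discussed in the thesis.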

Abstract:

Decision trees are very powerful tools for classification in data mining tasks that involve different types of attributes. When handling numeric data sets, the attributes are usually converted to categorical types first and then classified using information gain concepts. Information gain is a very popular and useful concept which indicates whether any benefit, in terms of information content, results from splitting on a given attribute. But this process is computationally intensive for large data sets, and popular decision tree algorithms such as ID3 cannot handle numeric data sets directly. This paper proposes statistical variance as an alternative to information gain, together with the statistical mean as the split point, for attributes in completely numerical data sets. The new algorithm has been shown to be competitive with its information gain counterpart C4.5 and with many existing decision tree algorithms on the standard UCI benchmark datasets, using the ANOVA test. The specific advantages of the proposed algorithm are that it avoids the computational overhead of information gain computation for large data sets with many attributes, and that it avoids the conversion of huge numeric data sets to categorical data, which is also a time-consuming task. In summary, huge numeric datasets can be submitted directly to this algorithm without any attribute mappings or information gain computations. It also blends two closely related fields, statistics and data mining.
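The variance criterion described above can be illustrated as follows. This sketch (not the paper's implementation) splits a numeric attribute at its mean and measures the reduction in the target's variance, the numeric analogue of information gain:

```python
from statistics import mean, pvariance

def variance_reduction(xs, ys):
    """Reduction in the variance of target ys obtained by splitting the
    records at the mean of the numeric attribute xs (0.0 if no split)."""
    threshold = mean(xs)
    left = [y for x, y in zip(xs, ys) if x <= threshold]
    right = [y for x, y in zip(xs, ys) if x > threshold]
    if not left or not right:
        return 0.0
    n = len(ys)
    weighted = (len(left) * pvariance(left) + len(right) * pvariance(right)) / n
    return pvariance(ys) - weighted

# An attribute that separates the target well yields a large reduction:
xs = [1.0, 2.0, 3.0, 10.0, 11.0, 12.0]
ys = [0.0, 0.0, 0.0, 5.0, 5.0, 5.0]
```

A tree builder would compute this quantity for every candidate attribute and split on the one with the largest reduction, exactly as ID3 does with information gain.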

Abstract:

The water quality and primary productivity of Valanthakad backwater (9°55'10.24"N latitude, 76°20'01.23"E longitude) were monitored from June to November 2007. Significant spatial and temporal variations in temperature, transparency, salinity, pH, dissolved oxygen, sulphides, carbon dioxide, alkalinity, biochemical oxygen demand, phosphate-phosphorus, nitrate-nitrogen and nitrite-nitrogen, as well as in primary productivity, were observed. Transparency was low (53.75 cm to 159 cm) during the active monsoon months, when the intensity of solar radiation was minimum; together with run-off from the land, this resulted in turbid waters at the study sites. Salinity at both stations was low (0.10 ‰ to 4.69 ‰) except in August and November 2007. The presence of total sulphide (0.08 mg/l to 1.84 mg/l) and the higher carbon dioxide (3 mg/l to 17 mg/l) could be due to hospital discharges and decaying slaughterhouse wastes at Station 1, and to the mangrove vegetation at Station 2. Nitrate-nitrogen and phosphate-phosphorus showed higher values and pronounced variations in the monsoon season. Maximum net primary production was seen in November (0.87 gC/m³/day) and production was nil in September. The chlorophyll pigments showed higher values in July, August and November, with a negative correlation with phosphate-phosphorus and nitrite-nitrogen. The study indicates that the water quality and productivity of Valanthakad backwater are impacted, and it is the first report from the region.

Abstract:

This thesis, entitled "Studies on Nitrifying Microorganisms in Cochin Estuary and Adjacent Coastal Waters", reports for the first time the spatial and temporal variations in the abundance and activity of nitrifiers (ammonia-oxidizing bacteria, AOB; nitrite-oxidizing bacteria, NOB; and ammonia-oxidizing archaea, AOA) in the Cochin Estuary (CE), a monsoon-driven, nutrient-rich tropical estuary along the southwest coast of India. To fulfil these objectives, field observations were carried out over a period of one year (2011) in the CE. Surface (1 m below the surface) and near-bottom water samples were collected from four locations (stations 1 to 3 in the estuary and station 4 in the coastal region), covering the pre-monsoon, monsoon and post-monsoon seasons. Station 1 is a low-saline station (salinity range 0-10) with high freshwater influx, while stations 2 and 3 are intermediately saline (salinity range 10-25). Station 4 is located ~20 km from station 3, has the least influence of fresh water, and is considered high-saline (salinity range 25-35). Ambient physicochemical parameters such as temperature, pH, salinity, dissolved oxygen (DO), ammonium, nitrite, nitrate, phosphate and silicate of the surface and bottom waters were measured using standard techniques. The abundance of Eubacteria, total Archaea and ammonia- and nitrite-oxidizing bacteria (AOB and NOB) was quantified using Fluorescent In Situ Hybridization (FISH) with Cy3-labelled oligonucleotide probes. The community structure of AOB and AOA was studied using the PCR Denaturing Gradient Gel Electrophoresis (DGGE) technique; PCR products were cloned and sequenced to determine approximate phylogenetic affiliations. Nitrification rates in the water samples were analyzed using the chemical inhibitors NaClO3 (inhibitor of nitrite oxidation) and ATU (inhibitor of ammonium oxidation), and the contributions of AOA and AOB to the ammonia oxidation process were measured from the recovered ammonia oxidation rate.
The contributions of AOB and AOA were analyzed after inhibiting the activities of AOB and AOA separately, using specific protein inhibitors. To understand the factors influencing or controlling nitrification, various statistical tools were used: Karl Pearson's correlation (to find the relationships between environmental parameters, bacterial abundance and activity), three-way ANOVA (to find significant variation between observations), Canonical Discriminant Analysis (CDA) (to discriminate the stations based on the observations), multivariate statistics, Principal Component Analysis (PCA), and a step-up multiple regression model (SMRM) with first-order interaction effects (to determine the biological and environmental parameters contributing significantly to the numerical abundance of nitrifiers). In the CE, nitrification is modulated by the complex interplay between the different nitrifiers and environmental variables, which in turn is dictated by various hydrodynamic characteristics such as freshwater discharge and seawater influx brought in by river discharge and flushing. AOB in the CE are more adapted to varying environmental conditions than AOA, though the diversity of AOA is higher than that of AOB. The abundance and seasonality of AOB and NOB are influenced by the concentration of ammonia in the water column, and AOB are the major players modulating the ammonia oxidation process in the water column of the CE. The distribution pattern and seasonality of AOB and NOB in the CE suggest that these organisms coexist and are responsible for modulating the entire nitrification process in the estuary. This process is fuelled by cross-feeding among the different nitrifiers, which in turn is dictated by nutrient levels, especially ammonia. Although nitrification modulates the increasing anthropogenic ammonia concentration, the anthropogenic inputs have to be controlled to prevent eutrophication and the associated environmental changes.