908 results for Welfare State Models


Relevance:

30.00%

Publisher:

Abstract:

A Work Project, presented as part of the requirements for the Award of a Master's Degree in Finance from the NOVA – School of Business and Economics

Relevance:

30.00%

Publisher:

Abstract:

The purpose of this thesis is to examine various policy implementation models and to determine what use they are to a government. To ensure that governmental proposals are created and exercised in an effective manner, there must be guidelines in place to assist in resolving difficult situations. All governments face the challenge of responding to public demand by delivering the type of policy responses that attempt to answer those demands. The problem for people in positions of policy-making responsibility is to balance the competing forces that would influence policy. This thesis examines provincial government policy in two unique cases. The first is the revolutionary recommendations brought forth in the Hall-Dennis Report. The second is the question of extending full funding to the end of high school in the separate school system. These two cases illustrate how divergent and problematic the policy-making duties of any government may be. To respond to these political challenges, decision-makers must have a clear understanding of what they are attempting to do. They must also have an assortment of policy-making models to ensure that a policy response effectively deals with the issue under examination. A government must make every effort to ensure that all policy-making methods are considered and that the data gathered are inserted into the most appropriate model. Currently, there is considerable debate over the benefits of the progressive, individualistic education approach proposed by the Hall-Dennis Committee. This debate usually intensifies during periods of economic uncertainty. Periodically, the province also experiences brief yet equally intense debate on the question of separate school funding. At one level, this debate centres on the efficiency of maintaining two parallel education systems, but it frequently has undertones of the religious animosity common in Ontario's history.
As a result of the two policy cases under study, we may ask two questions: a) did the policies in question improve the general quality of life in the province? and b) did the policies unite the province? In the cases of educational instruction and finance, the debate is ongoing and unsettling. Currently, there is a widespread belief that provincial students at the elementary and secondary levels are not being educated adequately to meet the challenges of the twenty-first century. The perceived culprit is individualized education, which sees students progressing through the system at their own pace and not meeting adequate education standards. The question of the financing of Catholic education occasionally rears its head in a painful fashion within the province. Some public school supporters tend to take extension as a personal religious defeat rather than an opportunity to demonstrate that educational diversity can be accommodated within Canada's most populous province. This thesis is an attempt to analyze how successful provincial policy-implementation models were in answering public demand. A majority of the public did not demand additional separate school funding, yet it was put into place. The same majority did insist on an examination of educational methods, and the government did put changes in place. The thesis also demonstrates how policy, if wisely created, may spread additional benefits to the public at large. Catholic students currently enjoy a much improved financial contribution from the province, yet these additional funds were taken from somewhere. The public system had its funds reduced with what would appear to be minimal impact. This indicates that government policy is still sensitive to the strongly held convictions of those who oppose a given policy.

Relevance:

30.00%

Publisher:

Abstract:

The distribution of excitation energy between the two photosystems (PSII and PSI) of photosynthesis is regulated by the light state transition. Three models have been proposed for the mechanism of the state transition in phycobilisome (PBS) containing organisms, two involving protein phosphorylation. A procedure for the rapid isolation of thylakoid membrane and PBS fractions from the cyanobacterium Synechococcus sp. PCC 6301 in light state 1 and light state 2 was developed. The phosphorylation of thylakoid and soluble proteins rapidly isolated from intact cells in state 1 and state 2 was investigated. 77 K fluorescence emission spectra revealed that rapidly isolated thylakoid membranes retained the excitation energy distribution characteristic of intact cells in state 1 and state 2. Phosphoproteins were identified by gel electrophoresis of both thylakoid membrane and phycobilisome fractions isolated from cells labelled with ³²P orthophosphate. The results showed very similar phosphoprotein patterns for both thylakoid membrane and PBS fractions in state 1 and state 2. These results do not support proposed models for the state transition that require phosphorylation of PBS or thylakoid membrane proteins.

Relevance:

30.00%

Publisher:

Abstract:

We have presented a Green's function method for the calculation of the atomic mean square displacement (MSD) for an anharmonic Hamiltonian. This method effectively sums a whole class of anharmonic contributions to the MSD in the perturbation expansion in the high-temperature limit. Using this formalism we have calculated the MSD for a nearest-neighbour fcc Lennard-Jones solid. The results show an improvement over the lowest-order perturbation theory results: the difference with Monte Carlo calculations at temperatures close to melting is reduced from 11% to 3%. We also calculated the MSD for the alkali metals Na, K and Cs, where a sixth-neighbour interaction potential derived from pseudopotential theory was employed in the calculations. The MSD by this method increases by 2.5% to 3.5% over the respective perturbation theory results. The MSD was calculated for aluminum, where different pseudopotential functions and a phenomenological Morse potential were used. The results show that the pseudopotentials provide better agreement with experimental data than the Morse potential. Excellent agreement with experiment over the whole temperature range is achieved with the Harrison modified point-ion pseudopotential with the Hubbard-Sham screening function. We have calculated the thermodynamic properties of solid Kr by minimizing the total energy, consisting of static and vibrational components, employing different schemes: the quasiharmonic theory (QH), λ² and λ⁴ perturbation theory, all terms up to O(λ⁴) of the improved self-consistent phonon theory (ISC), the ring diagrams up to O(λ⁴) (RING), the iteration scheme (ITER) derived from the Green's function method, and a scheme consisting of ITER plus the remaining contributions of O(λ⁴) not included in ITER, which we call E(FULL).
We have calculated the lattice constant, the volume expansion, the isothermal and adiabatic bulk moduli, the specific heat at constant volume and at constant pressure, and the Grüneisen parameter from two different potential functions: Lennard-Jones and Aziz. The Aziz potential generally gives better agreement with experimental data than the LJ potential for the QH, λ², λ⁴ and E(FULL) schemes. When only a partial sum of the λ⁴ diagrams is used in the calculations (e.g. RING and ISC), the LJ results are in better agreement with experiment. The iteration scheme brings a definitive improvement over the λ² PT for both potentials.

Relevance:

30.00%

Publisher:

Abstract:

Photosynthetic state transitions were investigated in the cyanobacterium Synechococcus sp. PCC 7002 in both wild-type cells and mutant cells lacking phycobilisomes. Preillumination in the presence of DCMU (3-(3,4-dichlorophenyl)-1,1-dimethylurea) induced state 1, and dark adaptation induced state 2, in both wild-type and mutant cells, as determined by 77 K fluorescence emission spectroscopy. Light-induced transitions were observed in the wild-type after preferential excitation of phycocyanin (state 2) or preferential excitation of chlorophyll a (state 1). The state 1 and 2 transitions in the wild-type had half-times of approximately 10 seconds. Cytochrome f and P700 oxidation kinetics could not be correlated with any current state transition model, as cells in state 1 showed faster oxidation kinetics regardless of excitation wavelength. Light-induced transitions were also observed in the phycobilisome-less mutant after preferential excitation of short-wavelength chlorophyll a (state 2) or of carotenoids and long-wavelength chlorophyll a (state 1). One-dimensional electrophoresis revealed no significant differences in phosphorylation patterns of resolved proteins between wild-type cells in state 1 and state 2. It is concluded that the mechanism of the light state transition in cyanobacteria does not require the presence of the phycobilisome. The results contradict proposed models for the state transition which require an active role for the phycobilisome.

Relevance:

30.00%

Publisher:

Abstract:

This thesis examines the performance of Canadian fixed-income mutual funds in the context of an unobservable market factor that affects mutual fund returns. We use various selection and timing models augmented with univariate and multivariate regime-switching structures. These models assume a joint distribution of an unobservable latent variable and fund returns. The fund sample comprises six Canadian value-weighted portfolios with different investing objectives from 1980 to 2011. These are the Canadian fixed-income funds, the Canadian inflation-protected fixed-income funds, the Canadian long-term fixed-income funds, the Canadian money market funds, the Canadian short-term fixed-income funds and the high-yield fixed-income funds. We find strong evidence that more than one state variable is necessary to explain the dynamics of the returns on Canadian fixed-income funds. For instance, Canadian fixed-income funds clearly show that there are two regimes that can be identified with a turning point during the mid-eighties. This structural break corresponds to an increase in the Canadian bond index from its low values in the early 1980s to its current high values. Other fixed-income fund results show latent state variables that mimic the behaviour of general economic activity. Generally, we report that Canadian bond fund alphas are negative; in other words, fund managers do not add value through their selection abilities. We find evidence that Canadian fixed-income fund portfolio managers are successful market timers who shift portfolio weights between risky and riskless financial assets according to expected market conditions. Conversely, Canadian inflation-protected funds, Canadian long-term fixed-income funds and Canadian money market funds have no market timing ability. We conclude that these managers generally do not achieve positive performance by actively managing their portfolios.
We also report that the Canadian fixed-income fund portfolios perform asymmetrically under different economic regimes. In particular, these portfolio managers demonstrate poorer selection skills during recessions. Finally, we demonstrate that the multivariate regime-switching model is superior to univariate models given the dynamic market conditions and the correlation between fund portfolios.
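The univariate regime-switching idea the thesis builds on can be illustrated with a small simulation; the two-regime parameters below are hypothetical, not the thesis's estimates, and the Hamilton filter here assumes known parameters rather than estimating them:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical two-regime return model: regime 0 is calm, regime 1 is
# volatile, with Markov switching between them.
P = np.array([[0.97, 0.03],
              [0.05, 0.95]])      # transition probabilities
mu = np.array([0.01, -0.01])     # regime means
sd = np.array([0.02, 0.06])      # regime volatilities

T = 2_000
s = np.empty(T, dtype=int)
s[0] = 0
for t in range(1, T):
    # From state i, stay in state 0 with probability P[i, 0].
    s[t] = rng.random() >= P[s[t-1], 0]
r = mu[s] + sd[s] * rng.standard_normal(T)

def hamilton_filter(r, P, mu, sd):
    """Filtered regime probabilities P(s_t = j | r_1..r_t)."""
    pr = np.array([0.5, 0.5])            # prior over regimes
    filt = np.empty((len(r), 2))
    for t, rt in enumerate(r):
        pred = P.T @ pr                   # predict step
        lik = np.exp(-0.5 * ((rt - mu) / sd) ** 2) / sd
        post = pred * lik
        pr = post / post.sum()            # update step
        filt[t] = pr
    return filt

filt = hamilton_filter(r, P, mu, sd)
# How often the most likely filtered regime matches the true one.
hit_rate = (filt.argmax(axis=1) == s).mean()
print(round(hit_rate, 3))
```

With persistent regimes and a threefold volatility difference, the filtered probabilities track the latent state closely most of the time.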

Relevance:

30.00%

Publisher:

Abstract:

Latent variable models in finance originate both from asset pricing theory and time series analysis. These two strands of literature appeal to two different concepts of latent structures, which are both useful to reduce the dimension of a statistical model specified for a multivariate time series of asset prices. In the CAPM or APT beta pricing models, the dimension reduction is cross-sectional in nature, while in time-series state-space models, dimension is reduced longitudinally by assuming conditional independence between consecutive returns, given a small number of state variables. In this paper, we use the concept of Stochastic Discount Factor (SDF) or pricing kernel as a unifying principle to integrate these two concepts of latent variables. Beta pricing relations amount to characterizing the factors as a basis of a vector space for the SDF. The coefficients of the SDF with respect to the factors are specified as deterministic functions of some state variables which summarize their dynamics. In beta pricing models, it is often said that only the factorial risk is compensated since the remaining idiosyncratic risk is diversifiable. Implicitly, this argument can be interpreted as a conditional cross-sectional factor structure, that is, a conditional independence between contemporaneous returns of a large number of assets, given a small number of factors, as in standard Factor Analysis. We provide this unifying analysis in the context of conditional equilibrium beta pricing as well as asset pricing with stochastic volatility, stochastic interest rates and other state variables. We address the general issue of econometric specifications of dynamic asset pricing models, which cover the modern literature on conditionally heteroskedastic factor models as well as equilibrium-based asset pricing models with an intertemporal specification of preferences and market fundamentals.
We interpret various instantaneous causality relationships between state variables and market fundamentals as leverage effects and discuss their central role relative to the validity of standard CAPM-like stock pricing and preference-free option pricing.
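The unifying SDF view sketched above can be stated compactly; this is the textbook form of the pricing-kernel and beta-pricing relations, not the paper's exact notation:

```latex
% Pricing-kernel restriction, conditional on the information set I_t:
\[
  E\!\left[\, m_{t+1}\, R_{i,t+1} \mid I_t \,\right] = 1
  \qquad \text{for every asset } i .
\]
% SDF spanned by the factors, with coefficients driven by state variables s_t:
\[
  m_{t+1} = \theta_0(s_t) + \sum_{k=1}^{K} \theta_k(s_t)\, F_{k,t+1} .
\]
% Substituting the second equation into the first yields the conditional
% beta-pricing relation, with risk premia that inherit the dynamics of s_t:
\[
  E\!\left[ R_{i,t+1} \mid I_t \right] - r_{f,t}
  = \sum_{k=1}^{K} \beta_{ik,t}\, \lambda_{k,t} .
\]
```

Here the factors being a "basis for the SDF" is exactly the second equation, and the cross-sectional dimension reduction is the third.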

Relevance:

30.00%

Publisher:

Abstract:

We propose finite sample tests and confidence sets for models with unobserved and generated regressors as well as various models estimated by instrumental variables methods. The validity of the procedures is unaffected by the presence of identification problems or "weak instruments", so no detection of such problems is required. We study two distinct approaches for various models considered by Pagan (1984). The first one is an instrument substitution method which generalizes an approach proposed by Anderson and Rubin (1949) and Fuller (1987) for different (although related) problems, while the second one is based on splitting the sample. The instrument substitution method uses the instruments directly, instead of generated regressors, in order to test hypotheses about the "structural parameters" of interest and build confidence sets. The second approach relies on "generated regressors", which allows a gain in degrees of freedom, and a sample split technique. For inference about general possibly nonlinear transformations of model parameters, projection techniques are proposed. A distributional theory is obtained under the assumptions of Gaussian errors and strictly exogenous regressors. We show that the various tests and confidence sets proposed are (locally) "asymptotically valid" under much weaker assumptions. The properties of the tests proposed are examined in simulation experiments. In general, they outperform the usual asymptotic inference methods in terms of both reliability and power. Finally, the techniques suggested are applied to a model of Tobin's q and to a model of academic performance.
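The instrument-substitution idea can be sketched numerically: under H0: β = β0, the quantity y − xβ0 should be unrelated to the instruments, so an F-test of the instruments in that auxiliary regression gives an Anderson-Rubin-type statistic, and inverting the test over a grid of β0 values yields a confidence set. The data-generating numbers below are hypothetical, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated IV model: y = x*beta + u, with x endogenous (u and x share
# the component v) and two instruments z1, z2.
n, beta_true = 400, 1.0
z = rng.normal(size=(n, 2))
v = rng.normal(size=n)
u = 0.8 * v + 0.6 * rng.normal(size=n)     # endogeneity via shared v
x = z @ np.array([1.0, 0.5]) + v
y = x * beta_true + u

def ar_stat(beta0):
    """Anderson-Rubin-type statistic: F-test of the instruments' joint
    significance in a regression of y - x*beta0 on [1, z1, z2]."""
    e = y - x * beta0
    Z = np.column_stack([np.ones(n), z])
    coef, *_ = np.linalg.lstsq(Z, e, rcond=None)
    rss1 = ((e - Z @ coef) ** 2).sum()
    rss0 = ((e - e.mean()) ** 2).sum()      # restricted: instruments excluded
    k = 2                                    # number of instruments tested
    return ((rss0 - rss1) / k) / (rss1 / (n - Z.shape[1]))

# Invert the test: the confidence set collects every beta0 whose statistic
# stays below the F(2, n-3) critical value (roughly 3.02 at the 5% level).
grid = np.linspace(0.0, 2.0, 201)
crit = 3.02
conf_set = [b for b in grid if ar_stat(b) < crit]
print(round(min(conf_set), 2), round(max(conf_set), 2))
```

The set is valid whatever the instrument strength; with weak instruments it simply becomes wide (possibly unbounded), rather than misleadingly tight.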

Relevance:

30.00%

Publisher:

Abstract:

The GARCH and Stochastic Volatility paradigms are often brought into conflict as two competitive views of the appropriate conditional variance concept: conditional variance given past values of the same series, or conditional variance given a larger past information set (including possibly unobservable state variables). The main thesis of this paper is that, since in general the econometrician has no idea about something like a structural level of disaggregation, a well-written volatility model should be specified in such a way that one is always allowed to reduce the information set without invalidating the model. In this respect, the debate between observable past information (in the GARCH spirit) versus unobservable conditioning information (in the state-space spirit) is irrelevant. In this paper, we stress a square-root autoregressive stochastic volatility (SR-SARV) model which remains true to the GARCH paradigm of ARMA dynamics for squared innovations but weakens the GARCH structure in order to obtain required robustness properties with respect to various kinds of aggregation. It is shown that the lack of robustness of the usual GARCH setting is due to two very restrictive assumptions: perfect linear correlation between squared innovations and conditional variance on the one hand, and a linear relationship between the conditional variance of the future conditional variance and the squared conditional variance on the other hand. By relaxing these assumptions, thanks to a state-space setting, we obtain aggregation results without renouncing the conditional variance concept (and related leverage effects), as is the case for the recently suggested weak GARCH model, which gets aggregation results by replacing conditional expectations by linear projections on symmetric past innovations.
Moreover, unlike the weak GARCH literature, we are able to define multivariate models, including higher order dynamics and risk premiums (in the spirit of GARCH(p,p) and GARCH-in-mean), and to derive conditional moment restrictions well suited for statistical inference. Finally, we are able to characterize the exact relationships between our SR-SARV models (including higher order dynamics, leverage effect and in-mean effect), usual GARCH models and continuous time stochastic volatility models, so that previous results about aggregation of weak GARCH and continuous time GARCH modeling can be recovered in our framework.
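The "ARMA dynamics for squared innovations" that the paper takes as its starting point can be checked directly: for a standard GARCH(1,1), the identity below holds exactly, path by path. The parameters are illustrative, not estimates:

```python
import numpy as np

rng = np.random.default_rng(0)
omega, alpha, beta = 0.1, 0.1, 0.85   # hypothetical GARCH(1,1) parameters
T = 5_000

eps = np.empty(T)
sig2 = np.empty(T)
sig2[0] = omega / (1 - alpha - beta)   # start at the unconditional variance
eps[0] = np.sqrt(sig2[0]) * rng.standard_normal()
for t in range(1, T):
    sig2[t] = omega + alpha * eps[t-1]**2 + beta * sig2[t-1]
    eps[t] = np.sqrt(sig2[t]) * rng.standard_normal()

# ARMA(1,1) representation of squared innovations:
#   eps_t^2 = omega + (alpha+beta) * eps_{t-1}^2 + v_t - beta * v_{t-1},
# where v_t = eps_t^2 - sig2_t is a martingale difference.
v = eps**2 - sig2
lhs = eps[1:]**2
rhs = omega + (alpha + beta) * eps[:-1]**2 + v[1:] - beta * v[:-1]
print(np.allclose(lhs, rhs))  # holds exactly, by construction
```

The SR-SARV class keeps this ARMA structure while dropping the two restrictive GARCH assumptions the abstract names, which is what buys the aggregation robustness.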

Relevance:

30.00%

Publisher:

Abstract:

We propose an alternate parameterization of stationary regular finite-state Markov chains, and a decomposition of the parameter into time reversible and time irreversible parts. We demonstrate some useful properties of the decomposition, and propose an index for a certain type of time irreversibility. Two empirical examples illustrate the use of the proposed parameter, decomposition and index. One involves observed states; the other, latent states.
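A minimal numerical sketch of one such decomposition (a hypothetical 3-state chain, not the paper's examples or its exact parameterization): the stationary flow matrix splits into a symmetric, time-reversible part and an antisymmetric, irreversible part, and the share of flow carried by the antisymmetric part serves as a crude irreversibility index:

```python
import numpy as np

# Hypothetical 3-state transition matrix (rows sum to 1).
P = np.array([[0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6],
              [0.5, 0.3, 0.2]])

# Stationary distribution: left eigenvector of P for eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi = pi / pi.sum()

# Stationary flow matrix F[i, j] = pi_i * P[i, j], split into a symmetric
# (time-reversible) part S and an antisymmetric (irreversible) part A.
F = pi[:, None] * P
S = (F + F.T) / 2
A = (F - F.T) / 2

# One simple index: the fraction of total flow carried by A.
# The chain is time reversible iff A vanishes; this chain is not
# (Kolmogorov's cycle condition fails for the 3-cycle 0->1->2->0).
irrev_index = np.abs(A).sum() / F.sum()
print(round(irrev_index, 4))
```

This is only an illustration of the reversible/irreversible split; the paper's own parameterization and index may differ in form.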

Relevance:

30.00%

Publisher:

Abstract:

The aim of this thesis is to extend bootstrap theory to panel data models. Panel data are obtained by observing several statistical units over several time periods. Their double dimension, individual and temporal, makes it possible to control for unobservable heterogeneity across individuals and across time periods, and thus to conduct richer studies than with time series or cross-sectional data. The advantage of the bootstrap is that it permits more precise inference than classical asymptotic theory, or inference that would otherwise be impossible in the presence of nuisance parameters. The method consists of drawing random samples that resemble the analysis sample as closely as possible. The statistical object of interest is estimated on each of these random samples, and the set of estimated values is used for inference. The literature contains some applications of the bootstrap to panel data without rigorous theoretical justification, or under strong assumptions. This thesis proposes a bootstrap method better suited to panel data. Its three chapters analyze the method's validity and application. The first chapter posits a simple model with a single parameter and tackles the theoretical properties of the estimator of the mean. We show that the double resampling we propose, which accounts for both the individual dimension and the temporal dimension, is valid in these models. Resampling only in the individual dimension is not valid in the presence of temporal heterogeneity, and resampling only in the temporal dimension is not valid in the presence of individual heterogeneity. The second chapter extends the first to the linear panel regression model.
Three types of regressors are considered: individual characteristics, temporal characteristics, and regressors that vary over both time and individuals. Using a two-way error-components model, the ordinary least squares estimator, and the residual bootstrap, we show that resampling in the individual dimension alone is valid for inference on the coefficients associated with regressors that vary only across individuals. Resampling in the temporal dimension alone is valid only for the subvector of parameters associated with regressors that vary only over time. Double resampling, for its part, is valid for inference on the full parameter vector. The third chapter re-examines the difference-in-differences exercise of Bertrand, Duflo and Mullainathan (2004). This estimator is commonly used in the literature to evaluate the impact of public policies. The empirical exercise uses panel data from the Current Population Survey on women's wages in the 50 states of the United States from 1979 to 1999. Placebo state-level policy interventions are generated, and the tests are expected to conclude that these placebo policies have no effect on women's wages. Bertrand, Duflo and Mullainathan (2004) show that failing to account for heterogeneity and temporal dependence leads to substantial size distortions when evaluating the impact of public policies with panel data. One of the recommended solutions is to use the bootstrap. The double resampling method developed in this thesis corrects the size problem and thus allows the impact of public policies to be evaluated correctly.
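The double-resampling scheme can be sketched on a toy panel with both individual and temporal heterogeneity; the design below is purely illustrative, not the thesis's formal setup:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical balanced panel: N individuals, T periods, with both an
# individual effect and a time effect (the case where resampling in only
# one dimension fails).
N, T = 30, 20
alpha_i = rng.normal(0, 1, size=(N, 1))   # individual heterogeneity
gamma_t = rng.normal(0, 1, size=(1, T))   # temporal heterogeneity
y = 2.0 + alpha_i + gamma_t + rng.normal(0, 0.5, size=(N, T))

def double_bootstrap_means(y, B=999, rng=rng):
    """Draw B double-resampled panels: sample individuals (rows) and
    periods (columns) with replacement, independently, and return the
    grand mean of each pseudo-panel."""
    N, T = y.shape
    out = np.empty(B)
    for b in range(B):
        rows = rng.integers(0, N, size=N)
        cols = rng.integers(0, T, size=T)
        out[b] = y[np.ix_(rows, cols)].mean()
    return out

boot = double_bootstrap_means(y)
lo, hi = np.percentile(boot, [2.5, 97.5])  # percentile confidence interval
print(round(lo, 2), round(hi, 2))
```

Resampling only rows would ignore the spread induced by gamma_t, and resampling only columns would ignore alpha_i; drawing both dimensions captures both sources of variability.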

Relevance:

30.00%

Publisher:

Abstract:

In this thesis, I study the effects of oil price fluctuations on macroeconomic activity according to the underlying cause of those fluctuations. The economic models used are mainly Dynamic Stochastic General Equilibrium (DSGE) models and Vector Autoregression (VAR) models. Many studies have examined the effects of oil price fluctuations on the main macroeconomic variables, but very few have specifically linked those effects to the origin of the fluctuations. Yet it is widely accepted in the more recent literature that oil price increases can have very different effects depending on their underlying cause. My thesis, organized in three chapters, pays particular attention to the sources of oil price fluctuations and their impact on macroeconomic activity in general, and on the Canadian economy in particular. The first chapter examines how oil supply shocks, aggregate demand shocks, and precautionary oil demand shocks affect the Canadian economy in an estimated DSGE model. The estimation is carried out by Bayesian methods, using quarterly Canadian data from 1983Q1 to 2010Q4. The results show that the dynamic effects of oil price fluctuations on the main Canadian macroeconomic aggregates vary with their source. In particular, a 10% increase in the real price of oil caused by positive shocks to foreign aggregate demand has a significant positive effect of about 0.4% on Canadian real GDP on impact, and the effect remains positive at all horizons.
By contrast, an increase in the real price of oil caused by negative oil supply shocks or by positive precautionary oil demand shocks has a negligible effect on Canadian real GDP on impact, but causes a slightly significant decline after impact. Moreover, among the identified oil shocks, shocks to foreign aggregate demand were relatively more important in explaining fluctuations in the main Canadian macroeconomic aggregates over the estimation period. The second chapter uses a panel structural VAR model to examine the links between oil demand and supply shocks and the adjustment of labour demand and wages in Canadian manufacturing industries. The model is estimated on annual data disaggregated at the industry level over the period 1975 to 2008. The main results suggest that a positive aggregate demand shock raises labour demand and wages in both the short run and the long run. A negative oil supply shock has a relatively small negative effect on impact, but the effect turns positive after the first year. By contrast, a positive precautionary oil demand shock has a negative impact at all horizons. Industry-by-industry estimates confirm the panel results. The chapter also examines how the effects of the different oil shocks on labour demand and wages vary with the degree of trade exposure and the energy intensity of production. Industries that are highly exposed to international trade and industries that are highly energy-intensive turn out to be more vulnerable to oil price fluctuations caused by oil supply shocks or aggregate demand shocks.
The final chapter examines the welfare implications of introducing oil inventories into the world market, using a three-country DSGE model with two oil-importing countries and one oil-exporting country. Welfare gains are measured by the compensating variation in consumption under two monetary policy rules. The main results show that introducing oil inventories reduces consumer welfare in each of the two oil-importing countries, while it raises consumer welfare in the oil-exporting country, whichever monetary policy rule applies. Furthermore, including exchange rate depreciation in the monetary policy rules reduces the social costs for the oil-importing countries. Finally, the magnitude of the welfare effects depends on the steady-state level of oil inventories and is mainly explained by shocks to oil inventories.

Relevance:

30.00%

Publisher:

Abstract:

The aim of this thesis is to explore the literature on educational justice. The study is divided along three axes. The first concerns access to education. Liberalism rests on at least four broad principles: (1) individuals are free and equal; (2) all individuals are entitled to equal chances of carrying out their life plans; (3) individuals hold a set of rights guaranteed by society; (4) the state adopts a posture of neutrality. Starting from these values, we establish links with the need for access to education. Second, this thesis studies three models of schooling: the parental school, the state school, and the autonomy-oriented school. Following Harry Brighouse, we argue that autonomy-oriented education is the objective most consistent with the values of liberalism, including the imperative of neutrality, and with the interests of young people. In the final part of this study, we examine three conceptions of equality: equality of resources (Jean-Fabien Spitz), equality of opportunity for welfare (Richard Arneson), and sufficientarianism (Debra Satz). To judge their respective merits, we attempt to apply them to the education system in order to bring out their strengths and weaknesses.

Relevance:

30.00%

Publisher:

Abstract:

This study analyses some queueing models related to the N-policy: the server is turned on when the queue size reaches a certain number N and turned off when the system is empty, and the question is the optimal value N should take. The operating policy is the usual N-policy, but with random N; model 2 considers a system similar to the one described here. The study also analyses a tandem queue with two servers, where the first server is assumed to be a specialized one. In a queueing system under the N-policy, the server remains on vacation after becoming idle until N units accumulate for the first time. A modified version of the N-policy for an M/M/1 queueing system is considered here. The novel feature of this model is that a busy service unit prevents the access of new customers to servers further down the line. The final model consists of two servers connected in series with a finite intermediate waiting room of capacity k, where server 1 is assumed to be a specialized server. For this model, the steady-state probability vector and the stability condition are obtained using the matrix-geometric method.
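The basic N-policy is easy to simulate; the sketch below checks a long-run average against the known decomposition result E[L] = ρ/(1−ρ) + (N−1)/2 for the M/M/1 queue under a fixed (non-random) N. The parameters are illustrative:

```python
import random

def simulate_n_policy(lam, mu, N, num_events=200_000, seed=1):
    """Discrete-event simulation of an M/M/1 queue under the N-policy:
    the server stays off until N customers accumulate, then serves
    until the system empties and switches off again."""
    random.seed(seed)
    t, q, server_on = 0.0, 0, False
    area = 0.0  # time-integral of the number in system
    for _ in range(num_events):
        rate = lam + (mu if server_on and q > 0 else 0.0)
        dt = random.expovariate(rate)
        area += q * dt
        t += dt
        if random.random() < lam / rate:     # next event is an arrival
            q += 1
            if not server_on and q >= N:
                server_on = True             # N-th arrival turns the server on
        else:                                # next event is a departure
            q -= 1
            if q == 0:
                server_on = False            # server switches off when empty
    return area / t                          # long-run average number in system

L_sim = simulate_n_policy(lam=0.5, mu=1.0, N=3)
# Decomposition result for M/M/1 under N-policy: the extra queue length
# beyond the ordinary M/M/1 term rho/(1-rho) has mean (N-1)/2.
L_theory = 0.5 / (1 - 0.5) + (3 - 1) / 2
print(round(L_sim, 2), L_theory)
```

Note this covers only the fixed-N single-server case; the random-N and tandem variants in the study require the matrix-geometric machinery it describes.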

Relevance:

30.00%

Publisher:

Abstract:

Data mining is one of the most active research areas today, with a wide variety of applications in everyday life. It is about finding interesting hidden patterns in a large historical database. As an example, from a sales database one can discover a pattern like "people who buy magazines tend to buy newspapers also" using data mining. From the sales point of view, the advantage is that these items can be placed together in the shop to increase sales. In this research work, data mining is applied to a domain called placement chance prediction, since taking a wise career decision is crucial for anybody. In India, technical manpower analysis is carried out by an organization named the National Technical Manpower Information System (NTMIS), established in 1983-84 by India's Ministry of Education & Culture. The NTMIS comprises a lead centre in the IAMR, New Delhi, and 21 nodal centres located in different parts of the country. The Kerala State Nodal Centre is located at Cochin University of Science and Technology. The Nodal Centre collects placement information by sending postal questionnaires to graduates on a regular basis. From this raw data available in the Nodal Centre, a historical database was prepared. Each record in this database includes entrance rank range, reservation, sector, sex, and a particular engineering branch. For each such combination of attributes from the historical database of student records, the corresponding placement chance is computed and stored. From this data, various popular data mining models are built and tested. These models can be used to predict the most suitable branch for a new student with a given combination of the above criteria.
A detailed performance comparison of the various data mining models is also carried out. This research work proposes to use a combination of data mining models, namely a hybrid stacking ensemble, for better predictions. A strategy to predict the overall absorption rate for various branches, as well as the time it takes for all the students of a particular branch to get placed, is also proposed. Finally, this research work puts forward a new data mining algorithm, C4.5*stat, for numeric data sets, which has been shown to achieve competitive accuracy on the standard UCI benchmark data sets. It also proposes an optimization strategy called parameter tuning to improve the standard C4.5 algorithm. In summary, this research work passes through all four dimensions of a typical data mining research work: application to a domain, development of classifier models, optimization, and ensemble methods.
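The stacking idea can be sketched from scratch: base learners are trained on folds, their out-of-fold predictions become meta-features, and a meta-learner combines them. Everything below (the synthetic features, the two toy base learners) is a hypothetical stand-in for the NTMIS data and the thesis's actual model mix:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data standing in for placement records.
n = 400
X = rng.normal(size=(n, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.normal(size=n) > 0).astype(int)

def fit_centroid(Xtr, ytr):
    """Base learner 1: nearest-centroid classifier."""
    c0, c1 = Xtr[ytr == 0].mean(axis=0), Xtr[ytr == 1].mean(axis=0)
    return lambda Z: (np.linalg.norm(Z - c1, axis=1)
                      < np.linalg.norm(Z - c0, axis=1)).astype(float)

def fit_stump(Xtr, ytr):
    """Base learner 2: one-feature threshold rule with fewest training errors."""
    best = (0, 0.0, 1.1)
    for j in range(Xtr.shape[1]):
        for thr in np.quantile(Xtr[:, j], np.linspace(0.1, 0.9, 9)):
            err = ((Xtr[:, j] > thr).astype(int) != ytr).mean()
            if err < best[2]:
                best = (j, thr, err)
    j, thr, _ = best
    return lambda Z: (Z[:, j] > thr).astype(float)

# Stacking: 5-fold out-of-fold predictions of the base learners form the
# meta-features, so the meta-learner never sees in-sample base outputs.
folds = np.arange(n) % 5
meta = np.zeros((n, 2))
for k in range(5):
    tr, te = folds != k, folds == k
    meta[te, 0] = fit_centroid(X[tr], y[tr])(X[te])
    meta[te, 1] = fit_stump(X[tr], y[tr])(X[te])

# Logistic meta-learner fitted by batch gradient descent.
Z = np.column_stack([np.ones(n), meta])
w = np.zeros(3)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-Z @ w))
    w -= 0.5 * Z.T @ (p - y) / n

acc = (((Z @ w) > 0) == y.astype(bool)).mean()
print(round(acc, 3))
```

The out-of-fold construction is the essential ingredient; it is what lets the meta-learner weight the base models by their genuine generalization behaviour rather than their training fit.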