969 results for Pelczynski's decomposition method
Abstract:
The processes of seismic wave propagation in phase space and of one-way wave extrapolation in the frequency-space domain are, in the absence of dissipation, essentially transformations under the action of one-parameter Lie groups. Consequently, the numerical methods used to compute the propagation ought to be Lie group transformations too, which is known as the Lie group method. Following a fruitful study of fast methods for matrix inversion, some Lie group methods for seismic numerical modeling and depth migration are presented here. First, the Lie group description and method for seismic wave propagation in phase space is proposed; this is, in other words, a symplectic group description and method for seismic wave propagation, since the symplectic group is a Lie subgroup and the symplectic method is a special Lie group method. In the Hamiltonian framework, the propagation of a seismic wave is a one-parameter symplectic group transformation, and consequently the numerical methods for computing the propagation ought to be symplectic methods. After discretizing the wavefield in time and phase space, many explicit, implicit, and leapfrog symplectic schemes are derived for numerical modeling. Compared with symplectic schemes, the finite-difference (FD) method is an approximation of the symplectic method. Explicit, implicit, and leapfrog symplectic schemes and the FD method are therefore applied under the same conditions to compute wavefields for a constant-velocity model, a synthetic model, and the Marmousi model. The results illustrate the potential power of the symplectic methods. As an application, the symplectic method is employed to produce a synthetic seismic record of the Qinghai foothills model. Another application is the development of a ray+symplectic reverse-time migration method.
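The leapfrog symplectic schemes mentioned above can be illustrated with a minimal sketch (not the thesis's actual schemes): a Störmer-Verlet integrator for the 1-D acoustic wave equation written as a Hamiltonian system. All parameters below are hypothetical.

```python
import numpy as np

# Störmer-Verlet (leapfrog) integration of the 1-D acoustic wave equation
# u_tt = c^2 u_xx, written as the Hamiltonian system u_t = p, p_t = c^2 u_xx.
nx, dx, c = 200, 1.0, 1.0
dt = 0.5 * dx / c                                  # CFL-stable time step
x = np.arange(nx) * dx
u = np.exp(-0.01 * (x - 0.5 * nx * dx) ** 2)       # Gaussian initial field
p = np.zeros(nx)                                   # conjugate momentum (u_t)

def laplacian(f):
    # second-order centered difference, periodic boundaries
    return (np.roll(f, -1) - 2.0 * f + np.roll(f, 1)) / dx**2

def energy(u, p):
    ux = (np.roll(u, -1) - u) / dx
    return 0.5 * np.sum(p**2 + (c * ux) ** 2) * dx

e0 = energy(u, p)
for _ in range(1000):
    p += 0.5 * dt * c**2 * laplacian(u)            # half kick
    u += dt * p                                    # drift
    p += 0.5 * dt * c**2 * laplacian(u)            # half kick

drift = abs(energy(u, p) - e0) / e0
print(f"relative energy drift after 1000 steps: {drift:.2e}")
```

Because the update is symplectic, the discrete energy oscillates but shows no secular drift, which is the property that distinguishes symplectic schemes from generic FD time-stepping.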
To strike a reasonable balance between computational efficiency and accuracy, we combine the multi-valued wavefield and Green's function algorithm with symplectic reverse-time migration, and thus develop a new ray+wave-equation prestack depth migration method. Marmousi model data and Qinghai foothills model data are processed here. The results show that our method is a better alternative to ray migration for imaging complex structures. Similarly, the extrapolation of a one-way wave in the frequency-space domain is a Lie group transformation with one parameter Z, and consequently the numerical methods for computing the extrapolation ought to be Lie group methods. After discretizing the wavefield in depth and space, the Lie group transformation takes the form of a matrix exponential, and each approximation of it yields a Lie group algorithm. Although the Padé symmetric series approximation of the matrix exponential gives an extrapolation method that is traditionally regarded as implicit FD migration, it benefits both the theoretical and the applied study of seismic imaging because it represents depth extrapolation and migration in an entirely different way. Meanwhile, the technique of coordinates of the second kind for approximating the matrix exponential opens a new way to develop migration operators. Matrix inversion plays a vital role in the numerical migration method given by the Padé symmetric series approximation. The matrix has a Toeplitz structure with a helical boundary condition and is easy to invert with LU decomposition. An efficient LU decomposition method is spectral factorization: after the minimum-phase correlative function of each array of the matrix has been obtained by a spectral factorization method, all of the functions are arranged according to their former locations to form a lower triangular matrix. The major merit of LU decomposition with spectral factorization (SF decomposition) is its efficiency in dealing with a large number of matrices.
Once a table of the spectral factorization results for each array of the matrix has been set up, SF decomposition can produce the lower triangular matrix simply by reading the table. However, the relationships among the arrays are ignored in this method, which introduces decomposition errors; for numerical calculations on complex models in particular, these errors are fatal. Direct elimination can give the exact LU decomposition, but even when simplified for our case, the large number of decompositions costs an unendurable amount of computer time. A hybrid method is proposed here that combines spectral factorization with direct elimination. Its decomposition error is ten times smaller than that of spectral factorization, and it is considerably faster than direct elimination, especially when dealing with a large number of matrices. With the hybrid method, 3D implicit migration can be expected to be applied to real seismic data. Finally, the impulse response of the 3D implicit migration operator is presented.
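The intuition behind the SF decomposition, that the triangular factor of a Toeplitz matrix is built from shifted copies of a single minimum-phase filter, can be checked numerically on a toy tridiagonal Toeplitz matrix (values hypothetical; the thesis works with helical-boundary matrices from the Padé-based operator):

```python
import numpy as np

n = 200
# Tridiagonal, positive-definite Toeplitz matrix (hypothetical values).
T = (np.diag(np.full(n, 2.5))
     + np.diag(np.full(n - 1, -1.0), 1)
     + np.diag(np.full(n - 1, -1.0), -1))
L = np.linalg.cholesky(T)            # T = L L^T, L lower triangular

# Away from the boundary, each row of L is a shifted copy of the same
# two-tap minimum-phase filter, which is what lets a precomputed table of
# spectral-factorization results stand in for the full decomposition.
row_a = L[n - 2, n - 3:n - 1]
row_b = L[n - 1, n - 2:n]
print(row_a, row_b)                  # nearly identical shifted rows
```

The hybrid method in the abstract corrects exactly the near-boundary rows where this shift-invariance breaks down.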
Abstract:
In this paper, we critically examine a special class of graph matching algorithms that follow the approach of node-similarity measurement. A high-level algorithmic framework, namely the node-similarity graph matching framework (NSGM framework), is proposed, from which many existing graph matching algorithms can be subsumed, including the eigen-decomposition method of Umeyama, the polynomial-transformation method of Almohamad, the hubs-and-authorities method of Kleinberg, and the Kronecker product successive projection methods of van Wyk, etc. In addition, improved algorithms can be developed from the NSGM framework with respect to the corresponding results in graph theory. As an observation, it is pointed out that, in general, any algorithm subsumed by the NSGM framework fails to work well for graphs with non-trivial automorphism structure.
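As a minimal sketch of this kind of algorithm (illustrative only, not any specific published method): a coupled node-similarity recursion in the spirit of the hubs-and-authorities approach the framework subsumes, followed by a greedy assignment of the most similar node pairs.

```python
import numpy as np

def node_similarity(A, B, iters=20):
    """Iterate the coupled similarity recursion S <- B S A^T + B^T S A,
    normalized each step; S[i, j] scores node i of B against node j of A."""
    S = np.ones((B.shape[0], A.shape[0]))
    for _ in range(iters):
        S = B @ S @ A.T + B.T @ S @ A
        S /= np.linalg.norm(S)
    return S

def greedy_match(S):
    """Greedily pair the most similar (row, col) nodes."""
    S = S.copy()
    match = [-1] * S.shape[1]
    for _ in range(S.shape[1]):
        i, j = np.unravel_index(np.argmax(S), S.shape)
        match[j] = i
        S[i, :] = -np.inf
        S[:, j] = -np.inf
    return match

# Directed path 0 -> 1 -> 2 matched against itself: no nontrivial
# automorphism, so the matching recovers the identity mapping.
A = np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]], float)
print(greedy_match(node_similarity(A, A)))
```

For graphs with nontrivial automorphisms, rows of the similarity matrix tie and the assignment becomes ambiguous, which is precisely the failure mode the paper points out.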
Abstract:
A new three-limb, six-degree-of-freedom (DOF) parallel manipulator (PM), termed a selectively actuated PM (SA-PM), is proposed. The end-effector of the manipulator can produce 3-DOF spherical motion, 3-DOF translation, 3-DOF hybrid motion, or complete 6-DOF spatial motion, depending on the type of actuation (rotary or linear) chosen for the actuators. The manipulator architecture completely decouples the translation and rotation of the end-effector for individual control. The structure synthesis of the SA-PM is achieved using line geometry. Singularity analysis shows that the SA-PM is an isotropic translational PM when all the actuators are in linear mode. Because of the decoupled motion structure, a decomposition method is applied for both the displacement analysis and the dimension optimization. The geometrical parameters are optimized with respect to an index of maximal workspace subject to given global conditioning requirements. As a result, the translational workspace is a cube, and the orientation workspace is nearly unlimited.
Abstract:
We present here evidence for the observation of magnetohydrodynamic (MHD) sausage modes in magnetic pores in the solar photosphere. Further evidence for the omnipresent nature of acoustic global modes is also found. The empirical decomposition method of wave analysis is used to identify the oscillations detected through a 4170 Å "blue continuum" filter observed with the Rapid Oscillations in the Solar Atmosphere (ROSA) instrument. Out-of-phase periodic behavior in pore size and intensity is used as an indicator of the presence of magnetoacoustic sausage oscillations. Multiple signatures of the magnetoacoustic sausage mode are found in a number of pores. The periods range from as short as 30 s up to 450 s. A number of the magnetoacoustic sausage-mode oscillations found have periods of 3 and 5 minutes, similar to the acoustic global modes of the solar interior. It is proposed that these global oscillations could be the driver of the sausage-type magnetoacoustic MHD wave modes in pores.
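The out-of-phase criterion described above can be sketched on synthetic series (illustrative only; the actual analysis uses the ROSA observations and an empirical decomposition of the signals):

```python
import numpy as np

# Sausage-mode signature sketch: anti-phase behaviour of pore area and
# intensity, checked with a zero-lag correlation. Both series are synthetic
# stand-ins with a 5-minute (300 s) period.
t = np.linspace(0, 900, 1800)                    # 900 s at 0.5 s cadence
area      = 1.0 + 0.05 * np.sin(2 * np.pi * t / 300)
intensity = 1.0 - 0.04 * np.sin(2 * np.pi * t / 300)   # anti-phase

r = np.corrcoef(area, intensity)[0, 1]
print(f"zero-lag correlation: {r:.2f}")          # close to -1 -> anti-phase
```

A correlation near −1 indicates the anti-phase size/intensity behaviour used in the paper as a sausage-mode indicator; real data would of course require detrending and significance testing first.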
Abstract:
In this paper, a hardware solution for multi-field packet classification is presented. The proposed scheme focuses on a new architecture based on the decomposition method. A hash circuit is used to reduce the memory space required by the Recursive Flow Classification (RFC) algorithm. The implementation results show that the proposed architecture achieves a significant performance advantage comparable to that of some well-known algorithms. The solution is based on Altera Stratix III FPGA technology.
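The decomposition approach behind RFC can be sketched in a few lines (a software analogy of the hardware pipeline; all rules, addresses, and table contents below are hypothetical):

```python
# Phase 1: each header field is reduced independently to a small
# equivalence-class ID via a per-field lookup table.
src_classes   = {"10.0.0.5": 0, "10.0.0.9": 0, "10.0.1.7": 1}
proto_classes = {"tcp": 0, "udp": 1}

# Phase 2: the ID tuple is resolved through a single table lookup. Only the
# combinations that actually occur are stored, and this is the memory that
# the paper's hash circuit compresses further in hardware.
cross = {(0, 0): "allow",      # 10.0.0.* AND tcp
         (1, 0): "deny",       # 10.0.1.* AND any protocol
         (1, 1): "deny"}

def classify(src, proto):
    key = (src_classes[src], proto_classes[proto])
    return cross.get(key, "no-match")

print(classify("10.0.0.5", "tcp"))   # allow
print(classify("10.0.1.7", "udp"))   # deny
```

The decomposition keeps each lookup table small: a full cross-product over raw field values would grow multiplicatively, while the class-ID tables grow only with the number of distinct per-field behaviours.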
Abstract:
The selective hydrogenation of acetylene to ethylene on several Pd surfaces (Pd(111), Pd(100), Pd(211), and Pd(211)-defect), on Pd surfaces with subsurface species (carbon and hydrogen), and on a number of Pd-based alloys (Pd-M/Pd(111) and Pd-M/Pd(211), M = Cu, Ag and Au) is investigated using density functional theory calculations to understand both the acetylene hydrogenation activity and the selectivity of ethylene formation. All the hydrogenation barriers are calculated, and the reaction rates on these surfaces are obtained using a two-step model. Pd(211) is found to have the highest activity for acetylene hydrogenation, while Pd(100) gives rise to the lowest. In addition, the more open surfaces result in over-hydrogenation to ethane, while the close-packed surface, Pd(111), is the most selective. However, we also find that the presence of subsurface carbon and hydrogen significantly changes the reactivity and selectivity of Pd surfaces toward acetylene hydrogenation. On forming surface alloys of Pd with Cu, Ag and Au, the selectivity for ethylene is also changed. A new energy decomposition method is used to quantitatively analyze the factors determining the changes in selectivity. These surface modifiers are found to block low-coordination unselective sites, leading to decreased ethane production. (C) 2013 The Authors. Published by Elsevier Inc. All rights reserved.
Abstract:
Partial hydrogenation of acrolein, the simplest α,β-unsaturated aldehyde, is not only a model system for understanding selectivity in heterogeneous catalysis but also a technologically important reaction. In this work, the reaction on Pt(211) and Au(211) surfaces is thoroughly investigated using density functional theory calculations. The formation routes of three partial hydrogenation products, namely propenol, propanal and enol, are studied on both metals. It is found that the pathway producing enol is kinetically favoured on Pt, while on Au the route forming propenol is preferred. Our calculations also show that propanal formation follows an indirect pathway on Pt(211). An energy decomposition method for analyzing the barriers is used to understand the selectivities on Pt(211) and Au(211); it reveals that the interaction energies between the reactants involved in the transition states play a key role in determining the difference in selectivity.
Abstract:
This dissertation essentially studies two problems: (A) a class of one-dimensional reaction-diffusion-convection equations in non-uniform (space-dependent) media, and (B) a nonlinear, parametric elliptic problem connected to capillarity phenomena. Singular Perturbation Analysis and Hamilton-Jacobi dynamics are used to obtain asymptotic expressions for the (front-like) solution and for its propagation speed. Three decomposition methods, the Adomian Decomposition Method (ADM), the Decomposition Method based on Infinite Products (DIP), and the New Iterative Method (NIM), are presented and briefly compared. In addition, sufficient conditions for the convergence of the series solution obtained by the ADM are discussed, together with an application to a problem in fiber-optic telecommunications involving nonlinear ODEs known as the Raman equations. A broader viewpoint that unifies these decomposition methods is also presented. For subclasses of this PDE, solutions in explicit form are obtained for different types of data, using a variant of the Bluman-Cole symmetry method. Using Critical Point Theory (the theorem usually called the mountain pass theorem) and truncation techniques, the existence of two nontrivial solutions (one positive and one negative) of the nonlinear, parametric elliptic problem (B) is proved. The existence of a third nontrivial solution is established using Critical Groups and Morse Theory.
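The ADM mentioned above can be sketched on a toy nonlinear IVP (illustrative only, not taken from the dissertation): y′ = y², y(0) = 1, whose exact solution is 1/(1 − t). For this quadratic nonlinearity the Adomian polynomials reduce to the Cauchy-product terms A_n = Σ_{i+j=n} y_i y_j.

```python
# ADM sketch for y' = y^2, y(0) = 1 (exact solution 1/(1-t)).
# Polynomials in t are coefficient lists [c0, c1, ...].

def poly_mul(p, q):
    out = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def poly_add(p, q):
    n = max(len(p), len(q))
    p, q = p + [0.0] * (n - len(p)), q + [0.0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def poly_integrate(p):
    """Antiderivative with zero constant term, i.e. the integral from 0 to t."""
    return [0.0] + [c / (k + 1) for k, c in enumerate(p)]

components = [[1.0]]                      # y_0 = 1 from the initial condition
for n in range(4):
    A_n = [0.0]
    for i in range(n + 1):                # A_n = sum_{i+j=n} y_i * y_j
        A_n = poly_add(A_n, poly_mul(components[i], components[n - i]))
    components.append(poly_integrate(A_n))  # y_{n+1} = integral of A_n

series = [0.0]
for y_n in components:
    series = poly_add(series, y_n)
print(series)   # [1.0, 1.0, 1.0, 1.0, 1.0] -> 1 + t + t^2 + t^3 + t^4
```

The partial sums reproduce the geometric series of 1/(1 − t), illustrating the series convergence (for |t| < 1) that the dissertation analyzes in general.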
Abstract:
Maritime transport is the main means of transporting goods worldwide. Fuels and petroleum products account for a large share of the goods transported by sea. Since Cape Verde is an archipelago, transport by sea plays a highly relevant role in the country's economy. We consider the fuel-distribution problem in Cape Verde, where one company is responsible for coordinating the distribution of petroleum products together with the management of the corresponding storage levels at each port, so as to satisfy the demand for the various products. The objective is to determine fuel-distribution policies that minimize the total distribution cost (transport and operations) while storage is kept at the desired levels. For convenience, according to the planning horizon, the problem is divided into two interconnected subproblems: a short-term one and a medium-term one. For the short-term problem, mixed-integer programming models are discussed that simultaneously consider a continuous and a discrete measure of time, in order to model multiple time windows and consumption rates that vary daily. The models are strengthened with the inclusion of valid inequalities. The problem is then solved using commercial software. For the medium-term problem, several mixed-integer programming models for a short time horizon are first discussed and compared, now assuming a constant consumption rate, and new valid inequalities are introduced. Based on the chosen model, heuristic strategies combining three well-known heuristics, Rolling Horizon, Feasibility Pump, and Local Branching, are compared in order to generate good feasible solutions for plans with time horizons of several months.
Finally, in order to deal with unforeseen situations that are nevertheless important in maritime transport, such as bad weather and port congestion, we present a stochastic model for a short-term problem in which travel times and waiting times at the ports are random. The problem is formulated as a two-stage model: in the first stage the decisions concerning the ship routes and the quantities to load and unload are taken, and in the second stage (called the subproblem) the (recourse) decisions concerning the scheduling of the operations are considered. The problem is solved by a decomposition method that uses an efficient algorithm to separate the violated inequalities in the subproblem.
Abstract:
This project aimed to engineer new T2 MRI contrast agents for cell labeling based on formulations containing monodisperse iron oxide magnetic nanoparticles (MNP) coated with natural and synthetic polymers. Monodisperse MNP capped with hydrophobic ligands were synthesized by a thermal decomposition method and further stabilized in aqueous media with citric acid or meso-2,3-dimercaptosuccinic acid (DMSA) through a ligand exchange reaction. Hydrophilic MNP-DMSA, with optimal hydrodynamic size distribution, colloidal stability and magnetic properties, were used for further functionalization with different coating materials. A covalent coupling strategy was devised to bind the biopolymer gum Arabic (GA) onto MNP-DMSA and produce an efficient contrast agent, which enhanced cellular uptake in human colorectal carcinoma cells (HCT116 cell line) compared to uncoated MNP-DMSA. A similar protocol was employed to coat MNP-DMSA with a novel biopolymer produced by a biotechnological process, the exopolysaccharide (EPS) Fucopol. Like MNP-DMSA-GA, MNP-DMSA-EPS improved cellular uptake in HCT116 cells compared to MNP-DMSA. However, MNP-DMSA-EPS were particularly efficient towards the neural stem/progenitor cell line ReNcell VM, for which a better iron dose-dependent MRI contrast enhancement was obtained at low iron concentrations and short incubation times. A combination of synthetic and biological coating materials was also explored in this project to design a dynamic tumor-targeting nanoprobe activated by the acidic pH of tumors. The pH-dependent affinity pair neutravidin/iminobiotin was combined in a multilayer architecture with the synthetic polymers poly-L-lysine and poly(ethylene glycol), and yielded an efficient MRI nanoprobe with the ability to distinguish cells cultured at acidic pH from cells cultured at physiological pH.
Abstract:
Network survivability is a very interesting area of technical study as well as a critical concern in network design. Given that more and more data are carried over communication networks, a single failure can disrupt millions of users and cause millions of dollars in lost revenue. Network protection techniques consist of providing spare capacity in a network and automatically rerouting flows around a failure using that available capacity. This thesis deals with the design of survivable optical networks that use protection schemes based on p-cycles. More specifically, path-protecting p-cycles are exploited in the context of link failures. Our study focuses on the deployment of p-cycle protection structures, assuming that the working paths of all requests are defined a priori. Most existing work relies on heuristics or on solution methods that have difficulty solving large instances. The objective of this thesis is twofold. On the one hand, we propose models and solution methods capable of tackling larger problems than those already presented in the literature. On the other hand, thanks to the new algorithms, we are able to produce optimal or near-optimal solutions. To do so, we rely on the column generation technique, which is well suited to solving large-scale linear programming problems. In this project, column generation is used as an intelligent way of implicitly enumerating promising cycles.
We first propose formulations for the master problem and the pricing problem, as well as a first column generation algorithm for the design of networks protected by path-protecting p-cycles. The algorithm obtains better solutions, in reasonable time, than those obtained by existing methods. Subsequently, a more compact formulation is proposed for the pricing problem. In addition, we present a new hierarchical decomposition method that greatly improves the overall efficiency of the algorithm. As for integer solutions, we propose two heuristic methods that manage to find good solutions. We also undertake a systematic comparison between p-cycles and classical shared-protection schemes, performing a precise comparison using unified, column-generation-based formulations in order to obtain results of good quality. We then empirically evaluate the directed and undirected versions of p-cycles for link protection as well as for path protection, under asymmetric traffic scenarios, and show the additional protection cost incurred when bidirectional systems are used in such scenarios. Finally, we study a column generation formulation for the design of p-cycle networks in the presence of availability requirements, and we obtain the first lower bounds for this problem.
Abstract:
Among knee-related sports injuries, 20% involve the anterior cruciate ligament (ACL). Since the ACL is the knee's main stabilizer, a lesion to this structure causes significant joint instability that considerably affects knee function. The current clinical evaluation of patients with an ACL injury unfortunately has important limitations, both in investigating the impact of the injury and in the diagnostic process. A three-dimensional (3D) biomechanical evaluation of the knee could prove to be an innovative avenue for overcoming these limitations. The overall objective of this thesis is to demonstrate the added value of biomechanics in (1) investigating the impact of the injury on knee joint function and (2) assisting diagnosis. To meet these research objectives, a group of 29 patients with an ACL rupture (ACLD) and a control group of 15 healthy participants took part in a 3D biomechanical evaluation of the knee during treadmill walking tasks. The evaluation of the 3D biomechanical patterns of the knee showed that ACLD patients adopt a compensatory mechanism that we have called pivot-shift avoidance gait. This biomechanical adaptation aims to avoid positioning the knee in a condition likely to provoke anterolateral instability of the knee during walking. Subsequently, a classification method was developed to assign 3D biomechanical knee patterns automatically and objectively to either the ACLD group or the control group. To this end, parameters were extracted from the biomechanical patterns using a wavelet decomposition and were then classified by the nearest-neighbour method. Our classification method achieved excellent accuracy, sensitivity, and specificity, reaching 88%, 90%, and 87%, respectively.
This method therefore has the potential to serve as a clinical decision-support tool. This thesis has demonstrated the considerable contribution of a 3D biomechanical evaluation of the knee to the orthopedic management of patients with an ACL rupture, more specifically in investigating the impact of the injury and in assisting diagnosis.
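The classification pipeline described above (wavelet features followed by nearest-neighbour assignment) can be sketched on synthetic curves. The data below are stand-ins, not the patients' gait patterns, and a Haar transform stands in for the thesis's wavelet choice.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 64)

def haar_features(signal, levels=3):
    """Coarse approximation coefficients of a Haar wavelet transform."""
    a = np.asarray(signal, float)
    for _ in range(levels):
        a = (a[0::2] + a[1::2]) / np.sqrt(2.0)   # pairwise averaging
    return a

def nearest_neighbour(train, labels, probe):
    d = [np.linalg.norm(haar_features(s) - haar_features(probe)) for s in train]
    return labels[int(np.argmin(d))]

# Synthetic stand-ins for knee kinematic curves over one gait cycle:
# "ACLD" curves have reduced amplitude and a phase shift.
controls = [np.sin(2 * np.pi * t) + 0.05 * rng.normal(size=64)
            for _ in range(5)]
acld     = [0.6 * np.sin(2 * np.pi * t + 0.4) + 0.05 * rng.normal(size=64)
            for _ in range(5)]
train, labels = controls + acld, ["control"] * 5 + ["ACLD"] * 5

probe = 0.6 * np.sin(2 * np.pi * t + 0.4)        # unseen ACLD-like curve
print(nearest_neighbour(train, labels, probe))   # ACLD
```

Keeping only the coarse wavelet coefficients compresses each curve while preserving its overall shape, which is what makes the subsequent nearest-neighbour comparison both fast and noise-tolerant.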
Abstract:
It is well known that immigrants face several difficulties integrating into the Canadian labour market. In particular, they earn lower wages than the native-born, and they are more likely to hold precarious jobs or jobs for which they are overqualified. In this research, we address these three problems from the angle of job quality. Using census data from 1991 to 2006, we compared the evolution of the job quality of immigrants and the native-born in Canada, as well as in Quebec, Ontario, and British Columbia. These comparisons highlighted the growing job-quality gap between immigrants and the native-born in all the regions analyzed, but particularly in Quebec. The immigrants' disadvantage persists even when human capital, demographic characteristics, and the unemployment rate at labour-market entry are taken into account. Schooling, overall work experience, and language skills improve the job quality of both immigrants and the native-born. However, when a distinction is made between Canadian and foreign work experience, it turns out that the latter reduces the job quality of immigrants. In these circumstances, we find it inconsistent that Canada and Quebec continue to insist on this criterion in their selection grids for skilled workers. To favour younger candidates with little work experience in their country of origin, we suggest increasing the weight given to age in these grids at the expense of experience. Young people, foreign students, and temporary workers who already have work experience in Canada strike us as prime candidates for immigration.
By contrast, the results obtained using the Blinder-Oaxaca decomposition method showed that the job-quality gap between immigrants and the native-born stems from unfavourable treatment of immigrants in the labour market. This means that immigrants are fundamentally penalized in terms of job quality, regardless of their characteristics. In this context, the impact of any adjustment to the selection grids is likely to be limited. We therefore propose also acting downstream of the problem, through policies supporting immigrant integration. To do so, better coordination among labour-market actors is needed. Professional orders, the government, employers, and immigrants themselves must commit to establishing accelerated pathways for recognizing the skills of newcomers. Our results also indicate that the unfavourable treatment of immigrants in the labour market is more pronounced in Quebec than in Ontario and British Columbia. Quebec society may be more resistant to immigration given its francophone, minority character within North America. Yet the desire to protect the French language has long motivated Quebec to be actively involved in immigration matters, and the Quebec selection grid already insists on this criterion. Moreover, nearly two-thirds of newcomers to Quebec knew French in 2011.
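The Blinder-Oaxaca decomposition used above can be sketched on synthetic data (the figures below are made up, not the census results): the mean outcome gap between two groups splits into an "explained" part due to different average characteristics and an "unexplained" part due to different returns, the latter corresponding to the unfavourable-treatment component.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4000

def make_group(mean_school, beta):
    """Hypothetical job-quality index driven by an intercept and schooling."""
    school = rng.normal(mean_school, 2.0, n)
    X = np.column_stack([np.ones(n), school])
    y = X @ beta + rng.normal(0.0, 1.0, n)
    return X, y

# Hypothetical returns: group B (immigrants) gets a lower return to schooling.
X_A, y_A = make_group(13.0, np.array([1.0, 0.50]))   # natives
X_B, y_B = make_group(14.0, np.array([0.5, 0.35]))   # immigrants

b_A = np.linalg.lstsq(X_A, y_A, rcond=None)[0]       # per-group OLS fits
b_B = np.linalg.lstsq(X_B, y_B, rcond=None)[0]

gap = y_A.mean() - y_B.mean()
explained   = (X_A.mean(0) - X_B.mean(0)) @ b_B      # endowment differences
unexplained = X_A.mean(0) @ (b_A - b_B)              # differing coefficients
print(f"gap={gap:.3f} explained={explained:.3f} unexplained={unexplained:.3f}")
```

With an intercept in each regression, the two components sum exactly to the mean gap; a large unexplained share is what the thesis interprets as unfavourable treatment.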
Abstract:
Completed under a joint-supervision (cotutelle) agreement with Aix-Marseille Université.
Abstract:
The cement industry ranks second in energy consumption among the industries in India. It is one of the major emitters of CO2, owing to fossil-fuel combustion and the calcination process. As the huge amount of CO2 emissions causes severe environmental problems, the efficient and effective utilization of energy is a major concern in the Indian cement industry. The main objective of this research work is to assess the energy consumption and energy conservation of the Indian cement industry and to predict future trends in cement production and in the reduction of CO2 emissions. To achieve this objective, a detailed energy and exergy analysis of a typical cement plant in Kerala was carried out. Data on fuel usage, electricity consumption, and the amounts of clinker and cement produced were also collected from a few selected cement plants in India for the period 2001-2010, and the CO2 emissions were estimated. A complete decomposition method was used to analyze the change in CO2 emissions during the period 2001-2010, categorizing the cement plants according to their specific thermal energy consumption. A basic forecasting model for the cement production trend was developed using the system dynamics approach, and the model was validated with the data collected from the selected cement plants. The cement production and CO2 emissions from these plants were also predicted with 2010 as the base year. A sensitivity analysis of the forecasting model was conducted and found satisfactory. The model was then modified for the total cement production in India, to predict cement production and CO2 emissions for the next 21 years under three different scenarios. The parameters that influence CO2 emissions, such as population and GDP growth rates, cement demand and production, clinker consumption, and energy utilization, are incorporated in these scenarios. The existing growth rates of population and cement production in the year 2010 were used in the baseline scenario.
In scenario-1 (S1) the population growth rate was assumed to decrease gradually and finally reach zero by the year 2030, while in scenario-2 (S2) a faster decline was assumed, with zero growth achieved in the year 2020. The mitigation strategies for reducing CO2 emissions from cement production were identified and analyzed in the energy management scenario. The energy and exergy analysis of the raw mill of the cement plant revealed that exergy utilization was worse than energy utilization. The energy analysis of the kiln system showed that around 38% of the heat energy is wasted through the exhaust gases of the preheater and cooler of the kiln system. This could be recovered by a waste heat recovery system. A secondary insulation shell was also recommended for the kiln in the plant, in order to prevent heat loss and enhance the efficiency of the plant. The decomposition analysis of the change in CO2 emissions during 2001-2010 showed that the activity effect was the main driver of CO2 emissions for the cement plants, since it depends directly on the economic growth of the country. The forecasting model showed that CO2 emission reductions of 15.22% and 29.44% can be achieved by the year 2030 in scenario-1 (S1) and scenario-2 (S2), respectively. In analyzing the energy management scenario, it was assumed that 25% of the electrical energy supplied to the cement plants is replaced by renewable energy. The analysis revealed that the recovery of waste heat and the use of renewable energy could lead to a decline in CO2 emissions of 7.1% in the baseline scenario, 10.9% in scenario-1 (S1), and 11.16% in scenario-2 (S2) by 2030. The combined scenario, considering population stabilization by the year 2020, a 25% contribution from renewable energy sources, and 38% of thermal energy recovered from the waste heat streams, shows that CO2 emissions from the Indian cement industry could be reduced by nearly 37% in the year 2030.
This would substantially reduce the greenhouse gas load on the environment. The cement industry will remain one of the critical sectors for India in meeting its CO2 emission reduction targets. India's cement production will continue to grow in the near future owing to its GDP growth. Population control, improvements in plant efficiency, and the use of renewable energy are the important options for mitigating CO2 emissions from the Indian cement industry.
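A complete decomposition of the change in emissions, in the spirit of the method used above, can be sketched for the two-factor case C = Q·I, with Q the cement production and I the emission intensity. All numbers below are hypothetical, not the study's data.

```python
# Two-factor "complete decomposition" of the change in CO2 emissions
# C = Q * I. The jointly-created interaction term dQ*dI is shared equally
# between the two effects, so the decomposition leaves no residual.
Q0, I0 = 100.0, 0.80     # base year: production (Mt), tCO2 per tonne cement
Q1, I1 = 150.0, 0.72     # end year
dQ, dI = Q1 - Q0, I1 - I0

activity_effect  = dQ * I0 + 0.5 * dQ * dI   # output growth
intensity_effect = dI * Q0 + 0.5 * dQ * dI   # efficiency change
total_change     = Q1 * I1 - Q0 * I0

print(activity_effect, intensity_effect, total_change)
```

Here emissions still rise overall because the activity effect outweighs the intensity improvement, mirroring the study's finding that the activity effect dominated the 2001-2010 change.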