986 results for Pelczynski's decomposition method
Abstract:
This study aimed to contribute to knowledge about equity in the pharmaceutical sector, through an empirical analysis applied to the Portuguese health system. To this end, we assessed whether individuals with the same health needs but different income levels received identical provision with respect to medicines. This analysis was then deepened by identifying factors, associated either with the delivery system or with the patient, that contributed to generating inequities, with particular emphasis on the behaviour of not acquiring prescribed medicines, known as primary non-adherence. Equity was assessed through two distinct but complementary approaches: one from the perspective of utilization and the other from the perspective of the distribution of public expenditure on medicines. For these analyses, methods based on concentration indices were applied, using data from the 2005/06 National Health Survey (Inquérito Nacional de Saúde) and data on the National Health Service's expenditure on medicines. The results revealed that, for the same needs, the delivery system tends to favour individuals of higher socioeconomic status, both in utilization and in the distribution of State resources for medicines. In addition, applying the concentration index decomposition method revealed that both income and educational level are individual attributes associated with inequity in the use of medicines. The inequity observed in this study may result from barriers at different stages of the therapeutic process, notably lack of access to a medical prescription or failure to acquire the prescribed medicines. It is this behaviour, termed primary non-adherence, that was analysed in the second part of the thesis. To that end, electronic prescription data were linked with dispensing data from the National Health Service. The results revealed a primary non-adherence rate of about 20%, and showed that this behaviour is associated with being female or young, as well as with characteristics of the delivery system such as the level of copayments. These findings suggest that acquisition barriers may induce inequities in the use of medicines. Identifying inequity in the use of medicines, and the factors that contribute to it, is the first step towards a strategy for reducing inequity which, according to the results of this thesis, should encompass not only the health system but also other areas of public policy in Portugal.
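To make the concentration-index machinery concrete, here is a minimal sketch on invented data (the variable names and distributions are illustrative, not the thesis's). It computes the income-related concentration index of medicines use via the standard covariance formula C = 2·cov(h, r)/µ, where r is the fractional income rank; a positive value indicates the pro-rich inequality reported above.

```python
import numpy as np

def concentration_index(health_use, income):
    """Income-related concentration index: 2*cov(use, income rank)/mean(use).
    Positive values indicate use concentrated among the better-off."""
    order = np.argsort(income)                  # rank individuals by income
    h = np.asarray(health_use, float)[order]
    n = len(h)
    rank = (np.arange(1, n + 1) - 0.5) / n      # fractional income rank
    return 2.0 * np.cov(h, rank, bias=True)[0, 1] / h.mean()

rng = np.random.default_rng(0)
income = rng.lognormal(10, 0.5, 1000)
use = rng.poisson(1 + 2e-5 * income)            # use rises with income
print(concentration_index(use, income))         # > 0: pro-rich inequality
```

The decomposition step mentioned in the abstract extends this by regressing use on its determinants (income, education, ...) and writing C as a weighted sum of the determinants' own concentration indices.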
Abstract:
This project aimed to engineer new T2 MRI contrast agents for cell labeling based on formulations containing monodisperse iron oxide magnetic nanoparticles (MNP) coated with natural and synthetic polymers. Monodisperse MNP capped with hydrophobic ligands were synthesized by a thermal decomposition method, and further stabilized in aqueous media with citric acid or meso-2,3-dimercaptosuccinic acid (DMSA) through a ligand exchange reaction. Hydrophilic MNP-DMSA, with optimal hydrodynamic size distribution, colloidal stability and magnetic properties, were used for further functionalization with different coating materials. A covalent coupling strategy was devised to bind the biopolymer gum Arabic (GA) onto MNP-DMSA and produce an efficient contrast agent, which enhanced cellular uptake in human colorectal carcinoma cells (HCT116 cell line) compared to uncoated MNP-DMSA. A similar protocol was employed to coat MNP-DMSA with a novel biopolymer produced by a biotechnological process, the exopolysaccharide (EPS) Fucopol. Like MNP-DMSA-GA, MNP-DMSA-EPS improved cellular uptake in HCT116 cells compared to MNP-DMSA. However, MNP-DMSA-EPS were particularly efficient towards the neural stem/progenitor cell line ReNcell VM, for which a better iron dose-dependent MRI contrast enhancement was obtained at low iron concentrations and short incubation times. A combination of synthetic and biological coating materials was also explored in this project, to design a dynamic tumor-targeting nanoprobe activated by the acidic pH of tumors. The pH-dependent affinity pair neutravidin/iminobiotin was combined in a multilayer architecture with the synthetic polymers poly-L-lysine and poly(ethylene glycol), and yielded an efficient MRI nanoprobe with the ability to distinguish cells cultured at acidic pH from cells cultured at physiological pH.
Abstract:
This paper investigates vulnerability to poverty in Haiti. Research on vulnerability in developing countries has been scarce due to the high data requirements of vulnerability studies (e.g. panel data or long series of cross-sections). The methodology adopted here allows the assessment of vulnerability to poverty by exploiting the short panel structure of data nested at different levels. The decomposition method reveals that vulnerability in Haiti is largely a rural phenomenon and that schooling correlates negatively with vulnerability. Most importantly, among the different shocks affecting household income, meso-level shocks are found to be in general far more important than covariate shocks. This finding points to interesting policy implications for decentralizing policies to alleviate vulnerability to poverty.
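As an illustration of how vulnerability to poverty is commonly quantified in a single cross-section (a Chaudhuri-style estimate; the paper's multilevel shock decomposition itself is not reproduced here), the following sketch predicts log consumption from household traits and takes vulnerability as the probability of falling below a poverty line. All data, regressors and parameters are invented.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical cross-section: log consumption regressed on household traits.
rng = np.random.default_rng(1)
n = 500
X = np.column_stack([np.ones(n), rng.integers(0, 13, n)])  # const, schooling
logc = X @ np.array([7.5, 0.06]) + rng.normal(0, 0.4, n)

beta, *_ = np.linalg.lstsq(X, logc, rcond=None)
resid = logc - X @ beta
sigma = resid.std(ddof=X.shape[1])

z = np.log(1500.0)                                 # hypothetical poverty line
vulnerability = norm.cdf((z - X @ beta) / sigma)   # Pr(consumption < z)
print(vulnerability.mean())
```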
Abstract:
Scarcities of environmental services are no longer merely a remote hypothesis. Consequently, analysis of their inequalities between nations becomes of paramount importance for the achievement of sustainability, in terms either of international policy or of universalist ethical principles of equity. This paper aims, on the one hand, to review methodological aspects of the inequality measurement of certain environmental data and, on the other, to extend the scarce empirical evidence on the international distribution of the Ecological Footprint (EF) by using a longer EF time series. Most of the techniques currently important in the literature are reviewed and then tested on EF data, with interesting results. We look in depth at Lorenz dominance analyses and consider the underlying properties of different inequality indices. The indices which best fit environmental inequality measurement are CV² and GE(2), because of their neutrality property; however, a trade-off may occur when subgroup decompositions are performed. A weighting-factor decomposition method is proposed in order to isolate weighting-factor changes in inequality growth rates. Finally, the only non-ambiguous way of decomposing inequality by source is the natural decomposition of CV², which additionally allows the interpretation of marginal term contributions. Empirically, this paper contributes to the measurement of EF inequality: this inequality has been quite stable, and its change over time is due to per capita vector changes rather than population changes. Almost the entirety of EF inequality is explained by differences in means between World Bank country groups. This finding suggests that international environmental agreements should be attempted on a regional basis in an attempt to achieve greater consensus between the parties involved. Additionally, the source decomposition warns of the dangers of confining CO2 emissions reduction to crop-based energies, because of the implications for basic needs satisfaction. Keywords: ecological footprint; ecological inequality measurement; inequality decomposition.
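A short sketch of the two pieces of machinery named above, on invented data: the population-weighted CV² (recall GE(2) = CV²/2) and its "natural" source decomposition, in which the contribution of source k is cov(x_k, x)/µ(x)², so the contributions sum exactly to the total. Country weights and EF components are illustrative only.

```python
import numpy as np

def cv2(x, w):
    """Population-weighted squared coefficient of variation; GE(2) = cv2/2."""
    mu = np.average(x, weights=w)
    return np.average((x - mu) ** 2, weights=w) / mu**2

def source_contributions(sources, w):
    """Natural decomposition of CV2 by source: contribution of source k is
    cov(x_k, x)/mu(x)^2; the contributions sum exactly to CV2 of the total."""
    total = sources.sum(axis=0)
    mu = np.average(total, weights=w)
    dev = total - mu
    return np.array([np.average((s - np.average(s, weights=w)) * dev, weights=w)
                     for s in sources]) / mu**2

rng = np.random.default_rng(2)
w = rng.integers(1, 100, 50).astype(float)     # country populations (weights)
crop = rng.gamma(2.0, 0.5, 50)                 # per-capita EF, crop-based part
carbon = rng.gamma(2.0, 1.0, 50)               # per-capita EF, carbon part
sources = np.vstack([crop, carbon])
print(cv2(sources.sum(axis=0), w))             # total inequality
print(source_contributions(sources, w).sum())  # identical, by construction
```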
Abstract:
We implemented Biot-type porous wave equations in a pseudo-spectral numerical modeling algorithm for the simulation of Stoneley waves in porous media. Fourier and Chebyshev methods are used to compute the spatial derivatives along the horizontal and vertical directions, respectively. To prevent overly short time steps due to the small grid spacing at the top and bottom of the model (a consequence of the Chebyshev operator), the mesh is stretched in the vertical direction. As a major benefit, the Chebyshev operator allows for an explicit treatment of interfaces. Boundary conditions can be implemented with a characteristics approach, with the characteristic variables evaluated at zero viscosity. We use this approach to model seismic wave propagation at the interface between a fluid and a porous medium. Each medium is represented by a different mesh, and the two meshes are connected through the characteristics-based domain-decomposition method described above. We show an experiment with sealed-pore boundary conditions, in which we first compare the numerical solution to an analytical solution. We then show the influence of heterogeneity and viscosity of the pore fluid on the propagation of the Stoneley wave and of surface waves in general.
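The vertical Chebyshev operator with a rescaled grid can be sketched in a few lines. This is a generic illustration (Trefethen's standard differentiation matrix, with a simple linear map standing in for the stretching, whose exact form the abstract does not specify), not the authors' code.

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix and Gauss-Lobatto points on [-1, 1]
    (after Trefethen, 'Spectral Methods in MATLAB', program cheb.m)."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))                 # negative-sum trick for diagonal
    return D, x

# Map the clustered Chebyshev points onto a physical depth range [0, L];
# the chain rule rescales the operator, widening the smallest grid spacing
# and hence relaxing the time-step restriction mentioned above.
N, L = 32, 1000.0
D, x = cheb(N)
z = 0.5 * L * (x + 1.0)          # linear map [-1, 1] -> [0, L]
Dz = (2.0 / L) * D               # d/dz = (dx/dz) d/dx

f = np.sin(2 * np.pi * z / L)
print(np.max(np.abs(Dz @ f - (2 * np.pi / L) * np.cos(2 * np.pi * z / L))))
```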
Abstract:
Given the adverse impact of image noise on the perception of important clinical details in digital mammography, routine quality control measurements should include an evaluation of noise. The European Guidelines, for example, employ a second-order polynomial fit of pixel variance as a function of detector air kerma (DAK) to decompose noise into quantum, electronic and fixed-pattern (FP) components and to assess the DAK range where quantum noise dominates. This work examines the robustness of the polynomial method against an explicit noise decomposition method. The two methods were applied to variance and noise power spectrum (NPS) data from six digital mammography units. Twenty homogeneously exposed images were acquired with PMMA blocks for target DAKs ranging from 6.25 to 1600 µGy. Both methods were explored for the effects of data weighting and of squared fit coefficients during the curve fitting, the influence of the additional filter material (2 mm Al versus 40 mm PMMA), and noise de-trending. Finally, the spatial stationarity of noise was assessed. Data weighting improved noise model fitting over large DAK ranges, especially at low detector exposures. The polynomial and explicit decompositions generally agreed for quantum and electronic noise, but the FP noise fraction was consistently underestimated by the polynomial method. Noise decomposition as a function of position in the image showed limited noise stationarity, especially for FP noise; thus the position of the region of interest (ROI) used for noise decomposition may influence the fractional noise composition. The ROI area and position used in the Guidelines offer an acceptable estimation of the noise components. While there are limitations to the polynomial model, when used with care and with appropriate data weighting the method offers a simple and robust means of examining the detector noise components as a function of detector exposure.
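The polynomial noise model is easy to state: pixel variance σ²(DAK) = k_e + k_q·DAK + k_f·DAK², with the constant, linear and quadratic terms identified with electronic, quantum and fixed-pattern noise respectively. The sketch below fits this model to synthetic data and reports fractional noise composition per dose; the coefficients and the relative weighting scheme are illustrative assumptions, not the study's values.

```python
import numpy as np

# Synthetic variance data: sigma^2 = k_e + k_q*DAK + k_f*DAK^2, with the
# electronic (constant), quantum (linear) and fixed-pattern (quadratic) parts.
dak = np.geomspace(6.25, 1600, 9)                # target DAKs in uGy
k_e, k_q, k_f = 4.0, 0.8, 2e-4                   # illustrative only
rng = np.random.default_rng(3)
var = (k_e + k_q * dak + k_f * dak**2) * rng.normal(1, 0.02, dak.size)

# Relative weighting (w ~ 1/y) keeps the low-exposure points from being
# swamped by the much larger variances at high DAK, as discussed above.
c2, c1, c0 = np.polyfit(dak, var, 2, w=1.0 / var)
for dose in dak:
    total = c0 + c1 * dose + c2 * dose**2
    print(f"{dose:8.2f} uGy  quantum={c1*dose/total:.2f}  "
          f"electronic={c0/total:.2f}  FP={c2*dose**2/total:.2f}")
```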
Abstract:
We present a novel numerical algorithm for the simulation of seismic wave propagation in porous media, which is particularly suitable for the accurate modelling of surface wave-type phenomena. The differential equations of motion are based on Biot's theory of poro-elasticity and solved with a pseudospectral approach using Fourier and Chebyshev methods to compute the spatial derivatives along the horizontal and vertical directions, respectively. The time solver is a splitting algorithm that accounts for the stiffness of the differential equations. Due to the Chebyshev operator, the grid spacing in the vertical direction is non-uniform and characterized by a denser spatial sampling in the vicinity of interfaces, which allows for a numerically stable and accurate evaluation of higher-order surface wave modes. We stretch the grid in the vertical direction to increase the minimum grid spacing and reduce the computational cost. The free-surface boundary conditions are implemented with a characteristics approach, where the characteristic variables are evaluated at zero viscosity. The same procedure is used to model seismic wave propagation at the interface between a fluid and a porous medium. In this case, each medium is represented by a different grid and the two grids are combined through a domain-decomposition method. This wavefield decomposition method accounts for the discontinuity of variables and is crucial for an accurate interface treatment. We simulate seismic wave propagation with open-pore and sealed-pore boundary conditions and verify the validity and accuracy of the algorithm by comparing the numerical simulations to analytical solutions based on zero viscosity obtained with the Cagniard-de Hoop method. Finally, we illustrate the suitability of our algorithm for more complex models of porous media involving viscous pore fluids and strongly heterogeneous distributions of the elastic and hydraulic material properties.
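The Fourier half of the pseudospectral operator (horizontal derivatives on a periodic grid) reduces to a few lines; the sketch below is a generic FFT-based spectral derivative, not the authors' implementation, and complements the Chebyshev sketch shown for the previous record.

```python
import numpy as np

def fourier_deriv(f, L):
    """Spectral derivative of a periodic sample f on a uniform grid of length L."""
    n = f.size
    ik = 2j * np.pi * np.fft.fftfreq(n, d=L / n)   # i * wavenumber
    return np.real(np.fft.ifft(ik * np.fft.fft(f)))

n, L = 128, 2 * np.pi
x = np.arange(n) * L / n
f = np.exp(np.sin(x))
exact = np.cos(x) * f
print(np.max(np.abs(fourier_deriv(f, L) - exact)))  # ~ machine precision
```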
Abstract:
BACKGROUND AND PURPOSE: Knowledge of cerebral blood flow (CBF) alterations in cases of acute stroke could be valuable in the early management of these cases. Among imaging techniques affording evaluation of cerebral perfusion, perfusion CT studies involve sequential acquisition of cerebral CT sections obtained in an axial mode during the IV administration of iodinated contrast material. They are thus very easy to perform in emergency settings. Perfusion CT values of CBF have proved to be accurate in animals, and perfusion CT affords plausible values in humans. The purpose of this study was to validate perfusion CT studies of CBF by comparison with the results provided by stable xenon CT, which have been reported to be accurate, and to evaluate acquisition and processing modalities of CT data, notably the possible deconvolution methods and the selection of the reference artery. METHODS: Twelve stable xenon CT and perfusion CT cerebral examinations were performed within an interval of a few minutes in patients with various cerebrovascular diseases. CBF maps were obtained from perfusion CT data by deconvolution using singular value decomposition and least mean square methods. The CBF values were compared with the stable xenon CT results in multiple regions of interest through linear regression analysis and bilateral t tests for matched variables. RESULTS: Linear regression analysis showed good correlation between perfusion CT and stable xenon CT CBF values (singular value decomposition method: R(2) = 0.79, slope = 0.87; least mean square method: R(2) = 0.67, slope = 0.83). Bilateral t tests for matched variables did not identify a significant difference between the two imaging methods (P > .1). Both deconvolution methods were equivalent (P > .1). The choice of the reference artery is a major concern and has a strong influence on the final perfusion CT CBF map. CONCLUSION: Perfusion CT studies of CBF achieved with adequate acquisition parameters and processing lead to accurate and reliable results.
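A hedged sketch of the first of the two deconvolution methods compared above: singular value decomposition applied to the convolution of a tissue curve with an arterial input function (AIF). The curves, sampling and truncation threshold are invented for illustration; clinical pipelines add noise handling and bolus-delay correction.

```python
import numpy as np
from scipy.linalg import toeplitz

# Hypothetical curves: an arterial input function and a tissue time-density
# curve synthesized from a flow-scaled residue function.
dt = 1.0                                    # sampling interval (s)
t = np.arange(40) * dt
aif = (t / 4.0) ** 3 * np.exp(-t / 1.5)     # gamma-variate-like bolus
cbf_true, mtt = 0.01, 4.0                   # flow (a.u.), mean transit time (s)
residue = np.exp(-t / mtt)
A = dt * toeplitz(aif, np.zeros_like(aif))  # lower-triangular convolution matrix
tissue = A @ (cbf_true * residue)

# Truncated-SVD deconvolution: zero out small singular values to stabilize
# the inversion (here the cut-off is 1% of the largest singular value).
U, s, Vt = np.linalg.svd(A)
s_inv = np.where(s > 0.01 * s[0], 1.0 / s, 0.0)
flow_residue = Vt.T @ (s_inv * (U.T @ tissue))
print(flow_residue.max())                   # CBF estimate, close to cbf_true
```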
Abstract:
Due to its non-storability, electricity must be produced at the same time that it is consumed; as a result, prices are determined on an hourly basis, which makes their analysis more challenging. Moreover, seasonal fluctuations in demand and supply lead to seasonal behavior in electricity spot prices. The purpose of this thesis is to identify and remove all such causal effects from electricity spot prices, leaving pure prices for modeling purposes. To achieve this, we use Qlucore Omics Explorer (QOE) for the visualization and exploration of the data set, and a time-series decomposition method to estimate and extract the deterministic components from the series. To obtain the target series, we use regression on the background variables (water reservoir levels and temperature). The result is three price series (Sweden, Norway and System prices) with no apparent pattern.
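A minimal sketch of the two-stage procedure described above, on synthetic hourly prices: classical time-series decomposition to strip trend and daily seasonality, then a regression of the residual on a stand-in background variable. QOE is an interactive visualization tool and is not reproduced here; all series below are invented.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

# Hypothetical hourly spot-price series with a slow trend and a daily cycle.
idx = pd.date_range("2012-01-01", periods=24 * 90, freq="h")
rng = np.random.default_rng(4)
hours = np.arange(idx.size)
price = (30 + 0.01 * hours                      # slow trend
         + 8 * np.sin(2 * np.pi * hours / 24)   # daily cycle
         + rng.normal(0, 2, idx.size))
series = pd.Series(price, index=idx)

# Step 1: strip deterministic trend and daily seasonality.
dec = seasonal_decompose(series, model="additive", period=24)
residual = dec.resid.dropna()

# Step 2: regress the residual on background variables (a random stand-in
# for reservoir level / temperature) and keep the regression residual as
# the "pure" price series used for modeling.
reservoir = rng.normal(0, 1, residual.size)
X = np.column_stack([np.ones(residual.size), reservoir])
beta, *_ = np.linalg.lstsq(X, residual.values, rcond=None)
pure = residual.values - X @ beta
```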
Abstract:
Network survivability is a technically fascinating field of study as well as a critical concern in network design. Given that more and more data are carried over communication networks, a single failure can cut off millions of users and cause millions of dollars in lost revenue. Network protection techniques consist of providing spare capacity in a network and automatically rerouting flows around a failure using that available capacity. This thesis deals with the design of survivable optical networks using protection schemes based on p-cycles. More precisely, path-protecting p-cycles are exploited in the context of link failures. Our study focuses on setting up p-cycle protection structures, assuming that the working paths for the full set of requests are defined a priori. Most existing work relies on heuristics or on solution methods that have difficulty solving large instances. The objective of this thesis is twofold. On the one hand, we propose models and solution methods capable of tackling larger problems than those already reported in the literature. On the other hand, thanks to new algorithms, we are able to produce optimal or near-optimal solutions. To do so, we rely on the column generation technique, which is well suited to solving large-scale linear programming problems. In this project, column generation is used as an intelligent way of implicitly enumerating promising cycles. We first propose formulations for the master problem and the pricing problem, together with a first column generation algorithm for the design of networks protected by path-protecting p-cycles. The algorithm obtains better solutions, within reasonable time, than those obtained by existing methods. Subsequently, a more compact formulation is proposed for the pricing problem. In addition, we present a new hierarchical decomposition method that greatly improves the overall efficiency of the algorithm. As for integer solutions, we propose two heuristic methods that succeed in finding good solutions. We also undertake a systematic comparison between p-cycles and classical shared protection schemes, carrying out a precise comparison using unified, column-generation-based formulations to obtain high-quality results. We then empirically evaluate the directed and undirected versions of p-cycles for link protection as well as for path protection under asymmetric traffic scenarios, and show the additional protection cost incurred when bidirectional systems are used in such scenarios. Finally, we study a column generation formulation for the design of p-cycle networks under availability requirements and obtain the first lower bounds for this problem.
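The restricted-master/pricing loop at the core of the thesis can be illustrated generically. Since the abstract does not spell out the p-cycle master and pricing formulations, the sketch below runs the same column-generation iteration on a deliberately tiny cutting-stock instance (the textbook setting for the technique), pricing new columns with a small knapsack dynamic program; all data and names are invented.

```python
import numpy as np
from scipy.optimize import linprog

def price_column(duals, widths, W):
    """Pricing step: unbounded knapsack maximizing the dual value packed into
    one stock roll; a column has negative reduced cost if its value exceeds 1."""
    best = np.zeros(W + 1)
    pick = -np.ones(W + 1, dtype=int)
    for cap in range(1, W + 1):
        for i, w in enumerate(widths):
            if w <= cap and best[cap - w] + duals[i] > best[cap]:
                best[cap], pick[cap] = best[cap - w] + duals[i], i
    counts, cap = np.zeros(len(widths)), W
    while cap > 0 and pick[cap] >= 0:
        counts[pick[cap]] += 1
        cap -= widths[pick[cap]]
    return best[W], counts

widths, demands, W = [3, 5, 7], np.array([40.0, 30.0, 25.0]), 16
columns = [np.eye(3)[i] * (W // w) for i, w in enumerate(widths)]  # seed columns

while True:
    A = np.column_stack(columns)
    # Restricted master: min total rolls s.t. demands covered (as -Ax <= -d).
    res = linprog(np.ones(A.shape[1]), A_ub=-A, b_ub=-demands, method="highs")
    duals = -res.ineqlin.marginals          # shadow prices of the demand rows
    value, col = price_column(duals, widths, W)
    if value <= 1.0 + 1e-9:                 # no improving column: LP optimal
        break
    columns.append(col)                     # implicit enumeration of patterns
print(res.fun, len(columns))                # LP bound and number of columns
```

In the thesis's setting, the pricing problem would instead search for a promising cycle in the network graph, but the solve/price/add iteration is the same.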
Abstract:
Among knee-related sports injuries, 20% involve the anterior cruciate ligament (ACL). As the ACL is the knee's main stabilizer, a lesion to this structure causes significant joint instability that considerably affects knee function. The current clinical evaluation of patients with ACL injury unfortunately has important limitations, both in investigating the impact of the injury and in the diagnostic process. A three-dimensional (3D) biomechanical evaluation of the knee could prove an innovative avenue to overcome these limitations. The general objective of the thesis is to demonstrate the added value of biomechanics in (1) investigating the impact of the injury on knee joint function and (2) supporting diagnosis. To address the research objectives, a group of 29 patients with ACL rupture (ACLD) and a control group of 15 healthy participants underwent a 3D biomechanical evaluation of the knee during treadmill walking tasks. The evaluation of 3D knee biomechanical patterns showed that ACLD patients adopt a compensatory mechanism we have called pivot-shift avoidance gait. The aim of this biomechanical adaptation is to avoid positioning the knee in a condition likely to provoke anterolateral knee instability during gait. A classification method was then developed to assign 3D knee biomechanical patterns automatically and objectively to either the ACLD group or the control group. For this purpose, parameters were extracted from the biomechanical patterns using a wavelet decomposition and then classified with the nearest-neighbour method. Our classification method achieved excellent accuracy, sensitivity and specificity, reaching 88%, 90% and 87% respectively. It therefore has the potential to serve as a clinical decision-support tool. This thesis has demonstrated the considerable contribution of a 3D biomechanical evaluation of the knee to the orthopaedic management of patients with ACL rupture, more specifically in investigating the impact of the injury and in supporting diagnosis.
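A hedged sketch of the feature-extraction-plus-classification pipeline described above, on synthetic curves: multilevel discrete wavelet decomposition (via PyWavelets) to extract compact coefficients, then nearest-neighbour classification with cross-validation. The wavelet family, decomposition level and curve construction are illustrative assumptions, not the thesis's choices.

```python
import numpy as np
import pywt
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical gait curves: one kinematic channel per subject, resampled to
# a fixed gait-cycle length; labels 1 = ACL-deficient, 0 = control.
rng = np.random.default_rng(5)
n_samples = 128
t = np.linspace(0, 1, n_samples)
labels = np.array([1] * 29 + [0] * 15)
curves = np.array([10 * np.sin(2 * np.pi * t) + 3 * y * np.cos(4 * np.pi * t)
                   + rng.normal(0, 1, n_samples) for y in labels])

# Feature extraction: the coarse approximation coefficients of a 4-level
# wavelet decomposition summarize each curve compactly.
features = np.array([pywt.wavedec(c, "db4", level=4)[0] for c in curves])

# Nearest-neighbour classification, scored by cross-validation.
knn = KNeighborsClassifier(n_neighbors=1)
print(cross_val_score(knn, features, labels, cv=5).mean())
```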
Abstract:
It is well known that immigrants face several difficulties integrating into the Canadian labour market. In particular, they earn lower wages than the native-born, and they are more likely to hold precarious jobs or jobs for which they are overqualified. In this research, we addressed these three problems from the angle of job quality. Using population census data from 1991 to 2006, we compared the evolution of job quality for immigrants and the native-born in Canada, and also in Quebec, Ontario and British Columbia. These comparisons highlighted the widening job-quality gap of immigrants relative to the native-born in all the regions analysed, but particularly in Quebec. The immigrants' disadvantage persists even when human capital, demographic characteristics and the unemployment rate at labour-market entry are taken into account. Schooling, overall work experience and language skills improve job quality for immigrants and the native-born alike. However, when Canadian work experience is distinguished from foreign work experience, the latter turns out to reduce immigrants' job quality. Under these circumstances, we find it inconsistent that Canada and Quebec continue to emphasize this criterion in their selection grids for skilled workers. To favour younger candidates with little work experience in their country of origin, we suggest increasing the weight given to age in these grids at the expense of experience. Young people, foreign students and temporary workers who already have Canadian work experience strike us as prime candidates for immigration. On the other hand, results obtained with the Blinder-Oaxaca decomposition method showed that the job-quality gap between immigrants and the native-born stems from unfavourable treatment of immigrants in the labour market. This means that immigrants are penalized in terms of job quality from the outset, whatever their characteristics. In this context, the reach of any adjustment to the selection grids is likely to be limited. We therefore propose acting downstream of the problem as well, through policies supporting immigrant integration. This requires better coordination among labour-market actors: professional bodies, the government, employers and immigrants themselves must commit to establishing fast-track pathways for recognizing newcomers' credentials. Our results also indicate that the unfavourable treatment of immigrants in the labour market is more pronounced in Quebec than in Ontario and British Columbia. Quebec society may be more resistant to immigration given its francophone, minority character within North America. Yet the desire to protect the French language has long motivated Quebec to be actively involved in immigration matters, and the Quebec selection grid already emphasizes this criterion. Indeed, nearly two-thirds of newcomers to Quebec knew French in 2011.
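The Blinder-Oaxaca decomposition splits a mean gap into an "explained" part (different endowments) and an "unexplained" part (different returns, read here as labour-market treatment): mean(y_A) - mean(y_B) = (mean(X_A) - mean(X_B))'b_A + mean(X_B)'(b_A - b_B). A minimal sketch on invented data (a job-quality score regressed on schooling for natives, group A, and immigrants, group B):

```python
import numpy as np

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

rng = np.random.default_rng(6)
nA, nB = 800, 400
XA = np.column_stack([np.ones(nA), rng.normal(13, 2, nA)])  # const, schooling
XB = np.column_stack([np.ones(nB), rng.normal(14, 2, nB)])
yA = XA @ np.array([20.0, 2.0]) + rng.normal(0, 3, nA)
yB = XB @ np.array([14.0, 1.8]) + rng.normal(0, 3, nB)      # lower returns

bA, bB = ols(XA, yA), ols(XB, yB)
gap = yA.mean() - yB.mean()
explained = (XA.mean(0) - XB.mean(0)) @ bA   # endowments, valued at A's returns
unexplained = XB.mean(0) @ (bA - bB)         # differing returns ("treatment")
print(gap, explained + unexplained)          # the two parts sum to the gap
```

With OLS and an intercept the identity is exact, which is why the two printed numbers coincide; here the immigrants' higher schooling makes the explained part negative, so the whole gap (and more) is attributed to treatment, mirroring the abstract's finding.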
Abstract:
Completed under a joint supervision (cotutelle) agreement with Aix Marseille Université.
Abstract:
The cement industry ranks second in energy consumption among Indian industries. It is one of the major emitters of CO2, owing to fossil fuel combustion and the calcination process. As the huge amount of CO2 emissions causes severe environmental problems, the efficient and effective utilization of energy is a major concern in the Indian cement industry. The main objective of this research work is to assess the energy consumption and energy conservation of the Indian cement industry and to predict future trends in cement production and the reduction of CO2 emissions. To achieve this objective, a detailed energy and exergy analysis of a typical cement plant in Kerala was carried out. Data on fuel usage, electricity consumption, and the amounts of clinker and cement produced were also collected from a few selected cement industries in India for the period 2001-2010, and the CO2 emissions were estimated. A complete decomposition method was used to analyse the change in CO2 emissions during the period 2001-2010, categorizing the cement industries according to their specific thermal energy consumption. A basic forecasting model for the cement production trend was developed using the system dynamics approach, and the model was validated with the data collected from the selected cement industries. Cement production and CO2 emissions from the industries were also predicted with 2010 as the base year. A sensitivity analysis of the forecasting model was conducted and found satisfactory. The model was then modified for total cement production in India to predict cement production and CO2 emissions for the next 21 years under three different scenarios. The parameters that influence CO2 emissions, such as population and GDP growth rates, cement demand and production, clinker consumption and energy utilization, are incorporated in these scenarios. The existing growth rates of population and cement production in the year 2010 were used in the baseline scenario. In scenario-1 (S1) the population growth rate was assumed to decrease gradually and finally reach zero by the year 2030, while in scenario-2 (S2) a faster decline in the growth rate was assumed, such that zero growth is achieved in the year 2020. Mitigation strategies for the reduction of CO2 emissions from cement production were identified and analysed in the energy management scenario. The energy and exergy analysis of the raw mill of the cement plant revealed that exergy utilization was worse than energy utilization. The energy analysis of the kiln system showed that around 38% of the heat energy is wasted through the exhaust gases of the preheater and cooler of the kiln system; this could be recovered by a waste heat recovery system. A secondary insulation shell was also recommended for the kiln in order to prevent heat loss and enhance the efficiency of the plant. The decomposition analysis of the change in CO2 emissions during 2001-2010 showed that the activity effect was the main driver of CO2 emissions for the cement industries, since it is directly dependent on the economic growth of the country. The forecasting model showed that CO2 emissions reductions of 15.22% and 29.44% can be achieved by the year 2030 in scenario-1 (S1) and scenario-2 (S2) respectively. In analysing the energy management scenario, it was assumed that 25% of the electrical energy supplied to the cement plants is replaced by renewable energy.
The analysis revealed that the recovery of waste heat and the use of renewable energy could reduce CO2 emissions by 7.1% in the baseline scenario, 10.9% in scenario-1 (S1) and 11.16% in scenario-2 (S2) in 2030. A combined scenario, considering population stabilization by the year 2020, a 25% contribution from renewable energy sources and 38% of thermal energy recovered from waste heat streams, shows that CO2 emissions from the Indian cement industry could be reduced by nearly 37% in the year 2030. This would remove a substantial greenhouse gas load from the environment. The cement industry will remain one of the critical sectors for India in meeting its CO2 emissions reduction target. India's cement production will continue to grow in the near future owing to GDP growth. Population control, improvements in plant efficiency and the use of renewable energy are the main options for mitigating CO2 emissions from Indian cement industries.
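The "complete decomposition" referred to above is commonly the Sun (1998) scheme, in which the jointly created interaction term is shared equally among the factors; assuming that reading, here is a two-factor sketch with invented numbers (emissions = output x intensity):

```python
# Two-factor complete decomposition (Sun, 1998): emissions C = Q * I, with
# the joint term dQ*dI split equally between the two effects. All numbers
# below are illustrative, not the study's data.
Q0, Q1 = 100.0, 180.0        # cement output in two years (Mt)
I0, I1 = 0.90, 0.72          # CO2 intensity (t CO2 per t cement)

dQ, dI = Q1 - Q0, I1 - I0
activity_effect = I0 * dQ + 0.5 * dQ * dI    # growth in output
intensity_effect = Q0 * dI + 0.5 * dQ * dI   # efficiency / fuel-mix change
total_change = Q1 * I1 - Q0 * I0
assert abs(activity_effect + intensity_effect - total_change) < 1e-9
print(activity_effect, intensity_effect, total_change)
```

With these illustrative numbers the activity effect (+64.8) dwarfs the intensity effect (-25.2), the same qualitative pattern the study reports for 2001-2010.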