929 results for Model Predictive Current Control
Abstract:
This paper presents a model of a control system for robot systems inspired by the functionality and organisation of the human neuroregulatory system. Our model was specified using software agents within a formal framework and implemented through Web Services. This approach allows the control logic of a robot system to be implemented with relative ease and incrementally, by adding new control centres to the system as its behaviour is observed or needs to be detailed with greater precision, without modifying existing functionality. The tests performed verify that the proposed model has the general characteristics of biological systems together with the desirable features of software, such as robustness, flexibility, reuse and decoupling.
Abstract:
Part 1. Alternating-current control devices and assemblies.--Part 2. Alternating-current controllers.--Part 3. Direct-current controllers.
Abstract:
Calibration of a groundwater model requires that hydraulic properties be estimated throughout a model domain. This generally constitutes an underdetermined inverse problem, for which a solution can only be found when some kind of regularization device is included in the inversion process. Inclusion of regularization in the calibration process can be implicit, for example through the use of zones of constant parameter value, or explicit, for example through solution of a constrained minimization problem in which parameters are made to respect preferred values, or preferred relationships, to the degree necessary for a unique solution to be obtained. The cost of uniqueness is this: no matter which regularization methodology is employed, the inevitable consequence of its use is a loss of detail in the calibrated field. This, in turn, can lead to erroneous predictions made by a model that is ostensibly well calibrated. Information made available as a by-product of the regularized inversion process allows the reasons for this loss of detail to be better understood. In particular, it is easily demonstrated that the estimated value for a hydraulic property at any point within a model domain is, in fact, a weighted average of the true hydraulic property over a much larger area. This averaging process causes loss of resolution in the estimated field. Where hydraulic conductivity is the hydraulic property being estimated, high averaging weights exist in areas that are strategically disposed with respect to measurement wells, while other areas may contribute very little to the estimated hydraulic conductivity at any point within the model domain, possibly making the detection of hydraulic conductivity anomalies in these latter areas almost impossible. A study of the post-calibration parameter field covariance matrix allows further insights into the loss of system detail incurred through the calibration process to be gained.
A comparison of pre- and post-calibration parameter covariance matrices shows that the latter often possess a much smaller spectral bandwidth than the former. It is also demonstrated that, as an inevitable consequence of the fact that a calibrated model cannot replicate every detail of the true system, model-to-measurement residuals can show a high degree of spatial correlation, a fact that must be taken into account when assessing these residuals either qualitatively or quantitatively in the exploration of model predictive uncertainty. These principles are demonstrated using a synthetic case in which spatial parameter definition is based on pilot points, and calibration is implemented using both zones of piecewise constancy and constrained minimization regularization. (C) 2005 Elsevier Ltd. All rights reserved.
Abstract:
Reducing fossil fuel consumption and developing energy-saving technologies are issues of central importance for both industry and research, owing to the drastic effects that anthropogenic pollutant emissions are having on the environment. While a growing number of standards and regulations are being issued to address these problems, the need to develop low-emission technologies is driving research across many industrial sectors. Although the deployment of renewable energy sources is seen as the most promising long-term solution, a full and effective integration of such technologies is currently impracticable, both because of technical constraints and because of the sheer share of energy production, currently met by fossil sources, that alternative technologies would have to cover. Optimizing energy production and management, on the other hand, together with developing technologies that reduce energy consumption, represents an adequate solution to the problem, and one that can be deployed over shorter time horizons. The objective of this thesis is to investigate, develop and apply a set of numerical tools for optimizing the design and management of energy processes, to be used for reducing fuel consumption and optimizing energy efficiency. The methodology developed relies on a numerical modelling approach that exploits the predictive capabilities deriving from a mathematical representation of the processes to develop optimization strategies for them under realistic operating conditions.
In developing these procedures, particular emphasis is placed on the need to derive correct management strategies that account for the dynamics of the plants analysed, so as to obtain the best performance during actual operation. The energy-optimization problem was addressed for three different technological applications. In the first, a multi-source plant serving the energy demand of a commercial building was considered. Since this system uses several technologies to produce the thermal and electrical energy required by the users, the correct load-sharing strategy must be identified to guarantee maximum plant efficiency. Based on a simplified model of the plant, the problem was solved with a deterministic Dynamic Programming algorithm, and the results were compared with those of a simpler rule-based strategy, thereby demonstrating the advantages of adopting an optimal control strategy. The second application investigated the design of a hybrid solution for energy recovery from a hydraulic excavator. Since several technological layouts can be conceived to implement this solution, and the additional components must be properly sized, a methodology is needed to evaluate the maximum performance obtainable from each alternative. The comparison between layouts was therefore carried out on the basis of the machine's energy performance over a standardized digging cycle, estimated with the aid of a detailed plant model.
Since adding energy-recovery devices introduces additional degrees of freedom into the system, their optimal control strategy also had to be determined in order to assess the maximum performance obtainable from each layout. This problem was again solved with a Dynamic Programming algorithm exploiting a simplified system model devised for the purpose. Once the optimal performance of each design solution had been determined, a fair comparison between the alternatives was possible. The third and final application analysed an organic Rankine cycle (ORC) plant for recovering waste heat from passenger-car exhaust gases. Although ORC plants can potentially yield significant fuel savings for a vehicle, their correct operation requires complex control strategies able to cope with the variability of the process heat source; moreover, while fuel savings are maximized, the system must be kept within safe operating conditions. To address this problem, a robust and efficient plant model based on the Moving Boundary Methodology was developed to simulate the phase-change dynamics of the organic fluid and estimate plant performance. This model was then used to design a model predictive controller (MPC) able to estimate the optimal control parameters for managing the system during transient operation. To solve the corresponding nonlinear dynamic optimization problem, an algorithm based on Particle Swarm Optimization was developed.
The results obtained with this controller were compared with those of a classic proportional-integral (PI) controller, again showing the advantages, from an energy standpoint, of adopting an optimal control strategy.
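The deterministic Dynamic Programming approach used in the first two applications can be sketched on a toy energy-management problem. Everything here is invented for illustration (the demand profile, the SoC grid, and the quadratic fuel model are not the thesis's plant models): an engine and a battery share a known demand, the battery state of charge is the DP state, and a backward recursion yields the minimum-fuel load split.

```python
# Deterministic Dynamic Programming for a toy energy-management problem:
# an engine (convex fuel curve) and a battery cover a known demand profile.
demand = [2, 3, 1, 2]             # power demand per step (arbitrary units)
soc_levels = list(range(0, 5))    # discrete battery SoC grid: 0..4 energy units
fuel = lambda p: p + 0.2 * p**2   # assumed engine fuel-consumption model

INF = float("inf")
# Terminal condition: require final SoC >= 2 (no net battery depletion).
cost = [0.0 if s >= 2 else INF for s in soc_levels]
for dem in reversed(demand):                 # backward recursion over time
    new_cost = []
    for s in soc_levels:
        best = INF
        for p_eng in range(0, 5):            # candidate engine power levels
            s_next = s + p_eng - dem         # battery absorbs the difference
            if 0 <= s_next < len(soc_levels):
                best = min(best, fuel(p_eng) + cost[s_next])
        new_cost.append(best)
    cost = new_cost

print(cost[2])   # minimal total fuel starting from SoC = 2
```

Because the fuel curve is convex, the optimal policy spreads the engine load evenly (2 units per step here), which is exactly the kind of result a rule-based strategy tends to miss.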
Abstract:
The authors use social control theory to develop a conceptual model that addresses the effectiveness of regulatory agencies’ (e.g., Food and Drug Administration, Occupational Safety and Health Administration) field-level efforts to obtain conformance with product safety laws. Central to the model are the control processes agencies use when monitoring organizations and enforcing the safety rules. These approaches can be labeled formal control (e.g., rigid enforcement) and informal control (e.g., social instruction). The theoretical framework identifies an important antecedent of control and the relative effectiveness of control’s alternative forms in gaining compliance and reducing opportunism. Furthermore, the model predicts that the regulated firms’ level of agreement with the safety rules moderates the relationships between control and firm responses. A local health department’s administration of state food safety regulations provides the empirical context for testing the hypotheses. The results from a survey of 173 restaurants largely support the proposed model. The study findings inform a discussion of effective methods of administering product safety laws.
Abstract:
Modern enterprises operate in a highly dynamic environment, so developing a company strategy is of crucial importance: it determines the survival of the enterprise and its evolution. Adapting the desired management goal to environmental changes is a complex problem. In the present paper, an approach for solving this problem is suggested, based on the predictive control philosophy. The enterprise is modelled as a cybernetic system, and the future plant response is predicted by a neural network model. The predictions are passed to an optimization routine, which attempts to minimize a quadratic performance criterion.
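The predict-then-optimize loop described above can be sketched in a few lines (a minimal illustration, not the paper's implementation: the network weights are random stand-ins for a trained plant model, and the optimization routine is a brute-force search over a short horizon):

```python
import numpy as np

# A tiny feedforward network stands in for the trained neural plant model.
rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(4, 2)) * 0.5, np.zeros(4)
W2 = rng.normal(size=4) * 0.5

def nn_model(y, u):
    """One-step plant response predicted by the (stand-in) neural model."""
    h = np.tanh(W1 @ np.array([y, u]) + b1)
    return float(W2 @ h)

def criterion(u_seq, y0, ref, lam=0.1):
    """Quadratic performance criterion over the prediction horizon."""
    y, J = y0, 0.0
    for u in u_seq:
        y = nn_model(y, u)
        J += (y - ref) ** 2 + lam * u ** 2   # tracking error + control effort
    return J

# Optimization routine: exhaustive search over a 2-step control horizon.
candidates = np.linspace(-1.0, 1.0, 11)
best = min(((u1, u2) for u1 in candidates for u2 in candidates),
           key=lambda s: criterion(s, y0=0.0, ref=0.5))
print(best, round(criterion(best, 0.0, 0.5), 3))
```

In a receding-horizon scheme only the first element of `best` would be applied before re-predicting; a gradient-based optimizer would replace the grid search in any realistic setting.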
Abstract:
This article develops a relational model of institutional work and complexity. This model advances current institutional debates on institutional complexity and institutional work in three ways. First, it provides a relational and dynamic perspective on institutional complexity by explaining how constellations of logics - and their degree of internal contradiction - are constructed rather than given. Second, it refines our current understanding of agency, intentionality and effort in institutional work by demonstrating how different dimensions of agency interact dynamically in the institutional work of reconstructing institutional complexity. Third, it situates institutional work in the everyday practice of individuals coping with the institutional complexities of their work. In doing so, it reconnects the construction of institutionally complex settings to the actions and interactions of the individuals who inhabit them. © The Author(s) 2013.
Abstract:
A cascaded DC-DC boost converter is one way to integrate hybrid battery types within a grid-tie inverter. Because battery parameters such as state of charge and/or capacity differ across the system, a module-based distributed power-sharing strategy may be used. To implement this sharing strategy, the control reference for each module's voltage/current control loop must be varied dynamically according to these battery parameters. With the conventional PI control approach, relative battery-parameter variations can then cause stability problems within the cascaded converters. This paper proposes a new control method based on Lyapunov functions to eliminate this issue. The proposed solution provides global asymptotic stability at module level, avoiding instability due to parameter variations. A detailed analysis and design of the nonlinear control structure are presented under the distributed sharing control. Finally, thorough experimental investigations are presented to prove the effectiveness of the proposed control under grid-tie conditions.
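The core Lyapunov argument can be illustrated on a deliberately simplified error model (this is a pedagogical sketch, not the paper's converter dynamics): with error dynamics de/dt = u and control law u = -k·e, the Lyapunov function V = ½e² has dV/dt = e·u = -k·e² < 0, so the error converges regardless of where the battery-dependent reference has moved.

```python
# Lyapunov-based stabilization of a module voltage error e = v - v_ref,
# with toy first-order error dynamics de/dt = u (illustrative only).
k, dt = 2.0, 0.01

def simulate(e0, steps=500):
    """Integrate the closed-loop error with Euler steps."""
    e = e0
    for _ in range(steps):
        u = -k * e       # control law chosen from V = 0.5*e**2
        e += dt * u      # dV/dt = e*u = -k*e**2 <= 0 along trajectories
    return e

# Convergence holds for any initial error, i.e. global asymptotic stability
# of this toy error system, independent of the reference shift that set e0.
print(abs(simulate(1.0)), abs(simulate(-0.5)))
```

The paper's contribution lies in constructing such a function for the full nonlinear cascaded-converter model; the sketch only shows why a negative-definite dV/dt removes the parameter-variation sensitivity that a fixed-gain PI loop can exhibit.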
Abstract:
The long-term goal of the work described is to contribute to the emerging literature of prevention science in general, and to school-based psychoeducational interventions in particular. The psychoeducational intervention reported in this study used a main-effects prevention intervention model. The current study focused on promoting optimal cognitive and affective functioning. The goal of this intervention was to increase potential protective factors such as critical cognitive and communicative competencies (e.g., critical problem solving and decision making) and affective competencies (e.g., personal control and responsibility) in middle adolescents who have been identified by the school system as being at risk for problem behaviors. The current psychoeducational intervention draws on an ongoing program of theory and research (Berman, Berman, Cass Lorente, Ferrer Wreder, Arrufat, & Kurtines, 1996; Ferrer Wreder, 1996; Kurtines, Berman, Ittel, & Williamson, 1995) and extends it to include Freire's (1970) concept of transformative pedagogy in developing school-based psychoeducational programs that target troubled adolescents. The results of the quantitative and qualitative analyses indicated trends that were generally encouraging with respect to the effects of the intervention on increasing critical cognitive and affective competencies.
Abstract:
The goal of modern radiotherapy is to precisely deliver a prescribed radiation dose to delineated target volumes that contain a significant amount of tumor cells while sparing the surrounding healthy tissues/organs. Precise delineation of treatment and avoidance volumes is the key to precision radiation therapy. In recent years, considerable clinical and research efforts have been devoted to integrating MRI into the radiotherapy workflow, motivated by its superior soft tissue contrast and functional imaging possibilities. Dynamic contrast-enhanced MRI (DCE-MRI) is a noninvasive technique that measures properties of tissue microvasculature. Its sensitivity to radiation-induced vascular pharmacokinetic (PK) changes has been preliminarily demonstrated. In spite of its great potential, two major challenges have limited DCE-MRI's clinical application in radiotherapy assessment: the technical limitations of accurate DCE-MRI implementation and the need for novel DCE-MRI data analysis methods that provide richer functional heterogeneity information.
This study aims at improving current DCE-MRI techniques and developing new DCE-MRI analysis methods specifically for radiotherapy assessment. The study is therefore naturally divided into two parts. The first part focuses on DCE-MRI temporal resolution as one of the key DCE-MRI technical factors, and some improvements to DCE-MRI temporal resolution are proposed; the second part explores the potential value of image heterogeneity analysis and multiple-PK-model combination for therapeutic response assessment, and several novel DCE-MRI data analysis methods are developed.
I. Improvement of DCE-MRI temporal resolution. First, the feasibility of improving DCE-MRI temporal resolution via image undersampling was studied. Specifically, a novel MR image iterative reconstruction algorithm was studied for DCE-MRI reconstruction. This algorithm was built on the recently developed compressed sensing (CS) theory. By utilizing a limited k-space acquisition with shorter imaging time, images can be reconstructed in an iterative fashion under the regularization of a newly proposed total generalized variation (TGV) penalty term. In the retrospective study of brain radiosurgery patient DCE-MRI scans under IRB approval, the clinically obtained image data was selected as reference data, and the simulated accelerated k-space acquisition was generated by undersampling the reference image full k-space with designed sampling grids. Two undersampling strategies were proposed: 1) a radial multi-ray grid with a special angular distribution was adopted to sample each slice of the full k-space; 2) a Cartesian random sampling grid series with spatiotemporal constraints from adjacent frames was adopted to sample the dynamic k-space series at a slice location. Two sets of PK parameter maps were generated from the undersampled data and from the fully-sampled data, respectively. Multiple quantitative measurements and statistical studies were performed to evaluate the accuracy of PK maps generated from the undersampled data in reference to the PK maps generated from the fully-sampled data. Results showed that at a simulated acceleration factor of four, PK maps could be faithfully calculated from the DCE images that were reconstructed using undersampled data, and no statistically significant differences were found between the regional PK mean values from undersampled and fully-sampled data sets. DCE-MRI acceleration using the investigated image reconstruction method therefore appears feasible and promising.
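The first undersampling strategy can be visualized with a toy sampling-grid generator (the grid size, ray count, and uniform angular spacing are illustrative assumptions, not the study's actual protocol): sampling k-space along a few rays through the center keeps the densely informative low frequencies while skipping most of the grid.

```python
import numpy as np

# Build a radial multi-ray undersampling mask on an N x N k-space grid.
N, n_rays = 64, 16
mask = np.zeros((N, N), dtype=bool)
c = (N - 1) / 2.0                            # k-space center
for ang in np.linspace(0.0, np.pi, n_rays, endpoint=False):
    for r in np.linspace(-c, c, 2 * N):      # walk along one ray
        i = int(round(c + r * np.sin(ang)))
        j = int(round(c + r * np.cos(ang)))
        mask[i, j] = True

# Acceleration factor: full grid size over the number of sampled locations.
acceleration = mask.size / mask.sum()
print(round(acceleration, 1))
```

Rays overlap near the center, so low spatial frequencies are sampled every frame while the acceleration factor stays well above one; the iterative TGV-regularized reconstruction then fills in the unsampled high frequencies.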
Second, for high temporal resolution DCE-MRI, a new PK model fitting method was developed to solve for PK parameters with better calculation accuracy and efficiency. This method is based on a derivative-based deformation of the commonly used Tofts PK model, which is usually presented as an integrative expression. The method also includes an advanced Kolmogorov-Zurbenko (KZ) filter to remove potential noise effects in the data, and solves for the PK parameters as a linear problem in matrix form. In the computer simulation study, PK parameters representing typical intracranial values were selected as references to simulate DCE-MRI data at different temporal resolutions and data noise levels. Results showed that at both high temporal resolutions (<1 s) and a clinically feasible temporal resolution (~5 s), the new method calculated PK parameters more accurately than current calculation methods at clinically relevant noise levels; at high temporal resolutions, its calculation efficiency was superior to current methods by roughly two orders of magnitude (10²). In a retrospective study of clinical brain DCE-MRI scans, the PK maps derived from the proposed method were comparable with the results from current methods. Based on these results, it can be concluded that the new method enables accurate and efficient PK model fitting for high temporal resolution DCE-MRI.
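The linearization idea behind such a fitting method can be sketched as follows (a simplified illustration: the arterial input function and rate constants are invented test values, and the KZ filtering step is omitted). Integrating the Tofts equation dCt/dt = Ktrans·Cp − kep·Ct gives Ct(t) = Ktrans·∫Cp − kep·∫Ct, which is linear in (Ktrans, kep) and solvable by least squares instead of iterative nonlinear fitting:

```python
import numpy as np

# Simulate high-temporal-resolution tissue uptake with known parameters.
dt = 0.1                                        # sampling interval (s)
t = np.arange(0.0, 300.0, dt)
Cp = 5.0 * (t / 60.0) * np.exp(-t / 60.0)       # assumed arterial input function
Ktrans_true, kep_true = 0.25 / 60, 0.625 / 60   # per-second rate constants

Ct = np.zeros_like(t)                           # forward Euler integration
for i in range(1, len(t)):
    Ct[i] = Ct[i-1] + dt * (Ktrans_true * Cp[i-1] - kep_true * Ct[i-1])

# Linear least-squares fit: Ct = Ktrans * cumint(Cp) - kep * cumint(Ct).
cumint = lambda y: np.cumsum(y) * dt
A = np.column_stack([cumint(Cp), -cumint(Ct)])
Ktrans_est, kep_est = np.linalg.lstsq(A, Ct, rcond=None)[0]
print(round(Ktrans_est / Ktrans_true, 3), round(kep_est / kep_true, 3))
```

One matrix solve replaces thousands of per-voxel nonlinear optimizations, which is where the large efficiency gain at high temporal resolution comes from.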
II. Development of DCE-MRI analysis methods for therapeutic response assessment. This part aims at methodology development along two approaches. The first is to develop a model-free analysis method for DCE-MRI functional heterogeneity evaluation. This approach is inspired by the rationale that radiotherapy-induced functional change may be heterogeneous across the treatment area. The first effort was a translational investigation of classic fractal dimension theory for DCE-MRI therapeutic response assessment. In a small-animal anti-angiogenesis drug therapy experiment, randomly assigned treatment/control groups received multiple fraction treatments with one pre-treatment and multiple post-treatment high-spatiotemporal-resolution DCE-MRI scans. In the post-treatment scan two weeks after the start, the investigated Rényi dimensions of the classic PK rate-constant map demonstrated significant differences between the treatment and control groups; when Rényi dimensions were adopted for treatment/control group classification, the achieved accuracy was higher than that obtained using conventional PK parameter statistics. Following this pilot work, two novel texture analysis methods were proposed. First, a new technique called the Gray Level Local Power Matrix (GLLPM) was developed. It addresses the lack of temporal information and the poor calculation efficiency of the commonly used Gray Level Co-Occurrence Matrix (GLCOM) techniques. In the same small-animal experiment, the dynamic curves of Haralick texture features derived from the GLLPM had an overall better performance than the corresponding curves derived from current GLCOM techniques in treatment/control separation and classification. The second developed method is dynamic Fractal Signature Dissimilarity (FSD) analysis. Inspired by classic fractal dimension theory, this method quantitatively measures the dynamics of tumor heterogeneity during contrast agent uptake on DCE images.
In the small-animal experiment mentioned before, the selected parameters from dynamic FSD analysis showed significant differences between treatment/control groups as early as after one treatment fraction; in contrast, metrics from conventional PK analysis showed significant differences only after three treatment fractions. When dynamic FSD parameters were used, the treatment/control group classification after the first treatment fraction improved compared with using conventional PK statistics. These results suggest the promising application of this novel method for capturing early therapeutic response.
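The fractal-dimension idea underlying these heterogeneity analyses can be demonstrated with a minimal box-counting sketch (a toy binary map, not DCE-MRI data or the study's Rényi/FSD estimators): count occupied boxes N(s) at several box sizes s and read the dimension off the slope of log N(s) versus log(1/s).

```python
import numpy as np

def box_count(img, s):
    """Number of s-by-s boxes containing at least one nonzero pixel."""
    n = img.shape[0] // s
    blocks = img[:n * s, :n * s].reshape(n, s, n, s)
    return np.count_nonzero(blocks.any(axis=(1, 3)))

img = np.ones((64, 64), dtype=bool)           # a filled square region
sizes = np.array([1, 2, 4, 8, 16])
counts = np.array([box_count(img, s) for s in sizes])

# Box-counting dimension = slope of log N(s) vs. log(1/s).
dim = np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)[0]
print(round(dim, 2))   # ~2.0 for a filled planar region
```

A homogeneous filled region yields dimension ≈ 2, while a ragged, heterogeneous uptake map yields a non-integer value between 1 and 2; tracking how such measures evolve during contrast uptake is the essence of the dynamic FSD analysis.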
The second approach to developing novel DCE-MRI methods is to combine PK information from multiple PK models. Currently, the classic Tofts model or its alternative version has been widely adopted for DCE-MRI analysis as a gold-standard approach for therapeutic response assessment. Previously, a shutter-speed (SS) model was proposed to incorporate the transcytolemmal water exchange effect into contrast agent concentration quantification. In spite of its richer biological assumptions, its application in therapeutic response assessment has been limited. It is intriguing to combine information from the SS model and the classic Tofts model to explore potential new biological information for treatment assessment. The feasibility of this idea was investigated in the same small-animal experiment. The SS model was compared against the Tofts model for therapeutic response assessment using regional mean PK parameter comparison. Based on the modeled transcytolemmal water exchange rate, a biological subvolume was proposed and automatically identified using histogram analysis. Within the biological subvolume, the PK rate constant derived from the SS model proved superior to that from the Tofts model in treatment/control separation and classification. Furthermore, novel biomarkers were designed to integrate the PK rate constants from the two models. When evaluated in the biological subvolume, this biomarker reflected significant treatment/control differences in both post-treatment evaluations. These results confirm the potential value of the SS model, as well as its combination with the Tofts model, for therapeutic response assessment.
In summary, this study addressed two problems in the application of DCE-MRI to radiotherapy assessment. In the first part, a method of accelerating DCE-MRI acquisition for better temporal resolution was investigated, and a novel PK model fitting algorithm was proposed for high temporal resolution DCE-MRI. In the second part, two model-free texture analysis methods and a multiple-model analysis method were developed for DCE-MRI therapeutic response assessment. The presented work could benefit future routine clinical application of DCE-MRI in radiotherapy assessment.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
Predicting user behaviour enables user assistant services to provide personalized services to users. This requires a comprehensive user model that can be created by monitoring user interactions and activities. BaranC is a framework that performs user interface (UI) monitoring (and collects all associated context data), builds a user model, and supports services that make use of the user model. A prediction service, Next-App, was built to demonstrate the use of the framework and to evaluate the usefulness of such a prediction service. Next-App analyses a user's data, learns patterns, builds a model for the user, and finally predicts, based on the user model and current context, which application(s) the user is likely to want to use. The prediction is proactive, reflecting the current context, and dynamic, responding to changes in the user model as the user's habits change over time. Initial evaluation of Next-App indicates a high level of satisfaction with the service.
Abstract:
With the availability of reliable moisture-content sensors based on near-infrared (NIR) spectroscopy and chemometric tools, it is now possible to apply in-line control strategies to several drying processes in the pharmaceutical industry. In this work, the drying of pharmaceutical granules in a pilot-scale batch fluidized bed dryer (FBD) is studied using a spectroscopic moisture sensor. Electrical modifications were first made to the instrumented dryer to route the measured and manipulated signals to an acquisition device. A human-machine interface was then designed to control the dryer directly from a laptop computer. Next, a nonlinear model predictive control (NMPC) algorithm, based on a consolidated phenomenological model of the FBD, is run in closed loop on the same computer. The objective is to reach a precise end-of-drying moisture-content setpoint while constraining particle temperature and reducing batch time. In addition, the energy consumption of the FBD is explicitly included in the NMPC objective function. Compared with a typical industrial operating technique (essentially open loop), it is shown that drying time and energy consumption can be managed effectively on the pilot process while limiting several operating problems such as under-drying, over-drying, or overheating of the granules.
Abstract:
The thesis deals with topics that led to the development of innovative control-oriented models and control algorithms for modern gasoline engines. Knock in boosted spark-ignition engines is the widest topic discussed in this document, because it remains one of the most limiting factors for maximizing combustion efficiency in this kind of engine. The first chapter is thus focused on knock, and a wide literature review summarizes the preliminary knowledge that forms the background and reference for the activities discussed. The most relevant results achieved during the PhD course in the field of knock modelling and control are then presented, describing every control-oriented model that led to the development of an adaptive model-based combustion control system. The complete controller was developed in collaboration with Ferrari GT and made it possible to completely redefine knock intensity evaluation as well as combustion phase control. The second chapter is focused on a prototype Port Water Injection system that was developed and tested on a turbocharged spark-ignition engine, in collaboration with Magneti Marelli. The system and the effects of injected water on the combustion process were then modelled in a 1-D simulation environment (GT Power). The third chapter presents the development and validation of a control-oriented model for the real-time calculation of exhaust gas temperature, which represents another important limitation on performance increase in modern boosted engines. Indeed, modelling of exhaust gas temperature and thermocouple behaviour plays a key role in the optimization of combustion and catalyst efficiency.
Abstract:
Nowadays, the worsening air-pollution crisis, driven by greenhouse-gas emissions, is aggravating global warming. Recently, several metropolitan cities have introduced Zero-Emission Zones, where use of the internal combustion engine is forbidden in order to reduce localized pollutant emissions. This is particularly problematic for Plug-in Hybrid Electric Vehicles, which usually operate in charge-depleting mode. To address these issues, the present thesis presents a viable solution that exploits vehicular connectivity to retrieve navigation data for the urban zone along a selected route. The battery energy needed, in the form of a minimum State of Charge (SoC), is calculated by a Speed Profile Prediction algorithm and a Backward Vehicle Model. That value is then fed to both a Rule-Based Strategy, developed specifically for this application, and an Adaptive Equivalent Consumption Minimization Strategy (A-ECMS). The effectiveness of this approach has been tested with a Connected Hardware-in-the-Loop (C-HiL) setup on a driving cycle measured on-road, stimulating the predictions with multiple re-routings. However, even though hybrid electric vehicles are recognized as a valid response to increasingly tight regulations, the reduced engine load and the repeated engine starts and stops may substantially reduce the temperature of the exhaust after-treatment system (EATS), leading to relevant issues in pollutant emission control. In this context, electrically heated catalysts (EHCs) represent a promising solution to ensure high pollutant-conversion efficiency without affecting engine efficiency and performance. This work studies the advantages provided by the introduction of a predictive EHC control function for a light-duty Diesel plug-in hybrid electric vehicle (PHEV) equipped with a Euro 7-oriented EATS.
Based on knowledge of future driving scenarios provided by vehicular connectivity, the first engine start can be predicted and an EATS pre-heating phase planned accordingly.
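The backward-vehicle-model step that produces the minimum SoC can be sketched as follows (a rough illustration: the vehicle parameters, speed profile, and battery size are invented, the longitudinal model is reduced to inertia, aerodynamic drag, and rolling resistance, and regenerative braking is conservatively ignored):

```python
import numpy as np

# Illustrative vehicle and battery parameters (not the thesis's vehicle).
mass, cd_a, crr = 1600.0, 0.65, 0.010   # kg, Cd*A in m^2, rolling coefficient
rho, g, eta = 1.2, 9.81, 0.85           # air density, gravity, drivetrain efficiency
batt_kwh = 10.0                         # usable battery capacity

# Predicted speed profile over the zero-emission zone (m/s, 1 s samples):
# accelerate to 14 m/s, cruise, then stop.
dt = 1.0
v = np.concatenate([np.linspace(0, 14, 30), np.full(120, 14.0),
                    np.linspace(14, 0, 30)])
a = np.gradient(v, dt)

# Backward model: tractive force -> battery power -> energy -> minimum SoC.
force = mass * a + 0.5 * rho * cd_a * v**2 + crr * mass * g * (v > 0)
power = np.maximum(force * v, 0.0) / eta      # traction only, no regen credit
energy_kwh = power.sum() * dt / 3.6e6
min_soc = energy_kwh / batt_kwh
print(round(energy_kwh, 2), round(100 * min_soc, 1))  # kWh, minimum SoC in %
```

The resulting minimum SoC is the reference handed to the Rule-Based Strategy and the A-ECMS, which then reserve enough charge before the vehicle enters the zone; the same horizon information is what allows the EHC pre-heating phase to be scheduled ahead of the first engine start.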