950 results for Lead-time reduction
Abstract:
Thesis focused on the impact that delivery lead-time reduction, pursued as a commercial strategy, has on the supply chain of Neri SpA. To this end, a model based on the Kraljic matrix was developed for classifying suppliers. The matrix was adapted to the company's needs, and the AHP multi-criteria analysis method was used to determine the weights of the parameters making up the more complex dimension of the matrix. Gantt charts were developed starting from the lead times recorded in the bill of materials. From these, the time-critical paths were identified and, through their analysis, the supply chains critical to developing the new commercial potential. The critical supply chains were then analyzed as a whole, verifying the role of individual suppliers using the Kraljic-matrix classification as the basis of the analysis. From the supply-chain analysis and discussion with the sales function, strategies were proposed for reducing lead times and achieving the new commercial potential.
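The AHP step mentioned above — deriving criterion weights from a pairwise-comparison matrix — can be sketched as follows. This is a minimal illustration, not the thesis model: the 3×3 Saaty-scale matrix and the supply-risk parameters it compares are invented for the example.

```python
# Illustrative AHP weight derivation via power iteration on the
# pairwise-comparison matrix (the normalized principal eigenvector
# gives the criterion weights).

def ahp_weights(matrix, iters=100):
    """Approximate the principal eigenvector of a positive
    pairwise-comparison matrix and normalize it to sum to 1."""
    n = len(matrix)
    w = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        w = [x / s for x in v]
    return w

# Hypothetical comparison of three supply-risk parameters
# (e.g. supplier lead time vs. substitutability vs. technical complexity).
M = [[1,     3,     5],
     [1 / 3, 1,     3],
     [1 / 5, 1 / 3, 1]]
weights = ahp_weights(M)  # larger weight = more influential parameter
```

In a full AHP application one would also check the consistency ratio of the comparison matrix before accepting the weights.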
Abstract:
The thesis project deals with the industrialization of a family of mechanical components, with the objectives of reducing lead time and production costs and improving build quality. To this end, a new manufacturing cycle was defined that gathers all machining operations into a single milling machining center. All phases of the production process were defined, designed, and implemented using modern CAD-CAM-CNC software. The results obtained were then analyzed, including an economic analysis to assess the annual percentage savings in production cost. Finally, the old production process was compared with the new one, built according to Industry 4.0 principles, along with the advantages the latter makes possible.
Abstract:
Background: DNA polymerase γ (POLG) is the only known mitochondrial DNA (mtDNA) polymerase. It mediates mtDNA replication and base excision repair. Mutations in the POLG gene lead to a reduction of functional mtDNA (mtDNA depletion and/or deletions) and are therefore predicted to result in defective oxidative phosphorylation (OXPHOS). Many mutations map to the polymerase and exonuclease domains of the enzyme and produce a broad clinical spectrum. The most frequent mutation, p.A467T, is localised in the linker region between these domains. In compound heterozygote patients the p.A467T mutation has been associated with, among other phenotypes, fatal childhood encephalopathy. These patients have a poorer survival rate than homozygotes. Methods: mtDNA content in various tissues (fibroblasts, muscle and liver) was quantified using quantitative PCR (qPCR). OXPHOS activities in the same tissues were assessed using spectrophotometric methods and catalytic staining of BN-PAGE gels. Results: We characterise a novel splice-site mutation in POLG found in trans with the p.A467T mutation in a 3.5-year-old boy with valproic acid-induced acute liver failure (Alpers-Huttenlocher syndrome). These mutations result in a tissue-specific depletion of mtDNA which correlates with the OXPHOS activities. Conclusions: mtDNA depletion can be expressed in a highly tissue-specific manner, which confirms the need to analyse primary tissue. Furthermore, POLG analysis optimises clinical management in the early stages of disease and reinforces the need for its evaluation before starting valproic acid treatment.
Abstract:
Accurate seasonal to interannual streamflow forecasts based on climate information are critical for optimal management and operation of water resources systems. Since most water supply systems are multipurpose, operating them to meet increasing demand under the growing stresses of climate variability and climate change, population and economic growth, and environmental concerns can be very challenging. This study investigated improvements in water resources systems management through the use of seasonal climate forecasts. Hydrological persistence (streamflow and precipitation) and large-scale recurrent oceanic-atmospheric patterns such as the El Niño/Southern Oscillation (ENSO), Pacific Decadal Oscillation (PDO), North Atlantic Oscillation (NAO), Atlantic Multidecadal Oscillation (AMO), Pacific North American (PNA) pattern, and customized sea surface temperature (SST) indices were investigated for their potential to improve streamflow forecast accuracy and increase forecast lead time in a river basin in central Texas. First, an ordinal polytomous logistic regression approach is proposed as a means of incorporating multiple predictor variables into a probabilistic forecast model. Forecast performance is assessed through a cross-validation procedure, using distributions-oriented metrics, and implications for decision making are discussed. Results indicate that, of the predictors evaluated, only hydrologic persistence and Pacific Ocean sea surface temperature patterns associated with ENSO and PDO provide forecasts which are statistically better than climatology. Secondly, a class of data mining techniques, known as tree-structured models, is investigated to address the nonlinear dynamics of climate teleconnections and to screen promising probabilistic streamflow forecast models for river-reservoir systems. Results show that the tree-structured models can effectively capture the nonlinear features hidden in the data.
Skill scores of probabilistic forecasts generated by both classification trees and logistic regression trees indicate that seasonal inflows throughout the system can be predicted with sufficient accuracy to improve water management, especially in the winter and spring seasons in central Texas. Lastly, a simplified two-stage stochastic economic-optimization model was proposed to investigate improvement in water use efficiency and the potential value of using seasonal forecasts, under the assumption of optimal decision making under uncertainty. Model results demonstrate that incorporating the probabilistic inflow forecasts into the optimization model provides a significant improvement in seasonal water contract benefits over climatology, with lower average deficits (increased reliability) for a given average contract amount, or improved mean contract benefits for a given level of reliability. The results also illustrate the trade-off between the expected contract amount and reliability, i.e., larger contracts can be signed at greater risk.
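The comparison of probabilistic forecasts against climatology described above is typically expressed through a skill score. A minimal sketch of one common choice, the Brier skill score, follows; the forecast probabilities, outcomes, and tercile event are hypothetical and not taken from the study.

```python
# Illustrative Brier skill score: how much a probabilistic forecast
# improves on a constant climatological probability.

def brier_score(probs, outcomes):
    """Mean squared difference between forecast probabilities and the
    0/1 outcomes (lower is better)."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

def brier_skill_score(forecast_probs, climatology_prob, outcomes):
    """BSS = 1 - BS_forecast / BS_climatology; positive values mean
    the forecast beats climatology."""
    bs_f = brier_score(forecast_probs, outcomes)
    bs_c = brier_score([climatology_prob] * len(outcomes), outcomes)
    return 1.0 - bs_f / bs_c

# Hypothetical event: "seasonal inflow in the above-normal tercile"
# (climatological base rate 1/3), with invented forecasts and outcomes.
outcomes = [1, 0, 0, 1, 0, 1]
forecasts = [0.8, 0.2, 0.1, 0.7, 0.3, 0.6]
bss = brier_skill_score(forecasts, 1 / 3, outcomes)  # > 0: beats climatology
```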
Abstract:
Large power transformers, an aging and vulnerable part of our energy infrastructure, sit at choke points in the grid and are key to reliability and security. Damage or destruction due to vandalism, misoperation, or other unexpected events is of great concern, given replacement costs upward of $2M and lead times of 12 months. Transient overvoltages can cause great damage, and there is much interest in improving computer simulation models to correctly predict and avoid the consequences. EMTP (the Electromagnetic Transients Program) has been developed for computer simulation of power system transients. Component models for most equipment have been developed and benchmarked. Power transformers would appear to be simple. However, due to their nonlinear and frequency-dependent behaviors, they can be among the most complex system components to model. It is imperative that the applied models be appropriate for the range of frequencies and excitation levels that the system experiences. Transformer modeling is thus not yet a mature field, and newer, improved models must be made available. In this work, improved topologically-correct duality-based models are developed for three-phase autotransformers having five-legged, three-legged, and shell-form cores. The main problem in the implementation of detailed models is the lack of complete and reliable data, as no international standard suggests how to measure and calculate parameters. Therefore, parameter estimation methods are developed here to determine the parameters of a given model in cases where the available information is incomplete. The transformer nameplate data are required, and the relative physical dimensions of the core are estimated. The models include a separate representation of each segment of the core, including hysteresis of the core, the λ-i saturation characteristic, capacitive effects, and frequency dependency of winding resistance and core loss.
Steady-state excitation as well as de-energization and re-energization transients are simulated and compared with an earlier-developed BCTRAN-based model. Black-start energization cases are also simulated as a means of model evaluation and compared with actual event records. The simulated results using the model developed here are reasonable and more accurate than those of the BCTRAN-based model. Simulation accuracy is dependent on the accuracy of the equipment model and its parameters. This work is significant in that it advances existing parameter estimation methods in cases where the available data and measurements are incomplete. The accuracy of EMTP simulation for power systems including three-phase autotransformers is thus enhanced. Theoretical results obtained from this work provide a sound foundation for the development of transformer parameter estimation methods using engineering optimization. In addition, it should be possible to refine which information and measurement data are necessary for complete duality-based transformer models. To further refine and develop the models and parameter estimation methods developed here, iterative full-scale laboratory tests using high-voltage, high-power three-phase transformers would be helpful.
Abstract:
Modern generative manufacturing processes for internally cooled tooling offer almost unlimited freedom in the design of conformal cooling channels. This places higher demands on tool engineering and on the optimization of cooling performance. Suitable simulation methods (such as Computational Fluid Dynamics, CFD) are ideal for supporting optimized tool design. By building virtual test rigs, design variants can be compared efficiently and inexpensively, and the costs of prototypes and rework can be reduced. In the computer model of the tool, soft sensors at arbitrary positions allow temperature-critical locations to be monitored in both the fluid and the solid domains. The benchmark carried out here compares the performance of an optimized tool insert with conventional cooling. The cycle-time reduction predicted in the virtual process agrees well with real experiments on the finished tools.
Abstract:
Forming tools are a new and so far unexplored application of additively manufactured tooling. The talk presents a case study in which a typical forged part of fairly complex geometry was successfully produced using an additively manufactured forging die. The market requirements for the earliest possible availability of real forged parts are presented. The entire process chain — from 3D CAD die design through forging-process simulation, laser beam melting of the die inserts, and die assembly to the actual forging trials under production-like conditions — is presented and compared with conventional forging-die design and manufacturing. The advantages and special features of the additive process chain are highlighted. The forged parts produced are compared with conventionally forged parts in terms of die filling, dimensional accuracy, and microstructure. The delivery time of the additively manufactured forging dies is compared with that of conventionally produced dies, as are the costs, in order to highlight the advantages of using additive manufacturing. Boundary conditions are described under which the additive manufacturing of forging dies is technically and economically worthwhile.
Abstract:
Recent evidence suggests that transition risks from an initial clinical high risk (CHR) status to psychosis are decreasing. The role played by remission in this context is mostly unknown. The present study addresses this issue by means of a meta-analysis of eight relevant studies published up to January 2012 that reported remission rates from an initial CHR status. The primary effect-size measure was the longitudinal proportion of remissions compared to non-remissions in subjects with a baseline CHR state. Random-effects models were employed to address the high heterogeneity across the included studies. To assess the robustness of the results, we performed sensitivity analyses by sequentially removing each study and rerunning the analysis. Of 773 subjects who met initial CHR criteria, 73% did not convert to psychosis over a 2-year follow-up. Of these, about 46% fully remitted from the baseline attenuated psychotic symptoms, as evaluated on the psychometric measures usually employed by prodromal services. The corresponding clinical remission was estimated at 35% of the baseline CHR sample. The CHR state is thus associated with a significant proportion of remitting subjects, which may be accounted for by the effective treatments received, a lead-time bias, a dilution effect, or a comorbid effect of other psychiatric diagnoses.
Abstract:
The redox property of ceria is a key factor in the catalytic activity of ceria-based catalysts. The oxidation state of well-defined ceria nanocubes in gas environments was analysed in situ by a novel combination of near-ambient pressure X-ray Photoelectron Spectroscopy (XPS) and high-energy XPS at a synchrotron X-ray source. In situ high-energy XPS is a promising new tool to determine the electronic structure of matter under defined conditions. The aim was to quantitatively determine the degree of cerium reduction in a nano-structured ceria-supported platinum catalyst as a function of the gas environment. To obtain a non-destructive depth profile at near-ambient pressure, in situ high-energy XPS analysis was performed by varying the kinetic energy of photoelectrons from 1 to 5 keV and, thus, the probing depth. In ceria nanocubes doped with platinum, oxygen vacancies formed only in the uppermost layers of ceria in an atmosphere of 1 mbar hydrogen at 403 K. For pristine ceria nanocubes, no change in the cerium oxidation state in various hydrogen or oxygen atmospheres was observed as a function of probing depth. In the absence of platinum, hydrogen does not dissociate and, thus, does not lead to reduction of ceria.
Abstract:
OBJECTIVE We sought to evaluate the feasibility of k-t parallel imaging for accelerated 4D flow MRI in the hepatic vascular system by investigating the impact of different acceleration factors. MATERIALS AND METHODS k-t GRAPPA accelerated 4D flow MRI of the liver vasculature was evaluated in 16 healthy volunteers at 3T with acceleration factors R = 3, R = 5, and R = 8 (2.0 × 2.5 × 2.4 mm³, TR = 82 ms), and R = 5 (TR = 41 ms); GRAPPA R = 2 was used as the reference standard. Qualitative flow analysis included grading of 3D streamlines and time-resolved particle traces. Quantitative evaluation assessed velocities, net flow, and wall shear stress (WSS). RESULTS Significant scan time savings were realized for all acceleration factors compared to standard GRAPPA R = 2 (21–71%) (p < 0.001). Quantification of velocities and net flow offered similar results between k-t GRAPPA R = 3 and R = 5 compared to standard GRAPPA R = 2. Significantly increased leakage artifacts and noise were seen between standard GRAPPA R = 2 and k-t GRAPPA R = 8 (p < 0.001), with significant underestimation of peak velocities and WSS of up to 31% in the hepatic arterial system (p < 0.05). WSS was significantly underestimated by up to 13% in all vessels of the portal venous system for k-t GRAPPA R = 5, while significantly higher values were observed for the same acceleration with higher temporal resolution in two veins (p < 0.05). CONCLUSION k-t acceleration of 4D flow MRI is feasible for liver hemodynamic assessment with acceleration factors R = 3 and R = 5, resulting in a scan time reduction of at least 40% with similar quantitation of liver hemodynamics compared with GRAPPA R = 2.
Abstract:
Objectives. Previous studies have shown a survival advantage in ovarian cancer patients with Ashkenazi-Jewish (AJ) BRCA founder mutations, compared to sporadic ovarian cancer patients. The purpose of this study was to determine if this association exists in ovarian cancer patients with non-Ashkenazi Jewish BRCA mutations. In addition, we sought to account for possible "survival bias" by minimizing any lead time that may exist between diagnosis and genetic testing. Methods. Patients with stage III/IV ovarian, fallopian tube, or primary peritoneal cancer and a non-Ashkenazi Jewish BRCA1 or BRCA2 mutation, seen for genetic testing January 1996-July 2007, were identified from genetics and institutional databases. Medical records were reviewed for clinical factors, including response to initial chemotherapy. Patients with sporadic (non-hereditary) ovarian, fallopian tube, or primary peritoneal cancer, without a family history of breast or ovarian cancer, were compared to similar cases, matched by age, stage, year of diagnosis, and vital status at time interval to BRCA testing. When possible, 2 sporadic patients were matched to each BRCA patient. An additional group of unmatched sporadic ovarian, fallopian tube and primary peritoneal cancer patients was included for a separate analysis. Progression-free survival (PFS) and overall survival (OS) were calculated by the Kaplan-Meier method. Multivariate Cox proportional hazards models were calculated for variables of interest. Matched pairs were treated as clusters. A stratified log-rank test was used to calculate survival data for matched pairs using paired event times. Fisher's exact test, chi-square, and univariate logistic regression were also used for analysis. Results. Forty-five advanced-stage ovarian, fallopian tube and primary peritoneal cancer patients with non-Ashkenazi Jewish (non-AJ) BRCA mutations, 86 sporadic-matched and 414 sporadic-unmatched patients were analyzed.
Compared to the sporadic-matched and sporadic-unmatched ovarian cancer patients, non-AJ BRCA mutation carriers had longer PFS (17.9 and 13.8 mos. vs. 32.0 mos.; HR 1.76 [95% CI 1.13–2.75] and 2.61 [95% CI 1.70–4.00]). In relation to the sporadic-unmatched patients, non-AJ BRCA patients had greater odds of complete response to initial chemotherapy (OR 2.25 [95% CI 1.17–5.41]) and improved OS (37.6 mos. vs. 101.4 mos.; HR 2.64 [95% CI 1.49–4.67]). Conclusions. This study demonstrates a significant survival advantage in advanced-stage ovarian cancer patients with non-AJ BRCA mutations, confirming the previous studies in the Jewish population. Our efforts to account for "survival bias" by matching will continue with collaborative studies.
Abstract:
At issue is whether or not isolated DNA is patent eligible under the U.S. Patent Law and the implications of that determination on public health. The U.S. Patent and Trademark Office has issued patents on DNA since the 1980s, and scientists and researchers have proceeded under that milieu since that time. Today, genetic research and testing related to the human breast cancer genes BRCA1 and BRCA2 is conducted within the framework of seven patents that were issued to Myriad Genetics and the University of Utah Research Foundation between 1997 and 2000. In 2009, suit was filed on behalf of multiple researchers, professional associations and others to invalidate fifteen of the claims underlying those patents. The Court of Appeals for the Federal Circuit, which hears patent cases, has invalidated claims for analyzing and comparing isolated DNA but has upheld claims to isolated DNA. The specific issue of whether isolated DNA is patent eligible is now before the Supreme Court, which is expected to decide the case by year's end. In this work, a systematic review was performed to determine the effects of DNA patents on various stakeholders and, ultimately, on public health; and to provide a legal analysis of the patent eligibility of isolated DNA and the likely outcome of the Supreme Court's decision. A literature review was conducted to: first, identify principle stakeholders with an interest in patent eligibility of the isolated DNA sequences BRCA1 and BRCA2; and second, determine the effect of the case on those stakeholders. Published reports that addressed gene patents, the Myriad litigation, and implications of gene patents on stakeholders were included. Next, an in-depth legal analysis of the patent eligibility of isolated DNA and methods for analyzing it was performed pursuant to accepted methods of legal research and analysis based on legal briefs, federal law and jurisprudence, scholarly works and standard practice legal analysis.
Biotechnology, biomedical and clinical research, access to health care, and personalized medicine were identified as the principle stakeholders and interests herein. Many experts believe that the patent eligibility of isolated DNA will not greatly affect the biotechnology industry insofar as genetic testing is concerned; unlike for therapeutics, genetic testing does not require tremendous resources or lead time. The actual impact on biomedical researchers is uncertain, with greater impact expected for researchers whose work is intended for commercial purposes (versus basic science). The impact on access to health care has been surprisingly difficult to assess; while invalidating gene patents might be expected to decrease the cost of genetic testing and improve access to more laboratories and physicians' offices that provide the test, a 2010 study on the actual impact was inconclusive. As for personalized medicine, many experts believe that the availability of personalized medicine is ultimately a public policy issue for Congress, not the courts. Based on the legal analysis performed in this work, this writer believes the Supreme Court is likely to invalidate patents on isolated DNA whose sequences are found in nature, because these gene sequences are a basic tool of scientific and technologic work and patents on isolated DNA would unduly inhibit their future use. Patents on complementary DNA (cDNA) are expected to stand, however, based on the human intervention required to craft cDNA and the product's distinction from the DNA found in nature. In the end, the solution as to how to address gene patents may lie not in jurisprudence but in a fundamental change in business practices to provide expanded licenses to better address the interests of the several stakeholders.
Abstract:
In this paper we show how the efficiency of MBS simulations can be improved in two different ways, by considering both an explicit and an implicit semi-recursive formulation. The explicit method is based on a double velocity transformation that involves the solution of a redundant but compatible system of equations. The high computational cost of this operation has been drastically reduced by taking into account the sparsity pattern of the system. To this end, the method introduces MA48, a high-performance mathematical library provided by the Harwell Subroutine Library. The second method proposed in this paper addresses the fact that, depending on the case, between 70 and 85% of the computation time is devoted to the evaluation of force derivatives with respect to the relative position and velocity vectors. Since evaluating these derivatives can be decomposed into concurrent tasks, the main contribution of this paper lies in a successful and straightforward parallel implementation that has led to a substantial improvement: a speedup of 3.2, achieved by keeping all the cores of a quad-core processor busy and distributing the workload among them, yielding a large time reduction through near-ideal CPU usage.
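The claim that the force-derivative evaluation decomposes into concurrent tasks can be illustrated with a toy sketch. The force law, body states, and thread-based map below are invented for illustration; the paper's implementation (and its 3.2 speedup) was achieved in a compiled MBS code where each task genuinely occupies a core, whereas in CPython the point here is only the task decomposition, not the speedup.

```python
# Illustrative decomposition: each body's force derivatives depend only
# on that body's own state, so the evaluations form independent tasks.

from concurrent.futures import ThreadPoolExecutor

def force_derivative(body_state):
    """Placeholder for d(force)/d(position) and d(force)/d(velocity)
    of one body: central finite differences on a toy spring-damper."""
    q, qd = body_state
    h = 1e-6
    f = lambda q_, qd_: -50.0 * q_ - 2.0 * qd_  # toy force law (invented)
    df_dq = (f(q + h, qd) - f(q - h, qd)) / (2 * h)
    df_dqd = (f(q, qd + h) - f(q, qd - h)) / (2 * h)
    return df_dq, df_dqd

# Eight independent bodies with invented states.
states = [(0.1 * i, 0.01 * i) for i in range(8)]

# The map over bodies is embarrassingly parallel -- the property the
# paper's quad-core implementation exploits.
with ThreadPoolExecutor(max_workers=4) as pool:
    derivs = list(pool.map(force_derivative, states))
```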
Abstract:
At present, interest in combined cycle gas and steam power plants has grown markedly owing to their high efficiency, low generating cost, and short construction time. The fundamental objective of the thesis is to deepen the understanding of this technology, insufficiently known until now because of the great number of degrees of freedom that exist in the design of this type of plant. The study was carried out in several phases. The first consisted of analyzing the different technologies that can be used in this type of plant, some very recent or still under research, such as variable-geometry gas turbines, gas turbines cooled with water or steam from the steam cycle, and once-through boilers working with water at supercritical conditions. Mathematical models were then developed for the thermodynamic simulation of each of the plant components, both at the design point and at part load. At the same time, a novel methodology was developed to solve the system of equations resulting from the simulation of any possible combined cycle configuration, so that the behavior of any plant can be determined at any operating point. Finally, a cost-attribution model was developed for this type of plant. With this model, studies can be carried out not only from a thermodynamic but also from a thermoeconomic point of view, making it possible to find trade-offs between efficiency and cost, assign production costs, determine supply curves and the plant's economic returns, and delimit the power range in which the plant is profitable. The computer program, developed in parallel with the simulation models, was used intensively to obtain results. The study of those results provides deep insight into the technology and thus supports a design methodology for this kind of power plant under a thermoeconomic criterion.
Abstract:
In recent years, the increasing sophistication of embedded multimedia systems and wireless communication technologies has promoted a widespread utilization of video streaming applications. It was reported in 2013 that youngsters aged between 13 and 24 spend around 16.7 hours a week watching online video through social media, business websites, and video streaming sites. Video applications have become blended into people's daily lives. Traditionally, video streaming research has focused on performance improvement, namely throughput increase and response time reduction. However, most mobile devices are battery-powered, and battery technology advances at a much slower pace than either multimedia or hardware developments. Since battery developments cannot satisfy the expanding power demand of mobile devices, research interest in video application technology has shifted toward energy-efficient designs. How to efficiently use the limited battery energy budget becomes a major research challenge. In addition, next-generation video standards push toward diversification and personalization. Therefore, it is desirable to have mechanisms that implement energy optimizations with greater flexibility and scalability. In this context, the main goal of this dissertation is to find an energy management and optimization mechanism that reduces the energy consumption of video decoders, based on the idea of functional-oriented reconfiguration. System battery life is prolonged as the result of a trade-off between energy consumption and video quality. Functional-oriented reconfiguration takes advantage of the similarities among standards to build video decoders by reconnecting existing functional units. If a feedback channel from the decoder to the encoder is available, the former can signal to the latter changes in either the encoding parameters or the encoding algorithms for energy-saving adaptation.
The proposed energy optimization and management mechanism is carried out at the decoder end. This mechanism consists of an energy-aware manager, implemented as an additional block of the reconfiguration engine; an energy estimator, integrated into the decoder; and, if available, a feedback channel connected to the encoder end. The energy-aware manager checks the battery level, selects the new decoder description, and signals the reconfiguration engine to build a new decoder. It is worth noting that the analysis of the energy consumption is fundamental to the success of the energy management and optimization mechanism. In this thesis, an energy estimation method driven by platform event monitoring is proposed. In addition, an event filter is suggested to automate the selection of the most appropriate events affecting the energy consumption. Finally, a detailed study of the influence of the training data on model accuracy is presented. The modeling methodology of the energy estimator has been evaluated on different underlying platforms, single-core and multi-core, with different workload characteristics. All the results show good accuracy and low on-line computation overhead. The required modifications of the reconfiguration engine to implement the energy-aware manager have been assessed under different scenarios. The results indicate that the battery lifetime of the system can be lengthened in two different use cases.
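The event-driven energy estimator described above can be sketched as a linear regression of measured energy on hardware event counts. The counter names, coefficients, and synthetic training data below are invented for illustration and are not the thesis's actual model.

```python
# Illustrative event-driven energy model: fit E ≈ a*instr + b*mem
# from training samples via the 2x2 normal equations, then predict
# the energy of a new decoding workload.

def fit_linear_model(samples, energies):
    """Least-squares fit of a two-counter linear energy model."""
    sxx = sum(x * x for x, _ in samples)
    syy = sum(y * y for _, y in samples)
    sxy = sum(x * y for x, y in samples)
    sxe = sum(x * e for (x, _), e in zip(samples, energies))
    sye = sum(y * e for (_, y), e in zip(samples, energies))
    det = sxx * syy - sxy * sxy
    a = (sxe * syy - sye * sxy) / det
    b = (sye * sxx - sxe * sxy) / det
    return a, b

# Synthetic training data: (instructions retired, memory accesses)
# per decoded frame, with invented "true" per-event energy costs.
samples = [(1.0e9, 2.0e7), (1.5e9, 1.0e7), (2.0e9, 4.0e7), (0.8e9, 3.0e7)]
true_a, true_b = 1.2e-9, 4.0e-8  # J per event, invented
energies = [true_a * x + true_b * y for x, y in samples]

a, b = fit_linear_model(samples, energies)
estimate = a * 1.1e9 + b * 2.5e7  # predicted energy for a new frame
```

In the thesis's setting the manager would compare such per-frame estimates against the battery level to decide when to signal a decoder reconfiguration.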