850 results for explosive cost and performance


Relevance:

100.00% 100.00%

Publisher:

Abstract:

In a prospective memory task, responding to a prospective memory target involves switching between the ongoing and the prospective memory task, which can slow subsequent ongoing-task performance (an after-effect). A slowing can also occur when prospective memory targets appear after the prospective memory task has been deactivated (another after-effect). We investigated both after-effects within the same study and additionally tested whether the latter after-effect extends to subsequent ongoing-task trials. The results show after-effects of all kinds: (1) correctly responding to prospective memory targets produces after-effects, a so far neglected cost on ongoing-task performance; (2) responding to deactivated prospective memory targets also slows performance, probably due to involuntary retrieval of the intention; and (3) this slowing persists on subsequent ongoing-task trials, suggesting that even deactivated intentions suffice to induce a conflict that requires subsequent adaptation. Overall, these results indicate that performance slowing in a prospective memory experiment has various sources, not only monitoring cost, and that these sources may be best understood in terms of conflict adaptation.


Field deployments of the Aerodyne Aerosol Mass Spectrometer (AMS) have significantly advanced real-time measurements and source apportionment of non-refractory particulate matter. However, the cost and complex maintenance requirements of the AMS make its deployment at enough sites to determine regional characteristics impractical. Furthermore, the negligible transmission efficiency of the AMS inlet for supermicron particles significantly limits the characterization of their chemical nature and contributing sources. In this study, we use the AMS to characterize the water-soluble organic fingerprint of ambient particles collected onto conventional quartz filters, which are routinely sampled at many air quality sites. The method was applied to 256 particulate matter (PM) filter samples (PM1, PM2.5, and PM10, i.e., PM with aerodynamic diameters smaller than 1, 2.5, and 10 μm, respectively) collected at 16 urban and rural sites during summer and winter. We show that the results obtained by the present technique compare well with those from co-located online measurements, e.g., AMS or Aerosol Chemical Speciation Monitor (ACSM). The bulk recoveries of organic aerosol (60–91 %) achieved with this technique, together with low detection limits (0.8 μg of organic aerosol on the analyzed filter fraction), allow its application to environmental samples. We discuss the recovery variability of individual hydrocarbon ions, oxygen-containing ions, and other ions. The performance of such data in source apportionment is assessed against ACSM data. Recoveries of organic components related to different sources, such as traffic, wood burning, and secondary organic aerosol, are presented. This technique, while subject to the limitations inherent to filter-based measurements (e.g., filter artifacts and limited time resolution), may be used to extend the AMS capabilities toward size-fractionated, spatially resolved, long-term data sets.


Central European grasslands vary widely in productivity and in mowing and grazing regimes. The resulting differences in competition and heterogeneity among grasslands might have direct effects on plants, but might also affect the growth and morphology of their offspring through maternal effects or adaptive evolution. To test for such transgenerational effects, we grew plants of the clonal herb Trifolium repens from seeds collected in 58 grassland sites differing in productivity and in mowing and grazing intensity, under three treatments: without competition, with homogeneous competition, and with heterogeneous competition. In the competition-free treatment, T. repens from more productive, less frequently mown, and less intensively grazed sites produced more vegetative offspring, but this was not the case in the other treatments. When grown among or in close proximity to competitors, T. repens plants did not show preferential growth towards open spaces (i.e., no horizontal foraging), but did show strong vertical foraging by petiole elongation. In the homogeneous competition treatment, petiole length increased with the productivity of the parental site, but this was not the case in the heterogeneous competition treatment. Moreover, petiole length increased with the mowing frequency and grazing intensity of the parental site in all but the homogeneous competition treatment. In summary, although the expression of differences between plants from sites with different productivities and land-use intensities depended on the experimental treatment, our findings imply that there are transgenerational effects of land use on the morphology and performance of T. repens.


Dua and Miller (1996) created leading and coincident employment indexes for the state of Connecticut, following Moore's (1981) work at the national level. The performance of the Dua-Miller indexes after the recession of the early 1990s fell short of expectations. This paper performs two tasks. First, it describes the process of revising the Connecticut Coincident and Leading Employment Indexes. Second, it analyzes the statistical properties and performance of the new indexes by comparing the lead profiles of the new and old indexes as well as their out-of-sample forecasting performance, using the Bayesian vector autoregressive (BVAR) method. The new indexes show improved performance in dating employment-cycle chronologies, and the lead-profile test demonstrates that superiority in a rigorous, nonparametric statistical fashion. The mixed evidence from the BVAR forecasting experiments illustrates the truth of the Granger and Newbold (1986) caution that leading indexes properly predict cycle turning points but do not necessarily provide accurate forecasts except at turning points, a view that our results support.
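For intuition about the forecasting machinery involved, the sketch below fits an ordinary (non-Bayesian) VAR(1) by least squares to two synthetic series standing in for the coincident and leading indexes; the coefficient matrix `A_true`, the sample length, and the noise level are invented for the example and are not taken from the paper.

```python
import numpy as np

# Fit a VAR(1) y[t] = A y[t-1] + noise by least squares, then make a
# one-step-ahead forecast. Data are synthetic stand-ins, not the
# Connecticut employment indexes.
rng = np.random.default_rng(3)
T = 200
A_true = np.array([[0.7, 0.2],   # series 1 depends on its own lag and series 2
                   [0.0, 0.8]])  # series 2 follows its own lag
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A_true @ y[t - 1] + rng.normal(scale=0.1, size=2)

X, Y = y[:-1], y[1:]
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T  # estimated coefficient matrix
forecast = A_hat @ y[-1]                        # one-step-ahead forecast
print(A_hat.round(2))
print(forecast)
```

With a reasonably long sample, the least-squares estimate recovers the lag coefficients, which is what the out-of-sample comparison in the paper builds on (the BVAR additionally shrinks the coefficients toward a prior).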


This paper examines the effects of geographical deregulation on commercial bank performance across states. We reach several general conclusions. First, deregulation on an intrastate and interstate basis generally improves bank profitability and performance. Second, the macroeconomic variables -- the unemployment rate and real personal income per capita -- and the average interest rate affect bank performance as much as, or more than, the process of deregulation. Finally, while deregulation toward full interstate banking and branching may produce more efficient banks and a healthier banking system, we find mixed results on this issue.


This study of the wholesale electricity market compares the efficiency performance of the auction mechanism currently in place in U.S. markets with that of a proposed mechanism. The analysis highlights the importance of considering strategic behavior when comparing different institutional systems. We find that in concentrated markets, neither auction mechanism can guarantee an efficient allocation. The advantage of the current mechanism increases with increased price competition if market demand is perfectly inelastic. However, if market demand has some responsiveness to price, the superiority of the current auction with respect to efficiency is less obvious. We present a case in which the proposed auction outperforms the current mechanism on efficiency even when all offers reflect true production costs. We also find that a market designer might face a tradeoff between lower electricity cost and production efficiency. Some implications for social welfare are discussed as well.
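For intuition about the mechanism currently in place, a uniform-price auction with perfectly inelastic demand can be sketched as merit-order dispatch: offers are accepted from cheapest to most expensive until demand is met, and the last accepted offer sets the market price. The offer data below are purely illustrative.

```python
# Hypothetical sketch of uniform-price auction clearing; offer data invented.

def clear_uniform_price(offers, demand):
    """Dispatch cheapest offers first until inelastic demand is met.

    offers: list of (price_per_mwh, quantity_mw) tuples
    demand: total MW demanded (assumed perfectly inelastic)
    Returns (dispatch list, clearing price).
    """
    dispatch = []
    remaining = demand
    clearing_price = 0.0
    for price, qty in sorted(offers):  # merit order: cheapest first
        if remaining <= 0:
            break
        taken = min(qty, remaining)
        dispatch.append((price, taken))
        clearing_price = price         # last accepted offer sets the price
        remaining -= taken
    return dispatch, clearing_price

offers = [(20.0, 100), (35.0, 50), (50.0, 80)]  # ($/MWh, MW), hypothetical
dispatch, price = clear_uniform_price(offers, demand=120)
print(dispatch, price)  # dispatches 100 MW at $20 and 20 MW at $35; price $35
```

When all offers reflect true costs, this dispatch is production-efficient; the strategic-behavior cases studied in the paper arise precisely when offers deviate from cost.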


This study aims to address two research questions. First, 'Can we identify factors that are determinants both of improved health outcomes and of reduced costs for hospitalized patients with one of six common diagnoses?' Second, 'Can we identify other factors that are determinants of improved health outcomes for such hospitalized patients but which are not associated with costs?' The Healthcare Cost and Utilization Project (HCUP) Nationwide Inpatient Sample (NIS) database from 2003 to 2006 was employed in this study. The total study sample consisted of hospitals that had at least 30 patients each year for the given diagnosis: 954 hospitals for acute myocardial infarction (AMI), 1552 hospitals for congestive heart failure (CHF), 1120 hospitals for stroke (STR), 1283 hospitals for gastrointestinal hemorrhage (GIH), 979 hospitals for hip fracture (HIP), and 1716 hospitals for pneumonia (PNE). This study used simultaneous equations models to investigate the determinants of improvement in health outcomes and of cost reduction in hospital inpatient care for these six common diagnoses. In addition, the study used instrumental variables and a two-stage least squares random-effects model for unbalanced panel data. The study concluded that a few factors were determinants of both high quality and low cost. Specifically, high specialty was the determinant of high quality and low costs for CHF patients; small hospital size was the determinant of high quality and low costs for AMI patients. Furthermore, CHF patients who were treated in Midwest, South, and West region hospitals had better health outcomes and lower hospital costs than patients who were treated in Northeast region hospitals. Gastrointestinal hemorrhage and pneumonia patients who were treated in South region hospitals also had better health outcomes and lower hospital costs than patients who were treated in Northeast region hospitals.
This study found that six non-cost factors were related to health outcomes for a few diagnoses: hospital volume, percentage of emergency room admissions for a given diagnosis, hospital competition, specialty, bed size, and hospital region.
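The instrumental-variables step can be illustrated with a minimal two-stage least squares (2SLS) sketch on synthetic data; the HCUP variables, the instrument, and all coefficients below are made up for illustration, and the estimator is implemented by hand rather than with a panel-data package.

```python
import numpy as np

# Minimal 2SLS: stage 1 projects the endogenous regressor onto the
# instrument; stage 2 regresses the outcome on the fitted values.
rng = np.random.default_rng(0)
n = 500
z = rng.normal(size=n)                       # instrument (hypothetical)
u = rng.normal(size=n)                       # confounder making x endogenous
x = 0.8 * z + u + rng.normal(size=n)         # endogenous regressor, e.g. cost
y = 2.0 * x + 3.0 * u + rng.normal(size=n)   # outcome, e.g. a health measure

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

X1 = np.column_stack([np.ones(n), z])
x_hat = X1 @ ols(X1, x)                      # stage 1: fitted regressor
X2 = np.column_stack([np.ones(n), x_hat])
beta_2sls = ols(X2, y)[1]                    # stage 2: causal slope estimate
beta_ols = ols(np.column_stack([np.ones(n), x]), y)[1]
print(beta_2sls, beta_ols)  # 2SLS should sit near the true 2.0; OLS is biased upward
```

The point of the construction is visible in the output: naive OLS absorbs the confounder and overstates the slope, while the instrumented estimate recovers something close to the true coefficient.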


This dissertation develops and tests a comparative effectiveness methodology utilizing a novel approach to the application of Data Envelopment Analysis (DEA) in health studies. The concept of performance tiers (PerT) is introduced as terminology to express a relative risk class for individuals within a peer group, and the PerT calculation is implemented with operations research (DEA) and spatial algorithms. The analysis discriminates the individual data observations into a relative risk classification by the DEA-PerT methodology. The performance of two distance measures, kNN (k-nearest neighbor) and Mahalanobis, was subsequently tested to classify new entrants into the appropriate tier. The methods were applied to subject data for the 14-year-old cohort in the Project HeartBeat! study. The concepts presented herein represent a paradigm shift in the potential for public health applications to identify and respond to individual health status. The resultant classification scheme provides descriptive, and potentially prescriptive, guidance to assess and implement treatments and strategies to improve the delivery and performance of health systems.
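The classification of a new entrant by Mahalanobis distance can be sketched as follows: each tier is summarized by its centroid and covariance, and the entrant is assigned to the nearest tier in Mahalanobis terms. The tier data, the two-dimensional feature space, and the tier names are all synthetic placeholders, not the Project HeartBeat! variables.

```python
import numpy as np

# Assign a new entrant to the performance tier whose sample is closest
# in Mahalanobis distance; tiers are synthetic two-feature clusters.
rng = np.random.default_rng(1)
tiers = {
    "tier1": rng.normal([0.0, 0.0], 1.0, size=(40, 2)),
    "tier2": rng.normal([4.0, 4.0], 1.0, size=(40, 2)),
}

def mahalanobis(x, sample):
    mu = sample.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(sample, rowvar=False))
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))  # distance scaled by covariance

def classify(x):
    return min(tiers, key=lambda t: mahalanobis(x, tiers[t]))

print(classify(np.array([0.5, -0.2])))  # lands in tier1
print(classify(np.array([3.8, 4.3])))   # lands in tier2
```

Unlike plain kNN with Euclidean distance, the Mahalanobis variant accounts for the spread and correlation of each tier's features, which matters when the features are on different scales.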


Background. Retail clinics, also called convenience care clinics, have become a rapidly growing trend since their initial development in 2000. These clinics are coupled within a larger retail operation and are generally located in "big-box" discount stores such as Wal-mart or Target, grocery stores such as Publix or H-E-B, or retail pharmacies such as CVS or Walgreen's (Deloitte Center for Health Solutions, 2008). Care is typically provided by nurse practitioners. Research indicates that this new health care delivery system reduces cost, raises quality, and provides a means of access to the uninsured population (e.g., Deloitte Center for Health Solutions, 2008; Convenient Care Association, 2008a, 2008b, 2008c; Hansen-Turton, Miller, Nash, Ryan, & Counts, 2007; Salinsky, 2009; Scott, 2006; Ahmed & Fincham, 2010). Some healthcare analysts even suggest that retail clinics offer a feasible solution to the shortage of primary care physicians facing the nation (AHRQ Health Care Innovations Exchange, 2010). The development and performance of retail clinics is heavily dependent upon individual state policies regulating NPs. Texas currently has one of the most highly regulated practice environments for NPs (Stout & Elton, 2007; Hammonds, 2008). In September 2009, Texas passed Senate Bill 532 addressing the scope of practice of nurse practitioners in the convenience care model. In comparison to other states, this law still heavily regulates nurse practitioners. However, little research has been conducted to evaluate the impact of state laws regulating nurse practitioners on the development and performance of retail clinics. 
Objectives. (1) To describe the potential impact that SB 532 has on retail clinic performance. (2) To discuss the effectiveness, efficiency, and equity of the convenience care model. (3) To describe possible alternatives to Texas' nurse practitioner scope of practice guidelines as delineated in Texas Senate Bill 532. (4) To describe the type of nurse practitioner state regulation (i.e., independent, light, moderate, or heavy) that best promotes the convenience care model. 
Methods. State regulations governing nurse practitioners can be characterized as independent, light, moderate, or heavy. Four state NP regulatory types and retail clinic performance were compared and contrasted with Texas regulations using Dunn's and Aday's theoretical models for conducting policy analysis and evaluating healthcare systems. Criteria for measurement included effectiveness, efficiency, and equity. Comparison states were Arizona (independent), Minnesota (light), Massachusetts (moderate), and Florida (heavy). 
Results. A comparative states analysis of Texas SB 532 and alternative NP scope of practice guidelines among the four states (Arizona, Florida, Massachusetts, and Minnesota) indicated that SB 532 has minimal potential to affect the shortage of primary care providers in the state. Although SB 532 may increase the number of NPs a physician may supervise, NPs are still heavily restricted in their scope of practice and limited in their ability to act as primary care providers. Arizona's example of independent NP practice provided the best alternative for addressing the shortage of PCPs in Texas, as evidenced by a lower uninsured rate and fewer ED visits per 1,000 population. A survey of comparison states suggests that retail clinics thrive in states that more heavily restrict NP scope of practice as opposed to those that are more permissive, with the exception of Arizona. An analysis of the effectiveness, efficiency, and equity of the convenience care model indicates that retail clinics perform well in the areas of effectiveness and efficiency but fall short in the area of equity. 
Conclusion. Texas Senate Bill 532 represents an incremental step towards addressing the shortage of PCPs in the state. A comparative policy analysis of the other four states with varying degrees of NP scope of practice indicates that a more aggressive policy allowing independent NP practice will be needed to achieve positive changes in health outcomes. Retail clinics pose a temporary solution to the shortage of PCPs and will need to expand their locations to poorer regions and incorporate some chronic care to obtain measurable health outcomes.


Nowadays, computing platforms consist of a very large number of components that must be supplied with different voltage levels and power requirements. Even a very small platform, like a handheld computer, may contain more than twenty different loads and voltage regulators. The power delivery designers of these systems are required to provide, in a very short time, the right power architecture that optimizes performance and meets electrical specifications as well as cost and size targets. The appropriate selection of the architecture and converters directly defines the performance of a given solution. Therefore, the designer needs to be able to evaluate a significant number of options in order to know with good certainty whether the selected solutions meet the size, energy efficiency, and cost targets. The difficulty of selecting the right solution arises from the wide range of power conversion products provided by different manufacturers. These products range from discrete components (to build converters) to complete power conversion modules that employ different manufacturing technologies. Consequently, in most cases it is not possible to analyze all the alternatives (combinations of power architectures and converters) that can be built; the designer has to select a limited number of converters in order to simplify the analysis. In this thesis, to overcome these difficulties, a new design methodology for power supply systems is proposed. This methodology integrates evolutionary computation techniques to make it possible to analyze a large number of possibilities. This exhaustive analysis helps the designer to quickly define a set of feasible solutions and select the best performance trade-off for each application. The proposed approach consists of two key steps: one for the automatic generation of architectures and another for the optimized selection of components. This thesis details the implementation of these two steps.
The usefulness of the methodology is corroborated by contrasting the results on real problems and on experiments designed to test the limits of the algorithms.
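The component-selection step can be illustrated with a toy genetic algorithm: each genome assigns one catalog converter to each load, and the fitness penalizes solutions that miss a minimum overall efficiency while minimizing cost. The catalog values, load count, efficiency target, and GA parameters below are invented for the sketch and are not the thesis's actual algorithm.

```python
import random

# Toy evolutionary selection of one converter per load: minimize total
# cost subject to a minimum mean efficiency. All numbers are illustrative.
random.seed(42)
CATALOG = [  # (cost_eur, efficiency) per candidate converter, hypothetical
    (2.0, 0.85), (3.5, 0.90), (5.0, 0.93), (8.0, 0.96),
]
N_LOADS, MIN_EFF = 6, 0.88

def fitness(genome):
    cost = sum(CATALOG[g][0] for g in genome)
    eff = sum(CATALOG[g][1] for g in genome) / len(genome)  # mean efficiency
    penalty = 1000.0 if eff < MIN_EFF else 0.0              # constraint
    return cost + penalty                                   # lower is better

def evolve(pop_size=30, generations=60):
    pop = [[random.randrange(len(CATALOG)) for _ in range(N_LOADS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[:pop_size // 2]          # elitist selection
        children = []
        for _ in range(pop_size - len(survivors)):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, N_LOADS)   # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:            # mutation
                child[random.randrange(N_LOADS)] = random.randrange(len(CATALOG))
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

Even this toy version shows the appeal of the approach: the search explores the combinatorial space of converter assignments without enumerating all 4^6 combinations explicitly.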


In this paper, switched reluctance motors (SRMs) are proposed as an alternative for electric power assisted steering (EPAS) applications. A prototype machine has been developed as a very attractive design for a steering electric motor, from both a cost and a size perspective. A four-phase 8/6 SRM drive is designed for a rack-type EPAS that should provide a maximum force of 10 kN. Two-dimensional finite element analysis is used to validate the design.


Infrared thermography (IRT) is a safe, non-invasive, and low-cost technique that allows the rapid, non-contact recording of the energy irradiated by the human body (Akimov & Son'kin, 2011; Merla et al., 2005; Ng et al., 2009; Costello et al., 2012; Hildebrandt et al., 2010). It has been used in medicine since the early 1960s, but owing to poor results as a diagnostic tool and a lack of methodological standards and quality assurance (Head & Elliot, 2002), it was abandoned in favor of more accurate diagnostic techniques. Nevertheless, the technological improvements of IRT in recent years have made a resurgence of the technique possible (Jiang et al., 2005; Vainer et al., 2005; Cheng et al., 2009; Spalding et al., 2011; Skala et al., 2012), paving the way to new applications beyond diagnostic use. Among these, we highlight applications in physical activity and sport, where it has recently been shown that high-resolution thermal images can provide valuable information about the complex thermoregulation system of the body (Hildebrandt et al., 2010). Such information can be used for training-workload quantification (Čoh & Širok, 2007), assessment of fitness and performance condition (Chudecka et al., 2010, 2012; Akimov et al., 2009, 2011; Merla et al., 2010; Arfaoui et al., 2012), prevention and monitoring of injuries (Hildebrandt et al., 2010, 2012; Badža et al., 2012; Gómez Carmona, 2012), and even detection of delayed onset muscle soreness (DOMS) (Al-Nakhli et al., 2012). In this context, there is a clear need to broaden the knowledge of the factors influencing the application of IRT to humans, and to describe the response of skin temperature (Tsk) under normal conditions and under the influence of different types of exercise. 
Consequently, this study first presents a literature review of the factors affecting the application of IRT to human beings and proposes a classification of those factors. We then analysed the reliability of the Termotracker® software and the reproducibility of Tsk measurements in young, healthy, normal-weight subjects. Finally, we examined the thermal response of the skin before endurance, speed, and strength training, immediately afterwards, and during an 8-hour recovery period. Regarding the literature review, we propose a classification that organises the factors into three main groups: environmental, individual, and technical. Analysing and describing these influences should form the basis of further investigations so that IRT can be used under optimal conditions. 
Regarding reproducibility, the results showed excellent values for consecutive images, although the reproducibility of Tsk decreased slightly for images separated by 24 hours, above all in the colder regions of interest (ROIs) (i.e., distal and joint areas). The side-to-side differences (ΔT), normally used to follow the evolution of injured or overloaded ROIs, also showed excellent results, in this case with better values for joints and central ROIs (i.e., knees, ankles, dorsal, and pectoral) than for the warmer muscle ROIs (such as thighs or hamstrings). The reliability results of the Termotracker® software were excellent under all conditions and for all parameters. In the part of the study on the effects of endurance, speed, and strength training on Tsk, the results demonstrated specific responses depending on the type of training, the ROI, the moment of assessment, and the function of the ROI considered. Most muscular ROIs remained significantly warmer 8 hours after training, indicating that the effect of exercise on Tsk lasts at least 8 hours in most of the analysed areas, and that IRT could help quantify an athlete's recovery status as an indicator of workload assimilation. These results could be very useful for better understanding the complex thermoregulation behaviour of the skin and, therefore, for using IRT in a more objective, accurate, and professional way to improve the new thermographic applications in the physical activity and sport sector.


The research in this thesis concerns static cost and termination analysis. Cost analysis aims at estimating the amount of resources that a given program consumes during execution, and termination analysis aims at proving that the execution of a given program eventually terminates. These analyses are strongly related; indeed, cost analysis techniques rely heavily on techniques developed for termination analysis. Precision, scalability, and applicability are essential in static analysis in general. Precision refers to the quality of the inferred results, scalability to the size of the programs that can be analyzed, and applicability to the class of programs that the analysis can handle (independently of precision and scalability issues). This thesis addresses these aspects in the context of cost and termination analysis, from both practical and theoretical perspectives. For cost analysis, we concentrate on the problem of solving cost relations (a form of recurrence relations) into closed-form upper and lower bounds, which is at the heart of most modern cost analyzers and is also where most of the precision and applicability limitations are found. We develop tools, and their underlying theoretical foundations, for solving cost relations that overcome the limitations of existing approaches, and we demonstrate superiority in both precision and applicability. A unique feature of our techniques is the ability to handle both lower and upper bounds smoothly, by reversing the corresponding notions in the underlying theory. For termination analysis, we study the hardness of deciding termination for a specific form of simple loops that arise in the context of cost analysis. This study gives a better understanding of the (theoretical) limits of scalability and applicability for both termination and cost analysis.
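To make the notion of a cost relation concrete: it is a recurrence describing resource usage, which a cost analyzer solves into a closed-form bound. The toy relation below, C(n) = C(n div 2) + n with C(0) = 0, has closed-form upper bound 2n; the sketch evaluates the relation directly and checks the bound numerically (it is not the thesis's solver, just an illustration of the problem being solved).

```python
from functools import lru_cache

# Evaluate the cost relation C(n) = C(n // 2) + n, C(0) = 0, and check
# the closed-form upper bound 2n inferred by hand:
# C(n) = n + n/2 + n/4 + ... <= 2n (a geometric series).
@lru_cache(maxsize=None)
def cost(n):
    if n <= 0:
        return 0
    return cost(n // 2) + n  # linear work per step, input halved each time

for n in [1, 10, 1000, 10**6]:
    assert cost(n) <= 2 * n  # the inferred upper bound holds
    print(n, cost(n), 2 * n)
```

A real cost analyzer performs the opposite direction automatically: given the relation, it infers the symbolic bound 2n (and, with the thesis's techniques, a lower bound as well) without enumerating inputs.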


Variabilities associated with CMOS evolution affect the yield and performance of current digital designs. FPGAs, which are widely used for fast prototyping and implementation of digital circuits, also suffer from these issues. Proactive approaches are starting to appear that achieve self-awareness and dynamic adaptation of these devices. To support such techniques, we propose the deployment of a multi-purpose sensor network. Through adequate use of configuration and automation tools, this infrastructure can obtain relevant data throughout the life cycle of an FPGA. This is achieved at very low cost, not only in terms of area and other limited resources, but also in terms of the design effort required to define and deploy the measuring infrastructure. Our proposal has been validated by measuring inter-die and intra-die variability in different FPGA families.


This work concerns the improvement of the dynamic performance of the buck converter by introducing an additional power path that virtually increases the output capacitance during transients, thus improving the output impedance of the converter. It is well known that in VRM applications with wide load steps, voltage overshoots and undershoots may lead to undesired behavior of the load. To solve this problem, a high-bandwidth, high-switching-frequency power converter can be applied to reduce the transient time, or a large output capacitor can be applied to reduce the output impedance. The first solution can degrade efficiency by increasing the switching losses of the MOSFETs, while the second penalizes the cost and size of the output filter. The additional energy path presented here is introduced through the Output Impedance Correction Circuit (OICC), based on a Controlled Current Source (CCS). The OICC uses the CCS to inject or extract a current n − 1 times larger than the output-capacitor current, thus virtually increasing the value of the output capacitance n times during transients. This feature allows the use of a low-frequency buck converter with a smaller capacitor while still satisfying the dynamic requirements.
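The capacitance-multiplication claim follows from simple charge arithmetic: if the CCS supplies (n − 1) times the capacitor current, the capacitor bank as a whole absorbs n times its own current, so it behaves as an n-times-larger capacitance. The back-of-the-envelope sketch below quantifies the resulting undershoot reduction for a load step; the capacitance, step size, reaction time, and factor n are assumed values, not figures from the paper.

```python
# Undershoot during a load step: dV = I * dt / C for the bare capacitor;
# with the OICC active, the effective capacitance is n * C.
C_out = 100e-6   # physical output capacitance, 100 uF (assumed)
n = 5            # OICC multiplication factor (assumed)
i_step = 10.0    # load current step in A (assumed)
dt = 10e-6       # time before the converter loop reacts, 10 us (assumed)

dv_without = i_step * dt / C_out       # undershoot with the bare capacitor
dv_with = i_step * dt / (n * C_out)    # undershoot with n-times effective C
print(dv_without, dv_with)  # the undershoot shrinks by the factor n
```

With these assumed numbers the undershoot drops from 1 V to 0.2 V, which is exactly the trade the paper exploits: a slower, cheaper buck stage plus a small auxiliary current path in place of a physically larger capacitor bank.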