Abstract:
Tomato (Lycopersicon esculentum Mill.), apart from being a functional food rich in carotenoids, vitamins and minerals, is also an important source of phenolic compounds [1,2]. As antioxidants, these functional molecules play an important role in the prevention of human pathologies and have many applications in the nutraceutical, pharmaceutical and cosmeceutical industries. Therefore, the recovery of added-value phenolic compounds from natural sources, such as tomato surplus or industrial by-products, is highly desirable. Herein, the microwave-assisted extraction of the main phenolic acids and flavonoids from tomato was optimized. A 5-level full factorial Box-Behnken design was implemented and response surface methodology was used for the analysis. The extraction time (0-20 min), temperature (60-180 °C), ethanol percentage (0-100%), solid/liquid ratio (5-45 g/L) and microwave power (0-400 W) were studied as independent variables. The phenolic profile of the studied tomato variety was initially characterized by HPLC-DAD-ESI/MS [2]. Then, the effect of the different extraction conditions, as defined by the experimental design, on the target compounds was monitored by HPLC-DAD, using their UV spectra and retention times for identification and a series of calibrations based on external standards for quantification. The proposed model was successfully implemented and statistically validated. The microwave power had no effect on the extraction process. Whereas the optimal extraction conditions for flavonoids demanded a short processing time (2 min), a low temperature (60 °C), a low solid/liquid ratio (5 g/L) and pure ethanol, phenolic acids required a longer processing time (4.38 min), a higher temperature (145.6 °C), a higher solid/liquid ratio (45 g/L) and water as the extraction solvent. Additionally, the studied tomato variety was highlighted as a source of added-value phenolic acids and flavonoids.
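The design-generation step behind such an optimization can be sketched as follows. Note that a Box-Behnken design proper uses three coded levels (-1, 0, +1); the snippet shows the standard pairwise construction for five factors, with an assumed (not reported) number of centre points.

```python
from itertools import combinations

def box_behnken(k, n_center=3):
    """Basic Box-Behnken construction: for every pair of factors,
    run the 2x2 factorial at +/-1 with all other factors held at
    the centre level (0), then append centre-point runs."""
    runs = []
    for i, j in combinations(range(k), 2):
        for a in (-1, 1):
            for b in (-1, 1):
                run = [0] * k
                run[i], run[j] = a, b
                runs.append(run)
    runs.extend([[0] * k for _ in range(n_center)])
    return runs

# Five factors as in the study (time, temperature, ethanol %,
# solid/liquid ratio, microwave power): 4 * C(5,2) = 40 edge runs.
design = box_behnken(5)
print(len(design))  # 43 runs (40 factorial + 3 centre points)
```

A second-order polynomial is then fitted to the responses measured at these coded points, which is what response surface methodology optimizes.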
Abstract:
Ergosterol, a molecule with high commercial value, is the most abundant mycosterol in Agaricus bisporus L. To replace common conventional extraction techniques (e.g. Soxhlet), the present study reports the optimal ultrasound-assisted extraction conditions for ergosterol. Preliminary tests showed that solvent, time and ultrasound power altered the extraction efficiency. Using response surface methodology, models were developed to investigate the favourable experimental conditions that maximize the extraction efficiency. All statistical criteria demonstrated the validity of the proposed models. Overall, ultrasound-assisted extraction with ethanol at 375 W for 15 min proved to be as efficient as Soxhlet extraction, yielding 671.5 ± 0.5 mg ergosterol/100 g dw. However, extracts with higher purity (mg ergosterol/g extract) were obtained with n-hexane. Finally, removal of the saponification step was proposed, which simplifies the extraction process and makes it more feasible for industrial transfer.
Abstract:
As the semiconductor industry struggles to maintain its momentum along the path set by Moore's Law, three-dimensional integrated circuit (3D IC) technology has emerged as a promising solution to achieve higher integration density, better performance, and lower power consumption. However, despite its significant improvement in electrical performance, 3D IC presents several serious physical design challenges. In this dissertation, we investigate physical design methodologies for 3D ICs with a primary focus on two areas: low-power 3D clock tree design, and reliability degradation modeling and management. Clock trees are essential parts of digital systems and dissipate a large amount of power due to their high capacitive loads. The majority of existing 3D clock tree designs focus on minimizing the total wire length, which produces sub-optimal results for power optimization. In this dissertation, we formulate a 3D clock tree design flow which directly optimizes for clock power. In addition, we investigate a design methodology for clock gating a 3D clock tree, which uses shutdown gates to selectively turn off unnecessary clock activities. Unlike in 2D ICs, where shutdown gates are commonly assumed to be cheap and therefore applicable at every clock node, shutdown gates in 3D ICs introduce additional control TSVs, which compete with clock TSVs for placement resources. We explore design methodologies to produce the optimal allocation and placement of clock and control TSVs so that the clock power is minimized. We show that the proposed synthesis flow saves significant clock power while accounting for the available TSV placement area. Vertical integration also brings new reliability challenges, including TSV electromigration (EM) and several other reliability loss mechanisms caused by TSV-induced stress. These reliability loss models involve complex inter-dependencies between electrical and thermal conditions, which have not been investigated in the past.
In this dissertation we set up an electrical/thermal/reliability co-simulation framework to capture the transient behavior of reliability loss in 3D ICs. We further derive and validate an analytical reliability objective function that can be integrated into the 3D placement design flow. The reliability-aware placement scheme enables co-design and co-optimization of both the electrical and reliability properties, thus improving both the circuit's performance and its lifetime. Our electrical/reliability co-design scheme avoids unnecessary design cycles and the application of ad-hoc fixes that lead to sub-optimal performance. Vertical integration also enables stacking DRAM on top of the CPU, providing high bandwidth and short latency. However, non-uniform voltage fluctuations and local thermal hotspots in the CPU layers are coupled into the DRAM layers, causing a non-uniform distribution of bit-cell leakage (and thereby bit flips). We propose a performance-power-resilience simulation framework to capture DRAM soft errors in 3D multi-core CPU systems. In addition, a dynamic resilience management (DRM) scheme is investigated, which adaptively tunes the CPU's operating points to adjust the DRAM's voltage noise and thermal condition during runtime. The DRM uses dynamic frequency scaling to achieve a resilience borrow-in strategy, which effectively enhances DRAM's resilience without sacrificing performance. The proposed physical design methodologies should act as important building blocks for 3D ICs and push 3D ICs toward mainstream acceptance in the near future.
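The clock-power objective such a flow minimizes can be illustrated with the standard dynamic-power formula P = α·C·V²·f; the capacitance, voltage and frequency values below are illustrative assumptions, not figures from the dissertation.

```python
def clock_dynamic_power(caps_farads, vdd=1.0, freq_hz=1e9, activity=1.0):
    """Dynamic power of a clock net: P = activity * C_total * Vdd^2 * f.
    An ungated clock toggles every cycle, so its activity factor is
    taken as 1; clock gating removes a shut-down subtree's capacitance
    from the sum at the cost of extra control logic (and, in 3D ICs,
    control TSVs)."""
    c_total = sum(caps_farads)
    return activity * c_total * vdd ** 2 * freq_hz

# Example: 200 fF of wire + sink capacitance at 1 V and 1 GHz.
p = clock_dynamic_power([120e-15, 80e-15])
print(p)  # ~2e-4 W, i.e. about 0.2 mW
```

Minimizing wire length only reduces the wire part of C_total, which is why a flow that optimizes the full capacitive sum directly can do better for power.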
Abstract:
Forecasting abrupt variations in wind power generation (the so-called ramps) helps achieve large-scale wind power integration. One of the main issues to be confronted when addressing wind power ramp forecasting is the way in which relevant information is identified from large datasets to optimally feed forecasting models. To this end, an innovative methodology oriented to systematically relating multivariate datasets to ramp events is presented. The methodology comprises two stages: the identification of relevant features in the data and the assessment of the dependence between these features and ramp occurrence. As a test case, the proposed methodology was employed to explore the relationships between atmospheric dynamics at the global/synoptic scales and ramp events experienced at two wind farms located in Spain. The results suggested different degrees of connection between these atmospheric scales and ramp occurrence. For one of the wind farms, it was found that ramp events could be partly explained by regional circulations and zonal pressure gradients. To perform a comprehensive analysis of the underlying causes of ramps, the proposed methodology could be applied to datasets related to other stages of the wind-to-power conversion chain.
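As a minimal illustration of what counts as a ramp event, the sketch below flags a ramp whenever the power change over a time window exceeds a threshold fraction of farm capacity. This is only one of several ramp definitions used in the literature, and the window and threshold values are assumptions.

```python
def detect_ramps(power, capacity, window=4, threshold=0.5):
    """Flag a ramp event at time t when |P[t] - P[t-window]| exceeds
    threshold * capacity. One simple definition among several used in
    the ramp-forecasting literature."""
    events = []
    for t in range(window, len(power)):
        delta = power[t] - power[t - window]
        if abs(delta) >= threshold * capacity:
            events.append((t, 'up' if delta > 0 else 'down'))
    return events

# Hourly power series (MW) for a hypothetical 100 MW farm.
series = [10, 12, 15, 20, 70, 75, 72, 20, 15]
print(detect_ramps(series, capacity=100))
# → [(4, 'up'), (5, 'up'), (6, 'up'), (8, 'down')]
```

The methodology in the abstract then asks which features of large atmospheric datasets are statistically dependent on the occurrence of such events.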
Abstract:
Modern power networks incorporate communications and information technology infrastructure into the electrical power system to create a smart grid in terms of control and operation. The smart grid enables real-time communication and control between consumers and utility companies, allowing suppliers to optimize energy usage based on price preference and system technical issues. The smart grid design aims to provide overall power system monitoring and to create protection and control strategies that maintain system performance, stability and security. This dissertation contributed to the development of a unique and novel smart grid test-bed laboratory with integrated monitoring, protection and control systems. This test-bed was used as a platform to test the smart grid operational ideas developed here. The implementation of this system in real-time software creates an environment for studying, implementing and verifying the novel control and protection schemes developed in this dissertation. Phasor measurement techniques were developed using the available Data Acquisition (DAQ) devices in order to monitor all points in the power system in real time. This provides a practical view of system parameter changes, abnormal conditions, and stability and security information. These developments provide valuable measurements for technical power system operators in energy control centers. Phasor measurement technology is an excellent solution for improving system planning, operation and energy trading, in addition to enabling advanced applications in Wide Area Monitoring, Protection and Control (WAMPAC). Moreover, a virtual protection system was developed and implemented in the smart grid laboratory with integrated functionality for wide area applications. Experiments and procedures were developed in the system in order to detect abnormal system conditions and apply proper remedies to heal the system.
A DC microgrid was also designed and integrated into the AC system with appropriate control capability. This system provides realistic hybrid AC/DC microgrid connectivity to the AC side, allowing the use of such architectures in system operation to help remedy abnormal conditions to be studied. In addition, this dissertation explored the challenges and feasibility of implementing real-time system analysis features in order to monitor system security and stability measures. These indices are measured experimentally during the operation of the developed hybrid AC/DC microgrids. Furthermore, a real-time optimal power flow system was implemented to optimally manage the power sharing between AC generators and DC-side resources. A study of a real-time energy management algorithm in hybrid microgrids was performed to evaluate the effects of using energy storage resources and their use in mitigating heavy load impacts on system stability and operational security.
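The phasor measurement idea mentioned above can be sketched with a single-bin DFT estimator: given exactly one fundamental cycle of samples, it recovers the magnitude and phase of the fundamental. The sampling rate and test signal below are illustrative, not the dissertation's DAQ configuration.

```python
import cmath
import math

def phasor(samples):
    """Single-bin DFT over exactly one fundamental cycle:
    returns (rms_magnitude, phase_radians) of the fundamental."""
    n = len(samples)
    acc = sum(samples[k] * cmath.exp(-2j * math.pi * k / n)
              for k in range(n))
    x = acc * 2 / n  # peak-amplitude complex phasor
    return abs(x) / math.sqrt(2), cmath.phase(x)

# One 50 Hz cycle sampled 64 times: v(t) = 10*cos(wt + 30 deg).
n = 64
samples = [10 * math.cos(2 * math.pi * k / n + math.radians(30))
           for k in range(n)]
mag, ph = phasor(samples)
print(round(mag, 3), round(math.degrees(ph), 1))  # → 7.071 30.0
```

Practical phasor measurement units add time synchronization (e.g. GPS) so that phase angles from different points of the network can be compared, which is what enables the wide-area applications described above.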
Abstract:
Power efficiency is one of the most important constraints in the design of embedded systems, since such systems are generally driven by batteries with a limited energy budget or a restricted power supply. In every embedded system, there are one or more processor cores to run the software and interact with the other hardware components of the system. The power consumption of the processor core(s) has an important impact on the total power dissipated in the system. Hence, processor power optimization is crucial for satisfying the power consumption constraints and developing low-power embedded systems. A key aspect of research in processor power optimization and management is “power estimation”. Having a fast and accurate method for processor power estimation at design time helps the designer to explore a large space of design possibilities and to make optimal choices for developing a power-efficient processor. Likewise, understanding the processor power dissipation behaviour of a specific software application is the key to choosing appropriate algorithms in order to write power-efficient software. Simulation-based methods for measuring processor power achieve very high accuracy, but are available only late in the design process, and are often quite slow. Therefore, the need has arisen for faster, higher-level power prediction methods that allow the system designer to explore many alternatives for developing power-efficient hardware and software. The aim of this thesis is to present fast and high-level power models for the prediction of processor power consumption. Power predictability in this work is achieved in two ways: first, by using a design method to develop power-predictable circuits; second, by analysing the power of the functions in the code which repeat during execution, then building the power model based on the average number of repetitions.
In the first case, a design method called Asynchronous Charge Sharing Logic (ACSL) is used to implement the Arithmetic Logic Unit (ALU) of the 8051 microcontroller. ACSL circuits are power predictable because their power consumption is independent of the input data. Based on this property, a fast prediction method is presented to estimate the power of the ALU by analysing the software program and extracting the number of ALU-related instructions. This method achieves less than 1% error in power estimation and more than 100 times speedup in comparison to conventional simulation-based methods. In the second case, an average-case processor energy model is developed for the insertion sort algorithm based on the number of comparisons that take place during the execution of the algorithm. The average number of comparisons is calculated using a high-level methodology called MOdular Quantitative Analysis (MOQA). The parameters of the energy model are measured for the LEON3 processor core, but the model is general and can be used for any processor. The model has been validated through power measurement experiments, and offers high accuracy and orders of magnitude speedup over the simulation-based method.
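The average-case quantity such an energy model is built on can be checked directly for small inputs. The sketch below counts key comparisons in insertion sort and compares the exhaustive average against the classical closed form n(n-1)/4 + n - H_n (a standard result for straight insertion sort, shown here instead of the thesis's MOQA derivation); an energy model would then scale this count by a measured per-comparison energy cost.

```python
from itertools import permutations

def insertion_sort_comparisons(seq):
    """Sort a copy of seq with straight insertion, counting key
    comparisons."""
    a, comps = list(seq), 0
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0:
            comps += 1
            if a[j] > key:
                a[j + 1] = a[j]  # shift and keep scanning left
                j -= 1
            else:
                break
        a[j + 1] = key
    return comps

def avg_comparisons(n):
    """Exact average comparison count over all n! input orders."""
    perms = list(permutations(range(n)))
    return sum(insertion_sort_comparisons(p) for p in perms) / len(perms)

# Closed form: n(n-1)/4 + n - H_n, with H_n the nth harmonic number.
n = 5
h = sum(1 / k for k in range(1, n + 1))
print(avg_comparisons(n), n * (n - 1) / 4 + n - h)
```

Exhaustive enumeration is only feasible for small n, which is precisely why an analytical method such as MOQA is valuable for average-case analysis.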
Abstract:
This thesis proposes the development of a narrative methodology in the British Methodist Church. Such a methodology embraces and communicates both felt experience and critical theological thinking, thus producing and presenting a theology that might have a constructive transformative impact on wider society. In chapter one I explore the ways in which the Church speaks in public, identify some of the challenges it faces, and consider four models of engagement. If the Church is to engage in public discourses then I argue that its words need to be relevant and connect with people’s experiences. To ground the thinking I focus on the context of the British Methodist Church and explore how the Church engages in theological reflection through the lens of its thinking on issues of human sexuality. Chapter two reviews how theological reflection is undertaken in the British Methodist Church. I describe how the Methodist Quadrilateral of Scripture, tradition, reason and experience remains a foundational framework for theological reflection within the Methodist Church and consider the impact of institutional processes and the ways in which the Methodist people actually engage with theological thinking. The third and fourth chapters focus on how the British Methodist Church has produced its theology of human sexuality, giving particular attention to the use of personal and sexual stories in this process. I find that whilst there has been a desire to listen to the stories of the Methodist people, there has not been a corresponding interrogation or analysis of their stories so as to enable robust and constructive theological reflection on these experiences. Using resources from Foucauldian approaches to discourse analysis, I critique key statements and the processes involved in their production, offering an analysis of this body of theological thinking and indicating where possibilities for alternative ways of thinking and acting arise. 
The proposed methodology draws upon resources from social science methodologies, and in chapter five I look at the use of personal experience and relevant strategies of inquiry that prompt reflection on the hermeneutical process and employ narrative approaches in undertaking, analysing and presenting research. The exploration shows that qualitative research methodologies offer resources and methods of inquiry that could help the Church to engage with personal stories in its theological thinking in a robust, interrogative and imaginative way. In chapter six an examination of story and narrative is undertaken, to show how they have been understood as ways of knowing and how they relate to theological inquiry. Whilst acknowledging some of the limitations of narrative, I indicate how it offers constructive possibilities for theological reflection and could be a means for the British Methodist Church to engage in public discourse. This is explored further in chapter seven, which looks in more detail at how the British Methodist Church has used narrative in its theological thinking, and outlines areas requiring further attention in order for a narrative theological methodology to be developed, namely: attention to the question ‘whose experience?’; investigation of issues of power and the dynamics involved in the process of the production of theological thought; how personal stories and experiences are interrogated and how narrative is constructed; and how narrative might be employed within the Methodist Quadrilateral. The final chapter considers the advantages and limitations of such an approach, whether the development of such a method is possible in the Methodist Church today and its potential for helping the Church to engage in public discourse more effectively. I argue that this methodology can provoke new theological insights and enable new ways of being in the world.
Abstract:
In the early modern period, trade became a truly global phenomenon. The logistical, financial and organizational complexity associated with it increased in order to connect distant geographies and merchants from different backgrounds. How did these merchants prevent their partners from acting dishonestly at a time when formal institutions and legislation did not traverse these different worlds? This book studies the mechanisms and criteria of cooperation in early modern trading networks. It uses an interdisciplinary approach, through the case study of a Castilian long-distance merchant of the sixteenth century, Simon Ruiz, who traded within the limits of the Portuguese and Spanish overseas empires. Early Modern Trading Networks in Europe discusses the importance of reciprocity mechanisms, trust and reputation in the context of early modern business relations, using network analysis methodology and combining quantitative data with qualitative information. It considers how cooperation and prevention could simultaneously create a business relationship, and describes the mechanisms of control, policing and punishment used to avoid opportunism and deception among a group of business partners. Using bills of exchange and correspondence from Simon Ruiz’s private archive, it charts the evolution of this business network through time, debating which criteria should be included or excluded from business networks, as well as the emergence of standards. This book intends to put forward a new approach to early modern trade which focuses on individuals interacting in self-organized structures, rather than on states or empires. It shows how indirect reciprocity was much more frequent than direct reciprocity among early modern merchants, and how informal norms, like ostracism or signaling, helped to prevent defection and deception in an effective way.
Abstract:
Evaluating the nature of the earliest, often controversial, traces of life in the geological record (dating to the Palaeoarchaean, up to ~3.5 billion years before the present) is of fundamental relevance for placing constraints on the potential that life emerged on Mars at approximately the same time (the Noachian period). In their earliest histories, the two planets shared many palaeoenvironmental similarities, before the surface of Mars rapidly became inhospitable to life as we know it. Multi-scalar, multi-modal analyses of fossiliferous rocks from the Barberton greenstone belt of South Africa and the East Pilbara terrane of Western Australia are a window onto primitive prokaryotic ecosystems. Complementary petrographic, morphological, (bio)geochemical and nanostructural analyses of chert horizons and the carbonaceous material within them, using a wide range of techniques – including optical microscopy, SEM-EDS, Raman spectroscopy, PIXE, µCT, laser ablation ICP-MS, high-resolution TEM-based analytical techniques and secondary ion mass spectrometry – can characterise, at scales from macroscopic to nanoscopic, the fossilised biomes of the earliest Earth. These approaches enable the definition of the palaeoenvironments, and potentially the metabolic networks, preserved in ancient rocks. Modifying these protocols is necessary for Martian exploration using rovers, since the range and power of space instrumentation is significantly reduced relative to terrestrial laboratories. Understanding the crucial observations possible using highly complementary rover-based payloads is therefore critical for scientific protocols aiming to detect traces of life on Mars.
Abstract:
This thesis studies how commercial practice is developing with artificial intelligence (AI) technologies and discusses some normative concepts in EU consumer law. The author analyses the phenomenon of 'algorithmic business', which denotes the increasing use of data-driven AI in marketing organisations for the optimisation of a range of consumer-related tasks. The phenomenon is orienting business-consumer relations towards some general trends that influence the power and behaviour of consumers. These developments are not taking place in a legal vacuum, but against the background of a normative system aimed at maintaining fairness and balance in market transactions. The author assesses current developments in commercial practices in the context of EU consumer law, which is specifically aimed at regulating commercial practices. The analysis is critical by design and, without neglecting concrete practices, tries to look at the big picture. The thesis consists of nine chapters divided into three thematic parts. The first part discusses the deployment of AI in marketing organisations: a brief history, the technical foundations, and their modes of integration in business organisations. In the second part, a selected number of socio-technical developments in commercial practice are analysed: the monitoring and analysis of consumers’ behaviour based on data; the personalisation of commercial offers and customer experience; the use of information on consumers’ psychology and emotions; and mediation through conversational marketing applications. The third part assesses these developments in the context of EU consumer law and of the broader policy debate concerning consumer protection in the algorithmic society.
In particular, two normative concepts underlying the EU fairness standard are analysed: manipulation, as a substantive regulatory standard that limits commercial behaviours in order to protect consumers’ informed and free choices; and vulnerability, as a concept of social policy that portrays people who are more exposed to marketing practices.
Abstract:
Silicon-based discrete high-power devices need to be designed with optimal performance at up to several thousand volts and amperes to reach power ratings ranging from a few kW to beyond the 1 GW mark. To this purpose, a key element is the improvement of the junction termination (JT), since it allows a drastic reduction of the surface electric field peaks which may lead to early device failure. This thesis is mostly focused on the negative bevel termination, which for many years has constituted a standard processing step in bipolar production lines. A simple methodology to realize its counterpart, a planar JT with variation of the lateral doping concentration (VLD), will also be described. On the JT, a thin layer of a semi-insulating material is usually deposited, which acts as a passivation layer, reducing the interface defects and contributing to increased device reliability. A thorough understanding of how the passivation layer properties affect the breakdown voltage and the leakage current of a fast-recovery diode is fundamental to preserve the ideal termination effect and provide a stable blocking capability. More recently, amorphous carbon, also called diamond-like carbon (DLC), has been used as a robust surface passivation material. Using a commercial TCAD tool, a detailed physical explanation of DLC electrostatic and transport properties has been provided. The proposed approach is able to predict the breakdown voltage and the leakage current of a negative-beveled power diode passivated with DLC, as confirmed by successful validation against the available experiments. In addition, the VLD JT proposed to overcome the limitations of the negative bevel architecture has been simulated, showing a breakdown voltage very close to the ideal one with a much smaller area consumption. Finally, the effect of a low junction depth on the formation of current filaments has been analyzed by performing reverse-recovery simulations.
Abstract:
The world is quickly changing, and the field of power electronics assumes a pivotal role in addressing the challenges posed by climate change, global warming, and energy management. The introduction of wide-bandgap semiconductors, particularly gallium nitride (GaN), in contrast to traditional silicon technology, is leading to lightweight, compact and ever more efficient circuitry. However, GaN technology is not yet mature and still presents reliability issues which constrain its widespread adoption. Therefore, GaN reliability is a hotspot for the research community. Extensive efforts have been directed toward understanding the physical mechanisms underlying the performance and reliability of GaN power devices. The goal of this thesis is to propose a novel in-circuit degradation analysis in order to accurately evaluate the long-term reliability of GaN-based power devices. The in-circuit setup is based on a measure-stress-measure methodology, where a high-speed synchronous buck converter provides the stress while the measurement is performed by means of full I-V characterizations. The switch from stress mode to characterization mode and vice versa is automatic, thanks to electromechanical and solid-state relays controlled by an external control unit. Because these relays are located in critical paths of the converter layout, the design required a comprehensive study of the electrical and thermal problems originating from the use of GaN technology. In addition, during the validation phase of the converter, electromagnetic-lumped-element circuit simulations were carried out to monitor the signal integrity and junction temperature of the devices under test. The core of this work, however, is the in-circuit reliability analysis conducted with 80 V GaN HEMTs under several operating conditions of the converter, in order to identify the main stressors which contribute to device degradation.
Abstract:
One of the great challenges facing the scientific community working on theories of genetic information, genetic communication and genetic coding is to determine a mathematical structure related to DNA sequences. In this paper we propose a model of an intra-cellular transmission system of genetic information, similar to a model of a power- and bandwidth-efficient digital communication system, in order to identify a mathematical structure in biologically relevant DNA sequences. The model of a transmission system of genetic information is concerned with the identification, reproduction and mathematical classification of the nucleotide sequence of single-stranded DNA by the genetic encoder. Hence, a genetic encoder is devised in which labelings and cyclic codes are established. Establishing the algebraic structure of the corresponding code alphabets, mappings, labelings, primitive polynomials (p(x)) and code generator polynomials (g(x)) is quite important in characterizing error-correcting code subclasses of G-linear codes. These latter codes are useful for the identification, reproduction and mathematical classification of DNA sequences. The characterization of this model may contribute to the development of a methodology that can be applied in mutational analysis and polymorphisms, the production of new drugs and genetic improvement, among other things, resulting in the reduction of time and laboratory costs.
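A minimal sketch of the codeword-membership idea: assuming an illustrative binary labeling A=00, C=01, G=10, T=11 (the paper's actual alphabets and labelings may differ), a mapped sequence belongs to the code generated by g(x) over GF(2) exactly when its polynomial is divisible by g(x). For a true cyclic code of length n, g(x) must additionally divide x^n - 1; the divisibility test alone defines a polynomial code.

```python
def mod2_poly_remainder(bits, g):
    """Remainder of the polynomial with coefficient list `bits`
    (highest degree first) divided by g(x), arithmetic mod 2."""
    r = list(bits)
    for i in range(len(r) - len(g) + 1):
        if r[i]:
            for j, gj in enumerate(g):
                r[i + j] ^= gj
    return r[-(len(g) - 1):] if len(g) > 1 else []

# Assumed labeling of nucleotides to bit pairs (illustrative only).
LABEL = {'A': (0, 0), 'C': (0, 1), 'G': (1, 0), 'T': (1, 1)}

def is_codeword(dna, g):
    """Map a DNA string to bits via LABEL and test divisibility by
    the generator polynomial g(x)."""
    bits = [b for nt in dna for b in LABEL[nt]]
    return not any(mod2_poly_remainder(bits, g))

# g(x) = x^3 + x + 1, the classical generator of the (7,4) Hamming code.
g = [1, 0, 1, 1]
print(is_codeword('AAAA', g))  # → True (the all-zero word is always a codeword)
print(is_codeword('GTAA', g))  # → True (bits 10110000 = g(x) * x^4)
```

Under such a scheme, a sequencing error that moves a mapped sequence off the code can be detected, and for suitable g(x) corrected, which is the error-correcting perspective the paper applies to DNA.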
Abstract:
Response surface methodology based on a Box-Behnken design (BBD) was successfully applied to the optimization of the operating conditions of the electrochemical oxidation of sanitary landfill leachate, aiming to make this method feasible for scale-up. Landfill leachate was treated in a continuous batch-recirculation system, where a dimensionally stable anode (DSA©) coated with a Ti/TiO2 and RuO2 film oxide was used. The effects of three variables, current density (milliamperes per square centimeter), treatment time (minutes), and supporting electrolyte dosage (moles per liter), upon total organic carbon removal were evaluated. Optimized conditions were obtained for the highest desirability at 244.11 mA/cm², 41.78 min, and 0.07 mol/L of NaCl, and at 242.84 mA/cm², 37.07 min, and 0.07 mol/L of Na2SO4. Under the optimal conditions, 54.99 % chemical oxygen demand (COD) and 71.07 % ammonia nitrogen (NH3-N) removal was achieved with NaCl, and 45.50 % COD and 62.13 % NH3-N removal with Na2SO4. A new kinetic model obtained from the relation between the BBD and the kinetic model was suggested.
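The "highest desirability" criterion used above typically refers to Derringer-type desirability functions: each response is rescaled to [0, 1] and the rescaled values are combined by a geometric mean. The sketch below assumes simple 0-100 % acceptable ranges for both responses; the actual ranges and weights used in the study are not reported here.

```python
def desirability_max(y, low, high):
    """Derringer-type individual desirability for a response to be
    maximised: 0 below `low`, 1 above `high`, linear in between."""
    if y <= low:
        return 0.0
    if y >= high:
        return 1.0
    return (y - low) / (high - low)

def overall_desirability(ds):
    """Geometric mean of the individual desirabilities: a single
    score to maximise over the design space."""
    prod = 1.0
    for d in ds:
        prod *= d
    return prod ** (1.0 / len(ds))

# Illustrative: the reported NaCl optimum (54.99 % COD, 71.07 % NH3-N)
# scored against assumed 0-100 % ranges.
d_cod = desirability_max(54.99, 0, 100)
d_nh3 = desirability_max(71.07, 0, 100)
print(round(overall_desirability([d_cod, d_nh3]), 4))
```

The geometric mean is used (rather than an arithmetic mean) so that any single unacceptable response drives the overall desirability to zero.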
Abstract:
Didanosine-loaded chitosan microspheres were developed applying a surface-response methodology and using a modified Maximum Likelihood Classification. The operational conditions were optimized with the aim of maintaining the active form of didanosine (ddI), which is sensitive to acidic pH, and of developing a modified, mucoadhesive formulation. The loading of the drug within the chitosan microspheres was carried out by the ionotropic gelation technique, with sodium tripolyphosphate (TPP) as cross-linking agent and magnesium hydroxide (Mg(OH)2) to ensure the stability of ddI. The optimization conditions were set using a surface-response methodology and applying the Maximum Likelihood Classification, where the initial chitosan, TPP and ddI concentrations were set as the independent variables. The maximum ddI loading in the microspheres (i.e. 1433 mg of ddI/g chitosan) was obtained with 2% (w/v) chitosan and 10% TPP. The microspheres exhibited an average diameter of 11.42 μm, and ddI was gradually released over 2 h in simulated enteric fluid.