904 results for Electricity -- Prices -- Mathematical models.
Abstract:
The thesis presents a two-dimensional Risk Assessment Method (RAM) in which the assessment of risk to groundwater resources incorporates both the quantification of the probability that contaminant source terms occur and the assessment of the resulting impacts. The approach emphasizes a greater dependency on the potential pollution sources, rather than the traditional approach in which assessment is based mainly on intrinsic geo-hydrologic parameters. The risk is calculated using Monte Carlo simulation, whereby random pollution events are generated according to the same distribution as historically occurring events or an a priori probability distribution. Integrated mathematical models then simulate contaminant concentrations at predefined monitoring points within the aquifer. The spatial and temporal distributions of the concentrations are calculated from repeated realisations, and the number of times a user-defined concentration magnitude is exceeded is quantified as a risk. The method was set up by integrating MODFLOW-2000, MT3DMS and a FORTRAN-coded risk model, and automated using a DOS batch-processing file. GIS software was employed to produce the input files and to present the results. The functionality of the method, as well as its sensitivity to model grid size, contaminant loading rate, length of stress period, and the historical frequency of occurrence of pollution events, was evaluated using hypothetical scenarios and a case study. Chloride-related pollution sources were compiled and used as indicative potential contaminant sources for the case study. At any active model cell, if a randomly generated number is less than the probability of pollution occurrence, the risk model generates a synthetic contaminant source term as an input to the transport model. The results of applying the method are presented as tables, graphs and spatial maps. Varying the model grid size indicates no significant effect on the simulated groundwater head. The simulated frequency of daily occurrence of pollution incidents is also independent of the model dimensions. However, the simulated total contaminant mass generated within the aquifer, and the associated volumetric numerical error, appear to increase with increasing grid size. The migration of the contaminant plume also advances faster with coarse grids than with finer grids. The number of daily contaminant source terms generated, and consequently the total mass of contaminant within the aquifer, increases non-linearly with the increasing frequency of occurrence of pollution events. The risk of pollution from a number of sources all occurring by chance together was evaluated and presented quantitatively as risk maps. This capability to combine the risk to a groundwater feature from numerous potential pollution sources proved to be a major asset of the method and a significant advantage over contemporary risk and vulnerability methods.
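A minimal sketch of the Monte Carlo loop described above, with a toy linear surrogate standing in for the MODFLOW-2000/MT3DMS transport chain; the cell counts, event probabilities, `transfer` matrix, source-term distribution and threshold are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(42)

n_cells, n_points, n_real = 200, 5, 1000
p_occur = np.full(n_cells, 0.02)                   # probability of a pollution event per cell
transfer = rng.random((n_cells, n_points)) * 1e-3  # toy cell-to-monitoring-point response
threshold = 0.05                                   # user-defined concentration magnitude

def run_transport(source_terms):
    """Placeholder for the groundwater flow/transport simulation;
    returns a toy concentration at each monitoring point."""
    return source_terms @ transfer

exceed = np.zeros(n_points)
for _ in range(n_real):
    # at any active cell, an event fires if a random draw falls below p_occur
    events = rng.random(n_cells) < p_occur
    load = np.where(events, rng.lognormal(mean=0.0, sigma=1.0, size=n_cells), 0.0)
    conc = run_transport(load)
    exceed += conc > threshold

risk = exceed / n_real  # fraction of realisations exceeding the threshold
print(dict(zip([f"well_{i}" for i in range(n_points)], risk.round(3))))
```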
Abstract:
This paper focuses on minimizing printed circuit board (PCB) assembly time for a chip shooter machine, which has a movable feeder carrier holding components, a movable X–Y table carrying a PCB, and a rotary turret with multiple assembly heads. The assembly time of the machine depends on two inter-related optimization problems: the component sequencing problem and the feeder arrangement problem. Nevertheless, they have often been treated as two separate problems and solved independently. This paper proposes two complete mathematical models for the integrated problem on this machine. The models are verified by two commercial packages. Finally, a hybrid genetic algorithm previously developed by the authors is presented to solve the models. The algorithm not only generates optimal solutions quickly for small-sized problems, but also outperforms the genetic algorithms developed by other researchers in terms of total assembly time.
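To illustrate how the two problems couple, here is a stylized genetic-algorithm sketch; this is not the authors' hybrid algorithm, and the placement points, the surrogate timing function `assembly_time`, and the mutation-only evolution scheme are all illustrative assumptions:

```python
import random

# Toy instance: hypothetical placement points and component types.
points = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(12)]
ctype  = [random.randrange(4) for _ in range(12)]   # component type per placement
slots  = list(range(4))                             # feeder slots, one per type

def assembly_time(seq, feeder):
    """Crude surrogate: X-Y table travel between consecutive placements,
    overlapped with feeder-carrier travel between the components' slots."""
    t = 0.0
    for a, b in zip(seq, seq[1:]):
        (x1, y1), (x2, y2) = points[a], points[b]
        table = max(abs(x1 - x2), abs(y1 - y2))      # both table axes move together
        carrier = abs(feeder[ctype[a]] - feeder[ctype[b]])
        t += max(table, carrier)                     # turret waits for the slower motion
    return t

def evolve(pop_size=30, gens=200):
    pop = [(random.sample(range(12), 12), random.sample(slots, len(slots)))
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda ind: assembly_time(*ind))
        elite = pop[: pop_size // 2]
        children = []
        for seq, feeder in elite:
            s, f = seq[:], feeder[:]
            i, j = random.sample(range(12), 2)
            s[i], s[j] = s[j], s[i]                  # swap mutation on the sequence
            k, l = random.sample(range(len(f)), 2)
            f[k], f[l] = f[l], f[k]                  # swap mutation on the feeder layout
            children.append((s, f))
        pop = elite + children
    return min(pop, key=lambda ind: assembly_time(*ind))

best_seq, best_feeder = evolve()
```

Because the fitness depends jointly on the sequence and the feeder layout, optimizing either one alone can miss solutions that the integrated search finds.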
Abstract:
This paper presents an assessment of the technical and economic performance of thermal processes to generate electricity from a wood chip feedstock by combustion, gasification and fast pyrolysis. The scope of the work begins with the delivery of a wood chip feedstock at a conversion plant and ends with the supply of electricity to the grid, incorporating wood chip preparation, thermal conversion, and electricity generation in dual fuel diesel engines. Net generating capacities of 1–20 MWe are evaluated. The techno-economic assessment is achieved through the development of a suite of models that are combined to give cost and performance data for the integrated system. The models include feed pretreatment, combustion, atmospheric and pressure gasification, fast pyrolysis with pyrolysis liquid storage and transport (an optional step in de-coupled systems) and diesel engine or turbine power generation. The models calculate system efficiencies, capital costs and production costs. An identical methodology is applied in the development of all the models so that all of the results are directly comparable. The electricity production costs have been calculated for 10th plant systems, indicating the costs that are achievable in the medium term after the high initial costs associated with novel technologies have come down. At larger scales the costs converge with the mean electricity price paid in the EU by a large consumer, and there is therefore potential for fast pyrolysis and diesel engine systems to sell electricity directly to large consumers or for on-site generation. However, competition will be fierce at all capacities since electricity production costs vary only slightly between the four biomass to electricity systems that are evaluated. Systems de-coupling is one way that the fast pyrolysis and diesel engine system can distinguish itself from the other conversion technologies. Evaluations in this work show that situations requiring several remote generators are much better served by a large fast pyrolysis plant that supplies fuel to de-coupled diesel engines than by constructing an entire close-coupled system at each generating site. Another advantage of de-coupling is that the fast pyrolysis conversion step and the diesel engine generation step can operate independently, with intermediate storage of the fast pyrolysis liquid fuel, increasing overall reliability. Peak load or seasonal power requirements would also benefit from de-coupling since a small fast pyrolysis plant could operate continuously to produce fuel that is stored for use in the engine on demand. Current electricity production costs for a fast pyrolysis and diesel engine system are 0.091/kWh at 1 MWe when learning effects are included. These systems are handicapped by the typical characteristics of a novel technology: high capital cost, high labour requirements, and low reliability. As such, the more established combustion and steam cycle produces lower-cost electricity under current conditions. The fast pyrolysis and diesel engine system is a low capital cost option but it also suffers from relatively low system efficiency, particularly at high capacities. This low efficiency is the result of a low conversion efficiency of feed energy into the pyrolysis liquid, because of the energy in the char by-product. A sensitivity analysis has highlighted the strong impact of the fast pyrolysis liquids yield on electricity production costs.
The liquids yield should be set realistically during design, and it should be maintained in practice by careful attention to plant operation and feed quality. Another problem is the high power consumption during feedstock grinding. Efficiencies may be enhanced in ablative fast pyrolysis, which can tolerate a chipped feedstock, but this has yet to be demonstrated at commercial scale. In summary, the fast pyrolysis and diesel engine system has great potential to generate electricity at a profit in the long term, and at a lower cost than any other biomass to electricity system at small scale. This future viability can only be achieved through the construction of early plants that could, in the short term, be more expensive than the combustion alternative. Profitability in the short term can best be achieved by exploiting niches in the market place and specific features of fast pyrolysis. These include:
• countries or regions with fiscal incentives for renewable energy such as premium electricity prices or capital grants;
• locations with high electricity prices, so that electricity can be sold direct to large consumers or generated on-site by companies who wish to reduce their consumption from the grid;
• waste disposal opportunities where feedstocks can attract a gate fee rather than incur a cost;
• the ability to store fast pyrolysis liquids as a buffer against shutdowns or as a fuel for peak-load generating plant;
• de-coupling opportunities where a large, single pyrolysis plant supplies fuel to several small and remote generators;
• small-scale combined heat and power opportunities;
• sales of the excess char, although a market has yet to be established for this by-product; and
• potential co-production of speciality chemicals and fuel for power generation in fast pyrolysis systems.
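As a rough illustration of the 10th-plant costing idea mentioned above, the sketch below combines a textbook learning curve with a simple levelized-cost calculation; the 80% learning rate, fixed-charge factor, and all plant figures are assumptions for illustration, not values from this work:

```python
import math

def nth_plant_capex(first_cost, n, learning_rate=0.80):
    """Capital cost of the n-th plant: cost falls by (1 - learning_rate)
    for every doubling of cumulative builds."""
    b = math.log(learning_rate, 2)   # negative exponent
    return first_cost * n ** b

def production_cost(capex, capacity_mwe, capacity_factor=0.85,
                    fixed_charge=0.13, opex_frac=0.04, fuel_cost_per_kwh=0.02):
    """Levelized cost per kWh = annualized capital + O&M + fuel."""
    kwh_per_year = capacity_mwe * 1000 * 8760 * capacity_factor
    annual_capital = capex * fixed_charge
    annual_opex = capex * opex_frac
    return (annual_capital + annual_opex) / kwh_per_year + fuel_cost_per_kwh

capex_10th = nth_plant_capex(first_cost=5e6, n=10)   # hypothetical 1 MWe plant
print(f"10th-plant production cost: {production_cost(capex_10th, 1):.3f} per kWh")
```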
Abstract:
* This paper was prepared under the fundamental scientific research program of the Presidium of the Russian Academy of Sciences «Mathematical simulation and intellectual systems», within the project "Theoretical foundations of intellectual systems based on ontologies for intellectual support of scientific research".
Abstract:
For metal and metal halide vapor lasers excited by a high-frequency pulsed discharge, the thermal effect, caused mainly by the radial temperature distribution, is of considerable importance for stable laser operation and for improving laser output characteristics. A short survey of the analytical and numerical-analytical mathematical models obtained for the temperature profile in a high-powered He-SrBr2 laser is presented. The models are described by the steady-state heat conduction equation with mixed-type nonlinear boundary conditions for an arbitrary form of the volume power density. A complete model of radial heat flow between the two tubes is established for precisely calculating the inner-wall temperature. The models are applied to simulate temperature profiles for a newly designed laser. The author's software prototype LasSim is used to implement the mathematical models and carry out the simulations.
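A minimal numerical sketch of the governing radial equation, assuming constant conductivity, a fixed wall temperature and a parabolic power deposition profile rather than the mixed nonlinear boundary conditions and arbitrary power density treated here; all physical values are assumed:

```python
import numpy as np
from scipy.integrate import solve_bvp

k = 0.15        # W/(m K), assumed effective conductivity of the gas mixture
R = 0.025       # m, assumed tube radius
T_wall = 800.0  # K, assumed inner-wall temperature
q0 = 2.0e5      # W/m^3, assumed peak volume power density

def q(r):
    return q0 * (1 - (r / R) ** 2)   # illustrative parabolic deposition profile

def rhs(r, y):
    # y[0] = T, y[1] = dT/dr;  (1/r) d/dr (r k dT/dr) = -q(r)
    return np.vstack([y[1], -q(r) / k - y[1] / r])

def bc(ya, yb):
    return np.array([ya[1], yb[0] - T_wall])   # symmetry at axis, fixed wall T

r = np.linspace(1e-6, R, 100)                  # start just off-axis to avoid r = 0
y0 = np.vstack([np.full_like(r, T_wall), np.zeros_like(r)])
sol = solve_bvp(rhs, bc, r, y0)
print(f"axis temperature: {sol.sol(r[0])[0]:.1f} K")
```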
Abstract:
Impact of Hungarian renewable-based power generation on electricity prices. The aim of this paper is to answer the question of whether support for renewable power generation can decrease wholesale and retail electricity prices, the latter of which includes the cost of renewable support. Several theoretical studies have pointed out that not only wholesale but also retail electricity prices can decrease when the more expensive, renewable-based power generation is supported. Using a model that simulates electricity markets, the author analyses how supporting different amounts of wind and photovoltaic capacity affects Hungarian wholesale and retail electricity prices.
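The wholesale-price effect rests on the merit order: zero-marginal-cost renewables displace the most expensive marginal plants. A stylized dispatch sketch, assuming a hypothetical plant fleet and inelastic demand (not the paper's market model):

```python
# (capacity MW, marginal cost EUR/MWh) for a hypothetical conventional fleet
fleet = [(500, 5.0), (1000, 20.0), (1500, 45.0), (1000, 70.0), (500, 110.0)]

def clearing_price(demand_mw, renewable_mw):
    """Dispatch the cheapest plants first; renewables bid at zero marginal cost."""
    residual = max(demand_mw - renewable_mw, 0.0)
    for cap, mc in sorted(fleet, key=lambda p: p[1]):
        if residual <= cap:
            return mc            # price set by the marginal conventional unit
        residual -= cap
    return float("nan")          # demand exceeds available capacity

demand = 3200.0
for wind in (0, 500, 1000, 1500):
    print(f"renewables {wind:>4} MW -> wholesale price {clearing_price(demand, wind):>5.1f} EUR/MWh")
```

More supported renewable capacity pushes the marginal unit down the curve, lowering the wholesale price even though the support itself adds to the retail bill.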
Abstract:
The development of a new set of frost property measurement techniques to be used in the control of frost growth and defrosting processes in refrigeration systems was investigated. Holographic interferometry and infrared thermometry were used to measure the temperature of the frost-air interface, while a beam element load sensor was used to obtain the weight of the deposited frost layer. The proposed measurement techniques were tested for the cases of natural and forced convection, and characteristic charts were obtained for a set of operational conditions. An improvement of existing frost growth mathematical models was also investigated. The early stage of frost nucleation was commonly not considered in these models; instead, initial values of layer thickness and porosity were regularly assumed. A nucleation model was developed to obtain the droplet diameter and surface porosity at the end of the early frosting period. The drop-wise early condensation on a cold flat plate under natural convection from warm (room-temperature), humid air was modeled. A nucleation rate was found, and the relation of heat to mass transfer (the Lewis number) was obtained. The Lewis number was found to be much smaller than unity, the standard value usually assumed in most frosting numerical models. The nucleation model was validated against available experimental data for the early nucleation and full growth stages of the frosting process. The combination of frost top temperature and weight variation signals can now be used to control defrosting timing, and the developed early nucleation model can now be used to simulate the entire frost growth process on any surface material.
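For context, a quick computation of the conventional Lewis number from bulk humid-air properties (assumed handbook-style values), which comes out close to the unity that most frosting models take for granted; the thesis's finding is that the effective value during early nucleation is much smaller:

```python
# Le = alpha / D_AB for air and water vapour near room temperature (assumed values)
alpha = 2.2e-5   # m^2/s, thermal diffusivity of air
d_ab  = 2.5e-5   # m^2/s, diffusivity of water vapour in air

lewis = alpha / d_ab
print(f"Le = {lewis:.2f}")   # ~0.9: near the unity commonly assumed in frosting models
```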
Abstract:
In this work, I studied and found the exact solutions of a mathematical model applied to the cellular receptors of the integrin family. In the model, integrins are treated as a two-state system, active and inactive. When integrins are in the inactive state they can diffuse in the membrane, whereas in the active state they are crystallized in the membrane, unable to diffuse. A change in the cell-surface concentration of a substance called the activator triggers the activation of the integrins. Moreover, these heterodimers can bind an inhibitory molecule with control and regulation functions, which we call v, and its binding to the receptor increases the production of the activating substance, which we call u. In this way a positive feedback mechanism is triggered. The inhibitor v regulates the production mechanism of u and therefore takes on the role of a modulator. Indeed, thanks to this fine-regulation system, the positive feedback mechanism is able to limit itself. A system of differential equations is then constructed starting from the simple chemical reactions involved. Once the system of equations is set up, the solutions for the inhibitor and activator concentrations can be derived for a particular case of the parameters. Finally, a test can be run to see what the model predicts in terms of integrins. To do so, I used a step-function activation, inserted it into the system, and evaluated the receptor dynamics. The result agrees with expectations: bound integrins are found mainly at the edges of the activated zone, while free integrins are depleted within the activated zone.
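A minimal ODE sketch of an activator-inhibitor loop under a step-function stimulus; the rate laws and all constants are assumptions chosen to reproduce the qualitative self-limiting positive feedback, not the exact equations of this model:

```python
from scipy.integrate import solve_ivp

k_fb, k_du, k_pv, k_dv = 2.0, 1.0, 1.0, 0.5   # hypothetical rate constants

def rhs(t, y):
    u, v = y                                   # u: activator, v: inhibitor
    step = 1.0 if 1.0 <= t <= 3.0 else 0.0     # step-function stimulation
    du = step + k_fb * u * v / (1 + u) - k_du * u   # v-boosted, saturating production
    dv = k_pv * u - k_dv * v                        # inhibitor produced downstream of u
    return [du, dv]

sol = solve_ivp(rhs, (0, 10), [0.0, 0.0], max_step=0.05)
print(f"u peaks near {sol.y[0].max():.2f}, then relaxes as v regulates the loop")
```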
Abstract:
In perifusion cell cultures, the culture medium flows continuously through a chamber containing immobilized cells and the effluent is collected at the end. In our main applications, gonadotropin releasing hormone (GnRH) or oxytocin is introduced into the chamber as the input. They stimulate the cells to secrete luteinizing hormone (LH), which is collected in the effluent. To relate the effluent LH concentration to the cellular processes producing it, we develop and analyze a mathematical model consisting of coupled partial differential equations describing the intracellular signaling and the movement of substances in the cell chamber. We analyze three different data sets and give cellular mechanisms that explain the data. Our model indicates that two negative feedback loops, one fast and one slow, are needed to explain the data and we give their biological bases. We demonstrate that different LH outcomes in oxytocin and GnRH stimulations might originate from different receptor dynamics. We analyze the model to understand the influence of parameters, like the rate of the medium flow or the fraction collection time, on the experimental outcomes. We investigate how the rate of binding and dissociation of the input hormone to and from its receptor influence its movement down the chamber. Finally, we formulate and analyze simpler models that allow us to predict the distortion of a square pulse due to hormone-receptor interactions and to estimate parameters using perifusion data. We show that in the limit of high binding and dissociation the square pulse moves as a diffusing Gaussian and in this limit the biological parameters can be estimated.
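A back-of-the-envelope sketch of the stated high binding/dissociation limit, using a stylized retardation factor and a kinetics-induced dispersion term of the kind familiar from chromatography theory; all parameters are illustrative assumptions, not values fitted to the perifusion data:

```python
import numpy as np

v, L = 0.5, 10.0                        # cm/min flow speed, cm chamber length (assumed)
k_on, k_off, r_tot = 50.0, 25.0, 1.0    # fast binding kinetics, receptor density (assumed)

# With fast exchange the pulse travels at a retarded speed and spreads like a
# Gaussian; retardation factor R = 1 + bound/free ratio at equilibrium.
R = 1 + (r_tot * k_on) / k_off
v_eff = v / R
d_eff = v**2 * (R - 1) / (R**2 * k_off)  # stylized kinetics-induced dispersion

t = L / v_eff
sigma = np.sqrt(2 * d_eff * t)
print(f"pulse exits after {t:.1f} min with Gaussian width ~{sigma:.2f} cm")
```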
Abstract:
Uncertainty quantification (UQ) is both an old and a new concept. The current novelty lies in the interactions and synthesis of mathematical models, computer experiments, statistics, field/real experiments, and probability theory, with particular emphasis on large-scale simulations by computer models. The challenges come not only from the complexity of the scientific questions, but also from the sheer size of the information. The focus of this thesis is to provide statistical models that are scalable to the massive data produced in computer experiments and real experiments, through fast and robust statistical inference.
Chapter 2 provides a practical approach for simultaneously emulating/approximating a massive number of functions, with an application to hazard quantification of the Soufrière Hills volcano on the island of Montserrat. Chapter 3 discusses another problem with massive data, in which the number of observations of a function is large; an exact algorithm that is linear in time is developed for the problem of interpolating methylation levels. Chapters 4 and 5 both concern robust inference for the models. Chapter 4 provides a new robustness criterion for parameter estimation, and several inference methods are shown to satisfy it. Chapter 5 develops a new prior that satisfies additional criteria and is therefore proposed for use in practice.
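For flavor, a minimal Gaussian-process emulator of a cheap stand-in simulator, assuming scikit-learn is available; the thesis's scalable emulation of a massive number of functions goes well beyond this single-function sketch:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def simulator(x):
    """Stand-in for an expensive computer model run."""
    return np.sin(3 * x) + 0.5 * x

X_train = np.linspace(0, 3, 12).reshape(-1, 1)   # a small design of simulator runs
y_train = simulator(X_train).ravel()

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True)
gp.fit(X_train, y_train)

X_new = np.array([[1.37]])
mean, std = gp.predict(X_new, return_std=True)   # cheap prediction with uncertainty
print(f"emulated output {mean[0]:.3f} +/- {std[0]:.3f}")
```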
Abstract:
This dissertation studies capacity investments in energy sources, with a focus on renewable technologies, such as solar and wind energy. We develop analytical models to provide insights for policymakers and use real data from the state of Texas to corroborate our findings.
We first take a strategic perspective and focus on electricity pricing policies. Specifically, we investigate the capacity investments of a utility firm in renewable and conventional energy sources under flat and peak pricing policies. We consider generation patterns and the intermittency of solar and wind energy in relation to electricity demand throughout a day. We find that flat pricing leads to a higher investment level for solar energy, and that it can still lead to more investment in wind energy if a considerable amount of wind energy is generated throughout the day.
In the second essay, we complement the first one by focusing on the problem of matching supply with demand in every operating period (e.g., every five minutes) from the perspective of a utility firm. We study the interaction between renewable and conventional sources with different levels of operational flexibility, i.e., the possibility of quickly ramping energy output up or down. We show that operational flexibility determines these interactions: renewable and inflexible sources (e.g., nuclear energy) are substitutes, whereas renewable and flexible sources (e.g., natural gas) are complements.
In the final essay, rather than the capacity investments of the utility firms, we focus on the capacity investments of households in rooftop solar panels. We investigate whether or not these investments may cause a utility death spiral effect, which is a vicious circle of increased solar adoption and higher electricity prices. We observe that the current rate-of-return regulation may lead to a death spiral for utility firms. We show that one way to reverse the spiral effect is to allow the utility firms to maximize their profits by determining electricity prices.
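The death-spiral mechanism can be caricatured in a few lines: under rate-of-return pricing, the utility's fixed costs are spread over shrinking grid sales, which raises prices and spurs further rooftop adoption. All numbers below are illustrative assumptions, not the dissertation's model or data:

```python
fixed_cost = 120e6          # utility's fixed cost to recover, $/yr (assumed)
base_demand = 2.0e9         # kWh/yr bought from the grid before adoption (assumed)
grid_demand = base_demand
adoption = 0.02             # initial rooftop-solar share (assumed)

for year in range(1, 8):
    # rate-of-return pricing: spread fixed cost over remaining grid sales
    price = fixed_cost / grid_demand + 0.08            # $/kWh, incl. energy cost
    # higher prices push more households to adopt rooftop solar (stylized response)
    adoption = min(1.0, adoption + 0.5 * max(price - 0.10, 0))
    grid_demand = base_demand * (1 - adoption)
    print(f"year {year}: price ${price:.3f}/kWh, adoption {adoption:.1%}")
```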
Abstract:
Fire is a form of uncontrolled combustion which generates heat, smoke, toxic and irritant gases. All of these products are harmful to man and account for the heavy annual cost of 800 lives and £1,000,000,000 worth of property damage in Britain alone. The new discipline of Fire Safety Engineering has developed as a means of reducing these unacceptable losses. One of the main tools of Fire Safety Engineering is the mathematical model and over the past 15 years a number of mathematical models have emerged to cater for the needs of this discipline. Part of the difficulty faced by the Fire Safety Engineer is the selection of the most appropriate modelling tool to use for the job. To make an informed choice it is essential to have a good understanding of the various modelling approaches, their capabilities and limitations. In this paper some of the fundamental modelling tools used to predict fire and evacuation are investigated as are the issues associated with their use and recent developments in modelling technology.
Abstract:
Forecasting is the basis for making strategic, tactical and operational business decisions. In financial economics, several techniques have been used over the past decades to predict the behavior of assets. There are thus many methods to assist in the task of time-series forecasting; however, conventional modeling techniques, such as statistical models and those based on theoretical mathematical models, have produced unsatisfactory predictions, increasing the number of studies on more advanced prediction methods. Among these, Artificial Neural Networks (ANNs) are a relatively new and promising method for business forecasting that has attracted much interest in the financial environment and has been used successfully in a wide variety of financial modeling applications, in many cases proving superior to statistical ARIMA-GARCH models. In this context, this study examines whether ANNs are a more appropriate method for predicting the behavior of capital-market indices than traditional time-series methods. For this purpose we developed a quantitative study based on financial and economic indices and built two supervised-learning feedforward ANN models, whose structures consisted of 20 inputs, 90 neurons in one hidden layer, and one output (the Ibovespa). These models used backpropagation, a tangent-sigmoid activation function in the hidden layer, and a linear output function. To analyze how well the ANN method forecasts the Ibovespa, we compared its results with those of a GARCH(1,1) time-series model. Once both methods (ANN and GARCH) were applied, we analyzed the results by comparing the forecasts with the historical data and by studying the forecast errors through MSE, RMSE, MAE, standard deviation, Theil's U, and forecast-encompassing tests. The models developed with ANNs had lower MSE, RMSE and MAE than the GARCH(1,1) model, and the Theil's U test indicated that all three models have smaller errors than a naïve forecast. Although the precision indicators of the ANN based on returns differed from those of the ANN based on prices, the forecast-encompassing test rejected the hypothesis that either model is better than the other, indicating that the ANN models have a similar level of accuracy. It was concluded that, for the data series studied, the ANN models provide more appropriate Ibovespa forecasts than traditional time-series models, represented here by the GARCH model.
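A sketch of the described architecture (20 inputs, one hidden layer of 90 tanh units, linear output, trained by backpropagation), assuming scikit-learn and using a synthetic random-walk series in place of the Ibovespa data:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
prices = np.cumsum(rng.normal(0, 1, 600)) + 100     # stand-in for index levels

# sliding window: 20 past values in, next value out
X = np.array([prices[i:i + 20] for i in range(len(prices) - 20)])
y = prices[20:]
X = (X - X.mean()) / X.std()                        # scale inputs for stable training

model = MLPRegressor(hidden_layer_sizes=(90,), activation="tanh",
                     max_iter=2000)                 # gradient training = backprop
model.fit(X[:500], y[:500])

pred = model.predict(X[500:])
mse = np.mean((pred - y[500:]) ** 2)
rmse, mae = np.sqrt(mse), np.mean(np.abs(pred - y[500:]))
print(f"MSE={mse:.3f} RMSE={rmse:.3f} MAE={mae:.3f}")
```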
Abstract:
The analysis of steel and composite frames has traditionally been carried out by idealizing beam-to-column connections as either rigid or pinned. Although some advanced analysis methods have been proposed to account for semi-rigid connections, the performance of these methods strongly depends on the proper modeling of connection behavior. The primary challenge of modeling beam-to-column connections is their inelastic response and continuously varying stiffness, strength, and ductility. In this dissertation, two distinct approaches—mathematical models and informational models—are proposed to account for the complex hysteretic behavior of beam-to-column connections. The performance of the two approaches is examined and is then followed by a discussion of their merits and deficiencies. To capitalize on the merits of both mathematical and informational representations, a new approach, a hybrid modeling framework, is developed and demonstrated through modeling beam-to-column connections. Component-based modeling is a compromise spanning two extremes in the field of mathematical modeling: simplified global models and finite element models. In the component-based modeling of angle connections, the five critical components of excessive deformation are identified. Constitutive relationships of angles, column panel zones, and contact between angles and column flanges, are derived by using only material and geometric properties and theoretical mechanics considerations. Those of slip and bolt hole ovalization are simplified by empirically-suggested mathematical representation and expert opinions. A mathematical model is then assembled as a macro-element by combining rigid bars and springs that represent the constitutive relationship of components. Lastly, the moment-rotation curves of the mathematical models are compared with those of experimental tests. In the case of a top-and-seat angle connection with double web angles, a pinched hysteretic response is predicted quite well by complete mechanical models, which take advantage of only material and geometric properties. On the other hand, to exhibit the highly pinched behavior of a top-and-seat angle connection without web angles, a mathematical model requires components of slip and bolt hole ovalization, which are more amenable to informational modeling. An alternative method is informational modeling, which constitutes a fundamental shift from mathematical equations to data that contain the required information about underlying mechanics. The information is extracted from observed data and stored in neural networks. Two different training data sets, analytically-generated and experimental data, are tested to examine the performance of informational models. Both informational models show acceptable agreement with the moment-rotation curves of the experiments. Adding a degradation parameter improves the informational models when modeling highly pinched hysteretic behavior. However, informational models cannot represent the contribution of individual components and therefore do not provide an insight into the underlying mechanics of components. In this study, a new hybrid modeling framework is proposed. In the hybrid framework, a conventional mathematical model is complemented by the informational methods. The basic premise of the proposed hybrid methodology is that not all features of system response are amenable to mathematical modeling, hence considering informational alternatives. 
This may be because (i) the underlying theory is not available or not sufficiently developed, or (ii) the existing theory is too complex and therefore not suitable for modeling within building frame analysis. The role of the informational methods is to model aspects that the mathematical model leaves out. The autoprogressive algorithm and self-learning simulation extract the missing aspects from a system response. In a hybrid framework, experimental data is an integral part of modeling, rather than being used strictly for validation. The potential of the hybrid methodology is illustrated by modeling the complex hysteretic behavior of beam-to-column connections. Mechanics-based components of deformation, such as angles, flange-plates, and the column panel zone, are idealized into a mathematical model using a complete mechanical approach. Although the mathematical model represents the envelope curves in terms of initial stiffness and yield strength, it is not capable of capturing the pinching effects. Pinching is caused mainly by separation between angles and column flanges as well as slip between angles/flange-plates and beam flanges. These components of deformation are suitable for informational modeling. Finally, the moment-rotation curves of the hybrid models are validated against those of the experimental tests. The comparison shows that the hybrid models are capable of representing the highly pinched hysteretic behavior of beam-to-column connections. In addition, the developed hybrid model is successfully used to predict the behavior of a newly designed connection.
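As a toy illustration of the component-based idea, the sketch below assembles a few bilinear component springs acting in parallel into a monotonic connection moment-rotation response; the component list and all stiffness/strength values are hypothetical, and the dissertation's macro-element additionally handles hysteresis, contact, slip and bolt-hole ovalization:

```python
def bilinear_moment(rotation, k0, m_y, k_hard):
    """Moment from one bilinear spring: elastic branch, then hardening."""
    m_elastic = k0 * rotation
    if abs(m_elastic) <= m_y:
        return m_elastic
    sign = 1 if rotation > 0 else -1
    return sign * (m_y + k_hard * (abs(rotation) - m_y / k0))

# hypothetical components acting in parallel at the connection
components = [
    ("top-and-seat angles", 4.0e4, 60.0, 2.0e3),   # k0 kN*m/rad, M_y kN*m, k_hard
    ("web angles",          1.5e4, 25.0, 1.0e3),
    ("panel zone",          8.0e4, 90.0, 4.0e3),
]

def connection_moment(rotation):
    """Parallel assembly: components share the rotation and their moments add."""
    return sum(bilinear_moment(rotation, k0, m_y, kh)
               for _, k0, m_y, kh in components)

for theta in (0.001, 0.005, 0.010, 0.020):
    print(f"rotation {theta:.3f} rad -> moment {connection_moment(theta):7.1f} kN*m")
```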