834 results for SUPPLY AND DEMAND
Abstract:
In his essay “Toward a Better Understanding of the Evolution of Hotel Development: A Discussion of Product-Specific Lodging Demand,” John A. Carnella (Consultant, Laventhol & Horwath, CPAs, New York) introduces his piece by stating: “The diversified hotel product in the United States lodging market has resulted in latent room-night demand, or supply-driven demand resulting from the introduction of a lodging product which caters to a specific set of hotel patrons. The subject has become significant as the lodging market has moved toward segmentation with regard to guest room offerings. The author proposes that latent demand is a tangible, measurable phenomenon best understood in light of the history of the guest room product from its infancy to its present state.” The article opens with a brief depiction of hotel development in the United States, both pre- and post-World War II. To put it succinctly, the author wants you to know that the advent of the interstate highway system changed the complexion of the hotel industry in the U.S. “Two essential ingredients were necessary for the next phase of hotel development in this country. First was the establishment of the magnificently intricate infrastructure which facilitated motor vehicle transportation in and around the then 48 states of the nation,” says Carnella. “The second event…was the introduction of affordable highway travel.” Carnella goes on to say that the next big thing in hotel evolution was the introduction of affordable air travel. “With the airways filled with potential lodging guests, developers moved next to erect a new genre of hotel, the airport hotel,” Carnella continues. Growth progressed with the arrival of the suburban hotel concept, which was fueled not by developments in transportation but by changes in people’s living habits, namely the shift from urban centers to the suburbs. The author then explores the distinctions between full-service and limited-service lodging operations. “The market of interest with consideration to the extended-stay facility is one dominated by corporate office parks,” Carnella proceeds. These evolutionary stages speak to latent demand, and further to segmentation of the market. “Latent demand… is a product-generated phenomenon in which the number of potential hotel guests increases as the direct result of the introduction of a new lodging facility,” Carnella observes with regard to the specialization process. The demand is already there, just waiting to be tapped. In closing, “…there must be a consideration of the unique attributes of a lodging facility relative to its ability to attract guests to a subject market, just as there must be an examination of the property's ability to draw guests from within the subject market,” Carnella proposes.
Abstract:
This dissertation studies capacity investments in energy sources, with a focus on renewable technologies, such as solar and wind energy. We develop analytical models to provide insights for policymakers and use real data from the state of Texas to corroborate our findings.
We first take a strategic perspective and focus on electricity pricing policies. Specifically, we investigate the capacity investments of a utility firm in renewable and conventional energy sources under flat and peak pricing policies. We consider generation patterns and intermittency of solar and wind energy in relation to the electricity demand throughout a day. We find that flat pricing leads to a higher investment level for solar energy, and that it can still lead to more investment in wind energy if a considerable amount of wind energy is generated throughout the day.
In the second essay, we complement the first one by focusing on the problem of matching supply with demand in every operating period (e.g., every five minutes) from the perspective of a utility firm. We study the interaction between renewable and conventional sources with different levels of operational flexibility, i.e., the possibility
of quickly ramping energy output up or down. We show that operational flexibility determines these interactions: renewable and inflexible sources (e.g., nuclear energy) are substitutes, whereas renewable and flexible sources (e.g., natural gas) are complements.
In the final essay, rather than the capacity investments of the utility firms, we focus on the capacity investments of households in rooftop solar panels. We investigate whether or not these investments may cause a utility death spiral effect, which is a vicious circle of increased solar adoption and higher electricity prices. We observe that the current rate-of-return regulation may lead to a death spiral for utility firms. We show that one way to reverse the spiral effect is to allow the utility firms to maximize their profits by determining electricity prices.
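The feedback loop behind the death spiral can be made concrete with a toy simulation (not the dissertation's analytical model): under rate-of-return regulation the utility recovers the same fixed costs from a shrinking base of grid sales, which raises the regulated price and in turn drives further rooftop-solar adoption. All parameters below are invented for illustration.

```python
# Toy illustration of the "death spiral" feedback loop: under rate-of-return
# regulation the utility spreads the same fixed costs over shrinking grid sales,
# which raises prices and drives further rooftop-solar adoption.
# All numbers are hypothetical; this is not the dissertation's model.

fixed_costs = 100.0          # fixed costs to be recovered each year ($M)
base_demand = 1000.0         # grid sales with no rooftop solar (GWh)
adoption = 0.05              # initial fraction of households with rooftop solar
price_sensitivity = 2.0      # extra adoption per unit of relative price increase

price = fixed_costs / base_demand
for year in range(1, 11):
    grid_demand = base_demand * (1 - 0.9 * adoption)   # adopters buy less from the grid
    new_price = fixed_costs / grid_demand              # regulated cost-recovery price
    # Higher prices make rooftop solar more attractive, so adoption grows further.
    adoption = min(1.0, adoption + price_sensitivity * (new_price - price) / price * (1 - adoption))
    price = new_price
    print(f"year {year:2d}: price {price:6.3f}, adoption {adoption:5.1%}, grid sales {grid_demand:7.1f} GWh")
```

With these made-up parameters the loop reinforces itself; letting the firm set prices freely, as the essay suggests, corresponds to breaking the fixed cost-recovery rule in the second line of the loop.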
Abstract:
Periods of drought and low streamflow can have profound impacts on both human and natural systems. People depend on a reliable source of water for numerous reasons, including potable water supply and the production of economic value through agriculture or energy production. Aquatic ecosystems depend on water in addition to the economic benefits they provide to society through ecosystem services. Given that periods of low streamflow may become more extreme and frequent in the future, it is important to study the factors that control water availability during these times. In the absence of precipitation, the slower hydrological response of groundwater systems plays an amplified role in water supply. Understanding the variability of the fraction of streamflow contributed by baseflow, or groundwater, during periods of drought provides insight into what future water availability may look like and how it can best be managed. The Mills River Basin in North Carolina is chosen as a case study to test this understanding. First, a physically meaningful estimate of baseflow is obtained from USGS streamflow data via computerized hydrograph analysis techniques. Time series analysis, including wavelet analysis, is then applied to highlight signals of non-stationarity and to evaluate the changes in variance needed to better understand the natural variability of baseflow and low flows. In addition to natural variability, human influence must be taken into account in order to accurately assess how the combined system reacts to periods of low flow. Defining a combined demand that consists of both natural and human demand allows a more rigorous assessment of the level of sustainable use of a shared resource, in this case water. The analysis of baseflow variability can differ based on regional location and local hydrogeology, but it was found that baseflow varies from multiyear scales, such as those associated with ENSO (3.5, 7 years), up to multidecadal time scales, with most of the contributing variance coming from decadal or multiyear scales. It was also found that the behavior of baseflow, and subsequently water availability, depends a great deal on overall precipitation, the tracks of hurricanes or tropical storms and associated climate indices, as well as physiography and hydrogeology. Using the Duke Combined Hydrology Model (DCHM), reasonably accurate estimates of streamflow during periods of low flow were obtained, in part due to the model's ability to capture subsurface processes. Being able to accurately simulate streamflow levels and subsurface interactions during periods of drought can be very valuable to water suppliers and decision makers, and ultimately benefits citizens. Knowledge of future droughts and periods of low flow, in addition to tracking customer demand, will allow for better management practices on the part of water suppliers, such as knowing when to withdraw more water during a surplus so that stress on the system is minimized when water supply is not ample.
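The abstract does not name the specific hydrograph-separation technique used; as a hedged illustration of what "computerized hydrograph analysis" typically involves, the widely used Lyne-Hollick one-parameter recursive digital filter splits a streamflow record into quickflow and baseflow as sketched below (the daily flows are invented).

```python
import numpy as np

def lyne_hollick_baseflow(q, alpha=0.925):
    """One-pass Lyne-Hollick digital filter: split streamflow q into
    quickflow and baseflow. alpha ~ 0.9-0.95 is the usual filter parameter."""
    q = np.asarray(q, dtype=float)
    quick = np.zeros_like(q)
    for t in range(1, len(q)):
        f = alpha * quick[t - 1] + 0.5 * (1 + alpha) * (q[t] - q[t - 1])
        quick[t] = min(max(f, 0.0), q[t])   # quickflow bounded by [0, streamflow]
    return q - quick                        # baseflow = streamflow - quickflow

# Hypothetical daily streamflow (cfs) with a storm peak followed by recession.
streamflow = [30, 28, 27, 120, 90, 60, 45, 38, 33, 30, 29, 28]
baseflow = lyne_hollick_baseflow(streamflow)
bfi = baseflow.sum() / sum(streamflow)      # baseflow index for this window
print(np.round(baseflow, 1), f"BFI = {bfi:.2f}")
```

The resulting baseflow series is what would then feed the wavelet and variance analyses described above.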
Abstract:
Power systems require a reliable supply and good power quality. The impact of power supply interruptions is well acknowledged and well quantified. However, a system may perform reliably without any interruptions yet still have poor power quality. Although poor power quality has cost implications for all actors in electrical power systems, only some users are aware of its impact. Power system operators are much attuned to the impact of low power quality on their equipment and have the appropriate monitoring systems in place. However, over recent years certain industries have become increasingly vulnerable to the negative cost implications of poor power quality, arising from changes in their load characteristics and load sensitivities, and therefore increasingly implement power quality monitoring and mitigation solutions. This paper reviews several historical studies which investigate the cost implications of poor power quality for industry. These surveys are largely focused on outages, whilst the impact of poor power quality such as harmonics, short interruptions, voltage dips and swells, and transients is less well studied and understood. This paper examines the difficulties in quantifying the costs of poor power quality, and uses the chi-squared method to determine the consequences of power quality phenomena for industry, using a case study of over 40 manufacturing and data centre sites in Ireland.
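The paper's survey data are not reproduced here, but the mechanics of the chi-squared method it mentions can be sketched on a hypothetical contingency table of sector versus reported cost impact of power quality events.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical counts: rows = sector, columns = reported cost impact of PQ events.
#                    no cost  minor cost  major cost
observed = np.array([
    [12,  9,  4],   # manufacturing sites
    [ 3,  6, 10],   # data centres
])

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.3f}")
# A small p-value suggests cost impact is not independent of sector,
# i.e. some industries are disproportionately exposed to poor power quality.
```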
Abstract:
Numerous studies show that increasing species richness leads to higher ecosystem productivity. This effect is often attributed to more efficient partitioning of multiple resources in communities with higher numbers of competing species, indicating the role of resource supply and stoichiometry for biodiversity-ecosystem functioning relationships. Here, we merged theory on ecological stoichiometry with a framework of biodiversity-ecosystem functioning to understand how resource use translates into primary production. We applied a structural equation model to define patterns of diversity-productivity relationships with respect to available resources. Meta-analysis was used to summarize the findings across ecosystem types ranging from aquatic ecosystems to grasslands and forests. As hypothesized, resource supply increased realized productivity and richness, but we found significant differences between ecosystems and study types. Increased richness was associated with increased productivity, although this effect was not seen in experiments. More even communities had lower productivity, indicating that biomass production is often maintained by a few dominant species, and reduced dominance generally reduced ecosystem productivity. This synthesis, which integrates observational and experimental studies in a variety of ecosystems and geographical regions, exposes common patterns and differences in biodiversity-functioning relationships, and increases the mechanistic understanding of changes in ecosystem productivity.
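The structural equation model itself is not specified in the abstract; a minimal path-analysis sketch of the hypothesized chain (resource supply raises richness, and both feed productivity) can be estimated with ordinary least squares on simulated, standardized variables. The variable names and effect sizes below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Simulated standardized data following the hypothesized structure.
supply = rng.normal(size=n)                                  # resource supply
richness = 0.5 * supply + rng.normal(scale=0.8, size=n)      # supply raises richness
productivity = 0.4 * richness + 0.3 * supply + rng.normal(scale=0.7, size=n)

def ols(y, predictors):
    """Least-squares path coefficients (intercept dropped)."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

a = ols(richness, [supply])                   # supply -> richness
b, c = ols(productivity, [richness, supply])  # richness -> productivity, direct path
print(f"supply->richness: {a[0]:.2f}, richness->productivity: {b:.2f}, "
      f"direct supply->productivity: {c:.2f}")
print(f"indirect (mediated) effect of supply via richness: {a[0] * b:.2f}")
```

A full SEM adds latent variables and simultaneous fitting, but the decomposition into direct and mediated paths is the same idea used to relate resource supply, richness, evenness, and realized productivity.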
Abstract:
This article examines regulatory governance of the post-initial training market in the Netherlands. From an historical perspective on policy formation processes, it examines market formation in terms of social, economic, and cultural factors in the development of provision of, and demand for, post-initial training; the roles of stakeholders in the long-term construction of regulatory governance of the market; regulation of commercial and public providers; policy responses to market failure; and the tripartite division of responsibilities between the state, social partners, and commercial and publicly-funded providers. Historical description and analysis examine policy narratives of key stakeholders with reference to: a) the influence of societal stakeholders on regulatory decision-making; b) state regulation of the post-initial training market; c) public intervention regulating the market to prevent market failure; d) market deregulation, competition, employability and individual responsibility; and e) regulatory governance to prevent ‘allocative failure’ by the market, that is, non-delivery of post-initial training to specific target groups, particularly the low-qualified. Dominant policy narratives have resulted in limited state regulation of the supply side and a tripartite system of regulatory governance with the state, social partners and commercial providers as regulatory actors. Current policy discourses address interventions on the demand side to redistribute structures of opportunity throughout the life courses of individuals. Further empirical research from a comparative historical perspective is required to deepen contemporary understandings of regulatory governance of markets and the commodification of adult learning in knowledge societies and information economies. (DIPF/Orig.)
Abstract:
Currently, organizations plan their supply chains with conventional methods based on statistical models that look at the past and do not recognize recent advances. This study seeks to project these processes into the future. To do so, it draws on the Demand Driven methodology which, to change this situation, bases its theory on adapting the logistics chain to react to sales in real time, by organizing buffers, small stocks of inventory whose properties vary with the characteristics of the chain, so as to always guarantee stock availability at the lowest possible cost and quantity. The methodology is demonstrated through a company case study: the viability of the system to reduce the capital invested in inventories, freeing up capital for investment elsewhere; an expected improvement in service that translates into higher sales for the company; and cost reductions from carrying lower inventory levels. Through the business example, this study gives organizational leaders the tools they need to make decisions about structural changes in the way they carry out their sales and operations planning process, in pursuit of adaptation to the changing and demanding market we live in today, where consumers look for low cost, high quality, and ready availability. The study shows how revenues can be increased by 20% simply by improving service levels and on-time deliveries to customers.
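The Demand Driven (DDMRP) buffers described above are conventionally sized from average daily usage, decoupled lead time, and lead-time and variability factors; the sketch below applies the commonly published zone formulas to a made-up part, and is not taken from the case company's data.

```python
def ddmrp_buffer(adu, dlt, lead_time_factor, variability_factor, moq=0, order_cycle=0):
    """DDMRP buffer zones (in units), using the commonly published formulas.
    adu: average daily usage; dlt: decoupled lead time in days."""
    yellow = adu * dlt                              # demand expected over the lead time
    red_base = yellow * lead_time_factor
    red_safety = red_base * variability_factor
    red = red_base + red_safety                     # safety portion of the buffer
    green = max(moq, order_cycle * adu, red_base)   # typical reorder-quantity zone
    return {"red": red, "yellow": yellow, "green": green,
            "top_of_yellow": red + yellow,
            "top_of_green": red + yellow + green}

# Hypothetical SKU: 40 units/day, 10-day lead time, medium factors, MOQ of 150.
zones = ddmrp_buffer(adu=40, dlt=10, lead_time_factor=0.5,
                     variability_factor=0.5, moq=150)
print(zones)
# A replenishment order is triggered when the net flow position
# (on hand + on order - qualified demand) falls below top_of_yellow.
```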
Abstract:
The supply side of the food security engine is the way we farm. The current engine of conventional tillage farming is faltering and needs to be replaced. This presentation will address supply-side issues of agriculture to meet future agricultural demands for food and industry using the alternative no-till Conservation Agriculture (CA) paradigm (involving no-till farming with mulch soil cover and diversified cropping), which is able to raise productivity sustainably and efficiently, reduce inputs, regenerate degraded land, minimise soil erosion, and harness the flow of ecosystem services. CA is an ecosystems approach to farming capable of enhancing not only the economic and environmental performance of crop production and land management, but also of promoting a mindset change for producing ‘more from less’, the key attitude towards sustainable production intensification. CA is now spreading globally on all continents at an annual rate of 10 Mha and covers some 157 Mha of cropland. Today global agriculture produces enough food to feed three times the current population of 7.21 billion. In 1976, when the world population was 4.15 billion, world food production far exceeded the amount necessary to feed that population. However, our urban and industrialised lifestyle leads to wastage of some 30%-40% of food, as well as the waste of enormous amounts of energy and protein in transforming crop-based food into animal-derived food; we have a higher proportion of people than ever before who are obese; we continue to degrade our ecosystems, including much of our agricultural land, of which some 400 Mha is reported to be abandoned due to severe soil and land degradation; and yields of staple cereals appear to have stagnated. These are signs of unsustainability at the structural level in society, and it is at the structural level, for both the supply side and the demand side, that we need transformed mindsets about production, consumption and distribution. CA not only provides the possibility of increased crop yields for the low-input smallholder farmer, it also provides a pro-poor rural and agricultural development model to support agricultural intensification in an affordable manner. For the high-output farmer, it offers greater efficiency (productivity) and profit, resilience and stewardship. For farming anywhere, it addresses the root causes of agricultural land degradation, sub-optimal ecological crop and land potentials or yield ceilings, and poor crop phenotypic expressions or yield gaps. As national economies expand and diversify, more people become integrated into the economy and are able to access food. However, for those whose livelihoods continue to depend on agriculture to feed themselves and the rest of the world population, the challenge is for agriculture to produce the needed food and raw materials for industry with minimum harm to the environment and society, and to produce them with maximum efficiency and resilience against abiotic and biotic stresses, including those arising from climate change. There is growing empirical and scientific evidence worldwide that future global supplies of food and agricultural raw materials can be assured sustainably, at much lower environmental and economic cost, by shifting away from conventional tillage-based food and agriculture systems to no-till CA-based food and agriculture systems. Achieving this goal will require effective national and global policy and institutional support (including research and education).
Abstract:
Principal Topic
The study of the origin and characteristics of venture ideas, or 'opportunities' as they are often called, and their contextual fit are key research goals in entrepreneurship (Davidsson, 2004). For the purpose of this study we define a venture idea as 'the core ideas of an entrepreneur about what to sell, how to sell, whom to sell to, and how the entrepreneur acquires or produces the product or service which he/she sells'. When realized, the venture idea becomes a 'business model'. Even though venture ideas are central to entrepreneurship, their characteristics and their effect on the entrepreneurial process remain poorly understood. According to Schumpeter (1934), entrepreneurs can creatively destruct existing market conditions by introducing new products/services, new production methods, new markets, and new sources of supply, and by reorganizing industries. The introduction, development and use of new ideas is generally called 'innovation' (Damanpour & Wischnevsky, 2006), and 'newness' is a property of innovation; it is a relative term referring to the degree of unfamiliarity of a venture idea either to a firm or to a market. Schumpeter (1934) discusses five different types of newness, indicating that the type of newness is an important issue. More recently, Shane and Venkataraman (2000) called for research taking into consideration not only the variation in characteristics of individuals but also the heterogeneity of venture ideas. Empirically, Samuelson (2001, 2004) investigated process differences between innovative venture ideas and imitative venture ideas; however, he used only a crude dichotomy regarding venture idea newness. According to Davidsson (2004), entrepreneurs can introduce new economic activities ranging from pure imitation to being new to the entire world market, highlighting that newness is a matter of degree. Dahlqvist (2007) examined venture idea newness and made an attempt at a more refined assessment of the degree and type of newness of venture ideas. Building on these predecessors, our study refines the assessment of venture idea newness by measuring the degree of venture idea newness (new to the world, new to the market, substantially improved while not entirely new, and imitation) for four different types of newness (product/service, method of production, method of promotion, and customer/target market). We then relate type and degree of newness to the pace of progress in the nascent venturing process. We hypothesize that newness will slow down the business creation process. Shane and Venkataraman (2000) introduced entrepreneurship as the nexus of opportunities and individuals, and in line with this some scholars have investigated the relationship between individuals and opportunities. For example, Shane (2000) investigates the relatedness between individuals' prior knowledge and the identification of opportunities. Shepherd and DeTienne (2005) identified a positive relationship between potential financial reward and the identification of innovative venture ideas. Sarasvathy's 'effectuation theory' assumes a high degree of relatedness with founders' skills, knowledge and resources in the selection of venture ideas. However, the entrepreneurship literature contains few analyses of how this relatedness affects the progress of the venturing process. Therefore, we assess venture ideas' degree of relatedness to prior knowledge and resources, and relate these, too, to the pace of progress in the nascent venturing process. We hypothesize that relatedness will increase the speed of business creation.

Methodology
For this study we compare early findings from data collected through the Comprehensive Australian Study of Entrepreneurial Emergence (CAUSEE). CAUSEE is a longitudinal study whose primary objective is to uncover the factors that initiate, hinder and facilitate the process of emergence and development of new firms. Data were collected from a representative sample of some 30,000 households in Australia using random digit dialing (RDD) telephone survey interviews. The first round of data collection identified 600 entrepreneurs who are currently involved in the business start-up process. The unit of analysis is the emerging venture, with the respondent acting as its spokesperson. The study methodology allows researchers to identify ventures in the early stages of creation and to follow their progression longitudinally over repeated data collection periods. Our measures of newness build on previous work by Dahlqvist (2007); our adapted version was developed over two pre-tests with about 80 participants in each. The measures of relatedness were developed through the two rounds of pre-testing. The pace of progress in the venture creation process is assessed with the help of time-stamped gestation activities, a technique developed in the Panel Study of Entrepreneurial Dynamics (PSED).

Results and Implications
We hypothesized that venture idea newness slows down the venturing process whereas relatedness facilitates it. Results for 600 nascent entrepreneurs in Australia indicate marginal support for the hypothesis that relatedness assists gestation progress. Newness is significant, but with the opposite sign to that hypothesized. The results offer a number of implications for researchers, business founders, consultants and policy makers in terms of better knowledge of the venture creation process.
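As a rough illustration of the kind of test these hypotheses imply (not the study's actual estimation), one could regress the pace of gestation progress on a newness score averaged over the four types and on a relatedness score; all data below are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 600  # nascent ventures, matching the size of the CAUSEE first wave
newness = rng.integers(0, 4, size=(n, 4)).mean(axis=1)   # mean degree over 4 types (0 = imitation, 3 = new to world)
relatedness = rng.uniform(0, 1, size=n)                  # relatedness to prior knowledge/resources
# Pace of progress: completed gestation activities per month (invented relationship).
pace = 1.0 + 0.3 * newness + 0.5 * relatedness + rng.normal(scale=0.8, size=n)

X = np.column_stack([np.ones(n), newness, relatedness])
coef, *_ = np.linalg.lstsq(X, pace, rcond=None)
print(f"intercept={coef[0]:.2f}, newness={coef[1]:.2f}, relatedness={coef[2]:.2f}")
# A significantly positive newness coefficient would mirror the reported result
# that newness sped up, rather than slowed, the venture creation process.
```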
Abstract:
The healing process for bone fractures is sensitive to mechanical stability and blood supply at the fracture site. Most currently available mechanobiological algorithms of bone healing are based solely on mechanical stimuli, while the explicit analysis of revascularization and its influence on the healing process has not been thoroughly investigated in the literature. In this paper, revascularization was described by two separate processes: angiogenesis and nutrition supply. Mathematical models for angiogenesis and nutrition supply were proposed and integrated into an existing fuzzy algorithm of fracture healing. The computational algorithm of fracture healing, consisting of stress analysis, analyses of angiogenesis and nutrient supply, and tissue differentiation, was tested on and compared with previously published animal experimental results. The simulation results showed that for small and medium-sized fracture gaps the nutrient supply is sufficient for bone healing, whereas for a large fracture gap non-union may be induced either by deficient nutrient supply or by inadequate mechanical conditions. The comparisons with experimental results demonstrated that the improved computational algorithm is able to simulate a broad spectrum of fracture healing cases and to predict and explain delayed unions and non-unions induced by large gap sizes and different mechanical conditions. The new algorithm will allow the simulation of more realistic clinical fracture healing cases with various fracture gaps and geometries, and may be helpful for optimising implants and methods of fracture fixation.
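The paper's nutrient-supply model is not given in the abstract; as an illustration of why gap size matters, a simple steady-state one-dimensional diffusion-with-consumption sketch shows the nutrient concentration at the centre of the gap falling as the gap widens. All constants and the viability threshold are hypothetical.

```python
# 1D steady-state diffusion with uniform consumption and supply at both ends:
#   D * c''(x) = k,  c(0) = c(L) = c0  ->  c(x) = c0 - (k / (2*D)) * x * (L - x)
# The minimum concentration sits at the gap centre, x = L/2.
D = 1.0e-3   # effective nutrient diffusivity in callus tissue (mm^2/s), hypothetical
k = 5.0e-5   # uniform consumption rate by cells (1/s, normalized), hypothetical
c0 = 1.0     # normalized nutrient concentration at the vascularized bone ends

def centre_concentration(gap_mm):
    half = gap_mm / 2.0
    return c0 - (k / (2 * D)) * half * half

for gap in [1, 2, 4, 8, 16]:
    c = centre_concentration(gap)
    status = "adequate" if c > 0.5 else "deficient"   # arbitrary viability threshold
    print(f"gap = {gap:2d} mm -> centre concentration = {max(c, 0):.2f} ({status})")
```

Because the concentration deficit grows with the square of the gap width, a gap only a few times larger can tip the centre of the callus from adequately supplied to starved, which is the qualitative behaviour behind the gap-size-dependent non-unions reported above.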
Abstract:
Biomechanical and biophysical principles can be applied to study biological structures in their modern or fossil form. Bone is an important tissue in paleontological studies, as it is a commonly preserved element in most fossil vertebrates and often allows its microstructures, such as lacunae and canaliculi, to be studied in detail. In this context, the principles of fluid mechanics and scaling laws have previously been applied to enhance the understanding of bone microarchitecture and its implications for the evolution of hydraulic structures to transport fluid. It has been shown that the microstructure of bone has evolved to maintain efficient transport between the nutrient supply and cells, the living components of the tissue. Application of the principle of minimal expenditure of energy to this analysis shows that a path distance comprising five or six lamellar regions represents an effective limit for fluid and solute transport between the nutrient supply and cells; beyond this threshold, hydraulic resistance in the network increases and additional energy expenditure is necessary for further transport. This suggests an optimization of the size of bone's building blocks (such as osteon or trabecular thickness) to meet metabolic demand with minimal expenditure of energy. This biomechanical aspect of bone microstructure is corroborated by the ratio of osteon to Haversian canal diameters and the scaling constants of the several mammals considered in this study. This aspect of vertebrate bone microstructure and physiology may provide a basis for understanding the form-function relationship in both extinct and extant taxa.
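The series-resistance argument behind the five-to-six lamella limit can be illustrated with a Hagen-Poiseuille sketch: each additional lamellar region crossed adds hydraulic resistance, so the pressure (and hence energy) needed to sustain a given flow grows with radial path length. The dimensions below are placeholders rather than measured values.

```python
import math

# Illustrative Hagen-Poiseuille sketch of how hydraulic resistance accumulates
# along the canalicular path, one lamellar region at a time.
mu = 1.0e-3               # fluid viscosity (Pa*s), placeholder
canaliculus_r = 0.1e-6    # effective canalicular radius (m), placeholder
lamella_width = 5.0e-6    # radial thickness of one lamellar region (m), placeholder
target_flow = 1.0e-18     # solute-carrying flow to be sustained (m^3/s), placeholder

def poiseuille_resistance(length, radius, viscosity=mu):
    """Hydraulic resistance of a cylindrical channel: R = 8*mu*L / (pi * r**4)."""
    return 8 * viscosity * length / (math.pi * radius ** 4)

R_per_lamella = poiseuille_resistance(lamella_width, canaliculus_r)
for n in range(1, 9):
    # Regions crossed in series: resistance, and hence the pressure (energy)
    # needed to sustain the same flow, grows with the path length.
    pressure = n * R_per_lamella * target_flow
    print(f"{n} lamellar regions: required driving pressure ~ {pressure:8.1f} Pa")
```

The linear growth of required pressure with path length is what makes a bounded osteon size energetically favourable, consistent with the osteon-to-Haversian-canal ratio discussed above.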
Abstract:
World economies increasingly demand reliable and economical power supply and distribution. To achieve this aim the majority of power systems are becoming interconnected, with several power utilities supplying one large network. One problem that occurs in a large interconnected power system is the regular occurrence of system disturbances which can result in the creation of intra-area oscillating modes. These modes can be regarded as the transient responses of the power system to excitation, which are generally characterised as decaying sinusoids. For a power system operating ideally, these transient responses would have a “ring-down” time of 10-15 seconds. Sometimes equipment failures disturb the ideal operation of power systems and oscillating modes with ring-down times greater than 15 seconds arise. The larger settling times associated with such “poorly damped” modes cause substantial power flows between generation nodes, resulting in significant physical stresses on the power distribution system. If these modes are not just poorly damped but “negatively damped”, catastrophic failures of the system can occur. To ensure system stability and security of large power systems, the potentially dangerous oscillating modes generated from disturbances (such as equipment failure) must be quickly identified. The power utility must then apply appropriate damping control strategies.

In power system monitoring there exist two facets of critical interest. The first is the estimation of modal parameters for a power system in normal, stable operation. The second is the rapid detection of any substantial changes to this normal, stable operation (because of equipment breakdown, for example). Most work to date has concentrated on the first of these two facets, i.e. on modal parameter estimation. Numerous modal parameter estimation techniques have been proposed and implemented, but all have limitations [1-13]. One of the key limitations of all existing parameter estimation methods is the fact that they require very long data records to provide accurate parameter estimates. This is a particularly significant problem after a sudden detrimental change in damping: one simply cannot afford to wait long enough to collect the large amounts of data required by existing parameter estimators. Motivated by this gap in the current body of knowledge and practice, the research reported in this thesis focuses heavily on rapid detection of changes (i.e. on the second facet mentioned above). This thesis reports on a number of new algorithms which can rapidly flag whether or not there has been a detrimental change to a stable operating system. It will be seen that the new algorithms enable sudden modal changes to be detected within quite short time frames (typically about 1 minute), using data from power systems in normal operation. The new methods reported in this thesis are summarised below.

The Energy Based Detector (EBD): The rationale for this method is that the modal disturbance energy is greater for lightly damped modes than it is for heavily damped modes (because the latter decay more rapidly). Sudden changes in modal energy, then, imply sudden changes in modal damping. Because the method relies on data from power systems in normal operation, the modal disturbances are random. Accordingly, the disturbance energy is modelled as a random process (with the parameters of the model being determined from the power system under consideration). A threshold is then set based on the statistical model. The energy method is very simple to implement and is computationally efficient. It is, however, only able to determine whether or not a sudden modal deterioration has occurred; it cannot identify which mode has deteriorated. For this reason the method is particularly well suited to smaller interconnected power systems that involve only a single mode.

Optimal Individual Mode Detector (OIMD): As discussed in the previous paragraph, the energy detector can only determine whether or not a change has occurred; it cannot flag which mode is responsible for the deterioration. The OIMD seeks to address this shortcoming. It uses optimal detection theory to test for sudden changes in individual modes. In practice, one can have an OIMD operating for all modes within a system, so that changes in any of the modes can be detected. Like the energy detector, the OIMD is based on a statistical model and a subsequently derived threshold test.

The Kalman Innovation Detector (KID): This detector is an alternative to the OIMD. Unlike the OIMD, however, it does not explicitly monitor individual modes. Rather it relies on a key property of a Kalman filter, namely that the Kalman innovation (the difference between the estimated and observed outputs) is white as long as the Kalman filter model is valid. A Kalman filter model is set up to represent a particular power system. If some event in the power system (such as equipment failure) causes a sudden change to the power system, the Kalman model will no longer be valid and the innovation will no longer be white. Furthermore, if there is a detrimental system change, the innovation spectrum will display strong peaks at frequency locations associated with the changes. Hence the innovation spectrum can be monitored both to set off an “alarm” when a change occurs and to identify which modal frequency has given rise to the change. The threshold for alarming is based on the simple chi-squared PDF for a normalised white noise spectrum [14, 15]. While the method can identify the mode which has deteriorated, it does not necessarily indicate whether there has been a frequency or damping change. The PPM discussed next can monitor frequency changes and so can provide some discrimination in this regard.

The Polynomial Phase Method (PPM): In [16] the cubic phase (CP) function was introduced as a tool for revealing frequency-related spectral changes. This thesis extends the cubic phase function to a generalised class of polynomial phase functions which can reveal frequency-related spectral changes in power systems. A statistical analysis of the technique is performed. When applied to power system analysis, the PPM can provide knowledge of sudden shifts in frequency through both the new frequency estimate and the polynomial phase coefficient information. This knowledge can then be cross-referenced with other detection methods to provide improved detection benchmarks.
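A minimal sketch of the Kalman Innovation Detector idea (not the thesis's implementation) is given below: a single decaying-sinusoid mode is simulated as an AR(2) process, a Kalman filter is built on the nominal well-damped model, and the normalised spectrum of the recent innovations is checked against a chi-squared-style threshold. A sudden loss of damping makes the innovation non-white, producing a spectral peak near the modal frequency. Sample rate, modal frequency, and damping values are made up.

```python
import numpy as np

rng = np.random.default_rng(2)
fs, f0 = 10.0, 0.5     # sample rate (Hz) and modal frequency (Hz), made up

def ar2_coeffs(freq, damping):
    """AR(2) coefficients of a decaying-sinusoid mode driven by white noise."""
    r = np.exp(-damping / fs)                  # pole radius set by damping (1/s)
    return 2 * r * np.cos(2 * np.pi * freq / fs), -r * r

# Simulate normal operation, then a sudden loss of damping halfway through.
n = 2000
y = np.zeros(n)
for t in range(2, n):
    a1, a2 = ar2_coeffs(f0, 0.3 if t < n // 2 else 0.02)
    y[t] = a1 * y[t - 1] + a2 * y[t - 2] + rng.normal()

# Kalman filter built on the *nominal* (well-damped) model of the system.
a1, a2 = ar2_coeffs(f0, 0.3)
F = np.array([[a1, a2], [1.0, 0.0]]); H = np.array([[1.0, 0.0]])
Q = np.diag([1.0, 0.0]); R = 1e-6
x = np.zeros((2, 1)); P = np.eye(2)
innov = np.zeros(n)
for t in range(n):
    x = F @ x; P = F @ P @ F.T + Q                      # predict
    innov[t] = y[t] - (H @ x).item()                    # innovation
    S = (H @ P @ H.T).item() + R
    K = P @ H.T / S
    x = x + K * innov[t]; P = (np.eye(2) - K @ H) @ P   # update

# Whiteness check: normalised periodogram of recent innovations against a
# rough chi-squared-style bound for unit-mean white-noise bins.
window = innov[-512:] - innov[-512:].mean()
spec = np.abs(np.fft.rfft(window)) ** 2
spec /= spec.mean()
freqs = np.fft.rfftfreq(len(window), d=1 / fs)
alarm_freqs = freqs[spec > 8.0]
print("innovation spectrum peaks (Hz):", np.round(alarm_freqs, 2))
```

Flagged frequencies near the modal frequency indicate both that a change has occurred and which mode is implicated, which is the behaviour the KID exploits.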
Abstract:
In recent years, multilevel converters have become more popular and attractive than traditional converters in high-voltage and high-power applications. Multilevel converters are particularly suitable for harmonic reduction in high-power applications where semiconductor devices are not able to operate at high switching frequencies, or in high-voltage applications where multilevel converters reduce the need to connect devices in series to achieve high switch voltage ratings. This thesis investigates two aspects of multilevel converters: structure and control. The first part of this thesis focuses on the inductance between a DC supply and the inverter components in order to minimise loop inductance, which causes overvoltages and stored-energy losses during switching. Three-dimensional finite element simulations and experimental tests have been carried out for all sections to verify the theoretical developments. The major contributions of this section of the thesis are as follows.

The use of large-area thin conductor sheets with a rectangular cross section separated by dielectric sheets (a planar busbar), instead of circular cross-section wires, contributes to a reduction of the stray inductance. A number of approximate equations exist for calculating the inductance of a rectangular conductor, but they assume that the current density is uniform throughout the conductor. This assumption is not valid for an inverter with a point injection of current. A mathematical analysis of a planar busbar has therefore been performed at low and high frequencies, and the inductance and resistance values between two points of the planar busbar have been determined.

A new physical structure for a voltage source inverter with a symmetrical planar busbar, called the Reduced Layer Planar Busbar, is proposed in this thesis based on current point injection theory. This new type of planar busbar minimises the variation in stray inductance for different switching states. The reduced layer planar busbar is a new innovation in planar busbars for high-power inverters, with minimum separation between busbars, optimum stray inductance and improved thermal performance. This type of planar busbar is suitable for high-power inverters where the voltage source is supported by several capacitors in parallel in order to provide a low-ripple DC voltage during operation.

A two-layer planar busbar with different materials has been analysed theoretically in order to determine the resistance of the busbars during switching. Increasing the resistance of the planar busbar increases the damping of the resonance between stray inductance and capacitance and affects the behaviour of the current loop during switching. The aim of this section is to use the skin effect to increase the resistance of the planar busbar at high frequencies (during switching) without significantly increasing its resistance at low frequency (50 Hz). This contribution shows a novel busbar structure suitable for high-power applications where high resistance is required at switching times.

In multilevel converters there are different loop inductances between busbars and power switches associated with different switching states. The aim of this research is to consider all combinations of the switching states for each multilevel converter topology and identify the loop inductance for each switching state. Results show that the physical layout of the busbars is very important for minimisation of the loop inductance at each switching state. Novel symmetrical busbar structures are proposed for multilevel converters with diode-clamp and flying-capacitor topologies which minimise the worst-case stray inductance across switching states. Overshoot voltages and thermal problems are considered for each topology to optimise the planar busbar structure.

In the second part of the thesis, closed-loop current control techniques are investigated for single- and three-phase multilevel converters. The aims of this section are to investigate and propose suitable current controllers, such as hysteresis and predictive techniques, for multilevel converters with low harmonic distortion and switching losses. This section of the thesis can be classified into three parts as follows.

An optimum space vector modulation (SVM) technique for a three-phase voltage source inverter based on a minimum-loss strategy is proposed. One of the degrees of freedom for optimisation of the space vector modulation is the selection of the zero vectors in the switching sequence. This new method improves the number of switching transitions per cycle for a given level of distortion, as the zero vector does not alternate between sectors. The harmonic spectrum and weighted total harmonic distortion for these strategies are compared, and results show up to 7% weighted total harmonic distortion improvement over the previous minimum-loss strategy. The SVM concept provides a very convenient representation of a set of three-phase voltages or currents for current control techniques.

A new hysteresis current control technique for a single-phase multilevel converter with flying-capacitor topology is developed. This technique is based on magnitude and time errors to optimise the level changes of the converter output voltage. The method also considers how to correct unbalanced capacitor voltages using voltage vectors in order to minimise switching losses. The logic control must handle a large number of switches, and a Programmable Logic Device (PLD) is a natural implementation for the state transition description. Simulation and experimental results describe and verify the current control technique for the converter.

A novel predictive current control technique is proposed for a three-phase multilevel converter, which controls the capacitor voltages and load current with minimum current ripple and switching losses. The advantage of this contribution is that the technique can be applied to more voltage levels without significantly changing the control circuit. A three-phase five-level inverter with a purely inductive load has been implemented to track three-phase reference currents using analogue circuits and a programmable logic device.
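To make the hysteresis idea concrete, the sketch below implements a simplified magnitude-error hysteresis current controller for a single-phase five-level inverter driving an R-L load; the level-selection rules, load parameters, and bands are illustrative assumptions, not the controller developed in the thesis.

```python
import numpy as np

# Minimal sketch of magnitude-error hysteresis current control for a single-phase
# five-level inverter feeding an R-L load. Parameters and level-selection rules
# are illustrative only.
levels = np.array([-200.0, -100.0, 0.0, 100.0, 200.0])   # available output voltages (V)
R, L = 2.0, 10e-3            # load resistance (ohm) and inductance (H)
fs, f_ref = 100_000, 50.0    # controller update rate (Hz), reference frequency (Hz)
band = 0.25                  # inner hysteresis band (A); the outer band is 2*band

dt = 1.0 / fs
t = np.arange(0.0, 0.04, dt)
i_ref = 10.0 * np.sin(2 * np.pi * f_ref * t)   # 10 A peak sinusoidal reference

i, idx = 0.0, 2              # load current and index of the selected level (start at 0 V)
abs_err = []
for k in range(len(t)):
    err = i_ref[k] - i
    # Small errors nudge the level by one step; large errors jump two steps,
    # mimicking a magnitude-error multilevel hysteresis scheme.
    if err > 2 * band:
        idx = min(idx + 2, len(levels) - 1)
    elif err > band:
        idx = min(idx + 1, len(levels) - 1)
    elif err < -2 * band:
        idx = max(idx - 2, 0)
    elif err < -band:
        idx = max(idx - 1, 0)
    v = levels[idx]
    i += dt * (v - R * i) / L            # forward-Euler R-L load dynamics
    abs_err.append(abs(i_ref[k] - i))

print(f"mean current tracking error ~ {np.mean(abs_err):.3f} A over {len(t)} steps")
```

Stepping by only one level for small errors keeps voltage transitions (and hence switching losses) low, while the two-step jump handles large errors quickly; the thesis's controller adds capacitor-voltage balancing on top of this basic idea.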