874 results for Electricity Demand, Causality, Cointegration Analysis
Abstract:
In China in particular, large planned special events (e.g., the Olympic Games) are viewed as great opportunities for economic development. Large numbers of visitors from other countries and provinces can be expected to attend such events, bringing in significant tourism dollars. However, as a direct result of such events, the transportation system is likely to face great challenges as travel demand increases beyond its original design capacity. Special events in central business districts (CBDs) in particular will further exacerbate traffic congestion on surrounding freeway segments near event locations. To manage the transportation system, it is necessary to plan and prepare for such special events, which requires prediction of traffic conditions during the events. This dissertation presents a set of novel prototype models to forecast traffic volumes along freeway segments during special events. Almost all research to date has focused solely on traffic management techniques under special event conditions; at most, these studies provided qualitative analyses and lacked an easy-to-implement method for quantitative analysis. This dissertation presents a systematic approach, based separately on univariate and multivariate time series models with intervention analysis, for forecasting traffic volumes on freeway segments near an event location. A case study was carried out, analyzing and modelling historical time series data collected from loop-detector traffic monitoring stations on the Second and Third Ring Roads near Beijing Workers Stadium. The proposed time series models with expected intervention are found to provide reasonably accurate and efficient forecasts of traffic pattern changes, and they may be used to support transportation planning and management for special events.
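As a hedged illustration of the intervention-analysis idea described above (not the dissertation's actual models or data), the sketch below fits a seasonal ARIMA model with a planned-event indicator as an intervention regressor using statsmodels; the series, dates, and event window are synthetic.

```python
# Minimal sketch: univariate time series forecast with an intervention regressor.
# All data, dates, and parameters are illustrative, not the dissertation's.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
hours = pd.date_range("2008-08-01", periods=24 * 28, freq="H")
base = 1000 + 300 * np.sin(2 * np.pi * hours.hour.values / 24)       # daily traffic cycle
event = ((hours >= "2008-08-08 17:00") & (hours < "2008-08-08 23:00")).astype(int)
volume = base + 400 * event + rng.normal(0, 50, len(hours))          # event adds demand

train = pd.Series(volume[:-24], index=hours[:-24])
model = SARIMAX(train, exog=event[:-24], order=(1, 0, 1),
                seasonal_order=(1, 1, 1, 24)).fit(disp=False)

# Forecast the final day, supplying the planned-event indicator as a known input.
forecast = model.forecast(steps=24, exog=event[-24:])
print(forecast.head())
```

The multivariate case described in the abstract would extend this pattern by adding, for example, volumes from neighbouring detector stations as additional regressors.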
Abstract:
Smokeless powder additives are usually detected by extraction from post-blast residues or unburned powder particles, followed by analysis using chromatographic techniques. This work presents the first comprehensive study of the detection of the volatile and semi-volatile additives of smokeless powders using solid phase microextraction (SPME) as a sampling and pre-concentration technique. Seventy smokeless powders were studied using laboratory-based chromatography techniques and a field-deployable ion mobility spectrometer (IMS). The detection of diphenylamine, ethyl and methyl centralite, 2,4-dinitrotoluene, and diethyl and dibutyl phthalate by IMS, to associate the presence of these compounds with smokeless powders, is also reported for the first time. A previously reported SPME-IMS analytical approach facilitates rapid sub-nanogram detection of the vapor-phase components of smokeless powders. A mass calibration procedure for the analytical techniques used in this study was developed. Precise and accurate mass delivery of analytes in picoliter volumes was achieved using a drop-on-demand inkjet printing method. Absolute mass detection limits determined using this method for the various analytes of interest ranged from 0.03 to 0.8 ng for the GC-MS and from 0.03 to 2 ng for the IMS. Mass response graphs generated for the different detection techniques help determine the mass extracted from the headspace of each smokeless powder. The analyte mass present in the vapor phase was sufficient for a SPME fiber to extract most analytes at amounts above the detection limits of both the chromatographic techniques and the ion mobility spectrometer. Analysis of the large number of smokeless powders revealed that diphenylamine was present in the headspace of 96% of the powders. Ethyl centralite was detected in 47% of the powders, and 8% of the powders had methyl centralite available for detection from headspace sampling by SPME. Nitroglycerin was the dominant peak present in the headspace of the double-based powders. 2,4-Dinitrotoluene, another important headspace component, was detected in 44% of the powders. The powders therefore have more than one headspace component, and the detection of a combination of these compounds by SPME-IMS allows an association with the presence of smokeless powders.
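The mass-calibration step can be illustrated with a generic linear mass-response fit and a 3-sigma detection limit; the masses and responses below are invented for illustration and are not the study's measurements.

```python
# Sketch of a mass-response calibration curve and a 3-sigma detection limit,
# relating instrument response to absolute analyte mass (synthetic data only).
import numpy as np

mass_ng = np.array([0.05, 0.1, 0.2, 0.5, 1.0, 2.0])     # delivered mass (ng), illustrative
response = np.array([120, 230, 470, 1180, 2350, 4700])  # detector peak area, illustrative

slope, intercept = np.polyfit(mass_ng, response, 1)      # linear mass-response fit
residuals = response - (slope * mass_ng + intercept)
sigma = residuals.std(ddof=2)                            # standard error of the fit

lod_ng = 3 * sigma / slope                               # 3-sigma limit of detection
print(f"sensitivity = {slope:.1f} counts/ng, LOD = {lod_ng:.3f} ng")

# Invert the calibration to estimate the mass extracted from a headspace sample.
unknown_response = 800.0
print(f"estimated extracted mass = {(unknown_response - intercept) / slope:.2f} ng")
```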
Abstract:
The purpose of this study is to produce a model to be used by state regulating agencies to assess demand for subacute care. In accomplishing this goal, the study refines the definition of subacute care, demonstrates a method for bed-need assessment, and measures the effectiveness of this new level of care. This was the largest study of subacute care to date. Research focused on 19 subacute units in 16 states, each of which provides high-intensity rehabilitative and/or restorative care carried out in a high-tech unit. Each of the facilities was based in a nursing home but utilized separate staff, equipment, and services. Because these facilities are under local control, it was possible to study regional differences in subacute care demand. Using these data, a model for predicting demand for subacute care services was created, building on earlier models submitted by John Whitman for the American Hospital Association and by Robin E. MacStravic. The Broderick model uses the "bootstrapping" method and takes advantage of modern information technology: computers and software, business and government databases, publicly available databases from providers or commercial vendors, professional organizations, and other information sources. Using these newly available sources of information, the new model addresses the problems and needs of health care planners as they approach the challenges of the 21st century.
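The abstract does not detail the Broderick model's computations, so the following is only a generic sketch of how a bootstrapped bed-need estimate might look; the census data and the 95th-percentile coverage rule are assumptions made for illustration.

```python
# Generic bootstrap sketch of a bed-need estimate (not the Broderick model itself):
# resample observed daily subacute censuses to put an interval around required capacity.
import numpy as np

rng = np.random.default_rng(42)
daily_census = rng.poisson(lam=18, size=365)          # synthetic daily occupied-bed counts

def beds_needed(census, percentile=95):
    """Beds required to cover the given percentile of daily demand."""
    return int(np.ceil(np.percentile(census, percentile)))

boot = [beds_needed(rng.choice(daily_census, size=len(daily_census), replace=True))
        for _ in range(2000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"point estimate: {beds_needed(daily_census)} beds, 95% CI: {lo:.0f}-{hi:.0f}")
```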
Abstract:
In the oil industry, natural gas is a vital component of the world energy supply and an important source of hydrocarbons. It is one of the cleanest, safest, and most relevant of all energy sources and helps to meet the world's growing demand for clean energy. With the growing share of natural gas in Brazil's energy matrix, the main purpose of its use has been the supply of electricity through thermal power generation. In the current production process, as in a Natural Gas Processing Unit (NGPU), natural gas passes through various separation units aimed at producing liquefied natural gas and fuel gas. The latter must be specified to meet thermal machine requirements. In the case of remote wells, absorption of the heavy components aims to bring the gas to fuel-gas specification and is thereby an alternative for expanding the energy matrix. Currently, due to the high demand for this raw gas, research and development efforts aimed at conditioning natural gas are under way. Conventional methods employed today, such as physical absorption, show good results. The objective of this dissertation is to evaluate the removal of heavy components of natural gas by absorption. In this research, octyl alcohol (1-octanol) was used as the absorbent. The influence of temperature (5 and 40 °C) and flowrate (25 and 50 ml/min) on the absorption process was studied. Absorption capacity, expressed by the amount absorbed, and kinetic parameters, expressed by the mass transfer coefficient, were evaluated. As expected from the literature, it was observed that absorption of the heavy hydrocarbon fraction is favored by lowering the temperature. Moreover, both temperature and flowrate favor mass transfer (a kinetic effect). The absorption kinetics for removal of heavy components was monitored by chromatographic analysis, and the experimental results demonstrated a high percentage of recovery of heavy components. Furthermore, the use of octyl alcohol as absorbent proved feasible for the requested separation process.
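A hedged sketch of how a mass-transfer coefficient can be extracted from absorption-kinetics data, assuming a simple lumped first-order uptake model; the data points and units are placeholders, not the dissertation's measurements.

```python
# Sketch: estimate an overall mass-transfer coefficient from absorption-kinetics data
# by fitting a lumped first-order uptake model q(t) = q_eq * (1 - exp(-k * t)).
import numpy as np
from scipy.optimize import curve_fit

t_min = np.array([0, 5, 10, 20, 40, 60, 90, 120])           # time (min), illustrative
q = np.array([0.0, 0.9, 1.6, 2.6, 3.5, 3.9, 4.1, 4.2])      # amount absorbed (arbitrary units)

def uptake(t, q_eq, k):
    return q_eq * (1.0 - np.exp(-k * t))

(q_eq, k), _ = curve_fit(uptake, t_min, q, p0=[4.0, 0.05])
print(f"equilibrium uptake = {q_eq:.2f}, mass-transfer coefficient k = {k:.3f} 1/min")
```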
Abstract:
The maintenance and evolution of software systems has become a highly critical task over recent years due to the diversity of and high demand for functionalities, devices, and users. Understanding and analyzing how new changes impact the quality attributes of the architecture of such systems is an essential prerequisite for preventing the deterioration of their quality during their evolution. This thesis proposes an automated approach for analyzing variation in the performance quality attribute in terms of execution time (response time). It is implemented by a framework that adopts dynamic analysis and software repository mining techniques to provide an automated way of revealing potential sources of performance variation (commits and issues) in scenarios during the evolution of software systems. The approach defines four phases: (i) preparation, in which the scenarios are chosen and the target releases are prepared; (ii) dynamic analysis, in which the performance of scenarios and methods is determined by computing their execution times; (iii) variation analysis, in which the dynamic analysis results are processed and compared across releases; and (iv) repository mining, in which issues and commits associated with the detected performance variation are identified. Empirical studies were carried out to evaluate the approach from different perspectives. An exploratory study analyzed the feasibility of applying the approach to systems from different domains in order to automatically identify source code elements with performance variation and the changes that affected those elements during an evolution. This study analyzed three systems: (i) SIGAA, a web system for academic management; (ii) ArgoUML, a UML modeling tool; and (iii) Netty, a framework for network applications. Another study performed an evolutionary analysis by applying the approach to multiple releases of Netty and of the web frameworks Wicket and Jetty. In this study, 21 releases (seven of each system) were analyzed, totaling 57 scenarios. In summary, 14 scenarios with significant performance variation were found for Netty, 13 for Wicket, and 9 for Jetty. Additionally, feedback was obtained from eight developers of these systems through an online form. Finally, in the last study, a performance regression model was developed to indicate which commit properties are most likely to cause performance degradation. In total, 997 commits were mined, of which 103 were retrieved from degraded source code elements and 19 from optimized ones, while 875 had no impact on execution time. The number of days before the release and the day of the week proved to be the most relevant variables of performance-degrading commits in our model. The area under the receiver operating characteristic (ROC) curve of the regression model is 60%, which means that using the model to decide whether a commit will cause degradation is 10% better than a random decision.
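A minimal sketch of the variation-analysis phase (iii), under the assumption that a scenario's execution times are compared between two releases with a non-parametric test; the timing samples and the 5% significance threshold are illustrative, not taken from the thesis.

```python
# Sketch of the variation-analysis phase: compare a scenario's execution-time samples
# between two releases and flag a statistically significant performance variation.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
release_a = rng.normal(loc=120.0, scale=8.0, size=30)   # scenario times (ms), release N
release_b = rng.normal(loc=135.0, scale=9.0, size=30)   # scenario times (ms), release N+1

stat, p_value = mannwhitneyu(release_a, release_b, alternative="two-sided")
variation_pct = 100.0 * (release_b.mean() - release_a.mean()) / release_a.mean()

if p_value < 0.05:
    print(f"significant variation: {variation_pct:+.1f}% (p = {p_value:.4f})")
    # The next phase (repository mining) would inspect commits between the two releases.
else:
    print(f"no significant variation (p = {p_value:.4f})")
```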
Abstract:
The increasing demand for electricity and the steadily shrinking forecasts for fossil fuel reserves, together with growing environmental concern over their use, have raised concerns about the quality of electricity generation and made new investments in generation from alternative, clean, and renewable sources very welcome. Distributed generation is one of the main solutions for independent and self-sufficient generating systems, such as those in the sugarcane industry. This sector has grown considerably, contributing significantly to the electricity supplied to distribution networks. In this context, one of the main objectives of this study is to propose the implementation of an algorithm to detect islanding disturbances in the electrical system, characterized by undervoltage or overvoltage conditions. The algorithm also quantifies the time during which the system operated under these conditions, in order to assess the possible consequences for the electric power system. To achieve this, the wavelet multiresolution analysis (MRA) technique is used to detect the generated disturbances. The data obtained can be processed for use in predictive maintenance of the network's protection equipment, since such equipment is prone to damage from prolonged operation under abnormal frequency and voltage conditions.
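To illustrate the wavelet MRA idea, here is a sketch assuming PyWavelets and a synthetic 60 Hz waveform; the sag timing, thresholding rule, and index-to-time mapping are mine, not the study's.

```python
# Hedged sketch of disturbance detection with wavelet multiresolution analysis (MRA):
# decompose a synthetic voltage waveform and flag the interval where an undervoltage
# event perturbs the finest-level detail coefficients (all parameters illustrative).
import numpy as np
import pywt

fs = 3840                                   # samples per second (64 samples/cycle at 60 Hz)
t = np.arange(0, 1.0, 1 / fs)
v = np.sin(2 * np.pi * 60 * t)              # 1 p.u., 60 Hz voltage
sag = (t >= 0.4) & (t < 0.7)
v[sag] *= 0.6                               # 40% voltage sag (undervoltage condition)

coeffs = pywt.wavedec(v, "db4", level=4)    # MRA decomposition
d1 = coeffs[-1]                             # finest detail level, sensitive to transitions
threshold = 5 * np.median(np.abs(d1))
events = np.where(np.abs(d1) > threshold)[0]

if events.size:
    # Approximate mapping of level-1 detail indices back to time (d1 has ~fs/2 samples).
    start, end = events[0] * 2 / fs, events[-1] * 2 / fs
    print(f"disturbance detected from ~{start:.3f}s to ~{end:.3f}s "
          f"({end - start:.3f}s under abnormal voltage)")
```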
Abstract:
Incumbent telecommunication lasers emitting at 1.5 µm are fabricated on InP substrates and consist of multiple strained quantum well layers of the ternary alloy InGaAs, with barriers of InGaAsP or InGaAlAs. These lasers have been seen to exhibit very strong temperature dependence of the threshold current. This strong temperature dependence leads to a situation where external cooling equipment is required to stabilise the optical output power of these lasers. This results in a significant increase in the energy bill associated with telecommunications, as well as a large increase in equipment budgets. If the exponential growth trend of end-user bandwidth demand associated with the internet continues, these inefficient lasers could see the telecommunications industry become the dominant consumer of world energy. For this reason, there is strong interest in developing new, much more efficient telecommunication lasers. One avenue being investigated is the development of quantum dot lasers on InP. The confinement experienced in these low-dimensional structures leads to a strong perturbation of the density of states at the band edge, and has been predicted to result in reduced temperature dependence of the threshold current in these devices. The growth of these structures is difficult due to the large lattice mismatch between InP and InAs; however, quantum dots elongated in one dimension, known as quantum dashes, have recently been demonstrated. Chapter 4 of this thesis provides an experimental analysis of one of these quantum dash lasers emitting at 1.5 µm, along with a numerical investigation of the threshold dynamics present in this device. Another avenue being explored to increase the efficiency of telecommunications lasers is bandstructure engineering of GaAs-based materials to emit at 1.5 µm. The cause of the strong temperature sensitivity in InP-based quantum well structures has been shown to be CHSH Auger recombination. Calculations have shown, and experiments have verified, that the addition of bismuth to GaAs strongly reduces the bandgap and increases the spin-orbit splitting energy of the alloy GaAs(1-x)Bi(x). This leads to a bandstructure condition at x = 10% where not only is 1.5 µm emission achieved on GaAs-based material, but the bandstructure of the material can also naturally suppress the costly CHSH Auger recombination that plagues InP-based quantum well material. It has been predicted that telecommunications lasers based on this material system should operate in the absence of external cooling equipment and offer electrical and optical benefits over the incumbent lasers. Chapters 5, 6, and 7 provide a first analysis of several aspects of this material system relevant to the development of high-bismuth-content telecommunication lasers.
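For context, the bandstructure condition alluded to here is usually stated as the spin-orbit splitting exceeding the bandgap, which removes the energy-conserving final state for the hot hole generated by the CHSH Auger process; a compact statement (notation mine, not quoted from the thesis) is:

```latex
% CHSH Auger suppression condition in GaAs(1-x)Bi(x) (illustrative notation):
% the process becomes energetically forbidden once
\Delta_{\mathrm{SO}} > E_{g},
% a crossover reached near x \approx 10\% Bi, which also brings the emission
% wavelength of GaAs-based material to 1.5 \mu m.
```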
Abstract:
© 2016 International Journal of the Economics of Business. Human blood plasma and its derivative therapies have been used therapeutically for more than 50 years, after first being widely used to treat injuries during World War II. In certain countries, manufacturers of these therapies, known as plasma-derived medicinal products (PDMPs), compensate plasma donors, raising healthcare and ethical concerns among some parties. In particular, the World Health Organization has taken a strong advocacy position that compensation for blood donations should be eliminated worldwide. This review evaluates the key economic factors underlying the supply of and demand for PDMPs and the evidence pointing to the policy options that are most likely to maintain a reliable supply of life-sustaining therapies. It concludes that compensated plasma donation is important for maintaining adequate and consistent supplies of plasma and limits the risk of under-treatment for the foreseeable future.
Abstract:
The aim of this dissertation is to examine, model, and estimate firm responses to demand shocks by focusing on specific industries where demand shocks are well identified. Combining reduced-form evidence and structural analysis, this dissertation extends the economic literature by focusing on within-firm responses to two important demand shocks that are identifiable in empirical settings. First, I focus on how firms respond to a decrease in effective demand due to competition shocks coming from globalization. Considering China's accession to the World Trade Organization in 2001 and its impact on the apparel industry, the aim of these chapters is to answer how firms react to the increase in Chinese import competition, what mechanism lies behind these responses, and how important these responses are in explaining the survival of the Peruvian apparel industry. Second, I study how suppliers' survival probability relates to the sudden disruption of their main customer-supplier relationships with downstream manufacturers, conditional on suppliers' own idiosyncratic characteristics such as physical productivity.
Abstract:
The aim of this work is to evaluate the roles of age and emotional valence in word recognition in terms of ex-Gaussian distribution components. To that end, a word recognition task in which emotional valence was manipulated was carried out with two age groups. Older participants did not present a clear trend in reaction times. The younger participants showed statistically significant differences for negative words in both target and distracting conditions. Regarding the ex-Gaussian tau parameter, often related to attentional demands in the literature, age-related differences in emotional valence seem to have no effect for negative words. Focusing on emotional valence within each group, the younger participants showed an effect only for negative distracting words, whereas the older participants showed effects for negative and positive target words and for negative distracting words. This suggests that the attentional demand is higher for emotional words, in particular for the older participants.
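Ex-Gaussian components can be recovered from reaction-time data with SciPy's exponentially modified normal distribution; the sketch below uses simulated RTs and only illustrates the parameterization (tau = K * scale), not the study's analysis.

```python
# Sketch: recover ex-Gaussian components (mu, sigma, tau) from reaction times.
# scipy's exponnorm uses shape K with tau = K * scale; the RT sample is synthetic.
import numpy as np
from scipy.stats import exponnorm

rng = np.random.default_rng(7)
# Simulated RTs (ms): Gaussian part (mu=450, sigma=60) plus exponential tail (tau=150).
rts = rng.normal(450, 60, size=500) + rng.exponential(150, size=500)

K, loc, scale = exponnorm.fit(rts)
mu, sigma, tau = loc, scale, K * scale
print(f"mu = {mu:.0f} ms, sigma = {sigma:.0f} ms, tau = {tau:.0f} ms")
```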
Abstract:
Power systems require a reliable supply and good power quality. The impact of power supply interruptions is well acknowledged and well quantified. However, a system may perform reliably, without any interruptions, yet still have poor power quality. Although poor power quality has cost implications for all actors in electrical power systems, only some users are aware of its impact. Power system operators are highly attuned to the impact of low power quality on their equipment and have appropriate monitoring systems in place. Over recent years, however, certain industries have become increasingly vulnerable to the negative cost implications of poor power quality, arising from changes in their load characteristics and load sensitivities, and therefore increasingly implement power quality monitoring and mitigation solutions. This paper reviews several historical studies that investigate the cost implications of poor power quality for industry. These surveys are largely focused on outages, whilst the impact of poor power quality phenomena such as harmonics, short interruptions, voltage dips and swells, and transients is less well studied and understood. This paper examines the difficulties in quantifying the costs of poor power quality, and uses the chi-squared method to determine the consequences of power quality phenomena for industry, using a case study of over 40 manufacturing facilities and data centres in Ireland.
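The chi-squared method referred to here is presumably applied to survey counts; a minimal sketch of a chi-squared test of independence on an invented contingency table (facility type versus reported consequence) is shown below, purely for illustration.

```python
# Sketch: chi-squared test of whether reported power-quality consequences are
# independent of facility type (contingency-table counts are invented).
import numpy as np
from scipy.stats import chi2_contingency

# Rows: facility type; columns: consequence reported (equipment trip, data loss, none).
table = np.array([
    [18, 4, 8],    # manufacturing facilities
    [6, 12, 5],    # data centres
])

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")
if p_value < 0.05:
    print("consequence profile differs significantly between facility types")
```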
Abstract:
This thesis uses models of firm heterogeneity to conduct empirical analyses in economic history and agricultural economics. In Chapter 2, a theoretical model of firm heterogeneity is used to derive a statistic that summarizes the welfare gains from the introduction of a new technology. The empirical application considers the use of mechanical steam power in the Canadian manufacturing sector during the late nineteenth century. I exploit exogenous variation in geography to estimate several parameters of the model. My results indicate that the use of steam power resulted in a 15.1 percent increase in firm-level productivity and a 3.0-5.2 percent increase in aggregate welfare. Chapter 3 considers various policy alternatives to price ceiling legislation in the market for production quotas in the dairy farming sector in Quebec. I develop a dynamic model of the demand for quotas with farmers who are heterogeneous in their marginal cost of milk production. The econometric analysis uses farm-level data and estimates a parameter of the theoretical model that is required for the counterfactual experiments. The results indicate that the price of quotas could be reduced to the ceiling price through a 4.16 percent expansion of the aggregate supply of quotas, or through moderate trade liberalization of Canadian dairy products. In Chapter 4, I study the relationship between farm-level productivity and participation in the Commercial Export Milk (CEM) program. I use a difference-in-differences research design with inverse propensity weights to test for causality between participation in the CEM program and total factor productivity (TFP). I find a positive correlation between participation in the CEM program and TFP; however, I find no statistically significant evidence that the CEM program affected TFP.
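A hedged sketch of an inverse-propensity-weighted difference-in-differences estimate of the kind described for Chapter 4; all variable names (herd_size, ln_tfp) and data are simulated placeholders, not the thesis's farm-level data.

```python
# Sketch: difference-in-differences with inverse propensity weights (IPW-DiD).
# Data, covariates, and effect sizes are simulated for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 400
herd = rng.normal(60, 15, n)                              # pre-treatment covariate
treated = (herd + rng.normal(0, 10, n) > 60).astype(int)  # program participation indicator
post = np.tile([0, 1], n // 2)                            # before/after program period
ln_tfp = 0.02 * herd + 0.03 * post + 0.01 * treated * post + rng.normal(0, 0.1, n)
df = pd.DataFrame({"herd_size": herd, "treated": treated, "post": post, "ln_tfp": ln_tfp})

# Propensity of participation given covariates, then inverse-propensity weights.
ps = LogisticRegression().fit(df[["herd_size"]], df["treated"]).predict_proba(df[["herd_size"]])[:, 1]
df["w"] = np.where(df["treated"] == 1, 1 / ps, 1 / (1 - ps))

# Weighted DiD regression: the treated:post coefficient is the estimated program effect.
did = smf.wls("ln_tfp ~ treated * post", data=df, weights=df["w"]).fit()
print(did.params["treated:post"], did.pvalues["treated:post"])
```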
Abstract:
The inherently analogue nature of medical ultrasound signals, in conjunction with the abundant merits of digital image acquisition and the increasing use of relatively simple front-end circuitry, has created considerable demand for single-bit beamformers in digital ultrasound imaging systems. Furthermore, the increasing need to design lightweight ultrasound systems with low power consumption and low noise provides ample justification for development and innovation in the use of single-bit beamformers in ultrasound imaging systems. The overall aim of this research program is to investigate, establish, develop, and confirm, through a combination of theoretical analysis and detailed simulations that utilize raw phantom data sets, suitable techniques for the design of simple-to-implement, hardware-efficient digital ultrasound beamformers to address the requirements of 3D scanners with large channel counts, as well as portable and lightweight ultrasound scanners for point-of-care applications and intravascular imaging systems. In addition, the stability boundaries of higher-order High-Pass (HP) and Band-Pass (BP) Σ−Δ modulators for single- and dual-sinusoidal inputs are determined using quasi-linear modeling together with the describing-function method, in order to model the modulator quantizer more accurately. The theoretical results are shown to be in good agreement with the simulation results for a variety of input amplitudes, bandwidths, and modulator orders. The proposed mathematical models of the quantizer will greatly help speed up the design of higher-order HP and BP Σ−Δ modulators applicable to digital ultrasound beamformers. Finally, a user-friendly design and performance evaluation tool for Low-Pass (LP), BP, and HP modulators is developed. This toolbox, which uses various design methodologies and covers an assortment of modulator topologies, is intended to accelerate the design process and evaluation of modulators. The design tool is further developed to enable the design, analysis, and evaluation of beamformer structures, including noise analyses of the final B-scan images. Thus, the tool will allow researchers and practitioners to design and verify different reconstruction filters and analyze the results directly on the B-scan ultrasound images, thereby saving considerable time and effort.
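As a hedged illustration of the single-bit principle (not the thesis's higher-order HP/BP designs), the sketch below simulates a first-order low-pass Σ−Δ modulator that converts a sinusoid into a one-bit stream; the sampling rate and carrier frequency are arbitrary choices.

```python
# Minimal sketch of a first-order low-pass sigma-delta modulator producing a single-bit
# stream from a sinusoidal input, the kind of signal a single-bit beamformer processes.
import numpy as np

fs = 40e6                         # sampling rate (Hz), illustrative
f0 = 2.5e6                        # ultrasound carrier (Hz), illustrative
t = np.arange(4096) / fs
x = 0.5 * np.sin(2 * np.pi * f0 * t)

bits = np.empty_like(x)
integrator = 0.0
feedback = 0.0
for n, sample in enumerate(x):
    integrator += sample - feedback          # accumulate quantization error
    bits[n] = 1.0 if integrator >= 0 else -1.0
    feedback = bits[n]                       # single-bit DAC feedback

# The in-band content survives single-bit quantization; a crude check via correlation:
print("in-band correlation:", np.corrcoef(x, bits)[0, 1].round(3))
```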