1000 results for Catastrophe Model
Abstract:
Catastrophe risk models used by the insurance industry are likely subject to significant uncertainty, but due to their proprietary nature and strict licensing conditions they are not available for experimentation. In addition, even if such experiments were conducted, they would not be repeatable by other researchers because commercial confidentiality issues prevent the details of proprietary catastrophe model structures from being described in public domain documents. However, such experimentation is urgently required to improve decision making in both insurance and reinsurance markets. In this paper we therefore construct our own catastrophe risk model for flooding in Dublin, Ireland, in order to assess the impact of typical precipitation data uncertainty on loss predictions. As we consider only a city region rather than a whole territory and have access to detailed data and computing resources typically unavailable to industry modellers, our model is significantly more detailed than most commercial products. The model consists of four components: a stochastic rainfall module, a hydrological and hydraulic flood hazard module, a vulnerability module, and a financial loss module. Using these we undertake a series of simulations to test the impact of driving the stochastic event generator with four different rainfall data sets: ground gauge data, gauge-corrected rainfall radar, meteorological reanalysis data (European Centre for Medium-Range Weather Forecasts Reanalysis-Interim; ERA-Interim) and a satellite rainfall product (the Climate Prediction Center morphing method; CMORPH). Catastrophe models are unusual in that they use the first three components of the modelling chain to generate a large synthetic database of unobserved and severe loss-driving events for which estimated losses are calculated. We find the loss estimates to be more sensitive to uncertainties propagated from the driving precipitation data sets than to other uncertainties in the hazard and vulnerability modules, suggesting that the range of uncertainty within catastrophe model structures may be greater than commonly believed.
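To make the modular structure concrete, here is a minimal sketch of such a four-module simulation chain. Everything in it is illustrative: the function names, the exponential rainfall model, the 60 mm threshold and the linear depth-damage curve are assumptions for exposition, not the paper's actual components.

```python
import random

def stochastic_rainfall(n_events, scale):
    """Stochastic event module: draw a synthetic set of extreme-rainfall
    events (mm). The exponential depth distribution is purely illustrative."""
    return [random.expovariate(1.0 / scale) for _ in range(n_events)]

def flood_depth(rain_mm):
    """Hazard module stand-in: map event rainfall to a flood depth in metres
    (the 60 mm threshold and linear response are illustrative)."""
    return max(0.0, (rain_mm - 60.0) / 100.0)

def damage_ratio(depth_m):
    """Vulnerability module stand-in: a linear depth-damage curve in [0, 1]."""
    return min(1.0, 0.4 * depth_m)

def portfolio_loss(events, exposure):
    """Financial module: sum ground-up losses over the synthetic event set."""
    return sum(damage_ratio(flood_depth(r)) * exposure for r in events)

# Driving the chain with different rainfall datasets amounts to recalibrating
# the event generator; here that is caricatured by a single scale parameter.
for name, scale in [("gauge", 25.0), ("radar", 28.0),
                    ("ERA-Interim", 22.0), ("CMORPH", 35.0)]:
    losses = sorted(portfolio_loss(stochastic_rainfall(50, scale), 1e6)
                    for _ in range(1000))
    print(name, round(losses[-10]))  # a crude 99th-percentile loss
```

Even in this caricature, the spread of the tail-loss statistic across the four drivers illustrates how precipitation-data uncertainty propagates through an otherwise fixed hazard-vulnerability-loss chain.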
Abstract:
Work-related flow is defined as a sudden and enjoyable merging of action and awareness that represents a peak experience in the daily lives of workers. Employees' perceptions of challenge and skill and their subjective experiences in terms of enjoyment, interest and absorption were measured using the experience sampling method, yielding a total of 6981 observations from a sample of 60 employees. Linear and nonlinear approaches were applied in order to model both continuous and sudden changes. According to the R2, AICc and BIC indices, the nonlinear dynamical systems model (i.e. the cusp catastrophe model) fit the data better than the linear and logistic regression models. Likewise, the cusp catastrophe model appears to be especially powerful for modelling cases of high levels of flow. Overall, flow represents a nonequilibrium condition that combines continuous and abrupt changes across time. Research and intervention efforts concerned with this process should focus on the variable of challenge, which, according to our study, appears to play a key role in the abrupt changes observed in work-related flow.
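For orientation, the canonical cusp catastrophe has the potential below (the standard form, given here for context; the paper's exact parameterisation may differ):

$$ V(y; a, b) \;=\; \tfrac{1}{4}\,y^{4} \;-\; \tfrac{1}{2}\,b\,y^{2} \;-\; a\,y, \qquad \frac{\partial V}{\partial y} \;=\; y^{3} - b\,y - a \;=\; 0 . $$

The equilibria of the state variable $y$ (here, flow) depend on an asymmetry factor $a$ and a bifurcation factor $b$; one plausible assignment consistent with the abstract maps challenge and skill onto these factors. For $b > 0$ the equilibrium surface folds, so small changes in $a$ can produce the sudden jumps in $y$ that a linear or logistic model cannot capture.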
Abstract:
Tropical cyclones are a continuing threat to life and property. Willoughby (2012) found that a Pareto (power-law) cumulative distribution fit the impacts of the most damaging 10% of US hurricane seasons well. Here, we find that damage follows a Pareto distribution because the assets at hazard follow a Zipf distribution, which can be thought of as a Pareto distribution with exponent 1. The Z-CAT model is an idealized hurricane catastrophe model that represents a coastline where populated places with Zipf-distributed assets are randomly scattered and damaged by virtual hurricanes whose sizes and intensities are generated through a Monte Carlo process. The results produce realistic Pareto exponents. The ability of the Z-CAT model to simulate different climate scenarios allowed testing of sensitivities to Maximum Potential Intensity, landfall rates and building structure vulnerability. The Z-CAT results show that a statistically significant difference in damage emerges only when parameter changes produce a doubling of damage.
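The Z-CAT construction can be illustrated with a short Monte Carlo sketch. All functional forms and numbers below (footprint widths, beta-distributed damage fractions, the landfall rate) are illustrative stand-ins, not the published model; only the Zipf-distributed assets, random placement and Pareto-tail check follow the abstract.

```python
import math
import random

def zipf_assets(n_places):
    """Asset values following a Zipf law: the place of rank r holds value 1/r."""
    return [1.0 / r for r in range(1, n_places + 1)]

def poisson(lam):
    """Poisson draw via Knuth's multiplication method (fine for small rates)."""
    k, p, threshold = 0, 1.0, math.exp(-lam)
    while True:
        p *= random.random()
        if p < threshold:
            return k
        k += 1

def season_damage(sites, landfall_rate):
    """One virtual season: Poisson-many landfalls; each storm damages the
    places inside its footprint by a random fraction. The footprint width
    and the beta-distributed damage fraction are illustrative choices."""
    total = 0.0
    for _ in range(poisson(landfall_rate)):
        centre = random.uniform(0.0, COAST_LEN)
        width = random.uniform(20.0, 100.0)   # storm footprint (km)
        frac = random.betavariate(2.0, 5.0)   # fraction of assets destroyed
        total += sum(a * frac for x, a in sites if abs(x - centre) < width / 2)
    return total

COAST_LEN = 1000.0
sites = [(random.uniform(0.0, COAST_LEN), a) for a in zipf_assets(500)]
damages = sorted(season_damage(sites, 2.0) for _ in range(5000))

# Crude Hill estimate of the Pareto exponent of the damage tail
tail = [d for d in damages[-500:] if d > 0.0]
alpha = len(tail) / sum(math.log(d / tail[0]) for d in tail[1:])
print(f"estimated Pareto tail exponent: {alpha:.2f}")
```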
Abstract:
The ability to grow ultrathin films layer-by-layer with well-defined epitaxial relationships has allowed research groups worldwide to grow a range of artificial films and superlattices, first for semiconductors and now with oxides. In the oxide thin film research community, there have been concerted recent efforts to develop epitaxial oxide systems grown on single crystal oxide substrates that display a wide variety of novel interfacial functionality, such as enhanced ferromagnetic ordering, increased charge carrier density and increased optical absorption. The magnitude of these novel properties depends on the structure of the thin films, especially interface sharpness, intermixing, defects and strain, as well as the layering sequence in superlattices and the density of interfaces relative to the film thicknesses. To understand the relationship between the atomic structure of interfacial thin film oxides and their properties, atomic-scale characterization is required. Transmission electron microscopy (TEM) offers the ability to study interfaces of films at high resolution. Scanning transmission electron microscopy (STEM) allows real space imaging of materials with directly interpretable atomic number contrast. Electron energy loss spectroscopy (EELS), together with STEM, can probe the local chemical composition as well as the local electronic states of transition metals and oxygen. Both techniques have been significantly improved by aberration correctors, which reduce the probe size to 1 Å or less. Aberration correctors have thus made it possible to resolve individual atomic columns and possibly to probe the electronic structure at atomic scales. Separately, using electron probe forming lenses, structural information such as the crystal structure, strain, lattice mismatches and superlattice ordering can be measured by nanoarea electron diffraction (NED). The combination of STEM, EELS and NED allows us to gain a fundamental understanding of the properties of oxide superlattices and ultrathin films and their relationship with the corresponding atomic and electronic structure. In this dissertation, I use the aforementioned electron microscopy techniques to investigate several oxide superlattice and ultrathin film systems. The major findings are summarized below. These results were obtained with stringent specimen preparation methods that I developed for high resolution studies, which are described in Chapter 2. The essential materials background and description of the electron microscopy techniques are given in Chapters 1 and 2. In a LaMnO3-SrMnO3 superlattice, we demonstrate that the LaMnO3-SrMnO3 interface is sharper than the SrMnO3-LaMnO3 interface. Extra spectral weight in EELS is confined to the sharp interface, whereas at the rougher interface the extra states are either not present or not confined to the interface. Both the structural and electronic asymmetries correspond to asymmetric magnetic ordering at low temperature. In a short period LaMnO3-SrTiO3 superlattice for optical applications, we discovered a modified band structure in SrTiO3 ultrathin films relative to thick films and a SrTiO3 substrate, due to charge leakage from LaMnO3 into SrTiO3. This was measured by chemical shifts of the Ti L and O K edges using atomic-scale EELS. The interfacial sharpness of LaAlO3 films grown on SrTiO3 was investigated by the STEM/EELS technique together with electron diffraction.
This interface, when prepared under specific conditions, is conductive with high carrier mobility. Several explanations for the conductive interface have been proposed, including a polar catastrophe model, in which a large built-in electric field in the LaAlO3 film results in electron transfer into the SrTiO3 substrate. Other suggested possibilities include oxygen vacancies at the interface and/or in the substrate. The abruptness of the interface, as well as the extent of intermixing, has not been thoroughly investigated at high resolution, even though both can strongly influence the electrical transport properties. We found clear evidence for cation intermixing across the LaAlO3-SrTiO3 interface with high spatial resolution EELS and STEM, which contributes to the conduction at the interface. We also found structural defects, such as misfit dislocations, which lead to increased intermixing relative to coherent interfaces.
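For orientation, the polar catastrophe argument can be sketched in its generic textbook form (this is the standard picture, not a result from the dissertation): along [001], LaAlO3 stacks alternately charged (LaO)+ and (AlO2)− planes on the charge-neutral planes of SrTiO3, so without compensation the electrostatic potential across the film grows linearly with its thickness,

$$ V(n) \;\sim\; n\,\frac{e\,\sigma\,d}{\varepsilon_0\,\varepsilon_r}, $$

where $n$ is the number of LaAlO3 unit cells, $\sigma$ the areal charge density of a polar plane, $d$ the interplanar spacing and $\varepsilon_r$ the relative permittivity. Transferring half an electron per two-dimensional unit cell to the interface removes the divergence, which is the proposed electronic origin of the interfacial carriers.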
Abstract:
The 1994 Northridge earthquake sent ripples through insurance companies everywhere. It was one in a series of natural disasters, such as Hurricane Andrew, which, together with the problems at Lloyd's of London, have insurance companies running for cover. This paper presents a calibration of the U.S. economy in a model with financial markets for insurance derivatives, which suggests the U.S. economy can deal with the damage of natural catastrophes far better than one might think.
Abstract:
The exhaustion or outright absence of fossil fuel reserves, or simply uncertainty about their size, added to the variability of prices and the growing instability of the supply chain, are strong incentives for the development of alternative energy sources and carriers. The attractiveness of hydrogen as an energy carrier is very high in a context that also includes strong public concern about pollution and greenhouse gas emissions. Given its excellent environmental profile, public acceptance of the new energy carrier will depend on controlling the risks associated with its handling and storage. Among these, the danger of a severe explosion appears as the major drawback of this alternative fuel. This thesis investigates the numerical modelling of large-scale explosions, focusing on the simulation of turbulent combustion in large computational domains where the achievable resolution is severely limited. In the introduction, a general description of the explosion process is given. It is concluded that the resolution restrictions make it necessary to model both the turbulence and the combustion processes. Subsequently, a critical review of the available methodologies for both turbulence and combustion is carried out, pointing out their strengths, deficiencies and suitability. This investigation concludes that the only viable strategy for combustion modelling under the existing limitations is to use an expression for the turbulent burning velocity, as a function of various parameters, to close a balance equation for the combustion progress variable; such models are known as turbulent flame speed models. It also concludes that, depending on the resolution restrictions and geometry of each particular problem, using different turbulence simulation methodologies, LES or RANS, is the most adequate solution. Based on these findings, a combustion model is developed within the turbulent flame speed framework that overcomes the deficiencies of the available models for problems requiring moderate or low resolution. In particular, the model uses a heuristic algorithm to keep the thickness of the flame brush under control, a serious deficiency of the well-known Zimont model. Under this approach, the emphasis of the analysis lies in the accurate determination of the burning velocity, both laminar and turbulent. The laminar burning velocity is determined through a newly developed correlation that captures the simultaneous influence of equivalence ratio, temperature, pressure and steam dilution. The formulation obtained is valid over a wider range of temperature, pressure and steam dilution than any previously available formulation. The turbulent burning velocity, in turn, can be obtained from correlations that express it as a function of various parameters. To select the most suitable one, a number of correlations from the literature were compared against experimental results; the formulation due to Schmidt proved the most adequate for the conditions studied. Subsequently, the role of flame instabilities in the propagation of combustion fronts is assessed. They prove significant for fuel-lean mixtures in which the turbulence intensity remains moderate, conditions that are typical of accidents at nuclear power plants. Therefore, a model is developed to estimate the effect of the instabilities, specifically the acoustic-parametric instability, on the flame propagation speed. This comprises the mathematical derivation of the heuristic formulation of Bauwebs et al. for calculating the burning velocity enhancement due to flame instabilities, as well as an analysis of flame stability with respect to a cyclic velocity perturbation. The results are combined to complete the model of the acoustic-parametric instability. The model developed was then applied to several problems of significance for industrial safety, and the results were analysed and compared with the corresponding experimental data. Specifically, explosions in tunnels and in large containers, with and without concentration gradients and venting, were simulated. As a general outcome, the model is validated, confirming its suitability for these problems. As a final task, an in-depth analysis of the Fukushima-Daiichi catastrophe was carried out. The aim of the analysis is to determine the amount of hydrogen that exploded in reactor one, in contrast with other studies, which have centred on the amount of hydrogen generated during the accident. The research determined that the most probable amount of hydrogen consumed during the explosion was 130 kg. It is remarkable that the combustion of such a relatively small quantity of hydrogen can cause such significant damage, which underlines the importance of this type of investigation. The industrial branches for which the model developed will be of interest span the whole future hydrogen economy (fuel cells, vehicles, energy storage, etc.), with particular impact on the transport sector and on nuclear safety in both fission and fusion technology.
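For orientation, a standard turbulent flame speed (TFC-type) closure of the kind the thesis builds on reads as follows; this is the textbook form (e.g. Zimont's), not necessarily the thesis's exact formulation:

$$ \frac{\partial\big(\bar{\rho}\,\tilde{c}\big)}{\partial t} \;+\; \nabla\!\cdot\!\big(\bar{\rho}\,\tilde{\mathbf{u}}\,\tilde{c}\big) \;=\; \nabla\!\cdot\!\Big(\frac{\mu_t}{\mathrm{Sc}_t}\,\nabla\tilde{c}\Big) \;+\; \rho_u\,S_T\,\lvert\nabla\tilde{c}\rvert , $$

where $\tilde{c}$ is the Favre-averaged progress variable, $\mu_t$ and $\mathrm{Sc}_t$ the turbulent viscosity and Schmidt number, $\rho_u$ the unburnt density, and $S_T$ the turbulent burning velocity supplied by a correlation such as Schmidt's. The entire combustion closure thus hinges on the accuracy of $S_T$, which is why the thesis concentrates on laminar and turbulent burning velocity correlations.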
Abstract:
RNA viruses evolve rapidly. One source of this ability to change rapidly is the apparently high mutation frequency in RNA virus populations. A high mutation frequency is a central tenet of the quasispecies theory. A corollary of the quasispecies theory postulates that, given their high mutation frequency, animal RNA viruses may be susceptible to error catastrophe, where they undergo a sharp drop in viability after a modest increase in mutation frequency. We recently showed that the important broad-spectrum antiviral drug ribavirin (currently used to treat hepatitis C virus infections, among others) is an RNA virus mutagen, and we proposed that ribavirin exerts its antiviral effect by forcing RNA viruses into error catastrophe. However, a direct demonstration of error catastrophe has not been made for ribavirin or any RNA virus mutagen. Here we describe a direct demonstration of error catastrophe using ribavirin as the mutagen and poliovirus as a model RNA virus. We demonstrate that ribavirin's antiviral activity is exerted directly through lethal mutagenesis of the viral genetic material. A 99.3% loss in viral genome infectivity is observed after a single round of virus infection at ribavirin concentrations sufficient to cause a 9.7-fold increase in mutagenesis. Compiling data on both the mutation levels and the specific infectivities of poliovirus genomes produced in the presence of ribavirin, we have constructed a graph of error catastrophe showing that normal poliovirus indeed exists at the edge of viability. These data suggest that RNA virus mutagens may represent a promising new class of antiviral drugs.
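For context, the classical quasispecies error threshold (Eigen's), which underlies the error catastrophe argument, can be written as follows; this is the textbook form, not a result of the paper:

$$ \sigma\,q^{L} > 1 \quad\Longleftrightarrow\quad \mu \;\equiv\; 1-q \;<\; \frac{\ln\sigma}{L} \qquad (\mu \ll 1), $$

where $q$ is the per-base copying fidelity, $L$ the genome length and $\sigma$ the selective superiority of the master sequence. Pushing the per-base mutation rate $\mu$ past $\ln\sigma/L$, for example with a mutagen such as ribavirin, tips the population over the threshold into error catastrophe, consistent with the sharp loss of infectivity the abstract reports.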
Abstract:
The birth, death and catastrophe process is an extension of the birth-death process that incorporates the possibility of reductions in population of arbitrary size. We will consider a general form of this model in which the transition rates are allowed to depend on the current population size in an arbitrary manner. The linear case, where the transition rates are proportional to current population size, has been studied extensively. In particular, extinction probabilities, the expected time to extinction, and the distribution of the population size conditional on nonextinction (the quasi-stationary distribution) have all been evaluated explicitly. However, whilst these characteristics are of interest in the modelling and management of populations, processes with linear rate coefficients represent only a very limited class of models. We address this limitation by allowing for a wider range of catastrophic events. Despite this generalisation, explicit expressions can still be found for the expected extinction times.
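As an illustration of this process class (not the paper's general, population-dependent rates), here is a minimal Gillespie-style simulation with linear rate coefficients and geometrically distributed catastrophe sizes, estimating the time to extinction by repeated runs:

```python
import random

def simulate(n0, birth, death, cat_rate, cat_p, t_max):
    """Gillespie simulation of a birth-death-catastrophe process.

    Rates are linear in the population size n (illustrative choice):
      birth:       n -> n + 1  at rate birth * n
      death:       n -> n - 1  at rate death * n
      catastrophe: n -> n - K  at rate cat_rate * n, where the drop K
                   is geometric with parameter cat_p (capped at n).
    Returns the extinction time, or None if still alive at t_max.
    """
    t, n = 0.0, n0
    while n > 0:
        total = (birth + death + cat_rate) * n
        t += random.expovariate(total)      # time to the next event
        if t > t_max:
            return None
        u = random.uniform(0.0, total)      # pick which event occurred
        if u < birth * n:
            n += 1
        elif u < (birth + death) * n:
            n -= 1
        else:
            drop = 1
            while random.random() > cat_p:  # geometric catastrophe size
                drop += 1
            n = max(n - drop, 0)
    return t

# Estimate the extinction probability and mean time to extinction.
times = [simulate(20, 0.4, 0.3, 0.05, 0.5, 1000.0) for _ in range(2000)]
hits = [t for t in times if t is not None]
print(len(hits) / len(times), sum(hits) / len(hits))
```

The paper's point is that for this wider family of catastrophe-size distributions, quantities such as the expected extinction time estimated above still admit explicit expressions rather than requiring simulation.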
Abstract:
Nowadays, due to regulation and internal motivations, financial institutions pay closer attention to their risks. Besides the previously dominant market and credit risks, the new trend is to handle operational risk systematically. Operational risk is the risk of loss resulting from inadequate or failed internal processes, people and systems, or from external events. First we present the basic features of operational risk and its modelling and regulatory approaches; then we analyse operational risk within a simulation framework of our own development. Our approach is based on the analysis of a latent risk process instead of the manifest risk process that is widely popular in the risk literature. In our model the latent risk process is a stochastic process, the so-called Ornstein-Uhlenbeck process, which is mean-reverting. Within the model framework we define a catastrophe as the breach of a critical barrier by the process. We analyse the distributions of catastrophe frequency, severity and first hitting time, not only for a single process but for a dual process as well. Based on our first results, we could not falsify the Poisson character of the frequency or the long-tailed character of the severity. The distribution of the first hitting time requires more sophisticated analysis. At the end of the paper we examine the advantages of simulation-based forecasting and conclude with possible directions for further research.
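A minimal sketch of the latent-risk construction follows, assuming an Euler-Maruyama discretisation and a restart-after-breach rule (one modelling choice among several; all parameters are illustrative):

```python
import numpy as np

def ou_catastrophes(theta, mu, sigma, x0, barrier, dt, t_max, seed=0):
    """Euler-Maruyama simulation of a latent Ornstein-Uhlenbeck risk process,
    recording the times at which it breaches a critical barrier (the paper's
    definition of a catastrophe). Restarting at x0 after a breach is an
    illustrative assumption, not necessarily the paper's rule."""
    rng = np.random.default_rng(seed)
    steps = int(t_max / dt)
    x, hits = x0, []
    for i in range(1, steps + 1):
        # dX = theta * (mu - X) dt + sigma dW   (mean-reverting dynamics)
        x += theta * (mu - x) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        if x > barrier:
            hits.append(i * dt)
            x = x0
    return hits

hits = ou_catastrophes(theta=0.5, mu=0.0, sigma=1.0, x0=0.0,
                       barrier=2.0, dt=0.01, t_max=10_000.0)
print(len(hits), hits[0] if hits else None)  # frequency and first hitting time
```

Histogramming the inter-breach gaps and the first hitting time from such runs is exactly the kind of frequency/first-passage analysis the paper performs.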
Abstract:
Understanding the molecular mechanisms of oral carcinogenesis will yield important advances in diagnostics, prognostics, effective treatment, and outcome of oral cancer. Hence, in this study we have investigated the proteomic and peptidomic profiles by combining an orthotopic murine model of oral squamous cell carcinoma (OSCC), mass spectrometry-based proteomics and biological network analysis. Our results indicated the up-regulation of proteins involved in actin cytoskeleton organization and cell-cell junction assembly events and their expression was validated in human OSCC tissues. In addition, the functional relevance of talin-1 in OSCC adhesion, migration and invasion was demonstrated. Taken together, this study identified specific processes deregulated in oral cancer and provided novel refined OSCC-targeting molecules.
Abstract:
Two single crystalline surfaces of Au vicinal to the (111) plane were modified with Pt and studied using scanning tunneling microscopy (STM) and X-ray photoemission spectroscopy (XPS) in an ultra-high vacuum environment. The vicinal surfaces studied are Au(332) and Au(887), and different Pt coverages (θPt) were deposited on each surface. From STM images we determine that Pt deposits on both surfaces as nanoislands with heights ranging from 1 ML to 3 ML depending on θPt. On both surfaces the early growth of Pt ad-islands occurs at the lower part of the step edge, with Pt ad-atoms being incorporated into the steps in some cases. XPS results indicate that partial alloying of Pt occurs at the interface at room temperature and at all coverages, as suggested by the negative chemical shift of the Pt 4f core line, indicating an upward shift of the d-band center of the alloyed Pt. The existence of a segregated Pt phase, especially at higher coverages, is also detected by XPS. Sample annealing indicates that the temperature rise promotes further incorporation of Pt atoms into the Au substrate, as supported by STM and XPS results. Additionally, the catalytic activity of different PtAu systems reported in the literature for some electrochemical reactions is discussed in light of our findings.
Abstract:
Congenital diaphragmatic hernia (CDH) is associated with pulmonary hypertension, which is often difficult to manage and is a significant cause of morbidity and mortality. In this study, we used a rabbit model of CDH to evaluate the effects of BAY 60-2770 on the in vitro reactivity of the left pulmonary artery. CDH was created in New Zealand rabbit fetuses (n = 10 per group) and compared to controls. Measurements of body, total lung and left lung weights (BW, TLW, LLW) were made. Pulmonary artery rings were pre-contracted with phenylephrine (10 μM), after which cumulative concentration-response curves to glyceryl trinitrate (GTN; an NO donor), tadalafil (a PDE5 inhibitor) and BAY 60-2770 (an sGC activator) were obtained, as well as the levels of NO (NO3/NO2). LLW, TLW and LBR were decreased in CDH (p < 0.05). In the left pulmonary artery, the potency (pEC50) of GTN was markedly lower in CDH (8.25 ± 0.02 versus 9.27 ± 0.03; p < 0.01). In contrast, the potency of BAY 60-2770 was markedly greater in CDH (11.7 ± 0.03 versus 10.5 ± 0.06; p < 0.01). The NO2/NO3 levels were 62% higher in CDH (p < 0.05). BAY 60-2770 exhibits a greater potency to relax the pulmonary artery in CDH, indicating its potential use for pulmonary hypertension in this disease.
Abstract:
To characterize the relaxation induced by BAY 41-2272 in human ureteral segments, ureter specimens (n = 17) from multiple-organ human deceased donors (mean age 40 ± 3.2 years, male/female ratio 2:1) were used. Immunohistochemical analysis for endothelial and neuronal nitric oxide synthase, soluble guanylate cyclase (sGC) and type 5 phosphodiesterase was also performed. Potency values were determined as the negative log of the molar concentration producing 50% of the maximal relaxation in potassium chloride-precontracted specimens. The unpaired Student t test was used for comparisons. Immunohistochemistry revealed the presence of endothelial nitric oxide synthase in vessel endothelia and neuronal nitric oxide synthase in the urothelium and nerve structures. sGC was expressed in the smooth muscle and urothelium layers, and type 5 phosphodiesterase was present in the smooth muscle only. BAY 41-2272 (0.001-100 μM) relaxed the isolated ureter in a concentration-dependent manner, with potency and maximal relaxation values of 5.82 ± 0.14 and 84% ± 5%, respectively. The addition of nitric oxide synthase and sGC inhibitors reduced the maximal relaxation values by 21% and 45%, respectively, whereas the presence of sildenafil (100 nM) significantly potentiated this response (6.47 ± 0.10, P < .05). Neither glibenclamide nor tetraethylammonium nor removal of the ureteral urothelium influenced the relaxation response to BAY 41-2272. BAY 41-2272 relaxes the isolated human ureter in a concentration-dependent manner, mainly by activating the sGC enzyme in smooth muscle cells rather than in the urothelium, although a cyclic guanosine monophosphate-independent mechanism might have a role. Potassium channels do not seem to be involved.
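For readers unfamiliar with the potency computation, a minimal curve-fitting sketch follows. The Hill-type model and the data points below are hypothetical; only the pEC50 definition (negative log of the molar concentration at half-maximal relaxation) follows the text.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(log_c, emax, pec50, n_h):
    """Sigmoidal concentration-response model on log10 molar concentration."""
    return emax / (1.0 + 10.0 ** (n_h * (-pec50 - log_c)))

# Hypothetical relaxation data (% reversal of KCl-induced pre-contraction)
log_c = np.log10([1e-9, 1e-8, 1e-7, 1e-6, 1e-5, 1e-4])
relax = np.array([2.0, 10.0, 35.0, 65.0, 80.0, 84.0])

(emax, pec50, n_h), _ = curve_fit(hill, log_c, relax, p0=[80.0, 6.0, 1.0])
print(f"Emax = {emax:.0f}%, pEC50 = {pec50:.2f}")
```

At log concentration equal to −pEC50 the model returns exactly Emax/2, which is what makes the fitted pEC50 directly comparable to the 5.82 ± 0.14 value reported above.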
Abstract:
Atomic charge transfer-counter polarization effects determine most of the infrared fundamental CH intensities of simple hydrocarbons: methane, ethylene, ethane, propyne, cyclopropane and allene. The quantum theory of atoms in molecules/charge-charge flux-dipole flux (QTAIM/CCFDF) model predicted the values of 30 CH intensities ranging from 0 to 123 km mol⁻¹ with a root mean square (rms) error of only 4.2 km mol⁻¹, without including a specific equilibrium atomic charge term. Sums of the contributions from terms involving charge flux and/or dipole flux averaged 20.3 km mol⁻¹, about ten times larger than the average charge contribution of 2.0 km mol⁻¹. The only notable exceptions are the CH stretching and bending intensities of acetylene and two of the propyne vibrations for hydrogens bound to sp-hybridized carbon atoms. Calculations were carried out at four quantum levels: MP2/6-311++G(3d,3p), MP2/cc-pVTZ, QCISD/6-311++G(3d,3p) and QCISD/cc-pVTZ. The results calculated at the QCISD level are the most accurate of the four, with root mean square errors of 4.7 and 5.0 km mol⁻¹ for the 6-311++G(3d,3p) and cc-pVTZ basis sets, respectively. These values are close to the estimated aggregate experimental error of the hydrocarbon intensities, 4.0 km mol⁻¹. The atomic charge transfer-counter polarization effect is much larger than the charge effect in the results at all four quantum levels. Charge transfer-counter polarization effects are expected to also be important in vibrations of more polar molecules, for which equilibrium charge contributions can be large.
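For orientation, the CCFDF partition referred to can be written in its standard form (as commonly given in the QTAIM/CCFDF literature; shown here for context, not quoted from the paper). Writing the $x$ component of the molecular dipole as $p_x = \sum_i (q_i x_i + m_{x,i})$, with $q_i$ and $m_{x,i}$ the QTAIM atomic charges and atomic dipoles, its derivative with respect to a normal coordinate $Q$ splits as

$$ \frac{\partial p_x}{\partial Q} \;=\; \underbrace{\sum_i q_i\,\frac{\partial x_i}{\partial Q}}_{\text{charge}} \;+\; \underbrace{\sum_i x_i\,\frac{\partial q_i}{\partial Q}}_{\text{charge flux}} \;+\; \underbrace{\sum_i \frac{\partial m_{x,i}}{\partial Q}}_{\text{dipole flux}} , $$

and the fundamental IR intensity of mode $k$ is proportional to $\lvert \partial \mathbf{p} / \partial Q_k \rvert^{2}$. The "charge transfer-counter polarization" effect in the abstract refers to the strong mutual cancellation between the charge flux and dipole flux terms, which is why their combined contribution, rather than the equilibrium charge term, dominates the CH intensities.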