890 results for continuous model theory
Abstract:
This paper presents a deterministic continuous model of proliferative cell activity. The classical series of connected compartments is revisited, along with a simple mathematical treatment of two hypotheses: constant transit times and harmonic Ts. Several examples are presented in support of these ideas, taken both from the previous literature and from recent experiments with the fish Carassius auratus carried out at the Junta de Energía Nuclear, Madrid, Spain.
Abstract:
This thesis focuses on the study of several numerical procedures used to solve the dynamics of a multibody system subjected to constraints and impact. The system may be composed of rigid and deformable bodies connected by different types of joints. Within this framework, special attention is paid to consistent methods, which preserve the theoretical behavior of the energy at each time step. In other words, a consistent method keeps the total energy constant in a conservative problem and provides a positive decrease in the total energy when dissipative forces are present. A numerical algorithm that is energetically consistent has been developed for solving the dynamical equations of multibody systems. Energetic consistency in contacts and constraints is formulated using Lagrange multipliers, penalty, and augmented Lagrange methods. A contact methodology is proposed for rigid bodies whose boundaries are represented by implicit surfaces. The method is based on a suitably regularized constraint formulation, adapted both to fulfill the contact constraint exactly and to be consistent with the conservation of the total energy. In this context two different approaches are studied: the first applies to purely elastic contact (without deformation) and is formulated with penalty and augmented Lagrange methods; the second is based on a constitutive model for contact with penetration. In this second approach, a penalty potential is used in the constitutive model that restores the energy stored in the contact when no dissipative effects are present; when friction and damping are considered, the energy is dissipated consistently with the continuous model.
Abstract:
Characterization of mechanical dissipation processes based on the microstructure of soft tissues. We present a continuous damage model with regularized softening (smeared crack models) for fiber-reinforced soft tissues. The material parameters of the continuous model are derived from the mesoscopic scale, at which the continuum is treated as a collagenous fibril-reinforced composite. We aim to study the continuum-level response as a function of the nanoscale properties of the collagen and of the adhesion forces between tropocollagen molecules.
Abstract:
A paradigm shift in the conception of digital terrain models is taking place in geodesy, moving from designing a model with the fewest possible points to models of hundreds of thousands or millions of points. This change has been driven by the introduction of new technologies such as laser scanning, radar interferometry, and image processing. The rapid acceptance of these technologies is due mainly to the great speed of data acquisition, to their accessibility as reflectorless techniques, and to the high degree of detail of the resulting models. Classic survey methods are based on discrete measurements of points that, taken as a whole, form a model; the precision of the model is then derived from the precision with which the individual points are measured.
Terrestrial laser scanner (TLS) technology represents a different approach to generating a model of the observed object. The point cloud produced by a TLS scan is treated as a whole by means of area-based analysis, so the final model is not an aggregation of points but the best-fitting surface adapted to the point cloud. When the precision of capturing singular points with tachymetric methods is compared with that of TLS equipment, the inferiority of the latter is clear; however, in the treatment of point clouds with area-based analysis methods, acceptable precisions have been obtained, and it has become possible to consider this technology for monitoring structural deformations. Notable TLS applications include recording of cultural heritage, recording of construction stages of industrial plants and structures, accident reporting, and monitoring of ground movements and structural deformations.
In dam monitoring, compared with the classical approach based on tracking a set of discrete points, having a continuous model of the downstream face opens the possibility of introducing surface deformation analysis methods and behavior models that improve the understanding and forecasting of dam movements. Nevertheless, the application of TLS technology to dam monitoring must be regarded as complementary to existing methods. Whereas pendulums and, more recently, the differential global positioning system (DGPS) give continuous information on the movements of particular points of the dam, TLS makes it possible to follow the seasonal evolution of the whole face and to detect potentially damaged zones. This work reviews the characteristics of TLS technology and the factors affecting the final precision of the scans. It establishes the need to use equipment based on direct time-of-flight measurement, also called pulsed scanners, for distances between 100 m and 300 m. The application of TLS to the modelling of structures and vertical walls is studied, and the factors influencing the final precision, such as the registration of point clouds, target types, and the combined effect of scanning distance and angle of incidence, are analyzed.
A new approach is presented for obtaining a complete map-type plot of the precision of pulsed TLS equipment at mid-range distances. Tests were carried out in field-like conditions similar to dam monitoring and other civil engineering works, scanning across the full range of distances and angles of incidence of the equipment. Drawing on graphic semiology techniques, a distance versus angle-of-incidence map was designed and evaluated, combining isolines with points sized and grey-scaled in proportion to the precision values they represent. Precisions under different field conditions were compared with the manufacturer's specifications. For this purpose, point clouds were evaluated under two approaches: the standard "plane of best fit" and the proposed "simulated deformation" method, which showed improved performance. These results lead to a discussion and recommendations on optimal TLS operation in civil engineering works. Finally, the seasonal movements of an arch-gravity dam recorded by its direct pendulums are compared with those obtained from TLS scans. The results show differences of a few millimeters, the best being on the order of one millimeter. The methodology used is explained, and considerations regarding point cloud density and the size of the triangular meshes are given.
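The "plane of best fit" evaluation can be sketched numerically: fit a plane to a scanned point cloud by least squares and take the RMS of the point-to-plane residuals as the precision estimate. The sketch below is a minimal illustration on synthetic data, not the thesis workflow; the wall geometry and 2 mm noise level are assumptions.

```python
import numpy as np

def plane_of_best_fit(points):
    """Fit a plane to an (N, 3) point cloud and return the centroid,
    the unit normal, and the RMS of the point-to-plane residuals."""
    centroid = points.mean(axis=0)
    # The right singular vector with the smallest singular value is
    # the direction of least variance, i.e. the plane normal.
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    normal = vt[-1]
    residuals = (points - centroid) @ normal
    return centroid, normal, np.sqrt(np.mean(residuals ** 2))

# Synthetic vertical wall: a flat plane plus 2 mm of Gaussian noise,
# standing in for scanner noise on a dam face (units: meters).
rng = np.random.default_rng(42)
xz = rng.uniform(0.0, 10.0, size=(2000, 2))
noise = 0.002 * rng.standard_normal(2000)
cloud = np.column_stack([xz[:, 0], noise, xz[:, 1]])  # wall in the XZ plane
_, normal, rms = plane_of_best_fit(cloud)
```

The recovered normal should point along the Y axis, and the RMS residual should approach the simulated 2 mm noise level.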
Abstract:
This paper presents empirical evidence suggesting that healthy humans can perform a two-degree-of-freedom visuo-motor pursuit tracking task with the same response time delay as a one-degree-of-freedom task. In contrast, the time delay of the response is influenced markedly by the nature of the motor synergy required to produce it. We suggest a conceptual account of this evidence based on adaptive model theory, which combines theories of intermittency from psychology and adaptive optimal control from engineering. The intermittent response planning stage has a fixed period. It possesses multiple optimal trajectory generators, so that multiple degrees of freedom can be planned concurrently without requiring an increase in the planning period. In tasks which require unfamiliar motor synergies, or are deemed to be incompatible, the internal adaptive models representing movement dynamics are inaccurate. This means that the actual response produced will deviate from the one planned. For a given target-response discrepancy, corrective response trajectories of longer duration are planned, consistent with the principle of speed-accuracy trade-off. Compared to familiar or compatible tasks, this results in a longer response time delay and reduced accuracy. From the standpoint of the intermittency approach, the findings of this study help make possible a more integral and predictive account of purposive action. (c) 2005 Elsevier B.V. All rights reserved.
Abstract:
This symposium describes a multi-dimensional strategy to examine fidelity of implementation in an authentic school district context. An existing large-district peer mentoring program provides an example. The presentation will address development of a logic model to articulate a theory of change; collaborative creation of a data set aligned with essential concepts and research questions; identification of independent, dependent, and covariate variables; issues related to use of big data that include conditioning and transformation of data prior to analysis; operationalization of a strategy to capture fidelity of implementation data from all stakeholders; and ways in which fidelity indicators might be used.
Abstract:
A detailed non-equilibrium state diagram of shape-anisotropic particle fluids is constructed. The effects of particle shape are explored using Naive Mode Coupling Theory (NMCT) and a single-particle Non-linear Langevin Equation (NLE) theory, and the dynamical behavior of non-ergodic fluids is discussed. We employ a rotationally frozen approach to NMCT in order to determine a transition to center-of-mass (translational) localization. Both ideal and kinetic glass transitions are found to be highly shape dependent and uniformly increase with particle dimensionality. The glass transition volume fraction of quasi-1- and 2-dimensional particles falls monotonically with the number of sites (aspect ratio), while 3-dimensional particles display a non-monotonic dependence of glassy vitrification on the number of sites. Introducing interparticle attractions results in a far more complex state diagram. The ideal non-ergodic boundary shows a glass-fluid-gel re-entrance previously predicted for spherical particle fluids. The non-ergodic region of the state diagram presents qualitatively different dynamics in different regimes, distinguished by the different behaviors of the NLE dynamic free energy. The caging-dominated, repulsive glass regime is characterized by long localization lengths and barrier locations dictated by repulsive hard-core interactions, while the bonding-dominated gel region has short localization lengths (commensurate with the attraction range) and barrier locations. There exists a small region of the state diagram characterized by both glassy and gel localization lengths in the dynamic free energy. A much larger region of phase space (high volume fraction and high attraction strength) is characterized by short gel-like localization lengths and long barrier locations. This region is called the attractive glass and represents a two-step relaxation process whereby a particle first breaks attractive physical bonds and then escapes its topological cage.
The dynamic fragility of these fluids is highly particle-shape dependent: it increases with particle dimensionality and falls with aspect ratio for quasi-1- and 2-dimensional particles. An ultralocal limit analysis of the NLE theory predicts universalities in the behavior of relaxation times and elastic moduli. The equilibrium phase diagrams of chemically anisotropic Janus spheres and Janus rods are calculated employing a mean-field Random Phase Approximation; the calculations for Janus rods are corroborated by the full liquid-state Reference Interaction Site Model theory. The Janus particles consist of attractive and repulsive regions, and both rods and spheres display rich phase behavior. The phase diagrams of these systems display fluid, macrophase-separated, attraction-driven microphase-separated, repulsion-driven microphase-separated, and crystalline regimes. Macrophase separation is predicted in highly attractive, low-volume-fraction systems. Attraction-driven microphase separation is characterized by long-length-scale divergences, where the ordering length scale determines the microphase-ordered structures; the ordering length scale of repulsion-driven microphase separation is determined by the repulsive range. At high volume fractions, particles forgo the enthalpic considerations of attractions and repulsions to satisfy hard-core constraints and maximize vibrational entropy. This results in site-length-scale ordering in rods and sphere-length-scale ordering in Janus spheres, i.e., crystallization. A change in the Janus balance of both rods and spheres results in quantitative changes in spinodal temperatures and the position of phase boundaries. However, a change in the block sequence of Janus rods causes qualitative changes in the type of microphase-ordered state and induces prominent features (such as a Lifshitz point) in the phase diagrams of these systems.
A detailed study of the number of nearest neighbors in Janus rod systems reflects a deep connection between this local measure of structure and the structure factor, which represents the most global measure of order.
Abstract:
Human operators are unique in their decision-making capability, judgment, and nondeterminism. Their sense of judgment, unpredictable decision procedures, and susceptibility to environmental elements can cause them to erroneously execute a given task description when operating a computer system. Usually, a computer system is protected against some erroneous human behaviors by having the necessary safeguard mechanisms in place, but some erroneous human operator behaviors can lead to severe or even fatal consequences, especially in safety-critical systems. A generalized methodology that allows modeling and analyzing the interactions between computer systems and human operators, where the operators are allowed to deviate from their prescribed behaviors, provides a formal understanding of the robustness of a computer system against possible aberrant behaviors by its human operators. We provide several methodologies for assisting in modeling and analyzing human behaviors exhibited while operating computer systems. Every human operator is usually given a specific recommended set of guidelines for operating a system. We first present a process-algebraic methodology for modeling and verifying recommended human task execution behavior. We then show how one can perform runtime monitoring of a computer system being operated by a human operator to check for violations of temporal safety properties. We consider the concept of a protection envelope, giving a wider class of behaviors than those strictly prescribed by a human task that can be tolerated by a system. We then provide a framework for determining whether a computer system can maintain its guarantees if the human operators operate within their protection envelopes. This framework also helps to determine the robustness of the computer system under weakening of the protection envelopes. In this regard, we present a tool called Tutela that assists in implementing the framework.
We then examine the ability of a system to remain safe under broad classes of variations of the prescribed human task. We develop a framework addressing two issues. The first issue is: given a human task specification and a protection envelope, will the protection envelope properties still hold under standard erroneous executions of that task by the human operators? In other words, how robust is the protection envelope? The second issue is: in the absence of a protection envelope, can we approximate a protection envelope encompassing those standard erroneous human behaviors that can be safely endured by the system? We present an extension of Tutela that implements this framework. The two frameworks mentioned above use Concurrent Game Structures (CGS) as models for both computer systems and their human operators. However, this formalism has some shortcomings for our uses. We add incomplete-information concepts to CGSs to achieve better modularity for the players, and we introduce nondeterminism both in the transition system and in the strategies of players modeling human operators and computer systems. The resulting incomplete-information Nondeterministic CGS (iNCGS), with nondeterministic action strategies for players, is a more precise formalism for modeling human behaviors exhibited while operating a computer system. We show how we can reason about a human behavior satisfying a guarantee by providing a semantics of Alternating-Time Temporal Logic based on iNCGS player strategies. In a nutshell, this dissertation provides formal methodology for modeling and analyzing system robustness against both expected and erroneous human operator behaviors.
Numerical simulation of mixed convection in a cavity filled with heterogeneous and homogeneous porous media
Abstract:
This work presents mixed convection heat transfer inside a lid-driven cavity heated from below and filled with a heterogeneous or homogeneous porous medium. In the heterogeneous approach, the solid domain is represented by heat-conducting, equally spaced blocks; the fluid phase surrounds the blocks and is bounded by the cavity walls. The homogeneous, or pore-continuum, approach is characterized by the cavity porosity and permeability. Generalized mass, momentum, and energy conservation equations are obtained in dimensionless form to represent both the continuum and the pore-continuum models. The numerical solution is obtained via the finite volume method: the QUICK interpolation scheme is used for the advection terms and the SIMPLE algorithm for pressure-velocity coupling. To remain in the laminar regime, the flow parameters are kept in the ranges 10² ≤ Re ≤ 10³ and 10³ ≤ Ra ≤ 10⁶ for both approaches. In the configurations tested for the continuum model, 9, 16, 36, and 64 blocks are considered for each combination of Re and Ra, with the microscopic porosity held constant at φ = 0.64. For the pore-continuum model, the Darcy number (Da) is set according to the number of blocks in the heterogeneous cavity and to φ. Numerical results of the comparative study between the microscopic and macroscopic approaches are presented. As a result, average Nusselt number correlations for the continuum and pore-continuum models as functions of Ra and Re are obtained.
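As a simple geometric check on the block configurations: with N equally spaced square blocks occupying a solid fraction 1 − φ of a square cavity, the block size follows directly. The sketch below only does this bookkeeping; the unit cavity side is an assumption for illustration.

```python
import math

def block_side(n_blocks, porosity, cavity_side=1.0):
    """Side length of each of n_blocks equal square blocks whose total
    area equals the solid fraction (1 - porosity) of the square cavity."""
    solid_fraction = 1.0 - porosity
    return cavity_side * math.sqrt(solid_fraction / n_blocks)

# The four tested configurations at the fixed porosity of 0.64.
sides = {n: block_side(n, porosity=0.64) for n in (9, 16, 36, 64)}
```

With φ = 0.64 the block sides come out to 0.2, 0.15, 0.1, and 0.075 of the cavity side, so refining the heterogeneous model shrinks the blocks while keeping the solid fraction fixed.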
Abstract:
Modeling physiological processes using tracer kinetic methods requires knowledge of the time course of the tracer concentration in the blood supplying the organ. For liver studies, however, the inaccessibility of the portal vein makes direct measurement of the hepatic dual-input function impossible in humans. We aim to develop a method for predicting the portal venous time-activity curve from measurements of an arterial time-activity curve. An impulse-response function based on a continuous distribution of washout constants is developed and validated for the gut. Experiments with simultaneous blood sampling in the aorta and portal vein were made in 13 anesthetized pigs following inhalation of intravascular [¹⁵O]CO or injections of diffusible 3-O-[¹¹C]methylglucose (MG). The parameters of the impulse-response function have a physiological interpretation in terms of the distribution of washout constants and are mathematically equivalent to the mean transit time (T̄) and the standard deviation of transit times. The results include estimates of mean transit times from the aorta to the portal vein in pigs: T̄ = 0.35 ± 0.05 min for CO and 1.7 ± 0.1 min for MG. The prediction of the portal venous time-activity curve benefits from constraining the regression fits by parameters estimated independently. This is strong evidence for the physiological relevance of the impulse-response function, which includes asymptotically, and thereby justifies kinetically, a useful and simple power law. The similarity between our parameter estimates in pigs and parameter estimates in normal humans suggests that the proposed model can be adapted for use in humans.
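The prediction step has the structure of a convolution: the portal venous curve is the arterial curve convolved with the impulse-response function. The sketch below uses a gamma-shaped kernel parameterized by the mean transit time and the SD of transit times as a stand-in; the paper's kernel is derived from a continuous distribution of washout constants, so the functional form, the synthetic bolus, and the SD value here are illustrative assumptions.

```python
import numpy as np

def gamma_impulse_response(t, mean_tt, sd_tt):
    """Unit-area gamma kernel with the given mean transit time and
    standard deviation of transit times (illustrative stand-in)."""
    k = (mean_tt / sd_tt) ** 2       # shape
    theta = sd_tt ** 2 / mean_tt     # scale
    h = t ** (k - 1.0) * np.exp(-t / theta)
    dt = t[1] - t[0]
    return h / (h.sum() * dt)        # normalize to unit area

def predict_portal_curve(t, arterial, mean_tt, sd_tt):
    """Portal venous TAC = arterial TAC convolved with the kernel."""
    h = gamma_impulse_response(t, mean_tt, sd_tt)
    dt = t[1] - t[0]
    return np.convolve(arterial, h)[: len(t)] * dt

# Example: CO-like kinetics with mean transit time 0.35 min
# (an SD of 0.2 min is assumed for illustration).
t = np.linspace(0.0, 5.0, 1001)                  # minutes
arterial = np.exp(-((t - 0.5) / 0.15) ** 2)      # synthetic bolus
portal = predict_portal_curve(t, arterial, mean_tt=0.35, sd_tt=0.2)
```

Because the kernel has unit area, the predicted portal curve conserves the tracer amount (up to truncation) while peaking later and spreading wider than the arterial input.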
Abstract:
We present the quantum theory of the far-off-resonance continuous-wave Raman laser using the Heisenberg-Langevin approach. We show that the simplified quantum Langevin equations for this system are mathematically identical to those of the nondegenerate optical parametric oscillator in the time domain, with the following associations: pump with pump, Stokes with signal, and Raman coherence with idler. We derive analytical results for both the steady-state behavior and the time-dependent noise spectra using standard linearization procedures. In the semiclassical limit, these results match previous purely semiclassical treatments, which yield excellent agreement with experimental observations. The analytical time-dependent results predict perfect photon-statistics conversion from the pump to the Stokes field and nonclassical behavior under certain operational conditions.
Abstract:
The importance of the interaction between Operations Management (OM) and human behavior has recently been re-addressed. This paper introduces the reasoned action theory suggested by Froehle and Roth (2004) to analyze operational capabilities, exploring the suitability of this model in the context of OM. It also discusses the behavioral aspects of operational capabilities from the perspective of organizational routines. The theory was operationalized using the Fishbein and Ajzen (F/A) behavioral model, and a multi-case strategy was employed to analyze the Continuous Improvement (CI) capability. The results indicate that the model only partially explains CI behavior in an operational context and that contingency variables might influence the general relations among the variables of the F/A model; thus intention might not be the determinant variable of behavior in this context.
Abstract:
We apply the formalism of the continuous-time random walk to the study of financial data. The entire distribution of prices can be obtained once two auxiliary densities are known: the probability density for the pausing time between successive jumps and the corresponding probability density for the magnitude of a jump. We have applied the formalism to data on the U.S. dollar-deutsche mark futures exchange, finding good agreement between theory and the observed data.
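The two auxiliary densities are all that is needed to simulate such a walk. A minimal Monte Carlo sketch, assuming an exponential pausing-time density and a Gaussian jump-magnitude density (illustrative choices; in the paper both densities are estimated from market data):

```python
import numpy as np

def simulate_ctrw(n_paths, horizon, mean_wait, jump_std, seed=7):
    """Monte Carlo continuous-time random walk: exponential pausing
    times between jumps and Gaussian jump magnitudes; returns the
    terminal value of each path at the horizon."""
    rng = np.random.default_rng(seed)
    terminal = np.empty(n_paths)
    for i in range(n_paths):
        t, x = 0.0, 0.0
        while True:
            t += rng.exponential(mean_wait)   # pausing-time density
            if t > horizon:
                break
            x += rng.normal(0.0, jump_std)    # jump-magnitude density
        terminal[i] = x
    return terminal

# Roughly 100 jumps expected per path over the horizon.
prices = simulate_ctrw(n_paths=2000, horizon=1.0,
                       mean_wait=0.01, jump_std=0.05)
```

With exponential waits the jump count is Poisson, so the terminal variance is (horizon / mean_wait) × jump_std², i.e. a standard deviation of 0.5 in this example; heavier-tailed waiting-time densities instead give anomalous diffusion.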
Abstract:
This thesis draws on the principles of grounded theory (Strauss & Corbin, 1998) to address the lack of documentation concerning the strategies adopted by "intermediary agents" to promote the use of research-based knowledge among education practitioners. The term "intermediary agent" refers to people positioned at the interface between the producers and users of scientific knowledge who encourage and support school practitioners in applying scientific knowledge in their practice. The study is part of a project of the ministère de l'Éducation, du Loisir et du Sport du Québec aimed at improving the academic success of secondary school students from disadvantaged backgrounds. Intermediary agents from different levels of the education system who had been mandated to transfer research-based knowledge to school practitioners in the schools targeted by the project were recruited for the study. A snowball sampling strategy (Biernacki & Waldorf, 1981; Patton, 1990) was used to identify the people recognized by their peers for the quality of the support they offer school practitioners in using research in their practice. Sixteen semi-structured interviews were conducted. The analysis of the data leads to a proposed knowledge transfer intervention model comprising 32 influence strategies grouped into 6 intervention components: relational, cognitive, political, facilitative, evaluative, and continuous support and follow-up.
The results suggest that the relational, cognitive, and political strategies are interdependent and establish a favorable climate in which the agents can exert greater influence on school practitioners' appropriation of the knowledge-use process. They also show that the continuous support and follow-up component is important for sustaining changes in practitioners' use of research in their practice. The theoretical implications of the model, together with explanations of the mechanisms involved in the different components, are put into perspective with the scientific literature on knowledge transfer in the health and education sectors as well as with work from related disciplines (notably psychology). Finally, avenues for action in practice are proposed.
Abstract:
Atomic charge transfer-counter polarization effects determine most of the infrared fundamental CH intensities of simple hydrocarbons: methane, ethylene, ethane, propyne, cyclopropane, and allene. The quantum theory of atoms in molecules/charge-charge flux-dipole flux model predicted the values of 30 CH intensities ranging from 0 to 123 km mol⁻¹ with a root mean square (rms) error of only 4.2 km mol⁻¹, without including a specific equilibrium atomic charge term. Sums of the contributions from terms involving charge flux and/or dipole flux averaged 20.3 km mol⁻¹, about ten times larger than the average charge contribution of 2.0 km mol⁻¹. The only notable exceptions are the CH stretching and bending intensities of acetylene and two of the propyne vibrations for hydrogens bound to sp-hybridized carbon atoms. Calculations were carried out at four quantum levels: MP2/6-311++G(3d,3p), MP2/cc-pVTZ, QCISD/6-311++G(3d,3p), and QCISD/cc-pVTZ. The results calculated at the QCISD level are the most accurate of the four, with rms errors of 4.7 and 5.0 km mol⁻¹ for the 6-311++G(3d,3p) and cc-pVTZ basis sets, respectively. These values are close to the estimated aggregate experimental error of the hydrocarbon intensities, 4.0 km mol⁻¹. The atomic charge transfer-counter polarization effect is much larger than the charge effect at all four quantum levels. Charge transfer-counter polarization effects are expected to also be important in vibrations of more polar molecules, for which equilibrium charge contributions can be large.