974 results for deadweight losses


Relevance: 10.00%

Abstract:

[ES] The project consists of analyzing a building, located in the northern part of the peninsula, that requires an energy retrofit. All the elements responsible for the largest energy losses are examined: windows, façade, roof, lighting, and boiler. Improvement proposals are then developed to reduce these losses, and the budget for the investment the client must make is drawn up, together with the profitability of the project. Once the study is complete, the works would be carried out, resulting in an energy-efficient and cost-effective building.

Relevance: 10.00%

Abstract:

The object of study was the preparation and administration of medications via feeding tube by nursing staff in patients receiving enteral nutrition. The general objective was to investigate the pattern of preparation and administration of medications via tube in patients concurrently receiving enteral nutrition. The specific objectives were to profile the medications prepared and administered according to their suitability for administration through an enteral tube, and to assess the type and frequency of errors occurring in the preparation and administration of medications via tube. The study had a cross-sectional, observational design with no intervention model. It was carried out in a hospital in Rio de Janeiro, where nursing technicians were observed preparing and administering medications via tube in the Intensive Care Unit. A total of 350 medication doses were observed being prepared and administered. The prevalent medication groups were those acting on the cardiovascular-renal system, with 164 doses (46.80%), followed by those acting on the respiratory system and on the blood, with 12.85% and 12.56% respectively. Nineteen different medications were found in the first group, two in the second, and five in the third. The categories of preparation error were crushing, dilution, and mixing. A mean error rate of 67.71% was found in medication preparation. Plain tablets were prepared incorrectly in 72.54% of doses, and all coated and extended-release tablets were improperly crushed. Among solid forms, the prevalent error category was crushing, at 45.47%; preparing medications mixed together was an error found in almost 40% of solid medication doses. Insufficient crushing occurred in 73.33% of doses of folic acid, 58.97% of amiodarone hydrochloride, and 50.00% of bromopride. Mixing with other medications occurred in 66.66% of doses of bromopride, 53.33% of amlodipine besylate, 43.47% of bamifylline, 40.00% of folic acid, and 33.33% of acetylsalicylic acid. The administration errors were the absence of a feeding pause and improper handling of the tube. The mean administration error rate was 32.64%, distributed as 17.14% for the pause and 48.14% for tube handling. Failure to flush the tube before administration was the most common error, and failure to flush it afterwards the least common. The medications most often involved in administration errors were amiodarone hydrochloride (n=39), captopril (n=33), hydralazine hydrochloride (n=7), and levothyroxine sodium (n=7). Tubes were not flushed beforehand in 330 medication doses. Improper preparation and administration of medications can lead to losses in bioavailability, lowered serum levels, and risks of intoxication for the patient. Preparing and administering medications are routine procedures, yet they showed high error rates, which may reflect these professionals' limited knowledge of good practice in medication therapy. There is a clear need for greater commitment from all the professionals involved (physicians, nurses, and pharmacists) to medication safety, and for rethinking the nursing work process.

Relevance: 10.00%

Abstract:

The intensities and relative abundances of galactic cosmic ray protons and antiprotons have been measured with the Isotope Matter Antimatter Experiment (IMAX), a balloon-borne magnet spectrometer. The IMAX payload had a successful flight from Lynn Lake, Manitoba, Canada on July 16, 1992. Particles detected by IMAX were identified by mass and charge via the Cherenkov-rigidity and TOF-rigidity techniques, with measured rms mass resolution ≤ 0.2 amu for Z = 1 particles.

Cosmic ray antiprotons are of interest because they can be produced by the interactions of high energy protons and heavier nuclei with the interstellar medium as well as by more exotic sources. Previous cosmic ray antiproton experiments have reported an excess of antiprotons over that expected solely from cosmic ray interactions.

Analysis of the flight data has yielded 124405 protons and 3 antiprotons in the energy range 0.19-0.97 GeV at the instrument, 140617 protons and 8 antiprotons in the energy range 0.97-2.58 GeV, and 22524 protons and 5 antiprotons in the energy range 2.58-3.08 GeV. These measurements are a statistical improvement over previous antiproton measurements, and they demonstrate improved separation of antiprotons from the more abundant fluxes of protons, electrons, and other cosmic ray species.
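As a quick consistency check (a minimal sketch, not the flight analysis), the raw ratios at the instrument follow directly from the quoted event counts; the top-of-atmosphere values below additionally include background and loss corrections that this omits:

```python
# Raw antiproton/proton ratios at the instrument, from the quoted event counts.
# Background subtraction and atmospheric corrections (applied in the text below)
# are deliberately omitted here.
counts = {
    "0.19-0.97 GeV": (3, 124405),
    "0.97-2.58 GeV": (8, 140617),
    "2.58-3.08 GeV": (5, 22524),
}

for energy_range, (n_pbar, n_p) in counts.items():
    print(f"{energy_range}: raw pbar/p = {n_pbar / n_p:.2e}")
# -> 2.41e-05, 5.69e-05, 2.22e-04: same order as the corrected ratios quoted below
```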

When these results are corrected for instrumental and atmospheric background and losses, the ratios at the top of the atmosphere are p̄/p = 3.21(+3.49, -1.97) x 10^(-5) in the energy range 0.25-1.00 GeV, p̄/p = 5.38(+3.48, -2.45) x 10^(-5) in the energy range 1.00-2.61 GeV, and p̄/p = 2.05(+1.79, -1.15) x 10^(-4) in the energy range 2.61-3.11 GeV. The corresponding antiproton intensities, also corrected to the top of the atmosphere, are 2.3(+2.5, -1.4) x 10^(-2) (m^2 s sr GeV)^(-1), 2.1(+1.4, -1.0) x 10^(-2) (m^2 s sr GeV)^(-1), and 4.3(+3.7, -2.4) x 10^(-2) (m^2 s sr GeV)^(-1) for the same energy ranges.

The IMAX antiproton fluxes and antiproton/proton ratios are compared with recent Standard Leaky Box Model (SLBM) calculations of the cosmic ray antiproton abundance. According to this model, cosmic ray antiprotons are secondary cosmic rays arising solely from the interaction of high energy cosmic rays with the interstellar medium. The effects of solar modulation of protons and antiprotons are also calculated, showing that the antiproton/proton ratio can vary by as much as an order of magnitude over the solar cycle. When solar modulation is taken into account, the IMAX antiproton measurements are found to be consistent with the most recent calculations of the SLBM. No evidence is found in the IMAX data for excess antiprotons arising from the decay of galactic dark matter, which had been suggested as an interpretation of earlier measurements. Furthermore, the consistency of the current results with the SLBM calculations suggests that the mean antiproton lifetime is at least as large as the cosmic ray storage time in the galaxy (~10^7 yr, based on measurements of cosmic ray ^(10)Be). Recent measurements by two other experiments are consistent with this interpretation of the IMAX antiproton results.

Relevance: 10.00%

Abstract:

An instrument, the Caltech High Energy Isotope Spectrometer Telescope (HEIST), has been developed to measure isotopic abundances of cosmic ray nuclei in the charge range 3 ≤ Z ≤ 28 and the energy range between 30 and 800 MeV/nuc by employing an energy loss -- residual energy technique. Measurements of particle trajectories and energy losses are made using a multiwire proportional counter hodoscope and a stack of CsI(Tl) crystal scintillators, respectively. A detailed analysis has been made of the mass resolution capabilities of this instrument.

Landau fluctuations set a fundamental limit on the attainable mass resolution, which for this instrument ranges between ~0.07 amu for Z ≈ 3 and ~0.2 amu for Z ≈ 26. Contributions to the mass resolution due to uncertainties in measuring the path length and energy losses of the detected particles are shown to degrade the overall mass resolution to between ~0.1 amu (Z ≈ 3) and ~0.3 amu (Z ≈ 26).
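A minimal sketch of the energy-loss / residual-energy technique, assuming the usual power-law range-energy relation R(E) = k M^(1-a) E^a / Z^2 (the constants k and a below are illustrative fit parameters, not values from the thesis):

```python
# Mass identification from a dE/dx - E measurement: the particle deposits
# delta_e in a detector layer of thickness t (in range units) and stops with
# residual energy e_resid, so R(delta_e + e_resid) - R(e_resid) = t.
# Solving the power-law range relation R = k * M**(1-a) * E**a / Z**2 for M:
def mass_estimate(delta_e, e_resid, z, thickness, k=1.0, a=1.77):
    e_total = delta_e + e_resid
    return (k * (e_total**a - e_resid**a) / (z**2 * thickness)) ** (1.0 / (a - 1.0))
```

In practice k and a come from fits to range-energy tables for the detector material, and the fluctuations discussed above set the spread of the resulting mass estimates.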

A formalism, based on the leaky box model of cosmic ray propagation, is developed for obtaining isotopic abundance ratios at the cosmic ray sources from abundances measured in local interstellar space for elements having three or more stable isotopes, one of which is believed to be absent at the cosmic ray sources. This purely secondary isotope is used as a tracer of secondary production during propagation. This technique is illustrated for the isotopes of the elements O, Ne, S, Ar and Ca.
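In outline (my notation, consistent with standard leaky box treatments rather than quoted from the thesis), the equilibrium abundance of isotope i satisfies

$$ N_i\left(\frac{1}{\Lambda_{\mathrm{esc}}}+\frac{1}{\Lambda_i}\right) = Q_i+\sum_{j}\frac{N_j}{\Lambda_{j\to i}}, $$

where Λ_esc is the mean escape path length, Λ_i the interaction mean free path of species i, Λ_{j→i} the mean free path for fragmentation of species j into i, and Q_i the source term. For the purely secondary tracer isotope Q_i = 0, so its observed abundance calibrates the secondary production term, and the source ratios of the remaining isotopes follow by subtraction.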

The uncertainties in the derived source ratios due to errors in fragmentation and total inelastic cross sections, in observed spectral shapes, and in measured abundances are evaluated. It is shown that the dominant sources of uncertainty are uncorrelated errors in the fragmentation cross sections and statistical uncertainties in measuring local interstellar abundances.

These results are applied to estimate the extent to which uncertainties must be reduced in order to distinguish between cosmic ray production in a solar-like environment and in various environments with greater neutron enrichments.

Relevance: 10.00%

Abstract:

Analysis of the data from the Heavy Nuclei Experiment on the HEAO-3 spacecraft has yielded the cosmic ray abundances of odd-even element pairs with atomic number, Z, in the range 33 ≤ Z ≤ 60, and the abundances of broad element groups in the range 62 ≤ Z ≤ 83, relative to iron. These data show that the cosmic ray source composition in this charge range is quite similar to that of the solar system, provided an allowance is made for a source fractionation based on first ionization potential. The observations are inconsistent with a source composition dominated by either r-process or s-process material, whether or not an allowance is made for first ionization potential. Although the observations do not exclude a source containing the same mixture of r- and s-process material as the solar system, the data are best fit by a source having an r- to s-process ratio of 1.22^(+0.25)_(-0.21) relative to the solar system. The abundances of secondary elements are consistent with the leaky box model of galactic propagation, implying a pathlength distribution similar to that which explains the abundances of nuclei with Z < 29.
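One natural reading of the quoted best-fit ratio (an illustrative sketch, not the thesis's stated equations): the source abundance of each element Z is modeled as a solar mixture with a rescaled r-process component,

$$ N_Z^{\mathrm{src}} \propto f\,N_Z^{r,\odot}+N_Z^{s,\odot},\qquad f=1.22^{+0.25}_{-0.21}, $$

where N_Z^{r,⊙} and N_Z^{s,⊙} are the r- and s-process contributions to the solar system abundances.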

The energy spectra of the even elements in the range 38 ≤ Z ≤ 60 are found to have a deficiency of particles in the range ~1.5 to 3 GeV/amu compared to iron. This deficiency may result from ionization energy loss in the interstellar medium, and is not predicted by propagation models which ignore such losses. In addition, the energy spectra of secondary elements are found to differ from those of the primary elements. Such effects are consistent with observations of lighter nuclei, and are in qualitative agreement with galactic propagation models using a rigidity-dependent escape length. The energy spectra of secondaries arising from the platinum group are found to be much steeper than those of lower Z. This effect may result from energy-dependent fragmentation cross sections.

Relevance: 10.00%

Abstract:

For the first time in China, the phase variations and phase shift of the microwave cavity in a miniature Rb fountain frequency standard are studied, taking into account the effect of imperfect metallic walls. Wall losses in the microwave cavity give rise to small traveling-wave components that carry power from the cavity feed to the cavity walls. These traveling-wave components produce a microradian-scale phase distribution throughout the cavity, so distributed cavity phase shifts need to be considered. The microwave cavity is a TE011 cylindrical copper cavity, with round holes (14 mm in diameter) cut in the end plates to pass the atomic flux and two small apertures at the center of the side wall for coupling in microwave power. After the attenuation constant α is calculated, the field variations in the cavity are solved and presented. The influences of the loaded quality factor QL and the diameter-to-height ratio (2a/d) of the microwave cavity on the phase variations and phase shift are also considered. Based on the computed phase variation and phase shift, the cavity parameters are chosen as diameter 2a = 69.2 mm, height d = 34.6 mm, and QL = 5000, which results in an uncertainty δ(Δf/f0) < 4.7 x 10^(-17) and meets the requirement for a miniature Rb fountain frequency standard with accuracy 10^(-15).
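As a quick sanity check on the quoted dimensions (a minimal sketch, not from the paper), the TE011 resonance of an ideal cylindrical cavity can be computed in closed form:

```python
import math

# Resonant frequency of the TE011 mode of an ideal circular cylindrical cavity:
# f = (c / 2*pi) * sqrt((x'_01 / a)^2 + (pi / d)^2), where x'_01 = 3.8317 is the
# first root of J0' (equivalently the first nonzero root of J1).
C = 299_792_458.0   # speed of light, m/s
X01P = 3.8317

def te011_frequency(radius_m, height_m):
    return (C / (2 * math.pi)) * math.hypot(X01P / radius_m, math.pi / height_m)

# Quoted cavity: diameter 2a = 69.2 mm (radius 34.6 mm), height d = 34.6 mm
print(f"{te011_frequency(0.0346, 0.0346) / 1e9:.3f} GHz")
# -> ~6.837 GHz, right at the ~6.835 GHz Rb-87 ground-state hyperfine transition
```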

Relevance: 10.00%

Abstract:

Surface plasma waves arise from the collective oscillations of billions of electrons at the surface of a metal in unison. The simplest way to quantize these waves is by direct analogy to electromagnetic fields in free space, with the surface plasmon, the quantum of the surface plasma wave, playing the same role as the photon. It follows that surface plasmons should exhibit all of the same quantum phenomena that photons do, including quantum interference and entanglement.

Unlike photons, however, surface plasmons suffer strong losses that arise from the scattering of free electrons from other electrons, phonons, and surfaces. Under some circumstances, these interactions might also cause “pure dephasing,” which entails a loss of coherence without absorption. Quantum descriptions of plasmons usually do not account for these effects explicitly, and sometimes ignore them altogether. In light of this extra microscopic complexity, it is necessary for experiments to test quantum models of surface plasmons.

In this thesis, I describe two such tests that my collaborators and I performed. The first was a plasmonic version of the Hong-Ou-Mandel experiment, in which we observed two-particle quantum interference between plasmons with a visibility of 93 ± 1%. This measurement confirms that surface plasmons faithfully reproduce this effect with the same visibility and mutual coherence time, to within measurement error, as in the photonic case.
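For reference, one standard way to quantify the dip (my notation, not necessarily the thesis's): with C(τ) the coincidence rate at relative delay τ and C_∞ its value at delays much longer than the mutual coherence time,

$$ V=\frac{C_\infty-C(0)}{C_\infty}, $$

so indistinguishable single bosons give V → 1, while classical fields cannot exceed V = 1/2.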

The second experiment demonstrated path entanglement between surface plasmons with a visibility of 95 ± 2%, confirming that a path-entangled state can indeed survive without measurable decoherence. This measurement suggests that elastic scattering mechanisms of the type that might cause pure dephasing must have been weak enough not to significantly perturb the state of the metal under the experimental conditions we investigated.

These two experiments add quantum interference and path entanglement to a growing list of quantum phenomena that surface plasmons appear to exhibit just as clearly as photons, confirming the predictions of the simplest quantum models.

Relevance: 10.00%

Abstract:

The problem is to calculate the attenuation of plane sound waves passing through a viscous, heat-conducting fluid containing small spherical inhomogeneities. The attenuation is calculated by evaluating the rate of increase of entropy caused by two irreversible processes: (1) the mechanical work done by the viscous stresses in the presence of velocity gradients, and (2) the flow of heat down the thermal gradients. The method is first applied to a homogeneous fluid with no spheres and shown to give the classical Stokes-Kirchhoff expressions. The method is then used to calculate the additional viscous and thermal attenuation when small spheres are present. The viscous attenuation agrees with Epstein's result obtained in 1941 for a non-heat-conducting fluid. The thermal attenuation is found to be similar in form to the viscous attenuation and, for gases, of comparable magnitude. The general results are applied to the case of water drops in air and air bubbles in water.
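For reference (standard notation, not quoted from the thesis), the classical Stokes-Kirchhoff attenuation coefficient recovered in the homogeneous-fluid step has the form

$$ \alpha_{\mathrm{SK}}=\frac{\omega^{2}}{2\rho c^{3}}\left[\frac{4}{3}\mu+(\gamma-1)\frac{\kappa}{c_{p}}\right], $$

where ω is the angular frequency, ρ the density, c the sound speed, μ the shear viscosity, γ the ratio of specific heats, κ the thermal conductivity, and c_p the specific heat at constant pressure; the two terms are exactly the viscous and thermal entropy-production channels described above.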

For water drops in air the viscous and thermal attenuations are comparable; the thermal losses occur almost entirely in the air, the thermal dissipation in the water being negligible. The theoretical values are compared with Knudsen's experimental data for fogs and found to agree in order of magnitude and dependence on frequency. For air bubbles in water the viscous losses are negligible and the calculated attenuation is almost completely due to thermal losses occurring in the air inside the bubbles, the thermal dissipation in the water being relatively small. (These results apply only to non-resonant bubbles whose radius changes but slightly during the acoustic cycle.)

Relevance: 10.00%

Abstract:

The current power grid is on the cusp of modernization due to the emergence of distributed generation and controllable loads, as well as renewable energy. On one hand, distributed and renewable generation is volatile and difficult to dispatch. On the other hand, controllable loads provide significant potential for compensating for the uncertainties. In a future grid where there are thousands or millions of controllable loads and a large portion of the generation comes from volatile sources like wind and solar, distributed control that shifts or reduces the power consumption of electric loads in a reliable and economic way would be highly valuable.

Load control needs to be conducted with network awareness. Otherwise, voltage violations and overloading of circuit devices are likely. To model these effects, network power flows and voltages have to be considered explicitly. However, the physical laws that determine power flows and voltages are nonlinear. Furthermore, while distributed generation and controllable loads are mostly located in distribution networks that are multiphase and radial, most of the power flow studies focus on single-phase networks.

This thesis focuses on distributed load control in multiphase radial distribution networks. In particular, we first study distributed load control without considering network constraints, and then consider network-aware distributed load control.

Distributed implementation of load control is the main challenge if network constraints can be ignored. In this case, we first ignore the uncertainties in renewable generation and load arrivals, and propose a distributed load control algorithm, Algorithm 1, that optimally schedules the deferrable loads to shape the net electricity demand. Deferrable loads refer to loads whose total energy consumption is fixed, but whose energy usage can be shifted over time in response to network conditions. Algorithm 1 is a distributed gradient descent algorithm, and empirically converges to optimal deferrable load schedules within 15 iterations; a sketch of the idea follows.
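A minimal sketch of this style of distributed scheduling (not Algorithm 1 itself; all names and constants are illustrative): each load runs a projected gradient step against a broadcast aggregate profile, flattening net demand while keeping its own total energy fixed.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 24, 50                                             # time slots, loads
base = 1.0 + 0.5 * np.sin(np.linspace(0, 2 * np.pi, T))  # non-deferrable demand
energy = rng.uniform(0.5, 1.5, size=N)                    # each load's fixed total energy
p = np.ones((N, T)) * (energy[:, None] / T)               # start from flat schedules

step = 0.5 / N
for _ in range(15):                              # ~15 iterations, as in the text
    aggregate = base + p.sum(axis=0)             # broadcast by the coordinator
    p -= step * aggregate                        # gradient of 0.5 * sum_t aggregate(t)^2
    # Project each schedule back onto {sum_t p_n(t) = energy_n}; nonnegativity
    # and rate limits are omitted for brevity.
    p += (energy[:, None] - p.sum(axis=1, keepdims=True)) / T

print("std of net demand:", np.std(base + p.sum(axis=0)))
```

Only the aggregate profile is broadcast; each load updates its own schedule locally, which is what makes this kind of scheme distributed.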

We then extend Algorithm 1 to a real-time setup where deferrable loads arrive over time, and only imprecise predictions about future renewable generation and load are available at the time of decision making. The real-time algorithm, Algorithm 2, is based on model-predictive control: Algorithm 2 treats updated predictions of renewable generation as the true values, and computes a pseudo load to stand in for future deferrable loads. The pseudo load consumes 0 power at the current time step, and its total energy consumption equals the expected total energy request of future deferrable loads.

Network constraints, e.g., transformer loading constraints and voltage regulation constraints, pose a significant challenge to the load control problem, since power flows and voltages are governed by nonlinear physical laws. Moreover, distribution networks are usually multiphase and radial. Two approaches are explored to overcome this challenge: one based on convex relaxation, and one that seeks a locally optimal load schedule.

To explore the convex relaxation approach, a novel but equivalent power flow model, the branch flow model, is developed, and a semidefinite programming relaxation, called BFM-SDP, is obtained using the branch flow model. BFM-SDP is mathematically equivalent to a standard convex relaxation proposed in the literature, but is numerically much more stable. Empirical studies show that BFM-SDP is numerically exact for the IEEE 13-, 34-, 37-, and 123-bus networks and a real-world 2065-bus network, while the standard convex relaxation is numerically exact for only two of these networks.
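For the single-phase radial case, the branch flow variables are squared voltage magnitudes v_i, squared line current magnitudes ℓ_ij, and sending-end line flows P_ij, Q_ij; a standard rendering of the model (my notation, not a quotation from the thesis) is

$$
\begin{aligned}
\sum_{i\to j}\bigl(P_{ij}-r_{ij}\ell_{ij}\bigr)+p_j &= \sum_{j\to k}P_{jk},\\
\sum_{i\to j}\bigl(Q_{ij}-x_{ij}\ell_{ij}\bigr)+q_j &= \sum_{j\to k}Q_{jk},\\
v_j &= v_i-2\bigl(r_{ij}P_{ij}+x_{ij}Q_{ij}\bigr)+\bigl(r_{ij}^2+x_{ij}^2\bigr)\ell_{ij},\\
\ell_{ij} &= \frac{P_{ij}^2+Q_{ij}^2}{v_i}.
\end{aligned}
$$

Relaxing the last equality to the inequality ℓ_ij ≥ (P_ij² + Q_ij²)/v_i gives a second-order cone relaxation; BFM-SDP is the analogous semidefinite relaxation built on these variables for the multiphase setting.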

Theoretical guarantees on the exactness of convex relaxations are provided for two types of networks: single-phase radial alternating-current (AC) networks, and single-phase mesh direct-current (DC) networks. In particular, for single-phase radial AC networks, we prove that a second-order cone program (SOCP) relaxation is exact if voltage upper bounds are not binding; we also modify the optimal load control problem so that its SOCP relaxation is always exact. For single-phase mesh DC networks, we prove that an SOCP relaxation is exact if 1) voltage upper bounds are not binding, or 2) voltage upper bounds are uniform and power injection lower bounds are strictly negative; we also modify the optimal load control problem so that its SOCP relaxation is always exact.

To seek a locally optimal load schedule, a distributed gradient-descent algorithm, Algorithm 9, is proposed. The suboptimality gap of the algorithm is rigorously characterized and is close to zero for practical networks. Furthermore, unlike the convex relaxation approach, Algorithm 9 ensures a feasible solution. The gradients used in Algorithm 9 are estimated based on a linear approximation of the power flow (sketched below), which is derived under the following assumptions: 1) line losses are negligible; and 2) voltages are reasonably balanced. Both assumptions are satisfied in practical distribution networks. Empirical results show that Algorithm 9 achieves a speedup of more than 70x over the convex relaxation approach, at the cost of a suboptimality within numerical precision.
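A sketch of that linear approximation (the standard LinDistFlow form implied by assumption 1; notation as in the branch flow model above): setting the loss terms ℓ_ij to zero gives

$$ \sum_{i\to j}P_{ij}+p_j=\sum_{j\to k}P_{jk},\qquad v_j \approx v_i-2\bigl(r_{ij}P_{ij}+x_{ij}Q_{ij}\bigr), $$

which is linear in the load schedules, so the gradients of voltages and flows with respect to each load's consumption have closed forms.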

Relevance: 10.00%

Abstract:

Terns and skimmers nesting on saltmarsh islands often suffer large nest losses due to tidal and storm flooding. Nests located near the center of an island and on wrack (mats of dead vegetation, mostly eelgrass Zostera) are less susceptible to flooding than those near the edge of an island and those on bare soil or in saltmarsh cordgrass (Spartina alterniflora). In the 1980s Burger and Gochfeld constructed artificial eelgrass mats on saltmarsh islands in Ocean County, New Jersey. These mats were used as nesting substrate by common terns (Sterna hirundo) and black skimmers (Rynchops niger). Every year since 2002 I have transported eelgrass to one of their original sites to make artificial mats. This site, Pettit Island, typically supports between 125 and 200 pairs of common terns. There has often been very little natural wrack present on the island at the start of the breeding season, and in most years natural wrack has been most common along the edges of the island. The terns readily used the artificial mats for nesting substrate. Because I placed artificial mats in the center of the island, the terns have often avoided the large nest losses incurred by terns nesting in peripheral locations. However, during particularly severe flooding events even centrally located nests on mats are vulnerable. Construction of eelgrass mats represents an easy habitat manipulation that can improve the nesting success of marsh-nesting seabirds.

Relevance: 10.00%

Abstract:

Pulse-height and time-of-flight methods have been used to measure the electronic stopping cross sections for projectiles of 12C, 16O, 19F, 23Na, 24Mg, and 27Al, slowing in helium, neon, argon, krypton, and xenon. The ion energies were in the range 185 keV ≤ E ≤ 2560 keV.

A semiempirical calculation of the electronic stopping cross section for projectiles with atomic numbers between 6 and 13 passing through the inert gases has been performed using a modification of the Firsov model. Using Hartree-Fock-Slater orbitals, and summing over the losses for the individual charge states of the projectiles, good agreement has been obtained with the experimental data. The main features of the stopping cross section seen in the data, such as the Z1 oscillation and the variation of the velocity dependence with Z1 and Z2, are present in the calculation. Including a modified form of the Bethe-Bloch formula as an additional term allows the increase of the velocity dependence for projectile velocities above v0 to be reproduced in the calculation.
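For context (schematic standard forms, not the thesis's exact expressions): Firsov-type models give an electronic stopping cross section proportional to the projectile velocity at low velocity, and the calculation above supplements this with a Bethe-Bloch-like term that grows faster above the Bohr velocity v0:

$$ S_e(v) \approx S_{\mathrm{Firsov}}(v)+S_{\mathrm{BB}}(v),\qquad S_{\mathrm{Firsov}}\propto (Z_1+Z_2)\,\frac{v}{v_0}\quad (v\lesssim v_0). $$

In the modified calculation, the Firsov term is evaluated per projectile charge state with the Hartree-Fock-Slater orbitals and then summed over charge states.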

Relevance: 10.00%

Abstract:

Experimental research was conducted to study the development of the eggs of Eudiaptomus gracilis Sars. Egg production was studied, as well as the population dynamics. Factors such as losses within the lake and through its Rhine outflow at Konstanz were considered.

Relevance: 10.00%

Abstract:

The border is a fragile area; the free movement of people in the region means it is heavily policed, for security as much as against smuggling and trafficking. This ease of transit between countries draws people who travel long distances, often at imminent risk to their lives, in search of medical care that does not exist in their country of origin. Paying this bill, like keeping the statistics, is the responsibility of the country that provided the care. State and municipal administrators try to manage this situation as best they can, without incurring financial losses in their budgets. Drawing on international experience with partnerships between border cities (transfronteirização, or cross-border integration), this dissertation's central aim is to analyze the case of the municipality of Foz do Iguaçu, where the problems of Brazilian health policy at the borders appear in their most acute form. The work presents the state of health financing on the western border of the State of Paraná and proposes a cooperation agreement covering both care and financing.

Relevance: 10.00%

Abstract:

This work analyzes corporate fraud from an international perspective. In recent years fraud has figured prominently in cases such as Enron, WorldCom, Royal Ahold, and PARMALAT, affecting the company itself, its employees, the government and, especially, investors, with losses that can reach millions of dollars. Fraud also damages the image of companies and the motivation of employees, and it is frequently the cause of lawsuits and prison sentences. The frequency with which fraudulent acts are committed, and the losses they cause, vary with company size and sector. These frauds likewise affect every region of the world, although unevenly. It is in the more developed regions, however, that the issue receives the most attention and where numerous measures have been taken to try to prevent these illicit acts. Among the most important are those proposed by the United Nations: the Global Compact and the Convention against Corruption. Also noteworthy are the European Anti-Fraud Office at the European level, and the Foreign Corrupt Practices Act and the Sarbanes-Oxley Act in the United States. Despite these measures, the level of fraud has increased in recent years.

Relevance: 10.00%

Abstract:

Our understanding of the processes and mechanisms by which secondary organic aerosol (SOA) is formed is derived from laboratory chamber studies. In the atmosphere, SOA formation is primarily driven by progressive photooxidation of SOA precursors, coupled with their gas-particle partitioning. In the chamber environment, SOA-forming vapors undergo multiple chemical and physical processes that involve production and removal via gas-phase reactions; partitioning onto suspended particles vs. particles deposited on the chamber wall; and direct deposition on the chamber wall. The main focus of this dissertation is to characterize the interactions of organic vapors with suspended particles and the chamber wall and explore how these intertwined processes in laboratory chambers govern SOA formation and evolution.

A Functional Group Oxidation Model (FGOM) is developed that represents SOA formation and evolution in terms of the competition between functionalization and fragmentation, the extent of oxygen atom addition, and the change in volatility. The FGOM contains a set of parameters that are determined by fitting the model to laboratory chamber data. The sensitivity of the model prediction to variation of the adjustable parameters allows one to assess the relative importance of various pathways involved in SOA formation.

A critical aspect of the environmental chamber is the presence of the wall, which can induce deposition of SOA-forming vapors and promote heterogeneous reactions. An experimental protocol and model framework are first developed to constrain the vapor-wall interactions. By optimally fitting the model predictions to the observed wall-induced decay profiles of 25 oxidized organic compounds, the dominant parameter governing the extent of wall deposition of a compound is identified: the wall accommodation coefficient. By correlating this parameter with the molecular properties of a compound via its volatility, the wall-induced deposition rate of an organic compound can be predicted from the numbers of carbon and oxygen atoms in the molecule.
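A minimal sketch of the kind of constraint involved (not the dissertation's model framework; the data below are synthetic placeholders): wall-induced vapor decay is commonly described as first-order, and the fit extracts the decay rate for each compound.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic stand-in for an observed wall-induced decay profile C(t).
rng = np.random.default_rng(1)
t = np.linspace(0.0, 3600.0, 25)                                # time, s
c_obs = 10.0 * np.exp(-2.0e-4 * t) * (1 + 0.02 * rng.standard_normal(t.size))

# First-order wall loss: C(t) = C0 * exp(-kw * t).
def first_order(t, c0, kw):
    return c0 * np.exp(-kw * t)

(c0_fit, kw_fit), _ = curve_fit(first_order, t, c_obs, p0=(c_obs[0], 1e-4))
print(f"fitted wall-loss rate kw = {kw_fit:.2e} s^-1")          # ~2e-4 s^-1 here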

Heterogeneous transformation of δ-hydroxycarbonyl, a major first-generation product from long-chain alkane photochemistry, is observed on the surfaces of particles and walls. The unique feature of this reaction scheme is the production of substituted dihydrofuran, which is highly reactive toward ozone, OH, and NO3, thereby opening a reaction pathway that is not usually accessible to alkanes. A spectrum of highly oxygenated products with carboxylic acid, ester, and ether functional groups is produced from the substituted dihydrofuran chemistry, thereby affecting the average oxidation state of the alkane-derived SOA.

The vapor wall loss correction is applied to several chamber-derived SOA systems generated from both anthropogenic and biogenic sources. Experimental and modeling approaches are employed to constrain the partitioning behavior of SOA-forming vapors onto suspended particles vs. chamber walls. It is demonstrated that deposition of SOA-forming vapors to the chamber wall during photooxidation experiments can lead to substantial and systematic underestimation of SOA. Therefore, it is likely that a lack of proper accounting for vapor wall losses, which suppress chamber-derived SOA yields, contributes substantially to the underprediction of ambient SOA concentrations in atmospheric models.