986 results for Cure fraction models


Relevance:

30.00%

Publisher:

Abstract:

Solid-state dye-sensitized solar cells rely on effective infiltration of a solid-state hole-transporting material into the pores of a nanoporous TiO2 network to allow for dye regeneration and hole extraction. Using microsecond transient absorption spectroscopy and femtosecond photoluminescence upconversion spectroscopy, the hole-transfer yield from the dye to the hole-transporting material 2,2′,7,7′-tetrakis(N,N-di-p-methoxyphenylamine)-9,9′-spirobifluorene (spiro-OMeTAD) is shown to rise rapidly with higher pore-filling fractions as the dye-coated pore surface is increasingly covered with hole-transporting material. Once a pore-filling fraction of ≈30% is reached, further increases do not significantly change the hole-transfer yield. Using simple models of infiltration of spiro-OMeTAD into the TiO2 porous network, it is shown that this pore-filling fraction is less than the amount required to cover the dye surface with even a single layer of hole-transporting material, suggesting that charge diffusion through the dye monolayer network precedes transfer to the hole-transporting material. Comparison of these results with device parameters shows that improvements of the power-conversion efficiency beyond ≈30% pore filling are not caused by a higher hole-transfer yield but by a higher charge-collection efficiency, which is found to increase in steps. The observed sharp onsets in photocurrent and power-conversion efficiency with increasing pore-filling fraction correlate well with percolation theory, which predicts the points at which cohesive pathways form in successive spiro-OMeTAD layers adhered to the pore walls. Percolation theory further predicts that, for standard mesoporous TiO2 with a 20 nm pore size, the photocurrent should show no further improvement beyond an ≈83% pore-filling fraction. In summary, solid-state dye-sensitized solar cells capable of complete hole transfer with pore-filling fractions as low as ≈30% are demonstrated; improvements of device efficiency beyond ≈30% are explained by a stepwise increase in charge-collection efficiency in agreement with percolation theory, which also predicts that, for a 20 nm pore size, the photocurrent reaches a maximum at an ≈83% pore-filling fraction.
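
To make the pore-filling numbers above concrete, here is a minimal geometric sketch (not the paper's percolation analysis, just simple cylindrical-pore geometry): the pore-filling fraction produced by n conformal hole-transporter layers coating a pore. The 20 nm pore diameter comes from the abstract; the 1 nm per-layer thickness and the helper function are assumed placeholders.

```python
# Rough geometric illustration (not the paper's percolation calculation):
# the pore-filling fraction produced by n conformal hole-transporter layers
# of thickness t inside a cylindrical pore of diameter d. The 20 nm diameter
# is from the abstract; the 1 nm layer thickness is an assumed placeholder.

def filling_fraction(n_layers, layer_nm=1.0, pore_diameter_nm=20.0):
    """Volume fraction of a cylindrical pore occupied by n conformal layers."""
    open_radius = max(pore_diameter_nm / 2.0 - n_layers * layer_nm, 0.0)
    return 1.0 - (2.0 * open_radius / pore_diameter_nm) ** 2

for n in range(1, 11):
    print(f"{n:2d} layer(s) -> pore filling ~ {filling_fraction(n):.0%}")
```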

Relevance:

30.00%

Publisher:

Abstract:

Remote sensing based production efficiency models (PEMs) spring from the concept of light use efficiency and are increasingly used to estimate terrestrial net primary productivity (NPP) regionally and globally. However, global NPP estimates vary greatly among models because of differences in data sources and handling methods, and because direct observation or measurement of NPP is unavailable at the global scale, the precision and reliability of the models cannot be guaranteed. The accuracy of the models can nevertheless be improved through their input parameters. In this study, five remote sensing based PEMs were compared: CASA, GLO-PEM, TURC, SDBM and VPM. We divided the input parameters into three categories and analysed the uncertainty of (1) vegetation distribution, (2) the fraction of photosynthetically active radiation absorbed by the canopy (fPAR) and (3) light use efficiency (ε). Ground measurements from the Hulunbeier typical grassland and meteorological measurements were used for accuracy evaluation. Results show that a real-time, more accurate vegetation distribution could significantly affect model accuracy, since it is applied directly or indirectly in all models and simultaneously affects other parameters. Higher spatial and spectral resolution remote sensing data may reduce the uncertainty of fPAR by up to 51.3%, which is essential for improving model accuracy.
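
The light-use-efficiency formulation that these PEMs share can be sketched as below (CASA-like form). Every number in the example is illustrative; none is a parameter of CASA, GLO-PEM, TURC, SDBM or VPM, and the helper name is hypothetical.

```python
# Minimal sketch of the light-use-efficiency formulation shared by the PEMs
# discussed above (CASA-like form). All values below are illustrative.

def npp_lue(par_mj, fpar, eps_max, t_scalar=1.0, w_scalar=1.0):
    """NPP (g C m-2) = PAR (MJ m-2) * fPAR * eps_max (g C MJ-1) * stress scalars."""
    return par_mj * fpar * eps_max * t_scalar * w_scalar

# Example: a month with 250 MJ m-2 of PAR, fPAR = 0.6, eps_max = 0.5 g C MJ-1,
# moderate temperature and water limitation.
print(npp_lue(par_mj=250, fpar=0.6, eps_max=0.5, t_scalar=0.8, w_scalar=0.7))  # ~42 g C m-2
```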

Relevance:

30.00%

Publisher:

Abstract:

The curing temperature, pressure, and curing time have a significant influence on finished thermosetting composite products. The time of pressure application is one of the most important processing parameters in the manufacture of a thermosetting composite, and its determination relies on analysis of the viscosity variation of the polymer in relation to curing temperature and curing time. To determine it, the influence of the time of pressure application on the physical properties of an epoxy-terminated poly(phenylene ether ketone) (E-PEK)-based continuous carbon fiber composite was studied. It was found that a stepwise temperature cure cycle is more suitable for the manufacture of this composite. For the E-PEK system there are two viscosity valleys associated with temperature during a stepwise cure cycle. Analysis of the effects of reinforcement fraction and defect content on composite sheet quality indicates that the width-adjustable second viscosity valley provides a suitable pressing window. A viscosity of 400 to 1200 Pa·s at the second viscosity valley is the optimal range for applying pressure to ensure appropriate resin flow during the curing process, which yields a finished composite with an optimal fiber volume fraction and low void content.
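
As an illustration of how a 400-1200 Pa·s pressing window could be located from a measured viscosity-time profile, a small sketch follows. The profile values are invented and the helper function is hypothetical, not the authors' procedure.

```python
# Hypothetical helper for locating the pressing window from a measured
# viscosity-time profile, using the 400-1200 Pa·s band quoted above as the
# criterion. The profile values are invented for demonstration.

def pressing_windows(times_min, viscosities_pa_s, lo=400.0, hi=1200.0):
    """Return contiguous time intervals during which lo <= viscosity <= hi."""
    windows, start, prev_t = [], None, None
    for t, eta in zip(times_min, viscosities_pa_s):
        if lo <= eta <= hi:
            if start is None:
                start = t
        elif start is not None:
            windows.append((start, prev_t))
            start = None
        prev_t = t
    if start is not None:
        windows.append((start, prev_t))
    return windows

times = [0, 10, 20, 30, 40, 50, 60, 70, 80]                 # min
eta   = [5000, 900, 300, 150, 450, 800, 1100, 2500, 9000]   # Pa·s
print(pressing_windows(times, eta))  # [(10, 10), (40, 60)]: the second is the usable window
```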

Relevance:

30.00%

Publisher:

Abstract:

Chemorheology and corresponding models for an epoxy-terminated poly(phenylene ether ketone) (E-PEK) and 4,4′-diaminodiphenyl sulfone (DDS) system were investigated using a differential scanning calorimeter (DSC) and a cone-and-plate rheometer. For this system, the reported four-parameter chemorheological model and the modified WLF chemorheological model can only be used for an isothermal or a nonisothermal process, respectively. To predict the resin viscosity variation during the stepwise temperature cure cycle actually used, a new model combining the four-parameter model and the modified WLF model was developed. The combined model predicts the resin viscosity variation during a stepwise temperature cure cycle more accurately than either of the two models alone. To simplify the establishment of this model, a new five-parameter chemorheological model was then developed; its parameters can be determined from very few rheology and DSC experiments. This model accurately describes the resin viscosity variation for isothermal, nonisothermal, or stepwise temperature cure cycles. The five-parameter chemorheological model has also been used successfully for E-PEK systems with two other curing agents, namely the diamine curing agent with the addition of a boron trifluoride monoethylamine (BF3-MEA) accelerator and an anhydride curing agent (hexahydrophthalic acid anhydride).
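
The four-parameter chemorheological model mentioned above commonly takes the dual-Arrhenius form; that is an assumption here, since the abstract does not spell it out. The sketch below evaluates that form under an isothermal hold, with placeholder parameters rather than the fitted E-PEK/DDS constants.

```python
import math

# Sketch of the dual-Arrhenius four-parameter chemorheological model commonly
# used for isothermal cure (assumed form; parameter values are placeholders):
#   ln eta(t, T) = ln eta_inf + dE_eta/(R*T) + t * k_inf * exp(-dE_k/(R*T))

R = 8.314  # J mol-1 K-1

def viscosity_iso(t_s, T_K, eta_inf=1e-9, dE_eta=6.0e4, k_inf=1e6, dE_k=7.0e4):
    """Isothermal viscosity (Pa·s) after t_s seconds at temperature T_K."""
    ln_eta = (math.log(eta_inf) + dE_eta / (R * T_K)
              + t_s * k_inf * math.exp(-dE_k / (R * T_K)))
    return math.exp(ln_eta)

for t in (0, 600, 1200, 1800):  # seconds at a 180 °C hold
    print(f"t = {t:4d} s: eta ~ {viscosity_iso(t, T_K=453.15):.3g} Pa·s")
```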

Relevance:

30.00%

Publisher:

Abstract:

Microwave curing of conductive adhesives and underfills can save considerable time and offer cost benefits for the microsystems and electronics packaging industry. In contrast to conventional ovens, curing by microwave energy generates heat internally within each individual component of an assembly. The rate at which heat is generated differs for each component and depends on the material properties as well as the oven power and frequency. This leads to a very complex and transient thermal state, which is extremely difficult to measure experimentally. Conductive adhesives need to be raised to a minimum temperature to initiate cross-linking of the resin polymers, whilst some advanced packaging materials currently under investigation impose a maximum temperature constraint to avoid damage. Thermal imaging equipment integrated with the microwave oven can offer some information on the thermal state, but such data are limited to surface temperatures. This paper describes computational models that can simulate the internal temperatures within each component of an assembly, including the critical region between the chip and substrate. The results demonstrate that, because of the small mass of adhesive used in the joints, the temperatures reached are highly dependent on the material properties of the adjacent chip and substrate.
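
For a feel of why components heat at different rates under microwave energy, here is a back-of-the-envelope volumetric-heating estimate. It ignores conduction between components (which the paper's computational models do treat) and uses illustrative material properties and field strength; the helper function is hypothetical.

```python
import math

# Back-of-the-envelope estimate of microwave volumetric heating for a single
# component, ignoring conduction to its neighbours. Material values and field
# strength are illustrative only.

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def heating_rate(freq_hz, e_rms, eps_loss, rho, cp):
    """Adiabatic heating rate dT/dt (K/s) from dielectric loss."""
    q = 2.0 * math.pi * freq_hz * EPS0 * eps_loss * e_rms ** 2  # W/m^3
    return q / (rho * cp)

# 2.45 GHz, 3 kV/m rms field: a lossy adhesive vs. a low-loss substrate.
print("adhesive :", round(heating_rate(2.45e9, 3e3, eps_loss=0.15, rho=1800, cp=1000), 3), "K/s")
print("substrate:", round(heating_rate(2.45e9, 3e3, eps_loss=0.02, rho=1900, cp=1200), 3), "K/s")
```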

Relevance:

30.00%

Publisher:

Abstract:

A review of polymer cure models used in microelectronics packaging applications reveals no clear consensus on the chemical rate constants for the cure reactions, or even on an effective model. The problem lies in the contrast between the actual cure process, which involves a sequence of distinct chemical reactions, and the models, which typically assume only one (or two, with some restrictions on the independence of their characteristic constants). The standard techniques for determining the model parameters are based on differential scanning calorimetry (DSC), which cannot distinguish between the reactions and hence yields results that are useful only under the conditions of the original measurement, which defeats the purpose of modelling. The obvious solution is for manufacturers to provide the modelling parameters; failing that, an alternative experimental technique is required to determine the individual reaction parameters, e.g. Fourier transform infrared spectroscopy (FTIR).
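
For reference, the kind of single-mechanism, DSC-fitted model the review criticises typically looks like the autocatalytic (Kamal-Sourour) form sketched below. The rate constants and exponents are placeholders, not values for any real adhesive.

```python
import math

# Sketch of a single-mechanism, DSC-fitted cure model of the kind criticised
# above: the autocatalytic (Kamal-Sourour) form
#   d(alpha)/dt = (k1 + k2*alpha**m) * (1 - alpha)**n,  k_i = A_i*exp(-E_i/(R*T)).
# All parameter values are placeholders.

R = 8.314  # J mol-1 K-1

def cure_profile(T_K, t_end_s, dt=1.0, A1=1e4, E1=6.5e4, A2=5e5, E2=7.0e4, m=0.8, n=1.5):
    """Integrate the degree of cure alpha(t) at constant temperature (forward Euler)."""
    k1 = A1 * math.exp(-E1 / (R * T_K))
    k2 = A2 * math.exp(-E2 / (R * T_K))
    alpha, t = 0.0, 0.0
    while t < t_end_s:
        alpha = min(alpha + dt * (k1 + k2 * alpha ** m) * (1.0 - alpha) ** n, 1.0)
        t += dt
    return alpha

print("alpha after 1 h at 150 C:", round(cure_profile(T_K=423.15, t_end_s=3600), 3))
```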

Relevance:

30.00%

Publisher:

Abstract:

Chlorophyll-a satellite products are routinely used in oceanography, providing a synoptic and global view of phytoplankton abundance. However, these products lack information on the community structure of the phytoplankton, which is crucial for ecological modelling and ecosystem studies. To assess the usefulness of existing methods to differentiate phytoplankton functional types (PFT) or phytoplankton size classes from satellite data, in situ phytoplankton samples collected along the Western Iberian coast, in the North-East Atlantic, were analysed for pigments and absorption spectra. Water samples were collected at five locations, four near the shore and one in an open-ocean seamount region. Three modelling approaches for deriving phytoplankton size classes were applied to the in situ data; they provide size class information from pigment data (Brewin et al., 2010), absorption spectra (Ciotti et al., 2002) or both (Uitz et al., 2008). Following Uitz et al. (2008), results revealed high variability in microphytoplankton chlorophyll-specific absorption coefficients, ranging from 0.01 to 0.09 m2 (mg chl)-1 between 400 and 500 nm. This spectral analysis suggested, in one of the regions, the existence of small cells (< 20 μm) in the fraction of phytoplankton presumed to be microphytoplankton (based on diagnostic pigments). The Ciotti et al. (2002) approach yielded the largest differences between modelled and measured absorption spectra for the locations where samples had high variability in community structure and cell size. The Brewin et al. (2010) pigment-based model was adjusted, and a set of model coefficients is presented and recommended for future studies in offshore waters of the Western Iberian coast.
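
The structure of the Brewin et al. (2010) three-component model is sketched below: size-fractionated chlorophyll as saturating functions of total chlorophyll. The coefficient values are nominal illustrations, not the re-tuned coefficients for Western Iberian offshore waters reported in the paper.

```python
import math

# Sketch of the Brewin et al. (2010) three-component model structure:
# size-fractionated chlorophyll as saturating functions of total chlorophyll
# C (mg m-3). Coefficients below are nominal illustrations only.

def size_fractions(C, Cpn_max=0.77, Dpn=0.94, Cp_max=0.13, Dp=0.80):
    """Chlorophyll attributed to pico-, nano- and micro-phytoplankton at total C."""
    C_pn = Cpn_max * (1.0 - math.exp(-Dpn * C))  # pico + nano
    C_p = Cp_max * (1.0 - math.exp(-Dp * C))     # pico
    return {"pico": round(C_p, 3), "nano": round(C_pn - C_p, 3), "micro": round(C - C_pn, 3)}

print(size_fractions(0.2))  # low-chlorophyll case: mostly pico/nano
print(size_fractions(3.0))  # bloom-like case: micro-dominated
```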

Relevance:

30.00%

Publisher:

Abstract:

We present theoretical delay times and rates of thermonuclear explosions that are thought to produce Type Ia supernovae (SNe Ia), including the double-detonation sub-Chandrasekhar-mass model, using the population synthesis binary evolution code StarTrack. If detonations of sub-Chandrasekhar-mass carbon-oxygen white dwarfs following a detonation in an accumulated layer of helium on the white dwarf's surface ('double-detonation' models) are able to produce thermonuclear explosions characteristically similar to SNe Ia, then these sub-Chandrasekhar-mass explosions may account for a substantial fraction of the observed SN Ia rate. Regardless of whether all double-detonations look like 'normal' SNe Ia, the explosions are expected to be bright and thus potentially detectable. Additionally, we find that the delay time distribution of double-detonation sub-Chandrasekhar-mass SNe Ia can be divided into two distinct formation channels: the 'prompt' helium-star channel with delay times

Relevance:

30.00%

Publisher:

Abstract:

In a companion paper, Seitenzahl et al. have presented a set of three-dimensional delayed detonation models for thermonuclear explosions of near-Chandrasekhar-mass white dwarfs (WDs). Here, we present multidimensional radiative transfer simulations that provide synthetic light curves and spectra for those models. The model sequence explores both changes in the strength of the deflagration phase (which is controlled by the ignition configuration in our models) and the WD central density. In agreement with previous studies, we find that the strength of the deflagration significantly affects the explosion and the observables. Variations in the central density also influence both brightness and colour, but overall the central density is a secondary parameter in our set of models. In many respects, the models yield a good match to the observed properties of normal Type Ia supernovae (SNe Ia): peak brightness, rise/decline time-scales and synthetic spectra are all in reasonable agreement. There are, however, several differences. In particular, the models are systematically too red around maximum light, manifest spectral line velocities that are a little too high and yield I-band light curves that do not match observations. Although some of these discrepancies may simply relate to approximations made in the modelling, some pose real challenges to the models. If viewed as a complete sequence, our models do not reproduce the observed light-curve width-luminosity relation (WLR) of SNe Ia: all our models show rather similar B-band decline rates, irrespective of peak brightness. This suggests that simple variations in the strength of the deflagration phase in Chandrasekhar-mass deflagration-to-detonation models do not readily explain the observed diversity of normal SNe Ia. This may imply that some other parameter within the Chandrasekhar-mass paradigm is key to the WLR, or that a substantial fraction of normal SNe Ia arise from an alternative explosion scenario.

Relevance:

30.00%

Publisher:

Abstract:

This paper tests a simple market fraction asset pricing model with heterogeneous agents. By selecting a set of structural parameters of the model through a systematic procedure, we show that the autocorrelations (of returns, absolute returns and squared returns) of the market fraction model share the same pattern as those of the DAX 30. By conducting econometric analysis via Monte Carlo simulations, we characterize these power-law behaviours and find that estimates of the power-law decay indices, the (FI)GARCH parameters, and the tail index of the selected market fraction model closely match those of the DAX 30. The results strongly support the explanatory power of the heterogeneous agent models.
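
The diagnostics mentioned above can be sketched as follows: sample autocorrelations of returns, absolute returns and squared returns, with a crude power-law decay index fitted on a log-log scale. The synthetic series and the fitting choices are placeholders, not the paper's estimation procedure.

```python
import numpy as np

# Sample autocorrelations of returns, |returns| and squared returns, plus a
# crude power-law decay index d fitted from log(ACF(k)) ~ const - d*log(k).
# A synthetic series stands in for model output or the DAX 30.

def acf(x, max_lag):
    x = np.asarray(x, dtype=float) - np.mean(x)
    var = np.dot(x, x) / len(x)
    return np.array([np.dot(x[:-k], x[k:]) / (len(x) * var) for k in range(1, max_lag + 1)])

def power_law_index(rho):
    lags = np.arange(1, len(rho) + 1)
    keep = rho > 0                      # log-log fit needs positive ACF values
    slope, _ = np.polyfit(np.log(lags[keep]), np.log(rho[keep]), 1)
    return -slope

rng = np.random.default_rng(0)
returns = rng.standard_normal(5000)     # placeholder series
for name, series in [("r", returns), ("|r|", np.abs(returns)), ("r^2", returns ** 2)]:
    print(name, "decay index ~", round(power_law_index(acf(series, max_lag=100)), 2))
```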

Relevance:

30.00%

Publisher:

Abstract:

Context. Binary stellar evolution calculations predict that Chandrasekhar-mass carbon/oxygen white dwarfs (WDs) show a radially varying profile for the composition with a carbon depleted core. Many recent multi-dimensional simulations of Type Ia supernovae (SNe Ia), however, assume the progenitor WD has a homogeneous chemical composition.
Aims: In this work, we explore the impact of different initial carbon profiles of the progenitor WD on the explosion phase and on synthetic observables in the Chandrasekhar-mass delayed detonation model. Spectra and light curves are compared to observations to judge the validity of the model.
Methods: The explosion phase is simulated using the finite volume supernova code Leafs, which is extended to treat different compositions of the progenitor WD. The synthetic observables are computed with the Monte Carlo radiative transfer code Artis.
Results: Differences in binding energies of carbon and oxygen lead to a lower nuclear energy release for carbon depleted material; thus, the burning fronts that develop are weaker and the total nuclear energy release is smaller. For otherwise identical conditions, carbon depleted models produce less 56Ni. Comparing different models with similar 56Ni yields shows lower kinetic energies in the ejecta for carbon depleted models, but only small differences in velocity distributions and line velocities in spectra. The light curve width-luminosity relation (WLR) obtained for models with differing carbon depletion is roughly perpendicular to the observed WLR, hence the carbon mass fraction is probably only a secondary parameter in the family of SNe Ia.
Tables 3 and 4 are available in electronic form at http://www.aanda.org
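
The statement that carbon-depleted fuel releases less nuclear energy can be checked with a quick binding-energy estimate, assuming complete burning to 56Ni and ignoring neutronization and intermediate burning stages; the sketch below is only an order-of-magnitude illustration, not part of the simulations.

```python
# Approximate binding energies per nucleon (MeV) and the energy released by
# burning a C/O mixture completely to 56Ni. Order-of-magnitude illustration.

BE_PER_A = {"C12": 7.680, "O16": 7.976, "Ni56": 8.643}  # MeV per nucleon

def q_per_nucleon(fuel):
    """Energy release per nucleon (MeV) for burning the given fuel to 56Ni."""
    return BE_PER_A["Ni56"] - BE_PER_A[fuel]

for x_c in (0.5, 0.3):  # carbon mass fraction of the C/O fuel
    q = x_c * q_per_nucleon("C12") + (1.0 - x_c) * q_per_nucleon("O16")
    print(f"X(12C) = {x_c:.1f}: q ~ {q:.2f} MeV per nucleon")
```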

Relevance:

30.00%

Publisher:

Abstract:

Master's dissertation, Biomedical Sciences, Department of Biomedical Sciences and Medicine, Universidade do Algarve, 2015

Relevance:

30.00%

Publisher:

Abstract:

A large fraction of genome variation between individuals comprises submicroscopic copy number variation of genomic DNA segments. We assessed the relative contributions of structural changes and gene dosage alterations to phenotypic outcomes with mouse models of Smith-Magenis and Potocki-Lupski syndromes. We phenotyped mice with 1n (Deletion/+), 2n (+/+), 3n (Duplication/+), and balanced 2n compound heterozygous (Deletion/Duplication) copies of the same region. Paralleling observations made in humans, such variation in gene copy number was sufficient to generate phenotypic consequences: in a number of cases, diametrically opposing phenotypes were associated with gain versus loss of gene content. Surprisingly, some neurobehavioral traits were not rescued by restoration of the normal gene copy number. Transcriptome profiling showed that transcriptional changes have a highly significant propensity to map to the engineered interval in the five assessed tissues. A statistically significant overrepresentation of genes mapping to the entire length of the engineered chromosome was also found among the top-ranked differentially expressed genes in mice carrying rearranged chromosomes, regardless of the nature of the rearrangement, an observation robust across different cell lineages of the central nervous system. Our data indicate that a structural change at a given position of the human genome may affect not only locus and adjacent gene expression but also "genome regulation." Furthermore, a structural change can cause the same perturbation in particular pathways regardless of gene dosage. Thus, the presence of a genomic structural change, as well as gene dosage imbalance, contributes to the ultimate phenotype.
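
The overrepresentation statement above rests on an enrichment test; a hypergeometric sketch of that kind of test is given below, with invented gene counts rather than the study's data.

```python
from scipy.stats import hypergeom

# Illustration of an enrichment test: are genes from the engineered chromosome
# overrepresented among the top-ranked differentially expressed genes?
# The counts are invented for the example, not taken from the study.

genome_genes   = 20000  # genes tested genome-wide
interval_genes = 600    # genes mapping to the engineered chromosome
top_ranked     = 500    # top differentially expressed genes considered
hits           = 60     # of those, genes mapping to the engineered chromosome

p_value = hypergeom.sf(hits - 1, genome_genes, interval_genes, top_ranked)  # P(X >= hits)
expected = top_ranked * interval_genes / genome_genes
print(f"expected by chance ~ {expected:.1f}, observed = {hits}, P = {p_value:.2e}")
```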

Relevance:

30.00%

Publisher:

Abstract:

Canopy interception of incident precipitation is a critical component of the forest water balance during each of the four seasons. Models have been developed to predict precipitation interception from standard meteorological variables because of acknowledged difficulty in extrapolating direct measurements of interception loss from forest to forest. No known study has compared and validated canopy interception models for a leafless deciduous forest stand in the eastern United States. Interception measurements from an experimental plot in a leafless deciduous forest in northeastern Maryland (39°42'N, 75°5'W) for 11 rainstorms in winter and early spring 2004/05 were compared to predictions from three models. The Mulder model maintains a moist canopy between storms. The Gash model requires few input variables and is formulated for a sparse canopy. The WiMo model optimizes the canopy storage capacity for the maximum wind speed during each storm. All models showed marked underestimates and overestimates for individual storms when the measured ratio of interception to gross precipitation was far more or less, respectively, than the specified fraction of canopy cover. The models predicted the percentage of total gross precipitation (PG) intercepted to within the probable standard error (8.1%) of the measured value: the Mulder model overestimated the measured value by 0.1% of PG; the WiMo model underestimated by 0.6% of PG; and the Gash model underestimated by 1.1% of PG. The WiMo model’s advantage over the Gash model indicates that the canopy storage capacity increases logarithmically with the maximum wind speed. This study has demonstrated that dormant-season precipitation interception in a leafless deciduous forest may be satisfactorily predicted by existing canopy interception models.
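
The WiMo result that storage capacity grows logarithmically with maximum wind speed can be sketched as a simple fit of S = a + b·ln(u_max); the per-storm values below are invented for illustration, not the 11 measured storms.

```python
import numpy as np

# Sketch of the relation the WiMo comparison points to: canopy storage
# capacity S increasing logarithmically with a storm's maximum wind speed,
# S = a + b*ln(u_max). The (u_max, S) pairs are invented for illustration.

u_max = np.array([2.0, 3.5, 5.0, 7.0, 9.0, 12.0])        # m/s
S     = np.array([0.25, 0.32, 0.38, 0.44, 0.48, 0.55])   # mm

b, a = np.polyfit(np.log(u_max), S, 1)                    # slope, intercept
print(f"S ~ {a:.2f} + {b:.2f}*ln(u_max)  (mm)")
print("predicted S at u_max = 15 m/s:", round(a + b * np.log(15.0), 2), "mm")
```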