999 results for Thermodynamic modeling
Abstract:
This investigation re-examines theoretical aspects of the allowance for effects of thermodynamic non-ideality on the characterization of protein self-association by frontal exclusion chromatography, and thereby provides methods of analysis with greater thermodynamic rigor than those used previously. Their application is illustrated by reappraisal of published exclusion chromatography data for hemoglobin on the controlled-pore-glass matrix CPG-120. The equilibrium constant of 100 M^-1 obtained for dimerization of the (αβ)2 species by this means is also deduced from re-examination of published studies of concentrated hemoglobin solutions by osmotic pressure and sedimentation equilibrium methods. (C) 2003 Elsevier Science B.V. All rights reserved.
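As a rough illustration of what an association constant of this magnitude implies, the sketch below (a minimal Python example of my own, not taken from the paper) solves the monomer-dimer mass balance for a nominal K = 100 M^-1 and reports the fraction of protein present as dimer at a few illustrative total concentrations.

```python
import math

K = 100.0  # assumed association constant, M^-1 (value quoted in the abstract)

def dimer_fraction(c_total):
    """Fraction of protein (on a monomer basis) present as dimer.

    Mass balance for M + M <-> D with K = [D]/[M]^2:
        c_total = [M] + 2*K*[M]^2,
    solved for [M] with the quadratic formula.
    """
    m = (-1.0 + math.sqrt(1.0 + 8.0 * K * c_total)) / (4.0 * K)
    d = K * m * m
    return 2.0 * d / c_total

for c in (1e-4, 1e-3, 5e-3):  # total concentrations in M, illustrative values only
    print(f"c_total = {c:.0e} M  ->  dimer fraction = {dimer_fraction(c):.3f}")
```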
Abstract:
The inhibitory effect of sucrose on the kinetics of thrombin-catalyzed hydrolysis of the chromogenic substrate S-2238 (D-phenylalanyl-pipecolyl-arginoyl-p-nitroanilide) is re-examined as a possible consequence of thermodynamic non-ideality, an inhibition originally attributed to the increased viscosity of reaction mixtures. Those published results may, however, also be rationalized in terms of the suppression of a substrate-induced isomerization of thrombin to a slightly more expanded (or more asymmetric) transition state prior to the irreversible kinetic steps that lead to substrate hydrolysis. This reinterpretation of the kinetic results solely in terms of molecular crowding does not signify the lack of an effect of viscosity on any reaction step(s) subject to diffusion control. Instead, it highlights the need for development of analytical procedures that can accommodate the concomitant operation of thermodynamic non-ideality and viscosity effects.
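A compact way to state the crowding argument (my own hedged summary using the generic excluded-volume formalism, not an equation from the paper) is through the activity coefficients of the two thrombin states: for an isomerization E ⇌ E*, the apparent constant measured in concentrations is the true constant scaled by the ratio of activity coefficients, so a cosolute that raises the activity coefficient of the more expanded state suppresses the isomerization.

```latex
% Apparent isomerization constant under thermodynamic non-ideality
% (generic excluded-volume argument; not taken verbatim from the paper).
\[
  K_{\mathrm{app}} \;=\; \frac{[\mathrm{E}^{*}]}{[\mathrm{E}]}
  \;=\; K_{\mathrm{iso}}\,\frac{\gamma_{\mathrm{E}}}{\gamma_{\mathrm{E}^{*}}},
  \qquad
  \ln\gamma_{i} \;\approx\; 2\,B_{i\mathrm{S}}\, c_{\mathrm{S}},
\]
% where B_iS is the excluded covolume between species i and the cosolute S
% (sucrose) and c_S its molar concentration; the more expanded state E* has
% the larger covolume, so K_app decreases as c_S increases.
```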
Abstract:
A new modeling approach, multiple mapping conditioning (MMC), is introduced to treat mixing and reaction in turbulent flows. The model combines the advantages of the probability density function and the conditional moment closure methods and is based on a certain generalization of the mapping closure concept. An equivalent stochastic formulation of the MMC model is given. The validity of the closure hypothesis of the model is demonstrated by a comparison with direct numerical simulation results for the three-stream mixing problem. (C) 2003 American Institute of Physics.
Abstract:
This communication describes an electromagnetic model of a radial line planar antenna consisting of a radial guide with one central probe and many peripheral probes arranged in concentric circles, feeding an array of antenna elements such as patches or wire curls. The model takes into account interactions between the coupling probes while assuming isolation of the radiating elements. Based on this model, computer programs are developed to determine the equivalent circuit parameters of the feed network and the radiation pattern of the radial line planar antenna. Comparisons are made between the present model and the two-probe model developed earlier by other researchers.
Abstract:
Under certain conditions, cross-sectional analysis of cross-twin intertrait correlations can provide important information about the direction of causation (DOC) between two variables. A community-based sample of Australian female twins aged 18 to 45 years was mailed an extensive Health and Lifestyle Questionnaire (HLQ) that covered a wide range of personality and behavioral measures. Included were self-report measures of recent psychological distress and perceived childhood environment (PBI). Factor analysis of the PBI yielded three interpretable dimensions: Coldness, Overprotection, and Autonomy. Univariate analysis revealed that parental Overprotection and Autonomy were best explained by additive genetic, shared, and nonshared environmental effects (ACE), whereas the best-fitting model for PBI Coldness and the three measures of psychological distress (Depression, Phobic Anxiety, and Somatic Distress) included only additive genetic and nonshared environmental effects (AE). A common pathway model best explained the covariation between (1) the three PBI dimensions and (2) the three measures of psychological distress. DOC modeling between latent constructs of parenting and psychological distress revealed that a model which specified recollected parental behavior as the cause of psychological distress provided a better fit than a model which specified psychological distress as the cause of recollected parental behavior. Power analyses and limitations of the findings are discussed.
Abstract:
Free-space optical interconnects (FSOIs), made up of dense arrays of vertical-cavity surface-emitting lasers, photodetectors, and microlenses, can be used for implementing high-speed and high-density communication links, and hence replace the inferior electrical interconnects. A major concern in the design of FSOIs is minimization of the optical channel cross talk arising from laser beam diffraction. In this article we introduce modifications to the mode expansion method of Tanaka et al. [IEEE Trans. Microwave Theory Tech. MTT-20, 749 (1972)] to make it an efficient tool for modelling and design of FSOIs in the presence of diffraction. We demonstrate that our modified mode expansion method has accuracy similar to the exact solution of the Huygens-Kirchhoff diffraction integral in cases of both weak and strong beam clipping, and that it is much more accurate than the existing approximations. The strength of the method is twofold: first, it is applicable in the region of pronounced diffraction (strong beam clipping), where all other approximations fail; second, unlike the exact-solution method, it can be efficiently used for modelling diffraction on multiple apertures. These features make the mode expansion method useful for the design and optimization of free-space architectures containing multiple optical elements, including optical interconnects and optical clock distribution systems. (C) 2003 Optical Society of America.
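For orientation on the weak- and strong-clipping regimes mentioned above, the short Python sketch below (my own illustration using the textbook result for a fundamental Gaussian mode, not the modified mode-expansion method itself) evaluates the fraction of power passed by a centred circular aperture of radius a for a beam of 1/e^2 radius w, T = 1 - exp(-2a^2/w^2).

```python
import math

def gaussian_aperture_transmission(a, w):
    """Power fraction of a fundamental Gaussian beam (1/e^2 radius w)
    transmitted by a centred circular aperture of radius a."""
    return 1.0 - math.exp(-2.0 * (a / w) ** 2)

# Illustrative numbers only: weak clipping (a = 2w) down to strong clipping (a = 0.5w).
for a_over_w in (2.0, 1.0, 0.5):
    T = gaussian_aperture_transmission(a_over_w, 1.0)
    print(f"a/w = {a_over_w:3.1f}  ->  transmitted power fraction = {T:.4f}")
```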
Influence of magnetically-induced E-fields on cardiac electric activity during MRI: A modeling study
Abstract:
In modern magnetic resonance imaging (MRI), patients are exposed to strong, time-varying gradient magnetic fields that may be able to induce electric fields (E-fields)/currents in tissues approaching the level of physiological significance. In this work we present theoretical investigations into induced E-fields in the thorax, and evaluate their potential influence on cardiac electric activity under the assumption that the sites of maximum E-field correspond to the myocardial stimulation threshold (an abnormal circumstance). Whole-body cylindrical and planar gradient coils were included in the model. The calculations of the induced fields are based on an efficient, quasi-static, finite-difference scheme and an anatomically realistic, whole-body model. The potential for cardiac stimulation was evaluated using an electrical model of the heart. Twelve-lead electrocardiogram (ECG) signals were simulated and inspected for arrhythmias caused by the applied fields for both healthy and diseased hearts. The simulations show that the shape of the thorax and the conductive paths significantly influence induced E-fields. In healthy patients, these fields are not sufficient to elicit serious arrhythmias with the use of contemporary gradient sets. However, raising the strength and number of repeated switching episodes of gradients, as is certainly possible in local chest gradient sets, could expose patients to increased risk. For patients with cardiac disease, the risk factors are elevated. By the use of this model, the sensitivity of cardiac pathologies, such as abnormal conductive pathways, to the induced fields generated by an MRI sequence can be investigated. (C) 2003 Wiley-Liss, Inc.
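As a back-of-the-envelope companion to the finite-difference calculations described above (my own hedged estimate, not the paper's model), Faraday's law for a spatially uniform, axially directed dB/dt gives an induced electric field of roughly E ≈ (r/2) dB/dt along a circular path of radius r in a homogeneous conductor.

```python
def induced_e_field(dBdt, r):
    """Quasi-static estimate of the induced E-field magnitude (V/m) on a
    circular loop of radius r (m) for a spatially uniform dB/dt (T/s)."""
    return 0.5 * r * dBdt

# Illustrative values only: a 20 T/s gradient switching rate over a 0.15 m torso radius.
print(f"E ~ {induced_e_field(20.0, 0.15):.2f} V/m")
```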
Abstract:
Comprehensive measurements are presented of the piezometric head in an unconfined aquifer during steady, simple harmonic oscillations driven by a hydrostatic clear-water reservoir through a vertical interface. The results are analyzed and used to test existing hydrostatic and nonhydrostatic, small-amplitude theories along with capillary fringe effects. As expected, the amplitude of the water table wave decays exponentially. However, the decay rates and phase lags indicate the influence of both vertical flow and capillary effects. The capillary effects are reconciled with observations of water table oscillations in a sand column packed with the same sand. The effects of vertical flows and the corresponding nonhydrostatic pressure are reasonably well described by small-amplitude theory for water table waves in finite-depth aquifers. That includes the oscillation amplitudes being greater at the bottom than at the top and the phase lead of the bottom compared with the top. The main problems with respect to interpreting the measurements through existing theory relate to the complicated boundary condition at the interface between the driving head reservoir and the aquifer. That is, the small-amplitude, finite-depth expansion solution, which matches a hydrostatic boundary condition between the bottom and the mean driving head level, is unrealistic with respect to the pressure variation above this level. Hence it cannot describe the finer details of the multiple-mode behavior close to the driving head boundary. The mean water table height initially increases with distance from the forcing boundary but then decreases again, and its asymptotic value is considerably smaller than that previously predicted for finite-depth aquifers without capillary effects. Just as the mean water table over-height is smaller than predicted by capillarity-free shallow aquifer models, so is the amplitude of the second harmonic. In fact, there is no indication of extra second harmonics (in addition to that contained in the driving head) being generated at the interface or in the interior.
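For reference, the exponential decay and phase lag discussed above have a simple closed form in the hydrostatic, small-amplitude (linearized Boussinesq) limit; the expression below is that standard textbook result, quoted for context under the assumption of a homogeneous aquifer, rather than taken from this paper.

```latex
% Linearized Boussinesq (hydrostatic, small-amplitude) water table wave:
% simple harmonic forcing of amplitude eta_0 and angular frequency omega
% decays and lags with the same rate k in a homogeneous aquifer of
% saturated depth d, hydraulic conductivity K and effective porosity n_e.
\[
  \eta(x,t) \;=\; \eta_{0}\, e^{-kx}\cos\!\left(\omega t - kx\right),
  \qquad
  k \;=\; \sqrt{\frac{n_{e}\,\omega}{2\,K\,d}} .
\]
```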
Abstract:
Drying kinetics of low molecular weight sugars such as fructose, glucose, and sucrose, an organic acid (citric acid), and a high molecular weight carbohydrate, maltodextrin (DE 6), were determined experimentally using single-drop drying experiments and were also predicted numerically by solving the mass and heat transfer equations. The predicted moisture and temperature histories agreed with the experimental ones within a 6% average relative (absolute) error and an average difference of ±1 °C, respectively. The stickiness histories of these drops were determined experimentally and predicted numerically based on the glass transition temperature (Tg) of the surface layer. The model predicted the experimental observations with good accuracy. A non-sticky regime for these materials during spray drying is proposed by simulating a drop, initially 120 μm in diameter, in a spray drying environment.
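A common way to turn surface composition into the kind of stickiness criterion described above is the Gordon-Taylor mixing rule for the glass transition temperature combined with a sticky-point offset above Tg; the sketch below is a generic illustration along those lines (the constants are placeholders, not the paper's fitted values).

```python
def gordon_taylor_tg(w_solids, tg_solids, tg_water=136.0, k_gt=5.0):
    """Glass transition temperature (K) of a solids-water mixture via the
    Gordon-Taylor equation. w_solids is the solids mass fraction; tg_solids
    and tg_water are the component Tg values; k_gt is the Gordon-Taylor
    constant. All numbers here are placeholders for illustration."""
    w_water = 1.0 - w_solids
    return (w_solids * tg_solids + k_gt * w_water * tg_water) / (w_solids + k_gt * w_water)

def is_sticky(surface_temp, tg_surface, offset=20.0):
    """Simple sticky-point rule: the drop surface is sticky when its
    temperature exceeds the surface-layer Tg by more than `offset` K."""
    return surface_temp > tg_surface + offset

tg = gordon_taylor_tg(w_solids=0.95, tg_solids=373.0)  # e.g. a nearly dry surface skin
print(f"surface Tg ~ {tg:.1f} K, sticky at 350 K? {is_sticky(350.0, tg)}")
```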
Abstract:
Electronic energy transfer (EET) rate constants between a naphthalene donor and anthracene acceptor in [ZnL4a](ClO4)2 and [ZnL4b](ClO4)2 were determined by time-resolved fluorescence, where L4a and L4b are the trans and cis isomers of 6-((anthracen-9-yl-methyl)amino)-6,13-dimethyl-13-((naphthalen-1-yl-methyl)amino)-1,4,8,11-tetraazacyclotetradecane, respectively. These isomers differ in the relative disposition of the appended chromophores with respect to the macrocyclic plane. The trans isomer has an energy transfer rate constant (k_EET) of 8.7 × 10^8 s^-1, whereas that of the cis isomer is significantly faster (2.3 × 10^9 s^-1). Molecular modeling was used to determine the likely distribution of conformations in CH3CN solution for these complexes in an attempt to identify any distance or orientation dependency that may account for the differing rate constants observed. The calculated conformational distributions, together with analysis by 1H NMR for the [ZnL4a]2+ trans complex in the common trans-III N-based isomer, gave a calculated Förster rate constant close to that observed experimentally. For the [ZnL4b]2+ cis complex, the experimentally determined rate constant may be attributed to a combination of trans-III and trans-I N-based isomeric forms of the complex in solution.
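For context, the Förster rate constant referred to above follows the standard distance dependence k_EET = (1/τ_D)(R0/r)^6; the snippet below is a generic illustration of that formula with placeholder numbers, not the values computed in the paper.

```python
def forster_rate(tau_donor_s, r0_nm, r_nm):
    """Forster energy-transfer rate constant (s^-1) for a donor lifetime
    tau_donor_s (s), Forster radius r0_nm and donor-acceptor distance r_nm."""
    return (1.0 / tau_donor_s) * (r0_nm / r_nm) ** 6

# Placeholder numbers purely for illustration (not the paper's values).
print(f"k_EET ~ {forster_rate(40e-9, 2.3, 1.0):.2e} s^-1")
```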
Abstract:
It is perhaps no exaggeration to say that there is near consensus among practitioners of thermoeconomics that exergy, rather than enthalpy alone, is the thermodynamic quantity best suited to be combined with the concept of cost in thermoeconomic modeling, since it takes Second Law aspects into account and allows the irreversibilities to be identified. Often, however, thermoeconomic modeling uses exergy disaggregated into its parcels (chemical, thermal, and mechanical), or includes negentropy, a fictitious flow, so as to disaggregate the system into its components (or subsystems) and thereby refine and detail the model for local optimization, diagnosis, and the allocation of residues and dissipative equipment. Some authors also state that disaggregating physical exergy into its thermal and mechanical parcels improves the accuracy of cost allocation results, even though it increases the complexity of the thermoeconomic model and, consequently, the computational cost involved. Recently, some authors have pointed out restrictions and possible inconsistencies in the use of negentropy and of this type of physical exergy disaggregation, and have proposed alternatives for treating residues and dissipative equipment that still allow the system to be disaggregated into its components. These alternatives consist, essentially, of new proposals for disaggregating physical exergy in thermoeconomic modeling. This work therefore aims to evaluate the different methodologies for disaggregating physical exergy in thermoeconomic modeling, considering aspects such as advantages, restrictions, inconsistencies, improvement in the accuracy of the results, increase in complexity and computational effort, and the treatment of residues and dissipative equipment for the full disaggregation of the thermal system. To this end, the different methodologies and levels of physical exergy disaggregation are applied to the allocation of costs to the final products (net power and useful heat) in different cogeneration plants, with the working fluid treated both as an ideal gas and as a real fluid, and with the plants including dissipative equipment (condenser or valve) or residues (exhaust gases from the heat recovery boiler). One of the cogeneration plants, however, had to include neither dissipative equipment nor a heat recovery boiler, so that the effect of physical exergy disaggregation on the accuracy of the cost allocation to the final products could be assessed in isolation.
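As a concrete reference for what "disaggregating physical exergy" means, the relation below gives the usual ideal-gas split of specific physical exergy into a thermal and a mechanical parcel; it is quoted as the standard textbook decomposition, not as any one of the specific methodologies compared in this work.

```latex
% Standard ideal-gas decomposition of specific physical exergy into
% thermal (temperature-driven) and mechanical (pressure-driven) parcels,
% relative to the environment at (T_0, P_0), assuming constant c_p.
\[
  e^{PH} \;=\; \underbrace{c_p\!\left[(T - T_0) - T_0 \ln\frac{T}{T_0}\right]}_{e^{T}}
        \;+\; \underbrace{R\,T_0 \ln\frac{P}{P_0}}_{e^{M}} .
\]
```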
Abstract:
In this work it is demonstrated that the capacitance between two cylinders increases with the rotation angle and that it has a fundamental influence on the composite dielectric constant. The dielectric constant is lower for nematic materials than for isotropic ones, and this can be attributed to the effect of filler alignment on the capacitance. The effect of aspect ratio on the conductivity is also studied in this work. Finally, building on previous work and comparing with results from the literature, it is found that the electrical conductivity in this type of composite is due to hopping between nearest fillers, resulting in a weak-disorder regime described by an expression similar to that for a single junction.
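The "single junction" picture invoked above is commonly written as a tunnelling conductance that decays exponentially with the gap between neighbouring fillers; the snippet below is a generic illustration of that dependence (the decay length and gap values are placeholders, not fitted parameters from this work).

```python
import math

def junction_conductance(gap_nm, decay_length_nm=1.0, g0=1.0):
    """Generic nearest-neighbour tunnelling/hopping conductance,
    g = g0 * exp(-2 * gap / xi), with placeholder prefactor g0 and decay
    length xi (both illustrative, not values from this work)."""
    return g0 * math.exp(-2.0 * gap_nm / decay_length_nm)

for gap in (0.5, 1.0, 2.0):  # inter-filler gaps in nm, illustrative only
    print(f"gap = {gap:3.1f} nm  ->  relative conductance = {junction_conductance(gap):.3e}")
```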
Abstract:
Polymers have become the reference material for high-reliability and high-performance applications. In this work, a multi-scale approach is proposed to investigate the mechanical properties of polymer-based materials under strain. To achieve a better understanding of the phenomena occurring at the smaller scales, a coupling of the Finite Element Method (FEM) and Molecular Dynamics (MD) modeling in an iterative procedure was employed, enabling the prediction of the macroscopic constitutive response. As the mechanical response can be related to the local microstructure, which in turn depends on the nano-scale structure, the multi-scale method described above computes the stress-strain relationship at every analysis point of the macro-structure by detailed modeling of the underlying micro- and meso-scale deformation phenomena. The proposed multi-scale approach can enable prediction of properties at the macroscale while taking into consideration phenomena that occur at the mesoscale, thus offering potentially greater accuracy than traditional methods.
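The iterative FEM-MD exchange described above can be summarised, very schematically, as a loop in which each macroscopic integration point sends its local strain to an atomistic cell and receives a homogenised stress back; the Python below is only a structural sketch of that hand-off (every function name is a hypothetical placeholder, not an existing API or the authors' code).

```python
# Structural sketch of an FEM/MD multi-scale iteration; every method called
# here is a hypothetical placeholder standing in for a real FEM or MD code.

def multiscale_step(macro_model, md_cells, load_increment, tol=1e-6, max_iter=20):
    """Apply one macroscopic load increment, letting MD supply the local
    stress response at each FEM integration point."""
    macro_model.apply_load(load_increment)
    for _ in range(max_iter):
        strains = macro_model.strains_at_integration_points()
        # Each integration point drives its own MD cell with the local strain
        # and returns a homogenised (virial-averaged) stress tensor.
        stresses = [md_cells[i].run_deformation(eps) for i, eps in enumerate(strains)]
        residual = macro_model.assemble_and_solve(stresses)
        if residual < tol:  # macroscopic equilibrium reached for this increment
            return macro_model.state()
    raise RuntimeError("multi-scale iteration did not converge")
```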
Abstract:
Pectus Carinatum (PC) is a chest deformity consisting of the anterior protrusion of the sternum and adjacent costal cartilages. Non-operative corrections, such as the orthotic compression brace, require prior information about the patient's chest surface to improve the overall brace fit. This paper focuses on the validation of the Kinect scanner for the modelling of an orthotic compression brace for the correction of Pectus Carinatum. To this extent, a phantom chest wall surface was acquired using two scanner systems, Kinect and Polhemus FastSCAN, and compared against CT. The results show an RMS error of 3.25 mm between the CT data and the surface mesh from the Kinect sensor, and of 1.5 mm for the FastSCAN sensor.
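An RMS surface error of the kind reported above is typically computed from nearest-neighbour distances between the scanned mesh vertices and the reference (CT) surface points; the snippet below is a generic illustration of that computation using SciPy's KD-tree, not the authors' pipeline.

```python
import numpy as np
from scipy.spatial import cKDTree

def rms_surface_error(scan_points, reference_points):
    """RMS of nearest-neighbour distances from each scanned vertex to the
    reference point cloud (both arrays of shape (N, 3), already aligned)."""
    tree = cKDTree(reference_points)
    distances, _ = tree.query(scan_points)
    return float(np.sqrt(np.mean(distances ** 2)))

# Illustrative use with random stand-in clouds (real data would be the
# registered Kinect/FastSCAN meshes and the CT surface).
rng = np.random.default_rng(0)
ct = rng.uniform(size=(1000, 3))
scan = ct + rng.normal(scale=0.003, size=ct.shape)  # ~3 mm per-axis noise if units are metres
print(f"RMS error = {rms_surface_error(scan, ct):.4f}")
```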
Abstract:
Pectus Carinatum is a deformity of the chest wall, characterized by an anterior protrusion of the sternum, often corrected surgically for cosmetic reasons. This work presents an alternative approach to the current open surgery option, proposing a novel technique based on a personalized orthosis. Two different processes for personalizing the orthosis are presented: one based on a 3D laser scan of the patient's chest, followed by reconstruction of the thoracic wall mesh using a radial basis function, and a second based on a computed tomography scan followed by a neighbouring-cells algorithm. The axial position where the orthosis is to be located is calculated automatically using a ray-triangle intersection method, whose outcome is the input to a pseudo-Kochanek interpolating spline method that defines the orthosis curvature. Results show that no significant differences exist between the patient's chest physiognomy and the curvature angle and size of the orthosis, allowing a better cosmetic outcome and less initial discomfort.
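The ray-triangle intersection step mentioned above is commonly implemented with the Möller-Trumbore algorithm; the function below is a standard, self-contained version of that test (a generic illustration, not the authors' code).

```python
import numpy as np

def ray_triangle_intersect(origin, direction, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore test: return the distance t along the ray at which it
    hits triangle (v0, v1, v2), or None if there is no intersection."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:                 # ray parallel to the triangle plane
        return None
    inv_det = 1.0 / det
    s = origin - v0
    u = np.dot(s, p) * inv_det
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = np.dot(direction, q) * inv_det
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(e2, q) * inv_det
    return t if t > eps else None

# Example: a ray along +z through the unit triangle lying in the z = 1 plane.
t = ray_triangle_intersect(np.array([0.2, 0.2, 0.0]), np.array([0.0, 0.0, 1.0]),
                           np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 1.0]),
                           np.array([0.0, 1.0, 1.0]))
print(t)  # -> 1.0
```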