940 results for 240503 Thermodynamics and Statistical Physics
Abstract:
Biomolecular structure elucidation is one of the major techniques for studying the basic processes of life. These processes can be modulated, hindered or altered by various causes such as disease, which is why biomolecular analysis and imaging play an important role in diagnosis, treatment prognosis and monitoring. Vibrational spectroscopy (IR and Raman), a molecular-bond-specific technique, can assist the researcher in chemical structure interpretation. Combined with microscopy, vibrational microspectroscopy is emerging as an important tool for biomedical research, with spatial resolution at the cellular and sub-cellular level. These techniques offer various advantages, enabling label-free biomolecular fingerprinting in the native state. However, the complexity involved in deciphering the required information from a spectrum has hampered their entry into the clinic. Today, with the advent of automated algorithms, vibrational microspectroscopy excels in the field of spectropathology. Researchers should nevertheless be aware that quantification based on absolute band intensities may be affected by instrumental parameters, sample thickness, water content, substrate backgrounds and other possible artefacts. In this review, these practical issues and their effects on the quantification of biomolecules are discussed in detail. In many cases ratiometric analysis can help to circumvent these problems and enable the quantitative study of biological samples, including ratiometric imaging in 1D, 2D and 3D. We provide an extensive overview of the recent scientific literature on IR and Raman band ratios used for studying biological systems and for disease diagnosis and treatment prognosis.
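As a minimal illustration of the ratiometric approach mentioned above, the sketch below computes the ratio of two band intensities from a measured spectrum; the band positions, window width and the lipid-to-protein interpretation in the comments are illustrative assumptions, not values taken from the review.

```python
import numpy as np

def band_ratio(wavenumbers, intensities, band_a, band_b, half_width=5.0):
    """Ratio of two vibrational band intensities, each summed over a small
    window around its centre wavenumber (cm^-1). Band positions and window
    width are placeholders; the review lists the ratios actually used for
    specific biomolecules and diagnostic questions."""
    wavenumbers = np.asarray(wavenumbers)
    intensities = np.asarray(intensities)

    def band_intensity(center):
        mask = np.abs(wavenumbers - center) <= half_width
        return intensities[mask].sum()

    return band_intensity(band_a) / band_intensity(band_b)

# Example call on a measured spectrum, e.g. the ~1450 / ~1660 cm^-1 ratio
# often used as a lipid-to-protein-type marker (illustrative only):
# ratio = band_ratio(wn, spec, 1450.0, 1660.0)
```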
Abstract:
Multiscale coupling attracts broad interest across fields from mechanics, physics and chemistry to biology. The diversity of physics at different scales and the coupling between those scales are two essential features of multiscale problems in far-from-equilibrium systems. These two features pose fundamental difficulties and remain great challenges for multiscale modeling and simulation. The theories of dynamical systems and statistical mechanics provide fundamental tools for multiscale coupling problems. This paper presents some closed multiscale formulations, e.g., the mapping closure approximation, multiscale large-eddy simulation and statistical mesoscopic damage mechanics, for two typical multiscale coupling problems in mechanics: turbulence in fluids and failure in solids. It is pointed out that developing a tractable, closed nonequilibrium statistical theory may be an effective approach to multiscale coupling problems. Some common characteristics of such a statistical theory are discussed.
Abstract:
Multiscale coupling is ubiquitous in nature and attracts broad interest from mathematicians, physicists, mechanicians, chemists and biologists. However, much less attention has been paid to its intrinsic implications. In this paper, multiscale coupling is introduced through two typical examples in classical mechanics: fluid turbulence and solid failure. The nature of multiscale coupling in these two examples lies in their physical diversity and strong coupling over a wide range of scales. The theories of dynamical systems and statistical mechanics provide fundamental methods for multiscale coupling problems. The diverse forms of multiscale coupling call for unified approaches and may stimulate new concepts, theories and disciplines.
Abstract:
Very-High-Cycle Fatigue (VHCF) is the phenomenon of fatigue damage and failure of metallic materials or structures subjected to 10^8 cycles of fatigue loading and beyond. This paper investigates the VHCF behavior and mechanism of a high-strength low-alloy steel (main composition: C 1%, Cr 1.5%; quenched at 1108 K and tempered at 453 K). The fractography of fatigue failure was observed by optical microscopy and scanning electron microscopy. The observations reveal that, for numbers of cycles to failure between 10^6 and 4×10^8, fatigue cracks almost always initiated in the interior of the specimen and originated at non-metallic inclusions. An "optically dark area" (ODA) around the initiation site is observed when fatigue initiates from the interior. The ODA size increases as the fatigue stress decreases, and the ODA becomes more rounded. Fracture mechanics analysis gives the stress intensity factor of the ODA, which is nearly equal to the fatigue threshold of the test material. The results indicate that the fatigue life of specimens with an interior crack origin is longer than that of specimens with a surface crack origin. The experimental results and the fatigue mechanism were further analyzed in terms of fracture mechanics and fracture physics, suggesting that the primary propagation of the fatigue crack within the local fish-eye region is the main characteristic of VHCF.
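The stress intensity factor attributed to the ODA can be illustrated with the textbook expression for an embedded penny-shaped crack; this is a generic fracture-mechanics sketch, and the geometry factor and numerical values below are assumptions rather than values from the paper.

```python
import math

def oda_stress_intensity(delta_sigma_mpa: float, oda_radius_m: float) -> float:
    """Stress intensity factor range (MPa*sqrt(m)) for an embedded
    penny-shaped crack of radius a under a remote stress range delta_sigma:
        delta_K = (2/pi) * delta_sigma * sqrt(pi * a)."""
    return (2.0 / math.pi) * delta_sigma_mpa * math.sqrt(math.pi * oda_radius_m)

# Illustrative values only (not from the paper): a 40 µm ODA radius at a
# 700 MPa stress range gives delta_K of roughly 5 MPa*sqrt(m), the order
# of magnitude of typical fatigue thresholds for high-strength steels.
print(oda_stress_intensity(700.0, 40e-6))
```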
Abstract:
Gas flows in micro-electro-mechanical systems have relatively large Knudsen numbers and usually fall in the slip-flow and transitional-flow regimes. The lattice Boltzmann method (LBM) was proposed by Nie et al. (Journal of Statistical Physics, vol. 107, pp. 279-289, 2002) to simulate microchannel and microcavity flows in the transitional flow regime. The present article tests the feasibility of this approach. For microchannel flows, the lattice Boltzmann results and the direct simulation Monte Carlo results show good agreement at small Kn (Kn = 0.0194), poor agreement at Kn = 0.194, and large deviations at Kn = 0.388. This suggests that the present version of the lattice Boltzmann method is not feasible for simulating transitional channel flows.
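For readers checking which regime a given microflow falls into, the sketch below evaluates the Knudsen number from the hard-sphere mean free path; the gas properties and channel height in the example are illustrative assumptions, not values taken from the study.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def knudsen_number(temperature_k: float, pressure_pa: float,
                   molecule_diameter_m: float, channel_height_m: float) -> float:
    """Kn = lambda / L with the hard-sphere mean free path
    lambda = k_B * T / (sqrt(2) * pi * d^2 * p)."""
    mean_free_path = K_B * temperature_k / (math.sqrt(2.0) * math.pi
                                            * molecule_diameter_m ** 2 * pressure_pa)
    return mean_free_path / channel_height_m

# Illustrative values (not from the paper): a nitrogen-like gas (d ~ 3.7e-10 m)
# at 300 K and 1 atm in a 1 µm channel gives Kn ~ 0.07, i.e. the slip regime.
print(knudsen_number(300.0, 101325.0, 3.7e-10, 1e-6))
```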
Abstract:
A theoretical description of thermo-plastic instability in simple shear is presented, based on a system of equations describing plastic deformation, the first law of thermodynamics and Fourier's law of heat conduction. Both mechanical and thermodynamic parameters influence the instability, and it is shown that two different modes of instability may exist. One of them is dominated by thermal softening and has a characteristic time and length, connected to each other through thermal diffusion. A criterion combining thermal softening, current stress, density, specific heat, work-hardening, thermal conductivity and current strain rate is obtained, and practical implications are discussed.
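The coupling between the characteristic time and length through thermal diffusion can be illustrated with a short sketch; the diffusion-length relation and the steel-like property values used below are generic assumptions, not the paper's exact criterion.

```python
import math

def thermal_diffusion_length(conductivity_w_mk: float, density_kg_m3: float,
                             specific_heat_j_kgk: float, time_s: float) -> float:
    """Characteristic length l = sqrt(D * t) with thermal diffusivity
    D = k / (rho * c), linking the characteristic time and length of the
    thermal-softening-dominated mode."""
    diffusivity = conductivity_w_mk / (density_kg_m3 * specific_heat_j_kgk)
    return math.sqrt(diffusivity * time_s)

# Illustrative values (not from the paper): steel-like properties and a
# 1 ms characteristic time give a length on the order of 0.1 mm.
print(thermal_diffusion_length(50.0, 7800.0, 450.0, 1e-3))
```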
Abstract:
Background: Few studies have analyzed predictors of length of stay (LOS) in patients admitted for acute bipolar manic episodes. The purpose of the present study was to estimate LOS and to determine the potential sociodemographic and clinical risk factors associated with longer hospitalization. Such information could be useful for identifying patients at high risk of a long LOS and allocating them to special treatments, with the aim of optimizing their hospital management. Methods: This was a cross-sectional study recruiting adult patients with a diagnosis of bipolar disorder (Diagnostic and Statistical Manual of Mental Disorders, 4th edition, text revision (DSM-IV-TR) criteria) who had been hospitalized due to an acute manic episode with a Young Mania Rating Scale total score greater than 20. Bivariate correlational and multiple linear regression analyses were performed to identify independent predictors of LOS. Results: A total of 235 patients from 44 centers were included in the study. The only factors significantly associated with LOS in the regression model were the number of previous episodes and the Montgomery-Åsberg Depression Rating Scale (MADRS) total score at admission (P < 0.05). Conclusions: Patients with a high number of previous episodes and those with depressive symptoms during mania are more likely to stay longer in hospital. Patients with severe depressive symptoms may have a more severe or treatment-resistant course of the acute bipolar manic episode.
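A minimal sketch of the kind of multiple linear regression reported above is given below, with length of stay regressed on the two retained predictors; the variable names and toy numbers are hypothetical and only show the structure of the analysis, not the study's data.

```python
import numpy as np

def fit_los_regression(prev_episodes, madrs_admission, los_days):
    """Ordinary least squares fit of length of stay on the two predictors
    retained in the final model (number of previous episodes and MADRS
    total score at admission). Inputs here are hypothetical."""
    X = np.column_stack([np.ones(len(los_days)), prev_episodes, madrs_admission])
    coef, *_ = np.linalg.lstsq(X, np.asarray(los_days, dtype=float), rcond=None)
    return {"intercept": coef[0], "prev_episodes": coef[1], "madrs": coef[2]}

# Toy example with made-up numbers, only to show the call signature.
print(fit_los_regression([2, 5, 8, 3], [10, 22, 30, 12], [14, 21, 35, 16]))
```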
Abstract:
We compare results of bottom trawl surveys off Washington, Oregon, and California in 1977, 1980, 1983, and 1986 to discern trends in population abundance, distribution, and biology. Catch per unit of effort, area-swept biomass estimates, and age and length compositions for 12 commercially important west coast groundfishes are presented to illustrate trends over the 10-year period. We discuss the precision, accuracy, and statistical significance of observed trends in abundance estimates. The influence of water temperature on the distribution of groundfishes is also briefly examined. Abundance estimates of canary rockfish, Sebastes pinniger, and yellowtail rockfish, S. flavidus, declined during the study period; greater declines were observed in Pacific ocean perch, S. alutus, lingcod, Ophiodon elongatus, and arrowtooth flounder, Atheresthes stomias. Biomass estimates of Pacific hake, Merluccius productus, and English, rex, and Dover soles (Pleuronectes vetulus, Errex zachirus, and Microstomus pacificus) increased, while bocaccio, S. paucispinis, and chilipepper, S. goodei, were stable. Sablefish, Anoplopoma fimbria, biomass estimates increased markedly from 1977 to 1980 and declined moderately thereafter. Precision was lowest for rockfishes, lingcod, and sablefish; it was highest for flatfishes because they were uniformly distributed. The accuracy of survey estimates could be gauged only for yellowtail and canary rockfish and sablefish. All fishery-based analyses produced much larger estimates of abundance than the bottom trawl surveys, which is indicative of the true catchability of survey trawls. Population trends from all analyses compared well except for canary rockfish, the species that presents the greatest challenge to obtaining reasonable precision and one that casts doubt on the usefulness of bottom trawl surveys for estimating its abundance.
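The area-swept biomass estimates mentioned above follow the standard swept-area logic, sketched below for a single stratum; the net width, tow distances and catches are made-up illustrative values, and the survey's actual stratified estimator is more elaborate.

```python
def swept_area_biomass(catches_kg, tow_distances_km, net_width_km, stratum_area_km2):
    """Area-swept biomass estimate: each tow gives a density
    (catch / area swept); the stratum biomass is the mean density times the
    stratum area. A simplified single-stratum sketch of the survey method."""
    densities = [c / (d * net_width_km) for c, d in zip(catches_kg, tow_distances_km)]
    mean_density = sum(densities) / len(densities)
    return mean_density * stratum_area_km2

# Toy numbers, for illustration only.
print(swept_area_biomass([120.0, 80.0, 200.0], [2.8, 3.0, 2.9], 0.015, 5000.0))
```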
Abstract:
This study addresses the problem of obtaining reliable velocities and displacements from accelerograms, a concern which often arises in earthquake engineering. A closed-form acceleration expression with random parameters is developed to test any strong-motion accelerogram processing method. Integration of this analytical time history yields the exact velocities, displacements and Fourier spectra. Noise and truncation can also be added. A two-step testing procedure is proposed and the original Volume II routine is used as an illustration. The main sources of error are identified and discussed. Although these errors may be reduced, it is impossible to extract the true time histories from an analog or digital accelerogram because of the uncertain noise level and missing data. Based on these uncertainties, a probabilistic approach is proposed as a new accelerogram processing method. A most probable record is presented, as well as a reliability interval which reflects the level of error and uncertainty introduced by the recording and digitization process. The data are processed in the frequency domain, under assumptions governing either the initial value or the temporal mean of the time histories. This new processing approach is tested on synthetic records. It induces little error and the digitization noise is adequately bounded. Filtering is intended to be kept to a minimum, and two optimal error-reduction methods are proposed. The "noise filters" reduce the noise level at each harmonic of the spectrum as a function of the signal-to-noise ratio. However, the correction at low frequencies is not sufficient to significantly reduce the drifts in the integrated time histories. The "spectral substitution method" uses optimization techniques to fit spectral models of near-field, far-field or structural motions to the amplitude spectrum of the measured data. The extremes of the spectrum of the recorded data, where noise and error prevail, are then partly altered, but not removed, and statistical criteria provide the choice of the appropriate cutoff frequencies. This correction method has been applied to existing strong-motion far-field, near-field and structural data with promising results. Since this correction method maintains the whole frequency range of the record, it should prove to be very useful in studying the long-period dynamics of local geology and structures.
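Since the records are processed in the frequency domain, a bare-bones sketch of frequency-domain integration of an accelerogram is given below; it omits the noise filters, spectral substitution and probabilistic treatment described above and simply divides the Fourier transform by iω, which is an assumption of this illustration rather than the study's full method.

```python
import numpy as np

def integrate_in_frequency_domain(acceleration, dt):
    """Integrate a uniformly sampled acceleration record to velocity and
    displacement by dividing its Fourier transform by (i*omega) once and
    twice. No noise filtering or baseline correction is applied, and the
    zero-frequency (mean) component is simply zeroed out."""
    n = len(acceleration)
    freq = np.fft.rfftfreq(n, d=dt)
    omega = 2.0 * np.pi * freq
    acc_hat = np.fft.rfft(acceleration)
    with np.errstate(divide="ignore", invalid="ignore"):
        vel_hat = np.where(omega > 0.0, acc_hat / (1j * omega), 0.0)
        disp_hat = np.where(omega > 0.0, acc_hat / (1j * omega) ** 2, 0.0)
    return np.fft.irfft(vel_hat, n=n), np.fft.irfft(disp_hat, n=n)

# Quick check against a pure cosine, whose exact integrals are known.
t = np.arange(0, 10, 0.01)
vel, disp = integrate_in_frequency_domain(np.cos(2 * np.pi * 1.0 * t), 0.01)
```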
Abstract:
An electrostatic mechanism for the flocculation of charged particles by polyelectrolytes of opposite charge is proposed. The difference between this and previous electrostatic coagulation mechanisms is the formation of charged polyion patches on the oppositely charged surfaces. The size of a patch is primarily a function of polymer molecular weight and the total patch area is a function of the amount of polymer adsorbed. The theoretical predictions of the model agree with the experimental dependence of the polymer dose required for flocculation on polymer molecular weight and solution ionic strength.
A theoretical analysis based on Derjaguin-Landau-Verwey-Overbeek electrical double layer theory and statistical mechanical treatments of adsorbed polymer configurations indicates that flocculation of charged particles in aqueous solutions by polyelectrolytes of opposite charge does not occur by the commonly accepted polymer-bridge mechanism.
A series of 1,2-dimethyl-5-vinylpyridinium bromide polymers with a molecular weight range of 6×10^3 to 5×10^6 was synthesized and used to flocculate dilute polystyrene latex and silica suspensions in solutions of various ionic strengths. It was found that with high molecular weight polymers and/or high ionic strengths the polymer dose required for flocculation is independent of molecular weight. With low molecular weights and/or low ionic strengths, the flocculation dose decreases with increasing molecular weight.
Abstract:
Part I. Novel composite polyelectrolyte materials were developed that exhibit desirable charge propagation and ion-retention properties. The morphology of electrode coatings cast from these materials was shown to be more important for their electrochemical behavior than their chemical composition.
Part II. The Wilhelmy plate technique for measuring dynamic surface tension was extended to electrified liquid-liquid interphases. The dynamical response of the aqueous NaF-mercury electrified interphase was examined by concomitant measurement of surface tension, current, and applied electrostatic potential. Observations of the surface tension response to linear sweep voltammetry and to step-function perturbations in the applied electrostatic potential (e.g., chronotensiometry) provided strong evidence that relaxation processes proceed for time periods at least an order of magnitude longer than those necessary to establish diffusion equilibrium. The dynamical response of the surface tension is analyzed within the context of non-equilibrium thermodynamics and a kinetic model that requires three simultaneous first-order processes.
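A sketch of fitting a relaxation transient with three simultaneous first-order processes is shown below; the functional form (a constant plus three decaying exponentials) and the initial-guess values are assumptions standing in for the paper's kinetic model.

```python
import numpy as np
from scipy.optimize import curve_fit

def three_first_order_relaxations(t, g_inf, a1, k1, a2, k2, a3, k3):
    """Surface tension transient modeled as three simultaneous first-order
    processes: gamma(t) = gamma_inf + sum_i a_i * exp(-k_i * t).
    A generic sketch; the paper's actual kinetic model may differ in detail."""
    return (g_inf + a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t)
            + a3 * np.exp(-k3 * t))

# Fit to a measured transient (t in s, gamma in mN/m); t_data, gamma_data and
# the initial guesses p0 below are hypothetical placeholders.
# params, _ = curve_fit(three_first_order_relaxations, t_data, gamma_data,
#                       p0=[400.0, 5.0, 10.0, 3.0, 1.0, 1.0, 0.1])
```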
Abstract:
For more than 55 years, data have been collected on the population of pike Esox lucius in Windermere, first by the Freshwater Biological Association (FBA) and, since 1989, by the Institute of Freshwater Ecology (IFE) of the NERC Centre for Ecology and Hydrology. The aim of this article is to explore some methodological and statistical issues associated with the precision of pike gill net catches and catch-per-unit-effort (CPUE) data, further to those examined by Bagenal (1972) and especially in the light of the current deployment within the Windermere long-term sampling programme. Specifically, consideration is given to the precision of catch estimates from gill netting, including the effects of sampling different locations, the effectiveness of sampling for distinguishing between years, and the effects of changing fishing effort.
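As a simple illustration of CPUE precision, the sketch below computes per-setting CPUE together with its coefficient of variation and standard error; this generic estimator and the toy catch numbers are assumptions, not the programme's actual analysis.

```python
import statistics

def cpue_precision(catches, efforts):
    """Catch-per-unit-effort for a set of gill-net settings, with the
    coefficient of variation and standard error of the per-setting CPUE
    values as simple measures of precision. A generic sketch, not the
    paper's exact estimator."""
    cpue = [c / e for c, e in zip(catches, efforts)]
    mean = statistics.mean(cpue)
    sd = statistics.stdev(cpue)
    return {"mean_cpue": mean,
            "cv": sd / mean,
            "se": sd / len(cpue) ** 0.5}

# Toy numbers: pike caught per net-night over five net settings.
print(cpue_precision([3, 5, 2, 6, 4], [1.0, 1.0, 1.0, 1.0, 1.0]))
```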
Abstract:
An instrument, the Caltech High Energy Isotope Spectrometer Telescope (HEIST), has been developed to measure isotopic abundances of cosmic ray nuclei in the charge range 3 ≤ Z ≤ 28 and the energy range between 30 and 800 MeV/nuc by employing an energy loss versus residual energy technique. Measurements of particle trajectories and energy losses are made using a multiwire proportional counter hodoscope and a stack of CsI(Tl) crystal scintillators, respectively. A detailed analysis has been made of the mass resolution capabilities of this instrument.
Landau fluctuations set a fundamental limit on the attainable mass resolution, which for this instrument ranges between ~0.07 AMU for Z ≈ 3 and ~0.2 AMU for Z ≈ 26. Contributions to the mass resolution due to uncertainties in measuring the path length and energy losses of the detected particles are shown to degrade the overall mass resolution to between ~0.1 AMU (Z ≈ 3) and ~0.3 AMU (Z ≈ 26).
A formalism, based on the leaky box model of cosmic ray propagation, is developed for obtaining isotopic abundance ratios at the cosmic ray sources from abundances measured in local interstellar space for elements having three or more stable isotopes, one of which is believed to be absent at the cosmic ray sources. This purely secondary isotope is used as a tracer of secondary production during propagation. This technique is illustrated for the isotopes of the elements O, Ne, S, Ar and Ca.
The uncertainties in the derived source ratios due to errors in fragmentation and total inelastic cross sections, in observed spectral shapes, and in measured abundances are evaluated. It is shown that the dominant sources of uncertainty are uncorrelated errors in the fragmentation cross sections and statistical uncertainties in measuring local interstellar abundances.
These results are applied to estimate the extent to which uncertainties must be reduced in order to distinguish between cosmic ray production in a solar-like environment and in various environments with greater neutron enrichments.
Abstract:
Part I: The dynamic response of an elastic half space to an explosion in a buried spherical cavity is investigated by two methods. The first is implicit, and the final expressions for the displacements at the free surface are given as a series of spherical wave functions whose coefficients are solutions of an infinite set of linear equations. The second method is based on Schwarz's technique to solve boundary value problems, and leads to an iterative solution, starting with the known expression for the point source in a half space as first term. The iterative series is transformed into a system of two integral equations, and into an equivalent set of linear equations. In this way, a dual interpretation of the physical phenomena is achieved. The systems are treated numerically and the Rayleigh wave part of the displacements is given in the frequency domain. Several comparisons with simpler cases are analyzed to show the effect of the cavity radius-depth ratio on the spectra of the displacements.
Part II: A high-speed, large-capacity hypocenter location program has been written for an IBM 7094 computer. Important modifications to the standard method of least squares have been incorporated in it. Among them are a new way to obtain the depth of shocks from the normal equations, and the computation of variable travel times for local shocks in order to account automatically for crustal variations. The multiregional travel times, largely based upon the investigations of the United States Geological Survey, are compared against actual traverses to test their validity.
It is shown that several crustal phases provide enough control to obtain good depth solutions for nuclear explosions, even though not all the recording stations are in the region where crustal corrections are considered. The use of European travel times to locate the French nuclear explosion of May 1962 in the Sahara proved more adequate than previous work.
A simpler program, with manual crustal corrections, is used to process the Kern County series of aftershocks, and a clearer picture of the tectonic mechanism of the White Wolf fault is obtained.
Shocks in the California region are processed automatically, and statistical frequency-depth and energy-depth curves are discussed in relation to the tectonics of the area.
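A compact sketch of least-squares (Geiger-type) hypocenter location in a uniform half-space is given below; the uniform velocity model, station layout and iteration settings are simplifying assumptions standing in for the multiregional travel-time tables and modified least-squares scheme used in the program.

```python
import numpy as np

def locate_hypocenter(stations, arrival_times, velocity, x0, iterations=10):
    """Geiger-style least-squares hypocenter location in a uniform medium.
    stations is an (N, 3) array of receiver coordinates, arrival_times the
    observed arrivals, and x0 an initial guess (x, y, z, origin_time).
    Each iteration linearizes the travel times about the current estimate
    and solves the normal equations for a correction."""
    est = np.asarray(x0, dtype=float)
    stations = np.asarray(stations, dtype=float)
    for _ in range(iterations):
        offsets = stations - est[:3]
        dists = np.linalg.norm(offsets, axis=1)
        predicted = est[3] + dists / velocity
        residuals = np.asarray(arrival_times) - predicted
        # Partial derivatives of travel time w.r.t. x, y, z and origin time.
        G = np.column_stack([-offsets / (dists[:, None] * velocity),
                             np.ones(len(stations))])
        correction, *_ = np.linalg.lstsq(G, residuals, rcond=None)
        est += correction
    return est

# Toy example: four surface stations (km), a 6 km/s medium, synthetic arrivals.
sta = [[0, 0, 0], [50, 0, 0], [0, 50, 0], [50, 50, 0]]
true = np.array([20.0, 30.0, 10.0, 0.0])
obs = true[3] + np.linalg.norm(np.asarray(sta) - true[:3], axis=1) / 6.0
print(locate_hypocenter(sta, obs, 6.0, [25.0, 25.0, 5.0, 0.0]))
```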