Abstract:
The efficacy of photodynamic therapy (PDT) depends on a variety of parameters: concentration of the photosensitizer at the time of treatment, light wavelength, fluence, fluence rate, availability of oxygen within the illuminated volume, and light distribution in the tissue. Dosimetry in PDT requires the combined delivery of adequate amounts of light, drug, and tissue oxygen, and adequate dosimetry should be able to predict the extent of tissue damage. The photosensitizer photobleaching rate depends on the availability of molecular oxygen in the tissue. According to photosensitizer photobleaching models, high photobleaching should be associated with high production of singlet oxygen and therefore with stronger photodynamic action, resulting in a greater depth of necrosis. The purpose of this work is to show a possible correlation between depth of necrosis and the in vivo photodegradation of the photosensitizer (in this case, Photogem®) during PDT. Such a correlation opens possibilities for the development of a real-time evaluation of the photodynamic action during PDT application. Experiments were performed over a range of fluences (0–450 J/cm²) at a constant fluence rate of 250 mW/cm², applying different illumination times (0–1800 s) to achieve the desired fluence. A quantity ψ was defined as the product of the fluorescence ratio (related to photosensitizer degradation at the surface) and the observed depth of necrosis. The correlation between depth of necrosis and surface fluorescence signal expressed by ψ could allow, in principle, noninvasive monitoring of PDT effects during treatment. A high degree of correlation is observed, and a simple mathematical model to justify the results is presented.
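Written out, the quantity described above is a simple product; in this minimal sketch the symbols F₀ and F (surface fluorescence before and after the delivered fluence) and d_nec (observed depth of necrosis) are our labels, and the exact orientation of the ratio follows the paper's definition:

```latex
% Minimal formalization (symbols ours, not the paper's notation)
\psi \;=\; \underbrace{\frac{F}{F_{0}}}_{\text{surface fluorescence ratio}} \times\; d_{\mathrm{nec}}
```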
Abstract:
Ion channels are pores formed by proteins and responsible for carrying ion fluxes through cellular membranes. Ion channels can assume conformational states and thereby control ion flow. Physically, the conformational transitions from one state to another are associated with energy barriers between them and depend on stimuli such as electric fields, ligands, and second messengers. Several models have been proposed to describe the kinetics of ion channels. The classical Markovian model assumes that a future transition is independent of the time the ion channel has stayed in a previous state. Other models, such as the fractal and chaotic models, assume that the rate of transitions between states depends on the time the ion channel has stayed in a previous state. For the calcium-activated potassium channels of Leydig cells, R/S Hurst analysis has indicated that the channels are long-term correlated with a Hurst coefficient H around 0.7, showing persistent memory in their kinetics. Here, we applied R/S analysis to the opening and closing dwell-time series obtained from data simulated with the chaotic model proposed by L. Liebovitch and T. Toth [J. Theor. Biol. 148, 243 (1991)], and we show that this chaotic model, or any model that treats the set of channel openings and closings as independent events, is inadequate to describe the long-term correlation (memory) already described for the experimental data. (C) 2008 American Institute of Physics.
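To make the R/S procedure concrete, the sketch below estimates a Hurst exponent from a dwell-time series using the standard rescaled-range recipe; the function name, window choices, and test series are ours, not the authors':

```python
import numpy as np

def hurst_rs(x, window_sizes=None):
    """Estimate the Hurst exponent H of a 1-D series via rescaled-range (R/S) analysis."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if window_sizes is None:
        window_sizes = np.unique(np.logspace(np.log10(10), np.log10(n // 2), 20).astype(int))
    rs = []
    for w in window_sizes:
        vals = []
        for start in range(0, n - w + 1, w):      # non-overlapping segments of length w
            seg = x[start:start + w]
            z = np.cumsum(seg - seg.mean())        # cumulative deviation from the segment mean
            r = z.max() - z.min()                  # range of the cumulative deviation
            s = seg.std(ddof=0)                    # segment standard deviation
            if s > 0:
                vals.append(r / s)
        rs.append(np.mean(vals))
    # H is the slope of log(R/S) against log(window size)
    h, _ = np.polyfit(np.log(window_sizes), np.log(rs), 1)
    return h

# An uncorrelated (memoryless) series should give H close to 0.5, whereas the
# experimental dwell times discussed above gave H around 0.7.
rng = np.random.default_rng(0)
print(hurst_rs(rng.exponential(size=4096)))
```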
Abstract:
In this paper, we study the behavior of immune memory against antigenic mutation. Using a dynamic model proposed by one of the authors in previous studies (A. de Castro [Eur. Phys. J. Appl. Phys. 33, 147 (2006) and Simul. Model. Pract. Theory 15, 831 (2007)]), we have performed simulations of several inoculations, where in each virtual sample the viral population undergoes mutations. Our results suggest that the sustainability of the immunizations depends on viral variability and that the memory lifetimes are not random, contradicting what was suggested by Tarlinton et al. [Curr. Opin. Immunol. 20, 162 (2008)]. We show that what may cause an apparently random behavior of the immune memory is the antigenic variability.
Three-dimensional finite element thermal analysis of dental tissues irradiated with Er,Cr:YSGG laser
Abstract:
In the present study, a finite element model of a half-sectioned molar tooth was developed in order to understand the thermal behavior of dental hard tissues (both enamel and dentin) under laser irradiation. The model was validated by comparison with an in vitro experiment in which a sound molar tooth was irradiated by an Er,Cr:YSGG pulsed laser. The numerical tooth model was conceived to simulate the in vitro experiment, reproducing the dimensions and physical conditions of a typical sound molar tooth, considering laser energy absorption and calculating the heat transfer through the dental tissues in three dimensions. The numerical assay considered the same three laser energy densities at the same wavelength (2.79 μm) used in the experiment. A thermographic camera was used in the in vitro experiment, in which an Er,Cr:YSGG laser (2.79 μm) irradiated tooth samples and the infrared images obtained were stored and analyzed. The temperature increments in the finite element model and in the in vitro experiment were compared. The distribution of temperature inside the tooth versus time, plotted for two critical points, showed relatively good agreement between the results of the experiment and the model. The three-dimensional model allows one to understand how heat propagates through the dentin and enamel and to relate the amount of energy applied, the width of the laser pulses, and the temperature inside the tooth. (C) 2008 American Institute of Physics. [DOI: 10.1063/1.2953526]
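As a much-simplified illustration of the heat-transfer calculation (the paper's model is a full 3-D finite element computation), the sketch below marches an explicit 1-D finite-difference heat equation through an enamel/dentin bilayer; all material properties, geometry, and boundary conditions are assumed round numbers for the sketch, not the paper's values:

```python
import numpy as np

# Illustrative 1-D explicit finite-difference step for the heat equation
# dT/dt = alpha * d2T/dx2 through an enamel/dentin bilayer. Diffusivities,
# geometry, and the crude surface boundary are assumptions for the sketch.

nx, dx, dt = 200, 5e-6, 1e-5          # grid points, spacing [m], time step [s]
alpha = np.full(nx, 4.2e-7)           # enamel thermal diffusivity [m^2/s] (assumed)
alpha[nx // 2:] = 1.8e-7              # dentin layer (assumed)
assert (alpha * dt / dx**2).max() < 0.5, "explicit-scheme stability condition"

T = np.full(nx, 37.0)                 # body temperature [deg C]
T[0] = 100.0                          # laser-heated surface (crude fixed boundary)

for _ in range(2000):                 # march the diffusion forward in time
    T[1:-1] += alpha[1:-1] * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])

print(f"temperature 0.25 mm below the surface: {T[50]:.1f} C")
```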
Abstract:
The existence of a classical limit describing interacting particles in a second-quantized theory of identical particles with bosonic symmetry is proved. This limit exists in addition to the previously established classical limit with classical field behavior, showing that the ℏ → 0 limit of the theory is not unique. An analogous result is valid for a free massive scalar field: two distinct classical limits are proved to exist, describing a system of particles or a classical field. The introduction of local operators in order to represent kinematical properties of interest is shown to break the permutation symmetry under some localizability conditions, allowing the study of individual particle properties.
Abstract:
We introduce the Coupled Aerosol and Tracer Transport model to the Brazilian developments on the Regional Atmospheric Modeling System (CATT-BRAMS). CATT-BRAMS is an on-line transport model fully consistent with the simulated atmospheric dynamics. Emission sources of trace gases from biomass burning and urban-industrial-vehicular activities, and of aerosol particles from biomass burning, are obtained from several published datasets and remote sensing information. The tracer and aerosol mass concentration prognostics include the effects of sub-grid scale turbulence in the planetary boundary layer, convective transport by shallow and deep moist convection, wet and dry deposition, and plume rise associated with vegetation fires, in addition to the grid-scale transport. The radiation parameterization takes into account the interaction between the simulated biomass burning aerosol particles and short- and long-wave radiation. The atmospheric model BRAMS is based on the Regional Atmospheric Modeling System (RAMS), with several improvements associated with cumulus convection representation, soil moisture initialization, and a surface scheme tuned for the tropics, among others. In this paper the CATT-BRAMS model is used to simulate carbon monoxide and particulate material (PM2.5) surface fluxes and atmospheric transport during the 2002 LBA field campaigns, conducted during the transition from the dry to the wet season in the southwest Amazon Basin. Model evaluation is addressed through comparisons between model results and near-surface, radiosonde, and airborne measurements performed during the field campaign, as well as remote sensing derived products. We show the matching of emission strengths to the carbon monoxide observed in the LBA campaign. A relatively good comparison to the MOPITT data is also obtained, in spite of the difficulties imposed by the MOPITT a priori assumptions.
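Schematically, the tracer mass-continuity prognostic described above can be written as a sum of tendency terms; the grouping and subscripts below are our shorthand for the processes listed, not the model's exact formulation:

```latex
\frac{\partial \bar{s}}{\partial t} =
\left(\frac{\partial \bar{s}}{\partial t}\right)_{\mathrm{adv}}
+ \left(\frac{\partial \bar{s}}{\partial t}\right)_{\mathrm{PBL\;turb}}
+ \left(\frac{\partial \bar{s}}{\partial t}\right)_{\mathrm{shallow,\,deep\;conv}}
+ W + R + Q_{\mathrm{src}}
```

Here s̄ stands for the grid-mean tracer mixing ratio, W and R for wet and dry deposition, and Q_src for the emission sources, including the fire plume-rise term.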
Abstract:
This paper presents a new statistical algorithm to estimate rainfall over the Amazon Basin region using the Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI). The algorithm relies on empirical relationships, derived for different raining-type systems, between coincident measurements of surface rainfall rate and 85-GHz polarization-corrected brightness temperature as observed by the precipitation radar (PR) and TMI on board the TRMM satellite. The scheme includes rain/no-rain area delineation (screening) and system-type classification routines for rain retrieval. The algorithm is validated against independent measurements of the TRMM-PR and S-band dual-polarization Doppler radar (S-Pol) surface rainfall data for two different periods. Moreover, the performance of this rainfall estimation technique is evaluated against well-known methods, namely, the TRMM-2A12 [the Goddard profiling algorithm (GPROF)], the Goddard scattering algorithm (GSCAT), and the National Environmental Satellite, Data, and Information Service (NESDIS) algorithms. The proposed algorithm shows a normalized bias of approximately 23% for both PR and S-Pol ground truth datasets and a mean error of 0.244 mm h⁻¹ (PR) and −0.157 mm h⁻¹ (S-Pol). For rain volume estimates using PR as reference, a correlation coefficient of 0.939 and a normalized bias of 0.039 were found. With respect to rainfall distributions and rain area comparisons, the results showed that the formulation proposed is efficient and compatible with the physics and dynamics of the observed systems over the area of interest. The performance of the other algorithms showed that GSCAT presented low normalized bias for rain areas and rain volume [0.346 (PR) and 0.361 (S-Pol)], and GPROF showed a rainfall distribution similar to that of the PR and S-Pol but with a bimodal distribution. Last, the five algorithms were evaluated during the TRMM Large-Scale Biosphere-Atmosphere Experiment in Amazonia (LBA) 1999 field campaign to verify the precipitation characteristics observed during the easterly and westerly Amazon wind flow regimes. The proposed algorithm presented a cumulative rainfall distribution similar to the observations during the easterly regime, but it underestimated for the westerly period at rainfall rates above 5 mm h⁻¹. NESDIS₁ overestimated for both wind regimes but presented the best westerly representation. NESDIS₂, GSCAT, and GPROF underestimated in both regimes, but GPROF was closer to the observations during the easterly flow.
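The retrieval idea (screen rain/no-rain, classify the system type, then apply a per-class empirical fit between 85-GHz polarization-corrected brightness temperature and rain rate) can be sketched as below; every threshold and coefficient here is invented for illustration and is not one of the paper's fitted relationships:

```python
import numpy as np

# Toy screening + classification + per-class regression retrieval.
# Thresholds and coefficients are illustrative assumptions only.

def rain_rate(pct85):
    pct85 = np.asarray(pct85, dtype=float)
    rr = np.zeros_like(pct85)
    raining = pct85 < 250.0                  # screening: cold PCT indicates ice scattering
    convective = pct85 < 210.0               # crude system-type split
    stratiform = raining & ~convective
    rr[stratiform] = 0.08 * (250.0 - pct85[stratiform])                 # stratiform fit (assumed)
    rr[convective] = 2.0 + 0.25 * (210.0 - pct85[convective])           # convective fit (assumed)
    return rr                                # mm/h

print(rain_rate([280.0, 240.0, 190.0]))      # -> no rain, light stratiform, convective
```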
Abstract:
Creation of cold dark matter (CCDM) can be macroscopically described by a negative pressure, and, therefore, the mechanism is capable of accelerating the Universe without the need for an additional dark energy component. In this framework, we discuss the evolution of perturbations by considering a neo-Newtonian approach where, unlike in standard Newtonian cosmology, the fluid pressure is taken into account even in the homogeneous and isotropic background equations (Lima, Zanchin, and Brandenberger, MNRAS 291, L1, 1997). The evolution of the density contrast is calculated in the linear approximation and compared to the one predicted by the ΛCDM model. The difference between the CCDM and ΛCDM predictions at the perturbative level is quantified by using three different statistical methods, namely: a simple χ²-analysis in the relevant parameter space, a Bayesian statistical inference, and, finally, a Kolmogorov-Smirnov test. We find that under certain circumstances, the CCDM scenario analyzed here predicts an overall dynamics (including the Hubble flow and the matter fluctuation field) which fully recovers that of the traditional cosmic concordance model. Our basic conclusion is that such a reduction of the dark sector provides a viable alternative description to the accelerating ΛCDM cosmology.
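As a toy illustration of the statistical comparison, the sketch below applies a χ² grid scan and a two-sample Kolmogorov-Smirnov test to placeholder growth curves; the functional forms, error bar, and parameter grid are invented (a Bayesian analysis would amount to weighting the same grid by exp(−χ²/2)):

```python
import numpy as np
from scipy import stats

# Placeholder growth curves: a reference LCDM-like delta(a) and a
# one-parameter CCDM-like alternative. Forms and the 1% error are invented.
a = np.linspace(0.1, 1.0, 50)
delta_ref = a**0.55

def delta_ccdm(eps):
    return a**(0.55 + eps)

# chi^2 scan over the parameter grid
eps_grid = np.linspace(-0.2, 0.2, 401)
chi2 = np.array([np.sum((delta_ccdm(e) - delta_ref)**2 / 0.01**2) for e in eps_grid])
print("best-fit eps:", eps_grid[chi2.argmin()])

# two-sample Kolmogorov-Smirnov comparison of the two curves
print(stats.ks_2samp(delta_ccdm(0.05), delta_ref))
```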
Abstract:
It is possible that a system composed of up, down, and strange quarks exists as the true ground state of nuclear matter at high densities and low temperatures. This exotic plasma, called strange quark matter (SQM), seems to be even more favorable energetically if the quarks are in a superconducting state, the so-called color-flavor locked state. Here we present calculations, made on the basis of the MIT bag model, of the influence of finite temperature on the allowed parameters characterizing the system for the stability of bulk SQM (the so-called stability windows) and also for strangelets, small lumps of SQM, both in the color-flavor locking scenario. We compare these results with those for unpaired SQM and also briefly discuss some of their astrophysical implications. The issue of the strangelet's electric charge is also discussed. The effects of dynamical screening, though important for strangelets of nonpaired SQM, are not relevant when considering pairing among all three flavors and colors of quarks.
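For reference, the bulk criterion behind the "stability windows" is the standard one: SQM is absolutely stable if its energy per baryon at zero pressure and temperature lies below that of the most bound nucleus, a window that pairing widens by lowering the energy. Schematically:

```latex
\left.\frac{E}{A}\right|_{\mathrm{SQM},\,P=0,\,T=0} \;<\; \left.\frac{E}{A}\right|_{{}^{56}\mathrm{Fe}} \;\simeq\; 930\ \mathrm{MeV}
```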
Abstract:
The influence of a possible nonzero chemical potential μ on the nature of dark energy is investigated by assuming that the dark energy is a relativistic perfect simple fluid obeying the equation of state p = ωρ (ω < 0, constant). The entropy condition, S ≥ 0, implies that the possible values of ω depend heavily on the magnitude, as well as on the sign, of the chemical potential. For μ > 0, the ω parameter must be greater than −1 (vacuum is forbidden), while for μ < 0 not only the vacuum but even phantomlike behavior (ω < −1) is allowed. In any case, the ratio between the chemical potential and the temperature remains constant, that is, μ/T = μ₀/T₀. Assuming that the dark energy constituents have either a bosonic or fermionic nature, the general form of the spectrum is also proposed. For bosons μ is always negative, and the extended Wien's law allows only a dark component with ω < −1/2, which includes the vacuum and phantomlike cases. The same happens in the fermionic branch for μ < 0. However, fermionic particles with μ > 0 are permitted only if −1 < ω < −1/2.
Abstract:
A dark energy component has recently been proposed to explain the current acceleration of the Universe. Unless some unknown symmetry in Nature prevents or suppresses it, such a field may interact with the pressureless component of dark matter, giving rise to the so-called coupled quintessence models. In this paper we propose a new cosmological scenario where radiation and baryons are conserved while the dark energy component decays into cold dark matter. The dilution of the cold dark matter particles, attenuated with respect to the usual a⁻³ scaling due to the interaction, is characterized by a positive parameter ε, whereas the dark energy satisfies the equation of state pₓ = ωρₓ (ω < 0). We carry out a joint statistical analysis involving recent observations of type Ia supernovae, the baryon acoustic oscillation peak, and the cosmic microwave background shift parameter to check the observational viability of the coupled quintessence scenario proposed here.
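In symbols, the attenuated dilution described above corresponds to the usual parameterization below, where a is the scale factor; the notation is ours:

```latex
\rho_{\mathrm{dm}} \propto a^{-3+\epsilon} \quad (\epsilon > 0), \qquad p_{x} = \omega\,\rho_{x} \quad (\omega < 0)
```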
Abstract:
Magnetic fields of intensities similar to those in our Galaxy are also observed in high-redshift galaxies, where a mean-field dynamo would not have had time to produce them. Therefore, a primordial origin is indicated. It has been suggested that magnetic fields were created at various primordial eras: during inflation, the electroweak phase transition, the quark-hadron phase transition (QHPT), the formation of the first objects, and reionization. We suggest here that the large-scale ∼μG fields observed in galaxies at both high and low redshifts by Faraday rotation measurements (FRMs) have their origin in the electromagnetic fluctuations that naturally occurred in the dense hot plasma that existed just after the QHPT. We evolve the predicted fields to the present time. The size of the region containing a coherent magnetic field increased due to the fusion of smaller regions. Magnetic fields (MFs) of ∼10 μG over a comoving ∼1 pc region are predicted at redshift z ∼ 10. These fields are orders of magnitude greater than those predicted in previous scenarios for creating primordial magnetic fields. Line-of-sight average MFs of ∼10⁻² μG, valid for FRMs, are obtained over a 1 Mpc comoving region at redshift z ∼ 10. In the collapse to a galaxy (comoving size ∼30 kpc) at z ∼ 10, the fields are amplified to ∼10 μG. This indicates that the MFs created immediately after the QHPT (10⁻⁴ s), predicted by the fluctuation-dissipation theorem, could be the origin of the ∼μG fields observed by FRMs in galaxies at both high and low redshifts. Our predicted MFs are shown to be consistent with present observations. We discuss the possibility that the predicted MFs could cause non-negligible deflections of ultrahigh-energy cosmic rays and help create the observed isotropic distribution of their incoming directions. We also discuss the importance of the volume-average magnetic field predicted by our model in producing the first stars and in reionizing the Universe.
Abstract:
Strangelets arriving from the interstellar medium are an interesting target for experiments searching for evidence of this hypothetical state of hadronic matter. We entertain the possibility of a trapped strangelet population, quite analogous to the ordinary nuclei and electron radiation belts. For a population of strangelets to be trapped by the geomagnetic field, the incoming particles would have to fulfill certain conditions, namely, having magnetic rigidities above the geomagnetic cutoff and below a certain threshold for adiabatic motion to hold. We show in this work that, for fully ionized strangelets, there is a narrow window for stable trapping. An estimate of the stationary population is presented and the dominant loss mechanisms are discussed. It is shown that the population would be substantially enhanced with respect to the interstellar medium flux (by up to 2 orders of magnitude) due to quasistable trapping.
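Stated compactly, the trapping window discussed above is a condition on the magnetic rigidity (momentum per unit charge); the bound labels below are our shorthand for the geomagnetic cutoff and the adiabaticity threshold:

```latex
R \equiv \frac{pc}{Ze}, \qquad R_{\mathrm{cutoff}} \;<\; R \;<\; R_{\mathrm{adiabatic}}
```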
Abstract:
We discuss the dynamics of the Universe within the framework of the massive graviton cold dark matter scenario (MGCDM), in which gravitons are geometrically treated as massive particles. In this modified gravity theory, the main effect of the gravitons is to alter the density evolution of the cold dark matter component in such a way that the Universe evolves to an accelerating expanding regime, as presently observed. Tight constraints on the main cosmological parameters of the MGCDM model are derived by performing a joint likelihood analysis involving recent type Ia supernovae data, the cosmic microwave background shift parameter, and the baryon acoustic oscillations as traced by the Sloan Digital Sky Survey red luminous galaxies. The linear evolution of small density fluctuations is also analyzed in detail. It is found that the growth factor of the MGCDM model differs slightly (by ∼1-4%) from the one provided by the conventional flat ΛCDM cosmology. The growth rates of clustering predicted by the MGCDM and ΛCDM models are confronted with observations, and the corresponding best-fit values of the growth index γ are determined. Using the expectations of realistic future X-ray and Sunyaev-Zeldovich cluster surveys, we derive the dark matter halo mass function and the corresponding redshift distribution of cluster-size halos for the MGCDM model. Finally, we also show that the Hubble flow differences between the MGCDM and ΛCDM models provide a halo redshift distribution departing significantly from those predicted by other dark energy models. These results suggest that the MGCDM model can be observationally distinguished from ΛCDM and also from a large number of dark energy models recently proposed in the literature.
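For context, the growth index γ quoted above is conventionally defined through the growth rate of clustering:

```latex
f(a) \equiv \frac{d\ln\delta}{d\ln a} \simeq \Omega_{m}(a)^{\gamma}
```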
Abstract:
Context. Precise S abundances are important in the study of the early chemical evolution of the Galaxy. In particular, the site of its formation remains uncertain because, at low metallicity, the trend of this alpha-element versus [Fe/H] remains unclear. Moreover, although sulfur is not significantly bound in dust grains in the ISM, it seems to behave differently in DLAs and in old metal-poor stars. Aims. We attempt a precise measurement of the S abundance in a sample of extremely metal-poor (EMP) stars observed with the ESO VLT equipped with UVES, taking into account NLTE and 3D effects. Methods. The NLTE profiles of the lines of multiplet 1 of S I were computed with a version of the program MULTI, including opacity sources from ATLAS9 and based on a new model atom for S. These profiles were fitted to the observed spectra. Results. We find that sulfur in EMP stars behaves like the other alpha-elements, with [S/Fe] remaining approximately constant below [Fe/H] = -3. However, [S/Mg] seems to decrease slightly with increasing [Mg/H]. The overall abundance patterns of O, Na, Mg, Al, S, and K are most closely matched by the SN model yields of Heger & Woosley. The [S/Zn] ratio in EMP stars is solar, as also found in DLAs. We derive an upper limit on the sulfur abundance, [S/Fe] < +0.5, for the ultra metal-poor star CS 22949-037. This, along with a previously reported measurement of zinc, argues against the conjecture that the light-element abundance pattern of this star (and, by analogy, of the hyper iron-poor stars HE 0107-5240 and HE 1327-2326) is due to dust depletion.
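For readers outside the field, the bracket notation used throughout this abstract is the standard logarithmic abundance ratio relative to the Sun:

```latex
[\mathrm{A}/\mathrm{B}] \equiv \log_{10}\!\left(\frac{N_{\mathrm{A}}}{N_{\mathrm{B}}}\right)_{\star} - \log_{10}\!\left(\frac{N_{\mathrm{A}}}{N_{\mathrm{B}}}\right)_{\odot}
```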