956 results for Dispersion Model
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
A detailed magnetostratigraphic and rock-magnetism study of two Late Palaeozoic rhythmite exposures (Itu and Rio do Sul) from the Itararé Group (Paraná Basin, Brazil) is presented in this paper. After stepwise alternating-field procedures and thermal cleaning were performed, samples from both collections show reversed characteristic magnetization components, as expected for Late Palaeozoic rocks. However, the Itu rocks presented an anomalously flat inclination pattern that could not be corrected with mathematical methods based on the virtual geomagnetic pole (VGP) distributions. Correlation tests between the maximum anisotropy of magnetic susceptibility axis (K1) and the magnetic declination indicated a possible mechanical influence on the remanence acquisition. The Rio do Sul sequence displayed medium to high inclinations and provided a high-quality palaeomagnetic pole (after inclination-shallowing corrections of f = 0.8) of 347.5 degrees E, 63.2 degrees S (N = 119; A95 = 3.3; K = 31), which is in accordance with the Palaeozoic apparent polar wander path of South America. The angular dispersion (Sb) of the VGP distribution, calculated using both a fixed 45 degrees cut-off angle and the Vandamme method, was compared to the best-fit Model G for mid-latitudes. Both Sb results are in reasonable agreement with the Sb-palaeolatitude relationship predicted for the Cretaceous Normal Superchron (CNS), although the Sb value obtained after applying the Vandamme cut-off is a little lower than expected. This result, together with those previously reported for low palaeolatitudes during the Permo-Carboniferous Reversed Superchron (PCRS), indicates that the low secular-variation regime of the geodynamo already identified for the CNS might also have been predominant during the PCRS.
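The VGP angular dispersion with a fixed 45-degree cut-off described above can be sketched as follows (an illustrative reconstruction, not the authors' code; the iterative recomputation of the mean pole and all function names are assumptions):

```python
import numpy as np

def angular_distances(vgps, mean_pole):
    # vgps: (N, 3) array of unit vectors; mean_pole: (3,) unit vector
    cosd = np.clip(vgps @ mean_pole, -1.0, 1.0)
    return np.degrees(np.arccos(cosd))

def sb_with_cutoff(vgps, cutoff=45.0):
    """Angular dispersion Sb = sqrt(sum(d_i^2) / (N - 1)) after
    iteratively removing VGPs farther than `cutoff` degrees from the
    (recomputed) mean pole."""
    kept = np.asarray(vgps, dtype=float)
    while True:
        mean = kept.sum(axis=0)
        mean /= np.linalg.norm(mean)          # vector-mean pole
        d = angular_distances(kept, mean)
        if (d <= cutoff).all():
            break
        kept = kept[d <= cutoff]              # discard outliers, iterate
    n = len(kept)
    return np.sqrt((d ** 2).sum() / (n - 1)), n
```

In the Vandamme method the cut-off angle is not fixed at 45 degrees but is itself recomputed from the current dispersion at each iteration, which is why the two procedures can give slightly different Sb values, as noted above.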
Abstract:
On the basis of the full analytical solution of the overall unitary dynamics, the time evolution of entanglement is studied in a simple bipartite model system evolving unitarily from a pure initial state. The system consists of two particles in one spatial dimension bound by harmonic forces, with its free center of mass initially localized in space in a minimum-uncertainty wavepacket. The existence of such initial states, in which the bound particles are not entangled, is discussed. Galilean invariance of the system ensures that the dynamics of entanglement between the two particles is independent of the mean momentum of the wavepacket. As shown, the entanglement is in fact driven by the dispersive free dynamics of the center of mass, and evolves on a time scale that depends on the interparticle interaction in an essential way.
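The dispersive centre-of-mass dynamics invoked above is the textbook spreading of a free minimum-uncertainty Gaussian packet; a minimal numerical sketch (function names and unit choices are illustrative, not from the paper):

```python
import numpy as np

def sigma_free(t, sigma0=1.0, m=1.0, hbar=1.0):
    """Width of a free Gaussian wavepacket at time t:
    sigma(t) = sigma0 * sqrt(1 + (hbar t / (2 m sigma0^2))^2)."""
    return sigma0 * np.sqrt(1.0 + (hbar * t / (2.0 * m * sigma0 ** 2)) ** 2)

def dispersion_time(sigma0=1.0, m=1.0, hbar=1.0):
    """Time scale on which the packet width grows appreciably
    (the width has increased by sqrt(2) at this time)."""
    return 2.0 * m * sigma0 ** 2 / hbar
```

The Galilean-invariance statement in the abstract is visible here: the mean momentum never enters the width, only the mass and the initial localization do.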
Abstract:
We propose a stage-structured integrodifference model for the growth and dispersal of blowflies, taking into account the density dependence of fertility and survival rates and the non-overlap of generations. We assume a discrete-time, stage-structured model. The spatial dynamics is introduced by means of a redistribution kernel. We treat the one- and two-dimensional cases, the latter on the semi-plane with a reflective boundary, and show analytically that the upper bound for the invasion front speed is the same as in the one-dimensional case. Using laboratory data for the fertility and survival parameters and single-generation dispersal data from a capture-recapture experiment in South Africa, we obtain an estimate for the invasion velocity of blowflies of the species Chrysomya albiceps. The predicted invasion speed was compared with observational data for the invasion of the focal species in the Neotropics, and good agreement was found between model and observations.
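A one-dimensional integrodifference step of the kind described can be sketched as follows (illustrative: a Beverton-Holt growth map and a Gaussian kernel stand in for the fitted fertility/survival model and the measured dispersal data):

```python
import numpy as np

def gaussian_kernel(x, sigma, dx):
    """Discretised redistribution kernel: Gaussian density times grid
    spacing, so that discrete convolution approximates the integral."""
    return dx * np.exp(-x ** 2 / (2 * sigma ** 2)) / (np.sqrt(2 * np.pi) * sigma)

def beverton_holt(n, r0=5.0, k=100.0):
    # density-dependent net reproduction (an illustrative choice)
    return r0 * n / (1.0 + (r0 - 1.0) * n / k)

def idf_step(n, kernel, growth=beverton_holt):
    """One generation of n_{t+1}(x) = integral k(x - y) f(n_t(y)) dy:
    local growth, then redistribution by the (centred, odd-length) kernel."""
    return np.convolve(growth(n), kernel, mode="same")
```

Linearising at the leading edge gives the classical front-speed bound c* = min over s > 0 of (1/s) ln(r0 M(s)), where M is the kernel's moment generating function; for a Gaussian kernel of width sigma this reduces to sigma * sqrt(2 ln r0), and a simulated front approaches that speed.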
Abstract:
In this article we propose, for the first time, the negative binomial-beta Weibull (BW) regression model to study the recurrence of prostate cancer and to predict the cure fraction for patients with clinically localized prostate cancer treated by open radical prostatectomy. The cure model considers that a fraction of the survivors are cured of the disease. The survival function for the population of patients can be modelled by a parametric cure model using the BW distribution. We derive an explicit expansion for the moments of the recurrence-time distribution for the uncured individuals. The proposed distribution can be used to model survival data when the hazard rate function is increasing, decreasing, unimodal or bathtub shaped. Another advantage is that the proposed model includes as special sub-models some of the well-known cure rate models discussed in the literature. We derive the appropriate matrices for assessing local influence on the parameter estimates under different perturbation schemes, and we analyze a real data set on localized prostate cancer patients after open radical prostatectomy.
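The cure-fraction structure can be illustrated with the negative-binomial competing-causes form (a sketch only: a plain Weibull latency replaces the beta-Weibull used in the paper, and the parameterisation is one common convention, not necessarily the authors'):

```python
import numpy as np

def weibull_sf(t, shape, scale):
    """Plain Weibull survival function (stand-in for the beta-Weibull)."""
    return np.exp(-(t / scale) ** shape)

def nb_cure_sf(t, theta, eta, shape, scale):
    """Population survival when the number of competing causes is
    negative binomial with mean theta and dispersion eta:
    S_pop(t) = (1 + eta * theta * F(t)) ** (-1 / eta)."""
    F = 1.0 - weibull_sf(t, shape, scale)
    return (1.0 + eta * theta * F) ** (-1.0 / eta)

def cure_fraction(theta, eta):
    # long-term survivors: the limit of S_pop(t) as t -> infinity
    return (1.0 + eta * theta) ** (-1.0 / eta)
```

As the dispersion parameter eta tends to 0 this recovers the promotion-time (Poisson) cure model exp(-theta * F(t)), one of the special sub-models alluded to above.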
Abstract:
In this article, we propose a new Bayesian flexible cure rate survival model, which generalises the stochastic model of Klebanov et al. [Klebanov LB, Rachev ST and Yakovlev AY. A stochastic-model of radiation carcinogenesis - latent time distributions and their properties. Math Biosci 1993; 113: 51-75], and has much in common with the destructive model formulated by Rodrigues et al. [Rodrigues J, de Castro M, Balakrishnan N and Cancho VG. Destructive weighted Poisson cure rate models. Technical Report, Universidade Federal de Sao Carlos, Sao Carlos-SP. Brazil, 2009 (accepted in Lifetime Data Analysis)]. In our approach, the accumulated number of lesions or altered cells follows a compound weighted Poisson distribution. This model is more flexible than the promotion time cure model in terms of dispersion. Moreover, it possesses an interesting and realistic interpretation of the biological mechanism of the occurrence of the event of interest as it includes a destructive process of tumour cells after an initial treatment or the capacity of an individual exposed to irradiation to repair altered cells that results in cancer induction. In other words, what is recorded is only the damaged portion of the original number of altered cells not eliminated by the treatment or repaired by the repair system of an individual. Markov Chain Monte Carlo (MCMC) methods are then used to develop Bayesian inference for the proposed model. Also, some discussions on the model selection and an illustration with a cutaneous melanoma data set analysed by Rodrigues et al. [Rodrigues J, de Castro M, Balakrishnan N and Cancho VG. Destructive weighted Poisson cure rate models. Technical Report, Universidade Federal de Sao Carlos, Sao Carlos-SP. Brazil, 2009 (accepted in Lifetime Data Analysis)] are presented.
Abstract:
Aerosol particles are likely important contributors to our future climate. Further, during recent years, health effects arising from emissions of particulate material have gained increasing attention. In order to quantify the effect of aerosols on both climate and human health, we need to better quantify the interplay between sources and sinks of aerosol particle number and mass on large spatial scales. So far, long-term regional observations of aerosol properties have been scarce, although they are argued to be necessary to advance knowledge of the regional and global distribution of aerosols. In this context, regional studies of aerosol properties and aerosol dynamics are truly important areas of investigation. This thesis is devoted to investigations of aerosol number size distribution observations performed over the course of one year, encompassing observational data from five stations covering an area from the southern parts of Sweden up to the northern parts of Finland. The thesis gives a description of aerosol size distribution dynamics from both a quantitative and a qualitative point of view, focusing on properties and changes in the aerosol size distribution as a function of location, season, source area, transport pathways and links to various meteorological conditions. The investigations performed in this thesis show that although the basic behaviour of the aerosol number size distribution in terms of seasonal and diurnal characteristics is similar at all stations in the measurement network, the aerosol over the Nordic countries is characterised by a typically sharp gradient in aerosol number and mass. This gradient is argued to derive from the geographical locations of the stations in relation to the dominant sources and transport pathways.
It is clear that the source area significantly determines the aerosol size distribution properties, but transport conditions, in terms of the frequency of precipitation and cloudiness, in some cases even more strongly control the evolution of the number size distribution. Aerosol dynamic processes under clear-sky transport are likewise argued to be highly important. Southerly transport of marine air and northerly transport of air from continental sources are studied in detail under clear-sky conditions by performing a pseudo-Lagrangian box model evaluation of the two type cases. Results from both modelling and observations suggest that nucleation events contribute to an increase in integral number during southerly transport of comparably clean marine air, while number depletion dominates the evolution of the size distribution during northerly transport. This difference is largely explained by the different concentrations of pre-existing aerosol surface associated with the two type cases. Mass is found to accumulate in many of the individual transport cases studied; this mass increase is argued to be controlled by emission of organic compounds from the boreal forest, which puts the boreal forest in a central position for estimates of aerosol forcing on a regional scale.
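The competition between a nucleation source and a number sink invoked above can be caricatured with a zero-dimensional box model (a deliberately crude sketch with made-up rate constants; the self-coagulation sink K*N^2 stands in for scavenging by pre-existing aerosol surface, and the thesis model resolves the full size distribution rather than total number):

```python
def box_model(n0, j_nucl, k_coag, dt=60.0, steps=1440):
    """Integrate dN/dt = J_nucl - K_coag * N**2 with explicit Euler;
    default settings are 1440 steps of 60 s, i.e. one day of transport.
    n0: initial number concentration, j_nucl: nucleation rate,
    k_coag: effective coagulation-sink coefficient."""
    n = n0
    for _ in range(steps):
        n += dt * (j_nucl - k_coag * n * n)
    return n
```

Starting below the steady-state number sqrt(J/K), the particle count climbs (the "clean marine, nucleation-dominated" case); starting above it, depletion dominates (the "continental, sink-dominated" case).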
Abstract:
This paper shows a finite element method for pollutant transport with several pollutant sources. An Eulerian convection–diffusion–reaction model is used to simulate the pollutant dispersion. The discretization of the different sources allows the emissions to be imposed as boundary conditions. The Eulerian description can deal with the coupling of several plumes. An adaptive stabilized finite element formulation, specifically least-squares, with Crank–Nicolson temporal integration is proposed to solve the problem. A splitting scheme has been used to treat the transport and the reaction separately. A mass-consistent model has been used to compute the wind field of the problem…
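A minimal one-dimensional analogue of the scheme described, with Crank-Nicolson transport and an exactly integrated reaction substep (finite differences replace the adaptive least-squares FEM, and all names and boundary choices are illustrative):

```python
import numpy as np

def cn_transport_reaction(c, u, d, k, dx, dt, steps):
    """Operator splitting: Crank-Nicolson for 1D advection-diffusion
    c_t + u c_x = d c_xx, then exact integration of decay c_t = -k c.
    Homogeneous Dirichlet conditions at both ends."""
    n = len(c)
    # spatial operator L c = -u c_x + d c_xx (central differences)
    L = np.zeros((n, n))
    for i in range(1, n - 1):
        L[i, i - 1] =  u / (2 * dx) + d / dx ** 2
        L[i, i]     = -2 * d / dx ** 2
        L[i, i + 1] = -u / (2 * dx) + d / dx ** 2
    I = np.eye(n)
    A = I - 0.5 * dt * L          # implicit half of Crank-Nicolson
    B = I + 0.5 * dt * L          # explicit half
    decay = np.exp(-k * dt)       # exact solution of the reaction substep
    for _ in range(steps):
        c = np.linalg.solve(A, B @ c)
        c[0] = c[-1] = 0.0        # enforce Dirichlet boundaries
        c = decay * c             # reaction substep
    return c
```

The splitting mirrors the paper's idea: transport and reaction are advanced by separate substeps within each time step, each with a method suited to it.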
Abstract:
In recent years, new precision experiments have become possible with the high-luminosity accelerator facilities at MAMI and JLab, supplying physicists with precision data sets for different hadronic reactions in the intermediate energy region, such as pion photo- and electroproduction and real and virtual Compton scattering. By means of the low energy theorem (LET), the global properties of the nucleon (its mass, charge, and magnetic moment) can be separated from the effects of the internal structure of the nucleon, which are effectively described by polarizabilities. The polarizabilities quantify the deformation of the charge and magnetization densities inside the nucleon in an applied quasistatic electromagnetic field. The present work is dedicated to developing a tool for the extraction of the polarizabilities from these precise Compton data with minimum model dependence, making use of the detailed knowledge of pion photoproduction by means of dispersion relations (DR). Due to the presence of t-channel poles, the dispersion integrals for two of the six Compton amplitudes diverge. Therefore, we have suggested subtracting the s-channel dispersion integrals at zero photon energy ($\nu=0$). The subtraction functions at $\nu=0$ are calculated through DR in the momentum transfer t at fixed $\nu=0$, subtracted at t=0. For this calculation, we use the information about the t-channel process $\gamma\gamma \to \pi\pi \to N\bar{N}$. In this way, four of the polarizabilities can be predicted using the unsubtracted DR in the $s$-channel.
The other two, $\alpha-\beta$ and $\gamma_\pi$, are free parameters in our formalism and can be obtained from a fit to the Compton data. We present the results for unpolarized and polarized RCS observables in the kinematics of the most recent experiments, and indicate an enhanced sensitivity to the nucleon polarizabilities in the energy range between the pion production threshold and the $\Delta(1232)$ resonance. Furthermore, we extend the DR formalism to virtual Compton scattering (radiative electron scattering off the nucleon), in which the concept of the polarizabilities is generalized to the case of a virtual initial photon by introducing six generalized polarizabilities (GPs). Our formalism provides predictions for the four spin GPs, while the two scalar GPs $\alpha(Q^2)$ and $\beta(Q^2)$ have to be fitted to the experimental data at each value of $Q^2$. We show that at energies between the pion threshold and the $\Delta(1232)$-resonance position, the sensitivity to the GPs can be increased significantly compared to low energies, where the LEX is applicable. Our DR formalism can be used for analysing VCS experiments over a wide range of energy and virtuality $Q^2$, which allows one to extract the GPs from VCS data in different kinematics with a minimum of model dependence.
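The subtraction at zero photon energy described above corresponds, for a crossing-even amplitude, to the generic once-subtracted fixed-$t$ dispersion relation (a standard textbook form, quoted here only for orientation; the specific amplitudes and their $t$-channel subtraction functions are those described in the abstract):

```latex
A_i(\nu, t) \;=\; A_i(0, t)
  \;+\; \frac{2\nu^{2}}{\pi}
  \int_{\nu_{\mathrm{thr}}}^{\infty}
  \frac{\operatorname{Im} A_i(\nu', t)}{\nu'\,(\nu'^{2}-\nu^{2})}\, d\nu'
```

The subtraction function $A_i(0, t)$ is the quantity then reconstructed from the $t$-channel process $\gamma\gamma \to \pi\pi \to N\bar{N}$.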
Abstract:
The use of guided ultrasonic waves (GUW) has increased considerably in the fields of non-destructive evaluation (NDE) and structural health monitoring (SHM) due to their ability to perform long-range inspections, to probe hidden areas and to provide complete monitoring of the entire waveguide. Guided waves can be fully exploited only once their dispersive properties are known for the given waveguide. In this context, well-established analytical and numerical approaches are represented by the Matrix family of methods and the Semi-Analytical Finite Element (SAFE) method. However, while the former are limited to simple geometries of finite or infinite extent, the latter can only model arbitrary cross-section waveguides of finite domain. This thesis is aimed at developing three different numerical methods for modelling wave propagation in complex translationally invariant systems. First, a classical SAFE formulation for viscoelastic waveguides is extended to account for a three-dimensional translationally invariant static prestress state. The effect of prestress, residual stress and applied loads on the dispersion properties of the guided waves is shown. Next, a two-and-a-half-dimensional Boundary Element Method (2.5D BEM) for the dispersion analysis of damped guided waves in waveguides and cavities of arbitrary cross-section is proposed. The attenuation dispersion spectrum due to material damping and geometrical spreading of cavities with arbitrary shape is shown for the first time. Finally, a coupled SAFE-2.5D BEM framework is developed to study the dispersion characteristics of waves in viscoelastic waveguides of arbitrary geometry embedded in infinite solid or liquid media. Dispersion of leaky and non-leaky guided waves in terms of speed and attenuation, as well as the radiated wavefields, can be computed. The results obtained in this thesis can be helpful for the design of both actuation and sensing systems in practical applications, as well as for tuning experimental setups.
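What "dispersive properties" means here can be illustrated with the one guided-wave family that has a closed-form dispersion relation, shear-horizontal (SH) modes in a free plate (a textbook case used purely for orientation; the thesis handles arbitrary cross-sections numerically via SAFE and 2.5D BEM):

```python
import numpy as np

def sh_phase_velocity(omega, n, h, c_t):
    """Phase velocity of the n-th shear-horizontal mode in a free plate
    of thickness h with bulk shear speed c_t. The wavenumber follows
    k_n = sqrt((omega / c_t)**2 - (n * pi / h)**2), real above cut-off."""
    k = np.sqrt((omega / c_t) ** 2 - (n * np.pi / h) ** 2)
    return omega / k
```

The fundamental mode (n = 0) is non-dispersive, while every higher mode has a cut-off frequency n*pi*c_t/h, travels faster than c_t just above it, and approaches c_t at high frequency: a minimal example of the dispersion curves the thesis computes for general waveguides.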
Abstract:
Basic concepts and definitions relating to Lagrangian Particle Dispersion Models (LPDMs) for the description of turbulent dispersion are introduced. The study focusses on LPDMs that use as input, for the large-scale motion, fields produced by Eulerian models, with the small-scale motions described by Lagrangian Stochastic Models (LSMs). The data of two different dynamical models have been used: a Large Eddy Simulation (LES) and a General Circulation Model (GCM). After reviewing the small-scale closure adopted by the Eulerian model, the development and implementation of appropriate LSMs is outlined. The basic requirement of every LPDM used in this work is its fulfilment of the Well Mixed Condition (WMC). For the dispersion description in the GCM domain, a stochastic model of Markov order 0, consistent with the eddy-viscosity closure of the dynamical model, is implemented. An LSM of Markov order 1, more suitable for shorter timescales, has been implemented for the description of the unresolved motion of the LES fields. Different assumptions on the small-scale correlation time are made. Tests of the LSM on GCM fields suggest that the use of an interpolation algorithm able to maintain analytical consistency between the diffusion coefficient and its derivative is mandatory if the model is to satisfy the WMC. A dynamical time-step selection scheme based on the shape of the diffusion coefficient is also introduced, and the criteria for the integration-step selection are discussed. Absolute and relative dispersion experiments are made with various unresolved-motion settings for the LSM on LES data, and the results are compared with laboratory data. The study shows that the unresolved-turbulence parameterization has a negligible influence on absolute dispersion, while it affects the contributions of relative dispersion and meandering to absolute dispersion, as well as the Lagrangian correlation.
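The Markov order 0 (random-displacement) model and the Well Mixed Condition can be sketched in one dimension (an illustrative toy: the diffusivity profile and domain are invented; the drift term K'(z), which enforces the WMC for this class of model, is the standard one):

```python
import numpy as np

def lsm_markov0(z, k, dkdz, dt, nsteps, rng, drift=True):
    """Zeroth-order Lagrangian stochastic model on [0, 1]:
    dz = K'(z) dt + sqrt(2 K(z)) dW, with reflecting boundaries.
    The K'(z) drift is what keeps a well-mixed (uniform) particle
    distribution well mixed when K varies in space."""
    for _ in range(nsteps):
        dw = rng.standard_normal(z.size) * np.sqrt(dt)
        z = z + (dkdz(z) * dt if drift else 0.0) + np.sqrt(2.0 * k(z)) * dw
        z = np.abs(z)              # reflect at z = 0
        z = 1.0 - np.abs(1.0 - z)  # reflect at z = 1
    return z
```

Dropping the drift term makes an initially uniform particle cloud pile up where K is small, violating the WMC: exactly the kind of inconsistency the interpolation requirement quoted above (consistency between the diffusion coefficient and its derivative) is meant to avoid.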
Abstract:
In this communication, solid-state/melt extrusion (SSME) is introduced as a novel technique that combines solid-state shear pulverization (SSSP) and conventional twin screw extrusion (TSE) in a single extrusion system. The morphology and property enhancements in a model linear low-density polyethylene/organically modified clay nanocomposite sample fabricated via SSME were compared to those fabricated via SSSP and TSE. The results show that SSME is capable of exfoliating and dispersing the nanofillers similarly to SSSP, while achieving a desirable output rate and producing extrudate similar in form to that from TSE.
Abstract:
Fossil fuel consumption in the transportation sector in the United States (U.S.) has caused a range of societal issues, including health-related air pollution, climate change, dependence on imported oil, and other oil-related national security concerns. Biofuel production from various lignocellulosic biomass types, such as wood, forest residues, and agricultural residues, has the potential to replace a substantial portion of total fossil fuel consumption. This research focuses on locating biofuel facilities and designing the biofuel supply chain to minimize the overall cost. For this purpose, an integrated methodology was proposed that combines GIS technology with simulation and optimization modeling. As a precursor to the simulation and optimization modeling, the GIS-based methodology was used to preselect potential facility locations for biofuel production from forest biomass. Candidate locations were selected based on a set of evaluation criteria, including county boundaries, the railroad transportation network, the state/federal road transportation network, the dispersion of water bodies (rivers, lakes, etc.), the dispersion of cities and villages, population census data, biomass production, and no co-location with co-fired power plants. The resulting candidate sites served as inputs for the simulation and optimization models, which were built around the key supply activities of biomass harvesting/forwarding, transportation and storage. The built onsite storage served the spring breakup period, when road restrictions were in place and truck transportation on certain roads was limited.
Both models were evaluated using multiple performance indicators, including cost (consisting of the delivered feedstock cost and the inventory holding cost), energy consumption, and GHG emissions. The impacts of energy consumption and GHG emissions were expressed in monetary terms for consistency with cost. Compared with the optimization model, the simulation model provides a more dynamic view of a 20-year operation by considering the impacts associated with building inventory at the biorefinery to address the limited availability of biomass feedstock during the spring breakup period. The number of trucks required per day was estimated and the inventory level was tracked year-round. Through the exchange of information across the different procedures (harvesting, transportation, and biomass feedstock processing), a smooth flow of biomass from harvesting areas to a biofuel facility was implemented. The optimization model was developed to address issues related to locating multiple biofuel facilities simultaneously. The size of each potential biofuel facility is bounded between 30 MGY and 50 MGY. The optimization model is a static, Mathematical Programming Language (MPL)-based application which allows for sensitivity analysis by changing inputs to evaluate different scenarios. It was found that the annual biofuel demand and biomass availability impact the optimal biofuel facility locations and sizes.
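The facility-location decision at the heart of the optimization model can be illustrated with a toy exhaustive search (a stand-in for the MPL-based mixed-integer formulation; the cost structure, capacities and all names here are invented for illustration):

```python
from itertools import combinations

def best_sites(fixed_cost, supply_cost, n_open):
    """Exhaustive search for the cheapest set of `n_open` facility sites.
    fixed_cost[j]: annualised cost of opening candidate site j;
    supply_cost[i][j]: cost of supplying demand zone i from site j.
    Each zone is served by its cheapest open site.
    Returns (total_cost, tuple_of_opened_sites)."""
    sites = range(len(fixed_cost))
    best = None
    for subset in combinations(sites, n_open):
        cost = sum(fixed_cost[j] for j in subset)
        cost += sum(min(row[j] for j in subset) for row in supply_cost)
        if best is None or cost < best[0]:
            best = (cost, subset)
    return best
```

Real instances with capacity bounds (such as the 30-50 MGY range above) and many candidate sites need a mixed-integer solver, but the trade-off being optimized, fixed opening costs versus delivery costs, is the same.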
Abstract:
In this paper we make a further step towards a dispersive description of the hadronic light-by-light (HLbL) tensor, which should ultimately lead to a data-driven evaluation of its contribution to (g − 2)_μ. We first provide a Lorentz decomposition of the HLbL tensor performed according to the general recipe by Bardeen, Tung, and Tarrach, generalizing and extending our previous approach, which was constructed in terms of a basis of helicity amplitudes. Such a tensor decomposition has several advantages: the role of gauge invariance and crossing symmetry becomes fully transparent; the scalar coefficient functions are free of kinematic singularities and zeros, and thus fulfill a Mandelstam double-dispersive representation; and the explicit relation for the HLbL contribution to (g − 2)_μ in terms of the coefficient functions simplifies substantially. We demonstrate explicitly that the dispersive approach defines both the pion-pole and the pion-loop contribution unambiguously and in a model-independent way. The pion loop, dispersively defined as pion-box topology, is proven to coincide exactly with the one-loop scalar QED amplitude, multiplied by the appropriate pion vector form factors.