983 results for Lp Extremal Polynomials
Abstract:
Large-eddy simulation (LES) has emerged as a promising tool for simulating turbulent flows in general and, in recent years, has also been applied to particle-laden turbulence with some success (Kassinos et al., 2007). The motion of inertial particles is much more complicated than that of fluid elements, and therefore LES of turbulent flow laden with inertial particles encounters new challenges. In conventional LES, only large-scale eddies are explicitly resolved and the effects of unresolved, small or subgrid scale (SGS) eddies on the large-scale eddies are modeled; the SGS turbulent flow field itself is not available. The effects of the SGS turbulent velocity field on particle motion have been studied by Wang and Squires (1996), Armenio et al. (1999), Yamamoto et al. (2001), Shotorban and Mashayek (2006a,b), Fede and Simonin (2006), Berrouk et al. (2007), Bini and Jones (2008), and Pozorski and Apte (2009), amongst others. One contemporary method to include the effects of SGS eddies on inertial particle motion is to introduce a stochastic differential equation (SDE), that is, a Langevin stochastic equation to model the SGS fluid velocity seen by inertial particles (Fede et al., 2006; Shotorban and Mashayek, 2006a,b; Berrouk et al., 2007; Bini and Jones, 2008; Pozorski and Apte, 2009). However, the accuracy of such a Langevin equation model depends primarily on the prescription of the SGS fluid velocity autocorrelation time seen by an inertial particle, or the inertial particle–SGS eddy interaction timescale (denoted by $\delta T_{Lp}$), and on a second model constant in the diffusion term which controls the intensity of the random force received by an inertial particle (denoted by $C_0$, see Eq. (7)). From the theoretical point of view, $\delta T_{Lp}$ differs significantly from the Lagrangian fluid velocity correlation time (Reeks, 1977; Wang and Stock, 1993), and this difference carries the essential nonlinearity in the statistical modeling of particle motion.
$\delta T_{Lp}$ and $C_0$ may depend on the filter width and particle Stokes number even for a given turbulent flow. In previous studies, $\delta T_{Lp}$ is modeled either by the fluid SGS Lagrangian timescale (Fede et al., 2006; Shotorban and Mashayek, 2006b; Pozorski and Apte, 2009; Bini and Jones, 2008) or by a simple extension of the timescale obtained from the full flow field (Berrouk et al., 2007). In this work, we study the subtle, non-monotonic dependence of $\delta T_{Lp}$ on the filter width and particle Stokes number using a flow field obtained from Direct Numerical Simulation (DNS). We then propose an empirical closure model for $\delta T_{Lp}$. Finally, the model is validated against LES of particle-laden turbulence in predicting single-particle statistics such as the particle kinetic energy. As a first step, we consider particle motion under the one-way coupling assumption in isotropic turbulent flow and neglect gravitational settling. The one-way coupling assumption is valid only for low particle mass loading.
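The Langevin closure described above can be sketched as a discrete Ornstein-Uhlenbeck update. The noise amplitude sqrt(C0*eps) and every parameter value below are illustrative assumptions, not the specific model of Eq. (7):

```python
import math
import random

def ou_step(u, dt, t_lp, c0, eps):
    """One Euler-Maruyama step of a Langevin (Ornstein-Uhlenbeck) model for
    the SGS fluid velocity seen by an inertial particle.

    t_lp : particle-SGS eddy interaction timescale (delta T_Lp)
    c0   : diffusion-term model constant controlling the random forcing
    eps  : SGS dissipation rate (an assumed closure for the noise amplitude)
    """
    drift = -u / t_lp * dt
    noise = math.sqrt(c0 * eps * dt) * random.gauss(0.0, 1.0)
    return u + drift + noise

# illustrative integration (all parameter values are made up)
random.seed(0)
u = 0.0
for _ in range(10000):
    u = ou_step(u, dt=1e-3, t_lp=0.05, c0=2.1, eps=1.0)
```

For this drift/noise pairing the stationary variance is C0*eps*delta T_Lp/2, which makes the roles of the two model inputs explicit: delta T_Lp sets the memory of the sampled velocity and C0 its intensity.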
Abstract:
The theories of relativity and quantum mechanics, the two most important physics discoveries of the 20th century, not only revolutionized our understanding of the nature of space-time and the way matter exists and interacts, but also became the building blocks of what we currently know as modern physics. My thesis studies both subjects in great depth --- the intersection takes place in gravitational-wave physics.
Gravitational waves are "ripples of space-time", long predicted by general relativity. Although indirect evidence of gravitational waves has been discovered from observations of binary pulsars, direct detection of these waves is still actively being pursued. An international array of laser interferometer gravitational-wave detectors has been constructed in the past decade, and a first generation of these detectors has taken several years of data without a discovery. At this moment, these detectors are being upgraded into second-generation configurations, which will have ten times better sensitivity. Kilogram-scale test masses of these detectors, highly isolated from the environment, are probed continuously by photons. The sensitivity of such a quantum measurement can often be limited by the Heisenberg Uncertainty Principle, and during such a measurement, the test masses can be viewed as evolving through a sequence of nearly pure quantum states.
The first part of this thesis (Chapter 2) concerns how to minimize the adverse effect of thermal fluctuations on the sensitivity of advanced gravitational-wave detectors, thereby making them closer to being quantum-limited. My colleagues and I present a detailed analysis of coating thermal noise in advanced gravitational-wave detectors, which is the dominant noise source of Advanced LIGO in the middle of the detection frequency band. We identified the two elastic loss angles, clarified the different components of the coating Brownian noise, and obtained their cross spectral densities.
The second part of this thesis (Chapters 3-7) concerns formulating experimental concepts and analyzing experimental results that demonstrate the quantum mechanical behavior of macroscopic objects - as well as developing theoretical tools for analyzing quantum measurement processes. In Chapter 3, we study the open quantum dynamics of optomechanical experiments in which a single photon strongly influences the quantum state of a mechanical object. We also explain how to engineer the mechanical oscillator's quantum state by modifying the single photon's wave function.
In Chapters 4-5, we build theoretical tools for analyzing the so-called "non-Markovian" quantum measurement processes. Chapter 4 establishes a mathematical formalism that describes the evolution of a quantum system (the plant), which is coupled to a non-Markovian bath (i.e., one with a memory) while at the same time being under continuous quantum measurement (by the probe field). This aims at providing a general framework for analyzing a large class of non-Markovian measurement processes. Chapter 5 develops a way of characterizing the non-Markovianity of a bath (i.e., whether and to what extent the bath remembers information about the plant) by perturbing the plant and watching for changes in its subsequent evolution. Chapter 6 re-analyzes a recent measurement of a mechanical oscillator's zero-point fluctuations, revealing a nontrivial correlation between the measurement device's sensing noise and the quantum back-action noise.
Chapter 7 describes a model in which gravity is classical and matter motions are quantized, elaborating how the quantum motions of matter are affected by the fact that gravity is classical. It offers an experimentally plausible way to test this model (hence the nature of gravity) by measuring the center-of-mass motion of a macroscopic object.
The most promising gravitational waves for direct detection are those emitted from highly energetic astrophysical processes, sometimes involving black holes - a type of object predicted by general relativity whose properties depend highly on the strong-field regime of the theory. Although black holes have been inferred to exist at centers of galaxies and in certain so-called X-ray binary objects, detecting gravitational waves emitted by systems containing black holes will offer a much more direct way of observing black holes, providing unprecedented details of space-time geometry in the black-holes' strong-field region.
The third part of this thesis (Chapters 8-11) studies black-hole physics in connection with gravitational-wave detection.
Chapter 8 applies black hole perturbation theory to model the dynamics of a light compact object orbiting a massive central Schwarzschild black hole. In this chapter, we present a Hamiltonian formalism in which the low-mass object and the metric perturbations of the background spacetime are jointly evolved. Chapter 9 uses WKB techniques to analyze oscillation modes (quasi-normal modes or QNMs) of spinning black holes. We obtain analytical approximations to the spectrum of the weakly-damped QNMs, with relative error O(1/L^2), and connect these frequencies to geometrical features of spherical photon orbits in Kerr spacetime. Chapter 10 focuses mainly on near-extremal Kerr black holes; we discuss a bifurcation in their QNM spectra for certain ranges of (l,m) (the angular quantum numbers) as a/M → 1. With tools prepared in Chapters 9 and 10, in Chapter 11 we obtain an analytical approximation to the scalar Green function in Kerr spacetime.
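The connection between weakly-damped QNMs and photon orbits can be illustrated with the standard eikonal formula for the simpler Schwarzschild case (a textbook leading-order approximation, not the thesis's Kerr result):

```python
import math

def schwarzschild_eikonal_qnm(l, n, M=1.0):
    """Eikonal (geometric-optics) approximation to Schwarzschild QNM
    frequencies in units G = c = 1: the real part is the photon-sphere
    (r = 3M) orbital frequency times (l + 1/2); the decay rate is the
    orbit's Lyapunov exponent times (n + 1/2)."""
    omega_c = 1.0 / (3.0 * math.sqrt(3.0) * M)  # photon-sphere orbital frequency
    lam = omega_c                               # Lyapunov exponent (equals omega_c here)
    return complex((l + 0.5) * omega_c, -(n + 0.5) * lam)

# e.g. the fundamental l = 2 mode
w = schwarzschild_eikonal_qnm(2, 0)
```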
Abstract:
Curve samplers are sampling algorithms that proceed by viewing the domain as a vector space over a finite field, and randomly picking a low-degree curve in it as the sample. Curve samplers exhibit a nice property besides the sampling property: the restriction of low-degree polynomials over the domain to the sampled curve is still low-degree. This property is often used in combination with the sampling property and has found many applications, including PCP constructions, local decoding of codes, and algebraic PRG constructions.
The randomness complexity of curve samplers is a crucial parameter for their applications. It is known that (non-explicit) curve samplers using O(log N + log(1/δ)) random bits exist, where N is the domain size and δ is the confidence error. The question of explicitly constructing randomness-efficient curve samplers was first raised in [TU06], where they obtained curve samplers with near-optimal randomness complexity.
In this thesis, we present an explicit construction of low-degree curve samplers with optimal randomness complexity (up to a constant factor) that sample curves of degree $(m \log_q(1/\delta))^{O(1)}$ in $F_q^m$. Our construction is a delicate combination of several components, including extractor machinery, limited independence, iterated sampling, and list-recoverable codes.
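For contrast, the naive curve sampler is easy to state: draw the d+1 coefficient vectors of the curve uniformly at random. The sketch below (prime fields only, helper name hypothetical) spends far more random bits than the explicit construction above:

```python
import random

def random_curve(p, m, d, rng=None):
    """Naive degree-d curve sampler over F_p^m: draw the d+1 coefficient
    vectors uniformly and return c(t) = sum_i a_i t^i (componentwise mod p).
    This uses (d+1)*m*log2(p) random bits -- far from the optimal
    O(log N + log(1/delta)) randomness complexity."""
    rng = rng or random.Random(0)
    coeffs = [[rng.randrange(p) for _ in range(m)] for _ in range(d + 1)]

    def curve(t):
        return tuple(
            sum(a[j] * pow(t, i, p) for i, a in enumerate(coeffs)) % p
            for j in range(m)
        )

    return curve

# a random line (degree-1 curve) through F_101^3
c = random_curve(101, 3, 1)
```

The low-degree restriction property is visible here: a degree-k polynomial on the domain, restricted to a degree-d curve, becomes a univariate polynomial of degree at most k*d in the curve parameter t.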
Abstract:
The Hamilton-Jacobi-Bellman (HJB) equation is central to stochastic optimal control (SOC) theory, yielding the optimal solution to general problems specified by known dynamics and a given cost functional. Given the assumption of quadratic cost on the control input, it is well known that the HJB reduces to a particular partial differential equation (PDE). While powerful, this reduction is not commonly used, as the PDE is second order and nonlinear, and examples exist where the problem has no solution in the classical sense. Furthermore, each state of the system appears as another dimension of the PDE, giving rise to the curse of dimensionality: since the number of degrees of freedom required to solve the optimal control problem grows exponentially with dimension, the problem becomes intractable for all but systems of modest dimension.
In the last decade researchers have found that under certain, fairly non-restrictive structural assumptions, the HJB may be transformed into a linear PDE, with an interesting analogue in the discretized domain of Markov Decision Processes (MDP). The work presented in this thesis uses the linearity of this particular form of the HJB PDE to push the computational boundaries of stochastic optimal control.
This is done by crafting together previously disjoint lines of research in computation. The first of these is the use of Sum of Squares (SOS) techniques for synthesis of control policies. A candidate polynomial with variable coefficients is proposed as the solution to the stochastic optimal control problem. An SOS relaxation is then taken to the partial differential constraints, leading to a hierarchy of semidefinite relaxations with improving sub-optimality gap. The resulting approximate solutions are shown to be guaranteed over- and under-approximations for the optimal value function. It is shown that these results extend to arbitrary parabolic and elliptic PDEs, yielding a novel method for Uncertainty Quantification (UQ) of systems governed by partial differential constraints. Domain decomposition techniques are also made available, allowing for such problems to be solved via parallelization and low-order polynomials.
The optimization-based SOS technique is then contrasted with the Separated Representation (SR) approach from the applied mathematics community. The technique allows for systems of equations to be solved through a low-rank decomposition that results in algorithms that scale linearly with dimensionality. Its application in stochastic optimal control allows for previously uncomputable problems to be solved quickly, scaling to such complex systems as the Quadcopter and VTOL aircraft. This technique may be combined with the SOS approach, yielding not only a numerical technique, but also an analytical one that allows for entirely new classes of systems to be studied and for stability properties to be guaranteed.
The analysis of the linear HJB is completed by the study of its implications in application. It is shown that the HJB and a popular technique in robotics, the use of navigation functions, sit on opposite ends of a spectrum of optimization problems, upon which tradeoffs may be made in problem complexity. Analytical solutions to the HJB in these settings are available in simplified domains, yielding guidance towards optimality for approximation schemes. Finally, the use of HJB equations in temporal multi-task planning problems is investigated. It is demonstrated that such problems are reducible to a sequence of SOC problems linked via boundary conditions. The linearity of the PDE allows us to pre-compute control policy primitives and then compose them, at essentially zero cost, to satisfy a complex temporal logic specification.
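The MDP analogue of the linear HJB mentioned above can be sketched on a toy finite chain. The fixed-point iteration below for the desirability z = exp(-V) follows the standard linearly-solvable (Todorov-style) discretization; the transition matrix and costs are made-up illustrative numbers, not a system from the thesis:

```python
import math

def desirability_iteration(P, q, goal, iters=500):
    """Linearly-solvable MDP on a finite state space.  The exponentiated
    value function z = exp(-V) obeys the LINEAR fixed-point equation
        z[x] = exp(-q[x]) * sum_y P[x][y] * z[y],   z[goal] = 1,
    the discrete analogue of the linear HJB PDE.  P holds the passive
    transition probabilities, q the state costs."""
    n = len(q)
    z = [1.0] * n
    for _ in range(iters):
        z = [math.exp(-q[x]) * sum(P[x][y] * z[y] for y in range(n))
             for x in range(n)]
        z[goal] = 1.0
    return [-math.log(zx) for zx in z]  # value function V = -log z

# toy 3-state chain with an absorbing goal (all numbers illustrative)
P = [[0.50, 0.50, 0.00],
     [0.25, 0.50, 0.25],
     [0.00, 0.00, 1.00]]
q = [1.0, 1.0, 0.0]
V = desirability_iteration(P, q, goal=2)
```

Linearity is the point: the iteration is just repeated application of a fixed linear operator, which is also what makes pre-computed policy primitives composable at essentially zero cost.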
Abstract:
The objective of this thesis is to develop a framework to conduct velocity resolved - scalar modeled (VR-SM) simulations, which will enable accurate simulations at higher Reynolds and Schmidt (Sc) numbers than are currently feasible. The framework established will serve as a first step to enable future simulation studies for practical applications. To achieve this goal, in-depth analyses of the physical, numerical, and modeling aspects related to Sc>>1 are presented, specifically when modeling in the viscous-convective subrange. Transport characteristics are scrutinized by examining scalar-velocity Fourier mode interactions in Direct Numerical Simulation (DNS) datasets; the analysis suggests that scalar modes in the viscous-convective subrange do not directly affect large-scale transport for high Sc. Further observations confirm that discretization errors inherent in numerical schemes can be sufficiently large to wipe out any meaningful contribution from subfilter models. This provides strong incentive to develop more effective numerical schemes to support high-Sc simulations. To lower numerical dissipation while maintaining physically and mathematically appropriate scalar bounds during the convection step, a novel method of enforcing bounds is formulated, specifically for use with cubic Hermite polynomials. Boundedness of the transported scalar is enforced by applying derivative limiting techniques, and physically plausible single sub-cell extrema are allowed to exist to help minimize numerical dissipation. The proposed bounding algorithm yields significant performance gains in DNS of turbulent mixing layers and of homogeneous isotropic turbulence. Next, the combined physical/mathematical behavior of the subfilter scalar-flux vector is analyzed in homogeneous isotropic turbulence, by examining vector orientation in the strain-rate eigenframe.
The results indicate no discernible dependence on the modeled scalar field, and lead to the identification of the tensor-diffusivity model as a good representation of the subfilter flux. Velocity resolved - scalar modeled simulations of homogeneous isotropic turbulence are conducted to confirm the behavior theorized in these a priori analyses, and suggest that the tensor-diffusivity model is ideal for use in the viscous-convective subrange. Simulations of a turbulent mixing layer are also discussed, with the partial objective of analyzing the Schmidt number dependence of a variety of scalar statistics. Large-scale statistics are confirmed to be relatively independent of the Schmidt number for Sc>>1, which is explained by the dominance of subfilter dissipation over resolved molecular dissipation in the simulations. Overall, the VR-SM framework presented is quite effective in predicting large-scale transport characteristics of high Schmidt number scalars; however, prediction of subfilter quantities would entail additional modeling intended specifically for that purpose. The VR-SM simulations presented in this thesis provide the opportunity to overlap with experimental studies, while at the same time creating an assortment of baseline datasets for future validation of LES models, thereby satisfying the objectives outlined for this work.
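A minimal one-dimensional sketch of derivative limiting for cubic Hermite interpolation is given below. It uses a Fritsch-Carlson-style limiter, which is stricter than the scheme described above (it enforces monotonicity and so forbids the single sub-cell extrema that scheme deliberately permits):

```python
def limited_hermite(f0, f1, d0, d1, t):
    """Cubic Hermite interpolation on [0, 1] with a Fritsch-Carlson-style
    derivative limiter: endpoint derivatives d0, d1 are clamped relative
    to the secant slope so the interpolant stays bounded by the data."""
    delta = f1 - f0                       # secant slope
    if delta == 0.0:
        d0 = d1 = 0.0
    else:
        a, b = d0 / delta, d1 / delta
        if a < 0.0:
            d0 = 0.0                      # derivative opposes the data trend
        elif a > 3.0:
            d0 = 3.0 * delta              # clamp to the monotone region
        if b < 0.0:
            d1 = 0.0
        elif b > 3.0:
            d1 = 3.0 * delta
    h00 = (1 + 2*t) * (1 - t)**2          # Hermite basis functions
    h10 = t * (1 - t)**2
    h01 = t*t * (3 - 2*t)
    h11 = t*t * (t - 1)
    return h00*f0 + h10*d0 + h01*f1 + h11*d1
```

With d0 = d1 = 10 on data (0, 1), the unlimited cubic overshoots past 1; the limited version stays in [0, 1], which is the boundedness property at issue.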
Abstract:
This is a two-part thesis concerning the motion of a test particle in a bath. In part one we use an expansion of the operator $PLe^{it(1-P)L}LP$ to shape the Zwanzig equation into a generalized Fokker-Planck equation which involves a diffusion tensor depending on the test particle's momentum and on time.
In part two the resultant equation is studied in some detail for the case of test particle motion in a weakly coupled Lorentz Gas. The diffusion tensor for this system is considered. Some of its properties are calculated; it is computed explicitly for the case of a Gaussian potential of interaction.
The equation for the test particle distribution function can be put into the form of an inhomogeneous Schroedinger equation. The term corresponding to the potential energy in the Schroedinger equation is considered. Its structure is studied, and some of its simplest features are used to find the Green's function in the limiting situations of low density and long time.
Abstract:
Maternal exposure during gestation to a protein-restricted (LP) diet impairs the development of the endocrine pancreas in the offspring and increases susceptibility to hypertension, diabetes, and obesity in adult life. There is evidence that this phenomenon can persist in subsequent generations. The objective was to evaluate the effect of protein restriction on glucose metabolism and pancreatic morphometry in the F3 offspring of mice at birth and at weaning. To this end, virgin female Swiss mice (F0) were mated and received either a normal-protein diet (19% protein, NP) or an isocaloric protein-restricted diet (5% protein, LP) throughout pregnancy. During lactation and for the remainder of the experiment, all groups received the NP diet. Male pups were designated F1 (NP1 and LP1). F1 and F2 females were mated to produce F2 and F3 (NP2, LP2, NP3, and LP3), respectively. The pups were weighed weekly and the allometric growth rate was calculated (log[body mass] = log a + b log[age]). Animals were sacrificed at 1 and 21 days of age, blood glucose was determined, and the pancreas was removed, weighed, and analyzed by stereology and immunofluorescence; insulin was measured at 21 days. As results, the restricted pups of the first generation (LP1) were smaller at birth but showed accelerated growth in the first seven days of life, catching up with the controls; the LP2 offspring had the highest body mass at birth and a slower growth rate during lactation; there was no difference in body mass or growth rate in the F3 generation. Pancreas mass was reduced in LP1-LP3 at birth, but was increased in LP2 at weaning. Islet volume density and diameter were lower in all restricted groups at days 1 and 21; only LP1 had a reduced number of islets.
At birth, beta-cell mass was lower in LP1-LP3 and remained low during lactation. At days 1 and 21 the pups were normoglycemic, but they were hypoinsulinemic at weaning. Therefore, protein restriction in mice during gestation produces morphological alterations in the pancreatic islets, suggesting that glucose homeostasis was maintained by an increase in insulin sensitivity during the early stages of life in the offspring across three consecutive generations.
Abstract:
In this thesis, we consider two main subjects: refined, composite invariants and exceptional knot homologies of torus knots. The main technical tools are double affine Hecke algebras ("DAHA") and various insights from topological string theory.
In particular, we define and study the composite DAHA-superpolynomials of torus knots, which depend on pairs of Young diagrams and generalize the composite HOMFLY-PT polynomials from the full HOMFLY-PT skein of the annulus. We also describe a rich structure of differentials that act on homological knot invariants for exceptional groups. These follow from the physics of BPS states and the adjacencies/spectra of singularities associated with Landau-Ginzburg potentials. At the end, we construct two DAHA-hyperpolynomials which are closely related to the Deligne-Gross exceptional series of root systems.
In addition to these main themes, we also provide new results connecting DAHA-Jones polynomials to quantum torus knot invariants for Cartan types A and D, as well as the first appearance of quantum E6 knot invariants in the literature.
Abstract:
We analyze mutual alignment errors due to wave-front aberrations. To treat the central-obscuration problem, we introduce modified Zernike polynomials, which form a complete set of orthogonal polynomials. It is found that different aberrations have different effects on mutual alignment errors: some aberrations influence only the line of sight, while others influence both the line of sight and the intensity distributions. (c) 2005 Optical Society of America
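For reference, the standard full-aperture Zernike radial polynomials on which any annular modification is built can be evaluated from the closed-form sum; the paper's re-orthogonalization over an obscured (annular) pupil is not reproduced here:

```python
from math import factorial

def zernike_radial(n, m, rho):
    """Standard (full-aperture) Zernike radial polynomial R_n^m(rho):
    R_n^m = sum_k (-1)^k (n-k)! /
            (k! ((n+m)/2 - k)! ((n-m)/2 - k)!) * rho^(n-2k)."""
    m = abs(m)
    if (n - m) % 2:          # R_n^m vanishes when n - m is odd
        return 0.0
    return sum(
        (-1)**k * factorial(n - k)
        / (factorial(k) * factorial((n + m)//2 - k) * factorial((n - m)//2 - k))
        * rho**(n - 2*k)
        for k in range((n - m)//2 + 1)
    )
```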
Abstract:
This thesis studies Frobenius traces in Galois representations from two different directions. In the first problem we explore how often they vanish in Artin-type representations. We give an upper bound for the density of the set of vanishing Frobenius traces in terms of the multiplicities of the irreducible components of the adjoint representation. Towards that, we construct an infinite family of representations of finite groups with an irreducible adjoint action.
In the second problem we partially extend to Hilbert modular forms a result of Coleman and Edixhoven that the Hecke eigenvalues $a_p$ of classical elliptic modular newforms f of weight 2 are never extremal, i.e., $a_p$ is strictly less than $2\sqrt{p}$. The generalization currently applies only to prime ideals p of degree one, though we expect it to hold for p of any odd degree. However, an even degree prime can be extremal for f. We prove our result in each of the following instances: when one can move to a Shimura curve defined by a quaternion algebra, when f is a CM form, when the crystalline Frobenius is semi-simple, and when the strong Tate conjecture holds for a product of two Hilbert modular surfaces (or quaternionic Shimura surfaces) over a finite field.
Abstract:
The objective of this work was to investigate the performance of a commercial membrane in removing a heavy metal (nickel) from synthetic effluents by reverse osmosis. In the first stage, a comparison was made of the results obtained with feed solutions containing salts such as NaCl, NaNO3, and Ni(NO3)2·6H2O, at concentrations of 50, 100, and 200 ppm and pressures of 10, 20, and 26 bar. The results showed that neither the concentration nor the pressure applied to the system significantly affected the rejections. In the second stage, since these parameters did not significantly influence salt rejection, an operating pressure of 10 bar was chosen to evaluate the nickel removal efficiency. The membrane used, made of polyamide (model HR98PP, supplied by DOW/Filmtec), showed good hydraulic permeability. The results showed that, for all concentrations tested, nickel rejection exceeded 96%, confirming the good selectivity of this type of membrane for this metal, with permeate fluxes ranging from 4.78 to 5.55 L/h·m² at an operating pressure of 10 bar. To study the effect of ionic size on membrane rejection, the nickel was complexed by adding a chelating agent to the feed solution. The agent chosen was Na2EDTA, because it forms a stable complex with nickel and is not harmful to human health. The results with EDTA addition indicated an increase in nickel rejection, reaching a maximum of 98.22%, starting from a solution with 40.39 ppm of Ni2+, and confirm that the reverse-osmosis process with the HR98PP membrane is highly suitable for the treatment of nickel-containing effluents.
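The rejection figures quoted follow from the usual observed-rejection definition R = (1 - Cp/Cf) x 100; a minimal sketch (the concentration values below are examples, not new data):

```python
def rejection_percent(feed_ppm, permeate_ppm):
    """Observed solute rejection R = (1 - Cp/Cf) * 100, where Cf and Cp
    are the feed and permeate concentrations -- the figure of merit
    quoted for the HR98PP membrane."""
    return (1.0 - permeate_ppm / feed_ppm) * 100.0

# a 40.39 ppm Ni2+ feed rejected at 98.22 % implies a permeate of about
permeate = 40.39 * (1.0 - 0.9822)   # roughly 0.72 ppm Ni2+
```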
Abstract:
Let F(θ) be a separable extension of degree n of a field F. Let Δ and D be integral domains with quotient fields F(θ) and F respectively. Assume that Δ ⊇ D. A mapping φ of Δ into the n × n D matrices is called a Δ/D rep if (i) it is a ring isomorphism and (ii) it maps d onto dI_n whenever d ∈ D. If the matrices are also symmetric, φ is a Δ/D symrep.
Every Δ/D rep can be extended uniquely to an F(θ)/F rep. This extension is completely determined by the image of θ. Two Δ/D reps are called equivalent if the images of θ differ by a D unimodular similarity. There is a one-to-one correspondence between classes of Δ/D reps and classes of Δ ideals having an n element basis over D.
The condition that a given Δ/D rep class contain a Δ/D symrep can be phrased in various ways. Using these formulations it is possible to (i) bound the number of symreps in a given class, (ii) count the number of symreps if F is finite, (iii) establish the existence of an F(θ)/F symrep when n is odd, F is an algebraic number field, and F(θ) is totally real if F is formally real (for n = 3 see Sapiro, “Characteristic polynomials of symmetric matrices” Sibirsk. Mat. Ž. 3 (1962) pp. 280-291), and (iv) study the case D = Z, the integers (see Taussky, “On matrix classes corresponding to an ideal and its inverse” Illinois J. Math. 1 (1957) pp. 108-113 and Faddeev, “On the characteristic equations of rational symmetric matrices” Dokl. Akad. Nauk SSSR 58 (1947) pp. 753-754).
The case D = Z and n = 2 is studied in detail. Let Δ′ be an integral domain also having quotient field F(θ) and such that Δ′ ⊇ Δ. Let φ be a Δ/Z symrep. A method is given for finding a Δ′/Z symrep ψ such that the Δ′ ideal class corresponding to the class of ψ is an extension to Δ′ of the Δ ideal class corresponding to the class of φ. The problem of finding all Δ/Z symreps equivalent to a given one is studied.
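As a concrete instance of the D = Z, n = 2 case (a worked example added here, not taken from the thesis): when d = x² + y², the symmetric integer matrix [[x, y], [y, -x]] squares to dI, so θ = √d ↦ M defines a symrep over Z:

```python
def symrep_sqrt(x, y):
    """Symmetric integer matrix M with M*M = (x**2 + y**2) * I, so that
    theta = sqrt(x**2 + y**2) -> M extends to a symrep over Z.  (Only d
    expressible as a sum of two squares arises this way.)"""
    return [[x, y], [y, -x]]

def matmul2(A, B):
    """2x2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

M = symrep_sqrt(1, 2)   # a symmetric matrix representing sqrt(5)
```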
Abstract:
Let L be the algebra of all linear transformations on an n-dimensional vector space V over a field F, and let A, B ∈ L. Let A_{i+1} = A_iB - BA_i, i = 0, 1, 2, …, with A_0 = A. Let f_k(A, B; σ) = A_{2K+1} - σ_1A_{2K-1} + σ_2A_{2K-3} - … + (-1)^K σ_K A_1, where σ = (σ_1, σ_2, …, σ_K), the σ_i belong to F, and K = k(k-1)/2. Taussky and Wielandt [Proc. Amer. Math. Soc., 13(1962), 732-735] showed that f_n(A, B; σ) = 0 if σ_i is the ith elementary symmetric function of the (β_r - β_s)², 1 ≤ r < s ≤ n, i = 1, 2, …, N, with N = n(n-1)/2, where the β_r are the characteristic roots of B. In this thesis we discuss relations involving f_k(X, Y; σ) where X, Y ∈ L and 1 ≤ k < n. We show: 1. If F is infinite and if for each X ∈ L there exists σ so that f_k(A, X; σ) = 0 where 1 ≤ k < n, then A is a scalar transformation. 2. If F is algebraically closed, a necessary and sufficient condition that there exist a basis of V with respect to which the matrices of A and B are both in block upper triangular form, where the blocks on the diagonals are either one- or two-dimensional, is that certain products X_1 X_2 … X_r belong to the radical of the algebra generated by A and B over F, where X_i has the form f_2(A, P(A, B); σ), for all polynomials P(x, y). We partially generalize this to the case where the blocks have dimensions ≤ k. 3. If A and B generate L, if the characteristic of F does not divide n, and if there exists σ so that f_k(A, B; σ) = 0 for some k with 1 ≤ k < n, then the characteristic roots of B belong to the splitting field of g_k(w; σ) = w^{2K+1} - σ_1w^{2K-1} + σ_2w^{2K-3} - … + (-1)^K σ_K w over F. We use this result to prove a theorem involving a generalized form of property L [cf. Motzkin and Taussky, Trans. Amer. Math. Soc., 73(1952), 108-114]. 4. We also give mild generalizations of results of McCoy [Amer. Math. Soc. Bull., 42(1936), 592-600] and Drazin [Proc. London Math. Soc., 1(1951), 222-231].
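The quoted Taussky-Wielandt identity can be checked numerically in the smallest case n = 2, where K = N = 1 and the identity reads A_3 = (β_1 - β_2)² A_1. The sketch below verifies this for a diagonal B (a sanity check added here, not part of the thesis):

```python
def commutator_tower(A, B, depth):
    """Compute A_0 = A and A_{i+1} = A_i B - B A_i for 2x2 matrices,
    returning the list [A_0, A_1, ..., A_depth]."""
    def mul(X, Y):
        return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
                for i in range(2)]
    def sub(X, Y):
        return [[X[i][j] - Y[i][j] for j in range(2)] for i in range(2)]
    seq = [A]
    for _ in range(depth):
        seq.append(sub(mul(seq[-1], B), mul(B, seq[-1])))
    return seq

# n = 2: with B = diag(5, 2), sigma_1 = (5 - 2)**2 = 9 and A_3 = 9 * A_1
A = [[1, 2], [3, 4]]
B = [[5, 0], [0, 2]]
tower = commutator_tower(A, B, 3)
```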
Abstract:
An investigation was conducted to estimate the error when the flat-flux approximation is used to compute the resonance integral for a single absorber element embedded in a neutron source.
The investigation was initiated by assuming a parabolic flux distribution in computing the flux-averaged escape probability which occurs in the collision density equation. Both wide-resonance and narrow-resonance expressions for the resonance integral were also assumed. The fact that this simple model demonstrated a decrease in the resonance integral motivated the more detailed investigation of the thesis.
An integral equation describing the collision density as a function of energy, position and angle is constructed and is subsequently specialized to the case of energy and spatial dependence. This equation is further simplified by expanding the spatial dependence in a series of Legendre polynomials (since a one-dimensional case is considered). In this form, the effects of slowing-down and flux depression may be accounted for to any degree of accuracy desired. The resulting integral equation for the energy dependence is thus solved numerically, considering the slowing down model and the infinite mass model as separate cases.
From the solution obtained by the above method, the error ascribable to the flat-flux approximation is obtained. In addition to this, the error introduced in the resonance integral in assuming no slowing down in the absorber is deduced. Results by Chernick for bismuth rods, and by Corngold for uranium slabs, are compared to the latter case, and these agree to within the approximations made.
Abstract:
Techniques are developed for estimating activity profiles in fixed bed reactors and catalyst deactivation parameters from operating reactor data. These techniques are applicable, in general, to most industrial catalytic processes. The catalytic reforming of naphthas is taken as a broad example to illustrate the estimation schemes and to signify the physical meaning of the kinetic parameters of the estimation equations. The work is described in two parts. Part I deals with the modeling of kinetic rate expressions and the derivation of the working equations for estimation. Part II concentrates on developing various estimation techniques.
Part I: The reactions used to describe naphtha reforming are dehydrogenation and dehydroisomerization of cycloparaffins; isomerization, dehydrocyclization and hydrocracking of paraffins; and the catalyst deactivation reactions, namely coking on alumina sites and sintering of platinum crystallites. The rate expressions for the above reactions are formulated, and the effects of transport limitations on the overall reaction rates are discussed in the appendices. Moreover, various types of interaction between the metallic and acidic active centers of reforming catalysts are discussed as characterizing the different types of reforming reactions.
Part II: In catalytic reactor operation, the activity distribution along the reactor determines the kinetics of the main reaction and is needed for predicting the effect of changes in the feed state and the operating conditions on the reactor output. In the case of a monofunctional catalyst and of bifunctional catalysts in limiting conditions, the cumulative activity is sufficient for predicting steady reactor output. The estimation of this cumulative activity can be carried out easily from measurements at the reactor exit. For a general bifunctional catalytic system, the detailed activity distribution is needed for describing the reactor operation, and some approximation must be made to obtain practicable estimation schemes. This is accomplished by parametrization techniques using measurements at a few points along the reactor. Such parametrization techniques are illustrated numerically with a simplified model of naphtha reforming.
To determine long term catalyst utilization and regeneration policies, it is necessary to estimate catalyst deactivation parameters from the current operating data. For a first order deactivation model with a monofunctional catalyst or with a bifunctional catalyst in special limiting circumstances, analytical techniques are presented to transform the partial differential equations to ordinary differential equations which admit more feasible estimation schemes. Numerical examples include the catalytic oxidation of butene to butadiene and a simplified model of naphtha reforming. For a general bifunctional system or in the case of a monofunctional catalyst subject to general power law deactivation, the estimation can only be accomplished approximately. The basic feature of an appropriate estimation scheme involves approximating the activity profile by certain polynomials and then estimating the deactivation parameters from the integrated form of the deactivation equation by regression techniques. Different bifunctional systems must be treated by different estimation algorithms, which are illustrated by several cases of naphtha reforming with different feed or catalyst composition.