23 results for simplified CDD


Relevance: 10.00%

Abstract:

Biomolecular circuit engineering is critical for implementing complex functions in vivo and is a baseline method in synthetic biology. However, current methods for biomolecular circuit engineering are time-consuming and tedious: a complete design-build-test cycle typically takes weeks to months, owing to the lack of an intermediary between design ex vivo and testing in vivo. In this work, we explore the development and application of a "biomolecular breadboard" composed of an in vitro transcription-translation (TX-TL) lysate to speed up the engineering design-build-test cycle. We first developed protocols for creating and using lysates for biological circuit design, simplifying the existing technology to an affordable ($0.03/µL), easy-to-use three-tube reagent system. We then developed tools to accelerate circuit design by allowing linear DNA to be used in lieu of plasmid DNA and by utilizing principles of modular assembly, reducing the design-build-test cycle to under a business day. We then characterized protein degradation dynamics in the breadboard to aid in implementing complex circuits. Finally, we demonstrated that the breadboard can be applied to engineer complex synthetic circuits in vitro and in vivo. Specifically, we utilized our understanding of linear DNA prototyping, modular assembly, and protein degradation dynamics to characterize the repressilator oscillator and to prototype novel three- and five-node negative-feedback oscillators both in vitro and in vivo. We therefore believe the biomolecular breadboard has wide applicability as an intermediary for biological circuit engineering.

Abstract:

An exact solution to the monoenergetic Boltzmann equation is obtained for the case of a plane isotropic burst of neutrons introduced at the interface separating two adjacent, dissimilar, semi-infinite media. The method of solution used is to remove the time dependence by a Laplace transformation, solve the transformed equation by the normal mode expansion method, and then invert to recover the time dependence.
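In a standard transport-theory notation (ours, not necessarily the author's: ψ the angular flux, Σ_t the total cross section, c the mean number of secondaries per collision), the solution strategy described above can be sketched as:

```latex
% One-speed Boltzmann equation, isotropic scattering, plane geometry:
\frac{1}{v}\frac{\partial \psi}{\partial t}
  + \mu \frac{\partial \psi}{\partial x}
  + \Sigma_t\,\psi(x,\mu,t)
  = \frac{c\,\Sigma_t}{2}\int_{-1}^{1} \psi(x,\mu',t)\,d\mu'

% A Laplace transform in time,
% \tilde\psi(x,\mu,s) = \int_0^\infty e^{-st}\,\psi(x,\mu,t)\,dt,
% removes the time derivative and yields a stationary-form equation
% with the replacement \Sigma_t \to \Sigma_t + s/v:
\mu \frac{\partial \tilde\psi}{\partial x}
  + \left(\Sigma_t + \frac{s}{v}\right)\tilde\psi
  = \frac{c\,\Sigma_t}{2}\int_{-1}^{1} \tilde\psi(x,\mu',s)\,d\mu'
  + \frac{1}{v}\,\psi(x,\mu,0)
```

The transformed equation has the form of a stationary transport problem, which is the step that makes the normal-mode (Case) expansion applicable before inverting the transform to recover the time dependence.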

The general result is expressed as a sum of definite, multiple integrals, one of which contains the uncollided wave of neutrons originating at the source plane. It is possible to obtain a simplified form for the solution at the interface, and certain numerical calculations are made there.

The interface flux in two adjacent moderators is calculated and plotted as a function of time for several moderator materials. For each case it is found that the flux-decay curve has an asymptotic slope given accurately by diffusion theory. Furthermore, the interface current is observed to change direction when the scattering and absorption cross sections of the two moderator materials are related in a certain manner. More specifically, the reflection process in two adjacent moderators appears to depend initially on the scattering properties and for long times on the absorption properties of the media.

This analysis contains both the single infinite and semi-infinite medium problems as special cases. The results in these two special cases provide a check on the accuracy of the general solution since they agree with solutions of these problems obtained by separate analyses.

Abstract:

As a simplified approach for estimating theoretically the influence of local subsoils upon the ground motion during an earthquake, the problem of an idealized layered system subjected to vertically incident plane body waves was studied. Both the technique of steady-state analysis and the technique of transient analysis have been used to analyze the problem.

In the steady-state analysis, a recursion formula has been derived for obtaining the response of a layered system to sinusoidally steady-state input. Several conclusions are drawn concerning the nature of the amplification spectrum of a nonviscous layered system having its layer stiffnesses increasing with depth. Numerical examples are given to demonstrate the effect of layer parameters on the amplification spectrum of a layered system.
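The flavor of such an amplification spectrum can be illustrated with a minimal sketch under assumptions the thesis does not make: a single uniform damped soil layer on a rigid base, vertically incident shear waves, and damping folded into a complex shear-wave velocity. All function names and parameter values here are ours, for illustration only.

```python
import numpy as np

def amplification(f, H=30.0, vs=300.0, damping=0.05):
    """Amplification |1/cos(omega*H/Vs*)| for a uniform soil layer of
    thickness H [m] and shear-wave velocity vs [m/s] on a rigid base.
    Material damping enters through a complex shear-wave velocity
    (small-damping approximation)."""
    omega = 2.0 * np.pi * np.asarray(f, dtype=float)
    vs_complex = vs * (1.0 + 1j * damping)  # damped (complex) velocity
    return np.abs(1.0 / np.cos(omega * H / vs_complex))

# Fundamental resonance for these parameters: f0 = vs / (4 H) = 2.5 Hz.
f0 = 300.0 / (4.0 * 30.0)
peak = amplification(f0)   # large but finite, limited by damping
```

At low frequency the layer moves with the base (amplification near 1); near the fundamental frequency the response is strongly amplified, with damping controlling the finite peak height — the qualitative behavior a layer with stiffness increasing with depth shares.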

In the transient analysis, two modified shear-beam models have been established for approximating the response of a layered system to earthquake-like excitation. The method of continuous modal analysis was adopted for approximate analysis of the models, with energy dissipation in the layers, if any, taken into account. Numerical examples are given to demonstrate the accuracy of the models and the effect of a layered system in modifying the input motion.

Conditions are established under which the theory is applicable for predicting the influence of local subsoils on the ground motion during an earthquake. To demonstrate the applicability of the models to actual cases, three actually recorded earthquake events are examined. It is concluded that significant modification of the incoming seismic waves, as predicted by the theory, is likely to occur in well-defined soft subsoils during an earthquake, provided that certain conditions concerning the nature of the incoming seismic waves are satisfied.

Abstract:

The purpose of this thesis is to characterize the behavior of the smallest turbulent scales in high Karlovitz number (Ka) premixed flames. These scales are particularly important in the two-way coupling between turbulence and chemistry and better understanding of these scales will support future modeling efforts using large eddy simulations (LES). The smallest turbulent scales are studied by considering the vorticity vector, ω, and its transport equation.

Due to the complexity of turbulent combustion introduced by the wide range of length and time scales, the two-dimensional vortex-flame interaction is first studied as a simplified test case. Numerical and analytical techniques are used to discern the dominant transport terms and their effects on vorticity, based on the initial size and strength of the vortex. This description of the effects of the flame on a vortex provides a foundation for investigating vorticity in turbulent combustion.

Subsequently, enstrophy, ω² = ω · ω, and its transport equation are investigated in premixed turbulent combustion. For this purpose, a series of direct numerical simulations (DNS) of premixed n-heptane/air flames is performed, the conditions of which span a wide range of unburnt Karlovitz numbers and turbulent Reynolds numbers. Theoretical scaling analysis, together with the DNS results, supports that, at high Karlovitz number, enstrophy transport is controlled by the viscous-dissipation and vortex-stretching/production terms. As a result, vorticity scales throughout the flame with the inverse of the Kolmogorov time scale, τη, just as in homogeneous isotropic turbulence. As τη is only a function of the viscosity and the dissipation rate, this supports the validity of Kolmogorov's first similarity hypothesis for sufficiently high Karlovitz numbers (Ka ≳ 100). These conclusions stand in contrast to low-Karlovitz-number behavior, where dilatation and baroclinic torque have a significant impact on vorticity within the flame. The results are unaffected by the transport model, the chemical model, the turbulent Reynolds number, and the physical configuration.
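The scaling quantities above can be made concrete with a small numerical sketch. The variable names and the illustrative property values are ours, not the thesis's; they merely show how τη and the Karlovitz number relate.

```python
import math

def kolmogorov_time(nu, epsilon):
    """Kolmogorov time scale: tau_eta = sqrt(nu / epsilon), a function of
    only the kinematic viscosity nu and the dissipation rate epsilon."""
    return math.sqrt(nu / epsilon)

def karlovitz(tau_flame, nu, epsilon):
    """Karlovitz number Ka = tau_flame / tau_eta: ratio of the chemical
    (flame) time scale to the smallest turbulent time scale."""
    return tau_flame / kolmogorov_time(nu, epsilon)

# Illustrative numbers: nu = 1.6e-5 m^2/s, epsilon = 2e3 m^2/s^3,
# flame time tau_F = 1e-2 s.  This puts Ka above the ~100 threshold
# quoted above, where vorticity is expected to scale as omega ~ 1/tau_eta.
tau_eta = kolmogorov_time(1.6e-5, 2.0e3)   # ~8.9e-5 s
Ka = karlovitz(1.0e-2, 1.6e-5, 2.0e3)      # ~112
```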

Next, the isotropy of vorticity is assessed. It is found that given a sufficiently large value of the Karlovitz number (Ka ≳ 100) the vorticity is isotropic. At lower Karlovitz numbers, anisotropy develops due to the effects of the flame on the vortex stretching/production term. In this case, the local dynamics of vorticity in the strain-rate tensor, S, eigenframe are altered by the flame. At sufficiently high Karlovitz numbers, the dynamics of vorticity in this eigenframe resemble that of homogeneous isotropic turbulence.

Combined, the results of this thesis support that both the magnitude and the orientation of vorticity resemble the behavior of homogeneous isotropic turbulence, given a sufficiently high Karlovitz number (Ka ≳ 100). This supports the validity of Kolmogorov's first similarity hypothesis and the hypothesis of local isotropy under these conditions. However, dramatically different behavior is found at lower Karlovitz numbers. These conclusions suggest directions for modeling high-Karlovitz-number premixed flames using LES. With more accurate models, the design of aircraft combustors and other combustion-based devices may better mitigate the detrimental effects of combustion, reducing CO2 and soot production while increasing engine efficiency.

Abstract:

The equations of relativistic, perfect-fluid hydrodynamics are cast in Eulerian form using six scalar "velocity-potential" fields, each of which has an equation of evolution. These equations determine the motion of the fluid through the equation

Uν = μ⁻¹ (φ,ν + α β,ν + θ S,ν).

Einstein's equations and the velocity-potential hydrodynamical equations follow from a variational principle whose action is

I = ∫ (R + 16πp) (−g)^(1/2) d⁴x,

where R is the scalar curvature of spacetime and p is the pressure of the fluid. These equations are also cast into Hamiltonian form, with Hamiltonian density −T⁰₀ (−g⁰⁰)^(−1/2).

The second variation of the action is used as the Lagrangian governing the evolution of small perturbations of differentially rotating stellar models. In Newtonian gravity this leads to linear dynamical stability criteria already known. In general relativity it leads to a new sufficient condition for the stability of such models against arbitrary perturbations.

By introducing three scalar fields defined by

ρξ = ∇λ + ∇×(χi) + ψi

(where ξ is the vector displacement of the perturbed fluid element, ρ is the mass density, and i is an arbitrary vector), the Newtonian stability criteria are greatly simplified for the purpose of practical applications. The relativistic stability criterion is not yet in a form that permits practical calculations, but ways to place it in such a form are discussed.

Abstract:

This thesis examines two problems concerned with surface effects in simple molecular systems. The first arises from the interaction of a fluid with a solid boundary, and the second from the interaction of a liquid with its own vapor.

For a fluid in contact with a solid wall, two sets of integro-differential equations, involving the molecular distribution functions of the system, are derived. One of these is a particular form of the well-known Bogolyubov-Born-Green-Kirkwood-Yvon equations. For the second set, the derivation, in contrast with the formulation of the B.B.G.K.Y. hierarchy, is independent of the pair-potential assumption. The density of the fluid, expressed as a power series in the uniform fluid density, is obtained by solving these equations under the requirement that the wall be ideal.
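For context, the lowest member of the B.B.G.K.Y. hierarchy for a fluid in an external wall field takes the familiar Yvon–Born–Green form (the notation here is ours, not necessarily the author's):

```latex
k_B T \,\nabla_1 \rho^{(1)}(\mathbf{r}_1)
  = -\,\rho^{(1)}(\mathbf{r}_1)\,\nabla_1 v(\mathbf{r}_1)
    - \int \rho^{(2)}(\mathbf{r}_1,\mathbf{r}_2)\,
           \nabla_1 u(|\mathbf{r}_1-\mathbf{r}_2|)\,d\mathbf{r}_2
```

Here ρ^(1) and ρ^(2) are the singlet and pair distribution functions, v(r) is the wall potential, and u is the intermolecular pair potential. Note that this form rests on the pair-potential assumption, which is precisely the assumption the thesis's second set of equations avoids.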

The liquid-vapor interface is analyzed with the aid of equations that describe the density and pair-correlation function. These equations are simplified and then solved by employing the superposition and the low vapor density approximations. The solutions are substituted into formulas for the surface energy and surface tension, and numerical results are obtained for selected systems. Finally, the liquid-vapor system near the critical point is examined by means of the lowest order B.B.G.K.Y. equation.

Abstract:

Techniques are developed for estimating activity profiles in fixed-bed reactors and catalyst-deactivation parameters from operating reactor data. These techniques are applicable, in general, to most industrial catalytic processes. The catalytic reforming of naphthas is taken as a broad example to illustrate the estimation schemes and to clarify the physical meaning of the kinetic parameters in the estimation equations. The work is described in two parts. Part I deals with the modeling of kinetic rate expressions and the derivation of the working equations for estimation. Part II concentrates on developing various estimation techniques.

Part I: The reactions used to describe naphtha reforming are dehydrogenation and dehydroisomerization of cycloparaffins; isomerization, dehydrocyclization and hydrocracking of paraffins; and the catalyst deactivation reactions, namely coking on alumina sites and sintering of platinum crystallites. The rate expressions for the above reactions are formulated, and the effects of transport limitations on the overall reaction rates are discussed in the appendices. Moreover, various types of interaction between the metallic and acidic active centers of reforming catalysts are discussed as characterizing the different types of reforming reactions.

Part II: In catalytic reactor operation, the activity distribution along the reactor determines the kinetics of the main reaction and is needed for predicting the effect of changes in the feed state and the operating conditions on the reactor output. In the case of a monofunctional catalyst and of bifunctional catalysts in limiting conditions, the cumulative activity is sufficient for predicting steady reactor output. The estimation of this cumulative activity can be carried out easily from measurements at the reactor exit. For a general bifunctional catalytic system, the detailed activity distribution is needed for describing the reactor operation, and some approximation must be made to obtain practicable estimation schemes. This is accomplished by parametrization techniques using measurements at a few points along the reactor. Such parametrization techniques are illustrated numerically with a simplified model of naphtha reforming.

To determine long-term catalyst utilization and regeneration policies, it is necessary to estimate catalyst-deactivation parameters from the current operating data. For a first-order deactivation model with a monofunctional catalyst, or with a bifunctional catalyst in special limiting circumstances, analytical techniques are presented to transform the partial differential equations into ordinary differential equations, which admit more feasible estimation schemes. Numerical examples include the catalytic oxidation of butene to butadiene and a simplified model of naphtha reforming. For a general bifunctional system, or in the case of a monofunctional catalyst subject to general power-law deactivation, the estimation can only be accomplished approximately. The basic feature of an appropriate estimation scheme involves approximating the activity profile by certain polynomials and then estimating the deactivation parameters from the integrated form of the deactivation equation by regression techniques. Different bifunctional systems must be treated by different estimation algorithms, which are illustrated by several cases of naphtha reforming with different feed or catalyst compositions.
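The regression idea can be illustrated with a toy sketch for the simplest case: a first-order deactivation model da/dt = −k_d·a, whose integrated form log a(t) = log a₀ − k_d·t makes k_d the slope of a linear fit. The model choice, names, and numbers here are ours, for illustration only.

```python
import numpy as np

def estimate_kd(t, activity):
    """Fit first-order deactivation a(t) = a0 * exp(-kd * t) by linear
    regression on log(a): the slope of log(a) versus t is -kd."""
    slope, _intercept = np.polyfit(t, np.log(activity), 1)
    return -slope

# Synthetic "operating data": true kd = 0.05 per day, mild measurement noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 60.0, 30)            # days on stream
a = np.exp(-0.05 * t) * (1.0 + 0.01 * rng.standard_normal(t.size))
kd_hat = estimate_kd(t, a)                 # recovered close to 0.05
```

For general power-law deactivation or bifunctional systems, as the abstract notes, one would instead approximate the activity profile by polynomials and regress on the integrated deactivation equation; the log-linear trick above only works in the first-order case.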

Abstract:

Constitutive modeling in granular materials has historically been based on macroscopic experimental observations that, while usually effective at predicting the bulk behavior of these types of materials, suffer important limitations when it comes to understanding the physics behind the grain-to-grain interactions that cause the material to behave macroscopically in a given way under certain boundary conditions.

The advent of the discrete element method (DEM) in the late 1970s helped scientists and engineers gain deeper insight into some of the most fundamental mechanisms operating at the grain scale. However, one of the most critical limitations of classical DEM schemes has been their inability to account for complex grain morphologies; instead, simplified geometries such as discs, spheres, and polyhedra have typically been used. Fortunately, over the last fifteen years there has been increasing development of new computational and experimental techniques, such as non-uniform rational basis splines (NURBS) and 3D X-ray computed tomography (3DXRCT), which are helping to create tools that enable the inclusion of complex grain morphologies in DEM schemes.

Yet, as the scientific community is still developing these new tools, a gap remains in thoroughly understanding the physical relations connecting the grain and continuum scales, as well as in the development of discrete techniques that can predict the emergent behavior of granular materials without resorting to phenomenology, instead directly unraveling the micro-mechanical origin of macroscopic behavior.

In order to contribute towards closing the aforementioned gap, we have developed a micro-mechanical analysis of macroscopic peak strength, critical state, and residual strength in two-dimensional non-cohesive granular media, where typical continuum constitutive quantities such as frictional strength and dilation angle are explicitly related to their corresponding grain-scale counterparts (e.g., inter-particle contact forces, fabric, particle displacements, and velocities), providing an across-the-scale basis for better understanding and modeling granular media.

In the same vein, we utilize a new DEM scheme (LS-DEM) that takes advantage of a mathematical technique called the level set (LS) to enable the inclusion of real grain shapes in a classical discrete element method. After calibrating LS-DEM against real experimental results, we exploit part of its potential to study the dependency of critical-state (CS) parameters, such as the critical-state line (CSL) slope, the CSL intercept, and the CS friction angle, on grain morphology, i.e., sphericity, roundness, and regularity.
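The core level-set idea in a scheme like LS-DEM — query one grain's signed distance function at another grain's boundary nodes to detect contact — can be sketched as follows. This is a simplified stand-in, not the thesis's implementation: real LS-DEM interpolates grid-sampled level sets of arbitrary grain shapes, whereas here an analytic disc is used for brevity, and all names are ours.

```python
import numpy as np

def disc_sdf(center, radius):
    """Signed distance function of a disc: negative inside, zero on the boundary."""
    center = np.asarray(center, dtype=float)
    def phi(points):
        return np.linalg.norm(np.asarray(points, dtype=float) - center,
                              axis=-1) - radius
    return phi

def boundary_nodes(center, radius, n=64):
    """Discretize a grain boundary into n nodes, LS-DEM-style."""
    th = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    return np.column_stack([center[0] + radius * np.cos(th),
                            center[1] + radius * np.sin(th)])

def contacts(phi_master, nodes_slave):
    """Nodes of the slave grain penetrating the master grain, with depths
    (negative level-set values) from which contact forces would be built."""
    d = phi_master(nodes_slave)
    mask = d < 0.0
    return nodes_slave[mask], d[mask]

# Two unit discs with centers 1.5 apart overlap by 0.5:
phi_a = disc_sdf((0.0, 0.0), 1.0)
nodes_b = boundary_nodes((1.5, 0.0), 1.0)
pts, depths = contacts(phi_a, nodes_b)   # deepest penetration ~ -0.5
```

The appeal of the level-set representation is that this node-versus-level-set query costs the same regardless of how irregular the master grain's shape is, which is what makes real (tomography-derived) morphologies tractable in DEM.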

Finally, we introduce a first computational algorithm to "clone" the grain morphologies of a sample of real digital grains. This cloning algorithm allows us to generate an arbitrary number of cloned grains that display the same morphological features (e.g., roundness and aspect ratio) as their real parents and can be included in a DEM simulation of a given mechanical phenomenon. In turn, this will help the development of discrete techniques that can directly predict the engineering-scale behavior of granular media without resorting to phenomenology.