882 results for CONTINUOUS THERMODYNAMICS
Abstract:
L-Glutamine amidohydrolase (L-glutaminase, EC 3.5.1.2) is a therapeutically and industrially important enzyme. Because it is a potent antileukemic agent and a flavor-enhancing agent used in the food industry, many researchers have focused their attention on L-glutaminase. In this article, we report the continuous production of extracellular L-glutaminase by the marine fungus Beauveria bassiana BTMF S-10 in a packed-bed reactor. Parameters influencing bead production and performance under batch mode were optimized in the following order: support (Na-alginate) concentration, CaCl2 concentration for bead preparation, curing time of the beads, spore inoculum concentration, activation time, initial pH of the enzyme production medium, incubation temperature, and retention time. The parameters optimized under batch mode for L-glutaminase production were then incorporated into the continuous production studies. Beads with 12 × 10^8 spores/g of beads were activated in a solution of 1% glutamine in seawater for 15 h, and the activated beads were packed into a packed-bed reactor. Enzyme production medium (pH 9.0) was pumped through the bed, and the effluent was collected from the top of the column. The effects of medium flow rate, substrate concentration, aeration, and bed height on continuous production of L-glutaminase were studied. Production was monitored for 5 h in each case, and the volumetric productivity was calculated. Under the optimized conditions for continuous production, the reactor gave a volumetric productivity of 4.048 U/(mL·h), which indicates that continuous production of the enzyme by Ca-alginate-immobilized spores is well suited to B. bassiana and yields more enzyme in a shorter time. The results indicate the scope for utilizing immobilized B. bassiana for continuous commercial production of L-glutaminase.
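The abstract quotes a volumetric productivity without spelling out the calculation. As a hedged illustration only, volumetric productivity in a continuous packed-bed reactor is commonly estimated as effluent enzyme activity times flow rate divided by working bed volume; the input numbers below are invented placeholders chosen only so the arithmetic lands on the reported 4.048 U/(mL·h), not values taken from the study.

```python
# Hedged sketch: volumetric productivity of a packed-bed reactor.
# All numeric inputs are illustrative assumptions, not data from the study.

def volumetric_productivity(activity_u_per_ml, flow_ml_per_h, bed_volume_ml):
    """Return productivity in U/(mL·h): enzyme activity leaving the column per
    hour, normalised by the working bed volume."""
    return activity_u_per_ml * flow_ml_per_h / bed_volume_ml

# Example: 2.024 U/mL in the effluent at 100 mL/h through a 50 mL bed
print(volumetric_productivity(2.024, 100.0, 50.0))  # ≈ 4.048 U/(mL·h)
```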
Abstract:
In this paper, we define partial moments for a univariate continuous random variable. A recurrence relationship for the Pearson curve using the partial moments is established. Interrelationships between the partial moments and other reliability measures, such as the failure rate and the mean residual life function, are proved. We also prove some characterization theorems using the partial moments in the context of length-biased models and equilibrium distributions.
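As a hedged illustration of the quantities involved (the notation below is a common convention, not necessarily the one used in the paper), the r-th partial moment about a point t and its link to the mean residual life and failure rate can be written as:

```latex
% Common definition of the r-th partial moment about t (notation assumed, not the paper's):
\alpha_r(t) = \mathbb{E}\!\left[(X - t)^r \,\mathbf{1}\{X > t\}\right]
            = \int_t^{\infty} (x - t)^r f(x)\,dx .

% With survival function \bar{F}(t) = P(X > t), the mean residual life is
m(t) = \mathbb{E}[X - t \mid X > t] = \frac{\alpha_1(t)}{\bar{F}(t)} ,
\qquad
% and the failure rate is
h(t) = \frac{f(t)}{\bar{F}(t)} .
```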
Abstract:
One of the interesting consequences of Einstein's General Theory of Relativity is its black hole solutions. Until the observation made by Hawking in the 1970s, it was believed that black holes are perfectly black. The General Theory of Relativity says that black holes are objects which absorb both matter and radiation crossing the event horizon. The event horizon is a surface through which even light is not able to escape; it acts as a one-sided membrane that allows the passage of particles in only one direction, i.e. towards the center of the black hole. All the particles absorbed by a black hole increase its mass, and thus the size of the event horizon also increases. Hawking showed in the 1970s that, when quantum mechanical laws are applied to black holes, they are not perfectly black but can emit radiation. Thus a black hole can have a temperature, known as the Hawking temperature. In this thesis we have studied some aspects of black holes in f(R) theory of gravity and in Einstein's General Theory of Relativity. The scattering of a scalar field in this background space-time, studied in the first chapter, shows that the extended black hole scatters scalar waves and has a scattering cross section; applying the tunneling mechanism, we have obtained the Hawking temperature of this black hole. In the following chapter we have investigated the quasinormal properties of the extended black hole. We have studied electromagnetic and scalar perturbations in this space-time and find that the black hole frequencies are complex and show exponential damping, indicating that the black hole is stable against these perturbations. In the present study we show that black holes not only exist in modified gravities but also share the properties of black hole space-times in the General Theory of Relativity. 2+1 dimensional (three dimensional) black holes are simplified examples of the more complicated four dimensional black holes, and such models are therefore known as toy models for the four dimensional black holes of the General Theory of Relativity. We have studied some properties of these black holes in the Einstein model (General Theory of Relativity). A three dimensional black hole known as the MSW black hole is taken for our study. The thermodynamics and spectroscopy of the MSW black hole are studied: the area spectrum is obtained and found to be equispaced, and different thermodynamical properties are derived. The Dirac perturbation of this three dimensional black hole is studied and the resulting quasinormal spectrum is obtained. The quasinormal frequencies are tabulated, and their values show an exponential damping of oscillations, indicating that the black hole is stable against the massless Dirac perturbation. In the General Theory of Relativity almost all solutions contain singularities; the cosmological solution and the various black hole solutions of Einstein's field equations contain singularities. Regular black hole solutions are those which solve Einstein's equations but have no singularity at the origin: they possess an event horizon but no central singularity. Such a solution was first put forward by Bardeen, and Hayward proposed a similar regular black hole solution. We have studied the thermodynamics and spectroscopy of the Hayward regular black hole, obtaining the different thermodynamic properties and the area spectrum; the area spectrum is a function of the horizon radius. The entropy-heat capacity curve has a discontinuity at some value of entropy, showing a phase transition.
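For orientation only, the textbook Schwarzschild expressions for the Hawking temperature, the Bekenstein-Hawking entropy, and an equispaced area spectrum are quoted below; these are the standard benchmarks, not the specific results derived in the thesis for the extended f(R) or Hayward black holes, and the spacing constant ε is left unspecified.

```latex
% Textbook Schwarzschild benchmark, quoted for orientation only:
T_H = \frac{\hbar c^{3}}{8\pi G M k_B},
\qquad
S_{BH} = \frac{k_B c^{3} A}{4 G \hbar},
\qquad
A_n \simeq \epsilon \, n \, \ell_P^{2}, \quad n = 1, 2, \dots
```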
Abstract:
This work covers the electromechanical design and design optimization of widely tunable optical multi-membrane-based devices with vertically oriented cavities, based on the finite element method (FEM). A multi-membrane InP/air Fabry-Pérot optical filter is presented and comprehensively analyzed. A systematic structural design procedure is presented. Accurate analytical electromechanical models of the devices have been derived; these can be invaluable tools for quickly providing clear insight at the beginning of the design phase. Using the FEM program, the stiffening effect caused by non-linear strain was investigated, and its effect on extending the mechanical tuning range of the devices was demonstrated. An interesting observation was that the normalized deflection-voltage relation has an invariant profile. The deformation of the membrane surfaces of the device geometries presented in this work turned out to be an undesired, but sometimes unavoidable, effect. It is shown, however, that the choice of the structural dimensions influences the degree of membrane deformation under actuation. This work presents a quasi-3D electromechanical model implemented in FEMLAB that can be applied generally to the modeling of thin structures, by treating them as 2D objects and taking the third dimension as a constant quantity (e.g. the layer thickness) or as a quantity described by a mathematical function. This assumption drastically reduces the computation time as well as the required memory. The model was further used to investigate the effect of scaling the tunable devices. A novel scaling technique was derived and applied; the results show that the resulting scaled device exhibits almost exactly the same mechanical tuning behavior as the unscaled one. Including the influence of axial stress and stress gradients in the calculations required modifying the standard implementation of the 3D structural mechanics mode shipped with the FEM software used. The results of this study show a large influence of stress on the tuning properties of the investigated devices. Furthermore, the results of the theoretical model calculations agreed very well with the experimental results.
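As a hedged, greatly simplified stand-in for the analytical electromechanical models mentioned above, the sketch below uses the classic 1D lumped parallel-plate actuator model with an assumed spring constant, gap, and electrode area; it is not the multi-membrane FEM model of the thesis, and the parameter values are placeholders.

```python
# Hedged sketch: 1D lumped parallel-plate electrostatic actuator model.
# Spring constant k, gap g0 and electrode area below are assumed placeholders,
# not parameters of the InP/air filter described in the thesis.
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def equilibrium_voltage(x, k, g0, area):
    """Voltage that holds the movable membrane at deflection x (0 <= x < g0),
    from balancing the spring force k*x against the electrostatic force."""
    return np.sqrt(2.0 * k * x * (g0 - x) ** 2 / (EPS0 * area))

k, g0, area = 5.0, 1.0e-6, (100e-6) ** 2    # N/m, m, m^2 (placeholders)
x_pull_in = g0 / 3.0                        # pull-in occurs at one third of the gap
v_pull_in = equilibrium_voltage(x_pull_in, k, g0, area)
print(f"pull-in deflection: {x_pull_in*1e9:.0f} nm, pull-in voltage: {v_pull_in:.2f} V")
```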
Abstract:
When triangulating a belief network we aim to obtain a junction tree of minimum state space. Searching for the optimal triangulation can be cast as a search over all the permutations of the network's variables. Our approach is to embed the discrete set of permutations in a convex continuous domain D. By suitably extending the cost function over D and solving the continuous nonlinear optimization task we hope to obtain a good triangulation with respect to the aforementioned cost. In this paper we introduce an upper bound to the total junction tree weight as the cost function. The appropriateness of this choice is discussed and explored by simulations. Then we present two ways of embedding the new objective function into continuous domains and show that they perform well compared to the best known heuristic.
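As a hedged companion to the discrete side of this search (the standard evaluation of one elimination order, not the paper's continuous embedding or its particular upper bound), the state-space weight induced by a single variable ordering can be computed as follows; the graph, cardinalities, and ordering are made-up examples.

```python
# Hedged sketch: total clique state-space weight induced by an elimination order.
# This is the standard discrete cost a triangulation search tries to minimise.
from itertools import combinations

def triangulation_weight(adjacency, cardinality, order):
    """adjacency: dict node -> set of neighbours (undirected, moralised graph).
    cardinality: dict node -> number of states.
    order: elimination order (a permutation of the nodes).
    Returns the sum over elimination cliques of the product of state counts."""
    adj = {v: set(nbrs) for v, nbrs in adjacency.items()}
    total = 0
    for v in order:
        clique = adj[v] | {v}
        weight = 1
        for u in clique:
            weight *= cardinality[u]
        total += weight
        # add fill-in edges among v's remaining neighbours, then remove v
        for a, b in combinations(adj[v], 2):
            adj[a].add(b)
            adj[b].add(a)
        for u in adj[v]:
            adj[u].discard(v)
        del adj[v]
    return total

# toy 4-node network: A-B, B-C, C-D, D-A, all variables binary
adjacency = {"A": {"B", "D"}, "B": {"A", "C"}, "C": {"B", "D"}, "D": {"C", "A"}}
cardinality = {v: 2 for v in adjacency}
print(triangulation_weight(adjacency, cardinality, ["A", "B", "C", "D"]))  # 22
```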
Abstract:
We present an unsupervised learning algorithm that acquires a natural-language lexicon from raw speech. The algorithm is based on the optimal encoding of symbol sequences in an MDL framework, and uses a hierarchical representation of language that overcomes many of the problems that have stymied previous grammar-induction procedures. The forward mapping from symbol sequences to the speech stream is modeled using features based on articulatory gestures. We present results on the acquisition of lexicons and language models from raw speech, text, and phonetic transcripts, and demonstrate that our algorithm compares very favorably to other reported results with respect to segmentation performance and statistical efficiency.
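As a hedged illustration of the MDL idea underlying the algorithm, the toy two-part code below scores a word segmentation of a character string with a simplistic lexicon cost; the corpus, the cost model, and the alphabet size are invented, and the paper's actual model is hierarchical and maps to the speech stream via articulatory features rather than characters.

```python
# Hedged sketch: two-part MDL cost of segmenting a character stream with a lexicon.
# Corpus and costs are toy examples, not the paper's model.
import math
from collections import Counter

def description_length(segmentation, alphabet_size=27):
    """segmentation: list of word tokens covering the corpus.
    Cost = bits to spell out each distinct lexicon entry (plus one boundary symbol)
         + bits to encode the corpus as word tokens under their empirical frequencies."""
    counts = Counter(segmentation)
    total = sum(counts.values())
    lexicon_bits = sum((len(w) + 1) * math.log2(alphabet_size) for w in counts)
    data_bits = -sum(c * math.log2(c / total) for c in counts.values())
    return lexicon_bits + data_bits

corpus = "thedogsawthedog"
seg_good = ["the", "dog", "saw", "the", "dog"]
seg_bad = list(corpus)  # every character treated as its own "word"
print(description_length(seg_good), description_length(seg_bad))  # good < bad
```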
Abstract:
We develop an extension to the tactical planning model (TPM) for a job shop by the third author. The TPM is a discrete-time model in which all transitions occur at the start of each time period. The time period must be defined appropriately in order for the model to be meaningful. Each period must be short enough so that a job is unlikely to travel through more than one station in one period. At the same time, the time period needs to be long enough to justify the assumptions of continuous workflow and Markovian job movements. We build an extension to the TPM that overcomes this restriction of period sizing by permitting production control over shorter time intervals. We achieve this by deriving a continuous-time linear control rule for a single station. We then determine the first two moments of the production level and queue length for the workstation.
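As a hedged illustration of a continuous-time linear control rule for a single station (discretised on a fine time grid, with made-up arrival statistics and smoothing parameter, and with the moments estimated empirically rather than derived as in the model):

```python
# Hedged sketch: single-station linear control rule, production rate P(t) = Q(t)/tau,
# simulated on a fine grid; arrival process and parameters are illustrative assumptions.
import random

def simulate(tau=4.0, arrival_rate=1.0, dt=0.01, steps=200_000, seed=0):
    rng = random.Random(seed)
    q = 0.0
    q_samples, p_samples = [], []
    for _ in range(steps):
        # Gaussian approximation of stochastic arrivals over a small interval dt
        arrivals = max(rng.gauss(arrival_rate * dt, (arrival_rate * dt) ** 0.5), 0.0)
        rate = q / tau                  # linear rule: work off 1/tau of the queue per unit time
        q = max(q + arrivals - rate * dt, 0.0)
        q_samples.append(q)
        p_samples.append(rate)
    def moments(xs):
        m = sum(xs) / len(xs)
        return m, sum((x - m) ** 2 for x in xs) / len(xs)
    return moments(p_samples), moments(q_samples)

(p_mean, p_var), (q_mean, q_var) = simulate()
print(f"production rate: mean={p_mean:.2f} var={p_var:.3f}  queue: mean={q_mean:.2f} var={q_var:.2f}")
```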
Abstract:
We present the results of GaInNAs/GaAs quantum dot structures with GaAsN barrier layers grown by solid source molecular beam epitaxy. Extension of the emission wavelength of GaInNAs quantum dots by ~170 nm was observed in samples with GaAsN barriers in place of GaAs. However, optimization of the GaAsN barrier layer thickness is necessary to avoid degradation in the luminescence intensity and structural properties of the GaInNAs dots. Lasers with GaInNAs quantum dots as the active layer were fabricated, and room-temperature continuous-wave lasing was observed for the first time. Lasing occurs via the ground state at ~1.2 μm, with a threshold current density of 2.1 kA/cm² and a maximum output power of 16 mW. These results are significantly better than previously reported values for this quantum-dot system.
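For scale only, a threshold current density translates into a threshold current via the pumped area; the stripe geometry below is hypothetical and is not given in the abstract.

```latex
% Hypothetical ridge geometry (not stated in the abstract): 3 \mu m \times 1\,\mathrm{mm}.
I_{th} = J_{th} \cdot A
       = 2.1\ \mathrm{kA/cm^{2}} \times (3\times10^{-4}\,\mathrm{cm}) \times (0.1\,\mathrm{cm})
       \approx 63\ \mathrm{mA}.
```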
Abstract:
There are two principal chemical concepts that are important for studying the natural environment. The first is thermodynamics, which describes whether a system is at equilibrium or can spontaneously change by chemical reactions. The second is how fast chemical reactions take place once they start (kinetics, or the rate of chemical change). In this work we examine a natural system in which both thermodynamic and kinetic factors are important in determining the abundance of NH4+, NO2- and NO3- in surface waters. Samples were collected in the Arno Basin (Tuscany, Italy), a system in which natural and anthropogenic effects both contribute to strongly modifying the chemical composition of the water. Thermodynamic modelling based on the reduction-oxidation reactions involving the passage NH4+ -> NO2- -> NO3- under equilibrium conditions has made it possible to determine the Eh redox potential values that characterise the state of each sample and, consequently, of the fluid environment from which it was drawn. Just as pH expresses the concentration of H+ in solution, the redox potential expresses the tendency of an environment to receive or supply electrons. In this context, oxic environments, such as those of river systems, are said to have a high redox potential because O2 is available as an electron acceptor. The principles of thermodynamics and chemical kinetics yield a model that often does not completely describe the reality of natural systems. Chemical reactions may indeed fail to achieve equilibrium because the products escape from the site of the reaction or because the reactions involved in the transformation are very slow, so that non-equilibrium conditions exist for long periods. Moreover, reaction rates can be sensitive to poorly understood catalytic or surface effects, while variables such as concentration (a large number of chemical species can coexist and interact concurrently), temperature and pressure can have large gradients in natural systems. Taking this into account, data from 91 water samples have been modelled using statistical methodologies for compositional data. The application of log-contrast analysis has yielded statistical parameters that can be correlated with the calculated Eh values. In this way, natural conditions in which chemical equilibrium is hypothesised, as well as underlying fast reactions, are compared with those described by a stochastic approach.
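As a hedged illustration of the compositional-data side of the analysis, the sketch below applies a centred log-ratio transform and one hand-picked log-contrast to made-up relative abundances of the three nitrogen species; it is not the actual 91-sample dataset, and the particular contrast is not necessarily the one used in the study.

```python
# Hedged sketch: centred log-ratio (clr) transform and a log-contrast for a
# 3-part composition (NH4+, NO2-, NO3-). Concentrations are invented examples.
import numpy as np

def clr(x):
    """Centred log-ratio transform of compositions given as rows of positive parts."""
    x = np.asarray(x, dtype=float)
    g = np.exp(np.mean(np.log(x), axis=-1, keepdims=True))  # geometric mean per row
    return np.log(x / g)

# toy samples: relative abundances of (NH4+, NO2-, NO3-)
samples = np.array([
    [0.05, 0.01, 0.94],
    [0.20, 0.05, 0.75],
    [0.60, 0.10, 0.30],
])

z = clr(samples)
# one possible log-contrast (coefficients -1, 0, +1 sum to zero):
# oxidised vs. reduced nitrogen, log(NO3-) - log(NH4+)
contrast = np.log(samples[:, 2]) - np.log(samples[:, 0])
print(z.round(3))
print(contrast.round(3))
```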
Abstract:
This paper examines a dataset which is modeled well by the Poisson-Log Normal process, and by this process mixed with Log Normal data, both of which are turned into compositions. This generates compositional data that has zeros without any need for conditional models or for assuming that there is missing or censored data that needs adjustment. It also enables us to model dependence on covariates and within the composition.
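As a hedged sketch of the generative idea (with invented parameters, simply illustrating how Poisson-Log Normal counts closed to proportions produce genuine zeros):

```python
# Hedged sketch: simulate Poisson-Log Normal counts and close them to compositions.
# Zeros appear naturally whenever a Poisson draw is zero; no censoring model is needed.
# Parameters below are invented for illustration.
import numpy as np

rng = np.random.default_rng(42)
n_samples, n_parts = 5, 4
mu = np.array([2.0, 1.0, 0.0, -1.0])          # log-scale means of the parts
sigma = 0.8

log_lambda = rng.normal(mu, sigma, size=(n_samples, n_parts))
counts = rng.poisson(np.exp(log_lambda))        # Poisson-Log Normal counts
compositions = counts / counts.sum(axis=1, keepdims=True)

print(counts)
print(compositions.round(3))                    # rows sum to 1; zero counts stay zero
```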
Abstract:
Exercises, exams and solutions for a third-year finance course.
Abstract:
This article intends to show the relationships between quality practices and the process of organizational learning. When we look at the literature on continuous improvement programs, we see that theoreticians consider the process of organizational learning to be a superior stage in the quality culture adopted by companies. To investigate this possibility, we put together a series of indicators taken from classic authors who have written about organizational learning. Adopting a multiple methodology, we applied these indicators to two plants belonging to the Nestlé food company which have introduced continuous improvement programs over the last two years.