925 results for General equilibrium


Relevance:

20.00%

Publisher:

Abstract:

The three-phase equilibrium between alloy, spinel solid solution and α-Al2O3 in the Fe-Co-Al-O system at 1873 K was fully characterized as a function of alloy composition using both experimental and computational methods. The equilibrium oxygen content of the liquid alloy was measured by suction sampling and inert gas fusion analysis. The oxygen potential corresponding to the three-phase equilibrium was determined by emf measurements on a solid-state galvanic cell incorporating (Y2O3)ThO2 as the solid electrolyte and Cr + Cr2O3 as the reference electrode. The equilibrium composition of the spinel phase formed at the interface between the alloy and the alumina crucible was measured by electron probe microanalysis (EPMA). The experimental results were compared with values computed using a thermodynamic model. The model used values for the standard Gibbs energies of formation of the pure end-member spinels and the Gibbs energies of solution of gaseous oxygen in liquid iron and cobalt available in the literature. The activity-composition relationship in the spinel solid solution was computed using a cation distribution model. The variation of the activity coefficient of oxygen with alloy composition in the Fe-Co-O system was estimated using both the quasichemical model of Jacob and Alcock and Wagner's model, along with the correlations of Chiang and Chang and of Kuo and Chang. The computed results for the spinel composition and the oxygen potential are in excellent agreement with the experimental data.
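The solid-state cell measurement mentioned above rests on a standard electrochemical relation: for an oxygen concentration cell, the difference in oxygen chemical potential across the electrolyte is Δμ(O2) = -4FE (four electrons transferred per O2 molecule). A minimal sketch follows; the emf and reference-potential values are made up for illustration, and the sign convention depends on the cell polarity, which the paper specifies:

```python
# Illustrative conversion of a galvanic-cell emf to an oxygen potential,
# as with solid-electrolyte cells of the kind described in the abstract.
# Delta(mu_O2) = -4*F*E (4 electrons per O2) is standard electrochemistry;
# the numerical values below are invented for illustration.

F = 96485.0   # Faraday constant, C/mol

def oxygen_potential(emf_volts, mu_o2_reference):
    """Oxygen chemical potential (J/mol O2) at the working electrode,
    given the cell emf and the reference electrode's oxygen potential."""
    return mu_o2_reference - 4.0 * F * emf_volts

# Hypothetical numbers: reference potential fixed by a metal/metal-oxide
# buffer, emf of 0.25 V.
mu_ref = -450e3                           # J/mol, assumed reference value
mu_alloy = oxygen_potential(0.25, mu_ref)
print(f"mu_O2(alloy) = {mu_alloy / 1e3:.1f} kJ/mol")
```

The reference potential itself would come from the Gibbs energy of the Cr + Cr2O3 equilibrium at the cell temperature.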

Relevance:

20.00%

Publisher:

Abstract:

In this article, a general definition of the process average temperature is developed, and the impact of the various dissipative mechanisms on the 1/COP of the chiller is evaluated. The present component-by-component black-box analysis removes the assumptions regarding the generator outlet temperature(s) and the component effective thermal conductances. Mass transfer resistance is also incorporated into the absorber analysis to arrive at a more realistic upper limit on the cooling capacity. Finally, the theoretical foundation for the absorption chiller T-s diagram is derived. This diagrammatic approach requires only the inlet and outlet conditions of the chiller components and can be employed as a practical tool for system analysis and comparison. (C) 2000 Elsevier Science Ltd and IIR. All rights reserved.
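As a backdrop to the dissipative 1/COP analysis above, the reversible (Carnot-type) limit of an absorption chiller can be computed from three temperatures alone. This is the textbook bound, not the paper's finite-rate model, and the temperatures below are illustrative:

```python
# Reversible upper bound on an absorption chiller's COP: a Carnot engine
# between generator T_g and ambient T_a drives a Carnot refrigerator
# between evaporator T_e and T_a. Real machines, with the dissipative
# mechanisms analysed in the article, achieve well below this.

def cop_reversible(t_gen, t_amb, t_evap):
    """All temperatures in kelvin."""
    return (1.0 - t_amb / t_gen) * t_evap / (t_amb - t_evap)

cop = cop_reversible(t_gen=363.0, t_amb=303.0, t_evap=278.0)
print(f"reversible COP = {cop:.2f}")
```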

Relevance:

20.00%

Publisher:

Abstract:

A general method for the generation of base pairs in a curved DNA structure, for any prescribed values of the helical parameters (unit rise h, unit twist θ, wedge roll θR, wedge tilt θT, propeller twist θP and displacement D), is described. Its application to the generation of uniform as well as curved structures is illustrated with some representative examples. An interesting relationship is observed between the helical twist θ, the base-pair parameters θx and θy, and the wedge parameters θR and θT, which has important consequences for the description and estimation of DNA curvature.
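The uniform-helix special case of such a generator can be sketched in a few lines: each base pair is placed by a rigid-body step of unit rise h and unit twist θ about the helix axis. This sketch omits roll, tilt, propeller twist and displacement, which the full method composes as additional rotations; the B-DNA-like values are illustrative:

```python
import math

# Minimal sketch of base-pair placement for a *uniform* (straight) helix:
# each step applies a twist (rotation about z) and a rise (translation
# along z). Curvature would enter through the roll/tilt rotations omitted
# here.

def helix_frames(n_steps, rise=3.4, twist_deg=36.0):
    """Return (x, y, z) positions of a reference point on successive
    base pairs (B-DNA-like values: rise 3.4 A, twist 36 deg)."""
    t = math.radians(twist_deg)
    frames = []
    px, py = 1.0, 0.0          # point off the axis, to make twist visible
    for i in range(n_steps):
        frames.append((px, py, i * rise))
        px, py = (px * math.cos(t) - py * math.sin(t),
                  px * math.sin(t) + py * math.cos(t))
    return frames

frames = helix_frames(11)
# after 10 steps of 36 deg the point has completed one full helical turn
print(frames[10])   # approximately (1.0, 0.0, 34.0)
```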

Relevance:

20.00%

Publisher:

Abstract:

This master's thesis studies how trade liberalization affects firm-level productivity and industrial evolution. To do so, I build a dynamic model that treats firm-level productivity as endogenous in order to investigate the influence of trade on firms' productivity and on the market structure. In the framework, heterogeneous firms in the same industry operate differently in equilibrium. Specifically, firms are ex ante identical, but heterogeneity arises as an equilibrium outcome. Under monopolistic competition, this type of model yields an industry that is represented not by a steady-state outcome but by an evolution that depends on the decisions made by individual firms. I prove that trade liberalization has a generally positive impact on technology adoption rates and hence increases firm-level productivity. This endogenous technology adoption model also captures the stylized fact that exporting firms are larger and more productive than their non-exporting counterparts in the same sector. I assume that the number of firms is endogenous since, according to the empirical literature, industrial evolution shows considerably different patterns across countries: some industries experience large-scale exit of firms in periods of contracting market share, while others display a relatively stable or gradually increasing number of firms. The term "shakeout" describes a dramatic decrease in the number of firms. To explain the causes of shakeouts, I construct a model in which forward-looking firms decide to enter and exit the market on the basis of their state of technology. In equilibrium, firms choose different dates to adopt an innovation, which generates a gradual diffusion process. It is exactly this gradual diffusion process that generates the rapid, large-scale exit phenomenon.
Specifically, the model demonstrates a positive feedback between exit and adoption: the reduction in the number of firms increases the incentives for the remaining firms to adopt the innovation. Therefore, in a setting of complete information, the model not only generates a shakeout but also captures the stability of an industry. However, a purely national view of industrial evolution neglects the importance of international trade in determining the shape of the market structure. In particular, I show that higher trade barriers lead to more fragile markets, encouraging over-entry in the initial stage of the industry life cycle and raising the probability of a shakeout. Therefore, more liberalized trade generates a more stable market structure from both national and international viewpoints. The main references are Ederington and McCalman (2008, 2009).

Relevance:

20.00%

Publisher:

Abstract:

The three-phase equilibrium between alloy, spinel solid solution and α-alumina in the Fe-Ni-Al-O system has been fully characterized at 1823 K as a function of alloy composition using both experimental and computational methods. The oxygen potential was measured using a solid-state cell incorporating yttria-doped thoria as the electrolyte and Cr + Cr2O3 as the reference electrode. The oxygen concentration of the alloy was determined by an inert gas fusion technique. The composition of the spinel solid solution, formed at the interface between the alloy and an alumina crucible, was determined by EPMA. The variation of the oxygen concentration, the oxygen potential and the composition of the spinel solid solution with the mole fraction of nickel in the alloy has been computed using activities in the binary Fe-Ni system, free energies of formation of the end-member spinels FeO·(1+x)Al2O3 and NiO·(1+x)Al2O3, and free energies of solution of oxygen in liquid iron and nickel, available in the literature. Activities in the spinel solid solution were computed using a cation distribution model. The variation of the activity coefficient of oxygen with alloy composition in the Fe-Ni-O system was calculated using both the quasichemical model of Jacob and Alcock and Wagner's model, with the correlation of Chiang and Chang. The computed results for the oxygen potential and the composition of the spinel solid solution are in good agreement with the measurements. The measured oxygen concentration lies between the values computed using the models of Wagner and of Jacob and Alcock. The results of the study indicate that the deoxidation hyper-surface in multicomponent systems can be computed with useful accuracy using data for end-member systems and thermodynamic models.

Relevance:

20.00%

Publisher:

Abstract:

There are a number of large networks that occur in many problems dealing with the flow of power, communication signals, water, gas, transportable goods, etc. Both the design and planning of these networks involve optimization problems. The first part of this paper introduces the common characteristics of a nonlinear network (the network may be linear, the objective function may be nonlinear, or both may be nonlinear). The second part develops a mathematical model that brings together some important constraints based on an abstraction of a general network. The third part deals with solution procedures: it converts the network to a matrix-based system of equations, gives the characteristics of the matrix and suggests two solution procedures, one of them new. The fourth part handles spatially distributed networks and develops a number of decomposition techniques so that the problem can be solved with the help of a distributed computer system. Algorithms for parallel processors and spatially distributed systems are described.

A number of common features pertain to networks. A network consists of a set of nodes and arcs. In addition, at every node there may be an input (of power, water, messages, goods, etc.), an output, or neither. Normally, the network equations describe the flows amongst nodes through the arcs. These network equations couple variables associated with nodes. Invariably, variables pertaining to arcs are constants; the required result is the flows through the arcs.

To solve the normal base problem, we are given input flows at nodes, output flows at nodes and certain physical constraints on other variables at nodes, and we must find the flows through the network (variables at nodes will be referred to as across variables). The optimization problem involves selecting inputs at nodes so as to optimize an objective function; the objective may be a cost function based on the inputs to be minimized, a loss function or an efficiency function. The above mathematical model can be solved using the Lagrange multiplier technique, since the equalities are strong compared to the inequalities. The Lagrange multiplier technique divides the solution procedure into two stages per iteration: stage one calculates the problem variables x and stage two the multipliers λ. It is shown that the Jacobian matrix used in stage one (for solving a nonlinear system of necessary conditions) also occurs in stage two.

A second solution procedure, called the total residue approach, has been embedded into the first one. It changes the equality constraints so as to obtain faster convergence of the iterations. Both solution procedures are found to converge in 3 to 7 iterations for a sample network.

The availability of distributed computer systems, both LAN and WAN, suggests the need for algorithms to solve the optimization problems. Two types of algorithms have been proposed: one based on the physics of the network and the other on the properties of the Jacobian matrix. Three algorithms have been devised, one of them for the local-area case. They are called the regional distributed algorithm, the hierarchical regional distributed algorithm (both using the physical properties of the network) and the locally distributed algorithm (a multiprocessor-based approach with a local area network configuration). The approach was to define an algorithm that is faster and uses minimal communication. These algorithms are found to converge at the same rate as in the non-distributed (unitary) case.
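The Lagrange multiplier technique described above can be illustrated on a generic toy problem (not the paper's network model): minimising x1² + x2² subject to x1 + x2 = 1. The necessary conditions (gradient of the Lagrangian equal to zero, plus the constraint) form one linear "KKT" system in which the problem variables x and the multiplier λ share the same matrix:

```python
# Toy Lagrange-multiplier example: min x1^2 + x2^2 s.t. x1 + x2 = 1.
# Necessary conditions: 2*x + A^T*lam = 0, A*x = b, solved here as one
# linear system by plain Gaussian elimination (stdlib only).

def gauss_solve(a, b):
    """Solve a @ x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]   # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

# KKT system: [2 0 1; 0 2 1; 1 1 0] @ (x1, x2, lam) = (0, 0, 1)
kkt = [[2.0, 0.0, 1.0],
       [0.0, 2.0, 1.0],
       [1.0, 1.0, 0.0]]
x1, x2, lam = gauss_solve(kkt, [0.0, 0.0, 1.0])
print(x1, x2, lam)   # -> 0.5 0.5 -1.0
```

For a nonlinear network the same structure reappears inside each Newton iteration, which is why the stage-one Jacobian can be reused in stage two.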

Relevance:

20.00%

Publisher:

Abstract:

Forecasting the daily flow in the general planning of municipal water supply and sewerage works.

Relevance:

20.00%

Publisher:

Abstract:

The aim of this study was to evaluate and test methods that could improve local estimates of a general model fitted to a large area. In the first three studies, the intention was to divide the study area into sub-areas that were as homogeneous as possible according to the residuals of the general model; in the fourth study, the localization was based on the local neighbourhood. According to spatial autocorrelation (SA), points closer together in space are more likely to be similar than those that are farther apart. Local indicators of SA (LISAs) test the similarity of data clusters. A LISA was calculated for every observation in the dataset, and together with the spatial position and the residual of the global model, the data were segmented using two different methods: classification and regression trees (CART) and the multiresolution segmentation algorithm (MS) of the eCognition software. The general model was then re-fitted (localized) to the resulting sub-areas. In kriging, the SA is modelled with a variogram, and the spatial correlation is a function of the distance (and direction) between the observation and the point of calculation. A general trend is corrected with the residual information of the neighbourhood, whose size is controlled by the number of nearest neighbours. Nearness is measured as Euclidean distance. With all methods, the root mean square errors (RMSEs) were lower, but with the methods that segmented the study area, the spread of the individual localized RMSEs was wide. Therefore, an element capable of controlling the division or localization should be included in the segmentation-localization process. Kriging, on the other hand, provided stable estimates when the number of neighbours was sufficient (over 30), thus offering the best potential for further studies. CART could even be combined with kriging or with non-parametric methods such as most similar neighbours (MSN).
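The kriging-style neighbourhood correction described above can be sketched simply: the global model's prediction at a new point is adjusted by a distance-weighted average of the residuals of its k nearest observations. Inverse-distance weights stand in here for proper variogram-derived kriging weights, and the data are invented:

```python
import math

# Sketch of local correction of a global model: prediction at a new point
# = global trend + weighted residuals of the k nearest observations.
# Inverse-distance weighting replaces variogram-based kriging weights.

def localized_predict(pt, obs, global_model, k=3):
    """obs: list of ((x, y), residual) pairs; residual = observed - global."""
    nearest = sorted((math.dist(pt, p), r) for p, r in obs)[:k]
    if nearest[0][0] == 0.0:               # exact hit: use its own residual
        return global_model(pt) + nearest[0][1]
    w = [1.0 / d for d, _ in nearest]
    correction = sum(wi * r for wi, (_, r) in zip(w, nearest)) / sum(w)
    return global_model(pt) + correction

global_model = lambda p: 10.0              # trivially flat global trend
obs = [((0, 0), 2.0), ((1, 0), 1.0), ((0, 1), 1.0), ((5, 5), -4.0)]
print(localized_predict((0.1, 0.1), obs, global_model))  # pulled above 10
```

Increasing k smooths the correction, which is the stabilising effect the study observed once more than about 30 neighbours were used.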

Relevance:

20.00%

Publisher:

Abstract:

Stability results are given for a class of feedback systems arising from the regulation of time-varying discrete-time systems using optimal infinite-horizon and moving-horizon feedback laws. The class is characterized by joint constraints on the state and the control, a general nonlinear cost function and nonlinear equations of motion possessing two special properties. It is shown that weak conditions on the cost function and the constraints are sufficient to guarantee uniform asymptotic stability of both the optimal infinite-horizon and moving-horizon feedback systems. The infinite-horizon cost associated with the moving-horizon feedback law approaches the optimal infinite-horizon cost as the moving horizon is extended.
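A minimal moving-horizon regulator can be sketched for a scalar linear-quadratic problem. This unconstrained toy is far simpler than the constrained nonlinear class treated above, but it shows the receding-horizon mechanism: solve a finite-horizon problem, apply only the first control, then repeat:

```python
# Moving-horizon regulator for x+ = a*x + b*u with stage cost q*x^2 + r*u^2:
# at each step a finite-horizon problem is solved by backward Riccati
# recursion and only the first control is applied.

def first_gain(a, b, q, r, horizon):
    """Backward Riccati recursion; return the first-step feedback gain."""
    p = q                                  # terminal cost weight
    for _ in range(horizon):
        k = (a * b * p) / (r + b * b * p)  # gain at this stage
        p = q + a * a * p - a * b * p * k  # cost-to-go update
    return k

a, b, q, r = 1.2, 1.0, 1.0, 0.1            # open loop unstable (|a| > 1)
x = 5.0
for _ in range(20):
    u = -first_gain(a, b, q, r, horizon=5) * x
    x = a * x + b * u
print(f"state after 20 steps: {x:.2e}")    # driven essentially to zero
```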

Relevance:

20.00%

Publisher:

Abstract:

We propose a physical mechanism to explain the origin of the intense burst of massive-star formation seen in colliding/merging, gas-rich, field spiral galaxies. We explicitly take account of the different parameters for the two main mass components, H2 and H I, of the interstellar medium within a galaxy and follow their consequent different evolution during a collision between two galaxies. We also note that in a typical spiral galaxy, like our Galaxy, the Giant Molecular Clouds (GMCs) are in near-virial equilibrium and form the current sites of massive-star formation, but have a low star formation rate. We show that this star formation rate is increased following a collision between galaxies. During a typical collision between two field spiral galaxies, the H I clouds from the two galaxies undergo collisions at a relative velocity of approximately 300 km s^-1. However, the GMCs, with their smaller volume filling factor, do not collide. The collisions among the H I clouds from the two galaxies lead to the formation of a hot, ionized, high-pressure remnant gas. The over-pressure due to this hot gas causes a radiative shock compression of the outer layers of a preexisting GMC in the overlapping wedge region. This makes these layers gravitationally unstable, thus triggering a burst of massive-star formation in the initially barely stable GMCs. The resulting value of the typical IR luminosity from the young, massive stars of a pair of colliding galaxies is estimated to be approximately 2 × 10^11 L_⊙, in agreement with the observed values. In our model, the massive-star formation occurs in situ in the overlapping regions of a pair of colliding galaxies. We can thus explain the origin of enhanced star formation over an extended central area approximately several kiloparsecs in size, as seen in typical colliding galaxies, and also the origin of starbursts in extranuclear regions of disk overlap as seen in Arp 299 (NGC 3690/IC 694) and in Arp 244 (NGC 4038/39).
Whether the IR emission from the central region or that from the surrounding extranuclear galactic disk dominates depends on the geometry and epoch of the collision and on the initial radial gas distribution in the two galaxies. In general, the central starburst would be stronger than that in the disks, owing to the higher preexisting gas densities in the central region. The burst of star formation is expected to last over a galactic gas disk crossing time of approximately 4 × 10^7 yr. We can also explain the simultaneous existence of nearly normal CO galaxy luminosities and shocked H2 gas, as seen in colliding field galaxies. This is a minimal model, in that the only necessary condition for it to work is that there should be sufficient overlap between the spatial gas distributions of the colliding galaxy pair.

Relevance:

20.00%

Publisher:

Abstract:

All protein-encoding genes in eukaryotes are transcribed into messenger RNA (mRNA) by RNA Polymerase II (RNAP II), whose activity therefore needs to be tightly controlled. An important and only partially understood level of regulation is the multiple phosphorylation of the RNAP II large-subunit C-terminal domain (CTD). Sequential phosphorylations regulate transcription initiation and elongation, and recruit factors involved in co-transcriptional processing of mRNA. Based largely on studies in yeast models and in vitro, the kinase activity responsible for the phosphorylation of the serine-5 (Ser5) residues of the RNAP II CTD has been attributed to the Mat1/Cdk7/CycH trimer as part of Transcription Factor IIH (TFIIH). However, owing to the lack of good mammalian genetic models, studies of the roles of both RNAP II Ser5 phosphorylation and the TFIIH kinase in transcription have yielded ambiguous results, and the in vivo kinase of Ser5 has remained elusive. The primary objective of this study was to elucidate the role of mammalian TFIIH, and specifically its Mat1 subunit, in CTD phosphorylation and general RNAP II-mediated transcription. The approach utilized the Cre-LoxP system to conditionally delete murine Mat1 in cardiomyocytes and hepatocytes in vivo and in cell culture models. The results identify the TFIIH kinase as the major mammalian Ser5 kinase and demonstrate its requirement for general transcription, as shown by nascent mRNA labeling. A role for Mat1 in regulating general mRNA turnover was also identified, providing a possible rationale for earlier negative findings. A secondary objective was to identify potential gene- and tissue-specific roles of Mat1 and the TFIIH kinase through tissue-specific Mat1 deletion. Mat1 was found to be required for the transcriptional function of PGC-1 in cardiomyocytes.
Transcriptional activation of lipogenic SREBP1 target genes following Mat1 deletion in hepatocytes revealed a repressive role for Mat1, apparently mediated via the co-repressor DMAP1 and the DNA methyltransferase Dnmt1. Finally, Mat1 and Cdk7 were also identified as negative regulators of adipocyte differentiation through the inhibitory phosphorylation of Peroxisome proliferator-activated receptor (PPAR) γ. Together, these results demonstrate gene- and tissue-specific roles for the Mat1 subunit of TFIIH and open up new therapeutic possibilities in the treatment of diseases such as type II diabetes, hepatosteatosis and obesity.

Relevance:

20.00%

Publisher:

Abstract:

The concept of a short-range strong spin-two (f) field, mediated by massive f-mesons and interacting directly with hadrons, was introduced along with the infinite-range (g) field in the early seventies. In the present review of this growing area (often referred to as strong gravity) we give a general relativistic treatment in terms of Einstein-type (non-abelian gauge) field equations with a coupling constant G_f ≈ 10^38 G_N (G_N being the Newtonian constant) and a cosmological term λ_f f_μν (f_μν is the strong-gravity metric and λ_f ∼ 10^28 cm^-2 is related to the f-meson mass). The solutions of the field equations linearized over a de Sitter (uniformly curved) background are capable of having connections with the internal symmetries of hadrons and of yielding mass formulae of SU(3) or SU(6) type. The hadrons emerge as de Sitter "microuniverses", intensely curved within (radius of curvature ∼10^-14 cm). The study of spinor fields in the context of strong gravity has led to Heisenberg's nonlinear spinor equation with a fundamental length ∼2 × 10^-14 cm. Furthermore, one finds a repulsive spin-spin interaction when two identical spin-1/2 particles are in a parallel configuration, and a connection between the weak interaction and strong gravity. Various other consequences of strong gravity embrace black hole (solitonic) solutions representing hadronic bags with possible quark confinement, Regge-like relations between spins and masses, connections with monopoles and dyons, quantum geons and friedmons, hadronic temperature, the prevention of gravitational singularities, a physical basis for Dirac's two-metric and large-numbers hypotheses, and a projected unification with the other basic interactions through extended supergravity.

Relevance:

20.00%

Publisher:

Abstract:

Side chain bromination of aromatic amidomethylated compounds yields aldehydes.

Relevance:

20.00%

Publisher:

Abstract:

In rapid parallel magnetic resonance imaging, the problem of image reconstruction is challenging. Here, a novel image reconstruction technique for data acquired along any general trajectory in a neural network framework, called ``Composite Reconstruction And Unaliasing using Neural Networks'' (CRAUNN), is proposed. CRAUNN is based on the observation that the nature of aliasing remains unchanged whether the undersampled acquisition contains only low frequencies or includes high frequencies too. The transformation needed to reconstruct the alias-free image from the aliased coil images is learnt from acquisitions consisting of densely sampled low frequencies. Neural networks are used as the machine learning tool to learn this transformation, in order to obtain the desired alias-free image for actual acquisitions containing sparsely sampled low as well as high frequencies. CRAUNN operates in the image domain, does not require explicit coil sensitivity estimation, and is independent of the sampling trajectory used, so it can be applied to arbitrary trajectories. As a pilot trial, the technique is first applied to Cartesian trajectory-sampled data. Experiments performed using radial and spiral trajectories on real and synthetic data illustrate the performance of the method. The reconstruction errors depend on the acceleration factor as well as the sampling trajectory; higher acceleration factors can be obtained when radial trajectories are used. Comparisons against existing techniques are presented, and CRAUNN is found to perform on par with the state of the art. Acceleration factors of up to 4, 6 and 4 are achieved in the Cartesian, radial and spiral cases, respectively. (C) 2010 Elsevier Inc. All rights reserved.
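The observation underlying CRAUNN, that the folding pattern of aliasing is fixed by the sampling pattern rather than by the signal content, can be checked in one dimension with a toy DFT experiment: keeping every other k-space sample folds the signal onto itself as x[n] + x[n + N/2] (up to normalization), whatever the signal is:

```python
import cmath

# 1-D toy demonstration of undersampling-induced aliasing: discarding every
# other DFT (k-space) sample folds the signal onto itself, independently of
# the signal's frequency content. The signal stands in for an image row.

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(x):
    n = len(x)
    return [sum(x[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

signal = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]
k_space = dft(signal)
undersampled = k_space[0::2]              # acquire every other k-space line
aliased = idft(undersampled)              # half-FOV, aliased reconstruction

folded = [signal[n] + signal[n + 4] for n in range(4)]
print([round(a.real, 6) for a in aliased])   # matches `folded`
print(folded)
```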