968 results for Order of Convergence


Relevance: 100.00%

Abstract:

The kinetics of the water-gas shift reaction were studied on a 0.2% Pt/CeO2 catalyst between 177 and 300 °C over a range of CO and steam pressures. A rate decrease with increasing partial pressure of CO was experimentally observed over this sample, confirming that a negative order in CO can occur under certain conditions at low temperatures. The apparent reaction order of CO measured at 197 °C was about -0.27. This value is significantly larger in magnitude than the value (-0.03) reported by Ribeiro and co-workers [A.A. Phatak, N. Koryabkina, S. Rai, J.L. Ratts, W. Ruettinger, R.J. Farrauto, G.E. Blau, W.N. Delgass, F.H. Ribeiro, Catal. Today 123 (2007) 224] at a similar temperature. A kinetic peculiarity was also observed, namely a maximum of the reaction rate as a function of CO concentration, or possibly a kinetic break, as is sometimes seen in the oxidation of simple molecules. These observations support the idea that competitive adsorption of CO and H2O plays an essential role in the reaction mechanism.
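For readers unfamiliar with the terminology, the apparent reaction order is the slope of ln(rate) against ln(p_CO) at fixed steam pressure and temperature. A minimal sketch of how such an order would be extracted from rate data; the numbers below are purely illustrative, not measurements from the paper:

```python
import numpy as np

# Hypothetical rate measurements at fixed steam pressure and temperature.
# p_co in kPa, rate in arbitrary units; these values are illustrative only.
p_co = np.array([2.0, 5.0, 10.0, 20.0, 40.0])
rate = np.array([1.10, 0.86, 0.71, 0.59, 0.49])

# Apparent order n in r = k * p_CO**n  =>  ln r = ln k + n * ln p_CO,
# so n is the slope of a straight-line fit in log-log coordinates.
n, ln_k = np.polyfit(np.log(p_co), np.log(rate), 1)
print(f"apparent CO order ~ {n:.2f}")  # roughly -0.27 for this synthetic data
```

A negative slope of this kind corresponds to the inhibiting effect of CO described above.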

Relevance: 100.00%

Abstract:

Corrigendum (Vol. 30, Issue 2, p. 259, first published online 15 March 2009) correcting the order of the authors' names: Bu R., K. Hadri, and B. McCabe.

Relevance: 100.00%

Abstract:

The momentum term has long been used in machine learning algorithms, especially back-propagation, to improve their speed of convergence. In this paper, we derive an expression to prove the O(1/k²) convergence rate of the online gradient method with momentum-type updates when the individual gradients are constrained by a growth condition. We then apply this type of update to video background modelling by incorporating it into the update equations of the Region-based Mixture of Gaussians algorithm. Extensive evaluations are performed on both simulated data and challenging real-world scenarios with dynamic backgrounds to show that these regularised updates help the mixtures converge faster than the conventional approach and consequently improve the algorithm's performance.
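As a concrete illustration of momentum-type gradient updates, here is a generic heavy-ball sketch; it is not the paper's Region-based Mixture of Gaussians update, and the function being minimised is a toy example:

```python
import numpy as np

def momentum_sgd(grad_fn, theta0, lr=0.01, beta=0.9, steps=1000):
    """Generic heavy-ball / momentum update:
        v_{k+1}     = beta * v_k - lr * grad(theta_k)
        theta_{k+1} = theta_k + v_{k+1}
    grad_fn(theta, k) should return a (possibly stochastic) gradient."""
    theta = np.asarray(theta0, dtype=float)
    v = np.zeros_like(theta)
    for k in range(steps):
        v = beta * v - lr * grad_fn(theta, k)
        theta = theta + v
    return theta

# Usage: minimise the toy quadratic f(theta) = ||theta||^2 / 2, grad = theta.
theta_star = momentum_sgd(lambda th, k: th, theta0=[5.0, -3.0])
print(theta_star)  # close to [0, 0]
```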

Relevance: 100.00%

Abstract:

This research examines media integration in China, choosing two Chinese newspaper groups as cases for comparative study. The study analyses the convergence strategies of these Chinese groups by reference to a model of convergence developed from a literature review of media convergence cases in the UK, in particular the Guardian (GNM), the Telegraph Media Group (TMG), the Daily Mail and the Times. The UK cases serve to establish the characteristics, causes and consequences of different forms of convergence and to formulate a model of convergence. The model specifies the levels of newsroom convergence and the sub-units of analysis that will be used to collect empirical data from Chinese news organisations and to compare their strategies, practices and results with the UK experience. The literature review shows that there is a need for more comparative studies of media convergence strategy in general, and particularly in relation to Chinese media; the study therefore addresses a gap in the understanding of media convergence in China. Its contributions are threefold. First, it develops a new and comprehensive model of media convergence and a detailed understanding of why media companies pursue differing strategies in managing convergence across a wide range of units of analysis. Second, it compares the multimedia strategies of media groups under radically different political systems. Since there is no standard research method or systematic theoretical framework for the study of newsroom convergence, the study develops an integrated perspective, using a triangulation of textual analysis, field observation and interviews to explain systematically what the newsroom structure was like in the past, how the copy flow changed, and why. Finally, this case study of media groups can provide an industrial model or framework for other media groups.

Relevance: 100.00%

Abstract:

A certificate of initiation and acceptance into the Canadian Order of Chosen Friends for Thomas Cowan. The certificate reads "This certifies that evidence has been received that Thomas Cowan has been accepted and initiated by the Council name below, and has thus become a member of the Canadian Order of Chosen Friends, and entitled to all the rights and privileges of membership and a benefit of not exceeding one thousand dollars from the relief fund of said order, which shall in case of death be paid to Annie Cowan his wife in the manner and subject to the conditions set forth in the laws governing said relief fund and in the application for membership. This certificate to be in force and binding when accepted in writing by the said member, with the acceptance attested by the Councilor and Recorder and the seal of the Subordinate Council affixed, so long as said member shall comply with the requirements of the Constitution, Laws and Regulations now in force or hereafter adopted for the government of the Order: otherwise, and also in the case of granting of a new certificate, to be null and void. In witness whereof, we have hereunto attached our signatures, and affixed the seal of the Grand Council of the Canadian Order of Chosen Friends. Dated the Twenty Seventh day of July, A.D. 1891." The front and back of the certificate are available for viewing.

Relevance: 100.00%

Abstract:

The structural, electronic and magnetic properties of one-dimensional 3d transition-metal (TM) monoatomic chains having linear, zigzag and ladder geometries are investigated in the framework of first-principles density-functional theory. The stability of long-range magnetic order along the nanowires is determined by computing the corresponding frozen-magnon dispersion relations as a function of the 'spin-wave' vector q. First, we show that the ground-state magnetic orders of V, Mn and Fe linear chains at the equilibrium interatomic distances are non-collinear (NC) spin-density waves (SDWs) with characteristic equilibrium wave vectors q that depend on the composition and interatomic distance. The electronic and magnetic properties of these novel spin-spiral structures are discussed from a local perspective by analyzing the spin-polarized electronic densities of states, the local magnetic moments and the spin-density distributions for representative values of q. Second, we investigate the stability of NC spin arrangements in Fe zigzag chains and ladders. We find that the non-collinear SDWs are remarkably stable in the biatomic chains (square ladder), whereas ferromagnetic order (q = 0) dominates in zigzag chains (triangular ladders). The different magnetic structures are interpreted in terms of the corresponding effective exchange interactions J(ij) between the local magnetic moments μ(i) and μ(j) at atoms i and j. The effective couplings are derived by fitting a classical Heisenberg model to the ab initio magnon dispersion relations. In addition, they are analyzed in the framework of general magnetic phase diagrams having arbitrary first, second, and third nearest-neighbor (NN) interactions J(ij). The effect of external electric fields (EFs) on the stability of NC magnetic order has been quantified for representative monoatomic free-standing and deposited chains. We find that an external EF, which is applied perpendicular to the chains, favors non-collinear order in V chains, whereas it stabilizes the ferromagnetic (FM) order in Fe chains. Moreover, our calculations reveal a change in the magnetic order of V chains deposited on the Cu(110) surface in the presence of external EFs. In this case the NC spiral order, which was unstable in the absence of EF, becomes the most favorable one when perpendicular fields of the order of 0.1 V/Å are applied. As a final application of the theory we study the magnetic interactions within monoatomic TM chains deposited on graphene sheets. One observes that even weak chain-substrate hybridizations can modify the magnetic order. Mn and Fe chains show incommensurate NC spin configurations. Remarkably, V chains show a transition from a spiral magnetic order in the freestanding geometry to FM order when they are deposited on a graphene sheet. Some TM-terminated zigzag graphene nanoribbons, for example V- and Fe-terminated nanoribbons, also show NC spin configurations. Finally, the magnetic anisotropy energies (MAEs) of TM chains on graphene are investigated. It is shown that Co and Fe chains exhibit significant MAEs and orbital magnetic moments with an in-plane easy magnetization axis. The remarkable changes in the magnetic properties of chains on graphene are correlated with charge transfers from the TMs to NN carbon atoms. Goals and limitations of this study and the resulting perspectives of future investigations are discussed.
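The fitting step mentioned above can be stated compactly. A sketch of the assumed classical Heisenberg form and the corresponding spin-spiral energy, written in standard textbook notation (sign and normalization conventions vary and are not taken from the paper):

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Classical Heisenberg energy with effective couplings J_ij between
% unit local-moment directions e_i and e_j:
\begin{equation}
  E = -\sum_{i<j} J_{ij}\,\hat{e}_i \cdot \hat{e}_j .
\end{equation}
% For a flat spin spiral of wave vector q along a chain with lattice
% constant a, neighbouring moments rotate by qa, so the energy per atom is
\begin{equation}
  E(q) = E_0 - \sum_{n>0} J_{0n}\,\cos(q\,n a),
\end{equation}
% and the couplings J_{0n} follow from a least-squares fit of E(q) to the
% ab initio frozen-magnon dispersion.
\end{document}
```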

Relevance: 100.00%

Abstract:

Recent developments in the area of reinforcement learning have yielded a number of new algorithms for the prediction and control of Markovian environments. These algorithms, including the TD(lambda) algorithm of Sutton (1988) and the Q-learning algorithm of Watkins (1989), can be motivated heuristically as approximations to dynamic programming (DP). In this paper we provide a rigorous proof of convergence of these DP-based learning algorithms by relating them to the powerful techniques of stochastic approximation theory via a new convergence theorem. The theorem establishes a general class of convergent algorithms to which both TD(lambda) and Q-learning belong.
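For concreteness, the tabular Q-learning update whose convergence is covered by such stochastic-approximation theorems has the familiar form shown below (a generic sketch, not code from the paper):

```python
import numpy as np

def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """One tabular Q-learning step:
        Q(s,a) <- Q(s,a) + alpha * [r + gamma * max_a' Q(s',a') - Q(s,a)]
    Convergence results of this kind treat the step sizes alpha_t as
    Robbins-Monro step sizes and require every state-action pair to be
    visited infinitely often."""
    td_target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (td_target - Q[s, a])
    return Q

# Usage on a toy table with 4 states and 2 actions.
Q = np.zeros((4, 2))
Q = q_learning_update(Q, s=0, a=1, r=1.0, s_next=2)
print(Q)
```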

Relevance: 100.00%

Abstract:

In this paper we consider the problem of approximating a function belonging to some function space Φ by a linear combination of n translates of a given function G. Using a lemma by Jones (1990) and Barron (1991), we show that it is possible to define function spaces and functions G for which the rate of convergence to zero of the error is O(1/n) in any number of dimensions. The apparent avoidance of the "curse of dimensionality" is due to the fact that these function spaces are more and more constrained as the dimension increases. Examples include spaces of the Sobolev type, in which the number of weak derivatives is required to be larger than the number of dimensions. We give results both for approximation in the L2 norm and in the L∞ norm. The interesting feature of these results is that, thanks to the constructive nature of Jones' and Barron's lemma, an iterative procedure is defined that can achieve this rate.
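The constructive procedure referred to above is greedy in spirit: at step n one more translate of G is added to reduce the residual. A toy sketch of such a greedy fit with a Gaussian G (my own illustrative construction, not the exact procedure from the paper):

```python
import numpy as np

def greedy_translates(x, f, centers, sigma=0.3, n_terms=20):
    """Greedily approximate samples f(x) by a sum of n_terms translates
    G(x - c) of a Gaussian bump, adding one translate per iteration.
    Jones/Barron-type arguments give an O(1/n) decay of the squared error
    for suitably constrained target functions."""
    def G(t):
        return np.exp(-t**2 / (2 * sigma**2))

    approx = np.zeros_like(f)
    for _ in range(n_terms):
        residual = f - approx
        # Pick the centre whose translate best matches the current residual ...
        scores = [np.dot(residual, G(x - c)) ** 2 / np.dot(G(x - c), G(x - c))
                  for c in centers]
        c_best = centers[int(np.argmax(scores))]
        g = G(x - c_best)
        # ... and add it with its least-squares coefficient.
        approx = approx + (np.dot(residual, g) / np.dot(g, g)) * g
    return approx

# Usage: approximate a smooth target on [0, 1].
x = np.linspace(0.0, 1.0, 400)
f = np.sin(2 * np.pi * x)
fit = greedy_translates(x, f, centers=np.linspace(0, 1, 50))
print("mean squared error:", np.mean((f - fit) ** 2))
```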

Relevance: 100.00%

Abstract:

Different optimization methods can be employed to optimize a numerical estimate for the match between an instantiated object model and an image. In order to take advantage of gradient-based optimization methods, perspective inversion must be used in this context. We show that convergence can be very fast by extrapolating to maximum goodness-of-fit with Newton's method. This approach is related to methods which either maximize a similar goodness-of-fit measure without use of gradient information, or else minimize distances between projected model lines and image features. Newton's method combines the accuracy of the former approach with the speed of convergence of the latter.
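The extrapolation step is ordinary Newton's method applied to the derivative of the goodness-of-fit measure. A one-parameter sketch (a generic illustration, not the paper's pose-refinement code):

```python
import math

def newton_maximize(fprime, fsecond, x0, tol=1e-10, max_iter=50):
    """Newton's method on the derivative of a goodness-of-fit measure:
        x <- x - f'(x) / f''(x)
    Near a maximum (f'' < 0) the iteration converges quadratically."""
    x = x0
    for _ in range(max_iter):
        step = fprime(x) / fsecond(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Usage: a Gaussian-shaped fit measure peaking at x = 2, started close enough
# to the peak for Newton's method to converge.
f2 = lambda x: math.exp(-(x - 2.0) ** 2)
fp = lambda x: -2.0 * (x - 2.0) * f2(x)
fpp = lambda x: (4.0 * (x - 2.0) ** 2 - 2.0) * f2(x)
print(newton_maximize(fp, fpp, x0=1.7))  # ~2.0
```

The quadratic convergence near the maximum is what combines the accuracy of direct goodness-of-fit maximisation with the speed of the line-distance minimisation methods mentioned above.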

Relevance: 100.00%

Abstract:

In this article, we use the no-response test idea, introduced in Luke and Potthast (2003) and Potthast (preprint) for the inverse obstacle problem, to identify the interface of the discontinuity of the coefficient γ in the operator ∇·γ(x)∇ + c(x), with piecewise regular γ and bounded function c(x). We use infinitely many Cauchy data as measurements and give a reconstructive method to localize the interface. We base this multiwave version of the no-response test on two different proofs. The first contains a pointwise estimate as used by the singular sources method. The second is built on an energy (or integral) estimate, which is the basis of the probe method. As a consequence, the probe and singular sources methods are equivalent regarding their convergence, and the no-response test can be seen as a unified framework for these methods. As a further contribution, we provide a formula to reconstruct the values of the jump of γ(x), x ∈ ∂D, at the boundary. A second consequence of this formula is that the blow-up rate of the indicator functions of the probe and singular sources methods at the interface is given by the order of the singularity of the fundamental solution.

Relevance: 100.00%

Abstract:

This paper addresses the numerical solution of the rendering equation in realistic image creation. The rendering equation is an integral equation describing light propagation in a scene according to a given illumination model; the illumination model determines the kernel of the equation under consideration. Monte Carlo methods are nowadays widely used for solving the rendering equation in order to create photorealistic images. In this work we consider the Monte Carlo solution of the rendering equation in the context of a parallel sampling scheme for the hemisphere. Our aim is to apply this sampling scheme to a stratified Monte Carlo integration method for parallel solution of the rendering equation. The domain of integration of the rendering equation is a hemisphere. We divide the hemispherical domain into a number of equal sub-domains of orthogonal spherical triangles; this domain partitioning allows the rendering equation to be solved in parallel. It is known that the Neumann series represents the solution of the integral equation as an infinite sum of integrals. We approximate this sum with a desired truncation error (systematic error), which yields a fixed number of iterations. The rendering equation is then solved iteratively using a Monte Carlo approach. At each iteration we evaluate multi-dimensional integrals using the uniform hemisphere partitioning scheme. An estimate of the rate of convergence is obtained for the stratified Monte Carlo method. This domain partitioning allows easy parallel realization and improves the convergence of the Monte Carlo method. High-performance and Grid computing aspects of the corresponding Monte Carlo scheme are discussed.
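To illustrate stratified hemisphere integration, here is a minimal sketch. It uses an equal-solid-angle rectangular partition in (cos θ, φ) rather than the orthogonal spherical triangles described in the paper, and the integrand is a toy example; both are illustrative assumptions:

```python
import numpy as np

def stratified_hemisphere_integral(f, n_theta=8, n_phi=16, rng=None):
    """Stratified Monte Carlo estimate of I = integral of f(w) over the
    upper hemisphere of directions w. The hemisphere is split into
    n_theta * n_phi cells of equal solid angle (uniform in cos(theta) and
    phi); one uniform sample is drawn per cell, giving an unbiased estimate
    with reduced variance compared to plain Monte Carlo."""
    rng = np.random.default_rng(rng)
    cell_solid_angle = 2.0 * np.pi / (n_theta * n_phi)
    estimate = 0.0
    for i in range(n_theta):
        for j in range(n_phi):
            # Sample uniformly within cell (i, j) of the (cos(theta), phi) grid.
            u = (i + rng.random()) / n_theta            # u = cos(theta) in [0, 1)
            phi = 2.0 * np.pi * (j + rng.random()) / n_phi
            s = np.sqrt(1.0 - u * u)
            w = np.array([s * np.cos(phi), s * np.sin(phi), u])
            estimate += f(w) * cell_solid_angle
    return estimate

# Usage: the integral of cos(theta) over the hemisphere equals pi.
print(stratified_hemisphere_integral(lambda w: w[2]))  # ~3.14
```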

Relevance: 100.00%

Abstract:

Simultaneous observations of cloud microphysical properties were obtained by in-situ aircraft measurements and ground-based radar/lidar. Widespread mid-level stratus cloud was present below a temperature inversion (~5 °C magnitude) at 3.6 km altitude. Localised convection (peak updraft 1.5 m s−1) was observed 20 km west of the radar station and was associated with convergence at 2.5 km altitude. The convection was unable to penetrate the inversion capping the mid-level stratus. The mid-level stratus cloud was vertically thin (~400 m), horizontally extensive (covering hundreds of km) and persisted for more than 24 h. The cloud consisted of supercooled water droplets and small concentrations of large (~1 mm) stellar/plate-like ice which slowly precipitated out. This ice was nucleated at temperatures greater than −12.2 °C and less than −10.0 °C (the cloud top and cloud base temperatures, respectively). No ice seeding from above the cloud layer was observed. The ice was formed by primary nucleation, either through the entrainment of efficient ice nuclei from above or below cloud, or by the slow stochastic activation of immersion-freezing ice nuclei contained within the supercooled drops. Above cloud top, significant concentrations of sub-micron aerosol were observed, consisting of a mixture of sulphate and carbonaceous material, a potential source of ice nuclei. Particle number concentrations in this size range (above 0.1 µm) were ~25 cm−3. Ice crystal concentrations in the cloud were constant at around 0.2 L−1. It is estimated that entrainment of aerosol particles into cloud cannot replenish the loss of ice nuclei from the cloud layer via precipitation. Precipitation from the mid-level stratus evaporated before reaching the surface, whereas rates of up to 1 mm h−1 were observed below the convective feature. There is strong evidence for the Hallett-Mossop (HM) process of secondary ice particle production leading to the formation of the precipitation observed: (1) ice concentrations in the convective feature were more than an order of magnitude greater than the concentration of primary ice in the overlying stratus; (2) large concentrations of small pristine columns were observed at the ~−5 °C level, together with liquid water droplets and a few rimed ice particles; (3) columns were larger and increasingly rimed at colder temperatures. Calculated ice-splinter production rates are consistent with observed concentrations if the condition that only droplets greater than 24 μm are capable of generating secondary ice splinters is relaxed. This case demonstrates the importance of understanding the formation of ice at slightly supercooled temperatures, as it can lead to secondary ice production and the formation of precipitation in clouds which may not otherwise be considered significant precipitation sources.

Relevance: 100.00%

Abstract:

The UPSCALE (UK on PRACE: weather-resolving Simulations of Climate for globAL Environmental risk) project, using PRACE (Partnership for Advanced Computing in Europe) resources, constructed and ran an ensemble of atmosphere-only global climate model simulations using the Met Office Unified Model GA3 configuration. Each simulation is 27 years in length, for both the present climate and an end-of-century future climate, at resolutions of N96 (130 km), N216 (60 km) and N512 (25 km), in order to study the impact of model resolution on high-impact climate features such as tropical cyclones. Increased model resolution is found to improve the simulated frequency of explicitly tracked tropical cyclones, and correlations of interannual variability in the North Atlantic and North West Pacific lie between 0.6 and 0.75. Improvements in the deficit of genesis in the eastern North Atlantic as resolution increases appear to be related to the representation of African Easterly Waves and the African Easterly Jet. However, the intensity of the modelled tropical cyclones as measured by 10 m wind speed remains weak, and there is no indication of convergence over this range of resolutions. In the future climate ensemble, there is a 50% reduction in the frequency of Southern Hemisphere tropical cyclones, while in the Northern Hemisphere there is a reduction in the North Atlantic and a shift in the Pacific, with peak intensities becoming more common in the Central Pacific. There is also a change in tropical cyclone intensities, with the future climate having fewer weak storms and proportionally more strong storms.

Relevance: 100.00%

Abstract:

Films of isotropic nanocrystalline Pd(80)Co(20) alloys were obtained by electrodeposition onto brass substrates in plating baths maintained at different pH values. Increasing the pH of the plating bath led to an increase in mean grain size without inducing significant changes in the composition of the alloy. The magnetocrystalline anisotropy constant was estimated and found to be of the same order of magnitude as that reported for samples with perpendicular magnetic anisotropy. First-order reversal curve (FORC) analysis revealed the presence of an important component of reversible magnetization. FORC diagrams obtained at different sweep rates of the applied magnetic field also revealed that this reversible component is strongly affected by kinetic effects. The slight bias observed in the irreversible part of the FORC distribution suggested the dominance of magnetizing intergrain exchange coupling over demagnetizing dipolar interactions and microstructural disorder.

Relevance: 100.00%

Abstract:

This paper tackles the problem of aggregate TFP measurement using stochastic frontier analysis (SFA). Data from Penn World Table 6.1 are used to estimate a world production frontier for a sample of 75 countries over a long period (1950-2000), taking advantage of the model offered by Battese and Coelli (1992). We also apply the decomposition of TFP suggested by Bauer (1990) and Kumbhakar (2000) to a smaller sample of 36 countries over the period 1970-2000 in order to evaluate the effects of changes in efficiency (technical and allocative), scale effects and technical change. This allows us to analyze the role of productivity and its components in the economic growth of developed and developing nations, in addition to the importance of factor accumulation. Although not much explored in the study of economic growth, frontier techniques seem to be of particular interest for that purpose, since the separation of efficiency effects and technical change has a direct interpretation in terms of the catch-up debate. The estimated technical efficiency scores reveal the efficiency of nations in the production of non-tradable goods, since the GDP series used is PPP-adjusted. We also provide a second set of efficiency scores, corrected in order to reveal efficiency in the production of tradable goods, and rank them. When compared to the rankings of productivity indexes offered by the non-frontier studies of Hall and Jones (1996) and Islam (1995), our ranking shows a somewhat more intuitive order of countries. Rankings of the technical change and scale effects components of TFP change are also very intuitive. We also show that productivity is responsible for virtually all the differences in performance between developed and developing countries in terms of rates of growth of income per worker. More importantly, we find that changes in allocative efficiency play a crucial role in explaining differences in the productivity of developed and developing nations, even larger than the role played by the technology gap.
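For reference, the time-varying inefficiency frontier of Battese and Coelli (1992) invoked above is conventionally written as follows; this is a standard textbook statement of the model, sketched here rather than quoted from the paper:

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Stochastic production frontier with time-varying technical inefficiency:
\begin{align}
  y_{it} &= x_{it}'\beta + v_{it} - u_{it}, &
  v_{it} &\sim N(0,\sigma_v^2),\\
  u_{it} &= \exp\!\bigl[-\eta\,(t - T)\bigr]\,u_i, &
  u_i    &\sim N^{+}(\mu,\sigma_u^2),
\end{align}
% so technical efficiency is TE_{it} = exp(-u_{it}); TFP change can then be
% decomposed into technical change, efficiency change (technical and
% allocative) and scale effects, as in the decomposition applied above.
\end{document}
```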