799 results for Policy Process Theory
Abstract:
The nuclear gross theory, originally formulated by Takahashi and Yamada (1969 Prog. Theor. Phys. 41 1470) for beta decay, is applied to electron-neutrino nucleus reactions, employing a more realistic description of the energetics of the Gamow-Teller resonances. The model parameters are gauged against the most recent experimental data, both for beta(-)-decay and electron capture, separately for even-even, even-odd, odd-odd and odd-even nuclei. The numerical estimates for neutrino-nucleus cross-sections agree fairly well with previous evaluations performed within the framework of microscopic models. The formalism presented here can be extended to the heavy-nuclei mass region, where weak processes are quite relevant and of astrophysical interest because of their applications in supernova explosive nucleosynthesis.
Abstract:
In this work, we report a density functional theory study of nitric oxide (NO) adsorption on close-packed transition metal (TM) Rh(111), Ir(111), Pd(111) and Pt(111) surfaces in terms of adsorption sites, binding mechanism and charge transfer at coverages of Theta(NO) = 0.25, 0.50 and 0.75 monolayer (ML). Based on our study, a unified picture of the interaction between NO and TM(111) surfaces and of the site preference is established, and valuable insights are obtained. At low coverage (0.25 ML), we find that the NO/TM(111) interaction is governed by an electron donation and back-donation process arising from the interplay between the NO 5 sigma/2 pi* orbitals and the TM d-bands. The extent of donation and back-donation depends critically on the coordination number (adsorption site) and the TM d-band filling, and plays an essential role in NO adsorption on TM surfaces. DFT calculations show that for TMs with high d-band filling, such as Pd and Pt, hollow-site NO is energetically the most favorable, and top-site NO prefers to tilt away from the surface normal, whereas for TMs with low d-band filling (Rh and Ir), top-site NO perpendicular to the surface is energetically the most favorable. Electronic structure analysis shows that, irrespective of the TM and adsorption site, there is a net charge transfer from the substrate to the adsorbate due to the overwhelming back-donation from the TM substrate to the adsorbed NO molecules. The adsorption-induced changes in the work function with respect to the bare surfaces and in the dipole moment are, however, site dependent: the work function increases for hollow-site NO but decreases for top-site NO, because of differences in the charge redistribution. The interplay between the energetics, lateral interaction and charge transfer, which is element dependent, rationalizes the structural evolution of NO adsorption on TM(111) surfaces in the submonolayer regime.
Abstract:
This paper investigates how to improve action selection for online policy learning in robotic scenarios using reinforcement learning (RL) algorithms. Since finding control policies with any RL algorithm can be very time consuming, we propose to combine RL algorithms with heuristic functions that select promising actions during the learning process. With this aim, we investigate the use of heuristics to increase the rate of convergence of RL algorithms and contribute a new learning algorithm, Heuristically Accelerated Q-learning (HAQL), which incorporates heuristics for action selection into the Q-learning algorithm. Experimental results on robot navigation show that the use of even very simple heuristic functions significantly improves the learning rate.
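The kind of heuristically biased action selection the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a greedy rule that maximizes Q(s,a) + xi*H(s,a), a toy one-dimensional corridor, and a distance-based heuristic H; all names and parameter values are illustrative assumptions.

```python
import random

def haql_select(Q, H, state, actions, xi=1.0, epsilon=0.1, rng=random):
    """Epsilon-greedy selection over the heuristically boosted value Q + xi*H."""
    if rng.random() < epsilon:
        return rng.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)] + xi * H[(state, a)])

def train(n_episodes=200, alpha=0.5, gamma=0.9, xi=1.0):
    # Toy 1-D corridor: states 0..4, goal at 4; actions -1 (left), +1 (right).
    states, actions, goal = range(5), (-1, +1), 4
    Q = {(s, a): 0.0 for s in states for a in actions}
    # Heuristic: prefer actions that move toward the goal (assumed prior knowledge).
    H = {(s, a): 1.0 if (goal - s) * a > 0 else 0.0 for s in states for a in actions}
    rng = random.Random(0)
    for _ in range(n_episodes):
        s = 0
        while s != goal:
            a = haql_select(Q, H, s, actions, xi, 0.1, rng)
            s2 = min(max(s + a, 0), goal)
            r = 1.0 if s2 == goal else -0.01
            # Standard Q-learning update; the heuristic only biases selection.
            Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in actions)
                                  - Q[(s, a)])
            s = s2
    return Q

Q = train()
```

The point of the design is that the heuristic shapes exploration without entering the update rule, so the learned Q-values remain those of ordinary Q-learning.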
Abstract:
Concrete offshore platforms, which are subjected to several loading combinations and thus require as general an analysis as possible, can be designed using the concepts adopted for shell elements, but the resistance to shear forces must be verified in particular cross-sections. This work on the design of shell elements uses the three-layer shell theory. The elements are subjected to combined membrane and plate loading, totaling eight components of internal forces: three membrane forces, three moments (two out-of-plane bending moments and one in-plane, or torsion, moment) and two shear forces. The adopted design method, which uses the iterative process proposed by Lourenco & Figueiras (1993) based on the equilibrium equations developed by Gupta (1986), is compared with results for experimentally tested shell elements found in the literature using the program DIANA.
Diagnostic errors and repetitive sequential classifications in on-line process control by attributes
Abstract:
The procedure of on-line process control by attributes, known as Taguchi's on-line process control, consists of inspecting the m-th item (a single item) out of every m produced items and deciding, at each inspection, whether the fraction of conforming items has decreased or not. If the inspected item is non-conforming, production is stopped for adjustment. As the inspection system can be subject to diagnostic errors, a probabilistic model is developed in which the examined item is classified repeatedly until either a conforming or b non-conforming classifications are observed; whichever event occurs first (a conforming classifications or b non-conforming classifications) determines the final classification of the examined item. Properties of an ergodic Markov chain were used to obtain an expression for the average cost of the control system, which can be minimized with respect to three parameters: the sampling interval of the inspections (m), the number of repeated conforming classifications (a), and the number of repeated non-conforming classifications (b). The optimum design is compared with two alternative approaches. The first is a simple preventive policy: the production system is adjusted after every n produced items and no inspection is performed. The second classifies the examined item a fixed number r of times and considers it conforming if most of the classification results are conforming. Results indicate that the current proposal performs better than the procedure that fixes the number of repeated classifications and uses a majority rule. On the other hand, depending on the magnitudes of the errors and costs, the preventive policy can on average be more economical than the alternatives that require inspection. A numerical example illustrates the proposed procedure. (C) 2009 Elsevier B. V. All rights reserved.
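The repetitive-classification rule above (the first of a cumulative conforming verdicts or b cumulative non-conforming verdicts decides) admits a short recursion for the probability of each final classification. This is an illustrative sketch, not the paper's cost model: the per-classification accuracy p and the example values of p, a and b are assumptions.

```python
from functools import lru_cache

def prob_final_conforming(p, a, b):
    """P(a 'conforming' verdicts accumulate before b 'non-conforming' ones),
    when each independent classification returns 'conforming' with prob p."""
    @lru_cache(maxsize=None)
    def f(i, j):  # i conforming and j non-conforming verdicts observed so far
        if i == a:
            return 1.0
        if j == b:
            return 0.0
        return p * f(i + 1, j) + (1 - p) * f(i, j + 1)
    return f(0, 0)

# Example: a truly conforming item judged correctly 90% of the time,
# with decision thresholds a = 2 and b = 2.
p_correct = prob_final_conforming(0.9, a=2, b=2)
```

Repeating the classification raises the chance of a correct final verdict above the single-inspection accuracy, which is the motivation for optimizing a and b jointly with the inspection interval m.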
Abstract:
The procedure for online process control by attributes consists of inspecting a single item out of every m produced items. On the basis of the inspection result, it is decided whether the process is in-control (the conforming fraction is stable) or out-of-control (the conforming fraction has decreased, for example). Most articles about online process control assume that the production process is stopped for an adjustment when the inspected item is non-conforming (production is then restarted in-control; this is here denominated a corrective adjustment). Moreover, the articles related to this subject do not present semi-economical designs (which may yield high quantities of non-conforming items), as they do not include a policy of preventive adjustments (in which case no item is inspected), which can be more economical, mainly if the inspected item can be misclassified. In this article, the choice between a preventive and a corrective adjustment of the process is made at every m produced items. If a preventive adjustment is decided upon, no item is inspected; otherwise, the m-th item is inspected: if it conforms, production goes on; if not, an adjustment takes place and the process restarts in-control. This approach is economically feasible for some practical situations, and the parameters of the proposed procedure are determined by minimizing an average cost function subject to some statistical restrictions (for example, to assure a minimal level, fixed in advance, of conforming items in the production process). Numerical examples illustrate the proposal.
Abstract:
This is a draft of a chapter for the book version of my Ph.D. thesis. The chapter addresses the following question: are the creative processes of musical composers and academic economists essentially the same, or are there significant differences? The chapter finds that there are deep similarities between the creative processes of theoretical economists and those of artists. It builds a process-oriented lifecycle account of creative activity, drawing on testimonial material from the arts and the sciences, and relates the model to the creative work of economists developing economic theory.
Abstract:
Rupture of a light cellophane diaphragm in an expansion tube has been studied by an optical method. The influence of the light diaphragm on test flow generation has long been recognised; however, the diaphragm rupture mechanism is less well known. It has been previously postulated that the diaphragm ruptures around its periphery due to the dynamic pressure loading of the shock wave, with the diaphragm material at some stage being removed from the flow to allow the shock to accelerate to the measured speeds downstream. The images obtained in this series of experiments are the first to show the mechanism of diaphragm rupture and mass removal in an expansion tube. A light diaphragm was impulsively loaded by a shock wave, and a series of images was recorded holographically throughout the rupture process, showing the gradual destruction of the diaphragm. Features such as the diaphragm material, the interface between gases, and a reflected shock were clearly visualised. Both qualitative and quantitative aspects of the rupture dynamics were derived from the images and compared with existing one-dimensional theory.
Abstract:
Quasi-birth-and-death (QBD) processes with infinite “phase spaces” can exhibit unusual and interesting behavior. One of the simplest examples of such a process is the two-node tandem Jackson network, with the “phase” giving the state of the first queue and the “level” giving the state of the second queue. In this paper, we undertake an extensive analysis of the properties of this QBD. In particular, we investigate the spectral properties of Neuts’s R-matrix and show that the decay rate of the stationary distribution of the “level” process is not always equal to the convergence norm of R. In fact, we show that we can obtain any decay rate from a certain range by controlling only the transition structure at level zero, which is independent of R. We also consider the sequence of tandem queues that is constructed by restricting the waiting room of the first queue to some finite capacity, and then allowing this capacity to increase to infinity. We show that the decay rates for the finite truncations converge to a value, which is not necessarily the decay rate in the infinite waiting room case. Finally, we show that the probability that the process hits level n before level 0 given that it starts in level 1 decays at a rate which is not necessarily the same as the decay rate for the stationary distribution.
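For a QBD with finitely many phases, Neuts's R-matrix is the minimal nonnegative solution of R = A0 + R A1 + R^2 A2 (blocks for up, same, and down level transitions) and can be computed by the classical fixed-point iteration starting from R = 0. The sketch below uses an assumed one-phase toy example, not the paper's tandem Jackson network, whose infinite phase space is precisely where this finite-phase intuition about decay rates breaks down.

```python
def matmul(X, Y):
    """Multiply two square matrices given as lists of lists."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matadd(*Ms):
    """Entrywise sum of square matrices of the same size."""
    n = len(Ms[0])
    return [[sum(M[i][j] for M in Ms) for j in range(n)] for i in range(n)]

def solve_R(A0, A1, A2, tol=1e-12, max_iter=100000):
    """Fixed-point iteration R_{k+1} = A0 + R_k A1 + R_k^2 A2 from R_0 = 0,
    converging monotonically to the minimal nonnegative solution."""
    n = len(A0)
    R = [[0.0] * n for _ in range(n)]
    for _ in range(max_iter):
        R_next = matadd(A0, matmul(R, A1), matmul(matmul(R, R), A2))
        if max(abs(R_next[i][j] - R[i][j])
               for i in range(n) for j in range(n)) < tol:
            return R_next
        R = R_next
    return R

# Degenerate one-phase example (assumed values): a random walk with
# up-probability p and down-probability q reduces R to the scalar p/q,
# the familiar geometric decay rate of the stationary queue length.
p, q = 0.3, 0.5
R = solve_R([[p]], [[1 - p - q]], [[q]])
```

In this stable scalar case the spectral radius of R equals the stationary decay rate p/q; the abstract's point is that with an infinite phase space this identification with the convergence norm of R can fail.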
Abstract:
Except for a few large scale projects, language planners have tended to talk and argue among themselves rather than to see language policy development as an inherently political process. A comparison with a social policy example, taken from the United States, suggests that it is important to understand the problem and to develop solutions in the context of the political process, as this is where decisions will ultimately be made.
Abstract:
Business process design is primarily driven by process improvement objectives. However, the role of control objectives stemming from regulations and standards is becoming increasingly important for businesses in light of recent events that led to some of the largest scandals in corporate history. As organizations strive to meet compliance agendas, there is an evident need for systematic approaches that assist in understanding the interplay between (often conflicting) business and control objectives during business process design. Our objective in this paper is twofold. We first present a research agenda for business process compliance, identifying major technical and organizational challenges. We then tackle a part of the overall problem space: the effective modeling of control objectives and their subsequent propagation onto business process models. Control objective modeling is proposed through a specialized modal logic based on normative systems theory, and the visualization of control objectives on business process models is achieved procedurally. The proposed approach is demonstrated in the context of a purchase-to-pay scenario.
Transaction costs and bounded rationality implications for public administration and economic policy
Abstract:
The authors use experimental surveys to investigate the association between individuals' knowledge of particular wildlife species and their stated willingness to allocate funds to conserve each. The nature of variations in these allocations between species (e.g., their dispersion) as participants' knowledge increases is examined. Factors influencing these changes are suggested. Willingness-to-pay allocations are found not to measure the economic value of species, but are shown to be policy relevant. The results indicate that poorly known species, e.g., in remote areas, may obtain relatively less conservation support than they deserve.
Abstract:
The debate about the dynamics of asset inflation and potential policy responses to it has intensified in recent years. Some analysts, notably Borio and Lowe, have called for 'subtle' changes to existing monetary targeting frameworks to try to deal with the problems of asset inflation, and have attempted to develop indicators of financial vulnerability to aid this process. In contrast, this paper argues that the uncertainties involved in understanding financial market developments and their potential impact on the real economy are likely to remain too high to embolden policy makers. The political and institutional risks associated with policy errors are also significant. The fundamental premise that a liberalised financial system is based on 'efficient' market allocation cannot be overlooked. The corollary is that any serious attempt to stabilize financial market outcomes must involve at least a partial reversal of deregulation.