958 results for Standard models
Abstract:
We propose a way to incorporate non-tariff barriers (NTBs) for the four workhorse models of the modern trade literature in computable general equilibrium (CGE) models. CGE models feature intermediate linkages and thus allow us to study global value chains (GVCs). We show that the Ethier-Krugman monopolistic competition model, the Melitz firm heterogeneity model and the Eaton-Kortum model can each be defined as an Armington model with generalized marginal costs, generalized trade costs and a demand externality. As is known from the literature, in both the Ethier-Krugman model and the Melitz model generalized marginal costs are a function of the amount of factor input bundles. In the Melitz model generalized marginal costs are also a function of the price of the factor input bundles: lower factor prices raise the number of firms that can enter the market profitably (the extensive margin), reducing the generalized marginal costs of a representative firm. For the same reason the Melitz model features a demand externality: in a larger market more firms can enter. We implement the different models in a CGE setting with multiple sectors, intermediate linkages, non-homothetic preferences and detailed data on trade costs. We find the largest welfare effects from trade cost reductions in the Melitz model. We also employ the Melitz model to mimic changes in NTBs with a fixed-cost character by analysing the effect of changes in fixed trade costs. While we work here with a model calibrated to the GTAP database, the methods developed can also be applied to CGE models based on the WIOD database.
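The Armington-style reduction described above can be sketched numerically. A minimal Eaton-Kortum-type trade-share calculation, with made-up technology levels, input-bundle costs and iceberg trade costs (not calibrated to GTAP or WIOD):

```python
# Hedged sketch: Eaton-Kortum-style trade shares in an Armington-like
# structure.  All numbers are illustrative, not calibrated data.

def trade_shares(T, c, tau, theta):
    """pi[i][j]: share of importer i's spending on exporter j.
    T: technology levels, c: input-bundle costs,
    tau: iceberg trade costs tau[i][j] >= 1, theta: trade elasticity."""
    n = len(T)
    pi = []
    for i in range(n):
        terms = [T[j] * (c[j] * tau[i][j]) ** (-theta) for j in range(n)]
        Phi = sum(terms)  # importer i's market access term
        pi.append([t / Phi for t in terms])
    return pi

shares = trade_shares(T=[1.0, 1.2, 0.8], c=[1.0, 0.9, 1.1],
                      tau=[[1.0, 1.3, 1.4],
                           [1.3, 1.0, 1.2],
                           [1.4, 1.2, 1.0]],
                      theta=4.0)
```

Lowering any `tau[i][j]` raises exporter j's share in market i at the expense of all other sources, which is the channel through which NTB reductions operate in these models.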
Abstract:
New dimensionally consistent modified solvate complex models are derived to correlate solubilities of solids in supercritical fluids, both in the presence and absence of entrainers (cosolvents). These models are compared against the standard solvate complex models [J. Chrastil, J. Phys. Chem. 86 (1982) 3016-3021; J.C. Gonzalez, M.R. Vieytes, A.M. Botana, J.M. Vieites, L.M. Botana, J. Chromatogr. A 910 (2001) 119-125; Y. Adachi, B.C.Y. Lu, Fluid Phase Equilib. 14 (1983) 47-156; J.M. del Valle, J.M. Aguilera, Ind. Eng. Chem. Res. 27 (1988) 1551-1553] by correlating the solubilities of 13 binary and 12 ternary systems. Though the newly derived models are not significantly better than the standard models at predicting solubilities, they are dimensionally consistent. © 2009 Elsevier B.V. All rights reserved.
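The density-based Chrastil correlation that this family of models extends can be sketched as follows; the parameter values below are purely illustrative, not fitted to any of the cited systems:

```python
import math

# Hedged sketch of the Chrastil (1982) correlation
#   ln S = k ln(rho) + a/T + b
# relating solubility S to solvent density rho and temperature T.
# k, a, b below are made-up illustrative values, not regressed constants.

def chrastil_solubility(rho, T, k, a, b):
    """Solubility at solvent density rho (kg/m^3) and temperature T (K)."""
    return math.exp(k * math.log(rho) + a / T + b)

# At fixed T, solubility rises steeply with density (as rho**k).
s_low = chrastil_solubility(rho=400.0, T=318.0, k=5.0, a=-4000.0, b=-20.0)
s_high = chrastil_solubility(rho=800.0, T=318.0, k=5.0, a=-4000.0, b=-20.0)
```

The dimensional-consistency issue the abstract raises stems from taking the logarithm of a dimensional density; the modified models rework the correlation so that each term is dimensionless.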
Abstract:
The accurate prediction of time-changing covariances is an important problem in the modeling of multivariate financial data. However, some of the most popular models suffer from (a) overfitting and multiple local optima, (b) failure to capture shifts in market conditions, and (c) large computational costs. To address these problems, we introduce a novel dynamic model for time-changing covariances. Overfitting and local optima are avoided by following a Bayesian approach instead of computing point estimates. Changes in market conditions are captured by assuming a diffusion process in parameter values, and computationally efficient, scalable inference is performed using particle filters. Experiments with financial data show excellent performance of the proposed method relative to current standard models.
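The filtering scheme described above can be illustrated in a univariate sketch, assuming a random-walk diffusion on the log-variance and a bootstrap particle filter; the full multivariate covariance case follows the same pattern with more elaborate state and likelihood:

```python
import math
import random

random.seed(0)

# Hedged sketch: latent log-variance h_t diffuses as a random walk,
# observations are y_t ~ N(0, exp(h_t)); a bootstrap particle filter
# tracks h_t.  Model and constants are illustrative stand-ins.

def simulate(T, sigma_h=0.15):
    h, ys = 0.0, []
    for _ in range(T):
        h += random.gauss(0.0, sigma_h)          # diffusion in the parameter
        ys.append(random.gauss(0.0, math.exp(0.5 * h)))
    return ys

def particle_filter(ys, n=500, sigma_h=0.15):
    parts = [0.0] * n
    est = []
    for y in ys:
        parts = [p + random.gauss(0.0, sigma_h) for p in parts]   # propagate
        # Gaussian likelihood of y under variance exp(p), up to a constant
        w = [math.exp(-0.5 * y * y * math.exp(-p) - 0.5 * p) for p in parts]
        tot = sum(w)
        w = [x / tot for x in w]
        est.append(sum(wi * p for wi, p in zip(w, parts)))        # posterior mean
        parts = random.choices(parts, weights=w, k=n)             # resample
    return est

ys = simulate(200)
est = particle_filter(ys)
```

Because inference is sequential, each new observation costs O(n) particle updates, which is the scalability property the abstract points to.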
Abstract:
Sea ice contains flaws including frictional contacts. We aim to describe quantitatively the mechanics of those contacts, providing local physics for geophysical models. With a focus on the internal friction of ice, we review standard micro-mechanical models of friction. The solid's deformation under normal load may be ductile or elastic. The shear failure of the contact may be by ductile flow, brittle fracture, or melting and hydrodynamic lubrication. Combinations of these give a total of six rheological models. When the material under study is ice, several of the rheological parameters in the standard models are not constant, but depend on the temperature of the bulk, on the normal stress under which samples are pressed together, or on the sliding velocity and acceleration. This has the effect of making the shear stress required for sliding dependent on sliding velocity, acceleration, and temperature. In some cases, it also perturbs the exponent in the normal-stress dependence of that shear stress away from the value that applies to most materials. We unify the models by a principle of maximum displacement for normal deformation, and of minimum stress for shear failure, reducing the controversy over the mechanism of internal friction in ice to the choice of values of four parameters in a single model. The four parameters represent, for a typical asperity contact, the sliding distance required to expel melt-water, the sliding distance required to break contact, the normal strain in the asperity, and the thickness of any ductile shear zone.
Abstract:
We report results from a search for neutral Higgs bosons produced in association with b quarks using data recorded by the D0 experiment at the Fermilab Tevatron Collider and corresponding to an integrated luminosity of 7.3 fb⁻¹. This production mode can be enhanced in several extensions of the standard model (SM), such as in its minimal supersymmetric extension (MSSM) at high tan β. We search for Higgs bosons decaying to tau pairs with one tau decaying to a muon and neutrinos and the other to hadrons. The data are found to be consistent with SM expectations, and we set upper limits on the cross section times branching ratio in the Higgs boson mass range from 90 to 320 GeV/c². We interpret our result in the MSSM parameter space, excluding tan β values down to 25 for Higgs boson masses below 170 GeV/c². © 2011 American Physical Society.
Abstract:
Workflows are increasingly used to manage and share scientific computations and methods. Workflow tools can be used to design, validate, execute and visualize scientific workflows and their execution results; other tools manage workflow libraries or mine their contents. There has been much recent work on workflow system integration as well as on common workflow interlinguas, but interoperability among workflow systems remains a challenge. Ideally, these tools would form a workflow ecosystem in which it is possible to create a workflow with one tool, execute it with another, visualize it with a third, and use yet another to mine a repository of such workflows or their executions. In this paper, we describe our approach to creating a workflow ecosystem through the use of standard models for provenance (OPM and W3C PROV) and extensions (P-PLAN and OPMW) to represent workflows. The ecosystem integrates different workflow tools with diverse functions (workflow generation, execution, browsing, mining, and visualization) created by a variety of research groups. This is, to our knowledge, the first time that such a variety of workflow systems and functions has been integrated.
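The core W3C PROV relations that make such tool interchange possible can be sketched with plain tuples standing in for an RDF graph; the tool and artifact names below are hypothetical, and a real ecosystem would serialize this as PROV-O RDF:

```python
# Hedged sketch: a provenance trace as (subject, relation, object) triples
# using W3C PROV core relations.  Names like "run1" are hypothetical.

prov = [
    ("run1",    "prov:wasAssociatedWith", "toolA"),   # activity run by a tool
    ("run1",    "prov:used",              "input1"),  # activity consumed data
    ("output1", "prov:wasGeneratedBy",    "run1"),    # data produced by run
]

def lineage(entity, graph):
    """Walk wasGeneratedBy/used edges one step back to an entity's inputs."""
    generated_by = {s: o for s, p, o in graph if p == "prov:wasGeneratedBy"}
    used = {}
    for s, p, o in graph:
        if p == "prov:used":
            used.setdefault(s, []).append(o)
    activity = generated_by.get(entity)
    return used.get(activity, [])

inputs = lineage("output1", prov)
```

Any tool that can read and write such triples can answer lineage queries over traces produced by a different tool, which is the interoperability point of the standard models.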
Abstract:
The dominant economic paradigm currently guiding industry policy making in Australia and much of the rest of the world is the neoclassical approach. Although neoclassical theories acknowledge that growth is driven by innovation, such innovation is exogenous to their standard models and hence often not explored. Instead the focus is on the allocation of scarce resources, where innovation is perceived as an external shock to the system. Indeed, analysis of innovation is largely undertaken by other disciplines, such as evolutionary economics and institutional economics. As more has become known about innovation processes, linear models, based on research and development or market demand, have been replaced by more complex interactive models which emphasise the existence of feedback loops between the actors and activities involved in the commercialisation of ideas (Manley 2003). Currently dominant among these approaches is the national or sectoral innovation system model (Breschi and Malerba 2000; Nelson 1993), which is based on the notion of increasingly open innovation systems (Chesbrough, Vanhaverbeke, and West 2008). This chapter reports on the 'BRITE Survey', funded by the Cooperative Research Centre for Construction Innovation, which investigated the open sectoral innovation system operating in the Australian construction industry. The BRITE Survey was undertaken in 2004 and is the largest construction innovation survey ever conducted in Australia. The results reported here give an indication of how construction innovation processes operate, as an example that should be of interest to international audiences interested in construction economics. The questionnaire was based on a broad range of indicators recommended in the OECD's Community Innovation Survey guidelines (OECD/Eurostat 2005).
Although the Australian Bureau of Statistics (ABS) has recently begun to undertake regular innovation surveys that include the construction industry (2006), these surveys employ a very narrow definition of the industry and collect only very basic data compared to that provided by the BRITE Survey, which is presented in this chapter. The term 'innovation' is defined here as a new or significantly improved technology or organisational practice, based broadly on OECD definitions (OECD/Eurostat 2005). Innovation may be technological or organisational in nature and it may be new to the world, or just new to the industry or the business concerned. The definition thus includes the simple adoption of existing technological and organisational advancements. The survey collected information about respondents' perceptions of innovation determinants in the industry, comprising various aspects of business strategy and business environment. It builds on a pilot innovation survey undertaken by PricewaterhouseCoopers (PWC) for the Australian Construction Industry Forum on behalf of the Australian Commonwealth Department of Industry, Tourism and Resources in 2001 (PWC 2002). The survey responds to an identified need within the Australian construction industry to have accurate and timely innovation data upon which to base effective management strategies and public policies (Focus Group 2004).
Abstract:
The notion of plaintext awareness (PA) has many applications in public key cryptography: it offers unique, stand-alone security guarantees for public key encryption schemes, has been used as a sufficient condition for proving indistinguishability against adaptive chosen-ciphertext attacks (IND-CCA), and can be used to construct privacy-preserving protocols such as deniable authentication. Unlike many other security notions, plaintext awareness is very fragile when it comes to differences between the random oracle and standard models; for example, many implications involving PA in the random oracle model are not valid in the standard model and vice versa. Similarly, strategies for proving PA of schemes in one model cannot be adapted to the other model. Existing research addresses PA in detail only in the public key setting. This paper gives the first formal exploration of plaintext awareness in the identity-based setting and, as initial work, proceeds in the random oracle model. The focus is mainly on identity-based key encapsulation mechanisms (IB-KEMs), for which the paper presents the first definitions of plaintext awareness, highlights the role of PA in proof strategies of IND-CCA security, and explores relationships between PA and other security properties. On the practical side, our work offers the first, highly efficient, general approach for building IB-KEMs that are simultaneously plaintext-aware and IND-CCA-secure. Our construction is inspired by the Fujisaki-Okamoto (FO) transform, but demands weaker and more natural properties of its building blocks. This result comes from a new look at the notion of γ-uniformity that was inherent in the original FO transform. We show that for IB-KEMs (and PK-KEMs), this assumption can be replaced with a weaker computational notion, which is in fact implied by one-wayness. Finally, we give the first concrete IB-KEM scheme that is PA and IND-CCA-secure by applying our construction to a popular IB-KEM and optimizing it for better performance.
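The derandomize-and-re-encrypt idea behind FO-style transforms can be sketched in a toy KEM. The underlying "PKE" below uses pk == sk and has no security whatsoever; it exists only to show the wiring of derandomized encryption and the re-encryption check on decapsulation:

```python
import hashlib
import os

# Hedged toy sketch of an FO-style KEM: encryption coins and the session
# key are both derived by hashing the same random message m, so the
# decryptor can re-encrypt and reject malformed ciphertexts.
# The toy cipher (pk == sk) is NOT a real or secure PKE.

def H(tag, *parts):
    h = hashlib.sha256(tag)
    for p in parts:
        h.update(p)
    return h.digest()

def pke_encrypt(pk, m, coins):
    pad = H(b"pad", pk, coins)
    return coins + bytes(a ^ b for a, b in zip(m, pad))

def pke_decrypt(sk, c):
    coins, body = c[:32], c[32:]
    pad = H(b"pad", sk, coins)
    return bytes(a ^ b for a, b in zip(body, pad))

def encaps(pk):
    m = os.urandom(32)
    coins = H(b"G", m)              # derandomization: coins = G(m)
    c = pke_encrypt(pk, m, coins)
    return c, H(b"K", m)            # session key K = K(m)

def decaps(sk, pk, c):
    m = pke_decrypt(sk, c)
    if pke_encrypt(pk, m, H(b"G", m)) != c:   # re-encryption check
        return None                            # reject invalid ciphertext
    return H(b"K", m)

pk = sk = b"toy-key"
c, k = encaps(pk)
```

The re-encryption check is what connects the transform to plaintext awareness: a decryptable ciphertext must have been honestly derived from some known m.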
Abstract:
Introduction & Aims: Optimising fracture treatments requires a sound understanding of relationships between stability, callus development and healing outcomes. This has been the goal of computational modelling, but discrepancies remain between simulations and experimental results. We compared healing patterns versus fixation stiffness between a novel computational callus growth model and corresponding experimental data. Hypothesis: We hypothesised that callus growth is stimulated by diffusible signals, whose production is in turn regulated by mechanical conditions at the fracture site. We proposed that introducing this scheme into computational models would better replicate the observed tissue patterns and the inverse relationship between callus size and fixation stiffness. Method: Finite element models of bone healing under stiff and flexible fixation were constructed, based on the parameters of a parallel rat femoral osteotomy study. An iterative procedure was implemented to simulate the development of callus and its mechanical regulation. Tissue changes were regulated according to published mechano-biological criteria. Predictions of healing patterns were compared between standard models, with a pre-defined domain for callus development, and a novel approach, in which periosteal callus growth is driven by a diffusible signal whose production is driven by local mechanical conditions. Finally, each model's predictions were compared to the corresponding histological data. Results: Models in which healing progressed within a prescribed callus domain predicted that greater interfragmentary movements would displace early periosteal bone formation further from the fracture. This results from artificially large distortional strains predicted near the fracture edge. While experiments showed increased hard callus size under flexible fixation, this was not reflected in the standard models. Allowing the callus to grow from a thin soft tissue layer, in response to a mechanically stimulated diffusible signal, results in a callus shape and tissue distribution closer to those observed histologically. Importantly, the callus volume increased with increasing interfragmentary movement. Conclusions: A novel method to incorporate callus growth into computational models of fracture healing allowed us to successfully capture the relationship between callus size and fixation stability observed in our rat experiments. This approach expands our toolkit for understanding the influence of different fixation strategies on healing outcomes.
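The growth scheme described above can be sketched in one dimension, assuming a mechanically gated source term and simple explicit diffusion with decay; the grid, stimulus profiles and constants below are illustrative, not the paper's finite element model:

```python
# Hedged 1-D sketch: a diffusible signal is produced where the mechanical
# stimulus is high, spreads by explicit finite-difference diffusion with
# zero-flux boundaries, and decays; "callus" is where the signal exceeds
# a threshold.  All constants are illustrative.

def diffuse_signal(stimulus, steps=200, D=0.2, decay=0.01):
    s = [0.0] * len(stimulus)
    for _ in range(steps):
        nxt = []
        for i in range(len(s)):
            left = s[i - 1] if i > 0 else s[i]      # reflecting boundary
            right = s[i + 1] if i < len(s) - 1 else s[i]
            lap = left - 2.0 * s[i] + right
            nxt.append(s[i] + D * lap - decay * s[i] + stimulus[i])
        s = nxt
    return s

# Flexible fixation -> larger interfragmentary movement -> stronger source
# near the fracture gap (middle of the grid).
stiff = diffuse_signal([0.0] * 8 + [0.05] * 4 + [0.0] * 8)
flexible = diffuse_signal([0.0] * 8 + [0.15] * 4 + [0.0] * 8)

def callus_size(signal, threshold=0.5):
    return sum(1 for v in signal if v > threshold)
```

Because the source scales with the mechanical stimulus, the thresholded region (and hence the predicted callus) grows with interfragmentary movement, reproducing the inverse stiffness-to-callus-size relationship qualitatively.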
Abstract:
We know from anecdote and research, science and art, that human resilience is a powerful, seemingly ubiquitous force. What is needed is a better understanding of the properties, variations, and applications of that concept to health and well-being. In this paper we put forth two definitions of resilience: Sustainability of purpose in the face of stress, and recovery from adversity. We review current thinking in the social sciences on the nature of biological, psychological and socio-community processes that may confer resilience. In doing so, we encourage greater attention to aspects of biopsychosocial resourcefulness as a dimension of influence on health and mental health distinct from measures of risk found in standard models of public health inquiry. Multi-level, longitudinal, and intervention methods are advocated for research and applications of the concept with conceptual guidelines for the examination of laboratory, diary, and community indicator data on manifestations of resilience across the life span.
Abstract:
We consider a single-server queue with the interarrival times and the service times forming a regenerative sequence. This traffic class includes the standard models: i.i.d., periodic, Markov-modulated (e.g., the BMAP model of Lucantoni [18]) and their superpositions. This class also includes recently proposed traffic models for high-speed networks exhibiting long-range dependence. Under minimal conditions we obtain the rates of convergence to stationary distributions, finiteness of stationary moments, various functional limit theorems and the continuity of stationary distributions and moments. We use the continuity results to obtain approximations for stationary distributions and moments of an MMPP/GI/1 queue where the modulating chain has a countable state space. We extend all our results to feedforward networks where the external arrivals to each queue can be regenerative. Finally, we show that the output process of a leaky bucket is regenerative if the input process is, and hence our results extend to a queue with arrivals controlled by a leaky bucket.
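The single-server dynamics underlying all of these traffic classes follow the Lindley recursion W_{n+1} = max(0, W_n + S_n − A_{n+1}). A minimal simulation sketch, using i.i.d. exponential input (the simplest member of the regenerative class, an M/M/1 queue) so the stationary mean wait has a known closed form for comparison:

```python
import random

random.seed(1)

# Hedged sketch: waiting times via the Lindley recursion
#   W_{n+1} = max(0, W_n + S_n - A_{n+1})
# with i.i.d. exponential interarrivals (rate lam) and services (rate mu),
# i.e. an M/M/1 queue with utilisation rho = lam/mu = 0.8.

def simulate_waits(n, lam=1.0, mu=1.25):
    w, waits = 0.0, []
    for _ in range(n):
        waits.append(w)
        s = random.expovariate(mu)      # service time S_n
        a = random.expovariate(lam)     # next interarrival A_{n+1}
        w = max(0.0, w + s - a)         # Lindley recursion
    return waits

waits = simulate_waits(200_000)
avg = sum(waits) / len(waits)
# For M/M/1 the stationary mean wait is rho / (mu - lam) = 0.8 / 0.25 = 3.2;
# the simulated average should fall near this value.
```

Replacing `expovariate` with any regenerative interarrival/service sequence (periodic, Markov-modulated, long-range dependent) leaves the recursion unchanged, which is why the paper's framework covers them uniformly.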