62 results for one-boson-exchange models
Abstract:
Macroeconomic models of equity and exchange rate returns perform poorly at high frequencies. The proportion of daily returns that these models explain is essentially zero. Instead of relying on macroeconomic determinants, we model equity price and exchange rate behavior using a concept from market microstructure: order flow. The international order flows are derived from belief changes of different investor groups in a two-country setting. We obtain a structural relationship linking equity returns and exchange rate returns to home and foreign equity market order flow. To test the model, we construct daily aggregate order flow data from 800 million equity trades in the U.S. and France from 1999 to 2003. Almost 60% of daily returns in the S&P100 index are explained jointly by exchange rate returns and aggregate order flows in both markets. As predicted by the model, daily exchange rate returns and order flow into the French market have significant incremental explanatory power for daily S&P returns. The model implications are also validated for intraday returns.
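A minimal sketch of the kind of daily regression this abstract describes: equity returns regressed jointly on exchange-rate returns and on home and foreign order flow. All variable names, coefficient values and data here are synthetic illustrations, not figures from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000  # number of synthetic trading days

# Hypothetical regressors: exchange-rate return and aggregate order
# flow (buys minus sells) in the home and foreign equity markets.
fx_ret = rng.normal(0, 0.005, n)
of_home = rng.normal(0, 1.0, n)
of_foreign = rng.normal(0, 1.0, n)

# Synthetic equity return generated from an assumed linear relation.
beta_true = np.array([0.001, -0.4, 0.003, 0.001])
X = np.column_stack([np.ones(n), fx_ret, of_home, of_foreign])
eq_ret = X @ beta_true + rng.normal(0, 0.004, n)

# OLS estimate and R^2 (the statistic the abstract reports, ~60%).
beta_hat, *_ = np.linalg.lstsq(X, eq_ret, rcond=None)
resid = eq_ret - X @ beta_hat
r2 = 1 - resid.var() / eq_ret.var()
print(beta_hat, r2)
```

With enough observations the OLS estimates recover the assumed coefficients, and the incremental explanatory power of each regressor can be read off by comparing R² across nested specifications.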
Abstract:
The eng-genes concept involves the use of fundamental known system functions as activation functions in a neural model to create a 'grey-box' neural network. One of the main issues in eng-genes modelling is to produce a parsimonious model given a model construction criterion. The challenges are that (1) the eng-genes model is in most cases a heterogeneous network consisting of more than one type of nonlinear basis function, and each basis function may have a different set of parameters to be optimised; and (2) the number of hidden nodes has to be chosen based on a model selection criterion. This is a hard mixed-integer problem, and this paper investigates the use of a forward selection algorithm to optimise both the network structure and the parameters of the system-derived activation functions. Results are included from case studies performed on a simulated continuously stirred tank reactor process, and using actual data from a pH neutralisation plant. The resulting eng-genes networks demonstrate superior simulation performance and transparency over a range of network sizes when compared to conventional neural models. (c) 2007 Elsevier B.V. All rights reserved.
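The forward selection idea for a heterogeneous network can be sketched generically: from a candidate pool mixing two basis-function families, greedily add the unit that most reduces the residual error until an improvement threshold is reached. The target function, candidate families and stopping rule below are illustrative assumptions, not the paper's actual algorithm or data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D target to approximate (a stand-in for process data).
x = np.linspace(-3, 3, 200)
y = np.sin(2 * x) + 0.1 * rng.normal(size=x.size)

def make_candidates(n_cand):
    """Candidate pool mixing two families: sigmoidal and Gaussian units
    with randomly drawn centres and widths (a 'heterogeneous' network)."""
    cands = []
    for _ in range(n_cand):
        c, w = rng.uniform(-3, 3), rng.uniform(0.5, 3)
        cands.append(np.tanh(w * (x - c)))       # sigmoidal unit
        cands.append(np.exp(-w * (x - c) ** 2))  # Gaussian unit
    return cands

def forward_select(candidates, max_nodes=10, tol=1e-3):
    """Greedily add the candidate that most reduces the residual SSE."""
    selected, cols = [], [np.ones_like(x)]  # start with a bias term
    sse = np.sum(y ** 2)
    for _ in range(max_nodes):
        best = None
        for i, c in enumerate(candidates):
            if i in selected:
                continue
            A = np.column_stack(cols + [c])
            r = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
            err = r @ r
            if best is None or err < best[1]:
                best = (i, err)
        if best is None or sse - best[1] < tol:  # crude stopping criterion
            break
        selected.append(best[0])
        cols.append(candidates[best[0]])
        sse = best[1]
    return selected, sse

sel, final_sse = forward_select(make_candidates(40))
print(len(sel), final_sse)
```

The node count (structure) and the output-layer weights are chosen jointly by the greedy loop; in the paper's setting the nonlinear parameters of each unit would also be optimised rather than sampled at random.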
Abstract:
Labelling of silica grains and energy dispersive X-ray spectroscopy (EDX) in a TEM-FEG (field emission gun) were used to demonstrate the migration of [Pt(NH3)4]2+ species from one grain to another during Pt/SiO2 catalyst preparation by the ion-exchange procedure.
Abstract:
People tend to attribute more regret to a character who has decided to take action and experienced a negative outcome than to one who has decided not to act and experienced a negative outcome. For some decisions, however, this finding is not observed in a between-participants design and thus appears to rely on comparisons between people's representations of action and their representations of inaction. In this article, we outline a mental models account that explains findings from studies that have used within- and between-participants designs, and we suggest that, for decisions with uncertain counterfactual outcomes, information about the consequences of a decision to act causes people to flesh out their representation of the counterfactual states of affairs for inaction. In three experiments, we confirm our predictions about participants' fleshing out of representations, demonstrating that an action effect occurs only when information about the consequences of action is available to participants as they rate the nonactor and when this information about action is informative with respect to judgments about inaction. It is important to note that the action effect always occurs when the decision scenario specifies certain counterfactual outcomes. These results suggest that people sometimes base their attributions of regret on comparisons among different sets of mental models.
Abstract:
We study the effects of amplitude and phase damping decoherence in d-dimensional one-way quantum computation. We focus our attention on low dimensions and elementary unidimensional cluster state resources. Our investigation shows how information transfer and entangling gate simulations are affected for d >= 2. To understand motivations for extending the one-way model to higher dimensions, we describe how basic qudit cluster states deteriorate under environmental noise of experimental interest. In order to protect quantum information from the environment, we consider encoding logical qubits into qudits and compare entangled pairs of linear qubit-cluster states to single qudit clusters of equal length and total dimension. A significant reduction in the performance of cluster state resources for d > 2 is found when Markovian-type decoherence models are present.
Abstract:
The growing importance of understanding past abrupt climate variability at regional and global scales has led to the realisation that independent chronologies of past environmental change need to be compared between various archives. This has in turn led to attempts at significant improvements in the precision at which records can be dated. Radiocarbon dating is still the most prominent method for dating organic material from terrestrial and marine archives, and as such many of the recent developments in improving precision have been aimed at this technique. These include: (1) selection of the most suitable datable fractions within a record, (2) the development of better calibration curves, and (3) more precise age modelling techniques. While much attention has been focused on the first two items, testing the possibilities of the relatively new age modelling approaches has received little attention. Here, we test the potential of methods designed to significantly improve precision in radiocarbon-based age models: wiggle-match dating and various forms of Bayesian analysis. We demonstrate that while all of the methods can perform very well, in some scenarios caution must be taken when applying them. It appears that an integrated approach is required in real-life dating situations, in which more than one model is applied, with strict error calculation, and with the integration of radiocarbon data with sedimentological analyses of site formation processes. (C) 2007 Elsevier Ltd. All rights reserved.
Abstract:
Despite the simultaneous progress of traffic modelling on both the macroscopic and microscopic fronts, recent works [E. Bourrel, J.B. Lessort, Mixing micro and macro representation of traffic flow: a hybrid model based on the LWR theory, Transport. Res. Rec. 1852 (2003) 193–200; D. Helbing, M. Treiber, Critical discussion of “synchronized flow”, Coop. Transport. Dyn. 1 (2002) 2.1–2.24; A. Hennecke, M. Treiber, D. Helbing, Macroscopic simulations of open systems and micro–macro link, in: D. Helbing, H.J. Herrmann, M. Schreckenberg, D.E. Wolf (Eds.), Traffic and Granular Flow ’99, Springer, Berlin, 2000, pp. 383–388] have highlighted that one of the most promising ways to simulate traffic flow efficiently on large road networks is a clever combination of both traffic representations: hybrid modelling. Our focus in this paper is to propose two hybrid models in which the macroscopic (resp. mesoscopic) part is based on a class of second-order models [A. Aw, M. Rascle, Resurrection of second order models of traffic flow?, SIAM J. Appl. Math. 60 (2000) 916–938], whereas the microscopic part is a Follow-the-Leader-type model [D.C. Gazis, R. Herman, R.W. Rothery, Nonlinear follow-the-leader models of traffic flow, Oper. Res. 9 (1961) 545–567; R. Herman, I. Prigogine, Kinetic Theory of Vehicular Traffic, American Elsevier, New York, 1971]. For the first hybrid model, we define precisely the translation of boundary conditions at interfaces, and for the second we explain the synchronization processes. Furthermore, through numerical simulations we show that wave propagation is not disturbed and mass is accurately conserved when passing from one traffic representation to the other.
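The microscopic side of such a hybrid can be illustrated with a bare-bones Follow-the-Leader rule in the Gazis-Herman-Rothery spirit, where each vehicle's acceleration is proportional to its speed difference with the leader divided by the gap. The parameter values, the dropped reaction delay, and the explicit Euler integration are simplifying assumptions for illustration only.

```python
import numpy as np

# Simplified follow-the-leader rule: a_n = c * (v_lead - v_n) / gap,
# integrated with explicit Euler and no reaction delay.

def step(x, v, v_lead, dt=0.1, c=15.0):
    """Advance positions x and speeds v by one time step.
    Index 0 is the leader; vehicles are ordered front to back."""
    a = np.zeros_like(v)
    a[1:] = c * (v[:-1] - v[1:]) / (x[:-1] - x[1:])
    v_new = v + dt * a
    v_new[0] = v_lead  # the leader's speed is imposed externally
    x_new = x + dt * v_new
    return x_new, v_new

# Platoon of 5 vehicles, 20 m apart, all at 20 m/s; leader slows to 15 m/s.
x = np.array([100.0, 80.0, 60.0, 40.0, 20.0])
v = np.full(5, 20.0)
for _ in range(300):  # simulate 30 s
    x, v = step(x, v, v_lead=15.0)
print(v)
```

In a hybrid scheme this discrete vehicle dynamics would exchange boundary data (flows and densities at the interface) with the macroscopic model, which is precisely the translation problem the paper addresses.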
Abstract:
One of the attractive features of sound synthesis by physical modeling is the potential to build acoustic-sounding digital instruments that offer more flexibility and different options in their design and control than their real-life counterparts. In order to develop such virtual-acoustic instruments, the models they are based on need to be fully parametric, i.e., all coefficients employed in the model are functions of physical parameters that are controlled either online or at the (offline) design stage. In this letter we show how propagation losses can be parametrically incorporated into digital waveguide string models with the use of zero-phase FIR filters. Starting from the simplest possible design in the form of a three-tap FIR filter, a higher-order FIR strategy is presented and discussed from the perspective of string sound synthesis with digital waveguide models.
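A three-tap symmetric loss filter in a waveguide loop can be sketched with a minimal Karplus-Strong-style string: the loop filter h = g·[a, 1-2a, a] is symmetric, hence zero-phase apart from the loop delay, so the damping it adds does not detune the partials. The parameter values and the single-delay-line simplification are ours, not the letter's design.

```python
import numpy as np

def waveguide_string(n_samples, N=100, g=0.995, a=0.1, seed=0):
    """Simulate a string loop of delay N samples with a symmetric
    three-tap FIR loss filter h = g * [a, 1 - 2a, a]."""
    rng = np.random.default_rng(seed)
    y = np.zeros(n_samples)
    y[:N + 1] = rng.uniform(-1, 1, N + 1)  # noise-burst initial state
    for n in range(N + 1, n_samples):
        # Loss filter applied across the delayed output and its two
        # neighbours; at DC the loop gain is exactly g.
        y[n] = g * (a * y[n - N + 1] + (1 - 2 * a) * y[n - N]
                    + a * y[n - N - 1])
    return y

y = waveguide_string(20000)
```

Here g sets the frequency-independent decay and a the frequency-dependent damping (larger a attenuates high partials faster); both could be made functions of physical string parameters, which is the parametric design question the letter pursues with higher-order FIR filters.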
Abstract:
The motivation for this paper is to present procedures for automatically creating idealised finite element models from the 3D CAD solid geometry of a component. The procedures produce an accurate and efficient analysis model with little effort on the part of the user. The technique is applicable to thin walled components with local complex features and automatically creates analysis models where 3D elements representing the complex regions in the component are embedded in an efficient shell mesh representing the mid-faces of the thin sheet regions. As the resulting models contain elements of more than one dimension, they are referred to as mixed dimensional models. Although these models are computationally more expensive than some of the idealisation techniques currently employed in industry, they do allow the structural behaviour of the model to be analysed more accurately, which is essential if appropriate design decisions are to be made. Also, using these procedures, analysis models can be created automatically whereas the current idealisation techniques are mostly manual, have long preparation times, and are based on engineering judgement. In the paper the idealisation approach is first applied to 2D models that are used to approximate axisymmetric components for analysis. For these models 2D elements representing the complex regions are embedded in a 1D mesh representing the midline of the cross section of the thin sheet regions. Also discussed is the coupling, which is necessary to link the elements of different dimensionality together. Analysis results from a 3D mixed dimensional model created using the techniques in this paper are compared to those from a stiffened shell model and a 3D solid model to demonstrate the improved accuracy of the new approach. 
At the end of the paper a quantitative analysis of the reduction in computational cost due to shell meshing thin sheet regions demonstrates that the reduction in degrees of freedom is proportional to the square of the aspect ratio of the region, and for long slender solids, the reduction can be proportional to the aspect ratio of the region if appropriate meshing algorithms are used.
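The quadratic scaling of the DOF reduction can be illustrated with a back-of-the-envelope count. The meshing assumptions below (well-shaped solid elements of size comparable to the thickness t, a fixed coarse shell mesh of the mid-face, 3 vs 6 DOF per node) are ours, purely for illustration of the trend, not the paper's analysis.

```python
# Square thin sheet of side L and thickness t, aspect ratio A = L / t.

def dof_solid(L, t, dof_per_node=3):
    """Solid mesh: element size ~t, so ~(L/t)^2 elements in-plane,
    with two node layers through the thickness."""
    n = int(round(L / t)) + 1  # nodes per in-plane direction
    return n * n * 2 * dof_per_node

def dof_shell(L, t, dof_per_node=6, n_side=11):
    """Shell mesh of the mid-face: element size set by L, not t."""
    return n_side * n_side * dof_per_node

for A in (10, 100, 1000):
    L, t = 1.0, 1.0 / A
    ratio = dof_solid(L, t) / dof_shell(L, t)
    print(A, ratio)  # the DOF ratio grows roughly as A^2
```

Increasing the aspect ratio tenfold multiplies the DOF ratio by roughly a hundred, consistent with the quadratic dependence stated above.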
Abstract:
Polypropylene (PP), a semi-crystalline material, is typically solid-phase thermoformed at temperatures associated with crystalline melting, generally in the 150 to 160 °C range. In this very narrow thermoforming window the mechanical properties of the material decline rapidly with increasing temperature, and these large changes in properties make Polypropylene one of the more difficult materials to process by thermoforming. Measurement of the deformation behaviour of a material under processing conditions is particularly important for accurate numerical modelling of thermoforming processes. This paper presents the findings of a study into the physical behaviour of industrial thermoforming grades of Polypropylene. Practical tests were performed using custom-built materials testing machines and thermoforming equipment at Queen's University Belfast. Numerical simulations of these processes were constructed to replicate thermoforming conditions using industry-standard Finite Element Analysis software, namely ABAQUS, and custom-built user material model subroutines. Several variant constitutive models were used to represent the behaviour of the Polypropylene materials during processing, including a range of phenomenological, rheological and blended constitutive models. The paper discusses approaches to modelling industrial plug-assisted thermoforming operations using Finite Element Analysis techniques and the range of material models constructed and investigated, and it directly compares practical results to numerical predictions. The paper culminates in a discussion of the learning points from using Finite Element Methods to simulate the plug-assisted thermoforming of Polypropylene, which presents complex contact, thermal, friction and material modelling challenges. The paper makes recommendations as to the relative importance of these inputs in general terms with regard to correlating with experimentally gathered data.
The paper also presents recommendations as to the approaches to be taken to secure simulation predictions of improved accuracy.
Abstract:
Charge exchange (CE) plays a fundamental role in the collisions of solar- and stellar-wind ions with lunar and planetary exospheres, comets, and circumstellar clouds. Reported herein are absolute cross sections for single, double, triple, and quadruple CE of Feq+ (q = 5-13) ions with H2O at a collision energy of 7q keV. One measured value of quintuple CE is also given, for Fe9+ ions. An electron cyclotron resonance ion source is used to provide currents of the highly charged Fe ions. Absolute data are derived from knowledge of the target gas pressure, target path length, and incident and charge-exchanged ion currents. Experimental cross sections are compared with new results of the n-electron classical trajectory Monte Carlo approximation. The radiative and non-radiative cascades following electron transfer are approximated using scaled hydrogenic transition probabilities and scaled Auger rates. Also given are estimates of cross sections for single capture, and for multiple capture followed by autoionization, as derived from the extended overbarrier model. These estimates are based on new theoretical calculations of the vertical ionization potentials of H2O up to H2O10+.
Abstract:
In 1700 few Irishwomen were literate. Most lived in a rural environment, rarely encountered a book or a play, and seldom ventured beyond their own domestic space. By 1960 literacy was universal: all Irishwomen attended primary school, had access to a variety of books, magazines, newspapers and other forms of popular media, and the wider world was now part of their everyday life. This study seeks to examine the cultural encounters and exchanges inherent in this transformation. It analyses reading and popular and consumer culture as sites of negotiation of gender roles. It is not an exhaustive treatment of the theme but focuses on three key points of cultural encounter: the Enlightenment, emigration and modernism. The writings and intellectual discourse generated by the Enlightenment were among the most influential forces shaping western society, setting the agenda for scientific, political and social thought in the eighteenth and nineteenth centuries. The migration of peoples to North America was another key historical marker in the development of the modern world: emigration altered and shaped American society as well as the lives of those who remained behind. By the twentieth century, aesthetic modernism, suspicious of Enlightenment rationalism and determined to produce new cultural forms, had developed in a complex relationship with the forces of industrialisation, urbanisation and social change. This study analyses the impact of these three key forces in Western culture on the changing roles and perceptions of Irish women from 1700 to 1960.
Abstract:
The states of a boson pair in a one-dimensional double-well potential are investigated. Properties of the ground and lowest excited states of this system are studied, including the two-particle wave function, momentum pair distribution, and entanglement. The effects of varying both the barrier height and the effective interaction strength are investigated.
Abstract:
An alternative-models framework was used to test three confirmatory factor analytic models for the Short Leyton Obsessional Inventory-Children's Version (Short LOI-CV) in a general population sample of 517 young adolescent twins (11-16 years). The models considered were a one-factor model, as implicit in current classification systems of Obsessive-Compulsive Disorder (OCD); a two-factor obsessions and compulsions model; and a multidimensional model corresponding to the three proposed subscales of the Short LOI-CV (labelled Obsessions/Incompleteness, Numbers/Luck and Cleanliness). The three-factor model was the only one to provide an adequate explanation of the data. Twin analyses suggested significant quantitative sex differences in heritability for both the Obsessions/Incompleteness and Numbers/Luck dimensions, these being significantly heritable in males only (heritabilities of 60% and 65%, respectively). The correlation between the additive genetic effects for these two dimensions in males was 0.95, suggesting they largely share the same genetic risk factors.
Abstract:
Concern with what can explain variation in generalized social trust has led to an abundance of theoretical models. Defining generalized social trust as a belief in human benevolence, we focus on the emancipation theory and social capital theory as well as the ethnic diversity and economic development models of trust. We then determine which dimensions of individuals’ behavior and attitudes as well as of their national context are the most important predictors. Using data from 20 countries that participated in round one of the European Social Survey, we test these models at their respective level of analysis, individual and/or national. Our analysis revealed that individuals’ own trust in the political system as a moral and competent institution was the most important predictor of generalized social trust at the individual level, while a country’s level of affluence was the most important contextual predictor, indicating that different dimensions are significant at the two levels of analysis. This analysis also raised further questions as to the meaning of social capital at the two levels of analysis and the conceptual equivalence of its civic engagement dimension across cultures.