888 results for Theory of models
Abstract:
We propose cotunneling as the microscopic mechanism that makes possible inelastic electron tunneling spectroscopy of magnetic atoms on surfaces for a wide range of systems, including single magnetic adatoms, molecules, and molecular stacks. We describe electronic transport between the scanning tip and the conducting surface through the magnetic system (MS) with a generalized Anderson model, without making use of effective spin models. Transport and spin dynamics are described with an effective cotunneling Hamiltonian in which the correlations in the magnetic system are calculated exactly and the coupling to the electrodes is included up to second order in the tip-MS and MS-substrate couplings. In the appropriate limit our approach is equivalent to the phenomenological Kondo exchange model that successfully describes the experiments. We apply our method to study in detail inelastic transport in two systems: stacks of cobalt phthalocyanines and a single Mn atom on Cu2N. Our method accounts for both the large contribution of inelastic spin-exchange events to the conductance and the observed conductance asymmetry.
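The second-order elimination of the tunnel couplings described here is, in spirit, a Schrieffer-Wolff reduction. A generic sketch of the resulting Kondo-type exchange form (the notation below is illustrative and not the paper's own) is:

```latex
% Eliminating the tip (t) and substrate (s) tunnel couplings V_t, V_s to
% second order leaves an effective exchange between the MS spin S and the
% electrode electron spin density s_{eta eta'} (generic, illustrative form):
\mathcal{H}_{\mathrm{eff}} \simeq
  \sum_{\eta,\eta' \in \{t,s\}} J_{\eta\eta'}\,
  \mathbf{S}\cdot\mathbf{s}_{\eta\eta'},
\qquad
J_{\eta\eta'} \propto V_{\eta} V_{\eta'}
  \left(\frac{1}{E_{N+1}-E_{N}} + \frac{1}{E_{N-1}-E_{N}}\right),
```

where E_{N±1} are the energies of the virtually charged intermediate states of the MS; in this limit the approach reduces to the phenomenological Kondo exchange model mentioned in the abstract.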
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
The mean abundances of Mg, Si, Ca, Ti, Cr, and Fe based on both strong and weak lines of alpha Cen A are determined by matching the observed line profiles with those synthesised from stellar atmospheric models and comparing these results with a similar analysis for the Sun. There is good agreement between the abundances from strong and weak lines. Strong lines should generally be an excellent indicator of abundance and are far easier to measure than the weak lines normally used. Until the development of the Anstee, Barklem, and O'Mara (ABO) theory for collisional line broadening, the uncertainty in the value of the damping constant prevented strong lines from being used for abundance determinations other than in close differential analyses. We found that alpha Cen A has a mean overabundance of 0.12 +/- 0.06 dex compared to solar mean abundances. This result agrees remarkably well with previous studies that did not use strong lines or the ABO theory for collisional line broadening. Our result supports the conclusion that reliable abundances can be derived from strong lines provided this new theory for line broadening is used to calculate the van der Waals damping.
Abstract:
Through a prospective study of 70 youths staying at homeless-youth shelters, the authors tested the utility of I. Ajzen's (1991) theory of planned behavior (TPB), by comparing the constructs of self-efficacy with perceived behavioral control (PBC), in predicting people's rule-following behavior during shelter stays. They performed the 1st wave of data collection through a questionnaire assessing the standard TPB components of attitudes, subjective norms, PBC, and behavioral intentions in relation to following the set rules at youth shelters. Further, they distinguished between items assessing PBC (or perceived control) and those reflecting self-efficacy (or perceived difficulty). At the completion of each youth's stay at the shelter, shelter staff rated the rule adherence for that participant. Regression analyses revealed some support for the TPB in that subjective norm was a significant predictor of intentions. However, self-efficacy emerged as the strongest predictor of intentions and was the only significant predictor of rule-following behavior. Thus, the results of the present study indicate the possibility that self-efficacy is integral to predicting rule adherence within this context and reaffirm the importance of incorporating notions of people's perceived ease or difficulty in performing actions in models of attitude-behavior prediction.
Abstract:
In this thesis we develop a new generative model of social networks belonging to the family of time-varying networks. Correctly modelling the mechanisms that shape the growth of a network and the dynamics of edge activation and deactivation is of central importance in network science. Indeed, by means of generative models that mimic the real-world dynamics of contacts in social networks it is possible to forecast the outcome of an epidemic process, optimize an immunization campaign, or optimally spread information among individuals. This task can now be tackled by taking advantage of the recent availability of large-scale, high-quality, time-resolved datasets. This wealth of digital data has allowed us to deepen our understanding of the structure and properties of many real-world networks. Moreover, the empirical evidence of a temporal dimension in networks prompted a switch of paradigm from a static representation of graphs to a time-varying one. In this work we exploit the activity-driven paradigm (a modelling tool belonging to the family of time-varying networks) to develop a general dynamical model that encodes two fundamental mechanisms shaping the topology and temporal structure of social networks: social capital allocation and burstiness. The former accounts for the fact that individuals do not invest their time and social interactions at random but rather allocate them toward already-known nodes of the network. The latter accounts for the heavy-tailed distributions of inter-event times in social networks. We then empirically measure the properties of these two mechanisms in seven real-world datasets, develop a data-driven model, and solve it analytically. We check the results against numerical simulations and test our predictions with real-world datasets, finding good agreement between the two.
Moreover, we find and characterize a non-trivial interplay between burstiness and social capital allocation in the parameter phase space. Finally, we present a novel approach to the development of a complete generative model of time-varying networks. This model is inspired by Kauffman's adjacent-possible theory and is based on a generalized version of the Pólya urn. Remarkably, most of the complex and heterogeneous features of real-world social networks are naturally reproduced by this dynamical model, together with many higher-order topological properties (clustering coefficient, community structure, etc.).
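The social-capital-allocation mechanism described above can be sketched in a few lines. The reinforcement rule p(k) = c/(c + k), the activity distribution, and all parameter values below are illustrative assumptions for a minimal activity-driven model with memory, not the thesis's exact specification:

```python
import random

def activity_driven_memory(n=200, m=2, steps=500, gamma=2.1, eps=0.01, c=1.0, seed=42):
    """Toy activity-driven network with a memory (social-capital) mechanism.

    Each time step every node i fires with probability a_i (its activity);
    a firing node draws m contacts: with probability c / (c + k_i), where k_i
    is its number of distinct past contacts, it explores a brand-new random
    node, otherwise it returns to an already-known one.
    """
    rng = random.Random(seed)
    # Heavy-tailed activities: Pareto-like draw with minimum eps, capped at 1.
    acts = [min(1.0, eps * (1.0 - rng.random()) ** (-1.0 / (gamma - 1.0)))
            for _ in range(n)]
    contacts = [set() for _ in range(n)]  # cumulative (integrated) network
    for _ in range(steps):
        for i in range(n):
            if rng.random() >= acts[i]:
                continue  # node i is not active this step
            for _ in range(m):
                k = len(contacts[i])
                if k == 0 or rng.random() < c / (c + k):
                    j = rng.randrange(n)                # explore a new node
                else:
                    j = rng.choice(tuple(contacts[i]))  # reinforce an old tie
                if j != i:
                    contacts[i].add(j)
                    contacts[j].add(i)
    return contacts
```

Because the exploration probability decays with the number of known contacts, time is progressively reallocated toward existing ties, which is the qualitative effect the thesis attributes to social capital allocation.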
Abstract:
Over recent years, evidence has been accumulating in favour of the importance of long-term information as a variable which can affect the success of short-term recall. Lexicality, word frequency, imagery, and meaning have all been shown to augment short-term recall performance. Two competing theories as to the causes of this long-term memory influence are outlined and tested in this thesis. The first approach is the order-encoding account, which ascribes the effect to the usage of resources at encoding, hypothesising that word lists which require less effort to process will benefit from increased levels of order encoding, in turn enhancing recall success. The alternative view, trace redintegration theory, suggests that order is automatically encoded phonologically and that long-term information can only influence the interpretation of the resultant memory trace. The free recall experiments reported here attempted to determine the importance of order encoding as a facilitatory framework and to determine the locus of the effects of long-term information in free recall. Experiments 1 and 2 examined the effects of word frequency and semantic categorisation over a filled delay, and Experiments 3 and 4 did the same for immediate recall. Free recall was improved by both long-term factors tested. Order information was not used over a short filled delay, but was evident in immediate recall. Furthermore, it was found that both long-term factors increased the amount of order information retained. Experiment 5 induced an order-encoding effect over a filled delay, leaving a picture of short-term processes which are closely associated with long-term processes, and which fit conceptions of short-term memory as part of language processing rather better than either the encoding-based or the retrieval-based models. Experiments 6 and 7 aimed to determine to what extent phonological processes were responsible for the pattern of results observed.
Articulatory suppression affected the encoding of order information where speech rate had no direct influence, suggesting that it is ease of lexical access which is the most important factor in the influence of long-term memory on immediate recall tasks. The evidence presented in this thesis does not offer complete support for either the retrieval-based account or the order encoding account of long-term influence. Instead, the evidence sits best with models that are based upon language-processing. The path urged for future research is to find ways in which this diffuse model can be better specified, and which can take account of the versatility of the human brain.
Abstract:
Distance teaching (DT) in technical education has a number of distinctive features: complex informative content; the need to develop simulation models and trainers for conducting practical and laboratory sessions; knowledge diagnostics based on mathematical algorithms; and the organization of collective applied projects. To develop the teaching process for the fundamental control-systems discipline Theory of Automatic Control (TAC), a combined approach was chosen that optimally joins existing software tools for supporting DT with the authors' own developments. The DT TAC system includes: a distance course (DC) in TAC, a site of virtual laboratory practical works in LAB.TAC, and the remote student knowledge diagnostic system d-tester.
Abstract:
In the specific area of software engineering (SE) for self-adaptive systems (SASs) there is growing research awareness of the synergy between SE and artificial intelligence (AI). However, only a few significant results have been published so far. In this paper, we propose a novel and formal Bayesian definition of surprise as the basis for a quantitative analysis to measure degrees of uncertainty and deviations of self-adaptive systems from normal behavior. Surprise measures how observed data affect the models or assumptions of the world at runtime. The key idea is that a "surprising" event can be defined as one that causes a large divergence between the belief distributions prior to and posterior to the event occurring. In such a case the system may decide either to adapt accordingly or to flag that an abnormal situation is happening. In this paper, we discuss possible applications of the Bayesian theory of surprise to the case of self-adaptive systems using Bayesian dynamic decision networks. Copyright © 2014 ACM.
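The prior-versus-posterior divergence idea can be made concrete with a small sketch. The two-hypothesis space ("normal" vs. "anomalous"), the prior, and the likelihoods below are illustrative assumptions, not the paper's model; the KL divergence of posterior from prior is a standard formalization of Bayesian surprise:

```python
from math import log

def bayes_update(prior, likelihood):
    """One Bayes step: posterior over hypotheses after observing an event."""
    unnorm = [p * l for p, l in zip(prior, likelihood)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

def surprise(prior, posterior):
    """Bayesian surprise: KL(posterior || prior), in bits."""
    return sum(q * log(q / p, 2) for q, p in zip(posterior, prior) if q > 0)

# Hypothetical runtime belief: the system is "normal" or "anomalous".
prior = [0.95, 0.05]
expected = bayes_update(prior, [0.9, 0.1])    # observation typical of "normal"
unexpected = bayes_update(prior, [0.1, 0.9])  # observation typical of "anomalous"
# surprise(prior, unexpected) >> surprise(prior, expected): the large
# divergence is what would trigger adaptation or an abnormality flag.
```

A threshold on this surprise value is one simple way a SAS could decide between adapting and merely flagging the event.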
Abstract:
Chlamydia is a common sexually transmitted infection that has potentially serious consequences unless detected and treated early. The health service in the UK offers clinic-based testing for chlamydia but uptake is low. Identifying the predictors of testing behaviours may inform interventions to increase uptake. Self-tests for chlamydia may facilitate testing and treatment in people who avoid clinic-based testing. Self-testing and being tested by a health care professional (HCP) involve two contrasting contexts that may influence testing behaviour. However, little is known about how predictors of behaviour differ as a function of context. In this study, theoretical models of behaviour were used to assess factors that may predict intention to test in two different contexts: self-testing and being tested by a HCP. Individuals searching for or reading about chlamydia testing online were recruited using Google Adwords. Participants completed an online questionnaire that addressed previous testing behaviour and measured constructs of the Theory of Planned Behaviour and Protection Motivation Theory, which propose a total of eight possible predictors of intention. The questionnaire was completed by 310 participants. Sufficient data for multiple regression were provided by 102 and 118 respondents for self-testing and testing by a HCP respectively. Intention to self-test was predicted by vulnerability and self-efficacy, with a trend-level effect for response efficacy. Intention to be tested by a HCP was predicted by vulnerability, attitude and subjective norm. Thus, intentions to carry out two testing behaviours with very similar goals can have different predictors depending on test context. We conclude that interventions to increase self-testing should be based on evidence specifically related to test context.
Abstract:
Previous work has demonstrated that planning behaviours may be more adaptive than avoidance strategies in driving self-regulation, but ways of encouraging planning have not been investigated. The efficacy of an extended theory of planned behaviour (TPB) plus implementation-intention-based intervention to promote planning self-regulation in drivers across the lifespan was tested. An age-stratified group of participants (N=81, aged 18-83 years) was randomly assigned to an experimental or control condition. The intervention prompted specific goal setting with action planning and barrier identification. Goal setting was carried out using an agreed behavioural contract. Baseline and follow-up measures of TPB variables, self-reported driving self-regulation behaviours (avoidance and planning), and mobility goal achievements were collected using postal questionnaires. Like many previous efforts to change planned behaviour by changing its predictors using models such as the TPB, results showed that the intervention did not significantly change any of the model components. However, more than 90% of participants achieved their primary driving goal, and self-regulation planning as measured on a self-regulation inventory was marginally improved. The study demonstrates the role of pre-decisional, or motivational, components as contrasted with post-decisional goal enactment, and offers promise for the role of self-regulation planning and implementation intentions in assisting drivers in achieving their mobility goals and promoting safer driving across the lifespan, even in the context of unchanging beliefs such as perceived risk or driver anxiety.
Abstract:
Generalizing from his experience in solving practical problems, Koopmans set about devising the linear activity-analysis model. He was surprised to find that the economics of his day possessed no uniform, sufficiently exact theory of production or system of concepts for it. In his pioneering study he therefore also laid down, as a theoretical framework for the linear activity-analysis model, the foundations of an axiomatic production theory resting on the concept of technological sets. He is credited with the exact definition of the concepts of production efficiency and efficiency prices, and with the proof of their mutually conditioning relationship within the linear activity-analysis model. Koopmans treated the purely technical definition of efficiency used today only as a special case; his aim was to introduce and analyse the concept of economic efficiency. This study uses the duality theorems of linear programming to reconstruct his results on the latter. It shows, first, that his proofs are equivalent to proving the duality theorems of linear programming and, secondly, that economic efficiency prices are really shadow prices in today's sense. It also points out that the model he formulated to interpret economic efficiency can be seen as a direct predecessor of the Arrow-Debreu-McKenzie models of general equilibrium theory, containing almost every essential element and concept of them: equilibrium prices are nothing other than Koopmans' efficiency prices. Finally, Koopmans' model is reinterpreted as a possible tool for the microeconomic description of enterprise technology. Journal of Economic Literature (JEL) codes: B23, B41, C61, D20, D50.
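The equivalence between efficiency prices and linear-programming shadow prices can be illustrated with a toy product-mix LP. The numbers below are illustrative, not Koopmans'; the sketch solves the primal and its dual explicitly and checks strong duality, with the dual solution playing the role of the resource (efficiency/shadow) prices:

```python
import numpy as np
from scipy.optimize import linprog

# Primal: max 3x1 + 5x2  s.t.  x1 <= 4, 2x2 <= 12, 3x1 + 2x2 <= 18, x >= 0.
A = np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 2.0]])
b = np.array([4.0, 12.0, 18.0])
c = np.array([3.0, 5.0])

primal = linprog(-c, A_ub=A, b_ub=b, method="highs")   # linprog minimizes
# Dual: min b'y  s.t.  A'y >= c, y >= 0  (A'y >= c encoded as -A'y <= -c).
dual = linprog(b, A_ub=-A.T, b_ub=-c, method="highs")

z_primal = -primal.fun
w_dual = dual.fun
# Strong duality: the optimal primal and dual values coincide, and the dual
# solution y prices each scarce resource at its marginal contribution.
```

In Koopmans' reading, the dual optimum `dual.x` is the vector of efficiency prices supporting the efficient production plan `primal.x`: resources whose constraints are slack get a zero price.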
Abstract:
Economic motion and dynamics were a central theme of András Bródy's research. This paper attempts a brief overview and summary of his theory of economic cycles. Bródy's multi-sector modelling of production also provides a framework for his price theory (the theory of value and measurement). Within this framework, the complex fluctuations of economic motion can be analysed on a technological basis. In Bródy's approach the economic cycle is explained not by external shocks but by the internal proportions and interrelationships of the production system. The production structure is shaped equally by prices and volumes; neither is privileged or dominant, and the two evolve in a dual relationship. The equations of motion of the economy are given by technological balance relations, together with the use of products redistributed (reproduced) through market exchange and the change in tied-up assets. The equations of motion so defined describe the natural motion of the economy as cyclical, and changes in technology or in value relations (shocks) are reflected as changes in that cyclical motion. These are features in common with the Leontief models, which Bródy extended to economic cycles. Bródy saw economic cycles as natural motions of economic systems with accumulated assets (time lags) and market exchange of goods (demand and supply adjustment). His model of the economy is a fine instrument that enabled him to show how the technological parameters of the system determine the frequency and other characteristics of the various economic cycles known from economic history.
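How technological parameters alone can generate cycles is easy to see in a closed dynamic Leontief system B x'(t) = (I - A) x(t): complex eigenvalues of inv(B)(I - A) produce oscillatory solutions whose frequencies are fixed by the technology matrices. The 3-sector matrices below are made-up illustrations, not Bródy's data:

```python
import numpy as np

# Closed dynamic Leontief system  B x'(t) = (I - A) x(t).
A = np.array([[0.0, 0.0, 0.5],
              [0.5, 0.0, 0.0],
              [0.0, 0.5, 0.0]])   # flow (input) coefficients: a production ring
B = 2.0 * np.eye(3)               # capital (stock) coefficients, kept diagonal
M = np.linalg.inv(B) @ (np.eye(3) - A)
eig = np.linalg.eigvals(M)
# The ring structure of A gives M a complex-conjugate eigenvalue pair, so
# solutions contain components x(t) ~ exp(Re(lam) t) * cos(Im(lam) t):
# a cycle whose frequency Im(lam) is determined purely by the technology.
```

Changing a coefficient of A or B shifts the eigenvalues and hence the cycle frequency, which is the sense in which shocks to technology or valuations show up as changed patterns of motion.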
Abstract:
How far is public economics currently able to underpin political decisions on the supranational centralization of tax policies? The short answer will be that although the mainstream of public economics successfully analyses the effects of many relevant economic and political factors, it does not at present offer satisfactory decision criteria for decision-makers. The reason is the central role played by a factor that is exogenous to the models and alien to economic theory: the premise concerning the benevolence of governments, or more precisely its degree. The study examines the fiscal-federalist theory of tax competition and tries to draw conclusions, at a more general level, about the present state of the economic theory of the public sector and its further development. The way out of the theoretical blind alley may lie in linking the theory of government operation and decision-making with the theory of desirable economic-policy decisions. The first attempts to do so have been made, but a systematic and comprehensive analysis is still awaited.
Abstract:
The article compares the conclusions from the models of Oliver Hart and his co-authors with Williamson's views on transaction costs. It shows that on the question of firm versus market the two schools use different tools but argue similarly. It presents Williamson's criticism of Hart, namely that bargaining carries no transaction costs in Hart's models, as well as the criticism of that criticism. Hart's ideas are supported by the recently developed reference-point theory within the property-rights school, which also offers experimental means of testing the various assumptions.
Abstract:
This dissertation examines one category of international capital flows, private portfolio investments (private refers to the source of capital). There is an overall lack of a coherent and consistent definition of foreign portfolio investment. We clarify these definitional issues. Two main questions that pertain to private foreign portfolio investments (FPI) are explored. The first is the phenomenon of home preference, often referred to as home bias. Related to this are the observed cross-investment flows between countries that seem to contradict the textbook rendition of private FPI. The theories purporting to resolve the home-preference puzzle (and the cross-investment one) are summarized and evaluated. Most of this literature considers investors from major developed countries. I consider as well whether investors in less developed countries exhibit home preference. The dissertation shows that home preference is indeed pervasive and profound across countries, in both developed and emerging markets. For the U.S., I examine home bias in both equity and bond holdings. I find that home bias is greater when we look at equity and bond holdings together than at equity holdings alone. In this dissertation a model is developed to explain home bias. This model is original and fills a gap in the literature, as there have been no satisfactory models that handle at the same time both home preference and cross-border holdings in the context of information asymmetries. The model reflects what we see in the data and permits us to reach certain results by the use of comparative statics. The model suggests, counter-intuitively, that as the rate of return in a country relative to the world rate of return increases, home preference decreases. In the context of our relatively simple model we ascribe this result to the higher variance of the now higher return for home assets.
We also find, this time as intended, that as risk aversion increases, investors diversify further, so that home preference decreases. The second question that the dissertation deals with is the volatility of private foreign portfolio investment. Countries that are recipients of these flows have been wary of them because of their perceived volatility. Often the contrast is made with the perceived absence of volatility in foreign direct investment flows. I analyze the validity of these concerns using first net-flow data and then gross-flow data. The results show that FPI is not, in relative terms, more volatile than other flows in our sample of eight countries (half were developed countries and the rest were emerging markets). The implication therefore is that restricting FPI flows may be harmful, in the sense that private capital may not be allocated efficiently worldwide, to the detriment of capital-poor economies. More to the point, any such restrictions would in fact be misguided.
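The second comparative-statics result (more risk aversion, more diversification) can be illustrated with a textbook two-asset mean-variance sketch rather than the dissertation's information-asymmetry model; all parameter values are hypothetical. With the portfolio fully invested, the first-order condition of w*mu_h + (1-w)*mu_f - (gamma/2)*Var(w) gives the home weight in closed form:

```python
def home_share(mu_h, mu_f, sig_h, sig_f, rho, gamma):
    """Optimal home weight w in a fully invested two-asset portfolio.

    Maximizes w*mu_h + (1-w)*mu_f - (gamma/2)*portfolio variance subject to
    the weights summing to one; gamma is relative risk aversion.
    """
    cov = rho * sig_h * sig_f
    denom = sig_h ** 2 + sig_f ** 2 - 2.0 * cov
    # Speculative term (shrinks with gamma) + minimum-variance term.
    return (mu_h - mu_f) / (gamma * denom) + (sig_f ** 2 - cov) / denom
```

If the home asset carries a perceived return advantage (mu_h > mu_f, as an information advantage might generate), the speculative term tilts holdings home; raising gamma shrinks that tilt toward the minimum-variance split, so home preference falls with risk aversion, in line with the dissertation's finding.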