994 results for Haag-Kastler Axioms.


Relevance: 10.00%

Abstract:

This thesis aims to make a methodological contribution to the field of strategic management through three objectives: revising the concept of ex post, or realized, risk for the field of strategic management; making this concept concrete in a valid risk measure; and exploring the possibilities and interest of decomposing risk into different determinants that can explain its nature. The first objective is pursued by taking the intuitive concept of risk as a starting point and reviewing the literature in the closest fields, especially behavioral decision theory and strategic management. The analysis leads to formulating the ex post risk of an activity as the degree to which the objectives for that activity have not been achieved. Making this definition concrete in the field of strategic management implies that the objectives must lead to obtaining sustainable competitive advantage, which reveals the interest of measuring risk in the short term, i.e. statically, and in the long term, i.e. dynamically; accordingly, a Static Risk measure and a Dynamic Risk measure are defined. The analysis identifies four basic conceptual dimensions to be incorporated into the measures: sign dependence, relativity, longitudinality, and path dependence. Additionally, since outcomes may be cardinal or ordinal, the two measures are formulated first for cardinal outcomes and then for ordinal outcomes. The proposed risk measures synthesize the ex post outcomes into a measure of the relative centrality of outcomes, Static Risk, and a measure of the temporal trend of outcomes, Dynamic Risk. This proposal contrasts with the traditional mean-variance approach.
The measures developed are evaluated with a system of conceptual and technical properties elaborated specifically in the thesis, which makes it possible to demonstrate their degree of validity and that of the measures existing in the literature, highlighting the validity problems of the latter. An illustrative theoretical example of the proposed measures is also provided, supporting the evaluation carried out with the system of properties. A notable contribution of this thesis is the demonstration that the proposed risk measures allow the additive decomposition of risk when the outcomes, or outcome differentials, decompose additively. Finally, the thesis includes an application of the cardinal Static and Dynamic Risk measures, and of their decomposition, to the analysis of the profitability of the Spanish banking sector over the period 1987-1999. The application illustrates the capacity of the proposed measures to analyze the manifestation of competitive advantage, its evolution, and its economic nature. The conclusions outline possible lines of future research.
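As a rough illustration of the two measures (the abstract does not give the thesis's actual formulas, so the definitions below are hypothetical stand-ins, chosen to be linear so that the additive-decomposition property mentioned above is easy to see): a Static Risk could be sketched as the average shortfall of outcomes relative to an objective, and a Dynamic Risk as the negated temporal trend of the outcome series.

```python
import numpy as np

# Hypothetical sketch only: illustrative stand-ins for the thesis's
# Static Risk (relative centrality of outcomes) and Dynamic Risk
# (temporal trend of outcomes), not its actual formulas.

def static_risk(outcomes, objective):
    """Average (signed) shortfall of outcomes relative to the objective."""
    return float(np.mean(objective - np.asarray(outcomes, dtype=float)))

def dynamic_risk(outcomes):
    """Negated least-squares slope of the outcome series: a worsening
    temporal trend yields a higher risk value."""
    y = np.asarray(outcomes, dtype=float)
    slope = np.polyfit(np.arange(len(y)), y, 1)[0]
    return float(-slope)

# Both measures are linear in the outcome series, so if outcomes and
# objectives split additively into components, the risk of the total
# equals the sum of the component risks.
```

Because both sketches are linear functionals of the outcome series, splitting a series into two additive components splits each risk measure the same way, which is the flavor of decomposition the thesis exploits.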

Relevance: 10.00%

Abstract:

The work of the Argentine writer Almafuerte presents notable contrasts within the context in which the literature of Buenos Aires emerged during the 1880s. This peculiarity, which combines biographical aspects and literary preferences, can be analyzed by considering the fracture it generates in the dominant social imaginary, legitimized by the ruling elite of the 1880s. This article proposes a study of these problems by focusing on one topic of Almafuerte's poetry, the discursive creation of the chusma (the rabble), and the successive denunciations and proposals for overcoming it, crowned by the formulation of a utopia. This survey allows a detailed appreciation of the contemporary sociohistorical conflicts, especially the tensions between social imaginaries, which Almafuerte's writing registered and resignified.

Relevance: 10.00%

Abstract:

It is argued that the truth status of emergent properties of complex adaptive systems models should be based on an epistemology of proof by constructive verification and therefore on the ontological axioms of a non-realist logical system such as constructivism or intuitionism. ‘Emergent’ properties of complex adaptive systems (CAS) models create particular epistemological and ontological challenges. These challenges bear directly on current debates in the philosophy of mathematics and in theoretical computer science. CAS research, with its emphasis on computer simulation, is heavily reliant on models which explore the entailments of Formal Axiomatic Systems (FAS). The incompleteness results of Gödel, the incomputability results of Turing, and the Algorithmic Information Theory results of Chaitin, undermine a realist (platonic) truth model of emergent properties. These same findings support the hegemony of epistemology over ontology and point to alternative truth models such as intuitionism, constructivism and quasi-empiricism.

Relevance: 10.00%

Abstract:

Theorem-proving is a one-player game. The history of computer programs as players goes back to 1956 and the Logic Theory Machine ('LT') of Newell, Shaw and Simon. In game-playing terms, the 'initial position' is the core set of axioms chosen for the particular logic, and the 'moves' are the rules of inference. Now, the Univalent Foundations Program at IAS Princeton and the resulting 'HoTT' book on Homotopy Type Theory have demonstrated the success of a new kind of experimental mathematics using computer theorem proving.
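In a modern proof assistant the game metaphor is visible directly: the "initial position" is the set of axioms, definitions, and previously proved lemmas in scope, and each tactic or lemma application is a "move". A minimal Lean 4 sketch (illustrative only, unrelated to the HoTT book's actual formalizations):

```lean
-- "Initial position": Nat, its definitions, and the library lemmas in scope.
example : 2 + 2 = 4 := rfl            -- a one-move win, by computation

-- A "move" citing a previously established lemma as a rule of play:
theorem my_add_comm (m n : Nat) : m + n = n + m :=
  Nat.add_comm m n
```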

Relevance: 10.00%

Abstract:

We first propose a simple task for eliciting attitudes toward risky choice, the SGG lottery-panel task, which consists of a series of lotteries constructed to compensate riskier options with higher risk-return trade-offs. Using principal component analysis, we show that the SGG lottery-panel task captures two dimensions of individual risky decision making: subjects' average risk taking and their sensitivity to variations in risk-return. From the results of a large experimental dataset, we confirm that the task systematically captures a number of regularities, such as: a tendency toward risk-averse behavior (only around 10% of choices are compatible with risk neutrality); an attraction to certain payoffs compared with low-risk lotteries, compatible with the over-(under-)weighting of small (large) probabilities predicted by prospect theory (PT); and gender differences, i.e. males being consistently less risk averse than females, with both genders similarly responsive to increases in the risk premium. Another interesting result is that in hypothetical choices most individuals increase their risk taking in response to an increase in the return to risk, as predicted by PT, while across panels with real rewards we see even more changes, but opposite to the expected pattern of riskier choices for higher risk-returns. We therefore conclude from our data that an "economic anomaly" emerges in the real-reward choices, opposite to the hypothetical choices. These findings are in line with Camerer's (1995) view that although in many domains paid subjects probably do exert extra mental effort which improves their performance, choice over money gambles is not likely to be a domain in which effort will improve adherence to rational axioms (p. 635).
Finally, we demonstrate that both dimensions of risk attitudes, average risk taking and sensitivity to variations in the return to risk, are desirable not only for describing behavior under risk but also for explaining behavior in other contexts, as illustrated by an example. In the second study, we propose three additional treatments intended to elicit risk attitudes under high stakes and under mixed-outcome (gains and losses) lotteries. Using a dataset obtained from a hypothetical implementation of the tasks, we show that the new treatments are able to capture both dimensions of risk attitudes. This new dataset allows us to describe several regularities, both at the aggregate and within-subjects levels. We find that in every treatment over 70% of choices show some degree of risk aversion, and only between 0.6% and 15.3% of individuals are consistently risk neutral within the same treatment. We also confirm the existence of gender differences in the degree of risk taking: in all treatments females prefer safer lotteries than males do. Regarding our second dimension of risk attitudes, we observe in all treatments an increase in risk taking in response to risk-premium increases. Treatment comparisons reveal other regularities, such as a lower degree of risk taking in large-stake treatments than in low-stake treatments, and a lower degree of risk taking when losses are incorporated into the large-stake lotteries. These results are compatible with previous findings in the literature on stake-size effects (e.g., Binswanger, 1980; Bosch-Domènech & Silvestre, 1999; Hogarth & Einhorn, 1990; Holt & Laury, 2002; Kachelmeier & Shehata, 1992; Kühberger et al., 1999; Weber & Chapman, 2005; Wik et al., 2007) and domain effects (e.g., Brooks & Zank, 2005; Schoemaker, 1990; Wik et al., 2007). For small-stake treatments, however, we find that the effect of incorporating losses into the outcomes is less clear.
At the aggregate level an increase in risk taking is observed, but also more dispersion in the choices, while at the within-subjects level the effect weakens. Finally, regarding responses to the risk premium, we find that, compared with gains-only treatments, sensitivity is lower in the mixed-lottery treatments (SL and LL). In general, sensitivity to risk-return is more affected by the domain than by the stake size. Having described the properties of risk attitudes as captured by the SGG risk elicitation task and its three new versions, it is important to recall that the danger of using unidimensional descriptions of risk attitudes goes beyond their incompatibility with modern economic theories such as PT and CPT, all of which call for tests with multiple degrees of freedom. Faithful to this recommendation, the contribution of this essay is an empirically and endogenously determined bi-dimensional specification of risk attitudes, useful for describing behavior under uncertainty and for explaining behavior in other contexts. Hopefully, this will contribute to the creation of large datasets containing a multidimensional description of individual risk attitudes, while remaining compatible with present and future, more complex descriptions of human attitudes toward risk.
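The two-dimensional structure that the PCA extracts can be sketched on simulated data. The panel structure, the choice coding (1 = safest lottery, higher = riskier), and all effect sizes below are assumptions for illustration, not the SGG task's actual design:

```python
import numpy as np

# Simulate subjects whose choices are driven by two latent traits:
# a baseline level of risk taking and a sensitivity to the risk premium.
rng = np.random.default_rng(0)
n_subjects, n_panels = 100, 4
base = rng.normal(3.0, 1.0, size=(n_subjects, 1))   # average risk taking
sens = rng.normal(0.5, 0.2, size=(n_subjects, 1))   # sensitivity to premium
premium = np.arange(n_panels)                       # rising risk premium
choices = base + sens * premium + rng.normal(0, 0.3, size=(n_subjects, n_panels))

# PCA via eigendecomposition of the covariance matrix of panel choices.
X = choices - choices.mean(axis=0)
cov = (X.T @ X) / (n_subjects - 1)
eigvals, eigvecs = np.linalg.eigh(cov)              # ascending order
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
explained = eigvals / eigvals.sum()                 # variance shares, descending
```

With data generated this way, the first two components absorb nearly all the variance, mirroring the paper's finding that average risk taking and risk-return sensitivity suffice to describe the choice patterns.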

Relevance: 10.00%

Abstract:

Increasing evidence demonstrates that beta-amyloid (Aβ) is toxic to synapses, resulting in the progressive dismantling of neuronal circuits. Counteracting the synaptotoxic effects of Aβ could be particularly relevant for providing effective treatments for Alzheimer's disease (AD). Curcumin was recently reported to improve learning and memory in animal models of AD. Little is currently known about the specific mechanisms by which Aβ affects neuronal excitability and curcumin ameliorates synaptic transmission in the hippocampus. Organotypic hippocampal slice cultures exposed to Aβ1-42 were used to study the neuroprotective effects of curcumin through a spectral analysis of multi-electrode array (MEA) recordings of spontaneous neuronal activity. Curcumin counteracted both deleterious effects of Aβ: the initial synaptic dysfunction and the later neuronal death. The analysis of MEA recordings of spontaneous neuronal activity showed an attenuation of signal propagation induced by Aβ before cell death, and curcumin-induced alterations to local field potential (LFP) phase coherence. Curcumin-mediated attenuation of Aβ-induced synaptic dysfunction involved the regulation of synaptic proteins, namely phospho-CaMKII and phospho-synapsin I. Taken together, our results extend the neuroprotective role of curcumin to the synaptic level. The identification of the mechanisms underlying the effects of curcumin may lead to new targets for future AD therapies.
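A schematic version of such a spectral analysis for a single channel might look as follows. The sampling rate, oscillation frequency, and amplitudes are arbitrary assumptions for illustration, not the study's recording parameters:

```python
import numpy as np
from scipy.signal import welch

# Simulate a single MEA/LFP-like trace: an 8 Hz oscillation plus
# broadband noise (all parameters are invented for this sketch).
fs = 1000.0                                   # Hz, assumed sampling rate
t = np.arange(0, 10, 1 / fs)                  # 10 s of signal
lfp = 50e-6 * np.sin(2 * np.pi * 8 * t)       # 8 Hz oscillatory component (V)
lfp += 10e-6 * np.random.default_rng(1).normal(size=t.size)  # noise floor

# Welch power spectral density, the standard workhorse for this analysis.
freqs, psd = welch(lfp, fs=fs, nperseg=2048)
peak_freq = freqs[np.argmax(psd)]             # dominant oscillation frequency
low_band_power = psd[freqs < 50].sum()        # total power below 50 Hz
```

Comparing quantities like `peak_freq` and `low_band_power` between control and Aβ-exposed (or curcumin-treated) recordings is the kind of readout a spectral MEA analysis produces.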

Relevance: 10.00%

Abstract:

Paraconsistent logics are non-classical logics which allow non-trivial and consistent reasoning about inconsistent axioms. They have been proposed as a formal basis for handling inconsistent data, as commonly arise in human enterprises, and as methods for fuzzy reasoning, with applications in Artificial Intelligence and the control of complex systems. Formalisations of paraconsistent logics usually require heroic mathematical efforts to provide a consistent axiomatisation of an inconsistent system. Here we use transreal arithmetic, which is known to be consistent, to arithmetise a paraconsistent logic. This is theoretically simple and should lead to efficient computer implementations. We introduce the metalogical principle of monotonicity, which is a very simple way of making logics paraconsistent. Our logic has dialetheic truth values which are both False and True. It allows contradictory propositions and variable contradictions, but blocks literal contradictions. Thus literal reasoning, in this logic, forms an on-the-fly, syntactic partition of the propositions into internally consistent sets. We show how the set of all paraconsistent possible worlds can be represented in a transreal space. During the development of our logic we discuss how other paraconsistent logics could be arithmetised in transreal arithmetic.
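The key property exploited here, that transreal arithmetic is total and consistent, can be sketched in a few lines. In transreal arithmetic division is defined everywhere: t/0 is +∞ for t > 0, −∞ for t < 0, and 0/0 is the absorbing element nullity (Φ). The snippet below is a toy fragment of the arithmetic only, not the paper's arithmetisation of its logic; IEEE NaN is reused to stand in for nullity:

```python
import math

PHI = float('nan')          # nullity, modelled here with IEEE NaN

def tr_div(a, b):
    """Total (transreal) division: never raises, never leaves the space."""
    if math.isnan(a) or math.isnan(b):
        return PHI          # nullity absorbs every operation
    if b == 0:
        if a > 0:
            return math.inf
        if a < 0:
            return -math.inf
        return PHI          # 0/0 = nullity
    return a / b

def tr_add(a, b):
    """Total (transreal) addition."""
    if math.isnan(a) or math.isnan(b):
        return PHI
    if math.isinf(a) and math.isinf(b) and (a > 0) != (b > 0):
        return PHI          # inf + (-inf) = nullity
    return a + b
```

Because every operation is total, "undefined" expressions such as 0/0 no longer break the system; they land on nullity, which is what makes a consistent arithmetisation of inconsistency-tolerant reasoning plausible.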

Relevance: 10.00%

Abstract:

We explore the role of deeply held beliefs, known as social axioms, in the context of employee-organization relationships. Specifically, we examine how the beliefs identified as social cynicism and reward for application moderate the relationship between employees' work-related experiences, perceptions of CSR, attitudes, and behavioral intentions toward their firm. Using a sample of 130 retail employees, we find that CSR affects employees low on social cynicism more positively, and reduces their distrust more, than it does for cynical employees. Employees exhibiting a strong reward-for-application belief are less positively affected by CSR, whereas their experiences of other work-related factors are more likely to reduce distrust. Our findings suggest the need for a differentiated view of CSR in the context of employee studies, and offer suggestions for future research and management practice.
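Moderation of this kind is typically tested with an interaction term in a regression. The sketch below simulates such a pattern; the variables, signs, and effect sizes are invented for illustration and are not the study's data or model:

```python
import numpy as np

# Simulated moderation: does cynicism (the moderator) dampen the
# CSR-perception -> trust relationship? All coefficients are assumptions.
rng = np.random.default_rng(2)
n = 130                                    # sample size matching the study
csr = rng.normal(0, 1, n)                  # perceived CSR (standardized)
cyn = rng.normal(0, 1, n)                  # social cynicism (standardized)
trust = 0.5 * csr - 0.3 * cyn - 0.25 * csr * cyn + rng.normal(0, 0.5, n)

# OLS with an interaction term: trust ~ 1 + csr + cyn + csr*cyn
X = np.column_stack([np.ones(n), csr, cyn, csr * cyn])
beta, *_ = np.linalg.lstsq(X, trust, rcond=None)
# A negative interaction coefficient (beta[3]) means CSR raises trust
# less for highly cynical employees, i.e. cynicism moderates the effect.
```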

Relevance: 10.00%

Abstract:

Monolayers of neurons and glia have been employed for decades as tools for the study of cellular physiology and as the basis for a variety of standard toxicological assays. A variety of three-dimensional (3D) culture techniques have been developed with the aim of producing cultures that recapitulate desirable features of intact tissue. In this study, we investigated the effect of preparing primary mouse mixed neuron and glial cultures in the inert 3D scaffold Alvetex. Using planar multielectrode arrays, we compared the spontaneous bioelectrical activity exhibited by neuroglial networks grown in the scaffold with that seen in the same cells prepared as conventional monolayer cultures. Two-dimensional (monolayer; 2D) cultures exhibited a significantly higher spike firing rate than 3D cultures, although no difference was seen in total signal power (<50 Hz), while the pharmacological responsiveness of each culture type to antagonism of GABAAR, NMDAR and AMPAR was highly comparable. Interestingly, correlation of burst events, spike firing and total signal power (<50 Hz) revealed that local field potential events were associated with action-potential-driven bursts, as was the case for 2D cultures. Moreover, glial morphology was more physiologically normal in 3D cultures. These results show that 3D culture in inert scaffolds represents a more physiologically normal preparation, which has advantages for physiological, pharmacological, toxicological and drug-development studies, particularly given the extensive use of such preparations in high-throughput and high-content systems.

Relevance: 10.00%

Abstract:

Policy hierarchies and automated policy refinement are powerful approaches to simplifying the administration of security services in complex network environments. A crucial issue for the practical use of these approaches is ensuring the validity of the policy hierarchy: since the policy sets for the lower levels are automatically derived from the abstract policies (defined by the modeller), we must be sure that the derived policies uphold the high-level ones. This paper builds upon previous work on Model-based Management, particularly on the Diagram of Abstract Subsystems approach, and goes further to propose a formal validation approach for the policy hierarchies yielded by the automated policy refinement process. We establish general validation conditions for a multi-layered policy model, i.e. necessary and sufficient conditions that a policy hierarchy must satisfy so that the lower-level policy sets are valid refinements of the higher-level policies according to the criteria of consistency and completeness. Relying upon the validation conditions and upon axioms about the model's representativeness, two theorems are proved to ensure compliance between the resulting system behaviour and the abstract policies that are modelled.
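A toy rendering of the two refinement criteria, with policies reduced to sets of permitted (subject, target) pairs, can make them concrete. This representation is our own simplification for illustration, not the paper's Diagram of Abstract Subsystems model:

```python
# Consistency: every derived permission is licensed by the abstract policy.
# Completeness: every abstract permission is realised by some derived rule.
# Policies are modelled as sets of (subject, target) pairs.

def is_consistent(abstract, derived):
    """No derived permission exceeds what the abstract policy allows."""
    return derived <= abstract

def is_complete(abstract, derived):
    """Every abstract permission is covered by the derived policy set."""
    return abstract <= derived

def is_valid_refinement(abstract, derived):
    """A refinement is valid when it is both consistent and complete."""
    return is_consistent(abstract, derived) and is_complete(abstract, derived)

# Hypothetical example policies:
abstract = {("staff", "mailserver"), ("admin", "database")}
good = {("staff", "mailserver"), ("admin", "database")}
```

Under this toy encoding, dropping a rule breaks completeness while adding an unlicensed rule breaks consistency, which mirrors the two directions the paper's validation conditions must cover.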

Relevance: 10.00%

Abstract:

We present measurements of the charge balance function, from the charged particles, for diverse pseudorapidity and transverse momentum ranges in Au + Au collisions at √s_NN = 200 GeV using the STAR detector at RHIC. We observe that the balance function is boost-invariant within the pseudorapidity coverage [-1.3, 1.3]. The balance function, properly scaled by the width of the observed pseudorapidity window, does not depend on the position or size of the pseudorapidity window. This scaling property also holds for particles in different transverse momentum ranges. In addition, we find that the width of the balance function decreases monotonically with increasing transverse momentum for all centrality classes.
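For reference, a common definition of the charge balance function in bins of the pair separation Δη is B(Δη) = ½ [(N+- − N++)/N+ + (N-+ − N--)/N-], where N+- counts unlike-sign pairs at that separation and N+, N- are the numbers of positive and negative particles. The sketch below computes it for a single fabricated "event"; the particle lists are invented, and the pair-counting convention is one of several in use:

```python
import numpy as np

def balance_function(eta_pos, eta_neg, bins):
    """B(dEta) = 0.5 * [ (N+- - N++)/N+ + (N-+ - N--)/N- ],
    with like-sign pairs counted once (no self-pairs) and N+- = N-+
    when only |dEta| is histogrammed."""
    eta_pos = np.asarray(eta_pos, dtype=float)
    eta_neg = np.asarray(eta_neg, dtype=float)

    def like_sign(a):
        i, j = np.triu_indices(len(a), k=1)   # unordered pairs, no self-pairs
        return np.histogram(np.abs(a[i] - a[j]), bins=bins)[0].astype(float)

    unlike = np.abs(eta_pos[:, None] - eta_neg[None, :]).ravel()
    n_pm = np.histogram(unlike, bins=bins)[0].astype(float)
    n_pp, n_mm = like_sign(eta_pos), like_sign(eta_neg)
    return 0.5 * ((n_pm - n_pp) / len(eta_pos) + (n_pm - n_mm) / len(eta_neg))

# Fabricated event: two +/- pairs created close together in pseudorapidity.
b = balance_function([0.10, 0.50], [0.12, 0.55], bins=[0.0, 0.2, 3.0])
```

A narrow balance function (large values at small Δη, as in this balanced toy event) signals that opposite charges are created close together, which is the physics content behind the width measurements reported above.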

Relevance: 10.00%

Abstract:

Nuclear collisions recreate conditions in the universe microseconds after the Big Bang. Only a very small fraction of the emitted fragments are light nuclei, but these states are of fundamental interest. We report the observation of antihypertritons, each comprising an antiproton, an antineutron, and an antilambda hyperon, produced by colliding gold nuclei at high energy. Our analysis yields 70 ± 17 antihypertritons and 157 ± 30 hypertritons (³ΛH). The measured yields of the hypertriton (antihypertriton) and of ³He (anti-³He) are similar, suggesting an equilibrium in the coordinate- and momentum-space populations of up, down, and strange quarks and antiquarks, unlike the pattern observed at lower collision energies. The production and properties of antinuclei, and of nuclei containing strange quarks, have implications spanning nuclear and particle physics, astrophysics, and cosmology.