920 results for J-o theories
Abstract:
Research carried out during a stay at the Columbia Law School of New York, United States, between September and November 2006. European Community and Spanish law regulate compensation for the damage caused by the use of, or proximity to, a defective product, thanks to the ideological impetus exerted by American case law. The principle of strict liability that governs the European directive is the product of a background shift that took place in the United States in the 1960s, coinciding with the technological revolution and the onset of mass production and mass consumption. These phenomena prompted the search for legal mechanisms capable of channelling compensation for the harm inherent in technologically advanced industrial activities. Their main effect was a concern for a fairer social distribution of the so-called “costs of progress”, a concern which, in legal terms, led to the solution of holding the manufacturer liable, even without fault, for the damage arising from its industrial production. Credit for this solution belongs to certain American theorists of enterprise liability who, drawing on ideas formulated at the beginning of the twentieth century by labour-law scholars, concluded that the producing firm is the party best placed to bear the cost of the industrial accident: by imposing on the manufacturer a liability detached from any fault of its own in causing the accident, the manufacturer will pass on, through the price of its products, the cost of the liability insurance it is compelled to take out to meet its strict (risk-based) liability, so that the cost of accidents ends up being borne by the consuming public through the higher prices of the products it buys. The repercussions of this construction have been both legislative and judicial.
Abstract:
The first main result of the paper is a criterion for a partially commutative group G to be a domain. It allows us to reduce the study of algebraic sets over G to the study of irreducible algebraic sets, and reduce the elementary theory of G (of a coordinate group over G) to the elementary theories of the direct factors of G (to the elementary theory of coordinate groups of irreducible algebraic sets). Then we establish normal forms for quantifier-free formulas over a non-abelian directly indecomposable partially commutative group H. Analogously to the case of free groups, we introduce the notion of a generalised equation and prove that the positive theory of H has quantifier elimination and that arbitrary first-order formulas lift from H to H * F, where F is a free group of finite rank. As a consequence, the positive theory of an arbitrary partially commutative group is decidable.
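For readers unfamiliar with the terminology, a partially commutative group (also called a graph group or right-angled Artin group) is determined by a finite simplicial graph \Gamma; the presentation below is the standard one, with notation chosen here for illustration rather than taken from the paper:

\[
G(\Gamma) \;=\; \bigl\langle\, x_1,\dots,x_n \;\bigm|\; [x_i,x_j]=1 \text{ whenever } \{x_i,x_j\} \text{ is an edge of } \Gamma \,\bigr\rangle,
\]

so that a graph with no edges yields a free group of rank n and a complete graph yields a free abelian group of rank n.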
Abstract:
Samples of volcanic rocks from Alboran Island, the Alboran Sea floor and the Gourougou volcanic centre in northern Morocco have been analyzed for major and trace elements and Sr-Nd isotopes to test current theories on the tectonic and geodynamic evolution of the Alboran Sea. The Alboran Island samples are low-K tholeiitic basaltic andesites whose depleted contents of HFS elements (~0.5 × N-MORB), especially Nb (~0.2 × N-MORB), show marked geochemical parallels with volcanics from immature intra-oceanic arcs and back-arc basins. Several of the submarine samples have similar compositions, one showing low-Ca boninite affinity. ¹⁴³Nd/¹⁴⁴Nd ratios fall in the same range as many island-arc and back-arc basin samples, whereas ⁸⁷Sr/⁸⁶Sr ratios (on leached samples) are somewhat more radiogenic. Our data point to active subduction taking place beneath the Alboran region in Miocene times, and imply the presence of an associated back-arc spreading centre. Our sea-floor suite includes a few more evolved dacite and rhyolite samples with (⁸⁷Sr/⁸⁶Sr)₀ up to 0.717 that probably represent varying degrees of crustal melting. The shoshonite and high-K basaltic andesite lavas from Gourougou have normalized incompatible-element enrichment diagrams and Ce/Y ratios comparable to those of shoshonitic volcanics from oceanic island arcs, though they have less pronounced Nb deficits. They are much less LIL- and LREE-enriched than continental arc analogues and post-collisional shoshonites from Tibet. The magmas probably originated by melting in subcontinental lithospheric mantle that had experienced negligible subduction input. Sr-Nd isotope compositions point to significant crustal contamination, which appears to account for the small Nb anomalies. The unmistakable supra-subduction zone (SSZ) signature shown by our Alboran basalt and basaltic andesite samples refutes geodynamic models that attribute all Neogene volcanism in the Alboran domain to decompression melting of upwelling asthenosphere arising from convective thinning of over-thickened lithosphere. Our data support recent models in which subsidence is caused by westward rollback of an eastward-dipping subduction zone beneath the westernmost Mediterranean. Moreover, severance of the lithosphere at the edges of the rolling-back slab provides opportunities for locally melting lithospheric mantle, offering a possible explanation for the shoshonitic volcanism seen in northern Morocco and, more sporadically, in SE Spain.
Abstract:
The present thesis is a contribution to the debate on the applicability of mathematics; it examines the interplay between mathematics and the world, using historical case studies. The first part of the thesis consists of four small case studies. In chapter 1, I criticize "ante rem structuralism", proposed by Stewart Shapiro, by showing that his so-called "finite cardinal structures" are in conflict with mathematical practice. In chapter 2, I discuss Leonhard Euler's solution to the Königsberg bridges problem. I propose interpreting Euler's solution both as an explanation within mathematics and as a scientific explanation. I put the insights from the historical case to work against recent philosophical accounts of the Königsberg case. In chapter 3, I analyze the predator-prey model, proposed by Lotka and Volterra. I extract some interesting philosophical lessons from Volterra's original account of the model, such as: Volterra's remarks on mathematical methodology; the relation between mathematics and idealization in the construction of the model; some relevant details in the derivation of the Third Law; and notions of intervention that are motivated by one of Volterra's main mathematical tools, phase spaces. In chapter 4, I discuss scientific and mathematical attempts to explain the structure of the bee's honeycomb. In the first part, I discuss a candidate explanation, based on the mathematical Honeycomb Conjecture, presented in Lyon and Colyvan (2008). I argue that this explanation is not scientifically adequate. In the second part, I discuss other mathematical, physical and biological studies that could contribute to an explanation of the bee's honeycomb. The upshot is that most of the relevant mathematics is not yet sufficiently understood, and there is also an ongoing debate as to the biological details of the construction of the bee's honeycomb. The second part of the thesis is a larger case study from physics: the genesis of general relativity (GR). Chapter 5 is a short introduction to the history, physics and mathematics relevant to the genesis of GR. Chapter 6 discusses the historical question of what Marcel Grossmann contributed to the genesis of GR. I examine the so-called "Entwurf" paper, an important joint publication by Einstein and Grossmann containing the first tensorial formulation of GR. By comparing Grossmann's part with the mathematical theories he used, we can gain a better understanding of what is involved in the first steps of assimilating a mathematical theory to a physical question. In chapter 7, I introduce and discuss a recent account of the applicability of mathematics to the world, the Inferential Conception (IC), proposed by Bueno and Colyvan (2011). I give a short exposition of the IC, offer some critical remarks on the account, discuss potential philosophical objections, and propose some extensions of the IC. In chapter 8, I put the Inferential Conception to work in the historical case study: the genesis of GR. I analyze three historical episodes, using the conceptual apparatus provided by the IC. In episode one, I investigate how the starting point of the application process, the "assumed structure", is chosen. Then I analyze two small application cycles that led to revisions of the initial assumed structure. In episode two, I examine how the application of "new" mathematics - the application of the Absolute Differential Calculus (ADC) to gravitational theory - meshes with the IC.
In episode three, I take a closer look at two of Einstein's failed attempts to find a suitable differential operator for the field equations, and apply the conceptual tools provided by the IC so as to better understand why he erroneously rejected both the Ricci tensor and the November tensor in the Zurich Notebook.
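For reference, the predator-prey model discussed in chapter 3 is usually written as the following pair of differential equations; the symbols follow the modern textbook convention rather than Volterra's original notation:

\[
\frac{dx}{dt} \;=\; \alpha x \;-\; \beta x y,
\qquad
\frac{dy}{dt} \;=\; \delta x y \;-\; \gamma y,
\]

where x is the prey population, y the predator population, and \alpha, \beta, \gamma, \delta are positive parameters. Volterra's Third Law (the "disturbance of the averages") states, roughly, that removing both species in proportion to their abundance raises the time-averaged prey population and lowers the time-averaged predator population.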
Abstract:
The hypothesis that extravagant ornaments signal parasite resistance has received support in several species for ornamented males but more rarely for ornamented females. However, recent theories have proposed that females should often be under sexual selection, and therefore females may signal the heritable capacity to resist parasites. We investigated this hypothesis in the socially monogamous barn owl, Tyto alba, in which females exhibit on average more and larger black spots on the plumage than males, and in which males have been suggested to choose a mate with respect to female plumage spottiness. We hypothesized that the proportion of the plumage surface covered by black spots signals parasite resistance. In line with this hypothesis, we found that the ectoparasitic fly, Carnus hemapterus, was less abundant on young raised by more heavily spotted females, and that those flies were less fecund. In an experiment in which entire clutches were cross-fostered between nests, we found that the fecundity of the flies collected on nestlings was negatively correlated with the genetic mother's plumage spottiness. These results suggest that the ability to resist parasites covaries with the extent of female plumage spottiness. Among females collected dead along roads, those with many black spots had a small bursa of Fabricius. Given that parasites trigger the development of this immune organ, this observation further suggests that more spotted females are usually less parasitized. The same analyses performed on male plumage spottiness all yielded non-significant results. To our knowledge, this study is the first to show that a heritable secondary sexual characteristic displayed by females reflects parasite resistance.
Abstract:
This article presents a brief systemic intervention method (IBS) consisting of six sessions, developed in an outpatient service for couples and families, together with two research projects carried out in collaboration with the Institute for Psychotherapy of the University of Lausanne. The first project is quantitative and aims at evaluating the effectiveness of the IBS. One of its main features is that outcomes are assessed at different levels of individual and family functioning: 1) symptoms and individual functioning; 2) quality of the marital relationship; 3) parental and co-parental relationships; 4) familial relationships. The second project is a qualitative case study of a marital therapy, which identifies and analyses significant moments of the therapeutic process from the patients' perspective. The methodology was largely inspired by Daniel Stern's work on "moments of meeting" in psychotherapy. Results show that patients' theories about relationship and change are important elements that deepen our understanding of the change process in couple and family therapy. The interest of associating clinicians and researchers in the development and validation of a new clinical model is discussed.
Abstract:
We define the Jacobian of a Riemann surface with analytically parametrized boundary components. These Jacobians belong to a moduli space of "open abelian varieties" which satisfies gluing axioms similar to those of Riemann surfaces, and therefore allows a notion of "conformal field theory" to be defined on this space. We further prove that chiral conformal field theories corresponding to even lattices factor through this moduli space of open abelian varieties.
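As a point of reference (not part of the abstract), the classical object being generalized is the Jacobian of a closed Riemann surface X of genus g,

\[
\operatorname{Jac}(X) \;=\; H^0\!\bigl(X,\Omega^1_X\bigr)^{*} \big/\, H_1(X,\mathbb{Z}) \;\cong\; \mathbb{C}^{g}/\Lambda,
\]

where H_1(X,\mathbb{Z}) embeds as a full-rank lattice \Lambda via integration of holomorphic 1-forms. The paper's construction attaches an analogous object to a surface whose boundary circles carry analytic parametrizations, so that Jacobians can be glued along boundaries in the same way as the underlying surfaces.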
Abstract:
This paper reviews four economic theories of leadership selection in conflictual settings. The first, by Cukierman and Tomassi (1998), labeled the ‘information rationale’, argues that hawks may actually be necessary to initiate peace agreements. The second, the ‘bargaining rationale’, borrowing from Hamlin and Jennings (2007), agrees with the conventional wisdom that doves are more likely to secure peace, but holds that post-conflict there are good reasons for hawks to be rationally selected. The third, found in Jennings and Roelfsema (2008), is labeled the ‘social psychological rationale’. It captures the idea of a competition over which group can form the strongest identity, and so can apply to group choices that do not impinge upon bargaining power. As in the bargaining rationale, dove selection can be predicted during conflict, but hawk selection post-conflict. The fourth, the ‘expressive rationale’, predicts that, regardless of the underlying structure of the game (informational, bargaining or psychological), the large-group nature of electoral decision-making renders individual decision makers non-decisive in determining the outcome, which may lead them to make choices based primarily on emotions; such choices may be invariant to the mode of group interaction, be it conflictual or peaceful. Finally, the paper analyses the extent to which these theories can throw light on Northern Ireland's electoral history over the last 25 years.
Abstract:
A growing literature has focussed attention on ‘expressive’ rather than ‘instrumental’ behaviour in political settings, particularly voting. A common criticism of the expressive idea is that its myriad possibilities make it rather ad hoc and lacking in both predictive and normative bite. We agree that no single clear definition of expressive behaviour has emerged to date, and that no detailed foundations of specific expressive motivations have been provided, so that rather few specific implications have been drawn from the analysis of expressive behaviour. In response, we provide a foundational discussion and definition of expressive behaviour that accounts for a range of factors. We also discuss the content of expressive choice, distinguishing between moral, social and emotional cases, and relate this more general account to the specific theories of expressive choice in the literature. Finally, we discuss the normative and institutional implications of expressive behaviour.
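The instrumental/expressive contrast is often framed in terms of the familiar calculus of voting, recalled here only as a reference point (it is not the formalism developed in the paper):

\[
R \;=\; pB \;-\; C \;+\; D,
\]

where R is the net return to voting, p the probability of being decisive, B the instrumental benefit from one's preferred outcome, C the cost of voting, and D the expressive (or 'duty') component. Because p is vanishingly small in large electorates, the pB term is negligible and behaviour is driven largely by D, which is what accounts of expressive choice seek to unpack.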
Abstract:
Quantum indeterminism is frequently invoked as a solution to the problem of how a disembodied soul might interact with the brain (as Descartes proposed), and is sometimes invoked in theories of libertarian free will even when they do not involve dualistic assumptions. Taking as an example the Eccles-Beck model of interaction between self (or soul) and brain at the level of synaptic exocytosis, I here evaluate the plausibility of these approaches. I conclude that Heisenbergian uncertainty is too small to affect synaptic function, and that amplification by chaos or by other means does not provide a solution to this problem. Furthermore, even if Heisenbergian effects did modify brain functioning, the changes would be swamped by those due to thermal noise. Cells and neural circuits have powerful noise-resistance mechanisms that provide adequate protection against thermal noise and must therefore be more than sufficient to buffer against Heisenbergian effects. Other forms of quantum indeterminism must be considered, because these can be much greater than Heisenbergian uncertainty, but they have not so far been shown to play a role in the brain.
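An illustrative back-of-envelope comparison (not the paper's own calculation) conveys the scale of the argument: the energy uncertainty allowed by the Heisenberg time-energy relation over a synaptic timescale is dwarfed by the thermal energy at body temperature,

\[
\Delta E \;\gtrsim\; \frac{\hbar}{2\,\Delta t} \;\approx\; \frac{1.05\times10^{-34}\ \mathrm{J\,s}}{2\times10^{-3}\ \mathrm{s}} \;\approx\; 5\times10^{-32}\ \mathrm{J},
\qquad
k_{B}T \;\approx\; 1.38\times10^{-23}\ \mathrm{J\,K^{-1}}\times 310\ \mathrm{K} \;\approx\; 4.3\times10^{-21}\ \mathrm{J},
\]

so for millisecond synaptic events thermal fluctuations exceed the Heisenbergian energy scale by some ten to eleven orders of magnitude.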
Abstract:
This paper estimates individual wage equations in order to test two rival non-nested theories of economic agglomeration: New Economic Geography (NEG), as represented by the NEG wage equation, and urban economics (UE) theory, in which wages relate to employment density. The paper makes an original contribution in being, apparently, the first empirical paper to examine agglomeration processes associated with contemporary theory using micro-level data, highlighting the role of gender and other individual-level characteristics. For male respondents, there is no significant evidence that wage levels are an outcome of the mechanisms suggested by NEG or UE theory, but this is not the case for female respondents. We speculate on the reasons for the gender difference.
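For orientation, the two rival specifications are commonly written as follows; the exact estimating equations in the paper may differ, notably in how individual-level characteristics enter. The NEG nominal wage equation, in the form popularized by Fujita, Krugman and Venables, is

\[
w_r \;=\; \Bigl[\sum_{s} Y_s\, T_{rs}^{\,1-\sigma}\, P_s^{\,\sigma-1}\Bigr]^{1/\sigma},
\]

where w_r is the wage in region r, Y_s income in region s, T_{rs} the trade cost between r and s, P_s the price index in s, and \sigma the elasticity of substitution, while the UE alternative typically relates log wages to the log of employment density, \log w_r = a + b \log(\mathrm{density}_r).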
Abstract:
Theories of firm profitability make different predictions about the relative importance of firm-, industry- and time-specific factors. We assess, empirically, the relevance of these effects over a sixteen-year period in India, as a regime of control and regulation pre-1985 gave way to partial liberalisation between 1985 and 1991 and to more decisive liberalisation after 1991. We find that firm effects are important throughout: when rent-seeking opportunities proliferated as well as when competitive forces were enhanced by institutional change. In contrast, industry effects significantly increased after liberalisation, suggesting that industry structure matters more within competitive markets. These findings help us understand the relevance of different models over different stages of liberalisation, and have important implications for both theory and policy.
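The relative importance of these effects is typically gauged with a components-of-variance specification; the sketch below is a standard form of such a model, not necessarily the exact specification estimated in the paper:

\[
\pi_{ijt} \;=\; \mu \;+\; \alpha_i \;+\; \beta_j \;+\; \gamma_t \;+\; \varepsilon_{ijt},
\]

where \pi_{ijt} is the profitability of firm i in industry j in year t, \alpha_i, \beta_j and \gamma_t are firm, industry and year effects, and the shares of variance attributable to each set of effects are compared across the pre-1985, 1985-1991 and post-1991 sub-periods.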
Abstract:
‘Modern’ Phillips curve theories predict that inflation is an integrated, or near integrated, process. However, inflation appears bounded above and below in developed economies and so cannot be ‘truly’ integrated; it is more likely stationary around a shifting mean. If agents believe inflation is integrated, as in the ‘modern’ theories, then they are making systematic errors concerning the statistical process of inflation. An alternative theory of the Phillips curve is developed that is consistent with the ‘true’ statistical process of inflation. It is demonstrated that United States inflation data are consistent with the alternative theory but not with the existing ‘modern’ theories.
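The statistical contrast at issue can be written compactly; the notation here is illustrative rather than taken from the paper:

\[
\text{integrated: } \pi_t = \pi_{t-1} + \varepsilon_t,
\qquad
\text{stationary around a shifting mean: } \pi_t = \mu_t + \varepsilon_t,
\]

where \pi_t is inflation, \varepsilon_t a stationary disturbance, and \mu_t a mean that changes only occasionally. In the first case shocks have permanent effects and inflation is unbounded in the long run; in the second, inflation fluctuates around the prevailing mean, consistent with the bounded behaviour observed in developed economies.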
Abstract:
Achievement careers are regarded as a distinctive element of the post-war period in occidental societies. Such a career was at once the modal trajectory of men in the modern segments of the middle class and a social emblem of progress and success. However, if the achievement career came to be a biographical pattern with great normative power, its precise sequential course remained vague. Theories of the 1960s and 1970s described it as an orderly advancement within large firms. By the 1990s, scholars postulated an erosion of the organizational structures that once contributed to the institutionalization of careers, accompanied by a weakening, through management discourse, of the normative weight of the achievement career. We question the thesis of the corrosion of the achievement career by analysing the trajectories of 442 engineers and business economists in Switzerland with regard to their orderliness, loyalty, and temporal rhythm. An inspection of career types and cohorts reveals that even if we observe a decline of loyalty over time, hierarchical orderliness is not affected by these changes. It is above all technical-industrial careers that fit the loyal and regular pattern; yet this trajectory type represents only a minority and is by far the slowest and least successful in terms of hierarchical ascent.
Abstract:
Game theorists typically assume that changing a game's payoff levels (by adding the same constant to, or subtracting it from, all payoffs) should not affect behavior. While this invariance is an implication of the theory when payoffs mirror expected utilities, it is an empirical question when the “payoffs” are actually money amounts. In particular, if individuals treat monetary gains and losses differently, then payoff-level changes may matter when they result in positive payoffs becoming negative, or vice versa. We report the results of a human-subjects experiment designed to test for two types of loss avoidance: certain-loss avoidance (avoiding a strategy leading to a sure loss, in favor of an alternative that might lead to a gain) and possible-loss avoidance (avoiding a strategy leading to a possible loss, in favor of an alternative that leads to a sure gain). Subjects in the experiment play three versions of Stag Hunt, which are identical up to the level of payoffs, under a variety of treatments. We find differences in behavior across the three versions of Stag Hunt; these differences are hard to detect in the first round of play, but grow over time. When significant, the differences we find are in the direction predicted by certain- and possible-loss avoidance. Our results carry implications for games with multiple equilibria, and for theories that attempt to select among equilibria in such games.
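To fix ideas, the payoff-level manipulation can be illustrated with hypothetical numbers (these are not the payoffs used in the experiment). Starting from a baseline Stag Hunt, subtracting a constant from every entry leaves best responses and equilibria unchanged but can introduce possible or certain losses:

\[
\begin{array}{c|cc}
 & \text{Stag} & \text{Hare} \\ \hline
\text{Stag} & 8 & 0 \\
\text{Hare} & 6 & 6
\end{array}
\qquad
\text{subtract } 2:\;
\begin{array}{c|cc}
 & \text{Stag} & \text{Hare} \\ \hline
\text{Stag} & 6 & -2 \\
\text{Hare} & 4 & 4
\end{array}
\qquad
\text{subtract } 7:\;
\begin{array}{c|cc}
 & \text{Stag} & \text{Hare} \\ \hline
\text{Stag} & 1 & -7 \\
\text{Hare} & -1 & -1
\end{array}
\]

(Entries are the row player's payoffs in a symmetric game.) In the middle game, Stag risks a loss while Hare guarantees a gain, the situation in which possible-loss avoidance predicts Hare; in the right-hand game, Hare yields a certain loss while Stag might yield a gain, the situation in which certain-loss avoidance predicts Stag.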