953 results for "Explanation of the reasoning"
Abstract:
This paper presents and empirically discusses a theoretical framework for analyzing the selection of governance structures used to implement collaboration agreements between firms and Technological Centers. The framework builds on Transaction Cost and Property Rights theoretical assumptions, complemented with several proposals from Transactional Value Theory; the latter is used to add some dynamism to the selection of governance structures. As empirical evidence for this theoretical explanation, we analyse four real experiences of collaboration between firms and one Technological Center, chosen to represent the typology of relationships that Technological Centers usually face. Among other findings, one key result stands out: R&D collaboration activities do not always need to be organized through hierarchical solutions. In cases where expected future benefits and/or reputation could play an important role, the traditional, more static theories cannot fully explain the governance structure selected for managing the R&D relationship. These results therefore justify further research on the adequacy of the framework presented in this paper in other contexts, for example R&D collaborations between firms, or between Universities or Public Research Centers and firms.
Abstract:
Marx's conclusions about the falling rate of profit have been analysed exhaustively. Usually this has been done by building models which broadly conform to Marx's views and then showing that his conclusions are either correct or, more frequently, that they cannot be sustained. By contrast, this paper examines, both descriptively and analytically, Marx's arguments from the Hodgskin section of Theories of Surplus Value, the General Law section of the recently published Volume 33 of the Collected Works and Chapter 3 of Volume III of Capital. It also gives a new interpretation of Part III of this last work. The main conclusions are, first, that Marx had an intrinsic explanation of the falling rate of profit but was unable to give it a satisfactory demonstration and, second, that he had a number of subsidiary explanations of which the most important was resource scarcity. The paper closes with an assessment of the pedigree of various currents of Marxian thought on this issue.
Abstract:
This paper sets out a Marxian model based on Stephen Marglin's one-sector model with continuous substitution, extended by adding technical progress and land as a factor of production. It is then shown that capital accumulation causes the preconditions for the breakdown of capitalism to emerge: the organic composition of capital rises, the rate of profit falls and the rate of exploitation rises. A compressed history of the idea of the breakdown of capitalism is then set out, together with an explanation of how the model relates to it and how it may serve as the basis for further research.
Abstract:
The present thesis is a contribution to the debate on the applicability of mathematics; it examines the interplay between mathematics and the world, using historical case studies. The first part of the thesis consists of four small case studies. In chapter 1, I criticize "ante rem structuralism", proposed by Stewart Shapiro, by showing that his so-called "finite cardinal structures" are in conflict with mathematical practice. In chapter 2, I discuss Leonhard Euler's solution to the Königsberg bridges problem. I propose interpreting Euler's solution both as an explanation within mathematics and as a scientific explanation, and I put the insights from the historical case to work against recent philosophical accounts of the Königsberg case. In chapter 3, I analyze the predator-prey model proposed by Lotka and Volterra. I extract some interesting philosophical lessons from Volterra's original account of the model, such as: Volterra's remarks on mathematical methodology; the relation between mathematics and idealization in the construction of the model; some relevant details in the derivation of the Third Law; and notions of intervention that are motivated by one of Volterra's main mathematical tools, phase spaces. In chapter 4, I discuss scientific and mathematical attempts to explain the structure of the bee's honeycomb. In the first part, I discuss a candidate explanation, based on the mathematical Honeycomb Conjecture, presented in Lyon and Colyvan (2008), and argue that this explanation is not scientifically adequate. In the second part, I discuss other mathematical, physical and biological studies that could contribute to an explanation of the bee's honeycomb. The upshot is that most of the relevant mathematics is not yet sufficiently understood, and there is also an ongoing debate as to the biological details of the construction of the bee's honeycomb. The second part of the thesis is a larger case study from physics: the genesis of general relativity (GR).
Chapter 5 is a short introduction to the history, physics and mathematics that is relevant to the genesis of general relativity (GR). Chapter 6 discusses the historical question as to what Marcel Grossmann contributed to the genesis of GR. I will examine the so-called "Entwurf" paper, an important joint publication by Einstein and Grossmann, containing the first tensorial formulation of GR. By comparing Grossmann's part with the mathematical theories he used, we can gain a better understanding of what is involved in the first steps of assimilating a mathematical theory to a physical question. In chapter 7, I introduce, and discuss, a recent account of the applicability of mathematics to the world, the Inferential Conception (IC), proposed by Bueno and Colyvan (2011). I give a short exposition of the IC, offer some critical remarks on the account, discuss potential philosophical objections, and I propose some extensions of the IC. In chapter 8, I put the Inferential Conception (IC) to work in the historical case study: the genesis of GR. I analyze three historical episodes, using the conceptual apparatus provided by the IC. In episode one, I investigate how the starting point of the application process, the "assumed structure", is chosen. Then I analyze two small application cycles that led to revisions of the initial assumed structure. In episode two, I examine how the application of "new" mathematics - the application of the Absolute Differential Calculus (ADC) to gravitational theory - meshes with the IC. In episode three, I take a closer look at two of Einstein's failed attempts to find a suitable differential operator for the field equations, and apply the conceptual tools provided by the IC so as to better understand why he erroneously rejected both the Ricci tensor and the November tensor in the Zurich Notebook.
Abstract:
In recent years there has been extensive debate in the energy economics and policy literature on the likely impacts of improvements in energy efficiency. This debate has focussed on the notion of rebound effects. Rebound effects occur when improvements in energy efficiency actually stimulate the direct and indirect demand for energy in production and/or consumption. This phenomenon occurs through the impact of the increased efficiency on the effective, or implicit, price of energy. If demand is stimulated in this way, the anticipated reduction in energy use, and the consequent environmental benefits, will be partially or possibly even more than wholly (in the case of ‘backfire’ effects) offset. A recent report published by the UK House of Lords identifies rebound effects as a plausible explanation as to why recent improvements in energy efficiency in the UK have not translated to reductions in energy demand at the macroeconomic level, but calls for empirical investigation of the factors that govern the extent of such effects. Undoubtedly the single most important conclusion of recent analysis in the UK, led by the UK Energy Research Centre (UKERC) is that the extent of rebound and backfire effects is always and everywhere an empirical issue. It is simply not possible to determine the degree of rebound and backfire from theoretical considerations alone, notwithstanding the claims of some contributors to the debate. In particular, theoretical analysis cannot rule out backfire. Nor, strictly, can theoretical considerations alone rule out the other limiting case, of zero rebound, that a narrow engineering approach would imply. In this paper we use a computable general equilibrium (CGE) framework to investigate the conditions under which rebound effects may occur in the Scottish regional and UK national economies. Previous work has suggested that rebound effects will occur even where key elasticities of substitution in production are set close to zero. 
Here, we carry out a systematic sensitivity analysis, where we gradually introduce relative price sensitivity into the system, focusing in particular on elasticities of substitution in production and trade parameters, in order to determine conditions under which rebound effects become a likely outcome. We find that, while there is positive pressure for rebound effects even where (direct and indirect) demand for energy is very price inelastic, this may be partially or wholly offset by negative income and disinvestment effects, which also occur in response to falling energy prices.
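The limiting cases the debate turns on (zero rebound, full offset, backfire) can be stated as a simple ratio of realized to expected energy savings. A minimal sketch in Python; the function name and the example figures are purely illustrative, not taken from the paper:

```python
def rebound_pct(engineering_saving: float, realized_saving: float) -> float:
    """Rebound effect as a percentage of the expected (engineering) saving.

    0%    -> the full engineering saving is realized (zero rebound)
    100%  -> the expected saving is entirely offset
    >100% -> 'backfire': energy use rises despite the efficiency gain
    """
    if engineering_saving <= 0:
        raise ValueError("engineering_saving must be positive")
    return 100.0 * (1.0 - realized_saving / engineering_saving)

# An expected saving of 10 units, of which only 6 materialize: 40% rebound.
print(rebound_pct(10.0, 6.0))
# Energy use actually rises by 2 units: backfire, rebound above 100%.
print(rebound_pct(10.0, -2.0))
```

The sign convention mirrors the text: income and disinvestment effects that offset the positive price pressure push the realized saving back up towards the engineering saving, driving the rebound percentage towards zero.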
Abstract:
The geodynamic forces acting in the Earth's interior manifest themselves in a variety of ways. Volcanoes are amongst the most impressive examples in this respect but, as with an iceberg, they represent only the tip of a more extensive system hidden underground. This system consists of a source region where melt forms and accumulates, feeder connections through which magma is transported towards the surface, and different reservoirs where it is stored before it eventually erupts to form a volcano. A magma is a mixture of melt and crystals. The crystals may be extracted from the source region or form anywhere along the path to their final crystallization site, and they retain information about the overall plumbing system. The host rocks of an intrusion, in contrast, provide information at the emplacement level: they record the effects of the thermal and mechanical forces imposed by the magma. For a better understanding of the system, both parts, magmatic and metamorphic petrology, have to be integrated. I will demonstrate in my thesis that information from both is complementary; it is an iterative process, using constraints from one field to better constrain the other. Reading the history of the host rocks is not always straightforward. This is shown in chapter two, where a model for the formation of clustered garnets observed in the contact aureole is proposed. Fragments of garnets older than the intrusive rocks are overgrown by garnet crystallizing during the reheating caused by emplacement of the adjacent pluton. The formation of the clusters is therefore not a single event, as generally assumed, but the result of a two-stage process, namely the alteration of the old grains followed by the overgrowth and amalgamation of new garnet rims. This makes an important difference when applying petrological methods such as thermobarometry, geochronology or grain-size distributions. The thermal conditions in the aureole are a strong function of the emplacement style of the pluton; it is therefore necessary to understand the pluton before drawing conclusions about its aureole. A study investigating the intrusive rocks by means of field, geochemical, geochronological and structural methods is presented in chapter three. It provides important information about the assembly of the intrusion, as well as new insights into the nature of large, homogeneous plutons and the structure of the plumbing system in general. The incremental emplacement of the Western Adamello tonalite is documented, and the existence of an intermediate reservoir beneath homogeneous plutons is proposed. In chapter four it is demonstrated that information extracted from the host rock provides further constraints on the emplacement process of the intrusion. The temperatures obtained by combining field observations with phase-petrology modeling are used together with thermal models to constrain the magmatic activity immediately adjacent to the aureole. Instead of using the thermal models to check the petrological result, the inverse approach was taken: the model parameters were varied until a match with the aureole temperatures was obtained. It is shown that only a few combinations give a match, and that temperature estimates from the aureole can therefore constrain the frequency of activity of ancient magmatic systems. In the fifth chapter, the anisotropy of magnetic susceptibility of intrusive rocks is compared to 3D tomography. The measured signal is a function of the shape and distribution of ferromagnetic grains and is often used to infer flow directions of magma. It turns out that the signal is dominated by the shape of the magnetic crystals and, where they form tight clusters, also by their distribution. This is in good agreement with predictions from the theoretical and experimental literature. In the sixth chapter, arguments for partial melting of host-rock carbonates are presented.
While at first very surprising, this is to be expected in light of the results from the intrusive study and experiments from the literature. Partial melting is documented by compelling microstructures and by geochemical and structural data. The necessary conditions are far from extreme, and this process might be more frequent than previously thought. The carbonate melt is highly mobile: it can move along grain boundaries, infiltrate other rocks and ultimately alter the existing mineral assemblage. Finally, a mineralogical curiosity is presented in chapter seven. An assemblage of magnesite and calcite is observed in apparent equilibrium. It is well known that these two carbonates are not stable together in the system CaO-MgO-FeO-CO2; indeed, magnesite and calcite should react to form dolomite during metamorphism. The explanation proposed for this "forbidden" assemblage is that a calcite melt infiltrated the magnesite-bearing rock along grain boundaries and caused the peculiar microstructure. This is supported by isotopic disequilibrium between calcite and magnesite. A further implication of partially molten carbonates is that the host rock drastically loses its strength, so that its physical properties may become comparable to those of the intrusive rocks. This contrasting behavior of the host rock may ease the emplacement of the intrusion. The circle thus closes, and the iterative process of better constraining the emplacement could start again.
Abstract:
Methyl-CpG Binding Domain (MBD) proteins are thought to be key molecules in the interpretation of DNA methylation signals, leading to gene silencing through recruitment of chromatin remodeling complexes. In cancer, the MBD-family member MBD2 may be primarily involved in the repression of genes exhibiting methylated CpG at their 5' end. Here we ask whether MBD2 associates randomly with methylated sequences, producing chance effects on transcription, or exhibits a more specific recognition of some methylated regions. Using chromatin and DNA immunoprecipitation, we analyzed MBD2 and RNA polymerase II deposition and DNA methylation in HeLa cells on arrays representing 25,500 promoter regions. This first whole-genome mapping revealed the preferential localization of MBD2 near transcription start sites (TSSs), within the region analyzed (7.5 kb upstream through 2.45 kb downstream of the TSS). Probe-by-probe analysis correlated MBD2 deposition and DNA methylation. Motif analysis did not reveal specific sequence motifs; however, CCG and CGC sequences seem to be overrepresented. A nonrandom association (multiple correspondence analysis, p < 0.0001) between silent genes, DNA methylation and MBD2 binding was observed. The association between MBD2 binding and transcriptional repression weakened as the distance between binding site and TSS increased, suggesting that MBD2 represses transcriptional initiation. This hypothesis may represent a functional explanation for the preferential binding of MBD2 at methyl-CpG in TSS regions.
Abstract:
BACKGROUND AND OBJECTIVE: Deciding about treatment goals at the end of life is a frequent and difficult challenge for medical staff. As more health care institutions issue ethico-legal guidelines to their staff, the effects of such a guideline were investigated in a pilot project. PARTICIPANTS AND METHODS: Prospective evaluation study using the pre-post method. Physicians and nurses working in ten intensive care units of a university medical center in Germany answered a specially designed questionnaire before and one year after issuance of the guideline. RESULTS: 197 analyzable answers were obtained from the first (pre-guideline) and 251 from the second (post-guideline) survey (54% and 58% response rate, respectively). Initially the clinicians expressed their need for guidelines, advice on ethical problems, and continuing education. One year after introduction of the guideline, one third of the clinicians were familiar with the guideline's content and another third were aware of its existence. 90% of those who knew the document welcomed it. Explanation of the legal aspects was seen as its most useful element. The pre- and post-guideline comparison demonstrated that uncertainty in decision making and fear of legal consequences were reduced, while knowledge of legal aspects and the value given to advance directives increased. The residents derived the greatest benefit. CONCLUSION: By promoting knowledge of legal aspects and ethical considerations, guidelines given to medical staff can lead to more certainty when making end-of-life decisions.
Abstract:
The present study investigates the short- and long-term outcomes of a computer-assisted cognitive remediation (CACR) program in adolescents with psychosis or at high risk. 32 adolescents participated in a blinded 8-week randomized controlled trial of CACR treatment compared to computer games (CG). Clinical and neuropsychological evaluations were undertaken at baseline, at the end of the program and at 6 months. At the end of the program (n = 28), results indicated that visuospatial abilities (Repeatable Battery for the Assessment of Neuropsychological Status, RBANS; P = .005) improved significantly more in the CACR group than in the CG group. Furthermore, other cognitive functions (RBANS), psychotic symptoms (Positive and Negative Symptom Scale) and psychosocial functioning (Social and Occupational Functioning Assessment Scale) improved significantly, but at similar rates, in the two groups. At long term (n = 22), cognitive abilities did not demonstrate any amelioration in the control group while, in the CACR group, significant long-term improvements in inhibition (Stroop; P = .040) and reasoning (Block Design Test; P = .005) were observed. In addition, symptom severity (Clinical Global Improvement) decreased significantly in the control group (P = .046) and marginally in the CACR group (P = .088). To sum up, CACR can be successfully administered in this population. CACR proved to be effective over and above CG for the most intensively trained cognitive ability. Finally, over the long term, enhanced reasoning and inhibition abilities, which are necessary to execute higher-order goals or to adapt behavior to an ever-changing environment, were observed in adolescents benefiting from CACR.
Abstract:
The Wechsler Intelligence Scale for Children, fourth edition (WISC-IV) recognizes a four-factor scoring structure in addition to the Full Scale IQ (FSIQ) score: Verbal Comprehension (VCI), Perceptual Reasoning (PRI), Working Memory (WMI), and Processing Speed (PSI) indices. However, several authors have suggested that models based on the Cattell-Horn-Carroll (CHC) theory with 5 or 6 factors provide a better fit to the data than does the current four-factor solution. By comparing the current four-factor structure to CHC-based models, this research aimed to investigate the factorial structure and the constructs underlying the WISC-IV subtest scores with French-speaking Swiss children (N = 249). To this end, confirmatory factor analyses (CFAs) were conducted. Results showed that a CHC-based model with five factors fitted the French-Swiss data better than did the current WISC-IV scoring structure. Altogether, these results support the hypothesis that the CHC model is appropriate for French-speaking children.
Abstract:
The hypothesis that Helicobacter might be a risk factor for human liver diseases arose after the detection of Helicobacter DNA in hepatic tissue of patients with hepatobiliary diseases. Nevertheless, no explanation that justifies the presence of the bacterium in the human liver has been proposed. We evaluated the presence of Helicobacter in the liver of patients with hepatic diseases of different aetiologies. We prospectively evaluated 147 patients (106 with primary hepatic diseases and 41 with hepatic metastatic tumours) and 20 liver donors as controls. Helicobacter species were investigated in the liver by culture and by specific 16S rDNA nested polymerase chain reaction followed by sequencing. Serum and hepatic levels of representative cytokines of the T regulatory, T helper (Th)1 and Th17 cell lineages were determined using enzyme-linked immunosorbent assay. The data were evaluated using logistic models. Detection of Helicobacter pylori DNA in the liver was independently associated with hepatitis B virus/hepatitis C virus infection, pancreatic carcinoma and a cytokine pattern characterised by high interleukin (IL)-10, low/absent interferon-γ and decreased IL-17A concentrations (p < 0.001). The bacterial DNA was never detected in the liver of patients with alcoholic cirrhosis or autoimmune hepatitis, conditions associated with Th1/Th17 polarisation. H. pylori may be observed in the liver of patients with certain hepatic and pancreatic diseases, but this might depend on the patient's cytokine profile.
Abstract:
RATIONALE AND OBJECTIVE: The information assessment method (IAM) permits health professionals to systematically document the relevance, cognitive impact, use and health outcomes of information objects delivered by or retrieved from electronic knowledge resources. The companion review paper (Part 1) critically examined the literature and proposed a 'Push-Pull-Acquisition-Cognition-Application' evaluation framework, which is operationalized by IAM. The purpose of the present paper (Part 2) is to examine the content validity of the IAM cognitive checklist when linked to email alerts. METHODS: A qualitative component of a mixed-methods study was conducted with 46 doctors reading and rating research-based synopses sent by email. The unit of analysis was a doctor's explanation of a rating of one item regarding one synopsis. Interviews with participants provided 253 units that were analysed to assess concordance with item definitions. RESULTS AND CONCLUSION: The content relevance of seven items was supported. For three items, revisions were needed. Interviews suggested one new item. This study has yielded a 2008 version of IAM.