Abstract:
While the theoretical industrial organization literature has long argued that excess capacity can be used to deter entry into markets, there is little empirical evidence that incumbent firms actually behave this way. Bagwell and Ramey (1996) propose a game with a specific sequence of moves and partially recoverable capacity costs in which forward induction provides a theoretical rationalization for firm behavior in the field. We conduct an experiment with a game inspired by their work. In our data the incumbent tends to keep the market, in contrast to what the forward induction argument of Bagwell and Ramey would suggest. The results indicate that players perceive that the first mover has an advantage without having to pre-commit capacity. In our game, evolution and learning do not drive out this perception. We back these claims with data analysis, a theoretical framework for dynamics, and simulation results.
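To make the dynamics claim concrete, here is a minimal sketch of two-population replicator dynamics on a stylized 2x2 entry game. The payoff matrices are hypothetical placeholders, not Bagwell and Ramey's game; the sketch only illustrates the kind of simulation that can test whether learning erodes a behavioral pattern.

```python
# Two-population replicator dynamics on a stylized entry game.
# Payoffs are hypothetical placeholders, NOT Bagwell and Ramey's game.
import numpy as np

# Entrant payoffs: rows = (Enter, StayOut), cols = incumbent (Fight, Accommodate)
E = np.array([[-1.0, 2.0],
              [ 0.0, 0.0]])
# Incumbent payoffs: rows = (Fight, Accommodate), cols = entrant (Enter, StayOut)
I = np.array([[-1.0, 4.0],
              [ 1.0, 4.0]])

def step(x, y, dt=0.01):
    """One Euler step of the coupled replicator equations.
    x: entrant strategy mix, y: incumbent strategy mix."""
    fx = E @ y                      # entrant strategy fitnesses vs incumbent mix
    fy = I @ x                      # incumbent strategy fitnesses vs entrant mix
    x = x + dt * x * (fx - x @ fx)  # strategies that beat the average grow
    y = y + dt * y * (fy - y @ fy)
    return x, y

x, y = np.array([0.5, 0.5]), np.array([0.5, 0.5])
for _ in range(10_000):
    x, y = step(x, y)
print("entrant mix:", x, "incumbent mix:", y)
```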
Abstract:
Research produced during a stay in Buenos Aires between September and October 2006. The construction of the Argentine nation-state in the nineteenth century involved the definition, by the governing elites, of a people that would live up to the expectations placed on a young nation headed toward civilization and progress. The object of the research, which is being carried out within the doctoral programme in the History of the Americas at the Universitat de Barcelona, is the analysis of the Afro-Argentine population of Buenos Aires in the last decades of the nineteenth century, a moment when its presence and history were being erased from discourses and practices, promoting its "invisibilization". To carry out this research it is essential to draw on sources and documents that must be sought and found in the General and Local Archives, and in the National and Municipal Libraries, located in the city of Buenos Aires.
Abstract:
Ecological economics has five good reasons to consider economic globalisation, spurred by commercial and financial flows, to be one of the main driving forces behind the environmental degradation of our planet. The first is the energy consumption and the socio-environmental impacts that long-distance haulage entails. The second is the ever-increasing flow of goods to far-away destinations, which renders their recycling practically impossible. This is particularly significant because it prevents the metabolic cycle of the nutrients present in food and other agrarian products from being closed. The third is that the high degree of specialization attained in agriculture, forestry, cattle raising, mining and industry in each region generates deleterious effects not only on the eco-landscape structure of land use, but also on the capability to provide the habitat and environmental functions needed to maintain biodiversity.
Abstract:
Pharmacogenomics is a field with origins in the study of monogenic variations in drug metabolism in the 1950s. Perhaps because of these historical underpinnings, there has been an intensive investigation of 'hepatic pharmacogenes' such as CYP450s and liver drug metabolism using pharmacogenomics approaches over the past five decades. Surprisingly, kidney pathophysiology, attendant diseases and treatment outcomes have been vastly under-studied and under-theorized despite their central importance in the maintenance of health, susceptibility to disease and rational personalized therapeutics. Indeed, chronic kidney disease (CKD) represents an increasing public health burden worldwide, in both developed and developing countries. Patients with CKD suffer from high cardiovascular morbidity and mortality, which is mainly attributable to cardiovascular events before reaching end-stage renal disease. In this paper, we focus our analyses on renal function before end-stage renal disease, as seen through the lens of pharmacogenomics and human genomic variation. We synthesize the recent evidence linking selected Very Important Pharmacogenes (VIPs) to renal function, blood pressure and salt-sensitivity in humans, and ways in which these insights might inform rational personalized therapeutics. Notably, we highlight and present the rationale for three applications that we consider important and actionable therapeutic and preventive focus areas in renal pharmacogenomics: 1) ACE inhibitors, as a confirmed application, 2) VDR agonists, as a promising application, and 3) moderate dietary salt intake, as a suggested novel application. Additionally, we emphasize the putative contributions of gene-environment interactions and discuss the implications of these findings for the treatment and prevention of hypertension and CKD. Finally, we conclude with a strategic agenda and vision required to accelerate advances in this under-studied field of renal pharmacogenomics with vast significance for global public health.
Abstract:
The present thesis is a contribution to the debate on the applicability of mathematics; it examines the interplay between mathematics and the world, using historical case studies. The first part of the thesis consists of four small case studies. In chapter 1, I criticize "ante rem structuralism", proposed by Stewart Shapiro, by showing that his so-called "finite cardinal structures" are in conflict with mathematical practice. In chapter 2, I discuss Leonhard Euler's solution to the Königsberg bridges problem. I propose interpreting Euler's solution both as an explanation within mathematics and as a scientific explanation, and I put the insights from the historical case to work against recent philosophical accounts of the Königsberg case. In chapter 3, I analyze the predator-prey model proposed by Lotka and Volterra. I extract some philosophical lessons from Volterra's original account of the model: his remarks on mathematical methodology; the relation between mathematics and idealization in the construction of the model; some relevant details in the derivation of the Third Law; and notions of intervention motivated by one of Volterra's main mathematical tools, phase spaces. In chapter 4, I discuss scientific and mathematical attempts to explain the structure of the bee's honeycomb. In the first part, I discuss a candidate explanation, based on the mathematical Honeycomb Conjecture, presented in Lyon and Colyvan (2008), and argue that this explanation is not scientifically adequate. In the second part, I discuss other mathematical, physical and biological studies that could contribute to an explanation of the bee's honeycomb. The upshot is that most of the relevant mathematics is not yet sufficiently understood, and there is an ongoing debate as to the biological details of the construction of the honeycomb. The second part of the thesis is a larger case study from physics: the genesis of general relativity (GR). Chapter 5 is a short introduction to the history, physics and mathematics relevant to the genesis of GR. Chapter 6 discusses the historical question of what Marcel Grossmann contributed to the genesis of GR. I examine the so-called "Entwurf" paper, an important joint publication by Einstein and Grossmann containing the first tensorial formulation of GR. By comparing Grossmann's part with the mathematical theories he used, we can gain a better understanding of what is involved in the first steps of assimilating a mathematical theory to a physical question. In chapter 7, I introduce and discuss a recent account of the applicability of mathematics to the world, the Inferential Conception (IC), proposed by Bueno and Colyvan (2011). I give a short exposition of the IC, offer some critical remarks, discuss potential philosophical objections, and propose some extensions. In chapter 8, I put the IC to work in the historical case study: the genesis of GR. I analyze three historical episodes, using the conceptual apparatus provided by the IC. In episode one, I investigate how the starting point of the application process, the "assumed structure", is chosen. Then I analyze two small application cycles that led to revisions of the initial assumed structure. In episode two, I examine how the application of "new" mathematics - the application of the Absolute Differential Calculus (ADC) to gravitational theory - meshes with the IC.
In episode three, I take a closer look at two of Einstein's failed attempts to find a suitable differential operator for the field equations, and apply the conceptual tools provided by the IC so as to better understand why he erroneously rejected both the Ricci tensor and the November tensor in the Zurich Notebook.
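As a concrete companion to the Königsberg discussion in chapter 2, the sketch below checks Euler's degree-parity criterion: a connected multigraph admits a walk crossing every edge exactly once only if it has zero or two vertices of odd degree. The bridge list encodes the standard textbook presentation of the 1736 town.

```python
# Euler's degree-parity argument applied to the Königsberg bridges graph.
from collections import Counter

# The four land masses (A, B, C, D) and the seven bridges of 1736 Königsberg.
bridges = [("A", "B"), ("A", "B"), ("A", "C"), ("A", "C"),
           ("A", "D"), ("B", "D"), ("C", "D")]

degree = Counter()
for u, v in bridges:
    degree[u] += 1
    degree[v] += 1

odd = [v for v, d in degree.items() if d % 2 == 1]
print("odd-degree vertices:", odd)                    # all four land masses
print("Eulerian walk possible:", len(odd) in (0, 2))  # False: no such walk
```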
Abstract:
In traditional criminal investigation, uncertainties are often dealt with using a combination of common sense, practical considerations and experience, but rarely with tailored statistical models. For example, in some countries, in order to search for a given profile in the national DNA database, it must have allelic information for six or more of the ten SGM Plus loci for a simple trace. If the profile does not have this amount of information, it cannot be searched in the national DNA database (NDNAD). This requirement (of a result at six or more loci) is not based on a statistical approach, but rather on the feeling that six or more would be sufficient. A statistical approach, however, could be more rigorous and objective, and would take sensible account of factors such as the probability of adventitious matches relative to the actual database size and/or the investigator's requirements. This research was therefore undertaken to establish scientific foundations pertaining to the use of partial SGM Plus profiles (or similar) for investigation.
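A minimal sketch of the kind of calculation such a statistical approach makes explicit: the probability of at least one adventitious (coincidental) match when a partial profile is searched against a database, assuming independent profiles. The per-person match probability and the database size below are illustrative assumptions, not figures from the research.

```python
# Chance of at least one coincidental hit in a database search.
# Figures are illustrative assumptions, not the study's data.
def p_adventitious(match_prob: float, db_size: int) -> float:
    """P(at least one coincidental hit) = 1 - (1 - p)^N, assuming independence."""
    return 1.0 - (1.0 - match_prob) ** db_size

# E.g. a partial profile with a random match probability of 1 in a million,
# searched against a database of 5 million profiles:
print(f"{p_adventitious(1e-6, 5_000_000):.3f}")  # ~0.993: adventitious hits likely
```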
Abstract:
Complete achromatopsia is a rare autosomal recessive disease associated with CNGA3, CNGB3, GNAT2 and PDE6C mutations. This retinal disorder is characterized by complete loss of color discrimination due to the absence or alteration of cone function. The purpose of the present study was the clinical and genetic characterization of achromatopsia in a large consanguineous Tunisian family. Ophthalmic evaluation included a full clinical examination, color vision testing and electroretinography. Linkage analysis using microsatellite markers flanking the CNGA3, CNGB3, GNAT2 and PDE6C genes was performed, and mutations were screened by direct sequencing. A total of 12 individuals were diagnosed with congenital complete achromatopsia. They are members of six nuclear consanguineous families belonging to the same large consanguineous family. Linkage analysis revealed linkage to GNAT2. Mutational screening of GNAT2 revealed three intronic variations, c.119-69G>C, c.161+66A>T and c.875-31G>C, that co-segregated with a novel mutation, p.R313X. An identical GNAT2 haplotype segregating with this mutation was identified, indicating a founder mutation. All patients were homozygous for the p.R313X mutation. This is the first report of the clinical and genetic investigation of complete achromatopsia in North Africa, and it describes the largest family with recessive achromatopsia involving GNAT2, thus providing a unique opportunity for genotype-phenotype correlation for this extremely rare condition.
Abstract:
Report for the scientific sojourn at James Cook University, Australia, between June and December 2007. Free convection in enclosed spaces is found widely in natural and industrial systems. It is a topic of primary interest because in many systems it provides the largest resistance to heat transfer in comparison with other heat transfer modes. In such systems the convection is driven by a density gradient within the fluid, usually produced by a temperature difference between the fluid and the surrounding walls. In the oil industry, oil, which has a high Prandtl number, is usually stored and transported in large tanks at temperatures high enough to keep its viscosity, and thus the pumping requirements, at a reasonable level. A temperature difference between the fluid and the walls of the container may give rise to an unsteady buoyancy force and hence unsteady natural convection. In the initial period of cooling, the natural convection regime dominates over the conduction contribution. As the oil cools down it typically becomes more viscous, and this increase in viscosity inhibits convection. At that point the oil viscosity becomes very large and unloading of the tank becomes very difficult. For this reason it is of primary interest to be able to predict the cooling rate of the oil. The general objective of this work is to develop and validate a simulation tool able to predict the cooling rates of a high-Prandtl-number fluid, taking variable viscosity effects into account.
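As a toy illustration of the physics at stake (not the CFD simulation tool described above), the lumped-parameter sketch below shows how a temperature-dependent viscosity slows convective cooling: as the oil cools, viscosity rises, the convective heat-transfer coefficient falls, and the cooling rate drops. All property values and the correlation are illustrative assumptions.

```python
# Toy lumped-parameter cooling model with temperature-dependent viscosity.
# All numbers and the h(mu) correlation are illustrative assumptions.
import math

T_wall = 20.0            # wall temperature [deg C]
T = 80.0                 # initial oil temperature [deg C]
mu_ref, b = 0.05, 0.04   # hypothetical law: mu(T) = mu_ref * exp(-b * (T - T_wall))

def h_conv(T):
    """Illustrative correlation: h falls as viscosity rises (h ~ mu^-0.25)."""
    mu = mu_ref * math.exp(-b * (T - T_wall))   # viscosity grows as oil cools
    return 50.0 * mu ** -0.25

dt, C = 60.0, 2.0e6      # time step [s], lumped thermal capacitance per unit area
for _ in range(24 * 60):                        # march one day in 1-minute steps
    T -= dt * h_conv(T) * (T - T_wall) / C
print(f"oil temperature after 24 h: {T:.1f} C")
```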
Abstract:
We report experiments designed to test between Nash equilibria that are stable and unstable under learning. The "TASP" (Time Average of the Shapley Polygon) gives a precise prediction about what happens when there is divergence from equilibrium under fictitious-play-like learning processes. We use two 4x4 games, each with a unique mixed Nash equilibrium; one is stable and one is unstable under learning. Both games are versions of Rock-Paper-Scissors with the addition of a fourth strategy, Dumb. Nash equilibrium places a weight of 1/2 on Dumb in both games, but the TASP places no weight on Dumb when the equilibrium is unstable. We also vary the level of monetary payoffs, with higher payoffs predicted to increase instability. We find that the high-payoff unstable treatment differs from the others: the frequency of Dumb is lower and play is further from Nash than in the other treatments. That is, we find support for the comparative statics prediction of learning theory, although the frequency of Dumb is substantially greater than zero in the unstable treatments.
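Here is a minimal fictitious-play sketch on a hypothetical 4x4 Rock-Paper-Scissors-plus-Dumb game in the spirit of the design described above. The payoff matrix is an illustrative placeholder, not the experimental game, and a single population of self-playing agents is used for brevity; comparing the long-run time average of play with the mixed Nash equilibrium is the diagnostic the TASP prediction concerns.

```python
# Fictitious play on a hypothetical RPS + "Dumb" game (placeholder payoffs).
import numpy as np

# Row player's payoffs; strategies: Rock, Paper, Scissors, Dumb (hypothetical).
A = np.array([[ 0.0, -1.0,  1.0, 0.1],
              [ 1.0,  0.0, -1.0, 0.1],
              [-1.0,  1.0,  0.0, 0.1],
              [ 0.1,  0.1,  0.1, 0.0]])

counts = np.ones(4)          # beliefs: counts of opponent's past play (uniform prior)
history = np.zeros(4)        # record of own realized play
T = 100_000
for _ in range(T):
    best = np.argmax(A @ (counts / counts.sum()))  # best reply to empirical mix
    history[best] += 1
    counts[best] += 1        # symmetric self-play: opponent plays the same
print("time-average of play:", history / T)
```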
Abstract:
This paper examines the effect that heterogeneous customer order flows have on exchange rates, using a new proprietary dataset, the largest of its kind, of weekly net order flow segmented by customer type across nine of the most liquid currency pairs. We make several contributions. Firstly, we investigate the extent to which customer order flow can help to explain exchange rate movements over and above the influence of macroeconomic variables. Secondly, we address the issue of whether order flows contain (private) information that explains exchange rate changes. Thirdly, we look at the usefulness of order flow in forecasting exchange rate movements at longer horizons than those generally considered in the microstructure literature. Finally, we address the question of whether the out-of-sample exchange rate forecasts generated by order flows can be employed profitably in the foreign exchange markets.
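A minimal sketch of the microstructure-style exercise the paper builds on: regress exchange-rate changes on net order flow by customer type and evaluate forecasts out of sample against a random-walk benchmark. The synthetic data, variable names and customer-type count are assumptions for illustration; the paper's proprietary dataset is not reproduced.

```python
# Order-flow regression with an out-of-sample forecast check (synthetic data).
import numpy as np

rng = np.random.default_rng(0)
T, n_types = 520, 4                       # ~10 years of weekly data, 4 customer types
flows = rng.normal(size=(T, n_types))     # weekly net order flow per customer type
beta_true = np.array([0.4, -0.2, 0.1, 0.0])
dlog_fx = flows @ beta_true + rng.normal(size=T)   # synthetic FX returns

# Contemporaneous flows are used here for simplicity; a true forecasting
# exercise would lag them relative to the returns.
split = 416                               # estimate in-sample, forecast the rest
X = np.column_stack([np.ones(T), flows])
beta, *_ = np.linalg.lstsq(X[:split], dlog_fx[:split], rcond=None)
forecast = X[split:] @ beta

mse_model = np.mean((dlog_fx[split:] - forecast) ** 2)
mse_rw = np.mean(dlog_fx[split:] ** 2)    # random-walk (zero-change) benchmark
print(f"out-of-sample MSE: model {mse_model:.3f} vs random walk {mse_rw:.3f}")
```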