979 results for Arguments
Abstract:
Motivation for the speaker recognition work is presented in the first part of the thesis, together with an exhaustive survey of past work in this field. A low-cost system avoiding complex computation has been chosen for implementation. Towards this end, a PC-based system is designed and developed. A front-end analog-to-digital converter (12 bit) is built and interfaced to a PC. Software is developed to control the ADC and to perform various analytical functions, including feature vector evaluation. It is shown that a fixed set of phrases incorporating evenly balanced phonemes is well suited to the speaker recognition work at hand, and a set of phrases is chosen for recognition. Two new methods are adopted for feature evaluation. Some new measurements, involving a symmetry check method for pitch period detection and ACE, are used as features. Arguments are provided to show the need for a new model of speech production. Starting from heuristics, a knowledge-based (KB) speech production model is presented. In this model, a KB provides impulses to a voice-producing mechanism and constant correction is applied via a feedback path; it is this correction that differs from speaker to speaker. Methods of defining measurable parameters for use as features are described. Algorithms for speaker recognition are developed and implemented, and two methods are presented. The first is based on the postulated model: the entropy of the utterance of a phoneme is evaluated, and the transitions of voiced regions are used as speaker-dependent features. The second method uses features found in other works, but evaluated differently. A knock-out scheme is used to provide the weightage values for the selection of features. Results of the implementation are presented, showing an average recognition rate of 80%. It is also shown that performance deteriorates when there are long gaps between sessions, and that this deterioration is speaker dependent. Cross-recognition percentages are also presented; in the worst case these rise to 30%, while the best case is 0%. Suggestions for further work are given in the concluding chapter.
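The abstract does not spell out the knock-out weighting scheme; the following Python sketch shows one plausible reading, in which each feature's weight is the drop in recognition accuracy observed when that feature alone is left out. All names and the exact scoring rule are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def knockout_weights(features, labels, evaluate):
    """Weight each feature by the recognition accuracy lost when it is knocked out.

    features : (n_samples, n_features) array of feature vectors
    labels   : (n_samples,) array of speaker identities
    evaluate : callable(features, labels) -> recognition accuracy in [0, 1]
    """
    baseline = evaluate(features, labels)            # accuracy with all features
    weights = np.empty(features.shape[1])
    for j in range(features.shape[1]):
        reduced = np.delete(features, j, axis=1)     # knock feature j out
        weights[j] = baseline - evaluate(reduced, labels)
    # A large positive weight marks a feature whose removal hurts recognition
    # most; a negative weight flags a feature that actually confuses it.
    return weights
```

Features would then be selected in decreasing order of weight; the thesis may define the weightage differently.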
Abstract:
Optimum conditions and experimental details for the formation of γ-Fe2O3 from goethite have been worked out. In another method, a cheap complexing medium of starch was employed for precipitating acicular ferrous oxalate, which on decomposition in nitrogen and subsequent oxidation yielded acicular γ-Fe2O3. On the basis of thermal decomposition in dry and moist nitrogen, DTA, XRD, GC and thermodynamic arguments, the mechanism of decomposition was elucidated. New materials obtained by doping γ-Fe2O3 with 1-16 atomic percent magnesium, cobalt, nickel and copper were synthesised and characterised.
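The abstract gives no reaction equations; for orientation, a commonly reported sequence for the oxalate route, stated here as an assumption rather than as the thesis's own scheme, is:

\[
\mathrm{FeC_2O_4\cdot 2H_2O} \;\xrightarrow{\;N_2\;}\; \mathrm{FeC_2O_4} + 2\,\mathrm{H_2O}
\]
\[
3\,\mathrm{FeC_2O_4} \;\xrightarrow{\;N_2\;}\; \mathrm{Fe_3O_4} + 2\,\mathrm{CO_2} + 4\,\mathrm{CO}
\]
\[
2\,\mathrm{Fe_3O_4} + \tfrac{1}{2}\,\mathrm{O_2} \;\longrightarrow\; 3\,\gamma\text{-}\mathrm{Fe_2O_3}
\]

with the mild final oxidation step preserving the acicular particle shape inherited from the oxalate.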
Abstract:
The eighteen-eighties under Chancellor Otto von Bismarck saw the establishment of statutory workers' insurance in Germany. Germany remained backward, however, in the statutory protection of workers at their workplace, the prevention of dangers arising from industrial work, and the limitation of hours of work for children, young persons, women or even workers in general. The protection of young workers, for example, remained until 1891 as it had been in 1853. That was due to Bismarck's fundamental refusal of any improvement in regulations for the protection of workers; he blocked all relevant initiatives. Along with other sources, this article draws on previously rarely used marginalia of Bismarck's in ministerial documents on factory inspection, children's and women's labour, the prohibition of Sunday work, and the introduction of a standard working day. The investigation deals with the Chancellor's motives and the arguments he deployed in preventing measures of workers' protection, which he called an infringement of workers' freedom of action.
Abstract:
The aim of this paper is a comprehensive presentation of some important basic and general aspects of the topic of applications and modelling, with emphasis on the secondary school level. Owing to the review character of this paper, some overlap with the survey paper Blum and Niss (1989) for ICME-6 in Budapest is inevitable. The paper will consist of three parts. In part 1, I shall try to clarify some basic concepts and remind the reader of a few application and modelling examples suitable for teaching. In part 2, I shall formulate some general aims for mathematics instruction and, on that basis, summarise the most important arguments for and against applications and modelling in mathematics teaching. Finally, in part 3, I shall discuss some relevant instructional aspects resulting from the considerations in part 2.
Abstract:
The paper will consist of three parts. In part I we shall present some background considerations which are necessary as a basis for what follows. We shall try to clarify some basic concepts and notions, and we shall collect the most important arguments (and related goals) in favour of problem solving, modelling and applications to other subjects in mathematics instruction. In the main part II we shall review the present state, recent trends, and prospective lines of development, both in empirical or theoretical research and in the practice of mathematics instruction and mathematics education, concerning problem solving, modelling, applications and relations to other subjects. In particular, we shall identify and discuss four major trends: a widened spectrum of arguments, an increased globality, an increased unification, and an extended use of computers. In the final part III we shall comment upon some important issues and problems related to our topic.
Abstract:
This paper aims to give a concise survey of the present state of the art of mathematical modelling in mathematics education and instruction. It will consist of four parts. In part 1, some basic concepts relevant to the topic will be clarified and, in particular, mathematical modelling will be defined in a broad, comprehensive sense. Part 2 will review arguments for the inclusion of modelling in mathematics teaching at schools and universities, and identify certain schools of thought within mathematics education. Part 3 will describe the role of modelling in present mathematics curricula and in everyday teaching practice. Some obstacles to mathematical modelling in the classroom will be analysed, as well as the opportunities and risks of computer usage. In part 4, selected materials and resources for teaching mathematical modelling, developed in the last few years in America, Australia and Europe, will be presented. The examples will demonstrate many promising directions of development.
Abstract:
The paper will consist of three parts. In part I we shall present some background considerations which are necessary as a basis for what follows. We shall try to clarify some basic concepts and notions, and we shall collect the most important arguments (and related goals) in favour of problem solving, modelling and applications to other subjects in mathematics instruction. In the main part II we shall review the present state, recent trends, and prospective lines of development, both in empirical or theoretical research and in the practice of mathematics instruction and mathematics education, concerning (applied) problem solving, modelling, applications and relations to other subjects. In particular, we shall identify and discuss four major trends: a widened spectrum of arguments, an increased globality, an increased unification, and an extended use of computers. In the final part III we shall comment upon some important issues and problems related to our topic.
Abstract:
The design, reformulation, and final signing of Plan Colombia by the then US President, Bill Clinton, on 13 July 2000 ushered in a new era of US state involvement in supposedly sovereign-territorial issues of Colombian politics. The implementation of Plan Colombia thereafter brought about a major realignment of political-military scales and terrains of conflict that has renewed discourses concerning the contemporary imperialist interests of key US-based but transnationally projected social forces, leading to arguments that stress the invigorated geopolitical dimension of present-day strategies of capitalist accumulation. With the election of Álvaro Uribe Vélez as Colombian President in May 2002 and his pledge to strengthen the national military campaign against the region's longest-surviving insurgent guerrilla group, Las FARC-EP, as well as other guerrilla factions, combined with a new focus on establishing the state project of “Democratic Security”, the military realm of governance and attempts to ensure property security and expanding capitalist investment have attained precedence in Colombia's national political domains. This working paper examines the interrelated nature of Plan Colombia (as a binational and indeed regional security strategy) and Uribe's Democratic Security project, showing how they have paved the way for the implementation of a new “total market” regime of accumulation, based on large-scale agro-industrial investment and accelerated through processes of accumulation by dispossession. As such, the political and social reconfigurations involved manifest the multifarious scales of governance that become intertwined in incorporating neoliberalism into specific regions of the world economy. Furthermore, the militarisation-securitisation of such policies also illustrates the explicit contradictions of neoliberalism in a peripheral context, where coercion seems to prevail, leading to a profound questioning of the extent to which neoliberalism can be thought of as a hegemonic politico-economic project.
Abstract:
Regional labour markets differ considerably with respect to key indicators such as the unemployment rate, the wage level, or employment growth. Because these differences are persistent, they are highly relevant for policy. The economics literature already provides theoretical models for analysing regional labour markets, but as a rule these models are not suited to explaining regional labour market differences endogenously. That is, the differences between regional labour markets typically do not emerge from the models themselves but have to be imposed from outside. The empirical literature suggests that differences between regional labour markets can be traced back to the level of regional labour demand. Labour demand, in turn, derives from the goods markets: how many workers are needed depends on the development of the regional goods markets. It follows that the causes of regional labour market differences are to be sought in differences between regional goods markets. The latter are the subject of the literature on the New Economic Geography (NEG), which explains differences between regional goods markets by contrasting centripetal and centrifugal forces. Centripetal forces are those that push towards the agglomeration of economic activity. At the centre of this discussion stands, above all, market potential: firms prefer locations close to large markets, while workers prefer regions that offer them corresponding employment prospects. Together these form a self-reinforcing process that leads to the agglomeration of economic activity. Opposed to this are centrifugal forces, which work towards a more even distribution of economic activity; they arise, for example, from immobile factors of production or from congestion costs such as pollution, traffic jams, or high rents. If the centripetal forces are sufficiently strong, centres emerge in which economic activity is concentrated while the periphery thins out; the extent to which this happens depends on the balance between the two forces. The NEG literature usually concentrates on differences between regional goods markets and assumes perfect labour markets without unemployment, so it typically cannot explain the emergence and persistence of regional labour market disparities. This is where the dissertation comes in: it extends the NEG by labour market frictions in order to explain the emergence and persistence of regional labour market disparities. To this end it draws on an empirical regularity: numerous studies document a negative relationship between wages and unemployment. In regions where unemployment is high, the wage level is low, and vice versa. This relationship is known as the wage curve. At the regional level, the wage curve can be explained by efficiency wage theory, which serves as the theoretical foundation of the dissertation. If economic activity concentrates in one region because of the centripetal forces, labour demand in this centre is higher. The centre is thus located at a favourable position on the wage curve, with low unemployment and a high wage level, while the periphery finds itself at an unfavourable position with high unemployment and a low wage level. The wage curve may, however, shift with the degree of agglomeration. The complex interplay of endogenous agglomeration and labour market frictions can then produce different patterns of regional labour market disparities. The dissertation shows how the interaction of the NEG with efficiency wages generates regional labour market disparities. Theoretical models are formulated that explain these interactions and extend the existing literature through specific contributions. In addition, the central arguments of the theory are subjected to an empirical test, which shows that the central argument, the positive effect of market potential on labour demand, is relevant. Finally, policy implications are derived and directions for further research are indicated.
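For reference, the wage curve invoked above is commonly estimated in the log-linear form of Blanchflower and Oswald; the following equation is a standard textbook formulation, not one taken from the dissertation:

\[
\ln w_r = \alpha - \beta \,\ln u_r + \gamma' X_r + \varepsilon_r,
\qquad \beta > 0,
\]

where \(w_r\) and \(u_r\) are the wage level and unemployment rate of region \(r\) and \(X_r\) collects regional controls; empirical estimates of the elasticity \(\beta\) typically lie near 0.1.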
Abstract:
The traditional task of a central bank is to preserve price stability and, in doing so, not to impair the real economy more than necessary. To meet this challenge, it is of great relevance whether inflation is driven only by inflation expectations and the current output gap or whether it is, in addition, influenced by past inflation. In the former case, as described by the New Keynesian Phillips curve, the central bank can immediately and simultaneously achieve price stability and equilibrium output, the so-called ‘divine coincidence’ (Blanchard and Galí 2007). In the latter case, the achievement of price stability is costly in terms of output and will be pursued over several periods. Similarly, it is important to distinguish this latter case, which describes ‘intrinsic’ inflation persistence, from that of ‘extrinsic’ inflation persistence, where the sluggishness of inflation is not a ‘structural’ feature of the economy but merely ‘inherited’ from the sluggishness of the other driving forces, inflation expectations and output. ‘Extrinsic’ inflation persistence is usually considered the less challenging case, as policy-makers are supposed to fight the persistence in the driving forces, especially to reduce the stickiness of inflation expectations through a credible monetary policy, in order to re-establish the ‘divine coincidence’. The scope of this dissertation is to contribute to the vast literature and ongoing discussion on inflation persistence. Chapter 1 describes the policy consequences of inflation persistence and summarizes the empirical and theoretical literature. Chapter 2 compares two models of staggered price setting, one with a fixed two-period duration and the other with a stochastic duration of prices. I show that in an economy with a timeless optimizing central bank, the model with two-period alternating price setting leads (for most parameter values) to more persistent inflation than the model with stochastic price duration. This result amends earlier work by Kiley (2002), who found that the model with stochastic price duration generates more persistent inflation in response to an exogenous monetary shock. Chapter 3 extends the two-period alternating price-setting model to the case of 3- and 4-period price durations. This results in a more complex Phillips curve with a negative impact of past inflation on current inflation. As simulations show, this multi-period Phillips curve generates too low a degree of autocorrelation and turning points of inflation that occur too early, and it is outperformed by a simple Hybrid Phillips curve. Chapter 4 starts from the critique by Driscoll and Holden (2003) of the relative real-wage model of Fuhrer and Moore (1995). Taking seriously the critique that Fuhrer and Moore's model collapses to a much simpler one without intrinsic inflation persistence if one takes their arguments literally, I extend the model with a term for inequality aversion. This model extension is not only in line with experimental evidence but results in a Hybrid Phillips curve with inflation persistence that is observationally equivalent to that presented by Fuhrer and Moore (1995). In chapter 5, I present a model that allows one to study the relationship between fairness attitudes and time preference (impatience). In the model, two individuals take decisions in two subsequent periods. In period 1, both individuals are endowed with resources and are able to donate a share of their resources to the other individual.
In period 2, the two individuals might join in common production after having bargained over the split of its output. The size of the production output depends on the relative share of resources at the end of period 1, as the human capital of the individuals, which is built by means of their resources, cannot be fully substituted one for the other. It might therefore be rational for a well-endowed individual in period 1 to act in a seemingly ‘fair’ manner and to donate its own resources to its poorer counterpart. This decision also depends on the individuals' impatience, which is induced by the small but positive probability that production is not possible in period 2. As a general result, the individuals in the model economy are more likely to behave in a ‘fair’ manner, i.e., to donate resources to the other individual, the lower their own impatience and the higher the productivity of the other individual. As the (seemingly) ‘fair’ behavior is modelled as an endogenous outcome and is related to the aspect of time preference, the presented framework might help to further integrate behavioral economics and macroeconomics.
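For orientation, the two benchmark specifications contrasted throughout this dissertation can be written as follows; these are standard textbook forms, supplied here as an assumption since the abstract contains no equations:

\[
\text{New Keynesian:}\quad \pi_t = \beta\,E_t\pi_{t+1} + \kappa\,x_t,
\qquad
\text{Hybrid:}\quad \pi_t = \gamma_f\,E_t\pi_{t+1} + \gamma_b\,\pi_{t-1} + \kappa\,x_t,
\]

where \(\pi_t\) is inflation, \(x_t\) the output gap and \(\beta\) the discount factor. The backward-looking term \(\gamma_b\,\pi_{t-1}\) is what produces intrinsic inflation persistence; with \(\gamma_b = 0\) the ‘divine coincidence’ holds, while \(\gamma_b > 0\) makes disinflation costly in terms of output.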
Abstract:
Presentation given at the Al-Azhar Engineering First Conference, AEC'89, Dec. 9-12, 1989, Cairo, Egypt. The paper presented at AEC'89 suggests an infinite storage scheme divided into one volume which is online and an arbitrary number of offline volumes arranged into a linear chain, which hold records that have not been accessed recently. The online volume holds the records in sorted order (e.g. as a B-tree) and contains the shortest prefixes of keys of records already pushed offline. As new records enter, older ones are retired to the volume that will go offline next. Statistical arguments are given for the rate at which an offline volume needs to be fetched to reload a record that was retired earlier. The rate depends on the distribution of access probabilities as a function of time. Applications are medical records, production records, or other data which need to be kept for a long time for legal reasons.
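A minimal Python sketch of the scheme, under the assumption that the online volume is any sorted map (standing in for the paper's B-tree) and that a retired record leaves behind the shortest key prefix not shared with any key still online; the class and method names are illustrative, not the paper's:

```python
import bisect

class TieredStore:
    """Sketch: online volume with sorted keys; retired records go to offline volumes."""

    def __init__(self):
        self.keys = []        # sorted keys resident online (stand-in for the B-tree)
        self.online = {}      # key -> record
        self.offline = [{}]   # linear chain of volumes; the last one fills next
        self.stubs = {}       # shortest key prefix -> index of the holding volume

    def put(self, key, record):
        if key not in self.online:
            bisect.insort(self.keys, key)
        self.online[key] = record

    def retire(self, key):
        """Push a rarely accessed record to the volume that will go offline next."""
        record = self.online.pop(key)
        self.keys.remove(key)
        self.offline[-1][key] = record
        self.stubs[self._shortest_prefix(key)] = len(self.offline) - 1

    def seal(self):
        """Close the filling volume (it goes offline) and start a new one."""
        self.offline.append({})

    def _shortest_prefix(self, key):
        # Shortest prefix of `key` not shared by any key still online.
        for n in range(1, len(key) + 1):
            p = key[:n]
            i = bisect.bisect_left(self.keys, p)
            if i == len(self.keys) or not self.keys[i].startswith(p):
                return p
        return key

    def get(self, key):
        if key in self.online:
            return self.online[key]
        for prefix, vol in self.stubs.items():
            # A stub hit means the physical volume `vol` would have to be fetched.
            if key.startswith(prefix) and key in self.offline[vol]:
                return self.offline[vol][key]
        return None
```

In the paper the offline lookup is a physical volume fetch whose expected rate the statistical arguments bound; the dictionary access above merely marks where that fetch would occur.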
Abstract:
Computational theories of action have generally understood the organized nature of human activity through the construction and execution of plans. By consigning the phenomena of contingency and improvisation to peripheral roles, this view has led to impractical technical proposals. As an alternative, I suggest that contingency is a central feature of everyday activity and that improvisation is the central kind of human activity. I also offer a computational model of certain aspects of everyday routine activity based on an account of improvised activity called running arguments and an account of representation for situated agents called deictic representation.
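As a toy illustration of deictic representation (the example is mine, not the paper's): the agent does not track objectively named objects but functionally defined entities that are re-bound to whatever object in the current perceptual field plays that role right now.

```python
# Toy sketch of deictic representation: entities are named by their functional
# relationship to the agent and re-bound to the current scene on every tick.
def rebind(percepts):
    """Map deictic names to concrete objects in the current perceptual field."""
    return {
        "the-cup-i-am-holding": next(
            (o for o in percepts if o["type"] == "cup" and o["in_hand"]), None),
        "the-door-i-am-facing": next(
            (o for o in percepts if o["type"] == "door" and o["facing"]), None),
    }

scene = [{"type": "cup", "in_hand": True, "facing": False},
         {"type": "door", "in_hand": False, "facing": True}]
entities = rebind(scene)   # routines act on roles, not on object identities
```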
Abstract:
In most classical frameworks for learning from examples, it is assumed that examples are randomly drawn and presented to the learner. In this paper, we consider the possibility of a more active learner who is allowed to choose his/her own examples. Our investigations are carried out in a function approximation setting. In particular, using arguments from optimal recovery (Micchelli and Rivlin, 1976), we develop an adaptive sampling strategy (equivalent to adaptive approximation) for arbitrary approximation schemes. We provide a general formulation of the problem and show how it can be regarded as sequential optimal recovery. We demonstrate the application of this general formulation to two special cases of functions on the real line: 1) monotonically increasing functions and 2) functions with bounded derivative. An extensive investigation of the sample complexity of approximating these functions is conducted, yielding both theoretical and empirical results on test functions. Our theoretical results (stated in PAC style), along with the simulations, demonstrate the superiority of our active scheme over both passive learning and classical optimal recovery. The analysis of active function approximation is conducted in a worst-case setting, in contrast with other Bayesian paradigms obtained from optimal design (MacKay, 1992).
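For the monotone case, one plausible instantiation of the adaptive strategy is sketched below; the exact uncertainty score and all names are illustrative assumptions, not the paper's own code. A monotone function is trapped, on each interval between samples, inside a box of width dx and height dy, so dx * dy bounds the remaining uncertainty there, and the active learner always queries the worst box.

```python
import numpy as np

def active_sample_monotone(f, a, b, n_samples):
    """Adaptively sample a monotonically increasing f on [a, b]."""
    xs = [a, b]
    ys = [f(a), f(b)]
    for _ in range(n_samples - 2):
        # Score every interval by the area of its uncertainty box.
        areas = [(xs[i + 1] - xs[i]) * (ys[i + 1] - ys[i])
                 for i in range(len(xs) - 1)]
        i = int(np.argmax(areas))
        mid = 0.5 * (xs[i] + xs[i + 1])   # query the midpoint of the worst box
        xs.insert(i + 1, mid)
        ys.insert(i + 1, f(mid))
    return np.array(xs), np.array(ys)

# Passive learning would place the same budget uniformly; the active scheme
# instead concentrates queries where the monotone function rises fastest.
xs, ys = active_sample_monotone(lambda x: x**3, 0.0, 1.0, 10)
```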