40 results for Strictly hyperbolic polynomial


Relevance: 10.00%

Abstract:

The study seeks to determine whether the real burden of personal taxation has increased or decreased. To do so, we investigate how the same real income has been taxed in different years: whenever the taxes on the same real income in a given year are higher than in the base year, the real tax burden has increased; if they are lower, it has decreased. The study thus seeks to estimate how changes in tax regulations affect the real tax burden. It should be kept in mind that the progression in the central government income tax schedule ensures that a real change in income brings about a change in the tax ratio. Inflation will likewise increase the real tax burden when the tax schedules are kept nominally unchanged. In the calculations of the study it is assumed that real income remains constant, so that we obtain an unbiased measure of the effects of governmental actions in real terms.

The main factors influencing the amount of income tax an individual must pay are:
- gross income (income subject to central and local government taxes),
- deductions from gross income and from the taxes calculated according to the tax schedules,
- the central government income tax schedule (progressive income taxation), and
- the rates of local taxes and social security payments (proportional taxation).

In the study we investigate how much a certain group of taxpayers would have paid in taxes under the actual tax regulations prevailing in different years if their income had been kept constant in real terms. Other factors affecting tax liability are held strictly unchanged (as constants). The resulting taxes, expressed in fixed prices, are then compared to the taxes levied in the base year (hypothetical taxation). The question we address is thus how much tax a certain group of taxpayers with the same socioeconomic characteristics would have paid on the same real income under the actual tax regulations prevailing in different years. This has been suggested as the main way to measure real changes in taxation, although there are several alternative measures with essentially the same aim.

Next, an aggregate indicator of changes in income tax rates is constructed. It is designed to show how much the taxation of income has, on average, increased or decreased from one year to the next. The main question remains: how should aggregation over all income levels be performed? To determine the average real changes in the tax scales, the difference functions (differences between the actual and hypothetical taxation functions) were aggregated using taxable income as weights. Besides the difference functions, the relative changes in real taxes can be used as indicators of change. In this case the ratio between the taxes computed under the new and the old situation indicates whether taxation has become heavier or lighter. The relative changes in tax scales can be described in a way similar to that used in describing the cost of living, that is, by means of price indices. For example, we can use Laspeyres' price index formula to compute the ratio between the taxes determined by the new tax scales and those determined by the old ones. The formula answers the question: how much more or less will be paid in taxes under the new tax scales than under the old ones, when the real income situation corresponds to the old one?
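As a concrete sketch of the Laspeyres-type index described above (our notation; the study's exact formula may differ), let $T_0$ and $T_1$ be the old and new taxation functions, $y_i$ the base-year real incomes, and $w_i$ the taxable-income weights. The index is

$$ I_L = \frac{\sum_i w_i \, T_1(y_i)}{\sum_i w_i \, T_0(y_i)}, $$

so $I_L > 1$ means the new scales levy heavier real taxes on the old real income situation, and $I_L < 1$ means lighter ones.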
In real terms the central government tax burden declined steadily from its high post-war level until the mid-1950s. The real tax burden then drifted upwards until the mid-1970s; the real level of taxation in 1975 was twice that of 1961. In the 1980s there was a stable phase, owing to the inflation corrections of the tax schedules. In 1989 the tax schedule was cut drastically, and since the mid-1990s the tax schedules have reduced the real tax burden significantly. Local tax rates have risen continuously, from 10 percent in 1948 to nearly 19 percent in 2008. Deductions have lowered the real tax burden, especially in recent years.

Aggregate figures indicate how the tax ratio for the same real income has changed over the years under the prevailing tax regulations. We call the tax ratio calculated in this manner the real income tax ratio. A change in the real income tax ratio depicts an increase or decrease in the real tax burden. The real income tax ratio declined for some years after the war, and between the beginning of the 1960s and the mid-1970s it nearly doubled. Since the mid-1990s the real income tax ratio has fallen by about 35 %.

Relevance: 10.00%

Abstract:

The purpose of this Master's thesis (Pro gradu) is to elucidate the anti-communism of the National Coalition Party (Kansallinen Kokoomuspuolue) in the 1920s, more specifically in the 1929 parliamentary election campaign. My research questions concern the connection between the party's identity and its anti-communism: how was the Coalition Party's attitude towards communism linked, on the one hand, to the defence of its own bourgeois political-national identity and, on the other, to the political conditions prevailing in 1929, when, among other things, the parliamentary system aroused widespread distrust, the Coalition Party drifted into an internal crisis, and the communists' revolutionary aims were prominent in the Finnish public sphere? By seeking answers to these questions, I aim to explain the components of which this anti-communism consisted and how the threat was shaped and justified. In my view it is important and interesting to consider how anti-communism manifested itself as a kind of response to other socio-political problems and frustrations. The theoretical framework of the study is based on research on otherness and enemy images, since the significance of otherness for the development of identity is indisputable. In addition to the literature on this subject, my sources comprise research literature, newspapers, and unpublished archival material. Owing to the study's focus on the history of ideas, my primary sources are published material – election publications and the Coalition Party press – through which I have sought to analyse the party's attitude towards communism and its effect on the party's identity. My method is historical-qualitative, meaning that I try to take into account both the party's public image and its activities behind the scenes; this requires attention to both published and unpublished source material.

On the basis of the analysis of the published material it can be concluded that the Coalition Party wanted to project an image of itself as a sharply anti-communist party, and the analysis of otherness shows that communism was the party's clear Other. This public image, however, did not necessarily correspond to the party's actual views of communism or to the enemy image built upon them. Anti-communism and the reinforcement of the enemy image also served other ends, the most significant of which were shoring up the party's fractured unity and diverting attention away from the other problems that surfaced in 1929. The threat that communism posed to Finland's national existence was justified from many angles: communism was seen as eroding Christian morality and decency, increasing social unrest, weakening parliamentarism, and endangering Finland's military security and its outpost mission, which was regarded as sacred. Anti-communism was also closely tied to the extreme form of ideological anti-Russian sentiment known as "ryssäviha" (hatred of Russians). The "anti-communist discourse" connected with these phenomena was thus one of the foundations on which the Coalition Party's political-national identity developed over the course of the 1920s. With the nearly insurmountable difficulties brought by the late spring and early summer of 1929, the significance of this anti-communist discourse grew further and laid the basis for the party's stance during the years of the Lapua Movement.

Relevance: 10.00%

Abstract:

We show that information sharing among banks may serve as a collusive device. An information sharing agreement is an a priori commitment to reduce informational asymmetries between banks in future lending. Hence, information sharing tends to increase the intensity of competition in future periods and thus reduces the value of the informational rents at stake in current competition. We contribute to the existing literature by emphasizing that this reduction in informational rents will also reduce the intensity of competition in the current period, thereby relaxing competitive pressure in current credit markets. We provide a large class of economic environments in which a ban on information sharing would be strictly welfare-enhancing.

Relevance: 10.00%

Abstract:

The objective of this paper is to improve option risk monitoring by examining the information content of implied volatility and by introducing the calculation of a single-sum expected risk exposure similar to Value-at-Risk. The figure is calculated in two steps: first, the value of a portfolio of options is estimated for a number of different market scenarios; second, the information content of the estimated scenarios is summarized into a single-sum risk measure. This involves the use of probability theory and return distributions, which confronts the user with the problem of non-normality in the return distribution of the underlying asset. Here the hyperbolic distribution is used as one alternative for dealing with heavy tails. The results indicate that the information content of implied volatility is useful for predicting future large returns in the underlying asset. Furthermore, the hyperbolic distribution provides a good fit to historical returns, enabling a more accurate definition of statistical intervals and extreme events.
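A minimal sketch of the two-step calculation described above, under assumptions that are ours rather than the paper's: a single European call valued with Black-Scholes, scenarios drawn from a normal distribution (where the paper would instead fit a hyperbolic distribution to capture the heavy tails), and a 99% quantile as the single-sum figure.

```python
# Sketch only: Black-Scholes valuation and normal scenarios are our
# assumptions; the paper fits a hyperbolic distribution to the returns.
import numpy as np
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

rng = np.random.default_rng(0)
S0, K, T, r, sigma = 100.0, 100.0, 0.25, 0.02, 0.30
dt = 1 / 252  # one trading day

# Step 1: estimate the position value under many market scenarios.
scenario_returns = rng.normal(0.0, sigma * np.sqrt(dt), 100_000)
scenario_values = bs_call(S0 * np.exp(scenario_returns), K, T - dt, r, sigma)

# Step 2: summarize the scenarios into a single-sum risk measure:
# the 1% quantile of one-day P&L, i.e. a 99% VaR-style figure.
pnl = scenario_values - bs_call(S0, K, T, r, sigma)
var_99 = -np.percentile(pnl, 1)
print(f"99% one-day VaR of the call position: {var_99:.2f}")
```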

Relevance: 10.00%

Abstract:

Reorganizing a dataset so that its hidden structure can be observed is useful in any data analysis task. For example, detecting a regularity in a dataset helps us to interpret the data, compress the data, and explain the processes behind it. We study datasets that come in the form of binary matrices (tables with 0s and 1s). Our goal is to develop automatic methods that bring out certain patterns by permuting the rows and columns. We concentrate on the following patterns in binary matrices: consecutive-ones (C1P), simultaneous consecutive-ones (SC1P), nestedness, k-nestedness, and bandedness. These patterns reflect specific types of interplay and variation between the rows and columns, such as continuity and hierarchies. Furthermore, their combinatorial properties are interlinked, which helps us to develop the theory of binary matrices and efficient algorithms. Indeed, all of these patterns can be detected in a binary matrix efficiently, that is, in time polynomial in the size of the matrix. Since real-world datasets often contain noise and errors, we rarely witness perfect patterns. Therefore we also need to assess how far an input matrix is from a pattern: we count the number of flips (from 0 to 1 or vice versa) needed to bring out the perfect pattern in the matrix. Unfortunately, for most patterns, finding the minimum distance to a matrix with the perfect pattern is an NP-complete problem, which means that a polynomial-time algorithm is unlikely to exist. To find patterns in noisy datasets, we need methods that are noise-tolerant and run in practical time on large datasets. The theory of binary matrices gives rise to robust heuristics that perform well on synthetic data and discover easily interpretable structures in real-world datasets: dialectal variation in spoken Finnish, a division of European locations according to the hierarchies found in mammal occurrences, and co-occurring groups in network data. In addition to determining the distance from a dataset to a pattern, we need to determine whether the pattern is significant or merely a product of random chance. To this end we use significance testing: we deem a pattern significant if the dataset appears exceptional when compared to datasets generated from a certain null hypothesis. After detecting a significant pattern in a dataset, it is up to domain experts to interpret the results in terms of the application.
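To illustrate the kind of pattern involved, here is a small sketch (our code, not the thesis's) testing one common definition of nestedness: after sorting the rows by row sum, the set of 1-columns of each row must contain that of the next row. The check is invariant to column order and, as the abstract notes for pattern detection in general, runs in time polynomial in the matrix size.

```python
import numpy as np

def is_nested(M):
    """Test whether a 0/1 matrix is nested: sorted by decreasing row sum,
    each row's set of 1-columns must contain the next row's."""
    M = np.asarray(M, dtype=bool)
    rows = M[np.argsort(-M.sum(axis=1))]          # heaviest rows first
    return all(np.all(rows[i] >= rows[i + 1])     # support inclusion
               for i in range(len(rows) - 1))

print(is_nested([[1, 1, 1], [1, 1, 0], [1, 0, 0]]))  # True: a perfect staircase
print(is_nested([[1, 1, 0], [0, 1, 1]]))             # False: incomparable rows
```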

Relevance: 10.00%

Abstract:

The study presents a theory of utility models based on aspiration levels, as well as an application of this theory to the planning of timber flow economics. The first part of the study derives the utility-theoretic basis for the application of aspiration levels. Two basic models are dealt with: the additive and the multiplicative. Applied here solely to partial utility functions, aspiration and reservation levels are interpreted as defining piecewise-linear functions. The standpoint of the decision-maker's choices is emphasized through the use of indifference curves. The second part of the study introduces a model for the management of timber flows. The model is based on the assumption that the decision-maker is willing to specify a shape of income flow that differs from the capital-theoretic optimum. The utility model comprises four aspiration-based compound utility functions. The theory and the flow model are tested numerically by computations covering three forest holdings. The results show that the additive model is sensitive even to slight changes in relative importances and aspiration levels; this applies particularly to nearly linear production possibility boundaries of monetary variables. The multiplicative model, on the other hand, is stable because it generates strictly convex indifference curves. Due to a higher marginal rate of substitution, the multiplicative model implies a stronger dependence on forest management than the additive function. For income trajectory optimization, a method utilizing an income trajectory index is more efficient than one based on aspiration levels per management period. Smooth trajectories can be attained by squaring the deviations of the feasible trajectories from the desired one.
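A minimal sketch of this kind of aspiration-based utility (our formulation, not necessarily the study's exact one): each partial utility is piecewise linear between a reservation level (utility 0) and an aspiration level (utility 1), and the partial utilities are combined either additively or multiplicatively with importance weights.

```python
from math import prod

def partial_utility(x, reservation, aspiration):
    """Piecewise-linear partial utility: 0 at or below the reservation
    level, 1 at or above the aspiration level, linear in between."""
    if x <= reservation:
        return 0.0
    if x >= aspiration:
        return 1.0
    return (x - reservation) / (aspiration - reservation)

def additive(utils, weights):
    # Weighted sum; indifference curves are linear.
    return sum(w * u for u, w in zip(utils, weights))

def multiplicative(utils, weights):
    # Weighted product (Cobb-Douglas form); indifference curves are
    # strictly convex, which is why this variant is the more stable one.
    return prod(u ** w for u, w in zip(utils, weights))

u = [partial_utility(8.0, 5.0, 10.0), partial_utility(3.0, 2.0, 4.0)]
print(additive(u, [0.6, 0.4]), multiplicative(u, [0.6, 0.4]))
```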

Relevance: 10.00%

Abstract:

In the first decade of the 21st century, national notables were a significant theme in the Finnish theatre. The lives of artists, in particular, inspired performances that combined historical and fictional elements. In this study, I focus on the characters of female artists in 18 Finnish plays or performances from the first decade of the 21st century. The study pertains to the field of performance analysis. I approach the characters from three points of view. Firstly, I examine them through the action of the performances at the thematic level. Secondly, I concentrate on the forms of relationship between the audience and the half-historical character. Thirdly, I examine the representations of the characters and their relationships to the audience using myth as a tool. I approach the characters from the frame of feminist phenomenological theatre study, but also combine the points of view of other traditions. As a model, I adapt the approach of the theatre researcher Bert O. States, which concentrates on the relation between a play's text and an actor, and between an actor and the public. Furthermore, I use the analytical tools of performance art in examining performances counted within the contemporary performance genre. The biographical plays about these artists concentrate on the domestic sphere and take part in the conversation about the position of women in both the community and private life. They represent the heroines' work, love, temptations and hardships. The artists do not carry out heroic acts, being more like everyday heroines whose lives and art were shared with the audience in an aphoristic atmosphere. In the examined performances, criticism of the heterosexual matrix was mainly conservative, and the myths of female and male artists differed from each other: the woman artist was presented as a super heroine whose strength often demanded sacrifices, while the male artist was a weaker figure primarily pursuing his individualistic objectives. The performances proved to be a kind of documentary theatre, a hybrid of truth and fiction. Nonetheless, the constructions of subject and identity mainly represented the characters of the mythical stories and only secondarily gave a faithful rendition of the artists' lives. Although these performances were addressed to a general and heterogeneous public, their audience proved to be a strictly predefined group, for which the national myths and the experience of a collective identity emerged as an important theme. The heroine characters offered the audience "safe" idols who ensured the solidity of the community. These performances conveyed common, shared values and gave the audience an opportunity to feel empathy and to be charmed by the confessions of well-known national characters.

Relevance: 10.00%

Abstract:

This thesis studies optimisation problems related to modern large-scale distributed systems, such as wireless sensor networks and wireless ad-hoc networks. The concrete tasks that we use as motivating examples are the following: (i) maximising the lifetime of a battery-powered wireless sensor network, (ii) maximising the capacity of a wireless communication network, and (iii) minimising the number of sensors in a surveillance application. A sensor node consumes energy both when it is transmitting or forwarding data and when it is performing measurements. Hence task (i), lifetime maximisation, can be approached from two different perspectives. First, we can seek optimal data flows that make the most of the energy resources available in the network; such optimisation problems are examples of so-called max-min linear programs. Second, we can conserve energy by putting redundant sensors into sleep mode; this leads to the sleep scheduling problem, in which the objective is to find an optimal schedule that determines when each sensor node is asleep and when it is awake. In a wireless network, simultaneous radio transmissions may interfere with each other. Task (ii), capacity maximisation, therefore gives rise to another scheduling problem, the activity scheduling problem, in which the objective is to find a minimum-length conflict-free schedule that satisfies the data transmission requirements of all wireless communication links. Task (iii), minimising the number of sensors, is related to the classical graph problem of finding a minimum dominating set. However, if we are interested not only in detecting an intruder but also in locating the intruder, it is not sufficient to solve the dominating set problem; formulations such as minimum-size identifying codes and locating–dominating codes are more appropriate. This thesis presents approximation algorithms for each of these optimisation problems, i.e., for max-min linear programs, sleep scheduling, activity scheduling, identifying codes, and locating–dominating codes. Two complementary approaches are taken. The main focus is on local algorithms, which are constant-time distributed algorithms. The contributions include local approximation algorithms for max-min linear programs, sleep scheduling, and activity scheduling. In the case of max-min linear programs, tight upper and lower bounds are proved for the best possible approximation ratio that can be achieved by any local algorithm. The second approach is the study of centralised polynomial-time algorithms in local graphs – geometric graphs whose structure exhibits spatial locality. Among other contributions, it is shown that while identifying codes and locating–dominating codes are hard to approximate in general graphs, they admit a polynomial-time approximation scheme in local graphs.
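For the flavour of task (iii), here is a sketch (ours, not from the thesis) of the classical greedy approximation for minimum dominating set, which achieves a ln(n) approximation ratio; the identifying and locating–dominating code problems studied in the thesis refine this basic formulation.

```python
def greedy_dominating_set(adj):
    """Greedy approximation for minimum dominating set.
    adj maps each vertex to the set of its neighbours."""
    undominated = set(adj)
    chosen = set()
    while undominated:
        # Pick the vertex whose closed neighbourhood covers the most
        # still-undominated vertices.
        v = max(adj, key=lambda u: len(({u} | adj[u]) & undominated))
        chosen.add(v)
        undominated -= {v} | adj[v]
    return chosen

# A path on five vertices: 0 - 1 - 2 - 3 - 4.
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(greedy_dominating_set(adj))  # a dominating set such as {1, 3}
```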

Relevance: 10.00%

Abstract:

Research on reading has been successful in revealing how attention guides eye movements when people read single sentences or text paragraphs in simplified and strictly controlled experimental conditions. However, less is known about reading processes in more naturalistic and applied settings, such as reading Web pages. This thesis investigates online reading processes by recording participants' eye movements. The thesis consists of four experimental studies that examine how the location of stimuli presented outside the currently fixated region (Studies I and III), text format (Study II), animation and abrupt onset of online advertisements (Study III), and the phase of an online information search task (Study IV) affect written language processing. Furthermore, the studies investigate how the goal of the reading task affects attention allocation during reading, by comparing reading for comprehension with free browsing and by varying the difficulty of an information search task. The results show that text format affects the reading process: vertical text (one word per line) is read at a slower rate than standard horizontal text, and mean fixation durations are longer for vertical than for horizontal text. Furthermore, animated online ads and abrupt ad onsets capture online readers' attention, direct their gaze toward the ads, and disrupt the reading process. Compared to a reading-for-comprehension task, online ads are attended to more in a free browsing task. Moreover, in both tasks abrupt ad onsets result in rather immediate fixations toward the ads; this effect is enhanced when the ad is presented in the proximity of the text being read. In addition, reading processes vary as Web users proceed through online information search tasks, for example when they are searching for a specific keyword, looking for an answer to a question, or trying to find the subjectively most interesting topic. A scanning type of behaviour is typical at the beginning of the tasks, after which participants tend to switch to a more careful reading state before finishing the tasks in states referred to as decision states. Finally, the results provided evidence that left-to-right readers extract more parafoveal information to the right of the fixated word than to the left, suggesting that learning biases attentional orienting towards the reading direction.

Relevance: 10.00%

Abstract:

The blood-brain barrier (BBB) is a unique barrier that strictly regulates the entry of endogenous substrates and xenobiotics into the brain, owing to its tight junctions and the array of transporters and metabolic enzymes expressed there. Determining brain concentrations in vivo is difficult, laborious and expensive, which means there is interest in developing predictive tools of brain distribution. Predicting brain concentrations is important even in early drug development, to ensure the efficacy of drugs targeting the central nervous system (CNS) and the safety of non-CNS drugs. The literature review covers the most common current in vitro, in vivo and in silico methods of studying transport into the brain, concentrating on transporter effects. The consequences of efflux mediated by p-glycoprotein, the most widely characterized transporter expressed at the BBB, are also discussed. The aim of the experimental study was to build a pharmacokinetic (PK) model describing the brain concentrations of p-glycoprotein substrate drugs, using commonly measured in vivo parameters of brain distribution. The possibility of replacing in vivo parameter values with their in vitro counterparts was also studied. All data for the study were taken from the literature. A simple two-compartment PK model was built using the Stella™ software. Brain concentrations of morphine, loperamide and quinidine were simulated and compared with published studies. The correlation of in vitro measured efflux ratios (ER) across different studies was evaluated, in addition to the correlation between in vitro and in vivo measured ER. A Stella™ model was also constructed to simulate an in vitro transcellular monolayer experiment, in order to study the sensitivity of the measured ER to changes in passive permeability and Michaelis-Menten kinetic parameter values. Interspecies differences between rats and mice were investigated with regard to brain permeability and drug binding in brain tissue. Although the PK brain model was able to capture the concentration-time profiles of all three compounds in both brain and plasma, and performed fairly well for morphine, it underestimated brain concentrations for quinidine and overestimated them for loperamide. Because the ratio of concentrations in brain and blood depends on the ER, the variable values cited for this parameter, and its inaccuracy, could be one explanation for the failure of the predictions. Validation of the model with more compounds is needed before drawing further conclusions. In vitro ER showed variable correlation between studies, indicating variability due to experimental factors such as test concentration, but overall the differences were small. Good correlation between in vitro and in vivo ER at low concentrations supports the possibility of using in vitro ER in the PK model. The in vitro simulation illustrated that, in the simulation setting, efflux is significant only at low passive permeability, which highlights the fact that the cell model used to measure ER must have low enough paracellular permeability to correctly mimic the in vivo situation.
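A minimal sketch of a two-compartment plasma/brain model of the kind described (our own toy formulation, not the thesis's Stella™ model; all rate constants are invented for illustration, and compartment volumes are ignored), in which the efflux ratio ER scales the brain-to-plasma transport:

```python
# Toy two-compartment PK model; parameter values are illustrative only.
import numpy as np
from scipy.integrate import solve_ivp

k_el = 0.5    # 1/h, elimination from plasma (assumed)
k_in = 0.2    # 1/h, passive plasma -> brain transfer (assumed)
k_out = 0.2   # 1/h, passive brain -> plasma transfer (assumed)
ER = 5.0      # efflux ratio; multiplies brain -> plasma transport

def rhs(t, y):
    plasma, brain = y
    d_brain = k_in * plasma - ER * k_out * brain
    d_plasma = -k_el * plasma - k_in * plasma + ER * k_out * brain
    return [d_plasma, d_brain]

# Unit dose into plasma at t = 0; simulate 24 hours.
sol = solve_ivp(rhs, (0, 24), [1.0, 0.0], dense_output=True)
for t in (0.5, 2.0, 6.0):
    plasma, brain = sol.sol(t)
    print(f"t={t:>4} h  plasma={plasma:.4f}  brain={brain:.4f}")
# At distribution equilibrium brain/plasma ~ k_in / (ER * k_out),
# i.e. a higher efflux ratio lowers the brain-to-plasma ratio.
```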