401 results for MULTIPLICATIVE NOISES
Abstract:
This paper analyzes repeated procurement of services as a four-stage game divided into two periods. In each period there is (1) a contest stage à la Tullock, in which the principal selects an agent, and (2) a service stage, in which the selected agent provides a service. Since the service effort is non-verifiable, the principal faces a moral hazard problem at the service stages. This work considers how the principal should design the period-two contest so as to mitigate the moral hazard problem in the period-one service stage and to maximize total service and contest efforts. It is shown that the principal must take the agent's past service effort into account in the period-two contest success function. The results indicate that the optimal way to introduce this 'bias' is to choose a certain degree of complementarity between past service and current contest efforts: contests with 'additive bias' ('multiplicative bias') are optimal in incentive problems when effort cost is low (high). Furthermore, it is shown that the severity of the moral hazard problem increases with the cost of service effort (relative to the cost of contest effort) and with the number of agents. Finally, the results are extended to more general contest success functions.
JEL classification: C72; D82
Keywords: Biased contests; Moral hazard; Repeated game; Incentives.
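The abstract does not state the functional forms, but for orientation, a standard Tullock contest success function with the two kinds of bias can be sketched as follows (all symbols here are assumptions for illustration, not taken from the paper):

```latex
% Probability that agent i wins the period-two contest, given contest effort x_i
% and period-one service effort s_i; alpha, beta are bias weights and r is the
% usual Tullock return parameter (notation assumed).
p_i^{\text{add}}  = \frac{x_i^r + \alpha s_i}{\sum_j \bigl(x_j^r + \alpha s_j\bigr)}
\qquad
p_i^{\text{mult}} = \frac{(1 + \beta s_i)\, x_i^r}{\sum_j (1 + \beta s_j)\, x_j^r}
```

Under the additive form, past service effort shifts the agent's score independently of current effort; under the multiplicative form it scales the productivity of current effort, which is one way to obtain the complementarity between past service and current contest effort that the abstract refers to.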
Abstract:
Biotechnological techniques may help solve many problems of guava culture, such as the high perishability of the fruit. Somatic embryogenesis can generate highly multiplicative cell cultures with high regenerative potential, serving as a basis for genetic transformation. The aim of this work was to obtain somatic embryogenesis of guava (Psidium guajava L.) cv. Paluma. Immature seeds were used, inoculated on MS medium containing 400 mg L-1 L-glutamine, 100 mg L-1 myo-inositol, 60 g L-1 sucrose and 100 mg L-1 ascorbic acid, supplemented with different types and concentrations of growth regulators. Embryogenic callus appeared after 37 days of culture on medium containing 1.0 mg L-1 2,4-D + 2.0 mg L-1 2-iP, in 7% of the explants. After 65 days of culture, the treatment containing 0.5 mg L-1 CPA showed 20% of explants with direct embryos, while the treatment with 1.0 mg L-1 CPA had 14% of explants with direct embryos and 7% of explants with embryogenic callus. In 66.6% of the embryos regenerated with 0.5 mg L-1 CPA there was formation of secondary embryos. The use of IASP and BAP, aimed at embryo proliferation, increased cell proliferation, but the calli apparently lost their embryogenic potential.
Abstract:
This paper deals with a phenomenologically motivated magneto-viscoelastic coupled finite strain framework for simulating the curing process of polymers under the application of a coupled magneto-mechanical load. Magneto-sensitive polymers are prepared by mixing micron-sized ferromagnetic particles into uncured polymers. Application of a magnetic field during the curing process causes the particles to align and form chain-like structures, lending an overall anisotropy to the material. Polymer curing is a complex viscoelastic process in which a transformation from fluid to solid occurs over time. During curing, volume shrinkage also occurs due to the packing of polymer chains by chemical reactions. Such reactions impart a continuous change of magneto-mechanical properties that can be modelled by an appropriate constitutive relation in which the temporal evolution of material parameters is considered. To model the shrinkage during curing, a magnetic-induction-dependent approach is proposed, based on a multiplicative decomposition of the deformation gradient into a mechanical part and a magnetic-induction-dependent volume shrinkage part. The proposed model obeys the relevant laws of thermodynamics. Numerical examples, based on a generalised Mooney-Rivlin energy function, are presented to demonstrate the capability of the model in the case of a magneto-viscoelastically coupled load.
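For concreteness, the multiplicative decomposition mentioned above is conventionally written as follows (symbols assumed; the paper's exact form may differ):

```latex
% Multiplicative split of the deformation gradient F into a mechanical part F_m
% and a magnetic-induction-dependent volume shrinkage part F_s(B); taking the
% shrinkage part isotropic is a common modelling choice.
\mathbf{F} = \mathbf{F}_m\, \mathbf{F}_s(\mathbf{B}),
\qquad
\mathbf{F}_s(\mathbf{B}) = \bigl(J_s(\mathbf{B})\bigr)^{1/3}\,\mathbf{I},
\qquad
J = \det\mathbf{F} = J_m\, J_s
```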
Abstract:
This thesis seeks to determine what kind of communication system was used when informing the personnel of Kansallis-Osake-Pankki about the merger of Kansallis-Osake-Pankki and Suomen Yhdyspankki. The perspective is the position of the personnel as recipients of the message. The thesis outlines the emphases and tones used by management in internal communication. In addition, it looks for differences in content between the change speeches of the two banks' chief executives and for the established communication practices of each organisation. The research method is qualitative, and the research instrument of this thesis is discourse analysis. Discourse analysis is a theoretical framework whose purpose is to examine what meanings are produced in texts and to see language as a constructor of reality. The research material consists of Kansallis-Osake-Pankki's personnel magazines, the annual reports of Kansallis-Osake-Pankki and Suomen Yhdyspankki, the merger brochure, and selected press articles written on the subject. After the merger news became public, Kansallis-Osake-Pankki's internal communication was characterised by features of crisis communication: dispelling rumours and vigorously disseminating official information. After the initial shock of the merger news, five themes emerged from KOP's communication: the personification of messages in the chief executive, empathy towards the personnel in difficult situations, a sense of community, consideration of the individual, and closeness.
Abstract:
A version of cascaded systems analysis was developed specifically with the aim of studying quantum noise propagation in x-ray detectors. Signal and quantum noise propagation was then modelled in four types of x-ray detector used for digital mammography: four flat panel (FP) systems, one computed radiography system and one slot-scan silicon-wafer-based photon counting device. As required inputs to the model, the two-dimensional (2D) modulation transfer function (MTF), noise power spectra (NPS) and detective quantum efficiency (DQE) were measured for six mammography systems that utilized these different detectors. A new method to reconstruct anisotropic 2D presampling MTF matrices from 1D radial MTFs measured along different angular directions across the detector is described; an image of a sharp circular disc was used for this purpose. The effective pixel fill factor for the FP systems was determined from the axial 1D presampling MTFs measured with a sharp square edge along the two orthogonal directions of the pixel lattice. Expectation MTFs (EMTFs) were then calculated by averaging the radial MTFs over all possible phases, and the 2D EMTF was formed with the same reconstruction technique used for the 2D presampling MTF. The quantum NPS was then established by noise decomposition from homogeneous images acquired as a function of detector air kerma. This was further decomposed into its correlated and uncorrelated quantum components by fitting the radially averaged quantum NPS with the radially averaged EMTF². This whole procedure allowed a detailed analysis of the influence of aliasing, signal and noise decorrelation, x-ray capture efficiency and global secondary gain on the NPS and detector DQE. The influence of noise statistics, pixel fill factor, and additional electronic and fixed pattern noise on the DQE was also studied. The 2D cascaded model and the decompositions performed on the acquired images also explained the observed quantum NPS and DQE anisotropy.
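As an illustration of the kind of post-processing described above (radial averaging of 2D spectra and computation of a frequency-dependent DQE), here is a minimal NumPy sketch; it assumes the standard relation DQE(f) = MTF(f)² / (q · NNPS(f)) and is not the authors' code:

```python
import numpy as np

def radial_average(spectrum_2d, bin_width=1.0):
    """Radially average a centred 2D spectrum (e.g. an NPS or EMTF matrix)."""
    ny, nx = spectrum_2d.shape
    y, x = np.indices((ny, nx))
    r = np.hypot(x - nx / 2, y - ny / 2)          # radial frequency bin per pixel
    bins = (r / bin_width).astype(int)
    sums = np.bincount(bins.ravel(), weights=spectrum_2d.ravel())
    counts = np.maximum(np.bincount(bins.ravel()), 1)  # guard against empty bins
    return sums / counts

def dqe(mtf, nnps, q):
    """DQE from the presampling MTF, the normalised NPS and the photon
    fluence q (photons per unit area); standard textbook relation."""
    return mtf**2 / (q * nnps)
```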
Abstract:
The basic goal of this study is to extend old, and propose new, ways to generate knapsack sets suitable for use in public key cryptography. The knapsack problem and its cryptographic use are reviewed in the introductory chapter. Terminology is based on common cryptographic vocabulary; for example, solving the knapsack problem (here a subset sum problem) is termed decipherment. Chapter 1 also reviews the most famous knapsack cryptosystem, the Merkle-Hellman system. It is based on a superincreasing knapsack and uses modular multiplication as a trapdoor transformation. The insecurity caused by these two properties exemplifies the two general categories of attacks against knapsack systems, and these categories provide the motivation for Chapters 2 and 4. Chapter 2 discusses the density of a knapsack and the dangers of having a low density. Chapter 3 interrupts the more abstract treatment for a while by showing examples of small injective knapsacks and extrapolating conjectures on some characteristics of knapsacks of larger size, especially their density and number. The most common trapdoor technique, modular multiplication, is likely to cause insecurity, but, as argued in Chapter 4, it is difficult to find any other simple trapdoor techniques. This discussion also provides a basis for the introduction of various categories of non-injectivity in Chapter 5. Besides general ideas on the non-injectivity of knapsack systems, Chapter 5 introduces and evaluates several ways to construct such systems, most notably the "exceptional blocks" in superincreasing knapsacks and the use of "too small" a modulus in the modular multiplication used as a trapdoor technique. The author believes that non-injectivity is the most promising direction for the development of knapsack cryptosystems. Chapter 6 modifies two well-known knapsack schemes: the Merkle-Hellman multiplicative trapdoor knapsack and the Graham-Shamir knapsack. The main interest is in aspects other than non-injectivity, although that is also exploited. At the end of the chapter, constructions proposed by Desmedt et al. are presented to serve as a comparison for the developments of the subsequent three chapters. Chapter 7 provides a general framework for the iterative construction of injective knapsacks from smaller knapsacks, together with a simple example, the "three elements" system. In Chapters 8 and 9 the general framework is put into practice in two different ways. Modularly injective small knapsacks are used in Chapter 8 to construct a large knapsack, called the congruential knapsack. The addends of a subset sum can be found by decrementing the sum iteratively, using each of the small knapsacks and their moduli in turn. The construction is also generalized to the non-injective case, which can lead to especially good density results without complicating the deciphering process too much. Chapter 9 presents three related ways to realize the general framework of Chapter 7. The main idea is to join iteratively small knapsacks, each element of which satisfies the superincreasing condition. As a whole, none of these systems need become superincreasing, though the density achieved is no better than that. The new knapsack systems are injective, but they can be deciphered with the same searching method as the non-injective knapsacks with the "exceptional blocks" of Chapter 5. The final chapter, Chapter 10, first reviews the Chor-Rivest knapsack system, which has withstood all cryptanalytic attacks. A couple of modifications to the use of this system are presented in order to further increase its security or to make the construction easier. The latter goal is pursued by reducing the size of the Chor-Rivest knapsack embedded in the modified system.
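To make the reviewed Merkle-Hellman construction concrete, here is a toy sketch of the superincreasing knapsack plus modular-multiplication trapdoor (parameters are illustrative only; real instances use hundreds of large weights, and this basic scheme is known to be broken):

```python
def solve_superincreasing(weights, target):
    """Greedy subset-sum solver; works because each weight exceeds the sum
    of all preceding ones (the superincreasing condition)."""
    bits = []
    for w in reversed(weights):
        if w <= target:
            bits.append(1)
            target -= w
        else:
            bits.append(0)
    if target != 0:
        raise ValueError("no solution")
    return bits[::-1]

# Trapdoor: public weights b_i = a_i * u mod m. Decipherment multiplies the
# ciphertext by u^{-1} mod m, reducing it to the easy superincreasing instance.
a = [2, 3, 7, 14, 30]            # superincreasing private knapsack
m, u = 61, 17                    # modulus > sum(a), multiplier with gcd(u, m) = 1
b = [(ai * u) % m for ai in a]   # public knapsack

msg = [1, 0, 1, 1, 0]
c = sum(bi for bi, bit in zip(b, msg) if bit)   # encipher: subset sum
c_priv = (c * pow(u, -1, m)) % m                # apply trapdoor
assert solve_superincreasing(a, c_priv) == msg  # decipher greedily
```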
Abstract:
This paper sets out to identify the initial positions of the different decision makers who intervene in a group decision-making process with a reduced number of actors, and to establish possible consensus paths between these actors. As a methodological support, it employs one of the most widely known multicriteria decision techniques, namely the Analytic Hierarchy Process (AHP). Assuming that the judgements elicited by the decision makers follow the so-called multiplicative model (Crawford and Williams, 1985; Altuzarra et al., 1997; Laininen and Hämäläinen, 2003) with log-normal errors and unknown variance, a Bayesian approach is used in the estimation of the relative priorities of the alternatives being compared. These priorities, estimated by way of the median of the posterior distribution and normalised in a distributive manner (priorities add up to one), are a clear example of compositional data that will be used in the search for consensus between the actors involved in the resolution of the problem through the use of Multidimensional Scaling tools.
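The multiplicative model with log-normal errors referenced above is commonly stated as follows (a standard formulation; symbols assumed):

```latex
% Pairwise comparison judgement r_ij of alternatives i and j, with priorities
% w_i and multiplicative log-normal errors e_ij; the log transform yields an
% additive Gaussian model for estimation.
r_{ij} = \frac{w_i}{w_j}\, e_{ij},
\qquad
\log r_{ij} = \mu_i - \mu_j + \varepsilon_{ij},
\quad \varepsilon_{ij} \sim N(0, \sigma^2),
\qquad
w_i = \frac{e^{\mu_i}}{\sum_k e^{\mu_k}}
```

The last expression is the distributive normalisation making the priorities add up to one, as in the abstract.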
Abstract:
In this work a fast method for the determination of total sugar levels in samples of raw coffee was developed using near infrared spectroscopy and multivariate regression. The sugar levels were initially obtained using gravimetry as the reference method. The regression models were then built from the near infrared spectra of the coffee samples. The original spectra were pre-treated with the Kubelka-Munk transformation and multiplicative signal correction. The proposed analytical method made possible the direct determination of total sugar levels in the samples, with errors below 8% with respect to the conventional methodology.
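Of the two pre-treatments named, multiplicative signal (scatter) correction is simple enough to sketch. The NumPy version below regresses each spectrum on the mean spectrum and removes the fitted offset and slope, which is the textbook form of MSC (not necessarily the exact variant used in the paper):

```python
import numpy as np

def msc(spectra, reference=None):
    """Multiplicative signal/scatter correction of a set of spectra
    (rows = samples, columns = wavelengths)."""
    X = np.asarray(spectra, dtype=float)
    ref = X.mean(axis=0) if reference is None else np.asarray(reference, float)
    corrected = np.empty_like(X)
    for i, x in enumerate(X):
        slope, intercept = np.polyfit(ref, x, 1)   # fit x ≈ intercept + slope*ref
        corrected[i] = (x - intercept) / slope     # remove additive/multiplicative scatter
    return corrected
```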
Abstract:
The paper examines the international distribution of energy intensities as a conventional proxy indicator of energy efficiency and sustainability in the consumption of resources, by employing some descriptive tools from the analysis of inequality and polarization. The analysis specifically focuses on the following points: firstly, inequalities are evaluated synthetically based on diverse summary measures and Lorenz curves; secondly, different factorial decompositions are undertaken that assist in investigating some explanatory factors (weighting factors, multiplicative factors and decomposition by groups); and thirdly, an analysis is made of the polarization of intensities when groups of countries are defined endogenously and exogenously. The results obtained have significant implications from both academic and political perspectives.
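As one concrete instance of the summary measures mentioned, a weighted Theil index of energy intensities can be computed as below (a minimal sketch; the paper's exact measures and weighting scheme are not specified here):

```python
import numpy as np

def theil_index(intensity, shares):
    """Weighted Theil T index of a cross-country intensity distribution.
    `shares` are country weights (e.g. energy or GDP shares) summing to one."""
    x = np.asarray(intensity, dtype=float)
    w = np.asarray(shares, dtype=float)
    mu = np.sum(w * x)                      # weighted mean intensity
    ratio = x / mu
    return float(np.sum(w * ratio * np.log(ratio)))

# Example: three countries with equal weights; zero means perfect equality
print(theil_index([1.0, 2.0, 4.0], [1/3, 1/3, 1/3]))
```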
Abstract:
This paper analyses the international inequalities in CO2 emissions intensity for the period 1971-2009 and assesses explanatory factors. Multiplicative, group and additive methodologies of inequality decomposition are employed. The first allows us to clarify the separate roles of the carbonisation index and energy intensity in the pattern observed for inequalities in CO2 intensities; the second allows us to understand the role of regional groups; and the third allows us to investigate the role of different fossil energy sources (coal, oil and gas). The results show that, first, the reduction in global emissions intensity has coincided with a significant reduction in international inequality. Second, the bulk of this inequality and its reduction are attributed to differences between the groups of countries considered. Third, coal is the main energy source explaining these inequalities, although the growth in the relative contribution of gas is also remarkable. Fourth, the bulk of inequalities between countries and their decline are explained by differences in energy intensities, although there are significant differences in the patterns demonstrated by different groups of countries.
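The separate roles of the carbonisation index and energy intensity rest on the standard multiplicative identity (with E denoting primary energy consumption):

```latex
\frac{\mathrm{CO_2}}{\mathrm{GDP}}
= \underbrace{\frac{\mathrm{CO_2}}{E}}_{\text{carbonisation index}}
\times
\underbrace{\frac{E}{\mathrm{GDP}}}_{\text{energy intensity}}
```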
Abstract:
This paper examines the factors that have influenced the energy intensity of Spanish road freight transport by heavy goods vehicles over the period 1996-2012. It aims to contribute to a better understanding of the factors behind the change in the energy intensity of road freight and to inform the design of measures to improve energy efficiency in road freight transport. The paper uses both annual single-period and chained multi-period multiplicative LMDI-II decomposition analyses. The results suggest that the decrease in the energy intensity of Spanish road freight over the period is explained by the change in the real energy intensity index (lower energy consumption per tonne-kilometre transported), which is partially offset by the behaviour of the structural index (a greater share in freight transport of commodities whose transportation is more energy-intensive). The change in energy intensity is analysed in more depth by quantifying the contribution of each commodity through the attribution of changes in Divisia indices.
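For reference, a two-factor multiplicative LMDI decomposition of an aggregate intensity change takes the generic form below (the paper uses LMDI-II weights; this template only fixes notation):

```latex
% Aggregate intensity ratio between year T and base year 0, split into a
% structural index and a real-intensity index; w_i are normalised
% logarithmic-mean weights, S_i the tonne-kilometre share and I_i the real
% energy intensity of commodity i.
D_{\mathrm{tot}} = \frac{I^{T}}{I^{0}} = D_{\mathrm{str}} \times D_{\mathrm{int}},
\qquad
D_{\mathrm{str}} = \exp\Bigl(\sum_i w_i \ln \frac{S_i^{T}}{S_i^{0}}\Bigr),
\qquad
D_{\mathrm{int}} = \exp\Bigl(\sum_i w_i \ln \frac{I_i^{T}}{I_i^{0}}\Bigr)
```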
Abstract:
A neural network procedure for solving inverse chemical kinetic problems is discussed in this work. Rate constants are calculated from the product concentrations of an irreversible consecutive reaction: the hydrogenation of the citral molecule, a process of industrial interest. Both simulated and experimental data are considered. Errors of up to 7% in the simulated concentrations were assumed in order to investigate the robustness of the inverse procedure. The proposed method is also compared with two common methods in nonlinear analysis: the Simplex and Levenberg-Marquardt approaches. In all situations investigated, the neural network approach was numerically stable and robust with respect to deviations in the initial conditions and experimental noise.
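For a flavour of the inverse problem (though not of the neural network itself), the sketch below recovers the two rate constants of an irreversible consecutive reaction A → B → C from noisy intermediate concentrations with a Levenberg-Marquardt fit, i.e. one of the comparison methods mentioned; all parameter values are illustrative:

```python
import numpy as np
from scipy.optimize import least_squares

def intermediate_conc(t, k1, k2, a0=1.0):
    """B(t) for the irreversible consecutive reaction A -> B -> C (k1 != k2)."""
    return a0 * k1 / (k2 - k1) * (np.exp(-k1 * t) - np.exp(-k2 * t))

# Simulate measurements with ~7% multiplicative noise, echoing the abstract's test
t = np.linspace(0.1, 10.0, 40)
rng = np.random.default_rng(0)
y = intermediate_conc(t, 1.0, 0.4) * (1 + 0.07 * rng.standard_normal(t.size))

# Levenberg-Marquardt fit of (k1, k2) to the noisy data
fit = least_squares(lambda k: intermediate_conc(t, k[0], k[1]) - y,
                    x0=[0.5, 0.2], method="lm")
print(fit.x)  # recovered estimates of the rate constants
```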
Abstract:
The objective of this study was to simulate the impact of elevated-temperature scenarios on leaf development of potato in Santa Maria, RS, Brazil. Leaf appearance was estimated using a multiplicative model with a non-linear temperature response function, which calculates the daily leaf appearance rate (LAR, leaves day-1) and the accumulated number of leaves (LN) from crop emergence to the appearance of the last leaf. Leaf appearance was estimated over 100 years in the following scenarios: current climate, +1 °C, +2 °C, +3 °C, +4 °C and +5 °C. The LAR model was run with coefficients of the Asterix cultivar for five emergence dates in two growing seasons (fall and spring). The variable of interest was the duration (in days) of the phase from crop emergence to the appearance of the final leaf number (EM-FLN). Statistical analysis was performed assuming a three-factor experiment, with the main effects being climate scenario, growing season and emergence date, in a completely randomized design using the years (one hundred) as replications. The results showed that warmer scenarios lead to an increase in the duration of the leaf appearance phase in the fall growing season and a decrease in the spring growing season, indicating the high vulnerability and complexity of the response of the potato crop grown in a subtropical environment to climate change.
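The abstract does not give the response function, but a common choice in such multiplicative LAR models is the Wang-Engel beta function, sketched here with purely illustrative cardinal temperatures:

```python
import numpy as np

def wang_engel_f(T, tmin=7.0, topt=21.0, tmax=30.0):
    """Wang-Engel beta temperature response, scaled to [0, 1].
    Cardinal temperatures here are illustrative, not the paper's values."""
    if T <= tmin or T >= tmax:
        return 0.0
    alpha = np.log(2.0) / np.log((tmax - tmin) / (topt - tmin))
    num = (2.0 * (T - tmin) ** alpha * (topt - tmin) ** alpha
           - (T - tmin) ** (2.0 * alpha))
    return num / (topt - tmin) ** (2.0 * alpha)

def accumulated_leaf_number(daily_mean_temps, lar_max=0.5):
    """Multiplicative model: LN = sum over days of LAR_max * f(T)."""
    return sum(lar_max * wang_engel_f(T) for T in daily_mean_temps)
```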
Abstract:
This master's thesis examines budgeting decision-making in Finnish municipalities, an issue that has not received much attention in the academic literature. Furthermore, the thesis investigates whether current budgeting decision-making practices could be improved by using a new kind of budget decision-making tool that is based on presenting multiple investment or divestment alternatives to the decision makers simultaneously, as a frontier, rather than one by one. In the empirical part of the thesis, the results from three case interviews are presented in order to answer the research questions of the study. The empirical evidence suggests that there is a need for the presented budgeting decision-making tool in Finnish municipalities. The current routine is seen as good, even though the interviewees would warmly welcome the alternative method, which would function as a linkage between strategy and the budget. The results also indicate that even though municipalities are left with a lot of room in their budgeting decision-making routine, the routine closely, though not always purposely, follows the given guidelines and legislation. The major problem in the current practices seems to be a lack of understanding, as the decision makers find it hard to fully understand the multiplicative effects of budget-related decisions.
Abstract:
In this thesis, the main point of interest is the robust control of a DC/DC converter. The use of reactive components in power conversion gives rise to dynamical effects in DC/DC converters, and these dynamical effects mandate the use of active control. Active control uses measurements from the converter to correct errors present in the converter's output. The controller needs to be able to perform in the presence of varying component values, different kinds of load disturbances and measurement noise. Such a feature of a control design is referred to as robustness. The thesis also contains a survey of the general properties of DC/DC converters and their effects on control design. A linear robust control design method is studied; a robust controller is then designed and applied to the current control of a phase-shifted full-bridge converter. The experimental results are shown to match the simulations.
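As a small illustration of why converter dynamics call for careful active control (using an averaged buck converter model rather than the thesis's phase-shifted full bridge, and a plain PI loop rather than the robust design), one can check loop margins with the python-control package:

```python
import control as ct

# Averaged small-signal control-to-output model of a buck converter; all values
# are illustrative stand-ins, not the thesis's converter.
Vin, L, C, R = 24.0, 100e-6, 470e-6, 5.0       # V, H, F, ohm
G = ct.tf([Vin], [L * C, L / R, 1.0])          # lightly damped LC resonance

# Plain PI compensator as a placeholder for the robust design studied; gains
# are kept low so the crossover stays below the resonance.
Kp, Ki = 0.001, 2.0
K = ct.tf([Kp, Ki], [1.0, 0.0])

loop = K * G
gm, pm, wcg, wcp = ct.margin(loop)             # open-loop stability margins
closed = ct.feedback(loop, 1)                  # unity-feedback closed loop
print(f"gain margin {gm:.1f} (abs), phase margin {pm:.1f} deg")
```

The lightly damped resonance limits how much gain such a simple loop tolerates, which is the kind of sensitivity to component values that motivates a robust design.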