964 results for Distribution (Probability theory)
Abstract:
Audit risk is the risk that the auditor expresses an inappropriate audit opinion when the financial statements are materially misstated. This kind of risk indirectly appears in the operations of credit institutions and financial enterprises when the material misstatement is in the financed entity's audited statements that serve as a basis for lending decisions, or when the decision to continue financing is made based upon credit covenants calculated from misstated information. The risks of the audit process reflect the business risks of the auditee, so the assessment of risks, and the planning and performance of the audit based on it, is of key importance. The current study, connecting to No. 4, 2011 of Hitelintézeti Szemle, also discusses the topic of risk and uncertainty, or to be more precise, a practical implementation of the aforementioned: the application of belief functions in the field of external audit, without the aim of achieving completeness or textbook-like scrutiny in building up the theory. While the formalism is virtually unknown in Hungary, on the international scene empirical studies have pointed out the possible advantages of the method in contrast to risk assessments based on the traditional theory of probability. Accordingly, belief functions provide a better representation of auditors' perception of risk because, in contrast to the traditional model, they deal with three rather than two states: the existence of supportive evidence, the existence of negative evidence, and the lack of evidence.
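The three-state idea can be made concrete with a small Dempster-Shafer sketch (not taken from the study; all mass values are illustrative assumptions): mass on "fairly stated" is positive evidence, mass on "misstated" is negative evidence, and mass left on the whole frame is the absence of evidence, a state a single probability number cannot express.

```python
from itertools import product

# Frame of discernment: the account is either fairly stated or misstated.
FAIR = frozenset({"fair"})
MISSTATED = frozenset({"misstated"})
THETA = frozenset({"fair", "misstated"})  # total ignorance: no evidence

def combine(m1, m2):
    """Dempster's rule of combination for two mass functions."""
    raw, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            raw[inter] = raw.get(inter, 0.0) + x * y
        else:
            conflict += x * y
    # Normalize away the conflicting mass.
    return {s: v / (1.0 - conflict) for s, v in raw.items()}

def belief(m, hypothesis):
    """Bel(H): total mass committed to subsets of H."""
    return sum(v for s, v in m.items() if s <= hypothesis)

# Illustrative masses (assumed, not from the study): analytics give some
# positive evidence, confirmations a little negative evidence; the rest
# stays on THETA, i.e. "no evidence either way".
m_analytics = {FAIR: 0.6, THETA: 0.4}
m_confirmations = {FAIR: 0.2, MISSTATED: 0.1, THETA: 0.7}

m = combine(m_analytics, m_confirmations)
print("Bel(fair)      =", round(belief(m, FAIR), 3))
print("Bel(misstated) =", round(belief(m, MISSTATED), 3))
print("uncommitted    =", round(m.get(THETA, 0.0), 3))
```

Note how the uncommitted mass on THETA survives the combination: the formalism keeps "evidence gathered so far" separate from "evidence against", which is the representational advantage the abstract describes.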
Abstract:
In this dissertation, I examine both theoretically and empirically the relationship between stock prices and income distribution using an endogenous growth model with social status impatience. The theoretical part looks into how status impatience and current economic status jointly determine time preference, savings, future economic status, stock prices, growth and wealth distribution in the steady state. This work builds on Burgstaller and Karayalcin (1996). More specifically, I look at (i) the effects of the distribution of status impatience levels on the distribution of steady state assets, incomes and consumption and (ii) the effects of changes in relative levels of status impatience on stock prices. From (i) and (ii), I derive the correlation between stock prices, incomes and asset distribution. The analysis of the stock market is also undertaken in the presence of adjustment costs to investment. The empirical chapter looks at (i) the correlation between income inequality and long-run economic growth on the one hand and (ii) the correlation between stock market prices and income inequality on the other. The role of stock prices and social status is examined to better understand the forces that enable a country to grow over time and to determine why output per capita varies across countries. The data are from Summers and Heston (1988), Barro and Wolf (1989), Alesina and Rodrik (1994), the Global Financial Database (1997) and the World Bank. Data for social status are collected through a primary sample survey on the internet. Twenty-five developed and developing countries are included in the sample. The model developed in this study was specified as a system of simultaneous equations in which per capita growth rate and income inequality were endogenous variables; a stock price index and social status measures were also incorporated. The results indicate that income inequality is inversely related to economic growth. In addition, an increase in income inequality arising from higher stock prices constrains growth. Moreover, where social status is determined by income levels, it influences long-run growth. These results support the findings of Persson and Tabellini (1994) and Alesina and Rodrik (1994).
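With growth and inequality both endogenous, the standard estimation route for such a system is two-stage least squares. The sketch below, on synthetic data with assumed coefficients and variable names (none of it from the dissertation), shows the mechanics:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Illustrative data-generating process: a common shock u makes inequality
# endogenous in the growth equation; `status` is the excluded instrument.
status = rng.normal(size=n)
stock = rng.normal(size=n)
u = rng.normal(size=n)
inequality = 0.5 * status + 0.3 * stock + u + rng.normal(size=n)
growth = -0.4 * inequality + 0.2 * stock + u + rng.normal(size=n)

def two_sls(y, endog, exog, instrument):
    """Two-stage least squares for one endogenous regressor."""
    ones = np.ones_like(y)
    # First stage: project the endogenous regressor on instrument + exog.
    Z = np.column_stack([ones, instrument, exog])
    endog_hat = Z @ np.linalg.lstsq(Z, endog, rcond=None)[0]
    # Second stage: OLS of y on the fitted values and exog regressors.
    X = np.column_stack([ones, endog_hat, exog])
    return np.linalg.lstsq(X, y, rcond=None)[0]

beta = two_sls(growth, inequality, stock, status)
print("2SLS estimate of the inequality effect:", round(beta[1], 3))
# Lands near the assumed structural coefficient of -0.4, whereas plain
# OLS would be biased by the common shock u.
```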
Abstract:
Extreme stock price movements are of great concern to both investors and the entire economy. For investors, a single negative return, or a combination of several smaller returns, can possibly wipe out so much capital that the firm or portfolio becomes illiquid or insolvent. If enough investors experience this loss, it could shock the entire economy; an example of such a case is the stock market crash of 1987. Furthermore, there has been a lot of recent interest regarding the increasing volatility of stock prices. This study presents an analysis of extreme stock price movements. The data utilized were the daily returns for the Standard and Poor's 500 index from January 3, 1978 to May 31, 2001. Research questions were analyzed using the statistical models provided by extreme value theory. One of the difficulties in examining stock price data is that there is no consensus regarding the correct shape of the distribution function generating the data. An advantage of extreme value theory is that no detailed knowledge of this distribution function is required to apply the asymptotic theory; we focus on the tail of the distribution. Extreme value theory allows us to estimate a tail index, which we use to derive bounds on the returns for very low probabilities of an excess. Such information is useful in evaluating the volatility of stock prices. There are three possible limit laws for the maximum: Gumbel (thin-tailed), Fréchet (thick-tailed) or Weibull (bounded tail). Results indicated that extreme returns during the time period studied follow a Fréchet distribution. Thus, this study finds that extreme value analysis is a valuable tool for examining stock price movements and can be more efficient than the usual variance in measuring risk.
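The tail index mentioned here is commonly estimated with the Hill estimator. A minimal sketch, using synthetic heavy-tailed returns as a stand-in for the S&P 500 series (the function name and parameter choices are ours, not the study's):

```python
import numpy as np

def hill_tail_index(returns, k):
    """Hill estimator of the tail index alpha from the k largest losses.
    A small finite alpha indicates a heavy (Frechet-type) tail."""
    losses = np.sort(-returns)[::-1]   # losses, largest first
    losses = losses[losses > 0]
    top = losses[: k + 1]              # k exceedances over the (k+1)-th
    log_excess = np.log(top[:k] / top[k])
    return 1.0 / np.mean(log_excess)

# Synthetic daily returns with a heavy lower tail (Student-t, df = 3,
# so tail index 3), standing in for the S&P 500 series of the study.
rng = np.random.default_rng(1)
returns = rng.standard_t(df=3, size=5876) * 0.01   # ~23 years of trading days

print("Hill alpha:", round(hill_tail_index(returns, k=100), 2))
# The estimate should sit near 3, i.e. in the Frechet domain of attraction.
```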
Abstract:
Habitat loss and fragmentation have been implicated as driving forces behind recent waves of extinction. The regional landscape where this study occurred is a mosaic of forest and grassland, and therefore provides an ideal system with which to investigate the implications of habitat patchiness for the distribution and ecology of organisms. Here I describe patterns of amphibian and reptile distribution among and within habitats at the study site, investigate associations between habitat and community structure, describe nested subset patterns on forest islands, and quantify the relationship between body size and density across ecological scales and taxonomic groups. Species richness did not vary across habitats, between forest island isolation classes, or between island edges and cores. In contrast, species composition varied at all three ecological scales, reflecting differences in the distribution of both forest- and open-habitat-affiliated species. Species composition was associated with multivariate habitat profiles, with differences occurring along the isolation gradient of forest islands rather than the area gradient. The relationship between species composition and habitat was stronger for amphibians than for reptiles, a pattern that may be ascribed to physiological differences between the two groups. Analysis of nested subset patterns of community structure indicated that the species composition of islands is nested as a function of isolation. Four species whose distribution on forest islands seems to be dispersal-limited drive the relationship between nestedness and isolation. Although there were several examples of shifts in body size across spatial scales and taxonomic groups, body size was not associated with density as predicted by theory, which may reflect differences between real and habitat islands, or differential responses of poikilothermic vertebrates to changes in density relative to homeotherms. Taken together, the strongest result to emerge from this research is the importance of isolation, rather than area, for community structure in this system. Much evidence suggested that different ecological groups of species show distinct patterns of distribution both within and among habitat types. This suggests that species distributions at this site are not the result of 'neutral' processes at the community level, but rather reflect fundamental differences in the ecology of the component species.
Abstract:
Chemical kinetics is an exciting and active field. The prevailing theories make a number of simplifying assumptions that do not always hold in actual cases. Another current problem concerns the development of efficient numerical algorithms for solving the master equations that arise in the description of complex reactions. The objective of the present work is to furnish a completely general and exact theory of reaction rates, in a form reminiscent of transition state theory, valid for all fluid phases, and also to develop a computer program that can solve complex reactions by finding the concentrations of all participating substances as functions of time. To do so, the full quantum scattering theory is used to derive the exact rate law, and the resulting cumulative reaction probability is put into several equivalent forms that take into account all relativistic effects where applicable, including one that is strongly reminiscent of transition state theory but includes corrections from scattering theory. Two programs, one for solving complex reactions and the other for solving first-order linear kinetic master equations, have been developed and tested for simple applications.
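As a stand-in for the concentration-versus-time solver described (not the paper's quantum-scattering treatment), here is a minimal sketch assuming classical mass-action kinetics for a first-order chain A -> B -> C, integrated with SciPy; rate constants and initial concentrations are arbitrary:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative first-order chain A -> B -> C with rate constants k1, k2.
k1, k2 = 1.0, 0.5

def rates(t, c):
    """Mass-action rate law: returns d[A]/dt, d[B]/dt, d[C]/dt."""
    a, b, _ = c
    return [-k1 * a, k1 * a - k2 * b, k2 * b]

sol = solve_ivp(rates, t_span=(0.0, 10.0), y0=[1.0, 0.0, 0.0],
                t_eval=np.linspace(0.0, 10.0, 6))

for t, (a, b, c) in zip(sol.t, sol.y.T):
    print(f"t={t:4.1f}  [A]={a:.3f}  [B]={b:.3f}  [C]={c:.3f}")
```

The same structure generalizes to a linear master equation dp/dt = K p, where the right-hand side becomes a matrix-vector product with the rate matrix K.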
Abstract:
The influence of hydrological dynamics on vegetation distribution and the structuring of wetland environments is of growing interest as wetlands are modified by human action and increasingly threatened by climate change. Hydrological properties have long been considered a driving force in structuring wetland communities. We link hydrological dynamics with vegetation distribution across Everglades National Park (ENP) using two publicly available datasets to study the probability structure of the frequency, duration, and depth of inundation events, along with their relationship to vegetation distribution. This study is among the first to show hydrologic structuring of vegetation communities at wide spatial and temporal scales, as results indicate that the percentage of time a location is inundated and its mean depth are the principal structuring variables to which individual communities respond. For example, sawgrass, the most abundant vegetation type within the ENP, is found across a wide range of inundation percentages and mean depths, while other communities like pine savanna or red mangrove scrub are more restricted in their distribution and found disproportionately at particular depths and inundation levels. These results, along with the probabilistic structure of hydropatterns, potentially allow for the evaluation of climate change impacts on wetland vegetation community structure and distribution.
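The two structuring variables identified, percentage of time inundated and mean depth, are simple to compute from a daily water-level record. A sketch on synthetic data (the seasonal signal and noise level are assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
days = np.arange(3650)  # ten years of daily records

# Synthetic daily water depth (m relative to ground surface): a seasonal
# cycle plus noise; negative values mean the water table is below ground.
depth = 0.15 * np.sin(2 * np.pi * days / 365) + rng.normal(0, 0.1, days.size)

inundated = depth > 0
pct_time = 100.0 * inundated.mean()
mean_depth_wet = depth[inundated].mean()
# Frequency: number of distinct inundation events (dry-to-wet transitions).
n_events = int(((~inundated[:-1]) & inundated[1:]).sum() + inundated[0])

print(f"inundated {pct_time:.1f}% of the time, "
      f"mean depth when wet {mean_depth_wet:.2f} m, {n_events} events")
```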
Abstract:
This paper addresses the issue of hotel operators identifying effective means of allocating rooms through various electronic channels of distribution. Relying upon the theory of coercive isomorphism, the authors constructed a think tank to identify and define the electronic channels of distribution currently being utilized in the hotel industry. Through two full-day focus groups consisting of key hotel executives and industry practitioners, distribution channels were identified, as were the challenges and solutions associated with each.
Abstract:
In the mid-19th century, Horace Mann insisted that a broad provision of public schooling should take precedence over the liberal education of an elite group. In that regard, his generation constructed a state-sponsored common schooling enterprise to educate the masses. More than 100 years later, the institution of public schooling fails to maintain an image fully representative of the ideals of equity and inclusion. Critical theory in educational thought associates the dominant practice of functional schooling with maintenance of the status quo: an unequal distribution of financial, political, and social resources. This study examined the empirical basis for the association of public schooling with the status quo using the most recent and comparable cross-country income inequality data. Multiple regression analysis evaluated the possible relationship between national income inequality change over the period 1985-2005 and variables representing national measures of education supply in the prior decade. The estimated model of income inequality development attempted to quantify the relationship between education supply factors and subsequent income inequality developments while controlling for economic, demographic, and exogenous factors. The sample included all nations with comparable income inequality data over the measurement period (N = 56). Does public school supply affect national income distribution? The estimated model suggested that an increase in the average years of schooling among the population aged 15 years or older, measured over the period 1975-1985, provided a mechanism that resulted in a more equal distribution of income over the period 1985-2005 among low and lower-middle income nations. The model also suggested that income inequality increased less, or decreased more, in smaller economies and when the share of the population under age 15 grew more slowly over the period 1985-2000. In contrast, this study identified no significant relationship between school supply changes measured over prior periods and income inequality development over the period 1985-2005 among upper-middle and high income nations.
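The regression described can be sketched as ordinary least squares of inequality change on a schooling-supply measure plus controls. Everything below (variable names, coefficients, data) is an illustrative assumption, not the study's dataset:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 56  # nations in the study's sample

# Synthetic stand-ins for the study's variables (illustrative only).
schooling_change = rng.normal(size=n)   # avg years of schooling, 1975-1985
economy_size = rng.normal(size=n)       # economic control
youth_growth = rng.normal(size=n)       # growth of population under 15
inequality_change = (-0.3 * schooling_change + 0.2 * youth_growth
                     - 0.1 * economy_size + rng.normal(0, 0.5, n))

# OLS with an intercept and controls, via least squares.
X = np.column_stack([np.ones(n), schooling_change, economy_size, youth_growth])
beta, *_ = np.linalg.lstsq(X, inequality_change, rcond=None)
print("estimated effect of schooling supply on inequality change:",
      round(beta[1], 3))
```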
Abstract:
The three-parameter lognormal distribution is the extension of the two-parameter lognormal distribution that meets the needs of the biological, sociological, and other fields. Numerous research papers have been published on parameter estimation problems for lognormal distributions. The inclusion of the location parameter brings technical difficulties to the parameter estimation problems, especially for interval estimation. This paper proposes a method for constructing exact confidence intervals and exact upper confidence limits for the location parameter of the three-parameter lognormal distribution. The point estimation problem is discussed as well. The performance of the point estimator is compared with that of the maximum likelihood estimator, which is widely used in practice. Simulation results show that the proposed method is less biased in estimating the location parameter. The large-sample-size case is also discussed.
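For reference, the maximum likelihood baseline against which the proposed method is compared can be reproduced with SciPy, whose lognorm distribution carries exactly the location (shift) parameter at issue; the true parameter values below are arbitrary:

```python
import numpy as np
from scipy import stats

# Simulate a three-parameter lognormal: shape s (sigma of the underlying
# normal), location (the shift), and scale (exp of the underlying mean).
s_true, loc_true, scale_true = 0.8, 5.0, 2.0
rng = np.random.default_rng(4)
sample = stats.lognorm.rvs(s_true, loc=loc_true, scale=scale_true,
                           size=200, random_state=rng)

# Maximum likelihood fit of all three parameters (the estimator the
# paper's proposed method is compared with).
s_hat, loc_hat, scale_hat = stats.lognorm.fit(sample)
print(f"MLE: shape={s_hat:.3f}, location={loc_hat:.3f}, scale={scale_hat:.3f}")
```

A known weakness of this MLE, and part of the motivation for exact intervals, is that the likelihood is unbounded as the location estimate approaches the sample minimum, which can destabilize the fit.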
Abstract:
Secrecy is fundamental to computer security, but real systems often cannot avoid leaking some secret information. For this reason, the past decade has seen growing interest in quantitative theories of information flow that allow us to quantify the information being leaked. Within these theories, the system is modeled as an information-theoretic channel that specifies the probability of each output, given each input. Given a prior distribution on those inputs, entropy-like measures quantify the amount of information leakage caused by the channel. This thesis presents new results in the theory of min-entropy leakage. First, we study the perspective of secrecy as a resource that is gradually consumed by a system, exploring this intuition through various models of min-entropy consumption. Next, we consider several composition operators that allow smaller systems to be combined into larger systems, and explore the extent to which the leakage of a combined system is constrained by the leakage of its constituents. Most significantly, we prove upper bounds on the leakage of a cascade of two channels, where the output of the first channel is used as input to the second. In addition, we show how to decompose a channel into a cascade of channels. We also establish fundamental new results about the recently proposed g-leakage family of measures. These results further highlight the significance of channel cascading. We prove that whenever channel A is composition refined by channel B, that is, whenever A is the cascade of B and R for some channel R, the leakage of A never exceeds that of B, regardless of the prior distribution or leakage measure (Shannon leakage, guessing entropy leakage, min-entropy leakage, or g-leakage). Moreover, we show that composition refinement is a partial order if we quotient away channel structure that is redundant with respect to leakage alone. These results are strengthened by the proof that composition refinement is the only way for one channel to never leak more than another with respect to g-leakage. Therefore, composition refinement robustly answers the question of when a channel is always at least as secure as another from a leakage point of view.
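Concretely, min-entropy leakage compares the adversary's best guess before and after observing the channel output, and cascading two channels is just matrix multiplication of their channel matrices. A toy sketch (the channel matrices and prior are assumed for illustration):

```python
import numpy as np

def min_entropy_leakage(prior, channel):
    """Min-entropy leakage in bits: log2 of the ratio of posterior to
    prior vulnerability (Smith's definition)."""
    v_prior = prior.max()
    # Posterior vulnerability: for each output, the adversary guesses the
    # most likely input; sum the joint probabilities of those best guesses.
    joint = prior[:, None] * channel          # p(x) * C[x, y]
    v_post = joint.max(axis=0).sum()
    return np.log2(v_post / v_prior)

prior = np.array([0.5, 0.25, 0.25])

# Toy channels: rows are inputs, columns outputs; each row sums to 1.
A = np.array([[0.8, 0.2, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.3, 0.7]])
B = np.array([[0.9, 0.1],
              [0.2, 0.8],
              [0.5, 0.5]])

cascade = A @ B    # output of A fed into B
print("leakage of A:      ", round(min_entropy_leakage(prior, A), 4))
print("leakage of cascade:", round(min_entropy_leakage(prior, cascade), 4))
# The cascade never leaks more than its first stage, one of the bounds
# discussed in the thesis.
```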
Abstract:
The performance of building envelopes and roofing systems depends significantly on accurate knowledge of wind loads and the response of envelope components under realistic wind conditions. Wind tunnel testing is a well-established practice for determining wind loads on structures. For small structures, much larger model scales are needed than for large structures to maintain modeling accuracy and minimize Reynolds number effects. In these circumstances, the ability to obtain a large enough turbulence integral scale is usually compromised by the limited dimensions of the wind tunnel, meaning that it is not possible to simulate the low-frequency end of the turbulence spectrum. Such flows are called flows with partial turbulence simulation. In this dissertation, the test procedure and scaling requirements for tests with partial turbulence simulation are discussed. A theoretical method is proposed for including the effects of low-frequency turbulence in the post-test analysis. In this theory, the turbulence spectrum is divided into two distinct statistical processes: one at high frequencies, which can be simulated in the wind tunnel, and one at low frequencies, which can be treated in a quasi-steady manner. The joint probability of load resulting from the two processes is derived, from which full-scale equivalent peak pressure coefficients can be obtained. The efficacy of the method is demonstrated by comparing predicted data derived from tests on large-scale models of the Silsoe Cube and Texas Tech University buildings at the Wall of Wind facility at Florida International University with the available full-scale data. For multi-layer building envelopes such as rain-screen walls, roof pavers, and vented energy-efficient walls, not only peak wind loads but also their spatial gradients are important. Wind-permeable roof claddings like roof pavers are not well dealt with in many existing building codes and standards. Large-scale experiments were carried out to investigate the wind loading on concrete pavers, including wind blow-off tests and pressure measurements. Simplified guidelines were developed for the design of loose-laid roof pavers against wind uplift. The guidelines are formatted so that use can be made of the existing information in codes and standards such as ASCE 7-10 on pressure coefficients for components and cladding.
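One schematic reading of the quasi-steady treatment (our interpretation, not the dissertation's actual derivation; all numbers are assumed) is to rescale tunnel-measured pressure coefficients by sampled low-frequency gust speeds and recompute the peaks:

```python
import numpy as np

rng = np.random.default_rng(5)

# Pressure coefficients measured in the tunnel under the high-frequency
# part of the spectrum (stand-in empirical sample; assumed values).
cp_hf = rng.normal(-0.8, 0.3, size=100_000)

# Missing low-frequency velocity fluctuations, treated quasi-steadily,
# with an assumed low-frequency turbulence intensity of 10%.
U, sigma_lf = 1.0, 0.10
u_lf = rng.normal(0.0, sigma_lf, size=cp_hf.size)

# Quasi-steady combination: the slow gust rescales the dynamic pressure.
cp_full = cp_hf * ((U + u_lf) / U) ** 2

# Full-scale-equivalent peak suction (here: the 0.1st percentile).
print("tunnel-only peak Cp:", round(np.percentile(cp_hf, 0.1), 3))
print("with LF correction: ", round(np.percentile(cp_full, 0.1), 3))
```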
Abstract:
This thesis proposes some confidence intervals for the mean of a positively skewed distribution. The following confidence intervals are considered: Student-t, Johnson-t, median-t, mad-t, bootstrap-t, BCA, T1, T3 and six new confidence intervals: the median bootstrap-t, mad bootstrap-t, median T1, mad T1, median T3 and mad T3. A simulation study was conducted, and average widths, coefficients of variation of widths, and coverage probabilities were recorded and compared across confidence intervals. To compare confidence intervals, widths and coverage probabilities were examined, so that a smaller width indicated a better confidence interval when coverage probabilities were the same. Results showed that the median T1 and median T3 outperformed the other confidence intervals in terms of coverage probability, while the mad bootstrap-t, mad-t, and mad T3 outperformed the others in terms of width. Some real-life data are considered to illustrate the findings of the thesis.
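The flavor of such a simulation study can be reproduced in a few lines: draw repeated samples from a positively skewed population, build each interval, and record coverage and width. The sketch below compares only the classical Student-t and bootstrap-t intervals (the thesis's T1, T3 and mad/median variants are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(6)

def student_t_ci(x, crit=2.045):  # ~95% critical value for n = 30
    m, se = x.mean(), x.std(ddof=1) / np.sqrt(x.size)
    return m - crit * se, m + crit * se

def bootstrap_t_ci(x, n_boot=999):
    m, se = x.mean(), x.std(ddof=1) / np.sqrt(x.size)
    t_stars = []
    for _ in range(n_boot):
        xb = rng.choice(x, size=x.size, replace=True)
        seb = xb.std(ddof=1) / np.sqrt(x.size)
        if seb > 0:
            t_stars.append((xb.mean() - m) / seb)
    lo, hi = np.percentile(t_stars, [2.5, 97.5])
    return m - hi * se, m - lo * se   # note the reversed quantiles

# Coverage and width for a positively skewed population (lognormal).
true_mean = np.exp(0.5)              # mean of lognormal(0, 1)
cover = {"t": 0, "boot-t": 0}
width = {"t": 0.0, "boot-t": 0.0}
n_sim = 500
for _ in range(n_sim):
    x = rng.lognormal(0.0, 1.0, size=30)
    for name, ci in (("t", student_t_ci(x)), ("boot-t", bootstrap_t_ci(x))):
        cover[name] += ci[0] <= true_mean <= ci[1]
        width[name] += ci[1] - ci[0]

for name in cover:
    print(f"{name:7s} coverage={cover[name]/n_sim:.3f} "
          f"avg width={width[name]/n_sim:.3f}")
```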
Abstract:
Although the Standard Cosmological Model is generally accepted by the scientific community, a number of issues remain unresolved. From the observable characteristics of the structures in the Universe, it should be possible to impose constraints on the cosmological parameters. Cosmic voids (CV) are a major component of the large-scale structure (LSS) and have been shown to possess great potential for constraining dark energy (DE) and testing theories of gravity, but a gap between CV observations and theory still persists. A theoretical model for the statistical distribution of voids as a function of size exists (the Sheth & van de Weygaert, or SvdW, model); however, the SvdW model has been unsuccessful in reproducing the results obtained from cosmological simulations, which undermines the possibility of using voids as cosmological probes. The goal of this thesis work is to close the gap between theoretical predictions and measured distributions of cosmic voids. We develop an algorithm to identify voids in simulations consistently with theory, inspecting the possibilities offered by a recently proposed refinement of the SvdW model (the Vdn model, Jennings et al., 2013). Comparing void catalogues to theory, we validate the Vdn model, finding that it is reliable over a large range of radii, at all the redshifts considered and for all the cosmological models inspected. We have then searched for a size-function model for voids identified in a distribution of biased tracers. We find that naively applying the same procedure used for the unbiased tracers to a halo mock distribution does not provide successful results, suggesting that the Vdn model has to be reconsidered when dealing with biased samples. Thus, we test two alternative extensions of the model and find that two scaling relations exist: both the Dark Matter void radii and the underlying Dark Matter density contrast scale with the halo-defined void radii. We use these findings to develop a semi-analytical model which gives promising results.
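Void finders of this kind commonly use a spherical-underdensity criterion: grow a sphere around a density minimum until the enclosed density first reaches a fixed fraction of the mean. A toy sketch (the 0.2 threshold and the particle set are assumptions, not the thesis's calibrated values):

```python
import numpy as np

def void_radius(center, particles, box_volume, threshold=0.2):
    """Spherical-underdensity radius: distance at which the enclosed
    number density first reaches `threshold` times the mean density."""
    mean_density = len(particles) / box_volume
    d = np.sort(np.linalg.norm(particles - center, axis=1))
    enclosed = np.arange(1, d.size + 1) / (4.0 / 3.0 * np.pi * d**3)
    crossing = np.nonzero(enclosed >= threshold * mean_density)[0]
    return d[crossing[0]] if crossing.size else d[-1]

# Toy "simulation": uniform particles with one evacuated sphere of radius 10.
rng = np.random.default_rng(7)
L = 100.0
pts = rng.uniform(0.0, L, size=(20000, 3))
center = np.array([50.0, 50.0, 50.0])
pts = pts[np.linalg.norm(pts - center, axis=1) > 10.0]

print("recovered void radius ~", round(void_radius(center, pts, L**3), 1))
# With the 0.2 criterion the recovered radius sits slightly outside the
# evacuated region, since the growing sphere must accrete enough
# particles to reach 20% of the mean density.
```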
Abstract:
This research is funded by UK Medical Research Council grant number MR/L011115/1
Abstract:
This research is funded by UK Medical Research Council grant number MR/L011115/1. We would like to thank the 105 experts in behaviour change who have committed their time and offered their expertise for study 2 of this research. We are also very grateful to all those who sent us peer-reviewed behaviour change intervention descriptions for study 1. Finally, we would like to thank Dr. Emma Beard and Dr. Dan Dediu for their statistical input, and all the researchers, particularly Holly Walton, who have assisted in the coding of papers for study 1.