959 results for Empirical Testing
Abstract:
The main purpose of this study was to analyze how stress tests are used in risk management in the Finnish banking and insurance sectors. To enhance understanding of the topic, stress testing was explored in the context of corporate governance, and the regulatory implications of Basel II and Solvency II for stress testing were examined. In addition, the effects of the global financial crisis on stress testing were mapped and the differences in stress testing practices between the banking and insurance sectors were discussed. The research method was a qualitative case study, conducted by interviewing risk managers from ten institutions and a representative of FIN-FSA. The findings showed that stress testing practices vary significantly between institutions, and interesting differences were observed between the banking and insurance sectors. The increasing importance and use of stress tests were recognized as a result of the financial crisis. Stress testing was even considered more art than science, given the number of challenges it involves. In general, improvements in stress tests were suggested, with an emphasis on stress concentration between different types of risks.
Abstract:
Cloud computing is a practically relevant paradigm in computing today, and testing is one of the distinct areas where it can be applied. This study addressed the applicability of cloud computing for testing within organizational and strategic contexts, focusing on issues related to the adoption, use and effects of cloud-based testing. The study applied empirical research methods: the data was collected through interviews with practitioners from 30 organizations and was analysed using the grounded theory method. The research process consisted of four phases. The first phase studied the definitions and perceptions related to cloud-based testing. The second phase observed cloud-based testing in real-life practice. The third phase analysed quality in the context of cloud application development. The fourth phase studied the applicability of cloud computing in the gaming industry. The results showed that cloud computing is relevant and applicable for testing and application development, as well as other areas such as game development. The research identified the benefits, challenges, requirements and effects of cloud-based testing, and formulated a roadmap and strategy for adopting it. The study also explored quality issues in cloud application development and, as a special case, included a study on the applicability of cloud computing in game development. The results can be used by companies to enhance their processes for managing cloud-based testing, evaluating practical cloud-based testing work and assessing the appropriateness of cloud-based testing for specific testing needs.
Abstract:
Using the plausible model of activated carbon proposed by Harris and co-workers and grand canonical Monte Carlo simulations, we study the applicability of standard methods, widely used in adsorption science, for describing adsorption data on microporous carbons. Two carbon structures are studied, one with a narrow distribution of micropores up to 1 nm, and the other with micropores covering a wide range of porosity. For both structures, adsorption isotherms of noble gases (from Ne to Xe), carbon tetrachloride and benzene are simulated. The data obtained are considered in terms of Dubinin-Radushkevich plots, and for benzene and carbon tetrachloride the temperature invariance of the characteristic curve is also studied. We show that some empirical relationships obtained from experiment can be successfully recovered using the simulated data. Next we test the applicability of related Dubinin-type models, including the Dubinin-Izotova, Dubinin-Radushkevich-Stoeckli, and Jaroniec-Choma equations. The results obtained demonstrate the limits and applications of the studied models in the field of carbon porosity characterization.
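As a rough illustration of the Dubinin-Radushkevich analysis mentioned above, the sketch below fits the linearized DR equation, ln W = ln W0 - (A/E)^2 with A = RT ln(p0/p), to an adsorption isotherm; the pressures, uptakes and temperature are hypothetical placeholders, not the simulated data from the study.

```python
import numpy as np

# Hypothetical adsorption isotherm: relative pressures and adsorbed amounts (mmol/g).
# In the study these would come from grand canonical Monte Carlo simulations.
p_rel = np.array([1e-5, 1e-4, 1e-3, 1e-2, 0.05, 0.1, 0.3])
uptake = np.array([0.8, 1.6, 2.9, 4.1, 4.8, 5.1, 5.4])

R, T = 8.314, 293.0  # gas constant (J/mol/K) and assumed temperature (K)

# Dubinin-Radushkevich plot: ln(W) versus A^2, with A = RT ln(p0/p).
A = R * T * np.log(1.0 / p_rel)          # adsorption potential (J/mol)
x, y = A**2, np.log(uptake)

# Linear fit; slope = -1/E^2 and intercept = ln(W0) for the DR equation
# W = W0 * exp(-(A/E)^2).
slope, intercept = np.polyfit(x, y, 1)
W0 = np.exp(intercept)                   # limiting micropore capacity
E = np.sqrt(-1.0 / slope)                # characteristic adsorption energy (J/mol)
print(f"W0 = {W0:.2f} mmol/g, E = {E / 1000:.1f} kJ/mol")
```

The slope and intercept of the linearized plot give the characteristic energy and the limiting micropore capacity, which is how simulated isotherms are typically compared against the empirical relationships referred to in the abstract.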
Abstract:
We estimate and test two alternative functional forms, which have been used in the growth literature, representing the aggregate production function for a panel of countries: the model of Mankiw, Romer and Weil (Quarterly Journal of Economics, 1992), and a Mincerian formulation of schooling returns to skills. Estimation is performed using instrumental-variable techniques, and the two functional forms are confronted using a Box-Cox test, since human capital enters in levels in the Mincerian specification and in logs in the extended neoclassical growth model.
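The distinction the abstract draws (human capital in logs versus in levels) can be made concrete; the following is a schematic rendering of the two specifications and of the Box-Cox transformation that nests them, using illustrative notation rather than the paper's own.

```latex
% Extended neoclassical (Mankiw-Romer-Weil) form: schooling enters in logs
\ln y_i = c + \alpha \ln k_i + \beta \ln s_i + \varepsilon_i
% Mincerian form: schooling enters in levels, since h_i = e^{\phi s_i} implies \ln h_i = \phi s_i
\ln y_i = c + \alpha \ln k_i + \phi\, s_i + u_i
% Box-Cox transformation nesting both cases:
s_i^{(\lambda)} = \frac{s_i^{\lambda} - 1}{\lambda},
\qquad \lambda \to 0 \;\Rightarrow\; \ln s_i \ (\text{neoclassical}),
\qquad \lambda = 1 \;\Rightarrow\; s_i - 1 \ (\text{Mincerian}).
```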
Abstract:
We estimate and test two alternative functional forms representing the aggregate production function for a panel of countries: the extended neoclassical growth model, and a Mincerian formulation of schooling returns to skills. Estimation is performed using instrumental-variable techniques, and the two functional forms are confronted using a Box-Cox test, since human capital enters in levels in the Mincerian specification and in logs in the extended neoclassical growth model. Our evidence rejects the extended neoclassical growth model in favor of the Mincerian specification, with an estimated capital share of about 42%, a marginal return to education of about 7.5% per year, and estimated productivity growth of about 1.4% per year. Differences in productivity cannot be disregarded as an explanation of why output per worker varies so much across countries: a variance decomposition exercise shows that productivity alone explains 54% of the variation in output per worker across countries.
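The 54% figure refers to a variance decomposition of (log) output per worker; one common way of setting up such a decomposition is sketched below, noting that the paper's exact treatment of the covariance term may differ.

```latex
% With \ln y_i = \ln A_i + \ln x_i, where x_i collects factor inputs per worker,
\operatorname{Var}(\ln y) = \operatorname{Var}(\ln A) + \operatorname{Var}(\ln x)
                          + 2\,\operatorname{Cov}(\ln A, \ln x),
% so the share attributed to productivity can be measured, for example, as
\frac{\operatorname{Var}(\ln A) + \operatorname{Cov}(\ln A, \ln x)}{\operatorname{Var}(\ln y)}.
```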
Abstract:
An optimal multiple testing procedure is identified for linear hypotheses under the general linear model, maximizing the expected number of false null hypotheses rejected at any significance level. The optimal procedure depends on the unknown data-generating distribution, but can be consistently estimated. Drawing information together across many hypotheses, the estimated optimal procedure provides an empirical alternative hypothesis by adapting to underlying patterns of departure from the null. Proposed multiple testing procedures based on the empirical alternative are evaluated through simulations and an application to gene expression microarray data. Compared to a standard multiple testing procedure, it is not unusual for use of an empirical alternative hypothesis to increase by 50% or more the number of true positives identified at a given significance level.
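The optimal procedure itself depends on quantities estimated from the data and is not reproduced here; for orientation, below is a minimal sketch of the kind of standard baseline it is compared against, namely per-hypothesis tests under a linear model followed by Benjamini-Hochberg adjustment, using purely hypothetical data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical microarray-style data: m genes measured in two groups of n samples each.
m, n = 2000, 8
group = np.r_[np.zeros(n), np.ones(n)]     # design: two-group comparison
expr = rng.normal(size=(m, 2 * n))
expr[:200, n:] += 1.0                      # 200 genes with a true mean shift

# Per-gene two-sample t-test (equivalent to a two-group linear-model F-test).
_, pvals = stats.ttest_ind(expr[:, group == 0], expr[:, group == 1], axis=1)

def benjamini_hochberg(p, q=0.05):
    """Standard BH step-up procedure: reject the k smallest p-values, where k is
    the largest index with p_(k) <= q*k/m."""
    order = np.argsort(p)
    thresh = q * np.arange(1, len(p) + 1) / len(p)
    below = p[order] <= thresh
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    rejected = np.zeros(len(p), dtype=bool)
    rejected[order[:k]] = True
    return rejected

print("rejections at FDR 5%:", benjamini_hochberg(pvals).sum())
```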
Abstract:
The Free Core Nutation (FCN) is a free mode of the Earth's rotation caused by the different material characteristics of the Earth's core and mantle, which make the rotational axes of those layers diverge slightly from each other, resulting in a wobble of the Earth's rotation axis comparable to the nutations. In this paper we focus on estimating empirical FCN models from the observed nutations derived from VLBI sessions between 1993 and 2013. Assuming a fixed value for the oscillation period, the time-variable amplitudes and phases are estimated by means of multiple sliding-window analyses. The effects of using different a priori Earth Rotation Parameters (ERP) in the derivation of the models are also addressed. The optimal choice of the fundamental parameters of the model, namely the window width and the step size of its shift, is determined through a thorough experimental analysis using real data. These analyses lead to a model with a higher temporal resolution than that of the currently available models, with the sliding window reduced to 400 days and a day-by-day shift. It is shown that this new model increases the accuracy of the modeling of the observed Earth rotation. In addition, according to our computations, empirical models determined using USNO Finals as a priori ERP present a slightly lower weighted root mean square (WRMS) of residuals than those using IERS 08 C04 over the whole period of VLBI observations. The model is also validated through comparisons with other recognized models, with a satisfactory level of agreement among them; our estimates give rise to the lowest residuals and seem to reproduce the FCN signal in more detail.
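The estimation step described, a fixed-period FCN oscillation whose time-variable amplitude is recovered by least squares in overlapping windows, can be sketched as follows; the synthetic celestial pole offset series and the nominal period are placeholders, with only the 400-day window and the one-day step taken from the abstract.

```python
import numpy as np

P = -430.21            # assumed nominal FCN period in days (retrograde, hence negative)
window, step = 400, 1  # sliding-window width and shift stated in the abstract (days)

# Hypothetical daily celestial pole offsets dX, dY (microarcseconds) spanning ~8 years.
rng = np.random.default_rng(1)
t = np.arange(0.0, 3000.0)
phase = 2 * np.pi * t / P
dX = 150 * np.cos(phase) + rng.normal(0, 30, t.size)
dY = 150 * np.sin(phase) + rng.normal(0, 30, t.size)

amplitudes, epochs = [], []
for start in np.arange(0, t[-1] - window, step):
    m = (t >= start) & (t < start + window)
    # Least-squares fit of a fixed-period sinusoid within the window:
    # dX = a*cos(phi) - b*sin(phi),  dY = a*sin(phi) + b*cos(phi),  phi = 2*pi*t/P
    c, s = np.cos(2 * np.pi * t[m] / P), np.sin(2 * np.pi * t[m] / P)
    A = np.vstack([np.column_stack([c, -s]), np.column_stack([s, c])])
    y = np.concatenate([dX[m], dY[m]])
    (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
    amplitudes.append(np.hypot(a, b))   # time-variable FCN amplitude for this window
    epochs.append(start + window / 2)   # assign the estimate to the window midpoint
```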
Abstract:
The first part of the study examined the effect of industry risk changes on perceived audit risk at the financial statement level and whether these changes depended on individual differences such as experience and tolerance for ambiguity. Forty-eight auditors from two offices of one of the "Big 5" CPA firms participated in this study. The ANOVA results supported the effect of industry risk in the assessment of audit risk at the financial statement level: higher industry risk was associated with higher perceived audit risk. Tolerance for ambiguity was also significant in explaining the changes in the assessment of audit risk; auditors with a high tolerance for ambiguity perceived lower audit risk than auditors with a low tolerance for ambiguity. Although the ANOVA results did not find experience to be significant, a t-test for experience showed it to be marginally significant and inversely related to audit risk. The second part of this study examined whether differences in perceived audit risk at the financial statement level altered the extent, nature or timing of the planned auditing procedures. The results of the MANOVA suggested an overall audit risk effect at the financial statement level. Perceived audit risk was significant in explaining the variation in the number of hours planned for the total cycle and the number of hours planned for the tests of balances and details. Perceived audit risk was not significant in determining the analytical review procedures planned, but assessed inherent risk at the cycle level was: the higher the inherent risk, the more analytical procedures were planned. Perceived audit risk was not significant in explaining the timing of the procedures, but individual differences were. Experienced auditors and those with a high tolerance for ambiguity were less likely to postpone the performance of the interim procedures or the time at which the majority of audit work would be done.
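For readers less familiar with the design, a minimal sketch of the kind of between-subjects ANOVA described (perceived audit risk as a function of an industry risk manipulation and tolerance for ambiguity) is given below; the data frame and column names are hypothetical and not taken from the study.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical responses: 48 auditors rate perceived audit risk under low/high
# industry risk, classified by tolerance for ambiguity (TFA).
df = pd.DataFrame({
    "audit_risk": [5.1, 6.3, 4.2, 6.8, 5.5, 6.9, 4.0, 5.9] * 6,
    "industry_risk": (["low", "high"] * 4) * 6,
    "tfa": (["high"] * 4 + ["low"] * 4) * 6,
})

# Two-way between-subjects ANOVA on perceived audit risk.
model = ols("audit_risk ~ C(industry_risk) * C(tfa)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```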
Abstract:
Recent empirical studies have found significant evidence of departures from competition on the input side of the Australian bread, breakfast cereal and margarine end-product markets. For example, Griffith (2000) found that firms in some parts of the processing and marketing sector exerted market power when purchasing grains and oilseeds from farmers. As noted at the time, this result accorded well with the views of previous regulatory authorities (p.358). In the mid-1990s, the Prices Surveillance Authority (PSA 1994) determined that the markets for products contained in the Breakfast Cereals and Cooking Oils and Fats indexes were "not effectively competitive" (p.14), and consequently maintained price surveillance on the major firms in this product group. The Griffith result is also consistent with the large number of legal judgements against firms in this sector over the past decade for price fixing or other types of non-competitive behaviour. For example, bread manufacturer George Weston was fined twice during 2000 for non-competitive conduct, and the ACCC has also recently pursued and won cases against retailer Safeway in grains and oilseeds product lines.
Abstract:
Recent developments in evolutionary physiology have seen many of the long-held assumptions within comparative physiology receive rigorous experimental analysis. Studies of the adaptive significance of physiological acclimation exemplify this new evolutionary approach. The beneficial acclimation hypothesis (BAH) was proposed to describe the assumption that all acclimation changes enhance the physiological performance or fitness of an individual organism. To the surprise of most physiologists, all empirical examinations of the BAH have rejected its generality. However, we suggest that these examinations are neither direct nor complete tests of the functional benefit of acclimation. We consider them to be elegant analyses of the adaptive significance of developmental plasticity, a type of phenotypic plasticity that is very different from the traditional concept of acclimation that is used by comparative physiologists.