938 results for Empirical asset pricing
Abstract:
Business process models have become an effective way of examining business practices to identify areas for improvement. While common information-gathering approaches are generally efficacious, they can be quite time-consuming and risk introducing inaccuracies when information is forgotten or incorrectly interpreted by analysts. In this study, the potential of a role-playing approach to process elicitation and specification has been examined. This method allows stakeholders to enter a virtual world and role-play actions similarly to how they would in reality. As actions are completed, a model is automatically developed, removing the need for stakeholders to learn and understand a modelling grammar. An empirical investigation comparing both the modelling outputs and participant behaviour of this virtual world role-play elicitor with an S-BPM process modelling tool found that, while the modelling approaches of the two groups varied greatly, the virtual world elicitor may not only improve both the number of individual process task steps remembered and the correctness of task ordering, but also reduce the time required for stakeholders to model a process view.
Abstract:
Pricing is an effective tool to control congestion and achieve quality of service (QoS) provisioning for multiple differentiated levels of service. In this paper, we consider the problem of pricing for congestion control in the case of a network of nodes under a single service class and multiple queues, and present a multi-layered pricing scheme. We propose an algorithm for finding the optimal state-dependent price levels for individual queues at each node. The pricing policy used depends on a weighted average queue length at each node. This helps in reducing frequent price variations and is in the spirit of the random early detection (RED) mechanism used in TCP/IP networks. Our numerical results show a considerable improvement over a recently proposed related scheme in terms of both throughput and delay. In particular, our approach exhibits a throughput improvement in the range of 34 to 69 percent in all cases studied (over all routes) over the above scheme.
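The RED-style smoothing and state-dependent pricing described above can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the smoothing weight, thresholds and price levels are invented for the example.

```python
# RED-style exponentially weighted average queue length driving a
# multi-level, state-dependent price. All constants are illustrative.

def update_avg(avg, q, w=0.002):
    """Exponentially weighted moving average of the instantaneous queue length q."""
    return (1 - w) * avg + w * q

def price_level(avg, thresholds=(10, 25, 50), prices=(1.0, 2.0, 4.0, 8.0)):
    """Map the smoothed queue length to one of several price levels."""
    for t, p in zip(thresholds, prices):
        if avg < t:
            return p
    return prices[-1]

avg = 0.0
for q in [5, 12, 30, 60, 20]:   # sample instantaneous queue lengths
    avg = update_avg(avg, q)
    level = price_level(avg)
```

Because prices track the smoothed rather than the instantaneous queue length, transient bursts do not cause frequent price changes, which is the point of the RED-like design.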
Abstract:
Electroencephalogram (EEG) recordings are often contaminated with ocular artifacts such as eye blinks and eye movements. These artifacts may obscure underlying brain activity in the EEG data and make analysis of the data difficult. In this paper, we explore the use of an empirical mode decomposition (EMD) based filtering technique to correct eye blink and eye movement artifacts in single-channel EEG data. In this method, the single-channel EEG data containing an ocular artifact is segmented such that the artifact in each segment is treated as a slowly varying trend in the data, and EMD is used to remove the trend. The filtering is done using partial reconstruction from components of the decomposition. The method is completely data dependent and hence adaptive and nonlinear. Experimental results are provided to check the applicability of the method on real EEG data, and the results are quantified using power spectral density (PSD) as a measure. The method has given fairly good results and does not make use of any prior knowledge of the artifacts or the EEG data used.
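The partial-reconstruction step can be sketched as below. The EMD sifting itself is omitted: the IMFs here are synthetic stand-ins, and the choice of how many trailing (slowest) components to discard as "trend" is an assumption of this sketch.

```python
import numpy as np

# Partial reconstruction: given IMFs from any EMD routine (here synthetic),
# drop the slowest components that capture the blink "trend" and sum the rest.

def remove_trend(imfs, n_trend=1):
    """Reconstruct the signal from all but the last n_trend (slowest) IMFs."""
    return np.sum(imfs[:-n_trend], axis=0)

t = np.linspace(0, 1, 256)
fast = np.sin(2 * np.pi * 40 * t)              # stands in for neural activity
slow = 3 * np.exp(-((t - 0.5) ** 2) / 0.01)    # stands in for a blink artifact
imfs = np.stack([fast, slow])                  # pretend these came from EMD
cleaned = remove_trend(imfs)                   # keeps only the fast component
```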
Abstract:
Pricing is an effective tool to control congestion and achieve quality of service (QoS) provisioning for multiple differentiated levels of service. In this paper, we consider the problem of pricing for congestion control in the case of a network of nodes under multiple service classes. Our work draws upon [1] and [2] in various ways. We use the Tirupati pricing scheme in conjunction with the stochastic approximation based adaptive pricing methodology for queue control (proposed in [1]) for minimizing network congestion. However, unlike the methodology of [1], where pricing for entire routes is directly considered, we consider prices for individual link-service grade tuples. Further, we adapt the methodology proposed in [2] for a single-node scenario to the case of a network of nodes, for evaluating performance in terms of price, revenue rate and disutility. We obtain considerable performance improvements using our approach over that in [1]. In particular, our approach exhibits a throughput improvement in the range of 54 to 80 percent in all cases studied (over all routes) while exhibiting a lower packet delay in the range of 26 to 38 percent over the scheme in [1].
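A stochastic-approximation price update of the kind referred to above can be sketched per link-service grade tuple as follows. The target queue length `q_star` and the diminishing step size `a0 / n` are illustrative assumptions, not the scheme of [1].

```python
# Stochastic-approximation sketch: nudge the price toward the level that
# keeps the observed queue length near a target q_star.

def adapt_price(price, qs, q_star=20.0, a0=0.5):
    """Raise the price when the queue exceeds its target, lower it otherwise,
    with a diminishing step size a0/n; prices are kept non-negative."""
    for n, q in enumerate(qs, start=1):
        price = max(0.0, price + (a0 / n) * (q - q_star))
    return price
```

Under standard step-size conditions such an iteration settles at a price where the average queue length matches the target.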
Abstract:
Inventory management (IM) has a decisive role in enhancing the competitiveness of the manufacturing industry. Therefore, major manufacturing industries follow IM practices with the intention of improving their performance. However, efforts to introduce IM in small and medium enterprises (SMEs) are very limited due to lack of initiation, expertise, and financial constraints. This paper aims to provide a guideline for entrepreneurs to enhance their IM performance, as it presents the results of a survey-based study carried out for machine tool SMEs in Bangalore. Having established the significance of inventory as an input, we probed the relationship between the IM performance and the economic performance of these SMEs. To the extent possible, all factors of production and performance indicators were deliberately considered in purely economic terms. All economic performance indicators adopted appear to have a positive and significant association with IM performance in SMEs. On the whole, we found that SMEs which are IM-efficient are likely to perform better on the economic front as well and to experience higher returns to scale.
Abstract:
The electrical conduction in insulating materials is a complex process, and several theories have been suggested in the literature. Many phenomenological empirical models are in use in the DC cable literature. However, beyond claims of relative accuracy, the impact of using different models for cable insulation has not been investigated until now. The steady-state electric field in the DC cable insulation is known to be a strong function of DC conductivity. The DC conductivity, in turn, is a complex function of electric field and temperature. As a result, under certain conditions, the stress at the cable screen is higher than that at the conductor boundary. The paper presents detailed investigations on using different empirical conductivity models suggested in the literature for HV DC cable applications. It has been expressly shown that certain models give rise to erroneous results in electric field and temperature computations. It is pointed out that the use of these models in the design or evaluation of cables will lead to errors.
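One empirical form commonly seen in the DC cable literature expresses conductivity as exponential in both temperature and field, sigma(T, E) = sigma0 * exp(alpha*T + beta*E). A minimal sketch follows; the coefficient values are placeholders for illustration, not measured data, and different (alpha, beta) choices are exactly what shifts the computed field profile and the location of stress inversion.

```python
import math

# Illustrative empirical DC-conductivity model for cable insulation.
# sigma0 [S/m], alpha [1/degC], beta [mm/kV] are assumed placeholder values.

def dc_conductivity(T_c, E_kv_mm, sigma0=1e-16, alpha=0.084, beta=0.0645):
    """Conductivity in S/m as a function of temperature (degC) and field (kV/mm)."""
    return sigma0 * math.exp(alpha * T_c + beta * E_kv_mm)
```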
Abstract:
Supercritical processes have gained importance in recent years in food, environmental and pharmaceutical product processing. The design of any supercritical process needs accurate experimental data on the solubilities of solids in the supercritical fluids (SCFs). Empirical equations are quite successful in correlating the solubilities of solid compounds in SCFs both in the presence and absence of cosolvents. In this work, existing solvate complex models are discussed and a new set of empirical equations is proposed. These equations correlate the solubilities of solids in supercritical carbon dioxide (both in the presence and absence of cosolvents) as a function of temperature, density of supercritical carbon dioxide and the mole fraction of cosolvent. The accuracy of the proposed models was evaluated by correlating 15 binary and 18 ternary systems. The proposed models provided the best overall correlations. (C) 2009 Elsevier B.V. All rights reserved.
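A classic density-based correlation of this family is the Chrastil-type equation, ln S = k ln(rho) + a/T + b, which relates solubility to solvent density and temperature. The sketch below is illustrative only; the constants k, a, b would be fitted to experimental data and the values here are assumptions.

```python
import math

# Chrastil-type empirical solubility correlation (illustrative constants).

def solubility(rho, T, k=5.0, a=-4000.0, b=-30.0):
    """Solute solubility S (arbitrary units) in supercritical CO2 of density
    rho (kg/m^3) at temperature T (K): S = rho**k * exp(a/T + b)."""
    return math.exp(k * math.log(rho) + a / T + b)
```

Adding a cosolvent term, e.g. a contribution proportional to the cosolvent mole fraction inside the exponential, extends the same form to ternary systems.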
Abstract:
This paper deals with the development of simplified semi-empirical relations for predicting the residual velocities of small-calibre projectiles impacting mild steel target plates, normally or at an angle, and the ballistic limits of such plates. It has been shown, for several impact cases for which test results on perforation of mild steel plates are available, that most of the existing semi-empirical relations, which are applicable only to normal projectile impact, do not yield satisfactory estimates of residual velocity. Furthermore, it is difficult to quantify some of the empirical parameters present in these relations for a given problem. With an eye towards simplicity and ease of use, two new regression-based relations employing standard material parameters are discussed here for predicting residual velocity and ballistic limit for both normal and oblique impact. The two expressions differ in their use of quasi-static or strain rate-dependent average plate material strength. Residual velocities yielded by the present semi-empirical models compare well with experimental results. Additionally, ballistic limits from these relations show close correlation with the corresponding finite element-based predictions.
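As a point of reference for what such semi-empirical relations look like, a widely used baseline is the Recht-Ipson form Vr = a * (Vi^p - Vbl^p)^(1/p) above the ballistic limit Vbl. This is not the paper's own relation; a and p below are fitting constants with assumed values.

```python
# Recht-Ipson-style residual velocity relation (illustrative constants).

def residual_velocity(vi, vbl, a=1.0, p=2.0):
    """Residual projectile velocity (m/s) for impact velocity vi and
    ballistic limit vbl; zero at or below the ballistic limit."""
    if vi <= vbl:
        return 0.0
    return a * (vi ** p - vbl ** p) ** (1.0 / p)
```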
Abstract:
This paper proposes the use of empirical modeling techniques for building microarchitecture-sensitive models for compiler optimizations. The models we build relate program performance to settings of compiler optimization flags, associated heuristics and key microarchitectural parameters. Unlike traditional analytical modeling methods, this relationship is learned entirely from data obtained by measuring performance at a small number of carefully selected compiler/microarchitecture configurations. We evaluate three different learning techniques in this context, namely linear regression, adaptive regression splines and radial basis function networks. We use the generated models to a) predict program performance at arbitrary compiler/microarchitecture configurations, b) quantify the significance of complex interactions between optimizations and the microarchitecture, and c) efficiently search for 'optimal' settings of optimization flags and heuristics for any given microarchitectural configuration. Our evaluation using benchmarks from the SPEC CPU2000 suite suggests that accurate models (< 5% average prediction error) can be generated using a reasonable number of simulations. We also find that using compiler settings prescribed by a model-based search can improve program performance by as much as 19% (with an average of 9.5%) over highly optimized binaries.
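The simplest of the three techniques, linear regression, can be sketched as below: fit a model mapping binary flag settings to measured performance, then predict unseen configurations. The flag effects and "measured" runtimes are invented for illustration; a real study would use noisy measurements and the richer spline or RBF models for interactions.

```python
import itertools
import numpy as np

# Fit a linear performance model over compiler flag settings (illustrative data).
X = np.array(list(itertools.product([0.0, 1.0], repeat=4)))  # all 16 settings
true_w = np.array([-2.0, 1.5, -0.5, 0.0])                    # hidden flag effects
y = 10.0 + X @ true_w                                        # "measured" runtimes

A = np.hstack([np.ones((len(X), 1)), X])                     # intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)                 # least-squares fit

def predict(flags):
    """Predicted runtime for a flag setting such as [1, 0, 1, 0]."""
    return coef[0] + np.dot(coef[1:], flags)
```

With a fitted model, searching for 'optimal' flags reduces to evaluating `predict` over candidate settings instead of running new simulations.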
Abstract:
This paper examines the association between the level of audit fees paid and asset revaluations, one use of fair value accounting. This Australian study also investigates attributes of asset revaluations and their association with the level of audit fees paid. We find that firms choosing the revaluation model incur higher audit fees than those choosing the cost model; asset revaluations made by directors lead to the firm incurring higher audit fees than those made by external independent appraisers; and revaluation of investment properties leads to lower audit fees. The findings suggest that asset revaluations can result in higher agency costs, and that audit fees vary with the reliability of the revaluations and the class of assets being revalued.
Abstract:
Objective(s): To describe how doctors define and use the terms "futility" and "futile treatment" in end-of-life care. Design, Setting, Participants: A qualitative study using semi-structured interviews with 96 doctors across a range of specialties who treat adults at the end of life. Doctors were recruited from three large Australian teaching hospitals and were interviewed from May to July 2013. Results: Doctors' conceptions of futility focused on the quality and chance of patient benefit. Aspects of benefit included physiological effect, weighing benefits and burdens, and quantity and quality of life. Quality and length of life were linked, but many doctors discussed instances when benefit was determined by quality of life alone. Most doctors described the assessment of chance of success in achieving patient benefit as a subjective exercise. Despite a broad conceptual consensus about what futility means, doctors noted variability in how the concept was applied in clinical decision-making. Over half the doctors also identified treatment that is futile but nevertheless justified, such as short-term treatment as part of supporting the family of a dying person. Conclusions: There is an overwhelming preference for a qualitative approach to assessing futility, which brings with it variation in clinical decision-making. "Patient benefit" is at the heart of doctors' definitions of futility. Determining patient benefit requires discussions with patients and families about their values and goals as well as the burdens and benefits of further treatment.
Abstract:
This thesis studies binary time series models and their applications in empirical macroeconomics and finance. In addition to previously suggested models, new dynamic extensions are proposed to the static probit model commonly used in the previous literature. In particular, we are interested in probit models with an autoregressive model structure. In Chapter 2, the main objective is to compare the predictive performance of the static and dynamic probit models in forecasting the U.S. and German business cycle recession periods. Financial variables, such as interest rates and stock market returns, are used as predictive variables. The empirical results suggest that the recession periods are predictable and that dynamic probit models, especially models with the autoregressive structure, outperform the static model. Chapter 3 proposes a Lagrange Multiplier (LM) test for the usefulness of the autoregressive structure of the probit model. The finite sample properties of the LM test are considered with simulation experiments. Results indicate that the two alternative LM test statistics have reasonable size and power in large samples. In small samples, a parametric bootstrap method is suggested to obtain approximately correct size. In Chapter 4, the predictive power of dynamic probit models in predicting the direction of stock market returns is examined. The novel idea is to use the recession forecast (see Chapter 2) as a predictor of the stock return sign. The evidence suggests that the signs of the U.S. excess stock returns over the risk-free return are predictable both in and out of sample. The new "error correction" probit model yields the best forecasts and also outperforms other predictive models, such as ARMAX models, in terms of statistical and economic goodness-of-fit measures. Chapter 5 generalizes the analysis of the univariate models considered in Chapters 2-4 to the case of a bivariate model.
A new bivariate autoregressive probit model is applied to predict the current state of the U.S. business cycle and growth rate cycle periods. Evidence of predictability of both cycle indicators is obtained and the bivariate model is found to outperform the univariate models in terms of predictive power.
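The autoregressive structure emphasized above can be sketched as a simple recursion, pi_t = omega + alpha * pi_{t-1} + beta * x_t with P(y_t = 1) = Phi(pi_t). The parameter values and the single-predictor setup below are illustrative, not estimates from the thesis.

```python
import math

# Autoregressive probit recursion (illustrative parameters).

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def recession_probs(x, omega=-0.5, alpha=0.8, beta=-1.2, pi0=0.0):
    """Recession probabilities for a series x of a predictor (e.g. term spread):
    pi_t = omega + alpha*pi_{t-1} + beta*x_t, prob_t = Phi(pi_t)."""
    probs, pi = [], pi0
    for xt in x:
        pi = omega + alpha * pi + beta * xt
        probs.append(norm_cdf(pi))
    return probs
```

Setting alpha = 0 recovers the static probit as a special case, which is what the LM test of Chapter 3 examines.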
Abstract:
A careful comparison of the distribution in the (R, θ)-plane of all NH ... O hydrogen bonds with that for bonds between neutral NH and neutral C=O groups indicated that the latter has a larger mean R and a wider range of θ, and that the distribution was also broader than for the average case. Therefore, the potential function developed earlier for an average NH ... O hydrogen bond was modified to suit the peptide case. A three-parameter expression of the form {Mathematical expression}, with Δ = R - Rmin, was found to be satisfactory. By comparing the theoretically expected distribution in R and θ with observed data (although limited), the best values were found to be p1 = 25, p3 = -2 and q1 = 1 × 10⁻³, with Rmin = 2·95 Å and Vmin = -4·5 kcal/mole. The procedure for obtaining a smooth transition from Vhb to the non-bonded potential Vnb for large R and θ is described, along with a flow chart useful for programming the formulae. Calculated values of ΔH, the enthalpy of formation of the hydrogen bond, using this function are in reasonable agreement with observation. When the atoms involved in the hydrogen bond occur in a five-membered ring as in the sequence [Figure not available: see fulltext.] a different formula for the potential function is needed, which is of the form Vhb = Vmin + p1Δ² + q1x², where x = θ - 50° for θ ≥ 50°, with p1 = 15, q1 = 0·002, Rmin = 2· Å and Vmin = -2·5 kcal/mole. © 1971 Indian Academy of Sciences.
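The five-membered-ring form quoted above can be evaluated directly. Two caveats: the Rmin value for this case is garbled in the source, so it is left as a parameter here, and taking x = 0 for θ below 50° is an assumption of this sketch rather than something the abstract states.

```python
# Five-membered-ring hydrogen-bond potential from the abstract:
# V_hb = V_min + p1*Delta^2 + q1*x^2, Delta = R - R_min,
# x = theta - 50 deg for theta >= 50 deg (x = 0 below 50 deg: an assumption).
# R_min must be supplied by the caller; its value is garbled in the source.

def v_hb_ring(R, theta_deg, R_min, p1=15.0, q1=0.002, V_min=-2.5):
    """Hydrogen-bond energy (kcal/mole) for the five-membered-ring case."""
    delta = R - R_min
    x = theta_deg - 50.0 if theta_deg >= 50.0 else 0.0
    return V_min + p1 * delta ** 2 + q1 * x ** 2
```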
Abstract:
This report derives from the EU funded research project “Key Factors Influencing Economic Relationships and Communication in European Food Chains” (FOODCOMM). The research consortium consisted of the following organisations: University of Bonn (UNI BONN), Department of Agricultural and Food Marketing Research (overall project co-ordination); Institute of Agricultural Development in Central and Eastern Europe (IAMO), Department for Agricultural Markets, Marketing and World Agricultural Trade, Halle (Saale), Germany; University of Helsinki, Ruralia Institute Seinäjoki Unit, Finland; Scottish Agricultural College (SAC), Food Marketing Research Team - Land Economy Research Group, Edinburgh and Aberdeen; Ashtown Food Research Centre (AFRC), Teagasc, Food Marketing Unit, Dublin; Institute of Agricultural & Food Economics (IAFE), Department of Market Analysis and Food Processing, Warsaw and Government of Aragon, Center for Agro-Food Research and Technology (CITA), Zaragoza, Spain. The aim of the FOODCOMM project was to examine the role (prevalence, necessity and significance) of economic relationships in selected European food chains and to identify the economic, social and cultural factors which influence co-ordination within these chains. The research project considered meat and cereal commodities in six different European countries (Finland, Germany, Ireland, Poland, Spain, UK/Scotland) and was commissioned against a background of changing European food markets. The research project as a whole consisted of seven different work packages. This report presents the results of qualitative research conducted for work package 5 (WP5) in the pig meat and rye bread chains in Finland. Ruralia Institute would like to give special thanks for all the individuals and companies that kindly gave up their time to take part in the study. Their input has been invaluable to the project. The contribution of research assistant Sanna-Helena Rantala was significant in the data gathering. 
The FOODCOMM project was coordinated by the University of Bonn, Department of Agricultural and Food Market Research. Special thanks to Professor Monika Hartmann for acting as the project leader of FOODCOMM.
Abstract:
Governance has been one of the most popular buzzwords in recent political science. As with any term shared by numerous fields of research, as well as everyday language, governance is encumbered by a jungle of definitions and applications. This work elaborates on the concept of network governance. Network governance refers to complex policy-making situations, where a variety of public and private actors collaborate in order to produce and define policy. Governance consists of processes of autonomous, self-organizing networks of organizations exchanging information and deliberating. Network governance is a theoretical concept that corresponds to an empirical phenomenon. This phenomenon is often invoked to describe a historical development: changes in the political processes of Western societies since the 1980s. In this work, empirical governance networks are used as an organizing framework, and the concepts of autonomy, self-organization and network structure are developed as tools for empirical analysis of any complex decision-making process. This work develops this framework and explores the governance networks in the case of environmental policy-making in the City of Helsinki, Finland. The crafting of a local ecological sustainability programme required support and knowledge from all sectors of administration, a number of entrepreneurs and companies, and the inhabitants of Helsinki. The policy process relied explicitly on networking, with public and private actors collaborating to design policy instruments. Communication between individual organizations led to the development of network structures and patterns. This research analyses these patterns and their effects on policy choice by applying the methods of social network analysis. A variety of social network analysis methods are used to uncover different features of the networked process.
Links between individual network positions, network subgroup structures and macro-level network patterns are compared to the types of organizations involved and final policy instruments chosen. By using governance concepts to depict a policy process, the work aims to assess whether they contribute to models of policy-making. The conclusion is that the governance literature sheds light on events that would otherwise go unnoticed, or whose conceptualization would remain atheoretical. The framework of network governance should be in the toolkit of the policy analyst.
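One of the simplest measures used in such analyses, degree centrality, can be computed as below. The actor names and collaboration edges are invented for illustration; the study's own data and measures are richer.

```python
# Degree centrality over an undirected collaboration network (pure Python):
# the fraction of other nodes each node is directly connected to.

def degree_centrality(edges):
    """Map each node to degree / (n - 1) for an undirected edge list."""
    nodes = {n for e in edges for n in e}
    deg = {n: 0 for n in nodes}
    for a, b in edges:
        deg[a] += 1
        deg[b] += 1
    denom = max(len(nodes) - 1, 1)
    return {n: d / denom for n, d in deg.items()}

edges = [("city", "ngo"), ("city", "firm"), ("ngo", "residents")]
centrality = degree_centrality(edges)   # "city" is the most central actor here
```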