956 results for Conditional CAPM
Abstract:
A pervasive and puzzling feature of banks’ Value-at-Risk (VaR) is its abnormally high level, which leads to excessive regulatory capital. A possible explanation for the tendency of commercial banks to overstate their VaR is that they incompletely account for the diversification effect among broad risk categories (e.g., equity, interest rate, commodity, credit spread, and foreign exchange). By underestimating the diversification effect, banks’ proprietary VaR models produce overly prudent market risk assessments. In this paper, we examine empirically the validity of this hypothesis using actual VaR data from major US commercial banks. In contrast to the VaR diversification hypothesis, we find that US banks show no sign of systematic underestimation of the diversification effect. In particular, the diversification effects used by banks are very close to (and quite often larger than) our empirical diversification estimates. A direct implication of this finding is that individual VaRs for each broad risk category, just like aggregate VaRs, are biased risk assessments.
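The diversification effect discussed in this abstract can be made concrete with a toy calculation. The risk-category names follow the abstract, but the dollar figures below are invented for illustration and are not taken from the paper.

```python
# Hypothetical stand-alone VaRs (in $M) for the broad risk categories
# named in the abstract; the figures are illustrative only.
individual_vars = {
    "equity": 40.0,
    "interest_rate": 55.0,
    "commodity": 10.0,
    "credit_spread": 25.0,
    "foreign_exchange": 15.0,
}

aggregate_var = 95.0  # bank's reported firm-wide VaR (also hypothetical)

sum_vars = sum(individual_vars.values())
# Diversification effect: how far the aggregate VaR falls short of the
# naive sum of the category VaRs (the sub-additivity benefit).
diversification_effect = sum_vars - aggregate_var
diversification_pct = diversification_effect / sum_vars

print(f"Sum of category VaRs: {sum_vars:.1f}")
print(f"Diversification effect: {diversification_effect:.1f} "
      f"({diversification_pct:.1%})")
```

Underestimating this effect inflates the aggregate VaR; the paper's finding is that banks' reported effects are close to, or larger than, empirical estimates.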
Abstract:
Despite considerable success in treatment of early stage localized prostate cancer (PC), the acute inadequacy of late stage PC treatment and its inherent heterogeneity pose a formidable challenge. Clearly, an improved understanding of PC genesis and progression along with the development of new targeted therapies are warranted. Animal models, especially transgenic immunocompetent mouse models, have proven to be the best ally in this respect. A series of models have been developed by modulation of expression of genes implicated in cancer genesis and progression; mainly, modulation of expression of oncogenes, steroid hormone receptors, growth factors and their receptors, cell cycle and apoptosis regulators, and tumor suppressor genes have been used. Such models have contributed significantly to our understanding of the molecular and pathological aspects of PC initiation and progression. In particular, the transgenic mouse models based on multiple genetic alterations can more accurately address the inherent complexity of PC, not only in revealing the mechanisms of tumorigenesis and progression but also for clinically relevant evaluation of new therapies. Further, with advances in conditional knockout technologies, otherwise embryonically lethal gene changes can be incorporated, leading to the development of new generation transgenics, thus adding significantly to our existing knowledge base. Different models and their relevance to PC research are discussed.
Abstract:
We consider the problem of binary classification where the classifier can, for a particular cost, choose not to classify an observation. Just as in the conventional classification problem, minimization of the sample average of the cost is a difficult optimization problem. As an alternative, we propose the optimization of a certain convex loss function φ, analogous to the hinge loss used in support vector machines (SVMs). Its convexity ensures that the sample average of this surrogate loss can be efficiently minimized. We study its statistical properties. We show that minimizing the expected surrogate loss—the φ-risk—also minimizes the risk. We also study the rate at which the φ-risk approaches its minimum value. We show that fast rates are possible when the conditional probability P(Y=1|X) is unlikely to be close to certain critical values.
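A minimal sketch of the decision problem this abstract describes: the 0-1-d cost of classifying with a reject option, and the plug-in rule that abstains when P(Y=1|X) is too close to 1/2. The function names and the choice d=0.3 are illustrative assumptions; the paper's convex surrogate φ is not reproduced here.

```python
def cost(prediction, label, d=0.3):
    """0-1-d loss: 0 if correct, 1 if wrong, d (with d < 1/2) for
    abstaining. `prediction` is -1, +1, or 0 (0 = reject)."""
    if prediction == 0:
        return d
    return 0.0 if prediction == label else 1.0

def bayes_rule(eta, d=0.3):
    """Plug-in rule: abstain when eta = P(Y=1|X) is close to 1/2,
    i.e. min(eta, 1 - eta) > d; otherwise predict the likelier label."""
    if min(eta, 1.0 - eta) > d:
        return 0  # reject
    return 1 if eta >= 0.5 else -1
```

The "fast rates" of the abstract correspond to P(Y=1|X) rarely falling near the critical values d and 1-d, where this rule's decision flips.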
Abstract:
This paper presents a method of spatial sampling based on stratification by local Moran's I_i calculated using auxiliary information. The sampling technique is compared to other design-based approaches including simple random sampling, systematic sampling on a regular grid, conditional Latin Hypercube sampling and stratified sampling based on auxiliary information, and is illustrated using two different spatial data sets. Each of the samples for the two data sets is interpolated using regression kriging to form a geostatistical map for their respective areas. The proposed technique is shown to be competitive in reproducing specific areas of interest with high accuracy.
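The stratification statistic can be sketched directly. Local Moran's I_i is the product of a site's standardized value and the weighted average of its neighbours' standardized values; the toy rook-neighbour weight matrix and the median-split strata below are illustrative choices, not the paper's design.

```python
import numpy as np

def local_morans_i(values, weights):
    """Local Moran's I_i for each site.

    values  : 1-D array of the auxiliary variable at each site
    weights : (n, n) spatial weight matrix (row-standardised, zero
              diagonal); the neighbourhood definition is an assumption
              of this sketch."""
    z = (values - values.mean()) / values.std()
    return z * (weights @ z)

# Toy example: 4 sites on a line with rook neighbours.
vals = np.array([1.0, 2.0, 8.0, 9.0])
w = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.0, 0.0, 1.0, 0.0]])

ii = local_morans_i(vals, w)
# Positive I_i: the site resembles its neighbours (low-low or high-high
# cluster); negative I_i: spatial outlier. Sampling strata could then be
# formed from, e.g., quantiles of I_i.
strata = np.digitize(ii, np.quantile(ii, [0.5]))
```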
Abstract:
A Networked Control System (NCS) is a feedback-driven control system wherein the control loops are closed through a real-time network. Control and feedback signals in an NCS are exchanged among the system’s components in the form of information packets via the network. Nowadays, wireless technologies such as IEEE 802.11 are being introduced to modern NCSs as they offer better scalability, larger bandwidth and lower costs. However, this type of network was not designed for NCSs: the characteristics of wireless channels introduce high rates of dropped data and long, unpredictable transmission latencies, which are not acceptable for real-time control systems. Real-time control is a class of time-critical application which requires lossless data transmission, small and deterministic delays and jitter. For a real-time control system, network-introduced problems may degrade the system’s performance significantly or even cause system instability. It is therefore important to develop solutions that satisfy real-time requirements in terms of delays, jitter and data losses, and guarantee high levels of performance for time-critical communications in Wireless Networked Control Systems (WNCSs). To improve or even guarantee real-time performance in wireless control systems, this thesis presents several network layout strategies and a new transport layer protocol. Firstly, real-time performance in regard to data transmission delays and reliability of IEEE 802.11b-based UDP/IP NCSs is evaluated through simulations. After analysis of the simulation results, some network layout strategies are presented to achieve relatively small and deterministic network-introduced latencies and to reduce data loss rates. These are effective in providing better network performance without degrading the performance of other services.
After the investigation into the layout strategies, the thesis presents a new transport protocol which is more efficient than UDP and TCP for guaranteeing reliable and time-critical communications in WNCSs. From the networking perspective, introducing appropriate communication schemes, modifying existing network protocols and devising new protocols have been the most effective and popular ways to improve or even guarantee real-time performance to a certain extent. Most previously proposed schemes and protocols were designed for real-time multimedia communication and are not suitable for real-time control systems. Therefore, devising a new network protocol that is able to satisfy real-time requirements in WNCSs is the main objective of this research project. The Conditional Retransmission Enabled Transport Protocol (CRETP) is the new network protocol presented in this thesis. Retransmitting unacknowledged data packets is effective in compensating for data losses. However, every data packet in real-time control systems has a deadline, and data is assumed invalid or even harmful once its deadline expires. CRETP performs data retransmission only while the data is still valid, which guarantees data timeliness and saves memory and network resources. A trade-off between delivery reliability, transmission latency and network resources can be achieved by the conditional retransmission mechanism. Evaluation of protocol performance was conducted through extensive simulations. Comparative studies between CRETP, UDP and TCP were also performed. These results showed that CRETP significantly (1) improved the reliability of communication, (2) guaranteed the validity of received data, (3) reduced transmission latency to an acceptable value, and (4) made delays relatively deterministic and predictable.
Furthermore, CRETP achieved the best overall performance in the comparative studies, which makes it the most suitable of the three transport protocols for real-time communications in a WNCS.
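The conditional-retransmission idea at the heart of CRETP can be sketched in a few lines: resend an unacknowledged packet only while its data is still valid. The packet fields and timing API below are assumptions of this illustration, not CRETP's actual wire format.

```python
import time

def should_retransmit(packet, now=None):
    """Conditional retransmission check in the spirit of CRETP: an
    unacknowledged packet is resent only before its deadline expires;
    stale control data is discarded rather than retransmitted."""
    now = time.monotonic() if now is None else now
    return (not packet["acked"]) and now < packet["deadline"]

# A control sample past its deadline is dropped instead of wasting
# bandwidth and delaying fresher samples:
pkt = {"seq": 7, "acked": False, "deadline": 100.0}
assert should_retransmit(pkt, now=99.0)       # still valid -> retransmit
assert not should_retransmit(pkt, now=101.0)  # deadline passed -> discard
```

This is the trade-off the abstract describes: reliability is improved where it matters, while expired data no longer consumes memory or network resources.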
Abstract:
In recent years a great deal of case law has been generated in relation to mortgages where the mortgagee has not engaged in adequate identity verification of the mortgagor and the mortgage has subsequently been found to be forged. As a result, careless mortgagee provisions operate in Queensland as an exception to indefeasibility. Similar provisions are expected to commence soon in New South Wales. This article examines the mortgagee’s position with the benefit of indefeasibility and then considers the impact of the careless mortgagee provisions on the rights of a mortgagee under a forged mortgage, concluding that the provisions significantly change the dynamic between a registered mortgagee and a registered owner who has not signed the mortgage. These provisions appear to give the mortgagee a conditional indefeasibility, with the intention of reducing the State’s exposure to the payment of compensation in the case of identity fraud. They are, however, more successful in the case of forgery by a third party than forgery by a co-owner.
Abstract:
Affect modulates the blink startle reflex in the picture-viewing paradigm; however, the process responsible for reflex modulation during conditional stimuli (CSs) that have acquired valence through affective conditioning remains unclear. In Experiment 1, neutral shapes (CSs) and valenced or neutral pictures (USs) were paired in a forward (CS → US) manner. Pleasantness ratings supported affective learning of positive and negative valence. Post-acquisition, blink reflexes were larger during the pleasant and unpleasant CSs than during the neutral CS. Attention or anticipatory arousal, rather than affect, were suggested as sources of startle modulation. Experiment 2 confirmed that affective learning in the picture–picture paradigm was not affected by whether the CS preceded the US. Pleasantness ratings and affective priming revealed similar extents of affective learning following forward, backward or simultaneous pairings of CSs and USs. Experiment 3 utilized a backward conditioning procedure (US → CS) to minimize effects of US anticipation. Again, blink reflexes were larger during CSs paired with valenced USs regardless of US valence, implicating attention, rather than anticipatory arousal or affect, as the process modulating startle in this paradigm.
Abstract:
The research objective of this thesis was to contribute to Bayesian statistical methodology, specifically to risk assessment methodology and to spatial and spatio-temporal methodology, by modelling error structures using complex hierarchical models. Specifically, I hoped to consider two applied areas, and use these applications as a springboard for developing new statistical methods as well as undertaking analyses which might give answers to particular applied questions. Thus, this thesis considers a series of models, firstly in the context of risk assessments for recycled water, and secondly in the context of water usage by crops. The research objective was to model error structures using hierarchical models in two problems: risk assessment analyses for wastewater, and a four-dimensional dataset assessing differences between cropping systems over time and over three spatial dimensions. The aim was to use the simplicity and insight afforded by Bayesian networks to develop appropriate models for risk scenarios, and again to use Bayesian hierarchical models to explore the necessarily complex modelling of four-dimensional agricultural data. The specific objectives of the research were to develop a method for the calculation of credible intervals for the point estimates of Bayesian networks; to develop a model structure to incorporate all the experimental uncertainty associated with various constants, thereby allowing the calculation of more credible credible intervals for a risk assessment; to model a single day’s data from the agricultural dataset which satisfactorily captured the complexities of the data; to build a model for several days’ data, in order to consider how the full data might be modelled; and finally to build a model for the full four-dimensional dataset and to consider the time-varying nature of the contrast of interest, having satisfactorily accounted for possible spatial and temporal autocorrelations.
This work forms five papers, two of which have been published, with two submitted, and the final paper still in draft. The first two objectives were met by recasting the risk assessments as directed, acyclic graphs (DAGs). In the first case, we elicited uncertainty for the conditional probabilities needed by the Bayesian net, incorporated these into a corresponding DAG, and used Markov chain Monte Carlo (MCMC) to find credible intervals, for all the scenarios and outcomes of interest. In the second case, we incorporated the experimental data underlying the risk assessment constants into the DAG, and also treated some of that data as needing to be modelled as an ‘errors-in-variables’ problem [Fuller, 1987]. This illustrated a simple method for the incorporation of experimental error into risk assessments. In considering one day of the three-dimensional agricultural data, it became clear that geostatistical models or conditional autoregressive (CAR) models over the three dimensions were not the best way to approach the data. Instead, CAR models are used with neighbours only in the same depth layer. This gave flexibility to the model, allowing both the spatially structured and non-structured variances to differ at all depths. We call this model the CAR layered model. Given the experimental design, the fixed part of the model could have been modelled as a set of means by treatment and by depth, but doing so allows little insight into how the treatment effects vary with depth. Hence, a number of essentially non-parametric approaches were taken to see the effects of depth on treatment, with the model of choice incorporating an errors-in-variables approach for depth in addition to a non-parametric smooth. The statistical contribution here was the introduction of the CAR layered model, and the applied contribution the analysis of moisture over depth and estimation of the contrast of interest together with its credible intervals.
These models were fitted using WinBUGS [Lunn et al., 2000]. The work in the fifth paper deals with the fact that with large datasets, the use of WinBUGS becomes more problematic because of its highly correlated term-by-term updating. In this work, we introduce a Gibbs sampler with block updating for the CAR layered model. The Gibbs sampler was implemented by Chris Strickland using pyMCMC [Strickland, 2010]. This framework is then used to consider five days’ data, and we show that moisture in the soil for all the various treatments reaches levels particular to each treatment at a depth of 200 cm and thereafter stays constant, albeit with increasing variances with depth. In an analysis across three spatial dimensions and across time, there are many interactions of time and the spatial dimensions to be considered. Hence, we chose to use a daily model and to repeat the analysis at all time points, effectively creating an interaction model of time by the daily model. Such an approach allows great flexibility. However, this approach does not allow insight into the way in which the parameter of interest varies over time. Hence, a two-stage approach was also used, with estimates from the first stage being analysed as a set of time series. We see this spatio-temporal interaction model as being a useful approach to data measured across three spatial dimensions and time, since it does not assume additivity of the random spatial or temporal effects.
Abstract:
Much has been said and documented about the key role that reflection can play in the ongoing development of e-portfolios, particularly e-portfolios utilised for teaching and learning. A review of e-portfolio platforms reveals that a designated space for documenting and collating personal reflections is a typical design feature of both open source and commercial off-the-shelf software. Further investigation of tools within e-portfolio systems for facilitating reflection reveals that, apart from enabling personal journalism through blogs or other writing, scaffolding tools that encourage the actual process of reflection are under-developed. Investigation of a number of prominent e-portfolio projects also reveals that reflection, while presented as critically important, is often viewed as an activity that takes place after a learning activity or experience, not intrinsic to it. This paper assumes an alternative, richer conception of reflection: a process integral to a wide range of activities associated with learning, such as inquiry, communication, editing, analysis and evaluation. Such a conception is consistent with the literature associated with ‘communities of practice’, which is replete with insight into ‘learning through doing’, and with a ‘whole minded’ approach to inquiry. Thus, graduates who are ‘reflective practitioners’, integrating reflection into their learning, will have more to offer a prospective employer than graduates who have adopted an episodic approach to reflection. So, what kinds of tools might facilitate integrated reflection? This paper outlines a number of possibilities for consideration and development. Such tools do not have to be embedded within e-portfolio systems, although there are benefits in doing so.
In order to inform future design of e-portfolio systems this paper presents a faceted model of knowledge creation that depicts an ‘ecology of knowing’ in which interaction with, and the production of, learning content is deepened through the construction of well-formed questions of that content. In particular, questions that are initiated by ‘why’ are explored because they are distinguished from the other ‘journalist’ questions (who, what, when, where, and how) in that answers to them demand explanative, as opposed to descriptive, content. They require a rationale. Although why questions do not belong to any one genre and are not simple to classify — responses can contain motivational, conditional, causal, and/or existential content — they do make a difference in the acquisition of understanding. The development of scaffolding that builds on why-questioning to enrich learning is the motivation behind the research that has informed this paper.
Abstract:
In this paper, we describe an analysis for data collected on a three-dimensional spatial lattice with treatments applied at the horizontal lattice points. Spatial correlation is accounted for using a conditional autoregressive model. Observations are defined as neighbours only if they are at the same depth. This allows the corresponding variance components to vary by depth. We use the Markov chain Monte Carlo method with block updating, together with Krylov subspace methods, for efficient estimation of the model. The method is applicable to both regular and irregular horizontal lattices and hence to data collected at any set of horizontal sites for a set of depths or heights, for example, water column or soil profile data. The model for the three-dimensional data is applied to agricultural trial data for five separate days taken roughly six months apart in order to determine possible relationships over time. The purpose of the trial is to determine a form of cropping that leads to less moist soils in the root zone and beyond. We estimate moisture for each date, depth and treatment accounting for spatial correlation and determine relationships of these and other parameters over time.
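The same-depth neighbour definition can be sketched as follows; the coordinate layout and the adjacency rule (horizontal rook neighbours within unit distance) are illustrative assumptions of this sketch, not the trial's actual lattice.

```python
def same_depth_neighbours(sites, max_dist=1.0):
    """Neighbourhood for a depth-layered CAR model: two observations
    are neighbours only if they lie at the same depth and are
    horizontally adjacent. `sites` is a list of (row, col, depth)
    lattice coordinates."""
    nbrs = {i: [] for i in range(len(sites))}
    for i, (r1, c1, d1) in enumerate(sites):
        for j, (r2, c2, d2) in enumerate(sites):
            if i != j and d1 == d2 and abs(r1 - r2) + abs(c1 - c2) <= max_dist:
                nbrs[i].append(j)
    return nbrs

# Two horizontal sites observed at two depths each:
sites = [(0, 0, 0), (0, 1, 0), (0, 0, 1), (0, 1, 1)]
nb = same_depth_neighbours(sites)
# No edges cross depth layers, so each layer can carry its own
# spatial variance component.
```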
Abstract:
Modern technology now has the ability to generate large datasets over space and time. Such data typically exhibit high autocorrelations over all dimensions. The field trial data motivating the methods of this paper were collected to examine the behaviour of traditional cropping and to determine a cropping system which could maximise water use for grain production while minimising leakage below the crop root zone. They consist of moisture measurements made at 15 depths across 3 rows and 18 columns, in the lattice framework of an agricultural field. Bayesian conditional autoregressive (CAR) models are used to account for local site correlations. Conditional autoregressive models have not been widely used in analyses of agricultural data. This paper serves to illustrate the usefulness of these models in this field, along with the ease of implementation in WinBUGS, a freely available software package. The innovation is the fitting of separate conditional autoregressive models for each depth layer, the ‘layered CAR model’, while simultaneously estimating depth profile functions for each site treatment. Modelling interest also lay in how best to model the treatment effect depth profiles, and in the choice of neighbourhood structure for the spatial autocorrelation model. The favoured model fitted the treatment effects as splines over depth, and treated depth, the basis for the regression model, as measured with error, while fitting CAR neighbourhood models by depth layer. It is hierarchical, with separate conditional autoregressive spatial variance components at each depth, and the fixed terms, which involve an errors-in-measurement model, treat depth errors as interval-censored measurement error. The Bayesian framework permits transparent specification and easy comparison of the various complex models considered.
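As a stand-in for the spline-over-depth fixed effect, the sketch below fits a treatment-effect depth profile with a truncated-power linear spline by ordinary least squares. The knot positions and the toy profile are invented for illustration; the paper's fit is Bayesian and treats depth as measured with error, both omitted here.

```python
import numpy as np

def spline_basis(depth, knots):
    """Truncated-power linear spline basis: [1, d, (d - k)+ for each knot]."""
    cols = [np.ones_like(depth), depth]
    cols += [np.clip(depth - k, 0.0, None) for k in knots]
    return np.column_stack(cols)

# Illustrative profile: a treatment effect that declines with depth
# and flattens below depth 2 (cf. moisture stabilising at depth).
depth = np.linspace(0.0, 4.0, 41)
effect = np.where(depth < 2.0, 1.0 - 0.4 * depth, 0.2)

X = spline_basis(depth, knots=[1.0, 2.0, 3.0])
coef, *_ = np.linalg.lstsq(X, effect, rcond=None)
fitted = X @ coef
```

Because the toy profile is piecewise linear with a break at one of the knots, the spline reproduces it essentially exactly; real profiles would of course be fitted with smoothing and uncertainty.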
Abstract:
We explore theoretically and empirically whether corruption is contagious and whether conditional cooperation matters. We argue that the decision to bribe bureaucrats depends on the frequency of corruption within a society. We provide a behavioral model to explain this conduct: engaging in corruption results in a disutility of guilt. This disutility depends negatively on the number of people engaging in corruption. The empirical section presents evidence using two international panel data sets, one at the micro and one at the macro level. Results indicate that corruption is influenced by the perceived activities of peers. Moreover, the macro-level data indicate that past levels of corruption affect current corruption levels.
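The behavioural mechanism can be written down as a stylised decision rule: bribing yields a benefit minus a guilt disutility that falls as the share of peers engaging in corruption rises. The linear guilt function and the numbers below are illustrative assumptions, not the paper's specification or calibration.

```python
def bribe_utility(benefit, corrupt_share, guilt_max=1.0):
    """Stylised conditional-cooperation model: the guilt disutility of
    bribing decreases (here linearly) in the share of corrupt peers."""
    guilt = guilt_max * (1.0 - corrupt_share)
    return benefit - guilt

def will_bribe(benefit, corrupt_share):
    """Bribe whenever the net utility is positive."""
    return bribe_utility(benefit, corrupt_share) > 0.0

# The same bribe becomes attractive once corruption is widespread,
# which is the contagion effect the abstract describes:
assert not will_bribe(0.4, corrupt_share=0.2)  # guilt 0.8 > benefit 0.4
assert will_bribe(0.4, corrupt_share=0.8)      # guilt 0.2 < benefit 0.4
```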
Abstract:
A rule-based approach for classifying previously identified medical concepts in clinical free text into an assertion category is presented. There are six different categories of assertions for the task: Present, Absent, Possible, Conditional, Hypothetical and Not associated with the patient. The assertion classification algorithms were largely based on extending the popular NegEx and ConText algorithms. In addition, a clinical healthcare terminology, SNOMED CT, and other publicly available dictionaries were used to classify assertions which did not fit the NegEx/ConText model. The data for this task include discharge summaries from Partners HealthCare and from Beth Israel Deaconess Medical Centre, as well as discharge summaries and progress notes from University of Pittsburgh Medical Centre. The set consists of 349 discharge reports, each with pairs of ground truth concept and assertion files for system development, and 477 reports for evaluation. The system’s performance on the evaluation data set was 0.83, 0.83 and 0.83 for recall, precision and F1-measure, respectively. Although the rule-based system shows promise, further improvements can be made by incorporating machine learning approaches.
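A minimal NegEx/ConText-style pass illustrates the rule-based approach: scan for a trigger phrase preceding the concept and map it to an assertion category, defaulting to Present. The tiny trigger lexicon below is an invented sample, not the dictionaries or SNOMED CT content the system actually used.

```python
# Illustrative trigger phrases mapped to the task's six assertion
# categories (lexicon entries are examples only).
TRIGGERS = [
    ("no evidence of", "Absent"),
    ("denies", "Absent"),
    ("possible", "Possible"),
    ("if needed", "Conditional"),
    ("should he develop", "Hypothetical"),
    ("family history of", "Not associated with the patient"),
]

def classify_assertion(sentence, concept):
    """Assign an assertion category to `concept` by finding the nearest
    trigger phrase preceding it in the sentence; default to 'Present'
    when no trigger fires."""
    s = sentence.lower()
    pos = s.find(concept.lower())
    best_label, best_pos = "Present", -1
    for phrase, label in TRIGGERS:
        t = s.find(phrase)
        if 0 <= t < pos and t > best_pos:
            best_label, best_pos = label, t
    return best_label

assert classify_assertion("Patient denies chest pain.", "chest pain") == "Absent"
assert classify_assertion("Possible pneumonia on CXR.", "pneumonia") == "Possible"
assert classify_assertion("Patient has diabetes.", "diabetes") == "Present"
```

Real NegEx/ConText implementations also bound the trigger's scope (termination phrases, window sizes), which this sketch omits.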