49 results for Kernel estimator and ROC-GLM methodology
Abstract:
This paper considers the problem of estimation when one of a number of populations, assumed normal with known common variance, is selected on the basis of it having the largest observed mean. Conditional on selection of the population, the observed mean is a biased estimate of the true mean. This problem arises in the analysis of clinical trials in which selection is made between a number of experimental treatments that are compared with each other either with or without an additional control treatment. Attempts to obtain approximately unbiased estimates in this setting have been proposed by Shen [2001. An improved method of evaluating drug effect in a multiple dose clinical trial. Statist. Medicine 20, 1913–1929] and Stallard and Todd [2005. Point estimates and confidence regions for sequential trials involving selection. J. Statist. Plann. Inference 135, 402–419]. This paper explores the problem in the simple setting in which two experimental treatments are compared in a single analysis. It is shown that in this case the estimate of Stallard and Todd is the maximum-likelihood estimate (m.l.e.), and this is compared with the estimate proposed by Shen. In particular, it is shown that the m.l.e. has infinite expectation whatever the true value of the mean being estimated. We show that there is no conditionally unbiased estimator, and propose a new family of approximately conditionally unbiased estimators, comparing these with the estimators suggested by Shen.
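A quick simulation makes the conditional bias concrete. The sketch below is hypothetical illustrative code (not from the paper): it draws the observed means of two treatments with known common variance, selects the arm with the larger observed mean, and reports how far that selected mean sits, on average, above its true value.

```python
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([0.0, 0.2])        # true means of the two experimental treatments
sigma, n_per_arm = 1.0, 50       # known common standard deviation, patients per arm
reps = 200_000

# Observed treatment means for both arms in every simulated trial.
means = rng.normal(mu, sigma / np.sqrt(n_per_arm), size=(reps, 2))
selected = means.argmax(axis=1)                 # arm with the larger observed mean
observed = means[np.arange(reps), selected]

# Conditional bias: selected arm's observed mean minus its true mean, on average.
print(f"conditional bias of the naive estimate: {np.mean(observed - mu[selected]):.3f}")
```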
Abstract:
In this paper, we apply one-list capture-recapture models to estimate the number of scrapie-affected holdings in Great Britain. We applied this technique to the Compulsory Scrapie Flocks Scheme dataset, in which cases from all the surveillance sources monitoring the presence of scrapie in Great Britain (the abattoir survey, the fallen stock survey and the statutory reporting of clinical cases) are gathered. Consequently, the estimates of prevalence obtained from this scheme should be comprehensive and cover all the different presentations of the disease captured individually by the surveillance sources. Two estimators were applied under the one-list approach: the Zelterman estimator and Chao's lower bound estimator. Our results could only inform with confidence the size of the scrapie-affected holding population with clinical disease; this was around 350 holdings in Great Britain for the period under study, April 2005-April 2006. Our models allowed stratification by surveillance source and the inclusion of covariate information, holding size and country of origin. None of the covariates appeared to inform the model significantly. Crown Copyright (C) 2008 Published by Elsevier B.V. All rights reserved.
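For context, both one-list estimators have simple closed forms based on the number of holdings detected exactly once (f1) and exactly twice (f2). The sketch below uses hypothetical counts, not the scheme's data: Chao's lower bound is n + f1^2/(2 f2), while the Zelterman estimator derives a truncated-Poisson rate lambda = 2 f2/f1 and sets N = n/(1 - exp(-lambda)).

```python
import math

def chao_lower_bound(n_observed, f1, f2):
    """Chao's lower-bound estimate of the total number of affected holdings."""
    return n_observed + f1 ** 2 / (2.0 * f2)

def zelterman(n_observed, f1, f2):
    """Zelterman's truncated-Poisson estimate of the total number of affected holdings."""
    lam = 2.0 * f2 / f1                         # robust estimate of the Poisson rate
    return n_observed / (1.0 - math.exp(-lam))

# Hypothetical frequency counts: 300 holdings detected once, 40 detected twice, 360 in total.
print(chao_lower_bound(360, 300, 40), zelterman(360, 300, 40))
```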
Abstract:
A novel and generic miniaturization methodology for the determination of partition coefficient values of organic compounds in n-octanol/water by using magnetic nanoparticles is, for the first time, described. We have successfully designed, synthesised and characterised new colloidally stable porous silica-encapsulated magnetic nanoparticles of controlled dimensions. These nanoparticles, which absorb a tiny amount of n-octanol in their porous silica over-layer, are homogeneously dispersed into a bulk aqueous phase (pH 7.40) containing an organic compound prior to magnetic separation. The small size of the particles and the efficient mixing allow rapid establishment of the partition equilibrium of the organic compound between the solid-supported n-octanol nano-droplets and the bulk aqueous phase. UV-vis spectrophotometry is then applied as a quantitative method to determine the concentration of the organic compound in the aqueous phase both before and after partitioning (after magnetic separation). log D values of organic compounds of pharmaceutical interest (0.65-3.50), determined by this novel methodology, were found to be in excellent agreement with the values measured by the shake-flask method in two independent laboratories, which are also consistent with the literature data. This new technique also offers a number of advantages, such as accurate measurement of log D values, a much shorter experimental time and a smaller required sample size. With this approach, the formation of a problematic emulsion, commonly encountered in shake-flask experiments, is eliminated. It is envisaged that this method could be applicable to the high-throughput log D screening of drug candidates. (c) 2005 Elsevier B.V. All rights reserved.
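As a rough sketch of the underlying arithmetic (with assumed volumes and absorbances, not the authors' protocol), the distribution coefficient follows from a mass balance: whatever leaves the aqueous phase is taken up by the supported n-octanol, and Beer-Lambert linearity lets absorbances stand in for concentrations.

```python
import math

def log_d(a_before, a_after, v_aqueous_ml, v_octanol_ml):
    """log D from UV-vis absorbance of the aqueous phase before/after partitioning."""
    # Mass balance: the compound lost from the aqueous phase sits in the n-octanol layer.
    d = (a_before - a_after) * v_aqueous_ml / (a_after * v_octanol_ml)
    return math.log10(d)

# Hypothetical numbers: 5 mL aqueous phase, 5 microlitres of supported n-octanol.
print(log_d(a_before=0.80, a_after=0.40, v_aqueous_ml=5.0, v_octanol_ml=0.005))
```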
Abstract:
The relatively fast processing speed requirements in Wireless Personal Area Network (WPAN) consumer-based products are often in conflict with their low power and cost requirements. In order to resolve this conflict, the efficiency and cost-effectiveness of these products and their underlying functional modules become paramount. This paper presents a low-cost, simple, yet high-performance solution for the receiver Channel Estimator and Equalizer for the Multiband OFDM (MB-OFDM) system, particularly directed to the WiMedia Consortium Physical Layer (ECMA-368) consumer implementation for Wireless-USB and Fast Bluetooth. In this paper, the receiver fixed-point performance is measured and the results indicate excellent performance compared to the current literature.
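For orientation, a pilot-aided channel estimator followed by a one-tap frequency-domain equalizer is the textbook structure for an OFDM receiver; the sketch below shows that generic structure in floating point and is not the fixed-point ECMA-368 design evaluated in the paper.

```python
import numpy as np

def estimate_channel(rx_pilots, tx_pilots):
    """Per-subcarrier least-squares channel estimate from known pilot symbols."""
    return rx_pilots / tx_pilots

def equalize(rx_symbols, channel_est, noise_var=0.0):
    """One-tap MMSE equalizer per subcarrier (zero-forcing when noise_var is 0)."""
    h = channel_est
    return rx_symbols * np.conj(h) / (np.abs(h) ** 2 + noise_var)

# Toy example on four subcarriers with a known channel and BPSK symbols.
h_true = np.array([1.0 + 0.2j, 0.8 - 0.1j, 1.1 + 0.0j, 0.9 + 0.3j])
tx = np.array([1, -1, 1, 1], dtype=complex)
rx = h_true * tx
h_hat = estimate_channel(rx, tx)        # exact here because the toy case is noiseless
print(np.round(equalize(rx, h_hat).real))
```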
Abstract:
Using the classical Parzen window estimate as the target function, the kernel density estimation is formulated as a regression problem and the orthogonal forward regression technique is adopted to construct sparse kernel density estimates. The proposed algorithm incrementally minimises a leave-one-out test error score to select a sparse kernel model, and a local regularisation method is incorporated into the density construction process to further enforce sparsity. The kernel weights are finally updated using the multiplicative nonnegative quadratic programming algorithm, which has the ability to reduce the model size further. Except for the kernel width, the proposed algorithm has no other parameters that need tuning, and the user is not required to specify any additional criterion to terminate the density construction procedure. Two examples are used to demonstrate the ability of this regression-based approach to effectively construct a sparse kernel density estimate with comparable accuracy to that of the full-sample optimised Parzen window density estimate.
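A minimal sketch of the starting point of this formulation is given below: the full-sample Parzen window estimate serves as the regression target, and a weighted sum of a few candidate kernels is fitted to it. Plain least squares stands in here for the paper's orthogonal forward regression and multiplicative nonnegative quadratic programming steps, and all variable names are illustrative.

```python
import numpy as np

def parzen_window_density(x_eval, samples, width):
    """Full-sample Parzen window (Gaussian kernel) estimate used as the regression target."""
    diffs = (x_eval[:, None] - samples[None, :]) / width
    kernels = np.exp(-0.5 * diffs ** 2) / (np.sqrt(2.0 * np.pi) * width)
    return kernels.mean(axis=1)

rng = np.random.default_rng(1)
data = rng.normal(0.0, 1.0, size=200)
grid = np.linspace(-4.0, 4.0, 81)
target = parzen_window_density(grid, data, width=0.3)

# Regression view: approximate the target with a weighted sum of a few candidate kernels.
centres = data[:20]
design = np.exp(-0.5 * ((grid[:, None] - centres[None, :]) / 0.3) ** 2)
weights, *_ = np.linalg.lstsq(design, target, rcond=None)   # plain LS stand-in for OFR + MNQP
print(float(np.abs(design @ weights - target).max()))
```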
Abstract:
We agree with Duckrow and Albano [Phys. Rev. E 67, 063901 (2003)] and Quian Quiroga et al. [Phys. Rev. E 67, 063902 (2003)] that mutual information (MI) is a useful measure of dependence for electroencephalogram (EEG) data, but we show that the improvement seen in the performance of MI in extracting dependence trends from EEG depends more on the type of MI estimator than on any embedding technique used. In an independent study we conducted in search of an optimal MI estimator, particularly for EEG applications, we examined the performance of a number of MI estimators on the data set used by Quian Quiroga et al. in their original study, where the performance of different dependence measures on real data was investigated [Phys. Rev. E 65, 041903 (2002)]. We show that for EEG applications the best performance among the investigated estimators is achieved by k-nearest neighbors, which supports the conjecture by Quian Quiroga et al. in Phys. Rev. E 67, 063902 (2003) that the nearest neighbor estimator is the most precise method for estimating MI.
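For readers unfamiliar with the estimator, a compact (deliberately simple, O(n^2)) sketch of a Kraskov-style k-nearest-neighbour MI estimate is given below; it illustrates the general technique only and is not the implementation compared in the study.

```python
import numpy as np
from scipy.special import digamma

def knn_mutual_information(x, y, k=4):
    """Kraskov-style k-nearest-neighbour MI estimate (in nats) for two 1-D signals."""
    x, y = np.ravel(x).astype(float), np.ravel(y).astype(float)
    n = len(x)
    dx = np.abs(x[:, None] - x[None, :])
    dy = np.abs(y[:, None] - y[None, :])
    dz = np.maximum(dx, dy)                      # Chebyshev distance in the joint space
    np.fill_diagonal(dz, np.inf)
    eps = np.sort(dz, axis=1)[:, k - 1]          # distance to the k-th joint neighbour
    nx = np.sum(dx < eps[:, None], axis=1) - 1   # marginal neighbours, self excluded
    ny = np.sum(dy < eps[:, None], axis=1) - 1
    return digamma(k) + digamma(n) - np.mean(digamma(nx + 1) + digamma(ny + 1))

rng = np.random.default_rng(3)
x = rng.normal(size=500)
y = 0.8 * x + 0.6 * rng.normal(size=500)         # correlation 0.8, true MI about 0.51 nats
print(knn_mutual_information(x, y, k=4))
```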
Abstract:
To investigate the perception of emotional facial expressions, researchers rely on shared sets of photos or videos, most often generated by actor portrayals. The drawback of such standardized material is a lack of flexibility and controllability, as it does not allow the systematic parametric manipulation of specific features of facial expressions on the one hand, and of more general properties of the facial identity (age, ethnicity, gender) on the other. To remedy this problem, we developed FACSGen: a novel tool that allows the creation of realistic synthetic 3D facial stimuli, both static and dynamic, based on the Facial Action Coding System. FACSGen provides researchers with total control over facial action units, and corresponding informational cues in 3D synthetic faces. We present four studies validating both the software and the general methodology of systematically generating controlled facial expression patterns for stimulus presentation.
Abstract:
A role for sequential test procedures is emerging in genetic and epidemiological studies using banked biological resources. This stems from the methodology's potential for improved use of information relative to comparable fixed sample designs. Studies in which cost, time and ethics feature prominently are particularly suited to a sequential approach. In this paper sequential procedures for matched case–control studies with binary data will be investigated and assessed. Design issues such as sample size evaluation and error rates are identified and addressed. The methodology is illustrated and evaluated using both real and simulated data sets.
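As a generic illustration of the sequential idea for matched binary data (not the procedures developed in the paper), a Wald sequential probability ratio test can be run over the discordant pairs, for which the probability that the case rather than the control is the exposed member equals 0.5 under the null and OR/(1+OR) under the alternative.

```python
import math

def sprt_matched_pairs(discordant, odds_ratio_h1=2.0, alpha=0.05, beta=0.10):
    """Wald SPRT over discordant matched pairs; each entry is 1 if the case (not the
    control) was the exposed member, 0 otherwise. Under H0 that probability is 0.5."""
    p0, p1 = 0.5, odds_ratio_h1 / (1.0 + odds_ratio_h1)
    upper, lower = math.log((1 - beta) / alpha), math.log(beta / (1 - alpha))
    llr = 0.0
    for i, case_exposed in enumerate(discordant, start=1):
        llr += math.log(p1 / p0) if case_exposed else math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return f"stop and reject H0 after {i} discordant pairs"
        if llr <= lower:
            return f"stop and accept H0 after {i} discordant pairs"
    return "continue sampling"

print(sprt_matched_pairs([1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1]))
```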
Abstract:
Since 1998, the Aurora project has been investigating the use of a robotic platform as a tool for therapy with children with autism. A key issue in this project is the evaluation of the interactions, which are not constrained and involve the child moving freely. Additionally, the response of the children is an important factor which must emerge from the robot trial sessions and the evaluation methodology, in order to guide further development work.
Abstract:
Purpose – The primary aim of this paper is to examine whether boards of directors with independent members function as effective corporate governance mechanisms in Chinese State-Owned Enterprises (SOEs), by analysing four characteristics of non-executive directors (NEDs) that impact on their effectiveness, namely their degree of independence, information, incentive, and competence. Design/methodology/approach – Being exploratory in nature, the research uses qualitative methods for data collection. It is based on an interpretivist perspective of social sciences, analysing and explaining the factors that influence the effectiveness of NEDs. Findings – The findings indicate that the NED system is weak in China as a result of the concentrated ownership structure, unique business culture, intervention of controlling shareholders and the lack of understanding of the benefits brought by NEDs. Research limitations/implications – The paper examines the salient features of and challenges to the system of NEDs of SOEs in present-day China. It provides an understanding of how the various perceptions of the board, gathered from in-depth interviews of corporate directors, lead to new interpretations of board effectiveness. The research, however, is limited owing to a relatively small sample size and the sensitive nature of the information collected. Originality/value – The study aims to fill gaps in the literature and contribute to it by assessing the “real” views and perceptions of NEDs in China in an institutional environment significantly different from that of the USA, the UK and other western economies.
Abstract:
This book investigates the challenges that the presence of digital imaging within the cinematic frame can pose for the task of interpretation. Applying close textual analysis to a series of case studies, the book demystifies the relationship of digital imaging to processes of watching and reading films, and develops a methodology for approaching the digital in popular cinema. In doing so, the study places contemporary digital imaging practice in relation to historical traditions of filmmaking and special effects practice, and proposes a fresh, flexible approach to the close reading of film that can take appropriate account of the presence of the digital.
Abstract:
Purpose – This paper seeks to problematise “accounting for biodiversity” and to provide a framework for analysing and understanding the role of accounting in preserving and enhancing biodiversity on Planet Earth. The paper aims to raise awareness of the urgent need to address biodiversity loss and extinction and the need for corporations to discharge accountability for their part in the current biodiversity crisis by accounting for their biodiversity-related strategies and policies. Such accounting is, it is believed, emancipatory and engenders change in corporate behaviour and attitudes. Design/methodology/approach – The authors reviewed the literature relating to biodiversity across a wide array of disciplines including anthropology, biodiversity, ecology, finance, philosophy, and of course, accounting, in order to build an image of the current state of biodiversity and the role which accounting can and “should” play in the future of biodiversity. Findings – It is found that the problems underlying accounting for biodiversity fall into four broad categories: philosophical and scientific problems, accountability problems, technical accounting problems, and problems of accounting practice. Practical implications – Through establishing a framework problematising biodiversity, a roadmap is laid out for researchers and practitioners to navigate a route for future research and policymaking in biodiversity accounting. It is concluded that an interdisciplinary approach to accounting for biodiversity is crucial to ensuring effective action on biodiversity and for accounting for biodiversity to achieve its emancipatory potential. Originality/value – Although there is a wealth of sustainability reporting research, there is hardly any work exploring the role of accounting in preserving and enhancing biodiversity. There is no research exploring the current state of accounting for biodiversity. This paper summarises the current state of biodiversity using an interdisciplinary approach and introduces a series of papers devoted to the role of accounting in biodiversity accepted for this AAAJ special issue. The paper also provides a framework identifying the diverse problems associated with accounting for biodiversity.
Abstract:
Purpose – This paper aims to examine current research trends into corporate governance and to propose a different dynamic, humanistic approach based on individual purpose, values and psychology. Design/methodology/approach – The paper reviews selected literature to analyse the assumptions behind research into corporate governance and uses a multi-disciplinary body of literature to present a different theoretical approach based at the level of the individual rather than the organisation. Findings – The paper shows how the current recommendations of the corporate governance research models could backfire and lead to individual actions that are destructive when implemented in practice. This claim is based on identifying the hidden assumptions behind the principal-agent model in corporate governance, such as the Hobbesian view and the Homo Economicus approach. It argues against the axiomatic view that shareholders are the owners of the company, and it questions the way in which managers are assessed based either on the corporate share price (the shareholder view) or on a confusing set of measures which include more stakeholders (the stakeholder view), and shows how such a yardstick can be demotivating and put the corporation in danger. The paper proposes a humanistic, psychological approach that uses the individual manager as a unit of analysis instead of the corporation and illustrates how such an approach can help to build better governance. Research limitations/implications – Owing to its limited scope, the paper can only outline a conceptual framework and does not enter into detailed operationalisation. Practical implications – The paper illustrates the challenges in applying the proposed framework in practice. Originality/value – The paper calls for the use of an alternative unit of analysis, the manager, and for a dynamic and humanistic approach which encompasses the entirety of a person's cognition, including emotional and spiritual values, and which is as yet rarely found in the corporate governance literature.
Abstract:
In this paper, we study the role of the volatility risk premium for the forecasting performance of implied volatility. We introduce a non-parametric and parsimonious approach to adjust the model-free implied volatility for the volatility risk premium and implement this methodology using more than 20 years of options and futures data on three major energy markets. Using regression models and statistical loss functions, we find compelling evidence to suggest that the risk premium adjusted implied volatility significantly outperforms other models, including its unadjusted counterpart. Our main finding holds for different choices of volatility estimators and competing time-series models, underlining the robustness of our results.
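The evaluation step described here typically amounts to a Mincer-Zarnowitz style regression of realised volatility on the forecast, together with statistical loss functions such as MSE and QLIKE. The sketch below uses simulated, hypothetical series and does not reproduce the paper's data or premium-adjustment procedure.

```python
import numpy as np

def mincer_zarnowitz(realised, forecast):
    """OLS of realised on a constant and the forecast; an ideal forecast gives a=0, b=1."""
    x = np.column_stack([np.ones_like(forecast), forecast])
    (a, b), *_ = np.linalg.lstsq(x, realised, rcond=None)
    return a, b

def mse(realised, forecast):
    return np.mean((realised - forecast) ** 2)

def qlike(realised_var, forecast_var):
    """QLIKE loss on variances, robust to noise in the realised-volatility proxy."""
    ratio = realised_var / forecast_var
    return np.mean(ratio - np.log(ratio) - 1.0)

# Hypothetical monthly series: an implied-variance forecast and a noisy realised variance.
rng = np.random.default_rng(2)
forecast_var = rng.uniform(0.02, 0.08, size=240)
realised_var = forecast_var * rng.lognormal(0.0, 0.3, size=240)
print(mincer_zarnowitz(np.sqrt(realised_var), np.sqrt(forecast_var)))
print(mse(realised_var, forecast_var), qlike(realised_var, forecast_var))
```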