20 results for Smoothed bootstrap

in Aston University Research Archive


Relevance: 20.00%

Abstract:

The purpose of this study is to provide a comparative analysis of the efficiency of Islamic and conventional banks in Gulf Cooperation Council (GCC) countries. We explain the estimated inefficiencies by introducing firm-specific as well as macroeconomic variables. Our findings indicate that over the eight-year study period, conventional banks largely outperform Islamic banks, with an average technical efficiency score of 95.57% compared with 81%. However, the efficiency of conventional banks has been on a downward trend since 2008, while that of their Islamic counterparts has trended upward since 2009, indicating that Islamic banks succeeded in maintaining their efficiency during the subprime crisis period. Finally, for the whole sample, the analysis demonstrates a strong link between macroeconomic indicators and the efficiency of GCC banks. Surprisingly, we find no significant relationship in the case of Islamic banks.
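
As a toy illustration of relating efficiency scores to macroeconomic conditions, the sketch below regresses placeholder efficiency scores on simulated GDP growth and inflation; the variable names, data, and the simple second-stage regression are all invented for illustration, since the abstract does not specify the study's estimator.

```python
# A minimal sketch (invented data) of a second-stage regression of
# efficiency scores on macroeconomic indicators.
import numpy as np

rng = np.random.default_rng(2)
n = 160                                          # toy bank-year observations
gdp_growth = rng.normal(4, 2, n)
inflation = rng.normal(3, 1.5, n)
efficiency = np.clip(0.7 + 0.02 * gdp_growth - 0.01 * inflation
                     + rng.normal(0, 0.05, n), 0, 1)   # placeholder scores

X = np.column_stack([np.ones(n), gdp_growth, inflation])
beta, *_ = np.linalg.lstsq(X, efficiency, rcond=None)
print("intercept, GDP-growth and inflation coefficients:", beta.round(3))
```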

Relevance: 20.00%

Abstract:

Two new methodologies are introduced to improve inference in the evaluation of mutual fund performance against benchmarks. First, the benchmark models are estimated using panel methods with both fund and time effects. Second, the non-normality of individual mutual fund returns is accounted for by using panel bootstrap methods. We also augment the standard benchmark factors with fund-specific characteristics, such as fund size. Using a dataset of UK equity mutual fund returns, we find that fund size has a negative effect on the average fund manager's benchmark-adjusted performance. Further, when we allow for time effects and the non-normality of fund returns, we find no evidence that even the best-performing fund managers can significantly outperform the augmented benchmarks once fund management charges are taken into account.
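
A minimal sketch of the panel bootstrap idea, assuming resampling of whole time periods under a zero-alpha null so that cross-fund dependence is preserved and no normality is imposed; the returns and dimensions are simulated placeholders, not the paper's data or benchmark model.

```python
# Panel bootstrap for the best fund's alpha: resample whole months so the
# cross-section of funds stays intact, and avoid assuming normal returns.
import numpy as np

rng = np.random.default_rng(0)
T, N = 240, 50                              # months, funds (toy dimensions)
excess = rng.standard_t(df=5, size=(T, N))  # placeholder benchmark-adjusted returns

alpha_hat = excess.mean(axis=0)             # per-fund alpha estimates
best_obs = alpha_hat.max()

# Impose the null of zero alpha by centring, then resample months.
centred = excess - alpha_hat
best_boot = np.empty(2000)
for b in range(best_boot.size):
    months = rng.integers(0, T, size=T)     # draw whole cross-sections
    best_boot[b] = centred[months].mean(axis=0).max()

p_value = (best_boot >= best_obs).mean()    # does the best fund beat luck?
print(f"bootstrap p-value for the top fund's alpha: {p_value:.3f}")
```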

Relevance: 10.00%

Abstract:

The aim of this paper is to replicate and extend the analysis of visual technical patterns by Lo et al. (2000) using data on the UK market. A non-parametric smoother is used to model a nonlinear trend in stock price series. Technical patterns, such as the 'head-and-shoulders' pattern, that are characterised by a sequence of turning points are identified in the smoothed data. Statistical tests are used to determine whether returns conditioned on the technical patterns differ from random returns and, in an extension to the analysis of Lo et al. (2000), whether they can outperform a market benchmark return. For the stocks in the FTSE 100 and FTSE 250 indices over the period 1986 to 2001, we find that the technical patterns occur with varying frequencies, and that their relative frequencies differ from those found in the US market. Our extended statistical testing indicates that UK stock returns are less influenced by technical patterns than US stock returns were.
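
A minimal sketch of the approach, assuming a Nadaraya-Watson kernel smoother and turning-point extraction in the spirit of Lo et al. (2000); the price series and bandwidth are invented placeholders, and real pattern matching would add template rules over the sequence of extrema.

```python
# Smooth prices with a kernel regression, then read off turning points whose
# sequence can be matched against templates such as head-and-shoulders.
import numpy as np

def kernel_smooth(prices, bandwidth):
    """Nadaraya-Watson smoother with a Gaussian kernel over the time index."""
    t = np.arange(len(prices))
    weights = np.exp(-0.5 * ((t[:, None] - t[None, :]) / bandwidth) ** 2)
    return weights @ prices / weights.sum(axis=1)

def turning_points(smoothed):
    """Indices of local maxima/minima of the smoothed series."""
    d = np.diff(smoothed)
    return np.where(np.sign(d[1:]) != np.sign(d[:-1]))[0] + 1

rng = np.random.default_rng(1)
prices = 100 * np.exp(np.cumsum(0.001 + 0.01 * rng.standard_normal(250)))
tp = turning_points(kernel_smooth(prices, bandwidth=5.0))
# A head-and-shoulders match would require five consecutive extrema
# max-min-max-min-max with the middle peak highest, within tolerance bands.
print("turning points at:", tp)
```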

Relevance: 10.00%

Abstract:

A multi-scale model of edge coding based on normalized Gaussian derivative filters successfully predicts perceived scale (blur) for a wide variety of edge profiles [Georgeson, M. A., May, K. A., Freeman, T. C. A., & Hesse, G. S. (in press). From filters to features: Scale-space analysis of edge and blur coding in human vision. Journal of Vision]. Our model spatially differentiates the luminance profile, half-wave rectifies the 1st derivative, and then differentiates twice more, to give the 3rd derivative of all regions with a positive gradient. This process is implemented by a set of Gaussian derivative filters with a range of scales. Peaks in the inverted normalized 3rd derivative across space and scale indicate the positions and scales of the edges. The edge contrast can be estimated from the height of the peak. The model provides a veridical estimate of the scale and contrast of edges that have a Gaussian integral profile. Therefore, since scale and contrast are independent stimulus parameters, the model predicts that the perceived value of either of these parameters should be unaffected by changes in the other. This prediction was found to be incorrect: reducing the contrast of an edge made it look sharper, and increasing its scale led to a decrease in the perceived contrast. Our model can account for these effects when the simple half-wave rectifier after the 1st derivative is replaced by a smoothed threshold function described by two parameters. For each subject, one pair of parameters provided a satisfactory fit to the data from all the experiments presented here and in the accompanying paper [May, K. A. & Georgeson, M. A. (2007). Added luminance ramp alters perceived edge blur and contrast: A critical test for derivative-based models of edge coding. Vision Research, 47, 1721-1731]. Thus, when we allow for the visual system's insensitivity to very shallow luminance gradients, our multi-scale model can be extended to edge coding over a wide range of contrasts and blurs. © 2007 Elsevier Ltd. All rights reserved.
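
A minimal sketch of the operator described above, not the authors' implementation: differentiate, half-wave rectify, differentiate twice more, and take the scale-space peak of the inverted third derivative. The sigma**2 scale normalization is an assumption of this sketch, chosen so that the peak lands near the edge blur under this particular two-stage filtering; the published model defines its own normalization, and a smoothed threshold would replace the hard rectifier to capture the contrast effects.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.special import erf

def edge_scale_space(luminance, sigmas):
    """Inverted, scale-normalized 3rd-derivative responses over (scale, space)."""
    rows = []
    for sigma in sigmas:
        d1 = gaussian_filter1d(luminance, sigma, order=1)
        d1 = np.maximum(d1, 0.0)                    # half-wave rectification
        d3 = gaussian_filter1d(d1, sigma, order=2)  # differentiate twice more
        rows.append(-d3 * sigma ** 2)               # assumed normalization
    return np.array(rows)

x = np.arange(1024.0)
blur = 20.0
edge = 0.5 * (1 + erf((x - 512) / (np.sqrt(2) * blur)))  # Gaussian integral edge
sigmas = np.geomspace(4, 80, 12)
resp = edge_scale_space(edge, sigmas)
scale_idx, position = np.unravel_index(resp.argmax(), resp.shape)
# The peak position marks the edge; the peak scale tracks its blur, and the
# peak height could serve as the contrast estimate.
print(f"edge at sample {position}; selected scale {sigmas[scale_idx]:.1f} "
      f"vs true blur {blur}")
```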

Relevance: 10.00%

Abstract:

Edges are key points of information in visual scenes. One important class of models supposes that edges correspond to the steepest parts of the luminance profile, implying that they can be found as peaks and troughs in the response of a gradient (first-derivative) filter, or as zero-crossings (ZCs) in the second-derivative. A variety of multi-scale models are based on this idea. We tested this approach by devising a stimulus that has no local peaks of gradient and no ZCs, at any scale. Our stimulus profile is analogous to the classic Mach-band stimulus, but it is the local luminance gradient (not the absolute luminance) that increases as a linear ramp between two plateaux. The luminance profile is a smoothed triangle wave and is obtained by integrating the gradient profile. Subjects used a cursor to mark the position and polarity of perceived edges. For all the ramp-widths tested, observers marked edges at or close to the corner points in the gradient profile, even though these were not gradient maxima. These new Mach edges correspond to peaks and troughs in the third-derivative. They are analogous to Mach bands - light and dark bars are seen where there are no luminance peaks but there are peaks in the second derivative. Here, peaks in the third derivative were seen as light-to-dark edges, troughs as dark-to-light edges. Thus Mach edges are inconsistent with many standard edge detectors, but are nicely predicted by a new model that uses a (nonlinear) third-derivative operator to find edge points.
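
A minimal sketch of the stimulus construction as described: a gradient profile that ramps linearly between two plateaux, integrated to give the luminance profile. The ramp width and smoothing are invented toy values.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

x = np.linspace(-1, 1, 4001)
dx = x[1] - x[0]
half = 0.15                                # ramp half-width (invented toy value)
gradient = np.clip(x / half, -1, 1)        # plateau, linear ramp, plateau
gradient = gaussian_filter1d(gradient, 8)  # slight smoothing, no sharp corners

luminance = np.cumsum(gradient) * dx       # integral: smoothed triangle-like wave
# The gradient is monotonic (no local gradient peaks) and its derivative, a
# smoothed boxcar, never changes sign, so the second derivative has no ZCs.
d2 = np.gradient(gradient, dx)
d3 = np.gradient(d2, dx)                   # 3rd derivative: extrema at the corners
print("Mach edges at the gradient corner points, x =",
      round(x[np.argmax(d3)], 2), "and", round(x[np.argmin(d3)], 2))
```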

Relevance: 10.00%

Abstract:

With the extensive use of pulse modulation methods in telecommunications, much work has been done in the search for a better utilisation of the transmission channel. The present research is an extension of these investigations. A new modulation method, 'Variable Time-Scale Information Processing' (VTSIP), is proposed. The basic principles of this system have been established, and the main advantages and disadvantages investigated. With the proposed system, comparison circuits detect the instants at which the input signal voltage crosses predetermined amplitude levels. The time intervals between these occurrences are measured digitally and the results are temporarily stored before being transmitted. After reception, an inverse process enables the original signal to be reconstituted. The advantage of this system is that irregularities in the rate of information contained in the input signal are smoothed out before transmission, allowing the use of a smaller transmission bandwidth. A disadvantage of the system is the time delay necessarily introduced by the storage process. Another disadvantage is a type of distortion caused by the finite store capacity. A simulation of the system has been made using a standard speech signal to make some assessment of this distortion. It is concluded that the new system should be an improvement on existing pulse transmission systems, allowing the use of a smaller transmission bandwidth but introducing a time delay.
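
A loose software interpretation of the VTSIP principle, not the thesis's circuitry: detect crossings of predetermined amplitude levels, keep the crossing events, and reconstruct by interpolating between the known level values. All names, levels, and signals below are invented.

```python
import numpy as np

def encode(signal, levels):
    """(fractional index, level) pairs at each predetermined-level crossing."""
    events = []
    for i in range(len(signal) - 1):
        a, b = signal[i], signal[i + 1]
        for lev in levels:
            if (a - lev) * (b - lev) < 0:            # level crossed here
                events.append((i + (lev - a) / (b - a), lev))
    return sorted(events)

def decode(events, n):
    """Piecewise-linear reconstruction from the crossing events."""
    times = np.array([e[0] for e in events])
    values = np.array([e[1] for e in events])
    return np.interp(np.arange(n), times, values)

t = np.linspace(0, 1, 500)
x = np.sin(2 * np.pi * 3 * t) + 0.3 * np.sin(2 * np.pi * 7 * t)
levels = np.linspace(-1.3, 1.3, 13)                  # predetermined levels
rebuilt = decode(encode(x, levels), len(x))
print("reconstruction RMS error:", float(np.sqrt(np.mean((x - rebuilt) ** 2))))
```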

Relevance: 10.00%

Abstract:

This doctoral thesis responds to the need for greater understanding of small businesses and their unique problem-types. Integral to the investigation is the theme that, for governments to influence small business effectively, a sound understanding of the factors they seek to influence is essential. Moreover, the study recognises the many shortcomings in management research, in particular that the research methods and approaches adopted often fail to give adequate understanding of the issues under study, and so attempts to develop an innovative and creative research approach. The aim is thus to produce not only advances in small business management knowledge, from the standpoints of both government policy makers and the 'recipient' small business, but also insights into potential future research methods for the continued development of that knowledge. The origins of the methodology lie in the non-acceptance of traditional philosophical positions in epistemology and ontology, with a philosophical standpoint of internal realism underpinning the research. Internal realism provides a basis for the potential co-existence of qualitative and quantitative research strategies and underlines the crucial contributory role of research method in establishing the factual status of the assertions of research findings. The concept of epistemological bootstrapping is thus used to develop a 'partial' research framework to foothold case study research, thereby avoiding the limitations of objectivism and brute inductivism. The major insights and issues highlighted by the 'bootstrap' guide the researcher around the participant case studies. A novel attempt at contextualist (linked multi-level and processual) analysis was made in the major in-depth case study, with two further cases playing a supporting role and contributing to a balanced emphasis of empirical research within the time constraints inherent in part-time research.

Relevance: 10.00%

Abstract:

This thesis presents a study of how edges are detected and encoded by the human visual system. The study begins with theoretical work on the development of a model of edge processing, and includes psychophysical experiments on humans, and computer simulations of these experiments, using the model. The first chapter reviews the literature on edge processing in biological and machine vision, and introduces the mathematical foundations of this area of research. The second chapter gives a formal presentation of a model of edge perception that detects edges and characterizes their blur, contrast and orientation, using Gaussian derivative templates. This model has previously been shown to accurately predict human performance in blur matching tasks with several different types of edge profile. The model provides veridical estimates of the blur and contrast of edges that have a Gaussian integral profile. Since blur and contrast are independent parameters of Gaussian edges, the model predicts that varying one parameter should not affect perception of the other. Psychophysical experiments showed that this prediction is incorrect: reducing the contrast makes an edge look sharper; increasing the blur reduces the perceived contrast. Both of these effects can be explained by introducing a smoothed threshold to one of the processing stages of the model. It is shown that, with this modification, the model can predict the perceived contrast and blur of a number of edge profiles that differ markedly from the ideal Gaussian edge profiles on which the templates are based. With only a few exceptions, the results from all the experiments on blur and contrast perception can be explained reasonably well using one set of parameters for each subject. In the few cases where the model fails, possible extensions to the model are discussed.
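
A minimal sketch of the modification described above, assuming a softplus-style smoothed threshold (the thesis specifies its own two-parameter function): shallow gradients are suppressed, so a low-contrast edge yields a narrower gradient response, consistent with it looking sharper.

```python
import numpy as np

def smoothed_threshold(g, threshold, softness):
    """Softplus: a smooth version of max(0, g - threshold). 'threshold' sets
    where transmission begins; 'softness' how gradually it turns on. These
    stand in for the model's two free parameters."""
    return softness * np.log1p(np.exp((g - threshold) / softness))

x = np.arange(-50.0, 51.0)
for contrast in (0.2, 1.0):
    grad = contrast * np.exp(-0.5 * (x / 10) ** 2)       # Gaussian gradient, blur 10
    out = smoothed_threshold(grad, threshold=0.05, softness=0.02)
    width = np.sqrt(np.sum(out * x ** 2) / np.sum(out))  # spread of the response
    print(f"contrast {contrast}: response width {width:.1f} (narrower = sharper)")
```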

Relevance: 10.00%

Abstract:

Citation information: Armstrong RA, Davies LN, Dunne MCM & Gilmartin B. Statistical guidelines for clinical studies of human vision. Ophthalmic Physiol Opt 2011, 31, 123-136. doi: 10.1111/j.1475-1313.2010.00815.x ABSTRACT: Statistical analysis of data can be complex, and different statisticians may disagree as to the correct approach, leading to conflict between authors, editors, and reviewers. The objective of this article is to provide some statistical advice for contributors to optometric and ophthalmic journals, to provide advice specifically relevant to clinical studies of human vision, and to recommend statistical analyses that could be used in a variety of circumstances. In submitting an article in which quantitative data are reported, authors should clearly describe the statistical procedures they have used and justify each stage of the analysis. This is especially important if more complex or 'non-standard' analyses have been carried out. The article begins with some general comments relating to data analysis concerning sample size and 'power', hypothesis testing, parametric and non-parametric variables, 'bootstrap methods', one- and two-tail testing, and the Bonferroni correction. More specific advice is then given with reference to particular statistical procedures that can be used on a variety of types of data. Where relevant, examples of correct statistical practice are given with reference to recently published articles in the optometric and ophthalmic literature.
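
As an illustration of the 'bootstrap methods' the article discusses, here is a minimal percentile-bootstrap confidence interval for a mean, on skewed toy data where normal-theory intervals would be suspect; the data and replication count are invented.

```python
import numpy as np

rng = np.random.default_rng(42)
sample = rng.lognormal(mean=0.0, sigma=0.6, size=40)   # skewed toy data

# Resample with replacement, recompute the statistic, take percentiles.
boots = np.array([rng.choice(sample, size=sample.size, replace=True).mean()
                  for _ in range(5000)])
lo, hi = np.percentile(boots, [2.5, 97.5])
print(f"mean = {sample.mean():.3f}, 95% bootstrap CI = ({lo:.3f}, {hi:.3f})")
```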

Relevance: 10.00%

Abstract:

If, as is widely believed, schizophrenia is characterized by abnormalities of brain functional connectivity, then it seems reasonable to expect that different subtypes of schizophrenia could be discriminated in the same way. However, evidence for differences in functional connectivity between the subtypes of schizophrenia is largely lacking and, where it exists, it could be accounted for by clinical differences between the patients (e.g. medication) or by the limitations of the measures used. In this study, we measured EEG functional connectivity in unmedicated male patients diagnosed with either positive or negative syndrome schizophrenia and compared them with age- and sex-matched healthy controls. Using new methodology (Medkour et al., 2009) based on partial coherence, brain connectivity plots were constructed for positive and negative syndrome patients and controls. Reliable differences in the pattern of functional connectivity were found, with both syndromes showing not only the absence of some connections seen in controls but also the presence of connections that the controls did not show. Comparing connectivity graphs using the Hamming distance, the negative-syndrome patients were found to be more distant from the controls than were the positive-syndrome patients. Bootstrap distributions of these distances showed a significant difference in mean distance, consistent with the observation that a negative-syndrome diagnosis is associated with a more severe form of schizophrenia. We conclude that schizophrenia is characterized by widespread changes in functional connectivity, with negative-syndrome patients showing a more extreme pattern of abnormality than positive-syndrome patients.
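
A minimal sketch of the graph comparison, with toy binary adjacency matrices standing in for the EEG-derived connectivity graphs: Hamming distance between graphs, plus a bootstrap distribution for the group difference in mean distance from controls. All sizes and flip rates are invented.

```python
import numpy as np

rng = np.random.default_rng(7)

def hamming(g1, g2):
    """Number of differing edges between two binary adjacency matrices."""
    iu = np.triu_indices_from(g1, k=1)
    return np.sum(g1[iu] != g2[iu])

nodes = 16
control = (rng.random((nodes, nodes)) < 0.3).astype(int)
def perturb(p_flip):
    """Flip each edge of the control template with probability p_flip."""
    flips = (rng.random((nodes, nodes)) < p_flip).astype(int)
    return np.abs(control - flips)

pos = [hamming(control, perturb(0.08)) for _ in range(20)]   # positive syndrome
neg = [hamming(control, perturb(0.15)) for _ in range(20)]   # negative syndrome

obs = np.mean(neg) - np.mean(pos)
boot = [np.mean(rng.choice(neg, len(neg))) - np.mean(rng.choice(pos, len(pos)))
        for _ in range(2000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"extra distance for negative syndrome: {obs:.1f}, 95% CI ({lo:.1f}, {hi:.1f})")
```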

Relevance: 10.00%

Abstract:

Ernst Mach observed that light or dark bands could be seen at abrupt changes of luminance gradient in the absence of peaks or troughs in luminance. Many models of feature detection share the idea that bars, lines, and Mach bands are found at peaks and troughs in the output of even-symmetric spatial filters. Our experiments assessed the appearance of Mach bands (position and width) and the probability of seeing them on a novel set of generalized Gaussian edges. Mach band probability was mainly determined by the shape of the luminance profile and increased with the sharpness of its corners, controlled by a single parameter (n). Doubling or halving the size of the images had no significant effect. Variations in contrast (20%-80%) and duration (50-300 ms) had relatively minor effects. These results rule out the idea that Mach bands depend simply on the amplitude of the second derivative, but a multiscale model, based on Gaussian-smoothed first- and second-derivative filtering, can account accurately for the probability and perceived spatial layout of the bands. A key idea is that Mach band visibility depends on the ratio of second- to first-derivative responses at peaks in the second-derivative scale-space map. This ratio is approximately scale-invariant and increases with the sharpness of the corners of the luminance ramp, as observed. The edges of Mach bands pose a surprisingly difficult challenge for models of edge detection, but a nonlinear third-derivative operation is shown to predict the locations of Mach band edges strikingly well. Mach bands thus shed new light on the role of multiscale filtering systems in feature coding. © 2012 ARVO.
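
A minimal sketch of the ratio criterion described above, at a single filter scale for brevity and with an assumed sigma normalization: generalized-Gaussian gradient profiles get sharper corners as n grows, and the second- to first-derivative response ratio at the second-derivative peak grows with them. The profiles, scale, and normalization are illustrative choices, not the paper's exact scale-space implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def mach_ratio(luminance, sigma):
    """Sigma-normalized ratio of 2nd- to 1st-derivative responses, evaluated
    at the peak of the second-derivative response."""
    d1 = gaussian_filter1d(luminance, sigma, order=1)
    d2 = gaussian_filter1d(luminance, sigma, order=2)
    peak = np.argmax(np.abs(d2))
    return sigma * abs(d2[peak]) / abs(d1[peak])

x = np.linspace(-4, 4, 2001)
dx = x[1] - x[0]
for n in (1, 2, 4, 8):                       # larger n: sharper gradient corners
    grad = np.exp(-np.abs(x) ** n)           # generalized-Gaussian gradient
    lum = np.cumsum(grad) * dx               # edge profile by integration
    print(f"n={n}: ratio {mach_ratio(lum, sigma=25):.3f}")
```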

Relevance: 10.00%

Abstract:

Background: Parkinson's disease (PD) is an incurable neurological disease with approximately 0.3% prevalence. The hallmark symptom is gradual movement deterioration. Current scientific consensus about disease progression holds that symptoms will worsen smoothly over time unless treated. Accurate information about symptom dynamics is of critical importance to patients, caregivers, and the scientific community for the design of new treatments, clinical decision making, and individual disease management. Long-term studies characterize the typical time course of the disease as an early linear progression gradually reaching a plateau in later stages. However, symptom dynamics over durations of days to weeks remain unquantified. Currently, there is a scarcity of objective clinical information about symptom dynamics at intervals shorter than 3 months stretching over several years, but Internet-based patient self-report platforms may change this.

Objective: To assess the clinical value of online self-reported PD symptom data recorded by users of the health-focused Internet social research platform PatientsLikeMe (PLM), in which patients quantify their symptoms on a regular basis on a subset of the Unified Parkinson's Disease Rating Scale (UPDRS). By analyzing these data, we aim for a scientific window on the nature of symptom dynamics for assessment intervals shorter than 3 months over durations of several years.

Methods: Online self-reported data were validated against the gold-standard Parkinson's Disease Data and Organizing Center (PD-DOC) database, which contains clinical symptom data at intervals greater than 3 months. The data were compared visually using quantile-quantile plots, and numerically using the Kolmogorov-Smirnov test. Using a simple piecewise linear trend estimation algorithm, the PLM data were smoothed to separate random fluctuations from continuous symptom dynamics. Subtracting the trends from the original data revealed random fluctuations in symptom severity. The average magnitude of fluctuations versus time since diagnosis was modeled using a gamma generalized linear model.

Results: Distributions of ages at diagnosis and UPDRS in the PLM and PD-DOC databases were broadly consistent. The PLM patients were systematically younger than the PD-DOC patients and showed increased symptom severity in the PD off state. The average fluctuation in symptoms (UPDRS Parts I and II) was 2.6 points at the time of diagnosis, rising to 5.9 points 16 years after diagnosis. These fluctuations exceed the estimated minimal and moderate clinically important differences, respectively. Not all patients conformed to the current clinical picture of gradual, smooth changes: many patients had regimes where symptom severity varied in an unpredictable manner, or underwent large rapid changes in an otherwise more stable progression.

Conclusions: This information about short-term PD symptom dynamics contributes new scientific understanding of disease progression that is currently very costly to obtain without self-administered Internet-based reporting. This understanding should have implications for the optimization of clinical trials of new treatments and for the choice of treatment decision timescales.
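
A minimal sketch of two analysis steps named above, on simulated stand-in data: a Kolmogorov-Smirnov comparison of two score distributions, and a trend/fluctuation decomposition in which a moving average stands in for the paper's piecewise linear trend estimator. All distributions and parameters are invented.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(3)
online_scores = rng.normal(25, 8, 400)      # stand-ins for self-reported UPDRS
clinic_scores = rng.normal(27, 8, 300)      # stand-ins for clinic-recorded UPDRS
stat, p = ks_2samp(online_scores, clinic_scores)
print(f"KS statistic {stat:.3f}, p = {p:.3g}")

# Separate slow trend from residual fluctuation in one patient's series.
t = np.arange(200)
series = 0.05 * t + rng.normal(0, 2.5, t.size)       # progression + noise
trend = np.convolve(series, np.ones(15) / 15, mode="same")
fluct = series - trend
print("mean absolute fluctuation:", float(np.mean(np.abs(fluct))))
```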

Relevance: 10.00%

Abstract:

In efficiency studies using the stochastic frontier approach, the main focus is to explain inefficiency in terms of exogenous variables and to compute the marginal effects of each of these determinants. Although inefficiency is estimated by its mean conditional on the composed error term (the Jondrow et al., 1982 estimator), marginal effects are typically computed from the unconditional mean of inefficiency (Wang, 2002). In this paper we derive the marginal effects based on the Jondrow et al. estimator and use the bootstrap method to compute confidence intervals for the marginal effects.
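
A minimal sketch, assuming the normal/half-normal frontier model: the Jondrow et al. (1982) conditional-mean estimator of inefficiency, followed by a percentile bootstrap (here for mean estimated inefficiency; the paper bootstraps the marginal effects themselves). Data and parameter values are invented.

```python
import numpy as np
from scipy.stats import norm

def jlms(eps, sigma_u, sigma_v):
    """E[u | eps] for eps = v - u, u ~ half-normal, v ~ normal (JLMS 1982)."""
    sigma2 = sigma_u**2 + sigma_v**2
    sigma_star = sigma_u * sigma_v / np.sqrt(sigma2)
    z = -eps * sigma_u**2 / sigma2 / sigma_star      # mu_star / sigma_star
    return sigma_star * (norm.pdf(z) / norm.cdf(z) + z)

rng = np.random.default_rng(5)
u = np.abs(rng.normal(0, 0.3, 500))                  # true inefficiency
eps = rng.normal(0, 0.2, 500) - u                    # composed error
u_hat = jlms(eps, sigma_u=0.3, sigma_v=0.2)

# Percentile bootstrap CI for the mean of the JLMS estimates.
boots = [rng.choice(u_hat, u_hat.size).mean() for _ in range(2000)]
lo, hi = np.percentile(boots, [2.5, 97.5])
print(f"mean JLMS inefficiency {u_hat.mean():.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```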

Relevance: 10.00%

Abstract:

This paper proposes a semiparametric smooth-coefficient (SPSC) stochastic production frontier model where regression coefficients are unknown smooth functions of environmental factors (Z). Technical inefficiency is specified in the form of a parametric scaling function which also depends on the Z variables. Thus, in our SPSC model the Z variables affect productivity directly via the technology parameters as well as through inefficiency. A residual-based bootstrap test of the relevance of the environmental factors in the SPSC model is suggested. An empirical application is also used to illustrate the technique.
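
A minimal sketch of smooth-coefficient estimation, using generic kernel-weighted least squares rather than the paper's exact estimator: at each evaluation point z0, observations with nearby Z values get more weight, so the fitted coefficients vary smoothly with Z. The data-generating process and bandwidth are invented.

```python
import numpy as np

def smooth_coefficients(y, X, Z, z0, bandwidth):
    """Kernel-weighted least squares: beta(z0) from observations near z0."""
    w = np.exp(-0.5 * ((Z - z0) / bandwidth) ** 2)   # Gaussian kernel weights
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

rng = np.random.default_rng(11)
n = 400
Z = rng.uniform(0, 1, n)                             # environmental factor
x1 = rng.normal(size=n)
X = np.column_stack([np.ones(n), x1])
beta1 = 1.0 + np.sin(2 * np.pi * Z)                  # slope varies smoothly with Z
y = 0.5 + x1 * beta1 + rng.normal(0, 0.1, n)

for z0 in (0.25, 0.5, 0.75):
    b = smooth_coefficients(y, X, Z, z0, bandwidth=0.1)
    print(f"z0={z0}: estimated slope {b[1]:.2f}, "
          f"true {1 + np.sin(2 * np.pi * z0):.2f}")
```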

Relevance: 10.00%

Abstract:

Zambia, like many other countries in Sub-Saharan Africa, faces the key challenge of sustaining high levels of coverage of AIDS treatment as global resources for HIV/AIDS treatment dwindle. Policy debate on HIV/AIDS is increasingly focused on efficiency in the use of available resources. In this chapter, we apply Data Envelopment Analysis (DEA) to estimate the short-term technical efficiency of 34 HIV/AIDS treatment facilities in Zambia. The data consist of input variables such as human resources, medical equipment, building space, drugs, medical supplies, and other materials used in providing HIV/AIDS treatment. Two main outputs, namely the numbers of ART-years (Anti-Retroviral Therapy-years) and pre-ART-years, are included in the model. Results show a mean technical efficiency score of 83%, with great variability in efficiency scores across facilities. Scale inefficiency is also shown to be significant. About half of the facilities were on the efficiency frontier. We also construct bootstrap confidence intervals around the efficiency scores.
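
A minimal sketch of DEA scoring via linear programming, assuming the standard input-oriented CCR model (the chapter's exact specification may differ); the inputs and outputs below are random placeholders for the facility data described above.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y, j):
    """Efficiency of unit j. X: (m inputs, n units), Y: (s outputs, n units).
    min theta  s.t.  X @ lam <= theta * x_j,  Y @ lam >= y_j,  lam >= 0."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.concatenate([[1.0], np.zeros(n)])       # variables: theta, lambdas
    A_ub = np.block([
        [-X[:, [j]], X],                           # X lam - theta x_j <= 0
        [np.zeros((s, 1)), -Y],                    # -Y lam <= -y_j
    ])
    b_ub = np.concatenate([np.zeros(m), -Y[:, j]])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun                                 # theta in (0, 1]

rng = np.random.default_rng(9)
X = rng.uniform(1, 10, size=(3, 34))               # toy inputs: staff, space, drugs
Y = rng.uniform(1, 10, size=(2, 34))               # toy outputs: ART- and pre-ART-years
scores = [dea_ccr_input(X, Y, j) for j in range(34)]
print("mean efficiency:", round(float(np.mean(scores)), 3))
```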