20 results for diffusion process, wavelet estimator, non-parametric rate of convergence, Markov chain, estimation of unknown signal
at Aston University Research Archive
Abstract:
To carry out an analysis of variance, several assumptions are made about the nature of the experimental data which have to be at least approximately true for the tests to be valid. One of the most important of these assumptions is that a measured quantity must be a parametric variable, i.e., a member of a normally distributed population. If the data are not normally distributed, then one method of approach is to transform the data to a different scale so that the new variable is more likely to be normally distributed. An alternative method, however, is to use a non-parametric analysis of variance. A limited number of such tests are available, but two useful ones are described in this Statnote, viz., the Kruskal-Wallis test and Friedman's analysis of variance.
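The Kruskal-Wallis test mentioned above is rank-based; a minimal pure-Python sketch of its H statistic follows (for illustration only — a statistics library would normally be used to obtain a p-value as well; ties here receive their average rank):

```python
def kruskal_h(groups):
    """Kruskal-Wallis H statistic; tied values receive their average rank."""
    pooled = sorted((v, gi) for gi, g in enumerate(groups) for v in g)
    n = len(pooled)
    rank_sums = [0.0] * len(groups)
    i = 0
    while i < n:
        j = i
        while j < n and pooled[j][0] == pooled[i][0]:
            j += 1                      # run of tied values: positions i..j-1
        avg_rank = (i + 1 + j) / 2.0    # mean of the ranks i+1 .. j
        for k in range(i, j):
            rank_sums[pooled[k][1]] += avg_rank
        i = j
    return 12.0 / (n * (n + 1)) * sum(
        rs ** 2 / len(g) for rs, g in zip(rank_sums, groups)
    ) - 3.0 * (n + 1)
```

For three well-separated groups such as [1,2,3], [4,5,6], [7,8,9] this gives H = 7.2, which would then be compared against a chi-squared distribution with (number of groups − 1) degrees of freedom.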
Abstract:
This paper analyses the effect of corruption on Multinational Enterprises' (MNEs) incentives to undertake FDI in a particular country. We contribute to the existing literature by modelling the relationship between corruption and FDI using both parametric and non-parametric methods. We report that the impact of corruption on FDI stock differs across the quantiles of the FDI stock distribution. This is a characteristic that could not be captured in previous studies, which used only parametric methods. After controlling for the location selection process of MNEs and other host country characteristics, the results from both parametric and non-parametric analyses offer some support for the 'helping-hand' role of corruption.
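The quantile-by-quantile view referred to above can be illustrated with a toy sketch: fitting a conditional quantile line by minimising the pinball (check) loss. The single-regressor setup and the helper names are illustrative assumptions, not the paper's actual specification:

```python
import numpy as np
from scipy.optimize import minimize

def pinball_loss(params, x, y, q):
    """Check (pinball) loss for the q-th conditional quantile line."""
    b0, b1 = params
    r = y - (b0 + b1 * x)
    return np.sum(np.maximum(q * r, (q - 1.0) * r))

def fit_quantile_line(x, y, q):
    """Fit intercept and slope of the q-th conditional quantile of y given x."""
    res = minimize(pinball_loss, x0=np.zeros(2), args=(x, y, q),
                   method="Nelder-Mead", options={"maxiter": 5000})
    return res.x
```

Fitting this at several values of q (e.g. 0.1, 0.5, 0.9) gives a separate slope per quantile, which is exactly the kind of heterogeneity a single mean regression cannot reveal.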
Abstract:
Practitioners assess the performance of entities in increasingly large and complicated datasets. If non-parametric models, such as Data Envelopment Analysis, were ever considered simple push-button technologies, this is impossible when many variables are available or when data have to be compiled from several sources. This paper introduces the 'COOPER-framework', a comprehensive model for carrying out non-parametric projects. The framework consists of six interrelated phases: Concepts and objectives, On structuring data, Operational models, Performance comparison model, Evaluation, and Result and deployment. Each of the phases describes necessary steps a researcher should examine for a well-defined and repeatable analysis. The COOPER-framework provides the novice analyst with guidance, structure and advice for a sound non-parametric analysis, while the more experienced analyst benefits from a checklist that ensures important issues are not forgotten. In addition, by the use of a standardized framework, non-parametric assessments will be more reliable, more repeatable, more manageable, faster and less costly. © 2010 Elsevier B.V. All rights reserved.
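The kind of non-parametric model the framework supports, input-oriented constant-returns (CCR) Data Envelopment Analysis, can be sketched as a small linear programme. This is the generic textbook formulation solved with a general-purpose LP routine, not the COOPER-framework itself:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j0):
    """Input-oriented CCR efficiency of DMU j0.
    X: (n, m) input matrix, Y: (n, s) output matrix, rows = DMUs."""
    n, m = X.shape
    s = Y.shape[1]
    # decision variables: [theta, lambda_1, ..., lambda_n]
    c = np.zeros(n + 1)
    c[0] = 1.0                       # minimise the contraction factor theta
    A_ub = np.zeros((m + s, n + 1))
    b_ub = np.zeros(m + s)
    # sum_j lambda_j * x_j <= theta * x_{j0}  (inputs of the composite peer)
    A_ub[:m, 0] = -X[j0]
    A_ub[:m, 1:] = X.T
    # sum_j lambda_j * y_j >= y_{j0}  (outputs at least match DMU j0)
    A_ub[m:, 1:] = -Y.T
    b_ub[m:] = -Y[j0]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.x[0]
```

A DMU with efficiency 1.0 lies on the estimated frontier; a score of 0.5 means the same outputs could in principle be produced with half the inputs.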
A novel, dynamic, in vivo, non-contact method of measuring oxygen depletion rate of the anterior eye
Abstract:
Despite the importance of oxygen measurements, techniques have been limited by their invasive nature and small corneal area of assessment. The aim of this study was to assess a non-contact way of measuring oxygen uptake of the whole anterior eye.
Abstract:
The use of Diagnosis Related Groups (DRGs) as a mechanism for hospital financing is a currently debated topic in Portugal. The DRG system was scheduled to be initiated by the Health Ministry of Portugal on January 1, 1990 as an instrument for the allocation of public hospital budgets funded by the National Health Service (NHS), and as a method of payment for other third-party payers (e.g., Public Employees (ADSE), private insurers, etc.). Based on experience from other countries such as the United States, it was expected that implementation of this system would result in more efficient hospital resource utilisation and a more equitable distribution of hospital budgets. However, in order to minimise the potentially adverse financial impact on hospitals, the Portuguese Health Ministry decided to phase in the use of the DRG system for budget allocation gradually, using blended hospital-specific and national DRG case-mix rates. Since implementation in 1990, the percentage of each hospital's budget based on hospital-specific costs was to decrease, while the percentage based on DRG case-mix was to increase. This was scheduled to continue until 1995, when the plan called for allocating yearly budgets on a 50% national and 50% hospital-specific cost basis. While all other non-NHS third-party payers are currently paying based on DRGs, the adoption of DRG case-mix as a National Health Service budget-setting tool has been slower than anticipated. There is now some argument in both the political and academic communities as to the appropriateness of DRGs as a budget-setting criterion, as well as to their impact on hospital efficiency in Portugal. This paper uses a two-stage procedure to assess the impact of actual DRG payment on the productivity (through its components, i.e., technological change and technical efficiency change) of diagnostic technology in Portuguese hospitals during the years 1992–1994, using both parametric and non-parametric frontier models.
We find evidence that the DRG payment system does appear to have had a positive impact on productivity and technical efficiency of some commonly employed diagnostic technologies in Portugal during this time span.
Abstract:
Often observations are nested within other units. This is particularly the case in the educational sector, where school performance in terms of value added is the result of the school's contribution as well as pupil academic ability and other features relating to the pupil. Traditionally, the literature uses parametric (i.e. assuming a priori a particular function for the production process) Multi-Level Models to estimate the performance of nested entities. This paper discusses the use of the non-parametric (i.e. without a priori assumptions on the production process) Free Disposal Hull model as an alternative approach. While taking into account contextual characteristics as well as atypical observations, we show how to decompose non-parametrically the overall inefficiency of a pupil into a unit-specific and a higher-level (i.e. school) component. Using a sample of entry and exit attainments of 3017 girls in British ordinary single-sex schools, we test the robustness of the non-parametric and parametric estimates. We find that the two methods agree in the relative measures of the scope for potential attainment improvement. Further, the two methods agree on the variation in pupil attainment and the proportion attributable to the pupil and school levels.
Abstract:
In this paper we develop a set of novel Markov chain Monte Carlo algorithms for Bayesian smoothing of partially observed non-linear diffusion processes. The sampling algorithms developed herein use a deterministic approximation to the posterior distribution over paths as the proposal distribution for a mixture of an independence and a random walk sampler. The approximating distribution is sampled by simulating an optimized time-dependent linear diffusion process derived from the recently developed variational Gaussian process approximation method. Flexible blocking strategies are introduced to further improve mixing, and thus the efficiency, of the sampling algorithms. The algorithms are tested on two diffusion processes: one with double-well potential drift and another with SINE drift. The new algorithm's accuracy and efficiency are compared with state-of-the-art hybrid Monte Carlo based path sampling. It is shown that in practical, finite-sample applications the algorithm is accurate except in the presence of large observation errors and low observation densities, which lead to a multi-modal structure in the posterior distribution over paths. More importantly, the variational approximation assisted sampling algorithm outperforms hybrid Monte Carlo in terms of computational efficiency, except when the diffusion process is densely observed with small errors, in which case both algorithms are equally efficient.
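The mixture of an independence and a random-walk Metropolis-Hastings sampler described above can be sketched on a toy one-dimensional double-well target (a stand-in for the paper's path-space setting; all tuning constants here are illustrative):

```python
import numpy as np

def log_target(x):
    # toy double-well log-density, log p(x) = -(x^2 - 1)^2 (unnormalised)
    return -(x ** 2 - 1.0) ** 2

def mixture_sampler(n_iter, rw_step=1.0, indep_scale=2.0, p_indep=0.3, seed=0):
    """Metropolis-Hastings mixing an independence proposal N(0, indep_scale^2)
    with a random-walk proposal N(x, rw_step^2)."""
    rng = np.random.default_rng(seed)
    x = 0.0
    out = np.empty(n_iter)
    for i in range(n_iter):
        if rng.random() < p_indep:
            prop = rng.normal(0.0, indep_scale)
            # Hastings correction: the proposal density ignores the state
            log_q_ratio = (prop ** 2 - x ** 2) / (2.0 * indep_scale ** 2)
        else:
            prop = x + rng.normal(0.0, rw_step)  # symmetric: ratio cancels
            log_q_ratio = 0.0
        log_alpha = log_target(prop) - log_target(x) + log_q_ratio
        if np.log(rng.random()) < log_alpha:
            x = prop
        out[i] = x
    return out
```

The independence component lets the chain jump between the two modes at ±1 in a single move, which is precisely what a pure random walk struggles with when the posterior over paths becomes multi-modal.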
Abstract:
Grape is one of the world's largest fruit crops, with approximately 67.5 million tonnes produced each year, and energy is an important element in modern grape production, which depends heavily on fossil and other energy resources. Efficient use of these energies is a necessary step toward reducing environmental hazards, preventing destruction of natural resources and ensuring agricultural sustainability. Hence, identifying excessive use of energy as well as reducing energy inputs is the main focus of this paper, in order to optimize energy consumption in grape production. In this study we use a two-stage methodology to find the association between energy efficiency and performance explained by farmers' specific characteristics. In the first stage, a non-parametric Data Envelopment Analysis is used to model efficiencies as an explicit function of human labor, machinery, chemicals, FYM (farmyard manure), diesel fuel, electricity and water-for-irrigation energies. In the second stage, farm-specific variables such as farmers' age, gender, level of education and agricultural experience are used in a Tobit regression framework to explain how these factors influence the efficiency of grape farming. The results of the first stage show substantial inefficiency among the grape producers in the studied area, while the second stage shows that the main difference between efficient and inefficient farmers was in the use of chemicals, diesel fuel and water for irrigation. Efficient farmers' use of chemicals such as insecticides, herbicides and fungicides was considerably lower than that of inefficient farmers. The results also revealed that more educated farmers are more energy efficient than their less educated counterparts. © 2013.
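The second-stage Tobit regression can be sketched as a censored-normal maximum-likelihood fit. This is a generic single-regressor illustration with right-censoring at 1 (DEA efficiency scores are bounded above by 1); the helper name and starting values are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def tobit_fit(x, y, upper=1.0):
    """Right-censored Tobit MLE: y* = b0 + b1*x + e,  y = min(y*, upper).
    Returns (b0, b1, sigma)."""
    cens = y >= upper - 1e-12
    def nll(p):
        b0, b1, log_s = p
        s = np.exp(log_s)               # parameterise sigma > 0
        mu = b0 + b1 * x
        ll = np.where(
            cens,
            norm.logsf((upper - mu) / s),           # P(y* >= upper)
            norm.logpdf((y - mu) / s) - np.log(s),  # density of observed y
        )
        return -ll.sum()
    res = minimize(nll, x0=np.zeros(3), method="Nelder-Mead",
                   options={"maxiter": 5000})
    return res.x[0], res.x[1], np.exp(res.x[2])
```

Unlike OLS, the censored observations contribute a survival-probability term rather than a density term, which removes the bias that truncating scores at 1 would otherwise cause.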
Abstract:
The body of work presented in this thesis is in three main parts: [1] the effect of ultrasound on freezing events of ionic systems, [2] the importance of formulation osmolality in freeze drying, and [3] a novel system for increasing the primary freeze drying rate. Chapter 4 briefly presents the work on method optimisation, which is still very much in its infancy. Aspects of freezing such as nucleation and ice crystal growth are strongly related to ice crystal morphology; however, the ice nucleation process typically occurs in a random, non-deterministic and spontaneous manner. In view of this, ultrasound, an emerging application in the pharmaceutical sciences, has been applied to aid in the acceleration of nucleation and shorten the freezing process. The research presented in this thesis aimed to study the effect of sonication on nucleation events in ionic solutions and, more importantly, how sonication impacts the freezing process. This work confirmed that nucleation does occur in a random manner. It also showed that ultrasonication accelerates the ice nucleation process and increases the freezing rate of a solution. Cryopreservation of animal sperm is an important aspect of breeding in animal science, especially for endangered species. In order for sperm cryopreservation to be successful, cryoprotectants as well as semen extenders are used. One of the factors allowing semen preservation media to be optimal is the osmolality of the semen extenders used. Although preservation of animal sperm has no relation to freeze drying of pharmaceuticals, it was used in this thesis to make a case for considering the osmolality of a formulation (prepared for freeze drying) as a factor conferring protein protection against the stresses of freeze drying. The osmolalities of some common solutes (mostly sugars) used in freeze drying were determined (molal concentrations from 0.1 m to 1.2 m).
Preliminary investigations of the osmolality and osmotic coefficients of common solutes were carried out. It was observed that the osmotic coefficient trends for the sugars analysed could be grouped according to the type of sugar. The trends observed show the need for further studies of osmolality and of how it may be of importance to protein or API protection during freeze drying processes. Primary drying is usually the longest part of the freeze drying process, and primary drying times lasting days or even weeks are not uncommon; however, longer primary drying times lead to longer freeze drying cycles and consequently increased production costs. Much work has been done previously by others using different processes (such as annealing) to improve primary drying times; however, these do not come without drawbacks. A novel system, involving the formation of a frozen vial system which creates a void between the formulation and the inside wall of a vial, has been devised to increase the primary freeze drying rate of formulations without product damage. Although the work is not nearly complete, it has been shown that it is possible to improve and increase the primary drying rate of formulations without making any modifications to existing formulations, changing storage vials, or increasing the surface area of freeze dryer shelves.
Abstract:
In this paper we develop a set of novel Markov chain Monte Carlo algorithms for Bayesian smoothing of partially observed non-linear diffusion processes. The sampling algorithms developed herein use a deterministic approximation to the posterior distribution over paths as the proposal distribution for a mixture of an independence and a random walk sampler. The approximating distribution is sampled by simulating an optimized time-dependent linear diffusion process derived from the recently developed variational Gaussian process approximation method. The novel diffusion bridge proposal derived from the variational approximation allows the use of a flexible blocking strategy that further improves mixing, and thus the efficiency, of the sampling algorithms. The algorithms are tested on two diffusion processes: one with double-well potential drift and another with SINE drift. The new algorithm's accuracy and efficiency are compared with state-of-the-art hybrid Monte Carlo based path sampling. It is shown that in practical, finite-sample applications the algorithm is accurate except in the presence of large observation errors and low observation densities, which lead to a multi-modal structure in the posterior distribution over paths. More importantly, the variational approximation assisted sampling algorithm outperforms hybrid Monte Carlo in terms of computational efficiency, except when the diffusion process is densely observed with small errors, in which case both algorithms are equally efficient. © 2011 Springer-Verlag.
Abstract:
Are the learning procedures of genetic algorithms (GAs) able to generate optimal architectures for artificial neural networks (ANNs) in high frequency data? In this experimental study, GAs are used to identify the best architecture for ANNs. Additional learning is undertaken by the ANNs to forecast daily excess stock returns. No ANN architectures were able to outperform a random walk, despite the finding of non-linearity in the excess returns. This failure is attributed to the absence of suitable ANN structures and further implies that researchers need to be cautious when making inferences from ANN results that use high frequency data.
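The GA-over-architectures idea can be sketched generically. Here `val_error` is a hypothetical stand-in for the validation error that would be obtained by actually training an ANN with a given number of hidden units, and the GA operators (elitism, midpoint crossover, integer mutation) are illustrative choices, not the study's exact procedure:

```python
import random

def val_error(n_hidden):
    """Hypothetical validation-error proxy for an architecture with
    n_hidden units; in a real study this would train and evaluate an ANN."""
    return (n_hidden - 16) ** 2

def ga_search(fitness, lo=1, hi=64, pop_size=20, gens=30, seed=0):
    """Minimise `fitness` over integer architectures in [lo, hi]."""
    rng = random.Random(seed)
    pop = [rng.randint(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 4]        # keep the best quarter
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = (a + b) // 2            # crossover: midpoint of parents
            if rng.random() < 0.3:          # mutation: small integer jitter
                child += rng.randint(-3, 3)
            children.append(min(hi, max(lo, child)))
        pop = elite + children
    return min(pop, key=fitness)
```

The expensive step in practice is the fitness evaluation (a full ANN training run per candidate), which is why the population and generation counts in such studies are kept small.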
Abstract:
This paper discusses the use of the non-parametric free disposal hull (FDH) and the parametric multi-level model (MLM) as alternative methods for measuring pupil and school attainment where hierarchically structured data are available. Using robust FDH estimates, we show how to decompose the overall inefficiency of a unit (a pupil) into a unit-specific and a higher-level (a school) component. Using a sample of entry and exit attainments of 3017 girls in British ordinary single-sex schools, we test the robustness of the non-parametric and parametric estimates. Finally, the paper uses the traditional MLM model in a best-practice framework so that pupil and school efficiencies can be computed.
Abstract:
Different types of numerical data can be collected in a scientific investigation and the choice of statistical analysis will often depend on the distribution of the data. A basic distinction between variables is whether they are ‘parametric’ or ‘non-parametric’. When a variable is parametric, the data come from a symmetrically shaped distribution known as the ‘Gaussian’ or ‘normal distribution’ whereas non-parametric variables may have a distribution which deviates markedly in shape from normal. This article describes several aspects of the problem of non-normality including: (1) how to test for two common types of deviation from a normal distribution, viz., ‘skew’ and ‘kurtosis’, (2) how to fit the normal distribution to a sample of data, (3) the transformation of non-normally distributed data and scores, and (4) commonly used ‘non-parametric’ statistics which can be used in a variety of circumstances.
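Testing for skew and kurtosis starts from the sample moments. A minimal sketch follows (population formulae; Fisher's definition of excess kurtosis is used, so both statistics are 0 for a perfectly normal sample):

```python
def moments(data):
    """Sample skewness and excess kurtosis (population formulae)."""
    n = len(data)
    mean = sum(data) / n
    m2 = sum((x - mean) ** 2 for x in data) / n   # variance
    m3 = sum((x - mean) ** 3 for x in data) / n
    m4 = sum((x - mean) ** 4 for x in data) / n
    skew = m3 / m2 ** 1.5          # 0 for a symmetric distribution
    kurt = m4 / m2 ** 2 - 3.0      # 0 for a normal distribution
    return skew, kurt
```

A markedly positive skew or a large excess kurtosis signals the kind of deviation from normality that would prompt either a transformation of the data or a non-parametric test.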
Abstract:
The thesis is divided into four chapters: introduction, experimental, results and discussion of the free ligands, and results and discussion of the complexes. The First Chapter, the introductory chapter, is a general introduction to the study of solid state reactions. The Second Chapter is devoted to the materials and experimental methods that have been used for carrying out the experiments. The Third Chapter is concerned with the characterisation of the free ligands (picolinic acid, nicotinic acid, and isonicotinic acid) using elemental analysis, IR spectra, X-ray diffraction, and mass spectra. Additionally, the thermal behaviour of the free ligands in air has been studied by means of thermogravimetry (TG), derivative thermogravimetry (DTG), and differential scanning calorimetry (DSC) measurements. The thermal decomposition behaviour of the three free ligands was not identical. Finally, a computer program has been used for the kinetic evaluation of non-isothermal differential scanning calorimetry data according to composite and single heating-rate methods, in comparison with the Ozawa and Kissinger methods. The most probable reaction mechanism for the free ligands was the Avrami-Erofeev equation (A), which describes a solid-state nucleation-growth mechanism. The activation parameters of the decomposition reaction for the free ligands were calculated, and the results of the different methods of data analysis were compared and discussed. The Fourth Chapter, the final chapter, deals with the preparation of cobalt, nickel, and copper complexes with mono-pyridine carboxylic acids in aqueous solution. The prepared complexes have been characterised by elemental analysis, IR spectra, X-ray diffraction, magnetic moments, and electronic spectra. The stoichiometry of these compounds was ML2·x(H2O) (where M = metal ion, L = organic ligand and x = number of water molecules).
The environments of the cobalt, nickel, and copper nicotinates and of the cobalt and nickel picolinates were octahedral, whereas the environment of copper picolinate [Cu(PA)2] was tetragonal. However, the cobalt, nickel, and copper isonicotinates had polymeric octahedral structures. The morphological changes that occurred throughout the decomposition were followed by SEM observation. The thermal behaviour of the prepared complexes in air was studied by TG, DTG, and DSC measurements. During the degradation of the hydrated complexes, the water molecules of crystallisation were lost in one or two steps, followed by loss of the organic ligands, leaving the metal oxides. Comparison between the DTG temperatures of the first and second steps of dehydration suggested that the water of crystallisation was more strongly bonded to the anion in the Ni(II) complexes than in the complexes of Co(II) and Cu(II). The intermediate products of decomposition were not identified. The most probable reaction mechanism for the prepared complexes was also the Avrami-Erofeev equation (A), characteristic of a solid-state nucleation-growth mechanism. The temperature dependence of conductivity under direct current was determined for the cobalt, nickel, and copper isonicotinates, and the activation energy (ΔE) was calculated. The temperature and frequency dependence of conductivity, the frequency dependence of the dielectric constant, and the dielectric loss for nickel isonicotinate were determined using alternating current. The value of the s parameter and the density of states [N(Ef)] were calculated. Keywords: thermal decomposition, kinetics, electrical conduction, pyridine mono-carboxylic acid, complex, transition metal complex.
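The Kissinger method mentioned in the kinetic evaluation rests on a simple linear relation: ln(β/Tp²) plotted against 1/Tp has slope −Ea/R, where β is the heating rate and Tp the DSC peak temperature. A minimal sketch with synthetic data (all numerical values are illustrative, not taken from the thesis):

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

def kissinger_ea(betas, t_peaks):
    """Activation energy (J/mol) from DSC peak temperatures measured at
    several heating rates: ln(beta / Tp^2) is linear in 1/Tp, slope -Ea/R."""
    x = 1.0 / np.asarray(t_peaks, dtype=float)
    y = np.log(np.asarray(betas, dtype=float) / np.asarray(t_peaks, dtype=float) ** 2)
    slope = np.polyfit(x, y, 1)[0]
    return -slope * R
```

Given peak temperatures at, say, four heating rates, a straight-line fit of ln(β/Tp²) against 1/Tp recovers Ea directly from the slope; the intercept carries the pre-exponential factor.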