977 results for Instrumental-variable Methods
Abstract:
Visualization of high-dimensional data has always been a challenging task. Here we discuss and propose variants of non-linear data projection methods (Generative Topographic Mapping (GTM) and GTM with simultaneous feature saliency (GTM-FS)) that are adapted to be effective on very high-dimensional data. The adaptations use log-space values at certain steps of the Expectation Maximization (EM) algorithm and during the visualization process. We have tested the proposed algorithms by visualizing electrostatic potential data for Major Histocompatibility Complex (MHC) class-I proteins. The experiments show that the adapted versions of GTM and GTM-FS work successfully with data of more than 2000 dimensions, and we compare the results with other linear/nonlinear projection methods: Principal Component Analysis (PCA), Neuroscale (NSC) and the Gaussian Process Latent Variable Model (GPLVM).
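The log-space adaptation mentioned above typically amounts to normalising E-step quantities with the log-sum-exp trick, since Gaussian densities of ~2000-dimensional points underflow when exponentiated directly. A minimal NumPy sketch of that standard trick (an assumption about the mechanism, not the authors' actual code):

```python
import numpy as np

def log_responsibilities(log_pdf):
    """Normalise per-point component log-densities entirely in log space.

    log_pdf: (N, K) array of log p(x_n | component k).  For very
    high-dimensional data these values are hugely negative, so
    exponentiating them directly would underflow to zero.
    """
    m = log_pdf.max(axis=1, keepdims=True)      # stabilising shift
    log_norm = m + np.log(np.exp(log_pdf - m).sum(axis=1, keepdims=True))
    return log_pdf - log_norm                   # rows of exp() sum to 1
```

Even when every component log-density is around -2000, the subtraction of the row maximum keeps the intermediate exponentials in a representable range.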
Abstract:
1. Pearson's correlation coefficient only tests whether the data fit a linear model. With large numbers of observations, quite small values of r become significant and the X variable may account for only a minute proportion of the variance in Y. Hence, the value of r squared should always be calculated and included in a discussion of the significance of r. 2. The use of r assumes that a bivariate normal distribution is present and this assumption should be examined prior to the study. If Pearson's r is not appropriate, then a non-parametric correlation coefficient such as Spearman's rs may be used. 3. A significant correlation should not be interpreted as indicating causation, especially in observational studies, in which there is a high probability that the two variables are correlated because of their mutual correlations with other variables. 4. In studies of measurement error, there are problems in using r as a test of reliability and the ‘intra-class correlation coefficient’ should be used as an alternative. A correlation test provides only limited information as to the relationship between two variables. Fitting a regression line to the data using the method known as ‘least squares’ provides much more information, and the methods of regression and their application in optometry will be discussed in the next article.
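The point about r versus r squared is easy to demonstrate numerically; a small illustrative sketch with SciPy (the data are simulated, not from the article):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = 0.3 * x + rng.normal(size=500)   # X explains only a small share of Y

r, p = stats.pearsonr(x, y)
print(f"r = {r:.2f}, p = {p:.2g}, r^2 = {r * r:.2f}")
# A tiny p-value can coexist with a small r^2, so report both.

rho, _ = stats.spearmanr(x, y)       # rank-based alternative to Pearson's r
```

With 500 observations the correlation is highly significant, yet r squared shows that x accounts for well under half of the variance in y.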
Abstract:
Multiple regression analysis is a complex statistical method with many potential uses. It has also become one of the most abused of all statistical procedures, since anyone with a database and suitable software can carry it out. An investigator should always have a clear hypothesis in mind before carrying out such a procedure, together with knowledge of the limitations of each aspect of the analysis. In addition, multiple regression is probably best used in an exploratory context, identifying variables that might profitably be examined by more detailed studies. Where there are many variables potentially influencing Y, they are likely to be intercorrelated and to account for relatively small amounts of the variance. Any analysis in which R squared is less than 50% should be suspect as probably not indicating the presence of significant variables. A further problem relates to sample size. It is often stated that the number of subjects or patients must be at least 5–10 times the number of variables included in the study [5]. This advice should be taken only as a rough guide, but it does indicate that the variables included should be selected with great care, as inclusion of an obviously unimportant variable may have a significant impact on the sample size required.
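A minimal multiple-regression sketch showing the R squared check the article recommends (the data are simulated and the variable names illustrative):

```python
import numpy as np

def fit_ols(X, y):
    """Ordinary least squares with an intercept; returns (coefficients, R^2)."""
    A = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    ss_res = resid @ resid
    ss_tot = (y - y.mean()) @ (y - y.mean())
    return beta, 1.0 - ss_res / ss_tot

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))        # ~50 subjects per variable: well clear
                                     # of the 5-10x rule of thumb
y = 2.0 + 3.0 * X[:, 0] - 1.0 * X[:, 1] + 0.1 * rng.normal(size=100)
beta, r2 = fit_ols(X, y)
# By the article's heuristic, an R^2 below 0.5 would make the model suspect.
```

Here beta[0] is the intercept and the remaining entries the slope coefficients; comparing r2 against the 50% heuristic is a quick sanity check before interpreting any individual coefficient.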
Abstract:
Contrary to previously held beliefs, it is now known that bacteria exist not only on the surface of the skin but are also distributed at varying depths beneath the skin surface. Hence, in order to sterilise the skin, antimicrobial agents are required to penetrate across the skin and eliminate the bacteria residing at all depths. Chlorhexidine is the antimicrobial agent most widely used for skin sterilisation. However, due to its poor permeation rate across the skin, sterilisation of the skin cannot be achieved and, therefore, the remaining bacteria can act as a source of infection during an operation or insertion of catheters. The underlying theme of this study is to enhance the permeation of this antimicrobial agent in the skin by employing chemical (enhancers and supersaturated systems) or physical (iontophoresis) techniques. The hydrochloride salt of chlorhexidine (CHX), a poorly soluble salt, was used throughout this study. The effect of ionisation on the in vitro permeation rate across excised human epidermis was investigated using Franz-type diffusion cells. Saturated solutions of CHX were used as the donor and the variable studied was vehicle pH. The permeation rate increased with increasing vehicle pH. The pH effect was not related to the level of ionisation of the drug. The effect of the donor vehicle was also studied using saturated solutions of CHX in 10% and 20% ethanol as the donor solutions. Permeation of CHX was enhanced by increasing the concentration of ethanol, which could be due to the higher concentration of CHX in the donor phase and the effect of ethanol itself on the membrane. The interplay between drug diffusion and enhancer pretreatment of the epidermis was studied.
Pretreatment of the membrane with 10% Azone/PG demonstrated the highest diffusion rate, followed by 10% oleic acid/PG pretreatment, compared to the other pretreatment regimens (ethanol, dimethyl sulfoxide (DMSO), propylene glycol (PG), sodium dodecyl sulphate (SDS) and dodecyl trimethyl ammonium bromide (DTAB)). Differential Scanning Calorimetry (DSC) was also employed to study the mode of action of these enhancers. The potential of supersaturated solutions in enhancing percutaneous absorption of CHX was investigated. Various anti-nucleating polymers were screened in order to establish the most effective agent. Polyvinylpyrrolidone (PVP, K30) was found to be a better candidate than its lower molecular weight counterpart (K25) and hydroxypropyl methylcellulose (HPMC). The permeation studies showed an increase in diffusion rate with an increasing degree of saturation. Iontophoresis is a physical means of transdermal drug delivery enhancement that causes an increased penetration of molecules into or through the skin by the application of an electric field. This technique was employed in conjunction with chemical enhancers to assess the effect on CHX permeation across the human epidermis. An improved, pH-dependent transport of CHX was observed upon application of the current. Combined use of iontophoresis and chemical enhancers further increased the CHX transport, indicating a synergistic effect. Pretreatment of the membrane with 10% Azone/PG demonstrated the greatest effect.
Abstract:
This thesis set out to develop an objective analysis programme that correlates with subjective grades but has improved sensitivity and reliability in its measures, so that the possibility of early detection and reliable monitoring of changes in anterior ocular surfaces (bulbar hyperaemia, palpebral redness, palpebral roughness and corneal staining) could be increased. The sensitivity of the programme was 20× greater than subjective grading by optometrists. The reliability was found to be optimal (r = 1.0), with subjective grading up to 144× more variable (r = 0.08). Objective measures were used to create formulae for an overall ‘objective grade’ (per surface) equivalent to those displayed by the CCLRU or Efron scales. The correlation between the formulated objective and subjective grades was high, with adjusted r2 up to 0.96. Baseline levels of objective grade were investigated over four age groups (5–85 years, n = 120) so that in practice a comparison against the ‘normal limits’ could be made. Differences for bulbar hyperaemia were found between the age groups (p<0.001), and also for palpebral redness and roughness (p<0.001). The objective formulae were then applied to the investigation of diurnal variation in order to account for any change that may affect the baseline. Increases in bulbar hyperaemia and palpebral redness were found between examinations in the morning and evening. Correction factors were recommended. The programme was then applied to clinical situations in the form of a contact lens trial and an investigation into iritis and keratoconus, where it successfully recognised various surface changes. This programme could become a valuable tool, greatly improving the chances of early detection of anterior ocular abnormalities, and facilitating reliable monitoring of disease progression in clinical as well as research environments.
Abstract:
There are several methods of providing series compensation for transmission lines using power electronic switches. Four methods of series compensation have been examined in this thesis: the thyristor controlled series capacitor, a voltage sourced inverter series compensator using a capacitor as the series element, a current sourced inverter series compensator, and a voltage sourced inverter using an inductor as the series element. All the compensators examined will provide a continuously variable series voltage which is controlled by the switching of the electronic switches. Two of the circuits, the thyristor controlled series capacitor and the current sourced inverter series compensator, will offer both capacitive and inductive compensation. The other two will produce either capacitive or inductive series compensation. The thyristor controlled series capacitor offers the widest range of series compensation. However, there is a band of unavailable compensation between 0 and 1 pu capacitive compensation. Compared to the other compensators examined, the harmonic content of the compensating voltage is quite high. An algebraic analysis showed that there is more than one state in which the thyristor controlled series capacitor can operate. One of these states has the undesirable effect of introducing large losses. The voltage sourced inverter series compensator using a capacitor as the series element will provide only capacitive compensation. It uses two capacitors, which increases the cost of the compensator significantly above the other three. This circuit has the advantage of very low harmonic distortion. The current sourced inverter series compensator will provide both capacitive and inductive series compensation. The harmonic content of its compensating voltage is second only to that of the voltage sourced inverter series compensator using a capacitor as the series element.
The voltage sourced inverter series compensator using an inductor as the series element will only provide inductive compensation, and it is the least expensive compensator examined. Unfortunately, the harmonics introduced by this circuit are considerable.
Abstract:
A methodology is presented which can be used to predict the level of electromagnetic interference, in the form of conducted and radiated emissions, from variable speed drives, the drive modelled being a Eurotherm 583. The conducted emissions are predicted using an accurate circuit model of the drive and its associated equipment. The circuit model was constructed from a number of different areas, these being: the power electronics of the drive; the line impedance stabilising network used during the experimental work to measure the conducted emissions; a model of an induction motor assuming near zero load; an accurate model of the shielded cable which connected the drive to the motor; and finally the parasitic capacitances that were present in the drive modelled. The conducted emissions were predicted with an error of +/-6dB over the frequency range 150kHz to 16MHz, which compares well with the limits set in the standards, which specify a frequency range of 150kHz to 30MHz. The conducted emissions model was also used to predict the current and voltage sources which were used to predict the radiated emissions from the drive. Two methods for the prediction of the radiated emissions from the drive were investigated, the first being two-dimensional finite element analysis and the second three-dimensional transmission line matrix modelling. The finite element model took account of the features of the drive that were considered to produce the majority of the radiation, these features being the switching of the IGBTs in the inverter, the shielded cable which connected the drive to the motor, as well as some of the cables that were present in the drive. The model also took account of the structure of the test rig used to measure the radiated emissions.
It was found that the majority of the radiation produced came from the shielded cable and the common mode currents flowing in its shield, and that it was feasible to model the radiation from the drive by modelling only the shielded cable. The radiated emissions were correctly predicted in the frequency range 30MHz to 200MHz with an error of +10dB/-6dB. The transmission line matrix method modelled the shielded cable which connected the drive to the motor and also took account of the architecture of the test rig. Only limited simulations were performed using the transmission line matrix model, as it was found to be a very slow method and not an ideal solution to the problem. However, the limited results obtained were comparable, to within 5%, to the results obtained using the finite element model.
Abstract:
Projection of a high-dimensional dataset onto a two-dimensional space is a useful tool to visualise structures and relationships in the dataset. However, a single two-dimensional visualisation may not display all the intrinsic structure. Therefore, hierarchical/multi-level visualisation methods have been used to extract more detailed understanding of the data. Here we propose a multi-level Gaussian process latent variable model (MLGPLVM). MLGPLVM works by segmenting data (with e.g. K-means, Gaussian mixture model or interactive clustering) in the visualisation space and then fitting a visualisation model to each subset. To measure the quality of multi-level visualisation (with respect to parent and child models), metrics such as trustworthiness, continuity, mean relative rank errors, visualisation distance distortion and the negative log-likelihood per point are used. We evaluate the MLGPLVM approach on the ‘Oil Flow’ dataset and a dataset of protein electrostatic potentials for the ‘Major Histocompatibility Complex (MHC) class I’ of humans. In both cases, visual observation and the quantitative quality measures have shown better visualisation at lower levels.
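Of the quality metrics listed, trustworthiness (and, by swapping the roles of the two spaces, continuity) is readily computed with scikit-learn. A sketch with random stand-in data and PCA standing in for the visualisation model (the role-swap shortcut for continuity is a common convention and may differ from the paper's exact definition):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import trustworthiness

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))             # stand-in high-dimensional data
Z = PCA(n_components=2).fit_transform(X)   # stand-in 2-D visualisation

t = trustworthiness(X, Z, n_neighbors=5)   # are map neighbours true neighbours?
c = trustworthiness(Z, X, n_neighbors=5)   # continuity, via the role swap
```

In a multi-level setting the same two numbers would be computed per child model and compared against the parent, which is how "better visualisation at lower levels" can be quantified.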
Abstract:
The ontological approach to structuring knowledge and describing the data domain of a field of knowledge is considered. An ontology-driven tool complex for the research and development of sensor systems is described. Some approaches to solving the tasks most frequently encountered in creating recognition procedures are also considered.
Abstract:
This paper presents a Variable Neighbourhood Search (VNS) approach for solving the Maximum Set Splitting Problem (MSSP). The algorithm forms a system of neighbourhoods based on changing the component of an increasing number of elements. An efficient local search procedure swaps the components of pairs of elements and yields a relatively short running time. Numerical experiments are performed on instances known in the literature: minimum hitting set and Steiner triple systems. Computational results show that the proposed VNS achieves all optimal or best known solutions in short times. The experiments indicate that the VNS compares favorably with other methods previously used for solving the MSSP. ACM Computing Classification System (1998): I.2.8.
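A toy sketch of the VNS scheme described above: shaking flips the component of k elements (the k-th neighbourhood), with k growing until an improvement is found. For brevity the local search here flips single elements rather than swapping pairs as in the paper, so this is an illustration of the scheme, not a reimplementation:

```python
import random

def split_count(sets, colour):
    """Objective: number of sets containing elements of both components."""
    return sum(1 for s in sets if len({colour[e] for e in s}) == 2)

def local_search(sets, colour):
    """First-improvement search over single-element component flips."""
    val, improved = split_count(sets, colour), True
    while improved:
        improved = False
        for e in range(len(colour)):
            colour[e] ^= 1
            new = split_count(sets, colour)
            if new > val:
                val, improved = new, True
            else:
                colour[e] ^= 1               # revert a non-improving flip
    return val

def vns_mssp(n, sets, k_max=3, iters=100, seed=0):
    rng = random.Random(seed)
    colour = [rng.randint(0, 1) for _ in range(n)]
    best = local_search(sets, colour)
    for _ in range(iters):
        k = 1
        while k <= k_max:                    # systematically grow the shake
            cand = colour[:]
            for e in rng.sample(range(n), min(k, n)):
                cand[e] ^= 1                 # k-th neighbourhood: flip k elements
            val = local_search(sets, cand)
            if val > best:
                best, colour, k = val, cand, 1   # move, restart neighbourhoods
            else:
                k += 1
    return best, colour
```

On the tiny instance with sets {0,1}, {2,3}, {0,2} the optimum splits all three sets, which the sketch finds quickly.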
Abstract:
This research focuses on automatically adapting a search engine size in response to fluctuations in query workload. Deploying a search engine in an Infrastructure as a Service (IaaS) cloud facilitates allocating or deallocating computer resources to or from the engine. Our solution is to contribute an adaptive search engine that will repeatedly re-evaluate its load and, when appropriate, switch over to a different number of active processors. We focus on three aspects and break them out into three sub-problems as follows: Continually determining the Number of Processors (CNP), New Grouping Problem (NGP) and Regrouping Order Problem (ROP). CNP means that (in the light of the changes in the query workload in the search engine) there is a problem of determining the ideal number of processors p active at any given time to use in the search engine and we call this problem CNP. NGP happens when changes in the number of processors are determined and it must also be determined which groups of search data will be distributed across the processors. ROP is how to redistribute this data onto processors while keeping the engine responsive and while also minimising the switchover time and the incurred network load. We propose solutions for these sub-problems. For NGP we propose an algorithm for incrementally adjusting the index to fit the varying number of virtual machines. For ROP we present an efficient method for redistributing data among processors while keeping the search engine responsive. Regarding the solution for CNP, we propose an algorithm determining the new size of the search engine by re-evaluating its load. We tested the solution performance using a custom-built prototype search engine deployed in the Amazon EC2 cloud. Our experiments show that when we compare our NGP solution with computing the index from scratch, the incremental algorithm speeds up the index computation 2–10 times while maintaining a similar search performance.
The chosen redistribution method is 25% to 50% faster than other methods and reduces the network load by around 30%. For CNP we present a deterministic algorithm that shows a good ability to determine the new size of the search engine. When combined, these algorithms give an adaptive algorithm that is able to adjust the search engine size under a variable workload.
Abstract:
The paper reviews some additive and multiplicative properties of ranking procedures used for generalized tournaments with missing values and multiple comparisons. The methods analysed are the score, generalised row sum and least squares as well as fair bets and its variants. It is argued that generalised row sum should be applied not with a fixed parameter, but a variable one proportional to the number of known comparisons. It is shown that a natural additive property has strong links to independence of irrelevant matches, an axiom judged unfavourable when players have different opponents.
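The least squares method in the comparison admits a compact formulation: ratings q solve (D − M)q = s, where M counts comparisons between players, D is the diagonal of its row sums and s is the score vector, with the mean rating pinned to zero. A small sketch under that standard formulation (which may differ in details from the paper's axiomatic treatment):

```python
import numpy as np

def least_squares_rating(scores, matches):
    """Solve (D - M) q = s subject to sum(q) = 0.

    scores[i]     : wins minus losses of player i.
    matches[i][j] : number of comparisons between players i and j.
    """
    M = np.asarray(matches, dtype=float)
    L = np.diag(M.sum(axis=1)) - M          # Laplacian of the comparison graph
    A = np.vstack([L, np.ones(L.shape[0])]) # append the sum(q) = 0 constraint
    b = np.append(np.asarray(scores, dtype=float), 0.0)
    q, *_ = np.linalg.lstsq(A, b, rcond=None)
    return q
```

For a complete balanced round robin the least squares ratings reduce to the scores scaled by the number of players, which is why such methods only differ from the plain score when opponents differ, exactly the setting with missing values the paper studies.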
Abstract:
The Locard exchange principle proposes that a person cannot enter or leave an area or come in contact with an object without an exchange of materials. In the case of scent evidence, the suspect leaves his scent in the location of the crime scene itself or on objects found therein. Human scent evidence collected from a crime scene can be evaluated through the use of specially trained canines to determine an association between the evidence and a suspect. To date, there has been limited research as to the volatile organic compounds (VOCs) which comprise human odor and their usefulness in distinguishing among individuals. For the purposes of this research, human scent is defined as the most abundant volatile organic compounds present in the headspace above collected odor samples. An instrumental method has been created for the analysis of the VOCs present in human scent, and has been utilized for the optimization of materials used for the collection and storage of human scent evidence. This research project has identified the volatile organic compounds present in the headspace above collected scent samples from different individuals and various regions of the body, with the primary focus involving the armpit area and the palms of the hands. Human scent from the armpit area and palms of an individual sampled over time shows lower variation in the relative peak area ratio of the common compounds present than what is seen across a population. A comparison of the compounds present in human odor for an individual over time, and across a population, has been conducted and demonstrates that it is possible to instrumentally differentiate individuals based on the volatile organic compounds above collected odor samples.
Abstract:
In certain European countries and the United States of America, canines have been successfully used in human scent identification. There is, however, limited scientific knowledge on the composition of human scent and the detection mechanism that produces an alert from canines. This lack of information has resulted in successful legal challenges to human scent evidence in the courts of law. The main objective of this research was to utilize science to validate the current practices of using human scent evidence in criminal cases. The goals of this study were to utilize Headspace Solid Phase Micro Extraction Gas Chromatography Mass Spectrometry (HS-SPME-GC/MS) to determine the optimum collection and storage conditions for human scent samples, to investigate whether the amount of DNA deposited upon contact with an object affects the alerts produced by human scent identification canines, and to create a prototype pseudo human scent which could be used for training purposes. Hand odor samples which were collected on different sorbent materials and exposed to various environmental conditions showed that human scent samples should be stored without prolonged exposure to UVA/UVB light to allow minimal changes to the overall scent profile. Various methods of collecting human scent from objects were also investigated and it was determined that passive collection methods yield ten times more VOCs by mass than active collection methods. Through the use of the polymerase chain reaction (PCR), no correlation was found between the amount of DNA that was deposited upon contact with an object and the alerts that were produced by human scent identification canines. Preliminary studies conducted to create a prototype pseudo human scent showed that it is possible to produce fractions of a human scent sample which can be presented to the canines to determine whether specific fractions or the entire sample is needed to produce alerts by the human scent identification canines.
Abstract:
Concurrent software executes multiple threads or processes to achieve high performance. However, concurrency results in a huge number of different system behaviors that are difficult to test and verify. The aim of this dissertation is to develop new methods and tools for modeling and analyzing concurrent software systems at the design and code levels. This dissertation consists of several related results. First, a formal model of Mondex, an electronic purse system, is built using Petri nets from user requirements and formally verified using model checking. Second, Petri net models are automatically mined from event traces generated from scientific workflows. Third, partial order models are automatically extracted from instrumented concurrent program executions, and potential atomicity violation bugs are automatically verified based on the partial order models using model checking. Our formal specification and verification of Mondex have contributed to the worldwide effort to develop a verified software repository. Our method to mine Petri net models automatically from provenance offers a new approach to building scientific workflows. Our dynamic prediction tool, named McPatom, can predict several known bugs in real-world systems, including one that evades several other existing tools. McPatom is efficient and scalable, as it takes advantage of the nature of atomicity violations and considers only a pair of threads and accesses to a single shared variable at a time. However, predictive tools need to consider the trade-offs between precision and coverage. Based on McPatom, this dissertation presents two methods for improving the coverage and precision of atomicity violation predictions: 1) a post-prediction analysis method to increase coverage while ensuring precision; 2) a follow-up replaying method to further increase coverage. Both methods are implemented in a completely automatic tool.
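The pair-of-threads, single-variable restriction that makes this kind of analysis scalable can be illustrated with the classic unserializable three-access patterns on a shared variable. A simplified sketch (an illustration of the general access-pattern idea, not McPatom's model-checking approach):

```python
# The four unserializable local-remote-local access patterns on one shared
# variable: the local thread's two accesses are interleaved by a remote one.
UNSERIALIZABLE = {('R', 'W', 'R'), ('W', 'W', 'R'),
                  ('R', 'W', 'W'), ('W', 'R', 'W')}

def find_violations(trace, var):
    """Scan a trace of (thread, op, variable) events for unserializable
    interleavings on `var`.  Simplification: only adjacent accesses in the
    observed trace are checked, whereas a real predictor also explores
    alternative feasible interleavings of the same trace."""
    acc = [(t, op) for t, op, v in trace if v == var]
    hits = []
    for i in range(len(acc) - 2):
        (t1, o1), (t2, o2), (t3, o3) = acc[i:i + 3]
        if t1 == t3 and t1 != t2 and (o1, o2, o3) in UNSERIALIZABLE:
            hits.append((i, (o1, o2, o3)))
    return hits
```

For example, a read-write-read pattern where thread T2 writes x between two reads of x by T1 is flagged, which is the shape of a stale-read atomicity violation.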