20 results for rank-based procedure
in Aston University Research Archive
Abstract:
Analysing the molecular polymorphism and interactions of DNA, RNA and proteins is of fundamental importance in biology. Predicting the functions of polymorphic molecules is important for designing more effective medicines. Analysing major histocompatibility complex (MHC) polymorphism is important for mate choice, epitope-based vaccine design, transplantation rejection and related problems. Most existing exploratory approaches cannot analyse these datasets because of the large number of molecules and the high number of descriptors per molecule. This thesis develops novel methods for data projection in order to explore high-dimensional biological datasets by visualising them in a low-dimensional space. With increasing dimensionality, some existing data visualisation methods, such as generative topographic mapping (GTM), become computationally intractable. We propose variants of these methods in which log-transformations are used at certain steps of the expectation maximisation (EM) based parameter learning process, making them tractable for high-dimensional datasets. We demonstrate these variants on both synthetic data and an electrostatic potential dataset of MHC class-I molecules. We also propose to extend a latent trait model (LTM), suitable for visualising high-dimensional discrete data, to simultaneously estimate feature saliency as an integrated part of the parameter learning process of the visualisation model. This LTM variant not only gives better visualisation by modifying the projection map based on feature relevance, but also helps users to assess the significance of each feature. Another problem that has received little attention in the literature is the visualisation of mixed-type data. We propose to combine GTM and LTM in a principled way, using an appropriate noise model for each type of data, in order to visualise mixed-type data in a single plot. We call this model a generalised GTM (GGTM). We further extend the GGTM model to estimate feature saliencies while training the visualisation model; this is called GGTM with feature saliency (GGTM-FS). We demonstrate the effectiveness of these proposed models on both synthetic and real datasets. We evaluate visualisation quality using metrics such as a distance distortion measure and rank-based measures: trustworthiness, continuity, and mean relative rank errors with respect to the data space and the latent space. Where labels are known, we also use the KL divergence and the nearest-neighbour classification error to assess the separation between classes. We demonstrate the efficacy of the proposed models on both synthetic and real biological datasets, with a main focus on the MHC class-I dataset.
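The tractability issue referred to above typically arises in the E-step of GTM, where responsibilities involve exponentials of large negative squared distances once the data dimensionality is high. A minimal sketch of computing GTM responsibilities in the log domain via log-sum-exp is given below; it is illustrative only (not the thesis code), and the variable names and the use of NumPy/SciPy are assumptions.

```python
import numpy as np
from scipy.special import logsumexp

def gtm_responsibilities(T, Y, beta):
    """Log-domain E-step for GTM.

    T    : (N, D) data points
    Y    : (K, D) mixture centres projected from the latent grid
    beta : scalar inverse noise variance
    Returns (N, K) responsibilities computed via log-sum-exp to avoid
    underflow when the data dimension D is large.
    """
    # squared Euclidean distances between every data point and every centre
    d2 = np.sum((T[:, None, :] - Y[None, :, :]) ** 2, axis=2)    # (N, K)
    log_p = -0.5 * beta * d2                                      # unnormalised log responsibilities
    log_r = log_p - logsumexp(log_p, axis=1, keepdims=True)      # normalise in log space
    return np.exp(log_r)
```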
Abstract:
Various room temperature ionic liquids (RTILs), notably 1-methoxyethyl-3-methylimidazolium trifluoroacetate [MeOEtMIM]+[CF3COO]−, have been used to promote the Knoevenagel condensation to afford substituted olefins. All reactions proceeded effectively in the absence of any other catalysts or co-solvents with good to excellent yields. This method is simple and applicable to reactions involving a wide range of aldehydes and ketones with methylene compounds. The ionic liquid can be recycled without noticeable reduction of its catalytic activity. A plausible reaction mechanism is proposed.
Abstract:
Most pavement design procedures incorporate reliability to account for the effect of uncertainty and variability in the design inputs on predicted performance. The load and resistance factor design (LRFD) procedure, which delivers economical sections while considering the variability of each design input separately, has been recognised as an effective tool for incorporating reliability into design procedures. This paper presents a new reliability-based calibration in LRFD format for a mechanics-based fatigue cracking analysis framework. It employs a two-component reliability analysis methodology that combines a central composite design-based response surface approach with a first-order reliability method. The reliability calibration was carried out using a number of field pavement sections with well-documented performance histories and high-quality field and laboratory data. The effectiveness of the developed LRFD procedure was evaluated by performing pavement designs for various target reliabilities and design conditions. The results show excellent agreement between the target and actual reliabilities. They also indicate that more design features need to be included in the reliability calibration to minimise the deviation of the actual reliability from the target reliability.
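For background, the first-order reliability method mentioned above is often implemented with the Hasofer-Lind / Rackwitz-Fiessler (HL-RF) iteration in standard normal space. The sketch below shows that generic iteration only, not the paper's calibration code; the limit-state function, its gradient and the starting point are assumptions supplied by the caller (in the paper's framework the limit state would come from the fitted response surface).

```python
import numpy as np
from scipy.stats import norm

def form_reliability(g, grad_g, u0, tol=1e-6, max_iter=100):
    """Generic Hasofer-Lind / Rackwitz-Fiessler (HL-RF) iteration.

    g      : limit-state function in standard normal space; failure when g(u) <= 0
    grad_g : gradient of g
    u0     : starting point (e.g. the origin)
    Returns the reliability index beta and the first-order failure probability.
    """
    u = np.asarray(u0, dtype=float)
    for _ in range(max_iter):
        grad = grad_g(u)
        # HL-RF update: step onto the linearised limit-state surface
        u_new = grad * (np.dot(grad, u) - g(u)) / np.dot(grad, grad)
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    beta = np.linalg.norm(u)
    return beta, norm.cdf(-beta)
```

For a linear limit state g(u) = a·u + b the iteration converges in one step to beta = |b|/||a||, which is a convenient sanity check.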
Abstract:
Objective: To introduce a new technique for co-registration of Magnetoencephalography (MEG) with magnetic resonance imaging (MRI). We compare the accuracy of a new bite-bar with fixed fiducials to a previous technique whereby fiducial coils were attached proximal to landmarks on the skull. Methods: A bite-bar with fixed fiducial coils is used to determine the position of the head in the MEG co-ordinate system. Co-registration is performed by a surface-matching technique. The advantage of fixing the coils is that the co-ordinate system is not based upon arbitrary and operator-dependent fiducial points that are attached to landmarks (e.g. nasion and the preauricular points), but rather on those that are permanently fixed in relation to the skull. Results: As a consequence of minimizing coil movement during digitization, errors in localization of the coils are significantly reduced, as shown by a randomization test. Displacement of the bite-bar caused by removal and repositioning between MEG recordings is minimal (∼0.5 mm), and dipole localization accuracy of a somatosensory mapping paradigm shows a repeatability of ∼5 mm. The overall accuracy of the new procedure is greatly improved compared to the previous technique. Conclusions: The test-retest reliability and accuracy of target localization with the new design are superior to techniques that incorporate anatomically based fiducial points or coils placed on the circumference of the head. © 2003 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
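For context, surface-matching co-registration ultimately amounts to estimating the rigid transform between the MEG head coordinate frame and the MRI frame. The sketch below shows the standard least-squares rigid fit (Kabsch/SVD) for two corresponding point sets, which is one inner step of such a scheme; it is not the authors' implementation, and known point correspondences are assumed.

```python
import numpy as np

def rigid_fit(head_points, mri_points):
    """Least-squares rigid transform (R, t) mapping digitised head-surface
    points onto MRI-derived scalp points, assuming point correspondences."""
    P, Q = np.asarray(head_points, float), np.asarray(mri_points, float)
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                     # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t
```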
Abstract:
The work presented in this thesis describes an investigation into the production and properties of thin amorphous C films, with and without Cr doping, as a low-wear/low-friction coating applicable to MEMS and other micro- and nano-engineering applications. Firstly, an assessment was made of the available testing techniques. Secondly, the optimised test methods were applied to a series of sputtered films of thickness 10 - 2000 nm in order to: (i) investigate the effect of thickness on the properties of the coatings/coating process; (ii) investigate fundamental tribology at the nano-scale; and (iii) provide a starting point for nanotribological coating optimisation at ultra-low thickness. The use of XPS was investigated for the determination of sp3/sp2 carbon bonding. Under C 1s peak analysis, significant errors were identified, attributed to the absence of sufficient instrument resolution to guide the component peak structure (even with a high-resolution instrument). A simple peak-width analysis and correlation work with the C KLL D value confirmed the errors. The use of XPS for sp3/sp2 determination was therefore limited to initial tentative estimations. Nanoindentation was shown to provide consistent hardness and reduced modulus results with depth (to < 7 nm) when replicate data were suitably statistically processed. No significant pile-up or cracking of the films was identified under nanoindentation. Nanowear experimentation by multiple nanoscratching provided some useful information; however, the test conditions were very different from those expected for MEMS and micro-/nano-engineering systems. A novel 'sample oscillated nanoindentation' system was developed for testing nanowear under more relevant conditions. The films were produced in an industrial production coating line. In order to maximise the available information and to take account of uncontrolled process variation, a statistical design-of-experiment procedure was used to investigate the effect of four key process control parameters. Cr doping was the most significant control parameter at all thicknesses tested and produced a softening effect and thus increased nanowear. Substrate bias voltage was also a significant parameter and produced a hardening and wear-reducing effect at all thicknesses tested. The use of a Cr adhesion layer produced beneficial results at 150 nm thickness, but was ineffective at 50 nm. Argon flow to the coating chamber produced a complex effect. All effects reduced significantly with reducing film thickness. Classic fretting wear was produced at low amplitude under nanowear testing. Reciprocating sliding was produced at higher amplitude, which generated three-body abrasive wear, generally consistent with the Archard model. Specific wear rates were very low (typically 10⁻¹⁶ - 10⁻¹⁸ m³N⁻¹m⁻¹). Wear rates reduced exponentially with reducing film thickness and, below approximately 20 nm, thickness was identified as the most important control of wear.
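To make the quoted wear figures concrete: the specific (Archard-type) wear rate is the wear volume per unit normal load per unit sliding distance. The helper below simply evaluates k = V/(F·s); the function name and example numbers are illustrative, not taken from the thesis.

```python
def specific_wear_rate(wear_volume_m3, normal_load_n, sliding_distance_m):
    """Archard-type specific wear rate k = V / (F * s), in m^3 N^-1 m^-1."""
    return wear_volume_m3 / (normal_load_n * sliding_distance_m)

# Example: 1e-18 m^3 of material removed under a 1 mN load over 1 m of sliding
# gives k = 1e-15 m^3 N^-1 m^-1.
print(specific_wear_rate(1e-18, 1e-3, 1.0))
```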
Abstract:
Most of the new processes involving the utilisation of coal are based on hydroliquefaction, and in order to assess the suitability of the various coals for this purpose, and to characterise coals in general, it is desirable to have a detailed and accurate knowledge of their chemical constitution and reactivity. Also, in the consumption of coals as chemical feedstocks, as in hydroliquefaction, it is advantageous to classify the coals in terms of chemical parameters as opposed to, or in addition to, carbonisation parameters. In view of this it is important to identify the functional groups on the coal hydrocarbon skeleton. This research attempted to characterise coals of various rank (and subsequently their macerals) via methods involving both microwave-driven and bench-top derivatisation of the hydroxyl functionalities present in coal. These hydroxyl groups are predominantly in the form of hindered phenolic groups, with other alcoholic groupings being less important in the coals studied here. Four different techniques were employed, three of which - stannylation, silylation and methylation - were based on in situ analysis. The fourth technique - acetylation - involved derivatisation followed by analysis of a leaving group. The four techniques were critically compared, and it is concluded that silylation is the most promising technique for the evaluation of the hydroxyl content of middle-rank coals and coal macerals. Derivatisation via stannylation using TBTO was impeded by the large steric demand of the reagent, and acetylation did not successfully derivatise the more hindered phenolic groups. Three novel methylation techniques were investigated and two of these show great potential. The information obtained from the techniques was correlated to give a comprehensive insight into the coals and coal macerals studied.
Abstract:
The practice of evidence-based medicine involves consulting documents from repositories such as Scopus, PubMed, or the Cochrane Library. The most common approach for presenting retrieved documents is in the form of a list, with the assumption that the higher a document is on the list, the more relevant it is. Despite this list-based presentation, how physicians perceive the importance of the order of documents presented in a list has seldom been studied. This paper describes an empirical study that elicited and modeled physicians' preferences with regard to list-based results. Preferences were analyzed using the GRIP method, which relies on pairwise comparisons of selected subsets of possible rank-ordered lists composed of 3 documents. The results allow us to draw conclusions regarding physicians' attitudes towards the importance of having documents ranked correctly on a result list, versus the importance of retrieving relevant but misplaced documents. Our findings should help developers of clinical information retrieval applications when deciding how retrieved documents should be presented and how the performance of the application should be assessed. © 2012 Springer-Verlag Berlin Heidelberg.
Abstract:
In the developed world we are surrounded by man-made objects, but most people give little thought to the complex processes needed for their design. The design of hand knitting is complex because much of the domain knowledge is tacit. The objective of this thesis is to devise a methodology to help designers to work within design constraints, whilst facilitating creativity. A hybrid solution including computer aided design (CAD) and case based reasoning (CBR) is proposed. The CAD system creates designs using domain-specific rules, and these designs are employed for initial seeding of the case base and the management of constraints. CBR reuses the designer's previous experience. The key aspects of the CBR system are measuring the similarity of cases and adapting past solutions to the current problem. Similarity is measured by asking the user to rank the importance of features; the ranks are then used to calculate weights for an algorithm which compares the specifications of designs. A novel adaptation operator called rule difference replay (RDR) is created. When the specification for a new design is presented, the CAD program uses it to construct a design constituting an approximate solution. The most similar design from the case base is then retrieved, and RDR replays the changes previously made to the retrieved design on the new solution. A measure of solution similarity that can validate subjective success scores is created. Specification similarity can be used to guide whether to invoke CBR in a hybrid CAD-CBR system: if the new design's specification is sufficiently similar to that of a previous design, then CBR is invoked; otherwise CAD is used. The application of RDR to knitwear design has demonstrated the flexibility to overcome deficiencies in rules that try to automate creativity, and has the potential to be applied to other domains such as interior design.
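As an illustration of the similarity step described above, feature-importance ranks supplied by the user can be converted into weights and used in a weighted comparison of design specifications. The exact weighting scheme is not given in the abstract, so the inverse-rank mapping below is an assumption, as are the variable names.

```python
import numpy as np

def weights_from_ranks(ranks):
    """Convert user-supplied importance ranks (1 = most important) into
    normalised weights.  A simple inverse-rank scheme is assumed here;
    the thesis may use a different mapping."""
    ranks = np.asarray(ranks, dtype=float)
    w = 1.0 / ranks
    return w / w.sum()

def weighted_similarity(spec_a, spec_b, weights):
    """Weighted similarity between two design specifications, each given
    as a vector of normalised feature values in [0, 1]."""
    diffs = np.abs(np.asarray(spec_a, float) - np.asarray(spec_b, float))
    return 1.0 - float(np.dot(weights, diffs))

# Example: three features ranked 1, 3, 2 by the designer
w = weights_from_ranks([1, 3, 2])
print(weighted_similarity([0.9, 0.2, 0.5], [0.8, 0.4, 0.5], w))
```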
Abstract:
The inverse problem of determining a spacewise-dependent heat source, together with the initial temperature, for the parabolic heat equation is studied, using the usual conditions of the direct problem and information from two supplementary temperature measurements at different instants of time. These spacewise-dependent temperature measurements ensure that this inverse problem has a unique solution; the solution is, however, unstable, and hence the problem is ill-posed. We propose an iterative algorithm for the stable reconstruction of both the initial data and the source, based on a sequence of well-posed direct problems for the parabolic heat equation, which are solved at each iteration step using the boundary element method. The instability is overcome by stopping the iterations at the first iteration for which the discrepancy principle is satisfied. Numerical results are presented for a typical benchmark test example, in which the input measured data are perturbed by increasing amounts of random noise. The numerical results show that the proposed procedure gives accurate numerical approximations in relatively few iterations.
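The stopping rule mentioned above, the discrepancy principle, halts the iteration as soon as the data residual drops to the estimated noise level. A generic sketch follows; the `step` callable is a hypothetical placeholder for one iteration of the proposed algorithm (a set of well-posed direct solves, e.g. by the boundary element method), and `tau` is a safety factor slightly above 1.

```python
import numpy as np

def iterate_with_discrepancy_stop(step, u0, noise_level, tau=1.01, max_iter=500):
    """Generic iterative reconstruction stopped by the discrepancy principle.

    step        : hypothetical callable; maps the current iterate to
                  (next iterate, data residual = forward(u) - measured data)
    u0          : initial guess for the unknowns (source and initial data)
    noise_level : estimate of the measurement-noise norm
    Stops at the first iterate whose residual norm falls below tau * noise_level.
    """
    u = u0
    for k in range(max_iter):
        u, residual = step(u)
        if np.linalg.norm(residual) <= tau * noise_level:
            break
    return u, k
```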
Abstract:
An iterative procedure is proposed for the reconstruction of a temperature field from a linear stationary heat equation with stochastic coefficients and stochastic Cauchy data given on a part of the boundary of a bounded domain. In each step, a series of mixed well-posed boundary-value problems is solved for the stochastic heat operator and its adjoint. Well-posedness of these problems is shown to hold and convergence in the mean of the procedure is proved. A discretized version of this procedure, based on a Monte Carlo Galerkin finite-element method and suitable for numerical implementation, is discussed. It is demonstrated that the solution of the discretized problem converges to the continuous solution as the mesh size tends to zero.
Abstract:
We consider a Cauchy problem for the Laplace equation in a two-dimensional semi-infinite region with a bounded inclusion, i.e. the region is the intersection between a half-plane and the exterior of a bounded closed curve contained in the half-plane. The Cauchy data are given on the unbounded part of the boundary of the region and the aim is to construct the solution on the boundary of the inclusion. In 1989, Kozlov and Maz'ya [10] proposed an alternating iterative method for solving Cauchy problems for general strongly elliptic and formally self-adjoint systems in bounded domains. We extend their approach to our setting and in each iteration step mixed boundary value problems for the Laplace equation in the semi-infinite region are solved. Well-posedness of these mixed problems is investigated and convergence of the alternating procedure is examined. For the numerical implementation an efficient boundary integral equation method is proposed, based on the indirect variant of the boundary integral equation approach. The mixed problems are reduced to integral equations over the (bounded) boundary of the inclusion. Numerical examples are included showing the feasibility of the proposed method.
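To show the structure of the alternating procedure (not the authors' implementation), each iteration solves two mixed problems in turn and updates the unknown data on the inclusion boundary. The two solver arguments below are hypothetical callables standing in for the boundary integral equation solves.

```python
def alternating_iteration(phi, psi, eta0, solve_with_dirichlet_on_gamma0,
                          solve_with_neumann_on_gamma0, n_iter=50):
    """Kozlov-Maz'ya-style alternating scheme (structural sketch only).

    phi, psi : Dirichlet and Neumann Cauchy data on the accessible boundary Gamma0
    eta0     : initial guess for the Neumann data on the inclusion boundary Gamma1
    The two solver arguments are hypothetical callables, each solving one
    well-posed mixed problem and returning the missing trace on Gamma1.
    """
    eta = eta0
    u_on_gamma1 = None
    for _ in range(n_iter):
        # Mixed problem 1: Dirichlet data phi on Gamma0, Neumann data eta on Gamma1;
        # returns the Dirichlet trace on Gamma1.
        u_on_gamma1 = solve_with_dirichlet_on_gamma0(phi, eta)
        # Mixed problem 2: Neumann data psi on Gamma0, Dirichlet data on Gamma1;
        # returns an updated Neumann trace on Gamma1.
        eta = solve_with_neumann_on_gamma0(psi, u_on_gamma1)
    return u_on_gamma1, eta
```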
Abstract:
Existing formulations of the problem of assigning n jobs to n individuals are limited to costs or profits measured as crisp values. However, in many real applications, costs are not deterministic numbers. This paper develops a procedure based on the Data Envelopment Analysis (DEA) method to solve assignment problems with fuzzy costs or fuzzy profits for each possible assignment. It aims to obtain the points with maximum membership values for the fuzzy parameters while maximizing the profit or minimizing the assignment cost. In this method, a discrete approach is first presented to rank the fuzzy numbers. Then, corresponding to each fuzzy number, we introduce a crisp number using the efficiency concept. A numerical example is used to illustrate the usefulness of this new method. © 2012 Operational Research Society Ltd. All rights reserved.
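As a simplified illustration of the overall pipeline, once each fuzzy cost has been replaced by a crisp representative the problem reduces to a standard assignment problem. The sketch below uses centroid defuzzification of triangular fuzzy numbers as a stand-in for the paper's DEA-based ranking, and solves the crisp problem with SciPy's assignment solver.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def triangular_centroid(tfn):
    """Centroid defuzzification of a triangular fuzzy number (a, b, c).
    A simple stand-in; the paper derives crisp values via a DEA-based
    efficiency concept instead."""
    a, b, c = tfn
    return (a + b + c) / 3.0

def fuzzy_assignment(fuzzy_costs):
    """fuzzy_costs: n x n nested list of triangular fuzzy numbers (a, b, c)."""
    crisp = np.array([[triangular_centroid(c) for c in row] for row in fuzzy_costs])
    rows, cols = linear_sum_assignment(crisp)   # minimise total crisp cost
    return list(zip(rows, cols)), crisp[rows, cols].sum()

# Example with two jobs and two individuals
pairs, total = fuzzy_assignment([[(1, 2, 3), (4, 5, 6)],
                                 [(2, 3, 4), (1, 1, 2)]])
print(pairs, total)
```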
Abstract:
Respiratory-volume monitoring is an indispensable part of mechanical ventilation. Here we present a new method of respiratory-volume measurement based on a single fibre-optic long-period bending sensor and the correlation between torso curvature and lung volume. Unlike the commonly used air-flow-based measurement methods, the proposed sensor is drift-free and immune to air leaks. In the paper, we explain the working principle of the sensor and a two-step calibration-test measurement procedure, and present results that establish a linear correlation between the change in local thorax curvature and the change in lung volume. We also discuss the advantages and limitations of these sensors with respect to the current standards. © 2013 IEEE.
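The two-step procedure described above amounts to fitting, and then applying, a linear relation between the change in local torso curvature and the change in lung volume. A minimal sketch using an ordinary least-squares fit is given below; the variable names and the use of a reference volume signal during calibration are assumptions.

```python
import numpy as np

def calibrate(curvature_change, volume_change_ref):
    """Step 1 (calibration): fit dV = a * dkappa + b against a reference
    volume signal recorded simultaneously with the fibre sensor."""
    a, b = np.polyfit(curvature_change, volume_change_ref, deg=1)
    return a, b

def estimate_volume_change(curvature_change, a, b):
    """Step 2 (test): convert sensor readings into respiratory-volume
    estimates using the fitted coefficients."""
    return a * np.asarray(curvature_change, float) + b
```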
Abstract:
Internal quantum efficiency (IQE) of a high-brightness blue LED has been evaluated from the external quantum efficiency measured as a function of current at room temperature. Processing the data with a novel evaluation procedure based on the ABC-model, we have determined separately the IQE of the LED structure and the light extraction efficiency (LEE) of the UX:3 chip. Full text: Nowadays, understanding of LED efficiency behaviour at high currents is quite critical for finding ways to further improve III-nitride LED performance [1]. External quantum efficiency ηe (EQE) provides integral information on the recombination and photon emission processes in LEDs. Meanwhile, EQE is the product of IQE ηi and LEE ηext at negligible carrier leakage from the active region. Separate determination of IQE and LEE would be much more helpful, providing correlation between these parameters and the specific epi-structure and chip design. In this paper, we extend the approach of [2,3] to the whole range of the current/optical power variation, providing an express tool for separate evaluation of IQE and LEE. We studied an InGaN-based LED fabricated by Osram OS. The LED structure, grown by MOCVD on a sapphire substrate, was processed as a UX:3 chip and mounted into the Golden Dragon package without molding. EQE was measured with a Labsphere CDS-600 spectrometer. Plotting EQE versus output power P and finding the power Pm corresponding to the EQE maximum ηm enables comparing the measurements with the analytical relationships ηi = Q/(Q + p^(1/2) + p^(-1/2)), p = P/Pm, and Q = B/(AC)^(1/2), where A, B and C are recombination constants [4]. As a result, the maximum IQE value, equal to Q/(Q+2), can be found from the ratio ηm/ηe plotted as a function of p^(1/2) + p^(-1/2) (see Fig. 1a), and then LEE calculated as ηext = ηm(Q+2)/Q. Experimental EQE as a function of normalised optical power p is shown in Fig. 1b along with the analytical approximation based on the ABC-model. The approximation fits the measurements perfectly over a range of optical power (or operating current) variation of eight orders of magnitude. In conclusion, a new express method for separate evaluation of IQE and LEE of III-nitride LEDs is suggested and applied to the characterization of a high-brightness blue LED. With this method, we obtained the LEE from the free chip surface to the air as 69.8% and the IQE as 85.7% at the maximum and 65.2% at the operation current of 350 mA. [1] G. Verzellesi, D. Saguatti, M. Meneghini, F. Bertazzi, M. Goano, G. Meneghesso, and E. Zanoni, "Efficiency droop in InGaN/GaN blue light-emitting diodes: Physical mechanisms and remedies," J. Appl. Phys., vol. 114, no. 7, p. 071101, Aug. 2013. [2] C. van Opdorp and G. W. 't Hooft, "Method for determining effective nonradiative lifetime and leakage losses in double-heterostructure lasers," J. Appl. Phys., vol. 52, no. 6, pp. 3827-3839, 1981. [3] M. Meneghini, N. Trivellin, G. Meneghesso, E. Zanoni, U. Zehnder, and B. Hahn, "A combined electro-optical method for the determination of the recombination parameters in InGaN-based light-emitting diodes," J. Appl. Phys., vol. 106, no. 11, p. 114508, Dec. 2009. [4] Qi Dai, Qifeng Shan, Jing Wang, S. Chhajed, Jaehee Cho, E. F. Schubert, M. H. Crawford, D. D. Koleske, Min-Ho Kim, and Yongjo Park, "Carrier recombination mechanisms and efficiency droop in GaInN/GaN light-emitting diodes," Appl. Phys. Lett., vol. 97, no. 13, p. 133507, Sept. 2010. © 2014 IEEE.
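The relations quoted above translate directly into a small evaluation routine: ηm/ηe = (Q + p^(1/2) + p^(-1/2))/(Q + 2) is linear in p^(1/2) + p^(-1/2), so a linear fit yields Q, from which the maximum IQE and the LEE follow. The sketch below illustrates this under the stated relations; it is not the authors' code, and the simple least-squares fit is an assumption.

```python
import numpy as np

def evaluate_iqe_lee(P, eqe):
    """Extract Q, maximum IQE and LEE from EQE-versus-power data using the
    ABC-model relations quoted in the abstract.

    P   : measured optical power values
    eqe : external quantum efficiency at each power
    """
    i_max = int(np.argmax(eqe))
    eta_m, P_m = eqe[i_max], P[i_max]
    p = np.asarray(P, dtype=float) / P_m                 # normalised power p = P/Pm
    x = np.sqrt(p) + 1.0 / np.sqrt(p)                    # p^(1/2) + p^(-1/2)
    # eta_m / eta_e = (Q + x) / (Q + 2): linear in x with slope 1/(Q + 2)
    slope, _ = np.polyfit(x, eta_m / np.asarray(eqe, float), deg=1)
    Q = 1.0 / slope - 2.0
    max_iqe = Q / (Q + 2.0)                              # IQE at the EQE maximum
    lee = eta_m * (Q + 2.0) / Q                          # light extraction efficiency
    return Q, max_iqe, lee
```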