16 results for extraction methods.
in Aston University Research Archive
Analysis of carrier phase extraction methods in 112-Gbit/s NRZ-PDM-QPSK coherent transmission system
Abstract:
We present a comparative analysis of three carrier phase extraction approaches, including a one-tap normalized least mean square method, a block-average method, and a Viterbi-Viterbi method, in a coherent transmission system, taking equalization-enhanced phase noise into account. © OSA 2012.
Abstract:
In large organizations the resources needed to solve challenging problems are typically dispersed over systems within and beyond the organization, and also in different media. However, there is still the need, in knowledge environments, for extraction methods able to combine evidence for a fact from across different media. In many cases the whole is more than the sum of its parts: only when considering the different media simultaneously can enough evidence be obtained to derive facts otherwise inaccessible to the knowledge worker via traditional methods that work on each single medium separately. In this paper, we present a cross-media knowledge extraction framework specifically designed to handle large volumes of documents composed of three types of media (text, images and raw data) and to exploit the evidence across the media. Our goal is to improve the quality and depth of automatically extracted knowledge.
Abstract:
This study is concerned with the analysis of tear proteins, paying particular attention to the state of the tears (e.g. non-stimulated, reflex, closed) created during sampling, and with the assessment of their interactions with hydrogel contact lenses. The work has involved the use of a variety of biochemical and immunological analytical techniques for the measurement of proteins (a) in tears, (b) on the contact lens, and (c) in the eluate of extracted lenses. Although a diverse range of tear components may contribute to contact lens spoilation, proteins were of particular interest in this study because of their theoretical potential for producing immunological reactions. Although normal host proteins in their natural state are generally not treated as dangerous or non-self, those which undergo denaturation or suffer a conformational change may provoke an excessive and unnecessary immune response. A novel on-lens cell-based assay has been developed and exploited in order to study the role of the ubiquitous cell adhesion glycoprotein, vitronectin, in tears and contact lens wear under various parameters. Vitronectin, whose levels are known to increase in the closed eye environment and are shown here to increase during contact lens wear, is an important immunoregulatory protein and may be a prominent marker of inflammatory activity. Immunodiffusion assays were developed and optimised for use in tear analysis, and in a series of subsequent studies were used, for example, in the measurement of albumin, lactoferrin, IgA and IgG. The immunodiffusion assays were then applied to the study of the closed eye environment; an environment which has been described as sustaining a state of sub-clinical inflammation. The role and presence of a less well understood and investigated protein, kininogen, were also examined, in particular in relation to contact lens wear.
Difficulties arise when attempting to extract proteins from the contact lens in order to examine the individual nature of the proteins involved. These problems were partly alleviated with the use of the on-lens cell assay and a UV spectrophotometry assay, which can analyse the lens surface and bulk respectively, the latter yielding only total protein values. Various lens extraction methods were investigated to remove protein from the lens and the most efficient was employed in the analysis of lens extracts. Counter immunoelectrophoresis, an immunodiffusion assay, was then applied to the analysis of albumin, lactoferrin, IgA and IgG in the resultant eluates.
Abstract:
During the last decade, biomedicine has witnessed tremendous development. Large amounts of experimental and computational biomedical data have been generated along with new discoveries, which are accompanied by an exponential increase in the number of biomedical publications describing these discoveries. In the meantime, there has been great interest within scientific communities in text mining tools to find knowledge, such as protein-protein interactions, that is most relevant and useful for specific analysis tasks. This paper provides an outline of the various information extraction methods in the biomedical domain, especially for the discovery of protein-protein interactions. It surveys the methodologies involved in analyzing and processing plain text, categorizes current work in biomedical information extraction, and provides examples of these methods. Challenges in the field are also presented and possible solutions are discussed.
Abstract:
To date, more than 16 million citations of published articles in the biomedical domain are available in the MEDLINE database. These articles describe the new discoveries that have accompanied the tremendous development in biomedicine during the last decade. It is crucial for biomedical researchers to retrieve and mine specific knowledge from the huge quantity of published articles with high efficiency. Researchers have been engaged in the development of text mining tools to find knowledge, such as protein-protein interactions, that is most relevant and useful for specific analysis tasks. This chapter provides a road map to the various information extraction methods in the biomedical domain, such as protein name recognition and the discovery of protein-protein interactions. Disciplines involved in analyzing and processing unstructured text are summarized. Current work in biomedical information extraction is categorized. Challenges in the field are also presented and possible solutions are discussed.
Abstract:
This work follows a feasibility study (187) which suggested that a process for purifying wet-process phosphoric acid by solvent extraction should be economically viable. The work was divided into two main areas: (i) chemical and physical measurements on the three-phase system, with or without impurities; (ii) process simulation and optimization. The object was to test the process technically and economically and to optimise the type of solvent. The chemical equilibria and distribution curves for the system water - phosphoric acid - solvent have been determined for the solvents n-amyl alcohol, tri-n-butyl phosphate, di-isopropyl ether and methyl isobutyl ketone. Both pure phosphoric acid and acid containing known amounts of naturally occurring impurities (FePO4, AlPO4, Ca3(PO4)2 and Mg3(PO4)2) were examined. The hydrodynamic characteristics of the systems were also studied. The experimental results obtained for drop size distribution were compared with those obtainable from Hinze's equation (32), and it was found that they deviated by an amount related to the turbulence. A comprehensive literature survey on the purification of wet-process phosphoric acid by organic solvents has been made. The literature regarding solvent extraction fundamentals and equipment, and optimization methods for the envisaged process, was also reviewed. A modified form of the Kremser-Brown and Souders equation to calculate the number of contact stages was derived. The modification takes into account the special nature of the phosphoric acid distribution curves in the studied systems. The process flow-sheet was developed and simulated. Powell's direct search optimization method was selected in conjunction with the linear search algorithm of Davies, Swann and Campey. The objective function was defined as the total annual manufacturing cost, and the program was employed to find the optimum operating conditions for any one of the chosen solvents.
The final results demonstrated the following order of feasibility to purify wet-process acid: di-isopropyl ether, methyl isobutyl ketone, n-amyl alcohol and tri-n-butyl phosphate.
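The abstract's modified Kremser-Brown and Souders equation is not reproduced, but the classical form it builds on, together with Hinze's drop-size correlation, can be sketched. A minimal illustration, assuming the standard textbook forms of both relations (variable names, the A = 1 limiting case and the constant C = 0.725 are assumptions of this sketch, not taken from the thesis):

```python
import math

def kremser_stages(x_in, x_out, x_star, A):
    """Classical Kremser-Brown-Souders relation for the number of ideal
    counter-current contact stages (assumes a linear distribution curve;
    the thesis derives a modified form for curved distribution curves).

    x_in, x_out : solute concentration entering / leaving the raffinate
    x_star      : concentration in equilibrium with the entering solvent
    A           : extraction factor (distribution coefficient x solvent/feed ratio)
    """
    if A == 1.0:
        # Limiting case of the Kremser equation as A -> 1
        return (x_in - x_out) / (x_out - x_star)
    ratio = (x_in - x_star) / (x_out - x_star)
    return math.log(ratio * (1.0 - 1.0 / A) + 1.0 / A) / math.log(A)

def hinze_dmax(sigma, rho_c, epsilon, C=0.725):
    """Hinze's (1955) correlation for the maximum stable drop diameter in a
    turbulent dispersion: d_max = C * (sigma/rho_c)**(3/5) * epsilon**(-2/5).

    sigma   : interfacial tension (N/m)
    rho_c   : continuous-phase density (kg/m^3)
    epsilon : turbulent energy dissipation rate per unit mass (W/kg)
    """
    return C * (sigma / rho_c) ** 0.6 * epsilon ** (-0.4)
```

With an extraction factor of 2 and 99% solute recovery, `kremser_stages(1.0, 0.01, 0.0, 2.0)` gives roughly 5.7 ideal stages; the thesis's modification would adjust this count for the non-linear phosphoric acid distribution curves.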
Abstract:
A systematic survey of the possible methods of chemical extraction of iron by chloride formation has been presented and supported by a comparable study of feedstocks, products and markets. The generation and evaluation of alternative processes was carried out by the technique of morphological analysis, which was exploited by way of a computer program. The final choice was related to technical feasibility and economic viability, particularly capital cost requirements, and developments were made in an estimating procedure for hydrometallurgical processes which has general applications. The systematic exploration included the compilation of relevant data, and this indicated a need to investigate precipitative hydrolysis of aqueous ferric chloride. Arising from this study, two novel hydrometallurgical processes for manufacturing iron powder are proposed, and experimental work was undertaken in the following areas to demonstrate feasibility and obtain basic data for design purposes: (1) Precipitative hydrolysis of aqueous ferric chloride. (2) Gaseous chloridation of metallic iron, and oxidation of the resultant ferrous chloride. (3) Reduction of gaseous ferric chloride with hydrogen. (4) Aqueous acid leaching of low grade iron ore. (5) Aqueous acid leaching of metallic iron. The experimentation was supported by theoretical analyses dealing with: (1) Thermodynamics of hydrolysis. (2) Kinetics of ore leaching. (3) Kinetics of metallic iron leaching. (4) Crystallisation of ferrous chloride. (5) Oxidation of anhydrous ferrous chloride. (6) Reduction of ferric chloride. Conceptual designs are suggested for both the processes mentioned. These draw attention to areas where further work is necessary, which are listed. Economic analyses have been performed which isolate significant cost areas and indicate total production costs. Comparisons are made with previous and analogous proposals for the production of iron powder.
Abstract:
PURPOSE: To evaluate theoretically three previously published formulae that use intra-operative aphakic refractive error to calculate intraocular lens (IOL) power, not necessitating pre-operative biometry. The formulae are as follows: IOL power (D) = aphakic refraction x 2.01 [Ianchulev et al., J. Cataract Refract. Surg. 31 (2005) 1530]; IOL power (D) = aphakic refraction x 1.75 [Mackool et al., J. Cataract Refract. Surg. 32 (2006) 435]; IOL power (D) = 0.07x^2 + 1.27x + 1.22, where x = aphakic refraction [Leccisotti, Graefes Arch. Clin. Exp. Ophthalmol. 246 (2008) 729]. METHODS: Gaussian first order calculations were used to determine the relationship between intra-operative aphakic refractive error and the IOL power required for emmetropia in a series of schematic eyes incorporating varying corneal powers, pre-operative crystalline lens powers, axial lengths and post-operative IOL positions. The three previously published formulae, based on empirical data, were then compared in terms of the IOL power errors that arose in the same schematic eye variants. RESULTS: An inverse relationship exists between theoretical ratio and axial length. Corneal power and initial lens power have little effect on calculated ratios, whilst final IOL position has a significant impact. None of the three empirically derived formulae is universally accurate, but each is able to predict IOL power precisely in certain theoretical scenarios. The formulae derived by Ianchulev et al. and Leccisotti are most accurate for posterior IOL positions, whereas the Mackool et al. formula is most reliable when the IOL is located more anteriorly. CONCLUSION: Final IOL position was found to be the chief determinant of IOL power errors. Although the A-constants of IOLs are known and may be accurate, a variety of factors can still influence the final IOL position and lead to undesirable refractive errors. Optimum results using these novel formulae would be achieved in myopic eyes.
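The three empirical formulae quoted in the abstract can be compared directly for a given intra-operative aphakic refraction. A minimal sketch (function names are illustrative, not from the paper):

```python
def iol_power_ianchulev(x):
    """Ianchulev et al. (2005): IOL power (D) = aphakic refraction x 2.01."""
    return 2.01 * x

def iol_power_mackool(x):
    """Mackool et al. (2006): IOL power (D) = aphakic refraction x 1.75."""
    return 1.75 * x

def iol_power_leccisotti(x):
    """Leccisotti (2008): IOL power (D) = 0.07x^2 + 1.27x + 1.22."""
    return 0.07 * x ** 2 + 1.27 * x + 1.22

# Example: an intra-operative aphakic refraction of +10 D gives noticeably
# different IOL powers under the three formulae.
for name, formula in [("Ianchulev", iol_power_ianchulev),
                      ("Mackool", iol_power_mackool),
                      ("Leccisotti", iol_power_leccisotti)]:
    print(f"{name}: {formula(10.0):.2f} D")
```

The spread between the Mackool result and the other two at the same refraction illustrates the paper's point that no single ratio-based formula is universally accurate across IOL positions.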
Abstract:
One of the main objectives of this study was to functionalise various rubbers (i.e. ethylene propylene copolymer (EP), ethylene propylene diene terpolymer (EPDM), and natural rubber (NR)) using the functional monomers maleic anhydride (MA) and glycidyl methacrylate (GMA) via reactive processing routes. The functionalisation of the rubber was carried out via different reactive processing methods in an internal mixer. GMA was free-radically grafted onto EP and EPDM in the melt state in the absence and presence of a comonomer, trimethylolpropane triacrylate (TRIS). To optimise the grafting conditions and the compositions, the effects of various parameters on the grafting yields and the extent of side reactions were investigated. A precipitation method and a Soxhlet extraction method were established to purify the GMA-modified rubbers, and the grafting degree was determined by FTIR and titration. It was found that without TRIS the grafting degree of GMA increased with increasing peroxide concentration. However, grafting was low, and the homopolymerisation of GMA and crosslinking of the polymers were identified as the main side reactions competing with the desired grafting reaction for EP and EPDM, respectively. The use of the tri-functional comonomer, TRIS, was shown to greatly enhance the GMA grafting and reduce the side reactions, in terms of a higher GMA grafting degree, less alteration of the rheological properties of the polymer substrates and very little formation of polyGMA. The grafting mechanisms were investigated. MA was grafted onto NR using both thermal initiation and peroxide initiation. The results showed clearly that the reaction of MA with NR could be thermally initiated above 140°C in the absence of peroxide. At a preferable temperature of 200°C, the grafting degree increased with increasing MA concentration. The grafting reaction could also be initiated with peroxide.
It was found that 2,5-dimethyl-2,5-bis(tert-butylperoxy)hexane (T101) was a suitable peroxide to initiate the reaction efficiently above 150°C. The second objective of the work was to utilize the functionalised rubbers in a second step to achieve an in-situ compatibilisation of blends based on poly(ethylene terephthalate) (PET), in particular with GMA-grafted-EP and -EPDM, and the reactive blending was carried out in an internal mixer. The effects of the GMA grafting degree, the viscosities of GMA-grafted-EP and -EPDM, and the presence of polyGMA in the rubber samples on the compatibilisation of PET blends, in terms of morphology, dynamic mechanical properties and tensile properties, were investigated. It was found that the GMA-modified rubbers were very efficient in compatibilising the PET blends, and this was supported by the much finer morphology and the better tensile properties. The evidence obtained from the analysis of the PET blends strongly supports the existence of copolymers formed through the interfacial reactions between the grafted epoxy group in the GMA-modified rubber and the terminal groups of PET in the blends.
Abstract:
This research was undertaken to develop a process for the direct solvent extraction of castor oil seeds. A literature survey confirmed the desirability of establishing such a process, with emphasis on the decortication, size reduction, detoxification-deallergenization, and solvent extraction operations. A novel process was developed for the dehulling of castor seeds which consists of pressurizing the beans and then suddenly releasing the pressure to vacuum. The degree of dehulling varied according to the pressure applied and the size of the beans. Some of the batches were difficult to hull, and this phenomenon was investigated using the scanning electron microscope and by thickness and compressive strength measurements. The other variables studied to lesser degrees included residence time, moisture content, and temperature. The method was successfully extended to cocoa beans and (with modifications) to peanuts. The possibility of continuous operation was looked into, and a mechanism was suggested to explain how the method works. The work on toxins and allergens included an extensive literature survey on the properties of these substances and the methods developed for their deactivation. Part of the work involved setting up an assay method for measuring their concentration in the beans and cake, but technical difficulties prevented the completion of this aspect of the project. An appraisal of the existing deactivation methods was made in the course of searching for new ones. A new method of reducing the size of oilseeds was introduced in this research; it involved freezing the beans in cardice and milling them in a coffee grinder, and the method was found to be quick, efficient, and reliable. An application of the freezing technique was successful in dehulling soybeans and de-skinning peanut kernels. The literature on the solvent extraction of oilseeds, especially castor, was reviewed: the survey covered processes, equipment, solvents, and the mechanism of leaching.
Three solvents were experimentally investigated: cyclohexane, ethanol, and acetone. Extraction with liquid ammonia and liquid butane was not effective under the conditions studied. Based on the results of the research, a process has been suggested for the direct solvent extraction of castor seeds, the various sections of the process have been analysed, and the factors affecting the economics of the process are discussed.
Abstract:
The trend in modal extraction algorithms is to use all the available frequency response function data to obtain a global estimate of the natural frequencies, damping ratios and mode shapes. Improvements in transducer and signal processing technology allow the simultaneous measurement of many hundreds of channels of response data. The quantity of data available and the complexity of the extraction algorithms make considerable demands on the available computer power and require a powerful computer or dedicated workstation to perform satisfactorily. An alternative to waiting for faster sequential processors is to implement the algorithm in parallel, for example on a network of Transputers. Parallel architectures are a cost-effective means of increasing computational power, and a larger number of response channels would simply require more processors. This thesis considers how two typical modal extraction algorithms, the Rational Fraction Polynomial method and the Ibrahim Time Domain method, may be implemented on a network of Transputers. The Rational Fraction Polynomial method is a well known and robust frequency domain 'curve fitting' algorithm. The Ibrahim Time Domain method is an efficient algorithm that 'curve fits' in the time domain. This thesis reviews the algorithms, considers the problems involved in a parallel implementation, and shows how they were implemented on a real Transputer network.
Abstract:
Objective: Biomedical event extraction is concerned with extracting events that describe changes in the state of bio-molecules from the literature. Compared to the protein-protein interaction (PPI) extraction task, which often only involves the extraction of binary relations between two proteins, biomedical event extraction is much harder, since it needs to deal with complex events consisting of embedded or hierarchical relations among proteins, events, and their textual triggers. In this paper, we propose an information extraction system based on the hidden vector state (HVS) model, called HVS-BioEvent, for biomedical event extraction, and investigate its capability in extracting complex events. Methods and material: HVS has previously been employed for extracting PPIs. In HVS-BioEvent, we propose an automated way to generate abstract annotations for HVS training and further propose novel machine learning approaches for event trigger word identification and for biomedical event extraction from the HVS parse results. Results: Our proposed system achieves an F-score of 49.57% on the corpus used in the BioNLP'09 shared task, only 2.38% lower than the best performing system, by UTurku, in the BioNLP'09 shared task. Nevertheless, HVS-BioEvent outperforms UTurku's system on complex event extraction, with 36.57% vs. 30.52% achieved for extracting regulation events, and 40.61% vs. 38.99% for negative regulation events. Conclusions: The results suggest that the HVS model, with its hierarchical hidden state structure, is indeed more suitable for complex event extraction, since it can naturally model embedded structural context in sentences.
Abstract:
This paper proposes a novel framework for incorporating protein-protein interaction (PPI) ontology knowledge into PPI extraction from biomedical literature, in order to address the emerging challenges of deep natural language understanding. It is built upon the existing work on relation extraction using the Hidden Vector State (HVS) model. The HVS model belongs to the category of statistical learning methods. It can be trained directly from un-annotated data in a constrained way whilst at the same time being able to capture the underlying named entity relationships. However, it is difficult to incorporate background knowledge or non-local information into the HVS model. This paper proposes to represent the HVS model as a conditionally trained undirected graphical model in which non-local features derived from the PPI ontology through inference can be easily incorporated. The seamless fusion of ontology inference with statistical learning produces a new paradigm for information extraction.
Abstract:
Rotation invariance is important for an iris recognition system, since changes of head orientation and binocular vergence may cause eye rotation. The conventional methods of iris recognition cannot achieve true rotation invariance. They only achieve approximate rotation invariance, by rotating the feature vector before matching or by unwrapping the iris ring at different initial angles. In these methods the complexity is increased, and when the rotation is beyond a certain range the error rates may substantially increase. In order to solve this problem, a new rotation-invariant approach for iris feature extraction based on the non-separable wavelet is proposed in this paper. Firstly, a bank of non-separable orthogonal wavelet filters is used to capture characteristics of the iris. Secondly, a Markov random field method is used to capture rotation-invariant iris features. Finally, two-class kernel Fisher classifiers are adopted for classification. Experimental results on public iris databases show that the proposed approach has a low error rate and achieves true rotation invariance. © 2010.
Abstract:
Biomedical relation extraction aims to uncover high-quality relations from life science literature with high accuracy and efficiency. Early biomedical relation extraction tasks focused on capturing binary relations, such as protein-protein interactions, which are crucial for virtually every process in a living cell. Information about these interactions provides the foundations for new therapeutic approaches. In recent years, interest has shifted to the extraction of complex relations such as biomolecular events. While complex relations go beyond binary relations and involve more than two arguments, they may also take another relation as an argument. In this paper, we conduct a thorough survey of the research in biomedical relation extraction. We first present a general framework for biomedical relation extraction and then discuss the approaches proposed for binary and complex relation extraction, with a focus on the latter since it is a much more difficult task than binary relation extraction. Finally, we discuss the challenges we face in complex relation extraction and outline possible solutions and future directions.