973 results for Graph eigenvalue
Abstract:
A new sensitive and selective procedure for the speciation of trace dissolved Fe(III) and Fe(II), using modified octadecyl silica membrane disks and determination by flame atomic absorption spectrometry, was developed. An ML3 complex formed between the ligand and Fe(III) is responsible for the extraction of the metal ion on the disk. Various factors influencing the separation of iron were investigated and the optimized operating conditions were established. Under optimum conditions, an enrichment factor of 166 was obtained for Fe3+ ions. The calibration graph using the preconcentration system for Fe3+ was linear between 40.0 and 1000.0 μg L-1.
Abstract:
A simultaneous solid phase extraction procedure for the enrichment of Cu(II), Cd(II) and Mn(II) has been developed. The method is based on the adsorption of Cu(II), Cd(II) and Mn(II) ions on polyethylene glycol-silica gel pre-conditioned with acetate buffer (pH 5.5). The adsorbed metal ions are eluted with nitric acid (1 mol L-1) and determined by flame atomic absorption spectrometry. The calibration graph was linear in the range of 2-140 ng mL-1 for Cu(II), 1-40 ng mL-1 for Cd(II) and 4-100 ng mL-1 for Mn(II). The limits of detection were 0.66, 0.33 and 1.20 ng mL-1 for Cu(II), Cd(II) and Mn(II), respectively.
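Linear ranges and detection limits of this kind follow from an ordinary least-squares calibration, with the LOD commonly estimated as three times the blank standard deviation divided by the slope. A minimal sketch; the concentrations, signals and blank standard deviation below are invented for illustration, not the paper's data:

```python
import numpy as np

# Hypothetical calibration points (concentration in ng/mL vs. absorbance).
conc = np.array([2, 20, 50, 80, 110, 140], dtype=float)
signal = np.array([0.011, 0.102, 0.251, 0.399, 0.552, 0.701])

# Least-squares fit: signal = slope * conc + intercept
slope, intercept = np.polyfit(conc, signal, 1)

# LOD = 3 * (standard deviation of the blank) / slope; sd_blank is assumed.
sd_blank = 0.0011
lod = 3 * sd_blank / slope
print(f"slope = {slope:.5f} a.u. per ng/mL, LOD = {lod:.2f} ng/mL")
```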
Abstract:
A method for the determination of trace amounts of palladium was developed using homogeneous liquid-liquid microextraction via flotation assistance (HLLME-FA) followed by graphite furnace atomic absorption spectrometry (GFAAS). Ammonium pyrrolidine dithiocarbamate (APDC) was used as a complexing agent. The method was applied to determine palladium in three types of water samples. In this study, a special extraction cell was designed to facilitate collection of the low-density extraction solvent. No centrifugation was required in this procedure. The water sample solution was added to the extraction cell, which contained an appropriate mixture of extraction and homogeneous solvents. By using air flotation, the organic solvent was collected at the conical part of the designed cell. Parameters affecting the extraction efficiency were investigated and optimized. Under the optimum conditions, the calibration graph was linear in the range of 1.0-200 µg L-1 with a limit of detection of 0.3 µg L-1. The performance of the method was evaluated for the extraction and determination of palladium in water samples, and satisfactory results were obtained. In order to verify the accuracy of the approach, the standard addition method was applied for the determination of palladium in spiked synthetic samples, again with satisfactory results.
Abstract:
An enzymatic spectrophotometric method for the determination of methyldopa in a dissolution test of tablets was developed using peroxidase from radish (Raphanus sativus). The enzyme was extracted from radish roots using a phosphate buffer of pH 6.5 and partially purified through centrifugation. The supernatant was used as a source of peroxidase. The methyldopachrome resulting from the oxidation of methyldopa catalyzed by peroxidase was monitored at 480 nm. The enzymatic activity was stable for a period of at least 25 days when the extract was stored at 4 or -20 ºC. The method was validated according to RDC 899 and ICH guidelines. The calibration graph was linear in the range 200-800 µg mL-1, with a correlation coefficient of 0.9992. The limits of detection and quantification in the dissolution medium were 36 and 120 µg mL-1, respectively. Recovery was greater than 98.9%. This method can be applied for the determination of methyldopa in dissolution tests of tablets without interference from the excipients.
Abstract:
In this study, dispersive liquid-liquid microextraction based on the solidification of floating organic droplets was used for the preconcentration and determination of thorium in water samples. In this method, acetone and 1-undecanol were used as the disperser and extraction solvents, respectively, and the ligand 1-(2-thenoyl)-3,3,3-trifluoroacetone (TTA) and Aliquat 336 were used as the chelating agent and the ion-pairing reagent, respectively, for the extraction of thorium. Inductively coupled plasma-optical emission spectrometry was applied for the quantitation of the analyte after preconcentration. The effects of various factors, such as the extraction and disperser solvents, sample pH, concentration of TTA and concentration of Aliquat 336, were investigated. Under the optimum conditions, the calibration graph was linear within the thorium content range of 1.0-250 µg L-1 with a detection limit of 0.2 µg L-1. The method was also successfully applied for the determination of thorium in different water samples.
Abstract:
A nitrate selective electrode was prepared for use in an aggressive medium (high acid or base concentration). It is demonstrated that the plot of E versus pNO3- does not show a Nernstian response at acid concentrations above 0.1 mol/L H2SO4. The observed behaviour is attributed to the formation of a dimeric anion, HN2O6-.
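For context (this is general electrochemistry, not from the abstract), the ideal Nernstian slope against which such an electrode response is judged is about -59.2 mV per decade of activity for a monovalent anion like NO3- at 25 °C:

```python
import math

R = 8.314    # J/(mol*K), gas constant
T = 298.15   # K (25 degrees C)
F = 96485.0  # C/mol, Faraday constant
z = -1       # charge of the NO3- ion

# Nernstian slope in mV per decade of activity: (RT / zF) * ln(10)
slope_mV = 1000 * (R * T / (z * F)) * math.log(10)
print(round(slope_mV, 1))  # -59.2
```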
Abstract:
This thesis investigated the condition of the cooling water network of production line 2 at the Neste Oil Porvoo refinery. The hydraulic model of the cooling water network was updated and verified with pressure measurements. The model was refined with changes identified through the modelling of the control valves and an examination of error sources. During verification, considerable differences were observed between the model and the measured pressures. This led to a closer examination of the model and an investigation of the error sources and their effects. Modelling methods for pipe fittings, as well as modelling principles, were compared with each other. Because the total cooling water circulation was insufficient, three alternatives for achieving an adequate circulation flow were examined. Parallel operation of the existing circulation pumps and the use of a sufficiently upscaled pump were simulated. As a third case, the effect of a heat-exchanger-specific throttling plan on the pressure loss of the piping was evaluated, together with a circulation pump suitable for the piping. Indicative investment and operating costs were calculated for the alternatives. Based on the analysis, the sufficiently upscaled pump was found to be the most viable option owing to the small price difference and the better operational reliability of the cooling water circulation. The thesis succeeded in producing generally applicable knowledge of the factors affecting the hydraulic modelling of a closed cooling water network and of their effect on model accuracy. Based on the study, the examined model was made more accurate.
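Pressure losses in a hydraulic pipe-network model of this kind are typically computed segment by segment with the Darcy-Weisbach equation. A minimal sketch; the friction factor and pipe data below are assumed example values, not figures from the thesis:

```python
def darcy_weisbach_dp(f, L, D, rho, v):
    """Pressure drop (Pa) over a pipe segment: dp = f * (L/D) * rho * v^2 / 2."""
    return f * (L / D) * rho * v**2 / 2

# Assumed example segment: 100 m of pipe with 0.3 m inner diameter.
f = 0.02      # Darcy friction factor (assumed)
L = 100.0     # pipe length, m
D = 0.3       # inner diameter, m
rho = 998.0   # water density, kg/m^3
v = 2.0       # flow velocity, m/s

dp = darcy_weisbach_dp(f, L, D, rho, v)
print(f"{dp / 1000:.1f} kPa")
```

Summing such segment losses over the network, and comparing against measured pressures, is the essence of the verification step described above.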
Abstract:
This thesis investigated the utilization of the air ducts of recovery boilers as a stiffening structure. Individual unstiffened and stiffened plate fields and their buckling resistance were examined according to the Eurocode standard and with the finite element method. In addition, the theory of plate buckling and the behaviour of plate fields under different loads and boundary conditions were discussed at a general level. The aim of the thesis was to determine how buckling is analyzed according to the Eurocode and with the finite element method when the plate field carries a transverse load in addition to in-plane loading. Two different solution alternatives based on the finite element method were studied for the buckling analysis. An applied use of the Eurocode interaction formula was developed as a supplement to the solution of the linear eigenvalue problem, taking into account the effect of the pressure load on the buckling of the plate field. The developed method was applied to the design of an example air duct structure.
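The elastic critical buckling stress of a plate field that underlies such Eurocode verifications comes from classical plate theory. A minimal sketch; the buckling coefficient and plate dimensions below are assumed textbook values, not the thesis's structures:

```python
import math

def critical_buckling_stress(k_sigma, E, nu, t, b):
    """Elastic critical plate buckling stress (classical plate theory):
    sigma_cr = k_sigma * pi^2 * E / (12 * (1 - nu^2)) * (t / b)^2
    """
    return k_sigma * math.pi**2 * E / (12 * (1 - nu**2)) * (t / b)**2

# Assumed example: long, simply supported steel plate in uniform compression.
k_sigma = 4.0  # buckling coefficient for this load and support case
E = 210e9      # Young's modulus of steel, Pa
nu = 0.3       # Poisson's ratio
t = 0.008      # plate thickness, m
b = 1.0        # plate width, m

sigma_cr = critical_buckling_stress(k_sigma, E, nu, t, b)
print(f"{sigma_cr / 1e6:.1f} MPa")
```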
Abstract:
This master’s thesis is devoted to the study of different heat flux measurement techniques, such as differential temperature sensors, semi-infinite surface temperature methods, calorimetric sensors and gradient heat flux sensors. The possibility of using Gradient Heat Flux Sensors (GHFS) to measure heat flux in the combustion chamber of compression-ignited reciprocating internal combustion engines was considered in more detail. A. Mityakov conducted an experiment in which a gradient heat flux sensor was placed in the four-stroke diesel engine Indenor XL4D to measure heat flux in the combustion chamber. The results obtained from the experiment were compared with the numerical output of a model. This model (a one-dimensional single-zone model) was implemented with the help of MathCAD, and the result of this implementation is a graph of heat flux in the combustion chamber as a function of the crank angle. The values of heat flux throughout the cycle obtained with the heat flux sensor and theoretically were sufficiently similar, but not identical. Such deviation is rather common for this type of experiment.
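A single-zone heat flux estimate of this kind is often built on a convective correlation such as Woschni's. The sketch below is illustrative only; the correlation choice and all in-cylinder values are assumptions, not the model actually used in the thesis:

```python
def woschni_h(B, p_kPa, T, w):
    """Woschni-type convective heat transfer coefficient, W/(m^2*K).
    B: bore (m), p_kPa: cylinder pressure (kPa),
    T: gas temperature (K), w: characteristic gas velocity (m/s)."""
    return 3.26 * B**-0.2 * p_kPa**0.8 * T**-0.55 * w**0.8

# Assumed in-cylinder state near top dead centre (illustrative values).
B = 0.09        # bore, m
p = 6000.0      # cylinder pressure, kPa
T_gas = 1800.0  # gas temperature, K
T_wall = 450.0  # wall temperature, K
w = 15.0        # characteristic gas velocity, m/s

h = woschni_h(B, p, T_gas, w)
q = h * (T_gas - T_wall)  # convective heat flux to the wall, W/m^2
print(f"h = {h:.0f} W/(m2*K), q = {q / 1e6:.2f} MW/m2")
```

Evaluating this at each crank angle, with pressure and temperature from the single-zone model, produces the heat flux versus crank angle curve mentioned above.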
Abstract:
Crack formation and growth in steel bridge structural elements may be due to loading oscillations. The welded elements are liable to internal discontinuities along welded joints and are sensitive to stress variations. The evaluation of the remaining life of a bridge is needed to make cost-effective decisions regarding inspection, repair, rehabilitation, and replacement. A steel beam model has been proposed to simulate crack openings due to cyclic loads. Two possible alternatives have been considered to model crack propagation, of which the initial phase is based on linear fracture mechanics. The model is then extended to take into account elastoplastic fracture mechanics concepts. The natural frequency changes are directly related to the variation of the moment of inertia and consequently to a reduction in the flexural stiffness of a steel beam. Thus, it is possible to adopt a nondestructive technique during steel bridge inspection to quantify the structure's eigenvalue variation, which will be used to localize the growing fracture. A damage detection algorithm is developed for the proposed model and the numerical results are compared with the solutions achieved by using another well-known computer code.
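The link between a loss of flexural stiffness and a measurable frequency drop can be illustrated with the first bending frequency of a simply supported Euler-Bernoulli beam, f1 = (pi / (2 L^2)) * sqrt(EI / (rho A)). The beam data below are assumed example values, not from the paper:

```python
import math

def first_bending_freq(E, I, rho, A, L):
    """First natural frequency (Hz) of a simply supported Euler-Bernoulli beam."""
    return (math.pi / (2 * L**2)) * math.sqrt(E * I / (rho * A))

# Assumed steel beam (illustrative values).
E = 210e9     # Young's modulus, Pa
I = 8.0e-5    # second moment of area of the intact section, m^4
rho = 7850.0  # density, kg/m^3
A = 0.01      # cross-sectional area, m^2
L = 10.0      # span, m

f_intact = first_bending_freq(E, I, rho, A, L)
# A crack that removes 20% of the section's moment of inertia:
f_cracked = first_bending_freq(E, 0.8 * I, rho, A, L)
print(f"{f_intact:.2f} Hz -> {f_cracked:.2f} Hz")
```

Since f scales with sqrt(I), a 20% stiffness loss lowers the frequency by about 10%, which is the kind of eigenvalue shift the damage detection algorithm exploits.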
Abstract:
Biomedical natural language processing (BioNLP) is a subfield of natural language processing, an area of computational linguistics concerned with developing programs that work with natural language: written texts and speech. Biomedical relation extraction concerns the detection of semantic relations such as protein-protein interactions (PPI) from scientific texts. The aim is to enhance information retrieval by detecting relations between concepts, not just individual concepts as with a keyword search. In recent years, events have been proposed as a more detailed alternative to simple pairwise PPI relations. Events provide a systematic, structural representation for annotating the content of natural language texts. Events are characterized by annotated trigger words, directed and typed arguments and the ability to nest other events. For example, the sentence “Protein A causes protein B to bind protein C” can be annotated with the nested event structure CAUSE(A, BIND(B, C)). Converted to such formal representations, the information of natural language texts can be used by computational applications. Biomedical event annotations were introduced by the BioInfer and GENIA corpora, and event extraction was popularized by the BioNLP'09 Shared Task on Event Extraction. In this thesis we present a method for automated event extraction, implemented as the Turku Event Extraction System (TEES). A unified graph format is defined for representing event annotations, and the problem of extracting complex event structures is decomposed into a number of independent classification tasks. These classification tasks are solved using SVM and RLS classifiers, utilizing rich feature representations built from full dependency parsing. Building on earlier work on pairwise relation extraction and using a generalized graph representation, the resulting TEES system is capable of detecting binary relations as well as complex event structures.
We show that this event extraction system has good performance, achieving first place in the BioNLP'09 Shared Task on Event Extraction. Subsequently, TEES has achieved several first ranks in the BioNLP'11 and BioNLP'13 Shared Tasks, as well as showing competitive performance in the binary relation Drug-Drug Interaction Extraction 2011 and 2013 shared tasks. The Turku Event Extraction System is published as a freely available open-source project, documenting the research in detail as well as making the method available for practical applications. In particular, in this thesis we describe the application of the event extraction method to PubMed-scale text mining, showing how the developed approach not only shows good performance, but is generalizable and applicable to large-scale real-world text mining projects. Finally, we discuss related literature, summarize the contributions of the work and present some thoughts on future directions for biomedical event extraction. This thesis includes and builds on six original research publications. The first of these introduces the analysis of dependency parses that leads to the development of TEES. The entries in the three BioNLP Shared Tasks, as well as in the DDIExtraction 2011 task, are covered in four publications, and the sixth one demonstrates the application of the system to PubMed-scale text mining.
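The nested event CAUSE(A, BIND(B, C)) from the example sentence can be encoded in the kind of unified graph format described above: entities and trigger words become typed nodes, and event arguments become typed, directed edges, with events nesting by pointing at other event nodes. This is an illustrative sketch, not the actual TEES data structures:

```python
# Nodes: named entities and event trigger words; edges: typed argument edges.
nodes = {
    "T1": {"type": "Protein", "text": "protein A"},
    "T2": {"type": "Protein", "text": "protein B"},
    "T3": {"type": "Protein", "text": "protein C"},
    "E1": {"type": "Binding", "trigger": "bind"},
    "E2": {"type": "Cause", "trigger": "causes"},
}
edges = [
    ("E2", "T1", "Cause"),   # the CAUSE event takes protein A as its cause
    ("E2", "E1", "Theme"),   # ...and the nested BIND event as its theme
    ("E1", "T2", "Theme"),   # BIND(B, C)
    ("E1", "T3", "Theme2"),
]

def arguments(event_id):
    """Return the (target, role) pairs of an event node's outgoing edges."""
    return [(dst, role) for src, dst, role in edges if src == event_id]

print(arguments("E2"))  # [('T1', 'Cause'), ('E1', 'Theme')]
```

Decomposing extraction into classification tasks then amounts to predicting first the trigger nodes and then each candidate argument edge of such a graph.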
Abstract:
With the shift towards many-core computer architectures, dataflow programming has been proposed as one potential solution for producing software that scales to a varying number of processor cores. Programming for parallel architectures is considered difficult as the current popular programming languages are inherently sequential and introducing parallelism is typically left to the programmer. Dataflow, however, is inherently parallel, describing an application as a directed graph, where nodes represent calculations and edges represent a data dependency in the form of a queue. These queues are the only allowed communication between the nodes, making the dependencies between the nodes explicit and thereby also the parallelism. Once a node has sufficient inputs available, the node can, independently of any other node, perform calculations, consume inputs, and produce outputs. Dataflow models have existed for several decades and have become popular for describing signal processing applications, as the graph representation is a very natural representation within this field. Digital filters are typically described with boxes and arrows also in textbooks. Dataflow is also becoming more interesting in other domains, and in principle, any application working on an information stream fits the dataflow paradigm. Such applications are, among others, network protocols, cryptography, and multimedia applications. As an example, the MPEG group standardized a dataflow language called RVC-CAL to be used within reconfigurable video coding. Describing a video coder as a dataflow network instead of with conventional programming languages makes the coder more readable, as it describes how the video data flows through the different coding tools. While dataflow provides an intuitive representation for many applications, it also introduces some new problems that need to be solved in order for dataflow to be more widely used.
The explicit parallelism of a dataflow program is descriptive and enables an improved utilization of available processing units; however, the independent nodes also imply that some kind of scheduling is required. The need for efficient scheduling becomes even more evident when the number of nodes is larger than the number of processing units and several nodes are running concurrently on one processor core. There exist several dataflow models of computation, with different trade-offs between expressiveness and analyzability. These vary from rather restricted but statically schedulable, with minimal scheduling overhead, to dynamic, where each firing requires a firing rule to be evaluated. The model used in this work, namely RVC-CAL, is a very expressive language, and in the general case it requires dynamic scheduling; however, the strong encapsulation of dataflow nodes enables analysis, and the scheduling overhead can be reduced by using quasi-static, or piecewise static, scheduling techniques. The scheduling problem is concerned with finding the few scheduling decisions that must be made at run-time, while most decisions are pre-calculated. The result is then an, as small as possible, set of static schedules that are dynamically scheduled. To identify these dynamic decisions and to find the concrete schedules, this thesis shows how quasi-static scheduling can be represented as a model checking problem. This involves identifying the relevant information to generate a minimal but complete model to be used for model checking. The model must describe everything that may affect scheduling of the application while omitting everything else in order to avoid state space explosion. This kind of simplification is necessary to make the state space analysis feasible. For the model checker to find the actual schedules, a set of scheduling strategies is defined which is able to produce quasi-static schedulers for a wide range of applications.
The results of this work show that actor composition with quasi-static scheduling can be used to transform dataflow programs to fit many different computer architectures with different types and numbers of cores. This, in turn, enables dataflow to provide a more platform-independent representation, as one application can be fitted to a specific processor architecture without changing the actual program representation. Instead, the program representation is, in the context of design space exploration, optimized by the development tools to fit the target platform. This work focuses on representing the dataflow scheduling problem as a model checking problem and is implemented as part of a compiler infrastructure. The thesis also presents experimental results as evidence of the usefulness of the approach.
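The firing discipline described above (a node communicates only through its queues and fires when its firing rule is satisfied) can be sketched as follows. The actor below is a hypothetical toy example in Python, not RVC-CAL code:

```python
from collections import deque

class Node:
    """A dataflow actor: fires when every input queue has enough tokens."""
    def __init__(self, func, n_inputs):
        self.func = func
        self.inputs = [deque() for _ in range(n_inputs)]
        self.output = deque()

    def can_fire(self):
        # Firing rule for this toy actor: one token on every input queue.
        return all(q for q in self.inputs)

    def fire(self):
        # Consume one token per input, produce one output token.
        args = [q.popleft() for q in self.inputs]
        self.output.append(self.func(*args))

# A two-input adder node; the queues are its only communication channels.
adder = Node(lambda a, b: a + b, n_inputs=2)
adder.inputs[0].extend([1, 2])
adder.inputs[1].extend([10, 20])
while adder.can_fire():
    adder.fire()
print(list(adder.output))  # [11, 22]
```

Quasi-static scheduling, in these terms, means pre-computing fixed sequences of such `fire()` calls and leaving only a few `can_fire()`-style checks to run-time.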
Abstract:
The amount of biological data has grown exponentially in recent decades. Modern biotechnologies, such as microarrays and next-generation sequencing, are capable of producing massive amounts of biomedical data in a single experiment. As the amount of data is rapidly growing, there is an urgent need for reliable computational methods for analyzing and visualizing it. This thesis addresses this need by studying how to efficiently and reliably analyze and visualize high-dimensional data, especially that obtained from gene expression microarray experiments. First, we study ways to improve the quality of microarray data by replacing (imputing) the missing data entries with estimated values for these entries. Missing value imputation is a method which is commonly used to make the original incomplete data complete, thus making it easier to analyze with statistical and computational methods. Our novel approach was to use curated external biological information as a guide for the missing value imputation. Secondly, we studied the effect of missing value imputation on downstream data analysis methods like clustering. We compared multiple recent imputation algorithms on 8 publicly available microarray data sets. It was observed that missing value imputation indeed is a rational way to improve the quality of biological data. The research revealed differences between the clustering results obtained with different imputation methods. On most data sets, the simple and fast k-NN imputation was good enough, but there was also a need for more advanced imputation methods, such as the Bayesian Principal Component Algorithm (BPCA). Finally, we studied the visualization of biological network data. Biological interaction networks are examples of the outcome of multiple biological experiments, such as those using gene microarray techniques.
Such networks are typically very large and highly connected, thus there is a need for fast algorithms for producing visually pleasing layouts. A computationally efficient way to produce layouts of large biological interaction networks was developed. The algorithm uses multilevel optimization within the regular force-directed graph layout algorithm.
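The k-NN imputation mentioned above replaces a missing entry with an average over the k rows (e.g. genes) most similar on the columns observed in that row. A minimal sketch on a toy expression matrix; the data and the exact distance and averaging choices are illustrative assumptions, not the compared implementations:

```python
import numpy as np

def knn_impute(X, k=2):
    """Fill NaNs in each row with the mean of the k nearest complete rows,
    using Euclidean distance over the columns observed in that row."""
    X = X.copy()
    complete = X[~np.isnan(X).any(axis=1)]  # rows with no missing values
    for row in X:                           # rows are views; edits persist
        miss = np.isnan(row)
        if not miss.any():
            continue
        d = np.sqrt(((complete[:, ~miss] - row[~miss]) ** 2).sum(axis=1))
        nearest = complete[np.argsort(d)[:k]]
        row[miss] = nearest[:, miss].mean(axis=0)
    return X

# Toy 4x3 matrix with one missing entry in the last row.
X = np.array([[1.0, 2.0, 3.0],
              [1.1, 2.1, 3.2],
              [5.0, 6.0, 7.0],
              [1.0, np.nan, 3.1]])
print(knn_impute(X, k=2))
```

The first two rows are closest to the incomplete one, so the missing entry becomes the mean of their values in that column (2.05), while the distant third row is ignored.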