975 results for Matrix Analytic Methods
Abstract:
A study of the partial USEPA 3050B and total ISO 14869-1:2001 digestion methods for sediments was performed. USEPA 3050B was recommended as the simpler method with less operational risk; however, the extraction ability of the method should be taken into account for the best environmental interpretation of the results. FAAS was used to quantify metal concentrations in sediment solutions. The alternative use of ICP-OES quantification should be conditioned on a prior detailed investigation and, where necessary, correction of the matrix effect. For the first time, the EID method was employed for the detection and correction of the matrix effect in sediment ICP-OES analysis. Finally, some considerations were made about the level of metal contamination in the area under study.
Abstract:
A simple, precise, specific, repeatable and discriminating dissolution test for primaquine (PQ) matrix tablets was developed and validated according to ICH and FDA guidelines. Two UV assay methods were validated for determination of PQ released in 0.1 M hydrochloric acid and water media. Both methods were linear (R² > 0.999), precise (R.S.D. < 1.87%) and accurate (97.65-99.97%). Dissolution efficiency (69-88%) and equivalence of formulations (f2) were assessed in the different media and apparatuses (basket/100 rpm and paddle/50 rpm) tested. The discriminating condition was 900 mL of aqueous medium, basket at 100 rpm and sampling times at 1, 4 and 8 h. Repeatability (R.S.D. < 2.71%) and intermediate precision (R.S.D. < 2.06%) of the dissolution method were satisfactory.
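The equivalence measure cited above, f2, is the similarity factor used in ICH/FDA dissolution guidance; a minimal sketch of the standard calculation (the profile values below are hypothetical, not data from the study):

```python
import numpy as np

def f2_similarity(reference, test):
    """f2 similarity factor for two dissolution profiles sampled at the same
    time points (percent dissolved); profiles are generally considered
    equivalent when f2 >= 50."""
    r = np.asarray(reference, dtype=float)
    t = np.asarray(test, dtype=float)
    msd = np.mean((r - t) ** 2)                    # mean squared difference
    return 50.0 * np.log10(100.0 / np.sqrt(1.0 + msd))

# e.g. profiles sampled at the 1, 4 and 8 h points of the discriminating condition
print(f2_similarity([35, 70, 90], [33, 68, 91]))   # ~85, i.e. equivalent profiles
```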
Abstract:
Cat's claw oxindole alkaloids are prone to isomerization in aqueous solution; however, studies on their behavior during extraction processes are scarce. This paper addressed the issue by considering five commonly used extraction processes. Unlike dynamic maceration (DM) and ultrasound-assisted extraction, static maceration, turbo-extraction and reflux extraction induced substantial isomerization. After heating under reflux in DM, the kinetic order of isomerization was established and the equations were fitted successfully using a four-parameter Weibull model (R² > 0.999). Different isomerization rates and equilibrium constants were verified, revealing a possible matrix effect on alkaloid isomerization.
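As a point of reference for the kinetic fitting described above, here is a minimal curve-fitting sketch using one common four-parameter Weibull form; the parameterization, starting values and data are illustrative assumptions, not the paper's actual model or measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull4(t, y0, yinf, k, d):
    # Assumed four-parameter Weibull form: y(t) = yinf - (yinf - y0) * exp(-(k*t)**d)
    return yinf - (yinf - y0) * np.exp(-(k * t) ** d)

# Hypothetical isomer-fraction data over heating time (h)
t = np.array([0.0, 0.5, 1, 2, 4, 8, 12])
y = np.array([0.05, 0.18, 0.28, 0.40, 0.49, 0.53, 0.54])

popt, _ = curve_fit(weibull4, t, y, p0=[0.05, 0.55, 0.5, 1.0])
residuals = y - weibull4(t, *popt)
r2 = 1 - residuals @ residuals / np.sum((y - y.mean()) ** 2)   # goodness of fit
print(popt, r2)
```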
Abstract:
Machine learning provides tools for the automated construction of predictive models in data-intensive areas of engineering and science. The family of regularized kernel methods has in recent years become one of the mainstream approaches to machine learning, due to a number of advantages the methods share. The approach provides theoretically well-founded solutions to the problems of under- and overfitting, allows learning from structured data, and has been empirically demonstrated to yield high predictive performance on a wide range of application domains. Historically, the problems of classification and regression have gained the majority of attention in the field. In this thesis we focus on another type of learning problem, that of learning to rank. In learning to rank, the aim is to learn, from a set of past observations, a ranking function that can order new objects according to how well they match some underlying criterion of goodness. As an important special case of the setting, we can recover the bipartite ranking problem, corresponding to maximizing the area under the ROC curve (AUC) in binary classification. Ranking applications appear in a large variety of settings; examples encountered in this thesis include document retrieval in web search, recommender systems, information extraction and automated parsing of natural language. We consider the pairwise approach to learning to rank, where ranking models are learned by minimizing the expected probability of ranking any two randomly drawn test examples incorrectly. The development of computationally efficient kernel methods based on this approach has in the past proven to be challenging. Moreover, it is not clear which techniques for estimating the predictive performance of learned models are the most reliable in the ranking setting, or how these techniques can be implemented efficiently. The contributions of this thesis are as follows. First, we develop RankRLS, a computationally efficient kernel method for learning to rank that is based on minimizing a regularized pairwise least-squares loss. In addition to training methods, we introduce a variety of algorithms for tasks such as model selection, multi-output learning, and cross-validation, based on computational shortcuts from matrix algebra. Second, we improve the fastest known training method for the linear version of the RankSVM algorithm, which is one of the most well-established methods for learning to rank. Third, we study the combination of the empirical kernel map and reduced set approximation, which allows the large-scale training of kernel machines using linear solvers, and propose computationally efficient solutions to cross-validation when using the approach. Next, we explore the problem of reliable cross-validation when using AUC as a performance criterion, through an extensive simulation study. We demonstrate that the proposed leave-pair-out cross-validation approach leads to more reliable performance estimation than commonly used alternative approaches. Finally, we present a case study on applying machine learning to information extraction from biomedical literature, which combines several of the approaches considered in the thesis. The thesis is divided into two parts: Part I provides the background for the research work and summarizes the most central results, while Part II consists of the five original research articles that are the main contribution of this thesis.
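The pairwise least-squares idea behind RankRLS can be illustrated with a small linear sketch: minimize the squared error of predicted score differences over all pairs, plus a ridge penalty. This toy version is an illustration of the idea only, not the thesis's kernelized RankRLS or its matrix-algebra shortcuts:

```python
import numpy as np

def fit_pairwise_ls_ranker(X, y, lam=1.0):
    """Linear ranker minimizing sum over pairs (i, j) of
    ((y_i - y_j) - w.(x_i - x_j))^2 + lam * ||w||^2.
    Since sum_{i,j} (a_i - a_j)^2 = 2 * a^T L a with L = n*I - 1*1^T
    (the Laplacian of the complete pair graph), the minimizer is
    w = (X^T L X + lam*I)^(-1) X^T L y."""
    n, d = X.shape
    L = n * np.eye(n) - np.ones((n, n))
    return np.linalg.solve(X.T @ L @ X + lam * np.eye(d), X.T @ L @ y)

# Toy usage: the learned scores induce an ordering of new objects; only the order matters.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1 * rng.normal(size=50)
w = fit_pairwise_ls_ranker(X, y)
scores = X @ w            # higher score = ranked higher
```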
Abstract:
This book is dedicated to celebrating the 60th birthday of Professor Rainer Huopalahti. Professor Rainer “Repe” Huopalahti has had, and in fact is still enjoying, a distinguished career in the analysis of food and food-related flavor compounds. One will find it hard to make any progress in this particular field without a valid and innovative sample handling technique, and this is a field in which Professor Huopalahti has made great contributions. The title and the front cover of this book honor Professor Huopalahti’s early steps in science. His PhD thesis, published in 1985, is entitled “Composition and content of aroma compounds in the dill herb, Anethum graveolens L., affected by different factors”. At that time, the thesis introduced new technology applied to the sample handling and analysis of the flavoring compounds of dill. Sample handling is an essential task in just about every analysis. If one is working with minor compounds in a sample or trying to detect trace levels of analytes, one of the aims of sample handling may be to increase the sensitivity of the analytical method. On the other hand, if one is working with a challenging matrix, such as the kind found in biological samples, one of the aims is to increase the selectivity. However, quite often the aim is to increase both the selectivity and the sensitivity. This book provides good and representative examples of the necessity of valid sample handling and of the role of sample handling in the analytical method. The contributors to the book are leading Finnish scientists in the field of organic instrumental analytical chemistry. Some of them are also Repe’s personal friends and former students from the University of Turku, Department of Biochemistry and Food Chemistry. Importantly, the authors all know Repe in one way or another and are well aware of his achievements in the field of analytical chemistry. The editorial team had a great time during the planning phase and during the “hard work editorial phase” of the book. For example, we came up with many ideas on how to publish the book. After many long discussions, we decided to have a limited edition as an “old school hard cover book” – and to acknowledge more modern ways of disseminating knowledge by publishing an internet version of the book on the web pages of the University of Turku. Downloading the book from the web page for personal use is free of charge. We believe and hope that the book will be read with great interest by scientists working in the fascinating field of organic instrumental analytical chemistry. We decided to publish our book in English for two main reasons. First, we believe that in the near future more and more teaching in Finnish universities will be delivered in English. To facilitate this process and encourage students to develop good language skills, it was decided to publish the book in English. Secondly, we believe that the book will also interest scientists outside Finland – particularly in the other member states of the European Union. The editorial team thanks all the authors for their willingness to contribute to this book – and to adhere to the very strict schedule. We also want to thank the various individuals and enterprises who financially supported the book project. Without that support, it would not have been possible to publish the hardcover book.
Abstract:
This study uses several measures derived from the error matrix to compare two thematic maps generated from the same sample set. The reference map was generated with all the sample elements, and the map taken as the model was generated without the two points detected as influential by local influence diagnostics. The data analyzed refer to wheat productivity in an agricultural area of 13.55 ha, considering a sampling grid of 50 x 50 m comprising 50 georeferenced sample elements. The comparison measures derived from the error matrix indicated that, despite some similarity between the maps, they are different. The difference between the production estimated by the reference map and the actual production was 350 kilograms; the same difference calculated with the model map was 50 kilograms, indicating that the study of influential points is of fundamental importance for obtaining a more reliable estimate, and that the use of measures obtained from the error matrix is a good option for comparing thematic maps.
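The abstract does not list which error-matrix measures were used; two standard ones for comparing thematic maps are overall accuracy and Cohen's kappa. A minimal sketch, with a purely hypothetical cross-tabulation of the two productivity-class maps over the 50 sample points:

```python
import numpy as np

def error_matrix_measures(cm):
    """Overall accuracy and Cohen's kappa from an error (confusion) matrix."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    p_o = np.trace(cm) / n                                  # observed agreement
    p_e = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2  # chance agreement
    return p_o, (p_o - p_e) / (1 - p_e)

cm = [[18, 3, 0],     # rows: classes on the reference map
      [2, 15, 4],     # columns: classes on the model map
      [0, 2, 6]]
print(error_matrix_measures(cm))
```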
Abstract:
Bacteria can exist as planktonic cells, the lifestyle in which single cells live in suspension, and as biofilms, which are surface-attached bacterial communities embedded in a self-produced matrix. Most antibiotics and antimicrobial testing methods have been developed for planktonic bacteria; however, the majority of bacteria in natural habitats live as biofilms. Biofilms develop high resistance towards conventional antibacterial treatments dauntingly fast, and thus there is a great need for effective anti-biofilm therapy. This thesis project attempted to fill the void in anti-biofilm screening methods by developing a platform of assays that evaluate the effect that screened compounds have on the total biomass, viability and extracellular polysaccharide (EPS) layer of biofilms. Additionally, a new method for studying biofilms and their interactions with compounds in a continuous-flow system was developed using capillary electrochromatography (CEC). The screening platform was applied in a screening campaign using a small library of cinchona alkaloids. The assays were optimized to be statistically robust enough for screening. The first assay, based on crystal violet staining, measures total biofilm biomass; it was automated using a liquid-handling workstation to decrease the manual workload and signal variation. The second assay, based on resazurin staining, measures the viability of the biofilm; it was thoroughly optimized for the strain used, but was then a very simple and fast method for primary screening. The fluorescent resazurin probe is not toxic to the biofilms. In fact, it was also shown in this project that staining the biofilms with resazurin prior to staining with crystal violet had no effect on the latter, so the two stains can be used in sequence on the same screening plate. This sequential staining step was a major improvement in the use of reagents and consumables and also shortened the working time. As a third assay in the platform, a wheat germ agglutinin-based assay was added to evaluate the effect a compound has on the EPS layer. Using this assay it was found that even if compounds have a clear effect on both biomass and viability, the EPS layer can be left untouched or even increased. This is a clear indication of the importance of using several assays in order to find “true hits” in a screening setting. In the pilot screening study for antimicrobial and anti-biofilm effects using a cinchona alkaloid library, one compound was found to have an antimicrobial effect against planktonic bacteria and to prevent biofilm formation at low micromolar concentrations. To eradicate existing biofilms, a higher concentration was needed. It was also shown that the chemical space occupied by the active compound was slightly different from that of the rest of the cinchona alkaloids, as well as the rest of the compounds used for validatory screening during the optimization of the separate assays.
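The abstract says the assays were optimized to be statistically robust enough for screening but does not name the statistic; one metric commonly used for that purpose is the Z'-factor, sketched below with hypothetical control signals:

```python
import numpy as np

def z_prime(positive, negative):
    """Z'-factor for assay robustness: 1 - 3*(sd_pos + sd_neg)/|mean_pos - mean_neg|.
    Values above ~0.5 are usually considered excellent for screening."""
    pos, neg = np.asarray(positive, float), np.asarray(negative, float)
    return 1 - 3 * (pos.std(ddof=1) + neg.std(ddof=1)) / abs(pos.mean() - neg.mean())

# e.g. crystal violet absorbance of untreated biofilms vs. sterile medium controls
print(z_prime([1.95, 2.10, 2.02, 1.88], [0.12, 0.15, 0.10, 0.13]))
```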
Abstract:
OBJECTIVE: to evaluate the role of fibrillar extracellular matrix components in the pathogenesis of inguinal hernias. METHODS: samples of the transverse fascia and of the anterior sheath of the rectus abdominis muscle were collected from 40 men aged between 20 and 60 years with type II and IIIA Nyhus inguinal hernia and from 10 fresh male cadavers (controls) without hernia in the same age range. Immunohistochemical staining was performed for collagen I, collagen III and elastic fibers; quantification of the fibrillar components was carried out with image analysis software. RESULTS: no statistically significant differences were found in the amounts of elastic fibers, collagen I and collagen III, or in the collagen I/III ratio, between patients with inguinal hernia and subjects without hernia. CONCLUSION: the amount of fibrillar extracellular matrix components did not differ between patients with and without inguinal hernia.
Abstract:
Preparative liquid chromatography is one of the most selective separation techniques in the fine chemical, pharmaceutical, and food industries. Several process concepts have been developed and applied to improve the performance of classical batch chromatography. The most powerful approaches include various single-column recycling schemes, counter-current and cross-current multi-column setups, and hybrid processes where chromatography is coupled with other unit operations such as crystallization, a chemical reactor, and/or a solvent removal unit. To fully utilize the potential of stand-alone and integrated chromatographic processes, efficient methods for selecting the best process alternative as well as optimal operating conditions are needed. In this thesis, a unified method is developed for the analysis and design of the following single-column fixed bed processes and corresponding cross-current schemes: (1) batch chromatography, (2) batch chromatography with an integrated solvent removal unit, (3) mixed-recycle steady state recycling chromatography (SSR), and (4) mixed-recycle steady state recycling chromatography with solvent removal from the fresh feed, the recycle fraction, or the column feed (SSR–SR). The method is based on the equilibrium theory of chromatography with an assumption of negligible mass transfer resistance and axial dispersion. The design criteria are given in a general, dimensionless form that is formally analogous to that applied widely in the so-called triangle theory of counter-current multi-column chromatography. Analytical design equations are derived for binary systems that follow the competitive Langmuir adsorption isotherm model. For this purpose, the existing analytic solution of the ideal model of chromatography for binary Langmuir mixtures is completed by deriving the missing explicit equations for the height and location of the pure first-component shock in the case of a small feed pulse. It is thus shown that the entire chromatographic cycle at the column outlet can be expressed in closed form. The developed design method allows predicting the feasible range of operating parameters that lead to the desired product purities. It can be applied for the calculation of first estimates of optimal operating conditions, the analysis of process robustness, and the early-stage evaluation of different process alternatives. The design method is utilized to analyse the possibility of enhancing the performance of conventional SSR chromatography by integrating it with a solvent removal unit. It is shown that the amount of fresh feed processed during a chromatographic cycle, and thus the productivity of the SSR process, can be improved by removing solvent. The maximum solvent removal capacity depends on the location of the solvent removal unit and the physical solvent removal constraints, such as solubility, viscosity, and/or osmotic pressure limits. Usually, the most flexible option is to remove solvent from the column feed. The applicability of the equilibrium design to real, non-ideal separation problems is evaluated by means of numerical simulations. Due to the assumption of infinite column efficiency, the developed design method is most applicable for high-performance systems where thermodynamic effects are predominant, while significant deviations are observed under highly non-ideal conditions. The findings based on the equilibrium theory are applied to develop a shortcut approach for the design of chromatographic separation processes under strongly non-ideal conditions with significant dispersive effects.
The method is based on a simple procedure applied to a single conventional chromatogram. The applicability of the approach for the design of batch and counter-current simulated moving bed processes is evaluated with case studies. It is shown that the shortcut approach works better the higher the column efficiency and the lower the purity constraints.
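For reference, the competitive Langmuir adsorption isotherm assumed for the binary systems above can be written in a few lines; the sketch below only restates the model, with parameter values left to the specific separation problem:

```python
def competitive_langmuir(c1, c2, a1, b1, a2, b2):
    """Competitive Langmuir isotherm for a binary mixture:
    q_i = a_i * c_i / (1 + b1*c1 + b2*c2), i = 1, 2."""
    denom = 1.0 + b1 * c1 + b2 * c2
    return a1 * c1 / denom, a2 * c2 / denom

# e.g. stationary-phase loadings at fluid-phase concentrations c1 = 2, c2 = 1 g/L
q1, q2 = competitive_langmuir(2.0, 1.0, a1=3.0, b1=0.05, a2=4.5, b2=0.08)
```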
Abstract:
Angiotensin II (ANG II), the main effector of the renin-angiotensin system, is implicated in endothelial permeability, recruitment and activation of immune cells, and also vascular remodeling through the induction of inflammatory genes. Matrix metalloproteinases (MMPs) are considered to be important inflammatory factors. Elucidation of ANG II signaling pathways and of possible cross-talk between their components is essential for the development of efficient inhibitory medications. The current study investigates the inflammatory signaling pathways activated by ANG II in cultures of human monocytic U-937 cells, and the effects of specific pharmacological inhibitors of signaling intermediates on MMP-9 gene (MMP-9) expression and activity. MMP-9 expression was determined by real-time PCR, and supernatants were analyzed for MMP-9 activity by ELISA and zymography. A multi-target ELISA kit was employed to evaluate IκB, NF-κB, JNK, p38, and STAT3 activation following treatments. Stimulation with ANG II (100 nM) significantly increased MMP-9 expression and activity, and also activated NF-κB, JNK, and p38 by 3.8-, 2.8- and 2.2-fold, respectively (P < 0.01). ANG II-induced MMP-9 expression was significantly reduced, by 75 and 67%, respectively, by co-incubation of the cells with a selective inhibitor of protein kinase C (GF109203X, 5 µM) or of Rho kinase (Y-27632, 15 µM), but not with inhibitors of phosphoinositide 3-kinase (wortmannin, 200 nM), tyrosine kinases (genistein, 100 µM) or reactive oxygen species (α-tocopherol, 100 µM). Thus, protein kinase C and Rho kinase are important components of the inflammatory signaling pathways activated by ANG II to increase MMP-9 expression in monocytic cells. Both signaling molecules may constitute potential targets for the effective management of inflammation.
Abstract:
Milk and egg matrices were assayed for aflatoxin M1 (AFM1) and B1 (AFB1), respectively, by AOAC official and modified methods, with detection and quantification by thin layer chromatography (TLC) and high performance thin layer chromatography (HPTLC). The modified methods, Blanc followed by Romer, proved to be the most appropriate for AFM1 analysis in milk. Both methods reduced emulsion formation, produced cleaner extracts with no streaking spots, and improved precision and accuracy, especially when quantification was performed by HPTLC. The use of a ternary mixture in the Blanc method was advantageous, as the solvent could extract AFM1 directly in the first stage (extraction), leaving other compounds in the binary mixture layer, avoiding emulsion formation and thus reducing toxin loss. The relative standard deviation (RSD%) values were low, 16 and 7% when TLC and HPTLC were used, with mean recoveries of 94 and 97%, respectively. As far as the egg matrix and final extract are concerned, both methods evaluated for AFB1 need further study. Although that matrix leads to emulsion formation with consequent loss of toxin, the modified Romer method produced a reasonably clean extract (mean recoveries of 92 and 96% for TLC and HPTLC, respectively). Most of the methods studied did not perform as expected, mainly due to the matrices' high content of triglycerides (rich in saturated fatty acids), cholesterol, carotene and proteins. Although most current methodology for AFM1 is based on HPLC, TLC determination (modified Blanc and Romer) of AFM1 and AFB1 is particularly recommended for those inexperienced in food and feed mycotoxin analysis, and especially for those who cannot afford sophisticated (HPLC, HPTLC) instrumentation.
Abstract:
In today's complex and volatile business environment, companies that are able to turn the operational data they generate into knowledge repositories can achieve a significant competitive advantage. Using predictive analytics to anticipate future trends allows companies to identify the key factors that set them apart from their competitors. Incorporating predictive analytics into the decision-making process enables more agile, real-time decision-making. The purpose of this Master's thesis is to assemble a theoretical framework for analytics modelling from the perspective of the business end user and to apply this modelling process to the case company of the thesis. The theoretical model was used to model customer relationships and to identify predictive factors for sales forecasting. The work was carried out for a Finnish wholesaler of industrial filters with operations in Finland, Russia, and the Baltic states. This study is a quantitative case study in which the case company's transaction data served as the primary data source. The data was obtained from the company's enterprise resource planning (ERP) system.
Abstract:
Intelligence from a human source that is falsely thought to be true is potentially more harmful than a total lack of it. The veracity assessment of gathered intelligence is one of the most important phases of the intelligence process. Lie detection and veracity assessment methods have been studied widely, but a comprehensive analysis of these methods' applicability is lacking. There are some problems related to the efficacy of lie detection and veracity assessment. According to a conventional belief, an almighty lie detection method exists that is almost 100% accurate and suitable for any social encounter. However, scientific studies have shown that this is not the case, and popular approaches are often oversimplified. The main research question of this study was: what is the applicability of veracity assessment methods that are reliable and based on scientific proof, in terms of the following criteria?
o Accuracy, i.e. probability of detecting deception successfully
o Ease of Use, i.e. easiness to apply the method correctly
o Time Required to apply the method reliably
o No Need for Special Equipment
o Unobtrusiveness of the method
In order to answer the main research question, the following supporting research questions were answered first: what kinds of interviewing and interrogation techniques exist and how could they be used in the intelligence interview context; what kinds of lie detection and veracity assessment methods exist that are reliable and based on scientific proof; and what kinds of uncertainty and other limitations are included in these methods? Two major databases, Google Scholar and Science Direct, were used to search and collect existing topic-related studies and other papers. After the search phase, an understanding of the existing lie detection and veracity assessment methods was established through a meta-analysis. A Multi-Criteria Analysis utilizing the Analytic Hierarchy Process was conducted to compare scientifically valid lie detection and veracity assessment methods in terms of the assessment criteria. In addition, a field study was arranged to get first-hand experience of the applicability of different lie detection and veracity assessment methods. The Studied Features of Discourse and the Studied Features of Nonverbal Communication gained the highest ranking in overall applicability. They were assessed to be the easiest and fastest to apply, and to have the required temporal and contextual sensitivity. The Plausibility and Inner Logic of the Statement, the Method for Assessing the Credibility of Evidence and the Criteria-Based Content Analysis were also found to be useful, but with some limitations. The Discourse Analysis and the Polygraph were assessed to be the least applicable. Results from the field study support these findings. However, it was also discovered that the most applicable methods are not entirely trouble-free either. In addition, this study highlighted that three channels of information, Content, Discourse and Nonverbal Communication, can be subjected to veracity assessment methods that are scientifically defensible. There is at least one reliable and applicable veracity assessment method for each of the three channels. All of the methods require disciplined application and a scientific working approach. There are no quick gains if high accuracy and reliability are desired.
Since most current lie detection studies are concentrated around a scenario where roughly half of the assessed people are totally truthful and the other half are liars who present a well-prepared cover story, it is proposed that in future studies lie detection and veracity assessment methods be tested against partially truthful human sources. This kind of test setup would highlight new challenges and opportunities for the use of existing and widely studied lie detection methods, as well as for the modern ones that are still under development.
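The Multi-Criteria Analysis above uses the Analytic Hierarchy Process; a minimal sketch of the core AHP computation (priority weights from a pairwise comparison matrix plus a consistency check) follows. The judgment values are hypothetical placeholders for the five criteria listed in the abstract, not the thesis's actual comparisons:

```python
import numpy as np

# Hypothetical Saaty-scale pairwise comparisons of the five criteria
# (Accuracy, Ease of Use, Time Required, No Need for Special Equipment, Unobtrusiveness).
A = np.array([
    [1,   3,   5,   7,   5],
    [1/3, 1,   3,   5,   3],
    [1/5, 1/3, 1,   3,   1],
    [1/7, 1/5, 1/3, 1,   1/3],
    [1/5, 1/3, 1,   3,   1],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                          # priority weights (principal eigenvector)

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)  # consistency index
cr = ci / 1.12                        # consistency ratio; 1.12 is Saaty's random index for n = 5
print(np.round(w, 3), round(cr, 3))   # judgments are usually accepted when CR < 0.1
```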
Abstract:
The aim of this Master's thesis is to find a method for classifying spare part criticality in the case company. Several approaches exist for the criticality classification of spare parts. The practical problem in this thesis is the lack of a generic analysis method for classifying spare parts of the case company's proprietary equipment. In order to find a classification method, a literature review of various analysis methods is required. The requirements of the case company also have to be recognized; this is achieved by consulting professionals in the company. The literature review indicates that the analytic hierarchy process (AHP) combined with decision tree models is a common method for classifying spare parts in the academic literature. Most of the literature discusses spare part criticality from a stock-holding perspective. This is a relevant perspective also for a customer-oriented original equipment manufacturer (OEM) such as the case company. A decision tree model is developed for classifying spare parts. The decision tree classifies spare parts into five criticality classes according to five criteria: safety risk, availability risk, functional criticality, predictability of failure and probability of failure. The criticality classes describe the level of criticality from non-critical to highly critical. The method is verified by classifying the spare parts of a full deposit stripping machine. The classification can be utilized as a generic model for recognizing critical spare parts of other similar equipment, on the basis of which spare part recommendations can be created. The purchase price of an item and equipment criticality were found to have no effect on spare part criticality in this context. The decision tree is recognized as the most suitable method for classifying spare part criticality in the company.
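A decision tree over the five criteria named above can be sketched as plain branching logic; the ordering of the checks and the thresholds below are illustrative assumptions, not the tree actually developed in the thesis:

```python
def classify_spare_part(safety_risk: bool,
                        availability_risk: bool,
                        functional_criticality: str,       # "high" | "medium" | "low"
                        failure_predictable: bool,
                        failure_probability: str) -> int:   # "high" | "low"
    """Return a criticality class from 1 (non-critical) to 5 (highly critical)."""
    if safety_risk:
        return 5                                 # any safety impact -> highly critical
    if availability_risk and functional_criticality == "high":
        return 4
    if functional_criticality == "high":
        return 2 if failure_predictable else 3   # predictable failures can be planned for
    if failure_probability == "high":
        return 2
    return 1

print(classify_spare_part(False, True, "high", False, "high"))   # -> 4
```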
Abstract:
Euclidean distance matrix analysis (EDMA) methods are used to determine whether or not a significant difference exists between conformational samples of antibody complementarity determining region (CDR) loops: the isolated L1 loop and L1 in a three-loop assembly (L1, L3 and H3), obtained from Monte Carlo simulation. Once a significant difference is detected, the specific inter-Cα distances that contribute to the difference are identified using EDMA. The estimated and improved mean forms of the conformational samples of the isolated L1 loop and of L1 in the three-loop assembly, CDR loops of the antibody binding site, are described using EDMA and distance geometry (DGEOM). To the best of our knowledge, this is the first time the EDMA methods have been used to analyze conformational samples of molecules obtained from Monte Carlo simulations. Therefore, validations of the EDMA methods using both positive-control and negative-control tests on the conformational samples of the isolated L1 loop and L1 in the three-loop assembly must be done. The EDMA-I bootstrap null hypothesis tests showed false positive results for the comparison of six samples of the isolated L1 loop and true positive results for the comparison of conformational samples of the isolated L1 loop and L1 in the three-loop assembly. The bootstrap confidence interval tests revealed true negative results for comparisons of the six samples of the isolated L1 loop, and false negative results for the conformational comparisons between the isolated L1 loop and L1 in the three-loop assembly. Different conformational sample sizes were further explored, either by combining the samples of the isolated L1 loop to increase the sample size, or by clustering the samples using a self-organizing map (SOM) to narrow the conformational distribution of the samples being compared. However, no improvement was obtained for either the bootstrap null hypothesis or the confidence interval tests. These results show that more work is required before EDMA methods can be used reliably for the comparison of samples obtained by Monte Carlo simulations.
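The core EDMA comparison can be pictured as computing all inter-landmark (here inter-Cα) distances for each conformation and comparing mean form matrices between two samples. The sketch below is a simplified illustration that averages distance matrices directly; the actual EDMA-I estimator and its bootstrap tests are more involved:

```python
import numpy as np
from scipy.spatial.distance import pdist

def form_matrix(conformation):
    """All pairwise inter-landmark Euclidean distances for one conformation,
    given as an (n_landmarks, 3) array of coordinates (e.g. C-alpha atoms)."""
    return pdist(conformation)

def edma_form_difference(sample_a, sample_b):
    """Ratios of mean inter-landmark distances between two conformational
    samples (arrays of shape (n_conformations, n_landmarks, 3)). Ratios far
    from 1 flag the specific distances driving the difference; the EDMA-I-style
    statistic max(ratio)/min(ratio) summarizes the overall form difference."""
    mean_a = np.mean([form_matrix(c) for c in sample_a], axis=0)
    mean_b = np.mean([form_matrix(c) for c in sample_b], axis=0)
    ratios = mean_a / mean_b
    return ratios, ratios.max() / ratios.min()
```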