729 results for Labels


Relevance: 10.00%

Publisher:

Abstract:

Throughout the ages, men and women have incessantly pursued every means to increase, preserve or recapture their sexual capacity, or to stimulate the sexual desire of selected individuals. One of the most recurrent methods has been the use of natural aphrodisiacs. Nowadays, the commercialization of new synthetic "love drugs", e.g. Viagra®, Cialis® and Levitra®, has captured public interest and has led to a reassessment of classical aphrodisiacs and to the search for new ones. The practice of self-medication by an increasing number of patients, the incessant aggressive advertising of herbal aphrodisiacs, the invasion of the medicinal market by uncontrolled dietary supplements and the absence of real directives amplify the potential health hazards to the community. In order to evaluate the possible risks of commercialized aphrodisiac products to consumer health, the development and validation of a rapid qualitative and quantitative method for the analysis of yohimbine in these products is reported in the first part of the present work. Yohimbine, a pharmacologically well-characterized α2-adrenoceptor antagonist with activity in the central and peripheral nervous system, has been used for over a century in the treatment of erectile dysfunction. The analytical method is based on liquid chromatography coupled with ultraviolet detection and mass spectrometry (LC-UV-MS); in total, 20 commercially available aphrodisiac preparations were analyzed. The amount of yohimbine measured, expressed as the maximal daily dose suggested on the product labels, ranged from 1.32 to 23.16 mg.

The second part of this work involved the phytochemical and pharmacological investigation of Erythroxylum vacciniifolium Mart. (Erythroxylaceae), a plant used in Brazilian traditional medicine as an aphrodisiac and tonic, and locally known as catuaba. With the aim of obtaining preliminary structural information on-line, the alkaloid extract was analyzed by high performance liquid chromatography (HPLC) coupled to diode array UV detection (LC-UV-DAD), to mass spectrometry (LC-MS) and to nuclear magnetic resonance spectroscopy (LC-NMR). Interpretation of the on-line spectroscopic data led to structure elucidation and partial identification of 24 potentially original alkaloids bearing the same tropane skeleton. Seventeen new tropane alkaloids were then isolated from the alkaloid extract of the plant, including catuabines D to I, their derivatives, and vaccinines A and B. All compounds were elucidated as tropane-diol or -triol alkaloids esterified by at least one 1-methyl-1H-pyrrole-2-carboxylic acid group. One of the isolated compounds was identified as a tropane alkaloid N-oxide. Their structures were determined by high resolution mass spectrometry and multi-dimensional NMR spectroscopy. Among the numerous bioassays undertaken, only the cytotoxicity tests revealed weak activity for certain of these compounds.

Relevance: 10.00%

Publisher:

Abstract:

Many classification systems rely on clustering techniques in which a collection of training examples is provided as input, and a number of clusters c1, ..., cm modelling some concept C result as output, such that every cluster ci is labelled as positive or negative. Given a new, unlabelled instance enew, this classification is used to determine to which particular cluster ci the new instance belongs. In such a setting clusters can overlap, and a new unlabelled instance can be assigned to more than one cluster with conflicting labels. In the literature, such a case is usually resolved non-deterministically by making a random choice. This paper presents a novel hybrid approach to this situation, combining a neural network for classification with a defeasible argumentation framework that models preference criteria for performing clustering.
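As a toy illustration of the conflict the abstract describes (none of this code comes from the paper; the clusters, their radii and the distance-based preference criterion are all hypothetical stand-ins for the argumentation framework), the usual random tie-break can be replaced by an explicit preference criterion:

```python
import math

# Two overlapping clusters with conflicting labels. A new instance that
# falls inside both is resolved by a preference criterion (here: the
# smallest distance to a cluster centroid) rather than a random choice.
clusters = [
    {"centroid": (0.0, 0.0), "radius": 2.0, "label": "positive"},
    {"centroid": (2.5, 0.0), "radius": 2.0, "label": "negative"},
]

def classify(instance):
    # Collect every cluster the instance belongs to (clusters may overlap).
    members = [c for c in clusters
               if math.dist(instance, c["centroid"]) <= c["radius"]]
    if not members:
        return None
    # Preference criterion: the closest centroid wins the conflict.
    best = min(members, key=lambda c: math.dist(instance, c["centroid"]))
    return best["label"]

print(classify((1.0, 0.0)))  # inside both clusters; closer to the first
```

The point of the defeasible-argumentation approach is precisely that such criteria can be stated, compared and defeated explicitly instead of being buried in an arbitrary tie-break.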

Relevance: 10.00%

Publisher:

Abstract:

The goal of this thesis was to study the suitability of a software package, designed by a reseller of an ERP system, as a solution for technical wholesale trade. The thesis reviews the software package, which in addition to the ERP system includes several other applications. Together the programs form a complete solution specifically for wholesale trade. The business environment of technical wholesale is also reviewed, and the strengths and weaknesses of the software solution for the wholesale business are identified. An interview study is used to confirm that the identified features suit technical wholesale. The study finds that the offered solution serves a technical wholesaler very well. Based on the interviews, however, far from all of the additional functionality is needed; the ERP system alone is sufficient as such. The only weakness found in the system concerns the handling of shelf locations in warehouse management, but according to the interviews this does not hinder a company engaged in technical wholesale. In addition to the ERP system, an important feature is the use of the internet as an information channel. The flow of information between the information systems and the web pages is considered an important piece of functionality.

Relevance: 10.00%

Publisher:

Abstract:

Customers are more and more interested in the environmental impacts of the products they purchase. Different labels give consumers the required environmental information, and these labels may influence purchasing decisions. The European Union has set out a plan for sustainable consumption, which encourages industry and commerce to calculate carbon footprints for their products. The term “carbon footprint” refers to the carbon dioxide emissions across the product lifecycle. In this thesis, carbon footprints are calculated for two different fibre-based packages. Finally, greenhouse gas emissions from fibre-package production are compared with greenhouse gas emissions from PET bottle production. The data for the mill processes are exact, monitored at the mill. In addition, data were gathered from raw material and material suppliers, customers, official records, KCL-eco databases and the literature. The data for the PET bottle are sourced from the literature. End-of-life operations have a great effect on the carbon footprint of a fibre-based package. The results show that the carbon footprint is smallest when used packages are recycled. Recycling also saves natural resources. If used packages are not recyclable for some reason, it is recommended to use them in energy production. Through waste incineration, fossil fuels can be substituted and greenhouse gas emissions avoided.
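The comparison of end-of-life scenarios boils down to simple lifecycle arithmetic: the cradle-to-gate stages are fixed, and each end-of-life option adds (landfill) or subtracts (energy or material recovery) emissions. The numbers below are purely hypothetical placeholders, not the thesis's inventory data; only the structure of the calculation is illustrated.

```python
# Hypothetical cradle-to-gate emissions of one fibre-based package (kg CO2e).
STAGES_KG_CO2E = {
    "raw_materials": 0.020,
    "mill_process":  0.035,
    "transport":     0.010,
}

# Net end-of-life contribution per scenario (kg CO2e); negative values
# model avoided emissions from energy recovery or material recycling.
END_OF_LIFE_KG_CO2E = {
    "landfill":      0.030,
    "incineration": -0.010,  # substitutes fossil fuels
    "recycling":    -0.025,  # saves virgin fibre
}

def footprint(end_of_life):
    """Cradle-to-grave footprint for one end-of-life scenario."""
    return sum(STAGES_KG_CO2E.values()) + END_OF_LIFE_KG_CO2E[end_of_life]

for option in END_OF_LIFE_KG_CO2E:
    print(f"{option:12s} {footprint(option):.3f} kg CO2e")
```

With any inputs of this shape, recycling yields the smallest footprint whenever its avoided-emissions credit is the largest, which is the ordering the thesis reports.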

Relevance: 10.00%

Publisher:

Abstract:

The Finnish recording industry is living through exciting times: worldwide sales of recordings have turned downward, and at the same time competition in the industry has intensified. Recordings have been record companies' main source of income for several decades, so the situation is new. This is not merely a change of format; the revenue formation of the whole industry is being transformed. This thesis investigates what new revenue streams Finnish indie record labels can combine with their business and what kind of future is predicted for them. Thematic interviews with industry professionals and articles on the future of the music industry play a central role. In addition, seminar material collected at professional events and various statistics serve as background information. The aim of the professional interviews was to map the change in the industry and to ask the interviewees' opinions on the potential of new revenue flows. Based on the information obtained, the most popular new revenue streams are discussed and their timeliness assessed. One of the main goals of the work is to construct a modern revenue-formation chart for a Finnish indie record label. The work also describes the background of the change in the recording industry and the factors that have influenced it. Although the digital world has already had some effect on the revenue formation of Finnish record companies, the biggest changes are yet to come. The decline in recording sales is predicted to continue, and digital commerce is not growing at the same pace. An immaterial product does not interest the consumer the way a physical recording does. The changes will force many record companies to consider new revenue streams in the near future. Company mergers are possible, particularly as booking-agency services are combined with record companies' business. The importance of merchandise sales will grow, and the range of publishing revenues will broaden. In addition to expanding opportunities, the globalizing world brings many new challenges as competitors multiply. New financing models, such as advertising, may turn the revenue formation of the whole industry upside down. Alongside the consumer market, business-to-business sales will increase.

Relevance: 10.00%

Publisher:

Abstract:

Current-day web search engines (e.g., Google) do not crawl and index a significant portion of the Web and, hence, web users relying on search engines alone are unable to discover and access a large amount of information from the non-indexable part of the Web. Specifically, dynamic pages generated from parameters provided by a user via web search forms (or search interfaces) are not indexed by search engines and cannot be found in search results. Such search interfaces provide web users with online access to myriads of databases on the Web. In order to obtain some information from a web database of interest, a user issues a query by specifying query terms in a search form and receives the query results: a set of dynamic pages that embed the required information from a database. At the same time, issuing a query via an arbitrary search interface is an extremely complex task for any kind of automatic agent, including web crawlers, which, at least up to the present day, do not even attempt to pass through web forms on a large scale.

In this thesis, our primary object of study is the huge portion of the Web (hereafter referred to as the deep Web) hidden behind web search interfaces. We concentrate on three classes of problems around the deep Web: characterization of the deep Web, finding and classifying deep web resources, and querying web databases.

Characterizing the deep Web: though the term deep Web was coined in 2000, a long time ago for any web-related concept or technology, we still do not know many important characteristics of the deep Web. Another matter of concern is that the surveys of the deep Web conducted so far are predominantly based on the study of deep web sites in English. One can therefore expect that the findings of these surveys may be biased, especially owing to a steady increase in non-English web content. In this way, surveying national segments of the deep Web is of interest not only to national communities but to the whole web community as well. In this thesis, we propose two new methods for estimating the main parameters of the deep Web. We use the suggested methods to estimate the scale of one specific national segment of the Web and report our findings. We also build and make publicly available a dataset describing more than 200 web databases from this national segment of the Web.

Finding deep web resources: the deep Web has been growing at a very fast pace, and it has been estimated that there are hundreds of thousands of deep web sites. Due to the huge volume of information in the deep Web, there has been significant interest in approaches that allow users and computer applications to leverage this information. Most approaches assume that the search interfaces to the web databases of interest have already been discovered and are known to the query systems. However, such assumptions do not hold, mostly because of the large scale of the deep Web: for any given domain of interest there are too many web databases with relevant content. Thus, the ability to locate search interfaces to web databases becomes a key requirement for any application accessing the deep Web. In this thesis, we describe the architecture of the I-Crawler, a system for finding and classifying search interfaces. Specifically, the I-Crawler is intentionally designed to be used in deep Web characterization studies and for constructing directories of deep web resources. Unlike almost all other approaches to the deep Web so far, the I-Crawler is able to recognize and analyze JavaScript-rich and non-HTML searchable forms.

Querying web databases: retrieving information by filling out web search forms is a typical task for a web user, all the more so as the interfaces of conventional search engines are also web forms. At present, a user needs to manually provide input values to search interfaces and then extract the required data from the result pages. Manually filling out forms is cumbersome and infeasible for complex queries, yet such queries are essential for many web searches, especially in the area of e-commerce. The automation of querying and retrieving data behind search interfaces is therefore desirable and essential for tasks such as building domain-independent deep web crawlers and automated web agents, searching for domain-specific information (vertical search engines), and extracting and integrating information from various deep web resources. We present a data model for representing search interfaces and discuss techniques for extracting field labels, client-side scripts and structured data from HTML pages. We also describe a representation of result pages and discuss how to extract and store the results of form queries. In addition, we present a user-friendly and expressive form query language that allows one to retrieve information behind search interfaces and extract useful data from the result pages based on specified conditions. We implement a prototype system for querying web databases and describe its architecture and component design.
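The thesis's actual data model is not reproduced in the abstract; as a rough illustrative sketch only (the class names, field kinds and validation rule below are assumptions, not the author's design), a search interface can be modelled as a set of typed fields with their extracted labels, against which a form query is checked before submission:

```python
from dataclasses import dataclass, field

@dataclass
class FormField:
    name: str              # HTML name attribute of the input element
    label: str             # human-readable label extracted from the page
    kind: str = "text"     # e.g. text, select, checkbox
    options: list = field(default_factory=list)  # choices for select fields

@dataclass
class SearchInterface:
    url: str
    method: str
    fields: list

    def build_query(self, **values):
        """Bind values to declared fields, rejecting unknown field names."""
        known = {f.name for f in self.fields}
        unknown = set(values) - known
        if unknown:
            raise ValueError(f"unknown fields: {sorted(unknown)}")
        return {"action": self.url, "method": self.method, "data": values}

# Hypothetical book-search interface with a text field and a select field.
iface = SearchInterface(
    url="http://example.com/search", method="GET",
    fields=[FormField("title", "Book title"),
            FormField("format", "Format", "select", ["hardcover", "ebook"])],
)
query = iface.build_query(title="deep web")
```

A form query language of the kind the thesis describes would compile down to bindings like `query["data"]`, with the field labels guiding which input receives which term.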

Relevance: 10.00%

Publisher:

Abstract:

Customer interest in the origin of product raw materials has grown. The origin of products made from wood can be verified with a certificate if the wood comes from a certified forest and its chain of custody is known. Information about a certified product reaches the customer by stamping the products with marks indicating certification or by adding, for example, a text about the certification to a product document. The study was carried out for Stora Enso's Imatra mills, which needed a new information system. The purpose of the study was to specify an information system enabling the tracking of the use of certified wood raw material and of products sold as certified. Chain-of-custody standards define how much certified wood must be allocated to a product if the products are to be sold as certified. In addition to specifying the information system, there was a need to work out a process for getting a stamp indicating certification onto products and documents. During the study it became clear that the mill's current sales functions and information systems place restrictions on the stamping of products and documents. The process was mapped, among other things, by interviewing people working with the mill's information systems and its sales and warehouse functions. The customer requirements for the information system were also compiled on the basis of the interviews. The requirements of the chain-of-custody standards were likewise taken into account in the requirements specification. The operation of the specified information system was illustrated to the mill's representatives with prototypes, and the study also illustrates the operation of the system with various diagrams and figures. The challenge in the work was to interpret the standards and to reconcile differing customer requirements.

Relevance: 10.00%

Publisher:

Abstract:

This work studies the multi-label classification of turns in Simple English Wikipedia talk pages into dialog acts. The dataset used was created and multi-labeled by Ferschke et al. (2012). The first part analyses the dependences between labels in order to examine the annotation coherence and to determine a classification method. Then a multi-label classification is computed after transforming the problem into binary relevance. Regarding features, whereas Ferschke et al. (2012) use features such as uni-, bi- and trigrams, the time distance between turns or the indentation level of the turn, other features are considered here: lemmas, part-of-speech tags and the meaning of verbs (according to WordNet). The dataset authors applied approaches such as Naive Bayes or Support Vector Machines. The present paper proposes, as an alternative, to extend linear discriminant analysis with Schoenberg transformations which, following the example of kernel methods, transform the original Euclidean distances into other Euclidean distances in a space of high dimensionality.
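The binary-relevance transformation mentioned above decomposes a multi-label problem into one binary problem per label. A minimal sketch of that decomposition, with entirely hypothetical turns and labels (not drawn from the Ferschke et al. dataset):

```python
# Each training turn carries a set of dialog-act labels.
LABELS = ["question", "answer", "thanks"]

train_turns = [
    ("could someone check this edit", ["question"]),
    ("fixed, see the new revision",   ["answer"]),
    ("great, thanks a lot",           ["answer", "thanks"]),
]

def binary_relevance(dataset, labels):
    """Build one (text, 0/1) training set per label.

    Each per-label dataset can then be fed to any binary classifier
    (Naive Bayes, SVM, or the Schoenberg-transformed discriminant
    analysis proposed in the paper)."""
    return {
        lab: [(text, int(lab in turn_labels)) for text, turn_labels in dataset]
        for lab in labels
    }

per_label = binary_relevance(train_turns, LABELS)
# per_label["thanks"] marks each turn 1 iff "thanks" is among its labels
```

At prediction time, each of the per-label classifiers votes independently, and the turn receives every label whose classifier fires, which is how overlapping dialog acts on a single turn are recovered.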

Relevance: 10.00%

Publisher:

Abstract:

In recent years correlative microscopy, combining the power and advantages of different imaging systems, e.g., light, electrons, X-ray, NMR, etc., has become an important tool for biomedical research. Among all the possible combinations of techniques, light and electron microscopy have made an especially big step forward and are being implemented in more and more research labs. Electron microscopy profits from high spatial resolution, direct recognition of the cellular ultrastructure and identification of the organelles. It has, however, two severe limitations: the restricted field of view and the fact that no live imaging can be done. Light microscopy, on the other hand, has the advantage of live imaging, following a fluorescently tagged molecule in real time, and at lower magnifications its large field of view facilitates the identification and location of sparse individual cells in a large context, e.g., tissue. The combination of these two imaging techniques is a valuable approach to dissecting biological events at a submicrometer level. Light microscopy can be used to follow a labelled protein of interest, or a visible organelle such as a mitochondrion, over time; the sample is then fixed and exactly the same region is investigated by electron microscopy. The time resolution depends on the speed of penetration and fixation when chemical fixatives are used, and on the reaction time of the operator for cryo-fixation. Light microscopy can also be used to identify cells of interest, e.g., a special cell type in tissue or cells that have been modified by either transfection or RNAi, in a large population of non-modified cells. A further application is to find fluorescent labels in cells on a large section, to reduce searching time in the electron microscope. Multiple fluorescence labelling of a series of sections can be correlated with the ultrastructure of the individual sections to obtain 3D information on the distribution of the marked proteins: array tomography. More and more effort is put into either converting a fluorescent label into an electron-dense product or preserving the fluorescence throughout preparation for electron microscopy. Here, we review successful protocols and, where possible, try to extract common features to better understand the importance of the individual steps in the preparation. Further, the new instruments and software intended to ease correlative light and electron microscopy are discussed. Last but not least, we detail the approach we have chosen for correlative microscopy.

Relevance: 10.00%

Publisher:

Abstract:

An experience aiming to promote residue interchange and recovery between the teaching laboratories of the Chemistry Institute of this University is described. At present, several residue interchanges have already proved advantageous. To make the work easier, software has been developed to keep a record of all the residues generated by the teaching laboratories. Standard labels have been developed for the residues in order to organize them. The software and the label design are described.
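The record keeping the abstract describes amounts to a registry that any laboratory can query before buying new reagents. A minimal hypothetical sketch (the field names and example residues are invented for illustration, not taken from the described software):

```python
# Registry of residues generated by the teaching laboratories.
residue_log = []

def register(lab, name, quantity_g, hazard):
    """Record a residue generated by a teaching laboratory."""
    entry = {"lab": lab, "name": name,
             "quantity_g": quantity_g, "hazard": hazard}
    residue_log.append(entry)
    return entry

def find_reusable(name):
    """List entries for a residue that could be interchanged between labs."""
    return [e for e in residue_log if e["name"] == name]

register("organic-1", "acetone", 500, "flammable")
register("analytical", "acetone", 120, "flammable")
matches = find_reusable("acetone")  # two candidate sources for reuse
```

The same registry entries can drive printing of the standard labels, so that a stored residue and its record never diverge.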

Relevance: 10.00%

Publisher:

Abstract:

Particulate nanostructures are increasingly used for analytical purposes. Such particles are often generated by chemical synthesis from non-renewable raw materials. The generation of uniform nanoscale particles is challenging, and particle surfaces must be modified to make the particles biocompatible and water-soluble. Usually nanoparticles are functionalized with binding molecules (e.g., antibodies or their fragments) and, if needed, a label substance. Overall, producing nanoparticles for use in bioaffinity assays is a multistep process requiring several manufacturing and purification steps. This study describes a biological method of generating functionalized protein-based nanoparticles with specific binding activity on the particle surface and label activity inside the particles. Traditional chemical bioconjugation of the particle and specific binding molecules is replaced with genetic fusion of the binding-molecule gene and the particle backbone gene. The particle shell and binding moieties are synthesized as one entity from generic raw materials by bacteria, and fermentation is combined with a simple purification method based on inclusion bodies. The label activity is introduced during the purification. The process results in particles that are ready to use as reagents in bioaffinity assays. Apoferritin was used as the particle body, and the system was demonstrated using three different binding moieties: a small protein, a peptide and a single-chain Fv antibody fragment, which represents a complex protein including a disulfide bridge. When needed, Eu3+ was used as the label substance. The results showed that the production system yielded pure protein preparations, and the particles were of homogeneous size when visualized with transmission electron microscopy. The passively introduced label was stably associated with the particles, and binding molecules genetically fused to the particle specifically bound their target molecules. The functionality of the particles in bioaffinity assays was successfully demonstrated with two types of assays: as labels and in a particle-enhanced agglutination assay. This biological production procedure has many advantages that make the process especially suited for applications with frequent and recurring requirements for homogeneous functional particles. The production process of ready, functional and water-soluble particles follows the principles of “green chemistry”, and is upscalable, fast and cost-effective.

Relevance: 10.00%

Publisher:

Abstract:

This work presents a proposal for the management of residues from teaching laboratories. The main goals of the proposal are: scale reduction of experiments, reuse of residues as raw materials for new experiments, and appropriate treatment and storage of residues. The methodology includes standardized labels for residue classification and the registration of experimental classes and their residues in files. The management proved efficient, resulting in a reduction in the amount of reagents used and residues generated, and an increase in the reutilization of residues. A considerable decrease in the required storage space and suitable methods for correct residue disposal were achieved. We expect that all laboratories, including those used exclusively for research activities, will become involved in the near future in the Residue Management Project of URI - Campus Erechim.

Relevance: 10.00%

Publisher:

Abstract:

This work presents an evolution profile of the labels and containers of commercial chemicals employed in laboratories since 1870. Most chemicals were made in Germany before the Second World War, after which many other manufacturers arrived on the Brazilian market. North American products were dominant in the 1940s, but Brazilian chemicals have increased their share over time. Labels presented increasingly more information, from the simple names of the compounds at the beginning of the 20th century to the data presented today, such as chemical formulae, safety warnings and detailed chemical analyses. The raw material for container manufacturing also changed: glass was dominant until the 1950s, but nowadays plastic flasks are preferred whenever possible. Cork stoppers were replaced by screw caps. The diversity of commercial products also increased sharply with time, especially after the 1950s, following the many new and specific applications of chemicals for research and commercial purposes.

Relevance: 10.00%

Publisher:

Abstract:

The history of the rare earths is rich in innovation, and these elements have been the object of study of many scientists. Rare earths are used in practically all aspects of life, and these applications are due to their outstanding properties, mainly spectroscopic and magnetic. In industry, the applications of rare earths are many, such as in catalysis, phosphors, magnetism, glass and lasers. In biological systems, rare earths are used, for example, as luminescent probes in the investigation of binding sites in proteins, as labels in immunoassays and in noninvasive tests.

Relevance: 10.00%

Publisher:

Abstract:

This work presents a detailed routine applied to the identification of unknown chemicals and wastes. In 20 months, 786 specimens were analyzed. Unknown materials fell into three basic classes: (i) commercial chemicals with missing or illegible labels; (ii) laboratory synthesis products; (iii) used solvents (including mixtures). Uranium and thorium were recovered from their wastes. Unknown chemicals were mainly inorganic compounds, many of which had never been opened. Alkaline salts were dominant, but precious metal compounds were also identified. Laboratory synthesis products were organic compounds. The final destination depended on the nature of the chemical. Most organic compounds were sent to incineration; inorganic salts were distributed among several public organizations, including secondary and technical schools. The work described in this paper greatly reduced the amount of waste that had to be sent for disposal.