3 results for Cafe: Coffea arabica

at Université de Lausanne, Switzerland


Relevance:

10.00%

Publisher:

Abstract:

A passive sampling device called the Monitor of NICotine, or "MoNIC", was constructed and evaluated by the IST laboratory for determining nicotine in Second-Hand Tobacco Smoke (SHTS), also known as Environmental Tobacco Smoke (ETS). Vapour-phase nicotine was passively collected on a potassium bisulfate-treated glass fibre filter as the collection medium. The nicotine collected on the treated filter was analysed by gas chromatography with a thermionic-specific detector (GC-TSD) after liquid-liquid extraction (1 mL of 5 N NaOH : 1 mL of n-heptane saturated with NH3), using quinoline as the internal standard. Taking a nicotine amount of 0.2 mg/cigarette as the reference, the Cigarette Equivalents (CE) inhaled by non-smokers can be calculated. By comparing the CE detected on the badges of non-smokers with the nicotine and cotinine levels in the saliva of both smokers and exposed non-smokers, we can confirm that the CE concept is suitable for estimating exposure to ETS. The regional CIPRETs (Centres for Information and Prevention of Smoking) of the cantons of Valais (VS), Vaud (VD), Neuchâtel (NE) and Fribourg (FR) organized a large campaign on passive smoking. This campaign took place in 2007-2008 and aimed to inform the Swiss population clearly about the dangers of passive smoke. More than 3'900 MoNIC badges were distributed free of charge to the Swiss population for self-monitoring of exposure to ETS, expressed in terms of CE. Non-stimulated saliva was also collected to determine levels of the ETS biomarkers nicotine and cotinine in the participating volunteers. Results for the different CE levels in occupational and non-occupational ETS situations are presented in this study. This study, unique in Switzerland, has established a baseline map of the population's exposure to SHTS.
It underscored the fact that all participants in this campaign (N = 1241) were exposed to passive smoke, ranging from <0.2 cig/day (10.8%) up to between 1-2 and more than 10 cig/day (89.2%). In the high-exposure range (15-38 cig/day), most were workers in restaurants, cafés, bars and discos. By monitoring the ETS tracer nicotine and its biomarkers, salivary nicotine and cotinine, it was demonstrated that the MoNIC badge can serve as an indicator of passive-smoking CE. The MoNIC badge, together with salivary nicotine/cotinine measurements, can serve as a tool for evaluating passive exposure to ETS and supplies useful data for future epidemiological studies. It was also demonstrated that salivary nicotine (without stimulation) is a better biomarker of ETS exposure than cotinine.
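As a numeric illustration of the Cigarette Equivalents concept described above: in its simplest form, the nicotine mass collected on the badge is divided by the 0.2 mg/cigarette reference yield. This is a minimal sketch under that assumption; the function name is illustrative, and the actual study may additionally normalize by sampling duration or inhaled volume.

```python
# Reference nicotine yield per cigarette used in the study (mg).
REFERENCE_NICOTINE_MG_PER_CIG = 0.2

def cigarette_equivalents(nicotine_collected_mg: float) -> float:
    """Convert nicotine mass collected on a MoNIC badge (mg) into CE.

    Simplest interpretation of the abstract: CE = collected mass / reference
    yield. Normalization by exposure time is omitted here.
    """
    return nicotine_collected_mg / REFERENCE_NICOTINE_MG_PER_CIG

# Example: 1.0 mg of nicotine collected over the sampling period
print(cigarette_equivalents(1.0))  # 5.0 cigarette equivalents
```

A badge in the reported high-exposure range (15-38 cig/day) would thus correspond to roughly 3-7.6 mg of nicotine collected per day under this simplified reading.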

Relevance:

10.00%

Publisher:

Abstract:

This dissertation is concerned with the development of algorithmic methods for the unsupervised learning of natural language morphology, using a symbolically transcribed wordlist. It focuses on the case of languages approaching the introflectional type, such as Arabic or Hebrew. The morphology of such languages is traditionally described in terms of discontinuous units: consonantal roots and vocalic patterns. Inferring this kind of structure is a challenging task for current unsupervised learning systems, which generally operate with continuous units. In this study, the problem of learning root-and-pattern morphology is divided into a phonological and a morphological subproblem. The phonological component of the analysis seeks to partition the symbols of a corpus (phonemes, letters) into two subsets that correspond well with the phonetic definition of consonants and vowels; building around this result, the morphological component attempts to establish the list of roots and patterns in the corpus, and to infer the rules that govern their combinations.
We assess the extent to which this can be done on the basis of two hypotheses: (i) the distinction between consonants and vowels can be learned by observing their tendency to alternate in speech; (ii) roots and patterns can be identified as sequences of the previously discovered consonants and vowels respectively. The proposed algorithm uses a purely distributional method for partitioning symbols. Then it applies analogical principles to identify a preliminary set of reliable roots and patterns, and to gradually enlarge it. This extension process is guided by an evaluation procedure based on the minimum description length principle, in line with the approach to morphological learning embodied in LINGUISTICA (Goldsmith, 2001). The algorithm is implemented as a computer program named ARABICA; it is evaluated with regard to its ability to account for the system of plural formation in a corpus of Arabic nouns. This thesis shows that complex linguistic structures can be discovered without recourse to a rich set of a priori hypotheses about the phenomena under consideration. It illustrates the possible synergy between learning mechanisms operating at distinct levels of linguistic description, and attempts to determine where and why such cooperation fails. It concludes that the tension between the universality of the consonant-vowel distinction and the specificity of root-and-pattern structure is crucial for understanding the advantages and weaknesses of this approach.
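The abstract does not name the "purely distributional method" used to partition symbols. A classic algorithm of exactly this kind is Sukhotin's algorithm, which classifies symbols as vowels or consonants from their tendency to alternate with each other in words; the sketch below is offered as an illustration of the idea, not necessarily as ARABICA's actual implementation (function name and toy corpus are ours).

```python
from collections import defaultdict

def sukhotin(words):
    """Partition symbols into (vowels, consonants), Sukhotin-style.

    Intuition: vowels tend to be adjacent to many different (consonant)
    symbols, so the symbol with the highest adjacency score is repeatedly
    reclassified as a vowel until no positive score remains.
    """
    adj = defaultdict(int)          # symmetric adjacency counts
    symbols = set()
    for w in words:
        symbols.update(w)
        for a, b in zip(w, w[1:]):
            if a != b:              # same-symbol pairs are ignored
                adj[(a, b)] += 1
                adj[(b, a)] += 1
    # Initial score: total adjacencies of each symbol.
    sums = {s: sum(adj[(s, t)] for t in symbols) for s in symbols}
    vowels = set()
    while True:
        candidates = [s for s in symbols if s not in vowels]
        if not candidates:
            break
        best = max(candidates, key=lambda s: sums[s])
        if sums[best] <= 0:         # no consonant looks vowel-like anymore
            break
        vowels.add(best)
        # Discount adjacencies to the newly found vowel.
        for t in symbols:
            if t not in vowels:
                sums[t] -= 2 * adj[(best, t)]
    return vowels, symbols - vowels

# Toy corpus: 'a' alternates with every other symbol and is found as a vowel.
print(sukhotin(["banana", "patata"]))  # ({'a'}, {'b', 'n', 'p', 't'})
```

On real wordlists the partition is only approximate, which matches the abstract's framing: the consonant/vowel split is inferred, then the root and pattern inventory is built on top of it.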

Relevance:

10.00%

Publisher: