891 results for Log ESEO, GPS, orbite, pseudorange, least square
Abstract:
In this study, a model is proposed to explain the innovativeness of beef cattle ranchers from an organizational perspective. According to the theory of diffusion of innovations, organizational innovativeness is characterized as the degree to which an organization innovates relatively earlier than the others. To assess innovativeness, four antecedents were considered: the perceived characteristics of the innovation, composed of relative advantage, compatibility, image, result demonstrability, visibility, trialability, voluntariness, and ease of use; the ranchers' participation in social networks, characterized as networks of relationships among peers; commercial information sources, characterized as information obtained through relationships in order to reduce risks; and organizational psychographics, composed of direction, decision centrality, openness of communication, and achievement motivation. The four constructs have a positive relationship with organizational innovativeness. With non-probabilistic convenience sampling, 205 valid questionnaires were obtained. Multiple-component analysis showed that the respondents' profiles differed with respect to innovativeness. This distinction motivated the generation of a taxonomy based on the technology-adoption profile, with the aim of identifying differences in innovative behavior, which resulted in three groups. Three models were then analyzed and compared through structural equation modeling using the Partial Least Square (PLS) method. The results showed that the least innovative group bases its decisions to adopt new technologies on social networks, on the compatibility of the technology with its organizational activities, and on the image the technology has among other ranchers. The intermediate group has an organizational psychographic profile oriented toward innovativeness, but its adoption of new technologies seems to be related more to market impositions than to the perception of their innovative characteristics. The most innovative ranchers, as opinion leaders, build their organizational innovativeness from commercial information sources and evaluate the cost of technology as a positive investment in the future of their business.
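The study above estimates its structural model with PLS-SEM. As a rough illustration of the idea (not the authors' analysis), the sketch below builds equal-weight standardized composites for a few constructs and estimates the structural paths by least squares; construct and indicator names are hypothetical, and a real analysis would use dedicated PLS-SEM software, which estimates the indicator weights iteratively.

```python
# Illustrative sketch of a PLS-SEM-style structural model (not the authors' code).
# Construct and indicator names are hypothetical placeholders.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(205, 6)),
                  columns=["adv1", "adv2",        # perceived relative advantage items
                           "net1", "net2",        # peer-network participation items
                           "innov1", "innov2"])   # organizational innovativeness items

def composite(frame, cols):
    """Equal-weight standardized composite; real PLS estimates the weights iteratively."""
    z = (frame[cols] - frame[cols].mean()) / frame[cols].std(ddof=0)
    score = z.mean(axis=1)
    return (score - score.mean()) / score.std(ddof=0)

advantage = composite(df, ["adv1", "adv2"])
networks = composite(df, ["net1", "net2"])
innovativeness = composite(df, ["innov1", "innov2"])

# Structural model: innovativeness regressed on its antecedents (path coefficients).
X = np.column_stack([advantage, networks])
paths, *_ = np.linalg.lstsq(X, innovativeness.to_numpy(), rcond=None)
print("path coefficients (advantage, networks):", paths.round(3))
```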
Abstract:
Work has been seen not only as a way of earning income, but also as an activity that provides personal fulfillment, social status, and the possibility of establishing and maintaining interpersonal contacts, among other things. This research aimed to investigate the factors that influence and give meaning to work, such as work centrality, societal norms, and valued goals and outcomes. For work centrality, the aim was to investigate the degree of importance of work within the context of the various areas of people's lives, such as family, leisure, religion, and community life. For societal norms, the most significant points were analyzed regarding what society should provide to the individual, as well as what the individual should do for society. For valued goals and outcomes, what people seek through work was investigated. Based on the literature review, an initial model was built; as it did not prove satisfactory according to statistical criteria, it was replaced by another model that showed statistical significance and good fit to the data. The chosen model was the one with the best goodness of fit, using structural equation modeling with the partial least square method. The study revealed that the meaning of work is reflected, in order, in work centrality, in valued goals and outcomes, and, lastly, in societal norms.
Abstract:
This article investigates the dynamics of the decision-making process carried out by work groups over time in environments with different latitudes of action (different degrees of freedom for managers to act). The objective is to verify the influence of time and environment on group decision-making processes. The topic is approached through a theoretical review covering three themes, the decision-making process carried out by groups, the influence of time on these processes, and the influence of the environment on these processes, which give rise to the hypotheses to be tested. The field research is quantitative in nature and uses the survey method; data were collected from 89 groups in a Business Games course in an undergraduate Business Administration program. For data analysis, structural equation modeling via partial least square was used to assess the relationships among the constructs. As a result, a temporal influence was found on the association between the quality of the decision-making process and organizational results, with a reduction in the effect of the groups' profile. Interpersonal relationships, regardless of the environment, influenced the planning and execution of decisions. It was concluded that different relationships among managers' profiles, process quality, and results are observed when the temporal and environmental dimensions are simultaneously incorporated as contingencies in the analysis of group decision-making.
Abstract:
Background/Purpose: The primary treatment goals for gouty arthritis (GA) are rapid relief of pain and inflammation during acute attacks, and long-term hyperuricemia management. A post-hoc analysis of 2 pivotal trials was performed to assess efficacy and safety of canakinumab (CAN), a fully human monoclonal anti-IL-1β antibody, vs triamcinolone acetonide (TA) in GA patients unable to use NSAIDs and colchicine, and who were on stable urate-lowering therapy (ULT) or unable to use ULT. Methods: In these 12-week, randomized, multicenter, double-blind, double-dummy, active-controlled studies (β-RELIEVED and β-RELIEVED II), patients had to have frequent attacks (≥3 attacks in the previous year) meeting preliminary GA ACR 1977 criteria, and were unresponsive, intolerant, or contraindicated to NSAIDs and/or colchicine, and if on ULT, ULT was stable. Patients were randomized during an acute attack to single-dose CAN 150 mg s.c. or TA 40 mg i.m. and were redosed "on demand" for each new attack. Patients completing the core studies were enrolled into blinded 12-week extension studies to further investigate on-demand use of CAN vs TA for new attacks. The subpopulation selected for this post-hoc analysis was (a) unable to use NSAIDs and colchicine due to contraindication, intolerance or lack of efficacy for these drugs, and (b) currently on ULT, or with contraindication to or previous failure of ULT, as determined by investigators. The subpopulation comprised 101 patients (51 CAN; 50 TA) out of 454 total. Results: Several co-morbidities, including hypertension (56%), obesity (56%), diabetes (18%), and ischemic heart disease (13%), were reported in 90% of this subpopulation. Pain intensity (VAS 100 mm scale) was comparable between the CAN and TA treatment groups at baseline (least-square [LS] mean 74.6 and 74.4 mm, respectively). A significantly lower pain score was reported with CAN vs TA at 72 hours post dose (1st co-primary endpoint on baseline flare; LS mean, 23.5 vs 33.6 mm; difference −10.2 mm; 95% CI, −19.9, −0.4; P=0.0208 [1-sided]). CAN significantly reduced the risk of a first new attack by 61% vs TA (HR 0.39; 95% CI, 0.17-0.91, P=0.0151 [1-sided]) for the first 12 weeks (2nd co-primary endpoint), and by 61% vs TA (HR 0.39; 95% CI, 0.19-0.79, P=0.0047 [1-sided]) over 24 weeks. Serum urate levels increased for CAN vs TA, with the mean change from baseline reaching a maximum of +0.7 ± 2.0 vs −0.1 ± 1.8 mg/dL at 8 weeks, and +0.3 ± 2.0 vs −0.2 ± 1.4 mg/dL at end of study (all had a GA attack at baseline). Adverse events (AEs) were reported in 33 (66%) CAN and 24 (47.1%) TA patients. Infections and infestations were the most common AEs, reported in 10 (20%) and 5 (10%) patients treated with CAN and TA, respectively. Incidence of SAEs was comparable between the CAN (gastritis, gastroenteritis, chronic renal failure) and TA (aortic valve incompetence, cardiomyopathy, aortic stenosis, diarrhoea, nausea, vomiting, bicuspid aortic valve) groups (2 [4.0%] vs 2 [3.9%]). Conclusion: CAN provided superior pain relief and reduced the risk of new attacks in highly comorbid GA patients unable to use NSAIDs and colchicine, and who were currently on stable ULT or unable to use ULT. The safety profile in this post-hoc subpopulation was consistent with the overall β-RELIEVED and β-RELIEVED II population.
Abstract:
The objective of this work was to investigate heterosis and its components in 16 white-grain maize populations with high protein quality. These populations were divided according to grain type in order to establish different heterosis groups. The crosses were carried out according to a partial diallel cross design between flint and dent populations. Seven agronomic traits were evaluated in three environments, while four leaf diseases and the incidence of corn stunt were evaluated in one. A least squares procedure was applied to the normal equations X'Xβ = X'Y to estimate the model effects and their respective sums of squares. In the diallel analysis, average heterosis was significant for grain yield, number of days to female flowering, and all evaluated diseases. Specific heterosis was significant for days to female flowering and resistance to Puccinia polysora. Results for grain yield indicate that populations with superior performance in the dent group tend to generate superior intervarietal hybrids, regardless of which flint population is used in the crosses. In decreasing order of preference, the dent-type populations CMS 476, ZQP/B 103 and ZQP/B 101 and the flint-type populations CMS 461, CMS 460, ZQP/B 104 and ZQP/B 102 are recommended to form composites.
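The diallel analysis above estimates model effects from the normal equations X'Xβ = X'Y. A minimal sketch of that computation with invented data follows; a real analysis would build the design matrix from the cross design and environments.

```python
# Sketch of least squares via the normal equations X'X b = X'Y (data invented).
import numpy as np

rng = np.random.default_rng(1)
n = 12
# Hypothetical design matrix: intercept, a group indicator and a covariate;
# a real diallel analysis codes populations, crosses and environments here.
X = np.column_stack([np.ones(n), rng.integers(0, 2, n), rng.normal(size=n)])
Y = X @ np.array([5.0, 1.2, 0.8]) + rng.normal(scale=0.3, size=n)

# A generalized inverse handles designs that are not of full rank.
beta = np.linalg.pinv(X.T @ X) @ (X.T @ Y)

ss_model = beta @ X.T @ Y - n * Y.mean() ** 2   # model sum of squares (corrected)
ss_resid = Y @ Y - beta @ X.T @ Y               # residual sum of squares
print("effects:", beta.round(3), " SS(model):", round(ss_model, 3), " SS(residual):", round(ss_resid, 3))
```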
Abstract:
Intensity-modulated radiotherapy (IMRT) is a treatment technique that uses beams whose radiation fluence is modulated. IMRT, widely used in industrialized countries, achieves better dose homogeneity inside the target volume and reduces the dose to organs at risk. A common way to realize beam modulation in practice is to sum small beams (segments) with the same incidence; this technique is called step-and-shoot IMRT. In the clinical context, patient treatment plans must be verified before the first irradiation, and this issue has not yet been resolved satisfactorily. Indeed, an independent calculation of the monitor units (representative of the weight of each segment) cannot be performed for step-and-shoot IMRT treatments, because the segment weights are not known a priori but are computed during inverse planning. Moreover, verifying treatment plans by comparison with measurements is time consuming and does not reproduce the exact treatment geometry. In this work, an independent calculation method for step-and-shoot IMRT treatment plans is described. The method is based on the Monte Carlo code EGSnrc/BEAMnrc, whose model of the linear accelerator head was validated over a wide range of situations. The segments of an IMRT treatment plan are simulated individually in the exact treatment geometry. The dose distributions are then converted into absorbed dose to water per monitor unit. The total treatment dose in each volume element (voxel) of the patient can be expressed as a linear matrix equation of the monitor units and of the dose per monitor unit of each beam. This equation is solved by matrix inversion using the Non-Negative Least Square fit (NNLS) algorithm. Since not all voxels contained in the patient volume can be used in the calculation because of computational limitations, several selection strategies were tested; the best choice is to use the voxels contained in the Planning Target Volume (PTV). The method proposed in this work was tested on eight clinical cases representative of common radiotherapy treatments. The monitor units obtained lead to global dose distributions that are clinically equivalent to those produced by the treatment planning system. This independent monitor unit calculation method for step-and-shoot IMRT is therefore validated for clinical use. By analogy, a similar method could be envisaged for other treatment modalities, such as tomotherapy.
Abstract: Intensity Modulated RadioTherapy (IMRT) is a treatment technique that uses modulated beam fluence. IMRT is now widespread in more advanced countries, due to its improvement of dose conformation around the target volume and its ability to lower doses to organs at risk in complex clinical cases. One way to carry out beam modulation is to sum smaller beams (beamlets) with the same incidence. This technique is called step-and-shoot IMRT. In a clinical context, it is necessary to verify treatment plans before the first irradiation. Plan verification is still an issue for this technique. Independent monitor unit calculation (representative of the weight of each beamlet) can indeed not be performed for step-and-shoot IMRT, because beamlet weights are not known a priori, but calculated by inverse planning. Besides, treatment plan verification by comparison with measured data is time consuming and performed in a simple geometry, usually in a cubic water phantom with all machine angles set to zero. In this work, an independent method for monitor unit calculation for step-and-shoot IMRT is described. This method is based on the Monte Carlo code EGSnrc/BEAMnrc. The Monte Carlo model of the head of the linear accelerator is validated by comparison of simulated and measured dose distributions in a large range of situations. The beamlets of an IMRT treatment plan are calculated individually by Monte Carlo, in the exact geometry of the treatment. Then, the dose distributions of the beamlets are converted into absorbed dose to water per monitor unit. The dose of the whole treatment in each volume element (voxel) can be expressed through a linear matrix equation of the monitor units and dose per monitor unit of every beamlet. This equation is solved by a Non-Negative Least Square fit (NNLS) algorithm. However, not every voxel inside the patient volume can be used to solve this equation, because of computer limitations. Several ways of voxel selection have been tested, and the best choice consists in using voxels inside the Planning Target Volume (PTV). The method presented in this work was tested with eight clinical cases, which were representative of usual radiotherapy treatments. The monitor units obtained lead to clinically equivalent global dose distributions. Thus, this independent monitor unit calculation method for step-and-shoot IMRT is validated and can therefore be used in clinical routine. It would be possible to consider applying a similar method to other treatment modalities, such as tomotherapy or volumetric modulated arc therapy.
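The monitor-unit step described above amounts to solving a non-negative least-squares problem: find the monitor units u ≥ 0 that make the summed beamlet doses match the prescribed dose in the selected voxels. A hedged sketch with SciPy's nnls follows; matrix sizes and dose values are invented, and in the real workflow the dose-per-MU columns come from the Monte Carlo beamlet simulations.

```python
# Sketch of the non-negative least-squares monitor-unit step (values invented).
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)
n_voxels, n_beamlets = 500, 40               # e.g. voxels selected inside the PTV, plan segments
# Dose per monitor unit of each beamlet in each voxel; in the real workflow these
# columns come from the individual Monte Carlo beamlet simulations.
dose_per_mu = rng.uniform(0.0, 2e-3, size=(n_voxels, n_beamlets))
target_dose = np.full(n_voxels, 2.0)         # prescribed dose in the selected voxels (Gy)

# Find monitor units u >= 0 minimizing ||dose_per_mu @ u - target_dose||.
monitor_units, residual_norm = nnls(dose_per_mu, target_dose)
print(monitor_units.round(1))
print("residual norm:", round(residual_norm, 4))
```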
Abstract:
In this thesis different parameters influencing critical flux in protein ultrafiltration and membrane fouling were studied. Short reviews of proteins, cross-flow ultrafiltration, flux decline and critical flux, and of the basic theory of Partial Least Square analysis (PLS) are given at the beginning. The experiments were mainly performed using dilute solutions of globular proteins, commercial polymeric membranes and laboratory-scale apparatuses. Fouling was studied by flux, streaming potential and FTIR-ATR measurements. Critical flux was evaluated by different kinds of stepwise procedures and by both constant pressure and constant flux methods. The critical flux was affected by transmembrane pressure, flow velocity, protein concentration, membrane hydrophobicity and protein and membrane charges. Generally, the lowest critical fluxes were obtained at the isoelectric points of the protein and the highest in the presence of electrostatic repulsion between the membrane surface and the protein molecules. In the laminar flow regime the critical flux increased with flow velocity, but not above this region. An increase in concentration decreased the critical flux. Hydrophobic membranes showed fouling in all charge conditions and, furthermore, especially at the beginning of the experiment, even at very low transmembrane pressures. Fouling of these membranes was thought to be due to protein adsorption by hydrophobic interactions. The hydrophilic membranes used suffered more from reversible fouling and concentration polarisation than from irreversible fouling. They became fouled at higher transmembrane pressures because of pore blocking. In this thesis some new aspects of critical flux are presented that are important for the ultrafiltration and fractionation of proteins.
Abstract:
Metastatic melanomas are frequently refractory to most adjuvant therapies such as chemotherapy and radiotherapy. Recently, immunotherapies have shown good results in the treatment of some metastatic melanomas. Immune cell infiltration in the tumor has been associated with successful immunotherapy. More generally, tumor-infiltrating lymphocytes (TILs) in the primary tumor and in metastases of melanoma patients have been shown to correlate positively with favorable clinical outcomes. Altogether, these findings suggest the importance of being able to identify, quantify and characterize immune infiltration at the tumor site for better diagnosis and treatment choice. In this paper, we used Fourier Transform Infrared (FTIR) imaging to identify and quantify different subpopulations of T cells: cytotoxic T cells (CD8+), helper T cells (CD4+) and regulatory T cells (Treg). As a proof of concept, we investigated pure populations isolated from the peripheral blood of 6 healthy donors. These subpopulations were isolated from blood samples by magnetic labeling, and purities were assessed by Fluorescence Activated Cell Sorting (FACS). The results presented here show that FTIR imaging followed by supervised Partial Least Square Discriminant Analysis (PLS-DA) allows an accurate identification of CD4+ T cells and CD8+ T cells (>86%). We then developed a PLS regression allowing the quantification of Tregs in a mixture of immune cells (e.g., peripheral blood mononuclear cells, PBMCs). Altogether, these results demonstrate the sensitivity of infrared imaging to detect the low biological variability observed in T cell subpopulations.
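PLS-DA, as used above, is essentially PLS regression on coded class labels followed by assignment to the class with the highest predicted response. A small illustrative sketch with synthetic "spectra" follows (scikit-learn assumed; the class labels and spectral band are invented, not the study's FTIR data).

```python
# Illustrative PLS-DA on synthetic "spectra" (not the study's FTIR data).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 600))            # 200 spectra x 600 wavenumbers
y = rng.integers(0, 2, 200)                # 0 = CD4+, 1 = CD8+ (labels hypothetical)
X[y == 1, 100:150] += 0.5                  # artificial class-specific absorbance band

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# PLS-DA: regress the dummy-coded class on the spectra, then threshold the prediction.
pls = PLSRegression(n_components=5).fit(X_train, y_train)
y_pred = (pls.predict(X_test).ravel() > 0.5).astype(int)
print("classification accuracy:", (y_pred == y_test).mean())
```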
Abstract:
Short-term forecasting of electricity consumption has been studied for a long time. The deregulation of the Nordic electricity markets has affected consumption forecasting. First, the literature related to the topic was reviewed. The behavior of electricity consumption at different times was studied, and the usefulness of temperature statistics was assessed with the consumption forecast in mind. Consumption forecasts were made on an hourly basis, with a forecast period of one week. The availability and quality of consumption and temperature data in the Nord Pool market area were investigated, since the properties of the input data affect hourly consumption forecasting. Two approaches were modeled for forecasting electricity consumption: a regression model and an autoregressive model (ARX) were used as the models to be tested. The model parameters were estimated with the least squares method. The results show that the consumption and temperature data must be checked afterwards, because the quality of real-time input data is poor. Temperature affects consumption in winter, but it can be ignored during the summer season. The regression model is more stable than the ARX model. The error term of the regression model can be modeled using a time series model.
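The ARX approach mentioned above can be written as y_t = a1·y_{t−1} + a24·y_{t−24} + b·T_t + c + e_t and estimated by ordinary least squares. A sketch on simulated hourly data follows; the lag structure and coefficients are illustrative assumptions, and a production model would also include calendar effects.

```python
# Sketch of an ARX(1, 24) load model with temperature as exogenous input (simulated data).
import numpy as np

rng = np.random.default_rng(4)
hours = 24 * 7 * 8
temperature = 5 + 10 * np.sin(2 * np.pi * np.arange(hours) / (24 * 365)) + rng.normal(0, 1, hours)

load = np.empty(hours)
load[:24] = 1000.0
for t in range(24, hours):                  # simulate an hourly consumption series
    load[t] = 200 + 0.5 * load[t - 1] + 0.3 * load[t - 24] - 8 * temperature[t] + rng.normal(0, 20)

# y_t = a1*y_{t-1} + a24*y_{t-24} + b*T_t + c, parameters estimated by least squares.
idx = np.arange(24, hours)
X = np.column_stack([load[idx - 1], load[idx - 24], temperature[idx], np.ones(idx.size)])
theta, *_ = np.linalg.lstsq(X, load[idx], rcond=None)
print("estimated [a1, a24, b, c]:", theta.round(3))
# A one-week forecast would iterate this equation hour by hour with forecast temperatures.
```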
Identification-commitment inventory (ICI-Model): confirmatory factor analysis and construct validity
Abstract:
The aim of this study is to confirm the factorial structure of the Identification-Commitment Inventory (ICI) developed within the framework of the Human System Audit (HSA) (Quijano et al. in Revist Psicol Soc Apl 10(2):27-61, 2000; Pap Psicól Revist Col Of Psicó 29:92-106, 2008). Commitment and identification are understood by the HSA at an individual level as part of the quality of human processes and resources in an organization, and therefore as antecedents of important organizational outcomes such as personnel turnover intentions and organizational citizenship behavior (Meyer et al. in J Org Behav 27:665-683, 2006). The theoretical integrative model underlying the ICI (Quijano et al. 2000) was tested in a sample (N = 625) of workers in a Spanish public hospital. Confirmatory factor analysis through structural equation modeling was performed. The elliptical least squares solution was chosen as the estimation procedure on account of the non-normal distribution of the variables. The results confirm the goodness of fit of an integrative model that underlies the relation between Commitment and Identification, although the two constructs are operationally distinct.
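As a rough companion to the analysis above, the sketch below fits a two-factor CFA (identification, commitment) with the semopy package on simulated item data. Indicator names, loadings and sample values are invented, and semopy's default estimator is used rather than the elliptical least squares solution reported in the study.

```python
# Sketch of a two-factor CFA with semopy on simulated items (not the hospital data).
import numpy as np
import pandas as pd
import semopy

rng = np.random.default_rng(5)
n = 625
latent = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 1.0]], size=n)
items = {}
for i in range(3):   # three hypothetical indicators per factor
    items[f"id{i + 1}"] = 0.8 * latent[:, 0] + rng.normal(0, 0.6, n)
    items[f"co{i + 1}"] = 0.8 * latent[:, 1] + rng.normal(0, 0.6, n)
data = pd.DataFrame(items)

model_desc = """
identification =~ id1 + id2 + id3
commitment     =~ co1 + co2 + co3
identification ~~ commitment
"""
model = semopy.Model(model_desc)
model.fit(data)          # semopy's default estimator, not elliptical least squares
print(model.inspect())   # factor loadings and the identification-commitment covariance
```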
Abstract:
The classical theory of collision-induced emission (CIE) from pairs of dissimilar rare gas atoms was developed in Paper I [D. Reguera and G. Birnbaum, J. Chem. Phys. 125, 184304 (2006)] from a knowledge of the straight-line collision trajectory and the assumption that the magnitude of the dipole could be represented by an exponential function of the inter-nuclear distance. This theory is extended here to deal with other functional forms of the induced dipole as revealed by ab initio calculations. An accurate analytical expression for the CIE can be obtained by least-squares fitting of the ab initio values of the dipole as a function of inter-atomic separation using a sum of exponentials, and then proceeding as in Paper I. However, we also show how the multi-exponential fit can be replaced by a simpler fit using only two analytic functions. Our analysis is applied to the polar molecules HF and HBr. Unlike the rare gas atoms considered previously, these atomic pairs form stable bound diatomic molecules. We show that, interestingly, the spectra of these reactive molecules are characterized by the presence of multiple peaks. We also discuss the CIE arising from half collisions in excited electronic states, which in principle could be probed in photo-dissociation experiments.
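The fitting step described above, representing the ab initio dipole by a sum of exponentials in the inter-nuclear distance, can be sketched with SciPy's curve_fit. The dipole values below are synthetic stand-ins, not the HF/HBr ab initio data.

```python
# Sketch: least-squares fit of a dipole curve by a sum of two exponentials (synthetic data).
import numpy as np
from scipy.optimize import curve_fit

def dipole_model(r, a1, b1, a2, b2):
    """Sum of two exponentials in the inter-nuclear distance r."""
    return a1 * np.exp(-b1 * r) + a2 * np.exp(-b2 * r)

r = np.linspace(1.0, 8.0, 40)
rng = np.random.default_rng(6)
mu = 0.9 * np.exp(-0.8 * r) - 0.3 * np.exp(-2.5 * r) + rng.normal(0, 1e-3, r.size)

params, _ = curve_fit(dipole_model, r, mu, p0=[1.0, 1.0, -0.5, 2.0])
print("fitted (a1, b1, a2, b2):", np.round(params, 3))
```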
Abstract:
OBJECTIVE: The objective of this study was to compare posttreatment seizure severity in a phase III clinical trial of eslicarbazepine acetate (ESL) as adjunctive treatment of refractory partial-onset seizures. METHODS: The Seizure Severity Questionnaire (SSQ) was administered at baseline and posttreatment. The SSQ total score (TS) and component scores (frequency and helpfulness of warning signs before seizures [BS]; severity and bothersomeness of ictal movement and altered consciousness during seizures [DS]; cognitive, emotional, and physical aspects of postictal recovery after seizures [AS]; and overall severity and bothersomeness [SB]) were calculated for the per-protocol population. Analysis of covariance, adjusted for baseline scores, estimated differences in posttreatment least-squares means between treatment arms. RESULTS: Out of 547 per-protocol patients, 441 had valid SSQ TS both at baseline and posttreatment. The mean posttreatment TS for ESL 1200 mg/day was significantly lower than that for placebo (2.68 vs 3.20, p<0.001), exceeding the minimal clinically important difference (MCID: 0.48). Mean DS, AS, and SB were also significantly lower with ESL 1200 mg/day; differences in AS and SB exceeded the MCIDs. The TS, DS, AS, and SB were lower for ESL 800 mg/day than for placebo; only SB was significant (p=0.013). For both ESL arms combined versus placebo, mean scores differed significantly for TS (p=0.006), DS (p=0.031), and SB (p=0.001). CONCLUSIONS: Therapeutic ESL doses led to clinically meaningful, dose-dependent reductions in seizure severity, as measured by SSQ scores. CLASSIFICATION OF EVIDENCE: This study presents Class I evidence that adjunctive ESL (800 and 1200 mg/day) led to clinically meaningful, dose-dependent seizure severity reductions, as measured by the SSQ.
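The analysis above is an ANCOVA of posttreatment scores adjusted for baseline, from which least-squares means per arm are derived. A minimal sketch with statsmodels on simulated data follows; arm labels, effect sizes and sample size are invented.

```python
# Sketch of an ANCOVA: posttreatment score ~ treatment arm + baseline (simulated data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 150
df = pd.DataFrame({
    "arm": rng.choice(["placebo", "esl1200"], size=n),
    "baseline": rng.normal(4.0, 1.0, size=n),
})
df["post"] = (0.6 * df["baseline"]
              + np.where(df["arm"] == "esl1200", -0.5, 0.0)
              + rng.normal(0, 0.8, n))

# The coefficient on the arm term is the baseline-adjusted (least-squares-mean) difference.
fit = smf.ols("post ~ C(arm, Treatment('placebo')) + baseline", data=df).fit()
print(fit.params)
print(fit.pvalues)
```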
Abstract:
The least squares method is analyzed and its basic aspects are discussed. Emphasis is given to procedures that allow simple memorization of the basic equations associated with the linear and non-linear least squares methods, polynomial regression and the multilinear method.
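As a small companion to the procedures mentioned above, the sketch below contrasts a linear-in-parameters fit (polynomial regression, solved in closed form) with a non-linear least-squares fit (solved iteratively from a starting guess); the data are synthetic.

```python
# Linear (polynomial) versus non-linear least squares on synthetic data.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(8)
x = np.linspace(0, 5, 30)

# Polynomial regression is linear in its parameters: a closed-form least-squares solution.
y_poly = 1.0 + 2.0 * x - 0.3 * x ** 2 + rng.normal(0, 0.2, x.size)
poly_coeffs = np.polyfit(x, y_poly, deg=2)           # highest power first

# Non-linear least squares: parameters enter non-linearly, so the fit is iterative.
y_exp = 2.0 * np.exp(-0.7 * x) + rng.normal(0, 0.05, x.size)
(amp, rate), _ = curve_fit(lambda t, a, k: a * np.exp(-k * t), x, y_exp, p0=[1.0, 1.0])

print("polynomial coefficients (x^2, x, 1):", np.round(poly_coeffs, 3))
print("exponential fit (a, k):", round(amp, 3), round(rate, 3))
```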
Abstract:
Rosin is a natural product from pine forests and is used as a raw material in resinate syntheses. Resinates are polyvalent metal salts of rosin acids, and especially Ca- and Ca/Mg-resinates find wide application in the printing ink industry. In this thesis, analytical methods were applied to increase general knowledge of resinate chemistry, and the reaction kinetics was studied in order to model the non-linear increase in solution viscosity during resinate syntheses by the fusion method. Solution viscosity in toluene is an important quality factor for resinates to be used in printing inks. The concept of critical resinate concentration, c_crit, was introduced to define an abrupt change in the dependence of viscosity on resinate concentration in the solution. The concept was then used to explain the non-linear solution viscosity increase during resinate syntheses. A semi-empirical model with two estimated parameters was derived for the viscosity increase on the basis of apparent reaction kinetics. The model was used to control the viscosity and to predict the total reaction time of the resinate process. The kinetic data from the complex reaction media were obtained by acid value titration and by FTIR spectroscopic analyses using a conventional calibration method to measure the resinate concentration and the concentration of free rosin acids. A multivariate calibration method was successfully applied to build partial least square (PLS) models for monitoring acid value and solution viscosity in both the mid-infrared (MIR) and near-infrared (NIR) regions during the syntheses. The calibration models can be used for on-line resinate process monitoring. In the kinetic studies, two main reaction steps were observed during the syntheses. First, a fast irreversible resination reaction occurs at 235 °C, and then a slow thermal decarboxylation of rosin acids starts to take place at 265 °C. Rosin oil is formed during the decarboxylation step, causing significant mass loss as the rosin oil evaporates from the system while the viscosity increases to the target level. The mass balance of the syntheses was determined based on the resinate concentration increase during the decarboxylation step. A mechanistic study of the decarboxylation reaction was based on the observation that resinate molecules are partly solvated by rosin acids during the syntheses. Different decarboxylation mechanisms were proposed for the free and the solvating rosin acids. The deduced kinetic model supported the analytical data of the syntheses over a wide resinate concentration region, over a wide range of viscosity values and at different reaction temperatures. In addition, the application of the kinetic model to the modified resinate syntheses gave a good fit. A novel synthesis method with the addition of decarboxylated rosin (i.e. rosin oil) to the reaction mixture was introduced. The conversion of rosin acid to resinate was increased to the level necessary to obtain the target viscosity for the product at 235 °C. Because the reaction temperature is lower than in traditional fusion synthesis at 265 °C, thermal decarboxylation is avoided. As a consequence, the mass yield of the resinate syntheses can be increased from ca. 70% to almost 100% by recycling the added rosin oil.
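The multivariate calibration described above, PLS models relating MIR/NIR spectra to acid value (and viscosity), can be sketched with scikit-learn. The spectra and reference values below are simulated, not the resinate data, and the number of latent variables is an arbitrary choice.

```python
# Sketch of a PLS calibration model relating spectra to acid value (simulated data).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(9)
n_samples, n_wavenumbers = 60, 400
spectra = rng.normal(size=(n_samples, n_wavenumbers))
# Simulated reference values: acid value tied to one spectral band plus noise.
acid_value = 150 - 40 * spectra[:, 120:140].mean(axis=1) + rng.normal(0, 2, n_samples)

pls = PLSRegression(n_components=4)        # the number of latent variables is arbitrary here
predicted = cross_val_predict(pls, spectra, acid_value, cv=5).ravel()
rmsecv = float(np.sqrt(np.mean((predicted - acid_value) ** 2)))
print("RMSECV of the acid-value calibration:", round(rmsecv, 2))
```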
Abstract:
One hundred and fifteen cachaça samples, derived from distillation in copper stills (73) or in stainless steel stills (42), were analyzed for thirty-five items by chromatography and inductively coupled plasma optical emission spectrometry. The analytical data were treated by Factor Analysis (FA), Partial Least Square Discriminant Analysis (PLS-DA) and Quadratic Discriminant Analysis (QDA). The FA explained 66.0% of the database variance. PLS-DA showed that it is possible to distinguish between the two groups of cachaças with 52.8% of the database variance. QDA was used to build a classification model using acetaldehyde, ethyl carbamate, isobutyl alcohol, benzaldehyde, acetic acid and formaldehyde as chemical descriptors. The model presented 91.7% accuracy in predicting the apparatus in which unknown samples were distilled.
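The QDA classification step described above can be sketched with scikit-learn; the descriptor values below are simulated, not the cachaça measurements, and the class separation is artificial.

```python
# Sketch of a QDA classifier on six chemical descriptors (simulated values).
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(10)
descriptors = ["acetaldehyde", "ethyl_carbamate", "isobutyl_alcohol",
               "benzaldehyde", "acetic_acid", "formaldehyde"]
n = 115
X = rng.normal(size=(n, len(descriptors)))
y = (rng.random(n) < 73 / 115).astype(int)      # 1 = copper still, 0 = stainless steel
X[y == 1] += 0.8                                # artificial shift for copper-distilled samples

accuracy = cross_val_score(QuadraticDiscriminantAnalysis(), X, y, cv=5).mean()
print("cross-validated accuracy:", round(accuracy, 3))
```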