Resumo:
Negotiating trade agreements is an important part of government trade policies, of economic planning and of today's globally operating trading system. The European Union and the United States have been among the most active parties in forming trade agreements in global comparison. These two economic giants are now engaged in negotiations to form their own trade agreement, the so-called Transatlantic Trade and Investment Partnership (TTIP). The purpose of this thesis is to understand the reasons for making a trade agreement between two economic areas and the issues it may involve in the case of the TTIP. The TTIP has received a great deal of attention in the media; opinions towards the partnership have been extreme, and the debate has been heated. The study introduces the nature of the public discussion regarding the TTIP from spring 2013 until 2014. The research problem is to find out what the main issues in the agreement are and what values influence them. The study was conducted by applying methods of critical discourse analysis to the chosen data. This includes gathering the issues from the data based on the attention each has received in the discussion. The underlying motives for raising different issues were analysed by investigating the authors' positions in political, economic and social circles. The perceived economic impacts of the TTIP were analysed with the same criteria. Some of the most respected economic newspapers globally were included in the research material, as well as papers and reports published by the EU and global organisations. The analysis indicates a clear dichotomy in attitudes towards the TTIP. Key problems include the lack of transparency in the negotiations, the widely misunderstood investor-state dispute settlement, the constantly expanding regulatory issues and the risk of protectionism. Both theory and data suggest that the removal of tariffs is an effective tool for reaching economic gains in the TTIP, and that reducing non-tariff barriers, such as protectionism, would be even more effective. Critics are worried about the rising influence of corporations over governments. The discourse analysis reveals that supporters of the TTIP hold values related to increasing welfare through economic growth. Critics do not deny the economic benefits but raise the question of inequality as a consequence. Overall, they represent softer values such as sustainable development and democracy as a counterweight to the corporate values of efficiency and profit maximisation.
Resumo:
A method for determining aflatoxins B1 (AFB1), B2 (AFB2), G1 (AFG1) and G2 (AFG2) in maize with Florisil clean-up was optimised for one-dimensional thin-layer chromatography (TLC) analysis with visual and densitometric quantification. Aflatoxins were extracted with chloroform:water (30:1, v/v), purified through Florisil cartridges, separated on a TLC plate, and detected and quantified by visual and densitometric analysis. The in-house method performance characteristics were determined using spiked and naturally contaminated maize samples and certified reference material. The mean recoveries for aflatoxins were 94.2, 81.9, 93.5 and 97.3% in the ranges of 1.0 to 242 µg/kg for AFB1, 0.3 to 85 µg/kg for AFB2, 0.6 to 148 µg/kg for AFG1 and 0.6 to 140 µg/kg for AFG2, respectively. The correlation values between visual and densitometric analysis for spiked samples were higher than 0.99 for AFB1, AFB2 and AFG1, and 0.98 for AFG2. The mean relative standard deviations (RSD) for spiked samples were 16.2, 20.6, 12.8 and 16.9% for AFB1, AFB2, AFG1 and AFG2, respectively. The RSD of the method for a naturally contaminated sample (n = 5) was 16.8% for AFB1 and 27.2% for AFB2. The limits of detection of the method (LD) were 0.2, 0.1, 0.1 and 0.1 µg/kg and the limits of quantification (LQ) were 1.0, 0.3, 0.6 and 0.6 µg/kg for AFB1, AFB2, AFG1 and AFG2, respectively.
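As a hedged illustration only, using made-up replicate values rather than the paper's data, the Python snippet below shows how the recovery and RSD figures quoted above are computed.

```python
import numpy as np

# Method-performance arithmetic of the kind reported above, on hypothetical
# numbers (NOT the paper's raw data): recovery of a spiked sample and the
# relative standard deviation (RSD) of replicate determinations.
spiked_level = 10.0                               # µg/kg added to blank maize
measured = np.array([9.6, 9.1, 10.2, 9.8, 9.4])   # µg/kg found in replicates

recovery_pct = measured.mean() / spiked_level * 100
rsd_pct = measured.std(ddof=1) / measured.mean() * 100

print(f"mean recovery: {recovery_pct:.1f}%")
print(f"RSD: {rsd_pct:.1f}%")
```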
Resumo:
With the failure of the negotiations on the nuclear question conducted between the United States and North Korea since the early 1990s, the problem has gradually become a matter for the neighbouring countries, all concerned about the future of the South-East Asian region. Presented as North Korea's only ally, China has been invited to take part in three-party, four-party (1997-1998) and six-party (2003-2007) negotiations in the hope of making the isolated regime see reason, but so far none of these attempts has managed to satisfy every member at the table. While tension is rising and American policy is becoming less and less flexible, China, for its part, continues to encourage a return to the Six-Party Talks with a view to denuclearising the Korean peninsula, while working to maintain its ties with North Korea. The guiding thread of this research is first to try to understand why China continues to support North Korea, providing food and financial aid. The idea is therefore to analyse China's foreign policy according to the principles of neoclassical realism. The main hypothesis of this theory holds that the distribution of power in the international system influences states' foreign policy, but that variables at the state and/or individual level intervene in its formulation and implementation. It is proposed here that the link between the unipolarity of the international system and China's North Korea policy is shaped by intervening variables such as: a) the leaders' perception of the distribution of power and of their place in the international system; b) the ideology of the political regime; and c) the type of unit responsible for foreign policy decision-making. The analysis of each of these variables will shed light on China's political and economic interests in maintaining this relationship with North Korea.
Resumo:
The first two articles build procedures to simulate vectors of univariate states and estimate parameters in nonlinear and non-Gaussian state space models. We propose state space specifications that offer more flexibility in modelling dynamic relationships with latent variables. Our procedures extend the HESSIAN method of McCausland (2012). They use approximations of the posterior density of the vector of states that make it possible to simulate directly from the posterior distribution of the state vector, to simulate the state vector in one block and jointly with the vector of parameters, and to do so without resorting to data augmentation. These properties allow the construction of posterior simulators with very high relative numerical efficiency. Being generic, they open a new path in nonlinear and non-Gaussian state space analysis with limited input from the modeller. The third article is an essay in commodity market analysis. Private firms coexist with farmers' cooperatives in commodity markets in sub-Saharan African countries. The private firms hold the largest market share, even though some theoretical models predict their disappearance once confronted with farmers' cooperatives. Moreover, some empirical studies and observations link cooperative incidence in a region with interpersonal trust, and thus with farmers' trust toward cooperatives. We propose a model that supports these empirical facts: a model in which the cooperative's reputation is a leading factor determining the market equilibrium of price competition between a cooperative and a private firm.
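The HESSIAN-based simulators themselves are not reproduced here. As a hedged illustration of the model class the first two articles address, the Python sketch below simulates a standard stochastic-volatility model (nonlinear, non-Gaussian) and filters it with a generic bootstrap particle filter; all parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stochastic-volatility model, a standard nonlinear, non-Gaussian state space model:
#   x_t = phi * x_{t-1} + sigma * eta_t,  eta_t ~ N(0, 1)   (latent log-volatility)
#   y_t = exp(x_t / 2) * eps_t,           eps_t ~ N(0, 1)   (observed return)
phi, sigma, T = 0.95, 0.3, 500
x = np.zeros(T)
y = np.zeros(T)
x[0] = rng.normal(0.0, sigma / np.sqrt(1 - phi**2))
y[0] = np.exp(x[0] / 2) * rng.normal()
for t in range(1, T):
    x[t] = phi * x[t - 1] + sigma * rng.normal()
    y[t] = np.exp(x[t] / 2) * rng.normal()

# Bootstrap particle filter: a generic (not HESSIAN-based) approximation of
# the filtering distribution p(x_t | y_{1:t}).
N = 2000
particles = rng.normal(0.0, sigma / np.sqrt(1 - phi**2), size=N)
filtered_mean = np.zeros(T)
for t in range(T):
    if t > 0:
        particles = phi * particles + sigma * rng.normal(size=N)   # propagate
    # weight by the observation density N(y_t; 0, exp(x_t))
    logw = -0.5 * (particles + y[t] ** 2 * np.exp(-particles) + np.log(2 * np.pi))
    w = np.exp(logw - logw.max())
    w /= w.sum()
    filtered_mean[t] = np.sum(w * particles)
    particles = rng.choice(particles, size=N, replace=True, p=w)   # resample

print("RMSE of filtered log-volatility:", np.sqrt(np.mean((filtered_mean - x) ** 2)))
```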
Resumo:
A spectral-angle-based feature extraction method, Spectral Clustering Independent Component Analysis (SC-ICA), is proposed in this work to improve brain tissue classification from Magnetic Resonance Images (MRI). SC-ICA gives equal priority to global and local features, thereby addressing the inefficiency of conventional approaches in abnormal tissue extraction. First, the input multispectral MRI is divided into different clusters by a spectral-distance-based clustering. Then, Independent Component Analysis (ICA) is applied to the clustered data, in conjunction with Support Vector Machines (SVM), for brain tissue analysis. Normal and abnormal datasets, consisting of real and synthetic T1-weighted, T2-weighted and proton density/fluid-attenuated inversion recovery images, were used to evaluate the performance of the new method. Comparative analysis with ICA-based SVM and other conventional classifiers established the stability and efficiency of SC-ICA-based classification, especially in the reproduction of small abnormalities. Analysis of clinical abnormal cases demonstrated this through the highest Tanimoto index/accuracy values, 0.75/98.8%, compared with ICA-based SVM results of 0.17/96.1% for reproduced lesions. The experimental results recommend the proposed method as a promising approach for clinical and pathological studies of brain diseases.
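The exact SC-ICA algorithm is not given in the abstract; the following scikit-learn sketch shows the general cluster-then-ICA-then-SVM pipeline on synthetic multispectral data, with plain KMeans standing in for the spectral-distance clustering. None of the data or settings are the authors'.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import FastICA
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Synthetic stand-in for multispectral MRI voxels: rows are voxels, columns are
# channels (e.g. T1, T2, PD/FLAIR intensities); labels are a crude surrogate
# for normal vs. abnormal tissue.
X = rng.normal(size=(3000, 3)) + rng.integers(0, 3, size=(3000, 1))
y = (X.sum(axis=1) > 3).astype(int)

# Step 1: cluster voxels by their spectral signature.
clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

# Step 2: ICA within each cluster, so locally dominant components are kept.
features = np.zeros_like(X)
for c in np.unique(clusters):
    idx = clusters == c
    features[idx] = FastICA(n_components=X.shape[1], random_state=0).fit_transform(X[idx])

# Step 3: SVM classification on the cluster-wise ICA features.
clf = SVC(kernel="rbf").fit(features[:2000], y[:2000])
print("held-out accuracy:", clf.score(features[2000:], y[2000:]))
```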
Resumo:
Solid waste management is nowadays an important environmental issue in countries like India. Statistics show that there has been a substantial increase in solid waste generation, especially in urban areas. This trend can be ascribed to rapid population growth, changing lifestyles, food habits and living standards, lack of financial resources, institutional weaknesses, improper choice of technology and public apathy towards municipal solid waste. Waste is directly related to the consumption of resources and to dumping on land. Ecological footprint analysis, an impact-assessment and environmental-management tool, relates these two factors: the waste generated per capita and the amount of land required to dispose of it. Ecological footprint analysis is a quantitative tool that represents the ecological load imposed on the earth by humans in spatial terms. By quantifying the ecological footprint we can formulate strategies to reduce the footprint and thereby achieve sustainable living. In this paper, an attempt is made to explore ecological footprint analysis with special emphasis on waste generation. The paper also discusses and analyses the waste footprint of Kochi city, India. An attempt is also made to suggest strategies to reduce the waste footprint, thereby making the city more sustainable, greener and cleaner.
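The waste-footprint calculation amounts to converting a waste stream into the land area needed to absorb it. The sketch below shows that arithmetic with entirely hypothetical placeholder values (not data for Kochi); only the structure of the calculation is the point.

```python
# Waste-footprint arithmetic: land area needed to assimilate the waste a city
# generates in a year. All numbers are hypothetical placeholders.
population = 600_000                  # inhabitants (hypothetical)
waste_per_capita_kg_day = 0.5         # kg of municipal solid waste per person per day (hypothetical)
landfill_capacity_t_per_ha_yr = 500   # tonnes a hectare of landfill can absorb per year (hypothetical)

annual_waste_t = population * waste_per_capita_kg_day * 365 / 1000   # tonnes/year
waste_footprint_ha = annual_waste_t / landfill_capacity_t_per_ha_yr  # hectares/year
per_capita_footprint_ha = waste_footprint_ha / population

print(f"annual waste: {annual_waste_t:,.0f} t")
print(f"waste footprint: {waste_footprint_ha:,.0f} ha")
print(f"per-capita waste footprint: {per_capita_footprint_ha:.5f} ha/person")
```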
Resumo:
Consumers are becoming more concerned about food quality, especially regarding how, when and where foods are produced (Haglund et al., 1999; Kahl et al., 2004; Alföldi et al., 2006). Therefore, in recent years there has been a growing interest in methods for food quality assessment, especially in picture-developing methods as a complement to traditional chemical analysis of single compounds (Kahl et al., 2006). Biocrystallization, one of the picture-developing methods, is based on the crystallographic phenomenon that, when aqueous solutions of CuCl2 dihydrate are crystallized with the addition of organic solutions originating, e.g., from crop samples, biocrystallograms with reproducible crystal patterns are generated (Kleber & Steinike-Hartung, 1959). Its output is a crystal pattern on glass plates from which different variables (numbers) can be calculated using image analysis. However, there is a lack of a standardized evaluation method to quantify the morphological features of the biocrystallogram image. Therefore, the main aims of this research are (1) to optimize an existing statistical model in order to describe all the effects that contribute to the experiment; (2) to investigate the effect of image parameters on the texture analysis of the biocrystallogram images, i.e., region of interest (ROI), color transformation and histogram matching, on samples from the project 020E170/F financed by the Federal Ministry of Food, Agriculture and Consumer Protection (BMELV), the samples being wheat and carrots from controlled field and farm trials; and (3) to relate the strongest texture-parameter effects to the visual evaluation criteria developed by a group of researchers (University of Kassel, Germany; Louis Bolk Institute (LBI), Netherlands; and Biodynamic Research Association Denmark (BRAD), Denmark) in order to clarify the relation between the texture parameters and the visual characteristics of an image. The refined statistical model was implemented as an lme model with repeated measurements via crossed effects, programmed in R (version 2.1.0). The validity of the F and P values was checked against the SAS program: while the ANOVA yields the same F values, the P values are larger in R because of its more conservative approach, and the refined model yields more significant P values. The optimization of the image analysis deals with the following parameters: ROI (region of interest, the area around the geometrical center), color transformation (calculation of a one-dimensional gray-level value from the three-dimensional color information of the scanned picture, which is necessary for the texture analysis) and histogram matching (normalization of the histogram of the picture to enhance the contrast and to minimize errors from lighting conditions). The samples were wheat from the DOC trial with four field replicates for the years 2003 and 2005, "market samples" (organic and conventional neighbors of the same variety) for 2004 and 2005, carrots obtained from the University of Kassel (two varieties, two nitrogen treatments) for the years 2004, 2005 and 2006, and "market samples" of carrots for 2004 and 2005. The criterion for the optimization was the repeatability of the differentiation of the samples over the different harvests (years). Different ROIs were found for different samples, reflecting the different pictures.
The color transformation that differentiates most efficiently relies on the gray scale, i.e., an equal-weight color transformation. A second dimension of the color transformation appeared only in some years, as an effect of color wavelength (hue) for carrots treated with different nitrate fertilizer levels. The best histogram matching is matching to a Gaussian distribution. The approach was then to find a connection between the variables from textural image analysis and the different visual criteria. The relation between the texture parameters and the visual evaluation criteria was limited to the carrot samples, especially since these could be well differentiated by the texture analysis. It was possible to connect groups of variables of the texture analysis with groups of criteria from the visual evaluation. These selected variables were able to differentiate the samples but not to classify them according to treatment. By contrast, with the visual criteria, which describe the picture as a whole, classification was possible in 80% of the sample cases. This clearly shows the limits of the single-variable approach of the image analysis (texture analysis).
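The specific texture variables used in the thesis are not detailed in the abstract. As a generic, hedged illustration of the kind of texture features computed from a gray-level ROI, the following Python sketch derives gray-level co-occurrence features with scikit-image (function names assume skimage >= 0.19; older releases spell them greycomatrix/greycoprops).

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(2)

# Stand-in for a scanned biocrystallogram ROI: an 8-bit gray-level image.
roi = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)

# Quantize to fewer gray levels to keep the co-occurrence matrix small.
levels = 32
roi_q = (roi // (256 // levels)).astype(np.uint8)

# Gray-level co-occurrence matrix for several offsets and directions,
# followed by a few standard texture variables.
glcm = graycomatrix(roi_q,
                    distances=[1, 2, 4],
                    angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                    levels=levels, symmetric=True, normed=True)

for prop in ("contrast", "homogeneity", "energy", "correlation"):
    print(prop, graycoprops(glcm, prop).mean())
```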
Resumo:
In this work, the self-assembly of diamagnetic and paramagnetic molecules as well as single-molecule magnets on gold substrates and magnetically structured substrates was investigated. Three different classes of phthalocyanine derivatives were used: diamagnetic subphthalocyanines, paramagnetic metal phthalocyanines and diphthalocyaninato lanthanide complexes. All synthesized compounds are peripherally thioether-substituted. The alkyl chains (a: n-C8H17, b: n-C12H25) provide solubility in many organic solvents and ensure ordered assembly on a surface, with binding to gold taking place mainly via the sulfur atoms. The self-assembled monolayers deposited from solution were investigated by XPS, NEXAFS spectroscopy and ToF-SIMS. During self-assembly on magnetically structured substrates, the molecules are subject to magnetic stray fields and bind preferentially only in certain regions. The resulting submonolayers were additionally investigated by X-PEEM. The manganese phthalocyanines [MnClPc(SR)8] 1, prepared here for the first time, were obtained starting from MnCl2. During work-up in air, oxidation to manganese(III) took place; +III is the most stable oxidation state of manganese in phthalocyanines. The axial chlorido ligand was identified by mass spectrometry and by FIR and Raman spectroscopy. SQUID measurements showed that the complexes 1 have four unpaired electrons. In the subphthalocyanines [BClSubpc(SR)6] 2, the axial chlorido ligand was substituted with the rod-shaped phenol derivative 29-H, and the successful ligand substitution was confirmed by NMR and IR spectroscopy as well as mass spectrometry on the products [BSubpc(SR)6(29)] 30. The radical character of the synthesized terbium complexes [Tb{Pc(SR)8}2] 3 was demonstrated spectroscopically; SQUID measurements showed that they are single-molecule magnets with an energy barrier U of the double-well potential of 880 K, or 610 cm-1, for 3a. First, the SAMs of the complexes 1, 2, 30 and 3 were investigated on gold substrates without magnetic structuring. The manganese phthalocyanines 1 form ordered SAMs with the molecules lying largely flat, as shown by the XPS, NEXAFS and ToF-SIMS analysis. The majority of the thioether units are coordinated to gold, and the alkyl chains point away from the surface in a disordered fashion. Upon adsorption, reduction to manganese(II) takes place and the axial chlorido ligand is cleaved off. This is due to the so-called surface trans effect: in the present case the metal surface exerts a stronger trans effect than the axial ligand, which had not been observed experimentally before. The thioether-substituted subphthalocyanines 2 and 30 as well as the diphthalocyaninatoterbium complexes 3 are also suitable for SAMs. Their monolayers were investigated by XPS and NEXAFS spectroscopy, and despite a certain disorder in the films the molecules in each case lie essentially flat on the gold surface. Presumably the alkyl chains in these systems are also largely oriented parallel to the surface. In contrast to the manganese phthalocyanines 1, for 2b, 30a, 30b and 3b a covalent Au–S bond, not expected for thioethers, occurs in addition to the coordinative bonding of the sulfur atoms to gold; it is enabled by C–S bond cleavage with loss of the alkyl chains.
The extent to which this process occurs does not appear to correlate with the molecular structure. Self-assembled submonolayers on magnetically structured substrates were prepared with the diamagnetic subphthalocyanine 2b. Detecting the submonolayers was difficult and ultimately succeeded through a combination of ToF-SIMS, NEXAFS imaging and X-PEEM. The analysis of the ToF-SIMS data showed that a modulation of the distribution of the molecules does indeed occur on a substrate magnetically structured beneath the surface. With X-PEEM, the magnetic structure of the ferromagnetic layer of the substrate could be directly correlated with the distribution of the adsorbed molecules. The subphthalocyanines 2b do not adsorb at the domain boundaries but preferentially between them. On substrates with alternating 6.5 and 3.5 µm wide magnetic domains, the molecules bind preferentially in the regions of lowest magnetic stray-field gradients, i.e. the larger domains. Such substrates were used for the ToF-SIMS and X-PEEM measurements. On larger magnetic structures with domains about 400 µm wide, which were used for NEXAFS imaging because of the lower spatial resolution of that method, the molecules then bind in all domains. According to this interpretation, the diamagnetic molecules are pushed out of the inhomogeneous magnetic field above the sample surface and behave analogously to macroscopic diamagnets. So far, unambiguous detection of the molecules on the magnetically structured substrates has only been achieved for the diamagnetic subphthalocyanines 2b. To advance the interpretation of their behaviour during self-assembly in an inhomogeneous magnetic field, the subphthalocyanine 37b, which contains a stable organic TEMPO radical in its axial ligand, was prepared. The paramagnetic subphthalocyanine 37b should bind on the magnetically structured substrates in regions of strong magnetic stray fields and thus show the opposite behaviour to the diamagnetic subphthalocyanines 2b. For reasons of time, this could not yet be demonstrated within the scope of this work.
Resumo:
To study the behaviour of beam-to-column composite connections, more sophisticated finite element models are required, since the component model has some severe limitations. In this research a generic finite element model for composite beam-to-column joints with welded connections is developed using current state-of-the-art local modelling. By applying a mechanically consistent scaling method, it can provide the constitutive relationship for a plane rectangular macro element with beam-type boundaries. This macro element, which preserves local behaviour and allows the transfer of five independent states between local and global models, can then be implemented in high-accuracy frame analysis with the possibility of limit-state checks. So that the macro element for the scaling method can be used in a practical manner, a generic geometry program, proposed in this study as a new idea, is also developed for this finite element model. With generic programming, a set of global geometric variables can be input to generate a specific instance of the connection without much effort. The proposed finite element model generated by this generic programming is validated against test results from the University of Kaiserslautern. Finally, two illustrative examples of applying this macro element approach are presented. The first example demonstrates how to obtain the constitutive relationships of the macro element: under certain assumptions for a typical composite frame, they can be represented by bilinear laws for the macro bending and shear states, which are then coupled by a two-dimensional surface law with yield and failure surfaces. The second example presents a scaling concept that combines sophisticated local models with a frame analysis using the macro element approach, as a practical application of this numerical model.
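The abstract states that the macro element's bending and shear states follow bilinear laws under certain assumptions. As a minimal sketch (not the thesis's calibrated model, and with hypothetical stiffness and yield values), such a bilinear law can be written as:

```python
def bilinear_law(deformation, k_elastic, yield_deformation, k_hardening):
    """Bilinear constitutive law: elastic stiffness up to a yield deformation,
    reduced (hardening) stiffness beyond it. Could stand for either the bending
    or the shear state of a macro element; values are illustrative only."""
    sign = 1.0 if deformation >= 0 else -1.0
    d = abs(deformation)
    if d <= yield_deformation:
        return sign * k_elastic * d
    return sign * (k_elastic * yield_deformation
                   + k_hardening * (d - yield_deformation))

# Hypothetical bending state: stiffness in kNm/rad, rotation in rad.
for phi in (0.002, 0.01, 0.03):
    print(phi, bilinear_law(phi, k_elastic=60_000, yield_deformation=0.008, k_hardening=3_000), "kNm")
```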
Resumo:
In any discipline where uncertainty and variability are present, it is important to have principles which are accepted as inviolate and which should therefore drive statistical modelling, statistical analysis of data and any inferences drawn from such an analysis. Despite the fact that two such principles have existed for the last two decades, and that from them a sensible, meaningful methodology has been developed for the statistical analysis of compositional data, the application of inappropriate and/or meaningless methods persists in many areas of application. This paper identifies at least ten common fallacies and confusions in compositional data analysis, with illustrative examples, and provides readers with the necessary, and hopefully sufficient, arguments to persuade the culprits why and how they should amend their ways.
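The paper's fallacies are not listed in the abstract, but the methodology it defends rests on log-ratio transformations of closed data. A minimal Python sketch of that idea, using the centred log-ratio (clr) transform as one example, is shown below; the printed output illustrates that log-ratio coordinates do not depend on the arbitrary total of the composition.

```python
import numpy as np

def closure(x):
    """Rescale positive parts so each composition sums to 1."""
    x = np.asarray(x, dtype=float)
    return x / x.sum(axis=-1, keepdims=True)

def clr(x):
    """Centred log-ratio transform: log of each part over the geometric mean.
    Log-ratios are invariant to the arbitrary total, which is the key point of
    the compositional methodology."""
    x = closure(x)
    g = np.exp(np.mean(np.log(x), axis=-1, keepdims=True))
    return np.log(x / g)

# A 3-part composition reported in percent and the same one in ppm: the raw
# numbers differ, but the clr coordinates (and any log-ratio) are identical.
sample_pct = [60.0, 30.0, 10.0]
sample_ppm = [600_000.0, 300_000.0, 100_000.0]
print(clr(sample_pct))
print(clr(sample_ppm))
```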
Resumo:
Isotopic data are currently becoming an important source of information regarding the sources, evolution and mixing processes of water in hydrogeologic systems. However, it is not clear how to treat the geochemical data and the isotopic data together statistically. We propose to introduce the isotopic information as new parts and to apply compositional data analysis to the resulting enlarged composition. The results are equivalent to downscaling the classical isotopic delta variables, because these are already relative (as required in the compositional framework) and isotopic variations are almost always very small. This methodology is illustrated and tested with a study of the Llobregat River Basin (Barcelona, NE Spain), where it is shown that, though very small, the isotopic variations complement the geochemical principal components and help in the better identification of pollution sources.
Resumo:
Many multivariate methods that are apparently distinct can be linked by introducing one or more parameters into their definition. Methods that can be linked in this way are correspondence analysis, unweighted or weighted logratio analysis (the latter also known as "spectral mapping"), nonsymmetric correspondence analysis, principal component analysis (with and without logarithmic transformation of the data) and multidimensional scaling. In this presentation I will show how several of these methods, which are frequently used in compositional data analysis, may be linked through parametrizations such as power transformations, linear transformations and convex linear combinations. Since the methods of interest here all lead to visual maps of data, a "movie" can be made in which the linking parameter is allowed to vary in small steps: the results are recalculated "frame by frame" and one can see the smooth change from one method to another. Several of these "movies" will be shown, giving a deeper insight into the similarities and differences between these methods.
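As a hedged sketch of the linking idea (my reading of the power-transformation parametrization, not necessarily the presenter's exact formulation), the snippet below computes correspondence analysis row coordinates of a power-transformed table and rescales them by 1/alpha; as alpha shrinks towards zero the map is expected to approach a logratio-style analysis, while alpha = 1 is ordinary correspondence analysis of the raw table.

```python
import numpy as np

def ca_row_coords(N):
    """Row principal coordinates of a correspondence analysis of table N."""
    P = N / N.sum()
    r = P.sum(axis=1)
    c = P.sum(axis=0)
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
    U, sv, Vt = np.linalg.svd(S, full_matrices=False)
    return (U * sv) / np.sqrt(r)[:, None]

rng = np.random.default_rng(3)
X = rng.gamma(shape=2.0, scale=1.0, size=(10, 5))  # strictly positive data table

# Power transformation as the linking parameter; note that axis signs from the
# SVD are arbitrary and may flip between values of alpha.
for alpha in (1.0, 0.5, 0.1, 0.01):
    F = ca_row_coords(X ** alpha) / alpha
    print(f"alpha={alpha:5.2f}  first row, dims 1-2: {F[0, :2]}")
```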
Resumo:
We study a particular restitution problem in which there is an indivisible good (land or property) over which two agents have rights: the dispossessed agent and the owner. A third party, possibly the government, seeks to resolve the situation by assigning the right to one agent and compensating the other. There is also a maximum amount of money available for the compensation. We characterize a family of asymmetrically fair rules that are immune to strategic behavior, guarantee minimal welfare levels for the agents, and satisfy the budget constraint.
Resumo:
An improved method for the detection of pressed hazelnut oil in admixtures with virgin olive oil by analysis of polar components is described. The method, which is based on SPE isolation of the polar fraction followed by RP-HPLC analysis with UV detection, is able to detect virgin olive oil adulterated with pressed hazelnut oil at levels as low as 5%, with accuracy (90.0 +/- 4.2% recovery of internal standard), good reproducibility (4.7% RSD) and linearity (R² = 0.9982 over the 5-40% adulteration range). An international ring test of the developed method highlighted its capability, as 80% of the samples were, on average, correctly identified despite the fact that no training samples were provided to the participating laboratories. However, the large variability in marker components among the pressed hazelnut oils examined prevents the use of the method for quantification of the level of adulteration.
Resumo:
We explore the potential for making statistical decadal predictions of sea surface temperatures (SSTs) in a perfect model analysis, with a focus on the Atlantic basin. Various statistical methods (Lagged correlations, Linear Inverse Modelling and Constructed Analogue) are found to have significant skill in predicting the internal variability of Atlantic SSTs for up to a decade ahead in control integrations of two different global climate models (GCMs), namely HadCM3 and HadGEM1. Statistical methods which consider non-local information tend to perform best, but which is the most successful statistical method depends on the region considered, GCM data used and prediction lead time. However, the Constructed Analogue method tends to have the highest skill at longer lead times. Importantly, the regions of greatest prediction skill can be very different to regions identified as potentially predictable from variance explained arguments. This finding suggests that significant local decadal variability is not necessarily a prerequisite for skillful decadal predictions, and that the statistical methods are capturing some of the dynamics of low-frequency SST evolution. In particular, using data from HadGEM1, significant skill at lead times of 6–10 years is found in the tropical North Atlantic, a region with relatively little decadal variability compared to interannual variability. This skill appears to come from reconstructing the SSTs in the far north Atlantic, suggesting that the more northern latitudes are optimal for SST observations to improve predictions. We additionally explore whether adding sub-surface temperature data improves these decadal statistical predictions, and find that, again, it depends on the region, prediction lead time and GCM data used. Overall, we argue that the estimated prediction skill motivates the further development of statistical decadal predictions of SSTs as a benchmark for current and future GCM-based decadal climate predictions.
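The Constructed Analogue method is only named in the abstract; the following Python sketch shows its basic mechanics on synthetic data (not HadCM3 or HadGEM1 output): the current anomaly state is expressed as a least-squares combination of historical states, and the same weights are applied to those states at the forecast lead time.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic AR(1) anomaly fields standing in for gridded SST anomalies.
n_years, n_grid, tau = 120, 50, 5
field = np.zeros((n_years, n_grid))
for t in range(1, n_years):
    field[t] = 0.8 * field[t - 1] + rng.normal(scale=0.5, size=n_grid)

library_now = field[: n_years - tau - 1]      # historical states
library_future = field[tau : n_years - 1]     # the same states tau years later
current_state = field[n_years - tau - 1]      # state we predict from
verification = field[-1]                      # what actually happened tau years on

# Weights that best reconstruct the current state from the library (least squares),
# then the constructed-analogue forecast at lead time tau.
weights, *_ = np.linalg.lstsq(library_now.T, current_state, rcond=None)
forecast = library_future.T @ weights

corr = np.corrcoef(forecast, verification)[0, 1]
print(f"anomaly correlation of the tau={tau} yr constructed-analogue forecast: {corr:.2f}")
```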