985 results for Decomposition methods
Abstract:
Environmentally benign and economical methods for the preparation of industrially important hydroxy acids and diacids were developed. The carboxylic acids, used in polyesters, alkyd resins, and polyamides, were obtained by the oxidation of the corresponding alcohols with hydrogen peroxide or air, catalyzed by sodium tungstate or supported noble metals. These oxidations were carried out using water as the solvent. The alcohols are also a useful alternative to the conventional reactants, hydroxyaldehydes and cycloalkanes. The oxidation of 2,2-disubstituted propane-1,3-diols with hydrogen peroxide catalyzed by sodium tungstate afforded 2,2-disubstituted 3-hydroxypropanoic acids and 1,1-disubstituted ethane-1,2-diols as products. A computational study of the Baeyer-Villiger rearrangement of the intermediate 2,2-disubstituted 3-hydroxypropanals provided detailed insight into the mechanism of the reaction. Linear primary diols with a chain length of at least six carbons were readily oxidized with hydrogen peroxide to linear dicarboxylic acids, catalyzed by sodium tungstate. The Pt/C-catalyzed air oxidation of 2,2-disubstituted propane-1,3-diols and linear primary diols afforded the highest yields of the corresponding hydroxy acids, while the Pt, Bi/C-catalyzed oxidation of the diols afforded the highest yields of the corresponding diacids. The mechanism of the promoted oxidation was best described by the ensemble effect and by the formation of a complex between the hydroxy and carboxy groups of the hydroxy acids and the bismuth atoms. The Pt, Bi/C-catalyzed air oxidation of 2-substituted 2-hydroxymethylpropane-1,3-diols gave 2-substituted malonic acids by the decarboxylation of the corresponding triacids. Activated carbon was the best support and bismuth the most efficient promoter in the air oxidation of 2,2-dialkylpropane-1,3-diols to diacids. In oxidations carried out in organic solvents, barium sulfate could be a valuable alternative to activated carbon as a non-flammable support. In the Pt/C-catalyzed air oxidation of 2,2-disubstituted propane-1,3-diols to 2,2-disubstituted 3-hydroxypropanoic acids, the small size of the 2-substituents enhanced the rate of the oxidation. When the potential of the platinum catalyst was not controlled, the highest yield of the diacids in the Pt, Bi/C-catalyzed air oxidation of 2,2-dialkylpropane-1,3-diols was obtained in the mass-transfer-controlled regime. The most favorable pH of the reaction mixture for the promoted oxidation was 10. A reaction temperature of 40°C prevented the decarboxylation of the diacids.
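As a concrete instance of the diol-to-diacid chemistry summarized above, the display below shows an illustrative balanced equation for the tungstate-catalyzed oxidation of 1,6-hexanediol to adipic acid, assuming two equivalents of H2O2 per alcohol-to-acid conversion (alcohol to aldehyde, then aldehyde to acid); it is a plausible overall stoichiometry, not a scheme taken from the thesis itself.

```latex
% Illustrative overall stoichiometry (an assumption, not quoted from
% the thesis): tungstate-catalyzed oxidation of 1,6-hexanediol to
% adipic acid, with two equivalents of H2O2 per CH2OH -> COOH step.
\begin{equation*}
\mathrm{HO(CH_2)_6OH} \;+\; 4\,\mathrm{H_2O_2}
\;\xrightarrow{\;\mathrm{Na_2WO_4},\ \mathrm{H_2O}\;}\;
\mathrm{HOOC(CH_2)_4COOH} \;+\; 6\,\mathrm{H_2O}
\end{equation*}
```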
Abstract:
Rarely is it possible to obtain absolute numbers in free-ranging populations, and although various direct and indirect methods are used to estimate abundance, few are validated against populations of known size. In this paper, we apply grounding, calibration and verification methods, used to validate mathematical models, to methods of estimating relative abundance. To illustrate how this might be done, we consider and evaluate the widely applied passive tracking index (PTI) methodology. Using published data, we examine the rationality of PTI methodology, how animal activity and abundance are conceptually related, and whether alternative methods are subject to similar biases or produce similar abundance estimates and trends. We then calibrate the method against populations representing the range of densities likely to be encountered in the field. Finally, we compare PTI trends against a prediction that adjacent populations of the same species will have similar abundance values and trends in activity. We show that while PTI abundance estimates are subject to environmental and behavioural stochasticity peculiar to each species, the PTI method and associated variance estimate showed high probability of detection, high precision of abundance values and, generally, low variability between surveys. We suggest that the PTI method, applied using this procedure and for these species, provides a sensitive and credible index of abundance. The same or a similar validation approach can and should be applied to alternative relative-abundance methods in order to demonstrate their credibility and justify their use.
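A minimal sketch of how a PTI-like index and its variance might be computed from raw track-plot counts. The published PTI methodology estimates its variance from a linear model with plot and day effects, so the simple mean-of-daily-means below, and all names and numbers, are illustrative assumptions only.

```python
import numpy as np

def passive_tracking_index(counts):
    """PTI-like index from a (days x plots) array of track-plot
    intrusion counts.  Sketch only: the index is the mean of the
    daily plot means, and the variance is that of the daily means
    (the published method uses a linear mixed model instead)."""
    counts = np.asarray(counts, dtype=float)
    daily_means = counts.mean(axis=1)       # mean intrusions per plot, each day
    pti = daily_means.mean()                # index of relative abundance
    var = daily_means.var(ddof=1) / len(daily_means)  # variance of the index
    return pti, var

# Hypothetical survey: 4 nights x 5 tracking plots
obs = [[2, 0, 1, 3, 1],
       [1, 1, 0, 2, 2],
       [3, 0, 2, 2, 1],
       [2, 1, 1, 3, 0]]
print(passive_tracking_index(obs))
```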
Abstract:
Thermal decomposition and combustion of lithium perchlorate ammine:ammonium perchlorate (LPA:AP) and magnesium perchlorate ammine:ammonium perchlorate (MPA:AP) pellets have been studied using DTA, TG, and strand-burner techniques. The DTA results for the ammine:AP pellets show that the addition of ammines lowers the ignition temperature of AP. However, isothermal TG of the ammine:AP pellets shows that for LPA:AP pellets the extent of decomposition increases with increasing LPA concentration, whereas for MPA:AP pellets the extent of decomposition decreases with increasing MPA concentration. Similarly, LPA:AP pellets show higher burning rates than AP pellets, while MPA:AP pellets show lower burning rates than AP pellets. Increasing the concentration of MPA in MPA:AP pellets completely suppresses combustion. These results are explained on the basis of the thermal characteristics of the additives and their decomposition products.
Abstract:
In recent years, concern has arisen over the effects of increasing carbon dioxide (CO2) in the earth's atmosphere due to the burning of fossil fuels. One way to mitigate the increase in atmospheric CO2 concentration and climate change is carbon sequestration in forest vegetation through photosynthesis. Comparable regional-scale estimates of the carbon balance of forests are therefore needed for scientific and political purposes. The aim of the present dissertation was to improve methods for quantifying and verifying inventory-based carbon pool estimates of boreal forests on mineral soils. Ongoing forest inventories provide data based on statistically sound sampling for estimating the level of carbon stocks and stock changes, but improved modelling tools and comparisons of methods are still needed. In this dissertation, the entire inventory-based large-scale forest carbon stock assessment method was presented together with several separate methods for enhancing and comparing it. The enhancement methods presented here include ways to quantify the biomass of understorey vegetation and to estimate the litter production of needles and branches. In addition, the optical remote sensing method illustrated in this dissertation can be used for comparison with independent data. The forest inventory-based large-scale carbon stock assessment method demonstrated here provided reliable carbon estimates when compared with independent data. Future work to improve the accuracy of this method could consist of reducing the uncertainties regarding belowground biomass and litter production as well as the soil compartment. The methods developed will serve the needs of UNFCCC reporting and reporting under the Kyoto Protocol. The method is principally intended for analysts or planners interested in quantifying carbon over extensive forest areas.
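To make the inventory-based idea concrete, here is a minimal sketch of the plot-level calculation such assessments build on: stem volume expanded to whole-tree biomass with a biomass expansion factor (BEF) and converted to carbon with a fixed carbon fraction. The function name, the BEF values and the 0.5 carbon fraction are illustrative assumptions, not the dissertation's actual models.

```python
# Minimal sketch of an inventory-based carbon stock estimate.
# All numbers and names are illustrative, not the thesis's models.
CARBON_FRACTION = 0.5   # t C per t dry biomass (a commonly used default)

def plot_carbon_stock(stem_volume_m3_ha, bef_t_per_m3):
    """Carbon stock (t C/ha) of one sample plot."""
    biomass = stem_volume_m3_ha * bef_t_per_m3   # t dry biomass per ha
    return biomass * CARBON_FRACTION

# Hypothetical plots: (stem volume m3/ha, BEF t/m3)
plots = [(180.0, 0.71), (95.0, 0.65), (240.0, 0.73)]
stocks = [plot_carbon_stock(v, b) for v, b in plots]
mean_stock = sum(stocks) / len(stocks)           # t C/ha over the inventory
print(f"mean carbon stock: {mean_stock:.1f} t C/ha")
```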
Abstract:
Context: Most studies assess pollination success at capsule maturity, and studies of pre-zygotic processes are often lacking. Aims: This study investigates the suitability of controlled pollination for a potential forestry plantation species, Eucalyptus argophloia, by examining pre- and post-zygotic pollination success. Methods: Pollen tube development, capsule set and seed set were compared following three-stop pollination, artificially induced protogyny (AIP), AIP unpollinated, and open pollination. The fecundity of stored pollen was compared with that of fresh pollen. Results: Three-stop pollination, AIP and open pollination had similar numbers of pollen tubes, but AIP unpollinated had none. Open pollination produced significantly more capsules and a higher total number of seeds than the other treatments. There were significantly more seeds per retained capsule for the open-pollination and three-stop treatments than for the AIP and AIP unpollinated treatments. There were no significant differences relative to the age of pollen. Conclusions: Pre-zygotic success in terms of pollen tubes was similar for open pollination, three-stop pollination and AIP, but this was not reflected in post-zygotic success: the open-pollination and three-stop treatments produced significantly more seeds per retained capsule than the AIP treatments, and open pollination yielded the most seeds. The higher capsule set and total seed set under open pollination, and the fewer capsules in controlled pollinations, may reflect physical damage to buds because of the small E. argophloia flowers. Suitable alternative breeding strategies other than controlled pollination are discussed for this species.
Abstract:
This paper deals with the analysis of the liquid limit of soils, an inferential parameter of universal acceptance. It has been undertaken primarily to re-examine one-point methods of determining liquid limit water contents. It has been shown from the basic characteristics of soils and associated physico-chemical factors that critical shear strengths at liquid limit water contents arise out of force-field equilibrium and are independent of soil type. This leads to a scientific basis for liquid limit determination by one-point methods, which hitherto were formulated purely on statistical analysis of data. Available methods (Norman, 1959; Karlsson, 1961; Clayton & Jukes, 1978) of one-point liquid limit determination have been critically re-examined. A simple one-point cone penetrometer method of computing the liquid limit has been suggested and compared with other methods. The experimental data of Sherwood & Ryley (1970) have been employed for comparison of the different cone penetration methods. The results indicate that, apart from mere statistical considerations, one-point methods have a strong scientific basis in the uniqueness of the modified flow line irrespective of soil type. The normalized flow line is obtained by normalizing water contents by liquid limit values, thereby nullifying the effects of surface areas and associated physico-chemical factors that are otherwise reflected in different responses at the macro level.
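A minimal sketch of a one-point cone computation of the kind discussed above, assuming a normalized flow line of the form w/wL = 0.77 log10(d), with cone penetration d in mm. This relation is self-consistent in that it returns wL = w at the standard 20 mm penetration (0.77 log10(20) is approximately 1.00), but the coefficients of the paper's own relation may differ.

```python
import math

def liquid_limit_one_point(water_content_pct, penetration_mm):
    """One-point cone estimate of the liquid limit.

    Sketch assuming a normalized flow line
        w / w_L = 0.77 * log10(d),   d = cone penetration in mm,
    which gives w = w_L at the standard 20 mm penetration.
    The exact coefficients in the paper may differ.
    """
    return water_content_pct / (0.77 * math.log10(penetration_mm))

# Hypothetical single test: w = 48% at d = 16 mm penetration
print(f"w_L ~ {liquid_limit_one_point(48.0, 16.0):.1f} %")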
Abstract:
Lower water availability coupled with labor shortages has resulted in the increasing inability of growers to cultivate puddled transplanted rice (PTR). A field study was conducted in the wet season of 2012 and the dry season of 2013 to evaluate the performance of five rice establishment methods and four weed control treatments on weed management and rice yield. Grass weeds were more abundant in dry-seeded rice (DSR) than in PTR and nonpuddled transplanted rice (NPTR). The highest total weed density (225-256 plants m⁻²) and total weed biomass (315-501 g m⁻²) were recorded in DSR, while the lowest (102-129 plants m⁻² and 75-387 g m⁻²) were recorded in PTR. Compared with the weedy plots, the treatment pretilachlor followed by fenoxaprop plus ethoxysulfuron plus 2,4-D provided excellent weed control. This treatment, however, performed poorly in NPTR. In both seasons, herbicide efficacy was better in DSR and wet-seeded rice. PTR and DSR produced the maximum rice grain yields. The weed-free plots and herbicide treatments produced 84-614% and 58-504% higher rice grain yield, respectively, than the weedy plots in 2012, and a similar trend was observed in 2013.
Abstract:
Glaucoma is the second leading cause of blindness worldwide. Often, glaucomatous damage to the optic nerve head (ONH) and ONH changes occur prior to visual field loss and are observable in vivo. Thus, digital image analysis is a promising choice for detecting the onset and/or progression of glaucoma. In this paper, we present a new framework for detecting glaucomatous changes in the ONH of an eye using the method of proper orthogonal decomposition (POD). A baseline topograph subspace was constructed for each eye to describe the structure of the ONH of the eye at a reference/baseline condition using POD. Any glaucomatous changes in the ONH of the eye present during a follow-up exam were estimated by comparing the follow-up ONH topography with its baseline topograph subspace representation. Image correspondence measures of L1-norm and L2-norm, correlation, and image Euclidean distance (IMED) were used to quantify the ONH changes. An ONH topographic library built from the Louisiana State University Experimental Glaucoma study was used to evaluate the performance of the proposed method. The area under the receiver operating characteristic curve (AUC) was used to compare the diagnostic performance of the POD-induced parameters with the parameters of the topographic change analysis (TCA) method. The IMED and L2-norm parameters in the POD framework provided the highest AUCs of 0.94 at the 10° field of imaging and 0.91 at the 15° field of imaging, compared to the TCA parameters with AUCs of 0.86 and 0.88, respectively. The proposed POD framework captures instrument measurement variability and inherent structural variability and shows promise for improving our ability to detect glaucomatous change over time in glaucoma management.
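A minimal sketch of the POD workflow described above, assuming a baseline subspace built by SVD of the baseline topographies and residual-based image-correspondence measures on the follow-up exam. The energy criterion, array shapes and parameter names are illustrative assumptions, and IMED (which needs pixel coordinates and a spatial kernel) is omitted; this is not the paper's exact implementation.

```python
import numpy as np

def pod_change_scores(baseline_maps, followup_map, energy=0.95):
    """POD-style change detection sketch.

    baseline_maps: (n_exams, n_pixels) baseline ONH topographies of
    one eye; followup_map: (n_pixels,) follow-up topography.  An SVD
    of the centered baselines gives the POD modes; the follow-up is
    represented in that subspace and compared with itself via simple
    image-correspondence measures."""
    X = np.asarray(baseline_maps, dtype=float)
    mean = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    # Keep enough modes to capture the requested fraction of energy.
    k = np.searchsorted(np.cumsum(s**2) / np.sum(s**2), energy) + 1
    basis = Vt[:k]                          # POD modes of the baseline subspace
    d = np.asarray(followup_map, float) - mean
    recon = basis.T @ (basis @ d) + mean    # subspace representation of follow-up
    f = np.asarray(followup_map, float)
    return {
        "L1": np.abs(f - recon).sum(),
        "L2": np.linalg.norm(f - recon),
        "corr": np.corrcoef(f, recon)[0, 1],
    }

rng = np.random.default_rng(0)
base = rng.normal(size=(5, 256))            # 5 baseline exams, 256-pixel maps
follow = base[0] + 0.1 * rng.normal(size=256)
print(pod_change_scores(base, follow))
```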
Abstract:
A 4-degree-of-freedom single-input system and a 3-degree-of-freedom multi-input system are solved by the Coates', modified Coates' and Chan-Mai flowgraph methods. It is concluded that the Chan-Mai flowgraph method is superior to other flowgraph methods in such cases.
Abstract:
We review 20 studies that examined persuasive processing and outcomes of health messages using neurocognitive measures. The results suggest that cognitive processes and neural activity in regions thought to reflect self-related processing may be more prominent in the persuasive processing of self-relevant messages. Furthermore, activity in the medial prefrontal cortex (MPFC), the superior temporal gyrus, and the middle frontal gyrus was identified as a predictor of message effectiveness, with the MPFC accounting for additional variance in behaviour change beyond that accounted for by self-report measures. Incorporating neurocognitive measures may provide a more comprehensive understanding of the processing and outcomes of health messages.
Abstract:
This thesis consists of an introduction, four research articles and an appendix. The thesis studies relations between two different approaches to the continuum limit of models of two-dimensional statistical mechanics at criticality. The approach of conformal field theory (CFT) can be thought of as the algebraic classification of some basic objects in these models; it has been successfully used by physicists since the 1980s. The other approach, Schramm-Loewner evolutions (SLEs), is a recently introduced set of mathematical methods for studying random curves or interfaces occurring in the continuum limit of the models. The first and second included articles argue, on the basis of statistical mechanics, what a plausible relation between SLEs and conformal field theory would be. The first article studies multiple SLEs, that is, several random curves simultaneously in a domain. The proposed definition is compatible with a natural commutation requirement suggested by Dubédat. The curves of a multiple SLE may form different topological configurations, "pure geometries". We conjecture a relation between the topological configurations and the CFT concepts of conformal blocks and operator product expansions. Example applications of multiple SLEs include crossing probabilities for percolation and the Ising model. The second article studies SLE variants that represent models with boundary conditions implemented by primary fields. The most well known of these, SLE(kappa, rho), is shown to be simple in terms of the Coulomb gas formalism of CFT. In the third article the space of local martingales for variants of SLE is shown to carry a representation of the Virasoro algebra. Finding this structure is guided by the relation of SLEs and CFTs in general, but the result is established in a straightforward fashion. This article, too, emphasizes multiple SLEs and proposes a possible way of treating pure geometries in terms of the Coulomb gas. The fourth article states results of applications of the Virasoro structure to the open questions of SLE reversibility and duality. Proofs of the stated results are provided in the appendix. The objective is an indirect computation of certain polynomial expected values. Provided that these expected values exist, in generic cases they are shown to possess the desired properties, thus giving support for both reversibility and duality.
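For reference, the basic object behind the SLE variants discussed here can be stated compactly; the display below is the standard textbook definition of chordal SLE(kappa), not a formula quoted from the thesis.

```latex
% Chordal Loewner evolution: the growing hulls K_t in the upper
% half-plane H are encoded by conformal maps g_t : H \ K_t -> H.
\begin{equation*}
\partial_t g_t(z) = \frac{2}{g_t(z) - W_t}, \qquad g_0(z) = z ,
\end{equation*}
% SLE(kappa) is obtained by taking the driving function to be a
% standard Brownian motion scaled by sqrt(kappa):
\begin{equation*}
W_t = \sqrt{\kappa}\, B_t .
\end{equation*}
```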
Abstract:
It is well known that an integrable (in the sense of Arnold-Jost) Hamiltonian system gives rise to quasi-periodic motion, with trajectories running on invariant tori. These tori foliate the whole phase space. If we perturb an integrable system, the Kolmogorov-Arnold-Moser (KAM) theorem states that, provided some non-degeneracy condition holds and the perturbation is sufficiently small, most of the invariant tori carrying quasi-periodic motion persist, becoming only slightly deformed. The measure of the persisting invariant tori increases as the size of the perturbation decreases. In the first part of the thesis we use a renormalization group (RG) scheme to prove the classical KAM result in the case of a non-analytic perturbation (the latter is only assumed to have continuous derivatives up to a sufficiently large order). We proceed by solving a sequence of problems in which the perturbations are analytic approximations of the original one, and finally show that the approximate solutions converge to a differentiable solution of the original problem. In the second part we use an RG scheme based on continuous scales, so that instead of solving an iterative equation as in the classical RG approach to KAM, we end up solving a partial differential equation. This allows us to reduce the complications of treating a sequence of iterative equations to an application of the Banach fixed point theorem in a suitable Banach space.
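For orientation, the standard setting of the KAM theorem can be written down briefly; the display below is the textbook formulation in action-angle variables, not notation quoted from the thesis.

```latex
% Nearly integrable Hamiltonian in action-angle variables (I, theta):
\begin{equation*}
H(I,\theta) = h(I) + \varepsilon f(I,\theta),
\qquad (I,\theta) \in \mathcal{O} \times \mathbb{T}^n ,
\end{equation*}
% with the classical non-degeneracy condition on the integrable part
\begin{equation*}
\det\!\left( \frac{\partial^2 h}{\partial I^2} \right) \neq 0 .
\end{equation*}
% For small epsilon, the tori whose frequency vector
% omega = partial h / partial I satisfies a Diophantine condition
% persist, only slightly deformed.
```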
Abstract:
Genetics, the science of heredity and variation in living organisms, has a central role in medicine, in breeding crops and livestock, and in studying fundamental topics of the biological sciences such as evolution and cell functioning. The field of genetics is currently developing rapidly because of recent advances in technologies by which molecular data can be obtained from living organisms. In order to extract as much information as possible from such data, the analyses need to be carried out using statistical models that are tailored to take account of the particular genetic processes. In this thesis we formulate and analyze Bayesian models for genetic marker data of contemporary individuals. The major focus is on modeling the unobserved recent ancestry of the sampled individuals (say, for tens of generations or so), which is carried out by using explicit probabilistic reconstructions of the pedigree structures accompanied by the gene flows at the marker loci. For such a recent history, the recombination process is the major genetic force that shapes the genomes of the individuals, and it is included in the model by assuming that the recombination fractions between the adjacent markers are known. The posterior distribution of the unobserved history of the individuals is studied conditionally on the observed marker data by using a Markov chain Monte Carlo (MCMC) algorithm. The example analyses consider estimation of the population structure, the relatedness structure (both at the level of whole genomes and at each marker separately), and haplotype configurations. For situations where the pedigree structure is partially known, an algorithm to create an initial state for the MCMC algorithm is given. Furthermore, the thesis includes an extension of the model for the recent genetic history to situations where a quantitative phenotype has also been measured from the contemporary individuals. In that case the goal is to identify positions on the genome that affect the observed phenotypic values. This task is carried out within the Bayesian framework, where the number and the relative effects of the quantitative trait loci are treated as random variables whose posterior distribution is studied conditionally on the observed genetic and phenotypic data. In addition, the thesis contains an extension of a widely used haplotyping method, the PHASE algorithm, to settings where genetic material from several individuals has been pooled together and the allele frequencies of each pool are determined in a single genotyping.
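A minimal sketch of the generic Metropolis-Hastings loop that underlies MCMC exploration of such posteriors. The real sampler moves over pedigree structures and locus-wise gene flows, so the abstract state, the symmetric proposal, and the toy one-parameter target below are illustrative assumptions only.

```python
import math, random

def metropolis_hastings(log_post, propose, init, n_iter=10_000):
    """Generic Metropolis-Hastings loop of the kind used to explore
    a posterior over unobserved histories.  Sketch only: state,
    proposal and posterior are abstract callables, and `propose`
    must be symmetric for this acceptance rule to be valid."""
    state, lp = init, log_post(init)
    samples = []
    for _ in range(n_iter):
        cand = propose(state)
        lp_cand = log_post(cand)
        if math.log(random.random()) < lp_cand - lp:   # accept/reject step
            state, lp = cand, lp_cand
        samples.append(state)
    return samples

# Toy stand-in target: Beta(6, 4)-shaped posterior of one frequency
post = lambda p: (5 * math.log(p) + 3 * math.log1p(-p)) if 0 < p < 1 else -math.inf
step = lambda p: p + random.uniform(-0.05, 0.05)
draws = metropolis_hastings(post, step, 0.5, n_iter=5000)
print(sum(draws[1000:]) / len(draws[1000:]))   # posterior mean, about 0.6
```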