966 results for Zeros of orthogonal polynomials
Abstract:
Abstract Background Fuel ethanol production from sustainable and largely abundant agro-residues such as sugarcane bagasse (SB) provides long-term geopolitical and strategic benefits. Pretreatment of SB is an indispensable step for improved saccharification of cell wall carbohydrates. Recently, ammonium hydroxide-based pretreatment technologies have gained significance as an effective and economical pretreatment strategy. We hypothesized that soaking in concentrated aqueous ammonia-mediated thermochemical pretreatment (SCAA) would overcome the native recalcitrance of SB by enhancing cellulase accessibility of the embedded holocellulosic microfibrils. Results In this study, we designed an experiment considering response surface methodology (Taguchi method, L8 orthogonal array) to optimize sugar recovery from ammonia-pretreated sugarcane bagasse obtained by the method of soaking in concentrated aqueous ammonia (SCAA-SB). Three independent variables (ammonia concentration, temperature and time) were selected at two levels with a center point. The ammonia-pretreated bagasse (SCAA-SB) was enzymatically hydrolysed by commercial enzymes (Celluclast 1.5 L and Novozym 188) using 15 FPU/g dry biomass and 17.5 units of β-glucosidase/g dry biomass at 50°C and 150 rpm for 96 h. A maximum of 28.43 g/l reducing sugars, corresponding to 0.57 g sugars/g pretreated bagasse, was obtained after enzymatic hydrolysis of the SCAA-SB prepared with a 20% v/v ammonia solution at 70°C for 24 h. Among the tested parameters, pretreatment time showed the maximum influence (p = 0.053282) while ammonia concentration showed the least influence (p = 0.612552) on sugar recovery. The changes in the ultrastructure and crystallinity of native SB, SCAA-SB and enzymatically hydrolysed SB were observed by scanning electron microscopy (SEM), X-ray diffraction (XRD) and solid-state 13C nuclear magnetic resonance (NMR) spectroscopy.
The enzymatic hydrolysates and solid SCAA-SB were subjected to ethanol fermentation under separate hydrolysis and fermentation (SHF) and simultaneous saccharification and fermentation (SSF), respectively, by Scheffersomyces (Pichia) stipitis NRRL Y-7124. Higher ethanol production (10.31 g/l; yield, 0.387 g/g) was obtained through SSF than through SHF (3.83 g/l; yield, 0.289 g/g). Conclusions SCAA treatment showed marked lignin removal from SB, thus improving the accessibility of cellulases towards the holocellulose substrate, as evidenced by efficient sugar release. The ultrastructure of SB after SCAA treatment and enzymatic hydrolysis of the holocellulose provided insights into the degradation process at the molecular level.
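The L8 orthogonal array underlying this Taguchi design can be generated from a 2^3 full factorial, with the four interaction columns filled in by XOR. The sketch below is illustrative only: the low level shown for each factor is hypothetical, since the abstract reports only the optimal settings (20% v/v, 70°C, 24 h).

```python
from itertools import product

def taguchi_l8():
    """Standard L8(2^7) orthogonal array: the columns are the three
    base factors of a 2^3 full factorial plus their XOR interactions."""
    return [(a, b, a ^ b, c, a ^ c, b ^ c, a ^ b ^ c)
            for a, b, c in product((0, 1), repeat=3)]

# Map the three pretreatment factors to the independent columns 0, 1, 3.
# Low levels are hypothetical; only the high (optimal) levels appear in
# the abstract.
LEVELS = {0: ("NH3 % v/v", (10, 20)),
          1: ("temp degC", (50, 70)),
          3: ("time h",    (12, 24))}

runs = [{name: vals[row[col]] for col, (name, vals) in LEVELS.items()}
        for row in taguchi_l8()]
```

The defining property is that any two columns of the array contain each pair of levels equally often, which is what lets the main effects (and the p-values quoted above) be estimated independently.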
Abstract:
Abstract Background The use of lignocellulosic constituents in biotechnological processes requires a selective separation of the main fractions (cellulose, hemicellulose and lignin). During dilute acid hydrolysis for hemicellulose extraction, several toxic compounds are formed by the degradation of sugars and lignin, which can inhibit microbial metabolism. The use of a detoxification step is therefore an important consideration for the improvement of fermentation processes based on hydrolysates. In this paper, we evaluated the application of Advanced Oxidative Processes (AOPs) for the detoxification of rice straw hemicellulosic hydrolysate with the goal of improving ethanol bioproduction by the yeast Pichia stipitis. Aiming to reduce the toxicity of the hemicellulosic hydrolysate, different treatment conditions were analyzed. The treatments were carried out according to a Taguchi L16 orthogonal array to evaluate the influence of Fe2+, H2O2, UV, O3 and pH on the concentration of aromatic compounds and on the fermentative process. Results The results showed that the AOPs were able to remove aromatic compounds (furan and phenolic compounds derived from lignin) without affecting the sugar concentration in the hydrolysate. Ozonation in alkaline medium (pH 8) in the presence of H2O2 (treatment A3) or UV radiation (treatment A5) was the most effective for hydrolysate detoxification and had a positive effect on the yeast fermentability of rice straw hemicellulosic hydrolysate. Under these conditions, the highest removals of total phenols (above 40%), low-molecular-weight phenolic compounds (above 95%) and furans (above 52%) were observed. In addition, the ethanol volumetric productivity of P. stipitis approximately doubled relative to the untreated hydrolysate. Conclusion These results demonstrate that AOPs are a promising method to reduce toxicity and improve the fermentability of lignocellulosic hydrolysates.
Abstract:
This paper presents a method for designing concrete membrane elements with an orthogonal reinforcement mesh subjected to compressive stress. Design methods, in general, define how to quantify the reinforcement necessary to carry the tensile stresses and verify that the compression in the concrete is within the strength limit. When the compression in the membrane is excessive, reinforcement subject to compression can be used; however, there is little information in the literature on how to design reinforcement for these cases. This paper therefore presents a procedure based on Baumann's criteria [1]. The strength limits used herein are those recommended by CEB [3]; however, a model is proposed in which this limit varies according to the tensile strain occurring perpendicular to the compression. This resistance model is based on concepts proposed by Vecchio and Collins [2].
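For context, the textbook ("Baumann-type") design equations for an orthogonal reinforcement mesh under in-plane membrane forces can be sketched as follows. This covers only the standard tension-reinforcement cases (tension positive, forces per unit length); the paper's extension to compression reinforcement and its strain-dependent concrete strength limit are not reproduced here.

```python
import math

def membrane_reinforcement(nx, ny, nxy):
    """Textbook design forces for an orthogonal mesh in a membrane
    element (tension positive, forces per unit length).  Returns
    (nsx, nsy, nc): the steel tension resultants in the x and y
    directions and the concrete compression resultant."""
    t = abs(nxy)
    if nx >= -t and ny >= -t:
        # Both directions need steel; the concrete carries a diagonal
        # compression field at 45 degrees.
        return nx + t, ny + t, -2.0 * t
    if nx < -t:
        # x direction needs no steel; the concrete strut rotates.
        nsy = ny - nxy ** 2 / nx
        if nsy > 0.0:
            return 0.0, nsy, nx + nxy ** 2 / nx
    if ny < -t:
        nsx = nx - nxy ** 2 / ny
        if nsx > 0.0:
            return nsx, 0.0, ny + nxy ** 2 / ny
    # No reinforcement required: report the principal compression,
    # to be checked against the concrete strength limit.
    nc = 0.5 * (nx + ny) - math.sqrt(0.25 * (nx - ny) ** 2 + nxy ** 2)
    return 0.0, 0.0, nc

# e.g. membrane_reinforcement(100.0, 50.0, 60.0) -> (160.0, 110.0, -120.0)
```

In the paper's setting, the interesting case is the last branch with |nc| beyond the CEB limit, where compression reinforcement would be added; that step is not shown here.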
Abstract:
Abstract Background Biofuels produced from sugarcane bagasse (SB) have shown promising results as a suitable alternative to gasoline. Biofuels provide unique strategic, environmental and socio-economic benefits. However, the production of biofuels from SB has a negative impact on the environment due to the use of harsh chemicals during pretreatment. Consecutive sulfuric acid-sodium hydroxide pretreatment of SB is an effective process which improves the accessibility of cellulase towards cellulose for sugar production. The alkaline hydrolysate of SB is a black liquor containing a high amount of dissolved lignin. Results This work evaluates the environmental impact of residues generated during the consecutive acid-base pretreatment of SB. An advanced oxidative process (AOP) based on the photo-Fenton reaction mechanism (Fenton reagent/UV) was used. Experiments were performed in batch mode following an L9 factorial design (Taguchi orthogonal array design of experiments), considering three operating variables: temperature (°C), pH, and Fenton reagent (Fe2+/H2O2) + ultraviolet. The reductions of total phenolics (TP) and total organic carbon (TOC) were the response variables. Among the tested conditions, experiment 7 (temperature, 35°C; pH, 2.5; Fenton reagent, 144 ml H2O2 + 153 ml Fe2+; UV, 16 W) revealed the maximum reduction in TP (98.65%) and TOC (95.73%). Parameters such as chemical oxygen demand (COD), biochemical oxygen demand (BOD), BOD/COD ratio, color intensity and turbidity also showed a significant change in the AOP-treated lignin solution compared with the native alkaline hydrolysate. Conclusion The AOP based on the Fenton reagent/UV reaction mechanism showed efficient removal of TP and TOC from sugarcane bagasse alkaline hydrolysate (lignin solution). To the best of our knowledge, this is the first report on the statistical optimization of the removal of TP and TOC from sugarcane bagasse alkaline hydrolysate employing a Fenton-reagent-mediated AOP.
Abstract:
We study orthogonal projections to 2-spaces of generic embedded hypersurfaces with boundary in R^4. To this end, we classify simple map germs from R^3 to the plane of codimension less than or equal to 4 whose source contains a distinguished plane which is preserved by coordinate changes. We also go into some detail on their geometrical properties in order to recognize the cases of codimension less than or equal to 1.
Abstract:
The thesis consists of three independent parts.

Part I: Polynomial amoebas. We study the amoeba of a polynomial, as defined by Gelfand, Kapranov and Zelevinsky. A central role in the treatment is played by a certain convex function which is linear in each complement component of the amoeba, which we call the Ronkin function. This function is used in two different ways. First, we use it to construct a polyhedral complex, which we call a spine, approximating the amoeba. Second, the Monge-Ampère measure of the Ronkin function has interesting properties which we explore. This measure can be used to derive an upper bound on the area of an amoeba in two dimensions. We also obtain results on the number of complement components of an amoeba, and consider possible extensions of the theory to varieties of codimension higher than 1.

Part II: Differential equations in the complex plane. We consider polynomials in one complex variable arising as eigenfunctions of certain differential operators, and obtain results on the distribution of their zeros. We show that in the limit when the degree of the polynomial approaches infinity, its zeros are distributed according to a certain probability measure. This measure has its support on the union of finitely many curve segments, and can be characterized by a simple condition on its Cauchy transform.

Part III: Radon transforms and tomography. This part is concerned with different weighted Radon transforms in two dimensions, in particular the problem of inverting such transforms. We obtain stability results for this inverse problem for rather general classes of weights, including weights of attenuation type with data acquisition limited to a 180 degree range of angles. We also derive an inversion formula for the exponential Radon transform, with the same restriction on the angle.
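For reference, the Ronkin function of Part I is usually defined (following Ronkin, and Passare and Rullgård) as the average of log|f| over the torus lying above a point of the amoeba's ambient space:

```latex
N_f(x) \;=\; \frac{1}{(2\pi i)^n}
\int_{\operatorname{Log}^{-1}(x)}
\log\lvert f(z_1,\dots,z_n)\rvert\,
\frac{dz_1}{z_1}\wedge\cdots\wedge\frac{dz_n}{z_n},
\qquad
\operatorname{Log}(z) = (\log|z_1|,\dots,\log|z_n|).
```

N_f is convex on R^n and affine-linear precisely on each complement component of the amoeba A_f, with gradient equal to the corresponding lattice order vector; the two-dimensional area bound mentioned above is Area(A_f) ≤ π² Area(Δ_f), with Δ_f the Newton polygon of f.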
Abstract:
In this thesis some multivariate spectroscopic methods for the analysis of solutions are proposed. Spectroscopy and multivariate data analysis form a powerful combination for obtaining both quantitative and qualitative information and it is shown how spectroscopic techniques in combination with chemometric data evaluation can be used to obtain rapid, simple and efficient analytical methods. These spectroscopic methods consisting of spectroscopic analysis, a high level of automation and chemometric data evaluation can lead to analytical methods with a high analytical capacity, and for these methods, the term high-capacity analysis (HCA) is suggested. It is further shown how chemometric evaluation of the multivariate data in chromatographic analyses decreases the need for baseline separation. The thesis is based on six papers and the chemometric tools used are experimental design, principal component analysis (PCA), soft independent modelling of class analogy (SIMCA), partial least squares regression (PLS) and parallel factor analysis (PARAFAC). The analytical techniques utilised are scanning ultraviolet-visible (UV-Vis) spectroscopy, diode array detection (DAD) used in non-column chromatographic diode array UV spectroscopy, high-performance liquid chromatography with diode array detection (HPLC-DAD) and fluorescence spectroscopy. The methods proposed are exemplified in the analysis of pharmaceutical solutions and serum proteins. In Paper I a method is proposed for the determination of the content and identity of the active compound in pharmaceutical solutions by means of UV-Vis spectroscopy, orthogonal signal correction and multivariate calibration with PLS and SIMCA classification. Paper II proposes a new method for the rapid determination of pharmaceutical solutions by the use of non-column chromatographic diode array UV spectroscopy, i.e. a conventional HPLC-DAD system without any chromatographic column connected. 
In Paper III an investigation is made of the ability of a control sample of known content and identity to diagnose and correct errors in multivariate predictions, something that, together with the use of multivariate residuals, can make it possible to use the same calibration model over time. In Paper IV a method is proposed for the simultaneous determination of serum proteins with fluorescence spectroscopy and multivariate calibration. Paper V proposes a method for the determination of chromatographic peak purity by means of PCA of HPLC-DAD data. In Paper VI PARAFAC is applied for the decomposition of DAD data of some partially separated peaks into the pure chromatographic, spectral and concentration profiles.
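The core chemometric operation recurring in these papers, PCA of a data matrix such as an HPLC-DAD time-by-wavelength block, can be sketched with a plain SVD. This is a generic illustration, not the software actually used in the thesis:

```python
import numpy as np

def pca(X, n_components):
    """PCA via SVD of the column-centered data matrix.
    Rows are samples (e.g. spectra at successive elution times),
    columns are variables (e.g. wavelengths)."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = U[:, :n_components] * s[:n_components]   # sample coordinates
    loadings = Vt[:n_components]                      # variable directions
    explained = (s ** 2) / (s ** 2).sum()             # variance fractions
    return scores, loadings, explained[:n_components]
```

A peak-purity check in the spirit of Paper V then amounts to asking whether one component explains essentially all the variance across a peak: a spectrally pure peak is rank one up to noise.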
Abstract:
In the present thesis a thorough multiwavelength analysis of a number of galaxy clusters known to be experiencing a merger event is presented. The bulk of the thesis consists of the analysis of deep radio observations of six merging clusters, which host extended radio emission on the cluster scale. A composite optical and X–ray analysis is performed in order to obtain a detailed and comprehensive picture of the cluster dynamics and possibly derive hints about the properties of the ongoing merger, such as the involved mass ratio, geometry and time scale. The combination of the high quality radio, optical and X–ray data allows us to investigate the implications of the ongoing merger for the cluster radio properties, focusing on the phenomenon of cluster-scale diffuse radio sources, known as radio halos and relics. A total of six merging clusters was selected for the present study: A3562, A697, A209, A521, RXCJ 1314.4–2515 and RXCJ 2003.5–2323. All of them were known, or suspected, to possess extended radio emission on the cluster scale, in the form of a radio halo and/or a relic. High sensitivity radio observations were carried out for all clusters using the Giant Metrewave Radio Telescope (GMRT) at low frequency (i.e. ≤ 610 MHz), in order to test for the presence of a diffuse radio source and/or analyse in detail the properties of the hosted extended radio emission. For three clusters, the GMRT information was combined with higher frequency data from Very Large Array (VLA) observations. A re–analysis of the optical and X–ray data available in the public archives was carried out for all sources. Proprietary deep XMM–Newton and Chandra observations were used to investigate the merger dynamics in A3562. Thanks to our multiwavelength analysis, we were able to confirm the existence of a radio halo and/or a relic in all clusters, and to connect their properties and origin to the reconstructed merging scenario for most of the investigated cases.
• The existence of a small-size and low-power radio halo in A3562 was successfully explained in the theoretical framework of the particle re–acceleration model for the origin of radio halos, which invokes the re–acceleration of pre–existing relativistic electrons in the intracluster medium by merger–driven turbulence.
• A giant radio halo was found in the massive galaxy cluster A209, which has likely undergone a past major merger and is currently experiencing a new merging process in a direction roughly orthogonal to the old merger axis. A giant radio halo was also detected in A697, whose optical and X–ray properties may be suggestive of a strong merger event along the line of sight. Given the cluster mass and the kind of merger, the existence of a giant radio halo in both clusters is expected in the framework of the re–acceleration scenario.
• A radio relic was detected at the outskirts of A521, a highly dynamically disturbed cluster which is accreting a number of small mass concentrations. A possible explanation for its origin requires the presence of a merger–driven shock front at the location of the source. The spectral properties of the relic may support such an interpretation and require a Mach number M ≲ 3 for the shock.
• The galaxy cluster RXCJ 1314.4–2515 is exceptional and unique in hosting two peripheral relic sources, extending on the Mpc scale, and a central small-size radio halo. The existence of these sources requires the presence of an ongoing energetic merger. Our combined optical and X–ray investigation suggests that a strong merging process between two or more massive subclumps may be ongoing in this cluster. Thanks to forthcoming optical and X–ray observations, we will reconstruct in detail the merger dynamics and derive its energetics, to be related to the energy necessary for the particle re–acceleration in this cluster.
• Finally, RXCJ 2003.5–2323 was found to possess a giant radio halo.
This source is among the largest, most powerful and most distant (z = 0.317) halos imaged so far. Unlike other radio halos, it shows a very peculiar morphology, with bright clumps and filaments of emission, whose origin might be related to the relatively high redshift of the hosting cluster. Although very little optical and X–ray information is available about the cluster dynamical stage, the results of our optical analysis suggest the presence of two massive substructures which may be interacting with the cluster. Forthcoming observations in the optical and X–ray bands will allow us to confirm the expected high merging activity in this cluster. Throughout the present thesis a cosmology with H0 = 70 km s^-1 Mpc^-1, Ωm = 0.3 and ΩΛ = 0.7 is assumed.
Abstract:
[EN]A natural generalization of the classical Moore-Penrose inverse is presented. The so-called S-Moore-Penrose inverse of an m × n complex matrix A, denoted by As, is defined for any linear subspace S of the matrix vector space C^(n×m). The S-Moore-Penrose inverse As is characterized using either the singular value decomposition or (for the nonsingular square case) the orthogonal complements with respect to the Frobenius inner product. These results are applied to the preconditioning of linear systems based on Frobenius norm minimization and to the linearly constrained linear least squares problem.
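The classical Moore-Penrose inverse that the paper generalizes is computed from the SVD exactly as in the characterization cited; a minimal numpy sketch (the generalized S-version, restricted to a subspace S, is not implemented here):

```python
import numpy as np

def moore_penrose(A, rtol=1e-12):
    """Moore-Penrose inverse via the SVD A = U diag(s) V^H:
    invert the singular values above a relative tolerance."""
    U, s, Vh = np.linalg.svd(A, full_matrices=False)
    s_inv = np.where(s > rtol * s.max(), 1.0 / s, 0.0)
    return Vh.conj().T @ (s_inv[:, None] * U.conj().T)
```

The result satisfies the four Penrose conditions A A⁺ A = A, A⁺ A A⁺ = A⁺, (A A⁺)^H = A A⁺ and (A⁺ A)^H = A⁺ A, which is what singles it out among generalized inverses.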
Abstract:
[EN]In this work we develop a procedure to deform a given surface triangulation to obtain its alignment with interior curves. These curves are defined by splines in a parametric space and subsequently mapped to the surface triangulation. We have restricted our study to orthogonal mapping, so we require the curves to be contained in a patch of the surface that can be orthogonally projected onto a plane (our parametric space). For example, the curves can represent interfaces between different materials or boundary conditions, internal boundaries or feature lines. Another setting in which this procedure can be used is the adaptation of a reference mesh to changing curves in the course of an evolutionary process...
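The patch-to-parametric-space map the procedure relies on is a plain orthogonal projection onto a plane; a minimal sketch (illustrative, not the authors' code):

```python
import numpy as np

def project_to_plane(points, origin, normal):
    """Orthogonally project 3-D points onto the plane through `origin`
    with normal `normal`; returns the projected 3-D coordinates."""
    points = np.asarray(points, dtype=float)
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    dist = (points - origin) @ n        # signed distances to the plane
    return points - np.outer(dist, n)
```

The requirement stated in the abstract (the patch must project orthogonally onto a plane) is exactly the condition that this map be injective on the patch, so the spline curves can be pulled back and forth between the plane and the triangulation.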
Abstract:
[EN]We present a new strategy for constructing tensor product spline spaces over quadtree and octree T-meshes. The proposed technique includes some simple rules for inferring local knot vectors to define spline blending functions. For a given T-mesh, these rules yield a set of cubic spline functions that span a space with nice properties: it can reproduce cubic polynomials, the functions are C2-continuous and linearly independent, and spaces spanned by nested T-meshes are also nested. In order to span spaces with these properties by applying the proposed rules, the T-mesh need only fulfill the requirement of being a 0-balanced quadtree or octree...
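A blending function defined by a local knot vector is evaluated with the standard Cox-de Boor recursion; the sketch below shows the generic recursion only, not the authors' rules for inferring the knot vectors from a T-mesh:

```python
def bspline_basis(i, p, u, knots):
    """Cox-de Boor recursion: value at parameter u of the degree-p
    B-spline basis function with local knot vector knots[i..i+p+1]."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    val = 0.0
    if knots[i + p] > knots[i]:
        val += (u - knots[i]) / (knots[i + p] - knots[i]) \
               * bspline_basis(i, p - 1, u, knots)
    if knots[i + p + 1] > knots[i + 1]:
        val += (knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1]) \
               * bspline_basis(i + 1, p - 1, u, knots)
    return val
```

On a uniform knot vector the cubic (p = 3) functions exhibit the properties quoted above locally: they are C2-continuous and, where all four overlapping cubics are present, sum to one.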
Abstract:
Investigations on the formation and specification of neural precursor cells in the central nervous system of the Drosophila melanogaster embryo. Specification of a unique cell fate during the development of a multicellular organism is often a function of a cell's position. The Drosophila central nervous system (CNS) provides an ideal system to dissect the signalling events during development that lead to cell-specific patterns. The different cell types in the CNS are formed from relatively few precursor cells, the neuroblasts (NBs), which delaminate from the neurogenic region of the ectoderm. The delamination occurs in five waves, S1-S5, finally leading to a subepidermal layer consisting of about 30 NBs, each with a unique identity, arranged in a stereotyped spatial pattern in each hemisegment. This identity depends on several factors, such as the concentrations of various morphogens, cell-cell interactions and long-range signals present at the position and time of the neuroblast's birth. The early NBs, delaminating during S1 and S2, form an orthogonal array of four rows (2/3, 4, 5, 6/7) and three columns (medial, intermediate, and lateral). However, this three-column and four-row arrangement is only transitory during the early stages of neurogenesis and is obscured by late-emerging (S3-S5) neuroblasts (Doe and Goodman, 1985; Goodman and Doe, 1993). Therefore the aim of my study has been to identify novel genes which play a role in the formation or specification of late delaminating NBs. In this study the gene anterior open, or yan, was picked up in a genetic screen to identify novel and as yet unidentified genes involved in late neuroblast formation and specification. I have shown that the gene yan is responsible for maintaining the cells of the neuroectoderm in an undifferentiated state by interfering with the Notch signalling mechanism.
Secondly, I have studied the function and interactions of segment polarity genes within a certain neuroectodermal region, namely the engrailed (en) expressing domain, with regard to the fate specification of a set of late neuroblasts, namely NB 6-4 and NB 7-3. I have dissected the regulatory interactions of the segment polarity genes wingless (wg), hedgehog (hh) and engrailed (en), which maintain each other's expression, to show that En is a prerequisite for neurogenesis, and I show that the interplay of the segmentation genes naked (nkd) and gooseberry (gsb), both of which are targets of wingless (wg) activity, leads to differential commitment of the NB 7-3 and NB 6-4 cell fates. I have shown that in the absence of either nkd or gsb one NB fate is replaced by the other. However, the temporal sequence of delamination is maintained, suggesting that the formation and specification of these two NBs are under independent control.
Abstract:
In the search for potent pharmacological agents, combinatorial chemistry has gained great importance over the last decade as a means of generating a broad spectrum of compounds for biological testing within a short time. As scaffolds, carbohydrates are interesting candidates for combinatorial synthesis, since they provide several derivatization positions in a stereochemically defined manner. This allows the spatially unambiguous presentation of attached pharmacophoric groups, as is desirable for use as peptide mimetics. The targeted derivatization of individual hydroxyl functions requires an orthogonal protecting-group pattern that is sufficiently stable under the reaction conditions prevailing during the combinatorial synthesis. Furthermore, a suitable linker system has to be found in order to enable solid-phase synthesis and thus automation. To minimize the five mutually orthogonally stable protecting groups required in the case of hexoses such as galactose, the present work started from galactal, in which only three hydroxyl functions need to be differentiated; the galactose skeleton can subsequently be restored. The differentiation was achieved by means of hydrolases through regioselective acylation and deacylation reactions, with immobilized enzymes also being used. In this way an orthogonal protecting-group pattern could be built up sequentially which also exhibits the necessary stabilities towards the other, in some cases suitably modified, reaction conditions. For attachment to a solid phase, a linker cleavable by metathesis was developed, which was attached via the anomeric position with restoration of the galactose skeleton. An oxidatively cleavable and a photolabile linker system were also tested.
Abstract:
The increasing precision of current and future experiments in high-energy physics requires a corresponding increase in the accuracy of the calculation of theoretical predictions, in order to find evidence for possible deviations from the generally accepted Standard Model of elementary particles and interactions. Calculating the experimentally measurable cross sections of scattering and decay processes to a higher accuracy directly translates into including higher order radiative corrections in the calculation. The large number of particles and interactions in the full Standard Model results in an exponentially growing number of Feynman diagrams contributing to any given process in higher orders. Additionally, the appearance of multiple independent mass scales makes even the calculation of single diagrams non-trivial. For over two decades now, the only way to cope with these issues has been to rely on the assistance of computers. The aim of the xloops project is to provide the necessary tools to automate the calculation procedures as far as possible, including the generation of the contributing diagrams and the evaluation of the resulting Feynman integrals. The latter is based on the techniques developed in Mainz for solving one- and two-loop diagrams in a general and systematic way using parallel/orthogonal space methods. These techniques involve a considerable amount of symbolic computation. During the development of xloops it was found that conventional computer algebra systems were not a suitable implementation environment. For this reason, a new system called GiNaC has been created, which allows the development of large-scale symbolic applications in an object-oriented fashion within the C++ programming language. This system, which is now also in use for other projects besides xloops, is the main focus of this thesis. The implementation of GiNaC as a C++ library sets it apart from other algebraic systems.
Our results prove that a highly efficient symbolic manipulator can be designed in an object-oriented way, and that having a very fine granularity of objects is also feasible. The xloops-related parts of this work consist of a new implementation, based on GiNaC, of functions for calculating one-loop Feynman integrals that already existed in the original xloops program, as well as the addition of supplementary modules belonging to the interface between the library of integral functions and the diagram generator.
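GiNaC itself is a C++ library; purely as a flavor of the kind of symbolic operations such a system must support (expansion, differentiation, series extraction), here is a sketch using sympy in Python as a stand-in, not the thesis's actual code:

```python
import sympy as sp

x, eps = sp.symbols("x epsilon")

# Expansion and differentiation: bread-and-butter operations when
# assembling and simplifying Feynman-integral integrands.
expanded = sp.expand((x + 1) ** 3)    # x**3 + 3*x**2 + 3*x + 1
deriv = sp.diff(expanded, x)          # 3*x**2 + 6*x + 3

# Laurent expansion in a small parameter, of the kind needed when
# isolating divergences of regularized loop integrals.
series = sp.series(1 / (eps * (1 + eps)), eps, 0, 2)
```

The design point the thesis argues, that such a manipulator can live as an ordinary object-oriented library inside a general-purpose language rather than in a standalone algebra system, is exactly what both GiNaC (in C++) and sympy (in Python) demonstrate.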