970 results for Projection matrices
Abstract:
Hyperspectral remote sensing exploits the electromagnetic scattering patterns of different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions.
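The linear mixing model just described can be sketched in a few lines. All dimensions, signatures and abundances below are illustrative values of ours, not taken from any data set discussed in the abstract:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (hypothetical): L spectral bands, p endmembers
L, p = 50, 3
M = rng.random((L, p))             # columns are endmember signatures

# Abundance fractions: nonnegative and summing to one (the simplex constraint)
a = rng.random(p)
a /= a.sum()

noise = 0.001 * rng.standard_normal(L)
x = M @ a + noise                  # observed mixed-pixel spectrum
```

Under this model, recovering M and the abundances a from a collection of observed spectra x is precisely the unmixing problem.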
Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, however, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors by an unmixing matrix that minimizes the mutual information among sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is attained only when the sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33]. Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref. [37] is also of the MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures.
The MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to ensure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requirement that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to the extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data.
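The skewer-projection counting step of PPI described above can be sketched as follows. This is a toy illustration of the counting idea only (function name and defaults are ours); it omits the MNF preprocessing and the thresholding used by the real algorithm:

```python
import numpy as np

def ppi_scores(X, n_skewers=1000, seed=0):
    """Toy sketch of the PPI counting step.

    X: (n_pixels, n_bands) spectral vectors; dimensionality reduction
    (e.g. MNF) is assumed to have been applied beforehand.
    Returns, per pixel, the number of times it is the extreme of the
    projection onto a random skewer direction.
    """
    rng = np.random.default_rng(seed)
    scores = np.zeros(X.shape[0], dtype=int)
    for _ in range(n_skewers):
        skewer = rng.standard_normal(X.shape[1])  # random direction
        proj = X @ skewer
        scores[proj.argmax()] += 1   # extreme in the +skewer direction
        scores[proj.argmin()] += 1   # extreme in the -skewer direction
    return scores
```

Pixels with the highest scores are taken as the purest candidates; mixed pixels, being convex combinations of the pure ones, are (generically) never extremes.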
ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists of flat-fielding the spectra. Next, the exemplar selection module is used to select the spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when their spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46]. In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices; the latter estimate is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data in the least-squares sense [48, 49]. We note, however, that VCA works both with projected and with unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data.
The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of this projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR; yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Section 19.3 and Section 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
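The iterative projection just described can be sketched as follows. This is a simplified illustration of the idea (project onto a random direction orthogonal to the span of the endmembers found so far, keep the extreme), not the published VCA algorithm; all names are ours:

```python
import numpy as np

def extract_endmembers(X, p, seed=0):
    """Sketch of the iterative orthogonal-projection step.

    X: (n_pixels, n_bands) spectral vectors containing pure pixels.
    Returns the indices of p candidate endmember pixels.
    """
    rng = np.random.default_rng(seed)
    n_pixels, n_bands = X.shape
    indices = []
    E = np.zeros((n_bands, 0))          # endmember signatures found so far
    for _ in range(p):
        # Projector onto the orthogonal complement of span(E)
        P = np.eye(n_bands) - E @ np.linalg.pinv(E)
        direction = P @ rng.standard_normal(n_bands)
        proj = X @ direction
        k = int(np.argmax(np.abs(proj)))  # extreme of the projection
        indices.append(k)
        E = np.column_stack([E, X[k]])
    return indices
```

Because mixed pixels are convex combinations of the pure ones, the extreme of each projection is (generically) a pure pixel, and a previously selected endmember projects to zero and cannot be chosen twice.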
Abstract:
Let and be matrices over an algebraically closed field. Let be elements of such that and . We give a necessary and sufficient condition for the existence of matrices and similar to and , respectively, such that has eigenvalues .
Abstract:
Recently, operational matrices were adapted for solving several kinds of fractional differential equations (FDEs). The use of numerical techniques in conjunction with operational matrices of some orthogonal polynomials, for the solution of FDEs on finite and infinite intervals, produced highly accurate solutions for such equations. This article discusses spectral techniques based on operational matrices of fractional derivatives and integrals for solving several kinds of linear and nonlinear FDEs. More precisely, we present the operational matrices of fractional derivatives and integrals, for several polynomials on bounded domains, such as the Legendre, Chebyshev, Jacobi and Bernstein polynomials, and we use them with different spectral techniques for solving the aforementioned equations on bounded domains. The operational matrices of fractional derivatives and integrals are also presented for orthogonal Laguerre and modified generalized Laguerre polynomials, and their use with numerical techniques for solving FDEs on a semi-infinite interval is discussed. Several examples are presented to illustrate the numerical and theoretical properties of various spectral techniques for solving FDEs on finite and semi-infinite intervals.
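The operational-matrix idea can be summarized in one generic relation (the notation here is illustrative and not tied to any particular basis from the article): expanding the unknown in a polynomial basis Φ(x) = [φ₀(x), …, φ_N(x)]ᵀ, the fractional derivative and integral of the basis are again (approximately) linear in the basis,

```latex
u(x) \approx \sum_{k=0}^{N} c_k\,\phi_k(x) = C^{T}\Phi(x),
\qquad
D^{\nu}\Phi(x) \approx \mathbf{D}^{(\nu)}\Phi(x),
\qquad
I^{\nu}\Phi(x) \approx \mathbf{P}^{(\nu)}\Phi(x),
```

so that D^ν u ≈ Cᵀ D^(ν) Φ(x). Substituting these relations into the FDE and collocating (or projecting) then reduces the differential equation to an algebraic system in the unknown coefficients C.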
Abstract:
Dissertation for obtaining the degree of Doctor in Chemical and Biochemical Engineering
Abstract:
There is a need to develop viable techniques for the removal and recovery of organic and inorganic compounds from environmental matrices, due to their ecotoxicity, regulatory obligations, or their potential as secondary raw materials. In this dissertation, electro-removal and electro-recovery techniques were applied to five different contaminated environmental matrices, aiming at phosphorus (P) recovery and/or contaminant removal. In a first phase, the electrokinetic process (EK) was carried out in soils for the removal of (i) metalloids and (ii) organic contaminants (OCs). In the case of the As- and Sb-contaminated mine soil, the EK process was additionally coupled with phytotechnologies. In a second phase, the electrodialytic process (ED) was applied to wastes aiming at P recovery and the simultaneous removal of (iii) toxins from membrane concentrate, (iv) heavy metals from sewage sludge ash (SSA), and (v) OCs from sewage sludge (SS). EK-enhanced phytoremediation proved viable for the remediation of soils contaminated with metalloids: although remediation was low, it combines the advantages of both technologies while allowing site management. EK also proved to be an effective remediation technology for the removal and degradation of emerging OCs from two types of soil. Aiming at P recovery and contaminant removal, different ED cell set-ups were tested. For the membrane concentrates, the best P recovery was achieved in a three-compartment (3c) cell, but the highest toxin removal was obtained in a two-compartment (2c) cell, placing the matrix in the cathode end. In the case of SSA, the best approach for simultaneous P recovery and heavy metal removal was to use a 2c-cell placing the matrix in the anode end. However, for simultaneous P recovery and OC removal, SS should be placed in the cathode end of a 2c-cell. Overall, the data support that the selection of the cell design should be made case by case.
Abstract:
Innovative composite materials made of continuous fibers embedded in mortar matrices have recently received attention for the externally bonded reinforcement of masonry structures. In this regard, the application of natural fibers for strengthening repair mortars is attractive due to their low specific weight, sustainability and recyclability. This paper presents an experimental characterization of the tensile and pull-out behavior of natural fibers embedded in two different mortar-based matrices. A lime-based and a geopolymer-based mortar are used as sustainable and innovative matrices. The obtained experimental results and observations are presented and discussed.
Abstract:
Understanding the behavior of complex composite materials during mixing procedures is fundamental in several industrial processes. For instance, polymer composites are usually manufactured by dispersing fillers in polymer melt matrices. The success of the filler dispersion depends both on the complex flow patterns generated and on the rheological behavior of the polymer melt. Consequently, a numerical tool that allows modeling both the fluid and the particles would be very useful to increase process insight. Nowadays there are computational tools that allow modeling the behavior of filled systems, taking into account both the behavior of the fluid (Computational Rheology) and of the particles (Discrete Element Method). One example is the DPMFoam solver of the OpenFOAM® framework, where the volume-fraction-averaged momentum and mass conservation equations are used to describe the fluid (continuous phase) rheology, and Newton's second law of motion is used to compute the particles' (discrete phase) movement. In this work the referred solver is extended to take into account the elasticity of the polymer melts in the continuous phase. The solver capabilities are illustrated by studying the effect of the fluid rheology on the filler dispersion, taking into account different fluid types (generalized Newtonian or viscoelastic) and particle volume fractions and sizes. The results obtained are used to evaluate the relevance of considering the fluid's complex rheology for the prediction of the composite's morphology.
Abstract:
Understanding the mixing process of complex composite materials is fundamental in several industrial processes. For instance, the dispersion of fillers in polymer melt matrices is commonly employed to manufacture polymer composites, using a twin-screw extruder. The effectiveness of the filler dispersion depends not only on the complex flow patterns generated, but also on the polymer melt's rheological behavior. Therefore, a numerical tool able to predict mixing, taking into account both the fluid and particle phases, would be very useful to increase process insight and thus provide useful guidelines for its optimization. In this work, a new Eulerian-Lagrangian numerical solver is developed in the OpenFOAM® computational library and used to better understand the mechanisms determining the dispersion of fillers in polymer matrices. Particular attention is given to the effect of the rheological model used to represent the fluid behavior on the level of dispersion obtained. For the Eulerian phase, the volume-fraction-averaged governing equations (conservation of mass and linear momentum) are used to describe the fluid behavior. For the Lagrangian phase, Newton's second law of motion is used to compute the particles' trajectories and velocities. To study the effect of fluid behavior on the filler dispersion, several systems are modeled considering different fluid types (generalized Newtonian or viscoelastic) and particle volume fractions and sizes. The results obtained are used to correlate the fluid and particle characteristics with the effectiveness of mixing and the morphology obtained.
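The Lagrangian update mentioned above (Newton's second law per particle) can be sketched with a minimal explicit step. The linear-drag force model, names and parameters here are our simplification; the actual solver's force models (Stokes drag corrections, lift, contact forces, etc.) are considerably richer:

```python
import numpy as np

def advance_particle(x, v, u_fluid, dt, m, drag_coeff):
    """One explicit time step for a filler particle.

    A simple linear drag pulls the particle velocity toward the local
    fluid velocity u_fluid; Newton's second law is integrated with an
    explicit Euler step (all parameters are illustrative).
    """
    force = drag_coeff * (u_fluid - v)   # linear drag force
    v_new = v + (force / m) * dt         # Newton's second law
    x_new = x + v_new * dt               # advect the particle
    return x_new, v_new
```

Starting a particle from rest in a uniform fluid velocity field, repeated steps relax its velocity toward the fluid's, which is the qualitative behavior a one-way-coupled drag model should reproduce.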
Abstract:
How much can be said about the location of the eigenvalues of a symmetric tridiagonal matrix just by looking at its diagonal entries? We use classical results on the eigenvalues of symmetric matrices to show that the diagonal entries are bounds for some of the eigenvalues regardless of the size of the off-diagonal entries. Numerical examples are given to illustrate that our arithmetic-free technique delivers useful information on the location of the eigenvalues.
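The elementary version of this observation follows from the Rayleigh quotient: for a symmetric matrix A, λ_min ≤ e_iᵀ A e_i = a_ii ≤ λ_max, so the diagonal entries are bracketed by the extreme eigenvalues no matter how large the off-diagonal entries are. A quick numerical check (our illustration, not the paper's technique; the matrix values are arbitrary):

```python
import numpy as np

# A symmetric tridiagonal matrix with deliberately large off-diagonal
# entries (illustrative values, not from the paper).
diag = np.array([1.0, 4.0, -2.0, 7.0, 3.0])
off = np.full(diag.size - 1, 100.0)
A = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

eigs = np.linalg.eigvalsh(A)   # eigenvalues in ascending order

# Rayleigh-quotient bound: every diagonal entry a_ii = e_i^T A e_i lies
# between the extreme eigenvalues, so min(diag) is an upper bound for the
# smallest eigenvalue and max(diag) a lower bound for the largest.
assert eigs[0] <= diag.min()
assert eigs[-1] >= diag.max()
```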
Abstract:
Here, we define and consider (linear) TP-directions and TP-paths for a totally nonnegative matrix, in an effort to more deeply understand perturbation of a TN matrix to a TP matrix. We give circumstances in which a TP-direction exists and an example to show that they do not always exist. A strategy to give (nonlinear) TP-paths is given (and applied to this example). A long term goal is to understand the sparsest TP-perturbation for application to completion problems.
Abstract:
Lactic acid bacteria (LAB) play a key role in the biopreservation of a wide range of fermented food products, such as yogurt, cheese, fermented milks, meat, fish, vegetables (sauerkraut, olives and pickles), certain beer brands, wines and silage, allowing their safe consumption, which has earned these bacteria GRAS (Generally Recognised as Safe) status. Besides that, the use of LAB in food and feed is a promising strategy to reduce exposure to dietary mycotoxins, improving shelf life and reducing health risks, given the unique mycotoxin-decontaminating characteristics of some LAB. Mycotoxins have carcinogenic, mutagenic, teratogenic, neurotoxic and immunosuppressive effects on animals and humans, the most important being ochratoxin A (OTA), aflatoxins (AFB1), trichothecenes, zearalenone (ZEA), fumonisins (FUM) and patulin. In previous work by our group, OTA biodegradation was observed in some strains of Pediococcus parvulus isolated from Douro wines. The aim of this study was therefore to extend the screening of this biodetoxification ability to further mycotoxins besides OTA, namely AFB1 and ZEA. This ability was checked in a collection of LAB isolated from vegetable (wine, olives, fruits and silage) and animal (milk and dairy products, sausages) sources. All LAB strains were characterized phenotypically (Gram, catalase) and genotypically. Molecular characterization of all LAB strains was performed using genomic fingerprinting by MSP-PCR with (GTG)5 and csM13 primers. The identification of the isolates was confirmed by 16S rDNA sequencing. To study the ability of the LAB strains to degrade OTA, AFB1 and ZEA, MRS broth medium was supplemented with 2.0 µg/mL of each mycotoxin. For each strain, 2 mL of MRS supplemented with the mycotoxins was inoculated in triplicate with 10^9 CFU/mL. The culture media and bacterial cells were extracted by the addition of an equal volume of acetonitrile/methanol/acetic acid (78:20:2 v/v/v) to the culture tubes.
A 2 mL sample was then collected and filtered into a clean 2 mL vial using PP filters with 0.45 µm pores. The samples were preserved at 4 °C until HPLC analysis. Among the LAB tested, 10 strains isolated from milk were able to eliminate AFB1, belonging to Lactobacillus casei (7), Lb. paracasei (1), Lb. plantarum (1) and Leuconostoc mesenteroides (1). Two strains of Enterococcus faecium and one of Ec. faecalis from sausage eliminated ZEA. Concerning strains of vegetal origin, one Lb. plantarum isolated from elderberry fruit, and one Lb. buchneri and one Lb. parafarraginis, both isolated from silage, eliminated ZEA. Two other strains of Lb. plantarum from silage were able to degrade both ZEA and OTA, and one Lb. buchneri showed activity over AFB1. These enzymatic activities were also verified genotypically through specific gene PCR and later confirmed by sequencing analysis. In conclusion, given the ability of some LAB strains isolated from different sources to eliminate OTA, AFB1 and ZEA, one can recognize their potential biotechnological application to reduce the health hazards associated with these mycotoxins. They may be suitable as silage inoculants, as feed additives, or even in the food industry.
Abstract:
PhD in Chemical and Biological Engineering
Abstract:
Animal breeding aims to genetically modify an animal population for a given economic purpose. The tools used are genetic evaluation, selection, and/or mating schemes. Genetic evaluation is carried out by obtaining information (markers) that predicts the genotype of the animal, which cannot be recorded directly. These markers are the phenotype, or physical information about the animal, and some predictor of the genotype, which may be a chemical marker or the kinship relation with another animal (sire-dam, siblings, progeny). The use of genetic markers to determine genealogy is currently under development but remains extremely costly at certain production scales. Moreover, the electronic devices tested so far have not proved very efficient; this project therefore proposes to develop an electronic animal-identification system that allows genealogy matrices to be built for the genetic evaluation of breeding stock of different species under extensive production conditions. The project will be carried out jointly by specialists in animal breeding and electronic engineers, as a multidisciplinary contribution to solving the problem of establishing filiation by registering the male-female pair during natural service of the different species and the dam-offspring pair at some point after birth. The proposed solution consists of the implementation of active electronic devices (a battery-powered microcontroller and radio-frequency transceiver) arranged as nodes of a wireless sensor network.
These devices will be fitted to the study animals by means of collars, and through the continuous exchange of information among them a data matrix will be generated that later allows the relationship between the animals to be established by determining how often they are in close proximity to one another.
Abstract:
The general objective of this project is to study the recovery of nutraceuticals and antioxidants from oregano and rosemary, both by avoiding their loss during processing and through their extraction and concentration using efficient, non-polluting technologies. The essential oils and oleoresins will be fractionated by molecular distillation to obtain products with a high nutraceutical content. The extraction and separation of nutraceuticals will be studied both theoretically and experimentally. Experimental determinations will be carried out for each operation, at different values of the most relevant operating variables, in order to study their influence and optimize the operations. Artificial Neural Network models will be used for the modeling and optimization of the implemented operations.
Abstract:
The biochemical reactions that occur as a consequence of food processing and storage improve food safety, sensory properties and shelf life. However, heat treatment and exposure to light and oxygen can cause oxidative damage to lipids and proteins. Oxidative processes in complex matrices have distinctive features that do not appear when the components are subjected to oxidation individually. The working hypothesis is that protein oxidation in complex food matrices alters the structure and functional properties of the proteins, and that the modifications produced vary with the processing conditions and the chemical composition of the food. Our studies aim to show that the oxidative state of the proteins in a food is an important parameter for evaluating the functional, sensory and nutritional properties of a dairy product. The general objective of the project is to study the oxidation processes of complex food matrices (milk, honey) and their relation to different processes and materials used in industry. That is, we propose to study the functional and biological consequences (nutritional quality, coagulation) of protein oxidation in "in vitro" experimental models and in commercial products. 1. To study protein peroxidation in whole and skim milk subjected to the various technological processes of laboratory-scale milk and cheese production. The same experiments will be carried out with serum albumin and with proteins isolated from milk whey, to compare a complex matrix with a simple one. 2. To determine the relation between oxidation and the protein composition of milk, and the changes in the isolated protein fractions (caseins and beta-lactoglobulin). 3. To analyze the impact of technological processes at the primary-production level (protein composition and oxidation state) on markers of inflammation (somatic cell count and C-reactive protein) and of redox state (antioxidant capacity of dairy products and protein carbonyl levels). 4. To compare the chemical composition and oxidation state of milk from the three regions (Buenos Aires, Santa Fe and Córdoba) that make up the Argentine dairy basin. This objective will be carried out jointly with the members of our research group who work in the Quality Control Laboratory of the Escuela Superior de Lechería. 5. To determine the secondary metabolites of unifloral honeys proposed as responsible for their antioxidant capacity (polyphenols) and as indicators of their botanical origin. 6. To assess the total antioxidant capacity of unifloral honeys. 7. To validate the analytical and semiquantitative methods used, and to be used, in this project, taking into account the matrix effects typical of biological fluids and mixtures. The study of oxidative modifications of complex matrices is important both from the standpoint of basic and applied knowledge. We believe the present project will provide knowledge about the oxidative pathways of proteins in complex matrices that can be used to design production strategies aimed at reducing the deterioration of milk quality due to exposure to radiant energy. Part of the experience gained by the group has already been applied to solving problems of oxidation and deterioration of food quality. In addition, the project will help to resolve the paradox that exists in the field regarding the oxidant/antioxidant properties of polyphenols and the relation between these and the oxidative state of a food.
The biochemical reactions that occur as a result of food treatment and storage improve food safety, sensory properties and shelf life. However, heat treatment and exposure to light and oxygen can cause oxidative damage to lipids and proteins. Oxidative processes in complex matrices display distinctive features that do not appear when the components are individually subjected to oxidation. The hypothesis is that protein oxidation in complex food matrices alters the structure and functional properties of proteins, and that the modifications vary according to the process conditions and food composition. The main goal is to study the oxidation of complex food matrices (milk, honey) with different processes and materials used in the industry. The specific aims are: 1. To study protein oxidation in whole and skim milk subjected to various technological processes. The same experiments will be done with serum albumin and isolated whey proteins to compare complex and simple matrices. 2. To determine the relationship between oxidation and milk protein composition, and changes in casein and beta-lactoglobulin. 3. To analyze the impact of technological processes at the primary-production level on markers of inflammation and redox state (antioxidant capacity and protein carbonyls). 4. To compare the chemical composition and oxidation state of milk. 5. To determine the secondary metabolites of honey responsible for its antioxidant capacity. 6. To evaluate the total antioxidant capacity of unifloral honey. This project will provide knowledge about the oxidative pathways of proteins in complex matrices that can be used to design production strategies aimed at reducing the deterioration of milk quality. It will also help to discern the paradox that exists regarding the oxidant/antioxidant properties of polyphenols and the relationship between these and the oxidative status of a food.