15 results for Scattering
in Repositório Científico do Instituto Politécnico de Lisboa - Portugal
Abstract:
To determine self-consistently the time evolution of particle size and number density, in situ multi-angle polarization-sensitive laser light scattering was used. Cross-polarization intensities (incident and scattered light intensities with opposite polarization) measured at 135 degrees, together with ex situ transmission electron microscopy analysis, demonstrate the existence of nonspherical agglomerates during the early phase of agglomeration. Later in the particle development, both techniques reveal spherical particles again. The presence of strong cross-polarization intensities is accompanied by low-frequency instabilities detected in the scattered light intensities and plasma emission. It is found that the particle radius and particle number density during the agglomeration phase are well described by the Brownian free molecule coagulation model. Application of this neutral-particle coagulation model is justified by calculation of the particle charge, whereby it is shown that particles of a few tens of nanometers can be considered neutral under our experimental conditions. The measured particle dispersion is well described by a Brownian free molecule coagulation model including a log-normal particle size distribution. (C) 1996 American Institute of Physics.
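The monodisperse Brownian free-molecule coagulation picture described in this abstract can be sketched numerically. The kernel prefactor and initial conditions below are illustrative placeholders, not the paper's fitted values; only the scalings (collision kernel proportional to r^(1/2), conserved total particle volume) follow the model.

```python
import numpy as np

def coagulate(N0, r0, K0, t_end, dt):
    """Monodisperse Brownian free-molecule coagulation:
    dN/dt = -(1/2) K(r) N^2 with K(r) = K0 * sqrt(r) (kernel scaling
    only; K0 is an assumed, illustrative prefactor). Conservation of
    total particle volume N * r^3 fixes r as N decays."""
    N, r, t = float(N0), float(r0), 0.0
    while t < t_end:
        K = K0 * np.sqrt(r)
        N -= 0.5 * K * N**2 * dt           # forward-Euler step
        r = r0 * (N0 / N) ** (1.0 / 3.0)   # volume (mass) conservation
        t += dt
    return N, r

# agglomeration: number density drops while the mean radius grows
N, r = coagulate(N0=1e16, r0=10e-9, K0=1e-13, t_end=1.0, dt=1e-3)
```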
Abstract:
This work demonstrates the feasibility of using films composed of polymeric micro- and nanofibers, together with liquid crystals, as electrically switchable light-scattering shutters. We present a concept for an electro-optic device based on an innovative combination of two mature technologies: the optics of nematic liquid crystals and the electrospinning of nanofibers. These devices have electric and optical characteristics far superior to those of comparable devices. The simulation presented shows results that are highly consistent with experiment and that explain the working mechanism of the devices.
Abstract:
Purpose: This study was conducted to investigate the influence of protein structure on the immunogenicity in wild-type and immune tolerant mice of well-characterized degradation products of recombinant human interferon alpha2b (rhIFNα2b). Methods: RhIFNα2b was degraded by metal-catalyzed oxidation (M), cross-linking with glutaraldehyde (G), oxidation with hydrogen peroxide (H), and incubation in a boiling water bath (B). The products were characterized with UV absorption, circular dichroism and fluorescence spectroscopy, gel permeation chromatography, reverse-phase high-pressure liquid chromatography, sodium dodecyl sulfate polyacrylamide gel electrophoresis, Western blotting, and mass spectrometry. The immunogenicity of the products was evaluated in wild-type mice and in transgenic mice immune tolerant for hIFNα2. Serum antibodies were detected by enzyme-linked immunosorbent assay or surface plasmon resonance. Results: M-rhIFNα2b contained covalently aggregated rhIFNα2b with three methionines partly oxidized to methionine sulfoxides. G-rhIFNα2b contained covalent aggregates and did not show changes in secondary structure. H-rhIFNα2b was only chemically changed, with four partly oxidized methionines. B-rhIFNα2b was largely unfolded and heavily aggregated. Nontreated (N) rhIFNα2b was immunogenic in the wild-type mice but not in the transgenic mice, showing that the latter were immune tolerant for rhIFNα2b. The anti-rhIFNα2b antibody levels in the wild-type mice depended on the degradation product: M-rhIFNα2b > H-rhIFNα2b ∼ N-rhIFNα2b ≫ B-rhIFNα2b; G-rhIFNα2b did not induce anti-rhIFNα2b antibodies. In the transgenic mice, only M-rhIFNα2b could break the immune tolerance. Conclusions: RhIFNα2b immunogenicity is related to its structural integrity. Moreover, the immunogenicity of aggregated rhIFNα2b depends on the structure and orientation of the constituent protein molecules and/or on the aggregate size.
Abstract:
In the two Higgs doublet model, there is the possibility that the vacuum in which the universe resides is metastable. We present the tree-level bounds on the scalar potential parameters which must be obeyed to prevent that situation. Analytical expressions for those bounds are given for the most commonly used potential, that with a softly broken Z(2) symmetry. The impact of those bounds on the model's phenomenology is discussed in detail, as well as the importance of current LHC results in determining whether the vacuum we live in is or is not stable. We demonstrate how the vacuum stability bounds can be obtained for the most generic CP-conserving potential, and provide a simple method to implement them.
Abstract:
A series of new ruthenium(II) complexes of the general formula [Ru(eta(5)-C5H5)(PP)(L)][PF6] (PP = DPPE or 2PPh(3); L = 4-butoxybenzonitrile or N-(3-cyanophenyl)formamide) and the binuclear iron(II) complex [Fe(eta(5)-C5H5)(PP)(mu-L)(PP)(eta(5)-C5H5)Fe][PF6](2) (L = (E)-2-(3-(4-nitrophenyl)allylidene)malononitrile, which was also newly synthesized) have been prepared and studied to evaluate their potential for second harmonic generation. All the new compounds were fully characterized by NMR, IR and UV-Vis spectroscopies, and their electrochemical behaviour was studied by cyclic voltammetry. Quadratic hyperpolarizabilities (beta) of three of the complexes were determined by hyper-Rayleigh scattering (HRS) measurements at a fundamental wavelength of 1500 nm, and the calculated static beta(0) values fall in the range 65-212 x 10(-30) esu. The compound presenting beta(0) = 212 x 10(-30) esu proved to be 1.2 times more efficient than the urea standard in second harmonic generation (SHG), measured in the solid state by the Kurtz powder technique using a Nd:YAG laser (1064 nm). (C) 2013 Elsevier B.V. All rights reserved.
Abstract:
Independent component analysis (ICA) has recently been proposed as a tool to unmix hyperspectral data. ICA is founded on two assumptions: 1) the observed spectrum vector is a linear mixture of the constituent spectra (endmember spectra) weighted by the corresponding abundance fractions (sources); 2) the sources are statistically independent. Independent factor analysis (IFA) extends ICA to linear mixtures of independent sources immersed in noise. Concerning hyperspectral data, the first assumption is valid whenever the multiple scattering among the distinct constituent substances (endmembers) is negligible and the surface is partitioned according to the fractional abundances. The second assumption, however, is violated, since the sum of the abundance fractions associated with each pixel is constant due to physical constraints in the data acquisition process. Thus, the sources cannot be statistically independent, which compromises the performance of ICA/IFA algorithms in hyperspectral unmixing. This paper studies the impact of hyperspectral source statistical dependence on ICA and IFA performance. We conclude that the accuracy of these methods tends to improve with increasing signature variability, number of endmembers, and signal-to-noise ratio. In any case, there are always endmembers that are incorrectly unmixed. We arrive at this conclusion by minimizing the mutual information of simulated and real hyperspectral mixtures. The computation of mutual information is based on fitting mixtures of Gaussians to the observed data. A method to sort ICA and IFA estimates in terms of the likelihood of being correctly unmixed is proposed.
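The sum-to-one constraint on abundance fractions mentioned in this abstract forces a negative correlation between sources, which can be checked directly. A minimal sketch with simulated Dirichlet abundances (illustrative, not real hyperspectral data):

```python
import numpy as np

rng = np.random.default_rng(0)
# 5000 simulated pixels, 3 endmembers; Dirichlet draws guarantee
# nonnegative abundance fractions that sum to one for every pixel
A = rng.dirichlet(np.ones(3), size=5000)

# the sum-to-one constraint holds exactly...
assert np.allclose(A.sum(axis=1), 1.0)

# ...and induces negative covariance between any two abundance
# fractions, so the "sources" cannot be statistically independent
C = np.cov(A, rowvar=False)
print(C[0, 1])  # negative off-diagonal entry
```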
Abstract:
Proceedings of International Conference - SPIE 7830, Image and Signal Processing for Remote Sensing XVI (Lorenzo Bruzzone, Ed.), Toulouse, France - 20 September 2010
Abstract:
Proceedings of International Conference - SPIE 7477, Image and Signal Processing for Remote Sensing XV - 28 September 2009
Abstract:
eta(5)-Monocyclopentadienyliron(II)/ruthenium(II) complexes of the general formula [M(eta(5)-C5H5)(PP)(L1)][PF6] {M = Fe, PP = dppe; M = Ru, PP = dppe or 2PPh3; L1 = 5-[3-(thiophen-2-yl)benzo[c]thiophenyl]thiophene-2-carbonitrile} have been synthesized and studied to evaluate their molecular quadratic hyperpolarizabilities. The compounds were fully characterized by NMR, FTIR and UV/Vis spectroscopy, and their electrochemical behaviour was studied by cyclic voltammetry. Quadratic hyperpolarizabilities (beta) were determined by hyper-Rayleigh scattering measurements at a fundamental wavelength of 1500 nm. Density functional theory calculations were employed to rationalize the second-order non-linear optical properties of these complexes.
Abstract:
A series of mono(eta(5)-cyclopentadienyl)metal(II) complexes with nitro-substituted thienyl acetylide ligands of general formula [M(eta(5)-C5H5)(L)(C≡C{C4H2S}(n)NO2)] (M = Fe, L = kappa(2)-DPPE, n = 1, 2; M = Ru, L = kappa(2)-DPPE, 2PPh3, n = 1, 2; M = Ni, L = PPh3, n = 1, 2) has been synthesized and fully characterized by NMR, FT-IR, and UV-Vis spectroscopy. The electrochemical behavior of the complexes was explored by cyclic voltammetry. Quadratic hyperpolarizabilities (beta) of the complexes have been determined by hyper-Rayleigh scattering (HRS) measurements at 1500 nm. The effect of the donor abilities of the different organometallic fragments on the quadratic hyperpolarizabilities was studied and correlated with spectroscopic and electrochemical data. Density functional theory (DFT) and time-dependent DFT (TDDFT) calculations were employed to gain a better understanding of the second-order nonlinear optical properties of these complexes. In this series, the complexity of the push-pull systems is revealed; even so, several trends in the second-order hyperpolarizability can still be recognized. In particular, the overall data seem to indicate that the existence of other electronic transitions in addition to the main MLCT clearly controls the effectiveness of the organometallic donor ability on the second-order NLO properties of these push-pull systems.
Abstract:
Liquid crystalline cellulosic-based solutions with distinctive properties are at the origin of different kinds of multifunctional materials with unique characteristics. These solutions can form chiral nematic phases at rest, with tuneable photonic behavior, and exhibit a complex behavior associated with the onset of a network of director field defects under shear. Techniques such as Nuclear Magnetic Resonance (NMR), rheology coupled with NMR (Rheo-NMR), rheology, optical methods, Magnetic Resonance Imaging (MRI) and Wide Angle X-ray Scattering (WAXS) were extensively used to elucidate the liquid crystalline characteristics of these cellulosic solutions. Cellulosic films produced by shear casting, and fibers produced by electrospinning, from these liquid crystalline solutions have regained wider attention due to recognition of their innovative properties and their biocompatibility. Electrospun membranes composed of helical and spiral-shaped fibers achieve large surface areas, improving the performance of this kind of system. The moisture response, light modulation, wettability, and the capability of orienting protein and cellulose crystals opened a wide range of new applications for the shear-cast films. Characterization by NMR, X-rays, tensile tests, AFM, and optical methods allowed detailed characterization of these soft cellulosic materials. In this work, special attention is given to recent developments including, among others, a moisture-driven cellulosic motor and electro-optical devices.
Abstract:
The market for emulsion polymers (latexes) is large and growing at the expense of other manufacturing processes that emit higher amounts of volatile organic solvents. The paint industry is no exception, and solvent-borne paints have been gradually substituted by aqueous paints. In their life cycle, much of the aqueous paint used for architectural or decorative purposes will eventually be discharged into wastewater treatment facilities, where its polymeric nanoparticles (mainly acrylic and styrene-acrylic) can act as xenobiotics to the microbial communities present in activated sludge. It is well established that these materials are biocompatible at the macroscopic scale. But is their behaviour the same at the nanoscale? What happens to the polymeric nanoparticles during the activated sludge process? Do the nanoparticles aggregate and become discharged together with the sludge, or do they remain in emulsion? How do microorganisms interact with these nanoparticles? Are the nanoparticles degraded by them? Are they adsorbed? Are these nanoparticles toxic to the microbial community? To study the influence of these xenobiotics on the activated sludge process, an emulsion of cross-linked poly(butyl methacrylate) (PBMA) nanoparticles of ca. 50 nm diameter was produced and used as a model compound. Activated sludge from a wastewater treatment plant was tested by the OECD respiration inhibition test using several concentrations of PBMA nanoparticles. Particle aggregation was followed by Dynamic Light Scattering, and microorganism surfaces were observed by Atomic Force Microscopy. Using sequential batch reactors (SBRs) and continuous reactors, both inoculated with activated sludge, the consumption of carbon, ammonia, nitrite and nitrate was monitored and compared in the presence and absence of nanoparticles. No particles were detected in any of the treated waters by Dynamic Light Scattering.
This can mean either that microorganisms efficiently remove all polymer nanoparticles or that the nanoparticles tend to aggregate and are naturally removed by precipitation. Nevertheless, respiration inhibition tests demonstrated that microorganisms consume more oxygen in the presence of nanoparticles, which suggests a stress situation. A slight decrease in the efficiency of nitrification in the presence of nanoparticles was also observed. AFM images showed that while the morphology of some organisms remained the same in both the presence and absence of nanoparticles, others assumed a rough surface with hill-like features of ca. 50 nm when exposed to nanoparticles. Nanoparticles are thus likely to be either incorporated or adsorbed at the surface of some organisms, increasing the overall respiration rate and decreasing nitrification efficiency. Thus, despite its biocompatibility at the macroscopic scale, PBMA is likely no longer innocuous at the nanoscale.
Abstract:
We directly visualize the response of nematic liquid crystal drops of toroidal topology, threaded on cellulosic fibers suspended in air, to an AC electric field and at different temperatures across the N-I transition. This new liquid crystal system can exhibit non-trivial point defects, which can be energetically unstable against expanding into ring defects depending on the fiber constraining geometry. The director, anchored tangentially near the fiber surface and homeotropically at the air interface, forms a hybrid shell distribution that in turn causes a ring disclination line around the main axis of the fiber at the center of the droplet. Upon application of an electric field, E, the disclination ring first expands and moves along the fiber main axis, followed by the appearance of a stable "spherical particle" object orbiting around the fiber at the center of the liquid crystal drop. The rotation speed of this particle was found to vary linearly with the applied voltage. This constrained liquid crystal geometry seems to meet the essential requirements for soliton-like deformations to develop and exhibit stable orbiting in three dimensions upon application of an external electric field. On changing the temperature, the system remains stable and allows the study of defect evolution near the nematic-isotropic transition, showing qualitatively different behaviour on cooling and on heating. Necklaces of such liquid crystal drops constitute excellent systems for the study of topological defects and their evolution, and open new perspectives for applications in microelectronics and photonics.
Abstract:
We present results, obtained by means of an analytic study and a numerical simulation, on the resonant condition necessary to produce a Localized Surface Plasmon Resonance (LSPR) effect at the surface of metal nanospheres embedded in an amorphous silicon matrix. The study is based on a Lorentz dispersive model for the a-Si:H permittivity and a Drude model for the metals. Considering the absorption spectra of a-Si:H, the best choices for the metal nanoparticles appear to be aluminium, indium or magnesium. No difference was observed when considering a-SiC:H. Finite-difference time-domain (FDTD) simulation of an Al nanosphere embedded in an amorphous silicon matrix shows an increased scattering radius and the presence of LSPR induced by the metal/semiconductor interaction under green light (560 nm) illumination. Further results include the effect of nanoparticle shape (nano-ellipsoids) in controlling the wavelength at which LSPR is produced. It was shown that it is possible to produce LSPR in the red part of the visible spectrum (the most critical for a-Si:H solar cell applications in terms of light absorption enhancement) with aluminium nano-ellipsoids. As an additional result, we may conclude that the double Lorentz-Lorenz model for the optical functions of a-Si:H is numerically stable in 3D simulations and can be used safely in the FDTD algorithm. A further simulation study is directed at determining an optimal spatial distribution of Al nanoparticles, with variable shapes, capable of enhancing light absorption in the red part of the visible spectrum by exploiting light trapping and plasmonic effects. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
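In the quasi-static limit, the LSPR condition for a small sphere is Re[eps_metal(w)] = -2 * eps_host (the Fröhlich condition). The sketch below locates this resonance with a bare Drude model and rough, assumed parameters (a fixed real host permittivity, not the paper's dispersive Lorentz model for a-Si:H):

```python
import numpy as np

# Illustrative, assumed parameters: a Drude plasma energy and damping
# roughly appropriate for Al, and a non-dispersive host permittivity
# standing in for a-Si:H (the paper uses a Lorentz dispersive model).
hw_p, gamma = 15.0, 0.6   # eV
eps_host = 12.0

hw = np.linspace(1.0, 6.0, 5001)                     # photon energy grid, eV
eps_metal = 1.0 - hw_p**2 / (hw**2 + 1j * gamma * hw)  # Drude permittivity

# Frohlich condition: Re[eps_metal] = -2 * eps_host
i = np.argmin(np.abs(eps_metal.real + 2.0 * eps_host))
hw_res = hw[i]   # for small damping, close to hw_p / sqrt(1 + 2*eps_host)
```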
Abstract:
Hyperspectral remote sensing exploits the electromagnetic scattering patterns of the different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions.
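The linear mixing model just described can be written as X = M A + n, with nonnegative abundance columns summing to one. A minimal synthetic sketch (random signatures, not real endmember spectra):

```python
import numpy as np

rng = np.random.default_rng(1)
B, p, n = 200, 3, 1000                 # bands, endmembers, pixels

M = rng.random((B, p))                 # endmember signatures (one per column)
A = rng.dirichlet(np.ones(p), size=n).T  # abundances: >= 0, columns sum to 1
noise = 1e-3 * rng.standard_normal((B, n))

X = M @ A + noise                      # observed spectra, one pixel per column
```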
Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors with an unmixing matrix that minimizes the mutual information among sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when the sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33]. Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref. [37] is also of MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures.
The MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(floor(d/2)+1)), where floor(x) denotes the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to assure convergence (in probability) to the desired solution. Aiming at lower computational complexity, algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to the extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data.
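The PPI scoring step just described (projections onto random skewers, counting extremes) can be sketched as follows; this is a bare-bones illustration without the MNF preprocessing the full algorithm uses:

```python
import numpy as np

def ppi_scores(X, n_skewers=500, seed=0):
    """X: bands x pixels. For each random skewer, mark the pixels whose
    projections are the extremes; pixels with the highest cumulative
    counts are taken as the purest."""
    rng = np.random.default_rng(seed)
    scores = np.zeros(X.shape[1], dtype=int)
    for _ in range(n_skewers):
        skewer = rng.standard_normal(X.shape[0])
        proj = skewer @ X
        scores[np.argmax(proj)] += 1   # extreme in the skewer direction
        scores[np.argmin(proj)] += 1   # extreme in the opposite direction
    return scores

# toy data: 3 pure pixels (simplex vertices) followed by interior mixtures
rng = np.random.default_rng(1)
pure = 10.0 * np.eye(3)
mixed = pure @ rng.dirichlet(np.ones(3), size=200).T
X = np.hstack([pure, mixed])
s = ppi_scores(X)
# extremes of any linear projection lie at simplex vertices, so only
# the first three (pure) pixels accumulate counts
```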
ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46]. In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices. The latter is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data in the least-squares sense [48, 49]. We note, however, that VCA works with both projected and unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data.
The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR, yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Sections 19.3 and 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
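The iterative orthogonal-projection loop described above is the core of VCA. A toy sketch under the pure-pixel assumption (random directions and no SNR-dependent projection step, unlike the full algorithm):

```python
import numpy as np

def vca_sketch(X, p, seed=0):
    """X: bands x pixels. Repeatedly draw a direction, remove its
    component in the span of the endmembers found so far, and take the
    pixel with the extreme |projection| as the next endmember."""
    rng = np.random.default_rng(seed)
    B = X.shape[0]
    E = np.zeros((B, p))
    idx = []
    for k in range(p):
        f = rng.standard_normal(B)
        if k > 0:
            Q, _ = np.linalg.qr(E[:, :k])   # orthonormal basis of span(E)
            f = f - Q @ (Q.T @ f)           # keep only the orthogonal part
        j = int(np.argmax(np.abs(f @ X)))   # extreme of the projection
        idx.append(j)
        E[:, k] = X[:, j]
    return E, idx

# toy data with the pure pixels placed in the first three columns
rng = np.random.default_rng(1)
pure = 10.0 * np.eye(3)
X = np.hstack([pure, pure @ rng.dirichlet(np.ones(3), size=200).T])
E, idx = vca_sketch(X, p=3)   # recovers the three pure-pixel indices
```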