Abstract:
Cellulose acetate (CA)-silver (Ag) nanocomposite asymmetric membranes were prepared via the wet-phase inversion method by dispersing polyvinylpyrrolidone-protected Ag nanoparticles in membrane casting solutions of different compositions. Silver nanoparticles were synthesized ex situ and added to the casting solution as a concentrated aqueous colloidal dispersion. The effects of the dispersion addition on the structure and on the selective permeation properties of the membranes were studied by comparing the nanocomposites with the silver-free materials. The casting solution composition played an important role in the adequate dispersion of the silver nanoparticles in the membrane. Incorporation of nanoscale silver and the final silver content resulted in structural changes leading to an increase in the hydraulic permeability and molecular weight cut-off of the nanocomposite membranes. (c) 2014 Wiley Periodicals, Inc. J. Appl. Polym. Sci. 2015, 132, 41796.
Abstract:
The ready biodegradability of four chelating agents, N,N′-(S,S)-bis[1-carboxy-2-(imidazol-4-yl)ethyl]ethylenediamine (BCIEE), N,N′-ethylenedi-L-cysteine (EC), N,N′-bis(4-imidazolylmethyl)ethylenediamine (EMI) and 2,6-pyridinedicarboxylic acid (PDA), was tested according to the OECD guideline for testing of chemicals. PDA proved to be a readily biodegradable substance. However, none of the other three compounds were degraded during the 28 days of the test. Chemical simulations were performed for the four compounds in order to understand their ability to complex with some metal ions (Ca, Cd, Co, Cu, Fe, Mg, Mn, Ni, Pb, Zn) and to discuss possible applications of these chelating agents. Two different conditions were simulated: (i) in the presence of the chelating agent and one metal ion, and (ii) in the simultaneous presence of the chelating agent and all metal ions with an excess of Ca. For those compounds that proved not to be readily biodegradable (BCIEE, EC and EMI), applications were evaluated where this property was not fundamental or not even required. Chemical simulations pointed out possible applications for these chelating agents: food fortification, food processing, fertilizers, biocides, soil remediation and treatment of metal poisoning. Additionally, chemical simulations also predicted that PDA is an efficient chelating agent for removal of Ca incrustations, for detergents and for removal of metal ions from pulp.
Abstract:
Cubic cobalt nitride films were grown onto different single crystalline substrates: Al2O3 (0 0 0 1) and (1 1 2̄ 0), MgO (1 0 0) and (1 1 0), and TiO2 (1 0 0) and (1 1 0). The films display low atomic densities compared with the bulk material, are ferromagnetic and have metallic electrical conductivity. X-ray diffraction and X-ray absorption fine structure confirm the cubic structure of the films and, together with RBS results, indicate that the samples are not homogeneous at the microscopic scale, with Co4+xN nitride coexisting with nitrogen-rich regions. The magnetization of the films decreases with increasing nitrogen content, a variation that is shown to be due to the decrease of the cobalt density, and not to a decrease of the magnetic moment per cobalt ion. The films are crystalline with a nitrogen-deficient stoichiometry and epitaxial, with orientation determined by the substrate.
Abstract:
Fasciola hepatica somatic antigen, its partially purified fractions and its excretion-secretion products were investigated as to their serological, electrophoretic and biological properties. In a Sephadex G-100 column (SG-100), Fasciola hepatica total antigen (FhTA) gave 5 fractions, and SDS-PAGE analysis showed they were glycoproteins ranging from 14 to 94 kDa molecular weight (MW). When these fractions were analyzed by enzyme-linked immunotransfer blot (EITB) and immunodiffusion in gel (ID) with serum from rats immunized with FhTA, the presence of different antigenic components was revealed. In the SDS-PAGE of the excretory-secretory antigen (ESA), it was possible to observe peptides from 12 to 22 kDa, which were also present in FhTA. When the FhTA, its fractions and the ESA were analyzed by EITB with the immune rat serum (IRS), it was observed that only some fractions of the SG-100 shared antigens with the FhTA and ESA. Moreover, DTH and ITH responses were studied in FhTA-immunized rats challenged with these different antigen components, revealing that the protein/carbohydrate ratio is important for inducing the DTH response. The ESA was the most active component in the DTH and ITH responses.
Low temperature structural transitions in dipolar hard spheres: the influence on magnetic properties
Abstract:
We investigate the structural chain-to-ring transition at low temperature in a gas of dipolar hard spheres (DHS). Due to the weakening of the entropic contribution, ring formation becomes noticeable when the effective dipole-dipole magnetic interaction increases, resulting in the redistribution of particles from the usually observed flexible chains into flexible rings. The concentration (rho) of DHS plays a crucial part in this transition: at very low rho only chains and rings are observed, whereas even a slight increase of the volume fraction leads to the formation of branched or defect structures. As a result, the fraction of DHS aggregated in defect-free rings turns out to be a non-monotonic function of rho. The average ring size is found to be a more slowly increasing function of rho than that of chains. Both theory and computer simulations confirm the dramatic influence of ring formation on the rho-dependence of the initial magnetic susceptibility (chi) when the temperature decreases. The rings, due to their zero total dipole moment, are unresponsive to a weak magnetic field and drive the strong decrease of the initial magnetic susceptibility. (C) 2014 Elsevier B.V. All rights reserved.
Abstract:
In this work, we present results from teleseismic P-wave receiver functions (PRFs) obtained in Portugal, Western Iberia. A dense seismic station deployment conducted between 2010 and 2012, in the scope of the WILAS project and covering the entire country, allowed the most spatially extensive probing of the bulk crustal seismic properties of Portugal to date. The application of the H-κ stacking algorithm to the PRFs enabled us to estimate the crustal thickness (H) and the average crustal ratio of the P- and S-wave velocities Vp/Vs (κ) for the region. Observations of Moho conversions indicate that this interface is relatively smooth, with the crustal thickness ranging between 24 and 34 km and an average of 30 km. The highest Vp/Vs values are found in the Mesozoic-Cenozoic crust beneath the western and southern coastal domain of Portugal, whereas the lowest values correspond to the Palaeozoic crust underlying the remaining part of the study area. The average Vp/Vs is found to be 1.72, ranging from 1.63 to 1.86 across the study area, indicating a predominantly felsic composition. Overall, we systematically observe a decrease of Vp/Vs with increasing crustal thickness. Taken as a whole, our results indicate a clear distinction between the geological zones of the Variscan Iberian Massif in Portugal (with the overall shape of the anomalies conditioned by the shape of the Ibero-Armorican Arc and the associated Late Paleozoic suture zones) and the Meso-Cenozoic basin associated with Atlantic rifting stages. Thickened crust (30-34 km) across the studied region may be inherited from continental collision during the Paleozoic Variscan orogeny. An anomalous crustal thinning to around 28 km is observed beneath the central part of the Central Iberian Zone and the eastern part of the South Portuguese Zone.
Abstract:
This work presents and analyses the fat and fuel properties and the methyl ester profile of biodiesel from animal fats and fish oil (beef tallow, pork lard, chicken fat and sardine oil). Their sustainability is also evaluated in comparison with rapeseed biodiesel and fossil diesel, currently the dominant liquid fuels for transportation in Europe. Results show that from a technological point of view it is possible to use animal fats and fish oil as feedstock for biodiesel production. From the sustainability perspective, beef tallow biodiesel seems to be the most sustainable one, as its contribution to global warming has the same value as fossil diesel and, in terms of energy efficiency, it has the best value of the biodiesels under consideration. Although biodiesel is not as energy efficient as fossil diesel, there is room to improve it, for example by replacing the fossil energy used in the process with renewable energy generated using co-products (e.g. straw, biomass cake, glycerine).
Abstract:
Origanum glandulosum Desf. (a species endemic to North Africa: Tunisia and Algeria) is medicinally important as it has antimicrobial, antifungal, antioxidant, antibacterial, antithrombin, antimutagenic, angiogenic, antiparasitic and antihyperglycaemic activities. Phytochemical investigations of species of this genus have resulted in the extraction of a number of important bioactive compounds. This emphasizes the need for extensive study to report additional information on the medicinal importance, biological activities and oil properties of other unattended species related to Origanum glandulosum. © 2015 Springer-Verlag France.
Abstract:
This work intends to evaluate the mechanical and durability performance of concrete made with coarse recycled concrete aggregates (CRCA) obtained using two crushing processes: primary crushing (PC) and primary plus secondary crushing (PSC). This analysis intends to select the most efficient production process of recycled aggregates (RA). The RA used here resulted from precast products (P), with strength classes of 20 MPa, 45 MPa and 65 MPa, and from laboratory-made concrete (L) with the same compressive strengths. The evaluation of the concrete was made with the following tests: compressive strength; splitting tensile strength; modulus of elasticity; carbonation resistance; chloride penetration resistance; capillary water absorption; and water absorption by immersion. These findings contribute to a solid and innovative basis that allows the precasting industry to use the waste it generates without restrictions. © (2015) Trans Tech Publications, Switzerland.
Abstract:
Hyperspectral remote sensing exploits the electromagnetic scattering patterns of different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixing of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions.
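As a toy illustration of the linear mixing model just described (the endmember signatures and abundance fractions below are invented for illustration, not taken from any real sensor):

```python
import numpy as np

# Hypothetical endmember signatures: 5 spectral bands x 3 endmembers.
M = np.array([[0.10, 0.70, 0.30],
              [0.20, 0.65, 0.35],
              [0.40, 0.50, 0.30],
              [0.60, 0.30, 0.25],
              [0.80, 0.20, 0.20]])

# Abundance fractions: nonnegative and summing to one.
a = np.array([0.5, 0.3, 0.2])

# Observed mixed pixel under the linear mixing model: x = M a (plus noise,
# omitted here).
x = M @ a
print(np.round(x, 3))  # → [0.32  0.365 0.41  0.44  0.5 ]
```

Each band of the observed pixel is a convex combination of the endmember reflectances in that band, which is what makes the geometric (simplex) view discussed below possible.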
Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises ICA applicability to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors with an unmixing matrix, which minimizes the mutual information among sources. If sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33]. Under the linear mixing model, the observations from a scene are in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref. [37] is also of MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures.
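When the endmember signatures are known, the inversion step reduces to a least-squares problem. A minimal sketch with invented numbers, ignoring the nonnegativity and sum-to-one constraints that a constrained solver would enforce:

```python
import numpy as np

# Hypothetical known endmember matrix (5 bands x 3 endmembers).
M = np.array([[0.10, 0.70, 0.30],
              [0.20, 0.65, 0.35],
              [0.40, 0.50, 0.30],
              [0.60, 0.30, 0.25],
              [0.80, 0.20, 0.20]])
a_true = np.array([0.5, 0.3, 0.2])
x = M @ a_true                       # noise-free mixed pixel

# Unconstrained least-squares inversion: with M known and full column rank,
# this recovers the abundance fractions exactly in the noise-free case.
a_hat, *_ = np.linalg.lstsq(M, x, rcond=None)
print(np.round(a_hat, 3))            # → [0.5 0.3 0.2]
```

With noise, or when physical abundances are required, the constrained variants cited above (nonnegativity, sum-to-one) replace this plain least-squares step.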
The MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a logarithmic law [39] to assure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requirement that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that, in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data.
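The skewer-projection idea behind PPI can be sketched on synthetic data (the scene below is invented, and the MNF preprocessing step is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic scene: 3 pure pixels (rows 0-2) and 100 mixed pixels that are
# convex combinations of them (5 spectral bands).
pure = np.eye(3, 5)
mixed = rng.dirichlet(np.ones(3), size=100) @ pure
pixels = np.vstack([pure, mixed])

# Project every spectral vector onto many random skewers; record how often
# each pixel is an extreme of the projection.
scores = np.zeros(len(pixels), dtype=int)
for _ in range(500):
    skewer = rng.normal(size=pixels.shape[1])
    proj = pixels @ skewer
    scores[proj.argmax()] += 1
    scores[proj.argmin()] += 1

# The highest-scoring pixels are the purest ones.
top = sorted(map(int, np.argsort(scores)[-3:]))
print(top)  # → [0, 1, 2]
```

Because the extremes of a linear projection over a convex set are attained at its vertices, the pure pixels accumulate all the extreme counts, which is exactly the property PPI exploits.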
ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of a lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46]. In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices. The latter is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data, in the least-squares sense [48, 49]. We note, however, that VCA works both with projected and with unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data.
The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR; yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Section 19.3 and Section 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
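The iterative orthogonal-projection step described above can be sketched as follows. This is a bare-bones version of the idea on invented synthetic signatures, not the full VCA algorithm, which also handles noise and the affine structure of the data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic scene: 3 endmember signatures (rows 0-2) in 4 bands, plus
# 50 strictly mixed pixels (convex combinations with positive weights).
E = np.array([[0.9, 0.1, 0.1, 0.2],
              [0.1, 0.8, 0.2, 0.1],
              [0.2, 0.1, 0.9, 0.3]])
X = np.vstack([E, rng.dirichlet(np.ones(3), size=50) @ E])

# Repeatedly project the data onto a direction orthogonal to the subspace
# spanned by the endmembers found so far; the extreme of each projection
# is taken as the next endmember.
found_idx = []
for _ in range(3):
    if found_idx:
        A = X[found_idx].T                              # bands x k
        P = np.eye(X.shape[1]) - A @ np.linalg.pinv(A)  # orthogonal projector
    else:
        P = np.eye(X.shape[1])
    d = P @ rng.normal(size=X.shape[1])  # random direction in the complement
    found_idx.append(int(np.abs(X @ d).argmax()))

print(sorted(found_idx))  # → [0, 1, 2]
```

Since the projection of an already-found endmember onto the orthogonal direction is zero, and strictly mixed pixels project with magnitude smaller than the remaining vertices, each iteration picks out a new pure pixel.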
Abstract:
The concepts and instruments required for the teaching and learning of geometric optics are introduced in the didactic process without a proper didactic transposition. This claim is supported by ample evidence of both wide- and deep-rooted alternative concepts on the topic. Didactic transposition is a theory that comes from a reflection on the teaching and learning process in mathematics but has been used in other disciplinary fields. It will be used in this work in order to clear up the main obstacles in the teaching-learning process of geometric optics. We proceed to argue that since Newton's approach to optics, in Book I of his Opticks, is independent of the corpuscular or undulatory nature of light, it is the most suitable for a constructivist learning environment. However, Newton's theory must be subject to a proper didactic transposition to help overcome the aforementioned alternative concepts. We then describe our didactic transposition, which creates knowledge to be taught using a dialogical process between students' previous knowledge, the history of optics and the desired outcomes in geometrical optics in an elementary pre-service teacher training course. Finally, we use the scheme-facet structure of knowledge both to analyse and discuss our results and to illuminate shortcomings that must be addressed in the next stage of the inquiry.
Abstract:
In this work, tubular fibre-reinforced specimens are tested for fatigue life. The specimens are biaxially loaded with tension and shear stresses, with a load angle β of 30° and 60° and a load ratio of R = 0.1. Many factors affect the fatigue life of a fibre-reinforced material, and the main goal of this work is to study the effect of the load ratio R by obtaining S-N curves and comparing them to previous works (1). All other parameters, such as specimen production, fatigue loading frequency and temperature, are the same as in the previous tests. For every specimen, the stiffness, specimen temperature during testing, crack counts and final fracture mode are obtained. Prior to testing, the literature on load ratio effects on composite fatigue life was reviewed, and from that review the initial stresses to be applied in testing were estimated. In previous works (1), similar specimens have only been tested at a load ratio of R = -1, and therefore the behaviour of these tubular specimens at a different load ratio is unknown. All the data acquired are analysed and compared to the previous works, emphasizing the differences found and discussing possible explanations for those differences. The crack counting software, developed at the institute, has proven useful before; however, different adjustments to the software parameters lead to different crack counts for the same picture, and therefore a better methodology is discussed to improve the crack counting results. After specimen failure, all the data are collected and stored, and the fibre volume content of every specimen is also determined. The number of tests required to produce the S-N curves is obtained according to the existing standards. Additionally, some improvements to the testing machine setup and to the procedures for future testing are identified.
Abstract:
Prostate cancer (PCa) is a major cause of cancer-related morbidity and mortality worldwide. Although early disease is often efficiently managed therapeutically, available options for advanced disease are mostly ineffective. Aberrant DNA methylation associated with gene silencing of cancer-related genes is a common feature of PCa. Therefore, DNA methylation inhibitors might constitute an attractive alternative therapy. Herein, we evaluated the anti-cancer properties of hydralazine, a non-nucleoside DNA methyltransferase (DNMT) inhibitor, in PCa cell lines. In vitro assays showed that hydralazine exposure led to significant dose- and time-dependent growth inhibition, an increased apoptotic rate and decreased invasiveness. Furthermore, it also induced cell cycle arrest and DNA damage. These phenotypic effects were particularly prominent in DU145 cells. Following hydralazine exposure, decreased levels of DNMT1, DNMT3a and DNMT3b mRNA and of DNMT1 protein were observed. Moreover, a significant decrease in GSTP1, BCL2 and CCND2 promoter methylation levels, with concomitant transcript re-expression, was also observed. Interestingly, hydralazine restored androgen receptor expression, with upregulation of its target p21, in the DU145 cell line. Protein array analysis suggested that blockage of the EGF receptor signaling pathway is likely to be the main mechanism of hydralazine action in DU145 cells. Our data demonstrate that hydralazine attenuated the malignant phenotype of PCa cells and might constitute a useful therapeutic tool.
Abstract:
The aim of this study was to develop and validate a Portuguese version of the Short Form of the Posttraumatic Growth Inventory (PTGI-SF). Using an online convenience sample of Portuguese divorced adults (N = 482), we confirmed the oblique five-factor structure of the PTGI-SF by confirmatory factor analysis. The results demonstrated measurement invariance across divorce initiator status groups. The total score and factors of the PTGI-SF showed good internal consistency, with the exception of the New Possibilities factor, which revealed an acceptable reliability. The Portuguese PTGI-SF showed satisfactory convergent validity. In terms of discriminant validity, posttraumatic growth assessed by the Portuguese PTGI-SF was a distinct factor from posttraumatic psychological adjustment. These preliminary findings support the cultural adaptation and psychometric properties of the Portuguese PTGI-SF for measuring posttraumatic growth after a personal crisis.
Abstract:
The application of composite materials is nowadays quite widespread, thanks to the combination of their specific characteristics, such as higher specific strength, higher specific moduli and better fatigue resistance compared with conventional metals. These characteristics, when required, make this material ideal for structural applications. This success story began very early, when composite materials were already used by the Mongols to make weapons and by the Hebrews and Egyptians in construction; however, only from the mid-twentieth century onwards did they attract interest for more modern applications. Nowadays, composite materials are used in household equipment, electrical and electronic components, sporting goods, the automotive industry and civil construction, up to highly demanding and technologically visible industries such as aeronautics, space and defence. Despite the good characteristics of composite materials, they tend to lose their properties when subjected to certain finishing operations such as drilling. Drilling arises from the need to join parts of the same mechanism. The holes obtained by this process must be accurate and damage-free to guarantee high-strength and also precise joints. Drilling composite materials is quite complex owing to their heterogeneity, anisotropy and heat sensitivity, and to the fact that the reinforcements are extremely abrasive. The drilling operation can cause severe damage to the part, such as delamination at entry, hole circularity defects, thermally induced damage and delamination at exit, the last of which is the most frequent and undesirable. Based on these premises, this work was developed in an attempt to obtain simple processes for determining and predicting damage in fibre-reinforced polymers (carbon fibre in this case) so as to prevent it.
To achieve these objectives, delamination-onset tests were carried out following the proposal of Lachaud et al., and pin-bearing tests following the proposal of Khashaba et al. Damage extents were also examined according to the adjusted delamination factor model presented by Davim et al. From the pin-bearing tests performed, the influences of the drill material and geometry, of the feed rate used in drilling and of different plate stacking orientations on the delamination of composite laminates were analysed, as well as the influence of these variables on the pin-bearing failure load. The main conclusions drawn here are that delamination increases with increasing feed rate, as expected; that tungsten carbide drills are the most recommended for drilling the material in question; and that delamination is higher for the cross-ply plate than for unidirectional plates. For the delamination-onset tests, the influences of the uncut thickness beneath the drill/punch, of different drill geometries, of changes in test speed and of different plate stacking orientations on the delamination-onset force were analysed. The main conclusions of this test are that the delamination-onset force increases with increasing uncut thickness and that the influence of the test speed changes with the stacking orientation.