978 results for Processing technique


Relevance:

20.00%

Publisher:

Abstract:

The World Business Council for Sustainable Development (WBCSD) defines Eco-Efficiency as follows: 'Eco-Efficiency is achieved by the delivery of competitively priced goods and services that satisfy human needs and bring quality of life, while progressively reducing ecological impacts and resource intensity throughout the life-cycle to a level at least in line with the earth's estimated carrying capacity'. From this point of view, Eco-Efficiency is a key concept for sustainable development, bringing together economic and ecological progress. Measuring the Eco-Efficiency of a company, factory, or business is a complex process that involves the measurement and control of several relevant parameters or indicators, either applied globally to all companies or specific to the nature and particularities of the business itself. In this study, an attempt was made to measure and evaluate the eco-efficiency of a pultruded composite processing company. For this purpose, the recommendations of the WBCSD [1] and the directives of the ISO 14031 standard [2] were followed and applied. The analysis was restricted to the company's main business branch: the production and sale of standard GFRP pultrusion profiles. The main general indicators of eco-efficiency, as well as the specific indicators, were defined and determined according to ISO 14031 recommendations. Based on the indicators' figures, the value profile, the environmental profile, and the pertinent eco-efficiency ratios were established and analyzed. In order to evaluate potential improvements in the company's eco-performance, new indicator values and eco-efficiency ratios were estimated taking into account the implementation of new procedures, both upstream and downstream of the production process, namely: a) adoption of a new, more effective heating system for the pultrusion die in the manufacturing process, with lower heat losses; b) implementation of new stock-management software (raw materials and final products) that minimizes production failures and delivery delays to the final consumer; c) a recycling approach, with partial reuse of scrap material derived from the manufacturing, cutting, and assembly processes of GFRP profiles. The last approach, in particular, seems to significantly improve the eco-efficient performance of the company. Currently, by-products and wastes generated in the manufacturing process of GFRP profiles are landfilled, with supplementary costs to the company in the form of scrap transport, landfill taxes, and the required test analyses of waste materials. However, mechanical recycling of GFRP waste, with reduction to powdered and fibrous particulates, is a recycling process that can easily be carried out with heavy-duty cutting mills. The subsequent reuse of the obtained recyclates, either in a closed-loop process as filler replacement in the resin matrix of GFRP profiles, or as reinforcement of other composite materials produced by the company, will lead both to cost reductions in raw materials and landfilling and to the minimization of landfilled waste. These features lead to significant improvements in the subsequently assessed eco-efficiency ratios of the present case study, yielding a more sustainable product and manufacturing process for pultruded GFRP profiles.
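
Such ratios follow the generic WBCSD formula, eco-efficiency = product or service value / environmental influence. A minimal numeric sketch of how a recycling scenario shifts one such ratio (all figures below are illustrative placeholders, not the company's data):

```python
# Illustrative eco-efficiency ratio, following the generic WBCSD formula:
# eco-efficiency = product/service value / environmental influence.
# All figures are made-up placeholders, not data from the case study.
net_sales_eur = 1_500_000            # value indicator (EUR/year)
waste_landfilled_t = 120             # environmental indicator (t/year)

baseline = net_sales_eur / waste_landfilled_t

# Scenario: recycling diverts part of the scrap from landfill and cuts
# raw-material purchases, so the value rises while the burden falls.
waste_after_recycling_t = 60
net_sales_after_eur = 1_520_000      # small saving on raw materials

improved = net_sales_after_eur / waste_after_recycling_t
print(f"baseline: {baseline:,.0f} EUR/t, improved: {improved:,.0f} EUR/t")
```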

Relevance:

20.00%

Publisher:

Abstract:

TiO2 films were deposited on ITO substrates by the dc reactive magnetron sputtering technique. The sputtering pressure was found to be a very important parameter for the structure of the deposited TiO2 films. When the pressure is lower than 1 Pa, the deposited film has a dense structure and shows a preferred orientation along the [101] direction. However, a nanorod structure is obtained when the sputtering pressure is higher than 1 Pa; these nanorod-structured TiO2 films show a preferred orientation along the [110] direction. X-ray diffraction and Raman scattering measurements show that both the dense and the nanostructured TiO2 films contain only the anatase phase; no other phase was detected. SEM results show that the TiO2 nanorods are perpendicular to the ITO substrate, and TEM measurements show that the nanorods have a very rough surface. Dye-sensitized solar cells (DSSCs) were assembled using these TiO2 nanorod films, prepared at different sputtering pressures, as photoelectrodes, and the effect of the sputtering pressure on the photoelectric conversion properties of the DSSCs was studied.

Relevance:

20.00%

Publisher:

Abstract:

Reliable flow simulation software is indispensable for determining an optimal injection strategy in Liquid Composite Molding processes. Several methodologies can be implemented in standard software in order to reduce CPU time, and post-processing techniques are one of them. Post-processing a finite element solution is a well-known procedure that consists in recalculating the originally obtained quantities so that the rate of convergence increases without the need for expensive remeshing techniques. Post-processing is especially effective in problems where better accuracy is required for derivatives of nodal variables in regions where a Dirichlet essential boundary condition is imposed strongly. In previous works, the influence of the smoothness of a non-homogeneous Dirichlet condition imposed on a smooth front was examined. Usually, however, a rather non-smooth boundary is obtained at each time step of the infiltration process due to discretization, and then the direct application of post-processing techniques does not improve the final results as expected. The new contribution of this paper lies in an improvement of the standard methodology. The improved results clearly show that the recalculated flow front is closer to the "exact" one, is smoother than the previous one, and reduces local disturbances of the "exact" solution.
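
As a generic illustration of such derivative recovery (a stand-in for the idea, not the paper's specific scheme), the sketch below post-processes a piecewise-linear solution on a 1D mesh by averaging the element-wise constant derivatives at the nodes, which typically yields a more accurate derivative field than the raw one:

```python
# Generic nodal-averaging recovery of derivatives from a piecewise-linear
# 1D finite element solution; a stand-in for the post-processing idea,
# not the specific scheme proposed in the paper.
import numpy as np

x = np.linspace(0.0, 1.0, 11)        # mesh nodes
u = x**3                             # nodal values of some FE solution

# Raw FE derivative: constant on each element
du_elem = np.diff(u) / np.diff(x)

# Recovered nodal derivative: average of the adjacent element values
du_node = np.empty_like(x)
du_node[1:-1] = 0.5 * (du_elem[:-1] + du_elem[1:])
du_node[0], du_node[-1] = du_elem[0], du_elem[-1]   # one-sided at the ends

exact = 3 * x**2
print("max error of recovered derivative:", np.abs(du_node - exact).max())
```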

Relevance:

20.00%

Publisher:

Abstract:

Introduction – Myocardial perfusion scintigraphy (MPS) plays an important role in the diagnosis, evaluation, and follow-up of patients with coronary artery disease, and its processing is mostly performed semi-automatically. Since the performance of nuclear medicine technologists (NMT) can be affected by individual and environmental factors, different professionals processing the same data may obtain different estimates of the quantitative parameters (QP). Objective – To evaluate the influence of professional experience and visual function on the semi-automatic processing of MPS, and to analyze intra- and inter-operator variability in the determination of functional and perfusion QP. Methodology – A sample of 20 NMT was selected and divided into two groups according to their experience with the Quantitative Gated SPECTTM software: Group A (GA) – NMT with ≥600 h of experience, and Group B (GB) – NMT without experience. The NMT underwent an orthoptic evaluation and processed 21 MPS studies five times, non-consecutively. Vision was considered altered when at least one visual function parameter was abnormal. Repeatability and reproducibility were assessed by computing coefficients of variation (%). To compare the QP between operators and to analyze the performance of GA versus GB, the Friedman and Wilcoxon tests were applied, respectively, considering the processing of the same MPS studies. The Mann-Whitney test was used to compare NMT with normal and altered vision in the determination of the QP, and the ETA association coefficient was used to evaluate the influence of vision on each QP. Statistically significant differences were assumed at the 5% significance level. Results and Discussion – Low intra-operator (<6.59%) and inter-operator (<5.07%) variability was found. GB proved to be the more discrepant group in determining the QP, the septal wall (SW) being the only QP showing statistically significant differences (z = -2.051, p = 0.040) relative to GA. Regarding the influence of visual function, statistically significant differences were detected only for the left ventricular ejection fraction (LVEF) (U = 11.5, p = 0.012) between NMT with normal and altered vision, with vision accounting for 33.99% of its variation. More differences in the QP were observed among NMT with a higher incidence of ocular symptoms and reduced binocular vision. LVEF proved to be the most consistent parameter between operators (1.86%). Conclusion – MPS is a repeatable and reproducible, operator-independent technique. Professional experience and visual function influenced the semi-automatic processing of MPS for the SW and LVEF QP, respectively.
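
The variability figures and tests reported above can be reproduced in outline with standard tools; the following is a minimal sketch using synthetic numbers (array shapes and values are illustrative assumptions, not the study's data):

```python
# Minimal sketch of the reported variability analysis; shapes and values
# are illustrative stand-ins, not the study's actual measurements.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# 5 repeated processings of the same 21 studies by one operator (LVEF, %)
repeats = rng.normal(55.0, 1.0, size=(5, 21))

# Intra-operator coefficient of variation per study, then its maximum
cv = repeats.std(axis=0, ddof=1) / repeats.mean(axis=0) * 100
print(f"max intra-operator CV: {cv.max():.2f}%")

# Friedman test: do the repeated processings differ systematically?
stat, p = stats.friedmanchisquare(*repeats)
print(f"Friedman: chi2={stat:.2f}, p={p:.3f}")

# Mann-Whitney U: compare a QP between normal- and altered-vision groups
normal_vision = rng.normal(55.0, 1.0, size=10)
altered_vision = rng.normal(54.0, 1.5, size=10)
u, p = stats.mannwhitneyu(normal_vision, altered_vision)
print(f"Mann-Whitney: U={u:.1f}, p={p:.3f}")
```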

Relevance:

20.00%

Publisher:

Abstract:

Conventional film-based X-ray imaging systems are being replaced by their digital equivalents. Different approaches are being followed, considering either direct or indirect conversion, with the latter technique dominating. The typical indirect-conversion X-ray panel detector uses a phosphor for X-ray conversion coupled to a large-area array of amorphous silicon based optical sensors and switching thin-film transistors (TFTs). The pixel information can then be read out by switching the corresponding line and column transistors, routing the signal to an external amplifier. In this work we follow an alternative approach, where the electrical switching performed by the TFTs is replaced by optical scanning using a low-power laser beam and a sensing/switching PINPIN structure, resulting in a simpler device. The optically active device is a PINPIN array, sharing both front and back electrical contacts, deposited over a glass substrate. During X-ray exposure, each sensing-side photodiode collects photons generated by the scintillator screen (560 nm), charging its internal capacitance. Subsequently, a laser beam (445 nm) scans the switching diodes (back side), retrieving the stored charge sequentially and reconstructing the image. In this paper we present recent work on the optoelectronic characterization of the PINPIN structure to be incorporated in the X-ray image sensor. The results of the optoelectronic characterization of the device and their dependence on the scanning beam parameters are presented and discussed. Preliminary results of line scans are also presented.

Relevance:

20.00%

Publisher:

Abstract:

The most common techniques for stress analysis and strength prediction of adhesive joints involve analytical or numerical methods such as the Finite Element Method (FEM). However, the Boundary Element Method (BEM) is an alternative numerical technique that has been successfully applied to the solution of a wide variety of engineering problems. This work evaluates the applicability of the boundary element code BEASY as a design tool for analyzing adhesive joints. The linearity of the peak shear and peel stresses with the applied displacement is studied and compared between BEASY and the analytical model of Frostig et al., considering a bonded single-lap joint under tensile loading. The BEM results are also compared with the FEM in terms of stress distributions. To evaluate the mesh convergence of BEASY, the influence of mesh refinement on the peak shear and peel stress distributions is assessed. Joint stress predictions are carried out numerically in BEASY and ABAQUS®, and analytically with the models of Volkersen, Goland and Reissner, and Frostig et al. The failure loads of each model are compared with experimental results, and the preparation, processing, and mesh creation times are compared for all models. The BEASY results show good agreement with the conventional methods.
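
For reference, the simplest of the analytical models mentioned above, Volkersen's shear-lag analysis, admits a closed-form adhesive shear stress distribution. In the common textbook form for identical adherends (generic notation, which may differ from the paper's):

$$
\tau(x) \;=\; \frac{P\,\omega}{2\,b}\,\frac{\cosh(\omega x)}{\sinh(\omega l/2)},
\qquad
\omega^{2} \;=\; \frac{2\,G_a}{E\,t\,t_a},
\qquad
-\frac{l}{2} \le x \le \frac{l}{2},
$$

where P is the applied load, b the joint width, l the overlap length, E and t the adherend Young's modulus and thickness, and G_a and t_a the adhesive shear modulus and thickness. The distribution peaks at the overlap ends, which is where mesh refinement matters most when checking convergence of the peak stresses.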

Relevance:

20.00%

Publisher:

Abstract:

Amorphous and crystalline sputtered boron carbide thin films have a very high hardness, even surpassing that of bulk crystalline boron carbide (≈41 GPa). However, magnetron-sputtered B-C films have high friction coefficients (C.o.F), which limits their industrial application. Nanopatterning of material surfaces has been proposed as a solution to decrease the C.o.F: the contact area of nanopatterned surfaces is decreased due to the nanometre size of the asperities, which results in a significant reduction of adhesion and friction. In the present work, the surface of amorphous and polycrystalline B-C thin films deposited by magnetron sputtering was nanopatterned using infrared femtosecond laser radiation. Successive parallel laser tracks 10 μm apart were overlapped in order to obtain a processed area of about 3 mm². Sinusoidal-like undulations with the same spatial period as the laser tracks formed on the surface of the amorphous boron carbide films after laser processing, and their amplitude increases with increasing laser fluence. The formation of undulations with a 10 μm period was also observed on the surface of the crystalline boron carbide film processed with a pulse energy of 72 μJ. The amplitude of these undulations is about 10 times higher than in the amorphous films processed at the same pulse energy, due to the higher roughness of the films and the consequent increase in laser radiation absorption. LIPSS formation on the surface of the films was achieved for the three B-C films under study; however, the LIPSS form under different circumstances. Processing of the amorphous films at low pulse energy (72 μJ) results in LIPSS formation only at localized spots on the film surface. LIPSS formation was also observed on top of the undulations formed after laser processing, with 78 μJ, of the amorphous film deposited at 800 °C. Finally, large-area homogeneous LIPSS coverage of the crystalline boron carbide film surface was achieved within a large range of laser fluences, although holes also form at the higher laser fluences.

Relevance:

20.00%

Publisher:

Abstract:

Cellulosic lyotropic liquid crystals have long been regarded as potential materials for producing fibers competitive with spider silk or Kevlar, yet the processing of high-modulus materials from cellulose-based precursors has been hampered by their complex rheological behavior. In this work, using the Rheo-NMR technique, which combines deuterium NMR with rheology, we investigate the high shear rate regimes that may be of interest for the industrial processing of these materials. Whereas the low shear rate regimes have already been investigated by this technique in different works [1-4], the high shear rate range still lacks a detailed study. This work focuses on the orientational order in the system, both under shear and during the subsequent relaxation process after shear cessation, through the analysis of deuterium spectra of the deuterated solvent, water. At the analyzed shear rates the cholesteric order is suppressed and a flow-aligned nematic is observed, which, at the higher shear rates, develops after a certain time periodic perturbations that transiently annihilate the order in the system. During relaxation, the flow-aligned nematic starts losing order due to the onset of the cholesteric helices, leading to a period of very low order in which cholesteric helices with different orientations form from the aligned nematic, followed in the final stage by an increase in order at long relaxation times, corresponding to the development of aligned cholesteric domains. This study sheds light on the complex rheological behavior of chiral nematic cellulose-based systems and opens ways to improve their processing.

Relevance:

20.00%

Publisher:

Abstract:

Bacterial infections and the fight against them have been one of the major concerns of mankind since the dawn of time. During the 'golden years' of antibiotic discovery, from the 1940s to the 1990s, it was thought that the war against infectious diseases had been won. Currently, however, due to the increase in drug resistance, combined with the inefficiency of discovering new antibiotic classes, infectious diseases are again a major public health concern. A potential alternative to antibiotic treatments may be antimicrobial photodynamic inactivation (PDI) therapy. To date, no indication of antimicrobial PDI resistance development has been reported. However, the PDI protocol depends on the bacterial species [1] and, in some cases, on the bacterial strain, for instance in Staphylococcus aureus [2]. Therefore, the development of PDI monitoring techniques for diverse bacterial strains is critical in pursuing further understanding of such a promising alternative therapy. The present work aims to evaluate Fourier-transform infrared (FT-IR) spectroscopy as a means to monitor the PDI of two model bacteria, one gram-negative (Escherichia coli) and one gram-positive (S. aureus). To that end, a high-throughput FTIR spectroscopic method was implemented, as generally described in Scholz et al. [3], using short incubation periods and microliter quantities of the incubation mixture containing the bacteria and the PDI drug model, the known bactericidal tetracationic porphyrin 5,10,15,20-tetrakis(4-N,N,N-trimethylammoniumphenyl)-porphyrin p-tosylate (TTAP4+). In both bacterial models it was possible to detect, by FTIR spectroscopy, the drug's effect on the cellular composition, either directly in the spectra or in score plots of a principal component analysis. Furthermore, the technique made it possible to infer the effect of PDI on the major cellular biomolecules and on the metabolic status, for example the turnover metabolism. In summary, bacterial PDI was monitored in an economic, rapid (within minutes), high-throughput (using 96-well microplates), and highly sensitive mode by resorting to FTIR spectroscopy, which could serve as a technological basis for evaluating the efficiency of antimicrobial PDI therapies.
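
As an outline of the spectral analysis described above, here is a minimal sketch of a PCA score computation over FTIR-like spectra using scikit-learn (the data are synthetic stand-ins, not measurements from the study):

```python
# Minimal sketch of PCA on FTIR-like spectra; the data here are synthetic
# stand-ins, not measurements from the study.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
n_wavenumbers = 900
# Synthetic "control" and "PDI-treated" spectra: same baseline, with the
# treated spectra shifted slightly in one spectral region.
baseline = rng.random(n_wavenumbers)
control = baseline + rng.normal(0, 0.02, size=(20, n_wavenumbers))
treated = baseline + rng.normal(0, 0.02, size=(20, n_wavenumbers))
treated[:, 400:500] += 0.1  # composition change induced by the drug

X = np.vstack([control, treated])
scores = PCA(n_components=2).fit_transform(X)

# The groups should separate along the first principal component
print("control PC1 mean:", scores[:20, 0].mean())
print("treated PC1 mean:", scores[20:, 0].mean())
```

In a score plot of the two components, the control and treated replicates would then appear as separate clusters, which is the kind of separation the abstract refers to.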

Relevance:

20.00%

Publisher:

Abstract:

Work presented in the scope of the Master's programme in Informatics Engineering, as a partial requirement for obtaining the degree of Master in Informatics Engineering.

Relevance:

20.00%

Publisher:

Abstract:

Hyperspectral imaging can be used for object detection and for discriminating between different objects based on their spectral characteristics. One of the main problems of hyperspectral data analysis is the presence of mixed pixels, due to the low spatial resolution of such images, which means that several spectrally pure signatures (endmembers) are combined into the same mixed pixel. Linear spectral unmixing follows an unsupervised approach that aims at inferring pure spectral signatures and their material fractions at each pixel of the scene. The huge data volumes acquired by such sensors put stringent requirements on processing and unmixing methods. This paper proposes an efficient GPU implementation, using CUDA, of an unsupervised linear unmixing method, SISAL (simplex identification via split augmented Lagrangian). The method finds the smallest simplex enclosing the data by solving a sequence of nonsmooth convex subproblems, using variable splitting to obtain a constrained formulation and then applying an augmented Lagrangian technique. The parallel implementation of SISAL presented in this work exploits the GPU architecture at a low level, using shared memory and coalesced memory accesses. The results presented herein indicate that the GPU implementation can significantly accelerate the method's execution over big datasets while maintaining its accuracy.
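
To make the variable-splitting plus augmented-Lagrangian recipe concrete, here is a generic toy sketch of the pattern (ADMM) on an l1-regularized least-squares problem; this is plain NumPy for illustration, not SISAL itself and not the paper's CUDA code:

```python
# Toy illustration of variable splitting + augmented Lagrangian (ADMM):
# minimize 0.5*||A x - b||^2 + lam*||z||_1  subject to  x = z.
# A plain NumPy sketch of the optimization pattern, unrelated to the
# paper's actual CUDA implementation of SISAL.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20))
b = rng.standard_normal(40)
lam, rho = 0.1, 1.0

x = z = u = np.zeros(20)
AtA, Atb = A.T @ A, A.T @ b
L = np.linalg.cholesky(AtA + rho * np.eye(20))   # factor once, reuse

for _ in range(100):
    # x-update: quadratic subproblem, solved via the cached factorization
    rhs = Atb + rho * (z - u)
    x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))
    # z-update: soft-thresholding (proximal step of the l1 term)
    v = x + u
    z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)
    # scaled dual (multiplier) update enforcing x = z
    u = u + x - z

print("nonzeros in solution:", int(np.count_nonzero(z)))
```

The splitting decouples the smooth and nonsmooth terms into cheap subproblems; in SISAL the same pattern is applied to the hinge-penalized simplex-volume objective rather than to this toy cost.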

Relevance:

20.00%

Publisher:

Abstract:

In this work, plasticizing agents were incorporated into a chitosan-based formulation as a strategy to improve the fragile structure of chitosan-based materials. Three different plasticizers (ethylene glycol, glycerol, and sorbitol) were blended with chitosan to prepare 3D dense chitosan specimens. The mechanical, microstructural, physical, and biocompatibility behavior of the obtained structures was assessed. The results revealed that, of the different specimens prepared, the blend of chitosan with glycerol has superior mechanical properties and good biological behavior, making this chitosan-based formulation a good candidate for producing robust chitosan structures for the construction of bioabsorbable orthopedic implants.

Relevance:

20.00%

Publisher:

Abstract:

One of the most challenging tasks underlying many hyperspectral imagery applications is linear unmixing. The key to linear unmixing is to find the set of reference substances, also called endmembers, that are representative of a given scene. This paper presents vertex component analysis (VCA), a new method to unmix linear mixtures of hyperspectral sources. The algorithm is unsupervised and exploits a simple geometric fact: endmembers are the vertices of a simplex. The algorithm's complexity, measured in floating-point operations, is O(n), where n is the sample size. The effectiveness of the proposed scheme is illustrated using simulated data.
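
The geometric idea can be outlined in a few lines; the following is a simplified sketch (not the published algorithm, which adds SNR-dependent projections and other refinements):

```python
# Simplified sketch of the geometric core of VCA: each endmember is taken
# as the pixel with the largest projection onto a direction orthogonal to
# the subspace spanned by the endmembers found so far. The published
# algorithm adds SNR-dependent dimensionality reduction, omitted here.
import numpy as np

def vca_sketch(Y, p, seed=0):
    """Y: (bands, pixels) data matrix, p: number of endmembers to find."""
    rng = np.random.default_rng(seed)
    bands, n = Y.shape
    A = np.zeros((bands, p))
    indices = []
    for i in range(p):
        w = rng.standard_normal(bands)
        # Project a random vector onto the orthogonal complement of the
        # subspace spanned by the endmembers already found.
        f = w - A @ (np.linalg.pinv(A) @ w)
        f /= np.linalg.norm(f)
        v = f @ Y                        # scalar projection of every pixel
        k = int(np.argmax(np.abs(v)))    # extreme pixel = simplex vertex
        indices.append(k)
        A[:, i] = Y[:, k]
    return Y[:, indices], indices

# Toy usage: 3 random endmembers mixed with Dirichlet abundances.
rng = np.random.default_rng(1)
M = rng.random((50, 3))
S = rng.dirichlet(np.ones(3), size=1000).T
E, idx = vca_sketch(M @ S, p=3)
```

On this toy mixture the recovered columns should approximate the most extreme (nearly pure) pixels, illustrating why the method assumes at least one pure pixel per endmember.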

Relevance:

20.00%

Publisher:

Abstract:

Dissertation presented to obtain the degree of Doctor of Philosophy in Electrical Engineering, speciality in Perceptional Systems, by the Universidade Nova de Lisboa, Faculty of Sciences and Technology.

Relevance:

20.00%

Publisher:

Abstract:

The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]. The nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]. The nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, under the maximum likelihood setup [19], the constrained least-squares approach [20], the spectral signature matching [21], the spectral angle mapper [22], and the subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, to feature extraction, and to unsupervised recognition [28, 29]. ICA consists in finding a linear decomposition of observed data yielding statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward.
In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among abundances. This dependence compromises ICA applicability to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades the ICA performance. Independent factor analysis (IFA) [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance. Considering the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. The MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at a lower computational complexity, some algorithms such as the vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and the N-FINDR [45] still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, very often, the processing of hyperspectral data, including unmixing, is preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. The newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations. To overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performances. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model. This model takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL) based algorithm [55].
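
The dependence and the simplex geometry invoked above both follow from the constraints of the linear mixing model; in generic notation (symbols here may differ from the chapter's), each observed spectral vector is

$$
\mathbf{y} \;=\; M\boldsymbol{\alpha} + \mathbf{n},
\qquad
\alpha_j \ge 0,\quad \sum_{j=1}^{p}\alpha_j = 1,
$$

where M is the (bands x p) matrix of endmember signatures, alpha the vector of abundance fractions, and n the system noise. The nonnegativity and constant-sum constraints confine noiseless observations to the simplex whose vertices are the columns of M, and they also make the abundances statistically dependent, which is precisely the obstacle to ICA and IFA discussed here.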
We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, where abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm. This approach is in the vein of references 39 and 56, replacing independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need to have pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief summary of ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the ICA and IFA limitations in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
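
For concreteness, the prior underlying the blind unmixing scheme sketched above replaces the mixture of Gaussians over independent sources with a mixture of Dirichlet densities over the abundance vector (generic notation, assumed rather than quoted from the chapter):

$$
p(\boldsymbol{\alpha}) \;=\; \sum_{k=1}^{K} \epsilon_k\,
\mathcal{D}\!\left(\boldsymbol{\alpha}\,\middle|\,\boldsymbol{\theta}_k\right),
\qquad
\epsilon_k \ge 0,\quad \sum_{k=1}^{K}\epsilon_k = 1,
$$

where each Dirichlet component has support exactly on the probability simplex, so positivity and full additivity hold by construction; the mixing matrix is then inferred with the EM-type algorithm mentioned above.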