965 results for Process analysis
Abstract:
The single-lap joint is the most commonly used, although it endures significant bending due to the non-collinear load path, which negatively affects its load-bearing capabilities. Material or geometric modifications are widely documented in the literature as a means to mitigate this drawback, acting by reducing peak peel and shear stresses or by altering the failure mechanism through local changes. In this work, the effect of using adherends of different thickness on the tensile strength of single-lap joints, bonded with a ductile and a brittle adhesive, was numerically and experimentally evaluated. The joints were tested under tension for different combinations of adherend thickness. The effect of the adherend thickness mismatch on the stress distributions was also investigated by Finite Elements (FE), which explained the experimental results and the strength predictions of the joints. The numerical study was carried out by FE and Cohesive Zone Modelling (CZM), which allowed the entire fracture process to be characterized. For this purpose, an FE analysis was performed in ABAQUS® considering geometric non-linearities. In the end, a detailed comparative evaluation of unbalanced joints, commonly used in engineering applications, is presented to give an understanding of how modifications to the thickness of bonded structures can influence joint performance.
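For readers unfamiliar with CZM, the bilinear (triangular) traction-separation law commonly used in such analyses can be sketched in a few lines. This is only an illustration of the cohesive-law ingredients, not the paper's ABAQUS model; the stiffness, strength, and toughness values are hypothetical placeholders.

```python
import numpy as np

def bilinear_traction(delta, K=1e4, t0=20.0, Gc=0.5):
    """Bilinear traction-separation law, a common CZM choice.

    delta : separation [mm]; K : initial stiffness [MPa/mm];
    t0 : cohesive strength [MPa]; Gc : fracture toughness [N/mm].
    All values are illustrative, not the paper's adhesive properties.
    """
    d0 = t0 / K          # separation at damage onset
    df = 2.0 * Gc / t0   # separation at failure (area under curve = Gc)
    delta = np.asarray(delta, dtype=float)
    return np.where(delta <= d0, K * delta,                       # elastic branch
           np.where(delta < df, t0 * (df - delta) / (df - d0),    # linear softening
                    0.0))                                         # fully failed

# Tractions in the elastic, softening, and failed regimes:
print(bilinear_traction([0.001, 0.01, 0.05]))
```

The area under this curve equals the fracture toughness Gc, which is what lets a CZM analysis track the entire fracture process from damage onset to complete separation.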
Abstract:
Infrared spectroscopy, in both the near- and mid-infrared (NIR/MIR) regions of the spectrum, has gained great acceptance in industry for bioprocess monitoring according to Process Analytical Technology, owing to its rapid, economical, highly sensitive mode of application and its versatility. Given the relevance of cyprosin (mostly to the dairy industry), and since NIR and MIR spectroscopy present specific characteristics that may ultimately complement each other, in the present work these techniques were compared for monitoring and characterizing recombinant cyprosin production by Saccharomyces cerevisiae, by in situ and by at-line high-throughput analysis, respectively. Partial least-squares regression models relating NIR and MIR spectral features with biomass, cyprosin activity, specific activity, glucose, galactose, ethanol and acetate concentration were developed, all presenting, in general, high regression coefficients and low prediction errors. In the case of biomass and glucose, slightly better models were achieved by in situ NIR spectroscopic analysis, while for cyprosin activity and specific activity slightly better models were achieved by at-line MIR spectroscopic analysis. Therefore, both techniques enabled monitoring of the highly dynamic cyprosin production bioprocess, thereby providing more efficient platforms for bioprocess optimization and control.
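As a rough sketch of the modelling step described above (not the study's actual calibration), the following fits a partial least-squares regression to synthetic "spectra" with scikit-learn; the spectra, analyte values, and number of latent variables are placeholders.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for NIR/MIR spectra: 100 samples x 400 wavelengths,
# with the analyte (e.g. glucose) hidden in a few spectral bands plus noise.
X = rng.normal(size=(100, 400))
y = 2.0 * X[:, 50] - 1.5 * X[:, 200] + rng.normal(scale=0.1, size=100)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

pls = PLSRegression(n_components=5)   # number of latent variables
pls.fit(X_tr, y_tr)

print("R^2 on held-out spectra:", pls.score(X_te, y_te))
print("RMSEP:", np.sqrt(np.mean((pls.predict(X_te).ravel() - y_te) ** 2)))
```

In practice the number of latent variables would be chosen by cross-validation, and the regression coefficients and prediction errors reported in the abstract correspond to exactly this kind of held-out evaluation.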
Abstract:
This work evaluates the mechanical and durability performance of concrete made with coarse recycled concrete aggregates (CRCA) obtained using two crushing processes: primary crushing (PC) and primary plus secondary crushing (PSC). This analysis is intended to select the most efficient production process for recycled aggregates (RA). The RA used here resulted from precast products (P), with strength classes of 20 MPa, 45 MPa and 65 MPa, and from laboratory-made concrete (L) with the same compressive strengths. The concrete was evaluated with the following tests: compressive strength; splitting tensile strength; modulus of elasticity; carbonation resistance; chloride penetration resistance; capillary water absorption; and water absorption by immersion. These findings contribute to a solid and innovative basis that allows the precasting industry to use the waste it generates without restrictions. © (2015) Trans Tech Publications, Switzerland.
Abstract:
The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial resolution element and at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located at that resolution element. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]. The nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]. The nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model, and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, under the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, and target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases, however, the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, feature extraction, and unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of the observed data that yields statistically independent components. Given that hyperspectral data are, in certain circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where the sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels, and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward.
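To make the "linear problem" concrete, here is a minimal constrained least-squares unmixing sketch: non-negativity is enforced by NNLS, and the sum-to-one constraint is appended as a heavily weighted extra equation (a standard trick, not necessarily the exact formulation of reference 20). The signatures and abundances below are simulated placeholders.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)

L, p = 200, 3                      # spectral bands, endmembers
M = rng.uniform(0.0, 1.0, (L, p))  # known endmember signatures (placeholder)
a_true = np.array([0.6, 0.3, 0.1]) # abundances: nonnegative, sum to one
x = M @ a_true + rng.normal(scale=0.001, size=L)  # observed pixel spectrum

# Fully constrained LS: append the sum-to-one constraint as a weighted row;
# NNLS keeps the abundance estimates nonnegative.
delta = 100.0
A = np.vstack([M, delta * np.ones((1, p))])
b = np.concatenate([x, [delta]])
a_hat, _ = nnls(A, b)

print("estimated abundances:", np.round(a_hat, 3))  # ~ [0.6, 0.3, 0.1]
```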
In the second approach, ICA is based on the assumption of mutually independent sources, which does not hold for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among abundances. This dependence compromises the applicability of ICA to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades ICA performance. Independent factor analysis (IFA) [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance. Under the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. MVT-type approaches are computationally complex: usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum-volume simplex to it. Aiming at a lower computational complexity, some algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum-volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of purest pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, the processing of hyperspectral data, including unmixing, is very often preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. The newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations. To overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performance. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model. This model takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL)-based algorithm [55].
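The MOG-fitting step can be approximated in a few lines with scikit-learn; the sketch below uses BIC as a stand-in for the MDL-based criterion of reference 55 (the two are closely related but not identical), on synthetic one-dimensional data.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)

# Toy 1-D data drawn from two Gaussians.
data = np.concatenate([rng.normal(-2.0, 0.5, 400),
                       rng.normal(1.5, 1.0, 600)]).reshape(-1, 1)

# Select the number of mixture components by BIC (an MDL-like criterion).
models = [GaussianMixture(n_components=k, random_state=0).fit(data)
          for k in range(1, 6)]
best = min(models, key=lambda m: m.bic(data))

print("components chosen:", best.n_components)   # expected: 2
print("estimated means:", np.round(best.means_.ravel(), 2))
```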
We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, where abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm. This approach is in the vein of references 39 and 56, replacing independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need for pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief overview of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the limitations of ICA and IFA in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
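The core failure mode discussed above, that the dependence-minimizing unmixing matrix need not be the true one, can be reproduced with a toy simulation: abundances drawn on the simplex (hence mutually dependent through the constant sum) are mixed linearly, and FastICA is asked to recover them. This only illustrates the argument, not the chapter's experiments.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(2)

n_pixels, p, L = 5000, 3, 10
abund = rng.dirichlet(alpha=[1.0, 1.0, 1.0], size=n_pixels)  # rows sum to 1
M = rng.uniform(size=(L, p))                                  # signatures
X = abund @ M.T + rng.normal(scale=0.01, size=(n_pixels, L))  # mixed pixels

ica = FastICA(n_components=p, random_state=0)
S = ica.fit_transform(X)  # "independent" components

# Correlate each estimated component with each true abundance map: because
# the abundances are dependent (constant sum), no permutation of the ICA
# components matches them cleanly.
corr = np.corrcoef(S.T, abund.T)[:p, p:]
print(np.round(np.abs(corr), 2))
```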
Abstract:
This paper introduces a new method to blindly unmix hyperspectral data, termed dependent component analysis (DECA). This method decomposes a hyperspectral image into a collection of reflectance (or radiance) spectra of the materials present in the scene (endmember signatures) and the corresponding abundance fractions at each pixel. DECA assumes that each pixel is a linear mixture of the endmember signatures weighted by the corresponding abundance fractions. These abundances are modeled as mixtures of Dirichlet densities, thus enforcing the constraints on abundance fractions imposed by the acquisition process, namely non-negativity and constant sum. The mixing matrix is inferred by a generalized expectation-maximization (GEM) type algorithm. This method overcomes the limitations of unmixing methods based on Independent Component Analysis (ICA) and on geometry-based approaches. The effectiveness of the proposed method is illustrated using simulated data based on U.S.G.S. laboratory spectra and real hyperspectral data collected by the AVIRIS sensor over Cuprite, Nevada.
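A sketch of DECA's generative assumptions (the GEM inference itself is not shown): abundances are drawn from a mixture of Dirichlet densities, which satisfy non-negativity and constant sum by construction, and pixels are then formed as linear mixtures. Mixture weights, Dirichlet parameters, and signatures below are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

n_pixels, p, L = 1000, 3, 50
# Two-component mixture of Dirichlet densities over the abundances.
weights = [0.4, 0.6]
alphas = [np.array([8.0, 1.0, 1.0]),   # pixels dominated by endmember 1
          np.array([1.0, 4.0, 4.0])]   # mixed pixels of endmembers 2 and 3

comp = rng.choice(2, size=n_pixels, p=weights)
abund = np.stack([rng.dirichlet(alphas[c]) for c in comp])

M = rng.uniform(size=(L, p))                                   # signatures
X = abund @ M.T + rng.normal(scale=0.005, size=(n_pixels, L))  # observed pixels

# The Dirichlet model enforces the acquisition constraints by construction:
print("all abundances nonnegative:", bool(abund.min() >= 0.0))
print("rows sum to one:", np.allclose(abund.sum(axis=1), 1.0))
```

Because the constraints are built into the prior rather than checked after the fact, no pure pixels are needed in the data, which is the advantage DECA claims over the geometry-based methods.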
Abstract:
Thesis submitted to Faculdade de Ciências e Tecnologia of Universidade Nova de Lisboa in partial fulfilment of the requirements for the degree of Master in Computer Science
Risk Acceptance in the Furniture Sector: Analysis of Acceptance Level and Relevant Influence Factors
Abstract:
Risk acceptance has been broadly discussed in relation to hazardous activities and/or technologies. A better understanding of risk acceptance in occupational settings is also important; however, studies on this topic are scarce. It seems important to understand the level of risk that stakeholders consider sufficiently low, how stakeholders form their opinion about risk, and why they adopt a certain attitude toward risk. Accordingly, the aim of this study is to examine risk acceptance in regard to occupational accidents in the furniture industry. The safety climate analysis was conducted through the application of the Safety Climate in Wood Industries questionnaire. Judgments about risk acceptance, trust, risk perception, benefit perception, emotions, and moral values were measured. Several models were tested to explain occupational risk acceptance. The results showed that the level of risk acceptance decreased as the risk level increased. High-risk and death scenarios were assessed as unacceptable. Risk perception, emotions, and trust had an important influence on risk acceptance. Safety climate was correlated with risk acceptance and with other variables that influence risk acceptance. These results are important for the risk assessment process in terms of defining risk acceptance criteria and strategies to reduce risks.
Abstract:
Formula Student events gather engineering students, who compete in designing, building and racing single-seater cars. The ISEP team is working on its first car, which will soon take part in this competition. This work aims to analyze the current chassis design, focusing on suspension geometry and frame performance. After analyzing the results of the planned tests, suggestions that can be taken into consideration during the design process of future cars will be presented. As the car has not yet been tested, this work can also help explain its on-track performance later.
Abstract:
The cork stopper manufacturing process includes an operation, known as stabilisation, during which humid cork slabs are extensively colonised by fungi. The effects of fungal growth on cork are yet to be completely understood and are considered to be involved in the so-called “cork taint” of bottled wine. It is essential to identify the environmental constraints that define the appearance of the colonising fungal species and to trace their origin to the forest and/or to residents of the manufacturing space. The present article correlates two sets of data, from consecutive years and the same season, of systematic biological sampling of two manufacturing units, located in the North and South of Portugal. The dominance of Chrysonilia sitophila was identified, followed by a high diversity of Penicillium species. Penicillium glabrum, found in all samples, was the most frequently isolated species. P. glabrum intra-species variability was investigated using DNA fingerprinting techniques, revealing highly discriminative polymorphic markers in the genome. Cluster analysis of the P. glabrum data was discussed in relation to the geographical location of strains, and the results suggest that P. glabrum arises predominantly from the manufacturing space, although cork-resident fungi can also contribute.
Abstract:
Dissertation presented to obtain the Ph.D. degree in Biology
Abstract:
Waves of globalization reflect historical technical progress and modern economic growth. The dynamics of this process are approached here using the multidimensional scaling (MDS) methodology to analyze the evolution of GDP per capita, international trade openness, life expectancy, and tertiary education enrollment in 14 countries. MDS provides the appropriate theoretical concepts and the exact mathematical tools to describe the joint evolution of these indicators of economic growth, globalization, welfare and human development of the world economy from 1977 up to 2012. The polarization dance of the countries sheds light on convergence paths, potential warfare and present-day rivalries in the global geopolitical scene.
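A minimal sketch of the MDS step on a country-by-indicator matrix (the data below are random placeholders for the four indicators, not the study's series):

```python
import numpy as np
from sklearn.manifold import MDS
from sklearn.preprocessing import StandardScaler
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(5)

# Placeholder: 14 countries x 4 indicators (GDP per capita, trade openness,
# life expectancy, tertiary enrollment) for one year of the 1977-2012 span.
indicators = rng.normal(size=(14, 4))

# Standardize indicators, compute pairwise distances, embed in 2-D.
Z = StandardScaler().fit_transform(indicators)
D = squareform(pdist(Z))
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(D)
print(coords.shape)  # (14, 2): one point per country
```

Repeating the embedding year by year and tracking each country's point over time yields the "polarization dance" trajectories described above.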
Abstract:
Complex industrial plants exhibit multiple interactions among their smaller parts and with human operators. Failure in one part can propagate across subsystem boundaries, causing a serious disaster. This paper analyzes industrial accident data series from the perspective of dynamical systems. First, we process real-world data and show that the statistics of the number of fatalities reveal features that are well described by power law (PL) distributions. For early years the data reveal double-PL behavior, while for more recent periods a single PL fits the experimental data better. Second, we analyze the entropy of the data series statistics over time. Third, we use the Kullback–Leibler divergence to compare the empirical data, together with multidimensional scaling (MDS) techniques for data analysis and visualization. Entropy-based analysis is adopted to assess complexity, having the advantage of yielding a single parameter to express relationships between the data. Both the classical and the generalized (fractional) entropy and Kullback–Leibler divergence are used. The generalized measures allow a clear identification of patterns embedded in the data.
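Two of the ingredients above in sketch form: the maximum-likelihood (Hill) estimate of a power-law exponent, and the Kullback–Leibler divergence between the histograms of two periods. The synthetic "fatalities per accident" sample is a placeholder, not the paper's data.

```python
import numpy as np
from scipy.stats import entropy

rng = np.random.default_rng(6)

# Synthetic heavy-tailed sample from a Pareto law with density ~ x^(-2.5).
x = rng.pareto(a=1.5, size=2000) + 1.0  # x >= 1
xmin = 1.0

# Hill / MLE estimator of the PL exponent: alpha = 1 + n / sum(ln(x/xmin)).
alpha_hat = 1.0 + x.size / np.sum(np.log(x / xmin))
print("estimated exponent:", round(alpha_hat, 2))  # ~ 2.5

# Classical KL divergence between the empirical distributions of two periods.
early, late = x[:1000], x[1000:]
bins = np.logspace(0, np.log10(x.max()), 30)
p, _ = np.histogram(early, bins=bins)
q, _ = np.histogram(late, bins=bins)
p = p.astype(float) + 1e-9  # smooth empty bins; entropy() normalizes
q = q.astype(float) + 1e-9
print("KL(early || late):", round(entropy(p, q), 4))
```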
Abstract:
Software tools in education became popular with the widespread adoption of personal computers. Engineering courses led the way in this development, and these tools became almost a standard. Engineering graduates are familiar with numerical analysis tools but also with simulators (e.g. of electronic circuits), computer-assisted design tools and others, depending on the degree. One of the main problems with these tools is when and how to start using them, so that they can be beneficial to students rather than mere substitutes for potentially difficult calculations or design. In this paper, a software tool to be used by first-year students in electronics/electricity courses is presented. The growing acknowledgement and acceptance of open-source software led to the choice of an open-source tool, Scilab, a numerical analysis package, as the basis for a toolbox. The toolbox was developed to be used standalone or integrated into an e-learning platform; the e-learning platform used was Moodle. The first step was to assess the mathematical skills necessary to solve all the problems related to electronics and electricity courses. Analysing existing circuit simulator software tools, it is clear that even though they are very helpful in showing the end result, they are not so effective in supporting students' study and self-learning, since they show results but not the intermediate steps, which are crucial in problems that involve derivatives or integrals. They are also not very effective at producing graphical results that could be used in reports and for an overall better comprehension of the results. The developed tool is a toolbox based on the numerical analysis software Scilab that gives its users not only the end results of a circuit analysis but also the expressions obtained in derivative and integral calculations, along with the ability to plot signals, obtain vector diagrams, etc. The toolbox runs entirely in the Moodle web platform and provides the same results as the standalone application. Students can use the toolbox through the web platform (on computers where they do not have installation privileges) or on their personal computers by installing both the Scilab software and the toolbox. This approach was designed for first-year students from all engineering degrees that have electronics/electricity courses in their curricula.
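The "intermediate steps" idea can be illustrated with any symbolic engine; the sketch below uses Python's SymPy as a stand-in for the Scilab toolbox (the RC-circuit values and expressions are invented, not taken from the paper). A simulator would show only the final numbers; the symbolic expressions are the steps a student needs to see.

```python
import sympy as sp

t = sp.symbols('t', positive=True)
R, C = 1000, sp.Rational(1, 10**6)     # illustrative values: 1 kOhm, 1 uF
vc = 5 * (1 - sp.exp(-t / (R * C)))    # capacitor charging voltage

# Intermediate step a simulator hides: the symbolic current expression.
i = C * sp.diff(vc, t)
print("i(t) =", sp.simplify(i))

# And the symbolic integral: charge delivered up to time T.
T = sp.symbols('T', positive=True)
q = sp.integrate(i, (t, 0, T))
print("q(T) =", sp.simplify(q))
```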
Abstract:
The purpose of this work was to develop a reliable alternative method for the determination of the dithiocarbamate pesticide mancozeb (MCZ) in formulations. Furthermore, a method for the analysis of MCZ's major degradation product, ethylenethiourea (ETU), was also proposed. Cyclic voltammetry was used to characterize the electrochemical behavior of MCZ and ETU, and square-wave adsorptive stripping voltammetry (SWAdSV) was employed for MCZ quantification in commercial formulations. It was found that both MCZ and ETU are irreversibly reduced (−0.6 V and −0.5 V vs Ag/AgCl, respectively) at the surface of a glassy carbon electrode in a mainly diffusion-controlled process, presenting maximum peak current intensities at pH 7.0 (in phosphate-buffered saline electrolyte). Several parameters of the SWAdSV technique were optimized, and linear relationships between concentration and peak current intensity were established over the ranges 10–90 μmol L−1 and 10–110 μmol L−1 for MCZ and ETU, respectively. The limits of detection were 7.0 μmol L−1 for MCZ and 7.8 μmol L−1 for ETU. The optimized method for MCZ was successfully applied to the quantification of this pesticide in two commercial formulations. The developed procedures provided accurate and precise results and could be interesting alternatives to the established methods for quality control of the studied products, as well as for the analysis of MCZ and ETU in environmental samples.
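A sketch of the calibration arithmetic behind such a method: fit the linear range, then estimate the detection limit as 3·s/slope, with s the residual standard deviation of the regression (one common convention; the paper does not state which it used, and the currents below are invented).

```python
import numpy as np

# Hypothetical calibration data: concentration [umol/L] vs peak current [uA].
conc = np.array([10, 30, 50, 70, 90], dtype=float)
ip = np.array([0.52, 1.49, 2.55, 3.46, 4.53])  # invented currents

slope, intercept = np.polyfit(conc, ip, 1)
resid = ip - (slope * conc + intercept)
s = np.sqrt(np.sum(resid**2) / (conc.size - 2))  # residual std deviation

lod = 3.0 * s / slope  # LOD = 3 * s / sensitivity (common convention)
print(f"slope = {slope:.4f} uA L/umol, LOD ~ {lod:.1f} umol/L")
```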
Abstract:
Stroke is one of the most common conditions requiring rehabilitation, and its motor impairments are a major cause of permanent disability. Hemiparesis is observed in 80% of patients after acute stroke. Neuroimaging studies have shown that real and imagined movements produce similar brain activation, supplying evidence that those similarities are based on the same process. Within this context, the combination of mental practice (MP) with physical and occupational therapy appears to be a natural complement based on neurorehabilitation concepts. Our study seeks to investigate whether MP for stroke rehabilitation of the upper limbs is an effective adjunct therapy. Searches of PubMed (Medline), ISI Web of Knowledge (Institute for Scientific Information) and SciELO (Scientific Electronic Library) were completed on 20 February 2015. Data were collected on the following variables: sample size, type of supervision, configuration of mental practice, setting of the physical practice (intensity, number of sets and repetitions, duration of contractions, rest interval between sets, weekly and total duration), measures of sensorimotor deficits used in the main studies, and significant results. Random-effects models were used that take into account the variance within and between studies. Seven articles were selected. There was no statistically significant difference between the two groups (MP vs control), with a pooled effect of −0.6 (95% CI: −1.27 to 0.04) for upper limb motor restoration after stroke. The present meta-analysis concluded that MP is not effective as an adjunct therapeutic strategy for upper limb motor restoration after stroke.
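A sketch of the random-effects pooling referenced above, using the DerSimonian–Laird estimator (a standard choice; the paper does not state which estimator it used). The per-study effect sizes and variances are invented, chosen only so the pooled result lands near a null finding like the one reported.

```python
import numpy as np

# Invented per-study standardized mean differences and their variances.
y = np.array([-0.9, -0.2, -1.1, 0.1, -0.5, -0.8, -0.3])   # 7 studies
v = np.array([0.10, 0.08, 0.15, 0.12, 0.09, 0.20, 0.11])

w = 1.0 / v                            # fixed-effect weights
mu_fe = np.sum(w * y) / np.sum(w)      # fixed-effect pooled estimate
Q = np.sum(w * (y - mu_fe) ** 2)       # Cochran's heterogeneity statistic
k = y.size
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

w_re = 1.0 / (v + tau2)                # random-effects weights
mu = np.sum(w_re * y) / np.sum(w_re)   # pooled effect
se = np.sqrt(1.0 / np.sum(w_re))
print(f"pooled SMD = {mu:.2f} "
      f"(95% CI {mu - 1.96*se:.2f} to {mu + 1.96*se:.2f})")
```

A 95% confidence interval that crosses zero, as in the abstract's −0.6 (−1.27 to 0.04), is exactly what leads to the "not effective" conclusion.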