943 results for finite temperature BHF approach
Abstract:
Component joining is typically performed by welding, fastening, or adhesive bonding. For bonded aerospace applications, adhesives must withstand high temperatures (200°C or above, depending on the application), which requires their mechanical characterization under identical conditions. The extended finite element method (XFEM) is an enhancement of the finite element method (FEM) that can be used for the strength prediction of bonded structures. This work proposes and validates damage laws for a thin layer of an epoxy adhesive at room temperature (RT), 100, 150, and 200°C using the XFEM. The fracture toughness (GIc) and the maximum load in pure tensile loading were determined by testing double-cantilever beam (DCB) and bulk tensile specimens, respectively, which permitted building the damage laws for each temperature. The bulk test results revealed that the maximum load decreased gradually with temperature. On the other hand, the value of GIc of the adhesive, extracted from the DCB data, was shown to be relatively insensitive to temperature up to the glass transition temperature (Tg), while above Tg (at 200°C) a marked reduction took place. The output of the DCB numerical simulations for the various temperatures showed good agreement with the experimental results, which validated the obtained data for strength prediction of bonded joints in tension. Based on these results, the XFEM proved to be a viable alternative for the accurate strength prediction of bonded structures.
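A minimal sketch of how such a damage law could be assembled from the two measured quantities, assuming a triangular (linear-softening) traction-separation shape; the adhesive modulus, thickness, and all numerical values below are illustrative stand-ins, not the paper's data:

```python
import numpy as np

def triangular_damage_law(sigma_max, G_Ic, E, t_adhesive):
    """Build a triangular (linear-softening) traction-separation law.

    sigma_max  : peak cohesive strength [Pa] (stand-in for the bulk-test strength)
    G_Ic       : mode I fracture toughness [N/m]
    E          : adhesive Young's modulus [Pa]
    t_adhesive : adhesive layer thickness [m]
    Returns the separations at damage onset and at complete failure.
    """
    K = E / t_adhesive              # initial (penalty) stiffness of the thin layer
    delta_0 = sigma_max / K         # separation at damage initiation
    delta_f = 2.0 * G_Ic / sigma_max  # area under the law equals G_Ic
    return delta_0, delta_f

# Illustrative numbers only (not the measured values from the paper)
d0, df = triangular_damage_law(sigma_max=20e6, G_Ic=500.0, E=2.0e9, t_adhesive=0.2e-3)
print(f"onset separation {d0:.2e} m, failure separation {df:.2e} m")
```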
Abstract:
Global warming and the associated climate changes have been the subject of intensive research due to their major impact on the social, economic, and health aspects of human life. Surface temperature time-series characterise the Earth as a slow-dynamics spatiotemporal system, exhibiting long-memory behaviour typical of fractional-order systems. Such phenomena are difficult to model and analyse, demanding alternative approaches. This paper studies the complex correlations between global temperature time-series using the multidimensional scaling (MDS) approach. MDS provides a graphical representation of the pattern of climatic similarities between regions around the globe. The similarities are quantified through two mathematical indices that correlate the monthly average temperatures observed in meteorological stations over a given period of time. Furthermore, the time dynamics are analysed by performing the MDS analysis over slices sampled from the time series. MDS generates maps describing the stations' loci such that stations perceived as similar to each other are placed close together on the map, forming clusters. We show that MDS provides an intuitive and useful visual representation of the complex relationships present among temperature time-series, which are not perceived on traditional geographic maps. Moreover, MDS avoids sensitivity to the irregular distribution density of the meteorological stations.
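A minimal sketch of the MDS step, assuming a correlation-based dissimilarity between monthly temperature series; the data and the single index choice below are illustrative, not the paper's two indices:

```python
import numpy as np
from sklearn.manifold import MDS

# Illustrative data: rows are stations, columns are monthly mean temperatures
rng = np.random.default_rng(0)
temps = rng.normal(size=(10, 240))  # 10 stations, 20 years of monthly values

# Correlation-based dissimilarity: d_ij = 1 - corr(x_i, x_j)
dissimilarity = 1.0 - np.corrcoef(temps)

# Embed the stations in 2-D so that similar series end up close together
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissimilarity)
print(coords)
```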
Abstract:
A mathematical model is proposed for the evolution of temperature, chemical composition, and energy release in the bubbles, clouds, and emulsion phase during combustion of gaseous premixtures of air and propane in a bubbling fluidized bed. The analysis starts when the bubbles are formed at the orifices of the distributor and follows them until they explode inside the bed or emerge at the free surface of the bed. The model also considers the freeboard region of the fluidized bed until the propane is thoroughly burned. It is essentially built upon the quasi-global mechanism of Hautman et al. (1981) and the mass and heat transfer equations from the two-phase model of Davidson and Harrison (1963). The focus is not on a new modeling approach, but on combining the classical models of the kinetics and other diffusional aspects to obtain a better insight into the events occurring inside a fluidized bed reactor. Experimental data are obtained to validate the model by testing the combustion of commercial propane, in a laboratory-scale fluidized bed, using four sand particle sizes: 400–500, 315–400, 250–315, and 200–250 µm. The mole fractions of CO2, CO, and O2 in the flue gases and the temperature of the fluidized bed are measured and compared with the numerical results.
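As a rough illustration of the kind of balance being integrated, here is a minimal sketch of a propane balance in a rising bubble exchanging mass with the emulsion phase and reacting with a lumped first-order rate; all coefficients are illustrative assumptions, not the model's fitted values:

```python
import numpy as np
from scipy.integrate import solve_ivp

K_be = 2.0   # bubble-emulsion exchange coefficient [1/s] (illustrative)
k_r  = 5.0   # lumped first-order reaction rate constant [1/s] (illustrative)
C_e  = 0.2   # propane concentration in the emulsion [mol/m^3] (illustrative)

def bubble_balance(t, y):
    C_b = y[0]
    # Exchange with the emulsion plus consumption by a lumped reaction
    dCb_dt = -K_be * (C_b - C_e) - k_r * C_b
    return [dCb_dt]

sol = solve_ivp(bubble_balance, (0.0, 1.0), [1.0])
print(f"bubble propane concentration after 1 s: {sol.y[0, -1]:.3f} mol/m^3")
```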
Abstract:
The aim of this study is to optimize the heat flow through the pultrusion die assembly system in the manufacturing process of a specific glass-fiber reinforced polymer (GFRP) pultrusion profile. The control of the heat flow and its distribution through the whole die assembly system is of vital importance in optimizing the actual GFRP pultrusion process. Through mathematical modeling of the heating-die process, by means of a finite element analysis (FEA) program, an optimum heater selection, die position, and temperature control were achieved. The thermal environment within the die was critically modeled with respect not only to the applied heat sources, but also to the conductive and convective losses, as well as the thermal contribution arising from the exothermic reaction of the resin matrix as it cures or polymerizes from the liquid to the solid state. The numerical simulation was validated against thermographic measurements carried out at key points along the die during the pultrusion process.
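A minimal sketch of the kind of thermal balance involved: one-dimensional transient heat conduction along the die with an exothermic cure source term, solved with an explicit finite-difference scheme. All properties, the source profile, and the boundary conditions are illustrative assumptions, not the die's actual data:

```python
import numpy as np

L, n = 1.0, 101                     # die length [m], grid points
dx = L / (n - 1)
alpha = 1e-5                        # thermal diffusivity [m^2/s]
rho_cp = 2.0e6                      # volumetric heat capacity [J/(m^3 K)]
# Illustrative Gaussian cure-heat release centred inside the die [W/m^3]
q_exo = 5.0e3 * np.exp(-((np.linspace(0, L, n) - 0.4) / 0.05) ** 2)

T = np.full(n, 25.0)                # initial temperature [degC]
dt = 0.4 * dx**2 / alpha            # within the explicit stability limit

for _ in range(20000):
    lap = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
    T[1:-1] += dt * (alpha * lap + q_exo[1:-1] / rho_cp)
    T[0] = T[-1] = 25.0             # die ends held at ambient (illustrative BC)

print(f"peak die temperature ~ {T.max():.1f} degC")
```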
Abstract:
Global warming, driven by high CO2 emissions in recent years, has made energy saving a global concern. However, manufacturing processes such as pultrusion necessarily require heat to cure the resin, so the only option available is to make the process as efficient as possible. Different heating systems have been used in pultrusion; the most widely used are planar resistances. The main objective of this study is to develop another heating system and compare it with the former one. Thermography was used to define the temperature profile along the die. FEA (finite element analysis) was used to estimate how much energy is spent with the initial heating system. After this first approach, changes were made to the die in order to test the new heating system and to check for possible quality problems in the product. This work shows that the new heating system enables a significant reduction in setup time and that an energy reduction of about 57% was achieved.
Abstract:
This study is based on a previous experimental work in which embedded cylindrical heaters were applied to a pultrusion machine die, and the resulting energy performance was compared with that achieved with the former heating system based on planar resistances. The previous work showed that the use of embedded resistances significantly enhances the energy performance of the pultrusion process, leading to a 57% decrease in energy consumption. However, the aforementioned study was based on an existing pultrusion die, which only allowed a single relative position for the heaters. In the present work, new relative positions for the heaters were investigated in order to optimize the heat distribution and the energy consumption. Finite element analysis was applied as an efficient tool to identify the best relative position of the heaters in the die, taking into account the usual parameters involved in the process and the control system already tested in the previous study. The analysis was first developed with eight cylindrical heaters arranged in four different layouts. In a second phase, in order to refine the results, a new approach was adopted using sixteen heaters with the same total power. The final results show that the correct positioning of the heaters can contribute a further reduction of about 10% in energy consumption, decreasing production costs and leading to a better eco-efficiency of the pultrusion process.
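A minimal sketch of the layout-search idea: enumerate candidate heater layouts and keep the one with the lowest predicted energy consumption. The cost function below is a hypothetical surrogate standing in for the FEA thermal model, and the positions are illustrative:

```python
from itertools import combinations

# Candidate axial positions for the embedded heaters along the die [m] (illustrative)
candidate_positions = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]

def predicted_energy(layout):
    """Hypothetical surrogate cost: penalise layouts far from a target hot zone.
    In the study this role is played by the FEA model of the die."""
    target = 0.45
    return sum((p - target) ** 2 for p in layout)

best = min(combinations(candidate_positions, 4), key=predicted_energy)
print("best 4-heater layout:", best)
```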
Abstract:
Volatile organic compounds are a common source of groundwater contamination that can be easily removed by air stripping in columns with random packing, using a counter-current flow between the phases. This work proposes a new methodology for column design for any type of packing and contaminant which avoids the need for an arbitrarily chosen diameter. It also avoids the use of the usual graphical Eckert correlations for pressure drop. The hydraulic features are chosen beforehand as a design criterion. The design procedure was translated into a convenient algorithm in the C++ language. A column was built in order to test the design and the theoretical steady-state and dynamic behaviour. The experiments were conducted using a solution of chloroform in distilled water. The results allowed for a correction in the theoretical global mass transfer coefficient previously estimated by the Onda correlations, which depend on several parameters that are not easy to control in experiments. To better describe the column behaviour under steady-state and dynamic conditions, an original mathematical model was developed. It consists of a system of two nonlinear partial differential equations (distributed parameters). Nevertheless, when the flows are steady, the system becomes linear, although no analytical solution is evident. In steady state the resulting ODE can be solved by analytical methods, and in the dynamic state the discretization of the PDE by finite differences overcomes this difficulty. To estimate the contaminant concentrations in both phases in the column, a numerical algorithm was used. The high number of resulting algebraic equations and the impossibility of generating a recursive procedure did not allow the construction of a generalized programme, but an iterative procedure developed in an electronic worksheet allowed for the simulation. The solution is stable only for similar discretization values; if different values for the time and space discretization parameters are used, the solution easily becomes unstable. The dynamic behaviour of the system was simulated for the common liquid-phase perturbations: step, impulse, rectangular pulse, and sinusoidal. The final results do not exhibit strange or unpredictable behaviours.
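A minimal sketch of the dynamic finite-difference idea, assuming simplified per-volume balances for the two counter-current phases with a single volumetric mass-transfer coefficient; the velocities, coefficients, and boundary values are illustrative, not the column's data:

```python
import numpy as np

# z runs from the top (liquid inlet) to the bottom (gas inlet) of the packing
n, L = 50, 2.0                 # grid points, packing height [m]
dz = L / (n - 1)
u_L, u_G = 0.005, 0.05         # liquid (down) and gas (up) velocities [m/s]
KLa, H = 0.01, 0.15            # mass-transfer coefficient [1/s], Henry coefficient [-]
dt = 0.5 * dz / max(u_L, u_G)  # CFL-limited time step

C_L = np.zeros(n)
C_G = np.zeros(n)
C_L_in = 1.0                   # liquid feed concentration at the top (arbitrary units)

for _ in range(5000):
    transfer = KLa * (C_L - C_G / H)              # stripping driving force
    dCL = np.zeros(n)
    dCL[1:] = -u_L * (C_L[1:] - C_L[:-1]) / dz - transfer[1:]    # liquid moves down
    dCG = np.zeros(n)
    dCG[:-1] = u_G * (C_G[1:] - C_G[:-1]) / dz + transfer[:-1]   # gas moves up
    C_L += dt * dCL
    C_G += dt * dCG
    C_L[0] = C_L_in            # liquid inlet at the top
    C_G[-1] = 0.0              # clean air enters at the bottom

print(f"liquid outlet concentration ~ {C_L[-1]:.3f}")
```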
Abstract:
Dissertation for obtaining the degree of Master in Electrical Engineering, Energy Branch
Abstract:
Hyperspectral remote sensing exploits the electromagnetic scattering patterns of the different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixing of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (or the mixtures are intimate) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors with an unmixing matrix which minimizes the mutual information among sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when the sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33]. Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref.
[37] is also of MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. The MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum-volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to assure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum-volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requirement that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to the extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data. ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of a lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented towards real-time target detection from uncrewed air vehicles using hyperspectral data [46]. In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices.
The latter is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data, in the least-squares sense [48, 49]. We note, however, that VCA works with both projected and unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR; yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Section 19.3 and Section 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
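A schematic sketch of the projection-and-extreme idea described above, written as an illustration under simplifying assumptions; it is not the published VCA implementation, and the function name and synthetic data are ours:

```python
import numpy as np

def extract_endmembers(R, p):
    """Iteratively project the data onto a direction orthogonal to the subspace
    spanned by the endmembers found so far and take the extreme projection as
    the next endmember (schematic illustration only).

    R : (bands, pixels) matrix of spectral vectors
    p : number of endmembers to extract
    """
    bands, _ = R.shape
    indices = []
    E = np.zeros((bands, 0))                 # endmembers found so far
    rng = np.random.default_rng(0)
    for _ in range(p):
        w = rng.normal(size=bands)           # random direction
        if E.shape[1] > 0:
            Q, _ = np.linalg.qr(E)           # orthonormal basis of span(E)
            w = w - Q @ (Q.T @ w)            # remove the component in span(E)
        w /= np.linalg.norm(w)
        idx = int(np.argmax(np.abs(w @ R)))  # extreme of the projection
        indices.append(idx)
        E = R[:, indices]
    return indices

# Tiny synthetic example: 3 endmembers, 200 linearly mixed pixels
rng = np.random.default_rng(1)
M = rng.random((50, 3))                      # endmember signatures
A = rng.dirichlet(np.ones(3), size=200).T    # abundance fractions (sum to 1)
R = M @ A
print(extract_endmembers(R, 3))
```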
Abstract:
Structure and Infrastructure Engineering, 1-17
Abstract:
The excessive use of pesticides and fertilisers in agriculture has led to a decrease in groundwater and surface water quality in many regions of the EU, constituting a hazard for human health and the environment. In addition, on-site sewage disposal is an important source of groundwater contamination in urban and peri-urban areas. The assessment of groundwater vulnerability to contamination is an important tool to fulfil the demands of EU Directives. The purpose of this study is to assess the groundwater vulnerability to contamination related mainly to agricultural activities in a peri-urban area (Vila do Conde, NW Portugal). The hydrogeological framework is characterised mainly by a fissured granitic basement and a sedimentary cover. Water samples were collected and analysed for temperature, pH, electrical conductivity, chloride, phosphate, nitrate, and nitrite. Groundwater vulnerability to contamination was evaluated (GOD-S, Pesticide DRASTIC-Fm, SINTACS and SI) and the potential nitrate contamination risk was assessed, both using hydrogeological GIS-based mapping. A principal component analysis was performed to characterise the patterns of relationship among groundwater contamination, vulnerability, and the hydrogeological setting. Levels of nitrate above the legal limits were detected in 75% of the samples analysed. Alluvial units showed the highest nitrate concentrations and also the highest vulnerability and risk. Nitrate contamination is a serious problem affecting groundwater, particularly shallow aquifers, mainly due to agricultural activities, livestock, and cesspools. GIS-based cartography provided an accurate way to improve knowledge of water circulation models and the global functioning of local aquifer systems. Finally, this study highlights the adequacy of an integrated approach, combining hydrogeochemical data, vulnerability assessments, and multivariate analysis, to understand groundwater processes in peri-urban areas.
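A minimal sketch of the multivariate step, assuming PCA on standardised water-quality parameters; the random data below are a placeholder, not the study's measurements:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Placeholder data: 30 sampling points x 7 parameters (T, pH, EC, Cl, PO4, NO3, NO2)
rng = np.random.default_rng(0)
samples = rng.normal(size=(30, 7))

# Standardise, then project onto the first two principal components
scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(samples))
print(scores[:5])
```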
Abstract:
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.
Abstract:
Dissertation for obtaining the Degree of Master in Electrical and Computer Engineering
Abstract:
The present dissertation focuses on the recent approach of using innovative high-temperature superconducting stacked tapes in electrical machine applications, taking into account their potential benefits as an alternative to massive superconducting bulks, mainly related to geometric and mechanical flexibility. This work was developed in collaboration with the Institut de Ciència de Materials de Barcelona (ICMAB) and concerns the evaluation of the electrical and magnetic properties of the mentioned superconducting materials, namely: analysis of the magnetization of a bulk sample through simulations carried out in the finite element software COMSOL; measurement of the superconducting tape resistivity at liquid-nitrogen and room temperatures; and, finally, development and testing of a frequency-controlled superconducting motor with a rotor built from superconducting tapes. In the superconducting state, the results showed a critical current density of 140.3 MA/m² (corresponding to a current of 51.15 A) in the tape and a developed motor torque of 1 N·m, independent of the rotor position angle, as is typical of hysteresis motors.
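A quick consistency check of the two reported figures through I = Jc·A, assuming the current density is an engineering value taken over the tape cross-section; the inferred area is our back-calculation, not a value stated in the abstract:

```python
# Relate the reported critical current density and critical current (I = Jc * A)
J_c = 140.3e6      # critical current density [A/m^2]
I_c = 51.15        # critical current [A]

A = I_c / J_c      # implied cross-sectional area [m^2]
print(f"implied tape cross-section ~ {A * 1e6:.3f} mm^2")  # ~ 0.365 mm^2
```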
Abstract:
With the projected increase in world population, hand in hand with the growing number of developed countries, the increased demand for basic chemical building blocks, such as ethylene and propylene, has to be properly addressed in the coming decades. The methanol-to-olefins (MTO) reaction is an interesting route to produce those alkenes from coal, gas, or alternative sources such as biomass, via syngas for the production of methanol. This technology has been widely applied since 1985, and most of the processes make use of zeolites as catalysts, particularly ZSM-5. Although its selectivity is not especially biased towards light olefins, it resists rapid deactivation by coke deposition, making it attractive in industrial environments. Nevertheless, this is a highly exothermic reaction, in which it is hard to control and anticipate problems such as temperature runaways or hot spots inside the catalytic bed. The main focus of this project is to study those temperature effects on two fronts: an experimental front, in which the catalytic performance and the temperature profiles are studied, and a modelling front, which consists of a five-step strategy to predict the weight fractions and the activity. The mindset of catalytic testing is present in all the assays developed. It was verified that the selectivity towards light olefins increases with temperature, although this also leads to a much faster catalyst deactivation. To counter this effect, experiments were carried out using a diluted bed, which increased the catalyst lifetime by between 32% and 47%. Additionally, experiments with three thermocouples placed inside the catalytic bed were performed, analysing the deactivation wave and the peaks of temperature throughout the bed. Regeneration was performed between consecutive runs, and it was concluded that this can be a powerful means of increasing the catalyst lifetime while maintaining a constant selectivity towards light olefins, through loss of acid strength in a steam-stabilised zeolitic structure. On the modelling front, the developments led to the construction of a basic model, able to predict weight fractions, which should be tuned further to become a tool for predicting deactivation and temperature profiles.
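A minimal sketch of the kind of lumped kinetic model with deactivation that the modelling front refers to; the lumping scheme, rate constants, and deactivation law are hypothetical illustrations, not the model developed in the project:

```python
import numpy as np
from scipy.integrate import solve_ivp

k1, k2 = 5.0, 0.05     # lumped rate constants [1/s] (illustrative)
kd = 0.01              # deactivation rate constant [1/s] (illustrative)

def rhs(t, y):
    w_meoh, w_olef, w_heavy, a = y   # weight fractions and catalyst activity
    r1 = k1 * a * w_meoh             # methanol -> light olefins
    r2 = k2 * a * w_olef             # light olefins -> heavier products
    da = -kd * a * w_meoh            # simple concentration-dependent deactivation
    return [-r1, r1 - r2, r2, da]

sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0, 0.0, 1.0], max_step=0.1)
print(f"olefin weight fraction after 10 s ~ {sol.y[1, -1]:.2f}")
```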