988 results for Geometric morphometry. Secular trends. Maxillo-mandibular structures
Abstract:
In the Longroiva-Vilariça area, the identification of Cenozoic lithostratigraphic units, their sedimentology and the characterization of their geometric relations with tectonic structures allowed the interpretation of the main palaeogeographic stages: 1) the green-whitish Vilariça Arkoses (Middle Eocene to Oligocene?) represent proximal sediments of a very low-gradient drainage towards the Spanish Tertiary Duero Basin to the east; 2) the Quintãs Formation (late Miocene?) comprises brown-reddish piedmont alluvial deposits, correlative of an important vertical displacement (relative uplift of the western tectonic block) along the NNE-SSW indent-linked strike-slip Bragança-Vilariça-Longroiva fault zone, interpreted as a reactivated deep Hercynian fracture with left-lateral movement; 3) the red Sampaio Formation (Gelasian-early Pleistocene?) is interpreted as downhill conglomeratic deposits related to important overthrusting along this fault zone (defining the present-day narrow graben configuration) and correlative of the beginning of the Atlantic hydrographic incision stage; 4) conglomeratic terraces (middle and late Pleistocene?); 5) alluvial plains and colluvial deposits (Holocene).
Abstract:
The interpretation of 64 seismic reflection profiles in the Algarve continental platform (parallels 36º20'-37º00' and meridians 7º20'-8º40'), calibrated with five petroleum exploration wells, together with the identification of the geometric relations between six Cenozoic seismic units (B to G) and tectonic structures, allowed the construction of successive time-isopach maps (twt/s) and a detailed interpretation of the geologic evolution. Two major tectonic structures were identified: a) the Portimão-Monchique fracture zone (striking N-S); b) an offshore NW-SE fault zone, probably the S. Marcos-Quarteira fault. This structure separates two tectonic domains: the western domain (with N-S and E-W predominant structures and, secondarily, NW-SE and NE-SW) and the eastern domain (dominated by WSW-ENE, NW-SE, NE-SW, NNE-SSW and NNW-SSE structures). A persistent halokinetic activity had two major moments: a) syn-unit C; b) syn- and post-unit E. An increasing flexure of the margin was identified, with spatial and temporal variation of the subsidence. The tectonic regime is considered generally compressive, but the interpretation of the successive stress fields is rendered difficult by the existence of tectonic sub-domains and evaporitic structures.
Abstract:
Cellulose acetate (CA)-silver (Ag) nanocomposite asymmetric membranes were prepared via the wet-phase inversion method by dispersing polyvinylpyrrolidone-protected Ag nanoparticles in membrane casting solutions of different compositions. Silver nanoparticles were synthesized ex situ and added to the casting solution as a concentrated aqueous colloidal dispersion. The effects of the dispersion addition on the structure and on the selective permeation properties of the membranes were studied by comparing the nanocomposites with the silver-free materials. The casting solution composition played an important role in the adequate dispersion of the silver nanoparticles in the membrane. Incorporation of nanoscale silver and the final silver content resulted in structural changes leading to an increase in the hydraulic permeability and molecular weight cut-off of the nanocomposite membranes. (c) 2014 Wiley Periodicals, Inc. J. Appl. Polym. Sci. 2015, 132, 41796.
Abstract:
Amorphous and crystalline sputtered boron carbide thin films have a very high hardness, even surpassing that of bulk crystalline boron carbide (≈41 GPa). However, magnetron-sputtered B-C films have high coefficients of friction (CoF), which limit their industrial application. Nanopatterning of material surfaces has been proposed as a solution to decrease the CoF: the contact area of nanopatterned surfaces is decreased due to the nanometre size of the asperities, which results in a significant reduction of adhesion and friction. In the present work, the surface of amorphous and polycrystalline B-C thin films deposited by magnetron sputtering was nanopatterned using infrared femtosecond laser radiation. Successive parallel laser tracks 10 μm apart were overlapped in order to obtain a processed area of about 3 mm². Sinusoidal-like undulations with the same spatial period as the laser tracks were formed on the surface of the amorphous boron carbide films after laser processing. The undulation amplitude increases with increasing laser fluence. The formation of undulations with a 10 μm period was also observed on the surface of the crystalline boron carbide film processed with a pulse energy of 72 μJ. The amplitude of the undulations is about 10 times higher than in the amorphous films processed at the same pulse energy, due to the higher roughness of the films and the consequent increase in laser radiation absorption. Laser-induced periodic surface structure (LIPSS) formation on the surface of the films was achieved for the three B-C films under study. However, LIPSS are formed under different circumstances. Processing of the amorphous films at low pulse energy (72 μJ) results in LIPSS formation only on localized spots of the film surface. LIPSS formation was also observed on top of the undulations formed after laser processing with 78 μJ of the amorphous film deposited at 800 °C.
Finally, large-area homogeneous LIPSS coverage of the crystalline boron carbide film surface was achieved within a large range of laser fluences, although holes are also formed at higher fluences.
Abstract:
Session 7: Playing with Roles, images and improvising New States of Awareness, 3rd Global Conference, 1st November – 3rd November, 2014, Prague, Czech Republic.
Abstract:
Final Master's project submitted to obtain the master's degree in Civil Engineering
Abstract:
A multiobjective approach to the optimization of passive damping for vibration reduction in sandwich structures is presented in this paper. Constrained optimization is conducted for maximization of the modal loss factors and minimization of the weight of sandwich beams and plates with elastic laminated constraining layers and a viscoelastic core, with layer thicknesses, materials and laminate ply orientation angles as design variables. The problem is solved using the Direct MultiSearch (DMS) solver for derivative-free multiobjective optimization, and the solutions are compared with alternative ones obtained using genetic algorithms.
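At the heart of a multiobjective formulation like the one above is Pareto dominance between candidate designs, here trading off a modal loss factor to be maximized against a weight to be minimized. The following is a minimal, hypothetical sketch of a non-dominated filter (it is not the DMS solver; the candidate values are invented for illustration):

```python
def pareto_front(points):
    """Return the non-dominated (Pareto-optimal) designs.

    points: list of (loss_factor, weight) tuples, where the loss
    factor is to be maximized and the weight minimized.
    """
    front = []
    for i, (lf_i, w_i) in enumerate(points):
        # a point is dominated if some other point is at least as good
        # in both objectives and strictly better in at least one
        dominated = any(
            lf_j >= lf_i and w_j <= w_i and (lf_j > lf_i or w_j < w_i)
            for j, (lf_j, w_j) in enumerate(points)
            if j != i
        )
        if not dominated:
            front.append((lf_i, w_i))
    return front

# Hypothetical (modal loss factor, weight in kg) candidates:
designs = [(0.10, 2.0), (0.20, 3.0), (0.15, 2.5), (0.05, 1.0), (0.10, 3.5)]
```

Here only (0.10, 3.5) is dominated (same loss factor as (0.10, 2.0) but heavier); a solver such as DMS explores the design space to populate exactly this kind of trade-off front.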
Abstract:
The single-lap joint is the most commonly used adhesive joint configuration, although it endures significant bending due to the non-collinear load path, which negatively affects its load-bearing capabilities. The use of material or geometric changes to reduce this handicap is widely documented in the literature, acting through the reduction of peel and shear peak stresses or through alterations of the failure mechanism arising from local modifications. In this work, the effect of using adherends of different thicknesses on the tensile strength of single-lap joints, bonded with a ductile and a brittle adhesive, was numerically and experimentally evaluated. The joints were tested under tension for different combinations of adherend thickness. The effect of the adherend thickness mismatch on the stress distributions was also investigated by Finite Elements (FE), which explained the experimental results and the strength prediction of the joints. The numerical study was carried out by FE and Cohesive Zone Modelling (CZM), which allowed characterizing the entire fracture process. For this purpose, an FE analysis was performed in ABAQUS® considering geometric non-linearities. Finally, a detailed comparative evaluation of unbalanced joints, commonly used in engineering applications, is presented to give an understanding of how modifications in the thickness of bonded structures can influence joint performance.
Abstract:
Beam-like structures are among the most common components in engineering practice, and single-side damage is often encountered. In this study, single-side damage in a free-free beam is analysed numerically with three different finite element models, namely solid, shell and beam models, to demonstrate their performance in simulating real structures. As in the experiment, damage is introduced into one side of the beam, and natural frequencies are extracted from the simulations and compared with experimental and analytical results. Mode shapes are also analysed with the modal assurance criterion. The simulation results reveal a good performance of all three models in extracting natural frequencies; in the intact state, the solid model performs better than the shell model, which in turn performs better than the beam model. For damaged states, the natural frequencies obtained from the solid model are more sensitive to damage severity than those from the shell model, while the shell model performs similarly to the beam model in distinguishing damage. The main contribution of this paper is the comparison of three finite element models against experimental data and analytical solutions. The finite element results show relatively good performance.
Abstract:
Master's degree in Informatics Engineering - Specialization Area in Graphics Systems and Multimedia
Abstract:
The reaction between 2-aminobenzenesulfonic acid and 2-hydroxy-3-methoxybenzaldehyde produces the acyclic Schiff base 2-[(2-hydroxy-3-methoxyphenyl)methylideneamino]benzenesulfonic acid (H2L·3H2O) (1). In situ reactions of this compound with Cu(II) salts, in some cases in the presence of pyridine (py) or 2,2'-bipyridine (2,2'-bipy), lead to the formation of the mononuclear complexes [CuL(H2O)2] (2) and [CuL(2,2'-bipy)]·DMF·H2O (3) and the diphenoxo-bridged dicopper compounds [CuL(py)]2 (4) and [CuL(EtOH)]2·2H2O (5). In 2-5 the L2- ligand acts as a tridentate chelating species by means of one of the O-sulfonate atoms, the O-phenoxo atom and the N atom. The remaining coordination sites are occupied by H2O (in 2), 2,2'-bipyridine (in 3), pyridine (in 4) or EtOH (in 5). Hydrogen-bond interactions result in R2²(14) and R4⁴(12) graph sets, leading to dimeric species (in 2 and 3, respectively), 1D chain associations (in 2 and 5) or a 2D network (1). Complexes 2-5 are applied as selective catalysts for the homogeneous peroxidative (with tert-butyl hydroperoxide, TBHP) oxidation of primary and secondary alcohols, under solvent- and additive-free conditions and under low-power microwave (MW) irradiation. A quantitative yield of acetophenone was obtained by oxidation of 1-phenylethanol with compound 4 (TOFs up to 7.6 × 10³ h⁻¹) after 20 min of MW irradiation, whereas the oxidation of benzyl alcohol to benzaldehyde is less effective (TOF 992 h⁻¹). The selectivity of 4 for oxidizing the alcohol rather than the ene function is demonstrated using cinnamyl alcohol as substrate.
Abstract:
Dissertation presented to obtain the degree of Doctor in Cell Biology from the Instituto de Tecnologia Química e Biológica, Universidade Nova de Lisboa
Abstract:
Hyperspectral remote sensing exploits the electromagnetic scattering patterns of different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing are enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions.
Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors by an unmixing matrix that minimizes the mutual information among sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when the sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33]. Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref. [37] is also of the MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures.
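Under the linear mixing model above, a mixed pixel is y = Ma + n, where the columns of M are the endmember signatures and the abundance vector a is nonnegative and sums to one. As a minimal illustrative sketch (not one of the cited algorithms; the projected-gradient solver and all names are hypothetical), fully constrained unmixing can be written as:

```python
import numpy as np

def linear_mixture(endmembers, abundances, noise_std=0.0, rng=None):
    """Synthesize a mixed pixel y = M a + n under the linear mixing model.

    endmembers: (bands, p) matrix M whose columns are endmember signatures.
    abundances: (p,) vector a with a >= 0 and sum(a) == 1.
    """
    rng = rng or np.random.default_rng(0)
    y = endmembers @ abundances
    if noise_std > 0:
        y = y + noise_std * rng.standard_normal(endmembers.shape[0])
    return y

def unmix_fcls(endmembers, pixel, n_iter=2000, lr=0.005):
    """Fully constrained least-squares unmixing by projected gradient:
    minimize ||M a - y||^2 subject to a >= 0 and sum(a) = 1
    (a simple heuristic projection: clip to >= 0, then renormalize)."""
    p = endmembers.shape[1]
    a = np.full(p, 1.0 / p)
    for _ in range(n_iter):
        grad = endmembers.T @ (endmembers @ a - pixel)
        a = np.clip(a - lr * grad, 0.0, None)
        s = a.sum()
        a = a / s if s > 0 else np.full(p, 1.0 / p)
    return a
```

With the true abundances interior to the simplex and a well-conditioned signature matrix, this recovers a up to noise; the constrained least-squares formulations cited above solve the same problem exactly.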
The MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to assure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requirement that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data.
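The skewer-projection step of PPI described above can be sketched as follows (an illustrative simplification that omits the MNF preprocessing; all names are hypothetical):

```python
import numpy as np

def ppi(data, n_skewers=1000, rng=None):
    """Pixel Purity Index (sketch): project every spectral vector onto
    random unit vectors ('skewers') and count how often each pixel is
    an extreme of a projection.  data: (n_pixels, bands)."""
    rng = rng or np.random.default_rng(0)
    n_pixels, bands = data.shape
    scores = np.zeros(n_pixels, dtype=int)
    for _ in range(n_skewers):
        skewer = rng.standard_normal(bands)
        skewer /= np.linalg.norm(skewer)
        proj = data @ skewer
        scores[np.argmax(proj)] += 1  # extremes of this skewer direction
        scores[np.argmin(proj)] += 1
    return scores  # highest-scoring pixels are the purest candidates
```

Because the extremes of a linear projection over a convex data cloud are always attained at vertices, strictly mixed pixels never accumulate a score, which is exactly why PPI works when pure pixels are present.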
ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram-Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46]. In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices, the latter based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data, in the least-squares sense [48, 49]. We note, however, that VCA works with projected and with unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data.
The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR; yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Section 19.3 and Section 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
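The iterative orthogonal-projection step just described can be sketched as follows (a simplified illustration that skips the signal-subspace identification and SNR-dependent projection of the full VCA algorithm; all names are hypothetical):

```python
import numpy as np

def vca_sketch(data, p, rng=None):
    """Simplified VCA-style endmember extraction (illustrative only).

    Iteratively projects the data onto a direction orthogonal to the
    subspace spanned by the endmembers found so far; the extreme of
    each projection is taken as the next endmember.
    data: (n_pixels, bands).  Returns the indices of the chosen pixels.
    """
    rng = rng or np.random.default_rng(0)
    n_pixels, bands = data.shape
    indices = []
    E = np.zeros((bands, 0))  # endmember signatures found so far
    for _ in range(p):
        f = rng.standard_normal(bands)
        if E.shape[1] > 0:
            # remove the component of f lying in span(E)
            f -= E @ np.linalg.lstsq(E, f, rcond=None)[0]
        f /= np.linalg.norm(f)
        idx = int(np.argmax(np.abs(data @ f)))
        indices.append(idx)
        E = np.column_stack([E, data[idx]])
    return indices
```

Since already-found endmembers project to zero on each new direction, while the extreme of a linear projection over the data cloud is always a simplex vertex, successive iterations pick out distinct pure pixels.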
Abstract:
The concepts and instruments required for the teaching and learning of geometric optics are introduced in the didactic process without a proper didactic transposition. This claim is secured by the ample evidence of both wide- and deep-rooted alternative concepts on the topic. Didactic transposition is a theory that comes from a reflection on the teaching and learning process in mathematics but has been used in other disciplinary fields. It is used in this work in order to clear up the main obstacles in the teaching-learning process of geometric optics. We proceed to argue that since Newton's approach to optics, in Book I of his Opticks, is independent of the corpuscular or undulatory nature of light, it is the most suitable for a constructivist learning environment. However, Newton's theory must be subject to a proper didactic transposition to help overcome the aforementioned alternative concepts. We then describe our didactic transposition, which creates knowledge to be taught through a dialogical process between students' previous knowledge, the history of optics and the desired outcomes in geometrical optics in an elementary pre-service teacher training course. Finally, we use the scheme-facet structure of knowledge both to analyse and discuss our results and to illuminate shortcomings that must be addressed in the next stage of the inquiry.
Abstract:
Academic work in which the author develops a preliminary study and a design for a crossing over the Lima river, in the city of Viana do Castelo, consisting of a road-rail cable-stayed bridge. The academic project also aims to develop and understand the basic concepts, the design methodologies and the behaviour of structures of this kind. The main reason for the choice of topic is the need for an alternative to the Eiffel bridge in Viana do Castelo; moreover, since no road-rail cable-stayed bridge exists in Portugal to date, it would be interesting to study and design such a structure. Of the various structural systems studied, a bridge was adopted that will accommodate 4 road lanes and 2 railway tracks, with a total length of 660 metres, comprising two lateral spans of 165 metres each and a central span of 330 metres. The bridge will be of the semi-fan type with two planes of stays, anchored to two inverted-Y concrete towers approximately 110 metres high. The deck will be a double composite steel-concrete deck, consisting of two Warren-type triangulated girders and of cross-girders, spaced 15 metres apart, with steel tubular sections of variable thickness. At the upper level the cross-girders support the concrete slab that carries the roadway and, at the lower level, another concrete slab for the railway. The work begins with the general conceptual framing of the bridge's surroundings, followed by a presentation of the historical evolution of cable-stayed bridges and of some road-rail cable-stayed bridges. A preliminary analysis is carried out, studying the restrictions, the constraints, the implantation site and the geometric configuration system to be adopted in the structural design.
All the materials and equipment to be used are described, as well as the mechanical characteristics required for the structural calculations. The quantification of actions and design combinations was carried out in accordance with the national and European standards in force, namely the Eurocodes of the various specialities and the Regulamento de Segurança e Ações para Estruturas de Edifícios e Pontes. A preliminary sizing and an optimization of several possible structural systems were carried out for all structural elements, taking into account study variables such as economy and the structural resistance of the sections, in order to reach the final solution. The structure was discretized and analysed in a three-dimensional static model using an automatic calculation program. The analysis of results was performed longitudinally to verify the Ultimate Limit States and Serviceability Limit States of the structural elements that make up the bridge. A budget estimate was also made for the bridge over the Lima river in the city of Viana do Castelo.