928 results for Mesh segmentation
Abstract:
Meshless methods are valued for their capability of producing excellent solutions without requiring a mesh, thereby avoiding the mesh-related problems encountered in other numerical methods such as finite elements. However, node placement is still an open question, especially in strong-form collocation meshless methods. The number of nodes used has a strong influence on matrix size and can therefore produce ill-conditioned matrices. To optimize node position and number, a direct multisearch technique for multiobjective optimization is applied to the node distribution of the global collocation method using radial basis functions. The optimization method is applied to the bending of isotropic simply supported plates. Starting from a uniformly distributed grid, the results show that the method is capable of reducing the number of nodes in the grid without compromising the accuracy of the solution.
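As a pointer to the underlying technique, here is a minimal sketch of global RBF (Kansa-type) collocation on a 1-D Poisson problem with a multiquadric basis; the plate-bending operator and the node-optimization loop of the paper are omitted, and the test problem, shape parameter and function names are illustrative.

```python
import numpy as np

# Global RBF collocation sketch for u''(x) = f(x) on [0, 1], u(0) = u(1) = 0,
# with a multiquadric basis.  Ill-conditioning of the collocation matrix grows
# with the number of nodes, which motivates optimizing the grid.
def mq(r, c):
    return np.sqrt(r**2 + c**2)

def mq_xx(r, c):
    # second derivative of sqrt(r^2 + c^2) with respect to x, r = |x - x_j|
    return c**2 / (r**2 + c**2)**1.5

def solve_collocation(nodes, f, c=0.2):
    r = np.abs(nodes[:, None] - nodes[None, :])     # pairwise node distances
    A = mq_xx(r, c)                                 # PDE rows (interior nodes)
    A[[0, -1], :] = mq(r[[0, -1], :], c)            # Dirichlet rows (boundary nodes)
    b = f(nodes)
    b[[0, -1]] = 0.0                                # boundary values
    lam = np.linalg.solve(A, b)
    return lambda x: mq(np.abs(x[:, None] - nodes[None, :]), c) @ lam

nodes = np.linspace(0.0, 1.0, 21)                   # uniform starting grid
u = solve_collocation(nodes, lambda x: -np.pi**2 * np.sin(np.pi * x))
x = np.linspace(0.0, 1.0, 200)
print("max error vs. exact sin(pi x):", np.abs(u(x) - np.sin(np.pi * x)).max())
```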
Abstract:
Master's degree in Geotechnical Engineering and Geoenvironment
Abstract:
Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies
Abstract:
This paper addresses the problem of optimal positioning of surface-bonded piezoelectric patches in sandwich plates with a viscoelastic core and laminated face layers. The objective is to maximize a set of modal loss factors for a given frequency range using multiobjective topology optimization. Active damping is introduced through co-located negative velocity feedback control. The multiobjective topology optimization problem is solved using the Direct MultiSearch method. An application to a simply supported sandwich plate is presented, with results for the maximization of the first six modal loss factors. The influence of the finite element mesh is analyzed, and the results are, to some extent, compared with those obtained using alternative single-objective optimization.
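The Direct MultiSearch method used here is a derivative-free approach that maintains a list of non-dominated points while polling around them. Below is a minimal sketch of just the non-dominated filtering step, with hypothetical patch layouts and objective vectors (modal loss factors negated, since the filter assumes minimization).

```python
import numpy as np

# Minimal non-dominated filtering, the core bookkeeping step of direct
# multisearch (DMS): keep only points whose objective vectors are not
# dominated by any other point.  Objectives are assumed to be minimized,
# so modal loss factors enter with a sign flip.
def dominates(f_a, f_b):
    """True if f_a is at least as good in every objective and better in one."""
    return np.all(f_a <= f_b) and np.any(f_a < f_b)

def nondominated(points, objectives):
    keep = []
    for i, f_i in enumerate(objectives):
        if not any(dominates(f_j, f_i) for j, f_j in enumerate(objectives) if j != i):
            keep.append(i)
    return [points[i] for i in keep], [objectives[i] for i in keep]

# Hypothetical patch layouts scored on two (negated) modal loss factors.
layouts = ["A", "B", "C", "D"]
scores = [np.array(s) for s in ([-0.04, -0.01], [-0.03, -0.03],
                                [-0.02, -0.02], [-0.01, -0.04])]
front, _ = nondominated(layouts, scores)
print(front)   # C is dominated by B; A, B and D remain on the front
```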
Abstract:
The Casa da Música Foundation, responsible for the management of the Casa da Música do Porto building, needs statistical data on the number of the building's visitors. This information is a valuable tool for the elaboration of periodic reports on the success of this cultural institution. For this reason, it was necessary to develop a system capable of returning the number of visitors for a requested period of time. This is a complex task due to the building's unique architectural design, characterized by very large doors and halls, and the sudden large number of people that pass through them in the moments preceding and following the different activities taking place in the building. To reach a technical solution for this challenge, several image processing methods for people detection with static cameras were first studied. The next step was the development of a real-time algorithm, using the OpenCV libraries and computer vision concepts, to count individuals with the desired accuracy. This algorithm incorporates the scientific and technical knowledge acquired in the study of the previous methods. The themes developed in this thesis comprise the fields of background maintenance, shadow and highlight detection, and blob detection and tracking. A graphical interface was also built to support the development, testing and tuning of the proposed system, as a complement to the work. Furthermore, the system was tested to validate the proposed techniques against a limited set of circumstances. The results obtained show that the algorithm successfully counts the number of people in complex environments with reliable accuracy.
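A minimal sketch of the kind of pipeline described above — background maintenance followed by shadow suppression and blob detection — using OpenCV's stock MOG2 background subtractor; the video path and area threshold are illustrative, and the blob tracking and line-crossing logic needed for actual counting are omitted.

```python
import cv2

# Background subtraction + blob detection sketch; real counting would add
# tracking of blobs across a virtual counting line, which is omitted here.
cap = cv2.VideoCapture("entrance.avi")            # hypothetical input video
bg = cv2.createBackgroundSubtractorMOG2(detectShadows=True)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = bg.apply(frame)                        # background maintenance
    mask[mask == 127] = 0                         # drop pixels MOG2 marks as shadow
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                            cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    blobs = [c for c in contours if cv2.contourArea(c) > 800]   # crude person-sized filter
    print("people-sized blobs in frame:", len(blobs))
cap.release()
```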
Abstract:
Brain dopamine transporter imaging by Single Photon Emission Computed Tomography (SPECT) with 123I-FP-CIT (DaTScan™) has become an important tool in the diagnosis and evaluation of Parkinson syndromes. This diagnostic method allows the visualization of a portion of the striatum, where the healthy pattern resembles two symmetric commas, enabling the evaluation of the presynaptic dopamine system, in which dopamine transporters are responsible for the release of dopamine into the synaptic cleft and for its reuptake into the nigrostriatal nerve terminals, where it is stored or degraded. In daily practice, the assessment of DaTScan™ studies commonly relies on visual evaluation alone. However, this process is complex and subjective, as it depends on the observer's experience, and it is associated with high intra- and inter-observer variability. Studies have shown that semiquantification can improve the diagnosis of Parkinson syndromes. Semiquantification requires image segmentation methods based on regions of interest (ROIs): ROIs are drawn over specific (striatum) and nonspecific (background) uptake areas, and specific binding ratios are then calculated. The low adoption of semiquantification in the diagnosis of Parkinson syndromes is related not only to the time it requires, but also to the need for a database of reference values adapted to the population concerned and to each department's examination protocol. Studies have concluded that this adaptation increases the reproducibility of semiquantification. The aim of this investigation was to create and validate a database of healthy controls for dopamine transporter imaging with DaTScan™, named DBRV. The database was adapted to the protocol of the Nuclear Medicine Department and to the population of the Infanta Cristina Hospital in Badajoz, Spain.
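As an illustration of the specific binding ratio mentioned above, here is a minimal sketch that compares mean counts in a striatal ROI against a nonspecific background ROI; the image, masks and values are synthetic stand-ins, not part of the DBRV database or the department's protocol.

```python
import numpy as np

# Specific binding ratio (SBR) sketch: mean counts inside a striatal ROI
# compared against a nonspecific (background) ROI.  ROI masks are assumed
# to be boolean arrays aligned with the reconstructed SPECT slice.
def specific_binding_ratio(image, striatum_mask, background_mask):
    specific = image[striatum_mask].mean()
    nonspecific = image[background_mask].mean()
    return (specific - nonspecific) / nonspecific

# Toy example with synthetic counts (real use would load DaTScan slices
# and ROI masks drawn according to the department's protocol).
rng = np.random.default_rng(0)
image = rng.poisson(20, size=(128, 128)).astype(float)
striatum = np.zeros_like(image, dtype=bool); striatum[60:70, 40:60] = True
background = np.zeros_like(image, dtype=bool); background[100:120, 40:90] = True
image[striatum] += 40                                  # simulate specific uptake
print("SBR:", round(specific_binding_ratio(image, striatum, background), 2))
```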
Abstract:
The main purpose of this work was the development of procedures for the simulation of atmospheric flows over complex terrain using OpenFOAM. To this end, tools and procedures for preprocessing and data extraction were developed apart from this code and were thereafter applied in the simulation of a real case. For the generation of the computational domain, a systematic method able to translate a terrain elevation model into a native OpenFOAM format (blockMeshDict) was developed. The outcome is a structured mesh in which the user can define the number of control volumes and their dimensions. With this procedure, the difficulties of case setup and the high computational effort reported in the literature for snappyHexMesh, the OpenFOAM resource explored until then for this task, were considered to be overcome. The procedures developed for the generation of boundary conditions allow the automatic creation of idealized inlet vertical profiles, the definition of wall-function boundary conditions and the calculation of internal-field first guesses for the iterative solution process, taking experimental data supplied by the user as input. The applicability of the generated boundary conditions is limited to the simulation of turbulent, steady-state, incompressible and neutrally stratified atmospheric flows, always using RANS (Reynolds-averaged Navier-Stokes) models. For the modelling of terrain roughness, the developed procedure lets the user define idealized conditions, such as a uniform aerodynamic roughness length or a value that varies as a function of characteristic topographic features, or use real site data; it is complemented by techniques for the visual inspection of the generated roughness maps. The absence of a forest canopy model limits the applicability of this procedure to low aerodynamic roughness lengths. The developed tools and procedures were then applied in the simulation of a neutrally stratified atmospheric flow over the Askervein hill. In the simulations performed, the sensitivity of the solution to different convection schemes, mesh dimensions, ground roughness and formulations of the k-ε and k-ω models was evaluated. When compared with the experimental data, the calculated values showed good agreement of the speed-up at the hill top and on the lee side, with a relative error of less than 10% at a height of 10 m above ground level. Turbulent kinetic energy was considered to be well simulated on the windward side and at the hill top, and only roughly predicted on the lee side, where a zone of flow separation was also identified. Despite the need for further work to evaluate the importance of the downstream recirculation zone for the quality of the results, the agreement between calculated and experimental values and the sensitivity of OpenFOAM to the tested parameters were considered to be generally in line with the simulations presented in the reviewed bibliographic sources.
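One of the generated boundary conditions, the idealized inlet vertical profile for a neutral atmospheric boundary layer, can be sketched with the usual log-law velocity and the Richards & Hoxey k-ε relations; the friction velocity, roughness length and height levels below are illustrative, not values from the Askervein case.

```python
import numpy as np

# Sketch of an idealized neutral ABL inlet profile (log-law velocity with
# the standard k-epsilon relations); u_star and z0 are illustrative inputs.
KAPPA, C_MU = 0.41, 0.09

def abl_inlet(z, u_star=0.4, z0=0.03):
    u = (u_star / KAPPA) * np.log((z + z0) / z0)      # mean wind speed [m/s]
    k = np.full_like(z, u_star**2 / np.sqrt(C_MU))    # turbulent kinetic energy
    eps = u_star**3 / (KAPPA * (z + z0))              # dissipation rate
    return u, k, eps

z = np.linspace(0.0, 100.0, 11)                       # heights above ground [m]
for zi, ui, ki, ei in zip(z, *abl_inlet(z)):
    print(f"z={zi:6.1f} m  U={ui:5.2f} m/s  k={ki:5.3f}  eps={ei:8.5f}")
```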
Abstract:
Final Master's project for obtaining the degree of Master in Chemical and Biological Engineering
Abstract:
The structure and nature of the crust underlying the Santos Basin-São Paulo Plateau System (SSPS), in the SE Brazilian margin, are discussed based on five wide-angle seismic profiles acquired during the Santos Basin (SanBa) experiment in 2011. Velocity models allow us to divide the SSPS precisely into six domains, from unthinned continental crust (Domain CC) to normal oceanic crust (Domain OC). A seventh domain (Domain D), a triangular-shaped region in the SE of the SSPS, is discussed by Klingelhoefer et al. (2014). Beneath the continental shelf, a ~100 km wide necking zone (Domain N) is imaged, where the continental crust thins abruptly from ~40 km to less than 15 km. Toward the ocean, most of the SSPS (Domains A and C) shows velocity ranges, velocity gradients, and a Moho interface characteristic of thinned continental crust. The central domain (Domain B) has, however, a very heterogeneous structure. While its southwestern part still exhibits extremely thinned (7 km) continental crust, its northeastern part shows a 2-4 km thick upper layer (6.0-6.5 km/s) overlying an anomalous velocity layer (7.0-7.8 km/s) and no evidence of a Moho interface. This structure is interpreted as atypical oceanic crust, exhumed lower crust, or upper continental crust intruded by mafic material, overlying either altered mantle in the first two cases or intruded lower continental crust in the last case. The deep structure and V-shaped segmentation of the SSPS confirm that an initial episode of rifting occurred there obliquely to the general opening direction of the South Atlantic Central Segment.
Abstract:
This study reports the embryogenesis of T. infestans (Hemiptera, Reduviidae). Morphological parameters of the growth sequence from oviposition until hatching (12-14 d at 28 °C) were established. Five periods, expressed as a percentage of the total time of development (TD), were characterized from oviposition until hatching. The most important morphological features were: 1) formation of the blastoderm within 7% of TD; 2) germ band and gastrulation within 30% of TD; 3) nerve cord, limb budding, thoracic and abdominal segmentation, and formation of the body cavity within 50% of TD; 4) end of nervous system development and blastokinesis, and development of the embryonic cuticle within 65% of TD; 5) differentiation of the mouthparts, fat body, and Malpighian tubules during the final stage and completion of the embryo at day 12 to day 14, around hatching. These signals were chosen as appropriate morphological parameters that should enable the evaluation of embryological modifications due to the action of different insecticides.
Abstract:
The most common techniques for stress analysis and strength prediction of adhesive joints involve analytical or numerical methods such as the Finite Element Method (FEM). However, the Boundary Element Method (BEM) is an alternative numerical technique that has been successfully applied to the solution of a wide variety of engineering problems. This work evaluates the applicability of the boundary element code BEASY as a design tool to analyze adhesive joints. The linearity of peak shear and peel stresses with the applied displacement is studied and compared between BEASY and the analytical model of Frostig et al., considering a bonded single-lap joint under tensile loading. The BEM results are also compared with FEM results in terms of stress distributions. To evaluate the mesh convergence of BEASY, the influence of mesh refinement on the peak shear and peel stress distributions is assessed. Joint stress predictions are carried out numerically in BEASY and ABAQUS®, and analytically with the models of Volkersen, Goland and Reissner, and Frostig et al. The failure loads of each model are compared with experimental results, and the preparation, processing, and mesh creation times are compared for all models. The BEASY results showed good agreement with the conventional methods.
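For context on the analytical baselines, here is a minimal sketch of the Volkersen shear-lag model in its simplified form for identical adherends; the geometry and material values are illustrative, not those of the joints analyzed in this work.

```python
import numpy as np

# Volkersen shear-lag model for a single-lap joint, simplified form for
# identical adherends: tau(x) = P*w/(2*b) * cosh(w*x)/sinh(w*c), with
# w^2 = 2*Ga/(ta*E*t), x measured from the overlap centre and c = L/2.
def volkersen_shear(x, P, b, L, E, t, Ga, ta):
    c = L / 2.0
    w = np.sqrt(2.0 * Ga / (ta * E * t))
    return (P * w / (2.0 * b)) * np.cosh(w * x) / np.sinh(w * c)

# Illustrative values: aluminium adherends, 25 mm overlap, 1 kN load.
L, b = 0.025, 0.025                      # overlap length and joint width [m]
x = np.linspace(-L / 2.0, L / 2.0, 101)
tau = volkersen_shear(x, P=1000.0, b=b, L=L,
                      E=70e9, t=2e-3, Ga=1.0e9, ta=0.2e-3)
tau_avg = 1000.0 / (b * L)               # average shear stress P/(b*L)
print("peak shear stress [MPa]:", round(tau.max() / 1e6, 1))
print("stress concentration tau_max/tau_avg:", round(tau.max() / tau_avg, 2))
```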
Abstract:
This work aims to design a synthetic construct that mimics the natural bone extracellular matrix through innovative approaches based on simultaneous type I collagen electrospinning and nanophased hydroxyapatite (nanoHA) electrospraying under non-denaturing conditions and with non-toxic reagents. The morphological results, assessed using scanning electron microscopy and atomic force microscopy (AFM), showed a mesh of collagen nanofibers embedded with HA crystals, with fiber diameters in the nanometer range (30 nm), thus significantly smaller than those reported in the literature (over 200 nm). The mechanical properties, assessed by nanoindentation using AFM, exhibited elastic moduli between 0.3 and 2 GPa. Fourier transform infrared spectroscopy confirmed the integrity of the collagen as well as the presence of nanoHA in the composite. The network architecture allows cell access to both the collagen nanofibers and the HA crystals, as in the natural bone environment. The inclusion of nanoHA agglomerates by electrospraying in type I collagen nanofibers improved the adhesion and metabolic activity of MC3T3-E1 osteoblasts. This new nanostructured collagen–nanoHA composite holds great potential for healing bone defects, as a functional membrane for guided bone tissue regeneration, and for treating bone diseases.
Abstract:
Master's dissertation presented to the Instituto Superior de Contabilidade e Administração do Porto for the degree of Master in Digital Marketing, under the supervision of Prof. Sandrina Teixeira
Abstract:
Electrocardiogram (ECG) biometrics are a relatively recent trend in biometric recognition, with at least 13 years of development in the peer-reviewed literature. Most of the proposed biometric techniques perform classification on features extracted either from heartbeats or from transformed ECG signals. The best representation is yet to be decided. This paper studies an alternative representation, a dissimilarity space, based on the pairwise dissimilarity between templates and subjects' signals. Additionally, this representation can make use of ECG signals sourced from multiple leads. Configurations of three leads are tested and contrasted with single-lead experiments. Using the same k-NN classifier, the results proved superior to those obtained with a similar algorithm that does not employ a dissimilarity representation. The best authentication EER went as low as 1.53% for a database of 503 subjects. However, the use of extra leads did not prove advantageous.
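A minimal sketch of the dissimilarity-space idea described above: each heartbeat is re-represented by its distances to a set of template beats and then classified with k-NN; the Euclidean distance, template choice and synthetic data stand in for the paper's actual features and database.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Dissimilarity-space sketch: represent each ECG segment by its distances
# to a fixed set of template segments, then classify identities with k-NN.
def to_dissimilarity_space(segments, templates):
    # rows: segments, columns: Euclidean distance to each template
    return np.linalg.norm(segments[:, None, :] - templates[None, :, :], axis=2)

rng = np.random.default_rng(1)
n_subjects, beats_per_subject, beat_len = 5, 20, 120
# Synthetic stand-in for per-subject heartbeat segments.
prototypes = rng.normal(size=(n_subjects, beat_len))
X = np.repeat(prototypes, beats_per_subject, axis=0) + 0.3 * rng.normal(
    size=(n_subjects * beats_per_subject, beat_len))
y = np.repeat(np.arange(n_subjects), beats_per_subject)

templates = prototypes                      # one template beat per enrolled subject
D = to_dissimilarity_space(X, templates)    # (n_beats, n_templates) feature matrix

clf = KNeighborsClassifier(n_neighbors=3).fit(D[::2], y[::2])   # even beats: enrolment
print("identification accuracy:", clf.score(D[1::2], y[1::2]))  # odd beats: test
```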
Abstract:
Dissertation submitted to the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa for the degree of Master in Biomedical Engineering