884 results for Computational Geometry and Object Modelling
Abstract:
This paper reports on the analysis of tidal breathing patterns measured during noninvasive forced oscillation lung function tests in six subject groups. The three adult groups were healthy, with prediagnosed chronic obstructive pulmonary disease, and with prediagnosed kyphoscoliosis, respectively. The three groups of children were healthy, with prediagnosed asthma, and with prediagnosed cystic fibrosis, respectively. The analysis is applied to the pressure-volume curves and the pseudophase-plane loop by means of the box-counting method, which gives a measure of the area within each loop. The objective was to verify whether a link exists between the area of the loops, power-law patterns, and alterations in the respiratory structure with disease. We obtained statistically significant variations between the data sets corresponding to the six groups of patients, also showing the existence of power-law patterns. Our findings support the idea that the respiratory system changes with disease in terms of airway geometry and tissue parameters, leading, in turn, to variations in the fractal dimension of the respiratory tree and its dynamics.
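As a pointer to the method referenced above, the following is a minimal sketch of box counting on a sampled 2-D loop such as a pressure-volume curve; the grid sizes, function names, and normalization are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def box_count(x, y, n_boxes):
    """Count cells of an n_boxes x n_boxes grid touched by the sampled curve (x, y)."""
    # Normalize the curve into the unit square so the grid spans the data.
    x = (x - x.min()) / (np.ptp(x) or 1.0)
    y = (y - y.min()) / (np.ptp(y) or 1.0)
    # Map each sample to a grid cell and count the distinct occupied cells.
    ix = np.minimum((x * n_boxes).astype(int), n_boxes - 1)
    iy = np.minimum((y * n_boxes).astype(int), n_boxes - 1)
    return len(set(zip(ix.tolist(), iy.tolist())))

def box_dimension(x, y, sizes=(8, 16, 32, 64, 128)):
    """Estimate fractal dimension as the slope of log(count) versus log(grid size)."""
    counts = [box_count(x, y, n) for n in sizes]
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return slope

# Example: a smooth closed loop (an ellipse) has an estimated dimension close to 1.
t = np.linspace(0, 2 * np.pi, 5000)
print(box_dimension(1.0 + 0.5 * np.cos(t), 2.0 + 0.3 * np.sin(t)))
```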
Abstract:
The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]; the nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17], whereas the nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model, and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, by the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, and target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases, however, the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, feature extraction, and unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of the observed data that yields statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where the sources are the abundance fractions of each endmember. The first approach faces two problems: (1) the number of samples is limited to the number of channels, and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward.
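The core quantitative statement here is the linear mixing model: an observed pixel spectrum equals an endmember signature matrix times nonnegative, sum-to-one abundance fractions, plus noise. Below is a minimal numerical sketch of that model and of a fully constrained least-squares inversion; the sizes, noise level, and solver choice are illustrative assumptions, not material from the chapter.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Illustrative sizes: B spectral bands, p endmembers.
B, p = 50, 3
M = rng.random((B, p))                          # endmember signatures (one per column)
a_true = np.array([0.6, 0.3, 0.1])              # abundances: nonnegative, summing to one
y = M @ a_true + 0.01 * rng.standard_normal(B)  # observed pixel, linear model plus noise

# Fully constrained least squares: min ||y - M a||^2  s.t.  a >= 0, sum(a) = 1.
res = minimize(
    lambda a: np.sum((y - M @ a) ** 2),
    x0=np.full(p, 1.0 / p),
    method="SLSQP",
    bounds=[(0.0, 1.0)] * p,
    constraints={"type": "eq", "fun": lambda a: a.sum() - 1.0},
)
print(res.x)  # estimated abundance fractions, close to a_true at this noise level
```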
In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among the abundances. This dependence compromises the applicability of ICA to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades ICA performance. Independent factor analysis (IFA) [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance. Under the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. MVT-type approaches are computationally complex: usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at lower computational complexity, algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requisite that may not hold in some data sets; in any case, these algorithms find the set of purest pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, the processing of hyperspectral data, including unmixing, is very often preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. A newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations; to overcome this problem, band selection [50] and nonstatistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performance. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model that takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL)-based algorithm [55].
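Since the chapter leans on PCA/SVD-style projection before unmixing, here is a minimal sketch of that dimensionality reduction step, assuming pixels are stored as rows of a data matrix; the function name and the choice of k are illustrative, not taken from the chapter.

```python
import numpy as np

def pca_reduce(X, k):
    """Project pixels (rows of X, shape N x B) onto the top-k principal components."""
    Xc = X - X.mean(axis=0)                  # remove the mean spectrum
    # Principal directions are the right singular vectors of the centered data.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                     # N x k component scores

# Under the linear model with p endmembers, the noise-free signal lies on a
# (p - 1)-dimensional affine set, so k is typically chosen near p - 1 or p.
X = np.random.default_rng(2).random((1000, 50))   # stand-in for N pixels x B bands
Z = pca_reduce(X, k=3)
```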
We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information may be very far from the true one. Nevertheless, some abundance fractions may be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, in which abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm. This approach is in the vein of references 39 and 56, replacing the independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need for pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief review of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms on experimental data. Section 6.5 studies the limitations of ICA and IFA in unmixing hyperspectral data. Section 6.6 presents ICA results based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
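To make the Dirichlet modeling choice concrete, the following sketch draws abundance vectors from a two-component mixture of Dirichlet densities and checks that positivity and full additivity hold by construction; the mixture parameters are invented for illustration and are not from the chapter.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two-component mixture of Dirichlet densities over p = 3 abundance fractions.
alphas = [np.array([9.0, 2.0, 1.0]), np.array([1.0, 1.0, 8.0])]  # invented parameters
weights = np.array([0.4, 0.6])

def sample_abundances(n):
    comp = rng.choice(len(alphas), size=n, p=weights)      # pick a mixture component
    return np.vstack([rng.dirichlet(alphas[c]) for c in comp])

A = sample_abundances(1000)
# Positivity and full additivity hold by construction for every sample,
# which is exactly what makes the Dirichlet a natural source model here.
assert np.all(A >= 0.0) and np.allclose(A.sum(axis=1), 1.0)
```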
Abstract:
Management Information Systems 2000, pp. 103-111
Abstract:
20th International Conference on Reliable Software Technologies - Ada-Europe 2015 (Ada-Europe 2015), 22-26 June 2015, Madrid, Spain.
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa to obtain the degree of Master in Electrical and Computer Engineering
Abstract:
Underground scenarios are among the most challenging environments for accurate and precise 3D mapping, where hostile conditions such as the absence of Global Positioning System coverage, extreme lighting variations, and geometrically smooth surfaces may be expected. So far, the state-of-the-art methods in underground modelling remain restricted to environments in which pronounced geometric features are abundant. This limitation is a consequence of the scan matching algorithms used to solve the localization and registration problems. This paper contributes to the expansion of modelling capabilities to structures characterized by uniform geometry and smooth surfaces, as is the case of road and train tunnels. To achieve that, we combine state-of-the-art techniques from mobile robotics and propose a method for 6DOF platform positioning in such scenarios, which is later used for environment modelling. A visual monocular Simultaneous Localization and Mapping (MonoSLAM) approach based on the Extended Kalman Filter (EKF), complemented by the introduction of inertial measurements in the prediction step, allows our system to localize itself over long distances using exclusively sensors carried on board a mobile platform. By feeding the Extended Kalman Filter with inertial data, we were able to overcome the major problem of MonoSLAM implementations, known as the scale factor ambiguity. Despite extreme lighting variations, reliable visual features were extracted with the SIFT algorithm and inserted directly into the EKF according to the inverse depth parametrization. Wrong frame-to-frame feature matches were rejected through 1-point RANSAC (random sample consensus). The developed method was tested on a dataset acquired inside a road tunnel, and the navigation results were compared with a ground truth obtained by post-processing a high-grade Inertial Navigation System and L1/L2 RTK-GPS measurements acquired outside the tunnel. Results from the localization strategy are presented and analyzed.
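As a companion to the abstract's point about inertial measurements resolving the monocular scale ambiguity, here is a minimal 1-D sketch of an EKF prediction step driven by an accelerometer reading; the state layout, matrices, and noise values are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Minimal EKF prediction step with an inertial (accelerometer) input, 1-D for brevity.
# State x = [position, velocity]; the accelerometer reading acts as the control input,
# which supplies the metric scale that monocular vision alone cannot observe.
def ekf_predict(x, P, accel, dt, accel_var=0.1):
    F = np.array([[1.0, dt],
                  [0.0, 1.0]])              # constant-velocity transition
    G = np.array([[0.5 * dt**2],
                  [dt]])                    # how acceleration enters the state
    x_pred = F @ x + G.flatten() * accel    # propagate the mean with the inertial input
    Q = G @ G.T * accel_var                 # process noise from accelerometer uncertainty
    P_pred = F @ P @ F.T + Q                # propagate the covariance
    return x_pred, P_pred

x, P = np.zeros(2), np.eye(2)
x, P = ekf_predict(x, P, accel=0.5, dt=0.01)
```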
Abstract:
Adhesive bonds are frequently used in the fabrication of complex structures that could not be produced in a single piece, or not as easily, in order to provide a structural union that, in theory, should be at least as strong as the base material. Adhesive joints have been replacing methods such as welding and bolted and riveted connections due to the ease of fabrication, lower cost, ease of joining different materials, better strength, among other characteristics. Carbon fibre reinforced composite materials are widely used in many industries, such as boat building, automotive, and aeronautics, in structures that require high specific strength and stiffness, which reduces the weight of the components while maintaining the strength and stiffness needed to withstand the various applied loads. Although these fabrication methods reduce joining to a minimum through advanced manufacturing techniques, joints are still necessary because of component size and technological and logistical design limitations. In many structures, combining composites with metals such as aluminium or titanium brings design advantages. This work aims to study, experimentally and through cohesive zone models (CZM), adhesive L-joints between aluminium and carbon-epoxy composite components subjected to peel forces, considering different joint configurations and adhesives of distinct ductility. The geometric parameters addressed are the thickness of the aluminium adherend (tP2) and the overlap length (LO). The numerical analysis enabled the study of stress distributions, damage evolution, strength, and failure modes. The experimental tests validate the numerical results and provide design guidelines for L-joints. It was shown that the geometry of the L-shaped (aluminium) adherend and the type of adhesive have a direct influence on the joint strength.
Abstract:
Thesis for the Master's degree in Structural and Functional Biochemistry
Abstract:
With the need to find an alternative to mechanical and welded joints, and at the same time to overcome some limitations of these traditional techniques, adhesive bonds can be used. Adhesive bonding is a permanent joining process that uses an adhesive to bond the components of a structure. Fibre-reinforced composite materials are becoming increasingly popular in many applications as a result of a number of competitive advantages. In the manufacture of composite structures, although advanced manufacturing techniques reduce the number of joints to a minimum, connections are still required due to typical size limitations and design, technological, and logistical aspects. Moreover, it is known that in many high-performance structures, joints between composite materials and light metals such as aluminium are required for purposes of structural optimization. This work deals with the experimental and numerical study of single lap joints (SLJ), bonded with a brittle (Nagase Chemtex Denatite XNRH6823) and a ductile adhesive (Nagase Chemtex Denatite XNR6852). These are applied to hybrid joints between aluminium (AL6082-T651) and carbon fibre reinforced plastic (CFRP; Texipreg HS 160 RM) adherends in joints with different overlap lengths (LO) under tensile loading. The Finite Element (FE) Method is used to perform detailed stress and damage analyses, allowing the joints' behaviour to be explained, while the use of cohesive zone models (CZM) enables predicting the joint strength and creating a simple and rapid design methodology. The use of numerical methods to simulate the behaviour of the joints can save time and resources by optimizing the geometry and material parameters of the joints. The joints' strength and failure modes were highly dependent on the adhesive, and this behaviour was successfully modelled numerically. Using the brittle adhesive resulted in a negligible improvement of maximum load (Pm) with LO, whereas the joints bonded with the ductile adhesive showed a nearly linear improvement of Pm with LO.
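Since the abstract hinges on CZM strength prediction, here is a minimal sketch of the triangular traction-separation law commonly used in cohesive zone modelling; the stiffness, strength, and failure-separation values are illustrative assumptions, not the measured properties of the adhesives named above.

```python
def triangular_czm(delta, k0=1e4, t_max=30.0, delta_f=0.1):
    """Triangular traction-separation law: traction (MPa) vs. separation (mm).

    k0: initial stiffness (MPa/mm), t_max: cohesive strength (MPa),
    delta_f: failure separation (mm). All values are illustrative only.
    """
    delta0 = t_max / k0                      # separation at damage onset
    if delta <= delta0:
        return k0 * delta                    # undamaged elastic branch
    if delta >= delta_f:
        return 0.0                           # complete failure, no load transfer
    # linear softening between damage onset and failure
    return t_max * (delta_f - delta) / (delta_f - delta0)

# The fracture toughness is the area under the law: Gc = 0.5 * t_max * delta_f.
print(triangular_czm(0.05))
```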
Abstract:
Dissertation to obtain the degree of Master in Electrical and Computer Engineering
Abstract:
OBJECTIVE: Endograft mural thrombus has been associated with stent graft or limb thrombosis after endovascular aneurysm repair (EVAR). This study aimed to identify clinical and morphologic determinants of endograft mural thrombus accumulation and its influence on thromboembolic events after EVAR. METHODS: A prospectively maintained database of patients treated by EVAR at a tertiary institution from 2000 to 2012 was analyzed. Patients treated for degenerative infrarenal abdominal aortic aneurysms and with available imaging for thrombus analysis were considered. All measurements were performed on three-dimensional center-lumen line computed tomography angiography (CTA) reconstructions. Patients with thrombus accumulation within the endograft's main body with a thickness >2 mm and an extension >25% of the main body's circumference were included in the study group and compared with a control group that included all remaining patients. Clinical and morphologic variables were assessed for association with significant thrombus accumulation within the endograft's main body by multivariate regression analysis. Estimates for freedom from thromboembolic events were obtained by Kaplan-Meier plots. RESULTS: Sixty-eight patients (16.4%) presented with endograft mural thrombus. Median follow-up time was 3.54 years (interquartile range, 1.99-5.47 years). In-graft mural thrombus was identified on 30-day CTA in 22 patients (32.4% of the study group), on 6-month CTA in 8 patients (11.8%), and on 1-year CTA in 17 patients (25%). Intraprosthetic thrombus progressively accumulated during the study period in 40 patients of the study group (55.8%). Overall, 17 patients (4.1%) presented with endograft or limb occlusions, 3 (4.4%) in the thrombus group and 14 (4.1%) in the control group (P = .89). Thirty-one patients (7.5%) received an aortouni-iliac (AUI) endograft. Two endograft occlusions were identified among AUI devices (6.5%; overall, 0.5%). None of these patients showed thrombotic deposits in the main body, nor were any outflow abnormalities identified on the immediately preceding CTA. Estimated freedom from thromboembolic events at 5 years was 95% in both groups (P = .97). Endograft thrombus accumulation was associated with >25% proximal aneurysm neck thrombus coverage at baseline (odds ratio [OR], 1.9; 95% confidence interval [CI], 1.1-3.3), neck length ≤15 mm (OR, 2.4; 95% CI, 1.3-4.2), proximal neck diameter ≥30 mm (OR, 2.4; 95% CI, 1.3-4.6), AUI (OR, 2.2; 95% CI, 1.8-5.5), or polyester-covered stent grafts (OR, 4.0; 95% CI, 2.2-7.3) and with main component "barrel-like" configuration (OR, 6.9; 95% CI, 1.7-28.3). CONCLUSIONS: Mural thrombus formation within the main body of the endograft is related to different endograft configurations, main body geometry, and device fabric but appears to have no association with the occurrence of thromboembolic events over time.
Abstract:
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.
Abstract:
This work was carried out with the support of the Universidade de Lisboa, Instituto Superior de Agronomia, with the Centro de Engenharia dos Biossistemas (CEER)
Abstract:
This paper presents the findings of an experimental campaign conducted to investigate the seismic behaviour of log houses. A two-storey log house designed by the Portuguese company Rusticasa® was subjected to a series of shaking table tests at LNEC, Lisbon, Portugal. The paper describes the geometry and construction of the house and all aspects related to the testing procedure, namely the pre-design, the setup, the instrumentation, and the testing process itself. The shaking table tests were carried out with a scaled spectrum of the Montenegro (1979) earthquake at increasing levels of peak ground acceleration (PGA), starting from 0.07g, moving on to 0.28g, and finally 0.5g. The log house did not suffer any major damage and remained in working condition throughout the entire process. A preliminary analysis of the overall behaviour of the log house is also discussed.
Abstract:
"Lecture notes in computational vision and biomechanics series, ISSN 2212-9391, vol. 19"