996 results for Fault section estimation
Abstract:
Hyperspectral remote sensing exploits the electromagnetic scattering patterns of different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing are enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is therefore decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8-10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18-21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28-31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors with an unmixing matrix that minimizes the mutual information among sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when the sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33].
Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34-36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref. [37] is also of the MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to assure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to the extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data. ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram-Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46].
In this chapter we develop a new algorithm, termed vertex component analysis (VCA), to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices; the latter estimate is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data in the least-squares sense [48, 49]; we note, however, that VCA works with both projected and unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of this projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR, yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Section 19.3 and Section 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
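The iterative projection step described above lends itself to a compact illustration. The following NumPy sketch is a simplified rendering of that idea and not the authors' VCA implementation: the function name, the choice of a random direction within the orthogonal complement, and the use of the absolute extreme are all illustrative assumptions.

import numpy as np

def extract_endmembers(R, p, seed=0):
    # R: (L bands x n pixels) spectral data; p: number of endmembers to extract.
    # Sketch of the projection idea only, not the published VCA algorithm.
    rng = np.random.default_rng(seed)
    L, n = R.shape
    E = np.zeros((L, p))          # endmember signatures (columns)
    indices = []
    for i in range(p):
        if i == 0:
            P = np.eye(L)         # nothing found yet: project onto full space
        else:
            A = E[:, :i]
            # Projector onto the orthogonal complement of span{endmembers so far}
            P = np.eye(L) - A @ np.linalg.pinv(A)
        w = P @ rng.standard_normal(L)   # direction orthogonal to found endmembers
        proj = w @ R                      # projection of every pixel onto w
        k = int(np.argmax(np.abs(proj))) # extreme of the projection
        E[:, i] = R[:, k]
        indices.append(k)
    return E, indices

Because each new direction is orthogonal to the endmembers already selected, pixels equal to those endmembers project to approximately zero and are not picked again, which mirrors the "project, take the extreme, repeat" behaviour described in the text.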
Abstract:
Geociências, Museu Nac. Hist. Nat. Univ. Lisboa, nº 2, 35-84
Abstract:
Dissertation for obtaining the degree of Master in Mathematics and Applications, specialization in Actuarial Science, Statistics and Operational Research
Abstract:
Dissertation presented as a partial requirement for obtaining the degree of Doctor in Information Management
Abstract:
Radio link quality estimation is essential for protocols and mechanisms such as routing, mobility management and localization, particularly for low-power wireless networks such as wireless sensor networks. Commodity Link Quality Estimators (LQEs), e.g. PRR, RNP, ETX, four-bit and RSSI, can only provide a partial characterization of links, as they ignore several link properties such as channel quality and stability. In this paper, we propose F-LQE (Fuzzy Link Quality Estimator), a holistic metric that estimates link quality on the basis of four link quality properties (packet delivery, asymmetry, stability, and channel quality) that are expressed and combined using Fuzzy Logic. We demonstrate through an extensive experimental analysis that F-LQE is more reliable than existing estimators (e.g., PRR, WMEWMA, ETX, RNP, and four-bit), as it provides a finer-grained link classification. It is also more stable, as it has a lower coefficient of variation of link estimates. Importantly, we evaluate the impact of F-LQE on the performance of tree routing, specifically the CTP (Collection Tree Protocol). For this purpose, we adapted F-LQE to build a new routing metric for CTP, which we dubbed F-LQE/RM. Extensive experimental results obtained with state-of-the-art, widely used testbeds show that F-LQE/RM significantly improves CTP routing performance over four-bit (the default LQE of CTP) and ETX (another popular LQE). F-LQE/RM improves the end-to-end packet delivery by up to 16%, reduces the number of packet retransmissions by up to 32%, reduces the hop count by up to 4%, and improves the topology stability by up to 47%.
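As a rough illustration of the kind of fuzzy combination the abstract describes, the sketch below scores a link from the four named properties. The membership functions, their bounds, and the weighted min/mean aggregation are assumptions chosen for illustration, not the published F-LQE definition.

def fuzzy_link_quality(delivery, asymmetry, stability, channel):
    # All inputs assumed normalized to [0, 1]; higher is better except asymmetry.
    def high(x, lo, hi):
        # Piecewise-linear membership in the fuzzy set "good link property".
        return max(0.0, min(1.0, (x - lo) / (hi - lo)))

    mu = [
        high(delivery, 0.5, 1.0),        # packet delivery (e.g., PRR)
        high(1.0 - asymmetry, 0.5, 1.0), # low asymmetry is good
        high(stability, 0.5, 1.0),       # link stability
        high(channel, 0.5, 1.0),         # channel quality (e.g., SNR margin)
    ]
    beta = 0.6  # assumed weight between strict AND (min) and averaging
    return beta * min(mu) + (1 - beta) * sum(mu) / len(mu)

A call such as fuzzy_link_quality(0.9, 0.1, 0.8, 0.7) returns a single score in [0, 1] that can then be mapped to link classes or turned into a routing metric, which is the role F-LQE/RM plays for CTP in the paper.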
Abstract:
This paper employs the Lyapunov direct method for the stability analysis of fractional-order linear systems subject to input saturation. A new stability condition based on the saturation function is adopted for estimating the domain of attraction via an ellipsoid approach. To further improve this estimate, the auxiliary feedback is also supported by the concept of the stability region. The advantages of the proposed method are twofold: (1) the use of the Lyapunov method makes the problem straightforward to handle in both analysis and design; (2) the estimation leads to less conservative results. A numerical example illustrates the feasibility of the proposed method.
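For readers unfamiliar with the ellipsoid approach mentioned above, the LaTeX sketch below states the generic quadratic-Lyapunov construction of such an estimate; it is the standard textbook form, not the paper's specific saturation-dependent condition for fractional-order dynamics.

% Generic ellipsoidal estimate of the domain of attraction (illustrative):
% with a quadratic Lyapunov candidate V(x) = x^T P x, P symmetric positive
% definite, the invariant ellipsoid E(P, rho) serves as the estimate whenever
% V decreases along trajectories inside it despite the saturated input.
V(x) = x^{\top} P x, \qquad
\mathcal{E}(P,\rho) = \left\{\, x \in \mathbb{R}^{n} : x^{\top} P x \le \rho \,\right\},
\qquad P = P^{\top} \succ 0 .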
Abstract:
This paper reports an investigation into the estimation of the short-circuit impedance of power transformers, using fractional-order calculus to analytically study the influence of diffusion phenomena in the windings. The aim is to better characterize the medium-frequency behavior of the leakage inductances of power transformer models, which include terms representing the magnetic field diffusion process in the windings. Comparisons between calculated and measured values are shown and discussed.
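To make the role of fractional-order terms concrete, the LaTeX sketch below shows one common way a half-order term enters a winding impedance model to capture field diffusion; the structure and symbols are illustrative assumptions, not the model identified in the paper.

% Illustrative winding impedance with a half-order diffusion term
% (R: series resistance, L: leakage inductance, K: diffusion coefficient):
Z(s) \;=\; R \;+\; L\,s \;+\; K\,s^{1/2} .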
Abstract:
Proceedings of the International Conference on Computer Vision Theory and Applications, 361-365, 2013, Barcelona, Spain
Abstract:
This Master's thesis was carried out at the company Pietec Cortiças S.A. Pietec Cortiças S.A. is the industrial unit responsible for the production of technical cork stoppers within the Grupo Piedade. The objective of this thesis is the improvement of the production process of one of its sections, the Marking section. This section is responsible for marking the surface of the stopper, applying the surface treatment and packaging the stoppers. Optimizing the process of the Marking section, as the last section of the production process, will allow the company to gain competitive advantages. In order to achieve the proposed objective, an exhaustive survey of the production process and its operations was carried out. This analysis allowed the identification of possible points of waste, their evaluation, and the definition of possible improvements aimed at increasing productivity and reducing the number of non-conforming products. Once the critical points of the process had been identified, the improvement actions to be implemented were defined. The actions taken are based on the Lean philosophy and its principles, using some of its tools to achieve the stated objectives. The Plan-Do-Check-Act (PDCA) analysis tool was the base tool of the project, guiding the elaboration of the action plan, the implementation of the 5S and Single Minute Exchange of Die (SMED) tools, the verification of results and the plan for sustaining the improvements achieved. After implementing the measures defined for the stopper surface-marking process, a 23% improvement in the average setup time of the marking machine was measured. This overall improvement was achieved through interventions in the emptying of the marking machine circuit, in the storage procedure for the marking moulds and in the modification of the adjustment mechanisms of the marking and orienting machines. In the stopper packaging process, the implemented measures produced a 1.7% increase in the number of stoppers produced and a 3.6% reduction in the number of non-conforming stoppers. The results obtained in the project demonstrate that it is possible to keep improving the current state of the processes. It was also found that, with the adoption of suitable analysis and improvement-proposal tools, it is possible to act on the processes and obtain improvements in the short term without the need for large investments.
Abstract:
It is imperative to accept that failures can and will occur, even in meticulously designed distributed systems, and to design proper measures to counter those failures. Passive replication minimises resource consumption by only activating redundant replicas in case of failure, as providing and applying state updates is typically less resource-demanding than requesting execution. However, most existing solutions for passive fault tolerance are designed and configured at design time, explicitly and statically identifying the most critical components and their number of replicas, and thus lack the flexibility needed to handle the runtime dynamics of distributed component-based embedded systems. This paper proposes a cost-effective adaptive fault tolerance solution with significantly lower overhead than a strict active redundancy-based approach, achieving high error coverage with the minimum amount of redundancy. The activation of passive replicas is coordinated through a feedback-based coordination model that reduces the complexity of the interactions needed among components until a new collective global service solution is determined, improving the overall maintainability and robustness of the system.
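As a hedged illustration of the passive-replication idea discussed above (cheap state updates while the primary is healthy, activation of the replica only on suspected failure), the sketch below uses a heartbeat timeout as the failure detector. The class, method names, timeout policy and replica interface are assumptions for illustration, not the paper's feedback-based coordination model.

import time

class PassiveReplicaCoordinator:
    # Activates a passive replica only after the primary misses its heartbeats.
    def __init__(self, heartbeat_timeout=2.0):
        self.heartbeat_timeout = heartbeat_timeout
        self.last_heartbeat = {}    # component id -> timestamp of last heartbeat
        self.passive_replicas = {}  # component id -> passive replica object

    def register(self, component_id, replica):
        self.passive_replicas[component_id] = replica
        self.last_heartbeat[component_id] = time.monotonic()

    def on_heartbeat(self, component_id, state_update):
        # Passive replication: the primary periodically ships state updates,
        # which is cheaper than re-executing every request on the replica.
        self.last_heartbeat[component_id] = time.monotonic()
        self.passive_replicas[component_id].apply_state(state_update)

    def check_failures(self):
        now = time.monotonic()
        for cid, ts in self.last_heartbeat.items():
            if now - ts > self.heartbeat_timeout:
                # Promote the passive replica so the service keeps running.
                self.passive_replicas[cid].activate()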
Abstract:
OBJECTIVE: To describe the implementation of a successful program to reduce the doses (cefazolin 2 g to 1 g) used for antimicrobial prophylaxis. METHODS: Evaluation of an intervention program to reduce prophylactic antimicrobial doses. The intervention included weekly staff discussions, automatic dispensation of a 1 g vial of cefazolin by the pharmacy unless expressly requested otherwise by the surgeon, and an increase in post-discharge surveillance as a strategy to reassure surgeons of the safety of the reduction. In the pre- and post-intervention periods, antimicrobial consumption and surgical site infection (SSI) rates were measured prospectively. RESULTS: There were 5,164 and 5,204 deliveries in 2001-2002 and 2003-2004, respectively; 1,524 (29.5%) and 1,363 (26%) were cesarean sections. There was a 45% decrease in the average number of cefazolin vials used per cesarean section (from 2.29 to 1.25). The proportion of patients evaluated increased from 16% to 67%, and the SSI rates in the two periods were 3.34% and 2.42%, respectively. CONCLUSION: A broad intervention, including administrative and educational measures, led to high compliance with the dose reduction and saved more than US$4,000 in cefazolin, considered important because government reimbursement in Brazil for a cesarean section is $80.
Abstract:
Quality of life is a concept influenced by social, economic, psychological, spiritual and medical factors. More specifically, the perceived quality of an individual's daily life is an assessment of their well-being or lack thereof. In this context, information technologies may help in the management of healthcare services for chronic patients, for example by estimating the patient's quality of life and helping the medical staff take appropriate measures to increase it. This paper describes a quality-of-life estimation system developed using information technologies and the application of data mining algorithms to the clinical data of patients with cancer from the Otorhinolaryngology and Head and Neck services of an oncology institution. The system was evaluated with a sample of 3013 patients. The results show that some variables may be significant predictors of the patient's quality of life: years of smoking (p-value 0.049) and size of the tumor (p-value < 0.001). For classifying quality of life from these variables, the best accuracy was obtained by applying John Platt's sequential minimal optimization (SMO) algorithm for training a support vector classifier. In conclusion, data mining techniques give access to additional patient information, helping physicians assess quality of life and make well-informed clinical decisions.
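As an illustration of training an SMO-based support vector classifier on clinical predictors such as the two highlighted above, the sketch below uses scikit-learn, whose SVC is also trained with an SMO-type solver; this is not the authors' pipeline, and the file name and column names are hypothetical.

import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical extract of the clinical data; column names are assumptions.
df = pd.read_csv("patients.csv")
X = df[["years_of_smoking", "tumor_size"]]   # the two predictors named above
y = df["quality_of_life_class"]              # assumed quality-of-life label

# Standardize features, then train an SVM classifier (SMO-type solver inside).
model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(model, X, y, cv=10)  # accuracy via 10-fold cross-validation
print(scores.mean())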
Abstract:
The adhesive bonding technique enables both weight and complexity reduction in structures that require some joining technique on account of fabrication or component-shape issues. Because of this, adhesive bonding is also one of the main repair methods for metal and composite structures, through the strap and scarf configurations. The availability of strength prediction techniques for adhesive joints is essential for their generalized application, and it can rely on different approaches, such as mechanics of materials, conventional fracture mechanics or damage mechanics. The latter two techniques depend on the measurement of the fracture toughness (GC) of the materials. Within the framework of damage mechanics, a valid option is the use of Cohesive Zone Modelling (CZM) coupled with Finite Element (FE) analyses. In this work, CZM laws for adhesive joints considering three adhesives with varying ductility were estimated. The End-Notched Flexure (ENF) test geometry was selected based on overall test simplicity and results accuracy. The adhesives Araldite® AV138, Araldite® 2015 and Sikaforce® 7752 were studied between high-strength aluminium adherends. Estimation of the CZM laws was carried out by an inverse methodology based on a curve-fitting procedure, which enabled a precise estimation of the adhesive joints' behaviour. The work allowed the conclusion that a unique set of shear fracture toughness (GIIC) and shear cohesive strength (ts0) exists for each specimen that accurately reproduces the adhesive layer's behaviour. With this information, the accurate strength prediction of adhesive joints loaded in shear is made possible by CZM.
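For context on the quantities GIIC and ts0 mentioned above, the LaTeX sketch below writes out a triangular shear traction-separation law, one common CZM shape; the triangular form is an assumption for illustration, since the abstract does not state which law shape was fitted.

% Triangular shear traction-separation law (illustrative shape assumption).
% t_s^0: shear cohesive strength, delta_s^0: separation at peak traction,
% delta_s^f: failure separation, G_IIc: shear fracture toughness (triangle area).
t_s(\delta_s) =
\begin{cases}
t_s^{0}\,\dfrac{\delta_s}{\delta_s^{0}}, & 0 \le \delta_s \le \delta_s^{0},\\[6pt]
t_s^{0}\,\dfrac{\delta_s^{f}-\delta_s}{\delta_s^{f}-\delta_s^{0}}, & \delta_s^{0} < \delta_s \le \delta_s^{f},
\end{cases}
\qquad
G_{\mathrm{IIc}} = \tfrac{1}{2}\, t_s^{0}\, \delta_s^{f}.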
Abstract:
This work aims to shed some light on longshore sediment transport (LST) on the highly energetic northwest coast of Portugal. Data obtained through a sand-tracer experiment are compared with data obtained from the original and the newly re-evaluated longshore sediment transport formulas (the USACE Waterways Experiment Station's Coastal Engineering and Research Center, Kamphuis, and Bayram bulk formulas) to assess their performance. The field experiment with dyed sand was held at Ofir Beach during one tidal cycle under medium wave-energy conditions. Local hydrodynamic conditions and beach topography were recorded. The tracer was driven southward in response to the local swell and the wind- and wave-induced currents (Hsb = 0.75 m, Tp = 11.5 s, θb = 8-12°). The LST was estimated by using a linear sediment transport flux approach. The obtained value (2.3 × 10⁻³ m³·s⁻¹) approached the estimate provided by the original Bayram formula (2.5 × 10⁻³ m³·s⁻¹). The other formulas overestimated the transport, but the estimates resulting from the newly re-evaluated formulas also yield approximate results. Therefore, the results of this work indicate that the Bayram formula may give satisfactory results for predicting the longshore sediment transport at Ofir Beach.
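As a hedged sketch of how a tracer experiment yields a volumetric transport rate of the kind quoted above, the classical spatial-integration tracer relation is shown below in LaTeX; the notation is generic and may differ from the exact linear flux formulation used in the paper.

% Classical tracer estimate of the longshore transport rate (illustrative).
% V_c: longshore advection velocity of the tracer centroid,
% d_m: mixing (burial) depth, W: width of the active transport zone.
Q_{\mathrm{LST}} \;=\; V_c \, d_m \, W \quad \left[\mathrm{m^{3}\,s^{-1}}\right].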