46 results for Distance-based techniques
Abstract:
Context. HD 181231 is a B5IVe star, which has been observed with the CoRoT satellite during ~5 consecutive months and simultaneously from the ground in spectroscopy and spectropolarimetry. Aims. By analysing these data, we aim to detect and characterize as many pulsation frequencies as possible, and to search for the presence of beating effects possibly at the origin of the Be phenomenon. Our results will also provide a basis for seismic modelling. Methods. The fundamental parameters of the star are determined from spectral fitting and from the study of the circumstellar emission. The CoRoT photometric data and ground-based spectroscopy are analysed using several Fourier techniques: CLEAN-NG, PASPER, and TISAFT, as well as a time-frequency technique. A search for a magnetic field is performed by applying the LSD technique to the spectropolarimetric data. Results. We find that HD 181231 is a B5IVe star seen at an inclination of ~45 degrees. No magnetic field is detected in its photosphere. We detect at least 10 independent significant frequencies of variation among the 54 detected frequencies, interpreted in terms of non-radial pulsation modes and rotation. Two longer-term variations are also detected: one at ~14 days resulting from a beating effect between the two main frequencies of short-term variations, the other at ~116 days due either to a beating of frequencies or to a zonal pulsation mode. Conclusions. Our analysis of the CoRoT light curve and ground-based spectroscopic data of HD 181231 has led to the determination of the fundamental and pulsational parameters of the star, including beating effects. This will allow precise seismic modelling of this star.
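The frequency search described above can be illustrated with a minimal FFT periodogram. The CLEAN-NG, PASPER, and TISAFT pipelines are far more elaborate (prewhitening, irregular sampling), so this is only a sketch of the underlying idea; the two frequencies and the time base below are invented for illustration:

```python
import numpy as np

def detect_frequencies(times, flux, n_peaks=3):
    """Locate the dominant variation frequencies in an evenly sampled
    light curve via a plain FFT periodogram (a real pipeline would
    prewhiten each peak before searching for the next)."""
    dt = times[1] - times[0]
    amp = np.abs(np.fft.rfft(flux - flux.mean()))
    freqs = np.fft.rfftfreq(len(flux), d=dt)
    order = np.argsort(amp)[::-1]          # strongest peaks first
    return freqs[order[:n_peaks]]

# Two close frequencies beat at |f1 - f2|; the ~14 d term in the
# abstract arises the same way from the two main short-term frequencies.
t = np.arange(0.0, 150.0, 0.01)            # 150 days, even sampling
f1, f2 = 1.24, 1.31                        # cycles/day (illustrative)
signal = np.sin(2*np.pi*f1*t) + 0.8*np.sin(2*np.pi*f2*t)
peaks = sorted(detect_frequencies(t, signal, n_peaks=2))
beat_period = 1.0 / abs(f1 - f2)           # ~14.3 days
```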
Abstract:
Using a combination of density functional theory and recursive Green's functions techniques, we present a full description of a large-scale sensor, accounting for disorder and different coverages. Here, we use this method to demonstrate the functionality of nitrogen-rich carbon nanotubes as ammonia sensors as an example. We show how the molecules one wishes to detect bind to the most relevant defects on the nanotube, describe how these interactions lead to changes in the electronic transport properties of each isolated defect, and demonstrate that there are significant resistance changes even in the presence of disorder, elucidating how a realistic nanosensor works.
Abstract:
The goal of this paper is to study and propose a new technique for noise reduction used during the reconstruction of speech signals, particularly for biomedical applications. The proposed method is based on Kalman filtering in the time domain combined with spectral subtraction. Comparison with a discrete Kalman filter in the frequency domain shows better performance of the proposed technique. The performance is evaluated using the segmental signal-to-noise ratio and the Itakura-Saito distance. Results have shown that the Kalman filter in the time domain combined with spectral subtraction is more robust and efficient, improving the Itakura-Saito distance by up to four times. (C) 2007 Elsevier Ltd. All rights reserved.
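A minimal sketch of the two ingredients — magnitude spectral subtraction followed by a scalar time-domain Kalman filter — is given below. The noise spectrum is taken as known (oracle), and all tuning values (`q`, `r`, the 0.3 noise level, the 3 Hz tone) are invented for illustration, not the paper's settings:

```python
import numpy as np

def spectral_subtraction(noisy, noise_mag):
    """Subtract an estimated noise magnitude spectrum, keeping the noisy phase."""
    spec = np.fft.rfft(noisy)
    mag = np.maximum(np.abs(spec) - noise_mag, 0.0)   # floor negative magnitudes
    return np.fft.irfft(mag * np.exp(1j * np.angle(spec)), n=len(noisy))

def kalman_1d(z, q=1e-2, r=0.1):
    """Scalar random-walk Kalman filter applied sample by sample in time."""
    x, p, out = 0.0, 1.0, []
    for zi in z:
        p += q                      # predict: state uncertainty grows
        k = p / (p + r)             # Kalman gain
        x += k * (zi - x)           # update with the new noisy sample
        p *= 1.0 - k
        out.append(x)
    return np.array(out)

# Noisy low-frequency tone standing in for a speech signal.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 1000, endpoint=False)
clean = np.sin(2*np.pi*3*t)
noisy = clean + 0.3*rng.standard_normal(t.size)
noise_mag = np.abs(np.fft.rfft(noisy - clean))        # oracle noise spectrum
enhanced = kalman_1d(spectral_subtraction(noisy, noise_mag))
```

In practice the noise spectrum would be estimated from speech-free frames rather than known exactly.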
Abstract:
The productivity associated with commonly available disassembly methods today seldom makes disassembly the preferred end-of-life solution for massive take-back product streams. Systematic reuse of parts or components, or recycling of pure material fractions, is often not achievable in an economically sustainable way. In this paper a case-based review of current disassembly practices is used to analyse the factors influencing disassembly feasibility. Data mining techniques were used to identify major factors influencing the profitability of disassembly operations. Case characteristics such as involvement of the product manufacturer in the end-of-life treatment and continuous ownership are some of the important dimensions. Economic models demonstrate that the efficiency of disassembly operations should be increased by an order of magnitude to assure the competitiveness of ecologically preferred, disassembly-oriented end-of-life scenarios for large waste electrical and electronic equipment (WEEE) streams. Technological means available to increase the productivity of the disassembly operations are summarized. Automated disassembly techniques can contribute to the robustness of the process, but cannot close the efficiency gap unless combined with appropriate product design measures. Innovative, reversible joints, collectively activated by external trigger signals, form a promising approach to low-cost mass disassembly in this context. A short overview of the state of the art in the development of such self-disassembling joints is included. (c) 2008 CIRP.
Abstract:
Recently, semi-empirical models have been proposed to estimate the flow boiling heat transfer coefficient, saturated CHF and pressure drop in micro-scale channels. Most of the models were developed based on elongated bubbles and annular flows, in view of the fact that these flow patterns are predominant in smaller channels. In these models the liquid film thickness plays an important role, which emphasizes that accurate measurement of the liquid film thickness is a key point in validating them. On the other hand, several techniques have been successfully applied to measure liquid film thicknesses during condensation and evaporation under macro-scale conditions. However, although this subject has been targeted by several leading laboratories around the world, it seems that there is no conclusive result describing a successful technique capable of measuring the dynamic liquid film thickness during evaporation inside micro-scale round channels. This work presents a comprehensive literature review of the methods used to measure liquid film thickness in macro- and micro-scale systems. The methods are described and the main difficulties related to their use in micro-scale systems are identified. Based on this discussion, the most promising methods to measure dynamic liquid film thickness in micro-scale channels are identified. (C) 2009 Elsevier Inc. All rights reserved.
Abstract:
In this article, a novel algorithm based on the chemotaxis process of Escherichia coli is developed to solve multiobjective optimization problems. The algorithm uses a fast nondominated sorting procedure, communication between the colony members, and a simple chemotactical strategy to change the bacterial positions in order to explore the search space and find several optimal solutions. The proposed algorithm is validated using 11 benchmark problems and implementing three different performance measures to compare its performance with the NSGA-II genetic algorithm and with the particle swarm-based algorithm NSPSO. (C) 2009 Elsevier Ltd. All rights reserved.
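The fast nondominated sorting step mentioned above — the same ranking used by NSGA-II — can be sketched in a few lines; the objective vectors at the end are toy values for illustration:

```python
def nondominated_sort(points):
    """Split solutions into successive Pareto fronts (minimization).
    Each point is a tuple of objective values."""
    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
    n = len(points)
    better_than = [[] for _ in range(n)]   # indices each solution dominates
    dom_count = [0] * n                    # how many solutions dominate i
    for i in range(n):
        for j in range(n):
            if i != j and dominates(points[i], points[j]):
                better_than[i].append(j)
                dom_count[j] += 1
    fronts, front = [], [i for i in range(n) if dom_count[i] == 0]
    while front:                           # peel off one front at a time
        fronts.append(front)
        nxt = []
        for i in front:
            for j in better_than[i]:
                dom_count[j] -= 1
                if dom_count[j] == 0:
                    nxt.append(j)
        front = nxt
    return fronts

# Front 0 is the current Pareto-optimal set.
fronts = nondominated_sort([(1, 5), (2, 2), (5, 1), (3, 3), (4, 4)])
```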
Abstract:
Modal filters may be obtained by a properly designed weighted sum of the output signals of an array of sensors distributed on the host structure. Although several research groups have been interested in techniques for designing and implementing modal filters based on a given array of sensors, the effect of the array topology on the effectiveness of the modal filter has received much less attention. In particular, it is known that some parameters, such as size, shape and location of a sensor, are very important in determining the observability of a vibration mode. Hence, this paper presents a methodology for the topological optimization of an array of sensors in order to maximize the effectiveness of a set of selected modal filters. This is done using a genetic algorithm optimization technique for the selection of 12 piezoceramic sensors from an array of 36 piezoceramic sensors regularly distributed on an aluminum plate, which maximize the filtering performance, over a given frequency range, of a set of modal filters, each one aiming to isolate one of the first vibration modes. The vectors of the weighting coefficients for each modal filter are evaluated using QR decomposition of the complex frequency response function matrix. Results show that the array topology is not very important for lower frequencies but it greatly affects the filter effectiveness for higher frequencies. Therefore, it is possible to improve the effectiveness and frequency range of a set of modal filters by optimizing the topology of an array of sensors. Indeed, using 12 properly located piezoceramic sensors bonded on an aluminum plate it is shown that the frequency range of a set of modal filters may be enlarged by 25-50%.
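The weighting-coefficient step can be sketched compactly: given the sensor responses for each mode, the weights for one modal filter come from a least-squares solve (here `numpy.linalg.lstsq` stands in for the QR-based computation on the measured FRF matrix, and the mode shapes are random placeholders, not plate data):

```python
import numpy as np

def modal_filter_weights(mode_shapes, target):
    """Weights w such that the weighted sensor sum passes mode `target`
    and rejects the others: solve mode_shapes.T @ w = e_target in the
    least-squares sense (a QR factorization underlies this solve)."""
    e = np.zeros(mode_shapes.shape[1])
    e[target] = 1.0
    w, *_ = np.linalg.lstsq(mode_shapes.T, e, rcond=None)
    return w

# 12 sensors observing 4 modes (random stand-in for measured FRF data).
rng = np.random.default_rng(1)
shapes = rng.standard_normal((12, 4))
w = modal_filter_weights(shapes, target=1)
response = shapes.T @ w          # mode 1 passed, others rejected
```

With more sensors than modes the system is overdetermined per frequency line, which is why sensor placement (topology) governs how well the rejection holds across a band.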
Abstract:
The central issue for pillar design in underground coal mining is the in situ uniaxial compressive strength (σ_cm). The paper proposes a new method for estimating in situ uniaxial compressive strength in coal seams based on laboratory strength and P wave propagation velocity. It describes the collection of samples in the Bonito coal seam, Fontanella Mine, southern Brazil, the techniques used for the structural mapping of the coal seam and determination of seismic wave propagation velocity as well as the laboratory procedures used to determine the strength and ultrasonic wave velocity. The results obtained using the new methodology are compared with those from seven other techniques for estimating in situ rock mass uniaxial compressive strength.
Abstract:
This paper aims to find relations between the socioeconomic characteristics, activity participation, land use patterns and travel behavior of the residents in the Sao Paulo Metropolitan Area (SPMA) by using Exploratory Multivariate Data Analysis (EMDA) techniques. The variables influencing travel pattern choices are investigated using: (a) Cluster Analysis (CA), grouping and characterizing the Traffic Zones (TZ) and proposing the independent variable called Origin Cluster, and (b) a Decision Tree (DT) to find a priori unknown relations among socioeconomic characteristics, land use attributes of the origin TZ and destination choices. The analysis was based on the origin-destination home-interview survey carried out in the SPMA in 1997. The DT application revealed the variables of greatest influence on travel pattern choice. The most important independent variable considered by the DT is car ownership, followed by the use of transportation "credits" for the transit tariff and, finally, the activity participation variables and Origin Cluster. With these results, it was possible to analyze the influence of family income, car ownership, position of the individual in the family, use of transportation "credits" for the transit tariff (mainly for travel mode sequence choice), activity participation (activity sequence choice) and Origin Cluster (destination/travel distance choice). (c) 2010 Elsevier Ltd. All rights reserved.
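How a decision tree singles out car ownership can be seen in its split criterion: the variable with the largest impurity reduction is chosen first. A toy Gini-gain computation (the four travel records below are invented, not the 1997 survey data):

```python
def gini(labels):
    """Gini impurity of a set of class labels."""
    n = len(labels)
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def gini_gain(rows, labels, feature):
    """Impurity reduction from splitting the records on one feature —
    the criterion a decision tree uses to rank candidate variables."""
    remainder = 0.0
    for v in set(r[feature] for r in rows):
        sub = [l for r, l in zip(rows, labels) if r[feature] == v]
        remainder += len(sub) / len(labels) * gini(sub)
    return gini(labels) - remainder

# Toy records: mode choice tracks car ownership, not income,
# so splitting on "car" removes all impurity and "income" none.
rows = [{"car": 1, "income": "high"}, {"car": 1, "income": "low"},
        {"car": 0, "income": "high"}, {"car": 0, "income": "low"}]
modes = ["drive", "drive", "transit", "transit"]
```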
Abstract:
This work presents the development and implementation of an artificial neural network (ANN) based algorithm for transmission line distance protection. The algorithm was developed to be used on any transmission line regardless of its configuration or voltage level. The described ANN-based algorithm needs no topology adaptation or ANN parameter adjustment when applied to different electrical systems. This feature makes the solution unique, since the ANN-based solutions presented to date were developed for particular transmission lines and therefore cannot be implemented in commercial relays. (c) 2011 Elsevier Ltd. All rights reserved.
Abstract:
Most post-processors for boundary element (BE) analysis use an auxiliary domain mesh to display domain results, working against the profitable modelling process of a pure boundary discretization. This paper introduces a novel visualization technique which preserves the basic properties of the boundary element methods. The proposed algorithm does not require any domain discretization and is based on the direct and automatic identification of isolines. Another critical aspect of the visualization of domain results in BE analysis is the effort required to evaluate results in interior points. In order to tackle this issue, the present article also provides a comparison between the performance of two different BE formulations (conventional and hybrid). In addition, this paper presents an overview of the most common post-processing and visualization techniques in BE analysis, such as the classical algorithms of scan line and the interpolation over a domain discretization. The results presented herein show that the proposed algorithm offers a very high performance compared with other visualization procedures.
Abstract:
Coatings of a NiCrAlC intermetallic-based alloy were applied on AISI 316L stainless steel substrates using a high velocity oxygen fuel torch. The influence of the spray parameters on friction and abrasive wear resistance was investigated using an instrumented rubber wheel abrasion test able to measure the friction forces. The corrosion behaviour of the coatings was studied with electrochemical techniques and compared with the corrosion resistance of the substrate material. Specimens prepared using lower O2/C3H8 ratios showed smaller porosity values. The abrasion wear rate of the NiCrAlC coatings was much smaller than that described in the literature for bulk as-cast materials with similar composition and one order of magnitude higher than that of the bulk cast and heat-treated (aged) NiCrAlC alloy. All coatings showed higher corrosion resistance than the AISI 316L substrate in HCl (5%) aqueous solution at 40 degrees C.
Abstract:
Modern Integrated Circuit (IC) design is characterized by a strong trend of Intellectual Property (IP) core integration into complex system-on-chip (SOC) architectures. These cores require thorough verification of their functionality to avoid erroneous behavior in the final device. Formal verification methods are capable of detecting any design bug. However, due to state explosion, their use remains limited to small circuits. Alternatively, simulation-based verification can explore hardware descriptions of any size, although the corresponding stimulus generation, as well as functional coverage definition, must be carefully planned to guarantee its efficacy. In general, static input space optimization methodologies have shown better efficiency and results than, for instance, Coverage Directed Verification (CDV) techniques, although they act on different facets of the monitored system and are not exclusive. This work presents a constrained-random simulation-based functional verification methodology where, on the basis of the Parameter Domains (PD) formalism, irrelevant and invalid test case scenarios are removed from the input space. To this purpose, a tool to automatically generate PD-based stimuli sources was developed. Additionally, we have developed a second tool to generate functional coverage models that fit exactly to the PD-based input space. Both the input stimuli and coverage model enhancements resulted in a notable testbench efficiency increase compared to testbenches with traditional stimulation and coverage scenarios: 22% simulation time reduction when generating stimuli with our PD-based stimuli sources (still with a conventional coverage model), and 56% simulation time reduction when combining our stimuli sources with their corresponding, automatically generated, coverage models.
Abstract:
In this paper the continuous Verhulst dynamic model is used to synthesize a new distributed power control algorithm (DPCA) for use in direct sequence code division multiple access (DS-CDMA) systems. The Verhulst model was initially designed to describe the population growth of biological species under food and physical space restrictions. The discretization of the corresponding differential equation is accomplished via the Euler numeric integration (ENI) method. Analytical convergence conditions for the proposed DPCA are also established. Several properties of the proposed recursive algorithm, such as Euclidean distance from the optimum vector after convergence, convergence speed, normalized mean squared error (NSE), average power consumption per user, performance under dynamic channels, and implementation complexity aspects, are analyzed through simulations. The simulation results are compared with two other DPCAs: the classic algorithm derived by Foschini and Miljanic and the sigmoidal algorithm of Uykan and Koivo. Under estimated-error conditions, the proposed DPCA exhibits smaller discrepancy from the optimum power vector solution and better convergence (under fixed and adaptive convergence factor) than the classic and sigmoidal DPCAs. (C) 2010 Elsevier GmbH. All rights reserved.
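The flavour of a Verhulst-based update can be sketched as a logistic power recursion discretized with an Euler step: each user's power grows while its SINR is below target and saturates as the target is reached. This is an illustrative rewrite of the idea, not the paper's exact equations, and the gain matrix, noise level and target SINR are invented:

```python
import numpy as np

def verhulst_dpca(G, sigma2, gamma_t, alpha=0.3, iters=200):
    """Logistic (Verhulst-style) distributed power control, Euler-discretized.
    G[i, j] is the link gain from user j to receiver i; gamma_t is the
    common target SINR. Returns final powers and achieved SINRs."""
    p = np.full(G.shape[0], 1e-3)                 # small initial powers
    for _ in range(iters):
        interf = sigma2 + G @ p - np.diag(G) * p  # other-user interference
        gamma = np.diag(G) * p / interf           # per-user SINR
        # Euler step of dp/dt = alpha * p * (1 - gamma/gamma_t):
        p = p * (1.0 + alpha * (1.0 - gamma / gamma_t))
    interf = sigma2 + G @ p - np.diag(G) * p
    return p, np.diag(G) * p / interf

# Two-user toy system; the fixed point solves gamma_i = gamma_t exactly.
G = np.array([[1.0, 0.1], [0.2, 1.0]])
p, gamma = verhulst_dpca(G, sigma2=0.01, gamma_t=5.0)
```

At equilibrium each SINR equals the target, the same fixed point the Foschini-Miljanic recursion converges to; only the trajectory toward it differs.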
Abstract:
Recently, the development of industrial processes has brought about technologically complex systems. This development generated the need for research into mathematical techniques capable of dealing with project complexity and validation. Fuzzy models have been receiving particular attention in the area of nonlinear systems identification and analysis due to their capacity to approximate nonlinear behavior and deal with uncertainty. A fuzzy rule-based model suitable for the approximation of many systems and functions is the Takagi-Sugeno (TS) fuzzy model. TS fuzzy models are nonlinear systems described by a set of if-then rules which give local linear representations of an underlying system. Such models can approximate a wide class of nonlinear systems. In this paper a performance analysis of a system based on a TS fuzzy inference system for the calibration of electronic compass devices is considered. The contribution of the evaluated TS fuzzy inference system is to reduce the error obtained in data acquisition from a digital electronic compass. For the reliable operation of the TS fuzzy inference system, adequate error measurements must be taken. The error noise must be filtered before the application of the TS fuzzy inference system. The proposed method demonstrated an effectiveness of 57% at reducing the total error in the considered tests. (C) 2011 Elsevier Ltd. All rights reserved.
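A first-order Takagi-Sugeno evaluation can be sketched as follows: triangular antecedent memberships paired with local linear consequents, blended by normalized firing strengths. The rule base and breakpoints below are invented toy values, not the compass-calibration rules:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with peak at b and support [a, c]."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12),
                                 (c - x) / (c - b + 1e-12)), 0.0)

def ts_infer(x, rules):
    """First-order Takagi-Sugeno inference: each rule pairs a membership
    with a local linear model y = p*x + q; the output is the
    firing-strength-weighted average of the local models."""
    w = np.array([mu(x) for mu, _, _ in rules])   # firing strengths
    y = np.array([p * x + q for _, p, q in rules])  # local linear outputs
    return float(np.sum(w * y) / np.sum(w))

# Illustrative rule base covering x in [0, 2]:
rules = [
    (lambda x: tri(x, -1.0, 0.0, 1.0),  1.0, 0.0),  # near 0: y = x
    (lambda x: tri(x,  0.0, 1.0, 2.0),  0.0, 1.0),  # near 1: y = 1
    (lambda x: tri(x,  1.0, 2.0, 3.0), -1.0, 3.0),  # near 2: y = 3 - x
]
```

Between breakpoints the output interpolates smoothly between the neighbouring local models, which is how a TS system approximates a nonlinear calibration curve from a handful of rules.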