940 results for Distance-based techniques
Abstract:
In this article, a novel algorithm based on the chemotaxis process of Escherichia coli is developed to solve multiobjective optimization problems. The algorithm uses a fast nondominated sorting procedure, communication between colony members and a simple chemotactic strategy to change the bacterial positions in order to explore the search space and find several optimal solutions. The proposed algorithm is validated using 11 benchmark problems and implementing three different performance measures to compare its performance with the NSGA-II genetic algorithm and with the particle swarm-based algorithm NSPSO. (C) 2009 Elsevier Ltd. All rights reserved.
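The fast nondominated sorting step mentioned above (popularized by NSGA-II, which the paper compares against) can be sketched in Python; the objective vectors below are illustrative, and minimization is assumed:

```python
def dominates(a, b):
    """True if solution a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def fast_nondominated_sort(objs):
    """Split a list of objective vectors into successive Pareto fronts.

    Returns fronts as lists of indices into `objs`; front 0 is the
    nondominated set, front 1 is nondominated once front 0 is removed, etc.
    """
    n = len(objs)
    dominated_by = [[] for _ in range(n)]  # solutions that i dominates
    counts = [0] * n                       # how many solutions dominate i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if dominates(objs[i], objs[j]):
                dominated_by[i].append(j)
            elif dominates(objs[j], objs[i]):
                counts[i] += 1
        if counts[i] == 0:
            fronts[0].append(i)
    k = 0
    while fronts[k]:
        nxt = []
        for i in fronts[k]:
            for j in dominated_by[i]:
                counts[j] -= 1
                if counts[j] == 0:
                    nxt.append(j)
        k += 1
        fronts.append(nxt)
    return fronts[:-1]
```

In a bacterial-colony setting, each colony member would carry an objective vector and the fronts would rank which positions survive the chemotactic moves.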
Abstract:
Modal filters may be obtained by a properly designed weighted sum of the output signals of an array of sensors distributed on the host structure. Although several research groups have been interested in techniques for designing and implementing modal filters based on a given array of sensors, the effect of the array topology on the effectiveness of the modal filter has received much less attention. In particular, it is known that some parameters, such as size, shape and location of a sensor, are very important in determining the observability of a vibration mode. Hence, this paper presents a methodology for the topological optimization of an array of sensors in order to maximize the effectiveness of a set of selected modal filters. This is done using a genetic algorithm optimization technique for the selection of 12 piezoceramic sensors from an array of 36 piezoceramic sensors regularly distributed on an aluminum plate, which maximize the filtering performance, over a given frequency range, of a set of modal filters, each one aiming to isolate one of the first vibration modes. The vectors of the weighting coefficients for each modal filter are evaluated using QR decomposition of the complex frequency response function matrix. Results show that the array topology is not very important for lower frequencies but it greatly affects the filter effectiveness for higher frequencies. Therefore, it is possible to improve the effectiveness and frequency range of a set of modal filters by optimizing the topology of an array of sensors. Indeed, using 12 properly located piezoceramic sensors bonded on an aluminum plate it is shown that the frequency range of a set of modal filters may be enlarged by 25-50%.
Abstract:
The central issue for pillar design in underground coal mining is the in situ uniaxial compressive strength (σ_cm). The paper proposes a new method for estimating the in situ uniaxial compressive strength of coal seams based on laboratory strength and P-wave propagation velocity. It describes the collection of samples in the Bonito coal seam, Fontanella Mine, southern Brazil, the techniques used for the structural mapping of the coal seam and determination of seismic wave propagation velocity, as well as the laboratory procedures used to determine the strength and ultrasonic wave velocity. The results obtained using the new methodology are compared with those from seven other techniques for estimating in situ rock mass uniaxial compressive strength.
Abstract:
This paper aims to find relations between the socioeconomic characteristics, activity participation, land use patterns and travel behavior of residents in the Sao Paulo Metropolitan Area (SPMA) by using Exploratory Multivariate Data Analysis (EMDA) techniques. The variables influencing travel pattern choices are investigated using: (a) Cluster Analysis (CA), grouping and characterizing the Traffic Zones (TZ) and proposing the independent variable called Origin Cluster, and (b) Decision Tree (DT) analysis to find a priori unknown relations among socioeconomic characteristics, land use attributes of the origin TZ and destination choices. The analysis was based on the origin-destination home-interview survey carried out in the SPMA in 1997. The DT application revealed the variables of greatest influence on travel pattern choice. The most important independent variable considered by the DT is car ownership, followed by the use of transportation "credits" for the transit tariff and, finally, the activity participation variables and Origin Cluster. With these results, it was possible to analyze the influence of family income, car ownership, position of the individual in the family, use of transportation "credits" for the transit tariff (mainly for travel mode sequence choice), activity participation (activity sequence choice) and Origin Cluster (destination/travel distance choice). (c) 2010 Elsevier Ltd. All rights reserved.
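The core of the Decision Tree step is choosing, at each node, the split that best separates the travel-pattern classes. A minimal Gini-impurity split selector, with made-up survey records (the attribute names `cars` and `income` and the class labels are illustrative, not the survey's actual variables), might look like:

```python
from collections import Counter

def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def best_split(records, labels):
    """Pick the (attribute, value) equality split with the largest Gini gain."""
    base, n = gini(labels), len(labels)
    best = (None, None, 0.0)  # attribute, value, gain
    for attr in records[0]:
        for value in {r[attr] for r in records}:
            left = [l for r, l in zip(records, labels) if r[attr] == value]
            right = [l for r, l in zip(records, labels) if r[attr] != value]
            if not left or not right:
                continue
            gain = base - (len(left) * gini(left) + len(right) * gini(right)) / n
            if gain > best[2]:
                best = (attr, value, gain)
    return best
```

Applied recursively, this yields the tree; on data like the paper's, the first chosen attribute would be the one the DT found most informative (car ownership).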
Abstract:
This work presents the development and implementation of an artificial neural network based algorithm for transmission lines distance protection. This algorithm was developed to be used in any transmission line regardless of its configuration or voltage level. The described ANN-based algorithm does not need any topology adaptation or ANN parameters adjustment when applied to different electrical systems. This feature makes this solution unique since all ANN-based solutions presented until now were developed for particular transmission lines, which means that those solutions cannot be implemented in commercial relays. (c) 2011 Elsevier Ltd. All rights reserved.
Abstract:
Most post-processors for boundary element (BE) analysis use an auxiliary domain mesh to display domain results, which works against the main advantage of a pure boundary discretization. This paper introduces a novel visualization technique which preserves the basic properties of boundary element methods. The proposed algorithm does not require any domain discretization and is based on the direct and automatic identification of isolines. Another critical aspect of the visualization of domain results in BE analysis is the effort required to evaluate results at interior points. In order to tackle this issue, the present article also provides a comparison between the performance of two different BE formulations (conventional and hybrid). In addition, this paper presents an overview of the most common post-processing and visualization techniques in BE analysis, such as the classical scan-line algorithms and interpolation over a domain discretization. The results presented herein show that the proposed algorithm offers very high performance compared with other visualization procedures.
Abstract:
Coatings based on a NiCrAlC intermetallic alloy were applied on AISI 316L stainless steel substrates using a high velocity oxygen fuel torch. The influence of the spray parameters on friction and abrasive wear resistance was investigated using an instrumented rubber wheel abrasion test able to measure the friction forces. The corrosion behaviour of the coatings was studied with electrochemical techniques and compared with the corrosion resistance of the substrate material. Specimens prepared using lower O2/C3H8 ratios showed smaller porosity values. The abrasion wear rate of the NiCrAlC coatings was much smaller than that described in the literature for bulk as-cast materials with similar composition and one order of magnitude higher than that of the bulk cast and heat-treated (aged) NiCrAlC alloy. All coatings showed higher corrosion resistance than the AISI 316L substrate in HCl (5%) aqueous solution at 40 degrees C.
Abstract:
Modern Integrated Circuit (IC) design is characterized by a strong trend of Intellectual Property (IP) core integration into complex system-on-chip (SOC) architectures. These cores require thorough verification of their functionality to avoid erroneous behavior in the final device. Formal verification methods are capable of detecting any design bug. However, due to state explosion, their use remains limited to small circuits. Alternatively, simulation-based verification can explore hardware descriptions of any size, although the corresponding stimulus generation, as well as the functional coverage definition, must be carefully planned to guarantee its efficacy. In general, static input space optimization methodologies have shown better efficiency and results than, for instance, Coverage Directed Verification (CDV) techniques, although they act on different facets of the monitored system and are not mutually exclusive. This work presents a constrained-random simulation-based functional verification methodology in which, on the basis of the Parameter Domains (PD) formalism, irrelevant and invalid test case scenarios are removed from the input space. To this purpose, a tool to automatically generate PD-based stimuli sources was developed. Additionally, we have developed a second tool to generate functional coverage models that fit exactly to the PD-based input space. Together, the input stimuli and coverage model enhancements resulted in a notable testbench efficiency increase compared to testbenches with traditional stimulation and coverage scenarios: a 22% simulation time reduction when generating stimuli with our PD-based stimuli sources (still with a conventional coverage model), and a 56% simulation time reduction when combining our stimuli sources with their corresponding, automatically generated, coverage models.
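As a rough illustration of restricting stimulus generation to the valid input space and deriving the coverage model from the same description, the sketch below invents a tiny parameter-domain description (the parameters, values, and cross-constraint are hypothetical; the paper's PD formalism and tools are not reproduced here):

```python
import random

# Hypothetical parameter domains: each parameter maps to its legal values.
DOMAINS = {
    "opcode": ["ADD", "SUB", "LOAD", "STORE"],
    "width":  [8, 16, 32],
}

def is_valid(stim):
    """Cross-parameter constraint pruning invalid scenarios (illustrative rule)."""
    return not (stim["opcode"] == "STORE" and stim["width"] == 8)

def gen_stimulus(rng):
    """Constrained-random draw: rejection-sample until the combination is legal."""
    while True:
        stim = {p: rng.choice(vals) for p, vals in DOMAINS.items()}
        if is_valid(stim):
            return stim

def coverage_model():
    """Coverage bins built from the same domains, minus the invalid combinations,
    so coverage closure is measured only over reachable scenarios."""
    return {(op, w) for op in DOMAINS["opcode"] for w in DOMAINS["width"]
            if is_valid({"opcode": op, "width": w})}
```

Because the generator and the coverage bins are derived from one description, no simulation time is spent producing, or waiting to cover, scenarios that can never occur.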
Abstract:
In this paper, the continuous Verhulst dynamic model is used to synthesize a new distributed power control algorithm (DPCA) for use in direct sequence code division multiple access (DS-CDMA) systems. The Verhulst model was initially designed to describe the population growth of biological species under food and physical space restrictions. The discretization of the corresponding differential equation is accomplished via the Euler numeric integration (ENI) method. Analytical convergence conditions for the proposed DPCA are also established. Several properties of the proposed recursive algorithm, such as the Euclidean distance from the optimum vector after convergence, convergence speed, normalized mean squared error (NSE), average power consumption per user, performance under dynamic channels, and implementation complexity aspects, are analyzed through simulations. The simulation results are compared with two other DPCAs: the classic algorithm derived by Foschini and Miljanic and the sigmoidal algorithm of Uykan and Koivo. Under estimation error conditions, the proposed DPCA exhibits smaller discrepancy from the optimum power vector solution and better convergence (under fixed and adaptive convergence factor) than the classic and sigmoidal DPCAs. (C) 2010 Elsevier GmbH. All rights reserved.
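One natural reading of the Verhulst-to-DPCA mapping is a logistic update driven by the ratio of measured SINR to target SINR, discretized by the Euler method. The two-user sketch below uses invented link gains, noise power, target, and step size; it illustrates the idea, not the paper's exact recursion:

```python
# Illustrative two-user uplink: all numbers are invented for the sketch.
G = [[1.0, 0.1],
     [0.1, 1.0]]              # G[i][j]: link gain from user j into receiver i
NOISE, TARGET, ALPHA = 0.01, 5.0, 0.5   # noise power, target SINR, Euler step

def sinr(p, i):
    """SINR of user i given the power vector p."""
    interference = sum(G[i][j] * p[j] for j in range(len(p)) if j != i)
    return G[i][i] * p[i] / (interference + NOISE)

def verhulst_step(p):
    """Euler-discretized logistic update: power grows while SINR is below
    target and shrinks while above, vanishing at the equilibrium."""
    return [p[i] + ALPHA * p[i] * (1.0 - sinr(p, i) / TARGET)
            for i in range(len(p))]

def run(p, iters=200):
    for _ in range(iters):
        p = verhulst_step(p)
    return p
```

For this feasible toy system the fixed point solves p_i = TARGET * (interference + NOISE) / G[i][i], i.e. p = (0.1, 0.1), and the iteration settles onto it.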
Abstract:
Recently, the development of industrial processes has brought about technologically complex systems. This development generated the need for research on mathematical techniques capable of dealing with project complexity and validation. Fuzzy models have received particular attention in the area of nonlinear system identification and analysis due to their capacity to approximate nonlinear behavior and deal with uncertainty. A fuzzy rule-based model suitable for the approximation of many systems and functions is the Takagi-Sugeno (TS) fuzzy model. TS fuzzy models are nonlinear systems described by a set of if-then rules which give local linear representations of an underlying system. Such models can approximate a wide class of nonlinear systems. In this paper, a performance analysis of a system based on a TS fuzzy inference system for the calibration of electronic compass devices is considered. The contribution of the evaluated TS fuzzy inference system is to reduce the error obtained in data acquisition from a digital electronic compass. For reliable operation of the TS fuzzy inference system, adequate error measurements must be taken. The error noise must be filtered before the application of the TS fuzzy inference system. The proposed method demonstrated an effectiveness of 57% at reducing the total error in the considered tests. (C) 2011 Elsevier Ltd. All rights reserved.
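TS inference itself is a membership-weighted blend of local linear consequents. A one-input sketch with invented rule parameters (not the compass-calibration rules from the paper):

```python
import math

# Two illustrative TS rules: IF x is near `center` THEN y = a*x + b.
RULES = [
    {"center": -1.0, "sigma": 1.0, "a": 0.5, "b": 0.0},
    {"center":  1.0, "sigma": 1.0, "a": 2.0, "b": 1.0},
]

def membership(x, rule):
    """Gaussian firing strength of a rule's antecedent."""
    return math.exp(-((x - rule["center"]) / rule["sigma"]) ** 2)

def ts_infer(x):
    """Normalized weighted average of the local linear consequents."""
    weights = [membership(x, r) for r in RULES]
    outputs = [r["a"] * x + r["b"] for r in RULES]
    return sum(w * y for w, y in zip(weights, outputs)) / sum(weights)
```

Near each rule center the output follows that rule's linear model; in between, the Gaussian weights interpolate smoothly, which is what lets a handful of rules approximate a nonlinear calibration curve.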
Abstract:
This paper analyzes the complexity-performance trade-off of several heuristic near-optimum multiuser detection (MuD) approaches applied to the uplink of synchronous single/multiple-input multiple-output multicarrier code division multiple access (S/MIMO MC-CDMA) systems. Genetic algorithm (GA), short term tabu search (STTS) and reactive tabu search (RTS), simulated annealing (SA), particle swarm optimization (PSO), and 1-opt local search (1-LS) heuristic multiuser detection algorithms (Heur-MuDs) are analyzed in detail, using a single-objective antenna-diversity-aided optimization approach. Monte Carlo simulations show that, after convergence, the performances reached by all near-optimum Heur-MuDs are similar. However, the computational complexities may differ substantially, depending on the system operation conditions. Their complexities are carefully analyzed in order to obtain a general complexity-performance framework comparison and to show that unitary Hamming distance search MuD (uH-ds) approaches (1-LS, SA, RTS and STTS) reach the best convergence rates and, among them, the 1-LS-MuD provides the best trade-off between implementation complexity and bit error rate (BER) performance.
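The unitary-Hamming-distance search underlying 1-LS (and the move sets of SA, RTS and STTS) explores neighbors that differ in a single bit of the candidate symbol vector. The sketch below uses a generic cost function rather than an MC-CDMA log-likelihood:

```python
def one_opt_search(bits, cost):
    """Greedy 1-opt local search over binary vectors.

    Repeatedly tries every Hamming-distance-1 neighbor (one bit flipped),
    keeping any flip that lowers the cost, until no single flip improves.
    """
    bits = list(bits)
    best = cost(bits)
    improved = True
    while improved:
        improved = False
        for i in range(len(bits)):
            bits[i] ^= 1            # move to a Hamming-distance-1 neighbor
            c = cost(bits)
            if c < best:
                best, improved = c, True
            else:
                bits[i] ^= 1        # revert the flip
    return bits, best
```

In a MuD setting, `bits` would be the stacked users' symbol decisions and `cost` the negative likelihood metric; the cheapness of evaluating one-bit neighbors is what gives 1-LS its favorable complexity-performance trade-off.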
Abstract:
We describe a novel method of fabricating atom chips that are well suited to the production and manipulation of atomic Bose–Einstein condensates. Our chip was created using a silver foil and simple micro-cutting techniques without the need for photolithography. It can sustain larger currents than conventional chips, and is compatible with the patterning of complex trapping potentials. A near pure Bose–Einstein condensate of 4 × 10⁴ ⁸⁷Rb atoms has been created in a magnetic microtrap formed by currents through wires on the chip. We have observed the fragmentation of atom clouds in close proximity to the silver conductors. The fragmentation has different characteristic features to those seen with copper conductors.
Abstract:
We investigate the effect of coexisting transverse modes on the operation of self-mixing sensors based on vertical-cavity surface-emitting lasers (VCSELs). The effect of multiple transverse modes on the measurement of displacement and distance was examined by simulation and in laboratory experiments. The simulation model shows that the periodic change in the shape and magnitude of the self-mixing signal with modulation current can be properly explained by the different frequency-modulation coefficients of the respective transverse modes in VCSELs. The simulation results are in excellent agreement with measurements performed on single-mode and multimode VCSELs and on self-mixing sensors based on these VCSELs.
Abstract:
Nature-based tourism has grown in importance in recent decades, and strong links have been established between it and ecotourism. This reflects rising incomes, greater levels of educational attainment and changing values, especially in the Western world. Nature-based tourism is quite varied. Different types of such tourism are identified and their consequences for the sustainability of their resource base are briefly considered. The development and management of nature-based tourism involves many economic aspects, several of which are discussed. For example, one must consider the economics of reserving or protecting land for this type of tourism. What economic factors should be taken into account? Economists stress the importance of taking into account the opportunity costs involved in such a decision. This concept is explained. However, determining the net economic value of an area used for tourism is not straightforward. Techniques for doing this, such as the travel cost method and stated value methods, are introduced. Natural areas reserved for tourism may have economic value not only for tourism but also jointly for other purposes, such as conserving wildlife, maintaining hydrological cycles and so on. These other purposes should be taken into account when considering the use of land for nature-based tourism. According to one economic point of view, land should be used in a way that maximises its total economic value. While this approach has its merits, it does not take into account the distribution of benefits from land use and its local impacts on income and employment. These can be quite important politically and for nature conservation, and are discussed. Finally, there is some discussion of whether fees charged to tourists for access to environmental resources should discriminate between domestic tourists and foreigners.
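The zonal travel cost method mentioned above fits a demand curve of visit rate against travel cost and reads consumer surplus off the area under that curve above the current cost. The zone data here are invented and exactly linear for clarity:

```python
# Zonal travel cost method sketch with invented survey data:
# (round-trip travel cost per visit, visits per 1000 residents) per zone.
ZONES = [
    (5.0, 40.0), (10.0, 30.0), (15.0, 20.0), (20.0, 10.0),
]

def fit_demand(zones):
    """Ordinary least squares for visit_rate = a + b * cost."""
    n = len(zones)
    mx = sum(c for c, _ in zones) / n
    my = sum(v for _, v in zones) / n
    b = (sum((c - mx) * (v - my) for c, v in zones)
         / sum((c - mx) ** 2 for c, _ in zones))
    return my - b * mx, b

def consumer_surplus(a, b, cost):
    """Triangle under the linear demand curve above the current cost."""
    choke = -a / b                  # cost at which visits fall to zero
    rate = a + b * cost
    return 0.5 * (choke - cost) * rate
```

With these invented zones the fitted demand is rate = 50 - 2·cost, so a zone facing a cost of 5 enjoys a surplus of 0.5 × (25 - 5) × 40 = 400 per 1000 residents; summing such areas across zones gives the site's recreational use value.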
Abstract:
Patterns of population subdivision and the relationship between gene flow and geographical distance in the tropical estuarine fish Lates calcarifer (Centropomidae) were investigated using mtDNA control region sequences. Sixty-three putative haplotypes were resolved from a total of 270 individuals from nine localities within three geographical regions spanning the north Australian coastline. Despite a continuous estuarine distribution throughout the sampled range, no haplotypes were shared among regions. However, within regions, common haplotypes were often shared among localities. Both sequence-based (average Phi(ST)=0.328) and haplotype-based (average Phi(ST)=0.182) population subdivision analyses indicated strong geographical structuring. Depending on the method of calculation, geographical distance explained either 79 per cent (sequence-based) or 23 per cent (haplotype-based) of the variation in mitochondrial gene flow. Such relationships suggest that genetic differentiation of L. calcarifer has been generated via isolation-by-distance, possibly in a stepping-stone fashion. This pattern of genetic structure is concordant with expectations based on the life history of L. calcarifer and direct studies of its dispersal patterns. Mitochondrial DNA variation, although generally in agreement with patterns of allozyme variation, detected population subdivision at smaller spatial scales. Our analysis of mtDNA variation in L. calcarifer confirms that population genetic models can detect population structure of not only evolutionary significance but also of demographic significance. Further, it demonstrates the power of inferring such structure from hypervariable markers, which correspond to small effective population sizes.
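Isolation-by-distance of the kind described is commonly quantified by regressing a linearized genetic distance, e.g. Phi/(1-Phi), on the logarithm of geographic distance; the fraction of gene-flow variation "explained by distance" is the r² of that fit. The pairwise values below are invented, not the paper's:

```python
import math

# Illustrative pairwise data (NOT the paper's values):
# (geographic distance in km, pairwise Phi_ST) for pairs of localities.
PAIRS = [(50, 0.02), (200, 0.08), (500, 0.15), (1200, 0.30)]

def ibd_regression(pairs):
    """Regress Phi/(1-Phi) on ln(geographic distance); return (slope, r2)."""
    xs = [math.log(d) for d, _ in pairs]
    ys = [p / (1 - p) for _, p in pairs]
    n = len(pairs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    # r^2: share of the gene-flow variation explained by distance
    ss_res = sum((y - (intercept + slope * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return slope, 1 - ss_res / ss_tot
```

A positive slope with high r², as in the paper's sequence-based analysis, is the signature of differentiation accumulating with distance, consistent with stepping-stone gene flow.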