951 results for error-location number
Abstract:
In many problems of decision making under uncertainty the system has to acquire knowledge of its environment and learn the optimal decision through its experience. Such problems may also require the system to arrive at the globally optimal decision when, at each instant, only a subset of the entire set of possible alternatives is available. These problems can be successfully modelled and analysed by learning automata. In this paper an estimator learning algorithm, which maintains estimates of the reward characteristics of the random environment, is presented for an automaton with a changing number of actions. A learning automaton using the new scheme is shown to be ε-optimal. The simulation results demonstrate the fast convergence properties of the new algorithm. The results of this study can be extended to the design of other types of estimator algorithms with good convergence properties.
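As a concrete illustration of the kind of scheme described above, the following is a minimal sketch (not the paper's exact algorithm) of an estimator-based, pursuit-style learning automaton operating with a changing number of actions: a reward estimate is maintained for every action, and at each instant the probability mass over the currently offered subset is shifted toward the available action with the best estimate. The environment model, learning rate and availability process are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N_ACTIONS = 8
true_reward_prob = rng.uniform(0.2, 0.9, N_ACTIONS)  # unknown to the automaton
true_reward_prob[3] = 0.95                            # globally optimal action

est = np.full(N_ACTIONS, 0.5)                # reward-probability estimates
counts = np.zeros(N_ACTIONS)                 # usage counts for each action
prob = np.full(N_ACTIONS, 1.0 / N_ACTIONS)   # action-probability vector
LAMBDA = 0.05                                # pursuit learning rate (illustrative)

for step in range(20000):
    # Only a random subset of the actions is offered at this instant.
    k = int(rng.integers(2, N_ACTIONS + 1))
    avail = rng.choice(N_ACTIONS, size=k, replace=False)

    # Renormalise the probabilities over the offered subset and pick an action.
    subset_mass = prob[avail].sum()
    p = prob[avail] / subset_mass
    a = int(rng.choice(avail, p=p))

    # Binary reward from the unknown environment; update the running estimate.
    reward = float(rng.random() < true_reward_prob[a])
    counts[a] += 1
    est[a] += (reward - est[a]) / counts[a]

    # Pursuit step: within the subset, move probability toward the available
    # action with the highest current estimate, then write it back so the
    # subset's total probability mass is preserved.
    e = np.zeros(k)
    e[np.argmax(est[avail])] = 1.0
    p = p + LAMBDA * (e - p)
    prob[avail] = p * subset_mass

print("estimated best action:", int(np.argmax(est)),
      "| true best action:", int(np.argmax(true_reward_prob)))
```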
Abstract:
A fuzzy system is developed using a linearized performance model of the gas turbine engine to perform gas turbine fault isolation from noisy measurements. By using a priori information about measurement uncertainties and through design variable linking, the design of the fuzzy system is posed as an optimization problem with a small number of design variables, which can be solved using a genetic algorithm in a reasonably short amount of computer time. The faults modeled are module faults in five modules: fan, low pressure compressor, high pressure compressor, high pressure turbine and low pressure turbine. The measurements used are deviations in exhaust gas temperature, low rotor speed, high rotor speed and fuel flow from a baseline 'good engine'. The genetic fuzzy system (GFS) allows rapid development of the rule base if the fault signatures and measurement uncertainties change, as happens for different engines and airlines. In addition, the genetic fuzzy system reduces the human effort needed in the trial-and-error process used to design the fuzzy system and makes the development of such a system easier and faster. A radial basis function neural network (RBFNN) is also used to preprocess the measurements before fault isolation. The RBFNN provides significant noise reduction and, when combined with the GFS, leads to a diagnostic system that is highly robust to the presence of noise in the data, showing the advantage of using a soft computing approach for gas turbine diagnostics.
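A toy sketch of the fuzzy fault-isolation step: hypothetical fault signatures for the five modules are matched against measured deviations using Gaussian membership functions whose widths stand in for the measurement uncertainties. In the approach described above those parameters would be tuned by the genetic algorithm, which is omitted here; all numbers are illustrative, not real engine data.

```python
import numpy as np

# Hypothetical fault signatures: expected deviations in
# [EGT, low rotor speed, high rotor speed, fuel flow] for a fault
# in each module (made-up values for illustration only).
MODULES = ["fan", "LPC", "HPC", "HPT", "LPT"]
SIGNATURES = np.array([
    [ 5.0, -1.2, -0.3, 1.0],   # fan
    [ 6.0, -0.4, -1.0, 1.5],   # LPC
    [ 8.0,  0.3, -1.5, 2.0],   # HPC
    [12.0,  0.8, -0.9, 2.5],   # HPT
    [ 9.0, -1.5,  0.4, 1.8],   # LPT
])
SIGMA = np.array([3.0, 0.5, 0.5, 0.8])  # measurement uncertainties (GA-tuned in the paper)

def isolate_fault(measured_deviation):
    """Return the module whose signature best matches the measured deviations."""
    # Gaussian membership of each measured deviation to each signature component,
    # combined with a product (fuzzy AND) across the four measurements.
    memberships = np.exp(-0.5 * ((measured_deviation - SIGNATURES) / SIGMA) ** 2)
    firing_strength = memberships.prod(axis=1)
    return MODULES[int(np.argmax(firing_strength))], firing_strength

rng = np.random.default_rng(1)
true_fault = 3                                        # simulate an HPT fault
noisy = SIGNATURES[true_fault] + rng.normal(0.0, SIGMA)
print(isolate_fault(noisy))
```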
Abstract:
A posteriori error estimation and adaptive refinement techniques for the fracture analysis of 2-D/3-D crack problems represent the current state of the art. The objective of the present paper is to propose a new a posteriori error estimator based on the strain energy release rate (SERR) or the stress intensity factor (SIF) in the crack-tip region, and to use it along with the stress-based error estimator available in the literature for the region away from the crack tip. The proposed a posteriori error estimator is called the K-S error estimator. Further, an adaptive mesh refinement (h-) strategy which can be used with the K-S error estimator is proposed for the fracture analysis of 2-D crack problems. The performance of the proposed a posteriori error estimator and of the h-adaptive refinement strategy has been demonstrated by employing 4-noded, 8-noded and 9-noded plane stress finite elements. The proposed error estimator, together with the h-adaptive refinement strategy, will facilitate automation of the fracture analysis process and provide reliable solutions.
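The h-adaptive loop can be pictured with a schematic 1-D example: an element-wise error indicator is evaluated, elements whose indicator exceeds a fraction of the maximum are bisected, and the cycle repeats until a tolerance is met. The simple interpolation-error indicator and the sqrt(x) model function below (mimicking a crack-tip-type r^(1/2) singularity) are illustrative stand-ins, not the K-S estimator of the paper.

```python
import numpy as np

# Model problem: u(x) = sqrt(x) has an r^(1/2) singularity at x = 0,
# so a uniform mesh concentrates error near the origin.
u = lambda x: np.sqrt(x)

def element_error(a, b, n_sample=16):
    """Crude indicator: max deviation of u from its linear interpolant on [a, b]."""
    x = np.linspace(a, b, n_sample)
    lin = u(a) + (u(b) - u(a)) * (x - a) / (b - a)
    return np.max(np.abs(u(x) - lin))

nodes = np.linspace(0.0, 1.0, 6)           # initial coarse mesh
TOL, FRACTION = 1e-3, 0.5                  # stopping tolerance and marking fraction

for it in range(30):
    errors = np.array([element_error(a, b) for a, b in zip(nodes[:-1], nodes[1:])])
    if errors.max() < TOL:
        break
    # h-refinement: bisect every element whose indicator exceeds a
    # fraction of the current maximum indicator.
    marked = errors > FRACTION * errors.max()
    midpoints = 0.5 * (nodes[:-1][marked] + nodes[1:][marked])
    nodes = np.sort(np.concatenate([nodes, midpoints]))

print(f"final mesh: {len(nodes) - 1} elements, max indicator {errors.max():.2e}")
print("smallest elements cluster near the singularity:", nodes[:5])
```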
Abstract:
Receive antenna selection (AS) reduces the hardware complexity of multi-antenna receivers by dynamically connecting the instantaneously best antenna element to the available radio frequency (RF) chain. Due to hardware constraints, the channels at the various antenna elements have to be sounded sequentially to obtain the estimates that are required for selecting the 'best' antenna and for coherently demodulating data. Consequently, the channel state information at different antennas is outdated by different amounts. We show that, for this reason, simply selecting the antenna with the highest estimated channel gain is not optimal. Rather, the channel estimates of different antennas should be weighted differently, depending on the training scheme. We derive closed-form expressions for the symbol error probability (SEP) of AS for MPSK and MQAM in time-varying Rayleigh fading channels for arbitrary selection weights, and validate them with simulations. We then derive an explicit formula for the optimal selection weights that minimize the SEP. We find that when selection weights are not used, the SEP need not improve as the number of antenna elements increases, in contrast to the ideal channel estimation case. However, the optimal selection weights remedy this situation and significantly improve performance.
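A Monte Carlo sketch of the effect described above, under a simple Gauss-Markov time-variation model in which the channel at data time is a correlated version of the (outdated) estimate, with a different correlation per antenna. Comparing unweighted selection against selection on weighted estimated gains illustrates why weights matter; the rho-squared weighting used here is one plausible choice under this toy model, not the paper's derived optimum, and BPSK is used instead of general MPSK/MQAM.

```python
import numpy as np

rng = np.random.default_rng(2)
N_ANT, SNR_DB, TRIALS = 4, 10, 200_000
snr = 10 ** (SNR_DB / 10)

# Each antenna is sounded at a different time, so its estimate is outdated by a
# different amount; rho[i] is the correlation between estimate and true channel.
rho = np.array([0.99, 0.95, 0.90, 0.80])

def sep_bpsk(weights):
    h_est = (rng.normal(size=(TRIALS, N_ANT)) + 1j * rng.normal(size=(TRIALS, N_ANT))) / np.sqrt(2)
    innov = (rng.normal(size=(TRIALS, N_ANT)) + 1j * rng.normal(size=(TRIALS, N_ANT))) / np.sqrt(2)
    # True channel at data time: correlated with the outdated estimate.
    h_true = rho * h_est + np.sqrt(1 - rho ** 2) * innov

    # Select the antenna maximising the (optionally weighted) estimated gain.
    sel = np.argmax(weights * np.abs(h_est) ** 2, axis=1)
    h_sel = h_true[np.arange(TRIALS), sel]
    h_hat_sel = h_est[np.arange(TRIALS), sel]

    # BPSK over the selected antenna, coherent demodulation with the outdated estimate.
    noise = (rng.normal(size=TRIALS) + 1j * rng.normal(size=TRIALS)) / np.sqrt(2 * snr)
    y = h_sel * 1.0 + noise
    correct = np.real(np.conj(h_hat_sel) * y) > 0
    return 1.0 - correct.mean()

print("SEP, unweighted selection  :", sep_bpsk(np.ones(N_ANT)))
print("SEP, rho^2-weighted select :", sep_bpsk(rho ** 2))
```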
Abstract:
In this paper, we propose a training-based channel estimation scheme for large non-orthogonal space-time block coded (STBC) MIMO systems. The proposed scheme employs a block transmission strategy where an N_t x N_t pilot matrix is sent (for training purposes) followed by several N_t x N_t square data STBC matrices, where N_t is the number of transmit antennas. At the receiver, we iterate between channel estimation (using an MMSE estimator) and detection (using a low-complexity likelihood ascent search (LAS) detector) until convergence or for a fixed number of iterations. Our simulation results show that excellent bit error rate and nearness-to-capacity performance are achieved by the proposed scheme at low complexity. The fact that we could show such good results for large STBCs (e.g., the 16 x 16 STBC from cyclic division algebras) operating at spectral efficiencies in excess of 20 bps/Hz (even after accounting for the overheads of pilot-based channel estimation and turbo coding) establishes the effectiveness of the proposed scheme.
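A toy sketch of the alternation between channel estimation and detection for a small spatial-multiplexing system. The low-complexity LAS detector and the non-orthogonal STBCs from cyclic division algebras used in the paper are replaced here by a plain MMSE detector with hard 4-QAM decisions and simple vector transmissions, purely to illustrate the iterate-until-converged structure; all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
NT, NR, N_DATA, SNR_DB = 4, 4, 50, 12
sigma2 = 10 ** (-SNR_DB / 10)

H = (rng.normal(size=(NR, NT)) + 1j * rng.normal(size=(NR, NT))) / np.sqrt(2)

# Pilot block (identity training matrix), followed by N_DATA vectors of 4-QAM data.
X_pilot = np.eye(NT)
bits = rng.integers(0, 2, size=(2, NT, N_DATA))
X_data = ((2 * bits[0] - 1) + 1j * (2 * bits[1] - 1)) / np.sqrt(2)

noise = lambda shape: (rng.normal(size=shape) + 1j * rng.normal(size=shape)) * np.sqrt(sigma2 / 2)
Y_pilot = H @ X_pilot + noise((NR, NT))
Y_data = H @ X_data + noise((NR, N_DATA))

def mmse_detect(Hhat, Y):
    # Linear MMSE equalisation followed by hard 4-QAM decisions.
    W = np.linalg.solve(Hhat.conj().T @ Hhat + sigma2 * np.eye(NT), Hhat.conj().T)
    Z = W @ Y
    return (np.sign(Z.real) + 1j * np.sign(Z.imag)) / np.sqrt(2)

# Initial channel estimate from the pilot block only.
Hhat = Y_pilot @ np.linalg.pinv(X_pilot)

for it in range(4):
    Xhat = mmse_detect(Hhat, Y_data)
    # Re-estimate the channel using pilots plus detected data (decision-directed).
    X_all = np.hstack([X_pilot, Xhat])
    Y_all = np.hstack([Y_pilot, Y_data])
    Hhat = Y_all @ X_all.conj().T @ np.linalg.inv(X_all @ X_all.conj().T + sigma2 * np.eye(NT))

ser = np.mean(mmse_detect(Hhat, Y_data) != X_data)
print("symbol error rate after iterations:", ser)
```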
Abstract:
The problem of determining a minimal number of control inputs for converting a programmable logic array (PLA) with undetectable faults into a crosspoint-irredundant PLA for testing has been formulated as a nonstandard set covering problem. By representing subsets of sets as cubes, this problem has been reformulated in terms of familiar covering problems. This result is significant because a crosspoint-irredundant PLA can be converted into a completely testable PLA in a straightforward fashion, thus achieving very good fault coverage and easy testability.
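The flavour of the covering formulation can be illustrated with the classical greedy set-cover heuristic below; the universe of "faults", the candidate "control inputs" and the subsets they cover are made-up stand-ins, and the paper's nonstandard formulation and cube representation are not reproduced.

```python
def greedy_set_cover(universe, candidate_sets):
    """Pick a small collection of candidate sets whose union covers the universe.

    Classical greedy heuristic: repeatedly choose the set covering the most
    still-uncovered elements.  Illustrative stand-in for the covering problem
    used to choose control inputs.
    """
    uncovered = set(universe)
    chosen = []
    while uncovered:
        name, best = max(candidate_sets.items(), key=lambda kv: len(kv[1] & uncovered))
        if not best & uncovered:
            raise ValueError("universe cannot be covered by the given sets")
        chosen.append(name)
        uncovered -= best
    return chosen

# Toy instance: each candidate control input "handles" a subset of the
# undetectable crosspoint faults (fault identifiers are made up).
faults = range(1, 9)
control_inputs = {
    "c1": {1, 2, 3},
    "c2": {3, 4, 5, 6},
    "c3": {6, 7},
    "c4": {1, 5, 7, 8},
}
print(greedy_set_cover(faults, control_inputs))
```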
Abstract:
When there is variation in the quality of males in a population, multiple mating can lead to an increase in the genetic fitness of a female by reducing the variance of the progeny number. The extent of the selective advantage obtainable by this process is investigated for a population subdivided into structured demes. It is seen that for a wide range of model parameters (deme size, distribution of male quality, local resource level), multiple mating leads to a considerable increase in fitness. Frequency-dependent selection, or a stable coexistence between polyandry and monandry, can also result when the possible costs involved in multiple mating are taken into account.
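A small numerical illustration of the variance-reduction argument, under the simplifying assumptions that each mating draws an independent male quality and that a female's brood quality is the average over her k mates: the arithmetic mean is unchanged, the variance shrinks roughly as 1/k, and a concave (geometric-mean) proxy for long-term fitness therefore rises. The numbers are illustrative; this is not the structured-deme model of the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
N_FEMALES = 100_000

# Hypothetical male quality: each mating draws a quality factor for the brood
# (e.g. a "poor" male contributes 0.5, a "good" male 1.5 -- illustrative values).
qualities = np.array([0.5, 1.5])

def brood_quality(n_mates):
    """Average quality over a female's mates, one row per female."""
    draws = rng.choice(qualities, size=(N_FEMALES, n_mates))
    return draws.mean(axis=1)

for k in (1, 2, 5):
    q = brood_quality(k)
    # Mean is unchanged, variance shrinks roughly as 1/k, so the geometric
    # mean (a proxy for long-term fitness) increases with the number of mates.
    print(f"mates={k}: mean={q.mean():.3f}  var={q.var():.3f}  "
          f"geometric mean={np.exp(np.log(q).mean()):.3f}")
```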
Abstract:
Mesoscale weather phenomena, such as the sea breeze circulation or lake effect snow bands, are typically too large to be observed at one point, yet too small to be caught in a traditional network of weather stations. Hence, the weather radar is one of the best tools for observing, analyzing and understanding their behavior and development. A weather radar network is a complex system with many structural and technical features to be tuned, from the location of each radar to the number of pulses averaged in the signal processing. These design parameters have no universal optimal values; their selection depends on the nature of the weather phenomena to be monitored as well as on the applications for which the data will be used. The priorities and critical values are different for forest fire forecasting, aviation weather service or the planning of snow ploughing, to name a few radar-based applications. The main objective of the work performed within this thesis has been to combine knowledge of the technical properties of radar systems with our understanding of weather conditions in order to produce better applications, able to efficiently support decision making in weather- and safety-related services for modern society in northern conditions. When a new application is developed, it must be tested against 'ground truth'. Two new verification approaches for radar-based hail estimates are introduced in this thesis. For mesoscale applications, finding a representative reference can be challenging, since these phenomena are by definition difficult to catch with surface observations. Hence, almost any valuable information that can be distilled from unconventional data sources, such as newspapers and holiday snapshots, is welcome. However, it is as important to obtain estimates of data quality, and to judge to what extent the two disparate information sources can be compared, as it is to get the data. The new applications presented here do not rely on radar data alone, but ingest information from auxiliary sources such as temperature fields. The author concludes that in the future the radar will continue to be a key source of data and information, especially when used effectively together with other meteorological data.
Abstract:
It is important to identify the 'correct' number of topics in mechanisms like Latent Dirichlet Allocation (LDA), as this determines the quality of the features that are presented to classifiers such as SVMs. In this work we propose a measure to identify the correct number of topics and offer empirical evidence in its favor in terms of classification accuracy and the number of topics that are naturally present in the corpus. We show the merit of the measure by applying it to real-world as well as synthetic data sets (both text and images). In proposing this measure, we view LDA as a matrix factorization mechanism, wherein a given corpus C is split into two matrix factors M1 and M2, as given by C(d x w) = M1(d x t) x M2(t x w), where d is the number of documents present in the corpus and w is the size of the vocabulary. The quality of the split depends on t, the number of topics chosen. The measure is computed in terms of the symmetric KL-divergence of salient distributions that are derived from these matrix factors. We observe that the divergence values are higher for non-optimal numbers of topics; this is shown by a 'dip' at the right value of t.
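A sketch of the matrix-factor view using scikit-learn's LDA: for each candidate t the corpus count matrix is factored, one distribution is taken from the singular-value spectrum of the topic-word factor M1 and another from the document-length-weighted topic mixture given by M2, and their symmetric KL-divergence is reported. The exact 'salient distributions' should be taken from the paper; the toy corpus and the particular choices below are illustrative assumptions.

```python
import numpy as np
from scipy.stats import entropy
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

def symmetric_kl(p, q, eps=1e-12):
    p = (p + eps) / (p + eps).sum()
    q = (q + eps) / (q + eps).sum()
    return entropy(p, q) + entropy(q, p)

def topic_divergence(counts, n_topics):
    """Divergence between distributions derived from the two LDA factors;
    the value is expected to dip near the 'natural' number of topics."""
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    m2 = lda.fit_transform(counts)                 # d x t document-topic factor
    m1 = lda.components_                           # t x w topic-word factor
    sv = np.linalg.svd(m1, compute_uv=False)       # singular-value spectrum of M1
    doc_len = np.asarray(counts.sum(axis=1)).ravel()
    mix = doc_len @ m2                             # length-weighted topic proportions from M2
    return symmetric_kl(np.sort(sv)[::-1], np.sort(mix)[::-1])

# Toy corpus; on real data the dip at the right t is much clearer.
docs = ["solar panels convert sunlight into electricity",
        "wind turbines generate renewable electricity",
        "the striker scored a late goal in the final",
        "the goalkeeper saved a penalty in extra time",
        "interest rates affect mortgage and loan costs",
        "the central bank raised interest rates again"]
counts = CountVectorizer().fit_transform(docs)
for t in range(2, 6):
    print(t, round(topic_divergence(counts, t), 3))
```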
Abstract:
This review article, based on a lecture delivered in Madras in 1985, is an account of the author's experience in working out the molecular structure and conformation of the collagen triple helix over the years 1952–78. It starts with the first proposal of the correct triple helix in 1954, but with three residues per turn, which was later refined in 1955 into a coiled-coil structure with approximately 3.3 residues per turn. The structure readily fitted proline and hydroxyproline residues and required glycine as every third residue in each of the three chains. The controversy regarding the number of hydrogen bonds per tripeptide could not be resolved by X-ray diffraction or energy minimization, but physicochemical data, obtained in other laboratories during 1961–65, strongly pointed to two hydrogen bonds, as suggested by the author. However, it was felt that the structure with one straight NH … O bond was better. A reconciliation of the two was obtained in Chicago in 1968, by showing that the second hydrogen bond is formed via a water molecule, which makes it weaker, as found in the physicochemical studies mentioned above. This water molecule was also shown, in 1973, to take part in further cross-linking hydrogen bonds with the OH group of hydroxyproline, which always occurs in the position preceding glycine and is at the right distance from the water. Thus, almost all features of the primary structure, X-ray pattern, optical and hydrodynamic data, and the role of hydroxyproline in stabilising the triple-helical structure, have been satisfactorily accounted for. These findings also lead to a confirmation of Pauling's theory that vitamin C improves immunity to diseases, as explained in the last section.
Abstract:
It is pointed out that complement C1q, associated with the immune response system, has a part containing about 80 residues with a collagen-like sequence, with Gly at every third position and also having a number of Hyp and Hyl residues in the positions before Gly, and that it takes up the triple-helical conformation characteristic of collagen. As with collagen biosynthesis, ascorbic acid is therefore expected to be required for its production. Also, collagen itself, in the extracellular matrix, is connected with the fibroblast surface protein (FSP), whose absence leads to cell proliferation and whose addition leads to the suppression of malignancy in tissue culture. All this shows the great importance of vitamin C for resistance to diseases, and even to cancer, as has been widely advocated by Pauling.
Abstract:
Housepits have a remarkably short research history compared to Fennoscandian archaeological research on the Stone Age in general. The current understanding of the numbers and distribution of Stone Age housepits in the Nordic countries has, for the most part, been shaped by archaeological studies carried out over the last twenty to thirty years. The main subjects of this research are Neolithic housepits, which are the archaeological remains of semi-subterranean pithouses. This dissertation consists of five peer-reviewed articles and a synthesis paper. The articles deal with the development of housepits as seen in data gathered from Finland (the Lake Saimaa area and south-eastern Finland) and Russia (the Karelian Isthmus). The synthesis expands the discussion of the changes observed in the papers to include Fennoscandian housepit research as a whole. Certain changes in the size, shape, environmental location and clustering of housepits extended across various cultures and ecological zones in northern Fennoscandia. Previously, the evolution of housepits has been interpreted as the adaptation of Neolithic societies to prevailing environmental circumstances, or as a re-organization following contacts with the agrarian Corded Ware/Battle Axe Cultures spreading to the north. This dissertation argues for two waves of change in the pithouse building tradition. Both waves brought with them certain changes in the pithouses themselves and in the practices of locating the dwellings in the environment/landscape. The changes in housepits do not go hand in hand with other changes in material culture, nor are the changes restricted to certain ecological environments. Based on current information, it appears that the changes relate primarily to the spread of new concepts of housing and possibly to new technology, rather than representing merely a local response to environmental factors. This development had already commenced before the emergence of the Corded Ware/Battle Axe Cultures. Therefore, the changes are argued to have resulted from the spread of new ideas through the same networks that actively distributed commodities, exotic goods and raw materials over vast areas between the southern Baltic Sea, the north-west Russian forest zone and Fennoscandia.