Abstract:
Context-aware multimodal interactive systems aim to adapt to the needs and behavioural patterns of users and offer a way forward for enhancing the efficacy and quality of experience (QoE) in human-computer interaction. The various modalities that contribute to such systems each provide a specific uni-modal response, and these responses are integrated into a multi-modal interface capable of interpreting multi-modal user input and responding to it appropriately through dynamically adapted multi-modal interactive flow management. This paper presents an initial background study, carried out in the first phase of a PhD research programme, on the optimisation of data fusion techniques to serve multimodal interactive systems, their applications and requirements.
Abstract:
This paper presents a theoretical model of the torsional characteristics of parallel multi-part rope systems. In such systems, the ropes may cable, or wrap around each other, depending on the combination of applied torque, rope tension, length and spacing between the rope parts. Cabling constitutes a failure that may be retrievable but can nevertheless seriously affect the performance of the rope system. The torsional characteristics of the system are very different before and after cabling, and theoretical models are given for both situations. Laboratory tests were performed on both two- and four-rope systems, with measurements made of torque at rotations from 0 to 360 deg. Tests were run with different rope spacings, tensions and lengths, and the results were compared with predictions from the theoretical model. The conclusion from the test results was that the theoretical model predicts both the pre- and post-cabling torsional behaviour with an acceptable level of accuracy.
Abstract:
Purpose – The purpose of this research is to show that reliability analysis and its implementation will lead to improved whole-life performance of building systems, and hence to improved life cycle costs (LCC). Design/methodology/approach – This paper analyses reliability impacts on the whole life cycle of building systems and reviews the up-to-date approaches adopted in UK construction, based on questionnaires designed to investigate the use of reliability within the industry. Findings – Approaches to reliability design and maintainability design are introduced at the operating environment level, the system structural level and the component level, and a scheduled maintenance logic tree is modified based on the model developed by Pride. At different stages of the whole life cycle of building services systems, reliability-associated factors should be considered to ensure the system's whole-life performance. It is suggested that data analysis should be applied in reliability design, maintainability design, and maintenance policy development. Originality/value – The paper presents important factors at different stages of the whole life cycle of the systems, together with reliability and maintainability design approaches that can be helpful for building services system designers. The survey from the questionnaires provides designers with an understanding of the key impacting factors.
Abstract:
This paper explores the criticism that system dynamics is a ‘hard’ or ‘deterministic’ systems approach. This criticism is seen to have four interpretations and each is addressed from the perspectives of social theory and systems science. Firstly, system dynamics is shown to offer not prophecies but Popperian predictions. Secondly, it is shown to involve the view that system structure only partially, not fully, determines human behaviour. Thirdly, the field's assumptions are shown not to constitute a grand content theory—though its structural theory and its attachment to the notion of causality in social systems are acknowledged. Finally, system dynamics is shown to be significantly different from systems engineering. The paper concludes that such confusions have arisen partially because of limited communication at the theoretical level from within the system dynamics community but also because of imperfect command of the available literature on the part of external commentators. Improved communication on theoretical issues is encouraged, though it is observed that system dynamics will continue to justify its assumptions primarily from the point of view of practical problem solving. The answer to the question in the paper's title is therefore: on balance, no.
Abstract:
The first part presents information on, and a characterisation of, an AC distribution network that feeds traction substations, together with its possible influences on the DC traction load flow. Those influences are investigated and mathematically modelled. To corroborate the mathematical model, an example is presented and its results are compared with real measurements.
Abstract:
Predictive performance evaluation is a fundamental issue in the design, development, and deployment of classification systems. As predictive performance evaluation is a multidimensional problem, single scalar summaries such as the error rate, although quite convenient due to their simplicity, can seldom evaluate all the aspects that a complete and reliable evaluation must consider. Because of this, various graphical performance evaluation methods are increasingly drawing the attention of the machine learning, data mining, and pattern recognition communities. The main advantage of these methods resides in their ability to depict the trade-offs between evaluation aspects in a multidimensional space rather than reducing those aspects to an arbitrarily chosen (and often biased) single scalar measure. Furthermore, to select a suitable graphical method for a given task, it is crucial to identify its strengths and weaknesses. This paper surveys various graphical methods often used for predictive performance evaluation. By presenting these methods within the same framework, we hope this paper may shed some light on deciding which methods are more suitable in different situations.
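As a concrete illustration of one such graphical method, the short Python sketch below builds the points of an ROC curve directly from classifier scores; the labels and scores are invented for illustration and are not taken from the survey.

```python
def roc_points(labels, scores):
    """Return (FPR, TPR) pairs, one per score threshold, for binary labels."""
    pairs = sorted(zip(scores, labels), reverse=True)   # descending score
    pos = sum(labels)
    neg = len(labels) - pos
    tp = fp = 0
    points = [(0.0, 0.0)]
    for _score, label in pairs:
        if label == 1:
            tp += 1
        else:
            fp += 1
        points.append((fp / neg, tp / pos))
    return points

# Invented scores and labels, purely for illustration.
labels = [1, 1, 0, 1, 0, 0, 1, 0]
scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.4, 0.3, 0.1]
for fpr, tpr in roc_points(labels, scores):
    print(f"FPR={fpr:.2f}  TPR={tpr:.2f}")
```

Plotting these pairs gives the familiar ROC curve, which exposes the trade-off between true and false positive rates that a single error-rate figure hides.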
Abstract:
A novel technique for selecting the poles of orthonormal basis functions (OBF) in Volterra models of any order is presented. It is well known that the usual large number of parameters required to describe the Volterra kernels can be significantly reduced by representing each kernel using an appropriate basis of orthonormal functions. Such a representation results in the so-called OBF Volterra model, which has a Wiener structure consisting of a linear dynamic part generated by the orthonormal basis followed by a nonlinear static mapping given by the Volterra polynomial series. Aiming at optimizing the poles that fully parameterize the orthonormal bases, the exact gradients of the outputs of the orthonormal filters with respect to their poles are computed analytically by using a back-propagation-through-time technique. The expressions relative to the Kautz basis and to generalized orthonormal bases of functions (GOBF) are addressed; the ones related to the Laguerre basis follow straightforwardly as a particular case. The main innovation here is that the dynamic nature of the OBF filters is fully considered in the gradient computations. These gradients provide exact search directions for optimizing the poles of a given orthonormal basis. Such search directions can, in turn, be used as part of an optimization procedure to locate the minimum of a cost function that takes into account the error of estimation of the system output. The Levenberg-Marquardt algorithm is adopted here as the optimization procedure. Unlike previous related work, the proposed approach relies solely on input-output data measured from the system to be modeled, i.e., no information about the Volterra kernels is required. Examples are presented to illustrate the application of this approach to the modeling of dynamic systems, including a real magnetic levitation system with nonlinear oscillatory behavior.
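To make the orthonormal-basis idea concrete, the sketch below computes the outputs of a discrete Laguerre filter bank for a given pole; these outputs are the regressors on which the static Volterra polynomial would act in the OBF model. The pole value and input signal are assumptions for illustration, not values from the paper.

```python
# Minimal sketch of the linear part of an OBF (Laguerre) Volterra model.
import numpy as np
from scipy.signal import lfilter

def laguerre_outputs(u, a, n_filters):
    """Outputs of the first n_filters discrete Laguerre filters with pole a (|a| < 1)."""
    gain = np.sqrt(1.0 - a**2)
    x = lfilter([gain], [1.0, -a], u)          # L1(z) = sqrt(1 - a^2) / (1 - a z^-1)
    outputs = [x]
    for _ in range(1, n_filters):
        x = lfilter([-a, 1.0], [1.0, -a], x)   # all-pass factor (z^-1 - a) / (1 - a z^-1)
        outputs.append(x)
    return np.stack(outputs)                   # shape: (n_filters, len(u))

u = np.random.default_rng(0).standard_normal(200)   # illustrative input signal
phi = laguerre_outputs(u, a=0.6, n_filters=3)
print(phi.shape)   # (3, 200): regressors for the static polynomial mapping
```

The pole `a` is exactly the kind of parameter the paper's gradient-based (Levenberg-Marquardt) procedure would optimize from input-output data.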
Abstract:
Two fundamental processes usually arise in the production planning of many industries. The first one consists of deciding how many final products of each type have to be produced in each period of a planning horizon, the well-known lot sizing problem. The other process consists of cutting raw materials in stock in order to produce smaller parts used in the assembly of final products, the well-studied cutting stock problem. In this paper the decision variables of these two problems are made dependent on each other in order to obtain a globally optimal solution. The setups that are typically present in lot sizing problems are relaxed, together with the integer frequencies of the cutting patterns in the cutting problem. A large-scale linear optimization problem therefore arises, which is solved exactly by a column generation technique. It is worth noting that this new combined problem still takes into account the trade-off between storage costs (for final products and parts) and trim losses (in the cutting process). We present several sets of computational tests, analysed over three different scenarios. These results show that, by combining the problems and using an exact method, it is possible to obtain significant gains compared with the usual industrial practice, which solves them in sequence.
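As a hedged illustration of the column generation machinery involved, the sketch below solves the classical one-dimensional cutting stock relaxation by Gilmore-Gomory column generation (a restricted master LP plus a knapsack pricing step). The roll length, piece widths and demands are invented, and the paper's combined lot-sizing formulation is considerably richer than this building block.

```python
import numpy as np
from scipy.optimize import linprog

W = 100                                 # raw roll length (invented)
w = np.array([45, 36, 31, 14])          # piece widths (invented)
d = np.array([97, 610, 395, 211])       # piece demands (invented)

# Start from trivial single-piece patterns.
cols = [np.eye(len(w), dtype=int)[i] * (W // w[i]) for i in range(len(w))]

while True:
    A = np.array(cols).T
    # Restricted master LP: min sum(x) s.t. A x >= d, written as -A x <= -d.
    res = linprog(np.ones(A.shape[1]), A_ub=-A, b_ub=-d, method="highs")
    y = -res.ineqlin.marginals          # dual prices of the demand rows
    # Pricing: unbounded knapsack max y.a s.t. w.a <= W, solved by DP.
    best = np.zeros(W + 1)
    choice = np.zeros((W + 1, len(w)), dtype=int)
    for L in range(1, W + 1):
        for i in range(len(w)):
            if w[i] <= L and best[L - w[i]] + y[i] > best[L]:
                best[L] = best[L - w[i]] + y[i]
                choice[L] = choice[L - w[i]]
                choice[L, i] += 1
    if best[W] <= 1 + 1e-9:             # no pattern with negative reduced cost
        break
    cols.append(choice[W].copy())

print(f"{len(cols)} patterns, LP bound = {res.fun:.2f} rolls")
```

Each loop iteration prices out a new cutting pattern; the relaxed setups and relaxed integer pattern frequencies in the paper play the same role of keeping the master problem a tractable LP.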
Abstract:
The amount of textual information stored digitally is growing every day. However, our capability of processing and analyzing that information is not growing at the same pace. To overcome this limitation, it is important to develop semiautomatic processes, such as the text mining process, to extract relevant knowledge from textual information. One of the main and most expensive stages of the text mining process is the text pre-processing stage, in which the unstructured text is transformed into a structured format such as an attribute-value table. The stemming process, i.e., linguistic normalization, is usually used to find the attributes of this table. However, the stemming process is strongly dependent on the language in which the original textual information is given. Furthermore, for most languages, the stemming algorithms proposed in the literature are computationally expensive. In this work, several improvements of the well-known Porter stemming algorithm for the Portuguese language are proposed, exploiting the characteristics of this language. Experimental results show that the proposed algorithm executes in far less time without affecting the quality of the generated stems.
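As a toy illustration of the rule-based suffix stripping that such stemmers perform, the sketch below applies a few longest-first suffix rules to Portuguese words; these rules are illustrative only and far simpler than the Porter-style algorithm evaluated in the paper.

```python
# A handful of illustrative Portuguese suffix rules, ordered longest-first.
SUFFIX_RULES = [
    ("amentos", ""), ("amento", ""), ("adoras", ""), ("adora", ""),
    ("mente", ""), ("istas", ""), ("ista", ""), ("ções", "ção"),
    ("s", ""),
]

def stem(word, min_stem=3):
    """Strip the longest matching suffix, keeping at least min_stem characters."""
    for suffix, repl in SUFFIX_RULES:
        if word.endswith(suffix) and len(word) - len(suffix) >= min_stem:
            return word[: len(word) - len(suffix)] + repl
    return word

for w in ["pensamentos", "rapidamente", "construções", "casas"]:
    print(w, "->", stem(w))
```

Real stemmers apply many such rule groups in sequence with language-specific conditions, which is where both the language dependence and the computational cost noted in the abstract come from.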
Abstract:
Localization and mapping are two of the most important capabilities for autonomous mobile robots and have been receiving considerable attention from the scientific computing community over the last 10 years. One of the most efficient methods to address these problems is based on the use of the Extended Kalman Filter (EKF). The EKF simultaneously estimates a model of the environment (the map) and the position of the robot based on odometric and exteroceptive sensor information. As this algorithm demands a considerable amount of computation, it is usually executed on high-end PCs coupled to the robot. In this work we present an FPGA-based architecture for the EKF algorithm that is capable of processing two-dimensional maps containing up to 1.8 k features in real time (14 Hz), a three-fold improvement over a Pentium M 1.6 GHz and a 13-fold improvement over an ARM920T 200 MHz. The proposed architecture also consumes only 1.3% of the Pentium's and 12.3% of the ARM's energy per feature.
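For reference, the sketch below shows the generic EKF measurement update whose matrix arithmetic dominates EKF-SLAM as the map grows; this quadratic-cost update is the kind of computation such an FPGA architecture accelerates. The state layout, Jacobian and noise values are illustrative assumptions.

```python
import numpy as np

def ekf_update(x, P, z, h, H, R):
    """One EKF correction step: state x, covariance P, measurement z,
    measurement function h, its Jacobian H, measurement noise covariance R."""
    y = z - h(x)                       # innovation
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# Illustrative state: robot pose (x, y, theta) plus one 2-D landmark (lx, ly),
# observed through a range-only measurement.
x = np.array([0.0, 0.0, 0.0, 2.0, 1.0])
P = np.eye(5) * 0.1
r = np.hypot(x[3] - x[0], x[4] - x[1])                       # predicted range
H = np.zeros((1, 5))
H[0, 0], H[0, 1] = -(x[3] - x[0]) / r, -(x[4] - x[1]) / r    # d r / d robot pos
H[0, 3], H[0, 4] = (x[3] - x[0]) / r, (x[4] - x[1]) / r      # d r / d landmark
x, P = ekf_update(x, P, z=np.array([2.3]),
                  h=lambda s: np.array([np.hypot(s[3] - s[0], s[4] - s[1])]),
                  H=H, R=np.array([[0.05]]))
print(x)                                                     # corrected estimate
```

With n map features the covariance P has O(n^2) entries, which is why a 1.8 k-feature map is demanding on a general-purpose CPU and a natural target for dedicated hardware.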
Abstract:
Measurements of X-ray diffraction, electrical resistivity, and magnetization are reported across the Jahn-Teller phase transition in LaMnO$_3$. Using a thermodynamic equation, we obtained the pressure derivative of the critical temperature ($T_{\mathrm{JT}}$), $dT_{\mathrm{JT}}/dP = -28.3\ \mathrm{K\,GPa^{-1}}$. This approach also reveals that $5.7(3)\ \mathrm{J\,mol^{-1}\,K^{-1}}$ comes from the volume change and $0.8(2)\ \mathrm{J\,mol^{-1}\,K^{-1}}$ from the magnetic exchange interaction change across the phase transition. Around $T_{\mathrm{JT}}$, a robust increase in the electrical conductivity takes place, and the electronic entropy change, which is assumed to be negligible for the majority of electronic systems, was found to be $1.8(3)\ \mathrm{J\,mol^{-1}\,K^{-1}}$.
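The abstract does not name the thermodynamic equation; for a first-order transition, a plausible reading (an assumption, not stated in the abstract) is the Clausius-Clapeyron relation, under which the quoted terms decompose the total transition entropy:

```latex
% Hedged reading: Clausius--Clapeyron relates the measured slope to the volume
% and entropy jumps; the three quoted terms then sum to the total entropy change.
\[
  \frac{dT_{\mathrm{JT}}}{dP} = \frac{\Delta V}{\Delta S},
  \qquad
  \Delta S = \Delta S_{\mathrm{vol}} + \Delta S_{\mathrm{mag}} + \Delta S_{\mathrm{el}}
  \approx (5.7 + 0.8 + 1.8)\ \mathrm{J\,mol^{-1}\,K^{-1}}
  = 8.3\ \mathrm{J\,mol^{-1}\,K^{-1}}.
\]
```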
Abstract:
The aim of task scheduling is to minimize the makespan of applications, exploiting the best possible way to use shared resources. Applications have requirements which call for customized environments for their execution. One way to provide such environments is to use virtualization on demand. This paper presents two schedulers based on integer linear programming which schedule virtual machines (VMs) in grid resources and tasks on these VMs. The schedulers differ from previous work by the joint scheduling of tasks and VMs and by considering the impact of the available bandwidth on the quality of the schedule. Experiments show the efficacy of the schedulers in scenarios with different network configurations.
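As a minimal sketch of an assignment-style ILP in the spirit of these schedulers, the PuLP model below places each task on one VM and minimizes the makespan. The task set, VM set and run times are invented, and the paper's actual models additionally place the VMs themselves on grid resources and account for the available bandwidth.

```python
import pulp

tasks = ["t1", "t2", "t3", "t4"]
vms = ["vm1", "vm2"]
runtime = {("t1", "vm1"): 4, ("t1", "vm2"): 6, ("t2", "vm1"): 3, ("t2", "vm2"): 2,
           ("t3", "vm1"): 5, ("t3", "vm2"): 4, ("t4", "vm1"): 2, ("t4", "vm2"): 3}

prob = pulp.LpProblem("joint_schedule", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (tasks, vms), cat="Binary")   # task-to-VM placement
makespan = pulp.LpVariable("makespan", lowBound=0)
prob += makespan                                             # objective: min makespan
for t in tasks:                                              # each task on exactly one VM
    prob += pulp.lpSum(x[t][v] for v in vms) == 1
for v in vms:                                                # each VM's load bounds the makespan
    prob += pulp.lpSum(runtime[(t, v)] * x[t][v] for t in tasks) <= makespan

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for t in tasks:
    print(t, "->", next(v for v in vms if x[t][v].value() > 0.5))
print("makespan =", makespan.value())
```

Bandwidth-aware variants of this model would add transfer-time terms to the load constraints, which is the kind of extension the abstract highlights as distinguishing these schedulers from previous work.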