920 results for INTELLIGENCE SYSTEMS METHODOLOGY


Relevance: 30.00%

Abstract:

Based on the centralized inverted decoupling structure, this paper presents a new methodology for tuning the main controller of an internal model control (IMC) scheme for n×n stable multivariable processes with multiple time delays. Regardless of the system size, very simple general expressions for the controller elements are obtained. Realizability conditions are provided and the specification of the closed-loop requirements is explained. A diagonal filter is added to the proposed control structure in order to improve disturbance rejection without modifying the nominal set-point response. The effectiveness of the method is illustrated through several simulation examples and compared with other works.
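
For context, the textbook IMC relations that this kind of tuning builds on are sketched below in standard notation; these are the generic single-loop formulas, not the paper's n×n expressions:

```latex
% Nominal IMC loop: plant G(s), model \tilde{G}(s), IMC controller Q(s).
% Equivalent classical feedback controller:
C(s) = \frac{Q(s)}{1 - \tilde{G}(s)\,Q(s)}
% Typical design: invert the minimum-phase part \tilde{G}_- of the model
% and append a low-pass filter so that Q(s) is proper (realizable):
Q(s) = \tilde{G}_-^{-1}(s)\, f(s), \qquad f(s) = \frac{1}{(\lambda s + 1)^n}
% Nominal set-point response when the model is perfect (\tilde{G} = G):
\frac{y(s)}{r(s)} = G(s)\,Q(s) = \tilde{G}_+(s)\, f(s)
```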

Relevance: 30.00%

Abstract:

The 21st century has brought profound changes to the space in which military action takes place. This shift, which now embraces both the physical and the cognitive domains, demands new concepts of operation and more agile organizational structures capable of coping with a highly volatile, unpredictable and complex environment. More than ever, organizations thus depend on information and on the systems that generate it; within military organizations, one capability in particular, Intelligence, Surveillance & Reconnaissance (ISR), has become central to success. Given the complexity of the systems, processes and people involved in this capability, it is relevant to study how the Portuguese Air Force (PAF) is accommodating the concept within its structure, since adapting it requires an information-age organization in which networking is particularly prominent. This research analyzes contemporary forms of organizational structure, crosses them with the recommendations of the North Atlantic Treaty Organization (also known as the Alliance), and then compares them with the PAF as it stands today. It concludes with tangible proposals that can enhance existing capabilities, notably an analysis matrix for organizational efficiency, a new way of organizing the resident ISR capabilities, and a way of strengthening networking based on existing means.

Relevance: 30.00%

Abstract:

The rapid growth of competition in the field motivates the creation of a methodology for project management that draws on the best practices available worldwide. The use of ICTs is the way companies can face the competition they currently confront; managing processes, administering resources, and establishing responsibilities and commitments in the processes through which projects are carried out will mark a change in the fulfilment of goals and objectives. With this in mind, and after an evaluation of the processes with which projects are currently handled, a methodology for managing projects will be defined. Once the processes involved have been defined, the search for the most suitable tool for administering projects via the Web will be addressed, keeping the projects under control and generating benefits in the use of the resources available to the company. Generating management indicators within the selected tool will be an advantage to consider when choosing the Web platform, allowing timely control over the progress of the processes. The use of ICTs combined with an appropriate methodology will foster the development of companies, making it possible to plan the management of human, financial and material resources, with the corresponding savings in time and money.

Relevance: 30.00%

Abstract:

Part 16: Performance Measurement Systems

Relevance: 30.00%

Abstract:

This article presents a methodology for building real-time reconfigurable systems that ensures all the temporal constraints of a set of applications are met while optimizing the utilization of the available reconfigurable resources. Starting from a static platform that meets all the real-time deadlines, our approach takes advantage of run-time reconfiguration to reduce the area needed while guaranteeing that all the deadlines are still met. This goal is achieved by identifying which tasks must always be ready for execution in order to meet the deadlines, combined with a methodology that further reduces the area requirements.
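
A toy sketch of the residency test such an approach implies is given below; the task fields and the slack rule are illustrative assumptions, not the article's actual algorithm:

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    wcet: float      # worst-case execution time once loaded
    deadline: float  # relative deadline from release
    reconfig: float  # time to load the task onto the reconfigurable fabric

def must_stay_resident(task: Task) -> bool:
    """A task whose deadline cannot absorb a run-time reconfiguration must
    always be kept ready for execution; the rest can be loaded on demand,
    freeing reconfigurable area."""
    return task.reconfig + task.wcet > task.deadline

tasks = [Task("filter", 2.0, 10.0, 3.0), Task("codec", 4.0, 6.0, 3.0)]
resident = [t.name for t in tasks if must_stay_resident(t)]
print(resident)  # ['codec']: 3.0 + 4.0 > 6.0, so it cannot wait to be loaded
```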

Relevance: 30.00%

Abstract:

Part 12: Collaboration Platforms

Relevance: 30.00%

Abstract:

Despite the wide swath of applications where multiphase fluid contact lines exist, there is still no consensus on an accurate and general simulation methodology. Most prior numerical work has imposed one of the many dynamic contact-angle theories at solid walls. Such approaches are inherently limited by the accuracy of the chosen theory. In fact, when inertial effects are important, the contact angle may be history dependent and, thus, any single mathematical function is inappropriate. Given these limitations, the present work has two primary goals: 1) create a numerical framework that allows the contact angle to evolve naturally with appropriate contact-line physics, and 2) develop equations and numerical methods such that contact-line simulations may be performed on coarse computational meshes.

Fluid flows affected by contact lines are dominated by capillary stresses and require accurate curvature calculations. The level set method was chosen to track the fluid interfaces because it makes it easy to calculate interface curvature accurately. Unfortunately, the level set reinitialization suffers from an ill-posed mathematical problem at contact lines: a "blind spot" exists. Standard techniques for handling this deficiency are shown to introduce parasitic velocity currents that artificially deform freely floating (non-prescribed) contact angles. As an alternative, a new relaxation-equation reinitialization is proposed to remove these spurious velocity currents, and the concept is further explored with level-set extension velocities.
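
For reference, this is the standard reinitialization PDE whose contact-line behaviour is at issue (textbook form; the new relaxation equation proposed here replaces it):

```latex
% Reinitialization restores the signed-distance property |\nabla\phi| = 1
% by evolving the level set \phi in pseudo-time \tau:
\frac{\partial \phi}{\partial \tau} = \mathrm{sgn}(\phi_0)\,\bigl(1 - |\nabla \phi|\bigr)
% Information travels along characteristics leaving the interface, so
% wall cells near the contact line that no characteristic reaches form
% the "blind spot" described above.
```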

To capture contact-line physics, two classical boundary conditions, the Navier-slip velocity boundary condition and a fixed contact angle, are implemented in direct numerical simulations (DNS). The DNS are found to converge only if the slip length is well resolved by the computational mesh. Unfortunately, since the slip length is often very small compared to the fluid structures, these simulations are not computationally feasible for large systems. To address the second goal, a new methodology is proposed that relies on the volumetric-filtered Navier-Stokes equations. Two unclosed terms, an average curvature and a viscous shear (VS), are proposed to represent the missing microscale physics on a coarse mesh.
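
The Navier-slip condition referred to above has the following standard single-phase form, with λ the slip length; the two-phase implementation used here may differ in detail:

```latex
% Navier slip at a solid wall (n: wall normal, u_t: tangential velocity,
% \lambda: slip length), together with impermeability:
u_t = \lambda \left.\frac{\partial u_t}{\partial n}\right|_{\mathrm{wall}},
\qquad \mathbf{u} \cdot \mathbf{n} = 0
% \lambda \to 0 recovers no-slip; a moving contact line requires
% \lambda > 0 (or numerical slip) to avoid a stress singularity.
```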

All of these components are then combined into a single framework and tested for a water droplet impacting a partially wetting substrate. Very good agreement is found between the experimental measurements and the numerical simulation for the evolution of the contact diameter in time. Such a comparison would not be possible with prior methods, since the Reynolds number Re and the capillary number Ca are large. Furthermore, the experimentally approximated slip-length ratio is well outside the range currently achievable by DNS. This framework is a promising first step towards simulating complex physics in capillary-dominated flows at a reasonable computational expense.

Relevance: 30.00%

Abstract:

Pressure management (PM) is commonly used in water distribution systems (WDSs). In the last decade, a strategic objective in the field has been the development of new scientific and technical methods for its implementation. However, owing to a lack of systematic analysis of the results obtained in practical cases, progress has not always been reflected in practical actions. To address this problem, this paper provides a comprehensive analysis of the most innovative issues related to PM. The proposed methodology is based on a case-study comparison of qualitative concepts drawing on published work from 140 sources. The results include a qualitative analysis covering four aspects: (1) the objectives pursued through PM; (2) types of regulation, including advanced control systems based on electronic controllers; (3) new methods for designing districts; and (4) the development of optimization models associated with PM. The evolution of these four aspects is examined and discussed, conclusions regarding the current status of each are drawn, and proposals for future research are outlined.

Relevance: 30.00%

Abstract:

Ecological models written in a mathematical language L(M), or model language, with a given style or methodology can be considered as a text. Statistical linguistic laws can be applied to such a text, and the experimental results demonstrate that a mathematical model behaves like a literary text in any natural language. The text has the following characteristics: (a) the variables, their transformed functions and the parameters are the lexical units (LUN) of ecological models; (b) syllables are constituted by a LUN, or a chain of them, separated by operating or ordering LUNs; (c) the flow equations are words; and (d) the distribution of words (LUN and composed LUN, or CLUN) according to their lengths follows a Poisson distribution, in accordance with Chebanov's law. This is founded on Vakar's formula, by which the linguistic entropy of L(M) is likewise calculated. We apply these ideas to practical examples using the MARIOLA model. This paper studies the lengths of the simple lexical units, composed lexical units and words of text models, expressing these lengths in numbers of primitive symbols and syllables. The use of these linguistic laws makes it possible to indicate the degree of information conveyed by an ecological model.
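
A minimal sketch of this kind of analysis is shown below: it compares the length distribution of a model's lexical units against a Poisson law and computes a simple entropy. The token stream is hypothetical, and plain Shannon entropy stands in for Vakar's formula:

```python
import math
from collections import Counter

# Hypothetical lexical units extracted from a model's flow equations,
# with each unit's length counted in primitive symbols.
tokens = ["dB/dt", "k1", "B", "k2", "B", "N", "dN/dt", "r", "N"]
lengths = [len(t) for t in tokens]

# Empirical length distribution.
counts = Counter(lengths)
total = sum(counts.values())
p = {l: c / total for l, c in counts.items()}

# Shannon entropy of the length distribution, in bits.
H = -sum(q * math.log2(q) for q in p.values())

# Poisson pmf with the sample mean as parameter; Chebanov's law posits
# that unit lengths follow such a distribution.
lam = sum(lengths) / len(lengths)
poisson = {l: math.exp(-lam) * lam**l / math.factorial(l) for l in counts}

print(f"mean length = {lam:.2f}, entropy = {H:.2f} bits")
for l in sorted(counts):
    print(f"len {l}: empirical {p[l]:.2f} vs Poisson {poisson[l]:.2f}")
```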

Relevance: 30.00%

Abstract:

Master's thesis, Universidade de Brasília, Faculdade de Tecnologia, Departamento de Engenharia Mecânica, 2016.

Relevance: 30.00%

Abstract:

In the medical field, images obtained from high-definition cameras and other medical imaging systems are an integral part of medical diagnosis. The analysis of these images is usually performed by physicians, who sometimes need to spend long hours reviewing the images before they can reach a diagnosis and decide on a course of action. In this dissertation we present a framework for computer-aided analysis of medical imagery via the use of an expert system. While this problem has been discussed before, we consider a system based on mobile devices. Since the release of the iPhone in 2007, the popularity of mobile devices has increased rapidly and our lives have become more reliant on them. This popularity, and the ease of developing mobile applications, now makes it possible to perform on these devices many of the image analyses that previously required a personal computer. All of this has opened the door to a whole new set of possibilities and freed physicians from their reliance on desktop machines. The approach proposed in this dissertation aims to capitalize on these newfound opportunities by providing a framework for the analysis of medical images that physicians can use from their mobile devices, thus removing their reliance on desktop computers. We also provide an expert system to aid in the analysis and to advise on the selection of medical procedures. Finally, we enable other mobile applications to be developed by providing a generic mobile application development framework that opens the mobile domain to other applications. In this dissertation we outline our work towards the development of the proposed methodology and the remaining work needed to solve the problem. To make this difficult problem tractable, we divide it into three parts: the development of a user interface modeling language and tooling, the creation of a game development modeling language and tooling, and the development of a generic mobile application framework. To make the problem more manageable, we narrow the initial scope down to the hair-transplant and glaucoma domains.
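
As a toy illustration only, a rule-based core of the kind such an expert system might contain is sketched below; the rules, thresholds and measurement names are invented placeholders, not the dissertation's knowledge base:

```python
# Each rule maps a predicate over image-derived measurements to advice.
rules = [
    (lambda m: m["cup_to_disc_ratio"] > 0.6,
     "Elevated cup-to-disc ratio: refer for glaucoma evaluation."),
    (lambda m: m["follicular_density"] < 40,
     "Low donor density: revisit hair-transplant candidacy."),
]

def advise(measurements: dict) -> list[str]:
    """Fire every rule whose condition holds and collect its advice."""
    return [advice for cond, advice in rules if cond(measurements)]

print(advise({"cup_to_disc_ratio": 0.72, "follicular_density": 55}))
# ['Elevated cup-to-disc ratio: refer for glaucoma evaluation.']
```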

Relevance: 30.00%

Abstract:

Power efficiency is one of the most important constraints in the design of embedded systems, since such systems are generally driven by batteries with a limited energy budget or by a restricted power supply. In every embedded system there are one or more processor cores that run the software and interact with the other hardware components, and their power consumption has an important impact on the total power dissipated in the system. Processor power optimization is therefore crucial for satisfying the power consumption constraints and developing low-power embedded systems.

A key aspect of research in processor power optimization and management is power estimation. A fast and accurate method for estimating processor power at design time helps the designer explore a large space of design possibilities and make optimal choices for developing a power-efficient processor. Likewise, understanding the power dissipation behaviour of a specific application is key to choosing appropriate algorithms for writing power-efficient software. Simulation-based methods for measuring processor power achieve very high accuracy, but are available only late in the design process and are often quite slow. Hence the need for faster, higher-level power prediction methods that allow the system designer to explore many alternatives for developing power-efficient hardware and software.

The aim of this thesis is to present fast, high-level power models for predicting processor power consumption. Power predictability is achieved in two ways: first, by using a design method to develop power-predictable circuits; second, by analysing the power of the functions in the code that repeat during execution and building the power model on the average number of repetitions.

In the first case, a design method called Asynchronous Charge Sharing Logic (ACSL) is used to implement the Arithmetic Logic Unit (ALU) of the 8051 microcontroller. ACSL circuits are power predictable because their power consumption is independent of the input data. Based on this property, a fast prediction method is presented that estimates the power of the ALU by analysing the software program and extracting the number of ALU-related instructions. This method achieves less than 1% error in power estimation and a more than 100-fold speedup compared with conventional simulation-based methods.

In the second case, an average-case processor energy model is developed for the insertion sort algorithm, based on the number of comparisons that take place during its execution. The average number of comparisons is calculated using a high-level methodology called MOdular Quantitative Analysis (MOQA). The parameters of the energy model are measured for the LEON3 processor core, but the model is general and can be used for any processor. The model has been validated through power measurement experiments and offers high accuracy and orders-of-magnitude speedup over the simulation-based method.
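
A schematic of the two prediction styles described above is sketched below; the per-instruction energies and the per-comparison cost are invented illustrative numbers, not measured ACSL/8051 or LEON3 values:

```python
# Style 1: ACSL-like circuits are data independent, so ALU energy is a
# pure function of the instruction mix extracted from the program.
ENERGY_PER_OP_NJ = {"add": 0.9, "sub": 0.9, "and": 0.6}  # hypothetical values

def alu_energy_nj(instruction_counts: dict[str, int]) -> float:
    """Sum per-instruction energies over the statically extracted counts."""
    return sum(ENERGY_PER_OP_NJ[op] * n for op, n in instruction_counts.items())

# Style 2: an average-case model for insertion sort. On a uniformly random
# permutation the expected number of inversions is n*(n-1)/4, which is the
# dominant term of the expected comparison count.
def insertion_sort_energy_nj(n: int, energy_per_comparison_nj: float) -> float:
    expected_comparisons = n * (n - 1) / 4
    return expected_comparisons * energy_per_comparison_nj

print(alu_energy_nj({"add": 1200, "sub": 300, "and": 500}))  # 1650.0 nJ
print(insertion_sort_energy_nj(1000, 0.5))                   # 124875.0 nJ
```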

Relevance: 30.00%

Abstract:

Data sources are often geographically dispersed in real-life applications. Finding a knowledge model may require joining all the data sources and running a machine learning algorithm on the joint set. We present an alternative based on a Multi-Agent System (MAS): an agent mines one data source to extract a local theory (knowledge model) and then merges it with the previous MAS theory using a knowledge fusion technique. In this way, we obtain a global theory that summarizes the distributed knowledge without spending the resources and time needed to join the data sources. New experiments, including a statistical significance analysis, have been carried out. The results show that, as a result of knowledge fusion, the accuracy of the initial theories is significantly improved, as is that of the monolithic solution.
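
A simplified stand-in for the scheme is sketched below, with a naive per-attribute majority rule as the local "theory" and dictionary merging as the fusion step; the paper's actual knowledge-fusion technique is more elaborate:

```python
from collections import defaultdict

def mine_local_theory(rows):
    """Each agent derives toy rules from its own data source:
    the majority class observed for each attribute value."""
    votes = defaultdict(lambda: defaultdict(int))
    for attrs, label in rows:
        for a in attrs:
            votes[a][label] += 1
    return {a: max(labels, key=labels.get) for a, labels in votes.items()}

def fuse(global_theory, local_theory):
    """Fusion step: keep the existing consensus on conflicting attributes
    and adopt local rules only for attributes not yet covered."""
    merged = dict(local_theory)
    merged.update(global_theory)  # earlier consensus wins conflicts
    return merged

agents = [
    [("sunny", "play"), ("rainy", "stay")],
    [("rainy", "stay"), ("windy", "stay")],
]
theory = {}
for source in agents:
    rows = [((a,), label) for a, label in source]
    theory = fuse(theory, mine_local_theory(rows))
print(theory)  # {'rainy': 'stay', 'windy': 'stay', 'sunny': 'play'}
```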