922 results for 290205 Flight Control Systems


Relevance:

100.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

100.00%

Publisher:

Abstract:

In this work, we carried out a study of the Datapool model 2208 servo module, aiming to become familiar with the module and the material that accompanies it, and to carry out the experiments suggested in its study guides in order to verify and understand its operation. From this study, three experiments were developed, aimed at familiarizing students with the module, calibrating it, and controlling the servo motor's speed and position. These experiments can become part of the Linear Control laboratory, enriching the learning of the concepts involved, since students can move beyond the purely theoretical field and see complex concepts applied in practice.
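
A minimal simulation sketch of the kind of speed and position control experiment described above: a discrete PID loop closed around a first-order servo model. The plant parameters and gains are illustrative assumptions, not values from the Datapool 2208 module.

```python
# Minimal sketch (illustrative assumptions): a discrete PID position loop
# driving a simple first-order servo model.
import numpy as np

K, tau = 2.0, 0.5           # hypothetical servo gain and time constant
kp, ki, kd = 8.0, 2.0, 0.5  # hypothetical PID gains
dt, t_end = 0.001, 2.0

setpoint = 1.0              # desired position (rad)
position, speed, integral = 0.0, 0.0, 0.0
prev_err = setpoint - position   # avoids a derivative kick on the first step

for _ in np.arange(0.0, t_end, dt):
    err = setpoint - position
    integral += err * dt
    derivative = (err - prev_err) / dt
    u = kp * err + ki * integral + kd * derivative   # control voltage
    # first-order speed dynamics: tau * dspeed/dt = -speed + K * u
    speed += dt * (-speed + K * u) / tau
    position += dt * speed
    prev_err = err

print(f"final position: {position:.3f} (setpoint {setpoint})")
```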

Relevance:

100.00%

Publisher:

Abstract:

In this work, a method of computing PD stabilising gains for rotating systems is presented based on the D-decomposition technique, which requires knowledge of the frequency response functions only. By applying this method to a rotating system with electromagnetic actuators, it is demonstrated that the stability boundary locus in the plane of feedback gains can be easily plotted, and the most suitable gains can be found to minimise the resonant peak of the system. Experimental results for a Laval rotor show the feasibility not only of controlling lateral shaft vibration and assuring stability, but also of predicting the final vibration level achieved by the closed-loop system. These results are obtained based solely on the input-output response information of the system as a whole.
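
A minimal sketch of the D-decomposition step the abstract refers to: on the stability boundary s = jω the characteristic equation 1 + (kp + kd·s)·G(s) = 0 gives kp + jω·kd = -1/G(jω), so a measured FRF directly yields the boundary locus in the (kp, kd) plane. The synthetic FRF below stands in for measured data and is an assumption.

```python
# Minimal sketch (illustrative assumptions, not the paper's code): tracing the
# D-decomposition stability boundary in the (kp, kd) plane from an FRF.
import numpy as np

def boundary_locus(w, G):
    """w: frequencies (rad/s); G: complex FRF samples G(j*w)."""
    inv = -1.0 / G            # on the boundary: kp + j*w*kd = -1/G(j*w)
    kp = inv.real
    kd = inv.imag / w
    return kp, kd

# Synthetic lightly damped FRF standing in for measured data
w = np.linspace(0.1, 200.0, 2000)
wn, zeta = 50.0, 0.02                          # hypothetical resonance
G = 1.0 / (-(w**2) + 2j * zeta * wn * w + wn**2)

kp, kd = boundary_locus(w, G)
# The points (kp, kd) trace the stability boundary; gains are then chosen
# inside the stable region, e.g. to minimise the closed-loop resonant peak.
print(kp[:3], kd[:3])
```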

Relevance:

100.00%

Publisher:

Abstract:

The human eye is sensitive to visible light. Increasing the illumination on the eye causes the pupil to contract, while decreasing the illumination causes the pupil to dilate. Visible light causes specular reflections inside the iris ring. On the other hand, the human retina is less sensitive to near-infrared (NIR) radiation in the wavelength range from 800 nm to 1400 nm, but iris detail can still be imaged with NIR illumination. In order to measure the dynamic movement of the human pupil and iris while keeping the light-induced reflexes from affecting the quality of the digitized image, this paper describes a device based on the consensual reflex. This biological phenomenon contracts and dilates the two pupils synchronously when one of the eyes is illuminated by visible light. In this paper, we propose to capture images of the pupil of one eye using NIR illumination while illuminating the other eye with a visible-light pulse. This new approach extracts iris features called "dynamic features (DFs)". This innovative methodology proposes extracting information about the way the human eye reacts to light and using such information for biometric recognition purposes. The results demonstrate that these are discriminating features and, even using the Euclidean distance measure, an average recognition accuracy of 99.1% was obtained. The proposed methodology has the potential to be "fraud-proof", because these DFs can only be extracted from living irises.
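
As a rough illustration of the matching step mentioned above (Euclidean distance between dynamic-feature vectors), the following sketch uses hypothetical templates and a hypothetical probe; it is not the paper's pipeline.

```python
# Minimal sketch (illustrative only): nearest-template matching of "dynamic
# feature" vectors with the Euclidean distance. All vectors are made up.
import numpy as np

def identify(probe, templates):
    """Return the enrolled identity whose template is closest to the probe."""
    dists = {name: np.linalg.norm(probe - t) for name, t in templates.items()}
    return min(dists, key=dists.get), dists

templates = {
    "subject_A": np.array([0.82, 0.15, 0.33, 0.47]),
    "subject_B": np.array([0.40, 0.62, 0.11, 0.90]),
}
probe = np.array([0.80, 0.18, 0.30, 0.50])
best, dists = identify(probe, templates)
print(best, dists)
```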

Relevance:

100.00%

Publisher:

Abstract:

Synchronous telecommunication networks, distributed control systems and integrated circuits have their accuracy of operation dependent on the existence of a reliable time basis signal, extracted from the line data stream and available to each node. In this sense, the existence of a sub-network (inside the main network) dedicated to the distribution of the clock signals is crucially important. There are different solutions for the architecture of the time distribution sub-network, and choosing one of them depends on cost, precision, reliability and operational security. In this work we present: (i) the possible time distribution networks and their usual topologies and arrangements; (ii) how parameters of the network nodes can affect the reachability and stability of the synchronous state of a network; (iii) optimization methods for synchronous networks which can provide low-cost architectures with operational precision, reliability and security. (C) 2011 Elsevier B.V. All rights reserved.
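
A minimal sketch of one of the arrangements discussed: a master-slave time distribution chain in which each slave node behaves as a first-order phase-locked loop tracking its upstream neighbour. The loop gain stands in for the node parameters that affect the reachability and stability of the synchronous state; all numbers are illustrative.

```python
# Minimal sketch (illustrative assumptions): a master-slave clock distribution
# chain of first-order phase-locked loops. Each slave steers its phase toward
# the node upstream; with a constant master frequency, each link settles to a
# constant steady-state phase error of master_freq / gain.
import numpy as np

n_nodes, steps, dt = 4, 5000, 1e-3
gain = 30.0                      # hypothetical PLL loop gain [1/s]
master_freq = 2 * np.pi * 1.0    # master phase ramp [rad/s]

phase = np.zeros(n_nodes)        # node 0 is the master
for _ in range(steps):
    phase[0] += master_freq * dt
    for i in range(1, n_nodes):
        phase[i] += dt * gain * (phase[i - 1] - phase[i])

print("steady-state phase errors per link:", np.diff(phase))
```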

Relevance:

100.00%

Publisher:

Abstract:

We introduce a five-parameter continuous model, called the McDonald inverted beta distribution, to extend the two-parameter inverted beta distribution and provide new four- and three-parameter sub-models. We give a mathematical treatment of the new distribution including expansions for the density function, moments, generating and quantile functions, mean deviations, entropy and reliability. The model parameters are estimated by maximum likelihood and the observed information matrix is derived. An application of the new model to real data shows that it can consistently give a better fit than other important lifetime models. (C) 2012 The Franklin Institute. Published by Elsevier Ltd. All rights reserved.
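
As a hedged illustration of the maximum-likelihood estimation mentioned above, the sketch below fits the two-parameter inverted beta (beta prime) baseline that the McDonald inverted beta extends, using SciPy and synthetic data; it is not the paper's five-parameter model.

```python
# Minimal sketch (not the paper's code): maximum-likelihood fit of the
# two-parameter inverted beta (beta prime) sub-model, with synthetic data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = stats.betaprime.rvs(2.0, 3.0, size=500, random_state=rng)

# Fit the shape parameters by maximum likelihood, keeping loc=0 and scale=1
a_hat, b_hat, loc, scale = stats.betaprime.fit(data, floc=0, fscale=1)
loglik = np.sum(stats.betaprime.logpdf(data, a_hat, b_hat))
print(f"alpha={a_hat:.2f}, beta={b_hat:.2f}, log-likelihood={loglik:.1f}")
```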

Relevance:

100.00%

Publisher:

Abstract:

This paper addresses the m-machine no-wait flow shop problem where the set-up time of a job is separated from its processing time. The performance measure considered is the total flowtime. A new hybrid metaheuristic, Genetic Algorithm-Cluster Search, is proposed to solve the scheduling problem. The performance of the proposed method is evaluated and the results are compared with those of the best method reported in the literature. Experimental tests show the superiority of the new method on the set of test problems with regard to solution quality. (c) 2012 Elsevier Ltd. All rights reserved.
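
A minimal sketch of the objective function involved: the total flowtime of a permutation in an m-machine no-wait flow shop. For brevity the separated set-up times studied in the paper are omitted, and the processing times are made up.

```python
# Minimal sketch (illustrative only): total flowtime of a permutation in an
# m-machine no-wait flow shop, ignoring separated set-up times.
def min_delay(p_i, p_j):
    """Minimum gap between the machine-1 start times of consecutive jobs i, j
    so that job j never has to wait between machines."""
    m = len(p_i)
    return max(sum(p_i[:k]) - sum(p_j[:k - 1]) for k in range(1, m + 1))

def total_flowtime(perm, p):
    start, flowtime = 0, 0
    for pos, job in enumerate(perm):
        if pos > 0:
            start += min_delay(p[perm[pos - 1]], p[job])
        flowtime += start + sum(p[job])        # completion time of this job
    return flowtime

p = [[3, 5, 2], [4, 1, 3], [2, 6, 4]]          # 3 jobs x 3 machines (made up)
print(total_flowtime([0, 1, 2], p), total_flowtime([2, 0, 1], p))
```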

Relevance:

100.00%

Publisher:

Abstract:

In this work, we study the performance evaluation of resource-aware business process models. We define a new framework that allows the generation of analytical models for performance evaluation from business process models annotated with resource management information. This framework is composed of a new notation that allows the specification of resource management constraints and a method to convert a business process specification and its resource constraints into Stochastic Automata Networks (SANs). We show that the analysis of the generated SAN model provides several performance indices, such as the average throughput of the system, the average waiting time, the average queue size, and the utilization rate of resources. Using the BP2SAN tool (our implementation of the proposed framework) and a SAN solver (such as the PEPS tool), we show through a simple use case how a business specialist with no skills in stochastic modeling can easily obtain performance indices that, in turn, can help to identify bottlenecks in the model, to perform workload characterization, to define the provisioning of resources, and to study other performance-related aspects of the business process.
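
As a rough illustration of what a SAN-based analysis computes, the sketch below assembles the generator of two independent two-state automata with Kronecker algebra, solves for the stationary distribution, and reads off a utilization index. It is a toy stand-in for BP2SAN/PEPS, with made-up rates.

```python
# Minimal sketch (illustrative only, not BP2SAN/PEPS): a Stochastic Automata
# Network combines small per-automaton generators with Kronecker algebra;
# solving pi*Q = 0 then yields indices such as utilization.
import numpy as np

lam, mu = 1.0, 2.0       # automaton A: idle -> busy, busy -> idle rates
alpha, beta = 0.5, 1.5   # automaton B: free -> held, held -> free rates
QA = np.array([[-lam, lam], [mu, -mu]])
QB = np.array([[-alpha, alpha], [beta, -beta]])

# SAN descriptor for independent local transitions: the Kronecker sum QA (+) QB
Q = np.kron(QA, np.eye(2)) + np.kron(np.eye(2), QB)

# Stationary distribution: pi Q = 0 with sum(pi) = 1
A = np.vstack([Q.T, np.ones(4)])
b = np.array([0.0, 0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

# Global states are ordered (A_state, B_state): (0,0), (0,1), (1,0), (1,1)
util_A = pi[2] + pi[3]   # probability that automaton A is busy
print("utilization of A:", util_A, "(expected", lam / (lam + mu), ")")
```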

Relevance:

100.00%

Publisher:

Abstract:

The abandonment of less productive fields and agro-forestry activities has occurred in recent decades, affecting large mountain areas throughout the Mediterranean basin. Until the fifties, agricultural practices dealt mainly with the soil surface and with surface runoff control systems. However, the apparent sustainability of soil use is often in contrast with historical documents witnessing heavy hydrogeological instability in naturally fragile areas. The research focused on the dynamics and effects of post-cultivation land abandonment in a critical mountain area of the Reno River. The Reno River represents a typical Tuscan-Emilian Apennines watershed, where soil erosion occurs under very different conditions depending on the interactions between land use, climate, geomorphology and lithology. Landslides are widely represented, due to the diffusion of clay hillslopes. Recent research suggests that climatic variability will increase as a consequence of global climate change, resulting in a greater frequency and intensity of extreme weather events, which could increase rates of erosion, landslide reactivations and the spread of badland (calanchi) basins. As far as hillslopes are concerned, instability is today basically due to intrinsic factors, as the Apennine range is a rather young formation in geological terms and is mainly formed by sedimentary rocks with a high occurrence of clays. Therefore landslides and rockfalls are very frequent, while surface soil erosion is generally low and in any case concentrated in the low Apennines, where intensive farming is still economically worthwhile. The study, supported by the use of GIS, analyses the main physical characteristics of the area and the historical changes in land use, and focuses on the dynamics of spontaneous reafforestation. Furthermore, the research examines the results of soil bioengineering and surface water control solutions for the restoration of landslides that occurred in the recent past. In fact, soil bioengineering has recently been used in different situations in order to consolidate slopes and hillsides and prevent erosion; where applied, it gave good results, both in terms of engineering efficiency and vegetational development, especially if combined with good hydraulic control, thus proving to be a real alternative to other techniques with heavier environmental impacts. Research into the specific site features and the use of proper plant species is vital to the success of bioengineering works.

Relevance:

100.00%

Publisher:

Abstract:

In the collective imagination a robot is a human-like machine, like the androids of science fiction. However, the type of robots that you will encounter most frequently are machines that do work that is too dangerous, boring or onerous. Most of the robots in the world are of this type. They can be found in the automotive, medical, manufacturing and space industries. A robot is therefore a system that contains sensors, control systems, manipulators, power supplies and software, all working together to perform a task. The development and use of such a system is an active area of research, and one of the main problems is the development of interaction skills with the surrounding environment, which include the ability to grasp objects. To perform this task the robot needs to sense the environment and acquire information about the object, i.e. the physical attributes that may influence a grasp. Humans can solve this grasping problem easily thanks to their past experience, which is why many researchers approach it from a machine learning perspective, finding a grasp for an object using information about already known objects. But humans can select the best grasp from a vast repertoire, considering not only the physical attributes of the object to grasp but also the effect they want to obtain. This is why, in our case, the study in the area of robot manipulation is focused on grasping and on integrating symbolic tasks with data gained through sensors. The learning model is based on a Bayesian network that encodes the statistical dependencies between the data collected by the sensors and the symbolic task. This data representation has several advantages: it allows the uncertainty of the real world to be taken into account, making it possible to deal with sensor noise; it encodes a notion of causality; and it provides a unified network for learning. Since the network currently implemented is based on human expert knowledge, it is very interesting to implement an automated method to learn the structure, as in the future more tasks and object features may be introduced, and a complex network design based only on human expert knowledge can become unreliable. Since structure learning algorithms present some weaknesses, the goal of this thesis is to analyse the real data used in the network modelled by the human expert, implement a feasible structure learning approach, and compare the results with the network designed by the expert in order to possibly enhance it.
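
As one possible illustration of the structure learning step discussed above (not the approach implemented in the thesis), the sketch below runs a score-based hill-climbing search with a BIC score using the pgmpy library on a hypothetical discretised dataset of object features and tasks.

```python
# Minimal sketch (an assumption, not the thesis' code): score-based Bayesian
# network structure learning with pgmpy. Column names and data are hypothetical.
import pandas as pd
from pgmpy.estimators import HillClimbSearch, BicScore

# Hypothetical discretised dataset: object features plus the symbolic task
data = pd.DataFrame({
    "size":  ["small", "large", "small", "large", "small", "large"] * 50,
    "shape": ["box", "cylinder", "cylinder", "box", "box", "cylinder"] * 50,
    "grasp": ["pinch", "power", "pinch", "power", "pinch", "power"] * 50,
    "task":  ["place", "pour", "place", "pour", "place", "pour"] * 50,
})

search = HillClimbSearch(data)
learned_dag = search.estimate(scoring_method=BicScore(data))
print(sorted(learned_dag.edges()))   # compare against the expert-designed structure
```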

Relevance:

100.00%

Publisher:

Abstract:

Urbanization is a continuing phenomenon all over the world. Grasslands, forests, etc. are being continually converted into residential, commercial and industrial complexes, roads and streets, and so on. One of the side effects of urbanization with which engineers and planners must deal is the increase in peak flows and runoff volumes from rainfall events. As a result, urban drainage and flood control systems must be designed to accommodate the peak flows from a variety of storms that may occur. Usually the peak flow after development is required not to exceed what would have occurred from the same storm under the conditions existing prior to development. In order to achieve this, it is necessary to design detention storage to hold back runoff and release it downstream at controlled rates. In the first part of the work, various simplified formulations that can be adopted for the design of stormwater detention facilities were developed. In order to obtain a simplified hydrograph, two approaches were adopted: the kinematic routing technique and the linear reservoir schematization. For each of the two approaches, two further formulations were obtained, depending on whether the IDF (intensity-duration-frequency) curve is described with two or three parameters. Other formulations were developed taking into account whether the outlet has a constant discharge or a discharge that depends on the water level in the pond. All these formulations can be easily applied when the characteristics of the drainage system, the maximum discharge allowed at the outlet and the return period characterizing the IDF curve are known; in this way the volume of the detention pond can be calculated. In the second part of the work, the design of detention ponds was analysed using continuous simulation models. The drainage systems adopted for the simulations, performed with SWMM5, are fictitious systems characterized by different sizes and different shapes of the catchments, together with a historical rainfall time series of 16 years recorded in Bologna. This approach suffers from the fact that a continuous record of rainfall is often not available and, when it is, the cost of such modelling can be very high, and the majority of design practitioners are not prepared to use continuous long-term modelling in the design of stormwater detention facilities. In the third part of the work, statistical and stochastic methodologies were analysed in order to define the volume of the detention pond. In particular, the results of the long-term simulations performed with SWMM were used to obtain the data to which the statistical and stochastic formulations were applied. All these methodologies were compared, and correction coefficients were proposed on the basis of the statistical and stochastic forms. In this way, engineers who have to design a detention pond can apply a simplified procedure appropriately corrected with the proposed coefficients.
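
A minimal sketch of a simplified detention-volume estimate in the spirit of the first part of the work: the required volume is taken as the maximum, over the storm duration, of the runoff volume entering the pond minus the volume released at a constant allowable outflow, with a two-parameter IDF curve. All parameters are made-up assumptions, and this is not one of the thesis' formulations.

```python
# Minimal sketch (illustrative assumptions only): simplified detention volume
# from a two-parameter IDF curve i(D) = a / (D + b), a constant allowable
# outflow, and a rational-method inflow estimate.
import numpy as np

a, b = 2500.0, 15.0        # IDF parameters: i [mm/h] = a / (D [min] + b)
C, area_ha = 0.7, 15.0     # runoff coefficient, catchment area [ha]
q_out = 0.3                # constant allowable outflow [m^3/s]

D = np.linspace(5, 240, 1000)                  # candidate storm durations [min]
i = a / (D + b)                                # rainfall intensity [mm/h]
inflow = C * (i / 1000 / 3600) * (area_ha * 1e4) * (D * 60)   # inflow volume [m^3]
outflow = q_out * (D * 60)                     # released volume [m^3]
storage = inflow - outflow                     # volume to be detained [m^3]

k = np.argmax(storage)
print(f"design volume ~ {storage[k]:.0f} m^3 at a critical duration of {D[k]:.0f} min")
```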

Relevance:

100.00%

Publisher:

Abstract:

The emergence of infection with highly pathogenic avian influenza (HPAI) virus subtype H5N1 has focused the attention of the world scientific community, requiring the prompt provision of effective control systems for early detection of the circulation of low pathogenic H5 influenza viruses (LPAI) in wild bird populations, in order to prevent outbreaks of highly pathogenic virus (HPAI) in domestic bird populations, with possible transmission to humans. The project stems from the aim to provide, through a preliminary analysis of the data obtained from surveillance in Italy and Europe, a study of virus detection rates and the development of mathematical models, an objective assessment of the effectiveness of avian influenza surveillance systems in wild bird populations, and to point out guidelines to support the planning of sampling activities. The results obtained from the statistical processing quantify the sampling effort in terms of the time and sample size required and, by simulating different epidemiological scenarios, identify active surveillance as the most suitable approach for monitoring endemic LPAI infection in wild waterfowl, and passive surveillance as the only really effective tool for the early detection of HPAI H5N1 circulation in wild populations. Given the lack of relevant information on H5N1 epidemiology and the actual financial and logistic constraints, an approach that makes use of statistical tools to evaluate and predict the effectiveness of monitoring activities proves to be of primary importance in directing decision-making and making the best use of available resources.
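
As a hedged illustration of the "sample size required" aspect mentioned above, the sketch below uses the standard detection sample-size approximation (perfect test, large population); it is not necessarily one of the models developed in the project.

```python
# Minimal sketch (a standard surveillance approximation): the sample size
# needed to detect at least one infected bird with a given confidence,
# assuming a perfect test and a large population at the stated prevalence.
import math

def detection_sample_size(prevalence, confidence=0.95):
    """Smallest n such that P(at least one positive) >= confidence."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - prevalence))

for prev in (0.01, 0.02, 0.05):
    print(f"prevalence {prev:.0%}: sample at least {detection_sample_size(prev)} birds")
```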

Relevance:

100.00%

Publisher:

Abstract:

This work describes the development of a simulation tool which allows the simulation of the Internal Combustion Engine (ICE), the transmission and the vehicle dynamics. It is a control-oriented simulation tool, designed in order to perform both off-line (Software In the Loop) and on-line (Hardware In the Loop) simulation. In the first case the simulation tool can be used in order to optimize Engine Control Unit strategies (as regards, for example, the fuel consumption or the performance of the engine), while in the second case it can be used in order to test the control system. In recent years the use of HIL simulations has proved to be very useful in the development and testing of control systems. Hardware In the Loop simulation is a technology where the actual vehicles, engines or other components are replaced by a real-time simulation, based on a mathematical model and running in a real-time processor. The processor reads the ECU (Engine Control Unit) output signals which would normally feed the actuators and, by using mathematical models, provides the signals which would be produced by the actual sensors. The simulation tool, fully designed within Simulink, includes the possibility of simulating the engine alone, the transmission and vehicle dynamics alone, or the engine together with the transmission and vehicle dynamics, allowing in the latter case the performance and the operating conditions of the Internal Combustion Engine to be evaluated once it is installed on a given vehicle. Furthermore the simulation tool includes different levels of complexity, since it is possible to use, for example, either a zero-dimensional or a one-dimensional model of the intake system (in the latter case only for off-line applications, because of the higher computational effort). Given these preliminary remarks, an important goal of this work is the development of a simulation environment that can be easily adapted to different engine types (single- or multi-cylinder, four-stroke or two-stroke, diesel or gasoline) and transmission architectures without reprogramming. Also, the same simulation tool can be rapidly configured both for off-line and real-time applications. The Matlab-Simulink environment has been adopted to achieve such objectives, since its graphical programming interface allows building flexible and reconfigurable models, and real-time simulation is possible with standard, off-the-shelf software and hardware platforms (such as dSPACE systems).
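
A minimal sketch of the zero-dimensional intake model option mentioned above: a filling-and-emptying manifold whose pressure follows dp/dt = RT/V * (mdot_throttle - mdot_cylinders). All parameters are illustrative assumptions, not values from the tool.

```python
# Minimal sketch (illustrative assumptions only, not the tool described above):
# a zero-dimensional "filling and emptying" intake-manifold model.
import numpy as np

R, T = 287.0, 300.0          # gas constant [J/(kg K)], manifold temperature [K]
V_man, V_d = 2.0e-3, 1.4e-3  # manifold and displaced volumes [m^3]
eta_v, rpm = 0.85, 3000.0    # volumetric efficiency, engine speed
mdot_thr = 0.02              # throttle mass flow, held constant here [kg/s]

dt, t_end = 1e-4, 2.0
p = 50e3                     # initial manifold pressure [Pa]
for _ in np.arange(0.0, t_end, dt):
    rho = p / (R * T)
    # four-stroke engine: the cylinders draw V_d every two revolutions
    mdot_cyl = eta_v * rho * V_d * rpm / (2.0 * 60.0)
    p += dt * R * T / V_man * (mdot_thr - mdot_cyl)

print(f"steady-state manifold pressure ~ {p/1000:.1f} kPa")
```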

Relevance:

100.00%

Publisher:

Abstract:

Two of the main features of today's complex software systems, such as pervasive computing systems and Internet-based applications, are distribution and openness. Distribution revolves around three orthogonal dimensions: (i) distribution of control: systems are characterised by several independent computational entities and devices, each representing an autonomous and proactive locus of control; (ii) spatial distribution: entities and devices are physically distributed and connected in a global (such as the Internet) or local network; and (iii) temporal distribution: interacting system components come and go over time, and are not required to be available for interaction at the same time. Openness deals with the heterogeneity and dynamism of system components: complex computational systems are open to the integration of diverse components, heterogeneous in terms of architecture and technology, and are dynamic since they allow components to be updated, added, or removed while the system is running. The engineering of open and distributed computational systems mandates the adoption of a software infrastructure whose underlying model and technology can provide the required level of uncoupling among system components. This is the main motivation behind current research trends in the area of coordination middleware that exploit tuple-based coordination models in the engineering of complex software systems, since they intrinsically provide coordinated components with communication uncoupling. An additional daunting challenge for tuple-based models comes from knowledge-intensive application scenarios, namely scenarios where most of the activities are based on knowledge in some form, and where knowledge becomes the prominent means by which systems get coordinated. Handling knowledge in tuple-based systems induces problems in terms of syntax - e.g., two tuples containing the same data may not match due to differences in the tuple structure - and (mostly) of semantics - e.g., two tuples representing the same information may not match because of a different syntax adopted. Until now, the problem has been faced by exploiting tuple-based coordination within a middleware for knowledge-intensive environments: e.g., experiments with tuple-based coordination within a Semantic Web middleware, and analogous approaches. However, such approaches appear to be designed to tackle the design of coordination for specific application contexts like the Semantic Web and Semantic Web Services, and they result in a rather involved extension of the tuple space model. The main goal of this thesis was to conceive a more general approach to semantic coordination. In particular, the model and technology of semantic tuple centres were developed. The tuple centre model is adopted as the main coordination abstraction to manage system interactions. A tuple centre can be seen as a programmable tuple space, i.e. an extension of a Linda tuple space where the behaviour of the tuple space can be programmed so as to react to interaction events. By encapsulating coordination laws within coordination media, tuple centres promote coordination uncoupling among coordinated components. The tuple centre model was then semantically enriched: a main design choice in this work was not to completely redesign the existing syntactic tuple space model, but rather to provide a smooth extension that, although supporting semantic reasoning, keeps tuples and tuple matching as simple as possible. By encapsulating the semantic representation of the domain of discourse within coordination media, semantic tuple centres promote semantic uncoupling among coordinated components. The main contributions of the thesis are: (i) the design of the semantic tuple centre model; (ii) the implementation and evaluation of the model based on an existing coordination infrastructure; (iii) a view of the application scenarios in which semantic tuple centres seem to be suitable as coordination media.
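
A minimal sketch of the Linda-style tuple space operations (out/rd/in) that the tuple centre model extends with programmable reactions. Matching here is purely syntactic, which is exactly the limitation that motivates semantic tuple centres; all names are illustrative.

```python
# Minimal sketch (illustrative only): a tiny Linda-style tuple space with
# non-blocking out/rd/in operations and purely syntactic template matching.
from typing import Any, Optional, Tuple

WILDCARD = object()   # stands for a formal (unbound) field in a template

class TupleSpace:
    def __init__(self) -> None:
        self._tuples: list = []

    def out(self, tup: Tuple[Any, ...]) -> None:
        """Insert a tuple into the space."""
        self._tuples.append(tup)

    def _match(self, template, tup) -> bool:
        return len(template) == len(tup) and all(
            f is WILDCARD or f == v for f, v in zip(template, tup))

    def rd(self, template) -> Optional[Tuple[Any, ...]]:
        """Read (without removing) a tuple matching the template."""
        return next((t for t in self._tuples if self._match(template, t)), None)

    def in_(self, template) -> Optional[Tuple[Any, ...]]:
        """Withdraw a tuple matching the template."""
        t = self.rd(template)
        if t is not None:
            self._tuples.remove(t)
        return t

ts = TupleSpace()
ts.out(("temperature", "room1", 21.5))
print(ts.rd(("temperature", WILDCARD, WILDCARD)))  # syntactic match succeeds
print(ts.rd(("temp", "room1", WILDCARD)))          # same info, different syntax: no match
```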