891 results for Autonomous Robotic Systems. Autonomous Sailboats. Software Architecture
Abstract:
MEDEIROS, Adelardo A. D. A survey of control architectures for autonomous mobile robots. J. Braz. Comp. Soc., Campinas, v. 4, n. 3, abr. 1998. Disponível em:
Abstract:
In this paper, various techniques relating to large-scale systems are presented. First, large-scale systems are explained and their differences from traditional systems are described. Next, possible specifications and requirements on hardware and software are listed. Finally, examples of large-scale systems are presented.
Abstract:
The present paper describes a system for the construction of visual maps ("mosaics") and motion estimation for a set of AUVs (Autonomous Underwater Vehicles). The robots are equipped with a down-looking camera, which is used to estimate their motion with respect to the seafloor and to build an online mosaic. As the mosaic increases in size, a systematic bias is introduced in its alignment, resulting in an erroneous output. The theoretical concepts associated with the use of an Augmented State Kalman Filter (ASKF) were applied to optimally estimate both the visual map and the fleet position.
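A minimal sketch of the augmented-state idea behind an ASKF, illustrative only: the state holds the current vehicle pose plus the poses at which earlier mosaic images were taken, and an image registration against an old node becomes a relative measurement. The 2-D constant-position model, noise values and function names are assumptions, not the paper's implementation.

import numpy as np

# Augmented state: current vehicle pose (x, y) followed by the stored
# poses at which earlier mosaic images were acquired.
x = np.zeros(2)            # initial vehicle pose
P = np.eye(2) * 0.01       # initial covariance

def augment(x, P):
    """Append a copy of the current pose to the state (new mosaic node)."""
    n = x.size
    x_aug = np.concatenate([x, x[:2]])
    J = np.vstack([np.eye(n), np.eye(2, n)])   # copies the first two components
    return x_aug, J @ P @ J.T

def predict(x, P, u, Q):
    """Prediction driven by an odometry increment u (assumed motion model)."""
    x = x.copy()
    x[:2] += u
    P = P.copy()
    P[:2, :2] += Q
    return x, P

def update_relative(x, P, i, z, R):
    """Measurement: image registration gives the offset between the current
    pose and stored pose i, i.e. z = x_current - x_i + noise."""
    n = x.size
    H = np.zeros((2, n))
    H[:, :2] = np.eye(2)
    H[:, 2 + 2*i: 4 + 2*i] = -np.eye(2)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(n) - K @ H) @ P
    return x, P

# Tiny demo: drive, drop a mosaic node, drive again, then re-register against it.
x, P = predict(x, P, u=np.array([1.0, 0.0]), Q=np.eye(2) * 0.05)
x, P = augment(x, P)
x, P = predict(x, P, u=np.array([1.0, 0.0]), Q=np.eye(2) * 0.05)
x, P = update_relative(x, P, i=0, z=np.array([1.2, 0.0]), R=np.eye(2) * 0.01)
print(np.round(x, 3))      # current pose pulled toward the registration result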
Abstract:
Recent developments in automation, robotics and artificial intelligence have pushed these technologies into wider use, and nowadays driverless transport systems are already state of the art on certain legs of transportation. This has given the maritime industry a push to join the advancement. The case organisation, the AAWA initiative, is a joint industry-academia research consortium with the objective of developing readiness for the first commercial autonomous solutions, exploiting state-of-the-art autonomous and remote technology. The initiative develops both autonomous and remote operation technology for navigation, machinery, and all on-board operating systems. The aim of this study is to develop a model with which to estimate and forecast the operational costs, and thus enable comparisons between manned and autonomous cargo vessels. The building process of the model is also described and discussed. Furthermore, the model aims to track and identify the critical success factors of the chosen ship design, and to enable monitoring and tracking of the incurred operational costs as the life cycle of the vessel progresses. The study adopts the constructive research approach, as the aim is to develop a construct to meet the needs of a case organisation. Data has been collected through discussions and meetings with consortium members and researchers, as well as through written and internal communications material. The model itself is built using activity-based life cycle costing, which enables both realistic cost estimation and forecasting, as well as the identification of critical success factors, thanks to the process orientation adopted from activity-based costing and the statistical nature of Monte Carlo simulation techniques. As the model was able to meet the multiple aims set for it, and the case organisation was satisfied with it, it can be argued that activity-based life cycle costing is a suitable method for cost estimation and forecasting in the case of autonomous cargo vessels. The model was able to perform the cost analysis and forecasting, as well as to trace the critical success factors. Later on, it also enabled, albeit hypothetically, monitoring and tracking of the incurred costs. By collecting costs this way, it was argued that the activity-based LCC model is able to facilitate learning from, and continuous improvement of, the autonomous vessel. As with the building process of the model, an individual approach was chosen, while still using the implementation and model-building steps presented in the existing literature. This was due to two factors: the nature of the model and, perhaps even more importantly, the nature of the case organisation. Furthermore, the loosely organised network structure means that knowing the case organisation and its aims is of great importance when conducting constructive research.
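The thesis's cost model itself is not reproduced in the abstract; as a rough illustration of how activity-based costing combines with Monte Carlo simulation, one can sample activity cost drivers from distributions and aggregate them over the life cycle. Every activity name, distribution and number below is a placeholder, not the AAWA model.

import numpy as np

rng = np.random.default_rng(0)
N = 10_000          # Monte Carlo draws
YEARS = 25          # assumed vessel life cycle (placeholder)

# Placeholder activities: (annual volume driver, unit-cost distribution)
activities = {
    "remote_monitoring_hours": (8760, lambda: rng.normal(40, 5, N)),
    "port_calls":              (60,   lambda: rng.triangular(800, 1000, 1500, N)),
    "maintenance_events":      (12,   lambda: rng.lognormal(8.5, 0.4, N)),
}

annual_cost = sum(volume * unit_cost() for volume, unit_cost in activities.values())
life_cycle_cost = annual_cost * YEARS   # ignoring discounting for brevity

print("P5 / P50 / P95 life-cycle cost:",
      np.percentile(life_cycle_cost, [5, 50, 95]).round(0))

The percentile spread is what distinguishes this kind of statistical estimate from a single deterministic cost figure, and the per-activity structure is what lets critical cost drivers be traced.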
Abstract:
Miniaturization of power generators to the MEMS scale, based on the hydrogen-air fuel cell, is the object of this research. The micro fuel cell approach has been adopted for its advantages of both high power and energy densities. On-board hydrogen production/storage and an efficient control scheme that facilitates integration with a fuel cell membrane electrode assembly (MEA) are key elements for micro energy conversion. Millimeter-scale reactors (ca. 10 µL) have been developed for hydrogen production through hydrolysis of CaH2 and LiAlH4, yielding volumetric energy densities of the order of 200 Whr/L. Passive microfluidic control schemes have been implemented to facilitate delivery and self-regulation while eliminating bulky auxiliaries that run on parasitic power. One technique uses surface tension to pump water in a microchannel for hydrolysis and is self-regulated, based on load, by back pressure from accumulated hydrogen acting on a gas-liquid microvalve. This control scheme improves the uniformity of power delivery during long periods of lower power demand, with fast switching to the mass-transport regime on the order of seconds, providing a peak power density of up to 391.85 W/L. Another method takes advantage of water recovery by backward transport through the MEA of water vapor generated in the cathode half-cell reaction. This regulation-free scheme increases the available reactor volume to yield an energy density of 313 Whr/L, and provides a peak power density of 104 W/L. Prototype devices have been tested for a range of duty periods from 2 to 24 hours, with multiple switching of power demand in order to establish operation across multiple regimes. Issues identified as critical to the realization of the integrated power MEMS include the effects of water transport and byproduct hydrate swelling on hydrogen production in the micro reactor, and of ambient relative humidity on fuel cell performance.
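As a back-of-envelope check on the figures quoted above (a reactor volume of about 10 µL, roughly 200 Whr/L stored, and a peak power density of 391.85 W/L), a short calculation gives the absolute energy and power per reactor; everything other than the quoted numbers is plain arithmetic.

# Figures taken from the abstract; the rest is arithmetic.
reactor_volume_L = 10e-6          # ca. 10 microlitre reactor
energy_density_Wh_per_L = 200     # the "200 Whr/L" order-of-magnitude value
peak_power_density_W_per_L = 391.85

stored_energy_Wh = energy_density_Wh_per_L * reactor_volume_L      # ~2e-3 Wh
peak_power_W = peak_power_density_W_per_L * reactor_volume_L       # ~3.9e-3 W
hours_at_peak = stored_energy_Wh / peak_power_W                    # ~0.5 h

print(f"stored energy  ~{stored_energy_Wh * 1e3:.1f} mWh")
print(f"peak power     ~{peak_power_W * 1e3:.2f} mW")
print(f"runtime @ peak ~{hours_at_peak:.2f} h")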
Abstract:
This thesis reports on an investigation of the feasibility and usefulness of incorporating dynamic management facilities for managing sensed context data in a distributed context-aware mobile application. The investigation focuses on reducing the work required to integrate new sensed context streams into an existing context-aware architecture. Current architectures require integration work for each new stream and each new context that is encountered. This mode of operation is acceptable for current fixed architectures. However, as systems become more mobile, the number of discoverable streams increases. Without the ability to discover and use these new streams, the functionality of any given device will be limited to the streams that it knows how to decode. The integration of new streams requires that the sensed context data be understood by the current application. If the new source provides data of a type that an application currently requires, then the new source should be connected to the application without any prior knowledge of the new source. If the type is similar and can be converted, then this stream too should be appropriated by the application. Such applications are based on portable devices (phones, PDAs) for semi-autonomous services that use data from sensors connected to the devices, plus data exchanged with other such devices and remote servers. They must handle input from a variety of sensors, refining the data locally and managing its communication from the device in volatile and unpredictable network conditions. The choice to focus on locally connected sensory input allows for the introduction of privacy and access controls; this local control can determine how the information is communicated to others. This investigation focuses on the evaluation of three approaches to sensor data management. The first system is characterised by its static management based on prepended metadata. This was the reference system. Developed for a mobile system, the data was processed based on the attached metadata, and the code that performed the processing was static. The second system was developed to move away from this static processing and introduce greater freedom in handling the data stream, which resulted in a heavyweight approach. The approach focused on pushing the processing of the data into a number of networked nodes rather than the monolithic design of the previous system. By creating a separate communication channel for the metadata, it is possible to be more flexible with the amount and type of data transmitted. The final system pulled the benefits of the other systems together: by providing a small management class that loads a separate handler based on the incoming data, dynamism was maximised whilst maintaining ease of code understanding. The three systems were then compared to highlight their ability to dynamically manage new sensed context. The evaluation took two approaches: the first is a quantitative analysis of the code to understand the relative complexity of the three systems, carried out by evaluating what changes to the system were involved for a new context. The second approach takes a qualitative view of the work required by the software engineer to reconfigure the systems to provide support for a new data stream. The evaluation highlights the scenarios in which each of the three systems is most suited. There is always a trade-off in the development of a system, and the three approaches highlight this fact.
A statically bound system can be quick to develop but may need to be completely rewritten if the requirements move too far. Alternatively, a highly dynamic system may be able to cope with new requirements, but the developer time to create such a system may be greater than that needed to create several simpler systems.
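The abstract does not show the management class itself; a minimal sketch of the third approach (a small manager that selects a handler from the type declared in a stream's metadata) might look as follows. All class, method and type names are illustrative, not the thesis's code.

from typing import Callable, Dict

class StreamManager:
    """Small management class: picks a handler for each sensed-context stream
    based on the type declared in the stream's metadata (illustrative only)."""

    def __init__(self):
        self._handlers: Dict[str, Callable[[bytes], object]] = {}

    def register(self, stream_type: str, handler: Callable[[bytes], object]) -> None:
        self._handlers[stream_type] = handler

    def handle(self, metadata: dict, payload: bytes):
        stream_type = metadata.get("type", "unknown")
        handler = self._handlers.get(stream_type)
        if handler is None:
            raise LookupError(f"no handler registered for stream type {stream_type!r}")
        return handler(payload)

# New stream types are integrated by registering a handler,
# not by rewriting the manager.
manager = StreamManager()
manager.register("gps", lambda raw: raw.decode().split(","))
print(manager.handle({"type": "gps"}, b"55.9533,-3.1883"))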
Abstract:
As unmanned autonomous vehicles (UAVs) are being widely utilized in military and civil applications, concerns are growing about mission safety and how to integrate the different phases of mission design. One important barrier to a cost-effective and timely safety certification process for UAVs is the lack of a systematic approach for bridging the gap between understanding high-level commander/pilot intent and implementing that intent through low-level UAV behaviors. In this thesis we demonstrate an entire systems design process for a representative UAV mission, beginning from an operational concept and requirements and ending with a simulation framework for segments of the mission design, such as path planning and decision making in collision avoidance. We divided this complex system into sub-systems: path planning, collision detection and collision avoidance. We then developed software modules for each sub-system.
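The abstract names the three sub-systems but gives no interfaces; a hedged sketch of how such a decomposition could be wired into a simulation loop is shown below. The class names, the straight-line "planner" and the sidestep "avoider" are placeholders, not the thesis's modules.

from dataclasses import dataclass
from typing import List, Tuple

Waypoint = Tuple[float, float]

@dataclass
class PathPlanner:
    def plan(self, start: Waypoint, goal: Waypoint) -> List[Waypoint]:
        # Placeholder: straight-line path sampled at ten points.
        return [(start[0] + (goal[0] - start[0]) * k / 9,
                 start[1] + (goal[1] - start[1]) * k / 9) for k in range(10)]

@dataclass
class CollisionDetector:
    obstacles: List[Waypoint]
    radius: float = 1.0
    def in_conflict(self, p: Waypoint) -> bool:
        return any((p[0] - o[0])**2 + (p[1] - o[1])**2 < self.radius**2
                   for o in self.obstacles)

@dataclass
class CollisionAvoider:
    def detour(self, p: Waypoint) -> Waypoint:
        # Placeholder avoidance: sidestep laterally.
        return (p[0], p[1] + 2.0)

def fly_mission(start, goal, detector, planner=PathPlanner(), avoider=CollisionAvoider()):
    flown = []
    for wp in planner.plan(start, goal):
        flown.append(avoider.detour(wp) if detector.in_conflict(wp) else wp)
    return flown

print(fly_mission((0, 0), (9, 0), CollisionDetector(obstacles=[(5, 0)])))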
Abstract:
To exploit the full potential of radio measurements of cosmic-ray air showers at MHz frequencies, a detector timing synchronization within 1 ns is needed. Large distributed radio detector arrays such as the Auger Engineering Radio Array (AERA) rely on timing via the Global Positioning System (GPS) for the synchronization of individual detector station clocks. Unfortunately, GPS timing is expected to have an accuracy no better than about 5 ns. In practice, in particular in AERA, the GPS clocks exhibit drifts on the order of tens of ns. We developed a technique to correct for the GPS drifts, and an independent method is used to cross-check that we indeed reach nanosecond-scale timing accuracy with this correction. First, we operate a "beacon transmitter" which emits defined sine waves that are detected by AERA antennas and recorded within the physics data. The relative phasing of these sine waves can be used to correct for GPS clock drifts. In addition to this, we observe radio pulses emitted by commercial airplanes, whose positions we determine in real time from Automatic Dependent Surveillance Broadcasts intercepted with a software-defined radio. From the known source location and the measured arrival times of the pulses we determine relative timing offsets between radio detector stations. We demonstrate with a combined analysis that the two methods give a consistent timing calibration with an accuracy of 2 ns or better. Consequently, the beacon method alone can be used in the future to continuously determine and correct for GPS clock drifts in each individual event measured by AERA.
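A rough numerical illustration of the beacon principle (not the AERA analysis code): the phase of a known beacon frequency is extracted from each recorded trace, and a change in that phase converts directly into a clock correction. The sampling rate and beacon frequency below are assumed values, and the method is only unambiguous within one beacon period.

import numpy as np

FS = 200e6           # sampling rate in Hz (assumption)
F_BEACON = 58.887e6  # beacon sine frequency in Hz (placeholder)

def beacon_phase(trace: np.ndarray) -> float:
    """Phase of the beacon line in a sampled trace, via projection onto exp(-i 2 pi f t)."""
    t = np.arange(trace.size) / FS
    return np.angle(np.sum(trace * np.exp(-2j * np.pi * F_BEACON * t)))

def clock_drift_ns(trace_ref: np.ndarray, trace_now: np.ndarray) -> float:
    """Clock drift between two recordings of one station, from the beacon phase change.
    Positive result = trace_now delayed relative to trace_ref."""
    dphi = np.angle(np.exp(1j * (beacon_phase(trace_now) - beacon_phase(trace_ref))))
    return -dphi / (2 * np.pi * F_BEACON) * 1e9   # ns

# Synthetic check: a trace delayed by 3 ns should show ~3 ns of drift.
t = np.arange(2048) / FS
ref = np.sin(2 * np.pi * F_BEACON * t)
delayed = np.sin(2 * np.pi * F_BEACON * (t - 3e-9))
print(round(clock_drift_ns(ref, delayed), 2))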
Abstract:
The integration of distributed and ubiquitous intelligence has emerged over the last years as the mainspring of transformative advancements in mobile radio networks. As we approach the era of “mobile for intelligence”, next-generation wireless networks are poised to undergo significant and profound changes. Notably, the overarching challenge that lies ahead is the development and implementation of integrated communication and learning mechanisms that will enable the realization of autonomous mobile radio networks. The ultimate pursuit of eliminating the human-in-the-loop constitutes an ambitious challenge, necessitating a meticulous delineation of the fundamental characteristics that artificial intelligence (AI) should possess to effectively achieve this objective. This challenge represents a paradigm shift in the design, deployment, and operation of wireless networks, where conventional, static configurations give way to dynamic, adaptive, and AI-native systems capable of self-optimization, self-sustainment, and learning. This thesis aims to provide a comprehensive exploration of the fundamental principles and practical approaches required to create autonomous mobile radio networks that seamlessly integrate communication and learning components. The first chapter of this thesis introduces the notion of Predictive Quality of Service (PQoS) and adaptive optimization, and expands upon the challenge of achieving adaptable, reliable, and robust network performance in dynamic and ever-changing environments. The subsequent chapter delves into the revolutionary role of generative AI in shaping next-generation autonomous networks. This chapter emphasizes achieving trustworthy uncertainty-aware generation processes with the use of approximate Bayesian methods and aims to show how generative AI can improve generalization while reducing data communication costs. Finally, the thesis turns to distributed learning over wireless networks. Distributed learning and its variants, including multi-agent reinforcement learning systems and federated learning, have the potential to meet the scalability demands of modern data-driven applications, enabling efficient and collaborative model training across dynamic scenarios while ensuring data privacy and reducing communication overhead.
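As one concrete face of the distributed-learning discussion above, a bare-bones federated averaging (FedAvg) aggregation step is sketched here: locally trained weights are combined in proportion to each client's data size. This is a generic illustration, not the thesis's algorithms; the client weights and sizes are placeholders.

import numpy as np

def federated_average(client_weights, client_sizes):
    """FedAvg aggregation: weighted mean of client model parameters.

    client_weights: list of 1-D parameter vectors, one per client
    client_sizes:   number of local samples per client
    """
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack(client_weights)                    # (n_clients, n_params)
    return (sizes[:, None] * stacked).sum(axis=0) / sizes.sum()

# Three clients with different data volumes contribute to one global model.
w_global = federated_average(
    [np.array([0.9, 1.1]), np.array([1.2, 0.8]), np.array([1.0, 1.0])],
    client_sizes=[200, 50, 50],
)
print(w_global)   # pulled toward the client holding the most data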
Abstract:
Nowadays, the development of intelligent and autonomous vehicles to perform agricultural activities is essential to improve the quantity and quality of agricultural production. Moreover, automation techniques make it possible to reduce the usage of agrochemicals and minimize pollution. The University of Bologna is developing an innovative system for orchard management called ORTO (Orchard Rapid Transportation System). This system involves an autonomous electric vehicle capable of performing agricultural activities inside an orchard structure. The vehicle is equipped with an implement capable of performing different tasks. The purpose of this thesis project is to control the vehicle and the implement to perform inter-row grass mowing. This kind of task requires synchronized motion between the traction motors and the implement motors. A motion control system has been developed to generate trajectories and manage their synchronization. Two main trajectory types have been used: a fifth-order polynomial trajectory and a trapezoidal trajectory. These two kinds of trajectories have been chosen in order to perform uniform grass mowing, paying particular attention to the constraints of the system. To synchronize the motions, the electronic-cam approach has been adopted: a master profile has been generated and all the trajectories have been linked to the master motion. Moreover, a safety system has been developed. The aim of this system is firstly to improve safety during the motion; furthermore, it allows obstacle detection and avoidance to be managed. Using particular techniques, obstacles can be detected and a recovery action can be performed to overcome the problem. Once the measured force reaches the predefined force threshold, the vehicle immediately stops its motion. The whole project has been developed using Matlab and Simulink. Eventually, the software has been translated into C code and executed on the TI LaunchPad XL board.
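A compact sketch of one of the two trajectory types mentioned above, a fifth-order (quintic) polynomial with prescribed position, velocity and acceleration at both ends; the boundary values in the example are placeholders, not the ORTO parameters.

import numpy as np

def quintic_coefficients(q0, qf, T, v0=0.0, vf=0.0, a0=0.0, af=0.0):
    """Coefficients of q(t) = c0 + c1 t + ... + c5 t^5 on [0, T] matching
    position, velocity and acceleration at both ends."""
    A = np.array([
        [1, 0,   0,      0,       0,        0],
        [0, 1,   0,      0,       0,        0],
        [0, 0,   2,      0,       0,        0],
        [1, T,   T**2,   T**3,    T**4,     T**5],
        [0, 1, 2*T,    3*T**2,  4*T**3,   5*T**4],
        [0, 0,   2,    6*T,    12*T**2,  20*T**3],
    ], dtype=float)
    b = np.array([q0, v0, a0, qf, vf, af], dtype=float)
    return np.linalg.solve(A, b)

# Example: move an axis by 1.0 m in 2.0 s, starting and ending at rest.
c = quintic_coefficients(0.0, 1.0, 2.0)
t = np.linspace(0.0, 2.0, 5)
q = np.polyval(c[::-1], t)        # polyval expects the highest power first
print(np.round(q, 3))             # smooth S-shaped profile from 0 to 1

In an electronic-cam arrangement, a profile like q(t) would be evaluated against the master position rather than against time, so that slave axes stay synchronized to the master motion.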
Abstract:
The advantages of Industry 4.0 have revolutionised manufacturing. But what does "Industry 4.0" mean? It is the new frontier of manufacturing, based on principles that follow the advances in IT systems and technology. Its pillars are therefore vertical and horizontal integration, digitalisation and automation. Industry 4.0 involves many areas of the supply chain, from information flows to logistics. Within it, and within intralogistics, the priority is to develop material handling systems that are flexible, automated and highly responsive. The ideal model is autonomous, with vehicles belonging to a fleet whose decisions are decentralised thanks to high connectivity and to their ability to collect data and exchange it rapidly in the company cloud. None of this would be achievable by relying on a common AGV transport system, which is too rigid and centralised. This thesis focuses on a more flexible and intelligent type of material handler: Autonomous Mobile Robots. Thanks to their artificial intelligence and to the digitalisation of information exchanges, they interact with the environment to avoid obstacles and compute the optimal path. The scenarios in the working environment cause time losses along the robots' routes, and it is these losses that we need to study. In the thesis, the advantages brought by AMRs, such as their decentralised decision making, are introduced through a literature review, and attention then turns to the analysis of each working scenario. Experience in the Logistics 4.0 Lab at NTNU was fundamental for physically recreating some of the scenarios. In addition, the AnyLogic software is used to reproduce and simulate all the relevant scenarios. The simulation results are finally used to build a model that associates a time loss with each relevant scenario through a function. Data analysis software such as Minitab and MatLab is used for this purpose.
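The time-loss function itself is not given in the abstract; a minimal sketch of the fitting step, using ordinary least squares on invented placeholder numbers (not results from the AnyLogic model), could look like this.

import numpy as np

# Placeholder simulation output: for each scenario run, an obstacle-density
# measure and the observed time loss in seconds (illustrative values only).
obstacle_density = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5])
time_loss_s      = np.array([0.0, 3.1, 6.3, 9.8, 12.9, 16.4])

# Fit a simple linear time-loss function loss(d) = a*d + b by least squares.
a, b = np.polyfit(obstacle_density, time_loss_s, deg=1)
print(f"time loss ~ {a:.2f} s per unit density + {b:.2f} s offset")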
Abstract:
The growth of e-commerce and the increase in population density in city centres are factors that drive up the demand for goods within urban areas. Attention to the environmental impact of these operations is a focal point attracting ever greater interest. The aim of this study is to define current and potential solutions in urban logistics, with particular interest in last-mile deliveries. One proposed solution concerns the possibility of exploiting the spare capacity in the flows generated by crowds to move goods, a practice known as crowd-shipping. The idea consists of saturating vehicles already present in the urban network in order to reduce the number of commercial vehicles and minimise the associated negative externalities. In support of this initiative, the analysis considers self-driving electric vehicles. The thesis centres on the definition of a mathematical optimisation model that aims to design an efficient logistics and distribution network for last-mile deliveries and to minimise the distances travelled by the actors involved. The proposed problem is a variant of the Vehicle Routing Problem with time windows and multiple depots. The problem is NP-hard and therefore computationally complex, so the analysis must define a heuristic approach that yields a sub-optimal solution within a reasonable computation time for larger instances. The analysis was developed in the Eclipse development environment, using the Cplex solver, in Java. To assess its validity, a final phase is planned in which the outputs of the exact model and of the heuristic are compared on characteristic parameters. It must nevertheless be considered that the use of cyber-physical systems in support of logistics cannot do without a constant eye on progress.
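The thesis's heuristic is not described in the abstract; as a generic illustration of the kind of constructive heuristic used for routing with time windows and multiple depots, the sketch below greedily extends one route per depot with the nearest customer whose time window can still be met. The instance data and the greedy rule are placeholders, not the thesis's method.

import math

# Toy instance: all coordinates, time windows and service times are placeholders.
depots = {"D1": (0.0, 0.0), "D2": (10.0, 10.0)}
customers = {                      # name: (x, y, earliest, latest, service_time)
    "c1": (2.0, 1.0, 0.0, 20.0, 1.0),
    "c2": (8.0, 9.0, 5.0, 30.0, 1.0),
    "c3": (3.0, 4.0, 0.0, 15.0, 1.0),
    "c4": (9.0, 7.0, 10.0, 40.0, 1.0),
}

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def build_route(depot_xy, unserved):
    """Greedy construction: repeatedly visit the nearest customer whose time
    window is still reachable (the vehicle waits if it arrives early)."""
    route, pos, t = [], depot_xy, 0.0
    while True:
        feasible = []
        for name in unserved:
            x, y, e, l, s = customers[name]
            arrival = t + dist(pos, (x, y))
            if arrival <= l:                       # window can still be met
                feasible.append((dist(pos, (x, y)), name, max(arrival, e) + s, (x, y)))
        if not feasible:
            return route
        _, name, t, pos = min(feasible)
        route.append(name)
        unserved.remove(name)

unserved = set(customers)
for depot, xy in depots.items():
    print(depot, "->", build_route(xy, unserved))
print("unserved:", unserved or "none")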
Abstract:
This paper presents the theoretical background, the architecture (using the "4+1" model), and the use of AdapLib, a library for the execution of adaptive devices. The library was created with the aim of remaining faithful to adaptive device theory, while allowing easy extension to accommodate the specific details of solutions that employ this kind of device. As an example, a case study is presented in which the library was used to create a proof of concept to monitor and diagnose problems in an online news portal.
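AdapLib itself is not shown in the abstract; the sketch below only illustrates the underlying notion of an adaptive device, i.e. a rule-driven device whose rules may add or remove other rules while it executes. None of the names or the monitoring scenario below come from AdapLib.

class AdaptiveDevice:
    """Rule-driven device whose rules may carry an adaptive action that
    rewrites the rule set at run time (toy illustration, not AdapLib)."""

    def __init__(self, start_state):
        self.state = start_state
        self.rules = {}      # (state, symbol) -> (next_state, adaptive_action or None)

    def add_rule(self, state, symbol, next_state, action=None):
        self.rules[(state, symbol)] = (next_state, action)

    def remove_rule(self, state, symbol):
        self.rules.pop((state, symbol), None)

    def step(self, symbol):
        next_state, action = self.rules[(self.state, symbol)]
        if action is not None:
            action(self)     # the adaptive action edits the rule set
        self.state = next_state

# The first "error" event rewires the device so that further errors lead to "halt".
device = AdaptiveDevice("monitoring")
device.add_rule("monitoring", "ok", "monitoring")
device.add_rule("monitoring", "error", "degraded",
                action=lambda d: d.add_rule("degraded", "error", "halt"))
for event in ["ok", "error", "error"]:
    device.step(event)
print(device.state)          # -> halt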
Abstract:
The XSophe-Sophe-XeprView® computer simulation software suite enables scientists to easily determine spin Hamiltonian parameters from isotropic, randomly oriented and single-crystal continuous wave electron paramagnetic resonance (CW EPR) spectra from radicals and isolated paramagnetic metal ion centers or clusters found in metalloproteins, chemical systems and materials science. XSophe provides an X-windows graphical user interface to the Sophe programme and allows creation of multiple input files, local and remote execution of Sophe, and the display of the sophelog (output from Sophe) and of input parameters/files. Sophe is a sophisticated computer simulation software programme employing a number of innovative technologies, including the Sydney OPera HousE (SOPHE) partition and interpolation schemes, a field segmentation algorithm, the mosaic misorientation linewidth model, parallelization and spectral optimisation. In conjunction with the SOPHE partition scheme and the field segmentation algorithm, the SOPHE interpolation scheme and the mosaic misorientation linewidth model greatly increase the speed of simulations for most spin systems. Employing brute-force matrix diagonalization in the simulation of an EPR spectrum from a high-spin Cr(III) complex with the spin Hamiltonian parameters g_e = 2.00, D = 0.10 cm⁻¹, E/D = 0.25, A_x = 120.0, A_y = 120.0, A_z = 240.0 × 10⁻⁴ cm⁻¹ requires a SOPHE grid size of N = 400 (to produce a good signal-to-noise ratio) and takes 229.47 s. In contrast, the use of either the SOPHE interpolation scheme or the mosaic misorientation linewidth model requires a SOPHE grid size of only N = 18 and takes 44.08 s and 0.79 s, respectively. Results from Sophe are transferred via the Common Object Request Broker Architecture (CORBA) to XSophe and subsequently to XeprView®, where the simulated CW EPR spectra (1D and 2D) can be compared to the experimental spectra. Energy level diagrams, transition roadmaps and transition surfaces aid the interpretation of complicated randomly oriented CW EPR spectra and can be viewed with a web browser and an OpenInventor scene graph viewer.
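The spin Hamiltonian underlying such a simulation can be illustrated with a short diagonalization for the S = 3/2 Cr(III) example quoted above. This is only a bare sketch: hyperfine coupling, transition probabilities, lineshapes and the SOPHE grid are all omitted, and every numerical value other than g, D and E/D is an assumption.

import numpy as np

S = 1.5                                   # high-spin Cr(III), S = 3/2
m = np.arange(S, -S - 1, -1)              # [1.5, 0.5, -0.5, -1.5]
Sz = np.diag(m)
Sp = np.diag(np.sqrt(S * (S + 1) - m[1:] * (m[1:] + 1)), k=1)   # raising operator
Sm = Sp.T
Sx, Sy = (Sp + Sm) / 2, (Sp - Sm) / (2j)

# Zero-field and Zeeman parameters quoted above (hyperfine terms omitted).
g, D, E_over_D = 2.00, 0.10, 0.25         # D in cm^-1
E = E_over_D * D
muB = 0.46686                             # Bohr magneton in cm^-1 per tesla

def energies_cm(B_tesla, theta=0.0):
    """Eigenvalues (cm^-1) of H = muB*g*B.S + D[Sz^2 - S(S+1)/3] + E[Sx^2 - Sy^2]
    for a field of magnitude B_tesla at angle theta from z (in the xz plane)."""
    Bx, Bz = B_tesla * np.sin(theta), B_tesla * np.cos(theta)
    H = (muB * g * (Bx * Sx + Bz * Sz)
         + D * (Sz @ Sz - S * (S + 1) / 3 * np.eye(4))
         + E * (Sx @ Sx - Sy @ Sy))
    return np.linalg.eigvalsh(H)

print(np.round(energies_cm(0.35), 4))     # the four levels at an X-band-like field, B || z

A full spectrum simulation would repeat such diagonalizations over orientations (the SOPHE grid) and magnetic fields, which is exactly the cost that the interpolation scheme and the mosaic misorientation linewidth model are described as reducing.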