974 results for pacs: simulation techniques
Abstract:
Every space launch increases the overall amount of space debris. Satellites have limited awareness of nearby objects that might pose a collision hazard. Astrometric, radiometric, and thermal models for the study of space debris in low-Earth orbit have been developed. This modeling approach proposes analysis methods that provide increased Local Area Awareness for satellites in low-Earth and geostationary orbit. Local Area Awareness is defined as the ability to detect, characterize, and extract useful information regarding resident space objects as they move through the space environment surrounding a spacecraft. The study of space debris is of critical importance to all space-faring nations. Characterization efforts are proposed using long-wave infrared sensors for space-based observations of debris objects in low-Earth orbit. Long-wave infrared sensors are commercially available and do not require the target to be solar-illuminated, as the received signal depends on the object's temperature. The characterization of debris objects through passive imaging techniques allows further study of the origin, specifications, and future trajectory of debris objects. Conclusions are drawn regarding the aforementioned thermal analysis as a function of debris orbit, geometry, orientation with respect to time, and material properties. Development of a thermal model permits the characterization of debris objects based upon their received long-wave infrared signals. Information regarding the material type, size, and tumble rate of the observed debris objects is extracted. This investigation proposes the utilization of long-wave infrared radiometric models of typical debris to develop techniques for the detection and characterization of debris objects via signal analysis of unresolved imagery. Knowledge of the orbital type and semi-major axis of the observed debris object is extracted via astrometric analysis, which may aid in constraining the admissible region for the initial orbit determination process. The resultant orbital information is then fused with the radiometric characterization analysis, enabling further characterization of the observed debris object. This fused analysis, yielding orbital, material, and thermal properties, significantly increases a satellite's Local Area Awareness via an intimate understanding of the debris environment surrounding the spacecraft.
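The temperature dependence that makes unilluminated debris observable in the long-wave infrared follows directly from Planck's law. Below is a minimal sketch (not drawn from the thesis itself) that integrates blackbody spectral radiance over an assumed 8-14 µm LWIR band to show how strongly the in-band signal varies with debris temperature; the constants are standard, while the band edges and temperatures are illustrative.

```python
# Minimal sketch (not from the thesis): in-band LWIR radiance of a blackbody as
# a function of temperature, illustrating why no solar illumination is needed.
import numpy as np

H = 6.62607015e-34    # Planck constant, J s
C = 2.99792458e8      # speed of light, m / s
KB = 1.380649e-23     # Boltzmann constant, J / K

def planck_spectral_radiance(wavelength_m, temp_k):
    """Blackbody spectral radiance B(lambda, T) in W / (m^2 sr m)."""
    num = 2.0 * H * C**2 / wavelength_m**5
    return num / (np.exp(H * C / (wavelength_m * KB * temp_k)) - 1.0)

def lwir_band_radiance(temp_k, band=(8e-6, 14e-6), n=2000):
    """In-band radiance (W / m^2 / sr), integrated with a simple Riemann sum."""
    lam = np.linspace(band[0], band[1], n)
    return float(np.sum(planck_spectral_radiance(lam, temp_k)) * (lam[1] - lam[0]))

# A debris fragment cooling into eclipse (~200 K) versus heated in full sun (~350 K)
for t in (200.0, 250.0, 300.0, 350.0):
    print(f"T = {t:5.1f} K -> LWIR radiance = {lwir_band_radiance(t):7.2f} W m^-2 sr^-1")
```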
Abstract:
The application of 3D grain-based modelling techniques is investigated in both small- and large-scale 3DEC models, in order to simulate brittle fracture processes in low-porosity crystalline rock. Mesh dependency in 3D grain-based models (GBMs) is examined through a number of cases that compare Voronoi and tetrahedral grain assemblages. Various methods are used to generate the tessellations, each with its own issues and advantages. A number of comparative UCS test simulations capture the distinct failure mechanisms, strength profiles, and progressive damage development of the various Voronoi and tetrahedral GBMs. Relative calibration requirements are outlined to generate similar macro-strength and damage profiles for all the models. The results confirm a number of inherent model behaviors that arise from mesh dependency. In Voronoi models, inherent tensile failure mechanisms are produced by internal wedging and rotation of the Voronoi grains, resulting in a combined dependence on frictional and cohesive strength. In tetrahedral models, the increased kinematic freedom of the grains and an abundance of straight, connected failure pathways cause a preference for shear failure; the resulting inability to develop significant normal stresses leads to a dependence on cohesive strength. In general, Voronoi models require high relative contact tensile strength values, with lower contact stiffness and contact cohesive strength, compared to tetrahedral tessellations. Upscaling of 3D GBMs is investigated for both Voronoi and tetrahedral tessellations using a case study from AECL's Mine-by Experiment at the Underground Research Laboratory. An upscaled tetrahedral model was able to reasonably simulate damage development in the roof, forming a notch geometry, once the cohesive strength was adjusted. An upscaled Voronoi model underestimated the damage development in the roof and floor and overestimated the damage in the side-walls, which was attributed to limitations in the discretization resolution.
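As a rough illustration of the grain-generation step, the sketch below seeds a Voronoi tessellation inside a unit cube with SciPy. An actual 3DEC GBM workflow would clip boundary cells to the specimen geometry and export the grain assembly into 3DEC's own scripting environment, so this shows only the seeding idea, not the thesis workflow.

```python
# Rough sketch of the grain-seeding step only (SciPy assumed; the actual GBM
# workflow would clip boundary cells and export the grains to 3DEC).
import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(0)
n_grains = 200
seeds = rng.uniform(0.0, 1.0, size=(n_grains, 3))   # grain seeds in a unit cube

vor = Voronoi(seeds)

# Cells with no vertex at infinity are fully interior; boundary cells would be
# clipped against the specimen surface before contact/zone generation.
closed = [r for r in vor.regions if r and -1 not in r]
print(f"{len(closed)} closed grains out of {n_grains} seeds")
```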
Abstract:
Model predictive control (MPC) has often been referred to in the literature as a potential method for more efficient control of building heating systems. Though a significant performance improvement can be achieved with an MPC strategy, the complexity introduced to the commissioning of the system is often prohibitive. Models are required which can capture the thermodynamic properties of the building with sufficient accuracy for meaningful predictions to be made. Furthermore, a large number of tuning weights may need to be determined to achieve a desired performance. For MPC to become a practicable alternative, these issues must be addressed. Acknowledging the impact of the external environment and of occupant interaction on the thermal behaviour of the building, techniques were developed in this work for deriving building models from data in which large, unmeasured disturbances are present. A spatio-temporal filtering process was introduced to estimate the disturbances from measured data; these estimates were then incorporated with metaheuristic search techniques to derive high-order simulation models capable of replicating the thermal dynamics of a building. While a high-order simulation model allowed control strategies to be analysed and compared, low-order models were required within the MPC strategy itself, and the disturbance estimation techniques were adapted for use with system-identification methods to derive such models. MPC formulations were then derived to enable a more straightforward commissioning process and implemented in a validated simulation platform. A prioritised-objective strategy was developed which allows the tuning parameters typically associated with an MPC cost function to be omitted from the formulation, by separating the conflicting requirements of comfort satisfaction and energy reduction within a lexicographic framework. The improved ability of the formulation to be set up and reconfigured under faulted conditions was demonstrated.
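The lexicographic idea can be made concrete with a toy two-stage optimisation: first minimise comfort violation, then minimise energy while holding the achieved comfort level fixed, so no weighting factor between the two objectives is ever chosen. The sketch below uses cvxpy and a placeholder first-order zone model; the model structure, parameter values, and bounds are assumptions, not the identified models from the thesis.

```python
# Minimal sketch of a prioritised-objective (lexicographic) MPC step, assuming a
# first-order RC zone model; all numbers are placeholders.
import cvxpy as cp
import numpy as np

N = 24                      # horizon (h)
a, b = 0.9, 0.05            # model T[k+1] = a*T[k] + b*u[k] + (1-a)*T_out[k]
t_out = 5.0 * np.ones(N)    # outdoor temperature forecast (deg C)
t0, t_min = 18.0, 20.0      # initial zone temperature and comfort lower bound

u = cp.Variable(N, nonneg=True)        # heat input (kW)
t = cp.Variable(N + 1)                 # zone temperature trajectory
slack = cp.Variable(N, nonneg=True)    # comfort violation

model = [t[0] == t0] + [t[k+1] == a*t[k] + b*u[k] + (1-a)*t_out[k] for k in range(N)]
comfort = [t[1:] >= t_min - slack]

# Priority 1: minimise comfort violation.
p1 = cp.Problem(cp.Minimize(cp.sum(slack)), model + comfort)
p1.solve()

# Priority 2: minimise energy, holding the achieved comfort level fixed.
p2 = cp.Problem(cp.Minimize(cp.sum(u)),
                model + comfort + [cp.sum(slack) <= p1.value + 1e-6])
p2.solve()
print(f"energy: {cp.sum(u).value:.2f} kWh, total violation: {p1.value:.3f} K h")
```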
Abstract:
Developers strive to create innovative Artificial Intelligence (AI) behaviour in their games as a key selling point. Machine learning is an area of AI concerned with how applications and agents can be programmed to learn their own behaviour, without the need to manually design and implement each aspect of it. Machine learning methods have been utilised infrequently within games and are usually trained offline, before the game is released to players. In order to investigate new ways AI could be applied innovatively to games, it is wise to explore how machine learning methods could be utilised in real time as the game is played, allowing AI agents to learn directly from the player or their environment. Two machine learning methods were implemented in a simple 2D fighter test game to allow the agents to fully showcase their learned behaviour as the game is played: Q-learning and an N-gram-based system. It was found that N-grams and Q-learning could significantly benefit game developers, as they facilitate fast, realistic learning at run-time.
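For concreteness, here is a minimal sketch of the two run-time learning methods named above, in generic form; the state encoding, action set, and reward shaping of the actual 2D fighter game are assumptions.

```python
# Illustrative sketch of run-time learning in a fighter game (actions assumed).
import random
from collections import defaultdict, Counter

# --- Tabular Q-learning: update action values from (state, action, reward, next) ---
Q = defaultdict(float)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1
ACTIONS = ["punch", "kick", "block", "move"]

def q_update(s, a, r, s_next):
    best_next = max(Q[(s_next, a2)] for a2 in ACTIONS)
    Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])

def q_policy(s):
    if random.random() < EPS:                      # explore occasionally
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(s, a)])   # otherwise exploit

# --- N-grams: learn the player's action patterns as the game is played ---
ngram = defaultdict(Counter)

def observe(history, action, n=3):
    ngram[tuple(history[-(n - 1):])][action] += 1

def predict(history, n=3):
    counts = ngram[tuple(history[-(n - 1):])]
    return counts.most_common(1)[0][0] if counts else random.choice(ACTIONS)
```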
Abstract:
Myocardial fibrosis detected via delayed-enhancement magnetic resonance imaging (MRI) has been shown to be a strong indicator of ventricular tachycardia (VT) inducibility. However, little is known regarding how inducibility is affected by the details of the fibrosis extent, morphology, and border-zone configuration. The objective of this article is to systematically study the arrhythmogenic effects of fibrosis geometry and extent, specifically on VT inducibility and maintenance. We present a set of methods for constructing patient-specific computational models of human ventricles using in vivo MRI data from patients suffering from hypertension, hypercholesterolemia, and chronic myocardial infarction. Additional synthesized models with morphologically varied extents of fibrosis and gray zone (GZ) distribution were derived to study the alterations in arrhythmia induction and reentry patterns. Detailed electrophysiological simulations demonstrated that (1) VT morphology was highly dependent on the extent of fibrosis, which acts as a structural substrate; (2) reentry tended to be anchored to the fibrosis edges and showed transmural conduction of activations through narrow channels formed within the fibrosis; and (3) increasing the extent of GZ within fibrosis tended to destabilize the structural reentry sites and aggravate the VT, as compared to fibrotic regions of the same size and shape but with less or no GZ. The approach and findings represent a significant step toward patient-specific cardiac modeling as a reliable tool for VT prediction and patient management. Sensitivities to approximation nuances in the modeling of structural pathology by image-based reconstruction techniques are also discussed.
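To illustrate how a non-conducting fibrotic region channels activation, the sketch below runs a deliberately simplified 2D monodomain simulation with FitzHugh-Nagumo kinetics and a rectangular "fibrosis" block; the detailed ionic models, patient geometries, and GZ representation used in the article are far richer, so treat this purely as a didactic stand-in.

```python
# Schematic 2D monodomain simulation with FitzHugh-Nagumo kinetics; a
# non-conducting patch stands in for fibrosis. Didactic only, not the
# article's models. np.roll gives periodic boundaries, acceptable for a sketch.
import numpy as np

nx = ny = 100
dt, dx, D = 0.1, 1.0, 0.1
v = np.zeros((ny, nx))       # transmembrane potential (dimensionless)
w = np.zeros((ny, nx))       # recovery variable
cond = np.ones((ny, nx))     # conductive-tissue mask
cond[40:60, 20:80] = 0       # "fibrotic" (non-conducting) block
v[:, :5] = 1.0               # stimulate the left edge

for step in range(2000):
    lap = (np.roll(v, 1, 0) + np.roll(v, -1, 0) +
           np.roll(v, 1, 1) + np.roll(v, -1, 1) - 4 * v) / dx**2
    dv = D * lap + v * (1 - v) * (v - 0.1) - w
    dw = 0.01 * (0.5 * v - w)
    v += dt * dv * cond      # no dynamics inside the fibrotic patch
    w += dt * dw * cond

print("fraction currently depolarised:", (v > 0.5).mean())
```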
Abstract:
The objective of this thesis is the analysis and study of the various access techniques for vehicular communications, in particular the C-V2X and WAVE protocols. The simulator used to study the performance of the two protocols, LTEV2Vsim, was developed by CNR-IEIIT for the study of V2V (Vehicle-to-Vehicle) communications. The changes I made allowed me to study the I2V (Infrastructure-to-Vehicle) scenario in highway areas and, with the results obtained, to compare the two protocols under high and low vehicular density, relating the PRR (packet reception ratio) to the cell size (RAW, awareness range). The final comparison gives a full picture of the achievable performance of the two protocols and highlights the need for a protocol that can meet the minimum necessary requirements.
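A toy version of the PRR-versus-RAW evaluation might look like the following Monte Carlo sketch, with a placeholder sigmoid link model standing in for LTEV2Vsim's detailed C-V2X and WAVE physical layers; the densities, ranges, and link model are all assumptions.

```python
# Rough I2V sketch: vehicles on a highway, one roadside unit, PRR computed over
# the awareness range (RAW). The link model is a placeholder, not LTEV2Vsim's.
import numpy as np

rng = np.random.default_rng(1)
road_km, density = 4.0, 100                 # highway length (km), vehicles/km
x = rng.uniform(-road_km/2, road_km/2, int(road_km*density)) * 1000.0  # positions, m
rsu = 0.0                                   # roadside unit at the origin (I2V)

def reception_prob(d, d50):
    """Placeholder sigmoid link model: 50% reception at d50 metres."""
    return 1.0 / (1.0 + np.exp((d - d50) / 50.0))

for raw in (200.0, 400.0, 800.0):           # awareness range (RAW), m
    d = np.abs(x - rsu)
    in_range = d <= raw
    received = rng.random(x.size) < reception_prob(d, d50=500.0)
    prr = received[in_range].mean()         # packets received / packets expected
    print(f"RAW = {raw:5.0f} m -> PRR = {prr:.3f}")
```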
Abstract:
Besides increasing the share of electric and hybrid vehicles, in order to comply with more stringent environmental protection limits, in the mid-term the auto industry must improve the efficiency of the internal combustion engine and the well-to-wheel efficiency of the employed fuel. Achieving this target requires deeper knowledge of the phenomena that influence mixture formation and of the chemical reactions involving new synthetic fuel components, which is complex and time-intensive to obtain purely by experimentation. Numerical simulations therefore play an important role in this development process, but their use can be effective only if they are accurate enough to capture these variations. The most relevant models for simulating reacting mixture formation and the subsequent chemical reactions are investigated in the present work with a critical approach, in order to provide instruments for defining the most suitable approaches in the industrial context, which is limited by time constraints and budget considerations. To overcome these limitations, new methodologies have been developed that combine detailed and simplified modelling techniques for phenomena involving chemical reactions and mixture formation in non-traditional conditions (e.g. water injection, biofuels, etc.). Through extensive use of machine learning and deep learning algorithms, several applications have been revised or implemented, with the target of reducing the computing time of some traditional tasks by orders of magnitude. Finally, a complete workflow leveraging these new models has been defined and used to evaluate the effects of different surrogate formulations of the same experimental fuel on a proof-of-concept GDI engine model.
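The surrogate idea behind such machine-learning speed-ups can be sketched as follows: sample an operating envelope, label it with an expensive solver, and train a fast regressor in its place. The "solver" here is a synthetic Arrhenius-like placeholder, not the detailed kinetics of the thesis.

```python
# Sketch of surrogate modelling: replace an expensive chemistry calculation with
# a regressor trained on its outputs. The target function is a placeholder.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(2)

def slow_kinetics_solver(temp_k, pressure_bar, phi):
    """Placeholder for a detailed-mechanism ignition-delay calculation (s)."""
    return 1e-6 * np.exp(15000.0 / temp_k) / (pressure_bar * np.sqrt(phi))

# Sample the operating envelope and label it with the "expensive" model
X = np.column_stack([rng.uniform(700, 1200, 2000),    # temperature, K
                     rng.uniform(10, 50, 2000),       # pressure, bar
                     rng.uniform(0.5, 1.5, 2000)])    # equivalence ratio
y = np.log(slow_kinetics_solver(X[:, 0], X[:, 1], X[:, 2]))

surrogate = GradientBoostingRegressor().fit(X, y)     # fast to evaluate once trained
query = np.array([[950.0, 30.0, 1.0]])
print("predicted ignition delay:",
      float(np.exp(surrogate.predict(query)[0])), "s")
```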
Abstract:
The topic of this thesis is the design and implementation of mathematical models and control system algorithms for rotary-wing unmanned aerial vehicles to be used in cooperative scenarios. The use of rotorcraft has many attractive advantages, since these vehicles can take off and land vertically, hover, and move backward and laterally. Rotary-wing aircraft missions require precise control characteristics due to their unstable dynamics and heavy cross-couplings. Flight testing is the most accurate way to evaluate flying qualities and to test control systems, but it may be very expensive and/or not feasible at the early design and prototyping stages. A good compromise is a preliminary assessment performed by means of simulations, together with a reduced flight-testing campaign. Consequently, an analytical framework is an important asset for simulations and control algorithm design. In this work, mathematical models for various helicopter configurations are implemented. Different flight control techniques for helicopters are presented with their theoretical background and tested via simulations and experimental flight tests on a small-scale unmanned helicopter. The same platform is also used in a cooperative scenario with a rover. Control strategies, algorithms, and their implementation to perform missions are presented for two main scenarios. One of the main contributions of this thesis is a control system consisting of a classical PID baseline controller augmented with an L1 adaptive contribution (see the sketch below). In addition, a complete analytical framework and a study of the dynamics and stability of a synch-rotor are provided. Finally, cooperative control strategies are implemented for two main scenarios involving a small-scale unmanned helicopter and a rover.
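A schematic of such a baseline-plus-adaptive structure is given below; the adaptation law is a deliberately simplified stand-in for a full L1 design (state predictor, projection-based adaptation, and low-pass-filtered cancellation), so the gains and signals are illustrative only.

```python
# Schematic baseline PID with an adaptive augmentation term. The adaptation law
# is a simplified stand-in for a full L1 design, not the thesis' controller.
class PIDWithAdaptiveAugmentation:
    def __init__(self, kp, ki, kd, gamma=2.0, tau=0.5):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.gamma = gamma          # adaptation gain
        self.tau = tau              # low-pass time constant on the adaptive term
        self.integral = self.prev_err = 0.0
        self.sigma_hat = self.u_ad = 0.0

    def update(self, ref, meas, pred_err, dt):
        # Baseline PID on the tracking error
        err = ref - meas
        self.integral += err * dt
        deriv = (err - self.prev_err) / dt
        self.prev_err = err
        u_pid = self.kp * err + self.ki * self.integral + self.kd * deriv

        # Adaptive term: estimate the matched uncertainty from the predictor
        # error, then low-pass filter its cancellation (the L1 idea).
        self.sigma_hat += self.gamma * pred_err * dt
        self.u_ad += (dt / self.tau) * (-self.sigma_hat - self.u_ad)
        return u_pid + self.u_ad
```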
Abstract:
The navigation of deep-space spacecraft requires accurate measurement of the probe's state and attitude with respect to a body whose ephemerides may not be known with good accuracy. The heliocentric state of the spacecraft is estimated through radiometric techniques (ranging, Doppler, and Delta-DOR), while optical observables can be introduced to reduce the uncertainty in the relative position and attitude with respect to the target body. In this study, we analyze how simulated optical observables affect the estimation of parameters in an orbit determination problem, considering the case of ESA's Hera mission to the binary asteroid system composed of Didymos and Dimorphos. To this end, a shape model and a photometric function are used to create synthetic onboard camera images. Then, using a stereophotoclinometry technique on some of the simulated images, we create a database of maplets that describe the 3D geometry of the surface around a set of landmarks. Matching the maplets against the simulated images provides the optical observables, expressed as pixel coordinates in the camera frame, which are fed to an orbit determination filter to estimate a certain number of solve-for parameters. The noise introduced into the optical observables by the image processing can be quantified using the quality of the residuals as a metric, which is used to fine-tune the maplet-matching parameters. In particular, the best results are obtained when using small maplets with high correlation coefficients and occupation factors.
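The maplet-matching step can be illustrated with a brute-force normalised cross-correlation search, as in the sketch below; the real pipeline renders each maplet under the predicted geometry and illumination before correlating, so the function name, threshold, and test data here are assumptions.

```python
# Sketch of template matching by normalised cross-correlation: find a small
# "maplet" in an image and accept the match only above a correlation threshold.
import numpy as np

def match_maplet(image, template, min_corr=0.8):
    th, tw = template.shape
    t = (template - template.mean()) / (template.std() + 1e-12)
    best_corr, best_px = -1.0, None
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            win = image[r:r+th, c:c+tw]
            w = (win - win.mean()) / (win.std() + 1e-12)
            corr = float((w * t).mean())       # Pearson correlation of the window
            if corr > best_corr:
                best_corr, best_px = corr, (r + th // 2, c + tw // 2)
    return (best_px, best_corr) if best_corr >= min_corr else (None, best_corr)

rng = np.random.default_rng(3)
img = rng.random((64, 64))
tmpl = img[20:30, 40:50].copy()            # a "maplet" cut from the image itself
px, corr = match_maplet(img, tmpl)
print(f"match at pixel {px}, correlation {corr:.2f}")   # expect (25, 45), ~1.00
```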
Abstract:
The radiofrequency spectrum is allocated in such a way that bands are fixed to certain users, called licensed users, and cannot be used by unlicensed users even when they are idle. This inefficient use of spectrum leads to spectral holes. To overcome the problem of spectral holes and increase the efficiency of spectrum use, Cognitive Radio (CR) was employed; all simulation work was done in MATLAB. The performance of different spectrum sensing techniques, such as matched-filter-based sensing and energy detection, was analyzed as a function of various factors, including the number of input samples, the signal-to-noise ratio (SNR), QPSK and BPSK modulation, and different fading channels, in order to identify the best channels and systems for spectrum sensing and to improve the probability of detection. The study found that an averaging filter performs better than an IIR filter. As the number of inputs and the SNR increased, the probability of detection also improved. The Rayleigh fading channel performed better than the Rician and Nakagami fading channels.
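A minimal Monte Carlo sketch of the energy-detection case is shown below: the threshold is set from the noise-only energy distribution for a target false-alarm probability, and the probability of detection is then estimated for faded BPSK at several SNR values; the window length, trial counts, and target Pfa are placeholders.

```python
# Energy-detector sketch: BPSK through a Rayleigh channel in unit-variance
# noise; detect when window energy exceeds a threshold set for a target Pfa.
import numpy as np

rng = np.random.default_rng(4)
N = 1000                                   # samples per sensing window

def detect_prob(snr_db, trials=2000, pfa=0.01):
    snr = 10.0 ** (snr_db / 10.0)
    # Threshold from the empirical noise-only energy distribution
    noise_energy = np.sort([np.sum(rng.normal(0, 1, N)**2) for _ in range(trials)])
    thresh = noise_energy[int((1 - pfa) * trials)]
    hits = 0
    for _ in range(trials):
        h = rng.rayleigh(scale=np.sqrt(0.5))                # channel gain, E[h^2] = 1
        s = h * np.sqrt(snr) * rng.choice([-1.0, 1.0], N)   # faded BPSK symbols
        energy = np.sum((s + rng.normal(0, 1, N))**2)
        hits += energy > thresh
    return hits / trials

for snr_db in (-15, -10, -5, 0):
    print(f"SNR = {snr_db:4d} dB -> Pd = {detect_prob(snr_db):.3f}")
```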
Abstract:
In this thesis, the study and simulation of two advanced sensorless speed control techniques for a surface permanent-magnet synchronous motor (PMSM) are presented. The aim is to implement a sensorless control algorithm for a submarine auxiliary propulsion system. This experimental activity is the result of a project collaboration with L3Harris Calzoni, a leading company in A&D systems for naval handling in the military field. A Simulink model of the whole electric drive was developed. Given the satisfactory simulation results, the sensorless control system was then implemented in C code for the STM32 environment. Finally, several tests were carried out on a real brushless machine, with the motor connected to a mechanical load to reproduce the real scenario of the final application. All the experimental results were recorded through a graphical interface software developed at Calzoni.
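One classical family of sensorless techniques for a surface PMSM recovers the rotor angle from the back-EMF reconstructed in the stationary frame; whether this matches the two techniques studied in the thesis is not stated, so the sketch below is only a generic illustration with placeholder machine parameters.

```python
# Generic back-EMF-based sensorless estimation sketch (placeholder parameters).
# In the stationary alpha-beta frame: e = v - R*i - L*di/dt, theta = atan2(-e_a, e_b).
import numpy as np

R, L = 0.5, 1e-3                     # stator resistance (ohm) and inductance (H)
dt, psi, omega = 1e-4, 0.05, 100.0   # step (s), PM flux (Wb), electrical speed (rad/s)

theta, i_ab = 0.0, np.zeros(2)
for _ in range(5000):
    theta += omega * dt
    e = psi * omega * np.array([-np.sin(theta), np.cos(theta)])      # true back-EMF
    v = 10.0 * np.array([np.cos(theta + 0.1), np.sin(theta + 0.1)])  # applied voltage
    i_prev = i_ab.copy()
    i_ab = i_ab + dt * (v - R * i_ab - e) / L    # stator current dynamics (Euler)
    # Estimator side: reconstruct the back-EMF from measured voltage and current
    e_hat = v - R * i_prev - L * (i_ab - i_prev) / dt
    theta_hat = np.arctan2(-e_hat[0], e_hat[1])

err = (theta_hat - theta + np.pi) % (2 * np.pi) - np.pi
print(f"angle estimation error: {np.degrees(err):.2f} deg")
# Noise-free and exactly discretised here, so the error is ~0; real drives need
# filtering and speed/position observers on top of this raw estimate.
```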
Abstract:
The purpose of this thesis work is the study and creation of a harness modelling system. The model needs to faithfully simulate the physical behaviour of the harness, without any instability or incorrect movements. Since there are various simulation engines that attempt to model wiring systems, this thesis work focused on the creation and testing of a 3D environment with wiring and other objects using the PyChrono simulation engine. Fine-tuning of the simulation parameters was performed during testing to achieve the most stable and correct simulation possible, but the tests showed the intrinsic limits of the engine regarding collision detection between the various parts of the cables, whereas collisions between cables and other physical objects, such as the pavement and walls, are well managed by the simulator. Finally, since the main purpose of the model is to train Artificial Intelligence through Reinforcement Learning techniques, we designed the general structure of the learning environment using the OpenAI Gym APIs, defining its basic functions and an initial framework (a skeleton is sketched below).
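A skeleton of such a Gym environment might look like the following; the observation/action dimensions, the reward term, and the PyChrono calls hinted at in the comments are placeholders for whatever the harness model actually exposes.

```python
# Skeleton of an RL environment in the classic OpenAI Gym API; the PyChrono
# coupling is only indicated in comments, and all sizes are placeholders.
import gym
import numpy as np
from gym import spaces

class HarnessEnv(gym.Env):
    """RL environment wrapping a (hypothetical) PyChrono harness simulation."""

    def __init__(self):
        super().__init__()
        # e.g. gripper displacement in 3D; cable node positions as the observation
        self.action_space = spaces.Box(low=-1.0, high=1.0, shape=(3,), dtype=np.float32)
        self.observation_space = spaces.Box(low=-np.inf, high=np.inf,
                                            shape=(30,), dtype=np.float32)
        self.state = np.zeros(30, dtype=np.float32)

    def reset(self):
        # would rebuild the PyChrono system and return the initial cable state
        self.state = np.zeros(30, dtype=np.float32)
        return self.state

    def step(self, action):
        # would apply the action to the gripper, advance the PyChrono system by
        # one time step, and read back the cable node positions
        reward = -float(np.linalg.norm(action))   # placeholder shaping term
        done = False
        return self.state, reward, done, {}
```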
Abstract:
The aim of this investigation was to compare the skeletal stability of three different rigid fixation methods after mandibular advancement. Fifty-five class II malocclusion patients treated with bilateral sagittal split ramus osteotomy and mandibular advancement were selected for this retrospective study. Group 1 (n = 17) had miniplates with monocortical screws, Group 2 (n = 16) had bicortical screws, and Group 3 (n = 22) had the osteotomy fixed by means of the hybrid technique. Cephalograms were taken preoperatively, 1 week postoperatively, and 6 months after the orthognathic surgery. Linear and angular changes of the cephalometric landmarks of the chin region were measured at each period, and the changes at each cephalometric landmark were determined for the intervening time gaps. Postoperative changes in mandibular shape were analyzed to determine the stability of the fixation methods. There was minimal difference in the relapse of the mandibular advancement among the three groups, and statistical analysis showed no significant difference in postoperative stability. However, a positive correlation between the amount of advancement and the amount of postoperative relapse was demonstrated by linear multiple regression (p < 0.05). It can be concluded that all three techniques can be used to obtain stable postoperative results in mandibular advancement after 6 months.
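The reported advancement-relapse association corresponds to regressing relapse on advancement; a minimal sketch with SciPy (on placeholder arrays, not the study data, and using simple rather than multiple regression) is:

```python
# Sketch of the advancement-vs-relapse association as a simple linear
# regression; the arrays are placeholders, not the study's measurements.
import numpy as np
from scipy import stats

advancement_mm = np.array([4.0, 5.5, 6.0, 7.2, 8.0, 9.5, 10.0, 11.3])  # placeholder
relapse_mm = np.array([0.3, 0.4, 0.5, 0.7, 0.8, 1.1, 1.2, 1.4])        # placeholder

res = stats.linregress(advancement_mm, relapse_mm)
print(f"slope = {res.slope:.3f} mm relapse per mm advancement, "
      f"r = {res.rvalue:.2f}, p = {res.pvalue:.4f}")
```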
Abstract:
Quantification of dermal exposure to pesticides in rural workers, used in risk assessment, can be performed with different techniques, such as patches or whole-body evaluation. However, the wide variety of methods can jeopardize the process by producing disparate results, depending on the principles underlying sample collection. A critical review was thus performed on the main techniques for quantifying dermal exposure, calling attention to this issue and to the need to establish a single methodology for the quantification of dermal exposure in rural workers. Such harmonization of the different techniques should help achieve safer and healthier working conditions. Techniques that can provide reliable exposure data are an essential first step towards avoiding harm to workers' health.
Abstract:
The Centers for High Cost Medication (Centros de Medicação de Alto Custo, CEDMAC) of the Health Department of São Paulo were instituted through a project in partnership with the Clinical Hospital of the Faculty of Medicine, USP, sponsored by the Foundation for Research Support of the State of São Paulo (Fundação de Amparo à Pesquisa do Estado de São Paulo, FAPESP), aimed at forming a statewide network for the comprehensive care of patients referred for the use of immunobiological agents in rheumatological diseases. The CEDMAC of the Hospital de Clínicas, Universidade Estadual de Campinas (HC-Unicamp), implemented by the Division of Rheumatology, Faculty of Medical Sciences, identified the need to standardize the practices of its multidisciplinary team, given the specificity of the care involved, and recognized the importance of describing its operational and technical processes in manual format. The aim of this study is to present the methodology applied to the elaboration of the CEDMAC/HC-Unicamp Manual as an institutional tool, with the goal of offering the best quality of assistance and administration. In the methodology for preparing manuals at HC-Unicamp since 2008, the premise has been to obtain a document that is participatory and multidisciplinary, focused on work processes integrated with institutional rules, with objective and didactic descriptions, in a standardized format and with electronic dissemination. The CEDMAC/HC-Unicamp Manual was elaborated in 10 months, with the involvement of the entire multidisciplinary team, and comprises 19 chapters on work processes and techniques, in addition to those concerning the organizational structure and its annexes. Published in the electronic portal of HC Manuals in July 2012 as an e-Book (ISBN 978-85-63274-17-5), the manual has been a valuable instrument for guiding professionals in healthcare, teaching, and research activities.