2 results for Consistent term structure models
in AMS Tesi di Laurea - Alm@DL - Università di Bologna
Abstract:
A recent integral-field spectroscopic (IFS) survey, the MASSIVE survey (Ma et al. 2014), observed the 116 most massive (MK < −25.3 mag, stellar mass M∗ > 10^11.6 M⊙) early-type galaxies (ETGs) within 108 Mpc, out to radii as large as 40 kpc, corresponding to ∼ 2 − 3 effective radii (Re). One of the major findings of the MASSIVE survey is that the galaxy sample is split nearly equally among three groups showing different velocity dispersion profiles σ(R) outside a radius of ∼ 5 kpc: falling, flat, and rising with radius. The purpose of this thesis is to model the kinematic profiles of six ETGs included in the MASSIVE survey and representative of the three observed σ(R) shapes, with the aim of investigating their dynamical structure. Models for the chosen galaxies are built using the numerical code JASMINE (Posacki, Pellegrini, and Ciotti 2013). The code produces models of axisymmetric galaxies, based on the solution of the Jeans equations for a multicomponent gravitational potential (supermassive black hole, stars, and dark matter halo). By requiring good agreement between the kinematics obtained from the Jeans equations and the σ and rotation velocity V observed by MASSIVE (Veale et al. 2016, 2018), I derived constraints on the dark matter distribution and the orbital anisotropy. This work suggests a trend of the dark matter amount and distribution with the shape of the outer velocity dispersion profile: the models of galaxies with flat or rising velocity dispersion profiles show higher dark matter fractions fDM within both 1 Re and 5 Re. Orbital anisotropy alone cannot account for the different observed trends of σ(R), and its effect is minor compared with variations of the mass profile. Galaxies with similar stellar mass M∗ that show different velocity dispersion profiles (from falling to rising) are successfully modelled by varying the halo mass Mh.
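For context, axisymmetric Jeans models of this kind are built from the stellar density ν and the total gravitational potential Φ. In the common two-integral, semi-isotropic form (the standard textbook form, not necessarily the exact system solved by JASMINE), the equations read:

```latex
\frac{\partial (\nu \sigma^2)}{\partial z} = -\,\nu \frac{\partial \Phi}{\partial z},
\qquad
\frac{\partial (\nu \sigma^2)}{\partial R}
+ \nu\,\frac{\sigma^2 - \overline{v_\varphi^2}}{R}
= -\,\nu \frac{\partial \Phi}{\partial R},
```

where σ = σ_R = σ_z, \overline{v_\varphi^2} is the mean-square azimuthal velocity, and Φ = Φ_BH + Φ_* + Φ_DM is the combined potential of the black hole, the stars, and the dark matter halo.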
Abstract:
In the collective imagination a robot is a human-like machine, like the androids of science fiction. However, the robots you will encounter most frequently are machines that do work that is too dangerous, boring, or onerous for humans. Most of the robots in the world are of this type; they can be found in the automotive, medical, manufacturing, and space industries. A robot, then, is a system that contains sensors, control systems, manipulators, power supplies, and software, all working together to perform a task. The development and use of such systems is an active area of research, and one of the main problems is the development of skills for interacting with the surrounding environment, which include the ability to grasp objects. To perform this task the robot needs to sense the environment and acquire information about the object, namely the physical attributes that may influence a grasp. Humans solve this grasping problem easily thanks to their past experience, which is why many researchers approach it from a machine learning perspective, finding a grasp for an object using information about already known objects. But humans select the best grasp from a vast repertoire considering not only the physical attributes of the object but also the effect they want to obtain. This is why, in our case, the study of robot manipulation focuses on grasping and on integrating symbolic tasks with data gathered through sensors. The learning model is based on a Bayesian network that encodes the statistical dependencies between the data collected by the sensors and the symbolic task. This data representation has several advantages: it takes into account the uncertainty of the real world, allowing the system to deal with sensor noise; it encodes notions of causality; and it provides a unified network for learning.
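A minimal sketch of the idea: a Bayesian network linking a symbolic task to a grasp choice through an object feature, queried by enumeration. All variable names, states, and probabilities below are illustrative assumptions, not values from the thesis.

```python
# Illustrative CPTs (made-up numbers): prior over an object feature,
# and the grasp distribution conditioned on task and feature.
P_size = {"small": 0.4, "large": 0.6}  # prior over object size
P_grasp = {                            # P(grasp | task, size)
    ("pour", "small"):     {"top": 0.2, "side": 0.8},
    ("pour", "large"):     {"top": 0.1, "side": 0.9},
    ("handover", "small"): {"top": 0.7, "side": 0.3},
    ("handover", "large"): {"top": 0.6, "side": 0.4},
}

def grasp_posterior(task):
    """P(grasp | task), marginalising over the unobserved object size."""
    post = {"top": 0.0, "side": 0.0}
    for size, p_size in P_size.items():
        for grasp, p_grasp in P_grasp[(task, size)].items():
            post[grasp] += p_size * p_grasp
    return post

print(grasp_posterior("pour"))  # the side grasp dominates for pouring
```

The same enumeration generalises to any evidence pattern; real implementations would learn the conditional tables from the sensor data rather than hard-coding them.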
Since the network is currently implemented by hand, based on human expert knowledge, it is very interesting to implement an automated method to learn its structure: as more tasks and object features are introduced in the future, a complex network design based only on human expert knowledge can become unreliable. Since structure learning algorithms present some weaknesses, the goal of this thesis is to analyze the real data used in the network modeled by the human expert, implement a feasible structure learning approach, and compare the results with the network designed by the expert in order to possibly enhance it.
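Score-based structure learning of the kind discussed here can be sketched in a few lines: given samples of two variables, compare the BIC score of the "independent" structure against the structure with an edge between them, and keep the better one. The data and variable names below are illustrative assumptions, not the thesis data.

```python
import math
from collections import Counter

# Synthetic (feature, task) samples with a strong dependency (illustrative).
data = [("small", "pour")] * 30 + [("small", "handover")] * 10 \
     + [("large", "pour")] * 5  + [("large", "handover")] * 35

def loglik_indep(samples):
    """Log-likelihood when feature and task are modelled as independent."""
    n = len(samples)
    cf = Counter(f for f, _ in samples)
    ct = Counter(t for _, t in samples)
    return sum(c * math.log(c / n) for c in cf.values()) \
         + sum(c * math.log(c / n) for c in ct.values())

def loglik_edge(samples):
    """Log-likelihood for the structure feature -> task."""
    n = len(samples)
    cf = Counter(f for f, _ in samples)
    cft = Counter(samples)
    return sum(c * math.log(c / n) for c in cf.values()) \
         + sum(c * math.log(c / cf[f]) for (f, t), c in cft.items())

def bic(loglik, n_params, n):
    """BIC score: fit minus a complexity penalty on free parameters."""
    return loglik - 0.5 * n_params * math.log(n)

n = len(data)
score_indep = bic(loglik_indep(data), n_params=2, n=n)  # one free param each
score_edge  = bic(loglik_edge(data),  n_params=3, n=n)  # 1 + 2 free params
print(score_edge > score_indep)  # the edge wins when the dependency is strong
```

A full structure learner would search over many candidate graphs (e.g. by hill climbing over edge additions and removals) using exactly this kind of penalised score, which is where the weaknesses mentioned above (local optima, sensitivity to sample size) come from.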