13 results for Hardware and Architecture
in AMS Tesi di Laurea - Alm@DL - Università di Bologna
Abstract:
The present work concerns the improvement and maintenance of the School of Engineering and Architecture in Via Terracini 28 (Bologna), with the aim of maximizing operational efficiency while minimizing environmental impact and costs. To carry out this work, the LEED certification has been used. LEED (Leadership in Energy and Environmental Design) is a certification system for buildings, created in the United States by the U.S. Green Building Council (USGBC).
Abstract:
Augmented reality has been growing extensively over the years, in all aspects and across multiple fields. The aim of this paper is to present a comprehensive study of augmented reality (AR) hardware and its applications, from early developments to possible future trends. The research focuses in particular on the last 11 years (2012-2022), for which I systematically reviewed 30 research papers per year to get a clear picture of trends in AR. A total of 330 publications were reviewed and grouped according to their application. The review's main contribution is to show the entire landscape of AR research and to provide a broad view of how it has evolved. The history and specifications of various AR glasses are also presented in detail. In the penultimate chapter I explain my research methodology, following my analysis from the past to the present along with my thoughts for the future. To conclude the study, in the final chapter I make some statements about a possible future with AR, VR and XR (extended reality).
Abstract:
Communication and coordination are two key aspects in open distributed agent systems, both being responsible for the integrity of the system's behaviour. An infrastructure capable of handling these issues, like TuCSoN, should be able to exploit the modern technologies and tools provided by fast-moving software engineering contexts. This thesis aims to demonstrate the ability of the TuCSoN infrastructure to cope with the new possibilities, in hardware and software, offered by mobile technology. The scenarios we are going to configure are related to the distributed nature of multi-agent systems, where an agent may be located and run directly on a mobile device. We deal with the new frontiers of mobile technology concerning smartphones running Google's Android operating system. The analysis and deployment of such a distributed agent-based system must first face quality and quantity considerations about the available resources. The engineering issue at the base of our research is to run TuCSoN within the reduced memory and computing capability of a smartphone, without loss of functionality, efficiency or integrity for the infrastructure. The thesis work is organized on two fronts simultaneously: the former is the rationalization of the available hardware and software resources; the latter, totally orthogonal, is the adaptation and optimization of the TuCSoN architecture for an ad-hoc client-side release.
Abstract:
In the last decade, the mechanical characterization of bone segments has been seen as fundamental to understanding how physiological loads are distributed over the bone in everyday life, and the resulting structural deformations. This characterization makes it possible to obtain the main load directions and, consequently, to observe the arrangement of the bone's structural lamellae, in order to recreate a prosthesis using artificial materials that behave naturally. This thesis presents a modular system for the mechanical characterization of bone segments in vitro, with particular attention to vertebrae, the current object of study and research in the lab where I did my thesis work. The system is able to acquire and process all the appropriately conditioned signals of interest for the test, through a dedicated hardware and software architecture, with high speed and high reliability. The aim of my thesis is to create a system that can be used as a versatile tool for experimentation and innovation in future tests of the mechanical characterization of biological components, allowing a quantitative and qualitative assessment of the deformation under analysis, regardless of the anatomical region of interest.
Abstract:
Every year, thousands of surgical treatments are performed in order to repair or, where possible, completely replace organs or tissues affected by degenerative diseases. Patients with these kinds of illnesses wait a long time for a donor who could replace, in a short time, the damaged organ or tissue. The lack of biological alternatives to conventional surgical treatments such as autografts, allografts and xenografts led researchers from different areas to collaborate in finding innovative solutions. This research gave rise to a new discipline able to merge molecular biology, biomaterials, engineering, biomechanics and, recently, design and architecture knowledge. This discipline is named Tissue Engineering (TE) and it represents a step forward towards substitutive or regenerative medicine. One of the major challenges of TE is to design and develop, using a biomimetic approach, an artificial 3D anatomical scaffold suitable for the adhesion of cells that are able to proliferate and differentiate in response to the biological and biophysical stimuli offered by the specific tissue to be replaced. Nowadays, powerful instruments allow ever more accurate and well-defined analyses on patients who need more precise diagnoses and treatments. Starting from patient-specific information provided by CT (Computed Tomography), micro-CT and MRI (Magnetic Resonance Imaging), an image-based approach can be used to reconstruct the site to be replaced. With the aid of recent Additive Manufacturing techniques, which allow printing three-dimensional objects with sub-millimetric precision, it is now possible to exercise almost complete control over the parametric characteristics of the scaffold: this is the way to achieve correct cellular regeneration. In this work, we focus on a branch of TE known as Bone TE, whose main subject is bone.
Bone TE combines the osteoconductive and morphological aspects of the scaffold, whose main properties are pore diameter, structural porosity and interconnectivity. Realizing the ideal values of these parameters is the main goal of this work: here we create a simple and interactive biomimetic design process, based on 3D CAD modeling and generative algorithms, that provides a way to control the main properties and to create a structure morphologically similar to cancellous bone. Two different typologies of scaffold are compared: the first is based on Triply Periodic Minimal Surfaces (T.P.M.S.), whose basic crystalline geometries are nowadays used for Bone TE scaffolding; the second is based on Voronoi diagrams, more often used in the design of decorations and jewellery for their capacity to decompose and tessellate a volumetric space with a heterogeneous spatial distribution (frequent in nature). In this work, we show how to manipulate the main properties (pore diameter, structural porosity and interconnectivity) of the TE-oriented scaffold design through the implementation of generative algorithms: "bringing nature back to nature".
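As an illustrative aside (not the thesis's own pipeline), the T.P.M.S. approach can be sketched in a few lines: the gyroid, a classic triply periodic minimal surface, is commonly approximated as the zero level set of an implicit trigonometric field, and shifting the iso-level trades strut thickness for porosity. All parameter values below are made up for illustration.

```python
# Illustrative sketch: voxel sampling of the standard implicit approximation
# of the gyroid T.P.M.S. used for Bone TE scaffolds. The solid/void boundary
# is a level set of the field; raising the iso-level t shrinks the solid
# phase, so porosity (void fraction) can be tuned directly.
import numpy as np

def gyroid(x, y, z):
    """Implicit gyroid field: the surface is the zero level set."""
    return np.sin(x) * np.cos(y) + np.sin(y) * np.cos(z) + np.sin(z) * np.cos(x)

n = 64                                        # voxels per edge (illustrative)
g = np.linspace(0, 2 * np.pi, n)              # one unit cell of the lattice
x, y, z = np.meshgrid(g, g, g, indexing="ij")
field = gyroid(x, y, z)

for t in (0.0, 0.4, 0.8):                     # iso-level controls porosity
    porosity = np.mean(field > t)             # void fraction of the voxel grid
    print(f"iso-level {t}: porosity ≈ {porosity:.2f}")
```

In an actual design workflow the level set would then be meshed (e.g. with a marching-cubes step) before Additive Manufacturing; the sketch stops at the voxel field and its estimated porosity.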
Abstract:
Industrial companies, particularly those with induction motors and gearboxes as integral components of their systems, are using Condition Monitoring (CM) systems more and more frequently in order to discover the need for maintenance in advance, since traditional maintenance only performs tasks once a failure has been identified. Utilizing a CM system is essential to boost productivity and minimize long-term failures that result in financial loss. The more exact and practical the CM system, the better the data analysis, which contributes to a more precise maintenance forecast. This thesis project is a cooperation with PEI Vibration Monitoring s.r.l. to design and construct a low-cost vibrational condition monitoring system that checks the health of induction motors and gearboxes automatically. Moreover, according to the company's request, such a system should have specifications comparable to the NI 9234, one of the company's standard Data Acquisition (DAQ) boards, but at a significantly lower price. Additionally, PEI VM has supplied all hardware and electronic components. The proposed CM system is capable of high-precision autonomous monitoring of induction motors and gearboxes, and it consists of a Raspberry Pi 3B and an MCC 172 DAQ board.
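The abstract does not detail the analysis chain, but the core of vibrational condition monitoring can be sketched as acquisition followed by spectral analysis: sample the accelerometer, transform to the frequency domain, and look for fault-related components. The sketch below uses a synthetic signal and NumPy's FFT; the 51.2 kS/s rate is typical of the NI 9234 / MCC 172 class of boards, and all signal frequencies are made up for illustration.

```python
# Minimal sketch of spectral fault detection (not the company's code):
# sample an accelerometer-like signal, take its magnitude spectrum, and
# locate the dominant high-frequency component, e.g. a gear-mesh line.
import numpy as np

fs = 51_200                            # Hz; sample-rate class of NI 9234 / MCC 172
t = np.arange(0, 1.0, 1 / fs)          # one second of data
# Synthetic vibration: 50 Hz shaft rotation plus a 1.2 kHz fault component.
signal = np.sin(2 * np.pi * 50 * t) + 0.3 * np.sin(2 * np.pi * 1200 * t)

spectrum = np.abs(np.fft.rfft(signal)) / len(t)   # one-sided magnitude spectrum
freqs = np.fft.rfftfreq(len(t), 1 / fs)

# Report the strongest component above the shaft-frequency band.
band = freqs > 100
peak = freqs[band][np.argmax(spectrum[band])]
print(f"dominant high-frequency component: {peak:.0f} Hz")
```

A real CM system would track such peaks over time and raise a maintenance alert when their amplitude trends upward.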
Abstract:
This thesis proposes a novel technology in the field of swarm robotics that allows a swarm of robots to sense a virtual environment through virtual sensors. Virtual sensing is a desirable and helpful technology in swarm robotics research, because it allows researchers to perform, efficiently and quickly, experiments that would otherwise be more expensive and time consuming, or even impossible. In particular, we envision two useful applications for virtual sensing technology. On the one hand, it is possible to prototype and foresee the effects of a new sensor on a robot swarm before producing it. On the other hand, thanks to this technology it is possible to study the behaviour of robots operating in environments that are not easily reproducible inside a lab, for safety reasons or simply because they are physically infeasible. The use of virtual sensing technology for sensor prototyping aims to foresee the behaviour of the swarm enhanced with new or more powerful sensors, without producing the hardware. Sensor prototyping can be used to tune a new sensor or to compare the performance of alternative types of sensors. These prototyping experiments can be performed through the presented tool, which allows researchers to rapidly develop and test software virtual sensors of different typologies and quality, emulating the behaviour of several real hardware sensors. By investigating which sensors are worth investing in, a researcher can minimize the sensors' production cost while achieving a given swarm performance. Through augmented reality, it is possible to test the performance of the swarm in a desired virtual environment that cannot be set up in the lab for physical, logistic or economic reasons. The virtual environment is sensed by the robots through properly designed virtual sensors. Virtual sensing technology thus allows a researcher to quickly carry out real-robot experiments in challenging scenarios without all the required hardware and environment.
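The virtual-sensor idea can be made concrete with a small sketch. The class and parameter names below are hypothetical, not the presented tool's API: a robot queries a sensor interface that is backed not by hardware but by a model of the virtual environment, optionally degraded with noise to emulate a cheaper physical sensor.

```python
# Hypothetical sketch of a virtual sensor (names are illustrative): readings
# come from a virtual arena model rather than hardware, and noise_std lets a
# researcher emulate sensors of different quality before producing them.
import random

class VirtualRangeSensor:
    """Emulates a range sensor by reading distances from a virtual arena."""

    def __init__(self, obstacles, noise_std=0.0, max_range=2.0):
        self.obstacles = obstacles        # virtual obstacle positions (x, y)
        self.noise_std = noise_std        # emulated sensor quality
        self.max_range = max_range        # emulated hardware range limit

    def read(self, robot_pos):
        """Distance to the nearest virtual obstacle, noisy and clipped."""
        dists = [((ox - robot_pos[0]) ** 2 + (oy - robot_pos[1]) ** 2) ** 0.5
                 for ox, oy in self.obstacles]
        d = min(dists) + random.gauss(0.0, self.noise_std)
        return min(max(d, 0.0), self.max_range)

# Comparing candidate sensor qualities before buying hardware: the same
# virtual environment, two sensors differing only in noise.
arena = [(1.0, 0.0), (0.0, 3.0)]
cheap = VirtualRangeSensor(arena, noise_std=0.10)
precise = VirtualRangeSensor(arena, noise_std=0.01)
print(cheap.read((0.0, 0.0)), precise.read((0.0, 0.0)))
```

Running the same swarm controller against both sensor variants is exactly the kind of sensor-comparison experiment the abstract describes, without producing either sensor.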
Abstract:
The main purpose of ultrarelativistic heavy-ion collisions is the investigation of the QGP. The ALICE experiment at CERN has been specifically designed to study heavy-ion collisions at centre-of-mass energies up to 5.5 TeV per nucleon pair. Extended particle identification capability is one of the main characteristics of the ALICE experiment. In the intermediate momentum region (up to 2.5 GeV/c for pi/K and 4 GeV/c for K/p), charged particles are identified in ALICE by the Time of Flight (TOF) detector. The ALICE-TOF system is a large-area detector based on Multi-gap Resistive Plate Chambers (MRPCs), built with high efficiency, fast response and an intrinsic time resolution better than 40 ps. This thesis work, developed with the ALICE-TOF Bologna group, is part of the effort to adapt the read-out of the detector to the new requirements after the LHC Long Shutdown 2. Tests on the feasibility of a new read-out scheme for the TOF detector have been performed, since achieving a continuous read-out for the TOF detector would not be affordable if one considers replacing the TRM cards, for both hardware and budget reasons. At present, the read-out of the TOF is limited to 250 kHz, i.e. it can collect only up to a quarter of the maximum collision rate potentially achievable for pp interactions. In this Master's degree thesis work, I discuss a different read-out system for the ALICE-TOF detector that allows all the hits to be registered at the 1 MHz interaction rate foreseen for pp interactions after 2020, using the electronics currently available. Such a solution would allow the ALICE-TOF detector to collect all the hits generated by pp collisions at a 1 MHz interaction rate, an amount four times larger than that initially expected at such frequencies with the triggered read-out system operated at 250 kHz for LHC Run 3.
The obtained results confirm that the proposed read-out scheme is a viable option for the ALICE-TOF detector. The results also highlight that it would be advantageous for the ALICE-TOF group to implement an online monitoring system for noisy channels, allowing their deactivation in real time.
Abstract:
The 5th generation of mobile networking introduces the concept of network slicing: the network is "sliced" horizontally, and each slice complies with different requirements in terms of network parameters such as bandwidth and latency. This technology is built on logical rather than physical resources and relies on the virtual network as the main concept for obtaining a logical resource. Network Function Virtualisation (NFV) provides the concept of logical resources for a virtual network function, enabling the concept of the virtual network; it relies on Software Defined Networking (SDN) as the main technology for realizing the virtual network as a resource, and it also defines the virtual network infrastructure with all the components needed to meet network slicing requirements. SDN itself uses cloud computing technology to realize the virtual network infrastructure, and NFV likewise uses virtual computing resources to enable the deployment of virtual network functions instead of custom hardware and software for each network function. The key to network slicing is the differentiation of slices in terms of Quality of Service (QoS) parameters, which relies on the possibility of QoS management in a cloud computing environment. QoS in cloud computing denotes the levels of performance, reliability and availability offered. QoS is fundamental for cloud users, who expect providers to deliver the advertised quality characteristics, and for cloud providers, who need to find the right trade-off between the QoS levels they can offer and operational costs. While QoS properties received constant attention before the advent of cloud computing, the performance heterogeneity and resource isolation mechanisms of cloud platforms have significantly complicated QoS analysis, deployment, prediction, and assurance. This is prompting several researchers to investigate automated QoS management methods that can leverage the high programmability of hardware and software resources in the cloud.
Abstract:
In cardiovascular disease, the definition and detection of the ECG parameters related to repolarization dynamics in post-MI patients is still a crucial unmet need. In addition, the use of a 3D sensor in implantable medical devices would be a crucial means of assessing or predicting Heart Failure status, but the inclusion of such a feature is limited by hardware and firmware constraints. The aim of this thesis is the definition of a reliable surrogate of the 500 Hz ECG signal to reach the aforementioned objective. To evaluate the loss of reliability that sampling-frequency reduction causes in delineation performance, the signals have been consecutively downsampled by factors of 2, 4 and 8, thus obtaining ECG signals sampled at 250, 125 and 62.5 Hz, respectively. The final goal is to assess the feasibility of detecting the fiducial points, in order to translate them into meaningful clinical parameters for Heart Failure prediction, such as T-wave interval heterogeneity and the variability of the areas under the T waves. An experimental setting for data collection on healthy volunteers has been set up at the Bakken Research Center in Maastricht. A 16-channel ambulatory system provided by TMSI has recorded the standard 12-lead ECG, two 3D accelerometers and a respiration sensor. The collection platform has been set up with TMSI's proprietary software Polybench; the data analysis of these signals has been performed with Matlab. The main results of this study show that the 125 Hz sampling rate is a good candidate for a reliable detection of fiducial points. T-wave intervals proved to be consistently stable, even at 62.5 Hz. Further studies would be needed to provide a better comparison between sampling at 250 Hz and 125 Hz for the areas under the T waves.
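The consecutive down-sampling scheme described above is simple to sketch. The code below is illustrative only, with a synthetic stand-in for the 500 Hz record: keeping every 2nd, 4th and 8th sample yields the 250, 125 and 62.5 Hz surrogates whose delineation quality is then compared.

```python
# Illustrative sketch of down-sampling a 500 Hz record by factors 2, 4 and 8
# (the record here is a synthetic stand-in, not real ECG data).
import numpy as np

fs = 500.0                                                    # Hz, original rate
ecg = np.sin(2 * np.pi * 1.0 * np.arange(10 * int(fs)) / fs)  # 10 s stand-in record

surrogates = {fs / k: ecg[::k] for k in (2, 4, 8)}            # 250, 125, 62.5 Hz
for rate, sig in surrogates.items():
    print(f"{rate:5.1f} Hz: {len(sig)} samples")
```

A caveat on this bare slicing: in a production pipeline one would low-pass filter before decimating to avoid aliasing (as, for example, `scipy.signal.decimate` does), which matters for the higher-frequency content of real ECG.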
Abstract:
The disintegration of stone materials used in sculpture and architecture due to the crystallization of salts can irreparably damage artistic objects and historic buildings. A number of phosphonates and carboxylates were tested here as potential crystallization modifiers for sodium carbonate crystallization. The phases precipitated during crystallization induced either by cooling or by evaporation tests were nahcolite (NaHCO3), natron (Na2CO3∙10H2O) and thermonatrite (Na2CO3∙H2O), identified using X-ray diffraction. Using the thermodynamic code PHREEQC and the calculation of the nucleation rate, it was demonstrated that nahcolite had to be the first phase formed in both tests. The formation of the other phases depended on the experimental conditions under which the two tests were conducted. Nahcolite nucleation is strongly inhibited in the presence of sodium citrate tribasic dihydrate (CA), polyacrylic acid 2100 MW (PA) and etidronic acid (HEDP), when the additives are dosed at appropriate concentrations and the pH of the resulting solution is about 8. Electrostatic attraction between the deprotonated organic additives and the cations present in solution appears to be the principal mechanism of additive-nahcolite interaction. Salt weathering tests, together with mercury intrusion porosimetry, made it possible to quantify the damage induced by such salts. FESEM observation of salts grown both on calcite single crystals and in limestone blocks subjected to salt crystallization tests made it possible to identify the effect of these additives on crystal growth and development. The results show that PA seems to be the best inhibitor, while CA and HEDP, which show similar behaviour, are slightly less effective. The use of such effective crystallization inhibitors may lead to more efficient preventive conservation of ornamental stone affected by crystallization damage due to the formation of sodium carbonate crystals.
Abstract:
The project aims to test the Cone Beam Breast Computed Tomography (CBBCT) technique using a standard digital mammography system. The work is focused on the definition of a protocol of quality measurements for the pre-clinical evaluation of the machine. The thesis is developed in two parts. The first is specifically concerned with the methods used to define the image quality and dosimetry aspects specific to digital mammography devices. A complete characterization of the system has been performed according to the applicable IEC standards to assure the performance of the equipment and define the quality levels. Due to the lack of a quality control protocol dedicated to the CBBCT mammography scanner, a new equivalent test procedure has been proposed. The second part is focused on the evaluation, through quantitative and visual analyses, of the feasibility of the CBBCT exam in the hardware and software conditions currently proposed by IMS Giotto. The prototype in fact departs from the technical choices of competing companies and was developed for a different intended use. The main difference with respect to existing breast CT scanners is the possibility of performing on the same system not only the CBBCT scan but also all the mammographic techniques. In this thesis, we aim to assess whether, in the current setup and considering a dosimetric range very close to that used in the clinic, the tests produce results that can be considered acceptable, or at least indicative of the feasibility of the entire project from a commercial point of view. For this purpose, the final reconstructed images, obtained with two previously developed software packages, are analyzed.
Abstract:
The IoT is growing more and more each year and is becoming so ubiquitous that it includes heterogeneous devices with different hardware and software constraints, leading to a highly fragmented ecosystem. Devices use different protocols with different paradigms and are not compatible with each other; some devices use request-response protocols like HTTP or CoAP, while others use publish-subscribe protocols like MQTT. Integration in IoT is still an open research topic. When handling and testing IoT sensors there are some common tasks that people may be interested in: reading and visualizing the current value of the sensor; performing aggregations on a set of values in order to compute statistical features; saving the history of the data to a time-series database; forecasting future values in order to react in advance to a future condition; bridging the protocol of the sensor in order to integrate the device with other tools. In this work we show the working implementation of a low-code, flow-based tool prototype which supports the common operations mentioned above, based on Node-RED and Python. Since this system is just a prototype, it has some issues and limitations that are discussed in this work.
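One of the common tasks listed above, aggregating sensor values into statistical features, can be sketched in a few lines. The code below is illustrative, not the prototype's own implementation: it shows what a flow node might compute over a window of readings before forwarding the result to a time-series database.

```python
# Minimal sketch (not the prototype's code) of sensor-value aggregation:
# summarize a window of numeric readings into statistical features, the
# kind of payload a flow node would forward to a time-series database.
import statistics

def aggregate(window):
    """Compute summary features over a window of numeric sensor readings."""
    return {
        "min": min(window),
        "max": max(window),
        "mean": statistics.fmean(window),
        "stdev": statistics.stdev(window),
    }

readings = [21.0, 21.4, 20.9, 22.1, 21.7]     # e.g. temperature samples
features = aggregate(readings)
print(features)
```

In a flow-based tool such as Node-RED, a function node of this shape would sit between the protocol-specific input node (HTTP, CoAP or MQTT) and the storage or visualization nodes, which is exactly the kind of glue the prototype aims to make low-code.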