285 results for MPC
Abstract:
This work addresses the problem of robust model predictive control (MPC) of systems with model uncertainty. The case of zone control of multivariable stable systems with multiple time delays is considered. The usual approach to dealing with this kind of problem is the inclusion of a non-linear cost constraint in the control problem. The control action is then obtained at each sampling time as the solution to a non-linear programming (NLP) problem that, for high-order systems, can be computationally expensive. Here, the robust MPC problem is formulated as a linear matrix inequality (LMI) problem that can be solved in real time with a fraction of the computational effort. The proposed approach is compared with the conventional robust MPC and tested through the simulation of a reactor system from the process industry.
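As an illustration of the kind of LMI formulation the abstract refers to (not the authors' own formulation), the sketch below sets up a classic Kothare-style min-max robust MPC for a polytopic uncertainty set using CVXPY. All system matrices, weights and the current state are invented for the example.

    import numpy as np
    import cvxpy as cp
    from scipy.linalg import sqrtm

    # Hypothetical polytopic model: two vertices of a 2-state, 1-input system
    A1 = np.array([[1.0, 0.1], [0.0, 0.9]])
    A2 = np.array([[1.0, 0.1], [0.0, 1.1]])
    B = np.array([[0.0], [0.1]])
    vertices = [(A1, B), (A2, B)]
    n, m = B.shape

    Qh = np.real(sqrtm(np.eye(n)))        # sqrt of state weight (identity here)
    Rh = np.real(sqrtm(0.1 * np.eye(m)))  # sqrt of input weight
    x0 = np.array([[1.0], [0.5]])         # current state (made up)

    gamma = cp.Variable()                 # upper bound on worst-case cost
    Q = cp.Variable((n, n), symmetric=True)
    Y = cp.Variable((m, n))               # feedback parameterized as F = Y Q^-1

    cons = [Q >> 1e-6 * np.eye(n),
            cp.bmat([[np.eye(1), x0.T], [x0, Q]]) >> 0]  # x0' Q^-1 x0 <= 1
    for Ai, Bi in vertices:               # one LMI per uncertainty vertex
        AQBY = Ai @ Q + Bi @ Y
        cons.append(cp.bmat([
            [Q,       AQBY.T,            Q @ Qh,            Y.T @ Rh],
            [AQBY,    Q,                 np.zeros((n, n)),  np.zeros((n, m))],
            [Qh @ Q,  np.zeros((n, n)),  gamma * np.eye(n), np.zeros((n, m))],
            [Rh @ Y,  np.zeros((m, n)),  np.zeros((m, n)),  gamma * np.eye(m)],
        ]) >> 0)

    cp.Problem(cp.Minimize(gamma), cons).solve(solver=cp.SCS)
    F = Y.value @ np.linalg.inv(Q.value)  # robust state feedback u = F x
    print("cost bound:", gamma.value, "gain:", F)

At each sampling time the LMIs are re-solved with the measured state and only the first move of the resulting feedback law is applied, which is what makes the one-shot semidefinite solve cheaper than an NLP over the whole horizon.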
Abstract:
The Hubble constant, H_0, sets the scale of the size and age of the Universe, and its determination by independent methods is still worth investigating. In this article, by using the Sunyaev-Zel'dovich effect and X-ray surface brightness data from 38 galaxy clusters observed by Bonamente et al. (Astrophys J 647:25, 2006), we obtain a new estimate of H_0 in the context of a flat Lambda-CDM model. There is a degeneracy with the mass density parameter (Omega_m) which is broken by applying a joint analysis involving the baryon acoustic oscillations (BAO) as given by the Sloan Digital Sky Survey. This happens because the BAO signature does not depend on H_0. Our basic finding is that a joint analysis involving these tests yields H_0 = 76.5 (+3.35, -3.33) km/s/Mpc and Omega_m = 0.27 (+0.03, -0.02). Since the hypothesis of spherical geometry assumed by Bonamente et al. is questionable, we have also compared the above results to a recent work where a sample of galaxy clusters described by an elliptical profile was used in the analysis.
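Schematically, the joint statistic the abstract describes combines the cluster angular diameter distances (which scale as 1/H_0) with an H_0-independent BAO constraint. A sketch in standard flat Lambda-CDM notation; the BAO parameter and the SDSS values shown are the usual Eisenstein et al. ones, assumed here for illustration rather than taken from the paper:

$$\chi^2_{\rm tot}(H_0,\Omega_m) = \sum_{i=1}^{38} \frac{\left[ D_A^{\rm obs}(z_i) - D_A^{\rm th}(z_i) \right]^2}{\sigma_i^2} + \frac{\left[ \mathcal{A}(\Omega_m) - 0.469 \right]^2}{(0.017)^2},$$

$$D_A^{\rm th}(z) = \frac{c}{H_0 (1+z)} \int_0^{z} \frac{dz'}{E(z')}, \qquad E(z) = \sqrt{\Omega_m (1+z)^3 + 1 - \Omega_m},$$

$$\mathcal{A} = \sqrt{\Omega_m}\, E(z_*)^{-1/3} \left[ \frac{1}{z_*}\int_0^{z_*}\frac{dz}{E(z)} \right]^{2/3}, \qquad z_* = 0.35.$$

Since E(z) carries no H_0 dependence, the BAO term constrains Omega_m alone, which is exactly how the D_A degeneracy is broken.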
Abstract:
Model predictive control (MPC) applications in the process industry usually deal with process systems that show time delays (dead times) between the system inputs and outputs. Also, in many industrial applications of MPC, integrating outputs resulting from liquid level control or recycle streams need to be considered as controlled outputs. Conventional MPC packages can be applied to time-delay systems, but stability of the closed-loop system will depend on the tuning parameters of the controller and cannot be guaranteed even in the nominal case. In this work, a state-space model based on the analytical step response model is extended to the case of integrating systems with time delays. This model is applied to the development of two versions of a nominally stable MPC, which is designed for the practical scenario in which one has targets for some of the inputs and/or outputs that may be unreachable, and zone control (or interval tracking) for the remaining outputs. The controller is tested through simulation of a multivariable industrial reactor system. (C) 2012 Elsevier Ltd. All rights reserved.
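To make the zone-control idea concrete, here is a minimal receding-horizon sketch (not the paper's nominally stabilizing, step-response-based formulation): outputs are penalized only when they leave their zones, via slack variables, while inputs are steered toward possibly unreachable targets. Model matrices, zones and targets are invented, and delays and integrating modes are omitted for brevity.

    import numpy as np
    import cvxpy as cp

    # Hypothetical 2x2 stable discrete-time model
    A = np.array([[0.9, 0.0], [0.05, 0.95]])
    B = np.array([[0.1, 0.0], [0.0, 0.08]])
    C = np.eye(2)
    N = 20                                                   # prediction horizon
    ymin, ymax = np.array([0.0, -1.0]), np.array([2.0, 1.0]) # output zones
    ut = np.array([0.5, 0.2])          # input targets (possibly unreachable)
    x0 = np.array([1.5, -0.5])

    u = cp.Variable((2, N))
    x = cp.Variable((2, N + 1))
    s = cp.Variable((2, N), nonneg=True)   # zone-violation slacks

    cost, cons = 0, [x[:, 0] == x0]
    for k in range(N):
        cons += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k]]
        y = C @ x[:, k + 1]
        cons += [y >= ymin - s[:, k], y <= ymax + s[:, k]]   # soft zones
        cost += 1e3 * cp.sum_squares(s[:, k]) + cp.sum_squares(u[:, k] - ut)

    cp.Problem(cp.Minimize(cost), cons).solve()
    print(u.value[:, 0])   # apply the first move, then re-solve (receding horizon)

Because the output term vanishes whenever the outputs sit anywhere inside their zones, the degrees of freedom left over are spent on approaching the input targets, which is the trade-off the abstract describes.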
Abstract:
This work studies the optimization and control of a styrene polymerization reactor. The proposed strategy deals with the case where, because of market conditions and equipment deterioration, the optimal operating point of the continuous reactor shifts significantly over the operating time, and the control system has to search for this optimum while keeping the reactor system stable at any possible operating point. The approach considered here consists of three layers: Real-Time Optimization (RTO), Model Predictive Control (MPC), and a Target Calculation (TC) layer that coordinates the communication between the two other layers and guarantees the stability of the whole structure. The proposed algorithm is simulated with the phenomenological model of a styrene polymerization reactor, which has been widely used as a benchmark for process control. The complete optimization structure for the styrene process, including disturbance rejection, is developed. The simulation results show the robustness of the proposed strategy and its capability to deal with disturbances while the economic objective is optimized.
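The three-layer hierarchy can be sketched as follows; everything here (the toy profit function, bounds and rate limit) is hypothetical and only illustrates how the layers exchange information at different time scales.

    import numpy as np
    from scipy.optimize import minimize_scalar

    def rto_layer(price):
        """Slow economic layer: steady-state optimum of a toy profit function."""
        res = minimize_scalar(lambda u: -(price * (1 - np.exp(-u)) - u),
                              bounds=(0.0, 5.0), method="bounded")
        return res.x

    def target_calc(u_opt, u_min=0.0, u_max=1.5):
        """Middle layer: project the (possibly unreachable) RTO optimum
        onto the set of targets the MPC can actually reach."""
        return float(np.clip(u_opt, u_min, u_max))

    def mpc_layer(u_now, u_target, rate=0.1):
        """Fast layer stand-in: one rate-limited move toward the target
        (a real implementation would solve a constrained QP here)."""
        return u_now + np.clip(u_target - u_now, -rate, rate)

    u, u_star = 0.0, None
    for k in range(50):                   # plant sampling instants
        if k % 10 == 0:                   # RTO runs on a slower clock
            u_star = rto_layer(price=2.0)
        u = mpc_layer(u, target_calc(u_star))
    print(round(u, 3))

The key structural point, mirrored in the code, is that the TC layer is the only place where the economic optimum and the controller's feasible set meet, which is what lets the scheme remain stable when the RTO target is unreachable.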
Abstract:
Citrus black spot (MPC, from the Portuguese "mancha preta dos citros"), caused by the fungus Guignardia citricarpa, produces lesions on fruit, which lose value on the domestic market and are restricted for export. The long period of susceptibility of citrus fruit, in addition to the fact that G. citricarpa causes latent infections, makes it difficult to understand the incubation period of the disease. The objective of this work was to determine the incubation period of MPC by inoculating 'Valência' orange fruit at different phenological stages. Suspensions of G. citricarpa conidia (10^3, 10^4, 10^5 and 10^6 conidia mL^-1) were used to inoculate fruit of different diameters (1.5, 2.0, 2.5, 3.0, 5.0 and 7.0 cm). The incubation period of MPC for the different diameters of inoculated fruit followed a negative polynomial relationship. In fruit up to 3 cm in diameter the mean incubation period exceeded 200 days, whereas in fruit larger than 5 cm in diameter the mean incubation period was shorter than 84 days. MPC has a variable incubation period that depends on the phenological stage at which the fruit are infected. The concentration of G. citricarpa conidia at infection does not affect the incubation period of the disease.
Abstract:
We derive lower bounds on the density of sources of ultra-high energy cosmic rays from the lack of significant clustering in the arrival directions of the highest energy events detected at the Pierre Auger Observatory. The density of uniformly distributed sources of equal intrinsic intensity was found to be larger than ~(0.06 - 5) × 10^-4 Mpc^-3 at 95% CL, depending on the magnitude of the magnetic deflections. Similar bounds, in the range (0.2 - 7) × 10^-4 Mpc^-3, were obtained for sources following the local matter distribution.
Abstract:
Vertical distributions of turbulent energy dissipation rates and fluorescence were measured simultaneously with a high-resolution micro-profiler in four different oceanographic regions, from temperate to polar and from coastal to open-water settings. High fluorescence values, forming a deep chlorophyll maximum (DCM), were often located in weakly stratified portions of the upper water column, just below layers with maximum levels of turbulent energy dissipation rate. In the vicinity of the DCM, a significant negative relationship between fluorescence and turbulent energy dissipation rate was found. We discuss the mechanisms that may explain the observed patterns of planktonic biomass distribution within the ocean mixed layer, including a vertically variable diffusion coefficient and the alteration of the cells' sinking velocity by turbulent motion. These findings provide further insight into the processes controlling the vertical distribution of the pelagic community and the position of the DCM.
Abstract:
In the present thesis a thorough multiwavelength analysis of a number of galaxy clusters known to be experiencing a merger event is presented. The bulk of the thesis consists of the analysis of deep radio observations of six merging clusters, which host extended radio emission on the cluster scale. A composite optical and X-ray analysis is performed in order to obtain a detailed and comprehensive picture of the cluster dynamics and possibly derive hints about the properties of the ongoing merger, such as the involved mass ratio, geometry and time scale. The combination of the high quality radio, optical and X-ray data allows us to investigate the implications of the ongoing merger for the cluster radio properties, focusing on the phenomenon of cluster-scale diffuse radio sources, known as radio halos and relics. A total of six merging clusters was selected for the present study: A3562, A697, A209, A521, RXCJ 1314.4-2515 and RXCJ 2003.5-2323. All of them were known, or suspected, to possess extended radio emission on the cluster scale, in the form of a radio halo and/or a relic. High sensitivity radio observations were carried out for all clusters using the Giant Metrewave Radio Telescope (GMRT) at low frequency (i.e. ≤ 610 MHz), in order to test for the presence of a diffuse radio source and/or analyse in detail the properties of the hosted extended radio emission. For three clusters, the GMRT information was combined with higher frequency data from Very Large Array (VLA) observations. A re-analysis of the optical and X-ray data available in the public archives was carried out for all sources. Proprietary deep XMM-Newton and Chandra observations were used to investigate the merger dynamics in A3562. Thanks to our multiwavelength analysis, we were able to confirm the existence of a radio halo and/or a relic in all clusters, and to connect their properties and origin to the reconstructed merging scenario for most of the investigated cases.
• The existence of a small-size and low-power radio halo in A3562 was successfully explained in the theoretical framework of the particle re-acceleration model for the origin of radio halos, which invokes the re-acceleration of pre-existing relativistic electrons in the intracluster medium by merger-driven turbulence.
• A giant radio halo was found in the massive galaxy cluster A209, which has likely undergone a past major merger and is currently experiencing a new merging process in a direction roughly orthogonal to the old merger axis. A giant radio halo was also detected in A697, whose optical and X-ray properties may be suggestive of a strong merger event along the line of sight. Given the cluster mass and the kind of merger, the existence of a giant radio halo in both clusters is expected in the framework of the re-acceleration scenario.
• A radio relic was detected at the outskirts of A521, a highly dynamically disturbed cluster which is accreting a number of small mass concentrations. A possible explanation for its origin requires the presence of a merger-driven shock front at the location of the source. The spectral properties of the relic may support such an interpretation and require a Mach number M ≲ 3 for the shock.
• The galaxy cluster RXCJ 1314.4-2515 is exceptional and unique in hosting two peripheral relic sources, extending on the Mpc scale, and a central small-size radio halo. The existence of these sources requires the presence of an ongoing energetic merger. Our combined optical and X-ray investigation suggests that a strong merging process between two or more massive subclumps may be ongoing in this cluster. Thanks to forthcoming optical and X-ray observations, we will reconstruct in detail the merger dynamics and derive its energetics, to be related to the energy necessary for the particle re-acceleration in this cluster.
• Finally, RXCJ 2003.5-2323 was found to possess a giant radio halo. This source is among the largest, most powerful and most distant (z = 0.317) halos imaged so far. Unlike other radio halos, it shows a very peculiar morphology with bright clumps and filaments of emission, whose origin might be related to the relatively high redshift of the hosting cluster. Although very little optical and X-ray information is available about the cluster dynamical stage, the results of our optical analysis suggest the presence of two massive substructures which may be interacting with the cluster. Forthcoming observations in the optical and X-ray bands will allow us to confirm the expected high merging activity in this cluster.
Throughout the present thesis a cosmology with H0 = 70 km s^-1 Mpc^-1, Omega_m = 0.3 and Omega_Lambda = 0.7 is assumed.
Abstract:
In the past decade, the advent of efficient genome sequencing tools and high-throughput experimental biotechnology has led to enormous progress in the life sciences. Among the most important innovations is microarray technology. It allows the expression of thousands of genes to be quantified simultaneously by measuring the hybridization from a tissue of interest to probes on a small glass or plastic slide. The characteristics of these data include a fair amount of random noise, a predictor dimension in the thousands, and a sample size in the dozens. One of the most exciting areas to which microarray technology has been applied is the challenge of deciphering complex diseases such as cancer. In these studies, samples are taken from two or more groups of individuals with heterogeneous phenotypes, pathologies, or clinical outcomes. These samples are hybridized to microarrays in an effort to find a small number of genes which are strongly correlated with the groups of individuals. Even though the methods to analyse such data are today well developed and close to reaching a standard organization (through the effort of international projects like the Microarray Gene Expression Data (MGED) Society [1]), it is not infrequent to stumble upon a clinician's question for which no compelling statistical method exists to answer it. The contribution of this dissertation to deciphering disease is the development of new approaches aimed at handling open problems posed by clinicians in specific experimental designs. Chapter 1, starting from a necessary biological introduction, reviews microarray technologies and all the important steps of an experiment, from the production of the array to the quality controls, ending with the preprocessing steps that are used in the data analysis in the rest of the dissertation. Chapter 2 provides a critical review of standard analysis methods, stressing their main problems. Chapter 3 introduces a method to address the issue of unbalanced design of microarray experiments. In microarray experiments, experimental design is a crucial starting point for obtaining reasonable results. In a two-class problem, an equal or similar number of samples should be collected for the two classes. However, in some cases, e.g. rare pathologies, the approach to be taken is less evident. We propose to address this issue by applying a modified version of SAM [2]. MultiSAM consists of a reiterated application of a SAM analysis, comparing the less populated class (LPC) with 1,000 random samplings of the same size from the more populated class (MPC). A list of the differentially expressed genes is generated for each SAM application. After 1,000 reiterations, each single probe is given a "score" ranging from 0 to 1,000 based on its recurrence in the 1,000 lists as differentially expressed. The performance of MultiSAM was compared to the performance of SAM and LIMMA [3] over two simulated data sets generated via beta and exponential distributions. The results of all three algorithms over low-noise data sets seem acceptable. However, on a real unbalanced two-channel data set regarding Chronic Lymphocytic Leukemia, LIMMA finds no significant probe, SAM finds 23 significantly changed probes but cannot separate the two classes, while MultiSAM finds 122 probes with score > 300 and separates the data into two clusters by hierarchical clustering.
We also report extra-assay validation in terms of differentially expressed genes. Although standard algorithms perform well over low-noise simulated data sets, MultiSAM seems to be the only one able to reveal subtle differences in gene expression profiles on real unbalanced data. Chapter 4 describes a method to address similarity evaluation in a three-class problem by means of the Relevance Vector Machine (RVM) [4]. In fact, looking at microarray data in a prognostic and diagnostic clinical framework, not only differences can play a crucial role. In some cases similarities can give useful and, sometimes, even more important information. The goal, given three classes, could be to establish, with a certain level of confidence, whether the third one is similar to the first or to the second one. In this work we show that the RVM could be a possible solution to the limitations of standard supervised classification. In fact, the RVM offers many advantages compared, for example, with its well-known precursor, the Support Vector Machine (SVM) [3]. Among these advantages, the estimate of the posterior probability of class membership represents a key feature for addressing the similarity issue. This is a highly important, but often overlooked, option of any practical pattern recognition system. We focused on a three-class tumor-grade problem, with 67 samples of grade 1 (G1), 54 samples of grade 3 (G3) and 100 samples of grade 2 (G2). The goal is to find a model able to separate G1 from G3, and then evaluate the third class G2 as a test set to obtain the probability for samples of G2 to be members of class G1 or class G3. The analysis showed that breast cancer samples of grade 2 have a molecular profile more similar to breast cancer samples of grade 1. In the literature this result had been conjectured, but no measure of significance had been given before.
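The MultiSAM resampling loop can be sketched as follows. This is a hedged illustration of the idea only: a plain two-sample t-test stands in for the real SAM statistic, and the threshold, iteration count and data are invented.

    import numpy as np
    from scipy import stats

    def multisam_scores(X_lpc, X_mpc, n_iter=1000, alpha=0.01, rng=None):
        """Compare the less populated class (LPC) against repeated random
        subsamples of the more populated class (MPC); score each probe by how
        often it comes out differentially expressed across the iterations."""
        rng = np.random.default_rng(rng)
        n_probes, n_lpc = X_lpc.shape
        scores = np.zeros(n_probes, dtype=int)
        for _ in range(n_iter):
            idx = rng.choice(X_mpc.shape[1], size=n_lpc, replace=False)
            t, p = stats.ttest_ind(X_lpc, X_mpc[:, idx], axis=1)
            scores += (p < alpha).astype(int)
        return scores   # 0..n_iter; high recurrence = robust differential expression

    # Toy usage: 500 probes, 8 LPC vs 60 MPC samples, 10 truly shifted probes
    rng = np.random.default_rng(0)
    X_mpc = rng.normal(size=(500, 60))
    X_lpc = rng.normal(size=(500, 8))
    X_lpc[:10] += 2.0
    print(np.sum(multisam_scores(X_lpc, X_mpc, n_iter=200, rng=1) > 60))

The point of the recurrence score is that a probe must survive many different balanced comparisons, not just one lucky subsample, before it is called differentially expressed.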
Abstract:
The term Ambient Intelligence (AmI) refers to a vision of the future of the information society where smart, electronic environments are sensitive and responsive to the presence of people and their activities (context awareness). In an ambient intelligence world, devices work in concert to support people in carrying out their everyday-life activities, tasks and rituals in an easy, natural way, using information and intelligence that is hidden in the network connecting these devices. This promotes the creation of pervasive environments, improving the quality of life of the occupants and enhancing the human experience. AmI stems from the convergence of three key technologies: ubiquitous computing, ubiquitous communication and natural interfaces. Ambient intelligent systems are heterogeneous and require excellent cooperation between several hardware/software technologies and disciplines, including signal processing, networking and protocols, embedded systems, information management, and distributed algorithms. Since a large number of fixed and mobile sensors is embedded and deployed in the environment, Wireless Sensor Networks (WSNs) are one of the most relevant enabling technologies for AmI. WSNs are complex systems made up of a number of sensor nodes which can be deployed in a target area to sense physical phenomena and communicate with other nodes and base stations. These simple devices typically embed a low-power computational unit (microcontrollers, FPGAs, etc.), a wireless communication unit, one or more sensors and some form of energy supply (either batteries or energy scavenger modules). WSNs promise to revolutionize the interactions between the real physical world and human beings. Low cost, low computational power, low energy consumption and small size are characteristics that must be taken into consideration when designing and dealing with WSNs. To fully exploit the potential of distributed sensing approaches, a set of challenges must be addressed. Sensor nodes are inherently resource-constrained systems with very low power consumption and small size requirements, which enables them to reduce the interference with the physical phenomena sensed and allows easy and low-cost deployment. They have limited processing speed, storage capacity and communication bandwidth, which must be used efficiently to increase the degree of local "understanding" of the observed phenomena. A particular case of sensor nodes are video sensors. This topic holds strong interest for a wide range of contexts such as military, security, robotics and, most recently, consumer applications. Vision sensors are extremely effective for medium- to long-range sensing because vision provides rich information to human operators. However, image sensors generate a huge amount of data, which must be heavily processed before it is transmitted due to the scarce bandwidth capability of radio interfaces. In particular, in video surveillance, it has been shown that source-side compression is mandatory due to limited bandwidth and delay constraints. Moreover, there is ample opportunity for performing higher-level processing functions, such as object recognition, which have the potential to drastically reduce the required bandwidth (e.g. by transmitting compressed images only when something 'interesting' is detected). The energy cost of image processing must however be carefully minimized. Imaging could play, and indeed plays, an important role in sensing devices for ambient intelligence.
Computer vision can for instance be used for recognising persons and objects and for recognising behaviour such as illness and rioting. Having a wireless camera as a camera mote opens the way for distributed scene analysis. More eyes see more than one, and a camera system that can observe a scene from multiple directions would be able to overcome occlusion problems and could describe objects in their true 3D appearance. In real time, these approaches are a recently opened field of research. In this thesis we pay attention to the realities of the hardware/software technologies and the design needed to realize systems for distributed monitoring, attempting to propose solutions to open issues and to fill the gap between AmI scenarios and hardware reality. The physical implementation of an individual wireless node is constrained by three important metrics which are outlined below. Although the design of the sensor network and its sensor nodes is strictly application-dependent, a number of constraints should almost always be considered. Among them:
• Small form factor, to reduce node intrusiveness.
• Low power consumption, to reduce battery size and to extend node lifetime.
• Low cost, for widespread diffusion.
These limitations typically result in the adoption of low-power, low-cost devices such as low-power microcontrollers with a few kilobytes of RAM and tens of kilobytes of program memory, on which only simple data processing algorithms can be implemented. However, the overall computational power of the WSN can be very large, since the network presents a high degree of parallelism that can be exploited through the adoption of ad-hoc techniques. Furthermore, through the fusion of information from the dense mesh of sensors, even complex phenomena can be monitored. In this dissertation we present our results in building several AmI applications suitable for a WSN implementation. The work can be divided into two main areas: Low-Power Video Sensor Nodes and Video Processing Algorithms, and Multimodal Surveillance. Low-Power Video Sensor Nodes and Video Processing Algorithms: in comparison to scalar sensors, such as temperature, pressure, humidity, velocity, and acceleration sensors, vision sensors generate much higher bandwidth data due to the two-dimensional nature of their pixel array. We have tackled all the constraints listed above and have proposed solutions to overcome the current WSN limits for video sensor nodes. We have designed and developed wireless video sensor nodes focusing on small size and flexibility of reuse in different applications. The video nodes target a different design point: portability (on-board power supply, wireless communication) and a scanty power budget (500 mW), while still providing a prominent level of intelligence, namely sophisticated classification algorithms and a high level of reconfigurability. We developed two different video sensor nodes: the device architecture of the first one is based on a low-cost, low-power FPGA + microcontroller system-on-chip; the second one is based on an ARM9 processor. Both systems, designed within the above mentioned power envelope, can operate in a continuous fashion with a Li-Polymer battery pack and a solar panel. Novel low-power, low-cost video sensor nodes which, in contrast to sensors that just watch the world, are capable of comprehending the perceived information in order to interpret it locally, are presented.
Featuring such intelligence, these nodes would be able to cope with tasks such as the recognition of unattended bags in airports or of persons carrying potentially dangerous objects, which normally require a human operator. Vision algorithms for object detection and acquisition, such as human detection with Support Vector Machine (SVM) classification and abandoned/removed object detection, are implemented, described and illustrated on real-world data. Multimodal surveillance: in several setups the use of wired video cameras may not be possible. For this reason, building an energy-efficient wireless vision network for monitoring and surveillance is one of the major efforts in the sensor network community. Pyroelectric Infra-Red (PIR) sensors have been used to extend the lifetime of a solar-powered video sensor node by providing an energy-level-dependent trigger to the video camera and the wireless module. This approach has been shown to extend node lifetime and possibly result in continuous operation of the node. Being low-cost, passive (thus low-power) and presenting a limited form factor, PIR sensors are well suited for WSN applications. Moreover, aggressive power management policies are essential for achieving long-term operation of standalone distributed cameras. We have used an adaptive controller based on Model Predictive Control (MPC) to improve system performance, outperforming naive power management policies.
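A minimal sketch of the kind of MPC energy manager this describes: pick camera duty cycles over a horizon to maximize sensing time while keeping the battery above a reserve, given a solar harvesting forecast. The horizon, power figures, battery model and forecast are all invented for illustration; this is not the thesis' controller.

    import numpy as np
    import cvxpy as cp

    H = 24                                 # horizon, hours (hypothetical)
    solar = 0.4 * np.clip(np.sin(np.linspace(0, np.pi, H)), 0, None)  # Wh/h forecast
    cap, b0, reserve = 10.0, 5.0, 1.0      # battery capacity, initial charge, reserve (Wh)
    p_active, p_idle = 0.5, 0.02           # node power in each mode (W)

    d = cp.Variable(H)                     # camera duty cycle per hour, in [0, 1]
    b = cp.Variable(H + 1)                 # battery state of charge (Wh)
    cons = [b[0] == b0, d >= 0, d <= 1, b >= reserve, b <= cap]
    for k in range(H):
        load = p_active * d[k] + p_idle * (1 - d[k])
        cons += [b[k + 1] == b[k] + solar[k] - load]   # simple energy balance

    cp.Problem(cp.Maximize(cp.sum(d)), cons).solve()
    print(np.round(d.value, 2))            # apply d[0], then re-solve each hour

Compared to a naive fixed duty cycle, the optimizer concentrates activity where the harvesting forecast allows it, which is exactly the advantage MPC has over static power management policies.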
Abstract:
Constraints are widely present in flight control problems: actuator saturations and flight envelope limitations are only some examples. The ability of Model Predictive Control (MPC) to deal with constraints, combined with the increased computational power of modern computers, makes this approach attractive also for fast-dynamics systems such as agile air vehicles. This PhD thesis presents the results, achieved at the Aerospace Engineering Department of the University of Bologna in collaboration with the Dutch National Aerospace Laboratories (NLR), concerning the development of a model predictive control system for small-scale rotorcraft UAS. Several different predictive architectures have been evaluated and tested by means of simulation; as a result of this analysis, the most promising one has been used to implement three different control systems: a Stability and Control Augmentation System, a trajectory tracking system and a path following system. The systems have been compared with a corresponding baseline controller and showed several advantages in terms of performance, stability and robustness.
Abstract:
In this work we investigate the influence of dark energy on structure formation, within five different cosmological models, namely a concordance $\Lambda$CDM model, two models with dynamical dark energy, viewed as a quintessence scalar field (using a RP and a SUGRA potential form), and two extended quintessence models (EQp and EQn) where the quintessence scalar field interacts non-minimally with gravity (scalar-tensor theories). For all models we adopted the normalization of the matter power spectrum $\sigma_{8}$ that matches the CMB data. For each model, we perform hydrodynamical simulations in a cosmological box of $(300 \ {\rm{Mpc}} \ h^{-1})^{3}$ including baryons and allowing for cooling and star formation. We find that, in models with dynamical dark energy, the evolving cosmological background leads to different star formation rates and different formation histories of galaxy clusters, but the baryon physics is not affected in a relevant way. We investigate several proxies for the cluster mass function based on X-ray observables like temperature, luminosity, $M_{gas}$, and $Y_{X}$. We confirm that the overall baryon fraction is almost independent of the dark energy models, to within a few percentage points. The same is true for the gas fraction. This evidence reinforces the use of galaxy clusters as cosmological probes of the matter and energy content of the Universe. We also study the $c-M$ relation in the different cosmological scenarios, using both dark-matter-only and hydrodynamical simulations. We find that the normalization of the $c-M$ relation is directly linked to $\sigma_{8}$ and the evolution of the density perturbations for $\Lambda$CDM, RP and SUGRA, while for EQp and EQn it depends also on the evolution of the linear density contrast. These differences in the $c-M$ relation provide another way to use galaxy clusters to constrain the underlying cosmology.
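For reference, the $c-M$ relation mentioned above is conventionally defined through the NFW profile, and its normalization is usually quoted via a power-law fit of the standard form below (an illustrative parameterization, not the values fitted in this work):

$$\rho(r) = \frac{\rho_s}{(r/r_s)\left(1 + r/r_s\right)^2}, \qquad c_{200} \equiv \frac{r_{200}}{r_s}, \qquad c(M, z) = A \left(\frac{M}{M_{\rm pivot}}\right)^{B} (1 + z)^{C},$$

with the normalization $A$ being the quantity whose link to $\sigma_8$ and to the growth of density perturbations is studied here.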
Abstract:
Although they have been studied in depth, the processes that led to the formation and evolution of galaxies as observed in the present-day Universe are not yet fully understood. The current picture of the history of structure formation holds that gravitational collapse, starting from the primordial density fluctuations, triggers star formation, and that some process then intervenes and shuts it down. Several studies identify the main culprit of this abrupt quenching of star formation in the nuclear activity at the centers of galaxies (Active Galactic Nuclei, AGN), capable of providing the energy needed to prevent the gravitational collapse of the gas and the formation of new stars. One of the signs of the presence of such a phenomenon within a galaxy is the radio emission due to gas accretion onto the black hole. In this thesis work the environment of radio sources in the field of the VLA-COSMOS survey was studied. Starting from a sample of 1806 radio sources and 1,482,993 galaxies without radio emission, with photometric redshifts and photometry from the COSMOS survey and its radio part (VLA-COSMOS), the richness of the environment around each radio source was estimated by counting the number of radio-quiet galaxies inside a cylinder centered on the source, with a base radius of 1 Mpc and a height proportional to the photometric redshift error of that radio source. To assess the significance of the results, a control sample of 1806 galaxies without radio emission was built, and the environment around each of them was estimated with the same method used for the radio sources. The results show that galaxy clusters hosting a radio source at their center are significantly richer than those centered on a galaxy without radio emission. This difference in richness persists regardless of selections based on redshift, stellar mass and specific star formation rate of the galaxies in the sample, and shows that galaxy clusters with an AGN-driven radio source at their center are significantly richer than clusters centered on a radio-quiet galaxy. This effect is more pronounced for FR I-type AGN than for FR II-type objects, indicating a correlation between AGN power and structure formation. These results shed new light on the mechanisms of galaxy formation and evolution, which predict a tight correlation between AGN activity, star formation and its quenching.
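The cylinder-count richness estimator described above can be sketched as follows; the scaling factor k for the redshift slab and all coordinates are hypothetical, and astropy supplies the cosmology helpers.

    import numpy as np
    import astropy.units as u
    from astropy.coordinates import SkyCoord
    from astropy.cosmology import FlatLambdaCDM

    cosmo = FlatLambdaCDM(H0=70, Om0=0.3)

    def richness(src_coord, src_z, src_dz, gal_coords, gal_z, k=2.0):
        """Count galaxies inside the cylinder around one radio source:
        projected radius 1 Mpc, half-height k times the photo-z error."""
        # angular radius subtending 1 Mpc at the source redshift
        theta_max = (1.0 * u.Mpc / cosmo.angular_diameter_distance(src_z)) * u.rad
        sep = src_coord.separation(gal_coords)
        in_slab = np.abs(gal_z - src_z) < k * src_dz
        return int(np.sum((sep < theta_max) & in_slab))

    # Toy usage with fake positions and redshifts
    rng = np.random.default_rng(0)
    src = SkyCoord(ra=150.1 * u.deg, dec=2.2 * u.deg)
    gals = SkyCoord(ra=(150.1 + 0.05 * rng.standard_normal(1000)) * u.deg,
                    dec=(2.2 + 0.05 * rng.standard_normal(1000)) * u.deg)
    print(richness(src, 0.7, 0.02, gals, 0.7 + 0.05 * rng.standard_normal(1000)))

Running the same estimator on the control sample and comparing the two count distributions is what turns these raw counts into the significance statement quoted in the abstract.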
Abstract:
MultiProcessor Systems-on-Chip (MPSoCs) are the core of today's and next-generation computing platforms. Their relevance in the global market continuously increases, occupying an important role both in everyday-life products (e.g. smartphones, tablets, laptops, cars) and in strategic market sectors such as aviation, defense, robotics, and medicine. Despite the incredible performance improvements of recent years, processor manufacturers have had to deal with issues, commonly called "walls", that have hindered processor development. After the famous "power wall", which limited the maximum frequency of a single core and marked the birth of the modern multiprocessor system-on-chip, the "thermal wall" and the "utilization wall" are the current key limiters for performance improvements. The former concerns the damaging effects on the chip of the high temperatures caused by large dissipated power densities, whereas the latter refers to the impossibility of fully exploiting the computing power of the processor due to the limitations on power and temperature budgets. In this thesis we faced these challenges by developing efficient and reliable solutions able to maximize performance while limiting the maximum temperature below a fixed critical threshold and saving energy. This has been possible by exploiting the Model Predictive Control (MPC) paradigm, which solves an optimization problem subject to constraints in order to find the optimal control decisions for the future interval. A fully distributed MPC-based thermal controller with a far lower complexity than a centralized one has been developed. The control feasibility, and properties useful for simplifying the control design, have been proved by studying a partial differential equation thermal model. Finally, the controller has been efficiently included in more complex control schemes able to minimize energy consumption and deal with mixed-criticality tasks.
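A minimal sketch of the optimization an MPC thermal controller of this kind solves at each interval, assuming a linear thermal model (here a made-up two-core example with coupling between neighbours; the real thesis controller is distributed, whereas this toy is centralized): maximize delivered power, used as a performance proxy, subject to a critical temperature cap.

    import numpy as np
    import cvxpy as cp

    A = np.array([[0.95, 0.03], [0.03, 0.95]])   # 2-core thermal coupling (made up)
    B = 0.5 * np.eye(2)                          # power-to-temperature gain
    T_amb, T_crit, H = 45.0, 80.0, 10            # ambient, cap (deg C), horizon
    T0 = np.array([70.0, 60.0])                  # measured core temperatures

    p = cp.Variable((2, H), nonneg=True)         # per-core power budgets
    T = cp.Variable((2, H + 1))
    cons = [T[:, 0] == T0, p <= 2.0]             # per-core power limit (W)
    for k in range(H):
        cons += [T[:, k + 1] == A @ (T[:, k] - T_amb) + B @ p[:, k] + T_amb,
                 T[:, k + 1] <= T_crit]          # thermal wall as a hard constraint

    cp.Problem(cp.Maximize(cp.sum(p)), cons).solve()
    print(np.round(p.value[:, 0], 2))   # apply the first move; re-solve each interval

In the distributed version each core would solve a small problem like this using only its own state and that of its neighbours, which is what cuts the complexity with respect to one centralized solve over all cores.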