15 results for Power of Veto

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance: 100.00%

Abstract:

MYCN amplification is a genetic hallmark of the childhood tumour neuroblastoma. MYCN-MAX dimers activate the expression of genes promoting cell proliferation. Moreover, MYCN seems to transcriptionally repress cell differentiation even in the absence of MAX. We adopted the Drosophila eye as a model to investigate the effect of a high MYC-to-MAX expression ratio on cells. We found that dMyc overexpression in eye cell precursors inhibits cell differentiation and induces the ectopic expression of Antennapedia (the wing Hox gene). A further increase of the MYC/MAX ratio results in an eye-to-wing homeotic transformation. Notably, the dMyc overexpression phenotype is suppressed by low levels of transcriptional co-repressors, and MYCN associates with the promoter of Deformed (the eye Hox gene) in proximity to repressive sites. Hence, we envisage that, in the presence of a high MYC/MAX ratio, the “free MYC” might inhibit Deformed expression, leading in turn to the ectopic expression of Antennapedia. This suggests that MYCN might reinforce its oncogenic role by affecting the physiological homeotic program. Furthermore, poor neuroblastoma outcome is associated with a high level of the MRP1 protein, which is encoded by the ABCC1 gene and known to promote drug efflux in cancer cells. Intriguingly, this correlation persists regardless of chemotherapy, and ABCC1 overexpression enhances neuroblastoma cell motility. We found that Drosophila dMRP contributes to the adhesion between the dorsal and ventral epithelia of the wing by inhibiting the function of integrin receptors, well-known regulators of cell adhesion and migration. Moreover, integrins play a crucial role during synaptogenesis, and the ABCC1 locus is included in a copy-number-variable region of the human genome (16p13.11) involved in neuropsychiatric diseases. Interestingly, we found that altered dMRP/MRP1 levels affect nervous system development in Drosophila embryos. These preliminary findings point to novel ABCC1 functions, possibly defining the contribution of ABCC1 to neuroblastoma and to the pathogenicity of 16p13.11 deletions/duplications.

Relevance: 100.00%

Abstract:

In this thesis we describe in detail the Monte Carlo simulation (LVDG4) built to interpret the experimental data collected by LVD and to measure the muon-induced neutron yield in iron and liquid scintillator. A full Monte Carlo simulation, based on the Geant4 (v9.3) toolkit, has been developed and validation tests have been performed. We used LVDG4 to determine the active-vetoing and shielding power of LVD. The idea was to evaluate the feasibility of hosting a dark matter detector in the innermost part of the apparatus, called the Core Facility (LVD-CF). The first conclusion is that LVD is a good moderator, but the iron supporting structure produces a large number of neutrons near the core. The second conclusion is that, if LVD is used as an active veto for muons, the neutron flux in the LVD-CF is reduced by a factor of 50, reaching the same order of magnitude as the neutron flux in the deepest laboratory in the world, at Sudbury. Finally, the muon-induced neutron yield has been measured. In liquid scintillator we found $(3.2 \pm 0.2) \times 10^{-4}$ n/g/cm$^2$, in agreement with previous measurements performed at different depths and with the general trend predicted by theoretical calculations and Monte Carlo simulations. Moreover, we present the first measurement, to our knowledge, of the neutron yield in iron: $(1.9 \pm 0.1) \times 10^{-3}$ n/g/cm$^2$. This measurement provides an important check for Monte Carlo simulations of neutron production in heavy materials, which are often used as shielding in low-background experiments.
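For reference, the muon-induced neutron yield quoted above is conventionally normalised to the number of muons and to the column density of material they traverse; a minimal sketch of this standard definition (textbook notation from the literature, not taken from the thesis) is

$$ Y_n \;=\; \frac{N_n}{N_\mu\,\rho\,\langle L\rangle}, $$

where $N_n$ is the efficiency-corrected number of detected neutrons, $N_\mu$ the number of muons crossing the target, $\rho$ the target density and $\langle L\rangle$ the average muon track length in the target, giving a yield per muon and per g/cm$^2$ as in the values quoted above.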

Relevance: 100.00%

Abstract:

The objective of this thesis is the analysis of power transients in the experimental devices placed within the reflector of the Jules Horowitz Reactor (JHR). Since the JHR material-testing facility is designed to achieve a core thermal power of 100 MW, its large reflector hosts fissile material samples that are irradiated up to a total power of about 3 MW. MADISON devices are expected to attain 130 kW, whereas the ADELINE nominal power is about 60 kW. In addition, MOLFI test samples are envisaged to reach 360 kW in the LEU configuration and up to 650 kW in the HEU one. Safety issues concern shutdown transients and require specific verification of how the thermal power of these fissile samples decreases with respect to core kinetics, as well as the determination of the reactivity of each single device. A calculation model is conceived and applied in order to properly account for the different nuclear heating processes and the time-dependent features of the device transients. An innovative methodology is developed in which the flux-shape modification during control-rod insertion is investigated and its impact on device power is described through core-reflector coupling coefficients; previous methods, which considered only nominal core-reflector parameters, are thereby improved. Moreover, the effect of delayed emissions is evaluated, in particular the spatial impact on the devices of a diffuse in-core delayed neutron source. The transport of delayed gammas, related to fission-product concentrations, is taken into account through evolution calculations of different fuel compositions in the equilibrium cycle. Once accurate device reactivity control is provided, power transients are computed for every sample according to the envisaged shutdown procedures. The results obtained in this study are intended as design feedback and as input for reactor-management optimization by the JHR project team; moreover, the Safety Report is expected to use the present analysis for improved device characterization.
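As background for the shutdown-transient analysis described above, the core power after a control-rod insertion obeys the standard point-kinetics equations (textbook form, not the specific model of the thesis):

$$ \frac{dP}{dt} = \frac{\rho(t)-\beta}{\Lambda}\,P(t) + \sum_{i=1}^{6}\lambda_i C_i(t), \qquad \frac{dC_i}{dt} = \frac{\beta_i}{\Lambda}\,P(t) - \lambda_i C_i(t), $$

with $\rho$ the inserted reactivity, $\beta_i$ and $\lambda_i$ the delayed-neutron parameters and $\Lambda$ the prompt generation time. In a coupling-coefficient description, the power of a reflector device then follows the core schematically as $P_{\mathrm{dev}}(t) \simeq k_{cr}(t)\,P(t) + P_{\mathrm{delayed}}(t)$, where $k_{cr}$ (an illustrative notation, not the thesis') captures the flux-shape dependence and $P_{\mathrm{delayed}}$ the delayed-gamma heating from accumulated fission products.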

Relevance: 90.00%

Abstract:

Galaxy clusters occupy a special position in the cosmic hierarchy, as they are the largest bound structures in the Universe. There is now general agreement on a hierarchical picture for the formation of cosmic structures, in which galaxy clusters form by accretion of matter and merging between smaller units. During merger events, shocks are driven by the gravity of the dark matter in the diffuse baryonic component, which is heated up to the observed temperature. Radio and hard X-ray observations have discovered non-thermal components mixed with the thermal Intra Cluster Medium (ICM), and this is of great importance as it calls for a “revision” of the physics of the ICM. The bulk of the present information comes from radio observations, which have discovered an increasing number of Mpc-sized emissions from the ICM: Radio Halos (at the cluster centre) and Radio Relics (at the cluster periphery). These sources are due to synchrotron emission from ultra-relativistic electrons diffusing through µG turbulent magnetic fields. Radio Halos are the most spectacular evidence of non-thermal components in the ICM, and understanding the origin and evolution of these sources is one of the most challenging goals of the theory of the ICM. Cluster mergers are the most energetic events in the Universe, and a fraction of the energy dissipated during these mergers could be channelled into the amplification of the magnetic fields and into the acceleration of high-energy particles via the shocks and turbulence driven by these mergers. Present observations of Radio Halos (and possibly of hard X-rays) can be best interpreted in terms of the re-acceleration scenario, in which MHD turbulence injected during cluster mergers re-accelerates high-energy particles in the ICM. The physics involved in this scenario is very complex and model details are difficult to test; however, the model clearly predicts some simple properties of Radio Halos (and of the resulting IC emission in the hard X-ray band) which are almost independent of the details of the adopted physics. In particular, in the re-acceleration scenario MHD turbulence is injected and dissipated during cluster mergers, and thus Radio Halos (and also the resulting hard X-ray IC emission) should be transient phenomena (with a typical lifetime ≲ 1 Gyr) associated with dynamically disturbed clusters. The physics of the re-acceleration scenario should produce an unavoidable cut-off in the spectrum of the re-accelerated electrons, due to the balance between turbulent acceleration and radiative losses. The energy at which this cut-off occurs, and thus the maximum frequency at which synchrotron radiation is produced, depends essentially on the efficiency of the acceleration mechanism, so that observations at high frequencies are expected to catch only the most efficient phenomena, while, in principle, low-frequency radio surveys may find these phenomena to be much more common in the Universe. These basic properties should leave an important imprint in the statistical properties of Radio Halos (and of non-thermal phenomena in general) which, however, have not yet been addressed by present modelling. The main focus of this PhD thesis is to calculate, for the first time, the expected statistics of Radio Halos in the context of the re-acceleration scenario. In particular, we address the following main questions:
• Is it possible to model “self-consistently” the evolution of these sources together with that of the parent clusters?
• How is the occurrence of Radio Halos expected to change with cluster mass and to evolve with redshift? How does the efficiency of catching Radio Halos in galaxy clusters change with the observing radio frequency?
• How many Radio Halos are expected to form in the Universe? At which redshift is the bulk of these sources expected?
• Is it possible to reproduce, in the re-acceleration scenario, the observed occurrence and number of Radio Halos in the Universe and the observed correlations between thermal and non-thermal properties of galaxy clusters?
• Is it possible to constrain the magnetic field intensity and profile in galaxy clusters and the energetics of turbulence in the ICM from the comparison between model expectations and observations?
Several astrophysical ingredients are necessary to model the evolution and statistical properties of Radio Halos in the context of the re-acceleration model and to address the points given above. For this reason we devote some space in this PhD thesis to reviewing the aspects of the physics of the ICM relevant to our goals. In Chapt. 1 we discuss the physics of galaxy clusters and, in particular, the cluster formation process; in Chapt. 2 we review the main observational properties of non-thermal components in the ICM; and in Chapt. 3 we focus on the physics of the magnetic field and of particle acceleration in galaxy clusters. As a relevant application, the theory of Alfvénic particle acceleration is applied in Chapt. 4, where we report the most important results from calculations carried out in the framework of the re-acceleration scenario. In this Chapter we show that a fraction of the energy of the fluid turbulence driven in the ICM by cluster mergers can be channelled into the injection of Alfvén waves at small scales, and that these waves can efficiently re-accelerate particles and trigger Radio Halos and hard X-ray emission.
The main part of this PhD work, the calculation of the statistical properties of Radio Halos and non-thermal phenomena as expected in the context of the re-acceleration model and their comparison with observations, is presented in Chapts. 5, 6, 7 and 8. In Chapt. 5 we present a first approach to semi-analytical calculations of the statistical properties of giant Radio Halos. The main goal of this Chapter is to model cluster formation, the injection of turbulence in the ICM and the resulting particle acceleration process. We adopt the semi-analytic extended Press & Schechter (PS) theory to follow the formation of a large synthetic population of galaxy clusters and assume that, during a merger, a fraction of the PdV work done by the infalling subclusters in passing through the most massive one is injected in the form of magnetosonic waves. Then the processes of stochastic acceleration of the relativistic electrons by these waves and the properties of the ensuing synchrotron (Radio Halos) and inverse Compton (IC, hard X-ray) emission of merging clusters are computed under the assumption of a constant rms average magnetic field strength in the emitting volume. The main finding of these calculations is that giant Radio Halos are naturally expected only in the more massive clusters, and that the expected fraction of clusters with Radio Halos is consistent with the observed one. In Chapt. 6 we extend the previous calculations by including a scaling of the magnetic field strength with cluster mass. The inclusion of this scaling allows us to derive the expected correlations between the synchrotron radio power of Radio Halos and the X-ray properties (T, L_X) and mass of the hosting clusters. For the first time, we show that these correlations, calculated in the context of the re-acceleration model, are consistent with the observed ones for typical µG strengths of the average B intensity in massive clusters. The calculations presented in this Chapter also allow us to derive the evolution of the probability of forming Radio Halos as a function of cluster mass and redshift. The most relevant finding presented in this Chapter is that the luminosity functions of giant Radio Halos at 1.4 GHz are expected to peak around a radio power ≈ 10^24 W/Hz and to flatten (or cut off) at lower radio powers because of the decrease of the electron re-acceleration efficiency in smaller galaxy clusters. In Chapt. 6 we also derive the expected number counts of Radio Halos and compare them with the available observations: we claim that ≈ 100 Radio Halos in the Universe can be observed at 1.4 GHz with deep surveys, while more than 1000 Radio Halos are expected to be discovered in the near future by LOFAR at 150 MHz. This is the first (and so far unique) model expectation for the number counts of Radio Halos at lower frequency and can guide the design of future radio surveys. Based on the results of Chapt. 6, in Chapt. 7 we present work in progress on a “revision” of the occurrence of Radio Halos. We combine past results from the NVSS radio survey (z ≈ 0.05–0.2) with our ongoing GMRT Radio Halos Pointed Observations of 50 X-ray-luminous galaxy clusters (at z ≈ 0.2–0.4) and discuss the possibility of testing our model expectations with the number counts of Radio Halos at z ≈ 0.05–0.4. The most relevant limitation in the calculations presented in Chapts. 5 and 6 is the assumption of an “average” size of Radio Halos, independent of their radio luminosity and of the mass of the parent clusters. This assumption cannot be relaxed in the context of the PS formalism used to describe the formation process of clusters, while a more detailed analysis of the physics of cluster mergers and of the injection process of turbulence in the ICM would require an approach based on numerical (possibly MHD) simulations of a very large volume of the Universe, which is well beyond the aim of this PhD thesis. On the other hand, in Chapt. 8 we report our discovery of novel correlations between the size (R_H) of Radio Halos and their radio power, and between R_H and the cluster mass within the Radio Halo region, M_H. In particular, this last “geometrical” M_H–R_H correlation allows us to “observationally” overcome the limitation of the “average” size of Radio Halos. Thus in this Chapter, by making use of this “geometrical” correlation and of a simplified form of the re-acceleration model based on the results of Chapts. 5 and 6, we are able to discuss the expected correlations between the synchrotron power and the thermal cluster quantities relative to the radio-emitting region. This is a new powerful tool of investigation, and we show that all the observed correlations (P_R–R_H, P_R–M_H, P_R–T, P_R–L_X, ...) now become well understood in the context of the re-acceleration model.
In addition, we find that observationally the size of Radio Halos scales non-linearly with the virial radius of the parent cluster; this immediately means that the fraction of the cluster volume which is radio emitting increases with cluster mass, and thus that the non-thermal component in clusters is not self-similar.
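For reference, the extended Press & Schechter approach mentioned above builds on the standard PS mass function, written here in its textbook form (not as the specific implementation of the thesis):

$$ n(M,z)\,dM \;=\; \sqrt{\frac{2}{\pi}}\;\frac{\bar{\rho}}{M^{2}}\;\frac{\delta_c}{\sigma(M)\,D(z)}\;\left|\frac{d\ln\sigma}{d\ln M}\right|\;\exp\!\left[-\frac{\delta_c^{2}}{2\,\sigma^{2}(M)\,D^{2}(z)}\right] dM, $$

where $\bar{\rho}$ is the mean comoving matter density, $\sigma(M)$ the rms mass fluctuation on scale $M$, $D(z)$ the linear growth factor and $\delta_c \simeq 1.686$ the critical linear overdensity for collapse; merger trees drawn from this distribution provide the synthetic cluster population whose merger-driven turbulence powers the modelled Radio Halos.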

Relevance: 90.00%

Abstract:

This Doctoral Thesis focuses on the study of individual behaviours as a result of organizational affiliation. The objective is to assess the Entrepreneurial Orientation of individuals, proving the existence of a set of antecedents of that measure and returning a structural model of its micro-foundations. Relying on the developed measurement model, I address the issue of whether some Entrepreneurs exhibit different behaviours as a result of their academic affiliation, comparing a sample of ‘Academic Entrepreneurs’ with a control sample of ‘Private Entrepreneurs’ affiliated with a matched sample of Academic Spin-offs and Private Start-ups. Building on the Theory of Planned Behaviour proposed by Ajzen (1991), I present a model of causal antecedents of Entrepreneurial Orientation based on constructs extensively used and validated, both from a theoretical and an empirical perspective, in sociological and psychological studies. I focus my investigation on five major domains: (a) Situationally Specific Motivation, (b) Personal Traits and Characteristics, (c) Individual Skills, (d) Perception of the Business Environment and (e) Entrepreneurial Orientation Related Dimensions. I rely on a sample of 200 Entrepreneurs, affiliated with a matched sample of 72 Academic Spin-offs and Private Start-ups. Firms are matched by Industry, Year of Establishment and Localization, and they are all located in the Emilia-Romagna region, in northern Italy. I gathered data through face-to-face interviews and used a Structural Equation Modeling technique (LISREL 8.80; Jöreskog & Sörbom, 2006) to perform the empirical analysis. The results show that Entrepreneurial Orientation is a multi-dimensional, micro-founded construct which is better represented by a Second-Order Model. The t-tests on the latent means reveal that the Academic Entrepreneurs differ in terms of Risk Taking, Passion, Procedural and Organizational Skills, and Perception of Government, Context and University Support. The structural models also reveal that the main differences between the two groups lie in the predictive power of Technical Skills, Perceived Context Support and Perceived University Support in explaining the Entrepreneurial Orientation Related Dimensions.
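In generic second-order SEM notation (given here only as an illustration of the model class, not as the exact specification estimated in the thesis), the observed items $y$, the first-order EO-related dimensions $\eta$ and the second-order Entrepreneurial Orientation factor $\xi$ are linked by

$$ y = \Lambda_y\,\eta + \varepsilon, \qquad \eta = \Gamma\,\xi + \zeta, $$

where $\Lambda_y$ contains the first-order loadings, $\Gamma$ the second-order loadings, and $\varepsilon$, $\zeta$ the measurement and structural residuals; group differences of the kind reported above are then tested on the latent means and on the paths in $\Gamma$ across the two samples.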

Relevance: 90.00%

Abstract:

Chemists have long sought to extrapolate the power of biological catalysis and recognition to synthetic systems. These efforts have focused largely on low-molecular-weight catalysts and receptors; however, biological systems themselves rely almost exclusively on polymers, proteins and RNA, to perform complex chemical functions. Proteins and RNA are unique in their ability to adopt compact, well-ordered conformations, and specific folding provides precise spatial orientation of the functional groups that comprise the “active site”. These features suggest that the identification of new polymer backbones with discrete and predictable folding propensities (“foldamers”) will provide a basis for the design of molecular machines with unique capabilities. The foldamer approach complements current efforts to design unnatural properties into polypeptides and polynucleotides. The aim of this thesis is the synthesis and conformational study of new classes of foldamers, using a peptidomimetic approach. Moreover, their suitability for use as ionophores, catalysts and nanobiomaterials was analyzed in solution and in the solid state. This thesis is divided into thematic chapters, which are described below. It begins with a very general introduction (page 4) which is useful, but not strictly necessary, to the expert reader. It is worth mentioning that paragraph I.3 (page 22) is the starting point of this work and paragraph I.5 (page 32) is required to better understand the results of chapters 4 and 5. Chapter 1 (page 39) reports the synthesis and conformational analysis of a novel class of foldamers containing (S)-β3-homophenylglycine [(S)-β3-hPhg] and D-4-carboxy-oxazolidin-2-one (D-Oxd) residues in alternating order. The experimental conformational analysis performed in solution by IR, 1H NMR and CD spectroscopy unambiguously proved that these oligomers fold into ordered structures with increasing sequence length. Theoretical calculations employing ab initio MO theory suggest a helix with 11-membered hydrogen-bonded rings as the preferred secondary-structure type. The novel structures enrich the field of peptidic foldamers and might be useful in the mimicry of native peptides. In chapter 2, cyclo-(L-Ala-D-Oxd)3 and cyclo-(L-Ala-D-Oxd)4 were prepared in the liquid phase with good overall yields and were used for the chelation of divalent ions (Ca2+, Mg2+, Cu2+, Zn2+ and Hg2+); their chelation ability was analyzed by ESI-MS, CD and 1H NMR techniques, and the best results were obtained with cyclo-(L-Ala-D-Oxd)3 and Mg2+ or Ca2+. Chapter 3 describes an application of oligopeptides as catalysts for aldol reactions. Paragraph 3.1 concerns the use of prolinamides as catalysts of the cross-aldol addition of hydroxyacetone to aromatic aldehydes, whereas paragraphs 3.2 and 3.3 are about the catalyzed aldol addition of acetone to isatins. By means of DFT and AIM calculations, the steric and stereoelectronic effects that control the enantioselectivity in the cross-aldol addition of acetone to isatin catalysed by L-proline have been studied, also in the presence of small quantities of water. Chapter 4 reports the synthesis and analysis of a new fiber-like material, obtained from the self-aggregation of the dipeptide Boc-L-Phe-D-Oxd-OBn, which spontaneously forms uniform fibers consisting of parallel infinite linear chains arising from single intermolecular N-H···O=C hydrogen bonds. This is the absolute borderline case of a parallel β-sheet structure.
Longer oligomers of the same series, with general formula Boc-(L-Phe-D-Oxd)n-OBn (where n = 2-5), are described in chapter 5. Their properties in solution and in the solid state were analyzed, in relation to their propensity to form intramolecular hydrogen bonds. Chapter 6 reports the synthesis of imidazolidin-2-one-4-carboxylate and (tetrahydro)pyrimidin-2-one-5-carboxylate via an efficient modification of the Hofmann rearrangement. The reaction affords the desired compounds from protected asparagine or glutamine in good to high yield, using PhI(OAc)2 as the source of iodine(III).

Relevance: 90.00%

Abstract:

The aim of this thesis is to go through different approaches for proving expressiveness properties in several concurrent languages. We analyse four different calculi, exploiting a different technique for each one. We begin with the analysis of a synchronous language: we explore the expressiveness of a fragment of CCS! (a variant of Milner's CCS where replication is considered instead of recursion) with respect to the existence of faithful encodings (i.e. encodings that respect the behaviour of the encoded model without introducing unnecessary computations) of models of computability strictly less expressive than Turing Machines, namely grammars of types 1, 2 and 3 in the Chomsky hierarchy. We then move to asynchronous languages and study full abstraction for two Linda-like languages. Linda can be considered the asynchronous version of CCS plus a shared memory (a multiset of elements) that is used for storing messages. After having defined a denotational semantics based on traces, we obtain fully abstract semantics for both languages by using suitable abstractions in order to identify different traces which do not correspond to different behaviours. Since the ability of one of the two variants to recognise multiple occurrences of messages in the store (which accounts for an increase in expressiveness) is reflected in a less complex abstraction, we then study other languages where multiplicity plays a fundamental role. We consider CHR (Constraint Handling Rules), a language which uses multi-headed (guarded) rules. We prove that multiple heads augment the expressive power of the language: indeed, we show that if we restrict to rules whose head contains at most n atoms, we obtain a hierarchy of languages with increasing expressiveness (i.e. the CHR language allowing at most n atoms in the heads is more expressive than the language allowing at most m atoms, with m < n). Finally, we consider a calculus based on rewriting rules: by imposing restrictions on the rewriting rules, several dialects of the calculus can be obtained. We analyse the expressive power of some of these dialects by focusing on decidability and undecidability for problems like reachability and coverability.

Relevance: 90.00%

Abstract:

The application of Concurrency Theory to Systems Biology is in its earliest stage of progress. The metaphor of cells as computing systems by Regev and Shapiro opened the way to the employment of concurrent languages for the modelling of biological systems, and their peculiar characteristics led to the design of many bio-inspired formalisms which achieve higher faithfulness and specificity. In this thesis we present pi@, an extremely simple and conservative extension of the pi-calculus which represents a keystone in this respect thanks to its expressiveness capabilities. The pi@ calculus is obtained by adding polyadic synchronisation and priority to the pi-calculus, in order to achieve compartment semantics and atomicity of complex operations, respectively. In its direct application to biological modelling, the stochastic variant of the calculus, Spi@, is shown to be able to model consistently several phenomena, such as the formation of molecular complexes, the hierarchical subdivision of the system into compartments, inter-compartment reactions, and the dynamic reorganisation of the compartment structure consistently with volume variation. The pivotal role of pi@ is evidenced by its capability of encoding, in a compositional way, several bio-inspired formalisms, so that it represents the optimal core of a framework for the analysis and implementation of bio-inspired languages. In this respect, the encodings of BioAmbients, Brane Calculi and a variant of P Systems in pi@ are formalised. The conciseness of their translation into pi@ allows their indirect comparison by means of their encodings; furthermore, it provides a ready-to-run implementation of minimal effort whose correctness is guaranteed by the correctness of the respective encoding functions. Further important results of general validity are stated on the expressive power of priority. Several impossibility results are described, which clearly establish the superior expressiveness of prioritised languages and the problems arising in the attempt to provide a parallel implementation of them. To this aim, a new setting in distributed computing (the last man standing problem) is singled out and exploited to prove the impossibility of providing a purely parallel implementation of priority by means of point-to-point or broadcast communication.

Relevance: 90.00%

Abstract:

The research performed during the PhD course was intended to assess innovative applications of near-infrared spectroscopy in reflectance (NIR) in the beer production chain. The purpose is to measure by NIR the "malting quality" (MQ) parameter of barley, to monitor the malting process and to determine whether a certain type of barley is suitable for the production of beer and spirits; moreover, NIR is applied to monitor the brewing process. First of all, it was possible to check the quality of raw materials such as barley, maize and barley malt using a rapid, non-destructive and reliable method with a low error of prediction. The most interesting result obtained at this level was that the repeatability of the NIR calibration models developed was comparable with that of the reference methods. Moreover, for malt, new kinds of validation were used in order to estimate the real predictive power of the proposed calibration models and to understand the long-term effects. Furthermore, the precision of all the calibration models developed for malt evaluation was estimated and statistically compared with the reference methods, with good results. Then, new calibration models were developed for monitoring the malting process, measuring the moisture content and other malt quality parameters during germination. It was also possible to obtain by NIR an estimate of the "malting quality" (MQ) of barley, to predict whether its germination will be rapid and uniform, and to establish whether a certain type of barley is suitable for the production of beer and spirits. Finally, the NIR technique was applied to monitor the brewing process, using correlations between NIR spectra of beer and analytical parameters, and to assess beer quality. These innovative results are potentially very useful for the actors involved in the beer production chain, especially the calibration models suitable for the control of the malting process and for the assessment of the “malting quality” of barley, which should be investigated further in future studies.
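The abstract does not state which regression algorithm was used for the NIR calibration models; purely as a hedged illustration of a typical chemometric workflow (PLS regression with an RMSEP-type validation figure, using invented data and variable names, not the thesis' actual pipeline), a sketch could look like this:

```python
# Illustrative NIR calibration sketch (not the thesis' actual pipeline).
# X: NIR reflectance spectra (n_samples x n_wavelengths)
# y: reference values of a hypothetical malt quality parameter (e.g. moisture)
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 700))                          # synthetic spectra
y = 0.3 * X[:, 100] + rng.normal(scale=0.05, size=120)   # synthetic reference data

X_cal, X_val, y_cal, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

pls = PLSRegression(n_components=8)      # number of latent variables to be optimised
pls.fit(X_cal, y_cal)

y_pred = pls.predict(X_val).ravel()
rmsep = np.sqrt(mean_squared_error(y_val, y_pred))       # error of prediction
print(f"RMSEP = {rmsep:.3f}")
```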

Relevance: 90.00%

Abstract:

MultiProcessor Systems-on-Chip (MPSoCs) are the core of today's and next-generation computing platforms. Their relevance in the global market continuously increases, as they occupy an important role both in everyday-life products (e.g. smartphones, tablets, laptops, cars) and in strategic market sectors such as aviation, defense, robotics and medicine. Despite the impressive performance improvements of recent years, processor manufacturers have had to deal with issues, commonly called “Walls”, that have hindered processor development. After the famous “Power Wall”, which limited the maximum frequency of a single core and marked the birth of the modern multiprocessor system-on-chip, the “Thermal Wall” and the “Utilization Wall” are the current key limiters of performance improvements. The former concerns the damaging effects of high temperature on the chip caused by large power-density dissipation, whereas the latter refers to the impossibility of fully exploiting the computing power of the processor due to limitations on power and temperature budgets. In this thesis we face these challenges by developing efficient and reliable solutions able to maximize performance while keeping the maximum temperature below a fixed critical threshold and saving energy. This has been made possible by exploiting the Model Predictive Control (MPC) paradigm, which solves an optimization problem subject to constraints in order to find the optimal control decisions for the future interval. A fully distributed MPC-based thermal controller with far lower complexity than a centralized one has been developed. The feasibility of the control, together with properties useful for simplifying the control design, has been proved by studying a partial-differential-equation thermal model. Finally, the controller has been efficiently included in more complex control schemes able to minimize energy consumption and deal with mixed-criticality tasks.
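As a hedged sketch of the MPC idea described above (a linear thermal state-space model with a hard temperature ceiling, maximising allocated power over the prediction horizon; the matrices, horizon and bounds are invented for illustration and cvxpy is used as a generic convex solver, so this is not the controller implemented in the thesis), one control step could be posed as:

```python
# Minimal MPC sketch for thermal capping (toy values, not the thesis model).
import numpy as np
import cvxpy as cp

n_cores, N = 4, 10                       # number of cores, prediction horizon
A = 0.9 * np.eye(n_cores) + 0.02         # toy dynamics: T[k+1] = A T[k] + B p[k]
B = 0.5 * np.eye(n_cores)                # (T is the temperature rise over ambient)
T_max, p_max = 35.0, 2.0                 # thermal cap [°C over ambient], power cap [W]
T0 = np.array([15.0, 27.0, 20.0, 25.0])  # current temperature rises

T = cp.Variable((N + 1, n_cores))        # predicted temperatures
p = cp.Variable((N, n_cores))            # power (performance proxy) to allocate

constraints = [T[0] == T0]
for k in range(N):
    constraints += [T[k + 1] == A @ T[k] + B @ p[k],    # thermal model
                    p[k] >= 0, p[k] <= p_max,           # actuator bounds
                    T[k + 1] <= T_max]                  # hard thermal constraint

# Maximise total allocated power (performance) over the horizon.
prob = cp.Problem(cp.Maximize(cp.sum(p)), constraints)
prob.solve()
print("first control move:", p.value[0])                # only the first move is applied
```

In a receding-horizon scheme only the first computed move is applied, the temperatures are re-measured, and the problem is solved again at the next step; the thesis' distributed variant splits this optimization across cores.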

Relevance: 90.00%

Abstract:

This thesis presents different techniques designed to drive a swarm of robots in an a priori unknown environment, moving the group from a starting area to a final one while avoiding obstacles. The presented techniques are based on two theories, used alone or in combination: Swarm Intelligence (SI) and Graph Theory. Both theories are based on the study of interactions between entities (also called agents or units) in Multi-Agent Systems (MAS); the first belongs to the Artificial Intelligence context and the second to the Distributed Systems context. These theories, each from its own point of view, exploit the emergent behaviour that comes from the interactive work of the entities in order to achieve a common goal. The flexibility and adaptability of the swarm have been exploited with the aim of overcoming and minimizing difficulties and problems that can affect one or more units of the group, with minimal impact on the whole group and on the common target. Another aim of this work is to show the importance of the information shared between the units of the group, such as the communication topology, because it helps to keep the environmental information detected by each single agent up to date across the swarm. Swarm Intelligence has been applied to the presented technique through the Particle Swarm Optimization (PSO) algorithm, taking advantage of its features as a navigation system. Graph Theory has been applied by exploiting Consensus and the agreement protocol, with the aim of maintaining the units in a desired and controlled formation. This approach has been followed in order to preserve the power of PSO and to control part of its random behaviour with a distributed control algorithm like Consensus.
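A hedged sketch of how a PSO velocity update can be combined with a consensus term on the agents' positions is given below; the gains, the ring topology and the toy fitness function are placeholders for illustration, not the actual algorithms or tuning of the thesis:

```python
# Illustrative PSO + consensus update for a small swarm in 2D (toy example).
import numpy as np

rng = np.random.default_rng(1)
n, dim = 6, 2
goal = np.array([8.0, 8.0])

def fitness(pos):                      # toy cost: distance to the goal area
    return np.linalg.norm(pos - goal)

# ring communication topology (adjacency matrix)
A = np.zeros((n, n))
for i in range(n):
    A[i, (i - 1) % n] = A[i, (i + 1) % n] = 1.0

x = rng.uniform(0.0, 2.0, size=(n, dim))    # positions
v = np.zeros((n, dim))                       # velocities
pbest = x.copy()
gbest = x[np.argmin([fitness(p) for p in x])].copy()

w, c1, c2, eps = 0.7, 1.5, 1.5, 0.1          # inertia, cognitive, social, consensus gains
for _ in range(200):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    # PSO velocity/position update (navigation towards the goal)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    # Consensus step: each unit is pulled towards its neighbours
    # (a real formation controller would apply consensus to desired offsets).
    x = x + eps * (A @ x - A.sum(axis=1, keepdims=True) * x)
    # Update personal and global bests
    for i in range(n):
        if fitness(x[i]) < fitness(pbest[i]):
            pbest[i] = x[i]
    gbest = pbest[np.argmin([fitness(p) for p in pbest])].copy()

print("swarm centroid:", x.mean(axis=0))
```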

Relevance: 90.00%

Abstract:

The Treaty of Lisbon has brought remarkable changes and innovations to the European Union. As far as the Council of Ministers of the European Union (“the Council” hereinafter) is concerned, there are two significant innovations: double qualified majority voting and the new rotating Presidency scheme, which are intended to make the working of the Council more efficient, stable and consistent. With the modifications relating to the other key institutions, the Commission and the European Parliament, and with certain procedures being re-codified, the power of the Council varies accordingly, and the resulting inter-institutional balance calls for further research. As the Council is one of the co-legislators of the Union, its legislative function is likely to be influenced, positively or negatively, by the internal innovations and by the inter-institutional re-balancing. Has the legislative function of the Council been reinforced or not? How could the Council better reach the functional goal designed by the Treaties’ drafters? How should the Council’s evolution after the Lisbon Treaty be evaluated in the light of European integration? This thesis attempts to find the answers by analyzing the two main internal innovations and the inter-institutional re-balancing.
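For concreteness, the double qualified majority introduced by the Treaty of Lisbon (Art. 16(4) TEU) requires at least 55% of the Council members, comprising at least fifteen of them, representing at least 65% of the Union's population, with a blocking minority of at least four members. A small sketch of that arithmetic (a hypothetical helper written only to illustrate the rule, not part of the thesis) is:

```python
# Hypothetical check of the Lisbon "double majority" rule (Art. 16(4) TEU).
def qualified_majority(states_in_favour, population_in_favour,
                       n_member_states, total_population):
    """states_in_favour: number of member states voting in favour;
    population_in_favour: their combined population."""
    states_ok = (states_in_favour >= 0.55 * n_member_states
                 and states_in_favour >= 15)
    population_ok = population_in_favour >= 0.65 * total_population
    # A blocking minority must include at least four member states:
    # fewer than four opponents cannot block, even below the population threshold.
    blocking_minority_possible = (n_member_states - states_in_favour) >= 4
    return states_ok and (population_ok or not blocking_minority_possible)
```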

Relevance: 90.00%

Abstract:

Organizational and institutional scholars have advocated the need to examine how processes originating at the individual level can change organizations or even create new organizational arrangements able to affect institutional dynamics (Chreim et al., 2007; Powell & Colyvas, 2008; Smets et al., 2012). Conversely, research on identity work has mainly investigated the different ways individuals can modify the boundaries of their work in actual occupations, thus paying particular attention to ‘internal’ self-crafting (e.g. Wrzesniewski & Dutton, 2001). Drawing from the literatures on possible and alternative selves and on positive organizational scholarship (e.g. Obodaru, 2012; Roberts & Dutton, 2009), my argument is that individuals’ identity work can go well beyond the boundaries of internal self-crafting to the creation of new organizational arrangements. In this contribution I analyze, through multiple case studies, healthcare professionals who spontaneously participated in the creation of new organizational arrangements, namely health structures called Community Hospitals. The contribution develops this form of identity work by building a grounded model. My findings disclose the process that leads from the search for the enactment of different self-concepts to positive identities, through the creation of a new organizational arrangement. I contend that this is a particularly complex form of collective identity work because, to be successful, it requires concerted actions by several internal, external and institutional actors, as well as balanced tensions that enable, at the same time, individuals’ aspirations and organizational equilibrium. I name this process organizational collective crafting. Moreover, I inquire into the role of context in supporting the triggering power of those unrealized selves. I contribute to the comprehension of the consequences of self-comparisons, organizational identity variance, and positive identity. The study offers important insights into how identity work originating from individuals can influence organizational outcomes and larger social systems.

Relevance: 90.00%

Abstract:

The aim of this work was to identify markers associated with production traits in the pig genome using different approaches. We focused on the Italian Large White pig breed, using Genome-Wide Association Studies (GWAS) and applying a selective genotyping approach to increase the power of the analyses. Furthermore, we searched the pig genome using Next Generation Sequencing (NGS) Ion Torrent technology, combining the selective genotyping approach with deep sequencing for SNP discovery. Two other studies were carried out with a different approach: allele frequency changes for SNPs affecting candidate genes, and at the genome-wide level, were analysed to identify selection signatures driven by the selection program during the last 20 years. This approach confirmed that a great number of markers may affect production traits and that they are captured by classical selection programs. The GWAS revealed 123 significant or suggestively significant SNPs associated with Back Fat Thickness and 229 associated with Average Daily Gain. Sixteen Copy Number Variant Regions were found to be more frequent in lean or in fat pigs, showing that different copy numbers of those regions could have a limited impact on fatness. These regions often appear to be involved in food intake and behavior, besides affecting genes involved in metabolic pathways and their expression. By combining NGS sequencing with the selective genotyping approach, new variants were discovered, and at least 54 of them are worth analysing in association studies. The study of groups of pigs that underwent stringent selection showed that the allele frequencies of some loci can change drastically if the loci are linked to traits of interest for the selection schemes. These approaches could, in the future, be integrated into genomic selection plans.
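As a hedged illustration of the selective-genotyping idea (comparing allele counts between the two phenotypic tails; the counts are invented and a plain chi-square test stands in for the actual GWAS pipeline used in the thesis):

```python
# Toy allele-frequency comparison between divergent phenotypic tails
# (selective genotyping sketch; counts are invented for illustration).
import numpy as np
from scipy.stats import chi2_contingency

# Allele counts at one SNP: rows = tails (lean, fat), columns = alleles (A, B)
counts = np.array([[180, 120],    # "lean" tail
                   [130, 170]])   # "fat" tail

chi2, p_value, dof, expected = chi2_contingency(counts)
freq_lean = counts[0, 0] / counts[0].sum()
freq_fat = counts[1, 0] / counts[1].sum()
print(f"allele A frequency: lean={freq_lean:.2f}, fat={freq_fat:.2f}, p={p_value:.2e}")
```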

Relevance: 90.00%

Abstract:

The energy harvesting research field has grown considerably in the last decade due to the increasing interest in energy-autonomous sensing systems, which require smart and efficient interfaces for extracting power from the energy source, as well as power management (PM) circuits. This thesis investigates the design trade-offs for minimizing the intrinsic power of PM circuits, in order to allow operation with very weak energy sources. For validation purposes, three different integrated power converter and PM circuits for energy harvesting applications are presented. They have been designed for nano-power operation, and the single-source converters can operate with input power lower than 1 μW. The first IC is a buck-boost converter for piezoelectric transducers (PZ) implementing Synchronous Electrical Charge Extraction (SECE), a non-linear energy extraction technique. Moreover, the Residual Charge Inversion technique is exploited for extracting energy from PZ with weak and irregular excitations (i.e. lower voltage), and the implemented PM policy, named Two-Way Energy Storage, considerably reduces the start-up time of the converter, improving the overall conversion efficiency. The second proposed IC is a general-purpose buck-boost converter for low-voltage DC energy sources, up to 2.5 V. An ultra-low-power maximum power point tracking (MPPT) circuit has been designed in order to track variations of the source power. Furthermore, a capacitive boost circuit has been included, allowing converter start-up from a source voltage VDC0 = 223 mV. A nano-power programmable linear regulator is also included in order to provide a stable voltage to the load. The third IC implements a heterogeneous multi-source buck-boost converter. It provides up to 9 independent input channels, of which 5 are specific to PZ (with SECE) and 4 to DC energy sources with MPPT. The inductor is shared among the channels, and an arbiter, designed with asynchronous logic to reduce energy consumption, avoids simultaneous access to the buck-boost core, with a dynamic schedule based on source priority.
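The MPPT scheme of the second IC is implemented in ultra-low-power hardware and is not detailed in the abstract; purely as an illustration of the maximum power point tracking principle, a software perturb-and-observe loop (hypothetical, not the circuit implemented in the thesis) would look like:

```python
# Perturb-and-observe MPPT sketch (illustrative only; the thesis implements
# an ultra-low-power hardware MPPT, not this software loop).
def p_and_o_step(v_ref, v_meas, i_meas, state, dv=0.01):
    """Return the updated operating-voltage reference for the DC source."""
    power = v_meas * i_meas
    if power < state["last_power"]:
        state["direction"] *= -1          # power dropped: reverse the perturbation
    state["last_power"] = power
    return v_ref + state["direction"] * dv

# usage sketch (measurements would come from the converter's input sensing)
state = {"last_power": 0.0, "direction": +1}
v_ref = 0.3
for v_meas, i_meas in [(0.30, 1.0e-5), (0.31, 1.1e-5), (0.32, 1.05e-5)]:
    v_ref = p_and_o_step(v_ref, v_meas, i_meas, state)
```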