80 results for EXPLOITING MULTICOMMUTATION


Relevance: 10.00%

Abstract:

The Peer-to-Peer (P2P) network paradigm is drawing the attention of both end users and researchers for its features. P2P networks shift from the classic client-server approach to a high level of decentralization where there is no central control and all nodes should be able not only to request services but to provide them to other peers as well. While such a high level of decentralization can lead to interesting properties like scalability and fault tolerance, it also raises many new problems. A key feature of many P2P systems is openness: everybody is potentially able to join a network with no need for subscription or payment. The combination of openness and lack of central control makes it feasible for a user to free-ride, that is, to increase one's own benefit by using services without allocating resources to satisfy other peers' requests. One of the main goals when designing a P2P system is therefore to achieve cooperation between users. Given that P2P systems are based on simple local interactions of many peers having partial knowledge of the whole system, an interesting way to achieve desired properties on a system scale is to obtain them as emergent properties of the many interactions occurring at the local node level. Two methods are typically used to address the problem of cooperation in P2P networks: 1) engineering emergent properties when designing the protocol; 2) studying the system as a game and applying game-theoretic techniques, especially to find Nash equilibria in the game and to reach them, making the system stable against possible deviant behaviors. In this work we present an evolutionary framework to enforce cooperative behaviour in P2P networks that is an alternative to both methods mentioned above.
Our approach is based on an evolutionary algorithm inspired by computational sociology and evolutionary game theory, in which each peer periodically tries to copy another peer that is performing better. The proposed algorithms, called SLAC and SLACER, draw inspiration from the tag systems originated in computational sociology; the main idea behind them is to have low-performance nodes copy high-performance ones. The algorithm is run locally by every node and leads to an evolution of the network both in its topology and in the nodes' strategies. Initial tests with a simple Prisoner's Dilemma application show that SLAC is able to bring the network to a state of high cooperation independently of the initial network conditions. Interesting results are obtained when studying the effect of cheating nodes on the SLAC algorithm: in some cases selfish nodes rationally exploiting the system for their own benefit can actually improve system performance from the point of view of cooperation formation. The final step is to apply our results to more realistic scenarios. We put our efforts into studying and improving the BitTorrent protocol. BitTorrent was chosen not only for its popularity but also because it has many points in common with the SLAC and SLACER algorithms, ranging from its game-theoretic inspiration (a tit-for-tat-like mechanism) to its swarm topology. We found fairness, understood as the ratio between uploaded and downloaded data, to be a weakness of the original BitTorrent protocol, and we drew on the knowledge of cooperation formation and maintenance mechanisms derived from the development and analysis of SLAC and SLACER to improve fairness and tackle free-riding and cheating in BitTorrent. We produced an extension of BitTorrent called BitFair that has been evaluated through simulation and has shown its ability to enforce fairness and to counter free-riding and cheating nodes.
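The copy-the-better-peer evolution at the heart of SLAC can be sketched in a few lines. The toy simulation below is an illustrative reconstruction, not the thesis's implementation: the node count, Prisoner's Dilemma payoffs, mutation rate and rewiring rule are all assumed for illustration.

```python
import random

random.seed(42)  # deterministic toy run

N = 60
COOPERATE, DEFECT = "C", "D"

# Each node holds a strategy and a small set of neighbours (its local view).
strategy = {i: random.choice([COOPERATE, DEFECT]) for i in range(N)}
links = {i: set(random.sample([j for j in range(N) if j != i], 4)) for i in range(N)}

def payoff(node):
    """Average Prisoner's Dilemma payoff of `node` against its neighbours
    (payoffs T=5, R=3, P=1, S=0 are assumed for illustration)."""
    if not links[node]:
        return 0.0
    table = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
    return sum(table[(strategy[node], strategy[n])] for n in links[node]) / len(links[node])

def slac_step():
    """One SLAC-style round: a random node compares itself with a random
    peer and, if the peer performs better, copies its strategy and links
    (i.e. "moves" next to the peer); a small mutation keeps diversity."""
    node, peer = random.sample(range(N), 2)
    if payoff(peer) > payoff(node):
        strategy[node] = strategy[peer]
        links[node] = set(links[peer]) | {peer}
        links[node].discard(node)
    if random.random() < 0.01:
        strategy[node] = random.choice([COOPERATE, DEFECT])

before = sum(s == COOPERATE for s in strategy.values()) / N
for _ in range(4000):
    slac_step()
after = sum(s == COOPERATE for s in strategy.values()) / N
print(f"cooperator fraction: {before:.2f} -> {after:.2f}")
```

Whether high cooperation emerges depends on the parameters; in SLAC it is the joint rewiring of topology and strategy, not the payoffs alone, that drives the clustering of cooperators.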

Relevance: 10.00%

Abstract:

The Assimilation in the Unstable Subspace (AUS) method was introduced by Trevisan and Uboldi in 2004, and developed by Trevisan, Uboldi and Carrassi, to minimize the analysis and forecast errors by exploiting the flow-dependent instabilities of the forecast-analysis cycle system, which may be thought of as a system forced by observations. In the AUS scheme the assimilation is obtained by confining the analysis increment to the unstable subspace of the forecast-analysis cycle system, so that it has the same structure as the dominant instabilities of the system. The unstable subspace is estimated by Breeding on the Data Assimilation System (BDAS). AUS-BDAS has already been tested in realistic models and observational configurations, including a Quasi-Geostrophic model and a high-dimensional, primitive equation ocean model; the experiments include both fixed and "adaptive" observations. In these contexts, the AUS-BDAS approach greatly reduces the analysis error, with reasonable computational costs for data assimilation compared, for example, to a prohibitive full Extended Kalman Filter. This is a follow-up study in which we revisit the AUS-BDAS approach in the more basic, highly nonlinear Lorenz 1963 convective model. We run observing system simulation experiments in a perfect model setting, as well as with two types of model error: random and systematic. In the different configurations examined, and in a perfect model setting, AUS once again shows better efficiency than other advanced data assimilation schemes. In the present study, we develop an iterative scheme that leads to a significant improvement of the overall assimilation performance with respect to standard AUS as well. In particular, it boosts the efficiency of tracking regime changes, at a low computational cost. Other data assimilation schemes need estimates of ad hoc parameters, which have to be tuned for the specific model at hand.
In Numerical Weather Prediction models, tuning of parameters, and in particular estimating the model error covariance matrix, may turn out to be quite difficult. Our proposed approach, instead, may be easier to implement in operational models.

Relevance: 10.00%

Abstract:

The aim of this thesis is to go through different approaches for proving expressiveness properties in several concurrent languages. We analyse four different calculi, exploiting a different technique for each one. We begin with the analysis of a synchronous language: we explore the expressiveness of a fragment of CCS! (a variant of Milner's CCS where replication is considered instead of recursion) with respect to the existence of faithful encodings (i.e. encodings that respect the behaviour of the encoded model without introducing unnecessary computations) of models of computability strictly less expressive than Turing machines, namely grammars of types 1, 2 and 3 in the Chomsky hierarchy. We then move to asynchronous languages and study full abstraction for two Linda-like languages. Linda can be considered the asynchronous version of CCS plus a shared memory (a multiset of elements) that is used for storing messages. After having defined a denotational semantics based on traces, we obtain fully abstract semantics for both languages by using suitable abstractions that identify different traces which do not correspond to different behaviours. Since the ability of one of the two variants to recognise multiple occurrences of messages in the store (which accounts for an increase in expressiveness) is reflected in a less complex abstraction, we then study other languages where multiplicity plays a fundamental role. We consider CHR (Constraint Handling Rules), a language which uses multi-headed (guarded) rules. We prove that multiple heads augment the expressive power of the language: indeed, we show that restricting to rules whose head contains at most n atoms generates a hierarchy of languages with increasing expressiveness (i.e. the CHR language allowing at most n atoms in the heads is more expressive than the language allowing at most m atoms, with m < n).
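The shared store of the Linda-like languages discussed above, and the multiplicity test that separates the two variants, can be illustrated with a small sketch. The class below is a hypothetical, non-blocking rendition for illustration only (real Linda's `in`/`rd` block until a matching message appears):

```python
from collections import Counter

class TupleSpace:
    """A minimal Linda-like shared store: a multiset of messages with
    non-blocking variants of the classic operations."""

    def __init__(self):
        self.store = Counter()

    def out(self, msg):
        """Emit a message into the shared store (asynchronous send)."""
        self.store[msg] += 1

    def in_(self, msg):
        """Consume one occurrence of `msg`; returns False if absent
        (a real Linda `in` would block instead)."""
        if self.store[msg] > 0:
            self.store[msg] -= 1
            return True
        return False

    def rd(self, msg):
        """Test for the presence of `msg` without consuming it."""
        return self.store[msg] > 0

    def count(self, msg):
        """Multiplicity test: only the more expressive of the two variants
        studied in the thesis can observe this distinction."""
        return self.store[msg]

ts = TupleSpace()
ts.out("a"); ts.out("a"); ts.out("b")
print(ts.count("a"), ts.rd("b"))   # 2 True
ts.in_("a")
print(ts.count("a"))               # 1
```

A language with only `rd` cannot distinguish a store holding one copy of "a" from one holding two; exposing `count` (multiplicity) is exactly the extra observational power discussed above.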

Relevance: 10.00%

Abstract:

The recent widespread diffusion of radio-frequency identification (RFID) applications operating in the UHF band has been driven by the request for greater interrogation ranges and for greater and faster data exchange. UHF-RFID systems, exploiting a physical interaction based on electromagnetic propagation, introduce many problems that had not been fully explored for the previous generations of RFID systems (e.g. HF). Therefore, reliable tools for modeling and evaluating the radio communication between Reader and Tag within an RFID radio link are needed. The first part of the thesis discusses the impact of the real environment on system performance. In particular, an analytical closed-form formulation for the field back-scattered from the Tag antenna and a formulation for the lower bound of the BER achievable at the Reader side are presented, considering different possible electromagnetic impairments. By means of these formulations, of the analysis of the RFID link operating in near-field conditions, and of some electromagnetic/system-level co-simulations, an in-depth study of the dimensioning parameters and the actual performance of the systems is discussed and analyzed, showing some relevant properties and trade-offs in transponder and reader design. Moreover, a new low-cost approach to extend the read range of passive UHF-RFID systems is discussed. To check the reliability of the analysis approaches and of the innovative proposals, some reference transponder antennas have been designed and an extensive measurement campaign has been carried out, with satisfactory results. Finally, some commercial ad hoc transponders for industrial applications have been designed in cooperation with Datalogic S.p.A.; some guidelines and results are briefly presented.
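As a back-of-the-envelope companion to the link analysis above, the forward-link-limited read range of a passive UHF tag can be estimated from the standard Friis equation. This is the textbook free-space formula, not the closed-form backscatter model derived in the thesis, and all numeric parameter values are illustrative:

```python
import math

def read_range_m(eirp_w=3.28, tag_gain_dbi=2.0, tag_sensitivity_dbm=-18.0,
                 freq_hz=866e6, polarization_loss_db=3.0):
    """Forward-link-limited read range of a passive UHF tag from the Friis
    equation: the distance at which the power harvested by the tag drops to
    its activation threshold. Default values (ETSI-like EIRP, dipole-like
    tag gain, typical chip sensitivity) are assumptions for illustration."""
    wavelength = 3e8 / freq_hz
    g_tag = 10 ** (tag_gain_dbi / 10)                 # dBi -> linear
    l_pol = 10 ** (polarization_loss_db / 10)         # dB  -> linear
    p_th = 10 ** (tag_sensitivity_dbm / 10) / 1000    # dBm -> W
    # P_tag = EIRP * G_tag * (lambda / (4 pi d))^2 / L_pol ; solve for d at P_tag = P_th.
    return (wavelength / (4 * math.pi)) * math.sqrt(eirp_w * g_tag / (p_th * l_pol))

print(round(read_range_m(), 1))
```

In a real environment, multipath fading and detuning reduce this free-space figure; the backscatter (reverse) link and the reader sensitivity can also become the limiting factor, which is where the thesis's BER bound applies.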

Relevance: 10.00%

Abstract:

The aim of this Ph.D. project has been the design and characterization of new and more efficient luminescent tools, in particular sensors and labels, for analytical chemistry, medical diagnostics and imaging. Indeed, the wide range of techniques based on luminescence spectroscopy can provide both the increasing temporal and spatial resolution demanded by those fields and the sensitivity required to reach single-molecule detection. As far as the development of new chemical sensors is concerned, as chemists we were interested in the preparation of new, efficient sensing materials. In this context, we kept developing new molecular chemosensors for different classes of analytes by exploiting the supramolecular approach. In particular, we studied a family of luminescent tetrapodal hosts based on aminopyridinium units with pyrenyl groups for the detection of anions. These systems exhibited noticeable changes in their photophysical properties depending on the nature of the anion; in particular, addition of chloride resulted in a conformational change, giving an initial increase in excimer emission. A good selectivity for dicarboxylic acids was also found. In the search for higher sensitivities, we also turned our attention to systems able to perform amplification effects. In this context we described the metal-ion binding properties of three photoactive poly(arylene ethynylene) co-polymers with different complexing units and we highlighted, for one of them, a ten-fold amplification of the response upon addition of Zn2+, Cu2+ and Hg2+ ions. In addition, we were able to demonstrate the formation of complexes with Yb3+ and Er3+ and an efficient sensitization of their typical metal-centered NIR emission upon excitation of the polymer structure, a feature of particular interest for possible applications in optical imaging and in optical amplification for telecommunication purposes.
An amplification effect was also observed during this research in silica nanoparticles derivatized with a suitable zinc probe. In this case we were able to prove, for the first time, that nanoparticles can work as "off-on" chemosensors with signal amplification. Fluorescent silica nanoparticles can thus be seen as innovative multicomponent systems in which the organization of photophysically active units gives rise to fruitful collective effects. These precious effects can be exploited for biological imaging, medical diagnostics and therapeutics, as evidenced also by some results reported in this thesis. In particular, the observed amplification effect has been obtained thanks to a suitable organization of molecular probe units onto the surface of the nanoparticles. In the effort of reaching a deeper insight into the mechanisms which lead to the final amplification effects, we also attempted to find a correlation between the synthetic route and the final organization of the active molecules in the silica network, and thus with the mutual interactions which result in the emergent, collective behavior responsible for the desired signal amplification. In this context, we first investigated the process of formation of silica nanoparticles doped with a pyrene derivative and showed that the dyes are not uniformly dispersed inside the silica matrix; thus, core-shell structures can form spontaneously in a one-step synthesis. Moreover, as far as the design of new labels is concerned, we reported a new synthetic approach to obtain a class of robust, biocompatible silica core-shell nanoparticles with long-term stability. Taking advantage of this new approach, we also presented the synthesis and photophysical properties of core-shell NIR-absorbing and NIR-emitting materials that proved to be very valuable for in-vivo imaging.
In general, the dye-doped silica nanoparticles prepared in the framework of this project combine unique properties, such as very high brightness, due to the possibility of including many fluorophores per nanoparticle; high stability, because of the shielding effect of the silica matrix; and, to date, no toxicity, with a simple and low-cost preparation. All these features make these nanostructures suitable to reach the low detection limits that are nowadays required for effective clinical and environmental applications, fulfilling in this way the initial expectations of this research project.

Relevance: 10.00%

Abstract:

This Doctoral Dissertation is triggered by an emergent trend: firms are increasingly referring to investments in corporate venture capital (CVC) as a means to create new competencies and foster the search for competitive advantage through the use of external resources. CVC is generally defined as the practice by non-financial firms of placing equity investments in entrepreneurial companies. Thus, CVC can be interpreted (i) as a key component of corporate entrepreneurship (acts of organizational creation, renewal, or innovation that occur within or outside an existing organization) and (ii) as a particular form of venture capital (VC) investment where the investor is not a traditional financial institution but an established corporation. My Dissertation thus simultaneously refers to two streams of research: corporate strategy and venture capital. In particular, I directed my attention to three topics of particular relevance for better understanding the role of CVC. In the first study, I started from the observation that competitive environments with rapid technological change increasingly force established corporations to access knowledge from external sources. Firms thus extensively engage in external business development activities through different forms of collaboration with partners. While the underlying process common to these mechanisms is one of knowledge access, they are substantially different. The aim of the first study is to figure out how corporations choose among CVC, alliance, joint venture and acquisition. I addressed this issue by adopting a multi-theoretical framework in which the resource-based view and real options theory are integrated.
While the first study mainly looked into the use of external resources for corporate growth, in the second work I combined an internal and an external perspective to figure out the relationship between CVC investments (exploiting external resources) and a more traditional strategy to create competitive advantage, that is, corporate diversification (based on internal resources). Adopting an explorative lens, I investigated how these different modes of renewing current corporate capabilities interact with each other; more precisely, is CVC a complement or a substitute to corporate diversification? Finally, the third study focused on the more general field of VC, investigating (i) how VC firms evaluate the patent portfolios of their potential investee companies and (ii) whether the ability to evaluate technology and intellectual property varies depending on the type of investor, in particular as concerns the distinction between specialized versus generalist VCs and independent versus corporate VCs. This topic is motivated by two observations. First, it is not yet clear which determinants of patent value are primarily considered by VCs in their investment decisions. Second, VCs are not all alike in terms of technological experience, and these differences need to be taken into account.

Relevance: 10.00%

Abstract:

This thesis gathers the work carried out by the author in the last three years of research, and it concerns the study and implementation of algorithms to coordinate and control a swarm of mobile robots moving in unknown environments. In particular, the author's attention is focused on two different approaches to solve two different problems. The first algorithm considered in this work deals with the possibility of decomposing a complex main task into many simple subtasks by exploiting a decentralized implementation of the so-called Null Space Behavioral paradigm. This approach to the problem of merging different subtasks with assigned priorities is slightly modified in order to handle critical situations that can arise when robots are moving through an unknown environment. In fact, issues can occur when one or more robots get stuck in local minima: a smart strategy to avoid deadlock situations is provided by the author, and the algorithm is validated by simulative analysis. The second problem deals with the use of concepts borrowed from graph theory to control a group of differential-wheel robots by exploiting the Laplacian solution of the consensus problem. Constraints on the swarm communication topology have been introduced through the use of a range and bearing platform developed at the Distributed Intelligent Systems and Algorithms Laboratory (DISAL), EPFL (Lausanne, CH), where part of the author's work was carried out. The control algorithm is validated by simulation analysis and then demonstrated by a team of four robots engaged in a formation mission. To conclude, the capabilities of the algorithm based on the local solution of the consensus problem for differential-wheel robots are demonstrated in an application scenario where nine robots are engaged in a hunting task.
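The Laplacian-based consensus mentioned above can be illustrated with a minimal discrete-time rendezvous example. The four-robot cycle topology and the step size below are assumptions for illustration; a formation mission runs the same iteration on positions minus the desired offsets.

```python
import numpy as np

# Undirected communication graph on 4 robots (range-limited topology assumed):
# edges 0-1, 1-2, 2-3, 3-0 form a cycle.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
n = 4

# Graph Laplacian L = D - A.
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
L = np.diag(A.sum(axis=1)) - A

# Discrete-time consensus: x(k+1) = x(k) - eps * L x(k), stable for eps < 1/deg_max.
positions = np.array([[0.0, 0.0], [4.0, 0.0], [4.0, 4.0], [0.0, 4.0]])
eps = 0.3
x = positions.copy()
for _ in range(200):
    x = x - eps * L @ x  # each robot averages toward its neighbours

print(x)
```

Because the graph is connected, all rows of `x` converge to the centroid of the initial positions; each robot only ever uses relative information about its neighbours, which is exactly what a range and bearing platform provides.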

Relevance: 10.00%

Abstract:

Adhesive bonding provides solutions for realizing cost-effective and low-weight aircraft fuselage structures, in particular where Damage Tolerance (DT) is the design criterion. Bonded structures that combine Metal Laminates (MLs) and, eventually, selective reinforcements can guarantee slow crack propagation, crack arrest and large damage capability. Optimizing a design to exploit the benefits of bonded structures incorporating selective reinforcement requires reliable analysis tools. The effect of bonded doublers / selective reinforcements is very difficult to predict numerically or analytically, due to the complexity of the underlying mechanisms and failure modes involved. Reliable predictions of crack growth and residual strength can only be based on sound empirical and phenomenological considerations strictly related to the specific structural concept. Large flat stiffened panels that combine MLs and selective reinforcements have been tested with the purpose of investigating solutions applicable to pressurized fuselages. The large test campaign (35 stiffened panels in total) quantitatively investigated the role of the different metallic skin concepts (monolithic vs. MLs), of the aluminum, titanium and glass-fiber reinforcements, of the stringer materials and cross sections, and of the geometry and location of doublers / selective reinforcements. Bonded doublers and selective reinforcements were confirmed to be outstanding tools to improve the DT properties of structural elements with a minor weight increase. However, the choice of proper materials for the skin and the stringers must not be underestimated, since they play an important role as well. A fuselage structural concept has been developed to exploit the benefits of a metal laminate design concept in terms of high Fatigue and Damage Tolerance (F&DT) performance.
The structure used a laminated skin (0.8 mm thick), bonded stringers, two different splicing solutions and selective reinforcements (glass prepreg embedded in the laminate) under the circumferential frames. To validate the design concept, a curved panel was manufactured and tested under loading conditions representative of a single-aisle fuselage: cyclic internal pressurization plus longitudinal loads. The geometry of the panel, the design and the loading conditions were tailored to the requirements of the upper front fuselage. The curved panel was fatigue tested for 60 000 cycles before the introduction of artificial damages (cracks in the longitudinal and circumferential directions). The crack growth of the artificial damages was investigated for about 85 000 cycles. Finally, a residual strength test was performed with a "2 bay over broken frame" longitudinal crack. The reparability of this innovative concept was taken into account during design and demonstrated with the use of an external riveted repair. The F&DT curved panel test confirmed that a long fatigue life and high damage tolerance can be achieved with a hybrid metal laminate low-weight configuration. The superior fatigue life of metal laminates and the high damage tolerance provided by integrated selective reinforcements are the key concepts behind the excellent performance. The weight comparison between the innovative bonded concept and a conventional monolithic riveted design solution showed a significant potential weight saving, but the weight advantages must be traded off against the additional costs.

Relevance: 10.00%

Abstract:

Technology advances in recent years have dramatically changed the way users exploit content and services available on the Internet, by enabling pervasive and mobile computing scenarios and access to networked resources from almost everywhere, at any time, and independently of the device in use. In addition, people increasingly demand to customize their experience, by exploiting specific device capabilities and limitations, inherent features of the communication channel in use, and interaction paradigms that significantly differ from the traditional request/response one. The so-called Ubiquitous Internet scenario calls for solutions that address many different challenges, such as device mobility, session management, content adaptation, context awareness and the provisioning of multimodal interfaces. Moreover, new service opportunities demand simple and effective ways to integrate existing resources into new, value-added applications that can also undergo run-time modifications according to ever-changing execution conditions. Although service-oriented architectural models are gaining momentum to tame the increasing complexity of composing and orchestrating distributed and heterogeneous functionalities, existing solutions generally lack a unified approach and only provide support for specific Ubiquitous Internet aspects. Moreover, they usually target rather static scenarios and scarcely support the dynamic nature of pervasive access to Internet resources, which can make existing compositions soon become obsolete or inadequate, and hence in need of reconfiguration. This thesis proposes a novel middleware approach to comprehensively deal with the facets of the Ubiquitous Internet and to assist in establishing innovative application scenarios.
We claim that a truly viable ubiquity-support infrastructure must neatly decouple the distributed resources to be integrated and push any kind of content-related logic outside its core layers, keeping only management and coordination responsibilities. Furthermore, we promote an innovative, open, and dynamic resource composition model that makes it easy to describe and enforce complex scenario requirements, and to react suitably to changes in the execution conditions.

Relevance: 10.00%

Abstract:

The Triplex cell vaccine is a cancer immunopreventive cell vaccine that can almost completely prevent mammary tumor onset in HER-2/neu transgenic mice. A future translation of cancer immunoprevention from preclinical to clinical studies should take into account several aspects. The work reported in this thesis deals with the study of three of these aspects: the vaccination schedule, activity in a therapeutic set-up, and second-generation DNA vaccines. An important element in determining human acceptance of and compliance with a treatment protocol is the number of vaccinations. In order to improve the vaccination schedule, a minimal protocol was sought, i.e. a schedule consisting of a lower number of administrations than the standard protocol but with similar efficacy. A candidate optimal protocol was identified by the use of an in silico model, the SimTriplex simulator. The in vivo test of this schedule in HER-2/neu transgenic mice only partially confirmed the in silico predictions. This result shows that in silico models have the potential to aid in the search for optimal treatment protocols, provided that they are further tuned on experimental data. As a further result, this preclinical study highlighted that the kinetics of the antibody response plays a major role in determining cancer prevention, leading to the hypothesis of a threshold that must be reached rapidly and maintained for life. Early clinical trials would be performed in a therapeutic, rather than preventive, setting. Thus, the activity of the Triplex vaccine was investigated against experimental lung metastases in HER-2/neu transgenic mice, in order to evaluate whether the immunopreventive Triplex vaccine could also be effective against a pre-existing tumor mass. This preclinical model of aggressive metastatic development showed that the vaccine was an efficient treatment also for the cure of micrometastases. However, the immune mechanisms activated against the tumor mass were not antibody dependent, i.e.
different from those preventing the onset of the primary mammary carcinoma. DNA vaccines could be more easily used than cellular ones. A second generation of the Triplex vaccine, based on DNA plasmids, was evaluated in an aggressive preclinical model (BALBp53neu female mice) and compared with the preventive ability of the cellular Triplex vaccine. We observed that the Triplex DNA vaccine was as effective as the Triplex cell vaccine, while exploiting a more restricted immune stimulation.

Relevance: 10.00%

Abstract:

In this thesis, we present our work on generalising ideas, techniques and physical interpretations typical of integrable models to one of the most outstanding advances in present-day theoretical physics: the AdS/CFT correspondences. We have undertaken the problem of testing this conjectured duality from various points of view, but with a clear starting point, integrability, and with a clearly ambitious task in mind: to study the finite-size effects in the energy spectrum of certain string solutions on one side and in the anomalous dimensions of the gauge theory on the other. Of course, the final goal would be the exact comparison between these two faces of the gauge/string duality. In a few words, the original part of this work consists in the application of well-known integrability technologies, in large part borrowed from the study of relativistic (1+1)-dimensional integrable quantum field theories, to the highly non-relativistic and much more complicated case of the theories involved in the recent conjectures of the AdS5/CFT4 and AdS4/CFT3 correspondences. In detail, exploiting the spin-chain nature of the dilatation operator of N = 4 Super Yang-Mills theory, we concentrated our attention on one of the most important sectors, namely the SL(2) sector (which is also very interesting for the understanding of QCD), by formulating a new type of nonlinear integral equation (NLIE) based on a previously conjectured asymptotic Bethe Ansatz. The solutions of this Bethe Ansatz are characterised by the length L of the corresponding spin chain and by the number s of its excitations. A NLIE allows one, at least in principle, to make analytical and numerical calculations for arbitrary values of these parameters. The results have been rather exciting. In the important regime of high Lorentz spin, the NLIE reduces to a linear integral equation which governs the subleading order in s, O(s^0). This also holds in the regime with L → ∞, L / ln s finite (the long operators case). This region of parameters has been particularly investigated in the literature, especially because of an intriguing limit onto the O(6) sigma model defined on the string side. One of the most powerful methods to keep the finite-size spectrum of an integrable relativistic theory under control is the so-called thermodynamic Bethe Ansatz (TBA). We proposed a highly non-trivial generalisation of this technique to the non-relativistic case of AdS5/CFT4 and made the first steps towards determining its full spectrum (of energies for the AdS side, of anomalous dimensions for the CFT one) at any value of the coupling constant and of the size. At the leading order in the size parameter, the calculation of the finite-size corrections is much simpler and does not require the TBA. It consists in deriving, for a non-relativistic case, a method first invented by Lüscher to compute the finite-size effects on the mass spectrum of relativistic theories. We have thus formulated a new version of this approach, adapted to the case of the recently found classical string solutions on AdS4 × CP3, within the new conjecture of an AdS4/CFT3 correspondence. Our results in part confirm the string and algebraic-curve calculations, and in part are completely new; they may thus be better understood through the rapidly evolving developments of this extremely exciting research field.

Relevance: 10.00%

Abstract:

Supramolecular self-assembly represents a key technology for the spontaneous construction of nanoarchitectures and for the fabrication of materials with enhanced physical and chemical properties. In addition, a significant asset of supramolecular self-assemblies rests on their reversible formation, thanks to the kinetic lability of their non-covalent interactions. This dynamic nature can be exploited for the development of "self-healing" and "smart" materials whose functional properties can be tuned by various external factors. One particularly intriguing objective in the field is to reach a high level of control over the shape and size of the supramolecular architectures, in order to produce well-defined functional nanostructures by rational design. In this direction, many investigations have been pursued toward the construction of self-assembled objects from numerous low-molecular-weight scaffolds, for instance by exploiting multiple directional hydrogen-bonding interactions. In particular, nucleobases have been used as supramolecular synthons as a result of their efficiency in coding for non-covalent interaction motifs. Among nucleobases, guanine is the most versatile one, because of its different H-bond donor and acceptor sites, which display self-complementary patterns of interactions. Interestingly, and depending on the environmental conditions, guanosine derivatives can form various types of structures. Most of the supramolecular architectures reported in this Thesis from guanosine derivatives require the presence of a cation, which stabilizes, via dipole-ion interactions, the macrocyclic G-quartet that can, in turn, stack in columnar G-quadruplex arrangements. In addition, in the absence of cations, guanosine can polymerize via hydrogen bonding to give a variety of supramolecular networks, including linear ribbons.
This complex supramolecular behavior makes guanine-guanine interactions the most interesting among all the homonucleobases studied. They have been the subject of intense investigation in areas ranging from structural biology and medicinal chemistry – guanine-rich sequences are abundant in the telomeric ends of chromosomes and in promoter regions of DNA, and are capable of forming G-quartet-based structures – to materials science and nanotechnology. This Thesis, organized into five Chapters, mainly describes some recent advances in the form and function provided by the self-assembly of guanine-based systems. More generally, Chapter 4 will focus on the construction of supramolecular self-assemblies whose assembly process and resulting architectures can be controlled by light as an external stimulus. Chapter 1 will describe some of the many recent studies of G-quartets in the general area of nanoscience. Natural G-quadruplexes can be useful motifs for building new structures and biomaterials such as self-assembled nanomachines, biosensors, therapeutic aptamers and catalysts. Chapters 2-4 develop the core concept of this PhD Thesis, i.e. the supramolecular organization of lipophilic guanosine derivatives with photo- or chemical addressability. Chapter 2 will mainly focus on the use of cation-templated guanosine derivatives as a potential scaffold for designing functional materials with tailored physical properties, showing a new way to control the bottom-up realization of well-defined nanoarchitectures. In section 2.6.7, the self-assembly properties of compound 28a may be considered an example of open-shell moieties ordered by a supramolecular guanosine architecture and thereby exhibiting a new (magnetic) property. Chapter 3 will report on ribbon-like structures, supramolecular architectures formed by guanosine derivatives that may be of interest for the fabrication of molecular nanowires within the framework of future molecular electronic applications. 
In section 3.4 we investigate the supramolecular polymerizations of derivatives dG 1 and G 30 by light-scattering techniques and TEM experiments. The data obtained reveal several levels of organization due to the hierarchical self-assembly of the guanosine units into ribbons, which in turn aggregate into fibrillar or lamellar soft structures. The elucidation of these structures furnishes an explanation for the physical behaviour of the guanosine units, which display organogelator properties. Chapter 4 will describe photoresponsive self-assembling systems. Numerous studies have demonstrated that the use of photochromic molecules in supramolecular self-assemblies is an effective way to noninvasively manipulate their degree of aggregation and their architectures. In section 4.4 we report on the photocontrolled self-assembly of the modified guanosine nucleobase E-42: by introducing a photoactive moiety at C8 it is possible to exert photocontrol over the self-assembly of the molecule, so that the existence of G-quartets can be alternately switched on and off. In section 4.5 we focus on the use of cyclodextrins as photoresponsive host-guest assemblies: αCD–azobenzene conjugates 47-48 (section 4.5.3) are synthesized in order to obtain a photoresponsive system exhibiting a finely photocontrollable degree of aggregation and self-assembled architecture. Finally, Chapter 5 contains the experimental protocols used for the research described in Chapters 2-4.

Relevância:

10.00%

Publicador:

Resumo:

The Italian radio telescopes are currently undergoing a major upgrade in response to the growing demand for deep radio observations, such as surveys on large sky areas or observations of vast samples of compact radio sources. The optimised employment of the Italian antennas, originally constructed mainly for VLBI activities and provided with a control system (FS – Field System) not tailored to single-dish observations, required important modifications, in particular to the guiding software and the data acquisition system. The production of a completely new control system called ESCS (Enhanced Single-dish Control System) for the Medicina dish started in 2007, in synergy with the software development for the forthcoming Sardinia Radio Telescope (SRT). The aim is to produce a system optimised for single-dish observations in continuum, spectrometry and polarimetry. ESCS is also planned to be installed at the Noto site. A substantial part of this thesis work consisted in designing and developing subsystems within ESCS, in order to provide this software with tools to carry out large maps, spanning from the implementation of On-The-Fly fast scans (following both conventional and innovative observing strategies) to the production of standard single-dish output files and the realisation of tools for the quick-look of the acquired data. The test period coincided with the commissioning phase of two devices temporarily installed – while waiting for the SRT to be completed – on the Medicina antenna: an 18-26 GHz 7-feed receiver and the 14-channel analogue backend developed for its use. It is worth stressing that this is the only K-band multi-feed receiver currently available worldwide. The commissioning of the overall hardware/software system constituted a considerable section of the thesis work. 
Tests were carried out to verify the system stability and capabilities, down to sensitivity levels which had never been reached at Medicina with the previous observing techniques and hardware. The aim was also to assess the scientific potential of the multi-feed receiver for the production of wide maps, exploiting its temporary availability on a mid-sized antenna. Dishes like the 32-m antennas at Medicina and Noto, in fact, offer the best conditions for large-area surveys, especially at high frequencies, as they provide a suitable compromise between beam sizes large enough to cover large areas of the sky quickly (typical of small telescopes) and sensitivity (typical of large telescopes). The KNoWS (K-band Northern Wide Survey) project aims at a full-northern-sky survey at 21 GHz; its pilot observations, performed using the new ESCS tools and a peculiar observing strategy, constituted an ideal test-bed for ESCS itself and for the multi-feed/backend system. The KNoWS group, of which I am part, supported the commissioning activities, also providing map-making and source-extraction tools in order to complete the necessary data-reduction pipeline and assess the overall scientific capabilities of the system. The K-band observations, carried out in several sessions between December 2008 and March 2010, were accompanied by a 5 GHz test survey performed during the summertime, which is not suitable for high-frequency observations. This activity was conceived in order to check the new analogue backend separately from the multi-feed receiver, and to simultaneously produce original scientific data (the 6-cm Medicina Survey, 6MS, a polar-cap survey intended to complete PMN-GB6 and provide all-sky coverage at 5 GHz).
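The On-The-Fly mapping strategy mentioned above can be illustrated with a minimal sketch. The function below is purely illustrative (names, parameters and the raster geometry are assumptions, not the actual ESCS interface): it generates the antenna track of a boustrophedon raster, i.e. constant-speed subscans along RA with Dec stepped by a fraction of the beam between rows.

```python
import numpy as np

def otf_raster(ra0, ra1, dec0, dec1, beam_fwhm, scan_speed, dump_rate):
    """Illustrative On-The-Fly raster (NOT the actual ESCS API):
    constant-speed subscans along RA, with Dec stepped by a third of
    the beam between rows; odd rows are reversed (boustrophedon)."""
    dec_step = beam_fwhm / 3.0                                 # row spacing, ~Nyquist
    n_rows = int(round((dec1 - dec0) / dec_step)) + 1
    n_samp = int(round((ra1 - ra0) / scan_speed * dump_rate))  # data dumps per row
    rows = []
    for i in range(n_rows):
        ra = np.linspace(ra0, ra1, n_samp)
        if i % 2:                                              # reverse odd rows
            ra = ra[::-1]
        dec = np.full_like(ra, dec0 + i * dec_step)
        rows.append(np.column_stack([ra, dec]))
    return np.vstack(rows)

# Hypothetical 2 deg x 0.5 deg field, ~0.03 deg beam, 0.05 deg/s, 10 dumps/s
track = otf_raster(10.0, 12.0, 30.0, 30.5,
                   beam_fwhm=0.03, scan_speed=0.05, dump_rate=10)
```

Reversing alternate rows avoids slewing back to the row start, which is what makes such fast scans time-efficient on large areas.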

Relevância:

10.00%

Publicador:

Resumo:

The purpose of this Thesis is to develop a robust and powerful method to classify galaxies from large surveys, in order to establish and confirm the connections between the principal observational parameters of galaxies (spectral features, colours, morphological indices), and to help unveil the evolution of these parameters from $z \sim 1$ to the local Universe. Within the framework of the zCOSMOS-bright survey, making use of its large database of objects ($\sim 10\,000$ galaxies in the redshift range $0 < z \lesssim 1.2$) and its great reliability in the determination of redshifts and spectral properties, we first adopt and extend the \emph{classification cube method}, as developed by Mignoli et al. (2009), exploiting the bimodal properties of galaxies (spectral, photometric and morphological) separately and then combining the three subclassifications. We use this classification method as a test for a newly devised statistical classification, based on Principal Component Analysis and the Unsupervised Fuzzy Partition clustering method (PCA+UFP), which is able to classify the galaxy population exploiting its natural global bimodality, considering simultaneously up to 8 different properties. The PCA+UFP analysis is a very powerful and robust tool to probe the nature and the evolution of galaxies in a survey. It allows one to define the classification of galaxies with smaller uncertainties and adds the flexibility to be adapted to different parameters: being a fuzzy classification, it avoids the problems of a hard classification such as the classification cube presented in the first part of the article. The PCA+UFP method can be easily applied to different datasets: it does not rely on the nature of the data and for this reason it can be successfully employed with other observables (magnitudes, colours) or derived properties (masses, luminosities, SFRs, etc.). The agreement between the two cluster definitions is very high. 
``Early'' and ``late'' type galaxies are well defined by their spectral, photometric and morphological properties, both when these are considered separately and the classifications then combined (classification cube) and when they are treated as a whole (PCA+UFP cluster analysis). Differences arise in the definition of outliers: the classification cube is much more sensitive to single measurement errors or misclassifications in one property than the PCA+UFP cluster analysis, in which errors are ``averaged out'' during the process. This method allowed us to observe the \emph{downsizing} effect taking place in the PC spaces: the migration from the blue cloud towards the red clump happens at higher redshifts for galaxies of larger mass. The determination of the transition mass $M_{\mathrm{cross}}$ is in good agreement with other values in the literature.
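As an illustration of the PCA+UFP idea (a toy sketch, not the authors' actual pipeline), the code below projects a synthetic bimodal "galaxy" sample onto its principal components and applies a minimal fuzzy c-means partition, so that each object receives a membership degree in each cluster rather than a hard label:

```python
import numpy as np

rng = np.random.default_rng(42)

def pca_project(X, n_comp=2):
    """Plain SVD-based PCA: project data onto its first principal components."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_comp].T

def fuzzy_cmeans(X, c=2, m=2.0, n_iter=100):
    """Minimal unsupervised fuzzy partition (fuzzy c-means).
    Returns the membership matrix U (n_samples x c) and cluster centres."""
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)          # rows of U sum to 1
    for _ in range(n_iter):
        Um = U ** m
        centres = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
        inv = (d + 1e-12) ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)
    return U, centres

# Two well-separated toy populations ("red"/"blue") in a 5-property space
red = rng.normal(+2.0, 0.5, size=(200, 5))
blue = rng.normal(-2.0, 0.5, size=(200, 5))
X = np.vstack([red, blue])
U, _ = fuzzy_cmeans(pca_project(X, 2), c=2)
labels = U.argmax(axis=1)
```

Objects near the boundary between the two modes end up with intermediate memberships, which is precisely how such a scheme avoids the outlier sensitivity of a hard classification.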

Relevância:

10.00%

Publicador:

Resumo:

Nowadays alternative energies are an extremely important topic, and the possibility of using hydrogen as an energy carrier must be explored. Many problems hinder the technological application of this abundant and powerful resource, one of them being storage. Among suitable materials for hydrogen storage, magnesium has been the centre of this study because it is cheap and the amount of hydrogen it can store (7.6 wt%) is extremely appealing. Nanostructuring helps overcome slow hydrogen diffusion, and the functionalization of surfaces with transition metals or oxides favours the dissociation/recombination of the hydrogen molecule. The aim of this research is the investigation of the metal-hydride transformation in magnesium nanoparticles synthesized by inert-gas condensation, exploiting the fact that they constitute a simple model system. The nanostructured powder thus produced has been analyzed with respect to nanoparticle surface functionalization by transition-metal clusters, specifically palladium, nickel and titanium, chosen on the basis of their completely different Mg-related phase diagrams. The role of the intermetallic phases formed upon heating and hydrogenation treatments is presented to provide a comprehensive picture of hydrogen sorption in this class of nanostructured storage materials.
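The quoted 7.6 wt% capacity follows directly from the stoichiometry of MgH2 and standard atomic masses; a one-line check:

```python
# Gravimetric hydrogen capacity of MgH2 from standard atomic masses (g/mol).
M_Mg, M_H = 24.305, 1.008
wt_pct = 100.0 * 2 * M_H / (M_Mg + 2 * M_H)
print(f"{wt_pct:.2f} wt%")   # ≈ 7.66 wt%, consistent with the ~7.6 wt% quoted
```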