Abstract:
The University of São Paulo has experienced a steady increase in content in electronic and digital formats, distributed by different suppliers and hosted remotely or in the cloud, and faces correspondingly growing difficulties in facilitating access to this digital collection for its users while continuing to manage the traditional world of physical collections. A possible solution was identified in the new generation of systems called Web Scale Discovery, which allow better management, data integration and faster searching. Aiming to identify whether and how such a system would meet USP's demands and expectations and, if so, to establish the criteria for analyzing such a tool, an analytical study with an essentially documental basis was structured, drawing on a review of the literature and on data available from official websites and from libraries already using this kind of resource. The conceptual basis of the study was defined after identifying the software assessment methods already available, yielding a standard with 40 analysis criteria, ranging from a single access interface to information content, web 2.0 characteristics, an intuitive interface, and faceted navigation, among others. Detailed studies of four of the major systems currently available in this software category are presented, providing support for the decision-making of other libraries interested in such systems.
Abstract:
This study addresses a vehicle routing problem with time windows, accessibility restrictions on customers, and a fleet that is heterogeneous with regard to capacity and average speed. A vehicle can perform multiple routes per day, all starting and ending at a single depot, and it is assigned to a single driver whose total work hours are limited. A column generation algorithm is proposed. The column generation pricing subproblem requires a specific elementary shortest path problem with resource constraints algorithm to address the possibility of each vehicle performing multiple routes per day and the need to set the workday's start time within the planning horizon. A constructive heuristic and a metaheuristic based on tabu search are also developed to find good solutions.
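For context, a minimal sketch of the decomposition such an approach typically rests on (standard column generation notation, mine rather than the authors'): the restricted master problem selects columns r (here, whole route plans) from a pool R, and the pricing subproblem searches for new columns of negative reduced cost given the duals \pi_i of the customer-covering constraints:

    \min \sum_{r \in R} c_r x_r
    \text{s.t. } \sum_{r \in R} a_{ir} x_r = 1 \quad \forall i, \qquad x_r \ge 0,
    \bar{c}_r = c_r - \sum_i a_{ir} \pi_i .

Columns with \bar{c}_r < 0 are added iteratively; when the pricing step (here, the elementary shortest path problem with resource constraints) finds none, the linear relaxation of the master is optimal.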
Abstract:
Graphene has received great attention due to its exceptional properties, which include charge carriers with zero effective mass and extremely large mobilities; these could render it the template for the next generation of electronic devices. Furthermore, it has a weak spin-orbit interaction because of the low atomic number of carbon, which in turn results in long spin coherence lengths. Therefore, graphene is also a promising material for future applications in spintronic devices, which exploit the electron's spin degree of freedom instead of its charge. Graphene can be engineered to form a number of different structures. In particular, by appropriately cutting it one can obtain a 1-D system, only a few nanometers in width, known as a graphene nanoribbon (GNR); such ribbons owe their properties strongly to their width and to the atomic structure along the edges. These GNR-based systems have been shown to have great potential applications, especially as interconnects for integrated circuits. Impurities and defects may play an important role in the coherence of these systems. In particular, the presence of transition metal atoms can lead to significant spin-flip processes of conduction electrons. Understanding this effect is of utmost importance for applied spintronics design. In this work, we focus on the electronic transport properties of armchair graphene nanoribbons with adsorbed transition metal atoms as impurities, taking the spin-orbit effect into account. Our calculations were performed using a combination of density functional theory and non-equilibrium Green's functions. Also, employing a recursive method, we consider a large number of impurities randomly distributed along the nanoribbon in order to infer the spin-coherence length for different concentrations of defects.
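For reference, the central quantity in such NEGF transport calculations is the Landauer transmission, given here in its standard textbook form rather than as the authors' specific implementation:

    T(E) = \mathrm{Tr}\left[ \Gamma_L(E)\, G^r(E)\, \Gamma_R(E)\, G^a(E) \right],
    G^r(E) = \left[ E S - H - \Sigma_L(E) - \Sigma_R(E) \right]^{-1},

where H and S are the device Hamiltonian and overlap matrices, \Sigma_{L,R} are the lead self-energies and \Gamma_{L,R} = i(\Sigma_{L,R} - \Sigma_{L,R}^\dagger) the level broadenings. Recursive Green's function schemes build G^r block by block along the ribbon, which is what keeps long, disordered nanoribbons with many randomly placed impurities tractable.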
Abstract:
Master's Degree in Tourism, Transport and Environmental Economics
Abstract:
Oceanic eddy generation by tall deep-water islands is a common phenomenon. It is recognized that these eddies may have a significant impact on the marine system and related biogeochemical fluxes. Hence, it is important to establish the favourable conditions for their generation. With this objective, we present an observational study of eddy generation mechanisms at tall deep-water islands, using the island of Gran Canaria as a case study. Observations show that the main generation mechanism is topographic forcing, which leads to eddy generation when the incident oceanic flow is sufficiently intense. Wind shear in the island wake may act only as an additional eddy-generation trigger when the impinging oceanic flow is not sufficiently intense. For the island of Gran Canaria we have observed a mean of ten cyclonic eddies generated per year. Eddies are generated more frequently in summer, coinciding with intense trade winds and a strong Canary Current.
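As background (a standard diagnostic from the island-wake literature, not a criterion stated by the authors), the transition to eddy shedding is often parameterized by an effective Reynolds number built with the horizontal eddy viscosity A_h:

    Re_{\mathrm{eff}} = \frac{U D}{A_h},

where U is the incident flow speed and D the island diameter; shedding is expected once Re_eff exceeds a critical value of order 40-50, as for classical cylinder wakes, which is consistent with the requirement that the impinging flow be sufficiently intense.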
Abstract:
This Ph.D. Thesis has been carried out in the framework of a long-term, large project devoted to describing the main photometric, chemical, evolutionary and integrated properties of a representative sample of Large and Small Magellanic Cloud (LMC and SMC, respectively) clusters. The globular cluster system of these two irregular galaxies provides a rich resource for investigating stellar and chemical evolution and for obtaining a detailed view of the star formation history and chemical enrichment of the Clouds. The results discussed here are based on the analysis of high-resolution photometric and spectroscopic datasets obtained using the latest generation of imagers and spectrographs. The principal aims of this project are summarized as follows:
• The study of the AGB and RGB sequences in a sample of MC clusters, through the analysis of a wide near-infrared photometric database, including 33 Magellanic globulars observed in three observing runs with the near-infrared camera SOFI@NTT (ESO, La Silla).
• The study of the chemical properties of a sample of MC clusters, using optical and near-infrared high-resolution spectra. Three observing runs were secured by our group to observe 9 LMC clusters (with ages between 100 Myr and 13 Gyr) with the optical high-resolution spectrograph FLAMES@VLT (ESO, Paranal) and 4 very young (<30 Myr) clusters (3 in the LMC and 1 in the SMC) with the near-infrared high-resolution spectrograph CRIRES@VLT.
• The study of the photometric properties of the main evolutionary sequences in optical Color-Magnitude Diagrams (CMDs) obtained from HST archive data, with the final aim of dating several clusters via comparison between the observed CMDs and theoretical isochrones. The determination of the age of a stellar population requires an accurate measurement of the Main Sequence (MS) Turn-Off (TO) luminosity and knowledge of the distance modulus, reddening and overall metallicity. For this purpose, we limited the age study to clusters already observed with high-resolution spectroscopy, in order to date only clusters with accurate estimates of the overall metallicity.
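For reference, the standard relation that makes those quantities necessary (textbook form, not specific to this thesis): the observed TO magnitude is converted to an absolute one via

    M_{TO} = m_{TO} - 5 \log_{10}\!\left( d / 10\,\mathrm{pc} \right) - A_\lambda,

where d is the cluster distance and A_\lambda the extinction in the observed band (fixed by the reddening); the metallicity then selects the family of theoretical isochrones against which M_{TO} is compared to read off the age.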
Abstract:
It is well known that theories of the firm have evolved along a path paved by an increasing awareness of the importance of organizational structure. The early "neoclassical" conceptualizations portrayed the firm as a rational actor whose aim is to produce the amount of output that maximizes revenue, given the inputs at its disposal and subject to technological or environmental constraints (see Boulding, 1942 for a mid-century state-of-the-art discussion). The knowledge-based theory of the firm (Nonaka & Takeuchi, 1995; Nonaka & Toyama, 2005), by contrast, recognizes the firm as a knowledge-creating entity, with specific organizational capabilities (Teece, 1996; Teece & Pisano, 1998) that allow it to sustain competitive advantages. Tracing a map of the evolution of the theory of the firm, taking into account the several perspectives adopted in the history of thought, would take the length of many books. A more fruitful strategy is therefore to circumscribe the description of the literature's evolution to one stream connected to a crucial question about the nature of firm behaviour and the determinants of competitive advantage. In so doing I adopt a perspective that treats the organizational structure of the firm as the element according to which the different theories can be discriminated. The approach adopted starts by considering the drawbacks of the standard neoclassical theory of the firm. Discussing the most influential theoretical approaches, I end with a close examination of the knowledge-based perspective, within which the firm is considered a knowledge-creating entity that produces and manages knowledge (Nonaka, Toyama, & Nagata, 2000; Nonaka & Toyama, 2005). In a knowledge-intensive organization, knowledge is for the most part embedded in the human capital of the individuals who compose the organization. In such an organization, the management, in order to cope with knowledge-intensive production, ought to develop and accumulate capabilities that shape organizational forms in a way that relies on "cross-functional processes, extensive delayering and empowerment" (Foss 2005, p.12). This mechanism contributes to determining the absorptive capacity of the firm towards specific technologies and, in so doing, it also shapes the technological trajectories along which the firm moves. Having recognized the growing importance of the firm's organizational structure in the theoretical literature, the analysis next provides an overview of the changes that have occurred at the micro level in the firm's organization of production. Economic actors have to deal with the challenges posed by internationalisation and globalization, the increased and increasing competitive pressure of less developed countries on low-value-added production activities, changes in technologies, and increased environmental turbulence and volatility. As a consequence, it has been widely recognized that the main organizational models of production that fitted well in the 20th century are now partially inadequate, and processes aiming to reorganize production activities have spread across several economies in recent years.
Recently, the emergence of a "new" form of production organization has been proposed by scholars, practitioners and institutions alike: the most prominent characteristic of this model is the importance it attaches to employee commitment and involvement. It is accordingly characterized by a strong accent on human resource management and on practices that aim to widen the autonomy and responsibility of workers as well as to increase their commitment to the organization (Osterman, 1994; 2000; Lynch, 2007). This "model" of production organization is often referred to as a High Performance Work System (HPWS). Despite the increasing diffusion, in western companies, of workplace practices that can be inscribed within the concept of HPWS, it is to some extent hazardous to speak of the emergence of a "new organizational paradigm". A discussion of organizational changes and the diffusion of HPWP cannot abstract from the industrial relations system, with a particular accent on employment relationships, because these are as relevant as production organization in determining two major outcomes of the firm: innovation and economic performance. The argument is developed starting from the issue of Social Dialogue at the macro level, from both a European and an Italian perspective. The model of interaction between the social partners has repercussions, at the micro level, on employment relationships, that is to say on the relations between union delegates and management, or between workers and management. Finding economic and social policies capable of sustaining growth and employment in a knowledge-based scenario is likely to constitute the major challenge for the next generation of social pacts, which are the main outcome of social dialogue. As Acocella and Leoni (2007) put forward, social pacts may constitute an instrument to trade wage moderation for high intensity of ICT, organizational and human capital investments. Empirical evidence, especially at the micro level, of a positive relation between economic growth and new organizational designs coupled with ICT adoption and non-adversarial industrial relations is growing. Partnership among the social partners may thus become an instrument to enhance firm competitiveness. The outcome of the discussion is the integration of organizational change and industrial relations elements within a unified framework: the HPWS. Such a choice may help in disentangling the potential complementarities between these two aspects of the firm's internal structure in their effects on economic and innovative performance. The third chapter begins the more original part of the thesis. The data used to disentangle the relations between HPWS practices, innovation and economic performance refer to the manufacturing firms of the Reggio Emilia province with more than 50 employees. The data were collected through face-to-face interviews with both management (199 respondents) and union representatives (181 respondents). Coupled with the cross-section datasets, a further data source is constituted by longitudinal balance sheets (1994-2004). Collecting reliable data that in turn yield reliable results always requires a great effort with uncertain returns.
Data at the micro level are often subject to a trade-off: the wider the geographical context to which the surveyed population belongs, the smaller the amount of information usually collected (low resolution); the narrower the focus on a specific geographical context, the greater the amount of information usually collected (high resolution). For the Italian case, the evidence about the diffusion of HPWP and their effects on firm performance is still scanty and usually limited to local-level studies (Cristini, et al., 2003). The thesis is also devoted to deepening an argument of particular interest: the existence of complementarities between HPWS practices. It has been widely shown by empirical evidence that when HPWP are adopted in bundles they are more likely to affect firm performance than when adopted in isolation (Ichniowski, Prennushi, Shaw, 1997). Is this true also for the local production system of Reggio Emilia? The empirical analysis has the precise aim of providing evidence on the relations between the HPWS dimensions and the innovative and economic performance of the firm. As far as the first line of analysis is concerned, the fundamental role that innovation plays in the economy must be stressed (Geroski & Machin, 1993; Stoneman & Kwoon 1994, 1996; OECD, 2005; EC, 2002). On this point the evidence ranges from traditional innovation, usually approximated by R&D investment expenditure or the number of patents, to the introduction and adoption of ICT in recent years (Brynjolfsson & Hitt, 2000). If innovation is important, then it is critical to analyse its determinants. In this work it is hypothesised that organizational changes and firm-level industrial relations/employment relations aspects that can be put under the heading of HPWS influence the firm's propensity to innovate in product, process and quality. The general argument goes as follows: changes in production management and work organization reconfigure the absorptive capacity of the firm towards specific technologies and, in so doing, shape the technological trajectories along which the firm moves; cooperative industrial relations may lead to smoother adoption of innovations, because these are not opposed by unions. From the first empirical chapter it emerges that the different types of innovation seem to respond in different ways to the HPWS variables. The underlying processes of product, process and quality innovation are likely to answer to different firm strategies and needs. Nevertheless, it is possible to extract some general results about the HPWS factors that most influence innovative performance. The three main aspects are training coverage, employee involvement and the diffusion of bonuses. These variables show persistent and significant relations with all three innovation types, as do the components built on them. In sum, the aspects of the HPWS influence the firm's propensity to innovate. At the same time, fairly clear (although not always strong) evidence emerges of the presence of complementarities between HPWS practices. On the complementarity issue it can be said that some specific complementarities exist. Training activities, when adopted and managed in bundles, are related to the propensity to innovate. Having a sound skill base may be an element that enhances the firm's capacity to innovate.
It may enhance both the capacity to absorb exogenous innovation and the capacity to develop innovations endogenously. The presence and diffusion of bonuses and employee involvement also spur innovative propensity: the former because of their incentive nature, the latter because direct worker participation may increase workers' commitment to the organization and thus their willingness to support and suggest innovations. The other line of analysis provides results on the relation between HPWS and the economic performance of the firm. There has been a bulk of international empirical studies on the relation between organizational change and economic performance (Black & Lynch 2001; Zwick 2004; Janod & Saint-Martin 2004; Huselid 1995; Huselid & Becker 1996; Cappelli & Neumark 2001), while works aiming to capture the relations between economic performance and unions or industrial relations aspects are quite scant (Addison & Belfield, 2001; Pencavel, 2003; Machin & Stewart, 1990; Addison, 2005). In the empirical analysis, the integration of the two main areas of the HPWS represents a scarcely exploited approach in the panorama of both national and international empirical studies. As remarked by Addison, "although most analysis of workers representation and employee involvement/high performance work practices have been conducted in isolation – while sometimes including the other as controls – research is beginning to consider their interactions" (Addison, 2005, p.407). The analysis, conducted by exploiting temporal lags between the dependent variable and the covariates, a possibility afforded by the merger of cross-section and panel data, provides evidence in favour of an impact of HPWS practices on the firm's economic performance, variously measured. Although no robust evidence seems to emerge of complementarities among HPWS aspects in their effect on performance, there is evidence of a generally positive influence of the single practices. The results are quite sensitive to the time lags, suggesting that time-varying heterogeneity is an important factor in determining the impact of organizational change on economic performance. The implications of the analysis can help both management and local-level policy makers. Although the results do not simply extend to other local production systems, it may be argued that the results and implications obtained here also fit contexts similar to the Reggio Emilia province, characterized by the presence of small and medium enterprises organized in districts and by a deep-rooted unionism with strong supporting institutions. However, a hope for future research on the subject treated in the present work is that good-quality information will be collected over wider geographical areas, possibly at the national level, and repeated over time. Only in this way will it be possible to untie the Gordian knot of the linkages between innovation, performance, high performance work practices and industrial relations.
Abstract:
Subduction zones are the favored places for generating tsunamigenic earthquakes, where friction between oceanic and continental plates causes strong seismicity. The topics and methodologies discussed in this thesis are focused on understanding the rupture process of the seismic sources of great tsunami-generating earthquakes. Tsunamigenesis is controlled by several kinematic characteristics of the parent earthquake, such as the focal mechanism, the depth of the rupture, and the slip distribution along the fault area, as well as by the mechanical properties of the source zone. Each of these factors plays a fundamental role in tsunami generation. Therefore, inferring the source parameters of tsunamigenic earthquakes is crucial to understanding the generation of the consequent tsunami and thus to mitigating the risk along the coasts. The typical way to gather information on the source process is to invert the available geophysical data. Tsunami data, moreover, are useful for constraining the portion of the fault area that extends offshore, generally close to the trench, which other kinds of data are unable to constrain. In this thesis I discuss the rupture process of some recent tsunamigenic events, as inferred by means of an inverse method. I first present the 2003 Tokachi-Oki (Japan) earthquake (Mw 8.1). In this study the slip distribution on the fault was inferred by inverting tsunami waveform, GPS, and bottom-pressure data. The joint inversion of tsunami and geodetic data constrained the slip distribution on the fault much better than separate inversions of the single datasets. We then studied the earthquake that occurred in 2007 in southern Sumatra (Mw 8.4). By inverting several tsunami waveforms, both in the near and in the far field, we determined the slip distribution and the mean rupture velocity along the causative fault. Since the largest patch of slip was concentrated on the deepest part of the fault, this is the likely reason for the small tsunami waves that followed the earthquake, pointing out the crucial role the depth of the rupture plays in controlling tsunamigenesis. Finally, we present a new rupture model for the great 2004 Sumatra earthquake (Mw 9.2). We performed a joint inversion of tsunami waveform, GPS and satellite altimetry data to infer the slip distribution, the slip direction, and the rupture velocity on the fault. Furthermore, in this work we present a novel method to estimate, in a self-consistent way, the average rigidity of the source zone. Estimating the source zone rigidity is important since it may play a significant role in tsunami generation; particularly for slow earthquakes, a low rigidity value is sometimes necessary to explain how an earthquake with relatively low seismic moment may generate significant tsunamis. This latter point may be relevant for explaining the mechanics of tsunami earthquakes, one of the open issues in present-day seismology. The investigation of these tsunamigenic earthquakes has underlined the importance of using a joint inversion of different geophysical data to determine the rupture characteristics.
The results shown here have important implications for the implementation of new tsunami warning systems – particularly in the near field – for the improvement of the current ones, and for the planning of inundation maps for tsunami-hazard assessment along coastal areas.
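For orientation, the textbook definition that links rigidity to tsunami potential (the thesis's novel contribution is the self-consistent estimator, not this relation): the seismic moment of a rupture with area A, average slip \bar{D} and rigidity \mu is

    M_0 = \mu A \bar{D}, \qquad M_w = \tfrac{2}{3}\left( \log_{10} M_0 - 9.1 \right) \quad (M_0 \text{ in N m}),

so for a fixed observed M_0, a lower \mu implies larger slip and hence, for shallow offshore faulting, larger seafloor displacement and a larger tsunami.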
Abstract:
The aspartic protease BACE1 (β-amyloid precursor protein cleaving enzyme, β-secretase) is recognized as one of the most promising targets in the treatment of Alzheimer's disease (AD). The accumulation of β-amyloid peptide (Aβ) in the brain is a major factor in the pathogenesis of AD. Aβ is formed by initial cleavage of β-amyloid precursor protein (APP) by β-secretase; BACE1 inhibition therefore represents one therapeutic approach to controlling the progression of AD, by preventing the abnormal generation of Aβ. For this reason, in the last decade many research efforts have focused on the identification of new BACE1 inhibitors as drug candidates. Generally, BACE1 inhibitors are grouped into two families: substrate-based inhibitors, designed as peptidomimetics, and non-peptidomimetic ones. Research on non-peptidomimetic small-molecule BACE1 inhibitors remains the most interesting approach, since these compounds show improved bioavailability after systemic administration, owing to better blood-brain barrier permeability than peptidomimetic inhibitors. Very recently, our research group discovered a new promising lead compound for the treatment of AD, named lipocrine, a hybrid between lipoic acid and the AChE inhibitor (AChEI) tacrine, characterized by a tetrahydroacridine moiety. Lipocrine is one of the first compounds able to inhibit the catalytic activity of AChE and AChE-induced amyloid-β aggregation and to protect against reactive oxygen species. Due to this interesting profile, lipocrine was also evaluated for BACE1 inhibitory activity, proving to be a potent lead compound for BACE1 inhibition. Starting from this profile, a series of tetrahydroacridine analogues was synthesised, varying the chain length between the two fragments. Moreover, following the approach of combining two different pharmacophores in a single molecule, we designed and synthesised different compounds bearing the moieties of known AChEIs (rivastigmine and caproctamine) coupled with lipoic acid, since the dithiolane group had been shown to be an important structural feature of lipocrine for optimal BACE1 inhibition. All the tetrahydroacridine-, rivastigmine- and caproctamine-based compounds were evaluated for BACE1 inhibitory activity in a FRET (fluorescence resonance energy transfer) enzymatic assay (test A). With the aim of enhancing the biological activity of the lead compound, we applied a molecular simplification approach to design and synthesize novel heterocyclic compounds related to lipocrine, in which the tetrahydroacridine moiety was replaced by 4-amino-quinoline or 4-amino-quinazoline rings. All the synthesized compounds were also evaluated in a modified FRET enzymatic assay (test B), which used a different fluorescent substrate for enzymatic BACE1 cleavage. This test method guided an in-depth structure-activity relationship study of BACE1 inhibition for the most promising quinazoline-based derivatives. By varying the substituent at the 2-position of the quinazoline ring and by replacing the lipoic acid residue in the lateral chain with different moieties (e.g. trans-ferulic acid, a known antioxidant), a series of quinazoline derivatives was obtained. In order to confirm the inhibitory activity of the most active compounds, they were evaluated with a third FRET assay (test C) which, surprisingly, did not confirm the previously good activity profiles.
An evaluation of the kinetic parameters of the three assays revealed that method C has the best specificity and enzymatic efficiency. Biological evaluation of the modified 2,4-diamino-quinazoline derivatives measured by method C allowed us to obtain a new lead compound bearing the trans-ferulic acid residue coupled to the 2,4-diamino-quinazoline core, endowed with good BACE1 inhibitory activity (IC50 = 0.8 μM). We report on the variability of the results in the three different FRET assays, which are known to have some disadvantages in terms of interference rates that are strongly dependent on compound properties. The observed variability of the results could also be ascribed to the different enzyme origin, the varied substrate and the different fluorescent groups. Inhibitors should be tested in parallel screens in order to obtain more reliable data prior to being tested in cellular assays. With this aim, a preliminary cellular BACE1 inhibition assay carried out on lipocrine confirmed a good cellular activity profile (EC50 = 3.7 μM), strengthening the idea of finding a small-molecule non-peptidomimetic BACE1 inhibitor. In conclusion, the present study allowed us to identify a new lead compound endowed with BACE1 inhibitory activity in the submicromolar range. Further optimization of the obtained lead is needed in order to obtain a more potent and selective BACE1 inhibitor based on the 2,4-diamino-quinazoline scaffold. A side project related to the synthesis of novel enzymatic inhibitors of BACE1, exploring the chemistry of pseudopeptidic transition-state isosteres, was carried out during a research stage in Hanessian's group at the Université de Montréal (Canada). The aim of this work was the synthesis of the δ-aminocyclohexane carboxylic acid motif with stereochemically defined substitution, in order to incorporate such a constrained core into potential BACE1 inhibitors. This fragment, endowed with reduced peptidic character, is not known in the context of peptidomimetic design. In particular, we envisioned an alternative route based on an organocatalytic asymmetric conjugate addition of nitroalkanes to cyclohexenone in the presence of D-proline and trans-2,5-dimethylpiperazine. The enantioenriched 3-(α-nitroalkyl)-cyclohexanones obtained were further functionalized to give the corresponding δ-nitroalkyl cyclohexane carboxylic acids. These intermediates were elaborated into the target 3-(α-aminoalkyl)-1-cyclohexane carboxylic acids in a new, readily accessible way.
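Since the three FRET assays gave diverging results, it is worth recalling concretely what an IC50 estimate involves. The following is a minimal illustrative sketch (synthetic data, hypothetical compound) of fitting a four-parameter logistic dose-response model, not the assay pipeline used in this work:

    # Minimal sketch: estimating an IC50 from dose-response data with a
    # four-parameter logistic (Hill) model; the data below are synthetic.
    import numpy as np
    from scipy.optimize import curve_fit

    def hill(conc, bottom, top, ic50, slope):
        # Fractional enzyme activity versus inhibitor concentration.
        return bottom + (top - bottom) / (1.0 + (conc / ic50) ** slope)

    # Hypothetical concentrations (uM) and measured residual activities
    conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])
    activity = np.array([0.98, 0.95, 0.84, 0.62, 0.38, 0.18, 0.07])

    # p0 holds rough starting guesses for bottom, top, IC50 and Hill slope
    params, _ = curve_fit(hill, conc, activity, p0=[0.0, 1.0, 0.5, 1.0])
    print(f"estimated IC50 = {params[2]:.2f} uM")

Running the data from several assay variants through the same fit makes the interference-driven variability between tests A, B and C directly comparable.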
Abstract:
Several MCAO systems are under study to improve the angular resolution of the current and future generations of large ground-based telescopes (diameters in the 8-40 m range). The subject of this PhD Thesis is embedded in this context. Two MCAO systems, in different realization phases, are addressed in this Thesis: NIRVANA, the 'double' MCAO system designed for one of the interferometric instruments of LBT, is in the integration and testing phase; MAORY, the future E-ELT MCAO module, is under preliminary study. These two systems tackle the sky coverage problem in two different ways. The layer-oriented approach of NIRVANA, coupled with multi-pyramid wavefront sensors, takes advantage of the optical co-addition of the signal coming from up to 12 NGS in an annular 2' to 6' technical FoV and up to 8 in the central 2' FoV. Summing the light coming from many natural sources makes it possible to increase the limiting magnitude of the single NGS and to improve the sky coverage considerably. One of the two wavefront sensors for the mid-high altitude atmosphere analysis was integrated and tested as a stand-alone unit in the laboratory at INAF-Osservatorio Astronomico di Bologna and afterwards delivered to the MPIA laboratories in Heidelberg, where it was integrated and aligned to the post-focal optical relay of one LINC-NIRVANA arm. A number of tests were performed in order to characterize and optimize the system functionality and performance. A report on this work is presented in Chapter 2. In the MAORY case, to ensure correction uniformity and sky coverage, the LGS-based approach is the current baseline. However, since the Sodium layer is approximately 10 km thick, the artificial reference source looks elongated, especially when observed from the edge of a large aperture. On a 30-40 m class telescope, for instance, the maximum elongation varies between a few arcsec and 10 arcsec, depending on the actual telescope diameter, on the Sodium layer properties and on the laser launcher position. The centroiding error in a Shack-Hartmann WFS increases proportionally to the elongation (in a photon-noise dominated regime), strongly limiting the performance. To compensate for this effect, a straightforward solution is to increase the laser power, i.e. to increase the number of detected photons per subaperture. The scope of Chapter 3 is twofold: an analysis of the performance of three different algorithms (Weighted Center of Gravity, Correlation and Quad-cell) for the instantaneous LGS image position measurement in the presence of elongated spots, and the determination of the number of photons required to achieve a given average wavefront error over the telescope aperture. An alternative optical solution to the spot elongation problem is proposed in Section 3.4. Starting from the considerations presented in Chapter 3, a first-order analysis of the LGS WFS for MAORY (number of subapertures, number of detected photons per subaperture, RON, focal-plane sampling, subaperture FoV) is the subject of Chapter 4. An LGS WFS laboratory prototype was designed to reproduce the relevant aspects of an LGS SH WFS for the E-ELT and to evaluate the performance of different centroid algorithms in the presence of elongated spots, as investigated numerically and analytically in Chapter 3. This prototype permits the simulation of realistic Sodium profiles. A full testing plan for the prototype is set out in Chapter 4.
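As background to the centroid algorithms compared in Chapter 3, the sketch below contrasts a plain and a weighted center of gravity on a toy elongated spot. It is a generic illustration of the WCoG idea (a Gaussian weighting map centred on the expected spot position), assuming synthetic data, and is not the thesis's actual implementation:

    # Minimal sketch: plain vs. weighted center of gravity on a toy
    # Shack-Hartmann subaperture image with an elongated spot (synthetic).
    import numpy as np

    def cog(img):
        # Plain center of gravity, in pixel coordinates (x, y).
        y, x = np.indices(img.shape)
        s = img.sum()
        return (x * img).sum() / s, (y * img).sum() / s

    def wcog(img, x0, y0, sigma):
        # Weighted CoG: a Gaussian weight centred on the expected spot
        # position suppresses the noisy wings of the elongated spot.
        y, x = np.indices(img.shape)
        w = np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2.0 * sigma ** 2))
        return cog(img * w)

    rng = np.random.default_rng(0)
    y, x = np.indices((16, 16))
    spot = np.exp(-((x - 8.3) ** 2 / 8.0 + (y - 7.6) ** 2 / 2.0))  # elongated
    img = np.clip(spot + 0.02 * rng.standard_normal((16, 16)), 0.0, None)

    print("CoG :", cog(img))
    print("WCoG:", wcog(img, 8.0, 8.0, sigma=2.0))

In a photon-noise dominated regime the weighting trades a small bias for a reduced centroid variance, which is the kind of trade-off Chapter 3 quantifies against the Correlation and Quad-cell estimators.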
Abstract:
The aim of this work was to show that refined analyses of background, low-magnitude seismicity make it possible to delineate the main active faults and to accurately estimate the directions of the regional tectonic stress that characterize the Southern Apennines (Italy), a structurally complex area with high seismic potential. Thanks to the presence in the area of an integrated, dense and wide-dynamic network, it was possible to analyze a high-quality microearthquake dataset consisting of 1312 events that occurred from August 2005 to April 2011, by integrating the data recorded at 42 seismic stations of various networks. The refined seismicity locations and focal mechanisms clearly delineate a system of NW-SE striking normal faults along the Apenninic chain and an approximately E-W oriented strike-slip fault transversely cutting the belt. The seismicity along the chain does not occur on a single fault but in a volume, delimited by the faults activated during the 1980 Irpinia M 6.9 earthquake, on sub-parallel, predominantly normal faults. The results show that the recent low-magnitude earthquakes belong to the background seismicity and are likely generated along the major fault segments activated during the most recent earthquakes, suggesting that these segments are still active today, thirty years after the mainshock occurrence. In this sense, this study gives a new perspective on the use of high-quality records of low-magnitude background seismicity for the identification and characterization of active fault systems. The stress tensor inversion provides two equivalent models to explain the microearthquake generation along both the NW-SE striking normal faults and the E-W oriented fault with dominant dextral strike-slip motion, but with different geological interpretations. We suggest that the NW-SE-striking Africa-Eurasia convergence acts in the background of all these structures, playing a primary and unifying role in the seismotectonics of the whole region.
Abstract:
A fundamental gap in the current understanding of collapsed structures in the universe concerns the thermodynamical evolution of the ordinary, baryonic component. Unopposed radiative cooling of plasma would lead to the cooling catastrophe, a massive inflow of condensing gas toward the centres of galaxies, groups and clusters. The last generation of multiwavelength observations has radically changed our view of baryons, suggesting that the heating linked to the active galactic nucleus (AGN) may be the balancing counterpart of cooling. In this Thesis, I investigate the engine of the heating regulated by the central black hole. I argue that mechanical feedback, based on massive subrelativistic outflows, is the key to solving the cooling flow problem, i.e. dramatically quenching the cooling rates for several billion years without destroying the cool-core structure. Using an upgraded version of the parallel 3D hydrodynamic code FLASH, I show that anisotropic AGN outflows can further reproduce fundamental observed features, such as buoyant bubbles, cocoon shocks, sonic ripples, metal dredge-up, and subsonic turbulence. The latter is an essential ingredient in driving nonlinear thermal instabilities, which cause cold gas condensation, a residual of the quenched cooling flow and, later, fuel for the AGN feedback engine. The self-regulated outflows are systematically tested on the scales of massive clusters, groups and isolated elliptical galaxies: in lighter, less bound objects the feedback needs to be gentler and less efficient, in order to avoid drastic overheating. In this Thesis, I describe in depth the complex hydrodynamics involving the coupling of the feedback energy to that of the surrounding hot medium. Finally, I present the merits and flaws of all the proposed models, with a critical eye toward observational concordance.
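For context, the timescale whose shortness creates the problem (a standard definition, not specific to this Thesis) is the radiative cooling time of the hot plasma:

    t_{\mathrm{cool}} \simeq \frac{(3/2)\,(n_e + n_i)\, k_B T}{n_e\, n_H\, \Lambda(T)},

where \Lambda(T) is the cooling function. In observed cool cores t_cool drops well below the Hubble time, so without a heating source such as AGN feedback the gas would condense and flow inward, which is the cooling catastrophe described above.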
Abstract:
Donor-derived CD8+ cytotoxic T lymphocytes (CTLs) eliminating host leukemic cells mediate curative graft-versus-leukemia (GVL) reactions after allogeneic hematopoietic stem cell transplantation (HSCT). The leukemia-reactive CTLs recognize hematopoiesis-restricted or broadly expressed minor histocompatibility and leukemia-associated peptide antigens that are presented by human leukocyte antigen (HLA) class I molecules on recipient cells. The development of allogeneic CTL therapy in acute myeloid leukemia (AML) is hampered by the poor efficiency of current techniques for generating leukemia-reactive CTLs from unprimed healthy donors in vitro. In this work, a novel allogeneic mini-mixed lymphocyte/leukemia culture (mini-MLLC) approach was established by stimulating CD8+ T cells, isolated from the peripheral blood of healthy donors at comparably low numbers (i.e. 10^4/well), with HLA class I-matched primary AML blasts in 96-well microtiter plates. Before culture, CD8+ T cells were immunomagnetically separated into CD62L(high)+ and CD62L(low)+/neg subsets, enriched for naive/central memory and effector memory cells, respectively. The use of 96-well microtiter plates aimed at creating multiple different responder-stimulator cell compositions in order to provide, by chance, culture conditions optimized for the growth of leukemia-reactive CTLs. The culture medium was supplemented with interleukin (IL)-7, IL-12, and IL-15. On day 14, IL-12 was replaced by IL-2. In eight different related and unrelated donor/AML pairs with complete HLA class I match, numerous CTL populations were isolated that specifically lysed myeloid leukemias in association with various HLA-A, -B, or -C alleles. These CTLs recognized neither lymphoblastoid B cell lines of donor and patient origin nor primary B cell leukemias expressing the corresponding HLA restriction element. The CTLs expressed T cell receptors of single V-beta chain families, indicating their clonality. The vast majority of CTL clones were obtained from mini-MLLCs initiated with CD8+ CD62L(high)+ cells. Using antigen-specific stimulation, multiple CTL populations were amplified to 10^8-10^10 cells within six to eight weeks. The capability of mini-MLLC derived AML-reactive CTL clones to inhibit the engraftment of human primary AML blasts was investigated in the immunodeficient nonobese diabetic/severe combined immunodeficient IL-2 receptor common γ-chain deficient (NOD/SCID IL2Rγnull) mouse model. Leukemic engraftment in NOD/SCID IL2Rγnull mice was specifically prevented if the inoculated AML blasts had been pre-incubated in vitro with AML-reactive CTLs, but not with anti-melanoma control CTLs. These results demonstrate that myeloid leukemia-specific CTL clones capable of preventing AML engraftment in mice can be rapidly isolated from CD8+ CD62L(high)+ T cells of healthy donors in vitro. The efficient generation and expansion of these CTLs by the newly established mini-MLLC approach opens the door to several potential applications. First, the CTLs can be used within T cell-driven antigen identification strategies to extend the panel of molecularly defined AML antigens recognizable by T cells of healthy donors. Second, because these CTLs can be isolated from the stem cell donor by mini-MLLC prior to transplantation, they could be infused into AML patients as part of the stem cell allograft, or early after transplantation when the leukemia burden is low.
The capability of these T cells to expand and function in vivo might require the simultaneous administration of AML-reactive CD4+ T cells generated by a similar in vitro strategy or, more simply, the co-transfer of CD8-depleted donor lymphocytes. To prepare for clinical testing, the mini-MLLC approach should now be translated into a protocol compatible with good manufacturing practice guidelines.
Analysis of the influence of epitope flanking regions on MHC class I-restricted antigen presentation
Abstract:
Peptides presented by MHC class I molecules for CTL recognition are derived mainly from cytosolic proteins. For antigen presentation on the cell surface, epitopes require correct processing by cytosolic and ER proteases, efficient TAP transport and MHC class I binding affinity. The efficiency of epitope generation depends not only on the epitope itself, but also on its flanking regions. In this project, the influence of the C-terminal region of the model epitope SIINFEKL (S8L) from chicken ovalbumin (aa 257-264) on antigen processing was investigated. S8L is a well characterized epitope presented on the murine MHC class I molecule H-2Kb. The Flp-In 293Kb cell line was transfected with different constructs, each enabling the expression of the S8L sequence with different defined C-terminal flanking regions. The constructs differed at the first two C-terminal positions after the S8L epitope, the so-called P1' and P2' positions. At these sites, all 20 amino acids were exchanged consecutively and tested for their influence on H-2Kb/S8L presentation on the cell surface of the Flp-In 293Kb cells. The detection of this complex was performed by immunostaining and flow cytometry. The prevailing assumption is that proteasomal cleavages are exclusively responsible for the generation of the final C-termini of CTL epitopes. Nevertheless, recent publications have shown that TPPII (tripeptidyl peptidase II) is required for the generation of the correct C-terminus of the HLA-A3-restricted HIV epitope Nef(73-82). Against this background, the dependence of S8L generation on proteasomal cleavage of the designed constructs was characterized using proteasome inhibitors. The results obtained indicate that the amino acid flanking the C-terminus of an epitope is crucial for proteasomal cleavage. Furthermore, partially proteasome-independent S8L generation from specific S8L-precursor peptides was observed. Hence, the possibility that other endo- or carboxypeptidases existing in the cytosol could be involved in the correct trimming of the C-terminus of antigenic peptides for MHC class I presentation was investigated, by performing specific knockdowns and using inhibitors against the target peptidases. In parallel, a purification strategy to identify the novel peptidase was established. The purified peaks showing endopeptidase activity were further analyzed by mass spectrometry, and some potential peptidases (e.g. Lon) were identified, which remain to be further characterized.
Abstract:
The aim of the research project discussed in this thesis was to study the inhibition of aerobic glycolysis, the metabolic pathway exploited by cancer cells for ATP generation. This observation has led to the evaluation of glycolytic inhibitors as potential anticancer agents. Lactate dehydrogenase (LDH) is the only enzyme whose inhibition should allow a block of aerobic glycolysis in tumor cells without damaging normal cells which, under conditions of normal functional activity and sufficient oxygen supply, do not need this enzyme. In preliminary experiments we demonstrated that oxamic acid and tartronic acid, two competitive LDH inhibitors, impaired aerobic glycolysis and replication of cells from human hepatocellular carcinoma. We therefore proposed that the depletion of ATP levels in neoplastic cells could improve the chemotherapeutic index of associated anticancer drugs; in particular, we studied the association of oxamic acid with multi-targeted kinase inhibitors. A synergistic effect in combination with sorafenib was observed, and we demonstrated that this was related to the capacity of sorafenib to hinder oxidative phosphorylation, so that cells became more dependent on aerobic glycolysis. These results linked to LDH blockade encouraged us to search for LDH inhibitors more potent than oxamic acid; thus, in collaboration with the Department of Pharmaceutical Sciences of Bologna University, we identified a new molecule, galloflavin, able to inhibit both the A and B isoforms of the LDH enzyme. The effects of galloflavin were studied on different human cancer cell lines (hepatocellular carcinoma, breast cancer, Burkitt's lymphoma). Although its potency differed across the tested cell lines, galloflavin was consistently found to inhibit lactate and ATP production and to induce cell death, mainly in the form of apoptosis. Finally, as LDH-A is able to bind single-stranded DNA, thus stimulating cell transcription, galloflavin's effects on this other LDH function were also studied.