26 results for Subject of rights

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance: 100.00%

Abstract:

The present research studies the special rights other than shares in Spanish law and the protection of their holders in cross-border mergers of limited liability companies within the European Union framework. Special rights other than shares are recognised as an independent legal category within the legal systems of some EU Member States, such as Germany or Spain, through the implementation of the Third Directive 78/855/EEC concerning mergers of public limited liability companies. The above-cited Directive contains a special regime of protection for the holders of securities, other than shares, to which special rights are attached, consisting of their being given rights in the acquiring company at least equivalent to those they possessed in the company being acquired. This safeguard highlights the intimate connection between this type of right and the company whose extinction determines their existence. Pursuant to Directive 2005/56/EC on cross-border mergers of limited liability companies, each company taking part in such an operation must comply with the safeguards for members and third parties provided in the national law to which it is subject. In this regard, the protection of holders of special rights other than shares is governed by the domestic M&A regime. As far as Spanish law is concerned, holders of these special rights are granted a right to merger information, on the same terms as shareholders, as well as equal rights in the company resulting from the cross-border merger. However, these measures do not sufficiently guarantee suitable protection; since holders of special rights may be regarded as special creditors, it will sometimes be necessary to resort to the general protection regime for creditors. In Spanish law, this would involve recognising a right to oppose the merger, whose exercise would prevent the operation from being completed until equal rights are ensured.

Relevance: 90.00%

Abstract:

The subject of this doctoral dissertation is the definition of a new methodology for the morphological and morphometric study of fossil human teeth, and thereby a contribution to the reconstruction of human evolutionary history, with the aim of extending the method to the different species of fossil hominids. Standardized investigative methodologies are lacking, both for the orientation of the teeth under study and for the analyses that can be carried out once the teeth are oriented. The opportunity to standardize a primary analysis methodology is furnished by the study of certain early Neanderthal and pre-Neanderthal molars recovered in two caves in southern Italy [Grotta Taddeo (Taddeo Cave) and Grotta del Poggio (Poggio Cave), near Marina di Camerata, Campania]. To these are added further molars of Neanderthals and of Upper Paleolithic modern humans, scanned specifically in the paleoanthropology laboratory of the University of Arkansas (Fayetteville, Arkansas, USA), in order to enlarge the paleoanthropological sample and thereby make the final results of the analyses more significant. The new analysis methodology proceeds as follows:
1. Standardization of an orientation system for first molars (upper and lower), starting from scans of a sample of 30 molars belonging to modern humans (15 lower M1 and 15 upper M1), the definition of landmarks, the comparison of various systems and the choice of an orientation system for each of the two dental typologies.
2. Definition of an analysis procedure that considers only the first 4 millimetres of the dental crown starting from the collar: five sections parallel to the orientation plane are taken, spaced 1 millimetre apart. The intention is to devise a method that allows fossil species to be differentiated even in the presence of worn teeth.
3. Results and conclusions. The new approach to the study of teeth provides a considerable quantity of information that can be evaluated better by enlarging the fossil sample. It has proved to be a valid tool for evolutionary classification, one that has allowed us to differentiate the Neanderthal sample from that of modern humans. In particular, the molars of Grotta Taddeo, whose species of origin it had not previously been possible to determine with certainty, are classified as Neanderthal by the present research.
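As a rough illustration of the sectioning step described in point 2 above, the sketch below slices an oriented tooth surface at five planes 1 mm apart and estimates each cross-sectional area. It assumes the tooth is available as a 3-D point cloud with the cervical plane at z = 0, and uses a 2-D convex hull as a crude proxy for the section outline; none of these choices come from the thesis itself.

```python
import numpy as np
from scipy.spatial import ConvexHull

def section_areas(vertices, n_sections=5, spacing=1.0, band=0.05):
    """Approximate cross-sectional areas of an oriented tooth crown.

    vertices: (N, 3) array of surface points (mm), oriented so that the
    cervical (collar) plane is z = 0 and the crown extends toward +z.
    Returns one area (mm^2) per section plane z = 0, 1, ..., 4 mm.
    """
    areas = []
    for k in range(n_sections):
        z = k * spacing
        # points lying in a thin band around the section plane
        ring = vertices[np.abs(vertices[:, 2] - z) < band][:, :2]
        if len(ring) < 3:
            areas.append(np.nan)  # plane lies above the preserved crown
            continue
        # 2-D convex-hull area as a simple proxy for the section outline
        areas.append(ConvexHull(ring).volume)  # for 2-D hulls, volume = area
    return areas
```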

Relevance: 90.00%

Abstract:

Thanks to the Chandra and XMM-Newton surveys, the hard X-ray sky is now probed down to a flux limit where the bulk of the X-ray background is almost completely resolved into discrete sources, at least in the 2-8 keV band. Extensive programs of multiwavelength follow-up observations showed that the large majority of hard X-ray selected sources are identified with Active Galactic Nuclei (AGN) spanning a broad range of redshifts, luminosities and optical properties. A sizable fraction of relatively luminous X-ray sources hosting an active, presumably obscured, nucleus would not have been easily recognized as such on the basis of optical observations, because they are characterized by "peculiar" optical properties. In my PhD thesis, I focus on the nature of two classes of hard X-ray selected "elusive" sources: those characterized by high X-ray-to-optical flux ratios and red optical-to-near-infrared colors, a fraction of which are associated with Type 2 quasars, and the X-ray bright optically normal galaxies, also known as XBONGs. In order to characterize the properties of these classes of elusive AGN, the datasets of several deep and large-area surveys have been fully exploited. The first class of "elusive" sources is characterized by X-ray-to-optical flux ratios (X/O) significantly higher than those generally observed in unobscured quasars and Seyfert galaxies. The properties of well-defined samples of high-X/O sources detected at bright X-ray fluxes suggest that X/O selection is highly efficient in sampling high-redshift obscured quasars. At the limits of deep Chandra surveys (∼10⁻¹⁶ erg cm⁻² s⁻¹), high-X/O sources are generally characterized by extremely faint optical magnitudes, hence their spectroscopic identification is hardly feasible even with the largest telescopes. In this framework, a detailed investigation of their X-ray properties may provide useful information on the nature of this important component of the X-ray source population. The X-ray data of the deepest X-ray observations ever performed, the Chandra deep fields, allow us to characterize the average X-ray properties of the high-X/O population. The results of the spectral analysis clearly indicate that the high-X/O sources represent the most obscured component of the X-ray background: their spectra are harder (Γ ∼ 1) than those of any other class of sources in the deep fields, and harder than the XRB spectrum itself (Γ ≈ 1.4). In order to better understand AGN physics and evolution, a much better knowledge of the redshift, luminosity and spectral energy distributions (SEDs) of elusive AGN is of paramount importance. The recent COSMOS survey provides the necessary multiwavelength database to characterize the SEDs of a statistically robust sample of obscured sources. The combination of high X/O and red colors offers a powerful tool to select obscured luminous objects at high redshift. A large sample of X-ray emitting extremely red objects (R−K > 5) has been collected and their optical-infrared properties have been studied. In particular, using an appropriate SED-fitting procedure, the nuclear and host-galaxy components have been deconvolved over a large range of wavelengths, and optical nuclear extinctions, black hole masses and Eddington ratios have been estimated. It is important to remark that the combination of hard X-ray selection and extremely red colors is highly efficient in picking up highly obscured, luminous sources at high redshift.
Although XBONGs do not represent a new source population, interest in the nature of these sources has been renewed after the discovery of several examples in recent Chandra and XMM-Newton surveys. Even though several possibilities have been proposed in the recent literature to explain why a relatively luminous (L_X = 10⁴²-10⁴³ erg s⁻¹) hard X-ray source leaves no significant signature of its presence in terms of optical emission lines, the very nature of XBONGs is still a subject of debate. Good-quality photometric near-infrared data (ISAAC/VLT) for 4 low-redshift XBONGs from the HELLAS2XMM survey have been used to search for the presence of the putative nucleus, applying the surface-brightness decomposition technique. In two out of the four sources, the presence of a weak nuclear component hosted by a bright galaxy has been revealed. The results indicate that moderate amounts of gas and dust, covering a large solid angle (possibly 4π) at the nuclear source, may explain the lack of optical emission lines. A weak nucleus unable to produce sufficient UV photons may provide an alternative or additional explanation. On the basis of an admittedly small sample, we conclude that XBONGs constitute a mixed bag rather than a new source population. When the presence of a nucleus is revealed, it turns out to be mildly absorbed and hosted by a bright galaxy.
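A minimal sketch of the X/O selection described above, using one common convention for the X-ray-to-optical flux ratio (2-10 keV flux and R-band magnitude, zero point 5.5); the exact band, zero point and thresholds used in the thesis are assumptions here, and the source values are invented for illustration.

```python
import numpy as np

def x_over_o(fx_cgs, r_mag):
    """X-ray-to-optical flux ratio, X/O = log10(f_X / f_R).

    fx_cgs: 2-10 keV flux in erg cm^-2 s^-1; r_mag: R-band magnitude.
    The zero point 5.5 folds in the R-band flux normalization.
    """
    return np.log10(fx_cgs) + r_mag / 2.5 + 5.5

# Flag EXO-like candidates: high X/O combined with extremely red colors.
fx, r, k = 3e-15, 24.2, 18.9  # illustrative source values
if x_over_o(fx, r) > 1.0 and (r - k) > 5.0:
    print("candidate obscured AGN (high X/O, R-K > 5)")
```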

Relevance: 90.00%

Abstract:

Recently, rising interest in political and economic integration/disintegration issues has developed in the field of political economy. This growing strand of literature partly draws on traditional issues of fiscal federalism and optimum public good provision, and focuses on a trade-off between the benefits of centralization, arising from economies of scale or externalities, and the costs of harmonizing policies as a consequence of the increased heterogeneity of individual preferences in an international union or in a country composed of at least two regions. This thesis stems from this strand of literature and aims to shed some light on two highly relevant aspects of the political economy of European integration. The first concerns the role of public opinion in the integration process; more precisely, how the economic benefits and costs of integration shape citizens' support for European Union (EU) membership. The second is the allocation of policy competences among different levels of government: European, national and regional. Chapter 1 introduces the topics developed in this thesis by reviewing the main recent theoretical developments in the political economy analysis of integration processes. It is structured as follows. First, it briefly surveys a few relevant articles on economic theories of integration and disintegration processes (Alesina and Spolaore 1997, Bolton and Roland 1997, Alesina et al. 2000, Casella and Feinstein 2002) and discusses their relevance for the study of the impact of economic benefits and costs on public opinion attitudes towards the EU. Subsequently, it explores the links between this political economy literature and theories of fiscal federalism, especially with regard to normative considerations concerning the optimal allocation of competences in a union. Chapter 2 first proposes a model of citizens' support for membership of international unions, with explicit reference to the EU; subsequently it tests the model on a panel of EU countries. What factors influence public opinion support for the European Union? In international relations theory, the idea that citizens' support for the EU depends on the material benefits deriving from integration, i.e. on whether European integration makes individuals economically better off (utilitarian support), has been common since the 1970s, but has never been the subject of a formal treatment (Hix 2005). A small number of studies in the 1990s investigated econometrically the link between national economic performance and mass support for European integration (Eichenberg and Dalton 1993; Anderson and Kalthenthaler 1996), but only on the basis of informal assumptions. The main aim of Chapter 2 is thus to propose and test our model with a view to providing a more complete and theoretically grounded picture of public support for the EU. Following theories of utilitarian support, we assume that citizens are in favour of membership if they receive economic benefits from it. To develop this idea, we propose a simple political economy model drawing on the recent economic literature on integration and disintegration processes. Its basic element is the existence of a trade-off between the benefits of centralisation and the costs of harmonising policies in the presence of heterogeneous preferences among countries.

The approach we follow is that of the recent literature on the political economy of international unions and the unification or break-up of nations (Bolton and Roland 1997, Alesina and Wacziarg 1999, Alesina et al. 2001, 2005a, to mention only the most relevant). The general perspective is that unification provides returns to scale in the provision of public goods, but reduces each member state's ability to determine its most favoured bundle of public goods. In the simple model presented in Chapter 2, support for membership of the union is increasing in the union's average income and in the loss of efficiency stemming from remaining outside the union, and decreasing in a country's own average income, while increasing heterogeneity of preferences among countries points to a reduced scope for the union. We then test the model empirically with data on the EU; more precisely, we perform an econometric analysis employing a panel of member countries over time. The second part of Chapter 2 thus tries to answer the following question: does public opinion support for the EU really depend on economic factors? The findings are broadly consistent with our theoretical expectations: the conditions of the national economy, differences in income among member states and heterogeneity of preferences shape citizens' attitudes towards their country's membership of the EU. Consequently, this analysis offers some interesting policy implications for the present debate about ratification of the European Constitution and, more generally, about how the EU could act in order to gain more support from the European public. Citizens in many member states are called to express their opinion in national referenda, which may well end in rejection of the Constitution, as recently happened in France and the Netherlands, triggering a Europe-wide political crisis. These events show that understanding public attitudes towards the EU is nowadays not only of academic interest but also highly relevant for policy-making. Chapter 3 empirically investigates the link between European integration and regional autonomy in Italy. Over the last few decades, the double tendency towards supranationalism and regional autonomy that has characterised some European states has taken a very interesting form in this country: Italy, besides being one of the founding members of the EU, also implemented a process of decentralisation during the 1970s, further strengthened by a constitutional reform in 2001. Moreover, the issue of the allocation of competences among the EU, the Member States and the regions is now especially topical. The process leading to the drafting of the European Constitution (even though it has not come into force) has attracted much attention from a constitutional political economy perspective, from both a normative and a positive point of view (Breuss and Eller 2004, Mueller 2005). The Italian parliament has recently passed a new thorough constitutional reform, still to be approved by citizens in a referendum, which includes, among other things, the so-called "devolution", i.e. granting the regions exclusive competence in public health care, education and local police. Following and extending the methodology proposed in a recent influential article by Alesina et al. (2005b), which concentrated only on EU activity (treaties, legislation, and European Court of Justice rulings), we develop a set of quantitative indicators measuring the intensity of the legislative activity of the Italian State, the EU and the Italian regions from 1973 to 2005 in a large number of policy categories. By doing so, we seek to answer the following broad questions. Are European and regional legislation substitutes for state laws? To what extent are the competences attributed by the European treaties or the Italian Constitution actually exercised in the various policy areas? Is their exercise consistent with the normative recommendations of the economic literature on their optimal allocation among different levels of government? The main results show, first, that there seems to be a certain substitutability between EU and national legislation (even if not a very strong one), but not between regional and national legislation. Second, the EU concentrates its legislative activity mainly on international trade and agriculture, whilst social policy is where the regions and the State (which is also the main actor in foreign policy) are more active. Third, at least two levels of government (in some cases all of them) are significantly involved in legislative activity in many sectors, even where the rationale for this is, at best, very questionable, indicating that they actually share a larger number of policy tasks than economic theory would suggest. It appears, therefore, that an excessive number of competences are shared among different levels of government. From an economic perspective, it may well be advisable for some competences to be shared, but only when the balance between scale or spillover effects and heterogeneity of preferences suggests so. When, on the contrary, too many levels of government are involved in a certain policy area, the distinction between their different responsibilities easily becomes unnecessarily blurred. This may not only lead to a slower and less efficient policy-making process, but also risks making it too complicated for citizens to understand; citizens, on the contrary, should be able to know who is really responsible for a certain policy when they vote in national, local or European elections or in referenda on national or European constitutional issues.
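A hedged sketch of the kind of panel estimation described for Chapter 2, assuming a country-year panel of support shares; the variable names, the CSV file, and the two-way fixed-effects specification with clustered errors are illustrative choices, not the thesis's actual specification.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative country-year panel: 'support' = share in favour of EU
# membership, 'gdp_pc' = national income per head, 'union_gdp_pc' = EU
# average income, 'pref_het' = proxy for heterogeneity of preferences.
# All names, including the file, are hypothetical.
df = pd.read_csv("eu_support_panel.csv")

# Two-way fixed effects: country and year dummies absorb unobserved
# country traits and common shocks; errors clustered by country.
model = smf.ols(
    "support ~ gdp_pc + union_gdp_pc + pref_het + C(country) + C(year)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["country"]})
print(model.summary())
```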

Relevance: 90.00%

Abstract:

The research activity carried out during the PhD course in Electrical Engineering belongs to the field of electric and electronic measurements. The main subject of the present thesis is a distributed measurement system to be installed in Medium Voltage power networks, together with the method developed to analyze the data acquired by the measurement system itself and to monitor power quality. Chapter 2 illustrates the increasing interest in power quality in electrical systems, reporting the international research activity on the problem and the relevant standards and guidelines issued. The quality of the voltage provided by utilities, and influenced by customers at the various points of a network, emerged as an issue only in recent years, in particular as a consequence of energy market liberalization. Traditionally, the concept of quality of the delivered energy was associated mostly with its continuity, so reliability was the main characteristic to be ensured in power systems. Nowadays, the number and duration of interruptions are the "quality indicators" most commonly perceived by customers; for this reason, a short section is also dedicated to network reliability and its regulation. In this context it should be noted that, although the measurement system developed during the research activity belongs to the field of power quality evaluation systems, the information registered in real time by its remote stations can be used to improve system reliability too. Given the vast range of power quality degrading phenomena that can occur in distribution networks, the study focuses on electromagnetic transients affecting line voltages. The outcome of the study is the design and realization of a distributed measurement system which continuously monitors the phase signals at different points of a network, detects the occurrence of transients superposed on the fundamental steady-state component, and registers the time of occurrence of such events. The data set is finally used to locate the source of the transient disturbance propagating along the network lines. Most of the oscillatory transients affecting line voltages are due to faults occurring at any point of the distribution system, and they must be detected before the protection equipment intervenes. An important conclusion is that the method can improve the reliability of the monitored network, since knowing the location of a fault allows the energy manager to reduce as much as possible both the area of the network to be disconnected for protection purposes and the time spent by technical staff to recover from the abnormal condition and/or the damage. The part of the thesis presenting the results of this study is structured as follows: Chapter 3 deals with the propagation of electromagnetic transients in power systems, defining the characteristics and causes of the phenomena and briefly reporting the theory and approaches used to study transient propagation. The state of the art concerning methods to detect and locate faults in distribution networks is then presented. Finally, attention is paid to the particular technique adopted for this purpose in the thesis and to the methods developed on the basis of that approach. Chapter 4 reports the configuration of the distribution networks on which the fault location method has been applied by means of simulations, as well as the results obtained case by case.
In this way the performance of the location procedure is tested first under ideal and then under realistic operating conditions. Chapter 5 presents the measurement system designed to implement the transient detection and fault location method. The hardware belonging to the measurement chain of every acquisition channel in the remote stations is described. The global measurement system is then characterized by considering the non-ideal aspects of each device that can contribute to the final combined uncertainty on the estimated position of the fault in the network under test. Finally, this parameter is computed according to the Guide to the Expression of Uncertainty in Measurement, by means of a numerical procedure. The last chapter describes a device designed and realized during the PhD activity to replace the commercial capacitive voltage divider in the conditioning block of the measurement chain. This study was carried out with the aim of providing an alternative to the transducer in use, with equivalent performance and lower cost. In this way, the economic impact of the investment associated with the whole measurement system would be significantly reduced, making the application of the method much more feasible.
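The time-tagging principle described above can be illustrated with the classical two-end traveling-wave formula: a transient launched at the fault reaches two synchronized stations at times t1 and t2, and the distance from station 1 is d1 = (L + v(t1 − t2))/2. This is a standard textbook technique consistent with the thesis's approach of time-stamping transients at distributed stations; the thesis's actual algorithm, and the propagation speed assumed below, may differ.

```python
def locate_fault(t1, t2, line_length_km, v_km_per_s=2.9e5):
    """Two-end traveling-wave fault location.

    t1, t2: times (s) at which the transient wavefront is detected by
    the synchronized stations at the two line ends.  v_km_per_s is the
    propagation speed of the transient along the line (a typical value
    just below the speed of light is assumed; the real value depends
    on the line parameters).  Returns the fault distance from station 1.
    """
    d1 = 0.5 * (line_length_km + v_km_per_s * (t1 - t2))
    if not 0.0 <= d1 <= line_length_km:
        raise ValueError("inconsistent arrival times for this line")
    return d1

# Example: 10 km feeder, wavefront seen 6 microseconds earlier at station 1,
# so the fault lies on station 1's side of the midpoint (~4.13 km away).
print(locate_fault(100.0e-6, 106.0e-6, 10.0))
```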

Relevance: 90.00%

Abstract:

Salt deposits characterize the subsurface of Tuzla (Bosnia and Herzegovina) and have made the town famous since ancient times. Archaeological discoveries demonstrate the presence of a Neolithic pile-dwelling settlement related to the existence of saltwater springs, which turned most of the area into swampy ground. Since Roman times the town has been reported as "the City of Salt Deposits and Springs"; "tuz" is the Turkish word for salt, as the Ottomans renamed the settlement in the 15th century following their conquest of medieval Bosnia (Donia and Fine, 1994). Natural brine springs were located everywhere, and salt has been evaporated by means of hot charcoals since pre-Roman times. This ancient use of salt was a small-scale exploitation compared to the massive salt production carried out during the 20th century by means of classical mining methods and, especially, wild brine pumping. In the past, salt extraction was practised by tapping the natural brine springs, whereas the modern technique consists of about 100 boreholes with pumps tapping the natural underground brine runs at an average depth of 400-500 m. The mining operations changed the hydrogeological conditions, enabling a downward flow of fresh water that caused additional salt dissolution. This process has induced severe ground subsidence during the last 60 years, reaching up to 10 metres of sinking in the most affected area. Stress and strain in the overlying rocks induced the formation of numerous fractures over a conspicuous area (3 km²). Consequently, serious damage occurred to buildings and infrastructure such as the water supply system, sewage networks and power lines. Downtown urban life was compromised by the destruction of more than 2000 buildings that collapsed or needed to be demolished, causing the resettlement of about 15000 inhabitants (Tatić, 1979). Recently, salt extraction activities have been strongly reduced, but the underground water system is returning to its natural conditions, threatening to flood the most subsided areas. During the last 60 years the local government developed a monitoring system for the phenomenon, collecting extensive data on geodetic measurements, the amount of brine pumped, piezometry, lithostratigraphy, the extension of the salt body and geotechnical parameters. A database was created within a scientific cooperation between the municipality of Tuzla and the city of Rotterdam (D.O.O. Mining Institute Tuzla, 2000). The scientific investigation presented in this dissertation has been financially supported by a cooperation project between the Municipality of Tuzla, the University of Bologna (CIRSA) and the Province of Ravenna. The University of Tuzla (RGGF) gave important scientific support, in particular regarding the geological and hydrogeological features. Subsidence damage resulting from evaporite dissolution generates substantial losses throughout the world, but the causes are well understood in only a few areas (Gutierrez et al., 2008). The subject of this study is the collapse phenomenon occurring in the Tuzla area, with the aim of identifying and quantifying the several factors involved in the system and their correlations. The Tuzla subsidence phenomenon can be defined as a geohazard, representing the consequence of an adverse combination of geological processes and ground conditions precipitated by human activity, with the potential to cause harm (Rosenbaum and Culshaw, 2003). Where a hazard induces a risk to a vulnerable element, a risk management process is required.
The single factors involved in the subsidence of Tuzla can each be considered as hazards. The final objective of this dissertation is a preliminary risk assessment procedure and guidelines, developed in order to quantify the vulnerability of buildings in relation to the overall geohazard affecting the town. The available historical database, never fully processed before, has been analyzed by means of geographic information systems and mathematical interpolators (PART I). Modern geomatic applications have been implemented to investigate the most relevant hazards in depth (PART II). In order to monitor and quantify the actual subsidence rates, geodetic GPS technologies have been employed and four survey campaigns have been carried out, one per year. The subsidence-related fracture system has been identified by means of field surveys and mathematical interpretations of the sinking surface, called curvature analysis. The comparison of mapped and predicted fractures led to a better comprehension of the problem, and the results confirmed the reliability of fracture identification using curvature analysis applied to sinking data instead of topographic or seismic data. The evolution of urban change has been reconstructed by analyzing topographic maps and satellite imagery, identifying the most damaged areas. This part of the investigation was very important for the quantification of building vulnerability.
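A minimal sketch of the curvature-analysis idea described above, assuming the cumulative sinking surface has been interpolated onto a regular grid: surface curvature is computed from finite-difference gradients, and zones of extreme curvature flag likely tensile fracturing. The grid spacing, the percentile threshold and the use of mean curvature are illustrative assumptions, not the thesis's calibrated procedure.

```python
import numpy as np

def gaussian_and_mean_curvature(z, spacing=1.0):
    """Curvature maps of a gridded subsidence surface z(x, y).

    z: 2-D array of cumulative sinking values (m) on a regular grid
    with the given cell spacing (m).  Returns (K, H): Gaussian and
    mean curvature, whose extreme values are used to flag zones where
    subsidence-induced fractures are expected.
    """
    zy, zx = np.gradient(z, spacing)      # first derivatives
    zxy, zxx = np.gradient(zx, spacing)   # second derivatives
    zyy, _ = np.gradient(zy, spacing)
    K = (zxx * zyy - zxy**2) / (1 + zx**2 + zy**2) ** 2
    H = ((1 + zy**2) * zxx - 2 * zx * zy * zxy + (1 + zx**2) * zyy)
    H /= 2 * (1 + zx**2 + zy**2) ** 1.5
    return K, H

# Flag candidate fracture zones, e.g. the top 5% of |mean curvature|:
# K, H = gaussian_and_mean_curvature(sinking_grid, spacing=25.0)
# fracture_mask = np.abs(H) > np.percentile(np.abs(H), 95)
```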

Relevance: 90.00%

Abstract:

The role of mitochondrial dysfunction in cancer has long been a subject of great interest. In this study, such dysfunction has been examined with regard to thyroid oncocytoma, a rare form of cancer accounting for less than 5% of all thyroid cancers. A peculiar characteristic of thyroid oncocytic cells is the presence of an abnormally large number of mitochondria in the cytoplasm. Such mitochondrial hyperplasia has also been observed in cells derived from patients suffering from mitochondrial encephalomyopathies, where mutations in the mitochondrial DNA (mtDNA) encoding the respiratory complexes result in oxidative phosphorylation dysfunction; in the latter, an increase in the number of mitochondria occurs in order to compensate for the respiratory deficiency. This fact spurred the investigation into the presence of analogous mutations in thyroid oncocytic cells. In this study, the only available cell model of thyroid oncocytoma was utilised: the XTC-1 cell line, established from an oncocytic thyroid metastasis to the breast. In order to assess the energetic efficiency of these cells, they were incubated in a medium lacking glucose and supplemented instead with galactose. Under such conditions, glycolysis is effectively inhibited and the cells are forced to use the mitochondria for energy production. Cell viability experiments revealed that XTC-1 cells were unable to survive in galactose medium, in marked contrast to the TPC-1 control cell line, a thyroid tumour cell line that does not display the oncocytic phenotype. In agreement with these findings, subsequent experiments assessing cellular ATP levels over incubation time in galactose medium showed a drastic and continual decrease in ATP levels only in the XTC-1 cell line. Furthermore, experiments on digitonin-permeabilised cells revealed that the respiratory dysfunction was due to a defect in complex I of the respiratory chain. Subsequent experiments using cybrids demonstrated that this defect could be attributed to the mitochondrially-encoded, as opposed to the nuclear-encoded, subunits of complex I. Confirmation came with mtDNA sequencing, which detected the presence of a novel mutation in the ND1 subunit of complex I; in addition, a mutation in the cytochrome b subunit of complex III of the respiratory chain was detected. The inability of XTC-1 cells to survive in galactose medium is consistent with the fact that many cancers are largely dependent on glycolysis for energy production. Indeed, numerous studies have shown that glycolytic inhibitors are able to induce apoptosis in various cancer cell lines. Subsequent experiments were therefore performed in order to identify the mode of XTC-1 cell death under the metabolic stress imposed by the forced use of the mitochondria for energy production. Cell shrinkage and mitochondrial fragmentation were observed in the dying cells, which would indicate an apoptotic type of cell death. Analysis of additional parameters, however, revealed a lack of both DNA fragmentation and caspase activation, thus excluding a classical apoptotic type of cell death. Interestingly, cleavage of the actin component of the cytoskeleton was observed, implicating the action of proteases in this mode of cell demise; however, experiments employing protease inhibitors failed to identify the specific protease involved.
It has been reported in the literature that overexpression of Bcl-2 is able to rescue cells presenting a respiratory deficiency. As the XTC-1 cell line is not only respiration-deficient but also exhibits a marked decrease in Bcl-2 expression, it is an ideal model with which to study the relationship between Bcl-2 and oxidative phosphorylation in respiration-deficient cells. Contrary to the literature reports on various cell lines harbouring defects in the respiratory chain, Bcl-2 overexpression was not found to increase cell survival or to rescue the energetic dysfunction of XTC-1 cells. Interestingly, however, it had a noticeable impact on cell adhesion and morphology. Whereas XTC-1 cells shrank and detached from the growth surface under conditions of metabolic stress, Bcl-2-overexpressing XTC-1 cells appeared much healthier and were up to 45% more adherent. The target of Bcl-2 in this setting appeared to be the actin cytoskeleton, as the cleavage observed in XTC-1 cells expressing only endogenous levels of Bcl-2 was inhibited in Bcl-2-overexpressing cells. Thus, although unable to rescue XTC-1 cells in terms of cell viability, Bcl-2 is somehow able to stabilise the cytoskeleton, resulting in modifications of cell morphology and adhesion. The mitochondrial respiratory deficiency observed in cancer cells is thought not only to cause an increased dependency on glycolysis but also to blunt cellular responses to anticancer agents. The effects of several therapeutic agents were thus assessed for their death-inducing ability in XTC-1 cells. Cell viability experiments clearly showed that the cells were more resistant to stimuli that generate reactive oxygen species (tert-butylhydroperoxide) and to mitochondrial calcium-mediated apoptotic stimuli (C6-ceramide) than to stimuli inflicting DNA damage (cisplatin) or damage to protein kinases (staurosporine). Various studies in the literature have reported that the peroxisome proliferator-activated receptor coactivator 1 (PGC-1α), which plays a fundamental role in mitochondrial biogenesis, is also involved in protecting cells against apoptosis caused by the former two types of stimuli. In accordance with these observations, real-time PCR experiments showed that XTC-1 cells express higher mRNA levels of this coactivator than the control cells, implicating its importance in drug resistance. In conclusion, this study has revealed that XTC-1 cells, like many cancer cell lines, are characterised by reduced energetic efficiency due to mitochondrial dysfunction, attributed here to mutations in respiratory genes encoded by the mitochondrial genome. Although the mechanism of cell demise under metabolic stress remains unclear, the potential of targeting thyroid oncocytic cancers with glycolytic inhibitors has been illustrated. In addition, the discovery of mtDNA mutations in XTC-1 cells has enabled the use of this cell line as a model with which to study the relationship between Bcl-2 overexpression and oxidative phosphorylation in cells harbouring mtDNA mutations, and to investigate the significance of such mutations in establishing resistance to apoptotic stimuli.

Relevance: 90.00%

Abstract:

Several MCAO systems are under study to improve the angular resolution of the current and future generation of large ground-based telescopes (diameters in the 8-40 m range). The subject of this PhD Thesis is embedded in this context. Two MCAO systems, in different realization phases, are addressed in this Thesis: NIRVANA, the 'double' MCAO system designed for one of the interferometric instruments of LBT, is in the integration and testing phase; MAORY, the future E-ELT MCAO module, is under preliminary study. These two systems tackle the sky coverage problem in two different ways. The layer-oriented approach of NIRVANA, coupled with multi-pyramid wavefront sensors, takes advantage of the optical co-addition of the signal coming from up to 12 NGS in an annular 2' to 6' technical FoV and up to 8 in the central 2' FoV. Summing the light coming from many natural sources makes it possible to increase the limiting magnitude of the single NGS and to improve the sky coverage considerably. One of the two wavefront sensors for the mid-high altitude atmosphere analysis was integrated and tested as a stand-alone unit in the laboratory at INAF-Osservatorio Astronomico di Bologna and afterwards delivered to the MPIA laboratories in Heidelberg, where it was integrated and aligned to the post-focal optical relay of one LINC-NIRVANA arm. A number of tests were performed in order to characterize and optimize the system functionalities and performance. A report on this work is presented in Chapter 2. In the MAORY case, the LGS-based approach is the current baseline for ensuring correction uniformity and sky coverage. However, since the Sodium layer is approximately 10 km thick, the artificial reference source looks elongated, especially when observed from the edge of a large aperture. On a 30-40 m class telescope, for instance, the maximum elongation varies between a few arcsec and 10 arcsec, depending on the actual telescope diameter, on the Sodium layer properties and on the laser launcher position. The centroiding error in a Shack-Hartmann WFS increases proportionally to the elongation (in a photon-noise dominated regime), strongly limiting the performance. A straightforward way to compensate for this effect is to increase the laser power, i.e. to increase the number of detected photons per subaperture. The scope of Chapter 3 is twofold: an analysis of the performance of three different algorithms (Weighted Center of Gravity, Correlation and Quad-cell) for the instantaneous measurement of the LGS image position in the presence of elongated spots, and the determination of the number of photons required to achieve a given average wavefront error over the telescope aperture. An alternative optical solution to the spot elongation problem is proposed in Section 3.4. Starting from the considerations presented in Chapter 3, a first-order analysis of the LGS WFS for MAORY (number of subapertures, number of detected photons per subaperture, RON, focal plane sampling, subaperture FoV) is the subject of Chapter 4. An LGS WFS laboratory prototype was designed to reproduce the relevant aspects of an LGS Shack-Hartmann WFS for the E-ELT and to evaluate the performance of the different centroid algorithms in the presence of elongated spots, as investigated numerically and analytically in Chapter 3. This prototype permits the simulation of realistic Sodium profiles. A full testing plan for the prototype is set out in Chapter 4.
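A minimal sketch of the Weighted Center of Gravity estimator named above, for a single Shack-Hartmann subaperture image: pixels are weighted by a Gaussian map before the centroid is taken, which damps noise in the spot wings at the cost of a bias that must be calibrated out. The weighting width and the clipping of negative pixels are illustrative choices, not the thesis's tuned parameters.

```python
import numpy as np

def weighted_center_of_gravity(img, sigma_w=2.0):
    """Weighted Center of Gravity (WCoG) spot-position estimate.

    img: 2-D subaperture image containing the (possibly elongated)
    LGS spot.  Returns the (x, y) centroid in pixel coordinates.
    """
    ny, nx = img.shape
    y, x = np.mgrid[0:ny, 0:nx]
    # Gaussian weighting map centred on the subaperture centre
    w = np.exp(-((x - nx / 2) ** 2 + (y - ny / 2) ** 2) / (2 * sigma_w**2))
    wi = w * np.clip(img, 0, None)  # ignore negative (noise) pixels
    s = wi.sum()
    return (x * wi).sum() / s, (y * wi).sum() / s
```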

Relevance: 90.00%

Abstract:

The thesis is divided into three chapters. The first gives an account of the debate that arose around the problem of placing supplementary pension provision within the constitutional framework of Article 38 of the Italian Constitution, a debate that divided scholars between those who traced the phenomenon back to the principle of freedom of private social security provision under Article 38, paragraph 5, of the Constitution, and those who instead placed it under paragraph 2 of the same provision, on the basis of a presumed identity of function between public and supplementary pension provision. The latter reconstruction, particularly after the so-called "Amato" reform, culminated in the case law of the Constitutional Court, which ruled on the question in a series of decisions concerning the so-called "contribution on the contribution" and the subordination of the requirements for access to supplementary pension benefits to the fulfilment of the requirements laid down by the compulsory system. The following chapter verifies the current validity and coherence of the Constitutional Court's approach in the light of the evolution of the regulation of pension funds. Finally, the third chapter addresses some open questions relating to the so-called "pre-existing" pension funds, which are liable to raise concerns about the need to safeguard the expectations and rights of their members.

Relevance: 90.00%

Abstract:

The subject of this Ph.D. research thesis is the development and application of multiplexed analytical methods based on bioluminescent whole-cell biosensors. One of the main goals of analytical chemistry is multianalyte testing, in which two or more analytes are measured simultaneously in a single assay. The advantages of multianalyte testing are work simplification, high throughput, and reduction of the overall cost per test. The availability of multiplexed portable analytical systems is of particular interest for on-field analysis of clinical, environmental or food samples, as well as for the drug discovery process. To allow highly sensitive and selective analysis, these devices should combine biospecific molecular recognition with ultrasensitive detection systems. To address the current need for rapid, highly sensitive and inexpensive devices for obtaining more data from each sample, genetically engineered whole-cell biosensors as biospecific recognition elements were combined with ultrasensitive bioluminescence detection techniques. Genetically engineered cell-based sensing systems were obtained by introducing into bacterial, yeast or mammalian cells a vector expressing a reporter protein whose expression is controlled by regulatory proteins and promoter sequences. The regulatory protein is able to recognize the presence of the analyte (e.g., compounds with hormone-like activity, heavy metals…) and consequently to activate the expression of the reporter protein, which can be readily measured and directly related to the bioavailable concentration of the analyte in the sample. Bioluminescence represents the ideal detection principle for miniaturized analytical devices and multiplexed assays, thanks to its high detectability in small sample volumes, allowing accurate signal localization and quantification. The first chapter of this dissertation discusses the development of improved bioluminescent proteins emitting at different wavelengths, in terms of increased thermostability, improved emission decay kinetics and spectral resolution. The second chapter focuses mainly on the use of these proteins in the development of whole-cell based assays with improved analytical performance. In particular, since the main drawback of whole-cell biosensors is the high variability of their analyte-specific response, mainly caused by variations in cell viability due to nonspecific effects of the sample matrix, an additional bioluminescent reporter was introduced to correct the analytical response, thus increasing the robustness of the bioassays. The feasibility of using a combination of two or more bioluminescent proteins to obtain biosensors with internal signal correction, or for the simultaneous detection of multiple analytes, was demonstrated by developing a dual-reporter yeast-based biosensor for androgenic activity measurement and a triple-reporter mammalian cell-based biosensor for the simultaneous monitoring of the activation of two CYP450 enzymes involved in cholesterol degradation, with the use of two spectrally resolved intracellular luciferases and a secreted luciferase as a control for cell viability. The third chapter presents the development of a portable multianalyte detection system.
In order to develop a portable system usable outside the laboratory environment, even by non-skilled personnel, cells were immobilized in a new biocompatible and transparent polymeric matrix within a modified clear-bottom black 384-well microtiter plate to obtain a bioluminescent cell array. The cell array was placed in contact with a portable charge-coupled device (CCD) light sensor able to localize and quantify the luminescent signal produced by the different bioluminescent whole-cell biosensors. This multiplexed biosensing platform containing whole-cell biosensors was successfully used to measure the overall toxicity of a given sample, to obtain dose-response curves for heavy metals and to detect hormonal activity in clinical samples (PCT/IB2010/050625: "Portable device based on immobilized cells for the detection of analytes." Michelini E, Roda A, Dolci LS, Mezzanotte L, Cevenini L, 2010). At the end of the dissertation some future development steps are also discussed, with a view to a point-of-care testing (POCT) device combining portability, minimal sample pre-treatment and highly sensitive multiplexed assays in a short assay time. In this POCT perspective, field-flow fractionation (FFF) techniques, and in particular the gravitational variant (GrFFF), which exploits the Earth's gravitational field to structure the separation, were investigated for cell fractionation, characterization and isolation. Thanks to the simplicity of its equipment, amenable to miniaturization, the GrFFF technique appears particularly suited for implementation in POCT devices, and may be used as a pre-analytical integrated module applied directly to drive target analytes of raw samples to the modules where biospecific recognition reactions based on ultrasensitive bioluminescence detection occur, providing an increase in overall analytical output.
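A minimal sketch of the internal signal correction described above: the analyte-regulated luciferase signal is normalized by the constitutive viability-control signal measured in the same well. The function name, the toxicity threshold and the example values are illustrative assumptions, not the published assay's parameters.

```python
def corrected_response(analyte_signal, control_signal,
                       control_baseline, toxicity_threshold=0.7):
    """Viability-corrected response of a dual-reporter cell biosensor.

    analyte_signal:   luminescence of the analyte-regulated reporter.
    control_signal:   luminescence of the constitutive control reporter
                      in the same well (proxy for viable cell number).
    control_baseline: control signal of an untreated reference well.
    Returns the normalized response; raises if the sample matrix is so
    toxic that the correction itself becomes unreliable.
    """
    viability = control_signal / control_baseline
    if viability < toxicity_threshold:
        raise ValueError("sample matrix too toxic: result not reliable")
    return analyte_signal / control_signal

# Example: raw induction looks modest, but correcting for the reduced
# viability (0.8 of baseline) reveals genuine analyte activity.
print(corrected_response(1.2e5, 4.0e4, 5.0e4))  # -> 3.0
```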

Relevance: 90.00%

Abstract:

It has been almost fifty years since Harry Eckstein's classic monograph, A Theory of Stable Democracy (Princeton, 1961), where he sketched out the basic tenets of the "congruence theory", which was to become one of the most important and innovative contributions to the understanding of democratic rule. His next work, Division and Cohesion in Democracy (Princeton University Press, 1966), is designed to serve as a plausibility probe for this theory and is a case study of a Northern democratic system, Norway. What is more, this line of his work best exemplifies the contribution Eckstein made to the methodology of comparative politics through his seminal article, "Case Study and Theory in Political Science" (in Greenstein and Polsby, eds., Handbook of Political Science, 1975), on the importance of the case study as an approach to empirical theory. That article demonstrates the special utility of "crucial case studies" in testing theory, thereby undermining the accepted wisdom in comparative research that the larger the number of cases, the better. Although not along the same lines, but shifting the case study unit of research, I intend to take up the challenge here and build upon an equally unique political system, the Swedish one. Bearing in mind the peculiarities of the Swedish political system, my unit of analysis is further restricted to the Swedish Social Democratic Party, the Svenska Arbetare Partiet (SAP). My research nevertheless stays within the methodological framework of case study theory inasmuch as it focuses on a single political system and party. The SAP's endurance in government office and its electoral success throughout half a century (as of the 1991 election, there had been about 56 years, more than half a century, of social democratic "reign" in Sweden) are undeniably a performance no other social democratic party has yet achieved under democratic conditions. It is therefore legitimate to inquire into the exceptionality of this unique combination of political power. What were the components of this dominant power position that made the SAP's stamina in governmental office possible? I will argue here that it was the end-product of a combination of multifarious factors, such as a key position in the party system, strong party leadership and organization, and a carefully designed strategy regarding class politics and welfare policy. My research is divided into three main parts: a historical overview, the 'welfare' part and the 'environment' part. The first part is a historical account of the main political events and issues relevant to my case study. Chapter 2 is devoted to the historical events of the 1920-1960 period: the Saltsjoebaden Agreement, the series of workers' strikes in the 1920s and the SAP's inception. It describes the SAP's ascent to power in the mid-1930s and the party's ensuing strategies for winning and keeping political office, that is, its economic program and key economic goals. The following chapter, Chapter 3, explores the next period, from the 1960s to the 1990s, and covers the party's troubled political times, its peak and the beginnings of its decline. The 1960s are relevant for the SAP's planning of a long-term economic strategy, the Rehn-Meidner model: a new way of macroeconomic steering, based on the Keynesian model but adapted to the new economic realities of welfare capitalist societies.
The second and third parts of this study develop several hypotheses related to the SAP's 'dominant position' (endurance in politics and in office) and then test them. Mainly, the twin issues of economics and environment are raised and their political relevance for the party analyzed. On the one hand, globalization and its spillover effects on the Swedish welfare system are important causal factors in explaining the transformative socio-economic challenges the party had to cope with. On the other hand, Europeanization and environmental change greatly influenced the SAP's foreign policy choices and its domestic electoral strategies. The implications of globalization for the Swedish welfare system are the subject of two chapters (chapters four and five), while the consequences of Europeanization are treated at length in the third part of this work (chapters six and seven). At first sight, the link between foreign policy and electoral strategy may seem difficult to prove and, to say the least, odd. In the SAP's case, however, there is a large body of literature and public opinion statistics showing that governmental domestic policy and party politics depend closely on foreign policy decisions and sovereignty issues. Again, these country characteristics and peculiar causal relationships are outlined in the first chapters and explained in the second and third parts. The sixth chapter explores the presumed relationship between Europeanization and environmental policy, on the one hand, and the SAP's environmental policy formulation and simultaneous agenda-setting at the international level, on the other. This chapter describes Swedish leadership in environmental policy formulation on two simultaneous fronts and across two different time spans. The last chapter, chapter eight, develops a conclusion, explores the alternative theories that might plausibly explain the outlined hypotheses, and points out the reasons why these theories do not hold as valid alternative explanations to my systemic corporatism thesis as the main causal factor determining the SAP's 'dominant position'. Among the alternative theories, I consider L. Traedgaardh's and Bo Rothstein's historical exceptionalism thesis and the public opinion thesis, neither of which alone is able to explain the half-century social democratic endurance in government in the Swedish case.

Relevance: 90.00%

Abstract:

The subject of this thesis is multicolour bioluminescence analysis and how it can provide new tools for drug discovery and development. The mechanism of color tuning in bioluminescent reactions is not yet fully understood, but it is the object of intense research and several hypotheses have been put forward. In the past decade, key residues in the active site of the enzyme, or on the surface surrounding the active site, have been identified as responsible for different color emissions. Moreover, since the bioluminescence reaction is strictly dependent on the interaction between the enzyme and its substrate D-luciferin, modification of the substrate can also lead to a different emission spectrum. In recent years, firefly luciferase and other luciferases have undergone mutagenesis in order to obtain mutants with different emission characteristics. Thanks to these new discoveries in the bioluminescence field, multicolour luciferases can nowadays be employed in bioanalysis for assay development and imaging purposes. The use of multicolor bioluminescent enzymes has expanded the potential of a range of applications in vitro and in vivo: multiple analyses and more information can be obtained from the same analytical session, saving cost and time. This thesis focuses on several applications of multicolour bioluminescence for high-throughput screening and in vivo imaging. Multicolor luciferases can be employed as new tools for drug discovery and development, and examples are provided in the different chapters. New red-emitting, codon-optimized luciferases have been demonstrated to be improved tools for bioluminescence imaging in small animals, and the possibility of combining red and green luciferases for BLI has been achieved, even if some aspects of the methodology remain challenging and need further improvement. In vivo bioluminescence imaging has progressed rapidly since its first application no more than 15 years ago and is becoming an indispensable tool in pharmacological research. At the same time, the development of more sensitive microscopes and low-light imagers, allowing better visualization and quantification of multicolor signals, would boost research and discovery in the life sciences in general, and in drug discovery and development in particular.

Relevance: 90.00%

Abstract:

The Gaia space mission is a major project for the European astronomical community. Challenging as it is, the processing and analysis of the huge data flow incoming from Gaia is the subject of thorough study and preparatory work by the DPAC (Data Processing and Analysis Consortium), in charge of all aspects of the Gaia data reduction. This PhD Thesis was carried out in the framework of the DPAC, within the team based in Bologna. The task of the Bologna team is to define the calibration model and to build a grid of spectro-photometric standard stars (SPSS) suitable for the absolute flux calibration of the Gaia G-band photometry and the BP/RP spectrophotometry. Such a flux calibration can be performed by repeatedly observing each SPSS during the lifetime of the Gaia mission and by comparing the observed Gaia spectra to the spectra obtained by our ground-based observations. Because of both the different observing sites involved and the huge number of frames expected (≃100,000), it is essential to maintain the maximum homogeneity in data quality, acquisition and treatment; particular care has to be taken to test the capabilities of each telescope/instrument combination (through the "instrument familiarization plan") and to devise methods to keep under control, and where necessary correct for, the typical instrumental effects that can affect the high precision required for the Gaia SPSS grid (a few per cent with respect to Vega). I contributed to the ground-based survey of Gaia SPSS in many respects: the observations, the instrument familiarization plan, the data reduction and analysis activities (both photometry and spectroscopy), and the maintenance of the data archives. However, the field I was personally responsible for was photometry, and in particular relative photometry for the production of short-term light curves. In this context I defined and tested a semi-automated pipeline which allows for the pre-reduction of imaging SPSS data and the production of aperture photometry catalogues ready to be used for further analysis. A series of semi-automated quality control criteria are included in the pipeline at various levels, from pre-reduction, to aperture photometry, to light curve production and analysis.
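A minimal sketch of the aperture-photometry step of such a pipeline, using the astropy and photutils libraries on an already pre-reduced (bias-subtracted, flat-fielded) frame. The file name, detection parameters and aperture radius are illustrative; the actual DPAC pipeline and its quality controls are considerably more elaborate.

```python
import numpy as np
from astropy.io import fits
from photutils.detection import DAOStarFinder
from photutils.aperture import CircularAperture, aperture_photometry

# Pre-reduced frame (name is hypothetical).
data = fits.getdata("spss_frame_prered.fits").astype(float)

# Detect stars, then measure them through a fixed circular aperture.
sky, noise = np.median(data), np.std(data)
finder = DAOStarFinder(fwhm=4.0, threshold=5.0 * noise)
sources = finder(data - sky)
positions = np.transpose([sources["xcentroid"], sources["ycentroid"]])
apertures = CircularAperture(positions, r=8.0)
catalog = aperture_photometry(data - sky, apertures)

# Instrumental magnitudes feed the short-term light-curve analysis.
catalog["inst_mag"] = -2.5 * np.log10(catalog["aperture_sum"])
print(catalog)
```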

Relevance: 90.00%

Abstract:

The present PhD thesis summarizes two examples of research in microfluidics. In both cases water was the subject of interest, once in the liquid state (droplets adsorbed on chemically functionalized surfaces) and once in the solid state (ice snowflakes and their fractal behaviour). The first problem deals with a slipping nano-droplet of water adsorbed on a surface with photo-switchable wettability characteristics. The main focus was on identifying the underlying driving forces and mechanical principles at the molecular level of detail. Molecular Dynamics simulation was employed as the investigative tool, owing to its record of successfully describing the microscopic behaviour of liquids at interfaces. To reproduce the specialized surface on which a water droplet can effectively "walk", a new implicit surface potential was developed. Applying this new method, the experimentally observed droplet slippage could be reproduced successfully. The movement of the droplet was then analyzed under various conditions, with emphasis on the behaviour of the water molecules in contact with the surface; the main objective was to identify the driving forces and molecular mechanisms underlying the slippage process. The second part of this thesis is concerned with theoretical studies of snowflake melting. In the present work, snowflakes are represented by filled von Koch-like fractals of mesoscopic beads. A new algorithm based on Monte Carlo and Random Walk Simulations (MCRWS) has been developed from scratch to simulate the thermal collapse of fractal structures. The developed method was applied and compared to Molecular Dynamics simulations of the melting of ice snowflake crystals, and new parameters were derived from this comparison. Bigger snow fractals were then studied by following their time evolution at different temperatures, again making use of the developed MCRWS method. This was accompanied by an in-depth analysis of fractal properties (border length and gyration radius) in order to shed light on the dynamics of the melting process.
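A heavily simplified sketch in the spirit of the Monte Carlo / random-walk melting idea described above: beads on the border of a 2-D lattice fractal detach with a temperature-dependent Metropolis probability and are assumed to diffuse away. The lattice representation, the bond-energy model and all parameters are illustrative assumptions, not the thesis's MCRWS algorithm.

```python
import math
import random

def melt_step(occupied, temperature, j_bond=1.0):
    """One Monte Carlo sweep over the beads of a 2-D lattice fractal.

    occupied: set of (x, y) lattice sites carrying a bead.  A border
    bead (fewer than 4 neighbours) detaches with Metropolis probability
    exp(-n_bonds * J / T), the cost of breaking its bonds, and is then
    assumed to random-walk away from the cluster.
    """
    for site in list(occupied):
        x, y = site
        neighbours = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
        bonds = sum(n in occupied for n in neighbours)
        if bonds == 4:
            continue  # interior bead: fully bonded, cannot melt
        if random.random() < math.exp(-j_bond * bonds / temperature):
            occupied.discard(site)  # bead detaches and diffuses away

# Toy run: a filled 5x5 square shrinking over ten sweeps.
beads = {(x, y) for x in range(5) for y in range(5)}
for _ in range(10):
    melt_step(beads, temperature=0.8)
print(len(beads), "beads remain")
```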

Relevance: 90.00%

Abstract:

The object of this research is the study of the National Institute of Design (NID) in Ahmedabad, designed by Gautam Sarabhai and his sister Gira, taken as a paradigm of the new course of policy that Prime Minister Nehru pursued in the first decades of postcolonial government. The aim of the thesis is to analyze the phenomenon that unites modernity and tradition in architecture. Indian modernity was, in fact, born and developed with the character of a two-faced Janus: on the one hand, Prime Minister Nehru's policy favoured the development of industry and science; on the other, Gandhi's vision aimed at the rediscovery of the local, of traditions and of craftsmanship. These orientations influenced postcolonial architecture. In the 1950s and 1960s Ahmedabad became the cradle of modern Indian architecture. Kanvinde, the Sarabhais, Correa, Doshi and Raje found there the conditions to build their own identities as designers and as intellectuals. Two driving forces made this ferment possible: a clientele of enlightened entrepreneurs eager to modernize the city, and the presence in Ahmedabad, from 1951 onwards, of the masters of modern architecture, most notably Le Corbusier and Kahn, invited by that same clientele, for whom they realized buildings of considerable importance. In Ahmedabad both visions of modern India confronted each other forcefully. The greatest effort of the Indian architects went into the attempt to reconcile the two aspects: those deriving from international influences and those stemming from the spirit of tradition. The NID project is one of the best examples of this exercise in synthesis. In its spatial composition it takes up the lessons of Wright, Le Corbusier, Kahn and Eames, hybridizing them with elements of the Indian tradition. In the skilful use of the modular pavilion structure, of the square-based ordering grid, and of the constant integration of open spaces, nature and architecture, echoes of a millennia-old culture surface in the NID building.