Abstract:
BACKGROUND: MYC deregulation is a common event in gastric carcinogenesis, usually as a consequence of gene amplification, chromosomal translocations, or posttranslational mechanisms. FBXW7 is a p53-controlled tumor-suppressor that plays a role in the regulation of cell cycle exit and reentry via MYC degradation. METHODS: We evaluated MYC, FBXW7, and TP53 copy number, mRNA levels, and protein expression in gastric cancer and paired non-neoplastic specimens from 33 patients and also in gastric adenocarcinoma cell lines. We also determined the invasion potential of the gastric cancer cell lines. RESULTS: MYC amplification was observed in 51.5% of gastric tumor samples. Deletion of one copy of FBXW7 and TP53 was observed in 45.5% and 21.2% of gastric tumors, respectively. MYC mRNA expression was significantly higher in tumors than in non-neoplastic samples. FBXW7 and TP53 mRNA expression was markedly lower in tumors than in paired non-neoplastic specimens. Moreover, deregulated MYC and FBXW7 mRNA expression was associated with the presence of lymph node metastasis and tumor stage III-IV. Additionally, MYC immunostaining was more frequently observed in intestinal-type than diffuse-type gastric cancers and was associated with MYC mRNA expression. In vitro studies showed that increased MYC and reduced FBXW7 expression is associated with a more invasive phenotype in gastric cancer cell lines. This result encouraged us to investigate the activity of the gelatinases MMP-2 and MMP-9 in both cell lines. Both gelatinases are synthesized predominantly by stromal cells rather than cancer cells, and it has been proposed that both contribute to cancer progression. We observed a significant increase in MMP-9 activity in ACP02 compared with ACP03 cells. These results confirmed that ACP02 cells have greater invasion capability than ACP03 cells. 
CONCLUSION: FBXW7 and MYC mRNA expression may play a role in the aggressive biological behavior of gastric cancer cells and may be a useful indicator of poor prognosis. Furthermore, MYC is a candidate target for new therapies against gastric cancer.
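Relative mRNA comparisons of this kind (tumor versus paired non-neoplastic tissue) are commonly quantified by RT-qPCR with the Livak 2^-ΔΔCt method. The abstract does not report the actual protocol, reference gene, or Ct values, so the numbers below are purely illustrative:

```python
def fold_change(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
    """Livak 2**-ddCt relative expression of a target gene, normalized to a
    reference gene and to a control (here: paired non-neoplastic) sample."""
    d_sample = ct_target_sample - ct_ref_sample    # normalize to reference gene
    d_control = ct_target_control - ct_ref_control
    return 2.0 ** -(d_sample - d_control)

# Made-up Ct values: MYC amplifies ~3 cycles earlier in tumor tissue while the
# reference gene is unchanged -> 8-fold higher expression in the tumor.
print(fold_change(22.0, 18.0, 25.0, 18.0))  # 8.0
```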
Abstract:
This thesis consists of four self-contained essays in economics. Tournaments and unfair treatment. This paper introduces the negative feelings associated with the perception of being unfairly treated into a tournament model and examines the impact of these perceptions on workers’ efforts and their willingness to work overtime. The effect of unfair treatment on workers’ behavior is ambiguous in the model, in that two countervailing effects arise: a negative impulsive effect and a positive strategic effect. The impulsive effect implies that workers react to the perception of being unfairly treated by reducing their level of effort. The strategic effect implies that workers raise this level in order to improve their career opportunities and thereby avoid feeling even more unfairly treated in the future. An empirical test of the model using survey data from a Swedish municipal utility shows that the overall effect is negative. This suggests that employers should consider the negative impulsive effect of unfair treatment on effort and overtime when designing contracts and deciding on promotions. Late careers in Sweden between 1970 and 2000. This essay studies Swedish workers’ late careers between 1970 and 2000. The aim is to examine older workers’ career patterns and whether they changed during this period. For example, is there a difference in career mobility or labor market exit between cohorts? What affects the late career, and does this differ between cohorts? The analysis shows that between 1970 and 2000 the late careers of Swedish workers comprised few job changes and consisted more of “trying to keep the job you had in your mid-fifties” than of climbing the promotion ladder. There are no cohort differences in this pattern. Also, a large fraction of older workers exited the labor market before the normal retirement age of 65. 
During the 1970s and the first part of the 1980s, 56 percent of older workers made an early exit, and the average drop-out age was 63. During the late 1980s and the 1990s, the share of older workers who made an early exit had risen to 76 percent and the average drop-out age had dropped to 61.5. Different factors affected the probability of an early exit between 1970 and 2000. For example, skills affected the risk of exiting the labor market during the 1970s and up to the mid-1980s, but not in the late 1980s or the 1990s. During the first period, older workers in the lowest occupations or with the lowest level of education were more likely to exit the labor market than more highly skilled workers. In the second period, older workers at all skill levels had the same probability of leaving the labor market. The growth and survival of establishments: does gender segregation matter? We empirically examine the employment dynamics that arise in Becker’s (1957) model of labor market discrimination. According to the model, firms that employ a large fraction of women will be relatively more profitable due to lower wage costs, and will thus enjoy a greater probability of surviving and growing by underselling other firms in the competitive product market. To test these implications, we use a unique Swedish matched employer-employee data set. We find that female-dominated establishments do not enjoy any greater probability of surviving and do not grow faster than other establishments. Additionally, we find that establishments that are integrated in terms of gender, age and education levels are more successful than other establishments. Thus, attempts by legislators to integrate firms along all dimensions of diversity may have positive effects on the growth and survival of firms. Risk and overconfidence – Gender differences in financial decision-making as revealed in the TV game-show Jeopardy. 
We have used unique data from the Swedish version of the TV show Jeopardy to uncover gender differences in financial decision-making by looking at the contestants’ final wagering strategies. After ruling out empirical best responses, which do appear in the US version of Jeopardy, a simple model is derived to show that risk preferences and the subjective and objective probabilities of answering correctly (individual and group competence) determine wagering strategies. The empirical model shows that, on average, women adopt more conservative and diversified strategies, while men’s strategies aim for the greatest gains. Further, women’s strategies are more responsive to the competence measures, which suggests that they are less overconfident. Together these traits make women more successful players. These results are in line with earlier findings on gender and financial trading.
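The wagering trade-off described here can be illustrated with a toy expected-utility model: a contestant with stake s and subjective probability p of answering correctly picks the wager w maximizing expected utility. The CRRA utility form and all parameter values below are illustrative assumptions, not the thesis model:

```python
import math

def crra(x, gamma):
    """CRRA utility; gamma = 0 is risk neutrality, larger gamma = more risk averse."""
    if gamma == 1.0:
        return math.log(x)
    return x ** (1.0 - gamma) / (1.0 - gamma)

def best_wager(stake, p, gamma, step=100):
    """Grid-search the wager w in [0, stake] maximizing expected utility
    p * u(stake + w) + (1 - p) * u(stake - w); the +1 offset keeps the
    utility finite when the whole stake is lost."""
    return max(range(0, stake + 1, step),
               key=lambda w: p * crra(stake + w + 1, gamma)
                             + (1 - p) * crra(stake - w + 1, gamma))

# A risk-neutral contestant who is likely right (p = 0.8) bets everything;
# a risk-averse one (gamma = 2) holds a large part of the stake back.
print(best_wager(10000, 0.8, gamma=0.0))  # 10000
print(best_wager(10000, 0.8, gamma=2.0))
```

Raising gamma (more risk aversion) or lowering p (less perceived competence) both shrink the optimal wager, which is the qualitative pattern the essay attributes to the more conservative strategies.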
Abstract:
Wave breaking is an important coastal process, influencing hydro-morphodynamic processes such as turbulence generation and wave energy dissipation, run-up on the beach and overtopping of coastal defence structures. During breaking, waves are complex mixtures of air and water (“white water”) whose properties affect the velocity and pressure fields in the vicinity of the free surface and, depending on the breaker characteristics, different mechanisms for air entrainment are usually observed. Several laboratory experiments have been performed to investigate the role of air bubbles in the wave breaking process (Chanson & Cummings, 1994, among others) and in wave loading on vertical walls (Oumeraci et al., 2001; Peregrine et al., 2006, among others), showing that the air phase is not negligible, since the turbulent energy dissipation involves the air-water mixture. Recent advances in numerical modelling have given valuable insights into wave transformation and interaction with coastal structures. Among these models, some solve the RANS equations coupled with a free-surface tracking algorithm and describe the velocity, pressure, turbulence and vorticity fields (Lara et al., 2006a-b; Clementi et al., 2007). Single-phase numerical models, in which the constitutive equations are solved only for the liquid phase, neglect the effects induced by air movement and by air bubbles trapped in water. Numerical approximations at the free surface may induce errors in predicting the breaking point and wave height; moreover, entrapped air bubbles and water splashing in air are not properly represented. The aim of the present thesis is to develop a new two-phase model called COBRAS2 (Cornell Breaking waves And Structures 2 phases), an enhancement of the single-phase code COBRAS0 originally developed at Cornell University (Lin & Liu, 1998). 
In the first part of the work both fluids are considered incompressible, while the second part treats the modelling of air compressibility. The mathematical formulation and the numerical solution of the governing equations of COBRAS2 are derived, and several model-experiment comparisons are shown. In particular, validation tests are performed in order to prove model stability and accuracy. The simulation of a large air bubble rising in an otherwise quiescent water pool shows the model's capability to reproduce the physics of the process realistically. Analytical solutions for stationary and internal waves are compared with the corresponding numerical results, in order to test processes involving a wide range of density differences. Waves induced by dam-break in different scenarios (on dry and wet beds, as well as on a ramp) are studied, focusing on the role of air as the medium in which the water wave propagates and on the numerical representation of bubble dynamics. Simulations of solitary and regular waves, characterized by both spilling and plunging breakers, are analyzed through comparisons with experimental data and other numerical models, in order to investigate the influence of air on wave breaking mechanisms and underline model capability and accuracy. Finally, the modelling of air compressibility is included in the newly developed model and validated, revealing an accurate reproduction of the processes. Some preliminary tests on wave impact on vertical walls are performed: since air-flow modelling allows a more realistic reproduction of breaking wave propagation, the dependence of impact pressure values on wave breaker shape and aeration characteristics is studied and, on the basis of a qualitative comparison with experimental observations, the numerical simulations achieve good results.
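Dam-break validations of this kind are classically checked against Ritter's (1892) analytical solution for an instantaneous dam break on a dry, frictionless bed. A minimal sketch of that reference solution (not the COBRAS2 code):

```python
import math

def ritter_depth(x, t, h0, g=9.81):
    """Ritter (1892) solution for an instantaneous dam break on a dry,
    frictionless horizontal bed: dam at x = 0, initial depth h0 for x < 0."""
    c0 = math.sqrt(g * h0)        # celerity of the initial depth
    if x <= -c0 * t:              # still-undisturbed reservoir
        return h0
    if x >= 2.0 * c0 * t:         # ahead of the wet/dry front
        return 0.0
    return (2.0 * c0 - x / t) ** 2 / (9.0 * g)   # rarefaction fan

# At the dam section (x = 0) the depth stays constant in time at 4/9 of h0:
print(ritter_depth(0.0, 5.0, h0=1.0))  # ≈ 4/9
```

Comparing a simulated free-surface profile against this closed form is a standard first accuracy test before moving to breaking-wave cases.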
Abstract:
Polymer blends constitute a valuable way to produce relatively low-cost new materials. A still open question concerns the miscibility of polyethylene blends. Deviations from the log-additivity rule of the Newtonian viscosity are often taken as a signature of immiscibility of the two components. The aim of this thesis is to characterize the rheological behavior in shear and elongation of five series of LLDPE/LDPE blends whose parent polymers were chosen with different viscosities and different SCB (short-chain branching) content and length. Synergistic effects have been measured for both the zero-shear viscosity and the melt strength. Both the SCB length and the viscosity ratio between the components were found to be key parameters for the miscibility of the pure polymers. In particular, miscibility increases with increasing SCB length and with decreasing LDPE molecular weight and viscosity. This rheological behavior has significant effects on the processability window of these blends whenever uni- or biaxial elongational flows are involved. Film blowing is one of the processes for which the above-mentioned synergistic effects can be crucial. Small-scale film blowing experiments performed on one of the series of blends demonstrated that the positive deviation of the melt strength enlarges the processability window. In particular, bubble stability was found to improve, and instabilities to disappear, as the melt strength of the samples increased. Blending LDPE and LLDPE can even reduce undesired melt-flow instability phenomena, widening, as a consequence, the processability window in extrusion. 
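The log-additivity (Arrhenius) mixing rule referred to above predicts log η0 of the blend as the weight-fraction-weighted sum of the components' log η0; a measured value above this prediction is the positive-deviation ("synergy") signature discussed here. A minimal sketch with made-up viscosity values (not the thesis data):

```python
import math

def log_additive_viscosity(weight_fractions, viscosities):
    """Zero-shear viscosity predicted by the log-additivity mixing rule:
    log(eta_blend) = sum_i w_i * log(eta_i)."""
    assert abs(sum(weight_fractions) - 1.0) < 1e-9
    return math.exp(sum(w * math.log(eta)
                        for w, eta in zip(weight_fractions, viscosities)))

# Hypothetical 50/50 LLDPE/LDPE blend (viscosities in Pa·s are made up):
eta_pred = log_additive_viscosity([0.5, 0.5], [20_000.0, 80_000.0])
print(round(eta_pred))          # 40000: the geometric mean for a 50/50 blend
eta_measured = 55_000.0         # hypothetical measurement
print(eta_measured > eta_pred)  # True: positive deviation, i.e. synergy
```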
One of the series of blends was characterized by capillary rheometry in order to allow a careful morphological analysis of the surface of the extruded polymer jets by Scanning Electron Microscopy (SEM), with the aim of detecting the very early stages of the small-scale melt instability at low shear rates (sharkskin) and of following its subsequent evolution as the shear rate increased. With this experimental procedure it was possible to evaluate the shear rate ranges corresponding to the different flow regions: smooth extrudate surface (absence of instability), sharkskin (small-scale instability produced at the capillary exit), stick-slip transition (instability involving the whole capillary wall) and gross melt fracture (i.e. a large-scale "upstream" instability originating from the entrance region of the capillary). A quantitative map was finally worked out, from which the flow type for a given shear rate and blend composition can be predicted.
Abstract:
BACKGROUND: Atrial tachycardias are common in GUCH patients, both after corrective or palliative surgery and in the natural history of the disease, but the incidence is significantly higher in patients who have undergone operations involving extensive atrial manipulation (Mustard, Senning, Fontan). The most frequent mechanism of atrial tachycardia in adult congenital patients is right atrial macroreentry. The ECG is of little use in predicting the location of the reentry circuit. In patients with congenital heart disease after biventricular repair or in natural history, peritricuspid reentry is the most frequent circuit, whereas in patients with a previous Fontan operation the most common site of macroreentry is the lateral wall of the right atrium. Antiarrhythmic drugs are poorly effective in treating these arrhythmias and carry a high incidence of adverse effects, above all aggravation of pre-existing sinus node dysfunction and worsening of ventricular dysfunction, as well as proarrhythmic effects. Several studies have demonstrated that IART (intra-atrial reentrant tachycardias) can be treated effectively by transcatheter ablation. The first studies, in which procedures were performed under conventional fluoroscopy, bidirectional translesional conduction block was not routinely documented, and not all reentry circuits were ablated, report an acute success rate of 70% and freedom from recurrence at 3 years of 40%. More recent works report an acute success rate of 94% and a recurrence rate at 13 months of 6%. 
These excellent results were obtained with modern electroanatomic mapping techniques and irrigated-tip catheters; moreover, demonstration of bidirectional translesional conduction block, ablation of all circuits induced by programmed atrial stimulation, and ablation of the potential reentry sites identified on the voltage map were considered indispensable requirements for defining procedural success. OBJECTIVES: to report the efficacy rate, complications, and recurrence rate of transcatheter ablation procedures performed with modern technologies and with a rigorous strategy for defining procedural endpoints. RESULTS: This study reports a good efficacy rate of transcatheter ablation of atrial tachycardias in a varied population of patients with operated congenital heart disease or in natural history: the rate of complete acute procedural success is 71%, and the recurrence rate at a mean follow-up of 13 months is 28%. However, if the analysis is restricted to IART, procedural success is 100%; in the remaining cases, in which the procedure was judged ineffective or partially effective, the arrhythmia that was not eliminated but electrically cardioverted was not a reentrant arrhythmia but atrial fibrillation. Moreover, again restricting the analysis to IART, the recurrence rate at 13 months also drops from 28% to 3%. In only one patient was it possible to document an asymptomatic, non-sustained episode of IART at follow-up: in this case the ECG pattern differed from the clinical tachycardia that had motivated the first procedure. 
Although the different morphology of atrial activation on the ECG does not exclude a recurrence, given the possibility of a different exit point of the same circuit or a different direction of rotation, the emergence of a new macroreentrant circuit is nonetheless more likely. CONCLUSIONS: Transcatheter ablation, although it cannot be considered a curative procedure, since it cannot modify the atrial substrate that predisposes to the onset and maintenance of atrial fibrillation (i.e. the fibrosis, hypertrophy, and atrial dilation resulting from the underlying pathology and anatomic condition), can nevertheless provide all patients with a substantial clinical benefit. It was always possible to discontinue antiarrhythmic drugs, except in 2 cases, and even in patients with a documented recurrence at follow-up, quality of life and symptoms improved markedly and good control of the tachyarrhythmia was obtained with a low dose of beta-blocker. Moreover, all patients who had developed tachyarrhythmia-induced ventricular dysfunction showed an improvement in systolic function, up to normalization or a return to the values preceding documentation of the arrhythmia. Underlying the good results, both acute and at follow-up, are meticulous procedural planning and a rigorous definition of endpoints. Demonstration of bidirectional translesional conduction block (an indispensable requirement to affirm that a continuous, transmural line has been created), ablation of all sustained reentry circuits inducible by programmed atrial stimulation, and ablation of certain critical sites, as protected corridors involved in the most commonly observed clinical IART, even in the absence of actual periprocedural inducibility, are necessary objectives for a procedure that is effective both acutely and over time. 
The availability of modern technologies, such as irrigated ablation catheters and electroanatomic mapping systems, is also a very important technical requirement for procedural success.
Abstract:
In territories where food production is mostly scattered across several small/medium-size or even household farms, a large amount of heterogeneous residues is produced every year, since farmers usually carry out different activities on their properties. The amount and composition of farm residues therefore vary widely during the year, according to the particular production process carried out at any given time. Coupling high-efficiency micro-cogeneration energy units with easy-to-handle biomass conversion equipment, suitable for treating different materials, would provide many important advantages to farmers and to the community as well; the increase in feedstock flexibility of gasification units is therefore seen today as a further paramount step towards their widespread adoption in rural areas and as a real necessity for their utilization at small scale. Two main research topics were considered of primary concern for this purpose, and they are therefore discussed in this work: the investigation of the impact of fuel properties on the development of the gasification process, and the technical feasibility of integrating small-scale gasification units with cogeneration systems. According to these two main aspects, the present work was divided into two main parts. The first is focused on the biomass gasification process, which was investigated in its theoretical aspects and then analytically modelled in order to simulate the thermo-chemical conversion of different biomass fuels, such as wood (park waste wood and softwood), wheat straw, sewage sludge and refuse-derived fuels. The main idea is to correlate the results of reactor design procedures with the physical properties of the biomasses and the corresponding working conditions of the gasifiers (the temperature profile, above all), in order to point out the main differences which prevent the use of the same conversion unit for different materials. 
To this end, a kinetic-free gasification model was initially developed in Excel sheets, considering different values of the air-to-biomass ratio and taking downdraft gasification technology as the particular application examined. An attempt was made to relate the differences in syngas production and working conditions (process temperatures, above all) among the considered fuels to some biomass properties, such as elemental composition and ash and water contents. The novelty of this analytical approach was the use of kinetic constant ratios to determine the oxygen distribution among the different oxidation reactions (regarding volatile matter only), while equilibrium of the water-gas shift reaction was assumed in the gasification zone; through these assumptions the energy and mass balances involved in the process algorithm were also linked together. Moreover, the main advantage of this analytical tool is the ease with which the input data for a particular biomass material can be inserted into the model, so that a rapid evaluation of its thermo-chemical conversion properties can be obtained, based mainly on its chemical composition. Good agreement of the model results with literature and experimental data was found for almost all the considered materials (except for refuse-derived fuels, whose chemical composition does not fit the model assumptions). Subsequently, a dimensioning procedure for open-core downdraft gasifiers was set up, based on an analysis of the fundamental thermo-physical and thermo-chemical mechanisms which are assumed to regulate the main solid conversion steps involved in the gasification process. 
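The water-gas shift equilibrium assumed in the gasification zone fixes the CO/CO2/H2/H2O split at a given temperature. A minimal sketch of that sub-step, using a commonly quoted literature fit for the equilibrium constant (an assumption here; the thesis correlation is not given in the abstract):

```python
import math

def k_wgs(T):
    """Equilibrium constant of the water-gas shift CO + H2O <-> CO2 + H2.
    Commonly quoted fit vs. temperature T [K] (assumed here; not
    necessarily the correlation used in the thesis)."""
    return math.exp(4276.0 / T - 3.961)

def shift_extent(n_co, n_h2o, n_co2, n_h2, T):
    """Solve K = (n_co2+x)(n_h2+x) / ((n_co-x)(n_h2o-x)) for the reaction
    extent x [mol] by bisection (total moles cancel for this reaction)."""
    K = k_wgs(T)
    lo, hi = -min(n_co2, n_h2), min(n_co, n_h2o)
    for _ in range(200):
        x = 0.5 * (lo + hi)
        q = (n_co2 + x) * (n_h2 + x) / ((n_co - x) * (n_h2o - x))
        lo, hi = (x, hi) if q < K else (lo, x)   # q grows monotonically with x
    return 0.5 * (lo + hi)

# Equimolar CO/H2O feed at 1073 K: roughly half the CO is shifted to CO2/H2.
print(round(shift_extent(1.0, 1.0, 0.0, 0.0, 1073.0), 3))
```

In the thesis model this equilibrium is what couples the gas-phase composition to the energy and mass balances; the sketch only reproduces the composition side of that coupling.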
Gasification units were schematically subdivided into four reaction zones, corresponding respectively to biomass heating, solids drying, pyrolysis and char gasification, and the time required for the full development of each of these steps was correlated with the kinetic rates (for pyrolysis and char gasification only) and with the heat and mass transfer phenomena from the gas to the solid phase. On the basis of this analysis, and according to the kinetic-free model results and the biomass physical properties (particle size, above all), it was found that for all the considered materials the char gasification step is kinetically limited, so that temperature is the main working parameter controlling this step. Solids drying is mainly regulated by heat transfer from the bulk gas to the inner layers of the particles, and the corresponding time depends especially on particle size. Biomass heating is achieved almost entirely by radiative heat transfer from the hot walls of the reactor to the bed of material. For pyrolysis, instead, working temperature, particle size and the nature of the biomass itself (through its own pyrolysis heat) all have comparable weights on the process development, so that the corresponding time may depend on any of these factors, according to the particular fuel being gasified and the particular conditions established inside the gasifier. The same analysis also led to an estimation of the reaction zone volumes for each biomass fuel, so that a comparison among the dimensions of the differently fed gasification units was finally carried out. Each biomass material showed a different volume distribution, so that no single dimensioned gasification unit appears suitable for more than one biomass species. 
Nevertheless, since the reactor diameters turned out to be quite similar for all the examined materials, a single unit could be envisaged for all of them, adopting the largest diameter and combining the maximum heights of each reaction zone as calculated for the different biomasses. A total gasifier height of around 2400 mm would be obtained in this case. Besides, by arranging air-injection nozzles at different levels along the reactor, the gasification zone could be properly set up according to the particular material being gasified at the time. Finally, since gasification and pyrolysis times were found to change considerably with even small temperature variations, the air feeding rate (on which the process temperatures depend) could also be regulated for each gasified material, so that the available reactor volumes would be suitable for the complete development of solid conversion in each case, without noticeably changing the fluid-dynamic behaviour of the unit or the air/biomass ratio. The second part of this work dealt with the gas cleaning systems to be adopted downstream of the gasifiers in order to run high-efficiency CHP units (i.e. internal combustion engines and micro-turbines). Especially if multi-fuel gasifiers are to be used, heavier gas cleaning lines need to be envisaged in order to reach the standard gas quality required to fuel cogeneration units. Indeed, the more heterogeneous the feed to the gasification unit, the more contaminant species can be simultaneously present in the exit gas stream and, as a consequence, suitable gas cleaning systems have to be designed. In this work, an overall study on the assessment of gas cleaning lines is carried out. 
Differently from other research efforts in the same field, the main scope is to define general arrangements for gas cleaning lines suitable for removing several contaminants from the gas stream, independently of the feedstock material and the energy plant size. The gas contaminant species taken into account in this analysis were: particulate, tars, sulphur (as H2S), alkali metals, nitrogen (as NH3) and acid gases (as HCl). For each of these species, alternative cleaning devices were designed for three different plant sizes, corresponding respectively to gas flows of 8 Nm3/h, 125 Nm3/h and 350 Nm3/h. Their performances were examined on the basis of their optimal working conditions (efficiency, temperature and pressure drops, above all) and their own consumption of energy and materials. Subsequently, the designed units were combined into different overall gas cleaning line arrangements (paths), following technical constraints determined mainly from the same performance analysis of the cleaning units and from the presumable synergic effects of contaminants on the proper working of some of them (filter clogging, catalyst deactivation, etc.). One of the main issues to be addressed in path design was the removal of tars from the gas stream, to prevent filter plugging and/or line pipe clogging. To this end, a catalytic tar cracking unit was envisaged as the only solution to be adopted, and a catalytic material able to work at relatively low temperatures was therefore chosen. Nevertheless, a rapid drop in tar cracking efficiency was also estimated for this material, so that a high frequency of catalyst regeneration, with a consequent relevant air consumption for this operation, was calculated in all cases. 
Other difficulties had to be overcome in the abatement of alkali metals, which condense at temperatures lower than tars but also need to be removed in the first sections of the gas cleaning line in order to avoid corrosion of materials. In this case a dry scrubber technology was envisaged, using the same fine-particle filter units built from corrosion-resistant materials, such as ceramics. Apart from these two solutions, which seem unavoidable in gas cleaning line design, high-temperature gas cleaning lines could not be achieved for the two larger plant sizes. Indeed, since the use of temperature control devices was precluded in the adopted design procedure, ammonia partial oxidation units (the only methods considered for the abatement of ammonia at high temperature) were not suitable for the large-scale units, because of the strong rise in reactor temperature caused by the exothermic reactions involved in the process. In spite of these limitations, overall arrangements for each considered plant size were finally designed, so that the possibility of cleaning the gas up to the required standard was technically demonstrated, even when several contaminants are simultaneously present in the gas stream. Moreover, all the possible paths defined for the different plant sizes were compared with each other on the basis of some defined operational parameters, among which total pressure drops, total energy losses, number of units and secondary material consumption. On the basis of this analysis, dry gas cleaning methods proved preferable to those including water scrubber technology in all cases, especially because of the high water consumption of water scrubber units in the ammonia absorption process. This result is, however, tied to the possibility of using activated carbon units for ammonia removal and a Nahcolite adsorber for hydrochloric acid; the very high efficiency of this latter material is also remarkable. 
Finally, as an estimation of the overall energy loss pertaining to the gas cleaning process, the total enthalpy losses estimated for the three plant sizes were compared with the energy contents of the respective gas streams, the latter obtained on the basis of the lower heating value of the gas only. This overall study on gas cleaning systems is thus proposed as an analytical tool by which different gas cleaning line configurations can be evaluated, according to the particular practical application they are adopted for and the size of the cogeneration unit they are connected to.
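Path comparisons of the kind described above reduce, at their simplest, to summing each unit's pressure drop and energy demand along the line. A minimal sketch (every unit name and figure below is a hypothetical placeholder, not the thesis data):

```python
# Hypothetical gas-cleaning-path comparison: each unit contributes a pressure
# drop [mbar] and an energy demand [kW]; a path is an ordered list of units
# in series. All numbers are illustrative placeholders, not thesis results.
UNITS = {
    "cyclone":        {"dp_mbar": 10.0, "power_kw": 0.2},
    "ceramic_filter": {"dp_mbar": 25.0, "power_kw": 0.5},
    "tar_cracker":    {"dp_mbar": 15.0, "power_kw": 3.0},
    "carbon_bed":     {"dp_mbar": 20.0, "power_kw": 0.1},
    "water_scrubber": {"dp_mbar": 30.0, "power_kw": 2.5},
}

def path_totals(path):
    """Total pressure drop and energy demand of a cleaning line in series."""
    dp = sum(UNITS[u]["dp_mbar"] for u in path)
    kw = sum(UNITS[u]["power_kw"] for u in path)
    return dp, kw

dry = ["cyclone", "tar_cracker", "ceramic_filter", "carbon_bed"]
wet = ["cyclone", "tar_cracker", "water_scrubber"]
print(path_totals(dry))
print(path_totals(wet))
```

The thesis additionally weighs secondary material consumption (e.g. scrubber water, regeneration air), which could be added as further per-unit fields in the same structure.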
Abstract:
In this work, a routine method was developed for differentiating and identifying rootstock varieties at every processing stage (wood, grafted vine, vines already planted in the vineyard). For this purpose, a procedure was devised that allows DNA to be extracted equally well from leaves, wood and roots. Admixtures of rootstock varieties in a bundle of rootstock wood could be detected by RAPD-PCR down to 10% foreign rootstock wood. With the 12-mer primers #722b and #722c, variety-specific bands were found for the rootstock varieties Börner, 8B, 3309C and 5BB. Primer #751 was able to distinguish 144 genotypes among 151 rootstock varieties and wild species. By optimizing the ramp times, the band patterns of the seven rootstock varieties most frequently used in Germany could be reproduced on two different thermocyclers. Thanks to the optimization of the RAPD-PCR, it was possible to represent the bands needed for differentiation mathematically and graphically by a linear transformation based on a determined reference band. Clones of the rootstock varieties SO4, 125AA and 5C, as well as the rootstock variety Binova, were examined with RAPD, AFLP and SAMPL for possible differentiation. Within the AFLP/SAMPL method, the rootstock clones belonging to one variety formed a cluster, with Binova falling within the SO4 clones. "Rootstock-variety-specific bands", "repeating bands" and "single bands" were found.
Abstract:
Late Archean sedimentary rocks (ca. 2.65 billion years old) were investigated in greenstone belts of the Zimbabwe Craton. In the Belingwe greenstone belt, granitoid basement is overlain by an allochthonous unit of volcanic rocks and foreland basin sediments. The sedimentary succession consists of shallow-water limestones and turbidites. Different facies types of the limestones are arranged in sedimentary shallowing-upward cycles. Eustatic sea-level fluctuations are assumed to be the cause of the cyclic sedimentation. Sedimentological, geochemical and structural analyses indicate the importance of horizontal tectonic processes for the formation of this greenstone belt. Sedimentary rocks of the Midlands greenstone belt lie between oceanic mafic volcanics and continental granitoid gneisses. The nature of the succession of sedimentary facies, beginning with turbidites and overlain by shallow-marine shelf sediments and alluvial deposits, together with geological and geochemical evidence from the neighbouring rock series, suggests deposition during the collision between an oceanic plateau/island arc and a fragment of continental crust. In the Bindura-Shamva greenstone belt, two sedimentary units can be distinguished: an alluvial to shallow-marine succession and a deep-marine to fluviatile succession. Extensional tectonics probably caused the formation of the sedimentary basin; the later phase of basin formation, however, was similar to that in modern foreland basins. Bedding-parallel ironstone horizons are frequently found along sediment-volcanic contacts. These rocks are interpreted as silicified, sulphide-impregnated shear zones. Syntectonic hydrothermal alteration of rocks along the fault zones led to the formation of these 'tectonic ironstones'.
Resumo:
The Northern Apennines (NA) chain is the expression of the active plate margin between Europe and Adria. Given the low convergence rates and the moderate seismic activity, ambiguities still occur in defining a seismotectonic framework, and many different scenarios have been proposed for the mountain front evolution. Unlike older models that interpret the mountain front as an active thrust at the surface, a recently proposed scenario describes it as the frontal limb of a long-wavelength fold (> 150 km) formed by a thrust fault tipped at around 17 km depth, considered to be the active subduction boundary. East of Bologna, this frontal limb is remarkably straight and its surface is riddled with small but pervasive high-angle normal faults. West of Bologna, however, some recesses are visible along strike of the mountain front: these perturbations seem due to the presence of shorter-wavelength (15 to 25 km along strike) structures showing both NE- and NW-vergence. The Pleistocene activity of these structures has already been suggested, but no quantitative reconstructions are available in the literature. This research investigates the tectonic geomorphology of the NA mountain front with the specific aim of quantifying active deformation and inferring possible deep causes of both short- and long-wavelength structures. This study documents the presence of a network of active extensional faults in the foothills south and east of Bologna. For these structures, the strain rate has been measured to find a constant throw-to-length relationship, and the slip rates have been compared with measured rates of erosion. Fluvial geomorphology and quantitative analysis of the topography document in detail the active tectonics of two growing domal structures (the Castelvetro - Vignola foothills and the Ghiardo plateau) embedded in the mountain front west of Bologna.
Here, tilting and river incision rates (interpreted as long-term uplift rates) have been measured at the mountain front and in the Enza and Panaro valleys, respectively, using a well-defined stratigraphy of Pleistocene to Holocene river terraces and alluvial fan deposits as growth strata, together with seismic reflection profile relationships. The geometry and uplift rates of the anticlines constrain a simple trishear fault propagation folding model that inverts for blind thrust ramp depth, dip, and slip. Topographic swath profiles and the steepness index of river longitudinal profiles that traverse the anticlines are consistent with stratigraphy, structures, aquifer geometry, and seismic reflection profiles. Available focal mechanisms of earthquakes with magnitudes between Mw 4.1 and 5.4, obtained from a dataset of the instrumental seismicity for the last 30 years, evidence a clear vertical separation at around 15 km between shallow extensional and deeper compressional hypocenters along the mountain front and adjacent foothills. In summary, the studied anticlines appear to grow at rates slower than the growth rate of the longer-wavelength structure that defines the mountain front of the NA. The domal structures show evidence of NW-verging deformation and reactivation of older (late Neogene) thrusts. The reconstructed river incision rates, together with rates from several other rivers along a 250 km wide stretch of the NA mountain front recently made available in the literature, all indicate a general increase from Middle to Late Pleistocene. This suggests focusing of deformation along a deep structure, as confirmed by the deep compressional seismicity. The maximum rate is, however, not constant along the mountain front, but varies from 0.2 mm/yr in the west to more than 2.2 mm/yr in the eastern sector, suggesting a similar (eastward-increasing) trend of the Apenninic subduction.
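The steepness index mentioned above is a standard fluvial-geomorphology metric; a minimal sketch of its computation follows, using the common normalized form with a reference concavity. All numeric values are illustrative assumptions, not data from this study.

```python
# Hedged sketch of the normalized channel steepness index ksn:
# ksn = S * A**theta_ref, where S is local channel slope (m/m), A is
# upstream drainage area (m^2), and theta_ref is a reference concavity.
# theta_ref = 0.45 is a widely used convention; values here are illustrative.

THETA_REF = 0.45  # reference concavity (dimensionless), assumed

def ksn(slope, drainage_area_m2, theta_ref=THETA_REF):
    """Normalized steepness index from local slope and drainage area."""
    return slope * drainage_area_m2 ** theta_ref

# Example: slope of 0.02 m/m at 10 km^2 (1e7 m^2) of drainage area
print(round(ksn(0.02, 1e7), 1))
```

Comparing ksn along a river that crosses an anticline is one way such profiles are screened for tectonic perturbation.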
Resumo:
CONCLUSIONS The focus of this work was the investigation of anomalies in Tg and dynamics at polymer surfaces. The thermally induced decay of hot-embossed polymer gratings is studied using laser diffraction and atomic force microscopy (AFM). Monodisperse PMMA and PS are selected in the Mw ranges of 4.2 to 65.0 kg/mol and 3.47 to 65.0 kg/mol, respectively. Two different modes of measurement were used: one mode uses temperature ramps to obtain an estimate of the near-surface glass temperature, Tdec,0; the other mode investigates the dynamics at a constant temperature above Tg. The temperature-ramp experiments reveal Tdec,0 values very close to the Tg,bulk values, as determined by differential scanning calorimetry (DSC). The PMMA of 65.0 kg/mol shows a decreased value of Tg, while the PS samples of 3.47 and 10.3 kg/mol (Mw
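The grating-decay measurement described above is often analyzed with a simple exponential model of the surface amplitude; the sketch below assumes that model (the thesis' actual analysis may differ) and uses purely illustrative parameter values.

```python
# Hedged sketch, not the thesis' analysis: grating amplitude decay modeled
# as h(t) = h0 * exp(-t / tau); for shallow gratings the first-order laser
# diffraction intensity scales roughly as h(t)**2. Parameters are invented.
import math

def grating_amplitude(t, h0, tau):
    """Exponentially decaying grating amplitude (same units as h0)."""
    return h0 * math.exp(-t / tau)

def diffraction_intensity(t, h0, tau):
    """Shallow-grating approximation: intensity ~ amplitude squared."""
    return grating_amplitude(t, h0, tau) ** 2

# Illustrative numbers: 100 nm initial amplitude, 500 s decay constant
print(round(diffraction_intensity(500.0, 100.0, 500.0), 1))
```

Fitting the measured intensity decay with such a model is what yields a characteristic relaxation time at each temperature.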
Resumo:
Synthetic biology has recently undergone great development: many papers have been published and many applications have been presented, spanning from the production of biopharmaceuticals to the synthesis of bioenergetic substrates or industrial catalysts. However, despite these advances, most of the applications are quite simple and don't fully exploit the potential of this discipline. This limitation in complexity has many causes, such as the incomplete characterization of some components or the intrinsic variability of biological systems, but one of the most important reasons is the inability of the cell to sustain the additional metabolic burden introduced by a complex circuit. The objective of the project, of which this work is part, is to solve this problem through the engineering of a multicellular behaviour in prokaryotic cells. This system will introduce a cooperative behaviour that allows complex functionalities to be implemented that cannot be obtained with a single cell. In particular, the goal is to implement Leader Election, a procedure first devised in the field of distributed computing to identify a single process as organizer and coordinator of a series of tasks assigned to the whole population. The election of the Leader greatly simplifies the computation by providing centralized control. Furthermore, this system may even be useful for evolutionary studies that aim to explain how complex organisms evolved from unicellular systems. The work presented here describes, in particular, the design and the experimental characterization of a component of the circuit that solves the Leader Election problem. This module, composed of a hybrid promoter and a gene, is activated in the non-leader cells after receiving the signal that a leader is present in the colony.
The most important element, in this case, is the hybrid promoter: it has been realized in different versions, applying the heuristic rules stated in [22], and their activity has been experimentally tested. The objective of the experimental characterization was to test the response of the genetic circuit to the introduction, into the cellular environment, of particular molecules, the inducers, which can be considered inputs of the system. The desired behaviour is similar to that of a logic AND gate, in which the output, represented by the luminous signal produced by a fluorescent protein, is one only in the presence of both inducers. The robustness and the stability of this behaviour have been tested by changing the concentration of the input signals and building dose-response curves. From these data it is possible to conclude that the analysed constructs have an AND-like behaviour over a wide range of inducer concentrations, even if it is possible to identify many differences in the expression profiles of the different constructs. This variability reflects the fact that the input and output signals are continuous, so their binary representation isn't able to capture the complexity of the behaviour. The module of the circuit considered in this analysis has a fundamental role in the realization of the intercellular communication system that is necessary for the cooperative behaviour to take place. For this reason, the second phase of the characterization has been focused on the analysis of signal transmission. In particular, the interaction between this element and the one responsible for emitting the chemical signal has been tested. The desired behaviour is still similar to a logic AND, since, even in this case, the output signal is determined by the hybrid promoter activity. The experimental results have demonstrated that the systems behave correctly, even if there is still substantial variability between them.
The dose-response curves highlighted that stricter constraints on the inducer concentrations need to be imposed in order to obtain a clear separation between the two levels of expression. In the concluding chapter the DNA sequences of the hybrid promoters are analysed, trying to identify the regulatory elements that are most important for the determination of gene expression. Given the available data, it wasn't possible to draw definitive conclusions. Finally, a few considerations on promoter engineering and the realization of complex circuits are presented. This section briefly recalls some of the problems outlined in the introduction and proposes a few possible solutions.
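The AND-like dose-response behaviour described above can be sketched with a simple phenomenological model; this is a minimal illustration, not the thesis' model, and the Hill parameters (K, n) are invented for the example.

```python
# Hedged sketch: an AND-like hybrid-promoter response approximated as the
# product of two activating Hill functions, one per inducer. All parameter
# values below are illustrative assumptions, not fitted data.

def hill(x, k, n):
    """Activating Hill function, ranging from 0 (no inducer) toward 1."""
    return x ** n / (k ** n + x ** n)

def and_gate_output(inducer1, inducer2, k1=1.0, k2=1.0, n=2.0):
    """Promoter activity modeled as the product of the two inputs'
    Hill activations: high only when both inducers are present."""
    return hill(inducer1, k1, n) * hill(inducer2, k2, n)

# Rough truth table at saturating (10) vs absent (0) inducer concentrations
for a, b in [(0, 0), (10, 0), (0, 10), (10, 10)]:
    print(a, b, round(and_gate_output(a, b), 2))
```

Sweeping one inducer while holding the other fixed reproduces the shape of a dose-response curve, and the continuous intermediate values illustrate why a binary description misses part of the behaviour.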
Resumo:
The coordination of tax systems is an issue that is original to, and characteristic of, Community law. The thesis explores its consequences from several perspectives.
Resumo:
What exactly is tax treaty override? When is it realized? This thesis, the result of a co-directed PhD between the University of Bologna and Tilburg University, gives a deep insight into a topic that has not yet been analyzed in a systematic way. On the contrary, the analysis of tax treaty override is still at a preliminary stage. For this reason, the origin and nature of tax treaty override are first analyzed in their 'natural' context, i.e. within general international law. In order to characterize tax treaty override and deeply understand its peculiarities, evaluating the effects of general international law on tax treaties based on the OECD Model Convention is a necessary precondition. Therefore, the binding effects of an international agreement on state sovereignty are specifically investigated. Afterwards, the interpretation of the OECD Model Convention occupies the main part of the thesis, with the aim of developing an 'interpretative model' that can be applied whenever a case of tax treaty override needs to be detected. Fictitious income, exit taxes, and CFC regimes are analyzed in order to verify their compliance with tax treaties based on the OECD Model Convention and to establish when the relevant legislation realizes cases of tax treaty override.
Resumo:
The purpose of this doctoral thesis is to prove existence for a mutually catalytic random walk with infinite branching rate on countably many sites. The process is defined as a weak limit of an approximating family of processes. An approximating process is constructed by adding jumps to a deterministic migration on an equidistant time grid. As the law of the jumps we choose the invariant probability measure of the mutually catalytic random walk with finite branching rate in the recurrent regime. This model was introduced by Dawson and Perkins (1998), and this thesis relies heavily on their work. Due to the properties of this invariant distribution, which is in fact the exit distribution of planar Brownian motion from the first quadrant, it is possible to establish a martingale problem for the weak limit of any convergent sequence of approximating processes. We prove a duality relation for the solution to the mentioned martingale problem, which goes back to Mytnik (1996) in the case of finite rate branching, and this duality gives rise to weak uniqueness for the solution to the martingale problem. Using standard arguments we show that this solution is in fact a Feller process and has the strong Markov property. For the case of only one site we prove that the model we have constructed is the limit of finite rate mutually catalytic branching processes as the branching rate approaches infinity. Therefore, it seems natural to refer to the above model as an infinite rate branching process. However, a result for convergence on infinitely many sites remains open.
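The exit distribution of planar Brownian motion from the first quadrant, mentioned above, can be explored with a simple Monte Carlo sketch; the random-walk approximation, step size, start point, and sample count below are all arbitrary illustrative choices.

```python
# Hedged Monte Carlo sketch: approximate planar Brownian motion by small
# Gaussian steps and record which axis of the first quadrant it exits
# through. All parameters are illustrative, not from the thesis.
import random

def exit_side(x, y, step=0.05, rng=random):
    """Run a random-walk approximation of planar Brownian motion started
    at (x, y) > 0 until one coordinate reaches 0; report the exit axis."""
    while x > 0 and y > 0:
        x += rng.gauss(0.0, step)
        y += rng.gauss(0.0, step)
    return "x-axis" if y <= 0 else "y-axis"

random.seed(0)
# Started on the diagonal, the walk exits through either axis with equal
# probability by symmetry, so roughly half of the runs should hit each.
hits = sum(exit_side(1.0, 1.0) == "x-axis" for _ in range(200))
print(hits)
```

The full exit distribution records not just the axis but the exit point, which is what serves as the invariant jump law in the construction above.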
Resumo:
In this work we studied the efficiency of the benchmarks used in the asset management industry. In chapter 2 we analyzed the efficiency of the benchmarks used for the government bond markets. We found that for Emerging Market bonds an equally weighted index for the country weights is probably better suited because it guarantees maximum diversification of country risk, whereas for the Eurozone government bond market a GDP-weighted index is better because the most important consideration is to avoid overweighting highly indebted countries. In chapter 3 we analyzed the efficiency of a Derivatives Index for investing in the European corporate bond market instead of a Cash Index. We can state that the two indexes are similar in terms of returns, but that the Derivatives Index is less risky because it has lower volatility, has values of skewness and kurtosis closer to those of a normal distribution, and is a more liquid instrument, as its autocorrelation is not significant. In chapter 4 we analyzed the impact of fallen angels on corporate bond portfolios. Our analysis investigated the impact of the month-end rebalancing of the ML Emu Non Financial Corporate Index on the exit of downgraded bonds (the event). We conclude that a flexible approach to the month-end rebalancing is better in order to avoid a loss of value due to the benchmark construction rules. In chapter 5 we compared the equally weighted and capitalization-weighted methods for the European equity market. The benefit that results from reweighting the portfolio into equal weights can be attributed to the fact that EW portfolios implicitly follow a contrarian investment strategy, because they mechanically rebalance away from stocks that increase in price.
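The equal-weight versus capitalization-weight comparison can be sketched for a single period as follows; the constituents, market caps, and returns below are hypothetical illustration data, not results from the thesis.

```python
# Hedged sketch: one-period portfolio returns under equal weighting (EW)
# vs capitalization weighting (CW). All numbers are invented for the
# illustration; the thesis' empirical comparison is far richer.

def portfolio_return(returns, weights):
    """Weighted sum of constituent returns."""
    return sum(r * w for r, w in zip(returns, weights))

def cap_weights(market_caps):
    """Weights proportional to market capitalization."""
    total = sum(market_caps)
    return [c / total for c in market_caps]

def equal_weights(n):
    return [1.0 / n] * n

# Hypothetical constituents: one large cap with a low return, smaller
# names with higher returns, so EW tilts toward the smaller names.
caps = [800.0, 150.0, 50.0]   # market caps (billions, illustrative)
rets = [0.02, 0.05, 0.09]     # one-period returns

ew = portfolio_return(rets, equal_weights(len(rets)))
cw = portfolio_return(rets, cap_weights(caps))
print(round(ew, 4), round(cw, 4))
```

Repeating this period after period, with the EW portfolio rebalanced back to equal weights, is what produces the mechanical contrarian tilt mentioned above.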