982 results for No Exit


Relevance:

10.00%

Publisher:

Abstract:

BACKGROUND: Atrial tachycardias are common in GUCH patients, both after corrective or palliative surgery and in the natural history of the disease, but their incidence is significantly higher in patients who have undergone procedures involving extensive atrial manipulation (Mustard, Senning, Fontan). The most frequent mechanism of atrial tachycardia in the adult congenital patient is right atrial macroreentry. The ECG is of limited value in predicting the location of the reentry circuit. In patients with congenital heart disease after biventricular repair or in natural history, peritricuspid reentry is the most frequent circuit, whereas in patients with a previous Fontan operation the most common site of macroreentry is the lateral wall of the right atrium. Antiarrhythmic drugs are of limited efficacy in the treatment of these arrhythmias and carry a high incidence of adverse effects, above all the aggravation of pre-existing sinus node dysfunction and the worsening of ventricular dysfunction, as well as proarrhythmic effects. Several studies have shown that IART can be treated effectively by transcatheter ablation. In the earliest studies, in which procedures were performed under conventional fluoroscopy, bidirectional translesional conduction block was not routinely documented and not all reentry circuits were ablated, the reported acute success rate is 70% and freedom from recurrence at 3 years is 40%. More recent work reports an acute success rate of 94% and a recurrence rate at 13 months of 6%. These excellent results were obtained with modern electroanatomic mapping techniques and irrigated-tip catheters; in addition, demonstration of bidirectional translesional conduction block, ablation of all circuits induced by programmed atrial stimulation, and ablation of the potential reentry sites identified on the voltage map were considered indispensable requirements for defining procedural success. OBJECTIVES: To report the efficacy rate, complications, and recurrence rate of transcatheter ablation procedures performed with modern technologies and with a rigorous strategy for defining procedural targets. RESULTS: This study reports a good efficacy rate for transcatheter ablation of atrial tachycardias in a heterogeneous population of patients with operated congenital heart disease or in natural history: the rate of complete acute procedural success is 71%, and the recurrence rate at a mean follow-up of 13 months is 28%. However, if the analysis is restricted to IART alone, procedural success is 100%; in the remaining cases, in which the procedure was classified as ineffective or partially effective, the arrhythmia that was not eliminated but electrically cardioverted was not a reentrant arrhythmia but atrial fibrillation. Moreover, again restricting the analysis to IART, the recurrence rate at 13 months also falls from 28% to 3%. In only one patient was an asymptomatic, non-sustained episode of IART documented at follow-up; in this case the ECG appearance differed from that of the clinical tachycardia that had motivated the first procedure.
Although the different morphology of atrial activation on the ECG does not exclude a recurrence, given the possibility of a different exit point of the same circuit or of a different direction of rotation, the emergence of a new macroreentrant circuit is more likely. CONCLUSIONS: Transcatheter ablation, although it cannot be considered a curative procedure, since it does not modify the atrial substrate that predisposes to the onset and maintenance of atrial fibrillation (i.e. the fibrosis, hypertrophy and atrial dilatation resulting from the underlying disease and anatomical condition), is able to provide all patients with a substantial clinical benefit. It was always possible to discontinue antiarrhythmic therapy, except in 2 cases, and even in patients with a documented recurrence at follow-up, quality of life and symptoms improved markedly and good control of the tachyarrhythmia was achieved with a low dose of beta-blocker. In addition, all patients who had developed ventricular dysfunction secondary to the tachyarrhythmia showed an improvement in systolic function, up to normalization or a return to the values preceding the documentation of the arrhythmia. Underlying the good results, both acute and at follow-up, are meticulous planning of the procedure and a rigorous definition of the endpoints. Demonstration of bidirectional translesional conduction block, an indispensable requirement for claiming that a continuous and transmural line has been created; ablation of all sustained reentry circuits inducible by programmed atrial stimulation; and ablation of certain critical sites, protected corridors involved in the most commonly observed clinical IART, even in the absence of actual periprocedural inducibility, are necessary objectives for a procedure that is effective both acutely and over time. The availability of modern technologies, such as irrigated ablation catheters and electroanatomic mapping methods, is also a very important technical requirement for procedural success.

Relevance:

10.00%

Publisher:

Abstract:

In territories where food production is scattered across many small or medium-sized, or even domestic, farms, large amounts of heterogeneous residues are produced every year, since farmers usually carry out several different activities on their properties. The amount and composition of farm residues therefore vary widely over the year, according to the particular production process under way at any given time. Coupling high-efficiency micro-cogeneration units with easily handled biomass conversion equipment capable of treating different materials would bring major advantages to farmers and to the community alike, so that increasing the feedstock flexibility of gasification units is now seen as a key step towards their wider adoption in rural areas and as a real necessity for their use at small scale. Two main research topics were considered central to this purpose and are discussed in this work: the influence of fuel properties on the development of the gasification process, and the technical feasibility of integrating small-scale gasification units with cogeneration systems. The work is accordingly divided into two main parts. The first focuses on the biomass gasification process, which was investigated in its theoretical aspects and then modelled analytically in order to simulate the thermo-chemical conversion of different biomass fuels, namely wood (park waste wood and softwood), wheat straw, sewage sludge and refuse-derived fuels. The main idea is to correlate the results of the reactor design procedures with the physical properties of the biomasses and the corresponding operating conditions of the gasifiers (above all, the temperature profile), in order to highlight the main differences that prevent the same conversion unit from being used for different materials. To this end, a kinetic-free gasification model was first developed in Excel spreadsheets, considering different air-to-biomass ratios and taking downdraft gasification as the reference technology. The differences in syngas production and operating conditions (above all, process temperatures) among the considered fuels were related to biomass properties such as elemental composition and ash and water contents. The novelty of this analytical approach lies in the use of ratios of kinetic constants to determine the oxygen distribution among the different oxidation reactions (involving volatile matter only), while equilibrium of the water-gas shift reaction was assumed in the gasification zone; through these assumptions the energy and mass balances of the process algorithm were also linked together. A further advantage of the tool is the ease with which the input data for a particular biomass material can be entered into the model, so that a rapid evaluation of its thermo-chemical conversion behaviour can be obtained, based mainly on its chemical composition. Good agreement between the model results and literature and experimental data was found for almost all of the considered materials (except for refuse-derived fuels, whose chemical composition does not fit the model assumptions). Subsequently, a dimensioning procedure for open-core downdraft gasifiers was set up, based on an analysis of the fundamental thermo-physical and thermo-chemical mechanisms assumed to govern the main solid conversion steps of the gasification process.
The gasification unit was schematically subdivided into four reaction zones, corresponding respectively to biomass heating, solids drying, pyrolysis and char gasification, and the time required for the full development of each of these steps was related to the kinetic rates (for pyrolysis and char gasification only) and to the heat and mass transfer from the gas to the solid phase. On the basis of this analysis, and in line with the kinetic-free model results and the biomass physical properties (above all, particle size), it was found that for all the considered materials the char gasification step is kinetically limited, so that temperature is the main operating parameter controlling this step. Solids drying is mainly governed by heat transfer from the bulk gas to the inner layers of the particles, and the corresponding time depends chiefly on particle size. Biomass heating is achieved almost entirely by radiative heat transfer from the hot reactor walls to the bed of material. For pyrolysis, instead, operating temperature, particle size and the nature of the biomass itself (through its pyrolysis heat) all have comparable weight, so that the corresponding time may be controlled by any one of these factors, depending on the particular fuel being gasified and the conditions established inside the gasifier. The same analysis also led to an estimate of the reaction zone volumes for each biomass fuel, so that the dimensions of the differently fed gasification units could finally be compared. Each biomass material showed a different distribution of volumes, so that no single dimensioned gasification unit appears suitable for more than one biomass species. Nevertheless, since the reactor diameters turned out to be quite similar for all the examined materials, a single unit could be designed for all of them by adopting the largest diameter and stacking the maximum heights calculated for each reaction zone for the different biomasses; in that case the total gasifier height would be around 2400 mm. Moreover, by arranging air injection nozzles at different levels along the reactor, the gasification zone could be positioned appropriately for whichever material is being gasified at the time. Finally, since the gasification and pyrolysis times were found to change considerably with even small temperature variations, the air feeding rate (on which the process temperatures depend) could also be regulated for each gasified material, so that the available reactor volumes would allow complete solid conversion in each case, without appreciably changing the fluid dynamics of the unit or the air/biomass ratio. The second part of this work deals with the gas cleaning systems to be installed downstream of the gasifiers in order to run high-efficiency CHP units (internal combustion engines and micro-turbines). Especially where multi-fuel gasifiers are used, more substantial gas cleaning lines must be envisaged to reach the gas quality required to fuel cogeneration units. Indeed, the more heterogeneous the feed to the gasification unit, the more contaminant species can be simultaneously present in the exit gas stream, and suitable gas cleaning systems have to be designed accordingly. In this work an overall study of gas cleaning line assessment is carried out.
Unlike other research efforts in the same field, the main aim here is to define general arrangements for gas cleaning lines able to remove several contaminants from the gas stream, independently of the feedstock material and the size of the energy plant. The contaminant species taken into account were particulate, tars, sulphur (as H2S), alkali metals, nitrogen (as NH3) and acid gases (as HCl). For each of these species, alternative cleaning devices were designed for three plant sizes, corresponding to gas flows of 8 Nm3/h, 125 Nm3/h and 350 Nm3/h. Their performance was examined on the basis of their optimal operating conditions (above all efficiency, temperature and pressure drops) and of their own consumption of energy and materials. The designed units were then combined into different overall gas cleaning line arrangements ("paths"), following technical constraints derived from the same performance analysis of the cleaning units and from the likely synergistic effects of contaminants on the correct operation of some of them (filter clogging, catalyst deactivation, etc.). One of the main issues in defining the paths was the removal of tars from the gas stream, to prevent filter plugging and/or clogging of the line pipes. A catalytic tar cracking unit was identified as the only viable solution, and a catalytic material able to work at relatively low temperatures was therefore chosen. Nevertheless, a rapid drop in tar cracking efficiency was also estimated for this material, so that a high catalyst regeneration frequency, with a correspondingly significant air consumption for this operation, was calculated in all cases. Further difficulties had to be overcome in the abatement of alkali metals, which condense at lower temperatures than tars but must nevertheless be removed in the first sections of the gas cleaning line to avoid corrosion of materials. In this case a dry scrubbing approach was adopted, using the same fine-particle filter units and choosing corrosion-resistant (e.g. ceramic) materials for them. Apart from these two solutions, which appear unavoidable in gas cleaning line design, fully high-temperature gas cleaning lines could not be achieved for the two larger plant sizes. Since temperature control devices were excluded from the adopted design procedure, ammonia partial oxidation units (the only method considered for ammonia abatement at high temperature) proved unsuitable for the larger units, because of the large reactor temperature rise caused by the exothermic reactions involved. Despite these limitations, overall arrangements were finally designed for each considered plant size, so that the possibility of cleaning the gas to the required standard was technically demonstrated, even when several contaminants are simultaneously present in the gas stream. Moreover, all the paths defined for the different plant sizes were compared with one another on the basis of a set of operational parameters, including total pressure drop, total energy losses, number of units and secondary material consumption.
On the basis of this analysis, dry gas cleaning methods proved preferable to those including water scrubber technology in all cases, chiefly because of the high water consumption of the water scrubber units in the ammonia absorption process. This result is, however, tied to the possibility of using activated carbon units for ammonia removal and a Nahcolite adsorber for hydrogen chloride; the very high efficiency of this latter material is also remarkable. Finally, as an estimate of the overall energy loss associated with the gas cleaning process, the total enthalpy losses calculated for the three plant sizes were compared with the energy contents of the respective gas streams, the latter based on the lower heating value of the gas only. This overall study of gas cleaning systems is thus proposed as an analytical tool with which different gas cleaning line configurations can be evaluated, according to the particular practical application they are adopted for and the size of the cogeneration unit they are connected to.
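Purely as an illustration of the kind of kinetic-free, equilibrium-based estimate described above, the following sketch splits the elemental C, H and O of an idealized fuel-air mixture between CO, CO2, H2 and H2O so that the elemental balances and the water-gas shift equilibrium are satisfied. The fuel formula, equivalence ratio, temperature and the equilibrium-constant correlation are illustrative assumptions, not values or methods taken from the thesis.

import numpy as np

# Illustrative kinetic-free gasification estimate (assumed inputs, CH4 and
# tars neglected): split elemental C, H, O between CO, CO2, H2 and H2O so
# that the elemental balances and the water-gas shift (WGS) equilibrium
# CO + H2O <-> CO2 + H2 are satisfied at a chosen gasification temperature.

def wgs_equilibrium_constant(temp_k):
    # One commonly used fit for the WGS equilibrium constant (Moe-type form).
    return np.exp(4577.8 / temp_k - 4.33)

def equilibrium_syngas(n_c, n_h, n_o, temp_k):
    """Return moles of CO, CO2, H2, H2O from elemental moles n_c, n_h, n_o."""
    k = wgs_equilibrium_constant(temp_k)
    a, b, c = n_c, n_h / 2.0, n_o
    # Eliminate all species in favour of x = n_CO and solve the resulting quadratic.
    coeffs = [1.0 - k, -(3 * a + b - c) + k * (2 * a - c), a * (2 * a + b - c)]
    roots = np.roots(coeffs)
    for x in np.real(roots[np.isreal(roots)]):
        co, co2, h2o = x, a - x, c - 2 * a + x
        h2 = b - h2o
        if min(co, co2, h2o, h2) >= 0.0:
            return {"CO": co, "CO2": co2, "H2": h2, "H2O": h2o}
    raise ValueError("no physically meaningful root for these inputs")

# Example: dry wood approximated as CH1.4O0.6, gasified with air at an assumed
# equivalence ratio of 0.25 and an assumed gasification temperature of 1073 K.
er, temp_k = 0.25, 1073.0
o2_stoich = 1.0 + 1.4 / 4.0 - 0.6 / 2.0        # O2 for complete combustion, per mole C
n_o = 0.6 + 2.0 * er * o2_stoich                # fuel oxygen plus oxygen from air
gas = equilibrium_syngas(n_c=1.0, n_h=1.4, n_o=n_o, temp_k=temp_k)
n2 = 3.76 * er * o2_stoich                      # nitrogen carried in with the air (inert)

dry = {sp: n for sp, n in gas.items() if sp != "H2O"}
dry["N2"] = n2
total_dry = sum(dry.values())
for sp, n in dry.items():
    print(f"{sp:>3}: {100.0 * n / total_dry:5.1f} vol% (dry basis)")

Real producer gas also contains appreciable CO2, CH4 and tars and is not at full equilibrium, so a sketch like this can only indicate trends; the thesis additionally distributes oxygen among the oxidation reactions through ratios of kinetic constants, which this toy calculation does not attempt to reproduce.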

Relevance:

10.00%

Publisher:

Abstract:

Synthetic biology has developed rapidly in recent years; many papers have been published and many applications have been presented, ranging from the production of biopharmaceuticals to the synthesis of bioenergetic substrates or industrial catalysts. Despite these advances, however, most applications are quite simple and do not fully exploit the potential of the discipline. This limitation in complexity has many causes, such as the incomplete characterization of some components or the intrinsic variability of biological systems, but one of the most important is the inability of the cell to sustain the additional metabolic burden introduced by a complex circuit. The objective of the project of which this work is part is to address this problem by engineering a multicellular behaviour in prokaryotic cells. Such a system introduces a cooperative behaviour that makes it possible to implement complex functionalities that cannot be obtained with a single cell. In particular, the goal is to implement Leader Election, a procedure first devised in the field of distributed computing to identify a single process as the organizer and coordinator of a series of tasks assigned to the whole population. Electing a leader greatly simplifies the computation by providing centralized control. Furthermore, such a system may also be useful for evolutionary studies that aim to explain how complex organisms evolved from unicellular systems. The work presented here describes, in particular, the design and experimental characterization of one component of the circuit that solves the Leader Election problem. This module, composed of a hybrid promoter and a gene, is activated in the non-leader cells after they receive the signal that a leader is present in the colony. The most important element in this case is the hybrid promoter; it was realized in different versions, applying the heuristic rules stated in [22], and their activity was tested experimentally. The objective of the experimental characterization was to test the response of the genetic circuit to the introduction into the cellular environment of particular molecules, the inducers, which can be considered the inputs of the system. The desired behaviour is similar to that of a logic AND gate, in which the output, represented by the luminous signal produced by a fluorescent protein, is one only in the presence of both inducers. The robustness and stability of this behaviour were tested by changing the concentrations of the input signals and building dose-response curves. From these data it is possible to conclude that the analysed constructs show AND-like behaviour over a wide range of inducer concentrations, even though many differences can be identified in the expression profiles of the different constructs. This variability reflects the fact that the input and output signals are continuous, so their binary representation cannot capture the full complexity of the behaviour. The module of the circuit considered in this analysis plays a fundamental role in the realization of the intercellular communication system that is necessary for the cooperative behaviour to take place. For this reason, the second phase of the characterization focused on the analysis of signal transmission. In particular, the interaction between this element and the one responsible for emitting the chemical signal was tested.
The desired behaviour is again similar to a logic AND, since in this case too the output signal is determined by the activity of the hybrid promoter. The experimental results demonstrated that the systems behave correctly, even though there is still substantial variability between them. The dose-response curves highlighted that stricter constraints on the inducer concentrations need to be imposed in order to obtain a clear separation between the two levels of expression. In the concluding chapter the DNA sequences of the hybrid promoters are analysed, in an attempt to identify the regulatory elements most important in determining gene expression; with the available data it was not possible to draw definitive conclusions. Finally, a few considerations on promoter engineering and the realization of complex circuits are presented. This section briefly recalls some of the problems outlined in the introduction and proposes a few possible solutions.
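The AND-like dose-response behaviour described above can be sketched, for illustration only, as the product of two Hill activation terms, one per inducer. All parameter values and names below are hypothetical and are not taken from the thesis or its measurements.

import numpy as np

def hill(x, k, n):
    # Hill activation term: rises from 0 to 1 as the inducer concentration x exceeds k.
    return x**n / (k**n + x**n)

def and_gate_output(inducer1, inducer2, k1=10.0, k2=50.0, n1=2.0, n2=2.0, basal=0.02):
    # Toy AND-gate response of a hybrid promoter to two inducers:
    # normalized fluorescence in [0, 1], high only when both inducers are present.
    return basal + (1.0 - basal) * hill(inducer1, k1, n1) * hill(inducer2, k2, n2)

# Toy dose-response surface over a grid of inducer concentrations (arbitrary units).
conc1 = np.logspace(-1, 3, 50)
conc2 = np.logspace(-1, 3, 50)
surface = and_gate_output(conc1[:, None], conc2[None, :])
print(f"dose-response surface: {surface.shape}, min {surface.min():.3f}, max {surface.max():.3f}")

print(f"output with neither inducer: {and_gate_output(0.0, 0.0):.3f}")
print(f"output with only inducer 1:  {and_gate_output(1e3, 0.0):.3f}")
print(f"output with both inducers:   {and_gate_output(1e3, 1e3):.3f}")

In the measured constructs the separation between the on and off levels is of course less sharp than in this idealized product form, which is consistent with the variability between constructs noted above.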

Relevance:

10.00%

Publisher:

Abstract:

The coordination of tax systems is an original and characteristic issue of Community law. The thesis explores its consequences from several angles.

Relevance:

10.00%

Publisher:

Abstract:

What exactly is tax treaty override? When does it occur? This thesis, the result of a co-directed PhD between the University of Bologna and Tilburg University, gives a deep insight into a topic that has not yet been analyzed in a systematic way; to date, the analysis of tax treaty override has remained at a preliminary stage. For this reason, the origin and nature of tax treaty override are first analyzed in their 'natural' context, i.e. within general international law. In order to characterize tax treaty override and understand its peculiarities in depth, an evaluation of the effects of general international law on tax treaties based on the OECD Model Convention is a necessary precondition. The binding effects of an international agreement on state sovereignty are therefore specifically investigated. The interpretation of the OECD Model Convention then occupies the main part of the thesis, with the aim of developing an 'interpretative model' that can be applied whenever a case of tax treaty override needs to be detected. Fictitious income, exit taxes and CFC regimes are analyzed in order to verify their compliance with tax treaties based on the OECD Model Convention and to establish when the relevant legislation gives rise to tax treaty override.

Relevance:

10.00%

Publisher:

Abstract:

The purpose of this doctoral thesis is to prove existence for a mutually catalytic random walk with infinite branching rate on countably many sites. The process is defined as a weak limit of an approximating family of processes. An approximating process is constructed by adding jumps to a deterministic migration on an equidistant time grid. As the law of the jumps we choose the invariant probability measure of the mutually catalytic random walk with finite branching rate in the recurrent regime. This model was introduced by Dawson and Perkins (1998), and this thesis relies heavily on their work. Due to the properties of this invariant distribution, which is in fact the exit distribution of planar Brownian motion from the first quadrant, it is possible to establish a martingale problem for the weak limit of any convergent sequence of approximating processes. We prove a duality relation for the solution to this martingale problem, which goes back to Mytnik (1996) in the case of finite rate branching, and this duality yields weak uniqueness for the solution to the martingale problem. Using standard arguments we show that this solution is in fact a Feller process and has the strong Markov property. For the case of only one site we prove that the model we have constructed is the limit of finite rate mutually catalytic branching processes as the branching rate approaches infinity. It therefore seems natural to refer to the above model as an infinite rate branching process. A corresponding convergence result for infinitely many sites, however, remains open.
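As a purely illustrative aside, the exit distribution mentioned above can be approximated numerically by running planar Brownian motion from a point inside the first quadrant until it leaves the quadrant and recording the exit point. This is a minimal Monte Carlo sketch, not part of the thesis; the starting point, step size and time cap are arbitrary choices, and the Euler discretization only gives an approximate exit location.

import numpy as np

# Approximate the exit distribution of planar Brownian motion from the first
# quadrant with a simple Euler scheme; paths not exited by the time cap are
# reported as truncated (the exit time has a heavy tail).
rng = np.random.default_rng(0)

n_paths, dt, max_steps = 4000, 1e-2, 50_000
pos = np.ones((n_paths, 2))               # all paths start at (1, 1)
exit_points = np.full((n_paths, 2), np.nan)
alive = np.arange(n_paths)                # indices of paths still inside the quadrant

for _ in range(max_steps):
    if alive.size == 0:
        break
    pos[alive] += np.sqrt(dt) * rng.standard_normal((alive.size, 2))
    left = (pos[alive] <= 0.0).any(axis=1)
    exited = alive[left]
    exit_points[exited] = pos[exited]
    alive = alive[~left]

done = ~np.isnan(exit_points[:, 0])
frac_x_axis = (exit_points[done, 1] <= 0.0).mean()
print(f"paths exited within the time cap: {done.mean():.3f}")
print(f"fraction exiting through the x-axis: {frac_x_axis:.3f}")

Since the starting point (1, 1) is symmetric about the diagonal, roughly half of the exited paths should leave through each axis, which gives a quick sanity check on the scheme.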

Relevance:

10.00%

Publisher:

Abstract:

In this work we studied the efficiency of the benchmarks used in the asset management industry. In chapter 2 we analyzed the efficiency of the benchmarks used for government bond markets. We found that for emerging market bonds an index with equally weighted country weights is probably the most suitable, because it guarantees maximum diversification of country risk, whereas for the Eurozone government bond market a GDP-weighted index is better, because the most important concern is to avoid giving a higher weight to highly indebted countries. In chapter 3 we analyzed the efficiency of a derivatives index, as opposed to a cash index, for investing in the European corporate bond market. We can state that the two indexes are similar in terms of returns, but that the derivatives index is less risky: it has lower volatility, its skewness and kurtosis are closer to those of a normal distribution, and it is a more liquid instrument, as its autocorrelation is not significant. Chapter 4 analyzes the impact of fallen angels on corporate bond portfolios. Our analysis investigated the impact of the month-end rebalancing of the ML Emu Non Financial Corporate Index due to the exit of downgraded bonds (the event). We conclude that a flexible approach to month-end rebalancing is preferable in order to avoid a loss of value due to the benchmark construction rules. In chapter 5 we compare the equally weighted and capitalization-weighted methods for the European equity market. The benefit of reweighting the portfolio into equal weights can be attributed to the fact that EW portfolios implicitly follow a contrarian investment strategy, because they mechanically rebalance away from stocks that increase in price.
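The mechanical difference between the two weighting schemes discussed above can be sketched on synthetic data; the return process, sample sizes and initial capitalizations below are invented for illustration and have nothing to do with the thesis dataset.

import numpy as np

# Illustrative sketch (synthetic data): compare a capitalization-weighted
# portfolio with a monthly-rebalanced equal-weight portfolio.
rng = np.random.default_rng(42)
n_months, n_stocks = 120, 50
returns = rng.normal(0.005, 0.06, size=(n_months, n_stocks))  # monthly simple returns
caps = rng.lognormal(mean=0.0, sigma=1.0, size=n_stocks)       # initial market caps

ew_value, cw_value = 1.0, 1.0
for r in returns:
    # Equal weight: reset to 1/N every month, which mechanically sells
    # recent winners and buys recent losers (the contrarian tilt).
    ew_value *= 1.0 + r.mean()
    # Cap weight: weights drift with prices, so no rebalancing trades are needed.
    w = caps / caps.sum()
    cw_value *= 1.0 + (w * r).sum()
    caps = caps * (1.0 + r)

print(f"equal-weight terminal value:          {ew_value:.3f}")
print(f"capitalization-weight terminal value: {cw_value:.3f}")

With i.i.d. synthetic returns there is no genuine contrarian premium to harvest, so the two terminal values differ only by noise; the point of the sketch is the mechanics of the two weighting schemes, not a replication of the chapter's empirical result.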

Relevance:

10.00%

Publisher:

Abstract:

This work focused on the large L envelope protein (L) of the hepatitis B virus. L adopts an unusual dual topology in the ER membrane, which is also preserved in the mature virus particle. In a partial, post-translational maturation process, the so-called preS region is translocated from the cytosolic side of the membrane into the ER lumen. Owing to its dual topology and the resulting multifunctionality, L plays a key role in the viral life cycle. One focus of this work was therefore to identify new cellular interaction partners of the L envelope protein; their analysis should help to better understand the interplay between the virus and the host cell. To this end, the split-ubiquitin yeast two-hybrid system was used, which allows interaction analysis of membrane proteins and membrane-associated proteins. Two of the newly identified interaction partners, the v-SNARE Bet1 and Sec24A, the cargo-binding subunit of COPII-mediated vesicular transport, were further investigated in a human cell culture system. For both Bet1 and Sec24A, the interaction with the L envelope protein was confirmed and the binding region was narrowed down. Depletion of endogenous Bet1 markedly reduced the release of L-containing, but not of S-containing, subviral particles (SVP). In contrast to Bet1, Sec24A also interacted with the middle M and small S envelope proteins of HBV. Inhibition of the COPII-mediated vesicular transport pathway by combined depletion of the four Sec24 isoforms blocked the release of both L- and S-containing SVP. This indicates that the HBV envelope proteins leave the ER in a COPII-dependent manner, actively contacting the cargo-binding subunit Sec24A. Efficient export of the envelope proteins from the ER is essential for virus morphogenesis and hence for the HBV life cycle.

A further focus of this work was based on the interaction of the L envelope protein with the ER-luminal chaperone BiP. The present work examined whether BiP, similarly to the cytosolic chaperone Hsc70, is involved in the formation of the dual topology of the L envelope protein. To this end, the substrate-binding capacity of BiP was manipulated by ectopic expression of its co-chaperones BAP and ERdj4. ERdj4, a member of the Hsp40 protein family, stimulates the ATPase activity of BiP, which stabilizes substrate binding. The nucleotide exchange factor BAP, in contrast, mediates the dissolution of the BiP-substrate complex. The effect of the altered in vivo activity of BiP on post-translational preS translocation was examined with protease protection assays. Ectopic expression of either the positive or the negative regulator of BiP resulted in a drastic reduction of post-translational preS translocation. A comparable effect was observed after perturbing the BiP ATPase cycle by depleting the cellular ATP concentration. This argues that the ER-luminal chaperone BiP, together with Hsc70, plays a central role in the formation of the dual topology of the L envelope protein.

Two further proteins, Sec62 and Sec63, which have proved essential for post-translational translocation in yeast, were included in the analysis of the dual topology of the L envelope protein.
Interestingly, a purely luminal orientation of the preS region was observed after combined depletion of endogenous Sec62 and Sec63. This suggests that both Sec62 and Sec63 are involved in the formation of the dual topology of the L envelope protein. By analogy with post-translational translocation in yeast, Sec62 could serve as a translocon-associated receptor for substrates of post-translational translocation, and thus for the preS region. Sec63 could recruit BiP to the translocon via its J domain and then stimulate its substrate-binding activity. BiP would then, like a molecular ratchet, actively pull the preS region into the ER lumen through repeated binding and release, until a stable dual topology of the L envelope protein is established. The importance of Sec62 and Sec63 for the HBV life cycle is underscored by the fact that both ectopic expression and depletion of endogenous Sec63 markedly reduce the release of L-containing SVP.

Relevance:

10.00%

Publisher:

Abstract:

The general objective of this research was to uncover the social representations of folk medicine (Medicina Popular) in three population groups: cancer patients (n=100), patients' relatives (n=25) and members of the healthcare team (n=26). Three qualitative studies were carried out, one with each group, plus a fourth describing the similarities and differences among them with respect to the object of representation. In-depth interviews, free-association exercises and focus groups (7 groups with 62 patients) were used. Results: for the cancer patient, folk medicine is represented as an optimistic way out of the distressing situation he or she is living through in the face of cancer, a bet on life. For the family, it is a counterforce capable of keeping the patient alive and strong; for the healthcare team, it is an irrepressible reality of patients and families, which has a placebo effect on them and is related to magical-religious thinking, to faith and to the ignorance of those who practise it. As regards the differences, the patient and the family consider folk medicine an alternative in which they place their faith and trust, whereas the healthcare staff do not believe in its effects on cancer and regard it as a swindle and a deception of the patient. As for the similarities, all agree that it is an alternative that generates hope, based on natural compounds, which allows the patient to contribute to curing the cancer and to cope with the discomfort caused by chemotherapy. Finally, general conclusions are presented, some of the findings and the importance of the social representations of folk medicine and their impact on patient care and quality of life are discussed, and some questions are raised that could foster the development of a line of research on the subject.

Relevance:

10.00%

Publisher:

Abstract:

The thematic framework of the thesis is the political and cultural history of the relations between the democratic Catholicism of "popolare" origin and the tradition of Italian liberalism, over a period stretching from the anti-Fascism of the Aventine secession to the founding of the Democrazia Cristiana. The hypothesis of the research is that it was precisely in this "long journey" that the leadership of political Catholicism (beginning with Alcide De Gasperi) completed the process of acculturation in a "liberal" sense that would enable it to lead, by consensus, the exit from Fascism after the Second World War.

Relevance:

10.00%

Publisher:

Abstract:

OBJECTIVE: To quantify CECs/ml in patients with critical limb ischemia (CLI) of the lower limbs, and to look for correlations of risk factors and clinical stage with the increase in CECs. To evaluate the structural changes (calcification and inflammatory infiltrate) and the angiogenesis (number of capillaries per section) of the arterial wall. MATERIALS AND METHODS: From May 2006 to April 2008 we prospectively enrolled patients with CLI scheduled for surgery. In a database we recorded demographic characteristics, risk factors, the clinical stage of CLI according to Leriche-Fontaine (L-F), and the type of surgical procedure. For each patient we took a 2 ml blood sample for the immunomagnetic quantification of CECs and a sample of arterial wall. RESULTS: We consecutively enrolled 33 patients (75.8% male) with a mean age of 71 years (range 34-91), with chronic obstructive peripheral arterial disease at L-F stage IV in 84.8%, chronic ischemic heart disease in 60.6%, arterial hypertension in 72.7% and type II diabetes mellitus in 66.6%. The mean value of CECs/ml was significantly higher (p=0.001) in subjects with CLI (CECs/ml = 531.24, range 107-3330) than in controls (CECs/ml = 125.8, range 19-346). CECs/ml were higher in diabetic than in non-diabetic patients (726.7/ml vs 325.5/ml, p<0.05). Diabetic patients showed a higher incidence of complex arterial lesions than non-diabetics (66% vs 47%) and a lower capillary density (65% vs 87%). CONCLUSIONS: CECs are a reliable serological marker of vascular wall damage, and their number is higher in diabetic and hypertensive patients. The reduced angiogenic capacity of the arterial wall, in the presence of more extensive calcification and inflammatory infiltrate in diabetics, indicates greater histopathological wall damage.

Relevance:

10.00%

Publisher:

Abstract:

Following the transposition of the SHR Directive into the Italian legal system by Legislative Decree 27/2010, this work first analyses the current role of proxy voting, whether solicited or not, and then asks which interest is actually pursued when a vote is cast in this way, with particular attention to proxy solicitation, which today is expressly intended (according to the prevailing doctrine) to allow the promoter to pursue its own interests. These considerations on the interest actually underlying the vote cast by proxy lead to an assessment of its relevance for the notion of control under Article 2359 of the Italian Civil Code, which expressly excludes from the relevant votes only those cast "on behalf of third parties", and not, therefore, those cast in his own interest by a person who does not own the shareholding. The work then addresses the main objection to control achieved in this way and, more generally, through one of the various forms of separation between ownership of the shareholding and entitlement to exercise the related vote, namely the apparent lack of stability. Considering, however, that by definition no case of so-called de facto control enjoys stability unless one accepts an assessment of this requirement that is necessarily prognostic and ex ante, the conclusion is reached that control acquired through proxy solicitation differs from other cases of de facto control only in the greater difficulty of establishing the stability requirement as a matter of fact. Finally, the work examines the possibility of granting an exit right (or compensation for damages) to the minority shareholder whose investment risk profile is altered by a change of the controlling party resulting from proxy solicitation, either by direct application of the rules on takeover bids (OPA) or by bringing the case within Article 2497-quater, letter d, of the Civil Code where its conditions are met.

Relevance:

10.00%

Publisher:

Abstract:

The amyloid precursor protein (APP) is a type I transmembrane glycoprotein which resembles a cell surface receptor, comprising a large ectodomain, a single-spanning transmembrane part and a short C-terminal, cytoplasmic domain. It belongs to a conserved gene family, with over 17 members, which also includes the two mammalian APP homologue proteins APLP1 and APLP2 ("amyloid precursor-like proteins"). APP is encoded by 19 exons, of which exons 7, 8 and 15 can be alternatively spliced to produce three major protein isoforms, APP770, APP751 and APP695, named for their number of amino acids. The neuronal APP695 is the only isoform that lacks a Kunitz protease inhibitor (KPI) domain in its extracellular portion, whereas the two larger, peripheral APP isoforms contain the 57-amino-acid KPI insert.

Recent research suggests that APP metabolism and function are influenced by homodimerization, and that the oligomerization state of APP could also play a role in the pathology of Alzheimer's disease (AD) by regulating its processing and amyloid beta production. Several independent studies have shown that APP can form homodimers within the cell, driven by motifs present in the extracellular domain as well as in the juxtamembrane (JM) and transmembrane (TM) regions of the molecule, although the exact molecular mechanism and the origin of dimer formation remain elusive. We therefore focused in our study on the actual subcellular origin of APP homodimerization within the cell, an underlying mechanism, and a possible impact on the dimerization properties of its homologue APLP1. Furthermore, we analyzed homodimerization of various APP isoforms, in particular APP695, APP751 and APP770, which differ in the presence of a Kunitz-type protease inhibitor (KPI) domain in the extracellular region. In order to assess the cellular origin of dimerization under different cellular conditions, we established a mammalian cell culture model system in CHO-K1 (Chinese hamster ovary) cells stably overexpressing human APP harboring dilysine-based organelle sorting motifs at the very C-terminus [KKAA, endoplasmic reticulum (ER); KKFF, Golgi]. In this study we show that APP exists as disulfide-bonded, SDS-stable dimers when it is retained in the ER, but not when it progresses further to the cis-Golgi owing to the KKFF ER exit determinant. These stable APP complexes were isolated from cells and analyzed by SDS-polyacrylamide gel electrophoresis under non-reducing conditions, whereas strongly denaturing and reducing conditions completely converted the dimers to monomers. Our findings suggest that APP homodimer formation starts early in the secretory pathway and that the unique oxidizing environment of the ER likely promotes intermolecular disulfide bond formation between APP molecules. We visualized APP dimerization employing a variety of biochemical experiments and investigated the origin of its generation using a bimolecular fluorescence complementation (BiFC) approach with split GFP-APP chimeras. Moreover, using N-terminal deletion constructs, we demonstrate that intermolecular disulfide linkage between cysteine residues located exclusively in the extracellular E1 domain represents another mechanism by which an APP sub-fraction can dimerize within the cell. Additionally, mutational studies revealed that cysteines at positions 98 and 105, embedded in the conserved loop region within the E1 domain, are critical for interchain disulfide bond formation.
Using a pharmacological treatment approach, we show that, once generated in the oxidative environment of the ER, APP dimers remain stably associated during transport to the plasma membrane. In addition, we demonstrate that APP isoforms encompassing the KPI domain exhibit a strongly reduced ability to form cis-directed dimers in the ER, whereas trans-directed cell aggregation of Drosophila Schneider (S2) cells was isoform independent, mediating cell-cell contacts. This suggests that steric properties of KPI-APP might be the cause of the weaker cis-interaction in the ER compared to APP695. Finally, we provide evidence that APP/APLP1 heterointeractions are likewise initiated in the ER, suggesting a similar mechanism for heterodimerization. Dynamic alterations of APP between monomeric, homodimeric and possibly heterodimeric states could therefore at least partially explain some of the variety in the physiological functions of APP.

Relevance:

10.00%

Publisher:

Abstract:

Stable isotope composition of atmospheric carbon monoxide: a modelling study.

This study aims at an improved understanding of the stable carbon and oxygen isotope composition of carbon monoxide (CO) in the global atmosphere by means of numerical simulations. First, a new kinetic chemistry tagging technique for the most complete parameterisation of isotope effects was introduced into the Modular Earth Submodel System (MESSy) framework. Incorporated into the ECHAM/MESSy Atmospheric Chemistry (EMAC) general circulation model, an explicit treatment of the isotope effects on the global scale is now possible. The expanded model system has been applied to simulate the chemical system containing up to five isotopologues of all carbon- and oxygen-bearing species, which ultimately determine the δ13C, δ18O and Δ17O isotopic signatures of atmospheric CO. As model input, a new stable isotope-inclusive emission inventory for the relevant trace gases has been compiled. The uncertainties of the emission estimates and of the resulting simulated mixing and isotope ratios have been analysed. The simulated CO mixing and stable isotope ratios have been compared with in-situ measurements from ground-based observatories and from the civil-aircraft-mounted CARIBIC−1 measurement platform.

The systematically underestimated 13CO/12CO ratios of earlier, simplified modelling studies can now be partly explained. The EMAC simulations do not support the inference of those studies that CO receives a reduced input from the methane oxidation source, which is strongly depleted in 13C. In particular, a high average yield of 0.94 CO per reacted methane (CH4) molecule is simulated in the troposphere, to a large extent due to the competition between the deposition and convective transport processes affecting the intermediates of the CH4 to CO reaction chain. None of the other factors assumed or disregarded in previous studies and hypothesised to have the potential to enrich tropospheric CO in 13C was found significant when explicitly simulated. Inaccurate surface emissions, likely underestimated over East Asia, are responsible for roughly half of the discrepancies between the simulated and observed 13CO in the northern hemisphere (NH), whereas the compositions in the remote southern hemisphere (SH) suggest an underestimated fractionation during the oxidation of CO by the hydroxyl radical (OH). A reanalysis of the kinetic isotope effect (KIE) in this reaction contrasts with the conventional assumption of a mere pressure dependence and instead suggests an additional temperature dependence of the 13C KIE, driven by changes in the partitioning of the reaction exit channels. This result is yet to be confirmed in the laboratory.

Apart from 13CO, the atmospheric distribution of the oxygen mass-independent fractionation (MIF) in CO, Δ17O, has for the first time been consistently simulated on the global scale with EMAC. The applicability of Δ17O(CO) observations to unravelling changes in the tropospheric CH4-CO-OH system has been scrutinised, as well as the implications of the ozone (O3) input for the CO oxygen isotope budget. Δ17O(CO) is confirmed to be the principal signal of the CO photochemical age, thus providing a measure of the OH chiefly involved in the sink of CO. The highly mass-independently fractionated O3 oxygen is estimated to comprise around 2% of the overall tropospheric CO source, which has implications for the δ18O budget of CO, but less likely for its Δ17O budget.

Finally, additional sensitivity simulations with EMAC corroborate the nearly equal net effects of the present-day CH4 and CO burdens in removing tropospheric OH, as well as the large turnover and the stability of the abundance of the latter. The simulated CO isotopologues nonetheless hint at a likely insufficient OH regeneration in the NH high latitudes and the upper troposphere / lower stratosphere (UTLS).
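For orientation, the isotope quantities used above follow the standard delta and capital-delta conventions; the linear Δ17O form and the slope of about 0.52 given below are the commonly used convention rather than values taken from this thesis, and some studies use a logarithmic definition or a slightly different slope:

\[
\delta^{13}\mathrm{C} = \frac{\left({}^{13}\mathrm{C}/{}^{12}\mathrm{C}\right)_{\mathrm{sample}}}{\left({}^{13}\mathrm{C}/{}^{12}\mathrm{C}\right)_{\mathrm{standard}}} - 1,
\qquad
\delta^{18}\mathrm{O} = \frac{\left({}^{18}\mathrm{O}/{}^{16}\mathrm{O}\right)_{\mathrm{sample}}}{\left({}^{18}\mathrm{O}/{}^{16}\mathrm{O}\right)_{\mathrm{standard}}} - 1,
\qquad
\Delta^{17}\mathrm{O} \approx \delta^{17}\mathrm{O} - 0.52\,\delta^{18}\mathrm{O},
\]

with the delta values usually reported in per mil (i.e. multiplied by 1000).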

Relevance:

10.00%

Publisher:

Abstract:

This dissertation consists of three self-contained papers related to two main topics. The first and third studies focus on labor market modeling, whereas the second essay presents a dynamic international trade setup.

In the chapter "Expenses on Labor Market Reforms during Transitional Dynamics", we investigate the costs, from a government point of view, that arise from a potential labor market reform. To analyze the various effects of changes to the unemployment benefits system, this chapter develops a dynamic model with heterogeneous employed and unemployed workers.

In the chapter "Endogenous Markup Distributions", we study how markup distributions adjust when a closed economy opens up. To perform this analysis, we first present a closed-economy general-equilibrium industry dynamics model, in which firms enter and exit markets, and then extend the analysis to the open-economy case.

In the chapter "Unemployment in the OECD - Pure Chance or Institutions?", we examine the effects of aggregate shocks on the distribution of unemployment rates across OECD member countries.

In all three chapters we model systems that behave randomly and are driven by stochastic processes. We therefore rely on stochastic calculus, which establishes clear methodological links between the chapters.