24 results for Complexity of Relations
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
Myc is a transcription factor that can activate transcription of several hundred genes by binding directly to their promoters at specific DNA sequences (E-boxes). However, recent studies have also shown that it can exert its biological role by repressing transcription. Such studies collectively support a model in which c-Myc-mediated repression occurs through interactions with transcription factors bound to promoter DNA regions, rather than through direct recognition of typical E-box sequences. Here, we investigated whether N-Myc can also repress gene transcription, and how this is achieved mechanistically. We used human neuroblastoma cells as a model system, since N-MYC amplification/over-expression is a key prognostic marker of this tumour. By means of transcription profile analyses we identified at least five genes (TRKA, p75NTR, ABCC3, TG2, p21) that are specifically repressed by N-Myc. Through a dual-step ChIP assay and genetic dissection of gene promoters, we found that N-Myc is physically associated with gene promoters in vivo, in proximity to the transcription start site. N-Myc association with promoters requires interaction with other proteins, such as the Sp1 and Miz1 transcription factors. Furthermore, we found that N-Myc may repress gene expression by interfering directly with Sp1 and/or Miz1 activity (TRKA, p75NTR, ABCC3, p21) or by recruiting Histone Deacetylase 1 (Hdac1) (TG2). In vitro analyses show that distinct N-Myc domains can interact with Sp1, Miz1 and Hdac1, supporting the idea that Myc may participate in distinct repression complexes by interacting specifically with diverse proteins. Finally, the results show that N-Myc, through the genes it represses, affects important cellular functions such as apoptosis, growth, differentiation and motility.
Overall, our results support a model in which N-Myc, like c-Myc, can repress gene transcription by direct interaction with Sp1 and/or Miz1, and provide further evidence of the importance of transcriptional repression by Myc factors in tumour biology.
Abstract:
Neuroblastoma (NB) is the most common type of tumor in infants and the third most common cancer in children. Current clinical practice employs a variety of strategies for NB treatment, ranging from standard chemotherapy to immunotherapy. Due to a lack of knowledge about the molecular mechanisms underlying the disease's onset, aggressive phenotype, and therapeutic resistance, these approaches are ineffective in the majority of cases. MYCN amplification is one of the most well-known genetic alterations associated with high risk in NB. The following work is divided into three sections and aims to provide new insights into the biology of NB and hypothetical new treatment strategies. First, we identified RUNX1T1 as a key gene involved in MYCN-driven NB onset in a transgenic mouse model. Our results suggested that RUNX1T1 may recruit the Co-REST complex on target genes that regulate the differentiation of NB cells and that the interaction with RCOR3 is essential. Second, we provided insights into the role of MYCN in dysregulating the CDK/RB/E2F pathway controlling the G1/S transition of the cell cycle. We found that RB is dispensable in regulating the cell cycle of MYCN-amplified NB, providing the rationale for using cyclin/CDK complex inhibitors in NBs carrying MYCN amplification and relatively high levels of RB1 expression. Third, we generated an M13 bacteriophage platform to target GD2-expressing cells in NB: a recombinant M13 phage capable of selectively binding GD2-expressing cells (M13GD2). Our results showed that M13GD2 chemically conjugated with the photosensitizer ECB04 preserves the retargeting capability, inducing cell death even at picomolar concentrations upon light irradiation. These results provide proof of concept for M13 phage employment in targeted photodynamic therapy for NB, an exciting strategy to overcome resistance to classical immunotherapy.
Abstract:
Introduction. Postnatal neurogenesis in the hippocampal dentate gyrus can be modulated by numerous determinants, such as hormones, transmitters and stress. Among the factors positively influencing neurogenesis, the complexity of the environment appears to play a particularly striking role. Adult mice reared in an enriched environment produce more neurons and exhibit better performance in hippocampus-specific learning tasks. While the effects of complex environments on hippocampal neurogenesis are well documented, there is a lack of information on the effects of living under socio-sensory deprivation. Owing to the immaturity of rats and mice at birth, studies dealing with the effects of environmental enrichment on hippocampal neurogenesis have been carried out in adult animals, i.e. during a period of relatively low neurogenesis. The impact of the environment is likely to be more dramatic during the first postnatal weeks, because at this time granule cell production is remarkably higher than at later phases of development. The aim of the present research was to clarify whether and to what extent isolated or enriched rearing conditions affect hippocampal neurogenesis during the early postnatal period, a time window characterized by a high rate of precursor proliferation, and to elucidate the mechanisms underlying these effects. The experimental model chosen for this research was the guinea pig, a precocious rodent which, at 4-5 days of age, can be independent of maternal care. Experimental design. Animals were assigned to a standard (control), an isolated, or an enriched environment a few days after birth (P5-P6). On P14-P17 animals received one daily bromodeoxyuridine (BrdU) injection to label dividing cells, and were sacrificed either on P18, to evaluate cell proliferation, or on P45, to evaluate cell survival and differentiation. Methods. Brain sections were processed for BrdU immunohistochemistry to quantify the newborn and surviving cells.
The phenotype of the surviving cells was examined by means of confocal microscopy and immunofluorescent double-labeling for BrdU and either a marker of neurons (NeuN) or a marker of astrocytes (GFAP). Apoptotic cell death was examined with the TUNEL method. Serial sections were processed for immunohistochemistry for i) vimentin, a marker of radial glial cells; ii) BDNF (brain-derived neurotrophic factor), a neurotrophin involved in neuron proliferation/survival; iii) PSA-NCAM (the polysialylated form of the neural cell adhesion molecule), a molecule associated with neuronal migration. Total granule cell number in the dentate gyrus was evaluated by stereological methods in Nissl-stained sections. Results. Effects of isolation. In P18 isolated animals we found reduced cell proliferation (-35%) compared to controls and lower expression of BDNF. Though in absolute terms P45 isolated animals had fewer surviving cells than controls, they showed no differences in survival rate or phenotype percent distribution compared to controls. Evaluation of the absolute number of surviving cells of each phenotype showed that isolated animals had fewer cells with a neuronal phenotype than controls. Looking at the location of the new neurons, we found that while in control animals 76% of them had migrated to the granule cell layer, in isolated animals only 55% of the new neurons had reached this layer. Examination of radial glia cells of P18 and P45 animals by vimentin immunohistochemistry showed that in isolated animals radial glia cells were reduced in density and had fewer and shorter processes. Granule cell counts revealed that isolated animals had fewer granule cells than controls (-32% at P18 and -42% at P45). Effects of enrichment. In P18 enriched animals there was an increase in cell proliferation (+26%) compared to controls and higher expression of BDNF.
Though in both groups the number of BrdU-positive cells declined by P45, enriched animals had more surviving cells (+63%) and a higher survival rate than controls. No differences were found between control and enriched animals in phenotype percent distribution. Evaluation of the absolute number of cells of each phenotype showed that enriched animals had a larger number of cells of each phenotype than controls. Looking at the location of cells of each phenotype, we found that enriched animals had more new neurons in the granule cell layer and more astrocytes and cells with undetermined phenotype in the hilus. Enriched animals had higher expression of PSA-NCAM in the granule cell layer and hilus. Vimentin immunohistochemistry showed that in enriched animals radial glia cells were more numerous and had more processes. Granule cell counts revealed that enriched animals had more granule cells than controls (+37% at P18 and +31% at P45). Discussion. The results show that isolation rearing reduces hippocampal cell proliferation but does not affect cell survival, while enriched rearing increases both cell proliferation and cell survival. Changes in the expression of BDNF are likely to contribute to the effects of environment on precursor cell proliferation. The reduction and increase in the final number of granule neurons in isolated and enriched animals, respectively, are attributable to the effects of environment on cell proliferation and survival, not to changes in the differentiation program. As radial glia cells play a pivotal role in guiding neurons to the granule cell layer, the reduced number of radial glia cells in isolated animals and the increased number in enriched animals suggest that the size of the radial glia population may change dynamically to match changes in neuron production. The high PSA-NCAM expression in enriched animals may help favor the survival of the new neurons by facilitating their migration to the granule cell layer.
Conclusions. By using a precocious rodent we could demonstrate that isolated/enriched rearing conditions, during a time window of intense granule cell proliferation, lead to a notable decrease/increase in total granule cell number. The time-course and magnitude of postnatal granule cell production in guinea pigs are more similar to the human and non-human primate condition than to that of rats and mice. Translating the present data to humans would imply that exposing children to environments poor/rich in stimuli may have a notably large impact on dentate neurogenesis and, very likely, on hippocampus-dependent memory functions.
Abstract:
The aim of this thesis is to discuss and develop the Unified Patent Court project, to account for the role it could play in implementing judicial specialisation in the Intellectual Property field. To provide an original contribution to the existing literature on the topic, this work addresses the issue of how the Unified Patent Court could relate to the other forms of judicial specialisation already operating in the European Union context. This study presents a systematic assessment of the not-yet-operational Unified Patent Court within the EU judicial system, which has recently shown a trend towards development outside the institutional framework of the Court of Justice of the European Union. The objective is to understand to what extent the planned implementation of the Unified Patent Court could succeed in responding to the need for specialisation while remaining compliant with the EU legal and constitutional framework. Using the Unified Patent Court as a case study, it is argued that specialised courts in the field of Intellectual Property have a significant role to play in the European judicial system and offer an adequate response to the growing complexity of business operations and relations. The significance of this study lies in analysing whether the UPC can still be considered an appropriate solution for unifying the European patent litigation system. The research considers its significant deficiencies, which risk having a negative effect on European Union institutional procedures. From this perspective, this work aims to contribute to identifying the potential negative consequences of the reform. It also considers different alternatives for a European patent system that could effectively promote innovation in Europe.
Abstract:
Interaction protocols establish how different computational entities can interact with each other. The interaction can be aimed at the exchange of data, as in 'communication protocols', or oriented towards achieving some result, as in 'application protocols'. Moreover, with the increasing complexity of modern distributed systems, protocols are also used to control such complexity and to ensure that the system as a whole evolves with certain features. However, the extensive use of protocols has raised several issues, from the language for specifying them to the various aspects of verification. Computational Logic provides models, languages and tools that can be effectively adopted to address such issues: its declarative nature can be exploited for a protocol specification language, while its operational counterpart can be used to reason upon such specifications. In this thesis we propose a proof-theoretic framework, called SCIFF, together with its extensions. SCIFF is based on Abductive Logic Programming and provides a formal specification language with a clear declarative semantics (based on abduction). The operational counterpart is given by a proof procedure that allows one to reason upon the specifications and to test the conformance of given interactions w.r.t. a defined protocol. Moreover, by suitably adapting the SCIFF framework, we propose solutions for addressing (1) the verification of protocol properties (the g-SCIFF framework), and (2) the a-priori conformance verification of peers w.r.t. a given protocol (the AlLoWS framework). We also introduce an agent-based architecture, the SCIFF Agent Platform, where the same protocol specification can be used to program the interacting peers and to ease their implementation.
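The core idea of conformance testing described above — checking a logged interaction against a declarative protocol specification — can be illustrated with a minimal sketch. This is not the SCIFF proof procedure (which operates on abductive logic programs with expectations and constraints); the rule format and event names below are invented for illustration, reducing a protocol to pairs of the form "if this event occurs, that event is expected later":

```python
# Toy conformance check: a protocol is a list of (trigger, expected)
# pairs, and a trace is a sequence of observed events. The trace is
# conformant if every expectation raised by a trigger is fulfilled by
# a later event. Purely illustrative; real frameworks like SCIFF use
# far richer specifications (variables, time constraints, negation).

def conformant(trace, rules):
    """Return True if every expectation raised by the trace is met."""
    expectations = []
    for i, event in enumerate(trace):
        for trigger, expected in rules:
            if event == trigger:
                expectations.append((expected, i))
    # Every expectation must be fulfilled by a strictly later event.
    return all(exp in trace[after + 1:] for exp, after in expectations)

# Hypothetical protocol: every "request" must be followed by an "inform".
rules = [("request", "inform")]
print(conformant(["request", "inform"], rules))  # conformant trace
print(conformant(["request"], rules))            # unfulfilled expectation
```

The same specification doubles as both a checker for observed traces and documentation of the protocol, which mirrors the dual declarative/operational use of specifications described in the abstract.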
Abstract:
This Ph.D. thesis collects the research work I conducted under the supervision of Prof. Bruno Samorì in 2005, 2006 and 2007. Some parts of the work included in Part III were begun during my undergraduate thesis in the same laboratory and then completed during the initial part of my Ph.D.: the complete results have been included for the sake of understanding and completeness. During my graduate studies I worked on two very different protein systems. The theoretical trait d'union between these studies, at the biological level, is the acknowledgement that protein biophysical and structural studies must, in many cases, take into account the dynamical states of protein conformational equilibria and the local physico-chemical conditions in which the system studied actually performs its function. This is introduced in Chapter 2. Two different examples of this are presented: the structural significance of the action of mechanical forces in vivo (Chapter 3) and the complexity of conformational equilibria in intrinsically unstructured proteins and amyloid formation (Chapter 4). My experimental work investigated both these examples, in both cases using the single-molecule force spectroscopy technique (described in Chapters 5 and 6). The work conducted on angiostatin focused on characterizing the relationships between the mechanochemical properties and the mechanism of action of the angiostatin protein and, most importantly, their intertwining with the further layer of complexity due to disulfide redox equilibria (Part III). These studies were accompanied by the elaboration of a theoretical model for a novel signalling pathway that may be relevant in the extracellular space, detailed in Chapter 7.2.
The work conducted on α-synuclein (Part IV) instead brought a whole new twist to the single-molecule force spectroscopy methodology, applying it as a structural technique to elucidate the conformational equilibria present in intrinsically unstructured proteins. These equilibria are of utmost interest from a biophysical point of view, but most importantly because of their direct relationship with amyloid aggregation and, consequently, the aetiology of relevant pathologies like Parkinson's disease. The work characterized, for the first time, conformational equilibria in an intrinsically unstructured protein at the single-molecule level and, again for the first time, identified a monomeric folded conformation that is correlated with conditions leading to α-synuclein aggregation and, ultimately, Parkinson's disease. Also, during the research work, I found myself in need of a general-purpose application for single-molecule force spectroscopy data analysis that could solve some logistic and analysis problems common in this technique. I developed an application that addresses some of these problems, presented herein (Part V), which I aim to release publicly soon.
Abstract:
Hans-Georg Gadamer's philosophical hermeneutics – undoubtedly one of the cornerstones of twentieth-century thought – is a highly composite, multifaceted and articulated philosophy, formed, so to speak, by a multiplicity of different dimensions interwoven with one another. This is already evident from a simple glance at the internal composition of his main work, Wahrheit und Methode (1960), which presents a theory of understanding that examines three different dimensions of human experience – art, history and language – obviously conceived as fundamentally interrelated. But this overall picture becomes considerably more complicated as soon as one examines at least some of the numerous contributions Gadamer wrote and published before and after his opus magnum: contributions that testify to the important presence of other themes in his thought. Gadamer's interpreters, however, have not always fully taken this complexity into account, since a large part of the exegetical literature on his thought is essentially centred on the 1960 masterpiece (and in particular on the problems of legitimising the Geisteswissenschaften), devoting less attention to the other paths he followed and, in particular, to the properly ethical and political dimension of his hermeneutic philosophy. Moreover, it seems to me that due attention has not always been paid to the fundamental unity – not to be confused with an alleged "systematicity", which Gadamer explicitly rejected – that nevertheless holds within his thought despite its undoubted multiplicity and heterogeneity.
My thesis, then, is that aesthetics and the human sciences, philosophy of language and moral philosophy, dialogue with the Greeks and critical engagement with modern thought, considerations on anthropological problems and reflections on our socio-political and techno-scientific present all represent the different dimensions of a single body of thought, which in some way converge towards a single centre. A "unifying" centre that, in my view, is to be found in what we might call the discontent of modernity. In other words, it seems to me that all of Gadamer's philosophical reflection ultimately springs from the acknowledgement of a situation of crisis or discontent in which our world and our civilisation find themselves today. A crisis that, given its depth and complexity, has, so to speak, "branched out" in multiple directions, affecting various spheres of human existence. Spheres that Gadamer therefore analyses and investigates with a critical eye, seeking to bring out the main problematic knots and, in their light, to advance alternative proposals, remedies, "correctives" and possible solutions. Starting from this basic understanding, my research is organised into three large sections devoted respectively to the pars destruens of Gadamerian hermeneutics (first and second sections) and to its pars construens (third section). In the first section – entitled A Phenomenology of Modernity: the Manifold Symptoms of the Crisis – after showing how much of twentieth-century philosophy was dominated by the idea of a crisis currently afflicting Western civilisation, and how Gadamer's hermeneutics can also be placed within this underlying philosophical discourse, I try to illustrate one by one what, in the eyes of the philosopher of Truth and Method, represent the main symptoms of the present crisis.
These symptoms include: the socio-economic pathologies of our "administered" and bureaucratised world; the indiscriminate planetary expansion of the Western way of life at the expense of other cultures; the crisis of values and certainties, with the concomitant spread of relativism, scepticism and nihilism; the growing inability to relate in an adequate and meaningful way to art, poetry and culture, increasingly degraded to mere entertainment; and finally, the problems linked to the spread of weapons of mass destruction, to the concrete possibility of an ecological catastrophe and to the disquieting prospects opened up by certain recent scientific discoveries (especially in genetics). Having outlined the general profile Gadamer provides of our age, in the second section – entitled A Diagnosis of the Discontent of Modernity: the Spread of Technical-Scientific Instrumental Rationality – I try to show how, at the root of all these phenomena, he essentially discerns a single source, coinciding moreover, in his judgement, with the very origin of modernity. Namely, the birth of modern science and its intrinsic link with technology and with a specific form of rationality that Gadamer – evidently drawing on interpretative categories elaborated by Max Weber, Martin Heidegger and the Frankfurt School – also calls "instrumental rationality" or "calculative thinking".
Starting from this basic vision, I then try to provide an analysis of Gadamer's conception of technoscience, while highlighting several points: first, that Gadamer's philosophical hermeneutics should not be interpreted as a one-sidedly anti-scientific philosophy, but rather as an anti-scientistic one (which is of course quite a different thing); second, that his reconstruction of the crisis of modernity never issues in a "totalising" critique of reason, nor in a pessimistic-negative philosophy of history centred on the idea of an ineluctable course of events guided by an "irrational" rationality contaminated by the lust for power and domination; third and finally, that Gadamer's philosophy – despite the inveterate interpretations that habitually see in it a traditionalist, authoritarian and radically anti-Enlightenment way of thinking – does not at all intend to reject the modern scientific Enlightenment tout court, nor to disown its most important achievements, but more simply to "correct" some of its tendencies and to recover a broader and more comprehensive notion of reason, capable of accounting also for those aspects of human experience which, in the eyes of a "limited" rationality such as the scientistic one, can only appear as mere residues of irrationality. Having thus examined in the first two sections what we may call the pars destruens of Gadamer's philosophy, in the third and final section – entitled A Therapy for the Crisis of Modernity: the Rediscovery of Experience and Practical Knowledge – I move on to examine its pars construens, consisting in my judgement in a critical recovery of what he calls "another kind of knowledge". That is, in an attempt to rehabilitate all those pre- and extra-scientific forms of knowledge and experience that Gadamer considers constitutive of the "hermeneutic dimension" of human existence.
My analysis of Gadamer's conception of Verstehen and Erfahrung – as forms of a "practical knowledge (praktisches Wissen)" different in principle from theoretical and technical knowledge – thus leads to an overall interpretation of philosophical hermeneutics as a genuine practical philosophy. That is, as an effort of philosophical clarification of that pre-scientific, intersubjective, "common sense" knowledge actually operative in the sphere of our Lebenswelt and of our practical existence. This, finally, also leads inevitably to an accentuation of the ethical-political implications of Gadamer's hermeneutics. In particular, I try to examine Gadamer's conception of ethics – taking into account its relations with the moral doctrines of Plato, Aristotle, Kant and Hegel – and finally to outline a profile of his philosophical hermeneutics as a philosophy of dialogue, solidarity and freedom.
Abstract:
La ricerca si propone di definire le linee guida per la stesura di un Piano che si occupi di qualità della vita e di benessere. Il richiamo alla qualità e al benessere è positivamente innovativo, in quanto impone agli organi decisionali di sintonizzarsi con la soggettività attiva dei cittadini e, contemporaneamente, rende evidente la necessità di un approccio più ampio e trasversale al tema della città e di una più stretta relazione dei tecnici/esperti con i responsabili degli organismi politicoamministrativi. La ricerca vuole indagare i limiti dell’urbanistica moderna di fronte alla complessità di bisogni e di nuove necessità espresse dalle popolazioni urbane contemporanee. La domanda dei servizi è notevolmente cambiata rispetto a quella degli anni Sessanta, oltre che sul piano quantitativo anche e soprattutto sul piano qualitativo, a causa degli intervenuti cambiamenti sociali che hanno trasformato la città moderna non solo dal punto di vista strutturale ma anche dal punto di vista culturale: l’intermittenza della cittadinanza, per cui le città sono sempre più vissute e godute da cittadini del mondo (turisti e/o visitatori, temporaneamente presenti) e da cittadini diffusi (suburbani, provinciali, metropolitani); la radicale trasformazione della struttura familiare, per cui la famiglia-tipo costituita da una coppia con figli, solido riferimento per l’economia e la politica, è oggi minoritaria; l’irregolarità e flessibilità dei calendari, delle agende e dei ritmi di vita della popolazione attiva; la mobilità sociale, per cui gli individui hanno traiettorie di vita e pratiche quotidiane meno determinate dalle loro origini sociali di quanto avveniva nel passato; l’elevazione del livello di istruzione e quindi l’incremento della domanda di cultura; la crescita della popolazione anziana e la forte individualizzazione sociale hanno generato una domanda di città espressa dalla gente estremamente variegata ed eterogenea, frammentata e volatile, e per alcuni aspetti 
assolutamente nuova. Accanto a vecchie e consolidate richieste – la città efficiente, funzionale, produttiva, accessibile a tutti – sorgono nuove domande, ideali e bisogni che hanno come oggetto la bellezza, la varietà, la fruibilità, la sicurezza, la capacità di stupire e divertire, la sostenibilità, la ricerca di nuove identità, domande che esprimono il desiderio di vivere e di godere la città, di stare bene in città, domande che non possono essere più soddisfatte attraverso un’idea di welfare semplicemente basata sull’istruzione, la sanità, il sistema pensionistico e l’assistenza sociale. La città moderna ovvero l’idea moderna della città, organizzata solo sui concetti di ordine, regolarità, pulizia, uguaglianza e buon governo, è stata consegnata alla storia passata trasformandosi ora in qualcosa di assai diverso che facciamo fatica a rappresentare, a descrivere, a raccontare. La città contemporanea può essere rappresentata in molteplici modi, sia dal punto di vista urbanistico che dal punto di vista sociale: nella letteratura recente è evidente la difficoltà di definire e di racchiudere entro limiti certi l’oggetto “città” e la mancanza di un convincimento forte nell’interpretazione delle trasformazioni politiche, economiche e sociali che hanno investito la società e il mondo nel secolo scorso. La città contemporanea, al di là degli ambiti amministrativi, delle espansioni territoriali e degli assetti urbanistici, delle infrastrutture, della tecnologia, del funzionalismo e dei mercati globali, è anche luogo delle relazioni umane, rappresentazione dei rapporti tra gli individui e dello spazio urbano in cui queste relazioni si muovono. 
La città è sia concentrazione fisica di persone e di edifici, ma anche varietà di usi e di gruppi, densità di rapporti sociali; è il luogo in cui avvengono i processi di coesione o di esclusione sociale, luogo delle norme culturali che regolano i comportamenti, dell’identità che si esprime materialmente e simbolicamente nello spazio pubblico della vita cittadina. Per studiare la città contemporanea è necessario utilizzare un approccio nuovo, fatto di contaminazioni e saperi trasversali forniti da altre discipline, come la sociologia e le scienze umane, che pure contribuiscono a costruire l’immagine comunemente percepita della città e del territorio, del paesaggio e dell’ambiente. La rappresentazione del sociale urbano varia in base all’idea di cosa è, in un dato momento storico e in un dato contesto, una situazione di benessere delle persone. L’urbanistica moderna mirava al massimo benessere del singolo e della collettività e a modellarsi sulle “effettive necessità delle persone”: nei vecchi manuali di urbanistica compare come appendice al piano regolatore il “Piano dei servizi”, che comprende i servizi distribuiti sul territorio circostante, una sorta di “piano regolatore sociale”, per evitare quartieri separati per fasce di popolazione o per classi. Nella città contemporanea la globalizzazione, le nuove forme di marginalizzazione e di esclusione, l’avvento della cosiddetta “new economy”, la ridefinizione della base produttiva e del mercato del lavoro urbani sono espressione di una complessità sociale che può essere definita sulla base delle transazioni e gli scambi simbolici piuttosto che sui processi di industrializzazione e di modernizzazione verso cui era orientata la città storica, definita moderna. Tutto ciò costituisce quel complesso di questioni che attualmente viene definito “nuovo welfare”, in contrapposizione a quello essenzialmente basato sull’istruzione, sulla sanità, sul sistema pensionistico e sull’assistenza sociale. 
La ricerca ha quindi analizzato gli strumenti tradizionali della pianificazione e programmazione territoriale, nella loro dimensione operativa e istituzionale: la destinazione principale di tali strumenti consiste nella classificazione e nella sistemazione dei servizi e dei contenitori urbanistici. E’ chiaro, tuttavia, che per poter rispondere alla molteplice complessità di domande, bisogni e desideri espressi dalla società contemporanea le dotazioni effettive per “fare città” devono necessariamente superare i concetti di “standard” e di “zonizzazione”, che risultano essere troppo rigidi e quindi incapaci di adattarsi all’evoluzione di una domanda crescente di qualità e di servizi e allo stesso tempo inadeguati nella gestione del rapporto tra lo spazio domestico e lo spazio collettivo. In questo senso è rilevante il rapporto tra le tipologie abitative e la morfologia urbana e quindi anche l’ambiente intorno alla casa, che stabilisce il rapporto “dalla casa alla città”, perché è in questa dualità che si definisce il rapporto tra spazi privati e spazi pubblici e si contestualizzano i temi della strada, dei negozi, dei luoghi di incontro, degli accessi. Dopo la convergenza dalla scala urbana alla scala edilizia si passa quindi dalla scala edilizia a quella urbana, dal momento che il criterio del benessere attraversa le diverse scale dello spazio abitabile. Non solo, nei sistemi territoriali in cui si è raggiunto un benessere diffuso ed un alto livello di sviluppo economico è emersa la consapevolezza che il concetto stesso di benessere sia non più legato esclusivamente alla capacità di reddito collettiva e/o individuale: oggi la qualità della vita si misura in termini di qualità ambientale e sociale. 
Hence the need for an instrument for understanding the contemporary city, to be attached to the Plan, that defines the criteria to be observed in the design of urban space in order to determine the quality and well-being of the built environment, understood as generalized well-being, in its meaning of "quality of living well". It is evident that reaching such a level of quality and well-being requires satisfying, on the one hand, the macroscopic aspects of social functioning and standard of living, through indicators of income, employment, poverty, crime, housing, education, etc.; and on the other hand, primary, elementary and basic needs as well as secondary, cultural and therefore changing ones, passing from the welfare state to personal well-being, to wellness in a holistic sense, all expressions of a desire for mental and physical beauty and of a new relationship between the body and the environment, and thus the concrete manifestation of a need for individual and collective well-being. And it is this new and difficult need that creates the widespread feeling of the beginning of a new urban season, far more than the physical changes of the city themselves suggest.
Abstract:
In fluid dynamics research, pressure measurements are of great importance for defining the flow field acting on aerodynamic surfaces. Indeed, the experimental approach is fundamental to avoid the complexity of the mathematical models for predicting fluid phenomena. It is important to note that, when using in-situ sensors to monitor pressure over large domains with highly unsteady flows, classical techniques run into several problems related to transducer cost, intrusiveness, time response and operating range. An interesting approach for satisfying these sensor requirements is to implement a sensor network capable of acquiring pressure data on an aerodynamic surface using a wireless communication system able to collect the pressure data with the lowest possible level of environmental invasion. In this thesis a wireless sensor network for pressure measurements in fluid flow fields has been designed, built and tested. To develop the system, a capacitive pressure sensor based on a polymeric membrane, together with microcontroller-based read-out circuitry, was designed, built and tested. Wireless communication was performed using the Zensys Z-WAVE platform, and network and data management were implemented. Finally, the full embedded system with antenna was created. As a proof of concept, the monitoring of pressure on the top of the mainsail of a sailboat was chosen as a working example.
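To give a concrete sense of the capacitive read-out principle, here is a minimal sketch of converting a capacitance reading into a pressure estimate under a parallel-plate, linear-compliance model; all numerical parameters below are hypothetical illustrations, not the actual characteristics of the sensor developed in the thesis.

```python
# Hedged sketch: pressure from a polymer-membrane capacitive sensor,
# assuming a parallel-plate model and a linear membrane compliance.
# All parameter values are invented for illustration.

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def capacitance(gap_m, area_m2, eps_r=3.0):
    """Parallel-plate capacitance for a given electrode gap."""
    return EPS0 * eps_r * area_m2 / gap_m

def pressure_from_capacitance(c_meas, area_m2, gap0_m, k_m_per_pa, eps_r=3.0):
    """Invert the plate model: infer the gap from the measured capacitance,
    then infer pressure from the (assumed linear) compliance k [m/Pa]."""
    gap = EPS0 * eps_r * area_m2 / c_meas      # current electrode gap
    deflection = gap0_m - gap                  # membrane displacement
    return deflection / k_m_per_pa             # linear spring model

# Example with hypothetical sensor parameters
area = 1e-4        # 1 cm^2 electrode
gap0 = 50e-6       # 50 um rest gap
k = 1e-10          # 0.1 um of deflection per kPa (assumed compliance)
c_loaded = capacitance(gap0 - 5e-6, area)     # membrane pushed in by 5 um
print(pressure_from_capacitance(c_loaded, area, gap0, k))  # ~5e4 Pa
```

In a real device the microcontroller would measure capacitance indirectly (e.g. via an RC oscillation period) and the membrane response would be calibrated rather than assumed linear.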
Abstract:
My aim is to develop a theory of cooperation within the organization and to test it empirically. Drawing upon social exchange theory, social identity theory, the idea of collective intentions, and social constructivism, the main assumption of my work is that both cooperation and the organization itself are continually shaped and restructured by the actions, judgments, and symbolic interpretations of the parties involved. Therefore, I propose that the decision to cooperate, expressed, say, as an intention to cooperate, reflects and depends on a three-step social process shaped by the interpretations of the actors involved. The first step entails an instrumental evaluation of cooperation in terms of social exchange. In the second step, this “social calculus” is translated into cognitive, emotional and evaluative reactions directed toward the organization. Finally, once the identification process is completed and membership awareness is established, I propose that individuals will start to think largely in terms of “We” instead of “I”. Self-goals are redefined at the collective level, and the outcomes for self, others, and the organization become practically interchangeable. I decided to apply my theory to an important cooperative problem in management research: knowledge exchange within organizations. Hence, I conducted a quantitative survey among the members of the virtual community “www.borse.it” (n=108). Within this community, members freely decide to exchange their knowledge about the stock market among themselves. Because of the confirmatory requirements and the structural complexity of the theory proposed (i.e., the proposal that instrumental evaluations induce social identity and this in turn causes collective intentions), I use Structural Equation Modeling to test all hypotheses in this dissertation. The empirical survey-based study found support for the theory of cooperation proposed in this dissertation.
The findings suggest that an appropriate conceptualization of the decision to exchange knowledge is one where collective intentions depend proximally on social identity (i.e., cognitive identification, affective commitment, and evaluative engagement) with the organization, and this identity depends on instrumental evaluations of cooperators (i.e., perceived value of the knowledge received, assessment of past reciprocity, expected reciprocity, and expected social outcomes of the exchange). Furthermore, I find that social identity fully mediates the effects of instrumental motives on collective intentions.
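The full-mediation claim can be illustrated with a toy regression-based mediation check on synthetic data; the dissertation itself uses Structural Equation Modeling on the survey sample, and the variable names and effect sizes below are invented for illustration only.

```python
# Hedged sketch: full mediation means the effect of X (instrumental
# evaluation) on Y (collective intention) vanishes once the mediator M
# (social identity) is controlled for. Synthetic data, not survey data.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
x = rng.normal(size=n)                         # instrumental evaluation
m = 0.8 * x + rng.normal(scale=0.3, size=n)    # social identity (mediator)
y = 0.9 * m + rng.normal(scale=0.3, size=n)    # collective intention

def ols(cols, y):
    """Least-squares coefficients with an intercept column prepended."""
    X = np.column_stack([np.ones(len(y))] + cols)
    return np.linalg.lstsq(X, y, rcond=None)[0]

total = ols([x], y)[1]       # total effect of x on y (substantial)
direct = ols([x, m], y)[1]   # direct effect controlling for the mediator
print(total, direct)         # direct effect collapses toward zero
```

Because y depends on x only through m in this toy model, the direct coefficient shrinks to near zero once m enters the regression, which is the regression-level signature of full mediation.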
Abstract:
The experience of the void, essential to the production of forms and to their use, can be considered the basis of the activities involved in formative processes. Void and matter constitute the basic substances of architecture; their role in the definition of form, its symbolic value and its constructive methods define the quality of space. This work investigates the character of space in Moneo's architecture by interpreting the meaning of the void in Basque culture through a reading of the formal matrices in the work of Jorge Oteiza and Eduardo Chillida. The tie with Basque culture provides a reading key that makes it possible to relate some of the theoretical principles expressed by Moneo on the relationship between place and time within a unique and specific vision of space. In the analysis of the process that determines the genesis of Moneo's architecture, a trajectory emerges whose direction is built on two pivots: on the one hand, architecture as an instrument of appropriation of the place, springing from a process of knowledge that relies on the reading of the relations that define the place and of the resonances through which it can be measured; on the other hand, an architecture whose character is able to represent and extend the time in which it is conceived, through the autonomy conferred on it by its values. Following the trace identified by this hypothesis, which rests on the theories elaborated by Moneo, the investigation deepens the reading of the principles underlying the sculptural work of Oteiza and Chillida, characterized by a search around the theme of the void and its expression through form. This is instrumental to defining a specific field within which to interpret the character of the space underlying a vision of place and time akin to Moneo's sensibility and in some way not foreign to his cultural formation.
The years of academic formation, during which Moneo came into contact with Basque artistic culture, appear to be an important period in the birth of the knowledge that would lead him to formulate theories tied to the relationship between time, place and architecture. The values expressed through the experimental work of Oteiza and Chillida during the 1950s are a valid basis for understanding such relationships. In tracing a profile of the figures of Oteiza and Chillida, without any claim to exhaustiveness in reading the complex historical period in which they are placed, but with the need to put their work in context, I want to highlight the important role played by the two artists within the Basque cultural area in which Moneo took his first steps. The tie that draws Moneo to Basque culture, following the personal trajectory of his formative experience, interlaces with that of important figures of Spanish art and architecture. One of the most significant relationships was born precisely during the years of his academic formation, from 1958 to 1961, when he worked as a student in the professional office of the architect Francisco Sáenz de Oiza, who was then teaching architectural design at the ETSAM. In those years many Basque artists passed through Oiza's office, which enjoyed the important support of the manufacturer and patron Juan Huarte Beaumont, introduced to him by Oteiza. The tie between Huarte and Oteiza was solid and continuous over the years, and took shape in contributions to many of the initiatives that made Oteiza a promoter of Basque culture. In the four years of collaboration with Oiza, Moneo had the opportunity to stay in contact with an atmosphere permeated by constant research in the field of plastic art, and with figures directly connected to that atmosphere.
It was a period of great intensity in the production as well as in the promotion of Basque art. The collective exhibition "Blanco y Negro", held in 1959 at the Galería Darro in Madrid, is only one of the many occasions on which the work of Oteiza and Chillida was exhibited. The end of the fifties was a period of international recognition both for Chillida and for Oteiza. The decade of the fifties consecrated the hypothesis of a mythical past of the Basque people through the spread of the studies carried out in the preceding years. Archaeological discoveries, joining a context already rich in signs of the prehistoric era, consolidated awareness of a strong cultural identity. Oteiza, like Chillida and other contemporary artists, believed in a cosmogonic conception belonging to the Basques, connected to their matriarchal mythological past. The void in its meaning of absence, in Basque culture as in various archaic and oriental religions, is equivalent to spiritual fullness, an essential condition for the revelation of essence. Retracing the archaic origins of Basque culture, the deep meaning emerges that the void assumes as a key element in the religious interpretation of the passage from life to death. The symbology is enriched with meaningful characters deriving from the fact that it is a chthonic cult: a representation of the earth as the place in which the divine manifests itself, but also as a connection between the divine and the human, where the manipulation of the matter of which the earth is composed is the tangible projection of man's continuous search for God. The search for equilibrium between empty and full, which also characterizes the development of form in architecture, thus assumes in Basque culture a peculiar value that recurs as a constant in most of its plastic expressions, which in this context seem to be privileged with respect to other expressive forms.
Oteiza and Chillida developed two original points of view on the representation of the void through form. Both made use of rigorous systems of rules sensitive to physical principles and to the character of matter. The ultimate aim of Oteiza's construction is the void as the limit of knowledge, as the border between the known and the unknown. This does not mean reducing the sculptural object to a merely allusive dimension, because the void as physical and spiritual power is an active void, possessing the value that reveals being through the trace of non-being. The void in its transcendental manifestation acts at the same time as universal and as particular, as in the atomic structure of matter, in which on one side it constitutes the inner structure of every atom and on the other it is the necessary condition for the interaction between atoms. The void can therefore be seen as the field of action that allows relations between forms, but it is also the necessary condition for the very existence of form. In Chillida's construction the void represents the counterpart that structures matter, inborn in it, the element in the absence of which there would be neither variations nor the distinctive characters that define the phenomenal variety of the world. Physical laws become the subject of sculptural representation, and the void is the instrument that allows equilibrium to be reached. Chillida devoted himself to experiencing space through the senses, to perceiving its qualities, to telling the physical laws by which matter is forged into form and form arranges places. From the artistic experience of the two sculptors, the matrices on which they constructed their original lyric expressions, where the void is the absolute protagonist, can be transposed to the architectonic work of Moneo.
A field is thus defined within which the formal matrices drawn from the work of Oteiza and Chillida can be traced in the process of the birth and construction of Moneo's architecture, but also in the relation that his architecture establishes with place and time. The void becomes an instrument for reading constructed space through the relationships that determine proportions, rhythms and relations. In this way the void allows the architectonic space to be interpreted and its value, the quality of the spaces that construct it, to be read. It acts as an instrument of composition, whose role is to maintain the separation between elements while bringing out the field of relations. The void is the instrument that serves to characterize the elements of the composition, related to each other yet distinct. The meaning of the void therefore pushes the interpretation of architectonic composition towards the play of relations between elements that, independent and distinct, are strengthened in their identity. If on the one hand the void, as a measurable reality, allows all the dimensional changes that quantify the relationships between the parts, on the other hand its dialectic connotation allows the search for the equilibrium that governs such variations. This equilibrium does not represent a state obtained by applying criteria set up by arbitrary rules, but depends on the intimate nature of matter and its embodiment in form. The production of a form, or of a formal system aimed at the construction of a building, is indissolubly tied to a technique based on knowledge of the formal vocation of matter, and what it can represent and mean expresses itself in characterizing the site. For Moneo, in fact, the space defined by architecture is above all a site, because the essence of the site is founded on construction.
When Moneo speaks of the "birth of the idea of the plan" as the essential moment in the process of constructing architecture, he refers to a process whose complexity can only arise from a deepened knowledge of the site that leads to the comprehension of its specificity. This specificity arises from the infinite sum of relations which, for Moneo, is the story of the uniqueness of a site, of its history, of its cultural identity and of the dimensional characters tied to it, beyond its physical characteristics. This vision rests on a solid physical structure of perceptions, distances, orientations and references, which makes the process first of all one of knowledge, of appropriation. This appropriation, however, does not follow as a direct consequence, because there is no cause-and-effect relationship between place and architecture, just as there is no univocal and exclusive way of arriving at the representation of an idea. It is an approach that, through the construction of the place where architecture acquires its being, seeks an expression of its sense of truth. The proposed distinction into the areas of space, matter, spirit and time, answering the issues that articulate the themes of Moneo's design research, allows a more immediate reading of the systems underlying his compositional principles, through which the recurrent architectonic elements of his design vocabulary are related. From the dialectic between opposites expressed in the duality of form, through the definition of a complex element that can mediate between inside and outside as a real system of exchange, Moneo explores the formal development of the building by deepening the relations that the volume establishes with the site.
From time to time, the invention of a system capable of answering the needs of the programme and resolving the dual character of construction in a single gesture involves a deep knowledge of professional practice. The technical aspect is the essential support to which the construction of the system is indissolubly tied. What arouses interest, therefore, is the search for criteria and ways of building that can reveal essential aspects of the being of things. The constructive process demands, in fact, knowledge of the formative properties of matter, properties from which spring reflections on the relations that can arise around architecture through the resonance produced by forms. The void, in fact, through form is able to construct the site by establishing a relation of reciprocity: a reciprocity determined in the play between empty and full, and of the forms among themselves, with respect to the surroundings but also with regard to subjective experience. The construction of a background that amplifies what is arranged on it, clearly shows the relations between the parts and at the same time binds itself to the surroundings by opening the space of vision, is a device that in Moneo's architecture finds one of its most effective applications in the use of the platform as an architectonic element. The spiritual force of this architectonic gesture lies in its ability to define a place whose design intention is perceived and shared by those who experience it, like an instrument for contacting cosmic forces in a delicate process that leads to equilibrium with them, yet in a completely physical way. The principles underlying the construction of form, drawn from the study of the void and of the relations it allows, lead to the expression of human values in the construction of the site. The validity of these principles, however, is tested by time.
Time is what Moneo considers the filter to which every architecture is subjected, and the survival of an architecture, or of any of its formal characters, reveals the validity of the principles that determined it. Thus is manifested, in the tie between the spatial and the spiritual dimension, between the material and the worldly dimension, the state of necessity that leads, in the construction of architecture, to establishing a contact with the forces of the universe and with the intimate world, through a process that translates that necessity into the elaboration of a formal system.
Abstract:
In cases of severe osteoarthritis of the knee causing pain, deformity, and loss of stability and mobility, clinicians consider the substitution of the articular surfaces by means of joint prostheses. The objectives pursued by this surgery are: complete pain elimination, restoration of normal physiological mobility and joint stability, and correction of all deformities and, thus, of limping. Knee surgical navigation systems have been developed in computer-aided surgery in order to improve the final surgical outcome in total knee arthroplasty. These systems provide the surgeon with quantitative, real-time information about each surgical action, such as bone cut execution and prosthesis component alignment, by means of tracking tools rigidly fixed onto the femur and the tibia. Nevertheless, there is still a margin of error due to incorrect surgical procedures and to the still limited amount of kinematic information provided by current systems. In particular, patello-femoral joint kinematics is not considered in knee surgical navigation. It is also unclear, and thus a source of misunderstanding, what the most appropriate methodology is for studying patellar motion. In addition, the knee ligamentous apparatus is only superficially considered in navigated total knee arthroplasty, without taking into account how its physiological behaviour is altered by this surgery. The aim of the present research work was to provide new functional and biomechanical assessments for the improvement of surgical navigation systems for joint replacement in the human lower limb. This was mainly realized by means of the identification and development of new techniques that allow a thorough comprehension of the functioning of the knee joint, with particular attention to the patello-femoral joint and to the main knee soft tissues. A knee surgical navigation system with active markers was used in all the research activities presented in this work.
In particular, preliminary tests were performed in order to assess the system accuracy and the robustness of a number of navigation procedures. Four studies were performed in-vivo on patients requiring total knee arthroplasty and randomly implanted by means of traditional or navigated procedures, in order to check the real efficacy of the latter with respect to the former. In order to cope with the assessment of patello-femoral joint kinematics in the intact and replaced knee, twenty in-vitro tests were performed using a prototype tracking tool for the patella as well. In addition to standard anatomical and articular recommendations, original proposals for defining the patellar anatomically based reference frame and for studying patello-femoral joint kinematics were reported and used in these tests. These definitions were applied to two further in-vitro tests in which, for the first time, the implant of the patellar component was also fully navigated. In addition, an original technique to analyze the main knee soft tissues by means of anatomically based fibre mappings was also reported and used in the same tests. The preliminary instrumental tests revealed a system accuracy within the millimetre and good inter- and intra-observer repeatability in defining all anatomical reference frames. In the in-vivo studies, the general alignment of the femoral and tibial prosthesis components and of the lower-limb mechanical axis, as measured on radiographs, was more satisfactory, i.e. within ±3°, in those patients in whom total knee arthroplasty was performed by navigated procedures. As for the in-vitro tests, consistent patello-femoral joint kinematic patterns were observed over specimens throughout the knee flexion arc. Generally, the physiological patellar motion of the intact knee was not restored after the implant. This restoration was successfully achieved in the two further tests where all component implants, including the patellar insert, were fully navigated, i.e. by means of intra-operative assessment of patellar component positioning as well, together with general tibio-femoral and patello-femoral joint assessment. The tests for assessing the behaviour of the main knee ligaments revealed the complexity of the latter and the different functional roles played by the several sub-bundles composing each ligament. Also in this case, total knee arthroplasty altered the physiological behaviour of these knee soft tissues. These results reveal in-vitro the relevance and feasibility of applying new techniques for accurate knee soft-tissue monitoring, patellar tracking assessment and navigated patellar resurfacing intra-operatively, in the context of the most modern operative techniques. The present research work contributes to the much debated knowledge of normal and replaced knee kinematics by testing the reported new methodologies. The consistency of these results provides fundamental information for the comprehension and improvement of knee orthopaedic treatments. In the future, the reported new techniques can be safely applied in-vivo and also adopted in other joint replacements.
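The ±3° alignment check mentioned above amounts to measuring the angle between the femoral and tibial mechanical axes; a minimal sketch follows, with hypothetical landmark coordinates standing in for the tracked marker data a navigation system would actually acquire.

```python
# Hedged sketch: deviation of the lower-limb mechanical axis from neutral,
# computed as the angle between femoral (hip->knee) and tibial (knee->ankle)
# axes. Landmark coordinates are invented for illustration (units: mm).
import numpy as np

def axis(p_prox, p_dist):
    """Unit vector from a proximal to a distal landmark."""
    v = np.asarray(p_dist, float) - np.asarray(p_prox, float)
    return v / np.linalg.norm(v)

def alignment_deg(femoral_axis, tibial_axis):
    """Angle (degrees) between the two mechanical-axis directions."""
    cosang = np.clip(np.dot(femoral_axis, tibial_axis), -1.0, 1.0)
    return np.degrees(np.arccos(cosang))

hip_center   = [0.0, 0.0, 0.0]
knee_center  = [10.0, 0.0, -420.0]     # slight medial offset (hypothetical)
ankle_center = [14.0, 0.0, -800.0]

fem = axis(hip_center, knee_center)
tib = axis(knee_center, ankle_center)
dev = alignment_deg(fem, tib)
print(f"deviation from neutral axis: {dev:.1f} deg; within tolerance: {dev <= 3.0}")
```

A real intra-operative check would compute this angle continuously from the tracked femoral and tibial reference frames rather than from isolated points.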
Abstract:
Nuclear Magnetic Resonance (NMR) is a branch of spectroscopy based on the fact that many atomic nuclei may be oriented by a strong magnetic field and will absorb radiofrequency radiation at characteristic frequencies. The parameters that can be measured on the resulting spectral lines (line positions, intensities, line widths, multiplicities and transients in time-dependent experiments) can be interpreted in terms of molecular structure, conformation, molecular motion and other rate processes. In this way, high-resolution (HR) NMR allows qualitative and quantitative analysis of samples in solution, in order to determine the structure of molecules in solution and beyond. In the past, high-field NMR spectroscopy was mainly concerned with the elucidation of chemical structure in solution, but today it is emerging as a powerful exploratory tool for probing biochemical and physical processes. It represents a versatile tool for the analysis of foods. In the literature, many NMR studies have been reported on different types of food, such as wine, olive oil, coffee, fruit juices, milk, meat, eggs, starch granules, flour, etc., using different NMR techniques. Traditionally, univariate analytical methods have been used to explore spectroscopic data. Such a method measures or selects a single descriptive variable from the whole spectrum and, in the end, only this variable is analyzed. This univariate approach, applied to HR-NMR data, leads to various problems, due especially to the complexity of an NMR spectrum. The latter is composed of different signals belonging to different molecules, but it is also true that the same molecule can be represented by different signals, generally strongly correlated. Univariate methods, in this case, take into account only one or a few variables, causing a loss of information.
Thus, when dealing with complex samples like foodstuffs, univariate analysis of spectral data is not powerful enough. Spectra need to be considered in their wholeness and, in order to analyse them, the whole data matrix must be taken into consideration: chemometric methods are designed to treat such multivariate data. Multivariate data analysis is used for a number of distinct purposes, and the aims can be divided into three main groups: data description (explorative modelling of the data structure of any generic n-dimensional data matrix, e.g. PCA); regression and prediction (PLS); and classification and prediction of class membership for new samples (LDA, PLS-DA and ECVA). The aim of this PhD thesis was to verify the possibility of identifying and classifying plants or foodstuffs into different classes, based on the concerted variation in metabolite levels detected by NMR spectra, using multivariate data analysis as a tool to interpret the NMR information. It is important to underline that the results obtained are useful to point out the metabolic consequences of a specific modification of foodstuffs, avoiding targeted analysis of the individual metabolites. The data analysis is performed by applying chemometric multivariate techniques to the dataset of acquired NMR spectra. The research work presented in this thesis is the result of a three-year PhD study. This thesis reports the main results obtained from two main activities: A1) evaluation of a data pre-processing system in order to minimize unwanted sources of variation, due to different instrumental set-ups, manual spectra processing and sample-preparation artefacts; A2) application of multivariate chemometric models in data analysis.
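As a minimal illustration of the explorative aim (data description with PCA), the following sketch runs a bare-NumPy PCA, via SVD of the mean-centered data matrix, on simulated spectra; the Gaussian peaks and two-class design are invented for illustration and are not the thesis data.

```python
# Hedged sketch: explorative PCA on a matrix of synthetic "spectra".
# Two hypothetical sample classes differ in the intensity of one signal;
# PC1 of the mean-centered matrix captures this concerted variation.
import numpy as np

rng = np.random.default_rng(1)
ppm = np.linspace(0, 10, 500)          # chemical-shift axis (arbitrary)

def spectrum(peak_ppm, intensity):
    """One synthetic spectrum: a Gaussian peak plus measurement noise."""
    peak = intensity * np.exp(-((ppm - peak_ppm) ** 2) / 0.02)
    return peak + rng.normal(scale=0.01, size=ppm.size)

X = np.array([spectrum(3.0, 1.0) for _ in range(10)] +     # class A
             [spectrum(3.0, 0.3) for _ in range(10)])      # class B

Xc = X - X.mean(axis=0)                # mean-center the data matrix
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U * s                         # PCA scores (samples x components)
explained = s**2 / np.sum(s**2)        # fraction of variance per component
print(explained[0])                    # PC1 dominates: the class difference
```

On real HR-NMR data one would first apply the pre-processing mentioned in A1 (alignment, normalization, binning) before decomposing the matrix.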
Abstract:
Some fundamental biological processes, such as embryonic development, have been preserved during evolution and are common to species belonging to different phylogenetic positions, yet they remain largely unknown. The understanding of the cell morphodynamics leading to the formation of organized spatial distributions of cells, such as tissues and organs, can be achieved through the reconstruction of cell shape and position during the development of a live animal embryo. In this work we design a chain of image-processing methods to automatically segment and track cell nuclei and membranes during the development of a zebrafish embryo, which has been widely validated as a model organism for understanding vertebrate development, gene function and healing/repair mechanisms in vertebrates. The embryo is first labeled through the ubiquitous expression of fluorescent proteins addressed to cell nuclei and membranes, and temporal sequences of volumetric images are acquired with laser scanning microscopy. Cell positions are detected by processing the nuclei images either through the generalized form of the Hough transform or by identifying nuclei positions as local maxima after a smoothing pre-processing step. Membrane and nuclei shapes are reconstructed by using PDE-based variational techniques such as Subjective Surfaces and the Chan-Vese method. Cell tracking is performed by combining the information previously detected on cell shape and position with biological regularization constraints. Our results are manually validated and reconstruct the formation of the zebrafish brain at the 7-8 somite stage, with all cells tracked starting from the late sphere stage with less than 2% error for at least 6 hours. Our reconstruction opens the way to a systematic investigation of cellular behaviours, of the clonal origin and clonal complexity of brain organs, as well as of the contribution of cell proliferation modes and cell movements to the formation of local patterns and morphogenetic fields.
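The second detection strategy described above (smoothing followed by local-maxima extraction) can be sketched on a toy 2-D image; the real pipeline operates on 3-D volumetric time sequences, and the use of scipy.ndimage here is an illustrative assumption, not the thesis implementation.

```python
# Hedged sketch: nuclei centers as local maxima of a smoothed image.
# Two synthetic Gaussian "nuclei" are planted in noise; after Gaussian
# smoothing, points that equal the maximum of their neighbourhood and
# exceed an intensity threshold are reported as detections.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(2)
img = rng.normal(scale=0.05, size=(64, 64))      # background noise
yy, xx = np.mgrid[0:64, 0:64]
for cy, cx in [(16, 16), (40, 48)]:              # two synthetic nuclei
    img += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / 18.0)

smooth = ndimage.gaussian_filter(img, sigma=2)   # smoothing pre-processing
is_peak = smooth == ndimage.maximum_filter(smooth, size=9)
maxima = is_peak & (smooth > 0.3)                # threshold rejects noise peaks
centers = np.argwhere(maxima)
print(centers)                                   # near (16,16) and (40,48)
```

The window size and threshold would in practice be tuned to the nuclear radius and the imaging noise level; the Hough-transform alternative mentioned above trades this simplicity for robustness to touching nuclei.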
Abstract:
In this work we study the relation between crustal heterogeneities and complexities in fault processes. The first kind of heterogeneity considered involves the concept of asperity. The presence of an asperity in the hypocentral region of the M = 6.5 earthquake of June 17th, 2000 in the South Iceland Seismic Zone was invoked to explain the change in seismicity pattern before and after the mainshock: in particular, the spatial distribution of foreshock epicentres trends NW, while the strike of the main fault is N7°E and the aftershocks trend accordingly; the foreshock depths were typically greater than the average aftershock depths. A model is devised which simulates the presence of an asperity in terms of a spherical inclusion within a softer elastic medium, in a transform domain with a deviatoric stress field imposed at remote distances (compressive NE-SW, tensile NW-SE). An isotropic compressive stress component is induced outside the asperity in the direction of the compressive stress axis, and a tensile component in the direction of the tensile axis; as a consequence, fluid flow is inhibited in the compressive quadrants while it is favoured in the tensile quadrants. Within the asperity the isotropic stress vanishes but the deviatoric stress increases substantially, without any significant change in the principal stress directions. Hydrofracture processes in the tensile quadrants and viscoelastic relaxation at depth may contribute to lowering the effective rigidity of the medium surrounding the asperity. According to the present model, foreshocks may be interpreted as induced, close to the brittle-ductile transition, by high-pressure fluids migrating upwards within the tensile quadrants; this process increases the deviatoric stress within the asperity, which eventually fails, becoming the hypocentre of the mainshock, on the optimally oriented fault plane.
In the second part of our work we study the complexities induced in fault processes by the layered structure of the crust. In the first model proposed we study the case in which fault bending takes place in a shallow layer. The problem can be addressed in terms of a deep vertical planar crack interacting with a shallower inclined planar crack. An asymptotic study of the singular behaviour of the dislocation density at the interface reveals that the density distribution has an algebraic singularity at the interface of degree ω between -1 and 0, depending on the dip angle of the upper crack section and on the rigidity contrast between the two media. From the welded boundary condition at the interface between medium 1 and medium 2, a stress-drop discontinuity condition is obtained, which can be fulfilled if the stress drop in the upper medium is lower than that required for a planar through-going surface: as a corollary, a vertically dipping strike-slip fault at depth may cross the interface with a sedimentary layer, provided that the shallower section is suitably inclined (fault "refraction"). This result has important implications for our understanding of the complexity of the fault system in the SISZ; in particular, we may understand the observed offset of secondary surface fractures with respect to the strike direction of the seismic fault. The results of this model also suggest that further fractures can develop in the opposite quadrant, and so a second model describing fault branching in the upper layer is proposed. Like the previous model, this model can be applied only when the stress drop in the shallow layer is lower than the value prescribed for a vertical planar crack surface. Alternative solutions must be considered if the stress drop in the upper layer is higher than in the lower layer, which may be the case when anelastic processes relax the deviatoric stress in layer 2.
In such a case a single through-going crack cannot fulfil the welded boundary conditions, and unwelding of the interface may take place. We have solved this problem within the theory of fracture mechanics, employing the boundary element method. The fault terminates against the interface in a T-shaped configuration, whose segments interact with each other: the lateral extent of the unwelded surface can be computed in terms of the main fault parameters, and the stress field resulting in the shallower layer can be modelled. A wide stripe of high and nearly uniform shear stress develops above the unwelded surface, whose width is controlled by the lateral extension of the unwelding. Secondary shear fractures may then open within this stripe, according to the Coulomb failure criterion, and the depth of fractures opening in mixed mode may be computed and compared with the well-studied fault complexities observed in the field. In the absence of the T-shaped décollement structure, the stress concentration above the seismic fault would be much higher and narrower, and difficult to reconcile with observations.
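For reference, a standard textbook statement of the Coulomb failure criterion invoked above is (symbols in their common usage; the specific parameter values adopted in the thesis are not reproduced here):

```latex
\tau \;\ge\; c + \mu \left( \sigma_n - p \right)
```

where $\tau$ is the shear stress on the candidate fracture plane, $\sigma_n$ the normal stress, $p$ the pore-fluid pressure, $\mu$ the friction coefficient and $c$ the cohesion. Secondary shear fractures open where the nearly uniform shear-stress stripe above the unwelded surface drives the left-hand side past this threshold.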