874 results for ENVELOPE FUNCTIONS
Abstract:
A regional envelope curve (REC) of flood flows summarises the current bound on our experience of extreme floods in a region. RECs are available for most regions of the world. Recent scientific papers introduced a probabilistic interpretation of these curves and formulated an empirical estimator of the recurrence interval T associated with a REC, which, in principle, enables us to use RECs for design purposes in ungauged basins. The aim of this work is twofold. First, it extends the REC concept to extreme rainstorm events by introducing Depth-Duration Envelope Curves (DDECs), defined as the regional upper bound on all rainfall depth records observed to date for various rainfall durations. Second, it adapts the probabilistic interpretation proposed for RECs to DDECs and assesses the suitability of these curves for estimating the T-year rainfall event associated with a given duration, for large values of T. Probabilistic DDECs are complementary to regional frequency analysis of rainstorms, and their use in combination with a suitable rainfall-runoff model can provide useful indications of the magnitude of extreme floods in gauged and ungauged basins. The study focuses on two national datasets: the peak-over-threshold (POT) series of rainfall depths with durations of 30 min and 1, 3, 9 and 24 h obtained for 700 Austrian raingauges, and the annual maximum series (AMS) of rainfall depths with durations spanning from 5 min to 24 h collected at 220 raingauges located in northern-central Italy. The estimation of the recurrence interval of a DDEC requires the quantification of the equivalent number of independent data, which, in turn, is a function of the cross-correlation among sequences. While the quantification and modelling of intersite dependence is a straightforward task for AMS series, it may be cumbersome for POT series. This paper proposes a possible approach to this problem.
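As a minimal illustration of the DDEC definition above (the regional upper bound on record rainfall depths per duration), the empirical curve can be read off the records as the maximum depth at each duration. All station values below are invented, not taken from the Austrian or Italian datasets:

```python
# Sketch: empirical Depth-Duration Envelope Curve (DDEC) from station records.
# Data are illustrative, not from the study's datasets.

def envelope_curve(records):
    """records: {duration_hours: [record rainfall depth (mm), one per station]}.
    The DDEC is the regional upper bound: the largest recorded depth
    at any station for each duration."""
    return {d: max(depths) for d, depths in sorted(records.items())}

records = {
    0.5: [42.0, 55.5, 61.2],    # 30-minute records at three stations
    1.0: [60.1, 72.4, 68.9],
    24.0: [180.3, 210.7, 195.2],
}
ddec = envelope_curve(records)
# ddec[0.5] == 61.2 and ddec[24.0] == 210.7
```

Estimating the recurrence interval of such a curve then hinges on the equivalent number of independent records, which the abstract notes is a function of intersite cross-correlation.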
Abstract:
The thesis deals with W. Spohn's theory of ranking functions, an epistemological doctrine whose aim is to give rigorous logical form to the concepts of causality, law of nature and scientific explanation, starting from the notion of belief. The theory still lacks an organic, unified exposition, above all one formulated in an immediately accessible language. My work, which presents itself as an introduction to it, also compares it with the theories that most influenced it or against which it positions itself as a rival. The FIRST CHAPTER focuses on the theory of P. Gärdenfors, Spohn's most direct predecessor and inspirer. This allows the reader to become familiar with the basic notions of epistemic logic. In the Swedish philosopher's theory, knowledge is conceived as a process of acquiring and expelling beliefs, identified with propositions, from a set. The three major epistemic phenomena are expansion, revision and contraction. In the first case a previously unknown proposition is stored; in the second, a proposition is expelled because its contradictory has been acquired; in the third, a proposition is deleted for the sake of hypothesis and the consequences of that deletion are investigated. The linguistic counterpart of this last phenomenon is the formulation of a counterfactual conditional. The epistemologist, as Gärdenfors conceives the task, is fundamentally a logician who must specify functions: that is, the rules that every transition from one epistemic set to the next by expansion, revision or contraction must respect. The SECOND CHAPTER then turns to Spohn's theory, seeking to present it exhaustively but also very simply.
In Spohn, too, the fundamental concept is evidently that of a function: in this case, however, it is the rule of subjective judgement that associates with each belief, identified with a proposition, a degree (a rank), expressed by a positive natural number or zero. A rank is a degree of disbelief. Why disbelief (which carries considerable conceptual overhead)? Because the laws of belief so conceived exhibit what Spohn calls a "pervasive analogy" with the laws of probability (Spohn even calls it a "pre-established harmony", and it is a field he is still working on). Essential is the concept of conditionalization (analogous to that of conditional probability): a belief is assigned a rank on the basis of (at least) one other belief. Thanks to this concept Spohn can formalize a phenomenon that escapes Gärdenfors, namely the presence of interdoxastic correlations among a subject's beliefs. In the predecessor's epistemic logic, in fact, everything reduces to whether or not a proposition is included in the set; neither degrees of belief nor the idea that one belief is held on the basis of another is considered. The THIRD CHAPTER moves on to Spohn's theory of causality. This notion, too, is approached from an epistemic perspective. According to Spohn, it makes no sense to ask what the "real" features of causality "in the world" are; one must instead study what happens when one believes that a causal correlation holds between two facts or events. This belief, too, is subjected to rigorous logical formalization (a diversified one, since Spohn recognizes various types of cause). A cause "raises the epistemic status" of its effect: that is, the effect is believed with a higher rank (or a lower one, if one focuses on disbelief) when conditionalized on the cause.
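The rank calculus summarised here can be sketched in a few lines: a ranking function assigns each world a degree of disbelief, and the conditional rank kappa(A|B) = kappa(A and B) - kappa(B) mirrors conditional probability. The encoding of worlds and propositions below is illustrative, not Spohn's own notation:

```python
# Sketch of a Spohn-style ranking function: each possible world gets a
# rank (degree of disbelief; 0 = not disbelieved). Encoding is illustrative.

def rank(kappa, prop):
    """Rank of a proposition = minimum rank over the worlds where it holds.
    kappa: {world: nonnegative int}; prop: non-empty set of worlds."""
    return min(kappa[w] for w in prop)

def conditional_rank(kappa, a, b):
    """kappa(A | B) = kappa(A and B) - kappa(B): the ranking-theoretic
    analogue of conditional probability."""
    return rank(kappa, a & b) - rank(kappa, b)

kappa = {"w1": 0, "w2": 1, "w3": 2}
effect = {"w1", "w2"}
cause = {"w1", "w3"}
# Conditionalized on the cause, the effect's disbelief rank is no higher:
assert conditional_rank(kappa, effect, cause) <= rank(kappa, effect)
```

The numerical ranks are what later allow Spohn to assign each causal factor a quantitatively expressed role, a point the fifth chapter uses against Lewis.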
In the same chapter I present Gärdenfors' theory of causality, which is, however, less articulated and marred by some errors. The FOURTH CHAPTER is entirely devoted to David Lewis and his counterfactual theory of causality, the chief rival of both Spohn and Gärdenfors. According to Lewis, the best definition of a cause can be given in counterfactual terms: a cause is an event such that, had it not occurred, the effect would not have occurred either. Naturally this obliges him to specify a theory of the truth conditions for this class of statements which, going against the facts by definition, cannot be compared with reality. Lewis therefore resorts to the doctrine of possible worlds and their comparative similarity, concluding that a counterfactual is true if the possible world in which its antecedent and consequent are both true is more similar to the actual world than the world in which its antecedent is true and its consequent false. The FIFTH CHAPTER compares Lewis' theory with those of Spohn and Gärdenfors. The latter reduces counterfactuals to a linguistic phenomenon signalling the epistemic process of contraction, discussed in the first chapter, thereby completely rejecting the doctrine of possible worlds. Spohn does not address counterfactuals directly (in his view they are overloaded with linguistic subtleties he does not wish to deal with; he has only a sketch of a theory of conditionals), but he shows that his theory is superior to Lewis' because it can account, with extreme precision, for problematic cases of causality that elude the counterfactual formulation. These are the cases in which several causal factors are in play, reinforcing one another or "competing" for the same effect (cases known in the literature as preemption, trumping, etc.).
Spohn can account for these cases precisely because he has numerical ranks at his disposal, which allow an analysis in which each causal factor is assigned a precise, quantitatively expressed role, whereas the counterfactual doctrine is incapable of such distinctions (a counterfactual is simply true or false, without gradations). The SIXTH CHAPTER focuses on Hempel's and Salmon's models of scientific explanation, and on the notion of causality developed by the latter, highlighting above all the (problematic) role of laws of nature and of counterfactual statements (the famous criticisms of Goodman and Chisholm are also considered here). It was reflection on these models that gave rise to Gärdenfors' theory, and both the Swedish philosopher's doctrine and Spohn's can be seen as aiming to account for scientific explanation in confrontation with these older models. The SEVENTH CHAPTER focuses on the analysis that epistemic logic provides of laws of nature, which in the previous chapter obviously emerged as a fundamental element of scientific explanation. According to Spohn, laws are first of all affirmative general propositions that are believed in a special way. First, they are believed persistently, that is, they are never doubted (so much so that when a counter-instance is encountered, one searches for a violation of normality that justifies it). Second, they guide and ground belief in other, specific beliefs, which are conditionalized on them (the old ideas of Wittgenstein and Ramsey, and the concept of a law as an inference ticket, are taken up with new logical rigour). Third, they are generalizations obtained inductively: they are objectivations of inductive schemata.
This chapter also dwells on the theory of law offered by Gärdenfors (analogous but embryonic) and on Spohn's analysis of the notion of a ceteris paribus clause. The EIGHTH CHAPTER completes the analysis begun in the sixth, finally considering the epistemic model of scientific explanation. It begins with Gärdenfors' model, which is shown to be marred by some errors, or at any rate insufficiently clearly characterized (above all because, strangely, it makes no use of the concept of law). Spohn's model follows; according to Spohn, scientific explanations are characterized by the fact that they provide (or aim to provide) stable reasons, that is, they trace given phenomena back to their causes, and those causes are believed persistently. With a very detailed and surprisingly acute logical proof, Spohn argues that such reasons must, in the long run, be encountered. His is therefore not only a theory of scientific explanation elaborating an epistemic model of what happens when a phenomenon is explained, but also a theory of the development of science in general, one which encourages the search for causes as necessarily crowned with success. The CONCLUDING REMARKS take stock of the theories presented and of their comparison. The superiority of Spohn's theory is acknowledged, and it is also shown to take up in full the constructive legacy of Hume, to whom its rivals constantly appeal, but only in fragmentary fashion. The elements of Hempel's and Salmon's theories that anticipated the epistemic approach are then analysed. Spohn's theory is not, however, free of some still problematic points.
First, the role of truth: at first Spohn seems to renounce, as his predecessor explicitly does, any treatment of truth, only to invoke it when the serious problem of the objectivation of ranking functions arises (the problem arises because ranking functions are initially said to be subjective rules of judgement and are later partly identified with laws of nature). Then there is the doctrine of degrees of belief, which Spohn says present themselves "together with propositions" and which constitutes a needless departure from psychological realism (a customary criticism of the theory): it would suffice to observe that degrees of belief are obtained either by automatic conditionalization on the type of source from which a proposition comes, or by imagined comparison with other sources (greater or lesser belief is in fact a relational concept: one believes more or less "on the basis of..." or "with respect to..."). The treatment of laws of nature is also problematic: Spohn holds that they are ranking functions; in my view, they instead contribute to rules of judgement that prescribe employing the laws themselves to evaluate propositions or expectations. A law of nature is, so to speak, a cog in an assessment of certainty, but it is not wholly identical with a rule of judgement. Moreover, the three criteria Spohn identifies for distinguishing laws are not satisfied by all and only the laws themselves: inductive generalization can also give rise to prejudices, and not all laws have individually been seen in instances repeated often enough to justify them inductively. Finally, a real episode in the history of science, such as F. Wöhler's discovery of the synthesis of urea (in 1828, by obtaining carbamide, an organic substance, from two inorganic substances, he showed the falsity of the supposed law of nature according to which "organic substances cannot be derived from inorganic substances"), indicates that laws of nature are not always believed persistently, so that to understand the moment of discovery it is still necessary to appeal to a Popperian-style theory, to which Spohn instead presents his own in complete antithesis.
Abstract:
In this work, stiffness and damping functions of pile foundations with inclined end-bearing piles have been computed for square 2x2 and 3x3 pile groups embedded in a soft stratum overlying a rigid bedrock. The paper also investigates the influence that the assumptions of a perfectly rigid bedrock and of fixed boundary conditions at the pile tips have on the impedance functions.
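The stiffness and damping functions mentioned are conventionally the two components of the frequency-dependent complex impedance of the foundation; a standard formulation (notation assumed here, not taken from the paper) is:

```latex
% Complex dynamic impedance of a pile group (standard form, notation illustrative):
% k(\omega) is the dynamic stiffness, c(\omega) the damping coefficient.
\[
  K(\omega) \;=\; k(\omega) \,+\, \mathrm{i}\,\omega\, c(\omega)
\]
```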
Abstract:
Akt (also called PKB) is a 63 kDa serine/threonine kinase involved in the promotion of cell survival, proliferation and metabolic responses downstream of the phosphoinositide 3-kinase (PI 3-kinase) signaling pathway. In resting cells, Akt is a predominantly cytosolic enzyme; however, generation of PI 3-kinase lipid products recruits Akt to the plasma membrane, resulting in a conformational change that confers full enzymatic activity through phosphorylation of the membrane-bound protein at two residues, Thr308 and Ser473. Activated Akt redistributes to the cytoplasm and nucleus, where phosphorylation of specific substrates occurs. Both the presence and the activity of Akt in the nucleus have been described. An interesting mechanism that mediates nuclear translocation of Akt has been described in human mature T-cell leukemia: the product of the TCL1 gene, Tcl1, interacts with the PH domain of phosphorylated Akt, thus driving Akt to the nucleus. In this context, Tcl1 may act as a direct transporter of Akt or may contribute to the formation of a complex that promotes the transport of active Akt to the nucleus, where it can phosphorylate nuclear substrates. A well-described nuclear substrate is Foxo. IGF-1 triggers phosphorylation of Foxo by Akt inside the nucleus, where phospho-Foxo associates with 14-3-3 proteins that, in turn, promote its export to the cytoplasm, where it is sequestered. Remarkably, Foxo phosphorylation by Akt has been shown to be a crucial event in Akt-dependent myogenesis. However, most Akt nuclear substrates, as well as nuclear Akt functions, have so far remained elusive. This lack of information prompted us to undertake a search for substrates of Akt in the nucleus by the combined use of 2D separation/mass spectrometry and an anti-Akt-phosphosubstrate antibody. This study presents evidence of A-type lamins as novel nuclear substrates of Akt.
Lamins are type V intermediate filament proteins found in the nucleus of higher eukaryotes where, together with lamin-binding proteins, they form the lamina at the nuclear envelope, providing mechanical stability for the nuclear membrane. By coimmunoprecipitation, it is demonstrated here that endogenous lamin A and Akt interact, and that A-type lamins are phosphorylated by Akt both in vitro and in vivo. Moreover, by phosphoamino acid analysis and mutagenesis, it is further demonstrated that Akt phosphorylates lamin A at Ser404 and, more importantly, that while lamin A/C phosphorylation is stable throughout the cell cycle, phosphorylation of the precursor prelamin A becomes detectable as cells enter the G2 phase, peaking at G2/M. This study also shows that lamin phosphorylation by Akt creates a binding site for 14-3-3 adaptors which, in turn, promote prelamin A degradation. While this mechanism is in agreement with a general role of Akt in the regulation of a subset of its substrates, in contrast to what has been described previously, degradation is not mediated by a ubiquitination/proteasomal mechanism but by a lysosomal pathway, as indicated by the reverting action of the lysosomal inhibitor chloroquine. Phosphorylation is a key event in the mitotic breakdown of the nuclear lamina; however, the kinases and the precise sites of phosphorylation are scarcely known. Therefore, these results represent an important breakthrough in this very significant but understudied area. The phosphorylation of the precursor protein prelamin A and its subsequent degradation at G2/M, when both the nuclear envelope and the nuclear lamina disassemble, can be viewed as part of a mechanism to dispose of a precursor that is not needed in this precise context.
The recently reported finding that patients affected by Emery-Dreifuss muscular dystrophy carry a mutation at Arg401, within the Akt phosphorylation motif, opens new perspectives that warrant further investigation in this very important field.
Abstract:
Little is known about host cell factors necessary for hepatitis B virus assembly and infectivity.
Central to virogenesis is the large L envelope protein, which mediates hepatocyte receptor binding, envelopment of viral capsids, regulation of supercoiled DNA amplification and transcriptional transactivation. To assess its multiple functions and the host-protein assistance involved, we initiated a yeast two-hybrid screen using the L-specific preS1 domain as bait to screen a human liver cDNA library for L-interacting proteins. One of the most prominent cDNAs interacting with amino acids 44-108 of the L protein encodes gamma2-adaptin, a novel clathrin adaptor-related protein responsible for protein sorting and trafficking. Among the clones interacting with the N-terminal construct of the L protein (amino acids 1-70), a frequently isolated cDNA corresponds to the gene for the inter-alpha-trypsin inhibitor family heavy chain H4, which is likely to be involved in the acute inflammatory phase response and in the stabilization of extracellular matrices. Some other interacting clones were found to carry the cDNA for the serine protease C1r, a subunit of the C1 complex, which initiates the classical complement cascade. The specificity of the interaction between the positive clones and the preS1 domain was further confirmed in independent biochemical experiments. Taken together, the results suggest a role for H4, C1r and gamma2-adaptin as host-cell factors in the L-mediated processes of viral biogenesis and/or pathogenesis.
Abstract:
DNA topology is an important modifier of DNA functions. Torsional stress is generated when right-handed DNA is either over- or underwound, producing structural deformations that drive, or are driven by, processes such as replication, transcription, recombination and repair. DNA topoisomerases are molecular machines that regulate the topological state of DNA in the cell. These enzymes accomplish this task either by passing one strand of the DNA through a break in the opposing strand or by passing a region of the duplex from the same or a different molecule through a double-stranded cut generated in the DNA. Because of their ability to cut one or two strands of DNA, they are also targets of some of the most successful anticancer drugs used in standard combination therapies of human cancers. An effective anticancer drug is camptothecin (CPT), which specifically targets DNA topoisomerase 1 (TOP 1). The research project of the present thesis focused on the role of human TOP 1 during transcription and on the transcriptional consequences associated with TOP 1 inhibition by CPT in human cell lines. Previous findings demonstrated that TOP 1 inhibition by CPT perturbs RNA polymerase II (RNAP II) density at promoters and along transcribed genes, suggesting an involvement of TOP 1 in RNAP II promoter-proximal pausing. Within the transcription cycle, promoter pausing is a fundamental step whose importance as a means of coupling elongation to RNA maturation has been well established. By measuring nascent RNA transcripts bound to chromatin, we demonstrated that TOP 1 inhibition by CPT can enhance RNAP II escape from the promoter-proximal pausing sites of the human Hypoxia Inducible Factor 1 (HIF-1) and c-MYC genes in a dose-dependent manner. This effect depends on Cdk7/Cdk9 activities, since it can be reversed by the kinase inhibitor DRB.
Since CPT affects RNAP II by promoting hyperphosphorylation of its Rpb1 subunit, the findings suggest that TOP 1 inhibition by CPT may increase the activity of Cdks, which in turn phosphorylate the Rpb1 subunit of RNAP II, enhancing its escape from pausing. Interestingly, the transcriptional consequences of CPT-induced topological stress are wider than expected. CPT increased co-transcriptional splicing of exons 1 and 2 and markedly affected alternative splicing at exon 11. Surprisingly, despite its well-established transcription-inhibitory activity, CPT can trigger the production of a novel long RNA (5'aHIF-1) antisense to the human HIF-1 mRNA and of a known antisense RNA at the 3' end of the gene, while decreasing mRNA levels. These effects require TOP 1 and are independent of CPT-induced DNA damage. Thus, when the supercoiling imbalance promoted by CPT occurs at the promoter, it may trigger deregulation of RNAP II pausing, increased chromatin accessibility and activation/derepression of antisense transcripts in a Cdk-dependent manner. A changed balance of antisense transcripts and mRNAs may regulate the activity of HIF-1 and contribute to the control of tumor progression. After focusing our TOP 1 investigations at the single-gene level, we extended the study to the whole genome by developing the "Topo-Seq" approach, which generates a genome-wide map of the sites of TOP 1 activity in human cells. The preliminary data revealed that TOP 1 preferentially localizes to intragenic regions, and in particular to the 5' and 3' ends of genes. Surprisingly, upon TOP 1 downregulation, which impairs protein expression by 80%, TOP 1 molecules are mostly localized around the 3' ends of genes, suggesting that its activity is essential in these regions and can be compensated for at the 5' ends.
The developed procedure is a pioneering tool for the detection of TOP 1 cleavage sites across the genome and can open the way to further investigations of the enzyme's roles in different nuclear processes.
Abstract:
The term Ambient Intelligence (AmI) refers to a vision of the future of the information society in which smart electronic environments are sensitive and responsive to the presence of people and their activities (context awareness). In an ambient intelligence world, devices work in concert to support people in carrying out their everyday activities, tasks and rituals in an easy, natural way, using information and intelligence hidden in the network connecting these devices. This promotes the creation of pervasive environments that improve the quality of life of the occupants and enhance the human experience. AmI stems from the convergence of three key technologies: ubiquitous computing, ubiquitous communication and natural interfaces. Ambient intelligent systems are heterogeneous and require excellent cooperation between several hardware/software technologies and disciplines, including signal processing, networking and protocols, embedded systems, information management and distributed algorithms. Since a large number of fixed and mobile sensors is embedded in the environment, the Wireless Sensor Network (WSN) is one of the most relevant enabling technologies for AmI. WSNs are complex systems made up of a number of sensor nodes that can be deployed in a target area to sense physical phenomena and communicate with other nodes and base stations. These simple devices typically embed a low-power computational unit (microcontrollers, FPGAs, etc.), a wireless communication unit, one or more sensors and some form of energy supply (either batteries or energy scavenging modules). WSNs promise to revolutionize the interactions between the physical world and human beings. Low cost, low computational power, low energy consumption and small size are characteristics that must be taken into consideration when designing and dealing with WSNs. To fully exploit the potential of distributed sensing approaches, a set of challenges must be addressed.
Sensor nodes are inherently resource-constrained systems with very low power consumption and small size requirements, which enables them to reduce the interference with the physical phenomena being sensed and allows easy, low-cost deployment. They have limited processing speed, storage capacity and communication bandwidth, which must be used efficiently to increase the degree of local "understanding" of the observed phenomena. A particular case of sensor nodes is video sensors. This topic holds strong interest for a wide range of contexts such as military, security, robotics and, most recently, consumer applications. Vision sensors are extremely effective for medium- to long-range sensing because vision provides rich information to human operators. However, image sensors generate a huge amount of data, which must be heavily processed before transmission due to the scarce bandwidth of radio interfaces. In particular, in video surveillance it has been shown that source-side compression is mandatory due to limited bandwidth and delay constraints. Moreover, there is ample opportunity for performing higher-level processing functions, such as object recognition, that have the potential to drastically reduce the required bandwidth (e.g. by transmitting compressed images only when something "interesting" is detected). The energy cost of image processing must, however, be carefully minimized. Imaging plays, and could increasingly play, an important role in sensing devices for ambient intelligence. Computer vision can, for instance, be used for recognizing persons and objects and for recognizing behaviour such as illness or rioting. Having a wireless camera as a camera mote opens the way for distributed scene analysis: more eyes see more than one, and a camera system that can observe a scene from multiple directions would be able to overcome occlusion problems and describe objects in their true 3D appearance. In real time, these approaches are a recently opened field of research.
In this thesis we pay attention to the realities of the hardware/software technologies and the design needed to realize systems for distributed monitoring, attempting to propose solutions to open issues and to fill the gap between AmI scenarios and hardware reality. The physical implementation of an individual wireless node is constrained by three important metrics, outlined below. Although the design of a sensor network and its sensor nodes is strictly application dependent, a number of constraints should almost always be considered. Among them: • Small form factor, to reduce node intrusiveness. • Low power consumption, to reduce battery size and extend node lifetime. • Low cost, for widespread diffusion. These limitations typically result in the adoption of low-power, low-cost devices such as low-power microcontrollers with a few kilobytes of RAM and tens of kilobytes of program memory, on which only simple data processing algorithms can be implemented. However, the overall computational power of a WSN can be very large, since the network presents a high degree of parallelism that can be exploited through ad-hoc techniques. Furthermore, through the fusion of information from the dense mesh of sensors, even complex phenomena can be monitored. In this dissertation we present our results in building several AmI applications suitable for a WSN implementation. The work can be divided into two main areas: low-power video sensor nodes and video processing algorithms, and multimodal surveillance. Low-power video sensor nodes and video processing algorithms: In comparison to scalar sensors, such as temperature, pressure, humidity, velocity and acceleration sensors, vision sensors generate much higher-bandwidth data due to the two-dimensional nature of their pixel arrays. We have tackled all the constraints listed above and proposed solutions to overcome the current WSN limits for video sensor nodes.
We have designed and developed wireless video sensor nodes focusing on small size and on flexibility of reuse in different applications. The video nodes target a different design point: portability (on-board power supply, wireless communication) and a scanty power budget (500 mW), while still providing a prominent level of intelligence, namely sophisticated classification algorithms and a high level of reconfigurability. We developed two different video sensor nodes: the device architecture of the first is based on a low-cost, low-power FPGA + microcontroller system-on-chip; the second is based on an ARM9 processor. Both systems, designed within the above-mentioned power envelope, can operate continuously with a Li-polymer battery pack and a solar panel. Novel low-power, low-cost video sensor nodes which, in contrast to sensors that just watch the world, are capable of comprehending the perceived information in order to interpret it locally, are presented. Featuring such intelligence, these nodes are able to cope with tasks such as the recognition of unattended bags in airports or of persons carrying potentially dangerous objects, which normally require a human operator. Vision algorithms for object detection and acquisition, such as human detection with Support Vector Machine (SVM) classification and abandoned/removed object detection, are implemented, described and illustrated on real-world data. Multimodal surveillance: In several setups the use of wired video cameras may not be possible. For this reason, building an energy-efficient wireless vision network for monitoring and surveillance is one of the major efforts in the sensor network and distributed surveillance communities.
Pyroelectric Infra-Red (PIR) sensors have been used to extend the lifetime of a solar-powered video sensor node by providing an energy-level-dependent trigger to the video camera and the wireless module. This approach has been shown to extend node lifetime and can possibly result in continuous operation of the node. Being low-cost, passive (thus low-power) and of limited form factor, PIR sensors are well suited for WSN applications. Moreover, aggressive power management policies are essential for achieving the long-term operation of standalone distributed cameras. We have used an adaptive controller based on Model Predictive Control (MPC) to improve system performance, outperforming naive power management policies.
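The MPC idea can be illustrated with a minimal sketch: each step, the controller enumerates candidate duty-cycle sequences over a short horizon, simulates the battery model under a (hypothetical) harvest forecast, and applies the first action of the best sequence. All numbers, the cost function and the battery model here are invented for illustration and are not the controller described in the thesis:

```python
import itertools

HORIZON = 4
DUTIES = (0.1, 0.5, 1.0)    # candidate camera duty cycles (illustrative)
DRAIN = 500.0               # mW consumed at full duty
TARGET = 0.7                # desired battery state of charge (SoC)
CAPACITY = 10_000.0         # battery capacity in mWh

def plan(soc, harvest_forecast):
    """Return the first duty cycle of the best sequence over the horizon."""
    best_cost, best_first = float("inf"), DUTIES[0]
    for seq in itertools.product(DUTIES, repeat=HORIZON):
        s, cost = soc, 0.0
        for duty, harvest in zip(seq, harvest_forecast):
            s += (harvest - duty * DRAIN) / CAPACITY  # 1 h time step
            s = min(max(s, 0.0), 1.0)                 # clamp SoC to [0, 1]
            cost += (s - TARGET) ** 2 - 0.1 * duty    # track SoC, reward sensing
        if cost < best_cost:
            best_cost, best_first = cost, seq[0]
    return best_first

# Plenty of forecast sun: the controller can afford full duty.
sunny = plan(0.7, [600.0] * HORIZON)
# No harvest and a low battery: the controller throttles the camera.
dark = plan(0.3, [0.0] * HORIZON)
print(sunny, dark)
```

The receding-horizon structure is what lets such a controller anticipate harvest variations instead of reacting to them, which is the advantage over naive threshold-based policies.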
Resumo:
The ability to integrate sensory inputs deriving from different sensory modalities, but related to the same external event, into a unified percept is called multisensory integration, and it might represent an efficient mechanism of sensory compensation when a sensory modality is damaged by a cortical lesion. This hypothesis is discussed in the present dissertation. Experiment 1 explored the role of the superior colliculus (SC) in multisensory integration, testing patients with collicular lesions, patients with subcortical lesions not involving the SC, and healthy control subjects in a multisensory task. The results revealed that patients with collicular lesions, paralleling the evidence of animal studies, demonstrated a loss of multisensory enhancement, in contrast with control subjects, providing the first lesional evidence in humans of the essential role of the SC in mediating audio-visual integration. Experiment 2 investigated the role of the cortex in mediating multisensory integrative effects, inducing virtual lesions by inhibitory theta-burst stimulation of the temporo-parietal cortex, the occipital cortex and the posterior parietal cortex, and demonstrating that only the temporo-parietal cortex was causally involved in modulating the integration of audio-visual stimuli at the same spatial location. Given the involvement of the retino-colliculo-extrastriate pathway in mediating audio-visual integration, the functional sparing of this circuit in hemianopic patients is extremely relevant in the perspective of a multisensory-based approach to the recovery of unisensory defects. Experiment 3 demonstrated the spared functional activity of this circuit in a group of hemianopic patients, revealing the presence of implicit recognition of the fearful content of unseen visual stimuli (i.e. affective blindsight), an ability mediated by the retino-colliculo-extrastriate pathway and its connections with the amygdala.
Finally, Experiment 4 provided evidence that systematic audio-visual stimulation is effective in inducing long-lasting clinical improvements in patients with visual field defects, and revealed that the activity of the spared retino-colliculo-extrastriate pathway is responsible for the observed clinical amelioration, as suggested by the greater improvement, found in tasks highly demanding in terms of spatial orienting, in patients with cortical lesions limited to the occipital cortex compared to patients with lesions extending to other cortical areas. Overall, the present results indicate that multisensory integration is mediated by the retino-colliculo-extrastriate pathway and that systematic audio-visual stimulation, by activating this spared neural circuit, is able to affect orientation towards the blind field in hemianopic patients and, therefore, might constitute an effective and innovative approach for the rehabilitation of unisensory visual impairments.
Resumo:
We investigate the statics and dynamics of a glassy, non-entangled, short bead-spring polymer melt with molecular dynamics simulations. Temperature ranges from slightly above the mode-coupling critical temperature to the liquid regime where features of a glassy liquid are absent. Our aim is to work out the polymer-specific effects on the relaxation and particle correlation. We find the intra-chain static structure unaffected by temperature; it depends only on the distance of monomers along the backbone. In contrast, the distinct inter-chain structure shows pronounced site-dependence effects at the length scales of the chain and the nearest-neighbor distance. There, we also find the strongest temperature dependence, which drives the glass transition. Both the site-averaged coupling of the monomer and center of mass (CM) and the CM-CM coupling are weak and presumably not responsible for a peak in the coherent relaxation time at the chain's length scale. Chains rather emerge as soft, easily interpenetrating objects. Three-particle correlations are well reproduced by the convolution approximation, with the exception of model-dependent deviations. In the spatially heterogeneous dynamics of our system we identify highly mobile monomers which tend to follow each other in one-dimensional paths forming "strings". These strings have an exponential length distribution and are generally short compared to the chain length. Thus, a relaxation mechanism in which neighboring mobile monomers move along the backbone of the chain seems unlikely. However, the correlation of bonded neighbors is enhanced. When liquids are confined between two surfaces in relative sliding motion, kinetic friction is observed. We study a generic model setup by molecular dynamics simulations for a wide range of sliding speeds, temperatures, loads, and lubricant coverings for simple and molecular fluids. Instabilities in the particle trajectories are identified as the origin of kinetic friction.
They lead to high particle velocities of fluid atoms which are gradually dissipated, resulting in a friction force. In commensurate systems fluid atoms follow continuous trajectories for sub-monolayer coverings and, consequently, friction vanishes at low sliding speeds. For incommensurate systems the velocity probability distribution exhibits approximately exponential tails. We connect this velocity distribution to the kinetic friction force, which reaches a constant value at low sliding speeds. This approach agrees well with the friction obtained directly from simulations and explains Amontons' law on the microscopic level. Molecular bonds in commensurate systems lead to incommensurate behavior, but do not change the qualitative behavior of incommensurate systems. However, crossed chains form stable load-bearing asperities which strongly increase friction.
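As an illustration of the tail analysis (not the thesis' code): if the fluid-atom velocity distribution has an approximately exponential tail, P(v) ~ exp(-v/v0), the decay constant v0 can be read off a log-linear fit to a histogram of velocity samples. Here the samples are synthetic, drawn from an exponential with a known v0:

```python
import numpy as np

rng = np.random.default_rng(1)
v0_true = 0.8
# Synthetic velocity magnitudes with an exponential distribution,
# standing in for fluid-atom velocities measured in the simulation.
v = rng.exponential(scale=v0_true, size=200_000)

# Histogram, then fit a straight line to log(density) vs. velocity:
# for P(v) ~ exp(-v/v0) the slope of log P is -1/v0.
counts, edges = np.histogram(v, bins=60, range=(0.0, 6.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
nonzero = counts > 0                     # avoid log(0) in empty bins
slope, intercept = np.polyfit(centers[nonzero], np.log(counts[nonzero]), 1)
v0_est = -1.0 / slope
print(round(float(v0_est), 2))
```

In the simulations described above the interesting step is the converse one, using the measured tail to predict the speed-independent friction force; this snippet only shows the fitting mechanics.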
Resumo:
The main part of this thesis describes a method of calculating the massless two-loop two-point function which allows expanding the integral up to an arbitrary order in the dimensional regularization parameter epsilon by rewriting it as a double Mellin-Barnes integral. Closing the contour and collecting the residues then transforms this integral into a form that enables us to utilize S. Weinzierl's computer library nestedsums. We were able to show that multiple zeta values and rational numbers are sufficient for expanding the massless two-loop two-point function to all orders in epsilon. We then use the Hopf algebra of Feynman diagrams and its antipode to investigate the appearance of Riemann's zeta function in the counterterms of Feynman diagrams in massless Yukawa theory and massless QED. The class of Feynman diagrams we consider consists of graphs built from primitive one-loop diagrams and the non-planar vertex correction, where the vertex corrections depend on only one external momentum. We showed the absence of powers of pi in the counterterms of the non-planar vertex correction and of diagrams built by shuffling it with the one-loop vertex correction. We also found that some coefficients of zeta functions are invariant under a change of momentum flow through these vertex corrections.
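As background to the epsilon-expansion described above, a standard textbook result (not taken from this thesis) is the massless one-loop two-point function, the simpler analogue of the two-loop integral, which is known in closed form:

\[
\int \frac{\mathrm{d}^D k}{i\pi^{D/2}} \,
\frac{1}{(-k^2)^{\nu_1}\,\bigl(-(k-p)^2\bigr)^{\nu_2}}
= (-p^2)^{D/2-\nu_1-\nu_2}\,
\frac{\Gamma(\nu_1+\nu_2-D/2)\,\Gamma(D/2-\nu_1)\,\Gamma(D/2-\nu_2)}
     {\Gamma(\nu_1)\,\Gamma(\nu_2)\,\Gamma(D-\nu_1-\nu_2)}.
\]

Setting \(D = 4 - 2\varepsilon\) and expanding the Gamma functions via \(\ln\Gamma(1+x) = -\gamma_E x + \sum_{n\ge 2} (-1)^n \zeta(n)\, x^n / n\) yields a Laurent series in epsilon whose coefficients are rational combinations of zeta values; iterating such building blocks at two loops, via the nested sums arising from the Mellin-Barnes residues, is what produces the multiple zeta values mentioned above.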
Resumo:
Introduction and Background: Multiple system atrophy (MSA) is a sporadic, adult-onset, progressive neurodegenerative disease characterized clinically by parkinsonism, cerebellar ataxia, and autonomic failure. We investigated cognitive functions longitudinally in a group of probable MSA patients, matching the data with sleep parameters. Patients and Methods: 10 patients (7m/3f) underwent a detailed interview, a general and neurological examination, laboratory exams, MRI scans, a cardiovascular reflexes study, a battery of neuropsychological tests, and video-polysomnographic recording (VPSG). Patients were re-evaluated (T1) a mean of 16±5 (range: 12-28) months after the initial evaluation (T0). At T1, the neuropsychological assessment and VPSG were repeated. Results: The mean patient age was 57.8±6.4 years (range: 47-64), with a mean age at disease onset of 53.2±7.1 years (range: 43-61) and a symptom duration at T0 of 60±48 months (range: 12-144). At T0, 7 patients showed no cognitive deficits while 3 patients showed isolated cognitive deficits. At T1, 1 patient worsened, developing multiple cognitive deficits from a previously normal condition. At both T0 and T1, sleep efficiency was reduced, REM latency was increased, and NREM sleep stages 1-2 were slightly increased. Comparisons between T1 and T0 showed a significant worsening in two tests of attention and no significant differences in VPSG parameters. No correlation was found between neuropsychological results and VPSG findings or RBD duration. Discussion and Conclusions: The majority of our patients did not show any cognitive deficits at T0 or T1, while isolated cognitive deficits were present in the remaining patients. Attention is the cognitive function which worsened significantly. Our data confirm previous findings concerning the prevalence, type and evolution of cognitive deficits in MSA. Regarding the development of dementia, our data did not support a clear-cut diagnosis of dementia.
We confirm a mild alteration of sleep structure. RBD duration does not correlate with neuropsychological findings.
Resumo:
ABSTRACT Human cytomegalovirus (HCMV) employs many different mechanisms to escape and subvert host immune surveillance. Among these mechanisms, the role of human IgG Fc receptors (FcγRs) in HCMV pathogenesis is still unclear. In mammals, FcγRs are expressed on the surface of all haematopoietic cells and have a multifaceted role in regulating the activity of antibodies to generate a well-balanced immune response. Viral proteins with Fcγ binding ability are widespread among herpesviruses. They interfere with the host receptors' functions in order to counteract immune recognition. So far, two HCMV Fcγ binding proteins have been described: UL119 and RL11. This work was aimed at the identification and characterization of HCMV Fcγ binding proteins. The study is divided into two parts: first, the characterization of UL119 and RL11; second, the identification and characterization of novel HCMV Fcγ binding proteins. Regarding the first part, we demonstrated that both UL119 and RL11 internalize Fcγ fragments from the surface of transfected cells through a clathrin-dependent pathway. In infected cells both proteins were found in the viral assembly complex and on the virion surface as envelope-associated glycoproteins. Moreover, internalized Fcγ in infected cells does not undergo lysosomal degradation but rather traffics in early endosomes up to the viral assembly complex. Regarding the second part, we were able to identify two novel Fcγ binding proteins encoded by HCMV: RL12 and RL13. The latter was further characterized as a recombinant protein in terms of cellular localization, Fc binding site and IgG internalization ability. Finally, the binding specificity of both RL12 and RL13 seems to be confined to human IgG1 and IgG2. Taken together, these data show that HCMV encodes up to four Fcγ binding proteins and that they could have a dual role on both virions and infected cells.
Resumo:
With this work I elucidated new and unexpected mechanisms of two strong and highly specific transcription inhibitors: Triptolide and Camptothecin. Triptolide (TPL) is a diterpene epoxide derived from the Chinese plant Tripterygium wilfordii Hook F. TPL inhibits the ATPase activity of XPB, a subunit of the general transcription factor TFIIH. In this thesis I found that the degradation of Rbp1 (the largest subunit of RNA Polymerase II) caused by TPL treatment is preceded by a hyperphosphorylation event at serine 5 of the carboxy-terminal domain (CTD) of Rbp1. This event is concomitant with a block of RNA Polymerase II at the promoters of active genes. The enzyme responsible for the Ser5 hyperphosphorylation event is CDK7. Notably, CDK7 downregulation rescued both the Ser5 hyperphosphorylation and the Rbp1 degradation triggered by TPL. Camptothecin (CPT), derived from the plant Camptotheca acuminata, specifically inhibits topoisomerase 1 (Top1). We first found that CPT induced antisense transcription at divergent CpG island promoters. Interestingly, by immunofluorescence experiments, CPT was found to induce a burst of R loop structures (DNA/RNA hybrids) at nucleoli and mitochondria. We then decided to investigate the role of Top1 in R loop homeostasis through a short interfering RNA (RNAi) approach. Using DNA/RNA immunoprecipitation techniques coupled to NGS, I found that Top1 depletion induces an increase of R loops at a genome-wide level. We found that this increase occurs over the entire gene body. At a subset of loci, R loops appeared particularly affected by Top1 depletion: some of these genes showed the formation of new R loop structures, whereas other loci showed a reduction of R loops. Interestingly, we found that new peaks usually appear over the entire gene body of tandem or divergent genes, while the loss of R loop peaks seems to be specific to the 3' end regions of convergent genes.