921 results for TRANSFORMER AT DEEP SATURATION
Abstract:
Privacy and security on the Internet are topical issues. Through the study of several documents and hands-on experimentation with applications that guarantee anonymity, this thesis analyzes the current situation. Our privacy is compromised, and raising global awareness is essential.
Abstract:
This work began with a theoretical study of the main image classification techniques known in the literature, with particular attention to the most widespread image representation models, such as the Bag of Visual Words model, and to the main Machine Learning tools. Attention then focused on the current state of the art in image classification, namely Deep Learning. To experiment with the advantages of these Image Classification methodologies, Torch7 was used: an open-source numerical computing framework, scriptable in the Lua language, with broad support for state-of-the-art Deep Learning methods. The actual image classification was implemented in Torch7 because this framework, thanks also to the analysis previously carried out by some of my colleagues, proved to be very effective at categorizing objects in images. The images on which the experimental tests were based belong to a dataset created ad hoc for a 3D vision system, with the aim of testing the system for visually impaired and blind individuals; it contains some of the main obstacles that a visually impaired person may encounter in everyday life. In particular, the dataset consists of potential obstacles relative to a hypothetical outdoor usage scenario. Having established that Torch7 would be the tool used for classification, attention turned to the possibility of exploiting Stereo Vision to increase classification accuracy. Indeed, the images in this dataset were acquired with a Stereo Camera with on-FPGA processing, developed by the research group where this work was carried out.
This made it possible to use 3D information, such as the depth level of each object in the image, to segment the objects of interest with an algorithm implemented in C++, excluding the rest of the scene. The last phase of the work was to test Torch7 on the image dataset, previously segmented with the segmentation algorithm just outlined, in order to recognize the type of obstacle detected by the system.
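The depth-based segmentation step described in the abstract above can be sketched in a few lines. This is a minimal illustration in Python/NumPy rather than the thesis's actual C++ implementation; the depth map, threshold values and array shapes are assumptions made for the example:

```python
import numpy as np

def segment_by_depth(image, depth, near, far):
    """Keep only pixels whose depth falls in [near, far); blank the rest.

    image: (H, W, 3) uint8 RGB image
    depth: (H, W) float depth map (e.g. from a stereo camera)
    """
    mask = (depth >= near) & (depth < far)   # foreground (obstacle) mask
    segmented = np.zeros_like(image)         # background set to black
    segmented[mask] = image[mask]            # copy only foreground pixels
    return segmented, mask

# Toy example: a 4x4 "image" with an object at ~1 m depth in a 5 m scene
image = np.full((4, 4, 3), 200, dtype=np.uint8)
depth = np.full((4, 4), 5.0)
depth[1:3, 1:3] = 1.0                        # the "obstacle"
seg, mask = segment_by_depth(image, depth, 0.5, 2.0)
print(mask.sum())                            # 4 pixels belong to the object
```

Segmenting on depth in this way removes background clutter before the classifier sees the image, which is the rationale given in the abstract for combining Stereo Vision with Torch7.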
Abstract:
In recent years, Deep Learning techniques have been shown to perform well on a large variety of problems in both Computer Vision and Natural Language Processing, reaching and often surpassing the state of the art on many tasks. The rise of deep learning is also revolutionizing the entire field of Machine Learning and Pattern Recognition, pushing forward the concepts of automatic feature extraction and unsupervised learning in general. However, despite its strong success in both science and business, deep learning has its own limitations. It is often questioned whether such techniques are merely brute-force statistical approaches and whether they can only work in the context of High Performance Computing with enormous amounts of data. Another important question is whether they are really biologically inspired, as claimed in certain cases, and whether they can scale well in terms of "intelligence". The dissertation focuses on trying to answer these key questions in the context of Computer Vision and, in particular, Object Recognition, a task that has been heavily reshaped by recent advances in the field. Practically speaking, these answers are based on an exhaustive comparison of two very different deep learning techniques on the aforementioned task: the Convolutional Neural Network (CNN) and the Hierarchical Temporal Memory (HTM). They stand for two different approaches and points of view under the broad umbrella of deep learning, and are well suited to understanding and pointing out the strengths and weaknesses of each. The CNN is considered one of the most classic and powerful supervised methods used today in machine learning and pattern recognition, especially in object recognition. CNNs are well received and accepted by the scientific community and are already deployed in large corporations such as Google and Facebook to solve face recognition and image auto-tagging problems.
HTM, on the other hand, is an emerging paradigm and a new, mainly unsupervised method that is more biologically inspired. It tries to gain insights from the computational neuroscience community in order to incorporate concepts such as time, context and attention during the learning process, which are typical of the human brain. In the end, the thesis aims to show that in certain cases, with smaller quantities of data, HTM can outperform CNN.
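To make the CNN side of the comparison concrete, its core operation (convolution followed by a nonlinearity and pooling) can be sketched in plain Python/NumPy. The input, kernel and pooling size below are illustrative assumptions, not the networks actually used in the dissertation:

```python
import numpy as np

def conv2d(x, k):
    """Valid 2D cross-correlation of image x with kernel k."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (x[i:i + kh, j:j + kw] * k).sum()
    return out

def max_pool(x, s=2):
    """Non-overlapping s x s max pooling (crops edges that do not fit)."""
    H, W = x.shape
    return x[:H - H % s, :W - W % s].reshape(H // s, s, W // s, s).max(axis=(1, 3))

x = np.arange(36, dtype=float).reshape(6, 6)   # toy 6x6 "image"
k = np.array([[-1.0, 0.0], [0.0, 1.0]])        # toy 2x2 edge-like kernel
feat = np.maximum(conv2d(x, k), 0.0)           # convolution + ReLU
pooled = max_pool(feat)                        # 5x5 feature map -> 2x2
```

A real CNN stacks many such convolution/pooling layers and learns the kernels by gradient descent; this sketch only shows the forward computation that gives the architecture its name.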
Abstract:
The objective of the work presented in this thesis was the study and simulation of bistatic radar experiments for planetary exploration missions. In particular, the work focused on the use and improvement of a software simulator previously developed by a consortium of companies and research institutes within a European Space Agency (ESA) study funded in 2008 and carried out between 2009 and 2010. The Spanish company GMV coordinated the study, in which research groups from the Sapienza University of Rome and the University of Bologna also took part. The work concentrated on determining the cause of some inconsistencies in the outputs of the part of the simulator, developed in MATLAB, dedicated to estimating the characteristics of Titan's surface, in particular the dielectric constant and the mean surface roughness, through a downlink bistatic radar experiment performed by the Cassini-Huygens probe in orbit around Titan. Bistatic radar experiments for the study of celestial bodies have been part of the history of space exploration since the 1960s, even though the equipment used, and the mission phases during which these experiments were performed, were never specifically designed for the purpose. Hence the need for a simulator to study the various possible types of bistatic radar experiment in different kinds of mission. In a first phase of approaching the simulator, the work focused on studying the documentation accompanying the code, so as to get a general idea of its structure and operation. A phase of detailed study followed, determining the purpose of every line of code used and verifying in the literature the formulas and models used to determine the various parameters.
In a second phase, the work involved direct intervention on the code, with a series of investigations aimed at assessing the consistency and reliability of its results. Each investigation reduced the simplifying assumptions imposed on the model, so as to identify with greater confidence the part of the code responsible for the inaccuracy of the simulator's outputs. The results obtained allowed the correction of some parts of the code and the identification of the main source of error in the outputs, narrowing down the object of study for future targeted investigations.
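The dielectric-constant estimation mentioned above rests on the standard Fresnel model of reflection from a smooth dielectric surface. The sketch below (in Python rather than the simulator's MATLAB, with illustrative numbers) shows the textbook relation; it is not a reproduction of the formulas in the actual code:

```python
import numpy as np

def fresnel_coeffs(eps, theta):
    """Fresnel reflection coefficients of a smooth dielectric surface.

    eps:   relative dielectric constant of the surface
    theta: incidence angle in radians
    Returns (r_h, r_v) for horizontal and vertical polarization.
    """
    c = np.cos(theta)
    s = np.sqrt(eps - np.sin(theta) ** 2)
    r_h = (c - s) / (c + s)
    r_v = (eps * c - s) / (eps * c + s)
    return r_h, r_v

# At normal incidence both polarizations reduce to (sqrt(eps)-1)/(sqrt(eps)+1),
# so a measured echo power constrains eps directly.
rh, rv = fresnel_coeffs(2.0, 0.0)
print(round(abs(rh), 4))   # ~0.1716 for eps = 2
```

Inverting this relation (measured reflectivity to dielectric constant) is the kind of step whose implementation the thesis had to verify against the literature.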
Abstract:
The spectral shape of the X-ray background requires the existence of a large number of moderately obscured AGN, in addition to heavily obscured AGN whose column density exceeds the Compton limit (N_H > 10^24 cm^(-2)). Because of their nature, these objects are difficult to observe, so a multi-band approach is necessary to detect them. In this thesis we studied 29 sources observed in the CDF-S and 10 in the CDF-N at 0.07
Abstract:
A study of the inorganic carbon chemistry of the coastal ocean was conducted in the Gulf of Cádiz (GoC). Here we describe observations obtained during 4 sampling cruises, in March, June, September and November 2015. The primary data set consists of state-of-the-art measurements of the keystone parameters of the marine CO2 system: Total Alkalinity (TA), pH and dissolved inorganic carbon (DIC). From these we calculated the aragonite and calcite saturation states. The distribution of the inorganic carbon system parameters on the north-eastern shelf of the Gulf of Cádiz showed temporal and spatial variability, controlled by river input, mixing, primary production, respiration and remineralization. The calcite and aragonite saturation data reveal supersaturated waters; in all cases, the saturation states of both species increased with distance from the coast and decreased with depth. The carbon system parameters behave differently close to the coast, offshore and in deeper water. In this area, six water masses are clearly identified by their different chemical properties, among them Surface Atlantic Water, North Atlantic Central Water (NACW) and Mediterranean Water (MOW). Moreover, with this work the measurement of calcium in seawater was optimized, allowing better quantification of the CaCO3 saturation state in future work.
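The saturation state referred to above is conventionally Omega = [Ca2+][CO3^2-]/Ksp*, with Omega > 1 meaning supersaturation. A minimal sketch, using rough order-of-magnitude constants and invented concentrations rather than the cruise data:

```python
def saturation_state(ca, co3, ksp):
    """Omega = [Ca2+][CO3^2-] / Ksp*; Omega > 1 means supersaturated."""
    return ca * co3 / ksp

# Illustrative values for warm surface seawater (mol/kg). The Ksp* values
# below are rough figures for calcite and aragonite, not those used here.
ca, co3 = 1.03e-2, 2.0e-4
omega_calcite = saturation_state(ca, co3, 4.3e-7)
omega_aragonite = saturation_state(ca, co3, 6.6e-7)
print(omega_calcite > omega_aragonite > 1.0)   # calcite is the less soluble phase
```

Because Omega is directly proportional to [Ca2+], an improved calcium measurement (as developed in this work) tightens the Omega estimate correspondingly.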
Abstract:
The Alburni Massif is the most important karst area in southern Italy and contains about 250 caves. Most of these caves are located on the plateau, between 1500 m a.s.l. and 700 m a.s.l., and only a few reach the underground streams that feed the springs and the deep aquifer. The main springs are Grotta di Pertosa-Auletta (CP1) and the Auso spring (CP31), both located at 280 m a.s.l., the first on the south-eastern margin and the second on the south-western margin, and the springs in the Castelcivita area, the Castelcivita-Ausino system (CP2) and the Mulino di Castelcivita spring (CP865), located at 60 m a.s.l. Some other secondary springs are present too. We monitored the Pertosa-Auletta spring with a multiparameter logger, which recorded water level, electrical conductivity and temperature from November 2014 to December 2015. The hydrodynamic monitoring was supported by a sampling campaign to obtain chemical water analyses. This work was carried out from August 2014 to December 2015, not only at Pertosa but also at all the other main springs and in some caves. It was possible to clarify the behavior of the Pertosa-Auletta spring, which is fed almost exclusively by full-charge conduits and only marginally affected by seasonal rains. Pertosa-Auletta showed a characteristic Mg/Ca ratio and Mg2+ enrichment, as demonstrated by its saturation index, which always indicated dolomite saturation. All the other springs have chemically characteristic waters. In particular, the stable composition of the waters of the Mulino spring stands out against the variability of the nearby Castelcivita-Ausino spring. For the Auso spring, its variable behavior in terms of discharge and chemistry is confirmed: it is greatly influenced by rainfall and, during drought periods, by full-charge conduits. Rare-element concentrations were also analyzed and allowed further characterization of the different waters.
Based on all these data, an updated hydrogeological map of the Alburni Massif has been drawn, which defines in greater detail the hydrogeological complexes on the basis of their lithologies, and therefore of their chemical characteristics.
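The dolomite saturation index used above is conventionally SI = log10(IAP/Ksp), where for dolomite CaMg(CO3)2 the ion activity product is IAP = {Ca2+}{Mg2+}{CO3^2-}^2; SI >= 0 indicates saturation. A small sketch with a commonly tabulated log Ksp and invented activities (not the spring measurements):

```python
import math

def saturation_index(activities, log_ksp):
    """SI = log10(IAP) - log10(Ksp); SI >= 0 means the water is saturated.

    activities: list of (activity, stoichiometric exponent) pairs.
    """
    log_iap = sum(n * math.log10(a) for a, n in activities)
    return log_iap - log_ksp

# Dolomite: IAP = {Ca2+}{Mg2+}{CO3^2-}^2, log Ksp ~ -17.1 (tabulated value);
# the activities below are illustrative numbers only.
si = saturation_index([(1e-3, 1), (1e-3, 1), (1e-5, 2)], -17.1)
print(round(si, 2))   # positive -> dolomite-saturated water
```

A persistently positive dolomite SI, together with Mg2+ enrichment, is what identifies the dolomitic signature of the Pertosa-Auletta waters in the abstract.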
Abstract:
Three-month anticoagulation is recommended to treat provoked or first distal deep-vein thrombosis (DVT), and indefinite-duration anticoagulation should be considered for patients with unprovoked proximal, unprovoked recurrent, or cancer-associated DVT. In the prospective Outpatient Treatment of Deep Vein Thrombosis in Switzerland (OTIS-DVT) Registry of 502 patients with acute, objectively confirmed lower extremity DVT (59% provoked or first distal DVT; 41% unprovoked proximal, unprovoked recurrent, or cancer-associated DVT) from 53 private practices and 11 hospitals, we investigated the planned duration of anticoagulation at the time of treatment initiation. The decision to administer limited-duration anticoagulation therapy was made in 343 (68%) patients, with a median duration of 107 days (interquartile range 91-182) for provoked or first distal DVT, and 182 days (interquartile range 111-184) for unprovoked proximal, unprovoked recurrent, or cancer-associated DVT. Among patients with provoked or first distal DVT, anticoagulation was recommended for < 3 months in 11%, for 3 months in 63%, and for an indefinite period in 26%. Among patients with unprovoked proximal, unprovoked recurrent, or cancer-associated DVT, anticoagulation was recommended for < 6 months in 22%, for 6-12 months in 38%, and for an indefinite period in 40%. Overall, indefinite-duration therapy was planned more frequently by hospital physicians than by private practice physicians (39% vs. 28%; p=0.019). The considerable inconsistency in planning the duration of anticoagulation therapy mandates an improvement in the risk stratification of outpatients with acute DVT.
Abstract:
Deep vein thrombosis (DVT) and its complication, pulmonary embolism, are frequent causes of disability and mortality. Although blood flow disturbance is considered an important triggering factor, the mechanism of DVT initiation remains elusive. Here we show that 48-hour flow restriction in the inferior vena cava (IVC) results in the development of thrombi structurally similar to human deep vein thrombi. von Willebrand factor (VWF)-deficient mice were protected from thrombosis induced by complete (stasis) or partial (stenosis) flow restriction in the IVC. Mice with half normal VWF levels were also protected in the stenosis model. Besides promoting platelet adhesion, VWF carries Factor VIII. Repeated infusions of recombinant Factor VIII did not rescue thrombosis in VWF(-/-) mice, indicating that impaired coagulation was not the primary reason for the absence of DVT in VWF(-/-) mice. Infusion of GPG-290, a mutant glycoprotein Ibα-immunoglobulin chimera that specifically inhibits interaction of the VWF A1 domain with platelets, prevented thrombosis in wild-type mice. Intravital microscopy showed that platelet and leukocyte recruitment in the early stages of DVT was dramatically higher in wild-type than in VWF(-/-) IVC. Our results demonstrate a pathogenetic role for VWF-platelet interaction in flow disturbance-induced venous thrombosis.
Abstract:
Background: The goal when resuscitating trauma patients is to achieve adequate tissue perfusion. One parameter of tissue perfusion is tissue oxygen saturation (StO2), as measured by near-infrared spectroscopy. Using a commercially available device, we investigated whether a clinically relevant blood loss of 500 ml in healthy volunteers can be detected by changes in StO2 after a standardized ischemic event. Methods: We performed occlusion of the brachial artery for 3 minutes in 20 healthy female blood donors before and after blood donation. StO2 and total oxygenated tissue hemoglobin (O2Hb) were measured continuously at the thenar eminence. 10 healthy volunteers were assessed in the same way to examine whether repeated vascular occlusion without blood donation exhibits time-dependent effects. Results: Blood donation caused a substantial decrease in systolic blood pressure but did not affect resting StO2 and O2Hb values. No changes in the reaction to the vascular occlusion test were measured in the blood donor group, but in the control group there was an increase in the O2Hb rate of recovery during the reperfusion phase. Conclusion: StO2 measured at the thenar eminence appears to be insensitive to a blood loss of 500 ml in this setting. Blood loss greater than this might lead to detectable changes that could guide the treating physician. The exact cut-off for detectable changes and the effect of time on repeated vascular occlusion tests should be explored further; to date, no such data exist.
Abstract:
The cytidine deaminase AID hypermutates immunoglobulin genes but can also target oncogenes, leading to tumorigenesis. The extent of AID's promiscuity and its predilection for immunoglobulin genes are unknown. We report here that AID interacted broadly with promoter-proximal sequences associated with stalled polymerases and chromatin-activating marks. In contrast, genomic occupancy of replication protein A (RPA), an AID cofactor, was restricted to immunoglobulin genes. The recruitment of RPA to the immunoglobulin loci was facilitated by phosphorylation of AID at Ser38 and Thr140. We propose that stalled polymerases recruit AID, thereby resulting in low frequencies of hypermutation across the B cell genome. Efficient hypermutation and switch recombination required AID phosphorylation and correlated with recruitment of RPA. Our findings provide a rationale for the oncogenic role of AID in B cell malignancy.
Abstract:
The original and modified Wells scores are widely used prediction rules for pre-test probability assessment of deep vein thrombosis (DVT). The objective of this study was to compare the predictive performance of both Wells scores in unselected patients with clinical suspicion of DVT.
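The Wells score is a simple additive rule: each clinical finding contributes a fixed weight and the sum stratifies pre-test probability. The sketch below is an abridged illustration from commonly published versions of the rule, not the exact instrument evaluated in this study; consult the original publications for the authoritative criteria:

```python
def wells_dvt_score(findings):
    """Sum of weighted clinical findings (modified Wells rule for DVT).

    `findings` is a set of criterion keys. Weights follow commonly
    published versions of the rule, abridged here for illustration.
    """
    weights = {
        "active_cancer": 1,
        "paralysis_or_immobilization": 1,
        "bedridden_or_major_surgery": 1,
        "localized_tenderness": 1,
        "entire_leg_swollen": 1,
        "calf_swelling_gt_3cm": 1,
        "pitting_edema": 1,
        "collateral_superficial_veins": 1,
        "previous_dvt": 1,               # the item added in the modified score
        "alternative_dx_as_likely": -2,  # the only negatively weighted item
    }
    return sum(weights[f] for f in findings)

score = wells_dvt_score({"active_cancer", "calf_swelling_gt_3cm",
                         "alternative_dx_as_likely"})
print(score)   # 1 + 1 - 2 = 0 -> "DVT unlikely" in the two-level scheme
```

The original and modified versions differ mainly in the "previous DVT" item and in how the total is dichotomized, which is exactly the kind of difference whose predictive impact the study compares.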
Abstract:
We aimed to investigate clinical practice patterns for the outpatient management of acute deep vein thrombosis (DVT).