863 results for Deep drawing


Relevance:

20.00%

Publisher:

Abstract:

Analysts, politicians and international players from all over the world look at China as one of the most powerful countries on the international scene, and as a country whose economic development can significantly affect the economies of the rest of the world. However, many aspects of this country still have to be investigated. First among these is the fundamental role that Chinese rural areas still play in the country's overall political, economic and social development. In particular, the way in which the rural areas have influenced the social stability of the whole country has been widely discussed because of their close relationship with the urban areas, to which most people from the countryside migrate in search of a job and a better life. In recent years many studies have focused mostly on the urbanization phenomenon, with little interest in the living conditions in rural areas and in the deep changes that have occurred in some, mainly agricultural, provinces. An analysis of the level of infrastructure is one of the main aspects that highlight the principal differences in living conditions between rural and urban areas. In this thesis, I first carried out the analysis with a multivariate statistical approach (Principal Component Analysis and Cluster Analysis) in order to define a new map of rural areas based on living conditions. In the second part I built an index (the Living Conditions Index) using a Fuzzy Expert/Inference System. Finally, I compared this index (LCI) with the results obtained from the cluster analysis by drawing geographic maps. The data source is the second national agricultural census of China, carried out in 2006; in particular, I analysed the data referring to villages, aggregated at the province level.
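
A minimal sketch of the PCA + cluster-analysis step described above, using scikit-learn as a stand-in for whatever statistical software was actually used; the input file and indicator names are placeholders, not the census variables analysed in the thesis:

```python
# Minimal sketch of the PCA + cluster analysis step (scikit-learn).
# The input file and indicator columns are placeholders, not the actual
# variables of the 2006 agricultural census.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# One row per province, columns = living-condition indicators (hypothetical).
df = pd.read_csv("province_indicators.csv", index_col="province")

# Standardise the indicators so PCA is not dominated by scale differences.
X = StandardScaler().fit_transform(df.values)

# Keep the principal components that explain most of the variance.
pca = PCA(n_components=0.9)          # retain 90% of the variance
scores = pca.fit_transform(X)

# Cluster the provinces in the reduced space to draw the "new map".
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0)
df["cluster"] = kmeans.fit_predict(scores)
print(df["cluster"].value_counts())
```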

Relevance:

20.00%

Publisher:

Abstract:

In this thesis we present two deep, overlapping 1.4 GHz and 345 MHz surveys of the Lockman Hole field taken with the Westerbork Synthesis Radio Telescope. We extracted a catalogue of ~6000 radio sources from the 1.4 GHz mosaic down to a flux limit of ~55 μJy, and a catalogue of 334 radio sources down to a flux limit of ~4 mJy from the inner 7 square degrees of the 345 MHz image. The extracted catalogues were used to derive the source number counts at 1.4 GHz and at 345 MHz. The source counts were found to be fully consistent with previous determinations; in particular, the 1.4 GHz source counts derived from the present sample provide one of the most statistically robust determinations in the flux range 0.1 < S < 1 mJy. During the commissioning program of the LOFAR telescope, the Lockman Hole field was observed at 58 MHz and 150 MHz. The 150 MHz LOFAR observation is particularly relevant, as it allowed us to obtain the first flux-calibrated, high-resolution LOFAR image of a deep field. From this image we extracted a preliminary source catalogue down to a flux limit of ~15 mJy (~10σ), which can be considered complete down to 20-30 mJy. A spectral index study of the mJy sources in the Lockman Hole region was performed using the available catalogues (1.4 GHz, 345 MHz and 150 MHz) and a deep 610 MHz source catalogue available from the literature (Garn et al. 2008, 2010).
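
As an illustration of the spectral index study, a two-point spectral index α (defined here through S ∝ ν^α) can be computed from the flux densities measured at two frequencies; the flux values below are placeholders, not catalogue entries:

```python
# Two-point spectral index alpha, defined here through S ∝ nu**alpha.
# Flux densities are placeholders, not values from the catalogues.
import math

def spectral_index(s1_mjy: float, nu1_mhz: float,
                   s2_mjy: float, nu2_mhz: float) -> float:
    """Return alpha such that S(nu) ∝ nu**alpha."""
    return math.log(s1_mjy / s2_mjy) / math.log(nu1_mhz / nu2_mhz)

# Example: a source with 2.0 mJy at 1400 MHz and 8.0 mJy at 150 MHz.
alpha = spectral_index(2.0, 1400.0, 8.0, 150.0)
print(f"alpha = {alpha:.2f}")   # ~ -0.62, a typical synchrotron slope
```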

Relevance:

20.00%

Publisher:

Abstract:

Despite the many issues faced in the past, silicon technology has kept evolving at a constant pace, and today an ever increasing number of cores is integrated onto the same die. Unfortunately, the extraordinary performance achievable with the many-core paradigm is limited by several factors: memory bandwidth limitations, combined with inefficient synchronization mechanisms, can severely undermine the potential computational capabilities. Moreover, the huge HW/SW design space requires accurate and flexible tools to perform architectural explorations and to validate design choices. In this thesis we focus on these aspects: a flexible and accurate Virtual Platform has been developed, targeting a reference many-core architecture. This tool has been used to perform architectural explorations, focusing on the instruction caching architecture and on hybrid HW/SW synchronization mechanisms. Besides the architectural implications, another issue of embedded systems is considered: energy efficiency. Near Threshold Computing (NTC) is a key research area in the Ultra-Low-Power domain, as it promises a tenfold improvement in energy efficiency compared to super-threshold operation and mitigates thermal bottlenecks. At the same time, the physical implications of modern deep sub-micron technology severely limit the performance and reliability of current designs. Reliability becomes a major obstacle when operating in the NTC regime, where memory operation in particular becomes unreliable and can compromise system correctness. In the present work a novel hybrid memory architecture is devised to overcome these reliability issues and, at the same time, improve energy efficiency by means of aggressive voltage scaling when workload requirements allow it. Variability is another great drawback of near-threshold operation: the greatly increased sensitivity to threshold voltage variations is today a major concern for electronic devices. We introduce a variation-tolerant extension of the baseline many-core architecture; by means of micro-architectural knobs and a lightweight runtime control unit, the baseline architecture becomes dynamically tolerant to variations.
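
A toy sketch of the kind of workload-driven voltage scaling policy mentioned above; the operating points, thresholds and workload metric are purely illustrative and are not those of the thesis architecture:

```python
# Toy workload-driven voltage scaling policy: drop to a near-threshold
# operating point when utilisation is low, return to nominal voltage when
# the workload (or a reliability-critical phase) requires it.
# Operating points and thresholds are illustrative only.
OPERATING_POINTS = {          # (voltage in V, frequency in MHz)
    "nominal": (1.0, 500),
    "ntc":     (0.6, 100),
}

def select_operating_point(utilisation: float, critical_phase: bool):
    """Pick an operating point from a utilisation estimate in [0, 1]."""
    if critical_phase or utilisation > 0.7:
        return OPERATING_POINTS["nominal"]
    return OPERATING_POINTS["ntc"]

print(select_operating_point(0.2, critical_phase=False))  # (0.6, 100)
print(select_operating_point(0.9, critical_phase=False))  # (1.0, 500)
```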

Relevance:

20.00%

Publisher:

Abstract:

The thesis describes the implementation of calibration, format-translation and data-conditioning software for the radiometric tracking data of deep-space spacecraft. All of the propagation-media noise rejection techniques available as features in the code are covered in their mathematical formulation, performance and software implementation. Some techniques are taken from the literature and the current state of the art, while other algorithms have been conceived ex novo. All three typical deep-space refractive environments (solar plasma, ionosphere, troposphere) are dealt with by specific subroutines. Particular attention has been devoted to the GNSS-based tropospheric path delay calibration subroutine, since it is the bulkiest module of the software suite in terms of both sheer number of lines of code and development time. The software is currently in its final stage of development and, once completed, will serve as a pre-processing stage for orbit determination codes. Calibration of transmission-media noise sources in the radiometric observables has proved to be an essential operation to perform on radiometric data in order to meet the increasingly demanding error budget requirements of modern deep-space missions. A completely autonomous, all-round propagation-media calibration package is a novelty in orbit determination, although standalone codes are currently employed by ESA and NASA. The described software is planned to be compatible with the current standards for tropospheric noise calibration used by both these agencies, such as the AMC, TSAC and ESA IFMS weather data, and it natively works with the Tracking Data Message (TDM) file format adopted by CCSDS as a standard aimed at promoting and simplifying inter-agency collaboration.
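
For illustration, a common building block of tropospheric path delay calibration is the Saastamoinen zenith hydrostatic delay mapped to the line of sight with an elevation-dependent factor; the sketch below uses surface pressure and a crude 1/sin(elevation) mapping as assumptions, and is not the actual formulation implemented in the thesis software:

```python
# Sketch of a zenith hydrostatic delay (Saastamoinen model) mapped to the
# line of sight with a crude 1/sin(el) factor. Real calibration codes use
# refined mapping functions (e.g. Niell or VMF) and also model the wet delay;
# this only illustrates the kind of correction applied to the observables.
import math

def zenith_hydrostatic_delay(pressure_hpa: float,
                             lat_rad: float,
                             height_m: float) -> float:
    """Saastamoinen zenith hydrostatic delay in metres."""
    return 0.0022768 * pressure_hpa / (
        1.0 - 0.00266 * math.cos(2.0 * lat_rad) - 0.28e-6 * height_m
    )

def slant_hydrostatic_delay(pressure_hpa: float, lat_rad: float,
                            height_m: float, elevation_rad: float) -> float:
    """Map the zenith delay to the slant path with a 1/sin(el) factor."""
    zhd = zenith_hydrostatic_delay(pressure_hpa, lat_rad, height_m)
    return zhd / math.sin(elevation_rad)

# Example: station at 45 deg latitude, 100 m altitude, 1013 hPa,
# tracking a spacecraft at 30 deg elevation -> roughly 4.6 m of delay.
print(slant_hydrostatic_delay(1013.0, math.radians(45.0),
                              100.0, math.radians(30.0)))
```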

Relevance:

20.00%

Publisher:

Abstract:

Privacy and security on the Internet are topical issues. Through the study of several documents and hands-on testing of applications that provide anonymity, this thesis analyses the current situation. Our privacy is compromised, and global awareness of the problem is important.

Relevance:

20.00%

Publisher:

Abstract:

This work began with a theoretical study of the main image classification techniques known in the literature, with particular attention to the most widespread image representation models, such as the Bag of Visual Words model, and to the main Machine Learning tools. Attention then focused on what constitutes the state of the art for image classification, namely Deep Learning. To experiment with the advantages of this set of Image Classification methodologies, Torch7 was used: an open-source numerical computing framework, scriptable through the Lua language, with broad support for state-of-the-art Deep Learning methods. The actual image classification was implemented with Torch7 because this framework, also thanks to the analysis work previously carried out by some of my colleagues, proved to be very effective at categorising objects in images. The images on which the experimental tests were based belong to a dataset created ad hoc for the 3D vision system, with the aim of testing the system for visually impaired and blind users; it contains some of the main obstacles that a visually impaired person may encounter in everyday life. In particular, the dataset consists of potential obstacles relating to a hypothetical outdoor scenario. Having thus established that Torch7 was the tool to use for classification, attention turned to the possibility of exploiting Stereo Vision to increase the accuracy of the classification itself. The images belonging to the aforementioned dataset were in fact acquired with a Stereo Camera with FPGA-based processing developed by the research group where this work was carried out. This made it possible to use 3D information, such as the depth level of each object in the image, to segment the objects of interest through an algorithm implemented in C++, excluding the rest of the scene. The last phase of the work was to test Torch7 on the image dataset, previously segmented with the segmentation algorithm just described, in order to recognise the type of obstacle detected by the system.
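
A minimal sketch of the depth-based segmentation idea described above; the original algorithm was written in C++ and fed a Torch7 classifier, so the NumPy version below, with its threshold values, is only an illustrative stand-in:

```python
# Depth-based segmentation sketch: keep only the pixels whose depth falls in
# a band around the nearest object and blank out the rest of the scene before
# classification. Thresholds are illustrative, not those of the original C++
# implementation.
import numpy as np

def segment_by_depth(image: np.ndarray, depth: np.ndarray,
                     band_m: float = 0.5) -> np.ndarray:
    """Return a copy of `image` with everything outside the depth band
    of the closest object set to zero."""
    valid = depth > 0                      # 0 = no depth measurement
    nearest = depth[valid].min()           # distance of the closest object
    mask = valid & (depth <= nearest + band_m)
    segmented = image.copy()
    segmented[~mask] = 0                   # blank out the rest of the scene
    return segmented

# Example with random data standing in for a rectified image + depth map.
rgb = np.random.randint(0, 255, (120, 160, 3), dtype=np.uint8)
dep = np.random.uniform(0.5, 10.0, (120, 160))
out = segment_by_depth(rgb, dep)
```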

Relevance:

20.00%

Publisher:

Abstract:

In recent years, Deep Learning techniques have shown to perform well on a large variety of problems both in Computer Vision and Natural Language Processing, reaching and often surpassing the state of the art on many tasks. The rise of deep learning is also revolutionizing the entire field of Machine Learning and Pattern Recognition pushing forward the concepts of automatic feature extraction and unsupervised learning in general. However, despite the strong success both in science and business, deep learning has its own limitations. It is often questioned if such techniques are only some kind of brute-force statistical approaches and if they can only work in the context of High Performance Computing with tons of data. Another important question is whether they are really biologically inspired, as claimed in certain cases, and if they can scale well in terms of "intelligence". The dissertation is focused on trying to answer these key questions in the context of Computer Vision and, in particular, Object Recognition, a task that has been heavily revolutionized by recent advances in the field. Practically speaking, these answers are based on an exhaustive comparison between two, very different, deep learning techniques on the aforementioned task: Convolutional Neural Network (CNN) and Hierarchical Temporal memory (HTM). They stand for two different approaches and points of view within the big hat of deep learning and are the best choices to understand and point out strengths and weaknesses of each of them. CNN is considered one of the most classic and powerful supervised methods used today in machine learning and pattern recognition, especially in object recognition. CNNs are well received and accepted by the scientific community and are already deployed in large corporation like Google and Facebook for solving face recognition and image auto-tagging problems. HTM, on the other hand, is known as a new emerging paradigm and a new meanly-unsupervised method, that is more biologically inspired. It tries to gain more insights from the computational neuroscience community in order to incorporate concepts like time, context and attention during the learning process which are typical of the human brain. In the end, the thesis is supposed to prove that in certain cases, with a lower quantity of data, HTM can outperform CNN.
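
As a point of reference for the CNN side of the comparison, a minimal convolutional classifier is sketched below in PyTorch; the framework choice, layer sizes, input resolution and class count are placeholders and do not reproduce the networks actually evaluated in the thesis:

```python
# Minimal CNN classifier sketch (PyTorch). Layer sizes, input resolution and
# number of classes are placeholders, not the architectures compared in the
# thesis (which also evaluated HTM, not shown here).
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = TinyCNN()
logits = model(torch.randn(4, 3, 32, 32))   # batch of four 32x32 RGB images
print(logits.shape)                          # torch.Size([4, 10])
```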

Relevance:

20.00%

Publisher:

Abstract:

The work presented in this thesis concerned the study and simulation of bistatic radar experiments for planetary exploration missions. In particular, the work focused on the use and improvement of a software simulator previously developed by a consortium of companies and research institutions within a European Space Agency (ESA) study funded in 2008 and carried out between 2009 and 2010. The Spanish company GMV coordinated the study, in which research groups from the University of Rome "Sapienza" and the University of Bologna also took part. The work concentrated on determining the cause of some inconsistencies in the outputs of the part of the simulator, developed in MATLAB, devoted to estimating the characteristics of Titan's surface, in particular its dielectric constant and mean surface roughness, by means of a downlink bistatic radar experiment performed by the Cassini-Huygens probe in orbit around Titan. Bistatic radar experiments for the study of celestial bodies have been part of space exploration since the 1960s, even though the equipment used and the mission phases during which these experiments were carried out were never specifically designed for the purpose. Hence the need for a simulator to study various possible bistatic radar experiment configurations in different types of mission. In a first phase of familiarisation with the simulator, the work focused on studying the documentation accompanying the code, so as to obtain a general idea of its structure and operation. A detailed study phase followed, determining the purpose of every line of code used and verifying in the literature the formulas and models employed to determine the various parameters. In a second phase, the work involved direct intervention on the code, with a series of investigations aimed at assessing its consistency and the reliability of its results. Each investigation reduced the simplifying assumptions imposed on the model, so as to identify with greater confidence the part of the code responsible for the inaccuracy of the simulator outputs. The results obtained allowed the correction of some parts of the code and the identification of the main source of error in the outputs, narrowing down the subject of future targeted investigations.
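
For illustration, the link between a bistatic radar echo and the surface dielectric constant passes through the Fresnel reflection coefficients of a smooth dielectric surface; the sketch below computes them for an assumed permittivity and incidence angle, as a textbook relation rather than code taken from the ESA/GMV simulator:

```python
# Fresnel power reflection coefficients of a smooth dielectric surface,
# the textbook relation tying bistatic echo strength to the dielectric
# constant. Values are illustrative; this is not code from the simulator
# discussed above.
import cmath
import math

def fresnel_reflectivity(eps_r: complex, incidence_deg: float):
    """Return (|R_h|^2, |R_v|^2) for an incidence angle measured from the normal."""
    theta = math.radians(incidence_deg)
    cos_t, sin2_t = math.cos(theta), math.sin(theta) ** 2
    root = cmath.sqrt(eps_r - sin2_t)
    r_h = (cos_t - root) / (cos_t + root)                   # horizontal polarisation
    r_v = (eps_r * cos_t - root) / (eps_r * cos_t + root)   # vertical polarisation
    return abs(r_h) ** 2, abs(r_v) ** 2

# Example: an assumed permittivity of 2.2 observed at 60 deg incidence.
print(fresnel_reflectivity(2.2, 60.0))
```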

Relevance:

20.00%

Publisher:

Abstract:

The spectral shape of the X-ray background requires the existence of a large number of moderately obscured AGN, in addition to heavily obscured AGN whose column density exceeds the Compton limit (Nh > 10^24 cm^-2). Because of their nature, these objects are difficult to observe, so a multi-band approach is needed to detect them. In this thesis work we studied 29 sources observed in the CDF-S and 10 in the CDF-N at 0.07 < z < …, performing an X-ray spectral analysis and looking for indicators of heavy obscuration (Nh > 10^23, flat spectrum, iron line): following this analysis, 9 sources showing indications of heavy obscuration were identified. From the study of the ratio between the X-ray luminosity (2-10 keV) and the MIR luminosity (12.3 μm) derived from SED fitting, we obtained a confirmation of the possible presence of heavy obscuration in the 9 sources selected from the spectral analysis. We also compared the star formation rate derived from the X-ray band (0.5-8 keV) with that derived from the IR band (8-1000 μm), in order to identify the sources in which the AGN emission is dominant. Following this analysis we identified 9 sources (the same as above) with indications of heavy obscuration; of these, 3 show clear indications of the presence of a Compton-thick AGN (Nh > 10^24 cm^-2, intense iron line, low LX/MIR ratio).
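
A minimal sketch of the X-ray versus mid-IR diagnostic mentioned above: heavily obscured candidates are flagged when the observed 2-10 keV luminosity falls well below the value expected from the 12.3 μm luminosity. The expected intrinsic ratio and the deficit threshold below are placeholders, not the calibration actually adopted in the thesis:

```python
# X-ray vs mid-IR obscuration check: flag sources whose observed 2-10 keV
# luminosity is far below the value expected from the 12.3 micron luminosity.
# The intrinsic LX/LMIR ratio and the deficit threshold are placeholders,
# not the calibration used in the thesis.
EXPECTED_LOG_LX_OVER_LMIR = 0.0   # placeholder intrinsic ratio (log10)
DEFICIT_DEX = 1.0                 # placeholder: flag if >1 dex below expectation

def is_obscured_candidate(log_lx: float, log_lmir: float) -> bool:
    """True if the source shows a strong X-ray deficit relative to the MIR."""
    observed_ratio = log_lx - log_lmir
    return observed_ratio < EXPECTED_LOG_LX_OVER_LMIR - DEFICIT_DEX

# Example: log L(2-10 keV) = 42.3, log L(12.3 um) = 44.0  ->  flagged
print(is_obscured_candidate(42.3, 44.0))   # True
```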

Relevance:

20.00%

Publisher:

Abstract:

Three-month anticoagulation is recommended to treat provoked or first distal deep-vein thrombosis (DVT), and indefinite-duration anticoagulation should be considered for patients with unprovoked proximal, unprovoked recurrent, or cancer-associated DVT. In the prospective Outpatient Treatment of Deep Vein Thrombosis in Switzerland (OTIS-DVT) Registry of 502 patients with acute, objectively confirmed lower extremity DVT (59% provoked or first distal DVT; 41% unprovoked proximal, unprovoked recurrent, or cancer-associated DVT) from 53 private practices and 11 hospitals, we investigated the planned duration of anticoagulation at the time of treatment initiation. The decision to administer limited-duration anticoagulation therapy was made in 343 (68%) patients, with a median duration of 107 (interquartile range 91-182) days for provoked or first distal DVT and 182 (interquartile range 111-184) days for unprovoked proximal, unprovoked recurrent, or cancer-associated DVT. Among patients with provoked or first distal DVT, anticoagulation was recommended for < 3 months in 11%, for 3 months in 63%, and for an indefinite period in 26%. Among patients with unprovoked proximal, unprovoked recurrent, or cancer-associated DVT, anticoagulation was recommended for < 6 months in 22%, for 6-12 months in 38%, and for an indefinite period in 40%. Overall, indefinite-duration therapy was planned more frequently by hospital physicians than by private practice physicians (39% vs. 28%; p=0.019). The considerable inconsistency in planning the duration of anticoagulation therapy mandates an improvement in the risk stratification of outpatients with acute DVT.

Relevance:

20.00%

Publisher:

Abstract:

Deep vein thrombosis (DVT) and its complication, pulmonary embolism, are frequent causes of disability and mortality. Although blood flow disturbance is considered an important triggering factor, the mechanism of DVT initiation remains elusive. Here we show that 48-hour flow restriction in the inferior vena cava (IVC) results in the development of thrombi structurally similar to human deep vein thrombi. von Willebrand factor (VWF)-deficient mice were protected from thrombosis induced by complete (stasis) or partial (stenosis) flow restriction in the IVC. Mice with half normal VWF levels were also protected in the stenosis model. Besides promoting platelet adhesion, VWF carries Factor VIII. Repeated infusions of recombinant Factor VIII did not rescue thrombosis in VWF(-/-) mice, indicating that impaired coagulation was not the primary reason for the absence of DVT in VWF(-/-) mice. Infusion of GPG-290, a mutant glycoprotein Ibα-immunoglobulin chimera that specifically inhibits interaction of the VWF A1 domain with platelets, prevented thrombosis in wild-type mice. Intravital microscopy showed that platelet and leukocyte recruitment in the early stages of DVT was dramatically higher in wild-type than in VWF(-/-) IVC. Our results demonstrate a pathogenetic role for VWF-platelet interaction in flow disturbance-induced venous thrombosis.

Relevance:

20.00%

Publisher:

Abstract:

The cytidine deaminase AID hypermutates immunoglobulin genes but can also target oncogenes, leading to tumorigenesis. The extent of AID's promiscuity and its predilection for immunoglobulin genes are unknown. We report here that AID interacted broadly with promoter-proximal sequences associated with stalled polymerases and chromatin-activating marks. In contrast, genomic occupancy of replication protein A (RPA), an AID cofactor, was restricted to immunoglobulin genes. The recruitment of RPA to the immunoglobulin loci was facilitated by phosphorylation of AID at Ser38 and Thr140. We propose that stalled polymerases recruit AID, thereby resulting in low frequencies of hypermutation across the B cell genome. Efficient hypermutation and switch recombination required AID phosphorylation and correlated with recruitment of RPA. Our findings provide a rationale for the oncogenic role of AID in B cell malignancy.

Relevance:

20.00%

Publisher:

Abstract:

The original and modified Wells scores are widely used prediction rules for the pre-test probability assessment of deep vein thrombosis (DVT). The objective of this study was to compare the predictive performance of both Wells scores in unselected patients with clinical suspicion of DVT.

Relevance:

20.00%

Publisher:

Abstract:

We aimed to investigate clinical practice patterns for the outpatient management of acute deep vein thrombosis (DVT).