Abstract:
Chronic low back pain is a difficult condition to treat. Since some patients respond positively to treatment while others show no improvement, other conditioning factors presumably remain to be elucidated. In this study, we sought to investigate the association between the formation of a positive patient-therapist relationship, assessed with the therapeutic alliance inventory, and adequate recruitment of the deep abdominal muscles, and to verify the effect of an intervention protocol based on motor control exercises on levels of pain and disability. Recruitment of the transversus abdominis and internal oblique muscles was examined by ultrasound imaging in 12 subjects with nonspecific chronic low back pain before and after implementation of a motor control exercise protocol, with subsequent application of the therapeutic alliance inventory questionnaire. No association was found between the level of therapist/patient alliance and muscle recruitment. The proposed protocol was effective in reducing levels of pain and disability; however, recruitment of the transversus abdominis and internal oblique muscles showed no significant change at the end of the intervention. Based on these findings, we conclude that the therapeutic alliance has no short-term association with muscle recruitment. Nevertheless, although muscle recruitment did not change after the intervention program, pain and disability levels were reduced.
Abstract:
Abstract Background The molecular phylogenetic relationships and population structure of the species of the Anopheles triannulatus complex: Anopheles triannulatus s.s., Anopheles halophylus and the putative species Anopheles triannulatus C were investigated. Methods The mitochondrial COI gene, the nuclear white gene and rDNA ITS2 of samples covering the known geographic distribution of these taxa were analyzed. Phylogenetic analyses were performed using Bayesian inference, maximum parsimony and maximum likelihood approaches. Results Each data set analyzed separately yielded a different topology, but none provided evidence for the separation of An. halophylus and An. triannulatus C, consistent with the hypothesis that the two are undergoing incipient speciation. The phylogenetic analyses of the white gene found three main clades, whereas the statistical parsimony network detected only a single metapopulation of Anopheles triannulatus s.l. Seven COI lineages were detected by phylogenetic and network analysis. In contrast, the network, but not the phylogenetic analyses, strongly supported three ITS2 groups. Combined data analyses provided the best resolution of the trees, with two major clades, Amazonian (clade I) and trans-Andean + Amazon Delta (clade II). Clade I consists of multiple subclades: An. halophylus + An. triannulatus C; trans-Andean Venezuela; central Amazonia + central Bolivia; Atlantic coastal lowland; and Amazon delta. Clade II includes three subclades: Panama; cis-Andean Colombia; and cis-Venezuela. The Amazon delta specimens appear in both clades, likely indicating local sympatry. Spatial and molecular variance analyses detected nine groups, corroborating some of the subclades obtained in the combined data analysis. Conclusion The combination of the three molecular markers provided the best resolution for differentiation within An. triannulatus s.s. and between An. halophylus and C. The latter two species seem to be very closely related, and the analyses performed were not conclusive regarding species differentiation. Further studies including new molecular markers would be desirable to settle this species-status question. In addition, the results of the study indicate a trans-Andean origin for An. triannulatus s.l. The potential implications for malaria epidemiology remain to be investigated.
Abstract:
The deep-sea environments of the South Atlantic Ocean are less studied than those of the North Atlantic and Pacific Oceans. With the aim of identifying deep-sea bacteria in this lesser-known ocean, 70 strains were isolated from eight sediment samples (depths ranging from 1905 to 5560 m) collected in the eastern part of the South Atlantic, from the equatorial region to the Cape Abyssal Plain, using three different culture media. The strains were classified into three phylogenetic groups, Gammaproteobacteria, Firmicutes and Actinobacteria, by analysis of 16S rRNA gene sequences. Gammaproteobacteria and Firmicutes were the most frequently identified groups, with Halomonas the most frequent genus among the strains. Microorganisms belonging to Firmicutes were the only ones observed in all samples. Sixteen of the 41 identified operational taxonomic units probably represent new species. The presence of potentially new species reinforces the need for further studies in the deep-sea environments of the South Atlantic.
Abstract:
Master's programme in Sustainable Management of Fishery Resources (Máster en Gestión Sostenible de Recursos Pesqueros)
Abstract:
Quasars and AGN play an important role in many aspects of modern cosmology. Of particular interest is the interplay between AGN activity and the formation and evolution of galaxies and structures. Studies of nearby galaxies have revealed that most (and possibly all) galaxy nuclei contain a super-massive black hole (SMBH) and that between a third and a half of them show some evidence of activity (Kormendy and Richstone, 1995). The discovery of a tight relation between black hole mass and the velocity dispersion of the host galaxy suggests that the growth of SMBHs and the evolution of their host galaxies are linked. In this context, studying the evolution of AGN through the luminosity function (LF) is fundamental to constrain theories of galaxy and SMBH formation and evolution. Recently, many theories have been developed to describe the physical processes possibly responsible for a common formation scenario for galaxies and their central black holes (Volonteri et al., 2003; Springel et al., 2005a; Vittorini et al., 2005; Hopkins et al., 2006a), and an increasing number of observations in different bands are focused on collecting larger and larger quasar samples. Many issues, however, are not yet fully understood. In the context of the VVDS (VIMOS-VLT Deep Survey), we collected and studied an unbiased sample of spectroscopically selected faint type-1 AGN with a unique and straightforward selection function. Indeed, the VVDS is a large, purely magnitude-limited spectroscopic survey of faint objects, free of any morphological and/or color preselection. We studied the statistical properties of this sample and its evolution up to redshift z ~ 4. Because of the contamination of the AGN light by their host galaxies at the faint magnitudes explored by our sample, we found that a significant fraction of the AGN in our sample would be missed by the UV-excess and morphological criteria usually adopted for the pre-selection of optical QSO candidates. If not properly taken into account, this failure to select particular sub-classes of AGN could, in principle, affect some of the conclusions drawn from samples of AGN based on these selection criteria. The absence of any pre-selection in the VVDS gives us a very complete sample of AGN, including objects with unusual colors and continuum shapes. The VVDS AGN sample in fact shows redder colors than expected by comparison with, for example, the color track derived from the SDSS composite spectrum. In particular, the faintest objects have on average redder colors than the brightest ones. This can be attributed both to a large fraction of dust-reddened objects and to significant contamination from the host galaxy. We tested these possibilities by examining the global spectral energy distribution of each object using, in addition to the U, B, V, R and I-band magnitudes, the UV GALEX and IR Spitzer bands, and fitting it with a combination of AGN and galaxy emission, also allowing for possible extinction of the AGN flux. We found that for 44% of our objects the contamination from the host galaxy is not negligible, and this fraction decreases to 21% if we restrict the analysis to a bright subsample (M_1450 < -22.15). Our estimated integral surface density at I_AB < 24.0 is ~500 AGN per square degree, which represents the highest surface density of a spectroscopically confirmed sample of optically selected AGN. We derived the luminosity function in the B band for 1.0 < z < 3.6 using the 1/V_max estimator.
Our data, more than one magnitude fainter than previous optical surveys, allow us to constrain the faint part of the luminosity function up to high redshift. A comparison of our data with the 2dF sample at low redshift (1 < z < 2.1) shows that the VVDS data cannot be well fitted by the pure luminosity evolution (PLE) models derived from previous optically selected samples. Qualitatively, this appears to be because our data suggest an excess of faint objects at low redshift (1.0 < z < 1.5) with respect to these models. By combining our faint VVDS sample with the large sample of bright AGN extracted from the SDSS DR3 (Richards et al., 2006b) and testing a number of different evolutionary models, we find that the model which best represents the combined luminosity functions, over a wide range of redshift and luminosity, is a luminosity-dependent density evolution (LDDE) model, similar to those derived from the major X-ray surveys. Such a parameterization allows the redshift of the AGN density peak to change as a function of luminosity, thus fitting the excess of faint AGN that we find at 1.0 < z < 1.5. On the basis of this model we find, for the first time from the analysis of optically selected samples, that the peak of the AGN space density shifts significantly towards lower redshift for lower-luminosity objects. The position of this peak moves from z ~ 2.0 for M_B < -26.0 to z ~ 0.65 for -22 < M_B < -20. This result, already found in a number of X-ray selected samples of AGN, is consistent with a scenario of "AGN cosmic downsizing", in which the density of more luminous AGN, possibly associated with more massive black holes, peaks earlier in the history of the Universe (i.e. at higher redshift) than that of low-luminosity AGN, which reaches its maximum later (i.e. at lower redshift). This behavior has long been claimed to be present in elliptical galaxies, and it is not easy to reproduce in the hierarchical cosmogonic scenario, where more massive Dark Matter Halos (DMH) form on average later by merging of less massive halos.
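For reference, a minimal sketch of the 1/V_max construction used above (the standard Schmidt 1968 estimator; the binning in magnitude and redshift is illustrative):

$\Phi(M,z)\,\Delta M = \sum_{i=1}^{N} \frac{1}{V_{\max,i}}$, with Poisson uncertainty $\sigma_\Phi\,\Delta M = \Big[\sum_{i=1}^{N} V_{\max,i}^{-2}\Big]^{1/2}$,

where V_max,i is the comoving volume within which source i would still satisfy the survey magnitude limits, and the sum runs over the N sources falling in the (M, z) bin.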
Abstract:
Because of its particular position and complex geological history, the Northern Apennines have been considered a natural laboratory in which to apply several kinds of investigation. It is nevertheless complicated to join all the knowledge about the Northern Apennines into a unique picture that explains the structural and geological setting that produced them. The main goal of this thesis is to put together all the information on the deformation of this region, in the crust and at depth, and to describe a geodynamic model that accounts for it. To do so, we analyzed the pattern of deformation in the crust and in the mantle. In both cases the deformation was studied using information recovered from earthquakes, although with different techniques. The shallower deformation was studied using seismic moment tensor information. For this purpose we used the method described in Arvidsson and Ekstrom (1998), which, by allowing the use of surface waves in the inversion [and not only body waves, as in the Centroid Moment Tensor method (Dziewonski et al., 1981)], makes it possible to determine seismic source parameters for earthquakes with magnitudes as small as 4.0. We applied this tool to the Northern Apennines, and through this activity we built up the Italian CMT dataset (Pondrelli et al., 2006) and computed the pattern of seismic deformation using the Kostrov (1974) method on a regular grid of 0.25-degree cells. We obtained a map of lateral variations of the pattern of seismic deformation in different depth layers, taking into account the fact that shallow earthquakes (within 15 km depth) occur everywhere in the region, while most events with deeper hypocenters (15-40 km) occur only in the outer part of the belt, on the Adriatic side. For the analysis of the deep deformation, i.e. that occurring in the mantle, we used the anisotropy information characterizing the structure below the Northern Apennines. Anisotropy is a property of the Earth that in the crust is due to the presence of aligned fluid-filled cracks or alternating isotropic layers with different elastic properties, while in the mantle its most important cause is the lattice-preferred orientation (LPO) of mantle minerals such as olivine. The latter is a highly anisotropic mineral that tends to align its fast crystallographic axis (a-axis) parallel to the asthenospheric flow in response to the finite strain induced by geodynamic processes. The seismic anisotropy pattern of a region is measured using the shear-wave splitting phenomenon (the seismological analogue of optical birefringence). Here we applied the Sileny and Plomerova (1996) approach to teleseismic earthquakes recorded at stations located in the study region. The results were analyzed on the basis of their lateral and vertical variations to better define the Earth structure beneath the Northern Apennines. We find different anisotropic domains, a Tuscan one and an Adriatic one, with a pattern of seismic anisotropy that varies laterally in a way similar to the seismic deformation. Moreover, beneath the Adriatic region the distribution of the splitting parameters is so complex as to require a dedicated analysis. We therefore applied to our data the code of Menke and Levin (2003), which allows one to search for models of structures with multilayer anisotropy. We found that the structure beneath the Po Plain is probably even more complicated than expected.
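As a worked reference for the crustal part, the Kostrov (1974) summation mentioned above converts the moment tensors of the N earthquakes in each 0.25-degree cell into an average seismic strain rate (the symbols here are the standard ones, not necessarily the thesis's notation):

$\dot{\varepsilon}_{ij} = \frac{1}{2\mu V T} \sum_{k=1}^{N} M_{ij}^{(k)}$

where $\mu$ is the shear modulus, V the cell volume (bounded by the chosen depth layer), T the time span of the catalogue, and $M_{ij}^{(k)}$ the components of the k-th moment tensor.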
On the basis of the results obtained for this thesis, together with those from previous works, we suggest that the slab roll-back which created the Apennines and opened the Tyrrhenian Sea evolved at the northern boundary of the Northern Apennines in a different way from its southern part. In particular, trench retreat developed primarily south of our study region, with an eastward roll-back. In the northern portion of the orogen, after a first stage during which the retreat was perpendicular to the trench, it became oblique with respect to the structure.
Abstract:
The thesis is a study of some aspects of the new "deep inference" methodology, combined with a re-examination of the classical concepts of proof theory, together with some original results aimed at a better understanding of the subject as well as at practical applications. The first chapter introduces, following a formalist approach (with some personal touches), the basic concepts of structural proof theory, i.e. the theory that uses combinatorial ("finitistic") tools to study the properties of proofs. The second chapter focuses on classical propositional logic, first introducing the sequent calculus and proving Gentzen's Hauptsatz, and then moving on to the calculus of structures (system SKS), for which a cut-elimination theorem, specifically adapted by the author, is also proved. Finally, the locality property of system SKS is discussed and proved. The third and final chapter follows an analogous path for linear logic. The linear sequent calculus is defined and motivated, and its counterpart in the calculus of structures is discussed. Here the attention is mainly devoted to the problem of defining non-commutative operators, which place these systems in close relation with process algebras.
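As an illustration of what "deep" inference means here (a representative textbook example, not a result of the thesis): in the calculus of structures, a rule such as the switch rule of system SKS may be applied inside an arbitrary context S{ }, not only at the root of a formula:

$\mathsf{s}: \dfrac{S\{(A \vee B) \wedge C\}}{S\{(A \wedge C) \vee B\}}$

Read top-down this is just the valid implication $(A \vee B) \wedge C \supset (A \wedge C) \vee B$, but the freedom to apply it at arbitrary depth inside a formula is precisely what distinguishes the calculus of structures from the sequent calculus.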
Abstract:
The aim of this work is to explain the paradigm of American foreign policy during the Johnson Administration, especially toward Europe, within the NATO framework, and toward the USSR, in the context of the détente that emerged during the sixties. After the death of J. F. Kennedy, President L. B. Johnson inherited a complex and very ambitious world policy, which aimed to open a new phase in transatlantic relations and to share the burden of the Cold War with a reluctant Europe. Known as the grand design, it was a policy that needed the support of the allies and a clear purpose that appealed to the Europeans. At first, President Johnson saw in the problem of nuclear sharing the bargain to strike with the NATO allies. At the same time, he understood that the United States needed to reassert its leadership within the new stage of relations with the Soviet Union. The "transatlantic bargain" soon became something not so easy to deal with. The Federal Republic of Germany wanted a say in nuclear affairs and, why not, a finger on the trigger of the Atlantic nuclear weapons. The USSR, on the other hand, wanted to keep Germany down. The other allies did not want to share the burden of the defense of Europe, at most the responsibility for the use of the weapons and, at least, participation in the decision-making process. France, which wanted to detach herself from the policy of the United States and regain a world role, added difficulties to the management of this course of action. Throughout the years of Johnson's term, the diverging policies proposed by his advisers put American foreign policy in deep water. The withdrawal of France from the organization, but not from the Alliance, gave Washington a chance to carry out its goal. The development of a clear-cut disarmament policy led the Johnson administration to the core of the matter. The Non-Proliferation Treaty, signed in 1968, solved in a business-like fashion the problem with the allies. The question of nuclear sharing faded away with the allies' acceptance of a deeper consultative role in nuclear affairs, the burden of the defense of Europe became more bearable through the offset agreement with the FRG, and a new doctrine, flexible response, put an end, at least formally, to the taboo of the nuclear age. Johnson's grand design proved to be different from Kennedy's, but, all things considered, it was more workable. The unexpected result was a real détente with the Soviet Union, which, we may say, was a merit of President Johnson's.
Abstract:
In this thesis we have presented two deep, overlapping surveys of the Lockman Hole field at 1.4 GHz and 345 MHz, taken with the Westerbork Synthesis Radio Telescope. We extracted a catalogue of ~6000 radio sources from the 1.4 GHz mosaic down to a flux limit of ~55 μJy, and a catalogue of 334 radio sources down to a flux limit of ~4 mJy from the inner 7 sq. degree region of the 345 MHz image. The extracted catalogues were used to derive the source number counts at 1.4 GHz and at 345 MHz. The source counts were found to be fully consistent with previous determinations; in particular, the 1.4 GHz source counts derived from the present sample provide one of the most statistically robust determinations in the flux range 0.1 < S < 1 mJy. During the commissioning program of the LOFAR telescope, the Lockman Hole field was observed at 58 MHz and 150 MHz. The 150 MHz LOFAR observation is particularly relevant, as it allowed us to obtain the first flux-calibrated, high-resolution LOFAR image of a deep field. From this image we extracted a preliminary source catalogue down to a flux limit of ~15 mJy (~10σ), which can be considered complete down to 20‒30 mJy. A spectral index study of the mJy sources in the Lockman Hole region was performed using the available catalogues (1.4 GHz, 345 MHz and 150 MHz) and a deep 610 MHz source catalogue available from the literature (Garn et al. 2008, 2010).
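For reference, the two-point spectral index underlying such a study is, under the convention $S \propto \nu^{\alpha}$ (sign conventions vary between papers):

$\alpha = \frac{\log(S_{\nu_1}/S_{\nu_2})}{\log(\nu_1/\nu_2)}$

For instance, a hypothetical source with S = 1 mJy at 1.4 GHz and S = 3 mJy at 345 MHz has α = log(1/3)/log(1400/345) ≈ -0.78, a value typical of optically thin synchrotron emission.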
Abstract:
This work began with a theoretical study of the main image classification techniques known in the literature, with particular attention to the most widespread image representation models, such as the Bag of Visual Words model, and to the main Machine Learning tools. Attention then turned to the analysis of what constitutes the state of the art in image classification, namely Deep Learning. To experiment with the advantages of this set of Image Classification methodologies, Torch7 was used: an open-source numerical computing framework, scriptable in the Lua language, with broad support for state-of-the-art Deep Learning methods. The image classification itself was implemented with Torch7 because this framework, thanks also to analysis work previously carried out by some of my colleagues, proved very effective at categorizing objects in images. The images on which the experimental tests were based belong to a dataset created ad hoc for the 3D vision system, with the aim of testing the system for visually impaired and blind users; it contains some of the main obstacles that a visually impaired person may encounter in daily life. In particular, the dataset consists of potential obstacles typical of a hypothetical outdoor usage scenario. Having thus established that Torch7 was the right support for classification, attention turned to the possibility of exploiting Stereo Vision to increase classification accuracy. The images in the dataset were in fact acquired with a Stereo Camera with FPGA-based processing, developed by the research group where this work was carried out. This made it possible to use 3D information, such as the depth level of each object in the image, to segment the objects of interest through an algorithm implemented in C++, excluding the rest of the scene. The last phase of the work was to test Torch7 on the image dataset, previously segmented with the segmentation algorithm just outlined, in order to recognize the type of obstacle detected by the system.
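The depth-based segmentation step can be sketched as follows (an illustrative Python sketch of the idea only; the thesis's actual implementation is in C++, and the function and parameter names here are hypothetical):

import numpy as np

def segment_by_depth(image, depth_map, near_m=0.5, far_m=3.0):
    """Keep only pixels whose depth falls in [near_m, far_m] metres.

    image:     H x W x 3 uint8 array from the stereo camera
    depth_map: H x W float array of per-pixel depth (e.g. from FPGA disparity)
    """
    # Mask of pixels belonging to the candidate obstacle's depth band.
    mask = (depth_map >= near_m) & (depth_map <= far_m)
    # Zero out the rest of the scene before feeding the classifier.
    out = np.zeros_like(image)
    out[mask] = image[mask]
    return out

The masked image, rather than the full frame, is then what a classifier such as the Torch7 network described above would receive, so the network sees each obstacle against a blank background.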
Abstract:
The aim of the work presented in this thesis was the study and simulation of bistatic radar experiments for planetary exploration missions. In particular, the work focused on the use and improvement of a software simulator previously developed by a consortium of companies and research institutions within a European Space Agency (ESA) study funded in 2008 and carried out between 2009 and 2010. The Spanish company GMV coordinated the study, in which research groups from the University of Rome "Sapienza" and the University of Bologna also took part. The work concentrated on identifying the cause of some inconsistencies in the outputs of the part of the simulator, written in MATLAB, devoted to estimating the properties of Titan's surface, in particular its dielectric constant and mean surface roughness, through a downlink bistatic radar experiment performed by the Cassini-Huygens probe in orbit around Titan. Bistatic radar experiments for the study of celestial bodies have been part of space exploration since the 1960s, although the equipment used and the mission phases during which these experiments were performed were never specifically designed for the purpose. Hence the need for a simulator to study various possible modes of bistatic radar experiment in different types of mission. In a first phase of approaching the simulator, the work focused on studying the documentation accompanying the code, so as to get a general idea of its structure and operation. A phase of detailed study followed, determining the purpose of every line of code used, as well as verifying in the literature the formulas and models used to determine the various parameters. In a second phase, the work involved direct intervention on the code, with a series of investigations aimed at assessing its consistency and the reliability of its results. Each investigation progressively relaxed the simplifying assumptions imposed on the model, so as to identify with greater confidence the part of the code responsible for the inaccuracy of the simulator outputs. The results obtained made it possible to correct some parts of the code and to pinpoint the main source of error in the outputs, narrowing down the object of study for future targeted investigations.
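For context, the physical link between the received echo and the quantities being estimated is standard bistatic-radar theory (given here as background, not necessarily the simulator's exact model): the power reflected from a smooth dielectric surface is set by the Fresnel reflection coefficients, e.g. for horizontal polarization at incidence angle θ

$R_H(\theta) = \frac{\cos\theta - \sqrt{\varepsilon - \sin^2\theta}}{\cos\theta + \sqrt{\varepsilon - \sin^2\theta}}$

which depends directly on the dielectric constant ε, while the RMS surface roughness broadens the echo spectrum around the specular point; this is why the two parameters can be estimated jointly from the same experiment.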
Abstract:
The spectral shape of the X-ray background requires the existence of a large number of moderately obscured AGN, in addition to heavily obscured AGN whose column density exceeds the Compton limit (N_H > 10^24 cm^-2). Because of their nature, these objects are difficult to observe, so a multi-band approach must be adopted to detect them. In this thesis work we studied 29 sources observed in the CDF-S and 10 in the CDF-N at 0.07
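As a one-line check of the Compton limit quoted above (a standard piece of reasoning, added for clarity): a column becomes optically thick to Compton scattering when $N_H \sigma_T \gtrsim 1$, i.e.

$N_H \gtrsim \sigma_T^{-1} = (6.65 \times 10^{-25}\,\mathrm{cm}^2)^{-1} \approx 1.5 \times 10^{24}\,\mathrm{cm}^{-2}$

which is why N_H > 10^24 cm^-2 is taken as the Compton-thick threshold.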
Abstract:
This thesis aims to analyze the LTP protocol (in particular its ION implementation) and to propose improvements useful when losses are high. In more detail, an introductory part motivates the ineffectiveness of TCP/IP in the interplanetary environment and introduces the DTN Bundle Protocol architecture (Ch. 1). The thesis continues with a description of the LTP protocol specification (Ch. 2), highlighting in particular how a bundle is encapsulated into an LTP block, how this block is then divided into many LTP segments, and how these are subsequently sent over UDP or an analogous protocol. An in-depth analysis of the penalties caused by the loss of LTP segments, both data and signalling, is then presented (Ch. 3). This analysis demonstrates how critical the effects of losses are, particularly for LTP signalling segments. While at low loss rates these effects have on average a minimal impact on the delivery time of an LTP block (and hence of the bundle it contains), because they occur rarely, at high loss rates they become the bottleneck of the LTP block delivery time. To this end, several modifications are proposed that improve LTP performance (Ch. 4) while remaining compatible with the RFC specification, so as to guarantee interoperability with the various implementations of the protocol. Chapter 5 then shows how the proposed modifications were implemented in ION 3.4.1. The final chapter (Ch. 6) presents numerical results from preliminary tests comparing the original version of the protocol with the modified versions containing the proposed improvements. The tests were very positive at high loss rates, thus confirming the validity of the analysis and of the improvements introduced.
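The block-to-segment encapsulation described above can be illustrated with a toy sketch (Python, illustrative only; this is not ION code, and real LTP segments carry SDNV-encoded headers with session ID, offset and length, for which a plain tuple stands in here):

def segment_block(block: bytes, mtu_payload: int = 1400):
    """Yield (offset, is_last, payload) triples covering the whole LTP block."""
    for offset in range(0, len(block), mtu_payload):
        payload = block[offset:offset + mtu_payload]
        # In red-part (reliable) LTP transmission, the last data segment
        # of the block also serves as a checkpoint requesting a report.
        is_last = offset + mtu_payload >= len(block)
        yield (offset, is_last, payload)

# A 100 kB bundle encapsulated in a single LTP block becomes 72 UDP-ready
# segments; losing the last one (a checkpoint) is the kind of signalling
# loss the analysis above identifies as a delivery-time bottleneck.
bundle = b"x" * 100_000
segments = list(segment_block(bundle))
print(len(segments), segments[-1][1])   # -> 72 True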