857 results for Facial Object Based Method
Abstract:
Industrial and domestic sewage effluents have been found to cause reproductive disorders in wild fish, often as a result of the interference of compounds in the effluents with the endocrine system. This thesis describes laboratory-based exposure experiments and a field survey that were conducted with juveniles of the three-spined stickleback, Gasterosteus aculeatus. This small teleost is a common fish in Swedish coastal waters and was chosen as an alternative to the non-native test species commonly used in endocrine disruption studies, which allows the comparison of field data with results from laboratory experiments. The aim of this thesis was to elucidate 1) whether genetic sex determination and differentiation can be disturbed by natural and synthetic steroid hormones and 2) whether this provides an endpoint for the detection of endocrine disruption, 3) to evaluate the applicability of specific estrogen- and androgen-inducible marker proteins in juvenile three-spined sticklebacks, 4) to investigate whether estrogenic and/or androgenic endocrine-disrupting activity can be detected in effluents from Swedish pulp mills and domestic sewage treatment plants and 5) whether such activity can be detected in coastal waters receiving these effluents. Laboratory exposure experiments found juvenile three-spined sticklebacks to be sensitive to water-borne estrogenic and androgenic steroid substances. Intersex – the co-occurrence of ovarian and testicular tissue in gonads – was induced by 17β-estradiol (E2), 17α-ethinylestradiol (EE2), 17α-methyltestosterone (MT) and 5α-dihydrotestosterone (DHT). The first two weeks after hatching were the phase of highest sensitivity. MT acted ambivalently, simultaneously eliciting masculinizing and feminizing effects. When a DNA-based method for genetic sex identification was applied, it was found that application of MT only during the first two weeks after hatching caused complete and apparently irreversible development of testes in genetic females. E2 caused gonad type reversal from male to female. E2 and EE2 induced vitellogenin – the estrogen-responsive yolk precursor protein – while DHT and MT induced spiggin – the androgen-responsive glue protein of the stickleback. None of the effluents from two pulp mills and two domestic sewage treatment plants showed any estrogenic or androgenic activity. Juvenile three-spined sticklebacks were collected during four consecutive summers at the Swedish Baltic Sea coast in waters receiving effluents from pulp mills and a domestic sewage treatment plant, as well as at remote reference sites. No signs of endocrine disruption were observed at any site when studying gonad development or marker proteins, except for a deviation of sex ratios at one reference site. The three-spined stickleback – with a focus on the juvenile stage – was found to be a sensitive species suitable for the study of estrogenic and androgenic endocrine disruption.
Abstract:
We present an energy-based approach to estimate a dense disparity map from a pair of weakly calibrated stereoscopic images while preserving its discontinuities resulting from image boundaries. We first derive a simplified expression for the disparity that allows us to estimate it from a stereo pair of images using an energy minimization approach. We assume that the epipolar geometry is known and include this information in the energy model. Discontinuities are preserved by means of a regularization term based on the Nagel-Enkelmann operator. We investigate the associated Euler-Lagrange equation of the energy functional and approach the solution of the underlying partial differential equation (PDE) using a gradient descent method. The resulting parabolic problem has a unique solution. In order to reduce the risk of being trapped in irrelevant local minima during the iterations, we use a focusing strategy based on a linear scale-space. Experimental results on both synthetic and real images are presented to illustrate the capabilities of this PDE- and scale-space-based method.
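As a rough illustration of the gradient-descent idea, the following sketch minimizes a simplified disparity energy: a sum-of-squared-differences data term plus a quadratic smoothness term standing in for the Nagel-Enkelmann operator, without the scale-space focusing, on a rectified image pair. The function name, parameters and discretization are hypothetical, not the formulation of the abstract.

```python
import numpy as np

def estimate_disparity(I_left, I_right, lam=1.0, tau=0.05, n_iter=500):
    """Gradient descent on a simplified disparity energy
    E(d) = sum (I_left(x, y) - I_right(x - d, y))^2 + lam * |grad d|^2,
    assuming rectified images so epipolar lines are horizontal rows."""
    h, w = I_left.shape
    d = np.zeros((h, w))                                   # initial disparity field
    ys, xs = np.mgrid[0:h, 0:w]
    Ix_right = np.gradient(I_right, axis=1)                # horizontal image gradient

    for _ in range(n_iter):
        # Warp the right image by the current disparity (nearest-neighbour sampling).
        xw = np.clip(np.rint(xs - d).astype(int), 0, w - 1)
        I_warp = I_right[ys, xw]
        Ix_warp = Ix_right[ys, xw]
        residual = I_left - I_warp                         # data-term residual
        # Laplacian of d: Euler-Lagrange term of the quadratic smoothness energy.
        lap = (np.roll(d, 1, 0) + np.roll(d, -1, 0) +
               np.roll(d, 1, 1) + np.roll(d, -1, 1) - 4.0 * d)
        # One explicit descent step on the energy gradient.
        d += tau * (lam * lap - residual * Ix_warp)
    return d
```

In an approach of the kind described in the abstract, the plain Laplacian would be replaced by the anisotropic Nagel-Enkelmann term, which reduces smoothing across strong image gradients and thus preserves disparity discontinuities.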
Abstract:
The objective of this thesis is the use of aerial photogrammetric and remotely sensed images for the qualitative and quantitative characterization of forest ecosystems and of their evolution. The topics addressed concern, on the one hand, the photogrammetric aspect, through the recovery, digitization and processing of historical aerial images from various periods, and, on the other hand, the use of remote sensing for land-cover classification. Chapter 1 gives a brief introduction to the development of new survey technologies, with an in-depth look at forestry applications; the second chapter deals with the acquisition of remotely sensed and photogrammetric data, with a short description of their main characteristics and quantities; the third chapter covers the image processing and classification procedures used to extract the relevant information. The following three chapters present three applications of photogrammetry and remote sensing to the study of forest ecosystems. The first case (chapter 4) concerns the Prado-Cusna mountain group, for which a multitemporal analysis of the evolution of the altitudinal tree line over the last fifty years was carried out. The procedure for recovering historical aerial photographs was examined and can be defined as a sequence of operations: starting with the digitization of the frames, continuing with the determination of known ground control points for image orientation, and ending with orthorectification and mosaicking with the aid of a Digital Terrain Model (DTM). This allowed these data to be compared with more recent digital images in order to identify any changes that occurred over the intervening period. In the second case (chapter 5), a classification procedure was defined for the Monte Giovo area to extract the vegetation cover and to update the existing cartography, in this case the vegetation map. In particular, the aim was to classify the vegetation above the tree line, dominated by bilberry heaths and grasslands, mainly secondary grasslands of mat-grass and brachypodium. In some areas there are also plant communities colonizing stabilized debris deposits and sandstone cliffs. For this purpose, in addition to aerial images (the IT2000 flight), ASTER satellite images and other ancillary data (DTM and its derivatives) were used, and an object-based land-cover classification system was applied. The best segmentation parameters and the best number of samples for the classification were investigated. On the one hand, a supervised classification of the vegetation was carried out starting from a few reference samples; on the other hand, the method was tested with a view to defining a procedure for automatically updating the existing cartography. In the third case (chapter 6), again in the Monte Giovo area, the timberline extracted by object-based segmentation was compared with the result of dedicated ground GPS surveys.
The goal was to define the altitudinal limit of the forest and to identify groups of isolated trees above it by means of object-based segmentation and classification of digital aerial orthophotos, with field verification of the results in selected sample areas through the creation of GPS profiles of the forest limit and the determination of the coordinates of the isolated tree groups. The final results of the work showed that modern image-analysis techniques are now mature enough to achieve the objectives set in the three applications considered, although careful data validation and operator intervention at several stages of the process remain necessary in every case. In particular, the image segmentation operations used to extract significant features showed great potential in all three cases. Object-based software simplifies the use of classification results in a GIS environment, offering, for example, the possibility of exporting the classified objects in vector format. It also makes it possible to use several sources of information simultaneously in a single environment, such as aerial photographs, satellite images, DTMs and their derivatives. The automatic procedures for extracting the timberline and the isolated tree groups and for classifying land cover are under continuous development to improve their performance; at present they should not be regarded as a stand-alone optimal solution but as a tool to set up and simplify the work of the photointerpretation specialist.
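As an illustration of the object-based workflow described above (segment the image into objects first, then classify whole objects from a few labelled samples), here is a minimal sketch using SLIC superpixels and a random forest as stand-ins for the object-based software actually used in the thesis; the function name, parameters and labelling scheme are hypothetical.

```python
import numpy as np
from skimage.segmentation import slic
from sklearn.ensemble import RandomForestClassifier

def object_based_classify(image, train_mask, n_segments=500, compactness=10.0):
    """Object-based classification sketch:
    1) segment the image into homogeneous objects (SLIC superpixels),
    2) describe each object by its mean band values,
    3) train a classifier on the objects that overlap labelled sample pixels,
    4) predict a class for every object and map it back to pixels.
    `image` is (H, W, bands); `train_mask` holds class labels > 0 on sample
    pixels and 0 elsewhere."""
    segments = slic(image, n_segments=n_segments, compactness=compactness,
                    start_label=0)
    n_obj = segments.max() + 1
    counts = np.maximum(np.bincount(segments.ravel(), minlength=n_obj), 1)

    # Mean spectral value of each band inside each object.
    feats = np.zeros((n_obj, image.shape[2]))
    for b in range(image.shape[2]):
        feats[:, b] = np.bincount(segments.ravel(),
                                  weights=image[..., b].ravel(),
                                  minlength=n_obj) / counts

    # Label an object with the majority class of its labelled pixels, if any.
    labels = np.zeros(n_obj, dtype=int)
    for obj in np.unique(segments[train_mask > 0]):
        vals = train_mask[(segments == obj) & (train_mask > 0)]
        labels[obj] = np.bincount(vals).argmax()

    train_idx = labels > 0
    clf = RandomForestClassifier(n_estimators=100)
    clf.fit(feats[train_idx], labels[train_idx])
    predicted = clf.predict(feats)             # one class per object
    return predicted[segments]                 # back to a per-pixel class map
```

Exporting the classified objects as polygons, as mentioned in the abstract, would correspond to vectorizing `segments` together with the predicted class of each object.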
Abstract:
Despite new methods and combined strategies, conventional cancer chemotherapy still lacks specificity and induces drug resistance. Gene therapy offers the potential for success in the clinical treatment of cancer, which can be achieved by replacing mutated tumour suppressor genes, inhibiting gene transcription, introducing new genes encoding therapeutic products, or specifically silencing any given target gene. Concerning gene silencing, attention has recently shifted to the RNA interference (RNAi) phenomenon. Gene silencing mediated by the RNAi machinery is based on short RNA molecules, small interfering RNAs (siRNAs) and microRNAs (miRNAs), which are fully or partially homologous, respectively, to the mRNA of the genes being silenced. On one hand, synthetic siRNAs are an important research tool for understanding the function of a gene, and the prospect of using siRNAs as potent and specific inhibitors of any target gene provides a new therapeutic approach for many untreatable diseases, particularly cancer. On the other hand, the discovery of the gene regulatory pathways mediated by miRNAs offered the research community important new perspectives for understanding the physiological and, above all, the pathological mechanisms underlying gene regulation. Indeed, changes in miRNA expression have been identified in several types of neoplasia, and it has also been proposed that the overexpression of genes in cancer cells may be due to the disruption of a control network in which relevant miRNAs are implicated. For these reasons, I focused my research on a possible link between RNAi and the enzyme cyclooxygenase-2 (COX-2) in the field of colorectal cancer (CRC), since it has been established that the adenoma-adenocarcinoma transition and the progression of CRC depend on aberrant constitutive expression of the COX-2 gene. In fact, overexpressed COX-2 is involved in the block of apoptosis and the stimulation of tumour angiogenesis, and promotes cell invasion, tumour growth and metastasis. On the basis of data reported in the literature, the first aim of my research was to develop an innovative and effective tool, based on the RNAi mechanism, able to silence COX-2 expression strongly and specifically in human colorectal cancer cell lines. In this study, I first show that an siRNA sequence directed against COX-2 mRNA (siCOX-2) potently downregulated COX-2 gene expression in human umbilical vein endothelial cells (HUVEC) and inhibited PMA-induced angiogenesis in vitro in a specific, non-toxic manner. Moreover, I found that the insertion of a specific cassette carrying the anti-COX-2 shRNA sequence (shCOX-2, the precursor of the siCOX-2 tested previously) into a viral vector (pSUPER.retro) greatly increased silencing potency in a colon cancer cell line (HT-29) without activating any interferon response. Phenotypically, COX-2-deficient HT-29 cells showed a significant impairment of their in vitro malignant behaviour. Thus, the results reported here indicate an easy-to-use, powerful and highly selective virus-based method to knock down the COX-2 gene in colon cancer cells in a stable and long-lasting manner. Furthermore, they open up the possibility of an in vivo application of this anti-COX-2 retroviral vector as a therapeutic agent for human cancers overexpressing COX-2. In order to improve tumour selectivity, the pSUPER.retro vector was modified in its shCOX-2 expression cassette.
The aim was to obtain strong, specific transcription of shCOX-2, followed by COX-2 silencing mediated by siCOX-2, only in cancer cells. For this reason, the H1 promoter in the basic pSUPER.retro vector [pS(H1)] was substituted with the human COX-2 promoter [pS(COX2)] and with a promoter containing repeated copies of the TCF binding element (TBE) [pS(TBE)]. These promoters were chosen because they are particularly activated in colon cancer cells. COX-2 was effectively silenced in HT-29 and HCA-7 colon cancer cells by using the enhanced pS(COX2) and pS(TBE) vectors. In particular, higher siCOX-2 production followed by stronger inhibition of the COX-2 gene was achieved using the pS(TBE) vector, which represents not only the most effective but also the most specific system to downregulate COX-2 in colon cancer cells. Because of the many limitations that a retroviral therapy could have in a possible in vivo treatment of CRC, the next goal was to make the enhanced RNAi-mediated COX-2 silencing more suitable for this kind of application. Xiang et al. (2006) demonstrated that it is possible to induce RNAi in mammalian cells after infection with engineered E. coli strains expressing the Inv and HlyA genes, which encode two bacterial factors needed for the successful transfer of shRNA into mammalian cells. This system, called "trans-kingdom" RNAi (tkRNAi), could represent an optimal approach for the treatment of colorectal cancer, since E. coli is normally resident in the human intestinal flora and could easily be delivered to the tumour tissue. For this reason, I tested the improved COX-2 silencing mediated by the pS(COX2) and pS(TBE) vectors using the tkRNAi system. Results obtained in HT-29 and HCA-7 cell lines were in close agreement with data previously collected after transfection of the pS(COX2) and pS(TBE) vectors into the same cell lines. These findings suggest that the tkRNAi system for COX-2 silencing, in particular mediated by the pS(TBE) vector, could represent a promising tool for the treatment of colorectal cancer. Alongside the studies aimed at setting up an RNAi-mediated therapeutic strategy, I set out to advance the understanding of new molecular bases of human colorectal cancer. In particular, it is known that components of the miRNA/RNAi pathway may be altered during the progressive development of colorectal cancer, and it has already been demonstrated that some miRNAs act as tumour suppressors or oncomiRs in colon cancer. Thus, my hypothesis was that overexpressed COX-2 protein in colon cancer could be the result of decreased levels of one or more tumour suppressor miRNAs. In this thesis, I show a clear inverse correlation between COX-2 expression and human miR-101(1) levels in colon cancer cell lines, tissues and metastases. I also demonstrate that the in vitro modulation of miR-101(1) expression in colon cancer cell lines leads to significant variations in COX-2 expression, and that this phenomenon is based on a direct interaction between miR-101(1) and COX-2 mRNA. Moreover, I started to investigate miR-101(1) regulation in the hypoxic environment, since adaptation to hypoxia is critical for tumour cell growth and survival and it is known that COX-2 can be induced directly by hypoxia-inducible factor 1 (HIF-1).
Surprisingly, I observed that COX-2 overexpression induced by hypoxia is always coupled with a significant decrease of miR-101(1) levels in colon cancer cell lines, suggesting that miR-101(1) regulation could be involved in the adaptation of cancer cells to the hypoxic environment that strongly characterizes CRC tissues.
Abstract:
The continuous increase of genome sequencing projects has produced a huge amount of data in the last 10 years: currently more than 600 prokaryotic and 80 eukaryotic genomes are fully sequenced and publicly available. However, the sequencing process alone determines only raw nucleotide sequences. This is just the first step of the genome annotation process, which deals with assigning biological information to each sequence. The annotation process takes place at every level of the biological information processing mechanism, from DNA to protein, and cannot be accomplished only by in vitro analysis procedures, which become extremely expensive and time-consuming when applied at such a large scale. Thus, in silico methods need to be used to accomplish the task. The aim of this work was the implementation of predictive computational methods to allow fast, reliable and automated annotation of genomes and proteins starting from amino acid sequences. The first part of the work focused on the implementation of a new machine-learning-based method for the prediction of the subcellular localization of soluble eukaryotic proteins. The method, called BaCelLo, was developed in 2006. Its main peculiarity is that it is independent of biases present in the training dataset, which cause the over-prediction of the most represented examples in all the other predictors developed so far. This important result was achieved by a modification I made to the standard Support Vector Machine (SVM) algorithm, resulting in the so-called Balanced SVM. BaCelLo is able to predict the most important subcellular localizations in eukaryotic cells, and three kingdom-specific predictors were implemented. In two extensive comparisons, carried out in 2006 and 2008, BaCelLo was found to outperform all the currently available state-of-the-art methods for this prediction task. BaCelLo was subsequently used to completely annotate 5 eukaryotic genomes, by integrating it into a pipeline of predictors developed at the Bologna Biocomputing group by Dr. Pier Luigi Martelli and Dr. Piero Fariselli. An online database, called eSLDB, was developed by integrating, for each amino acid sequence extracted from the genomes, the predicted subcellular localization merged with experimental and similarity-based annotations. In the second part of the work a new machine-learning-based method was implemented for the prediction of GPI-anchored proteins. The method is able to efficiently predict from the raw amino acid sequence both the presence of the GPI anchor (by means of an SVM) and the position in the sequence of the post-translational modification event, the so-called ω-site (by means of a Hidden Markov Model, HMM). The method, called GPIPE, was found to greatly improve the prediction performance for GPI-anchored proteins over all previously developed methods. GPIPE was able to predict up to 88% of the experimentally annotated GPI-anchored proteins while maintaining a false positive prediction rate as low as 0.1%. GPIPE was used to completely annotate 81 eukaryotic genomes, and more than 15000 putative GPI-anchored proteins were predicted, 561 of which are found in H. sapiens. On average, 1% of a proteome is predicted to be GPI-anchored. A statistical analysis of the composition of the regions surrounding the ω-site allowed the definition of specific amino acid abundances in the different regions considered.
Furthermore, the hypothesis proposed in the literature that compositional biases exist among the four major eukaryotic kingdoms was tested and rejected. All the developed predictors and databases are freely available at: BaCelLo, http://gpcr.biocomp.unibo.it/bacello; eSLDB, http://gpcr.biocomp.unibo.it/esldb; GPIPE, http://gpcr.biocomp.unibo.it/gpipe
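The abstract does not give the details of the Balanced SVM, but the balancing idea it describes (keeping the most represented localization class from dominating the decision) can be pictured with per-class error weighting in an off-the-shelf SVM. The sketch below is only an analogy under that assumption, with toy data standing in for protein feature vectors and localization labels.

```python
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_classification

# Toy, imbalanced dataset standing in for protein feature vectors
# (e.g. amino acid composition); labels stand in for localization classes.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=10,
                           n_classes=3, n_clusters_per_class=1,
                           weights=[0.7, 0.2, 0.1], random_state=0)

# Standard SVM: the decision tends to favour the over-represented class.
plain = SVC(kernel="rbf", C=1.0)

# Class-weighted SVM: errors on rare classes are penalised more heavily,
# counteracting the bias of the training set.
balanced = SVC(kernel="rbf", C=1.0, class_weight="balanced")

for name, clf in [("plain", plain), ("class-weighted", balanced)]:
    scores = cross_val_score(clf, X, y, cv=5, scoring="balanced_accuracy")
    print(f"{name:>14s} SVM  balanced accuracy = {scores.mean():.3f}")
```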
Abstract:
Motivation: A current issue of great interest, from both a theoretical and an applicative perspective, is the analysis of biological sequences to disclose the information they encode. The development of new genome sequencing technologies in recent years has opened new fundamental problems, since huge amounts of biological data still await interpretation. Indeed, sequencing is only the first step of the genome annotation process, which consists in assigning biological information to each sequence. Hence, given the large amount of available data, in silico methods have become useful and necessary for extracting relevant information from sequences. The availability of data from genome projects gave rise to new strategies for tackling the basic problems of computational biology, such as the determination of the three-dimensional structures of proteins, their biological function and their reciprocal interactions.
Results: The aim of this work was the implementation of predictive methods that allow the extraction of information on the properties of genomes and proteins starting from the nucleotide and amino acid sequences, taking advantage of the information provided by the comparison of genome sequences from different species. In the first part of the work a comprehensive large-scale genome comparison of 599 organisms is described. 2.6 million sequences from 551 prokaryotic and 48 eukaryotic genomes were aligned and clustered on the basis of their sequence identity. This procedure led to the identification of classes of proteins that are peculiar to the different groups of organisms. Moreover, the adopted similarity threshold produced clusters that are homogeneous from the structural point of view and that can be used for the structural annotation of uncharacterized sequences. The second part of the work focuses on the characterization of thermostable proteins and on the development of tools able to predict the thermostability of a protein starting from its sequence. By means of Principal Component Analysis, the codon composition of a non-redundant database comprising 116 prokaryotic genomes was analyzed, and it was shown that a cross-genomic approach allows the extraction of common determinants of thermostability at the genome level, leading to an overall accuracy in discriminating thermophilic coding sequences of 95%. This result outperforms those obtained in previous studies. Moreover, we investigated the effect of multiple mutations on protein thermostability. This issue is of great importance in the field of protein engineering, since thermostable proteins are generally more suitable than their mesostable counterparts for technological applications. A Support Vector Machine-based method was trained to predict whether a set of mutations can enhance the thermostability of a given protein sequence. The developed predictor achieves 88% accuracy.
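The codon-composition analysis can be pictured as follows: each coding sequence becomes a 64-dimensional vector of codon frequencies, the main axes of variation are found with PCA, and a linear classifier separates thermophilic from mesophilic sequences in the reduced space. The sketch below follows this outline with made-up sequences and labels; the sequences, labels and pipeline settings are illustrative assumptions, not the procedure or data of the thesis.

```python
import numpy as np
from itertools import product
from sklearn.decomposition import PCA
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

CODONS = ["".join(c) for c in product("ACGT", repeat=3)]   # the 64 codons

def codon_frequencies(cds):
    """Relative codon frequencies of a coding sequence (length multiple of 3)."""
    counts = dict.fromkeys(CODONS, 0)
    for i in range(0, len(cds) - 2, 3):
        codon = cds[i:i + 3]
        if codon in counts:
            counts[codon] += 1
    total = max(sum(counts.values()), 1)
    return np.array([counts[c] / total for c in CODONS])

# Hypothetical training data: coding sequences labelled 1 (from a thermophilic
# organism) or 0 (mesophilic). Real data would come from the 116 genomes.
sequences = ["ATGGCGAAA" * 30, "ATGTTTCTT" * 30, "ATGGCCAAG" * 30, "ATGTTACTA" * 30]
labels = [1, 0, 1, 0]

X = np.vstack([codon_frequencies(s) for s in sequences])

# Project the 64-dimensional codon space onto its main axes of variation,
# then separate the two classes with a linear SVM.
model = make_pipeline(StandardScaler(), PCA(n_components=2), LinearSVC())
model.fit(X, labels)
print(model.predict(X))
```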
Abstract:
The aim of this thesis is to investigate contemporary fashion exhibitions as textual machines. If we consider the current fashion design landscape as characterized by constitutive complexity and by the rapid changes that run through it, and if we start from the assumption that the spectrum of meanings that a style of dress and individual garments can take on is extremely elusive, it is probably more productive to ask how fashion works and what its mechanisms of meaning production are. The analysis of fashion exhibitions thus proves a useful way to address the question, since exhibition displays put these mechanisms into discourse and represent three-dimensional reflections on specific themes. The fashion exhibition stages exceptional cases that magnify typical aspects of how the fashion system works, whether we look at fashion from the point of view of production or from the point of view of reception. The investigation identified the exhibitions curated by Diana Vreeland at the Costume Institute of the Metropolitan Museum in New York as the reference model for contemporary fashion exhibitions. Vreeland, who from 1936 to 1971 was first fashion editor and then editor-in-chief of "Harper's Bazaar" and of "Vogue USA" respectively, marked a fundamental turning point when, in 1972, she decided to accept the role of Special Consultant at the Costume Institute. It is by now a widely shared opinion among fashion critics and scholars that the exhibitions she organized over more than a decade changed the way clothes are displayed in museums. Alongside Vreeland's work we then considered a recent fashion exhibition that attracted much attention: Spectres. When Fashion Turns Back, curated by Judith Clark (2004). In investigating the relationships between contemporary fashion design and fashion history, this exhibition used display machines inhabited by the clothes in order to "construct spatial ideas" and to stage non-obvious connections between past and present. This exhibition seemed central to us in highlighting the curator's semiotic gaze, concerned with the overall project of the exhibition design and not simply with the study of the clothes on display. In this way we outlined two positions: one represented by an object-based approach to the analysis of dress, directly linked to the tradition of museum conservators; the other represented by what can by now be considered a discipline in its own right, fashion curation, which attaches great importance to all the aspects that contribute to forming the exhibition design of a show. A comparative study of some of the most important recently organized fashion exhibitions allowed us to identify recurring elements and specific features of these textual devices. Drawing on the contribution of Manar Hammad (2006), we considered the different levels of a fashion exhibition: the clothes and their relationship with the mannequins; the exhibition design and the exhibition space; the route and the sequence, both from the point of view of the strategy of textual construction and deployment and from the point of view of the model visitor. We thus identified four groups of fashion exhibitions: museum-archival exhibitions; monographic retrospectives; exhibitions linked to the figure of a curator; and mixed forms that cut across these first three models.
This systematization highlighted that one of the central dimensions of contemporary fashion exhibitions is precisely the question of curatorship, which we can read in terms of authorship and enunciation. The underlying value horizons also became clearer: the dimension of historical accuracy is associated with exhibitions that privilege the level of the objects (the clothes) and a purely visual involvement of the visitor; the dimension of visual pleasure, instead, can be associated with an exhibition model that assigns a central role to the exhibition design and "asks" the visitor to play a fully interactive role. The most accomplished curatorial approach seems to us to be the one that tries to reconcile these two dimensions.
Abstract:
Motor movements are monitored for accuracy via visual feedback and corrected if necessary. By means of a technical intervention, such as prism glasses, a difference between the optically perceived and the haptically experienced environment can be created in order to test the capabilities of the visuomotor system. In this work, a computer-based method was developed to simulate such a visuomotor difference. The subjects perform a ballistic movement with arm and hand with the intention of hitting a given target. The hit points are recorded by a computer with the aid of a digitizing tablet. The visual environment presented to the subjects is shown on a monitor. The subjects view the monitor image, a cross on a white background, via a mirror mounted at an appropriate angle between the monitor and the digitizing tablet, so that the target image is projected onto the tablet. The subjects thus perceive the target cross as lying on the digitizing tablet. When a subject performs an aiming movement, the recorded coordinates can be displayed as points on the monitor, and through this point display the subject receives visual feedback on the movement. The working area of the digitizing tablet can be configured via the computer, so that motor shifts can be simulated. The various possibilities of this set-up were partly tested in preliminary experiments in order to align the research questions, the methodology and the technical equipment. The main experiments focused in particular on the temporal delay of the visual feedback and on intermanual transfer. The following results were obtained:
● The subjects adapt to a spatially shifted environment. The course of adaptation can be described and represented mathematically by an exponential function.
● This course is independent of the type of visual feedback: observing the hand movement during adaptation shows the same sequence of hits as a simple point projection indicating where the movement landed.
● The exponential course of the adaptation is independent of the tested temporal delays of the visual feedback.
● The after-effect results show that with increasing temporal delay of the visual feedback during the adaptation phase, the size of the after-effect decreases, i.e. the lasting adjustment to a visuomotor difference declines.
● The after-effects show individual characteristics: subjects adapt to a simulated shift to different degrees. A comparison with the visuomotor challenges in the subjects' previous lives suggested that the human visuomotor system is trainable and adapts to perceived differences to different extents depending on the state of training.
● Intermanual transfer could be demonstrated under various conditions.
● A markedly stronger after-effect is observed when the perceived visuomotor difference between target and hit point is projected into one brain hemisphere and the after-effect is produced with the hand controlled by that hemisphere.
Intermanual transfer is thus favoured when the visual projection of the observed error reaches the hemisphere that is motorically passive during the adaptation phase.
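The exponential description of the adaptation course mentioned above can be made concrete by fitting y(t) = A·exp(-k·t) + c to the trial-by-trial aiming error. The sketch below does this on simulated data; the amplitude, rate and noise level are placeholders, not measured values from the experiments.

```python
import numpy as np
from scipy.optimize import curve_fit

def adaptation_curve(trial, amplitude, rate, offset):
    """Exponential decay of the aiming error over successive trials."""
    return amplitude * np.exp(-rate * trial) + offset

# Hypothetical aiming errors (cm) of one subject over 30 pointing trials
# under a simulated visuomotor shift; real data would come from the tablet.
trials = np.arange(30)
rng = np.random.default_rng(1)
errors = adaptation_curve(trials, amplitude=4.0, rate=0.15, offset=0.3)
errors += rng.normal(scale=0.3, size=trials.size)       # measurement noise

params, _ = curve_fit(adaptation_curve, trials, errors, p0=(3.0, 0.1, 0.0))
amplitude, rate, offset = params
print(f"initial error ~ {amplitude + offset:.2f} cm, "
      f"time constant ~ {1 / rate:.1f} trials, residual offset ~ {offset:.2f} cm")
```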
Abstract:
Tracking activities during daily life and assessing movement parameters are essential for complementing the information gathered in confined environments, such as clinical and physical-activity laboratories, for the assessment of mobility. Inertial measurement units (IMUs) are used to monitor human movement for prolonged periods of time and without space limitations. The focus of this study was to provide a robust, low-cost and unobtrusive solution for evaluating human motion using a single IMU. The first part of the study focused on the monitoring and classification of daily-life activities. A simple method that analyses variations in the signal was developed to distinguish two types of activity intervals: active and inactive. A neural classifier was used to classify the active intervals; the angle with respect to gravity was used to classify the inactive intervals. The second part of the study focused on the extraction of gait parameters using a single IMU attached to the pelvis. Two complementary methods were proposed for gait parameter estimation. The first was a wavelet-based method developed for the estimation of gait events. The second was developed for estimating step and stride length during level walking, using the estimates of the first. A special integration algorithm was extended to operate on each gait cycle using a specially designed Kalman filter. The developed methods were also applied in various scenarios. The activity monitoring method was used in a PRIN'07 project to assess the mobility levels of individuals living in an urban area. The same method was applied to volleyball players to analyze their fitness levels by monitoring their daily-life activities. The methods proposed in these studies provide a simple, unobtrusive and low-cost solution for monitoring and assessing activities outside of controlled environments.
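A minimal sketch of the active/inactive segmentation step described above: the variance of the acceleration magnitude in fixed windows separates movement from rest, and for inactive windows the orientation with respect to gravity hints at posture. The window length, threshold and axis convention are assumptions for illustration, not the thesis' actual settings.

```python
import numpy as np

def active_intervals(acc, fs=100.0, window_s=1.0, threshold=0.05):
    """Split a tri-axial accelerometer recording into active/inactive windows.
    A window counts as active when the variance of the acceleration magnitude
    exceeds a threshold; quiet (postural) windows fall below it.
    acc: (n_samples, 3) array in g; fs: sampling frequency in Hz."""
    magnitude = np.linalg.norm(acc, axis=1)
    win = int(window_s * fs)
    n_win = len(magnitude) // win
    flags = np.empty(n_win, dtype=bool)
    for k in range(n_win):
        flags[k] = magnitude[k * win:(k + 1) * win].var() > threshold
    return flags   # True = active window, False = inactive window

def inclination_deg(acc_window):
    """Angle between the mean acceleration vector and the vertical axis,
    assuming the third sensor axis points upward at rest."""
    g = acc_window.mean(axis=0)
    return np.degrees(np.arccos(g[2] / np.linalg.norm(g)))
```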
Abstract:
In computer systems, specifically in multithreaded, parallel and distributed systems, a deadlock is both a very subtle problem, because it is difficult to prevent while coding the system, and a very dangerous one: a deadlocked system can easily become completely stuck, with consequences ranging from simple annoyances to life-threatening circumstances, with the non-negligible scenario of economic losses in between. How, then, can this problem be avoided? Many possible solutions have been studied, proposed and implemented. In this thesis we focus on the detection of deadlocks with a static program analysis technique, i.e. an analysis performed without actually executing the program. To begin, we briefly present the static Deadlock Analysis Model developed for coreABS−− in chapter 1, and then detail the class-based coreABS−− language in chapter 2. In chapter 3 we lay the foundation for further discussion by analyzing the differences between coreABS−− and ASP, an untyped object-based calculus, so as to show how the Deadlock Analysis could be extended to object-based languages in general. In this regard, we make some hypotheses explicit in chapter 4, first by presenting a possible, unproven type system for ASP, modeled after the Deadlock Analysis Model developed for coreABS−−. We then conclude the discussion by presenting a simpler hypothesis, which may allow us to circumvent the difficulties that arise from the definition of the "ad-hoc" type system discussed in the preceding chapter.
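For readers unfamiliar with the problem, the circular-wait situation that such a static analysis aims to detect can be reproduced in a few lines. The sketch below is in Python rather than coreABS−− or ASP, purely as an illustration of the pattern: two threads each hold one lock and wait for the other's.

```python
import threading
import time

lock_a, lock_b = threading.Lock(), threading.Lock()

def worker_1():
    with lock_a:                      # holds A ...
        time.sleep(0.1)
        with lock_b:                  # ... and waits for B
            print("worker 1 done")

def worker_2():
    with lock_b:                      # holds B ...
        time.sleep(0.1)
        with lock_a:                  # ... and waits for A -> circular wait
            print("worker 2 done")

# Daemon threads so the script terminates even when the deadlock occurs.
t1 = threading.Thread(target=worker_1, daemon=True)
t2 = threading.Thread(target=worker_2, daemon=True)
t1.start(); t2.start()
t1.join(timeout=2); t2.join(timeout=2)
print("deadlocked" if t1.is_alive() and t2.is_alive() else "finished")
```

A static deadlock analysis of the kind described in the abstract aims to flag this acquire-in-opposite-order pattern from the program text alone, without running the program.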
Abstract:
In spite of the higher toxicity of oxygen-containing polycyclic aromatic hydrocarbons (OPAHs) compared with their parent PAHs, there are only a few studies of the concentrations, composition patterns, sources and fate of OPAHs in soil, the presumably major environmental sink of OPAHs. This is related to the fact that few methods are available to measure OPAHs together with PAHs in soil. The objectives of my thesis were to (i) develop a GC/MS-based method to measure OPAHs and their parent PAHs in soils of different properties and pollution levels, (ii) apply the method to soils from Uzbekistan and Slovakia and (iii) investigate the fate of OPAHs, particularly their vertical transport in soil. I optimized and fully evaluated an analytical method based on pressurized liquid extraction, silica gel column chromatographic fractionation of the extracted compounds into alkyl-/parent-PAH and OPAH fractions, silylation of hydroxyl-/carboxyl-OPAHs with N,O-bis(trimethylsilyl)trifluoroacetamide and GC/MS quantification of the target compounds. The method was targeted at 34 alkyl-/parent-PAHs, 7 carbonyl-OPAHs and 19 hydroxyl-/carboxyl-OPAHs. I applied the method to 11 soils each from the Angren industrial region (which hosts a coal mine, power plant, rubber factory and gold refinery) in Uzbekistan and from the city of Bratislava, the densely populated capital of Slovakia. Recoveries of five carbonyl-OPAHs in spike experiments ranged between 78-97% (relative standard deviation, RSD, 5-12%), while 1,2-acenaphthenequinone and 1,4-naphthoquinone had recoveries between 34-44% (RSD, 19-28%). Five spiked hydroxyl-/carboxyl-OPAHs showed recoveries between 36-70% (RSD, 13-46%), while others showed recoveries <10% or were completely lost. With the optimized method, I determined, on average, 103% of the alkyl-/parent-PAH concentrations in a certified reference material. The ∑OPAH concentrations in surface soil ranged from 62 to 2692 ng g-1 and those of the ∑alkyl-/parent-PAHs from 842 to 244870 ng g-1. The carbonyl-OPAHs had higher concentrations than the hydroxyl-/carboxyl-OPAHs. The most abundant carbonyl-OPAHs were consistently 9-fluorenone (9-FLO), 9,10-anthraquinone (9,10-ANQ), 1-indanone (1-INDA) and benzo[a]anthracene-7,12-dione (7,12-B(A)A), and the most abundant hydroxyl-/carboxyl-OPAH was 2-hydroxybenzaldehyde. The concentrations of carbonyl-OPAHs were frequently higher than those of their parent PAHs (e.g., 9-FLO/fluorene >100 near a rubber factory in Angren). The concentrations of OPAHs, like those of their alkyl-/parent-PAHs, were higher at locations closer to point sources, and the OPAH and PAH concentrations were correlated, suggesting that both compound classes originated from the same sources. Only for 1-INDA and 2-biphenylcarboxaldehyde did sources other than combustion seem to dominate. Like those of the alkyl-/parent-PAHs, OPAH concentrations were higher in topsoils than in subsoils. Evidence of a higher mobility of OPAHs than of their parent PAHs was provided by greater subsoil:topsoil concentration ratios of carbonyl-OPAHs (0.41-0.82) than of their parent PAHs (0.41-0.63) in Uzbekistan. This was further backed by the consistently higher contribution of the more soluble 9-FLO and 1-INDA to the ∑carbonyl-OPAHs in subsoil than in topsoil at the expense of 9,10-ANQ and 7,12-B(A)A, and by higher OPAH/parent-PAH concentration ratios in subsoil than in topsoil in Bratislava. With this thesis, I contribute a suitable method to determine a large number of OPAHs and PAHs in soil.
My results demonstrate that carbonyl-OPAHs are more abundant than hydroxyl-/carboxyl-OPAHs and that OPAH concentrations are frequently higher than parent-PAH concentrations. Furthermore, there are indications that OPAHs are more mobile in soil than PAHs. This calls for appropriate legal regulation of OPAH concentrations in soil.
Abstract:
This thesis is a collection of works focused on the topic of Earthquake Early Warning, with special attention to large-magnitude events. The topic is addressed from different points of view, and the structure of the thesis reflects the variety of aspects that have been analyzed. The first part is dedicated to the giant 2011 Tohoku-Oki earthquake. The main features of the rupture process are discussed first. The earthquake is then used as a case study to test the feasibility of Early Warning methodologies for very large events. Limitations of the standard approaches for large events emerge in this chapter; the difficulties are related to the real-time magnitude estimate from the first few seconds of recorded signal. An evolutionary strategy for the real-time magnitude estimate is proposed and applied to the Tohoku-Oki earthquake. In the second part of the thesis a larger number of earthquakes is analyzed, including small, moderate and large events. Starting from the measurement of two Early Warning parameters, the behavior of small and large earthquakes in the initial portion of the recorded signals is investigated. The aim is to understand whether small and large earthquakes can be distinguished from the initial stage of their rupture process. A physical model and a plausible interpretation to justify the observations are proposed. The third part of the thesis is focused on practical, real-time approaches for the rapid identification of the potentially damaged zone during a seismic event. Two different approaches for the rapid prediction of the damage area are proposed and tested. The first is a threshold-based method that uses traditional seismic data. Then an innovative approach using continuous GPS data is explored. Both strategies improve the prediction of the large-scale effects of strong earthquakes.
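The threshold-based idea for rapid identification of the potentially damaged zone can be sketched as follows: stations whose real-time peak ground motion exceeds a fixed threshold are flagged, and their spatial extent outlines the alert area. The station names, coordinates, threshold and bounding-box summary below are hypothetical placeholders, not the method or values used in the thesis.

```python
def flag_damage_zone(stations, pga_threshold=0.2):
    """Threshold-based sketch of rapid damage-zone identification.
    `stations` is a list of (name, lat, lon, pga_g) tuples built from
    real-time data streams; stations whose peak ground acceleration (in g)
    exceeds the threshold are flagged, and the bounding box of the flagged
    stations gives a crude outline of the potentially damaged zone."""
    flagged = [(name, lat, lon) for name, lat, lon, pga in stations
               if pga >= pga_threshold]
    if not flagged:
        return flagged, None
    lats = [lat for _, lat, _ in flagged]
    lons = [lon for _, _, lon in flagged]
    bbox = (min(lats), max(lats), min(lons), max(lons))
    return flagged, bbox

# Hypothetical real-time readings (names, positions and values are invented).
readings = [("ST01", 38.1, 140.9, 0.35), ("ST02", 38.4, 141.2, 0.08),
            ("ST03", 37.9, 140.7, 0.22)]
flagged, bbox = flag_damage_zone(readings)
print(flagged, bbox)
```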
Abstract:
The production of the Z boson in proton-proton collisions at the LHC serves as a standard candle at the ATLAS experiment during early data-taking. The decay of the Z into an electron-positron pair gives a clean signature in the detector that allows for calibration and performance studies. The cross-section of ~1 nb allows for first LHC measurements of parton density functions. In this thesis, simulations of 10 TeV collisions at the ATLAS detector are studied. The challenges of an experimental measurement of the cross-section with an integrated luminosity of 100 pb−1 are discussed. In preparation for the cross-section determination, the single-electron efficiencies are determined via a simulation-based method and in a test of a data-driven ansatz. The two methods show very good agreement and differ by ~3% at most. The ingredients of an inclusive and a differential Z production cross-section measurement at ATLAS are discussed and their possible contributions to systematic uncertainties are presented. For a combined sample of signal and background, the expected uncertainty on the inclusive cross-section for an integrated luminosity of 100 pb−1 is determined to be 1.5% (stat) +/- 4.2% (syst) +/- 10% (lumi). The possibilities for single-differential cross-section measurements in rapidity and transverse momentum of the Z boson, which are important quantities because of their impact on parton density functions and the capability to check for non-perturbative effects in pQCD, are outlined. The issues of an efficiency correction based on electron efficiencies as a function of the electron's transverse momentum and pseudorapidity are studied. A possible alternative is demonstrated by expanding the two-dimensional efficiencies with the additional dimension of the invariant mass of the two leptons of the Z decay.
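The ingredients of the inclusive measurement combine as sigma = N_signal / (efficiency × integrated luminosity), with statistical, systematic and luminosity uncertainties quoted separately. The toy numbers below are hypothetical and are only chosen to land near the ~1 nb cross-section quoted above; they are not the results of the thesis.

```python
# Toy numbers only; a real analysis uses background subtraction, unfolding
# and bin-by-bin efficiencies.  sigma = N_signal / (efficiency * luminosity).
n_candidates = 64_000        # selected Z -> e+e- candidates (hypothetical)
n_background = 2_000         # estimated background events (hypothetical)
efficiency = 0.62            # total selection efficiency (hypothetical)
luminosity_pb = 100.0        # integrated luminosity in pb^-1

n_signal = n_candidates - n_background
sigma_pb = n_signal / (efficiency * luminosity_pb)   # cross-section in pb

# Relative uncertainties, each category quoted separately.
stat = n_candidates ** 0.5 / n_signal                # statistical
syst = 0.042                                         # e.g. efficiency + background
lumi = 0.10                                          # luminosity uncertainty

print(f"sigma(Z -> ee) ~ {sigma_pb / 1000:.2f} nb "
      f"+/- {stat:.1%} (stat) +/- {syst:.1%} (syst) +/- {lumi:.1%} (lumi)")
```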
Abstract:
The present work is set in the context of applications concerning the planning and management of humanitarian emergencies. Two aspects are emphasized. On one hand, the importance of knowing the potential of the available data in order to exploit them to the full; on the other, the need to create products that are easy for the user to consult, using two different techniques in order to understand their peculiarities. Three tools made this study possible: the principles of remote sensing, GIS, and Change Detection analysis.
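A minimal sketch of what a change-detection analysis over two co-registered rasters can look like: pixel-wise differencing, standardization and a threshold. The band values, threshold and the synthetic "event" are invented for illustration and do not correspond to the data used in the work.

```python
import numpy as np

def change_map(before, after, threshold=2.0):
    """Simple change-detection sketch: pixel-wise difference of two
    co-registered single-band images, standardized, then thresholded.
    Returns a boolean mask of pixels whose change exceeds `threshold` sigmas."""
    diff = after.astype(float) - before.astype(float)
    z = (diff - diff.mean()) / diff.std()
    return np.abs(z) > threshold

# Hypothetical co-registered rasters (e.g. pre- and post-event satellite bands).
rng = np.random.default_rng(0)
before = rng.normal(100, 5, size=(200, 200))
after = before.copy()
after[80:120, 80:120] += 40          # simulated change (e.g. a damaged area)
mask = change_map(before, after)
print("changed pixels:", int(mask.sum()))
```

In a GIS workflow, the resulting mask would then be vectorized and overlaid on the base cartography to support emergency planning.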
Abstract:
In view of the approaching exhaustion of fossil resources, the search for alternative energy sources is currently one of the most closely followed fields of research. Owing to its enormous potential, photovoltaics is a particular focus of science. In order to be able to use large-area coating processes, thin-film photovoltaics has been researched intensively for several years. However, the current solar-cell concepts are all limited in their potential by the use of toxic (Cd, As) or rare elements (In, Ga) or by complex phase formation. The development of alternative concepts therefore suggests itself.
For this reason, within a BMBF-funded joint project, the deposition of thin films of the binary semiconductor Bi2S3 by physical vapour deposition was investigated, with the aim of establishing it as a quasi-intrinsic absorber in solar-cell structures with a p-i-n layer sequence.
Because its crystal growth is governed by a highly anisotropic bonding character, the deposition of smooth, single-phase films with thicknesses of a few hundred nanometres, suitable for integration into a multilayer structure, was one of the main challenges. The effects of the two parameters deposition temperature and stoichiometry on the relevant characteristics (such as morphology, doping density and photoluminescence) were investigated. Polycrystalline films with suitable roughness and a doping density of n ≈ 2·10^15 cm^-3 were successfully deposited on application-relevant substrates, with a particularly strong dependence on the gas-phase composition being found. Furthermore, the first measurements of the electronic density of states were carried out using high-energy photoemission spectroscopy, which revealed in particular the influence of varying material compositions.
To demonstrate the suitability of the material as an absorber layer, SnS, Cu2O and PbS were available within the project as, in principle, suitable p-contact materials. Despite the use of particularly clean deposition methods in vacuum, no functioning solar cells incorporating Bi2S3 could be produced. However, using photoemission spectroscopy it was possible to probe the relevant interfaces and to identify the causes of these observations. In addition, the necessity of buffer materials during Bi2S3 deposition was successfully demonstrated, in order to suppress surface reactions and to improve the transport properties at the interface.