15 results for Facial Object Based Method
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
Whole Exome Sequencing (WES) is rapidly becoming the first-tier test in clinics, thanks both to its declining costs and to the development of new platforms that help clinicians analyze and interpret SNVs and InDels. However, we still know very little about how CNV detection could increase the WES diagnostic yield. A plethora of exome CNV callers have been published over the years, all showing good performance on specific CNV classes and sizes, suggesting that a combination of multiple tools is needed to obtain good overall detection performance. Here we present TrainX, an ML-based method for calling heterozygous CNVs in WES data using EXCAVATOR2 Normalized Read Counts. We select male and female non-pseudo-autosomal chromosome X alignments to construct our dataset and train our model, make predictions on autosomal target regions, and use an HMM to call CNVs. We compared TrainX against a set of CNV tools differing in detection method (GATK4 gCNV, ExomeDepth, DECoN, CNVkit and EXCAVATOR2) and found that our algorithm outperformed them in terms of stability, as we identified both deletions and duplications with good scores (0.87 and 0.82 F1-scores, respectively) and for sizes down to the minimum resolution of 2 target regions. We also evaluated the method's robustness using a set of WES and SNP array data (n=251), part of the Italian cohort of the Epi25 Collaborative, and were able to retrieve all clinical CNVs previously identified by the SNP array. TrainX showed good accuracy in detecting heterozygous CNVs of different sizes, making it a promising tool for use in a diagnostic setting.
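The combination of per-target normalized read counts with an HMM for CNV calling, as described above, can be illustrated with a minimal sketch: a three-state HMM (deletion, normal, duplication) over hypothetical log2-ratio values per target region, decoded with Viterbi. The state means, transition probabilities and toy input below are illustrative assumptions, not TrainX's actual model, which additionally relies on an ML classifier trained on chromosome X alignments.

```python
import numpy as np

# Minimal three-state HMM (deletion / normal / duplication) over per-target
# log2 ratios of normalized read counts, decoded with Viterbi.
STATES = ["del", "normal", "dup"]
MEANS = np.array([-1.0, 0.0, 0.58])   # expected log2 ratios for 1, 2 and 3 copies
SIGMA = 0.25                          # assumed shared emission standard deviation
LOG_TRANS = np.log(np.array([
    [0.900, 0.099, 0.001],
    [0.005, 0.990, 0.005],
    [0.001, 0.099, 0.900],
]))
LOG_START = np.log(np.array([0.005, 0.990, 0.005]))

def log_emission(x):
    # Gaussian log-likelihood of one observation under each state.
    return -0.5 * ((x - MEANS) / SIGMA) ** 2 - np.log(SIGMA * np.sqrt(2 * np.pi))

def viterbi(log2_ratios):
    """Return the most likely copy-number state for each target region."""
    obs = np.asarray(log2_ratios, dtype=float)
    n, k = len(obs), len(STATES)
    score = np.zeros((n, k))
    back = np.zeros((n, k), dtype=int)
    score[0] = LOG_START + log_emission(obs[0])
    for t in range(1, n):
        cand = score[t - 1][:, None] + LOG_TRANS   # cand[i, j]: from state i to state j
        back[t] = cand.argmax(axis=0)
        score[t] = cand.max(axis=0) + log_emission(obs[t])
    path = [int(score[-1].argmax())]
    for t in range(n - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return [STATES[s] for s in reversed(path)]

# Toy example: a heterozygous deletion spanning two consecutive target regions.
print(viterbi([0.02, -0.05, -0.95, -1.10, 0.03, 0.01]))
```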
Abstract:
The objective of this thesis is the use of aerial photogrammetric and remotely sensed imagery for the qualitative and quantitative characterization of forest ecosystems and of their evolution. The topics addressed concern, on the one hand, the photogrammetric aspect, through the recovery, digitization and processing of historical aerial images from various periods, and, on the other, the use of remote sensing for land cover classification. Chapter 1 gives a brief introduction to the development of new survey technologies, with a focus on forestry applications; the second chapter addresses the acquisition of remotely sensed and photogrammetric data, with a brief description of their main characteristics and quantities; the third chapter deals with the image processing and classification procedures used to extract meaningful information. The following three chapters present three applications of photogrammetry and remote sensing to the study of forest ecosystems. The first case (Chapter 4) concerns the Prado-Cusna mountain group, on which a multitemporal analysis of the evolution of the altitudinal treeline over the last fifty years was carried out. The procedure for recovering historical aerial photographs was addressed and analyzed; it consists of a series of successive operations, starting with the digitization of the frames, continuing with the determination of known ground control points for image orientation, and ending with orthorectification and mosaicking, with the aid of a Digital Terrain Model (DTM). All this allowed these data to be compared with more recent digital imagery in order to identify any changes that occurred over the intervening period. In the second case (Chapter 5), for the study of the Monte Giovo group, a classification procedure was defined for extracting vegetation cover and updating the existing cartography, in this case the vegetation map. In particular, the aim was to classify the vegetation above the treeline, dominated by bilberry heaths and grasslands, mainly secondary grasslands of mat-grass (Nardus) and Brachypodium. Some areas also host communities colonizing stabilized debris accumulations and sandstone cliffs. For this purpose, in addition to aerial images (Volo IT2000), ASTER satellite imagery and other ancillary data (DTM and its derivatives) were used, and an object-based land cover classification system was applied. The best segmentation parameters and the optimal number of samples for classification were investigated. On the one hand, a supervised classification of the vegetation was carried out starting from a few reference samples; on the other, the method was tested for defining a procedure for automatically updating the existing cartography. In the third case (Chapter 6), again in the Monte Giovo area, the timberline extracted by object-based segmentation was compared with the results of ground GPS surveys carried out for this purpose.
The objective is the definition of the altitudinal limit of the forest and the identification of groups of isolated trees above it, through object-based segmentation and classification of digital aerial orthophotos, with field verification of the results in selected sample areas by creating GPS profiles of the forest limit and determining the coordinates of the isolated tree groups. The final results of the work showed that modern image analysis techniques are now mature enough to achieve the objectives set in the three applications considered, although careful data validation and operator intervention at several stages of the process remain necessary. In particular, image segmentation for the extraction of meaningful features showed great potential in all three cases. Object-based software simplifies the implementation of classification results in a GIS environment, offering the possibility, for example, of exporting the classified objects in vector format. It also allows multiple sources of information, such as aerial photos, satellite images, DTMs and their derivatives, to be used simultaneously in a single environment. The automatic procedures for extracting the timberline and isolated tree groups and for land cover classification are under continuous development to improve their performance; at present they should not be considered a stand-alone optimal solution, but rather a tool to set up and simplify the work of the photointerpretation specialist.
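The object-based workflow described above (segment the orthophoto into image objects, then classify each object from per-object features using a few reference samples) can be sketched with open-source stand-ins for the commercial object-based software; the synthetic image, the superpixel segmentation, the random forest classifier and the class labels below are illustrative assumptions, not the actual processing chain used in the thesis.

```python
import numpy as np
from skimage.segmentation import slic
from sklearn.ensemble import RandomForestClassifier

# Stand-in for an RGB orthophoto tile; in practice the image would be read from
# the digital orthophoto (and could be stacked with DTM-derived bands).
rng = np.random.default_rng(0)
image = rng.random((200, 200, 3))

# 1) Segmentation: group pixels into image objects (here, SLIC superpixels).
segments = slic(image, n_segments=150, compactness=10, start_label=0)
labels = np.unique(segments)

# 2) Per-object features: mean value of each band within each object.
features = np.array([image[segments == lab].mean(axis=0) for lab in labels])

# 3) Supervised classification from a few labelled reference objects
#    (object indices and class names below are purely illustrative).
train_idx = np.array([0, 5, 10, 20, 40, 60])
train_classes = np.array(["forest", "forest", "heath", "grassland", "heath", "grassland"])
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(features[train_idx], train_classes)
object_classes = clf.predict(features)

# 4) Map the per-object classes back to pixels to obtain a cover map.
cover_map = object_classes[np.searchsorted(labels, segments)]
print(np.unique(cover_map, return_counts=True))
```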
Abstract:
Despite new methods and combined strategies, conventional cancer chemotherapy still lacks specificity and induces drug resistance. Gene therapy offers the potential to achieve success in the clinical treatment of cancer, and this can be accomplished by replacing mutated tumour suppressor genes, inhibiting gene transcription, introducing new genes encoding therapeutic products, or specifically silencing any given target gene. Concerning gene silencing, attention has recently shifted to the RNA interference (RNAi) phenomenon. Gene silencing mediated by the RNAi machinery is based on short RNA molecules, small interfering RNAs (siRNAs) and microRNAs (miRNAs), that are fully or partially homologous, respectively, to the mRNA of the genes being silenced. On one hand, synthetic siRNAs are an important research tool for understanding the function of a gene, and the prospect of using siRNAs as potent and specific inhibitors of any target gene provides a new therapeutic approach for many untreatable diseases, particularly cancer. On the other hand, the discovery of the gene regulatory pathways mediated by miRNAs offered the research community important new perspectives for understanding the physiological and, above all, the pathological mechanisms underlying gene regulation. Indeed, changes in miRNA expression have been identified in several types of neoplasia, and it has also been proposed that the overexpression of genes in cancer cells may be due to the disruption of a control network in which relevant miRNAs are implicated. For these reasons, I focused my research on a possible link between RNAi and the enzyme cyclooxygenase-2 (COX-2) in the field of colorectal cancer (CRC), since it has been established that the adenoma-adenocarcinoma transition and the progression of CRC depend on aberrant constitutive expression of the COX-2 gene. In fact, overexpressed COX-2 is involved in the block of apoptosis and the stimulation of tumor angiogenesis, and promotes cell invasion, tumour growth and metastasis. On the basis of data reported in the literature, the first aim of my research was to develop an innovative and effective tool, based on the RNAi mechanism, able to silence COX-2 expression strongly and specifically in human colorectal cancer cell lines. In this study, I first show that an siRNA sequence directed against COX-2 mRNA (siCOX-2) potently downregulated COX-2 gene expression in human umbilical vein endothelial cells (HUVEC) and inhibited PMA-induced angiogenesis in vitro in a specific, non-toxic manner. Moreover, I found that the insertion of a specific cassette carrying an anti-COX-2 shRNA sequence (shCOX-2, the precursor of the siCOX-2 previously tested) into a viral vector (pSUPER.retro) greatly increased silencing potency in a colon cancer cell line (HT-29) without activating any interferon response. Phenotypically, COX-2-deficient HT-29 cells showed a significant impairment of their in vitro malignant behaviour. Thus, the results reported here indicate an easy-to-use, powerful and highly selective virus-based method to knock down the COX-2 gene in a stable and long-lasting manner in colon cancer cells. Furthermore, they open up the possibility of an in vivo application of this anti-COX-2 retroviral vector as a therapeutic agent for human cancers overexpressing COX-2. In order to improve tumour selectivity, the shCOX-2 expression cassette of the pSUPER.retro vector was modified.
The aim was to obtain strong, specific transcription of shCOX-2, followed by COX-2 silencing mediated by siCOX-2, only in cancer cells. For this reason, the H1 promoter in the basic pSUPER.retro vector [pS(H1)] was substituted with the human COX-2 promoter [pS(COX2)] and with a promoter containing repeated copies of the TCF binding element (TBE) [pS(TBE)]. These promoters were chosen because they are particularly activated in colon cancer cells. COX-2 was effectively silenced in HT-29 and HCA-7 colon cancer cells by using the enhanced pS(COX2) and pS(TBE) vectors. In particular, higher siCOX-2 production followed by stronger inhibition of the COX-2 gene was achieved by using the pS(TBE) vector, which represents not only the most effective but also the most specific system to downregulate COX-2 in colon cancer cells. Because of the many limits that a retroviral therapy could have in a possible in vivo treatment of CRC, the next goal was to render the enhanced RNAi-mediated COX-2 silencing more suitable for this kind of application. Xiang et al. (2006) demonstrated that it is possible to induce RNAi in mammalian cells after infection with engineered E. coli strains expressing the Inv and HlyA genes, which encode two bacterial factors needed for the successful transfer of shRNA into mammalian cells. This system, called "trans-kingdom" RNAi (tkRNAi), could represent an optimal approach for the treatment of colorectal cancer, since E. coli is a normal resident of the human intestinal flora and could easily be delivered to the tumor tissue. For this reason, I tested the improved COX-2 silencing mediated by the pS(COX2) and pS(TBE) vectors using the tkRNAi system. Results obtained in HT-29 and HCA-7 cell lines were in close agreement with the data previously collected after transfection of the pS(COX2) and pS(TBE) vectors in the same cell lines. These findings suggest that the tkRNAi system for COX-2 silencing, in particular mediated by the pS(TBE) vector, could represent a promising tool for the treatment of colorectal cancer. Alongside the studies addressed to setting up an RNAi-mediated therapeutic strategy, I also aimed to advance the understanding of new molecular bases of human colorectal cancer. In particular, it is known that components of the miRNA/RNAi pathway may be altered during the progressive development of colorectal cancer, and it has already been demonstrated that some miRNAs work as tumor suppressors or oncomiRs in colon cancer. Thus, my hypothesis was that overexpressed COX-2 protein in colon cancer could be the result of decreased levels of one or more tumor suppressor miRNAs. In this thesis, I clearly show an inverse correlation between COX-2 expression and human miR-101(1) levels in colon cancer cell lines, tissues and metastases. I also demonstrate that in vitro modulation of miR-101(1) expression in colon cancer cell lines leads to significant variations in COX-2 expression, and that this phenomenon is based on a direct interaction between miR-101(1) and COX-2 mRNA. Moreover, I started to investigate miR-101(1) regulation in the hypoxic environment, since adaptation to hypoxia is critical for tumor cell growth and survival and it is known that COX-2 can be induced directly by hypoxia-inducible factor 1 (HIF-1).
Surprisingly, I observed that COX-2 overexpression induced by hypoxia is always coupled to a significant decrease of miR-101(1) levels in colon cancer cell lines, suggesting that miR-101(1) regulation could be involved in the adaptation of cancer cells to the hypoxic environment that strongly characterizes CRC tissues.
Abstract:
The continuous increase in genome sequencing projects has produced a huge amount of data in the last 10 years: currently more than 600 prokaryotic and 80 eukaryotic genomes are fully sequenced and publicly available. However, the sequencing process alone determines only raw nucleotide sequences. This is just the first step of the genome annotation process, which deals with the issue of assigning biological information to each sequence. The annotation process is carried out at each level of the biological information processing mechanism, from DNA to protein, and cannot be accomplished only by in vitro analysis procedures, which are extremely expensive and time consuming when applied at such a large scale. Thus, in silico methods need to be used to accomplish the task. The aim of this work was the implementation of predictive computational methods to allow a fast, reliable and automated annotation of genomes and proteins starting from amino acid sequences. The first part of the work focused on the implementation of a new machine learning based method for the prediction of the subcellular localization of soluble eukaryotic proteins. The method is called BaCelLo, and it was developed in 2006. The main peculiarity of the method is that it is independent of biases present in the training dataset, which cause the over-prediction of the most represented examples in all the other predictors developed so far. This important result was achieved by a modification, made by myself, to the standard Support Vector Machine (SVM) algorithm, creating the so-called Balanced SVM. BaCelLo is able to predict the most important subcellular localizations in eukaryotic cells, and three kingdom-specific predictors were implemented. In two extensive comparisons, carried out in 2006 and 2008, BaCelLo was shown to outperform all the currently available state-of-the-art methods for this prediction task. BaCelLo was subsequently used to completely annotate 5 eukaryotic genomes, by integrating it into a pipeline of predictors developed at the Bologna Biocomputing group by Dr. Pier Luigi Martelli and Dr. Piero Fariselli. An online database, called eSLDB, was developed by integrating, for each amino acid sequence extracted from the genomes, the predicted subcellular localization merged with experimental and similarity-based annotations. In the second part of the work a new machine learning based method was implemented for the prediction of GPI-anchored proteins. The method is able to efficiently predict from the raw amino acid sequence both the presence of the GPI anchor (by means of an SVM) and the position in the sequence of the post-translational modification event, the so-called ω-site (by means of a Hidden Markov Model, HMM). The method is called GPIPE and was shown to greatly enhance the prediction performance for GPI-anchored proteins over all the previously developed methods. GPIPE was able to predict up to 88% of the experimentally annotated GPI-anchored proteins while maintaining a false positive prediction rate as low as 0.1%. GPIPE was used to completely annotate 81 eukaryotic genomes, and more than 15000 putative GPI-anchored proteins were predicted, 561 of which are found in H. sapiens. On average, 1% of a proteome is predicted as GPI-anchored. A statistical analysis was performed on the composition of the regions surrounding the ω-site, which allowed the definition of specific amino acid abundances in the different regions considered.
Furthermore, the hypothesis, proposed in the literature, that compositional biases are present among the four major eukaryotic kingdoms was tested and rejected. All the developed predictors and databases are freely available at: BaCelLo http://gpcr.biocomp.unibo.it/bacello, eSLDB http://gpcr.biocomp.unibo.it/esldb, GPIPE http://gpcr.biocomp.unibo.it/gpipe.
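The exact balancing modification behind the Balanced SVM is specific to the thesis; a generic analogue of the same idea, penalizing errors on under-represented localization classes more heavily through per-class weights in a standard SVM, might look like the following sketch (the features, labels and weighting scheme are assumptions, not BaCelLo's).

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Toy stand-in for an imbalanced localization dataset: 20 sequence-derived
# features per protein and three classes with very different sizes. The
# class-dependent offset only makes the toy problem learnable.
rng = np.random.default_rng(0)
y = np.array(["cytoplasm"] * 200 + ["nucleus"] * 80 + ["mitochondrion"] * 20)
X = rng.normal(size=(len(y), 20))
X[:, 0] += np.array([{"cytoplasm": 0.0, "nucleus": 1.0, "mitochondrion": 2.0}[c] for c in y])

plain_svm = SVC(kernel="rbf", C=1.0)
weighted_svm = SVC(kernel="rbf", C=1.0, class_weight="balanced")

for name, model in [("plain SVM", plain_svm), ("class-weighted SVM", weighted_svm)]:
    # Macro-averaged F1 weights every class equally, so it exposes the
    # over-prediction of the majority class that plain training tends to produce.
    scores = cross_val_score(model, X, y, cv=5, scoring="f1_macro")
    print(f"{name:>18}: macro F1 = {scores.mean():.2f}")
```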
Abstract:
Motivation: A current issue of great interest, from both a theoretical and an applicative perspective, is the analysis of biological sequences to disclose the information they encode. The development of new technologies for genome sequencing in recent years has opened new fundamental problems, since huge amounts of biological data still await interpretation. Indeed, sequencing is only the first step of the genome annotation process, which consists in the assignment of biological information to each sequence. Hence, given the large amount of available data, in silico methods have become useful and necessary for extracting relevant information from sequences. The availability of data from Genome Projects gave rise to new strategies for tackling the basic problems of computational biology, such as the determination of the three-dimensional structures of proteins, their biological function and their reciprocal interactions. Results: The aim of this work was the implementation of predictive methods that allow the extraction of information on the properties of genomes and proteins starting from the nucleotide and amino acid sequences, by taking advantage of the information provided by the comparison of genome sequences from different species. In the first part of the work a comprehensive large-scale genome comparison of 599 organisms is described. 2.6 million sequences coming from 551 prokaryotic and 48 eukaryotic genomes were aligned and clustered on the basis of their sequence identity. This procedure led to the identification of classes of proteins that are peculiar to the different groups of organisms. Moreover, the adopted similarity threshold produced clusters that are homogeneous from a structural point of view and that can be used for the structural annotation of uncharacterized sequences. The second part of the work focuses on the characterization of thermostable proteins and on the development of tools able to predict the thermostability of a protein starting from its sequence. By means of Principal Component Analysis, the codon composition of a non-redundant database comprising 116 prokaryotic genomes was analyzed, and it was shown that a cross-genomic approach allows the extraction of common determinants of thermostability at the genome level, leading to an overall accuracy in discriminating thermophilic coding sequences equal to 95%. This result outperforms those obtained in previous studies. Moreover, we investigated the effect of multiple mutations on protein thermostability. This issue is of great importance in the field of protein engineering, since thermostable proteins are generally more suitable than their mesostable counterparts in technological applications. A Support Vector Machine based method was trained to predict whether a set of mutations can enhance the thermostability of a given protein sequence. The developed predictor achieves 88% accuracy.
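The cross-genomic discrimination of thermophilic coding sequences from codon composition can be sketched as a PCA projection of 64-dimensional codon-frequency vectors followed by an SVM classifier; the simulated data, the assumed codon bias and the pipeline below are illustrative, not the thesis's actual dataset or feature set.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Each row is a hypothetical 64-dimensional codon-frequency vector for one coding
# sequence; labels mark mesophilic (0) vs thermophilic (1) origin. Real vectors
# would be computed from genome sequences; here they are simulated with an
# assumed codon-usage bias for the thermophilic class.
rng = np.random.default_rng(1)
n_per_class, n_codons = 200, 64
meso = rng.dirichlet(np.ones(n_codons), size=n_per_class)
thermo_bias = np.ones(n_codons)
thermo_bias[:8] = 3.0
thermo = rng.dirichlet(thermo_bias, size=n_per_class)
X = np.vstack([meso, thermo])
y = np.array([0] * n_per_class + [1] * n_per_class)

# Project the codon composition onto its first principal components, then
# discriminate thermophilic coding sequences with an SVM.
model = make_pipeline(StandardScaler(), PCA(n_components=5), SVC(kernel="rbf", C=1.0))
accuracy = cross_val_score(model, X, y, cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {accuracy.mean():.2f}")
```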
Abstract:
The aim of this thesis is to investigate contemporary fashion exhibitions as textual machines. If we consider the current fashion design landscape as characterized by constitutive complexity and by the rapid changes running through it, and if we start from the premise that the range of meanings a clothing style and individual garments can take on is extremely elusive, it is probably more productive to ask how fashion works, what its mechanisms of meaning production are. The analysis of fashion exhibitions thus proves a useful way to address the question, since exhibition displays put these mechanisms into discourse and represent three-dimensional reflections on specific themes. A fashion exhibition stages exceptional cases that magnify typical aspects of how the fashion system works, whether we look at fashion from the point of view of production or from that of reception. The investigation identified in the exhibitions curated by Diana Vreeland at the Costume Institute of the Metropolitan Museum in New York the reference model for contemporary fashion exhibitions. Vreeland, who from 1936 to 1971 was first fashion editor and then editor-in-chief of "Harper's Bazaar" and "Vogue USA" respectively, marked a fundamental turning point when, in 1972, she decided to accept the role of Special Consultant at the Costume Institute. It is now a widespread opinion among fashion critics and scholars that the exhibitions she organized over more than a decade changed the way clothes are displayed in museums. Alongside Vreeland's work we then considered a recent fashion exhibition that attracted much attention: Spectres. When Fashion Turns Back, curated by Judith Clark (2004). In investigating the relationships between contemporary fashion design and fashion history, this exhibition used display machines inhabited by the clothes to "construct spatial ideas" and to stage non-obvious connections between past and present. This exhibition seemed central to us for highlighting the curator's semiotic gaze in questioning the overall project of the exhibition design, and not simply the study of the clothes on display. In this way we outlined two positions: one represented by an object-based approach to the analysis of dress, directly linked to the tradition of museum conservators; the other represented by what can by now be considered a discipline, fashion curation, which attaches great importance to all the aspects that contribute to forming the display project of an exhibition. A comparative study of some of the most important recently organized fashion exhibitions allowed us to identify recurring elements and specific features of these textual devices. Drawing on the contribution of Manar Hammad (2006), we considered the different levels of a fashion exhibition: the clothes and their relationship with the mannequins; the exhibition design and the exhibition space; the route and the sequence, both from the point of view of the strategy of textual construction and deployment and from the point of view of the model visitor. We thus identified four groups of fashion exhibitions: museum-archival exhibitions; monographic retrospectives; exhibitions tied to the figure of a curator; and mixed forms that cut across these first three models.
This systematization showed that one of the central dimensions of contemporary fashion exhibitions is precisely the question of curatorship, which we can read in terms of authorship and enunciation. The reference value horizons also became clearer: the dimension of historical accuracy is associated with an exhibition that privileges the level of the objects (the clothes) and a purely visual involvement of the visitor; the dimension of visual pleasure can instead be associated with an exhibition model that assigns a central role to the exhibition design and "asks" the visitor to play a fully interactive role. The most accomplished curatorial approach seems to us to be the one that tries to reconcile these two dimensions.
Abstract:
Tracking activities during daily life and assessing movement parameters is essential for complementing the information gathered in confined environments, such as clinical and physical activity laboratories, for the assessment of mobility. Inertial measurement units (IMUs) are used to monitor human movement for prolonged periods of time and without space limitations. The focus of this study was to provide a robust, low-cost and unobtrusive solution for evaluating human motion using a single IMU. The first part of the study focused on the monitoring and classification of daily life activities. A simple method that analyses the variations in the signal was developed to distinguish two types of activity intervals: active and inactive. A neural classifier was used to classify active intervals; the angle with respect to gravity was used to classify inactive intervals. The second part of the study focused on the extraction of gait parameters using a single IMU attached to the pelvis. Two complementary methods were proposed for gait parameter estimation. The first was a wavelet-based method developed for the estimation of gait events. The second was developed for estimating step and stride length during level walking, using the estimates of the first method. A special integration algorithm was extended to operate on each gait cycle using a specially designed Kalman filter. The developed methods were also applied in various scenarios. The activity monitoring method was used in a PRIN'07 project to assess the mobility levels of individuals living in an urban area. The same method was applied to volleyball players to analyze their fitness levels by monitoring their daily life activities. The methods proposed in these studies provide a simple, unobtrusive and low-cost solution for monitoring and assessing activities outside of controlled environments.
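The two-stage activity classification described above (mark windows as active or inactive from the variation of the acceleration signal, then derive the posture of inactive windows from the angle with respect to gravity) can be sketched as follows; the sampling rate, window length, thresholds and posture labels are assumptions, and the neural classifier used for active intervals is not reproduced here.

```python
import numpy as np

FS = 100                      # samples per second (assumed)
WINDOW = 2 * FS               # 2-second windows (assumed)
ACTIVITY_STD_THRESHOLD = 0.1  # g, assumed activity threshold

def classify_windows(acc):
    """acc: (n_samples, 3) accelerometer data in g. Returns one label per window."""
    labels = []
    for start in range(0, len(acc) - WINDOW + 1, WINDOW):
        w = acc[start:start + WINDOW]
        magnitude = np.linalg.norm(w, axis=1)
        if magnitude.std() > ACTIVITY_STD_THRESHOLD:
            labels.append("active")          # would be refined by a neural classifier
            continue
        # Inactive window: angle between the mean acceleration and the vertical axis.
        mean_acc = w.mean(axis=0)
        tilt = np.degrees(np.arccos(np.clip(mean_acc[2] / np.linalg.norm(mean_acc), -1, 1)))
        labels.append("upright" if tilt < 45 else "lying")
    return labels

# Toy signal: 4 s of standing still followed by 4 s of noisy "walking".
rng = np.random.default_rng(0)
still = np.tile([0.0, 0.0, 1.0], (4 * FS, 1)) + rng.normal(0, 0.01, (4 * FS, 3))
moving = np.tile([0.0, 0.0, 1.0], (4 * FS, 1)) + rng.normal(0, 0.3, (4 * FS, 3))
print(classify_windows(np.vstack([still, moving])))
```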
Abstract:
This thesis is a collection of works focused on the topic of Earthquake Early Warning, with special attention to large magnitude events. The topic is addressed from different points of view, and the structure of the thesis reflects the variety of aspects that have been analyzed. The first part is dedicated to the giant 2011 Tohoku-Oki earthquake. The main features of the rupture process are first discussed. The earthquake is then used as a case study to test the feasibility of Early Warning methodologies for very large events. Limitations of the standard approaches for large events emerge in this chapter. The difficulties are related to the real-time magnitude estimate from the first few seconds of recorded signal. An evolutionary strategy for the real-time magnitude estimate is proposed and applied to the Tohoku-Oki earthquake. In the second part of the thesis a larger number of earthquakes is analyzed, including small, moderate and large events. Starting from the measurement of two Early Warning parameters, the behavior of small and large earthquakes in the initial portion of the recorded signals is investigated. The aim is to understand whether small and large earthquakes can be distinguished from the initial stage of their rupture process. A physical model and a plausible interpretation to justify the observations are proposed. The third part of the thesis is focused on practical, real-time approaches for the rapid identification of the potentially damaged zone during a seismic event. Two different approaches for the rapid prediction of the damaged area are proposed and tested. The first one is a threshold-based method that uses traditional seismic data. Then an innovative approach using continuous GPS data is explored. Both strategies improve the prediction of the large-scale effects of strong earthquakes.
Abstract:
This dissertation, comprising three separate studies, focuses on the relationship between remote work adoption and employee job performance, analyzing employee social isolation and job concentration as the main mediators of this relationship. It also examines the impact of concern about COVID-19 and emotional stability as moderators of these relationships. Using a survey-based method in an emergency homeworking context, the first study found that social isolation had a negative effect on remote work productivity and satisfaction, and that COVID-19 concerns affected this relationship differently for individuals with high and low levels of concern. The second study, a diary study analyzing hybrid workers, found a positive correlation between work from home (WFH) adoption and job performance through social isolation and job concentration, with emotional stability serving respectively as a buffer and a booster in the relationships between WFH and the mediators. The third study, also a diary study of hybrid workers, confirmed the benefits of work from home for job performance and the importance of job concentration as a mediator, while suggesting that social isolation may not be significant for employee job performance, although it is relevant for employee well-being. Although each study independently provides a discussion along with research and practical implications, this dissertation also presents a general discussion of remote work and its psychological implications, highlighting areas for future research.
Abstract:
INTRODUCTION: Endograft deployment is a well-known cause of increased arterial stiffness, and increased arterial stiffness is a recognized cardiovascular risk factor. A possible harmful effect of endograft deployment on cardiac function therefore deserves investigation. The aim of this study was to evaluate the impact of endograft deployment on the arterial stiffness and cardiac geometry of patients treated for aortic aneurysm, in order to detect modifications that could justify an increased cardiac mortality at follow-up. MATERIALS AND METHODS: Over a period of 3 years, patients undergoing elective EVAR for infrarenal aortic pathologies in two university centers in Emilia-Romagna were examined. All patients underwent a pre-operative and a six-month post-operative Pulse Wave Velocity (PWV) examination, using an ultrasound-based method performed by vascular surgeons, together with a trans-thoracic echocardiography examination, in order to evaluate cardiac chamber geometry before and after the treatment. RESULTS: 69 patients were enrolled. At 36 months, 36 patients (52%) had completed the 6-month follow-up examination. The ultrasound-based carotid-femoral PWV measurements performed preoperatively and 6 months after the procedure revealed a significant postoperative increase of cf-PWV (11.6±3.6 m/s vs 12.3±8 m/s; p=0.037). Postoperative LVtdV (90±28.3 ml vs 99.1±29.7 ml; p=0.031), LVtdVi (47.4±15.9 ml/m2 vs 51.9±14.9 ml/m2; p=0.050) and IVStd (12±1.5 mm vs 12.1±1.3 mm; p=0.027) were significantly increased compared with the preoperative measures. Postoperative E/A (0.76±0.26 vs 0.6±0.67; p=0.011), E' lateral (9.5±2.6 vs 7.9±2.6; p=0.024) and A' septal (10.8±1.5 vs 8.9±2; p=0.005) were significantly reduced compared with the preoperative measurements. CONCLUSION: The endovascular treatment of the abdominal aorta causes an immediate and significant increase of aortic stiffness. This increase reflects negatively on patients' cardiac geometry, inducing left ventricular hypertrophy and mild diastolic dysfunction just 6 months after endograft implantation. Further investigations and long-term results are necessary to assess whether this negative remodeling could affect the cardiac outcome of patients treated with the endovascular approach.
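The pre/post design underlying results such as the cf-PWV increase (11.6±3.6 vs 12.3±8 m/s, p=0.037) pairs each patient's preoperative and six-month measurements; a minimal sketch of such a paired comparison on simulated values is shown below (the study's actual statistical test and data are not reproduced, and the numbers generated here are purely illustrative).

```python
import numpy as np
from scipy import stats

# Simulated paired measurements: one pre-operative and one 6-month post-operative
# cf-PWV value per patient. Means and spreads are assumptions for illustration.
rng = np.random.default_rng(0)
n_patients = 36
pwv_pre = rng.normal(11.6, 3.6, n_patients)
pwv_post = pwv_pre + rng.normal(0.7, 1.5, n_patients)   # assumed mean increase

# Paired test on the within-patient differences.
t_stat, p_value = stats.ttest_rel(pwv_post, pwv_pre)
print(f"mean change: {np.mean(pwv_post - pwv_pre):+.2f} m/s, p = {p_value:.3f}")
```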
Abstract:
Earthquake prediction is a complex task for scientists, due to the rare occurrence of high-intensity earthquakes and their inaccessible depths. Despite this challenge, it is a priority to protect infrastructure and populations living in areas of high seismic risk. Reliable forecasting requires comprehensive knowledge of seismic phenomena. In this thesis, the development, application and comparison of both deterministic and probabilistic forecasting methods are presented. Regarding the deterministic approach, the implementation of an alarm-based method using the occurrence of strong (fore)shocks, widely felt by the population, as a precursor signal is described. This model is then applied to the retrospective prediction of Italian earthquakes of magnitude M ≥ 5.0, 5.5 and 6.0 that occurred in Italy from 1960 to 2020. Retrospective performance testing is carried out using tests and statistics specific to deterministic alarm-based models. Regarding probabilistic models, this thesis focuses mainly on the EEPAS and ETAS models. Although the EEPAS model has previously been applied and tested in some regions of the world, it had never been used for forecasting Italian earthquakes. In the thesis, the EEPAS model is used to retrospectively forecast Italian shallow earthquakes with magnitude M ≥ 5.0, using new MATLAB software. The forecasting performance of the probabilistic models was compared to that of other models using CSEP binary tests. The EEPAS and ETAS models showed different characteristics for forecasting Italian earthquakes, with EEPAS performing better in the long term and ETAS performing better in the short term. The FORE model, based on strong precursor quakes, is compared to EEPAS and ETAS using an alarm-based deterministic approach. All models perform better than a random forecasting model, with the ETAS and FORE models showing better performance. However, to fully evaluate forecasting performance, prospective tests should be conducted. The lack of objective tests for evaluating deterministic models and comparing them with probabilistic ones was a challenge faced during the study.
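A retrospective evaluation of an alarm-based model of the kind described above can be sketched by switching an alarm on for a fixed window after every precursor shock and then computing the miss rate and the fraction of time covered by alarms, the two ingredients of a Molchan-style assessment; the window length, study period and toy catalogues below are assumptions, not the thesis's data or test statistics.

```python
import numpy as np

ALARM_DAYS = 90.0                # assumed alarm window after each precursor
STUDY_DAYS = 365.0 * 10          # hypothetical 10-year study period

precursor_times = np.array([120.0, 800.0, 1500.0, 2600.0])   # days from start (toy)
target_times = np.array([150.0, 1950.0, 2650.0])             # target events (toy)

def in_alarm(t):
    """True if time t falls within ALARM_DAYS after any precursor shock."""
    return np.any((t >= precursor_times) & (t <= precursor_times + ALARM_DAYS))

hits = sum(in_alarm(t) for t in target_times)
miss_rate = 1 - hits / len(target_times)

# Fraction of the study period covered by alarms (merging overlapping windows).
covered, current_end = 0.0, -np.inf
for s in np.sort(precursor_times):
    e = min(s + ALARM_DAYS, STUDY_DAYS)
    covered += max(0.0, e - max(s, current_end))
    current_end = max(current_end, e)
tau = covered / STUDY_DAYS

print(f"hits: {hits}/{len(target_times)}, miss rate: {miss_rate:.2f}, "
      f"fraction of time on alarm: {tau:.2f}")
```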
Abstract:
Long-term monitoring of acoustic environments is gaining popularity thanks to the wealth of scientific and engineering insight it provides. The increasing interest is due to the constant growth of storage capacity and of the computational power needed to process large amounts of data. In this perspective, machine learning (ML) provides a broad family of data-driven statistical techniques for dealing with large databases. Nowadays, the conventional practice of sound level meter measurements limits the global description of a sound scene to an energetic point of view: the equivalent continuous level Leq is indeed the main metric used to define an acoustic environment. Finer analyses involve the use of statistical levels. However, acoustic percentiles are based on temporal assumptions, which are not always reliable. A statistical approach based on the study of the occurrences of sound pressure levels brings a different perspective to the analysis of long-term monitoring. Depicting a sound scene through the most probable sound pressure level, rather than through portions of energy, provides more specific information about the activity carried out during the measurements: the statistical mode of the occurrences can capture typical behaviors of specific kinds of sound sources. The present work aims to propose an ML-based method to identify, separate and measure coexisting sound sources in real-world scenarios. It is based on long-term monitoring and is addressed to acousticians focused on the analysis of environmental noise in manifold contexts. The presented method is based on clustering analysis. Two algorithms, the Gaussian Mixture Model and K-means clustering, form the core of a process to investigate different active spaces monitored with sound level meters. The procedure has been applied in two different contexts: university lecture halls and offices. The proposed method gives robust and reliable results in describing the acoustic scenario and could represent an important analytical tool for acousticians.
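The clustering core of such an approach (fitting a Gaussian Mixture Model and K-means to the distribution of measured sound pressure levels so that each component describes one coexisting source) can be sketched as follows; the simulated levels, the two-source scenario and the number of clusters are illustrative assumptions, not the thesis's measurement campaigns.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.cluster import KMeans

# Simulated short-term sound pressure levels from a sound level meter: a quiet
# background (~40 dB) and an activity such as speech (~65 dB) coexist in the record.
rng = np.random.default_rng(0)
background = rng.normal(40, 2, 2000)       # dB
activity = rng.normal(65, 4, 1000)         # dB
levels = np.concatenate([background, activity]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(levels)
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(levels)

# The component means approximate the most probable level of each source,
# the kind of per-source descriptor discussed above.
print("GMM component means (dB):", np.round(np.sort(gmm.means_.ravel()), 1))
print("K-means centroids   (dB):", np.round(np.sort(kmeans.cluster_centers_.ravel()), 1))
print("GMM mixing weights:", np.round(gmm.weights_, 2))
```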
Abstract:
The "sustainability" concept relates to the prolonging of human economic systems with as little detrimental impact on ecological systems as possible. Construction that exhibits good environmental stewardship and practices that conserve resources in a manner that allow growth and development to be sustained for the long-term without degrading the environment are indispensable in a developed society. Past, current and future advancements in asphalt as an environmentally sustainable paving material are especially important because the quantities of asphalt used annually in Europe as well as in the U.S. are large. The asphalt industry is still developing technological improvements that will reduce the environmental impact without affecting the final mechanical performance. Warm mix asphalt (WMA) is a type of asphalt mix requiring lower production temperatures compared to hot mix asphalt (HMA), while aiming to maintain the desired post construction properties of traditional HMA. Lowering the production temperature reduce the fuel usage and the production of emissions therefore and that improve conditions for workers and supports the sustainable development. Even the crumb-rubber modifier (CRM), with shredded automobile tires and used in the United States since the mid 1980s, has proven to be an environmentally friendly alternative to conventional asphalt pavement. Furthermore, the use of waste tires is not only relevant in an environmental aspect but also for the engineering properties of asphalt [Pennisi E., 1992]. This research project is aimed to demonstrate the dual value of these Asphalt Mixes in regards to the environmental and mechanical performance and to suggest a low environmental impact design procedure. In fact, the use of eco-friendly materials is the first phase towards an eco-compatible design but it cannot be the only step. The eco-compatible approach should be extended also to the design method and material characterization because only with these phases is it possible to exploit the maximum potential properties of the used materials. Appropriate asphalt concrete characterization is essential and vital for realistic performance prediction of asphalt concrete pavements. Volumetric (Mix design) and mechanical (Permanent deformation and Fatigue performance) properties are important factors to consider. Moreover, an advanced and efficient design method is necessary in order to correctly use the material. A design method such as a Mechanistic-Empirical approach, consisting of a structural model capable of predicting the state of stresses and strains within the pavement structure under the different traffic and environmental conditions, was the application of choice. In particular this study focus on the CalME and its Incremental-Recursive (I-R) procedure, based on damage models for fatigue and permanent shear strain related to the surface cracking and to the rutting respectively. It works in increments of time and, using the output from one increment, recursively, as input to the next increment, predicts the pavement conditions in terms of layer moduli, fatigue cracking, rutting and roughness. This software procedure was adopted in order to verify the mechanical properties of the study mixes and the reciprocal relationship between surface layer and pavement structure in terms of fatigue and permanent deformation with defined traffic and environmental conditions. The asphalt mixes studied were used in a pavement structure as surface layer of 60 mm thickness. 
The performance of the pavement was compared to the performance of the same pavement structure where different kinds of asphalt concrete were used as surface layer. In comparison to a conventional asphalt concrete, three eco-friendly materials, two warm mix asphalt and a rubberized asphalt concrete, were analyzed. The First Two Chapters summarize the necessary steps aimed to satisfy the sustainable pavement design procedure. In Chapter I the problem of asphalt pavement eco-compatible design was introduced. The low environmental impact materials such as the Warm Mix Asphalt and the Rubberized Asphalt Concrete were described in detail. In addition the value of a rational asphalt pavement design method was discussed. Chapter II underlines the importance of a deep laboratory characterization based on appropriate materials selection and performance evaluation. In Chapter III, CalME is introduced trough a specific explanation of the different equipped design approaches and specifically explaining the I-R procedure. In Chapter IV, the experimental program is presented with a explanation of test laboratory devices adopted. The Fatigue and Rutting performances of the study mixes are shown respectively in Chapter V and VI. Through these laboratory test data the CalME I-R models parameters for Master Curve, fatigue damage and permanent shear strain were evaluated. Lastly, in Chapter VII, the results of the asphalt pavement structures simulations with different surface layers were reported. For each pavement structure, the total surface cracking, the total rutting, the fatigue damage and the rutting depth in each bound layer were analyzed.
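The incremental-recursive idea (the pavement state produced by one time increment is fed back as input to the next) can be illustrated with a bare skeleton loop; the damage-update rule, the initial modulus and the increment length below are placeholder assumptions and do not reproduce CalME's damage models.

```python
# Skeleton of an incremental-recursive (I-R) simulation loop: the state output of
# one increment is used, recursively, as the input of the next increment.

def update_fatigue_damage(damage, traffic_passes, modulus):
    # Placeholder damage increment: grows with traffic and with a softer layer.
    return min(1.0, damage + 1e-7 * traffic_passes * (3000.0 / modulus))

def run_increments(n_increments=120, passes_per_increment=50_000):
    state = {"modulus_MPa": 3000.0, "fatigue_damage": 0.0}
    history = []
    for month in range(n_increments):
        # 1) Damage for this increment, computed from the *current* state.
        state["fatigue_damage"] = update_fatigue_damage(
            state["fatigue_damage"], passes_per_increment, state["modulus_MPa"])
        # 2) Recursive step: the damaged modulus becomes the input of the next increment.
        state["modulus_MPa"] = 3000.0 * (1.0 - 0.5 * state["fatigue_damage"])
        history.append((month, round(state["modulus_MPa"], 1), round(state["fatigue_damage"], 3)))
    return history

for month, modulus, damage in run_increments()[::24]:
    print(f"month {month:3d}: modulus = {modulus} MPa, fatigue damage = {damage}")
```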
Abstract:
Geometric nonlinearities of flexure hinges introduced by large deflections often complicate the analysis of compliant mechanisms containing such members; therefore, Pseudo-Rigid-Body Models (PRBMs) were proposed and developed by Howell [1994] to analyze the characteristics of slender beams under large deflection. These models, however, fail to approximate the characteristics of deep (short) beams or of other flexure hinges. Lobontiu's work [2001] contributed to the analysis of diverse flexure hinges, but it builds on small-deflection assumptions, which limit the application range and cannot capture the stiffness and stress characteristics of these flexure hinges under large deflection. Therefore, the objective of this thesis is to analyze flexure hinges considering both the effects of large deflection and of shear force, which guides the design of flexure-based compliant mechanisms. The main work conducted in the thesis is outlined as follows. 1. Three popular types of flexure hinges (circular, elliptical and corner-filleted flexure hinges) are first chosen for analysis. 2. A Finite Element Analysis (FEA) method based on commercial software (Comsol) is then used to correct the errors produced by the equations proposed by Lobontiu when the chosen flexure hinges undergo large deformation. 3. Three sets of generic design equations for the three types of flexure hinges are then proposed on the basis of the stiffness and stress characteristics obtained from the FEA results. 4. A flexure-based four-bar compliant mechanism is finally studied and modeled using the proposed generic design equations. The load-displacement relationships are verified by a numerical example. The results show that the maximum error in the moment-rotation relationship is less than 3.4% for a single flexure hinge, and lower than 5% for the four-bar compliant mechanism, compared with the FEA results.
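For context on the pseudo-rigid-body idea mentioned above, a slender cantilever under a transverse end force can be replaced by a rigid link pinned at the characteristic pivot with a torsional spring of stiffness K = γ·KΘ·E·I/L, using the commonly quoted PRBM coefficients γ ≈ 0.85 and KΘ ≈ 2.65; the sketch below uses illustrative dimensions and does not reproduce the thesis's generic design equations for circular, elliptical or corner-filleted hinges.

```python
import math

# Pseudo-rigid-body model sketch for a slender rectangular cantilever under a
# transverse end force. Coefficients, material and dimensions are illustrative.
GAMMA, K_THETA = 0.85, 2.65     # commonly quoted PRBM coefficients for this load case

E = 200e9                        # Pa, steel (assumed)
L, b, h = 0.050, 0.010, 0.001    # m: length, width, thickness (assumed)
I = b * h**3 / 12.0              # second moment of area
F = 5.0                          # N, transverse end force (assumed)

K = GAMMA * K_THETA * E * I / L  # torsional spring stiffness, N*m/rad

# Solve F * gamma * L * cos(theta) = K * theta for the pseudo-rigid-body angle
# with simple fixed-point iteration, then recover the transverse tip deflection.
theta = 0.0
for _ in range(100):
    theta = F * GAMMA * L * math.cos(theta) / K
tip_deflection = GAMMA * L * math.sin(theta)

print(f"PRB angle: {math.degrees(theta):.2f} deg, tip deflection: {tip_deflection * 1000:.2f} mm")
```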
Abstract:
Nanoscience is an emerging and fast-growing field of science that aims to manipulate nanometric objects with dimensions below 100 nm. The top-down approach is currently used to build these types of architectures (e.g. microchips). The miniaturization process cannot proceed indefinitely, due to physical and technical limitations. These limits are focusing interest on the bottom-up approach and on the construction of nano-objects starting from "nano-bricks" such as atoms, molecules or nanocrystals. Unlike atoms, molecules can be "fully programmable" and represent the best choice for building up nanostructures. In the past twenty years many examples of functional nano-devices able to perform simple actions have been reported. Nanocrystals, which are often considered simply nanostructured materials, can be an active part in the development of those nano-devices, in combination with functional molecules. The object of this dissertation is the photophysical and photochemical investigation of nano-objects having molecules and semiconductor nanocrystals (QDs) as components. The first part focuses on the characterization of a bistable rotaxane. This study, carried out in collaboration with the group of Prof. J.F. Stoddart (Northwestern University, Evanston, Illinois, USA), who synthesized the compounds, shows the ability of this artificial machine to operate as a bistable molecular-level memory under kinetic control. The second part concerns the study of the surface properties of luminescent semiconductor nanocrystals (QDs), and in particular the effect of acids and bases on the spectroscopic properties of these nanoparticles. This section also reports the work carried out in the laboratory of Prof. H. Mattoussi (Florida State University, Tallahassee, Florida, USA), where I developed a novel method for the surface decoration of QDs with lipoic acid-based ligands involving the photoreduction of the dithiolane moiety.