804 results for "video as a research tool"
Abstract:
Fraud is a global problem that has attracted increasing attention with the rapid expansion of modern technology and communication. When statistical techniques are used to detect fraud, a critical factor is whether the detection model is accurate enough to classify each case correctly as fraudulent or legitimate. In this context, the concept of bootstrap aggregating (bagging) arises: the basic idea is to generate multiple classifiers by fitting models to several bootstrap replicates of the dataset and then to combine their predictions into a single classification in order to improve accuracy. In this paper we present a pioneering study of the performance of discrete and continuous k-dependence probabilistic networks within the context of bagging classifiers. Through a large simulation study and several real datasets, we found that probabilistic networks are a strong modeling option, with high predictive capacity and a large gain from the bagging procedure when compared to traditional techniques. (C) 2012 Elsevier Ltd. All rights reserved.
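The bagging idea described above can be sketched in a few lines of Python. This is a generic illustration with a toy nearest-centroid base learner and made-up one-dimensional data, not the k-dependence probabilistic networks studied in the paper:

```python
import random
from collections import Counter

class CentroidClassifier:
    """Toy base learner: assigns the class of the nearest class centroid."""
    def fit(self, X, y):
        sums, counts = {}, {}
        for xi, yi in zip(X, y):
            sums[yi] = sums.get(yi, 0.0) + xi
            counts[yi] = counts.get(yi, 0) + 1
        self.centroids = {c: sums[c] / counts[c] for c in sums}
        return self

    def predict(self, x):
        return min(self.centroids, key=lambda c: abs(x - self.centroids[c]))

def bagging_predict(X, y, x_new, n_models=25, seed=0):
    """Fit base learners on bootstrap replicates, combine by majority vote."""
    rng = random.Random(seed)
    votes = []
    for _ in range(n_models):
        # Bootstrap replicate: sample the training set with replacement
        idx = [rng.randrange(len(X)) for _ in range(len(X))]
        model = CentroidClassifier().fit([X[i] for i in idx], [y[i] for i in idx])
        votes.append(model.predict(x_new))
    return Counter(votes).most_common(1)[0][0]

# Toy fraud-style data: low values legitimate (0), high values fraudulent (1)
X = [0.1, 0.2, 0.3, 0.9, 1.0, 1.1]
y = [0, 0, 0, 1, 1, 1]
print(bagging_predict(X, y, 1.05))  # → 1
```

Each bootstrap replicate gives a slightly different classifier; the majority vote reduces the variance of the individual predictions, which is the mechanism behind the accuracy gain the abstract reports.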
Abstract:
Primary voice production occurs in the larynx through vibrational movements carried out by the vocal folds. However, many problems can affect this complex system, resulting in voice disorders. In this context, time-frequency-shape analysis based on phase space embedding plots and nonlinear dynamics methods has been used to evaluate vocal fold dynamics during phonation. For this purpose, the present work used high-speed video to record the vocal fold movements of three subjects and extracted the glottal area time series using an image segmentation algorithm. This signal feeds an optimization method, combining genetic algorithms and a quasi-Newton method, that fits the parameters of a biomechanical model of the vocal folds based on lumped elements (masses, springs and dampers). After optimization, the model is capable of simulating the dynamics of the recorded vocal folds and their glottal pulse. Bifurcation diagrams and phase space analysis were used to evaluate the behavior of this deterministic system under different circumstances. The results show that this methodology can be used to extract some physiological parameters of the vocal folds and to reproduce some of their complex behaviors, contributing to the scientific and clinical evaluation of voice production. (C) 2010 Elsevier Inc. All rights reserved.
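The hybrid optimization strategy (global genetic search followed by quasi-Newton refinement) can be illustrated on a stand-in problem. The objective below is a hypothetical sine-fitting cost with made-up amplitude and frequency, not the actual lumped-element vocal fold model:

```python
import random
import numpy as np
from scipy.optimize import minimize

# Hypothetical stand-in for the glottal-area fitting problem: recover the
# amplitude and frequency of an oscillation from "recorded" samples.
t = np.linspace(0, 1, 200)
target = 1.5 * np.sin(2 * np.pi * 7.0 * t)

def cost(p):
    a, f = p
    return float(np.mean((a * np.sin(2 * np.pi * f * t) - target) ** 2))

def genetic_search(pop_size=40, generations=60, seed=1):
    """Crude GA: keep the best quarter, refill with Gaussian mutations."""
    rng = random.Random(seed)
    pop = [(rng.uniform(0, 3), rng.uniform(1, 15)) for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(pop, key=cost)[: pop_size // 4]
        pop = elite + [
            (max(0.0, p[0] + rng.gauss(0, 0.2)), max(0.1, p[1] + rng.gauss(0, 0.5)))
            for p in (rng.choice(elite) for _ in range(pop_size - len(elite)))
        ]
    return min(pop, key=cost)

coarse = genetic_search()                     # global exploration
fine = minimize(cost, coarse, method="BFGS")  # quasi-Newton local refinement
print(np.round(fine.x, 3))
```

The GA handles the multimodal landscape (many frequency values fit locally); the quasi-Newton step then converges quickly inside the correct basin, which is the rationale for combining the two methods.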
Abstract:
Background: T. harzianum species are well known for their biocontrol activity against many plant pathogens. However, there is a lack of studies concerning its use as a biological control agent against F. solani, a pathogen involved in several crop diseases. In this study, we used suppression subtractive hybridization (SSH) and quantitative real-time PCR (RT-qPCR) to explore changes in T. harzianum gene expression during growth on F. solani cell wall (FSCW) or glucose. RT-qPCR was also used to examine the regulation of 18 genes, potentially involved in biocontrol, during confrontation between T. harzianum and F. solani. Results: Data obtained from the two subtractive libraries were compared after annotation using the Blast2GO suite. A total of 417 and 78 readable EST sequences were annotated in the FSCW and glucose libraries, respectively. Functional annotation of these genes identified diverse biological processes and molecular functions required during T. harzianum growth on FSCW or glucose. We identified various genes of biotechnological value encoding proteins involved in functions such as transport, hydrolysis, adherence, appressorium development and pathogenesis. Fifteen genes were up-regulated and sixteen were down-regulated at at least one time point during growth of T. harzianum on FSCW. During the confrontation assay, most of the genes were up-regulated, mainly after contact, when the interaction was established. Conclusions: This study demonstrates that T. harzianum expresses different genes when grown on FSCW than on glucose. It provides insights into the mechanisms of gene expression involved in mycoparasitism of T. harzianum against F. solani. The identification and evaluation of these genes may contribute to the development of an efficient biological control agent.
Abstract:
This research addresses the application of friction stir welding (FSW) to the titanium alloy Ti–6Al–4V. Friction stir welding is a recent process, developed in the 1990s for aluminum joining, that is increasingly applied in many industries to materials ranging from steel alloys to high performance alloys such as titanium. It is a process under intense development and has economic advantages over conventional welding. For high performance alloys such as titanium, a major problem to overcome is the construction of tools that can withstand the extreme process environment; the only options considered in the literature are a few tungsten alloys. Early experiments with tools made of cemented carbide (WC) showed promising results consistent with the literature. It was initially thought that WC tools could be an option for the FSW process, since the wear resistance of the tool can be improved. Metallographic analysis of the welds showed no primary void (tunneling) defects or similar internal defects due to processing, only defects related to tool wear, which can degrade weld quality. Severe tool wear caused loss of surface quality and inclusion of fragments inside the joint, which should be corrected or mitigated by coating the tool or by replacing cemented carbide with tungsten alloys, as reported in the literature.
Abstract:
Facial expression recognition is one of the most challenging research areas in the image recognition field and has been actively studied since the 1970s. Smile recognition, for instance, has been studied because the smile is considered an important facial expression in human communication and is therefore likely to be useful for human–machine interaction. Moreover, if a smile can be detected and its intensity estimated, new applications become possible in the future.
Abstract:
In recent years, TNFRSF13B coding variants have been implicated by clinical genetics studies in Common Variable Immunodeficiency (CVID), the most common clinically relevant primary immunodeficiency in individuals of European ancestry, but their functional effects on the development of the disease have not been fully established. To examine the potential contribution of such variants to CVID, this study applied the more comprehensive perspective of an evolutionary approach, on the premise that evolutionary genetics methods can help dissect the origin, causes and diffusion of human diseases, representing a powerful tool in human health research as well. For this purpose, the TNFRSF13B coding region was sequenced in 451 healthy individuals belonging to 26 worldwide populations, in addition to 96 controls, 77 CVID and 38 Selective IgA Deficiency (IgAD) individuals from Italy, yielding the first global picture of TNFRSF13B nucleotide diversity and haplotype structure and making it possible to outline its evolutionary history. A slow rate of evolution, both within our species and in comparison with the chimpanzee, low levels of geographical structure in genetic diversity, and the absence of recent population-specific selective pressures were observed for the examined genomic region, suggesting that the geographical distribution of its variability is more plausibly related to its involvement in innate immunity as well, rather than in adaptive immunity only. This, together with the extremely subtle differences observed between disease and healthy samples, suggests that CVID is more likely related to still unknown environmental and genetic factors than to the nature of TNFRSF13B variants alone.
Abstract:
The thesis deals with channel coding theory applied to the upper layers of the protocol stack of a communication link and is the outcome of four years of research activity. A specific aspect of this activity has been the continuous interaction between the natural curiosity of academic blue-sky research and the system-oriented design deriving from collaboration with European industry in the framework of European funded research projects. In this dissertation, classical channel coding techniques, traditionally applied at the physical layer, find their application at the upper layers, where the encoding units (symbols) are packets of bits rather than single bits; such upper layer coding techniques are therefore usually referred to as packet layer coding. The rationale behind the adoption of packet layer techniques is that physical layer channel coding is a suitable countermeasure against small-scale fading but is less efficient against large-scale fading. This is mainly due to limited time diversity, a consequence of keeping the physical layer interleaver reasonably small so as to avoid increasing modem complexity and the latency of all services. Packet layer techniques, thanks to their longer codeword duration (each codeword is composed of several packets of bits), offer intrinsically longer protection against long fading events. Furthermore, being implemented at the upper layers, packet layer techniques have the indisputable advantages of simpler implementation (very close to a software implementation) and of selective applicability to different services, thus enabling a better match with service requirements (e.g. latency constraints).
Packet layer coding has been widely recognized in recent communication standards as a viable and efficient coding solution: Digital Video Broadcasting standards such as DVB-H, DVB-SH and DVB-RCS mobile, as well as 3GPP standards (MBMS), employ packet coding techniques working at layers higher than the physical one. In this framework, the aim of the research work has been the study of state-of-the-art coding techniques working at the upper layer, the evaluation of their performance in realistic propagation scenarios, and the design of new coding schemes for upper layer applications. After a review of the most important packet layer codes, i.e. Reed-Solomon, LDPC and Fountain codes, the thesis focuses on the performance evaluation of ideal codes (i.e. Maximum Distance Separable codes) working at the upper layer. In particular, we analyze the performance of UL-FEC techniques in Land Mobile Satellite channels. We derive an analytical framework, a useful tool for system design, that allows the performance of the upper layer decoder to be predicted. We also analyze a system in which upper layer and physical layer codes work together, and we derive the optimal split of redundancy when a frequency non-selective, slowly varying fading channel is taken into account. The whole analysis is supported and validated through computer simulation. In the last part of the dissertation, we propose LDPC Convolutional Codes (LDPCCC) as a possible coding scheme for future UL-FEC applications. Since one of the main drawbacks of packet layer codes is their large decoding latency, we introduce a latency-constrained decoder for LDPCCC (called the windowed erasure decoder). We analyze the performance of state-of-the-art LDPCCC when our decoder is adopted. Finally, we propose a design rule that allows performance and latency to be traded off.
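For an ideal (n, k) MDS code on a memoryless packet-erasure channel, decoding succeeds whenever at most n − k packets are lost, so the residual failure probability follows directly from the binomial distribution. A minimal Python sketch of this standard result (an illustration only, not the thesis' analytical framework, which also covers correlated Land Mobile Satellite fading):

```python
from math import comb

def mds_failure_prob(n, k, p):
    """Probability that an ideal (n, k) MDS packet code cannot decode on a
    memoryless packet-erasure channel with erasure probability p:
    decoding fails when more than n - k of the n packets are lost."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(n - k + 1, n + 1))

# Uncoded transmission of k packets vs. an MDS code with 50% redundancy
p = 0.1   # packet erasure probability
k = 20    # information packets
uncoded = 1 - (1 - p)**k          # any single loss is unrecoverable
coded = mds_failure_prob(30, k, p)
print(f"uncoded: {uncoded:.3f}, MDS(30,20): {coded:.3e}")
```

With these example numbers, the uncoded block fails most of the time while the coded one almost never does, which quantifies the long-fade protection argument made in the text.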
Abstract:
This work was carried out by the author during his PhD course in Electrical, Computer Science and Telecommunication Engineering at the University of Bologna, Faculty of Engineering, Italy. All the documentation reported here summarizes years of work under the supervision of Prof. Oreste Andrisano, coordinator of the Wireless Communication Laboratory (WiLab) in Bologna. The subject of this thesis is the transmission of video over heterogeneous networks and, in particular, over a wireless channel. All the instrumentation used for the characterization of the telecommunication systems belongs to CNR (National Research Council), CNIT (Italian Inter-University Center), and DEIS (Dept. of Electrical, Computer Science, and Systems). From November 2009 to July 2010, the author worked abroad, in collaboration with DLR (German Aerospace Center) in Munich, Germany, in the channel coding area, developing a general purpose decoder machine for a large family of iterative codes. The author also produced a patent concerning Doubly Generalized Low-Density Parity-Check codes, as well as scientific papers published in IEEE journals and conference proceedings.
Abstract:
The purpose of this research is to contribute to the literature on organizational demography and new product development by investigating how diverse individual career histories affect team performance. We also highlight the importance of considering the institutional context and the specific labour market arrangements in which a team is embedded in order to interpret correctly the effect of career-related diversity measures on performance. The empirical setting of the study is the videogame industry and the teams in charge of developing new game titles. Videogame development teams are an ideal setting for investigating the influence of career histories on team performance, since videogames are developed by multidisciplinary teams composed of specialists with a wide variety of technical and artistic backgrounds who carry out a significant amount of creative thinking. We investigate our research question both with quantitative methods and with a case study of the Japanese videogame industry, one of the most innovative in this sector. Our results show that career histories, in terms of occupational diversity, prior functional diversity and prior product diversity, usually have a positive influence on team performance. However, when the moderating effect of the institutional setting is taken into account, career diversity has different or even opposite effects on team performance, depending on the specific national context in which a team operates.
Abstract:
The surface electrocardiogram (ECG) is an established diagnostic tool for the detection of abnormalities in the electrical activity of the heart. The interest of the ECG, however, extends beyond the diagnostic purpose. In recent years, studies in cognitive psychophysiology have related heart rate variability (HRV) to memory performance and mental workload. The aim of this thesis was to analyze the variability of surface ECG derived rhythms, at two different time scales: the discrete-event time scale, typical of beat-related features (Objective I), and the “continuous” time scale of separated sources in the ECG (Objective II), in selected scenarios relevant to psychophysiological and clinical research, respectively. Objective I) Joint time-frequency and non-linear analysis of HRV was carried out, with the goal of assessing psychophysiological workload (PPW) in response to working memory engaging tasks. Results from fourteen healthy young subjects suggest the potential use of the proposed indices in discriminating PPW levels in response to varying memory-search task difficulty. Objective II) A novel source-cancellation method based on morphology clustering was proposed for the estimation of the atrial wavefront in atrial fibrillation (AF) from body surface potential maps. Strong direct correlation between spectral concentration (SC) of atrial wavefront and temporal variability of the spectral distribution was shown in persistent AF patients, suggesting that with higher SC, shorter observation time is required to collect spectral distribution, from which the fibrillatory rate is estimated. This could be time and cost effective in clinical decision-making. The results held for reduced leads sets, suggesting that a simplified setup could also be considered, further reducing the costs. In designing the methods of this thesis, an online signal processing approach was kept, with the goal of contributing to real-world applicability. 
An algorithm for automatic assessment of ambulatory ECG quality, and an automatic ECG delineation algorithm were designed and validated.
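As an illustration of the frequency-domain side of Objective I, a minimal HRV sketch is given below: it resamples the RR-interval tachogram to a uniform grid, estimates the power spectral density with Welch's method, and integrates the conventional LF (0.04–0.15 Hz) and HF (0.15–0.4 Hz) bands. The synthetic tachogram is made up for illustration; this is not the thesis' joint time-frequency or non-linear pipeline:

```python
import numpy as np
from scipy.signal import welch

def lf_hf_ratio(rr_s, fs=4.0):
    """Illustrative frequency-domain HRV index from RR intervals (seconds)."""
    t = np.cumsum(rr_s)                      # beat times (s)
    grid = np.arange(t[0], t[-1], 1.0 / fs)  # uniform time grid
    rr_u = np.interp(grid, t, rr_s)          # evenly sampled tachogram
    f, pxx = welch(rr_u - rr_u.mean(), fs=fs, nperseg=min(256, len(rr_u)))
    df = f[1] - f[0]
    lf = pxx[(f >= 0.04) & (f < 0.15)].sum() * df  # LF band power
    hf = pxx[(f >= 0.15) & (f < 0.40)].sum() * df  # HF band power
    return lf / hf

# Synthetic tachogram: 0.8 s mean RR with a dominant 0.25 Hz (HF) modulation
n = 300
beat_t = np.cumsum(np.full(n, 0.8))
rr = 0.8 + 0.05 * np.sin(2 * np.pi * 0.25 * beat_t)
print(lf_hf_ratio(rr))
```

Because the synthetic modulation sits in the HF band, the ratio comes out well below one; a sympathetically dominated recording (e.g. under mental workload) would shift power toward LF and raise it.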
Abstract:
Since the differential diagnosis of nocturnal paroxysmal episodes relies on VEPSG (video-polysomnography), and given the limits of that technique, the present project aims to define the diagnostic yield of alternatives to VEPSG: clinical history, home-made video and interictal EEG. Thirteen consecutive patients referred to our Department for nocturnal paroxysmal episodes were recruited. Each patient underwent a standardized diagnostic protocol. Five physicians expert in epilepsy and sleep medicine were asked to formulate a diagnostic orientation on the basis of clinical history, interictal EEG, home-made video and VEPSG. From these diagnostic orientations, the diagnostic yield of the examined procedures was computed against VEPSG in terms of diagnostic accuracy. A diagnosis of Nocturnal Frontal Lobe Epilepsy was reached in 6 patients and of parasomnia in 2, while in 5 patients the diagnosis remained uncertain. The diagnostic accuracy of each procedure was moderate, with slight differences among procedures (61.5% clinical history; 66% home-made video; 69.2% interictal EEG). It is essential to further improve the diagnostic accuracy of clinical history, interictal EEG and home-made video, which can prove crucial when the diagnosis is uncertain or VEPSG is unavailable.
Abstract:
Neurorehabilitation is a process through which individuals affected by neurological diseases aim at complete recovery or at achieving their optimal physical, mental and social well-being. Essential elements of effective rehabilitation are: clinical assessment by a multidisciplinary team, a targeted rehabilitation program, and the evaluation of outcomes through scientifically and clinically appropriate measures. The main objective of this thesis was to develop quantitative methods and tools for the motor treatment and assessment of neurological patients. Conventional rehabilitation treatments require neurological patients to perform repetitive exercises, reducing their motivation. Virtual reality and feedback can engage them in the treatment while allowing repeatability and standardization of the protocols. A tool based on augmented feedback for trunk control was developed and evaluated. Moreover, virtual reality makes it possible to individualize treatment according to the patient's needs. A virtual application for gait rehabilitation was developed and tested during training with multiple sclerosis patients, assessing its feasibility and acceptance and demonstrating the efficacy of the treatment. Quantitative assessment of patients' motor abilities is performed with motion capture systems. Since their use in clinical practice is limited, a methodology based on inertial sensors was proposed to evaluate arm swing in parkinsonian subjects. These sensors are small, accurate and flexible, but they accumulate errors over long measurements.
This problem was addressed, and the results suggest that, if the sensor is placed on the foot and the accelerations are integrated starting from the mid-stance phase, the error and its consequences on the estimation of spatial gait parameters remain small. Finally, a validation of the Kinect for gait tracking in a virtual environment was presented. Preliminary results allow the field of use of the sensor in rehabilitation to be defined.
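The mid-stance integration idea can be sketched in one dimension: at mid-stance the foot velocity can be assumed zero, so each stride is integrated independently and drift does not accumulate across strides. The sketch below assumes gravity-compensated acceleration and known mid-stance instants, both hypothetical simplifications of the actual inertial-sensor pipeline:

```python
import numpy as np

def stride_displacement(acc, fs, midstance_idx):
    """Integrate 1-D foot acceleration between consecutive mid-stance
    instants, where velocity is assumed zero, to get per-stride displacement."""
    dt = 1.0 / fs
    strides = []
    for s0, s1 in zip(midstance_idx[:-1], midstance_idx[1:]):
        a = acc[s0:s1]
        v = np.cumsum(a) * dt               # velocity; v = 0 at mid-stance
        v -= np.linspace(0, v[-1], len(v))  # linear drift removal: v ends at 0 too
        strides.append(np.sum(v) * dt)      # displacement over the stride
    return np.array(strides)

# Toy signal: two identical 1 s strides with a sinusoidal accel/decel profile
fs = 100
t = np.arange(0, 1.0, 1.0 / fs)
one_stride = np.sin(2 * np.pi * 1.0 * t)   # accelerate, then decelerate
acc = np.concatenate([one_stride, one_stride])
print(stride_displacement(acc, fs, [0, 100, 200]))
```

Because integration restarts at every mid-stance, a small accelerometer bias corrupts each stride by a bounded amount instead of growing quadratically over the whole walk, which is the point made in the abstract.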
Abstract:
With the advent of the Internet, a powerful technological tool for disseminating information and communicating at a distance, learning habits have also changed: even schools tend to no longer use traditional textbooks, relying instead on devices from which books, lecture notes, tests, videos and every other kind of learning material can be downloaded in electronic format. This has given rise to a genuinely new way of learning called e-learning, faster, more convenient and richer in alternatives than the old offline model distributed initially on floppy disks and later on CD-ROMs. E-learning means electronic-based learning: a teaching methodology that exploits, and is facilitated by, resources and services available and virtually accessible over the network. There are currently numerous e-learning platforms, one of which is the focus of this thesis: the authoring tool AContent. This thesis describes the design and implementation of copyright-policy management for AContent. The goal is to make it possible to assign a copyright to any kind of teaching material created, uploaded and/or shared on the platform. The idea is therefore to let users choose among several preset copyrights, based on standard author's-rights licenses, while also offering the option of entering a custom policy.
Abstract:
The subject of this work is a detailed analysis of a small sample of television recordings of Beethoven's Fifth Symphony, with the aim of bringing out their constructive mechanism, in both a technical and a cultural sense. The premise of the investigation is that each recording is the product of a specific authorship superimposed on the two already present in every performance of the Fifth Symphony, that of the composer and that of the performer; it is therefore defined as a "third authorship", a notion summarizing the sum of specific contributions that lead to the production of a recording (music consultant, director, camera operators). The research examines the relationships established case by case among the three authorial levels, but it does not aim at a philological reconstruction: the goal is not to reconstruct the intentions of the material author, but rather to bring out, from examination of the recording as it reaches us today (often in a version already remediated several times, generally as a commercial DVD), technical, musical and cultural choices that may even have been unconscious. The detailed analysis of the recordings confirms the initial hypothesis that a sort of conventional system, almost a "solita forma" or standardized approach, underlies most of the recordings. The elements that can be called conventional, in both their presence and their treatment, are various, but two aspects in particular appear constitutive: the link with the ritual of the concert, which is respected and re-embodied on television through the construction of its own specific aura; and the presence of an implicit and essentially inescapable paradigm that places most television recordings within the conception of classical music as pure, abstract music, to be understood on its own terms.
Abstract:
Hair cortisol is a novel marker of long-term cortisol secretion that is free from many of the methodological caveats associated with other matrices such as plasma, saliva, urine, milk and faeces. For decades, hair analysis has been used successfully in forensic science and toxicology to evaluate exposure to exogenous substances and to assess endogenous steroid hormones. Evaluation of cortisol in the hair matrix began about a decade ago and has undergone remarkable development over the past five years, advancing knowledge and establishing this method as a new and efficient way to study hypothalamic-pituitary-adrenal (HPA) axis activity over long time periods. In farm animals, certain environmental or management conditions can activate the HPA axis. Given the importance of cortisol in monitoring HPA axis activity, a first approach involved studying the distribution of hair cortisol concentrations (HCC) in healthy dairy cows, establishing the physiological range of variation of this hormone. HCC were also significantly influenced by changes in environmental conditions, and significantly higher HCC were detected in clinically or physiologically compromised cows, suggesting that these animals were subject to repeated HPA axis activation. Additionally, crossbred F1 heifers showed significantly lower HCC than purebred animals, and a breed influence was also seen on the HPA axis activity stimulated by an environmental change, indicating a higher level of resilience and better adaptability to the environment in certain genotypes. Hair also proved to be an excellent matrix for studying HPA axis activation during the perinatal period. The use of hair analysis in research holds great promise to significantly enhance current understanding of the role of the HPA axis over long periods of time.