893 results for non-conscious cognitive processing (NCCP) time.


Relevance:

40.00%

Publisher:

Abstract:

This thesis explores the capabilities of heterogeneous multi-core systems based on multiple Graphics Processing Units (GPUs) in a standard desktop framework. Multi-GPU accelerated desk-side computers are an appealing alternative to other high-performance computing (HPC) systems: being composed of commodity hardware components fabricated in large quantities, their price-performance ratio is unparalleled in the world of high performance computing. Essentially bringing “supercomputing to the masses”, this opens up new possibilities for application fields where investing in HPC resources had previously been considered unfeasible. One of these is the field of bioelectrical imaging, a class of medical imaging technologies that occupy a low-cost niche next to million-dollar systems like functional Magnetic Resonance Imaging (fMRI). In the scope of this work, several computational challenges encountered in bioelectrical imaging are tackled with this new kind of computing resource, striving to help these methods approach their true potential. Specifically, the following main contributions were made: First, a novel dual-GPU implementation of parallel triangular matrix inversion (TMI) is presented, addressing a crucial kernel in the computation of multi-mesh head models for electroencephalographic (EEG) source localization. This includes not only a highly efficient implementation of the routine itself, achieving excellent speedups over an optimized CPU implementation, but also a novel GPU-friendly compressed storage scheme for triangular matrices. Second, a scalable multi-GPU solver for non-Hermitian linear systems was implemented. It is integrated into a simulation environment for electrical impedance tomography (EIT) that requires frequent solution of complex systems with millions of unknowns, a task that this solver can perform within seconds. In terms of computational throughput, it outperforms not only a highly optimized multi-CPU reference but related GPU-based work as well. Finally, GPU-accelerated graphical software for real-time EEG source localization was implemented. Thanks to the acceleration, it can meet real-time requirements at unprecedented anatomical detail while running more complex localization algorithms. Additionally, a novel implementation to extract anatomical priors from static Magnetic Resonance (MR) scans has been included.
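
The abstract does not detail the compressed storage scheme itself; for orientation, a minimal sketch of the conventional column-major packed layout for lower-triangular matrices, the kind of baseline such GPU-friendly schemes build on, could look as follows (Python, names illustrative):

# Minimal sketch: standard packed storage for a lower-triangular matrix.
# Element (i, j) with i >= j of an n x n lower-triangular matrix is stored
# in a flat array of length n*(n+1)/2, column by column.

def packed_index(i: int, j: int, n: int) -> int:
    """Flat index of element (i, j), i >= j, in column-major packed storage."""
    assert 0 <= j <= i < n
    # Column j starts after the j previous columns, which hold
    # n + (n-1) + ... + (n-j+1) = j*n - j*(j-1)/2 elements.
    return j * n - j * (j - 1) // 2 + (i - j)

def pack_lower(a):
    """Pack the lower triangle of a square matrix (list of rows) into a flat list."""
    n = len(a)
    return [a[i][j] for j in range(n) for i in range(j, n)]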

Relevance:

40.00%

Publisher:

Abstract:

The surface electrocardiogram (ECG) is an established diagnostic tool for the detection of abnormalities in the electrical activity of the heart. The interest of the ECG, however, extends beyond diagnosis. In recent years, studies in cognitive psychophysiology have related heart rate variability (HRV) to memory performance and mental workload. The aim of this thesis was to analyze the variability of surface-ECG-derived rhythms at two different time scales: the discrete-event time scale, typical of beat-related features (Objective I), and the “continuous” time scale of separated sources in the ECG (Objective II), in selected scenarios relevant to psychophysiological and clinical research, respectively. Objective I) Joint time-frequency and non-linear analysis of HRV was carried out with the goal of assessing psychophysiological workload (PPW) in response to working-memory-engaging tasks. Results from fourteen healthy young subjects suggest the potential use of the proposed indices in discriminating PPW levels in response to varying memory-search task difficulty. Objective II) A novel source-cancellation method based on morphology clustering was proposed for the estimation of the atrial wavefront in atrial fibrillation (AF) from body surface potential maps. A strong direct correlation between the spectral concentration (SC) of the atrial wavefront and the temporal variability of the spectral distribution was shown in persistent AF patients, suggesting that with higher SC, a shorter observation time is required to collect the spectral distribution from which the fibrillatory rate is estimated. This could save time and costs in clinical decision-making. The results held for reduced lead sets, suggesting that a simplified setup could also be considered, further reducing the costs. In designing the methods of this thesis, an online signal-processing approach was maintained, with the goal of contributing to real-world applicability. An algorithm for automatic assessment of ambulatory ECG quality and an automatic ECG delineation algorithm were designed and validated.
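
The exact HRV indices are not listed in the abstract; as an illustration of the frequency-domain features such analyses typically start from, a minimal sketch computing the classical LF/HF power ratio from an RR-interval series (standard band limits, SciPy-based) might be:

import numpy as np
from scipy.interpolate import interp1d
from scipy.signal import welch

def lf_hf_ratio(rr_s, fs=4.0):
    """Classical frequency-domain HRV: LF (0.04-0.15 Hz) / HF (0.15-0.40 Hz) power.
    rr_s: RR intervals in seconds, one value per beat."""
    t = np.cumsum(rr_s)                       # beat occurrence times
    grid = np.arange(t[0], t[-1], 1.0 / fs)   # uniform resampling grid
    rr_u = interp1d(t, rr_s, kind='cubic')(grid)
    f, pxx = welch(rr_u - rr_u.mean(), fs=fs, nperseg=min(256, len(rr_u)))
    lf_band = (f >= 0.04) & (f < 0.15)
    hf_band = (f >= 0.15) & (f < 0.40)
    lf = np.trapz(pxx[lf_band], f[lf_band])
    hf = np.trapz(pxx[hf_band], f[hf_band])
    return lf / hf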

Relevance:

40.00%

Publisher:

Abstract:

The Aerodyne Time-of-Flight Aerosol Mass Spectrometer (ToF-AMS) is a further development of the Aerodyne aerosol mass spectrometer (Q-AMS), which is well characterized and deployed worldwide. Both instruments use an aerodynamic lens, aerodynamic particle sizing, thermal vaporization, and electron-impact ionization. In contrast to the Q-AMS, where a quadrupole mass spectrometer is used to analyze the ions, the ToF-AMS employs a time-of-flight mass spectrometer. In the present work, laboratory experiments and field campaigns demonstrate that the ToF-AMS is suitable for quantitative measurement of the chemical composition of aerosol particles with high time and size resolution. In addition, a complete scheme for ToF-AMS data analysis is presented, developed to obtain quantitative and meaningful results from the recorded raw data of both field campaigns and laboratory experiments. This scheme is based on the characterization experiments performed in the course of this work. It comprises the corrections that must be applied and the calibrations that must be carried out in order to extract reliable results from the raw data. Considerable effort was also invested in the development of a reliable and user-friendly data analysis program, which can be used for automatic and systematic ToF-AMS data analysis and correction.
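
The quantification equation is not reproduced in the abstract; the relation commonly used in AMS data analysis in the literature converts summed ion rates into mass concentrations roughly as below (a sketch of the published form, not necessarily this thesis's exact implementation):

# Sketch of the standard AMS quantification relation (literature form):
#   C_s = 1e12 * MW_NO3 / (CE * RIE_s * IE_NO3 * Q * N_A) * sum(I_s)
# with I_s the ion rates of species s (ions/s), IE_NO3 the ionization
# efficiency calibrated with nitrate, RIE_s the relative ionization
# efficiency of s, CE the collection efficiency, Q the sample flow (cm^3/s).

N_A = 6.022e23   # Avogadro's number, molecules/mol
MW_NO3 = 62.0    # g/mol, molar mass of nitrate

def ams_mass_conc(ion_rates, ie_no3, rie_s, ce, q_cm3_s):
    """Mass concentration of species s in ug/m^3 from summed ion rates (ions/s)."""
    return 1e12 * MW_NO3 / (ce * rie_s * ie_no3 * q_cm3_s * N_A) * sum(ion_rates)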

Relevance:

40.00%

Publisher:

Abstract:

A broad variety of solid-state NMR techniques were used to investigate the chain dynamics in several polyethylene (PE) samples, including ultrahigh-molecular-weight PEs (UHMW-PEs) and low-molecular-weight PEs (LMW-PEs). By changing the processing history, i.e. melt/solution crystallization and drawing processes, these samples acquire different morphologies, leading to different molecular dynamics. Owing to their long-chain nature, the molecular dynamics of polyethylene can be divided into local fluctuations and long-range motion, and NMR allows these kinds of molecular dynamics to be monitored separately. In this work the local chain dynamics in the non-crystalline regions of the polyethylene samples were investigated by measuring the 1H-13C heteronuclear dipolar coupling and the 13C chemical shift anisotropy (CSA). By analyzing the motionally averaged 1H-13C heteronuclear dipolar coupling and 13C CSA, information about the local anisotropy and geometry of the motion was obtained. Taking advantage of the large difference between the 13C T1 relaxation times in the crystalline and non-crystalline regions of PE, a 1D 13C MAS exchange experiment was used to investigate the cooperative chain motion between these regions. The different chain organizations in the non-crystalline regions were used to explain the relationship between local fluctuation and long-range motion in the samples. Put simply, the cooperative chain motion between the crystalline and non-crystalline regions of PE results in the experimentally observed diffusive behavior of the PE chain. The morphological influences on this diffusive motion are discussed; the morphological factors include lamellar thickness, chain organization in the non-crystalline regions, and chain entanglements. The thermodynamics of the diffusive motion in melt- and solution-crystallized UHMW-PEs is discussed, revealing entropy-controlled features of the chain diffusion in PE. This thermodynamic consideration explains the counterintuitive relationship between the local fluctuation and the long-range motion of the samples. Using the chain diffusion coefficient, the rates of the jump motion in the crystals of melt-crystallized PE have been calculated. A concept of "effective" jump motion is proposed to explain the difference between the values derived from the chain diffusion coefficients and those reported in the literature. The observations of this thesis clearly demonstrate the strong relationship between sample morphology and chain dynamics: the morphologies governed by the processing history subject the molecular chains to different spatial constraints, which in turn shape the local and long-range chain dynamics. This knowledge of the morphological influence on microscopic chain motion has many implications for our understanding of the alpha-relaxation process in PE and related phenomena such as crystal thickening, the drawability of PE, the easy creep of PE fibers, etc.
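
As a worked illustration of the last step, converting a chain diffusion coefficient into a jump rate, the standard one-dimensional random-walk relation can be used (a sketch; the thesis's "effective" jump concept refines exactly this kind of estimate):

# One-dimensional random-walk estimate of the jump rate in PE crystals:
#   D = a**2 * k / 2   =>   k = 2 * D / a**2
# where a is the chain displacement per jump (one CH2 unit, c/2 ~ 1.27 A
# along the chain axis) and D the chain diffusion coefficient.

A_JUMP = 1.27e-10  # m, displacement per jump along the chain (c/2 of the PE cell)

def jump_rate(diffusion_coeff_m2_s: float, a: float = A_JUMP) -> float:
    """Jump rate k (1/s) from a 1D chain diffusion coefficient (m^2/s)."""
    return 2.0 * diffusion_coeff_m2_s / a**2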

Relevance:

40.00%

Publisher:

Abstract:

Perfusion CT imaging of the liver has the potential to improve the evaluation of tumour angiogenesis. Quantitative parameters can be obtained by applying mathematical models to the Time Attenuation Curve (TAC). However, accurate quantification of perfusion parameters is still difficult, owing, for example, to the algorithms employed, the mathematical model, the patient’s weight and cardiac output, and the acquisition system. In this thesis, new parameters and alternative methodologies for liver perfusion CT are presented in order to investigate the causes of variability in this technique. First, analyses were carried out to assess the variability related to the mathematical model used to compute arterial Blood Flow (BFa) values. Results were obtained by implementing algorithms based on the “maximum slope method” and on a “dual-input one-compartment model”. Statistical analysis on simulated data demonstrated that the two methods are not interchangeable; the slope method, however, is always applicable in a clinical context. Next, the variability related to TAC processing in the application of the slope method was analyzed. Comparison with manual selection made it possible to identify the best automatic algorithm for computing BFa. The consistency of a Standardized Perfusion Value (SPV) was evaluated and a simplified calibration procedure was proposed. Finally, the quantitative value of the perfusion maps was analyzed. The ROI approach and the map approach provide correlated BFa values, which means that the pixel-by-pixel algorithm gives reliable quantitative results; in the pixel-by-pixel approach, too, the slope method gives better results. In conclusion, the development of new automatic algorithms for a consistent computation of BFa and the definition of a simplified technique to compute the SPV parameter represent an improvement in the field of liver perfusion CT analysis.
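
For reference, the maximum slope method estimates BFa as the peak gradient of the tissue TAC normalized by the peak arterial enhancement; a minimal sketch (illustrative array names):

import numpy as np

def bfa_max_slope(tissue_tac, aortic_tac, dt_s):
    """Arterial blood flow per unit volume by the maximum slope method:
    BFa = max d(tissue TAC)/dt / peak aortic enhancement.
    tissue_tac, aortic_tac: enhancement curves (HU), sampled every dt_s seconds."""
    max_slope = np.max(np.gradient(tissue_tac, dt_s))  # HU / s
    return max_slope / np.max(aortic_tac)              # 1/s (rescaled to mL/min/100 mL in practice)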

Relevance:

40.00%

Publisher:

Abstract:

Ultrasound imaging is widely used in medical diagnostics, being the fastest, least invasive, and least expensive imaging modality. However, ultrasound images are intrinsically difficult to interpret. In this scenario, Computer Aided Detection (CAD) systems can be used to support physicians during diagnosis by providing a second opinion. This thesis discusses efficient ultrasound processing techniques for computer-aided medical diagnostics, focusing on two major topics: (i) Ultrasound Tissue Characterization (UTC), aimed at characterizing and differentiating between healthy and diseased tissue; (ii) Ultrasound Image Segmentation (UIS), aimed at detecting the boundaries of anatomical structures in order to automatically measure organ dimensions and compute clinically relevant functional indices. The research on UTC produced a CAD tool for prostate cancer detection that improves the biopsy protocol. In particular, this thesis contributes: (i) the development of a robust classification system; (ii) the exploitation of parallel computing on GPUs for real-time performance; (iii) the introduction of both an innovative Semi-Supervised Learning algorithm and a novel supervised/semi-supervised learning scheme for CAD system training that improve system performance while reducing the data-collection effort and avoiding wasting collected data. The tool provides physicians with a risk map highlighting suspect tissue areas, allowing them to perform a lesion-directed biopsy. Clinical validation demonstrated the system's validity as a diagnostic support tool and its effectiveness at reducing the number of biopsy cores required for an accurate diagnosis. For UIS, the research developed a heart-disease diagnostic tool based on Real-Time 3D Echocardiography. The contributions of this thesis to this application are: (i) the development of an automated GPU-based level-set segmentation framework for 3D images; (ii) the application of this framework to myocardium segmentation. Experimental results showed the high efficiency and flexibility of the proposed framework, and its effectiveness as a tool for quantitative analysis of 3D cardiac morphology and function was demonstrated through clinical validation.
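
The thesis's framework runs on the GPU, but the level-set update it parallelizes can be sketched schematically in a few lines (a plain explicit step without upwinding, not the actual implementation):

import numpy as np

def level_set_step(phi, speed, dt=0.1):
    """One explicit update of the level-set equation  phi_t = F * |grad phi|.
    phi:   signed-distance-like function (3D array); its zero level is the surface
    speed: pointwise speed F, e.g. image-driven (positive expands, negative shrinks).
    Upwind differencing and reinitialization are omitted for brevity."""
    grads = np.gradient(phi)                      # one array per axis
    grad_norm = np.sqrt(sum(g**2 for g in grads))
    return phi + dt * speed * grad_norm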

Relevance:

40.00%

Publisher:

Abstract:

This thesis studies a method for modelling and virtualizing, through Matlab algorithms, the harmonic distortions of a nonlinear audio device, i.e. a "tool" that, when driven by an audio signal, modifies it by introducing components that were not present before. The device chosen for this study is the BOSS SD-1 Super OverDrive pedal for electric guitar, and the mathematical tool that provides its model is the Volterra series expansion. The Volterra series is widely used in the study of nonlinear physical systems whenever one wants to model a system that presents itself as a "black box". The Nonlinear Convolution method designed by Angelo Farina has successfully applied this expansion to musical acoustics as well: using an easily realizable measurement technique and the model provided by the diagonal Volterra series, the method characterizes a nonlinear audio device by means of the nonlinear impulse responses that the device produces in response to a suitable test signal (the Exponential Sine Sweep). The impulse responses of the device are then used to derive the Volterra kernels of the series. This method allowed the University of Bologna to obtain a patent for software that virtualizes the nonlinearities of an audio system in post-processing. This thesis resumes the work that led to the patent and introduces two innovations: the test signal was changed (a Synchronized Sine Sweep was used in place of the Exponential Sine Sweep), and a first attempt was made to steer the virtualization towards real-time processing by implementing a (post-processing) procedure that creates the kernels as a function of the volume given as input to the nonlinear device.
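
For orientation, the Exponential Sine Sweep that the original method uses as a test signal follows Farina's closed form, sketched here (parameter names illustrative):

import numpy as np

def exponential_sine_sweep(f1, f2, duration, fs):
    """Farina's Exponential Sine Sweep from f1 to f2 Hz over `duration` seconds:
    x(t) = sin( 2*pi*f1*L * (exp(t/L) - 1) ),  L = duration / ln(f2/f1)."""
    t = np.arange(int(duration * fs)) / fs
    L = duration / np.log(f2 / f1)
    return np.sin(2 * np.pi * f1 * L * (np.exp(t / L) - 1.0))

# Example: a 10 s sweep from 20 Hz to 20 kHz at 48 kHz sampling rate.
# sweep = exponential_sine_sweep(20.0, 20000.0, 10.0, 48000)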

Relevance:

40.00%

Publisher:

Abstract:

Wireless Sensor Networks (WSNs) offer a new solution for distributed monitoring, processing, and communication. The first issue addressed here is the stringent energy constraints to which sensing nodes are typically subjected: WSNs are often battery powered and placed where it is not possible to recharge or replace batteries. Energy can be harvested from the external environment, but it is a limited resource that must be used efficiently, and energy efficiency is a key requirement for a credible WSN design. From the power source's perspective, aggressive energy management techniques remain the most effective way to prolong the lifetime of a WSN. A new adaptive algorithm is presented that minimizes the consumption of wireless sensor nodes in sleep mode when the power source has to be regulated using DC-DC converters. The second aspect addressed is time synchronisation in WSNs, which are used for real-world applications where physical time plays an important role. An innovative low-overhead synchronisation approach is presented, based on a Temperature Compensation Algorithm (TCA). The last aspect addressed relates to self-powered WSNs with Energy Harvesting (EH) solutions. Wireless sensor nodes with EH require some form of energy storage, which enables systems to continue operating during periods of insufficient environmental energy; however, the size of the energy storage strongly restricts the use of WSNs with EH in real-world applications. A new approach is presented that enables computation to be sustained during intermittent power supply. The approaches discussed are applied to real-world WSN scenarios. The first concerns the experience gathered during a European project (the 3ENCULT Project) on the design and implementation of an innovative network for monitoring heritage buildings. The second concerns the experience with Telecom Italia on the design of smart energy meters for monitoring the usage of household appliances.
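
The TCA itself is not detailed in the abstract; its underlying idea, correcting the well-known parabolic temperature drift of tuning-fork crystals with an on-board temperature reading, can be sketched as follows (typical datasheet coefficients, not the thesis's values):

# Sketch of temperature-compensated timekeeping for a 32 kHz tuning-fork crystal.
# Such crystals drift roughly as  df/f = BETA * (T - T0)**2  around a turnover
# temperature T0; the values below are typical datasheet figures.

BETA = -0.034e-6   # 1/degC^2, typical parabolic temperature coefficient
T0 = 25.0          # degC, typical turnover temperature

def compensated_interval(raw_ticks, nominal_hz, temp_c):
    """Correct an interval measured in crystal ticks using the temperature model."""
    rel_error = BETA * (temp_c - T0) ** 2        # fractional frequency offset
    actual_hz = nominal_hz * (1.0 + rel_error)   # frequency the crystal actually runs at
    return raw_ticks / actual_hz                 # corrected interval in seconds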

Relevance:

40.00%

Publisher:

Abstract:

In the domain of safety-critical embedded systems, the design process for applications is highly complex. For a given hardware architecture, electronic control units can be upgraded so that all existing processes and signals execute on time. The timing requirements are strict and must be met in every periodic recurrence of the processes, since guaranteeing concurrent execution is of utmost importance. Existing approaches can compute design alternatives quickly, but they do not guarantee that the cost of the necessary hardware changes is minimal. We present an approach that computes cost-minimal solutions to this problem satisfying all timing constraints. Our algorithm uses linear programming with column generation, embedded in a tree structure, to provide lower and upper bounds during the optimization process. By decomposing the master problem, the complex constraints for guaranteeing periodic execution are shifted into independent subproblems formulated as integer linear programs. Both the process-execution analyses and the signal-transmission methods are examined, and linearized representations are given. Furthermore, we present a new formulation for fixed-priority execution that additionally computes worst-case process response times, which are needed in scenarios where timing constraints are imposed on subsets of processes and signals. We demonstrate the applicability of our methods by analyzing instances containing process structures from real applications. Our results show that lower bounds can be computed quickly to prove the optimality of heuristic solutions. When delivering optimal solutions with response times, our new formulation compares favourably with other approaches in terms of runtime. The best results are obtained with a hybrid approach that combines heuristic initial solutions, preprocessing, and a heuristic phase followed by a short exact computation phase.
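
For orientation, the classical fixed-priority worst-case response-time recurrence that such formulations capture can be sketched as follows (the textbook iteration, shown here instead of the thesis's linear program):

import math

def worst_case_response_time(c, higher_prio, deadline=float('inf')):
    """Classical fixed-priority response-time analysis:
    R = c + sum over higher-priority tasks j of ceil(R / T_j) * C_j.
    c:           execution time of the task under analysis
    higher_prio: list of (C_j, T_j) pairs for all higher-priority tasks
    deadline:    bound on the iteration; returns None if R exceeds it."""
    r = c
    while r <= deadline:
        r_next = c + sum(math.ceil(r / t_j) * c_j for c_j, t_j in higher_prio)
        if r_next == r:
            return r          # fixed point reached: worst-case response time
        r = r_next
    return None               # not schedulable within the given deadline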

Relevance:

40.00%

Publisher:

Abstract:

This work attempts to provide a method for calibrating numerical models in spectral dynamic analyses. Through a series of nonlinear time-history analyses, the horizontal relative displacements arising at the frictional beam-column connection when a single-storey precast structure is struck by the horizontal and vertical components of an earthquake were obtained. Through an iterative procedure over several spectral analyses, equivalent stiffnesses were calibrated that reproduce, to a good approximation, the same results as the time-history analyses. These stiffnesses were then provided in graphical form. To reproduce the horizontal relative displacements with a spectral dynamic analysis, the beams can therefore be connected to the columns with elastic elements of stiffness Kcoll. The stiffness values provided by this study are valid for a wide range of single-storey precast buildings (natural period 0.20 s < T < 2.00 s) and three different levels of seismic intensity; in addition, it is possible to account for plastic hinging at the base of the columns and to choose between two different positions with respect to the fault rupture (Near Fault System or Far Fault System). The reduction in the resulting friction force (following the variation in vertical acceleration induced by the earthquake) was taken into account by using a model in which an inverted-pendulum isolator (suitably calibrated to act as a simple friction bearing) is placed between beam and column. With the equivalent linear models, good results can be obtained in relatively short times, making it possible to carry out approximate assessments of loss of support and of intervention priorities in a given seismic zone.
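
The friction-loss mechanism described above amounts to the normal force, and with it the available friction force, varying with the vertical seismic acceleration; a minimal sketch of that relation (illustrative, not the calibrated isolator model):

G = 9.81  # m/s^2, gravitational acceleration

def available_friction_force(mass, mu, a_vertical):
    """Friction force available at the beam-column bearing when a vertical
    seismic acceleration a_vertical (m/s^2, positive when it unloads the
    support) acts on the supported mass:  F = mu * m * (g - a_vertical)."""
    normal = mass * max(G - a_vertical, 0.0)  # contact is lost if a_vertical >= g
    return mu * normal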

Relevance:

40.00%

Publisher:

Abstract:

This is the first part of a study investigating a model-based transient calibration process for diesel engines. The motivation is to populate hundreds of calibratable parameters in a methodical and optimal manner by using model-based optimization in conjunction with the manual process, so that, relative to the manual process used by itself, a significant improvement in transient emissions and fuel consumption and a sizable reduction in calibration time and test-cell requirements are achieved. Empirical transient modelling and optimization are addressed in the second part of this work, while the data required for model training and generalization are the focus of the current work. Transient and steady-state data from a turbocharged multicylinder diesel engine have been examined from a model-training perspective. A single-cylinder engine with external air handling has been used to expand the steady-state data to encompass the transient parameter space. Based on comparative model performance and differences in the non-parametric space, primarily driven by a large difference between exhaust and intake manifold pressures (engine ΔP) during transients, it is recommended that transient emission models be trained with transient training data. It has been shown that electronic control module (ECM) estimates of transient charge flow and the exhaust gas recirculation (EGR) fraction cannot be accurate at the high engine ΔP frequently encountered during transient operation, and that such estimates do not account for cylinder-to-cylinder variation. The effects of high engine ΔP must therefore be incorporated empirically by using transient data generated from a spectrum of transient calibrations. Specific recommendations are made on how to choose such calibrations, how much data to acquire, and how to specify transient segments for data acquisition. Methods to process transient data to account for transport delays and sensor lags have been developed. The processed data have then been visualized using statistical means to understand transient emission formation. Two modes of transient opacity formation have been observed and described. The first mode is driven by high engine ΔP and low fresh-air flow rates, while the second mode is driven by high engine ΔP and high EGR flow rates. The EGR fraction is inaccurately estimated in both modes, while uneven EGR distribution has been shown to be present but unaccounted for by the ECM. The two modes and the associated phenomena are essential to understanding why transient emission models are calibration dependent and, furthermore, how to choose training data that will result in good model generalization.
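
One of the processing steps mentioned, compensating first-order sensor lag, can be sketched by inverting the lag model (a schematic reconstruction; the actual filters used are not given in the abstract):

import numpy as np

def undo_first_order_lag(measured, tau_s, dt_s):
    """Reconstruct the fast signal seen by a sensor with first-order lag:
    the lag model  dx_meas/dt = (x_true - x_meas) / tau  inverts to
    x_true = x_meas + tau * dx_meas/dt  (finite differences; this amplifies
    noise, so smoothing is usually applied first)."""
    return measured + tau_s * np.gradient(measured, dt_s)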

Relevance:

40.00%

Publisher:

Abstract:

Patients with brain metastases (BM) rarely survive longer than 6 months and are commonly excluded from clinical trials. We explored two combined-modality regimens using novel agents with single-agent activity and radiosensitizing properties.

Relevance:

40.00%

Publisher:

Abstract:

This study examines the links between human perceptions, cognitive biases, and neural processing of symmetrical stimuli. While preferences for symmetry have largely been examined in the context of disorders such as obsessive-compulsive disorder and autism spectrum disorders, we examine these phenomena in non-clinical subjects and suggest that such preferences are distributed throughout the typical population as part of our cognitive and neural architecture. In Experiment 1, 82 young adults reported on the frequency of their obsessive-compulsive (OC) spectrum behaviors. Subjects also performed an emotional Stroop task or a variant of an Implicit Association Task (the OC-CIT) developed to assess cognitive biases for symmetry. The data reveal not only that subjects show a cognitive conflict when asked to match images of positive affect with asymmetrical stimuli and images of disgust with symmetry, but also that their slowed reaction times when asked to do so were predicted by reports of OC behavior, particularly checking behavior. In Experiment 2, 26 participants were administered an oddball Event-Related Potential task specifically designed to assess sensitivity to symmetry, as well as the OC-CIT. These data revealed that reaction times on the OC-CIT were strongly predicted by frontal electrode sites indicating faster processing of an asymmetrical stimulus (unparallel lines) relative to a symmetrical stimulus (parallel lines). The results point to an overall cognitive bias linking disgust with asymmetry and suggest that such cognitive biases are reflected in neural responses to symmetrical/asymmetrical stimuli.