19 results for signal processing in the encrypted domain
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
Among the experimental methods commonly used to characterize the behaviour of a full-scale system, dynamic tests are the most complete and efficient procedures. A dynamic test is an experimental process that defines a set of characteristic parameters of the dynamic behaviour of the system, such as the natural frequencies of the structure, the mode shapes and the associated modal damping values. An assessment of these modal characteristics can be used both to verify the theoretical assumptions of the design and to monitor the performance of the structural system during its operational use. The thesis is structured in the following chapters: The first, introductory chapter recalls some basic notions of structural dynamics, focusing the discussion on systems with multiple degrees of freedom (MDOF), which can represent a generic real system under study when it is excited by a harmonic force or in free vibration. The second chapter is entirely centred on the dynamic identification of a structure subjected to an experimental test in forced vibration. It first describes the construction of the FRF through the classical FFT of the recorded signal. A different method, also in the frequency domain, is subsequently introduced; it allows the FRF to be computed accurately from the geometric characteristics of the ellipse that represents the direct input-output comparison. The two methods are compared, and the attention is then focused on some advantages of the proposed methodology. The third chapter focuses on the study of real structures subjected to experimental tests in which the force is not known, as in ambient or impact tests. In this analysis we decided to use the CWT, which allows a simultaneous investigation in the time and frequency domains of a generic signal x(t). The CWT is first applied to free oscillations, with excellent results in terms of frequencies, damping and vibration modes.
The application to ambient vibrations yields accurate modal parameters of the system, although some important observations must be made regarding damping. The fourth chapter again addresses the post-processing of data acquired after a vibration test, this time through the application of the discrete wavelet transform (DWT). In the first part, the results obtained by the DWT are compared with those obtained by the CWT. Particular attention is given to the use of the DWT as a tool for filtering the recorded signal, since in the case of ambient vibrations the signals are often affected by a significant level of noise. The fifth chapter focuses on another important aspect of the identification process: model updating. In this chapter, starting from the modal parameters obtained from environmental vibration tests on the Humber Bridge in England, performed by the University of Porto in 2008 and by the University of Sheffield, a FE model of the bridge is defined, in order to determine which type of model captures the real dynamic behaviour of the bridge most accurately. The sixth chapter draws the conclusions of the presented research. They concern the application of a frequency-domain method for evaluating the modal parameters of a structure and its advantages, the advantages of a procedure based on wavelet transforms in identification tests with unknown input and, finally, the problem of 3D modelling of systems with many degrees of freedom and different types of uncertainty.
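The identification idea described above can be sketched numerically. The following is a minimal, illustrative example (synthetic values, not the thesis data): a single-scale complex Morlet CWT applied to the free-decay response of a SDOF oscillator with an assumed natural frequency of 5 Hz and a 2% damping ratio; away from the edges, the decay of log|W| gives the damping and the unwrapped phase gives the frequency.

```python
import numpy as np

# Synthetic free-decay response of a SDOF system (illustrative parameters).
fs = 200.0
t = np.arange(0.0, 20.0, 1.0 / fs)
f_n, zeta = 5.0, 0.02
w_n = 2.0 * np.pi * f_n
x = np.exp(-zeta * w_n * t) * np.cos(w_n * np.sqrt(1.0 - zeta**2) * t)

# Complex Morlet wavelet at a single scale tuned near the expected frequency.
f0, n_cyc = 5.0, 10.0
sigma = n_cyc / (2.0 * np.pi * f0)
tw = np.arange(-4.0 * sigma, 4.0 * sigma, 1.0 / fs)
wavelet = np.exp(2j * np.pi * f0 * tw - tw**2 / (2.0 * sigma**2))
W = np.convolve(x, wavelet, mode="same") / fs

# Away from the edges, log|W| decays at rate zeta*w_n, while the unwrapped
# phase advances at the damped frequency.
core = slice(int(2 * fs), int(18 * fs))
decay_rate = -np.polyfit(t[core], np.log(np.abs(W[core])), 1)[0]
f_est = np.mean(np.diff(np.unwrap(np.angle(W[core])))) * fs / (2.0 * np.pi)
zeta_est = decay_rate / (2.0 * np.pi * f_est)
print(f"f = {f_est:.2f} Hz, zeta = {zeta_est:.3f}")
```

In practice a full scale axis would be scanned and the ridge extracted; the single-scale version above only illustrates how frequency and damping fall out of the modulus and phase of the transform.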
Abstract:
This thesis presents several data processing and compression techniques capable of addressing the strict requirements of wireless sensor networks. After a general overview of sensor networks, the energy problem is introduced, dividing the different energy reduction approaches according to the subsystem they try to optimize. To manage the complexity brought by these techniques, a quick overview of the most common middlewares for WSNs is given, describing in detail SPINE2, a framework for data processing in the node environment. The focus then shifts to in-network aggregation techniques, used to reduce the data sent by the network nodes and prolong the network lifetime as long as possible. Among the several techniques, the most promising approach is Compressive Sensing (CS). To investigate this technique, a practical implementation of the algorithm is compared against a simpler aggregation scheme, deriving a mixed algorithm able to successfully reduce the power consumption. The analysis moves from compression implemented on single nodes to CS for signal ensembles, trying to exploit the correlations among sensors and nodes to improve compression and reconstruction quality. The two main techniques for signal ensembles, Distributed CS (DCS) and Kronecker CS (KCS), are introduced and compared against a common set of data gathered by real deployments. The best trade-off between reconstruction quality and power consumption is then investigated. The usage of CS is also addressed when the signal of interest is sampled at a sub-Nyquist rate, evaluating the reconstruction performance. Finally, group-sparsity CS (GS-CS) is compared to another well-known technique for reconstructing signals from a highly sub-sampled version. These two frameworks are compared against a real data set, and an insightful analysis of the trade-off between reconstruction quality and lifetime is given.
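The core CS mechanism can be shown in a few lines. Below is a minimal sketch with invented parameters (not the thesis deployment): a k-sparse signal is recovered from m < n random measurements using Orthogonal Matching Pursuit, one of the standard greedy CS reconstruction algorithms.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 80, 5                               # length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)     # random sensing matrix
y = Phi @ x                                        # m compressed measurements

def omp(Phi, y, k):
    """Orthogonal Matching Pursuit: greedily pick the best-correlated atom."""
    residual, support = y.copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(Phi, y, k)
rel_err = np.linalg.norm(x - x_hat) / np.linalg.norm(x)
print(f"relative reconstruction error: {rel_err:.2e}")
```

In a WSN context the appeal is that the node only computes y = Phi @ x (cheap), while the costly reconstruction runs at the sink.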
Abstract:
Non Destructive Testing (NDT) and Structural Health Monitoring (SHM) are becoming essential in many application contexts, e.g. civil, industrial and aerospace, to reduce structure maintenance costs and improve safety. Conventional inspection methods typically rely on bulky and expensive instruments and on highly demanding signal processing techniques. The pressing need to overcome these limitations is the common thread that guided the work presented in this Thesis. In the first part, a scalable, low-cost and multi-sensor smart sensor network is introduced. The capability of this technology to carry out accurate modal analysis on structures undergoing flexural vibrations has been validated by means of two experimental campaigns. Then, the suitability of low-cost piezoelectric disks for modal analysis has been demonstrated. To enable the use of this kind of sensing technology in such non-conventional applications, ad hoc data merging algorithms have been developed. In the second part, imaging algorithms for Lamb wave inspection (namely DMAS and DS-DMAS) have been implemented and validated. Results show that DMAS outperforms the canonical Delay and Sum (DAS) approach in terms of image resolution and contrast. Similarly, DS-DMAS achieves better results than both DMAS and DAS by suppressing artefacts and noise. To exploit the full potential of these procedures, accurate group velocity estimations are required. Thus, novel wavefield analysis tools that address the estimation of the dispersion curves from SLDV acquisitions have been investigated. An image segmentation technique (called DRLSE) was exploited in the k-space to extract the wavenumber profile. The DRLSE method was compared with compressive sensing methods for extracting the group and phase velocity information.
The validation, performed on three different carbon fibre plates, showed that the proposed solutions can accurately determine the wavenumber and velocities in polar coordinates at multiple excitation frequencies.
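The DAS/DMAS contrast claim can be illustrated on toy data. The sketch below uses synthetic, already-delayed channel signals (invented numbers, not the thesis acquisitions): DAS sums the channels, while DMAS sums signed-root pairwise products, which rewards inter-channel coherence and suppresses incoherent noise.

```python
import numpy as np

rng = np.random.default_rng(1)
n_ch, fs, fc = 16, 10e6, 1e6
t = np.arange(400) / fs
echo = np.sin(2 * np.pi * fc * t) * np.exp(-((t - 20e-6) ** 2) / (2 * (3e-6) ** 2))
s = echo[None, :] + 0.5 * rng.standard_normal((n_ch, len(t)))  # aligned channels

das = s.sum(axis=0)  # Delay and Sum: plain coherent summation

def dmas(s):
    """Delay Multiply and Sum: sum of signed-root pairwise products."""
    sr = np.sign(s) * np.sqrt(np.abs(s))
    tot = sr.sum(axis=0)
    return (tot**2 - (sr**2).sum(axis=0)) / 2.0  # = sum over i<j of sr_i*sr_j

dm = dmas(s)

# Crude contrast metric: peak in the echo window over rms in a noise window.
sig, noise = slice(150, 250), slice(0, 100)
cr_das = np.max(np.abs(das[sig])) / np.std(das[noise])
cr_dmas = np.max(np.abs(dm[sig])) / np.std(dm[noise])
print(cr_das, cr_dmas)
```

Real DMAS beamformers additionally band-pass the output around twice the centre frequency (F-DMAS); the pairwise-product identity used above avoids the explicit O(N²) double loop.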
Abstract:
This thesis deals with the design of advanced OFDM systems; both waveform and receiver design are treated. The main scope of the Thesis is to study, create and propose ideas and novel design solutions able to cope with the weaknesses and crucial aspects of modern OFDM systems. Starting from the transmitter side, the problem of low resilience to non-linear distortion has been assessed, and a novel technique that considerably reduces the Peak-to-Average Power Ratio (PAPR), yielding a quasi-constant signal envelope in the time domain (PAPR close to 1 dB), has been proposed. The proposed technique, named Rotation Invariant Subcarrier Mapping (RISM), is a novel scheme for subcarrier data mapping in which the symbols belonging to the modulation alphabet are not anchored, but maintain some degrees of freedom. In other words, a bit tuple is not mapped onto a single point; rather, it is mapped onto a geometrical locus which is totally or partially rotation invariant. The final positions of the transmitted complex symbols are chosen by an iterative optimization process that minimizes the PAPR of the resulting OFDM symbol. Numerical results confirm that RISM makes OFDM usable even in severely non-linear channels. Another well-known problem which has been tackled is the vulnerability to synchronization errors. Indeed, in an OFDM system an accurate recovery of carrier frequency and symbol timing is crucial for the proper demodulation of the received packets. In general, timing and frequency synchronization is performed in two separate phases, called PRE-FFT and POST-FFT synchronization. Regarding the PRE-FFT phase, a novel joint symbol timing and carrier frequency synchronization algorithm is presented. The proposed algorithm is characterized by a very low hardware complexity and, at the same time, guarantees very good performance in both AWGN and multipath channels.
Regarding the POST-FFT phase, a novel approach to both pilot structure and receiver design is presented. In particular, a novel pilot pattern is introduced in order to minimize the occurrence of overlaps between two pattern-shifted replicas. This makes it possible to replace conventional pilots with nulls in the frequency domain, introducing the so-called Silent Pilots. As a result, the optimal receiver turns out to be very robust against severe Rayleigh multipath fading and is characterized by low complexity. The performance of this approach has been evaluated both analytically and numerically; compared with state-of-the-art alternatives, in both AWGN and multipath fading channels, considerable performance improvements have been obtained. The crucial problem of channel estimation has been thoroughly investigated, with particular emphasis on the decimation of the Channel Impulse Response (CIR) through the selection of the Most Significant Samples (MSSs). In this context our contribution is twofold: on the theoretical side, we derived lower bounds on the estimation mean-square error (MSE) performance for any MSS selection strategy; on the receiver design side, we proposed novel MSS selection strategies which have been shown to approach these MSE lower bounds and to outperform the state-of-the-art alternatives. Finally, the possibility of using Single Carrier Frequency Division Multiple Access (SC-FDMA) in the Broadband Satellite Return Channel has been assessed. Notably, SC-FDMA is able to improve the physical layer spectral efficiency with respect to the single carrier systems used so far in the Return Channel Satellite (RCS) standards. However, it requires strict synchronization and is also sensitive to the phase noise of local radio frequency oscillators. For this reason, an effective pilot tone arrangement within the SC-FDMA frame and a novel Joint Multi-User (JMU) estimation method for SC-FDMA have been proposed.
As shown by numerical results, the proposed scheme manages to satisfy strict synchronization requirements and to guarantee a proper demodulation of the received signal.
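To make the PAPR figure concrete, here is a baseline computation for a plain QPSK OFDM symbol (illustrative only; the RISM scheme above reshapes the mapping precisely to lower this number). The spectrum is zero-padded before the IFFT so that the discrete peak approximates the analog one.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sc, os_factor = 64, 4                       # subcarriers, oversampling factor
b = rng.integers(0, 2, (2, n_sc))
qpsk = ((2 * b[0] - 1) + 1j * (2 * b[1] - 1)) / np.sqrt(2)

# Oversample by zero-padding the middle of the spectrum before the IFFT.
X = np.zeros(n_sc * os_factor, dtype=complex)
X[: n_sc // 2], X[-n_sc // 2 :] = qpsk[: n_sc // 2], qpsk[n_sc // 2 :]
x = np.fft.ifft(X)                            # oversampled time-domain symbol

papr_db = 10 * np.log10(np.max(np.abs(x) ** 2) / np.mean(np.abs(x) ** 2))
print(f"PAPR = {papr_db:.1f} dB")
```

For random QPSK data on 64 subcarriers this typically lands well above the ~1 dB envelope fluctuation quoted for RISM, which is the gap the proposed mapping closes.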
Abstract:
The advent of Bitcoin suggested a disintermediated economy in which Internet users can take part directly. The conceptual disruption brought about by this Internet of Money (IoM) mirrors the cross-industry impacts of blockchain and distributed ledger technologies (DLTs). While related instances of non-centralisation thwart regulatory efforts to establish accountability, in the financial domain further challenges arise from the presence in the IoM of two seemingly opposing traits: anonymity and transparency. Indeed, DLTs are often described as architecturally transparent, but the perceived level of anonymity of cryptocurrency transfers fuels fears of illicit exploitation. This is a primary concern for the framework to prevent money laundering and the financing of terrorism and proliferation (AML/CFT/CPF), and a top priority both globally and at the EU level. Nevertheless, the anonymous and transparent features of the IoM are far from clear-cut, and the same is true for its levels of disintermediation and non-centralisation. Almost fifteen years after the first Bitcoin transaction, the IoM today comprises a diverse set of socio-technical ecosystems. Building on an analysis of their phenomenology, this dissertation shows how there is more to their traits of anonymity and transparency than it may seem, and how these features range across a spectrum of combinations and degrees. In this context, trade-offs can be evaluated by referring to techno-legal benchmarks, established through socio-technical assessments grounded on teleological interpretation. Against this backdrop, this work provides framework-level recommendations for the EU to respond to the twofold nature of the IoM legitimately and effectively. The methodology cherishes the mutual interaction between regulation and technology when drafting regulation whose compliance can be eased by design. 
This approach mitigates the risk of overfitting in a fast-changing environment, while acknowledging specificities in compliance with the risk-based approach that sits at the core of the AML/CFT/CPF regime.
Abstract:
Most electronic systems can be described in a very simplified way as an assemblage of analog and digital components put together to perform a certain function. Nowadays, there is an increasing tendency to reduce the analog components and to replace them by operations performed in the digital domain. This tendency has led to the emergence of new electronic systems that are more flexible, cheaper and more robust. However, no matter the amount of digital processing implemented, there will always be an analog part to be dealt with, and thus the step of converting digital signals into analog signals and vice versa cannot be avoided. This conversion can be more or less complex depending on the characteristics of the signals. Thus, even if it is desirable to replace functions carried out by analog components with digital processes, it is equally important to do so in a way that simplifies the conversion from digital to analog signals and vice versa. In the present thesis, we have studied strategies based on increasing the amount of processing in the digital domain in such a way that the implementation of the analog hardware stages can be simplified. To this aim, we have proposed the use of very coarsely quantized signals, i.e. 1-bit signals, for the acquisition and the generation of particular classes of signals.
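A classic way to trade digital processing for analog complexity with 1-bit signals is sigma-delta modulation. The sketch below is a generic first-order modulator (hypothetical parameters; the thesis develops its own acquisition and generation schemes): one bit per sample, with the input recovered by simple low-pass filtering.

```python
import numpy as np

fs, f0, n = 10_000, 50, 10_000
t = np.arange(n) / fs
x = 0.5 * np.sin(2 * np.pi * f0 * t)         # slow input, heavily oversampled

bits = np.empty(n)
v, fb = 0.0, 0.0
for i in range(n):
    v += x[i] - fb                           # integrate the quantization error
    fb = 1.0 if v >= 0.0 else -1.0           # 1-bit quantizer in the feedback loop
    bits[i] = fb

# Low-pass filtering the +/-1 stream recovers the band-limited input.
rec = np.convolve(bits, np.ones(64) / 64, mode="same")
corr = np.corrcoef(rec[200:-200], x[200:-200])[0, 1]
print(f"correlation with input: {corr:.3f}")
```

The analog side only needs a comparator and an integrator; all the precision lives in the digital filter, which is the general trade the thesis exploits.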
Abstract:
Machines with moving parts give rise to vibrations and, consequently, noise. The set-up and the condition of each machine yield a peculiar vibration signature. Therefore, a change in the vibration signature, due to a change in the machine state, can be used to detect incipient defects before they become critical. This is the goal of condition monitoring, in which the information obtained from a machine's signature is used to detect faults at an early stage. A large number of signal processing techniques can be used to extract interesting information from a measured vibration signal. This study seeks to detect rotating machine defects using a range of techniques, including synchronous time averaging, Hilbert transform-based demodulation, the continuous wavelet transform, the Wigner-Ville distribution and the spectral correlation density function. The detection and diagnostic capabilities of these techniques are discussed and compared on the basis of experimental results concerning gear tooth faults, i.e. a fatigue crack at the tooth root and tooth spalls of different sizes, as well as assembly faults in a diesel engine. Moreover, the sensitivity to fault severity is assessed by applying these signal processing techniques to gear tooth faults of different sizes.
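Of the techniques listed, Hilbert transform-based demodulation is the easiest to sketch. The synthetic signal below (invented frequencies, not the test-rig data) is a gear-mesh tone amplitude-modulated at an assumed fault frequency of 30 Hz; the envelope spectrum exposes the fault line directly.

```python
import numpy as np

fs, n = 8192, 8192
t = np.arange(n) / fs
f_mesh, f_fault = 1000.0, 30.0
x = (1.0 + 0.5 * np.cos(2 * np.pi * f_fault * t)) * np.sin(2 * np.pi * f_mesh * t)

def analytic(x):
    """Analytic signal via the FFT (same construction as scipy.signal.hilbert)."""
    X = np.fft.fft(x)
    h = np.zeros(len(x))
    h[0] = h[len(x) // 2] = 1.0
    h[1 : len(x) // 2] = 2.0
    return np.fft.ifft(X * h)

envelope = np.abs(analytic(x))                 # demodulated amplitude envelope
spec = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(n, 1.0 / fs)
f_detected = freqs[np.argmax(spec)]
print(f"dominant envelope line: {f_detected:.0f} Hz")
```

On real gear vibrations the signal is band-pass filtered around a resonance excited by the impacts before demodulation, but the principle is the one shown.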
Abstract:
In the present thesis, a new diagnosis methodology based on advanced time-frequency analysis techniques is presented. More precisely, a new fault index that allows tracking individual fault components in a single frequency band is defined. In detail, a frequency sliding is applied to the signals being analyzed (currents, voltages, vibration signals), so that each fault frequency component is shifted into a prefixed single frequency band. Then, the discrete wavelet transform is applied to the resulting signal to extract the fault signature in the chosen frequency band. Once the state of the machine has been qualitatively diagnosed, a quantitative evaluation of the fault degree is necessary. For this purpose, a fault index based on the energy of the approximation and/or detail signals resulting from the wavelet decomposition has been introduced to quantify the fault extent. The main advantages of the new method over existing diagnosis techniques are the following:
- capability of monitoring the fault evolution continuously over time under any transient operating condition;
- no need for speed/slip measurement or estimation;
- higher accuracy in filtering frequency components around the fundamental in the case of rotor faults;
- reduced likelihood of false indications, since confusion with other fault harmonics is avoided (the contribution of the most relevant fault frequency components under speed-varying conditions is confined to a single frequency band);
- low memory requirement due to the low sampling frequency;
- reduced processing latency (no repeated sampling operations are required).
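The energy-based index can be sketched with a one-level Haar DWT standing in for the wavelet decomposition of the thesis (signal contents and frequencies below are invented for illustration): the detail-band energy is near zero for the healthy signal and grows sharply when a high-frequency fault component appears.

```python
import numpy as np

fs, n = 8000, 8192
t = np.arange(n) / fs

def haar_detail_energy(x):
    """Energy of the level-1 Haar detail coefficients (upper half-band)."""
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return float(np.sum(detail**2))

healthy = np.sin(2 * np.pi * 50 * t)                    # fundamental only
faulty = healthy + 0.3 * np.sin(2 * np.pi * 3000 * t)   # added fault component

idx_h = haar_detail_energy(healthy)
idx_f = haar_detail_energy(faulty)
print(f"healthy index: {idx_h:.2f}, faulty index: {idx_f:.2f}")
```

In the actual method the frequency sliding first moves the tracked fault component into the band that the chosen decomposition level isolates, so the same energy measure works under speed-varying conditions.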
Abstract:
The importance of Helicobacter pylori as a human pathogen is underlined by the plethora of diseases it is responsible for. The capacity of H. pylori to adapt to the restricted host-associated environment and to evade the host immune response largely depends on a streamlined signalling network. The peculiarly small H. pylori genome, combined with its paucity of transcriptional regulators, highlights the relevance of post-transcriptional regulatory mechanisms such as small non-coding RNAs (sRNAs). However, among the 8 RNases represented in the H. pylori genome, a regulator guiding sRNA metabolism is still not well studied. We investigated for the first time the physiological role of the RNase Y enzyme in the H. pylori G27 strain. In the first line of research we provide a comprehensive characterization of RNase Y activity by analysing its genomic organization and the factors that orchestrate its expression. Then, based on bioinformatic prediction models, we depict the most relevant determinants of RNase Y function, demonstrating a correlation of both structure and domain organization with orthologues represented in Gram-positive bacteria. To unveil the post-transcriptional regulatory effect exerted by RNase Y, we compared the transcriptome of an RNase Y knock-out mutant with that of the parental wild-type strain by an RNA-seq approach. In the second line of research we characterized the activity of this single-strand-specific endoribonuclease on the cag-PAI non-coding RNA 1 (CncR1) sRNA. We found that deletion or inactivation of RNase Y led to the accumulation of a 3'-extended CncR1 (CncR1-L) transcript over time. Moreover, despite its increased half-life, CncR1-L resembled an inactive CncR1 phenotype. Finally, we focused on the characterization of the in vivo interactome of CncR1. We set up a preliminary MS2-affinity purification coupled with RNA-sequencing (MAPS) approach and evaluated the enrichment of specific targets, demonstrating the suitability of the technique in the H. pylori G27 strain.
Abstract:
Biological processes are very complex mechanisms, most of them being accompanied by, or manifested as, signals that reflect their essential characteristics and qualities. The development of diagnostic techniques based on signal and image acquisition from the human body is commonly regarded as one of the propelling factors in the recent advancements in medicine and the biosciences. The instruments used for biological signal and image recording, like any other acquisition system, are affected by non-idealities which, to different degrees, negatively impact the accuracy of the recording. This work discusses how these effects can be attenuated, and ideally removed, with particular attention to ultrasound imaging and extracellular recordings. Original algorithms developed during the Ph.D. research activity are examined and compared with those in the literature tackling the same problems; results are drawn on the basis of comparative tests on both synthetic and in-vivo acquisitions, evaluating standard metrics in the respective fields of application. All the developed algorithms share an adaptive approach to signal analysis, meaning that their behaviour does not depend only on designer choices but is also driven by the characteristics of the input signal. Performance comparisons following the state of the art in image quality assessment, contrast gain estimation and resolution gain quantification, as well as visual inspection, highlighted very good results for the proposed ultrasound image deconvolution and restoration algorithms: axial resolutions up to 5 times better than those of algorithms in the literature are achievable. Concerning extracellular recordings, the proposed denoising technique, compared with other signal processing algorithms, improved on the state of the art by almost 4 dB.
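As a frame of reference for the deconvolution results above, here is the simplest non-adaptive baseline: frequency-domain Wiener deconvolution with a known, synthetic pulse (all values invented; the thesis algorithms are adaptive and do not assume this knowledge). Two closely spaced reflectors, blurred by the pulse, are resolved again after regularized inversion.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 512
refl = np.zeros(n)
refl[100], refl[130] = 1.0, 0.8                       # two close reflectors

tp = np.arange(-32, 32)
pulse = np.exp(-tp**2 / (2 * 6.0**2)) * np.cos(2 * np.pi * 0.15 * tp)

# Zero-phase placement of the pulse for the circular-convolution model.
h = np.zeros(n)
h[:32], h[-32:] = pulse[32:], pulse[:32]
H = np.fft.fft(h)
y = np.real(np.fft.ifft(np.fft.fft(refl) * H)) + 0.01 * rng.standard_normal(n)

lam = 1e-2 * np.max(np.abs(H)) ** 2                   # regularization level
x_hat = np.real(np.fft.ifft(np.fft.fft(y) * np.conj(H) / (np.abs(H) ** 2 + lam)))

# The two reflectors reappear as the dominant, well-separated peaks.
p1 = int(np.argmax(np.abs(x_hat)))
masked = np.abs(x_hat).copy()
masked[max(p1 - 8, 0) : p1 + 8] = 0.0
p2 = int(np.argmax(masked))
print(sorted([p1, p2]))
```

The regularization constant lam plays the role that the adaptive algorithms of the thesis infer from the data itself.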
Abstract:
Statistical modelling and statistical learning theory are two powerful analytical frameworks for analyzing signals and developing efficient processing and classification algorithms. In this thesis, these frameworks are applied to modelling and processing biomedical signals in two different contexts: ultrasound medical imaging systems and primate neural activity analysis and modelling. In the context of ultrasound medical imaging, two main applications are explored: deconvolution of signals measured from an ultrasonic transducer, and automatic image segmentation and classification of prostate ultrasound scans. In the former application, a stochastic model of the radio frequency signal measured from an ultrasonic transducer is derived. This model is then employed to develop, in a statistical framework, a regularized deconvolution procedure for enhancing signal resolution. In the latter application, different statistical models are used to characterize images of prostate tissue, extracting different features. These features are then used to segment the images into regions of interest by means of an automatic procedure based on a statistical model of the extracted features. Finally, machine learning techniques are used for automatic classification of the different regions of interest. In the context of neural activity signals, a bio-inspired dynamical network was developed to support studies of motor-related processes in the brain of primate monkeys. The presented model aims to mimic the abstract functionality of a cell population in the 7a parietal region of primate monkeys during the execution of learned behavioural tasks.
Abstract:
Traceability is often perceived by food industry executives as an additional cost of doing business, one to be avoided if possible. However, a traceability system can in fact ensure compliance with regulatory requirements, increase food safety and recall performance, and improve marketing performance as well as supply chain management. Thus, traceability affects the business performance of firms in terms of the costs and benefits determined by traceability practices. Costs and benefits are in turn affected by factors such as the firms' characteristics, the level of traceability and, lastly, the costs and benefits perceived prior to traceability implementation. This thesis was undertaken to understand how these factors are linked and how they affect the outcome of costs and benefits. Analysis of the results of a plant-level survey of the Italian fish processing industry revealed that processors generally adopt various levels of traceability, while government support appears to increase the level of traceability as well as the expected and actual costs and benefits. None of the firms' characteristics, with the exception of government support, influences costs and the level of traceability. Only firm size and the level of QMS certification are linked with benefits, while the precision of traceability increases benefits without affecting costs. Finally, traceability practices appear to be driven by requests from "external" stakeholders, such as government, authorities and customers, rather than by "internal" factors (e.g. improving the firm's management), while the traceability system does not provide any added value from the market in terms of price premium or market share increase.
Intrinsic uncoupling in the ATP synthase of Escherichia coli. Studies on WT and ε-truncated mutants
Abstract:
The H+/ATP ratio in the catalysis of ATP synthase has generally been considered a fixed parameter. However, Melandri and coworkers have recently shown that, in the ATP synthase of the photosynthetic bacterium Rb. capsulatus, this ratio can significantly decrease during ATP hydrolysis when the concentration of either ADP or Pi is maintained at a low level (Turina et al., 2004). The present work has dealt with the ATP synthase of E. coli, looking for evidence of this phenomenon of intrinsic uncoupling in this organism as well. First of all, we have shown that the DCCD-sensitive ATP hydrolysis activity of E. coli internal membranes was strongly inhibited by ADP and Pi, with a half-maximal effect in the submicromolar range for ADP and at 140 µM for Pi. In contrast to this monotonic inhibition, however, the proton pumping activity of the enzyme, as estimated under the same conditions by the fluorescence quenching of the ΔpH-sensitive probe ACMA, showed a clearly biphasic progression, both for Pi, increasing from 0 up to approximately 200 µM, and for ADP, increasing from 0 up to a few µM. We have interpreted these results as indicating that the occupancy of the ADP and Pi binding sites shifts the enzyme from a partially uncoupled state to a fully coupled state, and we expect the ADP- and Pi-modulated intrinsic uncoupling to be a general feature of prokaryotic ATP synthases. Moreover, the biphasicity of the proton pumping data suggested that two Pi binding sites are involved. In order to verify whether the same behaviour could be observed in the isolated enzyme, we have purified the ATP synthase of E. coli and reconstituted it into liposomes. As observed in the internal membrane preparation, in the isolated and reconstituted enzyme it was possible to observe inhibition of the hydrolytic activity by ADP and Pi (with half-maximal effects at a few µM for ADP and at 400 µM for Pi), with a concomitant stimulation of proton pumping.
Both the inhibition of ATP hydrolysis and the stimulation of proton pumping as a function of Pi were lost upon ADP removal by an ADP trap. These data have made it possible to conclude that the results obtained in E. coli internal membranes are not due to artefactual interference from enzymatic activities other than those of the ATP synthase. In addition, the data obtained with liposomes have allowed a calibration of the ACMA signal by ΔpH transitions of known extent, leading to a quantitative evaluation of the proton pumping data. Finally, we have focused our efforts on searching for a possible structural candidate involved in the phenomenon of intrinsic uncoupling. The ε-subunit of the ATP synthase is known as an endogenous inhibitor of the hydrolysis activity of the complex and appears to undergo drastic conformational changes between a non-inhibitory form (down-state) and an inhibitory form (up-state) (Rodgers & Wilce, 2000; Gibbons et al., 2000). In addition, the results of Cipriano & Dunn (2006) indicated that the C-terminal domain of this subunit plays an important role in the coupling mechanism of the pump, and those of Capaldi et al. (2001) and Suzuki et al. (2003) were consistent with the down-state showing a higher hydrolysis-to-synthesis ratio than the up-state. Therefore, we decided to search for a modulation of pumping efficiency in a C-terminally truncated ε mutant. A low-copy-number expression vector was built, carrying an extra copy of uncC, with the aim of generating an ε-overexpressing E. coli strain in which normal levels of assembly of the mutated ATP synthase complex would be promoted. We then compared the ATP hydrolysis and proton pumping activities in membranes prepared from these ε-overexpressing E. coli strains, carrying either the WT ε subunit or the ε88-stop truncated form. Both strains yielded well-energized membranes.
Notably, they showed a marked difference in the inhibition of hydrolysis by Pi, this effect being largely lost in the truncated mutant. However, pre-incubation of the mutated enzyme with ADP at low nanomolar concentrations (apparent Kd = 0.7 nM) restored the inhibition of hydrolysis, together with the modulation of intrinsic uncoupling by Pi, indicating that, contrary to the wild type, the truncated mutant had lost the ADP bound at this high-affinity site during membrane preparation, evidently due to a lower affinity for ADP (and/or a higher release of ADP) of the mutant relative to the wild type. Therefore, one of the effects of the C-terminal domain of ε appears to be to modulate the affinity of at least one of the ADP binding sites. The lack of this domain does not appear to influence the modulability of the coupling efficiency so much as the extent of this modulation. At higher pre-incubated ADP concentrations (apparent Kd = 117 nM), the only observed effects were inhibition of both hydrolysis and synthesis, providing direct proof that two ADP-binding sites on the enzyme are involved in the inhibition of hydrolysis, of which only the higher-affinity one also modulates the coupling efficiency.
Abstract:
The aim of the present study is to understand the properties of a new group of redox proteins that have in common a DOMON-type domain with the characteristics of cytochromes b. The superfamily of proteins containing a DOMON of this type includes a few protein families. With the aim of better characterizing this new protein family, the present work addresses both a CyDOM protein (a cytochrome b561) and a protein comprising only a DOMON domain (AIR12), both of plant origin. Apoplastic ascorbate can be regenerated from monodehydroascorbate by a trans-plasma-membrane redox system which uses cytosolic ascorbate as a reductant and comprises a high-potential cytochrome b. We identified the major plasma membrane (PM) ascorbate-reducible b-type cytochrome of bean (Phaseolus vulgaris) and soybean (Glycine max) hypocotyls as orthologs of the Arabidopsis auxin-responsive gene air12. The protein, which is glycosylated and glycosylphosphatidylinositol-anchored to the external side of the PM in vivo, was expressed in Pichia pastoris in a recombinant form lacking the glycosylphosphatidylinositol-modification signal, and purified from the culture medium. Recombinant AIR12 is a soluble protein predicted to fold into a β-sandwich domain belonging to the DOMON superfamily. It is shown to be a b-type cytochrome with a symmetrical α-band at 561 nm, fully reduced by ascorbate and fully oxidized by monodehydroascorbate. Redox potentiometry suggests that AIR12 binds two high-potential hemes (Em,7 = +135 and +236 mV). Phylogenetic analyses reveal that the auxin-responsive genes AIR12 constitute a new family of plasma membrane b-type cytochromes specific to flowering plants. Although AIR12 is one of the few redox proteins of the PM characterized to date, its role in trans-PM electron transfer would imply interaction with other partners which are still to be identified.
Another part of the present project aimed at understanding a soybean protein in which a DOMON is fused with a well-defined b561 cytochrome domain (CyDOM). Various bioinformatic approaches show this protein to be composed of an N-terminal DOMON followed by a b561 domain; the latter contains five transmembrane helices featuring highly conserved histidines, which might bind haem groups. CyDOM was cloned and expressed in the yeast Pichia pastoris, and spectroscopic analyses were carried out on solubilized yeast membranes. CyDOM clearly displays the properties of a b-type cytochrome. The results show that CyDOM is able to conduct an electron flux across the plasma membrane: voltage-clamp experiments demonstrate that Xenopus laevis oocytes transformed with soybean CyDOM exhibit negative electrical currents in the presence of an external electron acceptor. Analogous investigations were carried out with SDR2, a CyDOM of Drosophila melanogaster, which shows an even higher electron transport capacity than the plant CyDOM. These data reinforce those obtained with the plant CyDOM on the one hand and, on the other, allow the properties assigned to CyDOM to be attributed to SDR2-like proteins as well. Tobacco roots transiently transformed with a chimeric GFP:CyDOM construct (by A. rhizogenes infection) reveal a plasma-membrane localization of CyDOM, both in epidermal cells of the root elongation zone and in root hairs. In conclusion, although the data presented here remain to be expanded and in part clarified, it is safe to say that they open a new perspective on the role of this group of proteins. The biological relevance of the functional and physiological implications of DOMON redox domains seems noteworthy, and can only increase with future advances in research.
Beyond the finding, interesting in itself, that DOMON domains can act as extracellular cytochromes, the present study testifies that cytochrome proteins containing DOMON domains of the CyDOM type can transfer electrons across membranes and may represent the most important redox component of the plasma membrane discovered to date.
Resumo:
This thesis explores the capabilities of heterogeneous multi-core systems based on multiple Graphics Processing Units (GPUs) in a standard desktop framework. Multi-GPU accelerated desk-side computers are an appealing alternative to other high-performance computing (HPC) systems: being composed of commodity hardware components fabricated in large quantities, their price-performance ratio is unparalleled in the world of high-performance computing. Essentially bringing "supercomputing to the masses", this opens up new possibilities for application fields where investing in HPC resources had previously been considered unfeasible. One of these is the field of bioelectrical imaging, a class of medical imaging technologies that occupy a low-cost niche next to million-dollar systems like functional Magnetic Resonance Imaging (fMRI). In the scope of this work, several computational challenges encountered in bioelectrical imaging are tackled with this new kind of computing resource, striving to help these methods approach their true potential. Specifically, the following main contributions were made: Firstly, a novel dual-GPU implementation of parallel triangular matrix inversion (TMI) is presented, addressing a crucial kernel in the computation of multi-mesh head models for electroencephalographic (EEG) source localization. This includes not only a highly efficient implementation of the routine itself, achieving excellent speedups over an optimized CPU implementation, but also a novel GPU-friendly compressed storage scheme for triangular matrices. Secondly, a scalable multi-GPU solver for non-Hermitian linear systems was implemented. It is integrated into a simulation environment for electrical impedance tomography (EIT) that requires the frequent solution of complex systems with millions of unknowns, a task that this solver can perform within seconds. In terms of computational throughput, it outperforms not only a highly optimized multi-CPU reference but related GPU-based work as well.
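The idea behind compressed storage for triangular matrices can be illustrated with the classic row-packed layout, which stores only the n(n+1)/2 nonzero entries of the lower triangle. A minimal NumPy sketch — the thesis's GPU-friendly scheme is not detailed in the abstract, so this shows only the standard packing it improves upon:

```python
import numpy as np

def pack_lower(A: np.ndarray) -> np.ndarray:
    """Pack the lower triangle of an n x n matrix, row by row, into a
    1-D array of length n*(n+1)/2, roughly halving storage."""
    n = A.shape[0]
    return A[np.tril_indices(n)]

def packed_index(i: int, j: int) -> int:
    """Offset of element (i, j), with j <= i, in the row-packed layout."""
    return i * (i + 1) // 2 + j

n = 4
L = np.tril(np.arange(1, n * n + 1, dtype=float).reshape(n, n))
p = pack_lower(L)
assert p[packed_index(2, 1)] == L[2, 1]
assert p.size == n * (n + 1) // 2   # 10 stored elements instead of 16
```

A GPU-oriented scheme additionally has to care about coalesced memory access across threads, which is precisely where a naive packed layout falls short and a redesigned storage format pays off.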
Finally, a GPU-accelerated graphical software package for real-time EEG source localization was implemented. Thanks to GPU acceleration, it can meet real-time requirements at an unprecedented level of anatomical detail while running more complex localization algorithms. Additionally, a novel implementation to extract anatomical priors from static Magnetic Resonance (MR) scans has been included.