877 results for High-performance computing hyperspectral imaging
Abstract:
Modifications in low-density lipoprotein (LDL) have emerged as a major pathogenic factor in atherosclerosis, which is the main cause of morbidity and mortality in the Western world. Measurements of the thermal diffusivity of human LDL solutions in their native and in vitro oxidized states are presented, obtained using the Z-Scan (ZS) technique. Complementary techniques, e.g., pycnometry, refractometry, calorimetry, and spectrophotometry, were used to obtain the physical parameters necessary to interpret the optical results and to characterize the oxidation stage of the LDL particles. To determine the sample's thermal diffusivity using the thermal lens model, an iterative one-parameter fitting method is proposed that takes into account several characteristic ZS time-dependent and position-dependent transmittance measurements. Results show that the thermal diffusivity increases with the degree of LDL oxidation, which can be explained by the increased production of hydroperoxides during the oxidation process. These oxidation products migrate from one LDL particle to another, disseminating the oxidation process and carrying heat across the sample. This phenomenon leads to rapid thermal homogenization of the sample, preventing the formation of a thermal lens in highly oxidized LDL solutions. (C) 2012 Society of Photo-Optical Instrumentation Engineers (SPIE). [DOI: 10.1117/1.JBO.17.10.105003]
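As a point of reference for the fitting procedure described above, the sketch below fits a single time-resolved transmittance transient with the standard Shen thermal-lens expression, recovering the characteristic time t_c = w_e^2/(4D) and hence the thermal diffusivity D. This is a minimal illustration under assumed geometry parameters (m, V, theta, and w_e are made up), not the authors' exact iterative multi-curve method, which combines several time- and position-dependent measurements.

```python
# Minimal sketch: one-parameter fit of a thermal-lens transient, assuming the
# standard Shen model. Geometry (m, V), lens strength theta, and beam waist
# w_e are hypothetical; the paper fits several z positions jointly.
import numpy as np
from scipy.optimize import curve_fit

m, V, theta = 1.0, 1.5, 0.12    # assumed mode-mismatch geometry and lens strength
w_e = 50e-6                     # assumed excitation beam waist at the sample [m]

def tl_transient(t, t_c):
    """Normalized transmittance T(t); t_c = w_e**2 / (4*D)."""
    num = 2.0 * m * V
    den = ((1 + 2*m)**2 + V**2) * (t_c / (2.0*t)) + 1 + 2*m + V**2
    return (1.0 - 0.5 * theta * np.arctan(num / den))**2

# t, T_meas would come from one measured ZS transient; synthetic data here
t = np.linspace(1e-4, 0.1, 200)
T_meas = tl_transient(t, 2.5e-3) + np.random.normal(0.0, 2e-4, t.size)

(t_c_fit,), _ = curve_fit(tl_transient, t, T_meas, p0=[1e-3])
D = w_e**2 / (4.0 * t_c_fit)    # thermal diffusivity [m^2/s]
print(f"t_c = {t_c_fit:.3e} s -> D = {D:.3e} m^2/s")
```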
Abstract:
Modern GPUs are well suited to intensive computational tasks and massive parallel computation. Sparse matrix-vector multiplication and the sparse triangular solver are among the most important and heavily used kernels in scientific computing, and several challenges in developing high-performance implementations of these two kernels are investigated. The main interest is to solve linear systems derived from elliptic equations discretized with triangular elements; the resulting linear system has a symmetric positive definite matrix. The sparse matrix is stored in the compressed sparse row (CSR) format, and a CUDA algorithm is proposed to execute the matrix-vector multiplication directly on the CSR format. A dependence-tree algorithm is used to determine which variables the triangular solver can compute in parallel, and, to increase the number of parallel threads, a graph coloring algorithm is implemented to reorder the mesh numbering in a pre-processing phase. The proposed method is compared with available parallel and serial libraries. The results show that the proposed method reduces the computational cost of the matrix-vector multiplication, and the pre-processing associated with the triangular solver needs to be executed only once. The conjugate gradient method was implemented and showed a similar convergence rate for all compared methods, with the proposed method achieving significantly smaller execution times.
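To make the two kernels concrete, here is a hedged Python sketch of CSR matrix-vector multiplication and of the dependency-level analysis behind a parallel lower-triangular solve. The row loop is exactly what a CUDA kernel would distribute one thread per row; the function and variable names are illustrative, not from the paper's implementation.

```python
# Sketch: CSR y = A*x and level scheduling for a sparse lower-triangular solve.
# Rows within the same level have no mutual dependence, so they can be solved
# concurrently (the paper's coloring/reordering enlarges these levels).
import numpy as np

def csr_matvec(indptr, indices, data, x):
    """One independent dot product per row; maps to one GPU thread per row."""
    n = len(indptr) - 1
    y = np.zeros(n)
    for row in range(n):
        s = 0.0
        for k in range(indptr[row], indptr[row + 1]):
            s += data[k] * x[indices[k]]
        y[row] = s
    return y

def triangular_levels(indptr, indices):
    """Dependency level of each row of a lower-triangular CSR matrix."""
    n = len(indptr) - 1
    level = np.zeros(n, dtype=int)
    for row in range(n):
        for k in range(indptr[row], indptr[row + 1]):
            col = indices[k]
            if col < row:                       # strictly-lower entry = dependency
                level[row] = max(level[row], level[col] + 1)
    return level
```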
Abstract:
A new parallel algorithm for simultaneous untangling and smoothing of tetrahedral meshes is proposed in this paper. We provide a detailed analysis of its performance on shared-memory many-core computer architectures. This performance analysis includes the evaluation of execution time, parallel scalability, load balancing, and parallelism bottlenecks. Additionally, we compare the impact of three previously published graph coloring procedures on the performance of our parallel algorithm. We use six benchmark meshes with a wide range of sizes and, using these experimental data sets, describe the behavior of the parallel algorithm for different data sizes. We demonstrate that this algorithm is highly scalable when it runs on two different high-performance many-core computers with up to 128 processors...
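The role of graph coloring here is to partition mesh vertices into independent sets that can be untangled and smoothed concurrently without data races on neighboring vertex coordinates. As a minimal illustration (the three coloring procedures compared in the paper are more elaborate), a greedy first-fit coloring of the vertex-adjacency graph:

```python
# Sketch: greedy first-fit coloring of a mesh vertex-adjacency graph.
# Vertices sharing an edge get different colors, so each color class can be
# smoothed in parallel. Illustrative only; not one of the paper's procedures.
def greedy_coloring(adjacency):
    """adjacency: dict mapping vertex -> iterable of neighbor vertices."""
    color = {}
    for v in sorted(adjacency, key=lambda u: -len(adjacency[u])):  # high degree first
        taken = {color[n] for n in adjacency[v] if n in color}
        c = 0
        while c in taken:
            c += 1
        color[v] = c
    return color

# Tiny example; each color class becomes one parallel batch of vertices.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
print(greedy_coloring(adj))   # {2: 0, 0: 1, 1: 2, 3: 1}
```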
Abstract:
Nowadays, computing is migrating from traditional high-performance and distributed computing to pervasive and utility computing based on heterogeneous networks and clients. The current trend suggests that future IT services will rely on distributed resources and on fast communication of heterogeneous contents, and the success of this new range of services is directly linked to the effectiveness of the infrastructure in delivering them. The communication infrastructure will be an aggregation of different technologies, even though the current trend suggests the emergence of a single IP-based transport service. Optical networking is a key technology for answering the increasing requests for dynamic bandwidth allocation and for configuring multiple topologies over the same physical-layer infrastructure; however, optical networks today are still far from being directly configurable to offer network services and need to be enriched with more user-oriented functionalities. Current Control Plane architectures only facilitate efficient end-to-end connectivity provisioning and cannot meet future network service requirements, e.g., the coordinated control of resources. The overall objective of this work is to improve the usability and accessibility of the services provided by the optical network. More precisely, the definition of a service-oriented architecture is the enabling technology that allows user applications to benefit from advanced services over an underlying dynamic optical layer: a service-oriented networking architecture based on advanced optical network technologies gives users and applications access to abstracted levels of information regarding the offered network services. This thesis addresses the problem of defining such a Service Oriented Architecture and its relevant building blocks, protocols, and languages. In particular, the work focuses on the use of the SIP protocol as an inter-layer signalling protocol, which defines the Session Plane in conjunction with the Network Resource Description language. On the other hand, an advanced optical network must accommodate high data bandwidth with different granularities. Currently, two main technologies are emerging to promote the development of the future optical transport network: Optical Burst Switching and Optical Packet Switching. Both technologies promise to provide all-optical burst or packet switching instead of the current circuit switching; however, the electronic domain is still present in the scheduler's forwarding and routing decisions. Because of the high optical transmission rates, the burst or packet scheduler faces a difficult challenge; consequently, a high-performance, timing-focused design of both the memory and the forwarding logic is needed. This open issue is faced in this thesis by proposing a highly efficient implementation of a burst and packet scheduler. The main novelty of the proposed implementation is that the scheduling problem is turned into the simple calculation of a min/max function, whose complexity is almost independent of the traffic conditions.
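To illustrate the min/max reformulation of scheduling, below is a hedged sketch of a horizon-based burst scheduler in the spirit of LAUC (latest available unused channel): each output channel keeps only the time at which it becomes free, and admitting a burst reduces to a min/max selection over that array. This is an assumption-laden illustration of the idea, not the thesis' hardware design.

```python
# Sketch: horizon-based burst scheduling. Each channel stores its "horizon"
# (the time it becomes free); channel selection is a max over the channels
# already free at the burst's arrival, independent of past traffic history.
def schedule_burst(horizons, arrival, duration):
    """horizons: per-channel free times. Returns (channel, start) or None."""
    free = [(h, ch) for ch, h in enumerate(horizons) if h <= arrival]
    if not free:
        return None                       # no free channel: drop or buffer the burst
    h, ch = max(free)                     # latest-freed channel = tightest fit
    horizons[ch] = arrival + duration     # advance the chosen channel's horizon
    return ch, arrival

horizons = [0.0, 3.0, 1.5]
print(schedule_burst(horizons, arrival=2.0, duration=4.0))   # -> (2, 2.0)
```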
Abstract:
This work presents exact, hybrid algorithms for mixed resource Allocation and Scheduling problems; in general terms, these consist in assigning finite-capacity resources over time to a set of precedence-connected activities. The proposed methods have broad applicability but are mainly motivated by applications in the field of Embedded System Design. In particular, high-performance embedded computing has recently witnessed the shift from single-CPU platforms with application-specific accelerators to programmable Multi-Processor Systems-on-Chip (MPSoCs). These allow higher flexibility, real-time performance, and low energy consumption, but the programmer must be able to effectively exploit the platform parallelism. This raises interest in the development of algorithmic techniques to be embedded in CAD tools; in particular, given a specific application and platform, the objective is to perform an optimal allocation of hardware resources and to compute an execution schedule. In this regard, since embedded systems tend to run the same set of applications for their entire lifetime, off-line, exact optimization approaches are particularly appealing. Quite surprisingly, the use of exact algorithms has not been well investigated so far; this is in part explained by the complexity of integrated allocation and scheduling, which sets tough challenges for "pure" combinatorial methods. The use of hybrid CP/OR approaches presents the opportunity to exploit the mutual advantages of different methods while compensating for their weaknesses. In this work, we first consider an Allocation and Scheduling problem for the Cell BE processor by Sony, IBM and Toshiba; we propose three different solution methods, leveraging decomposition, cut generation, and heuristic-guided search. Next, we face the Allocation and Scheduling of so-called Conditional Task Graphs, explicitly accounting for branches whose outcome is not known at design time; we extend the CP scheduling framework to effectively deal with the introduced stochastic elements. Finally, we address Allocation and Scheduling with uncertain, bounded execution times via conflict-based tree search; we introduce a simple and flexible time model to take duration variability into account and provide an efficient conflict detection method. The proposed approaches achieve good results on problems of practical size, thus demonstrating that the use of exact approaches for system design is feasible. Furthermore, the developed techniques bring significant contributions to combinatorial optimization methods.
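For orientation, the core problem shape (precedence-connected activities competing for a finite-capacity resource) can be stated as a small greedy list scheduler. The sketch below is only a baseline illustration of the model the thesis solves exactly with hybrid CP/OR methods; all names and numbers are hypothetical.

```python
# Sketch: greedy list scheduling of precedence-connected activities on one
# finite-capacity resource. A baseline for the model only; the thesis' exact
# CP/OR hybrids search for provably optimal allocations and schedules.
import heapq

def list_schedule(durations, demands, preds, capacity):
    """Start times; an activity starts once its predecessors have finished
    and enough capacity is free. preds: dict activity -> set of predecessors."""
    n = len(durations)
    start, finish = {}, {}
    time, running, used = 0.0, [], 0          # running: heap of (finish, act, demand)
    pending = set(range(n))
    while pending or running:
        while running and running[0][0] <= time:   # release finished activities
            _, _, d = heapq.heappop(running)
            used -= d
        ready = [a for a in pending
                 if all(p in finish and finish[p] <= time for p in preds.get(a, ()))
                 and demands[a] <= capacity - used]
        if ready:
            a = min(ready)                     # naive tie-break; exact search branches here
            start[a], finish[a] = time, time + durations[a]
            heapq.heappush(running, (finish[a], a, demands[a]))
            used += demands[a]
            pending.remove(a)
        else:
            assert running, "deadlock: cyclic precedences or demand > capacity"
            time = running[0][0]               # advance to the next completion
    return start

print(list_schedule([2, 3, 1], [1, 1, 1], {1: {0}, 2: {0}}, capacity=1))
# -> {0: 0.0, 1: 2.0, 2: 5.0}
```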
Abstract:
Next-generation electronic devices have to guarantee high performance while being less power-consuming and highly reliable, for application domains ranging from entertainment to business. In this context, multicore platforms have proven to be the most efficient design choice, but new challenges have to be faced. The ever-increasing miniaturization of the components produces unexpected variations in technological parameters and wear-out, characterized by soft and hard errors. Even though hardware techniques, which lend themselves to be applied at design time, have been studied with the objective of mitigating these effects, they are not sufficient; thus, software adaptive techniques are necessary. In this thesis we focus on multicore task allocation strategies that minimize energy consumption while meeting performance constraints. We first devise a technique based on an Integer Linear Programming (ILP) formulation which provides the optimal solution but cannot be applied on-line, since the algorithm it requires is too time-consuming; we then propose a sub-optimal two-step technique which can be applied on-line. We demonstrate the effectiveness of the latter solution through an exhaustive comparison against the optimal solution, state-of-the-art policies, and variability-agnostic task allocations, by running multimedia applications on the virtual prototype of a next-generation industrial multicore platform. We also face the problem of performance and lifetime degradation. We first focus on embedded multicore platforms and propose an idleness distribution policy that increases the expected lifetime of the cores by duty-cycling their activity; we then investigate the use of micro thermoelectric coolers in general-purpose multicore processors to control the temperature of the cores at runtime, with the objective of meeting lifetime constraints without performance loss.
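The flavor of the off-line optimal formulation can be sketched as a small ILP: binary variables map tasks to cores, the objective is total energy, and a per-core utilization cap stands in for the performance constraint. This is a hedged toy model with invented numbers (the thesis' actual formulation also captures variability), written here with the PuLP modeling library:

```python
# Sketch: toy ILP for energy-minimizing task-to-core allocation under a
# per-core utilization cap. Numbers are made up; illustrative of the approach,
# not the thesis' full formulation.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, value

tasks, cores = range(4), range(2)
energy = {(t, c): (t + 1) * (1.0 if c == 0 else 1.4) for t in tasks for c in cores}
util   = {(t, c): 0.30 + 0.05 * t for t in tasks for c in cores}  # load of task t on core c

prob = LpProblem("task_allocation", LpMinimize)
x = LpVariable.dicts("x", (tasks, cores), cat="Binary")           # x[t][c]: task t on core c

prob += lpSum(energy[t, c] * x[t][c] for t in tasks for c in cores)   # total energy
for t in tasks:
    prob += lpSum(x[t][c] for c in cores) == 1                    # each task mapped exactly once
for c in cores:
    prob += lpSum(util[t, c] * x[t][c] for t in tasks) <= 1.0     # performance (utilization) cap

prob.solve()
print({t: next(c for c in cores if value(x[t][c]) > 0.5) for t in tasks})
```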
Abstract:
In this thesis, fundamental research towards the implementation of a diamond-based molecular quantum computer is presented. The approach followed requires the linear alignment of endohedral fullerenes on the diamond C(100) surface in the vicinity of subsurface NV centers. From this, four fundamental experimental challenges arise: 1) the well-controlled deposition of endohedral fullerenes on a diamond surface; 2) the creation of NV centers in diamond close to the surface; 3) the preparation and characterization of atomically flat diamond surfaces; 4) the assembly of linear chains of endohedral fullerenes. First steps to overcome all these challenges were taken in the framework of this thesis. To this end, a so-called "pulse injection" technique was implemented and tested in a UHV chamber custom-designed for this and further tasks. Pulse injection in principle allows the deposition of molecules from solution onto a substrate and can therefore be used to deposit molecular species that are not stable to sublimation under UHV conditions, such as the endohedral fullerenes needed for a quantum register. Regarding the targeted creation of NV centers, FIB experiments were carried out in cooperation with the group of Prof. Schmidt-Kaler (AG Quantum, Physics Department, Johannes Gutenberg-Universität Mainz). As an entry into this challenging task, argon cations were implanted into (111)-oriented CaF2 crystals. The resulting implantation spots on the surface were imaged and characterized using AFM, and general relations between the impact of the ions on the surface and their charge state or kinetic energy, respectively, could be established. The main part of this thesis, however, consists of NC-AFM studies on both bare and hydrogen-terminated diamond C(100) surfaces. In cooperation with the group of Prof. Dujardin (Molecular Nanoscience Group, ISMO, Université de Paris XI), clean and atomically flat diamond surfaces were prepared by exposing the substrate to a microwave hydrogen plasma. Subsequently, both surface modifications were imaged in high resolution with NC-AFM. In the process, both hydrogen atoms in the unit cell of the hydrogenated surface were resolved individually, which had not been achieved in previous STM studies of this surface. The NC-AFM images also reveal, for the first time, atomic-resolution contrast on the clean, insulating diamond surface and provide real-space experimental evidence for a (2×1) surface reconstruction. With regard to the quantum computing concept, high-resolution NC-AFM imaging was also used to study the adsorption and self-assembly of two different kinds of fullerenes (C60 and C60F48) on the aforementioned diamond surfaces. In the case of the hydrogenated surface, particular attention was paid to the influence of charge transfer doping on the fullerene-substrate interaction and on the morphology emerging from self-assembly, revealing new possibilities for tailoring the self-assembly of molecules with a high electron affinity. Finally, self-assembled C60 islands on the hydrogen-terminated diamond surface were subjected to active manipulation by the NC-AFM tip, and two different kinds of tip-induced island growth modes were observed. In conclusion, the results obtained provide fundamental information necessary for the realization of a molecular quantum computer and show that NC-AFM is, under proper circumstances, a very capable tool for imaging diamond surfaces at the highest resolution, surpassing even what has been achieved with STM up to now.
Abstract:
This thesis deals with the control of the self-assembly and microstructure of organic semiconductors and their application in OFETs. In Chapters 3, 4, and 5, a new solution-based processing method, termed solvent vapor diffusion, is devised to control the self-assembly of semiconductor molecules on the surface. This method is a powerful tool that allows precise control over the microstructure, as demonstrated in Chapter 3 for a donor-acceptor dyad consisting of hexa-peri-hexabenzocoronene (HBC) as donor and perylene diimide (PDI) as acceptor. The combination of surface modification and solvent vapor can compensate for dewetting effects, so that the desired microstructure and molecular organization can be achieved on the surface. In Chapters 4 and 5, this method was employed to control the self-assembly of dithieno[2,3-d;2',3'-d']benzo[1,2-b;4,5-b']dithiophene (DTBDT) and of a cyclopentadithiophene-benzothiadiazole copolymer (CDT-BTZ). The results may stimulate further studies and shed light on other high-performance conjugated polymers. In Chapters 6 and 7, monolayers, and subsequently their microstructure, of two conjugated polymers, poly(2,5-bis(3-alkylthiophen-2-yl)thieno[3,2-b]thiophene) (PBTTT) and poly{[N,N′-bis(2-octyldodecyl)-naphthalene-1,4,5,8-bis(dicarboximide)-2,6-diyl]-alt-5,5′-(2,2′-bithiophene)} (P(NDI2OD-T2)), were deposited on rigid surfaces by dip-coating. This is the first time that polymer monolayers have been deposited from solution. This approach can be further extended to a broad range of other conjugated polymers. In Chapter 8, PDI-CN2 films were successfully deposited, from monolayers to bi- and trilayers, on surfaces of different roughness. For the first time, the influence of roughness on solution-processed thin films was clearly described.
Abstract:
There is a demand for technologies able to assess the perfusion of surgical flaps quantitatively and reliably in order to avoid ischemic complications. The aim of this study is to test a new high-speed, high-definition laser Doppler imaging (LDI) system (FluxEXPLORER, Microvascular Imaging, Lausanne, Switzerland) in terms of preoperative mapping of the vascular supply (perforator vessels) and postoperative flow monitoring. The FluxEXPLORER performs perfusion mapping of a 9 x 9 cm area with a resolution of 256 x 256 pixels within 6 s in high-definition imaging mode. The sensitivity and predictive value for localizing perforators are expressed by the coincidence of preoperatively assessed LDI high-flow spots with intraoperatively verified perforators in nine patients; 18 free flaps were monitored before, during, and after total ischemia. Of all verified perforators, 63% corresponded to a high-flow spot, and 38% of all high-flow spots corresponded to a verified perforator (positive predictive value). All perfused flaps revealed values above 221 perfusion units (PU), and all values obtained in the ischemic flaps were below 187 PU. In summary, we conclude that the present LDI system can serve as a reliable, fast, and easy-to-handle tool to detect ischemia in free flaps, whereas perforator vessels cannot be detected appropriately.
Abstract:
To characterize the proteomic changes found in Barrett's adenocarcinoma and its premalignant stages, the proteomic profiles of histologically defined precursor and invasive carcinoma lesions were analyzed by MALDI imaging MS. For the primary proteomic screening, a discovery cohort of 38 fresh-frozen Barrett's adenocarcinoma patient tissue samples was used. The goal was to find proteins that might serve as markers for monitoring cancer development as well as for predicting regional lymph node metastasis and disease outcome. Using mass spectrometry for protein identification and validating the results by immunohistochemistry on an independent validation set, we identified two of 60 m/z species differentially expressed between Barrett's adenocarcinoma and the precursor lesion: COX7A2 and S100-A10. Furthermore, among 22 m/z species that are differentially expressed between Barrett's adenocarcinoma cases with and without regional lymph node metastasis, one was identified as TAGLN2. In the validation set, we found a correlation of the expression levels of COX7A2 and TAGLN2 with poor prognosis, while S100-A10 was confirmed by multivariate analysis as a novel independent prognostic factor in Barrett's adenocarcinoma. Our results underscore the high potential of MALDI imaging to reveal biologically significant molecular details from cancer tissues that may have potential for clinical application. This article is part of a Special Issue entitled: Translational Proteomics.
Abstract:
Background: Over the last 15 years, efforts to detect psychoses early in their prodromal states have progressed greatly; meanwhile, ultra-high risk (UHR) criteria have been the subject of such consensus that parts of them have been proposed for inclusion in DSM-5 in the form of an attenuated psychosis syndrome. However, it is frequently unacknowledged that the definitions and operationalizations of UHR-related at-risk criteria, including the relevant attenuated psychotic symptoms, vary considerably across centers and over time and, thus, between prediction studies. Methods: These variations in UHR criteria are described and discussed with reference to the rates of transition to psychosis, their prevalence in the general population, and the proposed new operationalization of the attenuated psychosis syndrome. Results: A comparison of samples recruited according to different UHR operationalizations reveals differences in the distribution of UHR criteria and transition rates, as well as in the prevalence rates of at-risk criteria in the general population. Conclusion: The evidence base for the introduction of such a new syndrome is weaker than the number of studies using supposedly equal UHR criteria would at first suggest. Thus, studies comparing the effects of different (sub-)criteria not only on transition rates and outcomes but also on other important aspects, such as neurocognitive performance and brain imaging results, are necessary. Meanwhile, the preliminary attenuated psychosis syndrome in DSM-5 should not follow an altogether new definition but, rather, the currently most reliable UHR definition, which must still demonstrate its reliability and validity outside specialized psychiatric services.
Abstract:
BACKGROUND: Despite recent algorithmic and conceptual progress, the stoichiometric network analysis of large metabolic models remains a computationally challenging problem. RESULTS: SNA is an interactive, high-performance toolbox for analysing the possible steady-state behaviour of metabolic networks by computing the generating and elementary vectors of their flux and conversion cones. It also supports analysing the steady states by linear programming. The toolbox is implemented mainly in Mathematica and returns numerically exact results. It is available under an open-source license from http://bioinformatics.org/project/?group_id=546. CONCLUSION: Thanks to its performance and modular design, SNA is demonstrably useful in analysing genome-scale metabolic networks. Furthermore, its integration into Mathematica provides a very flexible environment for the subsequent analysis and interpretation of the results.
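The linear-programming view of steady states mentioned above amounts to finding a flux vector v with S v = 0 within flux bounds while optimizing a target flux. SNA itself is a Mathematica toolbox; the sketch below is only a language-consistent illustration of that idea on an invented two-metabolite toy network, not SNA's API.

```python
# Sketch: steady-state flux analysis by linear programming, the kind of
# computation SNA supports (SNA itself is a Mathematica toolbox). Toy network.
import numpy as np
from scipy.optimize import linprog

# Stoichiometric matrix S (rows: metabolites A, B; columns: reactions)
#   r0: -> A,   r1: A -> B,   r2: B ->
S = np.array([[ 1, -1,  0],
              [ 0,  1, -1]], dtype=float)

c = np.array([0.0, 0.0, -1.0])           # maximize v2 (linprog minimizes, so negate)
bounds = [(0, 10)] * 3                   # irreversible reactions with an upper bound

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print(res.x)                             # steady-state flux distribution: [10. 10. 10.]
```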
Abstract:
OBJECT: Disturbed ionic and neurotransmitter homeostasis are now recognized as probably the most important mechanisms contributing to the development of secondary brain swelling after traumatic brain injury (TBI). Evidence obtained in animal models indicates that posttraumatic neuronal excitation by excitatory amino acids leads to an increase in extracellular potassium, probably due to ion channel activation. The purpose of this study was therefore to measure dialysate potassium in severely head injured patients and to correlate these results with measurements of intracranial pressure (ICP), patient outcome, levels of dialysate glutamate and lactate, and cerebral blood flow (CBF), to determine the role of ischemia in this posttraumatic ion dysfunction. METHODS: Eighty-five patients with severe TBI (Glasgow Coma Scale score < 8) were treated according to an intensive ICP management-focused protocol. All patients underwent intracerebral microdialysis. Dialysate potassium levels were analyzed using flame photometry, and dialysate glutamate and dialysate lactate levels were measured using high-performance liquid chromatography and an enzyme-linked amperometric method in 72 and 84 patients, respectively. Cerebral blood flow studies (stable xenon computerized tomography scanning) were performed in 59 patients. In approximately 20% of the patients, dialysate potassium values were increased (dialysate potassium > 1.8 mM) for 3 hours or more. A mean dialysate potassium level greater than 2 mM throughout the entire monitoring period was associated with ICP above 30 mm Hg and fatal outcome, as were progressively rising levels of dialysate potassium. Dialysate potassium correlated positively with dialysate glutamate (p < 0.0001) and lactate (p < 0.0001) levels, and was significantly inversely correlated with reduced CBF (p = 0.019). CONCLUSIONS: Dialysate potassium was increased after TBI in 20% of measurements. High levels of dialysate potassium were associated with increased ICP and poor outcome. The simultaneous increase in dialysate potassium, together with dialysate glutamate and lactate, supports the concept that glutamate induces ionic flux and consequently increases ICP, which the authors speculate may be due to astrocytic swelling. Reduced CBF was also significantly correlated with increased levels of dialysate potassium. This may be due to either cell swelling or altered vasoreactivity in cerebral blood vessels caused by higher levels of potassium after trauma. Additional studies in which potassium-sensitive microelectrodes are used are needed to validate these ionic events more clearly.
Abstract:
Disturbed ionic and neurotransmitter homeostasis are now recognized to be probably the most important mechanisms contributing to the development of secondary brain swelling after traumatic brain injury (TBI). Evidence obtained from animal models indicates that posttraumatic neuronal excitation via excitatory amino acids leads to an increase in extracellular potassium, probably due to ion channel activation. The purpose of this study was therefore to measure dialysate potassium in severely head injured patients and to correlate these results with intracranial pressure (ICP), outcome, the levels of dialysate glutamate and lactate, and cerebral blood flow (CBF), so as to determine the role of ischemia in this posttraumatic ionic dysfunction. Eighty-five patients with severe TBI (Glasgow Coma Scale score < 8) were treated according to an intensive ICP management-focused protocol. All patients underwent intracerebral microdialysis. Dialysate potassium levels were analyzed by flame photometry; dialysate glutamate and dialysate lactate levels were measured using high-performance liquid chromatography and an enzyme-linked amperometric method in 72 and 84 patients, respectively. Cerebral blood flow studies (stable xenon computerized tomography scanning) were performed in 59 patients. In approximately 20% of the patients, potassium values were increased (dialysate potassium > 1.8 mmol/L). Mean dialysate potassium greater than 2 mmol/L was associated with ICP above 30 mm Hg and fatal outcome. Dialysate potassium correlated positively with dialysate glutamate (p < 0.0001) and lactate (p < 0.0001) levels, and was significantly inversely correlated with reduced CBF (p = 0.019). Dialysate potassium was increased after TBI in 20% of measurements. High levels of dialysate potassium were associated with increased ICP and poor outcome. The simultaneous increase of potassium, together with dialysate glutamate and lactate, supports the hypothesis that glutamate induces ionic flux and consequently increases ICP due to astrocytic swelling. Reduced CBF was also significantly correlated with increased levels of dialysate potassium. This may be due to either cell swelling or altered vasoreactivity of cerebral blood vessels caused by potassium after trauma.