827 results for Multiple-scale processing
Abstract:
Several countries have acquired, over the past decades, large amounts of area-covering airborne electromagnetic (AEM) data. The contribution of airborne geophysics to both groundwater resource mapping and management has increased dramatically, proving how appropriate these systems are for large-scale, efficient groundwater surveying. We start with the processing and inversion of two AEM datasets from two different systems collected over the Spiritwood Valley Aquifer area, Manitoba, Canada: the AeroTEM III dataset (commissioned by the Geological Survey of Canada in 2010) and the "Full waveform VTEM" dataset, collected and tested over the same survey area during the fall of 2011. We demonstrate that, in the presence of multiple datasets, both AEM and ground data, proper processing, inversion, post-processing, data integration and data calibration constitute the approach capable of providing reliable and consistent resistivity models. Our approach can be of interest to many end users, ranging from geological surveys and universities to private companies, which often hold large proprietary geophysical databases to be interpreted for geological and/or hydrogeological purposes. In this study we investigate in depth the role of integrating several complementary types of geophysical data collected over the same survey area. We show that data integration can improve inversions, reduce ambiguity and deliver high-resolution results. We further use the final, most reliable output resistivity models as a solid basis for building a knowledge-driven 3D geological voxel-based model. A voxel approach allows a quantitative understanding of the hydrogeological setting of the area, and it can further be used to estimate aquifer volumes (i.e. the potential amount of groundwater resources) as well as to support hydrogeological flow model prediction. In addition, we investigated the impact of an AEM dataset on hydrogeological mapping and 3D hydrogeological modeling, comparing it to having only a ground-based TEM dataset and/or only borehole data.
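To make the data-integration idea concrete, here is a minimal sketch, not the authors' code: a joint least-squares objective for a 1D log-resistivity model constrained by two datasets (AEM and ground TEM) plus a smoothness regularizer. The forward operators F_aem and F_tem are hypothetical linear placeholders standing in for the real nonlinear electromagnetic forward models.

```python
import numpy as np

# Sketch only: joint objective combining two datasets with different noise
# levels and a first-difference roughness penalty on the layered model.
rng = np.random.default_rng(0)
n_layers, n_aem, n_tem = 20, 30, 15
F_aem = rng.normal(size=(n_aem, n_layers))         # placeholder forward operator
F_tem = rng.normal(size=(n_tem, n_layers))         # placeholder forward operator
m_true = np.linspace(1.0, 2.5, n_layers)           # "true" log10 resistivity
d_aem = F_aem @ m_true + rng.normal(scale=0.10, size=n_aem)
d_tem = F_tem @ m_true + rng.normal(scale=0.05, size=n_tem)

def joint_misfit(m, alpha=1.0):
    """Noise-weighted misfits of both datasets plus a smoothness term."""
    r_aem = (F_aem @ m - d_aem) / 0.10
    r_tem = (F_tem @ m - d_tem) / 0.05
    rough = np.diff(m)
    return r_aem @ r_aem + r_tem @ r_tem + alpha * (rough @ rough)

m_start = np.full(n_layers, 1.5)
print("joint misfit at the starting model:", joint_misfit(m_start))
```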
Abstract:
Opportunistic diseases caused by Human Immunodeficiency Virus (HIV) and Hepatitis B Virus (HBV) are an omnipresent global challenge. To manage these epidemics, we need low-cost and easily deployable point-of-care platforms for high-congestion regions such as airports and public transit systems. In this dissertation we present our findings on Localized Surface Plasmon Resonance (LSPR)-based detection of pathogens and other clinically relevant applications using microfluidic platforms in point-of-care, resource-constrained settings. The work presented here adopts the novel technique of LSPR to multiplex a lab-on-a-chip (LOC) device capable of quantitatively detecting various types of intact viruses and their subtypes, based on the principle that a change in wavelength occurs when the metal nanoparticle surface is modified with a specific surface chemistry allowing a desired pathogen to bind to a specific antibody. We demonstrate the ability to detect and quantify HIV subtypes A, B, C, D, E and G, as well as a panel of HIV, at concentrations down to 100 copies/mL using both whole blood samples and HIV-patient blood samples discarded from clinics. These results were compared against the gold-standard reverse transcriptase quantitative polymerase chain reaction (RT-qPCR). The microfluidic device has a total assay evaluation time of about 70 minutes: 60 minutes for capture and 10 minutes for data acquisition and processing. The LOC platform eliminates the need for any sample preparation before processing. The platform is highly multiplexable, as the same surface chemistry can be adapted to capture and detect several other pathogens such as dengue virus, E. coli and M. tuberculosis.
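As an illustration of the LSPR readout principle described above, here is a minimal sketch with synthetic spectra (not the dissertation's data): binding raises the local refractive index at the nanoparticle surface and red-shifts the resonance peak, and the shift is read off from the extinction spectra.

```python
import numpy as np

# Illustrative only: synthetic extinction spectra before and after binding;
# the LSPR peak positions are located with a simple argmax.
wavelengths = np.linspace(450, 700, 1001)          # nm

def lorentzian(center, width=40.0):
    return 1.0 / (1.0 + ((wavelengths - center) / width) ** 2)

spectrum_bare = lorentzian(540.0)                  # functionalized nanoparticles
spectrum_bound = lorentzian(548.0)                 # after capture (hypothetical shift)

peak_bare = wavelengths[np.argmax(spectrum_bare)]
peak_bound = wavelengths[np.argmax(spectrum_bound)]
print(f"LSPR peak shift: {peak_bound - peak_bare:.1f} nm")
```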
Abstract:
Urban centers significantly contribute to anthropogenic air pollution, although they cover only a minor fraction of the Earth's land surface. Since the worldwide degree of urbanization is steadily increasing, the anthropogenic contribution to air pollution from urban centers is expected to become more substantial in future air quality assessments. The main objective of this thesis was to obtain a more profound insight into the dispersion and the deposition of aerosol particles from 46 individual major population centers (MPCs), as well as the regional and global influence on the atmospheric distribution of several aerosol types. For the first time, this was assessed in one model framework, for which the global model EMAC was applied with different representations of aerosol particles. First, in an approach with passive tracers and a setup in which the results depend only on the source location and the size and solubility of the tracers, several metrics and a regional climate classification were used to quantify the major outflow pathways, both vertically and horizontally, and to compare the balance between pollution export away from and pollution build-up around the source points. Then, in a more comprehensive approach, the anthropogenic emissions of key trace species were changed at the MPC locations to determine the cumulative impact of the MPC emissions on the atmospheric aerosol burdens of black carbon, particulate organic matter, sulfate, and nitrate. Ten different mono-modal passive aerosol tracers were continuously released at the same constant rate at each emission point. The results clearly showed that on average about five times more mass is advected quasi-horizontally at low levels than exported into the upper troposphere. The strength of the low-level export is mainly determined by the location of the source, while the vertical transport is mainly governed by the lifting potential and the solubility of the tracers. Similar to insoluble gas phase tracers, the low-level export of aerosol tracers is strongest at middle and high latitudes, while the regions of strongest vertical export differ between aerosol (temperate winter dry) and gas phase (tropics) tracers. The emitted mass fraction that is kept around MPCs is largest in regions where aerosol tracers have short lifetimes; this mass is also critical for assessing the impact on humans. However, the number of people who live in a strongly polluted region around urban centers depends more on the population density than on the size of the area that is affected by strong air pollution. Another major result was that fine aerosol particles (diameters smaller than 2.5 micrometers) from MPCs undergo substantial long-range transport, with about half of the emitted mass being deposited beyond 1000 km from the source. In contrast to this diluted remote deposition, there are areas around the MPCs which experience high deposition rates, especially in regions which are frequently affected by heavy precipitation or are situated in poorly ventilated locations. Moreover, most MPC aerosol emissions are removed over land surfaces. In particular, forests experience more deposition from MPC pollutants than other land ecosystems. In addition, it was found that the generic treatment of aerosols has no substantial influence on the major conclusions drawn in this thesis.
Moreover, in the more comprehensive approach, it was found that emissions of black carbon, particulate organic matter, sulfur dioxide, and nitrogen oxides from MPCs influence the atmospheric burden of various aerosol types very differently, with impacts generally being larger for secondary species, sulfate and nitrate, than for primary species, black carbon and particulate organic matter. While the changes in the burdens of sulfate, black carbon, and particulate organic matter show an almost linear response for changes in the emission strength, the formation of nitrate was found to be contingent upon many more factors, e.g., the abundance of sulfuric acid, than only upon the strength of the nitrogen oxide emissions. The generic tracer experiments were further extended to conduct the first risk assessment to obtain the cumulative risk of contamination from multiple nuclear reactor accidents on the global scale. For this, many factors had to be taken into account: the probability of major accidents, the cumulative deposition field of the radionuclide cesium-137, and a threshold value that defines contamination. By collecting the necessary data and after accounting for uncertainties, it was found that the risk is highest in western Europe, the eastern US, and in Japan, where on average contamination by major accidents is expected about every 50 years.
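To illustrate the kind of export bookkeeping used in the tracer approach above, here is a minimal sketch with invented numbers (not the thesis data): the emitted mass of an MPC tracer is split into near-source build-up, quasi-horizontal low-level export, and vertical export to the upper troposphere, and the deposited mass is split into near and remote (beyond 1000 km) fractions.

```python
import numpy as np

# Illustrative mass bookkeeping only; all numbers are invented.
emitted = 100.0                       # arbitrary mass units
retained_near_source = 20.0
low_level_export = 66.0
upper_tropospheric_export = 14.0      # roughly 5x less than low-level export

print("low-level / vertical export ratio:",
      round(low_level_export / upper_tropospheric_export, 1))

deposition_distance_km = np.array([200, 600, 1500, 3000, 8000])
deposited_mass = np.array([15.0, 25.0, 30.0, 20.0, 10.0])
remote = deposited_mass[deposition_distance_km > 1000].sum() / deposited_mass.sum()
print(f"fraction of deposition beyond 1000 km: {remote:.0%}")
```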
Abstract:
In the oil and gas industry, pore-scale imaging and simulation are on their way to becoming routine applications. Their further potential can be exploited in the environmental field, e.g. for the transport and fate of contaminants in the subsurface, the storage of carbon dioxide, and the natural attenuation of contaminants in soils. X-ray computed tomography (XCT) provides a non-destructive 3D imaging technique that is also frequently used to investigate the internal structure of geological samples. The first aim of this dissertation was to implement an image-processing technique that removes the beam hardening of X-ray computed tomography and simplifies the segmentation of its data. The second aim of this work was to investigate the combined effects of pore-space characteristics and pore tortuosity, together with flow simulation and transport modeling in pore spaces using the lattice Boltzmann method. In a cylindrical geological sample, the position of each phase could be extracted based on the observation that the beam hardening present in the reconstructed images is a radial function from the sample edge to the center, and the different phases could be segmented automatically. Furthermore, beam-hardening effects of arbitrarily shaped objects were corrected by a surface-fitting algorithm. The least squares support vector machine (LSSVM) method is characterized by a modular structure and is very well suited for pattern recognition and classification. For this reason, the LSSVM method was implemented as a pixel-based classification method. This algorithm is able to classify complex geological samples correctly, but in that case requires longer computation times, since multi-dimensional training datasets must be used. The dynamics of the immiscible phases air and water were investigated by a combination of pore morphology and the lattice Boltzmann method for drainage and imbibition processes in 3D datasets of soils obtained by synchrotron-based XCT. Although the pore-morphology approach is a simple method of fitting spheres into the available pore space, it can nevertheless explain the complex capillary hysteresis as a function of water saturation. Hysteresis was observed for the capillary pressure and the hydraulic conductivity, caused mainly by the connected pore networks and the available pore-size distribution. The hydraulic conductivity is a function of the water-saturation level and is compared with a macroscopic calculation using empirical models; the data agree well, especially at high water saturations. In order to predict the presence of pathogens in groundwater and wastewater, the influence of grain size, pore geometry and fluid flow velocity was studied in a soil aggregate, e.g. with the microorganism Escherichia coli. The asymmetric, long-tailed breakthrough curves, especially at higher water saturations, were caused by dispersive transport due to the connected pore network and the heterogeneity of the flow field. The biocolloid residence time was observed to be a function of the pressure gradient as well as of the colloid size. Our modeling results agree very well with previously published data.
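As a rough illustration of the radial beam-hardening correction described above (a sketch under simplifying assumptions and with synthetic data, not the implementation from the dissertation): fit the mean gray value as a function of distance from the cylinder axis and divide the slice by that smooth radial trend to remove the cupping artifact.

```python
import numpy as np

# Synthetic slice of a cylindrical sample with a radial cupping artifact.
n = 256
y, x = np.mgrid[:n, :n]
r = np.hypot(x - n / 2, y - n / 2)
inside = r < 100                                   # sample cross-section
phase = (np.random.default_rng(1).random((n, n)) > 0.5).astype(float)
observed = np.where(inside, 0.5 + 0.5 * phase, 0.0) * (1.0 + 0.004 * r)

# Mean intensity in thin radial rings, then a smooth polynomial trend.
edges = np.arange(0, 101, 5)
ring = np.digitize(r[inside], edges)
profile = np.array([observed[inside][ring == i].mean() for i in range(1, len(edges))])
radii = 0.5 * (edges[:-1] + edges[1:])
trend = np.polyval(np.polyfit(radii, profile, deg=2), r)

corrected = np.where(inside, observed / trend, 0.0)
print("gray-value spread inside the sample before/after correction:",
      round(observed[inside].std(), 3), round(corrected[inside].std(), 3))
```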
Abstract:
Glioblastoma multiforme (GBM) is the most common and most aggressive astrocytic tumor of the central nervous system (CNS) in adults. The standard treatment, consisting of surgery followed by combined radio- and chemotherapy, is only palliative and prolongs median patient survival to 12 to 15 months. The tumor subpopulation of stem cell-like glioma-initiating cells (GICs) is resistant to radiation as well as chemotherapy and has been suggested to be responsible for relapses of more aggressive tumors after therapy. The efficacy of immunotherapies, which exploit the immune system to specifically recognize and eliminate malignant cells, is limited by the strong immunosuppressive activities of the GICs and the generation of a specialized protective microenvironment. The molecular mechanisms underlying the therapy resistance of GICs are largely unknown.

The first aim of this study was to identify immune evasion mechanisms in GICs triggered by radiation. A model was used in which patient-derived GICs were treated in vitro with fractionated ionizing radiation (2.5 Gy in 7 consecutive passages) to select for a more radio-resistant phenotype. In the model cell line 1080, this selection process resulted in increased proliferative but diminished migratory capacities in comparison to untreated control GICs. Furthermore, radio-selected GICs downregulated various proteins involved in antigen processing and presentation, resulting in decreased expression of MHC class I molecules on the cell surface and a diminished recognition potential by cytotoxic CD8+ T cells. Thus, sub-lethal fractionated radiation can promote immune evasion and hamper the success of adjuvant immunotherapy. Among several immune-associated proteins, interferon-induced transmembrane protein 3 (IFITM3) was found to be upregulated in radio-selected GICs. While high expression of IFITM3 was associated with worse overall survival of GBM patients (TCGA database) and with increased proliferation and migration of differentiated glioma cell lines, a strong contribution of IFITM3 to proliferation in vitro or to tumor growth and invasiveness in a xenograft model could not be observed.

Multiple sclerosis (MS) is the most common autoimmune disease of the CNS in young adults in the Western world; it leads to progressive disability in genetically susceptible individuals, possibly triggered by environmental factors. It is assumed that self-reactive, myelin-specific T helper cell 1 (Th1) and Th17 cells, which have escaped the control mechanisms of the immune system, are critical in the pathogenesis of the human disease and of its animal model, experimental autoimmune encephalomyelitis (EAE). It was observed that in vitro differentiated, interleukin-17 (IL-17)-producing Th17 cells co-expressed the Th1-phenotypic cytokine interferon-gamma (IFN-γ) together with the two respective lineage-associated transcription factors RORγt and T-bet after re-isolation from the CNS of diseased mice. The pathogenic molecular mechanisms that render a CD4+ T cell encephalitogenic have scarcely been investigated to date.

In the second part of the thesis, whole-transcriptome changes occurring in in vitro differentiated Th17 cells in the course of EAE were analyzed. Evaluation of signaling networks revealed an overrepresentation of genes involved in communication between the innate and adaptive immune system and in metabolic alterations, including cholesterol biosynthesis. The transcription factors Cebpa, Fos, Klf4, Nfatc1 and Spi1, associated with thymocyte development and naïve T cells, were upregulated in encephalitogenic CNS-isolated CD4+ T cells, suggesting a contribution to T cell plasticity. Correlation of the murine T-cell gene expression dataset with putative MS risk genes, which were selected based on their proximity (± 500 kb; Ensembl database, release 75) to the MS risk single nucleotide polymorphisms (SNPs) proposed by the most recent multiple sclerosis GWAS in 2011, revealed that 67.3% of the MS risk genes were differentially expressed in EAE. Expression patterns of Bach2, Il2ra, Irf8, Mertk, Odf3b, Plek, Rgs1, Slc30a7, and Thada were confirmed in independent experiments, suggesting a contribution to T cell pathogenicity. Functional analysis of Nfatc1 revealed that Nfatc1-deficient CD4+ T cells were restrained in their ability to induce clinical signs of EAE. Nfatc1 deficiency allowed proper T cell activation but diminished the cells' potential to fully differentiate into Th17 cells and to express high amounts of lineage cytokines. As the inducible Nfatc1/αA transcript is distinct from the other family members, it could represent an interesting target for therapeutic intervention in MS.
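As a small illustration of the proximity-based risk-gene selection described above: flag genes lying within ± 500 kb of a risk SNP on the same chromosome. The window size comes from the text; all gene names and coordinates below are invented for illustration (the study itself used Ensembl release 75 annotations).

```python
# Sketch only: proximity filter between gene positions and risk SNPs.
WINDOW = 500_000

risk_snps = [("chr1", 101_200_000), ("chr10", 6_050_000)]
genes = {
    "GeneA": ("chr1", 101_450_000),
    "GeneB": ("chr10", 6_000_000),
    "GeneC": ("chr17", 30_000_000),
}

def near_risk_snp(chrom, pos):
    return any(c == chrom and abs(p - pos) <= WINDOW for c, p in risk_snps)

for name, (chrom, pos) in genes.items():
    status = "within +/- 500 kb of a risk SNP" if near_risk_snp(chrom, pos) else "outside the window"
    print(name, status)
```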
Abstract:
A system in a metastable state must overcome a certain free-energy barrier in order to form a droplet of the stable phase. Conventional studies assume spherical droplets, but in anisotropic systems (such as crystals) this assumption is not appropriate. At low temperatures the anisotropy of the system strongly affects the free energy of its interface; this effect weakens above the roughening temperature T_R. The Ising model is a simple model that exhibits such an anisotropy. We perform large-scale simulations in order to keep the effects associated with a finite simulation box, as well as statistical uncertainties, as small as possible. The scale of the simulations needed to produce meaningful results requires the development of a scalable simulation program for the Ising model that can be used on different parallel architectures (e.g. graphics cards). Platform independence is achieved through abstract interfaces that hide platform-specific implementation details. We use a system geometry that allows an interface with a variable angle to the crystal plane to be studied. The interface is in contact with a hard wall, and the contact angle Θ can be adjusted by a surface field. We derive a differential equation that describes the behavior of the interfacial free energy in an anisotropic system. Combined with thermodynamic integration, the equation can be used to integrate the anisotropic interfacial tension over a wide angular range. Comparisons with earlier measurements in other geometries and with other methods show high agreement and accuracy, achieved above all through simulation domains that are much larger than in earlier measurements. The temperature dependence of the interfacial stiffness κ is measured above T_R via the curvature of the interfacial free energy for small angles. This measurement can be compared with simulation results in the literature and shows better agreement with theoretical predictions for the scaling behavior of κ. Furthermore, we develop a low-temperature model for the behavior around Θ = 90 degrees far below T_R. The angle remains essentially zero up to a critical field H_C; above the critical field the angle rises rapidly. H_C is related to the free energy of a step, which makes it possible to analyze the critical behavior of this quantity. The hard wall must be included in the analysis. By comparing free energies for suitably chosen system sizes, it is possible to measure the contribution of the contact line to the free energy as a function of Θ. This analysis is carried out at several temperatures. In the last chapter, a 2D fluid dynamics simulation, which can be used among other things to simulate the dynamics of the atmosphere, is parallelized for graphics cards. We implement a parallel evolution Galerkin operator and achieve
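A minimal, self-contained sketch of the kind of Monte Carlo update underlying such Ising simulations: a standard single-spin-flip Metropolis sweep with an extra boundary field h_s on one row, the sort of surface field used to tune the contact angle in the setup above. This is a plain illustration, not the scalable GPU code developed in the thesis; periodic boundaries are kept only for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
L, beta, h_s = 64, 0.5, 0.2
spins = rng.choice([-1, 1], size=(L, L))

def sweep(s):
    """One Metropolis sweep over L*L randomly chosen sites."""
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        neighbors = (s[(i + 1) % L, j] + s[(i - 1) % L, j] +
                     s[i, (j + 1) % L] + s[i, (j - 1) % L])
        field = h_s if i == 0 else 0.0            # surface field on one row
        dE = 2.0 * s[i, j] * (neighbors + field)  # energy change of a flip
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            s[i, j] *= -1

for _ in range(10):
    sweep(spins)
print("magnetization per spin after 10 sweeps:", spins.mean())
```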
Abstract:
The rapid development of the computer industry through the continual shrinking of transistors is fast approaching the limit of Si technology, beyond which tunneling processes in the transistors no longer permit further miniaturization and higher packing density in processors. The future of computer technology lies in the processing of quantum information. For the development of quantum computers, the detection and targeted manipulation of individual spins in solids is of utmost importance. The standard methods of spin detection, such as ESR, however, only allow the detection of spin ensembles. The idea that should make the readout of single spins possible is to perform the manipulation separately from the detection. The NV− center is a special lattice defect in diamond that can be used as an atomic, optically readable magnetic field sensor. By measuring its fluorescence, it should be possible to detect, via spin-spin coupling, the manipulation of other, optically undetectable "dark spins" in the immediate vicinity of the NV center. The proposed model of the quantum computer is based on N@C60 enclosed in SWCNTs. The peapods, as these units of nitrogen-containing fullerenes packed into carbon nanotubes are called, are intended to form the basis of the computational units of a truly scalable quantum computer. The computations performed in them with the nitrogen electron spin are to be read out optically via near-surface NV centers (in diamond plates) above which they are to be positioned.

The primary goal of the present work was to optically detect the coupling of single near-surface NV centers to the optically undetectable spins of radical molecules on the diamond surface by means of ODMR coupling experiments, and thereby to take decisive steps toward the realization of a quantum register. An ODMR setup still in its development stage was rebuilt, and its existing functionality was verified on commercial NV-center-rich nanodiamonds. In the next step, the efficiency and mode of measurement were adapted to the detection and manipulation of single NV centers implanted near the surface (< 7 nm depth) of diamond plates. A very large part of the work, which can only partly be described here, consisted of adapting the existing control software to the demands of practical measurement. Subsequently, the correct function of all implemented pulse sequences and other software improvements was verified by measurements on near-surface implanted single NV centers. The measurement station was also extended by the components required for double-resonance measurements, such as a controllable electromagnet and an RF signal source. Taking into account the thermal stability of N@C60, an optical cryostat was also planned, built, integrated into the setup, and characterized for future experiments. The spin-spin coupling experiments were carried out with the oxygen-stable galvinoxyl radical as a model system for coupling. Through the coupling to an NV center, the RF spectrum of the coupled radical spin was observed. A Rabi nutation of the coupled spin could also be recorded.

Further aspects of the peapod measurement and of surface implantation were also considered. It was investigated whether NV detection is disturbed by the SWCNTs, peapods or fullerenes. It was found that the components of the planned quantum computer, except for the C60 clusters, are not detectable in an ODMR measurement configuration and will not disturb the NV measurement. It was also considered which types of commercial diamond plates are suitable for surface implantation; a density of implanted NV centers suitable for the coupling measurements was estimated, and an implantation with the estimated density was considered.
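As a small illustration of how a Rabi nutation like the one mentioned above is typically quantified: fit a damped cosine to fluorescence contrast versus driving-pulse duration and read off the Rabi frequency. The trace below is synthetic, not a measurement from this work.

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit a damped cosine to a synthetic Rabi trace (contrast vs. pulse length).
def rabi(t, amp, f, tau, offset):
    return offset + amp * np.cos(2 * np.pi * f * t) * np.exp(-t / tau)

t = np.linspace(0, 2e-6, 200)                       # pulse duration in seconds
data = rabi(t, 0.15, 5e6, 1.5e-6, 0.85)
data += np.random.default_rng(2).normal(scale=0.01, size=t.size)

popt, _ = curve_fit(rabi, t, data, p0=[0.1, 4e6, 1e-6, 0.8])
print(f"fitted Rabi frequency: {popt[1] / 1e6:.2f} MHz")
```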
Abstract:
Prediction of long-term disability in patients with multiple sclerosis (MS) is essential. Magnetic resonance imaging (MRI) measurement of brain volume may be of predictive value, but sophisticated MRI techniques are often inaccessible in clinical practice. The corpus callosum index (CCI) is a normalized measurement that reflects changes of brain volume. We investigated medical records and 533 MRI scans obtained at diagnosis and during clinical follow-up of 169 MS patients (mean age 42 +/- 11 years, 86% relapsing-remitting MS, time since first relapse 11 +/- 9 years). CCI at diagnosis was 0.345 +/- 0.04 and correlated with duration of disease (p = 0.002; r = -0.234) and with the expanded disability status scale (EDSS) score at diagnosis (r = -0.428; p < 0.001). Linear regression analyses identified age, duration of disease, relapse rate and EDSS at diagnosis as independent predictors of disability after a mean of 7.1 years (Nagelkerke's R: 0.56). The annual CCI decrease was 0.01 +/- 0.02 (annual tissue loss: 1.3%). In secondary progressive MS patients, the CCI decrease was double that in relapsing-remitting MS patients (p = 0.04). There was a trend toward a greater CCI decrease in untreated patients compared to those who received disease-modifying drugs (p = 0.2). CCI is an easy-to-use MRI marker for estimating brain atrophy in patients with MS. Brain atrophy as measured with CCI was associated with disability progression, but it was not an independent predictor of long-term disability.
Recurrent antitopographic inhibition mediates competitive stimulus selection in an attention network
Abstract:
Topographically organized neurons represent multiple stimuli within complex visual scenes and compete for subsequent processing in higher visual centers. The underlying neural mechanisms of this process have long been elusive. We investigate an experimentally constrained model of a midbrain structure: the optic tectum and the reciprocally connected nucleus isthmi. We show that a recurrent antitopographic inhibition mediates the competitive stimulus selection between distant sensory inputs in this visual pathway. This recurrent antitopographic inhibition is fundamentally different from surround inhibition in that it projects on all locations of its input layer, except to the locus from which it receives input. At a larger scale, the model shows how a focal top-down input from a forebrain region, the arcopallial gaze field, biases the competitive stimulus selection via the combined activation of a local excitation and the recurrent antitopographic inhibition. Our findings reveal circuit mechanisms of competitive stimulus selection and should motivate a search for anatomical implementations of these mechanisms in a range of vertebrate attentional systems.
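To make the distinction drawn above concrete, here is a minimal sketch (illustrative only, not the published model) contrasting a recurrent antitopographic weight matrix, which inhibits every location of its input layer except the one it receives input from, with a classical surround-inhibition kernel. With two distant competing stimuli, the global antitopographic scheme lets the stronger stimulus suppress the weaker one regardless of their separation.

```python
import numpy as np

N, w = 10, 1.0
antitopographic = -w * (1 - np.eye(N))            # zero only on the diagonal

surround = np.zeros((N, N))
for i in range(N):
    for j in range(N):
        if 0 < abs(i - j) <= 2:                   # inhibit near neighbors only
            surround[i, j] = -w

stimulus = np.zeros(N)
stimulus[[2, 7]] = [1.0, 1.2]                     # two distant competing stimuli
print("net input, antitopographic:", stimulus + antitopographic @ stimulus)
print("net input, surround:       ", stimulus + surround @ stimulus)
```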
Abstract:
Investigates multiple processing parameters of solid-state shear pulverization (SSSP), including polymer type, filler type, processing technique, severity of SSSP processing, and postprocessing. HDPE and LLDPE polymers with pristine clay and organoclay samples are explored. Effects on crystallization, high-temperature behavior, mechanical properties, and gas barrier properties are examined. Thermal, mechanical, and morphological characterization is conducted to determine polymer/filler compatibility and superior processing methods for the polymer-clay nanocomposites.
Abstract:
The aim of this study was to refine a multi-dimensional scale based on physiological and behavioural parameters, known as the post abdominal surgery pain assessment scale (PASPAS), to quantify pain after laparotomy in horses. After a short introduction, eight observers used the scale to assess eight horses at multiple time points after laparotomy. In addition, a single observer was used to test the correlation of each parameter with the total pain index in 34 patients, and the effect of general anaesthesia on PASPAS was investigated in a control group of eight horses. Inter-observer variability was low (coefficient of variation 0.3), which indicated good reliability of PASPAS. The correlation of individual parameters with the total pain index differed between parameters. PASPAS, which was not influenced by general anaesthesia, was a useful tool to evaluate pain in horses after abdominal surgery and may also be useful to investigate analgesic protocols or for teaching purposes.
Abstract:
This thesis presents two frameworks, a software framework and a hardware core manager framework, which together can be used to develop a processing platform using a distributed system of field-programmable gate array (FPGA) boards. The software framework provides users with the ability to easily develop applications that exploit the processing power of FPGAs, while the hardware core manager framework gives users the ability to configure and interact with multiple FPGA boards and/or hardware cores. The thesis describes the design and development of these frameworks and analyzes the performance of a system that was constructed using them. The performance analysis included measuring the effect of incorporating additional hardware components into the system and comparing the system to a software-only implementation. The work draws conclusions based on the results of the performance analysis and offers suggestions for future work.
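A purely hypothetical sketch of the division of responsibilities described above; the thesis does not publish this API, and every name below is invented for illustration. It shows an application-facing call path going through a core manager that tracks which hardware core sits on which board.

```python
from dataclasses import dataclass, field


@dataclass
class HardwareCore:
    """Stand-in for a core configured on an FPGA board (hypothetical)."""
    name: str
    board_id: int

    def invoke(self, payload: bytes) -> bytes:
        # Placeholder: a real manager would stream data to the FPGA core.
        return payload[::-1]


@dataclass
class CoreManager:
    """Hypothetical manager that configures cores and routes requests."""
    cores: dict = field(default_factory=dict)

    def configure(self, name: str, board_id: int) -> None:
        self.cores[name] = HardwareCore(name, board_id)

    def run(self, name: str, payload: bytes) -> bytes:
        return self.cores[name].invoke(payload)


manager = CoreManager()
manager.configure("fir_filter", board_id=0)
print(manager.run("fir_filter", b"sample data"))
```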
Abstract:
Submicroscopic changes in chromosomal DNA copy number dosage are common and have been implicated in many heritable diseases and cancers. Recent high-throughput technologies have a resolution that permits the detection of segmental changes in DNA copy number that span thousands of basepairs across the genome. Genome-wide association studies (GWAS) may simultaneously screen for copy number-phenotype and SNP-phenotype associations as part of the analytic strategy. However, genome-wide array analyses are particularly susceptible to batch effects as the logistics of preparing DNA and processing thousands of arrays often involves multiple laboratories and technicians, or changes over calendar time to the reagents and laboratory equipment. Failure to adjust for batch effects can lead to incorrect inference and requires inefficient post-hoc quality control procedures that exclude regions that are associated with batch. Our work extends previous model-based approaches for copy number estimation by explicitly modeling batch effects and using shrinkage to improve locus-specific estimates of copy number uncertainty. Key features of this approach include the use of diallelic genotype calls from experimental data to estimate batch- and locus-specific parameters of background and signal without the requirement of training data. We illustrate these ideas using a study of bipolar disease and a study of chromosome 21 trisomy. The former has batch effects that dominate much of the observed variation in quantile-normalized intensities, while the latter illustrates the robustness of our approach to datasets where as many as 25% of the samples have altered copy number. Locus-specific estimates of copy number can be plotted on the copy-number scale to investigate mosaicism and guide the choice of appropriate downstream approaches for smoothing the copy number as a function of physical position. The software is open source and implemented in the R package CRLMM available at Bioconductor (http://www.bioconductor.org).
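A schematic form of the batch-aware signal model described above (a generic sketch, not the exact CRLMM formulation): for batch $b$, locus $p$, and sample $i$, the normalized intensity can be written as

$$ I_{ibp} = \nu_{bp} + c_{ibp}\,\phi_{bp} + \varepsilon_{ibp}, \qquad \varepsilon_{ibp} \sim \mathcal{N}\left(0, \sigma^{2}_{bp}\right), $$

where $\nu_{bp}$ and $\phi_{bp}$ are the batch- and locus-specific background and signal parameters estimated from the diallelic genotype calls, $c_{ibp}$ is the copy number to be estimated, and shrinking the locus-level variance estimates $\sigma^{2}_{bp}$ across loci within a batch stabilizes the locus-specific uncertainty attached to the copy number estimates.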
Abstract:
To evaluate a triphasic injection protocol for whole-body multidetector computed tomography (MDCT) in patients with multiple trauma. Fifty consecutive patients (41 men) were examined. Contrast medium (300 mg/mL iodine) was injected starting with 70 mL at 3 mL/s, followed by 0.1 mL/s for 8 s, and by another bolus of 75 mL at 4 mL/s. CT data acquisition started 50 s after the beginning of the first injection. Two experienced, blinded readers independently measured the density in all major arteries, veins, and parenchymatous organs. Image quality was assessed using a five-point ordinal rating scale and compared to standard injection protocols [n = 25 each for late arterial chest, portovenous abdomen, and MDCT angiography (CTA)]. With the exception of the infrarenal inferior caval vein, all blood vessels were depicted with diagnostic image quality using the multiple-trauma protocol. Arterial luminal density was slightly but significantly smaller compared to CTA (P < 0.01). Veins and parenchymatous organs were opacified significantly better compared to all other protocols (P < 0.01). Arm artifacts reduced the density of spleen and liver parenchyma significantly (P < 0.01). Similarly high image quality is achieved for arteries using the multiple-trauma protocol compared to CTA, and parenchymatous organs are depicted with better image quality compared to specialized protocols. Arm artifacts should be avoided.
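A quick arithmetic check of the injection timing implied by the protocol above, using only the figures quoted in the abstract (contrast medium with 300 mg iodine per mL):

```python
# Phase 1: 70 mL at 3 mL/s; phase 2: 0.1 mL/s for 8 s; phase 3: 75 mL at 4 mL/s.
phase1_volume, phase1_rate = 70.0, 3.0     # mL, mL/s
phase2_rate, phase2_duration = 0.1, 8.0    # mL/s, s
phase3_volume, phase3_rate = 75.0, 4.0     # mL, mL/s

phase1_duration = phase1_volume / phase1_rate          # ~23.3 s
phase2_volume = phase2_rate * phase2_duration          # 0.8 mL
phase3_duration = phase3_volume / phase3_rate          # ~18.8 s

total_volume = phase1_volume + phase2_volume + phase3_volume
total_iodine_g = total_volume * 300 / 1000
print(f"total contrast volume: {total_volume:.1f} mL ({total_iodine_g:.1f} g iodine)")
print(f"end of first two phases: {phase1_duration + phase2_duration:.1f} s "
      "(acquisition starts 50 s after the start of injection)")
```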
Abstract:
Zeki and co-workers recently proposed that perception can best be described as locally distributed, asynchronous processes that each create a kind of microconsciousness, which condense into an experienced percept. The present article aims to extend this theory to metacognitive feelings. We present evidence that perceptual fluency, the subjective feeling of ease during perceptual processing, is based on the speed of processing at different stages of the perceptual process. Specifically, detection of briefly presented stimuli was influenced by figure-ground contrast, but not by symmetry (Experiment 1) or the font (Experiment 2) of the stimuli. Conversely, discrimination of these stimuli was influenced by whether they were symmetric (Experiment 1) and by the font they were presented in (Experiment 2), but not by figure-ground contrast. Both tasks, however, were related to the subjective experience of fluency (Experiments 1 and 2). We conclude that subjective fluency is the conscious phenomenal correlate of different processing stages in visual perception.