855 results for Sight
Abstract:
Goal-directed orientation enables living organisms to accomplish tasks essential for survival, such as searching for resources, mates, and safe places. This requires perceiving the environment through the senses, storing and recalling previous experiences, and integrating this information and translating it into motor actions.
Which groups of neurons mediate goal-directed orientation in the brain of a fly? Which sensory information is relevant in a given context, and how is this information, together with stored prior knowledge, translated into motor actions? Where in the brain does the transition from sensory processing to motor control take place?
The central complex, an assembly of four neuropils in the central brain of Drosophila melanogaster, acts as the interface between visual information pre-processed in the optic lobes and premotor output. The neuropils are the protocerebral bridge, the fan-shaped body, the ellipsoid body, and the noduli.
The present work shows that fruit flies possess a spatial working memory. This memory can substitute for current visual information when sight of the target object is lost. It requires the sensory perception of target objects, storage of their position, continuous integration of self and object position, and the translation of this sensory information into goal-directed movement. Conditional expression of tetanus toxin via the GAL4/UAS/GAL80ts system showed that the ring neurons projecting into the ellipsoid body are necessary for this orientation memory. Furthermore, flies lacking the ribosomal serine kinase S6KII lose their heading as soon as no objects are visible, and partial rescue of this kinase exclusively in the ring neuron classes R3 and R4d is sufficient to restore the memory. This memory appears to be an idiothetic form of orientation.
While spatial working memory becomes relevant after objects disappear, the present work also investigated how goal-directed movement towards visible objects is mediated, addressing the central question of which groups of neurons mediate visual orientation. Using brain structure mutants, it was shown that an intact protocerebral bridge is necessary to correctly control walking speed, walking activity, and targeting accuracy when approaching visual stimuli. The horizontal fiber system, which projects from the protocerebral bridge via the fan-shaped body onto neuropils associated with the central complex, the ventral bodies, appears to be necessary for locomotor control and accurate goal-directed movement. The latter was shown, on the one hand, by blocking synaptic transmission in the horizontal fiber system through conditional tetanus toxin expression via the GAL4/UAS/GAL80ts system, and, on the other hand, by partial rescue of the genes affected in the structure mutants.
Based on the present results and earlier studies, a model emerges of how goal-directed movement towards visual stimuli might be mediated neuronally.
According to this model, the protocerebral bridge maps the azimuth positions of objects, and the horizontal fiber system conveys the corresponding locomotor "where" information for goal-directed movements. The position of the animal relative to the target object is mediated via the ring neurons and the ellipsoid body. When the object disappears from view, the relative position can be determined idiothetically and integrated with prior information about the target object stored in the fan-shaped body ("what" information). The resulting information could then reach descending neurons via the horizontal fiber system in the ventral bodies and be relayed to the motor centers in the thorax.
Abstract:
Seals are amphibious marine mammals, meaning that they inhabit two different environments, water and land. Their sensory systems must be adapted to both media. For vision in particular, adjusting to two optically different media is a major challenge, which is why researchers have been so interested in the vision of marine mammals since the twentieth century.
To this day it remains controversial whether marine mammals can see color, since a genetic defect leaves them with only one cone type, making them cone monochromats. Training experiments showed, however, that fur seals and sea lions are able to distinguish green and blue test panels from shades of gray (Busch & Dücker, 1987; Griebel & Schmid, 1992).
To rule out that the animals merely mimic color vision by discriminating brightness, the present work first examined contrast detection and then carried out tests of color vision. Two harbor seals (Phoca vitulina) and two South African fur seals (Arctocephalus pusillus) served as experimental animals. All experiments were conducted under the open sky at Frankfurt Zoo. The animals were always offered a choice of three test panels: two were identical and showed a homogeneous background, while the third showed a triangle on the same background. The animals were trained to choose the triangle. In the brightness-contrast experiments, gray triangles on gray backgrounds were used. The triangle was not detected at luminance contrasts (K = (L_D − L_H)/(L_D + L_H), with L_D the luminance of the triangle and L_H that of the background) between 0.03 and −0.12.
In the color vision tests, the colors blue, green, yellow, and orange were presented on gray backgrounds. The test series showed that each animal achieved high choice frequencies for the colored triangle even in ranges of low brightness contrast and could therefore clearly see the colors blue, green, and yellow. Only for orange can no statement about color vision be made, because the colored triangle was always darker than the background.
In summary, this work shows that harbor seals and fur seals are able to see color. This ability presumably rests on the interaction of rods and cones.
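For reference, the luminance contrast used above can be computed directly. The short Python sketch below assumes the Michelson-type definition K = (L_D − L_H)/(L_D + L_H), with L_D the luminance of the triangle and L_H that of the background; the function and variable names are illustrative and not taken from the thesis.

```python
def luminance_contrast(l_triangle: float, l_background: float) -> float:
    """Michelson-type luminance contrast between target and background.

    Positive values: target brighter than the background;
    negative values: target darker than the background.
    """
    return (l_triangle - l_background) / (l_triangle + l_background)

# Hypothetical example: a triangle slightly darker than its background
print(luminance_contrast(42.0, 53.0))  # ≈ -0.12, near the detection limit reported above
```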
Abstract:
Stratospheric particles are typically invisible to the naked eye. Nevertheless, they have a significant influence on the Earth's radiation budget and on heterogeneous chemistry in the stratosphere. Continuous, vertically resolved, global data sets are therefore essential for understanding the physical and chemical processes in this part of the atmosphere. Beginning with the measurements of the second Stratospheric Aerosol Measurement (SAM II) instrument in 1978, a continuous time series of stratospheric aerosol extinction profiles exists, which is carried on to the present day by instruments such as the second Stratospheric Aerosol and Gas Experiment (SAGE II), SCIAMACHY, OSIRIS, and OMPS.

This work presents a newly developed algorithm that uses the so-called "onion-peeling principle" to retrieve extinction profiles between 12 and 33 km. The algorithm is applied to radiance profiles at single wavelengths measured by SCIAMACHY in limb geometry. SCIAMACHY's unique approach of alternating limb and nadir measurements offers the advantage of providing vertically and horizontally highly resolved measurements with temporal and spatial coincidence. The additional information obtained in this way can be used to correct for the effects of horizontal gradients along the instrument's line of sight, which are observed in particular shortly after volcanic eruptions and for polar stratospheric clouds. If these gradients are neglected in the retrieval of extinction profiles, both the optical thickness and the altitude of volcanic plumes or polar stratospheric clouds may be underestimated. This work presents a procedure that corrects the retrieved extinction profiles with the help of three-dimensional radiative transfer simulations and horizontally resolved data sets.

Comparison studies with results from satellite (SAGE II) and balloon measurements show that extinction profiles of stratospheric particles can be retrieved with the newly developed algorithm and agree well with existing data sets. Investigations of the 2011 Nabro volcanic eruption and of the occurrence of polar stratospheric clouds in the Southern Hemisphere show that the correction procedure for horizontal gradients clearly improves the retrieved extinction profiles.
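The onion-peeling principle mentioned above can be illustrated with a deliberately simplified Python sketch. It assumes concentric spherical shells and an optically thin, linear limb signal (a path-length-weighted sum of shell extinctions), which is an idealization rather than the actual SCIAMACHY retrieval; all names and the forward model are illustrative assumptions.

```python
import numpy as np

def onion_peel(tangent_heights_km, integrated_signal, earth_radius_km=6371.0):
    """Retrieve per-shell extinction from limb-integrated measurements, top down.

    Assumes each limb measurement is a path-length-weighted sum of the
    extinction in the spherical shells at and above the tangent height
    (optically thin, single-scattering idealization).
    """
    z = np.asarray(tangent_heights_km, float)      # tangent heights, ascending
    n = len(z)
    edges = np.append(z, z[-1] + (z[-1] - z[-2]))  # shell boundaries
    r = earth_radius_km + edges
    # Geometric path length of the ray with tangent radius r[i] inside shell j >= i
    L = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):
            L[i, j] = 2.0 * (np.sqrt(r[j + 1]**2 - r[i]**2)
                             - np.sqrt(max(r[j]**2 - r[i]**2, 0.0)))
    ext = np.zeros(n)
    # Peel from the topmost shell downwards, subtracting overlying contributions
    for i in range(n - 1, -1, -1):
        overlying = np.dot(L[i, i + 1:], ext[i + 1:])
        ext[i] = (integrated_signal[i] - overlying) / L[i, i]
    return ext
```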
Abstract:
The eye is the sense organ responsible for vision, an optical instrument whose working principle is comparable to that of a camera. According to the World Health Organization (WHO 2010), 285 million people worldwide live with severe visual impairment: 39 million are blind and 246 million have low vision. This highlights the need for technologies capable of restoring retinal function in the various pathophysiological conditions that compromise it. With this in view, the aim of this thesis was to review the main types of technological systems aimed at the diagnosis and therapy of retinal pathophysiologies. The search for bioengineering solutions to recover retinal function under pathophysiological conditions involves different fields of study, such as medicine, biology, neuroscience, electronics, and materials chemistry. In particular, the main retinal implants are described, including epiretinal and subretinal implants as well as cortical and optic nerve implants. Among the implants that have so far obtained European Union certification are the epiretinal Argus II system (Second Sight Medical Products) and the subretinal Alpha IMS device (Retina Implant AG). The state of the art of artificial retinas based on inorganic technology, however, faces limitations related mainly to the need for an external power supply, long-term biocompatibility, the complexity of the fabrication processes, the difficulty of the surgical procedure, the number of electrodes, size and geometry, high impedance, and heat production. Alternative bioengineering approaches are advancing in the field of artificial vision. Among the frontier prospects, optogenetic technologies, whose purpose is the photoactivation of compromised neurons, are currently under study. In addition, there are innovative technologies that exploit the mechanical, optoelectronic, and biocompatibility properties of organic polymeric materials. The integration of photonic functions into organic electronics offers new possibilities to the field of optoelectronics, which exploits the optical and electronic properties of organic semiconductors to design optoelectronic devices and applications for biomedical imaging and sensing. The combination of organic and inorganic technologies could, in perspective, pave the way for a new generation of retinal devices and implants.
Abstract:
Moebius sequence is a congenital disorder that not only affects the oculomotor system but also the eyes themselves. Ocular involvement might be sight-threatening and needs regular follow-up by an ophthalmologist.
Abstract:
In the human body, over 1000 different G protein-coupled receptors (GPCRs) mediate a broad spectrum of extracellular signals at the plasma membrane, underpinning vital physiological functions such as pain, sight, smell, inflammation, heart rate, and the contractility of muscle cells. Signaling through these receptors is primarily controlled and regulated by a group of kinases, the GPCR kinases (GRKs), of which only seven are known; interference with these common downstream GPCR regulators therefore suggests a powerful therapeutic strategy. Molecular modulation of the GRKs ubiquitously expressed in the heart has identified GRK2, and also GRK5, as promising targets for the prevention and reversal of one of the most severe pathologies in man, chronic heart failure (HF). In this article we focus on the structural aspects of these GRKs that are important for their physiological and pathological regulation, as well as on established and novel therapeutic approaches that target these GRKs to counteract the development of cardiac injury and the progression of HF.
Abstract:
BACKGROUND: Severe postoperative loss of vision has occasionally been reported as a rare complication of retrobulbar anesthesia, and several possible causes have been proposed in the literature. In this work, our own and other investigators' experiences with these complications are surveyed with a view to identifying their pathophysiology. PATIENTS: This observational case series refers to six patients who presented during a 3-month period with occlusion of either the central artery itself (n = 3) or a branch thereof (n = 3) 2-14 days after uneventful vitreoretinal surgery following retrobulbar anesthesia with a commercial preparation of mepivacaine (1% Scandicain®, Astra Chemicals, Sweden) containing methyl- and propyl parahydroxybenzoate as preservatives. RESULTS: Three of the patients carried risk factors, which were medically controlled. In three individuals, vasoocclusion was observed after a second vitreoretinal intervention, which was performed 3-12 months after uneventful primary surgery. Good visual recovery was observed in only one instance. CONCLUSIONS: In patients who were anesthetized with preservative-free mepivacaine, no vasoocclusion occurred. In individuals who were anesthetized with mepivacaine containing the preservatives methyl- and propyl parahydroxybenzoate, a tenfold increase in the incidence of eyes requiring re-operation was documented, with a 2- to 14-day lapse in the onset of vasoocclusion. These findings point to a possible role of the preservatives contained in the local anesthetic solution in the vasoocclusive events. Because of this potential hazard, the use of preservative-free local anesthetic preparations in ocular surgery is emphasized in order to prevent this sight-threatening complication.
Abstract:
Image overlay projection is a form of augmented reality that allows surgeons to view underlying anatomical structures directly on the patient surface. It improves the intuitiveness of computer-aided surgery by removing the need for sight diversion between the patient and a display screen and has been reported to assist in the 3-D understanding of anatomical structures and the identification of target and critical structures. Challenges in the development of image overlay technologies for surgery remain in the projection setup. Calibration, patient registration, view direction, and projection obstruction remain unsolved limitations of image overlay techniques. In this paper, we propose a novel, portable, and handheld-navigated image overlay device based on miniature laser projection technology that allows images of 3-D patient-specific models to be projected directly onto the organ surface intraoperatively without the need for intrusive hardware around the surgical site. The device can be integrated into a navigation system, thereby exploiting existing patient registration and model generation solutions. The position of the device is tracked by the navigation system’s position sensor and used to project geometrically correct images from any position within the workspace of the navigation system. The projector was calibrated using modified camera calibration techniques, and images for projection are rendered using a virtual camera defined by the projector’s extrinsic parameters. Verification of the device’s projection accuracy yielded a mean projection error of 1.3 mm. Visibility testing of the projection performed on pig liver tissue found the device suitable for the display of anatomical structures on the organ surface. The feasibility of use within the surgical workflow was assessed during open liver surgery. We show that the device could be quickly and unobtrusively deployed within the sterile environment.
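As a rough illustration of how a calibrated projector can render geometrically correct images, the Python sketch below projects a 3-D point into projector pixel coordinates using a pinhole model defined by intrinsic and extrinsic parameters, as one would obtain from camera-style calibration. The matrices and values are placeholders, not the device's actual calibration.

```python
import numpy as np

def project_point(point_world, K, R, t):
    """Project a 3-D point (world frame) into projector pixel coordinates.

    K: 3x3 intrinsic matrix; R, t: extrinsic rotation/translation mapping
    world coordinates into the projector frame (pinhole model).
    """
    p_proj = R @ np.asarray(point_world, float) + t  # world -> projector frame
    u, v, w = K @ p_proj                             # perspective projection
    return np.array([u / w, v / w])                  # pixel coordinates

# Placeholder calibration (illustrative only)
K = np.array([[1500.0, 0.0, 640.0],
              [0.0, 1500.0, 360.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 0.5])  # projector 0.5 m from the world origin
print(project_point([0.02, -0.01, 0.0], K, R, t))
```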
Abstract:
Presenting visual feedback for image-guided surgery on a monitor requires the surgeon to perform time-consuming comparisons and to divert sight and attention away from the patient. Deficiencies in previously developed augmented reality systems for image-guided surgery have, however, prevented the general acceptance of any one technique as a viable alternative to monitor displays. This work presents an evaluation of the feasibility and versatility of a novel augmented reality approach for the visualisation of surgical planning and navigation data. The approach, which utilises a portable image overlay device, was evaluated during integration into existing surgical navigation systems and during application within simulated navigated surgery scenarios.
Abstract:
PURPOSE: The aim of this study is to implement augmented reality in real-time image-guided interstitial brachytherapy to allow an intuitive real-time intraoperative orientation. METHODS AND MATERIALS: The developed system consists of a common video projector, two high-resolution charge-coupled device cameras, and an off-the-shelf notebook. The projector was used as a scanning device by projecting coded-light patterns to register the patient and superimpose the operating field with planning data and additional information in arbitrary colors. Subsequent movements of the nonfixed patient were detected by means of stereoscopically tracking passive markers attached to the patient. RESULTS: In a first clinical study, we evaluated the whole process chain from image acquisition to data projection and determined overall accuracy with 10 patients undergoing implantation. The described method enabled the surgeon to visualize planning data on top of any preoperatively segmented and triangulated surface (skin) with direct line of sight during the operation. Furthermore, the tracking system allowed dynamic adjustment of the data to the patient's current position and therefore eliminated the need for rigid fixation. Because of soft-part displacement, we obtained an average deviation of 1.1 mm by moving the patient, whereas changing the projector's position resulted in an average deviation of 0.9 mm. Mean deviation of all needles of an implant was 1.4 mm (range, 0.3-2.7 mm). CONCLUSIONS: The developed low-cost augmented-reality system proved to be accurate and feasible in interstitial brachytherapy. The system meets clinical demands and enables intuitive real-time intraoperative orientation and monitoring of needle implantation.
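The marker-based tracking described above relies on estimating how the patient has moved between registration and the current camera view. A minimal sketch of this step, assuming three or more corresponding passive-marker positions and a standard least-squares rigid alignment (Kabsch/Umeyama), is given below in Python; it is not the system's actual implementation.

```python
import numpy as np

def rigid_transform(markers_ref, markers_cur):
    """Least-squares rigid transform (R, t) mapping reference marker
    positions onto their currently tracked positions (Kabsch/Umeyama).

    Such a transform can be used to re-align projected planning data when
    the patient moves; the marker coordinates passed in are illustrative.
    """
    A = np.asarray(markers_ref, float)
    B = np.asarray(markers_cur, float)
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)                 # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = cb - R @ ca
    return R, t
```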
Abstract:
Since the introduction of the rope-pump in Nicaragua in the 1990s, the dependence on wells in rural areas has grown steadily. However, little or no attention is paid to rope-pump well performance after installation. Due to financial restraints, groundwater resource monitoring using conventional testing methods is too costly and out of reach of rural municipalities. Nonetheless, there is widespread agreement that without a way to quantify the changes in well performance over time, prioritizing regulatory actions is impossible. A manual pumping test method is presented which, at a fraction of the cost of a conventional pumping test, measures the specific capacity of rope-pump wells. The method requires only slight modifications to the well and reasonable limitations on well usage prior to testing. The pumping test was performed a minimum of 33 times in three wells over an eight-month period in a small rural community in Chontales, Nicaragua. Data were used to measure seasonal variations in specific well capacity for three rope-pump wells completed in fractured crystalline basalt. Data collected from the tests were analyzed using four methods (equilibrium approximation, time-drawdown during pumping, time-drawdown during recovery, and time-drawdown during late-time recovery) to determine the best data-analysis method. One conventional pumping test was performed to aid in evaluating the manual method. The equilibrium approximation can be performed in the field with only a calculator and is the most technologically appropriate method for analyzing data. Results from this method overestimate specific capacity by 41% when compared to results from the conventional pumping test. The other analysis methods, requiring more sophisticated tools and higher-level interpretation skills, yielded results that agree to within 14% (pumping phase), 31% (recovery phase), and 133% (late-time recovery) of the conventional test productivity value. The wide variability in accuracy results principally from difficulties in achieving an equilibrated pumping level and from casing storage effects in the pumping/recovery data. Decreases in well productivity resulting from naturally occurring seasonal water-table drops varied from insignificant in two wells to 80% in the third. Despite practical and theoretical limitations of the method, the collected data may be useful for municipal institutions to track changes in well behavior, eventually developing a database for planning future groundwater development projects. Furthermore, the data could improve well users’ ability to self-regulate well usage without expensive aquifer characterization.
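For context, the specific capacity estimated by such a test under the equilibrium approximation is simply the pumping rate divided by the stabilized drawdown. The Python sketch below shows this arithmetic with made-up numbers, not data from the study.

```python
def specific_capacity(pumping_rate_lpm: float, drawdown_m: float) -> float:
    """Specific capacity (L/min per metre of drawdown) under the
    equilibrium approximation: discharge divided by stabilized drawdown."""
    return pumping_rate_lpm / drawdown_m

# Hypothetical example: 20 L/min pumped with the water level drawn down 2.5 m
print(specific_capacity(20.0, 2.5))  # 8.0 L/min per metre of drawdown
```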
Abstract:
This dissertation investigates high performance cooperative localization in wireless environments based on multi-node time-of-arrival (TOA) and direction-of-arrival (DOA) estimations in line-of-sight (LOS) and non-LOS (NLOS) scenarios. Here, two categories of nodes are assumed: base nodes (BNs) and target nodes (TNs). BNs are equipped with antenna arrays and capable of estimating TOA (range) and DOA (angle). TNs are equipped with omnidirectional antennas and communicate with BNs to allow BNs to localize TNs; thus, the proposed localization is maintained through the cooperation of BNs and TNs. First, a LOS localization method is proposed, which is based on semi-distributed multi-node TOA-DOA fusion. The proposed technique is applicable to mobile ad-hoc networks (MANETs). We assume LOS is available between BNs and TNs. One BN is selected as the reference BN, and other nodes are localized in the coordinates of the reference BN. Each BN can localize TNs located in its coverage area independently. In addition, a TN might be localized by multiple BNs. High performance localization is attainable via multi-node TOA-DOA fusion. The complexity of the semi-distributed multi-node TOA-DOA fusion is low because the total computational load is distributed across all BNs. To evaluate the localization accuracy of the proposed method, we compare the proposed method with global positioning system (GPS) aided TOA (DOA) fusion, which are applicable to MANETs. The comparison criterion is the localization circular error probability (CEP). The results confirm that the proposed method is suitable for moderate scale MANETs, while GPS-aided TOA fusion is suitable for large scale MANETs. Usually, TOA and DOA of TNs are periodically estimated by BNs. Thus, a Kalman filter (KF) is integrated with multi-node TOA-DOA fusion to further improve its performance. The integration of KF and multi-node TOA-DOA fusion is compared with the extended KF (EKF) when it is applied to multiple TOA-DOA estimations made by multiple BNs. The comparison shows that the integrated approach is stable (no divergence takes place) and that its accuracy is only slightly lower than that of the EKF when the EKF converges. However, the EKF may diverge while the integration of KF and multi-node TOA-DOA fusion does not; thus, the reliability of the proposed method is higher. In addition, the computational complexity of the integration of KF and multi-node TOA-DOA fusion is much lower than that of the EKF. In wireless environments, LOS might be obstructed, which degrades localization reliability. Antenna arrays installed at each BN are incorporated to allow each BN to identify NLOS scenarios independently. Here, a single BN measures the phase difference across two antenna elements using a synchronized bi-receiver system and maps it onto the wireless channel’s K-factor. The larger K is, the more likely the channel is a LOS one. Next, the K-factor is used to identify NLOS scenarios. The performance of this system is characterized in terms of the probability of LOS and NLOS identification. The latency of the method is small. Finally, a multi-node NLOS identification and localization method is proposed to improve localization reliability. In this case, multiple BNs engage in the process of NLOS identification, shared reflector determination and localization, and NLOS TN localization. In NLOS scenarios, when there are three or more shared reflectors, those reflectors are localized via DOA fusion, and then a TN is localized via TOA fusion based on the localization of the shared reflectors.
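The basic geometry behind single-BN TOA-DOA localization, and a naive way of fusing estimates from several BNs, can be sketched in Python as follows; the coordinates and the simple averaging step are illustrative assumptions rather than the dissertation's semi-distributed fusion algorithm.

```python
import numpy as np

def toa_doa_estimate(bn_position, range_m, bearing_rad):
    """Single-BN position estimate: place the TN at the measured range (TOA)
    and direction of arrival (DOA) relative to the BN, in 2-D reference
    coordinates shared by all BNs."""
    bn = np.asarray(bn_position, float)
    return bn + range_m * np.array([np.cos(bearing_rad), np.sin(bearing_rad)])

def fuse_estimates(estimates):
    """Naive multi-node fusion: average the per-BN estimates.
    (A variance-weighted mean or Kalman filter would be used in practice.)"""
    return np.mean(np.asarray(estimates, float), axis=0)

# Hypothetical measurements of the same TN from two BNs
e1 = toa_doa_estimate([0.0, 0.0], 100.0, np.deg2rad(30.0))
e2 = toa_doa_estimate([150.0, 0.0], 80.0, np.deg2rad(130.0))
print(fuse_estimates([e1, e2]))
```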
Abstract:
Routine bridge inspections require labor-intensive and highly subjective visual interpretation to determine bridge deck surface condition. Light Detection and Ranging (LiDAR), a relatively new class of survey instrument, has become a popular and increasingly used technology for providing as-built and inventory data in civil applications. While an increasing number of private and governmental agencies possess terrestrial and mobile LiDAR systems, an understanding of the technology’s capabilities and potential applications continues to evolve. LiDAR is a line-of-sight instrument and, as such, care must be taken when establishing scan locations and resolution to allow the capture of data at an adequate resolution for defining features that contribute to the analysis of bridge deck surface condition. Information such as the location, area, and volume of spalling on deck surfaces, undersides, and support columns can be derived from properly collected LiDAR point clouds. The LiDAR point clouds contain information that can provide quantitative surface condition information, resulting in more accurate structural health monitoring. LiDAR scans were collected at three study bridges, each of which displayed a varying degree of degradation. A variety of commercially available analysis tools and an independently developed algorithm written in ArcGIS Python (ArcPy) were used to locate and quantify surface defects such as the location, volume, and area of spalls. The results were visually and numerically displayed in a user-friendly web-based decision support tool integrating prior bridge condition metrics for comparison. LiDAR data processing procedures, along with strengths and limitations of point clouds for defining features useful for assessing bridge deck condition, are discussed. Point cloud density and incidence angle are two attributes that must be managed carefully to ensure the data collected are of high quality and useful for bridge condition evaluation. When collected properly to ensure effective evaluation of bridge surface condition, LiDAR data can be analyzed to provide a useful data set from which to derive bridge deck condition information.
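The spall-quantification idea can be sketched independently of the ArcPy implementation used in the study: fit a reference plane to the deck surface, flag points lying more than a threshold below it, and approximate spall area and volume from the nominal point spacing. The Python sketch below uses NumPy with made-up parameters and is an idealization, not the study's workflow.

```python
import numpy as np

def quantify_spalls(points, depth_threshold=0.01, point_spacing=0.005):
    """Rough spall metrics from a deck-surface point cloud (N x 3 array).

    Fits a least-squares plane z = a*x + b*y + c to the deck, flags points
    more than depth_threshold (m) below the plane, and approximates spall
    area (m^2) and volume (m^3) from the nominal point spacing (m).
    """
    pts = np.asarray(points, float)
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    coeffs, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    depth = (A @ coeffs) - pts[:, 2]      # positive = point below the fitted plane
    spall = depth > depth_threshold
    cell_area = point_spacing ** 2        # ground area represented by one point
    area = float(spall.sum() * cell_area)
    volume = float(np.sum(depth[spall]) * cell_area)
    return {"spall_points": int(spall.sum()), "area_m2": area, "volume_m3": volume}
```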
Abstract:
BACKGROUND: Exudative age-related macular degeneration (AMD) is a sight-threatening event in many elderly people. Some patients have a much better outcome in visual acuity (VA) than others after treatment with photodynamic therapy (PDT) with verteporfin. The combination of fluorescein angiography (FA) and indocyanine green (ICG) angiography using the Heidelberg Retina Angiograph II (HRA 2) should make a delineation of distinct pattern(s) possible in order to better select and assess therapy. METHODS: This is a retrospective, case-control, single-centre study. We identified a total of 168 eyes of 168 patients from July 2003 to June 2006, including 30 eyes of 30 patients with better visual outcome, defined in this study as VA ≤ 0.48 logMAR (≥ 20/60 Snellen) at the end of the study. Best-corrected VA, maximal central retinal thickness as measured by optical coherence tomography, and results of the FA/ICG angiography using the HRA 2 were analyzed. In this article, we discuss patients with polypoidal choroidal vasculopathy (PCV) and their characteristics. RESULTS: The average follow-up time was 15.3 months (range 4-28 months). Seventeen (57%) of the 30 patients with better visual outcome had PCV. All patients in the group with better visual outcome needed fewer PDT treatments compared with our control group of patients with an exudative AMD. INTERPRETATION: Simultaneous FA/ICG angiography using the HRA 2 allowed delineation of a subgroup of patients with PCV who showed a better visual outcome compared with those with other types of exudative AMD, after treatment with PDT.
Abstract:
The purpose of Part I of this report is to determine the origin of the bentonite deposits, to locate them with reference to section corners in the vicinity, and to determine their extent. The field work for this report was done in the fall of 1933 and during the spring of 1934. The roads, geologic contacts, and culture in general were mapped with the use of an open-sight alidade and plane table. Distances along the roads were determined by the speedometer of the automobile; the detailed survey in the immediate vicinity of the deposits was done with the use of the Brunton compass and pacing. The purpose of Part II of this report is to determine whether the bentonite deposits immediately west of Butte, Montana are of commercial importance and also to determine the use to which they are best suited.