896 results for MICROSCOPIC VISUALIZATION
Abstract:
Report published in the Proceedings of the National Conference on "Education and Research in the Information Society", Plovdiv, May 2016
Abstract:
This paper introduces the theory of algorithm visualization and the education-related results obtained with it so far, then presents and evaluates an algorithm visualization tool as an example. It further illustrates how algorithm visualization tools can be used by teachers and students during the teaching and learning of programming, and evaluates the associated teaching and learning methods. Two tools are introduced: Jeliot and TRAKLA2.
Abstract:
Because some Web users can design a template to visualize information from scratch, while others need visualizations generated automatically by adjusting a few parameters, providing different levels of customization of the information is a desirable goal. Our system supports both the automatic generation of visualizations from the semantics of the data and static, pre-specified visualization through an interface language we define. We address information visualization in the context of the Web, where the presentation of retrieved information is a challenge.

We provide a model that narrows the gap between the user's way of expressing queries and database manipulation languages (SQL) without changing the system itself, thus improving the query specification process. We develop a Web interface model integrated with HTML to create a powerful language that facilitates the construction of Web-based database reports.

Unlike other approaches, this model offers a new way of exploring databases, focusing on Web connectivity to databases with minimal or no result buffering, formatting, or extra programming. We describe how to connect a database to the Web easily. In addition, we offer an enhanced way of viewing and exploring the contents of a database, allowing users to customize their views according to the contents and structure of the data. Current database front-ends typically display database objects in a flat view, making it difficult for users to grasp the contents and structure of their results. Our model narrows the gap between databases and the Web.

The overall objective of this research is to construct a model that accesses different databases easily across the network and generates SQL, forms, and reports across all platforms without requiring the developer to code a complex application, increasing the speed of development.
In addition, using only a Web browser, the end user can retrieve data from remote databases and make the necessary modifications and manipulations using Web-formatted forms and reports, independent of platform, without having to open different applications or learn anything beyond the browser. We introduce a strategic method for generating and constructing SQL queries that enables inexperienced users, unfamiliar with SQL, to build syntactically and semantically valid queries and to understand the retrieved data. Each generated SQL query is validated against the database schema to ensure safe and efficient execution. (Abstract shortened by UMI.)
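The guided query-construction idea described above can be sketched as follows. This is a minimal illustration, not the dissertation's actual system; the schema, table names, and the `build_query` helper are all invented for the example. Identifiers are checked against a known schema and values are bound as parameters, so an inexperienced user cannot produce a syntactically invalid or unsafe statement.

```python
# Hypothetical sketch: schema-validated SQL generation.
# The schema maps each table to its set of known columns.
SCHEMA = {
    "employees": {"id", "name", "salary", "dept_id"},
    "departments": {"id", "name"},
}

def build_query(table, columns, where=None):
    """Assemble a SELECT statement, validating every identifier
    against SCHEMA and binding filter values as parameters."""
    if table not in SCHEMA:
        raise ValueError(f"unknown table: {table}")
    for col in columns:
        if col not in SCHEMA[table]:
            raise ValueError(f"unknown column: {col}")
    sql = f"SELECT {', '.join(columns)} FROM {table}"
    params = []
    if where:  # where: dict of column -> value
        clauses = []
        for col, value in where.items():
            if col not in SCHEMA[table]:
                raise ValueError(f"unknown column: {col}")
            clauses.append(f"{col} = ?")
            params.append(value)
        sql += " WHERE " + " AND ".join(clauses)
    return sql, params
```

The returned statement and parameter list can be handed directly to any DB-API driver, keeping user-supplied values out of the SQL text itself.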
Abstract:
Current reform initiatives recommend that school geometry teaching and learning include the study of three-dimensional geometric objects and provide students with opportunities to use spatial abilities in mathematical tasks. Using Geometer's Sketchpad (GSP), a dynamic and interactive computer program, in conjunction with manipulatives enables students to investigate and explore geometric concepts, especially in a constructivist setting. Research on spatial abilities has focused on visual reasoning to improve visualization skills. This dissertation investigated the hypothesis that connecting visual and analytic reasoning may better improve students' spatial visualization abilities than instruction that makes little or no use of that connection. Data were collected using the Purdue Spatial Visualization Tests (PSVT) administered as a pretest and posttest to a control group and two experimental groups. Sixty-four 10th-grade students in three geometry classrooms participated in the study for 6 weeks. Research questions were answered using statistical procedures: an analysis of covariance was used for the quantitative analysis, while students' visual-analytic processing strategies were described using qualitative methods. The quantitative results indicated significant differences by gender, but not by group. However, when analyzing a subsample of 33 participants with pretest scores below the 50th percentile, males in one of the experimental groups significantly benefited from the treatment. A review of previous research also indicated that students with low visualization skills benefited more than those with higher visualization skills. The qualitative results showed that girls were more sophisticated in their visual-analytic processing strategies for solving three-dimensional tasks.
It is recommended that the teaching and learning of spatial visualization start in middle school, before students' more rigorous mathematics exposure in high school. Treatment durations longer than 6 weeks are also recommended for similar future studies.
Abstract:
Due to rapid advances in computing and sensing technologies, enormous amounts of data are generated every day in various applications. The integration of data mining and data visualization has been widely used to analyze these massive and complex data sets and discover hidden patterns. For both data mining and visualization to be effective, it is important to include visualization techniques in the mining process and to present the discovered patterns in a more comprehensive visual view. In this dissertation, four related problems are studied to explore the integration of data mining and data visualization: dimensionality reduction for visualizing high-dimensional datasets, visualization-based clustering evaluation, interactive document mining, and exploration of multiple clusterings. In particular, we 1) propose an efficient feature selection method (ReliefF + mRMR) for preprocessing high-dimensional datasets; 2) present DClusterE, which integrates cluster validation with user interaction and provides rich visualization tools for examining document clustering results from multiple perspectives; 3) design two interactive document summarization systems that involve users' efforts and generate customized summaries from 2D sentence layouts; and 4) propose a new framework that organizes different input clusterings into a hierarchical tree structure and allows interactive exploration of multiple clustering solutions.
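The mRMR half of the ReliefF + mRMR preprocessing step mentioned above can be illustrated with a small sketch. This is a generic greedy mRMR (minimum redundancy, maximum relevance) selector, not the dissertation's implementation: a plain mutual-information relevance score stands in for the ReliefF component, and the features are assumed discrete.

```python
import numpy as np
from collections import Counter

def mutual_info(x, y):
    """Empirical mutual information between two discrete sequences (in nats)."""
    n = len(x)
    pxy = Counter(zip(x, y))
    px, py = Counter(x), Counter(y)
    mi = 0.0
    for (a, b), c in pxy.items():
        # c/n is the joint probability; px[a]/n and py[b]/n the marginals
        mi += (c / n) * np.log(c * n / (px[a] * py[b]))
    return mi

def mrmr_select(X, y, k):
    """Greedy mRMR: repeatedly pick the feature maximizing
    relevance MI(f, y) minus mean redundancy MI(f, s) over selected s."""
    n_features = X.shape[1]
    relevance = [mutual_info(X[:, j], y) for j in range(n_features)]
    selected = [int(np.argmax(relevance))]  # start with the most relevant
    while len(selected) < k:
        best, best_score = None, -np.inf
        for j in range(n_features):
            if j in selected:
                continue
            redundancy = np.mean([mutual_info(X[:, j], X[:, s]) for s in selected])
            score = relevance[j] - redundancy
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
    return selected
```

A feature that merely duplicates an already-selected one scores poorly despite high relevance, which is the behavior that distinguishes mRMR from ranking by relevance alone.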
Abstract:
With the exponentially increasing demands on GIS data visualization systems, in applications such as urban planning, environment and climate change monitoring, weather simulation, and hydrographic gauging, research on and applications of geospatial vector and raster data visualization have become prevalent. However, current web GIS techniques are suitable only for static vector and raster data with no dynamically overlaid layers. While it is desirable to enable visual exploration of large-scale dynamic vector and raster geospatial data in a web environment, improving the performance between backend datasets and the vector and raster applications remains a challenging technical issue. This dissertation addresses these challenging, previously unimplemented problems: how to provide a large-scale dynamic vector and raster data visualization service, with dynamically overlaid layers, accessible from various client devices through a standard web browser, and how to make that dynamic service as fast as the static one. To accomplish this, a large-scale dynamic vector and raster data visualization geographic information system based on parallel map tiling, together with a comprehensive performance improvement solution, is proposed, designed, and implemented. The components include: quadtree-based indexing and parallel map tiling; the Legend String; vector data visualization with dynamic layer overlaying; vector data time-series visualization; algorithms for vector data rendering, raster data re-projection, elimination of superfluous levels of detail, and vector data gridding and re-grouping; and server-side cluster caching of vector and raster data.
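The quadtree-based tile indexing mentioned above can be illustrated with the standard quadkey encoding used by Bing-style tile schemes; this is a generic sketch, not the dissertation's code. Each key digit selects one of a tile's four children, so a key prefix identifies every enclosing parent tile, which makes quadkeys convenient for hierarchical indexing and caching.

```python
def tile_quadkey(x, y, zoom):
    """Encode a tile's (x, y) address at a given zoom level as a
    quadtree key: one digit (0-3) per level, most significant first."""
    key = []
    for level in range(zoom, 0, -1):
        digit = 0
        mask = 1 << (level - 1)
        if x & mask:
            digit += 1  # right half of the parent
        if y & mask:
            digit += 2  # bottom half of the parent
        key.append(str(digit))
    return "".join(key)

# Tile (3, 5) at zoom 3 encodes to "213"; its parent (1, 2) at zoom 2
# encodes to "21", a prefix of the child's key.
```

Because parent keys are prefixes of child keys, a tile cache or database index can answer "all tiles under this region" with a simple prefix scan.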
Abstract:
It has long been known that vocabulary is essential in the development of reading. Because vocabulary leading to increased comprehension is important, it is necessary to determine strategies for ensuring that the best methods of teaching vocabulary are used to help students make gains in vocabulary leading to reading comprehension. According to the National Reading Panel, multiple strategies that involve active engagement on the part of the student are more effective than the use of just one strategy. The purpose of this study was to determine whether students' use of visualization, student-generated pictures of onset-and-rime-patterned vocabulary, and story read-alouds with discussion would enable diverse first-grade students to increase their vocabulary and comprehension. In addition, this study examined the effect of the multimodal framework of strategies on English learners (ELs). This quasi-experimental study (N = 69) was conducted in four first-grade classrooms in a low socio-economic school. Two treatment classes used a multimodal framework of strategies to learn weekly vocabulary words and comprehension; two comparison classrooms used the traditional method of teaching weekly vocabulary and comprehension. Data sources included Florida Assessments for Instruction in Reading (FAIR) comprehension and vocabulary scores and weekly MacMillan/McGraw-Hill Treasures basal comprehension questions and onset-and-rime vocabulary questions. The treatment had an effect on adjusted FAIR comprehension means by group, with the treatment group (adj. M = 5.14) scoring significantly higher than the comparison group (adj. M = -8.26) on post scores. However, the treatment means did not increase from pre to post, while the comparison means significantly decreased from pre to post as the materials became more challenging.
For the FAIR vocabulary, there was a significant difference by group with the comparison adjusted post mean higher than the treatment's, although both groups significantly increased from pre to post. However, the FAIR vocabulary posttest was not part of the Treasures vocabulary, which was taught using the multimodal framework of strategies. The Treasures vocabulary scores were not significantly different by group on the assessment across the weeks, although the treatment means were higher than those of the comparison group. Continued research is needed in the area of vocabulary and comprehension instructional methods in order to determine strategies to increase diverse, urban students' performance.
Abstract:
The optimization of traffic signal timing parameters provides for efficient operation of traffic along a signalized transportation system. Optimization tools with macroscopic simulation models have been used to determine optimal timing plans, and these plans have in some cases been evaluated and fine-tuned using microscopic simulation tools. A number of studies show inconsistencies between results from optimization tools based on macroscopic simulation and results obtained from microscopic simulation, yet no attempts have been made to determine the reason behind these inconsistencies. This research investigates whether adjusting the parameters of macroscopic simulation models to correspond to calibrated microscopic simulation model parameters can reduce these inconsistencies. The adjusted parameters include platoon dispersion model parameters, saturation flow rates, and cruise speeds. The results show that adjusting cruise speeds and saturation flow rates can significantly improve the optimization/macroscopic simulation results as assessed by microscopic simulation models.
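In TRANSYT-style macroscopic simulation, the platoon dispersion model referenced above is typically Robertson's recursive smoothing model: the downstream flow at each step is a weighted average of the upstream flow shifted by the travel time and the previous downstream flow. A minimal sketch follows; the α and β defaults are conventional illustrative values, not parameters calibrated in this study.

```python
def disperse_platoon(upstream, alpha=0.35, beta=0.8, travel_time=10, horizon=400):
    """Robertson's platoon dispersion model (sketch):
        q_d[t] = F * q_u[t - T] + (1 - F) * q_d[t - 1]
    with T = beta * travel_time and smoothing factor F = 1 / (1 + alpha * T).
    upstream is a list of flows per time step; returns downstream flows."""
    T = max(1, round(beta * travel_time))
    F = 1.0 / (1.0 + alpha * T)
    downstream = [0.0] * horizon
    for t in range(horizon):
        arriving = upstream[t - T] if 0 <= t - T < len(upstream) else 0.0
        prev = downstream[t - 1] if t > 0 else 0.0
        downstream[t] = F * arriving + (1.0 - F) * prev
    return downstream
```

The filter has unit gain, so total flow is conserved over a long horizon while the platoon's peak flattens and spreads, which is the dispersion effect the macroscopic parameters control.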
Abstract:
Archaeologists are often considered frontrunners in employing spatial approaches within the social sciences and humanities, including geospatial technologies such as geographic information systems (GIS) that are now routinely used in archaeology. Since the late 1980s, GIS has mainly been used to support data collection and management as well as spatial analysis and modeling. While fruitful, these efforts have arguably neglected the potential contribution of advanced visualization methods to the generation of broader archaeological knowledge. This paper reviews the use of GIS in archaeology from a geographic visualization (geovisual) perspective and examines how these methods can broaden the scope of archaeological research in an era of more user-friendly cyber-infrastructures. Like most computational databases, GIS do not easily support temporal data. This limitation is particularly problematic in archaeology because processes and events are best understood in space and time. To deal with such shortcomings in existing tools, archaeologists often end up having to reduce the diversity and complexity of archaeological phenomena. Recent developments in geographic visualization begin to address some of these issues, and are pertinent in the globalized world as archaeologists amass vast new bodies of geo-referenced information and work towards integrating them with traditional archaeological data. Greater effort in developing geovisualization and geovisual analytics appropriate for archaeological data can create opportunities to visualize, navigate and assess different sources of information within the larger archaeological community, thus enhancing possibilities for collaborative research and new forms of critical inquiry.
Abstract:
Peer reviewed
Abstract:
Peer reviewed
Abstract:
Thesis digitized by the Direction des bibliothèques de l'Université de Montréal.
Abstract:
The goal of my Ph.D. thesis is to enhance the visualization of the peripheral retina using wide-field optical coherence tomography (OCT) in a clinical setting.
OCT has gained widespread adoption in clinical ophthalmology due to its ability to visualize diseases of the macula and central retina in three dimensions; however, clinical OCT has a limited field-of-view of 30°. There has been increasing interest in obtaining high-resolution images outside of this narrow field-of-view, because three-dimensional imaging of the peripheral retina may prove important in the early detection of neurodegenerative diseases, such as Alzheimer's disease and dementia, and in the monitoring of known ocular diseases, such as diabetic retinopathy, retinal vein occlusions, and choroidal masses.
Before attempting to build a wide-field OCT system, we need to better understand the peripheral optics of the human eye. Shack-Hartmann wavefront sensors are commonly used tools for measuring the optical imperfections of the eye, but their acquisition speed is limited by their underlying camera hardware. The first aim of my thesis research is to create a fast method of ocular wavefront sensing such that we can measure the wavefront aberrations at numerous points across a wide visual field. In order to address aim one, we will develop a sparse Zernike reconstruction technique (SPARZER) that will enable Shack-Hartmann wavefront sensors to use as little as 1/10th of the data that would normally be required for an accurate wavefront reading. If less data needs to be acquired, then we can increase the speed at which wavefronts can be recorded.
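The idea behind recovering a wavefront from a fraction of the sensor data can be sketched in a few lines. This is not SPARZER itself: a plain least-squares fit over a toy polynomial basis stands in for the sparse Zernike reconstruction, and every name and value below is invented for the illustration. The point is that when the wavefront is well described by a handful of basis terms, far fewer samples than sensor lenslets suffice to estimate the coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)

def basis(xy):
    """Toy 2-D polynomial basis standing in for Zernike modes.
    Columns: piston, tip, tilt, defocus-like, astigmatism-like terms."""
    x, y = xy[:, 0], xy[:, 1]
    return np.column_stack([np.ones_like(x), x, y,
                            x**2 + y**2, x**2 - y**2, x * y])

# Simulate a noiseless wavefront sampled at 500 pupil positions.
n = 500
xy = rng.uniform(-1, 1, size=(n, 2))
true_coeffs = np.array([0.1, -0.4, 0.25, 0.8, -0.3, 0.5])
wavefront = basis(xy) @ true_coeffs

# Keep only 10% of the samples, mimicking the sparse-acquisition scheme,
# and recover the coefficients by least squares.
idx = rng.choice(n, size=n // 10, replace=False)
est_coeffs, *_ = np.linalg.lstsq(basis(xy[idx]), wavefront[idx], rcond=None)
```

With 6 unknowns and 50 noiseless samples the system is heavily overdetermined, so the fit recovers the coefficients essentially exactly; the practical gain is the tenfold reduction in data that must be acquired per wavefront reading.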
For my second aim, we will create a sophisticated optical model that reproduces the measured aberrations of the human eye. If we know how the average eye's optics distort light, then we can engineer ophthalmic imaging systems that preemptively cancel inherent ocular aberrations. This invention will help the retinal imaging community to design systems that are capable of acquiring high resolution images across a wide visual field. The proposed model eye is also of interest to the field of vision science as it aids in the study of how anatomy affects visual performance in the peripheral retina.
Using the optical model from aim two, we will design and reduce to practice a clinical OCT system capable of imaging a large (80°) field-of-view with enhanced visualization of the peripheral retina. A key aspect of this third and final aim is to make the imaging system compatible with standard clinical practice. To this end, we will incorporate sensorless adaptive optics to correct the inter- and intra-patient variability in ophthalmic aberrations. Sensorless adaptive optics will improve both the brightness (signal) and clarity (resolution) of features in the peripheral retina without affecting the size of the imaging system.
The proposed work should not only be a noteworthy contribution to the ophthalmic and engineering communities, but should also strengthen our existing collaborations with the Duke Eye Center by advancing their capability to diagnose pathologies of the peripheral retina.
Abstract:
Minimally invasive microsurgery has resulted in improved outcomes for patients. However, operating through a microscope limits depth perception and fixes the visual perspective, which results in a steep learning curve to achieve microsurgical proficiency. We introduce a surgical imaging system employing four-dimensional (live volumetric imaging through time) microscope-integrated optical coherence tomography (4D MIOCT) capable of imaging at up to 10 volumes per second to visualize human microsurgery. A custom stereoscopic heads-up display provides real-time interactive volumetric feedback to the surgeon. We report that 4D MIOCT enhanced suturing accuracy and control of instrument positioning in mock surgical trials involving 17 ophthalmic surgeons. Additionally, 4D MIOCT imaging was performed in 48 human eye surgeries and successfully visualized the pathology of interest in concordance with the preoperative diagnosis in 93% of retinal surgeries and the surgical site of interest in 100% of anterior segment surgeries. In vivo 4D MIOCT imaging revealed sub-surface pathologic structures and instrument-induced lesions that were invisible through the operating microscope during standard surgical maneuvers. In select cases, 4D MIOCT guidance was necessary to resolve such lesions and prevent post-operative complications. Our novel surgical visualization platform achieves surgeon-interactive 4D visualization of live surgery, which could expand the surgeon's capabilities.