815 results for Haptic rendering


Relevance:

10.00%

Publisher:

Abstract:

Current tools for assessing risks associated with mental-health problems require assessors to make high-level judgements based on clinical experience. This paper describes how new technologies can enhance qualitative research methods to identify lower-level cues underlying these judgements, which can be collected by people without a specialist mental-health background. Content analysis of interviews with 46 multidisciplinary mental-health experts exposed the cues and their interrelationships, which were represented by a mind map using software that stores maps as XML. All 46 mind maps were integrated into a single XML knowledge structure and analysed by a Lisp program to generate quantitative information about the numbers of experts associated with each part of it. The knowledge was refined by the experts, using software developed in Flash to record their collective views within the XML itself. These views specified how the XML should be transformed by XSLT, a technology for rendering XML, which resulted in a validated hierarchical knowledge structure associating patient cues with risks. Changing knowledge-elicitation requirements were accommodated by flexible transformations of the XML data using XSLT, which also facilitated the generation of multiple data-gathering tools suiting different assessment circumstances and levels of mental-health knowledge. © 2007 Informa UK Ltd. All rights reserved.
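The merge-and-count step described above (integrating per-expert XML mind maps and counting how many experts mention each cue) can be sketched in a few lines. This is an illustration only, using `xml.etree` rather than the study's Lisp program, and the `cue`/`label` element and attribute names are invented, not the study's actual schema.

```python
# Hypothetical sketch of the merge-and-count step: combine per-expert
# mind maps (XML) into one tally of how many experts mention each cue.
# The "cue" element and "label" attribute are assumed names, not the
# schema actually used in the study.
import xml.etree.ElementTree as ET
from collections import Counter

def count_cues(xml_documents):
    """Count how many expert mind maps contain each cue label."""
    counts = Counter()
    for doc in xml_documents:
        root = ET.fromstring(doc)
        # A cue counts once per expert, however often it recurs in a map.
        labels = {cue.get("label") for cue in root.iter("cue")}
        counts.update(labels)
    return counts

expert_a = '<map><cue label="self-harm"><cue label="isolation"/></cue></map>'
expert_b = '<map><cue label="isolation"/></map>'
print(count_cues([expert_a, expert_b]))
```

Here "isolation" is counted twice (both experts) and "self-harm" once, which is the kind of per-node expert count the Lisp analysis produced.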

Relevance:

10.00%

Publisher:

Abstract:

Spatial objects may not only be perceived visually but also by touch. We report recent experiments investigating to what extent prior object knowledge acquired in either the haptic or visual sensory modality transfers to a subsequent visual learning task. Results indicate that even mental object representations learnt in one sensory modality may attain a multi-modal quality. These findings seem incompatible with picture-based reasoning schemas but leave open the possibility of modality-specific reasoning mechanisms.

Relevance:

10.00%

Publisher:

Abstract:

There are around 285 million visually impaired people worldwide, and around 370,000 people are registered as blind or partially sighted in the UK. Ongoing advances in information technology (IT) are increasing the scope for IT-based mobile assistive technologies to facilitate the independence, safety, and improved quality of life of the visually impaired. Research is being directed at making mobile phones and other handheld devices accessible via our haptic (touch) and audio sensory channels. We review research and innovation within the field of mobile assistive technology for the visually impaired and, in so doing, highlight the need for successful collaboration between clinical expertise, computer science, and domain users to realize fully the potential benefits of such technologies. We initially reflect on research that has been conducted to make mobile phones more accessible to people with vision loss. We then discuss innovative assistive applications designed for the visually impaired that are either delivered via mainstream devices and can be used while in motion (e.g., mobile phones) or are embedded within an environment that may be in motion (e.g., public transport) or within which the user may be in motion (e.g., smart homes). © 2013 Elsevier Inc.

Relevance:

10.00%

Publisher:

Abstract:

Recent advances in InGaN/GaN growth technology have positioned InGaN-based white LEDs to move into general, everyday lighting. Monolithic white LEDs with multiple QWs were first demonstrated by Damilano et al. [1] in 2001. However, several challenges remain before InGaN-based monolithic white LEDs can establish themselves as an alternative to other everyday lighting sources [2,3]. Alongside the key characteristics of luminous efficacy and EQE, the colour rendering index (CRI) and correlated colour temperature (CCT) are important characteristics of these structures [2,4]. The monolithic white structures investigated were similar to those described in [5]: they contained blue and green InGaN multiple QWs, without a short-period superlattice between them, emitting at 440 nm and 530 nm, respectively. Electroluminescence (EL) measurements were performed in CW and pulsed-current modes, using an integrating sphere (Labsphere CDS 600 spectrometer) and a pulse generator (Agilent 8114A). The CCT and green/blue radiant flux ratio were investigated at operating currents from 100 mA to 2 A, using current pulses from 100 ns to 100 μs with a duty cycle varying from 1% to 95%. A strong dependence of the CCT on the duty cycle was demonstrated, with the CCT decreasing by more than a factor of three at high duty cycles (shown at a 300 mA pulse operating current) (Fig. 1). Pulse-width variation was found to have a negligible effect on the CCT (Fig. 1). To account for Joule heating, duty cycles above 1% were treated as an overheated mode. At a 1% duty cycle, the CCT was shown to be tuneable by a factor of three by modulating the input current and pulse width (Fig. 2). It was also demonstrated that the luminous flux can be kept independent of pulse-width variation for a constant pulse current (Fig. 3).
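The link between duty cycle and the overheated mode comes down to average dissipated power, which scales linearly with duty cycle for a fixed pulse current. The sketch below is purely illustrative; the forward-voltage value is a made-up placeholder, not a figure from the study.

```python
# Illustrative only: average dissipated power scales with duty cycle,
# which is why high duty cycles push the device into Joule-heated
# ("overheated") operation. The 3.2 V forward voltage is an invented
# placeholder, not a measured value from the paper.
def average_power(pulse_current_a, forward_voltage_v, duty_cycle):
    """Average electrical power of a pulsed LED drive, in watts."""
    return pulse_current_a * forward_voltage_v * duty_cycle

low = average_power(0.3, 3.2, 0.01)   # 300 mA pulses, 1% duty cycle
high = average_power(0.3, 3.2, 0.95)  # same pulses, 95% duty cycle
print(low, high)  # the 95% case dissipates 95x the average power
```

At equal pulse current, moving from 1% to 95% duty cycle multiplies the average power (and hence the self-heating) by 95, consistent with the strong CCT drop seen at high duty cycles.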

Relevance:

10.00%

Publisher:

Abstract:

This thesis describes advances in the characterisation, calibration and data processing of optical coherence tomography (OCT) systems. Femtosecond (fs) laser inscription was used to produce OCT-phantoms. Transparent materials are generally inert to infrared radiation, but with fs lasers material modification occurs via non-linear processes when the highly focused light source interacts with the material. This modification is confined to the focal volume and is highly reproducible. To select the best inscription parameters, combinations of different inscription parameters were tested, using three fs laser systems with different operating properties, on a variety of materials. This facilitated an understanding of the key characteristics of the produced structures, with the aim of producing viable OCT-phantoms. Finally, OCT-phantoms were successfully designed and fabricated in fused silica, and their use in characterising many properties (resolution, distortion, sensitivity decay, scan linearity) of an OCT system was demonstrated. Quantitative methods were developed to support the characterisation of an OCT system collecting images from phantoms and to improve the quality of the OCT images. The characterisation methods include measurement of the spatially variant resolution (point spread function (PSF) and modulation transfer function (MTF)), sensitivity and distortion. Processing of OCT data is computationally intensive: standard central processing unit (CPU) based processing can take several minutes to a few hours for acquired data, making data processing a significant bottleneck. An alternative is expensive hardware-based processing such as field programmable gate arrays (FPGAs). More recently, however, graphics processing unit (GPU) based data processing methods have been developed to minimize this processing and rendering time.
These processing techniques include standard-processing methods, which comprise a set of algorithms that process the raw interference data obtained by the detector and generate A-scans. The work presented here describes accelerated data-processing and post-processing techniques for OCT systems. The GPU-based processing developed during the PhD was later implemented in a custom-built Fourier domain optical coherence tomography (FD-OCT) system, which currently processes and renders data in real time; its processing throughput is at present limited by the camera capture rate. The OCT-phantoms have been used extensively for the qualitative characterisation and fine-tuning of the operating conditions of the OCT system, and investigations are under way to characterise OCT systems using our phantoms. The work presented in this thesis demonstrates several novel techniques for fabricating OCT-phantoms and for accelerating OCT data processing using GPUs. In the process of developing the phantoms and quantitative methods, a thorough understanding and practical knowledge of OCT and fs laser processing systems was developed. This understanding led to several novel pieces of research that are not only relevant to OCT but of broader importance: for example, an extensive understanding of the properties of fs-inscribed structures will be useful in other photonic applications, such as the making of phase masks, waveguides and microfluidic channels, and the acceleration of data processing with GPUs is also useful in other fields.
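The core of FD-OCT standard processing, turning one detected interference spectrum into an A-scan, can be sketched as a background subtraction followed by an inverse FFT. This is a minimal illustration under simplifying assumptions: the thesis's GPU pipeline, plus steps such as windowing, k-linearisation and dispersion compensation, are omitted.

```python
# Minimal FD-OCT "standard processing" sketch: an A-scan (depth
# profile) is recovered from the spectral interference fringes by
# background subtraction and an inverse FFT. Windowing,
# k-linearisation and dispersion compensation are omitted.
import numpy as np

def a_scan(spectrum, background):
    """Turn one spectral fringe into a depth profile (A-scan)."""
    fringes = spectrum - background              # remove the DC/reference term
    depth_profile = np.abs(np.fft.ifft(fringes))
    return depth_profile[: len(depth_profile) // 2]  # keep positive depths

# Synthetic fringe: a single reflector produces a cosine modulation
# whose frequency encodes its depth.
k = np.arange(1024)
background = np.ones(1024)
spectrum = background + 0.5 * np.cos(2 * np.pi * 40 * k / 1024)
profile = a_scan(spectrum, background)
print(int(np.argmax(profile)))  # reflector appears at depth bin 40
```

In a real system this transform is applied to every captured spectrum, which is why batching it onto a GPU removes the CPU bottleneck described above.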

Relevance:

10.00%

Publisher:

Abstract:

This paper describes a framework for building virtual collections of several digital objects and presenting them in an interactive 3D environment, rendered in a web browser. Using that environment, the website visitor can examine a given collection from a first-person perspective by walking around and inspecting each object in detail by viewing it from any angle. The rendering and visualization of the models is done solely by the web browser with the use of HTML5 and the Three.js JavaScript library, without any additional requirements.

Relevance:

10.00%

Publisher:

Abstract:

The object of this paper is to present the University of Economics – Varna using a 3D model built with 3ds Max. Founded on 14 May 1920, the University of Economics – Varna is a cultural institution with a place and style of its own. With the emergence of three-dimensional modelling, we entered a new stage in the evolution of computer graphics. The main aim is to preserve the university's historical appearance while demonstrating forward thinking and the use of future-oriented approaches.

Relevance:

10.00%

Publisher:

Abstract:

Many practical routing algorithms are heuristic, ad hoc and centralized, rendering generic and optimal path configurations difficult to obtain. Here we study a scenario whereby selected nodes in a given network communicate with fixed routers, and employ statistical-physics methods to obtain optimal routing solutions subject to a generic cost. A distributive message-passing algorithm capable of optimizing the path configuration in real instances is devised on the basis of the analytical derivation, and is greatly simplified by expanding the cost function around the optimized flow. Good algorithmic convergence is observed in most parameter regimes. By applying the algorithm, we study and compare the pros and cons of balanced traffic configurations against consolidated traffic, which has important implications for practical communication and transportation networks. Interesting macroscopic phenomena are observed in the optimized states as an interplay between the communication density and the cost functions used. © 2013 IEEE.
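To make "distributive message passing" concrete, here is a toy min-sum sketch in which each node repeatedly tells its neighbours the cheapest known cost of reaching a fixed router. This is only a distance-vector caricature of the idea under an additive link cost; the paper's statistical-physics derivation and expansion around the optimized flow are well beyond this sketch, and the example network is invented.

```python
# Toy min-sum message passing: each node repeatedly updates its
# cost-to-router from its neighbours' current estimates. A caricature
# of distributive routing, not the algorithm derived in the paper.
def route_costs(links, router, rounds=10):
    """links: {node: {neighbour: link_cost}}; returns cost-to-router."""
    cost = {node: float("inf") for node in links}
    cost[router] = 0.0
    for _ in range(rounds):  # information propagates hop by hop
        for node, neighbours in links.items():
            if node == router:
                continue
            cost[node] = min(c + cost[n] for n, c in neighbours.items())
    return cost

links = {
    "A": {"B": 1.0, "C": 4.0},
    "B": {"A": 1.0, "C": 1.0},
    "C": {"A": 4.0, "B": 1.0, "R": 1.0},
    "R": {"C": 1.0},
}
print(route_costs(links, "R"))  # A routes via B and C, cost 3
```

Each update uses only locally exchanged messages, which is what makes such schemes attractive against centralized, ad hoc routing.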

Relevance:

10.00%

Publisher:

Abstract:

Cooperative greedy pursuit strategies are considered for approximating a signal partition subject to a global constraint on sparsity. The approach aims to produce a high-quality sparse approximation of the whole signal using highly coherent redundant dictionaries. The cooperation takes place by ranking the partition units for their sequential stepwise approximation, and is realized by means of (i) forward steps for upgrading an approximation and/or (ii) backward steps for the corresponding downgrading. The advantage of the strategy is illustrated by the approximation of music signals using redundant trigonometric dictionaries. In addition to rendering stunning improvements in sparsity with respect to the concomitant trigonometric basis, these dictionaries enable a fast implementation of the approach via the Fast Fourier Transform.
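The forward step of any greedy pursuit can be illustrated with plain matching pursuit: pick the dictionary atom most correlated with the current residual and subtract its contribution. This minimal sketch shows only that stepwise idea on a tiny invented dictionary; the paper's cooperative forward/backward strategy over partition units, and its FFT-based implementation, are far richer.

```python
# Minimal matching-pursuit sketch (forward steps only): greedily pick
# the unit-norm atom with the largest inner product against the
# residual and subtract its contribution. The dictionary and signal
# below are invented for illustration.
import math

def matching_pursuit(signal, atoms, steps):
    """atoms: list of unit-norm vectors; returns (residual, coefficients)."""
    residual = list(signal)
    coeffs = [0.0] * len(atoms)
    for _ in range(steps):
        # forward step: atom most correlated with the residual
        best = max(range(len(atoms)), key=lambda i: abs(
            sum(a * r for a, r in zip(atoms[i], residual))))
        c = sum(a * r for a, r in zip(atoms[best], residual))
        coeffs[best] += c
        residual = [r - c * a for r, a in zip(residual, atoms[best])]
    return residual, coeffs

s = 0.5 ** 0.5  # 1/sqrt(2)
atoms = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [s, s, 0.0]]  # redundant
signal = [3.0, 3.0, 0.0]
residual, coeffs = matching_pursuit(signal, atoms, steps=1)
print(coeffs)
```

Because the redundant third atom is aligned with the signal, one forward step already drives the residual to (numerically) zero, whereas the orthonormal pair alone would need two coefficients: the sparsity gain from a coherent, redundant dictionary in miniature.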

Relevance:

10.00%

Publisher:

Abstract:

Relevant carbon-based materials, namely home-made carbon-silica hybrids, commercial activated carbon, and nanostructured multi-walled carbon nanotubes (MWCNT), were tested in the oxidative dehydrogenation of ethylbenzene (EB). Special attention was given to the reaction conditions, using a relatively concentrated EB feed (10 vol.% EB) and a limited excess of O2 (O2:EB = 0.6), in order to work at full oxygen conversion and thereby avoid O2 in the downstream processing and recycle streams. The temperature was varied between 425 and 475 °C, about 150-200 °C lower than that of the commercial steam dehydrogenation process. Stability was evaluated from runs of 60 h time on stream. Under the applied reaction conditions, all the carbon-based materials appear stable over the first 15 h on stream; the effect of gasification/burning becomes significant only after this period, when most of the materials decompose completely. The carbon of the hybrids decomposes completely, rendering the bare silica matrix, and the activated-carbon bed is fully consumed. The nanostructured MWCNT is the most stable: its structure resists the demanding reaction conditions, showing an EB conversion of ∼30% (though deactivating) with a steady selectivity of ∼80%. The catalyst stability under the ODH reaction conditions is predicted from the apparent activation energies of combustion. © 2014 Elsevier Ltd. All rights reserved.
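The conversion and selectivity figures quoted for the MWCNT catalyst combine into a single-pass yield by the standard definition, yield = conversion × selectivity. The snippet below is only that piece of arithmetic, using the approximate values from the abstract; no kinetic data are implied.

```python
# Illustrative arithmetic: single-pass yield of the desired product
# (styrene, for EB oxidative dehydrogenation) is conversion times
# selectivity. The ~30% and ~80% values are the approximate figures
# quoted for the MWCNT catalyst in the abstract.
def single_pass_yield(conversion, selectivity):
    """Yield of desired product = conversion x selectivity."""
    return conversion * selectivity

print(single_pass_yield(0.30, 0.80))  # ~24% styrene yield per pass
```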

Relevance:

10.00%

Publisher:

Abstract:

This project studied the frequency of water contamination at the source, during transportation, and at home, to determine the causes of contamination and its impact on the health of children aged 0 to 5 years. The methods used were construction of the infrastructure for three sources of potable water, administration of a questionnaire about socioeconomic status and sanitation behavior, anthropometric measurement of the children, and analysis of water and feces. The contamination, first thought to be only a function of rainfall, turned out to be a very complex phenomenon. Water in homes was contaminated (43.4%) with more than 1100 total coliforms/100 ml owing to the use of unclean utensils to transport and store water; this socio-economic and cultural problem should be addressed with health education about sanitation. The latrines (found in 43.8% of families) presented a double-edged problem. The extremely high population density reduced the surface area of land per family, which resulted in a severe nutritional deficit (15% of the children) affecting mainly young children and rendering them more susceptible to diarrhea (three episodes/child/year).

Relevance:

10.00%

Publisher:

Abstract:

3D geographic information systems (GIS) are data- and computation-intensive in nature. Internet users are usually equipped with low-end personal computers and network connections of limited bandwidth, so data reduction and performance optimization techniques are of critical importance in quality-of-service (QoS) management for online 3D GIS. In this research, QoS management issues in distributed 3D GIS presentation were studied in order to develop 3D TerraFly, an interactive 3D GIS that supports high-quality online terrain visualization and navigation. To tackle the QoS management challenges, a multi-resolution rendering model, adaptive level-of-detail (LOD) control and mesh simplification algorithms were proposed to effectively reduce the terrain model complexity. The rendering model is adaptively decomposed into sub-regions of up to three detail levels according to viewing distance and other dynamic quality measurements. The mesh simplification algorithm was designed as a hybrid that combines edge straightening and quad-tree compression to reduce the mesh complexity by removing geometrically redundant vertices; its main advantage is that the grid mesh can be processed directly in parallel without triangulation overhead. Algorithms facilitating remote access and distributed processing of volumetric GIS data, such as data replication, directory service, request scheduling, and predictive data retrieval and caching, were also proposed. A prototype of the proposed 3D TerraFly implemented in this research demonstrates the effectiveness of the proposed QoS management framework in handling interactive online 3D GIS. The system implementation details and future directions of this research are also addressed in this thesis.
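The "edge straightening" half of the hybrid simplification can be sketched as a redundancy test: a grid vertex that lies on the straight line between its neighbours carries no geometric detail and can be dropped. The sketch below applies this to a single polyline of (x, y, z) points; the thesis's algorithm additionally uses quad-tree compression and adaptive LOD, which are not shown, and the sample ridge is invented.

```python
# Sketch of the "edge straightening" idea: drop vertices that are
# collinear with their neighbours, since they add no geometric detail.
# Shown on one polyline; the thesis's hybrid algorithm also applies
# quad-tree compression and LOD control.
def straighten(points, tol=1e-9):
    """Remove collinear interior vertices from a polyline of (x, y, z)."""
    kept = [points[0]]
    for prev, cur, nxt in zip(points, points[1:], points[2:]):
        # cur is redundant if prev->cur is parallel to prev->nxt
        v1 = [c - p for p, c in zip(prev, cur)]
        v2 = [n - p for p, n in zip(prev, nxt)]
        cross = [v1[1] * v2[2] - v1[2] * v2[1],
                 v1[2] * v2[0] - v1[0] * v2[2],
                 v1[0] * v2[1] - v1[1] * v2[0]]
        if max(abs(c) for c in cross) > tol:
            kept.append(cur)
    kept.append(points[-1])
    return kept

ridge = [(0, 0, 0), (1, 0, 0), (2, 0, 0), (3, 0, 1)]
print(straighten(ridge))  # the flat middle vertex (1, 0, 0) is removed
```

Because the test is purely local, rows of a grid mesh can be straightened independently, which is the property that allows parallel processing without triangulation overhead.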

Relevance:

10.00%

Publisher:

Abstract:

The impact of eliminating extraneous sound and light on students' achievement was investigated under four conditions: light and sound controlled, sound only controlled, light only controlled, and neither light nor sound controlled. Group, age and gender were the control variables. Four randomly selected groups of high school freshmen with different backgrounds participated in the study. Academic achievement was the dependent variable, measured on a pretest, a posttest and a post-posttest, each separated by an interval of 15 days. ANOVA was used to test the various hypotheses related to the impact of eliminating sound and light on student learning. Independent-sample t tests indicated a significant effect of gender, while age was non-significant. Follow-up analysis indicated that sound and light are not potential sources of extraneous load when tested individually; however, their combined effect appears to be a potential source of extrinsic load. The findings revealed that the performance of the sound-and-light-controlled group was greater during the posttest and post-posttest, and that the overall performance of boys was greater than that of girls. Results indicated a significant interaction effect between group and gender for treatment subjects, although gender alone was non-significant. The group-by-age interaction was not significant, and age alone was non-significant in the posttest and post-posttest. Based on the results obtained, sound and light combined seem to be potential sources of extraneous load in this type of learning environment, supporting previous research on the effect of sound and light on learning. The findings of this study show that extraneous sound and light have an impact on learning, and they can be used to design better learning environments.
Such environments can be achieved with electric lighting and sound systems that provide optimal color rendering, low glare, low flicker, and low noise and reverberation. These environments will help people avoid unwanted distraction, drowsiness, and photosensitive behavior.

Relevance:

10.00%

Publisher:

Abstract:

Recent intervention efforts to promote positive identity in troubled adolescents have begun to draw on the potential for integrating the self-construction and self-discovery perspectives in conceptualizing identity processes, as well as for integrating quantitative and qualitative data-analytic strategies. This study reports an investigation of the Changing Lives Program (CLP) using an Outcome Mediation (OM) evaluation model, an integrated model for evaluating targets of intervention, theoretically framed by a Self-Transformative Model of Identity Development (STM), a proposed integration of self-discovery and self-construction identity processes. The study also used a Relational Data Analysis (RDA) integration of quantitative and qualitative analysis strategies and a structural equation modeling (SEM) approach to construct and evaluate the hypothesized OM/STM model. The CLP is a community-supported positive youth development intervention targeting multi-problem youth in alternative high schools in the Miami-Dade County Public Schools (M-DCPS). The 259 participants for this study were drawn from the CLP's archival data file. The model evaluated in this study used three indices of core identity processes, (1) personal expressiveness, (2) identity conflict resolution, and (3) informational identity style, conceptualized as mediators of the effects of participation in the CLP on change in two qualitative outcome indices of participants' sense of self and identity. Findings indicated that the model fit the data (χ2(10) = 3.638, p = .96; RMSEA = .00; CFI = 1.00; WRMR = .299). The pattern of findings supported the use of the STM in conceptualizing identity processes and provided support for the OM design.
The findings also suggest the need for methods capable of detecting and rendering unique, sample-specific free-response data, to increase the likelihood of identifying emergent core developmental research concepts and constructs in studies of intervention/developmental change over time in ways not possible using fixed-response methods alone.

Relevance:

10.00%

Publisher:

Abstract:

Three-dimensional (3-D) imaging is vital in computer-assisted surgical planning, including minimally invasive surgery, targeted drug delivery, and tumor resection. Selective Internal Radiation Therapy (SIRT) is a liver-directed radiation therapy for the treatment of liver cancer. Accurate calculation of the anatomical liver and tumor volumes is essential for determining the tumor-to-normal-liver ratio and for calculating the dose of Y-90 microspheres that will result in a high concentration of the radiation in the tumor region as compared to nearby healthy tissue. Present manual techniques for segmenting the liver from Computed Tomography (CT) tend to be tedious and greatly dependent on the skill of the technician or doctor performing the task. This dissertation presents the development and implementation of a fully integrated algorithm for 3-D liver and tumor segmentation from tri-phase CT that yields highly accurate estimates of the respective volumes of the liver and tumor(s). The algorithm as designed requires minimal human intervention without compromising the accuracy of the segmentation results. Embedded within it is an effective method for extracting the blood vessels that feed the tumor(s), in order to plan the appropriate treatment effectively. Segmentation of the liver achieved an accuracy in excess of 95% in estimating liver volumes in 20 datasets in comparison with the manual gold-standard volumes; in a similar comparison, tumor segmentation exhibited an accuracy of 86% in estimating tumor volume(s). Qualitative results demonstrated the effectiveness of the blood vessel segmentation algorithm in extracting and rendering the vasculature structure of the liver. The parallel computing process, using a single workstation, showed a 78% gain. Statistical analysis of whether manual initialization has any impact on accuracy showed that the results are independent of user initialization.
The dissertation thus provides a complete 3-D solution for liver cancer treatment planning, with the opportunity to extract, visualize and quantify the statistics needed for liver cancer treatment. Since SIRT requires a highly accurate calculation of the liver and tumor volumes, this new method provides the effective and computationally efficient process required by such challenging clinical requirements.
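Once liver and tumor masks are segmented, the volume step reduces to counting voxels and multiplying by the per-voxel volume, from which the tumor-to-normal-liver ratio follows. The toy sketch below shows only that arithmetic; the mask values and voxel spacing are invented for illustration and are not data from the dissertation.

```python
# Toy version of the volume step: volumes follow from voxel counts
# times the per-voxel volume, and the tumor-to-normal-liver ratio
# follows from those. Masks and spacing below are invented.
def volume_ml(mask, voxel_mm3):
    """Volume of a binary segmentation mask, in millilitres."""
    voxels = sum(sum(row) for slice_ in mask for row in slice_)
    return voxels * voxel_mm3 / 1000.0  # 1 ml = 1000 mm^3

liver = [[[1, 1], [1, 1]], [[1, 1], [1, 0]]]   # 7 liver voxels
tumor = [[[0, 0], [0, 1]], [[0, 0], [0, 0]]]   # 1 tumor voxel (inside liver)
voxel_mm3 = 0.7 * 0.7 * 5.0                    # assumed CT voxel spacing

lv = volume_ml(liver, voxel_mm3)
tv = volume_ml(tumor, voxel_mm3)
print(tv / (lv - tv))  # tumor-to-normal-liver ratio
```

In practice the same counting is applied to the full-resolution CT masks, which is why the segmentation accuracies reported above translate directly into the accuracy of the Y-90 dose calculation.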