889 results for user interface development


Relevance:

90.00%

Publisher:

Abstract:

Interactive intention understanding is important for Pen-based User Interfaces (PUIs). Much work on this topic has been reported, focusing on handwriting or sketching recognition algorithms at the lexical layer, but these algorithms cannot fully solve the intention understanding problem, nor can they give pen-based software high usability. Hence, this paper presents a scenario-based interactive intention understanding framework that simulates human cognitive mechanisms and cognitive habits. By providing an understanding environment that supports the framework, it can be applied to practical PUI systems. An evaluation of the Scientific Training Management System for the Chinese National Diving Team shows that the framework is effective in improving the system's usability and enhancing its intention understanding capacity.
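
As a rough illustration of how a scenario layer can disambiguate what a lexical recognizer alone cannot, consider the following sketch; the classes, priors, and scores are hypothetical, since the paper does not publish its framework's API:

```python
# Minimal sketch of scenario-based intention disambiguation (assumed design).
# A lexical recognizer alone returns candidate symbols; the active scenario
# supplies the context needed to pick the intended one.

from dataclasses import dataclass

@dataclass
class Candidate:
    symbol: str        # e.g. "circle" from a sketch recognizer
    confidence: float  # lexical-layer recognition score

# Hypothetical scenario priors: how plausible each symbol is in each scenario.
SCENARIO_PRIORS = {
    "diving_score_entry": {"digit": 0.8, "circle": 0.1, "arrow": 0.1},
    "training_plan_markup": {"digit": 0.1, "circle": 0.4, "arrow": 0.5},
}

def understand(candidates: list[Candidate], scenario: str) -> str:
    """Combine lexical confidence with scenario context to choose an intent."""
    priors = SCENARIO_PRIORS[scenario]
    best = max(candidates, key=lambda c: c.confidence * priors.get(c.symbol, 0.01))
    return best.symbol

# A lexical recognizer may rank "circle" first, but in a score-entry
# scenario the same stroke is far more likely to be a digit.
print(understand([Candidate("circle", 0.55), Candidate("digit", 0.45)],
                 "diving_score_entry"))   # -> "digit"
```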

Relevance:

90.00%

Publisher:

Abstract:

Speech is one of the most efficient and natural means of communication in everyday life, yet to date it has seen relatively little application in computing. In recent years, the emergence of ubiquitous computing and portable computers has created a renewed, urgent demand for speech user interfaces, while advances in speech recognition and synthesis provide the technical foundation for implementing them. Drawing on example speech-interface systems from China and abroad, and on the strengths and limitations of speech as a unique communication medium, this paper summarizes the environments in which speech user interfaces are suitable and the guiding principles for their design, and offers an outlook on the development of speech interfaces.

Relevance:

90.00%

Publisher:

Abstract:

Visual PBAP (pen-based application) Creator, a rapid development tool for pen-based interface software, can quickly design and build software tailored to users' individual requirements, shortening the development cycle. On one hand, Visual PBAP Creator helps developers and users pin down requirements through scenario and UI editing; on the other, automatic code generation shortens the development cycle. Visual PBAP Creator targets a pen-based operating platform and adopts a scenario-based design method: design results are stored as XML documents, which are then parsed to generate C code. Practice shows that Visual PBAP Creator improves the development efficiency of pen-based interface software.
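
The XML-to-C pipeline is concrete enough to sketch. Below is a minimal illustration assuming a hypothetical design schema and a hypothetical C runtime API (create_button, create_ink_canvas), since the paper does not publish Visual PBAP Creator's document format:

```python
# Minimal sketch of the XML-to-C generation step (assumed schema and API).
# A scenario/UI design saved as XML is parsed and emitted as C calls.

import xml.etree.ElementTree as ET

DESIGN = """<scenario name="note_taking">
  <widget type="ink_canvas" id="canvas1"/>
  <widget type="button" id="save_btn" label="Save"/>
</scenario>"""

def generate_c(xml_text: str) -> str:
    root = ET.fromstring(xml_text)
    lines = [f"/* generated from scenario '{root.get('name')}' */",
             "void build_ui(void) {"]
    for w in root.findall("widget"):
        # Hypothetical runtime API of the pen-based platform.
        args = f'"{w.get("id")}"'
        if w.get("label"):
            args += f', "{w.get("label")}"'
        lines.append(f'    create_{w.get("type")}({args});')
    lines.append("}")
    return "\n".join(lines)

print(generate_c(DESIGN))
```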

Relevance:

90.00%

Publisher:

Abstract:

Crosshole seismic tomography has been widely studied and applied in resource and engineering exploration because of its special observation geometry and its better resolution than conventional seismic exploration. This thesis presents the theory and methods of crosshole seismic tomography; building on previous studies, it investigates the initial velocity model and ray-tracing methods and develops three-dimensional tomography software. If the paths from transmitters to receivers are straight, all cells that a given ray passes through are assigned the same velocity: the cells each ray passes through are recorded, the rays passing through each cell are counted, and the average velocity of the rays crossing a cell is taken as the cell velocity. An initial node velocity model is built analogously: the velocities of all cells belonging to a given node are summed, the number of such cells is counted, and the ratio of the velocity sum to the cell count is taken as the node velocity. Inversion from this initial node velocity model gives better results than inversion from the average (cell) velocity model. Ray bending and Shortest Path for Rays (SPR) each have their own shortcomings and limitations. Using the crooked rays obtained from SPR, rather than straight lines, as the starting point for ray bending not only avoids the bending method converging to a local-minimum travel-time path but also resolves the non-smooth rays produced by SPR; the hybrid method's computation cost is considerable, roughly equal to that of SPR alone. The Delphi development tool, based on the Object Pascal language standard, has the advantage of being object-oriented. TDTOM (Three-Dimensional Tomography) was redeveloped in Delphi from its DOS version, and the inversion component was improved, yielding faster convergence. TDTOM performs velocity tomography from first-arrival travel times of seismic waves and offers a friendly user interface and convenient operation. TDTOM was used to reconstruct the velocity image for a set of crosshole data from the Karamay Oil Field, and a geological interpretation is given by comparing the inversion results of the different ray-tracing methods: high-velocity zones correspond to the cover of the oil reservoir, and low-velocity zones correspond to the reservoir or the steam-flooded layer.
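
The initial-model construction described above can be sketched directly. This is a minimal illustration under the straight-ray assumption; the array layout and function names are mine, not the thesis's:

```python
# Sketch of the straight-ray initial model: cell velocity = mean velocity of
# the rays crossing it; node velocity = mean velocity of the cells sharing
# that node (the ratio of their velocity sum to their number).

import numpy as np

def cell_velocities(ray_cells, ray_velocities, n_cells):
    """ray_cells[i] lists the cells ray i crosses (straight source-receiver
    path); ray_velocities[i] = path length / travel time for ray i."""
    vel_sum = np.zeros(n_cells)
    count = np.zeros(n_cells)
    for cells, v in zip(ray_cells, ray_velocities):
        for c in cells:
            vel_sum[c] += v
            count[c] += 1
    return np.divide(vel_sum, count, out=np.zeros(n_cells), where=count > 0)

def node_velocities(node_cells, v_cell):
    """node_cells[j] lists the cells that share node j."""
    return np.array([v_cell[cells].mean() for cells in node_cells])

# Two straight rays: ray 0 crosses cells 0 and 1, ray 1 crosses cells 1 and 2.
v_cell = cell_velocities([[0, 1], [1, 2]], [2500.0, 2700.0], n_cells=3)
v_node = node_velocities([[0, 1], [1, 2]], v_cell)
print(v_cell)   # [2500. 2600. 2700.]
print(v_node)   # [2550. 2650.]
```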

Relevance:

90.00%

Publisher:

Abstract:

Population research is a frontier area of both domestic and international concern, especially research on spatial visualization and the design of geo-visualization systems, which provides a sound basis for understanding and analyzing regional differences in population distribution and their spatial patterns. With the development of GIS, geo-visualization theory plays an increasingly important role in many research fields, especially the visualization of population information, where significant achievements have been made in recent years. Nevertheless, current research pays little attention to the design of statistical geo-visualization systems for population information. This paper explores design theories and methodologies for such a system, focusing on the framework, methodologies, and techniques for system design and construction. The purpose of the research is to develop a platform for a population atlas by integrating the research group's existing proprietary statistical mapping software. As a modern tool, the system provides a spatial visual environment for users to analyze the characteristics of population distribution and the interrelations of population components. First, the paper discusses the necessity of geo-visualization for population information and identifies the key issues in statistical geo-visualization system design, based on an analysis of domestic and international trends. Second, the design of the geo-visualization system for population, including its structure, functionality, modules, and user interface, is studied on the basis of geo-visualization theory and technology. The system design is divided into three parts: a support layer, a technical layer, and a user layer. The support layer is the basic operational module and main body of the system. The technical layer is the core of the system, supported by database and function modules. The database module comprises the integrated population database (spatial data, attribute data, and geographical feature information), the cartographic symbol library, the color library, and the statistical analysis models. The function module consists of a thematic map maker component, a statistical graph maker component, a database management component, and a statistical analysis component. The user layer is an integrated platform providing a visual interface for users to query, analyze, and manage the statistical data and the electronic map. On this basis, China's E-atlas for Population was designed and developed by integrating the fifth national census data with 1:4,000,000-scale spatial data. The atlas illustrates the current state of China's population through about 200 thematic maps in 10 categories (environment, population distribution, sex and age, migration, ethnicity, family and marriage, birth, education, employment, housing). As a scientific reference tool, China's E-atlas for Population has received high evaluations since its publication in early 2005. Finally, the paper presents an in-depth analysis of the sex ratio in China, showing how the system's functions can be used to analyze a specific population problem and to perform data mining. The analysis shows that: 1. the sex ratio has increased in many regions since the fourth census in 1990, except in the cities of the eastern region, and high sex ratios are concentrated in hilly and low mountain areas with high illiteracy and poverty rates; 2. the statistical geo-visualization system is a powerful tool for handling population information, reflecting the regional differences and variations of China's population and indicating the interrelations between population and other environmental factors. Although the author attempts to present an integrated design framework for the statistical geo-visualization system, many problems remain to be resolved as geo-visualization research develops.
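
A rough structural sketch of the three-layer decomposition may help; the component names follow the text, while the classes and method signatures are my assumptions, not the system's actual code:

```python
# Sketch of the layered design: a user-layer call flows through a technical-
# layer function component that draws on the database module.

class DatabaseModule:
    """Technical layer: integrated population database plus the symbol
    library, color library, and statistical analysis models."""
    def query(self, indicator: str, region: str):
        ...  # e.g. sex ratio by county from census attribute tables

class ThematicMapMaker:
    """Technical-layer function component: renders one indicator as a map."""
    def __init__(self, db: DatabaseModule):
        self.db = db
    def choropleth(self, indicator: str, region: str = "China"):
        data = self.db.query(indicator, region)
        ...  # join attributes to spatial features, classify, symbolize

class UserLayer:
    """Integrated platform: visual interface for query, analysis, management."""
    def __init__(self):
        self.maps = ThematicMapMaker(DatabaseModule())
    def show(self, indicator: str):
        return self.maps.choropleth(indicator)

UserLayer().show("sex_ratio")   # e.g. one of the atlas's ~200 thematic maps
```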

Relevance:

90.00%

Publisher:

Abstract:

This report contains the proceedings of the Fourth Phantom Users Group Workshop: 17 papers presented October 9-12, 1999, at MIT Endicott House in Dedham, Massachusetts. The workshop included sessions on Tools for Programmers, Dynamic Environments, Perception and Cognition, Haptic Connections, Collision Detection / Collision Response, Medical and Seismic Applications, and Haptics Going Mainstream. The proceedings include papers covering a variety of subjects in computer haptics, including rendering, contact determination, development libraries, and applications in medicine, path planning, data interaction, and training.

Relevance:

90.00%

Publisher:

Abstract:

We introduce "BU-MIA," a Medical Image Analysis system that integrates various advanced chest image analysis methods for detection, estimation, segmentation, and registration. BU-MIA evaluates repeated computed tomography (CT) scans of the same patient to facilitate the identification and evaluation of pulmonary nodules for interval growth. It provides a user-friendly graphical user interface with a number of interaction tools for the development, evaluation, and validation of chest image analysis methods. The structures that BU-MIA processes include the thorax, lungs, and trachea; pulmonary structures such as lobes, fissures, nodules, and vessels; and bones such as the sternum, vertebrae, and ribs.
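
Interval growth between two registered scans is commonly summarized as volume doubling time; the sketch below shows that standard computation (whether BU-MIA reports exactly this metric is my assumption):

```python
# Sketch of an interval-growth measure: volume doubling time under an
# exponential growth model, VDT = dt * ln(2) / ln(V2 / V1). Nodule volumes
# would come from segmentations of two registered CT scans.

import math

def doubling_time_days(v1_mm3: float, v2_mm3: float, interval_days: float) -> float:
    """Volume doubling time; infinite for a stable or shrinking nodule."""
    if v2_mm3 <= v1_mm3:
        return math.inf
    return interval_days * math.log(2) / math.log(v2_mm3 / v1_mm3)

# Nodule grown from 120 mm^3 to 180 mm^3 over a 90-day scan interval.
print(round(doubling_time_days(120.0, 180.0, 90.0)))   # ~154 days
```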

Relevance:

90.00%

Publisher:

Abstract:

Remote sensing airborne hyperspectral data are routinely used for applications including algorithm development for satellite sensors, environmental monitoring, and atmospheric studies. Single flight lines of airborne hyperspectral data are often tens of gigabytes in size, which means a single aircraft can collect terabytes of remotely sensed hyperspectral data in a single year. Before these data can be used for scientific analyses, they need to be radiometrically calibrated, synchronised with the aircraft's position and attitude, and then geocorrected. To enable efficient processing of these large datasets, the UK Airborne Research and Survey Facility has recently developed a software suite, the Airborne Processing Library (APL), for processing airborne hyperspectral data acquired from the Specim AISA Eagle and Hawk instruments. The APL toolbox allows users to radiometrically calibrate, geocorrect, reproject, and resample airborne data. Each stage of the toolbox outputs data in the common band-interleaved-by-line (BIL) format, which allows its integration with other standard remote sensing software packages. APL was developed to be user-friendly and suitable both for use on a workstation PC and for the facility's automated processing; to this end, APL runs under both Windows and Linux, on a single desktop machine or through a Grid engine. A graphical user interface also exists. In this paper we describe the Airborne Processing Library software, its algorithms, and its approach. We present example results from using APL with an AISA Eagle sensor and assess its spatial accuracy using data from multiple flight lines collected during a campaign in 2008, together with in situ surveyed ground control points.
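
Because every stage reads and writes BIL, APL stages can be chained from a script. The sketch below uses the suite's stage names (aplcal, aplcorr, apltran, aplmap); the command-line flags shown are illustrative assumptions, so consult the APL documentation for the real options:

```python
# Sketch of chaining APL-style stages from Python. Each stage reads and
# writes BIL files, so intermediate products can be inspected in other
# remote sensing packages between stages.

import subprocess

def run(stage: str, args: list[str]) -> None:
    subprocess.run([stage, *args], check=True)

raw = "eagle_line01_raw.bil"
run("aplcal",  ["--input", raw, "--output", "line01_rad.bil"])    # radiometric calibration
run("aplcorr", ["--input", "line01_rad.bil", "--nav", "line01.nav",
                "--output", "line01_igm.bil"])                    # geocorrection
run("apltran", ["--input", "line01_igm.bil", "--epsg", "27700",
                "--output", "line01_proj.bil"])                   # reprojection
run("aplmap",  ["--input", "line01_proj.bil", "--pixelsize", "2",
                "--output", "line01_mapped.bil"])                 # resampling to a grid
```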

Relevance:

90.00%

Publisher:

Abstract:

Here we describe the development of MALTS, a generalized software tool that simulates Lorentz transmission electron microscopy (LTEM) contrast of magnetic nanostructures. Complex magnetic nanostructures typically have multiple stable domain structures. MALTS works in conjunction with the open-access micromagnetic packages Object Oriented Micromagnetic Framework (OOMMF) or MuMax: magnetically stable trial magnetization states of the object of interest are input into MALTS, and simulated LTEM images are output. MALTS computes the magnetic and electric phases accrued by the transmitted electrons via the Aharonov-Bohm expressions. Transfer and envelope functions simulate the progression of the electron wave through the microscope lenses, and the final contrast image due to these effects is determined by Fourier optics. Similar approaches have been used previously for simulations of specific cases of LTEM contrast; the novelty here is the integration with micromagnetic codes via a simple user interface, enabling the computation of contrast from any structure. The output from MALTS is in good agreement with both experimental data and published LTEM simulations. A widely available generalized code for the analysis of Lorentz contrast is a much-needed step towards the use of LTEM as a standardized laboratory technique.
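
The phase computation follows the standard Aharonov-Bohm form for an electron transmitted along the optic axis z; written out below (the notation follows the general LTEM literature, not necessarily MALTS's internal conventions):

```latex
% Phase accrued by an electron transmitted along z: an electric term from
% the electrostatic potential V and a magnetic term from the component of
% the vector potential A along the beam direction.
\[
  \phi(x,y) \;=\; \underbrace{C_E \int V(x,y,z)\,\mathrm{d}z}_{\text{electric}}
  \;-\; \underbrace{\frac{e}{\hbar} \int A_z(x,y,z)\,\mathrm{d}z}_{\text{magnetic}}
\]
% C_E is the beam-energy-dependent interaction constant. The magnetic term
% equals -pi/Phi_0 times the enclosed magnetic flux, with Phi_0 = h/(2e).
```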

Relevance:

90.00%

Publisher:

Abstract:

This paper describes an end-user model for a domestic pervasive computing platform formed by regular home objects. The platform does not rely on pre-planned infrastructure; instead, it exploits objects that are already available in the home and exposes their joint sensing, actuating, and computing capabilities to home automation applications. We advocate an incremental process of platform formation and introduce tangible, object-like artifacts for representing important platform functions. One of these artifacts, the application pill, is a tiny object with a minimal user interface, used to carry an application, to start and stop its execution, and to provide hints about its operational status. We also emphasize streamlining the user's interaction with the platform: the user engages any UI-capable object of his choice to configure applications, while applications issue notifications and alerts through whichever available objects can serve that purpose. Finally, the paper briefly describes an actual implementation of the presented end-user model. © (2010) by International Academy, Research, and Industry Association (IARIA).
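
The routing rule implied by "whichever available objects can serve that purpose" can be sketched as a simple capability match; the object names and capability flags below are invented for illustration:

```python
# Sketch of capability-based alert routing: the application never binds an
# alert to one device; any available object that can render it is used.

from dataclasses import dataclass

@dataclass
class HomeObject:
    name: str
    capabilities: set[str]   # e.g. {"display", "sound", "blink"}
    available: bool

def notify(objects: list[HomeObject], message: str) -> str:
    """Route an alert to an available capable object, preferring richer output."""
    for wanted in ("display", "sound", "blink"):
        for obj in objects:
            if obj.available and wanted in obj.capabilities:
                return f"{obj.name}: {wanted} -> {message!r}"
    return "queued: no capable object available"

home = [HomeObject("photo_frame", {"display"}, False),
        HomeObject("radio", {"sound"}, True),
        HomeObject("lamp", {"blink"}, True)]
print(notify(home, "washing machine finished"))   # the radio plays the alert
```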

Relevance:

90.00%

Publisher:

Abstract:

This paper presents an automated design framework for developing individual part-forming tools for a composite stiffener. The framework uses parametrically developed design geometries for both the part and its layup tool. It has been developed with a functioning user interface, where part/tool combinations are passed to a virtual environment for utility-based assessment of their features and assemblability characteristics. The work demonstrates clear benefits in process design methods, with conventional design timelines reduced from hours and days to minutes and seconds: the methods developed here produced a digital mock-up of a component with its associated layup tool in less than 3 minutes, and the virtual environment presenting the design to the designer for interactive assembly planning was generated in 20 seconds. Challenges remain in determining the level of realism required to provide an effective learning environment in the virtual world; representing physical phenomena such as gravity and part clashes, and representing standard build functions, require further work.

Relevance:

90.00%

Publisher:

Abstract:

In the current landscape of educational software development, it is important that development processes be appropriate and compatible with the context in which such resources will be used. It is therefore important to continuously improve development processes and to carry out evaluation so as to guarantee their quality and economic viability. This study proposes a Hybrid User-Centred Development Methodology (MHDCU) applied to educational software. It is a simple, iterative, and incremental development process founded on principles of user-centred design as specified in International Organization for Standardization ISO 13407, combined with the disciplined structure of development processes and the practices and values of agile software development methods. The process consists of four main phases: planning (didactic script), design (storyboard), implementation, and maintenance/operation; prototyping and evaluation are carried out transversally throughout the whole process. The methodology was implemented in a small and medium-sized enterprise that develops educational resources, with the goal of producing educational resources of recognized quality that are simultaneously economically viable. The first resource based on this methodology was the Courseware Sere - "O Ser Humano e os Recursos Naturais" (Human Beings and Natural Resources). The work followed a mixed research and development methodology, aiming to describe and analyze/evaluate a methodology for developing educational software, i.e., the process as well as the final product. The study is fundamentally descriptive and exploratory. The software development methodology (first research question) was proposed essentially on the basis of an integrative review of the specialist literature and on the results that emerged from Phases 2 and 3. From an exploratory standpoint, the study evaluated, on one hand, the technical and didactic potential of the first version of the software in the Courseware Sere (second research question) and, on the other, the strengths and weaknesses of the methodology used for its development (third research question). Data were collected through two questionnaire surveys and direct participant observation (mediated by the Moodle platform); data analysis relied on descriptive statistics and content analysis. The results indicate that the resource developed has technical and didactic quality. Regarding the analysis of the Hybrid User-Centred Development Methodology, several improvements were proposed concerning user involvement and the introduction of new methods. Although some limitations were identified, this project allowed the company to significantly improve its resource development processes (even for non-computerized resources) and to expand its portfolio with the development of the Courseware Sere.

Relevance:

90.00%

Publisher:

Abstract:

This dissertation describes the development of an information system for managing the academic information of postgraduate programmes - the WebMaster system - whose goal is to make that information accessible to users through the World Wide Web (WWW). It begins by presenting some concepts considered relevant for understanding information systems in their full scope within an organization, with particular attention to the case of universities. It then reflects on Web-based information systems, contrasting the concepts of a (traditional) Web site and a Web application in terms of technological architecture and their main advantages and disadvantages, with a brief reference to the main technologies for building solutions with dynamic content generation. Finally, the WebMaster system is presented through its different development stages, from requirements analysis and system design to implementation. The requirements analysis phase was carried out through a survey of potential users to identify their information needs. Based on the results of this phase, the system design is presented from conceptual, navigational, and user-interface perspectives, using the OOHDM (Object-Oriented Hypermedia Design Method) methodology. Building on the previous stages and on the technologies selected during planning, the implementation phase provides an interactive space for information exchange for all interested members of the academic community involved in postgraduate courses.

Relevance:

90.00%

Publisher:

Abstract:

Doctoral thesis, Marine, Earth and Environmental Sciences, Faculdade de Ciências e Tecnologia, Universidade do Algarve, 2015.