175 results for SQL


Relevance:

10.00%

Publisher:

Abstract:

HIRFL (the Heavy Ion Research Facility in Lanzhou) is China's first large-scale heavy-ion physics research facility, and the HIRFL control system is a key component in keeping it running correctly and efficiently. This thesis applies database and networking technology to design a database system for the HIRFL control system, meeting the requirement that the entire HIRFL system can be controlled over the network from a single application. The thesis first introduces database system theory, then presents the background knowledge underlying each part of the software design, and finally discusses in detail the database design and the human-machine interface design of the application. The database is built on SQL Server 2000 and accessed through ODBC; the human-machine interface is a Windows program written with object-oriented programming techniques. Besides establishing a database containing the device information of every part of the control system, this work also delivers a set of distributed HIRFL control software covering all basic control functions, providing a secondary-development platform for further improvement of the HIRFL control system database.
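The control application's database access can be sketched in miniature. The snippet below uses Python's sqlite3 in place of SQL Server 2000/ODBC, and the device table and its columns are invented for illustration:

```python
import sqlite3

# Stand-in for the SQL Server 2000 device-information database; the table
# and column names here are illustrative, not taken from the thesis.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE device (
    device_id INTEGER PRIMARY KEY,
    subsystem TEXT,      -- e.g. ion source, beam line, power supply
    name      TEXT,
    setpoint  REAL)""")
conn.executemany(
    "INSERT INTO device (subsystem, name, setpoint) VALUES (?, ?, ?)",
    [("beam line", "magnet-01", 1.25), ("power supply", "ps-07", 380.0)])

def devices_in(subsystem):
    """Return (name, setpoint) rows for one subsystem, as the control
    application would when presenting a subsystem panel."""
    cur = conn.execute(
        "SELECT name, setpoint FROM device WHERE subsystem = ?", (subsystem,))
    return cur.fetchall()

print(devices_in("beam line"))  # [('magnet-01', 1.25)]
```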

Relevance:

10.00%

Publisher:

Abstract:

Spatial data management and maintenance is one of the key technologies in satellite data application systems and is indispensable to advancing satellite data applications. Building on an analysis of advanced data management techniques, this thesis designs a data management and maintenance solution for a satellite ground application system and discusses its concrete design and implementation. The main contributions are: 1. System architecture: a layered architecture that divides the system, from top to bottom, into a data application layer, a data access interface layer, a logical data layer and a physical data layer, with the functions of each layer and the interactions between them clearly defined. 2. Unified data access: a data access interface layer that operates on the logical data below and offers upper-layer users an easy-to-use, unified data access interface. This layer hides differences in underlying data formats and storage, supports unified storage and retrieval, provides transparent data access, and reduces coupling between systems. 3. Data operations: an XML-based format for data exchange between data users and the management system. The data access interface layer translates XML-format requests into SQL statements (or file API calls) to carry out the various data operation requests, improving extensibility. The thesis also designs a data security strategy combining several security methods to improve the safety of data in the system. Development and testing of the system show that the solution essentially meets the data management and maintenance needs of a satellite ground application system.
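The XML-request-to-SQL translation performed by the data access interface layer can be sketched as follows; the request format and table names are assumptions, not the thesis's actual schema:

```python
import xml.etree.ElementTree as ET

# Minimal sketch of the interface layer's XML-to-SQL translation; the
# request format below is invented for illustration.
def xml_to_sql(request_xml):
    # e.g. <query table="..."><where field="..." value="..."/></query>
    root = ET.fromstring(request_xml)
    table = root.get("table")
    clauses, params = [], []
    for cond in root.findall("where"):
        clauses.append("%s = ?" % cond.get("field"))
        params.append(cond.get("value"))
    sql = "SELECT * FROM %s" % table
    if clauses:
        sql += " WHERE " + " AND ".join(clauses)
    return sql, params

sql, params = xml_to_sql(
    '<query table="product"><where field="satellite" value="FY-2"/></query>')
print(sql)     # SELECT * FROM product WHERE satellite = ?
print(params)  # ['FY-2']
```

Parameter values travel separately from the SQL text, so the layer can hand them to the database driver as bound parameters rather than splicing them into the statement.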

Relevance:

10.00%

Publisher:

Abstract:

This thesis studies the islands of Shandong Province, surveying the natural environment, resources and overall state of development and use of islands in the province's different regions. After organising and analysing a large body of island survey data, it proposes a scheme that uses the Geodatabase data model to build a spatial database of Shandong's islands for organising and storing the island data, and develops a Shandong island management information system on the ArcEngine GIS platform with a programming language. Shandong is one of China's provinces richest in island resources. Overall, its islands enjoy a favourable location and good climate; they fall into bedrock islands, sand islands and artificial islands, with bedrock islands in the majority and sand islands concentrated in Binzhou and Changyi (Weifang) on the north side of the Yellow River delta. The islands are rich in fishery, aquaculture, tourism, land, harbour, underground brine and shell-sand resources, and development has accordingly concentrated on aquaculture, tourism and harbour development. The Geodatabase data model can effectively define and organise spatial data, and ArcSDE provides the channel for storing spatial data in a commercial relational database; together they make it possible to store spatial data and attribute data uniformly in a single large relational database. Using the Geodatabase model, ArcSDE and SQL Server, this thesis designs a unified storage and management model for island data and, through data processing and loading, builds the Shandong island spatial database. The thesis then develops the Shandong island management information system with ArcEngine components and VB.NET. The system implements island data access and loading, island map display and manipulation, several kinds of query display of island data, geometric measurement on the map, and the integration and comprehensive analysis of thematic island material; it also provides layer labelling, layer colour settings, and layer move and delete functions. These functions provide a useful platform for the development and management of Shandong's islands and a service for decision-making departments.
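The idea of storing geometry and attributes together in one relational database can be shown in miniature; here sqlite3 stands in for SQL Server/ArcSDE, the geometry is kept as WKT text, and all names and values are illustrative:

```python
import sqlite3

# The Geodatabase/ArcSDE idea in miniature: geometry and attributes live in
# the same relational table. sqlite3 stands in for SQL Server; the geometry
# is stored as WKT text and the row data is invented.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE island (
    name     TEXT,
    type     TEXT,   -- bedrock / sand / artificial
    area_km2 REAL,
    geom_wkt TEXT)""")
conn.execute("INSERT INTO island VALUES (?, ?, ?, ?)",
             ("Miaodao", "bedrock", 1.4,
              "POLYGON((120.6 37.9, 120.7 37.9, 120.7 38.0, 120.6 37.9))"))

# An attribute query and the geometry come back from the same store.
row = conn.execute(
    "SELECT name, geom_wkt FROM island WHERE type = 'bedrock'").fetchone()
print(row[0])  # Miaodao
```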

Relevance:

10.00%

Publisher:

Abstract:

This paper describes a case-based hull assembly CAPP system in a CIMS environment, covering: the position of the CAPP system in the CIMS project at Bohai Shipyard; case-based process planning (case representation, and case indexing, storage, retrieval and adaptation); the preparation of process quotas and material quotas; and information integration between subsystems. The system is written in PowerBuilder and in the SQL language of the SQL Server database. The hull assembly CAPP system is now running on networked PCs at Bohai Shipyard.
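Case retrieval, the core step of case-based process planning, can be sketched as a weighted feature match; the features, cases and weights below are invented for illustration, not taken from the paper:

```python
# Case-based process planning in miniature: retrieve the stored hull-assembly
# case whose features best match a new part, so its plan can then be adapted.
# The feature names, cases and weights are illustrative.
def retrieve(case_base, query, weights):
    """Nearest case by weighted feature match (higher score = closer)."""
    def score(case):
        return sum(w for f, w in weights.items()
                   if case["features"].get(f) == query.get(f))
    return max(case_base, key=score)

case_base = [
    {"plan": "plan-A", "features": {"block": "bottom", "material": "steel"}},
    {"plan": "plan-B", "features": {"block": "deck",   "material": "steel"}},
]
best = retrieve(case_base, {"block": "deck", "material": "steel"},
                weights={"block": 2.0, "material": 1.0})
print(best["plan"])  # plan-B
```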

Relevance:

10.00%

Publisher:

Abstract:

The rapid development of computer networking and the spread of the Internet have provided the underlying communication conditions for interconnecting distributed applications. In particular, the rise of distributed object technology, led by CORBA/IIOP, provides a basic framework and technical support for application interoperability in open, distributed, heterogeneous environments (including the Internet and intranets). Application interoperability, and its realisation in particular application domains under such environments, is itself complex and still evolving; it is a current research focus in this field, with broad demand and application value. In the context of two projects, the national 863/511 project "Research and implementation of CORBA-based multi-data-source interoperability and open distributed processing for CIMS" and the Shenyang science and technology fund project "Interoperability of multiple databases in information networks", and based on an in-depth analysis of distributed object technology and the CORBA/IIOP software bus, this thesis presents three approaches to application interoperability in open, distributed, heterogeneous environments: componentising applications, object-oriented wrapping of legacy applications, and turning multiple data sources into objects. It discusses four basic characteristics of CORBA/IIOP-based application components (self-description, customisability, composability, and connection mechanisms) and seven methods for object-oriented wrapping of legacy applications: layering, data migration, application rebuilding, middleware, encapsulation, architectural wrapping, and proxy wrapping. It also gives implementation methods and structures for application interoperability and Object Web technology in Internet/intranet environments. On this basis, the thesis focuses on the multi-data-source interoperability problem that is typical and pervasive in this field. It proposes OOMDSCDM, a common object-oriented data model for multiple data sources; designs and implements a prototype CORBA ORB kernel; and builds the MISORB multi-data-source interoperability system on top of it. The system converts local data source models into the common model OOMDSCDM and successfully turns file-based and relational data sources (including SQL Server and Oracle) into CORBA objects, achieving "plug and play" for diverse data resources. It also proposes a view extension mechanism for multi-data-source interoperability that increases flexibility and transparency. This work forms an important part of the results of the 863 project, which passed review and acceptance by the national 863 expert group; the group judged it innovative in its CORBA-compliant ORB implementation and in the models, mechanisms and methods of ORB-based multi-data-source interoperability and open distributed processing, reaching an internationally advanced level. Finally, taking the information resources of a science and technology information network as its target, the thesis designs and implements a CORBA/IIOP-based application interoperability system that gives Web browsers remote access to information resources, including legacy CD-ROM database resources.
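The object-oriented wrapping of heterogeneous data sources behind one common interface, the pattern underlying MISORB, can be sketched as a plain adapter (no CORBA runtime); the classes and data below are illustrative:

```python
# Heterogeneous sources wrapped behind one common interface, so clients see
# a single model and sources become "plug and play". A plain-Python adapter
# sketch; no CORBA runtime, and all classes and data are invented.
class DataSource:
    def query(self, key):
        raise NotImplementedError

class FileSource(DataSource):
    """Wraps a file-type source (a dict stands in for a flat file)."""
    def __init__(self, records):
        self._records = records
    def query(self, key):
        return self._records.get(key)

class RelationalSource(DataSource):
    """Would wrap SQL Server / Oracle; stubbed with a dict here."""
    def __init__(self, rows):
        self._rows = rows
    def query(self, key):
        return self._rows.get(key)

# Clients iterate over sources without knowing their kind.
sources = [FileSource({"a": 1}), RelationalSource({"b": 2})]
print([s.query("b") for s in sources])  # [None, 2]
```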

Relevance:

10.00%

Publisher:

Abstract:

ERP and MES systems are now widely used in enterprises, so integrating the two, eliminating "information islands" and sharing resources effectively is of great importance. This thesis first introduces the technical background of ERP, MES, middleware and information integration, laying the foundation for the later work; it then describes the ISA SP95 standard, XML technology, and the working principles and development methods of Microsoft's integration middleware, BizTalk Server. It next discusses the necessity of MES/ERP integration, analyses the current state of such integration, and compares several existing integration patterns. After examining the shortcomings of existing MES/ERP information integration solutions at home and abroad, it analyses and builds an MES/ERP information integration model based on the ISA SP95 standard, proposes integrating MES and ERP through message-queue communication middleware, and, in conjunction with the MES project at Shaanxi Fast Gear Co. in Xi'an, designs and develops an ERP/MES information integration system on top of the message-queue middleware BizTalk Server. The main technologies and tools involved are XML, BizTalk Server and SQL Server. The proposed system uses component-based design on the .NET platform; it builds the ERP/MES information integration model from the ISA SP95 standard, uses the XML/XSD standard for message definition, BizTalk Server as the integration middleware for message template definition, transformation and delivery, and SQL Server 2000 as the database server. The thesis presents the system architecture and a functional block diagram, describes each functional module and the security design, explains the implementation of the key technologies, and finally carries out preliminary testing, realising the closed loop in which ERP issues production plans to MES, MES receives the plans and directs production, and production performance is fed back to ERP on completion. The system achieves asynchronous, real-time, reliable integration between MES and ERP; and because BizTalk Server is platform-independent, it can be applied to the integration of many large-scale application systems, giving it practical and promotional value. Keywords: ERP; MES; middleware; information integration
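The closed loop described above can be sketched with a standard-library queue standing in for BizTalk Server; the XML message tags are invented for illustration, not taken from ISA SP95:

```python
import queue
import xml.etree.ElementTree as ET

# ERP -> MES -> ERP closed loop, with stdlib queues standing in for the
# BizTalk Server message channels; the message format is invented.
to_mes, to_erp = queue.Queue(), queue.Queue()

# ERP issues a production plan as an XML message.
to_mes.put('<plan order="1001" part="gear" qty="50"/>')

# MES receives the plan, "produces", and reports performance back.
plan = ET.fromstring(to_mes.get())
to_erp.put('<performance order="%s" produced="%s"/>' %
           (plan.get("order"), plan.get("qty")))

# ERP receives the feedback, closing the loop.
perf = ET.fromstring(to_erp.get())
print(perf.get("order"), perf.get("produced"))  # 1001 50
```

Because each side only reads from and writes to a queue, the two systems stay decoupled and the exchange is asynchronous, which is the property the message-queue middleware provides in the real system.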

Relevance:

10.00%

Publisher:

Abstract:

Loading the foreign trade data. Base directory structure. Data files. Creation of the temporary tables. Loading with the Oracle SQL*Loader utility. Transformations in the temporary tables. Transfer of data from the temporary dimension tables. Referential integrity tests and correction of violations. Creation of indexes and constraints on the temporary fact tables. Transfer of data from the temporary fact tables. Removal of the temporary tables. Refresh of the summaries of the Fruticulture Data Warehouse.
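The staging steps above can be sketched with sqlite3 standing in for Oracle and SQL*Loader; the table names and data are illustrative:

```python
import sqlite3

# Staging-table ETL in miniature; sqlite3 stands in for Oracle and the
# rows stand in for the SQL*Loader-loaded data files.
conn = sqlite3.connect(":memory:")
# Temporary (staging) table loaded from the data files.
conn.execute("CREATE TABLE tmp_fact (product TEXT, year INT, value REAL)")
conn.executemany("INSERT INTO tmp_fact VALUES (?, ?, ?)",
                 [("mango", 2003, 10.5), ("grape", 2003, 7.2)])
# Transformation in the temporary table.
conn.execute("UPDATE tmp_fact SET product = UPPER(product)")
# Transfer into the warehouse fact table, then drop the staging table.
conn.execute("CREATE TABLE fact AS SELECT * FROM tmp_fact")
conn.execute("DROP TABLE tmp_fact")
print(conn.execute("SELECT product FROM fact ORDER BY product").fetchall())
# [('GRAPE',), ('MANGO',)]
```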

Relevance:

10.00%

Publisher:

Abstract:

Some WWW image engines allow the user to form a query in terms of text keywords. To build the image index, keywords are extracted heuristically from the HTML documents containing each image, and/or from the image URL and file headers. Unfortunately, text-based image engines have merely retro-fitted standard SQL database query methods, and it is difficult to include image cues within such a framework. On the other hand, visual statistics (e.g., color histograms) are often insufficient for helping users find desired images in a vast WWW index. By truly unifying textual and visual statistics, one would expect better results than either gives separately. In this paper, we propose an approach that combines visual statistics with textual statistics in the vector space representation commonly used in query-by-image-content systems. Text statistics are captured in vector form using latent semantic indexing (LSI). The LSI index for an HTML document is then associated with each of the images contained therein. Visual statistics (e.g., color, orientedness) are also computed for each image. The LSI and visual statistic vectors are then combined into a single index vector that can be used for content-based search of the resulting image database. This integrated approach takes advantage of possible statistical couplings between the topic of a document (latent semantic content) and the contents of its images (visual statistics), improving performance in content-based search. The approach has been implemented in a WWW image search engine prototype.
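The combination of an LSI vector and visual statistics into a single index vector can be sketched as follows; the weighting scheme and the toy vectors are assumptions for illustration, not the paper's exact formulation:

```python
import math

# Combine a text (LSI) vector and a visual-statistics vector into one index
# vector, then compare indices by cosine similarity. Toy values throughout;
# the 50/50 weighting is an assumption, not the paper's.
def unit(v):
    n = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / n for x in v]

def combine(lsi_vec, visual_vec, w_text=0.5):
    """Normalise each part, weight it, and concatenate."""
    return ([w_text * x for x in unit(lsi_vec)] +
            [(1 - w_text) * x for x in unit(visual_vec)])

def cosine(a, b):
    return sum(x * y for x, y in zip(a, b)) / (
        math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

q = combine([1.0, 0.0], [0.2, 0.8])   # query: LSI part + color/orientedness part
d = combine([0.9, 0.1], [0.3, 0.7])   # a database image's combined index
print(round(cosine(q, d), 3))
```

Normalising each part before concatenation keeps the text and visual components on a comparable scale, so neither dominates the similarity score by magnitude alone.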

Relevance:

10.00%

Publisher:

Abstract:

This work proceeds from the assumption that a European environmental information and communication system (EEICS) is already established. In the context of primary users (land-use planners, conservationists, and environmental researchers), we ask what use may be made of the EEICS for building models and tools that help build decision support systems for the land-use planner. The complex task facing the next generation of environmental and forest modellers is described, and a range of relevant modelling approaches is reviewed, including visualization and GIS; statistical tabulation; and database SQL, MDA and OLAP methods. The major problem of the non-comparability of definitions and measures of forest area and timber volume is introduced, and the possibility of a model-based solution is considered. The possibility of using an ambitious and challenging biogeochemical modelling approach to understanding and managing European forests sustainably is discussed. It is emphasised that all modern methodological disciplines must be brought to bear, and that a heuristic hybrid modelling approach should be used, so that the benefits of practical empirical modelling are obtained in addition to those of scientifically well-founded, holistic ecosystem and environmental modelling. The data and information system required is likely to end up as a grid-based framework because of the heavy use of computationally intensive model-based facilities.

Relevance:

10.00%

Publisher:

Abstract:

Web databases are now pervasive. Such a database can be accessed only via its query interface, usually an HTML query form. Extracting Web query interfaces, which creates a formal representation of a query form from the set of query conditions it contains, is a critical step in data integration across multiple Web databases. This paper presents a novel approach to extracting Web query interfaces. In this approach, a generic set of query condition rules is created to define query conditions that are semantically equivalent to SQL search conditions. Query condition rules represent the semantic roles that labels and form elements play in query conditions, and how they are hierarchically grouped into the constructs of query conditions. To group labels and form elements in a query form, we exploit both their structural proximity in the form's hierarchy of structures, captured by the tree of nested tags in the form's HTML code, and their semantic similarity, captured by the various short texts used in labels, form elements and their properties. We have implemented the proposed approach, and our experimental results show that it is highly effective.
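The pairing of labels with form elements can be sketched with the standard-library HTML parser; the pairing rule and the sample form below are invented and far simpler than the paper's rule set:

```python
from html.parser import HTMLParser

# Toy query-interface extraction: pair each <label> with the input/select
# that follows it and emit an SQL-like parameterised condition. Real query
# condition rules are far richer; this only shows the pairing step.
class FormExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.pending_label = None
        self.conditions = []
        self._in_label = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "label":
            self._in_label = True
        elif tag in ("input", "select") and self.pending_label:
            # A label followed by a form element forms one query condition.
            self.conditions.append("%s = ?" % attrs.get("name", "?"))
            self.pending_label = None

    def handle_data(self, data):
        if self._in_label and data.strip():
            self.pending_label = data.strip()
            self._in_label = False

p = FormExtractor()
p.feed('<form><label>Author</label><input name="author">'
       '<label>Year</label><select name="year"></select></form>')
print(p.conditions)  # ['author = ?', 'year = ?']
```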

Relevance:

10.00%

Publisher:

Abstract:

Background: Popular approaches to human tissue-based biomarker discovery include tissue microarrays (TMAs) and DNA microarrays (DMAs), for protein and gene expression profiling respectively. The data generated by these analytic platforms, together with the associated image, clinical and pathological data, currently reside on widely different information platforms, making searching and cross-platform analysis difficult. Consequently, there is a strong need for a single coherent database capable of correlating all available data types.

Method: This study presents TMAX, a database system to facilitate biomarker discovery tasks. TMAX organises a variety of biomarker discovery-related data into the database. Both TMA and DMA experimental data are integrated in TMAX and connected through common DNA/protein biomarkers. Patient clinical data (including tissue pathological data) and computer-assisted tissue image and analytic data are also included, enabling truly high-throughput processing of ultra-large digital slides, both TMAs and whole-slide tissue images. A comprehensive web front-end was built with embedded XML parser software and predefined SQL queries to enable rapid data exchange in the form of standard XML files.

Results & Conclusion: TMAX represents one of the first attempts to integrate TMA data with public gene expression experiment data. Experiments suggest that TMAX is robust in managing large quantities of data from different sources (clinical, TMA, DMA and image analysis). Its web front-end is user-friendly and, most importantly, allows the rapid and easy exchange of biomarker discovery-related data. In conclusion, TMAX is a robust biomarker discovery data repository and research tool that opens up opportunities for biomarker discovery and further integromics research.
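The front-end's exchange path, a predefined SQL query serialised as XML, can be sketched as follows; sqlite3 stands in for TMAX's database and the schema and data are illustrative, not TMAX's actual ones:

```python
import sqlite3
import xml.etree.ElementTree as ET

# A predefined SQL query whose result set is serialised as a standard XML
# document, as in the TMAX web front-end. Schema and values are invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE biomarker (gene TEXT, tma_score REAL)")
conn.executemany("INSERT INTO biomarker VALUES (?, ?)",
                 [("TP53", 0.82), ("BRCA1", 0.64)])

def export_xml(min_score):
    """Run the predefined query and return its rows as an XML string."""
    root = ET.Element("biomarkers")
    for gene, score in conn.execute(
            "SELECT gene, tma_score FROM biomarker WHERE tma_score >= ?",
            (min_score,)):
        ET.SubElement(root, "biomarker", gene=gene, score=str(score))
    return ET.tostring(root, encoding="unicode")

print(export_xml(0.7))
```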

Relevance:

10.00%

Publisher:

Abstract:

Access control is a software engineering challenge in database applications. Currently, there is no satisfactory solution for dynamically implementing evolving fine-grained access control mechanisms (FGACM) on the business tiers of relational database applications. To close this gap, we propose an architecture, herein referred to as the Dynamic Access Control Architecture (DACA). DACA allows FGACM to be dynamically built and updated at runtime, in accordance with the established fine-grained access control policies (FGACP). DACA exploits the features of Call Level Interfaces (CLI) to implement FGACM on business tiers; among these features, we emphasize their performance and their multiple modes of access to data residing in relational databases. The different CLI access modes are wrapped by typed objects driven by the FGACM, which are built and updated at runtime. Programmers set aside the traditional CLI access modes and use the dynamically implemented and updated ones instead. DACA comprises three main components: the Policy Server (a repository of metadata for FGACM), the Dynamic Access Control Component (DACC) (the business tier component responsible for implementing FGACM), and the Policy Manager (a broker between the DACC and the Policy Server). Unlike current approaches, DACA does not depend on any particular access control model or policy, which promotes its applicability to a wide range of situations. To validate DACA, a solution based on Java, Java Database Connectivity (JDBC) and SQL Server was devised and implemented. Two evaluations were carried out: the first evaluates DACA's capability to implement and update FGACM dynamically at runtime, and the second assesses DACA's performance against standard use of JDBC without any FGACM. The collected results show that DACA is an effective approach for implementing evolving FGACM on business tiers based on Call Level Interfaces, in this case JDBC.
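The core DACA idea, access objects whose permitted operations are policy-driven and updatable at runtime, can be sketched in Python with sqlite3; this is an illustration of the pattern, not the thesis's JDBC implementation:

```python
import sqlite3

# Database access goes through a wrapper whose allowed statements are driven
# by a policy that can change at runtime. A Python/sqlite3 sketch of the
# pattern; the policy here is a simple per-statement-type whitelist.
class PolicyConnection:
    def __init__(self, conn, allowed_ops):
        self._conn = conn
        self.allowed_ops = set(allowed_ops)   # e.g. {"SELECT"}; updatable

    def execute(self, sql, params=()):
        op = sql.strip().split()[0].upper()
        if op not in self.allowed_ops:
            raise PermissionError("operation %s denied by policy" % op)
        return self._conn.execute(sql, params)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INT)")
guarded = PolicyConnection(conn, {"SELECT"})
guarded.execute("SELECT * FROM t")            # allowed
try:
    guarded.execute("DELETE FROM t")          # denied by the current policy
except PermissionError as e:
    print(e)                                  # operation DELETE denied by policy
guarded.allowed_ops.add("DELETE")             # policy updated at runtime
guarded.execute("DELETE FROM t")              # now allowed
```

Application code only ever sees the guarded object, so tightening or relaxing the policy requires no change to that code, which is the property DACA provides for JDBC-based business tiers.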

Relevance:

10.00%

Publisher:

Abstract:

Supervised teaching practice report, Master's in Informatics Teaching, Universidade de Lisboa, 2014

Relevance:

10.00%

Publisher:

Abstract:

Final Master's project for the degree of Master in Electronics and Telecommunications Engineering

Relevance:

10.00%

Publisher:

Abstract:

Final Master's project for the degree of Master in Informatics and Computer Engineering