945 results for Source code


Relevance:

60.00%

Publisher:

Abstract:

Background: With the advances in DNA sequencer-based technologies, it has become possible to automate several steps of the genotyping process, leading to increased throughput. To efficiently handle the large amounts of genotypic data generated and to help with quality control, there is a strong need for a software system that can track samples and capture and manage data at the different steps of the process. Such systems, while serving to manage the workflow precisely, also encourage good laboratory practice by standardizing protocols and by recording and annotating data from every step of the workflow. Results: A laboratory information management system (LIMS) has been designed and implemented at the International Crops Research Institute for the Semi-Arid Tropics (ICRISAT) that meets the requirements of a moderately high-throughput molecular genotyping facility. The application is modular and simple to learn and use. It leads the user through each step of the process, from starting an experiment to storing the output of the genotype detection step with auto-binning of alleles, thus ensuring that every DNA sample is handled in an identical manner and that all the necessary data are captured. The application keeps track of DNA samples and of the data generated. Data are entered into the system through forms and file uploads. The LIMS provides functions to trace any genotypic datum back to the electrophoresis gel files or the sample source, and to repeat experiments. The LIMS is presently used to capture high-throughput SSR (simple-sequence repeat) genotyping data from the legume (chickpea, groundnut and pigeonpea) and cereal (sorghum and millets) crops of importance in the semi-arid tropics. Conclusions: A laboratory information management system is available that has been found useful for managing microsatellite genotype data in a moderately high-throughput genotyping laboratory. The application, with source code, is freely available for academic users and can be downloaded from http://www.icrisat.org/bt-software-d-lims.htm
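To make the trace-back function concrete: a minimal, hypothetical relational schema (ours, not the ICRISAT application's) in which every genotype call references both its source DNA sample and the gel file it was scored from might look like this:

```python
# Minimal, hypothetical traceability schema for a genotyping LIMS.
# Table and column names are illustrative; they are not taken from
# the ICRISAT application.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dna_sample (
    sample_id   INTEGER PRIMARY KEY,
    accession   TEXT NOT NULL,          -- source germplasm accession
    plate_well  TEXT NOT NULL           -- e.g. 'P07:B03'
);
CREATE TABLE gel_run (
    gel_id      INTEGER PRIMARY KEY,
    gel_file    TEXT NOT NULL           -- path to electrophoresis image
);
CREATE TABLE genotype_call (
    call_id     INTEGER PRIMARY KEY,
    sample_id   INTEGER REFERENCES dna_sample(sample_id),
    gel_id      INTEGER REFERENCES gel_run(gel_id),
    marker      TEXT NOT NULL,          -- SSR marker name
    allele_bin  INTEGER                 -- auto-binned allele size (bp)
);
""")

# Trace any genotype call back to its gel file and DNA sample:
row = conn.execute("""
    SELECT s.accession, s.plate_well, g.gel_file, c.marker, c.allele_bin
    FROM genotype_call c
    JOIN dna_sample s USING (sample_id)
    JOIN gel_run g USING (gel_id)
    WHERE c.call_id = ?
""", (1,)).fetchone()
print(row)   # None until calls are loaded; full provenance chain otherwise
```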

Relevance:

60.00%

Publisher:

Abstract:

The Department of Forest Resource Management at the University of Helsinki carried out the SIMO project in 2004–2007 to develop a new-generation planning system for forest management. The project parties are the organisations that carry out most Finnish forest planning in state, industry and privately owned forests. The aim of this study was to identify the needs and requirements for the new forest planning system and to clarify how the parties see the targets and processes of today's forest planning. Representatives responsible for forest planning in each organisation were interviewed one by one. According to the study, the stand-based system for managing and treating forests will continue in the future. Because of variable data acquisition methods with differing accuracy and sources, and the development of single-tree interpretation, more and more forest data are collected without field work. The benefits of more specific forest data also call for information units smaller than the tree stand. In Finland the traditional forest planning computation is divided into two parts. After the forest data have been updated to the present situation, the growth of every stand unit is simulated under alternative treatment schedules. After simulation, optimisation selects one treatment schedule for every stand so that the management programme satisfies the owner's goals as well as possible. This arrangement will be maintained in the future system. The parties' requirements to add multi-criteria problem solving, group decision support methods, and heuristic and spatial optimisation make the programming work more challenging. In general, the new system is expected to be adjustable and transparent; strict documentation and free source code help to realise these expectations. Growth models and treatment schedules with varying source information, accuracy, methods and processing speed should work easily within the system. Possibilities to calibrate models regionally and to set local, time-varying parameters are also required. In the future, the forest planning system will be integrated into comprehensive data management systems together with geographic, economic and work-supervision information. This requires a modular implementation and a simple data transmission interface between modules and with other systems. No major differences in the parties' views of the system requirements were noticed in this study; rather, the interviews completed the full picture from slightly different angles. Within the organisations, forest management planning is considered quite inflexible and only draws the strategic lines; it does not yet have a role in operative activity, although the need for and benefits of team-level forest planning are acknowledged. The demands and opportunities of variable forest data, new planning goals and the development of information technology are recognised, and the party organisations want to keep abreast of development. One example is their engagement in the extensive SIMO project, which connects the whole field of forest planning in Finland.
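A minimal sketch of the simulate-then-optimise step described above, assuming made-up stand data and a simple additive utility over the owner's goals (real systems also handle goals that couple stands, such as even timber flow, via goal programming or heuristic optimisation):

```python
# Toy version of the selection step: for each stand, pick the simulated
# treatment schedule that maximises a weighted utility over the owner's
# goals. Stand data, goal weights and the additive utility form are all
# illustrative assumptions, not SIMO's actual formulation.

stands = {
    "stand_1": [  # (schedule, net income EUR/ha, ending volume m3/ha)
        ("no_treatment",   0.0, 310.0),
        ("thinning",    1800.0, 210.0),
        ("clear_cut",   7200.0,  15.0),
    ],
    "stand_2": [
        ("no_treatment",   0.0, 180.0),
        ("thinning",     900.0, 120.0),
    ],
}

weights = {"income": 0.7, "volume": 0.3}   # owner's goal weights

def utility(income, volume):
    # Normalise by rough scale constants so the two goals are comparable.
    return weights["income"] * income / 1000.0 + weights["volume"] * volume / 100.0

plan = {
    stand: max(schedules, key=lambda s: utility(s[1], s[2]))[0]
    for stand, schedules in stands.items()
}
print(plan)   # {'stand_1': 'clear_cut', 'stand_2': 'thinning'}
```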

Relevance:

60.00%

Publisher:

Abstract:

A key trait of Free and Open Source Software (FOSS) development is its distributed nature. Nevertheless, two project-level operations, the fork and the merge of program code, are among the least well understood events in the lifespan of a FOSS project. Some projects have explicitly adopted these operations as the primary means of concurrent development. In this study, we examine the effect of highly distributed software development, as found in the Linux kernel project, on the collection and modelling of software development data. We find that distributed development calls for sophisticated temporal modelling techniques in which several versions of the source code tree can exist at once. Attention must be turned towards the methods of quality assurance and peer review that projects employ to manage these parallel source trees. Our analysis indicates that two new metrics, fork rate and merge rate, could be useful for determining the role of distributed version control systems in FOSS projects. The study presents a preliminary data set consisting of version control and mailing list data.
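The abstract proposes fork rate and merge rate without giving formal definitions. Under one plausible reading (our assumption, not the paper's definitions: a merge is a commit with more than one parent, and a fork point is a commit with more than one child), both can be counted directly from a repository's history:

```python
# Hedged sketch: count "fork" and "merge" events in a git repository.
# The definitions are an illustrative assumption, not the paper's.
import subprocess
from collections import Counter

def fork_and_merge_rates(repo_path):
    out = subprocess.run(
        ["git", "-C", repo_path, "log", "--all", "--format=%H %P"],
        capture_output=True, text=True, check=True,
    ).stdout
    merges, n_commits = 0, 0
    children = Counter()                      # parent hash -> child count
    for line in out.splitlines():
        parts = line.split()
        if not parts:
            continue
        parents = parts[1:]
        n_commits += 1
        if len(parents) > 1:                  # commit joining two histories
            merges += 1
        for p in parents:
            children[p] += 1
    forks = sum(1 for c in children.values() if c > 1)
    # Rates per commit; per-week rates would divide by the time span instead.
    return forks / n_commits, merges / n_commits

# fork_rate, merge_rate = fork_and_merge_rates("/path/to/linux")
```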

Relevance:

60.00%

Publisher:

Abstract:

The main aim of the present study was to develop information and communication technology (ICT) based chemistry education. The goals of the study were to support meaningful chemistry learning, research-based teaching, and the diffusion of ICT innovations; these goals form the theoretical framework that guided the study. This doctoral dissertation is based on an eight-stage research project that included three design researches, scrutinized as separate case studies formed according to the design teams: i) one researcher was in charge of the design and teachers were involved in the research process, ii) a research group was in charge of the design and students were involved in the research process, and iii) the design was done by student teams, the research was done collaboratively, and the design process was coordinated by a researcher. The research projects were conducted using a mixed-methods approach, which enabled a comprehensive view of education design. The three central areas of design research (problem analysis, design solution and design process) were covered by the main research questions: 1) design solution: what kind of elements are included in ICT-based learning environments that support meaningful chemistry learning and the diffusion of innovation; 2) problem analysis: what new possibilities do the designed learning environments offer for supporting meaningful chemistry learning; and 3) design process: what opportunities and challenges does collaboration bring to the design of ICT-based learning environments? The main research questions were answered through analysis of the survey and observation data, the six designed learning environments, and ten design narratives from the three case studies. Altogether 139 chemistry teachers and teacher students were involved in the design processes. The data were mainly analysed by methods of qualitative content analysis. The first main result gives new information on meaningful chemistry learning and on the elements of an ICT-based learning environment that support the diffusion of innovation, which can help in the design of future ICT education. When the designed learning environments were examined in the context of chemistry education, it was evident that an ICT-based chemistry learning environment supporting meaningful learning motivates the students and makes the teacher's work easier. In addition, it should enable the simultaneous fulfilment of several pedagogical goals and activate higher-level cognitive processes. A learning environment supporting the diffusion of ICT innovation is suitable for the Finnish school environment, based on open source code, and easy to use, with quality chemistry content. According to the second main result, new information was acquired about the possibilities of ICT-based learning environments in supporting meaningful chemistry learning, which will help in setting the goals of future ICT education. The analysis of the design solutions and their evaluations shows that ICT enables the recognition of all the elements that define learning environments (i.e. didactic, physical, technological and social elements).
The research particularly demonstrates the significance of ICT in supporting students' motivation and higher-level cognitive processes, as well as the versatile visualization resources for chemistry that ICT makes possible. In addition, the research-based teaching method supports the diffusion of the studied innovation well at the individual level. The third main result brought new information on the significance of collaboration in design research, which guides the development of ICT education. According to the analysis of the design narratives, collaboration is important in the execution of scientifically reliable design research. It enables comprehensive requirement analysis and multifaceted development, which improves the reliability and validity of the research. At the same time, it poses reliability challenges by complicating documentation and coordination, for example. In addition, a new method for design research was developed; its aim is to support the execution of complicated collaborative design projects. To increase the reliability and validity of the research, a model theory was used. It enables time-bound documentation and the visualization of design decisions, which clarifies the process and improves the reliability of the research. The validity of the research is improved by requirement definition through models; in this way, learning environments that meet the design goals can be constructed. The designed method can be used in education development from comprehensive school to higher education. It can be used to recognize the needs of different interest groups and individuals with regard to processes, technology and substance knowledge, as well as the interfaces and relations between them. The developed method also has commercial potential; it is used to design learning environments for national and international markets.

Relevance:

60.00%

Publisher:

Abstract:

The open development model of software production has been characterized as the future model of knowledge production and distributed work. The open development model refers to publicly available source code, ensured by an open source license, and to the extensive and varied distributed participation of volunteers enabled by the Internet. Contemporary spokesmen of open source communities and academics view open source development as a new form of volunteer work activity characterized by the "hacker ethic" and "bazaar governance". The development of the Linux operating system is perhaps the best-known example of such an open source project: it started as an effort by a user-developer and grew quickly into a large project with hundreds of user-developers as contributors. However, in "hybrids", in which firms participate in open source projects oriented towards end-users, it seems that most users do not write code. In this study, the OpenOffice.org project, initiated by Sun Microsystems, represents such a project. In addition, Finnish public sector ICT decision-making concerning open source use is studied. The purpose is to explore the assumptions, theories and myths related to the open development model by analysing the discursive construction of the OpenOffice.org community: its developers, users and management. The qualitative study aims at shedding light on the dynamics and challenges of community construction and maintenance, and on the related power relations in hybrid open source, by asking two main research questions: How are the structure and membership constellation of the community, specifically the relation between developers and users, linguistically constructed in hybrid open development? What characterizes Internet-mediated virtual communities and how can they be defined? How do they differ from hierarchical forms of knowledge production on the one hand and from traditional volunteer communities on the other? The study utilizes sociological, psychological and anthropological concepts of community for understanding the connection between the real and the imaginary in so-called virtual open source communities. Intermediary methodological and analytical concepts are borrowed from discourse and rhetorical theories, and a discursive-rhetorical approach is offered as a methodological toolkit for studying texts and writing in Internet communities. The empirical chapters approach the problem of community and its membership from four complementary points of view. The data comprise mailing list discussions, personal interviews, web page writings, email exchanges, field notes and other historical documents. The four viewpoints are: 1) the community as conceived by volunteers, 2) the individual contributor's attachment to the project, 3) public sector organizations as users of open source, and 4) the community as articulated by the community manager. I arrive at four conclusions concerning my empirical studies (1-4) and two general conclusions (5-6). 1) Sun Microsystems and the OpenOffice.org Groupware volunteers failed to develop the open code and open dialogue necessary and sufficient to ensure collaboration, thus splitting the Groupware community into the volunteers ("we") and the firm ("them"). 2) Instead of separating intrinsic and extrinsic motivations, I find that volunteers' unique patterns of motivation are tied to changing objects and personal histories prior to and during participation in the OpenOffice.org Lingucomponent project.
Rather than seeing volunteers as a unified community, they can be better understood as independent entrepreneurs in search of a "collaborative community". The boundaries between work and hobby are blurred and shifting, thus questioning the usefulness of the concept of "volunteer". 3) The public sector ICT discourse portrays a dilemma and tension between the freedom to choose, use and develop one's desktop in the spirit of open source, on the one hand, and the striving for better desktop control and maintenance by IT staff and user advocates, on the other. The link between the global OpenOffice.org community and local end-user practices is weak and mediated by the problematic relationship between IT staff and (end-)users. 4) Authoring the community can be seen as a new, hybrid open source type of managerial practice. The ambiguous concept of community is a powerful strategic tool for orienting towards multiple real and imaginary audiences, as evidenced in the global membership rhetoric. 5) The changing and contradictory discourses of this study show a change in the conceptual system and the developer-user relationship of the open development model. This change is characterized as a movement from hacker ethic and bazaar governance towards a more professionally and strategically regulated community. 6) The community is simultaneously real and imagined, and can be characterized as a "runaway community". Discursive action can be seen as a specific type of online open source engagement: hierarchies and structures are created through discursive acts. Key words: Open Source Software, open development model, community, motivation, discourse, rhetoric, developer, user, end-user

Relevance:

60.00%

Publisher:

Abstract:

Background: Temporal analysis of gene expression data has been limited to identifying genes whose expression varies with time and/or to finding correlations between genes that have similar temporal profiles. Often, the methods do not consider the underlying network constraints that connect the genes. It is becoming increasingly evident that interactions change substantially with time. Thus far, there is no systematic method to relate the temporal changes in gene expression to the dynamics of the interactions between them. Information on interaction dynamics would open up possibilities for discovering new mechanisms of regulation, by providing valuable insight into identifying time-sensitive interactions, and would permit studies on the effect of a genetic perturbation. Results: We present NETGEM, a tractable model rooted in Markov dynamics, for analyzing the dynamics of the interactions between proteins based on the dynamics of the expression changes of the genes that encode them. The model treats the interaction strengths as random variables which are modulated by suitable priors. This approach is necessitated by the extremely small sample size of the datasets, relative to the number of interactions. The model is amenable to a linear-time algorithm for efficient inference. Using temporal gene expression data, NETGEM was successful in identifying (i) temporal interactions and determining their strength, (ii) functional categories of the actively interacting partners, and (iii) dynamics of interactions in perturbed networks. Conclusions: NETGEM represents an optimal trade-off between model complexity and data requirement. It was able to deduce actively interacting genes and functional categories from temporal gene expression data. It permits inference by incorporating the information available in perturbed networks. Given that the inputs to NETGEM are only the network and the temporal variation of the nodes, this algorithm promises to have widespread applications beyond biological systems. The source code for NETGEM is available from https://github.com/vjethava/NETGEM
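The abstract names the inputs (a fixed interaction network plus temporal expression of the node genes) and the output (time-varying interaction strengths). The toy sketch below is emphatically not the NETGEM algorithm, which infers Markov-modulated edge weights (see the repository linked above); it only illustrates, with a naive sliding-window co-expression score, what a time-varying strength for one edge looks like on such inputs:

```python
# Naive illustration only: score one network edge over time with a
# sliding-window correlation of the two endpoint genes' expression.
# NETGEM itself uses Markov dynamics with priors on edge strengths.
import numpy as np

def windowed_strengths(expr_u, expr_v, width=4):
    """Correlation of two expression time series in sliding windows."""
    T = len(expr_u)
    scores = []
    for t in range(T - width + 1):
        a, b = expr_u[t:t + width], expr_v[t:t + width]
        scores.append(float(np.corrcoef(a, b)[0, 1]))
    return scores  # one strength estimate per time window

rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 24)
u = np.sin(t) + 0.1 * rng.standard_normal(24)
v = np.where(t < 2 * np.pi, np.sin(t), -np.sin(t)) + 0.1 * rng.standard_normal(24)
print(windowed_strengths(u, v))  # flips sign when the interaction flips
```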

Relevance:

60.00%

Publisher:

Abstract:

We study a State Dependent Attempt Rate (SDAR) approximation to model M queues (one queue per node) served by the Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) protocol as standardized in the IEEE 802.11 Distributed Coordination Function (DCF). The approximation is that, when n of the M queues are non-empty, the (transmission) attempt probability of each of the n non-empty nodes is given by the long-term (transmission) attempt probability of n saturated nodes. With the arrival of packets into the M queues according to independent Poisson processes, the SDAR approximation reduces a single cell with non-saturated nodes to a Markovian coupled queueing system. We provide a sufficient condition under which the joint queue length Markov chain is positive recurrent. For the symmetric case of equal arrival rates and finite and equal buffers, we develop an iterative method which leads to accurate predictions for important performance measures such as collision probability, throughput and mean packet delay. We replace the MAC layer with the SDAR model of contention by modifying the NS-2 source code pertaining to the MAC layer, keeping all other layers unchanged. By this model-based simulation technique at the MAC layer, we achieve speed-ups (w.r.t. MAC layer operations) up to 5.4. Through extensive model-based simulations and numerical results, we show that the SDAR model is an accurate model for the DCF MAC protocol in single cells. (C) 2012 Elsevier B.V. All rights reserved.
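As a hedged illustration of the SDAR idea (not the paper's analytical model or its NS-2 implementation), the slotted simulation below lets each backlogged node attempt transmission with a probability beta(n) that depends on the number n of non-empty queues. The beta() values here are made-up placeholders; in the paper they come from the saturated-node analysis of the IEEE 802.11 DCF:

```python
# Minimal slotted simulation of the state-dependent attempt rate idea.
# beta(n) is a placeholder; the paper derives it from saturated-node DCF
# analysis. Arrivals are Bernoulli per slot, approximating Poisson input.
import random

M, SLOTS, ARRIVAL_P = 10, 200_000, 0.01   # nodes, slots, arrivals/slot/node

def beta(n):
    return 0.08 / max(n, 1) ** 0.5        # made-up attempt probability

queues = [0] * M
attempt_slots = collided_slots = successes = 0
for _ in range(SLOTS):
    for i in range(M):                    # new packet arrivals
        if random.random() < ARRIVAL_P:
            queues[i] += 1
    backlogged = [i for i in range(M) if queues[i] > 0]
    n = len(backlogged)
    if n == 0:
        continue
    attempters = [i for i in backlogged if random.random() < beta(n)]
    if not attempters:
        continue
    attempt_slots += 1
    if len(attempters) == 1:              # a lone attempt succeeds
        queues[attempters[0]] -= 1
        successes += 1
    else:                                 # two or more attempts collide
        collided_slots += 1

print("P(collision | attempt slot):", collided_slots / max(attempt_slots, 1))
print("throughput (packets/slot):", successes / SLOTS)
```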

Relevance:

60.00%

Publisher:

Abstract:

Knowledge of protein-ligand interactions is essential to understanding several biological processes and important for applications ranging from understanding protein function to drug discovery and protein engineering. Here, we describe an algorithm for the comparison of three-dimensional ligand-binding sites in protein structures. A previously described algorithm, PocketMatch (version 1.0), is optimised, expanded, and MPI-enabled for parallel execution. PocketMatch (version 2.0) rapidly quantifies binding-site similarity based on structural descriptors such as residue nature and interatomic distances. Atomic-scale alignments may also be obtained from the amino acid residue pairings generated. The algorithm allows an end-user to compute database-wide, all-to-all comparisons in a matter of hours. A demonstration on a sample dataset, a performance analysis, and annotated source code are also included.
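A simplified sketch of the distance-based comparison idea: each site becomes a sorted list of pairwise atomic distances, and the score is the fraction of distances that can be matched one-to-one within a tolerance. The real PocketMatch descriptor also groups distances by residue chemical type; the untyped variant below is our illustration only:

```python
# Untyped, simplified sketch of binding-site comparison via sorted
# interatomic-distance lists; not the actual PocketMatch implementation.
from itertools import combinations
from math import dist

def site_signature(coords):
    """Sorted pairwise distances of a binding site's atom coordinates."""
    return sorted(dist(a, b) for a, b in combinations(coords, 2))

def pmatch_score(sig_a, sig_b, tol=0.5):
    i = j = matched = 0
    while i < len(sig_a) and j < len(sig_b):   # two-pointer greedy matching
        if abs(sig_a[i] - sig_b[j]) <= tol:
            matched += 1; i += 1; j += 1
        elif sig_a[i] < sig_b[j]:
            i += 1
        else:
            j += 1
    return 2 * matched / (len(sig_a) + len(sig_b))   # 1.0 = identical sets

site1 = [(0, 0, 0), (3.8, 0, 0), (0, 5.1, 0), (2.0, 2.0, 1.0)]
site2 = [(0, 0, 0), (3.9, 0, 0), (0, 5.0, 0), (2.1, 1.9, 1.1)]
print(pmatch_score(site_signature(site1), site_signature(site2)))
```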

Relevance:

60.00%

Publisher:

Abstract:

Atomization is the process of disintegration of a liquid jet into ligaments and subsequently into smaller droplets. A liquid jet injected from a circular orifice into a cross-flow of air undergoes atomization primarily due to the interaction of the two phases rather than an intrinsic break-up. Direct numerical simulation of this process resolving the finest droplets is computationally very expensive and impractical. In the present study, we resort to multiscale modelling to reduce the computational cost. The primary break-up of the liquid jet is simulated using Gerris, an open-source code which employs the Volume-of-Fluid (VOF) algorithm. The smallest droplets formed during primary atomization are modeled as Lagrangian particles. This one-way coupling approach is validated with the help of the simple test case of tracking a particle in a Taylor-Green vortex. The temporal evolution of the liquid jet forming the spray is captured, and the flattening of the cylindrical liquid column prior to break-up is observed. The size distribution of the resultant droplets is presented at different distances downstream of the injection location, and their spatial evolution is analyzed.
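The validation test mentioned above is easy to reproduce standalone. The sketch below (independent of Gerris, where the paper's validation was performed) advects a massless tracer through the steady two-dimensional Taylor-Green vortex with RK4 and checks conservation of the streamfunction psi = sin(x)*sin(y), which is constant along exact trajectories:

```python
# Tracer advection in the steady 2-D Taylor-Green vortex:
#   u = sin(x) cos(y),  v = -cos(x) sin(y)
# The streamfunction psi = sin(x) sin(y) is invariant along exact paths,
# so its drift measures the integration error of the particle tracker.
import math

def velocity(p):
    x, y = p
    return (math.sin(x) * math.cos(y), -math.cos(x) * math.sin(y))

def rk4_step(p, h):
    k1 = velocity(p)
    k2 = velocity((p[0] + 0.5*h*k1[0], p[1] + 0.5*h*k1[1]))
    k3 = velocity((p[0] + 0.5*h*k2[0], p[1] + 0.5*h*k2[1]))
    k4 = velocity((p[0] + h*k3[0], p[1] + h*k3[1]))
    return (p[0] + h*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])/6,
            p[1] + h*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])/6)

p, h = (1.0, 0.5), 0.01
psi0 = math.sin(p[0]) * math.sin(p[1])
for _ in range(10_000):
    p = rk4_step(p, h)
print("streamfunction drift:", abs(math.sin(p[0]) * math.sin(p[1]) - psi0))
```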

Relevance:

60.00%

Publisher:

Abstract:

This thesis proposes a strategy for the automatic estimation of hydrodynamic and transport parameters through the solution of inverse problems. Obtaining the parameters of a physical model is one of the main problems in its calibration, largely because of the difficulty of measuring those parameters in the field. In river and estuary modelling in particular, the roughness height and the turbulent diffusion coefficient are two of the parameters that are hardest to measure. This thesis presents an automated technique for estimating these parameters through an inverse problem applied to a model of the Macaé river estuary, in the north of Rio de Janeiro state. The study used the MOHID platform, developed at the Technical University of Lisbon, which has been widely applied in the simulation of water bodies. A sensitivity analysis of the model responses with respect to the parameters of interest was carried out, showing that salinity is sensitive to both parameters. The inverse problem was then solved with several optimisation methods by coupling the MOHID platform to optimisation codes implemented in Fortran. The coupling was done without altering the MOHID source code, so that the computational tool developed here can be used with any version of the platform and adapted to other simulators. The tests confirm the efficiency of the technique and indicate the best approaches for fast and accurate parameter estimation.
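In miniature, the calibration loop described above has the following structure. The thesis couples MOHID to Fortran optimisation codes without touching the MOHID source; in this sketch a toy stand-in simulator and SciPy's Nelder-Mead play those roles, and run_simulator() is a placeholder of our own, not MOHID's interface:

```python
# Black-box inverse-problem loop: minimise the misfit between simulated
# and observed salinity over (roughness, diffusion). The "simulator" is
# a synthetic placeholder standing in for a MOHID run.
import numpy as np
from scipy.optimize import minimize

observed = np.array([30.1, 28.4, 25.2, 22.8])   # salinity observations (psu)

def run_simulator(roughness, diffusion):
    """Toy stand-in: a salinity profile as a function of both parameters."""
    x = np.arange(4)
    return 31.0 - 2.0 * diffusion * x - 5.0 * roughness * np.sqrt(x + 1)

def misfit(params):
    sim = run_simulator(*params)
    return float(np.sum((sim - observed) ** 2))  # least-squares objective

result = minimize(misfit, x0=[0.1, 1.0], method="Nelder-Mead")
print("estimated roughness, diffusion:", result.x)
```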

Relevance:

60.00%

Publisher:

Abstract:

The goal of this project is to give the user greater network-level control in environments with virtual machines created on the OpenStack platform. Every time a virtual machine is started in OpenStack, its network parameters are assigned by default, which makes their management and control very difficult, both for research and for maintenance. If these parameters followed a common pattern for each project or user, it would be much easier to keep track of every network interface and to manage them more efficiently. To accomplish this, some changes must be introduced into the OpenStack source code, adapting it to meet our requirements.
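As a purely hypothetical illustration of the kind of per-project pattern the project aims for (this is not OpenStack's allocation code or API), network parameters could be derived deterministically from the project and instance identifiers:

```python
# Hypothetical deterministic allocation pattern: derive a VM's IP and
# MAC from its project id and instance index instead of accepting
# arbitrary defaults. Purely illustrative; not OpenStack code.
import ipaddress

def assign_network(project_id: int, instance_idx: int):
    # One /24 per project inside 10.0.0.0/16; hosts numbered from .10.
    subnet = ipaddress.ip_network(f"10.0.{project_id}.0/24")
    ip = subnet.network_address + 10 + instance_idx
    mac = f"fa:16:3e:{project_id:02x}:00:{instance_idx:02x}"
    return str(ip), mac

print(assign_network(project_id=7, instance_idx=3))
# ('10.0.7.13', 'fa:16:3e:07:00:03') -- the interface is now predictable
```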

Relevance:

60.00%

Publisher:

Abstract:

Recent technological advances have raised the level of qualification required of the epidemiology researcher, and the strategic role of education cannot be ignored. However, the Brazilian Association of Collective Health Graduate Studies (ABRASCO), in its latest master plan (2005-2009), points to the low value placed on the production of didactic-pedagogical material and to the lack of a policy for the development and use of free software in the teaching of epidemiology. It is therefore opportune to invest in a relational perspective, along the lines proposed by the constructivist school, since this theory has been recognised as the most appropriate for the development of computerised teaching materials. In this sense, it is opportune and fruitful to promote interactive courses and, within them, to develop related teaching material. Regarding the policy of development and use of free software in the teaching of epidemiology, particularly in applied statistics, R has emerged as a software of growing interest: not only does it avoid possible penalties for the use of unlicensed commercial software, but its open access to code and programming also makes it an excellent tool for producing teaching material in the form of hyperdocuments, an important foundation for the desired teacher-student interaction in the classroom. The main objective is to develop teaching material in R for courses on biostatistics applied to epidemiological analysis. Because certain statistical functions are not implemented in R, the programming of additional functions was also included. The courses used in the development of this material were based on the disciplines "An Introduction to the R Platform for Statistical Data Modelling" and "Measurement Instruments in Epidemiology I: Classical Measurement Theory (Analysis)", offered by the Department of Epidemiology, Institute of Social Medicine (IMS) of the State University of Rio de Janeiro (UERJ). The theoretical-pedagogical basis was defined from constructivist principles, in which the individual is an active, critical agent of his or her own knowledge, building meanings from his or her own experiences. Following this constructivist view, the problem-based teaching methodology was adopted, covering problems drawn from real situations and systematised in writing. The computational methods were based on the New Information and Communication Technologies (NTIC), which seek the consolidation of more flexible curricula adapted to students' different learning characteristics. The NTICs were implemented through hypertext, a structure of texts interconnected by nodes or links, forming a network of related information. During the design of the teaching material, changes were made to the basic interface of the R help system to ensure student-material interactivity. The teaching material itself is composed of blocks that encourage discussion and the exchange of information between teacher and students.

Relevance:

60.00%

Publisher:

Abstract:

We present a new software framework for the implementation of applications that use stencil computations on block-structured grids to solve partial differential equations. A key feature of the framework is the extensive use of automatic source code generation which is used to achieve high performance on a range of leading multi-core processors. Results are presented for a simple model stencil running on Intel and AMD CPUs as well as the NVIDIA GT200 GPU. The generality of the framework is demonstrated through the implementation of a complete application consisting of many different stencil computations, taken from the field of computational fluid dynamics. © 2010 IEEE.
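A minimal illustration of the code-generation technique (the framework itself targets multi-core CPUs and GPUs and is far more general; the emitted C kernel below is only a toy example): a stencil given as offset-weight pairs is turned into a C loop by string templating:

```python
# Toy stencil code generator: emit a C kernel from a declarative
# offset -> weight description. Illustrative of the technique only;
# not the framework's actual generator.
STENCIL = {(-1,): 0.25, (0,): 0.5, (1,): 0.25}   # 1-D weighted average

def generate_c_kernel(name, stencil):
    body = " + ".join(
        f"{w} * in[i + ({off[0]})]" for off, w in sorted(stencil.items())
    )
    return (
        f"void {name}(const double *in, double *out, int n) {{\n"
        f"    for (int i = 1; i < n - 1; ++i)\n"
        f"        out[i] = {body};\n"
        f"}}\n"
    )

print(generate_c_kernel("smooth3", STENCIL))
# A real generator would add tiling, vectorisation hints, or CUDA
# variants of the same loop nest from the same declarative input.
```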

Relevance:

60.00%

Publisher:

Abstract:

As hardware performance continues to improve, computers are being given ever more demanding tasks, and software, the bridge between human thinking and the underlying hardware, is growing in importance. At the same time, software systems keep growing in scale and logical complexity, so developers inevitably introduce defects and hidden risks during design and implementation. How to verify and guarantee software properties has therefore become a pressing research topic, and against this background static source code analysis, which complements existing testing methods, has begun to come to the fore. To advance the development of secure information systems, this thesis studies the two main application scenarios of static source code analysis in assuring software properties: automated source code auditing during the development of high-assurance secure operating systems, and remote attestation of software properties when establishing trust between platforms in distributed information systems. The former helps push existing secure operating systems to higher assurance levels in depth, while the latter lays a foundation for extending existing single-platform information security results to distributed architectures in breadth. Specifically, taking consistency checking against programming interface specifications and static-analysis-based remote attestation of software properties as entry points, the thesis explores methods and applications of static source code analysis for verifying and assuring software properties. The main contributions are as follows. First, an alias analysis method based on value equivalence classes is presented. The method maintains a space of value equivalence classes according to the relevant value-passing operations, so that equality relations between variable symbols can be derived on demand during interface-specification consistency checking; it supports context-sensitive, path-sensitive global analysis and copes effectively with the large number of variable symbols derived from structures, pointers and other constructs in C code. Second, to address the limited scalability of most existing static analysis tools, a performance optimisation scheme that integrates well with the alias analysis is given for interface-specification consistency checking. The scheme improves analysis efficiency by pruning execution branches irrelevant to the analysis and by introducing a caching mechanism, while keeping the loss of precision as small as possible. Third, we designed and implemented ABAZER (A Bug AnalyZER), a static analysis tool for C code. Given interface usage rules described by the user as finite automata, the tool performs global analysis of software at the scale of operating system kernels and reports code that may violate the rules. Using ABAZER we examined the use of the locking mechanism in the FreeBSD kernel and of the GNU Libiberty library in GCC 4.x, and found several real defects. Fourth, to remedy the lack of flexibility and practicality in existing trusted-computing-based remote attestation schemes that rely on integrity measurements, an extended scheme is presented. By introducing virtual machine technology, it collects evidence during the software build process, applies static analysis to determine the dependencies between software modules, and identifies the modules relevant to attestation, thereby effectively bounding the size of the trust list required to attest user-customised software. This allows the scheme to cope with the many heterogeneous platforms and diverse security requirements of today's networks, and it also supports replacing and updating the trusted computing base on which it relies. Fifth, exploiting the characteristics of the Flask architecture, a remote attestation scheme is given that can verify the correctness of a mandatory access control implementation while preserving software flexibility, so that users can still customise the software to some extent. The scheme relies on static source code analysis to delimit the modules that need no integrity-based attestation; while further reducing the size of the trust list, it uses code rewriting to automatically insert monitoring code into these modules to constrain the software's dynamic behaviour, thereby guaranteeing the correctness of the mandatory access control implementation. This scheme gives a first glimpse of the broad prospects of static source code analysis in remote attestation.
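A toy illustration of checking an interface-usage rule expressed as a finite automaton, in the spirit of the approach described above (ABAZER itself performs context- and path-sensitive global analysis of C source; in this sketch the program has already been abstracted to a single path's event trace):

```python
# Finite-automaton check of a lock discipline over one abstract path.
# The automaton and event names are illustrative, not ABAZER's format.
ERROR = "error"
TRANSITIONS = {
    ("unlocked", "lock"):   "locked",
    ("locked",   "unlock"): "unlocked",
    ("locked",   "lock"):   ERROR,   # double acquire
    ("unlocked", "unlock"): ERROR,   # release without hold
}

def check_path(events):
    state = "unlocked"
    for i, ev in enumerate(events):
        # Events without a transition (e.g. "read") leave the state alone.
        state = TRANSITIONS.get((state, ev), state)
        if state == ERROR:
            return f"violation at event {i}: {ev}"
    return "ok" if state == "unlocked" else "lock held at path exit"

print(check_path(["lock", "read", "unlock"]))   # ok
print(check_path(["lock", "lock"]))             # violation at event 1: lock
```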