959 results for Computer software maintenance
Abstract:
When different strains or breeds of a particular species are available, the best choice is seldom immediately obvious to producers. Scientists are also interested in the relative performance of different strains because it provides a basis for recommendations to producers and often stimulates work aimed at unraveling the underlying biological mechanisms involved in the expression of such differences. Hence, strain or breed comparisons of some sort are frequently conducted. This manual provides general guidelines for the design of strain comparison trials in aquaculture species. Example analyses are provided using SAS and SPSS. The manual is intended to serve a wide range of readers from developing countries with limited access to information. Users are, however, expected to have a basic knowledge of quantitative genetics, experience in statistical methods and data analysis, and familiarity with computer software. The manual focuses mainly on the practical aspects of design, data analysis, and interpretation of results.
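
The manual's worked examples use SAS and SPSS; for readers working in other environments, the same kind of strain-comparison test can be sketched in Python. This is a minimal sketch, assuming a hypothetical dataset with columns strain and harvest_weight; it is not taken from the manual:

    # One-way ANOVA comparing harvest weight across three hypothetical strains.
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols

    data = pd.DataFrame({
        "strain": ["A"] * 4 + ["B"] * 4 + ["C"] * 4,
        "harvest_weight": [412, 398, 425, 405, 441, 452, 437, 446,
                           390, 401, 385, 395],
    })

    # Fit a fixed-effects model and test for differences among strains.
    model = ols("harvest_weight ~ C(strain)", data=data).fit()
    print(sm.stats.anova_lm(model, typ=2))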
Abstract:
Refactoring is one of the keys to the continuous evolution of software systems, and it is also a complex and difficult activity. Traditional approaches to locating code for refactoring rely on developers' observation and subjective judgment, which is time-consuming and labor-intensive, especially when a large amount of code needs refactoring. This paper therefore proposes an automated method for locating refactoring candidates. The method extracts code feature information using object-oriented software metrics, checks the feature data with correlation tests, compresses and interprets the features with principal component analysis, and groups similar code segments with cluster analysis, so that refactoring candidates are located quickly and accurately. A simple example shows that the method is straightforward, effective, and superior to the traditional approach.
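
A minimal sketch of the pipeline the abstract describes (metrics, correlation screening, principal component analysis, clustering), assuming a hypothetical metrics table with one row per class; the paper's actual metric set and parameters are not given in the abstract:

    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    # Hypothetical object-oriented metric vectors (e.g. WMC, CBO, LCOM),
    # one row per class.
    metrics = np.array([
        [12, 4, 0.80], [11, 5, 0.70], [30, 15, 0.90],
        [29, 14, 0.95], [13, 4, 0.75], [31, 16, 0.92],
    ])

    X = StandardScaler().fit_transform(metrics)
    print(np.corrcoef(X, rowvar=False).round(2))   # correlation screening

    X2 = PCA(n_components=2).fit_transform(X)      # compress the features
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X2)
    print(labels)  # classes sharing a label are candidate refactoring groups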
Abstract:
The decision-support system for forecasting the water requirements of dryland crops is an intelligent computer software system built with artificial-intelligence techniques. It integrates the existing agronomic knowledge, models, and experience of the arid northwest of China on the basis of the Penman formula, and it is a subsystem of the water-saving agriculture expert system for northwest China. In production practice it can provide decision consulting on irrigation schemes for winter wheat and summer maize grown in the Guanzhong region of Shaanxi.
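
The abstract cites the Penman formula without reproducing it. For reference, the classic Penman combination equation, on which such evapotranspiration estimates commonly build (the system's exact variant is not stated), has the form:

    E = \frac{\Delta R_n + \gamma E_a}{\Delta + \gamma}

where \Delta is the slope of the saturation vapour pressure curve, R_n the net radiation expressed as an equivalent evaporation, \gamma the psychrometric constant, and E_a an aerodynamic term driven by wind speed and vapour pressure deficit.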
Abstract:
Seismic While Drilling (SWD) is a new wellbore seismic technique. It uses the vibrations produced by the drill-bit while drilling as a downhole seismic energy source. The continuous signals generated by the drill-bit are recorded by a pilot sensor attached to the top of the drill-string, while seismic receivers positioned at or near the earth's surface record both the direct arrivals and the reflections from geologic formations. The pilot signal is cross-correlated with the receiver signals to compute the travel-times of the arrivals (direct and reflected) and to attenuate incoherent noise. No downhole instrumentation is required to obtain the data, and the recording does not interfere with the drilling process. These characteristics offer a method by which borehole seismic data can be acquired, processed, and interpreted while drilling. As a measurement-while-drilling technique, SWD provides real-time seismic data for use at the well site. This can aid the engineer or driller by indicating the position of the drill-bit and by giving a look at reflecting horizons not yet encountered by the bit. Furthermore, the ease with which surface receivers can be deployed makes multi-offset VSP economically feasible. This paper first studies theoretically the drill-bit wavefield and the mode of interaction between the drill-bit and the formation below it; modern signal-processing techniques are applied to the seismic data, and the seismic body-wave radiation pattern of a working roller-cone drill-bit is characterized by theoretical modeling. A systematic analysis of the drill-bit wave is then presented: the time-distance equation of the traveling seismic wave is established, the SWD process is simulated with computer software, and adaptive modeling of SWD is performed. To help spread the technique, trial SWD modeling was carried out during drilling; the paper sketches the procedure of the trial, the instruments involved and their functions, and the results obtained. Subsurface conditions ahead of the drill-bit can be predicted: the drill-string velocity is obtained by autocorrelation of the pilot signal, the drill-string multiples in the pilot signal are removed by reference deconvolution, and the cross-correlation process enhances the signal-to-noise ratio and helps distinguish lithologies. Finally, SWD provides real-time seismic data at the well site for well-trajectory control and, in exploratory wells, for locating and preserving reservoirs. The interval velocity is computed from the travel-times, and the results of the interval velocity determination reflect the pore pressure in the subsurface units ahead of the drill-bit; the presence of fractures in the subsurface formation is detected from shear waves.
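
A minimal sketch of the cross-correlation step described above, using synthetic signals (the pilot trace is a random stand-in for the drill-bit signature, and the receiver trace is a delayed, noisy copy of it); the lag of the correlation peak estimates the arrival travel-time:

    import numpy as np
    from scipy.signal import correlate, correlation_lags

    fs = 1000.0                          # sample rate in Hz (assumed)
    rng = np.random.default_rng(0)
    pilot = rng.standard_normal(4096)    # continuous drill-bit signature

    delay = 250                          # true travel-time in samples (0.25 s)
    receiver = np.zeros_like(pilot)
    receiver[delay:] = 0.5 * pilot[:-delay]
    receiver += 0.2 * rng.standard_normal(pilot.size)   # incoherent noise

    # Cross-correlate pilot and receiver; the peak lag is the travel-time.
    xc = correlate(receiver, pilot, mode="full")
    lags = correlation_lags(receiver.size, pilot.size, mode="full")
    print(lags[np.argmax(xc)] / fs)      # ~0.25 s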
Abstract:
Study of the 3D visualization technology of engineering geology and its application to engineering is a cross-disciplinary subject that spans geosciences, computer science, software, and information technology. As an important part of the secondary theme of the National Basic Research Program of China (973 Program) entitled Study of Multi-Scale Structure and Occurrence Environment of Complicated Geological Engineering Mass (No. 2002CB412701), the dissertation studies key problems of 3D geological modeling, the integrated application of multi-format geological data, effective modeling methods for complex approximately layered geological masses, and applications of 3D virtual-reality information management technology. The main research findings are listed below. An integrated application method for multi-format geological data is proposed, which solves the integrated use of drill holes, engineering geology plan drawings, sectional drawings and cutting drawings, as well as exploratory trench sketches; its application can provide as much fundamental data as possible for 3D geological modeling. A 3D surface construction method combining Laplace interpolation points with original points is proposed, eliminating the deformation of the 3D model and the crossing error between the upper and lower surfaces of the model that result from lack of data when constructing a laminated stratum. A 3D modeling method for approximately layered geological masses is proposed, which solves the problems that general modeling methods based on sections, or on points and faces, encounter when constructing terrain and concordant strata. The 3D geological model of the VII dam site of the Xiangjiaba hydropower station has been constructed, and the applications of the 3D geological model to the automatic plotting of sectional drawings and to the conversion of numerical analysis models are discussed. A 3D virtual-reality information integration platform is developed, whose most important characteristic is that it is a software platform providing both 3D virtual-reality fly-through and multi-format data management; the platform can therefore load different 3D models to satisfy different engineering demands. The relics of the Aigong Cave of the Longyou Stone Caves are recovered, and the reinforcement plans of caves 1# and 2# in Phoenix Hill are also expressed; this intuitive expression provided decision makers and designers with a very good working environment. The basic framework and specific functions of a 3D geological information system are proposed. The main research findings of the dissertation have been successfully applied to several important engineering projects, such as the Xiangjiaba hydropower station, a military airport, and the Longyou Stone Caves.
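
The abstract names Laplace interpolation but not the algorithm. A generic sketch of Laplace interpolation on a gridded surface, in which unknown elevations relax to the average of their neighbours while measured points stay fixed, might look as follows; the grid size and borehole values are hypothetical:

    import numpy as np

    z = np.zeros((20, 20))
    known = np.zeros_like(z, dtype=bool)
    # A few hypothetical borehole elevations pinned on the grid.
    for (i, j, v) in [(2, 3, 10.0), (15, 4, 12.5), (8, 16, 9.0), (18, 18, 11.0)]:
        z[i, j] = v
        known[i, j] = True

    # Jacobi relaxation: each free node moves toward the mean of its four
    # neighbours (periodic boundaries here, for brevity).
    for _ in range(2000):
        avg = 0.25 * (np.roll(z, 1, 0) + np.roll(z, -1, 0) +
                      np.roll(z, 1, 1) + np.roll(z, -1, 1))
        z = np.where(known, z, avg)

    print(z.round(1))   # smooth surface honoring the measured points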
Abstract:
Most beef cattle producers do not adequately control the income and expenses of their activity. Since the profitability of beef cattle production has been reduced by the fall in product prices and the rise in production costs, this control has become crucial for those who want to stay in business. To meet this demand, Embrapa Gado de Corte developed Controlpec 1.0 in an Excel environment, a simple tool that producers find easy to use. Based on the farm's financial transactions, the tool generates reports that consolidate the expenses, income, and economic margins of the activity, according to a chart of accounts defined by the user. Results are presented for each month of the year and for the year as a whole.
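
Controlpec itself is an Excel workbook, but the consolidation it performs can be sketched generically. A minimal sketch, assuming a hypothetical transaction ledger with date, account, and amount columns:

    import pandas as pd

    # Hypothetical farm ledger; negative amounts are expenses.
    ledger = pd.DataFrame({
        "date": pd.to_datetime(["2023-01-10", "2023-01-22", "2023-02-05"]),
        "account": ["feed", "cattle sales", "veterinary"],
        "amount": [-1500.0, 9800.0, -320.0],
    })

    # Monthly totals per account, plus the monthly economic margin.
    report = (ledger
              .assign(month=ledger["date"].dt.to_period("M"))
              .pivot_table(index="account", columns="month",
                           values="amount", aggfunc="sum", fill_value=0))
    report.loc["margin"] = report.sum()
    print(report)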
Abstract:
Timing data is infrequently reported in the aphasiological literature, and time taken is only a minor factor, where it is considered at all, in existing aphasia assessments. This is not surprising, because reaction times are difficult to obtain manually, but it is a pity, because speed data should be indispensable in assessing the severity of language processing disorders and in evaluating the effects of treatment. This paper argues that reporting accuracy data without discussing speed of performance gives an incomplete and potentially misleading picture of any cognitive function. Moreover, in deciding how to treat, when to continue treatment, and when to cease therapy, clinicians should have regard to both parameters: speed and accuracy of performance. Crerar, Ellis and Dean (1996) reported a study in which the written sentence comprehension of 14 long-term agrammatic subjects was assessed and treated using a computer-based microworld. Some statistically significant and durable treatment effects were obtained after a short amount of focused therapy. Only accuracy data were reported in that (already long) paper, and interestingly, although it has been a widely read study, neither referees nor subsequent readers seemed to miss "the other side of the coin": how these participants compared with controls for their speed of processing, and what effect treatment had on speed. This paper considers both aspects of the data and presents a tentative way of combining treatment effects on both accuracy and speed of performance in a single indicator. Looking at rehabilitation this way gives us a rather different perspective on which individuals benefited most from the intervention. It also demonstrates that while some subjects are capable of utilising metalinguistic skills to achieve normal accuracy scores even many years post-stroke, there is little prospect of reducing the time taken to within the normal range. Without considering speed of processing, the extent of this residual functional impairment can be overlooked.
Abstract:
Security policies are increasingly being implemented by organisations. Policies are mapped to device configurations to enforce them, a task typically performed manually by network administrators, and the development and management of these enforcement policies is difficult and error-prone. This thesis describes the development and evaluation of an off-line firewall policy parser and validation tool. This gives the system administrator the textual interface and the vendor-specific low-level languages they trust and are familiar with, supported by an off-line compiler tool. The tool was created using the Microsoft C#.NET language and the Microsoft Visual Studio Integrated Development Environment (IDE). This provided an object environment for creating a flexible and extensible system, as well as simple Web and Windows prototyping facilities for building GUI front-end applications for testing and evaluation. A CLI was provided for more experienced users, but the tool was also designed to be easily integrated into GUI-based applications for non-expert users. The evaluation of the system was performed with a custom-built GUI application, which can create test firewall rule sets containing synthetic rules to supply a variety of experimental conditions, and record various performance metrics. The validation tool was created with a pragmatic outlook on the needs of the network administrator. Modularity of the design was important because of the fast-changing nature of the network device languages being processed. An object-oriented approach was taken for maximum changeability and extensibility, and a flexible tool was developed to serve the needs of different types of users: system administrators want low-level, CLI-based tools that they can trust and use easily from scripting languages, while inexperienced users may prefer a more abstract, high-level GUI or wizard with an easier learning curve. Built around these ideas, the tool was implemented and proved to be a usable and complementary addition to the many network policy-based systems currently available. The tool has a flexible design and comprehensive functionality, as opposed to some other tools that work across multiple vendor languages but do not implement a deep range of options for any of them. It complements existing systems, such as policy compliance tools and abstract policy analysis systems. Its validation algorithms were evaluated for both completeness and performance, and the tool was found to correctly process large firewall policies in just a few seconds. A framework for a policy-based management system, with which the tool would integrate, is also proposed. This is based around a vendor-independent XML-based repository of device configurations, which could be used to bring together existing policy management and analysis systems.
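
The abstract does not enumerate the validation algorithms. As a generic illustration of one classic check such tools perform, the sketch below flags rules that are fully shadowed by an earlier rule with a different action; the simplified rule model is hypothetical, not the thesis's internal representation:

    from dataclasses import dataclass
    from ipaddress import ip_network

    @dataclass
    class Rule:
        src: str      # source CIDR
        dst: str      # destination CIDR
        action: str   # "permit" or "deny"

    def covers(a: Rule, b: Rule) -> bool:
        """True if rule a matches every packet rule b matches."""
        return (ip_network(b.src).subnet_of(ip_network(a.src)) and
                ip_network(b.dst).subnet_of(ip_network(a.dst)))

    def shadowed(policy: list[Rule]) -> list[int]:
        """Indices of rules never reached with their intended effect."""
        return [j for j, r in enumerate(policy)
                if any(covers(p, r) and p.action != r.action
                       for p in policy[:j])]

    policy = [Rule("10.0.0.0/8", "0.0.0.0/0", "deny"),
              Rule("10.1.0.0/16", "0.0.0.0/0", "permit")]   # shadowed
    print(shadowed(policy))   # [1]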
Abstract:
The appropriation of digital artefacts involves their use, which has changed, evolved, or developed beyond their original design. Thus, to understand appropriation, we must understand use. We define use as the active, purposive exploitation of the affordances offered by the technology, and from this perspective appropriation emerges as a natural consequence of this enactive use. Enaction tells us that perception is an active process: it is something we do, not something that happens to us. On this reading, use becomes the active exploitation of the affordances offered to us by the artefact, system, or service. In turn, we define appropriation as engagement with these actively disclosed affordances, disclosed as a consequence not merely of seeing but of seeing as. We present a small case study that highlights instances of perception as an actively engaged skill. We conclude that appropriation is a simple consequence of enactive perception.
Abstract:
Web threats are becoming a major issue for both governments and companies. Overall, web threats increased by as much as 600% during the last year (WebSense, 2013). This is a significant issue, since many major businesses provide these services. Denial of Service (DoS) attacks are among the most significant web threats, and their general aim is to exhaust the resources of the target machine (Mirkovic & Reiher, 2004). Distributed Denial of Service (DDoS) attacks are typically executed from many sources and can result in large traffic flows; during the last year, 11% of DDoS attacks exceeded 60 Gbps (Prolexic, 2013a). DDoS attacks are usually performed from large botnets, which are networks of remotely controlled computers. There is an increasing effort by governments and companies to shut down botnets (Dittrich, 2012), which has led attackers to look for alternative DDoS attack methods. One technique to which attackers are returning is the DDoS amplification attack. Amplification attacks use intermediate devices, called amplifiers, to amplify the attacker's traffic. This work outlines an evaluation tool and evaluates an amplification attack based on the Trivial File Transfer Protocol (TFTP). The attack could have an amplification factor of approximately 60, which rates highly alongside other researched amplification attacks, and it could be a substantial issue globally, given that the protocol is used by approximately 599,600 publicly open TFTP servers. Mitigation methods for this threat have also been considered and a variety of countermeasures are proposed. The effects of the attack on both amplifier and target were analysed using the proposed metrics. While it has been reported that abusing TFTP in this way would be possible (Schultz, 2013), this paper provides a complete methodology for the setup of the attack and its verification.
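
The paper's measurement setup is not reproduced here, but an amplification factor of the kind it reports is conventionally computed as the ratio of the traffic the amplifier sends toward the victim to the traffic the attacker sends to the amplifier. A minimal sketch, with hypothetical packet sizes rather than the paper's data:

    def bandwidth_amplification_factor(request_bytes: int,
                                       response_bytes: int) -> float:
        """Bytes reflected toward the victim per byte sent by the attacker."""
        return response_bytes / request_bytes

    # E.g. a small spoofed read request eliciting repeated large data blocks
    # (retransmissions occur because the victim never acknowledges them):
    print(bandwidth_amplification_factor(60, 3600))   # ~60x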
Abstract:
It is anticipated that constrained devices in the Internet of Things (IoT) will often operate in groups to achieve collective monitoring or management tasks. For sensitive and mission-critical sensing tasks, securing multicast applications is therefore highly desirable. Several group key management protocols have been introduced to secure group communications. However, most of the proposed solutions are not adapted to the IoT and its strong processing, storage, and energy constraints. In this context, we introduce a novel decentralized and batch-based group key management protocol to secure multicast communications. Our protocol is simple; it reduces the rekeying overhead triggered by membership changes in dynamic and mobile groups, and it guarantees both backward and forward secrecy. To assess the protocol, we conduct a detailed analysis of its communication and storage costs, validated through simulation to highlight the energy gains. The results show that our protocol outperforms its peers with respect to keying overhead and the mobility of members.
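
The abstract does not detail the rekeying mechanism, so the sketch below is a generic illustration, not the paper's protocol: a naive flat-group scheme in which every membership change forces a fresh group key to be delivered to each remaining member. This preserves backward and forward secrecy at O(n) messages per change, which is precisely the overhead that batch-based schemes aim to reduce:

    import os

    class NaiveGroup:
        def __init__(self):
            self.members = {}              # member id -> pairwise key
            self.group_key = os.urandom(16)

        def rekey(self):
            self.group_key = os.urandom(16)
            # One unicast per member, each a stand-in for the new group key
            # encrypted under that member's pairwise key: the O(n) cost.
            return {m: (self.group_key, k) for m, k in self.members.items()}

        def join(self, member_id):
            self.members[member_id] = os.urandom(16)
            return self.rekey()   # backward secrecy: newcomer can't read old traffic

        def leave(self, member_id):
            self.members.pop(member_id, None)
            return self.rekey()   # forward secrecy: leaver can't read new traffic

    g = NaiveGroup()
    g.join("sensor-1"); g.join("sensor-2")
    print(len(g.leave("sensor-1")))        # 1 rekey message to the remaining member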