963 results for atk-ohjelmat - LSP - Library software package
Abstract:
Established in 1986, ASWEC is the premier technical meeting for the Australian software engineering community, and it attracts a significant number of international participants. The major goal of the conference is to provide a forum for exchanging experience and new research results in software engineering. To increase industry participation at ASWEC, we organized two separate paper tracks, which we have called Research Papers and Industry Experience Reports. These tracks had separate deadlines, separate program committees, separate review procedures, and separate proceedings. The Research Papers appear in these proceedings; the Industry Experience Reports will appear on a CD-ROM distributed at the conference. The Research Papers track for ASWEC 2005 includes submissions from Australia and across the world. This year we received 79 submissions from 13 countries: 48 from Australia, 7 from New Zealand, 11 from Asia, 9 from Europe, and 2 each from North and South America. All papers were fully refereed by three Program Committee members. We accepted 34 papers to be presented at the conference. We are grateful to all authors who contributed to ASWEC.
Abstract:
The following topics are dealt with: Requirements engineering; components; design; formal specification analysis; education; model checking; human computer interaction; software design and architecture; formal methods and components; software maintenance; software process; formal methods and design; server-based applications; review and testing; measurement; documentation; management and knowledge-based approaches.
Abstract:
Expert systems, and artificial intelligence more generally, can provide a useful means for representing decision-making processes. By linking expert systems software to simulation software an effective means of including these decision-making processes in a simulation model can be achieved. This paper demonstrates how a commercial-off-the-shelf simulation package (Witness) can be linked to an expert systems package (XpertRule) through a Visual Basic interface. The methodology adopted could be used for models, and possibly software, other than those presented here.
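The coupling described above can be sketched generically. The sketch below is Python rather than the paper's Visual Basic interface, and the rule base and event loop are toy stand-ins for XpertRule and Witness respectively, not their actual APIs:

```python
# Conceptual sketch (not the paper's VB interface): a simulation loop that
# delegates decisions to a separate rule-based "expert system" module.
# All names here (decide_dispatch, run_simulation, the rules) are invented.

def decide_dispatch(queue_length, machine_busy):
    """Toy rule base standing in for the XpertRule knowledge base."""
    if machine_busy:
        return "wait"
    if queue_length > 5:
        return "expedite"
    return "process_next"

def run_simulation(arrivals):
    """Minimal event loop standing in for the Witness model."""
    queue, log = [], []
    busy = False
    for job in arrivals:
        queue.append(job)
        # the simulation calls out to the rule base at each decision point
        action = decide_dispatch(len(queue), busy)
        log.append((job, action))
        if action in ("process_next", "expedite"):
            queue.pop(0)
    return log

print(run_simulation(["j1", "j2", "j3"]))
```

The point of the separation is the one the paper makes: the decision logic lives outside the simulation model, so either side can be replaced independently.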
Abstract:
This paper provides an account of the way Enterprise Resource Planning (ERP) systems change over time. These changes are conceptualized as a biographical accumulation that gives the specific ERP technology its present character, attributes and historicity. The paper presents empirics from the implementation of an ERP package within an Australasian organization. Changes to the ERP take place as a result of imperatives which arise during the implementation. Our research and evidence then extends to a different time and place where the new release of the ERP software was being 'sold' to client firms in the UK. We theorize our research through a lens based on ideas from actor network theory (ANT) and the concept of biography. The paper seeks to contribute an additional theorization for ANT studies that places the focus on the technological object and frees it from the ties of the implementation setting. The research illustrates the opportunistic and contested fabrication of a technological object and emphasizes the stability as well as the fluidity of its technologic. Copyright © 2007 SAGE.
Abstract:
The topic of this thesis is the development of knowledge based statistical software. The shortcomings of conventional statistical packages are discussed to illustrate the need to develop software which is able to exhibit a greater degree of statistical expertise, thereby reducing the misuse of statistical methods by those not well versed in the art of statistical analysis. Some of the issues involved in the development of knowledge based software are presented and a review is given of some of the systems that have been developed so far. The majority of these have moved away from conventional architectures by adopting what can be termed an expert systems approach. The thesis then proposes an approach which is based upon the concept of semantic modelling. By representing some of the semantic meaning of data, it is conceived that a system could examine a request to apply a statistical technique and check if the use of the chosen technique was semantically sound, i.e. will the results obtained be meaningful. Current systems, in contrast, can only perform what can be considered as syntactic checks. The prototype system that has been implemented to explore the feasibility of such an approach is presented, the system has been designed as an enhanced variant of a conventional style statistical package. This involved developing a semantic data model to represent some of the statistically relevant knowledge about data and identifying sets of requirements that should be met for the application of the statistical techniques to be valid. Those areas of statistics covered in the prototype are measures of association and tests of location.
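The semantic-soundness check the thesis proposes can be illustrated with a small sketch. The requirements table and scale hierarchy below are simplified inventions, not the prototype's actual knowledge representation:

```python
# Illustrative sketch of a "semantic" validity check in the spirit of the
# thesis: before a technique runs, verify that the data's measurement level
# supports it -- a check a purely syntactic statistical package skips.
# The techniques listed and their minimum scales are simplified assumptions.

SCALE_ORDER = ["nominal", "ordinal", "interval", "ratio"]

REQUIREMENTS = {
    "pearson_correlation": {"min_scale": "interval"},
    "chi_square_association": {"min_scale": "nominal"},
}

def semantically_valid(technique, variable_scales):
    """Return True only if every variable meets the technique's
    minimum measurement scale."""
    needed = SCALE_ORDER.index(REQUIREMENTS[technique]["min_scale"])
    return all(SCALE_ORDER.index(s) >= needed for s in variable_scales)

print(semantically_valid("pearson_correlation", ["interval", "ratio"]))  # True
print(semantically_valid("pearson_correlation", ["nominal", "ratio"]))   # False
```

A syntactic check would only verify that two numeric columns were supplied; the semantic check above also asks whether a correlation on them would be meaningful.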
Abstract:
DUE TO COPYRIGHT RESTRICTIONS ONLY AVAILABLE FOR CONSULTATION AT ASTON UNIVERSITY LIBRARY AND INFORMATION SERVICES WITH PRIOR ARRANGEMENT
Abstract:
The paper presents in brief the Bulgarian Digital Mathematical Library BulDML and the Czech Digital Mathematical Library DML-CZ. Both libraries use the open source software DSpace and both are partners in the European Digital Mathematics Library EuDML. We describe their content and metadata schemas, outline the system architecture, and review usage statistics.
Abstract:
In an audio cueing system, a teacher is presented with randomly spaced auditory signals via tape recorder or intercom. The teacher is instructed to praise a child who is on-task each time the cue is presented. In this study, a baseline was obtained on the teacher's praise rate and the children's on-task behaviour in a Grade 5 class of 37 students. Children were then divided into high, medium and low on-task groups. Following baseline, the teacher's praise rate and the children's on-task behaviour were observed under the following successively implemented conditions: (1) Audio Cueing 1: Audio cueing at a rate of 30 cues per hour was introduced into the classroom and remained in effect during subsequent conditions. A group of consistently low on-task children was delineated. (2) Audio Cueing Plus 'focus praise package': Instructions to direct two-thirds of the praise to children identified by the experimenter (consistently low on-task children), feedback and experimenter praise for meeting or surpassing the criterion distribution of praise ('focus praise package') were introduced. (3) Audio Cueing 2: The 'focus praise package' was removed. (4) Audio Cueing Plus 'increase praise package': Instructions to increase the rate of praise, feedback and experimenter praise for improved praise rates ('increase praise package') were introduced. The primary aims of the study were to determine the distribution of praise among high, medium and low on-task children when audio cueing was first introduced and to investigate the effect of the 'focus praise package' on the distribution of teacher praise. The teacher distributed her praise evenly among the high, medium and low on-task groups during audio cueing 1. The effect of the 'focus praise package' was to increase the percentage of praise received by the consistently low on-task children. Other findings tended to suggest that audio cueing increased the teacher's praise rate.
However, the teacher's praise rate unexpectedly decreased to a level considerably below the cued rate during audio cueing 2. The 'increase praise package' appeared to increase the teacher's praise rate above the audio cueing 2 level. The effects of an increased praise rate and two distributions of praise on on-task behaviour were considered. Significant increases in on-task behaviour were found in audio cueing 1 for the low on-task group, in the audio cueing plus 'focus praise package' condition for the entire class and the consistently low on-task group, and in audio cueing 2 for the medium on-task group. Except for the high on-task children, who did not change, the effects of the experimental manipulations on on-task behaviour were equivocal. However, there were some indications that directing 67% of the praise to the consistently low on-task children was more effective for increasing this group's on-task behaviour than distributing praise equally among on-task groups.
Abstract:
© 2016 Springer Science+Business Media New York. Researchers studying mammalian dentitions from functional and adaptive perspectives have increasingly moved towards dental topography measures that can be estimated from 3D surface scans and do not require identification of specific homologous landmarks. Here we present molaR, a new R package designed to assist researchers in calculating four commonly used topographic measures from surface scans of teeth: Dirichlet Normal Energy (DNE), Relief Index (RFI), Orientation Patch Count (OPC), and Orientation Patch Count Rotated (OPCR), enabling a unified application of these informative new metrics. In addition to providing topographic measuring tools, molaR has complementary plotting functions enabling highly customizable visualization of results. This article gives a detailed description of the DNE measure; walks researchers through installing, operating, and troubleshooting molaR and its functions; and gives an example of a simple comparison that measured teeth of the primates Alouatta and Pithecia in molaR and other available software packages. molaR is a free and open source software extension, which can be found at doi:10.13140/RG.2.1.3563.4961 (molaR v. 2.0) as well as in the Internet repository CRAN, which stores R packages.
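The relief index mentioned above can be illustrated with a small sketch. This is Python rather than R (molaR itself is an R package), it implements only one published formulation of RFI, ln(√A3D / √A2D), and the one-triangle "mesh" is invented purely for illustration:

```python
import numpy as np

# Hedged sketch of a relief index (RFI) computation on a tiny triangulated
# surface, under the formulation RFI = ln(sqrt(A_3D) / sqrt(A_2D)), where
# A_3D is the mesh's 3D surface area and A_2D the area of its projection
# onto the occlusal (x-y) plane. Not molaR's actual implementation.

def triangle_area(p, q, r):
    # half the magnitude of the cross product of two edge vectors
    return 0.5 * np.linalg.norm(np.cross(q - p, r - p))

def relief_index(vertices, faces):
    a3d = sum(triangle_area(*vertices[list(f)]) for f in faces)
    flat = vertices.copy()
    flat[:, 2] = 0.0                      # project onto the x-y plane
    a2d = sum(triangle_area(*flat[list(f)]) for f in faces)
    return np.log(np.sqrt(a3d) / np.sqrt(a2d))

# one tilted triangle as a stand-in for a scanned tooth surface
verts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 1.]])
faces = [(0, 1, 2)]
print(round(relief_index(verts, faces), 4))  # 0.1733
```

A flat surface projects onto itself (A_3D = A_2D), giving RFI = 0; more relief inflates the 3D area relative to the footprint and raises the index.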
Abstract:
Over the past few years, logging has evolved from simple printf statements to more complex and widely used logging libraries. Today logging information is used to support various development activities such as fixing bugs, analyzing the results of load tests, monitoring performance and transferring knowledge. Recent research has examined how to improve logging practices by informing developers what to log and where to log. Furthermore, the strong dependence on logging has led to the development of logging libraries that have reduced the intricacies of logging, which has resulted in an abundance of log information. Two recent challenges have emerged as modern software systems start to treat logging as a core aspect of their software. In particular, 1) infrastructural challenges have emerged due to the plethora of logging libraries available today and 2) processing challenges have emerged due to the large number of log processing tools that ingest logs and produce useful information from them. In this thesis, we explore these two challenges. We first explore the infrastructural challenges that arise due to the plethora of logging libraries available today. As systems evolve, their logging infrastructure has to evolve (commonly by migrating to new logging libraries). We explore logging library migrations within Apache Software Foundation (ASF) projects. We find that close to 14% of the projects within the ASF migrate their logging libraries at least once. For processing challenges, we explore the different factors which can affect the likelihood of a logging statement changing in the future in four open source systems, namely ActiveMQ, Camel, CloudStack and Liferay. Such changes are likely to negatively impact the log processing tools that must be updated to accommodate them. We find that 20%-45% of the logging statements within the four systems are changed at least once.
We construct random forest classifiers and Cox models to determine the likelihood of both just-introduced and long-lived logging statements changing in the future. We find that file ownership, developer experience, log density and SLOC are important factors in determining the stability of logging statements.
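As a rough illustration of the modelling step described above, a random-forest classifier can be trained on the factors the study found important. The feature values and labels below are invented; this is not the thesis's dataset, pipeline, or Cox analysis:

```python
# Hypothetical sketch: predicting whether a logging statement will later
# change, from the factors named above (file ownership, developer
# experience, log density, SLOC). Toy data, invented for illustration only.
from sklearn.ensemble import RandomForestClassifier

# columns: file_ownership, developer_experience, log_density, sloc
X = [
    [0.9, 120, 0.02,  450],
    [0.4,   8, 0.10, 2300],
    [0.8,  60, 0.03,  800],
    [0.2,   3, 0.15, 5100],
    [0.7,  45, 0.05, 1200],
    [0.3,  10, 0.12, 3900],
]
y = [0, 1, 0, 1, 0, 1]   # 1 = logging statement was later changed

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# which of the four factors the forest leaned on for this toy data
print(dict(zip(["ownership", "experience", "log_density", "sloc"],
               clf.feature_importances_.round(2))))
print(clf.predict([[0.85, 100, 0.02, 500]])[0])  # likely 0 (stable)
```

In the thesis the same idea is paired with Cox models, which additionally account for *when* a statement changes rather than just whether it does.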
Abstract:
The large upfront investments required for game development pose a severe barrier for the wider uptake of serious games in education and training. Also, there is a lack of well-established methods and tools that support game developers in preserving and enhancing the games' pedagogical effectiveness. The RAGE project, which is a Horizon 2020 funded research project on serious games, addresses these issues by making available reusable software components that aim to support the pedagogical qualities of serious games. In order to easily deploy and integrate these game components in a multitude of game engines, platforms and programming languages, RAGE has developed and validated a hybrid component-based software architecture that preserves component portability and interoperability. While a first set of software components is being developed, this paper presents selected examples to explain the overall system's concept and its practical benefits. First, the Emotion Detection component uses the learners' webcams to capture their emotional states from facial expressions. Second, the Performance Statistics component is an add-on for learning analytics data processing, which allows instructors to track and inspect learners' progress without having to worry about the required statistical computations. Third, a set of language processing components accommodates the analysis of learners' textual inputs, facilitating comprehension assessment and prediction. Fourth, the Shared Data Storage component provides a technical solution for data storage - e.g. for player data or game world data - across multiple software components. The presented components are exemplary for the anticipated RAGE library, which will include up to forty reusable software components for serious gaming, addressing diverse pedagogical dimensions.
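The portability idea behind such a component architecture can be sketched abstractly. Everything below (class names, bridge methods) is hypothetical; it merely illustrates the general pattern of isolating engine-specific calls behind a small interface, not the actual RAGE design:

```python
# Conceptual sketch of component portability: an engine-agnostic component
# talks to its host only through a narrow "bridge" interface that each game
# engine implements, so the same component logic is reusable across hosts.
from abc import ABC, abstractmethod

class Bridge(ABC):
    """What any host engine must provide to a component (invented API)."""
    @abstractmethod
    def log(self, message: str) -> None: ...
    @abstractmethod
    def store(self, key: str, value: str) -> None: ...

class PerformanceStatsComponent:
    """Toy analogue of a statistics add-on: engine-independent logic."""
    def __init__(self, bridge: Bridge):
        self.bridge = bridge
        self.scores = []

    def record(self, score: float) -> None:
        self.scores.append(score)
        self.bridge.store("last_score", str(score))

    def mean_score(self) -> float:
        return sum(self.scores) / len(self.scores)

class ConsoleBridge(Bridge):
    """A stand-in host; a real bridge would wrap a specific engine."""
    def __init__(self):
        self.storage = {}
    def log(self, message):
        print(message)
    def store(self, key, value):
        self.storage[key] = value

comp = PerformanceStatsComponent(ConsoleBridge())
comp.record(0.5)
comp.record(1.0)
print(comp.mean_score())  # 0.75
```

Porting the component to another engine then means writing one new bridge class, not touching the component itself.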
Abstract:
This paper discusses the advantages of database-backed websites and describes the model for a library website implemented at the University of Nottingham using open source software, PHP and MySQL. As websites continue to grow in size and complexity it becomes increasingly important to introduce automation to help manage them. It is suggested that a database-backed website offers many advantages over one built from static HTML pages. These include a consistency of style and content, the ability to present different views of the same data, devolved editing and enhanced security. The University of Nottingham Library Services website is described and issues surrounding its design, technological implementation and management are explored.
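As a conceptual analogue of the approach described above (the Nottingham site used PHP and MySQL; this sketch uses Python and SQLite, with invented table and column names):

```python
# Database-backed page generation in miniature: page content lives in a
# database and every page is rendered through one shared template, which
# is what gives the consistent style and devolved editing described above.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pages (slug TEXT, title TEXT, body TEXT)")
conn.execute(
    "INSERT INTO pages VALUES ('hours', 'Opening Hours', 'Mon-Fri 9-5')")

def render(slug):
    row = conn.execute(
        "SELECT title, body FROM pages WHERE slug = ?", (slug,)).fetchone()
    title, body = row
    # one shared template => consistent style across every page
    return f"<html><h1>{title}</h1><p>{body}</p></html>"

print(render("hours"))
```

Editing the database row updates the page everywhere it appears, and different templates can present different views of the same stored data.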
Abstract:
The organization, management and planning of an information unit comprise several stages and involve the processes and techniques of the librarian's field of research. In this study we aim to build a proposal for restructuring the library of the Centro de Estudos Teológicos das Assembléias de Deus na Paraíba - CETAD/PB. Specifically: to define an organization system for the collection that leads to user autonomy in the information search and retrieval process; to recommend library management software that meets the needs of the information unit; to understand the target audience, through a user-study instrument, in order to tailor the technological tools to be used; to organize a guide to support the restructuring process; and to propose measures to regulate the operation of the CETAD/PB library. The methodology uses a qualitative research approach, with descriptive and exploratory characteristics. It adopts field research to understand and detail the research setting, the Centro de Estudos Teológicos das Assembléias de Deus na Paraíba - CETAD/PB, as well as the research subjects, namely the institution's students. The data collection instrument was a questionnaire. To represent the data, it draws on the statistical techniques and resources of quantitative research. The data analysis reveals the profile of the library's users, confirms their dissatisfaction with the organization of the collection, and identifies which technological tools suit this profile, both for improving the processing and dissemination of information materials and for user services. It highlights the information professional as a manager of information units, with a role that goes beyond the traditional procedures and techniques of the profession. Keywords: Specialized Library. Library - Theology. Library Organization.
Abstract:
Standardization facilitates communication and allows information to be exchanged with any national or international institution. This goal is achieved through communication formats for automated information exchange such as CEPAL, MARC and FCC. The Escuela de Bibliotecología, Documentación e Información of the Universidad Nacional uses the MICROISIS software on a network for teaching. The databases designed there use the MARC format, and RCAA2 (the Spanish-language AACR2) for bibliographic description. The experience with the "I&D" database on rural development is presented, including the Field Definition Table, the worksheet, the display format and the Field Selection Table.
Abstract:
The document begins by describing the budget problems of information units and the high cost of commercial software specialized in library automation. It describes the origins of free software and its meaning. It identifies three levels of library automation: catalog automation, the creation of repositories, and full automation. It surveys the various free software applications for each of these levels and weighs the advantages and disadvantages of using these products. It concludes that an automation project is hard but full of satisfaction, emphasizing that there is no cost-free project: even if free software itself is free of charge, there are other costs related to implementation, training, and keeping the project moving forward.