959 results for Software compatibility.


Relevance:

100.00%

Publisher:

Abstract:

Texas Department of Transportation, Austin

Relevance:

60.00%

Publisher:

Abstract:

Includes bibliographical references.

Relevance:

60.00%

Publisher:

Abstract:

Includes bibliographical references.

Relevance:

60.00%

Publisher:

Abstract:

"Issued August 1980."

Relevance:

60.00%

Publisher:

Abstract:

"Performing organization: Oklahoma State University, College of Business Administration , Stillwater."

Relevance:

30.00%

Publisher:

Abstract:

Functional brain imaging techniques such as functional MRI (fMRI), which allow in vivo investigation of the human brain, have been increasingly employed to address the neurophysiological substrates of emotional processing. Despite the growing number of fMRI studies in the field, when taken separately these individual imaging studies yield contrasting findings and variable pictures, and are unable to definitively characterize the neural networks underlying each specific emotional condition. Differences between imaging packages, as well as in the statistical approaches used for image processing and analysis, probably play a detrimental role by increasing the heterogeneity of findings. In particular, it is unclear to what extent the observed neurofunctional response of the brain cortex during emotional processing depends on the fMRI package used in the analysis. In this pilot study, we performed a double analysis of an fMRI dataset using emotional faces. The Statistical Parametric Mapping (SPM) version 2.6 (Wellcome Department of Cognitive Neurology, London, UK) and XBAM 3.4 (Brain Imaging Analysis Unit, Institute of Psychiatry, King's College London, UK) programs, which use parametric and non-parametric analysis, respectively, were used to assess our results. Both packages revealed that processing of emotional faces was associated with increased activation in the brain's visual areas (occipital, fusiform and lingual gyri), the cerebellum, the parietal cortex, the cingulate cortex (anterior and posterior cingulate), and the dorsolateral and ventrolateral prefrontal cortex. However, a blood oxygenation level-dependent (BOLD) response in the temporal regions, insula and putamen was evident in the XBAM analysis but not in the SPM analysis. Overall, SPM and XBAM analyses revealed comparable whole-group brain responses. Further studies are needed to explore the between-group compatibility of the different imaging packages in other cognitive and emotional processing domains. (C) 2009 Elsevier Ltd. All rights reserved.
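To make the parametric-versus-non-parametric contrast in this abstract concrete, the sketch below is a hypothetical illustration on simulated per-subject effect sizes, not real fMRI data or either package's actual pipeline. It runs a one-sample t-test, as a parametric package like SPM would, and a sign-flip permutation test, in the spirit of XBAM's non-parametric inference, on the same values; near threshold the two p-values can disagree.

```python
import numpy as np
from scipy import stats

# Hypothetical sketch: simulated per-subject effect sizes for one voxel,
# not real fMRI data. We test "mean effect > 0" two ways.
rng = np.random.default_rng(0)
effects = rng.normal(loc=0.4, scale=1.0, size=12)  # 12 subjects

# Parametric route (as in SPM): one-sample t-test assumes normality.
t_stat, p_param = stats.ttest_1samp(effects, 0.0, alternative="greater")

# Non-parametric route (in the spirit of XBAM): a sign-flip permutation
# test builds the null distribution empirically, with no normality assumption.
n_perm = 10_000
signs = rng.choice([-1.0, 1.0], size=(n_perm, effects.size))
null_means = (signs * effects).mean(axis=1)
p_perm = (null_means >= effects.mean()).mean()

print(f"parametric p = {p_param:.4f}, permutation p = {p_perm:.4f}")
```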

Relevance:

30.00%

Publisher:

Abstract:

The CMS experiment at the LHC particle accelerator, under construction at CERN, the European Organization for Nuclear Research, is designed in particular for muon detection. This thesis presents the link system of the CMS experiment's RPC detector type, the hardware intended for testing it, and the software needed to test that hardware. The thesis examines the functionality of the programs and their mutual compatibility.

Relevance:

30.00%

Publisher:

Abstract:

Software faults are expensive and cause serious damage, particularly if discovered late or not at all. Some software faults tend to stay hidden. One goal of the thesis is to establish the status quo in the field of software fault elimination, since there are no recent surveys of the whole area. A basis for a structural framework is proposed for this unstructured field, with attention to compatibility between studies and to how studies can be found. Means of bug elimination are surveyed, including knowledge about bugs, defect prevention and prediction, analysis, testing, and fault tolerance. The most common research issues for each area are identified and discussed, along with issues that do not get enough attention. Recommendations are presented for software developers, researchers, and teachers. Only the main lines of research are covered, and the main emphasis is on technical aspects. The survey was done by performing searches in the IEEE, ACM, Elsevier, and Inspec databases. In addition, a systematic search was done over recent time intervals for a few well-known related journals. Some other journals, some conference proceedings, and a few books, reports, and Internet articles were investigated, too. The following problems were found, and solutions for them discussed. A common misunderstanding is that quality assurance consists of testing only, and many checks are done, and some methods applied, only in the late testing phase. Many types of static review are almost forgotten, even though they reveal faults that are hard to detect by other means. Other forgotten areas are knowledge of bugs, awareness of continuously repeated bugs, and lightweight means to increase reliability. Compatibility between studies is not always good, which also makes documents harder to understand. Some means, methods, and problems are considered method- or domain-specific when they are not. The field lacks cross-field research.

Relevance:

30.00%

Publisher:

Abstract:

The purpose of this study is to determine whether a Web CMS can be used to implement and host an online community. The study is divided into two parts. The theoretical part defines the term Web CMS and clarifies the relation between an online community and social software. It also defines the parameters that must be taken into account when choosing a Web CMS to host an online community. The practical part of the study analyzes three Web CMSs: Drupal, Liferay and Plone. All three were analyzed using the technical and social parameters identified in the theoretical part of the study. The primary objective is to investigate whether the selected Web CMS can be used to implement and host an online community. If hosting is possible, the secondary objective is to investigate whether the selected Web CMS has an effect on the online community.
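As an illustration of the kind of parameter-based comparison this study performs, the sketch below scores candidate Web CMSs against weighted technical and social criteria. All criteria names, weights, and scores are hypothetical placeholders, not the study's actual parameters or findings.

```python
# Hypothetical sketch of a weighted-criteria comparison; the criteria,
# weights, and scores below are illustrative, not the study's data.
WEIGHTS = {"extensibility": 3, "access_control": 2,   # technical parameters
           "user_profiles": 3, "group_forming": 2}    # social parameters

scores = {  # 1 (poor) .. 5 (good), per CMS and criterion
    "Drupal":  {"extensibility": 5, "access_control": 4, "user_profiles": 4, "group_forming": 4},
    "Liferay": {"extensibility": 4, "access_control": 5, "user_profiles": 5, "group_forming": 4},
    "Plone":   {"extensibility": 4, "access_control": 5, "user_profiles": 3, "group_forming": 3},
}

def weighted_total(cms: str) -> int:
    """Sum each criterion's score multiplied by its weight."""
    return sum(WEIGHTS[c] * scores[cms][c] for c in WEIGHTS)

for cms in sorted(scores, key=weighted_total, reverse=True):
    print(f"{cms}: {weighted_total(cms)}")
```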

Relevance:

30.00%

Publisher:

Abstract:

This article reviews Article 6 of the Software Directive and discusses the need for a revision. Beyond clarifying the scope of this very limited provision on reverse engineering, it appears that the introduction of the clause into copyright was unfortunate. The indirect protection of ideas by prohibiting reverse engineering is foreign to the copyright concept. Permitting reverse engineering altogether would promote research and development and further other goals such as ICT security. Innovation would not be slowed, which is why US trade secret law, relying in part on economic arguments, permits reverse engineering. The compatibility concerns that Article 6 tries to address are better dealt with by competition law, as demonstrated by the European Court's Microsoft decision of 2007.

Relevance:

30.00%

Publisher:

Abstract:

In this paper we follow a theory-based approach to study the assimilation of compliance software in highly regulated multinational enterprises. These relatively new software products support the automation of controls associated with mandatory compliance requirements. We use institutional and success factor theories to explain the assimilation of compliance software. A framework for analyzing the assimilation of Access Control Systems (ACS), a special type of compliance software, is developed and used to reflect the experiences obtained in four in-depth case studies. One result is that coercive, mimetic, and normative pressures significantly affect ACS assimilation. Quality aspects, on the other hand, have only a moderate impact at the beginning of the assimilation process; in later phases their impact may increase as performance and improvement objectives become more relevant. In addition, it turns out that the position of the enterprises and compatibility heavily influence the assimilation process.

Relevance:

30.00%

Publisher:

Abstract:

Context: Today's project managers have a myriad of methods to choose from for the development of software applications. However, they lack empirical data about the character of these methods in terms of usefulness, ease of use or compatibility, all of which are relevant variables for assessing a developer's intention to use them. Objective: To compare three methods, each following a different paradigm (Model-Driven, Model-Based and Code-Centric), with respect to their adoption potential by junior software developers engaged in developing the business layer of a Web 2.0 application. Method: We conducted a quasi-experiment with 26 graduate students of the University of Alicante. The application developed was a social network, organized around a fixed set of modules. Three of them, similar in complexity, were used for the experiment. Subjects were asked to use a different method for each module and then to answer a questionnaire that gathered their perceptions during such use. Results: The results show that the Model-Driven method is regarded as the most useful, although it is also considered the least compatible with developers' previous experiences. They also show that junior software developers feel comfortable with the use of models and are likely to use them if the models are accompanied by a Model-Driven development environment. Conclusions: Despite their relatively low level of compatibility, Model-Driven development methods seem to show great potential for adoption. Further experimentation is needed, however, before the results can be generalized to a different population, different methods, other languages and tools, different domains or different application sizes.

Relevance:

30.00%

Publisher:

Abstract:

Large component-based systems are often built from many of the same components. As individual component-based software systems are developed, tested and maintained, these shared components are repeatedly manipulated. As a result, there are often significant overlaps and synergies across and among the test efforts of different component-based systems. In practice, however, testers of different systems rarely collaborate, taking a test-all-by-yourself approach. Consequently, redundant effort is spent testing common components, and important information that could be used to improve testing quality is lost. The goal of this research is to demonstrate that, done properly, testers of shared software components can save effort by avoiding redundant work, and can improve test effectiveness for each component, and for each component-based software system, by using information obtained when testing across multiple components. To achieve this goal, I developed collaborative testing techniques and tools for developers and testers of component-based systems with shared components, applied the techniques to subject systems, and evaluated the cost and effectiveness of applying them. The dissertation research is organized in three parts. First, I investigated current testing practices for component-based software systems to find the testing overlap and synergy we conjectured to exist. Second, I designed and implemented infrastructure and related tools to facilitate communication and data sharing between testers. Third, I designed two testing processes implementing different collaborative testing algorithms and applied them to large, actively developed software systems. The dissertation demonstrates the benefits of collaborative testing across component developers who share their components: researchers can design algorithms and tools to support collaboration processes, achieve better efficiency in testing configurations, and discover inter-component compatibility faults within a minimal time window after they are introduced.
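One way to picture the collaboration this dissertation proposes is as shared coverage bookkeeping for a common component. The sketch below is a hypothetical illustration, not the dissertation's actual tooling: each system's testers publish which units of a shared component their suites exercise, and the merged view exposes both redundant effort and untested gaps.

```python
# Hypothetical sketch of cross-project coverage sharing for one shared
# component; unit names and coverage sets are illustrative only.
component_units = {"parse", "validate", "serialize", "cache", "log"}

coverage_by_system = {
    "SystemA": {"parse", "validate", "log"},
    "SystemB": {"parse", "serialize"},
}

# Merge the published coverage reports.
covered = set().union(*coverage_by_system.values())
redundant = set.intersection(*coverage_by_system.values())
untested = component_units - covered

print("covered by every system (candidate to skip locally):", sorted(redundant))
print("covered by no system (needs a new test):", sorted(untested))
```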

Relevance:

20.00%

Publisher:

Abstract:

This article aimed to compare the accuracy of the linear measurement tools of different commercial software packages. Eight fully edentulous dry mandibles were selected for this study, and the incisor, canine, premolar, first molar and second molar regions were assessed. Cone beam computed tomography (CBCT) images were obtained with the i-CAT Next Generation. Linear bone measurements were performed by one observer on the cross-sectional images using three different software packages able to read DICOM images: XoranCat®, OnDemand3D® and KDIS3D®. In addition, 25% of the sample was reevaluated to assess reproducibility. The mandibles were then sectioned to obtain the gold standard for each region. Intraclass correlation coefficients (ICC) were calculated to examine the agreement between the two periods of evaluation; one-way analysis of variance with a post-hoc Dunnett test was used to compare each of the software-derived measurements with the gold standard. The ICC values were excellent for all software packages. The smallest differences between the software-derived measurements and the gold standard were obtained with OnDemand3D and KDIS3D (-0.11 and -0.14 mm, respectively), and the largest with XoranCat (+0.25 mm). However, there was no statistically significant difference between the measurements obtained with the different software packages and the gold standard (p > 0.05). In conclusion, linear bone measurements were not influenced by the software package used to reconstruct the image from CBCT DICOM data.
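The comparison statistic reported here, a one-way ANOVA followed by Dunnett's post-hoc test against the gold standard, can be reproduced with standard tools. The sketch below uses made-up measurement vectors, not the study's data, and `scipy.stats.dunnett` requires SciPy 1.11 or newer.

```python
import numpy as np
from scipy import stats

# Hypothetical measurements (mm) at the same sites; not the study's data.
gold     = np.array([10.2, 12.5, 9.8, 11.1, 13.0])
xorancat = gold + np.random.default_rng(1).normal(0.25, 0.2, gold.size)
ondemand = gold + np.random.default_rng(2).normal(-0.11, 0.2, gold.size)
kdis3d   = gold + np.random.default_rng(3).normal(-0.14, 0.2, gold.size)

# One-way ANOVA across all groups, then Dunnett's test with the
# gold standard as the control group (scipy >= 1.11).
f_stat, p_anova = stats.f_oneway(gold, xorancat, ondemand, kdis3d)
dunnett = stats.dunnett(xorancat, ondemand, kdis3d, control=gold)

print(f"ANOVA p = {p_anova:.3f}")
print("Dunnett p-values vs gold standard:", np.round(dunnett.pvalue, 3))
```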

Relevance:

20.00%

Publisher:

Abstract:

This paper presents SMarty, a variability management approach for UML-based software product lines (PLs). SMarty is supported by a UML profile, the SMartyProfile, and a process for managing variabilities, the SMartyProcess. SMartyProfile represents variabilities, variation points, and variants in UML models through a set of stereotypes. SMartyProcess consists of a set of activities that are systematically executed to trace, identify, and control variabilities in a PL based on SMarty. It also identifies variability implementation mechanisms and analyzes specific product configurations. In addition, a more comprehensive application of SMarty is presented using SEI's Arcade Game Maker PL, and an evaluation of SMarty and a discussion of related work are provided.
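SMarty's core idea, marking variation points and their variants in a model and then resolving them per product, can be illustrated outside UML. The sketch below is a hypothetical analogue in plain code, not SMarty's actual profile or stereotypes, and the variation-point names are illustrative placeholders: a variation point lists its variants, and a product configuration selects among them, subject to a simple validity check.

```python
from dataclasses import dataclass

# Hypothetical analogue of variability resolution; SMarty itself works on
# UML models via stereotypes, not on code like this.
@dataclass
class VariationPoint:
    name: str
    variants: set[str]
    mandatory: bool = True  # must every product bind this point?

MODEL = [
    VariationPoint("movement", {"keyboard", "mouse"}),
    VariationPoint("scoring", {"simple", "combo"}),
    VariationPoint("sound", {"on", "off"}, mandatory=False),
]

def resolve(config: dict[str, str]) -> dict[str, str]:
    """Check a product configuration against the variability model."""
    product = {}
    for vp in MODEL:
        choice = config.get(vp.name)
        if choice is None:
            if vp.mandatory:
                raise ValueError(f"missing choice for {vp.name}")
            continue
        if choice not in vp.variants:
            raise ValueError(f"{choice!r} is not a variant of {vp.name}")
        product[vp.name] = choice
    return product

print(resolve({"movement": "keyboard", "scoring": "combo"}))
```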