938 results for Analysis software
Abstract:
The increasing amount of data available about software systems poses new challenges for re- and reverse engineering research, as the proposed approaches need to scale. In this context, concerns about meta-modeling and analysis techniques need to be augmented by technical concerns about how to reuse and how to build upon the efforts of previous research. Moose is an extensive infrastructure for reverse engineering evolved for over 10 years that promotes the reuse of engineering efforts in research. Moose accommodates various types of data modeled in the FAMIX family of meta-models. The goal of this half-day workshop is to strengthen the community of researchers and practitioners who are working in re- and reverse engineering, by providing a forum for building future research starting from Moose and FAMIX as shared infrastructure.
Abstract:
In rapidly evolving domains such as Computer Assisted Orthopaedic Surgery (CAOS), emphasis is often put first on innovation and new functionality, rather than on developing the common infrastructure needed to support integration and reuse of these innovations. In fact, developing such an infrastructure is often considered a high-risk venture given the volatility of the domain. We present CompAS, a method that exploits the very evolution of innovations in the domain to carry out the necessary quantitative and qualitative commonality and variability analysis, especially in the case of scarce system documentation. We show how our technique applies to the CAOS domain by using conference proceedings as a key source of information about the evolution of features in CAOS systems over a period of several years. We detect and classify evolution patterns to determine functional commonality and variability. We also identify non-functional requirements to help capture domain variability. We have validated our approach by evaluating the degree to which representative test systems can be covered by the common and variable features produced by our analysis.
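The core classification step can be illustrated with a minimal sketch: features observed in a set of systems are split into those shared by every system (common) and the rest (variable). The feature names and systems below are invented for illustration; the actual CompAS analysis mines them from conference proceedings over several years.

```python
# Hypothetical feature inventories extracted per CAOS system (illustrative only).
SYSTEMS = {
    "system_a": {"tracking", "registration", "navigation"},
    "system_b": {"tracking", "registration", "robotic_assistance"},
    "system_c": {"tracking", "registration", "navigation", "planning"},
}

def classify_features(systems):
    """Split observed features into common and variable sets."""
    all_features = set().union(*systems.values())
    common = set.intersection(*systems.values())
    return common, all_features - common

common, variable = classify_features(SYSTEMS)
print(sorted(common))    # features shared by every system
print(sorted(variable))  # candidate variation points of the domain
```

In the method itself, this set-based view would additionally be weighted by when each feature appeared, so that evolution patterns (stable, emerging, abandoned) can be distinguished.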
Abstract:
eLearning supports education in certain disciplines. Here, we report on novel eLearning concepts, techniques, and tools to support education in Software Engineering, a subdiscipline of computer science. We call this "Software Engineering eLearning". On the other hand, software support is a substantial prerequisite for eLearning in any discipline; thus, Software Engineering techniques have to be applied to develop and maintain those software systems. We call this "eLearning Software Engineering". Both aspects have been investigated in a large joint BMBF-funded research project termed MuSofT (Multimedia in Software Engineering). The main results are summarized in this paper.
Abstract:
We describe the use of log file analysis to investigate whether the use of CSCL applications corresponds to their didactical purposes. As an example, we examine the use of the web-based system CommSy as software support for project-oriented university courses. We present two findings: (1) We suggest measures to shape the context of CSCL applications and support their initial and continued use. (2) We show how log files can be used to analyze how, when, and by whom a CSCL system is used, and thus help to validate further empirical findings. However, log file analyses can only be interpreted reasonably when additional data concerning the context of use is available.
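The "how, when and by whom" aggregation can be sketched in a few lines. The log format below (timestamp, user, action, tab-separated) is an assumption for illustration; the paper does not specify CommSy's actual log layout.

```python
from collections import Counter
from datetime import datetime

# Hypothetical log lines: "ISO timestamp<TAB>user<TAB>action" (invented layout).
LOG_LINES = [
    "2003-04-01T09:15:00\talice\tupload",
    "2003-04-01T10:02:00\tbob\tread",
    "2003-04-02T14:30:00\talice\tread",
]

def usage_profile(lines):
    """Aggregate who used the system, how often, and at what hour of day."""
    by_user = Counter()
    by_hour = Counter()
    for line in lines:
        stamp, user, _action = line.split("\t")
        by_user[user] += 1
        by_hour[datetime.fromisoformat(stamp).hour] += 1
    return by_user, by_hour

by_user, by_hour = usage_profile(LOG_LINES)
print(by_user.most_common(1))  # heaviest user
print(sorted(by_hour))         # hours of day with any activity
```

As the abstract cautions, such counts only become interpretable alongside contextual data about the courses themselves.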
Abstract:
This dissertation investigates online learning with weblogs using the e-portfolio method, a form of learning and presentation that has been gaining ground in educational contexts for several years. Three case studies were formed across several courses of the degree programme "Angewandte Medien- und Kommunikationswissenschaft" (Applied Media and Communication Science) at Technische Universität Ilmenau. Within these, students' keeping of their own e-portfolio blogs was evaluated over a period of about three years. The evaluation goal, addressed through specific research questions, was to determine how the self-directed, connective learning involved can lead to corresponding success. Particular attention was paid to the sub-dimension of media competence in the field of tension between learning activity, knowledge acquisition, and information/knowledge management, and further intervening variables, such as effort and acceptance, were taken into account. The thesis first presents the conceptual foundations, describes the use of e-portfolios in theory and practice, details approaches to media competence and places them in the context of e-portfolios, and finally provides an extensive analysis of the state of research. These were complemented by findings from a qualitative preliminary study in the form of five guided expert interviews. The subsequent main study collected and analyzed quantitative data from online surveys of the students at five points in time, from intra- and inter-individual perspectives.
The most striking empirical finding of the thesis is that self-directed, connective learning with e-portfolio blogs leads to a lasting promotion of media competence, which is also reflected in significant correlations with the other sub-dimensions and intervening variables. Beyond this, there is also potential for increased learning activity, growing knowledge acquisition, and improved information/knowledge management, although these require further research. By contrast, the considerable and continuously high effort and the (self-)motivation required can be identified as the decisive challenges of this learning method.
Abstract:
Background: Balkan endemic nephropathy (BEN) is a chronic progressive interstitial nephritis showing a striking correlation with uroepithelial tumours of the upper urinary tract. The disease is endemic to the Danube river regions of several Balkan countries. DNA methylation is a primary epigenetic modification involved in major processes such as cancer, genomic imprinting, and gene silencing. The significance of CpG island methylation status in normal development, cell differentiation, and gene expression is widely recognized, although it remains poorly understood. Methods: We performed whole-genome DNA methylation array analysis on pooled DNA samples from the peripheral blood of 159 affected and 170 healthy individuals. This technique allowed us to determine the methylation status of 27,627 CpG islands across the whole genome in healthy controls and BEN patients, yielding the methylation profile of BEN patients from Bulgarian and Serbian endemic regions. Results: Using specifically developed software, we compared the methylation profiles of BEN patients and corresponding controls and identified the differentially methylated regions (DMRs). We then compared the DMRs between all patient-control pairs to determine common changes in the epigenetic profiles. SEC61G, IL17RA, and HDAC11 proved to be differentially methylated across all patient-control pairs; the CpG islands of all three genes were hypomethylated relative to controls. This suggests that dysregulation of these genes, which are involved in the immunological response, could be a common mechanism of BEN pathogenesis in both endemic regions and in both genders. Conclusion: Our data suggest the new hypothesis that immunologic dysregulation plays a role in BEN etiopathogenesis. Keywords: Epigenetics; Whole genome array analysis; Balkan endemic nephropathy
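The comparison logic (flag regions whose patient methylation differs from controls, then keep those that change in the same direction in every patient-control pair) can be sketched as follows. The methylation levels and the 0.2 threshold are invented for illustration; the study's actual software and cutoffs are not described in the abstract.

```python
# Illustrative methylation comparison: -1 marks hypomethylation in patients,
# +1 hypermethylation. All numbers are placeholders, not study data.
THRESHOLD = 0.2

def differential_regions(patient, control, threshold=THRESHOLD):
    """Return CpG islands whose patient-vs-control difference exceeds the threshold."""
    calls = {}
    for island, p_level in patient.items():
        delta = p_level - control[island]
        if abs(delta) >= threshold:
            calls[island] = -1 if delta < 0 else 1
    return calls

# Two hypothetical patient-control pairs (e.g. one per endemic region).
pair1 = differential_regions({"SEC61G": 0.20, "IL17RA": 0.10, "GENE_X": 0.50},
                             {"SEC61G": 0.60, "IL17RA": 0.50, "GENE_X": 0.55})
pair2 = differential_regions({"SEC61G": 0.25, "IL17RA": 0.15, "GENE_X": 0.90},
                             {"SEC61G": 0.60, "IL17RA": 0.50, "GENE_X": 0.50})

# Common DMRs: same direction of change in every pair.
common = {g for g in pair1 if g in pair2 and pair1[g] == pair2[g]}
print(sorted(common))  # islands consistently hypomethylated across pairs
```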
Abstract:
Coordinated eye and head movements occur simultaneously to scan the visual world for relevant targets. However, measuring both eye and head movements in experiments that allow natural head movements can be challenging. This paper provides an approach to studying eye-head coordination: First, we demonstrate the capabilities and limits of the eye-head tracking system used, and compare it to other technologies. Second, a behavioral task is introduced to invoke eye-head coordination. Third, a method is introduced to reconstruct signal loss in video-based oculography caused by cornea reflection artifacts in order to extend the tracking range. Finally, parameters of eye-head coordination are identified using EHCA (eye-head coordination analyzer), a MATLAB software tool developed to analyze eye-head shifts. To demonstrate the capabilities of the approach, a study with 11 healthy subjects was performed to investigate motion behavior. The approach presented here is discussed as an instrument to explore eye-head coordination, which may lead to further insights into attentional and motor symptoms of certain neurological or psychiatric diseases, e.g., schizophrenia.
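One simple way to reconstruct short signal dropouts is linear interpolation between the last valid sample before a gap and the first one after it. The sketch below illustrates only this generic idea, in Python rather than MATLAB; the abstract does not specify which reconstruction method EHCA actually uses.

```python
# Minimal gap-reconstruction sketch: fill runs of missing samples (None) by
# linear interpolation between their valid neighbours.
def reconstruct(signal):
    """Return a copy of signal with None gaps linearly interpolated."""
    out = list(signal)
    i = 0
    while i < len(out):
        if out[i] is None:
            start = i - 1                      # last valid sample before the gap
            j = i
            while j < len(out) and out[j] is None:
                j += 1                         # first valid sample after the gap
            if start < 0 or j == len(out):
                raise ValueError("gap touches the signal boundary")
            step = (out[j] - out[start]) / (j - start)
            for k in range(i, j):
                out[k] = out[start] + step * (k - start)
            i = j
        else:
            i += 1
    return out

print(reconstruct([1.0, None, None, 4.0]))  # [1.0, 2.0, 3.0, 4.0]
```

For real gaze data, one would additionally bound the gap length that may be bridged, since long dropouts cannot be recovered this way.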
Abstract:
Ontologies and Methods for Interoperability of Engineering Analysis Models (EAMs) in an e-Design Environment. September 2007. Neelima Kanuri, B.S., Birla Institute of Technology and Sciences, Pilani, India; M.S., University of Massachusetts Amherst. Directed by: Professor Ian Grosse.
Interoperability is the ability of two or more systems to exchange and reuse information efficiently. This thesis presents new techniques for interoperating engineering tools using ontologies as the basis for representing, visualizing, reasoning about, and securely exchanging abstract engineering knowledge between software systems. The specific engineering domain that is the primary focus of this report is the modeling knowledge associated with the development of engineering analysis models (EAMs). This abstract modeling knowledge has been used to support the integration of analysis and optimization tools in iSIGHT-FD, a commercial engineering environment. ANSYS, a commercial FEA tool, has been wrapped as an analysis service available inside iSIGHT-FD. An engineering analysis modeling (EAM) ontology has been developed and instantiated to form a knowledge base for representing analysis modeling knowledge. The instances of the knowledge base are the analysis models of real-world applications. To illustrate how abstract modeling knowledge can be exploited for useful purposes, a cantilever I-beam design optimization problem has been used as a test-bed proof-of-concept application. Two distinct finite element models of the I-beam are available to analyze a given beam design: a beam-element finite element model with potentially lower accuracy but significantly reduced computational cost, and a high-fidelity, high-cost shell-element finite element model. The goal is to obtain an optimized I-beam design at minimum computational expense. An intelligent KB tool was developed and implemented in FiPER. This tool reasons about the modeling knowledge to intelligently shift between the beam and the shell element models during an optimization process, selecting the best analysis model for a given optimization design state. In addition to improved interoperability and design optimization, methods are developed and presented that demonstrate the ability to operate on ontological knowledge bases to perform important engineering tasks. One such method is an automatic technical report generation method, which converts the modeling knowledge associated with an analysis model into a flat technical report. The second is a secure knowledge sharing method, which allocates permissions to portions of knowledge to control knowledge access and sharing. Acting together, both methods enable recipient-specific, fine-grained, controlled knowledge viewing and sharing in an engineering workflow integration environment such as iSIGHT-FD. Together, these methods help reduce the large-scale inefficiencies in current product design and development cycles caused by poor knowledge sharing and reuse between people and software engineering tools. This work is a significant advance in both the understanding and the application of knowledge integration in a distributed engineering design framework.
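The model-switching idea (use the cheap beam model while its accuracy suffices, fall back to the expensive shell model otherwise) can be sketched as a cost/accuracy trade-off rule. The classes, costs, and error bounds below are invented for illustration; the actual tool reasons over an ontology knowledge base inside FiPER rather than over hard-coded numbers.

```python
from dataclasses import dataclass

@dataclass
class AnalysisModel:
    name: str
    cost: float       # relative computational cost (hypothetical)
    max_error: float  # worst-case relative error (hypothetical)

BEAM = AnalysisModel("beam-element FE model", cost=1.0, max_error=0.05)
SHELL = AnalysisModel("shell-element FE model", cost=20.0, max_error=0.005)

def select_model(required_accuracy):
    """Pick the cheapest model whose worst-case error meets the requirement."""
    candidates = [m for m in (BEAM, SHELL) if m.max_error <= required_accuracy]
    return min(candidates, key=lambda m: m.cost)

print(select_model(0.10).name)  # loose requirement: cheap beam model suffices
print(select_model(0.01).name)  # tight requirement: shell model is needed
```

In an optimization loop, `required_accuracy` would tighten as the search converges, so early iterations run cheaply and only the final design states pay for the high-fidelity model.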
Abstract:
The platform-independent software package consisting of the oligonucleotide mass assembler (OMA) and the oligonucleotide peak analyzer (OPA) was created to support the analysis of oligonucleotide mass spectra. It calculates all theoretically possible fragments of a given input sequence and annotates them to an experimental spectrum, thus saving a large amount of manual processing time. The software performs analysis of precursor and product ion spectra of oligonucleotides and their analogues comprising user-defined modifications of the backbone, the nucleobases, or the sugar moiety, as well as adducts with metal ions or drugs. The ability to expand the library of building blocks and to implement individual structural variations makes it extremely useful for supporting the analysis of therapeutically active compounds. The functionality of the software tool is demonstrated on the examples of a platinated double-stranded oligonucleotide and a modified RNA sequence. Experiments also reveal the unique dissociation behavior of platinated higher-order DNA structures.
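The annotation principle (build theoretical fragment masses from a monomer mass table, then match experimental peaks within a tolerance) can be sketched in a few lines. The monomer masses below are placeholders, not real nucleotide residue masses, and the prefix-only fragmentation is a simplification of the many ion series OMA/OPA actually enumerate.

```python
# Placeholder monomer masses (illustrative only, not real residue masses).
MONOMER_MASS = {"A": 313.0, "C": 289.0, "G": 329.0, "U": 306.0}

def prefix_fragment_masses(sequence):
    """Masses of all 5' prefix fragments of the sequence (cumulative sums)."""
    masses, total = [], 0.0
    for base in sequence:
        total += MONOMER_MASS[base]
        masses.append(round(total, 1))
    return masses

def annotate(peaks, theoretical, tol=0.5):
    """Map each experimental peak to a theoretical fragment mass within tol."""
    return {p: t for p in peaks for t in theoretical if abs(p - t) <= tol}

theo = prefix_fragment_masses("ACG")
print(annotate([602.2, 931.1, 777.7], theo))  # 777.7 remains unannotated
```

User-defined modifications fit naturally into this scheme: extending `MONOMER_MASS` with a new building block is all that is needed to cover a modified backbone or an adduct.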
Abstract:
Given the increasing interest in using social software for company-internal communication and collaboration, this paper examines drivers and inhibitors of micro-blogging adoption at the workplace. While nearly one in two companies is currently planning to introduce social software, there is no empirically validated research on employees’ adoption. In this paper, we build on previous focus group results and test our research model in an empirical study using Structural Equation Modeling. Based on our findings, we derive recommendations on how to foster adoption. We suggest that micro-blogging should be presented to employees as an efficient means of communication, personal brand building, and knowledge management. In order to particularly promote content contribution, privacy concerns should be eased by setting clear rules on who has access to postings and for how long they will be archived.
Abstract:
This work contributes to the ongoing debate on the productivity paradox by considering CIOs’ perceptions of IT business value. Applying regression analysis to data from an international survey, we study how the adoption of certain types of enterprise software affects the CIOs’ perception of the impact of IT on the firm’s business activities, and vice versa. Other potentially important factors, such as country, sector, and size of the firms, are also taken into account. Our results indicate more significant support for the impact of perceived IT benefits on the adoption of enterprise software than vice versa. CIOs based in the US perceive IT benefits more strongly than their German counterparts. Furthermore, certain types of enterprise software seem to be more prevalent in the US.
Abstract:
Withdrawal reflexes of the mollusk Aplysia exhibit sensitization, a simple form of long-term memory (LTM). Sensitization is due, in part, to long-term facilitation (LTF) of sensorimotor neuron synapses. LTF is induced by the modulatory actions of serotonin (5-HT). Pettigrew et al. developed a computational model of the nonlinear intracellular signaling and gene network that underlies the induction of 5-HT-induced LTF. The model simulated empirical observations that repeated applications of 5-HT induce persistent activation of protein kinase A (PKA) and that this persistent activation requires a suprathreshold exposure of 5-HT. This study extends the analysis of the Pettigrew model by applying bifurcation analysis, singularity theory, and numerical simulation. Using singularity theory, classification diagrams of parameter space were constructed, identifying regions with qualitatively different steady-state behaviors. The graphical representation of these regions illustrates their robustness to changes in model parameters. Because persistent PKA activity correlates with Aplysia LTM, the analysis focuses on a positive feedback loop in the model that tends to maintain PKA activity. In this loop, PKA phosphorylates a transcription factor (TF-1), thereby increasing the expression of a ubiquitin hydrolase (Ap-Uch). Ap-Uch then acts to increase PKA activity, closing the loop. This positive feedback loop manifests multiple, coexisting steady states, or multiplicity, which provides a mechanism for a bistable switch in PKA activity. After the removal of 5-HT, the PKA activity either returns to its basal level (reversible switch) or remains at a high level (irreversible switch). Such an irreversible switch might be a mechanism that contributes to the persistence of LTM. The classification diagrams also identify parameters and processes that might be manipulated, perhaps pharmacologically, to enhance the induction of memory.
Rational drug design, to affect complex processes such as memory formation, can benefit from this type of analysis.
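The bistable-switch behavior described above can be reproduced with a minimal one-variable positive-feedback model, dP/dt = basal + stimulus + k·Pⁿ/(Kⁿ + Pⁿ) − d·P. This generic Hill-type sketch and its parameter values are assumptions for illustration, not the actual Pettigrew equations: after a suprathreshold stimulus is removed, activity either relaxes to the low state or remains locked in the high state.

```python
# Forward-Euler simulation of a generic positive-feedback loop (illustrative
# parameters, not the published model).
def simulate(p0, stimulus, k=2.0, K=1.0, n=4, d=1.0, basal=0.05,
             dt=0.01, steps=20000):
    """Integrate dP/dt = basal + stimulus + k*P^n/(K^n + P^n) - d*P."""
    p = p0
    for _ in range(steps):
        feedback = k * p**n / (K**n + p**n)
        p += dt * (basal + stimulus + feedback - d * p)
    return p

low = simulate(0.1, stimulus=0.0)     # never stimulated: settles near basal
high = simulate(0.1, stimulus=1.0)    # during suprathreshold "5-HT" input
after = simulate(high, stimulus=0.0)  # stimulus removed: stays on upper branch
print(round(low, 2), round(after, 2))
```

The same parameters that shape this toy system's two stable branches play the role of the quantities varied in the classification diagrams: shrinking the feedback gain `k`, for example, collapses the upper branch and makes the switch reversible.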
Abstract:
IT has turned out to be a key factor for the purposes of gaining maturity in Business Process Management (BPM). This book presents a worldwide investigation that was conducted among companies from the ‘Forbes Global 2000’ list to explore the current usage of software throughout the BPM life cycle and to identify the companies’ requirements concerning process modelling. The responses from 130 companies indicate that, at the present time, it is mainly software for process description and analysis that is required, while process execution is supported by general software such as databases, ERP systems and office tools. The resulting complex system landscapes give rise to distinct requirements for BPM software, while the process modelling requirements can be equally satisfied by the most common languages (BPMN, UML, EPC).
Abstract:
We present results of a benchmark test evaluating the resource allocation capabilities of the project management software packages Acos Plus.1 8.2, CA SuperProject 5.0a, CS Project Professional 3.0, MS Project 2000, and Scitor Project Scheduler 8.0.1. The tests are based on 1560 instances of precedence- and resource-constrained project scheduling problems. For different complexity scenarios, we analyze the deviation of the makespan obtained by the software packages from the best feasible makespan known. Among the tested software packages, Acos Plus.1 and Project Scheduler show the best resource allocation performance. Moreover, our numerical analysis reveals a considerable performance gap between the implemented methods and state-of-the-art project scheduling algorithms, especially for large-sized problems. Thus, there is still a significant potential for improving solutions to resource allocation problems in practice.
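The evaluation metric is the relative deviation of each package's makespan from the best feasible makespan known. A minimal sketch, with invented instance data (the benchmark's 1560 instances are not reproduced here):

```python
def relative_deviation(makespan, best_known):
    """Percentage by which a schedule exceeds the best makespan known."""
    return 100.0 * (makespan - best_known) / best_known

def average_deviation(results, best):
    """Mean deviation of one package over a set of instances."""
    devs = [relative_deviation(results[i], best[i]) for i in best]
    return sum(devs) / len(devs)

# Hypothetical best-known makespans and one package's results.
best = {"inst1": 100, "inst2": 80}
package_a = {"inst1": 105, "inst2": 88}
print(round(average_deviation(package_a, best), 1))  # → 7.5
```

Grouping such averages by complexity scenario (e.g. instance size, resource scarcity) yields exactly the kind of per-scenario comparison the benchmark reports.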