943 results for Software analysis
Abstract:
Semantic Web Mining aims at combining the two fast-developing research areas Semantic Web and Web Mining. This survey analyzes the convergence of trends from both areas: Growing numbers of researchers work on improving the results of Web Mining by exploiting semantic structures in the Web, and they use Web Mining techniques for building the Semantic Web. Last but not least, these techniques can be used for mining the Semantic Web itself. The second aim of this paper is to use these concepts to circumscribe what Web space is, what it represents and how it can be represented and analyzed. This is used to sketch the role that Semantic Web Mining and the software agents and human agents involved in it can play in the evolution of Web space.
Abstract:
The identification of chemical mechanisms that can exhibit oscillatory phenomena in reaction networks is currently of intense interest. In particular, the parametric question of the existence of Hopf bifurcations has gained increasing popularity due to its relation to the oscillatory behavior around fixed points. However, the detection of oscillations in high-dimensional systems and systems with constraints by the available symbolic methods has proven to be difficult. New efficient methods are therefore required to tackle the complexity caused by the high dimensionality and non-linearity of these systems. In this thesis, we present efficient algorithmic methods to detect Hopf bifurcation fixed points in (bio-)chemical reaction networks with symbolic rate constants, thereby yielding information about the oscillatory behavior of the networks. The methods use representations of the systems in convex coordinates that arise from stoichiometric network analysis. The first method, called HoCoQ, reduces the problem of determining the existence of Hopf bifurcation fixed points to a first-order formula over the ordered field of the reals that can then be solved using computational-logic packages. The second method, called HoCaT, uses ideas from tropical geometry to formulate a more efficient method that is incomplete in theory but has worked very well for the attempted high-dimensional models involving more than 20 chemical species. Since the instability of reaction networks may lead to oscillatory behavior, we also investigate criteria for their stability using convex coordinates and quantifier elimination techniques. We further study Muldowney's extension of the classical Bendixson-Dulac criterion for excluding periodic orbits to higher dimensions for polynomial vector fields, and we discuss the use of simple conservation constraints and of parametric constraints for describing simple convex polytopes on which periodic orbits can be excluded by Muldowney's criteria. All developed algorithms have been integrated into a common software framework called PoCaB (platform to explore bio-chemical reaction networks by algebraic methods), allowing for automated computation workflows from the problem descriptions. PoCaB also contains a database for the algebraic entities computed from the models of chemical reaction networks.
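The sketch below is not the HoCoQ or HoCaT algorithm from the thesis; it is a minimal Python/sympy illustration of the underlying idea of deriving a symbolic Hopf condition from the Jacobian of a parametric reaction system via the Routh-Hurwitz criterion. The three-species vector field and the rate-constant names are invented for the example.

```python
# Minimal sketch (not the HoCoQ/HoCaT methods): derive the symbolic condition
# for a candidate Hopf bifurcation of a small parametric ODE system from the
# Routh-Hurwitz criterion on the Jacobian's characteristic polynomial.
import sympy as sp

x, y, z = sp.symbols("x y z", positive=True)
k1, k2, k3 = sp.symbols("k1 k2 k3", positive=True)  # symbolic rate constants

# Hypothetical 3-species mass-action-like vector field (illustrative only).
f = sp.Matrix([
    k1 * x - k2 * x * y,
    k2 * x * y - k3 * y * z,
    k3 * y * z - k1 * z,
])

J = f.jacobian(sp.Matrix([x, y, z]))

# Characteristic polynomial  lam**3 + a1*lam**2 + a2*lam + a3
lam = sp.Symbol("lam")
charpoly = J.charpoly(lam).as_expr()
a1 = charpoly.coeff(lam, 2)
a2 = charpoly.coeff(lam, 1)
a3 = charpoly.coeff(lam, 0)

# Routh-Hurwitz: a1*a2 - a3 = 0 with a1 > 0 and a3 > 0 gives a pair of purely
# imaginary eigenvalues, i.e. a candidate Hopf point.  In a full analysis this
# condition would be evaluated at a positive steady state of f.
hopf_condition = sp.simplify(a1 * a2 - a3)
print("Hopf candidate condition (set to zero):", hopf_condition)
```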
Abstract:
Compositional data naturally arises from the scientific analysis of the chemical composition of archaeological material such as ceramic and glass artefacts. Data of this type can be explored using a variety of techniques, from standard multivariate methods such as principal components analysis and cluster analysis, to methods based upon the use of log-ratios. The general aim is to identify groups of chemically similar artefacts that could potentially be used to answer questions of provenance. This paper will demonstrate work in progress on the development of a documented library of methods, implemented using the statistical package R, for the analysis of compositional data. R is an open source package that makes very powerful statistical facilities available at no cost. We aim to show how, with the aid of statistical software such as R, traditional exploratory multivariate analysis can easily be used alongside, or in combination with, specialist techniques of compositional data analysis. The library has been developed from a core of basic R functionality, together with purpose-written routines arising from our own research (for example, that reported at CoDaWork'03). In addition, we have included other appropriate publicly available techniques and libraries that have been implemented in R by other authors. Available functions range from standard multivariate techniques through to various approaches to log-ratio analysis and zero replacement. We also discuss and demonstrate a small selection of relatively new techniques that have hitherto been little used in archaeometric applications involving compositional data. The application of the library to the analysis of data arising in archaeometry will be demonstrated; results from different analyses will be compared; and the utility of the various methods will be discussed.
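The following sketch is not the R library described in the abstract; it is a small Python illustration of one of the standard log-ratio approaches it mentions, a centred log-ratio (clr) transform followed by principal components analysis, applied to made-up compositional data.

```python
# Illustrative sketch only: clr transform plus PCA on compositional data.
# The sample values are invented; each row is a hypothetical ceramic sample
# whose oxide proportions sum to 1 (zeros must be replaced before the logs).
import numpy as np

X = np.array([
    [0.55, 0.25, 0.15, 0.05],
    [0.50, 0.30, 0.12, 0.08],
    [0.62, 0.20, 0.10, 0.08],
])

def clr(compositions: np.ndarray) -> np.ndarray:
    """Centred log-ratio: log of each part minus the log geometric mean."""
    logs = np.log(compositions)
    return logs - logs.mean(axis=1, keepdims=True)

Z = clr(X)

# Principal components of the clr-transformed data (via SVD of centred scores).
Zc = Z - Z.mean(axis=0)
_, singular_values, components = np.linalg.svd(Zc, full_matrices=False)
print("clr scores:\n", Z)
print("component variances:", singular_values**2 / (len(X) - 1))
```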
Abstract:
This paper describes the solution devised for the implementation of a Geographic Information System that must serve the Instituto Universitario del Agua y del Medio Ambiente of the Universidad de Murcia and the Instituto Euromediterráneo del Agua. Given the nature of both institutions, it is a tool oriented fundamentally towards the study of water resources and hydrological processes. The process began with an identification of the users' needs (users with different profiles and requirements) and the subsequent development of a conceptual design that could ensure those needs are met. Because the users' requirements demanded it, both users working in a Linux environment and those working in Windows were taken into account. A system based on free software was chosen, using GRASS for raster data handling and modelling; PostGIS (on top of PostgreSQL) and GRASS for managing vector data; and QGIS, gvSIG and Kosmo as graphical user interfaces. Other programs used for specific purposes include R, MapServer and GMT.
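As a minimal illustration of the vector-data side of the stack described above, the sketch below queries a hypothetical river layer stored in PostGIS/PostgreSQL from a Python client; the connection details and the hydro.rivers table are assumptions for the example, not part of the actual system.

```python
# Sketch: read a (hypothetical) hydrological vector layer from PostGIS.
# Connection parameters and the table/column names are illustrative only.
import psycopg2

conn = psycopg2.connect(
    host="localhost", dbname="hydrology", user="gis", password="secret"
)
with conn, conn.cursor() as cur:
    # ST_Length and ST_AsGeoJSON are standard PostGIS functions.
    cur.execute(
        """
        SELECT name,
               ST_Length(geom::geography) AS length_m,
               ST_AsGeoJSON(geom)         AS geojson
        FROM hydro.rivers
        ORDER BY length_m DESC
        LIMIT 5;
        """
    )
    for name, length_m, geojson in cur.fetchall():
        print(f"{name}: {length_m / 1000:.1f} km")
conn.close()
```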
Abstract:
This presentation explains how we move from a problem definition to an algorithmic solution using simple tools like noun-verb analysis. It also looks at how we might judge the quality of a solution through coupling, cohesion and generalisation.
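As a toy illustration of noun-verb analysis (not taken from the presentation itself), the Python sketch below turns the nouns of a requirement sentence into candidate classes and its verb into a candidate operation; all names are invented.

```python
# Requirement: "A member borrows a book from the library."
#   nouns -> Member, Book, Library      verb -> borrows
from dataclasses import dataclass, field


@dataclass
class Book:
    title: str
    on_loan: bool = False


@dataclass
class Member:
    name: str
    borrowed: list[Book] = field(default_factory=list)


@dataclass
class Library:
    catalogue: list[Book] = field(default_factory=list)

    def borrow(self, member: Member, book: Book) -> None:
        """The verb 'borrows' becomes an operation on the Library class."""
        if book in self.catalogue and not book.on_loan:
            book.on_loan = True
            member.borrowed.append(book)
```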
Abstract:
This presentation discusses the role and purpose of testing in the Systems/Software Development Life Cycle. We examine the consequences of the 'cost curve' on defect removal and how agile methods can reduce its effects. We concentrate on Black Box Testing and use Equivalence Partitioning and Boundary Value Analysis to construct the smallest number of test cases and test scenarios necessary for a test plan.
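A small sketch of how equivalence partitioning and boundary value analysis yield a compact test set is shown below; the graded-exam example and its pytest cases are illustrative and not taken from the presentation.

```python
# Hypothetical function under test: grade an exam mark in 0-100.
# Partitions: invalid (<0), fail (0-49), pass (50-69), distinction (70-100),
# invalid (>100).
import pytest


def grade(mark: int) -> str:
    if mark < 0 or mark > 100:
        raise ValueError("mark out of range")
    if mark < 50:
        return "fail"
    if mark < 70:
        return "pass"
    return "distinction"


# One value per partition plus the values at each boundary: the
# "smallest number of test cases" idea.
@pytest.mark.parametrize(
    "mark, expected",
    [
        (0, "fail"), (49, "fail"),
        (50, "pass"), (69, "pass"),
        (70, "distinction"), (100, "distinction"),
    ],
)
def test_valid_partitions(mark, expected):
    assert grade(mark) == expected


@pytest.mark.parametrize("mark", [-1, 101])  # just outside the valid range
def test_invalid_boundaries(mark):
    with pytest.raises(ValueError):
        grade(mark)
```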
Abstract:
This research work is based on the description and analysis of the different routes available for the internationalization of companies. It also examines in greater depth the concept of internationalization, its advantages and implications, and the different models that apply to it. Likewise, the study shows the strategic differences that can exist between sectors, bearing in mind that Formesan S.A.S. belongs to the construction sector and SQL Software S.A. to the software and information and communication technologies sector. From the comparison and analysis of the internationalization process of these companies, it becomes clear that there is no single route by which they decide to internationalize; in many cases they do so out of the search for opportunities and the motivation of the entrepreneurs to make it happen. Ultimately, internationalization is not a process tied to a theoretical model; rather, it is an experience shaped by the conditions and the decisions the entrepreneur chooses to make in turning the business into a global company.
Abstract:
The aim is to establish a standardized frame of reference that allows the company Software House Ltda. to understand the fundamental aspects of a characterization of the business, taking into account the analysis of the sector, the internal analysis of the company, the analysis of possible countries to export to, the improvement of the service to be exported, the price analysis and the formulation of the marketing plan. Likewise, this project bases its development on the current situation of the company in order to propose action and improvement plans that allow its internal strengthening, focused on preparing for the internationalization of its services.
Abstract:
Abstract taken from the publication.
Abstract:
More and more LiDAR data are becoming available covering large areas of territory, but the distribution of this type of data has not yet been solved, owing to the large data volumes and to the fact that analysing the information is not trivial for users who are not experts in LiDAR technology. DIELMO is currently carrying out a project to implement different services for the distribution of LiDAR data through a Spatial Data Infrastructure (SDI).
Abstract:
This is a brief report of a research project, coordinated by me and funded by the Portuguese Government. It studies ‘The Representation of the Feminine in the Portuguese Press’ (POCI/COM 55780/2004), and works on the content analysis of discourse on the feminine in various Portuguese newspapers, covering the time span of February 1st till April 30th 2006. The paper is divided into two parts: in the first part, I will briefly discuss the typology used to code the text units of selected articles; in the second part, I will explore the most expressive percentages of the first two weeks of February for the content analysis of the Diário de Notícias newspaper. These percentages were obtained with the NVivo 6 qualitative data treatment software programme.
Abstract:
This action research study arose from the need to investigate and deepen the learning of the mechanics of reading and writing in a child with Cerebral Palsy through the use of the educational software "Comunicar com Símbolos". The work initially took place in a School Centre of a School Cluster in the central region of the country, in the district of Santarém, and, after a diagnostic assessment, moved to a private social security institution, a Centre for the Profoundly Disabled (Centro de Deficientes Profundos) in the same region. It essentially analyses the development of reading and writing in a child with Bilateral Spastic Cerebral Palsy, predominantly affecting the lower limbs, through ten sessions planned around the use of the educational software Comunicar com Símbolos, by Cnotinfor – Imagina. After the intervention and the analysis of the results, it was concluded that the aforementioned software offers significant advantages in consolidating the reading and writing of a child with Cerebral Palsy. This interventional work does not in any way claim to provide definitive answers on implementing strategies to improve the development of the reading and writing mechanism in children with Cerebral Palsy, but only to contribute to an in-depth reflection on the importance of applying assistive technologies in pedagogical practice with children with Special Educational Needs in general.
Abstract:
Context: Learning can be regarded as knowledge construction in which prior knowledge and experience serve as the basis for learners to expand their knowledge base. Such a process of knowledge construction has to take place continuously in order to enhance the learners' competence in a competitive working environment. As information consumers, individual users demand personalised information provision which meets their own specific purposes, goals, and expectations. Objectives: The current methods in requirements engineering are capable of modelling the common user's behaviour in the domain of knowledge construction. The users' requirements can be represented as a case in the defined structure, which can be reasoned over to enable requirements analysis. Such analysis needs to be enhanced so that personalised information provision can be tackled and modelled. However, there is a lack of suitable modelling methods to achieve this end. This paper presents a new ontological method for capturing individual users' requirements and transforming them into personalised information provision specifications. Hence the right information can be provided to the right user for the right purpose. Method: An experiment was conducted based on a qualitative method. A medium-sized group of users participated in validating the method and its techniques, i.e. articulates, maps, configures, and learning content. The results were used as feedback for improvement. Result: The research work has produced an ontology model with a set of techniques which support the functions of profiling users' requirements, reasoning over requirements patterns, generating workflows from norms, and formulating information provision specifications. Conclusion: The current requirements engineering approaches provide the methodical capability for developing solutions. Our research outcome, i.e. the ontology model with its techniques, can further enhance RE approaches for modelling individual users' needs and discovering users' requirements.
Abstract:
A simple and practical technique is described for assessing, during the requirements engineering phase, the risks in software system development, that is, the potential for error and consequent loss. The technique uses a goal-based requirements analysis as a framework to identify and rate a set of key issues in order to arrive at estimates of the feasibility and adequacy of the requirements. The technique is illustrated by showing how it has been applied to a real systems development project and how problems in that project could have been identified earlier, thereby avoiding costly additional work and unhappy users.
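The sketch below is only a hypothetical illustration of the kind of issue-rating roll-up the abstract describes, not the paper's actual technique: each goal's requirements are rated against a few key issues and the ratings are combined into a rough adequacy estimate. The goals, issues, scale, and threshold are all invented.

```python
# Ratings on a 1 (poor) to 5 (good) scale for each key issue, per goal.
from statistics import mean

ratings = {
    "record water usage": {"completeness": 4, "consistency": 4,
                           "stakeholder agreement": 5},
    "forecast demand":    {"completeness": 2, "consistency": 3,
                           "stakeholder agreement": 2},
}

for goal, issues in ratings.items():
    score = mean(issues.values())
    verdict = "looks adequate" if score >= 3.5 else "flag as a risk"
    print(f"{goal}: mean rating {score:.1f} -> {verdict}")
```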
Abstract:
We present a method to enhance fault localization for software systems based on a frequent pattern mining algorithm. Our method relies on a large set of test cases for a given set of programs in which faults can be detected. The test executions are recorded as function call trees. Based on test oracles, the tests can be classified as successful or failing. A frequent pattern mining algorithm is used to identify frequent subtrees in successful and failing test executions. This information is used to rank functions according to their likelihood of containing a fault. The ranking suggests an order in which to examine the functions during fault analysis. We validate our approach experimentally using a subset of the Siemens benchmark programs.
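The sketch below does not implement the paper's frequent-subtree mining; it only illustrates the final ranking step with a simplified, Ochiai-style suspiciousness score computed over the sets of functions executed in passing and failing tests, using made-up trace data.

```python
# Rank functions by how strongly their execution correlates with failing
# tests (Ochiai-style score).  Trace data below is invented for illustration.
from collections import Counter
from math import sqrt

passing_traces = [{"parse", "validate", "render"},
                  {"parse", "render"}]
failing_traces = [{"parse", "validate", "compute_total"},
                  {"validate", "compute_total"}]

fail_count = Counter(f for trace in failing_traces for f in trace)
pass_count = Counter(f for trace in passing_traces for f in trace)
total_failing = len(failing_traces)

def suspiciousness(func: str) -> float:
    ef, ep = fail_count[func], pass_count[func]  # executions in fail/pass
    denom = sqrt(total_failing * (ef + ep))
    return ef / denom if denom else 0.0

ranking = sorted(fail_count | pass_count, key=suspiciousness, reverse=True)
for func in ranking:
    print(f"{func}: {suspiciousness(func):.2f}")
```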