980 results for Task-Oriented Methodology
Abstract:
Data warehouse populating processes are usually data-oriented workflows composed of dozens of granular tasks responsible for integrating data coming from different data sources. Specific subsets of these tasks can be grouped into collections, together with their relationships, to form higher-level constructs. Increasing task granularity allows for the generalization of processes, simplifying their views and providing methods to transfer expertise to new applications. Well-proven practices can be used to describe general solutions that use basic skeletons configured and instantiated according to a set of specific integration requirements. Patterns can be applied to ETL processes not only to simplify a possible conceptual representation but also to reduce the gap that often exists between the two design perspectives. In this paper, we demonstrate the feasibility and effectiveness of an ETL pattern-based approach using task clustering, analyzing a real-world ETL scenario through the definition of two commonly used clusters of tasks: a data lookup cluster and a data conciliation and integration cluster.
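The data-lookup cluster described above can be illustrated with a minimal sketch (all names are hypothetical, not taken from the paper): each incoming record is enriched with a surrogate key from a dimension lookup table, and records that miss the lookup are routed to a conciliation step instead of being loaded directly.

```python
def lookup_cluster(records, lookup_table):
    """Split records into enriched hits and unresolved misses."""
    hits, misses = [], []
    for rec in records:
        key = lookup_table.get(rec["customer_id"])
        if key is not None:
            # Hit: attach the surrogate key and pass downstream.
            hits.append({**rec, "customer_sk": key})
        else:
            # Miss: defer to the conciliation/integration cluster.
            misses.append(rec)
    return hits, misses

lookup = {"C1": 1001, "C2": 1002}
rows = [{"customer_id": "C1", "amount": 50},
        {"customer_id": "C9", "amount": 75}]
hits, misses = lookup_cluster(rows, lookup)
# hits   -> the C1 row enriched with customer_sk = 1001
# misses -> the C9 row, to be handled by the conciliation cluster
```

Grouping these granular steps behind one `lookup_cluster` boundary is the kind of higher-level construct the pattern-based approach advocates.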
Abstract:
This paper presents a relational positioning methodology for flexibly and intuitively specifying offline programmed robot tasks, as well as for assisting the execution of teleoperated tasks demanding precise movements. In relational positioning, the movements of an object can be restricted totally or partially by specifying its allowed positions in terms of a set of geometric constraints. These allowed positions are found by means of a 3D sequential geometric constraint solver called PMF – Positioning Mobile with respect to Fixed. PMF exploits the fact that, in a set of geometric constraints, the rotational component can often be separated from the translational one and solved independently.
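The rotation/translation decoupling that PMF exploits can be shown with a toy 2D example (this is an illustration of the general idea, not the actual PMF solver): first solve a parallelism constraint to fix the rotation, then solve a coincidence constraint to fix the translation.

```python
import math

def solve_rotation(mobile_dir, fixed_dir):
    """Angle that aligns mobile_dir with fixed_dir (parallelism constraint)."""
    return (math.atan2(fixed_dir[1], fixed_dir[0])
            - math.atan2(mobile_dir[1], mobile_dir[0]))

def rotate(p, theta):
    """Rotate point p about the origin by theta radians."""
    c, s = math.cos(theta), math.sin(theta)
    return (c * p[0] - s * p[1], s * p[0] + c * p[1])

def solve_translation(rotated_point, anchor):
    """Offset that makes rotated_point coincide with anchor."""
    return (anchor[0] - rotated_point[0], anchor[1] - rotated_point[1])

# Rotational component first: align the mobile edge (0,1) with the x-axis.
theta = solve_rotation((0.0, 1.0), (1.0, 0.0))
moved = rotate((0.0, 1.0), theta)          # ends up at ~(1, 0)
# Translational component second, solved independently.
offset = solve_translation(moved, (5.0, 2.0))
```

Because the two subproblems are solved sequentially, neither solver needs to search the full six-degree-of-freedom configuration space at once.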
Abstract:
This thesis presents πSOD-M (Policy-based Service-Oriented Development Methodology), a methodology for modeling reliable service-based applications using policies. It proposes a model-driven method with: (i) a set of meta-models for representing non-functional constraints associated with service-based applications, from a use case model down to a service composition model; (ii) a platform providing guidelines for expressing compositions and policies; (iii) model-to-model and model-to-text transformation rules for semi-automating the implementation of reliable service-based applications; and (iv) an environment that implements these meta-models and rules and enables the application of πSOD-M. This thesis also presents a classification and nomenclature of non-functional requirements for developing service-oriented applications. Our approach is intended to add value to the development of service-oriented applications that have quality requirements. This work draws on concepts from service-oriented development, non-functional requirements design, and model-driven development to propose a solution that mitigates the problem of modeling reliable services. Some examples are developed as proofs of concept.
Abstract:
Advances in electronics nowadays facilitate the design of smart spaces based on physical mash-ups of sensor and actuator devices. At the same time, software paradigms such as the Internet of Things (IoT) and the Web of Things (WoT) are motivating the creation of technology to support the development and deployment of web-enabled embedded sensor and actuator devices, with two major objectives: (i) to integrate sensing and actuating functionalities into everyday objects, and (ii) to easily allow a diversity of devices to plug into the Internet. Currently, developers applying this Internet-oriented approach need a solid understanding of specific platforms and web technologies. To ease this development process, this research proposes a Resource-Oriented and Ontology-Driven Development (ROOD) methodology based on the Model Driven Architecture (MDA). This methodology aims to enable the development of smart spaces through a set of modeling tools and semantic technologies that support the definition of the smart space and the automatic generation of code at the hardware level. ROOD's feasibility is demonstrated by building an adaptive health-monitoring service for a Smart Gym.
Abstract:
Testing is nowadays the most widely used technique to validate software and assess its quality. It is integrated into all practical software development methodologies and plays a crucial role in the success of any software project. From the smallest units of code to the most complex components, their integration into a software system, and later deployment, all pieces of a software product must be tested thoroughly before it can be released to a production environment. The main limitation of software testing is that it remains a mostly manual task, representing a large fraction of the total development cost. In this scenario, test automation is paramount to alleviate such high costs. Test case generation (TCG) is the process of automatically generating test inputs that achieve high coverage of the program under test. Among the wide variety of approaches to TCG, this thesis focuses on structural (white-box) TCG, where one of the most successful enabling techniques is symbolic execution. In symbolic execution, the program under test is executed with symbolic expressions as input arguments rather than concrete values. This thesis relies on a general constraint-based TCG framework for imperative object-oriented programs (e.g., Java): the imperative program under test is first translated into an equivalent constraint logic program, which is then symbolically executed using the standard evaluation mechanisms of Constraint Logic Programming (CLP), extended with special operations for handling dynamically allocated data structures.
Improving the scalability and efficiency of symbolic execution constitutes a major challenge. It is well known that symbolic execution quickly becomes impractical due to the large number of execution paths that must be explored and the size of the constraints that must be handled. Moreover, symbolic-execution-based TCG tends to produce an unnecessarily large number of test cases when applied to medium-sized or large programs. The contributions of this dissertation can be summarized as follows. (1) A compositional approach to CLP-based TCG is developed which alleviates the inter-procedural path explosion by analyzing each component (e.g., method) of the program under test separately, storing the results as method summaries, and incrementally reusing them to obtain whole-program results. A similar compositional strategy based on program specialization (partial evaluation) is also developed for the state-of-the-art symbolic execution tool Symbolic PathFinder (SPF). (2) Resource-driven TCG is proposed as a methodology that uses resource-consumption information to steer symbolic execution towards those parts of the program under test that comply with a user-provided resource policy, avoiding the exploration of those parts that violate it. (3) A generic methodology is proposed to guide symbolic execution towards the most interesting parts of a program, using abstractions as trace generators to steer the execution according to structural selection criteria. (4) A new heap-constraint solver is proposed which efficiently handles constraints on dynamically allocated memory (the heap) and aliasing of references during symbolic execution, greatly outperforming the standard technique used for this purpose, lazy initialization. (5) All of the above techniques have been implemented in the PET system (the compositional approach has also been implemented in the SPF tool). Experimental evaluation has confirmed that they considerably improve the scalability and efficiency of symbolic execution and test case generation.
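The core TCG-by-symbolic-execution idea can be sketched in a few lines (a deliberately tiny illustration, far simpler than PET or SPF, with hypothetical names): each feasible path of a branching program accumulates constraints on a symbolic input, and a stand-in "constraint solver" finds one concrete input per path.

```python
def program_paths():
    """Path constraints of:  if x > 10: ... elif x > 0: ... else: ..."""
    return [
        ("big",    [lambda x: x > 10]),
        ("small",  [lambda x: x <= 10, lambda x: x > 0]),
        ("nonpos", [lambda x: x <= 10, lambda x: x <= 0]),
    ]

def solve(constraints, domain=range(-20, 21)):
    """Stand-in for a real solver: first domain value satisfying all constraints."""
    for v in domain:
        if all(c(v) for c in constraints):
            return v
    return None  # path is infeasible within the searched domain

# One concrete test input per feasible path: full branch coverage.
test_suite = {name: solve(cs) for name, cs in program_paths()}
# e.g. {"big": 11, "small": 1, "nonpos": -20}
```

The path-explosion problem is visible even here: the number of paths grows multiplicatively with nested branching, which is what the compositional method summaries of contribution (1) are designed to contain.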
Abstract:
Description and critical analysis of a postgraduate workshop methodology to be carried out between two universities, conducted in English and supported by new technologies.
Abstract:
National Highway Traffic Safety Administration, Washington, D.C.
Abstract:
The application of systems thinking to designing, managing, and improving business processes has led to a new "holonic-based" process modeling methodology. The theoretical background and the methodology are described using examples taken from a large organization that designs and manufactures capital goods equipment and operates within a complex and dynamic environment. A key point of differentiation of this methodology is that it allows a set of models to be produced without taking a task-breakdown approach; instead it uses systems thinking and a construct known as the "holon" to build process descriptions as a system of systems (i.e., a holarchy). The process-oriented holonic modeling methodology has been used for total quality management and business process engineering exercises in different industrial sectors and builds models that connect the strategic vision of a company to its operational processes. Exercises have been conducted in response to environmental pressures to align operations with strategic thinking while becoming increasingly agile and efficient. This methodology is best applied in environments of high complexity, low volume, and high variety, where repeated learning opportunities are few and far between (e.g., large development projects). © 2007 IEEE.
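The holarchy construct can be sketched as a recursive data structure (all names hypothetical, not from the paper): each holon is simultaneously a whole, with its own purpose, and a part of a larger holon, so the process model nests as a system of systems rather than a flat task breakdown.

```python
class Holon:
    """A process unit that is both a whole (has a purpose) and a part."""
    def __init__(self, purpose, parts=()):
        self.purpose = purpose
        self.parts = list(parts)   # sub-holons, themselves systems

    def depth(self):
        """Levels of nesting in this holarchy."""
        return 1 + max((p.depth() for p in self.parts), default=0)

# A toy holarchy linking strategic vision to operational processes.
company = Holon("deliver capital goods", [
    Holon("win orders", [Holon("prepare bids")]),
    Holon("fulfil orders", [Holon("design"), Holon("manufacture")]),
])
# company.depth() -> 3
```

The point of the construct is that every level is a complete description in its own right, unlike a task list, whose items are meaningful only relative to the whole.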
Abstract:
We present our approach to real-time service-oriented scheduling problems with the objective of maximizing the total system utility. Unlike traditional utility accrual scheduling problems, in which each task is associated with only a single time utility function (TUF), we associate two different TUFs—a profit TUF and a penalty TUF—with each task, to model real-time services that not only need to reward early completions but also need to penalize abortions and deadline misses. The scheduling heuristics proposed in this paper judiciously accept, schedule, and abort real-time services when necessary to maximize the accrued utility. Our extensive experimental results show that the proposed algorithms can significantly outperform traditional scheduling algorithms such as Earliest Deadline First (EDF), traditional utility accrual (UA) scheduling algorithms, and an earlier scheduling approach based on a similar model.
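The dual-TUF idea can be sketched as follows (the linear profit shape, the flat penalty, and the admission rule are illustrative assumptions, not the paper's actual heuristics): a task pays a time-decaying profit if it completes by its deadline and charges a fixed penalty if it is aborted, and the scheduler admits a task only when its expected accrued utility is positive.

```python
def profit_tuf(t, deadline, max_profit):
    """Linearly decaying reward for completing at time t; zero after deadline."""
    return max_profit * (1 - t / deadline) if t <= deadline else 0.0

def penalty_tuf(miss_penalty):
    """Flat charge for aborting or missing the deadline."""
    return -miss_penalty

def admit(expected_finish, deadline, max_profit, miss_penalty, p_abort):
    """Accept a task iff its expected accrued utility is positive."""
    expected = ((1 - p_abort) * profit_tuf(expected_finish, deadline, max_profit)
                + p_abort * penalty_tuf(miss_penalty))
    return expected > 0

# Early expected finish, low abort risk: worth accepting.
admit(5, 10, max_profit=100, miss_penalty=40, p_abort=0.2)   # True
# Late expected finish, high abort risk: rejecting avoids negative utility.
admit(9, 10, max_profit=100, miss_penalty=40, p_abort=0.9)   # False
```

Admission control of this kind is what lets the scheduler beat profit-only policies such as EDF: a task that would accrue negative expected utility is never started at all.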
Abstract:
Part 12: Collaboration Platforms
Abstract:
Monitoring agricultural crops constitutes a vital task for the general understanding of the spatio-temporal dynamics of land use. This paper presents an approach for enhancing current crop monitoring capabilities on a regional scale, in order to allow for the analysis of environmental and socio-economic drivers and impacts of agricultural land use. This work discusses the advantages and current limitations of using 250 m VI data from the Moderate Resolution Imaging Spectroradiometer (MODIS) for this purpose, with emphasis on the difficulty of correctly analyzing pixels whose temporal responses are disturbed by certain sources of interference, such as mixed or heterogeneous land cover. It is shown that the influence of noisy or disturbed pixels can be minimized, and a much more consistent and useful result attained, if individual agricultural fields are identified and each field's pixels are analyzed collectively. To this end, a method is proposed that uses image segmentation techniques based on MODIS temporal information to identify portions of the study area that correspond to actual agricultural field borders. Each segment is then analyzed individually in order to estimate the reliability of the observed temporal signal and the consequent relevance of any land-use estimation derived from that data. The proposed method was applied in the state of Mato Grosso, in mid-western Brazil, where extensive ground truth data was available. Experiments were carried out using several supervised classification algorithms as well as different subsets of land cover classes, in order to test the methodology comprehensively. Results show that the proposed method consistently improves classification results, not only in terms of overall accuracy but also qualitatively, by allowing a better understanding of the detected land use patterns. It thus provides a practical and straightforward procedure for enhancing crop-mapping capabilities using temporal series of moderate-resolution remote sensing data.
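The per-field aggregation at the heart of the approach can be sketched simply (an illustration of the idea, not the paper's actual pipeline): instead of trusting each MODIS pixel in isolation, pixels are grouped into a segment that follows field borders, and the segment takes the majority label of its per-pixel classifications, suppressing noisy or mixed pixels.

```python
from collections import Counter

def classify_segment(pixel_labels):
    """Majority vote over the per-pixel labels of one field segment."""
    return Counter(pixel_labels).most_common(1)[0][0]

# One segment (field): three clean pixels, one mixed, one noisy.
segment = ["soy", "soy", "cotton", "soy", "noise"]
label = classify_segment(segment)   # "soy" — the outliers are outvoted
```

A per-pixel classifier would have mislabeled two of the five pixels here; the collective decision recovers the field's true use, which matches the reported qualitative improvement in land-use patterns.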
Abstract:
The general objective of this work is to present mechanisms for analyzing and validating proposed teaching materials in the form of WebQuests and, based on these mechanisms, to design and validate three critically situated WebQuests for the teaching of English as an additional language. The WebQuests were designed to prioritize the development of the learner's critical digital literacy and communicative competence in English as an additional language. The WebQuest belongs to a methodology of the same name and consists of an inquiry-oriented activity organized in stages, in which all or most of the content to be accessed and needed to complete the task(s) is available online (DODGE, 1995). This study follows a qualitative approach and also adopts a development-research methodology. Its methodological design was organized in three stages: a needs analysis, the design of three WebQuests, and the analysis of the designed WebQuests against a rubric. The literature review, which formed part of the needs analysis, suggests that the demands of the 21st century call for greater attention and investment in the development of multiple and critical literacies, of communicative and interactional competences, and of citizenship education. The analysis of WebQuests available for English teaching on the main Brazilian WebQuest site, which formed the second part of the needs analysis, revealed a scarcity of WebQuests that meaningfully address both critical digital literacy and aspects of the learner's communicative competence in the additional language.
The needs analysis as a whole provided relevant input for the process of designing the WebQuests proposed in this study, which was also grounded in the guidelines and principles of the WebQuest model and in much of its theoretical foundation. As the final phase of this study, the three WebQuests were submitted to validation against a rubric created specifically for this purpose. The validation results for the three WebQuests suggest that these materials are valid from a theoretical standpoint, since they show that the tools created are in line with Dodge's WebQuest model (1995, 2001) and with the quality recommendations of Bottentuit Junior and Coutinho (2008a, 2012), and that they are anchored in socio-constructivist and situated-learning theory and in the methodological principles of task-based language teaching and of Content and Language Integrated Learning (CLIL). We conclude that the three WebQuests are English-teaching materials that depart from the traditional content-centered focus historically devoted to vocabulary and grammar in the target language, going beyond linguistic objectives to also pursue the social and cultural objectives of teaching English as an additional language, insofar as they intentionally develop the learner's communicative competence and critical digital literacy in parallel (understanding the two as complementary), thereby contributing to citizenship education and to the learner's "inclusion" in the social and digital world.