970 results for: Cabri Géomètre II software
Abstract:
The article advocates the use of video games as an instrument for acquiring digital competences of the following types: instrumental competences (use of the computer, peripherals, operating system and software), resource-management competences (use of information sources within the game itself and about the game in other sources), competences in multimedia environments (analysis of and reflection on navigation, feedback and intentionality), communication competences (through e-mail, blogs or wikis), competences for reflective criticism (selection and evaluation of programs), and competences for developing good practices in social skills.
Abstract:
This is the second of two articles about weblogs, also known as 'cuadernos de bitácora' or simply 'bitácoras'. This second part looks more deeply at weblog technology (the XML and RSS formats and the practices associated with them), at their use for storytelling and new journalism, and at educational applications (written communication between peers and the teacher's role as editor in general). The article surveys certain areas of human activity in which weblog software, and the content-production and platform-interoperability models built around weblogs, are proving ever more useful and powerful, encouraging and inspiring innovative developments and new uses of the Web.
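As an illustration of the RSS/XML formats the article discusses, here is a minimal Python sketch that parses a weblog feed and lists its entries; the feed content and URLs are invented for illustration and are not taken from the article.

```python
# Minimal sketch: parsing an RSS 2.0 feed of the kind weblog engines publish.
# The sample XML below is illustrative, not taken from the article.
import xml.etree.ElementTree as ET

SAMPLE_RSS = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example weblog</title>
    <link>https://example.org/blog</link>
    <item>
      <title>First post</title>
      <link>https://example.org/blog/first-post</link>
      <pubDate>Mon, 01 Sep 2003 10:00:00 GMT</pubDate>
    </item>
  </channel>
</rss>"""

def list_entries(rss_text: str):
    """Return (title, link) pairs for every <item> in the feed."""
    root = ET.fromstring(rss_text)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

if __name__ == "__main__":
    for title, link in list_entries(SAMPLE_RSS):
        print(title, "->", link)
```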
Abstract:
The core of this dissertation is the challenge of making it easier to access the information contained in the bibliographic database of the Biblioteca Universitária João Paulo II (BUJPII) of the Universidade Católica Portuguesa (UCP). Its subject content has so far been represented by the Universal Decimal Classification (UDC), a documentary language that is not very accessible to most of our users, who are mainly university students; they consider it an unfriendly search tool because they are barely, if at all, familiar with this kind of numerical classification and prefer keyword access to the subject content of the works. With this aim in mind, we undertook this research by harmonizing (mapping) the UDC notations used to classify the BUJPII collection against a simplified list of Library of Congress Subject Headings, in order to begin assigning subject headings, mapped from the UDC notations, to part of those holdings, whose subject retrieval has until now relied on the UDC alone. The study focused experimentally on a sample of monographs from areas that had been classified but not yet indexed, whose bibliographic records are held in the database of the Biblioteca Universitária João Paulo II. The project consisted of assigning subject headings translated manually into Portuguese from the English list of Library of Congress Subject Headings (LCSH), chosen to be semantically as close as possible to the subjects corresponding to the UDC notations with which the monographs had previously been classified. The work was first carried out manually and then loaded into the Horizon software, the integrated library management system in use at the Biblioteca Universitária João Paulo II, the future goal being to index all areas of its bibliographic collection as a privileged complementary means of access to information.
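To make the harmonization idea concrete, the following Python sketch shows a toy UDC-to-subject-heading concordance; the notations and Portuguese headings are invented examples and are not taken from the dissertation's actual mapping for the BUJPII catalogue.

```python
# Illustrative sketch of a UDC-to-subject-heading concordance.
# The notations and headings below are hypothetical examples.

UDC_TO_HEADING = {
    "004.42": "Programas de computador",        # hypothetical entry
    "027.7": "Bibliotecas universitárias",      # hypothetical entry
    "341.1": "Organizações internacionais",     # hypothetical entry
}

def headings_for(udc_notation: str) -> list[str]:
    """Return the subject heading(s) mapped to a UDC notation, if any."""
    heading = UDC_TO_HEADING.get(udc_notation)
    return [heading] if heading else []

print(headings_for("027.7"))  # ['Bibliotecas universitárias']
```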
Abstract:
In Empire and Multitude, Antonio Negri and Michael Hardt propose that in today's world the dominant force controlling capitalism, and therefore power, is Empire. Empire draws its strength from control over intellectual production, and its power is growing during this period of transition in the capitalist model. This essay argues that those oppressed by Empire, who as a class make up the multitude, need free software to realize their dream: democracy. Free software is at once the best example of what democracy can be and a tool that makes its expansion possible. Moreover, its potential in the Andean region is even greater because of the weakness of the liberal-democracy model promoted by Empire.
Abstract:
Chapter I, General Notions of Intellectual Property and Enterprise, surveys the forms of protection for computer programs, copyright in particular. The chapter places real emphasis on the development of science and technology, which is a fundamental pillar for the legal protection of software to develop. Chapter II gathers information on legislation at both the national and international level. It offers a general analysis of the legislation and of the legal procedures used in Ecuador, and also covers new international instruments for software protection that do not appear in Ecuadorian law and are therefore a novelty in the national context. Chapter III, entitled Computer Programs in Ecuador, deals with the creation of software as a sequence of systematic steps: the procedure for creating a computer program must follow certain parameters that guide the way to the desired result. The alternative mechanisms for protecting computer programs in Ecuador, covered in Chapter IV, are fundamental in a world of constant change: in the absence of complete protection under the law, entrepreneurs and software creators have tried on their own to equip themselves with mechanisms that give them tighter and more effective control over the software they produce, in order to prevent piracy and the losses it causes. To determine the cost of creating software, Chapter V analyses in general terms what a given program (application) may cost to build and the effect this has in the global context (GNP). Finally, Chapter VI studies the effects of globalization on the software field. It is unsettling to consider what globalization may bring to a small economy such as ours: the influence of powerful organizations and the possible legal and technological changes that may occur both nationally and internationally. This situation may become worrying in the future, depending on the environment and how it develops.
Abstract:
This paper describes some of the preliminary outcomes of a UK project looking at control education. The focus is on two aspects: (i) the most important control concepts and theories for students taking just one or two courses and (ii) the effective use of software to improve student learning and engagement. There is also some discussion of the right balance between teaching theory and practice. The paper gives examples from numerous UK universities along with some industrial comment.
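As an example of the kind of software exercise the paper has in mind, the following Python sketch simulates the closed-loop step response of a first-order plant under proportional control; the parameter values are arbitrary illustrations, not taken from the paper.

```python
# Step response of G(s) = 1/(tau*s + 1) under proportional control,
# integrated with a simple Euler scheme. Values are illustrative.
import numpy as np

def step_response(kp=2.0, tau=1.0, t_end=10.0, dt=0.01):
    """Simulate y' = (kp*(r - y) - y)/tau for a unit step reference r = 1."""
    n = int(t_end / dt)
    t = np.linspace(0.0, t_end, n)
    y = np.zeros(n)
    r = 1.0
    for k in range(1, n):
        u = kp * (r - y[k - 1])              # proportional control action
        y[k] = y[k - 1] + dt * (u - y[k - 1]) / tau
    return t, y

t, y = step_response()
print(f"steady-state output ≈ {y[-1]:.3f} (expected kp/(1+kp) = {2.0/3.0:.3f})")
```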
Abstract:
The growing complexity of applications, continuous technological evolution and the ever more widespread use of computer networks have driven research into the development of distributed systems. Because such systems cannot easily be built with traditional software technologies, which are limited in dealing with aspects such as distribution and interoperability, agent-based technology appears to be a promising answer for easing their development, since it was designed to support these aspects, among others. The architecture of software development environments (SDEs) must therefore also evolve to support new development methodologies that offer the support needed to build complex software, possibly integrated with other technologies such as agents. In this context, this dissertation presents the specification of an architecture for a distributed, agent-based SDE called DiSEN (Distributed Software Engineering Environment). The environment is intended to support distributed software development, with the people involved possibly located in geographically distinct places and working cooperatively. The proposed architecture comprises the following layers: the dynamic layer, responsible for managing the (re)configuration of the environment at run time; the application layer, whose elements include MDSODI (a methodology for distributed software development), which takes characteristics identified in distributed systems into account from the early phases of a project, and the repository that stores the data the environment needs; and the infrastructure layer, which provides support for naming, persistence and concurrency and incorporates the communication channel. To validate the environment, the communication that may be required among the constituent parts of DiSEN is simulated through use case and sequence diagrams drawn in the MDSODI notation. The main contributions of this work are therefore: (i) the specification of the architecture of a distributed SDE that can be geographically distributed, incorporates MDSODI, supports distributed development and has activities executed by agents; (ii) the requirement that the agents identified for DiSEN be developed according to the FIPA (Foundation for Intelligent Physical Agents) standard; and (iii) the identification of an element that supports cooperative work, allowing the integration of professionals, agents and artifacts.
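A very rough Python sketch of the three layers described above is given below; the class and method names are hypothetical, since the dissertation specifies the architecture with MDSODI/UML models rather than code.

```python
# Rough sketch of the three DiSEN layers described in the abstract.
# All names are hypothetical illustrations.

class InfrastructureLayer:
    """Naming, persistence, concurrency and the communication channel."""
    def send(self, sender: str, receiver: str, message: str) -> None:
        print(f"[channel] {sender} -> {receiver}: {message}")

class ApplicationLayer:
    """Hosts MDSODI-based development activities and the artifact repository."""
    def __init__(self, infra: InfrastructureLayer):
        self.infra = infra
        self.repository: dict[str, str] = {}

    def store_artifact(self, name: str, content: str) -> None:
        self.repository[name] = content
        self.infra.send("application", "repository", f"stored {name}")

class DynamicLayer:
    """(Re)configures the environment at run time, e.g. registering agents."""
    def __init__(self):
        self.agents: list[str] = []

    def register_agent(self, agent_name: str) -> None:
        self.agents.append(agent_name)   # e.g. a FIPA-compliant agent

# Wiring the layers together (illustrative only).
infra = InfrastructureLayer()
app = ApplicationLayer(infra)
dyn = DynamicLayer()
dyn.register_agent("cooperation-support-agent")
app.store_artifact("use-case-diagram", "<MDSODI notation>")
```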
Abstract:
Generalized hypercompetitiveness in world markets has created the need to offer better products to potential and actual clients in order to secure an advantage over competitors. To ensure the production of an adequate product, enterprises need to work on the efficiency and efficacy of their business processes (BPs) by means of the construction of Interactive Information Systems (IISs, including Interactive Multimedia Documents) so that these processes run more fluidly and correctly. Building the right IIS is a major task that can only succeed if the needs of every stakeholder are taken into account: their requirements must be defined with precision and extensively analyzed, and the system must then be accurately designed in order to minimize implementation problems, so that the IIS is produced on schedule and with as few mistakes as possible. The main contribution of this thesis is the proposal of Goals, a software (engineering) construction process that defines the tasks to be carried out in order to develop software. This process defines the stakeholders, the artifacts, and the techniques that should be applied to achieve a correct IIS. In addition, the process suggests two methodologies to be applied in the initial phases of the software engineering lifecycle: Process Use Cases for the requirements phase, and MultiGoals for the analysis and design phases. Process Use Cases is a UML-based (Unified Modeling Language), goal-driven and use-case-oriented methodology for the definition of functional requirements. It uses an information-oriented strategy to identify BPs while constructing the enterprise's information structure, and ends with the identification of use cases within the design of these BPs. This approach provides a useful tool for both Business Process Management and Software Engineering. MultiGoals is a UML-based, use-case-driven and architecture-centric methodology for the analysis and design of IISs with support for multimedia. It proposes the analysis of user tasks as the basis for designing: (i) the user interface; (ii) the system behaviour, modeled by means of patterns that can combine multimedia and standard information; and (iii) the database and media contents. The thesis presents these approaches theoretically, accompanied by examples from a real project that support the understanding of the techniques used.
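The following Python sketch illustrates, with invented names and fields, the kind of goal-driven artifacts Process Use Cases produces (business processes decomposed into goals realized by use cases); the methodology itself is defined with UML models, not with this code.

```python
# Hypothetical data structures for goal-driven, use-case-oriented requirements.
from dataclasses import dataclass, field

@dataclass
class UseCase:
    name: str
    actors: list[str]

@dataclass
class Goal:
    description: str
    use_cases: list[UseCase] = field(default_factory=list)

@dataclass
class BusinessProcess:
    name: str
    goals: list[Goal] = field(default_factory=list)

# Illustrative example of one business process with one goal and two use cases.
order_handling = BusinessProcess(
    name="Order handling",
    goals=[Goal("Register customer order",
                [UseCase("Submit order", ["Customer"]),
                 UseCase("Validate stock", ["Inventory system"])])],
)
print(f"{order_handling.name}: "
      f"{sum(len(g.use_cases) for g in order_handling.goals)} use cases")
```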
Abstract:
Alterations in the neuropsychomotor development of children are not rare and can manifest themselves with varying intensity at different stages of development. In this context, maternal risk factors may contribute to the appearance of these alterations. A number of studies have reported that diagnosing neuropsychomotor development is not an easy task, especially in the basic public health network; diagnosis requires effective, low-cost and easy-to-apply procedures. The Denver Developmental Screening Test, first published in 1967, is currently used in several countries; it has been revised, renamed the Denver II Test, and meets these criteria. Accordingly, the aim of this study was to apply the Denver II Test to determine the prevalence of suspected neuropsychomotor development delay in children between 0 and 12 months of age and correlate it with the following maternal risk factors: family income, schooling, age at pregnancy, drug use during pregnancy, gestational age, gestational problems, type of delivery and the desire to have children. For data collection, performed during the first 6 months of 2004, a clinical assessment was made of 398 children selected by pediatricians and the nursing team of each public health unit. The parents or guardians were then asked to complete a structured questionnaire to determine possible risk indicators of neuropsychomotor development delay, and finally the Denver II Developmental Screening Test (DDST) was applied. The data were analyzed together using the Statistical Package for the Social Sciences (SPSS), version 6.1, with the confidence interval set at 95%. The Denver II Test yielded both normal and questionable results; the latter suggest compromised neuropsychomotor development in the children examined and deserve further investigation. The correlation of the results with the pre-established maternal risk variables family income, mother's schooling, age at pregnancy, drug use during pregnancy and gestational age was strongly significant; the other maternal risk variables (gestational problems, type of delivery and desire to have children) were not significant. Using an adjusted logistic regression model, we estimated that a child is most likely to have suspected neuropsychomotor development delay when the mother has four years of schooling or less, is under 20 years of age and used drugs during pregnancy. This study produced two manuscripts: one, published in Acta Cirúrgica Brasileira, analyzed children with suspected neuropsychomotor development delay in the city of Natal, Brazil; the other (to be published) analyzed the magnitude of the independent variable maternal schooling associated with neuropsychomotor development delay every 3 months during the first twelve months of life of the selected children. The results of the present study reinforce the multifactorial character of development and the cumulative effect of maternal risk factors, and show the need for a regional policy that promotes low-cost programs for the community involving children at risk of neuropsychomotor development delay. They also suggest the need for health professionals better qualified in monitoring child development.
This was an inter- and multidisciplinary study with the integrated participation of doctors, nurses, nursing assistants and professionals from other areas, such as statisticians and information technology professionals, and it met all the requirements of the Postgraduate Program in Health Sciences of the Federal University of Rio Grande do Norte.
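The sketch below illustrates the kind of adjusted logistic regression reported above, fitted here on synthetic data (the study's actual analysis used SPSS 6.1); the predictors follow the abstract, but the simulated effect sizes are invented.

```python
# Adjusted logistic regression on synthetic data mimicking the study design.
# Predictors: low maternal schooling, age < 20 years, drug use during pregnancy.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 398  # same sample size as the study; the data themselves are simulated
low_schooling = rng.integers(0, 2, n)
teen_mother   = rng.integers(0, 2, n)
drug_use      = rng.integers(0, 2, n)

# Simulated outcome: suspected neuropsychomotor delay (1) vs normal (0).
logit = -2.0 + 1.0 * low_schooling + 0.8 * teen_mother + 1.2 * drug_use
delay = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

X = np.column_stack([low_schooling, teen_mother, drug_use])
model = LogisticRegression().fit(X, delay)

for name, coef in zip(["low schooling", "age < 20", "drug use"], model.coef_[0]):
    print(f"{name}: odds ratio ≈ {np.exp(coef):.2f}")
```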
Abstract:
Nowadays, the importance of using software processes is well established and considered fundamental to the success of software development projects. Large and medium software projects demand the definition and continuous improvement of software processes in order to promote the productive development of high-quality software. Customizing and evolving existing software processes to address the variety of scenarios, technologies, cultures and scales is a recurrent challenge for the software industry: it involves adapting software process models to the reality of each project, and it must also promote the reuse of past experience when defining and developing software processes for new projects. Adequate management and execution of software processes can bring better quality and productivity to the software systems produced. This work explores the use and adaptation of consolidated software product line techniques to manage the variabilities of software process families. To achieve this aim: (i) a systematic literature review is conducted to identify and characterize variability management approaches for software processes; (ii) an annotative approach for the variability management of software process lines is proposed and developed; and finally (iii) empirical studies and a controlled experiment assess and compare the proposed annotative approach against a compositional one. One study, a comparative qualitative study, analyzed the annotative and compositional approaches from different perspectives, such as modularity, traceability, error detection, granularity, uniformity, adoption, and systematic variability management. Another study, a comparative quantitative study, considered internal attributes of software process line specifications, such as modularity, size and complexity. Finally, a controlled experiment evaluated the effort required to use the investigated approaches, and their understandability, when modeling and evolving specifications of software process lines. The studies provide evidence of several benefits of the annotative approach, and of its potential for integration with the compositional approach, in supporting the variability management of software process lines.
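A minimal sketch of the annotative idea is shown below, with invented feature and activity names: each process element carries feature annotations, and a product-specific process is derived by keeping only the elements enabled by the chosen configuration.

```python
# Annotative variability management for a software process line (toy example).
# Feature names and activities are illustrative, not the dissertation's.

PROCESS_LINE = [
    ("Elicit requirements",  {"core"}),
    ("Design architecture",  {"core"}),
    ("Model safety cases",   {"safety_critical"}),
    ("Run acceptance tests", {"core"}),
    ("Perform code audit",   {"regulated_domain"}),
]

def derive_process(enabled_features: set[str]) -> list[str]:
    """Keep activities whose annotations are a subset of the enabled features."""
    enabled = enabled_features | {"core"}          # core activities always stay
    return [name for name, tags in PROCESS_LINE if tags <= enabled]

print(derive_process({"safety_critical"}))
# ['Elicit requirements', 'Design architecture', 'Model safety cases', 'Run acceptance tests']
```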
Abstract:
Motion estimation is primarily responsible for the data reduction achieved in digital video encoding, and it is also the most computationally demanding step. H.264 is the newest video compression standard and was designed to double the compression ratio achieved by previous standards. It was developed by the ITU-T Video Coding Experts Group (VCEG) together with the ISO/IEC Moving Picture Experts Group (MPEG) as the product of a partnership effort known as the Joint Video Team (JVT). H.264 introduces features that improve motion estimation efficiency, such as variable block sizes, quarter-pixel precision and multiple reference frames. This work defines a hardware/software architecture for motion estimation using a full search algorithm, variable block sizes and mode decision. It considers the use of reconfigurable devices, soft processors and development tools for embedded systems such as Quartus II, SOPC Builder, Nios II and ModelSim.
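The following Python sketch shows the full-search block-matching idea that the work maps to hardware, with illustrative block and search-window sizes; the thesis itself implements this on reconfigurable hardware with a Nios II soft processor, not in Python.

```python
# Full-search block matching: for each block of the current frame, test every
# candidate displacement in a search window of the reference frame and keep
# the one with the lowest SAD (sum of absolute differences).
import numpy as np

def full_search(cur_block: np.ndarray, ref: np.ndarray,
                top: int, left: int, search_range: int = 8):
    """Return (dy, dx, sad) of the best match for cur_block inside ref."""
    n = cur_block.shape[0]
    best = (0, 0, np.inf)
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + n > ref.shape[0] or x + n > ref.shape[1]:
                continue
            sad = np.abs(cur_block.astype(int) - ref[y:y+n, x:x+n].astype(int)).sum()
            if sad < best[2]:
                best = (dy, dx, sad)
    return best

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (64, 64), dtype=np.uint8)
cur = np.roll(ref, shift=(2, -3), axis=(0, 1))       # synthetic motion
print(full_search(cur[16:32, 16:32], ref, 16, 16))   # expect dy = -2, dx = 3, sad = 0
```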
Abstract:
The adoption of the software product line (SPL) approach brings several benefits compared with conventional development processes based on creating a single software system at a time. Developing an SPL differs from traditional software construction in that it has two essential phases: domain engineering, in which the common and variable elements of the SPL are defined and implemented, and application engineering, in which one or more applications (specific products) are derived by reusing the artifacts created in domain engineering. The testing activity is also fundamental and aims to detect defects in the artifacts produced during SPL development; however, the characteristics of an SPL bring new challenges to this activity that must be considered. Several approaches have recently been proposed for the product line testing process, but they have proved limited and provide only general guidelines. There is also a lack of tools to support variability management and the customization of automated test cases for SPLs. In this context, this dissertation proposes a systematic approach to software product line testing. The approach offers: (i) automated SPL test strategies to be applied in domain and application engineering; (ii) explicit guidelines to support the implementation and reuse of automated test cases at the unit, integration and system levels in domain and application engineering; and (iii) tool support for automating variability management and the customization of test cases. The approach is evaluated through its application to a software product line for web systems. The results show that the proposed approach can help developers deal with the challenges imposed by the characteristics of SPLs during the testing process.
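One way to picture the reuse of automated test cases across derived products is sketched below, with invented feature names and placeholder tests; it simply binds each test to a feature so that a derived product only runs the tests for the features it includes.

```python
# Feature-bound test cases for a product derived from an SPL (toy example).
import unittest

PRODUCT_FEATURES = {"catalog", "payment"}        # configuration of one product

def requires_feature(name):
    """Skip the test when the derived product does not include the feature."""
    return unittest.skipUnless(name in PRODUCT_FEATURES,
                               f"feature '{name}' not selected")

class DomainTests(unittest.TestCase):
    @requires_feature("catalog")
    def test_catalog_listing(self):
        self.assertTrue(True)            # placeholder for a real unit test

    @requires_feature("wishlist")
    def test_wishlist_add(self):
        self.assertTrue(True)            # skipped: 'wishlist' is not selected

if __name__ == "__main__":
    unittest.main(verbosity=2)
```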
Abstract:
Mining Software Repositories (MSR) is a research area that analyses software repositories in order to derive information relevant to software engineering research and practice. The main goal of repository mining is to turn static information from repositories (e.g. the code repository or the change request system) into valuable information that supports decision making in software projects. Another research area, Process Mining (PM), aims to discover the characteristics of the underlying processes of business organizations, supporting process improvement and documentation. Recent work has combined MSR and PM techniques to: (i) investigate the evolution of software projects; (ii) understand the real underlying process of a project; and (iii) create defect prediction models. However, few studies have focused on analyzing the contributions of software developers by means of MSR and PM techniques. In this context, this dissertation presents two empirical studies that use these techniques to assess the contribution of software developers to an open-source project and to a commercial project. The contributions of developers are assessed from three different perspectives: (i) buggy commits; (ii) the size of commits; and (iii) the most important bugs. For the open-source project 12,827 commits and 8,410 bugs were analyzed, while 4,663 commits and 1,898 bugs were analyzed for the commercial project. Our results indicate that, for the open-source project, the developers classified as core developers contributed more buggy commits (although they also contributed the majority of commits), more code (commit size) and more solved important bugs, while for the commercial project the results could not indicate statistically significant differences between developer groups.
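A simplified sketch of the "buggy commits" perspective is given below; real MSR studies typically link commits to an issue tracker, whereas this toy version only matches fix-related keywords in the commit messages of a local git repository (the path is a placeholder).

```python
# Counting "buggy" (fix-related) commits per author from a local git history.
import re
import subprocess

FIX_PATTERN = re.compile(r"\b(fix(es|ed)?|bug|defect)\b", re.IGNORECASE)

def commit_messages(repo_path: str) -> list[tuple[str, str]]:
    """Return (author, subject) pairs from `git log` of the given repository."""
    out = subprocess.run(
        ["git", "-C", repo_path, "log", "--pretty=format:%an\t%s"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [tuple(line.split("\t", 1)) for line in out.splitlines() if "\t" in line]

def buggy_commits_per_author(repo_path: str) -> dict[str, int]:
    counts: dict[str, int] = {}
    for author, message in commit_messages(repo_path):
        if FIX_PATTERN.search(message):
            counts[author] = counts.get(author, 0) + 1
    return counts

if __name__ == "__main__":
    # Replace the placeholder with a real repository path before running.
    print(buggy_commits_per_author("/path/to/some/repo"))
```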
Abstract:
Smart sensors are devices that differ from ordinary sensors in their ability to process the data they monitor. They typically consist of a power source, transducers (sensors and actuators), memory, a processor and a transceiver. According to the IEEE 1451 standard, a smart sensor can be divided into TIM and NCAP modules, which must communicate through a standardized interface called the TII. The NCAP module is the part of the smart sensor that houses the processor and is therefore responsible for giving the sensor its intelligence. Several approaches can be used to develop this module, most notably those based on low-cost microcontrollers and/or FPGAs. This work addresses the development of a hardware/software architecture for an NCAP module according to the IEEE 1451.1 standard. The hardware infrastructure consists of an RS-232 interface driver, 512 kB of RAM, a TII interface, the NIOS II embedded processor and a simulator of the TIM module. The SOPC Builder automatic integration tool is used to integrate the hardware components. The software infrastructure consists of the IEEE 1451.1 standard and the NCAP-specific application, which simulates the monitoring of pressure and temperature in oil wells in order to detect leaks. The proposed module is embedded in an FPGA; for prototyping, the Altera DE2 board, which contains the Cyclone II EP2C35F672C6 FPGA, is used. The NIOS II embedded processor supports the NCAP software infrastructure, which is developed in the C language and based on the IEEE 1451.1 standard. The behaviour of the hardware infrastructure is described in VHDL.
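The Python sketch below mimics the NCAP application logic described above (the actual implementation is written in C and runs on the NIOS II soft processor); the TIM readings and alarm thresholds are simulated, illustrative values.

```python
# Simulated NCAP application: read pressure/temperature from a stand-in TIM
# and raise an alarm on a sudden pressure drop or excessive temperature.
import random

PRESSURE_DROP_LIMIT = 5.0      # hypothetical pressure drop (bar) signalling a leak
TEMPERATURE_LIMIT   = 90.0     # hypothetical temperature limit (°C)

def read_tim():
    """Stand-in for the TIM module: returns (pressure, temperature) samples."""
    return random.uniform(180.0, 200.0), random.uniform(60.0, 95.0)

def detect_leak(samples: int = 10) -> bool:
    previous_pressure = None
    for _ in range(samples):
        pressure, temperature = read_tim()
        drop = (previous_pressure - pressure) if previous_pressure is not None else 0.0
        if drop > PRESSURE_DROP_LIMIT or temperature > TEMPERATURE_LIMIT:
            print(f"alarm: pressure drop {drop:.1f} bar, temperature {temperature:.1f} °C")
            return True
        previous_pressure = pressure
    return False

print("leak detected" if detect_leak() else "no leak detected")
```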
Abstract:
Introduction: The aim of this study was to evaluate the ability of Resilon (Resilon Research, LLC, North Branford, CT) and 2 types of gutta-percha to fill simulated lateral canals when using the Obtura II system (Model 823-700; Obtura Spartan, Fenton, MO). Methods: Forty-five human single-rooted teeth were selected and subjected to root canal preparation. Simulated lateral canals were then made at 2, 5, and 8 mm from the working length (WL). The specimens were divided into 3 groups (n = 15) according to the filling material used: Obtura Flow 150 gutta-percha (Obtura Flow), Odous Endo Flow gutta-percha (Odous; Odous de Deus Ind. e Com. Ltda, Belo Horizonte, MG, Brazil), and Resilon pellets (Resilon). Root canals were filled using the Obtura II system with the tip inserted to 3 mm from the WL. No sealer was used for root canal obturation. Specimens were subjected to a tooth decalcification and clearing method, and the filling of the lateral canals was analyzed by digital radiography and photographs. The measurement of lateral canal filling was done using Image Tool software (UTHSCSA Image Tool for Windows version 3.0, San Antonio, TX). Data were statistically analyzed with the Kruskal-Wallis test at 5% significance. Results: All materials showed an ability to penetrate into the simulated lateral canals, with a minimum percentage of 73% in all thirds of the root canal. Conclusions: Gutta-percha and Resilon are solid core materials with a lateral canal filling ability when used with the Obtura II system. (J Endod 2012;38:676-679)
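The statistical comparison reported above can be reproduced in outline with the sketch below, which runs a Kruskal-Wallis test at the 5% level on invented placeholder percentages rather than the study's data.

```python
# Kruskal-Wallis comparison of lateral canal filling across three materials.
# The percentage values are invented placeholders, not the study's data.
from scipy.stats import kruskal

obtura_flow = [85, 90, 78, 95, 88]
odous       = [80, 92, 75, 85, 83]
resilon     = [73, 88, 91, 79, 84]

statistic, p_value = kruskal(obtura_flow, odous, resilon)
print(f"H = {statistic:.3f}, p = {p_value:.3f}")
print("significant at 5%" if p_value < 0.05 else "not significant at 5%")
```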