831 results for Formal Methods. Component-Based Development. Competition. Model Checking


Relevance:

100.00%

Publisher:

Abstract:

Over the last 50 years a new research area, science education research, has arisen and undergone singular development worldwide. In the specific case of Brazil, research in science education first appeared systematically 40 years ago, as a consequence of an overall renovation in the field of science education. This evolution was also related to the political events taking place in the country. We will use the theoretical work of René Kaës on the development of groups and institutions as a basis for our discussion of the most important aspects that have helped the area of science education research develop into an institution and kept it operating as such. The growth of this research area can be divided into three phases: the first was related to its beginning and early configurations; the second consisted of a process of consolidation of this institution; and the third consists of more recent developments, characterised by a multiplicity of research lines and the corresponding challenges to be faced. In particular, we will analyse the special contributions to this study gleaned from the field known as the history and philosophy of science.

Relevance:

100.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

100.00%

Publisher:

Abstract:

The search for better performance in structural systems has led to more refined models, involving the analysis of a growing number of details, which must be correctly formulated in order to define a representative model of the real system. Representative models demand great detail in the project and call for new evaluation and analysis techniques. Model updating is one of these techniques; it can be used to improve the predictive capabilities of computer-based models. This paper presents an FRF-based finite element model updating procedure whose updating variables are physical parameters of the model. It includes damping effects in the updating procedure, assuming both proportional and non-proportional damping mechanisms. The updating parameters are defined at the element level or over macro regions of the model, so the parameters are adjusted locally, facilitating the physical interpretation of the model adjustment. Different tests on simulated and experimental data are discussed with the aim of defining the characteristics and potential of the methodology.
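As a toy illustration of the idea (not the paper's procedure), the sketch below updates a single stiffness parameter of a 2-DOF spring-mass chain by a damped Gauss-Newton fit of its receptance FRF to "measured" data. All numerical values, and the proportional-damping form C = cK, are assumptions made for the example.

```python
import numpy as np

OMEGAS = np.linspace(0.1, 3.0, 60)            # frequency grid (illustrative)

def frf(k1, k2=1.0, m=1.0, c=0.15):
    """Receptance FRF H(w)[0, 0] of a 2-DOF chain; proportional damping C = c*K."""
    K = np.array([[k1 + k2, -k2], [-k2, k2]])
    M = np.eye(2) * m
    return np.array([np.linalg.inv(K - w**2 * M + 1j * w * c * K)[0, 0]
                     for w in OMEGAS])

H_meas = frf(k1=2.0)              # "measured" FRF, from the true stiffness
k1 = 1.4                          # initial (wrong) estimate of the physical parameter
for _ in range(60):
    H = frf(k1)
    dH = (frf(k1 + 1e-6) - H) / 1e-6          # FRF sensitivity w.r.t. k1
    r = H_meas - H                            # complex FRF residual
    J = np.concatenate([dH.real, dH.imag])    # stack real/imag parts
    e = np.concatenate([r.real, r.imag])
    k1 += 0.5 * float(J @ e) / float(J @ J)   # damped Gauss-Newton step

print(round(k1, 3))
```

Because the updating variable is the element stiffness itself, the adjustment keeps a direct physical interpretation, which is the point the abstract makes about element-level parameters.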

Relevance:

100.00%

Publisher:

Abstract:

The development of interactive systems involves several professionals, and their integration normally relies on common artifacts, such as models, that drive the development process. In the model-driven development approach, the interaction model is an artifact that captures most aspects of what the user can do, and how, while interacting with the system. Furthermore, the interaction model may be used to identify usability problems at design time. The central problem addressed by this thesis is therefore twofold. First, interaction modeling, from a perspective that helps the designer make explicit to the developer, who will implement the interface, the aspects related to the interaction process. Second, the early identification of usability problems, which aims to reduce the final cost of the application. To achieve these goals, this work presents (i) the ALaDIM language, which helps the designer conceive, represent and validate interactive message models; (ii) the ALaDIM editor, built using the EMF (Eclipse Modeling Framework) and its technologies standardized by the OMG (Object Management Group); and (iii) the ALaDIM inspection method, which allows the early identification of usability problems using ALaDIM models. The ALaDIM language and editor were respectively specified and implemented using OMG standards and can be used in MDA (Model Driven Architecture) activities. Beyond that, we evaluated both the ALaDIM language and editor using a CDN (Cognitive Dimensions of Notations) analysis. Finally, this work reports an experiment that validated the ALaDIM inspection method.

Relevance:

100.00%

Publisher:

Abstract:

Formal methods should be used to specify and verify on-card software in Java Card applications. Furthermore, the Java Card programming style requires runtime verification of all input conditions for all on-card methods, where the main goal is to preserve the data on the card. Design by Contract, and in particular the JML language, is an option for this kind of development and verification, as runtime verification is part of the Design by Contract method implemented by JML. However, JML and its currently available tools for runtime verification were not designed with Java Card limitations in mind and are not Java Card compliant. In this thesis, we analyze how much of this situation is really intrinsic to Java Card limitations and how much is simply a matter of completely redesigning JML and its tools. We propose the requirements for a new language that is Java Card compliant and indicate the lines along which a compiler for this language should be built. JCML strips from JML non-Java Card aspects such as concurrency and unsupported types. This would not be enough, however, without a great effort in optimizing the verification code generated by its compiler, as this verification code must run on the card. The JCML compiler, although much more restricted than the one for JML, is able to generate Java Card compliant verification code for some lightweight specifications. In conclusion, we present a Java Card compliant variant of JML, JCML (Java Card Modeling Language), together with a preliminary version of its compiler.
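JML itself annotates Java source, so the following is only a language-neutral sketch of the runtime-verification side of Design by Contract: a precondition checked before every call, rejecting bad inputs before they can corrupt state. The `debit` purse operation and its bounds are hypothetical, chosen to echo the on-card goal of preserving the card's data.

```python
def requires(pred, msg="precondition violated"):
    """Attach a JML-style precondition that is checked at runtime, before the call."""
    def deco(fn):
        def wrapper(*args, **kwargs):
            if not pred(*args, **kwargs):
                raise AssertionError(msg)          # reject the call, state untouched
            return fn(*args, **kwargs)
        return wrapper
    return deco

# Hypothetical on-card operation: debit a purse, never below zero.
@requires(lambda balance, amount: 0 <= amount <= balance, "amount out of range")
def debit(balance, amount):
    return balance - amount

print(debit(100, 30))          # → 70 (precondition holds)
try:
    debit(100, 200)            # violates 'amount <= balance'
except AssertionError as e:
    print("rejected:", e)
```

The checking code generated by a JCML-like compiler plays the role of `wrapper` here; the thesis's point is that on a card this generated code must itself be small and Java Card compliant.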

Relevance:

100.00%

Publisher:

Abstract:

The software development processes proposed by the most recent approaches in Software Engineering make use of models, and UML was proposed as the standard modeling language. The user interface is an important part of the software and is fundamental to improving its usability. Unfortunately, standard UML does not offer appropriate resources to model user interfaces. Some proposals have been put forward to solve this problem: some authors use models in the development of interfaces (Model-Based Development), and some extensions of UML have been elaborated. But none of them considers the theoretical perspective of semiotic engineering, which holds that, through the system, the designer should be able to communicate to users what they can do and how to use the system itself. This work presents Visual IMML, a UML Profile that emphasizes the aspects of semiotic engineering. This Profile is based on IMML, a declarative textual language. Visual IMML is a proposal that aims to improve the specification process by using a visual (diagram-based) modeling language. It proposes a new set of modeling elements (stereotypes) specifically designed for the specification and documentation of user interfaces, considering the aspects of communication, interaction and functionality in an integrated manner.

Relevance:

100.00%

Publisher:

Abstract:

In recent years, several middleware platforms for Wireless Sensor Networks (WSNs) have been proposed. Most of these platforms do not consider how to integrate components from generic middleware architectures. Many requirements need to be considered in a middleware design for WSNs, and in this case the design should make it possible to modify the source code of the middleware without changing its external behavior. Thus, a generic middleware architecture is desired that can offer an optimal configuration according to the requirements of the application. Adopting middleware based on a component model is a promising approach because it allows better abstraction, low coupling, modularization and built-in middleware management features. Another problem in current middleware is the treatment of interoperability between sensor networks and external networks, such as the Web. Most current middleware lacks the functionality to access the data provided by the WSN via the World Wide Web, treating these data as Web resources that can be accessed through protocols already adopted on the Web. Thus, this work presents Midgard, a component-based middleware specifically designed for WSNs, which adopts the microkernel and REST architectural patterns. The microkernel pattern complements the component model, since a microkernel can be understood as a component that encapsulates the core of the system and is responsible for initializing the core services only when needed, as well as removing them when they are no longer needed. REST, in turn, defines a standardized way of communication between different applications based on standards adopted by the Web, and makes it possible to treat WSN data as Web resources, allowing them to be accessed through protocols already adopted on the World Wide Web.
The main goals of Midgard are: (i) to provide easy Web access to data generated by the WSN, exposing such data as Web resources, following the principles of the Web of Things paradigm, and (ii) to provide WSN application developers with the capability to instantiate only the specific services required by the application, thus generating a customized middleware and saving node resources. Midgard allows using the WSN as Web resources and still provides a cohesive and loosely coupled software architecture, addressing interoperability and customization. In addition, Midgard provides two services needed by most WSN applications: (i) configuration and (ii) inspection and adaptation services. New services can be implemented by others and easily incorporated into the middleware thanks to its flexible and extensible architecture. According to our assessment, Midgard provides interoperability between the WSN and external networks, such as the Web, as well as between different applications within a single WSN. In addition, we assessed memory consumption, application image size, the size of messages exchanged in the network, and Midgard's response time, overhead and scalability. During the evaluation, Midgard proved to satisfy its goals and was shown to be scalable without consuming resources prohibitively.
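As a minimal sketch of the REST idea, sensor readings can be treated as Web resources addressed by URI paths. The node ids and the `/sensors/...` layout below are invented for illustration and are not Midgard's actual API; the dispatch function stands in for the HTTP layer.

```python
import json

# In-memory readings, standing in for data collected from sensor nodes
# (hypothetical node ids and resource layout, not Midgard's actual API).
readings = {"node1": {"temperature": 23.5}, "node2": {"temperature": 21.0}}

def handle(method, path):
    """Dispatch a REST request over WSN data exposed as Web resources."""
    parts = [p for p in path.split("/") if p]
    if method == "GET" and parts == ["sensors"]:
        return 200, json.dumps(sorted(readings))        # collection resource
    if method == "GET" and len(parts) == 3 and parts[0] == "sensors":
        node, metric = parts[1], parts[2]
        if node in readings and metric in readings[node]:
            return 200, json.dumps({metric: readings[node][metric]})
    return 404, json.dumps({"error": "no such resource"})

print(handle("GET", "/sensors"))                    # → (200, '["node1", "node2"]')
print(handle("GET", "/sensors/node1/temperature"))  # → (200, '{"temperature": 23.5}')
```

Because clients only see uniform GETs on URIs and JSON bodies, any Web application can consume the data without knowing anything about the sensor network underneath, which is the interoperability point the abstract makes.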

Relevance:

100.00%

Publisher:

Abstract:

The development of smart card applications requires a high level of reliability, and formal methods provide the means for this reliability to be achieved. The BSmart method and tool support the development of smart card applications with the B method, generating Java Card code from B specifications. For development with BSmart to be effectively rigorous without overloading the user, it is important to have a library of reusable components built in B; the goal of KitSmart is to provide this support. The first research on the composition of this library was a graduation project at the Universidade Federal do Rio Grande do Norte, carried out by Thiago Dutra in 2006. That first version of the kit resulted in a B specification of the Java Card primitive types byte, short and boolean, and in the creation of reusable components for application development. This work improves KitSmart by adding a specification of the Java Card API in B and a guide for the creation of new components. The Java Card API in B, besides being available for the development of applications, is also useful as documentation of each API class. The reusable components correspond to modules that manipulate specific structures, such as date and time, which are available in neither B nor Java Card. These components for Java Card are generated from formally verified B specifications. The guide contains a quick reference on how to specify some structures and on how some situations were adapted from object orientation to the B method. This work was evaluated through a case study carried out with the BSmart tool, which makes use of the KitSmart library. In this case study, it is possible to see the contribution of the components in a B specification. The kit should be useful for B method users and Java Card application developers.
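To give a flavor of what a B specification of the Java Card primitive types must capture, the sketch below models the two's-complement wraparound of `byte` and `short`. This is standard Java arithmetic semantics, expressed here in Python for illustration; it is not the KitSmart machines themselves.

```python
def jc_byte(n):
    """Two's-complement wraparound of Java Card's 8-bit signed 'byte' type."""
    return ((n + 128) % 256) - 128

def jc_short(n):
    """Two's-complement wraparound of Java Card's 16-bit signed 'short' type."""
    return ((n + 32768) % 65536) - 32768

print(jc_byte(127 + 1))      # overflow wraps: → -128
print(jc_short(-32768 - 1))  # underflow wraps: → 32767
```

A B machine for these types would state the same ranges and wraparound as invariants and operation postconditions, so that code generated from it cannot silently exceed the card's type bounds.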

Relevance:

100.00%

Publisher:

Abstract:

PLCs (Programmable Logic Controllers) perform control operations, receiving information from the environment, processing it and modifying that same environment according to the results produced. They are commonly used in industry in several applications, from mass transport to the petroleum industry. As the complexity of these applications increases, and as many of them are safety-critical, the need to ensure that they are reliable arises. Testing and simulation are the de facto methods used in industry to do so, but they can leave flaws undiscovered. Formal methods can provide more confidence in an application's safety, since they permit its mathematical verification. We make use of the B method, which has been successfully applied to the formal verification of industrial systems, is supported by several tools, and can handle decomposition, refinement, and verification of correctness with respect to the specification. The method we developed and present in this work automatically generates B models from PLC programs and verifies them against safety constraints manually derived from the system requirements. The scope of our method is the set of PLC programming languages presented in the IEC 61131-3 standard, although we are also able to verify programs not fully compliant with the standard. Our approach aims to ease the integration of formal methods into industry by reducing the effort of performing formal verification on PLCs.
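As a toy version of the underlying idea, a safety constraint can be checked against every reachable state of a scan-cycle program; the sketch enumerates states explicitly rather than discharging B proof obligations, and the interlock program itself is hypothetical, written in the spirit of IEC 61131-3 logic.

```python
from itertools import product

def scan(start, stop, guard_closed, latched):
    """One scan cycle of a toy start/stop interlock with a safety guard."""
    latched = (start or latched) and not stop       # start/stop latch
    motor = latched and guard_closed                # motor needs the guard closed
    return latched, motor

# Safety constraint derived from the requirements: the motor never runs
# while the guard is open. Check it over every input combination and
# every reachable value of the internal latch state.
violations = []
reachable = {False}
for _ in range(4):                                  # fixpoint over latch states
    for latched in list(reachable):
        for start, stop, guard in product([False, True], repeat=3):
            latched2, motor = scan(start, stop, guard, latched)
            reachable.add(latched2)
            if motor and not guard:
                violations.append((start, stop, guard, latched))

print(len(violations))                              # → 0: the constraint holds
```

A B model of the same program would express "motor ⇒ guard_closed" as an invariant and let a prover, rather than enumeration, establish that every scan cycle preserves it; enumeration only scales to tiny state spaces like this one.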

Relevance:

100.00%

Publisher:

Abstract:

The objective of this study is to analyze the results of psychopedagogical interventions on the intellectual performance, and on some specific cognitive functions, of children from low-income families exposed to adverse personal and social factors such as malnutrition, family stress, and impoverished home and stimulation environments. Sixty-three children were examined, all students at a free, semi-boarding school that receives children considered at personal and social risk. Forty-three children received activities aimed at cognitive activation for a minimum period of 1 year; twenty children were newly admitted. The activation techniques chosen were: an active learning method based on Piaget, and a cognitive activation method that uses psychomotor exercises to develop the prerequisites for learning and to prevent school difficulties, following Lambert. The evaluation of cognitive functions showed an unsatisfactory intellectual level in 30% and an average or superior level in 70%, and specific cognitive deficiencies (body schema, visual-motor perception, form perception and perseveration) in 74%. A higher prevalence of children with superior intelligence (p < 0.05) was associated with two factors: first, longer school attendance (from 1 to 3 years), and second, cognitive activation programs. No differences were observed between the two groups regarding the prevalence of alterations in the specific cognitive functions examined. The results demonstrate that the recovery of children with the difficulties described is hard to achieve. It requires systematic investigation of the selected psychopedagogical methods and possibly a long stay of the child at the school, as well as earlier admission.

Relevance:

100.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

100.00%

Publisher:

Abstract:

Ties among event times are often recorded in survival studies. For example, in a two-week laboratory study where event times are measured in days, ties are very likely to occur. The proportional hazards model might be used in this setting with an approximated partial likelihood function. This approximation works well when the number of ties is small. On the other hand, discrete regression models are suggested when the data are heavily tied. However, in many situations it is not clear which approach should be used in practice. In this work, empirical guidelines based on Monte Carlo simulations are provided. These recommendations are based on a measure of the amount of tied data present and on the mean square error. An example illustrates the proposed criterion.
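A minimal sketch of the kind of tie measure involved: the fraction of observations that share an event time once continuous times are recorded in whole days. The exponential distribution, its rate, and the two-week window below are invented for illustration, not the paper's simulation design.

```python
import random
from collections import Counter

def tied_fraction(n=200, rate=0.5, max_day=14, seed=1):
    """Monte Carlo estimate of the fraction of observations whose recorded
    event time (in whole days, capped at max_day) is shared with another."""
    rng = random.Random(seed)
    days = [min(int(rng.expovariate(rate)) + 1, max_day) for _ in range(n)]
    counts = Counter(days)
    tied = sum(c for c in counts.values() if c > 1)   # all members of tied groups
    return tied / n

f = tied_fraction()
print(round(f, 2))
```

With 200 observations squeezed into at most 14 distinct days, nearly every observation is tied, which is exactly the heavily-tied regime where the abstract says discrete regression models are preferred over the approximated partial likelihood.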

Relevance:

100.00%

Publisher:

Abstract:

We employ finite element methods for the approximation of solutions of the Ginzburg-Landau equations describing the deconfinement transition in quantum chromodynamics. These methods seem appropriate for situations where the deconfining transition occurs over a finite volume, as in relativistic heavy ion collisions, where, in addition, the expansion of the system and the flow of matter are important. Simulation results employing finite elements are presented for a Ginzburg-Landau equation based on a model free energy describing the deconfining transition in pure gauge SU(2) theory. Results for finite and infinite systems are compared. (C) 2009 Elsevier B.V. All rights reserved.
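As a minimal numerical illustration, here is gradient-flow relaxation of a 1-D Ginzburg-Landau field, using finite differences rather than the paper's finite elements and a generic double-well free energy instead of the SU(2) model free energy; all coefficients are illustrative.

```python
# Gradient-flow relaxation d(phi)/dt = kappa * phi'' - df/dphi for the
# double-well free energy f(phi) = -a*phi**2/2 + b*phi**4/4 on a periodic
# 1-D grid (coefficients illustrative, not the paper's SU(2) free energy).
a, b, kappa = 1.0, 1.0, 0.5
dx, dt, nx, steps = 0.5, 0.01, 50, 5000

phi = [0.1] * nx                      # small uniform fluctuation above phi = 0
for _ in range(steps):
    new = phi[:]
    for i in range(nx):
        lap = (phi[(i - 1) % nx] - 2 * phi[i] + phi[(i + 1) % nx]) / dx**2
        dfdphi = -a * phi[i] + b * phi[i] ** 3
        new[i] = phi[i] + dt * (kappa * lap - dfdphi)
    phi = new

print(round(phi[0], 3))               # relaxes to the broken-symmetry value sqrt(a/b)
```

The finite-element formulation in the paper serves the same relaxation dynamics but handles finite, irregular volumes naturally, which is why it suits the heavy-ion setting better than a uniform grid.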

Relevance:

100.00%

Publisher:

Abstract:

This paper reviews the latest advances and applications of luminescence spectroscopy in the development of pharmaceutical analysis methods, based mainly on photo- and chemiluminescence. The different forms of drug determination in pharmaceuticals through fluorescence and chemiluminescence are discussed. The analyses include the native fluorescence of the drugs (in liquid and solid phases); fluorescence from oxidized or reduced forms of the drug; and fluorescence from chemical derivatization and its photochemical and hydrolysis reactions. The quenching of luminescence and the generation of chemiluminescence for pharmaceutical quantification are also covered. Finally, trends and future perspectives of luminescence spectroscopy in the field of pharmaceutical research are discussed.

Relevance:

100.00%

Publisher:

Abstract:

OBJECTIVE: To compare the report generation and transcription times of an electronic system based on voice over internet protocol (VoIP) technology with those of the traditional system, in which the radiologist writes the report by hand. MATERIALS AND METHODS: It was necessary to model, build and deploy the proposed electronic system, capable of recording the report in digital audio format, and to compare it with the existing traditional one. Using forms, radiologists and typists recorded the generation and transcription times of the reports in both systems. RESULTS: Comparing the mean times between the systems, the electronic system showed a 20% reduction (p = 0.0410) in mean report generation time relative to the traditional system. The traditional system was more efficient with respect to transcription time, since the mean time of the electronic system was three times longer (p < 0.0001). CONCLUSION: The results showed a statistically significant difference between the systems: the electronic system was more efficient than the traditional one with respect to report generation time, while the traditional system showed better results with respect to transcription time.