897 results for Contracts of execution
Abstract:
Software engineering researchers are challenged to provide increasingly powerful levels of abstraction to address the rising complexity inherent in software solutions. One development paradigm that places models, as abstractions, at the forefront of the development process is Model-Driven Software Development (MDSD). MDSD treats models as first-class artifacts, extending engineers' capability to use concepts from the problem domain of discourse to specify apt solutions. A key component of MDSD is domain-specific modeling languages (DSMLs), languages with focused expressiveness targeting a specific taxonomy of problems. The de facto approach is to first transform DSML models into an intermediate artifact in a high-level language (HLL), e.g., Java or C++, and then execute the resulting code.

Our research group has developed a class of DSMLs, referred to as interpreted DSMLs (i-DSMLs), where models are directly interpreted by a specialized execution engine whose semantics are based on model changes at runtime. This execution engine uses a layered architecture and is referred to as a domain-specific virtual machine (DSVM). As the domain-specific model being executed descends the layers of the DSVM, the semantic gap between the user-defined model and the services provided by the underlying infrastructure is closed. The focus of this research is the synthesis engine, the DSVM layer that transforms i-DSML models into executable scripts for the next lower layer to process.

The appeal of an i-DSML is constrained because its unique semantics are contained within the DSVM. Existing DSVMs for i-DSMLs exhibit tight coupling between the implicit model of execution and the semantics of the domain, making it difficult to develop DSVMs for new i-DSMLs without a significant investment in resources.

At the onset of this research, only one i-DSML had been created using this approach, for the user-centric communication domain: the Communication Modeling Language (CML), whose DSVM is the Communication Virtual Machine (CVM). A major problem with the CVM's synthesis engine is that the domain-specific knowledge (DSK) and the model of execution (MoE) are tightly interwoven; consequently, subsequent DSVMs would need to be developed from inception with no reuse of expertise.

This dissertation investigates how to decouple the DSK from the MoE and subsequently produce a generic model of execution (GMoE) from the remaining application logic. This GMoE can be reused to instantiate synthesis engines for DSVMs in other domains. The generalized approach to developing the model synthesis component of i-DSML interpreters uses a reusable framework loosely coupled to the DSK through swappable framework extensions.

The approach involves first creating an i-DSML and its DSVM for a second domain, demand-side smart-grid (microgrid) energy management, and designing the synthesis engine so that the DSK and MoE are easily decoupled. To validate the utility of the approach, the synthesis engines (SEs) are instantiated using the GMoE and the DSKs of the two aforementioned domains, and an empirical study is performed to support our claim of reduced development effort.
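To make the decoupling concrete, the following minimal Python sketch shows one way a generic synthesis engine (the GMoE) could delegate to swappable domain-specific knowledge. All class and method names here are hypothetical illustrations, not the CVM's actual API.

```python
from abc import ABC, abstractmethod

class DomainKnowledge(ABC):
    """Swappable DSK extension; the API shown here is hypothetical."""

    @abstractmethod
    def diff_semantics(self, old_model, new_model):
        """Interpret a model change as a list of domain-level operations."""

    @abstractmethod
    def to_script(self, operations):
        """Render domain operations as a script for the next DSVM layer."""

class GenericSynthesisEngine:
    """GMoE: a domain-neutral control loop driven by model changes."""

    def __init__(self, dsk: DomainKnowledge):
        self.dsk = dsk
        self.current_model = None

    def on_model_change(self, new_model):
        ops = self.dsk.diff_semantics(self.current_model, new_model)
        self.current_model = new_model
        return self.dsk.to_script(ops)  # handed to the lower DSVM layer

class CommunicationDSK(DomainKnowledge):
    """Toy CML-flavoured extension; the real CVM semantics are far richer."""

    def diff_semantics(self, old_model, new_model):
        added = sorted(new_model - (old_model or set()))
        return [("addParticipant", p) for p in added]

    def to_script(self, operations):
        return [f"{op}({arg})" for op, arg in operations]

engine = GenericSynthesisEngine(CommunicationDSK())
print(engine.on_model_change({"alice"}))         # ['addParticipant(alice)']
print(engine.on_model_change({"alice", "bob"}))  # ['addParticipant(bob)']
```

Instantiating a synthesis engine for a second domain would then amount to supplying a different `DomainKnowledge` subclass, leaving the control loop untouched.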
Abstract:
This study sought to describe exhaustively the process of planning, negotiating, implementing, and evaluating the Execution Contract (Contrato de Execução) signed between the Câmara Municipal de Sintra and the Ministry of Education in 2009. This contract is an instrument provided for in the regulation of the framework for transferring education competences to municipalities, under the regime established by Decree-Law no. 144/2008 of 28 July. Once the research problem and objectives were defined, the investigation centred on a case study describing and interpreting the process and the actions taken by the stakeholders between 2008 and 2011. Data obtained from documentary sources were confronted with interviews conducted with the heads of the municipal education department (Pelouro da Educação) and the directors of the school clusters (Agrupamentos de Escolas), in the light of the literature review and of the contributions of other researchers in this field. The investigation concluded that the contractualisation process was rather complex given the reality of this municipality, and that the legislation has several gaps regarding the contractualisation of this transfer of competences, notably because it attempts to generalise something that is not at all generalisable: the field of education, given the complexity of the educational territories concerned and of the stakeholders involved.
Abstract:
Concurrent programming is a difficult and error-prone task because the programmer must reason about multiple threads of execution and their possible interleavings. A concurrent program must synchronize concurrent accesses to shared memory regions, but this alone does not prevent all anomalies that can arise in a concurrent setting. The programmer can misidentify the scope of the regions of code that need to be atomic, resulting in atomicity violations that break the correct behavior of the program. Executing a sequence of atomic operations may lead to incorrect results when these operations are co-related; in this case, the programmer may need to enforce the execution of those operations as a single whole to avoid atomicity violations. This situation is especially common when the developer uses services from third-party packages or modules. This thesis proposes a methodology, based on design by contract, to specify which sequences of operations must be executed atomically. We developed an analysis that statically verifies that a client of a module respects its contract, allowing the programmer to identify the sources of possible atomicity violations.
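The thesis verifies such contracts statically; as a rough illustration of the contract idea itself, the Python sketch below instead enforces a declared atomic sequence with a runtime check. All names and the checking mechanism are hypothetical, not the thesis's actual notation.

```python
import threading

class AtomicSequenceContract:
    """Declares that a sequence of module operations must execute as a
    single atomic unit, and checks clients at runtime (illustrative)."""

    def __init__(self, *sequence):
        self.sequence = sequence           # e.g. ("contains", "put")
        self._lock = threading.RLock()
        self._local = threading.local()

    def __enter__(self):
        self._lock.acquire()
        self._local.in_atomic = True
        return self

    def __exit__(self, *exc):
        self._local.in_atomic = False
        self._lock.release()

    def check(self, op):
        if op in self.sequence and not getattr(self._local, "in_atomic", False):
            raise RuntimeError(
                f"contract violation: '{op}' must run inside the atomic "
                f"sequence {self.sequence}")

# A module whose contract says "contains; put" must be atomic as a whole:
contract = AtomicSequenceContract("contains", "put")
table = {}

def put_if_absent(key, value):
    with contract:                 # client respects the contract
        contract.check("contains")
        if key not in table:
            contract.check("put")
            table[key] = value

put_if_absent("x", 1)              # ok; calling check() outside would raise
```

Checking each individual operation while missing the enclosing atomic block is exactly the anomaly the contract exposes: the calls would be individually synchronized yet the sequence as a whole would not be atomic.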
Abstract:
Finnish design and consulting companies deliver robust and cost-efficient steel structure solutions to a large number of manufacturing companies worldwide. The recently introduced EN 1090-2 standard obliges these companies to specify the execution class of steel structures for their customers. This, however, requires clarifying, understanding, and interpreting the sophisticated procedure of execution class assignment. The objective of this research is to provide a clear explanation of, and guidance through, the process of assigning an execution class to a given steel structure, and to support the implementation of EN 1090-2 in Rejlers Oy, a Finnish design and consulting company. This objective is accomplished by creating a guideline for designers that elaborates the four-step process of execution class assignment for a steel structure or its part. Steps one to three determine the consequence class (the projected consequences of structural failure), the service category (the hazards associated with the structure's in-service use), and the production category (the peculiarities of the manufacturing process), drawing on the ductility class (the structure's capacity to withstand deformations) and the behaviour factor (corresponding to the structure's seismic behaviour). The final step assigns the execution class, taking the results of the previous steps into account. The main research method is an in-depth literature review of the European family of standards for steel structures. A complementary approach is a series of interviews with representatives of Rejlers Oy and its clients, whose results were used to evaluate the level of EN 1090-2 awareness. Rejlers Oy will use the resulting implementation guideline to improve its services and obtain greater customer satisfaction.
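A minimal sketch of the final step as a table lookup follows. The sample entries are illustrative placeholders only; every value must be verified against the determination table in EN 1090-2 Annex B (Table B.3 in the 2008 edition) before any real use.

```python
def assign_execution_class(cc, sc, pc, table):
    """Step 4 of the process described above: look up the execution class
    from the consequence class (cc), service category (sc) and production
    category (pc) determined in steps 1-3. `table` must hold the normative
    mapping from EN 1090-2 Annex B; it is passed in rather than hard-coded
    because the sample below is a placeholder, not the standard's table."""
    try:
        return table[(cc, sc, pc)]
    except KeyError:
        raise ValueError(f"no execution class defined for {(cc, sc, pc)!r}")

# Illustrative placeholder entries only -- verify each value against the
# standard before use.
SAMPLE_TABLE = {
    ("CC1", "SC1", "PC1"): "EXC1",   # lowest-risk combination
    ("CC3", "SC2", "PC2"): "EXC4",   # highest-risk combination (assumed)
}

print(assign_execution_class("CC1", "SC1", "PC1", SAMPLE_TABLE))  # EXC1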
Abstract:
We consider exchange economies with a continuum of agents and differential information about finitely many states of nature. Einy, Moreno and Shitovitz (2001) proved that if free disposal is allowed in the market-clearing (feasibility) constraints, then an irreducible economy has a competitive (or Walrasian expectations) equilibrium, and moreover the set of competitive equilibrium allocations coincides with the private core. However, when feasibility is defined with free disposal, competitive equilibrium allocations may not be incentive compatible and contracts may not be enforceable (see, e.g., Glycopantis, Muir and Yannelis (2002)). This is the main motivation for considering equilibrium solutions with exact feasibility. We first prove that the results in Einy et al. (2001) remain valid without free disposal. Then we define an incentive compatibility property motivated by the issue of contract execution, and we prove that every Pareto optimal, exactly feasible allocation is incentive compatible, implying that the contracts of competitive or core allocations are enforceable.
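In the standard notation for such economies (agent space $(T,\mu)$, endowments $e$, allocations $x$, finite state space $\Omega$), the two feasibility notions contrasted above can be written as follows; this is a minimal formalization assuming the usual continuum-economy setup, not a reproduction of the paper's definitions.

```latex
\begin{align*}
\text{free disposal:}
  &\quad \int_T x(t,\omega)\,d\mu(t) \;\le\; \int_T e(t,\omega)\,d\mu(t)
  &&\text{for all } \omega \in \Omega,\\
\text{exact feasibility:}
  &\quad \int_T x(t,\omega)\,d\mu(t) \;=\; \int_T e(t,\omega)\,d\mu(t)
  &&\text{for all } \omega \in \Omega.
\end{align*}
```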
Abstract:
In 1999, Spain approved its building act, the Ley de Ordenación de la Edificación (LOE), to regulate a sector, construction, that until then suffered from legal shortcomings. The LOE has now been in force for 12 years, changing the Spanish construction world under the influence of internationalization. The LOE regulates the different agents involved in building construction: the project designer, the director of construction, the developer (promoter), the builder, the director of execution of the construction (a figure that exists only in Spain, roughly similar to a construction engineer abroad), the control entities, and the users. It lacks, however, the figure of the Project Manager, who acts as the developer's delegate, helping to organize, direct, and manage the process. This figure is assumed by the market and by contracts but is not legally regulated in Spain; its regulation should therefore be defined and established in the LOE.

The Spanish translation of "Project Manager" is owed to Professor Rafael de Heredia in his book on Integrated Project Management, where it denotes an agent acting on behalf of the developer's organization and assuming control of the project, i.e., Integrated Project Management. Spain already has AEDIP (the Spanish association for integrated construction project management), which comprises the major Project Management firms in Spain, and MeDIP (a master's programme in integrated construction project management) at the Universidad Politécnica de Madrid, the largest and most advanced programme of study in construction project management, which is also taught in Argentina.

Integrated Project Management applied to the construction process is a methodological technique that helps organize, control, and manage the developer's resources in the building process. When resources are limited, which is the usual situation, managing them efficiently becomes very important; here they are not merely limited but scarce, so comprehensive control and monitoring become not just important but crucial. Starting from scratch with a specialized team that intervenes directly to ensure scarce resources are used in the best possible way requires a specific methodology (the DIP manual, breakdown structures such as the EDR and the EDP, risk management and control, design management, etc.). This is the methodology used by project managers to ensure that the initial objectives of the developers or investors are met and that all actors in the process, from design to the construction company, keep the project's aim in mind, so that their individual interests do not prevail over the interests of the project.

Among the agents listed in the building process, the Project Manager, or DIPE (a proposed name, roughly "comprehensive director of the building process", for possible incorporation into the LOE), does not currently appear in the LOE; it is the one agent in the building process that is not regulated from the legal point of view and carries no statutory obligations, unlike the project, the builder, the construction management, and so on, which the law requires.

The DIPE is hired only by clients who already know the value of these services from having hired such agents before; since there is no legal obligation, the market is effectively ruling on this new figure: if it were not seen as necessary, it would not be hired and would eventually disappear from the building process. The aim of this article is to regulate the process and establish the figure of the DIPE in the Spanish building act (LOE).
Abstract:
Reproducibility of scientific studies and results is a goal that every scientist must pursue when announcing research outcomes. The rise of computational science, as a way of conducting empirical studies using mathematical models and simulations, has opened a new range of challenges in this context. The adoption of workflows as a way of detailing the scientific procedure of these experiments, along with the experimental-data conservation initiatives undertaken in recent decades, has partially eased this problem. However, in order to fully address it, the conservation and reproducibility of the computational equipment related to these experiments must also be considered. The wide range of software and hardware resources required to execute a scientific workflow implies that a comprehensive description, detailing what those resources are and how they are arranged, is necessary. In this thesis we address the reproducibility of execution environments for scientific workflows by documenting them in a formalized way, which can later be used to obtain an equivalent environment. To that end, we propose a set of semantic models for representing and relating the relevant information about those environments, as well as a set of tools that uses these models to generate a description of the infrastructure, and an algorithmic process that consumes these descriptions to derive a new execution environment specification, which can be enacted into an equivalent environment using virtualization solutions. We apply these three contributions to a set of representative scientific experiments belonging to different scientific domains and exposing different software and hardware requirements. The results obtained show the feasibility of the proposed approach, successfully reproducing the target experiments under different virtualization environments.
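As a toy illustration of the idea (the thesis uses richer semantic models), the following Python sketch documents an environment with a small formal model and derives a container-style build specification from it. All type names and fields are hypothetical simplifications.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SoftwarePackage:
    name: str
    version: str

@dataclass
class ExecutionEnvironment:
    """A minimal formal description of a workflow's computational setup."""
    os_image: str
    cpu_arch: str
    min_memory_gb: int
    packages: List[SoftwarePackage] = field(default_factory=list)

def to_container_spec(env: ExecutionEnvironment) -> str:
    """Derive a container build specification from the documented model,
    so the environment can be recreated with virtualization tooling."""
    lines = [f"FROM {env.os_image}",
             f"# requires arch={env.cpu_arch}, memory>={env.min_memory_gb}GB"]
    lines += [f"RUN pip install {p.name}=={p.version}" for p in env.packages]
    return "\n".join(lines)

env = ExecutionEnvironment("ubuntu:14.04", "x86_64", 4,
                           [SoftwarePackage("numpy", "1.8.2")])
print(to_container_spec(env))
```

The point of the intermediate model is that the same description could equally be consumed by other back ends (a virtual machine image builder, a provisioning script), which is what makes the environment specification reusable.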
Abstract:
Objectives: To determine the differences between tympanic and extratympanic electrodes regarding recording technique, comfort and ease of execution of the exam, and quality of auditory potential tracings. Study Design: Prospective cross-sectional investigation. Methods: Determination of the summating potential/action potential (SP/AP) ratio by electrocochleography (EchoG) using tympanic and extratympanic electrodes, with separate analysis of the SP and AP amplitudes recorded. Results: Twenty-three subjects (15 men and 8 women; mean age: 33.17 years) with normal tonal threshold audiometry were evaluated. EchoG analysis revealed no significant difference between the two electrodes. Eleven of the 23 subjects reported discomfort with the insertion of the tympanic electrode even with the use of topical xylocaine, whereas no discomfort was reported with the extratympanic electrode. Conclusions: Both electrodes were effective for EchoG evaluation; the extratympanic one was easier to insert and did not cause discomfort, but the tympanic electrode produced tracings of greater amplitude and better reproducibility. Laryngoscope, 119:563-566, 2009
Abstract:
This paper studies the application of commercial biocides to old maritime pine (Pinus pinaster Ait.) timber structures that had previously been impregnated with other products. A method was developed in the laboratory, for use in situ, to determine the impregnation depth achieved by a new-generation biocide applied to timber from an old building; this timber had once been treated with an unknown product that would be difficult to characterize without extensive analysis. The test was initially developed under laboratory conditions and later applied to elements of the roof structure of an 18th-century building. In both cases the results were promising and mutually consistent, with penetration depths for some treatments reaching 2.0 cm. The in situ application proved the test's viability and simplicity of execution, giving a clear indication of the feasibility of possible re-treatments.
Abstract:
Cluster analysis for categorical data has been an active area of research. A well-known problem in this area is the determination of the number of clusters, which is unknown and must be inferred from the data. In order to estimate the number of clusters, one often resorts to information criteria, such as BIC (Bayesian information criterion), MML (minimum message length, proposed by Wallace and Boulton, 1968), and ICL (integrated classification likelihood). In this work, we adopt the approach developed by Figueiredo and Jain (2002) for clustering continuous data. They use an MML criterion to select the number of clusters and a variant of the EM algorithm to estimate the model parameters. This EM variant seamlessly integrates model estimation and selection in a single algorithm. For clustering categorical data, we assume a finite mixture of multinomial distributions and implement a new EM algorithm, following a previous version (Silvestre et al., 2008). Results obtained with synthetic datasets are encouraging. The main advantage of the proposed approach, when compared to the above referred criteria, is the speed of execution, which is especially relevant when dealing with large data sets.
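For concreteness, here is a simplified batch sketch of the kind of procedure described above: EM for a finite mixture of multinomials whose MML-penalized weight update annihilates superfluous components, so the number of clusters is selected during estimation. This is an illustrative variant, not the authors' exact algorithm (Figueiredo and Jain's version updates components sequentially, and redundant components may survive in this simplification).

```python
import numpy as np

def em_mml_multinomial(X, max_components=10, n_iter=200, seed=0):
    """Simplified EM for a mixture of multinomials over categorical data
    X (an N x D integer array with K categories per feature). Components
    whose MML-penalized weight collapses to zero are annihilated."""
    rng = np.random.default_rng(seed)
    N, D = X.shape
    K = X.max() + 1
    onehot = np.eye(K)[X]                          # (N, D, K)
    penalty = D * (K - 1) / 2.0                    # half the free parameters
    theta = rng.dirichlet(np.ones(K), size=(max_components, D))
    w = np.full(max_components, 1.0 / max_components)
    for _ in range(n_iter):
        # E-step: responsibilities from per-component log-likelihoods.
        logp = np.einsum('ndk,mdk->nm', onehot, np.log(theta)) + np.log(w)
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)          # (N, M)
        # M-step with MML penalty: weak components are annihilated.
        counts = r.sum(axis=0)
        w = np.maximum(counts - penalty, 0.0)
        keep = w > 0
        w, r, theta = w[keep], r[:, keep], theta[keep]
        w /= w.sum()
        theta = np.einsum('nm,ndk->mdk', r, onehot) + 1e-9
        theta /= theta.sum(axis=2, keepdims=True)
    return w, theta

# Toy data: two obvious clusters over three binary features.
X = np.array([[0, 0, 0]] * 50 + [[1, 1, 1]] * 50)
w, theta = em_mml_multinomial(X, max_components=6)
print(len(w), "components survived")  # duplicates may remain in this variant
```

The annihilation step is what folds model selection into estimation: a component whose expected support falls below half its parameter count no longer pays for its own description length, so its weight is driven to zero and it is dropped.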
Abstract:
Consider the problem of scheduling a set of implicit-deadline sporadic tasks on a heterogeneous multiprocessor so as to meet all deadlines. Tasks cannot migrate, and the platform is restricted in that each processor is either of type-1 or type-2 (each task is characterized by a different speed of execution on each type of processor). We present an algorithm for this problem with a time complexity of O(n·m), where n is the number of tasks and m is the number of processors. It offers the guarantee that if a task set can be scheduled by any non-migrative algorithm to meet deadlines, then our algorithm also meets deadlines if given processors twice as fast. Although this result is proven only for a restricted heterogeneous multiprocessor, we consider it significant for being the first real-time scheduling algorithm to use a low-complexity bin-packing approach to schedule tasks on a heterogeneous multiprocessor with provably good performance.
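The paper's algorithm is not reproduced here; as a rough illustration of low-complexity bin-packing on a two-type platform, the Python sketch below assigns each task to a processor on which its utilization still fits, which suffices for per-processor EDF to meet implicit deadlines. The ordering heuristic and all names are assumptions of this sketch.

```python
def first_fit_two_type(tasks, procs):
    """Illustrative first-fit partitioning on a two-type platform.
    tasks: list of (u_type1, u_type2) utilizations (C/T on each type);
    procs: list of processor types (1 or 2). A task fits on a processor
    if that processor's total utilization stays <= 1, in which case
    per-processor EDF meets all implicit deadlines."""
    load = [0.0] * len(procs)
    assignment = {}
    # Heuristic: place the tasks with the strongest type preference first.
    order = sorted(range(len(tasks)),
                   key=lambda i: -abs(tasks[i][0] - tasks[i][1]))
    for i in order:
        u1, u2 = tasks[i]
        # Try processors in increasing order of this task's utilization.
        candidates = sorted(range(len(procs)),
                            key=lambda p: (u1, u2)[procs[p] - 1])
        for p in candidates:
            u = (u1, u2)[procs[p] - 1]
            if load[p] + u <= 1.0:
                load[p] += u
                assignment[i] = p
                break
        else:
            return None   # declare failure (the speedup bound covers this)
    return assignment

tasks = [(0.6, 0.3), (0.4, 0.8), (0.5, 0.5)]
procs = [1, 2]            # one processor of each type
print(first_fit_two_type(tasks, procs))
```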
Abstract:
Recent integrated circuit technologies have opened the possibility of designing parallel architectures with hundreds of cores on a single chip. The design space of these parallel architectures is huge, with many architectural options. Exploring the design space gets even more difficult if, beyond performance and area, we also consider metrics like performance and area efficiency, where the designer seeks the architecture with the best performance per chip area and the best sustainable performance. In this paper we present an algorithm-oriented approach to designing a many-core architecture. Instead of exploring the design space based on experimental execution results for a particular benchmark of algorithms, our approach is to analyze the algorithms formally, considering the main architectural aspects, and to determine how each architectural aspect relates to the performance of the architecture when running an algorithm or set of algorithms. The architectural aspects considered include the number of cores, the local memory available in each core, the communication bandwidth between the many-core architecture and the external memory, and the memory hierarchy. To exemplify the approach, we carried out a theoretical analysis of a dense matrix multiplication algorithm and determined an equation relating the number of execution cycles to the architectural parameters. Based on this equation, a many-core architecture was designed. The results indicate that a 100 mm² integrated circuit implementation of the proposed architecture, in a 65 nm technology, achieves 464 GFLOPs (double-precision floating point) for a memory bandwidth of 16 GB/s, corresponding to a performance efficiency of 71%. In a 45 nm technology, a 100 mm² chip attains 833 GFLOPs, corresponding to 84% of peak performance. These figures are better than those obtained by previous many-core architectures, except for area efficiency, which is limited by the lower memory bandwidth considered. The results achieved are also better than those of previous state-of-the-art many-core architectures designed specifically to achieve high performance for matrix multiplication.
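As a hedged illustration of how such an equation ties performance to architectural parameters, the roofline-style Python sketch below (not the paper's exact cycle equation) estimates sustainable GFLOP/s for a blocked matrix multiplication from core count, frequency, memory bandwidth, and per-core tile size. The parameter values in the example are hypothetical, not the paper's design point.

```python
def attainable_gflops(cores, flops_per_core_per_cycle, freq_ghz,
                      mem_bw_gb_s, tile_elems, word_bytes=8):
    """Roofline-style estimate: for a blocked dense matmul with b x b
    tiles held in each core's local memory, roughly 2*b**3 flops are done
    per ~2*b**2 words moved from external memory, so arithmetic intensity
    is about b flops/word. Performance is capped by compute or bandwidth."""
    b = int(tile_elems ** 0.5)                 # tile is b x b elements
    intensity = b / word_bytes                 # approx. flops per byte
    peak = cores * flops_per_core_per_cycle * freq_ghz   # GFLOP/s
    memory_bound = mem_bw_gb_s * intensity               # GFLOP/s
    return min(peak, memory_bound)

# Hypothetical parameters, loosely in the spirit of the figures quoted above.
est = attainable_gflops(cores=64, flops_per_core_per_cycle=2,
                        freq_ghz=0.8, mem_bw_gb_s=16, tile_elems=64 * 64)
print(f"estimated sustainable performance: {est:.0f} GFLOP/s")
```

The sketch makes the abstract's trade-off visible: enlarging per-core local memory raises the tile size b and hence arithmetic intensity, shifting the design from bandwidth-bound toward compute-bound, which is why area efficiency in the paper is limited by the 16 GB/s bandwidth assumed.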
Abstract:
The dot enzyme-linked immunosorbent assay (dot-ELISA) was standardized using somatic (S) and excretory-secretory (ES) antigens of Toxocara canis for the detection of specific antibodies in 22 serum samples from children aged 1 to 15 years with clinical signs of toxocariasis. Fourteen serum samples from apparently normal individuals and 28 sera from patients with other pathologies were used as controls. All samples were tested before and after absorption with Ascaris suum extract. When the results were compared with ELISA, the two tests were found to have similar sensitivity, but dot-ELISA was more specific with both antigens studied. Dot-ELISA proved effective for the diagnosis of human toxocariasis, offering advantages in yield, stability, time, ease of execution, and low cost.