999 results for Data refinement
Abstract:
Building a computational model of a complex biological system is an iterative process. It starts from an abstraction of the process and then incorporates more detail about the specific biochemical reactions, which changes the model fit. Meanwhile, the model's numerical properties, such as its numerical fit and validation, should be preserved. However, refitting the model after each refinement iteration is computationally expensive. An alternative approach, known as quantitative model refinement, preserves the model fit without the need to refit after each refinement iteration. The aim of this thesis is to develop and implement a tool called ModelRef that performs quantitative model refinement automatically. It is implemented both as a stand-alone Java application and as a component of the Anduril framework. ModelRef performs data refinement of a model and generates the results in two well-known formats (SBML and CPS). The tool substantially reduces the time and resources needed, as well as the errors introduced, compared with refitting the whole model after each iteration.
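The core idea of this kind of data refinement is easy to illustrate on a toy mass-action model. The sketch below is a hypothetical, minimal representation (ModelRef itself operates on SBML/CPS files; the dict encoding, `refine_species`, and `simulate` are inventions of this example): a species is split into subspecies, its initial amount is distributed among them, and every reaction consuming it is copied per subspecies, so the summed trajectory of the subspecies reproduces the original species without any refitting.

```python
# Toy data refinement of a first-order mass-action model (hypothetical
# encoding, not ModelRef's actual SBML/CPS representation).

def refine_species(model, species, subspecies):
    """Replace `species` by `subspecies` while preserving the model fit."""
    refined = {"init": dict(model["init"]), "reactions": []}
    # Distribute the initial amount uniformly over the subspecies.
    amount = refined["init"].pop(species)
    for s in subspecies:
        refined["init"][s] = amount / len(subspecies)
    for (reactant, product, k) in model["reactions"]:
        if reactant == species:
            # One copy per subspecies with the same rate constant: the
            # summed subspecies dynamics equal the original dynamics.
            for s in subspecies:
                refined["reactions"].append((s, product, k))
        else:
            refined["reactions"].append((reactant, product, k))
    return refined

def simulate(model, t_end=1.0, dt=0.001):
    """Forward-Euler simulation of first-order mass-action reactions."""
    state = dict(model["init"])
    for _ in range(int(t_end / dt)):
        delta = {s: 0.0 for s in state}
        for (r, p, k) in model["reactions"]:
            flux = k * state[r]
            delta[r] -= flux * dt
            delta[p] += flux * dt
        for s in state:
            state[s] += delta[s]
    return state

original = {"init": {"A": 1.0, "B": 0.0}, "reactions": [("A", "B", 2.0)]}
refined = refine_species(original, "A", ["A1", "A2"])
```

Simulating both models, the total A1 + A2 in the refined model tracks A in the original exactly, which is the fit-preservation property the abstract describes.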
Abstract:
Action systems are a framework for reasoning about discrete reactive systems. Back, Petre and Porres have extended action systems to continuous action systems, which can be used to model hybrid systems. In this paper we define a refinement relation and develop practical data refinement rules for continuous action systems. The meaning of continuous action systems is expressed in terms of a mapping from continuous action systems to action systems. First, we present a new mapping from continuous action systems to action systems such that Back's definition of trace refinement is correct with respect to it. Second, we present a stream semantics that is compatible with the trace semantics, but is preferable to it because it is more general. Although action system trace refinement rules are applicable to continuous action systems under a stream semantics, they are not complete. Finally, we introduce a new data refinement rule that is valid with respect to the stream semantics and can be used to prove refinements that are not possible in the trace semantics, and we analyse the completeness of our new rule in conjunction with the existing trace refinement rules.
Abstract:
Data refinements are refinement steps in which a program’s local data structures are changed. Data refinement proof obligations require the software designer to find an abstraction relation that relates the states of the original and new program. In this paper we describe an algorithm that helps a designer find an abstraction relation for a proposed refinement. Given sufficient time and space, the algorithm can find a minimal abstraction relation, and thus show that the refinement holds. As it executes, the algorithm displays mappings that cannot be in any abstraction relation. When the algorithm is not given sufficient resources to terminate, these mappings can help the designer find a suitable abstraction relation. The same algorithm can be used to test an abstraction relation supplied by the designer.
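The pruning idea behind such an algorithm can be sketched concretely. The toy code below computes the greatest simulation relation between a concrete and an abstract transition system by repeatedly discarding pairs that cannot belong to any abstraction relation, mirroring how impossible mappings get reported to the designer. It is an illustration of the general technique under invented example systems, not the paper's actual algorithm.

```python
# Greatest-simulation computation by iterative pruning (illustrative).

def greatest_simulation(conc_trans, abs_trans, conc_states, abs_states):
    """Pairs (c, a) such that every concrete step from c is matched by
    some abstract step from a, recursively."""
    rel = {(c, a) for c in conc_states for a in abs_states}
    changed = True
    while changed:
        changed = False
        for (c, a) in list(rel):
            for c2 in conc_trans.get(c, []):
                # c can step to c2, so a must match with a related a2;
                # if it cannot, (c, a) is in no abstraction relation.
                if not any((c2, a2) in rel for a2 in abs_trans.get(a, [])):
                    rel.discard((c, a))
                    changed = True
                    break
    return rel

# Concrete counter modulo 2 vs. a one-state abstract system that loops:
conc = {0: [1], 1: [0]}
abst = {"s": ["s"]}
rel = greatest_simulation(conc, abst, [0, 1], ["s"])

# An abstract state with no moves cannot simulate any moving state,
# so every candidate pair is pruned away:
abst_dead = {"t": []}
rel_dead = greatest_simulation(conc, abst_dead, [0, 1], ["t"])
```

When the result is non-empty and covers the initial states, it witnesses the refinement; the pruned pairs are exactly the mappings the algorithm can report as impossible.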
Abstract:
In New Zealand and Australia, the BRACElet project has been investigating students' acquisition of programming skills in introductory programming courses. The project has explored students' skills in basic syntax, tracing code, understanding code, and writing code, seeking to establish the relationships between these skills. This ITiCSE working group report presents the most recent step in the BRACElet project, which includes replication of earlier analysis using a far broader pool of naturally occurring data, refinement of the SOLO taxonomy in code-explaining questions, extension of the taxonomy to code-writing questions, extension of some earlier studies on students' 'doodling' while answering exam questions, and exploration of a further theoretical basis for work that until now has been primarily empirical.
Abstract:
This dissertation focuses on rock thermal conductivity and its correlations with petrographic, textural, and geochemical aspects, especially in granitic rocks. It aims to demonstrate the relations among these variables in an attempt to clarify the thermal behavior of rocks. The results can be useful for several applications: understanding and cross-checking regional heat flow results; predicting the thermal behavior of rocks from macroscopic evaluation (texture and mineralogy); providing more precise thermal conductivity data for the building construction field; and, especially, opening a discussion in the dimension stone industry on the use of these variables as a new technological parameter directly related to thermal comfort. Thermal conductivity data were obtained with Anter Corporation's Quickline™-30 thermal property measuring equipment. Measurements were conducted at temperatures between 25 and 38 °C on samples 2 cm in length and at least 6 cm in diameter. As to petrography, the results showed good correlations with quartz and mafic minerals. Linear correlation between mineralogy and thermal conductivity revealed that conductivity increases with quartz percentage and decreases with increasing mafic mineral content. The feldspars (K-feldspar and plagioclase) show dispersion. The quartz relation becomes more evident when sample sets with >20% and <20% quartz are compared. Sets with more than 20% quartz (syenogranites, monzogranites, granodiorites, etc.) largely exhibit conductivity values above 2.5 W/mK, while sets with less than 20% (syenites, monzonites, gabbros, diorites, etc.) have an average thermal conductivity below 2.5 W/mK.
As to texture, rocks classified as coarse/porphyritic generally showed better correlations than those classified as fine/medium. In the case of quartz, coarse/porphyritic rocks showed higher correlation factors than fine/medium ones. The feldspars (K-feldspar and plagioclase) again showed dispersion. The mafic minerals showed negative correlations in both coarse/porphyritic and fine/medium rocks, with correlation factors smaller than those obtained for quartz. In Streckeisen's QAP diagram (1976), the rocks range from alkali-feldspar granites to tonalites, and from syenites to gabbros, diorites, etc. Correlation of thermal conductivity with geochemistry largely confirmed the mineralogical results. Where a correlation exists, it is linear. This behavior is especially clear for SiO2, which correlates much like quartz: thermal conductivity increases as SiO2 increases. Basic to intermediate rocks always presented values below 2.5 W/mK, similar to rocks with quartz <20%, while acid rocks presented values above 2.5 W/mK, similar to rocks with quartz >20% (granites). In all other cases the correlation factors are low, with behavior opposite to that of Fe2O3, CaO, MgO, and TiO2. For Al2O3, K2O, and Na2O the results are inconclusive and statistically disperse. Knowledge of thermal properties, especially thermal conductivity, proved very useful in the building construction field, since it involves both technological and thermal comfort aspects and in all cases favored fast, cheap, and precise results.
The relation between thermal conductivity and linear thermal expansion also showed satisfactory results, especially regarding the role of quartz as a common, determining phase between the two variables. Thermal conductivity studies, together with rock density, can serve as an additional tool for choosing materials when considering structural calculation and thermal comfort, since in dimension stones a small variation in density corresponds to a considerable variation in thermal conductivity.
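The kind of linear correlation the study reports between quartz content and thermal conductivity can be illustrated with a short computation. The quartz percentages and conductivities below are made-up placeholder values for the example, not measurements from the dissertation.

```python
# Pearson correlation between modal quartz content and thermal
# conductivity (illustrative data only).
from math import sqrt

quartz_pct   = [5, 12, 18, 25, 30, 35]         # modal quartz (%)
conductivity = [1.9, 2.1, 2.3, 2.7, 2.9, 3.2]  # k (W/mK)

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson(quartz_pct, conductivity)
# r close to +1 indicates the positive quartz-conductivity trend;
# a negative r would correspond to the mafic-mineral behavior.
```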
Abstract:
Back and von Wright have developed algebraic laws for reasoning about loops in the refinement calculus. We extend their work to reasoning about probabilistic loops in the probabilistic refinement calculus. We apply our algebraic reasoning to derive transformation rules for probabilistic action systems. In particular we focus on developing data refinement rules for probabilistic action systems. Our extension is interesting since some well-known transformation rules that are applicable to standard programs are not applicable to probabilistic ones: we identify some of these important differences and develop alternative rules where possible. Notably, our data refinement rules for probabilistic action systems are new.
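A representative example of the algebraic laws in question is the standard fixed-point (unfolding) characterisation of loops in the refinement calculus; this is a textbook sketch of the style of law being extended, not a rule quoted from the paper:

```latex
% Loops as least fixed points of the unfolding function:
\mathbf{while}\ b\ \mathbf{do}\ S\ \mathbf{od}
  \;=\; \mu X.\ \mathbf{if}\ b\ \mathbf{then}\ (S \mathbin{;} X)\ \mathbf{else}\ \mathit{skip}
```

In the probabilistic setting the same shape is studied over probabilistic choice, where (as the abstract notes) some of the standard laws fail and must be replaced.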
Abstract:
The compounds chlorothiazide and hydrochlorothiazide (crystalline form II) have been studied in their fully hydrogenous forms by powder neutron diffraction on the GEM diffractometer. The results of joint Rietveld refinement of the structures against multi-bank neutron and single-bank X-ray powder data are reported and show that accurate and precise structural information can be obtained from polycrystalline molecular organic materials by this route.
Abstract:
We present a new methodology that couples neutron diffraction experiments over a wide Q range with single-chain modelling in order to explore, in a quantitative manner, the intrachain organization of non-crystalline polymers. The technique is based on the assignment of parameters describing the chemical, geometric and conformational characteristics of the polymeric chain, and on the variation of these parameters to minimize the difference between the predicted and experimental diffraction patterns. The method is successfully applied to the study of molten poly(tetrafluoroethylene) at two different temperatures, and provides unambiguous information on the configuration of the chain and its degree of flexibility. From analysis of the experimental data a model is derived with C–C and C–F bond lengths of 1.58 and 1.36 Å, respectively, a backbone valence angle of 110° and a torsional angle distribution characterized by four rotational isomeric states, namely a split trans state at ±18°, giving rise to a helical chain conformation, and two gauche states at ±112°. The probability of trans conformers is 0.86 at T = 350 °C, decreasing slightly to 0.84 at T = 400 °C. Correspondingly, the chain segments are characterized by long all-trans sequences with random changes in sign, anisotropic in nature, which give rise to a rather stiff chain. We compare the results of this quantitative analysis of the experimental scattering data with the theoretical predictions of both force fields and molecular orbital conformation energy calculations.
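The reported four-state torsional distribution can be sketched as a simple sampling exercise. The code below draws backbone torsion angles from the stated states (trans at ±18° with total probability 0.86, gauche at ±112° with the remaining 0.14); splitting each pair equally is an assumption of this sketch, as is treating the draws as independent.

```python
# Sampling torsion angles from the four-state RIS distribution reported
# for molten PTFE at 350 °C (equal weights within each +/- pair assumed).
import random

STATES = [(+18.0, 0.43), (-18.0, 0.43), (+112.0, 0.07), (-112.0, 0.07)]

def sample_torsions(n, seed=0):
    """Draw n independent backbone torsion angles from the RIS weights."""
    rng = random.Random(seed)
    angles = [a for a, _ in STATES]
    weights = [p for _, p in STATES]
    return rng.choices(angles, weights=weights, k=n)

chain = sample_torsions(10_000, seed=1)
trans_fraction = sum(abs(a) == 18.0 for a in chain) / len(chain)
# trans_fraction fluctuates around the reported 0.86
```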
Abstract:
Powder X-ray diffraction (XRD) data were collected for La0.65Sr0.35MnO3 prepared through an alternative method from a stoichiometric mixture of Mn2O3, La2O3, and SrO2, fired at 1300 °C for 16 h. XRD analysis using the Rietveld method was carried out and it was found that the manganite has rhombohedral symmetry (space group R-3c). The lattice parameters are a = 5.5032 Å and c = 13.3674 Å. The bond valence computation indicates that the initial inclusion of Sr occurs at higher temperature. (C) 2002 International Centre for Diffraction Data.
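The bond valence computation mentioned here conventionally uses the expression s_i = exp((R0 − R_i)/b) with b = 0.37 Å, summed over the coordination sphere. The Mn–O distances and the R0 value below are illustrative placeholders, not the refined values from this study.

```python
# Bond valence sum for one coordination sphere (illustrative values).
from math import exp

def bond_valence_sum(distances, r0, b=0.37):
    """Sum of individual bond valences s_i = exp((r0 - r_i)/b)."""
    return sum(exp((r0 - r) / b) for r in distances)

# Six illustrative Mn-O distances (in Angstrom) for a near-regular
# octahedron; r0 = 1.76 is a commonly tabulated Mn(III)-O parameter.
mn_o = [1.95, 1.95, 1.96, 1.96, 1.97, 1.97]
bvs = bond_valence_sum(mn_o, r0=1.76)
# bvs lands between 3 and 4, as expected for mixed Mn(III)/Mn(IV)
# in a Sr-doped manganite.
```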
Abstract:
The C-form structure of nominally pure Gd2O3 obtained from fine spherical basic carbonate particles, and its differences with respect to literature XRD patterns as assessed by the Rietveld method, are reported. Gd2O3:Eu3+ from basic carbonate and Gd2O3 from oxalate were also investigated. All samples, except the one from the oxalate precursor, are narrowly sized, 100-200 nm. Only non-doped Gd2O3 from basic carbonate presents XRD data with smaller d(hkl) values than the literature ones. From Rietveld refinement, non-doped Gd2O3 from basic carbonate has the smallest crystallite size and that from oxalate the largest. The unit cell parameters also indicate a plane contraction in the Gd2O3 from basic carbonate. The presence of Eu3+ increases the crystallite size when the basic carbonate precursor is used to prepare Gd2O3 and avoids the plane contraction. The structural differences observed among the Gd2O3 samples are related to the type of precursor and to the presence or absence of the doping ion. (C) 2003 Elsevier B.V. (USA). All rights reserved.
Abstract:
In soil surveys, several sampling systems can be used to define the most representative sites for sample collection and description of soil profiles. In recent years, the conditioned Latin hypercube sampling system has gained prominence for soil surveys. In Brazil, most soil maps are at small scales and in paper format, which hinders their refinement. The objectives of this work are: (i) to compare two sampling systems based on the conditioned Latin hypercube for mapping soil classes and soil properties; (ii) to retrieve information from a detailed-scale soil map of a pilot watershed for its refinement, comparing two data mining tools, and to validate the new soil map; and (iii) to create and validate a soil map of a much larger, similar area by extrapolating the information extracted from the existing soil map. Two sampling designs were created, one by the conditioned Latin hypercube and one by the cost-constrained conditioned Latin hypercube. At each prospection site, soil classification and measurement of the A horizon thickness were performed. Maps were generated and validated for each sampling system, comparing the efficiency of these methods. The conditioned Latin hypercube captured greater variability of soils and properties than the cost-constrained version, although the former made field work more difficult. The conditioned Latin hypercube can capture greater soil variability, and the cost-constrained conditioned Latin hypercube presents great potential for use in soil surveys, especially in areas of difficult access. From an existing detailed-scale soil map of a pilot watershed, topographical information for each soil class was extracted from a Digital Elevation Model and its derivatives by two data mining tools. Maps were generated using each tool. The more accurate of the tools was used to extrapolate soil information to a much larger, similar area, and the generated map was validated.
It was possible to retrieve the existing soil map information and apply it to a larger area with similar soil forming factors at much lower financial cost. The KnowledgeMiner data mining tool and ArcSIE, used to create the soil map, presented the better results and enabled the use of an existing soil map to extract soil information and apply it to similar, larger areas at reduced cost, which is especially important in developing countries with limited financial resources for such activities, such as Brazil.
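The core stratification idea behind the designs compared above can be sketched with plain (unconditioned) Latin hypercube sampling: each covariate range is split into n strata and each stratum receives exactly one sample. The conditioned variant additionally forces the samples onto existing covariate values at candidate sites, and the cost-constrained form penalises inaccessible sites; neither refinement is shown here.

```python
# Minimal Latin hypercube sampler over k covariates scaled to [0, 1).
import random

def latin_hypercube(n, k, seed=0):
    """n samples in [0,1)^k with exactly one sample per stratum
    in every dimension."""
    rng = random.Random(seed)
    cols = []
    for _ in range(k):
        strata = list(range(n))
        rng.shuffle(strata)
        # One point drawn uniformly inside each of the n strata.
        cols.append([(s + rng.random()) / n for s in strata])
    return list(zip(*cols))

pts = latin_hypercube(10, 2)
# Sorting either coordinate recovers one value per tenth of the range,
# which is the marginal coverage guarantee of the design.
```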
Abstract:
National Highway Traffic Safety Administration, Washington, D.C.
Abstract:
Tibolone is used for hormone replacement in postmenopausal women, and isotibolone is considered the major degradation product of tibolone. Isotibolone can also be present in tibolone API raw materials owing to inadequate synthesis. Its presence therefore needs to be identified and quantified in the quality control of both the API and drug products. In this work we present the indexing of an isotibolone X-ray diffraction pattern measured with synchrotron light (λ = 1.2407 Å) in transmission mode. The characterization of the isotibolone sample by IR spectroscopy, elemental analysis, and thermal analysis is also presented. The isotibolone crystallographic data are a = 6.8066 Å, b = 20.7350 Å, c = 6.4489 Å, β = 76.428°, V = 884.75 Å³, space group P2(1), ρo = 1.187 g cm⁻³, and Z = 2. (C) 2009 International Centre for Diffraction Data. [DOI: 10.1154/1.3257612]
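The quoted cell parameters can be checked for internal consistency: for a monoclinic cell (b-unique, as in P2(1)), the volume is V = a·b·c·sin β, and the values above reproduce the reported 884.75 Å³.

```python
# Consistency check on the reported monoclinic unit cell.
from math import radians, sin

a, b, c = 6.8066, 20.7350, 6.4489  # cell edges, Angstrom
beta = radians(76.428)             # monoclinic angle

V = a * b * c * sin(beta)          # Angstrom^3, matches the quoted 884.75
```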
Abstract:
In this paper we demonstrate a refinement calculus for logic programs, which is a framework for developing logic programs from specifications. The paper is written in a tutorial-style, using a running example to illustrate how the refinement calculus is used to develop logic programs. The paper also presents an overview of some of the advanced features of the calculus, including the introduction of higher-order procedures and the refinement of abstract data types.