855 results for rietveld refinement
Abstract:
Hybrid logics, which add to the modal description of transition structures the ability to refer to specific states, offer a generic framework to approach the specification and design of reconfigurable systems, i.e., systems with reconfiguration mechanisms governing the dynamic evolution of their execution configurations in response to both external stimuli and internal performance measures. A formal representation of such systems is through transition structures whose states correspond to the different configurations they may adopt. Therefore, each node is endowed with, for example, an algebra or a first-order structure, to precisely characterise the semantics of the services provided in the corresponding configuration. This paper characterises equivalence and refinement for these sorts of models in a way which is independent of (or parametric on) whatever logic (propositional, equational, fuzzy, etc.) is found appropriate to describe the local configurations. A Hennessy-Milner-like theorem is proved for hybridised logics.
Abstract:
Magdeburg, Univ., Faculty of Computer Science, Diss., 2012
Abstract:
Weaning, social environment, dendrites, dendritic spines, limbic system
Abstract:
In several computer graphics areas, a refinement criterion is often needed to decide whether to go on or to stop sampling a signal. When the sampled values are homogeneous enough, we assume that they represent the signal fairly well and we do not need further refinement, otherwise more samples are required, possibly with adaptive subdivision of the domain. For this purpose, a criterion which is very sensitive to variability is necessary. In this paper, we present a family of discrimination measures, the f-divergences, meeting this requirement. These convex functions have been well studied and successfully applied to image processing and several areas of engineering. Two applications to global illumination are shown: oracles for hierarchical radiosity and criteria for adaptive refinement in ray-tracing. We obtain significantly better results than with classic criteria, showing that f-divergences are worth further investigation in computer graphics. Also a discrimination measure based on entropy of the samples for refinement in ray-tracing is introduced. The recursive decomposition of entropy provides us with a natural method to deal with the adaptive subdivision of the sampling region.
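To make the idea concrete, here is a minimal sketch (not the paper's implementation) of an f-divergence-based refinement oracle: the sampled values are normalized into a discrete distribution and compared against the uniform distribution with the Kullback-Leibler divergence, the f-divergence with f(t) = t log t. Homogeneous samples give a divergence near zero, so refinement is triggered only when the divergence exceeds a threshold; the function names and the threshold value are illustrative.

```python
import math

def kl_divergence(p, q):
    """Kullback-Leibler divergence, an f-divergence with f(t) = t*log(t)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0.0)

def needs_refinement(samples, threshold=0.05):
    """Refine when the sampled values deviate too much from homogeneity.

    The samples (e.g. radiances collected in a region) are normalized into a
    discrete distribution and compared against the uniform distribution; a
    small divergence means the region is homogeneous enough to stop sampling.
    """
    total = sum(samples)
    if total <= 0.0:
        return False                         # nothing measured, nothing to refine
    p = [s / total for s in samples]         # empirical distribution of the samples
    q = [1.0 / len(samples)] * len(samples)  # uniform reference distribution
    return kl_divergence(p, q) > threshold

# Example: a homogeneous region is left alone, a contrasted one is subdivided.
print(needs_refinement([0.50, 0.52, 0.49, 0.51]))   # False
print(needs_refinement([0.05, 0.90, 0.10, 0.85]))   # True
```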
Abstract:
Levels of low-density lipoprotein (LDL) cholesterol, high-density lipoprotein (HDL) cholesterol, triglycerides and total cholesterol are heritable, modifiable risk factors for coronary artery disease. To identify new loci and refine known loci influencing these lipids, we examined 188,577 individuals using genome-wide and custom genotyping arrays. We identify and annotate 157 loci associated with lipid levels at P < 5 × 10⁻⁸, including 62 loci not previously associated with lipid levels in humans. Using dense genotyping in individuals of European, East Asian, South Asian and African ancestry, we narrow association signals in 12 loci. We find that loci associated with blood lipid levels are often associated with cardiovascular and metabolic traits, including coronary artery disease, type 2 diabetes, blood pressure, waist-hip ratio and body mass index. Our results demonstrate the value of using genetic data from individuals of diverse ancestry and provide insights into the biological mechanisms regulating blood lipids to guide future genetic, biological and therapeutic research.
Abstract:
The characterization and categorization of coarse aggregates for use in portland cement concrete (PCC) pavements is a highly refined process at the Iowa Department of Transportation. Over the past 10 to 15 years, much effort has been directed at pursuing direct testing schemes to supplement or replace existing physical testing schemes. Direct testing refers to the process of directly measuring the chemical and mineralogical properties of an aggregate and then attempting to correlate those measured properties to historical performance information (i.e., field service record). This is in contrast to indirect measurement techniques, which generally attempt to extrapolate the performance of laboratory test specimens to expected field performance. The purpose of this research project was to investigate and refine the use of direct testing methods, such as X-ray analysis techniques and thermal analysis techniques, to categorize carbonate aggregates for use in portland cement concrete. The results of this study indicated that the general testing methods that are currently used to obtain data for estimating service life tend to be very reliable and have good to excellent repeatability. Several changes to the current techniques were recommended to enhance the long-term reliability of the carbonate database. These changes can be summarized as follows: (a) More stringent limits need to be set on the maximum particle size in the samples subjected to testing; this should help to improve the reliability of all three of the test methods studied during this project. (b) X-ray diffraction testing needs to be refined to incorporate the use of an internal standard; this will help to minimize the influence of sample positioning errors and will also allow the concentrations of the various minerals present in the samples to be calculated. (c) Thermal analysis data needs to be corrected for moisture content and clay content prior to calculating the carbonate content of the sample.
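As a rough illustration of the kind of correction described in item (c), the sketch below derives a carbonate content from thermogravimetric mass losses, assuming calcite stoichiometry (CaCO3 releases CO2, about 44% of its mass, on decomposition) and treating the low-temperature loss as free moisture and the intermediate loss as clay-bound water. The temperature windows, variable names and splitting of the losses are hypothetical and are not the Iowa DOT procedure.

```python
# Hypothetical sketch of a moisture- and clay-corrected carbonate calculation
# from thermogravimetric (TGA) mass-loss data; not the Iowa DOT procedure.

CO2_FRACTION_OF_CALCITE = 44.01 / 100.09   # CaCO3 -> CaO + CO2, ~44% mass loss

def carbonate_content(initial_mass_mg,
                      loss_below_200C_mg,      # treated as free moisture
                      loss_200_to_600C_mg,     # treated as clay-bound water
                      loss_600_to_900C_mg):    # treated as carbonate CO2 release
    """Return the carbonate mass fraction on a dry, clay-corrected basis."""
    carbonate_mass = loss_600_to_900C_mg / CO2_FRACTION_OF_CALCITE
    corrected_sample_mass = (initial_mass_mg
                             - loss_below_200C_mg    # remove moisture
                             - loss_200_to_600C_mg)  # remove clay-bound water
    return carbonate_mass / corrected_sample_mass

# Example: 100 mg sample, 1 mg moisture, 2 mg clay-bound water, 40 mg CO2 released.
print(f"{carbonate_content(100.0, 1.0, 2.0, 40.0):.1%}")
```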
Abstract:
Homology modeling is the most commonly used technique to build a three-dimensional model for a protein sequence. It heavily relies on the quality of the sequence alignment between the protein to model and related proteins with a known three-dimensional structure. Alignment quality can be assessed according to the physico-chemical properties of the three-dimensional models it produces. In this work, we introduce fifteen predictors designed to evaluate the properties of the models obtained for various alignments. They consist of an energy value obtained from different force fields (CHARMM, ProsaII or ANOLEA) computed on residues selected around misaligned regions. These predictors were evaluated on ten challenging test cases. For each target, all possible ungapped alignments are generated and their corresponding models are computed and evaluated. The best predictor, retrieving the structural alignment for 9 out of 10 test cases, is based on the ANOLEA atomistic mean force potential and takes into account residues around misaligned secondary structure elements. The performance of the other predictors is significantly lower. This work shows that substantial improvement in local alignments can be obtained by careful assessment of the local structure of the resulting models.
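The sketch below illustrates only the enumeration-and-scoring skeleton of this approach, under simplified assumptions: an ungapped alignment of a target against a template is just an offset, each offset is scored by a placeholder energy function standing in for the CHARMM/ProsaII/ANOLEA-based predictors of the paper, and the lowest-energy alignment is kept. All function names here are illustrative.

```python
def enumerate_ungapped_alignments(target: str, template: str):
    """Yield (offset, aligned_pairs) for every ungapped alignment.

    A negative offset shifts the target left of the template, a positive one
    shifts it right; only the overlapping residues are paired.
    """
    for offset in range(-len(target) + 1, len(template)):
        pairs = [(i, i + offset) for i in range(len(target))
                 if 0 <= i + offset < len(template)]
        if pairs:
            yield offset, pairs

def model_energy(target, template, pairs):
    """Placeholder for a force-field-based predictor (CHARMM, ProsaII, ANOLEA).

    Here we simply penalize mismatched residue identities; a real predictor
    would build the 3D model and evaluate residues around misaligned regions.
    """
    return sum(1.0 for i, j in pairs if target[i] != template[j]) - 0.1 * len(pairs)

def best_ungapped_alignment(target, template):
    """Return the offset whose threaded model has the lowest (pseudo-)energy."""
    return min(enumerate_ungapped_alignments(target, template),
               key=lambda item: model_energy(target, template, item[1]))[0]

print(best_ungapped_alignment("ACDEFG", "XXACDEFGYY"))  # -> 2
```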
Abstract:
In the field of molecular biology, scientists adopted for decades a reductionist perspective in their inquiries, being predominantly concerned with the intricate mechanistic details of subcellular regulatory systems. However, integrative thinking was still applied at a smaller scale in molecular biology to understand the underlying processes of cellular behaviour for at least half a century. It was not until the genomic revolution at the end of the previous century that we required model building to account for systemic properties of cellular activity. Our system-level understanding of cellular function is to this day hindered by drastic limitations in our capability of predicting cellular behaviour to reflect system dynamics and system structures. To this end, systems biology aims for a system-level understanding of functional intra- and inter-cellular activity. Modern biology brings about a high volume of data, whose comprehension we cannot even aim for in the absence of computational support. Computational modelling therefore bridges modern biology to computer science, providing a number of assets that prove invaluable in the analysis of complex biological systems, such as a rigorous characterization of the system structure, simulation techniques, perturbation analysis, etc. Computational biomodels have grown considerably in size in the past years, with major contributions made towards the simulation and analysis of large-scale models, starting with signalling pathways and culminating with whole-cell models, tissue-level models, organ models and full-scale patient models. The simulation and analysis of models of such complexity very often require, in fact, the integration of various sub-models, entwined at different levels of resolution and whose organization spans several levels of hierarchy. This thesis revolves around the concept of quantitative model refinement in relation to the process of model building in computational systems biology. The thesis proposes a sound computational framework for the stepwise augmentation of a biomodel. One starts with an abstract, high-level representation of a biological phenomenon, which is materialised into an initial model that is validated against a set of existing data. Consequently, the model is refined to include more details regarding its species and/or reactions. The framework is employed in the development of two models, one for the heat shock response in eukaryotes and the second for the ErbB signalling pathway. The thesis spans several formalisms used in computational systems biology that are inherently quantitative, reaction-network models, rule-based models and Petri net models, as well as a recent, intrinsically qualitative formalism: reaction systems. The choice of modelling formalism is, however, determined by the nature of the question the modeller aims to answer. Quantitative model refinement turns out to be not only essential in the model development cycle, but also beneficial for the compilation of large-scale models, whose development requires the integration of several sub-models across various levels of resolution and underlying formal representations.
Abstract:
Building a computational model for complex biological systems is an iterative process. It starts from an abstraction of the process and then incorporates more details regarding the specific biochemical reactions, which changes the model fit. Meanwhile, the model's numerical properties, such as its numerical fit and validation, should be preserved. However, refitting the model after each refinement iteration is computationally expensive. There is an alternative approach, known as quantitative model refinement, which preserves the model fit without the need to refit the model after each refinement iteration. The aim of this thesis is to develop and implement a tool called ModelRef that performs quantitative model refinement automatically. It is implemented both as a stand-alone Java application and as a component of the Anduril framework. ModelRef performs data refinement of a model and generates the results in two well-known formats (SBML and CPS). The tool reduces the time and resources needed, as well as the errors generated, compared with the traditional reiteration of the whole model to perform the fitting procedure.
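As a toy illustration of the idea behind quantitative model refinement (not ModelRef's actual implementation, which is a Java application and Anduril component operating on SBML and CPS files), the sketch below refines one species of a mass-action reaction network into two subspecies: each reaction consuming the species is duplicated per subspecies with the same rate constant, and the initial amount is split between them, so the sum of the subspecies reproduces the original time course and the fit is preserved without refitting. Occurrences of the species on the product side would instead require splitting rate constants; the data layout and the names (refine_species, A1, A2) are illustrative.

```python
from copy import deepcopy

# A toy mass-action reaction network: {id: {"reactants": [...], "products": [...], "k": rate}}.
# The dictionary layout is illustrative; it is not the SBML/CPS structure ModelRef works on.
model = {
    "reactions": {"degrade_A": {"reactants": ["A"], "products": [], "k": 0.3}},
    "initial":   {"A": 10.0},
}

def refine_species(model, species, subspecies, split):
    """Refine `species` into `subspecies` while preserving the quantitative fit.

    Every reaction that consumes the species is duplicated once per subspecies
    with the same rate constant, and the initial amount is distributed according
    to `split`; under mass-action kinetics the sum of the subspecies then follows
    the same time course as the original species, so no refitting is needed.
    """
    refined = deepcopy(model)
    amount = refined["initial"].pop(species)
    for sub, frac in zip(subspecies, split):
        refined["initial"][sub] = frac * amount
    for rid, rxn in list(refined["reactions"].items()):
        if species in rxn["reactants"]:
            del refined["reactions"][rid]
            for sub in subspecies:
                copy = deepcopy(rxn)
                copy["reactants"] = [sub if s == species else s for s in rxn["reactants"]]
                refined["reactions"][f"{rid}_{sub}"] = copy   # same rate constant k
    return refined

refined = refine_species(model, "A", ["A1", "A2"], [0.4, 0.6])
print(sorted(refined["reactions"]))   # ['degrade_A_A1', 'degrade_A_A2']
print(refined["initial"])             # {'A1': 4.0, 'A2': 6.0}
```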
Abstract:
Adults code faces in reference to category-specific norms that represent the different face categories encountered in the environment (e.g., race, age). Reliance on such norm-based coding appears to aid recognition, but few studies have examined the development of separable prototypes and the way in which experience influences the refinement of the coding dimensions associated with different face categories. The present dissertation was thus designed to investigate the organization and refinement of face space and the role of experience in shaping sensitivity to its underlying dimensions. In Study 1, I demonstrated that face space is organized with regard to norms that reflect face categories that are both visually and socially distinct. These results provide an indication of the types of category-specific prototypes that can conceivably exist in face space. Study 2 was designed to investigate whether children rely on category-specific prototypes and the extent to which experience facilitates the development of separable norms. I demonstrated that unlike adults and older children, 5-year-olds rely on a relatively undifferentiated face space, even for categories with which they receive ample experience. These results suggest that the dimensions of face space undergo significant refinement throughout childhood; 5 years of experience with a face category is not sufficient to facilitate the development of separable norms. In Studies 3 through 5, I examined how early and continuous exposure to young adult faces may optimize the face processing system for the dimensions of young relative to older adult faces. In Study 3, I found evidence for a young adult bias in attentional allocation among young and older adults. However, whereas young adults showed an own-age recognition advantage, older adults exhibited comparable recognition for young and older faces. These results suggest that despite the significant experience that older adults have with older faces, the early and continuous exposure they received with young faces continues to influence their recognition, perhaps because face space is optimized for young faces. In Studies 4 and 5, I examined whether sensitivity to deviations from the norm is superior for young relative to older adult faces. I used normality/attractiveness judgments as a measure of this sensitivity; to examine whether biases were specific to norm-based coding, I asked participants to discriminate between the same faces. Both young and older adults were more accurate when tested with young relative to older faces—but only when judging normality. Like adults, 3- and 7-year-olds were more accurate in judging the attractiveness of young faces; however, unlike adults, this bias extended to the discrimination task. Thus by 3 years of age children are more sensitive to differences among young relative to older faces, suggesting that young children's perceptual system is more finely tuned for young than older adult faces. Collectively, the results of this dissertation help elucidate the development of category-specific norms and clarify the role of experience in shaping sensitivity to the dimensions of face space.
Abstract:
[Thesis] (Master of Science with specialization in Ceramic Engineering) U.A.N.L.
Abstract:
Magnesium alloys have strong potential for weight reduction in a wide range of technical applications because of their low density compared to other structural metallic materials. Therefore, an extensive growth in the usage of magnesium alloys in the automobile sector is expected in the coming years to enhance fuel efficiency through mass reduction. The drawback associated with the use of commercially cheaper Mg-Al based alloys such as AZ91, AM60 and AM50 is their inferior creep properties above 100°C due to the presence of discontinuous Mg17Al12 phases at the grain boundaries. Although rare-earth-based magnesium alloys show better mechanical properties, it is not economically viable to use these alloys in the auto industry. Recently, many new Mg-Al based alloy systems have been developed for high temperature applications, which do not contain the Mg17Al12 phase. It has been proved that the addition of a high percentage of zinc (which depends upon the percentage of Al) to binary Mg-Al alloys also ensures the complete removal of the Mg17Al12 phase and hence provides superior high temperature properties. ZA84 alloy is one such system, which contains 8% Zn (Mg-8Zn-4Al-0.2Mn, all in wt%) and shows superior creep resistance compared to AZ and AM series alloys. These alloys are mostly used in die casting industries. However, certain large and heavy components made from this alloy by sand casting show lower mechanical properties because of their coarse microstructure. Moreover, further improvement of their high temperature behaviour through microstructural modification is also an essential task to make this alloy suitable for the replacement of high strength aluminium alloys used in the automobile industry. Grain refinement is an effective way to improve the tensile behaviour of engineering alloys. In fact, grain refinement of Mg-Al based alloys is well documented in the literature. However, there is no grain refiner commercially available in the market for Mg-Al alloys. It is also reported in the literature that the microstructure of AZ91 alloy is modified through minor elemental additions such as Sb, Si, Sr, Ca, etc., which enhance its high temperature properties because of the formation of new stable intermetallics. The same strategy can be used with the ZA84 alloy system to improve its high temperature properties further without sacrificing the other properties. The primary objective of the present research work, "Studies on grain refinement and alloying additions on the microstructure and mechanical properties of Mg-8Zn-4Al alloy", is twofold: 1. To investigate the role of individual and combined additions of Sb and Ca on the microstructure and mechanical properties of ZA84 alloy. 2. To synthesise a novel Mg-1wt%Al4C3 master alloy for grain refinement of ZA84 alloy and investigate its effects on mechanical properties.
Abstract:
Demand for magnesium and its alloys has increased significantly in the automotive industry because of their great potential for reducing the weight of components, thus improving the fuel efficiency of the vehicle. To date, most Mg products have been fabricated by casting, especially die casting, because of its high productivity, suitable strength, and acceptable quality and dimensional accuracy; components produced through sand, gravity and low-pressure die casting account for only a small share. In fact, a higher solidification rate is possible only in high pressure die casting, which results in a finer grain size. However, achieving a high cooling rate in gravity casting using sand and permanent moulds is difficult, which results in a coarser grain structure and poor mechanical properties, an important aspect of performance in industrial applications. Grain refinement is technologically attractive because it generally does not adversely affect ductility and toughness, contrary to most other strengthening methods. Therefore, the formation of a fine grain structure in these castings is crucial in order to improve the mechanical properties of the cast components. The present investigation is therefore “GRAIN REFINEMENT STUDIES ON Mg AND Mg-Al BASED ALLOYS”. The primary objective of this investigation is to study the effect of various grain-refining inoculants (Al-4B and Al-5TiB2 master alloys, Al4C3, charcoal particles) on pure Mg and Mg-Al alloys such as AZ31 and AZ91, and to study their grain-refining mechanisms. The second objective is to study the effect of the superheating process on the grain size of AZ31 and AZ91 Mg alloys with and without inoculant additions, and, in addition, to study the effect of grain refinement on the mechanical properties of Mg and Mg-Al alloys. The thesis is organized into seven chapters, and the studies are described below in detail.
Abstract:
This paper presents methods for moving object detection in airborne video surveillance. Motion segmentation in this scenario is usually difficult because of the small size of the objects, the motion of the camera, and inconsistency in the detected object shape. Here we present a motion segmentation system for moving-camera video, based on background subtraction. Adaptive background building is used so that the background is created from the most recent frames. Our proposed system offers a CPU-efficient alternative to conventional batch-processing-based background subtraction systems. We further refine the segmented motion by mean-shift based mode association.
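A minimal sketch of the adaptive background-subtraction idea, assuming OpenCV (cv2) is available: a running-average update keeps the background weighted towards the most recent frames, and thresholding the frame-to-background difference yields the motion mask. The learning rate, threshold, function name and file name are illustrative, and the mean-shift-based mode association used to refine the segmentation is not reproduced here.

```python
import cv2

def segment_motion(video_path, learning_rate=0.05, diff_threshold=25):
    """Background subtraction with a running-average (adaptive) background."""
    cap = cv2.VideoCapture(video_path)
    background = None
    masks = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype("float32")
        if background is None:
            background = gray.copy()          # initialise from the first frame
        # Exponentially weighted update: recent frames dominate the background.
        cv2.accumulateWeighted(gray, background, learning_rate)
        diff = cv2.absdiff(gray, background)
        _, mask = cv2.threshold(diff.astype("uint8"), diff_threshold, 255,
                                cv2.THRESH_BINARY)
        masks.append(mask)                    # white pixels = moving objects
    cap.release()
    return masks

# Example usage (hypothetical file name):
# masks = segment_motion("airborne_clip.avi")
```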