915 results for group model building
Abstract:
This is a version of the Jisc ‘Six Elements of Digital Capabilities’ model, specifically for teaching staff, or for academic staff in their teaching role. It is an update of the earlier ‘7 elements of digital literacy’ model (2009) and has many continuities with that framework. This version was produced in response to feedback that the base model alone does not provide enough detail to support embedding into practice. However, it is only an example of how the base model could be used to define the digital capabilities of teaching staff, and is meant to be adapted to suit specific settings.
Abstract:
This paper presents a model-based approach for reconstructing 3D polyhedral building models from aerial images. The approach exploits geometric and photometric properties resulting from the perspective projection of planar structures, with data provided by calibrated aerial images. The novelty of the approach lies in its feature-free formulation and its use of direct optimization based on raw image brightness. The proposed framework avoids feature extraction and matching: the 3D polyhedral model is estimated directly by optimizing an objective function that combines an image-based dissimilarity measure and a gradient score over several aerial images. The optimization is carried out by the Differential Evolution algorithm. The approach is intended to provide more accurate 3D reconstruction than feature-based approaches, and fast 3D model rectification and updating can take advantage of it. Several results and performance evaluations on real and synthetic images show the feasibility and robustness of the approach.
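The direct optimization described above can be illustrated with a minimal Differential Evolution loop. This is a generic DE/rand/1/bin sketch, not the paper's implementation: the `sphere` objective below is a stand-in for the paper's combined image-dissimilarity and gradient score, and all parameter names are illustrative.

```python
import numpy as np

def differential_evolution(objective, bounds, pop_size=20, F=0.8, CR=0.9,
                           n_gen=100, seed=0):
    """Minimal DE/rand/1/bin minimizer (illustrative stand-in)."""
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    lo, hi = np.array(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    scores = np.array([objective(x) for x in pop])
    for _ in range(n_gen):
        for i in range(pop_size):
            # pick three distinct candidates other than i
            idx = rng.choice([j for j in range(pop_size) if j != i],
                             3, replace=False)
            a, b, c = pop[idx]
            mutant = np.clip(a + F * (b - c), lo, hi)
            # binomial crossover; force at least one mutant coordinate
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True
            trial = np.where(cross, mutant, pop[i])
            s = objective(trial)
            if s < scores[i]:            # greedy selection
                pop[i], scores[i] = trial, s
    best = int(np.argmin(scores))
    return pop[best], scores[best]

# Hypothetical objective; in the paper this would score a candidate 3D
# polyhedral model against several aerial images.
sphere = lambda x: float(np.sum(x ** 2))
x_best, f_best = differential_evolution(sphere, [(-5, 5)] * 3)
```

In the paper's setting the decision variables would parameterize the polyhedral model (plane heights and orientations) rather than a generic vector.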
Abstract:
The formation of cerebral senile plaques composed of amyloid beta peptide (A beta) is a fundamental feature of Alzheimer's disease (AD). Glial cells, and more specifically microglia, become reactive in the presence of A beta. In a triple transgenic model of AD (3 x Tg-AD), we found a significant increase in activated microglia at 12 (by 111%) and 18 (by 88%) months of age when compared with non-transgenic (non-Tg) controls. This microglial activation correlated with A beta plaque formation, and activated microglia were closely associated with A beta plaques and smaller A beta deposits. We also found a significant increase in the area density of resting microglia in 3 x Tg-AD animals both at the plaque-free stage (at 9 months, by 105%) and after the development of A beta plaques (at 12 months by 54% and at 18 months by 131%). Our results show for the first time that the increase in the density of resting microglia precedes both plaque formation and activation of microglia by extracellular A beta accumulation. We suggest that AD pathology triggers a complex microglial reaction: at the initial stages of the disease the number of resting microglia increases, as if in preparation for the ensuing activation in an attempt to fight the extracellular A beta load that is characteristic of the terminal stages of the disease. Cell Death and Disease (2010) 1, e1; doi:10.1038/cddis.2009.2; published online 14 January 2010
Abstract:
The aim of this study is to develop a reference model for intervention in language processes, applied to the transformation of language normalisation within organisations of a socio-economic nature. It is based on the case study of an experience carried out over 10 years within a trades' union confederation, and has pursued a basically qualitative research strategy carried out in three stages: 1) undertaking field work through application of action-research methodology; 2) reconstructing experiences following processes of systematisation and conceptualisation of the systematised data, applying the Systematisation of Experiences and Grounded Theory methodologies; and 3) formulating a model for intervention, applying the Systems Approach methodology. Finally, we identified nine key ideas that make up the conceptual framework for the ENEKuS reference model, which is structured in nine 'action points', each having an operating sub-model applicable in practice.
Abstract:
We investigate the 2d O(3) model with the standard action by Monte Carlo simulation at couplings β up to 2.05. We measure the energy density, mass gap and susceptibility of the model, and gather high statistics on lattices of size L ≤ 1024 using the Floating Point Systems T-series vector hypercube and the Thinking Machines Corp.'s Connection Machine 2. Asymptotic scaling does not appear to set in for this action, even at β = 2.10, where the correlation length is 420. We observe a 20% difference between our estimate m/Λ_MS-bar = 3.52(6) at this β and the recent exact analytical result. We use the overrelaxation algorithm interleaved with Metropolis updates and show that decorrelation time scales with the correlation length and the number of overrelaxation steps per sweep. We determine its effective dynamical critical exponent to be z' = 1.079(10); thus critical slowing down is reduced significantly for this local algorithm, which is vectorizable and parallelizable.
We also use the cluster Monte Carlo algorithms, which are non-local Monte Carlo update schemes which can greatly increase the efficiency of computer simulations of spin models. The major computational task in these algorithms is connected component labeling, to identify clusters of connected sites on a lattice. We have devised some new SIMD component labeling algorithms, and implemented them on the Connection Machine. We investigate their performance when applied to the cluster update of the two dimensional Ising spin model.
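The connected component labeling task at the heart of the cluster update can be sketched with a serial union-find pass over the bond list. This is only a stand-in for the SIMD labeling algorithms the text describes, with illustrative names throughout:

```python
def label_clusters(bonds, n_sites):
    """Label connected clusters of lattice sites with union-find
    (serial stand-in for the SIMD labeling algorithms in the text)."""
    parent = list(range(n_sites))

    def find(i):
        # follow parent pointers to the root, with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i, j in bonds:            # each activated bond joins two sites
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj       # union the two clusters
    # compact the root ids into labels 0, 1, 2, ...
    roots = {}
    return [roots.setdefault(find(i), len(roots)) for i in range(n_sites)]

# 4 sites on a line, bonds joining 0-1 and 2-3: two clusters
labels = label_clusters([(0, 1), (2, 3)], 4)
# → [0, 0, 1, 1]
```

In a cluster algorithm such as Swendsen-Wang, the bond list would be drawn stochastically from the spin configuration, and each labeled cluster is then flipped as a unit.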
Finally we use a Monte Carlo Renormalization Group method to directly measure the couplings of block Hamiltonians at different blocking levels. For the usual averaging block transformation we confirm the renormalized trajectory (RT) observed by Okawa. For another, improved probabilistic block transformation we find the RT, showing that it is much closer to the standard action. We then use this block transformation to obtain the discrete β-function of the model, which we compare to the perturbative result. We do not see convergence, except when using a rescaled coupling β_E to effectively resum the series. For the latter case we see agreement for m/Λ_MS-bar at β = 2.14, 2.26, 2.38 and 2.50. To three loops m/Λ_MS-bar = 3.047(35) at β = 2.50, which is very close to the exact value m/Λ_MS-bar = 2.943. Our last point at β = 2.62 disagrees with this estimate, however.
Abstract:
Threefold symmetric Fe phosphine complexes have been used to model the structural and functional aspects of biological N2 fixation by nitrogenases. Low-valent bridging Fe-S-Fe complexes in the formal oxidation states Fe(II)/Fe(II), Fe(II)/Fe(I), and Fe(I)/Fe(I) have been synthesized which display rich spectroscopic and magnetic behavior. A series of cationic tris-phosphine borane (TPB) ligated Fe complexes have been synthesized and shown to bind a variety of nitrogenous ligands, including N2H4, NH3, and NH2.
Treatment of an anionic FeN2 complex with excess acid also results in the formation of some NH3, suggesting the possibility of a catalytic cycle for the conversion of N2 to NH3 mediated by Fe. Indeed, use of excess acid and reductant results in the formation of seven equivalents of NH3 per Fe center, demonstrating Fe-mediated catalytic N2 fixation with acid and reductant for the first time. Numerous control experiments indicate that this catalysis is likely mediated by a molecular species.
A number of other phosphine ligated Fe complexes have also been tested for catalysis and suggest that a hemi-labile Fe-B interaction may be critical for catalysis. Additionally, various conditions for the catalysis have been investigated. These studies further support the assignment of a molecular species and delineate some of the conditions required for catalysis.
Finally, combined spectroscopic studies have been performed on a putative intermediate for catalysis. These studies converge on an assignment of this new species as a hydrazido(2-) complex. Such species have been known on group 6 metals for some time, but this represents the first characterization of this ligand on Fe. Further spectroscopic studies suggest that this species is present in catalytic mixtures, which suggests that the first steps of a distal mechanism for N2 fixation are feasible in this system.
Abstract:
A long-standing challenge in transition metal catalysis is selective C–C bond coupling of simple feedstocks, such as carbon monoxide, ethylene or propylene, to yield value-added products. This work describes efforts toward selective C–C bond formation using early- and late-transition metals, which may have important implications for the production of fuels and plastics, as well as many other commodity chemicals.
The industrial Fischer-Tropsch (F-T) process converts synthesis gas (syngas, a mixture of CO + H2) into a complex mixture of hydrocarbons and oxygenates. Well-defined homogeneous catalysts for F-T may provide greater product selectivity for fuel-range liquid hydrocarbons compared to traditional heterogeneous catalysts. The first part of this work involved the preparation of late-transition metal complexes for use in syngas conversion. We investigated C–C bond forming reactions via carbene coupling using bis(carbene)platinum(II) compounds, which are models for putative metal–carbene intermediates in F-T chemistry. It was found that C–C bond formation could be induced by either (1) chemical reduction of or (2) exogenous phosphine coordination to the platinum(II) starting complexes. These two mild methods afforded different products, constitutional isomers, suggesting that at least two different mechanisms are possible for C–C bond formation from carbene intermediates. These results are encouraging for the development of a multicomponent homogeneous catalysis system for the generation of higher hydrocarbons.
A second avenue of research focused on the design and synthesis of post-metallocene catalysts for olefin polymerization. The polymerization chemistry of a new class of group 4 complexes supported by asymmetric anilide(pyridine)phenolate (NNO) pincer ligands was explored. Unlike typical early transition metal polymerization catalysts, NNO-ligated catalysts produce nearly regiorandom polypropylene, with as many as 30-40 mol % of insertions being 2,1-inserted (versus 1,2-inserted), compared to <1 mol % in most metallocene systems. A survey of model Ti polymerization catalysts suggests that catalyst modification pathways that could affect regioselectivity, such as C–H activation of the anilide ring, cleavage of the amine R-group, or monomer insertion into metal–ligand bonds, are unlikely. A parallel investigation of a Ti–amido(pyridine)phenolate polymerization catalyst, which features a five- rather than a six-membered Ti–N chelate ring but maintains a dianionic NNO motif, revealed that simply maintaining this motif was not enough to produce regioirregular polypropylene; in fact, these experiments seem to indicate that only an intact anilide(pyridine)phenolate-ligated complex will lead to regioirregular polypropylene. As yet, the underlying causes for the unique regioselectivity of anilide(pyridine)phenolate polymerization catalysts remain unknown. Further exploration of NNO-ligated polymerization catalysts could lead to the controlled synthesis of new types of polymer architectures.
Finally, we investigated the reactivity of a known Ti–phenoxy(imine) (Ti-FI) catalyst that has been shown to be very active for ethylene homotrimerization in an effort to upgrade simple feedstocks to liquid hydrocarbon fuels through co-oligomerization of heavy and light olefins. We demonstrated that the Ti-FI catalyst can homo-oligomerize 1-hexene to C12 and C18 alkenes through olefin dimerization and trimerization, respectively. Future work will include kinetic studies to determine monomer selectivity by investigating the relative rates of insertion of light olefins (e.g., ethylene) vs. higher α-olefins, as well as a more detailed mechanistic study of olefin trimerization. Our ultimate goal is to exploit this catalyst in a multi-catalyst system for conversion of simple alkenes into hydrocarbon fuels.
Abstract:
The olfactory bulb of mammals aids in the discrimination of odors. A mathematical model based on the bulbar anatomy and electrophysiology is described. Simulations of the highly non-linear model produce a 35-60 Hz modulated activity, which is coherent across the bulb. The decision states (for the odor information) in this system can be thought of as stable cycles, rather than as point stable states typical of simpler neuro-computing models. Analysis shows that a group of coupled non-linear oscillators are responsible for the oscillatory activities. The output oscillation pattern of the bulb is determined by the odor input. The model provides a framework in which to understand the transformation between odor input and bulbar output to the olfactory cortex. This model can also be extended to other brain areas such as the hippocampus, thalamus, and neocortex, which show oscillatory neural activities. There is significant correspondence between the model behavior and observed electrophysiology.
It has also been suggested that the olfactory bulb, the first processing center after the sensory cells in the olfactory pathway, plays a role in olfactory adaptation, odor sensitivity enhancement by motivation, and other olfactory psychophysical phenomena. The input from the higher olfactory centers to the inhibitory cells in the bulb are shown to be able to modulate the response, and thus the sensitivity, of the bulb to odor input. It follows that the bulb can decrease its sensitivity to a pre-existing and detected odor (adaptation) while remaining sensitive to new odors, or can increase its sensitivity to discover interesting new odors. Other olfactory psychophysical phenomena such as cross-adaptation are also discussed.
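The "group of coupled non-linear oscillators" identified in the analysis above can be loosely illustrated with a toy Kuramoto phase-oscillator model. This is a generic demonstration of coupling-induced coherence, not the bulbar model itself; all parameters are illustrative.

```python
import math
import random

def kuramoto(n=10, K=2.0, dt=0.01, steps=2000, seed=1):
    """Toy Kuramoto model: n coupled phase oscillators, loosely
    illustrating how coupling can produce a coherent oscillation
    across a population (as the bulb model does across the bulb)."""
    rng = random.Random(seed)
    omega = [rng.gauss(1.0, 0.1) for _ in range(n)]        # natural frequencies
    theta = [rng.uniform(0, 2 * math.pi) for _ in range(n)]  # initial phases
    for _ in range(steps):
        coupling = [sum(math.sin(theta[j] - theta[i]) for j in range(n)) / n
                    for i in range(n)]
        theta = [t + dt * (w + K * c)
                 for t, w, c in zip(theta, omega, coupling)]
    # order parameter r in [0, 1]; r near 1 means coherent activity
    r = abs(sum(math.e ** (1j * t) for t in theta)) / n
    return r

r = kuramoto()  # with strong coupling K, r approaches 1 (coherence)
```

With the coupling K set well above the synchronization threshold, the population locks into a single coherent rhythm; in the bulb model the analogous collective mode is the 35-60 Hz oscillation, patterned by the odor input.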
Abstract:
This thesis describes the preparation, characterization, and application of well-defined single-component group 10 salicylaldimine complexes for the polymerization of ethylene to high molecular weight materials, as well as the copolymerization of ethylene and functionalized olefins. After an initial introduction to the field, Chapter 2 describes the preparation of PPh3 complexes that contain a series of modified salicylaldimine and naphthaldimine ligands. Such complexes were activated for polymerization by the addition of cocatalysts such as Ni(COD)2 or B(C6F5)3. As the steric demand of the ligand set increased, the molecular weight, polymerization activity, and lifetime of the catalyst were observed to increase. In fact, complexes containing "bulky" ligands, such as the [Anthr,HSal] ligand (2.5), were found to be highly active single-component complexes for the polymerization of ethylene. Model hydrido compounds were prepared, allowing for a better understanding of both the mechanism of polymerization and one mode of decomposition.
Chapter 3 describes the effect that additives have on neutral Ni(II) polymerization catalysts such as 2.5. The addition of excess ethers, esters, ketones, anhydrides, alcohols, and water does not deactivate the catalysts for polymerization. However, the addition of excess acid, thiols, or phosphines was observed to shut down catalysis. Since excess phosphine was found to inhibit catalysis, "phosphine-free" complexes, such as the acetonitrile complex (3.26), were prepared. The acetonitrile complex was found to be the most active neutral polymerization catalyst prepared to date.
Chapter 4 outlines the use of catalysts 2.5 and 3.26 for the preparation of linear functionalized copolymers containing alcohols, esters, anhydrides, and ethers. Copolymers can be prepared with γ-functionalized α-olefins, functionalized norbornenes, and functionalized tricyclononenes, with up to 30 mol% comonomer incorporation.
Chapter 5 outlines the preparation of a series of Pt(II) alkyl/olefin salicylaldimine complexes which serve as models for the active species in the Ni(II)-catalyzed polymerization process. Understanding the nature of the M-olefin interaction as the electronic and steric properties of the salicylaldimine ligand are varied has allowed for a number of predictions about the design of future polymerization systems.
Abstract:
This thesis is a theoretical work on the space-time dynamic behavior of a nuclear reactor without feedback. Diffusion theory with G-energy groups is used.
In the first part the accuracy of the point kinetics (lumped-parameter description) model is examined. The fundamental approximation of this model is the splitting of the neutron density into a product of a known function of space and an unknown function of time; the properties of the system can then be averaged in space through the use of appropriate weighting functions, and as a result a set of ordinary differential equations is obtained for the description of the time behavior. Clearly, changes in the shape of the neutron-density distribution due to space-dependent perturbations are neglected. This results in an error in the eigenvalues, and it is for this error that bounds are derived. This is done by using the method of weighted residuals to reduce the original eigenvalue problem to that of a real asymmetric matrix. Gershgorin-type theorems are then used to find discs in the complex plane in which the eigenvalues are contained. The radii of the discs depend on the perturbation in a simple manner.
In the second part the effect of delayed neutrons on the eigenvalues of the group-diffusion operator is examined. The delayed neutrons cause a shifting of the prompt-neutron eigenvalues and the appearance of the delayed eigenvalues. Using a simple perturbation method, this shifting is calculated and the delayed eigenvalues are predicted with good accuracy.
Abstract:
This thesis outlines the construction of several types of structured integrators for incompressible fluids. We first present a vorticity integrator, which is the Hamiltonian counterpart of the existing Lagrangian-based fluid integrator. We next present a model-reduced variational Eulerian integrator for incompressible fluids, which combines the efficiency gains of dimension reduction, the qualitative robustness to coarse spatial and temporal resolutions of geometric integrators, and the simplicity of homogenized boundary conditions on regular grids to deal with arbitrarily-shaped domains with sub-grid accuracy.
Both these numerical methods involve approximating the Lie group of volume-preserving diffeomorphisms by a finite-dimensional Lie group and then restricting the resulting variational principle by means of a non-holonomic constraint. Advantages and limitations of this discretization method will be outlined. It will be seen that these derivation techniques are unable to yield symplectic integrators, but that energy conservation is easily obtained, as is a discretized version of Kelvin's circulation theorem.
Finally, we outline the basis of a spectral discrete exterior calculus, which may be a useful element in producing structured numerical methods for fluids in the future.
A model for energy and morphology of crystalline grain boundaries with arbitrary geometric character
Abstract:
It has been well-established that interfaces in crystalline materials are key players in the mechanics of a variety of mesoscopic processes such as solidification, recrystallization, grain boundary migration, and severe plastic deformation. In particular, interfaces with complex morphologies have been observed to play a crucial role in many micromechanical phenomena such as grain boundary migration, stability, and twinning. Interfaces are a unique type of material defect in that they demonstrate a breadth of behavior and characteristics eluding simplified descriptions. Indeed, modeling the complex and diverse behavior of interfaces is still an active area of research, and to the author's knowledge there are as yet no predictive models for the energy and morphology of interfaces with arbitrary character. The aim of this thesis is to develop a novel model for interface energy and morphology that i) provides accurate results (especially regarding "energy cusp" locations) for interfaces with arbitrary character, ii) depends on a small set of material parameters, and iii) is fast enough to incorporate into large scale simulations.
In the first half of the work, a model for planar, immiscible grain boundaries is formulated. By building on the assumption that anisotropic grain boundary energetics are dominated by geometry and crystallography, a construction on lattice density functions (referred to as "covariance") is introduced that provides a geometric measure of the order of an interface. Covariance forms the basis for a fully general model of the energy of a planar interface, and it is demonstrated by comparison with a wide selection of molecular dynamics energy data for FCC and BCC tilt and twist boundaries that the model accurately reproduces the energy landscape using only three material parameters. It is observed that the planar constraint on the model is, in some cases, over-restrictive; this motivates an extension of the model.
In the second half of the work, the theory of faceting in interfaces is developed and applied to the planar interface model for grain boundaries. Building on previous work in mathematics and materials science, an algorithm is formulated that returns the minimal possible energy attainable by relaxation and the corresponding relaxed morphology for a given planar energy model. It is shown that the relaxation significantly improves the energy results of the planar covariance model for FCC and BCC tilt and twist boundaries. The ability of the model to accurately predict faceting patterns is demonstrated by comparison to molecular dynamics energy data and experimental morphological observation for asymmetric tilt grain boundaries. It is also demonstrated that by varying the temperature in the planar covariance model, it is possible to reproduce a priori the experimentally observed effects of temperature on facet formation.
Finally, with the range and scope of the covariance and relaxation models having been demonstrated by means of extensive MD and experimental comparison, future applications and implementations of the model are explored.
Abstract:
I. It was not possible to produce anti-tetracycline antibody in laboratory animals by any of the methods tried. Tetracycline-protein conjugates were prepared and characterized. It was shown that previous reports of the detection of anti-tetracycline antibody by in vitro methods were in error. Tetracycline precipitates non-specifically with serum proteins. The anaphylactic reaction reported was the result of misinterpretation, since the observations were inconsistent with the known mechanism of anaphylaxis and the supposed antibody would not sensitize guinea pig skin. The hemagglutination reaction was not reproducible and was extremely sensitive to minute amounts of microbial contamination. Both free tetracyclines and the conjugates were found to be poor antigens.
II. Anti-aspiryl antibodies were produced in rabbits using 3 protein carriers. The method of inhibition of precipitation was used to determine the specificity of the antibody produced. ε-Aminocaproate was found to be the most effective inhibitor of the haptens tested, indicating that the combining hapten of the protein is ε-aspiryl-lysyl. Free aspirin and salicylates were poor inhibitors and did not combine with the antibody to a significant extent. The ortho group was found to participate in the binding to antibody. The average binding constants were measured.
Normal rabbit serum was acetylated by aspirin under in vitro conditions, which are similar to physiological conditions. The extent of acetylation was determined by immunochemical tests. The acetylated serum proteins were shown to be potent antigens in rabbits. It was also shown that aspiryl proteins were partially acetylated. The relation of these results to human aspirin intolerance is discussed.
III. Aspirin did not induce contact sensitivity in guinea pigs when they were immunized by techniques that induce sensitivity with other reactive compounds. The acetylation mechanism is not relevant to this type of hypersensitivity, since sensitivity is not produced by potent acetylating agents like acetyl chloride and acetic anhydride. Aspiryl chloride, a totally artificial system, is a good sensitizer. Its specificity was examined.
IV. Protein conjugates were prepared with p-aminosalicylic acid and various carriers using azo, carbodiimide and mixed anhydride coupling. These antigens were injected into rabbits and guinea pigs and no anti-hapten IgG or IgM response was obtained. Delayed hypersensitivity was produced in guinea pigs by immunization with the conjugates, and its specificity was determined. Guinea pigs were not sensitized by either injections or topical application of p-aminosalicylic acid or p-aminosalicylate.
Abstract:
I. Crossing transformations constitute a group of permutations under which the scattering amplitude is invariant. Using Mandelstam's analyticity, we decompose the amplitude into irreducible representations of this group. The usual quantum numbers, such as isospin or SU(3), are "crossing-invariant". Thus no higher symmetry is generated by crossing itself. However, elimination of certain quantum numbers in intermediate states is not crossing-invariant, and higher symmetries have to be introduced to make it possible. The current literature on exchange degeneracy is a manifestation of this statement. To exemplify application of our analysis, we show how, starting with SU(3) invariance, one can use crossing and the absence of exotic channels to derive the quark-model picture of the tensor nonet. No detailed dynamical input is used.
II. A dispersion relation calculation of the real parts of forward π±p and K±p scattering amplitudes is carried out under the assumption of constant total cross sections in the Serpukhov energy range. Comparison with existing experimental results as well as predictions for future high energy experiments are presented and discussed. Electromagnetic effects are found to be too small to account for the expected difference between the π-p and π+p total cross sections at higher energies.
Abstract:
STEEL, the Caltech-created nonlinear large displacement analysis software, is currently used by a large number of researchers at Caltech. However, due to its complexity and lack of visualization tools (such as pre- and post-processing capabilities), rapid creation and analysis of models using this software was difficult. SteelConverter was created as a means to facilitate model creation through the use of the industry-standard finite element solver ETABS. This software allows users to create models in ETABS and intelligently convert model information such as geometry, loading, releases, fixity, etc., into a format that STEEL understands. Models that would take several days to create and verify now take several hours or less. The productivity of the researcher, as well as the level of confidence in the model being analyzed, is greatly increased.
It has always been a major goal of Caltech to spread the knowledge created here to other universities. However, due to the complexity of STEEL it was difficult for researchers or engineers from other universities to conduct analyses. While SteelConverter did help researchers at Caltech improve their research, sending SteelConverter and its documentation to other universities was less than ideal. Issues of version control, individual computer requirements, and the difficulty of releasing updates made a more centralized solution preferred. This is where the idea for Caltech VirtualShaker was born. Through the creation of a centralized website where users could log in, submit, analyze, and process models in the cloud, all of the major concerns associated with the utilization of SteelConverter were eliminated. Caltech VirtualShaker allows users to create profiles where defaults associated with their most commonly run models are saved, and allows them to submit multiple jobs to an online virtual server to be analyzed and post-processed. The creation of this website not only allowed for more rapid distribution of this tool, but also created a means for engineers and researchers with no access to powerful computer clusters to run computationally intensive analyses without the excessive cost of building and maintaining a computer cluster.
In order to increase confidence in the use of STEEL as an analysis system, as well as verify the conversion tools, a series of comparisons were done between STEEL and ETABS. Six models of increasing complexity, ranging from a cantilever column to a twenty-story moment frame, were analyzed to determine the ability of STEEL to accurately calculate basic model properties such as elastic stiffness and damping through a free vibration analysis, as well as more complex structural properties such as overall structural capacity through a pushover analysis. These analyses showed a very strong agreement between the two software packages on every aspect of each analysis. However, these analyses also showed the ability of the STEEL analysis algorithm to converge at significantly larger drifts than ETABS when using the more computationally expensive and structurally realistic fiber hinges. Following the ETABS analysis, it was decided to repeat the comparisons in a software package more capable of conducting highly nonlinear analysis, called Perform. These analyses again showed a very strong agreement between the two software packages in every aspect of each analysis through instability. However, due to some limitations in Perform, free vibration analyses for the three-story one-bay chevron brace frame, two-bay chevron brace frame, and twenty-story moment frame could not be conducted. With the current trend towards ultimate capacity analysis, the ability to use fiber-based models allows engineers to gain a better understanding of a building's behavior under these extreme load scenarios.
Following this, a final study was done on Hall's U20 structure [1], where the structure was analyzed in all three software packages and their results compared. The pushover curves from each package were compared and the differences caused by variations in software implementation explained. From this, conclusions can be drawn on the effectiveness of each analysis tool when attempting to analyze structures through the point of geometric instability. The analyses show that while ETABS was capable of accurately determining the elastic stiffness of the model, following the onset of inelastic behavior the analysis tool failed to converge. However, for the small number of time steps during which the ETABS analysis was converging, its results exactly matched those of STEEL, leading to the conclusion that ETABS is not an appropriate analysis package for analyzing a structure through the point of collapse when using fiber elements throughout the model. The analyses also showed that while Perform was capable of calculating the response of the structure accurately, restrictions in the material model resulted in a pushover curve that did not match that of STEEL exactly, particularly post-collapse. However, such problems could be alleviated by choosing a simpler material model.