857 results for Alternative Formulations
Abstract:
The conversion between representations of angular momentum in spherical polar and Cartesian form is discussed.
Abstract:
When studying genotype × environment interaction in multi-environment trials, plant breeders and geneticists often consider one of the effects, environments or genotypes, to be fixed and the other to be random. However, there are two main formulations for variance component estimation for the mixed model situation, referred to as the unconstrained-parameters (UP) and constrained-parameters (CP) formulations. These formulations give different estimates of genetic correlation and heritability as well as different tests of significance for the random effects factor. The definition of main effects and interactions and the consequences of such definitions should be clearly understood, and the selected formulation should be consistent for both fixed and random effects. A discussion of the practical outcomes of using the two formulations in the analysis of balanced data from multi-environment trials is presented. It is recommended that the CP formulation be used because of the meaning of its parameters and the corresponding variance components. When managed (fixed) environments are considered, users will have more confidence in prediction for them but will not be overconfident in prediction in the target (random) environments. Genetic gain (predicted response to selection in the target environments from the managed environments) is independent of formulation.
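As a minimal sketch of the UP/CP distinction described in this abstract, using notation assumed here rather than taken from the paper (environments treated as fixed, genotypes and the interaction as random):

```latex
% Two-way mixed model for a multi-environment trial (illustrative notation only):
%   y_{ijk}: response of genotype i in environment j, replicate k
%   \tau_j: fixed environment effects; g_i, (g\tau)_{ij}: random genotype and interaction effects
\[
  y_{ijk} \;=\; \mu + \tau_j + g_i + (g\tau)_{ij} + \varepsilon_{ijk}
\]
% UP: g_i \sim N(0,\sigma_g^2) and (g\tau)_{ij} \sim N(0,\sigma_{g\tau}^2), all mutually independent,
%     with no constraints across the levels of the fixed factor.
% CP: the interaction effects additionally satisfy \sum_j (g\tau)_{ij} = 0 for each genotype i,
%     which induces a covariance of -\sigma_{g\tau}^2/(J-1) between interactions of the same
%     genotype in different environments (J = number of environments) and changes what
%     \sigma_g^2 and \sigma_{g\tau}^2 mean, and hence the heritability and genetic-correlation
%     estimates built from them.
```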
Abstract:
Longitudinal data, in which observations are repeatedly made or measured on a temporal basis such as time or age, provide the foundation for the analysis of processes which evolve over time; these can be referred to as growth or trajectory models. One of the traditional ways of looking at growth models is to employ either linear or polynomial functional forms to model trajectory shape, and account for variation around an overall mean trend with the inclusion of random effects or individual variation on the functional shape parameters. The identification of distinct subgroups or sub-classes (latent classes) within these trajectory models which are not based on some pre-existing individual classification provides an important methodology with substantive implications. The identification of subgroups or classes has a wide application in the medical arena where responder/non-responder identification based on distinctly differing trajectories delivers further information for clinical processes. This thesis develops Bayesian statistical models and techniques for the identification of subgroups in the analysis of longitudinal data where the number of time intervals is limited. These models are then applied to a single case study which investigates neuropsychological cognition in early stage breast cancer patients undergoing adjuvant chemotherapy treatment, from the Cognition in Breast Cancer Study undertaken by the Wesley Research Institute of Brisbane, Queensland. Alternative formulations to the linear or polynomial approach are taken which use piecewise linear models with a single turning point, change-point or knot at a known time point, and latent basis models for the non-linear trajectories found for the verbal memory domain of cognitive function before and after chemotherapy treatment. Hierarchical Bayesian random effects models are used as a starting point for the latent class modelling process and are extended with the incorporation of covariates in the trajectory profiles and as predictors of class membership. The Bayesian latent basis models enable the degree of recovery post-chemotherapy to be estimated for short and long-term follow-up occasions, and the distinct class trajectories assist in the identification of breast cancer patients who may be at risk of long-term verbal memory impairment.
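For illustration, a piecewise linear random-effects trajectory with a known change point, extended to latent classes, is commonly written as below; the symbols are assumptions for this sketch, not the thesis's own notation:

```latex
% Piecewise linear trajectory with a single known change point \kappa (illustrative notation):
%   y_{ij}: score of subject i at occasion t_{ij};  (x)_+ = \max(x, 0)
\[
  y_{ij} = \beta_{0i} + \beta_{1i}\, t_{ij} + \beta_{2i}\,(t_{ij} - \kappa)_+ + \varepsilon_{ij},
  \qquad \varepsilon_{ij} \sim N(0, \sigma^2)
\]
% Subject-level coefficients carry random effects; in a latent-class extension their means
% depend on an unobserved class label c_i, whose membership probabilities may in turn be
% modelled with covariates:
\[
  (\beta_{0i}, \beta_{1i}, \beta_{2i})^{\top} \sim N\!\big(\boldsymbol{\mu}_{c_i}, \boldsymbol{\Sigma}\big),
  \qquad \Pr(c_i = k) = \pi_k
\]
```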
Abstract:
An integrated approach to energy planning, when applied to large hydroelectric projects, requires that the energy-opportunity cost of the land submerged under the reservoir be incorporated into the planning methodology. Biomass energy lost from the submerged land has to be compared to the electrical energy generated, for which we develop four alternative formulations of the net-energy function. The design problem is posed as an LP problem and is solved for two sites in India. Our results show that the proposed designs may not be viable in net-energy terms, whereas a marginal reduction in the generation capacity could lead to an optimal design that gives substantial savings in the submerged area. Allowing seasonal variations in the hydroelectric generation capacity also reduces the reservoir size. A mixed hydro-wood generation system is then examined and is found to be viable.
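The abstract does not give the actual objective or constraints, so the sketch below only illustrates the kind of linear-programming trade-off described, with all coefficients and variable names being hypothetical placeholders:

```python
# Illustrative only: a toy LP in the spirit of the net-energy design problem above.
# All coefficients are hypothetical placeholders, not values from the paper.
from scipy.optimize import linprog

E_ELEC_PER_MW = 5.0e3   # MWh/yr of electrical energy per MW of installed capacity (assumed)
E_BIO_PER_KM2 = 1.2e3   # MWh/yr of biomass energy forgone per km^2 submerged (assumed)
AREA_PER_MW   = 0.8     # km^2 of reservoir needed per MW of capacity (assumed)
AREA_FIXED    = 10.0    # km^2 submerged regardless of capacity (assumed)
CAP_MAX       = 400.0   # MW, upper bound on installed capacity (assumed)

# Decision variables: x = [capacity_MW, submerged_area_km2]
c = [-E_ELEC_PER_MW, E_BIO_PER_KM2]          # minimise -(net energy)
A_ub = [[AREA_PER_MW, -1.0]]                 # AREA_PER_MW*capacity - area <= -AREA_FIXED
b_ub = [-AREA_FIXED]                         # i.e. area >= AREA_FIXED + AREA_PER_MW*capacity
bounds = [(0.0, CAP_MAX), (0.0, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
capacity, area = res.x
print(f"capacity = {capacity:.1f} MW, area = {area:.1f} km^2, net energy = {-res.fun:.0f} MWh/yr")
```

In this toy setting, trimming capacity only pays off once the marginal biomass loss per km² exceeds the marginal electrical gain; the paper's result that a marginal capacity reduction yields large area savings corresponds to a steeper capacity-area relationship than the linear one assumed here.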
Abstract:
We have sought to determine the nature of the free-radical precursors to ring-opened hydrocarbon 5 and ring-closed hydrocarbon 6. Reasonable alternative formulations involve the postulation of hydrogen abstraction (a) by a pair of rapidly equilibrating classical radicals (the ring-opened allylcarbinyl-type radical 3 and the ring-closed cyclopropylcarbinyl-type 4), or (b) by a nonclassical radical such as homoallylic radical 7.
[Figure not reproduced.]
Entry to the radical system is gained via degassed thermal decomposition of peresters having the ring-opened and the ring-closed structures. The ratio of 6:5 is essentially independent of the hydrogen donor concentration for decomposition of the former at 125° in the presence of triethyltin hydride. A deuterium labeling study showed that the α and β methylene groups in 3 (or the equivalent) are rapidly interchanged under these conditions.
Existence of two (or more) product-forming intermediates is indicated (a) by dependence of the ratio 6:5 on the tin hydride concentration for decomposition of the ring-closed perester at 10 and 35°, and (b) by formation of cage products having largely or wholly the structure (ring-opened or ring-closed) of the starting perester.
Relative rates of hydrogen abstraction by 3 could be inferred by comparison of ratios of rate constants for hydrogen abstraction and ortho-ring cyclization:
[Figure not reproduced.]
At 100° values of ka/kr are 0.14 for hydrogen abstraction from 1,4-cyclohexadiene and 7 for abstraction from triethyltin hydride. The ratio 6:5 at the same temperature is ~0.0035 for hydrogen abstraction from 1,4-cyclohexadiene, ~0.078 for abstraction from the tin hydride, and ≥ 5 for abstraction from cyclohexadienyl radicals. These data indicate that abstraction of hydrogen from triethyltin hydride is more rapid than from 1,4-cyclohexadiene by a factor of ~1000 for 4, but only ~50 for 3.
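One way to recover the quoted factors of ~50 and ~1000 from the numbers above, assuming that 3 and 4 are rapidly equilibrated so that the 6:5 ratio is proportional to the relative abstraction rates of 4 and 3 (an editorial reconstruction, not a derivation taken from the paper):

```latex
% For 3, the competition ratios k_a/k_r give the donor preference directly:
\[
  \left(\frac{k_{\mathrm{SnH}}}{k_{\mathrm{CHD}}}\right)_{3} \approx \frac{7}{0.14} \approx 50
\]
% The change in the 6:5 ratio on switching donors measures how much more the switch
% favours abstraction by 4 than by 3:
\[
  \frac{(6\!:\!5)_{\mathrm{SnH}}}{(6\!:\!5)_{\mathrm{CHD}}} \approx \frac{0.078}{0.0035} \approx 22,
  \qquad
  \left(\frac{k_{\mathrm{SnH}}}{k_{\mathrm{CHD}}}\right)_{4} \approx 22 \times 50 \approx 1.1\times 10^{3}
\]
```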
Measurements of product ratios at several temperatures allowed the construction of an approximate energy-level scheme. A major inference is that isomerization of 3 to 4 is exothermic by 8 ± 3 kcal/mole, in good agreement with expectations based on bond dissociation energies. Absolute rate-constant estimates are also given.
The results are nicely compatible with a classical-radical mechanism, but attempted interpretation of the product ratios in terms of a nonclassical radical precursor, even for ratios formed from equilibrated radical intermediates, leads, it is argued, to serious difficulties.
The roles played by hydrogen abstraction from 1,4-cyclohexadiene and from the derived cyclohexadienyl radicals were probed by fitting observed ratios of 6:5 and 5:10 in a least-squares sense to expressions derived for a complex mechanistic scheme. Some 30 to 40 measurements on each product ratio, obtained under a variety of experimental conditions, could be fit with an average deviation of ~6%. Significant systematic deviations were found, but these could largely be redressed by assuming (a) that the rate constant for reaction of 4 with cyclohexadienyl radical is inversely proportional to the viscosity of the medium (i.e., is diffusion-controlled), and (b) that ka/kr for hydrogen abstraction from 1,4-cyclohexadiene depends slightly on the composition of the medium. An average deviation of 4.4% was thereby attained.
Degassed thermal decomposition of the ring-opened perester in the presence of triethyltin hydride occurs primarily by attack of triethyltin radicals on the perester, presumably at the –O–O– bond, even at 0.01 M tin hydride at 100 and 125°. Tin ester and tin ether are apparently formed in closely similar amounts under these conditions, but the tin ester predominates at room temperature in the companion air-induced decomposition, indicating that attack on the perester to give the tin ether requires an activation energy approximately 5 kcal/mole in excess of that for formation of the tin ester.
Abstract:
A biomechanical model of the human oculomotor plant kinematics in 3-D as a function of muscle length changes is presented. It can represent a range of alternative interpretations of the data as a function of one parameter. The model is free from such deficits as singularities and the nesting of axes found in alternative formulations such as the spherical wrist (Paul, 1981). The equations of motion are defined on a quaternion-based representation of eye rotations and are compact and computationally efficient.
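A minimal sketch of the kind of quaternion rotation arithmetic such a representation relies on, avoiding the nesting of gimbal-like axes; this is a generic illustration, not the authors' oculomotor-plant equations:

```python
# Generic quaternion utilities for composing 3-D rotations without nested axes.
# Sketch only; not the authors' oculomotor-plant formulation.
import numpy as np

def quat_from_axis_angle(axis, angle_rad):
    """Unit quaternion (w, x, y, z) for a rotation of angle_rad about axis."""
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    half = 0.5 * angle_rad
    return np.concatenate(([np.cos(half)], np.sin(half) * axis))

def quat_multiply(q1, q2):
    """Hamilton product q1*q2; rotating by the product applies q2 first, then q1."""
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def rotate_vector(q, v):
    """Rotate 3-vector v by unit quaternion q, via q * v * q^-1."""
    qv = np.concatenate(([0.0], v))
    q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
    return quat_multiply(quat_multiply(q, qv), q_conj)[1:]

# Example: a 20-degree horizontal rotation followed by a 10-degree torsional rotation,
# applied to an assumed primary gaze direction along +z.
horizontal = quat_from_axis_angle([0, 1, 0], np.radians(20))
torsional  = quat_from_axis_angle([0, 0, 1], np.radians(10))
eye = quat_multiply(torsional, horizontal)
print(rotate_vector(eye, np.array([0.0, 0.0, 1.0])))
```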
Abstract:
We provide an axiomatization of Yitzhaki’s index of individual deprivation. Our result differs from an earlier characterization due to Ebert and Moyes in the way the reference group of an individual is represented in the model. Ebert and Moyes require the index to be defined for all logically possible reference groups, whereas we employ the standard definition of the reference group as the set of all agents in a society. As a consequence of this modification, some of the axioms used by Ebert and Moyes can no longer be applied and we provide alternative formulations.
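For reference, Yitzhaki's index of individual deprivation is commonly written as below, with the whole society serving as individual i's reference group; the notation here is assumed rather than taken from the paper:

```latex
% Yitzhaki's individual deprivation index (standard form; notation assumed here):
%   y = (y_1, ..., y_n) is the income distribution of the n-agent society.
\[
  D_i(y) \;=\; \frac{1}{n} \sum_{j:\, y_j > y_i} \big( y_j - y_i \big)
\]
```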
Abstract:
The concept of “working” memory is traceable back to nineteenth century theorists (Baldwin, 1894; James, 1890) but the term itself was not used until the mid-twentieth century (Miller, Galanter & Pribram, 1960). A variety of different explanatory constructs have since evolved which all make use of the working memory label (Miyake & Shah, 1999). This history is briefly reviewed and alternative formulations of working memory (as language-processor, executive attention, and global workspace) are considered as potential mechanisms for cognitive change within and between individuals and between species. A means, derived from the literature on human problem-solving (Newell & Simon, 1972), of tracing memory and computational demands across a single task is described and applied to two specific examples of tool-use by chimpanzees and early hominids. The examples show how specific proposals for necessary and/or sufficient computational and memory requirements can be more rigorously assessed on a task by task basis. General difficulties in connecting cognitive theories (arising from the observed capabilities of individuals deprived of material support) with archaeological data (primarily remnants of material culture) are discussed.
Determining factors of bond and anchorage in the strengthening of masonry walls with fiber elements
Abstract:
The rehabilitation of built heritage is increasingly frequent, covering both the repair of deteriorated works and the adaptation of existing works to new uses or loads. The strengthening of masonry has been chosen as the subject of study because masonry accounts for a large share of the heritage stock, in both building and civil works (load-bearing wall systems, or main framed structures of steel or concrete in which masonry is used as enclosure or partition with self-supporting elements). When repairing or strengthening a structure it is important to analyse its deficiencies, the mechanical characterisation of the element and the present or potential loads; section 1.3 of this work describes rehabilitation actions for cases in which structural strengthening is not what is required, as well as the most common traditional techniques for strengthening masonry, which are usually classified as external or internal reinforcement. In recent years the FRP strengthening system has been adopted, a technology that originated in the strengthening of concrete, both of flexural members and of columns. These reinforcements may consist of laminates bonded to the masonry substrate (SM) or of bars placed in linear grooves (NSM). The choice of one system or the other depends on the strengthening need and the predominant load, on access for installation and on the visual-impact requirements. One of the main limitations of FRP strengthening systems is that the strength of the reinforcing material is usually not mobilised, because failure occurs first at the interface with the substrate, with the consequent debonding or delamination; such failures may have a local origin and propagate from a discontinuity, which is why careful treatment of the substrate surface is required, or they may result from an anchorage length insufficient to transfer the stresses at the interface. A mechanical characterisation of the element to be strengthened is considered essential. For this reason, chapter 2 of this work presents methods for calculating the masonry substrate taken from different codes, together with an alternative formulation that takes historic masonry into account, since its characterisation is usually more complicated owing to the heterogeneity and lack of classification of its materials, especially the mortars. Once the strength parameters of the masonry substrate are known, the reinforcement can be designed; to date there is little regulation on FRP strengthening of masonry walls, consisting of a protocol proposed by ACI 440 7R-10 that includes no improvement for anchorage type and gives very conservative values for the efficiency of the reinforcement. As noted above, the main problem of FRP strengthening of walls is the failure mode, which prevents optimal use of the material properties. Studies with different anchorage methods for these reinforcements have recently been carried out, so that the ultimate capacity is increased and the substrate remains tied to the reinforcement after failure. Along with anchorage systems based on extending the reinforcement (for both laminates and bars), anchorages with shear keys, embedded bars, and mechanical anchors of steel or even of FRP have been tested. Chapter 4 of this text summarises some of the experimental campaigns carried out between 2000 and 2013 with different anchorages.
The fundamental parameters for measuring anchorage efficiency are examined, namely the failure mode, the increase in strength, and the displacements, which allow the ductility of the reinforcement to be assessed; these data are analysed as a function of the type of reinforcement (including fiber type and installation system) and the type of anchorage. There are also design parameters of the anchorages themselves. For embedded bars these are the bar diameter and material, the surface finish, the dimensions and shape of the groove, and the type of adhesive. For FRP spike anchors the characterisation includes: fiber type, manufacturing process, anchor diameter, fan splay radius, longitudinal anchor spacing, number of anchor rows, number of reinforcement plies, and bonded length beyond the anchor; systematising the results of the authors of the reported campaigns is complex, since some of these parameters vary simultaneously and prevent comparison. Chapter 5 presents the tests used in these anchorage campaigns, distinguishing between mode I tests (direct tension or pull-out), which serve for NSM systems or for quantifying the individual strength of spike anchors, and mode II tests (single shear), which more closely resemble the working conditions of the reinforcement. This text has been written with the aim of opening a possible line of research on spike anchors, which, together with embedded-bar systems, are considered to offer the greatest design versatility for FRP strengthening, while their efficiency remains difficult to isolate because of the number of design parameters.
Rehabilitation of built heritage is becoming increasingly frequent, including repair of damaged works and conditioning for a new use or higher loads. In this work the study of masonry wall reinforcement has been considered, as most buildings and civil works have load-bearing walls or at least infilled masonry walls in concrete and steel structures. Before repairing or reinforcing a structure, it is important to analyse its deficiencies, its mechanical properties and both existing and potential loads; chapter 1, section 4 includes the most common rehabilitation methods when structural reinforcement is not needed, as well as traditional reinforcement techniques (internal and external reinforcement). In the last years the FRP reinforcement system has been adopted for masonry walls. FRP materials for reinforcement were initially used for concrete pillars and beams. FRP reinforcement includes two main techniques: surface mounted laminates (SM) and near surface mounted bars (NSM); one or the other may be more appropriate according to the need for reinforcement and main load, accessibility for installation and aesthetic requirements. One of the main constraints of FRP systems is not reaching the maximum load of the material due to premature debonding failure, which can be caused by surface irregularities, so surface preparation is necessary. But debonding (or delamination for SM techniques) can also be a consequence of insufficient anchorage length or stress concentration. In order to provide an accurate mechanical characterisation of walls, chapter 2 summarises the calculation methods included in guidelines as well as alternative formulations for old masonry walls, as historic wall properties are more complicated to obtain due to heterogeneity and data gaps (especially for mortars).
The next step is designing the reinforcement system; to date there are scarce regulations for wall reinforcement with FRP: ACI 440 7R-10 includes a protocol that does not consider the potential benefits provided by anchorage devices and gives conservative values for reinforcement efficiency. As noted above, the main problem of FRP reinforcement of masonry walls is the failure mode. Recently, some authors have performed studies with different anchorage systems, finding that these systems are able to delay or prevent debonding. Studies include the following anchorage systems: overlap, embedded bars, shear keys, shear restraint and fiber anchors. Chapter 4 briefly describes several experimental works between the years 2000 and 2013 concerning different anchorage systems. The main parameters that measure anchorage efficiency are: failure mode, failure-load increase, and displacements (in order to evaluate the ductility of the system); all these data strongly depend on the reinforcement system, the FRP fibers, the anchorage system, and also on the specific anchorage parameters. Specific anchorage parameters are a function of the anchorage system used. The embedded bar system has design variables that can be identified as: bar diameter and material, surface finish, groove dimensions, and adhesive. For FRP anchorages (spikes) a complete design characterisation should include: type of fiber, manufacturing process, diameter, fan orientation, anchor splay width, anchor longitudinal spacing and number of rows, number of FRP sheet plies, and bonded length beyond the anchorage devices; the parameters considered differ from one author to another, so the comparison of results is quite complicated. Chapter 5 includes the most common tests used in experimental investigations of bond behaviour and anchorage characterisation: direct shear tests (with single-shear and double-shear variants), pull-out tests and bending tests. Each of them may be used according to the data needed. The purpose of this text is to promote further investigation of anchor spikes, accepting that both FRP anchors and embedded bars are the most versatile anchorage systems for FRP reinforcement and considering that to date their efficiency cannot be evaluated, as there are too many design uncertainties.
Abstract:
Targeting of drugs and therapies locally to the esophagus is an important objective in the development of new and more effective dosage forms. Therapies that are retained within the oral cavity for both local and systemic action have been utilized for many years, although delivery to the esophagus has been far less reported. Esophageal disease states, including infections, motility disorders, gastric reflux, and cancers, would all benefit from localized drug delivery. Therefore, research in this area provides significant opportunities. The key limitation to effective drug delivery within the esophagus is sufficient retention at this site coupled with activity profiles to correspond with these retention times; therefore, a suitable formulation needs to provide the drug in a ready-to-work form at the site of action during the rapid transit through this organ. A successfully designed esophageal-targeted system can overcome these obstacles. This review presents a range of dosage form approaches for targeting the esophagus, including bioadhesive liquids and orally retained lozenges, chewing gums, gels, and films, as well as endoscopically delivered therapeutics. The techniques used to measure efficacy both in vitro and in vivo are also discussed. Drug delivery is a growing driver within the pharmaceutical industry and offers benefits both in terms of clinical efficacy, as well as in market positioning, as a means of extending a drug's exclusivity and profitability. Emerging systems that can be used to target the esophagus are reported within this review, as well as the potential of alternative formulations that offer benefits in this exciting area.
Abstract:
Investigations into the modelling techniques that depict the transport of discrete phases (gas bubbles or solid particles) and model biochemical reactions in a bubble column reactor are discussed here. The mixture model was used to calculate gas-liquid, solid-liquid and gas-liquid-solid interactions. Multiphase flow is a difficult phenomenon to capture, particularly in bubble columns where the major driving force is caused by the injection of gas bubbles. The gas bubbles cause a large density difference to occur that results in transient multi-dimensional fluid motion. Standard design procedures do not account for the transient motion, due to the simplifying assumptions of steady plug flow. Computational fluid dynamics (CFD) can assist in expanding the understanding of complex flows in bubble columns by characterising the flow phenomena for many geometrical configurations. Therefore, CFD has a role in the education of chemical and biochemical engineers, providing examples of flow phenomena that many engineers may not experience, even through experimentation. The performance of the mixture model was investigated for three domains (plane, rectangular and cylindrical) and three flow models (laminar, k-ε turbulence and the Reynolds stresses). This investigation raised many questions about how gas-liquid interactions are captured numerically. To answer some of these questions, the analogy between thermal convection in a cavity and gas-liquid flow in bubble columns was invoked. This involved modelling the buoyant motion of air in a narrow cavity for a number of turbulence schemes. The difference in density was caused by a temperature gradient that acted across the width of the cavity. Multiple vortices were obtained when the Reynolds stresses were utilised with the addition of a basic flow profile after each time step. To implement the three-phase models, an alternative mixture model was developed and compared against a commercially available mixture model for three turbulence schemes. The scheme where just the Reynolds stresses model was employed predicted the transient motion of the fluids quite well for both mixture models. Solid-liquid and then alternative formulations of the gas-liquid-solid model were compared against one another. The alternative form of the mixture model was found to perform particularly well for both gas and solid phase transport when calculating two and three-phase flow. The improvement in the solutions obtained was a result of the inclusion of the Reynolds stresses model and differences in the mixture models employed. The differences between the alternative mixture models were found in the volume fraction equation (flux and deviatoric stress tensor terms) and the viscosity formulation for the mixture phase.
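A generic form of the mixture-model phase transport equation of the kind being compared, written in standard drift-flux notation; this is a textbook form assumed for illustration, not necessarily the exact terms of either implementation:

```latex
% Generic mixture-model transport of the dispersed-phase volume fraction:
%   \alpha_k, \rho_k  : volume fraction and density of phase k
%   \mathbf{u}_m      : mass-averaged mixture velocity
%   \mathbf{u}_{dr,k} : drift velocity of phase k relative to the mixture
\[
  \frac{\partial (\alpha_k \rho_k)}{\partial t}
  + \nabla \cdot \big( \alpha_k \rho_k \mathbf{u}_m \big)
  = - \nabla \cdot \big( \alpha_k \rho_k \mathbf{u}_{dr,k} \big)
\]
% The abstract notes that the compared mixture models differ precisely here: in the
% volume-fraction equation (its flux and deviatoric stress terms) and in the mixture
% viscosity formulation.
```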
Abstract:
Mechanistic models used for prediction should be parsimonious, as models which are over-parameterised may have poor predictive performance. Determining whether a model is parsimonious requires comparisons with alternative model formulations with differing levels of complexity. However, creating alternative formulations for large mechanistic models is often problematic, and usually time-consuming. Consequently, few are ever investigated. In this paper, we present an approach which rapidly generates reduced model formulations by replacing a model’s variables with constants. These reduced alternatives can be compared to the original model, using data based model selection criteria, to assist in the identification of potentially unnecessary model complexity, and thereby inform reformulation of the model. To illustrate the approach, we present its application to a published radiocaesium plant-uptake model, which predicts uptake on the basis of soil characteristics (e.g. pH, organic matter content, clay content). A total of 1024 reduced model formulations were generated, and ranked according to five model selection criteria: Residual Sum of Squares (RSS), AICc, BIC, MDL and ICOMP. The lowest scores for RSS and AICc occurred for the same reduced model in which pH dependent model components were replaced. The lowest scores for BIC, MDL and ICOMP occurred for a further reduced model in which model components related to the distinction between adsorption on clay and organic surfaces were replaced. Both these reduced models had a lower RSS for the parameterisation dataset than the original model. As a test of their predictive performance, the original model and the two reduced models outlined above were used to predict an independent dataset. The reduced models have lower prediction sums of squares than the original model, suggesting that the latter may be overfitted. The approach presented has the potential to inform model development by rapidly creating a class of alternative model formulations, which can be compared.
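A sketch of the reduction strategy this abstract describes: every subset of a model's variables is replaced by a constant and the resulting formulations are ranked by information criteria. The toy surrogate model and variable names below are hypothetical placeholders, not the radiocaesium plant-uptake model from the paper:

```python
# Enumerate reduced formulations by replacing variables with constants, then rank them.
from itertools import combinations
import numpy as np

rng = np.random.default_rng(0)
VARIABLES = ["pH", "organic_matter", "clay", "exch_K"]          # assumed driver names

# Toy calibration data: uptake depends on the drivers through a simple linear surrogate.
X = rng.uniform(0.0, 1.0, size=(60, len(VARIABLES)))
y = X @ np.array([1.5, -0.8, 0.4, 0.05]) + rng.normal(0.0, 0.1, size=60)

def fit_reduced(X, y, replaced):
    """Hold the columns in `replaced` at their mean (a fitted constant) and refit."""
    Xr = X.copy()
    for name in replaced:
        Xr[:, VARIABLES.index(name)] = Xr[:, VARIABLES.index(name)].mean()
    design = np.column_stack([np.ones(len(y)), Xr])
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    rss = float(np.sum((y - design @ beta) ** 2))
    n_params = design.shape[1] - len(replaced)   # constant columns absorb into the intercept
    return rss, n_params

def aicc(rss, n, k):
    return n * np.log(rss / n) + 2 * k + 2 * k * (k + 1) / (n - k - 1)

def bic(rss, n, k):
    return n * np.log(rss / n) + k * np.log(n)

results = []
for r in range(len(VARIABLES) + 1):
    for replaced in combinations(VARIABLES, r):  # 2^4 formulations here; 2^10 = 1024 in the paper
        rss, k = fit_reduced(X, y, replaced)
        results.append((replaced, rss, aicc(rss, len(y), k), bic(rss, len(y), k)))

for replaced, rss, a, b in sorted(results, key=lambda t: t[2])[:5]:
    print(f"replaced={replaced!r:45s} RSS={rss:7.3f} AICc={a:8.2f} BIC={b:8.2f}")
```

As in the paper, the ranking by RSS alone always favours the fullest formulation fitted to the calibration data, whereas AICc and BIC penalise parameters and can prefer reduced formulations whose out-of-sample predictions are better.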