981 results for 2 Experimental-models


Relevance:

40.00%

Publisher:

Abstract:

Affiliation: Louise Potvin: Groupe de recherche interdisciplinaire en santé, Faculté de médecine, Université de Montréal


Interleukin-15 (IL-15) contributes to the development and activation of CD8 T lymphocytes, immune cells that have been implicated in several autoimmune diseases such as multiple sclerosis. Elevated levels of IL-15 have been found in patients with this disease compared with controls, but no study has examined the effects of such elevated levels on CD8 T lymphocytes. The objectives of our study were (1) to characterize the expression of IL-15 by human B lymphocytes and determine its effects on CD8 T-lymphocyte functions, and (2) to evaluate the in vivo expression of IL-15 in mouse models of multiple sclerosis. We established that human B cells increased their expression of IL-15 following stimulation via CD40. Moreover, the effector functions of CD8 T lymphocytes were significantly enhanced in co-cultures with alloreactive B cells expressing IL-15. In mouse models of multiple sclerosis, we detected within the central nervous system both immune cells expressing IL-15 and CD8 T cells expressing the receptor for this cytokine at different stages of the disease. We demonstrated that B cells modulate CD8 T-lymphocyte responses via IL-15, suggesting a role for B cells in the pathogenesis of multiple sclerosis. We also demonstrated the presence of IL-15-expressing cells in the central nervous system in mouse models of this disease.


Decision making is a fundamental computational process in many aspects of animal behaviour. The model most often encountered in studies of decision making is the diffusion model, which has long explained a wide variety of behavioural and neurophysiological data in this field. However, another model, the urgency model, explains the same data just as well, and does so more parsimoniously and with firmer theoretical grounding. In this work we first review the origins and development of the diffusion model and show how it became established as the framework for interpreting most experimental data on decision making. In doing so, we note its strengths in order to compare it objectively and rigorously with alternative models. We then re-examine a number of implicit and explicit assumptions made by the model and highlight some of its shortcomings. This analysis frames our introduction and discussion of the urgency model. Finally, we present an experiment whose methodology dissociates the two models and whose results illustrate the empirical and theoretical limits of the diffusion model while clearly demonstrating the validity of the urgency model. We conclude by discussing the potential contribution of the urgency model to the study of certain brain pathologies, with an emphasis on new research perspectives.
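The contrast between the two model classes can be illustrated with a minimal simulation. This is a sketch under simplified assumptions, not the thesis's actual models or parameters: a diffusion model integrates noisy evidence to a fixed bound, while an urgency-gating model low-pass-filters the momentary evidence and multiplies it by a linearly growing urgency signal.

```python
import random

def diffusion_trial(rng, drift=0.2, noise=1.0, bound=10.0, dt=1.0):
    """Diffusion model: accumulate noisy evidence until a fixed bound is hit."""
    x, t = 0.0, 0.0
    while abs(x) < bound:
        x += drift * dt + noise * rng.gauss(0, 1) * dt ** 0.5
        t += dt
    return (x > 0, t)   # (correct choice?, response time)

def urgency_trial(rng, drift=0.2, noise=1.0, bound=10.0, dt=1.0,
                  tau=5.0, max_t=2000.0):
    """Urgency-gating sketch: low-pass-filtered evidence times a linearly
    growing urgency signal, compared against the same bound."""
    e, t = 0.0, 0.0
    while t < max_t:
        sample = drift + noise * rng.gauss(0, 1)
        e += (sample - e) * dt / tau   # leaky (low-pass) evidence estimate
        t += dt
        if abs(e) * t >= bound:        # urgency grows linearly with time
            return (e > 0, t)
    return (e > 0, t)

rng = random.Random(1)
dm = [diffusion_trial(rng) for _ in range(200)]
ug = [urgency_trial(rng) for _ in range(200)]
```

Both mechanisms produce above-chance accuracy with similar speed-accuracy behaviour, which is why dedicated experimental designs are needed to dissociate them.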


Objectives: to determine whether EGB (Spanish primary school) teachers plan or programme their daily work; to identify the levels or forms of planning; to confirm differences in planning between EGB teachers with and without experience; to understand the elements that characterize and differentiate the planning, teaching and interaction processes of experienced and inexperienced teachers; and to test the validity of combining quantitative and qualitative research techniques. The hypotheses are as follows. H1: EGB teachers in Mallorca plan or programme their teaching in advance. H2: those who plan do not follow a single model (SH 2.1: they use annual and/or termly planning; SH 2.2: they use fortnightly and/or weekly planning). H3: the planning carried out by EGB teachers varies significantly with years of teaching experience (SH 3.1: the type of planning varies; SH 3.2: group elaboration varies; SH 3.3: the purposes attributed to annual and/or termly planning vary; SH 3.4: the use of the educational objectives in the renewed curricula as the basic source for annual and/or termly planning varies; SH 3.5: the elements included in annual and/or termly planning vary; SH 3.6: the basic purpose of fortnightly and/or weekly planning varies; SH 3.7: the elements included in fortnightly and/or weekly planning vary; SH 3.8: daily preparation of teaching varies). H4: the mean achievement of pupils taught by experienced teachers will be higher than that of pupils taught by inexperienced teachers. First sample: 301 EGB teachers from 50 schools selected at random in the 1987-88 school year. Second sample: two groups of 15 subjects selected according to teaching experience and type of school.
Third sample: two fifth-year EGB classes of 30 and 33 pupils, with their respective teachers, one more experienced than the other, one focused on order and the acquisition of study habits and the other not. On the theoretical side, research in didactics is reviewed, distinguishing the process-product, mediational and ecological paradigms; this review confirms the validity of certain conceptual and methodological premises. The elements that make up the curriculum are then considered, and the concretization of the different models in teachers' daily planning is analysed. Quantitative and qualitative instruments are used together. The first sample completed a questionnaire covering the first three hypotheses. The second sample was interviewed about the planning teachers carry out. The third sample underwent non-participant observation of the planning and teaching of a social studies teaching unit, allowing a qualitative analysis of experienced and inexperienced teachers. The pupils then took an achievement test, a seven-item written test, to assess the final hypothesis. The teachers were prompted to reflect in an interview in which the audio recording of their explanation of the teaching unit was played back, so that each teacher could verbalize his or her thinking and explain the decisions taken. Data were processed with the SPSSX package. Confidence intervals were calculated for the first two hypotheses. For SH 3.1, 3.3, 3.5, 3.7 and 3.8, a test comparing several observed distributions was used, with the statistic compared against the chi-squared distribution.
For SH 3.2, 3.4 and 3.6, a test comparing two observed proportions in groups with independent data was used. For the fourth hypothesis, a test comparing two observed means in small independent samples was used, with the contrast based on the Student-Fisher distribution. The first hypothesis is confirmed: 80 per cent of EGB teachers carry out prior programming or planning. The second hypothesis is confirmed, as different planning models exist; most planning is annual and/or termly. This type of programming serves an organizational purpose, is drawn up from the renewed curricula of the MEC and basically includes the objectives for each year and the core content of each subject; 49.5 per cent state that it is drawn up by the teaching team for the same year or cycle. Fortnightly and/or weekly programming has as its main purpose the selection of learning activities for pupils; the aspects included in it vary. The third hypothesis is rejected, as planning does not vary greatly with years of teaching experience. The fourth hypothesis is also rejected, since pupil achievement does not depend on the teachers' experience. Teacher-training programmes urgently need instructional elements that foster reflection on one's own practice through a three-level scheme: providing research, planning and observation tools and techniques; critically examining one's own professional induction as a teacher; and fostering a critical approach in future teachers. On the curricular side, an empirical model can be proposed that adds a new figure, the curriculum designer and evaluator, to energize the theory-practice relationship and enable reflection on concrete classroom situations.
Further descriptive, statistical, experimental and qualitative research on the topic is needed. Methodologically, it is advisable to work in teams and address one problem at a time, and the functional conception of the teacher underlying each curriculum-design model should be understood.
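The two contrasts used for the subhypotheses and for H4 (comparing two observed proportions in independent groups, and two means in small independent samples) can be sketched as follows. The numbers are hypothetical illustrations, not the study's data:

```python
from math import sqrt

def two_proportions_z(x1, n1, x2, n2):
    """z statistic for comparing two observed proportions in independent groups."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                      # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

def two_means_t(m1, s1, n1, m2, s2, n2):
    """Student t for two means in small independent samples (pooled variance)."""
    sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)
    return (m1 - m2) / sqrt(sp2 * (1 / n1 + 1 / n2))

# Hypothetical figures in the spirit of H4: mean test scores of the two classes
# of 30 and 33 pupils; and two planning-habit proportions out of 301 teachers.
t = two_means_t(6.1, 1.2, 30, 5.8, 1.4, 33)
z = two_proportions_z(149, 301, 120, 301)
```

The statistic t would be compared against the Student distribution with n1+n2-2 degrees of freedom, and z against the standard normal, as in the study's contrasts.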


The [2+2+2] cycloaddition reaction forms three carbon-carbon bonds in a single step using alkynes, alkenes, nitriles, carbonyls and other unsaturated reagents as reactants. It is one of the most elegant methods for constructing polycyclic aromatic and heteroaromatic compounds, which have important academic and industrial uses. The thesis is divided into ten chapters and includes six related publications. The first study, based on Wilkinson's catalyst, RhCl(PPh3)3, compares the reaction mechanism of the [2+2+2] cycloaddition of acetylene with that obtained for the model complex RhCl(PH3)3. In an attempt to reduce the computational cost of DFT studies, this project examined replacing the PPh3 ligands with PH3, even though the electronic and steric effects of PPh3 ligands differ significantly from those of PH3. Detailed theoretical calculations were performed to determine the reaction mechanism for the two complexes. Although some differences were detected, modelling PPh3 by PH3 in the catalyst was found to reduce the computational cost significantly while still providing qualitatively acceptable results. On the basis of these results, the model of Wilkinson's catalyst, RhCl(PH3)3, was applied to different [2+2+2] cycloaddition reactions with unsaturated systems studied in the laboratory. Our research group had found that totally closed systems, specifically 15- and 25-membered azamacrocycles, afford benzenic compounds, whereas the 20-membered azamacrocycle (20-MAA) was inactive with Wilkinson's catalyst.
In this study, theoretical calculations allowed the origin of the different reactivity of 20-MAA to be determined: the activation barrier for the oxidative addition of the two alkynes is higher than those obtained for the 15- and 25-membered macrocycles. This barrier was attributed primarily to the interaction energy, the energy released when the two distorted reagents interact in the transition state. The main factor explaining the different reactivity was that 20-MAA has a more stable and more delocalized HOMO in the oxidative addition step. Moreover, the formation of a strained ten-membered ring during the cycloaddition of 20-MAA presents significant steric hindrance. In Chapter 5, an electrochemical study carried out in collaboration with Prof. Anny Jutand (Paris) is presented. This work allowed the main steps of the catalytic cycle of the [2+2+2] cycloaddition between diynes and a monoalkyne to be studied. The first kinetic data for the [2+2+2] cycloaddition catalyzed by Wilkinson's catalyst were obtained, showing that the rate-determining step of the reaction can change depending on the structure of the starting reagents. In [2+2+2] cycloadditions involving two alkynes and one alkene in the same molecule (enediynes), it is well known that oxidative coupling may occur either between the two alkynes, giving the corresponding metallacyclopentadiene, or between one alkyne and the alkene, affording the metallacyclopentene complex. The Wilkinson model was used in DFT calculations to analyze the factors that may influence the reaction mechanism. Cyclic enediynes were found always to prefer oxidative coupling between the two alkyne moieties, whereas acyclic cases have different preferences depending on the linker and the substituents on the alkynes.
Moreover, the Wilkinson model was used to explain the experimental results of Chapter 7, where the [2+2+2] cycloaddition of enediynes was studied while varying the position of the double bond in the starting reagent. Enediynes of the yne-ene-yne type preferred the standard [2+2+2] cycloaddition, whereas yne-yne-ene enediynes underwent β-hydride elimination followed by reductive elimination from Wilkinson's catalyst, giving cyclohexadiene compounds that are isomers of those obtained through the standard [2+2+2] cycloaddition. Finally, the last chapter of the thesis uses DFT calculations to determine the reaction mechanism when the macrocycles are treated with transition metals that are inactive towards the [2+2+2] cycloaddition but are thermally active, leading to new polycyclic compounds; a domino process combining an ene reaction and a Diels-Alder cycloaddition was thus described.
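The barrier decomposition invoked above (a distortion/interaction, or activation-strain, analysis) is simple bookkeeping over computed energies. A schematic sketch with hypothetical energies, not values from the thesis:

```python
def activation_strain(e_ts, e_reactants, e_distorted):
    """Activation-strain decomposition: dE‡ = dE_strain + dE_int.
    e_ts: energy of the transition state.
    e_reactants: energies of the relaxed, separated reagents.
    e_distorted: energies of each fragment frozen in its TS geometry.
    The strain is the cost of deforming the reagents; the interaction is
    the energy released when the deformed fragments interact."""
    barrier = e_ts - sum(e_reactants)
    strain = sum(e_distorted) - sum(e_reactants)
    return barrier, strain, barrier - strain

# Hypothetical energies (arbitrary units) for a two-fragment coupling step.
barrier, strain, interaction = activation_strain(-100.0, [-60.0, -45.0],
                                                 [-58.0, -43.0])
```

By construction the two terms sum to the barrier, so a less stabilizing interaction term (as found for 20-MAA) shows up directly as a higher barrier.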


RothC and Century are two of the most widely used soil organic matter (SOM) models. However, there are few examples of these models being parameterised specifically for environmental conditions in East Africa. The aim of this study was therefore to evaluate the ability of RothC and Century to estimate changes in soil organic carbon (SOC) resulting from varying land use/management practices under the climate and soil conditions found in Kenya. The study used climate, soil and crop data from a long-term experiment (1976-2001) carried out at the Kabete site at the Kenya National Agricultural Research Laboratories (NARL, located in a semi-humid region) and data from a 13-year experiment carried out in Machang'a (Embu District, located in a semi-arid region). The NARL experiment included various fertiliser (0, 60 and 120 kg of N and P2O5 ha^-1), farmyard manure (FYM; 5 and 10 t ha^-1) and plant residue treatments in a variety of combinations. The Machang'a experiment involved a fertiliser (51 kg N ha^-1) and a FYM (0, 5 and 10 t ha^-1) treatment with both monocropping and intercropping. At Kabete both models showed a fair to good fit to the measured data, although Century simulations for treatments with high levels of FYM were better than those without. At the Machang'a site with monocrops, both models showed a fair to good fit to the measured data for all treatments. However, the fit of both models (especially RothC) to the measured data for the intercropping treatments at Machang'a was much poorer. Further model development for intercrop systems is recommended. Both models can be useful tools in soil C prediction, provided that time series of measured soil C and crop production data are available for validating model performance against local or regional agricultural crops. (C) 2007 Elsevier B.V. All rights reserved.
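Both RothC and Century rest on first-order pool kinetics. A single-pool toy version (illustrative only; the real models use several pools with distinct rate constants plus climate and clay modifiers) shows how simulated SOC relaxes towards the steady state inputs/k:

```python
def soc_series(c0, inputs, k, years):
    """Minimal one-pool SOC sketch: dC/dt = inputs - k*C, annual time step.
    c0: initial stock (t C ha^-1); inputs: annual C input (t C ha^-1 yr^-1);
    k: first-order decomposition rate (yr^-1)."""
    c, out = c0, [c0]
    for _ in range(years):
        c += inputs - k * c   # carbon added minus carbon decomposed this year
        out.append(c)
    return out

# Hypothetical run: 20 t C ha^-1 initially, 1.5 t C ha^-1 yr^-1 inputs, k = 0.05.
series = soc_series(20.0, 1.5, 0.05, 100)
```

The stock climbs monotonically towards inputs/k = 30 t C ha^-1, the kind of trajectory against which the measured time series at Kabete and Machang'a are compared.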


The MarQUEST (Marine Biogeochemistry and Ecosystem Modelling Initiative in QUEST) project was established to develop improved descriptions of marine biogeochemistry suited to the next generation of Earth system models. We review progress in these areas, providing insight into the advances that have been made and identifying the key gaps that remain in developing the marine component of next-generation Earth system models. The following issues are discussed, with results presented where appropriate: the choice of model structure; scaling processes from physiology to functional types; the sensitivity of ecosystem models to changes in the physical environment; the role of the coastal ocean; and new methods for evaluating and comparing ecosystem and biogeochemistry models. We make recommendations as to where future investment in marine ecosystem modelling should be focused, highlighting a generic software framework for model development, improved hydrodynamic models, better parameterisation of new and existing models, reanalysis tools and ensemble simulations. The final challenge is to ensure that experimental/observational scientists are stakeholders in the models and vice versa.


Projections of stratospheric ozone from a suite of chemistry-climate models (CCMs) have been analyzed. In addition to a reference simulation where anthropogenic halogenated ozone depleting substances (ODSs) and greenhouse gases (GHGs) vary with time, sensitivity simulations with either ODS or GHG concentrations fixed at 1960 levels were performed to disaggregate the drivers of projected ozone changes. These simulations were also used to assess the two distinct milestones of ozone returning to historical values (ozone return dates) and ozone no longer being influenced by ODSs (full ozone recovery). The date of ozone returning to historical values does not indicate complete recovery from ODSs in most cases, because GHG-induced changes accelerate or decelerate ozone changes in many regions. In the upper stratosphere where CO2-induced stratospheric cooling increases ozone, full ozone recovery is projected to not likely have occurred by 2100 even though ozone returns to its 1980 or even 1960 levels well before (~2025 and 2040, respectively). In contrast, in the tropical lower stratosphere ozone decreases continuously from 1960 to 2100 due to projected increases in tropical upwelling, while by around 2040 it is already very likely that full recovery from the effects of ODSs has occurred, although ODS concentrations are still elevated by this date. In the midlatitude lower stratosphere the evolution differs from that in the tropics, and rather than a steady decrease in ozone, first a decrease in ozone is simulated from 1960 to 2000, which is then followed by a steady increase through the 21st century. Ozone in the midlatitude lower stratosphere returns to 1980 levels by ~2045 in the Northern Hemisphere (NH) and by ~2055 in the Southern Hemisphere (SH), and full ozone recovery is likely reached by 2100 in both hemispheres. 
Overall, in all regions except the tropical lower stratosphere, full ozone recovery from ODSs occurs significantly later than the return of total column ozone to its 1980 level. The latest return of total column ozone is projected to occur over Antarctica (~2045–2060) whereas it is not likely that full ozone recovery is reached by the end of the 21st century in this region. Arctic total column ozone is projected to return to 1980 levels well before polar stratospheric halogen loading does so (~2025–2030 for total column ozone, cf. 2050–2070 for Cly+60×Bry) and it is likely that full recovery of total column ozone from the effects of ODSs has occurred by ~2035. In contrast to the Antarctic, by 2100 Arctic total column ozone is projected to be above 1960 levels, but not in the fixed GHG simulation, indicating that climate change plays a significant role.


Models of windblown pollen or spore movement are required to predict gene flow from genetically modified (GM) crops and the spread of fungal diseases. We suggest a simple form for a function describing the distance moved by a pollen grain or fungal spore, for use in generic models of dispersal. The function has power-law behaviour over sub-continental distances. We show that air-borne dispersal of rapeseed pollen in two experiments was inconsistent with an exponential model, but was fitted by power-law models, implying a large contribution from distant fields to the catches observed. After allowance for this 'background' by applying Fourier transforms to deconvolve the mixture of distant and local sources, the data were best fit by power-laws with exponents between 1.5 and 2. We also demonstrate that for a simple model of area sources, the median dispersal distance is a function of field radius and that measurement from the source edge can be misleading. Using an inverse-square dispersal distribution deduced from the experimental data and the distribution of rapeseed fields deduced by remote sensing, we successfully predict observed rapeseed pollen density in the city centres of Derby and Leicester (UK).
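The practical difference between exponential and power-law kernels is the weight of the tail, which is what produces the large 'background' contribution from distant fields. A small sketch with illustrative parameter values (not those fitted in the paper):

```python
from math import exp

def exponential_kernel(r, a):
    """Exponential dispersal kernel ~ exp(-r/a)."""
    return exp(-r / a)

def power_law_kernel(r, b, r0=1.0):
    """Inverse-power kernel ~ r^-b beyond a cutoff r0 (avoids the
    singularity at the origin); b = 2 is the inverse-square case."""
    return max(r, r0) ** -b

# Ratio of power-law to exponential density at 10 m, 100 m and 1 km:
# the power-law tail dominates by ever larger factors with distance.
ratios = [power_law_kernel(r, 2.0) / exponential_kernel(r, 10.0)
          for r in (10.0, 100.0, 1000.0)]
```

At kilometre scales the inverse-square kernel exceeds the exponential one by tens of orders of magnitude, which is why distant sources cannot be neglected when fitting catches.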


In areas such as drug development, clinical diagnosis and biotechnology research, acquiring details about the kinetic parameters of enzymes is crucial. The correct design of an experiment is critical to collecting data suitable for analysis, modelling and deriving the correct information. As classical design methods are not aimed at the more complex kinetic models now frequently studied, care is needed to estimate the parameters of such models with low variance. We demonstrate that a Bayesian approach (the use of prior knowledge) can produce major gains, quantifiable in terms of information, productivity and accuracy, for each experiment. Developing the use of Bayesian utility functions, we have used a systematic method to identify the optimum experimental designs for a number of kinetic-model data sets. This has enabled the identification of trends between kinetic model types and sets of design rules, and the key conclusion that such designs should be based on some prior knowledge of K_M (the Michaelis constant) and/or the kinetic model. We suggest an optimal and iterative method for selecting features of the design such as the substrate range, the number of measurements and the choice of intermediate points. The final design collects data suitable for accurate modelling and analysis and minimises the error in the estimated parameters. (C) 2003 Elsevier Science B.V. All rights reserved.
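The role of prior knowledge can be illustrated for the Michaelis-Menten model v = Vmax*s/(K_M + s): given a prior guess of the parameters, candidate substrate designs can be ranked by the determinant of the Fisher information (local D-optimality). This is a generic sketch, not the paper's utility function, and all numbers are hypothetical:

```python
def info_det(design, vmax, km, sigma=1.0):
    """Determinant of the 2x2 Fisher information for (Vmax, K_M) under
    independent Gaussian errors; larger determinant means smaller joint
    parameter variance (D-optimality)."""
    a = b = c = 0.0
    for s in design:
        dv = s / (km + s)                  # sensitivity d v / d Vmax
        dk = -vmax * s / (km + s) ** 2     # sensitivity d v / d K_M
        a += dv * dv
        b += dv * dk
        c += dk * dk
    return (a * c - b * b) / sigma ** 4

# Prior guess: Vmax = 10, K_M = 2, substrate range up to s = 20.
best = info_det([1.7, 20.0], 10.0, 2.0)    # one point near K_M, one saturating
naive = info_det([15.0, 20.0], 10.0, 2.0)  # both points near saturation
```

The design informed by the K_M guess carries far more information than one that clusters measurements at saturating substrate concentrations, mirroring the paper's conclusion that designs should use prior knowledge of K_M.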


Experimental data for the title reaction were modeled using master equation (ME)/RRKM methods based on the MultiWell suite of programs. The starting point for the exercise was the empirical fitting provided by the NASA (Sander, S. P.; Finlayson-Pitts, B. J.; Friedl, R. R.; Golden, D. M.; Huie, R. E.; Kolb, C. E.; Kurylo, M. J.; Molina, M. J.; Moortgat, G. K.; Orkin, V. L.; Ravishankara, A. R. Chemical Kinetics and Photochemical Data for Use in Atmospheric Studies, Evaluation Number 15; Jet Propulsion Laboratory: Pasadena, CA, 2006) and IUPAC (Atkinson, R.; Baulch, D. L.; Cox, R. A.; Hampson, R. F., Jr.; Kerr, J. A.; Rossi, M. J.; Troe, J. J. Phys. Chem. Ref. Data 2000, 29, 167) data evaluation panels, which represents the data in the experimental pressure ranges rather well. Despite the availability of quite reliable parameters for these calculations (molecular vibrational frequencies (Parthiban, S.; Lee, T. J. J. Chem. Phys. 2000, 113, 145) and a value (Orlando, J. J.; Tyndall, G. S. J. Phys. Chem. 1996, 100, 19398) of the bond dissociation energy, D_298(BrO-NO2) = 118 kJ mol^-1, corresponding to ΔH°_0 = 114.3 kJ mol^-1 at 0 K) and the use of RRKM/ME methods, fitting calculations to the reported data or the empirical equations was anything but straightforward. Using these molecular parameters resulted in a discrepancy of a factor of ca. 4 between the calculations and the database of rate constants at, or close to, the low-pressure limit. Agreement between calculation and experiment could be achieved in two ways: either by increasing ΔH°_0 to an unrealistically high value (149.3 kJ mol^-1) or by increasing ⟨ΔE⟩_down, the average energy transferred in a downward collision, to an unusually large value (> 5000 cm^-1). The discrepancy could also be reduced by making all overall rotations fully active. The system was relatively insensitive to changing the moments of inertia in the transition state to increase the centrifugal effect.
The possibility of involvement of BrOONO was tested and cannot account for the difficulties of fitting the data.
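The pressure dependence at issue, a recombination rate that rises in proportion to [M] at low pressure and saturates at k_inf at high pressure, can be sketched with the simplest Lindemann-Hinshelwood form. The paper's actual analysis uses full RRKM/ME calculations with broadening factors; the numbers here are generic, not fitted values:

```python
def k_eff(m, k0, kinf):
    """Lindemann-Hinshelwood effective bimolecular rate constant for
    A + B (+M) -> AB:  k = k0*[M] / (1 + k0*[M]/kinf).
    m: bath-gas density [M] (molecule cm^-3);
    k0: low-pressure limit (cm^6 molecule^-2 s^-1);
    kinf: high-pressure limit (cm^3 molecule^-1 s^-1)."""
    return k0 * m / (1 + k0 * m / kinf)

# Generic termolecular parameters; the limits bracket the falloff curve.
low = k_eff(1e10, 1e-31, 1e-11)    # low pressure: k ~ k0*[M]
high = k_eff(1e30, 1e-31, 1e-11)   # high pressure: k ~ kinf
```

A factor-of-four disagreement at the low-pressure limit, as found here, therefore points directly at the parameters controlling k0 (well depth and collisional energy transfer) rather than at k_inf.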


Crumpets are made by heating a fermented batter on a hot plate at around 230°C. The characteristic structure, dominated by vertical pores, develops rapidly: structure has developed through around 75% of the product height within 30 s, which is far faster than might be expected from transient heat conduction through the batter. Cooking is complete within around 3 min. Image analysis based on X-ray tomography shows that the voidage fraction is approximately constant and that there is continual coalescence between the larger pores throughout the product, although there is also a steady population of small bubbles trapped within the solidified batter. We report here experimental studies which shed light on some of the mechanisms responsible for this structure, together with models of key phenomena. Three aspects are discussed: the role of gas (carbon dioxide and nitrogen) nuclei in initiating structure development; convective heat transfer inside the developing pores; and the kinetics of setting the batter into an elastic solid structure. It is shown conclusively that the small bubbles of carbon dioxide resulting from the fermentation stage play a crucial role as nuclei for pore development: without these nuclei the result is not a porous structure but a solid, elastic, inedible, gelatinized product. These nuclei are also responsible for the tiny bubbles set in the final product, and they form the source of the dominant pore structure, which is driven largely by the (initially explosive) release of water vapour from the batter together with the desorption of dissolved carbon dioxide. It is argued that the rapid evaporation, transport and condensation of steam within the growing pores provides an important mechanism, as in a heat pipe, for rapid heat transfer, and models for this process are developed and tested.
The setting of the continuous batter phase is essential for final product quality: studies using differential scanning calorimetry, together with measurements of the kinetics of change in the viscoelastic properties of the batter, suggest that this process is driven by the kinetics of gelatinization. Unlike many thermally driven food processes, the rates of heating are such that gelatinization kinetics cannot be neglected. The implications of these results for modelling and for the development of novel structures are discussed.
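The claim that structure forms far faster than transient conduction would allow can be checked with an order-of-magnitude estimate, t ~ L^2/alpha. The depth and thermal diffusivity below are assumed, illustrative values, not measurements from the study:

```python
def conduction_time(depth_m, alpha=1.4e-7):
    """Characteristic transient-conduction time scale t ~ L^2 / alpha.
    alpha: thermal diffusivity (m^2 s^-1); 1.4e-7 is typical of a
    water-rich batter (assumed value)."""
    return depth_m ** 2 / alpha

# ~75% of a ~2 cm product height (assumed geometry).
t = conduction_time(0.015)
```

The estimate comes out on the order of 10^3 s, roughly two orders of magnitude longer than the observed ~30 s, which motivates the heat-pipe-like steam transport mechanism proposed above.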


It is sometimes argued that experimental economists do not have to worry about external validity so long as the design sticks closely to a theoretical model. This position mistakes the model for the theory. As a result, applied economics designs often study phenomena distinct from their stated objects of inquiry. Because the implemented models are abstract, they may provide improbable analogues to their stated subject matter. This problem is exacerbated by the relational character of the social world, which also sets epistemic limits for the social science laboratory more generally.


A solution has been found to the long-standing problem of experimental modelling of the interfacial instability in aluminium reduction cells. The idea is to replace the electrolyte overlaying molten aluminium with a mesh of thin rods supplying current down directly into the liquid metal layer. This eliminates electrolysis altogether and all the problems associated with it, such as high temperature, chemical aggressiveness of media, products of electrolysis, the necessity for electrolyte renewal, high power demands, etc. The result is a room temperature, versatile laboratory model which simulates Sele-type, rolling pad interfacial instability. Our new, safe laboratory model enables detailed experimental investigations to test the existing theoretical models for the first time.