979 results for Soft-core potential model


Relevance: 100.00%

Abstract:

We resolve the real-time dynamics of a purely dissipative s=1/2 quantum spin or, equivalently, hard-core boson model on a hypercubic d-dimensional lattice. The considered quantum dissipative process drives the system to a totally symmetric macroscopic superposition in each of the S3 sectors. Different characteristic time scales are identified for the dynamics and we determine their finite-size scaling. We introduce the concept of cumulative entanglement distribution to quantify multiparticle entanglement and show that the considered protocol serves as an efficient method to prepare a macroscopically entangled Bose-Einstein condensate.
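For readers who want to experiment with this kind of purely dissipative state preparation, the following is a minimal sketch using QuTiP. The nearest-neighbour jump operators, the chain length and the time window are assumptions chosen for illustration; they are not necessarily the operators or geometry of this work.

```python
# Purely dissipative preparation of a totally symmetric superposition (illustrative sketch).
# Assumption: jump operators c_ij = (s+_i + s+_j)(s-_i - s-_j) on neighbouring sites, as in
# common dissipative state-preparation proposals; not necessarily this paper's operators.
import itertools
import numpy as np
import qutip as qt

N = 4                                    # small spin-1/2 chain (illustrative size)
si, sp, sm = qt.qeye(2), qt.sigmap(), qt.sigmam()

def site(op, i):
    ops = [si] * N
    ops[i] = op
    return qt.tensor(ops)

H = 0 * site(si, 0)                      # purely dissipative: no Hamiltonian
c_ops = [(site(sp, i) + site(sp, i + 1)) * (site(sm, i) - site(sm, i + 1))
         for i in range(N - 1)]          # nearest-neighbour jump operators

psi0 = qt.tensor([qt.basis(2, 0), qt.basis(2, 1)] * (N // 2))   # product state, S_z = 0

# Target: the totally symmetric (Dicke) state in the same S_z sector
perms = set(itertools.permutations([0, 1] * (N // 2)))
dicke = sum(qt.tensor([qt.basis(2, b) for b in p]) for p in perms).unit()

tlist = np.linspace(0.0, 20.0, 200)
result = qt.mesolve(H, psi0, tlist, c_ops=c_ops,
                    e_ops=[dicke * dicke.dag()])    # overlap with the symmetric state
print("final fidelity with the symmetric state:", result.expect[0][-1])
```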

Relevance: 100.00%

Abstract:

Sepsis is a significant cause of multiple organ failure and death in the burn patient, yet its identification in this population is confounded by chronic hypermetabolism and impaired immune function. The purpose of this study was twofold: 1) to determine the ability of the systemic inflammatory response syndrome (SIRS) and American Burn Association (ABA) criteria to predict sepsis in the burn patient; and 2) to develop a model representing the best combination of clinical predictors associated with sepsis in the same population. A retrospective, case-controlled, within-patient comparison of burn patients admitted to a single intensive care unit (ICU) was conducted for the period January 2005 to September 2010. Blood culture results were paired with clinical condition: "positive-sick", "negative-sick", and "screening-not sick". Data were collected for the 72 hours prior to each blood culture. The most significant predictors were evaluated using logistic regression, generalized estimating equations (GEE) and ROC area under the curve (AUC) analyses to assess model predictive ability. Bootstrapping methods were employed to evaluate potential model over-fitting. Fifty-nine subjects were included, representing 177 culture periods. SIRS criteria were not found to be associated with culture type, with an average of 98% of subjects meeting the criteria in the 3 days prior. ABA sepsis criteria differed significantly among culture types only on the day prior (p = 0.004). The variables identified for the model were: heart rate > 130 beats/min, mean blood pressure < 60 mmHg, base deficit < -6 mEq/L, temperature > 36°C, use of vasoactive medications, and glucose > 150 mg/dL. The model was significant in predicting the "positive culture-sick" and sepsis states, with AUCs of 0.775 (p < 0.001) and 0.714 (p < 0.001), respectively; by comparison, the ABA criteria gave AUCs of 0.619 (p = 0.028) and 0.597 (p = 0.035), respectively. SIRS criteria are not appropriate for identifying sepsis in the burn population. The ABA criteria perform better, but only for the day prior to positive blood culture results. The time period useful for diagnosing sepsis using clinical criteria may be limited to 24 hours. A combination of predictors is superior to individual variable trends, yet algorithms or computer support will be necessary for clinicians to find such models useful.
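As a purely illustrative sketch of the modelling approach (logistic regression on the six predictors, evaluated with ROC AUC), the code below uses synthetic placeholder data and ignores the within-patient correlation that the study handled with GEE; the variable names and data are assumptions, not the study data.

```python
# Illustrative sketch of the modelling approach: logistic regression on the six clinical
# predictors, evaluated with ROC AUC. The data below are synthetic placeholders, not the
# study data; the thresholds mirror those listed in the abstract.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 177                                    # number of culture periods in the study

# Hypothetical binary indicator variables for each 72-h culture period
X = np.column_stack([
    rng.integers(0, 2, n),  # heart rate > 130 beats/min
    rng.integers(0, 2, n),  # mean blood pressure < 60 mmHg
    rng.integers(0, 2, n),  # base deficit < -6 mEq/L
    rng.integers(0, 2, n),  # temperature > 36 °C
    rng.integers(0, 2, n),  # vasoactive medication use
    rng.integers(0, 2, n),  # glucose > 150 mg/dL
])
y = rng.integers(0, 2, n)                  # placeholder outcome: "positive culture-sick"

model = LogisticRegression().fit(X, y)
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
print(f"in-sample ROC AUC: {auc:.3f}")     # the study reports AUC = 0.775 on real data
```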

Relevance: 100.00%

Abstract:

In this paper we describe a new, promising procedure to model hyperelastic materials from given stress-strain data. The main advantage of the proposed method is that the user does not need a deep knowledge of hyperelasticity, large strains or hyperelastic constitutive modelling. The engineer simply prescribes some stress-strain experimental data (whether isotropic or anisotropic), in user-prescribed stress and strain measures, and the model replicates the experimental data almost exactly. The procedure is based on the piece-wise spline model of Sussman and Bathe and may easily be generalized to transversely isotropic and orthotropic materials. The model is also amenable to efficient finite element implementation. In this paper we briefly describe the general procedure, addressing its advantages and limitations. We give predictions for arbitrary "experimental data" and also for actual experiments on the behaviour of living soft tissues. The model may also be implemented in a general-purpose finite element program. Since the obtained strain energy functions are analytic piece-wise functions, the constitutive tangent may be readily derived for use in implicit static problems, where equilibrium iterations must be performed and the material tangent is needed in order to preserve the quadratic rate of convergence of Newton procedures.
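The following sketch illustrates the spirit of the procedure, assuming scipy's cubic splines as the piece-wise representation: experimental stress-strain points are interpolated by an analytic piece-wise function whose derivative supplies the tangent needed for Newton iterations. It is not the Sussman-Bathe construction itself, and the data points are invented.

```python
# Sketch of the idea behind the procedure: represent the measured stress-strain response
# with a piece-wise spline and differentiate it analytically to obtain the material
# tangent. This is NOT the full Sussman-Bathe construction; the experimental points below
# are invented for illustration.
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical uniaxial test data: logarithmic strain vs. Cauchy stress (MPa)
strain = np.array([-0.4, -0.2, 0.0, 0.2, 0.4, 0.6])
stress = np.array([-0.9, -0.35, 0.0, 0.4, 1.1, 2.3])

sigma = CubicSpline(strain, stress)        # piece-wise analytic stress function
tangent = sigma.derivative()               # d(stress)/d(strain), for Newton iterations

e = 0.31
print(f"stress({e:.2f})  = {sigma(e):.4f} MPa")
print(f"tangent({e:.2f}) = {tangent(e):.4f} MPa")   # consistent tangent at that strain
```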

Relevance: 100.00%

Abstract:

The wake produced by the structural supports of ultrasonic anemometers (UAs) causes distortions in the velocity field in the vicinity of the sonic path. These distortions are measured by the UA, inducing errors in the determination of the mean velocity, turbulence intensity, spectrum, etc., which are basic parameters for determining the effect of wind on structures. Additionally, these distortions can lead to indeterminacy in the calibration function of the sensors (Cuerva et al., 2004). Several wind tunnel tests have been dedicated to obtaining experimental data, from which fit models have been developed to describe and correct these distortions (Kaimal, 1978 and Wyngaard, 1985). This work explores the effect of a vortex wake generated by the supports of a UA on the wind speed measured by this instrument. To do this, the von Kármán vortex street potential model is combined with the mathematical model of the measuring process carried out by UAs developed by Franchini et al. (2007). The results obtained are correction functions for the measured wind velocity, which depend on the geometry of the sonic anemometer and on the aerodynamic conditions. These results have been validated against a wind tunnel test of a single-path UA specially developed for research, whose supports were modified in order to reproduce the conditions of the theoretical model. Good agreement between experimental and theoretical results was found.
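A rough sketch of how the two ingredients can be combined numerically, assuming a truncated point-vortex representation of the Kármán street and a simple line average along the sonic path as the measurement model; the geometry and vortex strengths are invented, and the actual models are those of Cuerva et al. and Franchini et al.

```python
# Rough sketch of the two ingredients: (i) the velocity field induced by a (truncated)
# von Karman vortex street behind a support, modelled with ideal point vortices, and
# (ii) the sonic-path measurement approximated as a line average of the along-path
# velocity component. Geometry and strengths are invented for illustration.
import numpy as np

GAMMA, a, h = 0.05, 0.10, 0.03      # vortex strength, streamwise spacing, half-width (assumed)
U_INF = 10.0                         # free-stream speed, m/s

def street_velocity(x, y, n_pairs=20):
    """Velocity (u, v) of the free stream plus two staggered rows of counter-rotating vortices."""
    u, v = U_INF, 0.0
    for k in range(n_pairs):
        for (xv, yv, g) in [(k * a, +h, +GAMMA),             # upper row
                            ((k + 0.5) * a, -h, -GAMMA)]:    # lower row, staggered
            dx, dy = x - xv, y - yv
            r2 = dx * dx + dy * dy + 1e-12
            u += -g / (2 * np.pi) * dy / r2
            v += +g / (2 * np.pi) * dx / r2
    return u, v

def sonic_path_measurement(p0, p1, n=200):
    """Line-averaged velocity component along the acoustic path from p0 to p1."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    t_hat = (p1 - p0) / np.linalg.norm(p1 - p0)
    samples = [np.dot(street_velocity(*(p0 + s * (p1 - p0))), t_hat)
               for s in np.linspace(0.0, 1.0, n)]
    return np.mean(samples)          # uniform sampling, so the mean is the path average

measured = sonic_path_measurement(p0=(0.5, -0.05), p1=(0.6, 0.05))
print(f"path-averaged speed in the wake: {measured:.3f} m/s (free stream {U_INF} m/s)")
```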

Relevance: 100.00%

Abstract:

In today's world, applications based on biometric systems, that is, systems that measure the electrical signals of our body, are growing at a fast pace. All of these systems incorporate biomedical sensors that help users better monitor different aspects of their daily routine, such as keeping a detailed record of a training programme or of the quality of the food they eat. Among these biometric systems, those based on the interpretation of brain signals through electroencephalography (EEG) tests are gaining more and more traction, although they are still at a rather early stage because of the great complexity of the human brain, largely unknown to scientists until the 21st century. For these reasons, devices that use a brain-computer interface (BCI) are becoming increasingly popular. A BCI system works by capturing the brain waves of a subject and then processing them in order to obtain a representation of an action or a thought of that individual. These thoughts, correctly interpreted, are then used to carry out an action. Examples of BCI applications would be driving the motor of an electric wheelchair when the subject performs, for instance, the action of closing a fist, or opening the lock of one's own house using a personal brain pattern. Data-processing systems are evolving very quickly, mainly because of the high processing speed and low power consumption of FPGAs (Field-Programmable Gate Arrays). In addition, FPGAs have a reconfigurable architecture, which makes them more versatile and powerful than other processing units such as CPUs or GPUs. The CEI (Centro de Electrónica Industrial), where this final-year project (TFG) is carried out, has experience in the design of reconfigurable systems on FPGAs. This TFG is the second in a line of projects whose goal is a system capable of correctly processing brain signals, in order to reach a common pattern that allows us to act accordingly. More specifically, the aim is to detect when a person is falling asleep by capturing the brain waves known as alpha waves, whose frequency lies between 8 and 13 Hz. These waves, which appear when we close our eyes and clear our mind, represent a state of mental relaxation. This project therefore marks the start of a global BCI system and serves as a first contact with the processing of brain waves, prior to the later use of reconfigurable hardware on which evolutionary algorithms will be implemented. It thus becomes necessary to develop a data-processing system on an FPGA. The data are processed following digital signal processing methodology; in this case, a frequency analysis is performed using the fast Fourier transform (FFT). Once the data-processing system has been developed, it is integrated with another system in charge of capturing the data acquired by an ADC (Analog-to-Digital Converter), the ADS1299. This ADC is specifically designed to capture potentials of the human brain.
In the final system, the data are captured by the ADS1299 and sent to the FPGA, which processes them; the interpretation is then carried out by the users, who analyse the processed data. For the development of the data-processing system, two study platforms are initially available from which the data are captured before processing: 1. The first is a commercial tool developed and distributed by OpenBCI, a project that sells hardware for EEG and other tests. This tool consists of a microprocessor, an SD memory module for data storage and a wireless communication module that transmits the data over Bluetooth; it also includes the aforementioned ADS1299 ADC. The platform offers a graphical interface used for the research prior to the design of the processing system, since it allows a first contact with the system. 2. The second platform is an evaluation kit for the ADS1299, from which the different control ports can be accessed through the communication pins of the ADC. This platform is connected to the FPGA in the integrated system. To understand how the simplest brain waves work, and to establish the minimum requirements for the analysis of EEG waves, several consultations were held with Dr Ceferino Maestu, neurophysiologist at the Centro de Tecnología Biomédica (CTB) of the UPM. He introduced us to the different procedures for analysing electroencephalogram waves and to the way the electrodes must be placed on the skull. To conclude the preliminary research, a first data-processing model was built in MATLAB. A very important characteristic of brain waves is their randomness, which makes time-domain analysis very complex. The most important step in the processing is therefore the transformation from the time domain to the frequency domain, by means of the fast Fourier transform (FFT), where the captured data can be analysed with greater precision. The MATLAB model is used to obtain the first results of the processing system, which follows these steps: 1. The data are captured from the electrodes and written to a data table. 2. The data are read from the table. 3. The temporal size of the sample to be processed is chosen. 4. A window is applied to avoid discontinuities at the beginning and end of the analysed block. 5. The sample to be transformed is completed with zero-padding in the time domain. 6. The FFT is applied to the windowed, zero-padded block. 7. The results are plotted for analysis. At this point it becomes clear that capturing alpha waves is very feasible. Although some problems arise when interpreting the data because of the low temporal resolution of the OpenBCI platform, this is solved in the developed model, since the evaluation kit (the data-acquisition system) allows the data-capture rate, i.e. the sampling frequency, to be adjusted, which directly affects this precision.
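A minimal Python sketch of the processing steps listed above (windowing, zero-padding, FFT, inspection of the 8-13 Hz alpha band); the sampling rate and the synthetic test signal are assumptions used only to make the example self-contained.

```python
# Minimal sketch of the MATLAB processing chain described above: take a block of EEG
# samples, apply a window, zero-pad, FFT, and inspect the 8-13 Hz alpha band.
# The sampling rate and the synthetic test signal are assumptions for illustration.
import numpy as np

FS = 250                 # assumed sampling frequency of the acquisition system, Hz
BLOCK_SECONDS = 2        # temporal size of the sample to process (step 3)
NFFT = 1024              # FFT length after zero-padding (step 5)

# Synthetic stand-in for one electrode channel: 10 Hz "alpha" activity plus noise
t = np.arange(FS * BLOCK_SECONDS) / FS
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)

window = np.hanning(signal.size)                     # step 4: avoid edge discontinuities
block = np.zeros(NFFT)
block[: signal.size] = signal * window               # step 5: zero-padding

spectrum = np.abs(np.fft.rfft(block))                # step 6: FFT of the block
freqs = np.fft.rfftfreq(NFFT, d=1.0 / FS)

alpha = (freqs >= 8) & (freqs <= 13)                 # alpha band, 8-13 Hz
print("alpha-band share of total power:",
      np.sum(spectrum[alpha] ** 2) / np.sum(spectrum ** 2))
```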
Once the first processing run and the subsequent analysis of the results have been carried out, a hardware model is built that follows the same steps as the MATLAB model, insofar as this is useful and feasible. For this, the XPS (Xilinx Platform Studio) tool included in the EDK (Embedded Development Kit) is used, which allows an embedded system to be designed. This system comprises: a soft-core microprocessor called MicroBlaze, which manages and controls the whole system; an FFT block, which performs the fast Fourier transform; four BRAM memory blocks, which store the input and output data of the FFT block, plus a multiplier that applies the window to the data entering the FFT block; and a PLB bus, a control bus that connects the MicroBlaze with the different elements of the system. After the hardware design, the software design is carried out with the SDK (Software Development Kit). The data-acquisition system, which is controlled mostly from the MicroBlaze, is also integrated at this stage; from this environment the MicroBlaze is programmed to manage the generated hardware. The software handles the communication between the two systems, data acquisition and data processing, and loads the window data into the corresponding memory. In the first development stages, the FFT block is tested to verify its operation in hardware. For this first test, the input data of the FFT block are loaded into one BRAM and the window data into another; the processed data are written to two further BRAMs, one for the real values of the transform and one for the imaginary values. After verifying the correct operation of the FFT block, it is integrated with the data-acquisition system, and a real EEG test is then performed to capture alpha waves. In addition, to validate the use of FPGAs as processing units, the time the FFT block takes to compute the transform is measured and compared with the time MATLAB needs to compute the same transform on the same data. The hardware system performs the fast Fourier transform 27 times faster than MATLAB, which shows the clear competitive advantage of hardware in terms of execution time. From a learning point of view, this TFG covers different fields. In electronics: knowledge of MATLAB and of tools such as FDATool (Filter Design & Analysis Tool) was improved; signal-processing techniques, in particular spectral analysis, were learned; knowledge of VHDL and of its use in the Xilinx ISE environment was improved; knowledge of C was reinforced by programming the MicroBlaze to control the system; and embedded systems were created with the Xilinx development environment using the EDK (Embedded Development Kit). In the field of neurology, EEG tests were learned, as well as how to analyse and interpret their results.
Regarding social impact, BCI systems affect many sectors. The most notable is the large number of people with physical disabilities, for whom such a system offers an opportunity to increase their day-to-day autonomy. Another important sector is medical research, where BCI systems are applicable in many contexts, for example the detection and study of cognitive diseases.

Relevance: 100.00%

Abstract:

We present an extension of the logic outer-approximation algorithm for dealing with disjunctive discrete-continuous optimal control problems whose dynamic behavior is modeled in terms of differential-algebraic equations. Although the proposed algorithm can be applied to a wide variety of discrete-continuous optimal control problems, we are mainly interested in problems where disjunctions are also present. Disjunctions are included to take into account only those parts of the underlying model that become relevant under certain processing conditions. By doing so, the numerical robustness of the optimization algorithm improves, since the parts of the model that are not active are discarded, leading to a reduced-size problem and avoiding potential model singularities. We test the proposed algorithm on three examples with distinct, complex dynamic behavior. In all the case studies the number of iterations and the computational effort required to obtain the optimal solutions are modest, and the solutions are relatively easy to find.
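The toy sketch below only illustrates why discarding inactive parts of a model helps: each discrete alternative is solved with just the equations of its own disjunct, and the best result is kept. It is a plain enumeration over a two-term disjunction with an invented model, not the logic outer-approximation algorithm described here.

```python
# Toy illustration of the benefit of disjunctions: for each discrete alternative only the
# model of the active disjunct enters the continuous subproblem, so inactive (possibly
# singular) parts are never evaluated. This enumerates a two-term disjunction; it is NOT
# the logic outer-approximation algorithm of the paper, and the model is invented.
import numpy as np
from scipy.optimize import minimize

def solve_disjunct(active):
    """Solve the continuous subproblem containing only the active disjunct's model."""
    if active == "mode_A":
        obj = lambda x: (x[0] - 2.0) ** 2 + 1.0        # disjunct A: cost around x = 2
    else:
        obj = lambda x: 0.5 * (x[0] - 4.0) ** 2 + 1.5  # disjunct B: flatter, higher offset
    res = minimize(obj, x0=np.array([0.0]))
    return res.fun, res.x

best = min((solve_disjunct(m) + (m,) for m in ["mode_A", "mode_B"]),
           key=lambda r: r[0])
print(f"best objective {best[0]:.3f} at x = {best[1][0]:.3f} using {best[2]}")
```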

Relevance: 100.00%

Abstract:

In this paper, we studied the vapor-liquid equilibria (VLE) and adsorption of ethylene on graphitized thermal carbon black and in slit pores whose walls are composed of graphene layers. Simple models of a one-center Lennard-Jones (LJ) potential and a two-center united-atom (UA)-LJ potential are investigated to study the impact of the choice of potential model on the description of VLE and adsorption behavior. Monte Carlo simulation is used in both the grand canonical (GCMC) and Gibbs ensembles. The one-center potential model cannot adequately describe the VLE over the practical range of temperature from the triple point to the critical point. On the other hand, the two-center potential model (Wick et al., J. Phys. Chem. B 2000, 104, 8008-8016) performs well in the description of VLE (saturated vapor and liquid densities and vapor pressure) over this wide temperature range. This UA-LJ model is then used in the study of adsorption of ethylene on graphitized thermal carbon black and in slit pores. Agreement between the GCMC simulation results and the experimental data on graphitized thermal carbon black at moderate temperatures is excellent, demonstrating the potential of the GCMC method and showing that the proper choice of potential model is essential for investigating adsorption. For slit pores of various sizes, we found that the behavior of ethylene exhibits a number of features that are not manifested in the study of spherical LJ particles. In particular, the singlet density distribution versus distance across the pore and versus the angle between the molecular axis and the z direction provides rich information about the way molecules arrange themselves when the pore width is varied. This arrangement was found to be very sensitive to the pore width.
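A small sketch of the site-site evaluation underlying a two-center united-atom LJ model of ethylene: each molecule is represented by two CH2 sites and the pair energy is summed over the four site-site distances. The σ, ε and bond-length values are nominal, roughly TraPPE-like numbers and should be read as assumptions rather than the exact parameters of Wick et al.

```python
# Sketch of the interaction energy between two ethylene molecules in a two-center
# united-atom LJ model: each molecule is two CH2 sites, and the total energy is the sum
# over the four site-site pairs. Parameter values are nominal (roughly TraPPE-like) and
# should be treated as assumptions, not the exact parameters of Wick et al.
import numpy as np

K_B = 1.380649e-23       # J/K
EPS = 85.0 * K_B         # site well depth (assumed), J
SIGMA = 3.675            # site diameter (assumed), Angstrom
BOND = 1.33              # C=C bond length, Angstrom

def ethylene_sites(center, axis):
    """Two united-atom CH2 sites placed along the molecular axis."""
    axis = np.asarray(axis, float) / np.linalg.norm(axis)
    return [np.asarray(center) + 0.5 * BOND * axis,
            np.asarray(center) - 0.5 * BOND * axis]

def lj(r):
    x6 = (SIGMA / r) ** 6
    return 4.0 * EPS * (x6 * x6 - x6)

def pair_energy(mol1, mol2):
    """Total site-site LJ energy between two molecules (J)."""
    return sum(lj(np.linalg.norm(s1 - s2)) for s1 in mol1 for s2 in mol2)

m1 = ethylene_sites(center=(0.0, 0.0, 0.0), axis=(1, 0, 0))
m2 = ethylene_sites(center=(4.5, 0.0, 0.0), axis=(0, 1, 0))   # 4.5 A apart, perpendicular
print(f"pair energy: {pair_energy(m1, m2) / K_B:.1f} K")       # reported in units of K
```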

Relevance: 100.00%

Abstract:

In this paper, we study the effect of solid surface mediation on the intermolecular potential energy of nitrogen, and its impact on the adsorption of nitrogen on a graphitized carbon black surface and in carbon slit-shaped pores. This effect arises from the lower effective interaction potential energy between two particles close to the surface compared to the potential energy of the same two particles when they are far away from the surface. A simple equation is proposed to calculate the reduction factor and this is used in the Grand Canonical Monte Carlo (GCMC) simulation of nitrogen adsorption on graphitized thermal carbon black. With this modification, the GCMC simulation results agree extremely well with the experimental data over a wide range of pressure; the simulation results with the original potential energy (i.e. no surface mediation) give rise to a shoulder in the neighbourhood of monolayer coverage and a significant over-prediction of the second and higher layer coverages. The influence of this surface mediation on the dependence of the pore-filling pressure on the pore width is also studied. It is shown that such surface mediation has a significant effect on the pore-filling pressure. This implies that the use of the local isotherms obtained from the potential model without surface mediation could give rise to a serious error in the determination of the pore-size distribution.
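The paper's specific reduction-factor equation is not reproduced here; the sketch below only illustrates the mechanism, damping the fluid-fluid LJ energy when both molecules sit close to the wall. The functional form of the damping and every parameter in it are assumptions.

```python
# Illustration of the surface-mediation idea only: the fluid-fluid LJ interaction is
# damped when both molecules are close to the carbon surface. The damping function used
# here (exponential recovery with height) and all its parameters are assumptions; the
# paper's actual reduction-factor equation is not reproduced.
import numpy as np

EPS_FF = 95.2      # nitrogen-nitrogen LJ well depth, K (typical literature value)
SIGMA_FF = 3.75    # nitrogen LJ diameter, Angstrom (typical literature value)

def lj(r, eps, sigma):
    x6 = (sigma / r) ** 6
    return 4.0 * eps * (x6 * x6 - x6)

def mediation_factor(z1, z2, f0=0.85, z_decay=3.5):
    """Assumed damping: reduction f0 at the wall, recovering towards 1 with height z (A)."""
    f = lambda z: 1.0 - (1.0 - f0) * np.exp(-z / z_decay)
    return f(z1) * f(z2)

def mediated_pair_energy(p1, p2):
    """Fluid-fluid LJ energy scaled by the surface-mediation factor (in K)."""
    r = np.linalg.norm(np.asarray(p1) - np.asarray(p2))
    return mediation_factor(p1[2], p2[2]) * lj(r, EPS_FF, SIGMA_FF)

# Two molecules at near-contact distance: once in the first adsorbed layer, once far away
near = mediated_pair_energy((0, 0, 3.5), (4.2, 0, 3.5))
far = mediated_pair_energy((0, 0, 30.0), (4.2, 0, 30.0))
print(f"near-wall pair energy: {near:.1f} K,  bulk-like pair energy: {far:.1f} K")
```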

Relevance: 100.00%

Abstract:

The adsorption of simple Lennard-Jones fluids in a carbon slit pore of finite length was studied with canonical ensemble (NVT) and Gibbs ensemble Monte Carlo (GEMC) simulations. The canonical-ensemble simulation cell was a cubic box in which the finite pore resides, while the Gibbs-ensemble cell was the pore space of the finite pore itself. Argon was used as a model Lennard-Jones fluid, while the adsorbent was modelled as a finite carbon slit pore whose two walls were each composed of three graphene layers with carbon atoms arranged in a hexagonal pattern. The Lennard-Jones (LJ) 12-6 potential model was used to compute the interaction energy between two fluid particles, and also between a fluid particle and a carbon atom. Argon adsorption isotherms were obtained at 87.3 K for pore widths of 1.0, 1.5 and 2.0 nm using both the canonical and Gibbs ensembles. These results were compared with isotherms obtained for the corresponding infinite pores using the grand canonical ensemble. The effects of the number of cycles needed to reach equilibrium, the initial allocation of particles, the displacement step and the simulation box size were investigated in particular for the canonical-ensemble Monte Carlo simulations. Of these parameters, the displacement step had the most significant effect on the performance of the simulation. The simulation box size was also important, especially at low pressures, at which the box must be large enough to hold a statistically acceptable number of particles in the bulk phase. Finally, the canonical and Gibbs ensembles were found to yield the same isotherm (within statistical error); the computation time for GEMC was shorter than that for the canonical-ensemble simulation, but only the latter described the proper interface between the reservoir and the adsorbed phase (and hence the meniscus).
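A bare-bones sketch of the canonical-ensemble displacement move whose maximum step size is singled out above as the most influential parameter; for brevity it uses a bulk periodic LJ argon box rather than the finite slit-pore geometry, and all parameters are nominal.

```python
# Bare-bones canonical (NVT) Metropolis sketch showing the displacement move whose maximum
# step size is highlighted above. For brevity this is a bulk periodic LJ box, not the
# finite slit-pore geometry of the paper; parameters are nominal argon values.
import numpy as np

EPS, SIGMA = 119.8, 3.405        # argon LJ parameters: K, Angstrom (common literature values)
T, L, N = 87.3, 30.0, 100        # temperature (K), box edge (A), number of particles
MAX_DISP = 0.3                   # maximum displacement step (A) -- the key tuning parameter

rng = np.random.default_rng(1)
pos = rng.uniform(0.0, L, size=(N, 3))

def energy_of(i, coords):
    """LJ energy of particle i with all others, minimum-image convention."""
    d = coords - coords[i]
    d -= L * np.rint(d / L)
    r2 = np.sum(d * d, axis=1)
    r2[i] = np.inf                           # skip self-interaction
    x6 = (SIGMA ** 2 / r2) ** 3
    return np.sum(4.0 * EPS * (x6 * x6 - x6))

accepted, n_moves = 0, 20000
for step in range(n_moves):
    i = rng.integers(N)
    old_e = energy_of(i, pos)
    trial = pos.copy()
    trial[i] = (trial[i] + rng.uniform(-MAX_DISP, MAX_DISP, 3)) % L
    de = energy_of(i, trial) - old_e
    if de <= 0.0 or rng.random() < np.exp(-de / T):   # Metropolis acceptance (energies in K)
        pos, accepted = trial, accepted + 1

print(f"acceptance ratio: {accepted / n_moves:.2f}  (tune MAX_DISP to keep this moderate)")
```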

Relevance: 100.00%

Abstract:

In this paper, we investigate the suitability of grand canonical Monte Carlo for describing the adsorption equilibria of flexible n-alkanes (butane, pentane and hexane) on graphitized thermal carbon black. The n-alkane potential model of Martin and Siepmann (J. Phys. Chem. B 102 (1998) 2569) is employed in the simulation, and the flexibility of the molecule is taken into account. Two models are studied: a fully flexible molecular model, in which the n-alkane is subject to bending and torsion, and a rigid molecular model, in which all carbon atoms reside in the same plane. It is found that (i) the adsorption isotherms of the two models are close to each other, suggesting that n-alkanes behave mostly as rigid molecules with respect to adsorption, although the isotherm of the longer-chain n-hexane is better described by the flexible molecular model, and (ii) the isotherms agree very well with the experimental data, at least up to two layers on the surface.
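A hedged sketch of the intramolecular terms that distinguish the flexible model from the rigid one, written in the TraPPE united-atom style (harmonic bending plus a cosine-series torsion); the constants are placeholders and not necessarily the exact Martin-Siepmann values.

```python
# Sketch of the intramolecular flexibility that distinguishes the flexible model from the
# rigid one: harmonic bond bending plus a cosine-series torsion, in the spirit of the
# TraPPE united-atom force field. The constants below are placeholders and are NOT the
# exact Martin-Siepmann parameters.
import numpy as np

K_BEND, THETA0 = 62500.0, np.deg2rad(114.0)   # bending constant (K/rad^2) and angle (assumed)
C1, C2, C3 = 355.0, -68.2, 791.3              # torsion coefficients in K (assumed)

def bend_energy(theta):
    """U_bend = 1/2 k (theta - theta0)^2, in Kelvin."""
    return 0.5 * K_BEND * (theta - THETA0) ** 2

def torsion_energy(phi):
    """U_tors = c1(1+cos phi) + c2(1-cos 2phi) + c3(1+cos 3phi), in Kelvin."""
    return (C1 * (1 + np.cos(phi))
            + C2 * (1 - np.cos(2 * phi))
            + C3 * (1 + np.cos(3 * phi)))

def intramolecular_energy(angles, dihedrals):
    """Total flexible-chain energy for a united-atom n-alkane conformation."""
    return sum(bend_energy(t) for t in angles) + sum(torsion_energy(p) for p in dihedrals)

# n-hexane (6 united atoms): 4 bond angles and 3 dihedrals
angles = np.deg2rad([114.0, 113.0, 115.0, 114.0])
dihedrals = np.deg2rad([180.0, 60.0, 180.0])          # trans, gauche, trans
print(f"intramolecular energy: {intramolecular_energy(angles, dihedrals):.1f} K")
```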

Relevance: 100.00%

Abstract:

In recent work we have developed a novel variational inference method for partially observed systems governed by stochastic differential equations. In this paper we provide a comparison of the Variational Gaussian Process Smoother with an exact solution computed using a Hybrid Monte Carlo approach to path sampling, applied to a stochastic double-well potential model. It is demonstrated that the variational smoother provides a very accurate estimate of the mean path, while the conditional variance is slightly underestimated. We conclude with some remarks on the advantages and disadvantages of the variational smoother. © 2008 Springer Science + Business Media LLC.
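For orientation, the sketch below simulates the kind of system being smoothed: an Euler-Maruyama path of a diffusion in a double-well potential with sparse noisy observations. The specific potential, noise level and observation schedule are assumptions, not necessarily those used in the paper.

```python
# Sketch of the test system: a diffusion in a double-well potential, simulated with
# Euler-Maruyama, plus sparse noisy observations of the latent path. The specific potential
# U(x) = (x^2 - 1)^2, the noise level and the observation setup are assumptions, not
# necessarily those of the paper.
import numpy as np

def drift(x):
    """dX = -U'(X) dt + sigma dW with U(x) = (x^2 - 1)^2, so -U'(x) = -4x(x^2 - 1)."""
    return -4.0 * x * (x ** 2 - 1.0)

SIGMA, DT, T_END = 0.7, 1e-3, 10.0
rng = np.random.default_rng(42)

n_steps = int(T_END / DT)
x = np.empty(n_steps + 1)
x[0] = -1.0                                        # start in the left well
for k in range(n_steps):
    x[k + 1] = x[k] + drift(x[k]) * DT + SIGMA * np.sqrt(DT) * rng.standard_normal()

# Sparse, noisy observations of the latent path (what a smoother would condition on)
obs_every, obs_noise = 500, 0.2
t_obs = np.arange(0, n_steps + 1, obs_every) * DT
y_obs = x[::obs_every] + obs_noise * rng.standard_normal(t_obs.size)

print(f"simulated {n_steps} steps; "
      f"{np.sum(np.diff(np.sign(x)) != 0)} sign changes; {t_obs.size} observations")
```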

Relevance: 100.00%

Abstract:

A number of professional sectors have recently moved away from their longstanding career model of up-or-out promotion and embraced innovative alternatives. Professional labor is a critical resource in professional service firms. Therefore, changes to these internal labor markets are likely to trigger other innovations, for example in knowledge management, incentive schemes and team composition. In this chapter we look at how new career models affect the core organizing model of professional firms and, in turn, their capacity for and processes of innovation. We consider how professional firms link the development of human capital and the division of professional labor to distinctive demands for innovation and how novel career systems help them respond to these demands.

Relevance: 100.00%

Abstract:

We have studied a series of samples of bovine serum albumin (BSA) solutions with protein concentration, c, ranging from 2 to 500 mg/mL and ionic strength, I, from 0 to 2 M by small-angle X-ray scattering (SAXS). The scattering intensity distribution was compared to simulations using an oblate ellipsoid form factor with radii of 17 × 42 × 42 Å, combined with either a screened Coulomb, repulsive structure factor, S_SC(q), or an attractive square-well structure factor, S_SW(q). At pH = 7, BSA is negatively charged. At low ionic strength, I < 0.3 M, the total interaction exhibits a decrease of the repulsive interaction when compared to the salt-free solution, as the net surface charge is screened, and the data can be fitted by assuming an ellipsoid form factor and screened Coulomb interaction. At moderate ionic strength (0.3-0.5 M), the interaction is rather weak, and a hard-sphere structure factor has been used to simulate the data with a higher volume fraction. Upon further increase of the ionic strength (I ≥ 1.0 M), the overall interaction potential was dominated by an additional attractive potential, and the data could be successfully fitted by an ellipsoid form factor and a square-well potential model. The fit parameters, well depth and well width, indicate that the attractive potential caused by a high salt concentration is weak and long-ranged. Although the long-range, attractive potential dominated the protein interaction, no gelation or precipitation was observed in any of the samples. This is explained by the increase of a short-range, repulsive interaction between protein molecules by forming a hydration layer with increasing salt concentration. The competition between long-range, attractive and short-range, repulsive interactions accounted for the stability of concentrated BSA solution at high ionic strength.
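A short sketch of the oblate-ellipsoid form factor used in the fits, computed as an orientational average with the 17 × 42 × 42 Å semi-axes quoted above; absolute intensity scaling and the structure factors are omitted.

```python
# Sketch of the oblate-ellipsoid form factor used to fit the BSA SAXS data: P(q) is the
# orientational average of the sphere-like amplitude evaluated at an angle-dependent
# effective radius. Semi-axes 17 x 42 x 42 A as quoted above; absolute scaling (contrast,
# volume) is omitted and no structure factor S(q) is included.
import numpy as np

R_POLAR, R_EQUAT = 17.0, 42.0        # Angstrom, from the fit quoted in the abstract

def amplitude(u):
    """Normalised sphere-type scattering amplitude 3(sin u - u cos u)/u^3."""
    u = np.asarray(u, float)
    return np.where(u > 1e-8, 3.0 * (np.sin(u) - u * np.cos(u)) / u ** 3, 1.0)

def ellipsoid_form_factor(q, n_alpha=400):
    """Orientationally averaged P(q) for an ellipsoid of revolution, normalised to P(0) = 1."""
    alpha = np.linspace(1e-6, np.pi / 2, n_alpha)          # angle between q and polar axis
    r_eff = np.sqrt(R_EQUAT ** 2 * np.sin(alpha) ** 2 + R_POLAR ** 2 * np.cos(alpha) ** 2)
    integrand = amplitude(q * r_eff) ** 2 * np.sin(alpha)
    return np.trapz(integrand, alpha)

for q in np.linspace(0.005, 0.3, 5):                       # q in 1/Angstrom
    print(f"q = {q:.3f} A^-1   P(q) = {ellipsoid_form_factor(q):.4e}")
```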

Relevance: 100.00%

Abstract:

Background: Glucosamine increases flux through the hexosamine pathway, causing insulin resistance and disturbances similar to diabetic glucose toxicity. Aim: This study examines the effect of glucosamine on glucose uptake by cultured L6 muscle cells as a model of insulin resistance. Methods: Glucose uptake by L6 myotubes was measured using the non-metabolized glucose analogue 2-deoxy-D-glucose after incubation with glucosamine for 4 and 24 h, with and without insulin and several other agents (metformin, peroxovanadium and D-pinitol) that improve glucose uptake in diabetic states. Results: After 4 h, high concentrations of glucosamine (5 × 10⁻³ and 10⁻² M) reduced basal and insulin-stimulated glucose uptake by up to 50%. After 24 h, the effect of insulin was completely abolished by 10⁻² M glucosamine and reduced over 50% by 5 × 10⁻³ M glucosamine. Lower concentrations of glucosamine did not significantly alter glucose uptake. The effect of glucosamine could not be attributed to cytotoxicity assessed by the Trypan Blue test. Metformin, peroxovanadium and D-pinitol, each of which increased glucose uptake by L6 cells, did not prevent the decrease in glucose uptake with glucosamine. Conclusion: Glucosamine decreased insulin-stimulated glucose uptake by L6 muscle cells, providing a potential model of insulin resistance with similarities to glucose toxicity. Insulin resistance induced by glucosamine was not reversed by three agents (metformin, peroxovanadium and D-pinitol) known to enhance or partially mimic the effects of insulin. © 2004 Blackwell Publishing Ltd.

Relevance: 100.00%

Abstract:

BACKGROUND: Eighty per cent of Malawi's 8 million children live in rural areas, and there is an extensive tiered health system infrastructure from village health clinics to district hospitals which refers patients to one of the four central hospitals. The clinics and district hospitals are staffed by nurses, non-physician clinicians and recently qualified doctors. There are 16 paediatric specialists working in two of the four central hospitals which serve the urban population as well as accepting referrals from district hospitals. In order to provide expert paediatric care as close to home as possible, we describe our plan to task share within a managed clinical network and our hypothesis that this will improve paediatric care and child health.

PRESENTATION OF THE HYPOTHESIS: Managed clinical networks have been found to improve equity of care in rural districts and to ensure that the correct care is provided as close to home as possible. A network for paediatric care in Malawi with mentoring of non-physician clinicians based in a district hospital by paediatricians based at the central hospitals will establish and sustain clinical referral pathways in both directions. Ultimately, the plan envisages four managed paediatric clinical networks, each radiating from one of Malawi's four central hospitals and covering the entire country. This model of task sharing within four hub-and-spoke networks may facilitate wider dissemination of scarce expertise and improve child healthcare in Malawi close to the child's home.

TESTING THE HYPOTHESIS: Funding has been secured to train sufficient personnel to staff all central and district hospitals in Malawi with teams of paediatric specialists in the central hospitals and specialist non-physician clinicians in each government district hospital. The hypothesis will be tested using a natural experiment model. Data routinely collected by the Ministry of Health will be corroborated at the district. This will include case fatality rates for common childhood illness, perinatal mortality and process indicators. Data from different districts will be compared at baseline and annually until 2020 as the specialists of both cadres take up posts.

IMPLICATIONS OF THE HYPOTHESIS: If a managed clinical network improves child healthcare in Malawi, it may be a potential model for other countries in sub-Saharan Africa that have similar cadres in their healthcare systems and face similar challenges in terms of scarcity of specialists.