979 results for first order transition system


Abstract:

A simulation model adopting a health system perspective showed population-based screening with DXA, followed by alendronate treatment of persons with osteoporosis, or with anamnestic fracture and osteopenia, to be cost-effective in Swiss postmenopausal women from age 70, but not in men. INTRODUCTION: We assessed the cost-effectiveness of a population-based screen-and-treat strategy for osteoporosis (DXA followed by alendronate treatment if osteoporotic, or osteopenic in the presence of fracture), compared to no intervention, from the perspective of the Swiss health care system. METHODS: A published Markov model assessed by first-order Monte Carlo simulation was refined to reflect the diagnostic process and treatment effects. Women and men entered the model at age 50. Main screening ages were 65, 75, and 85 years. Age at bone densitometry was flexible for persons fracturing before the main screening age. Realistic assumptions were made with respect to persistence with intended 5 years of alendronate treatment. The main outcome was cost per quality-adjusted life year (QALY) gained. RESULTS: In women, costs per QALY were Swiss francs (CHF) 71,000, CHF 35,000, and CHF 28,000 for the main screening ages of 65, 75, and 85 years. The threshold of CHF 50,000 per QALY was reached between main screening ages 65 and 75 years. Population-based screening was not cost-effective in men. CONCLUSION: Population-based DXA screening, followed by alendronate treatment in the presence of osteoporosis, or of fracture and osteopenia, is a cost-effective option in Swiss postmenopausal women after age 70.
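
A minimal sketch of how a first-order Monte Carlo (individual-level) simulation walks such a Markov model may help; the states, transition probabilities, costs and QALY weights below are placeholder assumptions, not the calibrated Swiss inputs:

```python
import random

# Minimal first-order Monte Carlo walk over a 3-state Markov model
# (well -> fracture -> dead). All probabilities, costs, and utilities
# are placeholders, not the calibrated Swiss model inputs.
P = {
    "well":     {"well": 0.97, "fracture": 0.02, "dead": 0.01},
    "fracture": {"well": 0.00, "fracture": 0.96, "dead": 0.04},
    "dead":     {"dead": 1.0},
}
UTILITY = {"well": 0.85, "fracture": 0.60, "dead": 0.0}   # QALY weights
COST = {"well": 100.0, "fracture": 2000.0, "dead": 0.0}   # CHF per cycle

def simulate_person(cycles=35, rng=random.random):
    state, qalys, cost = "well", 0.0, 0.0
    for _ in range(cycles):                    # one cycle = one year
        qalys += UTILITY[state]
        cost += COST[state]
        u, acc = rng(), 0.0
        for nxt, p in P[state].items():        # sample the next state
            acc += p
            if u < acc:
                state = nxt
                break
    return qalys, cost

results = [simulate_person() for _ in range(10_000)]
mean_qaly = sum(q for q, _ in results) / len(results)
mean_cost = sum(c for _, c in results) / len(results)
print(f"mean QALYs {mean_qaly:.2f}, mean cost CHF {mean_cost:.0f}")
```

Simulating a screened arm and an unscreened arm with their respective inputs, then dividing the cost difference by the QALY difference, yields cost-per-QALY figures of the kind quoted above.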

Abstract:

Quantitative characterisation of carotid atherosclerosis and classification of plaques into symptomatic or asymptomatic is crucial in planning optimal treatment of atheromatous plaque. The computer-aided diagnosis (CAD) system described in this paper can analyse ultrasound (US) images of the carotid artery and classify them as symptomatic or asymptomatic based on their echogenicity characteristics. The CAD system consists of three modules: a) the feature extraction module, where first-order statistical (FOS) features and Laws' texture energies are estimated; b) the dimensionality reduction module, where the number of features is reduced using analysis of variance (ANOVA); and c) the classifier module, consisting of a neural network (NN) trained by a novel hybrid method based on genetic algorithms (GAs) along with the back-propagation algorithm. The hybrid method is able to select the most robust features, automatically adjust the NN architecture and optimise the classification performance. Performance is measured by accuracy, sensitivity, specificity and the area under the receiver operating characteristic (ROC) curve. The CAD design and development is based on images of 54 symptomatic and 54 asymptomatic plaques. This study demonstrates the ability of a CAD system based on US image analysis and a hybrid-trained NN to identify atheromatous plaques at high risk of stroke.
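
For readers unfamiliar with FOS features, the sketch below computes a generic first-order statistical feature vector from a grayscale patch; the paper's exact feature set and its Laws' texture energies are not reproduced here, so the feature names and bin count are illustrative assumptions:

```python
import numpy as np

def first_order_features(patch, bins=32):
    """Generic first-order statistical (FOS) features of a grayscale patch.
    Illustrative only; not the paper's exact feature set."""
    x = patch.astype(float).ravel()
    hist, _ = np.histogram(x, bins=bins)
    p = hist / hist.sum()          # normalized gray-level histogram
    p = p[p > 0]
    mu, sigma = x.mean(), x.std()
    return {
        "mean": mu,
        "variance": sigma ** 2,
        "skewness": ((x - mu) ** 3).mean() / sigma ** 3,
        "kurtosis": ((x - mu) ** 4).mean() / sigma ** 4,
        "entropy": -(p * np.log2(p)).sum(),
    }

patch = np.random.default_rng(0).integers(0, 256, size=(64, 64))
print(first_order_features(patch))
```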

Abstract:

The estimation of the average travel distance in a low-level picker-to-part order picking system can in most cases be done by analytical methods. Often a uniform distribution of the access frequency over all bin locations in the storage system is assumed. This only applies if bin locations are assigned randomly. If the access frequency of the articles is taken into account in the bin location assignment, in order to reduce the picker's average total travel distance, the access frequency over the bin locations of one aisle can be approximated by an exponential density function or a similar density function. All known calculation methods assume that the average number of order lines per order is greater than the number of aisles in the storage system. For small orders this assumption is often invalid. This paper presents a new approach for calculating the average total travel distance that accounts for the case where the average number of order lines per order is lower than the total number of aisles in the storage system, and where the access frequency over the bin locations of an aisle can be approximated by an arbitrary density function.
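
A Monte Carlo cross-check of such an analytical result is straightforward to set up; the sketch below assumes an illustrative layout (20 aisles, return routing within each aisle, a truncated-exponential access density along the aisle) rather than the paper's model:

```python
import random, math

# Monte Carlo estimate of average travel distance when orders have fewer
# lines than the warehouse has aisles. Layout, routing rule and the
# exponential access density are illustrative assumptions.
AISLES, AISLE_LEN, AISLE_PITCH = 20, 30.0, 4.0
LINES_PER_ORDER = 5   # fewer than AISLES, the case treated in the paper

def sample_bin(lam=3.0):
    # inverse CDF of an exponential density truncated to one aisle length
    u = random.random()
    depth = -math.log(1 - u * (1 - math.exp(-lam))) / lam
    return random.randrange(AISLES), depth * AISLE_LEN

def order_distance():
    picks = [sample_bin() for _ in range(LINES_PER_ORDER)]
    by_aisle = {}
    for aisle, pos in picks:                       # deepest pick per aisle
        by_aisle[aisle] = max(by_aisle.get(aisle, 0.0), pos)
    cross = 2 * max(by_aisle) * AISLE_PITCH        # to farthest aisle and back
    within = sum(2 * d for d in by_aisle.values()) # return routing per aisle
    return cross + within

n = 100_000
print(sum(order_distance() for _ in range(n)) / n)
```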

Abstract:

Time series of geocenter coordinates were determined with data of two global navigation satellite systems (GNSSs), namely the U.S. GPS (Global Positioning System) and the Russian GLONASS (Global’naya Nawigatsionnaya Sputnikowaya Sistema). The data were recorded in the years 2008–2011 by a global network of 92 permanently observing GPS/GLONASS receivers. Two types of daily solutions were generated independently for each GNSS, one including the estimation of geocenter coordinates and one without these parameters. Fair agreement between GPS and GLONASS was found in the geocenter x- and y-coordinate series. Our tests, however, clearly reveal artifacts in the z-component determined with the GLONASS data. Large periodic excursions in the GLONASS geocenter z-coordinates of about 40 cm peak-to-peak are related to the maximum elevation angles of the Sun above/below the orbital planes of the satellite system and thus have a period of about 4 months (a third of a year). A detailed analysis revealed that the artifacts are almost uniquely governed by the differences of the estimates of direct solar radiation pressure (SRP) in the two solution series (with and without geocenter estimation). A simple formula is derived describing the relation between the geocenter z-coordinate and the corresponding parameter of the SRP. The effect can be explained by first-order perturbation theory of celestial mechanics. The theory also predicts a heavy impact on the GNSS-derived geocenter if once-per-revolution SRP parameters are estimated in the direction of the satellite’s solar-panel axis. Specific experiments using GPS observations revealed that this is indeed the case. Although the main focus of this article is on GNSS, the theory developed is applicable to all satellite observing techniques. We applied the theory to satellite laser ranging (SLR) solutions using LAGEOS. It turns out that the correlation between geocenter and SRP parameters is not a critical issue for the SLR solutions. The reasons are threefold: the direct SRP is about a factor of 30–40 smaller for typical geodetic SLR satellites than for GNSS satellites, so that in most cases one need not solve for SRP parameters (eliminating the correlation between these parameters and the geocenter coordinates); the orbital arc length of 7 days (typically used in SLR analysis) contains more than 50 revolutions of the LAGEOS satellites, as compared to about two revolutions of GNSS satellites for the daily arcs used in GNSS analysis; and the orbit geometry is not as critical for LAGEOS as for GNSS satellites, because the elevation angle of the Sun w.r.t. the orbital plane usually changes significantly over 7 days.
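
The roughly four-month signature can be reproduced qualitatively from orbit geometry alone. The sketch below computes the Sun's elevation (beta angle) above three GLONASS-like orbital planes spaced 120° in right ascension of the ascending node; node drift and orbit eccentricity are neglected, so this is a geometric illustration, not the paper's derivation:

```python
import math

# Sun elevation (beta angle) above GLONASS-like orbital planes over one
# year, for three planes spaced 120 deg in RAAN. Node drift and orbit
# eccentricity are neglected; values are illustrative assumptions.
EPS = math.radians(23.44)          # obliquity of the ecliptic
INC = math.radians(64.8)           # GLONASS inclination

def beta(lam_sun, raan):
    s = (math.cos(lam_sun),
         math.cos(EPS) * math.sin(lam_sun),
         math.sin(EPS) * math.sin(lam_sun))      # Sun unit vector (ECI)
    n = (math.sin(raan) * math.sin(INC),
         -math.cos(raan) * math.sin(INC),
         math.cos(INC))                          # orbit-plane normal (ECI)
    return math.degrees(math.asin(sum(a * b for a, b in zip(s, n))))

for day in range(0, 366, 30):
    lam = 2 * math.pi * day / 365.25             # mean solar longitude
    betas = [beta(lam, math.radians(r)) for r in (0.0, 120.0, 240.0)]
    print(day, [f"{b:+.1f}" for b in betas])
```

Across the three planes, the maximum |beta| recurs roughly every third of a year, matching the reported periodicity.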

Abstract:

We report a case of massive suicidal overdose of meprobamate leading to cardiovascular collapse, respiratory failure, and severe central nervous system depression. We observed first-order elimination kinetics despite significant overdose, and demonstrated effectiveness of continuous venovenous hemodiafiltration (CVVHDF) for extracorporeal removal of meprobamate in this patient. Total body clearance was calculated to be 87 mL/minute, with 64 mL/minute (74%) due to CVVHDF. CVVHDF was stopped after 36 hours, and the patient made an uneventful recovery.
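
The clearance arithmetic behind these figures is first-order pharmacokinetics, C(t) = C0·exp(−(CL/Vd)·t); the clearances below are those of the case report, while the volume of distribution is a hypothetical placeholder used only to illustrate the half-life computation:

```python
import math

# First-order elimination: C(t) = C0 * exp(-(CL/Vd) * t).
# Clearances are from the case report; the volume of distribution Vd is
# an assumed placeholder, used only to illustrate the arithmetic.
CL_total = 87 / 1000 * 60      # 87 mL/min -> 5.22 L/h
CL_cvvhdf = 64 / 1000 * 60     # 64 mL/min -> 3.84 L/h
Vd = 50.0                      # L, hypothetical

print(f"CVVHDF share of clearance: {CL_cvvhdf / CL_total:.0%}")   # ~74%
t_half = math.log(2) * Vd / CL_total
print(f"elimination half-life at Vd={Vd} L: {t_half:.1f} h")
```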

Abstract:

A dedicated mission to investigate exoplanetary atmospheres represents a major milestone in our quest to understand our place in the universe, by placing our Solar System in context and by addressing the suitability of planets for the presence of life. EChO—the Exoplanet Characterisation Observatory—is a mission concept specifically geared for this purpose. EChO will provide simultaneous, multi-wavelength spectroscopic observations on a stable platform that will allow very long exposures. The use of passive cooling, few moving parts and well-established technology gives a low-risk and potentially long-lived mission. EChO will build on observations by Hubble, Spitzer and ground-based telescopes, which discovered the first molecules and atoms in exoplanetary atmospheres. However, EChO’s configuration and specifications are designed to study a number of systems in a consistent manner that will eliminate the ambiguities affecting prior observations. EChO will simultaneously observe a broad enough spectral region—from the visible to the mid-infrared—to constrain from one single spectrum the temperature structure of the atmosphere, the abundances of the major carbon- and oxygen-bearing species, the expected photochemically produced species and magnetospheric signatures. The spectral range and resolution are tailored to separate bands belonging to up to 30 molecules and to retrieve the composition and temperature structure of planetary atmospheres. The target list for EChO includes planets ranging from Jupiter-sized, with equilibrium temperatures Teq up to 2,000 K, to those of a few Earth masses with Teq ∼ 300 K. The list will include planets with no Solar System analog, such as the recently discovered planet GJ1214b, whose density lies between that of terrestrial and gaseous planets, or the rocky-iron planet 55 Cnc e, with a day-side temperature close to 3,000 K. As the number of detected exoplanets grows rapidly each year, and the mass and radius of those detected steadily decrease, the target list will be constantly adjusted to include the most interesting systems. We have baselined a dispersive spectrograph design continuously covering the 0.4–16 μm spectral range in 6 channels (one in the visible, five in the infrared), which allows the spectral resolution to be adapted from several tens to several hundreds, depending on the target brightness. The instrument will be mounted behind a 1.5 m class telescope, passively cooled to 50 K, with the instrument structure and optics passively cooled to ∼45 K. EChO will be placed in a grand halo orbit around L2. This orbit, in combination with an optimised thermal shield design, provides a highly stable thermal environment and a high degree of visibility of the sky, allowing several tens of targets to be observed repeatedly over the year. Both the baseline and alternative designs have been evaluated, and no critical items with Technology Readiness Level (TRL) less than 4–5 have been identified. We have also undertaken a first-order cost and development plan analysis and find that EChO is easily compatible with the ESA M-class mission framework.
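
For orientation, equilibrium temperatures of the kind quoted above follow from the standard textbook relation (not stated in the abstract) for a planet at semi-major axis a around a star of radius R_* and effective temperature T_*, with Bond albedo A and full heat redistribution:

```latex
% Standard zero-order equilibrium-temperature relation (textbook,
% not from the paper):
T_{\mathrm{eq}} = T_{*}\sqrt{\frac{R_{*}}{2a}}\,\bigl(1 - A\bigr)^{1/4}
```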

Abstract:

The crystalline phases of YbBr2 were investigated by powder neutron diffraction between 1.5 K and the melting point at 955 K (682 °C). The low temperature SrI2 phase is observed up to 550 K, the α-PbO2 phase between 260 K and 750 K, the CaCl2 phase between 690 K and 790 K, and the rutile phase from 790 K to the melting point. All observed phase transitions are first order, except for the second order CaCl2 to rutile transition. The transition temperatures and enthalpies were determined by differential scanning calorimetry.

Abstract:

Considerable research has been conducted into the kinetics and selectivity of the oxygen delignification process to overcome limitations in its use. However, most studies were performed in a batch reactor, in which the hydroxide and dissolved-oxygen concentrations change during the reaction time, in an effort to simulate tower performance in pulp mills. This makes it difficult to determine the reaction order of the different reactants in the rate expressions. Also, the lignin content and cellulose degradation of the pulp are only established at the end of the experiment, when the sample is removed from the batch reactor. To overcome these deficiencies, we have adapted a differential reactor system used frequently for fluid-solid rate studies (the so-called Berty reactor) for measurement of oxygen delignification kinetics. In this reactor, the dissolved oxygen concentration and the alkali concentration in the feed are kept constant, and the rate of lignin removal is determined from the dissolved lignin content in the outflow stream, measured by UV absorption. The mass of lignin removed is verified by analyzing the pulp at several time intervals. Experiments were performed at different temperatures, oxygen pressures and caustic concentrations. The delignification rate was found to be first order in HexA-free residual lignin content. The reaction orders in caustic concentration and oxygen pressure were determined to be 0.42 and 0.44, respectively. The activation energy was found to be 53 kJ/mol. The carbohydrate degradation during oxygen delignification can be described by two contributions: one due to radicals produced by phenolic delignification, and a much smaller contribution due to alkaline hydrolysis. From the first-order behavior of the reaction and the pKa of the active lignin site, a new oxygen delignification mechanism is proposed. The carbon atom at position 3 of the aromatic ring, with its attached methoxyl group, forms the lignin active site for oxygen adsorption and subsequent electrophilic reaction to form a hydroperoxide with a pKa value consistent with the present delignification kinetics. The uniform presence of aromatic methoxyl groups in residual lignin further supports the first-order lignin kinetics.
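
Putting the reported orders and activation energy together gives the rate-law sketch below; the pre-exponential factor is a placeholder (the abstract does not report one), so absolute rates are illustrative only:

```python
import math

# Rate law assembled from the reported orders and activation energy:
#   r = k(T) * [lignin] * [OH-]^0.42 * pO2^0.44,  k(T) = A * exp(-Ea/RT)
# A is a hypothetical pre-exponential factor; absolute rates are
# illustrative, only the temperature sensitivity is meaningful here.
R = 8.314          # J/(mol K)
Ea = 53_000.0      # J/mol, from the study
A = 1.0e6          # hypothetical

def rate(lignin, oh, po2, T):
    k = A * math.exp(-Ea / (R * T))
    return k * lignin * oh ** 0.42 * po2 ** 0.44

# Temperature-sensitivity check: 100 C vs 90 C
r1 = rate(1.0, 0.1, 5.0, 363.15)
r2 = rate(1.0, 0.1, 5.0, 373.15)
print(f"rate ratio 100C/90C: {r2 / r1:.2f}")   # ~1.6 for Ea = 53 kJ/mol
```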

Abstract:

The normal boiling point is a fundamental thermophysical property, important in describing the transition between the vapor and liquid phases. A reliable method for predicting it is of great importance, especially for compounds with no experimental data available. In this work, an improved second-order group contribution method for determining the normal boiling point of organic compounds was developed using experimental data for 632 organic compounds; it is based on the Joback first-order functional groups, with some modifications and additional functional groups. The method can distinguish most structural isomers and stereoisomers, including the structural, cis and trans isomers of organic compounds. First- and second-order contributions are given for hydrocarbons and hydrocarbon derivatives containing carbon, hydrogen, oxygen, nitrogen, sulfur, fluorine, chlorine and bromine atoms. The fminsearch routine from MATLAB is used in this study to select an optimal collection of functional groups (65 functional groups) and subsequently to develop the model; this is a direct search method that uses the simplex search method of Lagarias et al. The results of the new method are compared to several currently used methods and are shown to be more accurate and reliable. The average absolute deviation of the normal boiling point predictions for the 632 organic compounds is 4.4350 K, and the average absolute relative deviation is 1.1047%, which is of adequate accuracy for many practical applications.
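
A toy version of such a fit is sketched below: the group counts and boiling points are invented, and MATLAB's fminsearch is mirrored by SciPy's Nelder-Mead option, which implements the same Lagarias et al. simplex method:

```python
import numpy as np
from scipy.optimize import minimize

# Joback-style group-contribution fit: Tb ~ c0 + sum_i n_i * c_i.
# Group counts and boiling points below are a toy data set; the paper
# fits 65 groups to 632 compounds with MATLAB's fminsearch, mirrored
# here by method="Nelder-Mead".
N = np.array([[2, 1, 0],          # group-occurrence counts per compound
              [3, 0, 1],
              [1, 2, 1],
              [4, 1, 1]], dtype=float)
Tb_exp = np.array([341.9, 329.2, 371.5, 398.8])   # K, illustrative

def aad(params):
    c0, c = params[0], params[1:]
    return np.abs(c0 + N @ c - Tb_exp).mean()     # average absolute deviation

res = minimize(aad, x0=np.full(4, 50.0), method="Nelder-Mead")
print(res.x, aad(res.x))
```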

Abstract:

Alpine glacier samples were collected in four contrasting regions to measure supraglacial dust and debris geochemical composition. A total of 70 surface glacier ice, snow and debris samples were collected in 2009 and 2010 in Svalbard, Norway, Nepal and New Zealand. Trace elemental abundances in snow and ice samples were measured via inductively coupled plasma mass spectrometry (ICP-MS). Supraglacial debris mineral, bulk oxide and trace element compositions were determined via X-ray diffraction (XRD) and X-ray fluorescence spectroscopy (XRF). A total of 45 element and 10 oxide compound abundances are reported. The uniform data collection procedure, analytical measurement methods and geochemical comparison techniques are used to evaluate supraglacial dust and debris composition variability in the contrasting glacier study regions. Elemental abundances revealed sea salt aerosol and metal enrichment in Svalbard, low levels of crustal dust and marine influences in southern Norway, high crustal dust and anthropogenic enrichment in the Khumbu Himalayas, and sulfur and metals attributed to quiescent degassing and volcanic activity in northern New Zealand. Rare earth element and Al/Ti elemental ratios demonstrated distinct provenance of particulates in each study region. Ca/S elemental ratio data showed seasonal denudation in Svalbard and Norway. Ablation-season atmospheric particulate transport trajectories were mapped in each of the study regions and suggest provenance pathways. The in situ data presented provide a first-order measure of glacier surface geochemical variability across four diverse alpine glacier regions. These surface glacier geochemical data are relevant to understanding glaciologic ablation rates, as well as to satellite atmospheric and land-surface mapping techniques currently in development.
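
One standard way to quantify the "enrichment" referred to above is the crustal enrichment factor, EF = (X/Al)_sample / (X/Al)_crust; the sketch below uses hypothetical abundances, not values from this data set:

```python
# Crustal enrichment factor, the standard screen for non-crustal
# (anthropogenic, sea-salt, volcanic) input: values EF >> 1 flag
# enrichment. Abundances below are hypothetical placeholders.
CRUST = {"Al": 8.04e4, "Pb": 17.0, "Zn": 67.0}   # ppm, upper-crust reference

def enrichment_factor(sample, element, ref="Al"):
    return (sample[element] / sample[ref]) / (CRUST[element] / CRUST[ref])

snow = {"Al": 120.0, "Pb": 0.8, "Zn": 2.1}       # ppb, hypothetical sample
for el in ("Pb", "Zn"):
    print(el, round(enrichment_factor(snow, el), 1))
```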

Abstract:

We integrate upper Eocene-lower Oligocene lithostratigraphic, magnetostratigraphic, biostratigraphic, stable isotopic, benthic foraminiferal faunal, downhole log, and sequence stratigraphic studies from the Alabama St. Stephens Quarry (SSQ) core hole, linking global ice volume, sea level, and temperature changes through the greenhouse-to-icehouse transition of the Cenozoic. We show that the SSQ succession is dissected by hiatuses associated with sequence boundaries. Three previously reported sequence boundaries are well dated here: North Twistwood Creek-Cocoa (35.4-35.9 Ma), Mint Spring-Red Bluff (33.0 Ma), and Bucatunna-Chickasawhay (the mid-Oligocene fall, ca. 30.2 Ma). In addition, we document three previously undetected or controversial sequences: mid-Pachuta (33.9-35.0 Ma), Shubuta-Bumpnose (lowermost Oligocene, ca. 33.6 Ma), and Byram-Glendon (30.5-31.7 Ma). An ~0.9 per mil δ18O increase in the SSQ core hole is correlated to the global earliest Oligocene (Oi1) event using magnetobiostratigraphy; this increase is associated with the Shubuta-Bumpnose contact, an erosional surface, and a biofacies shift in the core hole, providing a first-order correlation between ice growth and a sequence boundary that indicates a sea-level fall. The δ18O increase is associated with a eustatic fall of ~55 m, indicating that ~0.4 per mil of the increase at Oi1 time was due to temperature. Maximum δ18O values of Oi1 occur above the sequence boundary, requiring that deposition resumed during the lowest eustatic lowstand. A precursor δ18O increase of 0.5 per mil (33.8 Ma, mid-chron C13r) at SSQ correlates with a 0.5 per mil increase in the deep Pacific Ocean; the lack of evidence for a sea-level change with the precursor suggests that this was primarily a cooling event, not an ice-volume event. Eocene-Oligocene shelf water temperatures of ~17-19 °C at SSQ are similar to modern values for 100 m water depth in this region. Our study establishes the relationships among ice volume, δ18O, and sequences: a latest Eocene cooling event was followed by an earliest Oligocene ice-volume and cooling event that lowered sea level and formed a sequence boundary during the early stages of eustatic fall.
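
The partitioning of the δ18O signal can be made explicit with commonly quoted calibrations (~0.009‰ per meter of eustatic fall, ~0.25‰ per °C of cooling); these conversion factors are assumptions of this sketch, not values stated in the abstract:

```latex
% Worked partitioning of the Oi1 benthic \delta^{18}O increase, under
% the assumed calibrations noted above:
\underbrace{0.9\,\text{‰}}_{\text{total }\delta^{18}\mathrm{O}}
\;\approx\;
\underbrace{55\,\mathrm{m}\times 0.009\,\text{‰}/\mathrm{m}}_{\text{ice volume}\ \approx\,0.5\,\text{‰}}
\;+\;
\underbrace{0.4\,\text{‰}}_{\text{temperature}\ \approx\,1.6\,^{\circ}\mathrm{C}\ \text{at}\ 0.25\,\text{‰}/^{\circ}\mathrm{C}}
```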

Abstract:

Planktic foraminifera are heterotrophic mesozooplankton of global marine abundance. The position of planktic foraminifers in the marine food web differs from that of other protozoans, ranging above the base of the heterotrophic consumers. Being secondary producers with an omnivorous diet, ranging from algae to small metazoans, planktic foraminifers are not limited to a single food source and are assumed to occur at a balanced abundance reflecting the overall marine biological productivity at a regional scale. We have calculated the assemblage carbon biomass from data on standing stocks between the sea surface and 2500 m water depth, based on 754 protein-biomass measurements of 21 planktic foraminifer species and morphotypes, produced with a newly developed method for analyzing the protein biomass of single planktic foraminifer specimens. Samples include symbiont-bearing and symbiont-barren species characteristic of surface and deep-water habitats. Conversion factors between individual protein biomass and assemblage biomass are calculated for test sizes between 72 and 845 µm (minimum diameter). The calculated assemblage biomass data presented here cover 1057 sites and water-depth intervals. Although the regional coverage of the database is limited to the North Atlantic, Arabian Sea, Red Sea, and Caribbean, our data include a wide range of oligotrophic to eutrophic waters covering six orders of magnitude of assemblage biomass. A first-order estimate of the global planktic foraminifer biomass from average standing stocks (>125 µm) ranges at 8.5-32.7 Tg C yr-1 (i.e. 0.008-0.033 Gt C yr-1), and might be more than three times as high if the entire fauna, including neanic and juvenile individuals, is counted, adding up to 25-100 Tg C yr-1. However, this is a first estimate of regional planktic-foraminifer assemblage biomass (PFAB) extrapolated to the global scale, and future estimates based on larger data sets might deviate considerably from the one presented here. This paper is supported by, and a contribution to, the Marine Ecosystem Data project (MAREDAT).
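
The conversion from standing stocks to assemblage biomass amounts to a depth integration of counts times individual biomass; the sketch below uses hypothetical counts, per-individual protein masses and a carbon:protein factor purely for illustration:

```python
# Assemblage biomass from standing stocks: integrate (individuals per m^3
# x individual protein biomass) over the sampled depth intervals to get
# a column-integrated value in mg C per m^2. All numbers are hypothetical.
CARBON_PER_PROTEIN = 0.54   # assumed C:protein conversion factor

intervals = [            # (top m, bottom m, ind per m^3, ug protein per ind)
    (0, 100, 12.0, 0.8),
    (100, 500, 3.0, 1.1),
    (500, 2500, 0.2, 1.5),
]

biomass = sum((b - t) * n * p for t, b, n, p in intervals)  # ug protein/m^2
print(f"{biomass * CARBON_PER_PROTEIN / 1000:.2f} mg C per m^2")
```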

Abstract:

The calculus of binary relations was introduced by De Morgan in 1860 and greatly developed by Peirce and Schröder, as well as many others in the twentieth century. Using different formulations of relational structures, Tarski, Givant, Freyd, and Scedrov have shown how relation algebras can provide a variable-free way of formalizing first-order logic, higher-order logic and set theory, among other formal systems. Building on those mathematical results, this thesis develops denotational and operational semantics for Constraint Logic Programming (CLP) using relation algebra. The idea of executable semantics plays a fundamental role in this work, both as a philosophical and a technical foundation: a semantics is called executable when program execution can be carried out using the regular theory and tools that define the semantic universe — here, equational reasoning. Throughout this work, pure algebraic reasoning is the basis of the denotational and operational results, eliminating the classical non-equational meta-theory associated with traditional semantics for logic programming; all reasoning, including execution, is performed algebraically, to the point that the denotational semantics of a CLP program is directly executable. It is shown that distributive relation algebras with a fixed-point operator capture the standard theory and meta-theory of CLP, including the trees used in proof search. Techniques such as program optimization, partial evaluation and abstract interpretation find a natural place in these algebraic models, and properties such as correctness of the implementation or program transformation are easy to check, as they are carried out using instances of the general equational theory.
In the first part of the work, constraint logic programs are translated to binary relations in a modified version of the distributive relation algebras used by Tarski; the standard set-theoretic interpretation of these relations coincides with the standard semantics of CLP. Queries against the translated program are executed by a rewriting system over relations. The first part concludes with proofs of adequacy and operational equivalence of the semantics, and with a unification algorithm defined by relation rewriting. The second part improves the relation-algebraic approach by using allegory theory, the categorical version of the algebra of relations developed by Freyd and Scedrov. Two new notions are defined, Regular Lawvere Category and _-allegory, in which a logic program can be interpreted; as in the untyped case, program translation coincides with program interpretation. The use of allegories lifts the semantics to typed relations, which capture, in a declarative way, the number of logical variables used by a predicate or program state. The fundamental advantage of the categorical approach is the definition of a categorical machine, derived directly from the semantics, that improves on the rewriting system of the first part: thanks to tabular relations, the machine models efficient execution without leaving a strictly formal framework. The machine is based on relation composition, with a pullback-calculation algorithm at its core; the algorithm is defined with the help of a notion of diagram rewriting for computing pullbacks in Regular Lawvere Categories. In this operational interpretation, the domains of the tabulations carry information about memory allocation and free variables, while shared state is faithfully represented by categorical projections, making the execution mechanism more efficient. The specification of the machine induces the formal derivation of an efficient instruction set. The categorical framework brings further important advantages, such as the possibility of incorporating into Prolog constructs typical of functional programming — algebraic data types, and strict and lazy functions — while preserving the fully declarative character of the semantics.
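
A minimal executable-semantics flavor can be conveyed with relations as sets of pairs, composition, and a least fixed point (the operator the distributive relation algebra adds); the toy "ancestor" program below is a Python illustration, not the thesis' rewriting system:

```python
# Relations as sets of pairs, with composition and a least fixed point --
# the operators the distributive relation algebra builds on. A toy
# "ancestor" logic program illustrates directly executable semantics.
def compose(r, s):
    """Relation composition r;s."""
    return {(a, c) for (a, b) in r for (b2, c) in s if b == b2}

def lfp(f, bottom=frozenset()):
    """Least fixed point of a monotone operator on finite relations."""
    x = bottom
    while True:
        nxt = frozenset(f(set(x)))
        if nxt == x:
            return x
        x = nxt

parent = {("ann", "bea"), ("bea", "cal"), ("cal", "dee")}
# ancestor = lfp(X -> parent  U  parent;X), the usual recursive definition
ancestor = lfp(lambda x: parent | compose(parent, x))
print(sorted(ancestor))
```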

Abstract:

Diabetes mellitus is a metabolic disorder caused by insufficient or absent insulin secretion, or by reduced sensitivity to this hormone. It is a chronic disease with a higher prevalence in industrialized countries, mainly due to obesity, sedentary lifestyles and endocrine dysfunctions related to the pancreas. Type 1 diabetes is an autoimmune disease in which the insulin-producing beta cells of the pancreas are destroyed, making the administration of exogenous insulin necessary. A Type 1 diabetic patient must follow a therapy of subcutaneous insulin administration adjusted to his or her metabolic needs and lifestyle; this therapy tries to mimic the insulin profile of a non-pathological pancreas. Current technology enables the development of the so-called artificial endocrine pancreas, which would provide accuracy, efficiency and safety to patients with respect to normalizing glycemic control and reducing the risk of hypoglycemia, and would free the patient from constant attention to the disease. The artificial pancreas consists of a continuous glucose sensor, an insulin infusion pump and a control algorithm that calculates the insulin to infuse, using glucose as the main input. This work presents a semi-closed-loop control method based on an expert rule-based fuzzy system. Fuzzy regulation builds on the ambiguity of human language: this uncertainty is used to form a set of rules that represent human reasoning, while at the same time the system controls a process, in this case the glucoregulatory system. The project focuses on the design of a fuzzy controller that, using variables such as glucose, insulin and diet, is able to restore the endocrine function of the pancreas by technological means.
The algorithm was validated mainly through simulation experiments on a population of synthetic patients, evaluating the results with first-order statistics and more specific measures such as the Kovatchev risk index, and then comparing these results with those obtained by earlier control methods. The results show that the fuzzy controller (FBPC) improves glycemic control with respect to a predictive expert system based on Boolean rules (pBRES). The FBPC always reduces maximum glucose and increases minimum glucose relative to the pBRES, but it is under mis-adjusted therapies that the FBPC is especially robust: it lowers maximum glucose by 8.64 mg/dl, uses 3.92 IU less insulin, raises minimum glucose by 3.32 mg/dl and keeps 15.33 more samples within the 80-110 mg/dl glucose range. We therefore conclude that the FBPC achieves better glycemic control than the pBRES controller, making it especially effective, robust and safe under mis-adjusted basal therapy, and with great capacity for future improvement.
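
A minimal Mamdani-style sketch of a fuzzy dosing rule base may clarify the approach; the membership breakpoints, rules and output gains below are invented for illustration and are not the FBPC design:

```python
# Minimal Mamdani-style fuzzy rule base for insulin dosing. All
# membership breakpoints, rules and output doses are invented for
# illustration; they are not the FBPC controller's actual design.
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def insulin_dose(glucose_mgdl):
    low = tri(glucose_mgdl, 40, 70, 100)
    normal = tri(glucose_mgdl, 80, 110, 150)
    high = tri(glucose_mgdl, 120, 200, 320)
    # rule outputs (IU): low -> none, normal -> small basal, high -> correction
    doses, weights = [0.0, 1.0, 4.0], [low, normal, high]
    return sum(d * w for d, w in zip(doses, weights)) / (sum(weights) or 1.0)

for g in (60, 110, 180, 250):
    print(g, "mg/dl ->", round(insulin_dose(g), 2), "IU")
```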

Abstract:

An electrodynamic tether system for power generation at Jupiter is presented that allows extracting energy from Jupiter's corotating plasmasphere while leaving the system orbital energy unaltered to first order. The spacecraft is placed in a polar orbit with the tether spinning in the orbital plane so that the resulting Lorentz force, neglecting Jupiter's magnetic dipole tilt, is orthogonal to the instantaneous velocity vector and orbital radius, hence affecting orbital inclination rather than orbital energy. In addition, the electrodynamic tether subsystem, which consists of two radial tether arms deployed from the main central spacecraft, is designed in such a way as to extract maximum power while keeping the resulting Lorentz torque constantly null. The power-generation performance of the system and the effect on the orbit inclination is evaluated analytically for different orbital conditions and verified numerically. Finally, a thruster-based inclination-compensation maneuver at apoapsis is added, resulting in an efficient scheme to extract energy from the plasmasphere of the planet with minimum propellant consumption and no inclination change. A tradeoff analysis is conducted showing that, depending on tether size and orbit characteristics, the system performance can be considerably higher than conventional power-generation methods.
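
The energy argument can be stated compactly: with zero magnetic dipole tilt, the Lorentz force is kept orthogonal to both the velocity and the radius vector, so it does no work while still driving the out-of-plane term of the Gauss variational equations (a sketch under the abstract's assumptions):

```latex
% No work is done by a force orthogonal to the velocity, yet the
% out-of-plane component f_h (per unit mass) still drives inclination:
\frac{dE}{dt} = \mathbf{F}\cdot\mathbf{v} = 0,
\qquad
\frac{di}{dt} = \frac{r\cos(\omega+\nu)}{h}\,f_{h} \neq 0,
% where r is the orbital radius, h the angular momentum, and
% \omega+\nu the argument of latitude (standard Gauss equations).
```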