893 results for Linear optimal control
Abstract:
The pentavalent antimonials, mainly meglumine antimoniate, are recommended as first-choice medicines for leishmaniasis therapy. In this work we describe the development of formulations of the injectable medication meglumine antimoniate, as well as the analytical methodology used for the selective determination of Sb(III) and Sb(Total) by hydride generation inductively coupled plasma atomic emission spectrometry (HG-ICP-AES) and ICP-AES, respectively. For that purpose, the analytical methodology was developed with a focus on the HG-ICP-AES technique. The formulations using propylene glycol/water as vehicle in a 20:80 proportion were the most appropriate for subsequent use at industrial scale. These formulations also showed a lower variation in the Sb(III) percentage, no need for a buffer solution to stabilize the formulation, and no influence of autoclaving on the quality of the product. The results of the development of the analytical methodology indicate that the proposed method is an efficient alternative for the determination of Sb(III) in the presence of large quantities of Sb(V) in injectable solutions of meglumine antimoniate, in a selective, linear, accurate and precise manner. In addition, the method showed a low limit of quantification, less matrix interference, and greater robustness than the batch techniques proposed in the Brazilian Pharmacopeia.
Abstract:
Objective: To examine the association between pre-diagnostic circulating vitamin D concentration, dietary intake of vitamin D and calcium, and the risk of colorectal cancer in European populations. Design: Nested case-control study. Setting: The study was conducted within the EPIC study, a cohort of more than 520 000 participants from 10 western European countries. Participants: 1248 cases of incident colorectal cancer, which developed after enrolment into the cohort, were matched to 1248 controls. Main outcome measures: Circulating vitamin D concentration (25-hydroxy-vitamin D, 25-(OH)D) was measured by enzyme immunoassay. Dietary and lifestyle data were obtained from questionnaires. Incidence rate ratios and 95% confidence intervals for the risk of colorectal cancer by 25-(OH)D concentration and levels of dietary calcium and vitamin D intake were estimated from multivariate conditional logistic regression models, with adjustment for potential dietary and other confounders. Results: 25-(OH)D concentration showed a strong inverse linear dose-response association with risk of colorectal cancer (P for trend <0.001). Compared with a pre-defined mid-level concentration of 25-(OH)D (50.0-75.0 nmol/l), lower levels were associated with higher colorectal cancer risk (<25.0 nmol/l: incidence rate ratio 1.32 (95% confidence interval 0.87 to 2.01); 25.0-49.9 nmol/l: 1.28 (1.05 to 1.56)), and higher concentrations were associated with lower risk (75.0-99.9 nmol/l: 0.88 (0.68 to 1.13); ≥100.0 nmol/l: 0.77 (0.56 to 1.06)). In analyses by quintile of 25-(OH)D concentration, patients in the highest quintile had a 40% lower risk of colorectal cancer than did those in the lowest quintile (P<0.001). Subgroup analyses showed a strong association for colon but not rectal cancer (P for heterogeneity=0.048). Greater dietary intake of calcium was associated with a lower colorectal cancer risk. Dietary vitamin D was not associated with disease risk. Findings did not vary by sex and were not altered by corrections for season or month of blood donation. Conclusions: The results of this large observational study indicate a strong inverse association between levels of pre-diagnostic 25-(OH)D concentration and risk of colorectal cancer in western European populations. Further randomised trials are needed to assess whether increases in circulating 25-(OH)D concentration can effectively decrease the risk of colorectal cancer.
Abstract:
This paper analyses the associations of the Normalized Difference Vegetation Index (NDVI) and the Enhanced Vegetation Index (EVI) with the prevalence of schistosomiasis and the presence of Biomphalaria glabrata in the state of Minas Gerais (MG), Brazil. Additionally, vegetation, soil and shade fraction images were created using a Linear Spectral Mixture Model (LSMM) from the blue, red and infrared channels of the Moderate Resolution Imaging Spectroradiometer spaceborne sensor, and the relationship between these images and the prevalence of schistosomiasis and the presence of B. glabrata was analysed. First, we found a high correlation between the vegetation fraction image and EVI and, second, a high correlation between the soil fraction image and NDVI. The results also indicate that there was a positive correlation between prevalence and the vegetation fraction image (July 2002), a negative correlation between prevalence and the soil fraction image (July 2002) and a positive correlation between B. glabrata and the shade fraction image (July 2002). This paper demonstrates that the LSMM variables can be used as a substitute for the standard vegetation indices (EVI and NDVI) to determine and delimit risk areas for B. glabrata and schistosomiasis in MG, which can be used to improve the allocation of resources for disease control.
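As an illustration of the quantities involved, the sketch below computes NDVI and EVI from assumed blue, red and near-infrared reflectances and unmixes a single pixel into vegetation, soil and shade fractions with a linear spectral mixture model. The endmember spectra and pixel values are hypothetical placeholders, not the authors' data or processing chain.

```python
# Hedged sketch: standard NDVI/EVI formulas and a per-pixel linear spectral
# mixture model solved by non-negative least squares. All numbers are assumed.
import numpy as np
from scipy.optimize import nnls

def ndvi(nir, red):
    return (nir - red) / (nir + red)

def evi(nir, red, blue, G=2.5, C1=6.0, C2=7.5, L=1.0):
    return G * (nir - red) / (nir + C1 * red - C2 * blue + L)

# LSMM: pixel reflectance = endmembers @ fractions (+ noise), fractions >= 0.
# Columns: vegetation, soil, shade; rows: blue, red, NIR (assumed endmembers).
endmembers = np.array([[0.02, 0.08, 0.01],
                       [0.03, 0.20, 0.01],
                       [0.45, 0.30, 0.02]])

pixel = np.array([0.04, 0.08, 0.30])   # assumed blue, red, NIR reflectance
fractions, _ = nnls(endmembers, pixel)
fractions /= fractions.sum()           # normalize fractions to sum to one
print("vegetation, soil, shade fractions:", fractions.round(3))
print("NDVI:", round(ndvi(pixel[2], pixel[1]), 3),
      "EVI:", round(evi(pixel[2], pixel[1], pixel[0]), 3))
```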
Abstract:
PURPOSE OF REVIEW: A substantial body of evidence supports the use of intensive insulin therapy in general critical care practice, particularly in surgical intensive care unit patients. The impact of intensive insulin therapy on the outcome of critically ill neurological patients, however, is still controversial. While avoidance of hyperglycemia is recommended in neurointensive care, no recommendations exist regarding the optimal target for systemic glucose control after severe brain injury. RECENT FINDINGS: An increase in brain metabolic demand leading to a deficiency in cerebral extracellular glucose has been observed in critically ill neurological patients and correlates with poor outcome. In this setting, a reduction of systemic glucose below 6 mmol/l with exogenous insulin has been found to exacerbate brain metabolic distress. Recent studies have confirmed these findings while showing intensive insulin therapy to have no substantial benefit on the outcome of critically ill neurological patients. SUMMARY: Questions persist regarding the optimal target for glucose control after severe brain injury. Further studies are needed to analyze the impact of intensive insulin therapy on brain glucose metabolism and outcome of critically ill neurological patients. According to the available evidence, a less restrictive target for systemic glucose control (6-10 mmol/l) may be more appropriate.
Abstract:
In this paper, we address the problem of mitigating seismically induced structural vibrations through the design of a semiactive controller based on mixed H2/H∞ control theory. The vibrations caused by the seismic motions are mitigated by a semiactive damper installed at the bottom of the structure. By a semiactive damper we mean a device that can absorb but cannot inject energy into the system. Sufficient conditions for the design of a desired controller are given in terms of linear matrix inequalities (LMIs). A controller that guarantees asymptotic stability and a mixed H2/H∞ performance is then developed. An algorithm is proposed to handle the semiactive nature of the actuator. The performance of the controller is experimentally evaluated in a real-time hybrid testing facility that consists of a physical specimen (a small-scale magnetorheological damper) and a numerical model (a large-scale three-story building).
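The abstract does not give the LMI conditions themselves. As a hedged sketch of the kind of LMI computation involved, the code below uses cvxpy to minimize an H∞ bound γ for an illustrative two-state structural model via the bounded real lemma. The system matrices, tolerances and solver choice are assumptions, and this is only a performance-analysis LMI, not the paper's mixed H2/H∞ semiactive synthesis.

```python
# Hedged sketch: H-infinity norm bound via the bounded real lemma LMI.
# dx/dt = A x + B w,  z = C x; the model below is an illustrative assumption.
import numpy as np
import cvxpy as cp

A = np.array([[0.0, 1.0], [-4.0, -0.4]])   # lightly damped oscillator
B = np.array([[0.0], [1.0]])               # disturbance (ground acceleration) input
C = np.array([[1.0, 0.0]])                 # regulated output: displacement

n, m, p = A.shape[0], B.shape[1], C.shape[0]
P = cp.Variable((n, n), symmetric=True)
gamma = cp.Variable(nonneg=True)

# Bounded real lemma: the H-infinity norm is below gamma iff this block LMI
# is negative definite for some P > 0 (small shifts enforce strictness).
lmi = cp.bmat([
    [A.T @ P + P @ A, P @ B,              C.T],
    [B.T @ P,         -gamma * np.eye(m), np.zeros((m, p))],
    [C,               np.zeros((p, m)),   -gamma * np.eye(p)],
])
constraints = [P >> 1e-6 * np.eye(n), lmi << -1e-9 * np.eye(n + m + p)]
prob = cp.Problem(cp.Minimize(gamma), constraints)
prob.solve(solver=cp.SCS)
print("H-infinity norm upper bound:", gamma.value)
```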
Abstract:
Executive control refers to a set of abilities enabling us to plan, control and implement our behavior to rapidly and flexibly adapt to environmental requirements. These adaptations notably involve the suppression of intended or ongoing cognitive or motor processes, a skill referred to as "inhibitory control". To implement efficient executive control of behavior, we must monitor our performance following errors to adjust our behavior accordingly. Deficits in inhibitory control have been associated with the emergence of a wide range of psychiatric disorders, ranging from drug addiction to attention deficit/hyperactivity disorders. Inhibitory control deficits could, however, be remediated. The brain indeed has the remarkable capacity to reorganize following training, allowing for behavioral improvements. This mechanism is referred to as neural and behavioral plasticity. Here, our aim is to investigate training-induced plasticity in inhibitory control and to propose a model of inhibitory control explaining the spatio-temporal brain mechanisms supporting inhibitory control processes and their plasticity. In the two studies entitled "Brain dynamics underlying training-induced improvement in suppressing inappropriate action" (Manuel et al., 2010) and "Training-induced neuroplastic reinforcement of top-down inhibitory control" (Manuel et al., 2012c), we investigated the neurophysiological and behavioral changes induced by inhibitory control training with two different tasks and populations of healthy participants. We report that different inhibitory control training regimens either developed automatic/bottom-up inhibition in parietal areas or reinforced controlled/top-down inhibitory control in frontal brain regions. We discuss the results of both studies in the light of a model of fronto-basal inhibition processes. In "Spatio-temporal brain dynamics mediating post-error behavioral adjustments" (Manuel et al., 2012a), we investigated how error detection modulates the processing of subsequent stimuli and in turn impacts behavior. We showed that during early integration of stimuli, the activity of prefrontal and parietal areas is modulated according to previous performance and affects post-error behavioral adjustments. We discuss these results in terms of a shift from an automatic to a controlled form of inhibition induced by the detection of errors, which in turn influenced response speed. In "Inter- and intra-hemispheric dissociations in ideomotor apraxia: a large-scale lesion-symptom mapping study in subacute brain-damaged patients" (Manuel et al., 2012b), we investigated ideomotor apraxia, a deficit in performing pantomime gestures of object use, and identified the anatomical correlates of distinct ideomotor apraxia error types in 150 subacute brain-damaged patients. Our results reveal a left intra-hemispheric dissociation for different pantomime error types, but with an unspecific role for inferior frontal areas.
Abstract:
In a thermally fluctuating long linear polymeric chain in a solution, the ends, from time to time, approach each other. At such an instance, the chain can be regarded as closed and thus will form a knot or rather a virtual knot. Several earlier studies of random knotting demonstrated that simpler knots show a higher occurrence for shorter random walks than do more complex knots. However, up to now there have been no rules that could be used to predict the optimal length of a random walk, i.e. the length for which a given knot reaches its highest occurrence. Using numerical simulations, we show here that a power law accurately describes the relation between the optimal lengths of random walks leading to the formation of different knots and the previously characterized lengths of ideal knots of a corresponding type.
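A minimal sketch of the kind of relation described: fitting a power law between the optimal random-walk length and the ideal-knot length by linear regression in log-log coordinates. The numeric arrays are hypothetical placeholders, not the simulation data.

```python
# Hedged illustration: fit L_opt = a * L_ideal**b in log-log space.
import numpy as np

ideal_length = np.array([16.3, 24.7, 28.3, 30.5])    # hypothetical ideal-knot lengths
optimal_walk = np.array([300., 800., 1100., 1400.])  # hypothetical optimal walk lengths

b, log_a = np.polyfit(np.log(ideal_length), np.log(optimal_walk), 1)
a = np.exp(log_a)
print(f"power-law fit: L_opt ~ {a:.1f} * L_ideal^{b:.2f}")
```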
Abstract:
Escherichia coli-based bioreporters for arsenic detection are typically based on the natural feedback loop that controls ars operon transcription. Feedback loops are known to show a wide-range linear response to the detriment of the overall amplification of the incoming signal. While this is a favourable feature for controlling arsenic detoxification in the cell, a feedback loop is not necessarily optimal for obtaining the highest sensitivity and response in a designed cellular reporter for arsenic detection. Here we systematically explore the effects of uncoupling the topology of the arsenic-sensing circuitry on the developed reporter signal as a function of arsenite concentration input. A model was developed to describe relative ArsR and GFP levels in feedback and uncoupled circuitry, which was used to explore new ArsR-based synthetic circuits. The expression of arsR was then placed under the control of a series of constitutive promoters, which differed in promoter strength, and which could be further modulated by TetR repression. Expression of the reporter gene was maintained under the ArsR-controlled Pars promoter. ArsR expression in the systems was measured by using ArsR-mCherry fusion proteins. We find that stronger constitutive ArsR production decreases arsenite-dependent EGFP output from Pars and vice versa. This leads to a tunable series of arsenite-dependent EGFP outputs in a variety of systematically characterized circuitries. The higher expression levels and sensitivities of the response curves in the uncoupled circuits may be useful for improving field-test assays using arsenic bioreporters.
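The abstract does not spell out the model equations. The toy sketch below, built entirely on assumed parameters and a simple Hill-repression form, contrasts steady-state reporter output for a feedback circuit (arsR transcribed from its own Pars promoter) with an uncoupled circuit (constitutive arsR), with arsenite modeled as sequestering ArsR.

```python
# Toy steady-state model (assumptions, not the authors' model) of reporter
# output from the ArsR-repressed Pars promoter in feedback vs uncoupled circuits.
from scipy.optimize import brentq

def free_repressor(total_arsr, arsenite, kd_as=1.0):
    """ArsR not bound by arsenite (simple sequestration assumption)."""
    return total_arsr / (1.0 + arsenite / kd_as)

def pars_activity(free_arsr, k_rep=0.5, n=2):
    """Relative Pars promoter activity under Hill-type ArsR repression."""
    return 1.0 / (1.0 + (free_arsr / k_rep) ** n)

def arsr_feedback(arsenite, production=2.0):
    """Feedback circuit: the ArsR level solves a fixed-point balance equation."""
    balance = lambda arsr: arsr - production * pars_activity(free_repressor(arsr, arsenite))
    return brentq(balance, 0.0, production)

arsr_uncoupled = 2.0  # uncoupled circuit: constitutive ArsR production sets a fixed level
for arsenite in (0.0, 0.5, 2.0, 10.0):
    gfp_fb = pars_activity(free_repressor(arsr_feedback(arsenite), arsenite))
    gfp_uc = pars_activity(free_repressor(arsr_uncoupled, arsenite))
    print(f"As={arsenite:5.1f}  reporter_feedback={gfp_fb:.2f}  reporter_uncoupled={gfp_uc:.2f}")
```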
Abstract:
We investigate the problem of finding minimum-distortion policies for streaming delay-sensitive but distortion-tolerant data. We consider cross-layer approaches which exploit the coupling between presentation and transport layers. We make the natural assumption that the distortion function is convex and decreasing. We focus on a single source-destination pair and analytically find the optimum transmission policy when the transmission is done over an error-free channel. This optimum policy turns out to be independent of the exact form of the convex and decreasing distortion function. Then, for a packet-erasure channel, we analytically find the optimum open-loop transmission policy, which is also independent of the form of the convex distortion function. We then find computationally efficient closed-loop heuristic policies and show, through numerical evaluation, that they outperform the open-loop policy and have near optimal performance.
Abstract:
The paper proposes an approach aimed at detecting optimal model parameter combinations to achieve the most representative description of uncertainty in the model performance. A classification problem is posed to find the regions of good-fitting models according to the values of a cost function. Support Vector Machine (SVM) classification in the parameter space is applied to decide whether a forward model simulation is to be computed for a particular generated model. SVM is particularly well suited to tackling classification problems in high-dimensional spaces in a non-parametric and non-linear way. SVM decision boundaries determine the regions that are subject to the largest uncertainty in the cost function classification and, therefore, provide guidelines for further iterative exploration of the model space. The proposed approach is illustrated by a synthetic example of fluid flow through porous media, which features a highly variable response due to the combination of parameter values.
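A hedged sketch of the workflow on a synthetic stand-in for the cost function (not the paper's flow model): an RBF SVM is trained to separate good-fitting from poor-fitting parameter combinations, and new candidate simulations are chosen near the decision boundary, where the classification is most uncertain.

```python
# Hedged sketch: SVM classification in parameter space to guide which forward
# simulations to run next. The cost function and thresholds are assumptions.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def cost_function(theta):
    """Synthetic stand-in for a forward-model misfit."""
    return (theta[:, 0] - 0.3) ** 2 + (theta[:, 1] + 0.2) ** 2

theta_train = rng.uniform(-1, 1, size=(200, 2))
labels = (cost_function(theta_train) < 0.25).astype(int)   # "good fit" region

svm = SVC(kernel="rbf", C=10.0, gamma="scale").fit(theta_train, labels)

# Run the expensive forward model preferentially for candidates closest to the
# decision boundary (smallest absolute decision-function value).
candidates = rng.uniform(-1, 1, size=(1000, 2))
uncertainty = np.abs(svm.decision_function(candidates))
to_simulate = candidates[np.argsort(uncertainty)[:10]]
print(to_simulate)
```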
Abstract:
Operational Research has proven to be a valuable management tool in today's increasingly competitive market. Through Linear Programming, a problem of maximizing results or minimizing production costs can be reproduced mathematically in order to assist managers in decision making. Linear Programming is a mathematical method in which the objective function and the constraints are linear, with several applications in management control, usually involving problems of using the available resources subject to limitations imposed by the production process or by the market. The overall objective of this work is to propose a Linear Programming model for production scheduling and the allocation of the necessary resources: optimizing a physical quantity, called the objective function, while taking into account a set of constraints endogenous to the activities under management. The crucial objective is to provide a model to support management, thus contributing to the efficient allocation of the scarce resources available to the economic unit. The work developed makes clear the importance of the quantitative approach as an indispensable resource to support the decision-making process.
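A minimal example of the kind of model described, with assumed profits, resource coefficients and capacities: maximizing profit from two products subject to machine-hour and raw-material limits with scipy's linear-programming solver.

```python
# Hedged illustration of a production-planning linear program (all numbers assumed).
from scipy.optimize import linprog

# Decision variables: x1, x2 = units of product 1 and 2.
profit = [40.0, 30.0]                 # profit per unit
c = [-p for p in profit]              # linprog minimizes, so negate the objective

A_ub = [[2.0, 1.0],                   # machine hours per unit
        [1.0, 3.0]]                   # kg of raw material per unit
b_ub = [100.0, 90.0]                  # available machine hours and material

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print("production plan:", res.x, "maximum profit:", -res.fun)
```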
Abstract:
Revenue management practices often include overbooking capacity to account for customers who make reservations but do not show up. In this paper, we consider the network revenue management problem with no-shows and overbooking, where the show-up probabilities are specific to each product. No-show rates differ significantly by product (for instance, each itinerary and fare combination for an airline) as sale restrictions and the demand characteristics vary by product. However, models that consider no-show rates for each individual product are difficult to handle, as the state space in dynamic programming formulations (or the variable space in approximations) increases significantly. In this paper, we propose a randomized linear program to jointly make the capacity control and overbooking decisions with product-specific no-shows. We establish that our formulation gives an upper bound on the optimal expected total profit and that our upper bound is tighter than a deterministic linear programming upper bound that appears in the existing literature. Furthermore, we show that our upper bound is asymptotically tight in a regime where the leg capacities and the expected demand are scaled linearly at the same rate. We also describe how the randomized linear program can be used to obtain a bid price control policy. Computational experiments indicate that our approach is quite fast, able to scale to industrial problems and can provide significant improvements over standard benchmarks.
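For orientation, the sketch below sets up a simplified deterministic linear-programming benchmark of the kind mentioned in the abstract (not the proposed randomized LP), with product-specific show-up probabilities scaling expected seat usage. All fares, demands, capacities and itineraries are illustrative assumptions.

```python
# Hedged sketch: a simplified deterministic LP for network capacity control with
# product-specific show-up probabilities. All data below are assumptions.
from scipy.optimize import linprog

fares = [400.0, 250.0, 150.0]          # fare per product (itinerary + fare class)
show_up = [0.95, 0.90, 0.80]           # product-specific show-up probability
demand = [60.0, 80.0, 120.0]           # expected demand per product
capacity = [100.0, 120.0]              # physical seats per flight leg
legs_used = [[1, 0],                   # product 1 uses leg 1 only
             [1, 1],                   # product 2 uses both legs (connecting)
             [0, 1]]                   # product 3 uses leg 2 only

# Maximize expected revenue from passengers who show up (a modeling assumption).
c = [-f * q for f, q in zip(fares, show_up)]
# Expected show-ups on each leg must fit the physical capacity, so bookings may
# exceed seats when show-up probabilities are below one (simple overbooking).
A_ub = [[legs_used[j][i] * show_up[j] for j in range(3)] for i in range(2)]
b_ub = capacity
bounds = [(0, d) for d in demand]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print("booking limits:", res.x.round(1), "expected revenue bound:", -res.fun)
```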
Abstract:
Most research on single machine scheduling has assumed the linearity of job holding costs, which is arguably not appropriate in some applications. This motivates our study of a model for scheduling $n$ classes of stochastic jobs on a single machine, with the objective of minimizing the total expected holding cost (discounted or undiscounted). We allow general holding cost rates that are separable, nondecreasing and convex in the number of jobs in each class. We formulate the problem as a linear program over a certain greedoid polytope, and establish that it is solved optimally by a dynamic (priority) index rule, which extends the classical Smith's rule (1956) for the linear case. Unlike Smith's indices, defined for each class, our new indices are defined for each extended class, consisting of a class and a number of jobs in that class, and yield an optimal dynamic index rule: work at each time on a job whose current extended class has the largest index. We further show that the indices possess a decomposition property, as they are computed separately for each class, and interpret them in economic terms as marginal expected cost rate reductions per unit of expected processing time. We establish the results by deploying a methodology recently introduced by us [J. Niño-Mora (1999), "Restless bandits, partial conservation laws, and indexability", forthcoming in Advances in Applied Probability Vol. 33 No. 1, 2001], based on the satisfaction by performance measures of partial conservation laws (PCL), which extend the generalized conservation laws of Bertsimas and Niño-Mora (1996): PCL provide a polyhedral framework for establishing the optimality of index policies with special structure in scheduling problems under admissible objectives, which we apply to the model of concern.
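As a baseline for the index rule discussed, the sketch below implements the classical Smith rule for linear holding costs: sequence jobs in decreasing order of the index c_j/p_j. The job data are assumptions, and the paper's extended-class indices for convex costs are not reproduced here.

```python
# Hedged baseline sketch: Smith's rule (weighted shortest processing time) for
# linear holding costs, which the paper's extended-class indices generalize.

def smith_sequence(jobs):
    """Order jobs by decreasing index cost_rate / proc_time."""
    return sorted(jobs, key=lambda j: j["cost_rate"] / j["proc_time"], reverse=True)

def total_weighted_completion(sequence):
    """Total holding cost when cost accrues linearly until each job completes."""
    t, total = 0.0, 0.0
    for job in sequence:
        t += job["proc_time"]
        total += job["cost_rate"] * t
    return total

jobs = [  # assumed (cost rate, processing time) data
    {"name": "A", "cost_rate": 3.0, "proc_time": 2.0},
    {"name": "B", "cost_rate": 1.0, "proc_time": 1.0},
    {"name": "C", "cost_rate": 4.0, "proc_time": 4.0},
]
seq = smith_sequence(jobs)
print([j["name"] for j in seq], total_weighted_completion(seq))
```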
Abstract:
Therapeutic goal of vitamin D: optimal serum level and dose requirements. Results of randomized controlled trials and meta-analyses investigating the effect of vitamin D supplementation on falls and fractures are inconsistent. The optimal serum level of 25(OH) vitamin D for musculoskeletal and global health is ≥30 ng/ml (75 nmol/l) for some experts and 20 ng/ml (50 nmol/l) for others. A daily dose of vitamin D is better than high intermittent doses for reaching this goal. High-dose once-yearly vitamin D therapy may increase the incidence of fractures and falls. A high serum level of vitamin D is probably harmful for the musculoskeletal system and for health at large. The optimal benefits for musculoskeletal health are obtained with an 800 IU daily dose and a serum level of near 30 ng/ml (75 nmol/l).
Abstract:
This paper presents a test of the predictive validity of various classes of QALY models (i.e., linear, power and exponential models). We first estimated TTO utilities for 43 EQ-5D chronic health states, and these states were then embedded in health profiles. The chronic TTO utilities were then used to predict the responses to TTO questions with health profiles. We find that the power QALY model clearly outperforms the linear and exponential QALY models. The optimal power coefficient is 0.65. Our results suggest that TTO-based QALY calculations may be biased. This bias can be avoided by using a power QALY model.
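A small illustration of how the competing QALY models weight duration, using assumed utilities, durations and an exponential discount rate; only the power coefficient of 0.65 comes from the abstract.

```python
# Hedged illustration: valuing a health profile under linear, power and
# exponential duration weighting. Utilities and durations are assumptions.
import math

def duration_weight(t, model="power", r=0.65, beta=0.05):
    if model == "linear":
        return t
    if model == "power":
        return t ** r
    if model == "exponential":
        return (1.0 - math.exp(-beta * t)) / beta
    raise ValueError(model)

def profile_value(profile, model="power", **kw):
    """Value of a health profile [(utility, years), ...]: each period's utility
    is weighted by the increment of the (possibly non-linear) duration weight."""
    value, elapsed = 0.0, 0.0
    for utility, years in profile:
        value += utility * (duration_weight(elapsed + years, model, **kw)
                            - duration_weight(elapsed, model, **kw))
        elapsed += years
    return value

profile = [(0.8, 5.0), (0.5, 5.0)]   # assumed utilities and durations
for m in ("linear", "power", "exponential"):
    print(m, round(profile_value(profile, model=m), 3))
```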