74 results for non-linear programming
Abstract:
Context There are no evidence syntheses available to guide clinicians on when to titrate antihypertensive medication after initiation. Objective To model the blood pressure (BP) response after initiating antihypertensive medication. Data sources Electronic databases, including Medline, Embase and the Cochrane Register, and reference lists up to December 2009. Study selection Trials that initiated antihypertensive medication as single therapy in hypertensive patients who were either drug naive or had undergone a placebo washout of previous drugs. Data extraction Office BP measurements at a minimum of two-weekly intervals for a minimum of 4 weeks. An asymptotic-approach model of BP response was assumed, and non-linear mixed-effects modelling was used to estimate model parameters. Results Eighteen trials that recruited 4168 patients met the inclusion criteria. The time to reach 50% of the maximum estimated BP-lowering effect was about 1 week (systolic 0.91 weeks, 95% CI 0.74 to 1.10; diastolic 0.95 weeks, 0.75 to 1.15). Models incorporating drug class as a source of variability did not improve the fit to the data. Incorporating the presence of a titration schedule improved model fit for both systolic and diastolic pressure. Titration increased both the predicted maximum effect and the time taken to reach 50% of the maximum (systolic 1.2 vs 0.7 weeks; diastolic 1.4 vs 0.7 weeks). Conclusions Estimates of the maximum efficacy of antihypertensive agents can be made early after starting therapy. This knowledge will guide clinicians in deciding when a newly started antihypertensive agent is likely, or unlikely, to be effective at controlling BP.
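As a rough illustration of the modelling approach above, the sketch below fits an asymptotic ("approach to plateau") BP-response curve by non-linear least squares and derives the time to 50% of the maximum effect. The follow-up times and BP drops are hypothetical values, not data from the review.

```python
# Minimal sketch of an asymptotic BP-response model fitted by non-linear
# least squares. All data values are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def bp_effect(t, e_max, k):
    """Asymptotic approach model: effect(t) = e_max * (1 - exp(-k * t))."""
    return e_max * (1.0 - np.exp(-k * t))

weeks = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 8.0])     # follow-up times
drop = np.array([0.0, 6.5, 9.8, 12.1, 12.8, 13.0])   # systolic BP drop, mmHg (hypothetical)

(e_max, k), _ = curve_fit(bp_effect, weeks, drop, p0=(12.0, 1.0))
t_half = np.log(2.0) / k   # time to reach 50% of the maximum effect
print(f"Estimated max drop: {e_max:.1f} mmHg, time to 50% effect: {t_half:.2f} weeks")
```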
Abstract:
In recent years there has been explosive growth in the development of adaptive and data-driven methods. One of the efficient data-driven approaches is based on statistical learning theory (SLT) (Vapnik 1998). The theory rests on the Structural Risk Minimisation (SRM) principle and has a solid statistical background. When applying SRM we try not only to reduce the training error, i.e. to fit the available data with a model, but also to reduce the complexity of the model and thereby the generalisation error. Many nonlinear learning procedures recently developed in neural networks and statistics can be understood and interpreted in terms of the structural risk minimisation inductive principle. A recent methodology based on SRM is called Support Vector Machines (SVM). At present SLT is still under intensive development and SVM are finding new areas of application (www.kernel-machines.org). SVM develop robust and non-linear data models with excellent generalisation abilities, which is very important for both monitoring and forecasting. SVM are particularly effective when the input space is high-dimensional and the training data set is not large enough to develop a corresponding nonlinear model. Moreover, SVM use only the support vectors to derive decision boundaries. This opens the way to sampling optimisation, estimation of noise in data, quantification of data redundancy, etc. A presentation of SVM for spatially distributed data is given in (Kanevski and Maignan 2004).
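The toy example below illustrates the SVM properties described above (a non-linear fit via a kernel, with the solution defined by the support vectors only) on synthetic data. It is a generic scikit-learn sketch, not the spatial implementation of Kanevski and Maignan.

```python
# Minimal illustration of SVM regression with an RBF kernel: only the
# support vectors define the decision function. Data are synthetic.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0, 5, 80)).reshape(-1, 1)
y = np.sin(X).ravel() + rng.normal(scale=0.1, size=80)

model = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(X, y)
print("support vectors used:", model.support_vectors_.shape[0], "of", len(X))
print("prediction at x=2.5:", model.predict([[2.5]])[0])
```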
Abstract:
Introduction: The SMILING project, a multicentric project funded by the European Union, aims to develop a new gait and balance training program to prevent falls in older persons. The program includes the "SMILING shoe", an innovative device that generates mechanical perturbations while walking by changing the soles' inclination. The induced perturbations challenge subjects' balance and force them to react to avoid falls. By specifically training the complex motor reactions used to maintain balance when walking on irregular ground, the program will improve subjects' ability to react in situations of unsteadiness and reduce their risk of falling. Methods: The program will be evaluated in a multicentric, cross-over randomized controlled trial. Overall, 112 subjects (aged ≥65 years, ≥1 fall, POMA score 22-26/28) will be enrolled. Subjects will be randomised into 2 groups: group A begins the training with active "SMILING shoes", group B with inactive dummy shoes. After 4 weeks of training, groups A and B will exchange the shoes. Supervised training sessions (30 minutes twice a week for 8 weeks) include walking tasks of progressive difficulty. To avoid a learning effect, "SMILING shoes" perturbations will be generated in a non-linear, chaotic way. Gait performance, fear of falling, and acceptability of the program will be assessed. Conclusion: The SMILING program is an innovative intervention for falls prevention in older persons based on gait and balance training using chaotic perturbations. Because the "SMILING shoes" are easy to use, this program could be used in various settings, such as geriatric clinics or at home.
Abstract:
BACKGROUND: The aim of this study was to assess the pharmacology, toxicity and activity of high-dose ifosfamide with mesna +/- GM-CSF administered by a five-day continuous infusion at a total ifosfamide dose of 12-18 g/m2 in adult patients with advanced sarcomas. PATIENTS AND METHODS: Between January 1991 and October 1992, 32 patients with advanced or metastatic sarcoma were entered into the study. Twenty-seven patients were pretreated, including twenty-three with prior ifosfamide at less than 8 g/m2 total dose/cycle. In 25 patients (27 cycles) extensive pharmacokinetic analyses were performed. RESULTS: The area under the plasma concentration-time curve (AUC) for ifosfamide increased linearly with dose, while the AUCs of the metabolites measured in plasma by thin-layer chromatography did not increase with dose, in particular that of the active metabolite isophosphoramide mustard. The AUC of the inactive carboxy metabolite likewise did not increase with dose. Interpatient variability of pharmacokinetic parameters was high. Dose-limiting toxicity was myelosuppression at 18 g/m2 total dose, with grade 4 neutropenia in five of six patients and grade 4 thrombocytopenia in four of six patients. The maximum tolerated dose was therefore considered to be 18 g/m2 total dose. There was one CR and eleven PRs in twenty-nine evaluable patients (overall response rate 41%). CONCLUSION: Both the activation and inactivation pathways of ifosfamide are non-linear and saturable at high doses, although the pharmacokinetics of the parent drug itself are dose-linear. Ifosfamide doses greater than 14-16 g/m2 per cycle appear to result in a relative decrease of the active metabolite isophosphoramide mustard. These data suggest a dose-dependent saturation, or even inhibition, of ifosfamide metabolism with increasing high-dose ifosfamide and indicate the need for further metabolic studies.
Abstract:
This study investigated the spatial, spectral, temporal and functional properties of functional brain connections involved in the concurrent execution of unrelated visual perception and working memory tasks. Electroencephalography data were analysed using a novel data-driven approach assessing source coherence at the whole-brain level. Three connections in the beta-band (18-24 Hz) and one in the gamma-band (30-40 Hz) were modulated by dual-task performance. Beta-coherence increased within two dorsofrontal-occipital connections in dual-task conditions compared to the single-task condition, with the highest coherence seen during low working memory load trials. In contrast, beta-coherence in a prefrontal-occipital functional connection and gamma-coherence in an inferior frontal-occipitoparietal connection were not affected by the addition of the second task and only showed elevated coherence under high working memory load. Analysis of coherence as a function of time suggested that the dorsofrontal-occipital beta-connections were relevant to working memory maintenance, while the prefrontal-occipital beta-connection and the inferior frontal-occipitoparietal gamma-connection were involved in top-down control of concurrent visual processing. This interpretation is supported by the finding that the increase in gamma-connection coherence from low to high working memory load was negatively correlated with reaction time on the perception task (greater increases accompanying faster responses). Together, these results demonstrate that dual-task demands trigger non-linear changes in functional interactions between frontal-executive and occipitoparietal-perceptual cortices.
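For illustration, a minimal sketch of band-limited coherence estimation between two channels is given below. The signals, sampling rate and window length are assumed; the study itself used whole-brain source-level coherence rather than this two-channel example.

```python
# Sketch of estimating spectral coherence in the beta (18-24 Hz) and
# gamma (30-40 Hz) bands between two synthetic channels.
import numpy as np
from scipy.signal import coherence

fs = 256.0                                  # sampling rate, Hz (assumed)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(1)
shared = np.sin(2 * np.pi * 21 * t)         # shared 21 Hz (beta) component
x = shared + rng.normal(size=t.size)
y = 0.7 * shared + rng.normal(size=t.size)

f, cxy = coherence(x, y, fs=fs, nperseg=512)
beta = cxy[(f >= 18) & (f <= 24)].mean()
gamma = cxy[(f >= 30) & (f <= 40)].mean()
print(f"mean beta coherence: {beta:.2f}, mean gamma coherence: {gamma:.2f}")
```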
Abstract:
The aim of this study was to locate the breakpoints of cerebral and muscle oxygenation and muscle electrical activity during a ramp exercise in reference to the first and second ventilatory thresholds. Twenty-five cyclists completed a maximal ramp test on an electromagnetically braked cycle ergometer with an increment rate of 25 W/min. Expired gases (breath-by-breath), prefrontal cortex and vastus lateralis (VL) oxygenation [near-infrared spectroscopy (NIRS)], together with electromyographic (EMG) root mean square (RMS) activity for the VL, rectus femoris (RF), and biceps femoris (BF) muscles, were continuously assessed. There was a non-linear increase in both cerebral deoxyhemoglobin (at 56 ± 13% of the exercise) and oxyhemoglobin (56 ± 8% of exercise), concomitant with the first ventilatory threshold (57 ± 6% of exercise, p > 0.86, Cohen's d < 0.1). Cerebral deoxyhemoglobin further increased (87 ± 10% of exercise) while oxyhemoglobin reached a plateau/decreased (86 ± 8% of exercise) after the second ventilatory threshold (81 ± 6% of exercise, p < 0.05, d > 0.8). We identified one threshold only for muscle parameters, with a non-linear decrease in muscle oxyhemoglobin (78 ± 9% of exercise), attenuation in muscle deoxyhemoglobin (80 ± 8% of exercise), and increases in the EMG activity of the VL (89 ± 5% of exercise), RF (82 ± 14% of exercise), and BF (85 ± 9% of exercise). The thresholds in BF and VL EMG activity occurred after the second ventilatory threshold (p < 0.05, d > 0.6). Our results suggest that the metabolic and ventilatory events characterizing this latter cardiopulmonary threshold may affect both cerebral and muscle oxygenation levels and, in turn, muscle recruitment responses.
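A minimal sketch of single-breakpoint detection with a two-segment piecewise-linear fit, similar in spirit to the threshold identification described above, is shown below on a synthetic ramp-exercise signal.

```python
# Grid-search for the breakpoint that minimises the combined error of two
# independent linear fits. The signal is synthetic, not study data.
import numpy as np

def two_segment_sse(x, y, bp):
    """Sum of squared errors of two independent linear fits split at bp."""
    sse = 0.0
    for mask in (x <= bp, x > bp):
        coeffs = np.polyfit(x[mask], y[mask], 1)
        sse += np.sum((y[mask] - np.polyval(coeffs, x[mask])) ** 2)
    return sse

exercise_pct = np.linspace(5, 100, 96)                   # % of ramp completed
rng = np.random.default_rng(4)
signal = np.where(exercise_pct < 80, 1.0,                # flat, then rising (e.g. EMG RMS)
                  1.0 + 0.05 * (exercise_pct - 80)) + rng.normal(scale=0.02, size=96)

candidates = exercise_pct[5:-5]                          # keep a few points on each side
best_bp = min(candidates, key=lambda bp: two_segment_sse(exercise_pct, signal, bp))
print(f"estimated breakpoint at ~{best_bp:.0f}% of exercise")
```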
Abstract:
Microstructure imaging from diffusion magnetic resonance (MR) data represents an invaluable tool to study the morphology of tissues non-invasively and to provide biological insight into their microstructural organization. In recent years, a variety of biophysical models have been proposed to associate particular patterns observed in the measured signal with specific microstructural properties of the neuronal tissue, such as axon diameter and fiber density. Despite very appealing results showing that the estimated microstructure indices agree very well with histological examinations, existing techniques require computationally very expensive non-linear procedures to fit the models to the data, which in practice demands the use of powerful computer clusters for large-scale applications. In this work, we present a general framework for Accelerated Microstructure Imaging via Convex Optimization (AMICO) and show how to reformulate this class of techniques as convenient linear systems which can then be efficiently solved using very fast algorithms. We demonstrate this linearization of the fitting problem for two specific models, i.e. ActiveAx and NODDI, providing a very attractive alternative for parameter estimation in those techniques; the AMICO framework, however, is general and flexible enough to work also for the wider space of microstructure imaging methods. Results demonstrate that AMICO represents an effective means to accelerate the fit of existing techniques drastically (up to four orders of magnitude faster) while preserving accuracy and precision in the estimated model parameters (correlation above 0.9). We believe that the availability of such ultrafast algorithms will help to accelerate the spread of microstructure imaging to larger cohorts of patients and to study a wider spectrum of neurological disorders.
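The idea behind this kind of linearization can be sketched as follows: the measured signal is approximated as a linear combination of precomputed response atoms, and the weights are recovered with a fast convex solver. The dictionary, signal and sparsity pattern below are synthetic placeholders, not the actual ActiveAx or NODDI dictionaries.

```python
# Conceptual sketch of an AMICO-style linearization: signal ≈ A @ x with a
# dictionary A of simulated responses, solved by non-negative least squares.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)
n_meas, n_atoms = 60, 20
A = rng.random((n_meas, n_atoms))            # dictionary of precomputed responses (synthetic)
x_true = np.zeros(n_atoms)
x_true[[3, 11]] = [0.7, 0.3]                 # sparse mixture of two atoms
signal = A @ x_true + 0.01 * rng.normal(size=n_meas)

x_hat, _ = nnls(A, signal)                   # fast convex fit of the linear system
print("recovered weights on active atoms:", x_hat[[3, 11]].round(2))
```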
Abstract:
It has been suggested that pathological gamblers develop illusory perceptions of control regarding the outcome of the games and should therefore express higher Internal and Chance locus of control. A sample of 48 outpatients diagnosed with pathological gambling disorder, who participated in this ex post facto study, completed the Internality, Powerful Others, and Chance scale, the South Oaks Gambling Screen questionnaire, and the Beck Depression Inventory. Results for the locus of control measure were compared with a reference group. Pathological gamblers scored higher than the reference group on the Chance locus of control, which increased with the severity of cases. Moreover, Internal locus of control showed a curvilinear relationship with the severity of cases. Pathological gamblers thus have specific locus of control scores that vary as a function of severity, in a linear or non-linear fashion depending on the scale. This effect might be caused by competition between the "illusion of control" and the tendency to attribute the adverse consequences of gambling to external causes.
Abstract:
General Summary Although the chapters of this thesis address a variety of issues, the principal aim is common: to test economic ideas in an international economic context. The intention has been to supply empirical findings using the largest suitable data sets and the most appropriate empirical techniques. This thesis can roughly be divided into two parts: the first one, corresponding to the first two chapters, investigates the link between trade and the environment; the second one, the last three chapters, is related to economic geography issues. Environmental problems are omnipresent in the daily press nowadays, and one of the arguments put forward is that globalisation causes severe environmental problems through the reallocation of investments and production to countries with less stringent environmental regulations. A measure of the amplitude of this undesirable effect is provided in the first part. The third and fourth chapters explore the productivity effects of agglomeration. The computed spillover effects between different sectors indicate how cluster formation might be productivity-enhancing. The last chapter is not about how to better understand the world but how to measure it, and it was just a great pleasure to work on it. "The Economist" writes every week about the impressive population and economic growth observed in China and India, and everybody agrees that the world's center of gravity has shifted. But by how much and how fast did it shift? An answer is given in the last part, which proposes a global measure for the location of world production and allows us to visualize our results in Google Earth. A short summary of each of the five chapters is provided below. The first chapter, entitled "Unraveling the World-Wide Pollution-Haven Effect", investigates the relative strength of the pollution haven effect (PH, comparative advantage in dirty products due to differences in environmental regulation) and the factor endowment effect (FE, comparative advantage in dirty, capital-intensive products due to differences in endowments). We compute the pollution content of imports using the IPPS coefficients (for three pollutants, namely biological oxygen demand, sulphur dioxide and toxic pollution intensity, for all manufacturing sectors) provided by the World Bank and use a gravity-type framework to isolate the two above-mentioned effects. Our study covers 48 countries that can be classified into 29 Southern and 19 Northern countries and uses the lead content of gasoline as a proxy for environmental stringency. For North-South trade we find significant PH and FE effects going in the expected, opposite directions and being of similar magnitude. However, when looking at world trade, the effects become very small because of the high North-North trade share, where we have no a priori expectations about the signs of these effects. Therefore popular fears about the trade effects of differences in environmental regulations might be exaggerated. The second chapter is entitled "Is trade bad for the Environment? Decomposing worldwide SO2 emissions, 1990-2000". First we construct a novel and large database containing reasonable estimates of SO2 emission intensities per unit labor that vary across countries, periods and manufacturing sectors. Then we use these original data (covering 31 developed and 31 developing countries) to decompose the worldwide SO2 emissions into the three well-known dynamic effects (scale, technique and composition effect).
We find that the positive scale effect (+9.5%) and the negative technique effect (-12.5%) are the main driving forces of emission changes. Composition effects between countries and sectors are smaller, both negative and of similar magnitude (-3.5% each). Given that trade matters via the composition effects, this means that trade reduces total emissions. We next construct, in a first experiment, a hypothetical world where no trade happens, i.e. each country produces its imports at home and no longer produces its exports. The difference between the actual and this no-trade world allows us (neglecting price effects) to compute a static first-order trade effect. The latter now increases total world emissions because it allows, on average, dirty countries to specialize in dirty products. However, this effect is smaller in 2000 (3.5%) than in 1990 (10%), in line with the negative dynamic composition effect identified in the previous exercise. We then propose a second experiment, comparing effective emissions with the maximum or minimum possible levels of SO2 emissions. These hypothetical levels of emissions are obtained by reallocating labour across sectors within each country (under the country-employment and the world industry-production constraints). Using linear programming techniques, we show that emissions are 90% lower than in the worst case, but that they could still be reduced by a further 80% if emissions were minimized. The findings from this chapter go together with those from chapter one in the sense that trade-induced composition effects do not seem to be the main source of pollution, at least in the recent past. Turning to the economic geography part of this thesis, the third chapter, entitled "A Dynamic Model with Sectoral Agglomeration Effects", consists of a short note that derives the theoretical model estimated in the fourth chapter. The derivation is directly based on the multi-regional framework of Ciccone (2002) but extends it to include sectoral disaggregation and a temporal dimension. This allows us to write present productivity formally as a function of past productivity and other contemporaneous and past control variables. The fourth chapter, entitled "Sectoral Agglomeration Effects in a Panel of European Regions", takes the final equation derived in chapter three to the data. We investigate the empirical link between density and labour productivity based on regional data (245 NUTS-2 regions over the period 1980-2003). Using dynamic panel techniques allows us to control for the possible endogeneity of density and for region-specific effects. We find a positive long-run elasticity of labour productivity with respect to density of about 13%. When using data at the sectoral level, it seems that positive cross-sector and negative own-sector externalities are present in manufacturing, while financial services display strong positive own-sector effects. The fifth and last chapter, entitled "Is the World's Economic Center of Gravity Already in Asia?", computes the world's economic, demographic and geographic centers of gravity for 1975-2004 and compares them. Based on data for the largest cities in the world and using the physical concept of center of mass, we find that the world's economic center of gravity is still located in Europe, even though there is a clear shift towards Asia. To sum up, this thesis makes three main contributions.
First, it provides new estimates of the orders of magnitude of the role of trade in the globalisation-and-environment debate. Second, it computes reliable and disaggregated elasticities for the effect of density on labour productivity in European regions. Third, it allows us, in a geometrically rigorous way, to track the path of the world's economic center of gravity.
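A stylised sketch of the linear-programming reallocation mentioned in the second chapter is given below: labour is moved across sectors within each country to minimise total emissions, under country-employment and world-output constraints. The two-country, two-sector numbers are purely illustrative and are not taken from the thesis.

```python
# Stylised labour-reallocation LP: minimise total emissions subject to
# country-employment and sector-output constraints (1 output unit per worker).
import numpy as np
from scipy.optimize import linprog

# Decision variables: labour allocated to (country, sector) pairs
emission_intensity = np.array([0.9, 0.2,   # country A: dirty, clean sector
                               0.4, 0.1])  # country B: dirty, clean sector
labour_totals = [100.0, 80.0]              # employment constraint per country
output_needs = [70.0, 110.0]               # world output constraint per sector

# Rows 0-1 fix country employment, rows 2-3 fix world sector output.
A_eq = np.array([[1, 1, 0, 0],
                 [0, 0, 1, 1],
                 [1, 0, 1, 0],
                 [0, 1, 0, 1]], dtype=float)
b_eq = labour_totals + output_needs

res = linprog(c=emission_intensity, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print("minimum-emission allocation:", res.x.round(1), "total emissions:", round(res.fun, 1))
```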
Abstract:
Estimating the time since discharge of a spent cartridge or a firearm can be useful in criminal situations involving firearms. The analysis of volatile gunshot residue remaining after shooting using solid-phase microextraction (SPME) followed by gas chromatography (GC) has been proposed to meet this objective. However, current interpretative models suffer from several conceptual drawbacks which render them inadequate for assessing the evidential value of a given measurement. This paper aims to fill this gap by proposing a logical approach based on the assessment of likelihood ratios. A probabilistic model was thus developed and applied to a hypothetical scenario in which alternative hypotheses about the discharge time of a spent cartridge found at a crime scene were forwarded. In order to estimate the parameters required to implement this solution, a non-linear regression model was proposed and applied to real published data. The proposed approach proved to be a valuable method for interpreting aging-related data.
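A hedged sketch of the two ingredients described above (a non-linear regression of a volatile-residue signal against time since discharge, and a likelihood ratio comparing two discharge-time hypotheses) is shown below. The decay form, data and hypotheses are hypothetical, not values from the paper.

```python
# Exponential-decay regression of a volatile-residue signal plus a simple
# likelihood ratio for two discharge-time hypotheses. All values hypothetical.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def decay(t, a, k):
    return a * np.exp(-k * t)

hours = np.array([1, 2, 4, 8, 16, 24], dtype=float)
signal = np.array([9.1, 7.6, 5.2, 2.7, 0.9, 0.4])        # hypothetical GC peak areas

(a, k), _ = curve_fit(decay, hours, signal, p0=(10.0, 0.2))
resid_sd = np.std(signal - decay(hours, a, k), ddof=2)

measured = 3.0                                            # evidence from the spent cartridge
lr = norm.pdf(measured, decay(6.0, a, k), resid_sd) / \
     norm.pdf(measured, decay(20.0, a, k), resid_sd)      # H1: ~6 h ago vs H2: ~20 h ago
print(f"likelihood ratio (6 h vs 20 h): {lr:.1f}")
```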
Abstract:
When dealing with multi-angular image sequences, problems of reflectance changes naturally arise, due either to illumination and acquisition geometry or to interactions with the atmosphere. These phenomena interplay with the scene and lead to a modification of the measured radiance: for example, depending on the angle of acquisition, tall objects may be seen from the top or from the side, and different light scatterings may affect the surfaces. This results in shifts in the acquired radiance that make the problem of multi-angular classification harder and might lead to catastrophic results, since surfaces with the same reflectance return significantly different signals. In this paper, rather than performing atmospheric or bidirectional reflectance distribution function (BRDF) correction, a non-linear manifold learning approach is used to align data structures. This method maximizes the similarity between the different acquisitions by deforming their manifolds, thus enhancing the transferability of classification models among the images of the sequence.
Abstract:
This paper presents multiple kernel learning (MKL) regression as an exploratory spatial data analysis and modelling tool. The MKL approach is introduced as an extension of support vector regression, where MKL uses dedicated kernels to divide a given task into sub-problems and to treat them separately in an effective way. It provides better interpretability for non-linear robust kernel regression at the cost of a more complex numerical optimization. In particular, we investigate the use of MKL as a tool that allows us to avoid using ad hoc topographic indices as covariates in statistical models in complex terrain. Instead, MKL learns these relationships from the data in a non-parametric fashion. A study on data simulated from real terrain features confirms the ability of MKL to enhance the interpretability of data-driven models and to aid feature selection without degrading predictive performance. We also examine the stability of the MKL algorithm with respect to the number of training data samples and to the presence of noise. The results of a real case study are also presented, in which MKL is able to exploit a large set of terrain features computed at multiple spatial scales when predicting mean wind speed in an Alpine region.
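A simplified stand-in for MKL regression is sketched below: two RBF kernels at different spatial scales are combined and a kernel ridge regression is fitted on their weighted sum. Proper MKL learns the kernel weights as part of the optimisation; here they are fixed for illustration, and the terrain features are synthetic.

```python
# Composite-kernel regression as a simplified illustration of the MKL idea:
# each kernel handles a different spatial scale of the problem.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(3)
X = rng.uniform(0, 10, (100, 2))                       # e.g. synthetic terrain coordinates
y = np.sin(X[:, 0]) + 0.3 * X[:, 1] + rng.normal(scale=0.1, size=100)

K_fine = rbf_kernel(X, X, gamma=2.0)                   # short-range structure
K_coarse = rbf_kernel(X, X, gamma=0.05)                # long-range trend
w_fine, w_coarse = 0.6, 0.4                            # fixed kernel weights (MKL would learn these)
K = w_fine * K_fine + w_coarse * K_coarse

model = KernelRidge(kernel="precomputed", alpha=0.1).fit(K, y)
print("training R^2 with the composite kernel:", round(model.score(K, y), 3))
```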
Abstract:
In swarm robotics, communication among the robots is essential. Inspired by biological swarms that use pheromones, we propose the use of chemical compounds to realize group foraging behavior in robot swarms. We designed a fully autonomous robot and then created a swarm using ethanol as the trail pheromone, allowing the robots to communicate with one another indirectly via pheromone trails. Our group recruitment and cooperative transport algorithms provide the robots with the required swarm behavior. We conducted both simulations and experiments with real robot swarms, and analyzed the data statistically to investigate how pheromone communication changes the performance of the swarm in solving foraging recruitment and cooperative transport tasks. The results show that the robots can communicate using pheromone trails, and that the improvement due to pheromone communication may be non-linear, depending on the size of the robot swarm.
Abstract:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending the corresponding approaches to the regional scale represents a major, and as yet largely unresolved, challenge. To address this problem, we have developed a downscaling procedure based on a non-linear Bayesian sequential simulation approach. The basic objective of this algorithm is to estimate the value of the sparsely sampled hydraulic conductivity at non-sampled locations based on its relation to the electrical conductivity, which is available throughout the model space. The in situ relationship between the hydraulic and electrical conductivities is described through a non-parametric multivariate kernel density function. This method is then applied to the stochastic integration of low-resolution, regional-scale electrical resistivity tomography (ERT) data in combination with high-resolution, local-scale downhole measurements of the hydraulic and electrical conductivities. Finally, the overall viability of this downscaling approach is tested and verified by performing and comparing flow and transport simulations through the original and the downscaled hydraulic conductivity fields. Our results indicate that the proposed procedure does indeed allow for obtaining remarkably faithful estimates of the regional-scale hydraulic conductivity structure and correspondingly reliable predictions of the transport characteristics over relatively long distances.
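The non-parametric link between the two conductivities can be illustrated as below: a bivariate kernel density over co-located (electrical, hydraulic) conductivity values is used to draw hydraulic-conductivity samples conditional on an observed electrical conductivity. The data, grid and variable names are synthetic stand-ins for the actual sequential simulation workflow.

```python
# Kernel-density sketch: sample log hydraulic conductivity conditional on the
# electrical conductivity observed at a non-sampled location. Data synthetic.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(5)
log_sigma = rng.normal(0.0, 0.5, 200)                    # log electrical conductivity
log_k = 1.5 * log_sigma + rng.normal(0.0, 0.3, 200)      # log hydraulic conductivity

kde = gaussian_kde(np.vstack([log_sigma, log_k]))

def sample_k_given_sigma(sigma_obs, n=1000):
    """Draw log K from p(K | sigma) on a discretised grid."""
    grid = np.linspace(log_k.min() - 1, log_k.max() + 1, 400)
    density = kde(np.vstack([np.full_like(grid, sigma_obs), grid]))
    probs = density / density.sum()
    return rng.choice(grid, size=n, p=probs)

draws = sample_k_given_sigma(0.4)
print(f"conditional mean log K at sigma=0.4: {draws.mean():.2f}")
```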
Abstract:
AIM: Total imatinib concentrations are currently measured for the therapeutic drug monitoring of imatinib, whereas only the free drug equilibrates with cells for pharmacological action. Owing to technical and cost limitations, routine measurement of free concentrations is generally not performed. In this study, free and total imatinib concentrations were measured to establish a model allowing the confident prediction of imatinib free concentrations based on total concentrations and plasma protein measurements. METHODS: One hundred and fifty total and free plasma concentrations of imatinib were measured in 49 patients with gastrointestinal stromal tumours. A population pharmacokinetic model was built to characterize mean total and free concentrations, with interpatient and intrapatient variability, while taking into account α1-acid glycoprotein (AGP) and human serum albumin (HSA) concentrations in addition to other demographic and environmental covariates. RESULTS: A one-compartment model with first-order absorption was used to characterize total and free imatinib concentrations. Only AGP influenced imatinib total clearance. Imatinib free concentrations were best predicted using a non-linear binding model to AGP, with a dissociation constant Kd of 319 ng ml(-1), assuming a 1:1 molar binding ratio. The addition of HSA to the equation did not improve the prediction of imatinib unbound concentrations. CONCLUSION: Although free-concentration monitoring is probably more appropriate than total-concentration monitoring, it requires an additional ultrafiltration step and sensitive analytical technology, which are not always available in clinical laboratories. The proposed model might represent a convenient approach to estimating imatinib free concentrations. However, therapeutic ranges for free imatinib concentrations remain to be established before such monitoring enters routine practice.
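A sketch of how free imatinib might be predicted from the total concentration under a saturable 1:1 AGP-binding model with Kd = 319 ng/ml (as reported above) is given below. The molecular weights and example concentrations are illustrative assumptions, not parameters reported in the study.

```python
# Free concentration from a saturable 1:1 binding model:
#   C_total = C_free + Bmax * C_free / (Kd + C_free), solved for C_free.
import numpy as np

KD = 319.0            # ng/mL, dissociation constant from the model above
MW_IMATINIB = 493.6   # g/mol, illustrative value
MW_AGP = 44_000.0     # g/mol, illustrative value

def free_imatinib(c_total_ng_ml, agp_g_l):
    """Positive root of the quadratic implied by the 1:1 binding model."""
    bmax = (agp_g_l / MW_AGP) * MW_IMATINIB * 1e6      # binding capacity, ng/mL
    b = KD + bmax - c_total_ng_ml
    return (-b + np.sqrt(b**2 + 4.0 * KD * c_total_ng_ml)) / 2.0

c_total = 2000.0       # ng/mL, hypothetical total trough concentration
agp = 1.0              # g/L, hypothetical AGP level
print(f"predicted free fraction: {free_imatinib(c_total, agp) / c_total:.1%}")
```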