932 results for Linear and multilinear programming
Abstract:
The Electronics and Telecommunications Department of the Universitat de Vic has designed a set of training boards for educational purposes. For students to be able to use these boards as a study tool, an inexpensive and convenient programming system is needed; most commercial programmers, in this case, do not meet these requirements. The goal of this project is to design a programming system that uses serial communication and requires no specific hardware or software. In this way we obtain a self-contained board and a programmer that is free, quick to set up and simple to use. The programming system is divided into three blocks. First, a program we call the "programmer", responsible for transferring program code from the computer to the microcontroller on the training board. Second, a program called the "bootloader", located on the microcontroller, which receives this program code and stores it at the corresponding program-memory addresses. Third, a communication protocol and an error-control scheme that ensure correct communication between the "programmer" and the "bootloader". The objectives of this project were met and, in the tests performed, the programming system worked correctly.
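The abstract does not specify the frame format used between the "programmer" and the "bootloader"; the sketch below only illustrates the general idea of framing program-memory records with a checksum for error control, using a hypothetical layout (start byte, 16-bit address, length, payload, additive checksum).

```python
# Hypothetical host-side framing for a "programmer" -> "bootloader" transfer.
# The real protocol is not given in the abstract; this sketch assumes a frame of
# [0x7E, address (2 bytes, big-endian), length, payload..., checksum], where the
# checksum is the two's complement of the byte sum (a common convention).

def build_frame(address: int, payload: bytes) -> bytes:
    """Build one program-memory record to send over the serial link."""
    body = bytes([(address >> 8) & 0xFF, address & 0xFF, len(payload)]) + payload
    checksum = (-sum(body)) & 0xFF          # receiver re-adds and expects 0
    return bytes([0x7E]) + body + bytes([checksum])

def verify_frame(frame: bytes) -> bool:
    """Bootloader-side check: byte sum of body plus checksum must be 0 mod 256."""
    return frame[0] == 0x7E and sum(frame[1:]) & 0xFF == 0

if __name__ == "__main__":
    f = build_frame(0x0800, bytes([0x12, 0x34, 0x56, 0x78]))
    print(f.hex(), verify_frame(f))   # prints the frame and True
```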
Abstract:
In this work we introduce several classes of dividend barriers in the classical ruin model of risk theory. We study the influence of the barrier strategy on the probability of ruin. A method based on renewal equations [Grandell (1991)], as an alternative to the differential argument [Gerber (1975)], is used to obtain the partial differential equations from which the survival probabilities are solved. Finally, we compute and compare the survival probabilities under linear and parabolic dividend barriers with the aid of simulation.
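As a rough illustration of the simulation approach mentioned at the end of the abstract, the sketch below estimates the survival probability in the classical compound Poisson risk model under a linear dividend barrier b(t) = b0 + a*t. All parameters are arbitrary placeholders, and reflection at the barrier is approximated on a small time grid rather than handled exactly.

```python
# Monte Carlo sketch of the survival probability in the classical risk model
# with a linear dividend barrier b(t) = b0 + a*t. Parameters are illustrative,
# not taken from the paper; barrier reflection is approximated on a time grid.
import random

def survival_probability(u=5.0, c=1.5, lam=1.0, mean_claim=1.0,
                         b0=8.0, a=0.5, horizon=50.0, dt=0.01, n_paths=1000):
    survived = 0
    for _ in range(n_paths):
        surplus, t, ruined = u, 0.0, False
        while t < horizon:
            surplus += c * dt                      # premium income
            barrier = b0 + a * t
            if surplus > barrier:                  # excess paid out as dividends
                surplus = barrier
            if random.random() < lam * dt:         # claim arrival (Poisson approx.)
                surplus -= random.expovariate(1.0 / mean_claim)
            if surplus < 0:
                ruined = True
                break
            t += dt
        survived += not ruined
    return survived / n_paths

print(survival_probability())
```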
Abstract:
From data collected during routine TDM, plasma concentrations of citalopram (CIT) and its metabolites demethylcitalopram (DCIT) and didemethylcitalopram (DDCIT) were measured in 345 plasma samples collected under steady-state conditions. They were from 258 patients treated with usual doses (20-60 mg/d) and from patients medicated with 80-360 mg/d CIT. Most patients had one or several comedications, including other antidepressants, antipsychotics, lithium, anticonvulsants, psychostimulants and somatic medications. Dose-corrected CIT plasma concentrations (C/D ratio) were 2.51 +/- 2.25 ng mL-1 mg-1 (n = 258; mean +/- SD). Patients >65 years had significantly higher dose-corrected CIT plasma concentrations (n = 56; 3.08 +/- 1.35 ng mL-1 mg-1) than younger patients (n = 195; 2.35 +/- 2.46 ng mL-1 mg-1) (P = 0.03). CIT plasma concentrations in the generally recommended dose range were [mean +/- SD, (median)]: 57 +/- 64 (45) ng/mL (10-20 mg/d; n = 64) and 117 +/- 95 (91) ng/mL (21-60 mg/d; n = 96). At higher than usual doses, the following CIT concentrations were measured: 61-120 mg/d, 211 +/- 103 (190) ng/mL (n = 93); 121-200 mg/d, 339 +/- 143 (322) ng/mL (n = 70); 201-280 mg/d, 700 +/- 408 (565) ng/mL (n = 18); 281-360 mg/d, 888 +/- 620 (616) ng/mL (n = 4). When only one sample per patient is considered (at the highest daily dose in the case of repeated dosages), there is a linear and significant correlation (n = 48, r = 0.730; P < 0.001) between daily dose (10-200 mg/d) and CIT plasma concentration. In experiments with dogs, DDCIT was reported to affect the QT interval when present at concentrations >300 ng/mL. In this study, the DDCIT concentration reached 100 ng/mL in a patient treated with 280 mg/d CIT. Twelve other patients treated with 140-320 mg/d CIT had plasma concentrations of DDCIT within the range 52-73 ng/mL. In a subgroup comprising patients treated with ≥160 mg/d CIT whose CIT plasma concentrations were ≤300 ng/mL, and patients treated with ≤200 mg/d CIT whose CIT plasma concentrations were ≥600 ng/mL, the enantiomers of CIT and DCIT were also analyzed. The highest S-CIT concentration measured in this subgroup was 327 ng/mL in a patient treated with 140 mg/d CIT, but an even higher S-CIT concentration (632 ng/mL) was measured in a patient treated with 360 mg/d CIT. In conclusion, there is a highly linear correlation between CIT plasma concentrations and CIT doses, even well above the usual dose range.
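For readers who want to reproduce the kind of dose-concentration regression reported here (r = 0.730 with one sample per patient), the sketch below shows the computation on synthetic values; the numbers are not the study data.

```python
# Illustration of the dose vs. plasma-concentration linear regression reported
# in the abstract. The values below are synthetic, generated only to show the
# computation; they are NOT the study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
dose = rng.uniform(10, 200, size=48)                 # mg/day, one sample per patient
concentration = 2.5 * dose + rng.normal(0, 60, 48)   # ng/mL, hypothetical relationship

result = stats.linregress(dose, concentration)
print(f"slope = {result.slope:.2f} ng/mL per mg/d, r = {result.rvalue:.3f}, "
      f"p = {result.pvalue:.2g}")
```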
Abstract:
The agricultural potential of land is generally assessed and managed based on a one-dimensional view of the soil profile; however, the increased appreciation of sustainable production has stimulated studies on faster and more accurate techniques and methods for evaluating agricultural potential at detailed scales. The objective of this study was to investigate the possibility of using soil magnetic susceptibility for the identification of landscape segments at a detailed scale in the region of Jaboticabal, São Paulo State. The studied area has two slope curvatures, linear and concave, subdivided into three landscape segments: upper slope (US, concave), middle slope (MS, linear) and lower slope (LS, linear). In each of these segments, 20 points were randomly sampled from a database of 207 samples forming a regular grid installed in each landscape segment. The soil physical and chemical properties, CO2 emissions (FCO2) and magnetic susceptibility (MS) of the samples were evaluated in the 0.00-0.15 m layer, with MS represented by the magnetic susceptibility of air-dried fine earth (MS ADFE), of the total sand fraction (MS TS) and of the clay fraction (MS Cl). The principal component analysis showed that MS is an important property that can be used to identify landscape segments, because the correlation of this property with the first principal component was high. The hierarchical cluster analysis identified two groups based on the variables selected by principal component analysis; of the six selected variables, three were related to magnetic susceptibility. The landscape segments were differentiated similarly by the principal component analysis and by the cluster analysis using only the properties with higher discriminatory power. The cluster analysis of MS ADFE, MS TS and MS Cl allowed the formation of three groups that agree with the segment division established in the field. The grouping by cluster analysis indicated MS as a tool that could facilitate the identification of landscape segments and enable the mapping of more homogeneous areas at similar locations.
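A minimal sketch of the PCA-plus-hierarchical-clustering workflow described above, run on placeholder data standing in for the standardized soil variables; column meanings and values are hypothetical.

```python
# Sketch of the PCA + hierarchical clustering workflow described above, on
# hypothetical standardized soil variables (e.g. MS ADFE, MS TS, MS Cl, FCO2, ...).
# The data are random placeholders, not the study's samples.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 6))              # 60 sample points x 6 soil properties

Z = StandardScaler().fit_transform(X)
pca = PCA(n_components=2).fit(Z)
print("loadings on PC1:", np.round(pca.components_[0], 2))  # which variables dominate PC1

scores = pca.transform(Z)
groups = fcluster(linkage(scores, method="ward"), t=3, criterion="maxclust")
print("group sizes:", np.bincount(groups)[1:])
```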
Abstract:
Over the past three decades, pedotransfer functions (PTFs) have been widely used by soil scientists to estimate soil properties in temperate regions in response to the lack of soil data for these regions. Several authors have indicated that little effort has been dedicated to the prediction of soil properties in the humid tropics, where the need for soil property information is of even greater priority. The aim of this paper is to provide an up-to-date repository of past and recently published articles, as well as papers from proceedings of events, dealing with water-retention PTFs for soils of the humid tropics. Of the 35 publications found in the literature on PTFs for the prediction of water retention of soils of the humid tropics, 91 % of the PTFs are based on an empirical approach and only 9 % on a semi-physical approach. Of the empirical PTFs, 97 % are continuous and 3 % (one) is a class PTF; 97 % are based on multiple linear regression and n-th-order polynomial regression techniques, and 3 % (one) is based on the k-Nearest Neighbor approach. Of the continuous PTFs, 84 % are point-based and 16 % are parameter-based; 97 % are equation-based and 3 % (one) is based on pattern recognition. Additionally, it was found that 26 % of the tropical water-retention PTFs were developed for soils in Brazil, 26 % for soils in India, 11 % for soils in other countries in the Americas, and 11 % for soils in other countries in Africa.
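A minimal example of the dominant PTF type identified in this review, a point-based multiple linear regression predicting a single water-retention point from basic soil properties; the predictors, target and coefficients below are hypothetical placeholders, not a published PTF.

```python
# Minimal point-based pedotransfer function of the multiple-linear-regression
# type discussed above: predict volumetric water content at -33 kPa from basic
# soil properties. All predictors and data are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n = 200
sand, clay = rng.uniform(5, 80, n), rng.uniform(5, 60, n)          # %
org_c, bulk_density = rng.uniform(0.2, 3, n), rng.uniform(1.0, 1.7, n)
X = np.column_stack([sand, clay, org_c, bulk_density])
theta_33 = 0.35 - 0.002 * sand + 0.003 * clay + rng.normal(0, 0.02, n)  # synthetic target

ptf = LinearRegression().fit(X, theta_33)
print("coefficients:", np.round(ptf.coef_, 4), "intercept:", round(ptf.intercept_, 3))
```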
Abstract:
The Lorentz-Dirac equation is not an unavoidable consequence of linear and angular momentum conservation alone for a point charge. It also requires an additional assumption concerning the elementary character of the charge. Here we use a less restrictive elementarity assumption for a spinless charge and derive a system of conservation equations that does not, by itself, constitute an equation of motion: because it contains an extra scalar variable, the future evolution of the charge is not determined. We show that a supplementary constitutive relation can be added so that the motion is determined and free from the troubles that are customary in the Lorentz-Dirac equation, i.e., preacceleration and runaways.
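For reference, since the abstract does not reproduce it, the standard Lorentz-Dirac equation for a point charge q of mass m (Gaussian units, metric signature +,-,-,-, overdots denoting proper-time derivatives) is usually written as:

```latex
% Standard Lorentz-Dirac equation, quoted for reference; the modified
% conservation equations derived in the paper differ from this form.
m\,\dot{u}^{\mu}
  = F_{\mathrm{ext}}^{\mu}
  + \frac{2q^{2}}{3c^{3}}
    \left( \ddot{u}^{\mu}
         + \frac{\dot{u}^{\nu}\dot{u}_{\nu}}{c^{2}}\,u^{\mu} \right),
\qquad u^{\nu}u_{\nu}=c^{2}.
```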
Abstract:
In Brazil, grazing mismanagement may lead to soil and pasture degradation. To halt this process, integrated cropping systems such as silvopasture have been an effective alternative, allied with precision agriculture based on soil mapping for site-specific management. In this study, we aimed to identify the soil property that best explains the variability of eucalyptus and forage yield. The experiment was conducted in the 2011/12 crop year in Ribas do Rio Pardo, Mato Grosso do Sul State, Brazil. We analyzed linear and spatial correlations between eucalyptus traits and physical properties of a Typic Quartzipsamment at two depths (0.00-0.10 and 0.10-0.20 m). For that purpose, we set up a geostatistical grid with 72 collection points. Gravimetric moisture in the 0.00-0.10 m layer is an important index of soil physical quality, showing correlation with eucalyptus circumference at breast height (CBH) in a Typic Quartzipsamment. With an increase in resistance to penetration in the soil surface layer, there is an increase in eucalyptus height and in neutral detergent fiber content in the forage crop. From a spatial point of view, the height of eucalyptus and the neutral detergent fiber of the forage can be estimated by co-kriging with soil resistance to penetration. Resistance to penetration values above 2.3 MPa indicated higher-yielding sites.
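The sketch below illustrates the two kinds of correlation analysis mentioned above: a Pearson (linear) correlation between a eucalyptus trait and penetration resistance, and an empirical semivariogram describing spatial structure. Coordinates and values are hypothetical, not the study's 72-point grid, and the co-kriging step itself is not reproduced.

```python
# Linear correlation plus an empirical semivariogram, as a stand-in for the
# linear and spatial (geostatistical) analyses described above. Data are
# hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(3)
xy = rng.uniform(0, 500, size=(72, 2))                 # sample coordinates (m)
resistance = rng.uniform(1.0, 3.5, 72)                 # penetration resistance (MPa)
height = 10 + 2.0 * resistance + rng.normal(0, 1, 72)  # eucalyptus height (m)

r = np.corrcoef(resistance, height)[0, 1]
print(f"Pearson r = {r:.2f}")

def empirical_semivariogram(xy, z, lags):
    """Average 0.5*(z_i - z_j)^2 over point pairs grouped by separation distance."""
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    sq = 0.5 * (z[:, None] - z[None, :]) ** 2
    gamma = []
    for lo, hi in zip(lags[:-1], lags[1:]):
        mask = (d > lo) & (d <= hi)
        gamma.append(sq[mask].mean() if mask.any() else np.nan)
    return np.array(gamma)

print(empirical_semivariogram(xy, height, np.linspace(0, 250, 6)).round(2))
```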
Abstract:
The oxidation of solutions of glucose with methylene-blue as a catalyst in basic media can induce hydrodynamic overturning instabilities, termed chemoconvection in recognition of their similarity to convective instabilities. The phenomenon is due to gluconic acid, the marginally dense product of the reaction, which gradually builds an unstable density profile. Experiments indicate that dominant pattern wavenumbers initially increase before gradually decreasing or can even oscillate for long times. Here, we perform a weakly nonlinear analysis for an established model of the system with simple kinetics, and show that the resulting amplitude equation is analogous to that obtained in convection with insulating walls. We show that the amplitude description predicts that dominant pattern wavenumbers should decrease in the long term, but does not reproduce the aforementioned increasing wavenumber behavior in the initial stages of pattern development. We hypothesize that this is due to horizontally homogeneous steady states not being attained before pattern onset. We show that the behavior can be explained using a combination of pseudo-steady-state linear and steady-state weakly nonlinear theories. The results obtained are in qualitative agreement with the analysis of experiments.
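The weakly nonlinear analysis itself is not reproduced here; the sketch below only shows how a "dominant pattern wavenumber", the quantity whose time evolution is discussed above, can be read off a horizontal concentration profile via the FFT, using a synthetic profile.

```python
# Extracting a dominant pattern wavenumber from a horizontal profile via the
# FFT. The profile is synthetic (7 periods over the domain plus noise); this is
# only a diagnostic illustration, not part of the amplitude-equation analysis.
import numpy as np

L, n = 10.0, 512                               # domain length and grid points
x = np.linspace(0, L, n, endpoint=False)
profile = (np.cos(2 * np.pi * 7 * x / L)
           + 0.1 * np.random.default_rng(4).normal(size=n))

spectrum = np.abs(np.fft.rfft(profile - profile.mean()))
k = np.fft.rfftfreq(n, d=L / n) * 2 * np.pi    # angular wavenumbers
k_dominant = k[np.argmax(spectrum)]
print(f"dominant wavenumber ~ {k_dominant:.2f} (expected {2 * np.pi * 7 / L:.2f})")
```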
Abstract:
Interfacial hydrodynamic instabilities arise in a range of chemical systems. One mechanism for instability is the occurrence of unstable density gradients due to the accumulation of reaction products. In this paper we conduct two-dimensional nonlinear numerical simulations for a member of this class of system: the methylene-blue-glucose reaction. The result of these reactions is the oxidation of glucose to a relatively, but marginally, dense product, gluconic acid, that accumulates at oxygen-permeable interfaces, such as the surface open to the atmosphere. The reaction is catalyzed by methylene-blue. We show that simulations help to disentangle the mechanisms responsible for the onset of instability and the evolution of patterns, and we demonstrate that some of the results are remarkably consistent with experiments. We probe the impact of the upper oxygen boundary condition, for fixed-flux, fixed-concentration, or mixed boundary conditions, and find significant qualitative differences in solution behavior; structures either attract or repel one another depending on the boundary condition imposed. We suggest that measurement of the form of the boundary condition is possible via observation of oxygen penetration, and that improved product yields may be obtained via proper control of boundary conditions in an engineering setting. We also investigate the dependence on parameters such as the Rayleigh number and depth. Finally, we find that the pseudo-steady linear and weakly nonlinear techniques described elsewhere are useful tools for predicting the behavior of instabilities beyond their formal range of validity, as good agreement is obtained with the simulations.
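The full simulations couple several species to the buoyancy-driven flow; as a deliberately stripped-down illustration of the explicit time-stepping style such simulations use, the sketch below advances a single diffusing species with first-order consumption on a 2D grid. All parameters are arbitrary.

```python
# A stripped-down 2D explicit finite-difference step for a single diffusing
# species with first-order consumption. This only illustrates the time-stepping
# style; it is not the coupled methylene-blue-glucose convection model.
import numpy as np

def step(c, D=1.0, k=0.1, dx=0.1, dt=1e-3):
    """One forward-Euler step of dc/dt = D*laplacian(c) - k*c (no-flux borders)."""
    padded = np.pad(c, 1, mode="edge")                 # zero-gradient boundaries
    lap = (padded[2:, 1:-1] + padded[:-2, 1:-1] +
           padded[1:-1, 2:] + padded[1:-1, :-2] - 4 * c) / dx**2
    return c + dt * (D * lap - k * c)

c = np.zeros((64, 64))
c[28:36, 28:36] = 1.0                                  # initial blob of product
for _ in range(500):
    c = step(c)
print(f"total mass after 500 steps: {c.sum():.3f}")
```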
Abstract:
If single-case experimental designs are to be used to establish guidelines for evidence-based interventions in clinical and educational settings, numerical values that reflect treatment effect sizes are required. The present study compares four recently developed procedures for quantifying the magnitude of intervention effects using data with known characteristics. Monte Carlo methods were used to generate AB-design data with potential confounding variables (serial dependence, linear and curvilinear trend, and heteroscedasticity between phases) and two types of treatment effect (level and slope change). The results suggest that data features are important for choosing the appropriate procedure and, thus, inspecting the graphed data visually is a necessary initial stage. In the presence of serial dependence or a change in data variability, the Nonoverlap of All Pairs (NAP) and the Slope and Level Change (SLC) were the only techniques of the four examined that performed adequately. Introducing a data correction step in NAP renders it unaffected by linear trend, as is also the case for the Percentage of Nonoverlapping Corrected Data and SLC. The performance of these techniques indicates that professionals' judgments concerning treatment effectiveness can be readily complemented by both visual and statistical analyses. A flowchart to guide the selection of techniques according to the data characteristics identified by visual inspection is provided.
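A minimal sketch of the Monte Carlo setup and of one of the indices examined: AB-design data generated with a linear trend, lag-one serial dependence and a level change, and the Nonoverlap of All Pairs (NAP) computed as the proportion of phase-A/phase-B pairs in which the B value exceeds the A value (ties counting one half). The effect sizes and nuisance parameters are arbitrary choices, not those of the study.

```python
# AB-design data with a linear trend, AR(1) serial dependence and a level-change
# treatment effect, plus the Nonoverlap of All Pairs (NAP) index.
import numpy as np

rng = np.random.default_rng(5)

def ab_series(n_a=10, n_b=10, level_change=2.0, trend=0.1, rho=0.3):
    n = n_a + n_b
    e = np.empty(n)
    e[0] = rng.normal()
    for t in range(1, n):                          # AR(1) serial dependence
        e[t] = rho * e[t - 1] + rng.normal()
    y = trend * np.arange(n) + e
    y[n_a:] += level_change                        # treatment effect on level
    return y[:n_a], y[n_a:]

def nap(a, b):
    """Proportion of (A, B) pairs with B > A; ties count one half."""
    wins = sum((bj > ai) + 0.5 * (bj == ai) for ai in a for bj in b)
    return wins / (len(a) * len(b))

a, b = ab_series()
print(f"NAP = {nap(a, b):.2f}")
```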
Abstract:
The main goal of this article is to provide an answer to the question: "Does anything forecast exchange rates, and if so, which variables?". It is well known that exchange rate fluctuations are very difficult to predict using economic models, and that a random walk forecasts exchange rates better than any economic model (the Meese and Rogoff puzzle). However, the recent literature has identified a series of fundamentals/methodologies that claim to have resolved the puzzle. This article provides a critical review of the recent literature on exchange rate forecasting and illustrates the new methodologies and fundamentals that have been recently proposed in an up-to-date, thorough empirical analysis. Overall, our analysis of the literature and the data suggests that the answer to the question "Are exchange rates predictable?" is "It depends": on the choice of predictor, forecast horizon, sample period, model, and forecast evaluation method. Predictability is most apparent when one or more of the following hold: the predictors are Taylor rule or net foreign assets, the model is linear, and a small number of parameters are estimated. The toughest benchmark is the random walk without drift.
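The sketch below illustrates the kind of out-of-sample horse race discussed above: a driftless random-walk ("no change") forecast versus a one-predictor linear regression, compared by RMSE. The data are simulated (so the exchange-rate changes are unpredictable by construction), and the predictor is only a stand-in for a fundamental such as a Taylor-rule term.

```python
# Out-of-sample comparison of a driftless random-walk forecast against a
# one-variable linear predictive regression, scored by RMSE. All data are
# simulated stand-ins, not the article's dataset.
import numpy as np

rng = np.random.default_rng(6)
T = 300
fundamental = rng.normal(size=T)          # stand-in predictor (e.g. a Taylor-rule gap)
ds = rng.normal(size=T)                   # exchange-rate log changes (unpredictable here)
s = np.cumsum(ds)

split = 200
errors_rw, errors_lin = [], []
for t in range(split, T - 1):
    errors_rw.append(s[t + 1] - s[t])     # random walk without drift: forecast no change
    # linear model ds_{t+1} = a + b * fundamental_t, re-estimated each period
    b, a = np.polyfit(fundamental[:t], ds[1:t + 1], 1)
    errors_lin.append(s[t + 1] - (s[t] + a + b * fundamental[t]))

rmse = lambda e: float(np.sqrt(np.mean(np.square(e))))
print(f"RMSE random walk: {rmse(errors_rw):.3f}  linear model: {rmse(errors_lin):.3f}")
```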
Abstract:
This study aimed to use a plantar pressure insole to estimate the three-dimensional ground reaction force (GRF) as well as the frictional torque (T(F)) during walking. Eleven subjects, six healthy and five patients with ankle disease, participated in the study, wearing pressure insoles during several walking trials on a force-plate. The plantar pressure distribution was analyzed, and 10 principal components of 24 regional pressure values, together with the stance time percentage (STP), were considered for GRF and T(F) estimation. Both linear and non-linear approximators were used for estimating the GRF and T(F), based on two learning strategies using intra-subject and inter-subject data. The RMS error and the correlation coefficient between the approximators and the actual patterns obtained from the force-plate were calculated. Our results showed better performance for non-linear approximation, especially when the STP was considered as an input. The lowest errors were observed for the vertical force (4%) and the anterior-posterior force (7.3%), while the medial-lateral force (11.3%) and the frictional torque (14.7%) had higher errors. The results obtained for the patients showed higher errors; nevertheless, when data from the same patient were used for learning, the results improved and, in general, only slight differences from healthy subjects were observed. In conclusion, this study showed that an ambulatory pressure insole with data normalization, an optimal choice of inputs and a well-trained non-linear mapping function can efficiently estimate the three-dimensional ground reaction force and frictional torque over consecutive gait cycles without requiring a force-plate.
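A sketch of the estimation pipeline described above: reduce 24 regional pressure values to 10 principal components, append the stance time percentage, and map the result to the vertical GRF with a linear and a non-linear approximator. The data are random placeholders and the model settings are arbitrary, not those of the study.

```python
# PCA on regional insole pressures, then linear vs. non-linear regression to a
# vertical GRF target. Data are random placeholders, not the study's recordings.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
pressures = rng.random((1000, 24))                    # 24 regional pressure values
stp = rng.random((1000, 1))                           # stance time percentage
grf_v = pressures.sum(axis=1) + 0.5 * stp.ravel() + rng.normal(0, 0.5, 1000)

pcs = PCA(n_components=10).fit_transform(pressures)
X = np.hstack([pcs, stp])
X_tr, X_te, y_tr, y_te = train_test_split(X, grf_v, random_state=0)

for name, model in [("linear", LinearRegression()),
                    ("non-linear", MLPRegressor(hidden_layer_sizes=(20,),
                                                max_iter=2000, random_state=0))]:
    model.fit(X_tr, y_tr)
    rmse = np.sqrt(np.mean((model.predict(X_te) - y_te) ** 2))
    print(f"{name}: RMSE = {rmse:.2f}")
```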
Abstract:
The purpose of this Interstate Corridor Plan (plan) is to provide the Iowa Department of Transportation (Iowa DOT) with an initial screening and prioritization of interstate corridors/segments. This process evaluates the entire interstate system, independent of current financial constraints, using a select group of criteria weighted in terms of their relative significance. The resulting segments would then represent those areas that should be considered for further study (e.g., environmental, design, engineering), with the possibility of being considered for programming by the Iowa Transportation Commission. There was a dominant theme present in conversations with those department stakeholders who have a keen interest in the product of this planning effort. A statement that was often heard was that staff needed more information to help answer the question, “Where do we need to be looking to next, and when?” There was a strong desire to be able to use this plan to help populate that initial pool of candidate segments that would progress towards further study, as discussed below. It was this theme that framed the need for this plan and ultimately guided its development. Further study: As acknowledged at the beginning of this section, the product of this planning effort will be an initial screening and prioritization of interstate corridors/segments. While this initial screening will assist the Iowa DOT in identifying those areas that should be considered for further study, the plan will not identify specific projects or alternatives that could be directly considered as part of the programming process. Bridging the gap between this plan and the programming process are a variety of environmental, design, and engineering activities conducted by various Iowa DOT offices. It is these activities that will further refine the priority corridors/segments identified in this plan into candidate projects. In addition, should the evaluation process developed through this planning effort prove to be successful, it is possible that there will be additional applications, such as future primary system highway plans and statewide freight plans.
Abstract:
Aim: To assess the geographical transferability of niche-based species distribution models fitted with two modelling techniques. Location: Two distinct geographical study areas in Switzerland and Austria, in the subalpine and alpine belts. Methods: Generalized linear and generalized additive models (GLM and GAM) with a binomial probability distribution and a logit link were fitted for 54 plant species, based on topoclimatic predictor variables. These models were then evaluated quantitatively and used for spatially explicit predictions within (internal evaluation and prediction) and between (external evaluation and prediction) the two regions. Comparisons of evaluations and spatial predictions between regions and models were conducted in order to test if species and methods meet the criteria of full transferability. By full transferability, we mean that: (1) the internal evaluation of models fitted in region A and B must be similar; (2) a model fitted in region A must at least retain a comparable external evaluation when projected into region B, and vice-versa; and (3) internal and external spatial predictions have to match within both regions. Results: The measures of model fit are, on average, 24% higher for GAMs than for GLMs in both regions. However, the differences between internal and external evaluations (AUC coefficient) are also higher for GAMs than for GLMs (a difference of 30% for models fitted in Switzerland and 54% for models fitted in Austria). Transferability, as measured with the AUC evaluation, fails for 68% of the species in Switzerland and 55% in Austria for GLMs (respectively for 67% and 53% of the species for GAMs). For both GAMs and GLMs, the agreement between internal and external predictions is rather weak on average (Kulczynski's coefficient in the range 0.3-0.4), but varies widely among individual species. The dominant pattern is an asymmetrical transferability between the two study regions (a mean decrease of 20% for the AUC coefficient when the models are transferred from Switzerland and 13% when they are transferred from Austria). Main conclusions: The large inter-specific variability observed among the 54 study species underlines the need to consider more than a few species to test properly the transferability of species distribution models. The pronounced asymmetry in transferability between the two study regions may be due to peculiarities of these regions, such as differences in the ranges of environmental predictors or the varied impact of land-use history, or to species-specific reasons like differential phenotypic plasticity, existence of ecotypes or varied dependence on biotic interactions that are not properly incorporated into niche-based models. The lower variation between internal and external evaluation of GLMs compared to GAMs further suggests that overfitting may reduce transferability. Overall, a limited geographical transferability calls for caution when projecting niche-based models for assessing the fate of species in future environments.
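A minimal sketch of the transferability test described above for the GLM case: a binomial GLM (logistic regression) is fitted on simulated presence/absence data from one region, then evaluated by AUC both internally and when projected onto a second region with shifted predictor ranges. All data and coefficients are simulated stand-ins.

```python
# Fit a binomial GLM (logistic regression) in "region A" and compare internal
# vs. external (projected to "region B") AUC. Data are simulated stand-ins for
# topoclimatic predictors and species presence/absence.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(8)

def simulate_region(n, shift=0.0):
    X = rng.normal(loc=shift, size=(n, 3))            # topoclimatic predictors
    p = 1 / (1 + np.exp(-(0.8 * X[:, 0] - 1.2 * X[:, 1])))
    return X, rng.binomial(1, p)

X_a, y_a = simulate_region(500)                       # "region A"
X_b, y_b = simulate_region(500, shift=0.7)            # "region B", shifted predictor ranges

glm = LogisticRegression().fit(X_a, y_a)
auc_internal = roc_auc_score(y_a, glm.predict_proba(X_a)[:, 1])
auc_external = roc_auc_score(y_b, glm.predict_proba(X_b)[:, 1])
print(f"internal AUC = {auc_internal:.2f}, external AUC = {auc_external:.2f}")
```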