959 results for Linear program model
Abstract:
We extend the linear reforms introduced by Pfähler (1984) to the case of dual taxes. We study the relative effect that linear dual tax cuts have on the inequality of the income distribution (a symmetrical study can be made for linear dual tax hikes). We also introduce measures of the degree of progressivity for dual taxes and show that they can be connected to the Lorenz dominance criterion. Additionally, we study the tax liability elasticity of each of the proposed reforms. Finally, by means of a microsimulation model and a large data set of taxpayers drawn from the 2004 Spanish Income Tax return population, 1) we compare different yield-equivalent tax cuts applied to the Spanish dual income tax and 2) we investigate how much income redistribution the dual tax reform (Act 35/2006) introduced with respect to the previous tax.
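As a hedged illustration of the kind of comparison made in point 1), the Python sketch below contrasts two yield-equivalent linear cuts of a stylized flat tax by their effect on the post-tax Gini coefficient. The incomes, the baseline tax, and the revenue loss R are all synthetic, and the two cut designs are generic examples rather than Pfähler's exact decomposition.

```python
import numpy as np

def gini(x):
    """Gini coefficient of a non-negative income vector."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    # Standard formula: G = 2 * sum(i * x_(i)) / (n * sum(x)) - (n + 1) / n
    return 2 * np.sum(np.arange(1, n + 1) * x) / (n * x.sum()) - (n + 1) / n

rng = np.random.default_rng(0)
income = rng.lognormal(mean=10, sigma=0.6, size=10_000)  # synthetic taxpayers
tax = 0.25 * income                                      # stylized flat baseline tax

# Two yield-equivalent linear cuts with the same total revenue loss R:
R = 0.10 * tax.sum()
cut_proportional = tax - R * tax / tax.sum()   # cut proportional to liability
cut_lump_sum     = tax - R / income.size       # equal per-capita rebate

for label, t in [("proportional cut", cut_proportional),
                 ("lump-sum cut", cut_lump_sum)]:
    print(f"{label}: post-tax Gini = {gini(income - t):.4f}")
```

The lump-sum rebate lowers the post-tax Gini more than the proportional cut, which is the sort of inequality ranking the reforms above are compared on.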
Abstract:
This paper introduces a new model of trend (or underlying) inflation. In contrast to many earlier approaches, which allow trend inflation to evolve according to a random walk, ours is a bounded model in which trend inflation is constrained to lie in an interval. The bounds of this interval can either be fixed or estimated from the data. Our model also allows for a time-varying degree of persistence in the transitory component of inflation. The bounds placed on trend inflation mean that standard econometric methods for estimating linear Gaussian state space models cannot be used, so we develop a posterior simulation algorithm for estimating the bounded trend inflation model. In an empirical exercise with CPI inflation, we find the model to work well, yielding more sensible measures of trend inflation and forecasting better than popular alternatives such as the unobserved components stochastic volatility model.
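A minimal sketch of the bounded-trend idea: simulate trend inflation as a random walk whose innovations are drawn from a truncated distribution (here by rejection), with observed inflation as trend plus a transitory component. All parameter values are invented, and the paper's time-varying persistence and posterior sampler are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)
T, a, b = 200, 0.0, 5.0          # horizon and (fixed) bounds on trend inflation
sigma_tau, sigma_eps = 0.1, 0.8  # innovation standard deviations (illustrative)

tau = np.empty(T)                # trend inflation: a bounded random walk
tau[0] = 2.0
for t in range(1, T):
    # Draw the random-walk innovation from its truncated distribution by
    # rejection: re-draw until the proposed trend respects the bounds.
    step = rng.normal(0.0, sigma_tau)
    while not (a <= tau[t - 1] + step <= b):
        step = rng.normal(0.0, sigma_tau)
    tau[t] = tau[t - 1] + step

infl = tau + rng.normal(0.0, sigma_eps, size=T)  # observed = trend + transitory
print(f"trend stays in [{tau.min():.2f}, {tau.max():.2f}] within bounds [{a}, {b}]")
```

The truncation is exactly what breaks linear-Gaussian state-space estimators such as the Kalman filter, motivating the simulation-based posterior algorithm the abstract describes.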
Abstract:
In this work we introduce and analyze a linear size-structured population model with infinite states-at-birth. We model the dynamics of a population in which individuals have two distinct life stages: an “active” phase, in which individuals grow, reproduce, and die, and a second “resting” phase, in which individuals only grow. Transition between these two phases depends on individuals’ size. First, we show that the problem is governed by a positive quasicontractive semigroup on the biologically relevant state space. Then we investigate, in the framework of the spectral theory of linear operators, the asymptotic behavior of solutions of the model. We prove that, under biologically plausible assumptions, the associated semigroup has the property of asynchronous exponential growth.
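A schematic of the kind of two-phase system described (the notation below is assumed for illustration and is not taken from the paper): u and v denote the active and resting densities over size s, and the integral recruitment term reflects "infinite states-at-birth", i.e. newborns may enter at any size rather than at a single birth size.

```latex
% u = active density, v = resting density, gamma_i = growth rates,
% mu = mortality, tau_i = size-dependent phase transition rates,
% beta(s, y) = rate at which size-y adults produce size-s offspring,
% m = maximal size. All symbols are illustrative assumptions.
\begin{align*}
  \partial_t u(s,t) + \partial_s\bigl(\gamma_1(s)\,u\bigr)
    &= -\bigl(\mu(s) + \tau_1(s)\bigr)\,u + \tau_2(s)\,v
       + \int_0^{m} \beta(s,y)\,u(y,t)\,\mathrm{d}y,\\
  \partial_t v(s,t) + \partial_s\bigl(\gamma_2(s)\,v\bigr)
    &= \tau_1(s)\,u - \tau_2(s)\,v .
\end{align*}
```

Asynchronous exponential growth then means the solution converges, after rescaling by the leading exponential, to a fixed stable size profile independent of the initial data.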
Abstract:
Asynchronous exponential growth has been extensively studied in population dynamics. In this paper we determine the asymptotic behaviour of a non-linear age-dependent model which takes sexual reproduction interactions into account. The main feature of our model is that the non-linear process converges to a linear one as the solution becomes large, so that the population undergoes asynchronous growth. The steady-state analysis and the corresponding stability analysis are carried out in full and summarized in a bifurcation diagram in terms of the parameter R0. Furthermore, the effect of intraspecific competition is taken into account, leading to complex dynamics around the steady states.
Abstract:
An online algorithm for determining respiratory mechanics in patients using non-invasive ventilation (NIV) in pressure support mode was developed and embedded in a ventilator system. Based on multiple linear regression (MLR) of respiratory data, the algorithm was tested on a patient bench model under conditions with and without leak, simulating a variety of mechanics. Bland-Altman analysis indicates reliable measures of compliance across the clinical range of interest (±11-18% limits of agreement). Resistance measures showed large quantitative errors (30-50%); however, it was still possible to qualitatively distinguish between normal and obstructive resistances. This outcome provides clinically significant information for ventilator titration and patient management.
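MLR estimates of this kind are typically built on the single-compartment equation of motion, Paw(t) = R·flow(t) + volume(t)/C + P0. The sketch below simulates one noisy breath and recovers R and C by least squares; the breath shape and parameter values are invented, and the paper's exact regressors and leak handling are not specified here.

```python
import numpy as np

# Single-compartment equation of motion commonly used for MLR estimates:
#   Paw(t) = R * flow(t) + volume(t) / C + P0
rng = np.random.default_rng(2)
t = np.linspace(0, 3, 300)
flow = np.where(t < 1, 0.5, -0.25)        # L/s: square-wave inspiration/expiration
volume = np.cumsum(flow) * (t[1] - t[0])  # L: integrated flow
R_true, C_true, P0 = 10.0, 0.05, 5.0      # cmH2O/L/s, L/cmH2O, cmH2O (illustrative)
paw = R_true * flow + volume / C_true + P0 + rng.normal(0, 0.3, t.size)

# Multiple linear regression: regress airway pressure on flow and volume.
X = np.column_stack([flow, volume, np.ones_like(t)])
coef, *_ = np.linalg.lstsq(X, paw, rcond=None)
print(f"R = {coef[0]:.1f} cmH2O/L/s, C = {1000 / coef[1]:.0f} mL/cmH2O")
```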
Abstract:
Breast cancer is one of the most common cancers, affecting one in eight women during their lives. Survival rates have increased steadily thanks to early diagnosis with mammography screening and more efficient treatment strategies. Post-operative radiation therapy is a standard of care in the management of breast cancer and has been shown to efficiently reduce both the local recurrence rate and breast cancer mortality. Radiation therapy is, however, associated with some late effects for long-term survivors. Radiation-induced secondary cancer is a relatively rare but severe late effect of radiation therapy. Currently, radiotherapy plans are essentially optimized to maximize tumor control and minimize late deterministic effects (tissue reactions), which are mainly associated with high doses (≫ 1 Gy). With improved cure rates and new radiation therapy technologies, it is also important to evaluate and minimize secondary cancer risks for different treatment techniques. This is a particularly challenging task due to the large uncertainties in the dose-response relationship.

In contrast with late deterministic effects, secondary cancers may be associated with much lower doses, and therefore out-of-field doses (also called peripheral doses), which are typically below 1 Gy, need to be determined accurately. Out-of-field doses result from patient scatter and head scatter from the treatment unit. These doses are particularly challenging to compute, and we characterized them by Monte Carlo (MC) calculation. A detailed MC model of the Siemens Primus linear accelerator was thoroughly validated with measurements. We investigated the accuracy of such a model for retrospective dosimetry in epidemiological studies on secondary cancers. Considering that patients in such large studies could be treated on a variety of machines, we assessed the uncertainty in reconstructed peripheral dose due to the variability of peripheral dose among various linac geometries. For large open fields (> 10×10 cm²), the uncertainty would be less than 50%, but for small fields and wedged fields the uncertainty in the reconstructed dose could rise to a factor of 10. It was concluded that such a model could be used only for conventional treatments using large open fields.

The MC model of the Siemens Primus linac was then used to compare out-of-field doses for different treatment techniques in a female whole-body CT-based phantom. Current techniques such as conformal wedge-based radiotherapy and hybrid IMRT were investigated and compared to older two-dimensional radiotherapy techniques. MC doses were also compared to those of a commercial Treatment Planning System (TPS). While the TPS is routinely used to determine the dose to the contralateral breast and the ipsilateral lung, which are mostly outside the treatment fields, we have shown that these doses may be highly inaccurate depending on the treatment technique investigated. MC shows that hybrid IMRT is dosimetrically similar to three-dimensional wedge-based radiotherapy within the field, but offers substantially reduced doses to out-of-field healthy organs.

Finally, many different approaches to risk estimation extracted from the literature were applied to the calculated MC dose distributions. Absolute risks varied substantially, as did the ratio of risks between two treatment techniques, reflecting the large uncertainties involved with current risk models.
Despite all these uncertainties, the hybrid IMRT technique investigated resulted in systematically lower cancer risks than any of the other treatment techniques. More epidemiological studies with accurate dosimetry are required in the future to construct robust risk models. In the meantime, any treatment strategy that reduces out-of-field doses to healthy organs should be investigated. Electron radiotherapy might offer interesting possibilities in this regard.
Abstract:
In the economic literature, information deficiencies and computational complexities have traditionally been handled through the aggregation of agents and institutions. In input-output modelling, researchers have been interested in the aggregation problem since the early 1950s. Extending the conventional input-output aggregation approach to social accounting matrix (SAM) models may help to identify the effects caused by the information problems and data deficiencies that usually appear in the SAM framework. This paper develops the theory of aggregation and applies it to the social accounting matrix model of multipliers. First, we define the concept of linear aggregation in a SAM database context. Second, we define the aggregated partitioned matrices of multipliers which are characteristic of the SAM approach. Third, we extend the analysis to other related concepts, such as aggregation bias and consistency in aggregation. Finally, we provide an illustrative example that shows the effects of aggregating a social accounting matrix model.
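A toy numerical sketch of the concepts above, with invented coefficients rather than the paper's data: accounts are aggregated with a grouping matrix, and "aggregation bias" is taken as the gap between aggregating then inverting and inverting then aggregating; if consistency in aggregation held, this gap would vanish.

```python
import numpy as np

# Toy SAM multiplier aggregation. M = (I - A)^-1 is the multiplier matrix
# of the average-expenditure-propensity matrix A (all numbers illustrative).
A = np.array([[0.10, 0.20, 0.05],
              [0.15, 0.05, 0.25],
              [0.20, 0.10, 0.10]])
M = np.linalg.inv(np.eye(3) - A)

# Aggregate accounts 2 and 3 into one group. S sums rows into groups; W
# distributes a group back over its members using base-year totals x.
S = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 1.0]])
x = np.array([100.0, 60.0, 40.0])     # account totals (illustrative)
W = (S * x).T / (S @ x)               # within-group shares, columns sum to 1

A_agg = S @ A @ W                     # aggregated coefficient matrix
M_agg = np.linalg.inv(np.eye(2) - A_agg)
bias = M_agg - S @ M @ W              # aggregate-then-invert vs. invert-then-aggregate
print(bias)
```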
Abstract:
Graph pebbling is a network model for studying whether or not a given supply of discrete pebbles can satisfy a given demand via pebbling moves. A pebbling move across an edge of a graph takes two pebbles from one endpoint and places one pebble at the other endpoint; the other pebble is lost in transit as a toll. It has been shown that deciding whether a supply can meet a demand on a graph is NP-complete. The pebbling number of a graph is the smallest t such that every supply of t pebbles can satisfy every demand of one pebble. Deciding whether the pebbling number is at most k is Π₂ᴾ-complete. In this paper we develop a tool, called the Weight Function Lemma, for computing upper bounds and sometimes exact values for pebbling numbers with the assistance of linear optimization. With this tool we are able to calculate the pebbling numbers of much larger graphs than with previous algorithms, and much more quickly as well. We also obtain results for many families of graphs, in many cases by hand, with much simpler and remarkably shorter proofs than in previously existing arguments (with certificates typically of size at most the number of vertices times the maximum degree), especially for highly symmetric graphs. Here we apply the Weight Function Lemma to several specific graphs, including the Petersen, Lemke, 4th weak Bruhat, and Lemke squared graphs and two random graphs, as well as to a number of infinite families of graphs, such as trees, cycles, graph powers of cycles, cubes, and some generalized Petersen and Coxeter graphs. This partly answers a question of Pachter et al. by computing the pebbling exponent of cycles to within an asymptotically small range. It is conceivable that this method yields an approximation algorithm for graph pebbling.
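For context, the definition can be checked directly on tiny graphs by exhausting all pebble distributions and moves. The brute-force sketch below (not the paper's LP-based Weight Function Lemma, whose whole point is to avoid this exponential search) verifies the classical value 4 for the path on three vertices.

```python
from itertools import combinations_with_replacement
from functools import lru_cache

def solvable(dist, target, edges):
    """Can a pebble reach `target` from distribution `dist` via pebbling moves?"""
    @lru_cache(maxsize=None)
    def rec(d):
        if d[target] >= 1:
            return True
        for u, v in edges:
            for a, b in ((u, v), (v, u)):   # moves in both directions
                if d[a] >= 2:
                    nd = list(d)
                    nd[a] -= 2              # pay the one-pebble toll
                    nd[b] += 1
                    if rec(tuple(nd)):
                        return True
        return False
    return rec(tuple(dist))

def distributions(n, t):
    """All distributions of t pebbles on n vertices."""
    for c in combinations_with_replacement(range(n), t):
        d = [0] * n
        for v in c:
            d[v] += 1
        yield tuple(d)

def pebbling_number(n, edges):
    """Smallest t such that every t-pebble distribution reaches every vertex."""
    t = 1
    while True:
        if all(solvable(d, target, edges)
               for target in range(n)
               for d in distributions(n, t)):
            return t
        t += 1

# Path on 3 vertices has pebbling number 2^diameter = 4.
print(pebbling_number(3, [(0, 1), (1, 2)]))  # -> 4
```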
Abstract:
By definition, alcohol expectancies (after alcohol I expect X) and drinking motives (I drink to achieve X) are conceptually distinct constructs. Theorists have argued that motives mediate the association between expectancies and drinking outcomes. Yet, given the use of different instruments, do these constructs remain distinct when assessment items are matched? The present study tested to what extent motives mediated the link between expectancies and alcohol outcomes when identical items were used, first as expectancies and then as motives. A linear structural equation model was estimated based on a nationally representative sample of 5,779 alcohol-using students in Switzerland (mean age = 15.2 years). The results showed that expectancies explained up to 38% of the variance in motives. Together with motives, they explained up to 48% of the variance in alcohol outcomes (volume, 5+ drinking, and problems). In 10 of 12 outcomes, there was a significant mediated effect that was often larger than the direct expectancy effect. For coping, the expectancy effect was close to zero, indicating the strongest form of mediation. In only one case (conformity and 5+ drinking) was there a direct expectancy effect but no mediation. To conclude, the study demonstrates that motives are distinct from expectancies even when identical items are used. Motives are more proximally related to different alcohol outcomes, often mediating the effects of expectancies. Consequently, the effectiveness of interventions, particularly those aimed at coping drinkers, should be improved through a shift in focus from expectancies to drinking motives.
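The mediation logic can be sketched with a single-mediator, product-of-coefficients decomposition on synthetic data (a simplification of the full structural equation model estimated in the study; all path strengths below are invented):

```python
import numpy as np

# expectancy (X) -> motive (M) -> drinking outcome (Y), synthetic data.
rng = np.random.default_rng(3)
n = 5_000
X = rng.normal(size=n)                       # expectancy score
M = 0.6 * X + rng.normal(size=n)             # motive, partly driven by expectancy
Y = 0.5 * M + 0.05 * X + rng.normal(size=n)  # outcome, mostly via the motive

def ols(y, *regs):
    """Least-squares coefficients of y on the given regressors plus a constant."""
    Z = np.column_stack(regs + (np.ones_like(y),))
    return np.linalg.lstsq(Z, y, rcond=None)[0]

a = ols(M, X)[0]                 # X -> M path
b, c_prime = ols(Y, M, X)[:2]    # M -> Y path and direct X -> Y effect
print(f"indirect (mediated) effect a*b = {a * b:.3f}, direct c' = {c_prime:.3f}")
```

A mediated effect (a·b) that dwarfs the direct effect c' is the pattern the abstract reports for most outcomes, with coping the extreme case of c' near zero.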
Abstract:
This research focuses on creating a teacher training and advising programme to address student diversity in the university setting through the principles of universal design for instruction (UDI). The aim is to respond to: a) the need to implement a pedagogical model in the university setting that responds to student diversity, b) the need to train teaching staff and change their attitudes towards disability, and c) the need to bring the new Spanish legislative framework on disability into the classrooms of our universities. Through an action-research process, we identify the pedagogical needs and difficulties of teaching staff at the various Catalan universities in building educational environments that guarantee equal opportunities for students with disabilities. The data obtained allow us to create a training proposal for university teaching staff based on the educational paradigm of universal design for instruction.
Abstract:
This paper introduces local distance-based generalized linear models. These models extend (weighted) distance-based linear models, first with the generalized linear model concept and then by localizing. Distances between individuals are the only predictor information needed to fit these models. They are therefore applicable to mixed (qualitative and quantitative) explanatory variables or when the regressor is of functional type. Models can be fitted and analysed with the R package dbstats, which implements several distance-based prediction methods.
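The core idea can be sketched outside R as well: recover Euclidean coordinates from the distance matrix via classical multidimensional scaling, then fit an ordinary GLM on them. A minimal Python illustration on synthetic data (dbstats itself implements more refined weighted and local variants; nothing here is its API):

```python
import numpy as np

rng = np.random.default_rng(4)
Z = rng.normal(size=(200, 3))                  # hidden predictors, never used below
y = (Z[:, 0] + Z[:, 1] > 0).astype(float)
D = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=2)  # distances: all we keep

# Classical MDS (principal coordinates) from squared distances.
n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n            # centering matrix
B = -0.5 * J @ (D ** 2) @ J
w, V = np.linalg.eigh(B)                       # eigenvalues ascending
k = 3
coords = V[:, -k:] * np.sqrt(np.maximum(w[-k:], 0))

# Plain gradient-ascent logistic regression (the GLM) on the coordinates.
Xd = np.column_stack([coords, np.ones(n)])
beta = np.zeros(Xd.shape[1])
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-Xd @ beta))
    beta += 0.1 * Xd.T @ (y - p) / n           # log-likelihood gradient step
print(f"training accuracy: {np.mean((Xd @ beta > 0) == (y == 1)):.2f}")
```

Because only D enters the fit, the same pipeline works whenever a sensible distance exists, e.g. Gower distances for mixed variables or L2 distances between curves.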
Abstract:
Performance prediction and application behavior modeling have been the subject of extensive research aiming to estimate application performance with acceptable precision. A novel approach to predicting the performance of parallel applications is based on the concept of Parallel Application Signatures, which consists of extracting an application's most relevant parts (phases) and the number of times they repeat (weights). By executing these phases on a target machine and multiplying each phase's execution time by its weight, an estimate of the application's total execution time can be made. One problem is that the performance of an application depends on the program workload. Each type of workload affects how an application performs on a given system differently, and so affects the signature's execution time. Since the workloads used in most scientific parallel applications have well-known dimensions and data ranges, and the behavior of these applications is mostly deterministic, a model of how a program's workload affects its performance can be obtained. We create a new methodology to model how a program's workload affects the parallel application signature. Using regression analysis, we are able to generalize each phase's execution time and weight as functions of the workload, in order to predict an application's performance on a target system for any workload within a predefined range. We validate our methodology using a synthetic program, benchmark applications, and well-known real scientific applications.
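A hedged sketch of the prediction step: per-phase execution times and repetition counts measured at a few workload sizes are regressed on the workload parameter, then recombined as sum(time × weight) at an unseen size. Phase names, timings, and the linear fit are all illustrative assumptions, not the paper's data or exact regression form.

```python
import numpy as np

measured_N = np.array([100, 200, 400, 800])      # training workload sizes
phase_time = {                                   # seconds per phase visit
    "compute":  np.array([0.8, 1.6, 3.2, 6.4]),  # roughly linear in N
    "exchange": np.array([0.2, 0.3, 0.5, 0.9]),
}
phase_weight = {                                 # repetitions per run
    "compute":  np.array([10, 10, 10, 10]),
    "exchange": np.array([9, 9, 9, 9]),
}

def predict_runtime(N):
    """Signature-style estimate: sum over phases of fitted time * fitted weight."""
    total = 0.0
    for phase in phase_time:
        t = np.polyval(np.polyfit(measured_N, phase_time[phase], 1), N)
        w = np.polyval(np.polyfit(measured_N, phase_weight[phase], 1), N)
        total += t * w
    return total

print(f"predicted runtime at N=1600: {predict_runtime(1600):.1f} s")
```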
Abstract:
Cryo-electron microscopy of vitreous sections (CEMOVIS) has recently been shown to provide images of biological specimens with unprecedented quality and resolution. Cutting the sections, however, remains the major difficulty. Here, we examine the parameters influencing the quality of the sections and analyse the resulting artefacts, in particular knife marks, compression, crevasses, and chatter. We propose a model taking into account the interplay between viscous flow and fracture. We confirm that crevasses are formed on only one side of the section, and we define conditions by which they can be avoided. Chatter is an effect of irregular compression due to friction of the section on the knife edge, and conditions to prevent it are also explored. In the absence of crevasses and chatter, the bulk of the section is compressed approximately homogeneously. Within this approximation, it is possible to correct for compression in the bulk of the section by a simple linear transformation. A research programme is proposed to test and refine our understanding of the sectioning process.
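In practice such a correction is a single anisotropic scaling of coordinates. A toy sketch under the assumption of volume conservation (the factor f and the axis convention are illustrative, not values from the paper):

```python
import numpy as np

# Homogeneous compression by factor f along the cutting direction (x) is
# undone by stretching x by 1/f; assuming conserved volume, the thickness
# (z) is rescaled by f, and the knife-edge direction (y) is unchanged.
f = 0.7                                # measured compression factor (illustrative)
T = np.diag([1.0 / f, 1.0, f])         # the simple linear (de)compression map
points = np.array([[1.4, 2.0, 0.35]])  # coordinates in the compressed section
print(points @ T.T)                    # decompressed coordinates
```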
Abstract:
Glucose supply from blood to brain occurs through facilitative transporter proteins. A near-linear relation between brain and plasma glucose has been experimentally determined and described by a reversible model of enzyme kinetics. A conformational four-state exchange model accounting for trans-acceleration and asymmetry of the carrier was included in a recently developed multi-compartmental model of glucose transport. Based on this model, we demonstrate that brain glucose (G(brain)) as a function of plasma glucose (G(plasma)) can be described by a single analytical equation comprising three kinetic compartments: blood, endothelial cells, and brain. Transport was described by four parameters: the apparent half-saturation constant K(t), the apparent maximum rate constant T(max), the glucose consumption rate CMR(glc), and the iso-inhibition constant K(ii), which suggests that G(brain) acts as an inhibitor of the isomerisation of the unloaded carrier. Previously published data, in which G(brain) was quantified as a function of plasma glucose by either biochemical methods or NMR spectroscopy, were used to determine the aforementioned kinetic parameters. Glucose transport was characterized by K(t) ranging from 1.5 to 3.5 mM, T(max)/CMR(glc) from 4.6 to 5.6, and K(ii) from 51 to 149 mM. It is noteworthy that K(t) was on the order of a few mM, as previously determined from the reversible model. The conformational four-state exchange model of glucose transport into the brain includes both efflux and transport inhibition by G(brain), predicting that G(brain) eventually approaches a maximum concentration. However, since K(ii) largely exceeds G(plasma), iso-inhibition is unlikely to be of substantial importance for plasma glucose below 25 mM. As a consequence, the reversible model can account for most experimental observations under euglycaemia and moderate cases of hypo- and hyperglycaemia.
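For reference, the reversible Michaelis-Menten model that the abstract cites as describing the near-linear relation has a standard steady-state form (textbook notation assumed here; the paper's four-state equation extends this with the iso-inhibition constant K(ii)):

```latex
% Steady-state brain glucose under the reversible Michaelis-Menten model,
% linear in G_plasma for fixed T_max/CMR_glc; all symbols as in the abstract.
G_{\mathrm{brain}}
  = \frac{\bigl(T_{\max}/\mathrm{CMR}_{\mathrm{glc}} - 1\bigr)\,G_{\mathrm{plasma}} - K_t}
         {T_{\max}/\mathrm{CMR}_{\mathrm{glc}} + 1}
```

With T(max)/CMR(glc) around 5, the slope is roughly 2/3, which is why brain glucose tracks plasma glucose almost linearly over the physiological range.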
Abstract:
INTRODUCTION: The Rasch model is increasingly used in the field of rehabilitation because it improves the accuracy of measurements of patient status and of their changes after therapy. OBJECTIVE: To determine the long-term effectiveness of a holistic neuropsychological rehabilitation program for Spanish outpatients with acquired brain injury (ABI) using Rasch analysis. METHODS: Eighteen patients (ten with long evolution, i.e. patients who started the program more than 6 months after ABI, and eight with short evolution) and their relatives attended the program for 6 months. Patients' and relatives' answers to the European Brain Injury Questionnaire and the Frontal Systems Behavior Scale at three time points (pre-intervention, post-intervention, and 12-month follow-up) were transformed into linear measures called logits. RESULTS: The linear measures revealed significant improvements with large effects at the follow-up assessment in cognitive and executive functioning, social and emotional self-regulation, apathy, and mood. At follow-up, the short evolution group achieved greater improvements in mood and cognitive functioning than the long evolution patients. CONCLUSIONS: The program showed long-term effectiveness for most of the variables, and it was more effective for mood and cognitive functioning when patients were treated early. Relatives played a key role in the effectiveness of the rehabilitation program.
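The "linear measures called logits" come from the standard dichotomous Rasch model, in which raw responses are mapped onto a common log-odds scale:

```latex
% Dichotomous Rasch model: the probability that person n endorses item i
% depends only on the difference between the person measure theta_n and
% the item difficulty b_i, both expressed in logits. Because theta_n and
% b_i live on one interval scale, pre/post changes can be compared directly.
P(X_{ni} = 1) = \frac{e^{\,\theta_n - b_i}}{1 + e^{\,\theta_n - b_i}},
\qquad
\log\frac{P(X_{ni} = 1)}{P(X_{ni} = 0)} = \theta_n - b_i .
```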