940 results for linear model


Relevance:

30.00%

Publisher:

Abstract:

This paper extends the Nelson-Siegel linear factor model by developing a flexible macro-finance framework for modeling and forecasting the term structure of US interest rates. Our approach is robust to parameter uncertainty and structural change, as we consider instabilities in parameters and volatilities, and our model averaging method allows for investors' model uncertainty over time. Our time-varying parameter Nelson-Siegel Dynamic Model Averaging (NS-DMA) predicts yields better than standard benchmarks and successfully captures plausible time-varying term premia in real time. The proposed model has significant in-sample and out-of-sample predictability for excess bond returns, and the predictability is of economic value.
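For illustration, a minimal sketch of the Nelson-Siegel factor structure underlying the model, assuming the standard three loadings with a fixed decay parameter lambda; the paper's time-varying parameters and dynamic model averaging layer are not shown, and the yields below are hypothetical:

    import numpy as np

    def ns_loadings(tau, lam):
        # Standard Nelson-Siegel loadings: level, slope, curvature.
        x = lam * tau
        level = np.ones_like(tau)
        slope = (1.0 - np.exp(-x)) / x
        curvature = slope - np.exp(-x)
        return np.column_stack([level, slope, curvature])

    tau = np.array([0.25, 1.0, 2.0, 5.0, 10.0, 30.0])   # maturities in years
    y = np.array([4.1, 4.3, 4.5, 4.8, 5.0, 5.2])        # illustrative yields (%)
    X = ns_loadings(tau, lam=0.6)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)        # cross-sectional OLS for the three factors
    print(beta)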

Relevance:

30.00%

Publisher:

Abstract:

The implicit projection algorithm of isotropic plasticity is extended to an objective anisotropic elastic perfectly plastic model. The recursion formula developed to project the trial stress onto the yield surface is applicable to any nonlinear elastic law and any plastic yield function. A curvilinear transverse isotropic model based on a quadratic elastic potential and on Hill's quadratic yield criterion is then developed and implemented in a computer program for bone mechanics applications. The paper concludes with a numerical study of a schematic bone-prosthesis system to illustrate the potential of the model.
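As a sketch of the projection step in its simplest isotropic special case (von Mises yield surface, linear elasticity, perfect plasticity; the paper's anisotropic Hill-type extension is not reproduced here):

    import numpy as np

    def radial_return(sigma_trial, G, sigma_y):
        # Project the trial stress back onto the von Mises yield surface.
        s = sigma_trial - np.trace(sigma_trial) / 3.0 * np.eye(3)   # deviatoric part
        seq = np.sqrt(1.5 * np.sum(s * s))                          # equivalent stress
        f = seq - sigma_y                                           # yield function
        if f <= 0.0:
            return sigma_trial                                      # elastic: trial state admissible
        dgamma = f / (3.0 * G)                                      # plastic multiplier from consistency
        s = (1.0 - 3.0 * G * dgamma / seq) * s                      # scale deviator to the surface
        return s + np.trace(sigma_trial) / 3.0 * np.eye(3)          # restore hydrostatic part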

Relevance:

30.00%

Publisher:

Abstract:

An online algorithm for determining respiratory mechanics in patients using non-invasive ventilation (NIV) in pressure support mode was developed and embedded in a ventilator system. Based on multiple linear regression (MLR) of respiratory data, the algorithm was tested on a patient bench model under conditions with and without leak, simulating a variety of mechanics. Bland-Altman analysis indicates reliable measures of compliance across the clinical range of interest (±11-18% limits of agreement). Resistance measures showed large quantitative errors (30-50%); however, it was still possible to qualitatively distinguish between normal and obstructive resistances. This outcome provides clinically significant information for ventilator titration and patient management.
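A minimal sketch of the regression step, assuming the single-compartment equation of motion Paw = R*flow + volume/C + P0 (the embedded online algorithm and its leak handling are not reproduced):

    import numpy as np

    def fit_mechanics(paw, flow, volume):
        # MLR of airway pressure on flow and volume yields R, E = 1/C, and offset P0.
        X = np.column_stack([flow, volume, np.ones_like(paw)])
        coef, *_ = np.linalg.lstsq(X, paw, rcond=None)
        R, E, P0 = coef
        return R, 1.0 / E, P0   # resistance, compliance, offset pressure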

Relevance:

30.00%

Publisher:

Abstract:

Breast cancer is one of the most common cancers, affecting one in eight women during their lives. Survival rates have increased steadily thanks to early diagnosis with mammography screening and more efficient treatment strategies. Post-operative radiation therapy is a standard of care in the management of breast cancer and has been shown to efficiently reduce both local recurrence rate and breast cancer mortality. Radiation therapy is, however, associated with some late effects for long-term survivors. Radiation-induced secondary cancer is a relatively rare but severe late effect of radiation therapy. Currently, radiotherapy plans are essentially optimized to maximize tumor control and minimize late deterministic effects (tissue reactions), which are mainly associated with high doses (≫ 1 Gy). With improved cure rates and new radiation therapy technologies, it is also important to evaluate and minimize secondary cancer risks for different treatment techniques. This is a particularly challenging task due to the large uncertainties in the dose-response relationship.

In contrast with late deterministic effects, secondary cancers may be associated with much lower doses, and therefore out-of-field doses (also called peripheral doses), typically below 1 Gy, need to be determined accurately. Out-of-field doses result from patient scatter and head scatter from the treatment unit. These doses are particularly challenging to compute, and we characterized them by Monte Carlo (MC) calculation. A detailed MC model of the Siemens Primus linear accelerator was thoroughly validated against measurements. We investigated the accuracy of such a model for retrospective dosimetry in epidemiological studies on secondary cancers. Considering that patients in such large studies could be treated on a variety of machines, we assessed the uncertainty in reconstructed peripheral dose due to the variability of peripheral dose among various linac geometries. For large open fields (> 10 × 10 cm²), the uncertainty would be less than 50%, but for small fields and wedged fields the uncertainty in reconstructed dose could rise up to a factor of 10. It was concluded that such a model could be used for conventional treatments using large open fields only.

The MC model of the Siemens Primus linac was then used to compare out-of-field doses for different treatment techniques in a female whole-body CT-based phantom. Current techniques such as conformal wedge-based radiotherapy and hybrid IMRT were investigated and compared to older two-dimensional radiotherapy techniques. MC doses were also compared to those of a commercial Treatment Planning System (TPS). While the TPS is routinely used to determine the dose to the contralateral breast and the ipsilateral lung, which lie mostly outside the treatment fields, we have shown that these doses may be highly inaccurate depending on the treatment technique investigated. MC shows that hybrid IMRT is dosimetrically similar to three-dimensional wedge-based radiotherapy within the field, but offers substantially reduced doses to out-of-field healthy organs.

Finally, many different approaches to risk estimation from the literature were applied to the calculated MC dose distributions. Absolute risks varied substantially, as did the ratio of risks between two treatment techniques, reflecting the large uncertainties in current risk models. Despite all these uncertainties, the hybrid IMRT technique investigated resulted in systematically lower cancer risks than any of the other treatment techniques. More epidemiological studies with accurate dosimetry are required in the future to construct robust risk models. In the meantime, any treatment strategy that reduces out-of-field doses to healthy organs should be investigated. Electron radiotherapy might offer interesting possibilities in this regard.
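One literature approach of the kind applied here is the organ equivalent dose (OED) with a linear-exponential dose-response (after Schneider et al.); the sketch below is illustrative only, with a hypothetical organ-specific parameter alpha and random stand-in doses:

    import numpy as np

    def oed_linear_exponential(voxel_doses, alpha):
        # OED: mean over organ voxels of d * exp(-alpha * d).
        d = np.asarray(voxel_doses, dtype=float)
        return np.mean(d * np.exp(-alpha * d))

    # The OED ratio compares the relative secondary-cancer risk of two techniques
    # under this model; absolute risk requires an epidemiological baseline.
    dose_A = np.random.default_rng(0).uniform(0.0, 2.0, 10000)  # stand-in out-of-field doses (Gy)
    dose_B = 0.5 * dose_A
    print(oed_linear_exponential(dose_B, 0.1) / oed_linear_exponential(dose_A, 0.1))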

Relevance:

30.00%

Publisher:

Abstract:

In the economic literature, information deficiencies and computational complexities have traditionally been addressed through the aggregation of agents and institutions. In input-output modelling, researchers have been interested in the aggregation problem since the early 1950s. Extending the conventional input-output aggregation approach to social accounting matrix (SAM) models may help to identify the effects caused by the information problems and data deficiencies that usually appear in the SAM framework. This paper develops the theory of aggregation and applies it to the social accounting matrix model of multipliers. First, we define the concept of linear aggregation in a SAM database context. Second, we define the aggregated partitioned matrices of multipliers which are characteristic of the SAM approach. Third, we extend the analysis to other related concepts, such as aggregation bias and consistency in aggregation. Finally, we provide an illustrative example that shows the effects of aggregating a social accounting matrix model.
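A minimal numerical sketch of the consistency question, assuming a toy SAM with column-normalized propensities and a 0/1 grouping matrix G (all figures are random stand-ins; the paper's partitioned-multiplier formalism is not reproduced):

    import numpy as np

    rng = np.random.default_rng(1)
    Z = rng.uniform(0.0, 10.0, (4, 4))        # endogenous SAM flows (toy data)
    x = 2.0 * Z.sum(axis=0)                   # account totals (keeps propensities < 1)
    A = Z / x                                 # average expenditure propensities
    M = np.linalg.inv(np.eye(4) - A)          # accounting multipliers

    G = np.array([[1, 1, 0, 0],               # grouping: accounts {1,2} and {3,4}
                  [0, 0, 1, 1]], dtype=float)
    A_agg = (G @ Z @ G.T) / (G @ x)           # propensities of the aggregated SAM
    M_agg = np.linalg.inv(np.eye(2) - A_agg)

    bias = M_agg @ G - G @ M                  # zero iff aggregation is consistent (A_agg G = G A)
    print(np.round(bias, 4))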

Relevance:

30.00%

Publisher:

Abstract:

Graph pebbling is a network model for studying whether or not a given supply of discrete pebbles can satisfy a given demand via pebbling moves. A pebbling move across an edge of a graph takes two pebbles from one endpoint and places one pebble at the other endpoint; the other pebble is lost in transit as a toll. It has been shown that deciding whether a supply can meet a demand on a graph is NP-complete. The pebbling number of a graph is the smallest t such that every supply of t pebbles can satisfy every demand of one pebble. Deciding whether the pebbling number is at most k is Π₂^P-complete. In this paper we develop a tool, called the Weight Function Lemma, for computing upper bounds, and sometimes exact values, for pebbling numbers with the assistance of linear optimization. With this tool we are able to calculate the pebbling numbers of much larger graphs than previous algorithms could, and much more quickly as well. We also obtain results for many families of graphs, in many cases by hand, with much simpler and remarkably shorter proofs than those in previously existing arguments (with certificates typically of size at most the number of vertices times the maximum degree), especially for highly symmetric graphs. Here we apply the Weight Function Lemma to several specific graphs, including the Petersen, Lemke, 4th weak Bruhat, and Lemke squared graphs and two random graphs, as well as to a number of infinite families of graphs, such as trees, cycles, graph powers of cycles, cubes, and some generalized Petersen and Coxeter graphs. This partly answers a question of Pachter et al. by computing the pebbling exponent of cycles to within an asymptotically small range. It is conceivable that this method yields an approximation algorithm for graph pebbling.
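For concreteness, a brute-force sketch of the pebbling move model itself (whether a distribution can reach a target vertex); this is the exponential decision problem the paper's Weight Function Lemma helps to bound, not the lemma's linear-optimization machinery:

    def solvable(adj, pebbles, target):
        # Can one pebble reach `target` from distribution `pebbles` by pebbling
        # moves? Each move removes two pebbles from a vertex and adds one to a
        # neighbour, so totals strictly decrease and the search terminates.
        def go(dist, seen):
            if dist[target] >= 1:
                return True
            if dist in seen:          # already explored and found unsolvable
                return False
            seen.add(dist)
            for u, nbrs in enumerate(adj):
                if dist[u] >= 2:
                    for v in nbrs:
                        nxt = list(dist)
                        nxt[u] -= 2   # two pebbles leave u ...
                        nxt[v] += 1   # ... one arrives at v, one is the toll
                        if go(tuple(nxt), seen):
                            return True
            return False
        return go(tuple(pebbles), set())

    path = [[1], [0, 2], [1]]         # path on 3 vertices: 0 - 1 - 2
    print(solvable(path, (4, 0, 0), 2), solvable(path, (3, 0, 0), 2))  # True False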

Relevance:

30.00%

Publisher:

Abstract:

By definition, alcohol expectancies ("after alcohol I expect X") and drinking motives ("I drink to achieve X") are conceptually distinct constructs. Theorists have argued that motives mediate the association between expectancies and drinking outcomes. Yet, given the use of different instruments, do these constructs remain distinct when assessment items are matched? The present study tested to what extent motives mediated the link between expectancies and alcohol outcomes when identical items were used, first as expectancies and then as motives. A linear structural equation model was estimated based on a nationally representative sample of 5,779 alcohol-using students in Switzerland (mean age = 15.2 years). The results showed that expectancies explained up to 38% of the variance in motives. Together with motives, they explained up to 48% of the variance in alcohol outcomes (volume, 5+ drinking, and problems). In 10 of 12 outcomes, there was a significant mediated effect that was often higher than the direct expectancy effect. For coping, the expectancy effect was close to zero, indicating the strongest form of mediation. In only one case (conformity and 5+ drinking) was there a direct expectancy effect but no mediation. To conclude, the study demonstrates that motives are distinct from expectancies even when identical items are used. Motives are more proximally related to different alcohol outcomes, often mediating the effects of expectancies. Consequently, the effectiveness of interventions, particularly those aimed at coping drinkers, should be improved through a shift in focus from expectancies to drinking motives.
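A minimal sketch of the mediation logic tested here (product-of-coefficients with OLS on simulated data; the paper's structural equation model with latent variables is not reproduced):

    import numpy as np

    def ols(y, X):
        # OLS with intercept; returns [intercept, slopes...].
        X1 = np.column_stack([np.ones(len(y)), X])
        return np.linalg.lstsq(X1, y, rcond=None)[0]

    rng = np.random.default_rng(2)
    expectancy = rng.normal(size=500)
    motive = 0.6 * expectancy + rng.normal(size=500)                      # a-path
    drinking = 0.5 * motive + 0.05 * expectancy + rng.normal(size=500)    # b-path plus direct effect

    c = ols(drinking, expectancy)[1]                                      # total effect
    a = ols(motive, expectancy)[1]
    b, c_direct = ols(drinking, np.column_stack([motive, expectancy]))[1:]
    print(f"total={c:.2f} direct={c_direct:.2f} indirect={a * b:.2f}")    # indirect effect is a*b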

Relevance:

30.00%

Publisher:

Abstract:

Glucose supply from blood to brain occurs through facilitative transporter proteins. A near-linear relation between brain and plasma glucose has been determined experimentally and described by a reversible model of enzyme kinetics. A conformational four-state exchange model accounting for trans-acceleration and asymmetry of the carrier was included in a recently developed multi-compartmental model of glucose transport. Based on this model, we demonstrate that brain glucose (G(brain)) as a function of plasma glucose (G(plasma)) can be described by a single analytical equation comprising three kinetic compartments: blood, endothelial cells and brain. Transport was described by four parameters: the apparent half-saturation constant K(t), the apparent maximum rate constant T(max), the glucose consumption rate CMR(glc), and the iso-inhibition constant K(ii), which suggests that G(brain) acts as an inhibitor of the isomerisation of the unloaded carrier. Previously published data, in which G(brain) was quantified as a function of plasma glucose by either biochemical methods or NMR spectroscopy, were used to determine the aforementioned kinetic parameters. Glucose transport was characterized by K(t) ranging from 1.5 to 3.5 mM, T(max)/CMR(glc) from 4.6 to 5.6, and K(ii) from 51 to 149 mM. It was noteworthy that K(t) was on the order of a few mM, as previously determined from the reversible model. The conformational four-state exchange model of glucose transport into the brain includes both efflux and transport inhibition by G(brain), predicting that G(brain) eventually approaches a maximum concentration. However, since K(ii) largely exceeds G(plasma), iso-inhibition is unlikely to be of substantial importance for plasma glucose below 25 mM. As a consequence, the reversible model can account for most experimental observations under euglycaemia and moderate cases of hypo- and hyperglycaemia.
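In sketch form, the steady-state balance behind the reversible model referenced above: net reversible Michaelis-Menten influx equals consumption,

\[ \frac{T_{max}\,(G_{plasma}-G_{brain})}{K_t + G_{plasma} + G_{brain}} = CMR_{glc} \quad\Longrightarrow\quad G_{brain} = \frac{\left(\tfrac{T_{max}}{CMR_{glc}}-1\right)G_{plasma} - K_t}{\tfrac{T_{max}}{CMR_{glc}}+1}, \]

which is near-linear in G(plasma) and explains the experimentally observed brain-plasma relation; the four-state exchange model adds the iso-inhibition term in K(ii), not shown here.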

Relevance:

30.00%

Publisher:

Abstract:

In CoDaWork'05, we presented an application of discriminant function analysis (DFA) to four different compositional datasets and modelled the first canonical variable using a segmented regression model, based solely on an observation about the scatter plots. In this paper, multiple linear regressions are applied to different datasets to confirm the validity of our proposed model. In addition to dating the unknown tephras by calibration, as discussed previously, another method is proposed that maps the unknown tephras onto samples of the reference set, or onto missing samples between consecutive reference samples. The application of these methodologies is demonstrated with both simulated and real datasets. This newly proposed methodology provides an alternative, more acceptable approach for geologists, as their focus is on associating the unknown tephra with relevant eruptive events rather than estimating its age.

Keywords: tephrochronology; segmented regression
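A minimal sketch of a segmented (two-piece) linear regression of the kind used for the first canonical variable, with the breakpoint chosen by a least-squares grid search (all data below are simulated stand-ins):

    import numpy as np

    def fit_segmented(x, y, breaks):
        # Model y = b0 + b1*x + b2*max(0, x - c); pick the c minimizing the SSE.
        best = None
        for c in breaks:
            X = np.column_stack([np.ones_like(x), x, np.maximum(0.0, x - c)])
            coef, *_ = np.linalg.lstsq(X, y, rcond=None)
            sse = float(np.sum((y - X @ coef) ** 2))
            if best is None or sse < best[0]:
                best = (sse, c, coef)
        return best[1], best[2]   # breakpoint and coefficients

    x = np.linspace(0.0, 10.0, 200)
    y = np.where(x < 4.0, 1.0 + 0.5 * x, 3.0 - 0.3 * (x - 4.0))
    y += np.random.default_rng(3).normal(0.0, 0.1, x.size)
    print(fit_segmented(x, y, np.linspace(1.0, 9.0, 81)))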

Relevance:

30.00%

Publisher:

Abstract:

In a number of programs for gene structure prediction in higher eukaryotic genomic sequences, exon prediction is decoupled from gene assembly: a large pool of candidate exons is predicted and scored from features located in the query DNA sequence, and candidate genes are assembled from such a pool as sequences of non-overlapping, frame-compatible exons. Genes are scored as a function of the scores of the assembled exons, and the highest scoring candidate gene is assumed to be the most likely gene encoded by the query DNA sequence. For additive gene scoring functions, currently available algorithms to determine such a highest scoring candidate gene run in time proportional to the square of the number of predicted exons. Here, we present an algorithm whose running time grows only linearly with the size of the set of predicted exons. Polynomial algorithms rely on the fact that, while scanning the set of predicted exons, the highest scoring gene ending in a given exon can be obtained by appending the exon to the highest scoring among the highest scoring genes ending at each compatible preceding exon. The algorithm here relies on the simple fact that this highest scoring gene can be stored and updated, which requires scanning the set of predicted exons simultaneously by increasing acceptor and donor position. On the other hand, the algorithm described here does not assume an underlying gene structure model; instead, the definition of valid gene structures is given externally in the so-called Gene Model. The Gene Model simply specifies which gene features are allowed immediately upstream of which other gene features in valid gene structures. This allows great flexibility in formulating the gene identification problem. In particular, it allows for multiple-gene, two-strand predictions and for considering gene features other than coding exons (such as promoter elements) in valid gene structures.
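A sketch of the linear scan described above, simplified to ignore reading frames and strands (compatibility here is just donor < acceptor, and each exon is assumed to satisfy acceptor < donor); sorting is shown explicitly, whereas the paper assumes the two orders are available:

    def best_gene(exons):
        # exons: (acceptor, donor, score). Scan by increasing acceptor while a
        # second pointer advances by increasing donor, keeping the best score of
        # any gene already closed before the current acceptor.
        by_acceptor = sorted(exons, key=lambda e: e[0])
        by_donor = sorted(exons, key=lambda e: e[1])
        best_ending = {}      # best score of a gene ending in a given exon
        best_closed = 0.0     # best gene whose last donor precedes the current acceptor
        k = 0
        for acc, don, score in by_acceptor:
            while k < len(by_donor) and by_donor[k][1] < acc:
                best_closed = max(best_closed, best_ending[by_donor[k]])
                k += 1
            best_ending[(acc, don, score)] = score + best_closed
        return max(best_ending.values())

    exons = [(1, 100, 5.0), (120, 500, 7.0), (150, 300, 4.0), (520, 600, 2.0)]
    print(best_gene(exons))   # 14.0: chain (1,100) -> (120,500) -> (520,600)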

Relevance:

30.00%

Publisher:

Abstract:

When back-calculating fish length from scale measurements, the choice of the body-scale relationship is a fundamental step. Using data from the arctic charr Salvelinus alpinus (L.) of Lake Geneva (Switzerland), we show the need for a curvilinear model, on both statistical and biological grounds. Among several two-parameter models, the log-linear relationship appears to provide the best fit. A three-parameter Bertalanffy model did not improve the fit. Moreover, we show that using the proportional model would lead to important misinterpretations of the data.
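A minimal sketch of back-calculation under a power (log-linear) body-scale relation, L = a*S^b; the back-calculation line is forced through the fish's length and scale radius at capture, in the spirit of the Fraser-Lee correction (details may differ from the paper's procedure, and the data below are illustrative):

    import numpy as np

    def fit_loglinear(scale_radius, body_length):
        # log L = log a + b log S, fitted by OLS on the logs.
        b, log_a = np.polyfit(np.log(scale_radius), np.log(body_length), 1)
        return np.exp(log_a), b

    def back_calculate(L_capture, S_capture, S_at_age, b):
        # Scale the capture length by the ratio of scale radii to the power b.
        return L_capture * (S_at_age / S_capture) ** b

    S = np.array([1.2, 2.0, 2.9, 3.8, 5.1])        # scale radii
    L = np.array([95., 150., 205., 260., 330.])    # body lengths (mm)
    a, b = fit_loglinear(S, L)
    print(back_calculate(330.0, 5.1, 2.0, b))      # length when the scale radius was 2.0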

Relevance:

30.00%

Publisher:

Abstract:

This paper introduces an approach that uses TURF analysis to design a product line through a binary linear programming model. This improves the efficiency of the search for a solution compared with the algorithms that have been used to date. Furthermore, the proposed technique enables the model to be improved in order to overcome the main drawbacks presented by TURF analysis in practice.
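The usual reach-maximization formulation that such a model builds on (a sketch; the paper's exact formulation and extensions may differ): with a_{ij} = 1 if respondent i is reached by product variant j,

\[ \max \sum_{i=1}^{n} y_i \quad \text{s.t.} \quad y_i \le \sum_{j=1}^{m} a_{ij} x_j \;\; (i = 1,\dots,n), \qquad \sum_{j=1}^{m} x_j = k, \qquad x_j, y_i \in \{0,1\}, \]

where x_j selects variant j for the k-item line and y_i indicates that respondent i is covered by at least one selected variant.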

Relevance:

30.00%

Publisher:

Abstract:

Doxorubicin is an antineoplastic agent active against sarcoma pulmonary metastases, but its clinical use is hampered by its myelotoxicity and its cumulative cardiotoxicity when administered systemically. This limitation may be circumvented using the isolated lung perfusion (ILP) approach, wherein a therapeutic agent is infused locoregionally after vascular isolation of the lung. The influence of the mode of infusion (anterograde (AG): through the pulmonary artery (PA); retrograde (RG): through the pulmonary vein (PV)) on doxorubicin pharmacokinetics and lung distribution was unknown. Therefore, a simple, rapid and sensitive high-performance liquid chromatography (HPLC) method has been developed to quantify doxorubicin in four different biological matrices (infusion effluent, serum, and tissues with low or high levels of doxorubicin). The related compound daunorubicin was used as internal standard (I.S.). Following single-step protein precipitation of 500 µl samples with 250 µl acetone and 50 µl of 70% aqueous zinc sulfate solution, the supernatant was evaporated to dryness at 60 °C for exactly 45 min under a stream of nitrogen and the solid residue was dissolved in 200 µl of purified water. A 100 µl volume was subjected to HPLC analysis on a Nucleosil 100-5 µm C18 AB column equipped with a guard column (Nucleosil 100-5 µm C6H5 (phenyl) end-capped), using gradient elution of acetonitrile and 0.2% 1-heptanesulfonic acid, pH 4 (15/85 at 0 min → 50/50 at 20 min → 100/0 at 22 min → 15/85 at 24 min → 15/85 at 26 min), delivered at 1 ml/min. The analytes were detected by fluorescence with excitation and emission wavelengths set at 480 and 550 nm, respectively. The calibration curves were linear over the range 2-1000 ng/ml for effluent and plasma matrices and 0.1-750 µg/g for tissue matrices. The method is precise, with inter-day and intra-day relative standard deviations between 0.5 and 6.7%, and accurate, with inter-day and intra-day deviations between -5.4 and +7.7%. The in vitro stability in all matrices and in processed samples was studied at -80 °C for 1 month and at 4 °C for 48 h, respectively. During initial studies, heparin used as an anticoagulant was found to profoundly influence the measurement of doxorubicin in effluents collected from animals under ILP. Moreover, the strong matrix effect observed with tissue samples indicates that it is mandatory to prepare doxorubicin calibration standards in biological matrices that best reflect the composition of the samples to be analyzed. The method was successfully applied in animal studies for the analysis of effluent, serum and tissue samples collected from pigs and rats undergoing ILP.
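For the quantification step, a minimal sketch of internal-standard calibration (peak-area ratio of doxorubicin to daunorubicin regressed on nominal concentration; the response values below are hypothetical):

    import numpy as np

    conc = np.array([2., 10., 50., 200., 500., 1000.])       # standards, ng/ml
    ratio = np.array([0.011, 0.052, 0.26, 1.05, 2.6, 5.2])   # illustrative area ratios
    slope, intercept = np.polyfit(conc, ratio, 1)            # linear calibration curve

    def quantify(sample_ratio):
        # Read an unknown concentration off the fitted line.
        return (sample_ratio - intercept) / slope

    print(quantify(0.80))   # concentration of an unknown with area ratio 0.80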

Relevance:

30.00%

Publisher:

Abstract:

Operational research has proven to be a valuable management tool in today's increasingly competitive market. Through linear programming, a problem of maximizing results or minimizing production costs can be formulated mathematically in order to assist managers in decision making. Linear programming is a mathematical method in which the objective function and the constraints are linear, with several applications in management control, usually involving problems of allocating available resources subject to limitations imposed by the production process or by the market. The overall objective of this work is to propose a linear programming model for production scheduling and the allocation of the necessary resources: optimizing a physical quantity, the objective function, subject to a set of constraints endogenous to the activities under management. The crucial aim is to provide a model to support management, thereby contributing to the efficient allocation of the scarce resources available to the economic unit. The work demonstrates the importance of the quantitative approach as an essential resource supporting the decision process.
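A minimal sketch of such a model (an illustrative two-product mix with three resource constraints; scipy's linprog minimizes, so unit profits are negated):

    from scipy.optimize import linprog

    c = [-3.0, -5.0]              # negated unit profits of products 1 and 2
    A_ub = [[1.0, 0.0],           # resource A: x1 <= 4
            [0.0, 2.0],           # resource B: 2*x2 <= 12
            [3.0, 2.0]]           # resource C: 3*x1 + 2*x2 <= 18
    b_ub = [4.0, 12.0, 18.0]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
    print(res.x, -res.fun)        # optimal plan (2, 6) with profit 36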