1000 results for Linear Resonance Accelerator
Abstract:
OBJECTIVE: To determine the usefulness of computed tomography (CT), magnetic resonance imaging (MRI), and Doppler ultrasonography (US) in providing specific images of gouty tophi. METHODS: Four male patients with chronic gout with tophi affecting the knee joints (three cases) or the olecranon processes of the elbows (one case) were assessed. Crystallographic analyses of the synovial fluid or tissue aspirates of the areas of interest were made with polarising light microscopy, alizarin red staining, and x-ray diffraction. CT was performed with a GE scanner, MR imaging was obtained with a 1.5 T Magneton (Siemens), and ultrasonography with colour Doppler was carried out by standard technique. RESULTS: Crystallographic analyses showed monosodium urate (MSU) crystals in the specimens of the four patients; hydroxyapatite and calcium pyrophosphate dihydrate (CPPD) crystals were not found. A diffuse soft tissue thickening was seen on plain radiographs, but no calcifications or ossifications of the tophi. CT disclosed lesions containing round and oval opacities, with a mean density of about 160 Hounsfield units (HU). With MRI, lesions were of low to intermediate signal intensity on T1 and T2 weighting. After contrast injection in two cases, enhancement of the tophus was seen in one. Colour Doppler US showed the tophi to be hypoechogenic with a peripheral increase of blood flow in three cases. CONCLUSION: The MR and colour Doppler US images showed the tophi as masses surrounded by a hypervascular area, which cannot be considered specific for gout. On CT images, however, masses of about 160 HU density were clearly seen, which correspond to MSU crystal deposits.
Abstract:
Operational Research has proven to be a valuable management tool in today's increasingly competitive market. Through Linear Programming, a problem of maximizing results or minimizing production costs can be reproduced mathematically in order to assist managers in decision making. Linear Programming is a mathematical method in which the objective function and the constraints are linear, with several applications in management control, usually involving problems of allocating available resources subject to limitations imposed by the production process or by the market. The overall objective of this work is to propose a Linear Programming model for production scheduling and the allocation of the necessary resources: optimizing a physical quantity, called the objective function, subject to a set of constraints endogenous to the activities under management. The crucial objective is to provide a decision-support model, thereby contributing to the efficient allocation of the scarce resources available to the economic unit. The work developed demonstrated the importance of the quantitative approach as an essential resource in support of the decision process.
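As context for the kind of model this abstract describes, here is a minimal sketch of a production-mix linear program: maximize profit subject to linear resource constraints. The products, profit coefficients, and capacity limits are illustrative assumptions, not taken from the paper.

```python
# A minimal production-planning LP: maximize profit from two hypothetical
# products subject to machine-hour limits. All numbers are illustrative.
from scipy.optimize import linprog

# Maximize 3*x1 + 5*x2 (profit per unit) subject to:
#   x1           <= 4    (hours available on machine 1)
#   2*x2         <= 12   (hours available on machine 2)
#   3*x1 + 2*x2  <= 18   (shared assembly hours)
#   x1, x2 >= 0
c = [-3, -5]                      # linprog minimizes, so negate the profit
A_ub = [[1, 0], [0, 2], [3, 2]]
b_ub = [4, 12, 18]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)            # optimal plan x = (2, 6), profit 36.0
```

The optimal plan produces 2 units of product 1 and 6 of product 2 for a profit of 36; the negation of `c` is needed only because `linprog` minimizes by convention.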
Abstract:
Due to their relatively small size and central location within the thorax, improvement in signal-to-noise ratio (SNR) is of paramount importance for in vivo coronary vessel wall imaging. Thus, with higher field strengths, coronary vessel wall imaging is likely to benefit from the expected "near linear" proportional gain in SNR. In this study, we demonstrate the feasibility of in vivo human high field (3 T) coronary vessel wall imaging using a free-breathing black blood fast gradient echo technique with respiratory navigator gating and real-time motion correction. With the broader availability of more SNR-efficient fast spin echo and spiral techniques, further improvements can be expected.
Abstract:
The mathematical representation of Brunswik's lens model has been used extensively to study human judgment and provides a unique opportunity to conduct a meta-analysis of studies that covers roughly five decades. Specifically, we analyze statistics of the lens model equation (Tucker, 1964) associated with 259 different task environments obtained from 78 papers. In short, we find on average fairly high levels of judgmental achievement and note that people can achieve similar levels of cognitive performance in both noisy and predictable environments. Although overall performance varies little between laboratory and field studies, both differ in terms of components of performance and types of environments (numbers of cues and redundancy). An analysis of learning studies reveals that the most effective form of feedback is information about the task. We also analyze empirically when bootstrapping is more likely to occur. We conclude by indicating shortcomings of the kinds of studies conducted to date, limitations in the lens model methodology, and possibilities for future research.
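For context, the lens model equation (Tucker, 1964) whose statistics the abstract analyzes decomposes judgmental achievement as follows; this is the standard textbook form, not reproduced from the paper itself:

```latex
r_a \;=\; G\,R_s\,R_e \;+\; C\,\sqrt{1 - R_s^2}\,\sqrt{1 - R_e^2}
```

where $r_a$ is achievement (the correlation between judgments and the criterion), $R_s$ the consistency of the judge's linear model, $R_e$ the linear predictability of the environment, $G$ the correlation between the predictions of the two linear models ("matching"), and $C$ the correlation between their residuals.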
Abstract:
We consider the application of normal theory methods to the estimation and testing of a general type of multivariate regression models with errors-in-variables, in the case where various data sets are merged into a single analysis and the observable variables possibly deviate from normality. The various samples to be merged can differ on the set of observable variables available. We show that there is a convenient way to parameterize the model so that, despite the possible non-normality of the data, normal-theory methods yield correct inferences for the parameters of interest and for the goodness-of-fit test. The theory described encompasses both the functional and structural model cases, and can be implemented using standard software for structural equation models, such as LISREL, EQS, and LISCOMP, among others. An illustration with Monte Carlo data is presented.
Abstract:
BACKGROUND: Conventional x-ray angiography frequently underestimates the true burden of atherosclerosis. Although intravascular ultrasound allows for imaging of coronary plaque, this invasive technique is inappropriate for screening or serial examinations. We therefore sought to develop a noninvasive free-breathing MR technique for coronary vessel wall imaging. We hypothesized that such an approach would allow for in vivo imaging of coronary atherosclerosis. METHODS AND RESULTS: Ten subjects, including 5 healthy adult volunteers (aged 35±17 years, range 19 to 56 years) and 5 patients (aged 60±4 years, range 56 to 66 years) with x-ray-confirmed coronary artery disease (CAD), were studied with a T2-weighted, dual-inversion, fast spin-echo MR sequence. Multiple adjacent 5-mm cross-sectional images of the proximal right coronary artery were obtained with an in-plane resolution of 0.5×1.0 mm. A right hemidiaphragmatic navigator was used to facilitate free-breathing MR acquisition. Coronary vessel wall images were readily acquired in all subjects. Both coronary vessel wall thickness (1.5±0.2 versus 1.0±0.2 mm) and wall area (21.2±3.1 versus 13.7±4.2 mm²) were greater in patients with CAD (both P<0.02 versus healthy adults). CONCLUSIONS: In vivo free-breathing coronary vessel wall and plaque imaging with MR has been successfully implemented in humans. Coronary wall thickness and wall area were significantly greater in patients with angiographic CAD. The presented technique may have potential applications in patients with known or suspected atherosclerotic CAD or for serial evaluation after pharmacological intervention.
Abstract:
The network revenue management (RM) problem arises in airline, hotel, media, and other industries where the sale products use multiple resources. It can be formulated as a stochastic dynamic program, but the dynamic program is computationally intractable because of an exponentially large state space, and a number of heuristics have been proposed to approximate it. Notable amongst these, both for their revenue performance as well as their theoretically sound basis, are approximate dynamic programming methods that approximate the value function by basis functions (both affine functions as well as piecewise-linear functions have been proposed for network RM) and decomposition methods that relax the constraints of the dynamic program to solve simpler dynamic programs (such as the Lagrangian relaxation methods). In this paper we show that these two seemingly distinct approaches coincide for the network RM dynamic program, i.e., the piecewise-linear approximation method and the Lagrangian relaxation method are one and the same.
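For background on the problem class, here is a minimal sketch of the deterministic linear program that is the standard baseline relaxation in network RM: allocate expected demand to products subject to leg capacities. The fares, capacities, and demands are illustrative assumptions, and this baseline is not the approximation methods the paper studies.

```python
# Standard DLP upper bound for a tiny two-leg airline network:
# maximize fare revenue subject to seat capacity on each leg, selling at
# most the expected demand of each product. All numbers are illustrative.
from scipy.optimize import linprog

fares = [100, 150, 220]      # products: leg-1 only, leg-2 only, connecting
A_ub = [[1, 0, 1],           # leg 1 is used by products 0 and 2
        [0, 1, 1]]           # leg 2 is used by products 1 and 2
capacity = [10, 10]          # seats per leg
mean_demand = [8, 6, 5]      # expected demand per product

res = linprog([-f for f in fares], A_ub=A_ub, b_ub=capacity,
              bounds=[(0, d) for d in mean_demand])
print(res.x, -res.fun)       # optimal allocation x = (6, 6, 4), revenue 2380.0
```

The optimal value of this LP is a well-known upper bound on the revenue of the stochastic dynamic program; the approximate-DP and Lagrangian methods discussed in the abstract tighten this bound.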
Abstract:
The choice network revenue management model incorporates customer purchase behavior as a function of the offered products, and is the appropriate model for airline and hotel network revenue management, dynamic sales of bundles, and dynamic assortment optimization. The optimization problem is a stochastic dynamic program and is intractable. A certainty-equivalence relaxation of the dynamic program, called the choice deterministic linear program (CDLP), is usually used to generate dynamic controls. Recently, a compact linear programming formulation of this linear program was given for the multi-segment multinomial-logit (MNL) model of customer choice with non-overlapping consideration sets. Our objective is to obtain a tighter bound than this formulation while retaining the appealing properties of a compact linear programming representation. To this end, it is natural to consider the affine relaxation of the dynamic program. We first show that the affine relaxation is NP-complete even for a single-segment MNL model. Nevertheless, by analyzing the affine relaxation we derive a new compact linear program that approximates the dynamic programming value function better than CDLP, provably between the CDLP value and the affine relaxation, and often coming close to the latter in our numerical experiments. When the segment consideration sets overlap, we show that some strong equalities called product cuts developed for the CDLP remain valid for our new formulation. Finally, we perform extensive numerical comparisons on the various bounds to evaluate their performance.
Abstract:
Standard methods for the analysis of linear latent variable models often rely on the assumption that the vector of observed variables is normally distributed. This normality assumption (NA) plays a crucial role in assessing optimality of estimates, in computing standard errors, and in designing an asymptotic chi-square goodness-of-fit test. The asymptotic validity of NA inferences when the data deviate from normality has been called asymptotic robustness. In the present paper we extend previous work on asymptotic robustness to a general context of multi-sample analysis of linear latent variable models, with a latent component of the model allowed to be fixed across (hypothetical) sample replications, and with the asymptotic covariance matrix of the sample moments not necessarily finite. We will show that, under certain conditions, the matrix $\Gamma$ of asymptotic variances of the analyzed sample moments can be substituted by a matrix $\Omega$ that is a function only of the cross-product moments of the observed variables. The main advantage of this is that inferences based on $\Omega$ are readily available in standard software for covariance structure analysis, and do not require the computation of sample fourth-order moments. An illustration with simulated data in the context of regression with errors in variables will be presented.
Abstract:
We introduce several exact nonparametric tests for finite sample multivariate linear regressions, and compare their powers. This fills an important gap in the literature, where the only known nonparametric tests are either asymptotic or assume one covariate only.
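As context for what "exact" means here, the following is a minimal sketch of a textbook finite-sample permutation test for a single regression slope. It is a generic construction for illustration, not the specific multivariate tests proposed in the paper: under the null of no effect and exchangeable errors, permuting the responses yields a test whose level is exact at any sample size.

```python
# Generic exact permutation test for H0: beta = 0 in y = alpha + beta*x + e.
# Under H0 with exchangeable errors, shuffling y against x is valid in
# finite samples (textbook construction, not the paper's procedure).
import random
import statistics

def beta_hat(x, y):
    """OLS slope of y on x."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

def permutation_pvalue(x, y, n_perm=2000, seed=0):
    """Two-sided Monte Carlo permutation p-value for the slope."""
    rng = random.Random(seed)
    observed = abs(beta_hat(x, y))
    y = list(y)                        # work on a copy; shuffle in place
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(y)
        if abs(beta_hat(x, y)) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)   # add-one correction keeps the test valid

x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [2.1, 3.9, 6.2, 8.1, 9.8, 12.2, 13.9, 16.1]   # strong linear trend
print(permutation_pvalue(x, y))                    # small p-value, well below 0.05
```

The exactness comes from the permutation distribution itself, not from asymptotics, which is the property the abstract's tests share; extending this idea to several covariates is precisely the nontrivial step the paper addresses.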
Abstract:
A new algorithm called the parameterized expectations approach (PEA) for solving dynamic stochastic models under rational expectations is developed and its advantages and disadvantages are discussed. This algorithm can, in principle, approximate the true equilibrium arbitrarily well. Also, this algorithm works from the Euler equations, so that the equilibrium does not have to be cast in the form of a planner's problem. Monte Carlo integration and the absence of grids on the state variables cause the computation costs not to go up exponentially when the number of state variables or the exogenous shocks in the economy increase. As an application we analyze an asset pricing model with endogenous production. We analyze its implications for the time dependence of the volatility of stock returns and the term structure of interest rates. We argue that this model can generate hump-shaped term structures.
Abstract:
Electron microscopy was used to monitor the fate of reconstituted nucleosome cores during in vitro transcription of long linear and supercoiled multinucleosomic templates by the prokaryotic T7 RNA polymerase and the eukaryotic RNA polymerase II. Transcription by T7 RNA polymerase disrupted the nucleosomal configuration in the transcribed region, while nucleosomes were preserved upstream of the transcription initiation site and in front of the polymerase. Nucleosome disruption was independent of the topology of the template, linear or supercoiled, and of the presence or absence of nucleosome positioning sequences in the transcribed region. In contrast, the nucleosomal configuration was preserved during transcription from the vitellogenin B1 promoter with RNA polymerase II in a rat liver total nuclear extract. However, the persistence of nucleosomes on the template was not RNA polymerase II-specific, but was dependent on another activity present in the nuclear extract. This was demonstrated by addition of the extract to the T7 RNA polymerase transcription reaction, which resulted in retention of the nucleosomal configuration. This nuclear activity, also found in HeLa cell nuclei, is heat sensitive and could not be substituted by nucleoplasmin, chromatin assembly factor (CAF-I) or a combination thereof. Altogether, these results identify a novel nuclear activity, called herein transcription-dependent chromatin stabilizing activity I or TCSA-I, which may be involved in a nucleosome transfer mechanism during transcription.