956 results for Localised Approximation
Abstract:
The contents of this report are divided into two main parts, preceded by a brief introduction that familiarises the reader with the topic. First, the fundamentals of a simple FBAR resonator are presented, from which essential governing equations are obtained; the groundwork for a stacked crystal resonator (SCR) is then laid out, derived from the preceding study of the simple resonator. The first main part of the work focuses on the analysis of an SCR. This analysis rests on the theoretical framework of image parameters, which is our starting point. This first part is the most theoretical one: derivation and application of the image parameters, and extraction of the discrete elements that make up the SCR equivalent circuit, a two-port network. We then analyse what happens when the values of the discrete elements are modified while certain image parameters are held fixed, and finally we propose an approximation for controlling particular design specifications, such as the transmission bandwidth of the network and the quality factor, as a function of the modifications of the discrete elements. The second part analyses, qualitatively, networks composed of several stacked resonators connected in cascade. This second study is divided into two sub-parts. In the first, we connect N identical resonators, set out some governing equations and analyse the responses. In the second, we consider the connection of two (N=2) different resonators with nearby resonance frequencies. This second analysis is also entirely qualitative, but it provides information that the combination of N identical resonators could not. To close this second part, we consider optimising the results obtained for N=2 by means of structures of N=3 resonators.
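For orientation only (the abstract does not reproduce the SCR circuit values, so the symbols below are generic), the image parameters of a reciprocal two-port network with transmission (ABCD) matrix entries A, B, C, D take the standard textbook form
\[
Z_{I1} = \sqrt{\frac{AB}{CD}}, \qquad
Z_{I2} = \sqrt{\frac{BD}{AC}}, \qquad
\tanh\gamma_I = \sqrt{\frac{BC}{AD}},
\]
where Z_{I1} and Z_{I2} are the image impedances seen at the two ports, \gamma_I is the image propagation function, and AD - BC = 1 for a reciprocal network.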
Abstract:
OBJECTIVES: In patients with septic shock, circulating monocytes become refractory to stimulation with microbial products. Whether this hyporesponsive state is induced by infection or is related to shock is unknown. To address this question, we measured TNF alpha production by monocytes or by whole blood obtained from healthy volunteers (controls), from patients with septic shock, from patients with severe infection (bacterial pneumonia) without shock, and from patients with cardiogenic shock without infection. MEASUREMENTS: The numbers of circulating monocytes and of CD14+ monocytes, and the expression of monocyte CD14 (the LPS receptor), were assessed by flow cytometry. Monocytes or whole blood were stimulated with lipopolysaccharide endotoxin (LPS), heat-killed Escherichia coli or Staphylococcus aureus, and TNF alpha production was measured by bioassay. RESULTS: The numbers of circulating monocytes and of CD14+ monocytes, and monocyte CD14 expression, were significantly lower in patients with septic shock than in controls, in patients with bacterial pneumonia or in those with cardiogenic shock (p < 0.001). Monocytes or whole blood of patients with septic shock exhibited a profound deficiency of TNF alpha production in response to all stimuli (p < 0.05 compared to controls). Whole blood of patients with cardiogenic shock also exhibited this defect (p < 0.05 compared to controls), although to a lesser extent, despite normal monocyte counts and normal CD14 expression. CONCLUSIONS: Unlike patients with bacterial pneumonia, patients with septic or cardiogenic shock display profoundly defective TNF alpha production in response to a broad range of infectious stimuli. Thus, down-regulation of cytokine production appears to occur in patients with systemic infections, but not in those with localised, albeit severe, infections, and also in patients with non-infectious circulatory failure. Whilst depletion of monocytes and reduced monocyte CD14 expression are likely to be critical components of the hyporesponsiveness observed in patients with septic shock, other as yet unidentified factors are at work in this group and in patients with cardiogenic shock.
Abstract:
Industrial clustering policy is now an integral part of economic development planning in most advanced economies. However, there have been concerns in some quarters over the ability of an industrial cluster-based development strategy to deliver its promised economic benefits, and this has increasingly been blamed on the failure of governments to identify industrial clusters. In a study published in 2001, the DTI identified clusters across the UK based on the comparative scale and significance of industrial sectors. The study identified thirteen industrial clusters in Scotland. However, the clusters identified are not a homogeneous set and seem to vary in their degree of geographic concentration within Scotland. This paper examines the spatial distribution of industries within Scotland, thereby identifying more localised clusters. The study follows the DTI methodology used to identify such concentrations of economic activity as closely as possible, with particular attention directed towards the thirteen clusters identified by the DTI. The paper concludes with some remarks on the general problem of identifying the existence of industrial clusters.
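The abstract does not reproduce the DTI concentration measure itself; purely as background (and not as the DTI's definition), a widely used indicator of this kind is the location quotient
\[
LQ_{ri} = \frac{e_{ri}/e_{r}}{E_{i}/E},
\]
where e_{ri} is employment in industry i in region r, e_r is total employment in region r, E_i is national employment in industry i, and E is total national employment; values above one indicate local over-representation of the industry.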
Abstract:
The unconditional expectation of social welfare is often used to assess alternative macroeconomic policy rules in applied quantitative research. It is shown that it is generally possible to derive a linear-quadratic problem that approximates the exact non-linear problem in which the unconditional expectation of the objective is maximised and the steady state is distorted. Thus, the measure of policy performance is a linear combination of second moments of economic variables, which is relatively easy to compute numerically and can be used to rank alternative policy rules. The approach is applied to a simple Calvo-type model under various monetary policy rules.
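Schematically, and with Q and \Sigma_x as placeholder notation rather than symbols taken from the paper: once the objective has been approximated by a quadratic form in the deviations x_t of the economic variables, its unconditional expectation reduces to second moments,
\[
\mathbb{E}\left[x_t' Q\, x_t\right] = \operatorname{tr}\left(Q\,\Sigma_x\right),
\qquad
\Sigma_x = \mathbb{E}\left[x_t x_t'\right],
\]
so ranking policy rules only requires the unconditional second moments \Sigma_x implied by each rule.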
Abstract:
We derive a rational model of separable consumer choice which can also serve as a behavioral model. The central construct is λ, the marginal utility of money, derived from the consumer's rest-of-life problem. We present a robust approximation of λ, and show how to incorporate liquidity constraints, indivisibilities and adaptation to a changing environment. We find connections with numerous historical and recent constructs, both behavioral and neoclassical, and draw contrasts with standard partial equilibrium analysis. The result is a better grounded, more flexible and more intuitive description of consumer choice.
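In schematic terms (the notation here is illustrative and not taken from the paper), if V(w) denotes the value of the consumer's rest-of-life problem as a function of wealth w, then
\[
\lambda = V'(w),
\qquad
\max_x \; u(x) + V(w - p\cdot x) \;\approx\; \max_x \; u(x) - \lambda\, p\cdot x,
\]
so, to first order in the expenditure p\cdot x, each separable choice reduces to trading utility against expenditure valued at the marginal utility of money.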
Abstract:
New Keynesian models rely heavily on two workhorse models of nominal inertia - price contracts of random duration (Calvo, 1983) and price adjustment costs (Rotemberg, 1982) - to generate a meaningful role for monetary policy. These alternative descriptions of price stickiness are often used interchangeably since, to a first order of approximation, they imply an isomorphic Phillips curve and, if the steady state is efficient, identical objectives for the policy maker and, as a result, the same policy conclusions in an LQ framework. In this paper we compute time-consistent optimal monetary policy in benchmark New Keynesian models containing each form of price stickiness. Using global solution techniques, we find that the inflation bias problem under Calvo contracts is significantly greater than under Rotemberg pricing, despite the fact that the former typically exhibits far greater welfare costs of inflation. The rates of inflation observed under this policy are non-trivial and suggest that the model can comfortably generate the rates of inflation at which the problematic issues highlighted in the trend inflation literature emerge, as well as the movements in trend inflation emphasized in empirical studies of the evolution of inflation. Finally, we consider the response to cost-push shocks across both models and find that these can also be significantly different. The choice of which form of nominal inertia to adopt is not innocuous.
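For reference, the textbook first-order equivalence reads as follows (these are standard results, not equations quoted from the paper): both models yield a Phillips curve
\[
\pi_t = \beta\,\mathbb{E}_t \pi_{t+1} + \kappa\,\widehat{mc}_t,
\qquad
\kappa_{\mathrm{Calvo}} = \frac{(1-\theta)(1-\beta\theta)}{\theta},
\qquad
\kappa_{\mathrm{Rotemberg}} = \frac{\varepsilon - 1}{\varphi},
\]
where \theta is the Calvo probability of not adjusting prices, \varphi the Rotemberg price adjustment cost parameter and \varepsilon the elasticity of substitution across goods; choosing \varphi = \theta(\varepsilon-1)/\big((1-\theta)(1-\beta\theta)\big) makes the two slopes coincide, which is what underlies their interchangeable use to first order.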
Abstract:
Bilateral oligopoly is a strategic market game with two commodities, allowing strategic behavior on both sides of the market. When the number of buyers is large, such a game approximates a game of quantity competition played by sellers. We present examples which show that this is not typically a Cournot game. Rather, we introduce an alternative game of quantity competition (the market share game) and, appealing to results in the literature on contests, show that this yields the same equilibria as the many-buyer limit of bilateral oligopoly, under standard assumptions on costs and preferences. We also show that the market share and Cournot games have the same equilibria if and only if the price elasticity of the latter is one. These results lead to necessary and sufficient conditions for the Cournot game to be a good approximation to bilateral oligopoly with many buyers and to an ordering of total output when they are not satisfied.
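As a concrete illustration of the elasticity condition (an example added here, not taken from the paper), an inverse demand of the form
\[
P(Q) = \frac{k}{Q}, \qquad k > 0,
\]
has price elasticity of demand equal to one at every quantity, and is therefore exactly the boundary case in which the market share and Cournot games share their equilibria.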
Abstract:
We prove existence theorems for the Dirichlet problem for hypersurfaces of constant special Lagrangian curvature in Hadamard manifolds. The first results are obtained using the continuity method and approximation and then refined using two iterations of the Perron method. The a-priori estimates used in the continuity method are valid in any ambient manifold.
Abstract:
We report an unusual case of a congenital giant coronary aneurysm. A 23-year-old male with a history of acute myocardial infarction presented with an abnormal shadow at the left cardiac border on a routine chest X-ray. Electrocardiogram and physical examination were normal, without any clinical signs of inflammation, but a computed tomography (CT) scan and cardiac magnetic resonance imaging (MRI) revealed a giant (>50 mm) coronary aneurysm. Coronary artery bypass grafting (CABG) with coronary artery aneurysm (CAA) resection resolved the CAA. Coronary artery aneurysms are localised dilations of the coronary arteries and can be common events in chronic infectious disease as a result of the systemic inflammatory state; however, giant coronary aneurysms (measuring more than 50 mm) are rare. This is especially true where the pathological aetiology is not clearly defined or is believed to be of congenital origin. To date, only a few published case reports exist for this type of pathological entity.
Abstract:
The broad resonances underlying the entire 1H NMR spectrum of the brain, ascribed to macromolecules, can influence metabolite quantification. At the intermediate field strength of 3 T, distinct approaches for the determination of the macromolecule signal, previously used at either 1.5 T or at 7 T and higher, may become equivalent. The aim of this study was to evaluate, at 3 T for healthy subjects using LCModel, the impact on metabolite quantification of two different macromolecule approaches: (i) experimentally measured macromolecules; and (ii) mathematically estimated macromolecules. Although small but significant differences in metabolite quantification (up to 23% for glutamate) were noted for some metabolites, 10 metabolites were quantified reproducibly with both approaches with a Cramér-Rao lower bound below 20%, and the neurochemical profiles were therefore similar. We conclude that the mathematical approximation can provide sufficiently accurate and reproducible estimation of the macromolecule contribution to the 1H spectrum at 3 T.
Abstract:
Having determined in a phase I study the maximum tolerated dose of high-dose ifosfamide combined with high-dose doxorubicin, we now report the long-term results of a phase II trial in advanced soft-tissue sarcomas. Forty-six patients with locally advanced or metastatic soft-tissue sarcomas were included, all aged <60 years and all except one in good performance status (0 or 1). The chemotherapy treatment consisted of ifosfamide 10 g/m² (continuous infusion over 5 days), doxorubicin 30 mg/m² per day × 3 (total dose 90 mg/m²), mesna and granulocyte colony-stimulating factor. Cycles were repeated every 21 days. A median of 4 (range 1-6) cycles per patient was administered. Twenty-two patients responded to therapy, including three complete responders and 19 partial responders, for an overall response rate of 48% (95% CI: 33-63%). The response rate did not differ between localised and metastatic disease or between histological types, but was higher in grade 3 tumours. Median overall survival was 19 months. Salvage therapies (surgery and/or radiotherapy) were performed in 43% of patients and were found to be the most significant predictor of favourable survival (exploratory multivariate analysis). Haematological toxicity was severe, including grade ≥3 neutropenia in 59%, thrombopenia in 39% and anaemia in 27% of cycles. Three patients experienced grade 3 neurotoxicity and one patient died of septic shock. This high-dose regimen is toxic but nonetheless feasible in multicentre settings in non-elderly patients with good performance status. A high response rate was obtained. Prolonged survival was mainly a function of salvage therapies.
Abstract:
We construct a new family of semi-discrete numerical schemes for the approximation of the one-dimensional periodic Vlasov-Poisson system. The methods are based on coupling a discontinuous Galerkin approximation of the Vlasov equation with several finite element (conforming, non-conforming and mixed) approximations of the Poisson problem. We show optimal error estimates for all the proposed methods in the case of smooth, compactly supported initial data. The issue of energy conservation is also analyzed for some of the methods.
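For reference, the one-dimensional periodic Vlasov-Poisson system in question can be written (up to sign and normalisation conventions, which the abstract does not fix) as
\[
\partial_t f + v\,\partial_x f + E(t,x)\,\partial_v f = 0,
\qquad
-\partial_{xx}\phi = \rho(t,x) - 1,
\qquad
E = -\partial_x\phi,
\qquad
\rho(t,x) = \int_{\mathbb{R}} f(t,x,v)\,\mathrm{d}v,
\]
with periodic boundary conditions in x; the discontinuous Galerkin discretisation acts on the transport equation for f, while the finite element variants are applied to the Poisson problem for \phi.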
Application of standard and refined heat balance integral methods to one-dimensional Stefan problems
Abstract:
The work in this paper concerns the study of conventional and refined heat balance integral methods for a number of phase change problems. These include standard test problems, both with one and two phase changes, which have exact solutions to enable us to test the accuracy of the approximate solutions. We also consider situations where no analytical solution is available and compare these to numerical solutions. It is popular to use a quadratic profile as an approximation of the temperature, but we show that a cubic profile, seldom considered in the literature, is far more accurate in most circumstances. In addition, the refined integral method can give greater improvement still and we develop a variation on this method which turns out to be optimal in some cases. We assess which integral method is better for various problems, showing that it is largely dependent on the specified boundary conditions.
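As a point of reference for the standard (quadratic-profile) method, the short script below compares a textbook heat balance integral closure with the exact Neumann similarity solution of the classical one-phase Stefan melting problem. The nondimensionalisation and the particular closure used here (heat balance integral combined with the Stefan condition) are assumptions made for illustration and are not claimed to be the formulation used in the paper.

# One-phase Stefan problem (dimensionless): u_t = u_xx on 0 < x < s(t),
# with u(0, t) = 1, u(s, t) = 0 and the Stefan condition ds/dt = -Ste * u_x(s, t).
# Exact Neumann solution: s(t) = 2*lam*sqrt(t), where
#   sqrt(pi) * lam * exp(lam**2) * erf(lam) = Ste.
# HBIM with the quadratic profile u = a*(1 - x/s) + (1 - a)*(1 - x/s)**2:
# the heat balance integral together with the Stefan condition gives
#   Ste*a*(a + 2) = 12*(1 - a)   and   s(t) = sqrt(2*Ste*a*t).
import numpy as np
from scipy.optimize import brentq
from scipy.special import erf

def lam_exact(ste):
    """Front coefficient lambda of the exact Neumann solution."""
    f = lambda lam: np.sqrt(np.pi) * lam * np.exp(lam ** 2) * erf(lam) - ste
    return brentq(f, 1e-9, 10.0)

def lam_hbim_quadratic(ste):
    """Front coefficient predicted by the quadratic-profile heat balance integral method."""
    # Positive root of Ste*a**2 + (2*Ste + 12)*a - 12 = 0.
    a = (-(2 * ste + 12) + np.sqrt((2 * ste + 12) ** 2 + 48 * ste)) / (2 * ste)
    return np.sqrt(ste * a / 2.0)  # since s(t) = 2*lam*sqrt(t)

if __name__ == "__main__":
    for ste in (0.1, 0.5, 1.0, 2.0, 5.0):
        le, lh = lam_exact(ste), lam_hbim_quadratic(ste)
        print(f"Ste={ste:4.1f}  lambda_exact={le:.4f}  lambda_HBIM={lh:.4f}  "
              f"rel. err={abs(lh - le) / le:6.2%}")

The script prints the front coefficient predicted by each approach, together with the relative error of the integral method, over a range of Stefan numbers; the same kind of comparison extends naturally to cubic profiles and to the refined method discussed above.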
Abstract:
To describe the collective behavior of large ensembles of neurons in a neuronal network, a kinetic theory description was developed in [13, 12], where a macroscopic representation of the network dynamics was derived directly from the microscopic dynamics of individual neurons, which are modeled as conductance-based, linear, integrate-and-fire point neurons. A diffusion approximation then led to a nonlinear Fokker-Planck equation for the probability density function of neuronal membrane potentials and synaptic conductances. In this work, we propose a deterministic numerical scheme for a Fokker-Planck model of an excitatory-only network. Our numerical solver allows us to obtain the time evolution of the probability distribution function and, thus, the evolution of all possible macroscopic quantities that are given by suitable moments of the probability density function. We show that this deterministic scheme is capable of capturing the bistability of stationary states observed in Monte Carlo simulations. Moreover, the transient behavior of the firing rates computed from the Fokker-Planck equation is analyzed in this bistable situation, where a bifurcation scenario (asynchronous convergence towards stationary states, periodic synchronous solutions, or damped oscillatory convergence towards stationary states) can be uncovered by increasing the strength of the excitatory coupling. Finally, the computation of moments of the probability distribution allows us to validate the applicability of a moment closure assumption used in [13] to further simplify the kinetic theory.
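As a minimal sketch of what a deterministic solver of this kind looks like, the following evolves a generic one-dimensional drift-diffusion Fokker-Planck equation with a conservative finite-volume scheme and reflecting (zero-flux) boundaries. This is an illustrative toy problem, not the excitatory-network model of the paper, which couples membrane potential and synaptic conductance and uses different boundary conditions.

# Explicit conservative scheme for the generic Fokker-Planck equation
#   p_t = -( a(v) p )_v + D p_vv   on [0, 1], with zero-flux boundaries:
# upwind differencing for the drift, central differencing for the diffusion.
import numpy as np

def evolve_fokker_planck(p0, drift, D, dv, dt, n_steps):
    """Advance the cell-averaged density p0 by n_steps explicit Euler steps."""
    p = p0.copy()
    faces = dv * np.arange(1, p.size)                  # positions of the interior cell faces
    a = drift(faces)                                   # drift evaluated at those faces
    for _ in range(n_steps):
        upwind = np.where(a > 0.0, p[:-1], p[1:])      # upwind cell value for the advective part
        flux = a * upwind - D * (p[1:] - p[:-1]) / dv  # total flux through each interior face
        flux = np.concatenate(([0.0], flux, [0.0]))    # zero-flux (reflecting) boundaries
        p = p - (dt / dv) * (flux[1:] - flux[:-1])     # conservative finite-volume update
    return p

if __name__ == "__main__":
    N, D = 200, 0.01
    dv = 1.0 / N
    v = dv * (np.arange(N) + 0.5)                      # cell centres on [0, 1]
    p = np.exp(-((v - 0.2) / 0.05) ** 2)               # initial bump near v = 0.2
    p /= p.sum() * dv                                  # normalise to unit total mass
    drift = lambda x: -(x - 0.5)                       # linear drift pulling towards v = 0.5
    dt = 0.25 * min(dv / np.abs(drift(v)).max(), dv ** 2 / (2.0 * D))  # explicit stability limit
    p = evolve_fokker_planck(p, drift, D, dv, dt, 20000)
    print("mass =", p.sum() * dv, "  mean =", (v * p).sum() * dv)

Because the update is written in terms of face fluxes that vanish at the two boundaries, the total probability mass is conserved up to round-off, which is the property one checks before extracting macroscopic quantities from moments of the density.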
Abstract:
In this article, we present a new approach to Nekhoroshev theory for a generic unperturbed Hamiltonian which completely avoids small-divisor problems. The proof is an extension of a method introduced by P. Lochak, combining averaging along periodic orbits with simultaneous Diophantine approximation, and uses geometric arguments designed by the second author to handle generic integrable Hamiltonians. This method allows us to deal with generic non-analytic Hamiltonians and to obtain new results on generic stability around linearly stable tori.