946 results for Fundamentals in linear algebra
Abstract:
AIM: To evaluate the incidence of late biliary complications in non-resectable alveolar echinococcosis (AE) under long-term chemotherapy with benzimidazoles. METHODS: Retrospective analysis of AE patients with biliary complications occurring more than three years after the diagnosis of AE. We compared the characteristics of patients with and without biliary complications, analyzed potential risk factors for biliary complications and performed survival analyses. RESULTS: Ninety-four of 148 patients with AE in Zurich had non-resectable AE requiring long-term benzimidazole chemotherapy, of whom 26 (28%) developed late biliary complications. These patients had a median age of 55.5 (35.5-65) years at diagnosis of AE and developed biliary complications after 15 (8.25-19) years of chemotherapy. The most common biliary complications during long-term chemotherapy were late-onset cholangitis (n = 14), sclerosing cholangitis-like lesions (n = 8), hepatolithiasis (n = 5), involvement of the common bile duct (n = 7) and secondary biliary cirrhosis (n = 7). Thirteen of the 26 patients had undergone surgery (including 12 resections) before chemotherapy. Previous surgery was a risk factor for late biliary complications in linear regression analysis (P = 0.012). CONCLUSION: Late biliary complications can be observed in nearly one third of patients with non-resectable AE, with previous surgery being a potential risk factor. After the occurrence of late biliary complications, the median survival is only 3 years, suggesting that late biliary complications indicate a poor prognosis.
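To make the reported analysis concrete, here is a minimal, hypothetical sketch of a risk-factor regression of complication status on previous surgery; the data, effect sizes and variable names are invented for illustration, and a simple linear probability model stands in for the study's actual regression.

# Hypothetical sketch: complication status regressed on previous surgery.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 94                                   # non-resectable AE patients
prev_surgery = rng.integers(0, 2, n)     # 1 = surgery before chemotherapy
p = 0.15 + 0.25 * prev_surgery           # toy complication probabilities
complication = rng.binomial(1, p)

X = sm.add_constant(prev_surgery.astype(float))
fit = sm.OLS(complication, X).fit()      # linear (probability) regression
print(fit.params, fit.pvalues)           # coefficient and p-value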
Abstract:
In this paper, I consider a general and informationally efficient approach to determining the optimal access rule and show that there exists a simple rule that achieves the Ramsey outcome as the unique equilibrium when networks compete in linear prices without network-based price discrimination. My approach is informationally efficient in the sense that the regulator is required to know only the marginal cost structure, i.e. the marginal cost of making and terminating a call. The approach is general in that access prices can depend not only on the marginal costs but also on the retail prices, which can be observed by consumers and therefore by the regulator as well. In particular, I consider the set of linear access pricing rules, which includes any fixed access price, the Efficient Component Pricing Rule (ECPR) and the Modified ECPR as special cases. I show that in this set there is a unique access rule that achieves the Ramsey outcome as the unique equilibrium as long as there exists at least a mild degree of substitutability among the networks' services.
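As an illustration of one member of the set of linear access pricing rules mentioned above, the ECPR sets the access charge at the cost of terminating a call plus the retail margin forgone by the access provider; the function and the numbers below are a hypothetical sketch, not the paper's model.

# Hypothetical ECPR sketch:
# access charge = termination cost + (retail price - retail marginal cost).
def ecpr_access_price(termination_cost, retail_price, retail_marginal_cost):
    opportunity_cost = retail_price - retail_marginal_cost
    return termination_cost + opportunity_cost

# toy per-call numbers
print(ecpr_access_price(0.02, 0.10, 0.05))   # -> 0.07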
Abstract:
The first-generation models of currency crises have often been criticized because they predict that, in the absence of very large triggering shocks, currency attacks should be predictable and lead to small devaluations. This paper shows that these features of first-generation models are not robust to the inclusion of private information. In particular, this paper analyzes a generalization of the Krugman-Flood-Garber (KFG) model which relaxes the assumption that all consumers are perfectly informed about the level of fundamentals. In this environment, the KFG equilibrium of zero devaluation is only one of many possible equilibria. In all the other equilibria, the lack of perfect information delays the attack on the currency past the point at which the shadow exchange rate equals the peg, giving rise to unpredictable and discrete devaluations.
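The full-information benchmark can be sketched numerically: the shadow exchange rate rises with domestic credit and, in the KFG equilibrium, the attack occurs exactly when it reaches the peg, so the realized devaluation is zero. All parameter values below are hypothetical.

# Hypothetical sketch of the full-information KFG attack timing.
peg = 1.0
shadow_rate = 0.90        # shadow exchange rate, driven by domestic credit
credit_growth = 0.01      # per-period growth fed into the shadow rate

t = 0
while shadow_rate < peg:  # attacking is unprofitable while shadow < peg
    shadow_rate += credit_growth
    t += 1

print(f"attack at t={t}, devaluation = {shadow_rate - peg:.4f}")
# With private information, equilibria exist where the attack is delayed
# past this point, producing a discrete, unpredictable devaluation.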
Abstract:
OBJECTIVES: The present study examines whether depressed mood and external control mediate or moderate the relationship between the number of social roles and alcohol use. PARTICIPANTS: The analysis was based on a nationally representative sample of 25- to 45-year-old male and female drinkers in Switzerland. METHOD: The influence of depressed mood and external control on the relationship between the number of social roles (parenthood, partnership, employment) and alcohol use was examined in linear structural equation models (mediation) and in multiple regressions (moderation), stratified by gender. All analyses were adjusted for age and education level. RESULTS: Holding more roles was associated with lower alcohol use, lower external control and lower depressed mood. The study found no evidence that depressed mood or external control mediates the relationship between social roles and alcohol use. A moderation effect was identified among women only, whereby the protective effect of holding more roles was not found among those who scored high on external control. In general, roles were linked directly to alcohol use, while depressed mood and external control acted on drinking independently. With the exception of women with high external control, the study found no link between a higher number of social roles and greater alcohol use. CONCLUSION: Our results indicate that drinking behaviours are more strongly linked to external control and depressed mood than to the number of social roles. The study also suggests that societal actions enabling individuals to combine more social roles should play a central part in any effective alcohol prevention policy.
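The moderation part of such an analysis amounts to testing an interaction term in a regression; the following sketch uses invented data and hypothetical variable names purely to illustrate the setup.

# Hypothetical moderation sketch: roles x external control interaction.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "roles": rng.integers(0, 4, n),    # parenthood, partnership, employment
    "control": rng.normal(0, 1, n),    # external control score
    "age": rng.integers(25, 46, n),
    "edu": rng.integers(1, 4, n),
})
# toy outcome: protective role effect that weakens at high external control
df["alcohol"] = (5 - 0.8 * df.roles + 0.5 * df.control
                 + 0.4 * df.roles * df.control + rng.normal(0, 1, n))

fit = smf.ols("alcohol ~ roles * control + age + edu", data=df).fit()
print(fit.params[["roles", "control", "roles:control"]])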
Abstract:
An efficient method is developed for an iterative solution of the Poisson and Schrödinger equations, which allows systematic studies of the properties of the electron gas in linear deep-etched quantum wires. A much simpler two-dimensional (2D) approximation is developed that accurately reproduces the results of the 3D calculations. A 2D Thomas-Fermi approximation is then derived and shown to give a good account of average properties. Further, we prove that an analytic form due to Shikin et al. is a good approximation to the electron density given by the self-consistent methods.
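The self-consistent cycle alternates a Schrödinger solve at fixed potential with a Poisson update of that potential until the two agree; the 1D finite-difference sketch below is schematic, with toy units, constants and confinement rather than the paper's 3D treatment.

# Schematic 1D Poisson-Schrodinger self-consistency loop (toy units).
import numpy as np

n, L = 200, 1.0
x = np.linspace(0, L, n)
dx = x[1] - x[0]
v_ext = 100 * (x - 0.5) ** 2          # toy external confinement
v_h = np.zeros(n)                     # Hartree potential, updated each cycle

for it in range(50):
    # Schrodinger: finite-difference Hamiltonian with hbar^2/2m = 1
    main = 1.0 / dx**2 + v_ext + v_h
    off = -0.5 / dx**2 * np.ones(n - 1)
    H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    E, psi = np.linalg.eigh(H)
    density = psi[:, 0] ** 2 / dx     # lowest subband only

    # Poisson (toy): potential from double integration of the density
    integral = np.cumsum(np.cumsum(density) * dx) * dx
    v_h_new = -0.1 * (integral - integral.mean())
    if np.max(np.abs(v_h_new - v_h)) < 1e-6:
        break
    v_h = 0.5 * v_h + 0.5 * v_h_new   # mixing for stable convergence

print(f"stopped after {it + 1} cycles, ground-state energy {E[0]:.3f}")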
Abstract:
We perform a three-dimensional study of steady state viscous fingers that develop in linear channels. By means of a three-dimensional lattice-Boltzmann scheme that mimics the full macroscopic equations of motion of the fluid momentum and order parameter, we study the effect of the thickness of the channel in two cases. First, for total displacement of the fluids in the channel thickness direction, we find that the steady state finger is effectively two-dimensional and that previous two-dimensional results can be recovered by taking into account the effect of a curved meniscus across the channel thickness as a contribution to surface stresses. Second, when a thin film develops in the channel thickness direction, the finger narrows with increasing channel aspect ratio in agreement with experimental results. The effect of the thin film renders the problem three-dimensional and results deviate from the two-dimensional prediction.
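For readers unfamiliar with the method, the basic lattice-Boltzmann update (streaming plus BGK collision on a D2Q9 lattice) is sketched below for a single fluid in a periodic box; this is a generic illustration with invented parameters, not the binary-fluid order-parameter scheme used in the study.

# Generic single-fluid D2Q9 BGK lattice-Boltzmann sketch (periodic box).
import numpy as np

nx, ny, tau = 64, 32, 0.8
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],
              [1,1],[-1,1],[-1,-1],[1,-1]])      # lattice velocities
w = np.array([4/9] + [1/9]*4 + [1/36]*4)         # lattice weights

def equilibrium(rho, ux, uy):
    cu = c[:, 0, None, None] * ux + c[:, 1, None, None] * uy
    usq = ux**2 + uy**2
    return rho * w[:, None, None] * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

f = equilibrium(np.ones((nx, ny)), np.zeros((nx, ny)), np.zeros((nx, ny)))

for step in range(100):
    for i in range(9):                            # streaming step
        f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)
    rho = f.sum(axis=0)                           # macroscopic moments
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f += (equilibrium(rho, ux, uy) - f) / tau     # BGK collision

print("mean density:", rho.mean())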
Abstract:
The computer simulation of reaction dynamics has nowadays reached a remarkable degree of accuracy. Triatomic elementary reactions are rigorously studied in great detail on a straightforward basis using a considerable variety of Quantum Dynamics computational tools available to the scientific community. In our contribution we compare the performance of two quantum scattering codes in the computation of reaction cross sections of a triatomic benchmark reaction, the gas-phase reaction Ne + H2+ → NeH+ + H. The computational codes are selected as representative of time-dependent (Real Wave Packet [ ]) and time-independent (ABC [ ]) methodologies. The main conclusion to be drawn from our study is that both strategies are, to a great extent, not competing but rather complementary. While time-dependent calculations have advantages with respect to the energy range that can be covered in a single simulation, time-independent approaches offer much more detailed information from each single-energy calculation. Further details, such as the calculation of reactivity at very low collision energies and the computational effort required to account for the Coriolis couplings, are analyzed in this paper.
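In very reduced form, the time-dependent strategy can be illustrated by split-operator propagation of a 1D Gaussian wave packet; this generic sketch (toy potential, toy units) is not the Real Wave Packet or ABC code itself.

# Generic 1D split-operator wave-packet propagation sketch.
import numpy as np

n, L, dt, mass = 512, 40.0, 0.01, 1.0
x = np.linspace(-L/2, L/2, n, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(n, d=dx)

V = 0.5 * np.exp(-x**2)                       # toy potential barrier
psi = np.exp(-(x + 10)**2 + 5j * x)           # packet moving to the right
psi /= np.sqrt((np.abs(psi)**2).sum() * dx)   # normalize

expV = np.exp(-0.5j * dt * V)                 # half-step potential phase
expT = np.exp(-0.5j * dt * k**2 / mass)       # kinetic step in k-space

for _ in range(2000):
    psi = expV * psi
    psi = np.fft.ifft(expT * np.fft.fft(psi))
    psi = expV * psi

print("norm:", (np.abs(psi)**2).sum() * dx)   # stays ~1 (unitary scheme)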
Abstract:
We assessed the association between several cardiometabolic risk factors (CRFs) (blood pressure, LDL-cholesterol, HDL-cholesterol, triglycerides, uric acid, and glucose) in 390 young adults aged 19-20 years in Seychelles (Indian Ocean, Africa) and body mass index (BMI) measured either at the same time (cross-sectional analysis) or at the age of 12-15 years (longitudinal analysis). BMI tracked markedly between ages 12-15 and 19-20. BMI was strongly associated with all considered CRFs in both the cross-sectional and longitudinal analyses, with some exceptions. Comparing overweight participants with those having a BMI below the age-specific median, the odds ratios for high blood pressure were 5.4/4.7 (male/female) cross-sectionally and 2.5/3.9 longitudinally (P < 0.05). Significant associations were also found for most other CRFs. In linear regression analysis including both BMI at ages 12-15 and BMI at ages 19-20, only BMI at ages 19-20 remained significantly associated with most CRFs. We conclude that CRFs are strongly predicted by either current or past BMI levels in adolescents and young adults in this population. The observation that only current BMI remained associated with CRFs when past and current levels were included together suggests that weight control at a later age may be effective in reducing CRFs in overweight children irrespective of past weight status.
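The decisive step above, entering past and current BMI in the same linear regression, can be sketched as follows; the data are simulated toys chosen only to mimic BMI tracking, not the study's measurements.

# Hypothetical sketch: CRF regressed on both past and current BMI.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 390
bmi_12_15 = rng.normal(20, 3, n)
bmi_19_20 = 0.8 * bmi_12_15 + rng.normal(4, 2, n)   # strong tracking
crf = 100 + 1.5 * bmi_19_20 + rng.normal(0, 5, n)   # driven by current BMI

X = sm.add_constant(np.column_stack([bmi_12_15, bmi_19_20]))
fit = sm.OLS(crf, X).fit()
print(fit.pvalues)   # past BMI loses significance given current BMI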
Abstract:
BACKGROUND: Anxiety disorders have been linked to an increased risk of incident coronary heart disease, in which inflammation plays a key pathogenic role. To date, no studies have looked at the association between proinflammatory markers and agoraphobia. METHODS: In a random Swiss population sample of 2890 persons (35-67 years, 53% women), we diagnosed a total of 124 individuals (4.3%) with agoraphobia using a validated semi-structured psychiatric interview. We also assessed socioeconomic status, traditional cardiovascular risk factors (i.e., body mass index, hypertension, blood glucose levels, total cholesterol/high-density lipoprotein-cholesterol ratio), health behaviors (i.e., smoking, alcohol consumption, and physical activity), and other major psychiatric disorders (other anxiety disorders, major depressive disorder, drug dependence), which were treated as covariates in linear regression models. Circulating levels of inflammatory markers, statistically controlled for the baseline demographic and health-related measures, were determined at a mean follow-up of 5.5 ± 0.4 years (range 4.7-8.5). RESULTS: Individuals with agoraphobia had significantly higher follow-up levels of C-reactive protein (p = 0.007) and tumor necrosis factor-α (p = 0.042), as well as lower levels of the cardioprotective marker adiponectin (p = 0.032), than their non-agoraphobic counterparts. Follow-up levels of interleukin (IL)-1β and IL-6 did not differ significantly between the two groups. CONCLUSIONS: Our results suggest an increase in chronic low-grade inflammation in agoraphobia over time. Such a mechanism might link agoraphobia with an increased risk of atherosclerosis and coronary heart disease, and needs to be tested in longitudinal studies.
Abstract:
Lying at the core of statistical physics is the need to reduce the number of degrees of freedom in a system. Coarse-graining is a frequently used procedure to bridge molecular modeling with experiments. In equilibrium systems, this task can be readily performed; in systems outside equilibrium, however, a possible lack of equilibration of the eliminated degrees of freedom may lead to incomplete or even misleading descriptions. Here, we present some examples showing how an improper coarse-graining procedure may result in linear approaches to nonlinear processes, miscalculations of activation rates and violations of the fluctuation-dissipation theorem.
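As a minimal concrete instance of the fluctuation-dissipation constraint invoked above, consider an overdamped harmonic degree of freedom: the theorem fixes the noise strength relative to the friction, forcing the equilibrium variance to equal kT/kappa. The simulation below is an illustrative toy, not an example from the paper.

# Toy fluctuation-dissipation check for an overdamped harmonic variable.
import numpy as np

kT, kappa, gamma, dt, steps = 1.0, 2.0, 1.0, 1e-3, 200_000
rng = np.random.default_rng(3)

noise_std = np.sqrt(2 * kT * dt / gamma)   # FDT fixes the noise strength
x, samples = 0.0, []
for i in range(steps):
    x += -kappa * x * dt / gamma + noise_std * rng.normal()
    if i > steps // 10:                    # discard the transient
        samples.append(x)

print("simulated <x^2>:", np.var(samples))  # ~ kT/kappa = 0.5
# Mismatching noise and friction, as in an improper coarse-graining,
# would break this equality.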
Abstract:
Switched reluctance technology is probably best suited for industrial low-speed or zero-speed applications where the power can be small but the torque, or the force in linear-movement cases, may be relatively high. Because of its simple structure, the SR-motor is an interesting alternative for low-power applications where pneumatic or hydraulic linear drives are to be avoided. This study analyses the basic parts of an LSR-motor, the two mover poles and one stator pole, which form the "basic pole pair" in linear-movement transversal-flux switched-reluctance motors. The static properties of the basic pole pair are modelled and the basic design rules are derived. The models developed are validated with experiments. A one-sided, one-pole-pair transversal-flux switched-reluctance linear-motor prototype is demonstrated and its static properties are measured. The modelling of the static properties is performed with FEM calculations. Two-dimensional models are accurate enough to capture the static key features for the basic dimensioning of LSR-motors. Three-dimensional models must be used in order to obtain the most accurate calculations of the static traction force production. The developed dimensioning and modelling methods, which could be systematically validated by laboratory measurements, are the most significant contributions of this thesis.
Abstract:
This thesis introduces an extension of Chomsky's context-free grammars equipped with operators for referring to the left and right contexts of strings. The new model is called grammars with contexts. The semantics of these grammars are given in two equivalent ways: by language equations and by logical deduction, where a grammar is understood as a logic for the recursive definition of syntax. The motivation for grammars with contexts comes from an extensive example that completely defines the syntax and static semantics of a simple typed programming language. Grammars with contexts maintain the most important practical properties of context-free grammars, including a variant of the Chomsky normal form. For grammars with one-sided contexts (that is, either left or right), there is a cubic-time tabular parsing algorithm, applicable to an arbitrary grammar. The time complexity of this algorithm can be improved to quadratic, provided that the grammar is unambiguous, that is, it allows only one parse for every string it defines. A tabular parsing algorithm for grammars with two-sided contexts has fourth-power time complexity. For these grammars there is a recognition algorithm that uses a linear amount of space. For certain subclasses of grammars with contexts there are low-degree polynomial parsing algorithms. One of them is an extension of classical recursive descent for context-free grammars; the version for grammars with contexts still works in linear time, like its prototype. Another algorithm, with time complexity varying from linear to cubic depending on the particular grammar, adapts deterministic LR parsing to the new model. If all context operators in a grammar define regular languages, then such a grammar can be transformed into an equivalent grammar without any context operators. This allows one to represent the syntax of languages more succinctly by utilizing context specifications. Linear grammars with contexts turn out to be non-trivial already over a one-letter alphabet. This fact leads to some undecidability results for this family of grammars.
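For comparison, the classical cubic-time tabular (CYK) recognizer for context-free grammars in Chomsky normal form, which the one-sided-context algorithm generalizes, can be sketched as follows; the grammar is a toy example.

# Minimal CYK recognizer for a CNF context-free grammar (toy grammar).
from itertools import product

binary = {("A", "B"): {"S"}}           # S -> A B
terminal = {"a": {"A"}, "b": {"B"}}    # A -> a, B -> b

def cyk(word, start="S"):
    n = len(word)
    # table[i][j] holds the nonterminals deriving word[i..j]
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, ch in enumerate(word):
        table[i][i] = set(terminal.get(ch, ()))
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span - 1
            for split in range(i, j):
                for pair in product(table[i][split], table[split + 1][j]):
                    table[i][j] |= binary.get(pair, set())
    return start in table[0][n - 1]

print(cyk("ab"))   # True
print(cyk("ba"))   # False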
Abstract:
The introduction of unifying concepts in mathematics teaching typically favors the axiomatic approach. It is not surprising that such an approach tends toward an algorithmization of tasks, so as to increase the efficiency of their resolution and make the newly taught concept more transparent (Chevallard, 1991). This classical response nevertheless obscures the unifying role of the concept and does not encourage use of its power. In order to improve the learning of a unifying concept, this thesis studies the relevance of a didactic sequence in engineering education centered on a unifying concept of linear algebra: the linear transformation (LT). The notion of unification and the question of the meaning of linearity are addressed through the acquisition of problem-solving skills. The sequence of problems to be solved targets the process of constructing an abstract concept (the LT) over an already mathematized domain, with the intention of bringing out the unifying aspect of the formal notion (Astolfi y Drouin, 1992). Building on results from research in science and mathematics education (Dupin 1995; Sfard 1991), we design didactic situations based on elements of modeling, seeking to articulate two ways of conceiving the object ("procedural" and "structural") so as to arrive at a solution strategy that is more reliable, more economical and reusable. In particular, we sought to situate the notion in the different mathematical domains where it is applicable: arithmetic, geometric, algebraic and analytic. The sequence aims to develop links between different mathematical frameworks, and between different representations of the LT in the various mathematical registers, drawing in this approach notably on the historical development of the notion. Moreover, the didactic sequence aims to maintain a balance between the applicability of the tasks to the targeted professional practice and the theoretical side conducive to structuring the concepts. The study was conducted with Chilean engineering students in their first linear algebra course. We carried out a detailed a priori analysis to strengthen the robustness of the sequence and to prepare for the data analysis. Through analysis of the answers to the entry questionnaire, the teams' productions and the comments gathered in interviews, we were able to identify the mathematical competencies and the levels of explicitness (Caron, 2004) brought into play in the use of the LT. The results obtained show the emergence of the unifying role of the LT, even among those whose habits in mathematical problem solving are marked by a procedural orientation, in learning as well as in teaching. The didactic sequence proved effective for the students' progressive construction of the notion of the linear transformation (LT), with the meaning and properties that belong to it: the LT thus appears as an economical means of solving problems outside linear algebra, which allows students to abstract its underlying properties. Furthermore, we observed that certain previously taught concepts can act as obstacles to the intended unification. This can bring students back to their starting point, and under these conditions the role of the LT is reduced to revealing partial knowledge rather than guiding the resolution.