917 results for Multi-Agent
Abstract:
Circulating levels of adiponectin, a hormone produced predominantly by adipocytes, are highly heritable and are inversely associated with type 2 diabetes mellitus (T2D) and other metabolic traits. We conducted a meta-analysis of genome-wide association studies in 39,883 individuals of European ancestry to identify genes associated with metabolic disease. We identified 8 novel loci associated with adiponectin levels and confirmed 2 previously reported loci (P = 4.5×10⁻⁸ to 1.2×10⁻⁴³). Using a novel method to combine data across ethnicities (N = 4,232 African Americans, N = 1,776 Asians, and N = 29,347 Europeans), we identified two additional novel loci. Expression analyses of 436 human adipocyte samples revealed that mRNA levels of 18 genes at candidate regions were associated with adiponectin concentrations after accounting for multiple testing (p < 3×10⁻⁴). We next developed a multi-SNP genotypic risk score to test the association of adiponectin-decreasing risk alleles with metabolic traits and diseases using consortia-level meta-analytic data. This risk score was associated with increased risk of T2D (p = 4.3×10⁻³, n = 22,044), increased triglycerides (p = 2.6×10⁻¹⁴, n = 93,440), increased waist-to-hip ratio (p = 1.8×10⁻⁵, n = 77,167), increased glucose two hours after oral glucose tolerance testing (p = 4.4×10⁻³, n = 15,234), and increased fasting insulin (p = 0.015, n = 48,238), but with lower HDL-cholesterol concentrations (p = 4.5×10⁻¹³, n = 96,748) and decreased BMI (p = 1.4×10⁻⁴, n = 121,335). These findings identify novel genetic determinants of adiponectin levels, which, taken together, influence risk of T2D and markers of insulin resistance.
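A multi-SNP genotypic risk score of this kind is typically a weighted sum of risk-allele counts. The following is a minimal illustrative sketch; the SNP weights and genotypes are hypothetical, not taken from the study:

```python
import numpy as np

# Hypothetical example: 4 SNPs, per-allele effect sizes (weights) oriented so that
# each weight refers to the adiponectin-decreasing allele.
weights = np.array([0.12, 0.08, 0.05, 0.15])   # illustrative effect sizes
genotypes = np.array([                          # risk-allele counts (0, 1, 2) per individual
    [0, 1, 2, 1],
    [2, 2, 1, 0],
    [1, 0, 0, 1],
])

# Weighted multi-SNP risk score: one value per individual.
risk_score = genotypes @ weights
print(risk_score)   # -> [0.33, 0.45, 0.27]
```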
Abstract:
Solving multi-stage oligopoly models by backward induction can easily become a complex task when firms are multi-product and demands are derived from a nested logit framework. This paper shows that, under the assumption that within-segment firm shares are equal across segments, the analytical expression for equilibrium profits can be substantially simplified. The size of the error arising when this condition does not hold perfectly is also computed. Numerical examples show that this error is generally rather small. Using this assumption therefore makes it possible to gain analytical tractability in a class of models that has been used to address relevant policy questions, such as firm entry into an industry or the relation between competition and location. The simplifying approach proposed in this paper is aimed at helping to improve these types of models so that they yield more accurate recommendations.
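For reference, a standard nested logit demand system of the kind assumed here writes the market share of a product as the product of a within-segment share and a segment share. The notation below is the textbook formulation (outside option omitted), not the paper's own derivation:

```latex
% Standard nested logit shares (textbook notation; outside option omitted):
% \delta_j = mean utility of product j, \lambda \in (0,1] = nesting parameter,
% g, h index segments (nests).
\[
  s_{j} \;=\; s_{j|g}\, s_{g},
  \qquad
  s_{j|g} \;=\; \frac{e^{\delta_j/\lambda}}{\sum_{k \in g} e^{\delta_k/\lambda}},
  \qquad
  s_{g} \;=\; \frac{\bigl(\sum_{k \in g} e^{\delta_k/\lambda}\bigr)^{\lambda}}
                   {\sum_{h} \bigl(\sum_{k \in h} e^{\delta_k/\lambda}\bigr)^{\lambda}}.
\]
% The paper's simplifying condition is that a firm's within-segment shares
% s_{j|g} are equal across segments g, which collapses the profit expression.
```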
Abstract:
This article studies how product introduction decisions relate to profitability and uncertainty in the context of multi-product firms and product differentiation. These two features, common to many modern industries, have not received much attention in the literature compared with the classical problem of firm entry, even though the determinants of firm entry and product entry are quite different. The theoretical predictions about the sign of the impact of uncertainty on product entry are not conclusive. Therefore, an econometric model relating firms' product introduction decisions to profitability and profit uncertainty is proposed. Firms' estimated profits are obtained from a structural model of product demand and supply, and uncertainty is proxied by the variance of profits. The empirical analysis is carried out using data on the Spanish car industry for the period 1990-2000. The results show a positive relationship between product introduction and profitability, and a negative one with respect to profit variability. Interestingly, the degree of uncertainty appears to be a stronger driving force of entry than profitability, suggesting that the product proliferation process in the Spanish car market may have been mainly a consequence of lower uncertainty rather than the result of a more profitable market.
Keywords: Product introduction, entry, uncertainty, multiproduct firms, automobile. JEL codes: L11, L13
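The kind of reduced-form relationship estimated here can be sketched as a binary-choice model of product introduction on estimated profit and its variance. The sketch below uses simulated data and hypothetical variable names, not the paper's dataset or actual specification:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500

# Hypothetical firm-product observations: expected profit and profit variance.
profit = rng.normal(1.0, 0.5, n)
profit_var = rng.gamma(2.0, 0.5, n)

# Simulated introduction decisions with the sign pattern the paper reports:
# more likely with higher profit, less likely with higher uncertainty.
latent = 0.8 * profit - 1.0 * profit_var + rng.normal(size=n)
introduce = (latent > 0).astype(int)

X = sm.add_constant(np.column_stack([profit, profit_var]))
model = sm.Probit(introduce, X).fit(disp=0)
print(model.params)   # expect a positive profit coefficient and a negative variance coefficient
```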
Abstract:
PURPOSE: To determine the diagnostic value of the intravascular contrast agent gadocoletic acid (B-22956) in three-dimensional, free-breathing coronary magnetic resonance angiography (MRA) for stenosis detection in patients with suspected or known coronary artery disease. METHODS: Eighteen patients underwent three-dimensional, free-breathing coronary MRA of the left and right coronary system before and after intravenous application of a single dose of gadocoletic acid (B-22956) using three different dose regimens (group A, 0.050 mmol/kg; group B, 0.075 mmol/kg; group C, 0.100 mmol/kg). Precontrast scanning followed a coronary MRA standard non-contrast T2 preparation/turbo-gradient echo sequence (T2Prep); for postcontrast scanning an inversion-recovery gradient echo sequence was used (real-time navigator correction for both scans). In pre- and postcontrast scans, quantitative analysis of coronary MRA data was performed to determine the number of visible side branches, vessel length and vessel sharpness of each of the three coronary arteries (LAD, LCX, RCA). The number of assessable coronary artery segments was determined to calculate sensitivity and specificity for detection of stenosis ≥50% on a segment-to-segment basis (16-segment model) in pre- and postcontrast scans, with x-ray coronary angiography as the standard of reference. RESULTS: Dose group B (0.075 mmol/kg) was preferable with regard to improvement of MR angiographic parameters: in postcontrast scans all MR angiographic parameters increased significantly except for the number of visible side branches of the left circumflex artery. In addition, assessability of coronary artery segments significantly improved postcontrast in this dose group (67 versus 88%, p < 0.01). Diagnostic performance (sensitivity, specificity, accuracy) was 83, 77 and 78% for precontrast and 86, 95 and 94% for postcontrast scans. CONCLUSIONS: The use of gadocoletic acid (B-22956) results in an improvement of MR angiographic parameters, assessability of coronary segments and detection of coronary stenoses ≥50%.
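The per-segment diagnostic measures reported here follow the usual definitions. A minimal sketch, with made-up counts rather than the study's segment data:

```python
def diagnostic_performance(tp, fp, tn, fn):
    """Sensitivity, specificity and accuracy from per-segment counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# Made-up counts for illustration only (not the study's data).
print(diagnostic_performance(tp=18, fp=4, tn=160, fn=3))
```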
Abstract:
The objective of this study is to analyse the impact, in terms of CO2 emissions, of Catalonia's final demand in relation to its interregional trade links with the rest of Spain and the rest of the world. This involves analysing Catalonia's balance of embodied CO2, which makes it possible to assess the responsibility of the Catalan economy for these emissions. For this purpose, an environmentally extended Multi-Regional Input-Output (MRIO) model with vertically integrated sectors is built for this particular regional disaggregation. Incorporating the vertical-integration technique provides an alternative approach to the Net Balance and a more detailed analysis of the interregional links between productive sectors, focused on the ultimate responsibility of each sector's final demand in each region. So far, previous studies of the environmental impacts embodied in Spanish trade have focused mainly on the national level. However, on the one hand, interregional trade with the rest of Spain represents, in monetary terms, close to half of Catalonia's external trade. On the other hand, the different energy metabolisms of the two economies result in a substantial difference in the emission intensity of the production of goods and services. This situation generates for Catalonia an estimated Net Balance deficit with the rest of Spain, even though it runs a considerable monetary surplus. This highlights the importance of integrating the interregional level into studies of the environmental impacts embodied in trade and, consequently, into the planning and design of economic and environmental policies at the national level.
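In an environmentally extended input-output framework of this kind, emissions embodied in final demand are usually obtained by combining sectoral emission intensities with the Leontief inverse. A minimal two-region, two-sector sketch with made-up numbers (not the Catalan data) might look like this:

```python
import numpy as np

# Made-up 2-region x 2-sector MRIO example (4 sectors in total).
A = np.array([            # technical coefficients matrix
    [0.10, 0.05, 0.02, 0.01],
    [0.04, 0.12, 0.03, 0.02],
    [0.02, 0.01, 0.15, 0.06],
    [0.01, 0.03, 0.05, 0.10],
])
y = np.array([100.0, 80.0, 120.0, 60.0])   # final demand by sector
e = np.array([0.9, 0.4, 1.2, 0.3])         # CO2 emission intensity per unit of output

L = np.linalg.inv(np.eye(4) - A)           # Leontief inverse (I - A)^-1
x = L @ y                                  # gross output required to serve final demand
embodied = e * x                           # CO2 embodied in each sector's output
print(embodied.sum())                      # total CO2 driven by this final demand
```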
Design and evaluation of a parallel algorithm for Gaussian elimination on multi-core processors
Abstract:
A cross-section of a human population (501 individuals), selected at random and living in a Bolivian community highly endemic for Chagas disease, was investigated by combining clinical, parasitological and molecular approaches. Conventional serology and polymerase chain reaction (PCR) indicated active transmission of the infection, a high seroprevalence (43.3%), ranging from around 12% in those under 5 years to 94.7% in those over 45 years, and a high sensitivity (83.8%) and specificity of PCR. Abnormal ECG tracings were predominant in chagasic patients and were already present among individuals younger than 13 years. The SAPA (shed acute phase antigen) recombinant protein and the synthetic peptide R-13 were used as antigens in ELISA tests. The reactivity of SAPA was strongly associated with Trypanosoma cruzi infection and independent of the age of the patients, but it was suitable neither for universal serodiagnosis nor for discriminating specific phases of Chagas infection. An anti-R-13 response was observed in only 27.5% of chagasic patients. Moreover, anti-R-13 reactivity was associated with early infection and not with cardiac pathology. This result questions previous studies, which considered the anti-R-13 response a marker of chronic Chagas heart disease. The major clonets 20 and 39 (belonging to Trypanosoma cruzi I and T. cruzi II, respectively), which circulate in equal proportions in vectors of the studied area, were identified in patients' blood by PCR. Clonet 39 was selected over clonet 20 in the circulation whatever the age of the patient. The only factor related to the strain detected in patients' blood was anti-R-13 reactivity: 37% of the patients infected by clonet 39 (94 cases) had anti-R-13 antibodies, contrasting with only 6% of the patients without clonet 39 (16 cases).
Abstract:
We evaluated the usefulness of a combination of three plasmids encoding tegumental (pECL and pSM14) and muscular (pIRV5) antigens of Schistosoma mansoni in improving protective immunity over the use of a single antigen as a DNA vaccine. Female BALB/c mice were inoculated twice, two weeks apart, with 25 µg of plasmid DNA. The challenge was performed with 80 cercariae of a regional isolate of S. mansoni (SLM) one week after the last immunization. Six weeks after challenge, all mice were perfused for worm load determination. The following groups were analyzed: saline; empty vector; monovalent formulations of pECL, pSM14 and pIRV5; double combinations of pECL/pIRV5 and pIRV5/pSM14; and a triple combination of pECL/pIRV5/pSM14. Protection was expressed as the percentage reduction in worm load in each group compared with the saline group. The results obtained were 41% (p < 0.05); 52% (p < 0.05); 51% (p < 0.05); 48% (p < 0.05); 55% (p < 0.05); 45% (p < 0.05); 65% (p < 0.05) for each group, respectively.
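The protection figure used here corresponds to the conventional worm-burden reduction relative to the untreated control; written out as a standard formula (not quoted from the paper):

```latex
% Percentage protection from mean worm burdens (standard definition):
% W_c = mean worm burden in the saline control group,
% W_v = mean worm burden in the vaccinated group.
\[
  \text{Protection}(\%) \;=\; \frac{W_c - W_v}{W_c} \times 100
\]
```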
Abstract:
Sequence alignment applications are an important tool for the scientific community. These bioinformatics applications are used in many different fields, such as medicine, biology, pharmacology and genetics. Today, sequence alignment algorithms have a high computational complexity and must handle ever larger volumes of data. For this reason, alternatives must be sought so that these applications can cope with the growth that sequence databases are undergoing day by day. This project studies and investigates improvements to this type of application, such as the use of parallel systems, which can improve performance considerably.
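As an illustration of the kind of parallelism meant here, many independent pairwise alignments can be scored concurrently across cores. The sketch below combines a simple Needleman-Wunsch global alignment score with Python's multiprocessing; the sequences and scoring values are made up:

```python
from multiprocessing import Pool

MATCH, MISMATCH, GAP = 1, -1, -2   # illustrative scoring scheme

def nw_score(pair):
    """Needleman-Wunsch global alignment score (no traceback), O(len(a)*len(b))."""
    a, b = pair
    prev = [j * GAP for j in range(len(b) + 1)]
    for i in range(1, len(a) + 1):
        curr = [i * GAP]
        for j in range(1, len(b) + 1):
            diag = prev[j - 1] + (MATCH if a[i - 1] == b[j - 1] else MISMATCH)
            curr.append(max(diag, prev[j] + GAP, curr[j - 1] + GAP))
        prev = curr
    return prev[-1]

if __name__ == "__main__":
    # Made-up sequence pairs; in practice they would come from a sequence database.
    pairs = [("GATTACA", "GCATGCU"), ("ACGT", "ACGA"), ("TTTT", "TATT")]
    with Pool() as pool:                    # one worker per available core
        scores = pool.map(nw_score, pairs)  # pairs are scored in parallel
    print(scores)
```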
Abstract:
Performing publicly has become increasingly important in a variety of professions. This condition is associated with performance anxiety in almost all performers. Whereas some performers successfully cope with this anxiety, for others it represents a major problem and even threatens their career. Musicians and especially music students were shown to be particularly affected by performance anxiety. Therefore, the goal of this PhD thesis was to gain a better understanding of performance anxiety in university music students. More precisely, the first part of this thesis aimed at increasing knowledge on the occurrence, the experience, and the management of performance anxiety (Article 1). The second part aimed at investigating the hypothesis that there is an underlying hyperventilation problem in musicians with a high level of anxiety before a performance. This hypothesis was addressed in two ways: firstly, by investigating the association between the negative affective dimension of music performance anxiety (MPA) and self-perceived physiological symptoms that are known to co-occur with hyperventilation (Article 2) and secondly, by analyzing this association on the physiological level before a private (audience-free) and a public performance (Article 3). Article 4 places some key variables of Article 3 in a larger context by jointly analyzing the phases before, during, and after performing. The main results of the self-report data show (a) that stage fright is experienced as a problem by one-third of the surveyed students, (b) that the students express a considerable need for more help to better cope with it, and (c) that there is a positive association between negative feelings of MPA and the self-reported hyperventilation complaints before performing. This latter finding was confirmed on the physiological level in a tendency of particularly high performance-anxious musicians to hyperventilate. Furthermore, the psycho-physiological activation increased from a private to a public performance, and was higher during the performances than before or after them. The physiological activation was mainly independent of the MPA score. Finally, there was a low response coherence between the actual physiological activation and the self-reports on the instantaneous anxiety, tension, and perceived physiological activation. Given the high proportion of music students who consider stage fright as a problem and given the need for more help to better cope with it, a better understanding of this phenomenon and its inclusion in the educational process is fundamental to prevent future occupational problems. On the physiological level, breathing exercises might be a good means to decrease - but also to increase - the arousal associated with a public performance in order to meet an optimal level of arousal needed for a good performance.
Abstract:
PURPOSE: We report the long-term results of a randomized clinical trial comparing induction therapy with single-agent rituximab given once per week for 4 weeks versus the same induction followed by 4 cycles of maintenance therapy given every 2 months in patients with follicular lymphoma. PATIENTS AND METHODS: Patients (prior chemotherapy, 138; chemotherapy-naive, 64) received single-agent rituximab and, if nonprogressive, were randomly assigned to no further treatment (observation) or four additional doses of rituximab given at 2-month intervals (prolonged exposure). RESULTS: At a median follow-up of 9.5 years, and with all living patients having been observed for at least 5 years, the median event-free survival (EFS) was 13 months for the observation arm and 24 months for the prolonged exposure arm (P < .001). In the observation arm, 5% of patients were event-free at 8 years, compared with 27% in the prolonged exposure arm. Of previously untreated patients receiving prolonged treatment after responding to rituximab induction, 45% were still event-free at 8 years. The only favorable prognostic factor for EFS in a multivariate Cox regression was the prolonged rituximab schedule (hazard ratio, 0.59; 95% CI, 0.39 to 0.88; P = .009), whereas being chemotherapy-naive, presenting with stage lower than IV, and showing a VV phenotype at position 158 of the Fc-gamma RIIIA receptor were not of independent prognostic value. No long-term toxicity potentially due to rituximab was observed. CONCLUSION: An important proportion of patients experienced long-term remission after prolonged exposure to rituximab, particularly if they had no prior treatment and responded to rituximab induction.
Abstract:
Purpose: To load embolization particles (DC-Beads, Biocompatibles, UK) with an anti-angiogenic agent (sunitinib) and to characterize the in vitro properties of the bead-drug association. Materials: DC Beads of 100-300 µm were loaded using a specially designed 10 mg/ml sunitinib solution. The loading profile was studied by spectrophotometry of the supernatant solution at 430 nm at different time points. The release experiment was performed using the USP method 4 (flow-through cell). Spectrophotometric determination at 430 nm was used to measure drug concentration in the eluting solution. Results: We were able to load >98% of the drug into the DC-Beads in 2 hours. The maximum concentration was 20 mg sunitinib/ml DC Beads. The loaded beads gradually released 59% of the loaded drug into the eluting solution, by an ionic exchange mechanism, over 6 hours. Conclusions: DC Beads could be loaded with the multi-tyrosine kinase inhibitor sunitinib using a specially designed solution. A high drug payload can be achieved. The loaded DC Beads released the drug into an ionic eluting solution with an interesting release profile.
Abstract:
Methods like Event History Analysis can show the existence of diffusion and part of its nature, but do not study the process itself. Nowadays, thanks to the increasing performance of computers, such processes can be studied using computational modeling. This thesis presents an agent-based model of policy diffusion mainly inspired by the model developed by Braun and Gilardi (2006). I first develop a theoretical framework of policy diffusion that presents the main internal drivers of policy diffusion - such as the preference for the policy, the effectiveness of the policy, the institutional constraints, and the ideology - and its main mechanisms, namely learning, competition, emulation, and coercion. Diffusion, expressed through these interdependencies, is thus a complex process that needs to be studied with computational agent-based modeling. In a second step, computational agent-based modeling is defined along with its most significant concepts: complexity and emergence. Using computational agent-based modeling implies the development of an algorithm and its programming. Once the algorithm has been programmed, the different agents are left to interact. Consequently, a phenomenon of diffusion, derived from learning, emerges, meaning that the choice made by an agent is conditional on that made by its neighbors. As a result, learning follows an inverted S-curve, which leads to partial convergence - global divergence and local convergence - that triggers the emergence of political clusters, i.e. the creation of regions with the same policy. Furthermore, the average effectiveness in this computational world tends to follow a J-shaped curve, meaning not only that time is needed for a policy to deploy its effects, but also that it takes time for a country to find the best-suited policy. To conclude, diffusion is an emergent phenomenon arising from complex interactions, and the outcomes of my model are in line with the theoretical expectations and the empirical evidence.
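A stripped-down version of such a learning-driven diffusion model, in which each agent adopts the policy once enough of its grid neighbours have done so (producing the S-shaped adoption curve described above), might look like the following. The grid size, threshold, and seeding are illustrative choices, not the thesis's actual parameters:

```python
import numpy as np

rng = np.random.default_rng(42)
N, STEPS, THRESHOLD = 30, 40, 0.25   # illustrative grid size, horizon, adoption threshold

# 0 = policy not adopted, 1 = adopted; seed a few early adopters at random.
grid = (rng.random((N, N)) < 0.02).astype(int)
adoption_curve = []

for _ in range(STEPS):
    # Fraction of the four von Neumann neighbours that have adopted (torus wrap-around).
    neighbours = (np.roll(grid, 1, 0) + np.roll(grid, -1, 0) +
                  np.roll(grid, 1, 1) + np.roll(grid, -1, 1)) / 4.0
    # Learning: an agent adopts (and keeps) the policy once enough neighbours have it.
    grid = np.where(neighbours >= THRESHOLD, 1, grid)
    adoption_curve.append(grid.mean())

print(adoption_curve)   # the share of adopters typically rises along an S-shaped path
```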